anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
Confusion about Lagrangian equation of Motion | Question: I am reading a book about tensor analysis. I stumbled upon the starting of the Lagrangian equations of motion. It said in the first lines....
$$ T=\frac{1}{2}mv^2$$ can be written as $$T=\frac{m}{2}\sum_{i,j=1}^3g_{ij}\frac{dx^i}{dt}\frac{dx^j}{dt}.$$
Can someone explain to me how the velocity term can be rewritten like that?
Answer: First note the velocity is defined via $\vec{v}\equiv d\vec{x}/dt$.
Next note that in the expression $T=\frac{1}{2}mv^2$ you have smuggled in a definition of the "inner product", \begin{align}v^2 \equiv \vec{v}\cdot \vec{v}.\tag{*}\label{inner}\end{align}
Using indices to denote the components of a vector, $\vec{v}=v^i$, we could just as well express \eqref{inner} as
\begin{align}
v^2=\sum_{i}(v^i)^2=\sum_{i,j}\delta_{ij}v^iv^j=\sum_{i,j}\delta_{ij}\frac{dx^i}{dt}\frac{dx^j}{dt},\tag{**}\label{generalized}
\end{align}
where $\delta_{ij}=1$ when $i=j$, but is zero otherwise.
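As a quick numeric sanity check (plain Python; the velocity components are arbitrary illustrative values), the double sum in \eqref{generalized} with the Kronecker delta reduces to the ordinary squared speed:

```python
# Verify that sum_{i,j} delta_ij v^i v^j equals sum_i (v^i)^2 for a sample vector.
v = [1.0, -2.0, 3.0]  # arbitrary components dx^i/dt

def delta(i, j):
    """Kronecker delta: 1 when i == j, 0 otherwise."""
    return 1.0 if i == j else 0.0

v2_direct = sum(vi ** 2 for vi in v)
v2_metric = sum(delta(i, j) * v[i] * v[j]
                for i in range(3) for j in range(3))
```

Both expressions evaluate to 14.0 here; replacing `delta` with a general `g[i][j]` gives the metric form quoted in the question.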
Now, in some contexts (for example general relativity), it is useful to generalize \eqref{generalized} by replacing $\delta_{ij}\to g_{ij}$, where $g_{ij}$ now defines the inner product of vectors. If you would like to learn more about how this works in general relativity, you should look up the "metric tensor". | {
"domain": "physics.stackexchange",
"id": 45146,
"tags": "lagrangian-formalism, metric-tensor, vectors, relativity"
} |
Inject a list of processes to execute using spring DI | Question: I have a couple of MessageProcessors. One processor will log the payload; the other will save it into an embedded database.
Here is the code I have written so far:
I am using annotation-driven configuration so I have omitted other classes which are not used in this use case.
@Component
public class ProcessorConfiguration {
@Bean
@Qualifier("textMp")
public MessageProcessor textMessageProcessor() {
return new TextMessageProcessor();
}
@Bean
@Qualifier("employMp")
public MessageProcessor employMessageProcessor() {
return new EmployMessageProcessor();
}
@Bean
private List<MessageProcessor> messageProcessorList(@Qualifier("textMp") MessageProcessor textMessageProcessor,
@Qualifier("employMp") MessageProcessor employMessageProcessor) {
List<MessageProcessor> list = new ArrayList<>();
list.add(textMessageProcessor);
list.add(employMessageProcessor);
return list;
}
}
This class is responsible for all the JMS messages the application receives.
public class MessageHandler {
@Autowired
private List<MessageProcessor> messageProcessors;
public void handleMessage(Notification notification) {
messageProcessors.forEach(processor -> processor.doProcess(notification));
}
}
public interface MessageProcessor {
void doProcess(Notification notification);
}
public class TextMessageProcessor implements MessageProcessor {
private static final Logger logger = Logger.getLogger(TextMessageProcessor.class);
@Override
public void doProcess(Notification notification) {
logger.info("The payload is " + notification.getText());
}
}
I have created a builder which takes String as input and returns an Employ object.
@Service
public class EmployMessageProcessor implements MessageProcessor {
@Autowired
private EmployDao dao;
@Override
public void doProcess(Notification notification) {
Employ employ = EmployBuilder.buildEmploy(notification.getText());
dao.save(employ);
}
}
public interface Notification {
String getText();
}
I think, the way I am injecting the processors can be improved. Please review my code and provide your valuable feedback.
Answer: This is a learning exercise for me as well so I hope I can be helpful with my answer.
I think you can lose the Configuration class entirely. Add the
@Component/@Service annotation to your TextMessageProcessor. Spring should be
capable of figuring out the MessageProcessor List wiring on its own. Have you already tried that?
If you are worried about the order of the beans inside MessageProcessor List then you can use @Order annotation on your Service and/or Component.
@Component
@Order(value=1)
public class TextMessageProcessor implements MessageProcessor {
}
Your Configuration class in its current form conflicts with the Open/Closed Principle. In other words, whenever you add a new MessageProcessor, your configuration will grow. Imagine that configuration after you have 20 processors.
Spring creates qualifier names automatically from your bean names.
From Spring documentation:
For a fallback match, the bean name is considered as a default qualifier value.
I recommend doing your wiring through constructors instead of fields (I am especially eyeballing that DAO of yours). That way you preserve an ability to test your code later on. You will need to inject objects through constructors in your tests.
You could move logging into MessageProcessor if it were an abstract class. Make a field or a method for logging which returns the logging instance for the current class. (Stackoverflow link) | {
"domain": "codereview.stackexchange",
"id": 30602,
"tags": "java, spring"
} |
Using ASP.NET identity in a new app | Question: I need code review for using ASP.NET Identity in a new app.
Goals:
Use int instead of GUID for IDs.
Separate identity from view layer
I need code review if I did everything right. Yes it is working but maybe I did something that I shouldn't or maybe there is some place for improvement.
ApplicationUser
public class ApplicationUser : IdentityUser<int, CustomUserLogin, CustomUserRole,
CustomUserClaim>
{
public ICollection<ProductCategory> ProductCategories { get; set; }
public ICollection<Store> Stores { get; set; }
public ICollection<Product> Products { get; set; }
public async Task<ClaimsIdentity> GenerateUserIdentityAsync(
UserManager<ApplicationUser, int> manager)
{
var userIdentity = await manager.CreateIdentityAsync(
this, DefaultAuthenticationTypes.ApplicationCookie);
return userIdentity;
}
}
public class CustomUserRole : IdentityUserRole<int> { }
public class CustomUserClaim : IdentityUserClaim<int> { }
public class CustomUserLogin : IdentityUserLogin<int> { }
public class CustomRole : IdentityRole<int, CustomUserRole>
{
public CustomRole() { }
public CustomRole(string name) { Name = name; }
}
public class CustomUserStore : UserStore<ApplicationUser, CustomRole, int,
CustomUserLogin, CustomUserRole, CustomUserClaim>
{
public CustomUserStore(ApplicationDbContext context)
: base(context)
{
}
}
public class CustomRoleStore : RoleStore<CustomRole, int, CustomUserRole>
{
public CustomRoleStore(ApplicationDbContext context)
: base(context)
{
}
}
DbContext
public class ApplicationDbContext : IdentityDbContext<ApplicationUser, CustomRole,
int, CustomUserLogin, CustomUserRole, CustomUserClaim>
{
public static ApplicationDbContext Create()
{
return new ApplicationDbContext();
}
public ApplicationDbContext()
: base("DefaultConnection")
{...
View models are still in view layer. All above is in data layer (class library).
In the view assembly in AccountController I changed all ApplicationUser to reference ApplicationUser in data layer assembly.
I also added extension to get UserID as int:
public static int GetUserIdInt(this IIdentity identity)
{
if (identity == null)
throw new ArgumentNullException("identity");
string stringUserId = identity.GetUserId();
int userId;
if (string.IsNullOrWhiteSpace(stringUserId) || !int.TryParse(stringUserId, out userId))
{
return default(int);
}
return userId;
}
Answer: Your customization looks fine - that's the way it is recommended to do it.
The only note I could add is about your GetUserIdInt method which you can get rid of. You can just use this generic overload:
User.Identity.GetUserId<int>() | {
"domain": "codereview.stackexchange",
"id": 12895,
"tags": "c#, asp.net, asp.net-mvc"
} |
Correct sign of vertical displacement in projectile motion | Question:
I understand how to do this problem perfectly fine.
I am posting here however because I have a disagreement with my professor and classmates in finding the final y-coordinate of the projectile.
I am confident that to find the final y-coordinate of the projectile, the correct equation should be:
$-y = (100\sin 60^\circ)t + 0.5(-32.2)t^2$
My professor, however, says that my equation for finding the final y-coordinate of the projectile is categorically incorrect. It should be:
$y = (100\sin 60^\circ)t + 0.5(-32.2)t^2$
So, who's right?
I believe I'm correct because the formula we're using here is the formula for displacement. Displacement is a vector, and it's the change in position. The final y-position of the projectile is obviously "-y" and the initial y-position of the projectile is 0. Therefore, the vertical displacement should be "-y," not just "y."
In addition, there is a very similar problem in the text. In the below problem, if we solve it with the professor's method/equation (in addition to the other requisite equations):
$y = (80\sin 30^\circ)t + 0.5(-32.2)t^2$
$x = (80\cos 30^\circ)t$
$y = -0.04x^2$
We would actually end up with a time of flight that is negative:
If we solve it with my method:
$-y = (80\sin 30^\circ)t + 0.5(-32.2)t^2$
$x = (80\cos 30^\circ)t$
$y = -0.04x^2$
We get the correct answer in the back of the textbook, and a time of flight that is positive.
Answer: You are confusing distances with coordinates. The correct equation to use is the one with $y$ alone. Why? Because the variable $y$ itself can be either positive or negative, and this sign is automatically built into the definition of $y$. It would be extraneous (and incorrect) to add an extra negative sign in front of it.
The same thing applies if you choose to orient the $y$-axis downwards. In this case, the value of $g$ will be positive as it points in the same direction as $y$. The negative sign in the quadratic equation of the surface will also need to be removed to reflect this. | {
"domain": "physics.stackexchange",
"id": 68383,
"tags": "homework-and-exercises, classical-mechanics, projectile"
} |
Should input images be normalized to -1 to 1 or 0 to 1 | Question: Many ML tutorials normalize input images to the range -1 to 1 before feeding them to the ML model. The ML model is most likely a few conv2d layers followed by fully connected layers. Assume the activation function is ReLU.
My question is: would normalizing images to the [-1, 1] range be unfair to input pixels in the negative range, since through ReLU their output would be 0? Would normalizing images into the [0, 1] range instead be a better idea?
Thank you.
Answer: In addition to just initialization (as the great answer of Djib2011 notes), many analyses of artificial neural networks utilize or rely on the normalization of inputs and outputs (e.g., the SELU activation). So normalizing the input is a good idea.
Often, however, this can be done with normalization layers (e.g., LayerNorm or BatchNorm), and furthermore, we may want to enforce that the pixels are in a particular fixed range (since real images are like this). This is especially important when the output is an image (e.g., for a VAE of images). Since we need to compare the input image $I$ to the output image $\widehat{I}$, it should be readily possible to enforce the pixel values of $\widehat{I}$ into a simple, known, hard range. Using sigmoid produces values in $[0,1]$, while using tanh does so in $[-1,1]$. However, it is often thought that tanh is better than sigmoid; e.g.,
https://stats.stackexchange.com/questions/142348/tanh-vs-sigmoid-in-neural-net
https://stats.stackexchange.com/questions/330559/why-is-tanh-almost-always-better-than-sigmoid-as-an-activation-function/369538
https://stats.stackexchange.com/questions/101560/tanh-activation-function-vs-sigmoid-activation-function
In other words, for cases where the output must match the input, using $[-1,1]$ may be a better choice. Furthermore, though not "standardized", the range $[-1,1]$ is still zero-centered (unlike $[0,1]$), which is easier for the network to learn to standardize (though I suspect this matters only rather early in training).
Also, for this phrase
would normalizing images to [-1, 1] range be unfair to input pixels in negative range since through ReLu, output would be 0
the answer is "no", mainly because in nearly all cases the non-linear activation comes after other layers. Usually those layers (e.g., fully connected or conv) have a bias term which can and will shift the range around anyway (after some additional, usually linear, transformation occurs).
It is true however that values below zero do "die" wrt to their contribution to the gradient. Again, this may be especially true early in training. This is one argument for using activations other than ReLU, like leaky ReLU, and it is a real danger. However, the hope is that these values should have more than one way to propagate down the network.
E.g., multiple outputs in the first feature map (after the first convolutional layer, before the activation) will depend on a given single input, so even if some are killed by the ReLU, others will propagate the value onwards.
This is thought to be one reason why ResNet is so effective: even if values die to ReLU, there is still the skip connection for them to propagate through.
Despite all this, it is still probably more common to normalize images with respect to the statistics of the whole dataset. One problem with per-image normalization is that images with very small pixel value ranges will be "expanded" in range (e.g., an all blue sky with a tiny cloud will immensely highlight that cloud). Yet, others may consider this a benefit in some cases (e.g., it may remove differences in brightness automatically).
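For concreteness, here is a minimal sketch (plain Python; the function names are my own) of the three scalings discussed: fixed-range [0, 1], zero-centered [-1, 1], and dataset-statistics standardization:

```python
def to_unit(pixels):
    """Map 8-bit pixel values [0, 255] to [0, 1]."""
    return [p / 255.0 for p in pixels]

def to_symmetric(pixels):
    """Map 8-bit pixel values [0, 255] to the zero-centered range [-1, 1]."""
    return [p / 127.5 - 1.0 for p in pixels]

def standardize(pixels, mean, std):
    """Normalize with statistics gathered once over the whole dataset."""
    return [(p - mean) / std for p in pixels]

img = [0, 64, 128, 192, 255]
unit = to_unit(img)
sym = to_symmetric(img)
```

Note that `to_symmetric` keeps the data zero-centered, which is the property argued for above, while `standardize` depends on dataset-wide statistics rather than the per-image range.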
Ultimately, the optimal approach is up for debate and likely depends on the problem, data, and model. For more, see e.g.
[1],
[2],
[3],
[4],
[5],
[6],
[7] | {
"domain": "datascience.stackexchange",
"id": 5461,
"tags": "normalization"
} |
Relationship between diffused light and intensity | Question: I was wondering if the relationship of the intensity of diffused light is a linear correlation to the diffusing material. Basically will the intensity of the diffused light change in a linear fashion as the diffusing factor increases/decreases?
Answer: One simple model of a diffusive material is a material where there is a certain probability per path length that a photon will be scattered in a random direction, similar to how a colored material absorbs photons with a certain probability per path length.
If the characteristic length of the material is $L=d^{-1}$, where $d$ is a "diffusivity factor" for the material, then it's easy to show that if the incoming beam is collimated, the amount of light that exits the material and remains collimated is
$$P_c=Pe^{-Td}$$
where $P$ is the power of the incoming beam and $T$ is the thickness of the diffusing material.
Meanwhile, the scattered light will be isotropically radiated, with an intensity
$$P_s=\frac{P}{4\pi}(1-e^{-Td})\approx \frac{PTd}{4\pi}$$
per steradian in the far field in the low diffusion limit $Td\ll1$. As a result, scattered intensity is roughly linear with diffusion factor (up to a point). | {
"domain": "physics.stackexchange",
"id": 12308,
"tags": "visible-light"
} |
Falling chain fixed at one end: force at the hinge | Question:
The end B of the chain of mass per unit length (a) and length (l) is released from rest as shown in the picture given above. The force at the hinge when the end B is at $\frac{l}{4}$ from the ceiling is ________________
My attempt:
I have tried to locate the position of the center of mass of the chain from the top after end B has fallen distance x from the ceiling. I then used the Principle of Conservation of Energy to find the velocity of the hanging part when it has fallen distance x, by equating the change in gravitational potential energy to change in kinetic energy. I cannot however figure out the relation between the force at the hinge and the velocity of hanging part.
Ideas?
Edit after JiK comment's i am writing down the equations here
Suppose the free end of the chain is displaced by a distance x. The length of the hanging part then becomes $\frac{l+x}{2}-x=\frac{l-x}{2}$. Now, to find the position of the CM:
$$\frac{l-x}{2}\cdot a\cdot\frac{l+3x}{4}+\frac{l+x}{2}\cdot a\cdot\frac{l+x}{4}=a\,l\,x_{\text{com}}$$
Now applying conservation of energy principle
$$a\,l\,g\,x_{\text{com}}=a\,l\,g\,\frac{l}{4}+\frac{1}{2}\,a\,\frac{l-x}{2}\,v^2$$
Here I have considered the increase in kinetic energy of only the hanging portion, as it alone is in motion.
However, when writing the equation for the other portion of the chain, I am having trouble:
$$a\left(\frac{l+x}{2}\right)g+\,?\,=\text{Hinge force}.$$
I could not figure out what force should replace the question mark.
Answer: This problem has been tackled before on Physics SE, for example :
Can someone explain this solution for the motion of a falling chain?
Energy of Falling chain
A comprehensive solution is given in Falling Chains in American Physics Teacher.
The 1st link above includes an instructive discussion of and solution to the problem from the textbook by Marion & Thornton. It is tempting to assume that the free side of the chain is in free fall, but this is incorrect. Instead, it should be assumed that energy is conserved, because this is how real chains are observed to behave.
Your approach is valid. The reaction $R$ at the hinge is related to the acceleration $\ddot y$ of the centre of mass by Newton's 2nd Law : $Mg-R=M\ddot y$. What you are unsure about is how to find $\ddot y$.
Your calculation of the position of the CM of the whole chain is correct :
$y=\frac{1}{4l}(l^2+2lx-x^2)$.
Differentiating twice gives :
$\ddot y=\frac{1}{2l}((l-x)\ddot x-\dot x^2)$
Expressions for $\ddot x$ and $\dot x^2$ can be found from the conservation of energy, as follows :
At any instant only the RHS of the chain is moving. The length of this side is $\frac12(l-x)$ and the CM has velocity $\dot x$, so its KE is $\frac14\rho (l-x)\dot x^2$. The KE gained equals the loss of PE due to the fall of the CM of the whole chain. The CM is initially at $y=\frac14 l=\frac{1}{4l}l^2$, so the loss in PE is
$l\rho g \frac{1}{4l}(l^2+2lx-x^2-l^2)=\frac14 \rho g (2lx-x^2)$.
Therefore
$\dot x^2 = g \frac{2lx-x^2}{l-x}$.
Differentiating :
$\ddot x= \frac{g(2l^2-2lx+x^2)}{2(l-x)^2}$
Substitute :
$\ddot y=\frac{g(2l^2-6lx+3x^2)}{4l(l-x)}$
$R=M(g-\ddot y)=Mg\frac{2l^2+2lx-3x^2}{4l(l-x)}$.
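The algebra above can be checked numerically with finite differences (plain Python; $l$, $g$ and the test point $x$ are arbitrary values I chose). Using $\ddot x=\tfrac12\,\mathrm{d}(\dot x^2)/\mathrm{d}x$:

```python
g, l = 9.8, 2.0  # arbitrary test values

def xdot2(x):
    """xdot^2 from energy conservation: g(2lx - x^2)/(l - x)."""
    return g * (2 * l * x - x * x) / (l - x)

def xddot(x, h=1e-6):
    """xddot = (1/2) d(xdot^2)/dx, by central difference."""
    return 0.5 * (xdot2(x + h) - xdot2(x - h)) / (2 * h)

x = 0.3 * l
xdd_closed = g * (2 * l * l - 2 * l * x + x * x) / (2 * (l - x) ** 2)
ydd_closed = g * (2 * l * l - 6 * l * x + 3 * x * x) / (4 * l * (l - x))
ydd_num = ((l - x) * xddot(x) - xdot2(x)) / (2 * l)
R_over_Mg = (2 * l * l + 2 * l * x - 3 * x * x) / (4 * l * (l - x))
```

The closed forms for $\ddot x$ and $\ddot y$ agree with the finite-difference values, and $R/Mg$ agrees with $1-\ddot y/g$ at this test point.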
Substituting $x=\frac14l$ gives $R=\frac{37}{48}Mg\approx0.77Mg$. But note that as $x \to l$ then $R \to \infty$. This happens because of the whiplash effect. | {
"domain": "physics.stackexchange",
"id": 34201,
"tags": "homework-and-exercises, newtonian-mechanics, forces, acceleration, string"
} |
Stack/package robot_setup_tf not found | Question:
Hi all. I'm new to ROS. I'm tasked with using a Hokuyo UST-20LX's laser scan to scan an area.
I'm stuck with the robot_setup_tf part. I have already set up my workspace environment with:
~/catkin_ws$ source devel/setup.bash
and to confirm:
~/catkin_ws$ echo $ROS_PACKAGE_PATH
I get:
/home/gordon/catkin_ws/src:/opt/ros/groovy/share:/opt/ros/groovy/stacks
Afterwards I did catkin_make and catkin_make install (no error) and tried to run:
rosrun robot_setup_tf tf_broadcaster
It gives me:
[rospack] Error: stack/package robot_setup_tf not found
I suspect the problems lies with the environment configuration (just my guess) and ran:
~/catkin_ws$ echo $ROS_PACKAGE_PATH
and I got:
/opt/ros/groovy/share:/opt/ros/groovy/stacks
Any problems with the environment setup? If not, what is the problems with not being able to start up tf_broadcaster?
Thanks!
Originally posted by psprox96 on ROS Answers with karma: 97 on 2015-06-16
Post score: 0
Answer:
Afterwards I did catkin_make and catkin_make install (no error) and tried to run: rosrun robot_setup_tf tf_broadcaster. It gives me:
[rospack] Error: stack/package robot_setup_tf not found
I suspect the problems lies with the environment configuration (just my guess) and ran: ~/catkin_ws$ echo $ROS_PACKAGE_PATH, and I got:
/opt/ros/groovy/share:/opt/ros/groovy/stacks
Catkin does out-of-source builds, which is why you need to source other spaces like devel. Since you ran catkin_make install, all build artefacts will be installed into the install space. That space has its own setup.bash, so you'll need to source that, but only after you've ran catkin_make install.
I don't see a source ~/catkin_ws/install/setup.bash in your OP, so I'm guessing you didn't do that.
See wiki/catkin/workspaces for more info on catkin workspaces in general, and on catkin spaces in particular. REP-128 is the more normative reference for this.
PS: running catkin_make install isn't actually necessary to execute nodes in your workspace. A regular catkin_make results in a devel space that can already be used. Just source /path/to/your/catkin_ws/devel/setup.bash after building your workspace. No need to install anything.
PPS: this is completely up to you of course, but I'd not use Groovy anymore. It has been EOL for some time now, so no updates and/or fixes will be released for it.
Originally posted by gvdhoorn with karma: 86574 on 2015-06-16
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by psprox96 on 2015-06-16:
Oh I get it. I sourced back devel/setup.bash and run rosrun robot_setup_tf tf_broadcaster , it works. Thanks a lot! | {
"domain": "robotics.stackexchange",
"id": 21934,
"tags": "ros, catkin, broadcaster, robot, ros-groovy"
} |
Why is the quantum Venn diagram paradox considered a paradox? | Question: I've just watched this video on YouTube called Bell's Theorem: The Quantum Venn Diagram Paradox
I don't quite understand why it is considered a paradox
At 0:30, he says that as you rotate 2 polarizing filters, less and less photons come through the 2nd filter (and when the angle is 90 degrees 0% of photons come through).
Then at 0:55 he adds the 3rd filter in the middle rotated by 45 degrees and expect two 45 degrees polarizing filters to be equal to one 90 degrees filter.
But a 45-degree filter plus another 45-degree filter (relative to each other) does not equal one 90-degree filter, right? Why does he expect it to?
I don't understand why he's so surprised saying "somehow introducing another filter actually let's more light through" at 1:05. What's so surprising about it?
He rotated the 2nd filter by 45 degrees, allowing photons with polarization of 0 - 50 degrees (since 45 degrees is 50% of 0-90 degrees range) to come through. Why would they have any problems coming through the 3rd one which rotated 45 degrees relative to the 2nd one?
At 1:10 he says, "the more filters you add the more light comes through". Well, no kidding: by adding 89 more filters between the first filter (0 degrees) and the last one (90 degrees) you're just widening the polarization range photons can have; you're basically allowing 100% of photons from the 2nd filter to go through to the end. So in theory, using perfect filters, the amount of photons you can see will gradually increase, and by the time you add 89 more filters (in steps of 1 degree) you'll be able to see the same amount of photons as you could after the 2nd filter.
Answer: To clear your mind I want to tell you in detail how the filters influence the light.
A polarizing filter (for some range of light) lets 50% of the incoming light through. Behind the filter the light is polarized: the electric field component of all photons is oriented in the same direction.
It is important to understand that this 50% polarization happens even for light from a thermal source. The light from a thermal source is unpolarized, meaning the direction of the electric field component of the emitted photons is equally distributed over 360°.
Would you agree that the filter rotates 50% of the light from the thermal source? In the case of the filter from the sketch photons oriented between +/- 45° and between 135° to 225° get rotated and are polarized behind the first filter.
To prove this, one places the same kind of filter behind the first, rotated by 90° relative to it: no light comes through. So one gets proof that this filter really cannot rotate photons whose field components are oriented perpendicular to the filter. Rotating the filter, more and more light goes through. Depending on how narrow-band the filter is, the relationship between the angle of rotation of the filter and the light intensity varies.
The filters I have seen were so narrow-band that light went through the second filter only when it had the same orientation as the first. So the sketch is an idealization; in reality the light has some variation in the angle of its polarization.
Now you can understand why a third filter between the first and the last, with an orientation different from the other two, lets light through: the second filter simply rotates the light in the same way as the first filter does, and the last one does so as well.
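A complementary, quantitative way to see this is Malus's law for ideal polarizers (transmitted intensity falls off as $\cos^2$ of the relative angle at each stage; plain Python, with the unpolarized input contributing the initial factor of 1/2):

```python
from math import cos, radians

def transmitted(angles_deg, intensity=1.0):
    """Intensity of unpolarized light after a chain of ideal polarizers
    set at the given absolute angles, applying Malus's law per stage."""
    out = 0.5 * intensity  # first polarizer passes half of unpolarized light
    for prev, cur in zip(angles_deg, angles_deg[1:]):
        out *= cos(radians(cur - prev)) ** 2
    return out

crossed = transmitted([0, 90])          # two crossed filters: essentially 0
with_middle = transmitted([0, 45, 90])  # 45-degree filter inserted: 1/8
many = transmitted(list(range(0, 91)))  # 1-degree steps: approaches 1/2
```

So inserting the middle filter raises the output from 0 to 12.5%, and ever finer steps push it toward the 50% a single ideal polarizer would pass, consistent with the video's claim.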
Why would they have any problems coming through the 3rd one which rotated 45 degrees relative to the 2nd one?
So your intuition is right, there should be no doubt. | {
"domain": "physics.stackexchange",
"id": 73909,
"tags": "optics, polarization, quantum-optics, bells-inequality"
} |
Are fairy rings documented as a growth pattern in ferns? | Question: I planted an Onoclea sensibilis, a single plant, in my garden. After the first season, there were signs that a fairy ring was forming. A few years later it was mostly complete, but was then obscured in the following years by the growth of the next-generation ferns on the ring.
I have not seen any documentation in my field guides that sensibilis or any other fern creates a fairy ring.
I would expect fern fairy rings to be rare in the field.
Where would I find documentation on fern fairy rings?
Answer: As this was the only result I could find about Onoclea sensibilis fern ring I wanted to add a reference to a wild fern ring I found of the species. This was found in the fall, 2022, in WV during a hike. I do not know the age of the patch of ferns.
This response, I hope, should act as an answer to the question for documentation of a sensibilis fern fairy ring.
As reference and showing ownership of copyright, I originally posted this image on instagram here: https://www.instagram.com/p/CiTl_VvjXq5/ | {
"domain": "biology.stackexchange",
"id": 12075,
"tags": "botany, plant-physiology"
} |
Should I rescale losses before combining them for multitask learning? | Question: I have a multitask network taking one input and trying to achieve two tasks (with several shared layers, and then separate layers).
One task is multiclass classification using the CrossEntropy loss, the other is sequence recognition using the CTC loss.
I want to use a combination of the two losses as criterion, something like Loss = λCE + (1-λ)CTC. The thing is that my CE loss starts around 2 while the CTC loss is in the 400s.
Should I rescale the losses at each epoch with a Max(L₁)/L₁ factor, where Max(L₁) is the largest loss at epoch 1 and L₁ is each “sub-loss” at epoch 1? That is, we scale the losses so that at the first epoch they have the same magnitude, and then keep using those factors.
Is there a better approach? How do I ensure that my two losses have the same influence on the backpropagation with respect to λ?
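In code, the proposed epoch-1 rescaling would look something like this (plain Python sketch; the class name and loss values are illustrative, and this is one heuristic among several):

```python
class LossBalancer:
    """Freeze per-loss scale factors at the first call so that every
    sub-loss starts with the magnitude of the largest one."""
    def __init__(self):
        self.factors = None

    def combine(self, losses, lam=0.5):
        if self.factors is None:  # first epoch: fix the Max(L1)/L1 factors
            m = max(losses)
            self.factors = [m / loss for loss in losses]
        ce, ctc = [f * loss for f, loss in zip(self.factors, losses)]
        return lam * ce + (1 - lam) * ctc

bal = LossBalancer()
first = bal.combine([2.0, 400.0])   # CE ~ 2, CTC ~ 400 at epoch 1
later = bal.combine([1.5, 300.0])   # factors stay frozen afterwards
```

At the first step both rescaled losses equal 400, so λ only begins to matter once the two losses drift apart relative to each other.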
Answer: Check this.
Under the heading Multi-tasks losses they have mentioned how they average losses from two different tasks. They do a weighted average depending on their use case. | {
"domain": "datascience.stackexchange",
"id": 5429,
"tags": "neural-network, machine-learning-model, loss-function, multiclass-classification, multitask-learning"
} |
Do the weights of the discriminator get updated when training the generator in GANs? | Question: When we train a GAN we usually train the discriminator first, then the generator. First we stop gradients from flowing back into the generator by removing its output from the computation graph, using fake_images.detach():
noise=get_noise(num_images,z_dim,device=device)
fake_images=gen(noise)
disc_fake_pred = disc(fake_images.detach())
disc_fake_loss=criterion(disc_fake_pred,torch.zeros_like(disc_fake_pred))
disc_real_pred = disc(real)
disc_real_loss=criterion(disc_real_pred,torch.ones_like(disc_real_pred))
disc_loss = (disc_real_loss + disc_fake_loss)/2
disc_loss.backward()
disc_opt.step()
But when we want to train the generator, we don't freeze the discriminator's weights; all we do is leave the discriminator in the computation graph. Does this mean the discriminator's weights will be updated along with the generator's, since we didn't stop them?
gen_opt.zero_grad()
z_noise = get_noise(num_images, z_dim, device)
fake_images = gen(z_noise)
disc_fake_pred = disc(fake_images)
gen_loss = criterion(disc_fake_pred, torch.ones_like(disc_fake_pred))
gen_loss.backward()
gen_opt.step()
But in
Answer: Since you are using PyTorch, an optimizer can only update the parameters that were registered with it at initialization.
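This can be illustrated with a toy stand-in for an optimizer (plain Python rather than actual PyTorch, so names like `ToyOptimizer` are mine): gradients may exist for every parameter, but `step()` only touches the parameters the optimizer was constructed with:

```python
class ToyOptimizer:
    """Minimal SGD-like optimizer: updates only the params handed to it."""
    def __init__(self, params, lr=0.1):
        self.params = params
        self.lr = lr

    def step(self, grads):
        for p in self.params:
            p["value"] -= self.lr * grads[p["name"]]

disc_w = {"name": "disc_w", "value": 1.0}
gen_w = {"name": "gen_w", "value": 1.0}
gen_opt = ToyOptimizer([gen_w])  # only the generator param is registered

grads = {"disc_w": 5.0, "gen_w": 2.0}  # backward produced grads for both...
gen_opt.step(grads)                    # ...but only the registered param moves
```

The discriminator parameter keeps its value even though a gradient for it exists, mirroring the PyTorch behaviour described here.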
Therefore, even if gradients for the discriminator's parameters exist while updating the generator, the discriminator's parameters remain unchanged as long as you don't call disc_opt.step(). | {
"domain": "ai.stackexchange",
"id": 3467,
"tags": "deep-learning, generative-adversarial-networks, pytorch, discriminator, dc-gan"
} |
Impulse Invariant method for digital filter design | Question: One of the known methods for discretizing analog filters is the impulse-invariance method. We take the impulse response in the time domain, sample it, and then take the Z-transform.
What I am trying to understand is: why does the frequency response of the resulting digital filter have its magnitude scaled by $1/T$, where $T$ is the sampling period?
MATLAB's c2d command compensates for this by multiplying by $T$ so that the frequency response matches the analog filter, but this is not the result of the Z-transform I described earlier.
Answer: This is just as it turns out when you do the math. The discrete-time Fourier transform (DTFT) of the sampled continuous-time impulse response $h(t)$ is
$$H_d(e^{j\omega T})=\sum_nh(nT)e^{-jn\omega T}\tag{1}$$
With
$$h(nT)e^{-jn\omega T}=\int_{-\infty}^{\infty}h(t)e^{-j\omega t}\delta(t-nT)dt\tag{2}$$
this can be written as
$$\begin{align}H_d(e^{j\omega T})&=\sum_n\int_{-\infty}^{\infty}h(t)e^{-j\omega t}\delta(t-nT)dt\\&=\int_{-\infty}^{\infty}\left[h(t)\sum_n\delta(t-nT)\right]e^{-j\omega t}dt\\&=\mathcal{F}\left\{h(t)\sum_n\delta(t-nT)\right\}\\&=\frac{1}{2\pi}H(\omega)\star\frac{2\pi}{T}\sum_k\delta\left(\omega-\frac{2\pi k}{T}\right)\\&=\frac{1}{T}\sum_kH\left(\omega-\frac{2\pi k}{T}\right)\tag{3}\end{align}$$
where $H(\omega)$ is the Fourier transform of $h(t)$, and $\star$ denotes convolution. From $(3)$ we see that the DTFT of the sampled impulse response equals the sum of shifted spectra of $h(t)$, scaled by $1/T$.
If we assume that $H(\omega)$ is approximately band-limited and that $T$ is chosen sufficiently small such that aliasing becomes negligible, we obtain the approximation
$$H_d(e^{j\omega T})\approx\frac{1}{T}H(\omega),\qquad |\omega|<\frac{\pi}{T}\tag{4}$$
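Relation $(4)$ can be checked numerically for the illustrative choice $h(t)=e^{-t}u(t)$, whose Fourier transform is $H(\omega)=1/(1+j\omega)$ (plain Python; the infinite DTFT sum collapses to a geometric series):

```python
import cmath

T = 0.01  # sampling period, small enough that aliasing is negligible
w = 1.0   # test frequency, well below pi/T

# DTFT of the sampled impulse response h(nT) = exp(-nT), n >= 0:
# sum_n exp(-nT) exp(-j n w T) = 1 / (1 - exp(-(1 + jw) T))
H_d = 1.0 / (1.0 - cmath.exp(-(1.0 + 1j * w) * T))

H = 1.0 / (1.0 + 1j * w)  # continuous-time frequency response at w
```

Here $T\,H_d(e^{j\omega T})$ agrees with $H(\omega)$ to within about $T/2$, confirming the $1/T$ scaling that MATLAB's c2d compensates for.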
For the step-invariance method, we use samples of the step response instead of samples of the impulse response, and we obtain a relation analogous to $(3)$ between the DTFT $G_d(e^{j\omega T})$ of the step response of the discrete-time system, and the Fourier transform $G(\omega)$ of the continuous-time step response:
$$G_d(e^{j\omega T})=\frac{1}{T}\sum_kG\left(\omega-\frac{2\pi k}{T}\right)\tag{5}$$
In order to obtain the frequency response $H_d(e^{j\omega T})$ we multiply $(5)$ by $1-e^{-j\omega T}$, because the impulse response is obtained by computing a first-order difference of the step response:
$$H_d(e^{j\omega T})=\left(1-e^{-j\omega T}\right)G_d(e^{j\omega T})=\frac{1-e^{-j\omega T}}{T}\sum_kG\left(\omega-\frac{2\pi k}{T}\right)\tag{6}$$
For frequencies that are small compared to the sampling frequency, i.e., for $|\omega T|\ll 1$ we obtain from $(6)$
$$\begin{align}H_d(e^{j\omega T})&\approx\frac{1-(1-j\omega T)}{T}\sum_kG\left(\omega-\frac{2\pi k}{T}\right)\\&=j\omega \sum_kG\left(\omega-\frac{2\pi k}{T}\right)\tag{7}\end{align}$$
If we again assume that aliasing can be neglected, we arrive at
$$H_d(e^{j\omega T})\approx j\omega G(\omega)=H(\omega),\qquad |\omega|<\frac{\pi}{T}\tag{8}$$
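As a quick numerical sanity check of $(8)$ — a sketch using an assumed first-order system $H(s)=1/(s+1)$, not taken from the derivation above — we can sample the step response, take its first-order difference, and compare the DTFT at a low frequency with the continuous-time frequency response:

```python
import numpy as np

# Assumed example system: H(s) = 1/(s+1), step response g(t) = 1 - exp(-t)
T = 1e-3                                   # sampling period, small vs. the pole
n = np.arange(20000)                       # long enough for the tail to decay
g = 1.0 - np.exp(-n*T)                     # sampled step response g(nT)
h_d = np.diff(np.concatenate(([0.0], g)))  # first-order difference -> h_d[n]

w = 1.0                                    # rad/s, well below pi/T
Hd = np.sum(h_d*np.exp(-1j*w*n*T))         # DTFT of h_d at omega
H = 1.0/(1.0 + 1j*w)                       # continuous-time H(j*omega)
```

The two values agree to roughly $O(T)$ at frequencies well below $\pi/T$, with no $1/T$ scaling, as $(8)$ predicts.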
From $(8)$ we see that, unlike for the impulse invariance method, the step invariance method doesn't involve a scaling of the continuous-time frequency response. | {
"domain": "dsp.stackexchange",
"id": 9098,
"tags": "filter-design, sampling, impulse-response, digital-filters, discretization"
} |
The anode and cathode when corrosion happens | Question: Let's say $\ce{Fe}$ reacts with $\ce{Cu^{2+}}$ ions. $\ce{Fe}$ would oxidize and therefore give electrons to $\ce{Cu^{2+}}$so that:
$$\ce{Fe-> Fe^{2+} +2e-}$$
$$\ce{Cu^{2+} +2e^- ->Cu}$$
The overall reaction:
$$\ce{Cu^{2+} +Fe ->Cu +Fe^{2+}}$$
Now this is an example of corrosion, right? And when this type of corrosion happens, the anode is the electrode where oxidation happens and the cathode is the electrode where reduction happens, right?
Therefore iron ($\ce{Fe}$) is the anode where oxidation happens and Copper 2+ ions ($\ce{Cu^{2+}}$) is the cathode where the reduction happens.
My question is, is this all correct? Have I successfully described corrosion (of this type) and given an example of it? Or is there something wrong? I'm 15 and soon I'm having a test in chemistry, and because of my age there's no point in making it "super-advanced", but if there's something I missed or did wrong please point it out, I would be grateful for that!
Answer: You are right! What you have written is an example of Galvanic corrosion (Galvanic corrosion occurs when two different metals have physical or electrical contact with each other and are immersed in a common electrolyte).
You have successfully described Galvanic corrosion and given an example of it. | {
"domain": "chemistry.stackexchange",
"id": 2709,
"tags": "electrochemistry, redox, electrons, metal"
} |
why we can use the equipartition theorem for translational motion of molecules at room temperature and above because quantization is unimportant | Question: From the book Chemical Principles The Quest for Insight, 5th Edition by Peter Atkins, Loretta Jones
The equipartition theorem is a result from classical mechanics; so we can use it for
translational and rotational motion of molecules at room temperature and above, where
quantization is unimportant.
I know quantization describes energy at the subatomic level as coming in separate "packets", such as photons. However, I don't know how quantization is related here: because quantization is unimportant, we can use it for translational and rotational motion of molecules at room temperature and above? And why is quantization unimportant here? I believe translational and rotational motion of molecules happen at a subatomic level. Moreover, why can't we use the equipartition theorem for vibrational motion?
Answer: What they mean is that for translational motion the quanta are so small that using classical mechanics produces no error, i.e. $k_BT$ is far, far bigger than the energy gaps between translational quanta ($k_B$ is the Boltzmann constant). The same may be true of rotational motion, where quanta may be separated by only a few wavenumbers vs. $k_BT\approx 210$ wavenumbers at room temperature. It is generally true for rotational motion, but not always, e.g. for H$_2$. Vibrations have quanta of at least hundreds of wavenumbers, and $v=1$, $v=2$ etc. are hardly populated at room temperature.
We can only use the equipartition theorem if $k_BT \gg \Delta E$, where $\Delta E$ is the gap between energy levels, i.e. the fact that there are discrete levels is no longer important. This can occur if the energy gaps are very small, or the temperature very high, or both.
(Technically the 'equipartition theorem' of classical statistical mechanics states that the mean (average) value of each independent quadratic term in the energy is equal to $k_BT/2$) | {
"domain": "chemistry.stackexchange",
"id": 15252,
"tags": "quantum-chemistry"
} |
Time and sample rate for QAM | Question: I am trying to understand how to get time as I already have sample rate. I want to draw a graph of sample rate vs time. I have information such as carrier frequency, channel bandwidth. Can that be used to get time.
So far, I have understood this: Samples per symbol and number of symbols for QAM
In order to get the time, I would need to do this:
T = 1/Samples per symbol
Would this be correct?
Answer: Consider you have a bandwidth of $B$. The sampling rate $F_s$ is related to $B$ by $F_s > 2B$. The sample time is then $\frac{1}{F_s}$. So if you have $N$ samples, the total time is $\frac{N}{F_s}$.
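In code, with assumed example values for the rate and sample count (not from the question), the relation is simply:

```python
# Assumed example values (not from the question)
fs = 2.5e6            # sample rate in Hz, chosen so that fs > 2*B
N = 1_000_000         # number of captured samples
Ts = 1.0/fs           # time per sample: 400 ns
total_time = N/fs     # total capture time: 0.4 s
```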
Is that what you are looking for? | {
"domain": "dsp.stackexchange",
"id": 8656,
"tags": "digital-communications"
} |
Help! An 8 year old asked me how to build a nuclear power plant | Question: I would really like to give an explanation similar to this one.
Here's my current recipe:
(i) Mine uranium, for example take a rock from here (picture of uranium mine in Kazakhstan).
(ii) Put the rock in water. Then the water gets hot.
(iii) [Efficient way to explain that now we are done with the question]
This seems wrong, or the uranium mine would explode whenever there is a rainfall. Does one need to modify the rock first? Do I need some neutron source other than the rock itself to get the reaction started?
As soon as I have a concrete and correct description of how one actually does I think I can fill in with details about chain reactions et.c. if the child would still be interested to know more.
Answer: Everything is made of tiny things called atoms. All atoms have a tiny center part called the nucleus. Some atoms have an unusual type of nucleus that, every once in a long while, randomly explodes, sending tiny pieces in all directions. Normally those tiny pieces just bounce around until they join another atom. However, if you have a bunch of the right kind of exploding nuclei together, the exploding pieces of one nucleus can hit other exploding nuclei, and make them explode immediately, then those pieces hit even more exploding nuclei, and you get a chain reaction, sort of like dominoes.
To make a nuclear reactor, you dig up a bunch of rocks with the right kind of exploding atoms, and you carefully remove many of the other atoms so the exploding atoms are close enough together to make a chain reaction, then you put them in water*. All the exploding nuclei produce a lot of heat, which boils the water. The steam turns a fan, which spins a magnet, and creates electricity. You have to be very careful that you don't put too many of the pieces with exploding atoms together, or the atoms will explode too fast, and the reactor will get too hot.
*If you want to get into more detail, you could explain that the exploding bits are going so fast, that they usually pass right through the other atoms, cartoon-style, unless you have other atoms, like those in water (a moderator), for them to bounce off of and slow down. You could also explain that reactors use "control rods", which are made of atoms that easily absorb the exploding bits, and therefore slow down the chain reaction. So, if they push the control rods further into the reactor, the chain reaction slows down more.
If you want to include more terminology:
Rocks = Uranium ore
Removing all the other atoms = enrichment
Nucleus exploding = nuclear fission
Exploding atoms = radioactive atoms (often Uranium)
Exploding pieces = neutrons (and some other particles)
Fan = turbine | {
"domain": "physics.stackexchange",
"id": 29396,
"tags": "nuclear-physics, education, nuclear-engineering"
} |
Understand the definition of frame and inertial frame in Arnold's Galilean spacetime definition | Question: In Arnold's Mathematical Methods of Classical Mechanics, we define the physical space time as a four dimensional affine space with associated Galilean structure. I understand this part.
Now what I'm not clear after reading the next section in the book is:
What's the definition of a frame of reference, i.e. the formal definition of the mapping from the physical spacetime to $\mathbb{R}^4$? I guess the mapping needs to be bijective and preserve the Galilean structure.
What's the definition of an inertial frame? Is it an additional axiom that states there's a special frame mapping? Or does the concept of inertial frame already emerge from the Galilean spacetime definition?
Answer: With the definition of the Galilean structure written in your comment, an inertial reference frame can be safely defined as a Cartesian coordinate system with respect to the affine structure of $A^4$ with origin $O$ and axes $e_0,e_1,e_2,e_3$ such that
(1) $\langle e_0,dt\rangle =1 $
(2) $\langle e_i, dt \rangle = 0$ for $i=1,2,3$.
With these requirements, it is not difficult to prove that, considering a pair of these Cartesian coordinate systems associated to different choices of the basis and the origin, the transformation laws are, in fact, the Galilean transformations
$$x'^0 = x^0 + c \: (= t+k)$$
$$x'^k = c^k+t v^k + \sum_{j=1}^3R^k_j x^j \:, \quad k=1,2,3$$
where
$v^k, c^k \in \mathbb{R}$ and $[R^k_j] \in O(3)$.
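As a concrete sketch of these transformation laws (with arbitrary assumed values for $k$, $v^k$, $c^k$ and a rotation $R$), a Galilean transformation preserves time intervals and the spatial distance between simultaneous events:

```python
import numpy as np

# Assumed arbitrary parameters of the transformation
k = 2.0                                  # time shift
v = np.array([3.0, 0.0, 0.0])            # relative velocity
c = np.array([0.0, 0.0, 5.0])            # spatial translation
th = 0.3                                 # rotation angle about the z axis
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])

def galilean(t, x):
    """x'^0 = x^0 + k,  x'^j = c^j + t v^j + (R x)^j."""
    return t + k, c + t*v + R @ x

# Two simultaneous events: their time interval (zero) and spatial
# distance are invariants of the transformation
t1, x1 = 1.0, np.array([1.0, 2.0, 3.0])
t2, x2 = 1.0, np.array([4.0, 0.0, 1.0])
T1, X1 = galilean(t1, x1)
T2, X2 = galilean(t2, x2)
```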
A general reference frame $x'^b$ is connected to an inertial one $x^a$ through a generic motion of the non inertial frame with respect to the inertial one:
$$x'^0 = x^0 + c\:,$$
$$x'^k = c^k(t) + \sum_{j=1}^3R^k_j(t) x^j \:, \quad k=1,2,3$$
where
$c^k(t)$ and $[R^k_j](t) \in O(3)$ are arbitrary smooth functions. | {
"domain": "physics.stackexchange",
"id": 98523,
"tags": "classical-mechanics, inertial-frames, galilean-relativity"
} |
Max independent set in planar graphs PTAS proof | Question: I've been searching for a few hours for a proof that Max independent set in planar graphs admits a PTAS, but I couldn't find anything. I'm searching for one without any reductions, and I wonder if anyone here can help me find one.
Thanks in advance.
Answer: I haven't got a paper, but looking through my favorite book on approximation algorithms, I've found something.
It is named: The Design of Approximation Algorithms.
You find it free for download (even legal) on https://www.designofapproxalgs.com.
Take a look at page 269 (ff.); especially page 271 should be interesting for you, as they describe and prove a PTAS for max independent set in planar graphs.
It might not be "front of the line" research or results but hope it helps. | {
"domain": "cs.stackexchange",
"id": 16418,
"tags": "complexity-theory, approximation, planar-graphs"
} |
What's the relationship between dew point and temperature? | Question: In a Ph.D. dissertation, the writer changes the land surface from natural earth to urban built-up area, to compare the differences associated with this land-use change.
Due to the urban heat island effect, the temperature of the lower atmosphere above the surface increases, and the dew point decreases.
Dew point is a parameter that represents the absolute moisture. When the temperature increases, how should the change in the dew point be analyzed?
Answer: There are two separate effects:
The average temperature above urban land is greater than above rural land. This could be because of the cooling effect of plants evaporating water, or because of the higher albedo of vegetation than pavement and roofing.
The absolute humidity decreases; that is, the total amount of water vapor in a given volume of air decreases, perhaps due to the capture and run-off of rainwater into storm drains, rather than evaporating. | {
"domain": "chemistry.stackexchange",
"id": 5614,
"tags": "physical-chemistry"
} |
Is multiplying by Ts faster than dividing by fs? | Question: Say fs = 1000 and Ts = 0.001. Would it be faster to compute Ts at the beginning and subsequently multiply by 0.001 instead of dividing by 1000 when computing frequency-dependent quantities?
Answer: Generally, it makes sense to ensure that your code is logically correct, numerically well-behaved, intuitive to read, and tested. That is hard enough. Only when you observe that some inner loop or library call is a real hotspot, affecting the functionality of your software, does it make sense to rewrite code for speed, and then you should always profile before and after.
If a constant is known at compile time, the compiler may precompute its reciprocal and substitute a multiplication for the division, if this is within precision constraints and runs faster for a given target. If possible, I would rather outsource that complexity to the compiler.
Edit:
It is not "technically the same operation". To see why, have a look at this MATLAB snippet:
a = single(10.0)
b = 1/a
c = 42/a
d = 42*b
c-d
ans =
single
-4.7684e-07
Since floating-point operates with finite precision and intermediate rounding, the order of operations does matter. Depending on compiler flags, the compiler may be allowed to re-order floating-point arithmetic even though the result will differ to some degree.
If we look at the binary representation we see that they differ in the lsb:
dec2bin(typecast(c,'uint32'),32)
dec2bin(typecast(d,'uint32'),32)
ans = '01000000100001100110011001100110'
ans = '01000000100001100110011001100111'
-k | {
"domain": "dsp.stackexchange",
"id": 9955,
"tags": "floating-point"
} |
The need for the Ising model in Mean field theory? | Question: Consider the Heisenberg Hamiltonian:
$$\newcommand{\p}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\f}[2]{\frac{ #1}{ #2}} \newcommand{\l}[0]{\left(} \newcommand{\r}[0]{\right)} \newcommand{\mean}[1]{\langle #1 \rangle}\newcommand{\e}[0]{\varepsilon} \newcommand{\ket}[1]{\left|#1\right>}
H=-\f{J}{2} \sum_{\mean{ij}}\vec S_i \cdot \vec S_j+\mu_Bg_S \vec B \cdot \sum_i \vec S_i$$
We can write the contributions which involves a given spin as:
$$ H_i=-J\vec S_i \cdot \sum_{j}\vec S_j+\mu_Bg_S \vec B \cdot \vec S_i$$
In the mean field approximation we then replace $\vec S_j$ with it's expectation value to get:
$$ H_i=-J\vec S_i \cdot \sum_{j}\mean{\vec S_j}+\mu_Bg_S \vec B \cdot \vec S_i$$
Now my question is; is it enough that we have rotational symmetry about $\vec B$ to say that $\mean{\vec S_j}$ must point along $\vec B$ or do we have to make the ising approximation?
My thought is that it is the latter since $\mean{\vec S_j}$ is not necessarily zero when $\vec B$ is.
Answer: You are correct when saying that $\vec m_j = \langle\vec S_j \rangle$ is not necessarily aligned with $\vec B$.
We can write your second equation as
$$ H_i = -J\vec S_i \cdot (N\vec m + \vec B)$$
where $\vec m$ is the average of the $\vec m_j$s over $j$. It should be intuitively clear that if -- for any reason -- a majority of spins happens to be aligned along an arbitrary axis, then $|\vec m|$ is large, and the $i$-th spin described by the Hamiltonian above will align with $\vec m$, if $\vec B$ is small (even if nonzero). This means that this "arbitrary axis" solution is stable.
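This stability can be illustrated with the standard mean-field self-consistency iteration for the simplest (Ising-like) case in which all spins align along one axis; the sketch below absorbs $\beta$, $J$ and $N$ into a single parameter and is only illustrative, not part of the answer's derivation:

```python
import math

def mean_field_m(beta_JN, beta_B=0.0, m0=0.1, iters=500):
    """Iterate the self-consistency condition m = tanh(beta*J*N*m + beta*B)."""
    m = m0
    for _ in range(iters):
        m = math.tanh(beta_JN*m + beta_B)
    return m

# Ordered phase (low temperature): a nonzero magnetisation is stable at B = 0
# Disordered phase (high temperature): m relaxes back to zero
```

Starting from a small seed magnetisation at $B=0$, the iteration converges to a nonzero fixed point when the coupling is strong enough, and to zero otherwise — the spontaneous symmetry breaking described below.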
Your intuition did tell you that there must be something wrong, considering the system is rotationally symmetric. Indeed, in phase transitions, systems do undergo spontaneous symmetry breaking. In the case $B=0$, one could think that the total magnetisation must be zero because of symmetry, but that is not necessarily the case. | {
"domain": "physics.stackexchange",
"id": 40736,
"tags": "electromagnetism, ising-model, ferromagnetism"
} |
remap to part of message | Question:
Hi. I have written my first node - a quaternion-to-Euler conversion node (using the tf library). The node subscribes to geometry_msgs::Quaternion and publishes geometry_msgs::Vector3.
I now want to connect this node to the quaternion part of a pose message that's published by another node.
So i used this command:
rosrun mypkg quat_to_euler_conv quaternion:=ground_truth/pose/orientation
Because pose/orientation is of type geometry_msgs::Quaternion, I think this should work, but it doesn't connect. Using rostopic echo and pub I can make the node work fine, though.
Is it maybe not possible to remap a topic to a message part of another topic?
Originally posted by Orangelynx on ROS Answers with karma: 1 on 2016-06-21
Post score: 0
Answer:
use topic_tools:
http://wiki.ros.org/topic_tools/transform
Originally posted by AA with karma: 26 on 2017-03-16
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by gvdhoorn on 2017-03-17:
To add to the answer of @AA:
Is it maybe not possible to remap a topic to a message part of another topic?
No, that is not possible. rostopic uses some tricks internally to allow this, but the regular remapping infrastructure does not support remapping parts of msgs. | {
"domain": "robotics.stackexchange",
"id": 25011,
"tags": "ros, remap, topics"
} |
Why does $H_2$ have $C_V$=$7/2 R$ at high temperatures, while the total number of degrees of freedom is 6? | Question: The two hydrogen atoms have 6 degrees of freedom in total. Of them, $3$ contribute to translation, $2 $contribute to rotation and $1$ contribute to the vibration.
I know that the vibrations motion is frozen at low temperature due to quantum mechanical effects.
However, then the $C_V$ at high temperature should be $6/2 R$, while experimentally, it is $7/2 R$ (source: Principles of Physics by Walker, Resnick and Halliday)
Edit: The answers reveal that the missing part of specific heat is due to potential energy of vibration. So I am extending the question for clarification. $CO_2$ has total 9 degrees of freedom, of which 3 are translational, 2 are rotational, 4 are vibrational. So, at high temperature, will the $C_V$ of $CO_2$ be $\frac{R}{2} \times [3+2+4+4]$? The two 4s are due to kinetic and potential energy of vibrational motion.
Answer: Quoting from Wikipedia on heat capacity:
Each rotational and translational degree of freedom will contribute R/2 in the total molar heat capacity of the gas. Each vibrational mode will contribute R to the total molar heat capacity, however. This is because for each vibrational mode, there is a potential and kinetic energy component. Both the potential and kinetic components will contribute R/2 to the total molar heat capacity of the gas.
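The counting rule just quoted can be sketched numerically (a minimal illustration, with the gas constant $R \approx 8.314\ \mathrm{J\,mol^{-1}\,K^{-1}}$):

```python
R = 8.314  # gas constant, J/(mol*K)

def cv(translational, rotational, vibrational_modes):
    # each translational/rotational DOF contributes R/2;
    # each vibrational mode contributes R (kinetic + potential parts)
    return (translational + rotational)*R/2 + vibrational_modes*R

# H2 (high T):  3 trans + 2 rot + 1 vib mode   -> (7/2) R
# CO2 (linear): 3 trans + 2 rot + 4 vib modes  -> (13/2) R
```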
For more general gas, wikipedia [general gas] also give you example of how to calculate the number of degree of freedom and how to apply properly the equirepartion theorem:
For example, triatomic nitrous oxide N2O will have only 2 degrees of rotational freedom (since it is a linear molecule) and contains n=3 atoms: thus the number of possible vibrational degrees of freedom will be v = (3⋅3) − 3 − 2 = 4. There are four ways or "modes" in which the three atoms can vibrate, corresponding to 1) A mode in which an atom at each end of the molecule moves away from, or towards, the center atom at the same time, 2) a mode in which either end atom moves asynchronously with regard to the other two, and 3) and 4) two modes in which the molecule bends out of line, from the center, in the two possible planar directions that are orthogonal to its axis. Each vibrational degree of freedom confers TWO total degrees of freedom, since vibrational energy mode partitions into 1 kinetic and 1 potential mode. This would give nitrous oxide 3 translational, 2 rotational, and 4 vibrational modes (but these last giving 8 vibrational degrees of freedom), for storing energy. This is a total of f = 3 + 2 + 8 = 13 total energy-storing degrees of freedom, for N2O.
For a bent molecule like water H2O, a similar calculation gives 9 − 3 − 3 = 3 modes of vibration, and 3 (translational) + 3 (rotational) + 6 (vibrational) = 12 degrees of freedom. | {
"domain": "physics.stackexchange",
"id": 51097,
"tags": "thermodynamics, ideal-gas, degrees-of-freedom"
} |
Why do we need a lowpass filter after up-sampling? | Question: I understand why there needs to be a lowpass filter before down-sampling: the sample frequency needs to be at least twice the maximum signal frequency, otherwise there will be aliasing.
But based on the Wiki page, we need a lowpass filter after up-sampling.
Upsampling requires a lowpass filter after increasing the data rate,
and downsampling requires a lowpass filter before decimation.
I don't understand that, can anyone explain it?
Answer: Let's say you've sampled an analog signal $x(t)$ with spectrum $X(\omega)$ at rate $1/T$ that is high enough to satisfy the sampling theorem. The spectrum (i.e. the discrete time Fourier transform (DTFT)) of the sampled signal $x_{1,k}$ will be a periodic repetition of $X(\omega)$. The repetition period is $1/T$.
Now you sample the same signal with a higher rate $L/T,\, L\in\mathbb N$ yielding $x_{L,k}$. Again the spectrum will be a periodic repetition of $X(\omega)$ but this time the repetition period is $L/T$, so the spectral images have greater frequency distance than before.
The task of upsampling consists in calculating $x_{L,k}$ from $x_{1,k}$. First, $L-1$ zeros are inserted after every sample of $x_{1,k}$. Actually this just changes the basic frequency support of the DTFT to $-L/(2T)\ldots L/(2T)$ containing $L$ copies of the original (analog) spectrum. Therefore the unwanted copies are filtered out with a lowpass filter so that only the original spectrum in range $-1/(2T)\ldots 1/(2T)$ remains. The result is identical to $x_{L,k}$.
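A minimal NumPy/SciPy sketch of these steps (with assumed rates and tone frequency): zero-stuffing creates spectral images, and the lowpass removes them.

```python
import numpy as np
from scipy import signal

fs, L = 1000, 4                   # original rate 1 kHz, upsample by 4
t = np.arange(0, 1, 1/fs)
x = np.sin(2*np.pi*50*t)          # 50 Hz tone

up = np.zeros(L*len(x))
up[::L] = x                       # insert L-1 zeros between samples

h = signal.firwin(101, 1/L)       # lowpass, cutoff at the old Nyquist (500 Hz)
y = L*signal.lfilter(h, 1.0, up)  # gain L restores the original amplitude

U = np.abs(np.fft.rfft(up))       # 1 Hz bins at the new rate (4 kHz)
Y = np.abs(np.fft.rfft(y))
# before filtering there is an image of the tone at 1000 - 50 = 950 Hz;
# after filtering only the 50 Hz component remains
```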
The above steps are quite well explained in the figure of the Wiki article you quoted (in the same order). | {
"domain": "dsp.stackexchange",
"id": 1624,
"tags": "sampling, downsampling, supersampling"
} |
How to deal with method which throws 7 exceptions | Question: I'm refactoring some code and have come across a method which throws 7 exceptions; the method length is approx 20 lines. The method's caller wraps it in a try-catch and just catches the generic Exception. Is there a better way to handle this? I don't think I should handle each possible exception individually.
try {
setter();
}
catch(Exception e){
e.printStackTrace();
}
private void setter() throws MalformedURLException, IOException, ParserConfigurationException, SAXException, TransformerFactoryConfigurationError, TransformerException, XPathExpressionException
{
.....
}
Answer: This depends on what you want to do in case of an exception. If your handling is just printing the stack trace your solution is OK. But you might want to react in a different way depending on the problem.
Just an example:
an IO problem could mean that you have to retry (in case of a network issue)
a malformed URL might mean that you have to generate an error message and ask the user to re-enter the data
and so on
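The same idea, sketched in Python for brevity (the function names here are hypothetical, not from the question's code): fail fast on permanent errors, retry transient ones.

```python
def process(op, retries=3):
    """Hypothetical sketch: fail fast on permanent errors, retry transient ones."""
    for attempt in range(retries):
        try:
            return op()
        except ValueError:            # e.g. malformed input: no point retrying
            raise
        except IOError:               # e.g. transient network issue: retry
            if attempt == retries - 1:
                raise

calls = {"n": 0}
def flaky():                          # simulated op: fails twice, then succeeds
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient")
    return "ok"

result = process(flaky)               # retried until success
```

The key point is that the catch blocks are grouped by recovery strategy, not one per exception type.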
If you do not try to recover from the error, I don't know if it makes sense to catch the exception, print the stack trace and then continue. It might be the same as letting the application crash (and print the exception anyway). | {
"domain": "codereview.stackexchange",
"id": 34519,
"tags": "java"
} |
How many stars are there max and average per galaxy? | Question: What is the range of the number of stars possible in a galaxy? What is the rough average?
Googling leads to vague answers, things like "billions upon billions". But what is a more pinpointed set of numbers?
Answer: The term galaxy cannot be assigned an approximate number of stars, since our galaxy has a radius of about 27,000 light-years while a dwarf galaxy is typically 130 light-years across. On the other hand, for an old elliptical galaxy, it can happen that some of the stars are already dead and the creation rate is almost zero; but since those are probably made of 2 spiral galaxies colliding, the number would be around the sum of the two initial spirals. For some young spiral galaxies, 2-3 stars per year are being created and very few die.
So there are many factors that play a role. Ironically, for our galaxy the number of stars is pretty uncertain since we are inside of it, and we can't infer it from the apparent mass knowing how it rotates, because the total mass and the dark matter both play a role on it. Plus the star mass range can be as wide as 0.1 to 150 (!) solar masses. So, an order of magnitude for our galaxy? $10^{11}$ assuming the population of stars is consistent with the H-R diagram and that half of the population are binary stars. | {
"domain": "astronomy.stackexchange",
"id": 4439,
"tags": "galaxy, star-systems, mathematics"
} |
Installing a package on Mac | Question: I am trying to install the sigproextractor package on my Mac
I tried
pc-133-235:~ fi1d18$ python --version
Python 2.7.10
pc-133-235:~ fi1d18$ python3 --version
Python 3.8.2
pc-133-235:~ fi1d18$ pip --version
-bash: pip: command not found
pc-133-235:~ fi1d18$ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
But whatever I am trying says that can't find pip
Any help?
Answer: Maybe you could try python3 -m pip install package
"domain": "bioinformatics.stackexchange",
"id": 1317,
"tags": "python, conda"
} |
Get Gazebo Mesh size | Question:
Hi All,
I was wondering if there is a way to get the mesh size of a Gazebo model within ros through a service call.
If not, what would be the best way to write such a service?
Basically if you have such a model
<!-- office walls -->
<model:physical name="wall_1_model">
<xyz>0 -5 1</xyz>
<rpy>0.0 0.0 0.0</rpy>
<static>true</static>
<body:box name="wall_1_body">
<geom:box name="wall_1_geom">
<mesh>default</mesh>
<size>10 .2 2</size>
<visual>
<size>10 .2 2</size>
<material>Gazebo/Green</material>
<mesh>unit_box</mesh>
</visual>
</geom:box>
</body:box>
</model:physical>
I want to get the size <10 .2 2> from the mesh section.
Thanks
Chris
Originally posted by Christian on ROS Answers with karma: 56 on 2012-05-31
Post score: 0
Answer:
Ok, so I found a solution. To get access to the gazebo shapes you have to create a Gazebo Plugin. There is a tutorial on Gazebo how to do that.
After that you can access Model->Body->Geom->Shape->BoxShape.
gazebo::physics::ModelPtr model = this->world_->GetModel(req.model_name);
if (!model)
{
ROS_ERROR("GetModelProperties: model [%s] does not exist",req.model_name.c_str());
res.success = false;
res.status_message = "GetModelProperties: model does not exist";
return false;
}
else
{
// get model parent name
gazebo::physics::ModelPtr parent_model = boost::dynamic_pointer_cast<gazebo::physics::Model>(model->GetParent());
// get list of child bodies, geoms
for (unsigned int i = 0 ; i < model->GetChildCount(); i ++)
{
gazebo::physics::LinkPtr body = boost::dynamic_pointer_cast<gazebo::physics::Link>(model->GetChild(i));
if (body)
{
// get list of geoms
for (unsigned int j = 0; j < body->GetChildCount() ; j++)
{
gazebo::physics::CollisionPtr geom = boost::dynamic_pointer_cast<gazebo::physics::Collision>(body->GetChild(j));
if (geom)
{
res.geom_name = geom->GetName();
gazebo::physics::ShapePtr shape(geom->GetShape ());
if (shape->HasType (gazebo::physics::Base::BOX_SHAPE))
{
gazebo::physics::BoxShape *box = static_cast<gazebo::physics::BoxShape*>(shape.get ());
math::Vector3 tmp_size = box->GetSize ();
geometry_msgs::Vector3 size;
size.x = tmp_size.x;
size.y = tmp_size.y;
size.z = tmp_size.z;
res.size = size;
}
}
}
}
}
res.success = true;
res.status_message = "GetModelProperties: got properties";
return true;
}
return true;
Originally posted by Christian with karma: 56 on 2012-06-01
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 9624,
"tags": "gazebo"
} |
Why do people look different after a long sleep? | Question: What happens during a long sleep that makes people look odd when they have just woken up? Why doesn't the same phenomenon occur in the case of a person who lies down for an extended period of time, but stays awake? I've noticed that some nights seem to make a bigger difference than others in the appearance of the sleeper, but haven't noticed a pattern.
Answer: Puffy eyes are caused by fluid build up in tear ducts from extended periods of lying down. Gravity from sitting or standing slowly drains them during the day. The crusty 'sleep' that accumulates in the corner of your eyes is the residue from basal tear liquid that has seeped out of the eye and evaporated during the night. | {
"domain": "biology.stackexchange",
"id": 156,
"tags": "human-biology, sleep"
} |
STL Stack Implementation | Question: I implemented std::stack from the STL for a deeper understanding of the language and memory, since I am still only a beginner. I implemented the stack using a singly linked list.
Header file:
/* Header file for abstract data type "STACK" implemented using a linked list */
#ifndef STACK_H
#define STACK_H
template <class T>
class stack {
public:
void push(T); //Function that inserts elements into the stack
bool empty(); //Function to test whether the stack is empty
T top(); //Returns top element of stack
void pop(); //Removes element at the top of the stack
int size(); //Returns size of stack
void print(); //Prints stack contents
struct node { //Definition of node structure with constructor and destructor
T node_data;
node *next;
//default ctor
node() { next = nullptr; }
//default dtor
~node() { delete root_node; }
};
private:
node *root_node = nullptr;
int elements = 0;
};
#endif //STACK_H
And here is the actual implementation file:
/* Definitons of the STACK data structure implemented using a linked list */
#include <iostream>
#include "stack.h"
using std::cout;
using std::endl;
/* FUNCTION: Test to check if the stack is empty or has at least one element */
template <class T>
bool stack<T>::empty() { return (root_node == nullptr); }
/* FUNCTION: Returns current size of the stack */
template <class T>
int stack<T>::size() { return elements; }
/* FUNCTION: Adds nodes to the stack with one argument which is the data to be inserted */
template <class T>
void stack<T>::push(T data) {
//Operation to perform if the stack is empty.
//Root element is popped off last (First in, Last out)
if ( empty() ) {
root_node = new node;
root_node->node_data = data;
root_node->next = nullptr;
elements++;
}
//Operation to perform if stack is not empty.
//Elements inserted into stack with dynamic allocation.
else {
node *new_node = new node;
*new_node = *root_node;
root_node->next = new_node;
root_node->node_data = data;
elements++;
}
}
/* FUNCTION: Removes element at the top of the stack */
template <class T>
void stack<T>::pop() {
if (size() > 1) {
node *temp_node = new node;
temp_node = root_node->next;
root_node = temp_node;
elements--;
}
else if (size() == 1) {
root_node = nullptr;
elements--;
}
else {cout << "\nOperation pop() failed: Stack is empty!" << endl;}
}
/* FUNCTION: Retrieves element at the top of the stack */
template <class T>
T stack<T>::top() {
if (!empty()) {return root_node->node_data;}
else {cout << "\nOperation top() failed: Stack is empty!" << endl; return -1;}
}
/* FUNCTION: Prints the stack contents */
template <class T>
void stack<T>::print() {
int index = size();
for (int i = 1; i < index; i++) {
cout << "Element " << i << ": " << " " << top() << endl;
pop();
}
}
I would appreciate all criticism relevant to code, style, flow, camelCase vs underscore, and so forth.
Answer:
I would appreciate all criticism relevant to code, style, flow, camelCase vs underscore, and so forth.
First, (contrary to Loki Astari's answer) I think your style is correct (i.e. please do not capitalize the first letter of your classes - keep them matching the std:: style).
Regarding the APIs of your code:
Your code doesn't enforce const correctness
For argument types, consider the following convention:
Observed parameter (no modification of the value in the function)
void function(const argument& a);
I/O parameter (function modifies the parameter, not owner):
void function(argument& a); // a is modified in the function (I/O parameter)
Owned parameter:
void function(argument a); // function "owns" a (gets it's own exclusive copy)
Following this, push() and top() should be written like this:
template<typename T>
void stack<T>::push(T data) // pass by value here
{
root = new node{ std::move(data), root }; // and move the value here
++elements;
}
template<typename T>
const T& // return a const T&
stack<T>::top() const // and function is const
{
if (!root)
throw std::runtime_error("stack<T>::top: empty stack");
return root->data;
}
Taking a parameter by value in push has these advantages:
specifies ownership in the interface
enables you to use all constructors of type T here
is exception safe (if the instance of the arg cannot be created, code will fail before entering the function body).
The size API:
int size();
Should be:
std::size_t size() const;
The print API has more problems:
It clears the list (either rename to "destructive_print", or ensure the iteration is not destructive). If you implement it in a non-destructive way, mark it const.
It introduces a dependency on std::cout that has nothing to do with the functionality of objects. Ideally, it shouldn't be a member. If you still want to have it as a member, pass the output stream instance as a parameter.
The constructor of the node class should construct a fully valid instance in one step. To decide on its parameters, you should look at how you use it:
Usage example (your old code):
root_node = new node;
root_node->node_data = data;
root_node->next = nullptr;
Optimal example (new client code):
root_node = new node{ std::move(data), root_node }; // use std::move on the data
implies the node should be implemented like this:
template <class T>
class stack {
// ...
struct node { // struct, because it is internal and only
// accessed within stack
// (i.e. we can guarantee correct use in client code)
T node_data;
node* next;
};
};
The size should also be of std::size_t type.
Your stack doesn't have a destructor. The destructor should call delete on each node, in an iteration.
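For illustration, here is a minimal, self-contained sketch of what that iterative destructor could look like (member names assumed to match the reviewed class; only the parts needed to show the idea are included):

```cpp
#include <cstddef>
#include <utility>

template <class T>
class stack {
    struct node { T node_data; node* next; };
    node* root_node = nullptr;
    std::size_t elements = 0;
public:
    // Walk the list once, deleting each node in turn. An iterative loop
    // (rather than recursion) avoids call-stack overflow on long lists.
    ~stack() {
        while (root_node) {
            node* next = root_node->next;
            delete root_node;
            root_node = next;
        }
    }
    void push(T data) {
        root_node = new node{ std::move(data), root_node };
        ++elements;
    }
    std::size_t size() const { return elements; }
};
```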
Your print function skips the last element in the list, because you index from 1 up to (but not including) size(); the loop body therefore runs [elements - 1] times.
The pop() implementation can be written with no special case for one element:
template <class T>
void stack<T>::pop() {
if(empty())
throw std::runtime_error{"stack<T>::pop(): empty stack"};
node* new_root = root_node->next;
delete root_node;
root_node = new_root;
--elements;
}
Regarding the implementation of your class:
Do not print error messages, throw exceptions (this allows client code to decide what to do in case of an error, instead of forcing client code to get messages printed into its application's output stream). | {
"domain": "codereview.stackexchange",
"id": 7771,
"tags": "c++, beginner, memory-management, template, stack"
} |
Why does ampicillin in solution turn yellow? | Question: I have a universal tube with 10 mg mL$^{-1}$ ampicillin. When I got it, it was supposed to be sterile. It was opened for approximately 20 minutes for an experiment and has since been standing around sealed for a good month now.
Within the last couple weeks, it has gradually turned yellow. Right now the colour is faint, with a green-ish tint.
Why would it turn yellow? I know NADH is yellow, so that was my first guess. But I couldn't explain why ampicillin would cause NADH to accumulate, so I discarded that.
Side info: Ampicillin acts on bacterial cell walls, maybe that might help.
Answer: Why is NADH the first thing to come to mind? Time for some physical chemistry.
Beta-lactams have a shorter ring structure which gives them a different absorbance (around 322 nm apparently (source needed)) and hence a blueish hue. Typically Penicillin and Ampicillin start off as an off-white. As hinted by @Mad Scientist, the beta-lactam rings are unstable and will undergo acid-catalyzed hydrolysis which breaks the 4-membered ring.
The Penicillin core is also near a ketone which can attack the ketone in the lactam (I'm pretty sure this qualifies as enol chemistry but it has been a while). The cyclization will form a 5-membered oxazole ring. Based on my experience, unsaturated 5-membered rings will absorb at a higher wavelength, tending to result in a yellowish color (570-590 nm), so I am going to suggest that is the source of your color.
Ampicillin degradation | {
"domain": "biology.stackexchange",
"id": 433,
"tags": "bacteriology, antibiotics"
} |
Microwave inside-out cooking true/false | Question: The wikipedia article on microwave ovens says
Another misconception is that microwave ovens cook food "from the inside out", meaning from the center of the entire mass of food outwards.
It further says that
with uniformly structured or reasonably homogenous food item, microwaves are absorbed in the outer layers of the item at a similar level to that of the inner layers.
However, on more than one occasion I've microwaved a stick of butter, and the inside melts first, then the outside caves in releasing a flood of butter. (It may be relevant that my microwave turntable does not turn - but since I've done it more than once, I would not expect it to be a fluke of placement in the standing wave. And, the resulting butter-softness seemed very strongly correlated with depth, more than I'd expect from accident.) That sure seems consistent with the food absorbing more energy on the inside than on the outside. Given that this takes place over 30 seconds or so, I'd not expect much heat exchange to occur with the butter's environment (nor inside the butter itself), so that would forestall explanations of "the air cools off the outer layer of butter", unless I'm seriously underestimating the ability of air to cool off warm butter. So what's going on?
Answer: When people say "It's a myth that a microwave heats from the inside out", they are trying to correct the impression that heating starts in the very middle of a large item like a roast, and works its way out. For something thin or narrow like a stick of butter or a single serving of pasta, pretty much the whole item consists of "outer layer", so heating is fairly uniform throughout except for variation caused by the microwave variability (which means that it is actually not uniform at all).
The answer by g s explained why the most melting would be between the two ends of the stick, but I have the impression that you are seeing the interior melt before the top surface. What I suspect is happening is something like this:
Somewhere along the stick is a region of maximum amplitude
In that region, there may be an amplitude peak inside the stick, but heating may be fairly uniform on that small scale of 2-3 cm.
If there is an amplitude peak inside the stick, then that will explain your observations. But also I am not sure that g s is right about air cooling being irrelevant. Plus there is radiant cooling. (And evaporative cooling, maybe most important of all, props to user21820.) Heat deposited in the interior of the stick cannot dissipate as quickly as heat deposited on the exterior. Considering how close the stick is to melting already, it would not be surprising for the inside to get slightly hotter than the outside and melt first.
Added later: I did some research. If you google "microwave penetration depth" you will find different sites giving reasonably consistent values. For water at room temperature it's around 1 to 1.5 cm, but it's more for cooked meat.
It's also important to note that there is no sharp cutoff. The penetration depth is defined as the depth at which the power is $1/e$ of the level at the surface, or about 37%. If you go the same distance further in, it will be 37% of 37%, and so on.
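That exponential fall-off is easy to check numerically. A small illustrative sketch (depth and penetration depth in the same units; the 1.5 cm value is the rough room-temperature water figure quoted above):

```python
import math

def power_fraction(depth_cm, delta_cm):
    """Fraction of the surface power remaining at a given depth,
    assuming exponential attenuation with penetration depth delta."""
    return math.exp(-depth_cm / delta_cm)

delta = 1.5  # cm, roughly water at room temperature
print(power_fraction(1.5, delta))  # one penetration depth: ~0.37
print(power_fraction(3.0, delta))  # two depths: ~0.37 * 0.37
```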
Also, the more efficiently it heats, the shallower it penetrates. An analogy might be carrying a plate of hors d'oeuvres into a large crowded party. Water is like a hungry crowd that takes snacks quickly (but still takes fewer if they see the plate is getting empty, so you get that exponential decay). So you don't make it far into the crowd, but they get filled up quicker, just like water heats up quicker. Meat is like people who are too busy talking so not as many people grab the snacks. You make it farther in but the crowd is less satiated. And glass is like a room full of kids who can't even see what's on the tray as you heartlessly walk all the way across the room.
Butter is a mixture of water and fat, so the penetration depth will probably be greater than 1.5cm, but I don't know what to guess. | {
"domain": "physics.stackexchange",
"id": 89958,
"tags": "thermodynamics, temperature, microwaves, food"
} |
Velocity squared confusion | Question: As far as I know, a vector cannot be squared. Velocity is also a vector, though we have equation of motion, $v^2=u^2+2aS$. I am perplexed with $v^2$ here. What does it imply?
Answer: A vector can be squared, viz. $v^2=v\cdot v$. The $3$-dimensional constant-acceleration equations $v-u=ta,\,v+u=2t^{-1}s$ have dot product $v^2-u^2=2a\cdot s$. | {
"domain": "physics.stackexchange",
"id": 81711,
"tags": "kinematics, vectors, velocity, speed"
} |
Exact solution of the 2D Ising model in an external magnetic field? | Question: The 2D Ising model is a thoroughly studied model. One of the remarkable features of the model is that it predicts a hysteresis. However, I cannot seem to find the appropriate literature on this subject. I did do searches on Scopus to find relevant articles. I also found the book "The 2D Ising model" by Barry M. McCoy and Tai Tsun Wu. This is a large book dedicated to the 2D Ising model and has a few paragraphs on hysteresis. One of these shows the expected hysteresis: a "smooth" flip from the -1 to 1 normalized magnetization state.
However, this is only for the first row of the lattice. This is not what I am searching for. Is there an analytical expression for the net magnetization of the whole lattice, in an external magnetic field? The field is swept from some negative value to some positive value, and then from positive back to negative, while keeping the temperature constant.
A pointer to a numerical solution or to a simulation is fine too.
EDIT:
Background
I am using the Metropolis algorithm to simulate an Ising lattice. See the following figure generated by the model:
Apologies for the weird legend. What you see: the model starts at T=0, then is swept in linear steps to T=2 in 100 steps. Each step sweeps 100 times over the lattice. The lattice is 20x20 and uses periodic boundary conditions. I'm using J = 1 and k = 1, where J is the coupling constant between the spins. This first step is done to generate an appropriate "initial condition" before the field is turned on.
Then the field is swept from -0.1 to 1, in 1000 steps, linearly. The same is done but now backwards.
As you can see, there is an hysteresis as expected. However, I would expect the hysteresis to be "smooth", but it appears to be a step function (or "square" hysteresis).
I tried zooming in on the hysteresis region, but it either moves or gets even sharper (the intermediate points, which appear to be artifacts, are deleted).
I'm therefore wondering whether this hysteresis is what I should see, or whether I'm doing something wrong in my simulation. Maybe this outcome is a wrong prediction when using the Metropolis algorithm?
Answer: The Ising model is indeed very interesting!
In 2 dimensions there is an analytical solution, in the case of no applied field. It is very complicated: when it first came out it consisted of 30 pages of very challenging maths, only to be 'simplified' down to about 15 pages in the 60s.
With an applied field, or in higher dimensions, it is typical to numerically simulate the Ising model. The system is very amenable to Monte Carlo methods. You can look up the Metropolis algorithm. I found a very useful resource in the book Monte Carlo Methods in Classical Statistical Physics by Janke. Here is an excerpt https://www.physik.uni-leipzig.de/~janke/Paper/lviv-ising-lecture-janke.pdf
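For a flavour of how simple the core update is, here is a minimal single-spin-flip Metropolis sweep for the zero-field 2D Ising model (a sketch with $J=k=1$, as in the question; variable names are mine):

```python
import math
import random

def metropolis_sweep(spins, T, rng):
    """One Monte Carlo sweep: L*L attempted single-spin flips,
    zero field, J = k = 1, periodic boundary conditions."""
    L = len(spins)
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] = -spins[i][j]

# Well below T_c (~2.269), an ordered start stays almost fully magnetised:
rng = random.Random(0)
L = 20
spins = [[1] * L for _ in range(L)]
for _ in range(50):
    metropolis_sweep(spins, T=1.0, rng=rng)
m = sum(sum(row) for row in spins) / (L * L)
print(m)  # stays close to 1
```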
A lot of very interesting physics can be seen in these numerical simulations. Here's one of many online simulations of the 2D Ising Model:
http://physics.weber.edu/schroeder/software/demos/IsingModel.html
If you want an offline simulation, you can find some (along with many other interesting simulations) at NetLogo http://ccl.northwestern.edu/netlogo/
Edit:
Your hysteresis curve is definitely believable, if you are under $T_\textrm{C}$, which you easily are. One of the results from Onsager's original paper, is that the spontaneous magnetisation, while below $T_\text{C}$, is given by:
$$|m|=\Bigg(1-\sinh\Big(\frac{2J}{T}\Big)^{-4}\Bigg)^\frac{1}{8}.$$
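Evaluated numerically (a small sketch with $J=k=1$, matching the question's units), this formula shows how flat the spontaneous magnetisation stays well below $T_\text{C}\approx2.269$:

```python
import math

def onsager_m(T, J=1.0):
    """Onsager's spontaneous magnetisation for the 2D Ising model, T < T_c."""
    return (1.0 - math.sinh(2.0 * J / T) ** -4) ** 0.125

print(round(onsager_m(1.0), 3))  # 0.999, the value used below
print(round(onsager_m(2.0), 3))  # still above 0.9, despite being near T_c
```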
I've included an image of when I was simulating the 2D ising model, and in the inset, the red line is the result from the exact solution. (No applied field.)
What you can see from this, is that when you are below the critical temperature, you can expect a large spontaneous magnetism. At $T=1$, the system in equilibrium is nearly fully magnetised, spontaneously. Although there isn't an exact solution, you can imagine what will happen when you apply an external field. If the field is in the direction of your magnetisation, it will slightly increase the magnetisation. (If the field is strong enough, it will become fully magnetised.) On the other hand, if the field is small, and in the opposite direction, it will, after a quick period, entirely flip the spins, so they are at a little above their spontaneous magnetisation value, in the opposite direction.
I've mocked up a hysteresis curve based on this reasoning, at $T=1$ (note the heavily discontinuous scale!). The spontaneous magnetisation value of $m=0.999$ comes from the formula I quoted earlier. At $H=0$, both $m=\pm0.999$ are equilibrium states, but as soon as you slightly alter $H=\pm\delta$, where $\delta$ is small, you pick one of them.
There's one crucial thing missing from this logic, and that's the cost of nucleation. If all your spins point one way, the system can persist out of equilibrium, because it is hard to flip all the spins: for them all to flip, a small cluster must first flip against its neighbourhood, which is energetically unfavourable despite the flip being with the field. That leads to a curve with hysteresis behaviour, and it's in good agreement with what you've seen. Once the field becomes strong enough to overcome the nucleation barrier, the entire grid of spins can flip. | {
"domain": "physics.stackexchange",
"id": 40535,
"tags": "ising-model"
} |
N-Toffoli on Cirq | Question: I am looking for general guidance on how to develop n-qubit gates in Cirq.
I am working on a QNN paper and I need to develop an n-controlled gate to be able to measure the cost function of the circuit.
Answer: This is actually very easy in Cirq. The controlled_by method can be used to automatically make any given gate controlled by an arbitrary number of control qubits. Here is a simple example for creating an X gate with 5 controls:
import cirq
qb = [cirq.LineQubit(i) for i in range(6)]
cnX = cirq.X(qb[5]).controlled_by(qb[0], qb[1], qb[2], qb[3], qb[4])
circuit = cirq.Circuit()
circuit.append(cnX) | {
"domain": "quantumcomputing.stackexchange",
"id": 3317,
"tags": "circuit-construction, cirq"
} |
Populations genetics and dynamics of bacteria on a Graph | Question: Disclaimer:
I'm a physicist and I'm fairly new to bioinformatics, therefore some things below may not make sense.
My purpose:
I would like to simulate genetic evolution of bacterial populations, implementing interactions among different bacterial strains on a graph.
My starting point is to generate a graph with a certain topology.
Each vertex $V_i$ should contain a population of $N_i$ individuals for which I choose a certain allelic profile from an MLST database. Population numerosity and allelic profile are assigned by random choice.
At starting time each vertex is populated with a "pure" strain of $N_i$ clones, identified with a randomly chosen allelic profile.
Dynamics assumptions
I will assume that:
Topology does not change with time
Vertex dynamic flows as a Moran process:
At time $t$ , one individual is chosen randomly to reproduce and one
individual is chosen to die. The same individual can be chosen to reproduce and
then die. Thus, an individual has either zero, one, or two descendants. Zero and
two with equal probability $p_0 = p_2 = (N_i -1 )/N_i^2$ , and one with probability $p_1 = 1 − 2 p_2$.
Migration from one vertex to another is possible between nearest neighbours with some fixed probability depending on the chosen topology. I will assume that when migration occurs, $I$ new individuals enter $V_i$ and $O$ individuals exit $V_i$, with $I$ and $O$ depending on the origin vertex's numerosity. Migrants carry allelic frequencies according to the population they came from. When migration occurs a new population is established: allelic profile frequencies change accordingly.
Migration and Moran-process evolution do not happen simultaneously; this allows me to guarantee that each population performs at least one evolution step.
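The per-vertex update described above can be sketched in a few lines (illustrative only; names and the toy two-profile population are mine):

```python
import random

def moran_step(population, rng):
    """One Moran event: a random individual reproduces and a random
    individual dies (possibly the same one), keeping N constant."""
    parent = rng.choice(population)          # chosen to reproduce
    victim = rng.randrange(len(population))  # chosen to die
    population[victim] = parent              # offspring replaces it

rng = random.Random(0)
pop = ["A"] * 5 + ["B"] * 5  # two allelic profiles on one vertex
for _ in range(100):
    moran_step(pop, rng)
print(len(pop), set(pop))  # size stays 10; drift may fix one profile
```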
Questions:
(1) Is there something fundamental that I'm missing?
(2) The Moran model ensures that the population size remains constant (apart from migration effects, obviously). Is this appropriate for simulating dynamics in the stationary growth phase, or should I introduce some demographic effects?
(3) Which could be a proper "timing" for migration and evolution steps?
Answer: I assume you are simulating a null distribution. Are you investigating recombination?
My main advice is to use population genetic terminology rather than geometry to describe your simulation (e.g. a vertex is a migration event between allopatric populations). Migration between populations is investigated via $F_{ST}$. You appear to be assuming clonality, and population movement appears to follow a 'stepping stone model' (I might have misunderstood the question).
Mutation rate ... I'll have a guess at $10^{-5}$ per site per year. Migration rate depends on the bacterial species being modeled, e.g. MRSA spreads in a very similar fashion to that described in your model (along a motorway); in contrast, soil bacteria don't adhere to allopatry at all (when I looked at them).
Fixing one parameter (population size) whilst investigating a second (migration) would be the normal way to assess the parameter space initially. Bacterial populations do undergo selective sweeps, and for growth I would look at a standard population growth model (e.g. the one Bob May investigated for chaotic dynamics). There are several population growth models, and fixing migration whilst assessing the growth model would seem sensible. The parameter space in the combined model of migration, space and population size becomes complex.
Okay, I get your model now. It's fine; the thing I'd be cautious about is the population density of the bacterial host, because this would skew migration across the nodes of your graph. A quick literature review will demonstrate the importance; 'double Pareto log-normal distribution' and its cousin (the name escapes me ....) are the sort of key words that might flag up. Bacteria are not my active thing so I'm rusty. | {
"domain": "bioinformatics.stackexchange",
"id": 1541,
"tags": "phylogenetics, phylogeny, networks, bacteria, allele-frequency"
} |
How to understand the Fresnel relation $1+R=T$? | Question: From the perspective of energy conservation, we are familiar with the relation $T+R=1$ (setting the incident wave amplitude to 1, $T$ and $R$ are the Fresnel transmission and reflection coefficients, supposing no energy is absorbed by the material). However, in the book "Electromagnetic Wave Theory" by Jin Au Kong (MIT), when deducing the reflection and transmission for TE or TM waves, he uses the relation $1+R=T$ and it really confused me. Could anyone explain the reason?
Answer: Keep in mind that $R,T$ as defined in your text are the coefficients of the electric field, not of the intensity.
The reason for the relation $R+1=T$ is simply that the electric field should be continuous across the boundary. Outside the material, you have an incident wave (coefficient of $1$) and the reflected wave (coefficient of $R$). Inside the material you have only the transmitted wave with coefficient $T$.
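For normal incidence this is easy to verify with the standard Fresnel field coefficients $R=(n_1-n_2)/(n_1+n_2)$ and $T=2n_1/(n_1+n_2)$; a quick numerical sketch showing both relations side by side (the oblique-incidence TE/TM forms behave the same way):

```python
n1, n2 = 1.0, 1.5  # e.g. air to glass

R = (n1 - n2) / (n1 + n2)  # field reflection coefficient
T = 2 * n1 / (n1 + n2)     # field transmission coefficient

print(abs(1 + R - T) < 1e-12)  # field continuity: 1 + R = T

# Energy conservation instead uses the *intensity* coefficients:
refl = R ** 2
trans = (n2 / n1) * T ** 2
print(abs(refl + trans - 1) < 1e-12)  # R_I + T_I = 1
```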
For continuity then you expect the electric field just outside the material matches that just inside the material, in other words $1+R=T$. | {
"domain": "physics.stackexchange",
"id": 77776,
"tags": "electromagnetic-radiation, conservation-laws, reflection, scattering, refraction"
} |
Why didn't I beat the avalanche? | Question: Yesterday, while skiing out of bounds on the south west canyons of Mt. Hood, I experienced a small and extremely mild quake. This, combined with the melting conditions, caused an extremely small avalanche in the canyon region we were in.
This was extremely mild in the spectrum of what is shown on TV or from what I've seen in other videos. A snow field of about 100 sq. m shifted about 100 m downhill. And while mild, this was still my first ever experience of this type of situation.
I'm a fairly experienced skier and we were already traveling downhill at about 35 mph +/- (we were tracking with a racing GPS app) when the snow "gave out from under". In normal "powder" and "good" snow, it normally feels like I'm floating on the snow. In this situation, it felt like I was being pushed from behind while sinking under. Maneuvering was next to impossible and the only option to accelerate was to "tuck". While initially I accelerated a little, the snow caught me and my group rather easily and rapidly. Gladly, no one was hurt, but my questions are as follows:
1) If all free falling objects accelerate at the same rate (this was on a fairly steep mountain section), why did we get "trapped" into the avalanche, when our acceleration already had 35 mph +/- accelerating it? Did the thicker snow breaking and shifting somehow create more friction between my ski and the snow?
2) Most modern skis today have a lot more "nose curve", also known as rocker, in the front and back, in order to break through thicker snow with more ease. The skis I was riding have what most in the ski world would call "excessive rocker". Why was it so easy for snow to sink me down, when my skis' design is for 2+ ft of snow and I was forcefully trying to "float" above the collapsing snow? This avalanche was tiny in perimeter and mass moved, and also only lasted about 3 seconds.
Answer: Tl;DR: It is mainly due to the skier's positioning on the ski and how being thrown off-balance by the avalanche affects it.
Concerning the first part of the first question:
1) If all free falling objects accelerate at the same rate (this was on a fairly steep mountain section), why did we get "trapped" into the avalanche, when our acceleration already had 35 mph +/- accelerating it?
there is probably a large psychological component here. You were moving across a stationary ground and suddenly that ground starts gliding, which it didn't before. So it is really the change in its movement that you felt.
There is also a physical component however: you state correctly that "all free falling objects accelerate at the same rate" and under this assumption it shouldn't make any difference that the ground started moving. But a free-fall approximation is not very good for skiing, especially when you already are in motion.
Did the thicker snow breaking and shifting somehow create more friction between my ski and the snow?
The answer is of course no; on the other hand, this question already hints at what actually happens. Skiing is largely dependent on your position and balance on the ski, and so is your acceleration (1). Our case is different from racing, but similar concepts apply. In the "dust on crust" you describe you have to lean back slightly to float, even with your rockers. This comes at a cost in acceleration. When the avalanche starts moving you get thrown off balance a bit, leaning back even further relative to the ground. To accelerate, you first have to get your weight forward again, applying pressure to the ski as before. This takes some time, of course.
Why was it so easy for snow to sink me down, when my ski's design is for 2+ ft of snow and I was forcefully trying to "float" above the collapsing snow?
Same reason, due to the leaning back your ski doesn't float properly anymore.
(1) which is e.g. why we saw Ted Ligety win almost every Giant Slalom a couple of years ago: he invented a better ski position
(2) aaaah, second best after deep powder in my opinion | {
"domain": "physics.stackexchange",
"id": 31016,
"tags": "newtonian-mechanics, newtonian-gravity, acceleration"
} |
Does leaving snow on my hot tub cover save me money? | Question: Snow is supposedly such a good insulator, that some animals dig snow caves in which to hibernate through the winter. However, my hands seem to feel more comfortable being exposed to winter air than to snow, but I'd imagine part of this has to do with the increased heat transfer associated with the water the snow melts into.
Will my outdoor hot tub run more efficiently with snow on top? Could the answer change depending on air temperature and/or wind speed?
Answer:
However, my hands seem to feel more comfortable being exposed to winter air than to snow
Air (still air) is a better insulator than snow. In fact the insulating properties of snow are due to the fact that it has a lot of trapped air within it. The other component of snow, water, is not noted for being a good insulator.
The trouble with using air as an insulator is that it's hard to keep the air still. Even if there is no wind the convection currents caused by your hands/whatever/tub will cause the air to move. And as the air flows away it carries away heat and brings fresh cold air into contact with the object. Though your hands may feel warmer in still air then in snow, I bet they'd feel warmer in snow than in a 100mph blizzard.
If you look at how animals use snow burrows they typically rely on their fur (or blubber if they're seals) as the primary insulator. The temperature immediately outside them will typically be minus a few degrees C. Where the snow comes in is by insulating them from the -40°C gale that's howling outside their burrow.
In the case of your hot tub, assuming the polystyrene lid you mention can provide enough insulation that the temperature above it stays below 0°C then a layer of snow would add more insulation if the external temperature is well below zero. If the external temperature is only around zero then the layer of snow will achieve little.
Snow is obviously a useless insulator if the internal temperature is above zero. In that case the snow will melt and the latent heat of fusion required will cause a massive heat loss. | {
"domain": "physics.stackexchange",
"id": 18309,
"tags": "thermodynamics"
} |
2 block friction problem | Question:
Consider a block of mass $m_2$ placed on a heavier block of mass $m_1$. $m_2$ is tied to a wall with string. A force $F$ is applied on $m_1$ at an angle $\theta$ to the horizontal (upwards). The friction coefficient between $m_1$ and ground is $\mu_1$ and that between $m_1$ and $m_2$ is $\mu_2$. The problem is to find minimum force to start moving $m_1$.
My doubt is in finding frictional force at top of $m_1$ due to $m_2$. I found frictional force (at top of $m_1$) as $f=\mu_2\cdot(m_2\cdot g+F\sin\theta)$. But in the book the answer does not have the $F\sin\theta$ term. Shouldn't it be considered because it increases the normal reaction from $m_1$ on $m_2$ as the force is acting at an angle?
Answer: A free body diagram of block 2 will show only four external forces acting on it: (1) gravity acting vertically downward, (2) the normal reaction force of block 1 acting vertically upward, (3) the tension in the string acting horizontally (say to the left), and (4) the friction force of block 1 acting horizontally (say to the right). Assuming the string doesn't break and the vertical component of $F$ doesn't exceed the weight of both blocks, block 2 doesn't accelerate horizontally or vertically and the sums of the horizontal and vertical forces are zero. Therefore the normal reaction force of block 1 on block 2 is simply the weight of block 2.
The vertical component of $F$ only reduces the reaction force, and consequently the maximum static friction force, of the ground on block 1.
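Putting these two normal forces together, the threshold condition is $F\cos\theta=\mu_1\big((m_1+m_2)g-F\sin\theta\big)+\mu_2 m_2 g$, so $F_{\min}=\frac{\mu_1(m_1+m_2)g+\mu_2 m_2 g}{\cos\theta+\mu_1\sin\theta}$. A quick numerical sanity check (the sample values are my own, not from the problem):

```python
import math

m1, m2 = 5.0, 2.0      # kg (sample values)
mu1, mu2 = 0.3, 0.4
theta = math.radians(30.0)
g = 9.8

# N12 = m2*g (F*sin(theta) does NOT load block 2),
# N_ground = (m1 + m2)*g - F*sin(theta).
F_min = (mu1 * (m1 + m2) * g + mu2 * m2 * g) / (math.cos(theta) + mu1 * math.sin(theta))

# Horizontal force balance at the point of slipping:
lhs = F_min * math.cos(theta)
rhs = mu1 * ((m1 + m2) * g - F_min * math.sin(theta)) + mu2 * m2 * g
print(round(F_min, 2), abs(lhs - rhs) < 1e-9)
```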
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 65315,
"tags": "homework-and-exercises, newtonian-mechanics, friction"
} |
How does the physical motion of atom lead to photon emission? | Question: It's known that what we call a temperature is in fact molecular motion at microscopic scale. But at which point the emission of photons happens due to this physical motion, so that we can talk about the thermal emission?
The Wikipedia page on thermal radiation says the motion of atoms "..results in charge-acceleration and/or dipole oscillation which produces electromagnetic radiation..", which is terminology beyond my competence.
May I ask for some explanation that doesn't require a degree in Physics to understand it?
Answer: When charge accelerates, it radiates energy. The "prototypical scenario" illustating this is the uniformly accelerated point charge and the Larmor Formula (see Wikipedia page of this name) that quantifies the radiated power (both the non-relativistic and relativistic versions are given on the Wiki page). For an everyday example, you can think of a dipole antenna, wherein charge undergoes roughly simple harmonic motion back and forth in the antenna's conductor, and thus radiates power (even a receiver antenna radiates a reaction field when its charge is accelerated by incoming radiation, which is how the TV detector man in the UK can get you if you don't have a licence: their detectors seek the reaction field from a receiver set).
So now, in a hot gas, we have a great deal of clumps of charge (gas molecules) bumping into one another. Each collision is naturally a violent acceleration for these charges, and so they radiate. Analogous processes happen in hot solids, although each molecule is more like the dipole antenna - heat represents vibrational energy in the lattice of molecules and there aren't really collisions as such.
In contrast, in deep space, hot gasses expand swiftly and pretty soon they become so rarefied and thin that there are very few collisions. Thus, in deep space there can be hot, "dark" gasses which do not give off their telltale radiation. Before anyone asks me whether this is a candidate for dark matter, I'll just say I don't know, I'm out of my depth and you need to ask another question for a real astronomer to answer. | {
"domain": "physics.stackexchange",
"id": 16250,
"tags": "temperature, thermal-radiation"
} |
Magnification of a system of lenses | Question: I have a lens system consisting of 7 elements. However, I cannot take the system apart to measure the distance between them or their focal lengths. I do, however, know the effective focal length of the whole system. Is there a way to calculate the magnification of the whole system (for a given image distance)?
Answer: If your system is equivalent to a single lens with focal length $f$, then you can proceed as if you had only one lens with that $f$, so $M_L=\frac{s'}{s}$, with $s$ the distance to the object | {
"domain": "physics.stackexchange",
"id": 80046,
"tags": "optics, geometric-optics, lenses, microscopy"
} |
Can I plug models like Linear Regression into a CNN feature map result? | Question: I was learning about image recognition in the Orange software, and I saw that I can feed my image database into a CNN (they call it image embedding) that outputs a feature map of the image, and then I can feed that feature map into models like Logistic/Linear Regression. And that is my objective: to compare those models, but I want to do it without Orange.
I am thinking about using VGG16 as my CNN and extracting the feature map from it. After that I want to plug that feature map into Linear/Logistic Regression to predict my image's class. Does that work at all? Is it possible?
(Using python and a proper labeled image database. Also I don't want to use the CNN alone, I really would like to use those 'simpler' models consuming the feature map)
Answer: What you're explaining is basically almost every CNN model where you basically have a fully connected layer at the end of the convolutions and that is equivalent to having a linear/logistic regression at the end (given there's only one output)
The only difference between all of those is whether there's an activation function or not, and if you are trying to build a classifier then you would definitely want an activation function to map the output values to a probability value between 0 and 1.
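The "simpler model on top of feature vectors" half of this can be sketched in plain Python. The random vectors below are stand-ins for real feature maps; extracting those would need something like Keras' VGG16 with include_top=False (an assumption about your setup), and the tiny gradient-descent logistic regression here is only illustrative:

```python
import math
import random

def sigmoid(z):
    if z < -60:                      # avoid overflow in exp for extreme logits
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(features, labels, lr=0.5, epochs=200):
    """Minimal logistic regression trained by per-sample gradient descent."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y                # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5

# Fake "feature vectors": class 1 has larger activations than class 0.
rng = random.Random(0)
labels = [0] * 50 + [1] * 50
feats = [[rng.gauss(1.0 if y else -1.0, 0.5) for _ in range(4)] for y in labels]
w, b = train_logistic(feats, labels)
acc = sum(predict(w, b, x) == y for x, y in zip(feats, labels)) / 100.0
```

With real feature maps the only change is where `feats` comes from; the linear model on top stays this simple, which is exactly why the Orange pipeline works.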
Example:
That "Fully Connected" layer is just basically a Linear/Logistic regression (10 of them actually since we have 10 outputs in this example) and a softmax applied to the output values to scale them all between 0/1 and ensure the sum is exactly 1. | {
"domain": "datascience.stackexchange",
"id": 7492,
"tags": "image-classification, image-recognition, orange, image-preprocessing"
} |
Zener diode characteristics | Question: Why does a Zener diode have a very thin depletion region and a high junction voltage? I know that it is heavily doped but can't relate the two. Please help.
Answer: A Zener diode is heavily doped, which means the concentration of impurities, and hence of carriers, is very high. At the junction, carriers initially diffuse from higher to lower concentration and leave ions behind; since more ions are formed per unit length (their concentration is high), the depletion layer is very thin. This thin depletion layer builds up an electric field strong enough to oppose the carriers, such that at equilibrium the diffusion current equals the field current (making the net current zero). The junction voltage is high for the same reason, and it must be so: since there are more carriers on both sides, the opposing (junction) potential must be high to effectively hinder them.
Why do Zener diodes need such high potentials?
This high potential, together with the thin depletion layer, produces a very strong electric field. When the diode is reverse biased, the field increases further, until electrons bound to atoms are pulled along the field direction, causing a large field current. This current can then increase greatly without any significant increase in reverse bias; this property of the Zener diode is exploited in voltage regulators, since the diode can carry large currents at nearly the same potential.
Care must be taken that the current does not increase beyond the maximum value written on the diode, because the intense heat will damage the diode and render it unusable. | {
"domain": "physics.stackexchange",
"id": 33985,
"tags": "semiconductor-physics"
} |
Forces in play on a ball at the exact point where the slope meets the floor. Related to normal force in vertical circular motion. (Answered) | Question: I am grateful for any help offered:
Ignoring friction (and hence rolling), what are the forces in play on the ball in the middle frame?
I am attempting to mirror this diagram to make a half hexagonal shape. With this I should be able to visualise what would happen when the slope becomes more and more curved until it becomes a circle. This is an attempt to explain to myself how the normal force at point C exists (see below) in the first place.
"A normal force will always be only as much as is needed to prevent the two object from occupying the same space." Since there is no force directed at the track at the exact point C, there should be nothing making them occupy the same space, and thus no force. I know that circular motion means there must be some centripetal force, but there is no force for the track to react to, so it simply cannot push the ball!
Some answers on the internet mention the centrifugal force; however, all my teachers have strongly instructed me to ignore any mention of the centrifugal force. Instead, my teacher's response to this question is that the ball is in fact exerting a kind of outward force, since its momentum is tangential and the track is attempting to change its momentum.
Investigating his response led me to draw the above diagram to explain it to myself and to this question.
[EDIT] Additionally, this dialogue below may help anyone with a similar query regarding how the track could "know" that in the next instant the ball will hit the next molecule of the track, and that it needs to act pre-emptively and exert a normal force to prevent this.
Answer: "A normal force will always be only as much as is needed to prevent the two object from occupying the same space."
That is true. But that does not mean that $N=mg$ always. Let us consider a hypothetical situation when the normal force is not acting (as you have assumed). Then, since no force is acting, the velocity will remain constant and tangential to the circular track.
This means that the very next instant, the object collapses into the track, which is precisely what should not happen. So, the only plausible conclusion is that the normal force is pushing the ball inward towards the centre to prevent the ball from going into the track.
So, there is no other force, but just the normal force $N$, where,
$$N= \frac{mv^2}{R}$$
As you can see from the hexagon you have drawn (which is a highly effective way to analyse a problem involving circles, famously used by Archimedes), there are two points of contact. The sum of the normal forces will give a resultant that acts perpendicular to the tangent drawn at the point of contact, as in the illustration below | {
"domain": "physics.stackexchange",
"id": 67864,
"tags": "newtonian-mechanics, centripetal-force"
} |
How do I find the relevant features out of 11,000+ possibilities? | Question: While working on a Kaggle competition, I ended up with 11,726 columns, which are mostly "dummies" (one-hot encoding). Is this too many?
I know that we need to find out which features are relevant, but not sure how to do this.
Answer: Your solution will depend on a couple of factors. One is what type of model you are using. If you are using something that automatically calculates feature importances then you could simply look at these (or take a more balanced look using permutation importances).
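The permutation-importance idea mentioned above can be sketched in pure Python (real projects would normally use a library implementation such as scikit-learn's, so the toy model here is only an assumption for illustration): shuffle one column at a time and measure how much the score drops.

```python
import random

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Importance of each feature = average score drop when its column is shuffled."""
    rng = random.Random(seed)
    baseline = metric(y, [model(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)                  # break the feature/target link
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(baseline - metric(y, [model(row) for row in X_perm]))
        importances.append(sum(drops) / n_repeats)
    return importances

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy demo: the target depends only on feature 0, so shuffling feature 0
# should hurt the score while shuffling feature 1 changes nothing.
X = [[i % 2, (i * 7) % 5] for i in range(100)]
y = [row[0] for row in X]
model = lambda row: row[0]                    # stand-in for a trained model
imp = permutation_importance(model, X, y, accuracy, seed=1)
```

Features whose importance stays near zero across repeats are good candidates for dropping, which is one way to shrink 11,000 columns to something manageable.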
While you could look at feature importances, with ~11,000 possibilities this is going to be pretty difficult. The main focus should be to cut these features down to something more manageable; do you really need one-hot encoding? Without knowing more about the dataset I can't provide much more advice. | {
"domain": "datascience.stackexchange",
"id": 4628,
"tags": "neural-network, deep-learning, keras, data-cleaning"
} |
audio_common "Perhaps you forgot to run soundplay_node.py" | Question:
Hi Everyone
I am new to ROS and cpp so please be patient ;)
I work with Ubuntu 18.04.4 LTS and ROS Melodic.
I installed audio_common with the command sudo apt-get install ros-melodic-audio-common, according to the linked page
After some hours of trying, I created a completely new package just for this function, to eliminate other factors; at the moment it looks pretty simple.
node code:
#include "ros/ros.h"
#include <sound_play/sound_play.h>
int main(int argc, char **argv) {
ros::init(argc,argv,"audio_common_tutorial");
ros::NodeHandle n;
sound_play::SoundClient sc;
sc.playWaveFromPkg("sound_play", "sounds/BACKINGUP.ogg");
sc.playWave( "/home/ntuser/catkin_ws/src/audio_common_tutorial/doc/file.wav");
ros::spin();
return 0;
}
I am starting the nodes with launch file:
<launch>
<include file="$(find sound_play)/soundplay_node.launch"> </include>
<node pkg="order_receiver" type="order_receiver" name="order_receiver" output="screen">
<rosparam command="load" file="$(find order_receiver)/config/default.yaml" />
</node>
</launch>
Both sound commands failed:
[ WARN] [1587131813.785943539]: Sound command issued, but no node is subscribed to the topic. Perhaps you forgot to run soundplay_node.py
even though the node is listed after launching and by the command rosnode list.
The package WORKS fine from the command line, e.g. rosrun sound_play play.py "path" or roslaunch sound_play test.launch.
Any ideas and recommendation?
regards
Originally posted by eliza on ROS Answers with karma: 16 on 2020-04-17
Post score: 0
Answer:
Hi everyone
I solved it.
It looks like I need to give the SoundClient some time to initialize.
After I implemented a delay before issuing a sound, everything works.
The code look as follow:
#include "ros/ros.h"
#include <sound_play/sound_play.h>
int main(int argc, char **argv) {
ros::init(argc,argv,"audio_common_tutorial");
ros::NodeHandle n;
sound_play::SoundClient sc;
ros::Duration(1, 0).sleep(); // give the SoundClient's publisher time to register
sc.playWaveFromPkg("sound_play", "sounds/BACKINGUP.ogg");
sc.playWave( "/home/ntuser/catkin_ws/src/audio_common_tutorial/doc/file.wav");
ros::spin();
return 0;
}
If anyone has a smart comment on why this is so, I am happy to hear it ;)
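One plausible explanation (an assumption on my part, not verified against the sound_play source): roscpp publisher registration is asynchronous, so a message published immediately after constructing the SoundClient can be dropped before soundplay_node has finished subscribing. Instead of a fixed sleep, you could publish to the underlying topic yourself and block until a subscriber appears; treat the topic name robotsound and the message type as assumptions to check against your installation:

```cpp
#include <ros/ros.h>
#include <sound_play/SoundRequest.h>

// Sketch: block until soundplay_node has actually subscribed,
// instead of sleeping for a fixed duration and hoping it is ready.
void waitForSoundplay(ros::NodeHandle& n)
{
    ros::Publisher pub = n.advertise<sound_play::SoundRequest>("robotsound", 1);
    ros::Rate rate(10);
    while (ros::ok() && pub.getNumSubscribers() == 0) {
        rate.sleep();  // soundplay_node has not connected yet
    }
}
```

This only shows that soundplay_node is up; the SoundClient's own publisher still needs a moment to connect, so in practice a short sleep after this check (or publishing the SoundRequest directly through pub) may still be needed.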
regards
Originally posted by eliza with karma: 16 on 2020-04-20
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 34784,
"tags": "ros, ros-melodic, audio-common"
} |
Battleship type game in Java | Question: I made the code, and it works, but it uses many if conditions and it looks ugly. I would really appreciate it if someone could give me a hand by pointing me in the right direction to make it more object-oriented.
package ar.edu.uca.ceis.objetos.imperio;
import ar.uba.fi.algo3.batallaespacial.CabinaDeControl;
import ar.uba.fi.algo3.batallaespacial.Civilizacion;
import ar.uba.fi.algo3.batallaespacial.Piloto;
import ar.uba.fi.algo3.batallaespacial.Reporte.Espectro;
import ar.uba.fi.algo3.batallaespacial.Sustancia;
import ar.uba.fi.algo3.batallaespacial.comandos.Comando;
import ar.uba.fi.algo3.batallaespacial.Direccion;
public class PilotoImperial implements Piloto {
private CabinaDeControl cabina;
private Imperio civilizacion;
public PilotoImperial(Imperio civilizacion) {
super();
this.civilizacion = civilizacion;
}
public void setCabinaDeControl(CabinaDeControl cabina) {
this.cabina = cabina;
}
public Comando proximoComando() {
Direccion[] values = Direccion.values();
// for (int i = 1; i < values.length; i++) {
int i;
i = (int)Math.round(Math.random() *values.length-1) ;
//if border of an unknow position, go to a random place
if (Espectro.DESCONOCIDO == this.cabina.getRadar().getReporte( values[i] ).getEspectro()){
// generaremos numeros aleatorios entre 1 y values.length
int c = (int)Math.round(Math.random() *values.length) ;
if (Espectro.VACIO == this.cabina.getRadar().getReporte( values[c] ).getEspectro()){
return this.cabina.getControl().avanzar( values[c] );
}
}
// i found and enemy base, attack
if (Espectro.BASE == this.cabina.getRadar().getReporte( values[i] ).getEspectro()) {
if (this.cabina.getRadar().getReporte( values[i] ).getCivilizacion()!=this.civilizacion){
return this.cabina.getControl().atacar(values[i]);
}
}
// if found and enemy, attack
if (Espectro.NAVE == this.cabina.getRadar().getReporte( values[i] ).getEspectro()) {
if (this.cabina.getRadar().getReporte( values[i] ).getCivilizacion()!=this.civilizacion){
return this.cabina.getControl().atacar(values[i]);
}
}
// attack asteroid
if (Espectro.ASTEROIDE == this.cabina.getRadar().getReporte( values[i] ).getEspectro()) {
return this.cabina.getControl().atacar(values[i]);
}
// if found a container, upload materia 100
if (Espectro.CONTENEDOR == this.cabina.getRadar().getReporte( values[i] ).getEspectro()) {
while (this.cabina.getRadar().getReporte( values[i] ).getCantidadDeSustancia(Sustancia.ANTIMATERIA)>0){
return this.cabina.getControl().transferirCarga(values[i], Direccion.ORIGEN, Sustancia.ANTIMATERIA, 100);
}
}
// if found a ally base, download anti-materia
if (Espectro.BASE == this.cabina.getRadar().getReporte( values[i] ).getEspectro()) {
while (this.cabina.getMonitor().getCarga(Sustancia.ANTIMATERIA)>0){
return this.cabina.getControl().transferirCarga(Direccion.ORIGEN,values[i], Sustancia.ANTIMATERIA, 100);
}
}
// if there is nothing, move to a position
int ii = (int)Math.round(Math.random() *values.length-1) ;
if (Espectro.VACIO == this.cabina.getRadar().getReporte( values[ii] ).getEspectro()){
return this.cabina.getControl().avanzar( values[ii] );
}
//}
// found none wait
return this.cabina.getControl().esperar();//.avanzar();
}
public Civilizacion getCivilizacion() {
return this.civilizacion;
}
public String getNombre() {
return "Piloto Imperial";
}
}
Considering this is a framework, I have only shown you the class I'm changing; the rest works fine. This is homework, so no need for super fancy code. Last but not least, if you could help me use a pattern, that would be amazing.
Answer: Your Professor has assigned you code that fails to meet some basic tenets of object-oriented programming, including:
Information Hiding
The Law of Demeter
The Open-Closed Principle
The DRY Princple
Effectively, your Professor is teaching you how to program with structures, not objects, using Java.
For example, here's one problem with breaking the Law of Demeter:
if (Espectro.VACIO == this.cabina.getRadar().getReporte( values[ii] ).getEspectro()){
If any one of the following are null, the code will throw a NullPointerException:
this.cabina
getRadar()
getReporte( ... )
There are no try/catch blocks, nor does the method declare any exceptions, so the program will crash. A program that crashes can be quite frustrating for the users.
The line should be either of the following:
if( getCabina().isEspectro( Espectro.VACIO ) ) {
if( isEspectro( Espectro.VACIO ) ) {
In the second case, the method isEspectro would resemble:
public boolean isEspectro( Espectro e ) {
return getCabina().isEspectro( e );
}
This is called delegation. It avoids the cascading dot notation that is the source of so many bugs. See this answer for more details about information hiding.
The method getCabina() would be wholly responsible for ensuring that the CabinaDeControl instance is set. Otherwise how do you guarantee that this.cabina is not null (without using a framework such as Spring, which supports inversion of control)? For example:
private synchronized CabinaDeControl getCabina() {
if( this.cabina == null ) {
this.cabina = createCabina();
}
return this.cabina;
}
/** Subclasses can vary the CabinaDeControl instances used by this class. */
protected CabinaDeControl createCabina() {
return new CabinaDeControl(); // ... or whatever is required to instantiate.
}
This is called lazy initialization. Importantly, the pattern follows the Open-Closed Principle whereby you can change the behaviour of a class by overriding the createCabina() method inside a subclass. You don't have to change the original class to change its behaviour. That prevents introducing widely-scoped bugs (the new subclass can still introduce bugs, but the ripple effect should be less severe than by changing the original class).
Using this.cabina.method() violates the DRY Principle because this.cabina is repeated several times. Programming means eliminating duplicate code and the reasons are many.
Each occurrence of this.cabina.method() can throw a NullPointerException because there is no check to ensure cabina is not null. That should leave you with an uneasy feeling. Replace this.cabina with getCabina() (as I have implemented above) and that uneasy feeling should go away. It does not mean the code will be correct, but at least the program won't crash if cabina is ever set to null. | {
"domain": "codereview.stackexchange",
"id": 3941,
"tags": "java, game, homework"
} |
How can I fuse the gyroscope and accelerometer when the accelerometer gives only roll and pitch values? | Question: I'm currently studying an IMU sensor with 6 DoF (gyro and accelerometer).
My goal is to get the orientation of a robot, and I found out that a 6-DoF IMU alone would not
give a correct orientation, due to noise and gyroscope yaw drift (also roll and pitch).
But I just want to try it, errors and all. (Working on ROS.)
So I managed to get data from my IMU sensor: the gyroscope's R, P, Y and the accelerometer's R, P.
The problem was that I don't know how to get the quaternion that some other library (package) expects,
so I just used the gyro's yaw value and the accelerometer's roll and pitch values to build a
quaternion.
Other fusion algorithms such as the EKF or the complementary filter ask for linear_acc, angular_v, and a quaternion. But I don't feel like my way of getting the quaternion is the right one.
So, am I doing something wrong?
How can I mix (or use) the gyroscope's roll, pitch, yaw data and the accelerometer's roll, pitch data to get the quaternion that the fusion algorithms ask for?
some information :
I'm using a Microstrain 3DM-CX5-15 with ROS, and I managed to publish sensor_msgs/Imu
(with a not quite correct quaternion).
thx for the replies, any insight will help me
sorry for my bad english.
Answer: First you have to understand that the gyroscope gives angular rates around the X, Y, Z axes and the accelerometer gives linear accelerations in the X, Y, Z directions. Neither of them gives Roll, Pitch and Yaw values directly. But you can calculate Roll and Pitch values using the accelerometer readings.
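That accelerometer-to-roll/pitch step, plus the Euler-to-quaternion conversion that fusion packages expect, can be sketched as follows (sign and axis conventions vary between IMUs, so treat these formulas as assumptions to check against your sensor's datasheet):

```python
import math

def accel_to_roll_pitch(ax, ay, az):
    """Roll and pitch (radians) from a static accelerometer reading.
    Assumes the only measured acceleration is gravity (sensor at rest)."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return roll, pitch

def euler_to_quaternion(roll, pitch, yaw):
    """Quaternion (w, x, y, z) from Z-Y-X (yaw-pitch-roll) Euler angles."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (cr * cp * cy + sr * sp * sy,
            sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy)

# Sensor lying flat: gravity entirely on z, so zero roll/pitch and an
# identity quaternion.
r, p = accel_to_roll_pitch(0.0, 0.0, 9.81)
q = euler_to_quaternion(r, p, 0.0)
```

The yaw fed into euler_to_quaternion would come from integrating the gyro, and it will drift without a magnetometer, as noted above.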
For your question: in order to calculate the quaternion you need a 9-DoF IMU, to feed data to both the prediction and the update steps of the Kalman filter. I suggest you follow the Probabilistic Robotics book to get a better idea of this.
Using a 6 DoF IMU, you can calculate Roll and Pitch using a Kalman Filter. But the gyro provides data only for the prediction step when calculating the yaw. An external magnetometer or a 9 DoF IMU can provide the absolute yaw for the update step. | {
"domain": "robotics.stackexchange",
"id": 2105,
"tags": "localization, kalman-filter, imu, gyroscope, accelerometer"
} |
Single method string formatter | Question: Because I don't want to download an entire library for a single function (which, I do understand, was made better than mine), I decided to implement my own string-formatting function.
I am not very proud of it, as I find it pretty ugly and unreadable.
This is only a string formatter with string-only arguments.
Efficiency is not a concern for me.
#include<iostream>
#include<unordered_map>
#include<string>
using std::string;
string formatString(string format, const std::unordered_map<string, string>& args) {
string ret;
string::size_type bracketLoc;
while((bracketLoc = format.find_first_of('{')) != string::npos) {
// Handling the escape character.
if(bracketLoc > 0 && format[bracketLoc - 1] == '\\') {
ret += format.substr(0, bracketLoc + 1);
format = format.substr(bracketLoc + 1);
continue;
}
ret += format.substr(0, bracketLoc);
format = format.substr(bracketLoc + 1);
bracketLoc = format.find_first_of('}');
string arg = format.substr(0, bracketLoc);
format = format.substr(bracketLoc + 1);
auto it = args.find(arg);
if(it == args.end()) {
ret += "(nil)";
} else {
ret += it->second;
}
}
ret += format;
return ret;
}
int main() {
std::cout << formatString("Hello, {Name}! {WeatherType} weather, right?", {
{"Name", "Midnightas"},
{"Fruit", "Apple"}
});
return 0;
}
Compile with -std=c++11.
The above program will output Hello, Midnightas! (nil) weather, right?.
Answer:
You have a problem if you want to have the escape-character immediately before a replacement, because you don't support escaping the escape-character.
I suggest dispensing with escape-characters, and simply replace an empty replacement {} with an opening brace {.
There's no reason to modify the format string at all, and thus no reason to receive it by copy. Doing so is supremely inefficient. Well, using named arguments at all already is, so it might not matter too much.
Do you know the ternary operator cond ? true_exp : false_exp? Using it would simplify things.
Avoid allocations, thus avoid std::string. At least the format string should be C++17 std::string_view, maybe also the map should be too.
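Putting the string_view advice and the earlier "{} means a literal brace" suggestion together, here is one hedged C++17 sketch (an illustration of the reviewer's points, not a drop-in replacement):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <string_view>
#include <unordered_map>

std::string formatString(std::string_view format,
                         const std::unordered_map<std::string, std::string>& args) {
    std::string out;
    std::size_t pos = 0;
    while (pos < format.size()) {
        const std::size_t open = format.find('{', pos);
        if (open == std::string_view::npos) {
            out.append(format.substr(pos));          // no more placeholders
            break;
        }
        out.append(format.substr(pos, open - pos));
        const std::size_t close = format.find('}', open);
        if (close == std::string_view::npos)
            break;                                   // unterminated placeholder: stop
        const std::string key(format.substr(open + 1, close - open - 1));
        if (key.empty()) {
            out += '{';                              // "{}" stands for a literal '{'
        } else {
            const auto it = args.find(key);
            out += (it == args.end()) ? std::string("(nil)") : it->second;
        }
        pos = close + 1;
    }
    return out;
}
```

Walking an index instead of calling substr on a std::string also removes the repeated allocations noted above.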
If you template it, you don't even really have to decide for the caller:
template <class Args = std::unordered_map<std::string_view, std::string_view>>
std::string formatString(std::string_view format, const Args& args) | {
"domain": "codereview.stackexchange",
"id": 30773,
"tags": "c++, formatting"
} |
Show linear bounded automata accepting w is PSPACE-complete | Question: ALBA = {⟨M;w⟩ | M is a linear bounded automaton which accepts input w}
Show that ALBA is PSPACE-complete.
How I would try to solve it...
We need to prove that ALBA belongs to PSPACE. So I would construct a TM A which, given ⟨M;w⟩, simulates M for at most $q \cdot n \cdot g^n$ steps (where $q$ is the number of states of M and $g$ its tape alphabet size) or until it halts. If it halts: accept ⟨M;w⟩ if M accepts w, otherwise reject (and if the step bound is exceeded, M must be looping, so reject).
What would the space complexity be? I would say it needs $n$ (the size of the tape), so the space complexity should be $O(n)$, and then ALBA belongs to PSPACE. Is that correct?
We need to do a reduction from TQBF to ALBA to prove that ALBA is PSPACE-hard, but I have no idea how to do it. How should this reduction be done? The TQBF instance must be true iff the LBA accepts w, I guess, but I don't know how to show the proof...
Answer: Savitch's theorem shows that PSPACE=NPSPACE, which immediately implies that ALBA is in PSPACE.
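The step bound in the question's simulation can be justified by counting configurations: an LBA on an input of length $n$ has at most

```latex
\underbrace{|Q|}_{\text{state}} \;\cdot\; \underbrace{n}_{\text{head position}} \;\cdot\; \underbrace{|\Gamma|^{\,n}}_{\text{tape contents}}
```

distinct configurations ($Q$ the state set, $\Gamma$ the tape alphabet). If $M$ runs longer than that, some configuration repeats, so $M$ is looping and will never accept; the simulator may therefore safely reject, and a counter up to this bound fits in $O(n)$ space.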
In the other direction, suppose that $L$ is a language in PSPACE. Thus $L$ is accepted by some machine that uses space $p(n)$. We can construct another machine $M'$ which, on an input $(x,y)$, erases $y$ and simulates the machine for $L$ on $x$; on inputs of the form $(x,0^{p(|x|)})$ it runs within its input length, so it is a linear bounded automaton. We can then reduce $L$ to ALBA by mapping $x$ to $\langle M'; (x,0^{p(|x|)})\rangle$. | {
"domain": "cs.stackexchange",
"id": 17348,
"tags": "complexity-theory, turing-machines, reductions, space-complexity"
} |
Why do we restrict the maximal supercharge to 32? | Question: Many supersymmetry textbooks state that the maximal supersymmetry in any dimension has 32 hermitian supercharges. (Actually, for the lowest number of supersymmetries, $N=1$, the highest dimension is $D=11$.)
I want to know why the maximal number of supercharges is restricted to $32$.
Answer: Because you want the maximal spin allowed to be 2 (since there is no non-trivially interacting higher-spin field theory). Each supercharge shifts the helicity by $1/2$, so a multiplet built with $N$ supersymmetries spans a helicity range of $N/2$; keeping all helicities between $-2$ and $+2$ (a range of 4) allows at most $N=8$ in $d=4$, which is the maximal supersymmetry admitted and means $32$ real supercharges. | {
"domain": "physics.stackexchange",
"id": 20356,
"tags": "special-relativity, quantum-spin, supersymmetry, representation-theory"
} |
PMNS matrix versus CKM matrix -- how precise is the analogy? | Question: The PMNS matrix is said to be for the neutrinos what the CKM matrix is for the quarks.
See https://en.wikipedia.org/wiki/Pontecorvo–Maki–Nakagawa–Sakata_matrix#The_PMNS_matrix
However, I am confused about why this is true.
The PMNS matrix $M_{PMNS}$ is the matrix changing between the neutrino flavor eigenstates and the neutrino mass eigenstates:
$$
\begin{bmatrix} {\nu_e} \\ {\nu_\mu} \\ {\nu_\tau} \end{bmatrix}
= \begin{bmatrix} U_{e 1} & U_{e 2} & U_{e 3} \\ U_{\mu 1} & U_{\mu 2} & U_{\mu 3} \\ U_{\tau 1} & U_{\tau 2} & U_{\tau 3} \end{bmatrix} \begin{bmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \end{bmatrix}
= M_{PMNS} \begin{bmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \end{bmatrix}
$$
However, the CKM matrix is obtained from (see p.723 of Peskin QFT)
$$
V_{CKM} =U_u^\dagger U_d
$$
where
$U_u$ is a unitary rotation matrix acting on the $u,c,t$ flavors.
$U_d$ is a unitary rotation matrix acting on the $d,s,b$ flavors.
The $U_u$ and $U_d$ are obtained by diagonalizing the Higgs Yukawa terms, bringing the mass matrices to diagonal form in the mass eigenstate basis. The $V_{CKM}$ is the weak charged current coupling to the $W$ bosons, with flavor-changing processes.
So
$$
V_{CKM}
= \begin{bmatrix} V_{ud} & V_{us} & V_{ub} \\V_{cd} & V_{cs} & V_{cb} \\
V_{td} & V_{ts} & V_{tb} \end{bmatrix}
$$
Question
So how is $M_{PMNS}$ for neutrinos the analogue of $V_{CKM}$ for quarks?
I thought the correct analogue for the lepton sector (playing the role $V_{CKM}$ plays for quarks) would be a matrix of the form
$$
\begin{bmatrix} V_{\nu_e e} & V_{\nu_e \mu} & V_{\nu_e \tau} \\ V_{\nu_{\mu} e} & V_{\nu_{\mu} \mu} & V_{\nu_{\mu} \tau} \\
V_{\nu_{\tau} e} & V_{\nu_{\tau} \mu} & V_{\nu_{\tau} \tau} \end{bmatrix} ?
$$
Not the $M_{PMNS}$.
True or false?
Answer:
$$
\begin{bmatrix} V_{\nu_e e} & V_{\nu_e \mu} & V_{\nu_e \tau} \\ V_{\nu_{\mu} e} & V_{\nu_{\mu} \mu} & V_{\nu_{\mu} \tau} \\
V_{\nu_{\tau} e} & V_{\nu_{\tau} \mu} & V_{\nu_{\tau} \tau} \end{bmatrix} ?
$$
Not the $M_{PMNS}$.
True or false?
True. Not the $M_{PMNS}$. Indeed, by definition, the oxymoronic off-diagonal elements of the matrix you wrote vanish: we define the e-neutrino to be precisely that linear combination of the three neutrino mass eigenstates which weak-couples to the electron; and analogously for the other two leptons.
Analogously, for quarks, the up quark couples to a linear combination of downlike quark eigenstates, namely $V_{ud} d+V_{us} s+V_{ub} b$. If we utilized the same aggressively confusing "flavor eigenstate" language, we'd call this combination something like $D_u$, clearly stupid, since, for quarks, flavor is defined by mass eigenstates.
So the "real", mass eigenstate particles appearing in the SM lepton sector are
$e,\mu,\tau; \nu_L, \nu_M, \nu_H $, (Lightest, Middle, Heaviest; most probably 1,2,3, in the most likely, normal, hierarchy alternative). Mercifully, the SM wall charts confusing generations have now been corrected to reflect this.
The PMNS matrix is a very close analog to the CKM matrix, indeed, and it is only historical usage (ritual misuse) of the slippery term "flavor" that victimizes students with devilish glee. For quarks, flavor indicates the mass of the particle, hence aligns with generation; whereas for neutrinos, it indicates the charged lepton the state couples to, and straddles generations, except in the older, misguided charts. | {
"domain": "physics.stackexchange",
"id": 74667,
"tags": "particle-physics, definition, neutrinos, beyond-the-standard-model, cp-violation"
} |
ROS fuerte rosmake -std=c++11 compile error | Question:
Hi, I was trying to compile a node that uses a library that needs c++11 flag (It uses std::function and nullptr among other features).
Here is the output log:
https://www.dropbox.com/s/mr8gltnrdn162xi/build_output.log?dl=0
But summarizing:
from /usr/include/c++/4.8/bits/stl_algo.h:62,
from /usr/include/c++/4.8/algorithm:62,
from /usr/include/boost/math/tools/config.hpp:16,
from /usr/include/boost/math/special_functions/round.hpp:13,
from /opt/ros/fuerte/include/ros/time.h:58,
from /opt/ros/fuerte/include/ros/ros.h:38,
from /home/bardo91/programming/EC-SAFEMOBIL/ros/PatrollingCV/uav_vision/src/uav_vision_node.cpp:9:
/usr/include/c++/4.8/bits/stl_construct.h: In instantiation of ‘void std::_Construct(_T1*, _Args&& ...) [with _T1 = actionlib_msgs::GoalStatus_<std::allocator<void> >; _Args = {actionlib_msgs::GoalStatus_<std::allocator<void> >&}]’:
/usr/include/c++/4.8/bits/stl_uninitialized.h:75:53: required from ‘static _ForwardIterator std::__uninitialized_copy<_TrivialValueTypes>::__uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = actionlib_msgs::GoalStatus_<std::allocator<void> >*; _ForwardIterator = actionlib_msgs::GoalStatus_<std::allocator<void> >*; bool _TrivialValueTypes = false]’
/usr/include/c++/4.8/bits/stl_uninitialized.h:117:41: required from ‘_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = actionlib_msgs::GoalStatus_<std::allocator<void> >*; _ForwardIterator = actionlib_msgs::GoalStatus_<std::allocator<void> >*]’
/usr/include/c++/4.8/bits/stl_uninitialized.h:258:63: required from ‘_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, std::allocator<_Tp>&) [with _InputIterator = actionlib_msgs::GoalStatus_<std::allocator<void> >*; _ForwardIterator = actionlib_msgs::GoalStatus_<std::allocator<void> >*; _Tp = actionlib_msgs::GoalStatus_<std::allocator<void> >]’
And so on... It's not only for this library; I would like to use the new features of C++ in general, but I don't know what to do to fix it.
Can anybody help me?
Thank in advance
Originally posted by Bardo91 on ROS Answers with karma: 30 on 2015-02-18
Post score: 1
Answer:
ROS Fuerte does not fully support C++11. Moving to ROS Indigo fixed the problem.
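For reference, on the rosbuild side the flag itself is normally added in the package's CMakeLists.txt; this alone did not help here, since Fuerte-era headers clash with C++11, but it is the usual first step (treat the exact flag spelling as an assumption for your compiler version):

```cmake
# Enable C++11 (GCC's older spelling) for every target in this package.
add_definitions(-std=c++0x)
```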
Originally posted by Bardo91 with karma: 30 on 2015-02-24
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 20922,
"tags": "ros, rosmake, c++11"
} |
Program for face recognition | Question: I have been using the following script for face recognition as a security feature:
main.py
from itertools import izip
from PIL import Image
def compare(self,pic):
i1 = Image.open("pic1.jpg")
i2 = Image.open(pic)
size = i1.size
i2 = i2.resize(size)
assert i1.mode == i2.mode, "Different kinds of images."
assert i1.size == i2.size, "Different sizes."
pairs = izip(i1.getdata(), i2.getdata())
if len(i1.getbands()) == 1:
# for gray-scale jpegs
dif = sum(abs(p1-p2) for p1,p2 in pairs)
else:
dif = sum(abs(c1-c2) for p1,p2 in pairs for c1,c2 in zip(p1,p2))
ncomponents = i1.size[0] * i1.size[1] * 3
print ((dif / 255.0 * 100) / ncomponents)
return (dif / 255.0 * 100) / ncomponents
def password(self):
import passw
y=str(self.password_input.text)
x=passw.verify(y)
if x == True:
try:
os.startfile("recog.py")
import time
time.sleep(6)
x = self.compare("security.jpg")
if x <= 30:
os.remove("pic1.jpg")
os.rename("security.jpg","pic1.jpg")
return True
else:
raise ValueError
except ValueError, e:
print(e)
sf=open('secure.txt','w')
sf.write("")
sf.close()
if os.path.exists("securitylog.jpg"):
from datetime import datetime, date, time
dt = datetime.today()
t=dt.strftime("%A, %d. %B %Y %H:%M %p")
t='{0:%H:%M }'.format(dt)
f='{0:%A}, {0:%d} of {0:%B}, {0:%Y}.'.format(dt)
timed=(("Date: ",f,"\nTime: ",t))
timed=''.join(timed)
sf=open('secure.txt','a')
sf.write(timed)
sf.close()
kl=Thread(target=self.say, args="You are not authorized to access this program.")
kl.start()
p = Wrong()
p.open()
else:
os.rename("security.jpg","securitylog.jpg")
else:
sf=open('secure.txt','w')
sf.write("")
sf.close()
if os.path.exists("securitylog.jpg"):
from datetime import datetime, date, time
dt = datetime.today()
t=dt.strftime("%A, %d. %B %Y %H:%M %p")
t='{0:%H:%M }'.format(dt)
f='{0:%A}, {0:%d} of {0:%B}, {0:%Y}.'.format(dt)
timed=(("Date: ",f,"\nTime: ",t))
timed=''.join(timed)
sf=open('secure.txt','a')
sf.write(timed)
sf.close()
kl=Thread(target=self.say, args="You are not authorized to access this program.")
kl.start()
p = Wrong()
p.open()
else:
os.rename("security.jpg","securitylog.jpg")
passw.py:
from passlib.hash import sha256_crypt
import onetimepad
def verify(string):
y=string
key=y[-2:]
enc=onetimepad.encrypt(y,key)
x=sha256_crypt.verify(enc,'<hash>')
return x
def main():
x=raw_input("Enter Password: ")
print verify(x)
if __name__ =="__main__":
main()
recog.py
#!/usr/bin/env python
# Python 2/3 compatibility
from __future__ import print_function
import sys
import numpy as np
import cv2
# local modules
from video import create_capture
from common import clock, draw_str
def detect(img, cascade):
    rects = cascade.detectMultiScale(img, scaleFactor=1.3, minNeighbors=4, minSize=(30, 30),
                                     flags=cv2.CASCADE_SCALE_IMAGE)
    if len(rects) == 0:
        return []
    rects[:,2:] += rects[:,:2]
    return rects

def draw_rects(img, rects, color):
    for x1, y1, x2, y2 in rects:
        cv2.rectangle(img, (x1, y1), (x2, y2), color, 2)

if __name__ == '__main__':
    import sys, getopt
    print("Face Recognition using Haar Cascades")
    args, video_src = getopt.getopt(sys.argv[1:], '', ['cascade=', 'nested-cascade='])
    try:
        video_src = video_src[0]
    except:
        video_src = 0
    args = dict(args)
    cascade_fn = args.get('--cascade', "E:\\Python27\\My programs\\opencv\\sources\\data\\haarcascades\\haarcascade_frontalface_alt.xml")
    nested_fn = args.get('--nested-cascade', "E:\\Python27\\My programs\\opencv\\sources\\data\\haarcascades\\haarcascade_eye.xml")
    cascade = cv2.CascadeClassifier(cascade_fn)
    nested = cv2.CascadeClassifier(nested_fn)
    cam = create_capture(video_src, fallback='synth:bg=../data/lena.jpg:noise=0.05')
    while True:
        ret, img = cam.read()
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)
        t = clock()
        rects = detect(gray, cascade)
        vis = img.copy()
        draw_rects(vis, rects, (0, 255, 0))
        if not nested.empty():
            for x1, y1, x2, y2 in rects:
                roi = gray[y1:y2, x1:x2]
                vis_roi = vis[y1:y2, x1:x2]
                subrects = detect(roi.copy(), nested)
                print("Face detected")
                image = vis_roi
                cv2.imwrite('security.jpg', image)
                cv2.destroyWindow('facedetect')
                sys.exit()
        dt = clock() - t
        draw_str(vis, (20, 20), 'time: %.1f ms' % (dt*1000))
        cv2.imshow('facedetect', vis)
        if 0xFF & cv2.waitKey(5) == 27:
            break
    cv2.destroyAllWindows()
As you can see, I have provided all the code to ensure that my face recognition program works.
I feel that the above code is not very efficient. Is there a better way to use face recognition as a security measure using Python?
Answer: You should read and follow PEP8.
This means:
Put all imports at the top of the file.
Put a space around assignment operators.
Place a space after most commas.
Use descriptive names.
This is because it makes your code much easier to read.
We put spaces between words in English to increase readability,
and we do it in programming languages for the same reason.
You should:
Use with.
This makes sure the file is closed, even if the program errors. Say the user force quits the program, KeyboardInterrupt.
Don't use assert.
assert is for debugging, not for live code.
If I run your code with the -O flag, it removes all of these, and then your code is broken.
Allow more customizability.
Passing a couple of file names to functions is not hard.
What's harder is when you decide to change pic1.jpg to image.jpg and also use pic1.jpeg.
As written, you'd need to re-write your code to allow you to do this.
Don't have massive try blocks.
This can lead to masking of bugs.
This is because you may have, say, two functions that raise ValueErrors.
But you don't know which one raised ValueError, and now you're handling unknown states.
Again, put all imports at the top of your files.
This allows me to know what you are importing as soon as I open your file.
This prevents confusion, like you did in main.py where you never import os, but you're using it.
Remove duplicated code.
You need to put duplicated code in a new function or re-arrange your current one to be able to merge the duplicated code.
Take password in main.py: you duplicated a large chunk of the function, which you could merge together, or at least extract into a function of its own.
The biggest changes I know how to make are to main.py. Other than the above, I changed your code to use fewer variables, and removed dead code.
Take:
dt = datetime.today()
t=dt.strftime("%A, %d. %B %Y %H:%M %p")
t='{0:%H:%M }'.format(dt)
f='{0:%A}, {0:%d} of {0:%B}, {0:%Y}.'.format(dt)
timed=(("Date: ",f,"\nTime: ",t))
timed=''.join(timed)
The first t is dead code, you instantly overwrite t to something else, so you can remove it.
After this you have the two timed variables, you can change this to a single str.format, Date: {}\nTime: {}.
From this you should notice you can merge all the str.formats into one, and end up with:
timed = 'Date: {0:%A}, {0:%d} of {0:%B}, {0:%Y}.\nTime: {0:%H:%M }'.format(datetime.today())
I'll only show the changes to main.py, as the other files didn't have as much to change:
import os                       # os.startfile, os.remove, os.rename, os.path are used below
import time                     # time.sleep; note: importing datetime's `time` would shadow this
from datetime import datetime
from itertools import izip
from threading import Thread

from PIL import Image

import passw

def compare(self, picture1, picture2):
    image1 = Image.open(picture1)
    image2 = Image.open(picture2).resize(image1.size)
    if image1.mode != image2.mode:
        raise ValueError("Different kinds of images.")
    if image1.size != image2.size:
        raise ValueError("Different sizes.")
    pairs = izip(image1.getdata(), image2.getdata())
    if len(image1.getbands()) == 1:
        dif = sum(abs(p1-p2) for p1, p2 in pairs)
    else:
        dif = sum(abs(c1-c2) for p1, p2 in pairs for c1, c2 in zip(p1, p2))
    # Parenthesise the divisor; without the parentheses the size terms
    # would multiply the result instead of dividing it.
    ret = (dif / 255.0 * 100) / (image1.size[0] * image1.size[1] * 3)
    print ret
    return ret

def password(self):
    y = str(self.password_input.text)
    x = passw.verify(y)
    if x:
        try:
            os.startfile("recog.py")
            time.sleep(6)
            if self.compare("pic1.jpg", "security.jpg") <= 30:
                os.remove("pic1.jpg")
                os.rename("security.jpg", "pic1.jpg")
                return True
            else:
                raise ValueError
        except ValueError, e:
            print(e)
    # If not x or ValueError.
    with open('secure.txt', 'w') as sf:
        sf.write('')
    if os.path.exists("securitylog.jpg"):
        with open('secure.txt', 'a') as sf:
            sf.write('Date: {0:%A}, {0:%d} of {0:%B}, {0:%Y}.\nTime: {0:%H:%M }'
                     .format(datetime.today()))
        # Thread args must be a tuple; a bare string would be unpacked
        # character by character.
        kl = Thread(target=self.say, args=("You are not authorized to access this program.",))
        kl.start()
        p = Wrong()
        p.open()
    else:
        os.rename("security.jpg", "securitylog.jpg")
"domain": "codereview.stackexchange",
"id": 21849,
"tags": "python, image, opencv"
} |
When should I use diagnostic_aggregator? | Question:
Please help in writing up a ROS best practice.
Originally posted by mmwise on ROS Answers with karma: 8372 on 2011-11-07
Post score: 7
Answer:
Executive Summary
Use /diagnostics to publish hardware diagnostics data from every device driver.
Use diagnostics_aggregator to collect and group diagnostics data on any significant system.
Background: /diagnostics
To begin with, it is best practice to set up diagnostics on all robot hardware at a minimum. Most of the drivers included with ROS include some form of diagnostics messages. The ROS diagnostics toolchain is not a computation graph level concept (like parameters, nodes, or topics), but is instead built on top of the /diagnostics topic.
Hardware drivers publish to the /diagnostics topic a diagnostic_msgs/DiagnosticArray message, which contains a header (sequence number, timestamp, and frame_id) and an array of diagnostic_msgs/DiagnosticStatus messages.
The DiagnosticStatus contains:
byte level - One of three states (OK, WARN, ERROR), which represents the overall hardware health.
string name - The name of the device this DiagnosticStatus represents
string hardware_id - A unique hardware identifier, possibly a serial number or UUID
diagnostic_msgs/KeyValue[] values - An array of key/value pairs used to represent any additional pertinent information about the sensor. (For example "temperature":"35C", "frequency":"100Hz", "voltage":"24V")
Any node subscribing to this /diagnostics topic will receive the raw diagnostics messages (which can be overwhelming on a large system like the PR2).
To visualize raw diagnostics messages in ROS, you can currently use the runtime_monitor by simply running:
rosrun runtime_monitor monitor
More Background: The diagnostic_updater
The diagnostic_updater is not quite relevant to the aggregator, but is an often-overlooked tool. It provides convenience functions for working with the DiagnosticArray messages with your hardware drivers in C++.
With the diagnostic_updater libraries, you can create an object for interacting with DiagnosticArray messages, as well as monitoring frequency status and over/under limits for critical values in your hardware device (temperature, voltage, etc.).
This was mainly included in this write-up so that no one tries to reinvent what is already written.
The Diagnostic Aggregator
diagnostic_aggregator is a package for aggregating and analyzing diagnostics data.
Assuming that you have a working robotic system publishing raw diagnostic data to /diagnostics, you will see that the raw data accumulates quickly, and becomes cumbersome to actually sort through. For this reason, we use the diagnostic_aggregator. It allows us to group and sort data into namespaces (much like the ROS computational graph). It will also rate-limit the aggregated diagnostics output to ~pub_rate (typically 1 Hz).
From the wiki page, this can transform something like:
Left Wheel
Right Wheel
SICK Frequency
SICK Temperature
SICK Connection Status
Stereo Left Camera
Stereo Right Camera
Stereo Analysis
Stereo Connection Status
Battery 1 Level
Battery 2 Level
Battery 3 Level
Battery 4 Level
Voltage Status
Into something that is more readable, like:
My Robot/Wheels/Left
My Robot/Wheels/Right
My Robot/SICK/Frequency
My Robot/SICK/Temperature
My Robot/SICK/Connection Status
My Robot/Stereo/Left Camera
My Robot/Stereo/Right Camera
My Robot/Stereo/Analysis
My Robot/Stereo/Connection Status
My Robot/Power System/Battery 1 Level
My Robot/Power System/Battery 2 Level
My Robot/Power System/Battery 3 Level
My Robot/Power System/Battery 4 Level
My Robot/Power System/Voltage Status
Additionally, each group is given a level, which allows you to quickly see at-a-glance, where the errors are on your machine. ERROR and WARN propagates up the tree. For instance, an ERROR on Left propagates up to an ERROR on Wheels, and an ERROR on My Robot.
This can then be inspected using the robot_monitor tool.
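To make the grouping above concrete, an aggregator is normally configured with a YAML file of analyzers. The sketch below is hypothetical: the GenericAnalyzer type and its path/contains parameters follow the diagnostic_aggregator wiki, but verify the exact keys against your ROS version and the pr2_analyzers.yaml example linked in the resources below.

```yaml
# Hypothetical analyzers.yaml - verify keys against your diagnostic_aggregator version.
pub_rate: 1.0
base_path: 'My Robot'
analyzers:
  wheels:
    type: GenericAnalyzer
    path: Wheels
    contains: ['Wheel']
  power:
    type: GenericAnalyzer
    path: Power System
    contains: ['Battery', 'Voltage']
```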
Why Should I Use This
Using /diagnostics is best practice on a robotic system of any scale. It makes troubleshooting hardware (and software) easier in almost all cases.
Using /diagnostics_agg is good practice on any larger robotic system. It is also good practice on any sort of production system, as it allows more flexibility and clarity when looking at diagnostics data.
Additionally, if the system is already set up to use aggregated diagnostics, the user may choose to write additional analyzer plugins for their system, further customizing diagnostic analysis.
Helpful Resources
diagnostics
robot_monitor
runtime_monitor
microstrain_3dmgx2_imu - A node that publishes good diagnostics data
pr2_analyzers.yaml - A good example of setting up analyzers for use with diagnostic_aggregator
Originally posted by mjcarroll with karma: 6414 on 2012-03-13
This answer was ACCEPTED on the original site
Post score: 17
Original comments
Comment by 130s on 2012-10-31:
RFP 107 might be worth being added to the ref to get an overview of ROS' diagnostics concept: http://www.ros.org/reps/rep-0107.html
Comment by jbohren on 2013-07-02:
This has been added to ROS/Patterns here: http://www.ros.org/wiki/ROS/Patterns/RobotModelling#Runtime_Diagnostics | {
"domain": "robotics.stackexchange",
"id": 7222,
"tags": "ros, diagnostics, diagnostic-aggregator, best-practices"
} |
Sometimes when I publish to cmd_vel node got stuck | Question:
Hi!
I am working with the ardrone_autonomy package for the AR.Drone. I already have code for the control signal and it works great, except for the part where I try to publish the commands on the cmd_vel topic. Normally it publishes fast, which is what I need, but sometimes it kind of freezes for 1 or 2 seconds, then goes back to publishing normally. I don't know why that is happening, and it is very important for me to fix that delay, because in those 2 seconds the drone could crash.
Here is the relevant part of the code:
Inside the main:
ros::init(argc, argv, "tracker");
ros::NodeHandle nh;
ros::Rate loop_rate(50);
Inside the callback:
ros::NodeHandle neu;
ros::Publisher pub_empty_land;
ros::Publisher pub_point=neu.advertise<geometry_msgs::Point>("color_position",1);
ros::Publisher pub_control=neu.advertise<geometry_msgs::Twist>("/cmd_vel", 10);
twist_msg.linear.x=Control.at<float>(2);
twist_msg.linear.y=Control.at<float>(0);
twist_msg.linear.z=Control.at<float>(1);
pub_point.publish(posptav);
pub_control.publish(twist_msg);
cv::waitKey(15);
ros::spinOnce();
I would appreciate your help
Thanks!
Originally posted by erivera1802 on ROS Answers with karma: 59 on 2015-09-27
Post score: 0
Answer:
Publishers should be persistent; you shouldn't be creating a new publisher on each callback. This is because publisher setup can take a few seconds under poor conditions, and this can cause messages to be dropped before the publisher is completely set up.
Instead, you should create your publisher in main(), and the callback should only be interpreting the data and using the existing publisher to publish a new message.
You should NEVER call ros::spinOnce() from inside a callback; spinOnce is the call which processes message callbacks in the first place. Most ROS nodes call ros::spin() or ros::spinOnce() from main.
If you're trying for consistent performance, calling cv::waitKey from within your callback is also a bad idea; you don't want to do anything within your callback which could create a significant delay.
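The recommended restructuring can be sketched language-agnostically; here it is in Python (a toy stand-in, not rospy: the `Publisher` class below merely mimics the expensive-setup/cheap-publish behaviour of `ros::Publisher`).

```python
import time

class Publisher:
    """Toy stand-in for ros::Publisher: setup is slow, publishing is cheap."""
    def __init__(self, topic):
        time.sleep(0.01)              # pretend connection/handshake cost
        self.topic = topic
        self.sent = []

    def publish(self, msg):
        self.sent.append(msg)

# Create the publisher ONCE, at startup (the equivalent of doing it in main()).
pub = Publisher('/cmd_vel')

def callback(measurement):
    # The callback only interprets data and publishes on the existing publisher:
    # no construction, no sleeping, no spinning in here.
    pub.publish(measurement * 2.0)

for m in (0.1, 0.2, 0.3):             # simulate three incoming messages
    callback(m)
print(len(pub.sent))                  # all messages went through one publisher
```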
Originally posted by ahendrix with karma: 47576 on 2015-09-27
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 22718,
"tags": "ros, navigation, publisher, ardrone-autonomy, ardrone"
} |
Time series forecast for small data set | Question: I am new to data science so please accept my apology in advance if my question sounds stupid. I want to do a time series forecast of outage mins in the current regulatory year. The regulatory year starts on 1 April and ends on 30 March of the next year. I have data for around six months, i.e. from April to September. Outages do not occur every day, so I have only 144 data points (or days out of 171 days) where an outage occurred. I have plotted the data in the following graph. The graph shows the cumulative sum of outage mins.
Now I am trying to predict the values from October to March. I want to forecast what the cumulative outage mins would be by the end of March next year. I tried to use exponential smoothing but it did not work, perhaps because I don't have many observations. Then I was reading about ARIMA but I am not sure whether it's the right algorithm to use, as I don't think there would be any seasonality in this scenario and I also don't have many data points. Could anyone help with which algorithm I should use to forecast the value? I am using Python as the programming language. Any help would be really appreciated.
Answer: ARIMA could work; I think it's the right approach. It's simple enough to be used on a small dataset, but sufficiently flexible at the same time. If you are using Python, the statsmodels library allows you to implement ARIMA regressions. You have to grid-search the parameters to find the best fit, then run the prediction.
If you want to know how to do it, take a look here and here.
Alternatively, even simpler models could work correctly, such as a simple moving average (MA), or auto-regression (AR). But that's something you can find from the ARIMA grid search above. | {
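To illustrate the "even simpler models" option without extra dependencies, here is a minimal AR(1)-on-first-differences sketch in plain NumPy. The data is synthetic (a made-up daily outage series), so treat the numbers as illustration only; for a real fit you would use statsmodels' ARIMA with a grid-searched order, as described above.

```python
import numpy as np

rng = np.random.default_rng(0)
daily = rng.poisson(30, size=144).astype(float)   # synthetic daily outage minutes
y = np.cumsum(daily)                              # cumulative series, like the plotted graph

d = np.diff(y)                                    # difference once to remove the trend
dc = d - d.mean()
phi = np.dot(dc[:-1], dc[1:]) / np.dot(dc[:-1], dc[:-1])   # lag-1 AR coefficient

# Iterate the fitted AR(1) forward ~180 days (October to March) and re-accumulate.
steps, last, level = 180, d[-1], y[-1]
for _ in range(steps):
    last = d.mean() + phi * (last - d.mean())
    level += last

print(round(float(level), 1))   # projected cumulative outage minutes at end of March
```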
"domain": "datascience.stackexchange",
"id": 8351,
"tags": "time-series, forecasting, forecast, arima"
} |
Single cell organism's brain | Question: Multicellular organisms have brains. But what about single-celled organisms: do they have a brain to control the cell's work? If they have something, what is that part called? You could say that the nucleolus does that work,
but then all cells have a nucleolus. So why do multicellular organisms have brains?
Answer: Single cells do not have brains. Plenty of multicellular organisms do not have brains either. Multicellular organisms such as fungi, plants, and sponges do not even have nervous systems, and many organisms with nervous systems (like some jellyfish, molluscs, arthropods...) do not have something you could call a brain (I suppose arthropods have a brain in the sense that that is the name given to the ganglion in their head, but it's not always that much bigger than the other ganglia).
Organisms do not actually need a centralized control to function. You can do a lot in a multicellular organism simply with cellular signalling (each cells reacts to its environment in certain ways, sometimes emitting molecules that cause other cells to react in certain ways and so on), and single cells work similarly inside themselves, different parts of the cell "work together" by producing or consuming chemicals that others react to (or other physical processes).
Brains (i.e. a centralization of the nervous system) allow more complex behavior, to coordinate perceptions and actions in more precise and flexible ways.
This Wikipedia article on Chemotaxis for example describes how cells can move along a gradient of a useful or dangerous chemical, and gives some of the molecular mechanisms for this to happen:
https://en.wikipedia.org/wiki/Chemotaxis
Here is an article that seems to describe cell signalling in some detail:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1679905/ | {
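The run-and-tumble logic described in the chemotaxis article can be caricatured in a few lines: keep moving while the concentration improves, and pick a random new heading when it gets worse. This is a toy biased random walk, not a model of any real signalling pathway, and the peak position and step counts are invented.

```python
import random

random.seed(1)

def concentration(x):
    return -abs(x - 50.0)     # attractant peaks at x = 50

x, step, prev = 0.0, 1.0, None
for _ in range(500):
    c = concentration(x)
    if prev is not None and c < prev:
        # Things got worse since the last step: "tumble" to a random new heading.
        step = random.choice((-1.0, 1.0))
    prev = c
    x += step                 # otherwise keep "running" in the same direction

print(round(x))               # ends up hovering near the attractant peak at 50
```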
"domain": "biology.stackexchange",
"id": 10975,
"tags": "cell-biology, brain"
} |
Connection between NAND gates and Turing completeness | Question: I know that NAND gates can be used to create circuits that implement every truth table, and modern computers are built up of NAND gates. What is the theoretical link between NAND gates and Turing completeness? It seems to me that NAND gate circuits are closer to finite automata than Turing machines. My intuition is that I can build flip-flops, and therefore registers and memory, out of NAND gates, and unbounded memory is a crucial property of Turing complete systems. I'm looking for a more theoretical or mathematical explanation, or pointers on what to read.
Answer: There is indeed little connection. For a thorough understanding, let me explain the connection between programs and circuits.
A program (or algorithm, or machine) is a mechanism for computing a function. For definiteness, let us assume that the input is a binary string $x$, and the output is a Boolean $b$. The size of the input is potentially unbounded. One example is a program that determines whether the input is the binary encoding of a prime number.
A (Boolean) circuit is a collection of instructions for computing some finite function. We can picture the circuit as an electrical circuit, or think of it as a sequence of instructions (this view is called confusingly a straight-line program). Concretely, we can assume that the input is a binary string $x$ of length $n$, and the output is Boolean. One example is a circuit that determines whether the input encodes a prime number (just as before, only now the input has to be of length $n$).
We can convert a program $P$ into a circuit $P_n$ that simulates $P$ on inputs of length $n$. The corresponding sequence of circuits $P_0,P_1,P_2,\ldots$ is not arbitrary – they can all be constructed by a program that given $n$ outputs $P_n$. We call such a sequence of circuits a uniform circuit (confusingly, we often think of the sequence as a "single" circuit $P_n$ for an indefinite $n$).
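The phrase "a program that given $n$ outputs $P_n$" can be made concrete. The sketch below is one possible illustration of a uniform circuit family: a single generator that, for every input length $n$, emits a straight-line list of XOR gate instructions computing the parity of $n$ bits, plus a tiny evaluator.

```python
def parity_circuit(n):
    """Return a circuit (list of XOR gate instructions) for n-bit parity.

    Each instruction (out, a, b) means: wire `out` = wire `a` XOR wire `b`.
    Wires 0..n-1 are the inputs; the last written wire holds the output.
    """
    gates, acc = [], 0
    for i in range(1, n):
        out = n + len(gates)          # fresh wire for each gate's result
        gates.append((out, acc, i))
        acc = out
    return gates

def evaluate(gates, inputs):
    wires = dict(enumerate(inputs))
    for out, a, b in gates:
        wires[out] = wires[a] ^ wires[b]
    return wires[max(wires)] if gates else inputs[0]

# The same generator works for every input length: a *uniform* circuit family.
print(evaluate(parity_circuit(4), [1, 0, 1, 1]))  # parity of three 1s -> 1
```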
Not every sequence of circuits is uniform. Indeed, a sequence of circuits can compute every function from strings to Booleans, computable or uncomputable! Nevertheless, in complexity theory we are interested in such non-uniform models in which the circuits are restricted. For example, the conjecture P≠NP states that NP-complete problems cannot be solved by polynomial time algorithms. This implies that NP-complete problems cannot be solved by polynomial size uniform circuits. It is moreover conjectured that NP-complete problems cannot be solved by polynomial size circuits without the requirement of uniformity.
Turing-complete computation models are models which realize all computable functions (and no more). In contrast, complete systems of gates (such as AND,OR,NOT or NAND) allow computing arbitrary finite functions using circuits made of these gates. Such complete systems can compute completely arbitrary functions using (unrestricted) sequences of circuits. | {
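The functional completeness of NAND, which the question starts from, is easy to verify mechanically. This small sketch derives NOT, AND, and OR from NAND alone and checks every input combination against Python's own Boolean operators.

```python
def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

# Exhaustively compare against Python's built-in Boolean operators.
for a in (False, True):
    assert not_(a) == (not a)
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
print("NAND generates NOT, AND and OR")
```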
"domain": "cs.stackexchange",
"id": 21479,
"tags": "turing-machines, turing-completeness, digital-circuits"
} |
Why is it that heterozygous loci appear as two separate bands during gel electrophoresis while homozygous loci appear as one band? | Question: Is it because heterozygotes have a greater base pair length? (And if they do, why is that?) Or is it because recessive alleles are moving slower than the dominant alleles in the gel?
Answer: A heterozygous locus has two different alleles and therefore it is possible that their DNA sequence lengths are different. However, it is unlikely that they are so different so as to be clearly resolved in a gel electrophoresis. Agarose gels can at best resolve 20bp; polyacrylamide gels can resolve smaller differences but it is only done for small DNA fragments.
I think up to a 50 bp difference is possible between alleles. But such huge insertions/deletions are rare and the difference between the alleles is usually a point mutation or small insertions/deletions (indels). Big indels are possible when a transgene is inserted specifically in one chromosome; these cases are mostly experimental.
If you consider paralogs as alleles then they can be of different lengths and can be resolved in agarose gels; usually the difference is in the untranslated regions. If you are doing a Southern blot and using a probe that can bind to both paralogs then you will end up getting two bands.
Having said that, it is possible to see a difference in gel bands even if there is no size difference. This is possible by a technique called RFLP. In this technique the DNA is digested using a restriction enzyme which basically cuts the DNA at a specific sequence (usually a hexamer). If a point mutation abolishes or creates a restriction site then the fragment length would be different between the alleles. This is detectable on the gel.
A similar technique is AFLP. | {
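As a toy illustration of the RFLP idea (the sequences are invented, and the cut position mimics EcoRI's G^AATTC site), a point mutation that destroys a restriction site changes the fragment lengths, and hence the band pattern on the gel:

```python
def digest(seq, site="GAATTC"):
    """Return fragment lengths after cutting seq at each EcoRI-like site (cut after the G)."""
    fragments, start = [], 0
    i = seq.find(site)
    while i != -1:
        fragments.append(seq[start:i + 1])   # include the G before the cut
        start = i + 1
        i = seq.find(site, i + 1)
    fragments.append(seq[start:])
    return [len(f) for f in fragments]

allele_a = "ATGC" * 5 + "GAATTC" + "ATGC" * 5   # site intact
allele_b = "ATGC" * 5 + "GATTTC" + "ATGC" * 5   # point mutation, site destroyed
print(digest(allele_a))  # two fragments -> two bands on the gel
print(digest(allele_b))  # one fragment  -> one band
```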
"domain": "biology.stackexchange",
"id": 3528,
"tags": "dna"
} |
Is the concept of entropy a result of limited technology? | Question: I know that entropy is the energy within a system that is unable to do useful work. However, there was a time in which we were unable to harness the energy of the wind or the sun, and now we can, using turbines and solar panels. Is the energy that is lost to entropy really lost? It still exists, due to the conservation of energy, and thus must be accessible via some sort of technology.
If I recall correctly, entropy can also be described as the dispersion of energy within a system. This can be proved true via the following chain of logical assumptions:
Entropy in the universe is always increasing, overall.
The end state of the universe, under current models, is a thin sea of subatomic particles.
Thus, a thin sea of subatomic particles has high entropy.
While mankind will be long dead before the heat death, it would in theory be possible to harness the energy of these particles, even if it would take far more energy to extract it than it would give out. However, it would be possible nonetheless.
In conclusion, is entropy merely a result of our technological limitations?
Answer:
I know that entropy is the energy within a system that is unable to do useful work.
This doesn’t really bear scrutiny. Entropy and energy have different units, for one. But even an approximate analogy would be in trouble: A hotter object has a greater entropy, and it would also provide more work if I connected it to a heat engine (in conjunction with a cold reservoir). So the comparison doesn’t really make intuitive or rigorous sense.
Here’s one better way to think about it: If you ask me to obtain work from a fast-moving cold object or a hot motionless object with the same total energy, I’ll take the moving object every time. The reason is that I can extract all the kinetic energy of the cold object by bringing it to a complete halt. But I can never extract all the thermal energy from the hot object because I have no such “wall”—that is, I have nothing at absolute zero that I can let the hot object heat (while inserting a heat engine to extract work). At best, I can let the hot object heat something around me at room temperature, and this isn’t as efficient. (In particular, if the hot object is at room temperature, its thermal energy is useless to me.)
I didn’t mention entropy at all in the previous example, but it can be completely understood through an entropy framework. Specifically, the (1) hotter object has greater entropy, and (2) heat transfer is entropy transfer, broadly inversely proportional to the temperature, and (3) entropy can’t be destroyed. So the engineering limitation is a matter of maximizing the temperature difference we can provide to the heat engine. In this context, we are indeed limited by our technology (e.g., the need for refractory and creep-resistant moving engine parts that can withstand great heat).
The way you “harness at least some of the energy locked in entropy,” as you put it, is to find the largest, coldest cold reservoir you can to drive and maintain the greatest possible heat transfer. But asking whether we can completely sidestep Carnot constraints, for example, is to ask whether we can perform heat transfer without performing heat transfer—it’s just a nonstarter. | {
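The "largest, coldest cold reservoir" point can be put in numbers with the standard Carnot bound on the fraction of heat convertible to work between two temperatures. The reservoir temperatures below are just illustrative:

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum fraction of heat convertible to work (temperatures in kelvin)."""
    return 1.0 - t_cold / t_hot

print(carnot_efficiency(600.0, 300.0))  # dumping heat at room temperature
print(carnot_efficiency(600.0, 77.0))   # dumping heat into liquid nitrogen: much better
```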
"domain": "physics.stackexchange",
"id": 94517,
"tags": "statistical-mechanics, entropy"
} |
Very simple stack in C | Question: I'm starting to learn C and thought it would be a good idea to try and implement some data structures. I started with a very simple stack and would like to hear some opinions.
Stack.c
#include <stdio.h>
#include <stdlib.h>
#include "stack.h"
struct Node {
    int element;
    struct Node* next;
};

struct Node* head;
int size;

int stack_init() {
    struct Node* dummy = malloc(sizeof(struct Node));
    if(dummy == NULL) {
        return -1;
    }
    head = dummy;
    return 0;
}

int push(int value) {
    struct Node* tmp = malloc(sizeof(struct Node));
    if (tmp == NULL) {
        return -1;
    }
    tmp->element = value;
    tmp->next = head;
    head = tmp;
    size++;
    return 0;
}

int pop() {
    if(size == 0) {
        printf("Stack is empty\n");
    }else {
        printf("Element #%d ->value: %d has been removed\n", size, head->element);
        free(head);
        head = head->next;
        size--;
    }
    return 0;
}

void print_stack() {
    struct Node* tmp = head;
    printf("Stacksize: %d\n", size);
    for(int i = 0;i < size;i++) {
        printf("Element %d -> %d\n",i,tmp->element);
        tmp = tmp->next;
    }
    printf("#####################\n");
}

int get_size() {
    return size;
}
Stack.h
#ifndef STACK_H
#define STACK_H
int stack_init();
int push(int value);
int pop();
void print_stack();
#endif
Answer: I see a number of things that may help you improve your code.
Use a stack pointer as a parameter
Right now, one can only have one stack at a time. Worse, if one calls stack_init after some items have already been pushed onto the stack, there will be a memory leak. An alternative scheme would be to have stack_init() return a pointer to a Stack and then use that as a parameter for all other calls. That way, one could have multiple stacks simultaneously, making it that much more useful.
Initialize all values before use
In the stack_init routine, the head variable is initialized, but neither head->element nor head->next are initialized. It's also probably a good idea to explicitly zero the size. Although it's already zeroed at the moment because it's a file scope variable, if you follow the other advice on this list, you'll have to initialize it yourself.
Provide a way to free memory
Right now, the only way to free memory is to repeatedly call pop(). Unfortunately, the calling program has no way to know how many items are on the stack or to know when the stack is empty, so it's not going to be able to know how many times to do so. I'd suggest providing at least an explicit isEmpty() call.
Provide a complete interface
At the moment, there isn't any way to actually use the stack except to call print_stack() which limits its usefulness in the extreme. I'd suggest that adding a means of examining the top of the stack, as with a call to top() that returns the value for that item might be a good idea.
Don't access freed memory
These lines have a problem:
free(head);
head = head->next;
The problem is that after you've freed head, you dereference it to get the next pointer. This is undefined behavior -- anything could happen and it probably won't be good! Better would be to save a copy of the pointer, do your housekeeping and then free the copy like this:
struct Node* temp = head;
head = head->next;
free(temp);
Separate printing from data manipulation
There is no way at the moment to use pop without it printing something. In a real program that use such a stack, that's unlikely to be the desired effect. Better would be to have pop() just do what it needs to do and leave printing to the calling program.
Return something useful from functions
Most of your non-void functions return something useful, but pop always returns 0 which isn't very useful.
Encapsulate related data in a structure
The size and head elements are closely related items. For that reason (and to accommodate suggestion #1 on this list), I'd recommend combining them into a struct instead, like this:
struct Stack {
    struct Node* head;
    int size;
};
Use a typedef to simplify your code
If you use a typedef like this:
typedef struct {
    struct Node* head;
    int size;
} Stack;
the code can refer to simply Stack instead of struct Stack. It's not necessary, but it does tend to make things a little easier to write and to read. | {
"domain": "codereview.stackexchange",
"id": 22588,
"tags": "beginner, c, stack"
} |
How can I estimate the elasto-optic coefficients ($p_{11}$ and $p_{12}$) of a material? | Question: I am attempting to estimate the elasto-optic coefficients ($p_{11}$ and $p_{12}$) of $\mathrm{TiO}_2$ and $\mathrm{ZrO}_2$, where $p_{11}$ and $p_{12}$ refer to the elements of a strain-optic tensor for a homogeneous material as given in Hocker (Fiber-optic sensing of pressure and temperature, 1979).
I have found a document which specifies that the longitudinal elasto-optic coefficient ($p_{12}$) can be estimated using the Lorentz-Lorenz relation that it gives as
$$p_{12} = \frac{(n^2 - 1)(n^2 + 2)}{3n^4}$$
however no reference is given, and other sources give the Lorentz-Lorenz relation as something rather different. For example Wikipedia says that the equation relates the refractive index of a substance to its polarizability and gives it as
$$\frac{n^2 - 1}{n^2 + 2} = \frac{4\pi}{3}N\alpha$$
which bears only a vague relation to the earlier equation.
Does anyone know of any other ways in which to estimate the elasto-optic coefficients of a material?
Answer: Elasto-optic properties are complex tensorial properties, and I don't think there is any good way to estimate them short of:
measuring them experimentally
calculating them through quantum chemistry methods (CRYSTAL14 is one code with such features)
finding them in the literature
Luckily for you, a simple search reveals that values have been measured, at least for TiO2:
Source here; it's Google's first hit for a search of “elasto-optic tensor TiO2”. | {
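For completeness, the heuristic quoted in the question is at least easy to evaluate for comparison against such measured values. Here it is at $n \approx 2.6$, a rough visible-range refractive index for rutile TiO2; treat both the formula and that value of $n$ as assumptions, since (as noted above) there is no substitute for measurement or first-principles calculation.

```python
def p12_estimate(n):
    # The question's quoted heuristic: p12 = (n^2 - 1)(n^2 + 2) / (3 n^4)
    return (n**2 - 1.0) * (n**2 + 2.0) / (3.0 * n**4)

print(round(p12_estimate(2.6), 3))   # roughly 0.368 for the assumed n of rutile TiO2
```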
"domain": "physics.stackexchange",
"id": 18680,
"tags": "optics, refraction"
} |
Gradient of dot product | Question: I am asked to show using indicial notation that $\mathbf{u}\cdot\nabla \mathbf{u}=\nabla\left(\dfrac{\mathbf{u}\cdot\mathbf{u}}{2}\right)-\mathbf{u}\times\nabla\times\mathbf{u}$. I recognize that this is simply a consequence of the gradient of a dot product identity that is often used.
My attempt at a solution starts with $\mathbf{u}\times(\nabla\times\mathbf{u})$ and reducing this to $u_j\partial_iu_j-u_j\partial_ju_i$ using the identity $\epsilon_{ijk}\epsilon_{klm}=\delta_{il}\delta_{jm}-\delta_{im}\delta_{jl}$. Also, $\mathbf{u}\cdot\nabla\mathbf{u}=u_j\partial_ju_i$, which cancels out the other identical term, so now I'm attempting to show that $u_j\partial_iu_j=\nabla\left(\dfrac{\mathbf{u}\cdot\mathbf{u}}{2}\right)=\frac{1}{2}\partial_iu_ju_j$, however I don't see how this is true. Where did I go wrong?
Answer: $\frac{1}{2}\partial_iu_ju_j$ is more clearly written $\frac{1}{2}\partial_i(u_ju_j)$ which can be evaluated by the chain rule or the product rule. Using the product rule you get $\frac{1}{2}(u_j\partial_iu_j+u_j\partial_iu_j),$ and using the chain rule you get $\frac{1}{2}(2u_j\partial_iu_j).$
And technically you need linearity since you want to do this for every $j$ and add up the results.
$$\frac{1}{2}\partial_i\left(\sum_j u_ju_j\right)=\frac{1}{2}\sum_j\partial_i( u_ju_j)=\frac{1}{2}\sum_j2u_j\partial_iu_j$$ | {
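The full identity behind the question, $\mathbf{u}\cdot\nabla\mathbf{u}=\nabla(\mathbf{u}\cdot\mathbf{u}/2)-\mathbf{u}\times(\nabla\times\mathbf{u})$, can also be checked symbolically. This SymPy sketch (assuming SymPy is available) builds both sides component-wise for arbitrary functions $u_i(x,y,z)$ and confirms they agree:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
u = sp.Matrix([sp.Function('u%d' % i)(x, y, z) for i in (1, 2, 3)])

# Left-hand side: (u . grad) u, i.e. conv_i = sum_j u_j d_j u_i
conv = sp.Matrix([sum(u[j] * sp.diff(u[i], coords[j]) for j in range(3))
                  for i in range(3)])

# Right-hand side: grad(u.u/2) - u x (curl u)
grad_half_u2 = sp.Matrix([sp.diff(u.dot(u) / 2, v) for v in coords])
curl_u = sp.Matrix([sp.diff(u[2], y) - sp.diff(u[1], z),
                    sp.diff(u[0], z) - sp.diff(u[2], x),
                    sp.diff(u[1], x) - sp.diff(u[0], y)])

diff_vec = (conv - (grad_half_u2 - u.cross(curl_u))).expand()
assert diff_vec == sp.zeros(3, 1)
print("identity verified")
```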
"domain": "physics.stackexchange",
"id": 24701,
"tags": "vectors"
} |
What is wrong in this false derivation of length "dilation"? | Question: Suppose an object measures $L$ in moving frame $S'$. This is measured at the same time so $\Delta t'=0$ :
$$\Delta x = \gamma(\Delta x'+v\Delta t') = \gamma \Delta x' = \gamma L$$
Since $\gamma \gt 1$, the same object is dilated in the rest frame $S$.
This is clearly wrong as we know that the object is actually contracted. What am I doing wrong?
Answer: Before we start, I find that a lot of confusion with Special Relativity can be cleared up if we use a standard convention, so I'm going to use all ``primed'' quantities to represent quantities measured in the $S^\prime$ frame, and all unprimed quantities to represent the same quantities in the $S$ frame. (In other words, the quantity $L$ that the OP uses in their question is what I will refer to as $L^\prime$. I apologise for this, but I find that it makes my answer easier to understand.)
Let me also write down the Lorentz Transformations:
\begin{equation}
\begin{aligned}
&\text{(A)}\quad\Delta x^\prime = \gamma \left(\Delta x - v \Delta t\right)\\
&\text{(B)}\quad \Delta t^\prime = \gamma \left( \Delta t - \frac{v}{c^2}\Delta x\right)\\
\\
&\text{(C)}\quad\Delta x = \gamma \left(\Delta x^\prime + v \Delta t^\prime \right)\\
&\text{(D)}\quad \Delta t = \gamma \left( \Delta t^\prime + \frac{v}{c^2}\Delta x^\prime \right)\\
\end{aligned}
\label{LT}
\end{equation}
And lastly, let's actually make the definition of a length clear. For an observer sitting in $S^\prime$, since the object is at rest with respect to him, its length $L^\prime$ is simply the difference in the coordinates, irrespective of when $x_B^\prime$ and $x_A^\prime$ are measured. He could measure $x_B^\prime$, have a coffee, and then measure $x_A^\prime$ and the difference would give him the length. However, for an observer sitting in $S$, since the object is moving with respect to her, both the endpoints $x_B$ and $x_A$ need to be measured simultaneously in her frame of reference ($S$) in order for the difference to be the length $L$. (In other words, if she has a coffee between measuring $x_B$ and $x_A$, the object would have moved between measurements!) So, we have
$$L^\prime = x_B^\prime - x_A^\prime |_\text{ for any $\Delta t^\prime$}$$
$$L = x_B - x_A |_\text{ only when $\Delta t=0$}$$
If you understand this, the rest of the answer is quite simple. Let us consider, as you have, that the object we are measuring is at rest in the frame $S^\prime$, and its length is being measured both from $S$ (in which it is moving to the right with a velocity $v$) and $S^\prime$ in which it is at rest.
The observer in $S$ requires to measure the endpoints of the object simultaneously in her frame of reference, as otherwise the object would move between measurements. In other words, for $(x_B - x_A)$ to be the length, we require that $\Delta t = t_B - t_A = 0$. Note: we are not placing any condition on $\Delta t^\prime$. It may not be (and isn't!) zero. Two observers, moving at some velocity $v$ relative to each other will not agree on simultaneous events.
Thus, we need to find a relation between $\Delta x$ and $ \Delta x^\prime$, when $\Delta t=0$. The mistake you've made in your argument is that you're relating $\Delta x$ and $\Delta x^\prime$ when $\Delta t^\prime=0$. So, the mistake comes when you say that $\Delta x|_{\Delta t^\prime = 0} = L$, the length measured in $S$.
We refer to transformations above, and see that (A) is the transformation we should use, as it relates these quantities.
\begin{equation*}
\begin{aligned}
\Delta x^\prime &= \gamma \left(\Delta x - v \Delta t\right)\\
\Delta x^\prime|_{\Delta t = 0} &= \gamma \left(\Delta x|_{\Delta t =0} - v \Delta t|_{\Delta t = 0}\right)\\
\\
L^\prime &= \gamma L
\end{aligned}
\end{equation*}
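The algebra above can be spot-checked numerically (the velocity and rest length below are hypothetical values, chosen only for illustration):

```python
import math

# A rod of rest length L' = 2.0 in S', moving at v = 0.6c relative to S,
# with its endpoints measured simultaneously in S (dt = 0).
c = 1.0
v = 0.6
gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)

L_prime = 2.0    # rest length, measured in S'
dt = 0.0         # endpoints measured simultaneously in S

# Transformation (A): dx' = gamma * (dx - v*dt).  With dt = 0 this gives
# L' = gamma * L, i.e. L = L'/gamma: the moving rod is contracted.
L = L_prime / gamma

print(gamma, L)  # gamma = 1.25, L = 1.6 < L' = 2.0
```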
Thus, the length that an observer measures when she is at rest with respect to the object (i.e. sitting in $S^\prime$) $L^\prime$ is always greater than $L$, since, as you point out, $\gamma > 1$. Thus, an observer sitting in $S$, with respect to whom the object is moving at a constant velocity will measure a length $L$ which is shorter: lengths contract! | {
"domain": "physics.stackexchange",
"id": 63705,
"tags": "special-relativity"
} |
Where Can I find documentation on how to write drivers in ros? | Question:
I am new to ROS. I am trying to write differential drive drivers. Where can I find documentation?
Originally posted by Jackel Fox on ROS Answers with karma: 1 on 2014-05-26
Post score: 0
Answer:
There's lots of documentation for that. I'd suggest a quick search
A few highlights include:
Chad Rockey's ROSCon Talk https://roscon.ros.org/2012/schedule/ https://www.youtube.com/watch?v=pagC2WXT1x0 https://docs.google.com/presentation/d/13yyOB5CXOzpvMa0_wYxDvNzjb_9dfMjDuVo-CvBcoRw/edit#slide=id.p
A tutorial http://wiki.ros.org/ROS/Tutorials/Creating%20a%20Simple%20Hardware%20Driver
Originally posted by tfoote with karma: 58457 on 2017-08-15
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by jayess on 2017-08-15:
@tfoote: That last link is pretty old (last updated 2011-03-06 and doesn't use catkin) and the last step is
Next, wait for the rest of the tutorial to be written
Is there anything "official" that's more recent? | {
"domain": "robotics.stackexchange",
"id": 18066,
"tags": "ros, documentation"
} |
Making the Tardis with Dark Energy? | Question: I've got a sci-fi based Physics question that involved Dark Energy...
So I've been watching a lot of Doctor Who recently, and I'm very interested on how his "Tardis" is bigger on the inside.
Here you can see the Tardis looks small on the outside.
But inside, it's much bigger.
So here is what I've been thinking. What if you got a box (like a Police Box), and somehow filled it with Dark Energy!
I'm hoping that the Dark Energy would expand space inside the box, thus making it "bigger on the inside". My question is, could this actually work?
Thanks for your help. (This is just a fun question btw)
Answer: Perhaps surprisingly there is a structure in General Relativity something like a Tardis. It's called the bag of gold spacetime. I can't find a simple article on the bag of gold spacetime. It is discussed in the paper Black Holes, AdS, and CFTs by Donald Marolf, though this will be a bit advanced for the non-GR aware.
This diagram from the paper gives an idea of what the spacetime looks like:
The spacetime is constructed by joining the curved spacetime due to a black hole to the spacetime for an expanding universe. As far as I know this was first done by Markov in the 1960s. For more on this see How does the friedmon solution to Einstein's equations resolve paradox of bounded infinities?. Lee Smolin's idea of black hole evolution uses the same idea.
From outside this object looks like a black hole, but if you fell into it you'd find yourself in an expanding universe much like ours. This universe is closed, but would be expanding away from you faster than you could travel, so effectively this is an infinite universe contained within a finitely sized object.
But ... you knew this was too good to be true, didn't you?
As a Tardis this has some serious drawbacks. Firstly it's a one-way trip through the black hole, so you could enter this type of Tardis but never leave again. Secondly on passing through the black hole you'd find yourself at the Big Bang of the universe on the other side, which is likely to prove fatal. Finally, this joining of the spacetimes is a purely mathematical operation. That it can be done mathematically shows the spacetime could exist, but tells us nothing about how to actually construct it or indeed if it can be constructed at all.
There is another possibility, but you need to be aware that this is even more speculative than the spacetime I described above. This is discussed in my question Building a wormhole. If you have a cube whose edges are made from exotic matter then the spacetime geometry around it looks like a wormhole. So if you were to construct such a cube starting in flat spacetime it's possible that would create a bag of gold spacetime rather more manageable than the black hole/universe spacetime discussed above.
But (as far as we know) exotic matter doesn't exist, and even if it did, whether constructing the cube would actually create a bag of gold spacetime is unknown. | {
"domain": "physics.stackexchange",
"id": 34954,
"tags": "experimental-physics, space-expansion, dark-energy"
} |
Variability of the orbital inclination of a black hole merger? | Question: How variable should we expect the orbital inclination of a typical black hole merger to be, relative to the galactic plane? The LIGO simulation seems to give the impression that the orbital inclination doesn’t change, but wouldn’t general relativity predict that there would be a very large precession of the perihelion? And wouldn’t the orbital inclination also precess due to the gravitational pull from the surrounding galaxy?
Asked a different way, what would the merger look like from an outside observer watching over billions of years? Would the orientation of the orbital inclination become extremely erratic, switching from horizontal to vertical and everywhere in between?
Answer: If neither of the black holes is spinning (or if their spins are (anti-)aligned with the orbital angular momentum), then the orbital plane will stay fixed for the entire inspiral. There will be some effect due to the galactic potential, but that will be minute and will act on time scales much longer than those typical for the inspiral. You certainly would not expect anything dramatic.
However, if the spins are not aligned with the orbital angular momentum all sorts of the crazy effects can happen. In the weak field the dominant effect will be that the orbital plane will precess around the total angular momentum. The associated time scale is long compared to the orbital time scale, but short compared to the inspiral timescale. This affects the gravitational waveform by introducing a modulation of the signal. No clear signs of such modulations have been seen in the observed GW events, which tells us that the effective spin (a combination of the spins of the two objects weighted by their masses) of the observed systems is fairly low.
More extreme things can happen as well. For example, as the orbital angular momentum changes due to emission of gravitational waves, it is possible that the system ends up in a situation where the total (vector) sum of the spins and the orbital angular momentum vanishes. As a result there is very little resistance to the direction of the orbital angular momentum being changed, and the system will transition from precessing around one direction to precessing around a different direction. This effect is known as "transitional precession".
Finally, in the strong field regime the precession time scale will become comparable to the orbital time scale and you can get some crazy looking orbits. For comparable mass binaries (such as observed by LIGO) this is not too noticeable, because at the same time the inspiral timescale also becomes similar to the orbital timescale and we can no longer really distinguish the effects from precession and the merger process. However, small mass-ratio systems evolve much more slowly, and the crazy strong field orbits are much more apparent. Here is a picture of an inclined orbit of a test mass around a spinning Kerr black hole:
You can even get into "resonant" situations where there is some rational ratio between the orbital frequency, precession rate, and rate of pericenter advance. This leads to pretty pictures like this: | {
"domain": "physics.stackexchange",
"id": 48084,
"tags": "general-relativity, cosmology, black-holes, orbital-motion, ligo"
} |
Linear system in polar coordinates | Question: Unlike the Cartesian coordinates, I find navigating through polar coordinates difficult. Is the system defined by the following Lagrangian $L$ defined in polar coordinates linear?
$$L = \frac{1}{2} m \left( \dot{r}^2 + r^2 \dot{\theta}^2 \right) - Dar^2 - \frac{K}{2} \theta ^2,$$
where $D$, $a$ and $K$ are constant positive real numbers and $r$, $\theta$ are the coordinates of the system in the polar representation. In addition $m$ is the mass of the particle.
Answer: with :
$$r(t)=\sqrt{x(t)^2+y(t)^2}$$
and
$$\tan(\varphi(t))\approx\varphi(t)=\frac{y(t)}{x(t)}$$ (a small-angle substitution, used throughout below)
$$L\mapsto L(x,y,\dot{x},\dot{y})$$
$$L=\frac{1}{2}m\left(\frac{1}{4}\,\frac{\left(2x\dot{x}+2y\dot{y}\right)^{2}}{x^{2}+y^{2}}+\left(x^{2}+y^{2}\right)\left(\frac{\dot{y}}{x}-\frac{y\dot{x}}{x^{2}}\right)^{2}\right)-Da\left(x^{2}+y^{2}\right)-\frac{1}{2}\,\frac{Ky^{2}}{x^{2}}$$
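As a numerical sanity check (the positions and velocities below are hypothetical values), the Cartesian expression above agrees with the polar Lagrangian once the substitution $\varphi = y/x$ used in this answer is made:

```python
# Evaluate both forms of the Lagrangian at one arbitrary state and compare.
m, D, a, K = 2.0, 0.5, 1.3, 0.7
x, y, xd, yd = 1.2, 0.4, -0.3, 0.8   # x, y, dx/dt, dy/dt (illustrative)

r2 = x**2 + y**2
rd = (x * xd + y * yd) / r2**0.5      # dr/dt
phi = y / x                            # the answer's substitution for theta
phid = yd / x - y * xd / x**2          # d(phi)/dt

L_polar = 0.5 * m * (rd**2 + r2 * phid**2) - D * a * r2 - 0.5 * K * phi**2
L_cart = (0.5 * m * (0.25 * (2 * x * xd + 2 * y * yd)**2 / r2
                     + r2 * (yd / x - y * xd / x**2)**2)
          - D * a * r2 - 0.5 * K * y**2 / x**2)

print(L_polar, L_cart)  # the two values agree to rounding error
```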
your Lagrangian with Cartesian coordinates is highly nonlinear, thus the equations of motion will also be nonlinear. | {
"domain": "physics.stackexchange",
"id": 67589,
"tags": "classical-mechanics, lagrangian-formalism, non-linear-systems, linear-systems"
} |
Confusion with the group theory notation | Question: I know for a group $O(N)$, $N$ is the dimensionality of the matrix corresponding to the transformation, but then for Lorentz transformation we say its $O(3,1)$ while the dimensions of transformation matrix is $4\times 4$. So my question is what these numbers stand for? in case of $O(3,1)$ I know that 3 is for three rotations and 1 is for the boost but in general what do they indicate? for example if we have a group $SU(2,2)$ what do 2 and 2 stand for? And what are the dimensions of the matrix that correspond to this representation?
Answer: In $O(3,1)$, the metric is $(1,1,1,-1)$ (or equivalently $(-1,-1,-1,1)$.)
Thus $O(3,1)$ matrices preserve the length of vectors, with length defined by
$$
x^2+y^2+z^2-t^2\, ,
$$
for the metric $(1,1,1,-1)$ (with obvious applications to special relativity).
In $SU(2,2)$ the metric is $(1,1,-1,-1)$ and transformations will preserve the (complex) inner product
$$
xx^*+yy^*-zz^*-tt^*\, .
$$
In general, for $SU(p,q)$ the metric will be diagonal and contain $+1$ $p$ times and $-1$ $q$ times, and will preserve the length of vectors defined by
$$
\sum_{k=1}^p x_kx^*_k - \sum_{s=p+1}^{p+q} x_sx_s^*\, .
$$
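A quick numerical illustration (the rapidity value is hypothetical): a pure boost in $O(3,1)$, such as the matrix $L(\beta)$ discussed below, preserves the indefinite inner product even though the matrix itself is not unitary. Here the vector is ordered $(t,x,y,z)$, so the $(1,1,1,-1)$ metric is written as $\eta=\mathrm{diag}(-1,1,1,1)$ (the equivalent sign convention):

```python
import math

# Verify L^T @ eta @ L == eta for a boost mixing t and z.
beta = 0.83
ch, sh = math.cosh(beta), math.sinh(beta)

Lb = [[ch, 0, 0, sh],
      [0,  1, 0, 0],
      [0,  0, 1, 0],
      [sh, 0, 0, ch]]
eta = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

LT = [[Lb[j][i] for j in range(4)] for i in range(4)]  # transpose of L
M = matmul(matmul(LT, eta), Lb)                         # L^T eta L

ok = all(abs(M[i][j] - eta[i][j]) < 1e-12
         for i in range(4) for j in range(4))
print(ok)
```

The entries of $L(\beta)$ grow without bound as $\beta$ grows, which is the non-compactness (and non-unitarity) noted below.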
Note that neither $O(3,1)$ nor $SU(2,2)$ are compact so the $4\times 4$ realization of the group as matrices will not be unitary. The most obvious example of this observation is the boost matrix
$$
L(\beta)=\left(\begin{array}{cccc}
\cosh(\beta)&0&0&\sinh(\beta)\\
0&1&0&0\\
0&0&1&0\\
\sinh(\beta)&0&0&\cosh(\beta)\end{array}\right)
$$
acting on the vector $(t,x,y,z)$, where $\beta$ is the rapidity parameter. Clearly here $L^{\dagger}(\beta)=L(\beta)\ne L(-\beta)$, with the latter being the inverse of $L(\beta)$. | {
"domain": "physics.stackexchange",
"id": 39624,
"tags": "metric-tensor, group-theory, notation"
} |
My LSTM can't reduce error down to zero, when overfitting | Question: I have implemented LSTM in c++ which steadily decreases in error, but slows down at the certain error value. It also seems to predict most of the characters, but gets stuck and not able to correct some mistakes (or will correct them very slowly), even after 5000 backprop iterations. For example, asking it to predict character one by one might result in
abcdefgjffklmnopqrstuxwxyz
or something similar. Notice, the network almost gets things right. Also, it never 'gets lost' after making a mistake, for instance in the above example, after jff it produces k and gets back on track as if it had never made that mistake. The results are always different - sometimes network learns all the letters. However, the error still plateaus at the same value.
The error starts from around 7.0 and decreases down to 2.35 after which it slows down with every iteration, which seems like it's hitting a plateau.
If my alphabet consists only of a,b, then the network almost instantly realises it should be producing abababababab, however the error starts at 0.8 and now always plateaus at around 0.2 to 0.28
If with a,b, we set 4 timesteps, the network learns to produce abab, but after 50 000 back-props (being well-stuck even after 25 000) it predicts 'a' only with 85%-ish certainty, even though I would expect it to be 99.999%; Similar value when 'b' has to be predicted. Once it gets stuck it maintains values similar to these. So it could keep guessing the same value over and over again when working with the a,b dataset;
Strangely, when working with that a,b dataset most of the time, I observe the final learnt probabilities to be:
a=[0.68, 0.31] b=[0.15, 0.85]
and sometimes, after re-initializing the network it learns the final probabilities as a=[0.8205, 0.1794] and b=[0.1795, 0.8205]
Disabling momentum (previous frame's grads times zero) still has same effect on a,b
The network doesn't explode, but its gradients seem to vanish.
Question:
Is it usual to get stuck at those values? Back-propping after 26 timesteps, done 200 000 times, and by that time the changes get turtle slow. The error sits at around 2.35 and is not worth the wait for another 0.000001 change in error.
Experimenting with a smaller learning rate (0.0004) allows the error to get down to 2.28 -that's the best I've got. Also, using a momentum coefficient with 0.2; It's applied to the previous frame's gradient. I don't increase momentum while the program executes, but keep it at a constant 0.2;
newgradient = newgradientMatrix + prevFrameGradientMatrix*0.2
I am not using any form of dropout etc. I just want the network to overfit, but it's getting stuck at 2.35 for 26-char aphabet.
I am only getting 0 error when an entire alphabet consists of a single character. In that case, the NN will predict aaaaaaaaaaaaaaaaa and error will be 0
Forward prop:
All is done on a single thread, in CPU. Using 'float' for the components of vectors.
Tanh on dataGate and after the 'cell' is constructed (before the 'cell' is multiplied by an output gate)
Sigmoid on Input, Forget and Output gate
Each gate has a matrix of weights where each column holds the weights from a neuron in the current LSTM unit to all neurons in the previous LSTM unit. The last column is ignored because nothing should feed into our bias. Also, the bias value (but not the weights from it!) is set manually to 1.0 just to be sure.
Each gate has a separate NxN matrix with recurrent weights (U-matrix) operating on the results of the current LSTM unit at [time-1]
Both W and U keep the last row, so they both sample bias of lower-LSTM. This shouldn't create issues, granted that both of biases are back-propagated properly. In fact, last row was removed from U-Matrix altogether - just to be sure, but the error still plateaus at the same 2.35 quantity regardless.
Weight initialization:
Xavier-Bengio (Glorot & Bengio) with a uniform distribution, page 253, bottom right.
Boundaries of the uniform distrib are defined as mentioned here, like this:
low = -4*np.sqrt(6.0/(fan_in + fan_out)); // use |4| for sigmoid gates, |1| for tanh gates
high = 4*np.sqrt(6.0/(fan_in + fan_out));
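A small Python sketch of this initialization scheme (the layer sizes are hypothetical; the factor of 4 is the sigmoid-gate variant described above):

```python
import math
import random

random.seed(0)  # for reproducibility of this illustration

def xavier_uniform(fan_in, fan_out, gain):
    # Glorot/Bengio uniform bound, scaled by the gate-dependent gain.
    bound = gain * math.sqrt(6.0 / (fan_in + fan_out))
    return [[random.uniform(-bound, bound) for _ in range(fan_in)]
            for _ in range(fan_out)]

W = xavier_uniform(128, 64, gain=4.0)   # e.g. a sigmoid input-gate matrix
flat = [w for row in W for w in row]
bound = 4.0 * math.sqrt(6.0 / (128 + 64))
print(min(flat), max(flat))             # all samples lie inside [-bound, bound]
```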
Cost function
Result of LSTM unit are softmaxed (bias is ignored), then passed through cross-entropy function to get a single float value.
Cross entropy is the correct one for multi-class classification.
float cost = 0;
for(each vector component i){
if(predictedVec[i] == 0){ continue; } // guard against log(0)
cost += -(targetVec[i])*naturalLog(predictedVec[i]);
}
return cost;
The cost is then summed up across all timesteps and its average is returned right before we do the backprop. This is where I am getting plateaus at 2.3 for 26-character alphabet
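An illustrative Python version of the cost loop above (multi-class cross entropy for one timestep, skipping components whose prediction is exactly zero so log(0) is never evaluated, as in the pseudocode):

```python
import math

def cross_entropy(target_vec, predicted_vec):
    cost = 0.0
    for t, p in zip(target_vec, predicted_vec):
        if p == 0.0:   # skip to avoid evaluating log(0)
            continue
        cost += -t * math.log(p)
    return cost

# One-hot target 'b' against the learnt probabilities from the a,b example:
print(cross_entropy([0.0, 1.0], [0.15, 0.85]))   # -ln(0.85), roughly 0.16
```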
By the way, the Cell (aka c) and Result (aka h) are cached after the last (26th timestep). After back-propagation they are used by timestep0. This could be (and was for some time) disabled, but the results are similar.
Backpropagation
Will list a couple of important gotchas & keypoints that I took care of:
de_dh is simply (targetVec - predictedVec), in that order. That's because the gathered gradient will be subtracted from the W and U matrices. This is due to derivatives cancelling out nicely when the softmax & crossEntropy are used together during forward prop.
To de_dh an extra gradient is added from t+1. That added quantity is sum from all 4 gates. To explain better, recall that one of such gates was forward-propping as follows:
dataGate = tanh(W * incomingVal + U * lstmResultPrevT + bias_ComesFrom_W_and_U); //one of the 4 gates
In the above formula, the bold represents the quantity from where the gradient is taken of one of such four gates at [t+1]. Such a gradient is then summed up across those 4 gates and added to de_dh, as stated originally. It's necessary to be done because during forward prop, 'H' of [t] has affected all 4 gates of [t+1]
When computing the gradient for Cell at [t], the cell's gradient from [t+1]is added. Afterwards, we compute a gradient for C of [t-1], to be able to repeat this process when we arrive to the earlier timestep.
The gradient for the U-weights leading to bias at [t-1] is computed with remembering that the bias's original value was 1.0; Also, it's double-checked to ensure gradient doesn't flow from our bias at [t] to the neurons at [t-1]. That's because nothing fed into our bias originally. As follows, the entire last column of U-gradient matrix is always 0.0;
Similar thing is done for such a bias-column of the W matrix too - that entire column is zero.
Finally, the gradient is computed for each H of [t-1] for each of the four gates. This is done so that the '2. key-point' is possible (adding the 4-grads to de_dh), when we get to the earlier timestep in this back-prop.
Unit Tests & debugging:
after 20 000 backprops (done every 26 timesteps) a file is collected.
it was observed that gradients were very small on all 4 gates, especially after being passed through the activation function at each gate.
This is one of the reasons why Xavier init (above) was introduced, to prevent weights from being too large (shrinks grad after pushing-back through activation) or being too small (shrinks grad after pushing-back through weights).
A significant improvement was observed after 'norm clipping' was used, allowing my LSTM to seemingly learn a correct sequence even when 56 unique characters were used and backprop was done after 56 timesteps. Similar to the original example (with the 26 chars) only a couple of characters were predicted incorrectly. However the error still always plateaus, at a higher value (around 4.5)
Once again, is this traditional behavior, and I just have to rely on the things like dropout and averaging the results of multiple networks? However it seems that my network isn't even capable of overfitting...
Edit:
I've discovered one thing - the result of the LSTM is a vector whose components cannot be less than -1 or greater than 1 (courtesy of tanh and sigmoid). As a result, $e^x$ cannot be smaller than ~0.36 or greater than ~2.71
So the probabilities always have some 'precipitation' dangling, and network always 'worries' that it can't reach 100% confidence? Tried to get clarification on that here
Answer: As stated in the last edit of my question, the issue indeed was to do with the softmax function.
As clarified here, we shouldn't apply softmax directly to the result of the last LSTM. Notice, LSTM will produce a vector of values, each of which is bounded between -1 and 1 (due to the tanh squashing function that's applied to the Cell).
Instead, I've created a traditional fully-connected layer (just additional weight matrix), and feed result of LSTM to that layer. This "output" layer isn't activated - it feeds into a softmax function, which actually serves as an activation instead.
I modified the back-prop algorithm to supply the gradient generated by the softmax to the Output layer. Of course, if you used a cross entropy Cost function originally, then such a gradient will remain $(predicted - expected)$. It's then pushed through the weights of that Output Layer, to get the gradient w.r.t. LSTM. After this the backprop is applied as usual and the network finally converges.
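A sketch of that fix (the logits and target below are hypothetical): un-activated output values feed a softmax, and the softmax-plus-cross-entropy gradient with respect to those values is simply (predicted - target), verified here against a finite difference:

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]   # shift by max for numerical stability
    s = sum(e)
    return [v / s for v in e]

def loss(z, target):
    p = softmax(z)
    return -sum(t * math.log(pi) for t, pi in zip(target, p))

z = [0.5, -1.2, 2.0]          # un-activated output-layer values
target = [0.0, 0.0, 1.0]      # one-hot target

analytic = [pi - t for pi, t in zip(softmax(z), target)]   # predicted - target

# Central-difference gradient of the loss w.r.t. each output value.
h = 1e-6
numeric = []
for i in range(len(z)):
    zp = z[:]; zp[i] += h
    zm = z[:]; zm[i] -= h
    numeric.append((loss(zp, target) - loss(zm, target)) / (2 * h))

print(analytic, numeric)  # the two gradients agree to finite-difference error
```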
Edit: There is slight improvement to momentum.
Also, using a momentum coefficient with 0.2; It's applied to the previous frame's gradient. I don't increase momentum while the program executes, but keep it at a constant 0.2;
newgradient = newgradientMatrix + prevFrameGradientMatrix*0.2
That's fine, but changing the momentum-coefficient will require us to also re-adjust learning rate. A cleaner version will be:
newgradient = newgradientMatrix*(1-0.9) + prevFrameGradientMatrix*0.9
Which is an exponential moving average, that remembers roughly $\frac{1}{0.1} = 10$ days.
On the 10th day, the coefficient will be $$\frac{(1-\epsilon)^{\frac{1}{\epsilon}}}{0.9} = \frac{(1-0.1)^\frac{1}{0.1}}{0.9} = \frac{0.9^{10}}{0.9} \approx 0.387 \approx \frac{1}{e}$$ of the peak. Because the 9 newer days have larger coefficients (larger than 0.387), their average makes the 10th day and older effectively negligible
Any older days will have even less contribution
Also, don't forget about the bias correction, which helps get a better estimate when we are "just starting" to compute the average. Without the bias correction it would start very low, and would take some time to catch up with the expected "exponential moving average"
newval = curval*(1-0.9) + prevVal*0.9
newval /= 1-(0.9)^t //where t is timestep
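A small sketch of the bias-corrected average above, fed a hypothetical constant gradient of 1.0: the raw EMA starts low and climbs toward 1, while the corrected value equals 1 (up to rounding) at every step:

```python
beta = 0.9
ema = 0.0
for t in range(1, 11):
    grad = 1.0
    ema = (1 - beta) * grad + beta * ema   # exponential moving average
    corrected = ema / (1 - beta ** t)      # bias correction
    print(t, round(ema, 4), round(corrected, 4))
```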
However, in practice, it will be fine after approximately 10 timesteps - and so there is no real need for the bias correction in Momentum, but it should be done if we are using Adam (a combination of Momentum & RMSProp) | {
"domain": "datascience.stackexchange",
"id": 2324,
"tags": "backpropagation, lstm, overfitting, cost-function"
} |
Large pose jumps in gmapping | Question:
Hi! I have a Turtlebot with Create base, a Hokuyo laser scanner, and no Kinect, running Groovy. When I use gmapping (through the gmapping_demo), or even when navigating in a static map (through amcl_demo), the pose seems to be reasonably accurate at first, but then I see a large jump in pose (several meters). gmapping produces the error "Transform from /base_link to /odom failed". Sometimes it rights itself after a while, but sometimes it just messes up the map. The pose jumps to locations both inside and outside the mapped region.
Symptoms are similar to unanswered question 58955.
Is the transform failure the likely cause of these jumps? Where should I look to find the cause of the transform failure?
Edit:
I've been looking at the tf frames in rviz and some of the transforms using tf_echo.
My tree looks like:
/map -> /odom -> /base_footprint -> /base_link -> (a lot of frames for various standoffs, plates, etc.)
The transform from /base_footprint to /base_link is static and looks OK.
The large pose jumps occur when the transform between /map and /odom zeros itself out (edit 2: keeps publishing, but with all zero values). This transform appears to be adjusting for the accumulated odometry errors, so when it zeros out, the robot model jumps to the pose from the odometry, which is generally off by a few meters.
I haven't seen any large changes in the transform between /odom and /base_footprint using tf_echo, but I haven't looked carefully at it yet.
I haven't yet looked at the transform between /base_link and /odom (which gmapping says is failing when the jumps occur).
So far, the /map -> /odom transform seems to have the largest problems, yet gmapping says the /odom -> /base_link transform is failing. Is the /odom frame failing? Is something failing to publish the correct transforms?
Thank you!
Edit 2:
Here is a more complete description of the gmapping output:
gmapping usually prints this immediately upon jumps:
[ERROR] [1371054374.583403027]: Transform from base_link to odom failed
update frame 18556
update ld=1.09735 ad=2.69054
Laser Pose= 2.60559 0.20104 2.43665
m_count 27
Then it starts printing this:
[ WARN] [1371054380.206694798]: Clearing costmap to unstuck robot.
[ WARN] [1371054380.407737352]: Rotate recovery behavior started.
[ WARN] [1371054380.606770270]: Clearing costmap to unstuck robot.
[ WARN] [1371054380.806767690]: Rotate recovery behavior started.
[ERROR] [1371054381.006780153]: Aborting because a valid plan could not be found. Even after executing all recovery behaviors
Average Scan Matching Score=396.033
neff= 40.7624
Registering Scans:Done
[ERROR] [1371054386.480540756]: Transform from base_link to odom failed
update frame 18557
update ld=0.00494867 ad=0.52552
Laser Pose= 2.61022 0.199294 2.96217
m_count 28
Average Scan Matching Score=411.485
neff= 40.7294
Registering Scans:Done
Printouts like this continue until the /map -> /odom transform and robot position return to normal.
Occasionally, I see a printout like this:
[ WARN] [1371054758.986420773]: Map update loop missed its desired rate of 3.0000Hz... the loop actually took 0.3656 seconds
[ WARN] [1371054759.138883979]: Invalid Trajectory 0.000000, 0.000000, -1.000000, cost: -1.000000
Is this relatively normal? If not, where can I look for an example of normal output?
Thanks for the help!
Originally posted by shiloh on ROS Answers with karma: 46 on 2013-06-04
Post score: 2
Original comments
Comment by Ben_S on 2013-06-04:
Does your transform /odom -> /base_link also jump? Try visualizing your tf-frames in RViz while driving around your robot. Maybe your odometry is broken somehow...
Comment by jorge on 2013-06-04:
We are tracking this problem, but still don't have an answer. Check this video of our experiments: http://youtu.be/unLtvC2sBN4. Your jumps are somehow similar? Another interesting questions is whether the machine you use is very loaded. Please make a top so we can discard this possibility.
Comment by Ben_S on 2013-06-04:
@jorge: Is there a reason, why you are using the kinect to laserscan (i guess) for gmapping and not the high-fov laserscanner shown in blue? I guess your fixed frame in the video is /odom? @shiloh could you post a bag file where the jumps are happening?
Comment by jorge on 2013-06-05:
We are just evaluating gmapping with kinect; the laser scan is just to provide a reference. And yes, odom is the fixed frame.
Comment by shiloh on 2013-06-07:
@jorge: As opposed to the jumps in the video, mine are showing a large change in position as well as a rotation.
Comment by jorge on 2013-06-11:
/map -> /odom zeros itself means it keep published with all zero values? Or that it's not published for a while? Can you check the gmapping standard output messages when tf zeros?
Comment by shiloh on 2013-06-12:
/map -> /odom keeps publishing, but with all zero values.
Comment by Ben_S on 2013-06-12:
Would it be possible to post a bag file when this is happening?
Comment by fergs on 2013-06-13:
I second the request for a bag file! Jorge, if you have a bag file for a similar test to that video linked above, that would very interesting to look at as well.
Comment by jorge on 2013-06-16:
Well, I can publish the bag files, but we are preparing a second level of testing with ground truth that will be far more useful. We hope we will be able to post them at the end of this week or beginning of the next one.
Comment by jorge on 2013-06-25:
Sorry, preparing the multi-camera system is taking longer than we expected. Here you have the current files, just scan and tf: http://files.yujinrobot.com/turtlebot/gmapping_test.tgz. Use -s 90 when running the bag files because the robot doesn't move for the first minute and a half.
Answer:
I was using a Prolific USB-to-serial cable before to connect the netbook to the Create. Apparently, the Prolific cable does not work well with the Create. After changing the cable for an iRobot model 4522 direct USB cable, I stopped seeing the jumps while mapping.
Originally posted by shiloh with karma: 46 on 2013-06-27
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 14425,
"tags": "navigation, turtlebot, gmapping-demo, gmapping, groovy-turtlebot"
} |
Integer to English challenge | Question: I have a challenge, which is to create a JavaScript function that turns a given number into the string representation. For example:
console.log(intToEnglish(15)) should print fifteen
console.log(intToEnglish(101)) should print one hundred one
and so on...
This challenge covers positive integers (i.e. greater than zero).
I have accomplished this objective with the following code:
var b4Twenty = ["one", "two", "three", "four", "five", "six", "seven", "eight",
"nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen",
"seventeen", "eighteen", "nineteen"
];
var b4Hundred = ["twenty", "thirty", "forty", "fifty", "sixty", "seventy",
"eighty", "ninety"
];
function intToEnglish(n) {
return translator(n).trim();
}
function translator(n) {
if( n == 0)
return "";
else if (n <= 19)
return b4Twenty[n - 1] + " ";
else if (n <= 99)
return b4Hundred[Math.floor(n / 10 - 2)] + " " + translator(n % 10);
else if (n <= 199)
return "one hundred " + translator(n % 100);
else if (n <= 999)
return translator(Math.floor(n / 100)) + "hundred " + translator(n % 100);
else if (n <= 1999)
return "one thousand " + translator(n % 1000);
else if (n <= 999999)
return translator(Math.floor(n / 1000)) + "thousand " + translator(n % 1000);
else if (n <= 1999999)
return "one million " + translator(n % 1000000);
else if (n <= 999999999)
return translator(Math.floor(n / 1000000)) + "million " + translator(n % 1000000);
else if (n <= 1999999999)
return "one billion " + translator(n % 1000000000);
else if (n <= 999999999999)
return translator(Math.floor(n / 1000000000)) + "billion " + translator(n % 1000000000);
else if (n <= 1999999999999)
return "one trillion " + translator(n % 1000000000000);
else if(n <= 999999999999999)
return translator(Math.floor(n / 1000000000000)) + "trillion " + translator(n % 1000000000000);
else if (n <= 1999999999999999)
return "one quadrillion " + translator(n % 1000000000);
else
return translator(Math.floor(n / 1000000000000000)) + "quadrillion " + translator(n % 1000000000000000);
}
This is my recursive function to achieve the given goal. It works, and it takes 65ms to complete the battery of 50 tests.
However, I have some concerns regarding its performance:
Given that recursive functions are usually slower than iterative ones, is there a way to make this iterative?
Should I use a switch case, or is my if statement OK?
I keep using Math.floor to round the numbers and find the indexes. Perhaps there is a better solution out there without using this technique?
I have to wrap the translator function (who does all the work) into another function because I need to trim the final result of extra spaces. Is there a way I can avoid this?
I am open to suggestions on how to improve this. Thanks!
Answer: Interesting question! I came up with a preliminary function in my spare time. There are many TODOs that should be done, but I think it's a good start.
First, instead of splitting logic into arrays, I would rather use a mapping object (dictionary).
const mapNumberToString = {
1: "one",
2: "two",
3: "three",
4: "four",
5: "five",
6: "six",
7: "seven",
8: "eight",
9: "nine",
10: "ten",
11: "eleven",
12: "twelve",
13: "thirteen",
14: "fourteen",
15: "fifteen",
16: "sixteen",
17: "seventeen",
18: "eighteen",
19: "nineteen",
20: "twenty",
30: "thirty",
40: "forty",
50: "fifty",
60: "sixty",
70: "seventy",
80: "eighty",
90: "ninety",
100: "hundred",
1000: "thousand",
1000000: "million",
1000000000: "billion",
1000000000000: "trillion",
1000000000000000: "quadrillion"
};
Then, my function goes like this...
const intToEnglish = (n) => {
const arr = n.toString().split('').map(s => +s);
let eng = "0";
// under 20
if (n <= 20 && n > 0) {
eng = mapNumberToString[n];
}
// 21 ~ 99
else if (arr.length < 3) {
eng = [
mapNumberToString[arr[0]*10],
mapNumberToString[arr[1]]
].join(' ');
}
// 100 ~ 999
else if (arr.length < 4) {
eng = [
mapNumberToString[arr[0]],
mapNumberToString[100],
intToEnglish(n % 100)
].join(' ');
}
// above 1000
else {
const exp = Math.pow(1000, Math.floor( (arr.length - 1) / 3));
eng = [
intToEnglish(Math.floor(n / exp)),
mapNumberToString[exp],
intToEnglish(n % exp)
].join(' ');
}
return eng.trim();
};
Taking advantage of arrays, a recursive function, and also JavaScript's quirky undefined handling (undefined entries vanish in join) to make this magic happen.
1: one
5: five
10: ten
15: fifteen
20: twenty
25: twenty five
100: one hundred
105: one hundred five
115: one hundred fifteen
120: one hundred twenty
212: two hundred twelve
232: two hundred thirty two
999: nine hundred ninety nine
1000: one thousand
1234: one thousand two hundred thirty four
12345: twelve thousand three hundred forty five
123456: one hundred twenty three thousand four hundred fifty six
234100: two hundred thirty four thousand one hundred
1032001: one million thirty two thousand one
5000021: five million twenty one
810238903242: eight hundred ten billion two hundred thirty eight million nine hundred three thousand two hundred forty two
In this way, you can easily add to your mapping object without changing your code (quintillion, sextillion...). Plus, there are fewer if statements, which I think makes it easier to read, in my own opinion of course lol.
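As a side note on the asker's question about avoiding recursion: the same grouping idea also works iteratively by processing three digits at a time. A self-contained sketch (my own illustration; the helper names and scale table are not from the code above):

```javascript
// Iterative sketch: split the number into 3-digit groups, then walk the
// groups from the most significant down, appending scale words.
const ONES = ["", "one", "two", "three", "four", "five", "six", "seven",
  "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
  "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"];
const TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty",
  "seventy", "eighty", "ninety"];
const SCALES = ["", "thousand", "million", "billion", "trillion", "quadrillion"];

// Words for a number in 1..999 (no recursion needed at this size).
function threeDigits(n) {
  const words = [];
  if (n >= 100) {
    words.push(ONES[Math.floor(n / 100)], "hundred");
    n %= 100;
  }
  if (n >= 20) {
    words.push(TENS[Math.floor(n / 10)]);
    n %= 10;
  }
  if (n > 0) words.push(ONES[n]);
  return words;
}

function intToEnglishIterative(n) {
  if (n === 0) return "zero";
  // Collect 3-digit groups, least significant first.
  const groups = [];
  while (n > 0) {
    groups.push(n % 1000);
    n = Math.floor(n / 1000);
  }
  const words = [];
  for (let i = groups.length - 1; i >= 0; i--) {
    if (groups[i] === 0) continue;    // skip empty groups entirely
    words.push(...threeDigits(groups[i]));
    if (SCALES[i]) words.push(SCALES[i]);
  }
  return words.join(" ");
}
```

Because the result is built in an array and joined once, there is no trailing-space trimming to worry about either.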
Again, there is so much to improve, and I currently don't have that much time :(. | {
"domain": "codereview.stackexchange",
"id": 19413,
"tags": "javascript, performance, recursion, numbers-to-words"
} |
How to attach soft tubing to glass | Question: I plan to do some amateur chemistry with some basic equipment. I've watched hours of youtube videos for how to bend and polish glass tubing and push it through rubber stoppers, and I've seen diagrams of such tubing attached to rubber or silicone tubing, but nothing showing how to do that. I've even searched through catalogs of equipment and seen lots of specialized gadgets with hose barbs and such, but nothing basic. I just want to do a simple thing: have a rubber stopper with a glass tube in it, and attach that directly to a flexible tube. Nothing fancy, just one tube connected to another. I see it in diagrams like http://www.mctcteach.org/chemistry/C2224/labman/vacfilt/filter7.htm , but it gives no explanation. What am I missing?
Answer: Heat up the soft tubing to make it easier to fit over the connection. Most labs have a heat gun that is used for this (amongst other uses). A hairdryer will serve, or put the end of the tubing in hot water. Many glass connections have what is called an olive, which you have to push the tubing over while hot; when it cools it contracts, giving a good seal (particularly important for the water lines of condensers which may be left overnight). | {
"domain": "chemistry.stackexchange",
"id": 17624,
"tags": "equipment"
} |
If $|\psi\rangle, U|\psi\rangle$ are known, how many pairs of such qubits are required to find the operator $U$? | Question: Assume that we know a quantum state and the result of applying an unknown unitary $U$ on it. For example, if the quantum states are pure qubits, we know $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$ and $U|\psi\rangle=\gamma|0\rangle+\delta|1\rangle$. Then how can we compute the unknown operator $U$?
Answer: If $U$ acts on a $d$-dimensional Hilbert space, then you need the result $U|\psi\rangle$ for a set of $d$ linearly independent vectors $|\psi\rangle$.
So, if you're talking about a single qubit unitary, you need two different states $|\psi\rangle$. If you have these, then by linearity you can find out
$$
U|0\rangle=\alpha|0\rangle+\beta|1\rangle\qquad U|1\rangle=\gamma|0\rangle+\delta|1\rangle.
$$
Then, your $U$ is just
$$
U=\left(\begin{array}{cc} \alpha & \gamma \\ \beta & \delta \end{array}\right)
$$
(the columns are just the individual outcomes for the computational basis states.)
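As a concrete check (my own worked example, not part of the original answer): if the two known outcomes are $U|0\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ and $U|1\rangle=\frac{1}{\sqrt{2}}(|0\rangle-|1\rangle)$, then reading off the columns gives
$$
U=\frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array}\right),
$$
which is the Hadamard gate.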
You might wonder if you can get away with fewer. For example, we know that $U|0\rangle$ and $U|1\rangle$ must be orthogonal. However, there is still some freedom of parameters (such as a global phase) that you cannot determine, and hence you need to know both outcomes. | {
"domain": "quantumcomputing.stackexchange",
"id": 601,
"tags": "quantum-gate, quantum-state"
} |
Trajectory of an electric field line in a charge arrangement emerging at a given angle from a source charge | Question:
Consider a system of 2 particles with charges +q and -2q as the source of an electric field. The charges are fixed at A and B respectively. A field line from the 1st charge emerges at an angle of 150 degrees with the AB vector. Will this field line reach the 2nd charge, or will it travel to infinity? If it reaches the 2nd charge, what will be its angle with the BA vector at the location of the 2nd charge?
Answer: This problem is made more difficult because it is three dimensional and that makes imagining the situation and drawing diagrams harder so first look at the problem in two dimensions.
The electric field lines very close to a point charges (compared to the separation of the charges) are radially outwards as shown in the diagram below and not affected by the other charge.
The electric field lines are shown as a visual aid and in this case to show that the magnitude of charge $q_1$ is greater than the magnitude of charge $-q_2$ by having more electric field lines leaving charge $q_1$ than arriving at charge $-q_2$.
Given that electric field lines start at a positive charge and finish at a negative charge and also never cross then all the five red field lines which leave charge $q_1$ must finish as the five red field lines arriving at charge $q_2$.
Stating this in a more refined way means that the flux of electric field through arc $AB$ must be the same as the flux of electric field through arc $A'B'$.
If the circles which are drawn have a radius $r$ and the circle is taken to be a Gaussian "surface" then the proportion of the electric flux which passes through arc $AB$ compared with the whole circumference is $\dfrac {AB}{2 \pi r} = \dfrac {2r\alpha}{2\pi r} = \dfrac{ \alpha }{\pi}$.
Using Gauss's law the total flux through the circumference is $\dfrac{q_1}{\epsilon_{\rm o}}$ so the flux through arc $AB$ is $\dfrac{ \alpha }{\pi}\,\dfrac{q_1}{\epsilon_{\rm o}}$.
A similar analysis can be done to show that the flux through arc $A'B'$ is $\dfrac{ \beta}{\pi}\,\dfrac{q_2}{\epsilon_{\rm o}}$.
Equating the two fluxes will give a relationship between the two angles.
In three dimensions the analysis is a little trickier in part because you need to use the area of a cap of a sphere.
The area of a spherical cap is $A = 2 \pi r^2(1-\cos \,\alpha)$.
Knowing the area of the spherical cap the analysis is similar to the two-dimensional case except that now the Gaussian surface has an area of $4\pi r^2$.
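To illustrate with the numbers from the question (my own working, offered as a sketch rather than part of the original answer): equating the two flux fractions,
$$
\frac{2\pi r^2(1-\cos\alpha)}{4\pi r^2}\,\frac{q_1}{\epsilon_{\rm o}}=\frac{2\pi r^2(1-\cos\beta)}{4\pi r^2}\,\frac{q_2}{\epsilon_{\rm o}}
\quad\Rightarrow\quad
q_1(1-\cos\alpha)=q_2(1-\cos\beta).
$$
With $q_2=2q_1$ and $\alpha=150^\circ$, this gives $1-\cos\beta=\frac{1}{2}\left(1+\frac{\sqrt{3}}{2}\right)\approx 0.93$, i.e. $\beta\approx 86^\circ$; since a solution exists, this particular field line does terminate on the second charge.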
You can thus find a relationship between the two angles. | {
"domain": "physics.stackexchange",
"id": 40727,
"tags": "homework-and-exercises, electrostatics, electric-fields"
} |
When an object hangs from a Rigid rod is the force of the weight located in the center of the sign? | Question:
From the rope that supports the sign (assuming the mass is distributed evenly).
If the length of the sign is $L$, can I model it so that the weight is concentrated at $\frac{L}{2}$? Therefore, the force of the whole sign would be located on the pole at $80+\frac{L}{2}$?
Answer: The weight of your sign is actually supported by the two tiny threads that connect it to the rod.
The weight of the sign is distributed among these two threads; the amount of weight at each one will depend on the distribution of mass of the sign. Assuming that your sign is homogeneous and that the threads are positioned symmetrically, they will both support the same weight.
How these two forces affect the rod depends on the rest of the system. For example, if you hold the rod just from the middle of the two threads, then the sign will be horizontal. But if you hold the rod from the point where the left thread is attached, the sign will rotate to the right.
In your particular problem, I think that the rod works as a lever, the hinge being at the wall. If this is the case, then it is equivalent to the sign being hung from the middle point between the two threads. This is because the torque at the hinge is:
$T = F_1 d_1 + F_2 d_2 $
Being $F_1$ and $F_2$ the weights supported by each thread, and $d_1$ and $d_2$ the distances of each thread from the wall.
But we know that both weights are equal, because of symmetry ($F_1 = F_2 = F / 2$). And the middle point of the sign is $d_m = {d_1 + d_2 \over 2}$, so the equivalent weight there would be $F_m$ where:
$ T = F_m d_m $
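Filling in the arithmetic (an added step, using the symmetry result above):
$$
T=\frac{F}{2}d_1+\frac{F}{2}d_2=F\,\frac{d_1+d_2}{2}=F\,d_m .
$$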
And you get that $F_m$ equals the full weight of the sign. | {
"domain": "physics.stackexchange",
"id": 27485,
"tags": "newtonian-mechanics, forces, statics"
} |
On the electric field created by a conductor | Question: The electric field created by a conductor at a point $M$ extremely close to it is $\vec{E}=\vec{E_1}+\vec{E_2}$, where $\vec{E_1}$ is the electric field created by such a tiny bit of the conductor that we can suppose it to be a plane; and since $M$ is so close to the conductor that the distance is really small compared to the size of the plane, we further approximate it as an infinite plane and hence $\vec{E_1}=\frac{\sigma}{2\epsilon_0}$. And this is where I get stuck: when we use Gauss' law on an infinite plane we also account for the electric flux through the other side of the cylinder (here our Gaussian surface), but in the case of the conductor the electric field inside of it would be $\vec{0}$, and so $\vec{E_1}$ should be $\frac{\sigma}{\epsilon_0}$.
I cannot see where I've gone wrong.
Answer: You are not alone in being confused about this topic, and in part this is because the same symbol $\sigma$ is used to mean two different things: sheet charge density and surface charge density.
On the HyperPhysics website there is a derivation Electric Field: Sheet of Charge as shown below
The sheet charge density $\sigma$ is related to the total charge residing on both surfaces of a piece of conducting sheet not the charge residing on one surface of a piece of conducting sheet.
Note that $\sigma$ has not been called the surface charge density in the HyperPhysics derivation.
Let me change the definition of a symbol.
In the diagram below the sheet charge density is $\Sigma$ per unit area.
So the total charge on the sheet (with charges residing above and below the sheet) is $\Sigma A$.
In this case the surface charge density is $\sigma = \dfrac{\Sigma A}{2A} = \dfrac {\Sigma}{2}$
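In symbols (my own summary of the two conventions): a Gaussian pillbox straddling one face of the conductor encloses charge $\sigma A$ and has flux only through its outer face, so $EA=\sigma A/\epsilon_0$, i.e. $E=\sigma/\epsilon_0$ in terms of the surface charge density; the isolated-sheet result $E=\Sigma/2\epsilon_0$ uses the sheet charge density $\Sigma=2\sigma$, so the two expressions agree.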
In your example you are dealing with only one surface which has a surface charge density which is double the surface charge density that was used in the HyperPhysics derivation and so you should expect the electric field to be twice as large. | {
"domain": "physics.stackexchange",
"id": 54489,
"tags": "electrostatics, electric-fields, conductors"
} |
Get Set vs passing class at init? | Question: I have the main form with a list inside and I need to open another form and edit the list in the main form. Generally I'm doing something like ...
public class Form1 : Form
{
List<objectType> obj = new List<objectType>();
private void button_EDIT_BUTTONS_Click(object sender, EventArgs e)
{
EDIT_BUTTONS edit_b = new EDIT_BUTTONS(this);
edit_b.Show();
}
}
public partial class EDIT_BUTTONS : Form
{
Form1 main;
public EDIT_BUTTONS(Form1 mainP)
{
InitializeComponent();
main = mainP;
}
button = main.obj.Find(x => x.ID == object.ID);
............................................................
//modify button
}
My question is: should I pass the main form like this only to expose the list object, or should I create setters and getters in the main form?
Answer: You should only pass the list to the second form. If you pass the entire first form, you are exposing things that shouldn't be exposed, and tightly coupling the two forms.
"domain": "codereview.stackexchange",
"id": 3338,
"tags": "c#, winforms"
} |
Intel® TinyCrypt: AES-128/CTR to encryption/decryption of 2D arrays | Question: I need to encrypt/decrypt 2D arrays (double pointers) using AES-128/CTR using Intel TinyCrypt (written in C). The following are two helper methods to simplify the library usage. Any comments would be highly appreciated. Thanks in advance.
#include <tinycrypt/constants.h>
#include <tinycrypt/ctr_mode.h>
#include <tinycrypt/aes.h>
#define AES_128_KEY_LENGTH 16
#define AES_128_CTR_LENGTH 16
typedef struct aes_128_ctr_params_t{
byte key[AES_128_KEY_LENGTH];
byte ctr[AES_128_CTR_LENGTH];
} aes_128_ctr_params_t;
//---------------------------------------------------------------------------
inline int32_t encrypt(uint8_t const * const * const plaintext,
uint8_t * const * const ciphertext,
size_t const height,
size_t const width,
aes_128_ctr_params_t params) {
//TODO: Do some validation here!
struct tc_aes_key_sched_struct sched;
uint32_t result = TC_CRYPTO_SUCCESS;
result = tc_aes128_set_encrypt_key(&sched, params.key);
if (result != TC_CRYPTO_SUCCESS)
return result;
size_t const row_size_in_bytes = sizeof(uint8_t) * width;
for (size_t row_index = 0; row_index < height; ++row_index) {
result = tc_ctr_mode(ciphertext[row_index], row_size_in_bytes,
plaintext[row_index], row_size_in_bytes, params.ctr, &sched);
if (result != TC_CRYPTO_SUCCESS)
return result;
}
}
//---------------------------------------------------------------------------
inline int32_t decrypt(uint8_t const * const * const ciphertext,
uint8_t * const * const plaintext,
size_t const height,
size_t const width,
aes_128_ctr_params_t params) {
//TODO: Do some validation here!
struct tc_aes_key_sched_struct sched;
uint32_t result = TC_CRYPTO_SUCCESS;
result = tc_aes128_set_encrypt_key(&sched, params.key);
if (result != TC_CRYPTO_SUCCESS)
return result;
size_t const row_size_in_bytes = sizeof(uint8_t) * width;
for (size_t row_index = 0; row_index < height; ++row_index) {
result = tc_ctr_mode(plaintext[row_index], row_size_in_bytes,
ciphertext[row_index], row_size_in_bytes, params.ctr, &sched);
if (result != TC_CRYPTO_SUCCESS)
return result;
}
}
Answer: I find your code a bit dense and hard to read overall.
Your identifier names are compound words but all lower case.
plaintext
I prefer plainText others would prefer plain_text (and a lot of the code uses this second C like style). But either is preferable to your current style.
This seems redundant.
uint32_t result = TC_CRYPTO_SUCCESS;
result = tc_aes128_set_encrypt_key(&sched, params.key);
Just use one line:
uint32_t result = tc_aes128_set_encrypt_key(&sched, params.key);
Technically both functions exhibit undefined behavior (in C++ not sure about C). There is no return on successful completion.
if (result != TC_CRYPTO_SUCCESS)
return result;
}
// Add the following line
return result;
} | {
"domain": "codereview.stackexchange",
"id": 21386,
"tags": "c++, c, cryptography, aes"
} |
does every graph have a complete, preferred, stable and grounded extension? | Question: I was looking at graph extensions in argumentation theory and was wondering, which of those extensions does every graph have? and which of those only some graphs have? and is there a proof for each?
thanks
Answer: As a complete extension is an admissible set containing exactly those arguments it defends, there is always at least one complete extension, which may be the empty set and nothing else.
Grounded extension is a minimal complete extension, hence there is always one, and it also has the property to be a unique fixed point.
Perferred extension are maximal complete ones, hence starting from the same principle (there is at least one complete extension) there should always be at least one.
As far as I know, the notion of stable extension is not so well-defined and their existence therefore depends on the chosen definition.
You can probably work out additional properties from the paper
Baroni, P., Caminada, M., & Giacomin, M. (2011). An introduction to argumentation semantics. The Knowledge Engineering Review, 26(04), 365-410. | {
"domain": "cs.stackexchange",
"id": 8946,
"tags": "graphs, artificial-intelligence"
} |
Storing hierarchical data into a data structure | Question: With the following data in a table,
+----+-----------+-----------+---------+
| id | name | parent_id | prev_id |
+----+-----------+-----------+---------+
| 1 | Section 1 | NULL | NULL |
| 2 | Item 1.1 | 1 | NULL |
| 3 | Item 1.2 | 1 | 2 |
| 4 | Item 1.3 | 1 | 3 |
| 5 | Section 2 | NULL | 1 |
| 6 | Item 2.1 | 5 | NULL |
| 7 | Item 2.2 | 5 | 6 |
| 8 | Item 2.3 | 5 | 7 |
| 9 | Item 1.4 | 1 | 4 |
+----+-----------+-----------+---------+
I have created this data structure:
[
1: stdClass Object (
[id] => 1
[name] => Section 1
[parent_id] => NULL
[prev_id] => NULL
[children] => [
2: stdClass Object ([id] => 2, [name] => Item 1.1, [parent_id] => 1, [prev_id] => NULL)
3: stdClass Object ([id] => 3, [name] => Item 1.2, [parent_id] => 1, [prev_id] => 2)
4: stdClass Object (...)
9: stdClass Object (...)
]
),
5: stdClass Object (
[id] => 5
[name] => Section 2
[parent_id] => NULL
[prev_id] => 1
[children] => [
6: stdClass Object (...)
7: stdClass Object (...)
8: stdClass Object (...)
]
)
]
Here is the working code that I wrote:
function build_tree(array $rows)
{
$accum = array();
$rec = function ($parentId = null, $prevId = null) use (&$rec, &$accum, $rows) {
foreach ($rows as $row) {
if ($row->parent_id == $parentId && $row->prev_id == $prevId) {
// add child
if ($row->parent_id) {
if (!isset($accum[$row->parent_id]->children)) {
$accum[$row->parent_id]->children = array();
}
$accum[$row->parent_id]->children[$row->id] = $row;
// add root
} else {
$accum[$row->id] = $row;
}
// find children
$rec($row->id);
// find next root
$rec($row->parent_id, $row->id);
}
}
};
$rec();
return $accum;
}
// SELECT * FROM table ORDER BY parent_id, prev_id
$rows = $itemDAO->getItems();
$items = build_tree($rows);
I am terrible at recursion, so any suggestions on how to improve the code would be helpful.
Answer: There are actually a couple of issues with that approach, so we will step through them one by one.
Let's start by revising what a tree is. A tree is a connected, undirected graph with a single root node and no loops.
Let's compare that with your sample data (and the implementation based on it):
There are no loops - Check
There is exactly one root node - No, there are two. And the graph isn't connected either.
So what you have in your database isn't a tree to start with.
Add a new node which is the only root to your graph, and attach all of the previously created root nodes to this new node. This makes it much easier to work with the data.
Now let's have a look at your code. There are a couple of oddities which strike immediately:
Using == for comparisons
You've got to be careful with == in PHP; it's not typesafe. What this specifically means for you, in your case, is that 0 == NULL is actually true, while the typesafe === operator correctly yields false.
This is an important difference for you, since you are using NULL as a special value, but there might also exist an entry with the perfectly valid numerical ID 0.
Having an $accum variable
This is a direct consequence of not having a single root node. Half of your code revolves around deciding whether a node you visited is a root node or not. Specify a root node explicitly prior to entering the recursion, and this problem is void.
Scanning the result set multiple times instead of indexing it
You don't actually need to iterate over the result set over and over again to find the rows you want. Index it once, and you are good to go:
index = []
foreach(rows as row) {
index[row->id] = row
}
That's one of the benefits of PHP, you always have a hashmap at hand when you need one ;)
Supporting only a single level of nesting
Let's see what happens if we add another generation of children. $items suddenly lists the child with grandchildren as another regular root.
Oops, that did not go as expected, did it?
Not using the return statement in a recursion
This points out that you have a rather weird understanding of recursion. When you do a recursion, your goal is always to break the problem down into smaller problems, and to return a partial solution.
In this case, respectively for a tree in general, your goal is always to completely construct a single sub-tree, starting at the current node, prior to passing the current tree back to the parent context.
So in short, for traversing a tree, the recursion function looks something like (pseudo code):
TreeObject build_tree(id, index) {
tree = new TreeObject(index[id]);
foreach(index as node) {
if(node->parent == id) {
tree->children[] = build_tree(node->id, index);
}
}
return tree;
}
Notice a difference? There is nothing modified by reference. In a recursion, there is no global state, each single recursion step only reads the data.
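For concreteness, the same pseudo-code can be transliterated into a runnable sketch (JavaScript here; the sample rows and field names are illustrative, not taken from the question's schema):

```javascript
// index maps id -> row; each row has {id, name, parent}.
// The recursion only reads data and returns a fully built sub-tree.
function buildTree(id, index) {
  const node = { ...index[id], children: [] };
  for (const row of Object.values(index)) {
    if (row.parent === id) {
      node.children.push(buildTree(row.id, index));
    }
  }
  return node;
}

// Example with an explicit synthetic root (id 0), as suggested above:
const rows = [
  { id: 0, name: "root", parent: null },
  { id: 1, name: "Section 1", parent: 0 },
  { id: 2, name: "Item 1.1", parent: 1 },
  { id: 5, name: "Section 2", parent: 0 },
];
const index = {};
for (const row of rows) index[row.id] = row;
const tree = buildTree(0, index);
```

Note the typesafe === comparison, so a row with id 0 can never be confused with a NULL parent.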
A strange way to sort children
There is nothing wrong with wanting to define an order on children of a node, but make sure you are aware what it actually means. It's still a tree, so the same construction applies. But in addition, you want to be able to sort the children of a node. Or even better, already have them sorted, e.g. in a linked list.
A linked list is just what you constructed with prev_id, except that you linked it backwards, not forwards. If you really want to use this structure (I will cover alternatives later), do yourself a favor and use next_id instead. This at least allows you to traverse the list of children in a more natural way.
Furthermore, if you want to be able to use the linked list halfway efficiently, make sure that you always store an entry point to the linked list. This means in addition to next_id, also store a first_child on the parent node.
Let's just extend the code sample for better comprehension:
TreeObject build_tree(id, index) {
tree = new TreeObject(index[id]);
next_child = tree->first_child;
while(next_child != NULL) {
tree->children[] = build_tree(next_child, index);
next_child = index[next_child]->next_id;
}
return tree;
}
Still rather comprehensible, isn't it? Unfortunately, the drawback of using next and first_child references is that updating the database just got slightly more complicated.
It's generally better to just use an index which can be sorted by. So best drop next_id and first_child again, and instead just add a sort_index field which contains a unique number for each entry. You can then just fetch the rows in the correct order from the database, and the previous algorithm already yields the children in the correct order.
(Bonus round: Ensure that sort_index is not only ascending for all children of a single node, but also when traversing the tree in pre-order. Now the previous algorithm can be modified to traverse the result set from the database only a single time, treating it as a queue which only supports the pop and peek functions. If you got that working, congratulations. Your algorithm has just reached O(n) runtime.)
Reinventing the wheel. While it is possible to fix the flaws in the design to get the recursion working properly, storing a tree with a fixed order in a database is actually a standard problem. A problem to which much better solutions than explicitly storing parent-child relations are known.
Before we go ahead, ask yourself a question: Given that the database actually contains a much larger tree, and you only want to fetch a sub-tree starting at a given node, how would you do that?
Right, you couldn't, at least not without modifying the algorithm slightly. You would always have to fetch the whole table content.
There is actually a model which provides that, and also gives the sort order of children for free: Nested set model
Have a read for yourself, knowing how these structures work can never hurt. | {
"domain": "codereview.stackexchange",
"id": 19842,
"tags": "php, recursion, tree"
} |
Reusable carousel slider component using the revealing module pattern | Question: I want to understand better how to make re-usable components that I can create multiple instances of, and I think the revealing module pattern comes in handy; however, I'm not sure I'm implementing it right.
The code below receives a string that represents a single pre-existing DOM element (with certain specific structure) into which I initialize the animation and create a navigation panel to manually select the next slide, live working code can be seen here.
Am I making a good use of the pattern and the closure inside the module?
var Slider = function(opt){
var slider = {}; // i'm using this object to add variables from the functions below and make them available to the other functions
var currentSlide = 0;
var nextSlide = 1;
var sliderInterval;
var _init = function(options){
//receive options, we need the -el- attribute
slider.el = $(options.el);
slider.items = slider.el.find("[class^='main-header']");
slider.sliderLength = slider.items.length;
slider.offset = slider.el.children().length - slider.sliderLength;
createNavigation();
setSliderInterval();
}
var createNavigation = function(){
//create the -buttons- to navigate between slides
var ulSlider = slider.el.find('.slider-nav'); //position this where you want your navigation buttons to be
var newLI = '<li></li>';
var newLIActive = '<li class= "active-button"></li>';
for (var i = 0; i< slider.sliderLength ; i++){
if (i === 0){
ulSlider.append(newLIActive);
}
else{
ulSlider.append(newLI);
}
}
addEventListeners();
};
var addEventListeners = function(){
// add event binders to the dynamically added li elements
slider.el.find('.slider-nav').on('click', 'li', function() {
//set the clicked item's index to be the next slide, stop the interval and start it again
nextSlide = $(this).index();
clearInterval(sliderInterval);
startSlider();
setSliderInterval();
});
}
var setSliderInterval = function(){
sliderInterval = setInterval(startSlider, 4000);
};
var startSlider = function(){
var mainDivs = slider.items;
var ulSlider = slider.el.find('.slider-nav');
//which slide comes next?
if(nextSlide >= slider.sliderLength){
nextSlide = 0;
currentSlide = slider.sliderLength -1;
}
//animations using the eq selector
//we first add the animation class, and then remove the previous one we don't want
//toggle class, it results in an unwanted behaviour
mainDivs.eq(currentSlide).find('h1').addClass('fade-out-bottom-top');
mainDivs.eq(currentSlide).find('h1').removeClass('fade-in-top-bottom');
mainDivs.eq(currentSlide).find('p').addClass('fade-out-left-right');
mainDivs.eq(currentSlide).find('p').removeClass('fade-in-left-right');
mainDivs.eq(currentSlide).find('a').addClass('fade-out-top-bottom');
mainDivs.eq(currentSlide).find('a').removeClass('fade-in-bottom-top');
mainDivs.eq(currentSlide).fadeOut('slow');
mainDivs.eq(nextSlide).find('h1').addClass('fade-in-top-bottom');
mainDivs.eq(nextSlide).find('h1').removeClass('fade-out-bottom-top');
mainDivs.eq(nextSlide).find('p').addClass('fade-in-left-right');
mainDivs.eq(nextSlide).find('p').removeClass('fade-out-left-right');
mainDivs.eq(nextSlide).find('a').addClass('fade-in-bottom-top');
mainDivs.eq(nextSlide).find('a').removeClass('fade-out-top-bottom');
mainDivs.eq(nextSlide).delay(300).fadeIn('slow');
//find offset of child elements to use their index to match the current slide with the selected button
ulSlider.children().removeClass("active-button");
ulSlider.children().eq(mainDivs.eq(nextSlide).index() - slider.offset).addClass("active-button");
//update variables
currentSlide = nextSlide;
nextSlide += 1;
};
return {
init: _init // reveal only the init function
}
}
var headerSlider = new Slider();
headerSlider.init({el: '#slider'});
Answer: From the demo page, I have to say, it looks very nice. I am very curious why you do not use the constructor to call init; that seems a bit verbose for any user. Neither does it look very good to use new Slider(), as it is not a constructor in the strict sense; you don't even need it.
For a detailed description, please find it on MDN, where this explanation is of importance to you. If the constructor returns an object, you will not receive a new Slider object, but just the returned object.
The object returned by the constructor function becomes the result of the whole new expression. If the constructor function doesn't explicitly return an object, the object created in step 1 is used instead. (Normally constructors don't return a value, but they can choose to do so if they want to override the normal object creation process.)
So, I honestly don't see the need to use a constructor and then having to call the init function, it seems very illogical to me.
Another thing that was not apparent to me was that you are in need of jQuery; it is not mentioned anywhere explicitly, but you refer to the $ inside the code. If I am not mistaken, jQuery prefers an IIFE to set up their plugin system.
It has the advantage that you can explicitly require jquery to be available, and it would be apparent through any use of your plugin itself, looking a bit like this:
(function($) {
$.fn.slider = function( options ) {
// manipulate the data
}
}(jQuery));
more about plugin creation for jQuery can be learned here
Currently, I believe your method takes a bit too much control, and as a user of the slider function, I cannot really do a lot. I can only start the slider; I do not have any actions available to me where I could stop the slider (e.g. on mouseover) or decide how fast it should slide, and though you offer the opt argument, it is completely unused at the moment.
There is also no way to interact with the slider from a different place in your code, to for example show a specific slide on the click of a button. So I think your plugin is too much in control.
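To make the "the plugin is too much in control" point concrete, here is a framework-free sketch of a slider controller that exposes start/stop/goTo (names and the default interval are illustrative; DOM/jQuery wiring is deliberately left out):

```javascript
// Illustrative only: a slider state controller exposing the actions the
// review asks for.  Rendering (fadeIn, active-button classes) stays with
// the caller; this object just owns the index and the timer.
function createSlider(slideCount, options) {
    var opts = options || {};
    var interval = opts.interval || 3000;   // assumed default, in ms
    var current = 0;
    var timer = null;

    function next() {
        current = (current + 1) % slideCount;
        return current;
    }
    function start() {
        if (timer === null) { timer = setInterval(next, interval); }
    }
    function stop() {                        // e.g. wire this to mouseenter
        if (timer !== null) { clearInterval(timer); timer = null; }
    }
    function goTo(index) {                   // jump to a slide from anywhere
        current = ((index % slideCount) + slideCount) % slideCount;
        return current;
    }
    return {
        start: start, stop: stop, goTo: goTo, next: next,
        currentSlide: function () { return current; }
    };
}
```

A caller could then bind stop to mouseenter and start to mouseleave, or call goTo from a button click, which covers both of the missing interactions mentioned above.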
"domain": "codereview.stackexchange",
"id": 27085,
"tags": "javascript, jquery, revealing-module-pattern"
} |
Computing flux modulation of the energy spectrum in a DC SQUID | Question: I've been reading some work by Y. Chen et al's paper on tunable couplers for transmon qubits (you can find the work at https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.113.220502) and wanted to model the circuit but am running into an issue. As I understand it, the circuit can be approximated as a lumped element parallel LCJJ. I'm interested because this is a general class of LC oscillators with a tunable inductance given by the DC and RF SQUIDs in the lumped element circuit limit.
So I write down the hamiltonian in terms of relevant energies and junction asymmetry $\alpha := E_{J_{2}}/E_{J_{1}}$, and $\chi := \frac{1-\alpha}{1+\alpha}$ and I get
\begin{align}
H_\text{total} &= 4E_c \frac{d^2}{d\phi^2} + \frac{E_L}{2}(\phi-\phi_{ext})^2 \\
&- (1+\alpha) E_{J_{1}} \cos \left(\pi\frac{\Phi}{\Phi_0} \right) \sqrt{1 + \chi^2 \tan^2 \left(\pi \frac{\Phi}{\Phi_0} \right)} \\
& \cos \left(\phi - \arctan \left(-\chi \, \tan \left(\pi\frac{\Phi}{\Phi_0} \right) \right) \right)
\end{align}
which as I understand it is all fine and well with $E_c$ and $E_L$ being the charging and inductive energies respectively, $\Phi_0$ the flux quantum, $\phi_{ext}$ the RF squid external bias, and $\Phi$ the DC squid bias.
As a sanity check, I decided to see if I could reproduce the flux tuning of the 0-1 resonance for a symmetric DC SQUID in the transmon regime. I set $\alpha = 1$, $E_L = 0$,$E_c = 0.3$ GHz, $E_J = 20$ GHz and have the reduced hamiltonian:
$$H_\text{Transmon} = 4E_c \frac{d^2}{d\phi^2} - 2E_{J} \cos \left(\pi\frac{\Phi}{\Phi_0} \right) \cos(\phi) \, .$$
To solve this I discretize the phase $\phi$ and $\Phi/\Phi_0$ over the interval $[-\pi,\pi]$ and $[-2,2]$ respectively, generate the corresponding matrix equation in matlab, and compute the eigenvalues using eig() function. I expect the first transition energy to be approximately $E_{01} (\Phi=0) \approx \sqrt{8E_cE_J}-E_c = 6.628 GHz$ and to modulate to zero frequency every half flux quantum. See below for what I expect to find for various levels or junction loop asymmetry (note this guy has the label asymmetry for $\chi$ in my notation):
For the $\alpha = 1$ case I get the below plot for $f_{01}$ and $f_{12}$. The peak frequency and modulation are wrong. Notably, it does get the nominal anharmonicity correct at 0 flux bias. I have noticed that the outcome changes somewhat by changing the window and step size in phase/flux to be multiple periods with smaller slices which makes me suspect the periodicity and convergence are issues at play. A friend suggested it might be due to the fact that the junction phase shifts as a function of flux bias and that I might need to solve in phase about $\arctan(-\chi \tan(\pi\frac{\Phi}{\Phi_0}))$. I tried deleting the phase shift as a fast test but it doesn't seem to fix the issue. Any suggestions? I can also supply the code I wrote.
Answer: After some more investigation with a friend who works with superconducting circuits I have the solution. The first mistake was I wasn't giving it the values I thought, so the peak frequency error was essentially a result of me setting the individual Josephson energies to $E_{J\Sigma} = (1+\alpha)E_{J1}$.
The bigger issue with flux biasing is actually somewhat subtle (at least to me, someone who doesn't work with flux biased circuits). Even though the potential for the junction loop is $-E_{J\Sigma}\cos\left(\pi\frac{\Phi}{\Phi_0}\right)\cos(\phi)$ to get the "correct" behavior at multiple flux quanta you need to diagonalize with $-E_{J\Sigma}|\cos\left(\pi\frac{\Phi}{\Phi_0}\right)|\cos(\phi)$ for the flux bias in the transmon regime (or at least when $E_L = 0$). I don't have a clean explanation for why it should be simulated this way. I know when the sign changes the location of the potential minima move, but to me that seems like it should be accounted for already during diagonalization.
Another thing of note that fundamentally limits you even in the perfect junction symmetry case from achieving zero frequency at half a flux quantum is the fact you fail to have a bound state solution for a given "kinetic" energy of $4E_c$. In my simulations it appears you lose the bound state around $3E_c$. I am sure we could probably derive that exact lower bound for a given $E_c$ but that is an exercise for another time. | {
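To see the answer's two fixes numerically, here is a minimal sketch (my own, assuming numpy; it works in the charge basis rather than the question's phase discretization) with the *total* Josephson energy set to 20 GHz and the |cos| flux dependence:

```python
import numpy as np

# Charge-basis check of the transmon limit (my own sketch, not the original
# MATLAB code).  Per the answer: use the *total* Josephson energy (20 GHz
# here) and the |cos| flux dependence.  E_C = 0.3 GHz as in the question.
E_C, E_Jsum = 0.3, 20.0
n_cut = 20
n = np.arange(-n_cut, n_cut + 1)            # charge states |n>

def f01(flux):
    """First transition frequency (GHz) at a flux bias in units of Phi_0."""
    E_J = E_Jsum * abs(np.cos(np.pi * flux))
    # 4*E_C*n^2 on the diagonal; -E_J/2 couples neighbouring charge states
    H = np.diag(4.0 * E_C * n**2) - 0.5 * E_J * (
        np.eye(n.size, k=1) + np.eye(n.size, k=-1))
    evals = np.linalg.eigvalsh(H)
    return evals[1] - evals[0]

print(f01(0.0))   # ~ sqrt(8*E_C*E_Jsum) - E_C = 6.6 GHz at zero bias
```

The |cos| makes the tuning symmetric about half a flux quantum, matching the expected modulation curve.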
"domain": "physics.stackexchange",
"id": 62313,
"tags": "quantum-information, josephson-junction"
} |
How did they take photos of Jupiter? | Question: How did they take photos of Jupiter - I mean Jupiter is illuminated and that's a lot of light to produce. Am I missing something, and there was some sort of dark photo technology used, or was there simply enough light from Sun to begin with? Or is this photo a fake?
Answer: You can see Jupiter in the night sky with your naked eyes due to its reflected sunlight (although I believe that in July and August of 2014 Jupiter is very close to the Sun in the sky and is visible only for a little while near twilight). You can take a picture of Jupiter in the sky with any old camera.
If you want a high-quality picture, your camera needs to have a lens arrangement that will make the image of Jupiter on the camera's CCD larger than the image of Jupiter on your retina. The thing to look for is a lens with a long focal length. If the focal length of the lens[1] is long enough, it will need to stand some distance away from the camera's CCD on a rigid mount; this is usually called a telescope. You can replace the camera with your eye and see Jupiter's cloud bands directly.
[1] Actually most telescopes use a curved mirror rather than a lens, for several technical reasons.
Images as nice as that one usually come (possibly) from professional astronomical observatories on the ground, or from the Hubble Telescope, probably NASA's most successful instrument ever (after a rocky start). Your particular image seems to have been taken by the robotic spacecraft Cassini when it passed near Jupiter en route to Saturn, where it has been orbiting and collecting data for the last ten years. In that case the camera had the advantage of being much closer to Jupiter than I'll ever be :-( | {
"domain": "physics.stackexchange",
"id": 15286,
"tags": "optics, astronomy, jupiter, astrophotography"
} |
Ozone-Air Paradox | Question: One day, I decided to calculate the density of air, so I searched for the individual densities of the 3 main elements that forms breathable air. I found the following data, at $\pu{0^\circ C}$ and $\pu{100 kPa}$:
\begin{array}{lrS}
\text{Component} &\% &\rho,\,\pu{g cm-3}\\ \hline
\text{Nitrogen} &78 &0.001251 \\
\text{Oxygen} &21 &0.00142897 \\
\text{Argon} &1 &0.001784
\end{array}
I'm calculating the density of $\pu{1 cm3}$ of air. Individual masses in $\pu{1 cm3}$ of air:
\begin{align}
&\text{Oxygen} &\pu{0.0003000837 g}\\
&\text{Nitrogen} &\pu{0.00097578 g} \\
&\text{Argon} &\pu{0.00001784 g} \\
\end{align}
The final density is equal to $\pu{0.0012937037 g cm-3}$
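The mixture arithmetic can be double-checked with a few lines (same numbers as above):

```python
# Quick check of the mixture arithmetic (volume fractions times densities).
fractions = {"N2": 0.78, "O2": 0.21, "Ar": 0.01}
densities = {"N2": 0.001251, "O2": 0.00142897, "Ar": 0.001784}  # g/cm^3

rho_air = sum(fractions[g] * densities[g] for g in fractions)
print(rho_air)                       # ~0.0012937037 g/cm^3, as above

rho_ozone = 0.00214                  # g/cm^3 at the same conditions
print(rho_ozone / rho_air - 1.0)     # ~0.654: ozone is about 65% denser
```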
Then I realized that ozone should be lighter for the ozone layer to be able to exist; otherwise, the ozone gas, being considerably denser than air, would pass directly through the atmosphere. The problem is that ozone gas at the same conditions (STP) has a density of $\pu{0.00214 g cm-3}$, which is $65.41654785\%$ heavier than air.
With the context explained, my question is: if there really is an ozone layer up there in the atmosphere, how is it there, being heavier than the base gas mix that is below it?
Answer: The atmosphere is not well-separated into layers of density; the most prominent reason for that is wind that mixes the different layers.
However, even aside from that there is a reason for ozone to be found in the ozone layer primarily and not much elsewhere: simply because it is produced there by UV radiation and because it breaks down fast enough to not come all the way down. | {
"domain": "chemistry.stackexchange",
"id": 8796,
"tags": "physical-chemistry, atmospheric-chemistry"
} |
Is it false that the absorbance of two concentrations is the sum of the absorbances of the concentrations? | Question: I am in the lab and trying to perform the molybdenum blue test for phosphorus using a photometer. It seems to be common phrasing in colorimetric lab sheets that
Absorbance of a mix of two concentrations is the sum of the absorbances of the individual
concentrations
This is often exemplified by statements such as: “You can zero your photometer on the reagent and subtract the absorbance of the sample to just measure the absorbance due to the reaction”. I tried to test if this logic is correct, but I think it is false, and performed the following experiment by preparing 4 cuvettes to find out:
I zeroed my instrument on a cuvette with distilled water.
I prepared 1ml of the molybdenum reagent
I prepared 1ml of the molybdenum reagent to which I add 10 ul phosphorus sample
I prepared 1ml of distilled water to which I add 10 ul of phosphorus sample.
I measured the absorbance of the four cuvettes within about 1350 milliseconds of mixing and they were 0, 0.172, 0.072, 0.402. I measured again after 10 seconds and got similar absorbances.
As 0.172 + 0.072 = 0.244 and not 0.402, I conclude that the logic cited above is false.
QUESTION:
Am I correct in jumping to this conclusion? What is the reason that the mixed substance does not have the absorbance of the sum of the individual concentrations?
Answer: There is a fallacy in your experiment. Yes, absorbance is indeed additive at low concentrations, provided the reagents do not react. This sum property allows simultaneous determination of two or more components in a solution using the additive property of Beer's law.
Note: When you zero the photometer you are not taking absorbance into account but a lot of other things too. Reflection, refraction, scattering etc by cuvets.
For that, do a simple experiment: Zero the photometer when you have nothing in the spectrophotometer (i.e., with air only). Now put an empty cuvet back. Do you still see zero absorbance? Most likely not, and you know very well that the cuvet is not absorbing any light at all. It should be transparent.
In order to test the absorbance idea, take two dilute dyes and then check the sum after mixing and taking dilution into account. | {
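The additivity the answer describes can be illustrated with Beer's law for two hypothetical non-reacting dyes: transmittances multiply, so absorbances add (all eps and concentration values here are invented):

```python
import math

# Two hypothetical, non-reacting dyes (eps in L/(mol*cm), conc in mol/L);
# all numbers are invented for illustration, path length 1 cm.
def transmittance(eps, conc, path_cm=1.0):
    return 10.0 ** (-(eps * conc * path_cm))   # Beer-Lambert: T = 10^(-eps*c*l)

T1 = transmittance(12000.0, 1.0e-5)            # dye 1 alone: A1 = 0.12
T2 = transmittance(8000.0, 2.5e-5)             # dye 2 alone: A2 = 0.20
# Light passing through both absorbers: transmittances multiply,
# so the absorbances add.
A_mix = -math.log10(T1 * T2)
print(A_mix)                                    # ~0.32 = A1 + A2
```

In the question's experiment this breaks down because the reagent reacts with the sample, so the mixed cuvette is not the same two absorbers side by side.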
"domain": "chemistry.stackexchange",
"id": 15956,
"tags": "biochemistry, uv-vis-spectroscopy"
} |
Master's theorem | Question: Is Master's theorem applicable to $T(n) = 2 T(\frac{n}{2})+n\log n$?
I got this doubt from here:
https://gateoverflow.in/227814/introduction-to-algorithms
Answer: Yes, Master's theorem is applicable to equations of the type:
$$T(n) = aT\left(\frac{n}{b}\right) + \Theta(n^k \log^p n)$$
where $a \geq 1$, $b \gt 1$, $k \geq 0$ and $p$ is some real number.
This is a slightly modified form that is easier to apply. The results are as follows:
if $a \gt b^k$, then $$T(n) = \Theta(n^{\log_b a})$$
if $a = b^k$, then
a) if $p \gt -1$, $$T(n) = \Theta(n^{\log_b a} \log^{p+1} n)$$
b) if $p = -1$, $$T(n) = \Theta(n^{\log_b a} \log\log n)$$
c) if $p \lt -1$, $$T(n) = \Theta(n^{\log_b a})$$
if $a \lt b^k$, then
a) if $p \geq 0$, $$T(n) = \Theta(n^k \log^p n)$$
b) if $p \lt 0$, $$T(n) = O(n^k)$$
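Applied to the recurrence in the question, this gives case 2(a):

```latex
% For the recurrence in the question, T(n) = 2T(n/2) + n\log n:
%   a = 2,\quad b = 2,\quad k = 1,\quad p = 1.
% Since a = b^k = 2 and p = 1 > -1, case 2(a) applies:
T(n) = \Theta\left(n^{\log_2 2}\,\log^{1+1} n\right) = \Theta\left(n\log^2 n\right)
```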
"domain": "cs.stackexchange",
"id": 12164,
"tags": "recurrence-relation"
} |
how to connect to an already running roscore from a tethered usb device | Question:
hi
i am trying to connect to an already running roscore with a tethered usb device.
situation is as follows:
i have a running roscore on a ubuntu machine
after the roscore is up, i connect an android tablet to the computer and create a network connection using usb tethering
i am using ros android to communicate with the roscore
note:
connecting the tablet with usb tethering creates a new usb network device over which i want to talk to the roscore
the tablet may be connected and disconnected several times without re-starting the roscore
also maybe other clients connect to the master using a ethernet connection
has anyone an idea how to do that ? how would the ros network setup look like (master uri / ros ip / ros hostname) ?
Originally posted by jochen.mueck on ROS Answers with karma: 48 on 2012-10-19
Post score: 0
Answer:
Seems like i found a working solution myself.
this also works if you have the following setup:
running roscore on a machine A
connect ethernet cable and get ip adress via dhcp
communicate from a machine B via network to roscore on machine A
the solution is to set up a network bridge on machine A which bridges to all other network devices (e.g usb0, eth0 ...). Assign a static IP to the bridge device and set ROS_MASTER_URI and ROS_IP to that ip. Start the roscore.
When plugging in the ethernet cable or tablet (...) just restart the bridge adapter using ifconfig down/up and all devices can communicate with the core using the bridge adapter's ip
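With the modern iproute2 tools the bridge setup described above might look roughly like this (the answer used ifconfig; this translation is mine, and every interface name and address is a placeholder):

```shell
# Sketch only: the interface names (eth0, usb0), the bridge name, and the
# 10.0.0.1/24 address are placeholders -- adjust them to your network.
sudo ip link add name br0 type bridge
sudo ip link set eth0 master br0          # attach the ethernet NIC
sudo ip link set usb0 master br0          # attach the tethered usb NIC
sudo ip addr add 10.0.0.1/24 dev br0
sudo ip link set br0 up

export ROS_MASTER_URI=http://10.0.0.1:11311
export ROS_IP=10.0.0.1
roscore
```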
Originally posted by jochen.mueck with karma: 48 on 2012-10-23
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 11438,
"tags": "ros, linux, ubuntu, network, roscore"
} |
Quantum entanglement and information transfer | Question: I know it was stated many times that information transfer using entanglement is not possible and I am most probably wrong but I would be glad if you can at least point me to where I am mistaken.
If we entangle only 2 particles it does not give us much. However lets say we entangle 4 pairs of particles.
Now we have 2 observers Alice and Bob which have 4 particles each (one of each entangled pair).
Lets say particle one will be used to pass the data and particles 2,3,4 to indicate whether the data is correct.
So for example Alice and Bob both agree beforehand that if 2,3,4 have for example all spin up (spin down for the other observer) then data transfer occurs and the spin of particle 1 indicates the correct value. Otherwise it indicates that transfer is not happening (break between 2 messages).
I know now that there is something called the quantum Zeno effect, which tells us that if we measure the particles frequently enough we have a great chance of them returning to the state we measured at the beginning. I don't know if that is already possible or not but let's assume that we are able to measure so frequently that over 0,1 second we are able to get the same measurement of spin 99% of the time.
My idea is as follows.
Let's say Alice wants to transfer spin up to Bob.
She measures particle one until she gets spin up and after that measures frequently enough to experience the Zeno effect and maintain the same measurement result for as long as she can.
Then she has to indicate that the value is correct and Bob can read the correct data by forcing the other 3 particles to be all spin-up (as agreed with Bob). I assume she would also use Zeno effect here.
On the other side Bob observes to detect if particles 2,3,4 all have spin down on his side. If they do, then it means he can assume with great probability that the spin of particle 1 is what Alice intended him to get.
Now Alice stops measuring particles so Bob gets inconsistent results and knows that there is a break in communication.
Alice repeats everything again but this time maybe passing the other spin in particle one.
This way Bob can read bits of information from Alice one by one in 0,2 secs period.
Maybe we could increase this frequency or use much more particles and get even faster transfer?
Wouldn't this be possible using multiple particle system and Zeno effect?
Answer: After the first measurement, Alice and Bob's further results are uncorrelated. They go from being $| \downarrow \uparrow \rangle + |\uparrow \downarrow \rangle$ to just $|\uparrow \downarrow \rangle$ or $| \downarrow \uparrow \rangle$, and this is no longer an entangled state (since we can write it as a product). Things that Alice does to her qubit after measuring it don't affect Bob's. Otherwise, we don't need to use tricky Zeno effect anything--she could just flip it over with a $\pi$-pulse or something. They only get one measurement on each qubit, essentially, so this idea that Bob can "keep testing his qubit" doesn't work. It's also not clear to me what adding the extra qubits achieve, is it just redundancy? | {
"domain": "physics.stackexchange",
"id": 25576,
"tags": "quantum-mechanics, quantum-information, quantum-spin, quantum-entanglement, faster-than-light"
} |
How do I design a DP algorithm to count the minimum amount of continuous palindromic subsequences in sequence? | Question: Taking a sequence, I am looking to calculate the minimum amount of continuous palindromic subsequences to build up such a sequence. I believe the best way is using a recursive DP algorithm.
I am having trouble picturing the problem space or the most efficient way to do it. (For example, is it best to find the longest palindromic continuous subsequence first, or rather find the earliest one first?)
For example:
Sequence: [A, C, C, C, A, B, Y, B] is made up from 2 subsequences, namely [A, C, C, C, A], [B, Y, B]
So, therefore, my output for this example would be 2.
Thanks in advance!
Answer: Let $S$ be your input sequence and denote by $S[i:j]$ the subsequence going from the $i$-th to the $j$-th element of $S$ (inclusive).
Let $OPT[i]$ be the minimum amount of continuous palindromic subsequences in which $S[1:i]$ can be partitioned. Let $OPT[0]=0$. For $j=1,\dots,|S|$, you have:
$$
OPT[j] = \min_{\substack{i=1,\dots,j\\S[i:j] \text{ is a palindrome}}} \{1+ OPT[i-1] \}.
$$
The value you are looking for is $OPT[|S|]$. This algorithm requires time $O(|S|^2)$. Notice that all the pairs $(i,j)$ for which $S[i:j]$ is palindrome can be precomputed in $O(|S|^2)$ time. | {
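The recurrence above can be sketched directly in Python; the palindrome table is the $O(|S|^2)$ precomputation mentioned at the end (the function name is mine):

```python
def min_palindromic_partition(s):
    """Minimum number of contiguous palindromic pieces covering s, O(|S|^2)."""
    n = len(s)
    # is_pal[i][j]: s[i..j] (inclusive, 0-indexed) is a palindrome
    is_pal = [[False] * n for _ in range(n)]
    for j in range(n):
        for i in range(j, -1, -1):
            if s[i] == s[j] and (j - i < 2 or is_pal[i + 1][j - 1]):
                is_pal[i][j] = True
    INF = float("inf")
    opt = [0] + [INF] * n          # opt[j]: answer for the prefix s[:j]
    for j in range(1, n + 1):
        for i in range(1, j + 1):
            if is_pal[i - 1][j - 1]:       # s[i:j] in the 1-indexed notation
                opt[j] = min(opt[j], 1 + opt[i - 1])
    return opt[n]

print(min_palindromic_partition("ACCCABYB"))  # 2, as in the question's example
```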
"domain": "cs.stackexchange",
"id": 15991,
"tags": "algorithms, dynamic-programming, recurrence-relation, recursion"
} |
Method and class method debugging decorator | Question: A little while ago in The 2nd Monitor, me and @Phrancis were talking about Python and debugging, and the topic of function decorators came up with this message:
Ethan Bierlein: You could even build a debug decorator. Give me a sec.
So, I built a small, simple debugging decorator, which would just print out the arguments of any method it was applied to.
I decided to go a little further though, and build two separate decorators. One for regular methods, and one built specifically for class methods. Essentially, the regular method decorator takes a function and will print out its arguments and keyword arguments if debug=True. The class method decorator does the same, except it also prints out the attributes of the class it's contained in as well.
I'm wondering a couple of things though:
Is this a "pythonic" way to debug the arguments of a function?
Is there any additional data that should be debugged, that I missed?
Is the extra ClassMethodDecorator really needed? Or is there a simpler way to support both with one decorator?
I've built this to be Python 3, and Python 2.7 compatible. Did I do this correctly?
debug.py
from pprint import pprint
class MethodDebug(object):
"""Debug a normal method.
This decorator is used for debugging a normal method,
with normal arguments, i.e, not printing out the data
of the class it's contained in.
Keyword arguments:
debug -- Whether or not you want to debug the method.
"""
def __init__(self, debug):
self.debug = debug
def __call__(self, function):
def wrapper(*args, **kwargs):
if self.debug:
pprint(args)
pprint(kwargs)
return function(*args, **kwargs)
return wrapper
class ClassMethodDebug(object):
"""Debug a class method.
This decorator is used for debugging a class method,
with normal arguments, and self. When using this
decorator, the method will print out it's arguments
and the attributes of the class it's contained in.
Keyword arguments:
debug -- Whether or not you want to debug the method.
"""
def __init__(self, debug):
self.debug = debug
def __call__(self, function):
def wrapper(function_self, *args, **kwargs):
if self.debug:
pprint(function_self.__dict__)
pprint(args)
pprint(kwargs)
return function(function_self, *args, **kwargs)
return wrapper
test.py
from debug import MethodDebug, ClassMethodDebug
@MethodDebug(debug=True)
def normal_method(a, b):
return a * b
print(normal_method(10, 10))
class TestClass(object):
def __init__(self, a, b):
self.a = a
self.b = b
@ClassMethodDebug(debug=True)
def class_method(self, c):
return self.a * self.b * c
a = TestClass(10, 10)
print(a.class_method(10))
Answer: To address your specific questions:
Is it Pythonic enough? I think so.
Is there any additional data that should be debugged? If I was using this for my debugging, I’d want a bit more information:
The name of the function that’s been called – knowing which args/kwargs were supplied to an unknown function is not that useful
The return value of that function
A trace statement when the function returns
There should also be something to distinguish trace output from regular output. That makes it much easier for me to reconstruct the call flow afterwards. Here’s an example of the sort of output I mean:
[trace] enter my_function {
[trace] args: (10, 10)
[trace] kwargs: {}
hello world
[trace] return 100 } exit my_function
For extra brownie points, use the traceback module to indent a function and its arguments/return value to match their level in the call tree.
Do I need a separate function and method decorator? It seems like it would be possible to do. I haven’t tried it, but https://stackoverflow.com/q/19314405/1558022 seems like it might have a couple of approaches.
If applied to a method, your function decorator will always print the repr() of the object. That might be enough in most cases – perhaps rather than defining a second decorator, you add an argument debug_object to your function decorator which additionally prints the object dict on demand.
Is it compatible with Python 2.7 and Python 3? As far as I can tell, yes.
Two other (minor) comments:
I would rename the decorators. The first is misleading, because it’s described as “method debug”, but actually gets applied to functions. More generally, I’d be inclined to violate naming principles and make these lowercase instead, so they don’t look out of place next to the decorated functions. I’d probably call them something like @internaltrace.
The docstring for “debug” is not very useful. A better docstring would tell me exactly what debugging means in this context – here, something like “print the arguments supplied to the function” would be better.
I’d be inclined to rename this parameter to “trace”, but I think correcting the docstring is more important.
A couple of additional ideas that came to me this morning:
You might want to tie this into the logging library, and direct all trace output to a dedicated file (say inttrc.log). That isolates trace statements from regular prints without putting [trace] everywhere, and is easier to search through later.
This will get you timestamps, which are worth including, but I forgot about yesterday.
Although pprint is nice, I think a one-line string might be better. You can always unpack it later if you need to, but having all the associated parts of args/kwargs on the same line will be easy for parsing later. | {
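A minimal sketch of those suggestions (function name, arguments, and return value with a [trace] prefix); using functools.wraps to preserve the wrapped function's metadata is an extra touch beyond the review:

```python
import functools

def trace(func):
    """Print entry, arguments, and return value of the decorated callable."""
    @functools.wraps(func)              # keeps __name__/__doc__ of func
    def wrapper(*args, **kwargs):
        print("[trace] enter {} args={!r} kwargs={!r}".format(
            func.__name__, args, kwargs))
        result = func(*args, **kwargs)
        print("[trace] return {!r} from {}".format(result, func.__name__))
        return result
    return wrapper

@trace
def multiply(a, b):
    return a * b

print(multiply(10, 10))                 # trace lines, then 100
```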
"domain": "codereview.stackexchange",
"id": 15632,
"tags": "python"
} |
Is light *nothing more* than a pair of transverse electric and magnetic oscillating fields moving in a given direction? | Question: If one creates an oscillating electric field and a magnetic field, transverse to each other, oscillating at a given frequency belonging to the visible spectrum, and moving in a given direction toward an observer, will the observer see the same thing as when he looks at light?
Said differently, is light nothing more than a pair of transverse electric and magnetic fields?
So would creating a pair of transverse electric and magnetic oscillating fields moving in a given direction be equivalent to create light?
Answer: It can be hard to say what light really is. You are talking about the classical view of light. Descheleschilder's answer is correct. But if you look at a microscopic view of light, you need quantum mechanics.
This is like looking at what air pressure is. In a large scale view (classical), it is a smooth force that air exerts on the walls. But microscopically, it isn't smooth. It is individual air molecules bouncing off the wall. Each molecule gives the wall an individual kick. When you add up lots of these kicks, you get a smooth force. It is really the same explanation, but it looks totally different.
Light is the same. On a microscopic scale, light can be emitted by an individual electron in an atom, and absorbed by another electron in another atom. One atom gives another a kick. When you add up lots of atoms, you can see a smooth force that is described by an electromagnetic field.
An individual atom's worth of light has a name "photon", but that doesn't say what light is. A photon is sort of like a particle and sort of like a wave. For more on that, see my answer to How can a red light photon be different from a blue light photon?.
It can also get confusing if you take a careful look at the classical picture. What kind of thing is an electric field? See my answer to In what medium are non-mechanical waves a disturbance? The aether? | {
"domain": "physics.stackexchange",
"id": 70776,
"tags": "electromagnetism, visible-light"
} |
Proof of lemma for flow in residual graph | Question: In CLRS 3'rd edition there is a Lemma 26.2 which states that:
Let $G=(V, E)$ be a flow network, let $f$ be a flow in $G,$ and let $p$ be an augmenting path in $G_{f}$. Define a function $f_{p}\colon V \times V \rightarrow \mathbb{R}$ by
$$f_{p}(u, v)=\left\{\begin{array}{ll}c_{f}(p) & \text { if }(u, v) \text { is on } p \\ 0 & \text { otherwise }\end{array}\right.$$
Then, $f_{p}$ is a flow in $G_{f}$ with value $\left|f_{p}\right|=c_{f}(p)>0$
How would you go about proving this?
As I understand we need to check for flow conservation and capacity constraint. We know that $c_f(p)$ is the minimum of the residual capacities on path $p$ which is smaller than the capacities, hence the capacity constraint is satisfied. But how about the flow conservation constraint and proving that the flow value is in fact $c_f(p) > 0$?
Answer: Observe that if $v$ is not a vertex of $p$, then $f_p(u,v)=0$.
When $v$ is in $p$ and not a source nor a sink, then there are only two vertices $v_1$ and $v_2$ such that the edges $(v_1,v),(v,v_2)$ are in $p$. Therefore, in the excess flow at $v$ $$\sum_u f_p(u,v)$$ only has two non-zero terms $f_p(v_1,v)=c_f(p)$ and $f_p(v_2,v)=-f_p(v,v_2)=-c_f(p)$.
So, the excess flow is zero.
To see that $c_f(p)>0$ just recall that it is defined as the minimum of the residual capacities of the edges of $p$. There are finitely many edges in $p$ and by definition of augmenting path the residual capacities of its edges are positive. So, you are taking the minimum of finitely many positive numbers. That results in a positive number. | {
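A tiny numeric check of the lemma on a concrete path (the residual capacities 3, 1, 2 are made-up; conservation is checked as inflow = outflow, the CLRS 3rd-edition nonnegative-flow convention):

```python
# The lemma on a concrete augmenting path p: s -> a -> b -> t, with
# made-up residual capacities 3, 1, 2, so c_f(p) = min(3, 1, 2) = 1.
residual_cap = {('s', 'a'): 3, ('a', 'b'): 1, ('b', 't'): 2}
cfp = min(residual_cap.values())            # bottleneck residual capacity
fp = {edge: cfp for edge in residual_cap}   # f_p is zero off the path

def inflow(v):
    return sum(val for (u, w), val in fp.items() if w == v)

def outflow(v):
    return sum(val for (u, w), val in fp.items() if u == v)

# Conservation holds at the interior vertices, and |f_p| = c_f(p) > 0:
print(inflow('a') == outflow('a'))            # True
print(inflow('b') == outflow('b'))            # True
print(outflow('s') - inflow('s') == cfp > 0)  # True
```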
"domain": "cs.stackexchange",
"id": 16860,
"tags": "algorithms, graphs, network-flow"
} |
Deleting email using mailcore with imapSession | Question: The following function works fine. However, deleting 1 mail takes about 4 seconds (from start of operation to the firing of the completion handler).
Currently, using mailcore2, I am copying mails on the Gmail server.
Are there any significant issues?
func deleteOnServer (indexSet: MCOIndexSet) {
NSLog("will delete email")
let sessionForOperation = getSessionForOperation()
let localCopyMessageOperation = sessionForOperation.copyMessagesOperationWithFolder("INBOX", uids: indexSet, destFolder: account.trashFolderPath)
localCopyMessageOperation!.start { (error, uidMapping) -> Void in
if let error = error {
NSLog("error in deleting email : \(error.userInfo!)")
} else {
NSLog("email deleted")
}
}
}
}
My session is configured as follows:
func createNewIMAPSessionWith(userName: String, hostname: String, oauth2Token: String) -> MCOIMAPSession {
let retSession = MCOIMAPSession()
retSession.hostname = hostname
retSession.port = 993
retSession.username = userName
retSession.OAuth2Token = oauth2Token
retSession.authType = MCOAuthType.XOAuth2
retSession.connectionType = MCOConnectionType.TLS
retSession.maximumConnections = 2
retSession.timeout = NSTimeInterval(60)
retSession.allowsFolderConcurrentAccessEnabled = true
return retSession
}
Answer: I don't know anything about mailcore2, nor have you provided any Time Profiler information to help narrow down what takes so long to complete the action (is it just a slow network, or is it something in your code? Anyone's guess), so I can't really address the slowness issue. I also don't have a clue which methods are yours versus what you simply get from mailcore2, but with that said...
NSLog() statements should almost always be wrapped in #if DEBUG and #endif statements.
Moreover, regardless of whether or not the deletion is successful, we need to let the user know one way or the other, and NSLog() isn't going to cut it.
I'm not sure how the second code snippet ties to the first, but all of our methods could use better names.
deleteOnServer(indexSet:) - delete what? on which server? What does index set represent?
createNewIMAPSessionWith - the words New and With can both be removed from this method. Although, what would probably be best is simply creating a factory method:
extension MCOIMAPSession {
class func session(userName: String, hostname: String, oauth2Token: String) -> MCOIMAPSession {
// all the code you're already doing
}
}
And then we call it simply like this:
let imapSession = MCOIMAPSession.session(userName:"username", hostname:"hostname", oauth2Token:"token")
localCopyMessageOperation!.start
This is a pretty big no-no, in my opinion. If the method we're calling to get localCopyMessageOperation returns an optional, then we should use real optional chaining, not forced unwrapping. It's as simple as changing the exclamation point to a question mark and it prevents an "found nil when unwrapping" (or whatever it's called) exception. | {
"domain": "codereview.stackexchange",
"id": 12720,
"tags": "ios, email, swift"
} |
Fork Autoware_AI repository and create docker image | Question:
Hi Autoware_AI currently does not include some vehicles that I wish to use to work with python API. I need to make some changes into the code, modify launch files, etc to make my simulation based on Autoware_AI repository https://github.com/Autoware-AI/docker.git
It is not feasible to change the code, keep it inside shared_dir (a folder with access both to my desktop and to the docker container), and every time I build the standard Autoware_AI image remove and re-add the modified package code with my modifications.
For this reason I wish to fork the original github packages a little bit, and afterwards create an image based on my fork and not based on the original repo.
As a solution to work on my codes I was trying to use a branch. However when I exit docker the branch does not exist anymore after issue the cmd below to use docker.
./run.sh -t 1.14.0
the image will build the standard docker image and the created branch no longer exists!
What would be the best option to do this? And where can I find a good tutorial for this case involving forking the Autoware project?
Thanks in advance!
As a solution I tried 2 different approaches:
ATTEMPT 1:
To modify the container and save it as described here: https://www.scalyr.com/blog/create-docker-image/
Then, from another terminal, I tried to add a .txt file to the running Autoware_AI container in order to modify it, but the Autoware_AI container does not appear as active (even though it is). Only other containers show up when I try to copy a file into Autoware_AI:
$ docker cp file.txt (pressing Tab twice):
ade:
ade_registry.gitlab.com_autowarefoundation_autoware.auto_ade-lgsvl_foxy_2020.06:
ade_registry.gitlab.com_autowarefoundation_autoware.auto_autowareauto_amd64_ade-foxy_master:
ade_registry.gitlab.com_autowarefoundation_autoware.auto_autowareauto_amd64_binary-foxy_master:
hi_mom:
These ade images above are from AutowareAuto, not from AutowareAI, which I wish to use. As external options I only have access to the Autoware.Auto image paths and the nginx (hi_mom) image (from the tutorial link above). Unfortunately, the docker command does not work inside the running container terminal…
But if I list the images I have available, it is possible to see that AutowareAI image was built:
$ docker images:
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 26b77e58432b 2 weeks ago 72.9MB
autoware/autoware local-melodic-cuda 4997df3ad6dc 2 weeks ago 10.3GB
autoware/autoware local-melodic-base-cuda 8fb0b62fcab2 2 weeks ago 6.99GB
autoware/autoware local-melodic-base 0d87fce181db 2 weeks ago 3.45GB
registry.gitlab.com/autowarefoundation/autoware.auto/autowareauto/amd64/binary-foxy master 46ac7d2cbd73 2 weeks ago 144MB
registry.gitlab.com/autowarefoundation/autoware.auto/autowareauto/amd64/ade-foxy master acd13c509891 3 weeks ago 4.59GB
lgsvl/simulator-scenarios-runner simulator-build__2021.1 63c9bdef5e3a 5 weeks ago 413MB
registry.gitlab.com/autowarefoundation/autoware.auto/autowareauto/amd64/ade-foxy <none> 6448c91f68e8 6 weeks ago 4.85GB
registry.gitlab.com/autowarefoundation/autoware.auto/ade-lgsvl/foxy 2020.06 7be8da9ce3bb 2 months ago 251MB
etc...
However, you can check in the upper-right terminal or in the lower-left terminal that the Autoware_AI container is running with the name (autoware/autoware:1.14.0-melodic-cuda).
This differs from the AutowareAuto project, where I issued the command:
$ ade --rc .aderc-amd64-foxy start --update --enter
and the image was built and I could then access it.
In Autoware_AI I need to run a .sh entrypoint:
$ ./run.sh -t 1.14.0
I am still stuck on how to modify the Autoware/src packages present inside this "image" and then save the image for later creation of a modified container.
To me this command sounds like I am executing a program (.sh) and not actually running a container? I have not completely understood the difference.
Do I need to do another type of Autoware_AI install? I mean, build from source on my OS? I thought docker alone would be enough… Besides that, I am using Ubuntu 20.04 and I know that Autoware_AI stopped at Ubuntu 18.04 (I am almost sure I will have broken packages and compatibility issues if I try to install it on Ubuntu 20.04).
Someone has any other idea how to solve this issue?
I believe I need to do something similar to this:
https://stackoverflow.com/questions/19585028/i-lose-my-data-when-the-container-exits
ATTEMPT 2:
Commit Changes To a Docker Image: https://phoenixnap.com/kb/how-to-commit-changes-to-docker-image
With this attempt I was able to create a new image based on the Autoware_AI container. However, after finishing all the tutorial steps, I got a "Connection refused" error when launching Autoware_AI packages. I do not understand what must be done to get access to their server with my custom image. I have done the following steps:
A) Extracted the image ID and ran it with the docker command:
$ sudo docker images
[sudo] password for autoware-auto-ros1:
REPOSITORY TAG IMAGE ID CREATED SIZE
autoware_ai_changed_image latest 8a61bf4e20f4 3 hours ago 10.3GB
change_image_test latest 7143875f8440 3 hours ago 72.9MB
hi_mom_nginx latest 2c89904348df 5 hours ago 22.6MB
nginx alpine a64a6e03b055 5 days ago 22.6MB
ubuntu latest 26b77e58432b 2 weeks ago 72.9MB
autoware/autoware local-melodic-cuda 4997df3ad6dc 2 weeks ago 10.3GB
autoware/autoware local-melodic-base-cuda 8fb0b62fcab2 2 weeks ago 6.99GB
autoware/autoware local-melodic-base 0d87fce181db 2 weeks ago 3.45GB
registry.gitlab.com/autowarefoundation/autoware.auto/autowareauto/amd64/binary-foxy master 46ac7d2cbd73 2 weeks ago 144MB
registry.gitlab.com/autowarefoundation/autoware.auto/autowareauto/amd64/ade-foxy master acd13c509891 3 weeks ago 4.59GB
B) Selected the image ID of autoware/autoware local-melodic-cuda (4997df3ad6dc) and then ran:
$ sudo docker run -it 4997df3ad6dc
Afterwards, inside the container, I listed the files in it:
Autoware@0448a42dedae: /home/autoware $ ls
Autoware
C) Modified the original docker container by adding a .txt file (just to test a modification):
Autoware@0448a42dedae: /home/autoware $ vim author.txt
Autoware@0448a42dedae: /home/autoware $ ls
Autoware author.txt
D) Exited the container and then committed it as an image:
Autoware@0448a42dedae: /home/autoware $ exit
home $ sudo docker commit 0448a42dedae autoware_ai_changed_image
E) Entered the image again and tried to use the contents of the image (Runtime Manager interface):
home $ sudo docker run -it autoware_ai_changed_image
Autoware@0448a42dedae: /home/autoware $ roslaunch runtime_manager runtime_manager.launch
And then I got this error msg:
... logging to /home/autoware/.ros/log/bac95566-a1f4-11eb-b56a-0242ac110002/roslaunch-7bdf40bf24c5-42.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://7bdf40bf24c5:40463/
SUMMARY
========
PARAMETERS
* /rosdistro: melodic
* /rosversion: 1.14.10
NODES
/
run (runtime_manager/run)
auto-starting new master
process[master]: started with pid [52]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to bac95566-a1f4-11eb-b56a-0242ac110002
process[rosout-1]: started with pid [63]
started core service [/rosout]
process[run-2]: started with pid [66]
***Unable to init server: Could not connect: Connection refused
[run-2] process has died [pid 66, exit code 1, cmd /home/autoware/Autoware/install/runtime_manager/share/runtime_manager/scripts/run __name:=run __log:=/home/autoware/.ros/log/bac95566-a1f4-11eb-b56a-0242ac110002/run-2.log].
log file: /home/autoware/.ros/log/bac95566-a1f4-11eb-b56a-0242ac110002/run-2*.log***
There is a server connection problem that I did not have when building the standard Autoware_AI container with the standard commands:
home:~/docker/generic$ ./run.sh -t 1.14.0
home/autoware$ roslaunch runtime_manager runtime_manager.launch
... logging to /home/autoware/.ros/log/2f40e752-a20d-11eb-a087-c82158f9534e/roslaunch-marcus-ros2-foxy-72.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://marcus-ros2-foxy:45207/
SUMMARY
========
PARAMETERS
* /rosdistro: melodic
* /rosversion: 1.14.6
NODES
/
run (runtime_manager/run)
auto-starting new master
process[master]: started with pid [82]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to 2f40e752-a20d-11eb-a087-c82158f9534e
process[rosout-1]: started with pid [93]
started core service [/rosout]
process[run-2]: started with pid [96]
[run-2] process has finished cleanly
log file: /home/autoware/.ros/log/2f40e752-a20d-11eb-a087-c82158f9534e/run-2*.log
I am kind of new to forking and changing docker images. I do not understand how to fix this or find a way to create my custom docker image and make it functional.
Thanks in advance.
Originally posted by Vini71 on ROS Answers with karma: 266 on 2021-04-19
Post score: 1
Answer:
I figured out how to solve the issue:
2 Steps:
A) Work locally with the Autoware.AI repos, first installing a local image (Case 2 on this page: https://github.com/Autoware-AI/autoware.ai/wiki/Generic-x86-Docker#run-an-autoware-docker-container). For that I ran a base container:
$ fork/docker/generic$ ./run.sh -b home/desired_empty_folder
Then I installed the packages listed in the official Dockerfile (after running the container, from inside it):
~/generic$ cd /home/$USERNAME/Autoware
~/generic$ wget https://raw.githubusercontent.com/Autoware-AI/autoware.ai/1.14.0/autoware.ai.repos
~/generic$ source /home/$USERNAME/Autoware/install/local_setup.bash
~/generic$ vcs import src < autoware.ai.repos
~/generic$ source /opt/ros/$ROS_DISTRO/setup.bash
~/generic$ colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release
~/generic$ source /home/$USERNAME/Autoware/install/local_setup.bash >> /home/$USERNAME/.bashrc
The steps above must be done only the first time you run the base image. After exiting the container and re-entering later, only one command is required to make the ROS packages work:
~/generic$ source /home/$USERNAME/Autoware/install/local_setup.bash
B) Then I worked inside the container, and every time I exited it, the files were still saved when I entered the container again. However, to later build the whole project with my custom changed code and upload it to Docker Hub, I built an image and changed the Dockerfile, as described in the steps below:
Basically I needed to build the image and afterwards run the container with a custom Dockerfile (one that pulled the code from my GitHub).
My Dockerfile was:
And after the modification, the Dockerfile became:
To download my modified ROS code into the image, instead of the autoware repo ROS packages, I needed to:
1 - Copy the autoware.ai.repos file from https://raw.githubusercontent.com/Autoware-AI/autoware.ai/1.14.0/autoware.ai.repos to my local docker folder (docker/generic) and unwrap the repos with the vcs import command, as the Dockerfile above shows…
2 - Edit autoware.ai.repos in order to change the address of some of the repositories it contains to my personal GitHub:
I removed the lines:
autoware/visualization:
type: git
url: https://github.com/Autoware-AI/visualization.git
version: 1.14.0
and replaced them with:
autoware/visualization:
type: git
url: https://github.com/marcusvinicius178/visualization
Afterwards I followed the build instructions in case 3 here: https://github.com/Autoware-AI/autoware.ai/wiki/Generic-x86-Docker#run-an-autoware-docker-container
$ ./build.sh
$ ./run.sh -t local
I know there may be a more professional way to work with Docker images and build a new Dockerfile based on the original one, but I am not that expert in Docker, and this way my problem was solved.
Originally posted by Vini71 with karma: 266 on 2021-04-29
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 36345,
"tags": "ros, ros-melodic, github"
} |
How would one solve this question if no initial conditions are given? Which assumption can I make? | Question: This was a question on our test. I know it can be easily solved by Z-transforms, but no initial conditions are specified. In this case, what would be the right approach?
Assume all initial conditions are zero?
Assume $y$ is right-sided, so substitute $k=0$ and $k=-1$ to find $y[2]$, $y[1]$ and $y[0]$?
A system is described by the difference equation
$$y[k+2]+1.2y[k+1]+0.32y[k]=(-0.5)^{k+1}$$
Find the value of $y[10]$.
Answer: Your intuition that you need initial conditions to fully solve this is correct, so it becomes a problem in test-taking.
Were it me, and were the test proctored by the prof, then I would go to the front of the room and ask. Failing that, I would first solve for $y[10]$ in terms of $y[0]$, $y[1]$ and $y[2]$, then I would point out that as they were not given, then I'm going to assume that they're all equal to one and I'd work out a concrete answer based on that assumption.
This latter approach means that you've (A) given the two most likely solutions that are on the solutions sheet that's to be given to the grader, (B) it establishes that the problem statement wasn't clear, and (C) in the event that the initial conditions were given in some form that you didn't recognize, that's why you're not answering the question as given.
If you were supposed to assume that the initial conditions were zero, you'll get full points. If the prof left off the initial conditions (and isn't a sadistic and/or narcissistic maniac), you'll get full points. If it was a trick question or poorly written and you were given some hint about the initial conditions that you missed (and thus didn't tell us), then it establishes why you didn't answer the question fully, and probably gives you the most partial points possible under the circumstances. | {
"domain": "dsp.stackexchange",
"id": 10712,
"tags": "discrete-signals, z-transform, homework"
} |
How do I fit a curve to nonlinear data? | Question: I did an experiment at my university and collected data $(ω,υ(ω))$ modeled by the equation:
$$ v(ω)=\frac{C}{\sqrt{(ω^2-ω_0^2 )^2 +γ^2 ω^2}} $$
where $ω_0$ is known. How can I fit a curve to my data $(ω,υ(ω))$, and how do I find the parameter $γ$ through this process?
Answer: I used mycurvefit.com for your problem. After creating an account (or maybe without one, if the number of parameters is 2 or less) it lets you fit your function with at most 20 data points, which was enough. Here is an example
It correctly finds the parameter (g) close to 6.
Here are 20 data points that I have generated for $C=10$, $\omega_0=10$, and $\gamma=6$:
w v(w)
5.4881 0.1294
7.1519 0.1538
6.0276 0.1366
5.4488 0.1290
4.2365 0.1164
6.4589 0.1429
4.3759 0.1176
8.9177 0.1746
9.6366 0.1716
3.8344 0.1132
7.9173 0.1655
5.2889 0.1271
5.6804 0.1319
9.2560 0.1744
0.7104 0.1004
0.8713 0.1006
0.2022 0.1000
8.3262 0.1706
7.7816 0.1636
8.7001 0.1737
Copy and paste them into the data sheet at the bottom.
P.S.: an analytical answer cannot be derived, since the equation obtained by setting the derivative of the mean squared error with respect to the parameter $\gamma$ to zero is intractable; therefore gradient descent must be used with the help of a computer (similar to what this site does).
EDIT:
I forgot to add noise to $v(\omega)$; here is a noisy version ($\tilde{v}(\omega) = v(\omega)+\mathcal{N}(\mu=0, \sigma=0.01)$) with the same parameters:
w v(w)
7.7132 0.1512
0.2075 0.1014
6.3365 0.1559
7.488 0.1483
4.9851 0.1039
2.248 0.0868
1.9806 0.106
7.6053 0.1848
1.6911 0.1136
0.8834 0.1174
6.8536 0.15
9.5339 0.1866
0.0395 0.0973
5.1219 0.1313
8.1262 0.1656
6.1253 0.1325
7.2176 0.1562
2.9188 0.1026
9.1777 0.1877
7.1458 0.1556
which gives $g=5.7$, meaning 20 data points are not enough for this level of noise or higher.
If you are more interested, you can learn a framework like TensorFlow to build the function and fit it to an arbitrarily large amount of data.
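For readers who prefer code over a web tool, the same minimization can be done in a few lines. The sketch below is only an illustration under simplifying assumptions: $C$ and $\omega_0$ are taken as known ($C=10$, $\omega_0=10$), only $\gamma$ is fitted, the data are noiseless and synthetic, and the mean squared error is minimized by a coarse-to-fine grid search rather than gradient descent:

```python
import math

def v_model(w, C, w0, g):
    """The resonance curve v(w) = C / sqrt((w^2 - w0^2)^2 + g^2 w^2)."""
    return C / math.sqrt((w * w - w0 * w0) ** 2 + g * g * w * w)

def mse(g, data, C, w0):
    """Mean squared error of the model against (w, v) pairs for a given gamma."""
    return sum((v - v_model(w, C, w0, g)) ** 2 for w, v in data) / len(data)

# Synthetic, noiseless data generated with "true" gamma = 6
C, w0, true_g = 10.0, 10.0, 6.0
ws = [0.5 + 0.45 * i for i in range(20)]          # 20 frequencies in [0.5, 9.05]
data = [(w, v_model(w, C, w0, true_g)) for w in ws]

# Coarse pass (step 0.1), then fine pass (step 0.001) around the coarse minimum
best_g = min((0.5 + 0.1 * i for i in range(200)),
             key=lambda g: mse(g, data, C, w0))
best_g = min((best_g - 0.1 + 0.001 * i for i in range(201)),
             key=lambda g: mse(g, data, C, w0))
print(best_g)  # ≈ 6.0
```

With noisy data the recovered $\gamma$ will drift from the true value, exactly as in the noisy example above; more data points or a smaller noise level tighten the estimate.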
"domain": "datascience.stackexchange",
"id": 4919,
"tags": "linear-regression"
} |