| anchor | positive | source |
|---|---|---|
What does it mean to have a zero-dimensional induced metric? | Question: I have an integral of the form
\begin{equation}
S=\int d^dx g_{\mu \nu} h^{ab}.
\end{equation}
In this example, $g_{\mu\nu}$ is a $d$-dimensional metric and $h_{ab}$ is a codimension-2 induced metric. I wanted to consider what would happen with the above integral for $d=2$. But this means that $h_{ab}$ would be a zero-dimensional metric. I am not sure conceptually what this means. Does it mean that $h_{ab}$ is just a number, namely the value at the point it is induced on?
But if I substitute in a point, the indices in the above equation are no longer right, are they?
Answer: "A metric" is an object that takes two vectors and gives a scalar, in a bilinear and symmetric way. For GR, the metric $g_{\mu\nu}$ takes two 4D vectors. You are considering a case where $g_{\mu\nu}$ takes two 2D vectors, you have a codimension $2$ submanifold (i.e. a collection of points), and so the induced metric $h_{ab}$ on that submanifold takes two 0D vectors. (Contrary to the comment,) This makes perfect mathematical sense, but you don't get anything useful. The tangent space at each point of the submanifold is just the 0D vector space $V=\{\vec0\},$ and the only bilinear (symmetric) function $h:V\times V\to\mathbb R$ is $h(\vec0,\vec0)=0.$ The indices $a,b$ have no valid values (whereas for $g$ the indices range $\mu,\nu=1,2$) and $h_{ab}$ has no components (because there's no freedom at all in its behavior). If you tried contracting $h_{ab}$ against vectors $v^a,u^b$ you would just get the empty sum $h_{ab}v^au^b=\sum_{a=1}^0\sum_{b=1}^0h_{ab}v^au^b=0$ (since $h_{ab}$ has no components, $v^a$ has no components, $u^b$ has no components). The bottom line is that every formula will make sense, but will also degenerate to zero.
I suspect you won't get much physical intuition on this route. | {
"domain": "physics.stackexchange",
"id": 93496,
"tags": "lagrangian-formalism, differential-geometry, metric-tensor, spacetime-dimensions"
} |
Why the proportionality factor of 'Quantum' Poisson brackets is imaginary? | Question: When trying to understand the correspondence principle, I found a proof in this section (of this book) about why the quantum Poisson brackets ($\{\,,\,\}_{\text{QM}}$) must be proportional to the commutator ($[\,,\,]_{-}$).
But, I'm stuck in the last step:
$$\Big[\hat A_1,\hat B_1\Big]_{-}\Big\{\hat A_2,\hat B_2\Big\}_{\text{QM}}=\Big\{\hat A_1,\hat B_1\Big\}_{\text{QM}}\Big[\hat A_2,\hat B_2\Big]_{-}$$
Since $\hat A_i,\hat B_i$ are almost arbitrarily chosen operators, this result suggests that:
$$\Big\{\hat A,\hat B\Big\}_{\text{QM}}=i\alpha\Big[\hat A,\hat B\Big]_{-}$$
My question is, why is that suggestion so straightforward?
I thought it could be obtained by choosing $\hat A_1$ and $\hat B_1$ such that $\dfrac{\{\hat A_1,\hat B_1\}_{\text{QM}}}{[\hat A_1,\hat B_1]_{-}}=i\alpha$, but that is redundant.
Answer: Notice that this equation
$$\Big[\hat A_1,\hat B_1\Big]_{-}\Big\{\hat A_2,\hat B_2\Big\}_{\text{QM}}=\Big\{\hat A_1,\hat B_1\Big\}_{\text{QM}}\Big[\hat A_2,\hat B_2\Big]_{-}$$
suggests that $\Big\{\hat A_1,\hat B_1\Big\}_{\text{QM}}\propto \Big[\hat A_1,\hat B_1\Big]_-$. That, I think, you understand. But the next thing to notice is that $\hat{A}$ and $\hat{B}$ are Hermitian operators, as they represent observables, so their Poisson bracket must also be Hermitian. Take the example of $\dot{q}=\{q,H\}$: $q$ and $H$ are observables, along with $\dot{q}$. It is evident that the Poisson bracket of two observables must be an observable. However, the commutator of two Hermitian operators is anti-Hermitian:
$$[\hat{A},\hat{B}]_-^{\dagger}=-[\hat{A},\hat{B}]_-$$
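The anti-Hermiticity is a one-line check, using $\hat{A}^{\dagger}=\hat{A}$ and $\hat{B}^{\dagger}=\hat{B}$:

$$[\hat{A},\hat{B}]_-^{\dagger}=(\hat{A}\hat{B}-\hat{B}\hat{A})^{\dagger}=\hat{B}^{\dagger}\hat{A}^{\dagger}-\hat{A}^{\dagger}\hat{B}^{\dagger}=\hat{B}\hat{A}-\hat{A}\hat{B}=-[\hat{A},\hat{B}]_-$$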
So the proportionality constant between the Poisson bracket and the commutator must be purely imaginary.
$$\Big\{\hat A_1,\hat B_1\Big\}_{\text{QM}}= i\alpha \Big[\hat A_1,\hat B_1\Big]_-\quad\quad\quad \alpha\in\mathbb{R}$$ | {
"domain": "physics.stackexchange",
"id": 67272,
"tags": "semiclassical"
} |
Inequality to be disproved | Question: Suppose that a search for a key in a binary search tree ends up in a leaf. Consider three sets:
A, the keys to the left of the search path;
B, the keys on the search path;
C, the keys to the right of the search path.
Any 3 keys $a, b, c$ belonging to A, B, C respectively must satisfy $a \le b \le c$.
How can I disprove it? I am trying to generate a counterexample, but could not find one.
Answer: Search for 20 in
    5
   / \
  3   8
     / \
    6   11
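The sets can be computed mechanically. A small Python sketch (hypothetical tuple encoding of the tree, not part of the original answer):

```python
def inorder(node):
    """All keys of a subtree, where a node is (key, left, right) or None."""
    if node is None:
        return []
    key, left, right = node
    return inorder(left) + [key] + inorder(right)

def path_and_left_keys(tree, target):
    """Keys on the search path (set B) and keys to the left of it (set A)."""
    path, left_keys = [], []
    node = tree
    while node is not None:
        key, left, right = node
        path.append(key)
        if target < key:
            node = left
        else:
            # Going right: the whole left subtree falls to the left of the path.
            left_keys.extend(inorder(left))
            node = right
    return path, left_keys

# The tree above: root 5, children 3 and 8; 8 has children 6 and 11.
tree = (5, (3, None, None), (8, (6, None, None), (11, None, None)))
B, A = path_and_left_keys(tree, 20)   # B = [5, 8, 11], A = [3, 6]
```

Running the search for 20 confirms the counterexample: $6 \in A$, $5 \in B$, yet $6 > 5$.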
$6 \in A$ and $5 \in B$, but $6 > 5$. | {
"domain": "cs.stackexchange",
"id": 3277,
"tags": "trees, binary-trees"
} |
Diffusion across cell membranes | Question: I got this question and don't really understand the difference between the answers.
Diffusion (across the cell membrane) is:
a) passive by nature, no metabolic energy is needed
b) driven process by pressure or voltage
Answer: Diffusion is always a passive process that doesn't require energy. Therefore it would seem that A is correct; B doesn't really make a whole lot of sense. In the case of the cell membrane, diffusion will often take the form of 'facilitated diffusion' through carrier proteins; however, the transport is still not active.
"domain": "biology.stackexchange",
"id": 1028,
"tags": "homework, cell-membrane"
} |
Simple circuit breaker implementation | Question: I am trying to implement a basic circuit breaker design for my internal API calls. I would appreciate some criticism and feedback about my code. I am also planning to implement an interface off of the class once I am happy with it. As mentioned this circuit breaker design will be used for internal gRPC calls between my microservices. This was written in .NET Core 3.0.
I look forward to your feedback!
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Text;
using System.Timers;
namespace CircuitBreaker
{
    public class CircuitBreaker
    {
        private Action _currentAction;
        private int _failureCount = 0;
        private readonly int _threshold = 0;
        private readonly System.Timers.Timer _timer;

        private CircuitState State { get; set; }

        public enum CircuitState
        {
            Closed,
            Open,
            HalfOpen
        }

        public CircuitBreaker(int threshold, int timeOut)
        {
            State = CircuitState.Closed;
            _threshold = threshold;
            _timer = new Timer(timeOut);
            _timer.Elapsed += TimerElapsed;
        }

        public void ExecuteAction(Action action)
        {
            _currentAction = action;
            try
            {
                action();
            }
            catch (Exception ex)
            {
                if (ex.InnerException == null)
                    throw;
                if (State == CircuitState.HalfOpen)
                    Trip();
                else if (_failureCount < _threshold)
                {
                    _failureCount++;
                    Invoke();
                }
                else if (_failureCount >= _threshold)
                    Trip();
            }
            if (State == CircuitState.HalfOpen)
                Reset();
            if (_failureCount > 0)
                _failureCount = 0;
        }

        public void Trip()
        {
            if (State != CircuitState.Open)
                ChangeState(CircuitState.Open);
            _timer.Start();
        }

        public void Reset()
        {
            ChangeState(CircuitState.Closed);
            _timer.Stop();
        }

        private void ChangeState(CircuitState callerCircuitState)
        {
            State = callerCircuitState;
        }

        private void Invoke()
        {
            ExecuteAction(_currentAction);
        }

        private void TimerElapsed(object sender, System.Timers.ElapsedEventArgs e)
        {
            if (State == CircuitState.Open)
            {
                ChangeState(CircuitState.HalfOpen);
                _timer.Stop();
                Invoke();
            }
        }
    }
}
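For readers unfamiliar with the pattern, the class above is a three-state machine (Closed, Open, HalfOpen). A minimal, hypothetical Python sketch of the same pattern (time-based rather than timer-driven, and not the code under review):

```python
import time

class CircuitBreaker:
    """Minimal three-state circuit breaker: Closed -> Open -> HalfOpen."""
    CLOSED, OPEN, HALF_OPEN = "closed", "open", "half_open"

    def __init__(self, threshold, timeout_seconds):
        self.threshold = threshold      # failures tolerated while Closed
        self.timeout = timeout_seconds  # how long to stay Open
        self.state = self.CLOSED
        self.failures = 0
        self.opened_at = None

    def call(self, action):
        # After the timeout, allow a single trial call (HalfOpen).
        if self.state == self.OPEN:
            if time.monotonic() - self.opened_at < self.timeout:
                raise RuntimeError("circuit open")
            self.state = self.HALF_OPEN
        try:
            result = action()
        except Exception:
            self._record_failure()
            raise
        # Success (in HalfOpen or Closed) closes the circuit again.
        self.state = self.CLOSED
        self.failures = 0
        return result

    def _record_failure(self):
        self.failures += 1
        if self.state == self.HALF_OPEN or self.failures > self.threshold:
            self.state = self.OPEN
            self.opened_at = time.monotonic()
```

Successive failures past the threshold open the circuit; after the timeout a single trial call decides whether it closes again.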
Answer: Welcome to Code Review. Not too bad but I would suggest a few things.
_failureCount is only used in ExecuteAction, so I would define it locally in ExecuteAction and rename it simply failureCount.
You may want to expose threshold publicly as a property since different instances could have differing thresholds. Suggest:
public int Threshold { get; }
Consider having a ToString() override.
The CircuitState enum is okay where it's at, but if it were me, I would usually define it external to the class.
You may consider having the State be gettable publicly. Suggest:
public CircuitState State { get; private set; }
I see no reason for Invoke(). Just call ExecuteAction(_currentAction) directly.
The constructor has a timeout parameter. I would encourage clarity in the name with millisecondsTimeout.
And finally, the biggest issue I see is that you really should get into the practice of using { } with if. I know C++ was okay with this, and C# allows it, but here at CR we strongly discourage it because it can lead to nefarious, hard-to-find bugs. There are lots of places where you would change this; here is but one example:
if (State != CircuitState.Open)
{
    ChangeState(CircuitState.Open);
} | {
"domain": "codereview.stackexchange",
"id": 36829,
"tags": "c#, design-patterns"
} |
Offset QPSK detection in GNU Radio - Sample delay | Question: An OQPSK detector is being tested in GNU Radio. The architecture was obtained from Michael Rice's Digital Communications - A discrete-time approach. The flowgraph is shown below.
The modulator architecture is basically QPSK with the Q-channel delayed by half a symbol period, as shown below. In the flowgraph, the block oqpskIQMap maps the I/Q symbols from QPSK to OQPSK. It does so by delaying the Q-channel by d_delay, which is equal to the number of samples per symbol divided by two.
oqpskIQMap_impl::oqpskIQMap_impl(int sps)
    : gr::sync_block("oqpskIQMap",
                     gr::io_signature::make(1, 1, sizeof(gr_complex)),
                     gr::io_signature::make(1, 1, sizeof(gr_complex))),
      d_delay(sps/2)
{
    //set_history(d_delay);
}

/*
 * Our virtual destructor.
 */
oqpskIQMap_impl::~oqpskIQMap_impl()
{
}

int
oqpskIQMap_impl::work(int noutput_items,
                      gr_vector_const_void_star &input_items,
                      gr_vector_void_star &output_items)
{
    const gr_complex *in = (const gr_complex *) input_items[0];
    gr_complex *out = (gr_complex *) output_items[0];

    for(int i = 0; i < noutput_items; i++)
        out[i] = gr_complex(real(in[i]), imag(in[i - d_delay]));

    // Do <+signal processing+>
    // Tell runtime system how many output items we produced.
    return noutput_items;
}
The receiver architecture from Rice's book is shown below. In this architecture, the matched filter produces 2 samples per symbol. The oqpskIQDemap block processes the output samples (X(KTs), X(KTs + Ts/2), Y(KTs) and Y(KTs + Ts/2)) into a constellation point (X(KTs), Y(KTs + Ts/2)), effectively reducing the sample rate to 1 sample per symbol.
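In other words, each output point pairs the in-phase part of the on-time sample with the quadrature part of the sample half a symbol later. A minimal Python sketch of that pairing (illustrative only, not the GNU Radio block):

```python
def oqpsk_demap(samples):
    """Collapse 2 samples/symbol to 1 constellation point/symbol by pairing
    I of each even (on-time) sample with Q of the following odd sample."""
    return [complex(samples[i].real, samples[i + 1].imag)
            for i in range(0, len(samples) - 1, 2)]
```

For example, `oqpsk_demap([1+2j, 3+4j, 5+6j, 7+8j])` yields `[1+4j, 5+8j]`.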
oqpskIQDemap_impl::oqpskIQDemap_impl(int delay)
    : gr::sync_decimator("oqpskIQDemap",
                         gr::io_signature::make(1, 1, sizeof(gr_complex)),
                         gr::io_signature::make(1, 1, sizeof(gr_complex)), 2),
      d_delay(delay)
{}

/*
 * Our virtual destructor.
 */
oqpskIQDemap_impl::~oqpskIQDemap_impl()
{
}

int
oqpskIQDemap_impl::work(int noutput_items,
                        gr_vector_const_void_star &input_items,
                        gr_vector_void_star &output_items)
{
    const gr_complex *in = (const gr_complex *) input_items[0];
    gr_complex *out = (gr_complex *) output_items[0];

    for(int i = 0; i < noutput_items; i+=2)
        out[i] = gr_complex(real(in[i]), imag(in[i + 1]));

    // Tell runtime system how many output items we produced.
    return noutput_items;
}
The transmitted constellation (top) looks pretty okay. The Rx constellation (bottom), on the other hand, seems to have some points crossing the boundaries unexpectedly, given that the SNR is 25 dB. I suspect the problem is with the way I introduce the delay in the two IQ map/demap blocks.
Please do let me know what you think.
Regards,
Answer: Thanks to Marcus' answer, I was able to resolve the problem. See below for the updated code:
IQ Mapping
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif

#include <gnuradio/io_signature.h>
#include "oqpskIQMap_impl.h"

namespace gr {
  namespace oqpsk {

    oqpskIQMap::sptr
    oqpskIQMap::make(int sps)
    {
      return gnuradio::get_initial_sptr
        (new oqpskIQMap_impl(sps));
    }

    /*
     * The private constructor
     */
    oqpskIQMap_impl::oqpskIQMap_impl(int sps)
      : gr::sync_block("oqpskIQMap",
                       gr::io_signature::make(1, 1, sizeof(gr_complex)),
                       gr::io_signature::make(1, 1, sizeof(gr_complex))),
        d_delay(sps/2)
    {
      //set_history(1);
    }

    /*
     * Our virtual destructor.
     */
    oqpskIQMap_impl::~oqpskIQMap_impl()
    {
    }

    int
    oqpskIQMap_impl::work(int noutput_items,
                          gr_vector_const_void_star &input_items,
                          gr_vector_void_star &output_items)
    {
      const gr_complex *in = (const gr_complex *) input_items[0];
      gr_complex *out = (gr_complex *) output_items[0];

      for(int i = d_delay; i < noutput_items; i++)
        out[i] = gr_complex(real(in[i]), imag(in[i - d_delay]));

      // Do <+signal processing+>
      // Tell runtime system how many output items we produced.
      return noutput_items - d_delay;
    }

  } /* namespace oqpsk */
} /* namespace gr */
IQ Demapping
Please note that this is a downsampling block which expects 2 samples per symbol and produces 1 sample per symbol (a single constellation point).
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif

#include <gnuradio/io_signature.h>
#include "oqpskIQDemap_impl.h"

namespace gr {
  namespace oqpsk {

    oqpskIQDemap::sptr
    oqpskIQDemap::make(int delay)
    {
      return gnuradio::get_initial_sptr
        (new oqpskIQDemap_impl(delay));
    }

    /*
     * The private constructor
     */
    oqpskIQDemap_impl::oqpskIQDemap_impl(int delay)
      : gr::sync_decimator("oqpskIQDemap",
                           gr::io_signature::make(1, 1, sizeof(gr_complex)),
                           gr::io_signature::make(1, 1, sizeof(gr_complex)), 2),
        d_delay(delay)
    {
      set_output_multiple(2);
    }

    /*
     * Our virtual destructor.
     */
    oqpskIQDemap_impl::~oqpskIQDemap_impl()
    {
    }

    int
    oqpskIQDemap_impl::work(int noutput_items,
                            gr_vector_const_void_star &input_items,
                            gr_vector_void_star &output_items)
    {
      const gr_complex *in = (const gr_complex *) input_items[0];
      gr_complex *out = (gr_complex *) output_items[0];

      //for(int i = 0; i < noutput_items; i+=2)
      //  out[i] = gr_complex(real(in[i]),imag(in[i + 1]));
      out[0] = gr_complex(real(in[0]), imag(in[1]));

      // Tell runtime system how many output items we produced.
      return 1;
    }

  } /* namespace oqpsk */
} /* namespace gr */
Regards, | {
"domain": "dsp.stackexchange",
"id": 7496,
"tags": "digital-communications, qpsk, gnuradio"
} |
How do I represent $A$ transpose $A$ in indicial notation? | Question: I know this question sounds lame, but the book I am following doesn't use the answer I expect and it has been using a similar notations everywhere else which has confused me.
I think $Q^T Q$ (for any tensor $Q$) can be written as $Q_{im}Q_{mj}$, but the book has written $Q_{im}Q_{jm}$, which, in my opinion, doesn't even satisfy the condition of matrix multiplication, so it should be wrong.
Answer: $(Q\cdot Q)_{ij}=Q_{im}Q_{mj}$
$(Q^T\cdot Q)_{ij}=(Q^T)_{im}Q_{mj}=Q_{mi}Q_{mj}$
where we use that $(Q^T)_{im}=Q_{mi}$ | {
"domain": "physics.stackexchange",
"id": 20510,
"tags": "homework-and-exercises, tensor-calculus, notation, conventions"
} |
Monomers and polymers | Question: At Wikipedia a monomer is defined to be
a molecule that may bind chemically to other molecules to form a
polymer.
A polymer is defined to be
a molecule composed of many monomers.
These definitions are obviously circular.
Are there non-circular definitions of monomers and polymers?
Answer: Let's have a look at what IUPAC say; they would be a reliable source to get definitions of chemical terminology from:
Monomer:
"A substance composed of monomer molecules."
Monomer molecules:
"A molecule which can undergo polymerization thereby contributing
constitutional units to the essential structure of a macromolecule. "
Polymer:
"A substance composed of macromolecules."
Macromolecule:
"A molecule of high relative molecular mass, the structure of which
essentially comprises the multiple repetition of units derived,
actually or conceptually, from molecules of low relative molecular
mass. "
These definitions are consistent with the latest IUPAC Compendium of Polymer Terminology and Nomenclature, IUPAC Recommendations 2008
Note that, historically, the definition of a polymer has changed somewhat. Up until about 80 years ago, polymers were seen as molecules sharing an empirical formula: benzene, for example, was considered a polymer of acetylene. It was Staudinger's work in the 1920s (contributing to his Nobel Prize) that defined polymers as macromolecules.
"domain": "chemistry.stackexchange",
"id": 1333,
"tags": "polymers, reference-request"
} |
A C++ wrapper for Apple CoreFoundation | Question: The wrapper makes it easy to manage memory when using CoreFoundation objects. It aims to act like a simple shared_ptr.
Here is a usage example:
cfobject_wrapper<CGImageSourceRef> imgSource(CGImageSourceCreateWithData(...), false);
...
dispatch_async(..., ^{
    foo(imgSource);
});
// no need to manually call CFRelease, and the CGImageSourceRef is automatically retained when copied to block and released when the block is destroyed.
Here is the code of the wrapper:
/** A wrapper for Apple's Core Foundation objects */
template<typename _T>
class cfobject_wrapper final
{
protected:
    _T _internal;

public:
    cfobject_wrapper()
        : _internal(nullptr)
    {
    }

public:
    explicit cfobject_wrapper(const _T& ref, bool retain = true)
    {
        if (ref && retain)
            CFRetain(ref);
        _internal = ref;
    }

public:
    cfobject_wrapper(const cfobject_wrapper<_T>& ref)
        : _internal(ref._internal)
    {
        if (_internal)
            CFRetain(_internal);
    }

public:
    cfobject_wrapper(cfobject_wrapper<_T>&& ref)
    {
        _internal = std::move(ref._internal);
        ref._internal = nullptr;
    }

public:
    cfobject_wrapper& operator=(cfobject_wrapper other)
    {
        std::swap(_internal, other._internal);
        return *this;
    }

public:
    ~cfobject_wrapper()
    {
        if (_internal)
        {
            CFRelease(_internal);
            _internal = nullptr;
        }
    }

public:
    _T internal() const
    {
        return _internal;
    }

public:
    explicit operator bool() const
    {
        return _internal != nullptr;
    }
};
Answer: This is a really great idea! I could certainly use a class like this. Here are some thoughts:
Don't Multiply Declare public
It's very odd that you're prefixing every single method with its visibility. I really hate working with code that has multiple public, private or protected sections. I expect all public methods and members to be grouped together, all protected methods and members to be grouped together, and all private methods and members to be grouped together (usually in that order). While it's not required by the language, it's a pain to constantly have to figure out which section you're in when reading, and it's a pain to write it out for every method prototype when you're writing the code.
Avoid protected Member Variables
In general protected member variables are a bad idea. They essentially mean that any other object of the class or subclass can change their value out from under you. There's a case to be made for public member variables when performance is of the utmost importance, but in general member variables should be private so you can control who's changing them and when.
Retains and Releases Should Be Matched
It's a little surprising that I can tell the constructor not to retain the object, but the destructor will still release it. This is almost like a false cognate in another language - it looks similar to another pattern where you tell the constructor of some object that you're going to manually manage the memory for a given resource, so it shouldn't delete it when destructed. I understand the reason why - some CF methods return retained objects. Still, as someone potentially reading the code, it's a little unexpected. In Objective-C you could autorelease the value when calling the constructor and not worry about the retain, but there's no such mechanism for the C interface to CoreFoundation. Furthermore, the likelihood that I'll remember to pass false to the constructor when I can't remember to do the release manually is unlikely. For that reason, I recommend at least removing the default value. I'm also left wondering if there's some better way to handle this particular case. (I guess a first-class interface to CoreFoundation from C++ would be the ideal answer, but that's not going to happen unless you write it yourself.) | {
"domain": "codereview.stackexchange",
"id": 27946,
"tags": "c++, objective-c, ios, macos"
} |
Placement of success code in a conditional | Question: I'm not sure if there's a standard view where the placement of a success result should be in a conditional that can return multiple statuses. The success condition of this function is in the middle of the conditional block:
def redeem(pin)
  success = false
  message = nil
  if !self.date_claimed.nil?
    message = 'Reward already claimed'
  elsif self.reward.redemption_pin == pin
    success = true
    self.date_claimed = Time.zone.now
    self.save
  else
    message = "Wrong pin"
  end
  return {:success => success, :message => message}
end
I could rewrite it to be at the end:
def redeem(pin)
  success = false
  message = nil
  if !self.date_claimed.nil?
    message = 'Reward already claimed'
  elsif self.reward.redemption_pin != pin
    message = "Wrong pin"
  else
    success = true
    self.date_claimed = Time.zone.now
    self.save
  end
  return {:success => success, :message => message}
end
or I could rewrite it to be at the top but conditions become slightly more complex:
def redeem(pin)
  success = false
  message = nil
  if self.reward.redemption_pin == pin && self.date_claimed.nil?
    success = true
    self.date_claimed = Time.zone.now
    self.save
  elsif date_claimed
    message = 'Reward already claimed'
  else
    message = "Wrong pin"
  end
  return {:success => success, :message => message}
end
I personally think I prefer it to be at the beginning or the end, but in this function the end seems to be the best place.
Answer: Some notes:
Imperative programming has its use cases (or so I've heard :-)) but implementing logic is definitely not one of them. Some thoughts on the matter: always use expressions, not statements, to describe logic; that is, don't begin with x = 1 and then modify x somewhere else in the code. Write expression branches instead (conditionals are expressions in Ruby). Don't think in terms of "how" but in terms of "what".
if !x.nil? -> if x.
obj.save may fail but you are not checking it.
Don't write explicit return (more on idiomatic Ruby).
As you say the order of checks is important. I usually check for "problems" first, and keep the "ok" scenario for the last branch.
You are using a value (a hash) instead of an exception to signal errors. It's slightly unidiomatic in Ruby (in the sense that people tend not to do it, not that there's anything wrong with it), but personally I like it (and use it).
You could write:
def redeem(pin)
  success, message = if date_claimed
    [false, 'Reward already claimed']
  ...
  end
  {:success => success, :message => message}
end
However, returning {success: ..., message: ...} on each branch, though slightly more verbose, looks pretty nice and declarative:
def redeem(pin)
  if date_claimed
    {success: false, message: "Reward already claimed"}
  elsif reward.redemption_pin != pin
    {success: false, message: "Wrong pin"}
  else
    update_attribute(:date_claimed, Time.zone.now)
    {success: true}
  end
end | {
"domain": "codereview.stackexchange",
"id": 2969,
"tags": "ruby, ruby-on-rails"
} |
Why do I see more steam coming out from the vessel containing water after I turned off the stove? | Question: It is a little observation I made while boiling water.
What I observed is that when I turned off the stove on which the water was boiling, I could see more steam coming out than before the stove was turned off, i.e. the amount of steam coming out per second was greater after the flame heating the water was turned off.
Why is that so?
Answer: Steam appears when supersaturated water vapor cools and condenses into large suspended water droplets in the air. This is governed by the Kelvin equation, which you can read more about here:
https://en.wikipedia.org/wiki/Kelvin_equation
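For reference, the Kelvin equation linked above can be written (standard form; symbols as commonly defined, not taken from the original answer):

$$\ln\frac{p}{p_{\text{sat}}} = \frac{2\gamma V_m}{r R T}$$

where $p$ is the equilibrium vapor pressure over a droplet of radius $r$, $p_{\text{sat}}$ the flat-surface saturation pressure, $\gamma$ the surface tension, $V_m$ the molar volume of the liquid, $R$ the gas constant, and $T$ the temperature. Smaller droplets need a higher vapor pressure to persist, so cooling the surrounding air (raising the saturation ratio) lets droplets grow.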
To get larger water droplets, or in your case a denser-looking steam, you have two options: (1) increase the partial vapor pressure, or (2) decrease the temperature. Without knowing more details, my guess would be that your flame was also supplying a substantial amount of heat to the air around the water vapor. When you turned the heat off, the air cooled much more rapidly than the water and caused the vapor to condense more rapidly.
One easy way to think about this is breathing in the winter. When you blow hot moist air out of your mouth, you see steam. When you are in a warm building, you don't. | {
"domain": "physics.stackexchange",
"id": 27929,
"tags": "thermodynamics, everyday-life, observers"
} |
How does physics research work? | Question: I am going to try and be as short and as concise as possible.
I was thinking these last few days about how we're still trying to discover a unified Theory of Everything.
The question is: how is this actually being done? Is there a group of (paid) researchers that work on M-Theory 24/7, hoping that someday they'll finally unify physics? Or is it more like a thing that passionate people do in their spare time?
I hope this all makes sense.
Answer:
Is there a group of (paid) researchers that work on M-Theory 24/7,
hoping that someday they'll finally unify physics? Or is it more like
a thing that passionate people do in their spare time?
Virtually all serious physics research is done by full-time professionals on salaries funded by grants or the institutions they are associated with, or both, or by full-time graduate students in physics. The graduate students have teaching or research obligations or both, get their tuition waived, receive a modest salary, and sometimes are given access to subsidized graduate student housing. A typical physics graduate student who successfully finishes a dissertation and earns a PhD will spend three to seven years as a graduate student, often followed by several one to three year stints as post-docs before obtaining a permanent position at a university or college, a corporation, or some sort of institution or laboratory. Those who get academic jobs will often get a series of one to three year assistant professorships which are not tenured before finally earning tenure and becoming an associate professor, after which, with further productivity in research and experience, they are promoted to full professor status.
Lots of people who earn PhDs either never end up getting a job in the field at all, get jobs as low-level instructors who have heavy teaching loads, low pay, no tenure and no time or resources for research, or drop out of the field entirely either immediately after getting their PhDs (quite a few go into technical securities trading on Wall Street for big investment banks), or end up teaching physics to high school students, or do one of these things after a few stints as a post-doc or assistant professor, or leave the field to have kids and raise a family, after which they may or may not return to active research in the field.
There are passionate people who do it in their spare time, but this is the rare exception and doesn't impact the development of the field very much. I would be surprised if they account for more than 2% of all publications in the field and those papers are disproportionately less cited. Even physicists working outside their primary specialties make up a quite small percentage of all papers and often have papers that are less influential than those of specialists in the field.
Many are professors or graduate students who also have teaching or studying obligations, although some physicists within big physics experiment collaborations or institutes (many of whom are what are called "post-docs" who have earned PhDs but not held an academic professorship or senior institute fellow position) do work full time without teaching or studying responsibilities.
Senior professors or graduate teaching assistants may teach a couple of classes three or four hours a week each during a semester, with senior professors often having taught at least one of their courses dozens of times before and being assisted by TAs and student graders so that the load isn't too burdensome. Junior professors might have three or four classes of three or four hours a week each semester, have less TA and student grading assistance, and will have to spend more time developing their lectures and lesson plans and labs in classes they may be teaching for the first time or have taught only once or twice before, yet also have much more pressure to do research and get published since universities operate on a "publish or perish" system.
The institutions are mostly funded by governments, big foundations, and charitable endowments of other institutions.
The scope of physics research is also much broader than you imagine. Theoretical physics work in things like M-Theory and fundamental physics (often classed under the umbrella of theoretical high-energy physics) is a pretty small proportion of the total. Maybe there are several tens of thousands of professional physics researchers out there in the entire world, and maybe there are a couple thousand at most who are doing the kind of work you are imagining, broken up further into many subspecialties within that kind of research. The number of people doing pure theoretical M-theory work in particular, probably numbers in the several hundreds, maybe 1% of all professional research physicists.
Lots of collaborative work and exchanges of ideas in subfields is done via the Internet supplemented by annual conferences each year in that specialty. In addition to teaching and researching, most mid-level and senior professional physicists spend a fair amount of time trying to get grant money and organizing and attending conferences. Conference presentations are often a way to beta test an idea and get some kinks out of it before trying to publish it in a peer reviewed scientific journal (and participating in the operations and peer review process of scientific journals is another thing that most professional physicists at the mid-level to senior level do some of the time).
Also, obviously, physics researchers are human beings who sleep, spend time with their families, eat, have fun and even take vacations now and then, so almost no physics researchers truly work "24/7" on anything.
How is this actually being done?
Physics research is broken up into many highly specialized subcomponents that use very different methods.
High energy physics experimentalists are part of large collaborations around particle accelerator experiments who are further divided between people who design and operate the machine itself, people who come up with ways to filter the data to address particular questions, people who model what the expected results of an experiment are in both the Standard Model and alternatives to it, people who handle statistical issues like margins of error and the statistical significance of results, and people who manage the entire enterprise and focus on keeping it funded.
Another area which works with smaller collaborations organized in a somewhat similar way is astronomy, where each telescope or group of telescopes (using the term broadly to refer to all kinds of astronomy observation tools, from gravitational wave detectors to neutrino detectors to optical detectors to radio telescopes, etc.) has a collaboration within which some people work on designing and operating the machinery, others decide what to look for and how to filter the data, and still others sift through it.
The first two tend to be academically or governmentally funded and are pursuing basic research.
Solid state and nuclear physics collaborations are often corporate funded and oriented not towards discovering new fundamental laws of physics, but to understanding complex systems in a practical way, often in a university or corporate laboratory. These projects may have one or two lead investigators with post-docs and graduate students assisting them on experiments that can fit in a large room in an industrial or university science building. This can be a mix of research and development of technology and often takes a real flair for operating and fabricating precision instrumentation. Physicists in corporations have job titles like "physicist" or "senior physicist" with less prestige, but often have better salaries and benefits, because their work can be related directly to corporate profits.
The group that your question alludes to is generally known as theoretical high-energy physicists. They usually are professors at universities or fellows at institutes like the Perimeter Institute in Canada or the Santa Fe Institute in New Mexico who basically sit in their offices, confer with a handful of colleagues, and try to come up with new theories or discuss variations on existing ones. Their lives are similar to those of academic mathematicians, except that these theoretical physicists keep abreast of what the experimentalists and astronomers are observing in the real world and calibrate their own work to be consistent with those observations. These theoretical physicists tend to work either individually or in much smaller collaborations and typically have only a small number of graduate students or post-docs working under them (often only one or two). There aren't many grants available for basic theoretical physics research, but from a university's perspective they are just as good at teaching undergraduate and graduate students to generate revenue to subsidize their research activities and are by far the cheapest physics researchers to support because, beyond their salaries, they work in small offices that universities have already paid for and need only minimal equipment and support staff.
Between the experimentalists and those 'pure' high-energy theorists sits a broad array of physics researchers who work in small teams to program models to run on high-powered computers that simulate physical systems. For example, lattice QCD researchers apply the Standard Model laws of physics involved with the strong force and simulate interactions according to those laws using discrete numerical approximations of equations which are too hard to solve analytically. Other physics researchers of this type simulate turbulent air flow, calculate quantities in the Standard Model governed by notoriously difficult equations to high precision, or run simulations of the evolution of the universe with large numbers of particles over billions of simulation years. Sometimes theoretical expectations in high-energy physics experiments are simulated thousands of times rather than calculated analytically to produce a Standard Model prediction to compare to actual experimental results.
In addition to those, within particle and high-energy physics, between the 'pure' theoretical high-energy physicists and the computational physicists, there are also theoretical researchers who focus their research on trying to come up with more efficient ways to compute results from existing physics theories or toy model approximations of them. Those who do this within quantum field theory are sometimes called "amplitudologists."
Obviously, this list is not exhaustive. There are many other categories of physicists and many other kinds of physics research modes than those that I have mentioned here.
In each specialty, physicists not only do their own research but also read papers by other physicists whose work is relevant to them but that they don't do themselves. For example, theoretical physicists who work on issues in general relativity and cosmology read key subsets of astronomy research even though they don't do astronomy observations themselves.
Science journalists and passionate lay people like myself do a lot of the same reading of scientific journal papers and digesting them for more general audiences that professional physicists do, but without themselves doing a significant amount of original research, other than perhaps providing feedback to the authors of a preprint on one or two discrete subpoints that they've identified (perhaps an error in a calculation, an awkward phrase from someone not writing in their native language, or an omitted or inaccurate citation) in a manner similar to that of a peer reviewer. | {
"domain": "physics.stackexchange",
"id": 55869,
"tags": "soft-question, research-level, theory-of-everything, unified-theories"
} |
Understanding the concepts of word embedding in GPT-2 | Question: I have a program that calculates word embeddings using GPT-2, specifically the GPT2Model class:
from transformers import GPT2Tokenizer, GPT2Model
import torch
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
caption = "this bird is yellow has red wings"
encoded_caption = tokenizer(caption, return_tensors='pt')
input_ids = encoded_caption['input_ids']
with torch.no_grad():
    outputs = model(input_ids)
word_embeddings = outputs.last_hidden_state
I have a few questions about this:
When calculating the word embedding using outputs.last_hidden_state, does this mean that the word embedding only uses the token embedding and positional embedding of GPT-2, without feeding them to the decoder blocks after that?
Is this embedding also known as a contextualized embedding?
How is this embedding better than RNN architectures, such as LSTM or bidirectional-LSTM embeddings?
Answer: First, one clarification: GPT-2 does not deal with words, but with tokens. A token represents a string from a single letter up to a full word. You can check the vocabulary file here. The tokens in the vocabulary were defined by an algorithm called byte-pair encoding (BPE); you can check the questions about BPE on this very site.
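To make the BPE idea concrete, here is a toy sketch of a single merge step in plain Python (my illustration; the corpus and the resulting merge are made up and are not GPT-2's actual vocabulary or training data). The most frequent adjacent pair of symbols is found and fused into a new token:

```python
from collections import Counter

# Toy corpus, each word as a list of single-character symbols
corpus = [list("lower"), list("lowest"), list("low")]

def most_frequent_pair(words):
    # Count every adjacent symbol pair across the corpus
    pairs = Counter()
    for w in words:
        for a, b in zip(w, w[1:]):
            pairs[(a, b)] += 1
    return pairs.most_common(1)[0][0]

def merge(words, pair):
    # Replace every occurrence of `pair` with the fused symbol
    merged = []
    for w in words:
        out, i = [], 0
        while i < len(w):
            if i + 1 < len(w) and (w[i], w[i + 1]) == pair:
                out.append(w[i] + w[i + 1])
                i += 2
            else:
                out.append(w[i])
                i += 1
        merged.append(out)
    return merged

pair = most_frequent_pair(corpus)   # ('l','o') and ('o','w') are tied at 3 occurrences each
corpus = merge(corpus, pair)
print(pair, corpus)
```

Real BPE training repeats this merge step thousands of times, which is how GPT-2's vocabulary of subword tokens was built.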
Now, the answers to your questions:
When calculating the word embedding using outputs.last_hidden_state, does this mean that the word embedding only uses the token embedding and positional embedding of GPT-2, without feeding them to the decoder blocks after that?
No, it means the opposite: it uses the states of the last layer of the Transformer decoder, after going through all the layers of the model. Note that these embeddings are close to the fixed token embeddings. You may obtain more useful embeddings if you concatenate the hidden states of the last few layers.
Is this embedding also known as a contextualized embedding?
Well, your embeddings are contextual indeed. However, you are computing them with a causal language model, which for each token prediction only takes into account the previous tokens. Therefore, the context used for each token embedding is the part of the sentence that came before that token, not the following part.
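A tiny way to visualize that causality (my sketch of the masking idea, not GPT-2's actual code) is the lower-triangular attention mask used by causal Transformers: position i may only attend to positions j <= i, i.e. the previous tokens:

```python
# Causal attention mask for a toy 5-token sequence.
# Row i marks which positions token i may attend to: only j <= i.
n = 5
mask = [[1 if j <= i else 0 for j in range(n)] for i in range(n)]
for row in mask:
    print(row)
```

The first token sees only itself, while the last token sees the whole prefix; nothing ever attends to a future position, which is exactly why these embeddings are "left-context only".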
How is this embedding better than RNN architectures, such as LSTM or bidirectional-LSTM embeddings?
LSTMs are also causal, so they too only take into account the previous tokens as context. Bidirectional LSTMs are a concatenation of the results of LSTMs run in opposite directions; in this case, you obtain context from both sides, but such context has not been computed "at once", but in separate halves (one half is computed from the context before each token, the other half from the context after it), so it is not actually fully contextual. | {
"domain": "datascience.stackexchange",
"id": 12104,
"tags": "machine-learning, python, deep-learning, nlp, lstm"
} |
How to predict the direction of the electric field generated by a changing magnetic field (in conditions of cylindrical symmetry) and vice versa? | Question: Consider the two Maxwell equations
$$∇×E=-∂B/∂t \tag{1}$$
$$∇×B=μ_0 ε_0 ∂E/∂t \,\,\, \mathrm{if} \,\, i=0 \tag{2}$$
Consider the following situation with cylindrical symmetry. A magnetic field $B$ is uniform in space but changing in time with a certain law $B(t)=\alpha t$, and it is directed out of the screen (for example, one can consider an ideal solenoid with current varying in time, which would give $B(t)=\mu_0 n i(t)$).
Eq $(1)$ says that an electric field is generated. Since we have cylindrical symmetry I can use Stokes' theorem and, considering a surface $S$ whose border is the circle of radius $r$ perpendicular to the direction of $B$:
$$\int_{S} (∇×E) \cdot dS=-\int_S \frac{∂B}{∂t} \cdot dS=\oint E \cdot dl$$
$$E(r) 2\pi r=-\pi r^2 \frac{dB}{dt}$$
$$E(r)=-\frac{\alpha r}{2}$$
Electric field is perpendicular to $B$ but the question is: what rule should I use to predict the orientation of E with respect to B?
I tried the classic right-hand grip rule (or coffee-mug rule or corkscrew rule, the same as for Ampère's law) but it looks wrong in this case; in fact, if $B$ is pointing out of the screen, then $E$ is directed (tangentially) clockwise, while it should not be by the right-hand grip rule.
So how can I predict the direction of $E$ knowing the direction of $B$ in this case?
The same question holds for eq $(2)$ (in that case, I would like to predict the direction of $B$ generated by a changing $E$). In particular, does the minus sign in the first equation affect which rule to use in the two cases to find the direction of the generated field?
Answer: The answer is very simple. Stokes theorem describes the relation between an oriented surface integral and an oriented line integral along the boundary of this surface. The mathematical convention is the following. When you have chosen the side of the surface where the normal surface element vector $d\vec S $ protrudes, and you look from there to the line integral path around the surface, then the line element $d\vec r$ in the line integral is pointing in counterclockwise direction.
Therefore, for the first equation with $\vec B$ and $d\vec S$ pointing out of the screen your line integral is taken anti-clockwise. Because of the minus sign of the surface integral of the positive $∂\vec B/∂t$ this means that $\vec E$ is oriented clockwise (opposite to $d\vec r$).
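As a quick numerical cross-check of that sign (my addition, not part of the original answer): take $B(t)=\alpha t$ pointing out of the screen, so $∂\vec B/∂t = \alpha\hat z$, and the clockwise field $\vec E = \frac{\alpha}{2}(y,\,-x)$ found above. The $z$-component of $∇×\vec E$ should then come out to $-\alpha$, exactly as equation (1) requires:

```python
alpha = 3.0          # dB/dt, with B pointing out of the screen (+z direction)

def E(x, y):
    # The claimed answer: a clockwise field of magnitude alpha*r/2
    return (alpha / 2.0) * y, -(alpha / 2.0) * x

# Central finite differences for the z-component of curl E
h = 1e-6
x0, y0 = 0.3, -0.7   # arbitrary test point
dEy_dx = (E(x0 + h, y0)[1] - E(x0 - h, y0)[1]) / (2 * h)
dEx_dy = (E(x0, y0 + h)[0] - E(x0, y0 - h)[0]) / (2 * h)
curl_z = dEy_dx - dEx_dy
print(curl_z)  # approximately -3.0 = -alpha = -dB/dt, consistent with Faraday's law
```

A counterclockwise field of the same magnitude would instead give $+\alpha$, so the minus sign in Faraday's law is precisely what forces the clockwise orientation for an increasing outward $B$.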
Analogously, you will find that in equation (2) and positive $∂ \vec E/∂t$, $\vec B$ will be oriented counter-clockwise (same as $d\vec r$). | {
"domain": "physics.stackexchange",
"id": 35030,
"tags": "homework-and-exercises, electromagnetism, electricity, maxwell-equations, electromagnetic-induction"
} |
RGBDSLAM customization | Question:
I have RGBDSLAM installed on Ubuntu 14.04 via ROS Indigo (VirtualBox on OS X El Capitan). I have everything working well and I'd like to start adding custom functionality. I have a lot of experience with coding but not much experience with ROS or RGBDSLAM. How can I add custom functionality to RGBDSLAM?
I would like to accomplish:
object detection
human detection
send data (rgbd, octomap, and/or point clouds) to separate machine for in-depth processing
Can anyone point me in the right direction to get started?
Originally posted by jacksonkr_ on ROS Answers with karma: 396 on 2016-02-11
Post score: 0
Answer:
In general, decoupling and modularization is one of the main ideas of ROS. Therefore for most things you will want to create your own nodes that subscribe to the topics advertised by rgbdslam (while it is running, you can check them out in rqt_graph or via rosnode info <name-of-rgbdslam-node>). You can also call rgbdslam's services from your node, e.g., to make it send out data or save the octomap. See rosservice list and rgbdslam's readme.
If you need other functionality or data, you might implement another service call or topic in rgbdslam and process it in an external node. As an example, you might have a look at the feature-msg-output branch on github, where I implemented sending out of the extracted features.
For processing the clouds as a map, you could subscribe to the openni driver and the /tf frames from rgbdslam, assemble the clouds according to the received transformation and then run the pcl code on it.
Sending data to other machines can be done by ROS (more or less) out-of-the-box.
If you want to work on pointclouds within rgbdslam, probably the best place would be to call your code from the end of GraphManager::optimizeGraphImpl. Your code will then run after graph optimization, i.e., when the refined trajectory estimation has taken place. Note that also in this case, you will need to merge the individual clouds. Check out GraphManager::saveAllClouds for an example of how to do it.
Originally posted by Felix Endres with karma: 6468 on 2016-02-16
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 23719,
"tags": "ros, slam, navigation, 3dslam"
} |
Why is my simulation of eclipse 2017 wrong? | Question: I made a simulation of the 2017 total eclipse in Kansas City with the StarryNight 5.0 software, and this software shows the eclipse will not be total in this city. Websites say it is total. Where did I go wrong with this?
here is the file
Answer: If I have understood your attached file correctly, it shows your location as latitude 45.3, and longitude -80.4. I don't know where that is, but it's not Kansas City. | {
"domain": "astronomy.stackexchange",
"id": 2406,
"tags": "solar-eclipse"
} |
How does velocity relate to energy difference in Compton scattering? | Question: I'm having trouble understanding what my professor is asking in this question. I just visited her office and her explanation helped only minutely. I'm hoping to get a bit more clarity on what is being asked. Also, as a note, I got special placement in this class, and as a result, many of the equations being talked about (i.e., Compton scattering) are new to me. I shouldn't have an issue learning them relatively quickly. Anyway:
Consider Compton scattering for x-rays, original wavelength 0.073 nm scattered to a wavelength 0.00243 nm longer. The scattered photon has a longer wavelength, therefore less energy. Assuming that all of that “lost” energy goes into the kinetic energy of the electron partner in this collision, at what fraction of the speed of light will the electron be moving? Be sure you think carefully about the difference in energy between the original photon and the scattered photon – that difference gets assigned to the KE of the electron.
I'm not looking to get the answer to the question, but maybe to have some help in working it out, or if I am just not recognizing something obvious it could be pointed out to me.
I assume that the process for figuring out the fractional velocity relative to $c$ will depend on using the Compton scattering equation, then finding the difference in $E$ between the two wavelengths and computing the velocity from that relation. However, I don't know how Compton scattering plays a role in $E$.
Answer:
1. What is the energy of a photon?
2. How much has that energy changed after interacting with the electron?
3. What is the formula for the kinetic energy of an electron?
4. If the electron was initially not moving, and all of the change in the photon's energy went into the electron's kinetic energy, then what is the speed of the electron?
5. Divide the answer from 4. by $c$. | {
"domain": "physics.stackexchange",
"id": 16376,
"tags": "homework-and-exercises, astrophysics, electrons, scattering"
} |
Serial Connection to Hokuyo from Master | Question:
Hi,
I am currently getting the error:
[ERROR] [1531729856.715593296]: Error connecting to Hokuyo: Could not open serial Hokuyo:
/dev/ttyACM0 @ 115200
could not open serial device.
Initially, after encountering this problem, I was able to resolve it by adding myself to the dialout list as suggested:
https://answers.ros.org/question/286646/error-connecting-to-hokuyo-could-not-open-serial-hokuyo/
I was able to view the data from the Hokuyo after doing this.
I am now trying to access the Hokuyo from a different machine. On the drone (where the Hokuyo is connected), I changed ROS_MASTER_URI and ROS_IP to match the PC. On the PC, I then ran:
$ roscore
$ ssh drone-user@drone-IP
I am able to run '$ rostopic list' correctly.
However, after typing '$ rosrun urg_node urg_node', I receive the error mentioned above.
Any help is greatly appreciated.
Thanks,
Justin
Originally posted by Justin-RR2-IP on ROS Answers with karma: 16 on 2018-07-16
Post score: 0
Answer:
I have managed to fix the error, it was caused by a large discrepancy in time between the PC and the drone.
Connecting both devices to the internet and synchronizing their times was enough to resolve the issue.
Initially, I tried using the 'chrony' package for Ubuntu but I was unable to synchronize the times using this.
Any suggestions on how to get the drone to synchronize its time to match the PC's would be greatly appreciated!
Originally posted by Justin-RR2-IP with karma: 16 on 2018-07-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 31277,
"tags": "urg-node, ros-kinetic, master, hokuyo, time"
} |
Image to HTML / CSS conversion using pure JS | Question: Just for fun I wrote some JavaScript that "converts" an image into pure HTML / CSS. The user is able to select a local image. This image is loaded into an invisible canvas in order to extract pixeldata from the image. Using the pixeldata, a structure of 1x1 <div/> elements is created with each element having its background color set to its corresponding pixel color of the image.
The script works perfectly fine on small (width / height) images, but kills the browser on any large image file. I was wondering if there are some things I can do within my code to solve this issue?
document.addEventListener("DOMContentLoaded", function(event) {
const file = document.getElementById('file');
file.addEventListener('change', handleFile);
function handleFile(e){
// Create canvas
let canvas = document.createElement('canvas');
let ctx = canvas.getContext('2d');
// Load image
let img = new Image;
img.src = URL.createObjectURL(e.target.files[0]);
img.onload = function() {
// Draw image to canvas
ctx.drawImage(img, 0, 0, img.width, img.height);
let container = document.createElement('div');
container.id = 'container';
container.style.Width = img.width;
container.style.Height = img.height;
let pixelData = ctx.getImageData(0, 0, img.width, img.height).data;
let pixel = 0;
// Loop through each row
for(let x = 0; x < img.height; x++){
let row = document.createElement('div');
row.className = 'row';
// Loop through each column
for(let y = 0; y < img.width; y++){
let col = document.createElement('div');
col.className = 'col';
col.style.cssText = 'background: rgba('+pixelData[pixel]+','+pixelData[pixel+1]+','+pixelData[pixel+2]+','+pixelData[pixel+3]+');';
row.appendChild(col);
pixel = pixel + 4;
}
container.appendChild(row);
}
document.getElementById('body').appendChild(container);
URL.revokeObjectURL(img.src);
}
}
});
#container {
margin-left: 50px;
margin-top: 50px;
}
.row {
overflow: auto;
}
.row {
height: 1px;
}
.col {
width: 1px;
height: 1px;
float:left;
}
<body id='body'>
<input type='file' id='file'/>
</body>
Answer: After working on this for a while with other developers in Stack Overflow chat, the best solution I came up with (so far) was to use an asynchronous loop and to add nodes to a DocumentFragment instead of adding them directly to the DOM.
The result:
document.addEventListener("DOMContentLoaded", function(event) {
const file = document.getElementById('file');
file.addEventListener('change', handleFile);
function handleFile(e){
document.getElementById('container').innerHTML = '';
// Load image
let img = new Image;
img.src = URL.createObjectURL(e.target.files[0]);
img.onload = function() {
// Create canvas
let canvas = document.createElement('canvas');
let ctx = canvas.getContext('2d');
canvas.width = img.width;
canvas.height = img.height;
// Draw image to canvas
ctx.drawImage(img, 0, 0, img.width, img.height);
let container = document.getElementById('container');
container.style.width = img.width+'px';
container.style.height = img.height+'px';
let pixelData = ctx.getImageData(0, 0, img.width, img.height).data;
let pixel = 0;
let processedPixels = 0;
let total = img.height*img.width;
(function _asyncLoop(){
let row = document.createElement('div');
let fragment = document.createDocumentFragment();
row.className = 'row';
let rowPixels = 0;
do {
let col = document.createElement('div');
col.className = 'col';
col.style.cssText = 'background: rgba('+pixelData[pixel]+','+pixelData[pixel+1]+','+pixelData[pixel+2]+','+pixelData[pixel+3]+');';
fragment.appendChild(col);
processedPixels++;
rowPixels++;
pixel = pixel + 4;
} while(rowPixels < img.width);
row.appendChild(fragment);
document.getElementById('container').appendChild(row);
if( processedPixels < total ) setTimeout( _asyncLoop, 100 );
}());
URL.revokeObjectURL(img.src);
}
}
});
#container {
margin-left: 50px;
margin-top: 50px;
}
.row {
overflow: auto;
}
.row {
height: 1px;
}
.col {
width: 1px;
height: 1px;
float:left;
}
<body>
<p>Select an image from your computer and allow Javascript to draw it in the DOM using only DIV elements!</p>
<input type='file' id='file'/>
<div id='container'></div>
</body> | {
"domain": "codereview.stackexchange",
"id": 32301,
"tags": "javascript, performance"
} |
Where can I find the Ubuntu source packaging? | Question:
It doesn't seem to be available on http://packages.ros.org/ros/ubuntu.
Originally posted by ashuang on ROS Answers with karma: 1 on 2012-02-24
Post score: 0
Answer:
AFAIK there are no source packages (as for apt-get source).
There is the source based install for the basic ROS system.
For other packages, look in the wiki at ros.org for the packages that you are interested in. Usually there is a download/checkout link.
Originally posted by dornhege with karma: 31395 on 2012-02-24
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by tfoote on 2013-09-09:
Newer releases now have source debs on packages.ros.org | {
"domain": "robotics.stackexchange",
"id": 8379,
"tags": "ubuntu"
} |
Resource recommendations for learning tensor calculus | Question: I am having trouble understanding tensor calculus; can anyone please suggest some good books or lecture series?
Answer: You might wanna check Spacetime and Geometry: An Introduction to General Relativity by Sean Carroll. The first chapter treats elements of tensor calculus from a physics point of view. Most of General relativity courses contain material on this subject in their first chapters. | {
"domain": "physics.stackexchange",
"id": 85977,
"tags": "tensor-calculus, resource-recommendations"
} |
Solving Fourier transform exercises without explicitly doing the transform | Question: Hey there, in the signal processing course I am studying there is an exercise that reads:
The sequence $x(n)$ is given by $x(n)=\{-1\quad2\quad \underline{-3}\quad 2\quad -1\}$ and its Fourier transform is $X(\omega)$. Without explicitly performing the Fourier transform, solve the following:
a) $X(0)$
b) $\arg X(\omega)$
c) $\int_{-\pi}^{\pi}X(\omega)d\omega$
d) $X(\pi)$
e) $\int_{-\pi}^{\pi}|X(\omega)|^2d\omega$
I could not figure out how to solve any of this without doing the actual transform. I eventually solved all of it with the transform, but I am still none the wiser how to solve any of this without doing the transform.
Suggestions?
Please and thank you!
Answer: Some key take-aways/properties of the Fourier transform will help reveal the answers. There is a method to the madness in extracting these key take-aways and a high-level understanding of what the Fourier transform represents, which is what makes this exercise useful.
The line under the 3 indicates the assumed position of the vertical axis, meaning $n=0$, and thus we see we have a symmetric real waveform in time. A symmetric real waveform will always have a real transform in frequency (and vice versa). This is referred to as an even function. Similarly, an odd function (where the negative side is the same as the positive but sign-reversed) will always have an imaginary transform. (This is the basis of the proof that a causal function in time, as the sum of an even and an odd function, must always have a complex transform.) Remember this relationship; it's useful.
So knowing that allows you to solve (B).
To solve (A), consider what $X(0)$ represents (hint: what is the Fourier transform of a 9V battery?). Without actually solving the Fourier transform, take a look at the formula when $\omega=0$ and see what it simply reduces to in that case. Remember that one too!
To solve (D), do the same as above when $\omega$ is $\pi$ and look at how the formula simplifies. Note how +1, -1, +1, -1... comes into the picture and how easy that makes it to do this one in your head the next time around.
Note how (C) and (E) are integrated over the entire unique range of $\omega$. Go through a few cases to see what always occurs in that integration between the time domain signal and the frequency domain signal. Studying the formulas carefully will help you to do that again in your head in the future. Look into Parseval's Theorem and really understand what it is describing.
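None of this replaces the pencil-and-paper reasoning above, but the properties can be sanity-checked by evaluating the DTFT of the five samples directly from its definition (my addition):

```python
import cmath
import math

# x(n); the underline in the problem marks n = 0
x = {-2: -1, -1: 2, 0: -3, 1: 2, 2: -1}

def X(w):
    # DTFT evaluated directly from the definition
    return sum(c * cmath.exp(-1j * w * n) for n, c in x.items())

print(X(0).real)                 # a) the sum of the samples: -1
print(X(math.pi).real)           # d) the alternating sum: approximately -9
print(abs(X(1.3).imag) < 1e-12)  # b) X(w) is real (even sequence), so arg X(w) is 0 or pi

# c) and e): Riemann sums over one period [-pi, pi)
N = 2000
dw = 2 * math.pi / N
grid = [-math.pi + k * dw for k in range(N)]
avg_X = sum(X(w).real for w in grid) * dw / (2 * math.pi)
power = sum(abs(X(w)) ** 2 for w in grid) * dw / (2 * math.pi)
print(avg_X)   # c) divided by 2*pi: equals x(0) = -3
print(power)   # e) divided by 2*pi: equals sum |x(n)|^2 = 1+4+9+4+1 = 19 (Parseval)
```

The last two lines confirm the general facts behind (C) and (E): the integral of $X(\omega)$ over one period is $2\pi x(0)$, and the integral of $|X(\omega)|^2$ is $2\pi\sum_n |x(n)|^2$.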
Hope this helps! | {
"domain": "dsp.stackexchange",
"id": 10332,
"tags": "fourier-transform"
} |
How to denoise over-represented lines in a sample of 2D (geospatial) data? | Question: I have some geospatial data in lat/lon form, accurate to the 6th decimal place.
As shown in the picture below, there are some over-represented lines of points at specific latitudes which appear in the sample. In that example they are at fixed latitudes (which happen to be evenly spaced, but that is not relevant to the question).
In other cases, though, we have observed similar over-represented lines at an arbitrary slant, i.e. not at a fixed latitude.
Is there an algorithm to detect
a) lines of over-representation in sample lat/lon data at a fixed latitude
or even better
b) a general algorithm to detect lines of over-representation at arbitrary slants in geospatial data
?
Answer: You might take a look into the Hough transform. It is usually applied to images, but it can be easily adapted for 2-dimensional data. There are various packages, but a straightforward custom implementation would be enough for your purposes. The original Hough transform was used to detect straight lines, but it has since seen various improvements and with some effort can be employed to detect other shapes as well. | {
"domain": "datascience.stackexchange",
"id": 11277,
"tags": "data-cleaning, geospatial, noise"
} |
How to quantify the idea that physical calculations on objects of nearby geometry give the same answer? | Question: Many times in quick physics calculations involving the geometry of a physical body, an assumption is made to simplify the problem by considering the same problem over a simpler geometry. Examples:
"spherical cow" for calculating aerodynamic situation
"assume the mountain looks like a solid cylinder" for calculating the pressure from the weight of mountains
Assume the Earth is spherical, etc., for calculating its density
Regarding the ideas of approximation above, I don't understand a priori how physicists know that two things which have similar-ish geometry (the cone of a mountain versus a solid cylinder, for example) should give the right answer.
Is there a general principle for quantifying the "topological stability" of physical answers? In the sense that the answer to a certain physical problem doesn't change much under variations of the geometry on which it is considered?
Answer: Often when one makes such approximations, they're usually shown to be valid under some set of assumptions, and the resulting calculations are usually 'continuous with respect to errors in the approximation'. It's difficult to phrase a general summary, so let me just describe it with the example of calculating the mass of a mountain. We shall describe a mountain using its mass density $\rho$. This is a function $\rho:\Bbb{R}^3\to[0,\infty)$, and we shall assume at the very least that $\rho$ is a bounded measurable function, and that it has compact support (mountains are finite in size, and have bounded densities). Then, the mass of the mountain is by definition just
\begin{align}
M_{\rho}&=\int_{\Bbb{R}^3}\rho\,dV,
\end{align}
where $dV$ is the volume element (i.e $3$-dimensional Lebesgue measure). Note that the shape of the mountain is encoded in the function $\rho$ itself, because we assume that the density is positive everywhere that the mountain is present, and is $0$ elsewhere, i.e the set $\text{supp}(\rho)$ describes the region occupied by the mountain.
Now, $\rho$ is a function, meaning at each point of $\Bbb{R}^3$, it has to give me a certain element of $\Bbb{R}$. No one can give such exact results. So, we have to admit that the $\rho$ we use is indeed an approximation, and that there are some uncertainties here. Ok, so how do these uncertainties affect the mass? Well, I claim not by much, because the mass depends on density 'continuously' in the following sense. The mountain lies inside of some fixed compact set $K\subset\Bbb{R}^3$ (we can be very crude with the determination of this set). Now, for any pair of densities $\rho_1,\rho_2$ with support in $K$, we have:
\begin{align}
\left|M_{\rho_1}-M_{\rho_2}\right|&=\left|\int_{\Bbb{R}^3}\rho_1\,dV-\int_{\Bbb{R}^3}\rho_2\,dV\right|\\
&=\left|\int_{K}(\rho_1-\rho_2)\,dV\right|\\
&\leq \text{vol}(K)\cdot \sup_{x\in K}|\rho_1(x)-\rho_2(x)|.
\end{align}
In fact, in more technical terms, what I've shown above is that the (linear) mapping $\rho\mapsto \int_{\Bbb{R}^3}\rho\,dV=\int_K\rho\,dV$, from the space of bounded measurable functions (equipped with the supremum norm) into the real numbers is continuous (actually I've shown Lipschitz continuity, but for linear maps, these are equivalent). So, if the RHS is small, so is the LHS. Another way of saying this result is that if you start with a density $\rho$, and you are sure that the uncertainty $\delta\rho$ is a small function (small in the sense of $\sup\limits_{x\in K}|(\delta\rho)(x)|$ being a small number), then the difference in the mass calculation when using $\rho+\delta\rho$ as the density versus using $\rho$ as the density is also a small quantity. So, in non-technical terms, small uncertainties in density imply small changes to mass calculation.
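A small numerical illustration of this bound (my construction; the densities, the cube $K=[0,1]^3$, and the grid are made up for the demo): perturbing $\rho$ by a function of size at most $\varepsilon$ moves the computed mass by at most $\text{vol}(K)\cdot\varepsilon$.

```python
import math

# Midpoint-rule integral over K = [0,1]^3 (so vol(K) = 1)
def mass(rho, n=20):
    h = 1.0 / n
    pts = [(i + 0.5) * h for i in range(n)]
    return sum(rho(x, y, z) for x in pts for y in pts for z in pts) * h**3

rho1 = lambda x, y, z: 2.0 + x * y                                # a made-up density
eps = 0.05                                                        # uncertainty level
delta = lambda x, y, z: eps * math.sin(7 * x) * math.cos(3 * z)   # |delta| <= eps everywhere
rho2 = lambda x, y, z: rho1(x, y, z) + delta(x, y, z)

m1, m2 = mass(rho1), mass(rho2)
print(m1, m2)
print(abs(m1 - m2) <= eps)   # True: |M_rho1 - M_rho2| <= vol(K) * sup|delta|
```

The masses differ by far less than $\varepsilon$ here because the perturbation oscillates and partially cancels; the Lipschitz estimate above is only an upper bound, and it is respected.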
Anyway, I've gone over one approach for describing continuity in these calculations, but there are also other aspects in which we can make approximations. For instance, assuming the mountain is a cylinder in that calculation is fine, because the purpose of the calculation was to get an order-of-magnitude estimate for the mass/resulting pressure; in such situations we don't care about exact answers. We just want to know roughly how large the quantities are (for that, just some reasonable upper and lower bounds are sufficient).
Assuming cows are spherical falls under a similar category; for the scales we're interested in for those problems, the exact geometry doesn't matter (see also the answer by @Nickolas Alves about multipole expansions). Note however, that we're not always this cavalier. Imagine you're a big sports company trying to design equipment for your star athlete (where e.g. outcomes of races are decided by milliseconds). Just take a look at some documentaries for how much biometric data they gather in order to come up with tailor-made shoes, sportswear or whatever else they come up with. No one in their situation will make the assumption that athletes are spheres! The bottom line is the level of detail and accuracy you put into your model depends highly on your purpose for the calculation. | {
"domain": "physics.stackexchange",
"id": 89062,
"tags": "differential-geometry, geometry, topology"
} |
[Solved] Nodelet does not seem to initialize correctly | Question:
Hello, I'm trying to follow this tutorial on writing nodelets to run on the kobuki using ROS Kinetic and Ubuntu 16.04.4 LTS (Xenial Xerus), but I'm having issues with getting it to run. A nodelet seems to be created when running the launch file according to the rqt_graph, but it seems as though the initialization function is never called, therefore the node never subscribes or publishes to the necessary topics.
The code for nodelet.cpp, bump_blink_controller.hpp, nodelet_plugins.xml, and bump_blink_app.launch is identical to the linked tutorial, except for adding console output to the constructor and init methods of the nodelet (which were not displayed upon running).
Here is my CMakeLists.txt omitting the default comments from creating the package:
cmake_minimum_required(VERSION 2.8.3)
project(kobuki_controller_tutorial)
find_package(catkin REQUIRED COMPONENTS
kobuki_msgs
nodelet
roscpp
std_msgs
yocs_controllers
)
catkin_package(
# INCLUDE_DIRS include
# LIBRARIES kobuki_controller_tutorial
# CATKIN_DEPENDS kobuki_msgs nodelet roscpp std_msgs yocs_controllers
# DEPENDS system_lib
)
include_directories(
${catkin_INCLUDE_DIRS}
./include/
)
add_library(bump_blink_controller_nodelet
src/nodelet.cpp
)
The package.xml (manifest?) file with comments removed:
<?xml version="1.0"?>
<package format="2">
<name>kobuki_controller_tutorial</name>
<version>0.0.0</version>
<description>The kobuki_controller_tutorial package</description>
<maintainer email="user@todo.todo">turtlebot</maintainer>
<license>TODO</license>
<buildtool_depend>catkin</buildtool_depend>
<build_depend>kobuki_msgs</build_depend>
<build_depend>iostream</build_depend> <!-- added only for additional debugging -->
<build_depend>nodelet</build_depend>
<build_depend>roscpp</build_depend>
<build_depend>std_msgs</build_depend>
<build_depend>yocs_controllers</build_depend>
<build_export_depend>kobuki_msgs</build_export_depend>
<build_export_depend>iostream</build_export_depend>
<build_export_depend>nodelet</build_export_depend>
<build_export_depend>roscpp</build_export_depend>
<build_export_depend>std_msgs</build_export_depend>
<build_export_depend>yocs_controllers</build_export_depend>
<exec_depend>kobuki_msgs</exec_depend>
<exec_depend>iostream</exec_depend>
<exec_depend>nodelet</exec_depend>
<exec_depend>roscpp</exec_depend>
<exec_depend>std_msgs</exec_depend>
<exec_depend>yocs_controllers</exec_depend>
<export>
<nodelet plugin="${prefix}/plugins/nodelet_plugins.xml" />
</export>
</package>
Output from Launching the Nodelet (run alongside roslaunch kobuki_node minimal.launch) omitting info about the rosmaster and roslaunch server:
user@user:~/kobuki_ws$ roslaunch kobuki_controller_tutorial bump_blink_app.launch --screen
[ . . . ]
SUMMARY
========
PARAMETERS
* /rosdistro: kinetic
* /rosversion: 1.12.13
NODES
/
bump_blink_controller (nodelet/nodelet)
[ . . . ]
process[bump_blink_controller-1]: started with pid [27818]
[ INFO] [1533240116.733278721]: Loading nodelet /bump_blink_controller of type kobuki_controller_tutorial/BumpBlinkControllerNodelet to manager mobile_base_nodelet_manager with the following remappings:
[ INFO] [1533240116.733416911]: /bump_blink_controller/commands/led1 -> /mobile_base/commands/led1
[ INFO] [1533240116.733471337]: /bump_blink_controller/events/bumper -> /mobile_base/events/bumper
Any help is appreciated as I'd like to use nodelets for other projects, and I can provide any additional information upon request. Hopefully the fix is simple though!
Originally posted by chavi014 on ROS Answers with karma: 16 on 2018-08-02
Post score: 0
Original comments
Comment by lucasw on 2018-08-02:
I tried out https://github.com/blackzafiro/ROSExercises/blob/master/src/kobuki_noisy_controller/launch/bump_blink_sound_app.launch (and got a properly named nodelet manager running) which looks to have followed the same tutorial, and it worked fine. Does that one work for you?
Comment by chavi014 on 2018-08-03:
Hello @lucasw, I've downloaded the repo, ran catkin_make, and tried to launch the updated file only to get the same results. Here's an rqt_graph. Is it possible I'm building something incorrectly?
Answer:
I fixed my problem. I forgot the nodelet needed to be activated via a message on a specific topic, so once I sent the message there everything worked out.
Also to note, rebuilding your nodelet and trying to re-launch it in the same manager only loads the old shared object -- you must restart the manager to see any changes.
Originally posted by chavi014 with karma: 16 on 2018-08-03
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 31454,
"tags": "ros-kinetic, kobuki, nodelet"
} |
Polynomial approximation algorithm for set cover with assumption | Question: We want to cover $n$ elements with some sets from $S_1, …, S_m$ (classical set cover).
We furthermore suppose that every element belongs to at least $k$ sets, and want to find a set cover of cardinality at most $\mathcal{O}\left(\frac{m \cdot \log(n)}{k}\right)$.
For $k=1$, the classical greedy algorithm works, but after that I'm stuck.
Answer: I ended up finding the answer.
I'm going to prove the greedy algorithm yields the correct answer.
Count the number of membership relationships. There are at least $n\cdot k$, so one set must contain more than $\frac{n\cdot k}m$ elements.
After one step of greedy algorithm, there are less than $n_1 = n - \frac{n\cdot k}m = n \left(1 - \frac km\right)$ elements left.
After the second step, at most $n_1\left(1-\frac{k}{m-1}\right)$ elements remain (each remaining element still belongs to at least $k$ of the $m-1$ remaining sets).
We are guaranteed to terminate when $n\prod_{i=0}^j \left( 1-\frac{k}{m-i}\right) < 1$, i.e. $\log(n) + \sum_{i=0}^j \log\left(1-\frac{k}{m-i}\right) < 0$.
Since $\log(1-x) < -x$, $\log(n) < \sum_{i=0}^j \frac{k}{m-i}$ is sufficient.
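Before finishing the calculation, the greedy algorithm and the target $\mathcal{O}\left(\frac{m\log(n)}{k}\right)$ bound can be sanity-checked empirically (illustrative Python; the random instance construction is my own, not part of the question):

```python
import math
import random

def greedy_set_cover(n, sets):
    """Repeatedly pick the set covering the most uncovered elements."""
    uncovered = set(range(n))
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Instance where every element lies in exactly k of the m sets.
random.seed(0)
n, m, k = 200, 20, 5
sets = [set() for _ in range(m)]
for e in range(n):
    for s in random.sample(range(m), k):
        sets[s].add(e)

cover = greedy_set_cover(n, sets)
# The bound derived in this answer (+1 to absorb rounding):
assert len(cover) <= m * math.log(n) / k + 1
```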
We are looking for the smallest $j$ such that:
$ m-j < \exp\left(\log(m) - \frac{\log(n)}{k}\right) = \frac{m}{n^{1/k}}$
$\Longleftrightarrow j > m\left(1-\frac{1}{n^{1/k}}\right) = m\left(1-\frac{1}{\exp\left(\frac{\log(n)}{k}\right)}\right)$
Now, $1-\frac{1}{\exp(x)} ≤ x$ by convexity, hence $j > \frac{m \log(n)}{k}$ is sufficient. | {
"domain": "cstheory.stackexchange",
"id": 4541,
"tags": "approximation-algorithms, set-cover"
} |
Alternatives to word to vector embedding | Question: I'm just curious are there some alternative techniques to word 2 vector representation? So words/phrases/sentences are not represented as vectors but have a different form. Thanks.
Answer: In your question you talk about vector embeddings or "word 2 vector representation" (word2vec was the first software to train word embeddings). It's important to understand that not all vectors are embeddings:
Embeddings are short vectors made of real numbers, they were invented around 2010. There are many different types of embeddings, i.e. methods to train the embeddings from a corpus: word2vec, Glove, Elmo, Fasttext, Bert...
Before this, people were also using vectors representing a "bag of words": one-hot-encoding for a single word, frequency count or TFIDF for a sentence/document. These vectors are long and sparse, i.e. they usually contain a lot of zeros.
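Those sparse bag-of-words representations can be illustrated in a few lines (a minimal sketch, no particular library assumed):

```python
from collections import Counter

docs = ["the cat sat", "the cat ran", "a dog ran"]
vocab = sorted({w for doc in docs for w in doc.split()})

def one_hot(word):
    # One position per vocabulary word; a single 1 marks this word.
    return [1 if w == word else 0 for w in vocab]

def bag_of_words(doc):
    # Frequency counts over the vocabulary: long and mostly zeros.
    counts = Counter(doc.split())
    return [counts[w] for w in vocab]
```

A TF-IDF vector would simply reweight these counts by how rare each word is across the corpus.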
These are the most common word representation methods, but there are potentially other alternatives. For example, in Wordnet the words are nodes in a graph and relations between words are represented as edges. | {
"domain": "datascience.stackexchange",
"id": 11319,
"tags": "nlp, word-embeddings, word2vec"
} |
Does the octopus have an anus? What does it look like? | Question: Octopuses are cephalopods, which have separate anal and oral openings. Indeed, descriptions of the cephalopod GI tract clearly depict an anal opening.
However, I am very confused about how this applies to the octopus. I don't think I've ever seen the anus depicted - where is it? What does it look like? Does anything much come out?
My first instinct would be to simply do a google image search for this. Unfortunately, those results are flooded with unrelated hits (apparently the octopus is a popular motif for tattoos... in a certain part of the body).
Answer: The anus of Octopus is channeled into its siphon.
Image taken from Carina M. Gsottbauer
Note:
Siphon is a tube that leads from the mantle to the outside. Octopuses use their siphon to force water out in jets for propulsion and to flush waste products from the anus.
From Encyclopedia of the Aquatic World, Volume 6
By Marshall Cavendish Corporation | {
"domain": "biology.stackexchange",
"id": 5938,
"tags": "zoology, anatomy, cephalopods"
} |
Why is everything confused up in thermodynamics | Question:
Is the formula applied properly when calculating work? I think the volume should be final minus initial, so in the first step it should be $V_1-V_i$ and in the second step $V_f-V_1$. But totally different expressions are given and I can't understand why. There is also a graph based on this, and conclusions drawn from it; I wonder whether they are correct. Please help me: why is it so confusing?
Answer: You have a two-step process. In the first step, the mass m1 is removed from atop m2, and the system is allowed to re-equilibrate. In this step, if I do a Newton's law force balance on the combination of the piston (assumed massless) and m2, I get: $$F_g-m_2g-P_{atm}A=m_2\frac{dv}{dt}$$where $F_g(t)$ is the force exerted by the gas on the base of the piston during step 1, A is the area of the piston, and v is the velocity of the piston.
If I multiply this equation by the velocity of the piston $\frac{dx}{dt}=v$, I get:$$F_g\frac{dx}{dt}=m_2g\frac{dx}{dt}+P_{atm}A\frac{dx}{dt}+m_2v\frac{dv}{dt}=\left[\frac{m_2g}{A}+P_{atm}\right]\frac{dV}{dt}+\frac{1}{2}m_2\frac{dv^2}{dt}$$where V is the volume of gas at time t.
If I integrate this equation from t = 0 to arbitrary time t during the process, I obtain:
$$W_g(t)=\int{F_gdx}=\left[\frac{m_2g}{A}+P_{atm}\right][V(t)-V_i]+\frac{1}{2}m_2v^2(t)$$where $W_g(t)$ is the work done by the gas on the combination of piston and m2 up to time t, V(t) is the volume of gas at time t, $V_i$ is the initial volume of gas, and v(t) is the velocity of the combination of piston and m2 at time t. The final term on the right hand side of the equation represents the kinetic energy of the combination of piston and m2 at time t.
What do you think will happen to the velocity of the combination of piston and m2 at very long times?
(a) the velocity will continue increasing forever
(b) the velocity will slow down gradually until the piston and m2 finally stop at the equilibrium position $V_1$
(c) the piston and mass will overshoot the equilibrium position (until their velocity slows to zero), reverse direction and overshoot the equilibrium position going in the other direction (until their velocity slows to zero again), reverse direction again, etc. They will oscillate about the equilibrium position forever (like a mass/spring system in simple harmonic motion)
(d) same as (c), except that, even if the piston is frictionless, viscous damping stresses in the gas will ultimately damp out the oscillation until the combination of piston and m2 settle at the equilibrium position $V_1$
Actually, the correct answer is (d). However, I suppose (b) would be possible if the gas were somehow very viscous. Either way, however, the conclusion is the same. At final equilibrium of step 1, the piston is no longer moving. So the amount of work that the gas does on the combination of piston and m2 when final equilibrium is attained in step 1 is just:
$$W_{g1}=\left[\frac{m_2g}{A}+P_{atm}\right][V_1-V_i]$$This is the result they obtained in your reference, aside from the sign: they are considering work done by the surroundings on the system, while I am considering work done by the system (in this case the gas) on its surroundings. | {
"domain": "chemistry.stackexchange",
"id": 10810,
"tags": "thermodynamics"
} |
Will a bullet travel the same after going through 2 substances (in different orders)? | Question: This may seem like a trivial question, however I am a little curious as to whether 'order matters' in the case of this system, or whether you would be able to treat any object as a 'black box' with given input and output.
My scenario is this: If you have a block of ballistics gel with a steel plate at the end, and you fire a bullet through it (Such that the bullet has enough energy to easily pierce both gel and steel) will the amount of kinetic energy (or speed) of the bullet vary depending on if it went through the gel first and than steel, or through the steel first and than the gel?
Assume that the materials are held in place such that when shot they won't move vertically or out of place.
Would the solution change if the bullet didn't get deformed?
Answer: In general, the answer is no.
Let us assume that the drag in the "black box" is dominated by the inertial (quadratic) term, so the force goes as $\rho v^2$ (plus some other terms).
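As a toy numerical check (all coefficients hypothetical): the quadratic term alone turns out to commute between layers in a 1-D model, so the sketch below also includes a constant material-strength term (one of the "other terms"), after which the traversal order visibly changes the exit speed.

```python
import math

# Toy 1-D penetration model (all numbers hypothetical, chosen for illustration):
#   d(v^2)/dx = -(ALPHA + BETA * rho * v^2)
# ALPHA is a constant material-strength term; BETA*rho*v^2 is the
# density-dependent drag. Per layer this is an affine map on u = v^2,
# solvable in closed form.
ALPHA = 1.0e5   # units folded together; purely illustrative
BETA = 3.0e-4   # per metre per (kg/m^3); purely illustrative

def exit_speed(v_in, layers):
    u = v_in ** 2
    for rho, thickness in layers:
        c = BETA * rho
        u = (u + ALPHA / c) * math.exp(-c * thickness) - ALPHA / c
        if u <= 0:
            return 0.0  # bullet stopped inside this layer
    return math.sqrt(u)

gel = (1000.0, 0.30)    # (density kg/m^3, thickness m)
steel = (7800.0, 0.01)

v_gel_first = exit_speed(800.0, [gel, steel])
v_steel_first = exit_speed(800.0, [steel, gel])
```

With these numbers both orderings pierce the stack, but the steel-first ordering exits slower: the bullet meets the dense layer while it is fastest.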
If the velocity is highest while traversing the medium with the higher density, the total stopping power will be greater. It's that $v^2$ term that gets you... | {
"domain": "physics.stackexchange",
"id": 23301,
"tags": "projectile"
} |
Getting data from an API in JSON format | Question: I'm getting data from an API in JSON format. It's getting the resting heart rate of the user and inserting it into my database. It gets today's date as well as a couple days in the past and updates the database accordingly. The code below does work (although it's inserting a blank row for some reason). My question is, is there a better/more efficient way of writing this code?
Here is the array:
[activities-heart] => Array (
[0] => Array (
[dateTime] => 2018-03-22
[value] => Array (
[customHeartRateZones] => Array ( )
[heartRateZones] => Array (
[0] => Array (
[caloriesOut] => 1135.7736
[max] => 85
[min] => 30
[minutes] => 814
[name] => Out of Range
)
[1] => Array (
[caloriesOut] => 1260.7179
[max] => 119
[min] => 85
[minutes] => 289
[name] => Fat Burn
)
[2] => Array (
[caloriesOut] => 690.64515
[max] => 145
[min] => 119
[minutes] => 90
[name] => Cardio
)
[3] => Array (
[caloriesOut] => 0
[max] => 220
[min] => 145
[minutes] => 0
[name] => Peak
)
)
[restingHeartRate] => 65
)
)
[1] => Array (
[dateTime] => 2018-03-23
[value] => Array (
[customHeartRateZones] => Array ( )
[heartRateZones] => Array (
[0] => Array (
[caloriesOut] => 1512.00346
[max] => 85
[min] => 30
[minutes] => 1113
[name] => Out of Range
)
[1] => Array (
[caloriesOut] => 1315.59604
[max] => 119
[min] => 85
[minutes] => 280
[name] => Fat Burn
)
[2] => Array (
[caloriesOut] => 98.14618
[max] => 145
[min] => 119
[minutes] => 13
[name] => Cardio
)
[3] => Array (
[caloriesOut] => 0
[max] => 220
[min] => 145
[minutes] => 0
[name] => Peak
)
)
[restingHeartRate] => 64
)
)
)
And here is the relevant code to process it:
$obj = new RecursiveIteratorIterator( new RecursiveArrayIterator( json_decode( $return, TRUE ) ), RecursiveIteratorIterator::SELF_FIRST );
foreach ( $obj as $key => $val ) {
$$key = $val;
$stmt = $connection->prepare( "INSERT INTO heartrate SET `encodedid` = ?, activitydate = ?, rhr = ? ON DUPLICATE KEY UPDATE rhr= ?" );
$stmt->bind_param( 'ssii', $studentencodedid, $dateTime, $restingHeartRate, $restingHeartRate );
$stmt->execute();
if ( $stmt->affected_rows < 1 ) {
header( "Location: " . $url . "/index.php?errormessage=Failed to update the resting heart rate value." );
}
$stmt->close();
}
Answer: Code samples in this answer are approximate; I didn't test them.
A couple thoughts:
1)
You've currently called prepare() inside the body of your foreach loop. This is unnecessary; you only need to prepare your query once. You may reuse the statement object that prepare() returns (a mysqli_stmt here, or a PDOStatement under PDO) in every iteration of your loop. This is actually very efficient, as the query will only have to be parsed once. Once you're prepared, the preparation is done; no need to re-prepare for each loop iteration if your query structure is the same.
(For the record, prepare is indeed the correct method to use in batch operations like this. In the future, if you only need to execute a single query under PDO, you could alternatively use PDO::query() (if you need results) or PDO::exec() (if you don't need results from the query -- it simply returns the number of rows affected by the query as an integer).)
The same goes for bind_param(). You don't need to re-bind the params each loop iteration -- once bound, they're bound. bind_param() binds by reference; the actual evaluation of the params happens every time execute() is called.
So let's do this minor refactor. We prepare the statement and bind the params before the loop begins, we execute() the statement on every iteration of the loop, and we'll move the close()-ing of the PDOStatement to be after the loop. Your sample now looks like this, and will be a bit more performant:
$obj = new RecursiveIteratorIterator( new RecursiveArrayIterator( json_decode( $return, TRUE ) ), RecursiveIteratorIterator::SELF_FIRST );
$stmt = $connection->prepare( "INSERT INTO heartrate SET `encodedid` = ?, activitydate = ?, rhr = ? ON DUPLICATE KEY UPDATE rhr= ?" );
$stmt->bind_param( 'ssii', $studentencodedid, $dateTime, $restingHeartRate, $restingHeartRate );
foreach ( $obj as $key => $val ) {
$$key = $val;
$stmt->execute();
if ( $stmt->affected_rows < 1 ) {
header( "Location: " . $url . "/index.php?errormessage=Failed to update the resting heart rate value." );
}
}
$stmt->close();
2)
$$key = $val;
I'd recommend NOT to use variable variables. They're generally considered a blight on the language, and people have pretty strong opinions on them.
They tend to make code more confusing to read, debug, and lint; and they don't do much that you can't do in a simpler manner.
2.5)
Actually, looking at your sample data array, your data doesn't look to be of arbitrary depth. It looks fixed-depth. Yes, there are n heartRateZones per day, but the structure is repetitive. It's keyed per day, and each day has a few unique key/value sets. The only things in the value array you seem to care about are the per-day dates and resting heart rates. You aren't using them yet, but if you also want the heartRateZones, you can grab those directly without iterating over all the other keys in the 'value' array.
I'm not seeing a need for recursion at all.
We can easily eliminate the variable variables and get rid of the unnecessary recursion all at once. I'd probably handle this with loops or nested loops instead of recursion.
You don't appear to currently be using any of the heartRateZones. If you wanted to insert a record per zone, you could run execute() in that inner foreach. Otherwise just delete that loop and run the execute() in the days loop.
I didn't see $studentencodedid defined in the code sample you gave, or in the JSON data. I assume you already defined that elsewhere in your script.
$obj = json_decode($return, TRUE);
$stmt = $connection->prepare( "INSERT INTO heartrate SET `encodedid` = ?, activitydate = ?, rhr = ? ON DUPLICATE KEY UPDATE rhr= ?" );
$stmt->bind_param( 'ssii', $studentencodedid, $dateTime, $restingHeartRate, $restingHeartRate );
// Iterate through days
foreach ($obj as $key => $value) {
// Grab data unique to a whole day that you care about
$dateTime = $value['dateTime'];
$restingHeartRate = $value['restingHeartRate'];
// Iterate through heartRateZones (Or remove this loop if you don't need them)
foreach($value['value'] as $zoneKey => $zoneValue) {
// If you want to insert records based on individual heart rate records, do that here and remove the execute() below. You'd insert a record per-heartrate with the same dateTime and restingHeartRate values as all the other heartrates on that day.
// $stmt->execute();
}
$stmt->execute();
if ($stmt->affected_rows < 1 ) {
header( "Location: " . $url . "/index.php?errormessage=Failed to update the resting heart rate value." );
}
}
$stmt->close(); | {
"domain": "codereview.stackexchange",
"id": 30008,
"tags": "php, array, json"
} |
"contains" function for STL containers | Question: I have made a simple program that tests if the given element exists in any STL container. The program tests if the container has a find member function. It will use it if it does. Otherwise, it will call the STL find function instead. I would like to know how I can improve it further?
#include <iostream>
#include <algorithm>
#include <utility>
#include <map>
#include <set>
#include <vector>
#include <array>
#include <type_traits>
template <typename C> decltype(std::declval<C>().find(0), std::true_type{}) test(int);
template <typename C> std::false_type test(...);
template <typename C> using has_find = decltype(test<C>(0));
template <bool B, typename T, typename F>
std::enable_if_t<std::integral_constant<bool, B>::value, T>
conditional(T&& t, F&&)
{
return std::forward<T>(t);
}
template <bool B, typename T, typename F>
std::enable_if_t<!std::integral_constant<bool, B>::value, F>
conditional(T&&, F&& f)
{
return std::forward<F>(f);
}
template <typename C, typename T>
auto contains(const C& container, const T& key)
{
static auto first = [] (auto&& c, auto&& k)
{
return c.end() != c.find(k);
};
static auto second = [] (auto&& c, auto&& k)
{
return c.end() != std::find(c.begin(), c.end(), k);
};
auto op = conditional<has_find<C>::value>(first, second);
return op(container, key);
}
int main()
{
std::cout << std::boolalpha;
std::array<int, 3> a = {{ 1, 2, 3 }};
std::cout << contains(a, 0) << "\n";
std::cout << contains(a, 1) << "\n\n";
std::vector<int> v = { 1, 2, 3 };
std::cout << contains(v, 0) << "\n";
std::cout << contains(v, 1) << "\n\n";
std::set<int> s = { 1, 2, 3 };
std::cout << contains(s, 0) << "\n";
std::cout << contains(s, 1) << "\n\n";
std::map<int, int> m = { { 1, 1}, { 2, 2}, { 3, 3} };
std::cout << contains(m, 0) << "\n";
std::cout << contains(m, 1) << "\n";
}
Answer: If you like new features, and even experimental features, you can make your code a lot cleaner.
Concepts
A lot of those arcane SFINAE techniques will be obsolete once concepts are out, and concepts are already available in an experimental state with the last versions of gcc (enable them with the -fconcepts option):
template <typename Container, typename Value>
concept bool HasFindFunction = requires(Container c, Value v) {
c.find(v);
};
... and that's it for your trait.
if constexpr
To specialize the contains function, you can rely on C++17 if constexpr. It's a compile-time if, in which only the chosen branch has to be well-formed. So, here's what you could write:
template <typename Container, typename Value>
auto find(Container&& cont, Value&& val) {
if constexpr (HasFindFunction<Container, Value>) {
std::cout << "member find\n";
return cont.find(val);
}
else {
std::cout << "algorithm find\n";
return std::find(std::begin(cont), std::end(cont), val);
}
}
I replace contains by find, because I find it a pity to make a hard won information (the item position) go to waste. You can then write contains on top of it.
Complete working example (g++ prog.cc -Wall -Wextra -std=gnu++2a -fconcepts)
#include <iostream>
#include <vector>
#include <map>
#include <algorithm>
template <typename Container, typename Value>
concept bool HasFindFunction = requires(Container c, Value v) {
c.find(v);
};
template <typename Container, typename Value>
auto find(Container&& cont, Value&& val) {
if constexpr (HasFindFunction<Container, Value>) {
std::cout << "member find\n";
return cont.find(val);
}
else {
std::cout << "algorithm find\n";
return std::find(std::begin(cont), std::end(cont), val);
}
}
template <typename Container, typename Value>
auto contains(Container&& c, Value&& v) {
return std::end(c) != find(c, std::forward<Value>(v));
}
int main() {
std::map<int, int> map;
std::vector<int> vector;
find(map, 5);
contains(vector, 5);
} | {
"domain": "codereview.stackexchange",
"id": 30249,
"tags": "c++, c++14, template"
} |
Do black holes emit no light or does light fall back in black holes? | Question: Black holes have such a strong gravity that escape velocity from it is more than speed of light which basically means nothing could escape it.
Everything in the universe has an escape velocity. For Earth it is 11.2 km/s, which means that if any body leaves earth's surface at 11.2 km/s, it can reach an infinite distance away and will never come back to earth.
If something has half the escape velocity, say 5.6 km/s, then the body will return to earth after travelling some distance.
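(As a numeric aside, purely for illustration: the 11.2 km/s figure follows from the Newtonian formula $v_e=\sqrt{2GM/R}$.)

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m, mean radius

# Newtonian escape velocity: kinetic energy just balances gravitational binding.
v_esc = math.sqrt(2 * G * M_EARTH / R_EARTH)  # roughly 11.2 km/s
```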
Suppose a black hole has an escape velocity of twice the speed of light; that would mean photons can't escape it.
But just as in our earlier example, where a body launched at 5.6 km/s travelled some distance before coming back to earth, a photon emitted from that black hole would travel some distance before falling back into it.
That would mean a person standing outside the range of that photon wouldn't see the black hole, but could see it if he came close, within the photon's range.
A very similar question: if a photon is emitted from the surface of earth, would its speed or energy decrease as it moves away, like a blue-light photon becoming less intense?
Answer: Your intuition is that of Newtonian mechanics, I can say this because you said "Black holes have such a strong gravity that escape velocity from it is more than speed of light which basically means nothing could escape it". In fact, gravity is much more complicated than that. In the frame of general relativity, it is not surprising to have a Black-Hole whose surface gravity is less than Earth's.
As your question seems to focus only on the horizon's gravity, I will not consider the internal structure of a static black hole solution in General Relativity. But I do think that it is noteworthy to consider it too in order to have a good understanding of a black hole. Furthermore, I will say "infinite amount of" because the maths blow up at the coordinate singularity (the event horizon) in the coordinates I chose. But it is to be interpreted as "when you approach the horizon from the exterior, things tend to either $0$ or $\infty$".
Firstly, what is an event horizon in General Relativity, in simple words? To keep things simple, I will only consider Schwarzschild's black holes, which do not rotate. An event horizon is a surface in space such that the first component of the metric tensor is precisely $0$. To understand this statement, let us focus on the Schwarzschild line element, which we can interpret as a deviation of the geometry of space-time from the Minkowski picture:
\begin{equation}
ds^2=\underbrace{\left(1-\frac{r_s}{r} \right)}_{\text{radial time dilation}}\times\overbrace{dt^2}^{\begin{array}{c} \text{infinitesimal}\\ \text{time parameter}\end{array}}-\underbrace{\left( 1-\frac{r_s}{r} \right)^{-1}}_{\text{radial length contraction}}\times\overbrace{dr^2}^{\begin{array}{c} \text{infinitesimal} \\ \text{radial parameter} \end{array}}-\underbrace{r^2d\Omega_2^2}_{\text{angular part}}
\end{equation}
Concretely, $ds^2$ is the "square" of an infinitesimal length in space-time and is called the "space-time interval" (among other appellations). Note that when $r=r_s$, the "radial time dilation" becomes 0; this means that the event horizon is located at a radius $r_s$ from the center. Of course, the "radial length contraction" factor blows up at this radius, but this is a feature of Schwarzschild black holes. In the Kerr geometry, which describes a rotating black hole, this is no longer the case.
But what does it mean to have a time dilation factor of $0$? It means that, for the faraway observer, time will appear to stop at the horizon. So if you are far away from the horizon and drop an apple towards it, the apple will gradually stop moving, and ultimately be completely stopped right at the horizon. Of course from the point of view of the apple, time does not stop. But if the apple has a camera stuck to it that films you, it will see you in accelerated motion. Ultimately, when the apple is at the horizon, it would film the death of the universe (of course, if the camera has an infinite lifetime, and if the black hole does not evaporate).
Now, I have focused on the falling of the apple only, but not on its appearance. As you may know, you see electromagnetic radiations from ~400nm to ~800nm. But, as the apple falls towards the horizon, the light it scatters in your direction will be redshifted. So light still does travel at $c$, but the more it is emitted near the horizon (and directed outwards), the more its wavelength is dilated. Ultimately if the light, which has a non-zero wavelength, is emitted toward the faraway observer from the horizon, then from the point of view of this very observer${}^{\ast}$, light has an infinite wavelength, and so a frequency of $0$, and so does not exist anymore. Thus, black holes are classically... well, black.
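Quantitatively (the standard Schwarzschild result, stated here for completeness): light emitted outward at radius $r$ with wavelength $\lambda_e$ is received by the faraway observer with wavelength
$$\lambda_\infty=\frac{\lambda_e}{\sqrt{1-\dfrac{r_s}{r}}}\;\longrightarrow\;\infty \quad\text{as}\quad r\to r_s^{+}$$
which is exactly the divergence described above: the received frequency goes to zero as the emission point approaches the horizon.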
I still must clarify some things about your question. The phenomenon you describe well, the fact that objects having a speed below the escape velocity fall back to the Earth, happens for light too, but when you are inside of the black hole!
Also, what I described is purely in terms of the faraway coordinates $(t,r,\theta,\varphi)$. In other coordinate systems, you have other conclusions. Among those is, for example, the fact that you can fall in a finite time into the black hole. But in this case, the time parameter is not the cosmological time $t$ anymore. See this Wikipedia link for the other representations of the line element I wrote above.
${}^{\ast}$Finally, I said that you will no longer see the emitted light, but this is if you stay still for an infinite amount of time! Indeed, due to the "radial length contraction" term, which blows up at the horizon, light has to travel an infinite distance to reach you, and so it takes an infinite amount of time. | {
"domain": "physics.stackexchange",
"id": 97785,
"tags": "black-holes, photons, escape-velocity"
} |
Can we find the electric field at the centre of a regular polygon **using polygon rule**? | Question: Consider a regular pentagon whose vertices are labelled as $1, 2, 3, 4$ and $5$. Now, let us put a charge +q at each corner. I can understand that the field at the centre $O$ must be zero, from symmetry. Also, by breaking each electric field vector at the centre into two mutually orthogonal components in the plane of the polygon, I can show that the fields cancel out at the center. I have also seen this answer.
But can we use the "polygon rule of vector addition" to show that the field at the center vanishes? I am interested in solving the problem without breaking the fields into components or without using the symmetry of the problem. Of course, we have to show that $\vec{1O}+\vec{2O}+\vec{3O}+\vec{4O}+\vec{5O}=\vec{0}$. I have a hunch that the polygon rule must lead to this. Any hint?
Answer: The polygon rule can definitely be used to justify the zero electric field at the center. First of all, note that for any arbitrary configuration of charges (of equal magnitude), located on the unit circle (centered at the origin) at locations A, B, C, D and E, the electric field vector at the center O is proportional to $\vec{AO}+\vec{BO}+\vec{CO}+\vec{DO}+\vec{EO}$. However, this vector lies in the opposite direction of the vector $\vec{OA}+\vec{OB}+\vec{OC}+\vec{OD}+\vec{OE}$, which is in the direction of the center of charge (say G), because center of charge is defined as
$$\mathbf r_{\rm COM}=\frac{\displaystyle \sum_1 ^n q_i \mathbf r_i}{\displaystyle \sum _1 ^n q_i}$$
where $\mathbf r_i$ are the position vectors of the charges under consideration. In our case, it's easy to see that
$$\mathbf r_{\rm COM}=\frac{
\vec{OA}+\vec{OB}+\vec{OC}+\vec{OD}+\vec{OE}}{5}$$
Thus, we can say that $\mathbf E\propto -\mathbf r_{\rm COM}$
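This cancellation is easy to verify numerically for the regular pentagon (illustrative Python; the same holds for any regular $n$-gon):

```python
import math

# Unit vectors from the centre O to the vertices of a regular pentagon.
n = 5
verts = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
         for i in range(n)]

# Sum OA + OB + OC + OD + OE, i.e. n times the centre-of-charge vector r_COM.
sx = sum(x for x, _ in verts)
sy = sum(y for _, y in verts)
# Both components vanish up to floating-point error.
```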
Now in the case where these charges are placed symmetrically across the unit circle, the value of $\mathbf r_{\rm COM}$ vanishes to become a null vector. Thus, the electric field which, as we discussed, was proportional to $-\mathbf r_{\rm COM}$, now also vanishes and becomes zero. In fact, this set of arguments can be generalized to any regular $n$-sided polygon, always yielding the same value (zero) of the electric field. | {
"domain": "physics.stackexchange",
"id": 70724,
"tags": "homework-and-exercises, electrostatics, electric-fields, vectors"
} |
What is the hybridisation of hydrogen in methane? | Question: I am trying to understand hybridisation. In methane carbon has $\mathrm{sp^3}$ hybridisation, but what is the hybridisation of hydrogen? Is it $\mathrm{sp^3}$? If yes then why?
Answer: Hydrogen does not hybridise, as it only has one occupied $s$ orbital. Hybridisation is the "mixing" of $s$ and $p$ orbitals; here is a good explanation (for ethane, but it explains the general theory as well). | {
"domain": "chemistry.stackexchange",
"id": 6431,
"tags": "bond, hybridization"
} |
Controlling one signal with another one (Rocket engine effect - Subtractive synthesis of sound) | Question: I am programming some sound effects in Java and exporting them into .wav files. Currently, I am trying to program a rocket engine sound effect. I want to do it in the following way:
The sound of a rocket engine may be synthesized with a red noise generator controlled by a second red noise generator. The parameter of the first generator modified by the second one is the number of interpolated samples, influencing the spectral content of the generated noise. In order to simulate changes of sound intensity (e.g. during launch) the envelope generator should be used.
I am wondering how can it be done, e.g. what does it mean that one signal controls another one. Probably this part is explaining it, but I am not sure what to do now:
The parameter of the first generator modified by the second one is the
number of interpolated samples, influencing the spectral content of
the generated noise.
Is it about this parameter describing how many values from white noise are taken and linearly interpolated while creating the red noise? (see my simple drawing explaining this process below)
I have a red noise generator, which returns an array of doubles with values between -1 and 1 (it is generated from the white noise as described). What am I supposed to do now? How can I control the second red noise? I guess that it does not mean that I should control the amplitude of the second signal. Does it? Schema of steps required to obtain the rocket engine sound effect is attached below.
Answer: I have already solved this problem:
I have a white noise generated as an array of doubles:
public static double[] whiteNoise(int length) {
double[] out = new double[length];
for (int i = 0; i < length; i++) {
out[i] = (Math.random() * 2 - 1.0) / 2.0; // uniform in [-0.5, 0.5)
}
return out;
}
I obtain the red noise from the white noise in the following way:
public static double[] redNoise(int length, int m) {
double out[] = whiteNoise(length);
for (int i=0; i<length/m-1; i++) {
int k = i*m;
int l = ((i+1)*m <= length-1) ? (i+1)*m : length-1;
double d = (out[l] - out[k])/((double)(m));
for (int j=0; j<m; j++) {
out[k+j] = out[k] + d*j;
}
}
return out;
}
Where the m parameter is the number of interpolated samples (we take the first sample and the (m+1)-th sample, and linearly interpolate the values of all samples between these two).
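To see concretely what m does — larger m means longer linear segments, hence darker, slower-moving noise — here is a small self-contained demo. It is a simplified re-implementation of the idea above (seeded anchors plus linear interpolation), not the exact project code:

```java
import java.util.Random;

public class RedNoiseDemo {
    // Simplified version of the idea: keep every m-th white-noise sample as an
    // anchor and linearly interpolate between anchors ("m interpolated samples").
    static double[] redNoise(int length, int m, long seed) {
        Random rng = new Random(seed);
        double[] out = new double[length];
        for (int i = 0; i < length; i += m) {
            out[i] = rng.nextDouble() * 2 - 1;   // anchor samples in [-1, 1)
        }
        for (int i = 0; i + m < length; i += m) {
            double d = (out[i + m] - out[i]) / m;
            for (int j = 1; j < m; j++) out[i + j] = out[i] + d * j;
        }
        // Tail past the last full segment is left at 0 (kept simple).
        return out;
    }

    // Mean absolute sample-to-sample jump: a crude "brightness" proxy.
    static double roughness(double[] x) {
        double s = 0;
        for (int i = 1; i < x.length; i++) s += Math.abs(x[i] - x[i - 1]);
        return s / (x.length - 1);
    }

    public static void main(String[] args) {
        double[] slow = redNoise(1000, 100, 42L); // few anchors -> smooth, dark
        double[] fast = redNoise(1000, 5, 42L);   // many anchors -> rough, bright
        System.out.println(roughness(fast) > roughness(slow)); // prints: true
    }
}
```

With a fixed seed this is deterministic: the m = 5 signal always has a larger average sample-to-sample jump than the m = 100 one, which is exactly the spectral difference the rocket effect exploits.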
And now the SHOW BEGINS
I generate the red noise with the interpolation number equal to, let's say, 10000. Then, stepping through it, I alter this red noise again: the value of each sample reached, suitably scaled, becomes the interpolation number for the next segment. In this way a rocket sound effect is obtained.
int interpolatedframes = 10000;
double[] rednoise1 = SigGen.redNoise(duration, interpolatedframes);
int i = 0;          // position in the signal
int selectedNumber; // next segment's interpolation count
while (i < rednoise1.length - 1)
{
selectedNumber = (int)Math.floor(Math.abs((100*(rednoise1[i]+0.5)+100)));
if(i+selectedNumber < rednoise1.length-1) {
rednoise1 = SigGen.alterRedNoise(rednoise1, i, selectedNumber);
}
else
{
break;
}
i+=selectedNumber;
}
The following code alters the noise array by interpolating "interpolationNumber" samples beginning at startIndex.
public static double[] alterRedNoise(double[] input, int startIndex, int interpolationNumber)
{
int k = startIndex;
int l = ((startIndex + interpolationNumber)<input.length-1?(startIndex+interpolationNumber):input.length-1);
double d = (input[l] - input[k])/((double)(interpolationNumber));
for (int j=0; j<interpolationNumber; j++) {
input[k+j] = input[k] + d*j;
}
return input;
}
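For the sound-intensity changes mentioned in the original problem statement (e.g. during launch), an envelope generator can be a simple per-sample gain multiplied into the signal. This is a hypothetical minimal sketch (linear attack, then exponential decay) and not code from the project itself:

```java
public class Envelope {
    /**
     * Multiplies the signal by a gain envelope: a linear rise over
     * attackSamples, then an exponential decay with time constant decayTau
     * (in samples). The shape is an assumption for illustration only.
     */
    public static double[] apply(double[] signal, int attackSamples, double decayTau) {
        double[] out = new double[signal.length];
        for (int i = 0; i < signal.length; i++) {
            double gain;
            if (i < attackSamples) {
                gain = (double) i / attackSamples;                 // ramp 0 -> 1
            } else {
                gain = Math.exp(-(i - attackSamples) / decayTau);  // fade out
            }
            out[i] = signal[i] * gain;
        }
        return out;
    }
}
```

Applied to the altered red noise, this gives the "engine spools up, then the rocket flies away" intensity contour.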
Sorry for a bit messy code, but I just wanted to publish it ASAP.
#EDIT 1
The sound which is supposed to be generated is the sound of a rocket being launched. Take a look at this movie (maybe from 1:45): https://www.youtube.com/watch?v=OnoNITE-CLc
To have the effect of the rocket flying over, an envelope generator might be used. It can be implemented as a simple multiplication of the samples obtained from the described method, in such a way that the sound is gradually suppressed. | {
"domain": "dsp.stackexchange",
"id": 2762,
"tags": "audio, sound, signal-synthesis"
} |
Is there a specific branch of physics that studies waves? | Question: Is there a branch of physics that studies waves and how they propagate through air, wires etc.?
Answer: Acoustics deals with mechanical waves. But as CuriousOne says, many areas of physics use waves in some way, so it's hard to pinpoint a "wave-only" branch of physics. | {
"domain": "physics.stackexchange",
"id": 16561,
"tags": "waves"
} |
simple robot integration with Gazebo to test all my robot code | Question:
I have built a simple chassis with left and right wheels with two separate motor controllers. The motor controllers are on a simple CAN communications bus. I've written a ROS service that sends the appropriate CAN messages to command each motor to any velocity it is capable of. I call that service from a ROS node that is subscribed to the standard geometry_msgs/Twist message on /cmd_vel, which can be published by the example turtlebot tele-op node or from the RQT plugin Robot Steering.
This is sort of the chain of events:
Robot Steering -> [Twist Messages] -> Locomotion Node -> [API Messages] -> Motor API Service -> [CAN frames] -> Motor Controllers -> [current, amps] -> Motors
So I've done all this and it works fine, but I still feel like a newbie because I cannot figure out how to run all my software in Gazebo. Sure it's running on real hardware, but I'd like to add complexity and I want to test that complexity in simulation before running it on real hardware.
So I've used this ROS/Gazebo Integration Differential Drive tutorial (http://gazebosim.org/wiki/Tutorials/1.9/ROS_Motor_and_Sensor_Plugins#Differential_Drive) to attach a differential drive plugin to a simple two wheeled and caster simulated robot in Gazebo and it allows me to drive the robot around the sim from the Robot Steering rqt plugin which publishes the Twist messages. This is great, but it only executes the Robot Steering rqt plugin to publish the Twist message and then it subscribes to that Twist message and none of the other software is used. So this scenario does not test my Locomotion Node nor my Motor API Service software that I wrote.
So my question is: Is it somehow possible, and does it make any sense, to break the chain of events above at the CAN frames link and simulate just the motor controllers and the motors in the Gazebo sim, so I can test the execution of the Robot Steering, Locomotion Node, and Motor API Service? I don't see any Gazebo or ROS tutorials that show me a means to simulate down at this level of hardware, but maybe I'm just missing something fundamental because I'm so new to this domain.
Will I have to write my own layer of software to simulate the motor controllers and motors to Gazebo? Is there a plugin that I can use that operates at this level, rather than at the Twist Message level?
Thanks in advance for any help or advice you can give.
Kurt
Originally posted by Kurt Leucht on ROS Answers with karma: 486 on 2014-04-07
Post score: 0
Answer:
Most ROS nodes (the nav stack, most teleop nodes, etc) assume that you use twist messages to control the base. Twists are general enough to describe almost any base motion, and usually we're using gazebo to debug higher level problems. If you get lower level than that, you might start having to tweak gazebo quite a bit to get friction coefficients correct, get inertia matrices just right, etc.
That being said, if you want to do simulation of the individual wheels, it shouldn't be too hard to modify the diff drive plugin to do what you want. Start with this file: https://github.com/ros-simulation/gazebo_ros_pkgs/blob/hydro-devel/gazebo_plugins/src/gazebo_ros_diff_drive.cpp
Overview of the changes you'll need to make:
subscribe to your CAN API messages instead of the cmd_vel on line 227
modify the cmdVelCallback to be canMsgCallback and to expect the right messages
modify UpdateChild() to apply the individual wheel velocities to the left and right wheels
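For reference, the conversion the locomotion node (or the modified plugin) performs between a Twist command and individual wheel velocities is just differential-drive inverse kinematics. A hedged sketch, with an assumed wheel separation:

```java
public class DiffDrive {
    /**
     * Differential-drive inverse kinematics: a Twist's linear velocity v (m/s)
     * and angular velocity omega (rad/s) map to left/right wheel surface
     * speeds. trackWidth is the wheel separation in metres (assumed value).
     */
    static double[] wheelSpeeds(double v, double omega, double trackWidth) {
        double left = v - omega * trackWidth / 2.0;
        double right = v + omega * trackWidth / 2.0;
        return new double[]{left, right};
    }

    public static void main(String[] args) {
        double[] w = wheelSpeeds(1.0, 0.0, 0.5);
        System.out.println(w[0] + " " + w[1]);  // prints: 1.0 1.0 (straight line)
    }
}
```

Inside a modified diff-drive plugin, the same two numbers would come from the simulated motor-controller state instead of being computed from a Twist.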
Originally posted by jbinney with karma: 606 on 2014-04-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Kurt Leucht on 2014-04-09:
Thanks for the advice! Maybe I will use the Twist for a certain level of software testing and only dive down to the lower level to test a small portion of my code. | {
"domain": "robotics.stackexchange",
"id": 17562,
"tags": "ros, gazebo, integration, beginner-tutorials"
} |
What does Herb Sutter refer to in his seminal paper The free lunch is over? | Question: In his paper The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software, Herb Sutter writes:
The mainstream state of the art revolves around lock-based programming, which is subtle and hazardous. We desperately need a higher-level programming model for concurrency than languages offer today; I'll have more to say about that soon.
Can somebody with more knowledge about his work answer what he is referring to? Did he say more about that later?
Answer: I think this question was answered in "The Trouble with Locks" (Dr. Dobb's Journal, 1 March 2005), which article indicates that his "I'll have more to say about that soon" was more about the problems of lock-based programming than about a specific proposal for improvement.
Personally, I think that the most important sentence in ... ["The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software"] ... was the fourth and last conclusion: "4. Finally, programming languages and systems will increasingly be forced to deal well with concurrency." Before concurrency becomes accessible for routine use by mainstream developers, we will need significantly better, still-to-be-designed, higher level language abstractions for concurrency than are available in any language on any platform. Lock-based programming is our status quo and it isn't enough; it is known to be not composable, hard to use, and full of latent races (where you forgot to lock) and deadlocks (where you lock the wrong way). Comparing the concurrency world to the programming language world, basics such as semaphores have been around almost as long as assembler and are at the same level as assembler; locks are like structured programming with goto; and we need an "OO" level of abstraction for concurrency.
Note: "still to be designed".
("Lock-Free Code: A False Sense of Security", 8 September 2008, and "Writing Lock-Free Code: A Corrected Queue", 29 September 2008, explored how "lock-free coding is hard even for experts".) | {
"domain": "cs.stackexchange",
"id": 19979,
"tags": "programming-languages, concurrency"
} |
Confusion regarding rest mass of system in relativistic conditions | Question: In Morin's Mechanics there is this interesting question where we are required to find the mass of a system under relativistic conditions:
12.8. System of masses **
Consider a dumbbell made of two equal masses, $m$. The dumbbell spins around, with its center pivoted at the end of a stick (see Fig. 12.15). If the speed of the masses is $v$, then the energy of the system is $2γm$. Treated as a whole, the system is at rest. Therefore, the mass of the system must be $2γm$. (Imagine enclosing it in a box, so that you can’t see what’s going on inside.) Convince yourself that the system does indeed behave like a mass of $M = 2γm$, by pushing on the stick (when the dumbbell is in the “transverse” position shown in the figure) and showing that $F ≡ dp/dt = Ma$.
The mass of the system is quite clearly $2γm$, considering that
$$M=\sqrt{(\sum_i E_i)^2-(\sum_i p_i)^2}=\sqrt{(2\gamma m)^2}=2 \gamma m$$
However, when I attempted to confirm the result by Morin's argument, I came across a contradiction.
The force on each particle, F, is given by
$$F=\frac{dp}{dt}=\frac{d}{dt}(\gamma m v)=mv\frac{d\gamma}{dt}+ m\gamma\frac{dv}{dt}=mv^2\gamma^3 a+m \gamma a = m \gamma^3 a$$
Total force required to accelerate the particles is therefore
$$F_{total}=2m\gamma^3 a = \gamma^2 M a \neq M a$$
What is wrong in my chain of reasoning?
Answer:
Consider a dumbbell made of two equal masses, [...]
It may be advantageous to denote the two dumbbell weights distinctly nonetheless, say as $A$ and $B$, with equal masses $m_A = m_B := m$.
Convince yourself that the system does indeed behave like a mass of $M = 2 \, \gamma \, m$, by pushing on the stick (when the dumbbell is in the “transverse” position shown in the figure) [...]
The force on each particle,
Force, momentum, velocity and acceleration are of course quantities with direction as well as magnitude. Accordingly, as determined initially by members of the reference frame of the dumbbell system at the “transverse” position:
$$\vec v_B = -\vec v_A$$
with
$$(\vec v_A \cdot \vec v_A) = (\vec v_B \cdot \vec v_B) := v^2 \gt 0,$$
and
$$\vec p_A = -\vec p_B = \frac{m \, \vec v_A}{\sqrt{1 - (\vec v_A \cdot \vec v_A) }} = \frac{-m \, \vec v_B}{\sqrt{1 - (\vec v_A \cdot \vec v_A) }} := \frac{m \, \vec v_A}{\sqrt{1 - v^2 }}.$$
Also, importantly, at the “transverse” position initial to pushing on the stick:
$$\frac{d}{dt} \! \left[ \phantom{\frac{d}{dt}} \! \! \! \! \! \! \! \vec v_A \right] =: \underset{\Delta t \longrightarrow 0}{\text{lim}}\left[ \left( \frac{\vec v_A + \Delta t \, \vec a}{1 + (\vec v_A \cdot \vec a) \, \Delta t} - \vec v_A \right) / \Delta t \right] = \vec a - \vec v_A (\vec v_A \cdot \vec a) = \vec a \, (1 - v^2)$$
and
$$\frac{d}{dt} \! \left[ \phantom{\frac{d}{dt}} \! \! \! \! \! \! \! \vec v_B \right] =: \underset{\Delta t \longrightarrow 0}{\text{lim}}\left[ \left( \frac{\vec v_B + \Delta t \, \vec a}{1 + (\vec v_B \cdot \vec a) \, \Delta t} - \vec v_B \right) / \Delta t \right] = \vec a - \vec v_B (\vec v_B \cdot \vec a) = \vec a \, (1 - v^2).$$
(This seems to be the origin of "the factor $1 / \gamma^2$" you had been missing.)
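Not part of the original answer, but that factor is easy to cross-check numerically. With $c = 1$, $m = 1$, and the velocities parallel to the push at the transverse position, a finite-difference sketch in Java confirms both $dv/dt = a(1 - v^2)$ and the final result $F_{\text{total}} = 2\gamma m\,a = M a$:

```java
public class DumbbellCheck {
    static double gamma(double u) { return 1 / Math.sqrt(1 - u * u); }

    // Velocity after a short push a*dt, composed with the current velocity u
    // via the (longitudinal) relativistic velocity addition used in the answer.
    static double boosted(double u, double a, double dt) {
        return (u + a * dt) / (1 + u * a * dt);
    }

    // Numeric dp/dt for one unit mass moving at velocity u.
    static double dpdt(double u, double a, double dt) {
        double u1 = boosted(u, a, dt);
        return (gamma(u1) * u1 - gamma(u) * u) / dt;
    }

    public static void main(String[] args) {
        double v = 0.6, a = 1.0, dt = 1e-8;
        // The 1/gamma^2 factor: dv/dt = a*(1 - v^2), not a.
        double dvdt = (boosted(v, a, dt) - v) / dt;
        System.out.println(Math.abs(dvdt - a * (1 - v * v)) < 1e-6);   // true
        // Total force on the pair equals M*a with M = 2*gamma*m.
        double total = dpdt(v, a, dt) + dpdt(-v, a, dt);
        System.out.println(Math.abs(total - 2 * gamma(v) * a) < 1e-5); // true
    }
}
```

Each mass contributes $\gamma m a$ (not $\gamma^3 m a$), which is exactly the resolution of the apparent contradiction.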
Then let's evaluate $$F_{\text{total}} := \frac{d}{dt} \! \left[ \phantom{\frac{d}{dt}} \! \! \! \! \! \! \! \vec p_{\text{total}} \right] = \frac{d}{dt} \! \left[ \phantom{\frac{d}{dt}} \! \! \! \! \! \! \! \vec p_A\left[ \, m, \vec v_A \, \right] + \vec p_B\left[ \, m, \vec v_B \, \right] \right] = \frac{d}{dt} \! \left[ \phantom{\frac{d}{dt}} \! \! \! \! \! \! \! \vec p_A\left[ \, m, \vec v_A \, \right] \right] + \frac{d}{dt} \! \left[ \phantom{\frac{d}{dt}} \! \! \! \! \! \! \! \vec p_B\left[ \, m, \vec v_B \, \right] \right] $$
with
$$\! \frac{d}{dt} \! \left[ \phantom{\frac{d}{dt}} \! \! \! \! \! \! \! \vec p_A\left[ \, m, \vec v_A \, \right] \right] := \frac{d}{dt} \! \! \left[ \frac{m \, \vec v_A}{\sqrt{1 - (\vec v_A \cdot \vec v_A) }} \right] = \frac{m}{\sqrt{1 - v^2 }} \, \frac{d}{dt} \! \! \left[ \phantom{\frac{d}{dt}} \! \! \! \! \! \! \! \vec v_A \! \right] \, + \, m \, \vec v_A \, \frac{d}{dt} \! \! \left[ \phantom{\frac{d}{dt}} \! \! \! \! \! \! \! \frac{1}{\sqrt{1 - (\vec v_A \cdot \vec v_A) }} \right] \! \! =\\ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \frac{m \, \vec a \, (1 - v^2)}{\sqrt{1 - v^2 }} \qquad + \qquad m \, \vec v_A \, \frac{(\vec v_A \cdot \vec a \, (1 - v^2))}{\left(\sqrt{1 - v^2 } \right)^3} = \\ \qquad \qquad \qquad \qquad \qquad \qquad m \, \vec a \, \sqrt{1 - v^2 } \qquad + \qquad \frac{m \, v^2 \, \vec a}{\sqrt{1 - v^2 }} \qquad \qquad = \frac{m \, \vec a}{\sqrt{1 - v^2 }}, $$
while
$$\! \frac{d}{dt} \! \left[ \phantom{\frac{d}{dt}} \! \! \! \! \! \! \! \vec p_B\left[ \, m, \vec v_B \, \right] \right] := \frac{d}{dt} \! \! \left[ \frac{m \, \vec v_B}{\sqrt{1 - (\vec v_B \cdot \vec v_B) }} \! \right] \! = \! \frac{m}{\sqrt{1 - v^2 }} \, \frac{d}{dt} \! \! \left[ \phantom{\frac{d}{dt}} \! \! \! \! \! \! \! \vec v_B \right] \, + \, m \, \vec v_B \, \frac{d}{dt} \! \! \left[ \phantom{\frac{d}{dt}} \! \! \! \! \! \! \! \frac{1}{\sqrt{1 - (\vec v_B \cdot \vec v_B) }} \right] \! = \\ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \frac{m \, \vec a \, (1 - v^2)}{\sqrt{1 - v^2 }} \qquad + \qquad m \, \vec v_B \, \frac{(\vec v_B \cdot \vec a \, (1 - v^2))}{\left(\sqrt{1 - v^2 } \right)^3} = \\ \qquad \qquad \qquad \qquad \qquad \qquad m \, \vec a \, \sqrt{1 - v^2 } \qquad + \qquad \frac{m \, v^2 \, \vec a}{\sqrt{1 - v^2 }} \qquad \qquad = \frac{m \, \vec a}{\sqrt{1 - v^2 }}.$$
Together therefore
$$F_{\text{total}} := \frac{2 \, m \, \vec a}{\sqrt{1 - v^2 }} = M \, \vec a.$$ | {
"domain": "physics.stackexchange",
"id": 91467,
"tags": "special-relativity, mass"
} |
rosmake rgbdslam_freiburg FAILED.. Please help | Question:
hasitha@hasitha-HP-ProBook-4520s:~$ rosmake rgbdslam_freiburg
[ rosmake ] rosmake starting...
[ rosmake ] Packages requested are: ['rgbdslam_freiburg']
[ rosmake ] Logging to directory /home/hasitha/.ros/rosmake/rosmake_output-20131028-235835
[ rosmake ] Expanded args ['rgbdslam_freiburg'] to:
['rgbdslam']
[rosmake-0] Starting >>> roslang [ make ]
[rosmake-1] Starting >>> geometry_msgs [ make ]
[rosmake-2] Starting >>> opencv2 [ make ]
[rosmake-3] Starting >>> bullet [ make ]
[rosmake-0] Finished <<< roslang No Makefile in package roslang
[rosmake-2] Finished <<< opencv2 ROS_NOBUILD in package opencv2
[rosmake-0] Starting >>> rospy [ make ]
[rosmake-2] Starting >>> roscpp [ make ]
[rosmake-1] Finished <<< geometry_msgs No Makefile in package geometry_msgs
[rosmake-1] Starting >>> sensor_msgs [ make ]
[rosmake-0] Finished <<< rospy No Makefile in package rospy
[rosmake-3] Finished <<< bullet ROS_NOBUILD in package bullet
[rosmake-0] Starting >>> rosconsole [ make ]
[rosmake-3] Starting >>> angles [ make ]
[rosmake-3] Finished <<< angles ROS_NOBUILD in package angles
[rosmake-3] Starting >>> rostest [ make ]
[rosmake-2] Finished <<< roscpp No Makefile in package roscpp
[rosmake-2] Starting >>> roswtf [ make ]
[rosmake-0] Finished <<< rosconsole No Makefile in package rosconsole
[rosmake-1] Finished <<< sensor_msgs No Makefile in package sensor_msgs
[rosmake-0] Starting >>> message_filters [ make ]
[rosmake-1] Starting >>> image_geometry [ make ]
[rosmake-3] Finished <<< rostest No Makefile in package rostest
[rosmake-1] Finished <<< image_geometry ROS_NOBUILD in package image_geometry
[rosmake-3] Starting >>> std_msgs [ make ]
[rosmake-2] Finished <<< roswtf No Makefile in package roswtf
[rosmake-1] Starting >>> rosbag [ make ]
[rosmake-0] Finished <<< message_filters No Makefile in package message_filters
[rosmake-2] Starting >>> rosbuild [ make ]
[rosmake-0] Starting >>> tf [ make ]
[rosmake-0] Finished <<< tf ROS_NOBUILD in package tf
[rosmake-2] Finished <<< rosbuild No Makefile in package rosbuild
[rosmake-1] Finished <<< rosbag No Makefile in package rosbag
[rosmake-3] Finished <<< std_msgs No Makefile in package std_msgs
[rosmake-0] Starting >>> roslib [ make ]
[rosmake-1] Starting >>> pcl [ make ]
[rosmake-2] Starting >>> smclib [ make ]
[rosmake-3] Starting >>> rosservice [ make ]
[rosmake-0] Finished <<< roslib No Makefile in package roslib
[rosmake-0] Starting >>> pluginlib [ make ]
[rosmake-2] Finished <<< smclib ROS_NOBUILD in package smclib
[rosmake-3] Finished <<< rosservice No Makefile in package rosservice
[rosmake-2] Starting >>> bond [ make ]
[rosmake-3] Starting >>> dynamic_reconfigure [ make ]
[rosmake-1] Finished <<< pcl ROS_NOBUILD in package pcl
No Makefile in package pcl
[rosmake-0] Finished <<< pluginlib ROS_NOBUILD in package pluginlib
[rosmake-0] Starting >>> common_rosdeps [ make ]
[rosmake-2] Finished <<< bond ROS_NOBUILD in package bond
[rosmake-2] Starting >>> bondcpp [ make ]
[rosmake-1] Starting >>> cv_bridge [ make ]
[rosmake-1] Finished <<< cv_bridge ROS_NOBUILD in package cv_bridge
[rosmake-2] Finished <<< bondcpp ROS_NOBUILD in package bondcpp
[rosmake-2] Starting >>> nodelet [ make ]
[rosmake-1] Starting >>> visualization_msgs [ make ]
[rosmake-0] Finished <<< common_rosdeps ROS_NOBUILD in package common_rosdeps
[rosmake-0] Starting >>> octomap_ros [ make ]
[rosmake-3] Finished <<< dynamic_reconfigure ROS_NOBUILD in package dynamic_reconfigure
[rosmake-3] Starting >>> nav_msgs [ make ]
[rosmake-2] Finished <<< nodelet ROS_NOBUILD in package nodelet
[rosmake-2] Starting >>> nodelet_topic_tools [ make ]
[rosmake-1] Finished <<< visualization_msgs No Makefile in package visualization_msgs
[rosmake-1] Starting >>> actionlib_msgs [ make ]
[rosmake-0] Finished <<< octomap_ros ROS_NOBUILD in package octomap_ros
[rosmake-3] Finished <<< nav_msgs No Makefile in package nav_msgs
[rosmake-0] Starting >>> trajectory_msgs [ make ]
[rosmake-3] Starting >>> std_srvs [ make ]
[rosmake-2] Finished <<< nodelet_topic_tools ROS_NOBUILD in package nodelet_topic_tools
[rosmake-2] Starting >>> pcl_ros [ make ]
[rosmake-1] Finished <<< actionlib_msgs No Makefile in package actionlib_msgs
[rosmake-2] Finished <<< pcl_ros ROS_NOBUILD in package pcl_ros
[rosmake-0] Finished <<< trajectory_msgs No Makefile in package trajectory_msgs
[rosmake-3] Finished <<< std_srvs No Makefile in package std_srvs
[rosmake-2] Starting >>> flann [ make ]
[rosmake-0] Starting >>> mk [ make ]
[rosmake-3] Starting >>> orocos_kdl [ make ]
[rosmake-1] Starting >>> arm_navigation_msgs [ make ]
[rosmake-2] Finished <<< flann ROS_NOBUILD in package flann
[rosmake-2] Starting >>> opencv_tests [ make ]
[rosmake-3] Finished <<< orocos_kdl ROS_NOBUILD in package orocos_kdl
[rosmake-0] Finished <<< mk No Makefile in package mk
[rosmake-2] Finished <<< opencv_tests ROS_NOBUILD in package opencv_tests
[rosmake-1] Finished <<< arm_navigation_msgs ROS_NOBUILD in package arm_navigation_msgs
[rosmake-1] Starting >>> octomap_server [ make ]
[rosmake-3] Starting >>> python_orocos_kdl [ make ]
[rosmake-1] Finished <<< octomap_server ROS_NOBUILD in package octomap_server
[rosmake-1] Starting >>> rgbdslam [ make ]
[rosmake-3] Finished <<< python_orocos_kdl ROS_NOBUILD in package python_orocos_kdl
[rosmake-3] Starting >>> kdl [ make ]
[rosmake-3] Finished <<< kdl ROS_NOBUILD in package kdl
No Makefile in package kdl
[rosmake-3] Starting >>> eigen_conversions [ make ]
[rosmake-3] Finished <<< eigen_conversions ROS_NOBUILD in package eigen_conversions
[rosmake-3] Starting >>> tf_conversions [ make ]
[rosmake-3] Finished <<< tf_conversions ROS_NOBUILD in package tf_conversions
[rosmake-3] Starting >>> cv_markers [ make ]
[rosmake-3] Finished <<< cv_markers ROS_NOBUILD in package cv_markers
[ rosmake ] [ rgbdslam: 0.0 sec ] [ 1 Active 47/48 Complete ]
{-------------------------------------------------------------------------------
mkdir: cannot create directory `build': Permission denied
[ rosmake ] Output from build of package rgbdslam written to:
[ rosmake ] /home/hasitha/.ros/rosmake/rosmake_output-20131028-235835/rgbdslam/build_output.log
[rosmake-1] Finished <<< rgbdslam [FAIL] [ 0.07 seconds ]
[ rosmake ] Halting due to failure in package rgbdslam.
[ rosmake ] Waiting for other threads to complete.
[ rosmake ] Results:
[ rosmake ] Built 48 packages with 1 failures.
[ rosmake ] Summary output to directory
[ rosmake ] /home/hasitha/.ros/rosmake/rosmake_output-20131028-235835
Originally posted by Hasitha on ROS Answers with karma: 19 on 2013-10-28
Post score: 0
Answer:
Which version of ROS are you running? It seems you have a permission problem. Where did you put the rgbdslam package?
Originally posted by Tirjen with karma: 808 on 2013-10-28
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Hasitha on 2013-10-31:
Fuerte with Ubuntu 12.04 LTS
Comment by Tirjen on 2013-10-31:
then I think, as I said, that you have a permission problem. So maybe you put the package in a wrong place.
Comment by Hasitha on 2013-11-01:
The package downloaded to /home
Then I copied that folder to the relevant place & resolved the problem after giving chmod permissions. Now it's working. Thank you Tirjen. | {
"domain": "robotics.stackexchange",
"id": 15987,
"tags": "ros, rgbdslam-freiburg"
} |
Does the formation of water inside the mitochondrial matrix help contribute to the proton gradient during the electron transport chain? | Question:
Does the synthesis of water in the final step of the electron transport chain significantly increase the electrochemical gradient across the matrix? I understand that pumping protons out of the matrix clearly increases the positive charge in the intermembrane space, but the chemical reaction in Complex IV O2 + 4H+ + 4e− → 2H2O also reduces the protons in the matrix by binding them in water.
Since ATP synthase relies entirely on the electrochemical proton gradient, it seems like this might be worth mentioning, yet I haven't seen this explicitly explored in any textbooks. If there is any literature about this topic, I would love to see it.
Answer: Your description of it is largely correct, but the electron transport chain does not simply "dump" charged oxygen ions in the mitochondrial matrix. Instead, cytochrome C oxidase (complex IV) binds the O$_2$ molecule to one of its heme groups, and the reduction O$_2$ + 4 H$^+$ + 4 e$^-$ $\rightarrow$ 2 H$_2$O occurs at the heme group before water is released. Some more details on the enzyme are found here and here (or in any biochemistry book).
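The concentration figures used in this answer (pH ≈ 7.8 → [H⁺] in the tens of nanomolar, versus ≈ 55 M water) can be reproduced with a two-line calculation; this little sketch is purely illustrative:

```java
public class MatrixConcentrations {
    // [H+] in mol/L from pH, by definition pH = -log10([H+]).
    static double hPlusFromPH(double pH) {
        return Math.pow(10, -pH);
    }

    public static void main(String[] args) {
        double hPlus = hPlusFromPH(7.8);          // ~1.58e-8 M, i.e. ~16 nM
        double water = 55.0;                      // mol/L, roughly pure water
        System.out.println("[H+] = " + hPlus + " M");
        System.out.println("water/H+ ratio = " + (water / hPlus)); // ~3.5e9
    }
}
```

With water molecules outnumbering free protons by over a billion to one, adding a few water molecules per reduced O₂ is indeed negligible next to removing a proton.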
In theory you're right that producing water in the mitochondrial matrix would dilute the H$^+$ concentration there a little (along with the concentration of everything else). But this dilution is negligible: the mitochondrial matrix has pH $\approx$ 7.8, which is the same as a H$^+$ concentration of about 15 x 10$^{-9}$M, while the concentration of water (yes, there is such a thing ;) is 55 M. So adding a water molecule has virtually no effect compared to removing a proton. | {
"domain": "biology.stackexchange",
"id": 5818,
"tags": "biochemistry, cell-biology, cellular-respiration"
} |
Converting sonar ranges to laserscan message | Question:
Hi,
In my project I need to do mapping and localization using only an ultrasonic sensor and an IMU. Most mapping packages in ROS use the LaserScan message; an ultrasonic sensor returns a Range message.
I am using 10 HC-SR04 ultrasonic sensors.
ROS Kinetic, Ubuntu 16.04.
MPU-6050.
How can I construct a laser scan message from the readings of the different sonar sensors?
Originally posted by أسامة الادريسي on ROS Answers with karma: 37 on 2017-06-29
Post score: 0
Original comments
Comment by Humpelstilzchen on 2017-06-29:
Possible duplicate: http://answers.ros.org/question/237478/can-i-localize-with-imprecisenoisy-sonar-sensor/
Comment by أسامة الادريسي on 2017-06-29:
This is not a duplication
Answer:
Here's one option: http://wiki.ros.org/range_sensor_layer
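As an illustrative sketch of what faking a LaserScan from discrete sonars would involve — drop each range reading into the angular bin nearest its mounting bearing and mark all other bins as "no return" — here is a minimal Java version (the bearings, bin count, and use of infinity are assumptions, not part of the linked package):

```java
import java.util.Arrays;

public class SonarToScan {
    /**
     * bearings[i] is sonar i's mounting angle in radians; ranges[i] its
     * reading in metres. Returns numBins evenly spaced beams covering
     * [angleMin, angleMax], with Double.POSITIVE_INFINITY where no sonar
     * points.
     */
    static double[] toScan(double[] bearings, double[] ranges,
                           double angleMin, double angleMax, int numBins) {
        double[] scan = new double[numBins];
        Arrays.fill(scan, Double.POSITIVE_INFINITY);
        double increment = (angleMax - angleMin) / (numBins - 1);
        for (int i = 0; i < bearings.length; i++) {
            int bin = (int) Math.round((bearings[i] - angleMin) / increment);
            if (bin >= 0 && bin < numBins) {
                scan[bin] = Math.min(scan[bin], ranges[i]); // keep nearest echo
            }
        }
        return scan;
    }

    public static void main(String[] args) {
        // Hypothetical setup: 3 sonars at -90, 0, +90 degrees, 181 beams.
        double[] bearings = {-Math.PI / 2, 0.0, Math.PI / 2};
        double[] ranges = {1.2, 0.8, 2.5};
        double[] scan = toScan(bearings, ranges, -Math.PI / 2, Math.PI / 2, 181);
        System.out.println(scan[90]);   // prints: 0.8 (the forward sonar)
    }
}
```

A real ROS node would also have to fill in the LaserScan header and angle/range metadata — and since an HC-SR04's beam is a wide cone rather than a thin ray, a layer that models the cone (like range_sensor_layer above) is usually the sounder approach.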
Originally posted by David Lu with karma: 10932 on 2017-06-29
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by أسامة الادريسي on 2017-06-29:
thank you very much, I hope this package can help me. | {
"domain": "robotics.stackexchange",
"id": 28248,
"tags": "ros, slam, navigation, gmapping"
} |
Is back-propagation applied for each data point or for a batch of data points? | Question: I am new to deep learning and trying to understand the concept of back-propagation. I have a doubt about when back-propagation is applied. Assume that I have a training data set of 1000 images of handwritten letters:
Is back-propagation applied immediately after getting the output for each input or after getting the output for all inputs in a batch?
Is back-propagation applied $n$ times till the neural network gives a satisfactory result for a single data point before going to work on the next data point?
Answer: Short answers
Is back-propagation applied immediately after getting the output for each input or after getting the output for all inputs in a batch?
You can perform back-propagation using (or after) only one training input (also known as data point, example, sample or observation) or multiple ones (a batch). However, the loss function to train the neural network is slightly different in both cases.
Is back-propagation applied $n$ times till the neural network gives a satisfactory result for a single data point before going to work on the next data point?
If we use only one example, we usually do not wait until the neural network gives satisfactory results for a single input-label pair $(x_i, y_i)$, but we keep feeding it with several input-label pairs, one after the other (and each time updating the parameters of the neural network using back-propagation), without usually caring whether the neural network already produces a satisfactory output for an input-label pair before feeding it with the next (although you could also do that).
Long answer (to the 1st question)
In case you want to know more about the first question, keep reading!
What is back-propagation?
Back-propagation is the (automatic) process of differentiating the loss function, which we use to train the neural network, $\mathcal{L}$, with respect to all of the parameters (or weights) of the same neural network. If you collect the $N$ parameters of the neural network in a vector
$$\boldsymbol{\theta} =
\begin{bmatrix}
\theta_1\\
\vdots \\
\theta_N
\end{bmatrix},
$$
then the derivative of the loss function $\mathcal{L}$ with respect to $\boldsymbol{\theta}$ is called the gradient, which is a vector that contains the partial derivatives of the loss function with respect to each single scalar parameter of the network, $\theta_i$, for $i=1, \dots, N$, that is, the gradient looks something like this
$$
\nabla \mathcal{L} =
\begin{bmatrix}
\frac{\partial \mathcal{L}}{ \partial \theta_1}\\
\vdots \\
\frac{\partial \mathcal{L}}{ \partial \theta_N}
\end{bmatrix},
$$
where the symbol $\nabla$ denotes the gradient of the function $\mathcal{L}$.
Loss functions
The specific loss function $\mathcal{L}$ that is used to train the neural network depends on the problem that we need to solve. For example, if we need to solve a regression problem (i.e. predict a real number), then the mean squared error (MSE) can be used. If we need to solve a classification problem (i.e. predict a class), the cross-entropy (CE) (aka negative log-likelihood) may be used instead.
Example
Let us assume that we need to solve a regression problem, so we choose the mean squared error (MSE) as the loss function. For simplicity, let's also assume that the neural network, denoted by $f_{\boldsymbol{\theta}}$, contains only one output neuron and contains no biases.
Stochastic gradient descent
Given an input-label pair $(x_i, y_i) \in D$ (where $D$ is a dataset of input-label pairs), the squared error function (not the mean squared error yet!) is defined as
$$\mathcal{L}_i(\boldsymbol{\theta}) = \frac{1}{2} (f_{\boldsymbol{\theta}}(x_i) - y_i)^2,$$
where $f_{\boldsymbol{\theta}}(x_i) = \hat{y}_i$ is the output of the neural network for the data point $x_i$ (which depends on the specific values of $\boldsymbol{\theta}$) and $y_i$ is the corresponding ground-truth label.
As the name suggests, $\mathcal{L}_i$ (where the subscript $_i$ is only used to refer to the specific input-label pair $(x_i, y_i)$) measures the squared error (i.e. some notion of distance) between the current prediction (or output) of the neural network, $\hat{y}_i$, and the expected output for the given input $x_i$, i.e. $y_i$.
We can differentiate $\mathcal{L}_i(\boldsymbol{\theta})$ with respect to the parameters of the neural network, $\boldsymbol{\theta}$. However, given that the details of back-propagation can easily become tedious, I will not describe them here. You can find more details here.
So, let me assume that we have a computer program that is able to compute $\nabla \mathcal{L}_i$. At that point, we can perform one step of the gradient descent algorithm
$$
\boldsymbol{\theta} \leftarrow \boldsymbol{\theta} - \gamma \nabla \mathcal{L}_i, \label{1}\tag{1}
$$
where $\gamma$ is the learning rate and $\leftarrow$ is the assignment operator. Note that $\boldsymbol{\theta}$ and $\nabla \mathcal{L}_i$ have the same dimensions, $N$.
So, I have just shown you that you can update the parameters of the neural network using only one input-label pair, $(x_i, y_i)$. This way of performing GD with only one input-label pair is known as stochastic gradient descent (SGD).
Batch (or mini-batch) gradient descent
In practice, for several reasons (including learning instability and inefficiency), it is rarely the case that you update the parameters using only one input-label pair $(x_i, y_i)$. Instead, you use multiple input-label pairs, which are collected in a so-called batch
$$B = \{(x_1, y_1), \dots, (x_M, y_M) \},$$
where $M$ is the size of batch $B$, which is also known as mini-batch when $M$ is smaller than the total number of input-label pairs in the training dataset, i.e. when $|B| = M < |D|$. If you use mini-batches, typical values for $M$ are $32$, $64$, $128$, and $256$ (yes, powers of 2: can you guess why?). See this question for other details.
In this case, the loss function $\mathcal{L}_M(\boldsymbol{\theta})$ is defined as the mean (or average) of the squared errors for single input-label pairs, $\mathcal{L}_i(\boldsymbol{\theta})$, i.e.
\begin{align}
\mathcal{L}_M(\boldsymbol{\theta})
&= \frac{1}{M} \sum_{i=1}^M \mathcal{L}_i(\boldsymbol{\theta}) \\
&= \frac{1}{M} \sum_{i=1}^M \frac{1}{2} (f_{\boldsymbol{\theta}}(x_i)-y_i)^2 \\
&= \frac{1}{M} \frac{1}{2} \sum_{i=1}^M (f_{\boldsymbol{\theta}}(x_i)-y_i)^2.
\end{align}
The normalisation factor, $\frac{1}{M}$, can be thought of as averaging out the losses of all input-label pairs. Note also that we can take the $\frac{1}{2}$ out of the summation because it is a constant with respect to the variable of the summation, $i$.
In this case, let me also assume that we are able to compute (using back-propagation) the gradient of $\mathcal{L}_M$, so that we can perform a gradient descent (GD) update (using a batch of examples)
$$
\boldsymbol{\theta} \leftarrow \boldsymbol{\theta} - \gamma \nabla \mathcal{L}_M \label{2} \tag{2}
$$
The only thing that changes with respect to the GD update in equation \ref{1} is the loss function, which is now $\mathcal{L}_M$ rather than $\mathcal{L}_i$.
This is known as mini-batch (or batch) gradient descent. In the case $M = |D|$, this is simply known as gradient descent, which is a term that can also be used to refer to any of its variants (including SGD or mini-batch GD).
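As an illustration (my own sketch, not part of the answer above), here is a minimal mini-batch GD loop in NumPy for a linear model $f_{\boldsymbol{\theta}}(x) = \theta x$ with the half-MSE loss $\mathcal{L}_M$; the synthetic data, learning rate, and batch size are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)   # synthetic data, true slope 3

theta, gamma, M = 0.0, 0.1, 32                  # parameter, learning rate, batch size
for _ in range(200):
    idx = rng.choice(len(x), size=M, replace=False)  # sample a mini-batch B
    xb, yb = x[idx], y[idx]
    residual = theta * xb - yb                  # f_theta(x_i) - y_i
    grad = np.mean(residual * xb)               # d L_M / d theta for the half-MSE loss
    theta -= gamma * grad                       # the update in equation (2)
print(theta)  # should end up close to the true slope 3.0
```

Each iteration touches only $M$ of the 100 examples, which is exactly what distinguishes mini-batch GD from full-batch GD ($M = |D|$) and from SGD ($M = 1$).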
Further reading
You may also be interested in this answer, which provides more details about mini-batch gradient descent. | {
"domain": "ai.stackexchange",
"id": 1062,
"tags": "neural-networks, backpropagation, gradient-descent, stochastic-gradient-descent, mini-batch-gradient-descent"
} |
What is a force? (from Newton's law and the law of universal gravitation) | Question: I was thinking about some very basic concepts when a doubt came to my mind, so I will briefly explain the argument that led me to the doubt so that the question will be clear.
Newton's law states: $\vec F= m \ddot{\vec x}$
As far as I know this isn't a definition of force but a relation between two different quantities. The force applied to one body is equal to the product of the body's mass and its acceleration.
Gravitational law states: $\vec F_{12}= G \frac {m_1 m_2}{r^2} \hat x_{12}$
As far as I know this also isn't a definition of force but a relation between the force acting on a body (in the gravitational case) and some other quantities of the system.
At this point I thought that I don't know what a force is, because I only know that it is something that in general is equal to $m \ddot{\vec x}$ and that in the gravitational case is equal to $G \frac {m_1 m_2}{r^2} \hat x_{12}$.
My idea now is that I have to consider the gravitational law as the definition of force (in the gravitational case) and Newton's law as the relation between two different quantities.
Answer: You are touching a quite delicate point at the foundation of Newtonian mechanics: the interplay between principles, definitions and basic objects the theory is built on.
Your doubts probably stem from the fact that different basic formulations of the basic principles have been proposed over more than three centuries, partially overlapping and ending up with a very unsatisfactory exposition in most of the textbooks. Unfortunately the issue is not just a problem of correct reconstruction of the historical development, but it touches directly what can be said or not about Newtonian forces.
I'll try to make short a long story.
From Newton up to the first half of the nineteenth century, the dominant point of view was that forces and accelerations have different definitions and that Newton's second law was an empirical finding about their proportionality through a constant, the mass.
Around the middle of that century a different point of view was pushed forward by a movement, including Kirchhoff, Mach and Hertz, which assumed the second principle as a definition of force. Moreover, force ceases to be a fundamental quantity the dynamics is based on, to become a nickname for $m \vec a$. It is quite clear that this is a completely different point of view and it is incompatible with the previous one.
More recently, there has been a resurrection of the Newtonian original approach, after a critic review of the main weakness of Mach's point of view.
To the best of my knowledge, no final assessment has ever appeared in the literature about the two main approaches. However, refined Mach-like approaches and neo-Newtonian approaches are frequently present in the best textbooks on Newtonian mechanics. What should be absolutely avoided is to mix them.
Quite schematically (many variations on the main themes exist)
in a Mach-like approach:
one has to define and state the existence of an inertial reference frame without using the word force;
mass is defined by analyzing collisions between point-like particles;
force is defined as $m\vec a$;
in a neo-Newtonian approach:
force is a primitive concept;
inertial reference frames can be defined as the frames where any force-free motion is uniform and on a straight line;
$\vec F =m\vec a$ becomes a principle, i.e. a statement collecting all the known experience, and $m$ is defined as the proportionality constant between force and acceleration (this is probably the point with the maximum number of variants);
Depending on the level of required conceptual rigor, these formulations could be considered satisfactory or not. Probably for practical purposes both can be used, but without mixing them.
An example of the practical relevance of keeping a clean distinction between the two points of view is the following.
How do we know that forces are invariant in every inertial reference frame?
In the Machian approach, it is a trivial consequence of the invariance of the mass and of the acceleration.
In the neo-Newtonian approach, either we have to add the requirement of such invariance to the concept of force, or we have to check it for each proposed force law. | {
"domain": "physics.stackexchange",
"id": 64159,
"tags": "newtonian-mechanics, forces, newtonian-gravity"
} |
Using an API to obtain JSON data and get the date string and determine if data is stale | Question: This is a Nagios check that will use an API URL, get JSON data, flatten the data into a usable Perl hash, and ultimately obtain a date string. Once the date is obtained, it should recognize the strftime format based on user input and determine the delta hours or minutes. Once the delta time is calculated, it should return critical, warning, or OK, based on the -c or -w user inputs. I just started Perl a week ago and need some code review to become better at it.
Gitlab repo
#!/usr/bin/perl
use warnings;
use strict;
use Data::Dumper;
use LWP::UserAgent;
use Getopt::Std;
use JSON::Parse 'parse_json';
use JSON::Parse 'assert_valid_json';
use Hash::Flatten qw(:all);
use DateTime;
use DateTime::Format::Strptime;
my $plugin_name = "Nagios check_http_freshness";
my $VERSION = "1.0.0";
my $dateNowUTC = DateTime->now;
my $verbose = 0;
$Getopt::Std::STANDARD_HELP_VERSION = "true";
# nagios exit codes
use constant EXIT_OK => 0;
use constant EXIT_WARNING => 1;
use constant EXIT_CRITICAL => 2;
use constant EXIT_UNKNOWN => 3;
#parse cmd opts
my %opts;
getopts('U:K:F:u:t:w:c:z:v', \%opts);
$opts{t} = 60 unless (defined $opts{t});
$opts{w} = 12 unless (defined $opts{w});
$opts{c} = 24 unless (defined $opts{c});
$opts{F} = "%Y%m%dT%H%M%S" unless (defined $opts{F});
$opts{u} = "hours" unless (defined $opts{u});
$opts{z} = "UTC" unless (defined $opts{z});
if (not (defined $opts{U}) || not (defined $opts{K}) ) {
print "[ERROR] INVALID USAGE\n";
HELP_MESSAGE();
exit EXIT_UNKNOWN;
}
if (defined $opts{v}){$verbose = 1;}
if ($opts{w} >= $opts{c}){
print "[ERROR] Warning value must be less than critical value.\n"; HELP_MESSAGE(); exit EXIT_UNKNOWN;
}
if (not ($opts{u} eq "hours") && not ($opts{u} eq "minutes")){
print "[ERROR] Time units must be either hours or minutes.\n"; HELP_MESSAGE(); exit EXIT_UNKNOWN;
}
# Configure the user agent and settings for the http/s request.
my $ua = LWP::UserAgent->new;
$ua->agent('Mozilla');
$ua->protocols_allowed( [ 'http', 'https'] );
$ua->parse_head(0);
$ua->timeout($opts{t});
my $response = $ua->get($opts{U});
# Verify the content-type of the response is JSON
eval {
assert_valid_json ($response->content);
};
if ( $@ ){
print "[ERROR] Response isn't valid JSON. Please verify source data. \n$@";
exit EXIT_UNKNOWN;
} else {
# Convert the JSON data into a perl hashrefs
my $jsonDecoded = parse_json($response->content);
my $flatHash = flatten($jsonDecoded);
if ($verbose){print "[SUCCESS] JSON FOUND -> ", Dumper($flatHash), "\n";}
if (defined $flatHash->{$opts{K}}){
if ($verbose){print "[SUCCESS] JSON KEY FOUND -> ", $opts{K}, ": ", $flatHash->{$opts{K}}, "\n";}
NAGIOS_STATUS(DATETIME_DIFFERENCE(DATETIME_LOOKUP($opts{F}, $flatHash->{$opts{K}})));
} else {
print "[ERROR] Retrieved JSON does not contain any data for the specified key: $opts{K} \nUse the -v switch to verify the JSON output and use the proper key(s).\n";
exit EXIT_UNKNOWN;
}
}
sub DATETIME_LOOKUP {
my $dateFormat = $_[0];
my $dateFromJSON = $_[1];
my $strp = DateTime::Format::Strptime->new(
pattern => $dateFormat,
time_zone => $opts{z},
on_error => sub { print "[ERROR] INVALID TIME FORMAT: $dateFormat OR TIME ZONE: $opts{z} \n$_[1] \n" ; HELP_MESSAGE(); exit EXIT_UNKNOWN; },
);
my $dt = $strp->parse_datetime($dateFromJSON);
if (defined $dt){
if ($verbose){print "[SUCCESS] Time formatted using -> $dateFormat\n", "[SUCCESS] JSON date converted -> $dt $opts{z}\n";}
return $dt;
} else {
print "[ERROR] DATE VARIABLE IS NOT DEFINED. Pattern or timezone incorrect."; exit EXIT_UNKNOWN
}
}
# Subtract JSON date/time from now and return delta
sub DATETIME_DIFFERENCE {
my $dateInitial = $_[0];
my $deltaDate;
# Convert to UTC for standardization of computations and it's just easier to read when everything matches.
$dateInitial->set_time_zone('UTC');
$deltaDate = $dateNowUTC->delta_ms($dateInitial);
if ($verbose){print "[SUCCESS] (NOW) $dateNowUTC UTC - (JSON DATE) $dateInitial ", $dateInitial->time_zone->short_name_for_datetime($dateInitial), " = ", $deltaDate->in_units($opts{u}), " $opts{u} \n";}
return $deltaDate->in_units($opts{u});
}
# Determine nagios exit code
sub NAGIOS_STATUS {
my $deltaTime = $_[0];
if ($deltaTime >= $opts{c}){print "[CRITICAL] Delta $opts{u} ($deltaTime) is >= ($opts{c}) $opts{u}. Data is stale.\n"; exit EXIT_CRITICAL;}
elsif ($deltaTime >= $opts{w}){print "[WARNING] Delta $opts{u} ($deltaTime) is >= ($opts{w}) $opts{u}. Data becoming stale.\n"; exit EXIT_WARNING;}
else {print "[OK] Delta $opts{u} ($deltaTime) are within limits -c $opts{c} and -w $opts{w} \n"; exit EXIT_OK;}
}
sub HELP_MESSAGE {
print <<EOHELP
Retrieve JSON data from an http/s url and check an object's date attribute to determine if the data is stale.
--help shows this message
--version shows version information
USAGE: $0 -U http://www.convert-unix-time.com/api?timestamp=now -K url -F %s -z UTC -c 24 -w 12 -v
-U URL to retrieve. (required)
-K JSON key to look for date attribute. (required)
-F Strftime time format (default: %Y%m%dT%H%M%S). For format details see: man strftime
-z Timezone for the JSON date. Can be "UTC" or a UTC offset such as "-0730" (default is UTC)
Can also be "America/Boston". See: http://search.cpan.org/dist/DateTime-TimeZone/lib/DateTime/TimeZone/Catalog.pm
-w Warning if data exceeds this time. (default 12 hours)
-c Critical if data exceeds this time. (default 24 hours)
-u Time units. Can be "hours", or "minutes". (default hours)
-t Timeout in seconds to wait for the URL to load. (default 60)
-v Verbose output.
EOHELP
;}
sub VERSION_MESSAGE {
print <<EOVN
$plugin_name v. $VERSION
Copyright (C) 2016 Nathan Snow
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
EOVN
;}
Answer: You may also want to look at the Nagios::Plugin module or its successor, Monitoring::Plugin.
In these plugins, the OK, WARNING, CRITICAL and UNKNOWN exit statuses are exported by default, so you do not need to declare them.
To initiate a plugin, you need something like this:
my $plugin = Nagios::Plugin->new(
shortname => $PLUGIN_NAME, # Short name of your plugin
usage => $USAGE, # Your usage message
version => $VERSION # Version number
);
Then you can add your command line options thus:
$plugin->add_arg("w=i", "-w <hours>\n Warning if data exceeds this time", 12, 1);
$plugin->add_arg("c=i", "-c <hours>\n Critical if data exceeds this time", 24, 1);
$plugin->getopts();
and so on. The arguments for the add_arg() method are spec, usage, default and required.
You can then compare your result with the warning and critical thresholds. In essence, your NAGIOS_STATUS subroutine can be broken down to one simple line:
my $status = $plugin->check_threshold(check => $delta_time, warning => $plugin->opts->w, critical => $plugin->opts->c);
Or
my $threshold = $plugin->set_thresholds(warning => $plugin->opts->w, critical => $plugin->opts->c); # Set threshold so you can use it later
my $status = $plugin->check_threshold(check => $delta_time);
Then, to exit with the correct status, you add this line:
$plugin->plugin_exit($status, sprintf("Delta time is %s", $delta_time));
Hope that helps! | {
"domain": "codereview.stackexchange",
"id": 23138,
"tags": "datetime, json, api, perl"
} |
Probability of winning a turn-based game with a random element | Question: I am preparing for a programming exam on probability theory and I stumbled across a question I can't solve.
Given a bag which contains some given amount of white stones $w$ and some given amount of black stones $b$, two players take turns drawing stones uniformly at random from the bag. After each player's turn a stone, chosen uniformly at random, vanishes, and only then does the other player take their turn. If a white stone is drawn, the player who has drawn it instantly loses and the game ends. If the bag becomes empty, the player who played second wins.
What is the overall probability that the player, who played second, wins?
I assume it's a dynamic programming question, though I can't figure out the recursion formula. Any help would be greatly appreciated. :)
Example input: $w$ = 3, $b$ = 4, then the answer is, I believe, 0.4, which I arrived at after computing by hand all possible ways for the game to go, so not very efficient.
Answer: Let us denote by $\Pr(w, b)$ the probability of P2 winning given it is their turn and there are $w$ white stones and $b$ black stones remaining, and similarly we write $\overline\Pr(w, b)$ for the probability of P2 winning given it is P1's turn.
From the rules of the game we get that any of the following cases can occur in P2's turn (assuming that enough stones are left):
White is drawn and P2 loses. This happens with probability $w / (w + b)$.
Black is drawn and a white stone vanishes. The probability for this is $b / (w + b) \cdot w / (w + b - 1)$.
Black is drawn and a black stone vanishes. We find the probability for this to be $b / (w + b) \cdot (b - 1) / ( w + b - 1)$.
If it is P1's turn to play we find similar cases:
White is drawn and P2 wins. This happens with probability $w / (w + b)$.
Black is drawn and a white stone vanishes. The probability for this is $b / (w + b) \cdot w / (w + b - 1)$.
Black is drawn and a black stone vanishes. We find the probability for this to be $b / (w + b) \cdot (b - 1) / ( w + b - 1)$.
We can now factor $\Pr$ and $\overline\Pr$ using our case distinctions and some simple probability theory:
$$
\begin{align*}
\Pr(w, b) &= \Pr(\text{Case 1.2}) \cdot \overline\Pr(w - 1, b - 1) + \Pr(\text{Case 1.3}) \cdot \overline\Pr(w, b - 2), \\
\overline\Pr(w, b) &= \Pr(\text{Case 2.1}) + \Pr(\text{Case 2.2}) \cdot\Pr(w - 1, b - 1) + \Pr(\text{Case 2.3}) \cdot \Pr(w, b - 2)
\end{align*}
$$
Note that we can combine these 2 equations to find a recursive formula for $\Pr$ (which I will not type out here for brevity).
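A sketch of this DP in Python (the function names and the base cases for very small bags are my own encoding; the question is ambiguous about who draws first, and starting with the tracked player's draw reproduces the asker's value $0.4$ for $w = 3$, $b = 4$):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def pr2(w, b):
    """Probability that P2 wins, given it is P2's turn to draw."""
    if w + b == 0:
        return 1.0                      # empty bag: P2 wins
    total = w + b
    if b == 0:
        return 0.0                      # P2 must draw white and lose
    draw_black = b / total              # drawing white contributes 0 here
    if total == 1:
        return draw_black               # last stone drawn, bag now empty: P2 wins
    rem = total - 1                     # stones present when one vanishes
    p = 0.0
    if w > 0:
        p += draw_black * (w / rem) * pr1(w - 1, b - 1)   # a white vanishes
    if b > 1:
        p += draw_black * ((b - 1) / rem) * pr1(w, b - 2)  # a black vanishes
    return p

@lru_cache(maxsize=None)
def pr1(w, b):
    """Probability that P2 wins, given it is P1's turn to draw."""
    if w + b == 0:
        return 1.0
    total = w + b
    p = w / total                       # P1 draws white and loses: P2 wins
    if b > 0:
        draw_black = b / total
        if total == 1:
            return p + draw_black       # bag empty after the draw: P2 wins
        rem = total - 1
        if w > 0:
            p += draw_black * (w / rem) * pr2(w - 1, b - 1)
        if b > 1:
            p += draw_black * ((b - 1) / rem) * pr2(w, b - 2)
    return p

print(pr2(3, 4))   # -> 0.4 (up to float rounding), the asker's example
```

The memoisation makes this $O(w \cdot b)$ states, so it scales far better than enumerating all game histories by hand.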
Coupling these with the initial values $\Pr(w, b)$ where $w + b \leq 4$ (which can be precomputed) will give us an algorithmic way to find the desired probability. | {
"domain": "cs.stackexchange",
"id": 16759,
"tags": "dynamic-programming, probability-theory"
} |
What does Logits in machine learning mean? | Question: "One common mistake that I would make is adding a non-linearity to my logits output."
What does the term "logit" means here or what does it represent ?
Answer: Logits are interpreted as the unnormalised (or not-yet-normalised) predictions (or outputs) of a model. These can give results, but we don't normally stop with logits, because interpreting their raw values is not easy.
Have a look at their definition to help understand how logits are produced.
Let me explain with an example:
We want to train a model that learns how to classify cats and dogs, using photos that each contain either one cat or one dog. You build a model and give it some of the data you have, so that it approximates a mapping between images and predictions. You then give the model some of the unseen photos in order to test its predictive accuracy on new data. As we have a classification problem (we are trying to put each photo into one of two classes), the model will give us two scores for each input image: a score for how likely it believes the image contains a cat, and a score for its belief that the image contains a dog.
Perhaps for the first new image, you get logit values of 16.917 for a cat and 0.772 for a dog. Higher means better ('more likely'), so you'd say that a cat is the answer. The correct answer is a cat, so the model worked!
For the second image, the model may say the logit values are 1.004 for a cat and 0.709 for a dog. So once again, our model says the image contains a cat. The correct answer is once again a cat, so the model worked again!
Now we want to compare the two results. One way to do this is to normalise the scores. That is, we normalise the logits! Doing this we gain some insight into the confidence of our model.
Let's use the softmax, where all results sum to 1 and so allow us to think of them as probabilities:
$$\sigma (\mathbf {z} )_{j}={\frac {e^{z_{j}}}{\sum _{k=1}^{K}e^{z_{k}}}} \quad \text{for } j = 1, \dots, K.$$
For the first test image, we get
$$prob(cat) = \frac{exp(16.917)}{exp(16.917) + exp(0.772)} = 0.9999$$
$$prob(dog) = \frac{exp(0.772)}{exp(16.917) + exp(0.772)} = 0.0001$$
If we do the same for the second image, we get the results:
$$prob(cat) = \frac{exp(1.004)}{exp(1.004) + exp(0.709)} = 0.5732$$
$$prob(dog) = \frac{exp(0.709)}{exp(1.004) + exp(0.709)} = 0.4268$$
The model was not really sure about the second image, as it was very close to 50-50 - a guess!
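The two normalisations above can be checked numerically with a short sketch (the trick of subtracting the maximum logit before exponentiating is my addition, for numerical stability; it does not change the result):

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())   # subtracting the max avoids overflow in exp
    return e / e.sum()

print(softmax([16.917, 0.772]))  # cat probability ~1, dog probability ~0
print(softmax([1.004, 0.709]))   # roughly [0.573, 0.427]: close to a guess
```

Running this confirms the confident first prediction and the near 50-50 second prediction.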
The last part of the quote from your question likely refers to a neural network as the model. The layers of a neural network commonly take input data, multiply that by some parameters (weights) that we want to learn, then apply a non-linearity function, which provides the model with the power to learn non-linear relationships. Without this non-linearity, a neural network would simply be a list of linear operations, performed on some input data, which means it would only be able to learn linear relationships. This would be a massive constraint, meaning the model could always be reduced to a basic linear model.
That being said, it is not considered helpful to apply a non-linearity to the logit outputs of a model, as you are generally going to be cutting out some information, right before a final prediction is made. Have a look for related comments in this thread. | {
"domain": "datascience.stackexchange",
"id": 11014,
"tags": "machine-learning, deep-learning"
} |
Beta reduction and vacuous lambda abstraction | Question: Suppose we have the following typed lambda term (where $s$ does not occur in E (which is of type $s \to p$) and $s$ and $s'$ have the same type), and want to apply $\beta$-reduction:
$(\lambda s. E)\, s'$
Every occurrence of $s$ in E must be replaced with $s'$. But suppose there are no occurrences of $s$ in $E$. In this case, does beta reduction lead to (1) or to (2)?
(1) $E$
(2) $E\, s'$
I can't see how this is fixed by the definition of beta-reduction.
Edit
I have completely rewritten the question to make it clearer.
Answer: Substitution means replacement, not attachment. If there are no occurrences of $s$ to replace, then nothing will be replaced. So the answer is (1).
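To make "replacement, not attachment" concrete, here is a toy sketch in Python (the tuple encoding of terms is mine, and capture-avoidance is deliberately omitted since it does not arise in this example):

```python
# Terms: ('var', name) | ('lam', bound_name, body) | ('app', fun, arg)

def subst(term, name, repl):
    """Replace free occurrences of `name` in `term` by `repl`."""
    kind = term[0]
    if kind == 'var':
        return repl if term[1] == name else term
    if kind == 'lam':
        if term[1] == name:              # inner binder shadows `name`
            return term
        return ('lam', term[1], subst(term[2], name, repl))
    return ('app', subst(term[1], name, repl), subst(term[2], name, repl))

def beta(term):
    """Reduce a single top-level redex (lam x. body) arg."""
    if term[0] == 'app' and term[1][0] == 'lam':
        _, x, body = term[1]
        return subst(body, x, term[2])
    return term

E = ('var', 'E')                          # s does not occur in E
print(beta(('app', ('lam', 's', E), ('var', "s'"))))             # ('var', 'E'): case (1)
print(beta(('app', ('lam', 's', ('var', 's')), ('var', "s'"))))  # ('var', "s'")
```

Since the recursion only ever replaces matching variable atoms, a body with no occurrence of $s$ comes back unchanged, which is exactly answer (1).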
This follows unambiguously from the definition of substitution. The substitution $[s'/s]$ is recursively passed into the subterms of $E$, until the level of atoms (= constants and variables) is reached and the substitution is applied to each symbol. When the symbol is the variable $s$, it will get replaced by $s'$; when it is a different variable or a constant, it will remain unchanged, according to the definition of substitution. If all atomic symbols in $E$ are different from $s$, then at the end of the substitutoin operation no replacement will have happened, and the result is just $E$. | {
"domain": "cstheory.stackexchange",
"id": 5045,
"tags": "lambda-calculus, typed-lambda-calculus"
} |
Energy of a state of free particles | Question: We know the the normalization of a state of free particle is indeterminate and therefore it doesn't have any physical state or definitive energy.
On the other hand, lets consider a free particle of wave function $\sin kx$. The energy of the state can be written as, $\frac{ \hbar^2 k^2}{2m} $. Since, the energy of the state is not k dependent, therefore we can say from the energy equation, Energy of a state is definite.
The two paragraph seems to me contradicting for the free particles energy.
Answer: Consider a propagating particle of mass $m$. Its energy is $\hbar^2 k^2/2m$ just as you say.
Now ask what is the wavefunction for this particle? Well it depends on the initial conditions. We could start with a tightly localised particle, a particle that is only localised to a quite large region or a particle that is almost completely delocalised. All three are valid solutions to the Schrodinger equation and all three have the same energy of $\hbar^2 k^2/2m$. And of course all three wavefunctions are normalised to a total probability of one.
The diagram shows three gaussian wavepackets $\frac{a}{\sqrt{\pi}}e^{-x^2/a^2}$ with different degrees of localisation all normalised to unity.
When you say you want your wavefunction to be $\sin\,kx$ you are saying that you want your particle to be completely delocalised. On my figure above that would correspond to a gaussian with infinite width and zero height. Normalising such a wavepacket is impossible because you end up trying to integrate $\tfrac{1}{\infty}$ from $-\infty$ to $\infty$ and that isn't a mathematically valid procedure. This is why it's often said that a free particle state can't be normalised.
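This can be seen numerically with a small sketch (the unit-normalised wavepacket form $\psi_a(x) = (\pi a^2)^{-1/4} e^{-x^2/2a^2}$ is my own convention, chosen so that the total probability is exactly 1 for every finite width $a$):

```python
import numpy as np

def norm_prob(a):
    """Total probability of a unit-normalised Gaussian wavepacket of width a."""
    x = np.linspace(-50 * a, 50 * a, 200001)
    psi = (np.pi * a**2) ** -0.25 * np.exp(-x**2 / (2 * a**2))
    return float(np.sum(psi**2) * (x[1] - x[0]))   # Riemann sum of |psi|^2

for a in (0.5, 2.0, 10.0):
    print(a, round(norm_prob(a), 6))   # 1.0 for every finite width
# As a grows, the peak amplitude (pi a^2)^(-1/4) shrinks toward zero:
# the sin(kx) limit cannot carry unit probability.
```

The total probability stays 1 for any finite $a$, while the amplitude flattens, which is the limit described in the next paragraph.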
But we can approach the completely delocalised particle by using a gaussian wavepacket of the form $\frac{a}{\sqrt{\pi}}e^{-x^2/a^2}$, like the ones shown above, and taking the limit of $a \rightarrow \infty$. Whatever the value of $a$ all these wavepackets are normalised to unity and have the same energy, and that's why we can treat the completely delocalised particle as still being normalised to unity and having the energy $\hbar^2 k^2/2m$. | {
"domain": "physics.stackexchange",
"id": 25701,
"tags": "quantum-mechanics"
} |
Why is the magnetic field outside a solenoid considered zero? | Question: While applying Ampere's law to derive the magnetic field of a solenoid,
why can we consider $\vec{\bf B }$ to be zero just outside the solenoid?
For example here it says "Only the upper portion of the path contributed to the sum because the magnetic field is zero outside..". What is the proper justification for this statement?
Answer: For an infinite solenoid, you can argue by symmetry that the $B$-field outside the solenoid has to be parallel to the axis. From this, by varying the size of the loop used in Ampere's law, you can show that the $B$-field outside the solenoid (whatever strength it is) does not vary with distance from the solenoid.
It's pretty easy to show that the $B$-field from a solenoid, even an infinite one, goes to zero as the distance from the solenoid goes to infinity. And so the $B$-field has to be uniformly zero outside the solenoid.
For a finite solenoid, if you are not close to the ends, you can argue that the missing parts of the infinite solenoid shouldn't affect the $B$-field much, and so the field is weak outside the solenoid as compared to inside. | {
"domain": "physics.stackexchange",
"id": 90403,
"tags": "electromagnetism, magnetic-fields, inductance"
} |
Altering Scene.cc to Change Grid Size | Question:
This is a follow-up question to an earlier one I asked regarding changing grid size. Is there any way to access the Scene.cc file so I can change the hard-coded value of 20x20? I tried to use ls in the directory it's stored in, but all it showed were the .hh files.
Originally posted by K. Zeng on Gazebo Answers with karma: 103 on 2014-07-23
Post score: 0
Answer:
You would need to build gazebo from source. Try these instructions. Then look in the gazebo/rendering folder for Scene.cc
Originally posted by scpeters with karma: 2861 on 2014-07-24
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by K. Zeng on 2014-07-25:
I see. Thanks. I'll find an alternate method, then.
Comment by scpeters on 2014-07-25:
I suppose you could also write a GUI system plugin to modify the grid size | {
"domain": "robotics.stackexchange",
"id": 3621,
"tags": "gazebo"
} |
Search files from large directory and sub-directories | Question: Essentially I have a large directory with many sub-folders and different formats of files. It reads through XML, JAVA, .txt, etc, in search of a selected String inputted by the user. The results come back as snippets, reading a certain amount of characters previous and after the selected string has been found.
It does function as planned...I send the results as snippets and links to the main pages through REST calls via Spring and AJAX. It does take about 5 - 7 seconds on a localhost server to return the results for a directory that is 19,405 Files, 9,610 Folders.
private ArrayList<String> pathNames = new ArrayList();
private HashMap<String, String> snippets = new HashMap<String, String>();
public HashMap readFiles(String search, String dirPath) {
snippets.clear();
InputStream stream = null;
//Method to get all the files in main and sub folders
listDirectory(dirPath);
for (int i = 0; i < pathNames.size(); i++) {
try {
stream = new FileInputStream(pathNames.get(i));
BufferedReader br = new BufferedReader(new InputStreamReader(stream));
Field chars = br.getClass().getDeclaredField("cb");
chars.setAccessible(true);
Field f = br.getClass().getDeclaredField("nChars");
f.setAccessible(true);
Field ff = br.getClass().getDeclaredField("nextChar");
ff.setAccessible(true);
String line;
//not used
int lineNum = 0;
String path = pathNames.get(i);
while ((line = br.readLine()) != null) {
//Getting through to Java Source Code - uh oh
// int charCount = (int) f.get(br);
char[] cb = (char[]) chars.get(br);
int nextChar = (Integer) ff.get(br);
//not used
lineNum++;
if (line.contains(search)) {
String lines = "";
for (int t = nextChar - 200; t < nextChar + 100; t++) {
lines += cb[t];
}
snippets.put(pathNames.get(i), lines);
}
}
} catch (FileNotFoundException ex) {
Logger.getLogger(FileRead.class.getName()).log(Level.SEVERE, null, ex);
} catch (IOException ex) {
Logger.getLogger(FileRead.class.getName()).log(Level.SEVERE, null, ex);
} catch (IllegalArgumentException ex) {
Logger.getLogger(FileRead.class.getName()).log(Level.SEVERE, null, ex);
} catch (IllegalAccessException ex) {
Logger.getLogger(FileRead.class.getName()).log(Level.SEVERE, null, ex);
} catch (NoSuchFieldException ex) {
Logger.getLogger(FileRead.class.getName()).log(Level.SEVERE, null, ex);
} catch (SecurityException ex) {
Logger.getLogger(FileRead.class.getName()).log(Level.SEVERE, null, ex);
}
}
return snippets;
}
Answer: For anybody interested, here is how NetBeans likes the optimized code (discussed above) to be formatted... It does iterate over ~20,000 files and ~10,000 folders. This may not be the best way to iterate over so many files and folders but it does produce results in a somewhat timely fashion. There seems to be a small difference in speed between the two codes above but my testing methods are not ideal and for the most part, seem similar.
private ArrayList<String> pathNames = new ArrayList();
private HashMap<String, String> snippets = new HashMap<String, String>();
public HashMap readFiles(String search, String dirPath) {
snippets.clear();
//Method to get all the files in main and sub folders
listDirectory(dirPath);
for (int i = 0; i < pathNames.size(); i++) {
String path = pathNames.get(i);
try (InputStream stream = new FileInputStream(path); BufferedReader br = new BufferedReader(new InputStreamReader(stream))) {
Field chars = br.getClass().getDeclaredField("cb");
chars.setAccessible(true);
Field ff = br.getClass().getDeclaredField("nextChar");
ff.setAccessible(true);
String line;
while ((line = br.readLine()) != null) {
if (line.contains(search)) {
//Getting through to Java Source Code - uh oh
char[] cb = (char[]) chars.get(br);
int nextChar = (Integer) ff.get(br);
char[] charBuffer = new char[300];
System.arraycopy(cb, nextChar - 200, charBuffer, 0, 300);
snippets.put(path, new String(charBuffer));
}
}
} catch (IOException | NoSuchFieldException | SecurityException | IllegalAccessException ex) {
Logger.getLogger(FileRead.class.getName()).log(Level.SEVERE, null, ex);
}
}
return snippets;
} | {
"domain": "codereview.stackexchange",
"id": 20693,
"tags": "java, file, search, stream"
} |
CLRS(Introduction To Algorithms) implementation of BFS and DFS in Java | Question: This is the implementation of BFS and DFS i have tried to follow from CLRS.Please suggest what can be improved in this code.
import java.io.File;
import java.io.FileNotFoundException;
import java.util.LinkedList;
import java.util.Queue;
import java.util.Scanner;
public class Graph {
VertexList[] row;
int time;
public Graph(String file) throws FileNotFoundException {
Scanner sc = new Scanner(new File(file));
String graphType = sc.next();
boolean undirected = true;
if (graphType.equals("directed"))
undirected = false;
row = new VertexList[sc.nextInt()];
for (int v = 0; v < row.length; v++)
row[v] = new VertexList(sc.next(), null);
while (sc.hasNext()) {
int v1 = indexForName(sc.next());
int v2 = indexForName(sc.next());
row[v1].head = new Node(v2, row[v1].head);
if (undirected) {
row[v2].head = new Node(v1, row[v2].head);
}
}
}
public int indexForName(String name) {
for (int v = 0; v < row.length; v++) {
if (row[v].vertexName.equals(name))
return v;
}
return -1;
}
public void print() {
System.out.println();
for (int v = 0; v < row.length; v++) {
System.out.print(row[v].vertexName);
for (Node nbr = row[v].head; nbr != null; nbr = nbr.next) {
System.out.print("-->" + row[nbr.vertexNum].vertexName);
}
System.out.println("\n");
}
}
public void bfs(int s, int v) {
Node[] N = new Node[row.length];
for (int i = 0; i < row.length; i++) {
N[i] = new Node(indexForName(row[i].vertexName), null);
N[i].color = "white";
N[i].d = 1000;
N[i].p = null;
}
N[s].color = "gray";
N[s].d = 0;
N[s].p = null;
Queue Q = new LinkedList();
Q.add(s);
while (Q.isEmpty() != true) {
int u = (Integer) Q.remove();
for (Node nbr = row[u].head; nbr != null; nbr = nbr.next) {
if (N[nbr.vertexNum].color == "white") {
N[nbr.vertexNum].color = "gray";
N[nbr.vertexNum].d = N[u].d + 1;
N[nbr.vertexNum].p = N[u];
Q.add(nbr.vertexNum);
}
N[u].color = "black";
}
}
System.out.println("Printing distances of nodes");
for (int i = 0; i < N.length; i++) {
System.out.println("Node " + N[i].vertexNum + " Distance is "
+ N[i].d);
}
System.out.println("Printing shortest path from " + s + " to " + v);
printPath(N, s, v);
}
public void dfs() {
Node[] N = new Node[row.length];
for (int i = 0; i < row.length; i++) {
N[i] = new Node(indexForName(row[i].vertexName), null);
N[i].color = "white";
N[i].p = null;
}
time = 0;
for (int i = 0; i < row.length; i++) {
if (N[i].color == "white")
dfsVisit(N, N[i].vertexNum);
}
System.out.println("\n\nPrinting time and freq of vertexes in DFS");
for (int i = 0; i < row.length; i++) {
System.out.println("Node " + i + " time-d is " + N[i].time_d
+ " time-f is " + N[i].time_f);
}
}
public void dfsVisit(Node[] N, int u) {
time = time + 1;
N[u].time_d = time;
N[u].color = "gray";
for (Node v = row[u].head; v != null; v = v.next) {
if (N[v.vertexNum].color == "white") {
N[v.vertexNum].p = N[u];
dfsVisit(N, v.vertexNum);
}
}
N[u].color = "black";
time = time + 1;
N[u].time_f = time;
}
public void printPath(Node[] N, int s, int v) {
if (v == s)
System.out.print(s + " ");
else if (N[v].p == null)
System.out.println("No Path from s to v");
else {
printPath(N, s, N[v].p.vertexNum);
System.out.print(v + " ");
}
}
public static void main(String[] args) throws FileNotFoundException {
String fileName = "C:/Users/Dell PC/Algorithm_Workspace/Graph_CLRS/src/graph.txt";
Graph graph = new Graph(fileName);
graph.print();
graph.bfs(0, 3);
graph.dfs();
}
}
class Node {
int vertexNum;
Node next;
String color;
int d;
Node p;
int time_d;
int time_f;
public Node(int vertexNum, Node next) {
this.vertexNum = vertexNum;
this.next = next;
}
}
class VertexList {
String vertexName;
Node head;
public VertexList(String vertexName, Node head) {
this.vertexName = vertexName;
this.head = head;
}
}
The graph input file is in this format:
undirected
5
Ram
Dam
Mam
Kam
Tam
Ram Dam
Ram Mam
Dam Tam
Mam Tam
Tam Kam
Answer: The first thing I noticed in your code is the non-conventional indentation of 2 spaces instead of 4. Additionally your constructor declaration is not indented. The next thing I noticed is that you're working on String instead of Path.
It's usually considered incorrect to use Strings as paths, especially since normalizing Strings is harder than normalizing Paths. You should use the "new" I/O API of java from the java.nio package.
public Graph(Path file) throws IOException {
Scanner sc = new Scanner(file.toFile());
Since you never use the graphType aside from determining whether the graph is directed or not, you can condense the next four lines into one:
boolean undirected = !"directed".equals(sc.next());
If you want to keep your "readability-variables" (those are really not a bad thing) I strongly urge you to make a habit of putting braces around single-line if-blocks:
if (graphType.equals("directed")) {
undirected = false;
}
On that same note I strongly recommend always putting braces around for-loops (or in general put braces where possible, not where necessary). This helps you avoid bugs when later modifying the code and it's generally a good habit to get into.
Onwards to the traversal. You use an algorithm that works by "coloring" nodes as you visit them. The colors are different depending on who you ask, but it's always three colors. You use these colors as Strings. It would make the code much less error-prone (typos in color-checks) if you had an enum of colors:
enum Color {
WHITE, GRAY, BLACK
}
This would give you compile-time checking of constant names instead of some erroneous behaviour at runtime.
~editor's note: at this point I scrolled through the rest of the code
I already noticed you're using a Vertex-List as a representation of the graph. Interestingly you seem to intermingle the Vertex Nodes and the List's "Edge-Nodes" into one Node class. For this algorithm, it might be easier to actually represent the graph differently.
Consider following Vertex class:
class Vertex {
String label;
Set<Vertex> outgoingEdges;
}
Using this could make creating your graph much simpler. You can create Vertex instances and for building the graph put them into a Map<String, Vertex> instead of looking them up by name in an Array.
If your graph is undirected you'd have to duplicate the outgoingEdge for both vertices. Add in a Color and the other attributes you need for traversal and you're good to massively simplify your traversal algorithm through iterative application (for BFS and DFS) and recursive application (for DFS) of a simplistic visit method:
void dfsVisit(Vertex n) {
time++;
n.time_d = time;
n.color = Color.GRAY;
for (Vertex outgoing : n.outgoingEdges) {
if (outgoing.color == Color.WHITE) {
outgoing.p = n;
dfsVisit(outgoing);
}
}
n.color = Color.BLACK;
time = time+1;
n.time_f = time;
}
This avoids all the implicit baggage you carry around in the numbers (if we ignore the time thing)...
{
"domain": "codereview.stackexchange",
"id": 22441,
"tags": "java, algorithm, graph"
}

How to get depth and IR data from realsense R200

Question:
Hi to all,
I've compiled with catkin_make the Intel Real Sense package: https://github.com/intel-ros/realsense with no errors and I'm able to display colored images by using:
rosrun rqt_image_view rqt_image_view
and selecting the topic:
/camera/color/image_raw
If I select other topics like
/camera/infrared1/image_raw
/camera/depth/image_raw
I only get a black screen.
If I try to follow this tutorial: http://wiki.ros.org/RealSense_R200 and I load the file realsenseRvizConfiguration1.rviz I get only the colored image and the other windows related to depth and infrared options are black.
Can you help me please?
I would like to capture both infrared and depth data coming from the R200 sensor.
I would like to show the same data as showed in this screenshot: http://wiki.ros.org/RealSense_R200?action=AttachFile&do=get&target=snapshot.png in order to get an environment map.
Thank you!!
Originally posted by Marcus Barnet on ROS Answers with karma: 287 on 2016-05-07
Post score: 0
Original comments
Comment by Marcus Barnet on 2016-05-11:
no tips? :(
Answer:
Hey, try running the librealsense binaries first. If it's still black for IR views, make sure you run the patch for uvcvideo. Check out https://github.com/intel-ros/realsense/issues/12
Originally posted by alee with karma: 194 on 2016-05-12
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Marcus Barnet on 2016-05-19:
The IR view works well if I run the librealsense binaries but only get a black screen under ROS. :(
Comment by alee on 2016-05-19:
Try using the instructions on the Github readme for the ROS driver
Comment by Marcus Barnet on 2016-05-19:
Yes, I solved it by installing the ROS driver! Now, it seems to work well!! thank you!
Comment by Ark on 2017-03-14:
Hi Marcus, could you please give any details on how to install the ROS driver to fix this problem? Many thanks!
Comment by Marcus Barnet on 2017-03-14:
I followed the instructions on this page: http://wiki.ros.org/RealSense
{
"domain": "robotics.stackexchange",
"id": 24583,
"tags": "ros, rviz, realsense, realsense-camera, rqt-image-view"
}

Theoretical question about elevators

Question:
Imagine we put a bascule/weighing machine in an elevator, and that elevator starts to accelerate downwards with a certain acceleration that we don't know of. Is it true that the bascule will always register less than if the bascule was in a stationary position, or more, or more than if it was accelerating upwards, or is there not enough information to answer this?
Thoughts/ideas
I'm thinking towards less than when the elevator is in stationary position since $N=m(g−a)$ if it is accelerating downwards.
Answer: Let a mass $m$ be on the scale. In the non-inertial reference frame moving with the elevator, there is a fictitious force on $m$ upwards equal to $ma$ where $a$ is the downward acceleration of the elevator. The total force on $m$ downwards is $m(g - a)$. The total force on $m$ upwards is $m(g - a)$ to keep $m$ at rest in the frame of the elevator. This upwards force is on $m$ from the scale, and there is an equal and opposite force on the scale from $m$ and this is the weight. The weight is $m(g - a)$ which is less than the weight $mg$ for the elevator at rest. For free fall of the elevator, $a = g$; the mass is weightless.
{
"domain": "physics.stackexchange",
"id": 91376,
"tags": "newtonian-mechanics, forces, newtonian-gravity, reference-frames, free-body-diagram"
}

IBM Quantum: Are ibmq_5_yorktown and ibmqx2 different devices?

Question: When I run provider.backends(), ibmqx2 is listed as an available backend, but is not listed on the IBMQ systems page. This file from the archived repository ibmq-device-information associates backend name ibmqx2 with display name IBM Q 5 Yorktown. The IBMQ systems page does list a 5 qubit Yorktown device, but under the name ibmq_5_yorktown. This backend is also available via ibmq/open/main, but is not listed by provider.backends(). Are ibmqx2 and ibmq_5_yorktown respectively versions 1 and 2 of backend software providing access to a single Yorktown device? Or do these names refer to two distinct Yorktown devices?
Answer: Actually, ibmqx2 and Yorktown are one quantum processor. When the IBM Quantum platform was released, two processors - Yorktown and Tenerife - were available. These processors were also denoted ibmqx2 and ibmqx4, respectively. Note that the Tenerife processor is retired now.
If you have a look into IBM Quantum environment, you can see that there is ibmqx2 processor in list of available devices. See here:
When you click on ibmqx2, this page is shown:
So, it means that ibmqx2 and Yorktown are identical devices.
The naming is just a legacy of former times.
I would be happy if anybody from IBM can add more comments.
{
"domain": "quantumcomputing.stackexchange",
"id": 3014,
"tags": "programming, qiskit, ibm-q-experience"
}

Stopping a nuclear missile

Question: I had been thinking about whether it is possible to stop an incoming missile or not. I found something quite interesting here.
So, there many people said that it will be possible to stop an incoming missile (at least in some cases) by firing an anti-missile at it and the nuclear weapon will break into "a lot of little pieces" and "harmlessly fall to the ground".
However as I had learnt in our class, a nuclear weapon contains two small masses of radioactive materials separated by a lead barrier such that each mass is below critical mass but the sum total of two masses is greater than critical mass. So when the lead barrier is removed, chain reaction starts and explosion occurs. So, I think that in case an anti-missile hits the nuclear weapon, it would break and as soon as the lead barrier breaks the two masses in air will have sum larger than critical mass and chain reaction will start. So there will be explosion in the air and large amount of radioactive particles will fall to the ground resulting in future deaths.
So, the reason I feel that I might be wrong is that the two masses will remain separated in air so they might not be considered as a mass greater than critical mass so no explosion.
Any light regarding this matter will be highly helpful. Thanks.
Answer: There are two general types of atomic warhead: the gun type, in which one of two sub-critical masses of Uranium-235 is fired at the other to create a critical mass when triggered, and the implosion device, in which a hollow sphere of Plutonium-239 is compressed by the detonation of an explosive jacket. Such devices comprise the trigger for thermonuclear warheads (hydrogen bombs).
To get such a warhead to detonate properly requires the correct firing of the conventional explosives which unites and compresses the critical mass of fissile material. I suppose that with a poorly designed gun type warhead it would be possible to get the thing to detonate due to a sympathetic detonation of the conventional charge, but a badly designed implosion device would not detonate correctly this way. You should also note that many proposals for anti-ballistic missile weapons do not carry explosive warheads but are intended to achieve their purpose by impacting the target at high speed to give a kinetic energy kill.
Also in the current era, weapon designers go to great lengths to prevent conventional explosives in weapons detonating when they are not supposed to, making sympathetic detonation of a nuclear warhead when hit by an anti-missile weapon even less likely.
{
"domain": "physics.stackexchange",
"id": 22248,
"tags": "nuclear-physics"
}

Why does my model not improve when training with mini-batch gradient descent, while it does with Adam?

Question: I am currently experimenting with the U-Net. I am doing semantic segmentation on the 2018 Data Science Bowl dataset from Kaggle without any data augmentation.
In my experiments, I am trying different hyper-parameters, like using Adam, mini-batch GD (MBGD), and batch normalization. Interestingly, all models with BN and/or Adam improve, while models without BN and with MBGD do not.
How could this be explained? If it is due to the internal covariate shift, the Adam models without BN should not improve either, right?
In the image below is the binary CE (BCE) train loss of my three models where the basic U-Net is blue, the basic U-Net with BN after every convolution is green, and the basic U-Net with Adam instead of MBGD is orange. The learning rate used in all models is 0.0001. I have also used other learning rates with worse results.
Answer: Well, some time ago I also faced the same issue in the semantic segmentation task. Batch normalization is expected to improve convergence, because the normalization of activations prevents the explosion of the gradients magnitude and leads to more steady convergence.
Adam is an adaptive optimizer with momentum and division by the weighted sum of gradients on previous iterations squared. https://towardsdatascience.com/adam-latest-trends-in-deep-learning-optimization-6be9a291375c.
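To make the adaptive part concrete, here is a minimal, framework-free sketch of the Adam update on a toy quadratic loss (the hyper-parameters below are the commonly used defaults, assumed here rather than taken from the linked article):

```python
import math

def adam_minimize(grad, x0, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8, steps=3000):
    """Plain-Python Adam on a scalar parameter: momentum (m) plus a
    per-parameter step scale derived from squared gradients (v)."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g        # first-moment (momentum) estimate
        v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
        m_hat = m / (1 - beta1 ** t)           # bias corrections
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Toy loss f(x) = (x - 3)^2, gradient 2(x - 3): Adam walks x from 0 toward 3.
x_min = adam_minimize(lambda x: 2 * (x - 3.0), x0=0.0)
```

The division by `sqrt(v_hat)` is what rescales each step per parameter; plain SGD lacks it and inherits the raw gradient magnitudes, which is one reason a single global learning rate can stall.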
The loss surfaces of neural networks are a difficult and poorly understood topic at present. I suppose that the poor convergence of SGD is caused by the roughness of the loss surface, where the gradient makes big leaps and jumps over the minima. The adaptive learning strategy of Adam, on the other hand, allows it to dive into the valleys.
{
"domain": "ai.stackexchange",
"id": 2611,
"tags": "u-net, batch-normalization, adam, mini-batch-gradient-descent, semantic-segmentation"
}

How do I find the rotation axis that maximises the angular momentum for a set of discrete points and velocities?

Question: I have a set of independent particles (we can assume with equal mass), distributed pseudo-randomly in 3D space, each with its own individual velocity.
What is the process by which I could determine the orientation of an axis that would maximise the angular momentum about that axis?
I want a method that it isn't just trial and error, looping over a grid of possible positions and rotation angles (I know how to do this, and it takes a long time!).
NB: the rotation axis should pass through the centre of mass.
Answer: From this derivation the total angular momentum of any number of particles is
$\vec{L} = \vec{R} \times M \vec{V} + \sum_{i} \vec{r_{i}}\times m_{i}\vec{v_{i}}$
where
$\vec{R}$ is the vector from the origin (or position of rotation axis) to the center of mass
$M\vec{V}$ is the center of mass momentum (calculate once)
$\vec{r_{i}}$ is the position of particle $i$ relative to the center of mass.
$\vec{v_{i}}$ is the velocity of particle $i$ relative to the center of mass (just the raw velocities are fine, if you don't want to rotate your coordinate system).
(The above is essentially what John Hunter's answer describes in words)
You can calculate the center of mass momentum ($M\vec{V}$) and spin angular momentum $\vec{S} = \sum_{i} \vec{r_{i}}\times m_{i}\vec{v_{i}}$ once and cheaply recalculate $\vec{L}$ for any desired $\vec{R}$.
The best axis will be parallel with the spin angular momentum so that it contributes maximally. This provides the optimal orientation given any distance. Then, choose a position so that $\vec{R}\times\vec{V}$ is also parallel to $\vec{S}$. You can use $\vec{R} \propto \vec{V}\times\vec{S}$.
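A plain-Python sketch of the recipe above, with made-up particle data (four equal masses circulating in the x-y plane, so the spin angular momentum, and hence the axis, should come out along z):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def best_axis(positions, velocities, masses):
    """Unit vector parallel to S = sum_i r_i x (m_i v_i), with r_i and v_i
    taken relative to the centre of mass, as in the answer above."""
    M = sum(masses)
    com = [sum(m * p[i] for m, p in zip(masses, positions)) / M for i in range(3)]
    vcom = [sum(m * v[i] for m, v in zip(masses, velocities)) / M for i in range(3)]
    S = [0.0, 0.0, 0.0]
    for p, v, m in zip(positions, velocities, masses):
        r = [p[i] - com[i] for i in range(3)]
        mv = [m * (v[i] - vcom[i]) for i in range(3)]
        c = cross(r, mv)
        S = [S[i] + c[i] for i in range(3)]
    norm = sum(s * s for s in S) ** 0.5
    return [s / norm for s in S]

pos = [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]
vel = [(0, 1, 0), (-1, 0, 0), (0, -1, 0), (1, 0, 0)]
axis = best_axis(pos, vel, [1.0, 1.0, 1.0, 1.0])   # expect ~(0, 0, 1)
```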
This is where the question as currently asked is not uniquely specified, since you can arbitrarily extend the distance to arbitrarily increase $L = |\vec{L}|$. However, it's possible that you want to choose $\vec{R}=0$, by placing the axis at the position of the center of mass of the particles.
To calculate center of mass position, you use $\frac{\sum m_{i}\vec{x}_{i}}{\sum m_{i}}$ with data $\vec{x}_{i}$. This is needed to calculate $r_{i}$.
{
"domain": "physics.stackexchange",
"id": 84185,
"tags": "angular-momentum, reference-frames, vectors, rotational-kinematics"
}

Construct a DFA recognizing a language $L$ that has exactly $I(L)$ states

Question: Let $L$ be a language, and consider the following relation $\equiv_L$ on strings:
$s_1 \equiv_L s_2$ if and only if, for every string $w$, we have that $s_1w \in L \Leftrightarrow s_2w \in L$.
This is an equivalence relation.
Let $I(L)$ be the number of equivalence classes of $\equiv_L$
(a) Suppose $L$ is a language and $I(L)$ is finite. Construct a DFA recognizing $L$ that has
exactly $I(L)$ states.
(b) Consider the language $L = \{www : w \in \{a,b\}^*\}$. Show that $L$ is not regular by giving
infinitely many pairwise inequivalent elements. [which is something proven to work earlier]
Now, for (a) I think I got a reasonable solution, for (b) I don't feel so sure.
For part (a) I describe an algorithm which first creates a start state for the DFA and labels it $\bar\varepsilon$, i.e. the $\equiv_L$-equivalence class of $\varepsilon$. Second, for each letter $a$ in the input alphabet a new state $\bar a$ is created and a transition from $\bar\varepsilon$ to $\bar a$ is labelled $a$. Then all the states with the same label are merged into a single state, and the transitions are adjusted accordingly. This procedure, first carried out on $\bar\varepsilon$, is then iterated on each newly added state. The algorithm stops when an iteration does not add any new state or transition.
Do you think that the writer wanted me to use this much information about $\equiv_L$-equivalence classes or there is a neater solution?
For part (b), I believe that all the words generated by $ab^*$ are pairwise not $\equiv_L$-equivalent, with that $L$. I am not sure I can justify it further than this, but is there another simpler example?
Thank you for any help, this is a rather long question.
Answer: (a) Let $\Sigma$ be the alphabet of $L$ and $\Sigma^*/\equiv_L$ the set of equivalence classes of words over $\Sigma$ according to the equivalence relation $\equiv_L$. Define the DFA to have one state for each element of $\Sigma^*/\equiv_L$ (we could think of the states as the classes themselves). Define the initial state to be the class of the empty word $\epsilon$, the accepting states to be the classes of the words in $L$. For each pair of states $s_1,s_2\in \Sigma^*/\equiv_L$ and letter $t\in\Sigma$ we will have the value of the transition function $\delta(s_1,t)=s_2$ if and only if for a word (equivalently, for all words) $x\in s_1$ we have that the word $xt\in s_2$.
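As a toy instance of this construction (my own illustrative example, not from the question): take $L$ = words over $\{a,b\}$ ending in $ab$, whose relation $\equiv_L$ has exactly three classes: words ending in $ab$, words ending in $a$, and everything else. Building the DFA from class representatives:

```python
from itertools import product

def cls(w):
    """Equivalence class of w under the relation, named by its useful suffix."""
    if w.endswith("ab"):
        return "ab"
    if w.endswith("a"):
        return "a"
    return ""          # class of the empty word

states = ["", "a", "ab"]                 # one state per equivalence class
delta = {(s, t): cls(s + t)              # delta([x], t) = [xt], via a representative
         for s in states for t in "ab"}
accepting = {"ab"}                       # classes of words in L

def accepts(w):
    s = ""                               # start state: class of the empty word
    for t in w:
        s = delta[(s, t)]
    return s in accepting

# Sanity check against the language definition for all short words.
ok = all(accepts("".join(w)) == "".join(w).endswith("ab")
         for n in range(6) for w in product("ab", repeat=n))
```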
(b) The examples that you showed work. The classes of the words $ab^n$, for $n=1,2,...$ are all different. In fact, if $x=ab^m$, $y=ab^n$ and $m>n$, then taking $w=ab^nab^n$ we get that $xw\not\in L$, while $yw\in L$. Therefore, $x\not\equiv_L y$.
{
"domain": "cs.stackexchange",
"id": 16849,
"tags": "formal-languages, regular-languages, automata, finite-automata"
}

What happens during the fermentation process of the eco-enzyme?

Question: introduction about the eco-enzyme
I have tried to make several ones at home, no matter what I am using, lemon or pineapple peels with brown or white sugar, the final products all show the similar brown colour.
It seems to me that no matter what type of fruit peel or sugar used as the material, would all lead to a similar brown colour.
So I am curious about:
Why the colour of the final product is different from normal homemade fermentative fruit or vegetable?
Do the microbes involved in the two fermentations differ? Is this because of a difference between the enzymes that are produced by the bacteria?
I have tried to find appropriate and available resources about the microorganisms that may take part in the fermenting process of fruit and vegetables, but I am still not sure which one would take part in which step of the eco-enzyme.
(In my observation, some tiny white spots and big speckles began to float on the surface of the liquid after a week, before opening the container to release gas and make it become aerobic).
What could be the role of the fruit peels aside from being the source of microbes, if they also produce enzymes that influence the chemical reaction?
Is the eco-enzyme really an enzyme?
Answer: The recipe is using yeast to convert the sugar into alcohol (ethanol). All of the proposed cleaning effect is from the dilute alcohol (and also some vinegar). The citrus peel provides some aromatic oils. There may be low levels of some particularly hardy enzymes released from the rotting fruit and dying microbes, but it is difficult to imagine that any of those proteins have any effect on removing dirt from clothes, dishes, or surfaces. The solution is also acidic from the fermentation and the dissolved CO2 gas--just like if you used grape pulp and skins you would make a primitive wine with this recipe, and that wine would then turn into vinegar (acetic acid, HOAc). Mixing some vinegar and alcohol would give you the same cleanser.
The enzyme concentration should be similar to the amount of active enzymes present in homemade beer or wine--would you use either of those for cleaning?
{
"domain": "biology.stackexchange",
"id": 5299,
"tags": "biochemistry, microbiology"
}

How to create a model to suggest similar words in realtime?

Question: I have a huge database of job titles. I want to build a system where, if you enter something like "jav ", it should suggest some similar job titles next (java developer, java engineer, etc.).
How should one approach this problem? How can I build something like this? Latency is the biggest concern because it has to be real time. We have to integrate this into the UI at the end.
Any suggestions how to proceed further?
Answer: Autocompletion algorithms usually build on either tree or hash structure for the sake of efficiency. One famous approach is the Ternary Search Tree. Use this source to have an initial idea. It is also a comparably space-efficient solution. You can find more complex systems build on the Ternary Search Tree.
Another good solution is the Trie. The Trie data structure is a good alternative to Ternary Search Trees; as an example implementation check this.
Both of these are handy at the front end of the application since they are simple but powerful approaches. Other than that, they have a good balance of time and space complexity.
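For instance, a bare-bones Trie with prefix completion fits in a few lines of plain Python (the class names and sample titles here are illustrative, not a production design):

```python
class TrieNode:
    def __init__(self):
        self.children = {}     # letter -> TrieNode
        self.is_word = False

class Trie:
    def __init__(self, words=()):
        self.root = TrieNode()
        for w in words:
            self.insert(w)

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def complete(self, prefix):
        """All stored words starting with prefix, in sorted order."""
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []      # no title shares this prefix
            node = node.children[ch]
        out = []
        def walk(n, acc):
            if n.is_word:
                out.append(prefix + acc)
            for ch in sorted(n.children):
                walk(n.children[ch], acc + ch)
        walk(node, "")
        return out

titles = Trie(["java developer", "java engineer", "data scientist"])
suggestions = titles.complete("jav")   # -> ['java developer', 'java engineer']
```

Lookup cost is proportional to the prefix length plus the size of the matched subtree, which is what makes this shape attractive for real-time completion.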
Moreover, a fast-autocomplete library may help you. Since, in your case, you may have two names for the same job, it has synonym functionality.
In general, tree structures are what you are looking for. You can do this task even using Binary Trees. However, I would suggest you consider the case that in reality spelling problems often appear, in which case you either will not be able to suggest a word or the suggestion will be incorrect. For such problems, you can also have a spelling-check algorithm that first checks if such a word exists; if not, the spell check can propose words and you can suggest the autocompletion based on those suggestions. E.g. instead of 'Java' one may type in 'Jsva' and your algorithm will find the correct word and make the suggestion accordingly. This is one of the best algorithms that does auto-completion using ML techniques. You can also build a simple one using Levenshtein distance. (However, spelling-error cases are mostly discarded in such systems because they are costly to fix.)
{
"domain": "datascience.stackexchange",
"id": 8580,
"tags": "machine-learning, python, deep-learning"
}

speed up RViz/MoveIt

Question:
I am working with a KaWaDa nextage robot using rtmros_nextage and Moveit via the moveit_commander python interface like this:
rtmlaunch nextage_ros_bridge nextage_ros_bridge_simulation.launch
roslaunch nextage_moveit_config moveit_planning_execution.launch
ipython -i `rospack find nextage_ros_bridge`/script/nextage.py
Sadly the robot movements are very slow, so I am looking for a way to speed up things. I thought of several possibilities:
I do not necessarily need the visualisation all the time. Can I switch off the rendering somehow while the robot keeps trying things in a mathematical simulation only?
I also came across this solution which essentially tries to scale the velocity fields of RobotTrajectory, but that does not seem to be possible for me, because the velocities (or points) are tuples and thus immutable.
Is there another way to let the robot try different actions faster?
(sorry for the long tag list, I am quite new and don't know whether the solution will be robot-specific, or moveit-specific, or rospy etc... )
edit:
I also found this file containing max_velocity and max_acceleration limits.
Can I dynamically change these via rospy?
edit 2:
The code referred to as 'this solution' in 2. (might actually be a workaround rather than a solution):
the following code snippet appears to alter the trajectory
speed as desired (here the speed is doubled):
traj = right_arm.plan()
new_traj = RobotTrajectory()
new_traj.joint_trajectory = traj.joint_trajectory
n_joints = len(traj.joint_trajectory.joint_names)
n_points = len(traj.joint_trajectory.points)
spd = 2.0
for i in range(n_points):
traj.joint_trajectory.points[i].time_from_start = traj.joint_trajectory.points[i].time_from_start / spd
for j in range(n_joints):
new_traj.joint_trajectory.points[i].velocities[j] = traj.joint_trajectory.points[i].velocities[j] * spd
new_traj.joint_trajectory.points[i].accelerations[j] = traj.joint_trajectory.points[i].accelerations[j] * spd
new_traj.joint_trajectory.points[i].positions[j] = traj.joint_trajectory.points[i].positions[j]
self.right_arm.execute(new_traj)
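Regarding the immutability problem: since the `velocities`/`accelerations` fields come back as tuples, index assignment fails, but replacing each field with a freshly built tuple works. Here is a sketch of that pattern with a stand-in point class (the real `trajectory_msgs/JointTrajectoryPoint` is mocked, so this only shows the idea, not MoveIt itself):

```python
class FakePoint:
    """Stand-in for a JointTrajectoryPoint whose sequence fields are tuples."""
    def __init__(self, positions, velocities, accelerations, time_from_start):
        self.positions = tuple(positions)
        self.velocities = tuple(velocities)
        self.accelerations = tuple(accelerations)
        self.time_from_start = time_from_start

def scale_point(point, spd):
    # Tuples are immutable: rebuild whole tuples instead of
    # writing point.velocities[j] = ... element by element.
    point.velocities = tuple(v * spd for v in point.velocities)
    point.accelerations = tuple(a * spd for a in point.accelerations)
    point.time_from_start = point.time_from_start / spd
    return point

p = scale_point(FakePoint([0.1, 0.2], [0.5, 1.0], [0.2, 0.4], 2.0), spd=2.0)
```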
Originally posted by robotfan on ROS Answers with karma: 65 on 2016-12-16
Post score: 2
Answer:
I do not necessarily need the visualisation all the time. Can I switch off the rendering somehow while the robot keeps trying things in a mathematical simulation only?
I'll take a close look later today (and I'll update this answer once I can), but try running separately the launch files that are started through nextage_moveit_config/launch/moveit_planning_execution.launch; you should be able to run MoveIt! without visualization on RViz.
(sorry for the long tag list, I am quite new and don't know whether the solution will be robot-specific, or moveit-specific, or rospy etc... )
I'd say so far the issues/desires you raised seem to me pretty much robot-agnostic ;)
UPDATE 20170104
I also came across this solution which essentially tries to scale the velocity fields of RobotTrajectory, but that does not seem to be possible for me, because the velocities (or points) are tuples and thus immutable.
The thread you referred to is a bit long for me to track down...Which "solution" did you mean?
I also found this file containing max_velocity and max_acceleration limits.
Can I dynamically change these via rospy?
The link to "this file" seems corrupted but I assume joint_limits.yaml. The velocity and acceleration values in that file is meant to be static and not supposed to be configurable during runtime (as Jeremy explained in the same thread you referred to earlier).
With the newest MoveIt! (for Indigo, 0.7.6), you can use the scaling factor feature more conveniently than ever before. Using it through RViz is the easiest, but in case visualization is not an option for you, you can also modify the topic MotionPlanRequest as explained in the same page (although I haven't figured out how...I haven't found that topic being published using nextage_moveit_config. I asked a question).
Originally posted by 130s with karma: 10937 on 2016-12-16
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by robotfan on 2016-12-19:
Indeed, it seems I can run my code without visualisation this way. Thanks!
(if someone has a comment on 2., i.e. on setting velocities, I'm still interested though!)
Comment by robotfan on 2017-01-05:
thanks, I meant joint_limits.yaml. Updated the link in my question
{
"domain": "robotics.stackexchange",
"id": 26503,
"tags": "rviz, moveit, rospy"
}

Work on an object in equilibrium

Question: I heard somewhere that when an object is in equilibrium on the flat bottom of a hill that it will require work proportional to $dx^2$ when moved the small distance of $dx$. I thought that it would be proportional to $dx$ because $dw=F\,dx$. What makes the bottom of the hill so special?
Answer: Assume a one dimensional case with $\hat x$ being the unit vector in the positive x-direction.
Suppose a body is the system under consideration and it is at position $x=0$.
To be in a position of stable equilibrium a displacement of the body needs to result in an external force acting in a direction towards the position $x=0$.
An example of such an external force is $\vec F = - k \,\vec x\Rightarrow F\,\hat x = - k\, x\, \hat x \Rightarrow F= -k\,x$.
If such an external force is acting then the work done by the external force in displacing the body a distance $x$ is $\displaystyle \int _0^x(-k\,x)\,dx = - \frac 12 k x^2$
Now negative work done by an external force on a body can be thought of as positive work done by the body.
This is where the idea, that the body has to do work proportional to the change in position squared to move to a new position, comes from; ie work done by body $= +\frac 12 k (\Delta x)^2 \propto (\Delta x)^2$
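A quick numeric check of this quadratic scaling (with an arbitrary illustrative spring constant $k=1$): the work done against $F=-kx$ over a displacement $dx$ matches $\frac 12 k\,dx^2$, so doubling $dx$ quadruples the work.

```python
def work_to_displace(k, dx, n=100000):
    """Midpoint-rule approximation of the integral of k*x from 0 to dx."""
    h = dx / n
    return sum(k * (i + 0.5) * h * h for i in range(n))

k = 1.0
w1 = work_to_displace(k, 0.01)    # displacement dx
w2 = work_to_displace(k, 0.02)    # displacement 2*dx
ratio = w2 / w1                   # -> 4, the quadratic signature
```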
So a body at position $x=0$ might be given a "kick" which gives it some kinetic energy.
The body then starts moving away from the position $x=0$, but in doing so has to do work at the expense of its kinetic energy.
Eventually all the kinetic energy is used up and the body stops.
However it has a force acting on it which makes the body start moving back towards the $x=0$ position with the external force doing work on the body and the body gaining kinetic energy.
With no "lossy" forces (e.g. friction) acting, the body will overshoot the position $x=0$ and undergo oscillatory motion about that position.
If friction does act then the amplitude of oscillation of the body will decrease with time until the body eventually stops at position $x=0$.
{
"domain": "physics.stackexchange",
"id": 61117,
"tags": "energy, work, statics"
}

How is heavy water detrimental to the human body?

Question: Heavy water (D2O) is known to be lethal to humans and other life in large quantities. All I've been able to find on the toxicity is that it's similar to chemotherapy chemistry. I'd like to know precisely how having a single neutron in two atoms of an otherwise non-toxic molecule can cause the body to degrade. What's the root cause here?
Answer: The question is already answered by Armand. I am just going to elaborate on that by referencing the paper.
Different isotopes of chemical elements have slightly different chemical behaviors, but for most elements the differences are far too small to have a biological effect. In the case of hydrogen, differences in chemical properties among protium (light hydrogen), deuterium, and tritium occur, in part because chemical bond energy (the strength of a bond) changes with changed mass of the nucleus–electron system. The isotope effects are especially relevant in biological systems because of the prevalence of hydrogen atoms in biological molecules (even deuterated water can have significant effects in the human body).
Note that enzymes have a finely-tuned network of hydrogen bonds, both in the active center with their substrates, and outside the active center, to stabilize their tertiary structures. In a deuterated environment, some hydrogen bonds will be replaced with deuterium bonds which have different strength (Ref. 3), so normal reactions in cells can be disrupted.
Studies and experiments* show that heavy water affects cell division (mitosis), cell membrane changes and cellular heat stability, possibly as a result of inhibition of chaperonin formation. Also noted among the presumably wide-ranging cellular effects was an altered glucose metabolism in cells under deuterated conditions. $D_2O$ is more toxic towards at least some malignant cells, but the difference in sensitivity vs. normal cells does not seem high enough for therapeutic use.
In ref. 2, you will find a paper which shows the biochemical and pharmacological effects of heavy water on humans. Basically, to study the effects of heavy water on the metabolism humans, $D_2O$ and deuterated drugs were used and various changes were noted. Have a look for more details.
References
Effect on biological systems (Wikipedia)
Kushner DJ, Baker A, Dunstall TG. Pharmacological uses and perspectives of heavy water and deuterated compounds. Can J Physiol Pharmacol. 1999 Feb;77(2):79-88. PMID: 10535697.
Steve Scheiner and Martin Čuma. Relative Stability of Hydrogen and Deuterium Bonds. J. Am. Chem. Soc. 1996 118, 6, 1511–1521 DOI: 10.1021/ja9530376
*Experiments with mice, rats, and dogs have shown that a degree of 25% deuteration causes (sometimes irreversible) sterility, because neither gametes nor zygotes can develop. Mammals (e.g. rats) given heavy water to drink die after about a week, at a time when their body water approaches about 50% deuteration. The mode of death appears to be the same as that in cytotoxic poisoning or in acute radiation syndrome, and is due to deuterium's action in generally inhibiting cell division. | {
"domain": "biology.stackexchange",
"id": 11618,
"tags": "human-biology, toxicology"
} |
When did the term "non-avian dinosaur" come into common use? | Question: When I first learned about the prehistoric (mega)fauna that we now commonly call non-avian dinosaurs, they were just referred to as dinosaurs. I am trying to figure out when we made the collective switch in articles and literature to the use of "non-avian dinosaur".
Merriam-Webster provides us with a definition and indicates that the first known use was in 1988. But that does not show it was in common use, nor that it was general knowledge that birds are descendants of dinosaurs, which is what makes the distinction between avian and non-avian dinosaurs necessary.
So can someone help me find when the trend to the more specific non-avian dinosaurs got started?
Answer: Here is what Google Ngrams has:
This suggests that the mid-70's are the first recorded instance, but it wasn't until the 90's that it began to gain use. What defines "common use" is up to you. | {
"domain": "biology.stackexchange",
"id": 12302,
"tags": "terminology, dinosaurs"
} |
Is the partition function of non-conformal theories on a torus modular invariant? | Question: Usually we say that the partition function of CFTs on a torus is modular invariant, because we define the theory on a torus. If I have a non-conformal field theory on a torus, is its partition function still modular invariant?
Answer: No, a generic theory on a torus is not modular invariant.
A conformal field theory cannot depend on the metric of the space it is defined on but only on its conformal equivalence class. It is a fact that the space of such classes for the torus is given by a complex parameter $\tau$, the (conformal) modulus, defining the lattice we quotient out of $\mathbb{R}^2$ to get the 2-torus: the basic lattice vectors are $(1,0)$ and the vector in the plane corresponding to the complex number $\tau$, i.e. $\tau$ is the "ratio" between the two fundamental circles on the torus. If we did not have conformal invariance of the theory, we would not be allowed to identify the lattice given by $(1,0)$ and $\tau$ with the lattice given by $(2,0)$ and $2\tau$: while related by a rescaling, the areas of these parallelograms, and hence of the resulting tori, are different, so we would not have only this one parameter $\tau$ describing the relevant torus structure.
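For reference, the modular-group action alluded to in the next paragraph can be written out explicitly (standard conventions, added here for concreteness):
$$
\tau \;\longmapsto\; \frac{a\tau+b}{c\tau+d},\qquad a,b,c,d\in\mathbb{Z},\quad ad-bc=1,
$$
so modular invariance of the partition function means $Z\big(\tfrac{a\tau+b}{c\tau+d}\big)=Z(\tau)$; the group is generated by $T:\tau\mapsto\tau+1$ and $S:\tau\mapsto-1/\tau$.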
The modular group $\mathrm{PSL}(2,\mathbb{Z})$ sends $\tau$ to values that are equivalent in the sense that they define the same lattice after rescaling one of the basis vectors to $(1,0)$ as allowed by conformal invariance, and so the partition functions, as a function of $\tau$, must be modular invariant since $\tau$ related by modular transformation define tori with the same conformal equivalence class. | {
"domain": "physics.stackexchange",
"id": 35928,
"tags": "quantum-field-theory, conformal-field-theory, partition-function, moduli"
} |
Refractive index and light-matter interaction Hamiltonian | Question: I am wondering if the light-matter Hamiltonian obtains a dependency on the refractive index if we insert our system into a homogeneous medium that can be characterized by a scalar refractive index $n$.
Let's assume we work in the dipole approximation and treat the field classically. In this case the coupling term to an electromagnetic wave reduces to
$$
H_{vac}(t) = -e \mathbf{\hat r} \cdot \mathbf E_{vac}(t)\\
\mathbf E_{vac}(t) =\boldsymbol{\epsilon} E_0\cos(\omega t)
$$
where $E_0$ is the scalar amplitude of the electric field, $\boldsymbol \epsilon$ is the normalized polarization vector and $\omega$ is the angular frequency of the incident wave.
What happens in a medium with $n$? Are the Hamiltonians connected by a simple proportionality factor involving $n$, like this:
$$
H_{medium}(t) \propto n^\alpha H_{vac}(t).
$$
If this is appropriate, which value does $\alpha$ take and how is it derived? How does $n$ enter the equation?
Answer: I think the only way in which the refractive index enters the picture is through the amplitude of the field at a point. This makes intuitive sense, as the light-matter interaction depends on the amplitude and the frequency of the light, and $\omega$ does not depend on the refractive index.
The oscillating electric field will give rise to a polarization field - the strength of this field can be computed by treating the atom as a driven harmonic oscillator (https://www.feynmanlectures.caltech.edu/II_32.html). Note that $\epsilon_r$ (and hence $\epsilon_0 \epsilon_r$) depends on the frequency of the incident radiation.
In the dipole model, the Hamiltonian assumes this form because the potential energy depends on the separation distance and the total field between the electron and the nucleus ($V = -q\,\vec E\cdot\vec r$). The polarization field $\vec{P}$ is produced as a result of the separation between the electron and the nucleus, which is brought about by the effect of the incident light. However, when we talk about the polarization field within a material at a point, it is not the field produced by the atom at that point, but by all the other surrounding particles (see the part in the reference about the Clausius-Mossotti equation). So the total field is the sum of $\vec{P}$ and $\vec{E_{rad}}$.
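Collecting the standard macroscopic relations used in this argument (textbook electrodynamics, added here for concreteness; the last step is the proportionality assumption made in the concluding paragraph):
$$
\vec D = \epsilon_0\epsilon_r\vec E,\qquad n=\sqrt{\epsilon_r\mu_r}\approx\sqrt{\epsilon_r}\ \ (\mu_r\approx 1)\quad\Longrightarrow\quad \vec D = \epsilon_0\, n^2\, \vec E,
$$
so a Hamiltonian taken to scale with $\vec D$ picks up a factor $n^2$ relative to the vacuum case, i.e. $\alpha = 2$.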
Since the Schroedinger equation is linear, assuming $\epsilon_r$, and therefore $n$, is uniform within the material, $\hat H$ is proportional to the electric displacement, $\vec{D} = \epsilon_r \epsilon_0 \vec{E}$, and since $n = \sqrt{\epsilon_r}$ (for a non-magnetic medium), this would make $\alpha = 2$. | {
"domain": "physics.stackexchange",
"id": 81023,
"tags": "quantum-mechanics, electromagnetic-radiation, hamiltonian, refraction, maxwell-equations"
} |
Summing up distinct elements in steps (follow up) | Question: This is a "follow up" to an answer I gave on this question : Summing up distinct elements in steps
Here are the OP's requirements :
"My current task is to find a score from an array where the highest/lowest scores have been taken away, and if the highest/lowest occur more than once (ONLY if they occur more than once), one of them can be added:
E.g. int[] scores = [4, 8, 6, 4, 8, 5] therefore the final addition will be ∑4,8,6,5=23.
Another condition of the task is that LINQ cannot be used, as well as any of the System.Array methods (you can see by my previously ask questions that has been a bit of a pain for me, since I solved this with LINQ in less than 5 minutes)."
public int CalculateScore(int[] scores)
{
int lowestValue = int.MaxValue,
highestValue = int.MinValue,
ammountOfHighestValue = 1,
ammountOfLowestValue = 1,
finalScore = 0;
foreach (int score in scores)
{
finalScore += score;
if (score < lowestValue)
{
lowestValue = score;
ammountOfLowestValue = 1; //We need to reset the ammount
}
else if (score > highestValue)
{
highestValue = score;
ammountOfHighestValue = 1; //We need to reset the ammount
}
else if (score == lowestValue)
ammountOfLowestValue++;
else if (score == highestValue)
ammountOfHighestValue++;
}
if (ammountOfHighestValue > 1)
//This way, we keep the highest score once.
finalScore -= ((ammountOfHighestValue - 1) * highestValue);
else
finalScore -= highestValue; //The value is there once, we remove it.
if (ammountOfLowestValue > 1)
finalScore -= ((ammountOfLowestValue - 1) * lowestValue); //Same as highest
else
finalScore -= lowestValue;
return finalScore;
}
I'm interested in how I can remove the multiple if/else statements while keeping a complexity of O(n) and still looping through the array only once.
Answer: Bugs
Console.WriteLine(CalculateScore(new[] { 1 } ));
Console.WriteLine(CalculateScore(new[] { 2, 1 } ));
Console.WriteLine(CalculateScore(new[] { 3, 2, 1 } ));
What's the expected output? The question is maybe underspecified for the first case (I would say it's 0), but the other two are clear: 0 and 2.
But we get:
-2147483648
-2147483646
-2147483643
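For contrast, here is a hedged Python sketch (not the reviewed C# code) of the usual fix: initialise the running min/max from the data itself instead of int.MaxValue/int.MinValue sentinels, and use two independent if-chains so the first element can count as both the current lowest and the current highest. The [1] and [1, 1] edge cases remain as underspecified as in the original task:

```python
def calculate_score(scores):
    # Seed min/max from the first element so no sentinel can leak into
    # the result; this is what makes monotone inputs like [2, 1] and
    # [3, 2, 1] come out right.
    lo = hi = scores[0]
    lo_n = hi_n = 0
    total = 0
    for s in scores:
        total += s
        # Two *independent* chains: the first element updates both counts.
        if s < lo:
            lo, lo_n = s, 1
        elif s == lo:
            lo_n += 1
        if s > hi:
            hi, hi_n = s, 1
        elif s == hi:
            hi_n += 1
    # Keep one copy of a duplicated extreme, drop a unique one entirely.
    total -= hi if hi_n == 1 else (hi_n - 1) * hi
    total -= lo if lo_n == 1 else (lo_n - 1) * lo
    return total

print(calculate_score([4, 8, 6, 4, 8, 5]))  # 23, the OP's worked example
print(calculate_score([2, 1]))              # 0
print(calculate_score([3, 2, 1]))           # 2
```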
Bonus question: what is the correct result for the array { 1, 1 }? I would say 2, but your program returns 0. | {
"domain": "codereview.stackexchange",
"id": 9265,
"tags": "c#, algorithm, rags-to-riches"
} |
Fuzzy logic functions library with driver code | Question: This file implements functions based on vectors/sets.
FuzzySetLibrary.h
#include <bits/stdc++.h>
using namespace std;
const int MAX_SIZE_ARRAY = 3;
namespace fuzzySet
{
vector<vector<float> > cartesianProduct(vector<float>const & array1, vector<float>const & array2)
{
vector< vector<float> > output(MAX_SIZE_ARRAY);
for (int i = 0; i < MAX_SIZE_ARRAY; ++i)
output[i].resize(MAX_SIZE_ARRAY);
for (int i = 0; i < MAX_SIZE_ARRAY; ++i)
for (int j = 0; j < MAX_SIZE_ARRAY; ++j)
output[i][j] = min(array1[i], array2[j]);
return output;
}
vector<float> fuzzyUnion(vector<float>const & array1, vector<float>const & array2)
{
vector<float> output(MAX_SIZE_ARRAY);
for (int i = 0; i < MAX_SIZE_ARRAY; ++i)
output[i] = max(array1[i], array2[i]);
return output;
}
vector<float> fuzzyIntersection(vector<float>const & array1, vector<float>const & array2)
{
vector<float> output(MAX_SIZE_ARRAY);
for (int i = 0; i < MAX_SIZE_ARRAY; ++i)
output[i] = min(array1[i], array2[i]);
return output;
}
vector<float> fuzzyComplement(vector<float>const & array)
{
vector<float> output(MAX_SIZE_ARRAY);
for (int i = 0; i < MAX_SIZE_ARRAY; ++i)
output[i] = 1 - array[i];
return output;
}
void display(vector<float> const & array)
{
for (int i = 0; i < MAX_SIZE_ARRAY; ++i)
cout<<array[i]<<" ";
cout<<"\n\n";
}
}
This library implements all the fuzzy functions that can be performed on fuzzy relations and matrices.
MatrixLibrary.h
#include <bits/stdc++.h>
using namespace std;
const int MAX_SIZE_MATRIX = 3;
namespace matrix
{
vector<vector<float> > fuzzyUnion(vector<vector<float> > const & matrix1, vector< vector<float> > const & matrix2)
{
vector< vector<float> > output(matrix1.size());
for (int i = 0; i < output.size(); ++i)
output[i].resize(output.size());
for (int i = 0; i < matrix1.size(); ++i)
for (int j = 0; j < matrix1[i].size(); ++j)
output[i][j] = max(matrix1[i][j], matrix2[i][j]);
return output;
}
vector<vector<float> > fuzzyIntersection(vector<vector<float> > const & matrix1, vector< vector<float> > const & matrix2)
{
vector< vector<float> > output(matrix1.size());
for (int i = 0; i < output.size(); ++i)
output[i].resize(output.size());
for (int i = 0; i < matrix1.size(); ++i)
for (int j = 0; j < matrix1[i].size(); ++j)
output[i][j] = min(matrix1[i][j], matrix2[i][j]);
return output;
}
vector<vector<float> > fuzzyComplement(vector<vector<float> > const & matrix1)
{
vector< vector<float> > output(matrix1.size());
for (int i = 0; i < MAX_SIZE_MATRIX; ++i)
output[i].resize(MAX_SIZE_MATRIX);
for (int i = 0; i < matrix1.size(); ++i)
for (int j = 0; j < matrix1[i].size(); ++j)
output[i][j] = 1 - matrix1[i][j];
return output;
}
vector< vector<float> > fuzzyComposition(vector<vector<float> >const & matrix1, vector<vector<float> >const & matrix2, int type)
{
vector< vector<float> > output(MAX_SIZE_MATRIX);
for (int i = 0; i < MAX_SIZE_MATRIX; ++i)
output[i].resize(MAX_SIZE_MATRIX);
for (int i = 0; i < MAX_SIZE_MATRIX; ++i)
{
for (int j = 0; j < MAX_SIZE_MATRIX; ++j)
{
float maxValue = 0.0;
float minValue = 1.1;
for (int k = 0; k < MAX_SIZE_MATRIX; ++k)
{
if(type == 0) // MaxMin Composition
{
float val = min(matrix1[i][k], matrix2[k][j]);
if(val > maxValue) maxValue = val;
}
else if(type == 1) // MinMax Composition
{
float val = max(matrix1[i][k], matrix2[k][j]);
if(val < minValue) minValue = val;
}
else if(type == 2) // MaxAverage Composition
{
float val = (matrix1[i][k] + matrix2[k][j])/2.0;
if(val > maxValue) maxValue = val;
}
else if(type == 3) // MinAverage Composition
{
float val = (matrix1[i][k] + matrix2[k][j])/2.0;
if(val < minValue) minValue = val;
}
}
if(type == 0 || type == 2) output[i][j] = maxValue;
else if(type == 1 || type == 3) output[i][j] = minValue;
}
}
return output;
}
void display(vector<vector<float> > const & matrix)
{
for(int i = 0; i < 3; i++)
{
for (int j = 0; j < 3; ++j)
cout<<matrix[i][j]<<" ";
cout<<"\n";
}
cout<<"\n";
}
}
Main driver code that actually uses the aforementioned libraries and performs functions and displays the results.
Main.cpp
#include <bits/stdc++.h>
#include "MatrixLibrary.h"
#include "FuzzySetLibrary.h"
using namespace std;
int main()
{
srand(time(NULL));
std::cout << std::setprecision(1) << std::fixed;
vector< vector<float> > input1(MAX_SIZE_MATRIX);
vector< vector<float> > input2(MAX_SIZE_MATRIX);
vector<float> arrayInput1(MAX_SIZE_ARRAY);
vector<float> arrayInput2(MAX_SIZE_ARRAY);
vector< vector<float> > unionOutput(MAX_SIZE_MATRIX),
interOutput(MAX_SIZE_MATRIX), compleOutput(MAX_SIZE_MATRIX), maxMinOutput(MAX_SIZE_MATRIX),
cartesianOutput(MAX_SIZE_MATRIX);
for (int i = 0; i < MAX_SIZE_MATRIX; ++i)
{
input1[i].resize(MAX_SIZE_MATRIX);
input2[i].resize(MAX_SIZE_MATRIX);
unionOutput[i].resize(MAX_SIZE_MATRIX);
interOutput[i].resize(MAX_SIZE_MATRIX);
compleOutput[i].resize(MAX_SIZE_MATRIX);
maxMinOutput[i].resize(MAX_SIZE_MATRIX);
cartesianOutput[i].resize(MAX_SIZE_MATRIX);
}
for(int i = 0; i < MAX_SIZE_MATRIX; i++)
{
arrayInput1[i] = (rand()%10)*0.1;
arrayInput2[i] = (rand()%10)*0.1;
for (int j = 0; j < MAX_SIZE_MATRIX; ++j)
{
input1[i][j] = (rand()%10)*0.1;
input2[i][j] = (rand()%10)*0.1;
}
}
unionOutput = matrix::fuzzyUnion(input1, input2);
interOutput = matrix::fuzzyIntersection(input1, input2);
compleOutput = matrix::fuzzyComplement(input1);
maxMinOutput = matrix::fuzzyComposition(input1, input2, 0);
cartesianOutput = fuzzySet::cartesianProduct(arrayInput1, arrayInput2);
matrix::display(input1);
matrix::display(input2);
fuzzySet::display(arrayInput1);
fuzzySet::display(arrayInput2);
matrix::display(unionOutput);
matrix::display(interOutput);
matrix::display(compleOutput);
matrix::display(maxMinOutput);
matrix::display(cartesianOutput);
return 0;
}
I would like to know whether my code-writing style is good and professional grade, or whether it still looks like something a newbie wrote. I'd welcome any improvements regarding the style of function declarations, variable and function usage, or the use of constants.
And please let me know how I can improve the logic, make the code more readable, and get better control over the overall structure.
Answer: Prefer standard headers
<bits/stdc++.h> isn't in any standard; even on those platforms where it exists, it's probably a poor choice if it includes the whole of the Standard Library. Instead, include (only) the standard library headers that declare the types that are required for the header.
Avoid using namespace in headers
A header file should provide the definitions that the including program requires. If it brings all of std into the global namespace, that's a far-reaching side-effect that can make correct programs much harder to write.
Although there are differing opinions, I would extend this advice to program files too.
Namespace hygiene
The constants MAX_SIZE_ARRAY and MAX_SIZE_MATRIX would be better within their respective namespaces, rather than in the global namespace. I'm not entirely sure why we have those fixed limits at all, but that may be because I don't understand the real-world problem this code attempts to solve.
Interface design
Is there a strong reason that std::vector should be the only supported container? And that float should be the only supported element type?
If not, then it's likely that the methods should be templated on those types. You may want to encapsulate the data type and methods into a class; alternatively (if you want to be more general), you might want to keep the procedural interface, but pass appropriate input and output iterators rather than containers.
Options to reduce duplication
Much of the matrix implementation is repeated application of the list implementation over the rows of the matrix. I think there's scope to simply call the list methods from within the matrix ones. It may well be worth moving the definitions out from the headers, leaving only the declarations in the headers (as usual for non-template code).
Allow output to any stream
Instead of embedding the use of std::cout in display(), it's better to pass a std::ostream as parameter, to allow output where it's needed (e.g. standard error, file or socket).
If you create a class (or pair of classes), you'll probably want to write a stream operator:
std::ostream& operator<<(std::ostream&, const Matrix&); | {
"domain": "codereview.stackexchange",
"id": 26129,
"tags": "c++"
} |
Why use a large separate starshade instead of an occulting disk? | Question: The New Worlds Mission proposal has a large occulter on a different spacecraft from the space telescope to block glare from a star to reveal its planets. What is its advantage compared to a disk on an arm such as on the Solar and Heliospheric Observatory (SOHO)?
I can understand that on Earth, a coronagraph suffers from atmospheric scattering, but why would it matter in space?
Answer: TL;DR: The Sun is well-resolved in small telescopes so a focal plane occulter works well. Other stars are not resolved (except in a few exciting cases using interferometry) so an internal blocking disk would be useless.
Local blocking disks for telescope coronagraphs are placed inside the instrument at a focal plane, usually the first focal plane. That way they have sharp edges and can block the Sun (or star) but allow light nearby to reach a detector.
A blocking disk "on an arm" some short distance in front of a telescope would be way way out of focus and so act like a "fuzzy blob" at best. SOHO does not have one of these.
It does have a small disk on a small arm inside the telescope.
The disk of the Sun is huge compared to the resolution of the telescope, roughly 2000 arcseconds compared to order 1 arcsecond resolution (depending on wavelength). So a disk at the focal plane is sufficient to block the Sun and allow the larger corona to still reach the focal plane.
However a star's angular size is way, way, way smaller than the resolution of a modern space telescope. There's no hope of blocking the star and letting light from nearby orbiting exoplanets to reach the focal plane.
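Rough numbers make the mismatch concrete (illustrative values I am assuming, not taken from the answer: a Sun-sized star at 10 pc and a 2.4 m visible-light telescope):

```python
import math

RAD_TO_ARCSEC = 180 / math.pi * 3600   # ~206265 arcsec per radian

sun_diameter_m = 1.39e9                # solar diameter
au_m = 1.496e11                        # Earth-Sun distance
pc_m = 3.086e16                        # one parsec in metres

# Angular diameter of the Sun seen from Earth: about half a degree.
sun_arcsec = sun_diameter_m / au_m * RAD_TO_ARCSEC          # ~1900 arcsec

# The same star moved to 10 pc: roughly a milliarcsecond.
star_arcsec = sun_diameter_m / (10 * pc_m) * RAD_TO_ARCSEC  # ~1e-3 arcsec

# Diffraction limit of a 2.4 m telescope at 550 nm (theta ~ 1.22 lambda/D).
limit_arcsec = 1.22 * 550e-9 / 2.4 * RAD_TO_ARCSEC          # ~0.06 arcsec

print(f"Sun:  {sun_arcsec:9.1f} arcsec")
print(f"Star: {star_arcsec:9.4f} arcsec")
print(f"2.4 m telescope resolution: {limit_arcsec:.3f} arcsec")
```

So the Sun spans thousands of resolution elements, while the star is tens of times smaller than a single one, which is why a focal-plane disk works for the former and not the latter.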
So the only alternative is to move the disk so far from the telescope that it appears as small as the star itself, and blocks the light. | {
"domain": "astronomy.stackexchange",
"id": 6514,
"tags": "exoplanet, space-telescope"
} |
Is it true that points of the root locus only satisfy the angle condition? | Question: So, I was reading a course book, ''Modern Control Engineering'' by Ogata, and I came across this statement that made me skeptical:
''The points of the root locus only satisfy the angle condition. Closed-loop poles are the roots of the characteristic equation and therefore satisfy both the magnitude and angle conditions. If we want to find the closed-loop poles for a given gain value K, then we have to ...''
I am not sure why not every point of the root locus corresponds to a closed-loop pole for some K. Is it true? And if so, do these points that are not closed-loop poles for any K have any physical/geometrical/control-theoretic meaning?
I even have a guess about this: maybe he means that K has to be an integer (perhaps K has to be an integer in ''real world applications''?). Then of course the root locus would have a discrete representation as a graph, so he would mean that if K is an integer, in the root locus we connect the dots (to get a continuous representation), and therefore not every point of it is a closed-loop pole; everything would then make sense.
I even graphed a root locus in Python: if you click on any point, Python returns the corresponding K for that point (which may be a float). So is this the reason? Let me know if I am right or wrong.
Answer: Suppose we have a system $G(s) = \frac{1}{s(s+1)}$ and controller $K$ (this is purely a gain) and we close the loop:
$$T(s) = \frac{KG(s)}{1+KG(s)} = \frac{K\frac{1}{s(s+1)}}{1+K\frac{1}{s(s+1)}}$$
$$ = \frac{K}{s^2+s+K} $$
As you might notice, the poles of this closed-loop equation depend on the value of $K$:
$$s^2 + s + K = 0 \rightarrow s = -0.5\pm\sqrt{0.25 - K}$$
This means that one can influence the behaviour by only changing this $K$ to any arbitrary real value (imaginary might sound cool in simulation, but it's kinda hard to apply an imaginary voltage to a system, for instance). The root locus plot represents how the poles shift for changing values of $K$ (where $K > 0$). As you stated, if $K$ can only be an integer, you will indeed not get a continuous curve, only dots at the places where $K$ exists. But as I stated earlier, $K$ can be any real value; for negative values you can quickly see the system becomes unstable.
EDIT: I have rewritten this part in a more elaborate proof.
Instead of looking at a specific system, suppose an arbitrary system
$$G(s) = \frac{N(s)}{D(s)}$$
The angle condition holds at points where the phase of the open-loop system is an odd multiple of $-180^\circ$, or in other words:
$$\mathcal{Im}\left\{\frac{KN(s)}{D(s)}\right\} = 0 ~~\text{ and }~~ \mathcal{Re}\left\{\frac{KN(s)}{D(s)}\right\} < 0$$
The magnitude condition is the point where the magnitude of the open loop transfer equals 1. If both the magnitude condition and the angle condition match, the denominator of the closed loop transfer function becomes 0 (which is something we tend to avoid at all cost). Now, suppose we take a value for $K$ and calculate the value for $s = s_0$ such that the open loop transfer equals $-1$:
$$\frac{KN(s_0)}{D(s_0)} = -1 \rightarrow \frac{N(s_0)}{D(s_0)} = -\frac{1}{K}$$
We know this $s_0$ lies on the root locus. What you might also notice is that the sign does not change if I change $K$ to any positive, real value. Therefore, if $s$ lies on the root locus, the angle condition holds for any positive, real $K$. You can also see this in the Bode plot, as $K$ only changes the magnitude plot, not the phase plot. As you might expect, the inverse also holds: the angle condition holds for any value of $s$ that lies on the root locus of $G(s)$.
The magnitude condition only holds for a finite set of values of $s$ on the root locus (the solution at which the characteristic equation equals 0).
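A quick numerical check of the example system above, using only the quadratic formula (hypothetical helper name; the OP mentioned doing something similar in Python):

```python
import cmath

def closed_loop_poles(K):
    """Roots of s^2 + s + K = 0, i.e. the closed-loop poles of
    G(s) = 1/(s(s+1)) under pure gain feedback K."""
    d = cmath.sqrt(0.25 - K)
    return (-0.5 + d, -0.5 - d)

# Sweeping K over *real* values traces out the continuous root locus:
for K in (0.0, 0.25, 1.0, 4.0):
    p1, p2 = closed_loop_poles(K)
    print(f"K = {K:4.2f}: poles {p1:.3f} and {p2:.3f}")

# For K > 0.25 the poles sit on the vertical line Re(s) = -0.5,
# matching the branch of the locus computed by hand above; at any
# such pole the open-loop transfer K*G(s) evaluates to exactly -1,
# i.e. both the angle and magnitude conditions hold.
```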
I hope I explained it a bit better, you can try also to just write out a couple cases and you quickly notice the same. | {
"domain": "engineering.stackexchange",
"id": 3676,
"tags": "control-engineering, control-theory, feedback-loop"
} |
Why does waitForMessage not work for a single non-latching message? | Question:
I need a way to get the first message that was published after this line of code. I am thinking waitForMessage seems like a logical choice. However, it does not seem to be behaving as I expected.
Given the following simple program that waits for messages from a topic a couple of times.
#include "ros/ros.h"
#include "std_msgs/Int32.h"
int main(int argc, char** argv)
{
ros::init(argc, argv, "listener");
ros::NodeHandle nh;
std_msgs::Int32::ConstPtr ret;
ret = ros::topic::waitForMessage<std_msgs::Int32>("/topic");
ROS_INFO("%d", ret->data);
ret = ros::topic::waitForMessage<std_msgs::Int32>("/topic");
ROS_INFO("%d", ret->data);
ret = ros::topic::waitForMessage<std_msgs::Int32>("/topic");
ROS_INFO("%d", ret->data);
ros::spin();
return 0;
}
I publish with the following non-latching publish
#!/usr/bin/env python
import rospy
import std_msgs
if __name__ == "__main__":
rospy.init_node('publisher')
pub = rospy.Publisher('/topic', std_msgs.msg.Int32, queue_size=1, latch=False)
pub.publish(0)
rospy.spin()
rostopic echo /topic shows that the message is published correctly. However, the listener program is still stuck at waitForMessage even though the listener program was started before the publishing node. Why is that the case?
Further testing with
#!/usr/bin/env python
import rospy
import std_msgs
if __name__ == "__main__":
i = 0
while not rospy.is_shutdown():
pub = rospy.Publisher('/topic', std_msgs.msg.Int32, queue_size=1)
rospy.init_node('publisher')
pub.publish(i)
i += 1
rospy.sleep(10)
shows that it's always only the first message that is ignored. The subsequent messages are correctly caught by waitForMessage
Originally posted by Rufus on ROS Answers with karma: 1083 on 2020-12-30
Post score: 1
Original comments
Comment by gvdhoorn on 2020-12-30:
Seeing your question (and #q368589), perhaps it would be good if you could clarify why you "need a way to get the first message that was published after this line of code". What is it you're trying to achieve?
Publish-subscribe being a completely asynchronous pattern of communication, implementing anything which requires strict ordering of events in different entities is typically a bit more complex than normal. So things may need a different approach.
Comment by Rufus on 2020-12-30:
Thanks for the patience and detailed responses, I am currently running tests in simulation where I rearrange some 200+ objects in gazebo and check if the sensor detects them. This is repeated many times for many different configurations. Since I am simulating multiple lidars, getting each sensor measurement takes an incredibly long time. As such, to speed up my testing, I only get the first sensor reading immediately after rearrangement (which I was thinking to accomplish with waitForMessage). I am currently tracking down an issue where the sensor measurements don't seem to be updated afte rearrangement (question on gazebosim here) which led me to the questions on waitForMessage.
Answer:
tl;dr: creating publisher->subscriber connections takes time. You are not giving waitForMessage(..) sufficient time, hence it doesn't see any messages on your topic. Result: first message is "lost" and your program then "hangs".
Longer explanation:
Why does waitForMessage not work for a single non-latching message?
From what you show, I don't believe this is caused by waitForMessage(..) not working, but again (as in #q368589) has to do with your expectations.
First, to get it out of the way: there is no central buffer or message queue maintained between publishers and subscribers. Publishers and subscribers have their own internal buffer, but that is not shared with any other nodes, and only exists as long as a node is running.
Consequence of this: messages published before a subscriber is online (or: has a connection with a publisher) will disappear.
Second: waitForMessage(..) does approximately the following (from here and here):
create a NodeHandle (if needed)
subscribe to the topic you specify with the message type you specify
register a callback, which sets a bool to true as soon it has received a message
returns the message received, or times-out
Note: approximately, as the actual implementation is a bit more complicated (it uses separate ros::CallbackQueues, for instance).
From this it should be clear waitForMessage(..) essentially does the same thing as any code which subscribes to a topic.
And it also has the exact same limitations and requirements as any other code subscribing to a topic.
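The timing issue can be illustrated with a deliberately tiny toy model (plain Python, not ROS; all names here are made up): a publisher only delivers to connections that already exist, and there is no central buffer, so a message published before the subscription exists is simply gone:

```python
class ToyBroker:
    """Minimal pub/sub toy (illustration only, not how ROS is implemented):
    delivery happens only over connections that exist at publish time."""

    def __init__(self):
        self._subs = {}                       # topic -> list of queues

    def subscribe(self, topic):
        queue = []                            # each subscriber's own buffer
        self._subs.setdefault(topic, []).append(queue)
        return queue

    def publish(self, topic, msg):
        # No subscribers connected yet? The message is dropped, not stored.
        for queue in self._subs.get(topic, []):
            queue.append(msg)

broker = ToyBroker()
broker.publish('/topic', 0)         # published before the connection exists
inbox = broker.subscribe('/topic')  # subscription (cf. waitForMessage) too late
broker.publish('/topic', 1)         # published after the connection
print(inbox)                        # [1] -- the first message is "lost"
```

In real ROS the "connection exists" step additionally takes wall-clock time (master lookup, TCP handshake), which is exactly the window in which the OP's single message disappeared.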
As to your observations:
I publish with the following non-latching publish
#!/usr/bin/env python
import rospy
import std_msgs
if __name__ == "__main__":
rospy.init_node('publisher')
pub = rospy.Publisher('/topic', std_msgs.msg.Int32, queue_size=1, latch=False)
pub.publish(0)
rospy.spin()
rostopic echo /topic shows that the message is published correctly. However, the listener program is still stuck at waitForMessage even though the listener program was started before the publishing node. Why is that the case?
There are many, many Q&As here on ROS Answers about this (keywords: "subscriber first message lost"), but summarising: the cause here is most likely that setting up publisher<->subscriber connections takes time. Even though your listener program was started earlier, the code creating the ros::Subscriber in waitForMessage(..) simply didn't have sufficient time to subscribe to /topic before you published the 0.
So waitForMessage(..) subscribes, but then never sees a message (as you only publish a single one), causing it to "get stuck".
Further testing with
#!/usr/bin/env python
import rospy
import std_msgs
if __name__ == "__main__":
i = 0
while not rospy.is_shutdown():
pub = rospy.Publisher('/topic', std_msgs.msg.Int32, queue_size=1)
rospy.init_node('publisher')
pub.publish(i)
i += 1
rospy.sleep(10)
shows that it's always only the first message that is ignored. The subsequent messages are correctly caught by waitForMessage
This should now be less of a surprise: while this code goes against best-practices (you should never call rospy.init_node(..) nor rospy.Publisher(..) in a loop), there is simply more time for waitForMessage(..) to create the necessary connections.
shows that it's always only the first message that is ignored.
that's exactly what you would expect (see some of the existing Q&As about "first message ignored" problems).
All-in-all, I believe again waitForMessage(..) does exactly what it is designed for, and the behaviour you observe appears to be by-design.
Whether it is what you expected would be something else.
Originally posted by gvdhoorn with karma: 86574 on 2020-12-30
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 35920,
"tags": "ros-melodic"
} |
How does an inflationary universe solve the Flatness Problem, Horizon Problem and Monopole Problem? | Question:
Possible Duplicate:
What is the evidence for Inflation of the early universe?
I am reading some public science books on inflationary universe, e.g. The Inflationary Universe by A. Guth.
Both in this book and the Wiki page mention briefly how this theory of inflation can possibly solve Flatness Problem, Horizon Problem and Monopole Problem. But their explanations are too sketchy for me.
Can somebody give a detailed explanation? But I do not want something too technical. I am doing this for a presentation in my general education class. The professor is a prominent physicist so I need some more details to satisfy him, but most of the audience are just average college students.
How does an inflationary universe solve the Flatness Problem, Horizon Problem and Monopole Problem?
Answer: This is really a comment rather than an answer, but it got too long to put in a comment.
Flatness problem:
On Flatness problem, Inflation etc
Why does inflation (the inflaton field) push Omega down closer to zero (flatten the universe)?
How does inflation drive Ω close to 1?
http://en.wikipedia.org/wiki/Flatness_problem
Monopoles:
How does Inflation solve the Magnetic Monopole problem?
http://en.wikipedia.org/wiki/Monopole_problem#Magnetic-monopole_problem
Horizon problem:
I couldn't find a match question on this site but see http://en.wikipedia.org/wiki/Horizon_problem.
Each of the three subjects (the flatness problem, the horizon problem and the monopole problem) is a long answer in its own right. I'd suggest you have a look at the links above and ask a new question about any specific issues you don't understand. | {
"domain": "physics.stackexchange",
"id": 5391,
"tags": "cosmology, universe, space-expansion, cosmological-inflation"
} |
What's the physical origin of bound states in the continuum (BICs)? | Question:
From my point of view, BICs are modes built by destructive interference. But I'm confused about the orgin of BIC, I have two questions:
The differences between BIC modes and defect modes.
It has been reported that some previous research mistook defect modes for BICs. However, I still cannot tell the difference between BIC modes and defect modes, especially when we focus on quasi-BICs. Since we can see a DBR cavity as a structure supporting BICs, I think BIC is just a different way to view the problem instead of something new.
I know that there are some symmetry-protected BICs without any defects or special designs (like BIC vortex beam generators), but I'm confused about the relationship between the topological nature and the basic characteristics (symmetry protected or parametric).
How can BICs survive in individual structures?
When it comes to individual structures without any particular design, it's so strange that they can support BICs. Are they real BICs?
Changing the parameters of the structures to reach high-Q modes is, I think, very similar to normal optimization. Is there something special inside when we view it through the lens of BICs?
References
[1] Hsu C W, et al. Nat. Rev. Mater., 2016
[2] Bogdanov A A, et al. Adv. Photonics, 2019
[3] Koshelev K, et al. Science, 2020
Answer: Intro
As far as I can see from
https://en.wikipedia.org/wiki/Bound_state_in_the_continuum
https://doi.org/10.1126/science.aaz3985
and from my previous brief experience, BIC in photonics refers to having an electromagnetic excitation that looks like it 'could' propagate away but doesn't. What it looks like normally is that there is some structure and some specific kind of excitation that stays pinned to the spot, even though you can have other electromagnetic waves propagating in the same space.
Problem statement
Is it strange to have an electromagnetic excitation fixed to a single place? I don't think so. Imagine having an excitation sitting inside a finite-sized perfect-electric-conductor sphere. It will sit there and will not go anywhere. Clearly this is not too helpful; more generally we can ask: under which conditions can you localize electromagnetic excitation in a space that supports propagating waves?
Dielectrics/Metals/Domains -> Currents
Before continuing it is helpful to get away from considering bits of dielectric, defects in photonic crystals etc. Instead we will focus on homogeneous space (could be free space, could be photonic crystal) and currents in that space. The time-harmonic wave equation for electric field ($\mathbf{E}$) in this case is:
$$
\begin{align}
\left(\boldsymbol{\nabla}\times\boldsymbol{\nabla}\times\mathbf{E}-\frac{\omega^2}{c^2}\epsilon\left(\mathbf{r}\right)\mathbf{E}\right)&=-i\omega \mu_0\mathbf{J} \\
\left(\boldsymbol{\nabla}\times\boldsymbol{\nabla}\times\mathbf{E}-\frac{\omega^2}{c^2}\epsilon_0\mathbf{E}\right)&=-i\omega\mu_0\left( \mathbf{J}+\frac{-\frac{\omega^2}{c^2}\cdot\left(\epsilon_0-\epsilon\left(\mathbf{r}\right)\right)\cdot\mathbf{E}}{-i\omega\mu_0}\right)
\end{align}
$$
What I have done there is to convert the spatially dependent part of the complex dielectric constant ($\epsilon\left(\mathbf{r}\right)$) into a current density ($\mathbf{J}$). What this shows is that photonic BIC problems, which often rely on non-trivially structured materials and defects, e.g. dielectric particles, can be considered as problems of electrodynamics in homogeneous space (permittivity $\epsilon_0$, permeability $\mu_0$, speed of light $c$ with $c^2=1/\epsilon_0\mu_0$) with some oscillating current density.
Non-radiating configurations
Now we can restate the problem as follows:
Can you have a localized oscillating distribution of current density in homogeneous space (that supports wave propagation), which emits no radiation?
The answer is yes you can: https://doi.org/10.1103/PhysRevD.8.1044 (Devaney, Wolf, PRD 8, 1044 (1973))
In particular, a current density of the form:
$$
\mathbf{J}=\boldsymbol{\nabla}\times\boldsymbol{\nabla}\times\mathbf{f}+\alpha\mathbf{f}
$$
Where $\mathbf{f}\left(\mathbf{r}\right)$ is some localized time-harmonic vector field, and $\alpha$ is a suitable constant (depends on how you write your Maxwell's equations), will always emit no light.
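As a sanity check (sign conventions vary, so take the specific constant as an assumption): using the homogeneous wave equation above with $k^2 = \frac{\omega^2}{c^2}\epsilon_0$ and the choice $\alpha = -k^2$, the ansatz $\mathbf{E} = -i\omega\mu_0\,\mathbf{f}$ gives
$$
\left(\boldsymbol{\nabla}\times\boldsymbol{\nabla}\times{}-k^2\right)\left(-i\omega\mu_0\mathbf{f}\right)
=-i\omega\mu_0\left(\boldsymbol{\nabla}\times\boldsymbol{\nabla}\times\mathbf{f}-k^2\mathbf{f}\right)
=-i\omega\mu_0\mathbf{J},
$$
so the field vanishes identically outside the support of $\mathbf{f}$; since this solution already obeys the outgoing radiation condition, it is the physical one, and nothing is radiated.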
Anapoles
Anapoles are point-like excitations (https://doi.org/10.1038/s42005-019-0167-z, V. Savinov et al. Comm Phys, 2:69 (2019) ) with time-harmonic current density:
$$
\mathbf{J}=\boldsymbol{\nabla}\times\boldsymbol{\nabla}\times\mathbf{N}\delta\left(\mathbf{r}\right)+\alpha\mathbf{N}\delta\left(\mathbf{r}\right)
$$
Where $\mathbf{N}$ is a vector. It should be clear that Devaney-Wolf non-radiating configurations can be assembled from anapoles.
Interestingly anapoles, and the linked toroidal dipoles occur naturally in nuclear physics https://en.wikipedia.org/wiki/Toroidal_moment
Conclusion
What is the origin of bound states? The physical origin is that the electromagnetic (dyadic) Green function ($\mathbf{G}\left(\mathbf{r}-\mathbf{r'}\right)$) has a non-empty null-space - such are Maxwell's equations. Hence if the electric field is given by:
$$
\mathbf{E}\left(\mathbf{r}\right)=\int_V d^3r' \mathbf{G}\left(\mathbf{r}-\mathbf{r'}\right).\mathbf{J}\left(\mathbf{r'}\right)
$$
It is possible to have vanishing electric field outside $V$ even with non-vanishing oscillating current density $\mathbf{J}$.
Light is emitted by oscillating currents and charges, but not all oscillating currents and charges have to emit light. | {
"domain": "physics.stackexchange",
"id": 85534,
"tags": "quantum-mechanics, optics, electromagnetic-radiation, interference"
} |
turtlebot_teleop: control moving around keys | Question:
Hi everyone,
I am going through the tutorials for the turtlebot simulation on Gazebo. I can make the robot move with the command.
I see the on-screen instructions:
Control Your Turtlebot!
---------------------------
Moving around:
u i o
j k l
m , .
q/z : increase/decrease max speeds by 10%
w/x : increase/decrease only linear speed by 10%
e/c : increase/decrease only angular speed by 10%
space key, k : force stop
anything else : stop smoothly
CTRL-C to quit
I have movements:
u/o, m/. turn around a point,
j/l turn in place,
i/, move forward, backward.
What is the difference between u/o and m/.? How can I change these movements? For example, add move left, move right.
Thanks
Originally posted by nampi on ROS Answers with karma: 77 on 2015-10-26
Post score: 1
Answer:
You can see the bindings and change them as you wish in turtlebot_teleop/scripts/turtlebot_teleop_key (see https://github.com/turtlebot/turtlebot/blob/indigo/turtlebot_teleop/scripts/turtlebot_teleop_key#L53)
For example, in the binding 'm':(-1,-1), the first element is the x velocity and the second one is the angular velocity.
For your question,
'o':(1,-1),
'u':(1,1),
u makes the TurtleBot move forward while rotating counter-clockwise, while o makes it move forward while rotating clockwise. The same applies to m and . and the others.
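In essence (a hypothetical sketch in the spirit of that script, not its actual code; the real bindings live in the linked turtlebot_teleop_key file), the movement keys are just a dictionary mapping each key to (linear, angular) multipliers:

```python
# Hypothetical key-binding table: each key maps to
# (linear_x multiplier, angular_z multiplier).
move_bindings = {
    'i': (1, 0),    # forward
    ',': (-1, 0),   # backward
    'j': (0, 1),    # rotate counter-clockwise in place
    'l': (0, -1),   # rotate clockwise in place
    'u': (1, 1),    # arc forward, turning counter-clockwise
    'o': (1, -1),   # arc forward, turning clockwise
    'm': (-1, -1),  # arc backward one way
    '.': (-1, 1),   # arc backward the other way
}

def twist_for(key, speed=0.2, turn=1.0):
    """Return the (linear, angular) command a key would produce.

    Unknown keys fall back to (0, 0), i.e. stop. The helper name and
    defaults are illustrative only.
    """
    x, th = move_bindings.get(key, (0, 0))
    return (x * speed, th * turn)
```

Note that a true "move left/right" would need a third (sideways) velocity component, which a differential-drive base cannot execute anyway, as pointed out in the comments below.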
Originally posted by Akif with karma: 3561 on 2015-10-27
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by tfoote on 2015-10-27:
Note that although you can update the commanded directions to include moving sideways, the wheels are rigidly mounted on the bottom of the vehicle in what's called a differential drive, which means it cannot actually move sideways.
Comment by Akif on 2015-10-27:
Yes, Tully is right for sure. @nampi, you need to change URDF and/or drive plugin to make it a holonomic base. | {
"domain": "robotics.stackexchange",
"id": 22841,
"tags": "ros, gazebo, turtlebot, keyboard-teleop"
} |
Can numerical discrete finite data be always treated also as categorical? | Question: In many sources, for example here data is classified as being qualitative (categorical) and quantitative (numerical). Where numerical data can be continuous or discrete, and discrete can be finite or infinite.
I want to establish whether numerical, discrete and finite data can also be treated as categorical data.
I know that it depends on 'the meaning' of the data and requires some common sense analysis but I want to establish if the following statement is always true:
"Numerical, discrete and finite data can also be a categorical data"
In the classification of data, numerical data is said to have 'mathematical meaning as a measure of something'. But 'technically', without assessing the meaning of the data, does that also make it capable of being categorical data (ordinal or not), if we strip it of its mathematical meaning?
Example can be a following array of items:
Energy
15
15
20
25
25
Every observation has an 'Energy' characteristic. It can be treated as a discrete, finite numerical value measuring the energy an item has. But it can also be treated as a category: two items are in category 15, one in category 20 and two in category 25.
Thanks for confirming this.
Answer: I would separate value with representation in this case.
Energy, as you mentioned, holds a very continuous value in the real world. However, we may choose (for various reasons) to represent this value in different forms.
We can take values as they are (15.21252, 23.76535), we can round them into integers (15, 24), we can even decide to represent this data by clusters ("UNDER 20", "OVER 20").
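As a small illustration of switching between these representations (the helper and the cut-off are mine, purely for demonstration):

```python
from collections import Counter

# The 'Energy' column from the question, taken as plain numbers.
energies = [15, 15, 20, 25, 25]

# Treat the same values as categories: count members per category.
as_categories = Counter(energies)

def cluster(e):
    # Arbitrary cut-off chosen only to illustrate a coarser representation.
    return "UNDER 20" if e < 20 else "OVER 20"

# Re-represent the values as two ordinal clusters.
as_clusters = Counter(cluster(e) for e in energies)
```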
Technically, all data can be represented as categorical data, we need to consider what value does this data represent, and what are we losing/gaining from using different representations. | {
"domain": "datascience.stackexchange",
"id": 6885,
"tags": "data-mining, statistics, data"
} |
Terminal velocity of a raindrop | Question:
I have solved the question and got the answer, but I do not understand why the raindrop will attain a terminal velocity. Without any resistive forces, why does it attain a terminal velocity?
Answer: You can actually think of the "little rain drops" that you are picking up as providing some resistance.
When you travel at velocity $v$ and have area $A$, you are "picking up" all the material in a cylinder of volume $V=vA$ per unit time. That volume of material needs to be accelerated to velocity $v$, requiring a force $F = \frac{dp}{dt} \propto \rho A v \cdot v = \rho A v^2$.
So you have $v^2$-type drag - the same as you would have if you were a drop of constant size moving through air (where drag is usually given by $F=\frac12 \rho A v^2 C_D$). The "approximation" comes from the fact that $C_D$, the drag coefficient, is a weak function of velocity.
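As a quick numerical illustration of that balance (the drag constant $b$ here is arbitrary, not derived from the problem), integrating $\frac{dv}{dt} = g - b v^2$ shows the velocity saturating at $\sqrt{g/b}$:

```python
g, b = 9.81, 0.5      # gravity (m/s^2) and an arbitrary v^2-drag constant (1/m)
v, dt = 0.0, 1e-4     # start from rest; small Euler time step

# dv/dt = g - b*v^2 : gravity versus the v^2-type accretion drag
for _ in range(200_000):       # 20 s of simulated fall
    v += (g - b * v * v) * dt

terminal = (g / b) ** 0.5      # analytic terminal velocity, sqrt(g/b)
```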
The equation you were given is actually slightly different - it says to make an assumption about the rate of growth of the drop as being proportional to the mass. That may be mathematically easier (I didn't try it) but doesn't really make sense from a physics perspective. But either way, "directionally" the two equations will get you to the same thing - as the drop goes faster, it picks up more material that needs accelerating and that results in additional force. Eventually that force will limit the velocity - work done by gravity will be proportional to $mv$ but the force experienced by the drop increases faster than that. That is why there will be a terminal velocity. | {
"domain": "physics.stackexchange",
"id": 24282,
"tags": "homework-and-exercises, newtonian-mechanics, momentum, projectile, drag"
} |
Dismissing a modal dialog | Question: I am implementing a modal dialog in my Android application, allowing the user to either pick a player from a list, or create one with predefined traits.
PlayerPickerListener
public interface PlayerPickerListener {
Activity getActivity();
void playerCreated();
}
PlayerPicker (extends Dialog)
public PlayerPicker(Context context, String title, final PlayerPickerListener listener) {
super(context);
setTitle(title);
setContentView(R.layout.player_dialog);
View.OnClickListener changeHandler = new View.OnClickListener() {
@Override
public void onClick(View v) {
// PlayersActivity is basically a ListView selecting the player
Intent intent = new Intent(listener.getActivity(), PlayersActivity.class);
intent.putExtra(REQ_CODE, SELECT_ACTION);
listener.getActivity().startActivityForResult(intent, SELECT_ACTION);
}
};
Button b = (Button) findViewById(R.id.button_change);
b.setOnClickListener(changeHandler);
View.OnClickListener newHandler = new View.OnClickListener() {
@Override
public void onClick(View v) {
EntityPlayer player = PlayerManager.createNewPlayer();
listener.playerCreated();
}
};
b = (Button) findViewById(R.id.button_new);
b.setOnClickListener(newHandler);
}
MainActivity (implements PlayerPickerListener)
public Activity getActivity() { return this; }
public void displayPlayer(View view) {
myPlayerPicker = new PlayerPicker(this, "Pick a player", this);
myPlayerPicker.show();
}
public void playerCreated() {
myPlayerPicker.dismiss();
// some extra stuff
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
if (requestCode == PlayersActivity.SELECT_ACTION && resultCode == RESULT_OK && data != null) {
myPlayerPicker.dismiss();
// some extra stuff
}
super.onActivityResult(requestCode, resultCode, data);
}
I have the following issues with my approach:
I am led to add the getActivity() method in my listener, because I have no other way of starting an intent from the Dialog. I need an intent to go to my ListView and return with a result.
Because of the above behavior, I am calling dismiss() on my Dialog from inside my Activity. For the new player case, I could easily add the dismissal of the Dialog inside the OnClickListener, but I am leaving it on the Activity just to have a type of uniform behavior.
My main question is whether it is common practice to dismiss a Dialog in its own code, or externally.
I'd argue that it would not seem natural for a dialog to be shut down from another source. A Dialog should disappear when its objective is complete. Is this bad design?
Answer: The code in general looks good, but here are a few observations:
Activity
You can access the activity from the context as well. In your case, the context is actually the activity context. Therefore, you can cast to activity so that you can start an intent.
Dialog
You can dismiss the dialog from wherever you want, depending on the needs. Dismissing from inside is ok if, for example, you want to do something simple: return a callback to the caller and dismiss it immediately, with no complex operations involved (such as rotation).
If you want to handle rotations and other complicated operations, such as restoring state from memory, then it might make sense to dismiss it externally, from the parent activity that started the dialog.
"domain": "codereview.stackexchange",
"id": 16064,
"tags": "java, android"
} |
Plotting covariance from amcl_pose in Matlab | Question:
I have recorded a bag-file and am trying to plot the /amcl_pose/pose/covariance matrix which has a message type geometry_msgs/PoseWithCovarianceStamped. I followed this tutorial on plotting data from topics, however, when I try:
bagselect = select(bag, 'Topic', '/amcl_pose');
msgs = readMessages(bagselect);
ts = timeseries(bagselect, 'Pose.Covariance');
I Get the error:
The Pose.covariance property does not exist for message type geometry_msgs/PoseWithCovarianceStamped.
What am I doing wrong?
Originally posted by fendrbud on ROS Answers with karma: 48 on 2020-03-05
Post score: 0
Answer:
I found out I have to read the messages as structures as in this example.
bSel = select(bag, 'Topic', '/amcl_pose');
msgStructs = readMessages(bSel, 'DataFormat', 'struct');
I am then able to read amcl_pose as a struct and can then extract the first covariance matrix by
msgStructs{1}.Pose.Covariance
I now have to iterate through all the matrices and concatenate them in order to plot their values.
Originally posted by fendrbud with karma: 48 on 2020-03-05
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2020-03-05:
I've converted your comment to an answer as it would appear this is the answer.
To link it to my comment about the covariance field not being at the top-level of the message (but at pose.pose.covariance: it would appear msgStructs{1} contains the top-level message contents. So then pose.covariance does exist there. | {
"domain": "robotics.stackexchange",
"id": 34545,
"tags": "matlab, navigation, ros-kinetic, amcl-pose, amcl"
} |
Controlling permissions in a MVP application | Question: In a Windows forms payroll application employing MVP pattern (for a small scale client) I'm planing user permission handling as follows (permission based) as basically its implementation should be less complicated and straightforward.
NOTE: System could be simultaneously used by few users (maximum 3) and the database is at the server side.
This is my UserModel. Each user has a list of permissions given for them.
class User
{
string UserID { get; set; }
string Name { get; set; }
string NIC {get;set;}
string Designation { get; set; }
string PassWord { get; set; }
List <string> PermissionList = new List<string>();
bool status { get; set; }
DateTime EnteredDate { get; set; }
}
When user login to the system it will keep the current user in memory.
For example in BankAccountDetailEntering view, I can control the permission to access a command button as follows.
public partial class BankAccountDetailEntering : Form
{
bool AccountEditable {get; set;}
public BankAccountDetailEntering ()
{
InitializeComponent();
}
private void BankAccountDetailEntering_Load(object sender, EventArgs e)
{
cmdEditAccount.enabled = false;
OnLoadForm (sender, e); // Event fires...
If (AccountEditable )
{
cmdEditAccount.enabled=true;
}
}
}
In this purpose all relevant presenters (like BankAccountDetailPresenter) should aware of UserModel as well in addition to the corresponding business Model it is presenting to the View.
class BankAccountDetailPresenter
{
BankAccountDetailEntering _View;
BankAccount _Model;
User _UserModel;
DataService _DataService;
BankAccountDetailPresenter( BankAccountDetailEntering view, BankAccount model, User userModel, DataService dataService )
{
_View=view;
_Model = model;
_UserModel = userModel;
_DataService = dataService;
WireUpEvents();
}
private void WireUpEvents()
{
_View.OnLoadForm += new EventHandler(_View_OnLoadForm);
}
private void _View_OnLoadForm(Object sender, EventArgs e)
{
foreach(string s in _UserModel.PermissionList)
{
If( s =="CanEditAccount")
{
_View.AccountEditable =true;
return;
}
}
}
public Show()
{
_View.ShowDialog();
}
}
So I'm handling the user permissions in the presenter iterating through the list. Should this be performed in the Presenter or View? Any other more promising ways to do this? I'd appreciate it if you could review the code and provide your feedback.
Answer:
Full disclosure: I haven't done much with windows forms applications, so I may be breaking some otherwise well known winforms programming patterns.
Permissions are better handled by a service, so that you can more easily unit test that layer, plus the permissions logic becomes portable between presenters to promote DRYness of your code (Don't Repeat Yourself).
public class UserPermissionsService
{
private User _user = null;
public UserPermissionsService(User user)
{
_user = user;
}
public bool CanEditAccount { get { return _user.PermissionList.Contains("CanEditAccount"); } }
}
Then to use it:
public partial class BankAccountDetailEntering : Form
{
UserPermissionsService PermissionsService { get; set; }
public BankAccountDetailEntering()
{
InitializeComponent();
}
private void BankAccountDetailEntering_Load(object sender, EventArgs e)
{
cmdEditAccount.enabled = PermissionsService.CanEditAccount;
}
}
The BankAccountDetailEntering form would need to instantiate the user permissions service with the proper user object before the BankAccountDetailEntering_Load method gets called. | {
"domain": "codereview.stackexchange",
"id": 7935,
"tags": "c#, .net, security, winforms, mvp"
} |
Show that Vertex-Cover is NP-complete, using Stable-Set | Question: My task is to give proof, the Vertex-Cover problem is NP-complete, assuming it's already shown that the Stable-Set problem is NP-complete, too.
My approach: i know, Stable-Set is NP-complete, and all Problems that are NP-complete can be reduced to each other. If i could solve one NP-complete problem, i might be able to solve all NP-complete problems. It should be possible to create a function with polynomial complexity to reduce Vertex-Cover to Stable-Set. At least, this was, what my Professor told.
Now all i have to do, is to find this polynomial function, in order to Show that Vertex-Cover is NP-complete. But here is where i am stuck.. so i need some advice how to build such functions.
Answer: You have it backwards- you have to reduce a known NP-complete problem to the thing that you want to show is NP-hard. If you were to reduce Vertex-Cover to Stable-Set, you would be saying that Stable-Set is at least as hard as Vertex-Cover... but Vertex-Cover could still theoretically be polynomial.
Since this seems like a homework problem, I don't want to actually give the solution here. What you need to do is find a way of transforming the Stable-Set problem into the Vertex-Cover problem. That way, any solution to Vertex-Cover would be a solution to Stable-Set, so VC is at least as hard as SS. After that, you need to show that VC is in NP (note that you need to work with the decision versions of VC and SS, which ask whether there exists a VC/SS of size k) by showing that, given a witness, you can check whether it's correct or not.
Solutions to both problems are sets of nodes in a graph. If there's a VC of size k, that means that all edges have at least one node touching them. If there's a SS of size k, that means no edge has two nodes touching them. I'd suggest drawing out some graphs, finding the VC and SS of both of them, and seeing if there's a relationship between the two sets, as well as the size of each set and the number of nodes in the graph. While I haven't worked out a proof, I don't think you need to do a transformation on the graph itself (like drawing in new nodes/edges) in this case. However, for some problems, you might need to.
Here is a handout on NP-completeness reductions, which might help you. I suggest reading some reductions to get a handle on it. | {
"domain": "cs.stackexchange",
"id": 1400,
"tags": "complexity-theory, graphs"
} |
The meaning of velocities, accelerations and time_from_start in JointTrajectoryPoint.msg | Question:
I've followed this tutorial
When I tried to fill in the action goal of the type JointTrajectoryActionGoal, I didn't know what value shoud goal.goal.trajectory.points[i].velocities[i] and goal.goal.trajectory.points[i].time_from_start be, which belonged to trajectory_msgs/JointTrajectoryPoint.msg
I've tried several values and felt that time_from_start seemed to be related to the total execution time, but I still got no idea about velocities, which tended to make the trajectory weird.
I would like to know the meaning behind these parameters and how to set them up properly. Thanks~
Originally posted by Albert K on ROS Answers with karma: 301 on 2012-12-16
Post score: 3
Answer:
Yes, the semantics of the joint trajectory messages are a little under-specified. Some pointers:
time_from_start is relative to trajectory.header.stamp; each trajectory point's time_from_start must be greater than the last
the velocities specify the joint velocities you would like the joints to have at that trajectory point
the velocities should be all 0 for the first and last trajectory point
you shouldn't execute such a "hand-crafted" trajectory directly on the robot; instead, run it through a properly set-up trajectory_filter pipeline first to make sure it's physically possible to reach the desired joint trajectories at the given time points
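As a library-free illustration of those pointers (plain Python dicts standing in for trajectory_msgs/JointTrajectoryPoint; the field names mirror the real message, everything else is made up, and the zero-velocity endpoint check encodes the rule of thumb above rather than a hard requirement):

```python
# Plain-dict stand-ins for trajectory_msgs/JointTrajectoryPoint.
points = [
    {"positions": [0.0, 0.0], "velocities": [0.0, 0.0], "time_from_start": 0.0},
    {"positions": [0.3, 0.1], "velocities": [0.4, 0.2], "time_from_start": 0.5},
    {"positions": [0.6, 0.2], "velocities": [0.0, 0.0], "time_from_start": 1.2},
]

def trajectory_ok(points):
    """Check the constraints described in the pointers above."""
    times = [p["time_from_start"] for p in points]
    # each point's time_from_start must be greater than the last
    monotonic = all(t1 > t0 for t0, t1 in zip(times, times[1:]))
    # velocities should be all zero at the first and last point
    endpoints_at_rest = (all(v == 0 for v in points[0]["velocities"]) and
                         all(v == 0 for v in points[-1]["velocities"]))
    return monotonic and endpoints_at_rest
```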
I think an example would be nice, so this is a trajectory generated and executed on the Katana robot arm. (Actually, this is the new FollowJointTrajectoryAction and not the old JointTrajectoryAction format, but they are almost identical).
I just noticed while writing this answer that this trajectory might not be an ideal example. The accelerations are all 0, which doesn't make sense; they should either be empty (saying that there are no desired accelerations, and the controller is free to choose) or be filled to sensible values. Seems to be a bug in my trajectory filtering pipeline. Anyway, here you go:
header:
seq: 0
stamp:
secs: 1355824277
nsecs: 243513013
frame_id: ''
goal_id:
stamp:
secs: 1355824277
nsecs: 243517830
id: /kurtana_move_arm-1-1355824277.243517830
goal:
trajectory:
header:
seq: 0
stamp:
secs: 1355824277
nsecs: 443477691
frame_id: ''
joint_names: ['katana_motor1_pan_joint', 'katana_motor2_lift_joint', 'katana_motor3_lift_joint', 'katana_motor4_lift_joint', 'katana_motor5_wrist_roll_joint']
points:
-
positions: [-2.9641690268167444, 2.13549384276445, -2.1556486321117725, -1.9719493470579683, -2.9318804356548496]
velocities: [0.0, 0.0, 0.0, 0.0, 0.0]
accelerations: [0.0, 0.0, 0.0, 0.0, 0.0]
time_from_start:
secs: 0
nsecs: 0
-
positions: [-2.9251733424342388, 2.13549384276445, -2.1556486321117725, -1.9719493470579683, -2.8933095299198217]
velocities: [0.29930315443351097, 0.0, 0.0, -6.139114030773157e-19, 0.2960428554763487]
accelerations: [0.0, 0.0, 0.0, 0.0, 0.0]
time_from_start:
secs: 0
nsecs: 228646348
-
positions: [-2.8849641701621285, 2.13549384276445, -2.1556486321117725, -1.9719493470579683, -2.853538354776994]
velocities: [0.44598050148009644, 0.0, 0.0, 1.823853811086392e-18, 0.4411224512312072]
accelerations: [0.0, 0.0, 0.0, 0.0, 0.0]
time_from_start:
secs: 0
nsecs: 339639817
-
positions: [-2.52736640929761, 2.13549384276445, -2.1556486321117725, -1.9719493470579683, -2.4998358940105807]
velocities: [1.0028722191625212, 0.0, 0.0, -1.6240875412254114e-17, 0.9919479666049925]
accelerations: [0.0, 0.0, 0.0, 0.0, 0.0]
time_from_start:
secs: 0
nsecs: 800369305
-
positions: [-2.2959492044755803, 2.13549384276445, -2.1556486321117725, -1.9719493470579683, -2.2709395088336946]
velocities: [1.0910884747158898, 0.0, 0.0, 4.7121007739083667e-17, 1.0792032855236304]
accelerations: [0.0, 0.0, 0.0, 0.0, 0.0]
time_from_start:
secs: 1
nsecs: 20369305
-
positions: [-2.1094083544721736, 2.13549384276445, -2.1556486321117725, -1.9719493470579683, -2.0864306418873473]
velocities: [1.0870331216234024, 0.0, 0.0, -1.5451562371091847e-16, 1.0751921072527668]
accelerations: [0.0, 0.0, 0.0, 0.0, 0.0]
time_from_start:
secs: 1
nsecs: 190369305
-
positions: [-1.9654064488044534, 2.13549384276445, -2.1556486321117725, -1.9719493470579683, -1.9439973440205225]
velocities: [1.0735605072416647, 0.0, 0.0, 5.153091454187644e-16, 1.0618662495956706]
accelerations: [0.0, 0.0, 0.0, 0.0, 0.0]
time_from_start:
secs: 1
nsecs: 324259613
-
positions: [-1.7266965336015323, 2.13549384276445, -2.1556486321117725, -1.971949347057968, -1.7078876877058584]
velocities: [1.0955434012456187, 0.0, 0.0, 5.577067357268163e-16, 1.0836096846920393]
accelerations: [0.0, 0.0, 0.0, 0.0, 0.0]
time_from_start:
secs: 1
nsecs: 544259614
-
positions: [-1.5618053834346186, 2.13549384276445, -2.1556486321117725, -1.971949347057968, -1.5447926911610177]
velocities: [1.1002506664107818, 0.0, 0.0, -1.6280551286439496e-16, 1.0882656737798182]
accelerations: [0.0, 0.0, 0.0, 0.0, 0.0]
time_from_start:
secs: 1
nsecs: 694259614
-
positions: [-1.352801108899281, 2.13549384276445, -2.1556486321117725, -1.971949347057968, -1.3380650929928186]
velocities: [1.0995760502885568, 0.0, 0.0, 3.1423072274376064e-17, 1.087598406227786]
accelerations: [0.0, 0.0, 0.0, 0.0, 0.0]
time_from_start:
secs: 1
nsecs: 884388906
-
positions: [-1.2428109699673853, 2.13549384276445, -2.1556486321117725, -1.971949347057968, -1.229273072857686]
velocities: [1.099721341194797, 0.0, 0.0, -1.0271726233852743e-17, 1.0877421144851813]
accelerations: [0.0, 0.0, 0.0, 0.0, 0.0]
time_from_start:
secs: 1
nsecs: 984388907
-
positions: [-1.0668126798340363, 2.13549384276445, -2.1556486321117725, -1.971949347057968, -1.0551919260396805]
velocities: [1.101621901250425, 0.0, 0.0, 3.1360609511064693e-18, 1.0896219718055542]
accelerations: [0.0, 0.0, 0.0, 0.0, 0.0]
time_from_start:
secs: 2
nsecs: 144388907
-
positions: [-0.8186363391284005, 2.13549384276445, -2.1556486321117725, -1.971949347057968, -0.8097189616694035]
velocities: [1.0917942273591639, 0.0, 0.0, -6.310525555784859e-19, 1.079901350427656]
accelerations: [0.0, 0.0, 0.0, 0.0, 0.0]
time_from_start:
secs: 2
nsecs: 370054003
-
positions: [-0.6872969356042933, 2.13549384276445, -2.1556486321117725, -1.971949347057968, -0.6798102337462729]
velocities: [1.1073704870325298, 0.0, 0.0, 2.6560761706557022e-19, 1.0953079384406423]
accelerations: [0.0, 0.0, 0.0, 0.0, 0.0]
time_from_start:
secs: 2
nsecs: 490054004
-
positions: [-0.10077700694030943, 2.13549384276445, -2.1556486321117725, -1.971949347057968, -0.09967924647315002]
velocities: [0.6703568712225323, 0.0, 0.0, -4.289222110264426e-20, 0.663054696902626]
accelerations: [0.0, 0.0, 0.0, 0.0, 0.0]
time_from_start:
secs: 3
nsecs: 76996454
-
positions: [-0.00012129289047690934, 2.13549384276445, -2.1556486321117725, -1.971949347057968, -0.00011997165119668348]
velocities: [0.0, 0.0, 0.0, 0.0, 0.0]
accelerations: [0.0, 0.0, 0.0, 0.0, 0.0]
time_from_start:
secs: 3
nsecs: 356996686
path_tolerance: []
goal_tolerance: []
goal_time_tolerance:
secs: 0
nsecs: 0
Originally posted by Martin Günther with karma: 11816 on 2012-12-17
This answer was ACCEPTED on the original site
Post score: 7
Original comments
Comment by Adolfo Rodriguez T on 2012-12-18:
Trajectory endpoints don't need to have zero vel/acc. This is especially so for the first point: if it has time_from_start>0, a spline will be traced from the current state (can also have nonzero vel/acc) in time_from_start sec. We use this a lot for online trajectory substitution.
Comment by Albert K on 2012-12-18:
Thank you very much~ I have two questions. Is time from start equals to the execution time? Can I deduce that accelerations term equals to the acceleration at that trajectory point?
Comment by Albert K on 2012-12-18:
Actually, I've tried trajectory filter and get problems :
http://answers.ros.org/question/50611/trajectory-filter-response-with-nothing/
Comment by Albert K on 2012-12-18:
Why is that some of the time_from_start are the same as (not greater than) the last points in the example?
Comment by Adolfo Rodriguez T on 2012-12-18:
Trajectories with unspecified accelerations use cubic spline interpolation, hence have no acceleration continuity. Trajectories specifying accelerations use quintic splines and guarantee acceleration continuity. Setting them to zero at waypoints is a perfectly valid choice (and used in practice).
Comment by Adolfo Rodriguez T on 2012-12-18:
Total desired trajectory duration is the time_from_start of the last trajectory point. In the posted example all waypoint times are monotonically increasing, as expected. Note that durations consist of two fields: secs and nsecs, and although the secs field may be equal for two points, nsec is not.
Comment by Martin Günther on 2012-12-18:
Thanks for your helpful comments, Adolfo! I forgot about trajectory substitution and acceleration continuity; the Katana arm is more limited than the PR2's arms, so it cannot do either of them.
Comment by varunagrawal on 2016-02-01:
Hi, I am using FollowJointTrajectoryGoal to define trajectories for the UR5. Would you recommend FollowJointTrajectoryAction for it as well? I am using the ros-industrial/universal_robot package.
Comment by varunagrawal on 2016-02-01:
Also, I can't seem to make sense of the values of the positions and velocities. What are their units? Metres and m/sec? Also, how do I specify the velocity with which the UR should move? When I run the program, the UR just goes full speed to the goal point.
Comment by Martin Günther on 2016-02-05:
All units in ROS are SI units (m, m/s, m/s^2, ...). | {
"domain": "robotics.stackexchange",
"id": 12128,
"tags": "ros, arm-navigation, joint-trajectory-action, pr2-controllers"
} |
Coupling of silyl imine and aldehyde | Question:
LDA is a strong base and probably deprotonates somewhere, though I'm not sure where. If I were to make a guess, it would probably be beta to the imine. This is followed by nucleophilic attack of the carbanion onto the second reagent, Pr-CHO, to generate an alcohol. But I'm not sure of the purpose of the 3rd reagent (oxalic acid).
Answer: This is a Peterson olefination, specifically this one is from the synthesis of Roseophilin published in J. Am. Chem. Soc. (2001) 123 8509. The LDA deprotonates next to the silicon as both it and the adjacent imine stabilise the anion.
The Wikipedia page gives a good summary of the mechanism and why an acidic work up is used in this case to give the required double bond geometry (Wikipedia).
(Image from www.chem.wisc.edu) | {
"domain": "chemistry.stackexchange",
"id": 12184,
"tags": "organic-chemistry, reaction-mechanism"
} |
Blocking quantum effects | Question: A Faraday cage blocks electromagnetic fields, provided they are not intense enough to change the state to something non-conductive (e.g. slicing it in two with a laser).
Is there any analogous system that blocks non-local quantum effects, including tunneling?
I have only a layperson's knowledge of quantum mechanics. If my question makes sense as stated, I'm hoping for an answer. If not, I'm hoping for a reframing that does. Such a barrier is intended to set up a plot element in a story I'm writing.
Answer: There are cases where tunneling is suppressed. For example, if you have a square potential barrier and send a particle towards it, the transmission coefficient (which measures the tunneling rate) oscillates as a function of energy. That is, for some energies the tunneling rate is lower than one might expect.
For low energies the particle bounces back with high probability, but there is some tunneling rate. As the energy becomes larger than the barrier, there is enhanced reflection and less tunneling at some energy levels. However, the tunneling rate never goes down to zero in this case (although there are energies where the reflection is zero).
By analogy to optical high-reflection coatings I suspect a more clever arrangement of potential steps would boost the reflection a lot, but only at some energies.
The reason the Faraday cage works is that the conductive metal responds near-instantly to changes in potential outside, producing currents that counteract induced currents and equalizing the charge distribution to stay isopotential. The problem for a "quantum Faraday cage" is that it needs to be expressed in terms of the wave function of the impinging particle: it is not a separate thing. In some sense this is always true (there is just one electron field, and all electrons and positrons are excitations of that field), but for calculating and designing it is usually far more convenient to deal with things that are modular. In the above example the potential step is just assumed to be there, and I calculate the effects on a particle. Doing it for a system including the step as a quantum object would be very, very hard.
So my suspicion is that quantum Faraday cages are to some extent possible (for particular energy levels, for particular types of particles and cages) but they are not as general as electric Faraday cages and tricky to design.
Addendum: I implemented the transfer matrix method to allow multiple barriers. Here is the effect of 8 barriers, one unit wide and separated by 1/3 units.
The result is that tunnelling is unsurprisingly suppressed for low energies - a wave packet needs to tunnel 8 times to get across. But more interesting, even for energies above the barrier height (that classically would just sweep past) there are some fairly wide ranges like $E\approx 1.25$ where transmission is strongly suppressed. One could say this a partial quantum Faraday cage.
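The multi-barrier computation sketched above can be reproduced with the standard interface/propagation transfer-matrix construction. Below is a minimal sketch in units where $\hbar^2/2m = 1$; the barrier height of 1 is an assumption (the answer only specifies widths of 1 and gaps of 1/3), and for a single barrier the construction reduces to the textbook result $T = [1 + V^2\sinh^2(\kappa w)/4E(V-E)]^{-1}$.

```python
import cmath

def mat_mul(a, b):
    # 2x2 complex matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transmission(E, regions):
    """Transmission probability through piecewise-constant potentials.
    regions: list of (V, width) slabs between two V = 0 leads."""
    k_prev = cmath.sqrt(complex(E))
    M = [[1, 0], [0, 1]]
    for V, w in regions:
        k = cmath.sqrt(complex(E - V))   # imaginary inside a barrier when E < V
        interface = [[(k + k_prev) / (2 * k), (k - k_prev) / (2 * k)],
                     [(k - k_prev) / (2 * k), (k + k_prev) / (2 * k)]]
        prop = [[cmath.exp(1j * k * w), 0], [0, cmath.exp(-1j * k * w)]]
        M = mat_mul(prop, mat_mul(interface, M))
        k_prev = k
    k_out = cmath.sqrt(complex(E))       # back into the V = 0 lead
    interface = [[(k_out + k_prev) / (2 * k_out), (k_out - k_prev) / (2 * k_out)],
                 [(k_out - k_prev) / (2 * k_out), (k_out + k_prev) / (2 * k_out)]]
    M = mat_mul(interface, M)
    # with identical leads det(M) = 1, so t = 1/M22 and T = |t|^2
    return abs(1 / M[1][1]) ** 2

# 8 barriers of (assumed) height 1, width 1, separated by 1/3 (gaps are V = 0 slabs)
barriers = []
for i in range(8):
    barriers.append((1.0, 1.0))
    if i < 7:
        barriers.append((0.0, 1.0 / 3.0))
```

Sweeping `transmission(E, barriers)` over E then reproduces the qualitative picture: strong suppression at low E, plus suppressed windows above the barrier height.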
One neat result from transfer matrices is that they can be expressed in terms of reflection $r$ and transmission $t$ amplitudes like $$M=\left (
\begin{matrix}
1/t & r^*/t^* \\
r/t & 1/t^* \\
\end{matrix}
\right ).$$
This immediately shows that if we want $t=0$ we end up with a matrix with infinite coefficients. We can easily get the $r=0$ case, but not the opposite one. So at least for this 1D case I think this proves that perfect quantum Faraday cages are not possible. | {
"domain": "physics.stackexchange",
"id": 68287,
"tags": "quantum-mechanics, electromagnetic-radiation, quantum-tunneling, non-locality, casimir-effect"
} |
If a CMB photon traveled for 13.7 billion years to reach me, how far away was the source of that CMB photon when it first emitted it? | Question: If a CMB photon traveled for 13.7 billion years (minus 374,000 years) to reach me, how far away was the source of that CMB photon when it first emitted it?
My attempt to solve this question was to use the following assumptions:
Temperature of CMB photon today is 2.725 K (will use value of 3 K here)
Temperature of CMB photon when it was first emitted is 3000 K
A factor of x1000 in temperature decrease results in a factor of x1000 in wavelength increase. (According to Wien's displacement law)
Does this mean that the source of the CMB photon that just reached me today, was actually 13.7 billion light years / 1000 = 13.7 million light years away from me when it first emitted the photon?
Answer: The comoving distance traveled by light in vacuum between cosmological times $t_i$ and $t_f$ is $\displaystyle \int_{t_i}^{t_f} \frac{c\,dt}{a(t)}$. The metric distance at cosmological time $t$ is $a(t)$ times the comoving distance. The distance you're looking for is therefore $\displaystyle \int_{t_i}^{t_f} \frac{a(t_i)}{a(t)} c\,dt$.
This value depends on $a(t)$ over the whole time interval from $t_i$ to $t_f$, not just at the endpoints. The redshift factor is equal to $\displaystyle\frac{a(t_f)}{a(t_i)}$, but you can't just divide $cΔt$ by that factor to get the distance.
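Both integrals are easy to evaluate numerically in redshift form, $D_C = (c/H_0)\int_0^z dz'/E(z')$. The sketch below assumes a flat ΛCDM model with representative parameters ($\Omega_m \approx 0.31$, $\Omega_\Lambda \approx 0.69$, a small radiation term, $H_0 \approx 67.7$ km/s/Mpc) — illustrative values, not necessarily the ones behind the quoted numbers.

```python
import math

# assumed flat-LambdaCDM parameters (illustrative, not from the answer)
H0 = 67.7            # km/s/Mpc
c = 299792.458       # km/s
Om, Ol, Orad = 0.31, 0.69, 9.2e-5

def E(z):
    # dimensionless Hubble rate H(z)/H0
    return math.sqrt(Orad * (1 + z) ** 4 + Om * (1 + z) ** 3 + Ol)

def comoving_distance(z, steps=100000):
    # trapezoidal rule for (c/H0) * integral_0^z dz'/E(z'), in Mpc
    h = z / steps
    s = 0.5 * (1.0 / E(0) + 1.0 / E(z))
    for i in range(1, steps):
        s += 1.0 / E(i * h)
    return (c / H0) * s * h

z_cmb = 1090
D_C = comoving_distance(z_cmb)     # comoving distance, ~14,000 Mpc (~46 Gly)
D_emit = D_C / (1 + z_cmb)         # distance at emission, ~13 Mpc (~42 Mly)
```

Dividing the comoving distance by $1+z$ implements the $a(t_i)$ factor, since $a(t_i) = 1/(1+z)$ with $a(t_0)=1$.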
You can, however, divide the usually quoted comoving distance to the CMBR (around $46\text{ Gly}$ or $14\text{ Gpc}$) by the CMBR redshift (around $1100$) to get the correct distance (around $42\text{ Mly}$ or $13\text{ Mpc}$). This is because the $46\text{ Gly}$ distance is calculated using that first integral, without multiplying by $a(t_i)$, and $a(t_f)=a(t_0)=1$ by convention. | {
"domain": "physics.stackexchange",
"id": 77215,
"tags": "general-relativity, cosmology, space-expansion, estimation, cosmic-microwave-background"
} |
Showing that null geodesics are incomplete | Question: Given a metric:
$$ds^2 = -dt^2 + t dx^2$$
for a manifold $M = \mathbb{R}^+ \times \mathbb{R}$. The geodesic equation for null geodesics is $$x = x_0 \pm \log t$$ for some constant $x_0$.
Now I want to show that these geodesics are not complete, but I'm not exactly sure how to do this. The metric reminds of the Rindler wedge so I assume the manifold is not geodescially complete but I'm not sure how to proceed here.
Answer: By definition a (metric) geodesic parametrized in terms of an affine parameter is a solution of the equation (in local coordinates)
$$\frac{d^2x^a}{d\lambda^2}+ \Gamma^a_{bc}\frac{dx^b}{d\lambda}\frac{dx^c}{d\lambda}=0\:,\tag{0}$$
where the $\Gamma$s are the standard Christoffel coefficients of the metric $g$.
A (bidifferentiable) change of parameter $\lambda'=\lambda'(\lambda)$ preserves the form above of the equation if and only if
$$\lambda' = k \lambda + h\tag{1}$$
for some $k\neq 0$, $h \in \mathbb{R}$.
The affine form of (1) is a justification of the name "affine parameter". It turns out that if the geodesic is not light-like, then the length parameter is an affine parameter in particular.
Geodesic completeness means that all geodesics are defined on the whole real axis when parametrized with their affine parameters $\lambda$ (this requirement does not depend on the specific choice of the affine parameter, in view of the affine form of (1)).
You found (there is a mistake, see my comment) an expression for null geodesics, but you do not know if the coordinate $t$ you used as the parameter of the curves belongs to the family of affine parameters.
If it is the case, then you are done since, by definition of $M$, $\lambda =t>0$. However it could happen that, for instance $t= e^\lambda$, where $\lambda \in (-\infty,\infty)$. So the issue deserves a better scrutiny.
To solve your problem you have to check whether $t$ is an affine parameter solving (0).
However, there is another, shorter procedure. It is possible to prove (by direct inspection) that the Hamilton equations of the Hamiltonian $$H := \frac{1}{2}g^{ab}p_ap_b$$ on the cotangent space are identical to (0) when rewritten as second-order equations in local coordinates on $M$. In particular, the ``time parameter'' of these Hamilton equations is actually an affine parameter $\lambda$ of the geodesics.
Since the metric you consider is very simple, and
$$H = -\frac{p_t^2}{2} + \frac{p_x^2}{2t}\:, $$
it is convenient to use this approach instead of computing the coefficients $\Gamma^{a}_{bc}$.
The Hamilton equations read
$$\frac{dt}{d\lambda} = -p_t\:, \quad \frac{dx}{d\lambda} = \frac{p_x}{t}\:, \quad \frac{dp_t}{d\lambda} = \frac{p_x^2}{2t^2}\:, \quad \frac{dp_x}{d\lambda} = 0\:.$$
The first two equations say that $p_t,p_x$ are the covariant components of the tangent vector to the considered geodesic: $p_a = g_{ab} \frac{dx^b}{d\lambda}$.
Furthermore, since $H$ does not depend explicitly on the parameter $\lambda$, $H$ is constant along the solutions of the equations above.
Hence, for a null geodesic, since $H(\lambda =0)= \frac{1}{2}g^{ab}p_a(0)p_b(0)=0$,
$$ -\frac{p_t^2}{2} + \frac{p_x^2}{2t} = 0\:,$$
so that
$$p_t = \pm \frac{p_x}{\sqrt{t}}\:.$$
Inserting this identity in the first Hamilton equation, we have
$$\frac{dt}{d\lambda} = \mp \frac{p_x}{\sqrt{t}}$$
where $p_x$ is constant in view of the last Hamilton equation. We conclude that
$$\lambda = C \pm \frac{2}{3p_x}t^{3/2}$$
Notice that $p_x \neq 0 $ since the geodesic is light-like.
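The finite affine length can also be checked numerically: taking $p_x = 1$ and integrating $dt/d\lambda = -p_x/\sqrt{t}$ from $t=1$ with a simple forward-Euler step drives $t$ to zero at the finite affine parameter $\lambda = 2/(3p_x) = 2/3$, in agreement with the closed form above. A minimal sketch:

```python
# Integrate dt/dlam = -1/sqrt(t) (p_x = 1) starting from t = 1.
# The geodesic reaches the boundary t -> 0 of M at the finite
# affine parameter lam = 2/3.
lam, t, h = 0.0, 1.0, 1e-5
while t > 1e-3:
    t += h * (-1.0 / t ** 0.5)   # forward-Euler step
    lam += h

analytic = (2.0 / 3.0) * (1.0 - t ** 1.5)   # lam(t) from the closed form
print(lam, analytic)  # both close to 2/3
```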
This result already permits to conclude.
Since $t>0$, the parameter $\lambda$ cannot range over the whole line $(-\infty, +\infty)$ for any solution of the Hamilton equations with light-like tangent vector, i.e., for any null geodesic. | {
"domain": "physics.stackexchange",
"id": 79439,
"tags": "general-relativity, differential-geometry, metric-tensor, geodesics"
} |
Why does turbulent mixing result in a reduction of pressure and an increase in temperature? | Question: For example, if I have a horizontal orifice plate in a tube and I'm pushing flow through it, there will be significant turbulent mixing downstream of the orifice plate. E.g. https://www.youtube.com/watch?v=KbTnJwaVUcQ&feature=youtu.be
This turbulent mixing will reduce the static pressure of the flow, I believe, resulting in an increase in temperature due to conservation of energy. But I can't fully grasp this idea conceptually.
I think I struggle to conceptualize static pressure and temperature as being inversely proportional; i.e., in my head an increase in temperature means particles are moving about more, and if particles are moving about more they would exert a larger pressure in a confined box. So why does the pressure reduce in a flow when the temperature is increasing?
Answer: Piping systems that contain orifice plates are normally "open", meaning that they are practically never blocked in while full of liquid. This means that when turbulence results in fluid friction with a small amount of consequent heating, the liquid is free to expand slightly. As this expansion occurs, the liquid is flowing down its associated pipe due to a pressure gradient, meaning that the pressure is decreasing in the direction of flow. Thus, it is a normal operating condition for the liquid to heat slightly as the liquid pressure drops. | {
"domain": "physics.stackexchange",
"id": 69637,
"tags": "fluid-dynamics, pressure, temperature, kinetic-theory, turbulence"
} |
Earth orbit around the Sun and its position in the Galaxy | Question: Given the orbit of the Earth around the Sun and the position of the Sun in the Galaxy, in which season are we (the Earth) closer to the galactic center?
Answer: The radio source Sgr A* is at RA 17h46m Dec -29°, or about 6° south of the ecliptic at longitude 267°.
The Sun appears most nearly opposite that position, ecliptic longitude 87°, about three days before the June solstice.
At that time of year, the brightest part of the Milky Way is highest in the sky around local solar midnight.
However, the distance from here to Sgr A* is about 7.9 kpc or 1.6 billion au.
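A quick order-of-magnitude check (a sketch with rounded numbers, ignoring all but the geometry): the Earth's position along its orbit changes its distance to Sgr A* by at most about 2 au — the orbital diameter projected toward the galactic center — out of 1.6 billion au.

```python
import math

d_sgr_a = 1.6e9                             # au, distance to Sgr A*
# June-December baseline projected toward Sgr A* (~6 deg off the ecliptic)
delta = 2.0 * math.cos(math.radians(6))     # au
fraction = delta / d_sgr_a
print(fraction)   # ~1.2e-9: utterly negligible
```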
The Earth is only infinitesimally closer to the galactic center in June than in December. | {
"domain": "astronomy.stackexchange",
"id": 3598,
"tags": "orbit, milky-way"
} |
When do we divide by $2\pi$ while solving Nyquist problems? | Question: I slightly lost the tempo in the course for a couple of days, so I apologize for the noob question if it seems like one.
Why do we divide by $2\pi$ in Nyquist problems sometimes and sometimes not? I saw there is a frequency and a radial frequency. But I don't understand when do we divide max frequency by $2\pi$ and when do we not?
[Edit] for instance:
We have $\cos(t\pi/3)$. For what T can we sample it with a Dirac train such that we can recover the original signal afterwards?
In my lecture the max T is 3. I see the 3, but where did the rest of $2\pi/3$ go?
Answer: Well! All signals in this world are made up of sums of different rotations (sinusoidals) - different in three senses:
a. how big is the amplitude (A)
b. how fast is the rotation ($\omega$)
c. where is the starting point of the rotation (phase $\phi$)
Fourier made this very clear.
How do we measure the rapidness of the rotating signals (sinusoidals)? By their angular velocity $\omega$, which is given in radians/second. This is the correct and most appropriate unit for measuring rotations - how much angle is being covered per unit time! If a constituent signal is rotating by $200\pi$ radians in 1 second, we say $\omega = 200\pi$ rad/sec.
But we can also measure the rapidness of these rotating signals as the number of rotations per unit time. That is $f$, expressed in $Hertz$: the number of rotations per second.
How are $f$ and $\omega$ related? One complete rotation covers a full circle meaning an angle of $360^o = 2\pi \ radians$. That means if the signal is making $200$ rotations per second then it is covering an angle of $200*2\pi = 400\pi$ per second. So, the relation between $f$ and $\omega$ is basically:
$$\omega = 2\pi f$$
That is why you divide $\omega$ by $2\pi$ when you want to express the frequency in $Hz$.
The example you have given is $\cos{\frac{\pi t}{3}} = \cos{\frac{2\pi t}{6}}$. The period of this sinusoidal is $T = 6$sec, therefore, the frequency in $Hz$ will be $f = \frac{1}{6}$ and in radians/sec will be $\omega = 2\pi f = \frac{2\pi}{6}$ rad/sec.
(Think about why the period of this sinusoidal is $6sec$. Figure out that the sinusoidal will repeat after $t = 6sec$. Figure out that the sinusoidal makes 1 full rotation of angle $2\pi$radians in $6sec$.)
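The bookkeeping for this example can be checked in a few lines of Python (a sketch; variable names are illustrative). Note that the maximum sampling period it produces is exactly the T = 3 from the lecture:

```python
import math

# cos(pi*t/3) = cos(2*pi*t/6): one full rotation of 2*pi radians every 6 s
omega = math.pi / 3          # rad/s, read off the argument of cos
f = omega / (2 * math.pi)    # Hz  -> 1/6
T = 1 / f                    # s   -> 6

f_nyquist = 2 * f            # minimum sampling rate, Hz -> 1/3
T_sample = 1 / f_nyquist     # maximum sampling period, s -> 3
```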
And the sampling frequency is twice the frequency of the sinusoidal, giving:
$$f_{sampling} = \frac{2}{6} = \frac{1}{3}Hz$$
Meaning Sampling period is $\frac{1}{f_{sampling}} = 3sec$. | {
"domain": "dsp.stackexchange",
"id": 9018,
"tags": "nyquist"
} |
Maintaining Maximum Social Distance | Question: So I wrote this simple program to place a certain number of people on a list of seats whilst maintaining maximum social distance.
Given that there is at least one empty seat, calculate the maximum distance from an empty seat to the closest occupied seat and return that available seat.
How can this be improved?
// seats.cpp
#include <iostream>
#include <array>
constexpr int N = 50;
int traverse_right(const std::array<int, N> &seats, int i)
{
if (i < 0)
return 0;
else if(seats[i] == 1)
return i;
else return traverse_right(seats, --i);
}
int traverse_left(const std::array<int, N> &seats, int i)
{
if (i >= seats.size())
return 0;
else if(seats[i] == 1)
return i;
else return traverse_left(seats, ++i);
}
int get_available_seat(const std::array<int, N> &seats)
{
int max = 0;
int right_distance = 0;
int left_distance = 0;
int available_seat = -1;
for(size_t i = 0; i != seats.size(); ++i)
{
if(seats[i] == 0)
{
right_distance = abs(i - traverse_right(seats, i));
left_distance = abs(i - traverse_left(seats, i));
}
if(left_distance > max && right_distance > max)
{
if(left_distance <= right_distance && left_distance != 0)
max = left_distance;
else
max = right_distance;
available_seat = i;
}
}
std::cout << "Maximum distance to available_seat is " << max << '\n';
return available_seat;
}
void display(const std::array<int, N> &seats)
{
for(const auto x : seats)
std::cout << x << " ";
std::cout << '\n';
}
int main()
{
std::array<int, N> seats{};
for(int i = 0; i < 12; ++i)
{
int available_seat = get_available_seat(seats);
seats[available_seat] = 1;
}
display(seats);
}
The code works as expected, though I didn't implement the check: a user would need to test whether there is an available seat before placing.
Answer: Here are some things that may help you improve your program.
Be careful with signed and unsigned
The code compares an int i with seats.size(). However, seats.size() is unsigned and i is signed. For consistency, it would be better to declare i as std::size_t which is the type returned by size().
Use standard library functions
The display function isn't bad, but it is probably not really needed, either. Instead of display(seats), I'd write this:
std::copy(seats.begin(), seats.end(), std::ostream_iterator<int>(std::cout, " "));
Rethink the algorithm
There are problems with the current algorithm. It produces this result:
...X..X..X..X..X..X..X..X.....X.....X.....X......X
It's not hard to notice that people on the left are closer together than they need to be and people are the right have extra room. Hint: there is a simple mathematical way to figure out how many spaces should be between occupied seats.
Clearly state the goal
Are the chairs in a circle? If so, your one-chair-at-a-time algorithm works. If not, there's a missed opportunity: after the first person sits at one end, the maximum distance is achieved if the second person sits at the other end, and not in the middle. However, assuming it's a circle, it doesn't matter which chair is chosen first. Here's how I would approach that, using a single non-recursive function using standard library functions:
template <class ForwardIterator>
ForwardIterator find_seat(ForwardIterator begin, ForwardIterator end) {
auto left_empty{begin};
auto right_occupied{begin};
auto dist{right_occupied - left_empty};
for (auto b{begin}; b != end; ++begin) {
// find the first unoccupied chair (marker a)
begin = std::find(begin, end, 0);
// now find first occupied chair to the right, or end (marker b)
b = std::find(begin, end, 1);
// if (b-a) > dist, dist=b-a, left=a, right=b
if ((b - begin) > dist) {
dist = b - begin;
left_empty = begin;
right_occupied = b;
}
}
// return left_empty + dist/2
return left_empty + dist/2;
}
int main()
{
std::array<int, N> seats{};
for(int i = 0; i < 12; ++i)
{
auto seat = find_seat(seats.begin(), seats.end());
*seat = 1;
}
std::copy(seats.begin(), seats.end(), std::ostream_iterator<int>(std::cout, " "));
std::cout << '\n';
}
Note that in this version, only a single forward pass is made through the container and that, by using a template, we are free to use any kind of container, including std::array, std::vector, etc. It's up to the caller to assure that there is at least one empty chair. | {
"domain": "codereview.stackexchange",
"id": 40251,
"tags": "c++, algorithm"
} |
maximum coverage version of dominating set | Question: The dominating set problem is :
Given an $n$ vertex graph $G=(V,E)$, find a set $S(\subseteq V)$ such that $|N[S]|$ is exactly $n$, where $$N[S] := \{x~ | \text{ either $x$ or a neighbor of $x$ lies in $S$}\}$$
My question is if the following (new problem) has a definite name in literature, and if not what should be the most appropriate name.
New Problem: Given an $n$ vertex graph $G=(V,E)$ and an integer $k$ , find a set $S(\subseteq V)$ of size $k$ such that $|N[S]|$ is maximized.
For the second problem, some of the names I have seen in the literature are maximum-graph-coverage; partial-coverage; k-dominating-set, (however, the exact same names are also used in other contexts).
Answer: The problem in which you must select $k$ vertices to maximize the number of vertices dominated is known as the budgeted dominating set problem. The problem or its connected variant is studied at least by Lamprou, Sigalis and Zissimopoulos [1] and Khuller, Purohit and Sarpatwar [2]. It also appears in the recent survey of Narayanaswamy and Vijayaragunathan [3].
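Since the objective $|N[S]|$ is monotone and submodular, the standard greedy algorithm gives a $(1-1/e)$-approximation for this budgeted (max-coverage-style) version. A sketch, with the adjacency structure given as a dict of neighbor lists (a hypothetical input format):

```python
def greedy_budgeted_domination(adj, k):
    """Pick k vertices greedily, each time maximizing the number of
    newly dominated vertices (closed-neighborhood coverage)."""
    covered, S = set(), []
    for _ in range(k):
        best, best_gain = None, -1
        for v in adj:
            gain = len(({v} | set(adj[v])) - covered)
            if gain > best_gain:
                best, best_gain = v, gain
        S.append(best)
        covered |= {best} | set(adj[best])
    return S, covered

# star K_{1,3}: the center dominates everything with k = 1
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```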
[1] Lamprou, Ioannis, Ioannis Sigalas, and Vassilis Zissimopoulos. "Improved Budgeted Connected Domination and Budgeted Edge-Vertex Domination." arXiv preprint arXiv:1907.06576 (2019).
[2] Khuller, Samir, Manish Purohit, and Kanthi K. Sarpatwar. "Analyzing the optimal neighborhood: Algorithms for budgeted and partial connected dominating set problems." Proceedings of the twenty-fifth annual ACM-SIAM Symposium on Discrete algorithms. Society for Industrial and Applied Mathematics, 2014.
[3] Narayanaswamy, N. S., and R. Vijayaragunathan. "Parameterized Optimization in Uncertain Graphs—A Survey and Some Results." Algorithms 13.1 (2020): 3. | {
"domain": "cs.stackexchange",
"id": 15296,
"tags": "graphs, np-complete, terminology, np-hard, set-cover"
} |
Prove $E(R_n(\alpha),R_n(\theta)^n)<\epsilon/3$ from $E\big(R_n(\alpha),R_n(\alpha+\beta)\big)=|1-\exp(i\beta/2)|$ | Question:
where $E(U,V)=\max_{|\psi\rangle}||(U-V)|\psi\rangle ||=||U-V||$ is the error when $V$ is implemented instead of $U$. See page 196, Quantum Computation and Quantum Information by Nielsen and Chuang.
I have performed calculations for the exercise
$$
E\big(R_n(\alpha),R_n(\alpha+\beta)\big)=|(R_n(\alpha)-R_n(\alpha+\beta))|\phi\rangle|=|(R_n(\alpha)-(R_n(\alpha)R_n(\beta))|\phi\rangle|=\Big|R_n(\alpha)\big[1-R_n(\beta)\big]|\phi\rangle\Big|=\sqrt{\langle\phi|\big[1-R_n(\beta)\big]^\dagger R_n^\dagger(\alpha)R_n(\alpha)\big[1-R_n(\beta)\big]|\phi\rangle}\\
=\sqrt{\langle\phi|\big[1-R_n^\dagger(\beta)\big] \big[1-R_n(\beta)\big]|\phi\rangle}=\sqrt{\langle\phi|\big[1-R_n(-\beta)\big] \big[1-R_n(\beta)\big]|\phi\rangle}=\sqrt{\langle\phi|\big[1-R_n(-\beta)-R_n(\beta)+1\big]|\phi\rangle}=\sqrt{2-2\cos(\beta/2)}=|1-\exp(i\beta/2)|.
$$
Now, how do we prove that for any $\epsilon>0$ there exists an $n$ such that $E(R_n(\alpha),R_n(\theta)^n)<\epsilon/3$ from Eq. $(4.77)$?
Answer: The proof consists in connecting together two arguments. The first, covered by the exercise, reduces the problem of approximating the rotation gate $R_\hat{n}(\alpha)$ to the problem of approximating the rotation angle $\alpha$. The second, described in the quoted text from Nielsen & Chuang, shows that one can achieve arbitrarily fine approximations of all angles using a rotation by an irrational multiple of $\pi$.
Reducing gate approximation to angle approximation
From $(4.77)$ we have
$$
\lim_{\beta\to 0}E(R_\hat{n}(\alpha), R_\hat{n}(\alpha+\beta))=\lim_{\beta\to 0}|1 -\exp(i\beta/2)| = 0.
$$
In other words, for any sequence of angles $\gamma_k$ such that $\lim_{k\to\infty}\gamma_k=\alpha$, we have
$$
\lim_{k\to\infty}E(R_\hat{n}(\alpha), R_\hat{n}(\gamma_k)) = 0.
$$
This means that if we can apply a rotation around $\hat{n}$ by a angle that approximates the rotation angle $\alpha$ to arbitrary accuracy then we can approximate $R_\hat{n}(\alpha)$ to arbitrarily small error $E$.
Achieving arbitrarily fine approximations of all angles
Now, as shown in the quoted paragraph from Nielsen & Chuang, the set
$$
\Theta=\{\theta_k\,|\,\theta_k=(k\theta)\mod{2\pi}\},
$$
of angles of rotations around $\hat{n}$ attainable by $R_\hat{n}(\theta)^k$ for $k\in\mathbb{Z}$, fills up the interval $[0, 2\pi)$ in the sense that for any rotation angle $\alpha$ and any desired accuracy $\delta>0$ there exists $\tilde\theta\in\Theta$ such that $|\alpha - \tilde\theta|<\delta$. In other words, the set of attainable angles $\Theta$ contains arbitrarily fine approximations of all angles. In particular, $\Theta$ contains $\theta^*$ that approximates $\alpha$ to whatever accuracy is needed for $E(R_\hat{n}(\alpha), R_\hat{n}(\theta)^n)<\frac{\epsilon}{3}$.
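The density argument is easy to see numerically. In the sketch below, $\theta$ is chosen as an irrational multiple of $2\pi$ via the golden ratio (an illustrative choice): the best approximation of a target angle $\alpha$ by multiples $k\theta \bmod 2\pi$ improves as more powers $k$ are allowed, and by $(4.77)$ the gate error $|1-e^{i\delta/2}|$ shrinks with the angle error $\delta$.

```python
import math

TWO_PI = 2 * math.pi

def circ_dist(a, b):
    # distance between two angles on the circle
    d = abs(a - b) % TWO_PI
    return min(d, TWO_PI - d)

def best_angle_error(alpha, theta, n_max):
    """Smallest circular distance from alpha to k*theta mod 2*pi, 1 <= k <= n_max."""
    return min(circ_dist(k * theta % TWO_PI, alpha) for k in range(1, n_max + 1))

theta = TWO_PI * (math.sqrt(5) - 1) / 2     # irrational multiple of 2*pi
alpha = 1.234                               # arbitrary target rotation angle
errors = [best_angle_error(alpha, theta, n) for n in (10, 100, 1000)]
# corresponding gate errors |1 - exp(i*delta/2)| from (4.77)
gate_errors = [abs(1 - complex(math.cos(d / 2), math.sin(d / 2))) for d in errors]
```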
Connecting the arguments
The connection between the two parts is as follows. The first argument proves the implication that if we can approximate the rotation angle $\alpha$ arbitrarily well then we can approximate $R_\hat{n}(\alpha)$ to arbitrarily small error $E$. The second argument establishes the premise for that implication, namely that we can indeed approximate the rotation angle $\alpha$ arbitrarily well using repeated applications of $R_\hat{n}(\theta)$, as long as $\theta$ is an irrational multiple of $\pi$. | {
"domain": "quantumcomputing.stackexchange",
"id": 3205,
"tags": "quantum-gate, textbook-and-exercises, nielsen-and-chuang, universal-gates"
} |
Spectral Analysis vs. Spectral Line Analysis | Question: We usually talk about "spectral analysis" but some resources (this paper or this doc) talk about "spectral line analysis".
Does this make sense to you, i.e. are the 2 fields actually different or the 2 names refer to the same thing?
Answer: Spectrum analysis is more general: it involves looking at the entire spectrum of a given signal.
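As a toy illustration of the distinction (a sketch assuming a single sinusoid sitting exactly on a DFT bin): first compute the whole spectrum of a signal, then reduce it to the one number a line analysis cares about — the frequency of the dominant peak.

```python
import cmath, math

def dft_peak_bin(x):
    """Return the index of the largest-magnitude DFT bin (0 .. N/2 - 1)."""
    N = len(x)
    mags = [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(N // 2)]
    return max(range(len(mags)), key=mags.__getitem__)

fs = 64.0                                   # sampling rate, Hz (assumed)
x = [math.sin(2 * math.pi * 5.0 * n / fs) for n in range(64)]
peak_hz = dft_peak_bin(x) * fs / len(x)     # -> 5.0 Hz
```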
Spectral line analysis assumes that the spectrum contains several peaks (lines) of interest at specific frequencies. The aim then is to find the precise frequency, magnitude, and phase of those peaks (lines). | {
"domain": "dsp.stackexchange",
"id": 3163,
"tags": "frequency-spectrum, power-spectral-density"
} |
Alternatives for (not working) nested substitution arguments for roslaunch | Question:
I'm currently changing the launch file setup on a project and would like to do something like
<include file="$(find $(env ROBOT_TYPE)_config)/launch/planning_config.launch">
This doesn't appear to be possible. There is a workaround by creating one launch file per robot type and referencing that, but I was wondering if there are other options that I missed and that do not require creating one launch file per robot type. I'm hoping I'm missing some otherwise obvious simple alternative here ;)
Originally posted by Stefan Kohlbrecher on ROS Answers with karma: 24361 on 2013-09-20
Post score: 3
Answer:
I've had the same issue and never found a real solution. One thing that may be a workaround is to use xacro (http://wiki.ros.org/xacro) to generate the launch files for each robot. You can run xacro from makefiles (at least in rosbuild you could).
Originally posted by sedwards with karma: 1601 on 2013-09-22
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 15606,
"tags": "ros, roslaunch, xml, launch-file"
} |
Building a query URL in Scala | Question: I'm working on a webapp in ScalaJS and want to create a query URL for requesting some JSON.
Right now I'm using a method called urlBuilder to take the query options from an Options case class and return a String that represents a usable query URL. The urlBuilder method, despite my best efforts, has become somewhat ugly.
Am I missing out on any syntactic sugar or language features that could make this a little easier to understand/cleaner looking? Is pattern matching + string interpolation the best strategy here?
case class GameRegions(americas: Boolean, europe: Boolean, asia: Boolean)
case class Options(startRank: Option[Int], endRank: Option[Int], startDate: Option[Date], endDate: Option[Date], startTime: Option[Date], endTime: Option[Date], regions: GameRegions, gameMode: Boolean)
//Builds the URL based on the provided options object
def urlBuilder(baseUrl: String, o: Options): String = {
baseUrl + "?" + (o.startRank match {
case Some(x) => s"rank[0]=$x&"
case None => ""
}) + (o.endRank match {
case Some(x) => s"rank[1]=$x&"
case None => ""
}) + (o.startDate match {
case Some(x) => s"added[0]=@${x.getTime / 1000}&"
case None => ""
}) + (o.endDate match {
case Some(x) => s"added[1]=@${x.getTime / 1000}&"
case None => ""
}) + (o.startTime match {
case Some(x) => s"time[0]=@${x.getTime / 1000}&"
case None => ""
}) + (o.endTime match {
case Some(x) => s"time[1]=@${x.getTime / 1000}&"
case None => ""
}) + (o.regions match {
case GameRegions(false, false, false) => ""
case GameRegions(x, y, z) => {
var i: Int = -1
if (x) {
i = i + 1
s"region[$i]=Americas&"
} else {
""
} + (if (y) {
i = i + 1
s"region[$i]=Europe&"
} else {
""
}) + (if (z) {
i = i + 1
s"region[$i]=Asia&"
} else {
""
}) + "&"
}
}) + s"format[0]=${if (o.gameMode) "Standard" else "Wild"}"
}
Answer: Sometimes pattern matching over Option could be replaced with fold or map/getOrElse
o.startRank.fold("")(x => s"rank[0]=$x&")
o.startRank.map(x => s"rank[0]=$x&").getOrElse("")
You could notice that url parameters have a pattern key=value joined with &. To leverage this pattern url builder could be split into separate steps:
create a map Map[String,String] of parameters i.e.
Map(
"rank[0]" -> o.startRank.fold("")(_.toString),
...
"time[0]" -> o.startTime.map(time => time.getTime/1000).map(_.toString).getOrElse("")
)
join parameters into string
map.filter(_._2.nonEmpty).map {
case (key, value) => s"$key=$value"
}.mkString("&") | {
"domain": "codereview.stackexchange",
"id": 21260,
"tags": "strings, scala"
} |
Thermal decomposition of magnesium bicarbonate | Question: I was trying to solve a problem from a regional contest involving a mixture of the bicarbonates of $\ce{Na}$ and $\ce{Mg}$ that was heated to a high temperature. What are the decomposition reactions?
I've thought that the reactions are:
$$\ce{2NaHCO3 -> Na2CO3 + H2O + CO2}$$
and $$\ce{Mg(HCO3)2 -> MgCO3 + H2O + CO2}$$
But, I found out that the second reaction is wrong. Actually, the reaction is
$$\ce{Mg(HCO3)2 -> MgO + H2O + 2CO2}$$
I searched on the web and found that the decomposition of other bicarbonates, like $\ce{Ca(HCO3)2}$, is similar to the first reaction. Is there any rule? When is the product a carbonate and when an oxide? Or is $\ce{Mg}$ an exception?
Thanks!
Answer: I can quite easily remember from my class XI studies that carbonates of alkaline earth metals decompose on heating to give carbon dioxide and the corresponding metal oxide.
Moreover, the thermal stability of the alkaline earth metal carbonates increases with increasing cationic size, because the carbonate ion is big in size and increased cationic size leads to better bonding and hence greater stability.
In this regard, beryllium carbonate is the least stable alkaline earth carbonate — in other words, an unstable carbonate — and readily decomposes into $\ce{BeO}$ and $\ce{CO2}$. $\ce{BeCO3}$ is so unstable that it can only be kept in an atmosphere of $\ce{CO2}$. $\ce{MgCO3}$ is also somewhat unstable, taking the above fact into consideration.
So the reaction you mentioned basically gives $\ce{MgCO3}$ as the product but $\ce{MgCO3}$ decomposes readily to give $\ce{MgO}$ and $\ce{CO2}$.
$$\ce{Mg(HCO3)2 -> MgCO3 + H2O + CO2}$$
and
$$\ce{MgCO3 -> MgO + CO2}$$
giving
$$\ce{Mg(HCO3)2 -> MgO + H2O + 2CO2}$$ | {
"domain": "chemistry.stackexchange",
"id": 13221,
"tags": "inorganic-chemistry, decomposition"
} |
What is the likelihood of ever discovering the graviton? | Question: How would one look for and confirm existence of a graviton?
Someone was speaking to me about perhaps one day discovering the graviton, but to me it seems unlikely. I'm young and essentially quite naive, though, so I am coming to you physicists to ask:
What actually is the likelihood of finding it?
How would we find it?
Answer: Unfortunately, no physically reasonable detector could ever detect gravitons. For example, a detector with the mass of Jupiter placed in close orbit around a neutron star would only be expected to observe one graviton every 10 years (see the below paper). The few that would be detected would be indistinguishable from the background 'noise', i.e. neutrinos.
See here:
http://arxiv.org/abs/gr-qc/0601043
Even though we can't detect individual gravitons, gravitational wave detectors may shed some light on them, since the graviton is the quantum of the gravitational wave (similar to how early 20th century physicists studied the nature of the photon based on properties of light, such as the photoelectric effect). | {
"domain": "physics.stackexchange",
"id": 4300,
"tags": "general-relativity, gravity, theory-of-everything"
} |
Subscriptable/Indexable generator | Question: I'm not a Python developer, but I enjoy programming with it, and for a project I wanted to have generators that I can easily index. Using Python's slice model is obviously the way to go, and here's the solution I've come up with.
class _SubscriptableGenerator():
    def __init__(self, generator, *args):
        self.gen = generator(*args)

    def __getitem__(self, key):
        try:
            if isinstance(key, int):
                self.ignore(key)
                yield next(self.gen)
            else:
                step = key.step if key.step else 1
                start = key.start if key.start else 0
                i = start
                self.ignore(start)
                while i < key.stop:
                    yield next(self.gen)
                    i = i + step
                    self.ignore(step - 1)
        except Exception:
            self.raiseInvalidSlice(key)

    def raiseInvalidSlice(self, key):
        raise KeyError("{0} is not a valid key (only int and [x:y:z] slices are implemented).".format(key))

    def ignore(self, n):
        for i in range(n):
            next(self.gen)
It is not intended to be called by the user of the module; that's internal code. For example, I define my generators like so:
def _myGen(arg1, arg2):
    while 1:
        yield something
and provide them wrapped in my class
def myGen(*args):
    return _SubscriptableGenerator(_myGen, *args)
I'd like to know what a more pythonic solution would be, if there are things to fix, etc. I am not sure about the way to handle exceptions on the key.
Answer: The most Pythonic solution would be to use itertools.islice from the standard library. For example, like this:
from itertools import islice
class Sliceable(object):
    """Sliceable(iterable) is an object that wraps 'iterable' and
    generates items from 'iterable' when subscripted. For example:

    >>> from itertools import count, cycle
    >>> s = Sliceable(count())
    >>> list(s[3:10:2])
    [3, 5, 7, 9]
    >>> list(s[3:6])
    [13, 14, 15]
    >>> next(Sliceable(cycle(range(7)))[11])
    4
    >>> s['string']
    Traceback (most recent call last):
        ...
    KeyError: 'Key must be non-negative integer or slice, not string'
    """
    def __init__(self, iterable):
        self.iterable = iterable

    def __getitem__(self, key):
        if isinstance(key, int) and key >= 0:
            return islice(self.iterable, key, key + 1)
        elif isinstance(key, slice):
            return islice(self.iterable, key.start, key.stop, key.step)
        else:
            raise KeyError("Key must be non-negative integer or slice, not {}"
                           .format(key))

Note that I've given the class a better name, written a docstring, and provided some doctests. | {
"domain": "codereview.stackexchange",
"id": 4822,
"tags": "python, python-3.x, generator"
} |
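One subtlety in the answer's `Sliceable` worth spelling out (my addition, not part of the original post): an integer key returns a one-element `islice` iterator, so you still need `next()` to get the item. A variant that returns the item directly, mimicking list indexing more closely, might look like this. The class name `SliceableItem` and the `IndexError`-on-exhaustion behaviour are my own choices, not from the original answer.

```python
from itertools import islice, count

class SliceableItem(object):
    """Like the reviewed Sliceable, but an integer key consumes the
    underlying iterator up to that position and returns the item
    itself rather than a one-element iterator."""

    def __init__(self, iterable):
        self.iterable = iterable

    def __getitem__(self, key):
        if isinstance(key, int) and key >= 0:
            try:
                # next() unwraps the single-element slice; an exhausted
                # iterable is reported as IndexError, as for sequences.
                return next(islice(self.iterable, key, key + 1))
            except StopIteration:
                raise IndexError(key)
        elif isinstance(key, slice):
            return islice(self.iterable, key.start, key.stop, key.step)
        raise KeyError("Key must be non-negative integer or slice, "
                       "not {!r}".format(key))

s = SliceableItem(count())
print(s[3])          # 3 -- the item itself, no next() needed
print(list(s[0:4]))  # [4, 5, 6, 7] -- indices are relative to what's left
```

As the second print shows, both versions share the same statefulness: each subscript consumes the underlying iterator, so indices are always relative to the items not yet produced.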