anchor | positive | source |
|---|---|---|
Measurement of the speed of light from different perspectives | Question: I've been taking a special interest in Einstein's theory of relativity and how he showed the speed of light to be always the same. At first it was a bit hard for me to understand, but now I THINK I understand what this means.
So basically Einstein's theory claims that the speed of light is always the same: even if someone (say, person A) moves towards a light beam, he would measure the same speed as a person (say, person B) measuring the speed of the light beam from a rigid position. Furthermore, the theory states that time for person A runs slower with respect to person B, because person A is moving with respect to person B (the time of a moving object runs slower with respect to a rigid object).
Keeping these features in mind, I've constructed an example to try and explain why person A and B are measuring the same speed of light, even though person A is moving towards the light (you would think he'd measure a greater speed), which is as follows:
We have person A, who's moving with v=100,000 km/s straight towards a light beam with c=300,000 km/s. Then there's person B, who is standing at a rigid position in respect to the earth, so we can state that person B's velocity is 0 km/s. A "logical" measurement of the speed of the light beam for person A would be 100,000 + 300,000 = 400,000 km/s and 300,000 + 0 = 300,000 km/s for person B. This, however, is not what we observe. We observe the measurements of the speed of light of the two persons to be exactly the same (c = about 300,000 km/s). So I personally thought that the measurements of the speed of the light beams are exactly the same because time is slowing down for person A in respect to person B. So, person A WOULD measure 400,000 km/s, were it not that 1 second for him is not the same second for person B. A simple calculation would then conclude that in 1 second of person B's time, 0.75 seconds pass in person A's time (400,000 times 0.75 equals 300,000 km/s, the speed of light).
Can somebody tell me if my way of thinking is legit? Thanks in advance.
Answer: The speed of light, c, remains constant in all inertial frames of reference.
Time dilation can be briefly explained as below:
As person 'A' is moving at a speed close to 'c',
space contracts along his direction of motion in his frame. As speed = distance/time, in order to keep this ratio constant,
time slows down for him, but for person 'B' everything seems normal; of course he would measure 'c = 3x10^8 m/s'.
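A quick numeric check (a sketch in Python, using the question's numbers) shows that it is the relativistic velocity-addition formula, rather than time dilation alone, that keeps person A's measurement at c; time dilation at v = 100,000 km/s gives a factor of about 0.94, not the 0.75 guessed in the question:

```python
import math

c = 3e5  # speed of light, km/s (the question's rounded value)

def add_velocities(u, v):
    """Relativistic composition of collinear speeds: never exceeds c."""
    return (u + v) / (1 + u * v / c**2)

# Person A moving at 100,000 km/s toward the beam still measures c:
assert abs(add_velocities(1e5, c) - c) < 1e-6

# Time dilation alone would give a factor 1/gamma ~ 0.943, not 0.75:
gamma = 1 / math.sqrt(1 - (1e5 / c)**2)
assert 0.94 < 1 / gamma < 0.945
```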
The only difference is that for 'A', c remains constant by the adjustment of spacetime around him. | {
"domain": "physics.stackexchange",
"id": 23543,
"tags": "special-relativity, speed-of-light, time-dilation"
} |
52-Line Logging System | Question: I wanted to make an extremely light-weight logging system. What are your thoughts on this and what can be done to improve it?
First, the usage:
int main( )
{
logging::level = logging::fatal | logging::warning;
mini_log( logging::fatal, "entry point", "nothing", " is ", "happening!" );
}
Now, the code:
#include <ctime>
#include <iomanip>
#include <iostream>
#include <mutex>
#include <string_view>
// stringize_val expands its argument (e.g. __LINE__) before stringizing it
#define stringize( x ) #x
#define stringize_val( x ) stringize( x )
#if defined( NDEBUG )
#define mini_log( ... ) static_cast< void >( 0 )
#else
namespace logging
{
enum level_t
{
none = 0b00000,
information = 0b00001,
debug = 0b00010,
warning = 0b00100,
error = 0b01000,
fatal = 0b10000,
all = 0b11111
} inline level = all; // level should only be used in the entry point
inline std::mutex stream;
}
#define mini_log \
[ ]( logging::level_t const level_message, \
std::string_view const location, \
auto &&... message ) -> void \
{ \
std::lock_guard< std::mutex > lock_stream( logging::stream ); \
struct tm buf; \
auto time = [ & ]( ) \
{ \
auto t = std::time( nullptr ); \
localtime_s( &buf, \
&t ); \
return std::put_time( &buf, \
"[%H:%M:%S]" ); \
}; \
auto level = [ = ]( ) -> std::string \
{ \
switch( level_message ) \
{ \
case logging::information: \
return " [" __FILE__ "@" stringize_val( __LINE__ ) "] [Info] ["; \
case logging::debug: \
return " [" __FILE__ "@" stringize_val( __LINE__ ) "] [Dbug] ["; \
case logging::warning: \
return " [" __FILE__ "@" stringize_val( __LINE__ ) "] [Warn] ["; \
case logging::error: \
return " [" __FILE__ "@" stringize_val( __LINE__ ) "] [Erro] ["; \
case logging::fatal: \
return " [" __FILE__ "@" stringize_val( __LINE__ ) "] [Fatl] ["; \
default: \
return {}; \
} \
}; \
if( level_message & logging::level ) \
( ( std::cout << time( ) << level( ) << location << "]: " << message ) << ... ) << '\n'; \
}
#endif
Answer: Macros
Most of your code is in a macro. This makes it harder to write, read and debug.
Why not place the logic in normal functions and only use macros to pass __FILE__ and __LINE__?
That is until you can switch to C++20 which provides std::source_location
Naming
Why is the mutex called stream? | {
"domain": "codereview.stackexchange",
"id": 38931,
"tags": "c++, c++17"
} |
Stress-energy tensor that produces an Einstein-Rosen bridge | Question: Recently, I've been doing a bit of self-education in GR, equipped with a working knowledge of the key elements of the differential geometry used in GR, and in looking at the Einstein-Rosen bridge,
I see that geometrically it is a hyperboloid of one sheet. Now when using this manifold to calculate things like curvature and geodesics, we need a metric, which we can somewhat easily derive from the equation of the aforementioned surface.
Now per $$ R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R = \frac{8\pi G}{c^{4}} T_{\mu\nu} $$ (setting $ \Lambda = 0 $ for the simple case), one should be able to solve for the stress-energy tensor, albeit with grotesque mathematics. By this logic, based on the predetermined shape of the local spacetime, we should be able to calculate the mass and geometry of the object required to create such a metric.
My questions then are:
To what extent is this a valid statement if at all?
Given the fact that there are going to be initial value problems embedded in the PDEs, with conditions that require the input of some properties of the geometry of the aforementioned object: how does one go about finding physically viable objects that satisfy the EFE under the constraints we've applied, if any exist? How does one test to see if there are even any solutions that work?
For an attempt at the second question, I was thinking that if we have to apply constraints to get rid of the unknown constants that resulted from the PDE, we just put constraints on what the solution can be, to ensure that it fits with what we want. I.e., I want to make sure my sphere of matter is of this size, but can't exceed this mass.
Am I at least on the right track with this?
Answer: After a little bit more research, I verified this is the exact method employed to determine whether a given metric is feasible to create with a non-exotic energy-momentum tensor, $ T_{\mu\nu} $.
Resources: General Relativity Lecture Notes by Sean M. Carroll and the arXiv research paper Passing The Einstein-Rosen Bridge, by M. O. Katanaev, published at Mod. Phys. Lett. A 29, 17, 1450090 (2014). The research paper was an excellent read, and I definitely recommend giving it a read! | {
"domain": "physics.stackexchange",
"id": 12482,
"tags": "general-relativity, spacetime, differential-geometry, wormholes"
} |
Maximal minimal DFA for some language of n-bit strings | Question: Notation: $M$ is a DFA; $L(M)$ is the language accepted by $M$; $\min(M)$ is the minimal automaton equivalent to $M$ derived from a minimization algorithm such as the Hopcroft algorithm; and $|M|$ is the size of $M$: the number of states in $M$.
We are given the alphabet $\{0,1\}$ and some $n \in \mathbb{N}$.
Let's define some sets to set up the question.
$$ A = \{M \mid L(M) \subseteq \{0,1\}^n\}$$
So $A$ is the set of all automata that accept some language whose words are composed of $n$-bit binary strings. My intention here is also to require that all members of $A$ reject strings of other lengths. Building from this, consider
$$ A_{\min} = \{ \min(M) \mid M \in A \} $$
So $A_{\min}$ is the set of all minimized automata from $A$. Building further, let
$$ x = \max \{ |M| \mid M \in A_{\min}\} $$
So $x$ is the size of the largest automaton from $A_{\min}$. Now we can specify the automaton or automata we're looking for:
$$S = \{ M \mid M \in A_{\min} \, \, \text{and} \, \, |M| = x \} $$
How would one go about constructing a member (or members) of $S$? Any will do, but simpler constructions are preferred.
My initial naive attempt to do this failed. I tried building it up by induction, starting from a base case, which had two states: an accepting state and a non-accepting state. For the inductive step, I tried to build a binary tree composed of two different subtrees built from the previous step. This failed because minimization still merged common subtrees together.
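For small $n$, one can also brute-force the maximum directly. Counting distinct Myhill-Nerode residuals of $L$ (viewed as a language over $\{0,1\}^*$, with strings of other lengths rejected) gives the minimal complete DFA size, so we can maximize over all $L \subseteq \{0,1\}^n$. A sketch (helper names are mine):

```python
from itertools import combinations, product

def residuals(language):
    """Number of distinct Myhill-Nerode residuals of a finite language
    over {0,1} (strings of other lengths rejected). This equals the
    number of states of the minimal complete DFA, dead state included."""
    seen, frontier = set(), [frozenset(language)]
    while frontier:
        lang = frontier.pop()
        if lang in seen:
            continue
        seen.add(lang)
        for a in "01":
            # left derivative of the language by the symbol a
            frontier.append(frozenset(w[1:] for w in lang if w and w[0] == a))
    return len(seen)

def max_min_dfa(n):
    """Size of the largest minimal DFA over all L subsets of {0,1}^n (brute force)."""
    words = ["".join(p) for p in product("01", repeat=n)]
    return max(residuals(subset)
               for r in range(len(words) + 1)
               for subset in combinations(words, r))
```

For $n = 1, 2, 3$ this returns 3, 5, 8, small values one can compare against the closed-form state-complexity results in the literature.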
Answer: Your question is solved in Cezar Câmpeanu and Wing Hong Ho, The Maximum State Complexity for Finite Languages, Corollary 10. | {
"domain": "cs.stackexchange",
"id": 18427,
"tags": "automata, finite-automata"
} |
If gravity is not a force, then how come gravitational assists work? | Question: I have learned about general relativity and how gravity arises from spacetime curvature. And I have always been taught that gravity is not a real force in the sense that
$$\frac{dp}{dt} = 0$$
And from this, gravity does not accelerate objects while they are in freefall. They are only accelerated when they are on the ground at rest.
On the other hand, when a spacecraft needs to reach a destination more quickly, they can use planets as velocity boosters. They use a gravitational assist from the planet to accelerate them to a greater velocity.
How can this be if gravity does not accelerate objects in freefall since it is not a force? I am seeing a contradiction here and it is confusing me. What am I missing in my conceptual understanding of gravity?
Answer: Well, gravity is a force and it isn't. What is a force anyway? It's what makes you accelerate, which is already a statement about a second-order derivative of one variable with respect to another, and now all of a sudden your coordinate system is important.
The point being made when someone says "gravity isn't a force" is that, if you express a body's location in spacetime, not space as a function of proper, not "ordinary" time along its path, gravity doesn't appear in the resulting generalization of Newton's second law in the same way as other forces do. In that coordinate system, the equation can be written as $\color{blue}{\ddot{x}^\mu}+\color{red}{\Gamma^\mu_{\nu\rho}\dot{x}^\nu\dot{x}^\rho}-\color{limegreen}{a^\mu}=0$, where the red (green) part is gravity (other forces). But this red/green distinction looks different, or disappears, if you look at things another, mathematically equivalent way. In particular:
Putting on Newton's hat: This is the less elegant of the two options I'll mention, one that uses pre-relativistic coordinates. If you look at the body's location in space, not spacetime, as a function of ordinary, not proper, time, the red term looks like the green term, hence like the stuff you learned from Newton. In particular, $\frac{dp^i}{dt}\ne0$.
Putting on Einstein's hat: Even more elegantly, we don't need to leave behind the coordinates I suggested first to change our perspective. As @jawheele notes in a comment, we unlock the real power of GR if we use a covariant derivative as per the no-red formulation $\color{blue}{\dot{x}^\nu\nabla_\nu\dot{x}^\mu}-\color{limegreen}{a^\mu}=0$. This time, the equation's terms manifestly transform as a tensor, making the blue term the unique simplest coordinate-invariant notion of acceleration.
The main advantage of the $\Gamma$-based version is doing calculations we can relate back to familiar coordinates. This not only recovers Newtonian gravity in a suitable limit, it computes a correction to it.
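For instance, in the weak-field, slow-motion limit (a standard computation, with $x^0 = ct$ and $g_{00} \approx -(1+2\Phi/c^2)$), the red term collapses to the Newtonian pull:

$$\Gamma^i_{00} \approx \frac{1}{c^2}\,\partial_i\Phi \quad\Longrightarrow\quad \ddot{x}^i + \Gamma^i_{00}\,(\dot{x}^0)^2 \approx 0 \quad\Longrightarrow\quad \frac{d^2x^i}{dt^2} \approx -\,\partial_i\Phi,$$

with corrections (e.g. perihelion precession) appearing at the next order in $\Phi/c^2$ and $v^2/c^2$.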
Regarding the first bullet point above, have you ever spun on a big wheel? There's a similar perspective-changing procedure that says the dizziness you're feeling is due to something that's "not a force". You're still dizzy, though. This isn't a contradiction; they're just two different ways of deciding what counts as a force.
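The spacecraft's gain is just as real as the dizziness. In Newtonian bookkeeping, an idealized head-on slingshot is simply an elastic bounce in the planet's frame (a sketch; the function name and numbers are mine):

```python
def slingshot_exit_speed(u, v):
    """Idealized head-on 1-D gravity assist, Newtonian sketch:
    u = spacecraft speed toward the planet, v = planet speed, both in the
    Sun's frame along one line. In the planet's frame the flyby merely turns
    the velocity around, like an elastic bounce; transforming back adds 2*v."""
    approach = u + v        # speed relative to the planet
    departure = approach    # magnitude unchanged in the planet's frame
    return departure + v    # back in the Sun's frame: u + 2*v

assert slingshot_exit_speed(10.0, 13.0) == 36.0
assert slingshot_exit_speed(5.0, 0.0) == 5.0   # no assist without planetary motion
```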
The good news is we don't need to "forget" GR to understand a gravity assist. How does it work? It exploits the fact that, if a planet's in the right place at the right time for you, the red term is very different from what the Sun alone would normally give you there. This has implications for the blue part even without wasting fuel on the green part. Or you can explain it without GR; your choice. | {
"domain": "physics.stackexchange",
"id": 93075,
"tags": "general-relativity, forces, gravity, reference-frames, equivalence-principle"
} |
Is an algorithm with an approximation factor of 4000 useful? | Question: A paper published in SODA this year (2019) proposed a constant-factor approximation algorithm for the lower-bounded facility location problem with general lower bounds.
To my surprise, when reading the paper, I verified that its constant approximation factor is 4000.
Therefore, I was left wondering if this algorithm is really useful for something.
Like, you know, a solution that is 4000 times worse than the optimal one could be almost anything, and probably a solution given by a simple polynomial-time greedy algorithm will be better than the one given by this more complex approximation algorithm.
Answer: A very good question! While I think a feasible solution that is 4000 times worse than the optimum is often not very practical, if you have several approximation algorithms to choose from, I would rather implement the one with the better performance guarantee. Well, maybe not always. At the very least, the feasible solutions found are often of much better quality than the guarantee suggests. That is, even if an algorithm has a terrible worst-case guarantee, it often performs very well in practice. At least I've read something along those lines.
Apart from that, you can think of the result as a mathematical theorem, which may be of interest in its own right. Citing the wikipedia article on approximation algorithm:
The desire to understand hard optimization problems from the perspective of approximability is motivated by the discovery of surprising mathematical connections and broadly applicable techniques to design algorithms for hard optimization problems. One well-known example of the former is the Goemans-Williamson algorithm for Maximum Cut which solves a graph theoretic problem using high dimensional geometry. | {
"domain": "cstheory.stackexchange",
"id": 4879,
"tags": "approximation-algorithms, approximation"
} |
Schwarzschild Solution, The constant of integration for 2+1 case | Question: Let's say I want to find the spherically symmetric solution to the EFE
$$G=2T$$
in $d+1$ dimensions. The symmetries and EFE imply
$$ds^2=-f(r)dt^2+\frac{dr^2}{f(r)}+r^2d\Omega^2$$
With $f(r)$ satisfying
$$f^\prime(r)\propto r^{1-d}$$
Therefore
$$f(r)=A\int^r\frac{dx}{x^{d-1}}$$
Comparison with the Newtonian limit allows us to fix the constant of proportionality $A$ such that
$$\Delta f(r)=2\Delta\Phi_N$$
One can easily check that changing the constant of integration is NOT a matter of a coordinate (therefore gauge) transformation, and different constants give rise to different worlds. A single choice may, however, be singled out by requiring flatness ($f \to 1$) at spatial infinity.
Question: What about $d=2$? In this case the (different) answers are
$$f_a(r)=\frac{M}{\pi}\log(r/a)$$
None of which yields asymptotic flatness!
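Carrying out the integral makes the dichotomy explicit:

$$f(r) = A\int^r \frac{dx}{x^{d-1}} = \begin{cases} \dfrac{A}{2-d}\,r^{2-d} + C, & d \neq 2, \\[6pt] A\,\log(r/a), & d = 2. \end{cases}$$

For $d \geq 3$ the power $r^{2-d}$ decays, so choosing $C = 1$ enforces $f \to 1$ at spatial infinity; for $d = 2$ the logarithm diverges for every choice of $a$, which is exactly the obstruction noted above.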
PS: My constants may differ from yours (Cf. My EFE) but that does not change anything. The problem is still there
Answer: Gravity in 2+1 dimensions does not have propagating degrees of freedom, so vacuum Einstein field equations $Ric=0$ simply mean that the metric is locally flat. As a result, there is no Schwarzschild solution. Instead, a point particle corresponds to a conical singularity of a spatial slice of spacetime. Also, if one considers a negative cosmological constant (EFE then ensure that the spacetime is locally AdS₃), then there is a black hole solution, the BTZ black hole.
An overview of (2+1)-dimensional general relativity can be found here:
Carlip, S. (1995). Lectures in (2+ 1)-dimensional gravity. arXiv:gr-qc/9503024.
Review of the BTZ black hole:
Carlip, S. (1995). The (2+1)-dimensional black hole. Classical and Quantum Gravity, 12(12), 2853, doi:10.1088/0264-9381/12/12/005, arXiv:gr-qc/9506079. | {
"domain": "physics.stackexchange",
"id": 58908,
"tags": "general-relativity, black-holes"
} |
Does the gravitational redshift obey the Stefan-Boltzmann law? | Question: Here is a non-rotating neutron star emitting perfect black-body radiation from its surface. Suppose that the radiation is redshifted by a factor of 2 for distant observers (chosen purely for simplicity, because real neutron stars can't be that compact); each photon's energy should then be halved uniformly. As a result, the frequency peak should also be halved. Because the frequency peak is linearly proportional to the temperature, the effective temperature perceived by distant observers should be halved as well. Besides, the surface of the neutron star also experiences time dilation by a factor of 2, which means the number of photons emitted from the surface per second is twice the number of photons reaching distant sites. Taken together, the effective luminosity of the neutron star should be a quarter (1/4) of the surface luminosity.
However, according to the Stefan-Boltzmann law, the radiation power is proportional to the 4th power of temperature, which means the effective luminosity should be 1/16, not 1/4, of the surface luminosity. Is it due to the gravitational lensing effect, which makes the neutron star look bigger than its actual size? Another complicating factor is that once below the photon sphere, the surface of the neutron star no longer obeys the Lambert emission law, because light rays emitted at low angles are directed back towards the surface, which reduces the effective luminosity. These calculations are beyond my capability, so I come to ask for help.
Answer: The light is gravitationally lensed. According to the distant observer, the emitting surface area is increased by exactly the right amount to deal with the apparent contradiction that you have noted.
The relevant equations are (Haensel 2001):
$$ L_\infty = 4\pi R_\infty^2 \sigma T_\infty^4 = L(1 -r_s/R)$$
$$ T_\infty = T(1 - r_s/R)^{1/2}$$
$$ R_\infty = R(1 - r_s/R)^{-1/2}$$
where $r_s$ is the Schwarzschild radius and the $\infty$ subscript means the quantity inferred by a distant observer.
The last of these equations, though, can only be applied if $R>1.5r_s$. This point marks the minimum value $R_{\infty, {\rm min}} = 3\sqrt{3}r_s/2$ at a fixed mass. For lower values of the neutron star radius, photons with an impact parameter larger than $R_{\infty, {\rm min}}$ cannot escape from the neutron star.
i.e. The effective radius will be fixed at its minimum value and the first equation becomes
$$ L_\infty = \frac{27L}{4}\left(\frac{r_s}{R}\right)^2\left(1 -\frac{r_s}{R}\right)^2$$
for $R<1.5r_s$ and the luminosity falls more steeply as you suspected.
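A quick numeric sketch (Python; variable names are mine) of the redshift-2 case, using the equations above:

```python
# The question's case: gravitational redshift factor 1 + z = (1 - rs/R)**-0.5 = 2.
one_minus = 1 / 2**2            # 1 - rs/R = 1/4, hence rs/R = 3/4 and R = 4*rs/3
x = 1 - one_minus               # rs/R

# First equation above, L_inf = L*(1 - rs/R): reproduces the asker's factor 1/4.
L_ratio_uncapped = one_minus    # 0.25

# Stefan-Boltzmann bookkeeping: (T_inf/T)**4 = 1/16, but the lensed area
# (R_inf/R)**2 = 1/(1 - rs/R) = 4 restores the factor 1/4.
sb_check = (1 / 2**4) / one_minus

# Since R = 4*rs/3 < 1.5*rs, the capped formula applies and gives a bit less:
L_ratio_capped = 27 / 4 * x**2 * one_minus**2   # 243/1024, about 0.237
```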
The extreme situation you hypothesise requires $R= 4r_s/3$. This is still above the Buchdahl limit so might be allowed by the hardest of proposed equations of state. You only need $R < 1.76 r_s$ for a distant observer to see the whole of the surface (e.g., see this answer). The limiting $R_{\infty, {\rm min}}$ corresponds to tangentially emitted light orbiting the neutron star multiple times before exiting to the observer at infinity! | {
"domain": "physics.stackexchange",
"id": 94369,
"tags": "thermodynamics, general-relativity, thermal-radiation, neutron-stars"
} |
How to demonstrate the inductance of an inductor? | Question: I have some trouble demonstrating this formula. Actually my problem is about how the length of the coil enters the formula... Am I right that the electromotive force $V_L$ is equal to $NA\frac{dB}{dt}$? Then I know that I must use Ampère's theorem to replace $B$ by $i$, but the closed loop that I used has no link with the length of the inductor wire $\ell$
$$V_L=N\frac{d\Phi}{dt}=\frac{\mu N^2A}{\ell}\frac{di}{dt} $$
Answer: This equation is for a long solenoid (coil) and it assumes (ignoring end effects) that the flux density, $\vec B$, is the same in magnitude and direction inside the solenoid, and zero outside the solenoid. So if you consider a path that starts from one end (X) of the solenoid and goes through the solenoid, parallel to the axis, to the other end (Y), and then returns to X via some route outside the solenoid, the integral of $\vec B.d \vec{\ell}$ will simply be $B \ell$ in which $B$ is the magnitude of the flux density inside the solenoid, and $\ell$ is the axial length of the solenoid. So Ampère's law gives $$B \ell=\mu_0NI.$$
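As a numeric sanity check (a sketch; the solenoid dimensions are made-up example values, not from the question), chaining $B\ell = \mu_0 N I$ with $\Phi = BA$ reproduces $L = \mu_0 N^2 A/\ell$:

```python
import math

mu0 = 4e-7 * math.pi                  # vacuum permeability, H/m
N, radius, length = 500, 0.01, 0.20   # assumed example: turns, m, m
I = 2.0                               # arbitrary test current, A (cancels out)
A = math.pi * radius**2               # cross-sectional area, m^2

B = mu0 * N * I / length              # Ampere's law: B * l = mu0 * N * I
Phi = B * A                           # flux through one turn
L = N * Phi / I                       # inductance = flux linkage per ampere

assert abs(L - mu0 * N**2 * A / length) < 1e-12
```

With these example values, $L$ comes out to roughly half a millihenry.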
So you now have the link between $B$ and $I$ that you need. | {
"domain": "physics.stackexchange",
"id": 64064,
"tags": "electromagnetism, electromagnetic-induction, inductance"
} |
Scrolly - A (very) simple infinite mouse "scroll" | Question: Out of fun, and to practice my rusty C# skills, I've made a very basic program.
It only has 1 function: When you move the mouse to one side of the screen, it is shown on the other side. Like an infinite scroll!
I am totally aware that there are users with 2 or more screens. I'm sorry, but it only works for 1. It's a limitation I've imposed on purpose.
I've used C# 6.0 to make this project, and I can only guarantee that it works with that specific version.
Program.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Windows.Forms;
namespace Scrolly
{
static class Program
{
/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main()
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
//hides the form at startup
Form form = new settings();
Application.Run();
}
}
}
settings.Designer.cs (ignorable, but required to compile)
namespace Scrolly
{
partial class settings
{
/// <summary>
/// Required designer variable.
/// </summary>
private System.ComponentModel.IContainer components = null;
/// <summary>
/// Clean up any resources being used.
/// </summary>
/// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param>
protected override void Dispose(bool disposing)
{
if (disposing && (components != null))
{
components.Dispose();
}
base.Dispose(disposing);
}
#region Windows Form Designer generated code
/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
this.components = new System.ComponentModel.Container();
System.ComponentModel.ComponentResourceManager resources = new System.ComponentModel.ComponentResourceManager(typeof(settings));
this.ticker = new System.Windows.Forms.Timer(this.components);
this.paddingY = new System.Windows.Forms.TrackBar();
this.paddingX = new System.Windows.Forms.TrackBar();
this.pixelsH = new System.Windows.Forms.Label();
this.pixelsV = new System.Windows.Forms.Label();
this.warning = new System.Windows.Forms.Label();
this.trayIcon = new System.Windows.Forms.NotifyIcon(this.components);
this.trayIconMenu = new System.Windows.Forms.ContextMenuStrip(this.components);
this.settingsToolStripMenuItem = new System.Windows.Forms.ToolStripMenuItem();
this.exitToolStripMenuItem = new System.Windows.Forms.ToolStripMenuItem();
((System.ComponentModel.ISupportInitialize)(this.paddingY)).BeginInit();
((System.ComponentModel.ISupportInitialize)(this.paddingX)).BeginInit();
this.trayIconMenu.SuspendLayout();
this.SuspendLayout();
//
// ticker
//
this.ticker.Enabled = true;
this.ticker.Interval = 50;
this.ticker.Tick += new System.EventHandler(this.ticker_Tick);
//
// paddingY
//
this.paddingY.LargeChange = 2;
this.paddingY.Location = new System.Drawing.Point(97, 5);
this.paddingY.Maximum = 20;
this.paddingY.Name = "paddingY";
this.paddingY.Size = new System.Drawing.Size(175, 45);
this.paddingY.TabIndex = 1;
//
// paddingX
//
this.paddingX.LargeChange = 2;
this.paddingX.Location = new System.Drawing.Point(97, 56);
this.paddingX.Maximum = 20;
this.paddingX.Name = "paddingX";
this.paddingX.Size = new System.Drawing.Size(175, 45);
this.paddingX.TabIndex = 1;
//
// pixelsH
//
this.pixelsH.AutoSize = true;
this.pixelsH.Location = new System.Drawing.Point(12, 56);
this.pixelsH.Name = "pixelsH";
this.pixelsH.Size = new System.Drawing.Size(81, 13);
this.pixelsH.TabIndex = 2;
this.pixelsH.Text = "H Offset (pixels)";
//
// pixelsV
//
this.pixelsV.AutoSize = true;
this.pixelsV.Location = new System.Drawing.Point(12, 5);
this.pixelsV.Name = "pixelsV";
this.pixelsV.Size = new System.Drawing.Size(80, 13);
this.pixelsV.TabIndex = 2;
this.pixelsV.Text = "V Offset (pixels)";
//
// warning
//
this.warning.AutoSize = true;
this.warning.Location = new System.Drawing.Point(15, 96);
this.warning.Name = "warning";
this.warning.Size = new System.Drawing.Size(259, 13);
this.warning.TabIndex = 3;
this.warning.Text = "Warning: This program only works on a single screen!";
//
// trayIcon
//
this.trayIcon.ContextMenuStrip = this.trayIconMenu;
this.trayIcon.Icon = ((System.Drawing.Icon)(resources.GetObject("trayIcon.Icon")));
this.trayIcon.Visible = true;
this.trayIcon.MouseDoubleClick += new System.Windows.Forms.MouseEventHandler(this.trayIcon_MouseDoubleClick);
//
// trayIconMenu
//
this.trayIconMenu.Items.AddRange(new System.Windows.Forms.ToolStripItem[] {
this.settingsToolStripMenuItem,
this.exitToolStripMenuItem});
this.trayIconMenu.Name = "trayIconMenu";
this.trayIconMenu.Size = new System.Drawing.Size(153, 70);
//
// settingsToolStripMenuItem
//
this.settingsToolStripMenuItem.Name = "settingsToolStripMenuItem";
this.settingsToolStripMenuItem.Size = new System.Drawing.Size(152, 22);
this.settingsToolStripMenuItem.Text = "Settings";
this.settingsToolStripMenuItem.Click += new System.EventHandler(this.settingsToolStripMenuItem_Click);
//
// exitToolStripMenuItem
//
this.exitToolStripMenuItem.Name = "exitToolStripMenuItem";
this.exitToolStripMenuItem.Size = new System.Drawing.Size(152, 22);
this.exitToolStripMenuItem.Text = "Exit";
this.exitToolStripMenuItem.Click += new System.EventHandler(this.exitToolStripMenuItem_Click);
//
// settings
//
this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
this.CausesValidation = false;
this.ClientSize = new System.Drawing.Size(284, 121);
this.Controls.Add(this.warning);
this.Controls.Add(this.pixelsV);
this.Controls.Add(this.pixelsH);
this.Controls.Add(this.paddingX);
this.Controls.Add(this.paddingY);
this.MaximizeBox = false;
this.MinimizeBox = false;
this.Name = "settings";
this.ShowIcon = false;
this.ShowInTaskbar = false;
this.Text = "Settings";
this.FormClosing += new System.Windows.Forms.FormClosingEventHandler(this.settings_FormClosing);
((System.ComponentModel.ISupportInitialize)(this.paddingY)).EndInit();
((System.ComponentModel.ISupportInitialize)(this.paddingX)).EndInit();
this.trayIconMenu.ResumeLayout(false);
this.ResumeLayout(false);
this.PerformLayout();
}
#endregion
private System.Windows.Forms.Timer ticker;
private System.Windows.Forms.TrackBar paddingY;
private System.Windows.Forms.TrackBar paddingX;
private System.Windows.Forms.Label pixelsH;
private System.Windows.Forms.Label pixelsV;
private System.Windows.Forms.Label warning;
private System.Windows.Forms.NotifyIcon trayIcon;
private System.Windows.Forms.ContextMenuStrip trayIconMenu;
private System.Windows.Forms.ToolStripMenuItem settingsToolStripMenuItem;
private System.Windows.Forms.ToolStripMenuItem exitToolStripMenuItem;
}
}
settings.cs
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
namespace Scrolly
{
public partial class settings : Form
{
//we (hopefully) will always have the same first screen
private Screen screen = Screen.AllScreens[0];
public settings()
{
InitializeComponent();
}
private void ticker_Tick(object sender, EventArgs e)
{
Point pos = Cursor.Position;
if (pos.X <= paddingX.Value)
{
Cursor.Position = new Point(screen.Bounds.Width - paddingX.Value - 1, Cursor.Position.Y);
}
else if (pos.X >= screen.Bounds.Width - paddingX.Value - 1)
{
Cursor.Position = new Point(paddingX.Value + 2, Cursor.Position.Y);
}
if (pos.Y <= paddingY.Value)
{
Cursor.Position = new Point(Cursor.Position.X, screen.Bounds.Height - paddingY.Value - 1);
}
else if (pos.Y >= screen.Bounds.Height - paddingY.Value - 1)
{
Cursor.Position = new Point(Cursor.Position.X, paddingY.Value + 2);
}
}
private void trayIcon_MouseDoubleClick(object sender, MouseEventArgs e)
{
ticker.Enabled = !ticker.Enabled;
if (ticker.Enabled)
{
trayIcon.Icon = icons.on;
}
else
{
trayIcon.Icon = icons.off;
}
}
private void settingsToolStripMenuItem_Click(object sender, EventArgs e)
{
this.Show();
}
private void exitToolStripMenuItem_Click(object sender, EventArgs e)
{
Application.Exit();
}
private void settings_FormClosing(object sender, FormClosingEventArgs e)
{
if (e.CloseReason == CloseReason.UserClosing)
{
e.Cancel = true;
this.Hide();
}
}
}
}
The form has the following aspect:
This will also create an icon on the system tray. This requires the file icons.resx, on the project root, and must contain 1 icon with the name on and another with the name off. These icons must be in the .ico format, must be 32x32 and can't have PNG compression.
In terms of readability, in all parts I've changed, is there anything I can change? Anything else I can improve?
Answer:
private void ticker_Tick(object sender, EventArgs e)
{
Point pos = Cursor.Position;
if (pos.X <= paddingX.Value)
{
Cursor.Position = new Point(screen.Bounds.Width - paddingX.Value - 1, Cursor.Position.Y);
}
else if (pos.X >= screen.Bounds.Width - paddingX.Value - 1)
{
Cursor.Position = new Point(paddingX.Value + 2, Cursor.Position.Y);
}
if (pos.Y <= paddingY.Value)
{
Cursor.Position = new Point(Cursor.Position.X, screen.Bounds.Height - paddingY.Value - 1);
}
else if (pos.Y >= screen.Bounds.Height - paddingY.Value - 1)
{
Cursor.Position = new Point(Cursor.Position.X, paddingY.Value + 2);
}
}
Don't use abbreviations for variable names.
The calculation of the new position of the cursor should be extracted to a separate method to be called from the event handler.
Instead of assigning the cursor's position twice, you should calculate the X and Y values and then assign the calculated values to the position.
Implementing the mentioned points will lead to
private Point CalculateCursorPosition(Point currentPosition, Rectangle bounds, int offsetX, int offsetY)
{
// initialize x and y to the former values for the case that none of the
// conditions will be met.
int x = currentPosition.X;
int y = currentPosition.Y;
if (currentPosition.X <= offsetX)
{
x = bounds.Width - offsetX - 1;
}
else if (currentPosition.X >= bounds.Width - offsetX - 1)
{
x = offsetX + 2;
}
if (currentPosition.Y <= offsetY)
{
y = bounds.Height - offsetY - 1;
}
else if (currentPosition.Y >= bounds.Height - offsetY - 1)
{
y = offsetY + 2;
}
return new Point(x, y);
}
private void ticker_Tick(object sender, EventArgs e)
{
Cursor.Position = CalculateCursorPosition(Cursor.Position, screen.Bounds,
paddingX.Value, paddingY.Value);
}
To keep the number of method arguments low, I have decided to pass Cursor.Position instead of the two parameters Cursor.Position.X and Cursor.Position.Y. | {
"domain": "codereview.stackexchange",
"id": 15189,
"tags": "c#, winforms"
} |
Free variables in constraint-typing derivation? | Question: In Types and Programming Languages' constraint typing rules (Figure 22-1), is it possible for any part of the typing derivation to contain free type variables that aren't part of the fresh variables? Because the typing rules are based on the simply typed lambda calculus with an infinite number of base types, a naïve reading would allow for $\Gamma$, $t$, or $T$ to contain type variables not mentioned in $\chi$. However, given that TAPL hasn't introduced polymorphism yet, I wonder if this case should implicitly be considered "invalid."
For a concrete example: Should $\varnothing \vdash (\lambda \, x : X \rightarrow X, \, 0) \, (\lambda \, x : X, \, x) : \mathbb{N}$ be considered to have a "valid" constraint typing derivation, even though $X$ wouldn't be mentioned in the resulting $\chi$?
Figure 22-1 from TAPL, for reference:
EDIT: This question has been significantly edited in response to @frabala's answer, which made me realize that I hadn't been very clear initially. Hopefully it makes more sense now: See the edits and comments on @frabala's answer for a targeted answer to my question as currently written.
Answer: Not sure whether I completely understand the question, but here is my attempt:
a naïve reading would allow for Γ, t, or T to contain type variables not mentioned in χ.
$\Gamma$, $t$ and $T$ may contain type variables not mentioned in $\mathcal{X}$. The set $\mathcal{X}$ contains only unification variables. That is, fresh variables generated by the typing process (in particular, by rule CT-App, where it is explicitly stated that the new unification variable $X$ should not appear in $\Gamma$, $t$, $T$, etc). Try exercise 22.3.3. In the process of building a derivation for the judgment shown there, $\Gamma$ will end up containing the type variables $X$, $Y$ and $Z$ (which initially are part of the term $t$) and those variables will not belong in $\mathcal{X}$ at any point of the derivation.
I think my question is actually if any of Γ, t, or T can have free type variables (i.e. that haven't been introduced by the type-inference rules).
$\Gamma$ can have type variables not introduced by the rules. For example, the term $(\lambda y : B.y)$ of the polymorphic type $B \to B$ is typable under some environment like $\Gamma = x:A$ (and to avoid ambiguity, one needs to make sure that the identifiers $x$ and $y$ are distinct).
This would be another way to encode constants in a language. For example, instead of having the rule CT-Zero, we could always type a term under the environment $\Gamma_0 = 0 : \mathrm{Nat}$.
Regarding the term $t$, this term is given and therefore, so are the type variables that it contains. So, the type variables in $t$ are certainly not introduced by the rules. The rules only manipulate the environment $\Gamma$, the type $T$, the unification variables $\mathcal{X}$ and the constraint set $\mathcal{C}$. The term $t$ is only being read.
Regarding the type $T$, it can also contain type variables not mentioned in $\mathcal{X}$. For example, try typing the term $(\lambda f: A \to A. f\,0)$.
When you reach rule CT-Var, the type $T$ in the conclusion of the rule will be $A\to A$, where $A$ will not appear in $\mathcal{X}$.
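To make the two possible origins of type variables concrete, here is a small sketch (my own illustration and representation, not code from TAPL) of the constraint-generation rules for the fragment used above. Running it on $(\lambda f : A \to A.\ f\ 0)$ shows the programmer-written $A$ appearing in the result type $T$, while only the freshly generated variable is recorded in $\mathcal{X}$:

```python
from dataclasses import dataclass
from itertools import count

# -- types --
@dataclass(frozen=True)
class TVar:
    name: str

@dataclass(frozen=True)
class Nat:
    pass

@dataclass(frozen=True)
class Arrow:
    dom: object
    cod: object

# -- terms --
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Abs:
    var: str
    ann: object   # programmer-written type annotation
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

@dataclass(frozen=True)
class Zero:
    pass

fresh = count()

def ct(env, t):
    """Constraint typing: returns (type T, unification variables X, constraints C)."""
    if isinstance(t, Var):                      # CT-Var
        return env[t.name], frozenset(), frozenset()
    if isinstance(t, Zero):                     # CT-Zero
        return Nat(), frozenset(), frozenset()
    if isinstance(t, Abs):                      # CT-Abs: X is threaded through unchanged
        ty, xs, cs = ct({**env, t.var: t.ann}, t.body)
        return Arrow(t.ann, ty), xs, cs
    if isinstance(t, App):                      # CT-App: the only rule that generates
        t1, x1, c1 = ct(env, t.fun)             # a fresh variable, recorded in X
        t2, x2, c2 = ct(env, t.arg)
        x = TVar(f"?X{next(fresh)}")
        return x, x1 | x2 | {x}, c1 | c2 | {(t1, Arrow(t2, x))}

# (lambda f : A -> A. f 0)
term = Abs("f", Arrow(TVar("A"), TVar("A")), App(Var("f"), Zero()))
ty, xs, cs = ct({}, term)
print(ty)   # the result type (A -> A) -> ?X0 contains the programmer's A ...
print(xs)   # ... but X records only the fresh ?X0
print(cs)   # one constraint: A -> A = Nat -> ?X0
```

Note how CT-Abs simply threads $\mathcal{X}$ through, so annotation variables like $A$ never enter it; only CT-App generates fresh names.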
Edit 1
I still don't fully understand the question. What do you mean by "standalone" derivations? Anyway, there is something I'd like to add:
e.g. I can apply CT-App to two subderivations for which $\mathcal{C}_2$ contains type variables in $\mathcal{X}_1$.
First, note that the typing rules are such that every new unification variable is recorded in $\mathcal{X}$.
Now, indeed rule CT-App loses some information. In particular, the derivation for $t_1$ does not keep track of the fresh unification variables introduced in the derivation for $t_2$, and vice versa. So, it could happen that $\mathcal{C}_2$ contains unification variables that are mentioned in $\mathcal{X}_1$.
However, any unification variable in $\mathcal{C}_2$ is recorded in $\mathcal{X}_2$. Thus, if $\mathcal{C}_2$ contains unification variables mentioned in $\mathcal{X}_1$, then
$\mathcal{X}_1\cap \mathcal{X}_2 \neq \emptyset$.
Because of this, applying CT-App on the two subderivations will fail in this problematic scenario, since the rule requires that $\mathcal{X}_1\cap \mathcal{X}_2 = \emptyset$.
This does not mean that the term application $t_1\, t_2$ is not typable. Instead, this condition forces you to rename your unification variables accordingly, so that overlaps between the names of variables contained in $\mathcal{X}_1$ and $\mathcal{X}_2$ do not occur.
To sum up, two variables that have the same name and belong to both $\mathcal{X}_1$ and $\mathcal{X}_2$ should not be considered as the same variable. Instead, they should be considered as distinct variables that "coincidentally" were given the same identifier. Thus, an appropriate renaming solves the problem and rule CT-App can be applied.
Maybe now I do understand the question:
if there are any implicit meta-constraints that disallow "standalone" type derivations from containing free type variables.
First, let's clarify that all type variables appear free, because the language has no type abstraction in its syntax. There are no $\Lambda$ or $\forall$ binders as those in System F.
Now, every type variable $P$ that appears in $\Gamma$, $T$ or $\mathcal{C}$ has one and only one of the following two origins: either a) $P$ originates from the term $t$ (in the type of a $\lambda$-binder), and thus was given by the programmer, or b) $P$ is a freshly generated type variable. The set $\mathcal{X}$ keeps track of these type variables.
So given that we start our typing process with an empty $\Gamma$, all type variables that appear in a derivation are free and necessarily have one of these two origins. There can be no type variables of some other origin and this is not an implicit constraint. It is enforced by the typing rules. You can try proving the following:
$\text{If }\Gamma\vdash t: T\mid_{\mathcal{X}} \mathcal{C}\text{, then }\forall P,\,P\in\mathsf{FT}(T)\cup\mathsf{FT}(\mathcal{C})\implies P\in\mathcal{X}\cup\mathsf{FT}(\Gamma)\cup\mathsf{FT}(t).$
This statement also takes into account the case where we start the typing process with a non-empty $\Gamma$.
I believe the proof can work by induction on the typing judgment.
Wow, I wrote a lot! :D
Edit 2
Yes, your example is well-typed. In the application of rule CT-App, type $T_1$ (from the rule in the figure) is actually $(X\to X)\to \mathbb{N}$ and type $T_2$ is $X\to X$. This rule generates a fresh variable. The name of this variable should be distinct from those that appear in $\mathcal{X}_1$, $\mathcal{X}_2$, $T_1$, $T_2$, ... (3rd line of the rule's premises). So, because $X$ already appears in $T_1$, we should come up with another name. Say, Y. The conclusion of the rule will be:
$$ \emptyset \vdash t_1\,t_2 : Y \mid_{\{Y\}} \{(X\to X)\to\mathbb{N} = (X\to X)\to Y\}$$
where $t_1 = (\lambda x : X\to X. 0)$ and $t_2 = (\lambda x : X. x)$.
The rest of the derivation is more straightforward to build. | {
"domain": "cs.stackexchange",
"id": 18507,
"tags": "type-theory, type-inference, types-and-programming-languages"
} |
Why are noble gases used in "neon" lamps | Question: Neon lamps are lamps that contain noble gases... They light up due to the presence of energy levels for electrons (according to how they are defined in books). But I still don't understand why noble gases are used in such lamps. Any help please?
Answer: Strictly speaking "neon" lamps contain neon but the term is often used colloquially for a whole range of coloured lighting probably because the red neon tubes are one of the commonest.
But it isn't just noble gases that are used, though they are the most common. Carbon dioxide is sometimes used. And there are many discharge lamps that add metals or metal salts, though these are more common in other uses such as high intensity street lighting.
The noble gases and mixtures of them are commonly used in coloured advertising lighting because they offer a range of colours in a simple low-pressure discharge lamp without the added complexity of fluorescent coatings (as in common fluorescent tubes) or high-pressure systems (as is required in high intensity systems used in street or industrial lighting). Advertising signage often consists of long glass tubes in custom shapes and the simplicity of simple gas discharge makes the results cheaper than more sophisticated alternatives.
The colours are derived from the energy levels of the electrons in the gases and it just so happens that the range of those energy levels in noble gases correspond to a useful variety of colours.
There is a good summary of the colours (and other types of discharge lamps) in this Wiki article. | {
"domain": "chemistry.stackexchange",
"id": 4407,
"tags": "noble-gases"
} |
2d collisions of perfectly elastic circles with mass | Question: My question is about 2 circles colliding. I can't describe it perfectly, which is part of the reason I'm stuck. Here is a picture.
There are 2 circles. They both have an initial position, velocity, and mass. They collide.
The collision
Momentum is conserved, and since it is a vector, I can break it up into components. If I constrain this scenario further and say the collision is perfectly elastic, then KE is conserved. KE is not a vector, so I can't break that down any more. So I have 3 equations :
Px(before) = Px(after)
Py(before) = Py(after)
KE(before) = KE(after)
The problem
So the pickle is that I have 3 equations, but 4 unknown variables (the new velocities).
If it makes it easier, suppose mass is uniformly distributed over the area of a given circle.
question-1a:
Do I need to make additional assumptions in order to find a unique solution to this problem? If not, then
question-1b:
Have I misunderstood the application of the conservation of momentum for the case when the objects colliding are not point particles?
I can't help but think torque has something to do with it. When the circles strike, it feels like there would be a tendency for them to spin. Is that relevant?
Answer: I see that you are trying to count the degrees of freedom (DoF) and the number of equations. The answer to this is quite fun and subtle.
So the pickle is that I have 3 equations, but 4 unknown variables (the new velocities).
I will come back to how you are correct, but you should be counting a lot more. Let us ignore the possibility of the circles rotating. Then you have the position of each of the two circles, two coördinates each, so that is already 4 variables. Their velocities make 4 variables too, so you actually have 8 DoF.
You know that there is a centre of mass. Conservation of (total) linear momentum (CoLM or Co$\vec p$) means that the centre of mass moves on a straight line with uniform velocity.
Before and after the collision, the relative momentum is also constant, which means that the relative motion is also made of straight lines. That is, the relative position is really equivalent to a time coördinate, just like the centre of mass is.
Taking the origin to be the event of the collision and using polar coördinates, then you can see that we have even more constants: The angles are constant before and after the collision, and the collision can only change the angle. The radius becomes basically the time coördinate.
If the energy is not conserved in the collision, i.e. only before and after the collision, then the relative momentum's magnitude can also change at the collision.
Let us come back to the 3 equations and 4 unknown variables. Really, these are all about the momenta, with energy being a constraint on the magnitude of the relative momentum. This leaves the angle of the relative momentum free to change in a collision, and it is precisely this that is correctly being shown as the missing equation.
Now, if you are dealing with this 2 circle problem, then the line of collision is itself a constraint: you can assert that the momentum transfer is only along this line of collision, and then that will fix the new angle. This obviously changes when you have rotations, because then the momentum transfer cannot only be along the line of collision.
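As a concrete illustration of how the line-of-collision constraint supplies the missing fourth equation (a sketch of my own, assuming smooth, non-rotating discs): the impulse acts only along the unit vector joining the centres, its magnitude is fixed by the 1D elastic result along that line, and both momentum and kinetic energy then come out conserved:

```python
import numpy as np

def elastic_collision(m1, m2, v1, v2, n):
    """Velocities after a perfectly elastic collision of two smooth discs.

    n is the unit vector along the line of centres at contact; for smooth
    (frictionless, non-rotating) discs the impulse acts only along n,
    which fixes the otherwise-free angle of the relative momentum."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    u = np.dot(v1 - v2, n)                    # relative speed along n
    j = 2.0 * m1 * m2 / (m1 + m2) * u         # scalar impulse (1D elastic result)
    return v1 - (j / m1) * n, v2 + (j / m2) * n

m1, m2 = 1.0, 2.0
v1, v2 = np.array([1.0, 0.0]), np.array([0.0, 0.0])
n = np.array([np.cos(0.3), np.sin(0.3)])      # oblique contact direction
w1, w2 = elastic_collision(m1, m2, v1, v2, n)

print(m1*v1 + m2*v2, m1*w1 + m2*w2)           # total momentum: same before and after
print(0.5*m1*v1@v1 + 0.5*m2*v2@v2,
      0.5*m1*w1@w1 + 0.5*m2*w2@w2)            # total kinetic energy: also the same
```

The tangential components of each velocity are untouched; only the components along n exchange as in a 1D elastic collision.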
When the circles strike, it feels like there would be a tendency for them to spin. Is that relevant?
Yes, as mentioned above. | {
"domain": "physics.stackexchange",
"id": 97141,
"tags": "kinematics, momentum, energy-conservation, collision"
} |
Can methylation of a promoter induce gene expression in some rare cases? | Question: Can methylation of a promoter induce gene expression in some rare cases?
I've read somewhere that methylation of an intron can induce gene expression (eg. Igf2). How is that even possible?
Thank you in advance!
Answer: There is nothing intrinsic to DNA methylation itself that requires it to repress transcription. Simply, it affects sequence recognition by proteins. CpG methylation can prevent transcription factor binding and/or recruit proteins that inhibit transcription, either competitively or through chromatin condensation. This is why it's generally associated with transcriptional repression.
I have found some articles which describe methylation in intergenic regions and introns activating transcription but, since you're asking specifically about promoters, I'll limit the examples to methylation in the 5' flanking region. Please note that I'm grossly oversimplifying these articles; you should actually read them to get the full picture.
Bahar HK, Vana T, Walker MD. 2014. Paradoxical role of DNA methylation in activation of FoxA2 gene expression during endoderm development. J Biol Chem 289(34):23882-23892
This study reports that, during development, high levels of methylation of a CpG island in the FoxA2 promoter are present in expressing tissues and absent in non-expressing tissues. Their hypothesis is that CpG methylation prevents binding of a protein that represses transcription by condensing chromatin through histone modifications.
Hantusch B, Kalt R, Krieger S, Puri C, Kerjaschki D. 2007. Sp1/Sp3 and DNA-methylation contribute to basal transcriptional activation of human podoplanin in MG63 versus Saos-2 osteoblastic cells. BMC Mol Bio 8(20)
Here, PDPN expression was associated with CpG methylation. They hypothesize that methylation could affect chromatin state or recruit, what they term, a methylation dependent factor, which activates transcription.
Niesen MI, Osborne AR, Yang H, Rastogi S, Chellappan S, Cheng JQ, Boss JM, Blanck G. 2005. Activation of a methylated promoter mediated by a sequence-specific DNA-binding protein, RFX. J Biol Chem 280(47):38914-38922
This study describes a DNA binding protein (RFX) that can recognize and activate the methylated promoter of MHC. They suggest that RFX can competitively inhibit binding of methyl-DNA binding domain proteins that condense chromatin. | {
"domain": "biology.stackexchange",
"id": 3397,
"tags": "cell-biology, gene-expression, gene"
} |
Practical applications for a Bose-Einstein condensate | Question: What are the main practical applications that a Bose-Einstein condensate can have?
Answer: I assume you mean the relatively recent phenomenon of Bose-Einstein Condensation in dilute atomic vapors (first produced in 1995 in Colorado). The overall phenomenon of Bose-Einstein Condensation is closely related to superconductivity (in a very loose sense, you can think of the superconducting transition in a metal as the formation of a BEC of pairs of electrons), and that application would trump everything else.
The primary application of atomic BEC systems is in basic research areas at the moment, and will probably remain so for the foreseeable future. You sometimes hear people talk about BEC as a tool for lithography, or things like that, but that's not likely to be a real commercial application any time soon, because the throughput is just too low. Nobody has a method for generating BEC at the sort of rate you would need to make interesting devices in a reasonable amount of time. As a result, most BEC applications will be confined to the laboratory.
One of the hottest areas in BEC at the moment is the use of Bose condensates (and the related phenomenon of degenerate Fermi gases) to simulate condensed matter systems. You can easily make an "optical lattice" from an interference pattern of multiple laser beams that looks to the atoms rather like a crystal lattice in a solid looks to electrons: a regular array of sites where the particles could be trapped, with all the sites interconnected by tunneling. The big advantage BEC/optical lattice systems have over real condensed matter systems is that they are more easily tunable. You can easily vary the lattice spacing, the strength of the interaction between atoms, and the number density of atoms in the lattice, which allows you to explore a range of different parameters with essentially the same sample, which is very difficult to do with condensed matter systems where you need to grow all new samples for every new set of values you want to explore. As a result, there is a great deal of work in using BEC systems to explore condensed matter physics, essentially making cold atoms look like electrons. There's a good review article, a couple of years old now, by Immanuel Bloch, Jean Dalibard, and Wilhelm Zwerger (RMP paper, arxiv version) that covers a lot of this work. And people continue to expand the range of experiments-- there's a lot of work ongoing looking at the effect of adding disorder to these systems, for example, and people have begun to explore lattice structures beyond the really easy to make square lattices of the earliest work.
There is also a good deal of interest in BEC for possible applications in precision measurement. At the moment, some of the most sensitive detectors ever made for things like rotation, acceleration, and gravity gradients come from atom interferometry, using the wavelike properties of atoms to do interference experiments that measure small shifts induced by these effects. BEC systems may provide an improvement beyond what you can do with thermal beams of atoms in these sorts of systems. There are a number of issues to be worked out in this relating to interatomic interactions, but it's a promising area. Full Disclosure: My post-doc research was in this general area, though what I did was more a proof-of-principle demonstration than a real precision measurement. My old boss, Mark Kasevich, now at Stanford, does a lot of work in this area.
The other really hot area of BEC research is in looking for ways to use BEC systems for quantum information processing. If you want to build a quantum computer, you need a way to start with a bunch of qubits that are all in the same state, and a BEC could be a good way to get there, because it consists of a macroscopic number of atoms occupying the same quantum state. There are a bunch of groups working on ways to start with a BEC, and separate the atoms in some way, then manipulate them to do simple quantum computing operations.
There's a lot of overlap between these sub-sub-fields-- one of the best ways to separate the qubits for quantum information processing is to use an optical lattice, for example. But those are what I would call the biggest current applications of BEC research. None of these are likely to provide a commercial product in the immediate future, but they're all providing useful information about the behavior of matter on very small scales, which helps feed into other, more applied, lines of research.
This is not by any stretch a comprehensive list of things people are doing with BEC, just some of the more popular areas over the last couple of years. | {
"domain": "physics.stackexchange",
"id": 3797,
"tags": "quantum-mechanics, condensed-matter, bose-einstein-condensate, applied-physics, big-list"
} |
Patience game (Klondike) | Question: I've been working on a Patience clone in java using javaFX.
It's not finished yet; there are some features not working and some bugs still there, but I'm really in need of some structural advice and I know that a lot of the things I'm doing are just getting really messy.
If someone has the knowledge and time to give me any advice on what I could do differently and how it would improve my code, I'd really appreciate it.
Any feedback is welcome :)
An image of the program:
Here is the link to my github repo: https://github.com/vincent-nagy/Card-Games
Here is the controller of the game since I need to post some code with this question. All other classes can be found on the github since it's too many to post here.
package be.vincent_nagy.cardgames.java.controller;
import be.vincent_nagy.cardgames.java.controls.CardPane;
import be.vincent_nagy.cardgames.java.event.DragDetectedHandler;
import be.vincent_nagy.cardgames.java.event.DragDoneHandler;
import be.vincent_nagy.cardgames.java.event.DragDroppedHandler;
import be.vincent_nagy.cardgames.java.event.DragOverHandler;
import be.vincent_nagy.cardgames.java.model.*;
import javafx.event.ActionEvent;
import javafx.fxml.FXML;
import javafx.scene.Group;
import javafx.scene.Node;
import javafx.scene.control.Button;
import javafx.scene.image.ImageView;
import javafx.scene.input.MouseEvent;
import javafx.scene.layout.AnchorPane;
import javafx.scene.layout.StackPane;
import java.util.ArrayDeque;
public class PatienceController {
    @FXML
    private AnchorPane tablePane;
    @FXML
    private StackPane nextCardStackPane;
    @FXML
    private Button start;
    @FXML
    private Group stackGroup;
    @FXML
    private ImageView nextCardButton;

    private Deck deck;
    private Table gameTable;
    private Stacks stacks;
    private NextCards nextCards;
    private ArrayDeque<Node> eventHandlerQueu = new ArrayDeque<>();

    public void initialize(){
        start.setOnAction(this::startGame);
        nextCardButton.setOnMouseClicked(this::showNextCards);
    }

    private void startGame(ActionEvent event) {
        //Empty out previous things
        tablePane.getChildren().removeAll(tablePane.getChildren().filtered(e -> e instanceof CardPane));
        nextCardStackPane.getChildren().clear();
        stackGroup.getChildren().clear();
        //Create a new gamefield
        createGameField();
        addEventHandlers(null, false);
    }

    /*
    Create everything needed to play the game
    */
    private void createGameField() {
        //create stacks
        createCardStacks();
        //deal the cards on the table
        dealCards();
        //Create and show next cards
        initNextCards();
    }

    /*
    Create the stacks on which the cards should be stacked to finish the game.
    */
    private void createCardStacks() {
        //Create 4 stacks on which cards are to be stacked from 1 to 13 to finish the game
        stacks = new Stacks();
        //Add events and add to the stackgroup
        for (int i = 0; i < 4; i++) {
            addEventHandlers(stacks.getStack(i), false);
            stackGroup.getChildren().add(stacks.getStack(i));
        }
    }

    /*
    Deal the cards on the table and set them up
    */
    private void dealCards() {
        //Create a new deck which gets shuffled automatically
        deck = new Deck();
        //Create a new table which creates 7 columns which contain 20 empty CardPanes each
        gameTable = new Table(this);
        //Add the CardPanes from whole table to the field.
        for (Column c: gameTable.getColumns()) {
            for (CardPane cp : c.getCardPanes()) {
                tablePane.getChildren().add(cp);
            }
        }
        //Pull the cards from the deck and show them on the table. Each column has 1 more card than the previous
        for(int i = 0; i < 7; i++){
            for (int j = 0; j <= i; j++) {
                Card drawnCard = deck.playCard();
                //Shouldn't be null but if it is, show a message
                if(drawnCard != null){
                    gameTable.setCard(i,j,drawnCard);
                    //Make the card shown if it's the last card in the column and add events
                    if(j == i){
                        gameTable.setShown(i, j, true);
                        addEventHandlers(gameTable.getCardPane(i,j), false);
                    }
                } else{
                    System.out.println("Deck is empty error");
                }
            }
        }
    }

    //Load and show the next cards in the created card stacks
    private void initNextCards() {
        nextCards = new NextCards(nextCardStackPane, this);
        nextCardButton.setVisible(true);
        showNextCards(null);
    }

    private void showNextCards(MouseEvent mouseEvent) {
        nextCards.show(deck);
        addEventHandlers(nextCards.getCardPane(2), false);
    }

    //Give a node to add drag and drop handlers. Once everything is finished,
    //this method gets called with null to ensure the handlers aren't called too early
    public void addEventHandlers(Node node, boolean doNow){
        DragDetectedHandler dragDetectedHandler = new DragDetectedHandler(stacks, nextCards, gameTable);
        DragOverHandler dragOverHandler = new DragOverHandler();
        DragDroppedHandler dragDroppedHandler = new DragDroppedHandler(stacks, gameTable, nextCards);
        DragDoneHandler dragDoneHandler = new DragDoneHandler(gameTable, nextCards);
        if(!doNow && node != null){
            eventHandlerQueu.add(node);
        } else {
            if(node == null) {
                while (eventHandlerQueu.peek() != null) {
                    Node currentNode = eventHandlerQueu.pop();
                    currentNode.setOnDragDetected(dragDetectedHandler);
                    currentNode.setOnDragOver(dragOverHandler);
                    currentNode.setOnDragDropped(dragDroppedHandler);
                    currentNode.setOnDragDone(dragDoneHandler);
                }
            } else {
                node.setOnDragDetected(dragDetectedHandler);
                node.setOnDragOver(dragOverHandler);
                node.setOnDragDropped(dragDroppedHandler);
                node.setOnDragDone(dragDoneHandler);
            }
        }
    }
}
Answer: Keep it simple
public void addEventHandlers(Node node, boolean doNow){
This is a weird interface. The same method can
Enqueue a node for later processing,
Process the queued nodes, or
Process a node immediately.
Why?
Consider making three methods. One for each purpose. Then instead of passing different parameters, you can just call the right one. As originally written, if you accidentally call the method with a null node, it will immediately clear the queue.
The general rule of thumb is that a method should do one thing. This method has three different behaviors. That's two too many.
addEventHandlers(null, false);
What's this supposed to do? On the one hand, you're telling it not to process nodes, and on the other, you're telling it to process all the nodes. As written, process all the nodes wins. But to figure that out, one has to read the method's code.
Magic numbers
addEventHandlers(nextCards.getCardPane(2), false);
Why 2? What happens if you reorganize? How will this code know that you made a change elsewhere? At a guess, what you actually want to do here is to getTopCardPane(). But of course that depends on how you have the rest of the code arranged.
You also have 4 and 7, although it's less clear to me if those matter outside this code.
Say what you mean
//Shouldn't be null but if it is, show a message
if(drawnCard != null){
The comment doesn't match what the code does. The comment talks about what to do when it's null. The code handles the case when it's not null. Why not do what the comment says?
if (drawnCard == null) {
    System.out.println("Deck is empty error");
    continue;
}
Now you don't need the comment. The code is self-explanatory.
Of course, a bigger issue is that you are notifying a user of a programming error. What is a user supposed to do with this information? You made a mistake and produced a null card during the deal. This code soldiers on.
A better solution is to end the program if this extremely weird situation actually occurs. Then the user would report a null pointer exception. As is, the user might report an empty deck error or just that they stopped being able to play the game. Because the code tries to keep running after encountering what should be a fatal error.
Consider how much simpler it would be if you don't do this defensive checking. The block of code starting
for (int j = 0; j <= i; j++) {
could be
for (int j = 0; j <= i; j++) {
    gameTable.setCard(i, j, deck.playCard());
}
gameTable.setShown(i, i, true);
addEventHandlers(gameTable.getCardPane(i, i), false);
Now instead of checking on every iteration if we're done, we move that behavior after the iterations. We don't need a conditional, as we always do it once.
If you still want the defensive check, then you should exit on it. It's not recoverable.
Interfaces over implementations
private ArrayDeque<Node> eventHandlerQueu = new ArrayDeque<>();
Could be
private Deque<Node> eventHandlerQueu = new ArrayDeque<>();
Then we don't have to worry about what kind of Deque it is anywhere but here. If we want to change in the future, no one else needs to know.
Don't reinvent the wheel
while (eventHandlerQueu.peek() != null) {
You want to iterate while the queue is not empty. So say that
while (!eventHandlerQueu.isEmpty()) {
Don't fetch a value that you then discard. Particularly since chances are that it does the same isEmpty check to return null. So not only is this less readable, it's probably less efficient. | {
"domain": "codereview.stackexchange",
"id": 26611,
"tags": "java, game, javafx"
} |
Parity symmetry complete/detailed definition and the group elements | Question: I am trying to write down a complete/detailed definition for the parity symmetry. Symmetry as a concept is different in mathematics and in physics. There are also many other concepts which differ in their use in physics and mathematics i.e : symmetry group, discrete/continuous group, continuous and discrete symmetry etc.
I am trying to consider the following concepts and phrases for the definition:
Parity symmetry.
Parity transformation.
Invariant property (invariance and symmetry while similar they also differ in use)
parity symmetry group.
discrete symmetry group.
Then:
In physics:
"Parity symmetry describes the invariance of a system, i.e. of its properties, under the parity transformation, a spatial transformation, represented via the symmetry group of parity, a discrete symmetry group."
Is my definition, while including the above listed phrases accurate?
Also, can someone show me the fact that there is a group for parity? I.e elements, neutral element, inverse etc?
In other words, which are the elements and the operation in the symmetry group of parity?
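To make the question concrete, parity in three dimensions can be represented by the matrix $P = -I$ acting on coordinate vectors; a small numerical sketch (my own illustration) then checks the group axioms of closure, identity, and inverses for the two-element set $\{I, P\}$:

```python
import numpy as np

I = np.eye(3)
P = -np.eye(3)            # parity: (x, y, z) -> (-x, -y, -z)
group = [I, P]

# closure: every product of two elements is again in {I, P}
for a in group:
    for b in group:
        assert any(np.allclose(a @ b, g) for g in group)

assert np.allclose(P @ P, I)      # P is its own inverse
assert np.allclose(I @ P, P)      # I is the neutral element
print("{I, P} satisfies the group axioms of Z_2")
```

(In an odd number of spatial dimensions $-I$ is a genuine parity; in 2D it would coincide with a rotation by $\pi$.)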
Answer: Your definition is a good definition. As for the exact group that is the symmetry group of parity, it is $\mathbb{Z}_2$, the group of order 2, because applying parity twice gets you back where you started. The elements of this group are just the parity operation and the identity operation. | {
"domain": "physics.stackexchange",
"id": 97548,
"tags": "representation-theory, parity"
} |
Assumptions behind Ornstein-Zernike correlation function | Question: Let $S(\mathbf q)$ be some correlation function in Fourier space ($\mathbf q$ = wavevector). In the study of condensed matter systems, I have often encountered the statement that a reasonable form for $S(\mathbf q)$ is a Lorentzian, i.e.
$$
S(\mathbf q) = \frac{S(0)}{1+(q\xi)^2} \tag{1}
$$
where $q=|\mathbf q|$ and $\xi$ should be interpreted as a correlation length.
Authors usually refer to $(1)$ as the "Ornstein-Zernike" function, apparently after two papers (a), which unfortunately I wasn't able to find. Apparently, the two authors were discussing the problem of light scattering from a fluid in the vicinity of the liquid-gas transition as the critical point is approached (which I think is called "critical opalescence").
We find this kind of function in the study of magnetic systems, in which case $S$ is the magnetic susceptibility (b), or in the study of density fluctuation in polymer solutions (c).
I know that (1) is related to the Ornstein-Zernike recursive integral equation for the direct pair correlation function $c(\mathbf r,\mathbf r')$, which for a uniform and isotropic system takes in Fourier space the form (d):
$$
\tilde h( q) = \frac{\tilde c ( q)}{1-\rho \tilde c(q)} \tag{2}
$$
where the "tilde" denotes the Fourier transform and $h(r)=g(r)-1$, with $g(r)$ the pair correlation function. I also know that the structure factor (sometimes called "scattering function"), which is nothing else than a response function for density fluctuations, is related to $h$ by
$$
S(\mathbf q) = 1 +\rho \tilde h(\mathbf q) \tag{3}
$$
and that often it is assumed that it has the form $(1)$.
However, it is not clear to me under which assumption does $(1)$ follow from $(2)-(3)$ (even if I suspect that a small wavevector limit is involved).
In general, what I would like to know is: under which assumption can we say that a reasonable form for some correlation function in Fourier space is given by $(1)$?
A mathematically detailed treatment and pertinent references would be greatly appreciated.
PS: It may help to know that the real space functional form corresponding to $(1)$, i.e., its Fourier transform is, in 3D:
$$
\tilde S(\mathbf r) = \frac{\lambda}{r} e^{-r/\xi}
$$
(a): L. S. Ornstein and F. Zernike, Physik. Z., 19, 134 (1918); 27, 761 (1926)
(b): Chaikin P.M., Lubensky T.C. - Principles of Condensed Matter Physics
(c): Doi M., Edwards S.F. - The Theory of Polymer Dynamics
(d) Hansen J.P., McDonald I.R. - Theory of Simple Liquids
Answer: The mathematical assumption that Ornstein and Zernike made was that the direct correlation function is short ranged, in the sense that its second moment
$$
\int d\mathbf{r} r^2 c(\mathbf{r})
$$
is finite, and as a consequence its Fourier transform has a Taylor series expansion in $q$ at low $q$, at least up to second order, which we may write
$$
\tilde{c}(q) = \tilde{c}(0)[1- \alpha q^2] + \ldots
$$
since any linear term in $q$ must vanish by symmetry. This leads almost immediately to your first equation, which is taken to describe the low-$q$ behaviour of $S(q)$:
$$
S(q)=\frac{1}{1-\rho\tilde{c}(q)}
\approx \frac{S(0)}{1+\xi^2q^2}
$$
where
$$
\xi^2=\alpha\frac{\rho\tilde{c}(0)}{1-\rho\tilde{c}(0)} =\alpha \rho\tilde{h}(0).
$$
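As a sanity check (my own, with arbitrary parameter values), one can verify numerically that once $\tilde c(q)$ is truncated at order $q^2$, the expression $S(q)=1/(1-\rho\tilde c(q))$ is not merely approximately but exactly the Lorentzian above, so the whole approximation lives in the truncation of $\tilde c(q)$:

```python
import math

def S_direct(q, rho, c0, alpha):
    # S(q) = 1 / (1 - rho * c~(q)) with the truncated c~(q) = c~(0) [1 - alpha q^2]
    return 1.0 / (1.0 - rho * c0 * (1.0 - alpha * q * q))

def S_lorentzian(q, rho, c0, alpha):
    # S(q) = S(0) / (1 + xi^2 q^2) with xi^2 = alpha rho c~(0) / (1 - rho c~(0))
    S0 = 1.0 / (1.0 - rho * c0)
    xi2 = alpha * rho * c0 / (1.0 - rho * c0)
    return S0 / (1.0 + xi2 * q * q)

rho, c0, alpha = 1.0, 0.5, 0.2          # arbitrary illustrative values
for q in [0.0, 0.1, 0.5, 1.0, 2.0]:
    assert math.isclose(S_direct(q, rho, c0, alpha),
                        S_lorentzian(q, rho, c0, alpha), rel_tol=1e-12)
```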
There's plenty of discussion of this in the literature on critical phenomena, for example section 3 of ME Fisher J Math Phys, 5, 944 (1964) and very briefly in section 9.2 of Rowlinson and Widom, Molecular theory of capillarity. It is very clearly explained in the last chapter on 'Phase Transitions' of the First Edition of Hansen and McDonald's book Theory of Simple Liquids. Unfortunately (IMHO) they dropped that chapter in later editions!
The point of this is that, although the correlation length that characterizes $h(r)$ will diverge as the critical point is approached, the $c(r)$ function may be assumed to remain of finite range, and your eqn (1) still applies, and may be used to discuss density fluctuations which give rise to critical opalescence. This assumption, that $c(r)$ remains short ranged even at the critical point, turns out not to be quite true, but it is a reasonable first approximation. | {
"domain": "physics.stackexchange",
"id": 56417,
"tags": "statistical-mechanics, correlation-functions, critical-phenomena"
} |
Programming Inverse Kinematics in C++ | Question: I want to write my own kinematics library for my project in C++. I do understand that there are a handful of libraries like RL (Robotics Library) and ROS with inverse kinematics solvers. But to my dismay, these libraries DO NOT support the MacOS platform. I have already written the Forward Kinematics part, which was quite straightforward. But for the Inverse Kinematics part, I am quite skeptical, since the solution to an IK problem involves solving sets of non-linear simultaneous equations. I found out the Eigen/Unsupported 3.3 module has APIs for non-linear equations. But before I begin on this uncertain path, I want to get some insight from you guys on the plausibility and practicality of writing my own IK library. My manipulator design is rather simple with 4 DoF, and the library will not be used for other manipulator designs. So what I am trying to achieve is a tailor-made IK library for my particular manipulator design rather than a 'universal' library.
So,
Am I simply trying to reinvent the wheel here by not exploring the already available libraries? If yes, please suggest examples of IK libraries for MacOS platform.
Has anyone written their own IK library? Is it a practical solution? Or is it rather complex problem which is not worth solving for a particular manipulator design?
Or should I just migrate all my project code (OpenCV) to a Linux environment and develop the code for IK in Linux using existing libraries?
Answer: It is rather straightforward to implement inverse kinematics for a particular manipulator in C++. Of course, you need to begin with the inverse kinematic equations themselves. Putting those into code will only involve a few trigonometric functions such as acos, asin, and atan2 (use atan2 instead of atan), and probably a couple of square and square root terms.
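As a hypothetical illustration of how compact this can be (a sketch for a 2-DoF planar arm with made-up link lengths, not the asker's 4-DoF design), here is a closed-form IK solution using exactly the functions mentioned above, verified against its forward kinematics:

```python
import math

def ik_2link(x, y, l1, l2):
    """Closed-form IK for a planar 2-link arm; returns one of the two elbow solutions."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)                                  # elbow angle
    # atan2 (not atan) keeps the correct quadrant for the shoulder angle
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

def fk_2link(q1, q2, l1, l2):
    # forward kinematics, used here only to check the IK result
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

q1, q2 = ik_2link(1.0, 1.0, 1.0, 1.0)
x, y = fk_2link(q1, q2, 1.0, 1.0)
assert math.isclose(x, 1.0) and math.isclose(y, 1.0)    # FK recovers the target
```

A tailor-made 4-DoF solver is the same idea with one or two more angles, often by decoupling the wrist from the position problem.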
You will learn a lot by doing this. | {
"domain": "robotics.stackexchange",
"id": 1360,
"tags": "ros, inverse-kinematics, c++, forward-kinematics, linux"
} |
Assumption of the equipartition theorem in the Langevin equation | Question: To show Einstein's diffusion relation, one can develop the mean square displacement from the Langevin equation as shown in https://en.wikipedia.org/wiki/Equipartition_theorem#Brownian_motion In this demonstration one uses the fact that the solute that is diffusing in a liquid is in thermal equilibrium with the liquid, and hence they share the same temperature. Thus one can use the equipartition theorem to substitute some parts of the Langevin equation given the temperature of the fluid. I assume that the solute is just one more molecule of the system, and since its mass is much, much higher than that of the liquid molecules, its speed will be much lower, because the average energy of the solute and of a single molecule is the same. But now, the question that I have is related to the equipartition theorem. Wasn't this result only valid for a monatomic ideal gas? How can you apply it to a liquid with a big solute inside it that is undergoing so many collisions with the molecules of the liquid?
Answer: Late, but this is an answer.
The equipartition theorem has a general validity, provided classical statistical mechanics can be applied. In the present context, it is a general result on the average value of $\langle p_i^2\rangle$ ($p_i$ being one component of the momentum of one particle) and does not depend on whether such a particle is an atom or a whole molecule in a one-component system or in a mixture and also does not depend on the thermodynamic phase.
Such independence of the system can be seen almost immediately by looking at the derivation of the result in the canonical ensemble (though the result itself is independent of the ensemble).
It is simply a matter of noticing that if the Hamiltonian has the form
$$
H(p_1,\dots,p_{3N},q_1,\dots,q_{3N})= \sum_{i=1}^{3N}\frac{p_i^2}{2m_i}+U(q_1,\dots,q_{3N})
$$
we have
$$
\langle p_i^2\rangle=\frac{1}{Z}\int{\mathrm d}\Gamma p_i^2 e^{-\beta H},\tag{1}
$$
where $Z$ is the canonical partition function, $\beta=\frac{1}{k_BT}$ and
${\mathrm d}\Gamma$ is the phase space volume measure of classical statistical mechanics.
Notice that the exponential (Boltzmann's factor) can be factorized into
$$ e^{-\beta H} = e^{-\beta \frac{p_i^2}{2m_i}} e^{-\beta H'}, $$
where $H'$ does not depend on $p_i$. Therefore the explicit integral in $(1)$ and $Z$ factorize leaving
$$
\langle p_i^2\rangle=\frac{\int_{-\infty}^{+\infty}p_i^2\, e^{-\beta \frac{p_i^2}{2m_i}}\,{\mathrm d}p_i}{\int_{-\infty}^{+\infty}e^{-\beta \frac{p_i^2}{2m_i}}\,{\mathrm d}p_i}.
$$
After a simple integration by parts, we get
$$
\langle p_i^2\rangle=\frac{m_i}{\beta}
$$
and finally $\langle \frac{p_i^2}{2m_i}\rangle=\frac{k_BT}{2}$.
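The last step is easy to verify numerically; the sketch below (my own, with arbitrary units and parameter values) evaluates the ratio of the two Gaussian integrals with a plain trapezoidal rule and recovers <p_i^2> = m_i/beta:

```python
import math

def p2_average(m, beta, half_width=50.0, n=100001):
    """<p^2> = Int p^2 exp(-beta p^2/(2m)) dp / Int exp(-beta p^2/(2m)) dp,
    evaluated with a trapezoidal rule over [-half_width, half_width]."""
    dp = 2.0 * half_width / (n - 1)
    num = den = 0.0
    for i in range(n):
        p = -half_width + i * dp
        w = 0.5 if i in (0, n - 1) else 1.0      # trapezoid end-point weights
        boltz = math.exp(-beta * p * p / (2.0 * m))
        num += w * p * p * boltz
        den += w * boltz
    return num / den

m, beta = 2.0, 0.5                               # arbitrary illustrative values
assert math.isclose(p2_average(m, beta), m / beta, rel_tol=1e-9)
```

Note that nothing in the integrand knows whether the particle is an atom, a molecule, or a large solute; only the mass enters.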
As it is clear from the derivation, no assumption on the phase (solid, liquid, or gas) has been used. | {
"domain": "physics.stackexchange",
"id": 90802,
"tags": "fluid-dynamics, statistical-mechanics, kinetic-theory, brownian-motion"
} |
Working with the DCT | Question: I am having a very hard time implementing the DCT algorithm. I have quite a few requirements: it has to work with NxN matrices, or at least power-of-2 sizes; it has to be 2D; it has to produce the same output as FFTW's fftwf_plan_r2r_2d(FFTW_REDFT10); it has to use real data; and I need DCT I, II and III. And it has to be fast!
1) I could use FFT to compute DCT and wikipedia mentions this:
"One can also compute DCTs via FFTs combined with O(N) pre- and post-processing steps. In general, O(N log N) methods to compute DCTs are known as fast cosine transform (FCT) algorithms."
What are those 2 steps?
2) If I use Apples vDSP library is that even a good idea? since it needs an array twice as big as the DCT array (2N with radix2). And also vDSP is 1D only so I would end up doing it for each row and column to get a 2D result.
3) Is it possible to use an algorithm that works on 8x8 blocks and adapt it for my needs?
I am very new to learning about DSP so any help is appreciated! Thanks!
ps: Does anyone have any sample code for what I need? would help me a lot
Answer: You want to implement a 2D NxN fast DCT-i (Discrete Cosine Transform),
where -i refers to the type of DCT; the types differ in slight modifications, with type-II (DCT-II) being the most widely used one, e.g. in old JPEG image codecs.
Note: since both the 2D FFT and the 2D DCT have separable kernels, a 2D DCT-II is implemented from a 1D DCT-II in practice (as you described).
In the literature you may find many fast implementations of DCT.
Here I will give an example of how to use FFTs to implement a DCT-II in 1D (from K.R. Rao). It's not the most efficient one though.
step-1: given the N-point (length N) signal xN[n] for n=0 to n=N-1
step-2: pad it with N zeros at the end to make it a 2N-point signal x2N[n]
step-3: compute the 2N-point FFT of this 2N-point x2N[n], call it X2N[k]
step-4: multiply X2N[k] by c(k) * exp(-j*k*PI/(2*N)),
where c(0)=1/sqrt(2) and c(k)=1 for k=1,...,N-1
step-5: naming the output of step-4 as Y[k], the 1D DCT is DCT[k] = Re{Y[k]}, k=0,...,N-1
step-6: multiply DCT[k] by sqrt(2/N) if you need to get a normalized DCT[k]
This is for the 1D DCT-II computation. After getting it you should adapt it into a row-column decomposition to get a 2D result.
here is a sample matlab code to verify the steps:
------------------------------------------------
N=8; % Length of DCT-II
x=randn(1,N); % create a test signal
c = ones(1,N); % prepare the coefficients c[k]
c(1)= 1/sqrt(2); %
x2 = [x zeros(1,N)]; % pad xN[n] with zeros (step-2)
X2 = fft(x2 , 2*N); % get FFT of padded signal x2N[n] (step-3)
Y = c.* exp(-j* pi* [0:N-1]/(2*N)).* X2(1:N); % (step:4)
XdctNormalized = sqrt(2/N).* real(Y); % (steps 5-6)
-----------------------------------------------
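For readers without Matlab, here is a Python sketch of the same six steps. A naive O(N^2) DFT stands in for the FFT to keep the snippet self-contained (so it is illustrative, not fast), and the result is checked against the DCT-II definition:

```python
import cmath
import math

def dft(x):
    # naive O(N^2) DFT; stands in for an FFT to keep the sketch self-contained
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def dct2_via_fft(x):
    N = len(x)
    x2 = list(x) + [0.0] * N                       # step-2: zero-pad to 2N
    X2 = dft(x2)                                   # step-3: 2N-point transform
    out = []
    for k in range(N):                             # steps 4-5: twiddle, take real part
        c = 1.0 / math.sqrt(2.0) if k == 0 else 1.0
        out.append((c * cmath.exp(-1j * math.pi * k / (2 * N)) * X2[k]).real)
    return [math.sqrt(2.0 / N) * v for v in out]   # step-6: normalize

def dct2_direct(x):
    # normalized DCT-II straight from the definition, for comparison
    N = len(x)
    return [math.sqrt(2.0 / N) * (1.0 / math.sqrt(2.0) if k == 0 else 1.0) *
            sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N)) for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
assert max(abs(a - b) for a, b in zip(dct2_via_fft(x), dct2_direct(x))) < 1e-9
```

With a real FFT library in place of `dft`, the same structure gives the O(N log N) version.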
You can actually find these in the following books:
1- 2D Image and Signal Processing (Jae S. Lim)
2- Techniques & Standards for Image-Video & Audio Coding (K.R.Rao) | {
"domain": "dsp.stackexchange",
"id": 2328,
"tags": "dct, real-time, 2d, 1d"
} |
AI natural voice generator | Question: I want to create a solution which clones my voice. I tried some commercial solutions as well as an implementation of Tacotron. Unfortunately, the results do not sound natural; the generated voice sounds like a robot. Could anybody recommend a good alternative?
Answer: The reason for the robot-like speech may be that Tacotron uses Griffin-Lim as its vocoder, which cannot reproduce sound with perfection, often introducing robot-like sound artifacts.
A vocoder is a network that transforms a spectrogram image back into a speech waveform. Tacotron and many other speech-generation neural networks use a CNN to generate a spectrogram instead of raw waveforms as output. A spectrogram is a lossy representation of the raw audio waveform, so a perfect reconstruction of the audio is not possible. Griffin-Lim is a vocoder that uses an algorithmic way to transform a spectrogram into an audio waveform, but it often introduces a robot-like quality to the generated waveforms. A neural-network-based vocoder can solve the problem. The WaveNet vocoder is often used in speech generation, as it can transform the spectrogram to audio with few artifacts. Many new speech-generation models use the WaveNet vocoder as the default vocoder. For a public implementation, this is a good GitHub repository: https://github.com/r9y9/wavenet_vocoder
You can also use the newer tacotron 2 which uses the wavenet vocoder as the default vocoder. You can check it out here: https://github.com/Rayhane-mamah/Tacotron-2 | {
"domain": "ai.stackexchange",
"id": 1508,
"tags": "deep-learning, voice-recognition"
} |
Stack implementation using Swift | Question: I would like to request some code review and feedback on a stack implementation using Swift.
Disclaimer: I might not have used protocols, optional unwrapping, and error/nil checks correctly (still trying to figure out the essence of protocols - but that's the drill I'm going through now).
import Foundation
/* Node conformance */
protocol NodeProtocol {
typealias IDType;
typealias IDNode;
func insertNode(element : IDType, nextNode : IDNode?);
}
/* Stack's conformance prototol */
protocol StackProtocol {
typealias IDType;
/* Function to push element on stack */
func push(element : IDType);
}
class IDNode<IDType> : NodeProtocol
{
var element: IDType?;
var nextNode: IDNode?;
/* Sets the element and updates next pointer */
func insertNode(element: IDType, nextNode: IDNode?) {
self.element = element;
self.nextNode = nextNode;
}
}
class IDStack<IDType> : StackProtocol
{
private var stackHead: IDNode<IDType>?;
init() {
/* Constructor to initialize the stack */
stackHead = IDNode<IDType>();
}
func push(element: IDType) {
/* If stackHead is empty - just create a new stack
* Ideally the constructor should have taken care of this.
* but in our case - because of optional. stackHead can be NULL.
* So we don't want to take any chance while pushing element to stack.
*/
if(stackHead == nil) {
stackHead = IDNode<IDType>();
}
/* if stack's first element is empty - Insert as the first element
* At this point stack is guaranteed not NULL. Right?
* What if memory allocation fails from above?
*/
if(stackHead!.element == nil) {
stackHead!.insertNode(element, nextNode: nil);
}else {
/* create a new node and insert the new element at top */
var nodeToPushInStack: IDNode<IDType>! = IDNode<IDType>();
/* I'm assuming memory allocation always passes from above.
* Is it correct?
*/
nodeToPushInStack.insertNode(element, nextNode: stackHead);
/* Update stack's head to the new Node */
stackHead = nodeToPushInStack;
}
}
func popElement() -> IDType? {
/* Remove from top and return the element. */
var itemPoppedFromStack : IDType?;
if(self.stackHead == nil) {
/* Stack is empty / not initialized */
return nil;
}
itemPoppedFromStack = self.stackHead!.element!;
if(itemPoppedFromStack == nil) {
return nil;
} else {
/* restore pointer order */
stackHead = stackHead!.nextNode;
}
return itemPoppedFromStack;
}
func getLength() -> Double {
var length: Double = 0;
var headNode: IDNode<IDType>! = self.stackHead;
if(headNode == nil) {
/* stack is probably empty, just return 0 */
return 0;
}
while(self.stackHead != nil) {
length++;
self.stackHead = self.stackHead!.nextNode;
}
/* restore back stackHead to original head */
self.stackHead = headNode;
return length;
}
}
Answer: Let's start with some general remarks:
Swift does not require semicolons after statements (but they are allowed). From what I have seen since Swift was introduced last year,
most people do not write semicolons in Swift.
Classes are reference types, and the properties of an instance of a class
can be modified even if the instance is declared as a constant with let.
Therefore you can replace var by let at many places in your code,
e.g.
let nodeToPushInStack: IDNode<IDType>! = IDNode<IDType>()
Often an explicit type annotation is not necessary because the compiler
can infer the type of an expression automatically. E.g. the above statement can be shortened to
let nodeToPushInStack = IDNode<IDType>()
Now the "major" points. To simplify things, I'll consider the
implementation without protocols first, so in the first part
of this answer I'll ignore your protocol definitions and assume that
the classes are defined as
class IDNode<IDType> { ... }
class IDStack<IDType> { ... }
First the IDNode class: You have defined element as a variable
and optional, but actually node elements cannot be nil and
are never mutated once a node has been pushed on the stack.
You probably did that because you create "empty" nodes first and then
call the insertNode() method to set the element and next pointer.
Another reason is that you use nodes "without an element", but I'll come
back to that later.
I would suggest to make element a constant and non-optional
and replace the insertNode() method by an init method:
class IDNode<IDType>
{
let element: IDType
let nextNode: IDNode?
init(element : IDType, before: IDNode?) {
self.element = element
self.nextNode = before
}
}
Which leads us to the IDStack class: The push and pop methods are
a bit too complicated. The reason is that you use a single node
with element == nil for an empty stack, a single node with
element != nil for a stack with a single element, and then
a linked list of nodes for two or more elements.
It becomes much simpler if you use a linked list with one node for each element in each case, and stackHead points to the front node,
or is nil for an empty stack. Then you don't need an init() method
because the optional property
private var stackHead: IDNode<IDType>?
is automatically initialized to nil, and pushing an element
simplifies to
func push(element: IDType) {
let newNode = IDNode(element: element, before: stackHead)
stackHead = newNode
}
Popping an element becomes simpler as well. Note that the preferred
way to check an optional (here: stackHead) for nil is optional
binding:
func popElement() -> IDType? {
if let firstNode = stackHead {
let element = firstNode.element
stackHead = firstNode.nextNode
return element
} else {
return nil
}
}
Remark: Your comment
/* Stack is empty / not initialized */
is misleading: Variables cannot be uninitialized in Swift. This is
ensured by the compiler and one of the major design goals in Swift.
Finally your getLength() method: Using Double as return type seems
quite strange to me, Int would be appropriate. And I would use
a computed property count instead, similar to the count property
of Swift's sequence types.
Modifying self.stackHead inside the method temporarily is a bad idea
and I can see no reason to do so. Use a local variable instead which
traverses through the linked list, and again use optional binding
to check if you have hit the end of the list:
var count : Int {
var numItems = 0
var currentNode = self.stackHead
while let node = currentNode {
numItems++
currentNode = node.nextNode
}
return numItems;
}
Back to the protocols: Defining a protocol for "Stack" makes only sense
if it contains all required methods:
protocol StackProtocol {
typealias IDType
func push(element : IDType)
func popElement() -> IDType?
var count : Int { get }
}
Then you can create an instance of IDStack and pass it to a function
that expects a StackProtocol. Simple example:
func test<S : StackProtocol where S.IDType == String>(stack : S) {
stack.push("foo")
stack.push("bar")
stack.push("baz")
print(stack.count)
while let element = stack.popElement() {
print(element)
}
}
test(IDStack<String>())
Of course you can also define protocols for both the node and the
stack type. Now the node protocol should contain all required
properties and methods:
protocol NodeProtocol {
typealias IDType
var element : IDType { get }
var nextNode : Self? { get }
init(element : IDType, before: Self?)
}
and a concrete implementation would be
final class IDNode<IDType> : NodeProtocol
{
let element: IDType
let nextNode: IDNode?
init(element : IDType, before: IDNode?) {
self.element = element
self.nextNode = before
}
}
(Please don't ask me why the final is necessary here :)
To make the stack independent of the used node class, the stack protocol
uses node protocol as generic placeholder (and not IDType as you did):
protocol StackProtocol {
typealias NodeType : NodeProtocol
func push(element : NodeType.IDType)
func popElement() -> NodeType.IDType?
var count : Int { get }
}
and a concrete implementation (in terms of a generic node type)
is
class IDStack<Node : NodeProtocol> : StackProtocol
{
typealias NodeType = Node
private var stackHead: Node?
func push(element: Node.IDType) {
let newNode = Node(element: element, before: stackHead)
stackHead = newNode
}
func popElement() -> Node.IDType? {
if let firstNode = stackHead {
let element = firstNode.element
stackHead = firstNode.nextNode
return element
} else {
return nil
}
}
var count : Int {
var numItems = 0
var currentNode = self.stackHead
while let node = currentNode {
numItems++
currentNode = node.nextNode
}
return numItems;
}
}
As an example, a stack of strings could now be created with
let stack = IDStack<IDNode<String>>() | {
"domain": "codereview.stackexchange",
"id": 14827,
"tags": "beginner, stack, swift"
} |
Image processing, recognizing a small feature in a larger image | Question: I am trying to write an image processing program to recognize bubbles in oil.
It has been suggested I try computing the convolution of the image and an image of a typical bubble.
i.e.: ifft(fft(a)·fft(b)), a pointwise product in the frequency domain, will have high peaks near bubbles.
where:
ifft : inverse fourier transform
fft : fourier transform
a : the image
b : kernel image (bubbles)
Is this a good way to solve this problem? Also, for the pointwise product to work, I need the images to have the same dimensions. This is not likely to be the case.
Answer: Template matching works well for some cases, with some caveats:
1) you will probably need to pre-whiten your image before it will work successfully.
2) your FFTs and IFFT should be of an appropriate length to avoid aliasing due to the circular convolution involved.
See this example of item 1:
The patch we are trying to find is the singer's nose. This is a toy example, because I've used the exact pixels from the image (rather than a "template"), but it illustrates the issue.
The bottom left image shows what happens when you use template matching (as you've described in the question) without doing any processing. The problem is that there is no localization at all: it's very hard to discern the true "peak" in the surface that is generated.
The bottom right image shows what happens when the same operation occurs, but with a (stupidly simple) pre-whitening of applying the diff operation to both the image being searched and the template. It's a little hard to see because the peak is so small, but there is a red peak in the correct place.
This image shows the "side view" of the bottom two images. | {
"domain": "dsp.stackexchange",
"id": 834,
"tags": "image-processing, fourier-transform, opencv, convolution"
} |
Understanding why the definition of life is so challenging | Question: The Wikipedia article on the definition of life states that there is no consensus for the definition of life with at least 123 definitions being proposed.
I am unclear why this is the case.
Answer: Here is a question, at what point in organisation, from molecules to cells, does 'life' suddenly appear?
We can rephrase the question, what is the difference between 'life' and 'non-life'?
The takeaway is that the definition of 'life', and its distinction from 'non-life', like all definitions (i.e., words and concepts) and distinctions, is based on human whim. In other words, all definitions are arbitrary, and are abstractions derived from reality.
Here is another question: if we are constantly replacing the matter that composes 'us', how do we define or distinguish an individual organism? Furthermore, how do we define an atom in a non-arbitrary way?
These questions arise because the distinctions we make between any two things are based on personal choice, and because we do not understand our 'reality' or universe; therefore, we cannot answer these questions.
"domain": "biology.stackexchange",
"id": 12204,
"tags": "life, artificial-life"
} |
ROS package template for Python package generation | Question:
Are there any project templates for ROS Python packages which can be used with a project generator like cookiecutter?
Originally posted by thinwybk on ROS Answers with karma: 468 on 2018-04-06
Post score: 1
Original comments
Comment by gvdhoorn on 2018-04-08:
I don't know of anything distributed "officially" (ie: through some package or other distributable artefact from OSRF or the ROS buildfarm). I know of some work within ROSIN that goes in this direction (slightly further, code generation as well). Not sure when that will be released though.
Comment by thinwybk on 2018-04-08:
Would have been nice if one could streamline package skeleton generation a bit. In case you can provide info about...: What will the ROSIN generation deal with exactly (what will it generate)?
Comment by gvdhoorn on 2018-04-08:
Package structure, nodes, topics, launch files, etc. A bit more than just a skeleton package. And based on a declarative data structure (essentially an xml file, nothing too fancy). And some expectation management: it's not a research topic in ROSIN, so this is a convenience tool.
Comment by thinwybk on 2018-04-08:
Sound helpful. Will this support ROS node packages (which include node/nodelet specific code) separate from ROS lib packages (which include code without any ROS dependencies despite of building)?
Comment by gvdhoorn on 2018-04-08:
Note that the fact that I don't know about something for tools like cookiecutter doesn't mean it doesn't exist.
Answer:
Look like this one is what you are looking for?
(https://github.com/simlei/cookiecutter-ros-python)
Originally posted by DangTran with karma: 66 on 2020-05-30
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 30567,
"tags": "ros"
} |
Potassium chlorate from potassium chloride and hydrogen peroxide | Question: Why can't you make potassium chlorate using the chloride anion from the potassium chloride and having oxygen bond with it from hydrogen peroxide?
Answer: Do not attempt! Peroxide is not a strong enough oxidant to oxidize chloride ion to chlorate. Neither is oxygen. However, peroxide can oxidize chloride to poisonous chlorine gas.
Redox reactions are spontaneous if the two half reactions would produce a positive potential in a galvanic cell. Most redox half reactions have measured standard electrode potentials against a known standard so that they can be compared.
The reaction you want to do is:
$$ \ce{3H2O2 + Cl- -> 3H2O + ClO3-}$$
The two half-reactions are:
Oxidation: $\ce{Cl- +3H2O -> ClO3- +6H+ +6e-}$
Reduction: $\ce{H2O2 +2H+ +2e- -> 2H2O} \ \ \ \ E^ \circ = +1.78 \text{ V}$
The reduction potential of hydrogen peroxide is on the table, but the reduction potential for the other reaction is not. However, the other reaction is the sum of two reactions on the table, so we can use Hess's Law.
$$\ce{ 2ClO3- + 12H+ + 10e- ->Cl2 + 6H2O} \ \ \ \ E^\circ = +1.49 \text{ V}$$
$$\ce{Cl2 +2e- -> 2Cl-} \ \ \ \ E^\circ = +1.36 \text{ V}$$
$$\ce{2ClO3- + 12H+ + 12e- -> 2Cl- + 6H2O} \ \ \ \ E^\circ = +2.85 \text{ V}$$
Since standard potentials do not scale by stoichiometric factor (it's in the Nernst equation instead), the value we just found also works for our oxidation reaction. However, since this reaction is reverse of the "reduction half-reaction", we need a negative sign:
Oxidation: $\ce{Cl- +3H2O -> ClO3- +6H+ +6e-}\ \ \ \ E^ \circ = -2.85 \text{ V}$
Reduction: $\ce{H2O2 +2H+ +2e- -> 2H2O} \ \ \ \ E^ \circ = +1.78 \text{ V}$
Overall: $\ce{3H2O2 + Cl- -> 3H2O + ClO3-}\ \ \ \ E^ \circ = -1.07 \text{ V}$
This reaction is not spontaneous. Peroxide cannot oxidize chloride, at least not to the chlorate ion. Note that peroxide can and does oxidize chloride to poisonous chlorine gas in the presence of acid. This redox is spontaneous and will happen. Do not attempt!
Reduction: $\ce{H2O2 +2H+ +2e- -> 2H2O} \ \ \ \ E^ \circ = +1.78 \text{ V}$
Oxidation: $\ce{2Cl- -> Cl2 + 2e-} \ \ \ \ E^ \circ = -1.36 \text{ V}$
Overall: $\ce{ 2Cl- + H2O2 + 2H+ -> Cl2 + 2H2O} \ \ \ \ E^ \circ = +0.42 \text{ V}$
Peroxide will oxidize chlorine gas to chlorate, but who wants to handle chlorine? | {
"domain": "chemistry.stackexchange",
"id": 594,
"tags": "inorganic-chemistry, synthesis"
} |
how to disable query from beeline results | Question: I am seeing some strange hive-client beeline behavior.
In the output file with the query results there is also the query itself at the beginning and at the end. Is there any option to disable such behavior? I can't see such an option in beeline -help
-bash-4.2$ beeline -help
Usage: java org.apache.hive.cli.beeline.BeeLine
-u <database url> the JDBC URL to connect to
-n <username> the username to connect as
-p <password> the password to connect as
-d <driver class> the driver class to use
-i <init file> script file for initialization
-e <query> query that should be executed
-f <exec file> script file that should be executed
--hiveconf property=value Use value for given property
--hivevar name=value hive variable name and value
This is Hive specific settings in which variables
can be set at session level and referenced in Hive
commands or queries.
--color=[true/false] control whether color is used for display
--showHeader=[true/false] show column names in query results
--headerInterval=ROWS; the interval between which heades are displayed
--fastConnect=[true/false] skip building table/column list for tab-completion
--autoCommit=[true/false] enable/disable automatic transaction commit
--verbose=[true/false] show verbose error messages and debug info
--showWarnings=[true/false] display connection warnings
--showNestedErrs=[true/false] display nested errors
--numberFormat=[pattern] format numbers using DecimalFormat pattern
--force=[true/false] continue running script even after errors
--maxWidth=MAXWIDTH the maximum width of the terminal
--maxColumnWidth=MAXCOLWIDTH the maximum width to use when displaying columns
--silent=[true/false] be more silent
--autosave=[true/false] automatically save preferences
--outputformat=[table/vertical/csv2/tsv2/dsv/csv/tsv] format mode for result display
Note that csv, and tsv are deprecated - use csv2, tsv2 instead
--truncateTable=[true/false] truncate table column when it exceeds length
--delimiterForDSV=DELIMITER specify the delimiter for delimiter-separated values output format (default: |)
--isolation=LEVEL set the transaction isolation level
--nullemptystring=[true/false] set to true to get historic behavior of printing null as empty string
--help display this message
Answer: Use the option --silent=true. | {
"domain": "datascience.stackexchange",
"id": 12235,
"tags": "apache-hadoop, hive"
} |
Drawing conclusions from impulse response of a discrete LTI system | Question: I have an impulse response of an LTI system which is
h(n)= n*u(n)-u(n-2) where n=[0,3] (1)
Now, regarding the stability and causality of this system, I've drawn the conclusion that the system is causal, since the output only relies on past or present values, and stable, since h(n) is bounded over the finite range n=0,1,2,3. Correct me here if I'm wrong, but in case h(n) were u(n) - u(n-2) without knowing anything about n, it would still be stable, right? And if h(n) were the same as the original (1) but again without restricting n, it would be unstable, correct?
Now I want to decompose this h(n) into odd and even components and also find the linear difference equation for h(n), but I can't find anything in my textbook that is of any help. It's the first time I've attended a signals and systems course and I'm really struggling to understand a few things, so any help here would be greatly appreciated.
Answer: You're right about your conclusions. And now a few hints to help you solve the questions:
Your textbook should state somewhere the definition of even and odd parts of a (real-valued) sequence: $$h_e[n]=\frac12\big(h[n]+h[-n]\big)\\h_o[n]=\frac12\big(h[n]-h[-n]\big)$$
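Both the even/odd decomposition and the FIR difference equation can be checked concretely for this h(n) (a sketch, variable names are mine); its nonzero samples are h = [0, 1, 1, 2] for n = 0..3:

```python
# h[n] = n*u[n] - u[n-2] restricted to n = 0..3  ->  h = [0, 1, 1, 2]
h = {0: 0.0, 1: 1.0, 2: 1.0, 3: 2.0}

def hv(n):
    # h[n], zero outside its support
    return h.get(n, 0.0)

def h_even(n):
    return 0.5 * (hv(n) + hv(-n))

def h_odd(n):
    return 0.5 * (hv(n) - hv(-n))

# the two parts reconstruct h[n] on -3..3
for n in range(-3, 4):
    assert h_even(n) + h_odd(n) == hv(n)

def y(x, n):
    """FIR difference equation: y[n] = sum_k h[k] x[n-k]
       = x[n-1] + x[n-2] + 2 x[n-3] for this h."""
    return sum(hv(k) * x.get(n - k, 0.0) for k in range(4))

# an impulse input reproduces the impulse response, as expected
delta = {0: 1.0}
assert [y(delta, n) for n in range(4)] == [0.0, 1.0, 1.0, 2.0]
```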
In this case (FIR filter), the difference equation is simply given by the convolution of the input sequence $x[n]$ with the impulse response $h[n]$: $$y[n]=\sum_kx[k]h[n-k]$$ | {
"domain": "dsp.stackexchange",
"id": 9096,
"tags": "discrete-signals, impulse-response"
} |
install ros on odroid c2 running Ubuntu 16.04 | Question:
Hello community,
I'm trying to install ROS Kinetic on ODROID C2 running Ubuntu 16.04 but am facing some errors while running the rosdep command. I also tried the source installation but no luck! The wiki page I followed - [http://wiki.ros.org/kinetic/Installation/Source]
I had problems with the qtbase5 dependency installation, so I tried installing Qt 5 separately, but that didn't work either. I want to install MAVROS on the ODROID C2, so any help on this would be really appreciated. I tried looking at all the wiki pages possible but couldn't find a solution, and saw others facing the same problem!
Thanks in advance
Originally posted by audia8.sid on ROS Answers with karma: 46 on 2016-07-25
Post score: 1
Original comments
Comment by neuronet on 2016-07-25:
Which step gave which error? Could you post the specific messages?
Comment by neuronet on 2016-07-25:
Have you seen this: http://wiki.ros.org/indigo/Installation/UbuntuARM
I'm not sure Kinect is there yet. See http://answers.ros.org/question/187472/ros-on-odroid-u3/ and perhaps http://answers.ros.org/question/235258/kinetic-desktop-full-for-ubuntu-armhf. I'm a noob and not ARM-user--good luck.
Comment by audia8.sid on 2016-07-25:
Thanks for the info. I followed the steps to install ROS Indigo for Debian and somehow it worked! Thank you so much for the links.
Comment by neuronet on 2016-07-26:
That's good, and it's supported through 2019. I would suggest, whatever you did (even if it is just to follow the instructions at that link), that you turn it into an answer and accept it.
Answer:
I followed the instructions to install ROS Indigo for Debian and it worked fine on the ODROID C2 [Ubuntu 16.04 Xenial]
link text
But finally I wanted to install mavros on it and could not complete that one! So, if anyone has done it or has any idea please help me out!
Thanks
Originally posted by audia8.sid with karma: 46 on 2016-07-26
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Sam94 on 2016-08-07:
Did you ever get mavros working? I need to get a C2 setup with mavros if it's possible. Using Ubuntu 16.04 with Indigo seems like it could cause a lot of issues, so I haven't tried yet.
Comment by audia8.sid on 2016-08-10:
Yes indeed! I installed ROS Kinetic and then got the MAVROS working on the C2 with some twitches in the installation.
Comment by Sam94 on 2016-08-11:
How did you install Kinetic? I tried a couple of methods and kept running into dependency issues. If you get a chance, can you please post a solution? Thanks.
Comment by audia8.sid on 2016-08-12:
Yea, if you directly follow the steps you get dependency issues. So, for that I removed the mali-X11 package, installed qt5-default and then installed ROS. After that, I reinstalled mali-X11 and was able to run roscore. Detailed steps
Comment by Sam94 on 2016-08-31:
Thanks, I got it installed about a week ago. As a quick addendum, this has to be done on a fresh installation of Ubuntu, if you try it after using 'apt-get dist-upgrade' you won't be able to access the necessary packages.
Comment by audia8.sid on 2016-09-01:
OK great! Thanks for the info and cool you got it working. | {
"domain": "robotics.stackexchange",
"id": 25341,
"tags": "ros, ubuntu-xenial, ubuntu"
} |
Is the Landau free energy scale-invariant at the critical point? | Question: My question is different but based on the same quote from Wikipedia as here. According to Wikipedia,
In statistical mechanics, scale invariance is a feature of phase transitions. The key observation is that near a phase transition or critical point, fluctuations occur at all length scales, and thus one should look for an explicitly scale-invariant theory to describe the phenomena. Such theories are scale-invariant statistical field theories, and are formally very similar to scale-invariant quantum field theories.
Question I understand that at the critical point the correlation length $\xi$ diverges and, as a consequence, the correlation functions $\langle\phi(\textbf{x})\phi(\textbf{y})\rangle$ behave as a power law. Power laws are scale-invariant. But for the theory itself to be scale-invariant (as Wikipedia claims), the Landau free energy functional should be scale-invariant at the critical point. But the free energy functional is a polynomial in the order parameter, and polynomials are not scale-invariant.
Then how is the claim that the relevant statistical field theory is scale-invariant justified?
Answer: I answered a very similar question here, but in the context of quantum field theory rather than statistical field theory. The point is that it is impossible to have a nontrivial fixed point classically (i.e. without accounting for quantum/thermal fluctuations) for exactly the reason you stated: the dimensionful coefficients will define scales.
We already know that quantum/thermal fluctuations can break scale invariance, e.g. through the phenomenon of dimensional transmutation, where a quantum theory acquires a mass scale which wasn't present classically. And what's going on here is just the same process in reverse: at a nontrivial critical point the classical scale-dependence of the dimensionful coefficients is exactly canceled by quantum/thermal effects. Of course this cancellation is very special, which is why critical points are rare. | {
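To make the classical power counting explicit (a sketch with standard notation, not taken from the question), consider the usual Landau functional
$$F[\phi]=\int d^dx\left[\tfrac{1}{2}(\nabla\phi)^2+\tfrac{r}{2}\phi^2+\tfrac{u}{4!}\phi^4\right]$$
Under a rescaling $x\to\lambda x$ with $\phi\to\lambda^{-(d-2)/2}\phi$, the gradient term is invariant, while the quadratic and quartic terms pick up factors $\lambda^{2}$ and $\lambda^{4-d}$ respectively. So the classical functional is scale-invariant only at $r=u=0$ (the Gaussian fixed point); at a nontrivial critical point this classical scale-dependence has to be cancelled by the fluctuation contributions, exactly as stated above.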
"domain": "physics.stackexchange",
"id": 47709,
"tags": "statistical-mechanics, field-theory, phase-transition, critical-phenomena, scale-invariance"
} |
How to set/get ROS2 params from another node using Python? | Question:
Hi guys,
how can I get/set the parameter value of another node?
Something like
ros2 param get /my_node my_param
But in code?
Thank you all!
Originally posted by MLEV on ROS Answers with karma: 25 on 2022-03-17
Post score: 1
Answer:
https://github.com/ros-planning/navigation2/issues/2415#issuecomment-1028468173
Ask and ye shall receive
import rclpy
from rclpy.node import Node
from rcl_interfaces.srv import SetParameters
from rcl_interfaces.msg import Parameter, ParameterValue, ParameterType

class SetExternalParam(Node):
    def __init__(self, server_name):
        super().__init__('nav2_simple_commander_parameter_setter')
        self.cli = self.create_client(SetParameters, '/' + server_name + '/set_parameters')
        while not self.cli.wait_for_service(timeout_sec=1.0):
            self.get_logger().info('service not available, waiting again...')
        self.req = SetParameters.Request()

    def send_request(self, param_name, param_value):
        # check bool before int: isinstance(True, int) is True in Python
        if isinstance(param_value, bool):
            val = ParameterValue(bool_value=param_value, type=ParameterType.PARAMETER_BOOL)
        elif isinstance(param_value, float):
            val = ParameterValue(double_value=param_value, type=ParameterType.PARAMETER_DOUBLE)
        elif isinstance(param_value, int):
            val = ParameterValue(integer_value=param_value, type=ParameterType.PARAMETER_INTEGER)
        elif isinstance(param_value, str):
            val = ParameterValue(string_value=param_value, type=ParameterType.PARAMETER_STRING)
        self.req.parameters = [Parameter(name=param_name, value=val)]
        self.future = self.cli.call_async(self.req)
        while rclpy.ok():
            rclpy.spin_once(self)
            if self.future.done():
                try:
                    response = self.future.result()
                    # the SetParameters response holds a list of SetParametersResult
                    if response.results[0].successful:
                        return True
                except Exception:
                    pass
                return False
        return False
Originally posted by stevemacenski with karma: 8272 on 2022-03-17
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by MLEV on 2022-03-18:
Thank you, I'll try it | {
"domain": "robotics.stackexchange",
"id": 37512,
"tags": "ros, ros2, python3"
} |
Why do two objects traveling at the speed of light not collide at 2 times the speed of light? | Question: I was watching a Brian Cox lecture, and he touched upon the notion that if two objects were moving towards each other at the speed of light and collided, they would collide at no greater than the speed of light, when I believe they should collide at twice the speed of light. Why is this?
Answer: (This answer elaborates on the comment from @Armando .)
Consider this spacetime diagram on rotated graph paper.
Alice uses Minkowski-right triangle OPQ, with right-angle P and legs parallel to her axes to declare that Bob has velocity $v_{BA}=6/10$ with respect to Alice. (Any triangle similar to OPQ could be used to determine this velocity.)
Bob uses Minkowski-right triangle OXY, with right-angle X and legs parallel to his axes to declare that Carol has velocity $v_{CB}=5/13$ with respect to Bob.
So, what is the velocity of Carol with respect to Alice?
Alice uses Minkowski-right triangle OTY, with right-angle T and legs parallel to her axes to declare that Carol has velocity $v_{CA}=16/20$ with respect to Alice.
These velocities are related by the velocity composition formula
$$\begin{align}v_{CA}
&=\frac{v_{CB}+v_{BA}}{1+v_{CB}v_{BA}}\\
&=\frac{(\frac{5}{13})+(\frac{6}{10})}{1+(\frac{5}{13})(\frac{6}{10})}=(\frac{4}{5})
\end{align}$$
(This is the equation used by @NoethersOneRing .)
The Euclidean analogue of this is asking how "slopes" "add [compose]".
Geometrically, angles add... and slopes are equal to the tangent-function of the angle... but slopes don't add.
In special relativity, using the Minkowski-angle [the rapidity $\theta$, where $v=\tanh\theta$ and $\gamma=1/\sqrt{1-v^2}=\cosh\theta$], we have
$$\begin{align}v_{CA}=\tanh\theta_{CA}
&=\tanh(\theta_{CB}+\theta_{BA})\\
&=\frac{\tanh\theta_{CB}+\tanh\theta_{BA}}{1+\tanh\theta_{CB}\tanh\theta_{BA}}\\
&=\frac{v_{CB}+v_{BA}}{1+v_{CB}v_{BA}}\\
\end{align}
$$
So, in this example, (using a calculator or WolframAlpha)
$$v_{CA}=\tanh(\rm arctanh(5/13) + arctanh(6/10) )=4/5.$$
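The same numbers can be checked with a few lines of Python (standard library only); both the rapidity route and the direct composition formula give 4/5:

```python
import math

v_cb, v_ba = 5 / 13, 6 / 10

# compose rapidities, then map back to a velocity (in units of c)
v_ca = math.tanh(math.atanh(v_cb) + math.atanh(v_ba))

# the velocity-composition formula applied directly
v_direct = (v_cb + v_ba) / (1 + v_cb * v_ba)
```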
For the speed of light case,
$$v_{CA}=\tanh(\rm arctanh(1) + arctanh(1) )=1,$$
which implicitly uses $\infty+\infty=\infty$.
[In Galilean relativity, Galilean-angles add and slopes [i.e. velocities] add. Unfortunately, our common-sense has brainwashed us into thinking that velocities always add.] | {
"domain": "physics.stackexchange",
"id": 39573,
"tags": "special-relativity, forces, speed-of-light, faster-than-light"
} |
pillow cannot import / conda unexpected error | Question: I created a conda environment previously and it worked fine with python and tensorflow. At that stage I used anaconda.
On a fresh install I am using miniconda since I now understand the conda commands better. After installing python, packages like numpy and scipy (nothing exotic) and tensorflow I can run my previous simple neural network code. The package versions are all listed as compatible on the tensorflow site.
I also installed pillow from conda. It is visible using conda list, but python returns "No module named 'pillow'". My code to preprocess images no longer works so I need to fix this. So I'm trying to create a new environment to work in.
When I try to install python 3.8.0 in the new environment I get "An unexpected error has occurred. Conda has prepared the above report."
conda install python==3.8.0
> An unexpected error has occurred. Conda has prepared the above report.
>
> If submitted, this report will be used by core maintainers to improve
> future releases of conda. Would you like conda to send this report to
> the core maintainers? [y/N]: y Upload successful.
Should I purge conda and start from scratch?
Answer: It's a bit drastic. Could you try ...
conda update --strict-channel-priority --all
conda update --all
conda update anaconda # this could be removed 'cause you're using miniconda
conda update conda
conda activate myenv
conda install python=3.8.0
I always thought a single = was used (conda actually accepts both: a single = for a fuzzy match like python=3.8, and == for an exact version like python==3.8.0). If that fails I'd delete the environment and create a new one
conda remove -n myenv --all
conda create -n newenv python=3.8
conda activate newenv | {
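One thing worth checking before rebuilding anything (an assumption about the root cause, since the exact import statement isn't shown): Pillow is installed under the package name `pillow`, but the importable module is `PIL`, so `import pillow` fails even in an environment where `conda list` shows the package.

```python
import importlib.util

def importable(name):
    # True if a module with this name can be found on the current path
    return importlib.util.find_spec(name) is not None

# There is never a top-level module named "pillow"; the library is
# imported via "PIL", e.g.:  from PIL import Image
print(importable("pillow"))  # False even when Pillow is installed
```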
"domain": "datascience.stackexchange",
"id": 11729,
"tags": "python, deep-learning, tensorflow, anaconda, conda"
} |
SharePoint CRUD class | Question: I am writing an application where I have some objects like customer, supplier, product, etc.
I have written a class for the object 'supplier' and wanted to ask if this is a good design. The class contains the object's properties, some methods for saving and deleting, and some static methods, for example to return all suppliers or to delete a specific supplier other than the current object.
This is my class:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using MyApplication.Administration.Utility;
using Microsoft.SharePoint;
namespace MyApplication.Administration.Model
{
public class Supplier
{
public int id { get; set; }
public string name { get; set; }
public int su_id { get; set; }
public string su_name { get; set; }
public string portal { get; set; }
public string sort_order { get; set; }
public bool active { get; set; }
public Supplier() {
}
public static Supplier GetSupplier(int supplierId)
{
Supplier supplier = new Supplier();
Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(delegate ()
{
string connString = Factories.Database.GetConnectionString();
using (SqlConnection conn = new SqlConnection(connString))
{
SqlCommand cmd = conn.CreateCommand();
cmd.CommandText = "SELECT supplier.*, supplier_status.name AS su_name FROM supplier INNER JOIN supplier_status ON supplier.su_id = supplier_status.id WHERE supplier.id = @id";
cmd.Parameters.AddWithValue("@id", SqlDbType.Int).Value = supplierId;
conn.Open();
using (SqlDataReader dr = cmd.ExecuteReader())
{
if (dr.HasRows)
{
// If needed, take only the first element here
while (dr.Read())
{
supplier.id = SqlReaderHelper.GetValue<int>(dr, "id");
supplier.name = SqlReaderHelper.GetValue<string>(dr, "name");
supplier.su_id = SqlReaderHelper.GetValue<int>(dr, "su_id");
supplier.su_name = SqlReaderHelper.GetValue<string>(dr, "su_name");
supplier.portal = SqlReaderHelper.GetValue<string>(dr, "portal");
supplier.sort_order = SqlReaderHelper.GetValue<string>(dr, "sort_order");
supplier.active = SqlReaderHelper.GetValue<bool>(dr, "active");
}
}
else
{
supplier = null;
}
}
}
});
return supplier;
}
public static List<Supplier> GetSuppliers()
{
List<Supplier> supplierList = new List<Supplier>();
Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(delegate ()
{
string connString = Factories.Database.GetConnectionString();
using (SqlConnection conn = new SqlConnection(connString))
{
SqlCommand cmd = conn.CreateCommand();
cmd.CommandText = "SELECT supplier.*, supplier_status.name AS su_name FROM supplier INNER JOIN supplier_status ON supplier.su_id = supplier_status.id";
conn.Open();
using (SqlDataReader dr = cmd.ExecuteReader())
{
if (dr.HasRows)
{
while (dr.Read())
{
Supplier supplier = new Supplier();
supplier.id = SqlReaderHelper.GetValue<int>(dr, "id");
supplier.name = SqlReaderHelper.GetValue<string>(dr, "name");
supplier.su_id = SqlReaderHelper.GetValue<int>(dr, "su_id");
supplier.su_name = SqlReaderHelper.GetValue<string>(dr, "su_name");
supplier.portal = SqlReaderHelper.GetValue<string>(dr, "portal");
supplier.sort_order = SqlReaderHelper.GetValue<string>(dr, "sort_order");
supplier.active = SqlReaderHelper.GetValue<bool>(dr, "active");
supplierList.Add(supplier);
}
}
else
{
supplierList = null;
}
}
}
});
return supplierList;
}
/// <summary>
/// Create new Supplier.
/// </summary>
public void Save()
{
Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(delegate ()
{
string connString = Factories.Database.GetConnectionString();
using (SqlConnection conn = new SqlConnection(connString))
{
SqlCommand cmd = conn.CreateCommand();
cmd.CommandText = "INSERT INTO supplier (name, su_id, portal, sort_order, active) VALUES (@name, @su_id, @portal, @sort_order, @active);";
cmd.Parameters.AddWithValue("@name", SqlDbType.NVarChar).Value = name;
cmd.Parameters.AddWithValue("@su_id", SqlDbType.Int).Value = su_id;
cmd.Parameters.AddWithValue("@portal", SqlDbType.NVarChar).Value = portal;
cmd.Parameters.AddWithValue("@sort_order", SqlDbType.NVarChar).Value = sort_order;
cmd.Parameters.AddWithValue("@active", SqlDbType.Bit).Value = active;
conn.Open();
cmd.ExecuteNonQuery();
}
});
}
/// <summary>
/// Delete the Supplier itself.
/// </summary>
public void Delete()
{
Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(delegate ()
{
string connString = Factories.Database.GetConnectionString();
using (SqlConnection conn = new SqlConnection(connString))
{
SqlCommand cmd = conn.CreateCommand();
cmd.CommandText = "DELETE FROM supplier WHERE id = @id";
cmd.Parameters.AddWithValue("@id", SqlDbType.Int).Value = id;
conn.Open();
cmd.ExecuteNonQuery();
}
});
}
/// <summary>
/// Delete a Supplier by id.
/// </summary>
/// <param name="supplierId"></param>
public static void Delete(int supplierId)
{
Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(delegate ()
{
string connString = Factories.Database.GetConnectionString();
using (SqlConnection conn = new SqlConnection(connString))
{
SqlCommand cmd = conn.CreateCommand();
cmd.CommandText = "DELETE FROM supplier WHERE id = @id";
cmd.Parameters.AddWithValue("@id", SqlDbType.Int).Value = supplierId;
conn.Open();
cmd.ExecuteNonQuery();
}
});
}
/// <summary>
/// Update a Supplier.
/// </summary>
public void Update()
{
Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(delegate ()
{
string connString = Factories.Database.GetConnectionString();
using (SqlConnection conn = new SqlConnection(connString))
{
SqlCommand cmd = conn.CreateCommand();
cmd.CommandText = "UPDATE supplier SET name = @name, su_id = @su_id, portal = @portal, sort_order = @sort_order, active = @active WHERE id = @id";
cmd.Parameters.AddWithValue("@id", SqlDbType.Int).Value = id;
cmd.Parameters.AddWithValue("@name", SqlDbType.NVarChar).Value = name;
cmd.Parameters.AddWithValue("@su_id", SqlDbType.Int).Value = su_id;
cmd.Parameters.AddWithValue("@portal", SqlDbType.NVarChar).Value = portal;
cmd.Parameters.AddWithValue("@sort_order", SqlDbType.NVarChar).Value = sort_order;
cmd.Parameters.AddWithValue("@active", SqlDbType.Bit).Value = active;
conn.Open();
cmd.ExecuteNonQuery();
}
});
}
}
}
Is this a good design? How could I optimize this? What would you prefer?
Answer: There was a lot of duplicate code; I changed it to be cleaner:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Collections.Generic;
using MyApplication.Administration.Utility; // SqlReaderHelper lives here
namespace MyApplication.Administration.Model
{
public class Supplier
{
public int Id { get; set; }
public string Name { get; set; }
public int SuId { get; set; }
public string SuName { get; set; }
public string Portal { get; set; }
public string SortOrder { get; set; }
public bool Active { get; set; }
public static Supplier GetSupplier(int supplierId) => RunWithElevatedPrivileges(() => GetSupplierInternal(supplierId));
private static Supplier GetSupplierInternal(int supplierId)
{
var supplier = new Supplier();
return ExecuteSql(cmd =>
{
cmd.CommandText =
"SELECT supplier.*, supplier_status.name AS su_name FROM supplier INNER JOIN supplier_status ON supplier.su_id = supplier_status.id WHERE supplier.id = @id";
cmd.Parameters.AddWithValue("@id", SqlDbType.Int).Value = supplierId;
using (var dr = cmd.ExecuteReader())
{
if (!dr.HasRows)
return null;
// If needed, take only the first element here
while (dr.Read())
MapSupplierData(supplier, dr);
}
return supplier;
});
}
private static void MapSupplierData(Supplier supplier, SqlDataReader dr)
{
supplier.Id = SqlReaderHelper.GetValue<int>(dr, "id");
supplier.Name = SqlReaderHelper.GetValue<string>(dr, "name");
supplier.SuId = SqlReaderHelper.GetValue<int>(dr, "su_id");
supplier.SuName = SqlReaderHelper.GetValue<string>(dr, "su_name");
supplier.Portal = SqlReaderHelper.GetValue<string>(dr, "portal");
supplier.SortOrder = SqlReaderHelper.GetValue<string>(dr, "sort_order");
supplier.Active = SqlReaderHelper.GetValue<bool>(dr, "active");
}
public static List<Supplier> GetSuppliers() => RunWithElevatedPrivileges(GetSuppliersInternal);
private static List<Supplier> GetSuppliersInternal()
{
return ExecuteSql(cmd =>
{
List<Supplier> supplierList;
cmd.CommandText =
"SELECT supplier.*, supplier_status.name AS su_name FROM supplier INNER JOIN supplier_status ON supplier.su_id = supplier_status.id";
using (var dr = cmd.ExecuteReader())
{
if (!dr.HasRows)
return null;
supplierList = new List<Supplier>();
while (dr.Read())
{
var supplier = new Supplier();
MapSupplierData(supplier, dr);
supplierList.Add(supplier);
}
}
return supplierList;
});
}
/// <summary>
/// Create new Supplier.
/// </summary>
public void Save() => Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(SaveInternal);
private void SaveInternal()
{
ExecuteSql(cmd =>
{
cmd.CommandText =
"INSERT INTO supplier (name, su_id, portal, sort_order, active) VALUES (@name, @su_id, @portal, @sort_order, @active);";
cmd.Parameters.AddWithValue("@name", SqlDbType.NVarChar).Value = Name;
cmd.Parameters.AddWithValue("@su_id", SqlDbType.Int).Value = SuId;
cmd.Parameters.AddWithValue("@portal", SqlDbType.NVarChar).Value = Portal;
cmd.Parameters.AddWithValue("@sort_order", SqlDbType.NVarChar).Value = SortOrder;
cmd.Parameters.AddWithValue("@active", SqlDbType.Bit).Value = Active;
});
}
/// <summary>
/// Delete the Supplier itself.
/// </summary>
public void Delete() => Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(DeleteInternal);
private void DeleteInternal()
{
ExecuteSql(cmd =>
{
cmd.CommandText = "DELETE FROM supplier WHERE id = @id";
cmd.Parameters.AddWithValue("@id", SqlDbType.Int).Value = Id;
});
}
/// <summary>
/// Delete a Supplier by id.
/// </summary>
/// <param name="supplierId"></param>
public static void Delete(int supplierId) => Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(() => DeleteInternal(supplierId));
private static void DeleteInternal(int supplierId)
{
ExecuteSql(cmd =>
{
cmd.CommandText = "DELETE FROM supplier WHERE id = @id";
cmd.Parameters.AddWithValue("@id", SqlDbType.Int).Value = supplierId;
});
}
/// <summary>
/// Update a Supplier.
/// </summary>
public void Update() => Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(UpdateInternal);
private void UpdateInternal()
{
ExecuteSql(cmd =>
{
cmd.CommandText =
"UPDATE supplier SET name = @name, su_id = @su_id, portal = @portal, sort_order = @sort_order, active = @active WHERE id = @id";
cmd.Parameters.AddWithValue("@id", SqlDbType.Int).Value = Id;
cmd.Parameters.AddWithValue("@name", SqlDbType.NVarChar).Value = Name;
cmd.Parameters.AddWithValue("@su_id", SqlDbType.Int).Value = SuId;
cmd.Parameters.AddWithValue("@portal", SqlDbType.NVarChar).Value = Portal;
cmd.Parameters.AddWithValue("@sort_order", SqlDbType.NVarChar).Value = SortOrder;
cmd.Parameters.AddWithValue("@active", SqlDbType.Bit).Value = Active;
});
}
// To a different class:
private static T RunWithElevatedPrivileges<T>(Func<T> action)
{
var result = default(T);
Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(() => result = action());
return result;
}
private static void ExecuteSql(Action<SqlCommand> action)
{
string connectionString = Factories.Database.GetConnectionString();
using (var conn = new SqlConnection(connectionString))
{
var cmd = conn.CreateCommand();
action(cmd);
conn.Open();
cmd.ExecuteNonQuery();
}
}
private static T ExecuteSql<T>(Func<SqlCommand, T> action)
{
T result;
string connectionString = Factories.Database.GetConnectionString();
using (var conn = new SqlConnection(connectionString))
{
var cmd = conn.CreateCommand();
conn.Open(); // open first: the action executes the command (via ExecuteReader) itself
result = action(cmd);
}
return result;
}
}
} | {
"domain": "codereview.stackexchange",
"id": 20634,
"tags": "c#, object-oriented, sql, crud"
} |
Fraunhofer Diffraction using lenses | Question: I've come across a question that I don't know how to tackle:
An alternative way of observing Fraunhofer diffraction uses lenses to provide
appropriate conditions. Sketch an optical configuration for observing Fraunhofer
diffraction using a point source of light and two lenses.
A parallel beam of light is incident normally on a diffraction grating which
consists of a regular array of 200 narrow slits per mm, each slit being 1 µm wide. Light
emerging from the grating is focussed by a lens of focal length 300 mm onto a screen.
The lens and screen are centred on a line perpendicular to the centre of the grating, and
are parallel to the grating. It is intended to observe the 3rd diffraction order with light at
a wavelength of around 460 nm. Where on the screen does this diffraction order appear?
I've done the first part of the question (sketching the optical configuration, with the aperture between two lenses, and the source and screen each at a distance f from their respective lenses).
Everything I've learnt about lenses is used to magnify images, and I have no idea how to deal with a ray coming in at an angle. Surely you need to know at what distance the aperture is from the lens?
Any help/hints would be greatly appreciated.
Answer: The key to this kind of problem is (i) to think of a lens as a Fourier transformer and (ii) use the principle of Linear superposition.
Take a look at my drawing below (it's one I drew to train people in the use of infinity conjugate optics, so don't worry about the "tested objective"). The key point here is that a point source on the focal plane of a lens transforms into a plane wave (or an approximation thereto, limited by the system's aperture) tilted at an angle $\theta$ to the optical axis given by $\tan\theta= \frac{r}{f}$, where $r$ is the transverse distance of the point source from the optical axis. You can derive this from a simple ray diagram: draw a ray from the point source through the optical centre and you've got it. The ray, in the wave world, represents the wavevector, whose direction is the direction of propagation of a plane wave.
So, we have, roughly, making a paraxial approximation
$$\delta(\vec{x} - \vec{x}_0)\,\leftrightarrow\, \exp\left(i\,\frac{k}{f}\,\vec{X}\cdot \vec{x}_0\right)\tag{1}$$
where I write on the left the field distribution on the focal plane and on the right the output field distribution over the transverse plane through the optical centre of the equivalent thin lens, or at the system aperture. So now you simply use linear superposition to calculate the field distribution on the output transverse plane when the input field is $g(x,\,y)$ on the focal plane $\mathscr{F}$ or, re-writing it in terms of an inner product:
$$g(\vec{x}) = \int_\mathscr{F} \delta(\vec{x}^\prime - \vec{x})\, g(\vec{x}^\prime)\,\mathrm{d}^2 x^\prime\tag{2}$$
by linear superposition of the "basic response" in (1) as weighted in (2), the transverse distribution at the system output must be:
$$G(\vec{X}) = \int_\mathscr{F} \exp\left(i\,\frac{k}{f}\,\vec{X}\cdot \vec{x}\right)\,g(\vec{x})\,\mathrm{d}^2 x$$
where $\vec{X}$ is the transverse position in the output plane. This is, of course, the Fraunhofer diffraction integral. | {
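For the second (numerical) part of the question, the position follows from the grating equation $d\sin\theta=m\lambda$ together with the focal-plane mapping $\tan\theta=r/f$ described above; a quick check (plain Python):

```python
import math

d = 1e-3 / 200        # grating period: 200 slits per mm -> 5 µm
m = 3                 # diffraction order
lam = 460e-9          # wavelength in metres
f = 300e-3            # focal length in metres

sin_theta = m * lam / d                 # grating equation: d sin(theta) = m lambda
x = f * math.tan(math.asin(sin_theta))  # transverse position on the screen
print(round(x * 1e3, 1), "mm")          # roughly 86 mm from the axis
```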
"domain": "physics.stackexchange",
"id": 13284,
"tags": "homework-and-exercises, optics, diffraction, lenses"
} |
Calling a (Gazebo) service parallelly from multiple nodes/ Remapping a (Gazebo) service to a particular namespace | Question:
I have a multi-robot setting (with different namespaces which work totally independently of each other, in the same empty_world) in Gazebo. I have to reset the manipulated objects after a certain time interval. I have different ROS nodes controlling the robots and resetting the objects. So I want to use the gazebo/set_model_state service in parallel in all the nodes. Spawning one robot works fine as it is the only node using the service. But as soon as I launch the second node it gives me the following error from the first simulation, and the second simulation starts running fine.
rospy.service.ServiceException: service [/gazebo/set_model_state] returned no response
I'm aware that service calls execute commands in a serial manner. But I would like to call the service from all the nodes in parallel at the same time. Is this possible? I suppose this can be done if I can somehow "remap" the service like I do for the topics. That way I can specifically call the service belonging to the robot of that particular namespace. Or, if I could somehow clone the existing service into multiple unique services, each node could call its own service in parallel. But I'm not sure how this can be done. Any suggestions?
Originally posted by nf on ROS Answers with karma: 16 on 2018-04-17
Post score: 0
Answer:
Since the service call takes very little time to execute the command, I added an extra line so that rospy waits until the service is available again. Just this line, without a timeout of around 5 seconds, wasn't working:
rospy.wait_for_service('/gazebo/set_model_state', 5.0)
After adding this line my error vanished and I am able to execute things as I want them to. To prevent future errors I also wrapped the service call in try/except statements.
Though this doesn't answer the asked question, it provided me with a quick fix to work with.
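The wait-then-retry pattern described above can be written generically; this is a sketch with plain-Python stand-ins for the rospy calls (the real arguments would be rospy.wait_for_service('/gazebo/set_model_state', 5.0) and the service proxy call):

```python
def call_with_retry(wait_for_service, call_service, retries=3):
    """Wait for the service (with a timeout) before each attempt,
    and retry on failure instead of crashing the node."""
    for _ in range(retries):
        try:
            wait_for_service()           # e.g. rospy.wait_for_service(name, 5.0)
            return True, call_service()  # e.g. proxy(model_state)
        except Exception:
            continue                     # service busy/unavailable: try again
    return False, None

# a stand-in service that fails twice, then succeeds
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("service returned no response")
    return "ok"

ok, result = call_with_retry(lambda: None, flaky)
```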
Originally posted by nf with karma: 16 on 2018-04-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 30670,
"tags": "gazebo, ros-kinetic, service, nodes, multiple"
} |
Bad classification performance of logistic regression on imbalanced data in testing as compared to training | Question: I am trying to fit a logistic regression model to an imbalanced dataset (0.5/99.5) with high dimensionality (about 15k). I used a random forest to select the top 200 important features. There are around 120K observations.
When I fit a logistic regression model on the balanced dataset (using SMOTE for oversampling), the f1, recall and precision scores are good on training. But on testing, the precision and f1 scores are bad. I assume this makes sense because in training there were a lot more of the minority cases, while in reality/testing there is only a very small percentage. So the algorithm is still looking for more minority cases, which causes the high false positive rate.
I was wondering what kind of methods I could try to improve the performance.
I am currently trying different sampling methods for the imbalanced dataset, and I also plan to try PCA.
Answer: I suspect the reason is that the class balance in your test set is different from the class balance in your training set. That will throw everything off. The fundamental assumption made by statistical machine learning methods (including logistic regression) is that the distribution of data in the test set matches the distribution of data in the training set. SMOTE can throw that off.
It sounds like you have used SMOTE to augment the training set by adding additional synthetic positive instances (i.e., oversampling the minority class) -- but you haven't added any negative instances. So, the class balance in the training set might have shifted from 0.5%/99.5% to something like (say) 10%/90%, while the class balance in the test set remains 0.5%/99.5%. That's bad; it will cause the classifier to over-predict positive instances. For some classifiers, it's not a major problem, but I expect that logistic regression might be more sensitive to this mismatch between training distribution and test distribution.
Here are two candidate solutions for the problem that you can try:
Stop using SMOTE. Ensure the training set has the same distribution as the test set. SMOTE might actually be unnecessary in your situation.
Continue to augment the training set using SMOTE as you're currently doing, and compensate for the train/test mismatch by shifting the threshold for classification. Logistic regression produces an estimated probability that a particular instance is from the positive class. Typically, you then compare that probability to the threshold 0.5 and use that to classify it as positive or negative. You can adjust the threshold to correct for that: replace $0.5$ with $0.5/k$, where $k$ is the ratio of positives in your training set after augmentation to positive before (e.g., if augmentation shifted the training set from 0.5%/99.5% to 10%/90%, then $k=10/0.5=20$); or you can use cross-validation to find a suitable threshold that maximizes the F1 score (or some other metric).
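The threshold shift in option 2 is a one-liner once you have predicted probabilities; a minimal sketch (plain Python, with hypothetical probability values standing in for a model's predict_proba output):

```python
def classify(probs, k=1.0):
    """Label as positive when p >= 0.5/k, where k is the ratio of the
    positive-class fraction after augmentation to the fraction before
    (k = 1 recovers the usual 0.5 threshold)."""
    threshold = 0.5 / k
    return [1 if p >= threshold else 0 for p in probs]

probs = [0.01, 0.03, 0.40, 0.60]   # stand-in for predicted probabilities
default = classify(probs)          # threshold 0.5   -> [0, 0, 0, 1]
shifted = classify(probs, k=20)    # threshold 0.025 -> [0, 1, 1, 1]
```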
Incidentally, I recommend you make sure to use regularization with your logistic regression model, and use cross-validation to select the regularization hyper-parameter. There's nothing wrong with 15K features if you have 120K instances in your training set, but you might want to regularize it strongly (choose a large regularization parameter) to avoid overfitting.
Finally, understand that dealing with severe class imbalance such as you have is just hard. Fortunately, there are many techniques available. Do some reading and research (including on Stats.SE) and you should be able to find other methods you could try, if these don't work well enough. | {
"domain": "datascience.stackexchange",
"id": 6960,
"tags": "classification, logistic-regression, class-imbalance"
} |
Examples of reversible computations | Question: Irreversible computations can be intuitive. For example, it is easy to understand roles of AND, OR, NOT gates and design a system without any intermediate, compilable layer. The gates can be directly used as they conform to human's thinking.
I have read a paper where it was stated that this is the obviously correct way: code irreversibly, then compile to a reversible form (I can't find the paper now).
I am wondering if there exists a reversible model that is as easy to understand as the AND, OR, NOT model. The model should therefore be a "direct" use of reversibility. So no compilation. But also: no models of the form $f(a) \rightarrow (a,f(a))$ (i.e., models created by taking an irreversible function $f$ and making it reversible by keeping a copy of its input).
Answer: The paper you mention is probably one of Paul Vitányi's, possibly Time, Space, and Energy in Reversible Computing.
However, not everyone takes the viewpoint that simulation of irreversible computations is the main point. There is some research into what reversible computing can do in addition to such simulations, the beginnings of which are in Bennett's seminal paper Logical Reversibility of Computation on reversible Turing machines. See this paper for an elaboration of these ideas.
In terms of reversible logic circuits, there has been significant effort from the quantum computing community to build non-trivial circuits for arithmetic, e.g. Quantum networks for elementary arithmetic operations, some of which are purely classical, i.e., reversible. These implement a reversible variant of, say, addition, where one of the operands is conserved, but they do not rely on "irreversible thinking", do not use a history, and display a significant amount of ingenuity in their design. | {
"domain": "cstheory.stackexchange",
"id": 1327,
"tags": "reference-request, computability, machine-models"
} |
Normal mode analysis | Question: I'm reading lots of texts about normal modes and I've seen that normal modes are solutions of the wave equation obtained by separation of variables. However, when most of the authors I've read perform the separation of variables, they consider:
$$
\psi (t,x)=\phi (x) e^{i\omega t} \,\, ,
$$
for example. My question is: where does this exponential dependence come from? Why this dependence? It looks like an \textit{ad hoc} assumption; what do we gain with it? Couldn't it be any other, for example $\exp({-i\omega t})$?
Answer: The wave phenomena we're interested in are usually governed by a second-order differential equation with respect to time. A spring-mass oscillator and an LC circuit, for example, are both governed by equations of the form:
$$\frac{d^2 u}{dt^2} = -\omega^2u$$
This equation has sine or cosine solutions, but it's much easier to work with the exponential solutions. You're correct that $\exp(i\omega t)$ and $\exp(-i\omega t)$ are both valid choices. The preferred choice is convention, and changes depending on the field (it can be a pain trying to figure out which convention any particular author is using).
Why exponentials? Because they're eigenfunctions with respect to derivatives, which makes them convenient mathematically. Not only that, but they're orthogonal, which allows us to write any arbitrary time dependence as a sum of complex exponentials.
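As a quick numerical sanity check (a sketch, not part of the original answer), one can verify with a finite-difference approximation that $e^{i\omega t}$ satisfies $u'' = -\omega^2 u$:

```python
import cmath

omega = 2.0                               # an arbitrary angular frequency
u = lambda t: cmath.exp(1j * omega * t)   # trial solution u(t) = e^{i omega t}

def residual(t, h=1e-5):
    """|u''(t) + omega^2 u(t)|, with u'' estimated by a central difference."""
    d2u = (u(t + h) - 2 * u(t) + u(t - h)) / h**2
    return abs(d2u + omega**2 * u(t))
```

The residual is tiny at any sample time, and the same check passes for $e^{-i\omega t}$, confirming that both sign conventions solve the equation.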
Another thing worth mentioning is that to solve the wave equation in different geometries, it's often necessary to choose coordinates appropriate to those geometries, and separate using those variables. This will lead to different solutions in the spatial variables. The time dependence of the wave equation is always going to be the same, so you can always choose the same exponential time dependence, without even choosing the boundaries or geometry of the problem.
Edit: To answer why we don't use the general solution $y=Ae^{i\omega t}+Be^{-i\omega t}$ or choose the constant $A$ different from unity, it's useful to consider the rest of the wave equation solution. For our simplest form of the wave equation:
$$\frac{\partial ^2 \psi}{\partial x^2} = \frac{1}{c^2} \frac{\partial ^2 \psi}{\partial t^2}$$
We can separate this into the spatial and time dependent parts. Since this is second order in both time and space, there will be two solutions for each the time and spatial components. The total solution will thus be of the form:
$$\psi = (Ae^{i\omega t} + Be^{-i\omega t})(Ce^{ik x} + De^{-ik x})$$
Multiplying this out will yield four terms
$$\psi = A'e^{i(\omega t - kx)} + B'e^{i(\omega t + kx)} + C'e^{-i(\omega t - kx)} + D'e^{-i(\omega t + kx)}$$
If we insist that the answer to the wave equation be a real valued quantity, then the $C'$ and $D'$ parts of the solution must be complex conjugates of the $A'$ and $B'$ parts. We can equivalently write the solution as:
$$\psi = \operatorname{Re} \{A''e^{i(\omega t - kx)} + B''e^{i(\omega t + kx)} \}$$
If we're being lazy, we can by convention say that only the real part represents a physical solution, and write:
$$\psi = A''e^{i(\omega t - kx)} + B''e^{i(\omega t + kx)}$$
Then we can write the solution as:
$$\psi = e^{i\omega t}(A''e^{-ikx} + B''e^{ikx})$$
It turns out that the solution comes out the same as if we simply chose the time dependent part to be $e^{i\omega t}$ in the first place. The constant $A$ gets multiplied into the spatial part anyways, and the $B e^{-i\omega t}$ part gets erased by the convention that we only care about the real part of the solution. Of course, there are other ways to write the solution to the wave equation, so it really boils down to convenience and convention. | {
"domain": "physics.stackexchange",
"id": 29416,
"tags": "waves, fourier-transform, oscillators, normal-modes"
} |
Distribute repeated values into bins as evenly as possible | Question: Preface
I've asked a very similar question already on stack overflow in a different wording and gotten a working answer under the assumption that there is no way to go through all possibilities (np-hard).
After doing further research (because this is my bottleneck atm), I stumbled upon this similar question, but again, there's the assumption of the problem being np-hard and it's not exactly what I'm looking for. However, it seems the question is a better fit on this site than SO (correct me if I'm wrong).
I'm pretty sure this can be solved "brute force" (i.e., optimally) with the correct algorithm in a short time for the given constraints. More on that later.
Problem
I have a small set of repeating values e.g.
values = 10 x [18] (ten times the value 18); 5 x [30], 6 x [55]
of which I need to distribute as many as possible evenly across a fixed number of bins with a maximum difference between the bins (maximum difference between the sum of elements in the bins, see below).
Constraints:
The reason I think this can be brute forced are my constraints. I will never have a great number of bins (at the moment the maximum is 3, let's assume it might be 4 or 5 though). The maximum difference between the bins is atm fixed at 2, we can assume it stays there if that helps because I can set this arbitrarily if needed.
Example
For a small example, let's set values = [[18,18,18,18,18],[30,30,30]], number_of_bins = 2 and max_difference_between_bins = 2.
The algorithm to distribute them would be something along the lines
for remainder in range(0, 8):  # try distributing all packages first,
                               # then one less, then two less, etc.
    generate possible distributions for values in bins
    add the distributions to check differences; if any difference
    # is below the threshold of max_difference_between_bins, break
Trying it out:
Remainder = 0
Distribute 5 times 18 into two bins:
(0,90),(18,72),(36,36),(72,18),(90,0)
Distribute 3 times 30 into two bins:
(0,90),(30,60),(60,30),(90,0)
Note that the example is bad because it works out in the first run; anyway, adding them yields:
(0,90) -> (0,180),(30,150),(60,120),(90,90)
(18,72) -> (18,162),(48,132),(78,102),(108,72)
(36,36) -> (36,126),(66,96),(96,66),(126,36)
(72,18) -> (72,108),(102,78),(132,48),(162,18)
(90,0) -> (90,90),(120,60),(150,30),(180,0)
As you can see, half of them are obsolete because they are duplicates.
If there hadn't been a solution (90,90) this would then continue in the next run with trying the same for values = [[18,18,18,18],[30,30,30]] and values = [[18,18,18,18,18],[30,30]] and remainder = [18] or remainder = [30].
Runtime
So for a really big example of say 70 values with 5 different sizes distributed across 3 bins, the possible solutions should be for the first run:
Say 50 values of the same size are distributed across 3 bins, then we get $\binom{50+3-1}{3-1} = 1326$ possibilities, or $\binom{50+5-1}{5-1} = 316251$ for 5 bins (right?) (supposing the bins are distinguishable, which they don't need to be, but then we would have to scale up later for the combination with the other values).
Distributing the other 20 with say 8, 5, 4, 3 values each, that's 45, 21, 15, 10 possibilities. Multiplied, that's 190 million possibilities. However, most of those combinations can't possibly be a good solution. Some depth-first search or divide and conquer should be able to break them down.
Of course, those are then multiplied by 5 for the first run with a remainder of 1, by 120 for a run with a remainder of 2, etc. But those solutions can also be generated from the old solutions and again, it should be possible to go through only a fraction of them.
Question
Is basically: how do I get a solution for this in a short time? Pseudocode or even Python would be great, but not necessary; I can figure the coding out myself.
We can make many assumptions for the number of different values and the maximum number of values you want, so if you think the part about reducing it from 190 million times whatever solutions is too big, I'm fine with going down to, for example, maximum 3 bins, maximum 3 different values occurring a maximum of, say, 20 times together (so not 20 times this, 20 times that, but rather 10 times this, 5 times that, 5 times the other).
Answer: After an epiphany, I solved the problem myself now. To generate all solutions quite efficiently I have used the following algorithm:
1. Find all groups of possible bins with any combination of occurrences of the values, each smaller than or equal to its maximum occurrence, that have a sum smaller than (the sum of all values divided by the number of bins + the maximum difference between bins). Bins include, for example, [], [18], [18,18], [18,18,18], [30], [30,18], etc.
2. Calculate the sum of each bin.
3. Combine only those bins with each other that have sums that vary by less than the maximum difference.
4. Choose as solutions all combinations where the number of values in all their bins equals the maximum number of values. If none, reduce the maximum number of values and add a value to the remainder.
Tested for 3 bins and up to like 50 values, runtime is almost instant in python. | {
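A brute-force sketch of these steps (my own illustration, feasible only for small inputs like the constraints described; `values`/`counts` are a hypothetical encoding of the multiset, e.g. five 18s and three 30s):

```python
from itertools import product, combinations_with_replacement

def distribute(values, counts, num_bins, max_diff):
    """Return the best (items_used, bins) found, or None.

    values/counts describe the multiset, e.g. values=[18, 30],
    counts=[5, 3] for five 18s and three 30s.
    """
    total = sum(v * c for v, c in zip(values, counts))
    limit = total / num_bins + max_diff
    # Step 1: candidate bins = occurrence vectors whose sum fits the limit.
    candidates = [occ for occ in product(*(range(c + 1) for c in counts))
                  if sum(v * o for v, o in zip(values, occ)) <= limit]
    best = None
    # Steps 2-4: combine num_bins candidates; keep feasible, even splits.
    for combo in combinations_with_replacement(candidates, num_bins):
        used = [sum(col) for col in zip(*combo)]
        if any(u > c for u, c in zip(used, counts)):
            continue  # uses more copies of some value than exist
        sums = [sum(v * o for v, o in zip(values, occ)) for occ in combo]
        if max(sums) - min(sums) > max_diff:
            continue
        if best is None or sum(used) > best[0]:
            best = (sum(used), combo)
    return best
```

For the example in the question, `distribute([18, 30], [5, 3], 2, 2)` places all eight values: five 18s in one bin and three 30s in the other (both sum to 90).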
"domain": "cs.stackexchange",
"id": 12792,
"tags": "algorithms, optimization"
} |
canopen chain node exit with error code -11 | Question:
When I run the canopen_chain_node's chain.launch with the following yaml file:
bus:
device: vcan0
master_allocator: canopen::SimpleMaster::Allocator
sync:
interval_ms: 10
overflow: 0
heartbeat:
rate: 100 # simple heartbeat producer, optional!
msg: "77f#05" # message to send, cansend format: heartbeat of node 127 with status 5=Started
nodes:
node1:
id: 1
eds_file: /home/user/eds/phyPS409Y.eds
I get the following the in logs:
# roslaunch --screen canopen_chain_node chain.launch
... logging to /home/turtle1/.ros/log/d83321c8-1182-11e8-9307-c8ff284dfc57/roslaunch-turtle1-12835.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://192.168.12.107:38425/
SUMMARY
========
PARAMETERS
* /canopen_chain/bus/device: vcan0
* /canopen_chain/bus/master_allocator: canopen::SimpleMa...
* /canopen_chain/heartbeat/msg: 77f#05
* /canopen_chain/heartbeat/rate: 100
* /canopen_chain/nodes/node1/eds_file: /home/user/eds/ph...
* /canopen_chain/nodes/node1/id: 1
* /canopen_chain/sync/interval_ms: 10
* /canopen_chain/sync/overflow: 0
* /rosdistro: kinetic
* /rosversion: 1.12.12
NODES
/
canopen_chain (canopen_chain_node/canopen_chain_node)
auto-starting new master
process[master]: started with pid [12845]
ROS_MASTER_URI=http://192.168.12.107:11311
setting /run_id to d83321c8-1182-11e8-9307-c8ff284dfc57
process[rosout-1]: started with pid [12858]
started core service [/rosout]
process[canopen_chain-2]: started with pid [12869]
terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::property_tree::ini_parser::ini_parser_error> >'
what(): /home/user/eds/phyPS409Y.eds: cannot open file
[canopen_chain-2] process has died [pid 12869, exit code -6, cmd /opt/ros/kinetic/lib/canopen_chain_node/canopen_chain_node __name:=canopen_chain __log:=/home/turtle1/.ros/log/d83321c8-1182-11e8-9307-c8ff284dfc57/canopen_chain-2.log].
log file: /home/turtle1/.ros/log/d83321c8-1182-11e8-9307-c8ff284dfc57/canopen_chain-2*.log
But other .eds files work.
This topic did not solve my problem.
I'm using the latest version of canopen_chain_node.
Eds file on github
Originally posted by Dizett on ROS Answers with karma: 3 on 2018-02-14
Post score: 0
Original comments
Comment by Mathias Lüdtke on 2018-02-14:
roslaunch --screen should give you more error output.
Comment by Dizett on 2018-02-14:
I added the output
Answer:
With https://github.com/ros-industrial/ros_canopen/pull/258 merged (2 hours ago), you will get
terminate called after throwing an instance of 'canopen::ParseException'
what(): Type of 0x6005 does not match or is not supported
Datatype 1 (1 bit Boolean) is not supported by ros_canopen.
Update: I have added an issue.
Originally posted by Mathias Lüdtke with karma: 1596 on 2018-02-14
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Dizett on 2018-02-14:
Yes! Thank you! Python quietly works with .eds files even with incorrect "ObjectType,DataType, AccessType ", but ROS is picky.
Comment by Mathias Lüdtke on 2018-02-14:
Just because I care about type safety at protocol level ;) | {
"domain": "robotics.stackexchange",
"id": 30042,
"tags": "ros, ros-kinetic, ros-canopen"
} |
Describe the decrease in potential energy if two forces are acting on an object, one conservative and greater than the other, which we apply in the opposite direction | Question: So imagine this situation.
An object is experiencing two forces, one due to gravity in downward direction and other applied by us in upward direction such that our force is less than gravitational force.
So the object will accelerate downward and its potential energy will decrease.
Will the change in potential energy equal the negative of the work done by gravity?
And if so, how? Shouldn't it be less than that because of our force? But then the change in potential energy would be the negative of the work done by the resultant force.
Thanks if you answer it.
Answer:
Will the change in potential energy equal the negative of the work
done by gravity?
Yes.
And if so, how? Shouldn't it be less than that because of our force?
But then the change in potential energy would be the negative of the
work done by the resultant force.
No.
Gravity is a conservative force, meaning the work it does, and the resulting change in gravitational potential energy, depends only on the initial and final position of the object. So it is independent of the force you apply to the object. In other words, the difference between the work done by gravity and the work done by you between the initial and final positions has no effect on the change in potential energy.
On the other hand, the net work done by gravity and by you determines the change in kinetic energy of the object between its initial and final position. If gravity does more positive work on the object than the magnitude of the negative work done by you, there will be an increase in kinetic energy between the initial and final positions, in addition to a loss of gravitational potential energy.
In your example, since the force you apply is less than the force of gravity, there will be an increase in kinetic energy between the initial and final positions.
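As a numerical illustration (hypothetical numbers, not from the question): an object lowered a distance $d$ while you push up with a force smaller than $mg$:

```python
m, g, d = 2.0, 9.8, 3.0    # hypothetical mass (kg), gravity (m/s^2), drop (m)
F_applied = 10.0           # upward force you apply, less than m*g = 19.6 N

W_gravity = m * g * d            # positive: force and displacement both downward
W_applied = -F_applied * d       # negative: upward force, downward displacement

delta_U = -W_gravity             # change in PE depends only on gravity's work
delta_KE = W_gravity + W_applied # net work gives the change in kinetic energy
```

Here $\Delta U = -58.8\ \mathrm{J}$ regardless of your force, while $\Delta KE = 28.8\ \mathrm{J}$ is positive because gravity's positive work exceeds the magnitude of your negative work.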
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 98174,
"tags": "newtonian-mechanics, energy, gravity, newtonian-gravity, work"
} |
How much work is required to lift an object (laying horizontally on ground) to vertical position? | Question: Consider we know mass and length of an object. Blue line is initial position and black one is final position of an object. Confused about Work needed to lift it.
Answer: Use Conservation of Energy principle.
Say your object has length $L$ and mass $m$. Its centre of gravity (CoG) is right in the middle of its length.
That means that when the object is raised to the vertical position, its CoG rises by $\Delta h = L/2$, so the potential energy $U$ increases by:
$$\Delta U=mg\Delta h=mg\frac{L}{2}$$
This is also the work done to make this change, so:
$$W=\frac12 mgL$$ | {
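A quick numeric check with made-up values (say a 5 kg rod of length 2 m):

```python
m, L, g = 5.0, 2.0, 9.81   # hypothetical mass (kg), length (m), gravity (m/s^2)

delta_h = L / 2            # the centre of gravity rises by half the length
W = m * g * delta_h        # equals (1/2) m g L
```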
"domain": "physics.stackexchange",
"id": 54293,
"tags": "newtonian-mechanics, forces, newtonian-gravity, work"
} |
Why is a TTT diagram called isothermal transformation diagram? | Question: Why are TTT diagrams called isothermal transformation diagrams? As far as I know, the temperature variation is plotted against the variation of time in a TTT diagram. Then why do we call it "isothermal"? What is so constant about the temperature?
Answer: They are two names for the same procedure. A steel is heated to a temperature in the austenite range, then quenched/quickly cooled to some specific temperature. It is held at that constant (isothermal) temperature (T) for a specific time (T), then quenched to room temperature for examination of the transformation (T) products. You may be thinking of CCT, continuous cooling transformation diagrams, where a steel sample is austenitized, then continuously cooled at some specific rate and then examined for transformation products. The results can have similarities depending on alloy content, etc.
"domain": "chemistry.stackexchange",
"id": 12366,
"tags": "physical-chemistry, metal, metallurgy"
} |
pcl_ros bag_to_pcd not found | Question:
I'm using a full install of ROS Groovy and am trying to get point clouds from bag files. When I attempt to run bag_to_pcd I get the following error:
[rosrun] Couldn't find executable named bag_to_pcd below /opt/ros/groovy/share/pcl_ros
In fact, none of the pcl_ros nodes listed in the documentation are there. Have they moved somewhere else?
Originally posted by Jeffrey Kane Johnson on ROS Answers with karma: 452 on 2013-02-03
Post score: 0
Original comments
Comment by kalectro on 2013-02-03:
please provide some more information. Did you compile from source or use a prebuilt binary? Does /opt/ros/groovy/pcl_ros exist? Maybe it is inside the package perception_pcl
Comment by Jeffrey Kane Johnson on 2013-02-03:
I just did the standard Desktop-Full install using sudo apt-get install ros-groovy-desktop-full. The pcl_ros directory does exist, but there are only two XML files in it and a cmake directory. If I run sudo apt-get install ros-groovy-pcl-ros it just tells me it's already installed.
Answer:
They install in /opt/ros/groovy/bin. That should be directly accessible from your shell $PATH, if it was set up correctly, but it is not the correct place for rosrun to find it.
This looks like a bug. Several packages, converted to catkin, are mistakenly installing executables in inappropriate places. You should probably open an issue to get that resolved.
Originally posted by joq with karma: 25443 on 2013-02-05
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Jeffrey Kane Johnson on 2013-02-05:
Right on. I opened an issue here: https://github.com/ros-perception/perception_pcl/issues/5 | {
"domain": "robotics.stackexchange",
"id": 12712,
"tags": "ros, pcl, bag-to-pcd, pcl-ros"
} |
Does a crescent moon cause (very faint) shadow bands? | Question: Shadow bands appear right before and right after totality during an eclipse. According to the Wikipedia article, this phenomenon occurs because all the light coming through from the sun at that time is collimated and susceptible to atmospheric scattering effects.
Does a crescent moon have the same properties as a "crescent sun," and if so, would it at least in theory be possible to detect extremely faint shadow bands during a crescent moon?
Answer: Based on first principles, I would say "in principle yes" - but then you have to think about the mechanism of shadow bands. These bands appear when the sunlight becomes almost completely (spatially) coherent - the angular size of the source becomes so small that atmospheric fluctuations can focus the light more in some places than others.
For the moon to be a "sliver of a crescent", the sun needs to be essentially directly behind it: this means that any shadow bands would be completely drowned out by the intensity of the sun light. As such, any fluctuations would be no different than the ones you can observe with full sunlight: in principle, there will be shadow bands for any size of source, but they will be convolved with the size of the object (which makes them effectively disappear except when the sun is reduced to a tiny sliver). So we can safely say they would be invisible. | {
"domain": "physics.stackexchange",
"id": 43504,
"tags": "optics, astronomy"
} |
Quantum mechanical angular momentum and spin formalism/notation | Question: I am currently stuck on the following notation:
$\frac{1}{2}\otimes\frac{1}{2} = 0 \text{ (antisym) } \oplus 1 \text{ (sym) }$
No matter what I tried, I couldn't derive the identity. I am sure that it is trivial, but I can't figure how to treat the notation. It would be great, if somebody could write this in matrix notation or in bra/ket notation with explicit spins ($\uparrow\downarrow$).
Answer: The basis states on the left are given by
$$|{\uparrow\uparrow}\rangle, |{\uparrow\downarrow}\rangle,
|{\downarrow\uparrow}\rangle,\text{ and }|{\downarrow\downarrow}\rangle.$$
On the right, you are supposed to symmetrize these states with respect to exchanging the first and second spin (that is what sym and antisym stand for). There is only a single antisymmetric combination (try to see why there is only this one, up to multiplication by a complex constant)
$$|S\rangle = 2^{-1/2}(|{\uparrow \downarrow}\rangle - | {\downarrow \uparrow
}\rangle)$$
where $2^{-1/2}$ is for normalization purposes. Because this state is alone it is called singlet.
The orthogonal complement of the singlet within these four states consists of three states (triplet) which are symmetric under exchange
$$|T,1\rangle = |{\uparrow\uparrow
}\rangle, |T,0\rangle = 2^{-1/2}(|{\uparrow \downarrow}\rangle +| {\downarrow \uparrow}\rangle), \text{ and } |T,-1\rangle = |{\downarrow\downarrow}\rangle.$$
This is written out explicitly what the notation
$$\frac{1}{2}\otimes\frac{1}{2} = 0 \text{ (antisym) } \oplus 1 \text{ (sym) }$$
means. | {
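A small numerical check (my own addition): writing the states as coefficient lists over the basis $|{\uparrow\uparrow}\rangle, |{\uparrow\downarrow}\rangle, |{\downarrow\uparrow}\rangle, |{\downarrow\downarrow}\rangle$, exchanging the two spins just swaps the middle two components, and the singlet/triplet (anti)symmetry can be verified directly:

```python
import math

s = 1 / math.sqrt(2)

# Components over the basis |uu>, |ud>, |du>, |dd>
singlet = [0.0, s, -s, 0.0]
triplets = [[1.0, 0.0, 0.0, 0.0],   # |T,1>
            [0.0, s, s, 0.0],       # |T,0>
            [0.0, 0.0, 0.0, 1.0]]   # |T,-1>

def swap(state):
    """Exchange the two spins: |ab> -> |ba>, i.e. swap the |ud> and |du> parts."""
    a, b, c, d = state
    return [a, c, b, d]
```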
"domain": "physics.stackexchange",
"id": 5899,
"tags": "quantum-mechanics, angular-momentum, quantum-spin, group-representations, lie-algebra"
} |
navigation carlike | Question:
how to use the navigation stack in a carlike robot?
Originally posted by kleber on ROS Answers with karma: 1 on 2011-11-25
Post score: 0
Answer:
http://answers.ros.org/question/27/how-can-i-use-the-navigation-stack-on-a-carlike
Originally posted by dornhege with karma: 31395 on 2011-11-25
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 7424,
"tags": "navigation"
} |
Eclipse unresolved includes | Question:
Hey,
I'm trying to use Eclipse (Indigo) with ROS (Hydro). I can import my packages but I still have unresolved includes.
I followed the Turorial but I'm not able to do this step:
In Eclipse, right click the project,
click properties -> C/C++ general ->
Preprocessor Include Paths, Macros
etc. Click the tab "Providers" and
check the box next to "CDT GCC
Built-in Compiler Settings [ Shared
]".
When I click on porperties -> c/c++ general there is no Preprocessor Include Paths, Macros etc.
Is there any way to solve this problem in another way?
Originally posted by hannjaminbutton on ROS Answers with karma: 65 on 2015-06-11
Post score: 1
Answer:
Did you click on the small black triangle next to C/C++ General? If you clicked on just C/C++ General it should just say "Enable Project Specific Settings" which is not the option you want.
On Eclipse Luna, which is what I'm using, under C/C++ General it says: Code Analysis, Documentation, File Types, Formatter, Indexer, Language Mapping, Preprocessor Include Paths and Profiling Categories.
If any of those are missing, you might want to consider using a more recent version of eclipse.
If adding CDT GCC Built-in Compiler Settings [ Shared ] doesn't fix your problem, make sure to check Project -> Properties -> C/C++ Include Paths and Symbols, and see if /usr/include/c++/4.x and /usr/include/c++/4.x/backward are in there. 4.x for me was 4.8. Check yours by using your file manager and going to /usr/include.
Remember to go to Project ->C/C++ Index after finishing.
It should be fixed by then, assuming you installed Eclipse and ran it from the terminal with bash -i -c "eclipse"
Originally posted by raghav.rao32 with karma: 16 on 2015-06-11
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by hannjaminbutton on 2015-06-12:
Yes, I clicked on the triangle. For my version it only says Code Style, Documentation, File Types, Indexer and Language Mapping.
I also included these paths but its still not working.
But thanks for your answer, I will get a more recent version of eclipse!
Comment by Reiner on 2015-06-16:
http://answers.ros.org/question/52013/catkin-and-eclipse/
The top rated answer will help you.
I battled with the same problem for a week. sourcing seems to be the answer | {
"domain": "robotics.stackexchange",
"id": 21885,
"tags": "eclipse"
} |
Using MO theory, give an explanation for the C-C bond length in cyanogen | Question: 1. The problem statement, all variables and given/known data
2. The attempt at a solution
I know that the MO diagram for CN- is this:
I am unsure how to draw the MO diagram for cyanogen. I know that the higher the bond order, the shorter the bond length. I have tried to draw a MO diagram for cyanogen by combining two of the MO diagram for CN:
Not sure if this is necessary but I have drawn the following resonance structures:
I think that structure (I) contributes the most as structures (II) and (III) have a positive charge on the nitrogen.
I think the C-C bond in cyanogen is shorter than the "usual" C-C bond lengths due to delocalisation across it but I don't know how to explain this using Molecular Orbital theory.
Answer: See my answer to your question on methyl vinyl ether for some background on molecular orbital (MO) theory.
In this case, your molecule is analogous to the all-carbon system, butadiene. In cyanogen you have 4 p orbitals that can align, overlap and mix together to generate 4 molecular orbitals and you have 4 electrons (1 from each p orbital involved in a pi bond) to place in your molecular orbitals. Since the carbons in cyanogen are $\ce{sp}$ hybridized you actually have a second set of molecular orbitals that are identical to the first in all respects except that the 4 p orbitals in the second set are rotated 90 degrees with respect to the other set of p orbitals.
Let's count electrons. We have 4 electrons in each of these orthogonal molecular orbitals that each contains 4 p orbitals, so we have 8 electrons in our two orthogonal sets of molecular orbitals. We also have 2 lone pairs remaining on each nitrogen for a total of 4 electrons. Finally we have the sigma system; we have 2 C-N sigma bonds and 1 C-C sigma bond each containing 2 electrons for a total of 6 sigma electrons. For the whole molecule we have 8+4+6=18 electrons as expected.
As I mentioned in my other answer, we typically focus on the delocalized molecular orbitals formed by combining the p orbitals and disregard the sigma system (usually) when we create our MOs. Disregarding the sigma system usually simplifies matters.
There are a number of ways to create the MO diagram for butadiene or cyanogen, but all of them will have the common theme of mixing 4 p atomic orbitals and generating 4 molecular orbitals. Here is one way to do it, let's start by mixing 2 p orbital to create the MO's for ethylene (or nitrile).
image source
Now we can mix 2 ethylenes (or nitriles) to generate butadiene (or cyanogen)
image source
We label molecular orbitals $\ce{\Psi_1 ~- ~\Psi_4}$ in order of increasing energy (so $\ce{\Psi_1}$ is the molecular orbital at the bottom of the figure and $\ce{\Psi_4}$ is at the top). Don't worry about the "S"'s and "A"'s in the diagram, they are just telling us about the symmetry properties of the molecular orbitals which we don't need to focus on right now.
Notice how $\ce{\Psi_1}$ has overlap between the orbitals on the two middle carbons. This indicates overlap and a higher bond order between these 2 carbons, just as your resonance structures II and III indicate. Just like in butadiene, this carbon-carbon bond has some double bond character and, therefore, there is a higher barrier to rotation about it than you might otherwise expect.
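This delocalization can be made quantitative with a simple Hückel-type sketch (my own illustration, treating one π system of cyanogen like butadiene's 4-orbital chain): the coefficients of the $k$-th MO of an $n$-orbital chain are proportional to $\sin(jk\pi/(n+1))$, and filling the two lowest MOs with the 4 π electrons gives a positive π bond order between the middle atoms:

```python
import math

def mo(k, n=4):
    """Normalized Hueckel MO coefficients for a linear chain of n p orbitals."""
    norm = math.sqrt(2 / (n + 1))
    return [norm * math.sin(j * k * math.pi / (n + 1)) for j in range(1, n + 1)]

psi1, psi2 = mo(1), mo(2)   # the two occupied pi MOs (4 pi electrons)

# Coulson pi bond order between the two middle atoms (positions 2 and 3):
p23 = 2 * (psi1[1] * psi1[2] + psi2[1] * psi2[2])
```

In $\Psi_1$ the middle coefficients have the same sign (bonding); in $\Psi_2$ there is a node between them. Summed over both occupied MOs, the central π bond order comes out to about 0.45, i.e. partial double-bond character, consistent with the shortened central bond.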
"domain": "chemistry.stackexchange",
"id": 2478,
"tags": "inorganic-chemistry, bond, molecules, molecular-orbital-theory, resonance"
} |
How much water can exist as a gas in a closed container? | Question: If i leave water in a closed container at some ambient temperature what proportion of that water will turn into a gas at equilibrium?
Maybe it is easier to solve this by first answering: what is the ratio between steam and water vapour at a certain pressure and temperature? I'm not making much progress.
Answer: When an equilibrium is reached between a liquid phase and the corresponding gaseous phase (where there is only the vapor of the liquid), it appears that the pressure of the gas only depends on the temperature (and not on the quantity of liquid left, as long as there is still some liquid). This value of the pressure is called vapor pressure.
Now, let's imagine that there is some liquid $A_l$ in a container which already contains some gas $B$, then it appears that the partial pressure of the gaseous phase $A_g$ equals the vapor pressure $P_{vap,A}$. Let's call $n_{A,g}$ the quantity of gas $A$, $V$ the available volume of the container (that is, the initial volume of the container minus the volume of $A_l$) and $T$ the temperature, then since $P_A = P_{vap,A}$, ideal gas law gives
$$P_{vap,A}V = n_{A,g}RT$$
or $$n_{A,g} = \frac{P_{vap,A}V}{RT}$$
So it appears that the amount of vapor in the container depends on the available volume and on the temperature, but not on the amount of liquid introduced (as long as there is still some liquid inside).
At $20°C$, $P_{vap,water} = 23.4$ mbar, so this gives a concentration $c \approx 0.96$ mol/m$^3$, or $c_m \approx 17$ g/m$^3$
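Plugging in the numbers (a sketch of the arithmetic above):

```python
R = 8.314       # gas constant, J/(mol K)
T = 293.15      # 20 degrees Celsius in kelvin
P_vap = 2340.0  # vapor pressure of water at 20 C: 23.4 mbar in Pa
M = 18.0        # molar mass of water, g/mol

c = P_vap / (R * T)   # moles of vapor per m^3 of available volume
c_m = c * M           # grams of vapor per m^3
```

This gives roughly $0.96\ \mathrm{mol/m^3}$, i.e. about $17\ \mathrm{g}$ of water vapor per cubic metre of free volume, consistent with the familiar saturation density of air at $20°C$.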
"domain": "physics.stackexchange",
"id": 42234,
"tags": "pressure, water, equilibrium, kinetic-theory"
} |
Are the position eigenkets $\lvert x \rangle$ really a basis for the space of states? | Question: In my current understanding, matrix formulation and wave-function formulation of QM are basically the same because $\left|\psi\right>$ and $\psi(x)$ are really the same mathematical object: A vector in the (vector) space of complex functions. My issue with this is the status of $\hat{x}$ and its eigenvectors $\left|x\right>$.
Let’s ask a simple question: What is the dimensionality of the vector space? On the one hand, every vector can be decomposed into a sum of energy eigenstates:
$\left|\psi\right> = \sum_n c_n\left|n\right>$
Or every vector can be decomposed into an integral over position eigenstates:
$\left|\psi\right> = \int \psi(x)\left|x\right>dx$
In the first case, we have an infinite enumerable basis. In the second case, we have an infinite non-enumerable basis. But they are supposed to be two different orthonormal bases of the same vector space!
Or put another way, how can we write $\psi(x) = \left<x|\psi\right>$ if the set of basis vectors $\left|n\right>$ is enumerable but the domain of $\psi(x)$ isn’t?
Answer: Yes, this is something physicists like to sweep under the rug.
$\lvert x \rangle$ as an "eigenvector" of the position operator is not a vector in the physical Hilbert space of states $\mathcal{H}$ - it's not normalizable, for one, as the odd "inner product"
$$ \langle x' \vert x \rangle = \delta(x' - x)$$
indicates. To deal with it mathematically, one has to introduce the concept of rigged Hilbert spaces, see also this question and answer. It boils down to the fact that the statement "But they are supposed to be two different orthonormal basis of the same vector space!" is simply false - they are not two bases of the same space.
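A finite-dimensional toy picture of why $\lvert x \rangle$ fails to be normalizable (my own illustration, not part of the original answer): on a grid with spacing $dx$, the "delta-normalized" position kets are standard basis vectors scaled by $1/\sqrt{dx}$, and their squared norm $1/dx$ diverges as the grid is refined — the discrete shadow of $\delta(0)$.

```python
from math import sqrt

def position_ket(i, n_points, dx):
    # "Delta-normalized" position eigenvector on a grid with spacing dx:
    # the i-th standard basis vector scaled by 1/sqrt(dx), so that
    # <x_i|x_j> = delta_ij / dx -- the discrete stand-in for delta(x_i - x_j).
    return [(1.0 / sqrt(dx) if j == i else 0.0) for j in range(n_points)]

for dx in (0.1, 0.01, 0.001):
    ket = position_ket(0, 5, dx)
    overlap = sum(a * a for a in ket)  # <x_0|x_0> = 1/dx
    print(dx, overlap)                 # the "norm" blows up as dx -> 0
```

In the continuum limit no vector of finite norm survives, which is exactly why $\lvert x \rangle$ lives in the rigged extension of the Hilbert space rather than in $\mathcal{H}$ itself.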
It's completely non-obvious, but nevertheless surprisingly often true, that one can get away with pretending that the identity on $\mathcal{H}$ can be written as $\int \lvert x \rangle \langle x \rvert \mathrm{d}x$ as if the $\lvert x \rangle$ were a basis of $\mathcal{H}$, when, in fact, the space is separable in all physical cases and has a countable basis $\lvert \psi_i \rangle$ so that the identity is $\sum_{i\in\mathbb{N}}\lvert\psi_i\rangle\langle\psi_i\rvert$. Note that the $\lvert\psi_i\rangle$ are not always energy eigenstates, since the Hamiltonian need not have discrete spectrum (and indeed hasn't for the free states, usually). | {
"domain": "physics.stackexchange",
"id": 25110,
"tags": "quantum-mechanics, hilbert-space"
} |
Hamilton-Jacobi equation and Action Functional | Question: Let the action functional $S[q]$ given by
\begin{equation}\label{eq16}
S[q]=\int\limits^{t_2}_{t_1}L\left(q^i(t),\dot{q}^i(t)\right)dt.\tag{1}
\end{equation}
Also, we know that using Legendre Transform the hamiltonian $H(q^i,p_i)$ is related with $L(q^i,\dot{q}^i)$ by
\begin{equation}
L(q^i,\dot{q}^i)=p_i \dot{q}^i - H(q^i,p_i) \quad \text{with} \quad p_i=\frac{\partial L}{\partial \dot{q}^i}\tag{2}
\end{equation}
Thus, replacing this last equation inside the action function we have
\begin{align}\label{eq105}
S(q,t)&=\int\limits^t_{t_0}p_i(t)\dot{q}^i(t)dt-\int\limits^t_{t_0}H(q^i(t),p_i(t))dt\tag{3}\\
S(q,t)&=\int\limits^{q(t)}_{q(t_0)}p_i(t)dq^i-\int\limits^t_{t_0}H(q^i(t),p_i(t))dt.\tag{4}
\end{align}
Finally, we also know that the differential form of $S(q,t)$ is, by definition,
\begin{equation}\label{eq110}
dS(q^i,t)=\frac{\partial S(q^i,t)}{\partial q^i}dq^i + \frac{\partial S(q^i,t)}{\partial t}dt\tag{5}
\end{equation}
which give us these following relations
\begin{equation}
p_i=\frac{\partial S(q^i,t)}{\partial q^i} \quad \text{and} \quad -H(q^i,p_i)=\frac{\partial S(q^i,t)}{\partial t}.\tag{6}
\end{equation}
Replacing $p_i$ relation inside $H$ give us the Hamilton-Jacobi equation
\begin{equation}\label{eq116}
\frac{\partial S(q^i,t)}{\partial t}+H\left(q^i,\frac{\partial S(q^i,t)}{\partial q^i}\right)=0.\tag{7}
\end{equation}
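As a concrete illustration (my own example, not part of the original question): for a free particle with $H=p^2/2m$, equation (7) can be solved by a separation ansatz linear in $q$.

```latex
% Free particle: H(q,p) = p^2/2m, so eq. (7) becomes
% \partial_t S + (1/2m)(\partial_q S)^2 = 0.
% A complete solution with separation constant \alpha:
\begin{equation}
S(q,\alpha,t) = \alpha q - \frac{\alpha^2}{2m}t,
\qquad
p = \frac{\partial S}{\partial q} = \alpha,
\qquad
\frac{\partial S}{\partial t} = -\frac{\alpha^2}{2m} = -H.
\end{equation}
```

Here $\alpha$ is the conserved momentum, and the extremal trajectories recovered from this $S$ are straight lines of uniform motion.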
Question: When working with the action functional we already know that physical motions are those curves that are an extremum of $S[q]$. But here we actually wrote $S[q]$ as a function $S(q,t)$ of $(q,t)$ and this gives us the Hamilton-Jacobi equation. My question is:
Is every solution of the Hamilton-Jacobi equation an extremum of $S[q]$?
Also, is there a relation (even conceptually) between extrema of $S[q]$ and general solutions of the Hamilton-Jacobi equation?
Answer:
Eq. (1) is the off-shell action functional $S[q]$.
Eqs. (3)-(4) are presumably the (Dirichlet) on-shell action function $S(q_f,t_f;q_i,t_i)$. It satisfies eqs. (5)-(6), which are proven in a Lemma of my Phys.SE answer here.
Hamilton's principal function $S(q,\alpha,t)$ is the solution to Hamilton-Jacobi equation (7).
OP's main questions (v2) seems a bit like asking to compare apples and oranges. Presumably they want to ask about relationships between the 3 above objects denoted with the same letter $S$. This is explained in my Phys.SE answer here. | {
"domain": "physics.stackexchange",
"id": 69408,
"tags": "classical-mechanics, lagrangian-formalism, hamiltonian-formalism, variational-principle, action"
} |
When will a cubical shaped diamond in outer space transform into a spherical one? | Question: Suppose we were able to build up a diamond (it could also be another material, but the structure of a diamond is very solid, literally) in the form of a cube in outer space. Will the diamond remain cubical if we make it bigger and bigger? I suppose the diamond's mass will become big enough at a certain point to transform its cubical shape into a spherical shape, and if so, how can one calculate the length of the cube's edges when this happens?
EDIT
Maybe it's easier to ask, as Anders Sandberg commented, if, and when, a spherical diamond will start to contract if we increase the mass of the diamond symmetrically and (almost) continuously. Will the increase in mass of the diamond (or some other solid) cause a gravitational force at the surface which will overcome the force of the solid pushing outward, and, if so, at which radius will this occur? Or will the spherical diamond grow without limit?
Answer: If we look at the spherical case, the pressure at the core of a sphere of constant density is $$P=\frac{2\pi}{3}G\rho^2 r^2.$$ At some point this becomes higher than what the diamond lattice will support, and adding more radius will cause the core to contract. This happens at $$r_{max}=\sqrt{\frac{3P_{max}}{2\pi G\rho^2}}.$$ For diamond the compressive strength is somewhere north of 110 GPa (with simulations suggesting 223.1, 469 and 470 GPa along different directions), so if we plug in $\rho=3.5\cdot 10^3$ kg/m$^3$ we get a max size of 8015 to 16567 km depending on whether we go with 110 or 470 GPa.
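A quick numerical check of the $r_{max}$ formula (my own sketch, plugging in the numbers quoted above):

```python
from math import pi, sqrt

G = 6.674e-11  # gravitational constant, m^3/(kg*s^2)
RHO = 3.5e3    # density of diamond, kg/m^3

def r_max(p_max):
    # Radius at which the central pressure of a constant-density
    # sphere reaches the material strength p_max (in Pa).
    return sqrt(3 * p_max / (2 * pi * G * RHO**2))

for p in (110e9, 470e9):  # compressive-strength bounds for diamond, Pa
    print(p / 1e9, 'GPa ->', round(r_max(p) / 1e3), 'km')  # ~8015 and ~16567 km
```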
Note that this does not mean nothing happens before. Diamond has a bulk modulus of 443 GPa, which means that at 100 GPa core pressure you would get a volume contraction of about 22%. So the crystal lattice would have to accommodate some strain, and I have no doubt there would be some risk of cracking.
After reaching $P_{max}$ other forces will come into play, in particular electron degeneracy pressure. Basically one can model the further evolution of the diamond ball as a carbon white dwarf star at zero temperature. One of the interesting things with such degenerate objects is that they become smaller with increasing mass, so it is a reasonable bet that $r_{max}$ represents the peak size of the diamond object.
What about the cubical case? It is possible to get closed-form expressions of gravity in and around a cube. They are not particularly simple, and integrating them again to get the central pressure looks like it will give very messy algebra (or maybe I am just too lazy). Doing a numerical integration in Matlab along a line towards a face, towards a corner and towards an edge midpoint (plus comparing with the spherical case of the same volume) gives the following forces and pressures for a $2\times2\times 2$ meter cube:
The sphere case gives a slightly smaller force, but it extends further than the center-to-face distance while giving more force than the corner directions, which extend even further. Approximating the cube with a sphere does not give an answer that is too far off.
We can see that the cube corners experience less force inwards than the faces, but they are significantly taller. The height wins: the pressure will be about twice as large in their direction, and the alignment with the crystal lattice will really matter.
(EDIT: I redid my calculations, this time hopefully getting all coordinate lengths right.) | {
"domain": "physics.stackexchange",
"id": 56916,
"tags": "gravity, solid-state-physics"
} |
Most negative and most positive value for proton chemical shifts | Question: What are the most negative and the most positive values for proton chemical shifts recorded till present?
Answer: I don't know if the following two examples present the "largest" upfield and downfield proton-nmr chemical shifts, but I suspect they're in the running. The dihydropyrene dianion example has 16 pi electrons
delocalized around the periphery of the pyrene frame. It fits the 4n rule with n=4, so it is antiaromatic. The [16]-annulene dianion has 18 pi electrons and fits the 4n+2 rule with n=4, so it is aromatic. Note how the direction of the ring current reverses between antiaromatic (paramagnetic current) and aromatic (diamagnetic current) systems. More interesting proton chemical shifts can be found in this compilation.
Edit
Guilty of thinking organic. If we're including inorganic proton shifts, then how about IrHCl2(PMe(t-Bu)2)2 which has a chemical shift of -50.5! | {
"domain": "chemistry.stackexchange",
"id": 1733,
"tags": "nmr-spectroscopy"
} |
Can I sterilize the equipment for experiments without an autoclave? | Question: If I don't have access to an autoclave, can I sterilize my equipment in a regular pot? If so, what time is needed at about 110°C (I measured inside) for a proper sterilization?
Answer: It certainly can be done. Tyndallisation, as discussed on Labmate, may work well for your situation. If not, there are many other viable alternatives suggested.
**Tyndallisation:**
Developed partway through the 19th century by English physicist, John Tyndall, the eponymous Tyndallisation was a commonly used
technique by the microbiologists of the 1800s. Tyndall experimented
with boiling beef broths to develop a method to sterilise liquids in a
safe and comprehensive manner.
His research led him to the basic principles for Tyndallisation, a process by which medias are subjected to relatively short boils at a
regular atmospheric pressure. This relatively straightforward method
is still suitable for small labs, or research facilities which only
require sterilised equipment part of the time.
It is not recommended to attempt to sterilise closed glass containers using this method, without lining them with cotton and
capping them with foil – allowing for air to escape without being
subjected to contaminants.
The process involves boiling fluids for 10-15 minutes before leaving to cool to room temperature and leaving it to sit for 24
hours. Repeat this process another three or four times, by which time
sterilisation should have occurred.
Naturally this method has reduced in popularity due to the requirement to continue the process for up to five days to achieve
sterilisation. | {
"domain": "biology.stackexchange",
"id": 10142,
"tags": "botany, microbiology, sterilisation"
} |
How to re-order logical qubit locations into a desired state? | Question: I'm studying qubit mapping. When the mapping is done, each logical qubit is mapped to an arbitrary physical qubit. But I want to move the logical qubits to desired locations.
For example, if the mapping state is like below. Left side is logical and right side is physical
1 -> 0
3 -> 1
2 -> 2
0 -> 3
So, after executing all quantum gates, I want the logical qubits' locations to be like below
0
1
2
3
It means changing the qubit ordering into the desired state: (0, 1, 2, 3).
Or is there any way (a Qiskit function) that I can extract the circuit's qubit locations and change them?
Answer: I found something that could help you: it's documentation with commands for doing different operations between physical and virtual [qu]bits.
You could use something like this:
{(QuantumRegister(3, 'qr'), 0): 0,
(QuantumRegister(3, 'qr'), 1): 1,
(QuantumRegister(3, 'qr'), 2): 2}
Can be written more concisely as follows:
* virtual to physical::
{qr[0]: 0,
qr[1]: 1,
qr[2]: 2}
* physical to virtual::
{0: qr[0],
1: qr[1],
2: qr[2]} | {
"domain": "quantumcomputing.stackexchange",
"id": 2978,
"tags": "programming, quantum-algorithms, qubit-mapping"
} |
The total length of input to a pushdown automaton which accepts by empty stack is an upper bound on the number of states and stack symbols | Question: I was going through the classic text "Introduction to Automata Theory, Languages, and Computation" (3rd edition) by Jeffrey Ullman, John Hopcroft, and Rajeev Motwani, where I came across a few statements about a pushdown automaton (PDA) which accepts by empty stack:
1. $n$, the total length of the input, is surely an upper bound on the number of states and stack symbols.
2. One rule could place almost n symbols on the stack.
The following statements were made while the authors were about to make some notes about the decision properties of CFLs(Context Free Languages)
Now here are some points by which I am possibly able to contradict the claim rather than proving it correct.
Suppose $n$ is the total length of the input. As per the design of the PDA, it might happen that not all states of the PDA are involved in accepting the input string, so by this we can't say that $n$ is an upper bound on the number of states the PDA has.
Though the PDA accepts by empty stack, it might happen that a transition function adds more than $n$ elements to the top of the stack; at the end, after consuming the $n$ input symbols, we can stay in a particular state and use epsilon transitions to remain in the same state and pop elements from the stack until it becomes empty. So how can we say that $n$ is an upper bound on the number of elements on the stack? We arrive at a contradiction...
I don't understand where I am making the mistake, because the same statements are written in the 3rd edition of the book without any changes being made from the second edition which makes it probable that the statement is correct.
I have attached the corresponding portion of the text below:
Answer: That section is not talking about parsing. The algorithms referred to are algorithms for converting between CFGs and PDAs of different types. The question is, as usual, "what is the computational complexity of the algorithm", and the response is, as usual, expressed in terms of the size of the input -- to the algorithm.
The input to an algorithm which converts a PDA to a CFG is the PDA, and the size of the input -- the measuring stick for the algorithm's complexity -- is the size of the *PDA*. That's what the authors intend n to be.
How does one measure the size of a PDA? Simple: you write the PDA out as a string in some PDA definition language, and count symbols in the description. That's a completely fair procedure, because in order to call the conversion algorithm on the PDA, it's necessary to give the converter an input which represents the PDA.
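A toy illustration of that point (a hypothetical encoding of my own, not from the book): once the PDA itself is the algorithm's input, every state and every stack symbol occurs at least once in its description, so the description length n trivially bounds both counts.

```python
# A tiny PDA written out in a made-up textual format.
states = ['q0', 'q1']
stack_symbols = ['Z', 'A']
rules = ['q0,a,Z->q0,AZ', 'q0,b,A->q1,', 'q1,b,A->q1,']

description = ';'.join(states + stack_symbols + rules)
n = len(description)  # the "input length" seen by a conversion algorithm

# Each state and stack symbol appears in the description at least once,
# so n is an upper bound on how many of each there can be.
assert len(states) <= n and len(stack_symbols) <= n
print(n, len(states), len(stack_symbols))
```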
Given that definition, the claims are not at all surprising. Every state and every alphabet symbol must be present at least once in the PDA description, so the size of that description is an upper bound. Similarly, it is possible that most of the description of the PDA is the description of a single rule, which could push almost n symbols. | {
"domain": "cs.stackexchange",
"id": 15908,
"tags": "formal-languages, automata, context-free, pushdown-automata, upper-bound"
} |
Center of gravity of a ring and static equilibrium | Question: We know that the center of gravity of a ring whose mass is uniformly distributed is at the geometrical center. Now, if the ring is in a vertical plane and a vertical force against gravity is applied at any point on the ring, how does the ring remain in equilibrium? (The applied force is directed opposite to gravity.)
As far as I know, the principle of transmissibility of force is applicable if two forces (same direction and same magnitude) have the same line of action and their point of action is on the body (in our case, the ring); then both of them will have the same effect individually. But in the case stated above, gravity is acting through a point outside the body, so we cannot expect it to have the same effect as if it were acting at the point where the other force F (the force applied to keep the ring in static equilibrium) acts. So how is the force F keeping the ring from falling?
For a while I considered the center of gravity of the ring being at its center to be something hypothetical, even imaginary. But even if it is imaginary, how is the applied force F keeping the ring in equilibrium?
(My expression might not be good enough or might be something stupid, but I did not mean to put something stupid in this. I am just curious about the action of forces in this case)
Maybe my understanding is flawed. It would be a great help if someone provides me a clear and easy explanation to figure out this topic.
Answer: There is not one individual force acting at the center of mass. In a simple, first-order model, multiple gravitational forces are acting downward on every bit of mass in the ring. If we add all those vectors together, the net vector is the mass of the ring, $m$, times the gravitational field strength, $g$. The net line of action passes through the geometric center of the ring (uniform mass distribution).
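A quick discretized check of that claim (my own sketch): summing equal weight vectors over the bits of a ring gives a net force of magnitude $mg$ whose line of action passes through the geometric center.

```python
import math

N, R, m, g = 1000, 1.0, 2.0, 9.81  # bits of mass on a ring of radius R
pts = [(R * math.cos(2 * math.pi * k / N), R * math.sin(2 * math.pi * k / N))
       for k in range(N)]

# Every bit of mass m/N pulls straight down, so the magnitudes simply add.
net_force = N * (m / N) * g  # = m * g

# The net line of action is vertical through the weighted centroid;
# for a uniform ring the centroid's x-coordinate is the center, x = 0.
x_line = sum(x for x, y in pts) / N
print(net_force, abs(x_line) < 1e-9)  # net force m*g, acting through x = 0
```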
Each bit of mass in the ring interacts with its nearest neighbors through electromagnetic forces (molecular bonding) so that the ring maintains its shape. The fact that bits of the ring are not accelerating relative to each other tells us the net internal forces are zero, so we don't worry about them. In reality, for a vertical ring, the ring distorts slightly and creates local stresses, but we ignore those in introductory physics.
If the ring is sitting on a table, the table exerts an upward force on the ring at the point of contact. If we observed the ring to not be accelerating (constant velocity or constant zero velocity), we know that (in our chosen reference frame) the vector sum of forces is zero. How can that happen? Because the intermolecular forces of the table hold it together and prevent the ring from passing through it, and result in an upward force on the ring.
The lines of actions of the forces are only important in 1) determining how to add the vectors because of the angular relationships and 2) in determining whether there is any net torque because the lines are or are not co-linear. In your case, you have specified that the normal (electromagnetic) force from the table is co-linear with and opposite in direction to the net gravitational force from the Earth. | {
"domain": "physics.stackexchange",
"id": 77718,
"tags": "newtonian-mechanics, statics"
} |
Get all followers and friends of a Twitter user | Question: I'm trying to find my bug or any potential bottleneck that cause my program to be really slow. The script is to get all the followers and friends and save that in MongoDB.
import pymongo
import tweepy
consumer_key = ''
consumer_secret = ''
access_token = ''
access_token_secret = ''
from pymongo import MongoClient
client = MongoClient()
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True, retry_count=3, retry_delay=60)
db = client.tweets
raw_tweets = db.raw_tweets
users = db.users
def is_user_in_db(user_id):
return get_user_from_db(user_id) != None
def get_user_from_db(user_id):
return users.find_one({'user.id' : user_id})
def get_user_from_twitter(user_id):
return api.get_user(user_id)
def get_followers(user_id):
users = []
page_count = 0
for user in tweepy.Cursor(api.followers, id=user_id, count=200).pages():
page_count += 1
print 'Getting page {} for followers'.format(page_count)
users.extend(user)
return users
def get_friends(user_id):
users = []
page_count = 0
for user in tweepy.Cursor(api.friends, id=user_id, count=200).pages():
page_count += 1
print 'Getting page {} for friends'.format(page_count)
users.extend(user)
return users
def get_followers_ids(user_id):
ids = []
page_count = 0
for page in tweepy.Cursor(api.followers_ids, id=user_id, count=5000).pages():
page_count += 1
print 'Getting page {} for followers ids'.format(page_count)
ids.extend(page)
return ids
def get_friends_ids(user_id):
ids = []
page_count = 0
for page in tweepy.Cursor(api.friends_ids, id=user_id, count=5000).pages():
page_count += 1
print 'Getting page {} for friends ids'.format(page_count)
ids.extend(page)
return ids
def process_user(user):
user_id = user['id']
screen_name = user['screen_name']
print 'Processing user : {}'.format(user['screen_name'])
the_user = get_user_from_db(user_id)
if the_user is None:
follower_ids = get_followers_ids(user['screen_name'])
friend_ids = get_friends_ids(user['screen_name'])
user['followers_ids'] = follower_ids
user['friends_ids'] = friend_ids
users_to_add = []
for follower in get_followers(screen_name):
if not is_user_in_db(follower.id):
users_to_add.append(follower._json)
for friend in get_friends(screen_name):
if not is_user_in_db(friend.id):
users_to_add.append(friend._json)
users.insert_many(users_to_add)
users.insert_one(user)
if __name__ == "__main__":
for doc in raw_tweets.find({'processed' : {'$exists': False}}):
print 'Start processing'
if 'user' in doc:
process_user(doc['user'])
if 'retweeted_status' in doc:
process_user(doc['retweeted_status']['user'])
raw_tweets.update_one({'_id': doc['_id']}, {'$set':{'processed':True}})
Answer: You should use is to compare to None, as that's faster.
def is_user_in_db(user_id):
return get_user_from_db(user_id) is not None
You can also use list concatenation rather than list.extend as it's slightly faster and there's no benefit to extend in this context. I also second the recommendation to use enumerate rather than having a manually incremented number in a for loop.
def get_followers(user_id):
users = []
for i, user in enumerate(tweepy.Cursor(api.followers, id=user_id, count=200).pages(), 1):
print 'Getting page {} for followers'.format(i)
users += user
return users
When you have a local variable, use it instead of repeating the dictionary lookup. A local variable is quicker to access than a key.
screen_name = user['screen_name']
print 'Processing user : {}'.format(screen_name)
You can also collapse these lines by assigning the results of the function calls directly.
user['followers_ids'] = get_followers_ids(screen_name)
user['friends_ids'] = get_friends_ids(screen_name)
You can make a list comprehension out of users_to_add, it's a single line of code that more efficiently creates a list based on a for loop-like construct. So this:
users_to_add = []
for follower in get_followers(screen_name):
if not is_user_in_db(follower.id):
users_to_add.append(follower._json)
for friend in get_friends(screen_name):
if not is_user_in_db(friend.id):
users_to_add.append(friend._json)
Can be turned into this:
users_to_add = [follower._json for follower in
get_followers(screen_name) if not is_user_in_db(follower.id)]
users_to_add += [friend._json for friend in
get_friends(screen_name) if not is_user_in_db(friend.id)]
Also, it's more efficient to call dictionary keys, assuming they're there and handle the exception if they're not, like so:
try:
process_user(doc['user'])
except KeyError:
pass
try:
process_user(doc['retweeted_status']['user'])
except KeyError:
pass
You could replace pass with something else if you'd like, but this is more efficient as the if statement in your script would check the dictionary to see if a key exists, and then check it again to get the actual value attached to the key. The try except way only checks once, and moves on if it gets nothing.
One final note, you'd be surprised how expensive frequent print calls are. You have ones running in loops. I don't know how often the loops run, but if you are experiencing sluggishness, try removing them to see what difference it makes. Feedback is obviously important, but speed is also important. | {
"domain": "codereview.stackexchange",
"id": 27477,
"tags": "python, mongodb, twitter"
} |
What database should I use? | Question: I am a high-school student who is learning about data science in his free time. I have gotten a neural network to work which is able to solve XOR problems. My neural network uses sigmoid as the activation function for both the hidden and output layers. It also has only one hidden layer. I am wondering what would be the best simple problem which I could solve with my neural net. I would like a dataset in which there is a probability output or something similar, since I've had problems converting sigmoided output to normal values. I have looked on the UCI machine learning repository but have found nothing which has caught my eye. I would appreciate any help! :)
Answer: Search "dataset" instead of "database".
see these:
A Neural Network from Scratch in just a few Lines of Python code
Multi Layer Neural Networks with Sigmoid Function (Deep Learning for Rookies)
"domain": "datascience.stackexchange",
"id": 8071,
"tags": "neural-network, dataset"
} |
Technique for converting recursive DP to iterative DP | Question: I'm new to Dynamic Programming and before this, I used to solve most of the problems using recursion(if needed).
But, I'm unable to convert my recursive code to DP code.
For eg, below is the pseudo-code for Finding longest common-subsequence in 2 strings :
int LCS(i,j):
if A[i]=='\0' or B[j]=='\0':
return 0;
else if(A[i]==B[j]):
return 1 + LCS(i+1,j+1)
else:
return max(LCS(i+1,j),LCS(i,j+1))
Now, I understand the above code and below is its DP equivalent:
//Some initializations are needed which have been skipped
if(A[i]==B[j]):
LCS[i,j] = 1 + LCS[i-1,j-1]
else:
LCS[i,j] = max(LCS[i-1,j],LCS[i,j-1])
I'm not able to figure out how did they construct its DP equivalent with the help of the recursion technique?
In the recursion technique, 1 + LCS(i+1,j+1) is written whereas, in DP 1 + LCS[i-1,j-1] has been used.
I am wondering how did they use minus in place of plus and how will I figure out myself(for other programs involving DP) that such adjustments need to be made in the code?
Answer: This DP code is the bottom up solution of the problem. The key is that you start with smaller subproblems and then use these solutions to solve a bigger subproblem.
The data structure we use to store these results is actually holding results to different subproblems of the problem. For eg. let say we want to find a LCS for strings abcbdab and bacdba and let the data structure be a 2D array called cache[][].
The smallest problem is when first string has only a and second string has only b. Here nothing is common. So the LCS has a length of 0 and hence $cache[1][1] = 0$ (1-based indexing). A slightly bigger problem than this would be when first string is a and second string is ba. Here the characters match and hence $cache[1][2] = 1$.
So in this way you can see we can keep growing the problem incrementally and use the previous solutions to have a solution of a bigger size problem.
The reason why we are using minus is because we are just using solutions of subproblems of smaller size represented by smaller indexes in our solution array.
You should not relate recursive and bottom-up solutions. There is no direct relation. They are totally different ways to solve the same problem. Hence you should keep in mind that you have to solve smaller subproblems and store their results.
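A runnable bottom-up version of the idea described above (my own sketch in Python, using the same `cache` naming):

```python
def lcs_length(A, B):
    # cache[i][j] = LCS length of the first i chars of A and first j chars of B.
    cache = [[0] * (len(B) + 1) for _ in range(len(A) + 1)]
    for i in range(1, len(A) + 1):        # grow subproblem sizes incrementally
        for j in range(1, len(B) + 1):
            if A[i - 1] == B[j - 1]:
                cache[i][j] = 1 + cache[i - 1][j - 1]  # reuse a smaller result
            else:
                cache[i][j] = max(cache[i - 1][j], cache[i][j - 1])
    return cache[len(A)][len(B)]

print(lcs_length('abcbdab', 'bacdba'))  # -> 4
```

Note how every lookup uses indices `i-1`/`j-1`: the table is filled from small prefixes up, so only already-computed (smaller) entries are ever read.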
I hope this clears the doubt. | {
"domain": "cs.stackexchange",
"id": 12948,
"tags": "algorithms, dynamic-programming, recursion"
} |
Why is the simple harmonic motion idealization inaccurate? | Question: While in my physics classes, I've always heard that the simple harmonic motion formulas are inaccurate, e.g. in a pendulum we should use them only when the angles are small; in springs, only when the displacement is small. As far as I know, SHM comes from the differential equation of Hooke's law - so, using calculus, it should be really accurate. But why isn't it?
Answer: The actual restoring force in a simple pendulum is not proportional to the angle, but to the sine of the angle (i.e. angular acceleration is equal to $-\frac{g\sin(\theta)}{l}$, not $-\frac{g~\theta}{l}$ ). The actual solution to the differential equation for the pendulum is
$$\theta (t)= 2\ \mathrm{am}\left(\frac{\sqrt{2 g+l c_1} \left(t+c_2\right)}{2 \sqrt{l}}\bigg|\frac{4g}{2 g+l c_1}\right)$$
Where $c_1$ is the initial angular velocity and $c_2$ is the initial angle. The term following the vertical line is the parameter of the Jacobi amplitude function $\mathrm{am}$, which is a kind of elliptic integral.
This is quite different from the customary simplified solution
$$\theta(t)=c_1\cos\left(\sqrt{\frac{g}{l}}t+\delta\right)$$
The small angle approximation is only valid to a first order approximation (by Taylor expansion $\sin(\theta)=\theta-\frac{\theta^3}{3!} + O(\theta^5)$).
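A numerical illustration of the difference (my own sketch; plain RK4, no external solver): integrate both the exact equation $\ddot\theta = -(g/l)\sin\theta$ and the linearized $\ddot\theta = -(g/l)\theta$ from the same initial angle and compare.

```python
import math

G_OVER_L = 9.81  # g/l for a 1 m pendulum, in 1/s^2

def simulate(theta0, accel, dt=1e-3, steps=5000):
    """RK4 integration of theta'' = accel(theta); returns sampled theta(t)."""
    th, om, out = theta0, 0.0, []
    for _ in range(steps):
        k1t, k1o = om, accel(th)
        k2t, k2o = om + 0.5 * dt * k1o, accel(th + 0.5 * dt * k1t)
        k3t, k3o = om + 0.5 * dt * k2o, accel(th + 0.5 * dt * k2t)
        k4t, k4o = om + dt * k3o, accel(th + dt * k3t)
        th += dt * (k1t + 2 * k2t + 2 * k3t + k4t) / 6
        om += dt * (k1o + 2 * k2o + 2 * k3o + k4o) / 6
        out.append(th)
    return out

def max_gap(theta0):
    # Largest deviation between the exact pendulum and its SHM approximation.
    exact = simulate(theta0, lambda t: -G_OVER_L * math.sin(t))
    shm = simulate(theta0, lambda t: -G_OVER_L * t)
    return max(abs(a - b) for a, b in zip(exact, shm))

print(max_gap(0.01))  # tiny: SHM is an excellent approximation at small angles
print(max_gap(2.0))   # order-1 gap: SHM fails badly at large angles
```

At $\theta_0 = 2$ rad the exact period is noticeably longer than $2\pi\sqrt{l/g}$, so the two solutions drift out of phase within a few swings.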
And Hooke's Law itself is inaccurate for large displacements of a spring, which can cause the spring to break or bend. | {
"domain": "physics.stackexchange",
"id": 38449,
"tags": "newtonian-mechanics, harmonic-oscillator, anharmonic-oscillators"
} |
point cloud is transformed wrong | Question:
Hi all,
I'm not sure what information would be sufficient for this question, so please feel free to ask about any details.
The problem is as follows:
I have recorded a dataset with kinect, based on the PR2, which provides the tf topic.
I then construct the point clouds from depth images and want to translate those point clouds into the world frame.
I know that the world frame is represented by /odom_combined and the camera frame by /head_mount_kinect_rgb_link, so I look up the transform:
listener.lookupTransform("/head_mount_kinect_rgb_link", "/odom_combined", time, transform);
And apply it to the point cloud via
pcl_ros::transformPointCloud(*filteredCloud,*filteredCloud,transform);
This works, but gives some strange results.
As I have recorded the dataset with the robot, I definitely know how it looks and that the camera was only rotating with the robot base, so the floor should normally stay horizontal. In my case, however, whenever the robot turns, the floor starts turning too, so it starts to look like a wall and then eventually like a ceiling. I know this may be a bad way to explain the situation, but for now it's the best I can do.
Does anyone have any idea what is happening there and what can be done to keep the floor horizontal?
Thanks in advance.
UPD:
It seems like the wrong rotation is performed on the point cloud. So, when the robot turns around the y axis, the point cloud is turned around the x (??) axis. And that seems to be the problem. Could anyone suggest how to deal with that?
UPD2:
Added the tf_frames.pdf on my google drive
Originally posted by niosus on ROS Answers with karma: 386 on 2012-10-19
Post score: 0
Answer:
Hi all! I have asked in the lab and eventually the problem is solved. The problems were in 2 different places.
The first one was that I used ros::Time(0) instead of the timestamp already provided by the info message from the camera, i.e. info_msg->header.stamp.
The second one was that I got confused with the order of the frames to pass to the lookupTransform function. The correct order should be (as suggested by @Mac):
listener.lookupTransform( "/odom_combined",filteredCloud->header.frame_id, time, transform);
After that I get the expected result. Thanks everyone for help.
Originally posted by niosus with karma: 386 on 2012-10-25
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 11440,
"tags": "kinect, stampedtransform, ros-fuerte, pointcloud, ros-pcl"
} |
Pressure and friction force distribution around a blunt body and a streamlined body | Question: How can one create a schematic that shows the pressure and force distribution around a blunt body (e.g. a cylinder) and an airfoil? Where can one get the data for these distributions? I looked for these schematics in some of the well-known books on aerodynamics (by John Anderson, for example) but couldn't find them. Referring me to literature that has these schematics would be helpful.
Answer: I assume you refer to a schematic like the following
If the actual pressure is needed you would need to do it experimentally:
You create a replica of the body surface and place pitot tubes around it (as many as you can afford and as is physically possible).
Then you would need to find a calibrated wind tunnel that fits the object
Perform a wind tunnel test for the angle and speed that you want (and record the pitot tubes at a high enough frequency)
Once the "noise" is filtered (usually through some sort of averaging or low pass filter), the pressure between points is interpolated using some sort of spline.
Of course, the above procedure is very expensive and time consuming -- I doubt this type of research is often carried out nowadays. What usually happens for plot generation nowadays is to perform a computational analysis with some type of CFD software.
There are many Computational Fluid Dynamics (CFD) packages available that can produce a schematic like the above in a matter of hours (of course, the main problem is that in order to get meaningful results you'd need years of training and experience). | {
"domain": "engineering.stackexchange",
"id": 4535,
"tags": "fluid-mechanics, airflow"
} |
Is there a task that is solvable in polynomial time but not verifiable in polynomial time? | Question: A colleague of mine and I have just hit some notes of one of our professors. The notes state that there are tasks that are possible to solve in polynomial time (are in the class of PF) but that are NOT verifiable in polynomial time (are NOT in the class of NPF).
To elaborate about these classes: We get some input X and produce some output Y such that (X,Y) are in relation R representing our task. If it is possible to obtain Y for X in polynomial time, the task belongs to the class of PF. If it is possible to verify polynomial-length certificate Z that proves a tuple (X,Y) is in relation R in polynomial time, the task belongs to the class of NPF.
We are not talking about decision problems, where the answer is simply YES or NO (more formally if some string belongs to some language). For decision problems it appears that PF is a proper subset of NPF. However, for other tasks it might be different.
Do you know of a task that can be solved in polynomial time but not verified in polynomial time?
Answer: This is only possible if there are many admissible outputs for a given input. I.e., when the relation $R$ is not a function because it violates uniqueness.
For instance, consider this problem:
Given $n \in \mathbb{N}$ (represented in unary) and a TM $M$, produce another TM $N$ such that $L(M)=L(N)$ and $\# N > n$ (where $\# N$ stands for the encoding (Gödel number) of $N$ into a natural number)
Solving this is trivial: keep adding a few redundant states to the TM $M$, possibly with some dummy transitions between them, until its encoding exceeds $n$. This is a basic repeated application of the Padding Lemma on TMs. This will require $n$ paddings, each of which can add one state, hence it can be done in polynomial time.
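To make the easy direction tangible, here is a hypothetical sketch of the same padding trick applied to ordinary program text (the names are illustrative, not from the notes):

```python
def pad_program(src: str, n: int) -> str:
    """Return a program with identical behavior whose encoding (here, its
    text length) exceeds n, by appending semantically inert comments."""
    while len(src) <= n:
        src += "\n# padding"   # comments never change what the program computes
    return src

padded = pad_program("print(1)", 50)
```

Producing a long-but-equivalent program is trivial; by contrast, deciding whether two arbitrary programs are equivalent is undecidable (Rice's theorem), mirroring the TM formulation above.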
On the other hand, given $n,M,N$ it is undecidable to check if $N$ is a correct output for the inputs $n,M$. Indeed, checking $L(M)=L(N)$ is undecidable (apply the Rice theorem), and the constraint $\#N > n$ only discards finitely many $N$s from those. Since we remove a finite amount of elements from an undecidable problem, we still get an undecidable problem.
You can also replace the undecidable property $L(M)=L(N)$ to obtain variations which are still computable but NP hard/complete. E.g. given $n$ (in unary) it is trivial to compute a graph $G$ having a $n$-clique inside. But given $n,G$ it is hard to check whether a $n$-clique exists. | {
"domain": "cs.stackexchange",
"id": 7855,
"tags": "complexity-theory, np"
} |
Precision of geocentric gravitational constant | Question: I am looking for an answer as to why the geocentric gravitational constant, μ, defined as the product of the gravitational constant, G, and the mass of a body (earth in this case), M, can be calculated to a higher degree of precision than either G or M alone.
From wikipedia
...for celestial bodies such as Earth and the Sun, the value of the product GM is known much more accurately than each factor independently. Indeed, the limited accuracy available for G limits the accuracy of scientific
determination of such masses in the first place.
Can someone explain this? I can't find a good answer anywhere I look.
Answer: Ignoring details such as the oblateness of the Earth, atmospheric drag, third body influences such as the Moon and the Sun, relativity, ..., the period of a satellite of negligible mass (even the International Space Station qualifies as a "satellite of negligible mass") is $T=2\pi\sqrt{\frac {a^3}{\mu_\mathrm{Earth}}}$. Neither Newton's gravitational constant nor the mass of the Earth is involved in this expression. This means that, ignoring those details, calculating $\mu_\mathrm{Earth}$ is merely a matter of calculating a satellite's orbital period and its semimajor axis.
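As a rough numerical illustration of that formula (approximate ISS values assumed, not taken from the answer):

```python
import math

T = 92.9 * 60          # ISS orbital period in seconds (approximate)
a = 6.78e6             # ISS semi-major axis in metres (approximate)

# From T = 2*pi*sqrt(a^3/mu), invert to get mu = 4*pi^2 * a^3 / T^2.
mu = 4 * math.pi ** 2 * a ** 3 / T ** 2    # ~4.0e14 m^3/s^2

# Dividing by the (comparatively imprecise) gravitational constant
# recovers the Earth's mass, as discussed below.
G = 6.674e-11
M_earth = mu / G                            # ~5.9e24 kg
```

Even with these back-of-the-envelope inputs the result lands close to the accepted value of $\mu_\mathrm{Earth} \approx 3.986 \times 10^{14}\ \mathrm{m^3/s^2}$; the precision of real determinations comes from tracking many satellites over long baselines.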
Humanity has lots and lots of artificial satellites in orbit, and the people who model the orbits of those satellites don't ignore those details. A few of those satellites were specially designed to enable the determination of the Earth's non-spherical gravitational field (e.g., GRACE and GOCE), and a few were specially designed to enable extremely precise orbit determination (e.g., LAGEOS). Even with all of those details, the Earth's gravitational parameter is a directly inferable quantity (i.e., knowledge of G is not required). Moreover, the value is known to a very high degree of precision.
The Earth's mass? Not so much. The most precise way to "weigh the Earth" is to divide the high precision Earth's gravitational parameter by the low precision universal gravitational constant G. There's a problem here, which is the notoriously low precision of the gravitational constant when expressed in SI units. | {
"domain": "astronomy.stackexchange",
"id": 1659,
"tags": "gravity, newtonian-gravity"
} |
How is this global temperature chart compiled? | Question: In this BBC News article, there is a chart labelled "Hottest day on record globally - Daily average air temperature, 1940-2023". It shows temperatures that are higher in summer and lower in winter for the northern hemisphere, which leads me to wonder what exactly this chart is showing. If it were a global average, would the temperature in summer (or winter) not be offset by the fact that the other side of the globe has its winter (or summer) at the same time?
So what is the method used in this chart? Is it just for the northern hemisphere? Is it the temperature over the land mass but not over the sea? Is the data for the southern hemisphere offset by half a year? Or is the world actually hotter in July because there's more land mass in the northern hemisphere?
Answer: As stated in your linked article, the graph shows the global average temperature based on ERA5 reanalysis data.
Typically for graphs like this you integrate the surface temperature over the whole domain (surface of earth) and normalise the result with Earth's surface area.
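A minimal sketch of that area-weighted normalisation on a regular latitude grid (synthetic zonal-mean temperatures assumed, not ERA5 data):

```python
import numpy as np

lats = np.linspace(-89.5, 89.5, 180)               # cell-centre latitudes, 1 deg bands
weights = np.cos(np.radians(lats))                 # band area ~ cos(latitude)

# Synthetic zonal-mean field: warm equator, cold poles (kelvin).
T = 288.0 - 30.0 * np.sin(np.radians(lats)) ** 2   # shape (180,)

global_mean = np.average(T, weights=weights)       # area-weighted mean, ~278 K
naive_mean = T.mean()                              # unweighted; over-weights the poles
```

The unweighted mean comes out colder than the weighted one because a plain average counts each 1° polar band as if it covered as much area as an equatorial band.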
A short explanation for the seasonal cycle is that the northern hemisphere has more land surface area (less water surface area) compared to the southern hemisphere. Water surfaces heat much more slowly than land surfaces, which introduces a phase lag with respect to the solar heating. Very simplified, think about it like this: high solar radiation in the southern hemisphere leads to high temperatures a few weeks/months later, while in the northern hemisphere high temperatures coincide with high radiation levels. This leads to the sinusoidal pattern you observed in the graph.
In the introduction of this article you can find many nice references if you are interested in some more details. Apparently, this has been textbook knowledge since at least 1903. | {
"domain": "earthscience.stackexchange",
"id": 2707,
"tags": "meteorology, climate-change, temperature, seasons"
} |
Convert heat from heat pump into energy | Question: I've learned that a heat pump transfers the energy from the hot side to the cold side.
Instead of transferring heat to the cold side, is it possible to convert it into electrical power, as with a steam turbine? Could an air conditioner produce power instead of releasing hot air?
My guess is that the energy transferred is much less than the energy needed to compress the gas that transfers the heat.
Answer:
Could an air conditioner produce power instead of releasing hot air?
I’m going to assume you meant heat pump and not air conditioner since that is the title of your question.
In theory you could use a heat pump to operate a heat engine that produces electricity. But the very best you can do is to connect a Carnot heat pump to a Carnot heat engine, because the Carnot heat pump and heat engine are the most efficient possible.
Let's say the Carnot heat pump uses electrical energy to move heat from a low temperature reservoir to a high temperature reservoir. This heat is then taken from the high temperature reservoir as the input to a Carnot heat engine, produces electrical energy as its output while rejecting heat to the lower temperature reservoir. That heat can then be the heat input to the heat pump and so on. The electrical energy output of the heat engine will equal the electrical energy input to the heat pump. The net electrical power produced will be zero.
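The break-even claim takes two lines of algebra to verify; a quick numeric sketch (reservoir temperatures chosen arbitrarily):

```python
Th, Tc = 300.0, 275.0        # hot/cold reservoir temperatures in kelvin (arbitrary)

cop = Th / (Th - Tc)         # Carnot heat-pump COP: heat delivered per unit work in
eta = 1.0 - Tc / Th          # Carnot heat-engine efficiency

W_in = 1.0                   # work driving the heat pump
Q_h = cop * W_in             # heat delivered to the hot reservoir
W_out = eta * Q_h            # work recovered by the engine from that heat

# W_out == W_in identically: (1 - Tc/Th) * Th/(Th - Tc) = 1, for any Th > Tc.
```

The cancellation holds for every temperature pair, which is exactly why even the ideal pairing only breaks even and real devices always lose.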
My guess is that the energy transferred is much less than the energy
needed to compress the gas that transfers the heat.
If I understand what you are saying correctly, your guess is correct. The scenario described above involves the most efficient heat pump and heat engine possible, and results in a "break even" regarding the electrical energy needed and the electrical energy produced. In reality this possibility doesn’t exist as it would constitute a perpetual motion machine (continual circulation of heat). For all real heat pumps and heat engines, you will need more electrical energy to operate the heat pump than you can produce with the heat engine.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 59510,
"tags": "thermodynamics"
} |
Are photon energies universally quantized? | Question: Is it theoretically possible to have a photon of any energy/wavelength? I have a vague memory in my mind of something like there being a minimum energy level for a photon and then the possible energy levels jump up in discrete but ever decreasing amounts, so that near e.g. the visible spectrum photon energies are for practical purposes continuous.
Answer: The photon is a quantum mechanical particle in the Standard Model of particle physics. Classical electromagnetic radiation emerges from a confluence of innumerable photons.
The photon is characterized by its zero mass and its energy which is equal to h*nu, where nu is the frequency of the emergent classical wave.
What you are referring to are photons from transitions between energy levels of atomic, molecular, or lattice bound states. There the photon comes in quanta, but these transitions are different for different atoms/molecules/lattices. Once emitted they can be redshifted or blueshifted depending on the emitting source's velocity, so there are spectra, but no real constraints on the possible energy carried by a photon. So yes, even though the photons may be coming from individual spectral lines, the frequency can seem continuous because of the great multiplicity of atomic/molecular/lattice sources.
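To put numbers on the "quantized lines, continuous shifts" point, a small sketch (H-alpha chosen as an example line; the constants are standard values):

```python
import math

h = 6.62607015e-34        # Planck constant, J*s (exact in SI)
c = 299792458.0           # speed of light, m/s (exact in SI)

lam = 656.28e-9           # H-alpha line wavelength in metres
E_line = h * c / lam      # photon energy of the line, ~3.0e-19 J

def redshifted_energy(E, beta):
    """Photon energy seen from a source receding at v = beta*c
    (relativistic longitudinal Doppler factor)."""
    return E * math.sqrt((1.0 - beta) / (1.0 + beta))
```

Since beta varies continuously with the source velocity, the observed photon energy sweeps continuously even though the emission line itself is fixed.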
In addition, charged elementary particles emit photons when accelerating or decelerating. In an antenna for example, the accelerated and decelerated electrons emit an electromagnetic wave: this is the confluence of individual photons emitted by individual electrons within the antenna, and the frequency is variable and constrained only by the possible dimensions of the antenna.
In conclusion photons are only constrained to have zero mass, but can have any energy, i.e. any nu. This is true for massive particles too, which are constrained to have their fixed mass but can have any energy. | {
"domain": "physics.stackexchange",
"id": 36137,
"tags": "photons"
} |
Sum of primes less than 2,000,000 | Question: I have been attempting the questions at Project Euler and I am trying to find the sum of Primes under two million (question 10)
The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.
Find the sum of all the primes below two million.
Here is my attempt which does work, but I would like to know if there is any way to improve this code. Any suggestions welcomed!
class SumOfPrimes
{
static void Main(string[] args)
{
Primes primes = new Primes(2000000);
long sum = 0;
foreach(int p in primes.list_of_primes){
sum += p;
}
Console.WriteLine(sum);
Console.ReadLine();
}
}
class Primes
{
public HashSet<int> all_numbers = new HashSet<int>();
public HashSet<int> list_of_primes = new HashSet<int>();
public HashSet<int> list_of_nonprimes = new HashSet<int>();
public Primes(int n)
{
all_numbers = new HashSet<int>(Enumerable.Range(1, n));
for (int i = 2; i < Math.Sqrt(n) + 1; i++)
{
for (int j = 3; j <= n / i; j++)
list_of_nonprimes.Add(i * j);
}
list_of_primes = new HashSet<int>(all_numbers.Except(list_of_nonprimes));
}
}
Answer: Some other things that you can try, short of trying a totally different algorithm:
Rename all_numbers to prime_candidates and remove composite numbers from it (change list_of_nonprimes.Add(i * j); to prime_candidates.Remove(i*j);), avoiding the Except at the end.
You can also hold which numbers are non primes in an array of bools and lose all the HashSets. Changing:
list_of_nonprimes.Add(i * j)
to
nonprimes[i*j] = true;
Thus avoiding a bunch of hash lookups. After the loops, sum up every n such that nonprimes[n] == false.
You can also use a BitArray instead of a bool array. Because it is more space efficient it might reduce cache misses.
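Putting the bool-array suggestion together, a sketch of the resulting sieve -- shown here in Python for brevity rather than C#, and with the standard start-at-i² refinement (smaller multiples of i were already marked by smaller factors):

```python
def sum_primes_below(n):
    # nonprime[i] becomes True once i is known to be composite.
    nonprime = [False] * n
    for i in range(2, int(n ** 0.5) + 1):
        if not nonprime[i]:
            # Mark every multiple of the prime i, starting at i*i.
            for j in range(i * i, n, i):
                nonprime[j] = True
    # Sum every n that was never marked composite.
    return sum(i for i in range(2, n) if not nonprime[i])

print(sum_primes_below(10))   # 17, matching the example in the question
```

The same structure carries over directly to a C# bool[] or BitArray.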
Of course, you can only be sure if any of these actually makes any improvement after you try. | {
"domain": "codereview.stackexchange",
"id": 3653,
"tags": "c#, primes, project-euler"
} |
Location of microbes causing eliminated diseases | Question: Do eliminated diseases like polio, smallpox etc still exist in locations they have been eliminated from or in the wild or did we just make them go extinct like the dodo?
Answer: Whether a disease can be eradicated or not depends on its reservoir. To be eliminated it needs to be eliminated in all places and organisms it can thrive in and spread from. This makes diseases like rabies for example hard to eradicate altogether because it can exist and spread in many different animal species.
Polio and smallpox however both have no non-human reservoir, which makes it possible to eradicate them. There is no "in the wild" for them to persist in because humans are "the wild" for them. So smallpox the disease has indeed been made extinct like the dodo; polio we're not quite there yet but we seem to be close.
I said "smallpox the disease" because "smallpox the virus" still exists in two laboratories in the US and Russia.
Polio, as I said, isn't an "eliminated disease" yet (as of May 2017), but since you mentioned it, the main places it persists now are Afghanistan and Pakistan. | {
"domain": "biology.stackexchange",
"id": 7083,
"tags": "evolution, microbiology, pathology, extinction"
} |
Using MFCC to an ANN Speech Recognition System | Question: I'm developing an Artificial Neural Network based Speech Recognition System using MFCCs.
Suppose I have 260 input nodes in the ANN, and this number of nodes corresponds to the number of MFCCs that I will use. During feature extraction the number of total coefficients vary with respect to the duration of the sound file. This poses a problem if the ANN was trained just for 260 coefficients.
So most likely the system will fail if a sound of different duration, which yields a lesser or greater number of coefficients, is used to test the neural network. My question is how do I go about this problem? I have seen several papers on the net talking about speech recognition using ANNs, but I haven't seen anything addressing this problem.
Answer: You will be using an "enframe" operation, such as this one:
http://my.fit.edu/~vkepuska/ece5526/TIMIT_Corpus/MATLAB/voicebox/enframe.m
This will split up your signal with a certain overlap. You will extract features (MFCCs) from those frames and train these as the parts of phonemes (or any other speech indicator).
You will do the same thing in runtime and classify each block obtained by enframe as a phoneme. At the end you get a result, where intervals of your speech are mapped to key speech blocks. Unifying them would allow to recognize what is spoken.
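A minimal numpy sketch of that enframe step (frame length and hop chosen as typical 25 ms / 10 ms values at 16 kHz; this is an illustrative reimplementation, not the linked MATLAB file):

```python
import numpy as np

def enframe(signal, frame_len, hop):
    """Split a 1-D signal into overlapping frames of frame_len samples,
    advancing hop samples between frames (trailing remainder dropped)."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = hop * np.arange(n_frames)[:, None] + np.arange(frame_len)[None, :]
    return signal[idx]

x = np.arange(16000, dtype=float)              # one second of "audio" at 16 kHz
frames = enframe(x, frame_len=400, hop=160)    # 25 ms windows, 10 ms hop
# frames has shape (98, 400); each row yields one MFCC vector, so the
# number of vectors tracks the utterance duration while the per-frame
# input size stays fixed -- which is what the fixed-size ANN consumes.
```

This is why the per-classification input to the network can stay constant even though utterances differ in length.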
If you don't want to do that, then you can go with HMMs, which can deal with variable-length speech signals. | {
"domain": "dsp.stackexchange",
"id": 3354,
"tags": "speech-recognition, mfcc, machine-learning"
} |
Average momentum in quantum mechanics over some finite interval of space | Question: Why can't the expectation value of momentum be computed over some finite interval of space? Something like, $$ \int_a^b \psi^* \hat{p}\psi ~\mathrm{d}x.\tag{1}$$ I understand that usually we compute expectation value over all space, but does the above quantity mean anything? Also, I am not assuming that $[a,b]$ is the width of some box that the particle is confined to.
Answer:
Technically, OP's eq. (1) is the expectation value
$$ \langle \hat{A} \rangle ~:=~ \int_{\mathbb{R}} \!\mathrm{d}x~\psi^{\ast} \hat{A}\psi \tag{i}$$
of a non-Hermitian operator
$$ \hat{A}~:=~ 1_{[a,b]}(\hat{x})\hat{p}.\tag{ii}$$
Here $x\mapsto 1_{[a,b]}(x)$ denotes the characteristic/indicator function for the interval $[a,b]\subseteq \mathbb{R}$. We assume $a<b$.
$\hat{A}$ is non-Hermitian, since $\hat{x}$ and $\hat{p}$ do not commute
$$ [\hat{x},\hat{p}]~=~i\hbar~{\bf 1}.\tag{iii} $$
Formally, to get a Hermitian operator/observable, consider e.g. instead the symmetrized operator
$$ \hat{B}~:=~\frac{1}{2}\left\{1_{[a,b]}(\hat{x}),~\hat{p}\right\}_+
~:=~\frac{1}{2}\left(1_{[a,b]}(\hat{x}) \hat{p}+\hat{p}1_{[a,b]}(\hat{x})\right)
~=~\hat{A} +\frac{i\hbar}{2}\left(\delta(\hat{x}\!-\!b)-\delta(\hat{x}\!-\!a)\right).\tag{iv}
$$
If there exist some boundary conditions such that
$$\psi(a)~=~0~=~\psi(b), \tag{v}$$
(e.g. because of an infinite potential at $x=a$ and $x=b$), then the operators $\hat{A}$ and $\hat{B}$ are effectively the same.
Fun fact: If the wave function $\psi\in\mathbb{R}$ is differentiable, real, and vanishes $\psi(\pm\infty)=0$ at infinity, then the expectation value vanishes
$$ \langle \hat{B} \rangle~\stackrel{(\text{iv})}{=}~
\frac{1}{2}\int_a^b \!\mathrm{d}x\left(\psi^{\ast} (\hat{p}\psi) + (\hat{p}\psi)^{\ast}\psi\right)~\stackrel{(\text{vii})}{=}~\int_a^b \!\mathrm{d}x ~mj~=~0, \tag{vi}$$
where
$$j~:=~\frac{1}{2m}\left(\psi^{\ast} \hat{p}\psi-\psi \hat{p}\psi^{\ast}\right)~=~0\tag{vii}$$
is the probability current. | {
"domain": "physics.stackexchange",
"id": 38010,
"tags": "quantum-mechanics, operators, momentum, hilbert-space, observables"
} |
openni_tracker has problem opening database/parameter File | Question:
I'm trying to use the openni_launch and openni_tracker packages. I made a package called kinect_hmi and a launch file which launches both the openni_launch file and openni_track. When I tested this previously I could view he PointCloud2 data and the tf data in RVIZ. However, today when I started up the code I get the following error.
Error opening database/parameters file.
The point cloud data still shows up in RVIZ, but the skeleton tracking does not seem to be working. I tried starting roscore, openni_launch, and openni_tracker separately to see if my launch file was the problem but it didn't help. I have no idea why it isn't working all of the sudden.
Just in case here is my launch file
<launch>
<include file="$(find kinect_hmi)/launch/camera_topics.launch" />
<node pkg="tf" type="static_transform_publisher" name="depth_to_camera_broadcaster" args="0 0 0 0 0 0 camera_link openni_depth_frame 100" />
<node name="openni_tracker" pkg="openni_tracker" type="openni_tracker" respawn="true"/>
</launch>
The camera topics launch file is the following
<launch>
<include file="$(find openni_launch)/launch/openni.launch" />
</launch>
I thought maybe that the parameter server wasn't running so I tried the rosparam list command. However, the command returns a long list of things so I'm assuming that the parameter server is running.
My question is: Why is this happening and how do I fix it?
Thanks in advance
Originally posted by rhololkeolke on ROS Answers with karma: 15 on 2013-02-21
Post score: 0
Original comments
Comment by Josch on 2013-03-04:
Have you tried reinstalling the openni package? Maybe you can just start install.sh again.
Answer:
Hello.
I had the same problem, but I managed to fix it.
The problem is in the PrimeSense library: some files don't have the right permissions.
U need go to:
/usr/etc/primesense/Features_1_5_2
and set the permissions on the Data folder. I did it through Nautilus and my problem was fixed.
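For reference, the equivalent fix from a terminal (path as given above; run with appropriate rights, and adjust the path if your install differs):

```shell
# a+rX: grant read to everyone, and execute (traverse) only on directories
sudo chmod -R a+rX /usr/etc/primesense/Features_1_5_2/Data
```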
Hope it will help you too.
Originally posted by PNowak with karma: 26 on 2013-03-15
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by rhololkeolke on 2013-03-20:
The system I'm developing on is erased each reboot. Unfortunately I rebooted before I could test this solution, but because rebooting (which would reset all file permissions to when they were working) fixed the problem I'm accepting this answer as correct.
Comment by jarvisschultz on 2015-11-06:
This answer fixed my error. Strange, I've been using the NITE packages for a long time, and I've never seen this error before. I wonder what caused it? | {
"domain": "robotics.stackexchange",
"id": 13003,
"tags": "ros, kinect, openi-tracker, parameter, parameter-server"
} |
Ratio of two tension forces in rods | Question:
Suppose I am rotating a meter rod. At the other end, there is an object of mass m kg. There is another rod connected to that object. This rod's length is also 1 meter, and there is another object at the other end of the second rod.
What is the ratio of the tension forces in these two rods when the objects are rotating with the same angular velocity?
This question came in my exam at my school and the answer is 2:3, but I think it should be 1:1, because the first rod experiences one tension while the second rod experiences two tension forces.
Answer: Suppose the tensions in the rods are $T_1$ and $T_2$ respectively. The force equation for the first (inner) object is:
$$ T_1-T_2= m\omega ^2 l$$
and for the second (outer) object:
$$ T_2= m\omega ^2 (2l)$$
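Carrying the two equations above through numerically as a quick check (the values are arbitrary, since the ratio is independent of m, omega, and l):

```python
m, w, l = 1.0, 2.0, 1.0      # arbitrary positive mass, angular velocity, rod length

T2 = m * w**2 * (2 * l)      # outer rod: supplies the full centripetal force at radius 2l
T1 = T2 + m * w**2 * l       # inner rod: T1 - T2 = m*w^2*l for the inner object

ratio = T2 / T1              # 2/3: outer-to-inner tensions stand in ratio 2:3
```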
Divide both equations and you'll get your answer. | {
"domain": "physics.stackexchange",
"id": 74109,
"tags": "homework-and-exercises, newtonian-mechanics, forces"
} |
Are there ROS API documents published, and where? | Question:
This seems like a stupid question, but I have honestly tried and failed to find any documentation for the ROS APIs. To better clarify, I'm coming from the Java world, where I can go to Oracle's website and look up the API for any class that's in the standard libraries. Not that I'm asking for the same level of documentation, but it seems like there should be something similar and it's just my failure in finding it. Or do I need to get the source code for the particular library I'm wondering about and reference those files directly?
I just posted previous question that I should have been able to answer for myself if I could have located the API documentation for the roscpp library. Can someone please point me in the right direction?
Thanks!
Hung
Originally posted by HQ on ROS Answers with karma: 3 on 2012-12-16
Post score: 0
Original comments
Comment by HQ on 2012-12-17:
Ah, I see where I went wrong - I started drilling down from the link in the main page, under the section "3. API Reference", and assumed that the link in the top right box for "Package Links" would be the same. Thanks for your patience and quick responses, dornhege and tfoote.
Answer:
You'll find API documentation for most packages by visiting their wiki page and clicking on the Code API link in the standard header. For example see the top right of the roscpp wiki page
Originally posted by tfoote with karma: 58457 on 2012-12-16
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 12132,
"tags": "ros, documentation"
} |
5G NR Network Collision Avoidance | Question: I'm looking for a technical explanation or a specification that explains exactly how 5G NR collision avoidance will work. I was watching a talk from the Mobile World Conference in Shanghai two years ago (2018) and it appears that centralized transmission scheduling may be a part of the standard - here is a timestamped link to the panel member that references this:
Steve Greaves addresses 802.11ay collision avoidance
To quote:
If you look at what Facebook are doing with AY they are trying to get coordination & synchronization across the
whole network. Once you have that capability what you can do is you
can manage interference. You can control the scheduling of
transmissions such that you can deal with the dense urban deployments
which are interference limited.
I can't find a standard or explanation for this exact thing within 5G so I was hoping someone could point me in the right direction.
Note: I am not intimately familiar with 5G and am not sure what parts of the "standard" for 5G have been defined. For a long time there was no standard at all. Also please let me know if there is a better community for this question.
Answer: IEEE 802.11 is not related to 5G NR, so considerations for WLAN networks following 802.11ay don't apply to cellular infrastructure.
I don't know what "ad-hoc" modes 5G NR brings; I don't think there are any.
So, 5G doesn't do any collision avoidance, because it's not a problem it has: it's a cellular standard that defines that the base station / cellular infrastructure decides who sends when and where. The mobile network operators own the spectrum, so there are by definition no uncooperative users.
Generally, whereas interference mitigation is in fact a non-negligible topic for coordinated microwave networks, with these deci- to centimeter waves still taking many non-optical paths and receiver sensitivity being high enough for that to create "unwantedly far reach", you can basically ignore that for cellular mmWave applications, where you can be pretty sure that you, as the owner of the used spectrum, will have a relatively easy time making sure that your own femtocell base stations don't lead to mutual interference: The propagation is pretty much optical, so, the potential numbers of base stations a single handset might reach is pretty limited by lines of sight and a pretty small radius. | {
"domain": "dsp.stackexchange",
"id": 8544,
"tags": "radio, interference"
} |
orocos_toolchain install git issue | Question:
I am trying to install hector_slam on Ubuntu Lucid lynx. For installing orocos_toolchain, I am following instructions on roswiki. At a intermediate step, i am getting following error:
dst:desktop:~/robotics/experiments/orocos_toolchain$ git submodule update
Initialized empty Git repository in /home/dst/robotics/experiments/orocos_toolchain/rtt/.git/
error: Unable to get pack file https://git.gitorious.org/orocos-toolchain/rtt.git/objects/pack/pack-8d2bd25397e5fedbb908340c0536ada28f680d46.pack
Peer closed the TLS connection
error: Unable to find 526a4adc3102ba0ec72ca1135cecf669e42d1f76 under https://git.gitorious.org/orocos-toolchain/rtt.git
Cannot obtain needed object 526a4adc3102ba0ec72ca1135cecf669e42d1f76
while processing commit a03ed29d72bd8eeaa06eb5f5ae3145b26d7026e1.
error: Fetch failed.
Clone of 'https://git.gitorious.org/orocos-toolchain/rtt.git' into submodule path 'rtt' failed
Can someone tell me cause of this?
Update1:
I deleted everything. and start a fresh again. But I am still getting following error.
error: Unable to get pack file https://git.gitorious.org/orocos-toolchain/rtt.git/objects/pack/pack-8d2bd25397e5fedbb908340c0536ada28f680d46.pack
Peer closed the TLS connection
error: Unable to find 526a4adc3102ba0ec72ca1135cecf669e42d1f76 under https://git.gitorious.org/orocos-toolchain/rtt.git
Cannot obtain needed object 526a4adc3102ba0ec72ca1135cecf669e42d1f76
while processing commit a03ed29d72bd8eeaa06eb5f5ae3145b26d7026e1.
error: Fetch failed.
Clone of 'https://git.gitorious.org/orocos-toolchain/rtt.git' into submodule path 'rtt' failed
Update2: i deleleted everything and started from fresh. Following is the output.
dst@dst-desktop:~/robotics/experiments$ git clone http://git.gitorious.org/orocos-toolchain/orocos_toolchain.git
Initialized empty Git repository in /home/dst/robotics/experiments/orocos_toolchain/.git/
dst@dst-desktop:~/robotics/experiments$ git clone http://git.mech.kuleuven.be/robotics/rtt_ros_integration.git
Initialized empty Git repository in /home/dst/robotics/experiments/rtt_ros_integration/.git/
remote: Counting objects: 624, done.
remote: Compressing objects: 100% (395/395), done.
remote: Total 624 (delta 392), reused 339 (delta 208)
Receiving objects: 100% (624/624), 97.34 KiB | 38 KiB/s, done.
Resolving deltas: 100% (392/392), done.
dst@dst-desktop:~/robotics/experiments$ git clone http://git.mech.kuleuven.be/robotics/rtt_ros_comm.git
Initialized empty Git repository in /home/dst/robotics/experiments/rtt_ros_comm/.git/
remote: Counting objects: 22, done.
remote: Compressing objects: 100% (20/20), done.
remote: Total 22 (delta 9), reused 0 (delta 0)
Unpacking objects: 100% (22/22), done.
dst@dst-desktop:~/robotics/experiments$
dst@dst-desktop:~/robotics/experiments$ git clone http://git.mech.kuleuven.be/robotics/rtt_common_msgs.git
Initialized empty Git repository in /home/dst/robotics/experiments/rtt_common_msgs/.git/
remote: Counting objects: 83, done.
remote: Compressing objects: 100% (80/80), done.
remote: Total 83 (delta 48), reused 0 (delta 0)
Unpacking objects: 100% (83/83), done.
dst@dst-desktop:~/robotics/experiments$ git clone http://git.mech.kuleuven.be/robotics/rtt_geometry.git
Initialized empty Git repository in /home/dst/robotics/experiments/rtt_geometry/.git/
remote: Counting objects: 223, done.
remote: Compressing objects: 100% (219/219), done.
remote: Total 223 (delta 115), reused 0 (delta 0)
Receiving objects: 100% (223/223), 44.48 KiB | 46 KiB/s, done.
Resolving deltas: 100% (115/115), done.
dst@dst-desktop:~/robotics/experiments$ roscd orocos_toolchain
dst@dst-desktop:~/robotics/experiments/orocos_toolchain$ git submodule init
Submodule 'log4cpp' (https://git.gitorious.org/orocos-toolchain/log4cpp.git) registered for path 'log4cpp'
Submodule 'ocl' (https://git.gitorious.org/orocos-toolchain/ocl.git) registered for path 'ocl'
Submodule 'orogen' (https://git.gitorious.org/orocos-toolchain/orogen.git) registered for path 'orogen'
Submodule 'rtt' (https://git.gitorious.org/orocos-toolchain/rtt.git) registered for path 'rtt'
Submodule 'rtt_gems' (https://git.gitorious.org/orocos-toolchain/rtt_gems.git) registered for path 'rtt_gems'
Submodule 'rtt_typelib' (https://git.gitorious.org/orocos-toolchain/rtt_typelib.git) registered for path 'rtt_typelib'
Submodule 'typelib' (https://git.gitorious.org/orocos-toolchain/typelib.git) registered for path 'typelib'
Submodule 'utilmm' (https://git.gitorious.org/orocos-toolchain/utilmm.git) registered for path 'utilmm'
Submodule 'utilrb' (https://git.gitorious.org/orocos-toolchain/utilrb.git) registered for path 'utilrb'
dst@dst-desktop:~/robotics/experiments/orocos_toolchain$ git submodule update
Initialized empty Git repository in /home/dst/robotics/experiments/orocos_toolchain/log4cpp/.git/
Submodule path 'log4cpp': checked out 'b81c6394d68b6d517f70c69605bb93c59391e65a'
Initialized empty Git repository in /home/dst/robotics/experiments/orocos_toolchain/ocl/.git/
Submodule path 'ocl': checked out '15d28f143392983a1de67bc2a6149dbe13117ebb'
Initialized empty Git repository in /home/dst/robotics/experiments/orocos_toolchain/orogen/.git/
Submodule path 'orogen': checked out 'f8bde6648360e04d9887223bd0b6df8df1e55500'
Initialized empty Git repository in /home/dst/robotics/experiments/orocos_toolchain/rtt/.git/
error: Unable to get pack file https://git.gitorious.org/orocos-toolchain/rtt.git/objects/pack/pack-8d2bd25397e5fedbb908340c0536ada28f680d46.pack
Peer closed the TLS connection
error: Unable to find 526a4adc3102ba0ec72ca1135cecf669e42d1f76 under https://git.gitorious.org/orocos-toolchain/rtt.git
Cannot obtain needed object 526a4adc3102ba0ec72ca1135cecf669e42d1f76
while processing commit a03ed29d72bd8eeaa06eb5f5ae3145b26d7026e1.
error: Fetch failed.
Clone of 'https://git.gitorious.org/orocos-toolchain/rtt.git' into submodule path 'rtt' failed
Stuck! Is there any alternative way to download?
Originally posted by prince on ROS Answers with karma: 660 on 2012-07-10
Post score: 0
Original comments
Comment by ipso on 2012-07-10:
Perhaps it was a transient error? Just checked here and all seems to work.
Comment by prince on 2012-07-10:
I had tried thrice over 24 hours!
Answer:
Edit after updated question:
Besides the fact that you have a problem with git, I can't find the dependency on OROCOS in hector_slam (at least not on its wiki page)?
Also: a deb (binary package) install is not possible? Edit: seems not, as there are (AFAIK) no debs for orocos_toolchain on fuerte yet.
I am sitting behind a proxy but that should not be a problem.
Should not no, but I've seen stranger things happen. The 'Peer closed the TLS connection' bit could point towards your proxy interfering with SSL traffic somehow. As to why the other git clone invocations succeed, no idea. A quick Google search shows more people with similar problems (see here for instance). No solutions that I can find though.
If possible, try without the proxy, to rule out any problems with it.
Alternatively, you could download a zip from gitorious of each of the projects referenced, create the required directory structure by hand and proceed with building.
You might also first try updating your git (since you mention being on Lucid): you can use the Ubuntu Git Maintainers PPA for that.
Originally posted by ipso with karma: 1416 on 2012-07-11
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by prince on 2012-07-11:
Please find the update2 in original question. I am sitting behind a proxy but that should not be a problem.
Comment by prince on 2012-07-15:
OROCOS tool chain is required for hector_quadrotor_controller
Comment by ipso on 2012-07-16:
Ok, but your original question stated that you were installing 'hector_slam', not the quadrotor_controller. But what is the status of the problem currently? Have you tried without going through the proxy? What about trying to update git? | {
"domain": "robotics.stackexchange",
"id": 10146,
"tags": "ros-fuerte"
} |
What is an unbalanced interferometer? | Question: I have read in some papers about a so-called unbalanced interferometer. This appears particularly in the context of experimentally verifying the Englert-Greenberger-Yasin duality relation. However, I can't figure out exactly what they mean by an unbalanced interferometer, and whenever I search for the definition of an unbalanced interferometer I only get more papers and I can't find a straightforward definition. So what exactly is an unbalanced interferometer?
Answer: People might use the term in different contexts to mean slightly different things.
When you are interested in interference between two beams that have a different history (path), you normally want to ensure two things:
equal intensity
equal path length
Without the former, you risk any interference pattern being "washed out" (because the pattern has a modulation depth set by the weaker component, superposed on a background set by the intensity difference). Without the latter, you run into a problem of temporal coherence: if the beams travelled different distances, then anything but perfectly monochromatic light will exhibit some loss of coherence and reduced contrast of the interference pattern. In fact this effect can be used to accurately estimate the line width of a spectral line: plotting the contrast of the interference pattern as a function of the path difference tells you the coherence length $L$; the relative line width is then related to the coherence length by $\frac{\delta \lambda}{\lambda}=\frac{\lambda}{L}$, or $\delta \lambda = \frac{\lambda^2}{L}$.
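As a quick numerical illustration of that last relation (with assumed values, not anything measured): a red laser line with a coherence length of 30 cm would imply a line width of roughly a picometre.

```python
# Illustration of delta_lambda = lambda^2 / L; the numbers are assumptions.
wavelength = 633e-9  # m, a typical red laser line (assumed)
L = 0.30             # m, assumed coherence length from a fringe-contrast plot

delta_lambda = wavelength ** 2 / L  # implied spectral line width
print(f"{delta_lambda * 1e12:.2f} pm")  # 1.34 pm
```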
When either of these conditions is not met, the interferometer is "unbalanced". But which kind it is depends on the context. | {
"domain": "physics.stackexchange",
"id": 33085,
"tags": "experimental-physics, wave-particle-duality, interferometry"
} |
Codable Failure Response Handler | Question: Suppose you have two kinds of responses: one for success and one for failure.
The success response model looks like
struct UserLogin: Codable {
let status: Bool?
let accessToken: String?
let data: [UserLoginData]?
....
}
The failure model looks like
struct FailedResponse: Codable {
let status: Bool?
let error: ErrorResponse?
}
How I handle these two responses:
struct FailableResponse <T:Codable,E:Codable> : Codable {
var success:T?
var failure:E?
public init(from decoder:Decoder) throws {
let singleValue = try decoder.singleValueContainer()
success = try singleValue.decode(T.self)
failure = try singleValue.decode(E.self)
}
}
And How I use FailableResponse
APIClient.login(userName: self.loginViewModel.userName, password: self.loginViewModel.password) { (response:FailableResponse<UserLogin,FailedResponse>? , error) in
}
// METHOD OF API CLIENT
// API CALLING
static func login<T:Codable>(userName:String,password:String,completion:@escaping completionResponse<T>) {
self.performRequest(request: APIRouterUserModule.login(email: userName, password: password)) {(model) in
self.handleResponseCallCompletion(result: model, completion: completion)
}
}
// Parsing
private static func handleResponseCallCompletion<T:Codable>(result:Result<Any>,completion:@escaping completionResponse<T>) {
let object = CodableHelper<T>().decode(json: result)
completion(object.object,object.error)
}
I think it can be better.
Any suggestions please :)
Answer: There are two approaches
For the sake of completeness, I’ll describe the typical, simple solution:
struct UserLoginResponse: Codable {
let status: Bool
let error: ErrorResponse?
let accessToken: String?
let data: [UserLoginData]?
}
The status is not optional (because it’s presumably there regardless). But you can just decode this struct:
do {
let responseObject = try decoder.decode(UserLoginResponse.self, from: data)
switch responseObject.status {
case true:
guard let accessToken = responseObject.accessToken, let userLoginData = responseObject.data else {
throw ParsingError.requiredFieldMissing
}
// use accessToken and userLoginData here
case false:
guard let errorObject = responseObject.error else {
throw ParsingError.requiredFieldMissing
}
// use errorObject here
}
} catch {
print(error)
}
If you really want to do this generic wrapper approach, I'd suggest a slight refinement. Notably, the code in your question returns an object for which both the success and error objects are optionals. (It seems like there are a lot of ?s thrown in there quite liberally, whereas most APIs dictate "if success, x and y will be present; if failure, z will be present".)
I’d suggest, instead, a pattern that captures the fact that the response will be either success (when status is True) or failure (when status is False), and rather than returning both success and failure objects as optionals, use the Result<Success, Failure> enumeration with associated values, which is included in Swift 5. Or, if you’re using an earlier version of Swift, you can define it yourself:
enum Result<Success, Failure> {
case success(Success)
case failure(Failure)
}
Then, the API response wrapper can have a non-optional result property of type Result<T, E>, where:
If status is True, parse the success object as a non-optional associated value which is the Success type;
If status is False, parse the error object as a non-optional associated value which is the Failure type;
Thus:
struct ApiResponse<T: Codable, E: Codable>: Codable {
let status: Bool
var result: Result<T, E>
var error: E?
enum CodingKeys: String, CodingKey {
case status, error
}
init(from decoder: Decoder) throws {
let values = try decoder.container(keyedBy: CodingKeys.self)
status = try values.decode(Bool.self, forKey: .status)
if status {
let singleValue = try decoder.singleValueContainer()
result = try .success(singleValue.decode(T.self))
} else {
let parsedError = try values.decode(E.self, forKey: .error)
error = parsedError
result = .failure(parsedError)
}
}
}
Then you can do:
do {
let responseObject = try decoder.decode(ApiResponse<UserLoginResponse, ErrorResponse>.self, from: data)
switch responseObject.result {
case .success(let object):
print(object.accessToken, object.data)
case .failure(let error):
print(error)
}
} catch {
print(error)
}
This would seem to better capture the true nature of the response, that it’s either successful (and you get the non-optional success object) or it’s a failure (and you get the non-optional ErrorResponse object).
By the way, I’d suggest that in this scenario, that UserLoginResponse be updated to reflect which fields are truly optional and which aren’t. For example, if you know that if status is True, that both accessToken and data will be present, then I’d make those non-optional properties:
struct UserLoginResponse: Codable {
let accessToken: String
let data: [UserLoginData]
} | {
"domain": "codereview.stackexchange",
"id": 34370,
"tags": "swift, ios"
} |
Is $F$ greater than the axial component of $f$? | Question: It is assumed that the flow at the convergent nozzle outlet is constant. $f$ is the force exerted by the convergent nozzle shell on the fluid in the nozzle. $F$ is the axial force exerted by the upstream fluid on the fluid in the nozzle. Is $F$ greater than the axial component of $f$? I think $F$ is greater than the axial component of $f$; therefore, the convergent nozzle will generate a recoil force.
Is my idea correct?
Answer: To get the force acting on the solid body due to the convergent flow, you need to carefully use the integral balances.
It's possible to study the system using integral balances of the mass and the momentum of the steady control volume coinciding with the internal volume of the nozzle, delimited by the inflow and outflow surfaces, assuming steady condition $\frac{d}{dt} \equiv 0$, and negligible volume forces $\mathbf{g} \equiv 0$,
mass:
$\displaystyle \underbrace{\dfrac{d}{dt} \int_V \rho}_{=0} + \oint_{\partial V} \rho \mathbf{u} \cdot \mathbf{\hat{n}} = 0$
momentum:
$\displaystyle \underbrace{\dfrac{d}{dt} \int_V \rho \mathbf{u}}_{=\mathbf{0}} + \oint_{\partial V} \rho \mathbf{u} \mathbf{u} \cdot \mathbf{\hat{n}} = \underbrace{\int_V \rho \mathbf{g}}_{=\mathbf{0}} + \oint_{\partial V} \mathbf{t_n}$.
Assuming uniform velocity on the inflow and outflow surfaces, assuming negligible viscous forces on the inflow and outflow surfaces it's possible to write
mass: $\rho_1 U_1 A_1 = \rho_2 U_2 A_2 = \dot{m}$
momentum, in the axial direction:
$\rho_2 U_2^2 A_2 - \rho_1 U_1^2 A_1 = P_1 A_1 - P_2 A_2 + F_{fs,x}$, or, if we foresee that the solid exerts on the fluid a force pointing to the left and we don't like negative unknowns, we can define $F_{fs,x}^{to\ the\ left} = - F_{fs,x}$
$\rho_2 U_2^2 A_2 - \rho_1 U_1^2 A_1 = P_1 A_1 - P_2 A_2 - F_{fs,x}^{to\ the\ left}$
being $F_{fs,x}$ the force applied on the fluid by the solid wall.
Now, we can recast the momentum equation as
$\dot{m}(U_2 - U_1) = P_1 A_1 - P_2 A_2 - F_{fs,x}^{to\ the\ left}$,
and since $U_2 > U_1$ we can conclude that
$0 \lt \dot{m}(U_2 - U_1) = P_1 A_1 - P_2 A_2 - F_{fs,x}^{to\ the\ left}$ and thus
$ P_1 A_1 > P_2 A_2 + F_{fs,x}^{to\ the\ left} $.
Concluding, $P_1 A_1$ is not only larger than $F_{fs,x}^{to\ the\ left}$, but also larger than the sum of the wall force and the resultant of the stress on the outflow surface. | {
"domain": "physics.stackexchange",
"id": 92462,
"tags": "forces, fluid-dynamics, flow"
} |
Fermi-Propagated Jacobi equation in the book The Large scale structure of space-time | Question: On page 81, equation (4.6), the author use the Fermi derivative to write the Jacobi equation
\begin{equation} \tag{4.6}
\frac{{D^2}_\text{F}}{\partial s^2} {}_{\bot}Z^a = -{R^a}_{bcd}{}_{\bot}Z^cV^bV^d + {h^a}_b {\dot{V}^b}_{;c} {}_{\bot}Z^c + \dot{V}^a\dot{V}_b {}_{\bot}Z^b
\end{equation}
and also the equality (4.5)
\begin{equation} \tag{4.5}
\frac{D_{\text{F}}}{\partial s}{}_{\bot}Z^a = {V^a}_{;b}{}_{\bot}Z^b
\end{equation}
where $Z$ is the deviation vector, $V$ is the unit tangent vector along the timeline curves, and ${h^a}_b = {\delta^a}_b+V^aV_b$ is the projection operator (see this).
Using properties (i) to (iv) of the Fermi derivative (which I can prove), equations (4.6) and (4.5) come naturally from equations (4.3) and (4.4).
The problem is on equation (4.7) and (4.8)
\begin{equation} \tag{4.7}
\frac{\text{d}}{\text{d} s} Z^\alpha = {V^\alpha}_{;\beta} Z^{\beta}
\end{equation}
\begin{equation}\tag{4.8}
\frac{\text{d}^2}{\text{d}s^2} Z^\alpha = (-{R^\alpha}_{4\beta 4} + {\dot{V}^\alpha}_{;\beta} + \dot{V}^\alpha \dot{V}_\beta)Z^\beta
\end{equation}
This is an ordinary differential equation for the components (as functions on the curve), where the Greek indices take the values $1,2,3$ and the time component is the fourth one.
To derive (4.7)
\begin{gather}
\frac{D_{\text{F}}}{\partial s}{}_{\bot}(Z^\alpha\mathbf{E}_\alpha) = {V^\alpha}_{;\beta} \mathbf{E}^\beta \otimes \mathbf{E}_\alpha ({}_{\bot}Z^\gamma \mathbf{E}_{\gamma}) \\
\biggl(\frac{D_{\text{F}}}{\partial s} {}_{\bot}Z^\alpha \biggr) \mathbf{E_\alpha} = \biggl( {V^\alpha}_{;\beta} {}_\bot Z^\beta \biggr) \mathbf{E_\alpha}
\end{gather}
where the $\mathbf{E}$ are basis vectors orthogonal to $\mathbf{V}$. Well, I cannot get rid of the $\bot$, but the author wrote on page 82
As ${}_\bot \mathbf{Z}$ is orthogonal to $\mathbf{V}$ it will have
components with respect to $\mathbf{E}_1,\mathbf{E}_2,\mathbf{E}_3$
only. Thus it may be expressed as $Z^\alpha \mathbf{E}_\alpha$.
I guess ${}_{\bot}Z^\alpha = Z^\alpha$ in this notation?
As for equation (4.8), several terms get contracted away. To make the second term of (4.6) equal that of (4.8), I think
\begin{align}
{h^a}_b ({V^b}_{;d}V^d)_{;c} {}_\bot Z^c &= {h^a}_b ({V^b}_{;dc}V^d + {V^b}_{;d} {V^d}_{;c}) {}_\bot Z^c \\
& = ({h^a}_b {V^b}_{;d})_{;c}V^d {}_\bot Z^c + {h^a}_b {V^b}_{;d} {V^d}_{;c}{}_\bot Z^c - {h^a}_{b;c} {V^b}_{;d} V^d {}_\bot Z^c \\
& = \dot{V}^a_{;c} - ({V^a}_{;c}V_b + V^a V_{b;c}) V^d {}_\bot Z^c \\
& = \dot{V}^a_{;c} - V^a V_{b;c}V^d {}_\bot Z^c
\end{align}
So there is an extra term at the end, which should be cancelled against the space components of the first term of (4.6)
\begin{align}
0 =& -{R^a}_{bcd} {}_\bot Z^c V^b V^d - V^a V_{b;c}V^d {}_\bot Z^c\\
=& -({V^a}_{;dc} - {V^a}_{;cd}){}_\bot Z^cV^d - V^a V_{b;c}V^d {}_\bot Z^c
\end{align}
I fail to equate this.
Additionally I do not know how to solve this component differential equation, for which the author has given an answer to (4.7)
\begin{equation} \tag{4.9}
Z^\alpha (s) = A_{\alpha \beta}(s) Z^\beta|_q
\end{equation}
where $A_{\alpha \beta}(s)$ is a $3\times 3$ matrix which is the unit matrix at $q$ and satisfies
\begin{equation} \tag{4.10}
\frac{\text{d}}{\text{d} s} A_{\alpha\beta}(s) = V_{\alpha ;\gamma} A_{\gamma \beta}(s)
\end{equation}
Equation (4.9) does not even contract to the correct index, and it does not satisfy (4.7) when plugged back in. And then there is equation (4.11)
\begin{equation}\tag{4.11}
A_{\alpha \beta} = O_{\alpha \delta} S_{\delta \beta}
\end{equation}
where $O_{\alpha \delta}$ is an orthogonal matrix with positive determinant and $S_{\delta \beta}$ is a symmetric matrix. I cannot figure out the physics behind this.
Any advice would be greatly appreciated, as I'm trying to clarify these new ideas and equations! Thanks!
Answer: Let us first argue that ${}_\bot Z^\alpha=Z^\alpha$. Now, ${}_\bot\mathbf{Z}$ does not contain the component of $\mathbf{Z}$ along $\mathbf{V}$. Suppose we expand $\mathbf{Z}$ in terms of the basis $\{\mathbf{E}_1,\mathbf{E}_2,\mathbf{E}_3,\mathbf{V}\}$, then it is clear that the projection of $\mathbf{Z}$ into the subspace orthogonal to $\mathbf{V}$ (which is exactly ${}_\bot\mathbf{Z}$) contains only the components of $\mathbf{Z}$ in the $\mathbf{E}_1,\mathbf{E}_2,\mathbf{E}_3$ directions. But these components are just $Z^\alpha$, which we have shown equal ${}_\bot Z^\alpha$.
We know that ${}_\bot\mathbf{Z}=Z^\alpha\mathbf{E}_\alpha$, but since the frame $\{\mathbf{E}_\alpha\}$ is Fermi-transported, we have
$$\frac{\mathrm{D}_\mathrm{F}}{\partial s}{}_\bot \mathbf{Z}=\frac{\mathrm{D}_\mathrm{F}}{\partial s}(Z^\alpha\mathbf{E}_\alpha)=\frac{\mathrm{D}_\mathrm{F}Z^\alpha}{\partial s}\mathbf{E}_\alpha+Z^\alpha\frac{\mathrm{D}_\mathrm{F}\mathbf{E}_\alpha}{\partial s}=\frac{\mathrm{d}Z^\alpha}{\mathrm{d}s}\mathbf{E}_\alpha$$
(We used property (iii) on page 81.) This is how the ordinary derivatives appear.
For $\mathbf{X}$ any vector, it is clear that $X^a{}_\bot Z_a=X^\alpha Z_\alpha$ because ${}_\bot Z^4=0$ and ${}_\bot Z^\alpha=Z^\alpha$, as was shown above. (4.7) should now be clear.
For $\boldsymbol{\omega}$ any covector, we have $\omega_aV^a=\omega_4$ because $\mathbf{V}$ is the fourth basis vector. This explains the presence of the $4$s in (4.8). The $\alpha$s on the other vectors appear because we set $a=\alpha$ on the LHS of (4.6) anyway and are using $h^a{}_b$ to project into $H_p\mathcal{M}$. This should explain (4.8).
The equations (4.9) and (4.10) are the standard solution method for a linear first order differential system like (4.7). To verify this, take the derivative of (4.9) and insert (4.10):
$$\frac{\mathrm{d}Z^\alpha}{\mathrm{d}s}=\frac{\mathrm{d}A_{\alpha\beta}}{\mathrm{d}s}Z^\beta\rvert_q=V_{\alpha;\gamma}A_{\gamma\beta}Z^\beta\rvert_q=V_{\alpha;\gamma}Z^\gamma$$
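Regarding (4.11): here is a toy $2\times2$ numerical illustration (my own sketch, not from the book) of the polar decomposition $A=OS$. The orthogonal factor is computed with Higham's Newton iteration $X \leftarrow (X + X^{-\mathsf{T}})/2$, which converges to $O$ when $\det A \neq 0$; then $S = O^{\mathsf{T}}A$ is symmetric.

```python
# Toy polar decomposition A = O S in 2x2, pure Python (illustration only).
import math

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def inv2(X):
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

def polar(A, iters=50):
    # Higham's Newton iteration: X <- (X + inv(X)^T) / 2 converges to O.
    X = [row[:] for row in A]
    for _ in range(iters):
        Y = transpose(inv2(X))
        X = [[(X[i][j] + Y[i][j]) / 2 for j in range(2)] for i in range(2)]
    O = X
    S = mat_mul(transpose(O), A)  # S = O^T A is then symmetric
    return O, S

# Build A = (rotation by theta) * (symmetric positive-definite S0);
# polar() should recover exactly these two factors.
theta = 0.7
R = [[math.cos(theta), -math.sin(theta)], [math.sin(theta), math.cos(theta)]]
S0 = [[2.0, 0.5], [0.5, 1.0]]
A = mat_mul(R, S0)

O, S = polar(A)
print([[round(x, 6) for x in row] for row in S])  # [[2.0, 0.5], [0.5, 1.0]]
```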
Since $A$ is a real matrix, at the points where $\det A\ne0$, its polar decomposition is of the form $OS$, where $O\in\mathrm{SO}(3)$ and $S=S^t$. $O$ represents the rotation of the curves because it is an element of $\mathrm{SO}(3)$, the rotation group. $S$ is interpreted as telling us about the separations because it is symmetric. The distance between flow lines in the $\alpha\beta$ direction is the same as in the $\beta\alpha$ direction. | {
"domain": "physics.stackexchange",
"id": 29304,
"tags": "general-relativity, differential-geometry, metric-tensor, tensor-calculus"
} |
Codility: An activity selection problem about scheduling two enemies to watch movies at a festival | Question: I don't remember the exact wording of the question, but it is clearly an activity selection problem, which is
[...] notable in that using a greedy algorithm to find a solution will always result in an optimal solution.
I put a rough description of the problem in the docstring, and present it together with my solution:
def solution(movies, K, L):
"""
Given two integers K and L, and a list movies where the value at the ith index
represents the number of movies showing on the (i + 1)th day at the festival, returns
the maximum number of movies that can be watched in two non-overlapping periods of K and
L days, or -1 if no two such periods exist.
"""
kl_days = [
sum(movies[k_start:k_start + K] + movies[l_start:l_start + L])
for k_start in range(len(movies) - K)
for l_start in range(k_start + K, len(movies) - L + 1)
]
lk_days = [
sum(movies[k_start:k_start + K] + movies[l_start:l_start + L])
for l_start in range(len(movies) - L)
for k_start in range(l_start + L, len(movies) - K + 1)
]
days = [-1] + kl_days + lk_days
return max(days)
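A brute-force reference implementation (exhaustively trying every placement of the two windows) is handy for checking any refactor on small inputs:

```python
def brute_force(movies, K, L):
    """Exhaustive reference: try every placement of the K-day and L-day
    windows and keep the best total, or -1 if no valid placement exists."""
    n = len(movies)
    best = -1
    for i in range(n - K + 1):        # start day of the K-day period
        for j in range(n - L + 1):    # start day of the L-day period
            # the two periods must not overlap (either order is fine)
            if j >= i + K or i >= j + L:
                best = max(best, sum(movies[i:i + K]) + sum(movies[j:j + L]))
    return best

print(brute_force([1, 2, 3, 4, 5], 1, 2))  # 12: day 5 alone plus days 3-4
print(brute_force([1, 2], 2, 2))           # -1: both periods cannot fit
```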
Is there a more elegant way to solve this?
P.S. The function name is fixed for grading purposes.
Answer:
Your code is probably around the best it can be performance-wise, so there's not much point splitting hairs trying to find anything better.
However you should be able to find something that uses less memory. Something about \$O(L + K)\$ rather than \$O(M^2)\$
I'd split finding indexes, and sums into two separate functions.
Your code, to me, looks slightly wrong.
range(l_start + L, len(movies) - K + 1)
Should probably be:
range(l_start + L, len(movies) - K)
Finally I'd split your code into two more functions, one that performs the sum, and then one that performs the max.
And so I'd change your code to:
def _solution_indexes(movies, K, L):
for k in range(movies - K):
for l in range(k + K, movies - L):
yield k, l
for l in range(movies - L):
for k in range(l + L, movies - K):
yield k, l
def solution_(movies, K, L):
for k, l in _solution_indexes(len(movies), K, L):
yield sum(movies[k:k + K]) + sum(movies[l:l + L])
def solution(movies, K, L):
return max(solution_(movies, K, L), default=-1) | {
"domain": "codereview.stackexchange",
"id": 28777,
"tags": "python, interview-questions"
} |
How air gets filled in bicycle tire without removing its valve tube? | Question: If I remove the valve tube from the tire, the air inside comes out.
But we fill the tire with an air pump without removing its valve tube,
and by doing this the air gets filled in.
How does something go in while the valve is in a closed state?
Answer: Air gets in only one direction because of the valve. A valve is like a door that can move in only one direction (it is blocked from moving the other way by a piece of metal or some other stop). When you pump air into your tyre, the valve moves down due to the pressure from your bike pump; when you are done filling the air, that metal piece prevents the valve from moving back, so it stays shut and keeps the inside air from coming out. | {
"domain": "physics.stackexchange",
"id": 38142,
"tags": "everyday-life, air"
} |
What is the physical meaning of a pure imaginary force? | Question: I am reading an article (this) and the equations result in an inertial force $F_g$ (which I understand as a fictitious force (?)) that is purely imaginary. The system consists of a gyroscope fixed to the floor and attached to a mass (in a lattice of identical masses-gyroscopes) through which a wave passes that would cause a rotational movement on the gyroscope (as detailed below). I understand that due to the nutation of the gyroscope the mass can change its coordinate in z, but I do not understand the physical meaning that it is purely imaginary. As the author details at the end, this indicates "This imaginary nature of the gyroscopic inertial effect indicates directional phase shifts between two independent directions of the tip displacements, which breaks time-reversal symmetry", but I do not understand it at all.
Answer: The physical force is not imaginary, it is given by the real part of the expression. That expression is not pure imaginary because each component of ${\bf U}_{\rm tip}$ is itself a complex number here. So its real part (which is the force) will not be zero. The presence of $i$ here is telling you that the oscillations of the force are a quarter-cycle out of phase with the oscillations of the velocity.
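To see that quarter-cycle shift concretely (a small numerical sketch, not from the paper): multiplying a complex oscillation by $i$ turns $\cos(\omega t)$ into $\cos(\omega t + \pi/2) = -\sin(\omega t)$.

```python
# Multiplying by i shifts a complex oscillation by a quarter cycle:
# Re[i e^{i w t}] = -sin(w t) = cos(w t + pi/2). Purely illustrative.
import cmath
import math

omega, t = 2.0, 0.3
z = cmath.exp(1j * omega * t)   # e^{i w t}
shifted = (1j * z).real         # real part after multiplying by i

print(math.isclose(shifted, -math.sin(omega * t)))              # True
print(math.isclose(shifted, math.cos(omega * t + math.pi / 2)))  # True
```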
This whole calculation is an example of the use of complex numbers to analyze linear sets of equations. It is a method that is particularly helpful when oscillation is involved, because the function $e^{i \omega t}$ is easier to work with than combinations of $\cos(\omega t)$ and $\sin(\omega t)$.
In general if one has a physical quantity $x$ satisfying some equation $\hat{D} x = 0$ then one can introduce a complex number $z$ satisfying $\hat{D} z = 0$, where $\hat{D}$ is some bunch of algebraic operations and differential operations, all real-valued. But for a complex number to be zero, both its real and imaginary parts are zero, so the real part of $z$ here satisfies the same equation as $x$. The method consists in solving $\hat{D} z = 0$ to obtain $z$, and then you finish by asserting $x = {\rm Re}[z]$. | {
"domain": "physics.stackexchange",
"id": 85593,
"tags": "newtonian-mechanics, complex-numbers, rigid-body-dynamics, time-reversal-symmetry, gyroscopes"
} |
Simulating the spatial distribution of water droplets from a dripping tap | Question: I saw this pattern under a leaky tap.
(Recreated images)
The pattern was interesting because it looked like a probability distribution. Bigger droplets lie in the centre, and smaller ones are scattered outwards (why?).
It would be interesting to make a simulation of this spatial distribution. There is no specific motivation, just for the pure joy of it.
So there is a drop crashing on the ground with energy $E$. It splits up into $n$ droplets of random sizes. What would be the probability $P(r,m)$ of finding a droplet of mass $m$ at a distance $r$ from the centre?
Though an actual solution requires complex fluid dynamics, an approximate solution using stochasticity that still resembles the real case would be enough for the simulation.
If we were to consider the drop oscillations and collision imperfections, it would be chaos. So it is assumed that everything is perfect and chaos doesn't exist.
Answer: There is some conceptual misunderstanding here. Basically, the impacting drop bounces back droplets of very similar size after the fall. Each of these droplets then flies off in a random direction at an almost random angle, which causes the differences in the distance and position of where it lands.
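That qualitative picture can be turned into a toy Monte Carlo (my own sketch, with made-up speeds): give each bounced droplet a random ejection speed and elevation angle and land it at the ballistic range $r = v^2\sin(2\theta)/g$; the spread of $r$ values is the kind of radial distribution seen under the tap.

```python
# Toy Monte Carlo of droplet landing distances; all parameters are assumptions.
import math
import random

random.seed(0)
g = 9.81
N = 10000

ranges = []
for _ in range(N):
    v = random.uniform(0.5, 2.0)              # m/s, assumed ejection speed
    theta = random.uniform(0.0, math.pi / 2)  # elevation angle of ejection
    ranges.append(v**2 * math.sin(2 * theta) / g)  # ballistic landing distance

print(f"mean landing distance = {sum(ranges) / N:.3f} m")
```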
The reason why there are bigger and smaller droplets is that the bigger droplets are created by multiple small droplets landing within reach of each other's surface tension at almost the same position, which causes them to unite.
This is also the reason why the middle "huge" droplet is shaped so variably.
You can see the standard-size droplets in both pictures; all the smallest ones are approximately the same size. | {
"domain": "physics.stackexchange",
"id": 84931,
"tags": "fluid-dynamics, statistical-mechanics, probability"
} |
Do particles act like waves on large scales? | Question: I've seen illustrations of particles being particles and then becoming waves momentarily on impact before going back to being particles. But I picture it as particles behaving like waves on the grand scale. Are the particles and waves measured on the same scale? Basically what I'm trying to say is, is saying lightwave particle duality anything like saying $H_{2}O$ molecule water wave duality? Or is that just the classical interpretation?
Answer: Particles do not become waves, nor do waves become particles, nor anything else in between.
The terminology of being a particle or a wave is, in the classical literature, applied to the solutions of Newton's equations and the d'Alembert equation, respectively, but nevertheless it has no direct experimental meaning, even in the classical picture (without going to quantum mechanics). In fact, keep in mind that the equations are only a way to predict and describe quantities that can be measured in a laboratory, and the latter are the characteristics that really matter. For example, although counterintuitive, you may think that a ball is a particle because you somehow see it, but what actually happens is that your eyes absorb the photons scattered by the ball: in this respect a ball is no more of a particle than anything else you can scatter photons on. Likewise when you measure energy, momentum and so forth.
Equations in quantum mechanics and quantum field theory tend to be d'Alembert-like, therefore people tended to use the unfortunate terminology of particles behaving like waves (or anything else of that sort). However, the mathematical description is nothing but a framework and a language that allows you to predict the quantities that you will eventually measure in a laboratory. In particular, the observables that one deals with are most of the time scattering amplitudes and transmission coefficients. Such quantities naturally arise out of an elegant description that makes use of fields as operators on some Fock space acting on elements of a system; in order to obtain finite quantities many mathematical constructions are necessary, and sometimes people tend to ascribe physical meaning to each of those steps, although they have none, being only a language that allows us to describe and precisely predict new outcomes.
In this respect, many equivalent formulations of the same theory may arise, some making use of this and some of that other description and formalism; as such, every interpretation of the mathematics is equally correct (or incorrect). When in a laboratory, you only measure the outcomes, that is, a number (energy, momentum, scattering amplitudes), however this number has been generated. The bottom line is that the terminology used in these contexts is sometimes misleading because it tries to make sense of a mathematical formulation, whilst from the point of view of the physics the only true quantities are the ones you can directly measure, no matter the name you want to give the underlying formalism. | {
"domain": "physics.stackexchange",
"id": 25025,
"tags": "quantum-mechanics, wave-particle-duality"
} |
Bernoullis in Parallel pipes | Question: I'm having trouble with a scenario where a flow splits into two parallel pipes and then rejoins before exiting the control volume.
What makes this scenario difficult is that the parallel pipes are of varying diameter.
At the diffluence, Pipe A has a diameter of 2D and Pipe B has a diameter of 1D.
By the time they reach the confluence they have reversed diameters: Pipe A now has a diameter of 1D and Pipe B a diameter of 2D.
Bernoullis
Start - assume a flow velocity of 5 m/s and a static pressure SP of 100.
$Q_s=A_1 V_1$
$Q_s=3\cdot 5$
$Q_s=15$
$TP=SP+\frac{1}{2}mv^2$
$TP=100+\frac{1}{2}\cdot 5^2$
$TP=112.5$
Stream a
$Q_A=A_1 V_1=A_2 V_2$
$A_1 V_1 =A_2 V_2$
$2A\cdot 5\,\mathrm{m/s} = 1A\cdot 10\,\mathrm{m/s}$
$112.5=SP+\frac{1}{2}m\cdot 10^2$
$SP=112.5-50$
$SP=62.5$
Stream b
$A_1 V_1 =A_2 V_2$
$1A\cdot 5\,\mathrm{m/s} = 2A\cdot 2.5\,\mathrm{m/s}$
$112.5=SP+DP$
$SP=112.5-\frac{1}{2}m\cdot 2.5^2$
$SP=112.5-3.125$
$SP=109.375$
Q Check
$Q_s=3\cdot 5=15=2\cdot 5+1\cdot 5=Q_A+Q_B=1\cdot 10+2\cdot 2.5=15=Q_e$
Head Loss
What we know
All elements of flow converging at the confluence WILL have the same head loss.
The flow will adjust automatically so that the head loss in each branch pipe WILL BE THE SAME
$Hl_A=Hl_B$
According to resistance coefficient tables the divergent pipe has a K value of 0.46 and the convergent pipe has a K value of 0.1
As these losses are proportional to the velocity of flow, this suggests that the expansion pipe will decrease its flow (to decrease its losses) while the convergent pipe must increase its flow to maintain continuity.
This means that the flow rates have diverged, not come together, but we know that they must be the same at the exit of the control volume examined.
Continuity also tells us that the total flow rate must be the same at all points in the pipe
$Q_s=Q_{a1}+Q_{b1}=Q_{a2}+Q_{b2}=Q_e$
$V_s A_s=V_{a1}A_{a1}+V_{b1}A_{b1}=V_{a2}A_{a2}+V_{b2}A_{b2}=V_e A_e$
$V_s\cdot 3=V_{a1}\cdot 2+V_{b1}\cdot 1=V_{a2}\cdot 1+V_{b2}\cdot 2=V_e\cdot 3$
So for the total head / stagnation value we will have the same value at the confluence, as both paths have experienced the same head loss, but Bernoulli tells us that we have very different velocities and static pressures at this point.
My question is: how does it follow that we do not require the same values of pressure and velocity at the confluence for both streams?
If this can occur we must then have a mechanism to achieve the expected uniform velocity and pressure (Not considering head losses) at the exit
$V_s\cdot 3 = V_e\cdot 3$
What would this mechanism be ?
Answer: I wasn't able to figure out what you did, so here is my analysis, without the resistance. Let:
Q = Total volume flow rate
$Q_a$ = Volume flow rate into converging pipe
$Q_b$ = Volume flow rate into diverging pipe
$p_1a$ = static pressure just after entrance to a
$p_2a$ = static pressure just before exit from a
$p_1b$ = static pressure just after entrance to b
$p_2b$ = static pressure just before exit from b
$T_1$ = "total pressure" in channel leading up to diffluence
$T_2$ = "total pressure" in channel after the confluence
$A_{a1}$ = cross sectional area of converging pipe at inlet
$A_{a2}$ = cross sectional area of converging pipe at outlet
$A_{b1}$ = cross sectional area of diverging pipe at inlet
$A_{b2}$ = cross sectional area of diverging pipe at outlet
CASE OF NO FRICTIONAL LOSS
Bernoulli equations relevant to pipe a:
$$T_1=p_1+\rho \frac{(Q_a/A_{a1})^2}{2}$$
$$p_1+\rho \frac{(Q_a/A_{a1})^2}{2}=p_2+\rho \frac{(Q_a/A_{a2})^2}{2}$$
$$p_2+\rho \frac{(Q_a/A_{a2})^2}{2}=T_2$$
Adding these three equations together gives $$T_1=T_2$$
Thus, for the case without friction, energy is conserved and the "total pressure" after the split section is equal to the "total pressure" before the split section. This is irrespective of how the flow splits between the two sections. The Bernoulli equations for pipe b will give the same result. Also, the convergence and divergence in the channels doesn't matter, as long as the final outlet pipe has the same cross sectional area as the initial inlet pipe.
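As a quick numeric illustration of the frictionless result, one can pick an arbitrary split and chain the three Bernoulli equations for pipe a; the total pressures before and after come out equal regardless of the chosen $Q_a$. All numbers below (density, areas, flow rates, $T_1$) are invented for illustration.

```python
# Frictionless check: chain the three Bernoulli equations for pipe a
# and confirm T1 == T2 for an arbitrary split Qa. Units are SI.
rho = 1000.0              # kg/m^3, water
Q, Qa = 0.10, 0.03        # m^3/s: total flow and an arbitrary share for pipe a
A_a1, A_a2 = 0.02, 0.01   # m^2: converging pipe inlet/outlet areas
T1 = 2.0e5                # Pa: assumed upstream "total pressure"

p1 = T1 - rho * (Qa / A_a1) ** 2 / 2                               # inlet equation
p2 = p1 + rho * (Qa / A_a1) ** 2 / 2 - rho * (Qa / A_a2) ** 2 / 2  # along pipe a
T2 = p2 + rho * (Qa / A_a2) ** 2 / 2                               # outlet equation

print(T1 - T2)  # ~0 up to floating-point rounding, for any Qa
```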
CASE WITH FRICTIONAL EFFECTS INCLUDED
Bernoulli equations relevant to pipe a:
$$T_1=p_1+\rho \frac{(Q_a/A_{a1})^2}{2}$$
$$p_1+\rho \frac{(Q_a/A_{a1})^2}{2}=p_2+\rho \frac{(Q_a/A_{a2})^2}{2}+k_a\rho \frac{(Q_a/A_{a1})^2}{2}$$
$$p_2+\rho \frac{(Q_a/A_{a2})^2}{2}=T_2$$
Adding these three equations together gives $$T_1=T_2+k_a\rho \frac{(Q_a/A_{a1})^2}{2}\tag{1}$$
Similarly, for channel b:$$T_1=T_2+k_b\rho \frac{(Q_b/A_{b1})^2}{2}\tag{2}$$
Thus, for the case with friction, mechanical energy is not conserved and the "total pressure" after the split section is not equal to the "total pressure" before the split section. Moreover, the split between the two channels is relevant.
Mass balance equation:
$$Q_a+Q_b=Q\tag{3}$$
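Taken together with the mass balance, the loss equations fix the split in closed form: equating (1) and (2) gives $Q_a/Q_b=(A_{a1}/A_{b1})\sqrt{k_b/k_a}$. A hedged numeric sketch (all areas and loss coefficients invented):

```python
import math

# Solve Eqns. (1)-(3): equal total-pressure drop through both pipes
# plus mass balance. The lossier pipe ends up carrying less flow.
rho = 1000.0              # kg/m^3
Q = 0.10                  # m^3/s, total flow
A_a1, A_b1 = 0.02, 0.02   # m^2, inlet areas of pipes a and b
k_a, k_b = 0.5, 2.0       # loss coefficients (diverging pipe b is lossier)

r = (A_a1 / A_b1) * math.sqrt(k_b / k_a)  # ratio Q_a / Q_b from (1) = (2)
Q_a = Q * r / (1 + r)
Q_b = Q / (1 + r)
dT = k_a * rho * (Q_a / A_a1) ** 2 / 2    # T1 - T2, identical via either pipe

print(Q_a, Q_b, dT)
```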
Eqns. 1-3 provide three algebraic equations in the three unknowns $(T_1-T_2)$, $Q_a$, and $Q_b$. | {
"domain": "physics.stackexchange",
"id": 30763,
"tags": "fluid-dynamics, bernoulli-equation"
} |
Why is there no upper-bound on power-efficiency of Hydro-turbines? | Question: We have Betz's law, an upper bound on power efficiency, for wind-turbines. But there is no theoretical upper limit for hydro-turbines. Why is this so?
Is it mainly due to the incompressibility of water?
Answer: Betz' limit arises from the fact that if the turbine tries to take too much energy out of the flow, the wind will divert and go around the turbine instead of through it. With a hydroelectric system the water is typically behind a dam, so the water has nowhere else to go. The turbines effectively convert gravitational potential energy into work, which can (in principle) be done without any thermodynamic losses. | {
"domain": "physics.stackexchange",
"id": 95963,
"tags": "fluid-dynamics, energy, power, renewable-energy"
} |
In regards to the hypothetical Alcubierre Drive (hear me out), are energy costs consumed immediately, or during transit? | Question: I am aware the Alcubierre Drive is highly hypothetical and likely cannot exist. However, there are significant quantities of research material into Warp Drives of many types, and presumably the answer to my question is out there.
Basically, are energy costs for the Alcubierre Drive / Related Warp Metrics consumed immediately upon "jump to warp" (10kg to jump to 10c) or consumed constantly (10kg per hour at 10c)?
As an additional question, could Alcubierre metrics potentially be feasible at sub-light speeds if exotic energy exists but quantum instabilities forbid FTL travel? Would such a drive be able to reach light speed, simply not beyond?
Final optional question: how does Cherenkov radiation relate to Alcubierre Drives, FTL or otherwise?
Answer: As you say, the Alcubierre metric is a perfectly good solution to the Einstein equations, though since it requires exotic matter to work it doesn't seem likely that one could be built. However it is important to understand that the Alcubierre metric is time independent i.e. it describes the object moving at a constant speed. It does not describe how the object accelerates or how it could slow down after its journey.
For the moving object described by the Alcubierre metric no energy is required, but this shouldn't surprise you. A conventional spacecraft moving through space at a constant speed also doesn't require any energy. Work only needs to be done when the spacecraft accelerates or brakes.
If you were to attempt the building of a drive like this presumably you would start with the exotic matter widely spread, and you'd need to bring it together to form the torus that makes the drive. Then when you wanted to stop you would dismantle the torus and disperse the exotic matter again. The energy required would be the energy involved in making then dismantling the torus. However I don't know of any studies done to establish how much energy is needed. Since exotic matter attracts other exotic matter and repels ordinary matter this suggests that building the torus would release energy not consume it. Then dismantling the torus would require energy to be put in. But in the absence of any rigorous calculations the best we can do is speculate. | {
"domain": "physics.stackexchange",
"id": 66118,
"tags": "energy-conservation, faster-than-light, warp-drives, exotic-matter, cherenkov-radiation"
} |
Photochemical rearrangement of 4,4‐diphenylcyclohexa‐2,5‐dien‐1‐one | Question:
I can see how the option A and B are correct as it is just a standard dienone–phenol type reaction which involves a phenyl shift. However, the given answer is A, B, C and D.
I do not see how C and D are formed. Does the presence of light affect the reaction? If so, how?
Answer: The formation of structure 6 (your D) occurs as follows. Photolysis of dienone 1 affords excited state 2 that forms cyclopropane 3. In turn, 3 is capable of breaking a different cyclopropane bond to form 5-membered ring 4. Structure 5 is a resonance structure of 4. Structure 5 closes to the [3.1.0] product 6 (your D). I don't know how you formed A (12 here) but further photolysis of 6 can form 12 via a bridged phenyl migration as shown in structure 9. A similar migration in structure 2 leads to your B. Of course, structure C has only five carbons and was added to the choices as a trap. See reference 1 for an example of a dienone phenol rearrangement.
1) A. G. Schultz and S. A. Hardinger, J. Org. Chem., 1991, 56, 1105. | {
"domain": "chemistry.stackexchange",
"id": 13830,
"tags": "organic-chemistry, carbonyl-compounds, rearrangements"
} |
Regression Model for explained model(Details inside) | Question: I am kind of a newbie on machine learning and I would like to ask some questions based on a problem I have .
Let's say I have x y z as variable and I have values of these variables as time progresses like :
t0 = x0 y0 z0
t1 = x1 y1 z1
tn = xn yn zn
Now I want a model that, when given 3 values of x, y, z, outputs a prediction for them, like:
Input : x_test y_test z_test
Output : x_prediction y_prediction z_prediction
These values are float numbers. What is the best model for this kind of problem?
Thanks in advance for all the answers.
More details:
Ok so let me give some more details about the problems so as to be more specific.
I have run certain benchmarks and taken values of performance counters from the cores of a system per interval.
The performance counters are the x, y, z in the above example. They are dependent on each other. A simple example is x = IPC, y = cache misses, z = energy at core.
So I got this dataset of all these performance counters per interval. What I want to do is create a model that, after learning from the training dataset, will be given a certain state of the core (the performance counters) and predict the performance counters that the core will have in the next interval.
Answer: AFAIK if you want to predict the value of one variable, you need to have one or more variables as predictors; i.e.: you assume the behaviour of one variable can be explained by the behaviour of other variables.
In your case you have three independent variables whose value you want to predict, and since you don't mention any other variables, I assume that each variable depends on the others. In that case you could fit three models (for instance, regression models), each of which would predict the value of one variable, based on the others. As an example, to predict x:
x_prediction = int + cy*y_test + cz*z_test
where int is the intercept and cy, cz are the coefficients of the linear regression.
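A minimal sketch of this scheme with plain numpy (random stand-in data; in practice one would likely reach for a library such as scikit-learn):

```python
import numpy as np

# Fit one linear model per variable, each predicted from the other two.
rng = np.random.default_rng(0)
data = rng.random((100, 3))  # columns stand in for x, y, z per interval

models = {}
for target in range(3):
    predictors = [c for c in range(3) if c != target]
    X = np.column_stack([np.ones(len(data)), data[:, predictors]])  # add intercept
    coef, *_ = np.linalg.lstsq(X, data[:, target], rcond=None)
    models[target] = (predictors, coef)

def predict(sample):
    """Predict each variable of `sample` from the other two."""
    return np.array([coef[0] + coef[1:] @ sample[preds]
                     for preds, coef in (models[t] for t in range(3))])

print(predict(np.array([0.5, 0.5, 0.5])))
```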
Likewise, in order to predict y and z:
y_prediction=int+cx*x_test+cz*z_test
z_prediction=int+cx*x_test+cy*y_test | {
"domain": "datascience.stackexchange",
"id": 133,
"tags": "machine-learning, logistic-regression, predictive-modeling, regression"
} |
Building a Model to distinguish between elementary particles from their tracks | Question: I have a set of variables - positions x1,y1 and time t1 for a specific particle (kaon). Each one of these variables is stored in a separate 1-D histogram. And I have another set of variables positions x2, y2 and time t2 for another particle (pion).
What I want is to build a model for a hypothesis test, so that if I get a set of experimental data for an unknown particle, I can test it to figure out whether this particle is a pion or a kaon.
Any hints how to do this? Thanks.
Answer: A couple of random observations before I look at the problem in front of us. First, TTrees can hold many things other than just histograms. Second, the problem would probably be easier if the data were in tuple form. Third, this could be treated as a rather simplified version of the TPC particle ID problem, which means that it is (once again) a current data processing problem in the particle physics world, but we usually have energy deposition as well as position (and timing is sometimes a bit ambiguous).
If the data you have really is 1D histograms of individual kinematic variables without energy or momentum information, then the only discriminators that you have are the lifetimes (which differ by roughly a factor of 2 for charged pions versus charged kaons) and possibly the range (if you can assume that you are getting range information from the position data).
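If proper decay times could be extracted, the lifetime gap alone already supports a textbook likelihood-ratio test. A sketch under that assumption (the mean lifetimes are the PDG values; the decay-time sample is invented):

```python
import math

TAU_PION = 2.603e-8  # s, charged-pion mean lifetime
TAU_KAON = 1.238e-8  # s, charged-kaon mean lifetime

def log_likelihood(times, tau):
    """Log-likelihood of i.i.d. exponential decay times with mean tau."""
    return sum(-math.log(tau) - t / tau for t in times)

def classify(times):
    """Pick the hypothesis (kaon vs. pion) with the larger likelihood."""
    llr = log_likelihood(times, TAU_KAON) - log_likelihood(times, TAU_PION)
    return "kaon" if llr > 0 else "pion"

sample = [1.1e-8, 0.9e-8, 1.5e-8, 0.7e-8]  # invented proper times, seconds
print(classify(sample))  # → kaon
```

In practice one would also have to fold in time dilation and detector resolution, which is exactly the complication discussed next.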
The problem is complicated by time dilation, and the difficulty of extracting some kind of speed and range from the rather limited data you say that you have.
Once you do that the issue has been reduced to one of statistics, though it may still be non-trivial. | {
"domain": "physics.stackexchange",
"id": 32806,
"tags": "particle-physics, statistics"
} |
Constitutional isomers for four carbon thioesters | Question: Question:
How many isomers are there for thioesters with the formula $\ce{C4H8OS}$?
Answer given is four.
I can see that the following two are possible constitutional isomers:
But what are the other two?
Answer: One way to break things down is by the number of carbon atoms on either "side" of the thioester.
There are 4 carbon atoms to work with, of which one will be taken up by the thioester moiety. Thus there are three left to work with.
On the carbonyl "side" of the thioester, between zero and three carbons are possible. However, on the sulfur "side" of the thioester, at least one carbon atom is required; otherwise the compound is not a thioester but a thioacid. Thus the possibilities for carbon atom distribution are:
one carbon on S side, two on CO side.
two carbons on S side, one on CO side.
three carbons on S side, zero on CO side.
You have drawn the only structures for the first two possibilities.
For the last possibility, there are two possible ways to attach a three-carbon alkyl chain to the thioester sulfur:
as an $n$-propyl group
as an isopropyl group
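The counting can be sketched programmatically; the number of distinct alkyl groups per carbon count is hard-coded (0 carbons = H, giving a formate; 3 carbons = n-propyl or isopropyl):

```python
# Count C4H8OS thioesters R-CO-S-R' by distributing the three
# non-carbonyl carbons between the acyl side (R) and the sulfur side (R').
ALKYL_ISOMERS = {0: 1, 1: 1, 2: 1, 3: 2}  # distinct groups per carbon count

total = 0
for s_side in range(1, 4):   # the sulfur side needs at least one carbon
    co_side = 3 - s_side
    total += ALKYL_ISOMERS[s_side] * ALKYL_ISOMERS[co_side]

print(total)  # → 4
```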
Thus, the four possible thioesters are:
methyl thiopropanoate
ethyl thioacetate
n-propyl thioformate
isopropyl thioformate | {
"domain": "chemistry.stackexchange",
"id": 9695,
"tags": "organic-chemistry, isomers"
} |
Huge Fibonacci modulo m optimization in Python 3 | Question: I'm doing an Algorithm Design online course as a way of learning algorithms and Python as well. One of its challenges requires that I write code for finding a "Huge Fibonacci Number Modulo m". The code works, but for some inputs (such as 99999999999999999 100000) it exceeds the allocated time limit, and I can't see a way of improving performance.
Here is the problem description:
Problem Introduction
The Fibonacci numbers are defined as follows:
F(0) = 0, F(1) = 1, and F(i) = F(i−1) + F(i−2) for i ≥ 2.
Problem Description
Task: Given two integers n and m, output F(n) mod m (that is, the remainder of F(n) when divided by m).
Input Format: The input consists of two integers n and m given on the same line (separated by a space).
Constraints: 1 ≤ n ≤ 10^18 , 2 ≤ m ≤ 10^5 .
Output Format: Output F(n) mod m
Time Limits: C: 1 sec, C++: 1 sec, Java: 1.5 sec, Python: 5 sec, C#: 1.5 sec, Haskell: 2 sec, JavaScript: 3 sec, Ruby: 3 sec, Scala: 3 sec.
Memory Limit: 512 Mb
Sample:
Input:
281621358815590 30524
Output:
11963
Here is the code:
#!/usr/bin/python3
from sys import stdin
def get_fibonacci_huge(n, m):
fibonacciNumbers = [0, 1, 1]
pisanoNumbers = [0, 1, 1]
pisanoPeriod = 3
for i in range(2, n):
if pisanoNumbers[i - 1] == 0 and pisanoNumbers[i] == 1:
pisanoPeriod = i - 1
break;
nextFibonacci = fibonacciNumbers[i - 1] + fibonacciNumbers[i];
fibonacciNumbers.append(nextFibonacci)
pisanoNumbers.append(nextFibonacci % m)
else:
pisanoPeriod = None # If we exhausted all values up to n, then the pisano period could not be determined.
if pisanoPeriod is None:
return pisanoNumbers[n]
else:
return pisanoNumbers[n % pisanoPeriod]
if __name__ == '__main__':
dataEntry = stdin.readline()
n, m = map(int, dataEntry.split())
print(get_fibonacci_huge(n, m))
For an input of 99999999999999999 100000, it takes around 5.10 seconds, and the maximum allowed time is 5.00 seconds.
As a complete Python and algorithms newbie, any input is appreciated.
Answer: Fibonacci numbers grow exponentially. This means that the number of bits representing them grows linearly, and so does the cost of computing nextFibonacci, resulting in overall quadratic complexity for your loop.
The good news is that you don't need to compute Fibonacci numbers at all. You only need to compute Pisano numbers. They obey a similar recurrence,
p[n+1] = (p[n] + p[n-1]) % m
and by virtue of the modulo they never exceed m. The complexity of an individual addition stays constant, and the overall complexity becomes linear.
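A minimal sketch of that linear approach (a fresh implementation rather than a patch of the original): build only the residue sequence, stop when the pair (0, 1) reappears, then index by n mod period.

```python
def fib_mod(n, m):
    """Return F(n) mod m via the Pisano period."""
    residues = [0, 1]
    while True:
        residues.append((residues[-1] + residues[-2]) % m)
        # The residue sequence is periodic and restarts with the pair (0, 1).
        if residues[-2] == 0 and residues[-1] == 1:
            period = len(residues) - 2
            break
    return residues[n % period]

print(fib_mod(281621358815590, 30524))  # → 11963, the sample from the task
```

Since the Pisano period never exceeds 6m, the loop stays short no matter how large n is.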
(Tiny addendum: \$(a + b)\ \textrm{mod}\ c \equiv \big((a\ \textrm{mod}\ c) + (b\ \textrm{mod}\ c)\big)\ \textrm{mod}\ c\$.) | {
"domain": "codereview.stackexchange",
"id": 29800,
"tags": "python, performance, algorithm, python-3.x, fibonacci-sequence"
} |
How have single-celled organisms learned to survive? | Question: Upfront: I am a computer science student trying to understand evolution and its implications.
I've been pondering the concepts of evolution for some time, especially related to neuroscience / how the brain works / how to emulate brains. I wonder how single-celled organisms evolved at all. I know, they don't "think"; but once their ATP levels get low, they know they must find food to sustain their system, aka staying alive. This is probably no active decision, more like: "The enzyme concentration tells me to find food".
Question: Was the entropy just high enough that at some point those organisms had the property that finding food somehow is preferable to death? Edit: How can an organism perform something without "knowing" it needs to perform it?
I hope, this isn't a terribly stupid question.
Answer: Nothing more than physics
Simple physics
If you drop a rock, the rock will fall to the ground. If there were a table underneath, the rock would stop its fall when reaching the table. Clearly the rock responds to the environment, but the rock does not need to be conscious (whatever you mean by "consciousness") of anything in order to respond to the environment.
Explanations for a programmer
As you are a programmer, you will agree that when you do std::cout << 3+2 << std::endl;, nothing more than physics is happening so that the value 5 followed by a newline gets printed to the screen. It is relatively complex though; so complex that it may feel quite magical before one has spent many hours learning about computer engineering.
You understand that if you write a, say, boids program, all that is happening is just a bunch of additions and multiplications (and occasionally a power). The program does not need to be "aware" of what it is doing (nor do the individual agents) in order to respond to the change of behaviour of a predator.
You could even write a self-learning algorithm and you would still accept that there is no consciousness involved but only very simple operations.
Biology
Living things are pretty much the same. They are relatively complex (more like a computer than like a rock) but essentially the same. Also, the response to the stimulus is often (not always) adaptive, as it has been selected for. But at its core there's nothing more than physics involved!
More info
If you want to learn more about the mechanism of living things in general, you might want to just open an intro book or have a look at an intro course such as Khan Academy > Biology for example.
You can as well pick a simple response to an environmental stimulus and study its mechanism. How about having a look at chemotaxis for example!
Note that you might also want to follow a short and introductory course to evolutionary biology such as Understanding Evolution for example. | {
"domain": "biology.stackexchange",
"id": 7967,
"tags": "evolution, life, life-history"
} |
How could negative $W$ "eat" the positive Goldstone bosons $\phi_1$ and $\phi_2$? | Question: In the Standard Model, the Higgs field is
$\Phi=\left(\begin{array}{c}\phi^+\\\phi^0\\\end{array}\right)=\frac{1}{\sqrt{2}}\left(\begin{array}{c}\phi_1+i\phi_2\\\phi_3+i\phi_4\\\end{array}\right)$.
The component with isospin $+1/2$ is positively charged. The component with isospin $-1/2$ is neutrally charged.
We have 3 Goldstone bosons : $\phi_1$, $\phi_2$, $\phi_3$, and the Higgs boson $H$ is a subpart of $\phi_4$ : $\phi_4=v+H$.
Since the Higgs doublet has only a positive component, how, hand-wavingly, could the negative W boson eat the positive Goldstone boson $\phi^+$?
How could we have $\phi^-$ for the $W^-$?
For example, in this web page Who ate the Higgs?
(at a moment where the figure was still readable),
they introduce a $H^+$ and a $H^-$: but where are they coming from? As seen above, there is only one $\phi^+$, not both a $\phi^+$ and a $\phi^-$.
Is $\phi_1$ and $\phi_2$ both a Goldstone boson, or is the combination $\phi_1+i\phi_2$ a Goldstone boson?
Answer: All three $\phi^1,\phi^2, \phi^3$ are massless Goldstone bosons. It turns out we may rearrange them in complex charged states,
$$
\phi^3,\qquad \phi^{\pm}=(\phi^1\pm i \phi^2)/\sqrt{2}~.
$$
You may (must!) check that all three combinations transform under the symmetries by infinitesimal pieces whose v.e.v.s don't vanish (the litmus test of Goldstone's realization).
With malice aforethought, we further define, contragrediently,
$$
W^{\pm}_\mu= (W^{ 1}_\mu \mp i W^{ 2}_\mu)/\sqrt{2}~.
$$
You then look at the kinetic term for the Higgs doublet,
$$
\frac{1}{2} \partial_\mu {\boldsymbol \Phi}^* \cdot \partial^\mu {\boldsymbol \Phi}= \frac{1}{2}
( \partial_\mu \phi ^3 \partial^\mu \phi^{3~*}+ \partial_\mu h \partial^\mu h)+ \partial_\mu \phi^+ \partial^\mu \phi^ - ~.
$$
I'll be cavalier/inconsistent with normalizations in the trail map that follows, since it is only a trail map; you are meant to flesh out the details in your basic SM course. They are all correct in Schwartz's text, Eq. (29.4), for the masses. All you need is to write down the relevant terms that contain the Goldstone bosons in the covariant version of the above,
$$
(D^\mu {\boldsymbol \Phi})^\dagger D^\mu {\boldsymbol \Phi}=( D^\mu {\boldsymbol \Phi})^* \cdot D^\mu {\boldsymbol \Phi}~.\tag{$\natural$}
$$
The covariant derivative is completed to
$$
D^\mu {\boldsymbol \Phi}\propto \partial^\mu \begin{pmatrix}\phi^+\\\phi^0 \end{pmatrix} +ig \begin{pmatrix} A_\mu /\cos\theta_w & W_\mu^+ \sqrt{2}\\ W^-_\mu \sqrt{2} & -Z_\mu/\sin\theta_W \end{pmatrix} \begin{pmatrix}\phi^+\\\phi^0 \end{pmatrix} ~~~\leadsto \\
D^\mu {\boldsymbol \Phi}^*\propto \partial^\mu \begin{pmatrix}\phi^-\\\phi^{0~*} \end{pmatrix} -ig \begin{pmatrix} A_\mu /\cos\theta_w & W_\mu^- \sqrt{2}\\ W^+_\mu\sqrt{2} & -Z_\mu/\sin\theta_W \end{pmatrix} \begin{pmatrix}\phi^-\\\phi^{0~*} \end{pmatrix} .
$$
Note how each rung of the two-vectors has the same charge!
If we give $\langle \phi_4\rangle = v$, the hermitian term quadratic in the matrix in the ($\natural$) term above will provide the mass terms for the $W^\pm$ and the Z, namely $~~2W^+ W^-+Z^2/\cos ^2 \theta_W$, with no contribution for the photon, as expected.
But, pertaining to your question, you can also see in ($\natural$), among others, the emergent bilinear combinations $(\partial_\mu \phi^+ -gW^+_\mu v)(\partial^\mu \phi^- -gW^{-~^\mu } v)+ ...$, where the $W_\mu^\pm $ are poised to gauge-absorb the respective $\phi^{\pm}$ of the same charge, and net the mass term $g^2v^2W^+_\mu W^{-~\mu}$ .
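Spelled out a little more, the "eating" is just the usual gauge shift; schematically (with the same cavalier normalizations as above),

$$
W^\pm_\mu \;\longrightarrow\; W^\pm_\mu + \frac{1}{gv}\,\partial_\mu \phi^\pm
\qquad\Longrightarrow\qquad
\partial_\mu \phi^\pm - gv\,W^\pm_\mu \;\longrightarrow\; -\,gv\,W^\pm_\mu\,,
$$

so in unitary gauge the $\phi^\pm$ gradients disappear and only the massive $W^\pm$ survive.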
There are more elegant and compact ways to present this, but I opted for a brutishly explicit trail map here, as I assumed that's where the problem was... Often, students are surprised by
$W^1-iW^2$ eating $\phi^1+i\phi^2$, and crosswise for the $W^-$, but this is what the charge conservation dictates on you, which is why I wrote the covariant derivatives in terms of the physical fields. | {
"domain": "physics.stackexchange",
"id": 87353,
"tags": "standard-model, higgs, symmetry-breaking, electroweak, goldstone-mode"
} |
Simple Stack implementation in C++ | Question: I've recently started teaching myself basic C++ and decided to implement a simple stack with pointers.
#include <iostream>
using namespace std;
struct StackElement {
char value;
StackElement* next;
StackElement(char value, StackElement* next) : value(value), next(next) {}
};
struct Stack {
StackElement* top = NULL;
bool isEmpty() { return top == NULL; }
void push(char value) {
StackElement* newElement = new StackElement(value, top);
top = newElement;
}
StackElement pop() {
StackElement* toBeDeleted = top;
StackElement toBeReturned = *top;
top = top->next;
delete toBeDeleted;
return toBeReturned;
}
};
int main() {
Stack* stack = new Stack();
cout << "Created a stack at " << &stack << endl;
int number_of_inputs;
cout << "Enter the number of elements you want to push at the stack: ";
cin >> number_of_inputs;
for (int i = 0; i < number_of_inputs; i++) {
char input;
cin >> input;
stack->push(input);
}
cout << "- - - - - - - - - - - - - - - - " << endl;
cout << "Displaying content of the stack: " << endl;
while (!stack->isEmpty()) {
cout << stack->pop().value << endl;
}
return 0;
}
My questions are:
- what could be generally done better here?
- is the pop() method written correctly? Does it create any memory leaks? Is there a way to write it shorter?
Thank you in advance! (And forgive use of using namespace std)
Answer: Your stack implementation is terrible, and so is @hc_dev: neither handles memory correctly.
Resource Handling
It is generally frowned upon to call new and delete directly, simply because doing it correctly is hard.
Proper resource handling requires:
Thinking about moves.
Thinking about copies.
Thinking about destruction.
This used to be called the Rule of 3 in C++03 (Copy Constructor, Copy Assignment Operator and Destructor) and is called the Rule of 5 since C++11 (+Move Constructor, +Move Assignment Operator).
Your current Stack implements neither of those 5 operations correctly -- it doesn't implement them at all, and the default generated operations are buggy due to your use of a raw pointer.
The best advice for resource handling, though, is to use the Rule of Zero: just delegate it to something that works!
In your case, look into std::unique_ptr and std::make_unique!
Corrected resource management:
struct StackElement {
char value;
std::unique_ptr<StackElement> next;
StackElement(char value, std::unique_ptr<StackElement> next) :
value(value), next(std::move(next)) {}
};
struct Stack {
std::unique_ptr<StackElement> top = nullptr;
bool isEmpty() { return top == nullptr; }
void push(char value) {
top = std::make_unique<StackElement>(value, std::move(top));
}
char pop() {
assert(!isEmpty());
char toBeReturned = top->value;
top = std::move(top->next);
return toBeReturned;
}
};
This struct is no longer copiable, as std::unique_ptr is not copiable.
Limited stack depth.
The previous rewrite is good, but its destructor suffers from stack overflow (!).
That is, when the destructor is executed, you get:
Call destructor of Stack
Call destructor of Stack::top
Call destructor of StackElement (stack->top)
Call destructor of StackElement::next.
Call destructor of StackElement (stack->top->next)
...
To handle this, create a clear method, and manually write the destructor.
struct Stack {
// ...
Stack(Stack&&) = default; // automatic generation is disabled when
// the destructor is explicit, so explicitly
// ask for it.
Stack& operator=(Stack&&) = default; // automatic generation...
~Stack() { clear(); }
void clear() {
while (!isEmpty()) {
pop();
}
}
};
General
Once you have the memory part correct, further improvements:
Encapsulation: do not expose your privates.
Generalization: make it work for any type.
This yields:
// No need for a class here, it's internal.
template <typename T>
struct StackElement {
StackElement(T value, std::unique_ptr<StackElement> next):
value(std::move(value)), next(std::move(next)) {}
T value;
std::unique_ptr<StackElement<T>> next;
};
template <typename T>
class Stack {
public:
~Stack() { this->clear(); }
Stack() = default;
Stack(Stack&&) = default;
Stack& operator=(Stack&&) = default;
bool isEmpty() const { return this->head == nullptr; }
T const& top() const {
assert(!this->isEmpty());
return this->head->value;
}
void clear() {
while (!isEmpty()) {
this->pop();
}
}
void push(T value) {
// Create empty node first, in case moving `value` throws an exception.
auto neo = std::make_unique<StackElement<T>>(std::move(value), nullptr);
neo->next = std::move(this->head);
this->head = std::move(neo);
}
T pop() {
assert(!isEmpty());
// Pop top first, in case moving `current->value` throws an exception.
auto current = std::move(this->head);
this->head = std::move(current->next);
return std::move(current->value);
}
private:
std::unique_ptr<StackElement<T>> head;
};
Miscellaneous
There are few nits in your main:
There is no need to allocate Stack on the heap, just Stack stack; works.
Don't use std::endl, just use '\n' or "\n".
std::endl both appends \n and calls flush, the latter kills all performance benefit of internally buffering.
With that in mind, the rewritten main is:
int main() {
Stack<char> stack;
std::cout << "Created a stack at " << &stack << "\n";
int number_of_inputs;
std::cout << "Enter the number of elements you want to push at the stack: ";
std::cin >> number_of_inputs;
for (int i = 0; i < number_of_inputs; i++) {
char input;
std::cin >> input;
stack.push(input);
}
std::cout << "- - - - - - - - - - - - - - - - " << "\n";
std::cout << "Displaying content of the stack: " << "\n";
while (!stack.isEmpty()) {
std::cout << stack.pop() << "\n";
}
return 0;
} | {
"domain": "codereview.stackexchange",
"id": 38286,
"tags": "c++, beginner, stack"
} |