anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
Basic Transformer Question | Question: My understanding of how a transformer works is that there is a primary and a secondary coil wrapped around a common iron core. An alternating current in the primary coil results in a changing magnetic field, which in turn results in an induced current in the secondary coil.
The resulting voltage across the secondary coil is proportional to the number of turns in the secondary coil. My understanding of this is that each loop produces a small emf, and these loops are all in series, so adding more loops is like adding more cells in series in a battery.
My question is why does adding more loops in the primary winding reduce the output voltage? In an electromagnet, increasing the number of turns of wire increases the magnetic field. So I would think that increasing the number of loops in the primary coil would result in a larger magnetic field and thus a greater induced voltage in the secondary coil.
Answer: We usually consider a transformer to be driven by a fixed voltage $V_i$.
The current in the primary is then determined by the voltage and the inductance, roughly $I \approx V_i/(\omega L_i)$ at drive frequency $\omega$.
If you add turns to the primary, you increase the inductance, and therefore decrease the current. This decreased current makes less magnetic field. So the secondary produces less voltage. | {
"domain": "physics.stackexchange",
"id": 58988,
"tags": "electromagnetic-induction"
} |
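Not part of the original exchange, but the scaling in the answer can be checked with a toy calculation. The sketch below (Python; idealized formulas $L \propto N_p^2$ for the primary inductance, $I = V/(\omega L)$ for the magnetizing current, and EMF $= \omega N_s \Phi$ for the secondary; the function name and `core_factor` constant are made up for illustration) shows that doubling the primary turns halves the output voltage, exactly as the answer argues.

```python
import math

def secondary_voltage(V_in, f, N_p, N_s, core_factor=1e-5):
    """Idealized transformer: L grows as N_p^2, the magnetizing current is
    I = V/(2*pi*f*L), the flux amplitude is Phi = L*I/N_p, and the
    secondary EMF is 2*pi*f*N_s*Phi."""
    w = 2 * math.pi * f
    L = core_factor * N_p ** 2      # inductance scales with turns squared
    I = V_in / (w * L)              # more turns -> more inductance -> less current
    Phi = L * I / N_p               # flux amplitude linked by every turn
    return w * N_s * Phi            # secondary EMF; algebraically V_in * N_s / N_p

V = 230.0
v1 = secondary_voltage(V, 50, N_p=100, N_s=50)   # ~115 V
v2 = secondary_voltage(V, 50, N_p=200, N_s=50)   # ~57.5 V: more primary turns, less output
```

The extra turns do not make a bigger field here because the drive is a fixed voltage, not a fixed current: the current drops faster (as $1/N_p^2$) than the turn count rises.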
How does the voltage between two charged sheets change if we change their distance | Question: Suppose I have two charged capacitor plates that are both isolated and carry a charge density $D = \frac QA$. According to textbook physics the electric field between them is given by $E=\frac D {\epsilon\epsilon_0}$ and the voltage by $U = Ed = \frac {Dd}{\epsilon\epsilon_0}$ with $d$ the distance between the plates. According to the formula for the voltage from above I could set any voltage between the plates if I just separate them far enough from each other, and also the electric field would be constant no matter how far the plates are apart, which is also quite counter-intuitive. As far as I remember this is true as long as $d$ is small compared to the size of the charged plates.
But what if this condition no longer holds? What is happening then? Is there another formula for this case that is comparably simple? I would suppose that for very large $d$ the whole thing can be seen as two point charges which would give a $\frac1r$ dependency of the voltage. But what is happening in between?
Answer: Here is a simplified approach to this question - I hope it is not too simplistic. Sorry, I could not upload the mathematics and the illustrating diagram from my computer file. I need to learn how to do this, and I would appreciate it if someone could leave some ideas.
Basically the approach by "MyUserIsThis" is intuitively sound. The analysis is not detailed enough to show how V depends on d (distance between the plates) at small and large d.
Imagine the two parallel plates $P_1$ and $P_2$ with finite areas $A_1$ and $A_2$, carrying electric charges with uniform densities $D_1$ (charge $+Q_1$) and $D_2$ (charge $-Q_2$) respectively. The plates are placed on top of each other ($P_2$ above $P_1$). We assume uniform densities for simplicity. Now, choose two differential elements: $dA_1 = dx_1dy_1$ on $P_1$ at point ($x_1, y_1, 0$) and $dA_2=dx_2dy_2$ on $P_2$ at point ($x_2, y_2, d$). The potential difference between the plates follows from standard electrostatic theory, leading to the following general but 'complicated' double integral over the surfaces $A_1$ and $A_2$
$V(d)=-\frac{D_1D_2}{4\pi \epsilon_0} \int_{A_1,A_2} dA_1dA_2 \frac{1}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2+d^2}}$
However, for large $d$-values, due to the small size of the plates, the terms $(x_2-x_1)^2$ and $(y_2-y_1)^2$ are very small compared to $d^2$, so that the above equation reduces to this
$V(d)=-\frac{D_1D_2A_1A_2}{4\pi \epsilon_0} \frac1d= \frac{-Q_1Q_2}{4\pi \epsilon_0} \frac1d$
Therefore, the potential difference drops as $1/d$, which is equivalent to saying that for large $d$, the two plates see each other as point particles of charge $+Q_1$ and $-Q_2$, as mentioned in the previous answer. I hope this adds some clarity to the answer. | {
"domain": "physics.stackexchange",
"id": 6292,
"tags": "electric-fields, capacitance, voltage"
} |
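As a sanity check on the large-$d$ claim (my own addition, not from the original answer), one can evaluate the purely geometric double integral $I(d)=\int_{A_1,A_2} dA_1\,dA_2 /\sqrt{(x_2-x_1)^2+(y_2-y_1)^2+d^2}$ numerically for two unit squares and confirm that $d\cdot I(d) \to A_1 A_2 = 1$ as $d$ grows. A crude midpoint-rule sketch in Python:

```python
import math

def overlap_integral(d, n=8):
    """Midpoint-rule estimate of the double surface integral
    I(d) = integral over two unit squares of dA1 dA2 / sqrt(dx^2 + dy^2 + d^2)."""
    h = 1.0 / n
    pts = [(i + 0.5) * h for i in range(n)]
    total = 0.0
    for x1 in pts:
        for y1 in pts:
            for x2 in pts:
                for y2 in pts:
                    total += 1.0 / math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2 + d ** 2)
    return total * h ** 4  # 4-dimensional cell volume

# d * I(d) approaches A1*A2 = 1 from below as the plates recede:
ratios = {d: d * overlap_integral(d) for d in (2.0, 10.0, 100.0)}
```

At $d=2$ (comparable to the plate size) the ratio is still a few percent below 1; by $d=100$ the point-charge approximation is essentially exact.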
Which of an array's contiguous subarrays is an algorithm best applied to? | Question: A function $f$ accepts two equally-long arrays $A$ and $B$ as input, and returns a real number $s$ such that the root mean square of $A-sB$ is minimal.
I'm hoping to come up with a better-than-brute-force approach to the following problem:
Given a pair of equally-long ($n \sim 10^7$) arrays $C,D$ of numbers from the interval $[-1,1]\subset\mathbb{R}$, how are array indices $i,j$ chosen that satisfy
a) $j-i \gtrsim 10^4$
b) $s=f\left(C[i\dots j],D[i\dots j]\right)$ is 'optimal', in the sense that if indices $k,l$ satisfy $k\leq i<j\leq l$, then
$$\text{RMS}(C[i\dots j]-sD[i\dots j]) \leq \text{RMS}(C[i\dots j]-s'D[i\dots j])$$
where $s'=f\left(C[k\dots l],D[k\dots l]\right)$.
(I'm hoping that by recursing on the subarrays to the left and right of these 'optimal' subarrays, the entirety of the pair of large arrays can be processed... 'optimally'.)
I'll be grateful for even a piece of jargon describing the abstract approach this is surely an instance of. Apologies for notational abuses and not-entirely-appropriate title - hope the point is clear.
Answer: As per GrapefruitIsAwesome's answer, solving the RMS problem is fairly trivial even with large arrays. The part with the subarrays seems equally trivial: if $s$ minimizes the RMS difference between $C[i…j]$ and $D[i…j]$, then
$$\text{RMS}(C[i\dots j]-sD[i\dots j]) \leq \text{RMS}(C[i\dots j]-s'D[i\dots j])$$
will always be true regardless of how you define $s'$. That's the whole point of "minimization": the RMS difference will be the smallest it can be. | {
"domain": "dsp.stackexchange",
"id": 11208,
"tags": "audio, algorithms, audio-processing, array-signal-processing"
} |
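The closed form behind "solving the RMS problem is fairly trivial": minimizing $\sum_i (c_i - s d_i)^2$ over $s$ gives $s = \sum_i c_i d_i / \sum_i d_i^2$ (set the derivative to zero). With prefix sums of $c_i d_i$ and $d_i^2$, the optimal scale for any contiguous subarray comes out in $O(1)$ per query after $O(n)$ preprocessing. A Python sketch (the class and function names are my own, not from the thread):

```python
def best_scale(C, D):
    """Least-squares scale: s = sum(c*d)/sum(d*d) minimizes RMS(C - s*D)."""
    return sum(c * d for c, d in zip(C, D)) / sum(d * d for d in D)

def prefix(xs):
    """Running sums with a leading zero, so range sums are two lookups."""
    out = [0.0]
    for x in xs:
        out.append(out[-1] + x)
    return out

class SubarrayScaler:
    """O(n) preprocessing, O(1) per contiguous-subarray query."""
    def __init__(self, C, D):
        self.cd = prefix([c * d for c, d in zip(C, D)])
        self.dd = prefix([d * d for d in D])

    def scale(self, i, j):
        """Optimal s for the half-open slice C[i:j] against D[i:j]."""
        return (self.cd[j] - self.cd[i]) / (self.dd[j] - self.dd[i])

C = [0.5, -0.2, 0.9, 0.1]        # exactly 2 * D, so the optimal s is 2
D = [0.25, -0.1, 0.45, 0.05]
s_full = best_scale(C, D)
s_sub = SubarrayScaler(C, D).scale(1, 3)   # same answer on any subarray here
```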
Why is ar_track_alvar not installing? | Question:
When I try to install ar_track_alvar using sudo apt-get install ros-indigo-ar-track-alvar,
I get the following error:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package ros-indigo-ar-track-alvar
I also tried using the Synaptic tool, which is an alternative to apt-get. But I was not able to find any Indigo-related alvar packages.
Can anyone please help me on this?
Originally posted by anilmullapudi on ROS Answers with karma: 75 on 2016-08-09
Post score: 0
Original comments
Comment by ahendrix on 2016-08-09:
ar_track_alvar is listed as released for Indigo, so that should work if you're using Indigo that was installed through apt on Ubuntu x86. Perhaps you're using a different OS or a different CPU?
Comment by anilmullapudi on 2016-08-09:
Here is my OS details
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.5 LTS
Release: 14.04
Codename: trusty
Comment by ahendrix on 2016-08-09:
The build status page and the arm build status page show that ar_track_alvar is built and stable for Ubuntu Trusty x86, amd64 and armhf architectures.
Comment by ahendrix on 2016-08-09:
Did you install ROS through apt or from source? Are your apt sources set up correctly?
Comment by ahendrix on 2016-08-09:
Your system reports that it's running Ubuntu 14.04 (Trusty), but the ROS packages listed are for Ubuntu 12.04 (Precise). Did you have a previous version of ROS installed? Did you upgrade from a previous version of Ubuntu?
Comment by anilmullapudi on 2016-08-09:
ROS was installed through apt-get; rosversion -d reports indigo.
Comment by anilmullapudi on 2016-08-09:
Thank you so much for your support. I re-installed the ROS Indigo keys and was able to install ar_track_alvar now.
Answer:
I followed sections 1.2 and 1.3 from this link: http://wiki.ros.org/indigo/Installation/Ubuntu
and then ran sudo apt-get update.
The above steps worked for me; I am now able to install the package.
Originally posted by anilmullapudi with karma: 75 on 2016-08-09
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 25484,
"tags": "ros, ar-track-alvar"
} |
Hilbert transform modifications for a non-sinusoidal waveform | Question: I have recently been using the Hilbert transform a fair bit and its ability to return an instantaneous phase and magnitude estimate, but it has got me thinking about the meaning of phase.
From my understanding the Hilbert transform (HT) and the analytic signal are fundamentally defined off trigonometric functions, and so taking the HT of a sine wave results in a linearly changing phase response. If you take the HT of a superposition of sines (say just two) then you would see the frequencies averaged in the phase output and their difference in the magnitude output. I have worked through examples like this and that is all clear to me.
Now say we take the HT of a sawtooth wave. This is a periodic signal and so clearly has a Fourier decomposition, which would have lots of sin and cos terms; in fact it is just this:
$$\frac{2A}{\pi} \left(\sin(a) + \frac{1}{2} \sin(2a) + \frac{1}{3} \sin(3a) + \cdots\right)$$
You can of course take the HT and interpret the result as again combining these (now infinitely many) frequency components with their respective amplitudes. I have plotted that below.
But in a way this is kind of useless, right? Like to me the fundamental meaning of phase is just how far through the oscillation you are, so I could look at the input signal, tell you that the time period is 1 second, and arbitrarily define the falling zero as having a phase of 0. The 'phase' is then just time_since_falling_zero/time_period and in that sense varies linearly (also a sawtooth). The magnitude would just be thought of as a constant = 1.
So in that way we have defined a new 'basis set' which instead of being sin and cos is itself a sawtooth wave, which kind of gives more meaningful information in my opinion. My question is basically whether it is possible (and if so whether you can point me in the direction of such resources) to define pseudo-analytic signals and pseudo-phase planes where you could use another periodic signal as the basis, so you could have a superposition of sawtooth waves...
I hope that makes sense. My head feels a little cloudy thinking about this stuff and trying to define phase planes not based on sin and cos.
Answer:
From my understanding the Hilbert transform (HT) and the analytic signal are fundamentally defined off trigonometric functions
Not really. The Hilbert Transform is essentially defined as convolution with $\frac{1}{\pi t}$.
It is a linear time-invariant filter (provided the convolution integral converges) and as such has a transfer function. That transfer function is a 90-degree phase shift, i.e. a magnitude of 1 and a phase of $-\pi/2$ for positive frequencies and $+\pi/2$ for negative frequencies.
All the trigonometric observations that you describe are consequences of these properties, but they are not the definition.
Like to me the fundamental meaning of phase is just how far through the oscillation you are so I could look at the input signal tell you that the time period is 1 second and arbitrarily define the falling zero as having a phase of 0
That's not a particularly useful definition of phase. The Fourier Transform represents a time-domain signal as a superposition of complex exponentials (and vice versa). These are defined as $Ae^{j(\omega t + \varphi)}$ where $A$ is the amplitude and $\varphi$ is the phase. That's kind of all there is to it.
My question is basically whether it is possible ... you could use another periodic signal as the basis
You can use any basis functions you like, but by far the most useful ones are orthogonal or even orthonormal basis functions as it makes the math much much easier. That's the reason why everyone uses complex exponentials as the basis function: they are orthonormal and easy to work with. | {
"domain": "dsp.stackexchange",
"id": 10941,
"tags": "signal-analysis, phase, hilbert-transform"
} |
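The "phase wobble" the question describes is easy to reproduce. The sketch below (NumPy; my own illustration, not from the thread) builds the analytic signal with the standard FFT construction that scipy.signal.hilbert also uses: zero the negative-frequency bins and double the positive ones. For a pure tone the envelope is flat and the unwrapped phase is a clean ramp; for a sawtooth the instantaneous phase wobbles around the fundamental's ramp, which is exactly the effect the question is wrestling with.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal: keep DC (and Nyquist), double positive
    frequencies, zero negative ones, then inverse-transform."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

N = 1024
t = np.arange(N) / N
tone = np.cos(2 * np.pi * 8 * t)            # 8 full cycles: the ideal case
z = analytic_signal(tone)
envelope = np.abs(z)                        # flat, = 1 for a pure tone
phase = np.unwrap(np.angle(z))              # linear ramp, slope 2*pi*8/N per sample

saw = 2 * (t % 0.25) / 0.25 - 1             # 4-cycle sawtooth
phase_saw = np.unwrap(np.angle(analytic_signal(saw)))
# phase_saw is NOT linear: the harmonics make it wobble around the
# fundamental's ramp, unlike the "fraction of the period" phase the
# question would prefer to define.
```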
Secure client-side encryption of user content | Question: I am working on a pet project of mine which I've recently revived after a year long hiatus. The application is a note-taking application with client-side encryption. If you need an analogy think Evernote meets LastPass.
Before any version of the app hits the first beta testers I would like to have the encryption related parts of the code scrutinized by many more eyes.
For your convenience I've created a small Github Repository that includes all of the code shown here plus a minimal demo application (console) in a solution (C#) for Visual Studio 2013, 2015 (Community Edition will do).
Let me give you a quick conceptual overview of how the encryption in Ciphernote is supposed to work before I dive into the code. While it is not totally necessary to read the overview, it might help understanding the implementation.
User Registration: Client
User provides email and a password
Client generates a random Content Encryption Key (CEK) using a cryptographic random number generator. This key is used to encrypt all user content, including text content and media resources such as images, audio, etc. If this key did not exist as an intermediate layer, changing a user's password would involve re-encrypting all content.
Derive a key for encrypting the CEK using:
var input = padToMaxLength(email) + password;
var salt = SHA512(input)
var contentKeyEncryptionKey = PBKDF2(input, salt, 10000)
Encrypt the CEK using contentKeyEncryptionKey derived in the previous step
var encryptedContentKey = AES256(CEK, contentKeyEncryptionKey) (prefixed with an HMAC-SHA-256 over encryptedContentKey)
Derive a server authentication token with
var input = contentKeyEncryptionKey;
var salt = SHA512(padToMaxLength(email) + password)
var authToken = PBKDF2(input, salt, 10000)
User Registration: Server
Server receives request containing:
Email
encryptedContentKey
authToken
Generates a unique 256 Bit per-user salt using a cryptographic random number generator
Generates a server-side authentication token using:
var serverSideAuthToken = PBKDF2(authToken, salt, 100000)
Stores email, serverSideAuthToken and encryptedContentKey in database
Authentication
Client computes server authentication token as described above and passes it along with the user's email to the server
Server computes PBKDF2(authToken, saltFromDatabase, 100000). User is authenticated if email and derived token matches.
Note: I realize that using email+password for salting is far from ideal.
Update:
CryptoService.cs:
This class implements pretty much everything described in the previous section, except for the server part.
public class CryptoService
{
public CryptoService(IRandomNumberGenerator rng)
{
this.rng = rng;
}
private byte[] contentKey;
// cryptographic RNG (client platform specific)
private readonly IRandomNumberGenerator rng;
protected const int Pbkdf2Iterations = 10000;
private int IvLength = 16;
protected const int KeyLength = 32; // AES-256
protected const int HmacLength = 32; // HMAC-SHA-256
public const int MaxUsernameLength = 256;
/// <summary>
/// Pads the supplied username to maxlength
/// </summary>
public static string PadUsername(string username, int desiredLength)
{
var sb = new StringBuilder(username, desiredLength);
sb.Append('-', desiredLength - username.Length);
return sb.ToString();
}
/// <summary>
/// Derives the key for encrypting/decrypting the content key using the supplied credentials
/// </summary>
private Task<byte[]> GetContentKeyDecryptionKeyAsync(string username, string password)
{
return Task.Run(() =>
{
var paddedUsername = PadUsername(username, MaxUsernameLength);
byte[] salt;
var input = Encoding.UTF8.GetBytes(paddedUsername + password);
using (var hasher = SHA512.Create())
salt = hasher.ComputeHash(input);
using (var alg = new Rfc2898DeriveBytes(input, salt, Pbkdf2Iterations))
return alg.GetBytes(KeyLength);
});
}
/// <summary>
/// Initializes the content key from the supplied encrypted version and credentials
/// </summary>
public Task SetContentKeyAsync(string username, string password, Stream encryptedContentKey)
{
return Task.Run(async () =>
{
var key = await GetContentKeyDecryptionKeyAsync(username, password);
var result = new MemoryStream();
await Decrypt(encryptedContentKey, result, key);
contentKey = result.ToArray();
});
}
/// <summary>
/// Returns the decrypted content key
/// </summary>
public byte[] GetContentKey()
{
return contentKey;
}
/// <summary>
/// Generates a virgin content key (used during new user registration)
/// </summary>
public void GenerateAndSetContentKey()
{
contentKey = rng.GenerateRandomBytes(KeyLength);
}
/// <summary>
/// Returns the content key encrypted using the provided credentials
/// </summary>
public Task<byte[]> GetEncryptedContentKeyAsync(string username, string password)
{
return Task.Run(async () =>
{
var key = await GetContentKeyDecryptionKeyAsync(username, password);
return await Encrypt(contentKey, key);
});
}
/// <summary>
/// Computes an access token for the backend using the supplied credentials
/// </summary>
public async Task<byte[]> GetAccessTokenAsync(string username, string password)
{
return await Task.Run(async () =>
{
var paddedUsername = PadUsername(username, MaxUsernameLength);
byte[] salt;
using (var hasher = SHA512.Create())
salt = hasher.ComputeHash(Encoding.UTF8.GetBytes(paddedUsername + password));
var input = await GetContentKeyDecryptionKeyAsync(username, password);
// request two Blocks of 20 Bytes since Rfc2898DeriveBytes uses HMAC-SHA1 internally
using (var alg = new Rfc2898DeriveBytes(input, salt, Pbkdf2Iterations))
return alg.GetBytes(40);
});
}
public async Task Encrypt(Stream source, Stream destination, byte[] key)
{
Debug.Assert(key.Length == KeyLength);
// Create Random IV
var iv = rng.GenerateRandomBytes(IvLength);
// Reserve space for MAC (SHA256)
destination.SetLength(HmacLength);
destination.Seek(0, SeekOrigin.End);
// Prefix stream with IV
await destination.WriteAsync(iv, 0, iv.Length);
// Encrypt
using (var symmetricKey = Aes.Create())
{
symmetricKey.KeySize = KeyLength * 8;
symmetricKey.Mode = CipherMode.CBC;
symmetricKey.Padding = PaddingMode.PKCS7;
using (var encryptor = symmetricKey.CreateEncryptor(key, iv))
{
var cs = new CryptoStream(destination, encryptor, CryptoStreamMode.Write);
await source.CopyToAsync(cs);
if (!cs.HasFlushedFinalBlock)
cs.FlushFinalBlock();
}
}
// Compute HMAC
using (var hasher = new HMACSHA256(key))
{
destination.Seek(HmacLength, SeekOrigin.Begin);
var hmac = hasher.ComputeHash(destination);
Debug.Assert(hmac.Length == HmacLength);
// seek to begin of IV
destination.Seek(0, SeekOrigin.Begin);
// write it
destination.Write(hmac, 0, hmac.Length);
}
}
public async Task Decrypt(Stream source, Stream destination, byte[] key)
{
Debug.Assert(key.Length == KeyLength);
var hmac = new byte[HmacLength];
var iv = new byte[IvLength];
// Read HMAC
await source.ReadAsync(hmac, 0, hmac.Length);
// Verify HMAC
using (var hasher = new HMACSHA256(key))
{
var hmacActual = hasher.ComputeHash(source);
// compare
if (!hmac.ConstantTimeAreEqual(hmacActual))
throw new CryptoServiceException(CryptoServiceExceptionType.HmacMismatch);
}
// Read IV
source.Seek(HmacLength, SeekOrigin.Begin);
await source.ReadAsync(iv, 0, iv.Length);
// Decrypt
using (var alg = Aes.Create())
{
alg.KeySize = KeyLength * 8;
alg.Mode = CipherMode.CBC;
alg.Padding = PaddingMode.PKCS7;
using (var decryptor = alg.CreateDecryptor(key, iv))
{
var cs = new CryptoStream(source, decryptor, CryptoStreamMode.Read);
await cs.CopyToAsync(destination);
if (!cs.HasFlushedFinalBlock)
cs.FlushFinalBlock();
}
}
}
public async Task<Stream> GetDecryptedStream(Stream source, byte[] key)
{
var hmac = new byte[HmacLength];
var iv = new byte[IvLength];
// Read HMAC
await source.ReadAsync(hmac, 0, hmac.Length);
// Verify HMAC
using (var hasher = new HMACSHA256(key))
{
var hmacActual = hasher.ComputeHash(source);
// compare
if (!hmac.ConstantTimeAreEqual(hmacActual))
throw new CryptoServiceException(CryptoServiceExceptionType.HmacMismatch);
}
// Read IV
source.Seek(HmacLength, SeekOrigin.Begin);
await source.ReadAsync(iv, 0, iv.Length);
// Decrypt
var alg = Aes.Create();
alg.KeySize = KeyLength * 8;
alg.Mode = CipherMode.CBC;
alg.Padding = PaddingMode.PKCS7;
var decryptor = alg.CreateDecryptor(key, iv);
return new CryptoStreamWithResources(source, decryptor, CryptoStreamMode.Read,
new IDisposable[] { alg, decryptor });
}
public async Task<byte[]> Encrypt(byte[] sourceBytes, byte[] key)
{
var source = new MemoryStream(sourceBytes);
var destination = new MemoryStream();
await Encrypt(source, destination, key);
return destination.ToArray();
}
public async Task<byte[]> Decrypt(byte[] sourceBytes, byte[] key)
{
var source = new MemoryStream(sourceBytes);
var destination = new MemoryStream();
await Decrypt(source, destination, key);
return destination.ToArray();
}
public Task EncryptContent(Stream source, Stream destination)
{
if(contentKey == null)
throw new CryptoServiceException(CryptoServiceExceptionType.ContentKeyNotSet);
return Encrypt(source, destination, contentKey);
}
public Task DecryptContent(Stream source, Stream destination)
{
if (contentKey == null)
throw new CryptoServiceException(CryptoServiceExceptionType.ContentKeyNotSet);
return Decrypt(source, destination, contentKey);
}
public Task<Stream> GetDecryptedContentStream(Stream source)
{
if (contentKey == null)
throw new CryptoServiceException(CryptoServiceExceptionType.ContentKeyNotSet);
return GetDecryptedStream(source, contentKey);
}
public async Task<byte[]> ComputeContentHmac(Stream source)
{
if (contentKey == null)
throw new CryptoServiceException(CryptoServiceExceptionType.ContentKeyNotSet);
return await Task.Run(() =>
{
using (var hasher = new HMACSHA256(contentKey))
{
var hmac = hasher.ComputeHash(source);
return hmac;
}
});
}
}
Answer: First the good news
Your code looks clean
Your methods are mostly short ones
You are mostly disposing disposable objects by using the using statement
You name your things mostly well
Validation
You don't validate your input parameters, which is a bad habit when the methods are public. By not validating the parameters, your code will throw exceptions with stack traces that expose the implementation details of your code. This is something you don't want, not least because you are dealing with security here.
Some hints:
before you call Seek() on a Stream you should check whether the stream is seekable.
null checks
range of arguments like integers etc.
Naming
If a method operates asynchronously, using the async keyword, it should be postfixed with Async.
That being said, let's dig into your code...
The first thing I noticed was your PadUsername() method. This method is doing a little too much. You could simply use PadRight(int, char), which does the same thing but in a cleaner way, like so
public static string PadUsername(string username, int desiredLength)
{
return username.PadRight(desiredLength, '-');
}
The changed method behaves differently from the former implementation: if you pass a username with Length > desiredLength it will simply return the username. The former method would throw an ArgumentOutOfRangeException at sb.Append().
But you have another problem here which is the Xml documentation which states
/// Pads the supplied username to maxlength
This comment is lying! It doesn't pad to maxlength but to the desired length. If a comment is not true, either change it or remove it.
/// <summary>
/// Returns the decrypted content key
/// </summary>
public byte[] GetContentKey()
{
return contentKey;
}
Why is this a method?
You should change it to a property with a private setter, like so
public byte[] ContentKey
{
get;
private set;
}
which would make the backing field contentKey superfluous as well.
public async Task Decrypt(Stream source, Stream destination, byte[] key) and public async Task<Stream> GetDecryptedStream(Stream source, byte[] key)
The verification of the HMAC should be extracted to a private static method. This has the advantage that you don't need a comment, the code duplication is removed, and both methods become shorter.
I would change it like so
private static byte[] ComputeHash(Stream content, byte[] key)
{
using (var hasher = new HMACSHA256(key))
{
return hasher.ComputeHash(content);
}
}
and VerifyHMAC() like so
private static void VerifyHMAC(byte[] hmac, Stream content, byte[] key)
{
    var hmacActual = ComputeHash(content, key);
    if (!hmac.ConstantTimeAreEqual(hmacActual))
    {
        throw new CryptoServiceException(CryptoServiceExceptionType.HmacMismatch);
    }
}
So each of these blocks
// Verify HMAC
using (var hasher = new HMACSHA256(key))
{
var hmacActual = hasher.ComputeHash(source);
// compare
if (!hmac.ConstantTimeAreEqual(hmacActual))
throw new CryptoServiceException(CryptoServiceExceptionType.HmacMismatch);
}
can be replaced by
VerifyHMAC(hmac, source, key);
You have a construct like the following
byte[] salt;
var input = Encoding.UTF8.GetBytes(paddedUsername + password);
using (var hasher = SHA512.Create())
salt = hasher.ComputeHash(input);
two times, which should be extracted in the same way.
This
if(contentKey == null)
throw new CryptoServiceException(CryptoServiceExceptionType.ContentKeyNotSet);
appears 4 times, so place it into a method like
private void ValidateContentKey()
{
if(contentKey == null)
{
throw new CryptoServiceException(CryptoServiceExceptionType.ContentKeyNotSet);
}
}
public async Task Encrypt(Stream source, Stream destination, byte[] key)
CryptoStream implements IDisposable, hence you should enclose its usage in a using block as well.
Braces {}
Omitting braces, although they might be optional for single-statement if, using etc., can lead to hidden bugs, which anyone dealing with security will want to avoid. Hidden bugs are very hard to track down. They can be introduced simply by mistake.
I would like to encourage you to always use them, which helps to make your code less error-prone and better structured (IMO).
Comments
Some of your comments are good like
// request two Blocks of 20 Bytes since Rfc2898DeriveBytes uses HMAC-SHA1 internally
using (var alg = new Rfc2898DeriveBytes(input, salt, Pbkdf2Iterations))
return alg.GetBytes(40);
and some are bad like
// seek to begin of IV
destination.Seek(0, SeekOrigin.Begin);
// write it
destination.Write(hmac, 0, hmac.Length);
Comments should tell the reader of the code (which may be you or Sam the maintainer) why something is done the way it is done. Let the code itself tell what is done by using meaningfully named variables, methods and classes.
Sure, it would be good to know why the Seek() from above is taking place, but this could be achieved by having and using a constant, like so
private const int IVBeginning = 0;
destination.Seek(IVBeginning, SeekOrigin.Begin);
making the comment superfluous. | {
"domain": "codereview.stackexchange",
"id": 24037,
"tags": "c#, security, cryptography"
} |
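For readers following along in another language: the ConstantTimeAreEqual call the review keeps intact is the part you must never replace with a plain ==, because an early-exit comparison leaks how many leading MAC bytes matched. A Python analogue of the extracted ComputeHash/VerifyHMAC pair (my own sketch, using the stdlib's constant-time hmac.compare_digest):

```python
import hmac
import hashlib
import os

def compute_mac(key: bytes, data: bytes) -> bytes:
    """HMAC-SHA-256 over data, mirroring the C# ComputeHash helper."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify_mac(key: bytes, data: bytes, expected: bytes) -> None:
    """Raise on mismatch; hmac.compare_digest is constant-time, playing the
    role of ConstantTimeAreEqual in the reviewed code."""
    if not hmac.compare_digest(compute_mac(key, data), expected):
        raise ValueError("HMAC mismatch")

key = os.urandom(32)
blob = b"iv-and-ciphertext bytes"
tag = compute_mac(key, blob)          # 32-byte tag, like HmacLength above
verify_mac(key, blob, tag)            # ok: passes silently
try:
    verify_mac(key, blob + b"tampered", tag)
    tampered_detected = False
except ValueError:
    tampered_detected = True          # any modification is caught
```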
PHP - avoid many on-demand imports | Question: I'm trying to create a small PHP framework for my own needs (and likes). However, there are questions on which I may need the advice of wiser and more experienced people before things become too complicated - so I moved it to GitHub from a private repo.
Currently I'm curious about organizing imports of modules better. I use a special object, the "context", to hold links to all necessary modules - data access objects, utils, etc. I use lazy initialization here, so that when a field of this object is required, it is provided by the getter of the same name.
It looks like:
class ProtoContext {
function __get($name) {
$methodName = 'get' . ucfirst($name);
if (!method_exists($this, $methodName)) {
throw new Exception("No property '$name' in Context!");
}
$res = $this->$methodName();
if (is_object($res)) {
$res->ctx = $this;
}
$this->$name = $res;
return $res;
}
protected function getElems() {
return Elems::$elems;
}
protected function getUtil() {
module('sys/Util');
return new Util();
}
}
The main Context class inherits from this ProtoContext, though that is not significant now. As the application grows, the context may come to look like this:
module('sys/ProtoContext');
class Context extends ProtoContext {
protected function getAuth() {
module('MyAuth');
return new MyAuth();
}
protected function getUsersDao() {
module('dao/MysqlDao');
return new MysqlDao('users');
}
protected function getRolesDao() {
module('dao/MysqlDao');
return new MysqlDao('roles');
}
/*
* 5-10 more similar methods 'getSomethingDao'
* each including MysqlDao via method 'module'
*/
protected function getLinksCViewDao() {
module('dao/MysqlDao');
return new MysqlDao('linksc_view');
}
}
There is no problem with the module method being called several times; the include is performed only once. However, it looks annoying that the inclusion of MysqlDao is mentioned so many times. On the other hand, if I pull it above the class (like the import of ProtoContext), it will be imported even when I don't need it - for example, when addressing the $ctx->auth field (which will call the getAuth method).
I wonder whether there is a comfortable workaround which will preserve lazy loading and lazy initialization - and at the same time allow me to get rid of the extra imports?
Answer: As N.B. mentioned in their comment - why are you implementing a module method if we already have lazy loading via autoloading? I've even written a class that you can simply attach and not have to worry about loading your classes. Now, there are better autoloaders out there - like Symfony's ClassLoader/autoload.php. Anyway, here's your code, with my LazyLoader class handling the loading of classes (untested code).
/LazyLoader.php
/**
* LazyLoader
* A fast, strict lazy loader (time ~ 0.0001)
*
* Class name must match File name.
* If class has namespace, must be called via namespace.
*
* @author Juan L. Sanchez <juanleonardosanchez.com>
* @license MIT
* @version 1.2.0
* @internal 06.26.2013
*/
Namespace LazyLoader;
class LazyLoader{
public static $dirRoot;
public static function autoload($class_name){
$file = dirname(__FILE__) .
(strlen(self::$dirRoot) > 0 ? self::$dirRoot : "") .
'/' . array_pop(explode("\\", $class_name)) . '.php';
file_exists($file) ? require_once($file) : "";
}
public static function SetBaseDirectory($directory_root){
self::$dirRoot = substr($directory_root, -1) == "\\" ?
substr($directory_root, 0, -1) : "";
}
public static function Register(){
return spl_autoload_register(__NAMESPACE__ .'\LazyLoader::autoload');
}
}
$LazyLoader = new LazyLoader;
$LazyLoader->SetBaseDirectory("Classes"); # Optional
$LazyLoader->Register();
/Context.php
<?php
module('sys/ProtoContext');
class Context extends ProtoContext {
protected function getAuth() {
return new MyAuth();
}
protected function getUsersDao() {
return new MysqlDao('users');
}
protected function getRolesDao() {
return new MysqlDao('roles');
}
/*
* 5-10 more similar methods 'getSomethingDao'
* each including MysqlDao via method 'module'
*/
protected function getLinksCViewDao() {
return new MysqlDao('linksc_view');
}
}
The directory structure of the above code would look something like:
/LazyLoader.php
/Context.php
/Classes/MyAuth.php
/Classes/ProtoContext.php
/Classes/MysqlDao.php | {
"domain": "codereview.stackexchange",
"id": 4275,
"tags": "php"
} |
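The magic-getter-with-caching pattern in ProtoContext translates almost line for line into other dynamic languages, which may help in seeing it abstractly. A Python sketch (my own, not from the thread): __getattr__ only fires when normal attribute lookup fails, so caching the result with setattr means every later access skips the factory entirely, just like the PHP $this->$name = $res; line.

```python
class ProtoContext:
    def __getattr__(self, name):
        # Fires only when normal lookup fails, i.e. on first access.
        factory = "get" + name[0].upper() + name[1:]
        method = getattr(type(self), factory, None)
        if method is None:
            raise AttributeError(f"No property '{name}' in Context!")
        value = method(self)
        setattr(self, name, value)   # cache: later accesses bypass __getattr__
        return value

class Context(ProtoContext):
    def getUtil(self):
        return {"kind": "util"}      # stand-in for module('sys/Util'); new Util()

ctx = Context()
first = ctx.util                     # triggers getUtil(), caches the result
second = ctx.util                    # plain attribute lookup: same object
```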
When is conditional Kolmogorov complexity zero? | Question: It seems intuitive that conditional Kolmogorov complexity is only zero when the bitstrings are the same, and otherwise is greater than 0. I.e. if $b_1 = b_2$, then $K(b_1|b_2) = 0$, otherwise $K(b_1|b_2) > 0$.
However, if that is the case, then we could break a long bitstring into many smaller, different bitstrings, and then use conditional Kolmogorov complexity to establish a lower bound on the long bitstring's Kolmogorov complexity.
E.g., original bitstring is $\mathcal{B}$. It is broken into different, smaller bitstrings $b_1,b_2,\ldots,b_n$. We then use conditional Kolmogorov complexity to establish a lower bound on $K(\mathcal{B})$, in the following way,
\begin{align*}
K(\mathcal{B}) &= K(b_1,b_2,\ldots,b_n) \\
&= K(b_1) + K(b_2|b_1) + \ldots + K(b_n|b_1,b_2,\ldots,b_{n-1}) +
O(\log K(b_1,b_2,\ldots,b_n))\\
&\geq n.
\end{align*}
The equality is not exact; there are error terms involved, which may invalidate the argument.
As the bitstring length grows, we can continue to grow the lower bound $n$ arbitrarily large, since there are a greater number of distinct smaller bitstrings that can be used to construct the large bitstring. This appears to violate Chaitin's incompleteness theorem, which states with a fixed axiomatic system there is a limit $\mathcal{L}$ above which we cannot prove $K(\mathcal{B}) > \mathcal{L}$.
What am I missing here?
Answer: The answer depends on whether your encoding is prefix-free or not, and in the latter case, on your universal Turing machine.
There are two variants of Kolmogorov complexity: one in which the set of programs form a prefix-free code, and one in which there is no such requirement. For the first variant, the Kolmogorov complexity is always positive. For the second variant, the universal Turing machine can do whatever it wants given an empty program (or indeed, any fixed program). You can arrange that $K(x_1|x_2) = 0$ for your choice of $x_1,x_2$, for example. | {
"domain": "cs.stackexchange",
"id": 9706,
"tags": "kolmogorov-complexity"
} |
Difference between fixed-to-variable length codes and variable-to-fixed length codes? | Question: I am a bit confused by the difference between the two. Can someone clarify the difference between the two?
Answer: A fixed-to-variable length code is a code that takes a string in $\cal X^*$, partitions it into chunks of fixed length $n$, and replaces each chunk $w$ by some codeword $C(w)$ whose length isn't fixed. The classical example is prefix codes. A prefix code such as Huffman's code replaces each symbol $\sigma \in \cal X$ by a codeword $C(\sigma)$, with the aim of minimizing $\mathbb{E}[|C(\sigma)|]$ with respect to some distribution on $\cal X$. We can obtain a better rate by applying Huffman coding on $k$-tuples of inputs. This corresponds to a fixed-to-variable length code that encodes each word $w$ of fixed length $k$ by a codeword $C(w)$.
A variable-to-fixed length code is a code that takes a string in $\cal X^*$, breaks it into pieces of variable length, and replaces each piece with a word of fixed length. The classical example is Lempel-Ziv encoding with fixed dictionary size. Lempel-Ziv breaks its input into chunks, where each chunk extends a word in the dictionary by one symbol. Each chunk is then encoded as an index into the dictionary together with the new symbol. The encoding thus has fixed length, but the chunks vary in length.
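To make the variable-to-fixed idea concrete, here is a minimal LZ78-style parser sketched in Python (my own illustration of the Lempel-Ziv scheme described above; for simplicity the dictionary here grows without the fixed-size cap the answer mentions). Each chunk extends a dictionary word by one symbol, and each chunk is emitted as an (index, symbol) pair — with a fixed dictionary size, that pair has fixed bit width even though the chunks vary in length:

```python
def lz78_parse(s):
    """Break s into chunks, each extending a dictionary word by one symbol.
    Returns a list of (dictionary_index, new_symbol) pairs."""
    dictionary = {"": 0}          # index 0 is the empty word
    chunks = []
    w = ""
    for c in s:
        if w + c in dictionary:   # keep extending the current match
            w += c
        else:                     # w+c extends dictionary word w by symbol c
            chunks.append((dictionary[w], c))
            dictionary[w + c] = len(dictionary)
            w = ""
    if w:                         # flush a trailing (already-seen) chunk
        chunks.append((dictionary[w[:-1]], w[-1]))
    return chunks

print(lz78_parse("aababc"))  # chunks a | ab | abc -> [(0, 'a'), (1, 'b'), (2, 'c')]
```

Encoding each pair with a fixed number of bits for the index (log2 of the dictionary size) plus a fixed-width symbol gives the fixed-length output described in the answer.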
"domain": "cs.stackexchange",
"id": 10245,
"tags": "coding-theory"
} |
Turned to steel in the great magnetic field | Question: This is obviously a "fun" question, but I'm sure it still has valid physics in it, so bear with me.
How great of a magnetic field would you need to transmute other elements into iron/nickel, if that's in fact possible?
The magnetic field near a neutron star, roughly of order $10^{10}$ tesla, is so strong that it distorts atomic orbitals into thin "cigar shapes". (The cyclotron energy becomes greater than the Coulomb energy.) Certainly if a solid crystal were placed in such a field it would become very anisotropic, and at some field strength the lattice constant in the direction transverse to the field could become small enough for nuclear fusion rates between the nuclei to become non-negligible.
How high do we need to crank up the field before the nuclei all equilibrate to the absolute energy minimum of iron and nickel in, say, a matter of hours or days?
Update: From http://dx.doi.org/10.1086/152986 it appears that matter in strong magnetic fields forms into strongly-bound 1D chains along the field lines, which are only weakly bound to each other, and the parallel and transverse lattice constants are actually comparable.
Answer: a great topic. First, ten gigatesla is only the magnetic field near a magnetar - a special type of neutron star. They were discussed e.g. in this Scientific American article in 2003:
https://web.archive.org/web/20120204052553if_/http://solomon.as.utexas.edu/~duncan/sciam.pdf
Ordinary neutron stars have magnetic fields that are 1000 times weaker than that.
It is true that in the magnetar stars, atoms are squeezed to cigars thinner than the Compton wavelength of the electron - which is in between the radius of the electron (and also the radius of the nucleus) and the radius of the atom.
However, it is such strong a field that many other things occur. For example, there is a box interaction between 4 photons, caused by a virtual electron loop. This is normally negligible - so we say that Maxwell's equations are linear in the electromagnetic fields. However, at such strong magnetic fields, the nonlinearity kicks in and one photon often splits into two, or vice versa.
So there's a lot of new stuff going on in such fields. A magnetar that would be 1000 miles away would kill us due to diamagnetism of water in our cells.
Magnetars and fusion
Your idea to use magnetars to support fusion is creative, of course. But I think that to start fusion, you have to squeeze the nuclei closer than the Compton wavelength of the electron which is still $2.4 \times 10^{-12}$ meters, much longer than the nuclear radius. You would need to add two or three more orders of magnitude to the squeezing. A magnetar is not enough for that.
When you have such brutally deformed atoms, you can't neglect the nuclear reactions involving electrons - which are usually thought of as "irrelevant distant small particles" that don't influence the nuclear processes. However, if their wave functions are squeezed to radii that are substantially shorter than the Compton wavelength, their kinetic energy substantially increases. At the width of the wave function comparable to the Compton wavelength, the total energy/mass of the electron increases by O(100%) or so. This increase comes from the "thin" directions only but it is enough.
Now, note that the difference between the neutron mass and the proton mass is just 2.5 masses of the electron. So if you squeeze the electron so that its total energy increases more than 2.5 times, it becomes energetically favored for the protons inside your (not so) "crystal" to absorb the electron and turn into neutrons.
So I believe that all the matter in a near proximity of the magnetar will actually turn into the same matter that the neutron star itself is made of. That will happen before the protons will have any chance to create new bound states such as iron nuclei (that you wanted to produce by fusion). You will end up with neutrons and almost no protons - the same state of matter that the star is built from itself. In some sense, I think that this shouldn't be surprising - if it is surprising for someone, he should have asked the question why there is no ordinary matter left on the neutron stars.
What is the timescale after which the electrons are absorbed to turn the protons into neutrons? Well, it's a process mediated by the weak nuclear interaction - like beta-decays. Recall that the lifetime of the neutron is 15 minutes but it is anomalously long a time because of some kinematical accidents. The normal objects of the same size - such as the extremely squeezed cigar-shaped atoms - would decay more quickly (into neutron and neutrinos, in this case). On the other hand, the electrons in the cigar-shaped atoms occupy a bigger region than the quarks in the neutron. But this can only add at most 4 orders of magnitude. To summarize, I think that within days or months, if not more quickly, the electrons would get swallowed to create neutrons.
All the best
Lubos | {
"domain": "physics.stackexchange",
"id": 251,
"tags": "nuclear-physics, solid-state-physics, atoms, neutron-stars, order-of-magnitude"
} |
If the metric tensor is unitless, why do its perturbations pick up units of Newton's constant? | Question: If the metric tensor is unitless, why do its perturbation terms pick up units of Newton's constant?
In the following expansion, metric perturbations pick up a factor of $\kappa\propto\sqrt{G}$
\begin{equation}
g_{\mu\nu}=\eta_{\mu\nu}+\kappa h_{\mu\nu}+ \kappa^{2}h_{\mu\lambda}h^{\lambda}_{\nu}+\cdots
\end{equation}
For example, in this paper on pg.5. Also in this paper by 't Hooft, for the expansion on pg.2 and defined on pg.3.
What is the explicit origin of $\kappa$ and why does it have dimensions while the metric $g_{\mu\nu}$ doesn't?
Answer: You are right that $g_{\mu \nu}$ is dimensionless. But this is not true for $h_{\mu \nu}$: it has dimension $1$ because it was canonically normalized so that the kinetic term in the perturbative expansion of the Einstein-Hilbert action has no coupling (just what we expect in a standard QFT). And since $[\kappa] = -1$, each term in the expansion of $g_{\mu \nu}$ is dimensionless. See my answer here for a brief explanation of how this works: https://physics.stackexchange.com/a/467869/133418.
Canonical normalization is a standard redefinition that one usually makes in a QFT when, for some reason, the Lagrangian does not have the kinetic and interaction terms in their usual forms. The redefinition then puts them in the usual form we know in which the perturbative methods for computing scattering amplitudes have been developed. In principle, the physics is the same whether you perform a redefinition or not, but it's just more convenient and practical to use the existing methods and apply them on similar-looking Lagrangians. | {
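As a sketch of the dimensional bookkeeping behind this (natural units $\hbar = c = 1$, mass dimensions; the numerical factor in $\kappa^2 \propto G$ is convention-dependent, e.g. $\kappa^2 = 32\pi G$): the Einstein-Hilbert action $S \sim \frac{1}{16\pi G}\int d^4x \sqrt{-g}\, R$ is dimensionless, with

$$
[d^4x] = -4, \quad [R] = 2 \;\Rightarrow\; [G] = -2, \quad [\kappa] = -1,
$$

while the canonically normalized kinetic term $\int d^4x\, (\partial h)^2$ requires $-4 + 2(1 + [h_{\mu\nu}]) = 0$, i.e. $[h_{\mu\nu}] = 1$. Hence each term $\kappa^n h^n$ in the expansion of $g_{\mu\nu}$ carries dimension $-n + n = 0$, as it must.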
"domain": "physics.stackexchange",
"id": 96238,
"tags": "general-relativity, metric-tensor, field-theory, dimensional-analysis, perturbation-theory"
} |
Generic method for HTTP GET/POST parameter conversion | Question: I just made a method that should handle GET and POST parameters (primitive types) sent by AJAX requests to the server:
protected T GetParam<T>(string key)
{
string value = HttpContext.Current.Request[key];
if (string.IsNullOrEmpty(value))
return default(T);
return (T)Convert.ChangeType(value, typeof(T));
}
Do you have any suggestions regarding improvements?
Answer: First point, small but important: give your method a proper name. GetParam doesn't mean much and doesn't tell what it returns. Take a name like GetValueFromParameter or GetHttpRequestValue.
Secondly, did you test this method properly? I changed the code a bit to be able to test it, I removed the HttpContext value and just use the parameter as value:
protected T GetHttpRequestValue<T>(string value)
{
if (string.IsNullOrEmpty(value))
return default(T);
return (T)Convert.ChangeType(value, typeof(T));
}
When I call it like this:
var longFromString = GetHttpRequestValue<long?>("0");
I get following exception:
Invalid cast from 'System.String' to 'System.Nullable`1[[System.Int64, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]]'.
I went looking for a cause and it seems that Convert.ChangeType has problems with nullable types. You have to use the underlying type of the nullable in order to make it work:
protected T GetHttpRequestValue<T>(string key)
{
string value = HttpContext.Current.Request[key];
if (string.IsNullOrEmpty(value))
return default(T);
var type = Nullable.GetUnderlyingType(typeof(T)) ?? typeof(T);
var convertedValue = (value == null) ? null : Convert.ChangeType(value, type);
return (T)convertedValue ;
}
Now I get following results:
GetHttpRequestValue<long?>("0"); // 0
GetHttpRequestValue<long?>(""); // null
GetHttpRequestValue<long>("1"); // 1
GetHttpRequestValue<bool>("true"); // True
GetHttpRequestValue<bool>(""); // False
GetHttpRequestValue<long?>(null); //null
GetHttpRequestValue<bool>(null); //False
You also made your method dependent on an HttpContext, so you cannot reuse it elsewhere. Better would be to rename it to something like ConvertTo and leave out the HttpContext line:
protected T ConvertTo<T>(string value)
{
if (string.IsNullOrEmpty(value))
return default(T);
var type = Nullable.GetUnderlyingType(typeof(T)) ?? typeof(T);
var convertedValue = (value == null) ? null : Convert.ChangeType(value, type);
return (T)convertedValue;
}
Example usage:
protected void Page_Load(object sender,EventArgs e)
{
var value = HttpContext.Current.Request["YourKey"];
var converted = ConvertTo<long?>(value);
}
Your code is still clean and the method is reusable. Notice that I use var instead of explicitly declaring variables. This is not mandatory, I just prefer to use it. Hope this helps!
Update:
I updated the method to handle a fail on the conversion, in case you want to change qwerty to a bool for example. :)
protected T ConvertTo<T>(string value)
{
if (String.IsNullOrEmpty(value))
return default(T);
var type = Nullable.GetUnderlyingType(typeof(T)) ?? typeof(T);
T convertedValue = default(T);
try
{
convertedValue = (T)Convert.ChangeType(value, type);
}
catch {}
return convertedValue ;
}
This way of working swallows the exception. In case one occurs the default of T will be returned, otherwise the converted value. Results:
ConvertTo<int>("qwerty"); //0
ConvertTo<bool>("qwerty"); //False
Update:
As stated in the comments below, you can also take the code and throw it in an extension method. Also nice for reusability, and your code will also look a bit cleaner. Here goes:
public static class Extensions
{
public static T ConvertTo<T>(this string value)
{
if (String.IsNullOrEmpty(value))
return default(T);
var type = Nullable.GetUnderlyingType(typeof(T)) ?? typeof(T);
T convertedValue = default(T);
try
{
convertedValue = (T)Convert.ChangeType(value, type);
}
catch {}
return convertedValue ;
}
}
Usage:
var intFromString = "2".ConvertTo<int>();
//intFromString is an integer with value 2 | {
"domain": "codereview.stackexchange",
"id": 10903,
"tags": "c#, asp.net, ajax"
} |
What activation function does the human brain use? | Question: Does the human brain use a specific activation function?
I've tried doing some research, and as it's a threshold for whether the signal is sent through a neuron or not, it sounds a lot like ReLU. However, I can't find a single article confirming this. Or is it more like a step function (it sends 1 if it's above the threshold, instead of the input value)?
Answer: The thing you were reading about is known as the action potential. It is a mechanism that governs how information flows within a neuron.
It works like this: Neurons have an electrical potential, which is a voltage difference inside and outside the cell. They also have a default resting potential, and an activation potential. The neuron tends to move towards the resting potential if it is left alone, but incoming electric activations from dendrites can shift its electric potential.
If the neuron reaches a certain threshold in electric potential (the activation potential), the entire neuron and its connecting axons go through a chain reaction of ionic exchange inside/outside the cell that results in a "wave of propagation" through the axon.
TL;DR: Once a neuron reaches a certain activation potential, it electrically discharges. But if the electric potential of the neuron doesn't reach that value then the neuron does not activate.
Does the human brain use a specific activation function?
IIRC neurons in different parts of the brain behave a bit differently, and the way this question is phrased sounds as if you are asking if there is a specific implementation of neuronal activation (as opposed to us modelling it).
But in general they behave relatively similarly to each other (neurons communicate with each other via neurochemicals; information propagates inside a neuron via a mechanism known as the action potential...). But the details, and the differences they cause, could be significant.
There are various biological neuron models, but the Hodgkin-Huxley Model is the most notable.
Also note that a general description of neurons doesn't give you a general description of neuronal dynamics a la cognition (understanding a tree doesn't give you complete understanding of a forest)
But, the method by which information propagates inside a neuron is in general quite well understood as sodium / potassium ionic exchange.
It (activation potential) sounds a lot like ReLU...
It's only like ReLU in the sense that both require a threshold before anything happens. But ReLU can have variable output while neurons are all-or-nothing.
Also ReLU (and other activation functions in general) are differentiable with respect to input space. This is very important for backprop.
This is a ReLU function, with the X-axis being input value and Y-axis being output value.
And this is the action potential with the X-axis being time, and Y being output value. | {
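To make that contrast concrete, here is a small Python sketch (my own illustration, not from the original answer; the -55 mV threshold is just a typical textbook figure, used only for illustration):

```python
def relu(x):
    """Variable output above the threshold: output grows with input."""
    return max(0.0, x)

def spike(potential, threshold=-55.0):
    """All-or-nothing: fires (1) once the membrane potential crosses
    the activation threshold, regardless of how far past it goes."""
    return 1 if potential >= threshold else 0

print(relu(0.3), relu(2.0))        # 0.3 2.0 -> output scales with the input
print(spike(-70.0), spike(-50.0))  # 0 1     -> binary, not graded
```

The other difference mentioned above also shows here: `relu` is differentiable (almost everywhere) with respect to its input, while the step has zero gradient everywhere it is defined, which is why it is unusable for backprop.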
"domain": "ai.stackexchange",
"id": 1212,
"tags": "activation-functions, neuroscience, brain"
} |
Time Complexity: Intuition for Recursive Algorithm | Question: I decide to learn more about dynamic programming, so I started reading the Dynamic Programming chapter from the CLSR book.
The first example problem presented there is Rod Cutting (15.1). Given a rod of length n and a list of prices for rods of any sizes figure out how to cut the rod so that the price of the pieces will be maximized (and one can only cut at even positions).
The first recursive algorithm presented there is the following
CutRod(p, n)
if n == 0
return 0
q = -inf
for i = 1 to n
q = max(q, p[i] + CutRod(p, n -1))
return q
n is the size of the rod and p an array that contains the prices.
I understand the algorithm, the problem I have is that I thought intuitively the time complexity of such an algorithm would be O(b^d) (where b is the branching factor and d the depth of the recursion tree) which would be O(n^n).
In the book the recurrence relation is presented: T(0) = 1 and T(n) = 1 + sum(j=0, n-1, T(j)). Then it is explained that the complexity following from this is O(2^n), which can easily be seen by expanding the recurrence relation.
How can I quickly see that my initial intuition was wrong? And in general, when looking at a recursive algorithm, how can I figure out whether the time complexity is O(b^d) or not?
Answer: Based on the code you show there, your intuition is right.
However, it looks like there is a typo in the code and the next-to-last line should have been
q = max(q, p[i] + CutRod(p, n-i))
i.e., n-i rather than n-1. Try to work through a proof of correctness, or through a few examples, to see why I say that. The running time analysis they show is for the corrected code, rather than the code with the typo, and then once you make that correction to the code, the recurrence relation they provide is correct. | {
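For reference, a runnable Python version of the corrected recursion (my own sketch; the price table is the standard CLRS example for rod lengths 1 through 10):

```python
import math

def cut_rod(p, n):
    """Corrected naive recursion: best revenue for a rod of length n.
    p[i] is the price of a piece of length i (p[0] is unused)."""
    if n == 0:
        return 0
    q = -math.inf
    for i in range(1, n + 1):
        q = max(q, p[i] + cut_rod(p, n - i))   # n - i, not n - 1
    return q

# Standard CLRS price table for lengths 1..10
p = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
print(cut_rod(p, 4))  # 10 (two pieces of length 2: 5 + 5)
```

The call tree of this corrected version is what the book's recurrence T(n) = 1 + sum over T(j) counts, giving the O(2^n) running time; memoizing the results of `cut_rod` is exactly the dynamic-programming improvement the chapter builds toward.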
"domain": "cs.stackexchange",
"id": 11565,
"tags": "time-complexity, dynamic-programming, recurrence-relation"
} |
Oscillations - Mass Change on Simple Pendulum | Question: The problem that I am thinking of is phrased as follows:
A person on a swing is holding a sandbag and is moving with some initial velocity $v_0$ at the bottom of the swing of length $l$. The person, who weighs $m$, drops the sand bag, which weighs $\epsilon$ at the bottom of the swing. What happens to the amplitude and frequency of the system?
I know that frequency in this case is proportional to $\omega = \sqrt{\frac{g}{l}}$, since the SHO equation for simple pendulums is:
$$\frac{\partial^2 \alpha(t)}{\partial t^2} + \frac{g}{l}\alpha(t) = 0$$
Since frequency is independent of mass, we have that the frequency does not change.
However, I wrongly suspected that the amplitude decreases, since the mass of the system is decreased ($m+\epsilon$ to $m$) and thus the kinetic energy of the system is decreased, leading to a lower maximal amplitude. What's wrong with my reasoning here?
Also, I am curious about what happens if one were to drop the sand bag at the max amplitude; would this make a difference in our solution?
Answer: Both the kinetic energy $\frac 12mv^2$ and gravitational potential energy $mgh$ are proportional to the mass $m$ so changing the mass will change each form of energy in the same ratio.
Another way of looking at the arrangement is to consider two separate pendulums of the same length and amplitude but of differing masses.
They will have the same period.
It so happens that your arrangement starts off with both the masses (person and sandbag) joined together and then one of the masses is ditched. | {
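A quick numerical check of the answer's point (my own sketch; the masses 60 kg and 5 kg are made-up illustration values): equating kinetic energy at the bottom to potential energy at the top gives a rise height $h = v_0^2/2g$, in which the mass cancels, so the amplitude is unchanged when the sandbag is dropped at the bottom.

```python
g = 9.81   # m/s^2
v0 = 2.0   # speed at the bottom of the swing, m/s

def max_height(m, v0):
    """(1/2) m v0^2 = m g h  =>  h = v0^2 / (2 g); the mass cancels."""
    kinetic = 0.5 * m * v0**2
    return kinetic / (m * g)

h_with_bag = max_height(m=60.0 + 5.0, v0=v0)  # person + sandbag
h_without  = max_height(m=60.0, v0=v0)        # sandbag dropped at the bottom
print(h_with_bag, h_without)                  # identical heights either way
```

Dropping the bag at maximum amplitude instead also changes nothing for the person: the bag is momentarily at rest there, so the person's energy per unit mass, and hence the turning height, is the same.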
"domain": "physics.stackexchange",
"id": 56749,
"tags": "newtonian-mechanics, harmonic-oscillator, oscillators"
} |
error during pcl_ros | Question:
Hi ,
I downloaded pcl and perception_pcl from pointclouds.org and did rosdep without any errors..
when I rosmake pcl_ros, its resulting the errors below.
mkdir -p bin
cd build && cmake -Wdev -DCMAKE_TOOLCHAIN_FILE=`rospack find rosbuild/rostoolchain.cmake` ..
[rosbuild] Building package pcl_ros
[rosbuild] Cached build flags older than manifests; calling rospack to get flags
Failed to invoke /opt/ros/fuerte/bin/rospack cflags-only-I;--deps-only pcl_ros
CMake Error at /usr/lib/vtk-5.8/VTKTargets.cmake:16 (ADD_EXECUTABLE):
Command add_executable() is not scriptable
Call Stack (most recent call first):
/usr/lib/vtk-5.8/VTKConfig.cmake:231 (INCLUDE)
/usr/share/cmake-2.8/Modules/FindVTK.cmake:73 (FIND_PACKAGE)
/home/sai/fuerte_workspace/pcl17/pcl/vtk_include.cmake:1 (find_package)
CMake Error at /opt/ros/fuerte/share/ros/core/rosbuild/public.cmake:129 (message):
Failed to invoke rospack to get compile flags for package 'pcl_ros'. Look
above for errors from rospack itself. Aborting. Please fix the broken
dependency!
Call Stack (most recent call first):
/opt/ros/fuerte/share/ros/core/rosbuild/public.cmake:227 (rosbuild_invoke_rospack)
CMakeLists.txt:4 (rosbuild_init)
-- Configuring incomplete, errors occurred!
Thanks
Originally posted by sai on ROS Answers with karma: 1935 on 2013-02-18
Post score: 0
Answer:
You should use the one provided with ROS, type in a terminal :
sudo apt-get install ros-fuerte-pcl* ros-fuerte-perception*
Unless you wanted to use some unstable ones?
Best regards,
Steph
PS: And next time, don't forget to specify your distribution (Ubuntu xx.xx), your platform, and which ROS version you use (I see from the error message you are using fuerte)
Originally posted by Stephane.M with karma: 1304 on 2013-02-18
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by sai on 2013-02-19:
ubuntu 12.04 and fuerte. The registration library which I wanted to use is not up to date in the ros-fuerte repository, so I downloaded the trunk from the PCL website, which has ROS wrappers, but it is resulting in the above errors given in the question
Comment by Stephane.M on 2013-02-19:
OK... Yes, I tried to compiled pcl trunk also but a lot of things doesn't compile in it, thus needing to flag only the part you want to compile, and leave the rest out. You can go to PCL support forum, in which you may find more help (suscribe at http://pointclouds.org/mailman/listinfo/pcl-users) | {
"domain": "robotics.stackexchange",
"id": 12949,
"tags": "pcl, rosmake, pcl-ros"
} |
Could there be dark matter black holes? | Question: Could dark matter compress and form black holes? Since dark matter is even more abundant than normal matter, a dark matter black hole should not be rare...right?
Answer: The problem with trying to form a black hole with dark matter is that dark matter can only weakly interact (if at all) with normal matter and itself, other than by gravity.
This poses a problem. To get dark matter concentrated enough to form a black hole requires it to increase its (negative) gravitational binding energy without at the same time increasing its internal kinetic energy by the same amount. This requires some sort of dissipative interaction between dark matter and normal matter (or itself).
The following scenario should make this clear. Suppose we have a lump of dark matter that gravitationally attracts another lump of dark matter. As the two approach each other, they accelerate and gain kinetic energy. The kinetic energy gained will be exactly enough to separate them again to roughly their original separation, unless some dissipative process takes place.
An example is to suppose that dark matter is weakly interacting massive particles (WIMPs). WIMPs are gravitationally drawn towards the centres of stars. If the weak interactions happen sufficiently frequently then it might be possible for them to accumulate in stars, rather than shoot through and out the other side.
It has been hypothesised that black holes could be made like this near the centre of a Galaxy, seeded by dense neutron stars. The density of neutron star matter, combined with the enhanced density of dark matter near galaxy centres could result in dark matter accumulation in the neutron stars, leading to the formation of black holes.
Once a black hole is formed then any dark matter that enters the event horizon cannot emerge regardless of what kinetic energy it gains in the process. However, there is still a problem. Material in orbit around a black hole has less angular momentum the closer it orbits. To pass inside the event horizon requires the dark matter to lose angular momentum. Normal matter does this via an accretion disc that can transport angular momentum outwards by viscous torques, allowing matter to accrete. Dark matter has almost zero viscosity so this can't happen.
So building a supermassive black hole from a smaller seed would be difficult, but forming small black holes out of neutron stars might be easier. It has been proposed that a relative lack of pulsars observed towards our own Galactic centre could be due to this process. | {
"domain": "astronomy.stackexchange",
"id": 1114,
"tags": "gravity, black-hole, dark-matter, supermassive-black-hole, matter"
} |
How to prepare a specific initial state of three qubits? | Question: I would like to prepare the following initial state for variational quantum algorithms:
$$
\sin\theta_1 \sin\theta_2 \sin\theta_3 |000\rangle + \sin\theta_1 \sin\theta_2 \cos\theta_3 |001\rangle + \sin\theta_1 \cos\theta_2 |010\rangle + \cos\theta_1 |100 \rangle.
$$
Should I make a circuit for this state from scratch?
Or is there any library to find a circuit to make this state such as Cirq or Qiskit?
Answer: If you call initialize in this case, you will be specifying a general state in $\mathbb{C}^8$. However what you have is more specialized. For example only having 4 nonzero amplitudes. So the call to initialize won't know this a priori. So it won't realize the initialization circuit can be decomposed easily. Or at least it will need to do some extra simplification steps before realizing this.
I'm going to swap $\sin$ and $\cos$ relative to your state. You can fix this by redefining the angles: $\theta \to \frac{\pi}{2}-\theta$.
$$
| \psi_1 \rangle = (\mathrm{R}(\theta_1) \otimes I_4) | 0 0 0 \rangle = \cos \theta_1 | 0 0 0 \rangle + \sin \theta_1 | 1 0 0 \rangle\\
| \psi_2 \rangle = (\mathrm{CR}(0,\theta_2) \otimes I_2) | \psi_1 \rangle = \cos \theta_1 \cos \theta_2 | 0 0 0 \rangle + \cos \theta_1 \sin \theta_2 | 0 1 0 \rangle + \sin \theta_1 | 1 0 0 \rangle\\
| \psi \rangle = \mathrm{CCR}(00,\theta_3) | \psi_2 \rangle
$$
where $\mathrm{R}(\theta)$ is to indicate a 2 by 2 rotation matrix.
$\mathrm{CR}(0,\theta)$ is to indicate controlled $\mathrm{R}(\theta)$ on the second index but controlled on 0 instead of 1 on the first.
$\mathrm{CCR}(00,\theta)$ is to indicate controlled $\mathrm{R}(\theta)$ on the third index but controlled on 00 instead of 11 on the first two.
You should be able to fix the angles and get the controls back to normal from here. | {
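A numpy sketch (my own verification, not a Qiskit/Cirq circuit) checking that the three rotations above produce the claimed amplitudes. Qubit 1 is the most significant bit, $P_0 = |0\rangle\langle 0|$ builds the 0-controlled gates, and the angles are arbitrary test values:

```python
import numpy as np

def R(t):  # 2x2 rotation: R(t)|0> = cos(t)|0> + sin(t)|1>
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

I2 = np.eye(2)
P0 = np.diag([1.0, 0.0])                 # projector |0><0|

t1, t2, t3 = 0.3, 0.7, 1.1
U1 = np.kron(R(t1), np.eye(4))           # R(t1) on qubit 1
U2 = (np.kron(P0, np.kron(R(t2), I2))    # R(t2) on qubit 2, if qubit 1 is 0
      + np.kron(I2 - P0, np.eye(4)))     # identity otherwise
P00 = np.kron(P0, P0)
U3 = (np.kron(P00, R(t3))                # R(t3) on qubit 3, if qubits 1,2 are 00
      + np.kron(np.eye(4) - P00, I2))

psi = U3 @ U2 @ U1 @ np.eye(8)[:, 0]     # start from |000>

c, s = np.cos, np.sin
expected = np.zeros(8)
expected[0b000] = c(t1) * c(t2) * c(t3)
expected[0b001] = c(t1) * c(t2) * s(t3)
expected[0b010] = c(t1) * s(t2)
expected[0b100] = s(t1)
assert np.allclose(psi, expected)
```

This matches the target state with $\sin$ and $\cos$ swapped, exactly as the answer notes; applying $\theta \to \frac{\pi}{2}-\theta$ recovers the original form.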
"domain": "quantumcomputing.stackexchange",
"id": 1009,
"tags": "programming, qiskit, cirq"
} |
Can large scientific telescopes observe the Moon without being damaged? | Question: When I look at the full Moon through my 10cm telescope, it is so bright that it hurts. Can large scientific telescopes observe the moon at all? Does that require special protection equipment? Or dedicated telescopes (or none at all)?
In particular, the E-ELT (European Extremely Large Telescope) will have a mirror with a 39m diameter. If it were pointed at the full moon, would that damage the science instruments? Would that generate a significant temperature at the focal point?
Answer: No chance of damage (the sheer size of the telescope makes little difference!), but these cameras cannot shoot at a shutter speed of 1/1000 second, so the lit part of the Moon is out of reach due to overexposure.
Can large scientific telescopes observe the moon...
The bright part will probably be too bright to easily image with a deep field camera because it's designed for integration times of seconds to minutes.
The hardware won't be able to provide a 1/1000 second exposure, so only objects in shadows (or the unlit side of the Moon) have a chance of being exposed.
When I look at the full moon through my 10cm telescope, it is so bright that it hurts
Because of conservation of etendue (see below) the Moon has the same surface brightness when seen through any telescope or binocular. It's just that it's bigger and so is spread over a larger area of your retina.
It's just like looking at 100 full Moons in the sky, but each Moon is no brighter than the one we see now.
Put in less than precise but simple wording, magnification increases the size, but not the apparent brightness per unit area of extended objects.
...without being damaged?
There's no chance of damage.
This answer to Can a telescope ever increase the apparent luminance of an extended object? says No and explains that this is the result of conservation of etendue
In big telescopes, the focal planes are also pretty huge.
(units: mm) aperture focal length f/no.
Human eye 6 17 2.8
Vera C. Rubin telescope 8,360 10,310 1.23
So per square micron, the image of the moon will be $(2.8/1.23)^2 \approx 5$ times brighter on the worst case1 telescope's focal plane than on our retina (seen through a telescope or by eye), that's not going to hurt the silicon.
After all we often take outdoor photos with the Sun in the field of view and that doesn't even melt the polymer coatings and color filters on top of the CCD!
1lowest f/no. big telescope so brightest per unit area on the sensor.
Source
Suzanne Jacoby with the LSST focal plane array scale model. The array's diameter is 64 cm. This mosaic will provide over 3 gigapixels per image. The image of the moon (30 arcminutes) is present to show the scale of the field of view. | {
"domain": "astronomy.stackexchange",
"id": 5858,
"tags": "the-moon, telescope, e-elt"
} |
Validating ISBN-10s, with or without dashes in C++11 | Question: Description
A regex-based method of validating ISBN-10 as strings. Can be digits (or 'X' at the end) only, or with dashes, according to those of English-speaking publications (which are defined by the regular expressions contained in the code below).
The last digit, a check digit (unless it is an 'X'), is calculated in the following manner. Multiply each digit, starting from the leftmost digit, by a weight, and then sum the results. The check digit should be such that this sum is divisible by 11. The weight starts at 1, and increases by 1 for each digit. For example, consider the ISBN 0-306-40615-2. The sum is calculated as follows:
sum = 0*1 + 3*2 + 0*3 + 6*4 + 4*5 + 0*6 + 6*7 + 1*8 + 5*9 + 2*10 = 165
165 mod 11 = 0 // the check digit, 2, is valid.
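That weighted sum can be sketched as follows (Python rather than the post's C++, just for brevity; digits-only strings, with the trailing-'X' case handled separately as in the code below):

```python
def isbn10_checksum_valid(isbn10):
    """isbn10: 10 characters, digits only (no dashes, no trailing 'X').
    Weight the digits 1..10 from the left; valid iff the sum is divisible by 11."""
    total = sum((i + 1) * int(d) for i, d in enumerate(isbn10))
    return total % 11 == 0

print(isbn10_checksum_valid("0306406152"))  # True  (the worked example above)
print(isbn10_checksum_valid("0306406151"))  # False (wrong check digit)
```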
I am a .NET developer dabbling in C++ for fun. With this exercise, I have tried to hide certain things from anything outside the namespace, by creating an anonymous namespace inside it. Essentially, I have tried to achieve what the private keyword does in C#, in C++ semantics. I am curious if I have done anything "un C++ like" in this code. I envisage this code could be expanded to have a function that validates ISBN-13 numbers.
Code
#include <iostream>
#include <string>
#include <vector>
#include <regex>
#include <algorithm>
namespace isbn_validation
{
namespace // anonymous namespace
{
// regex expressions
// dashless
const std::regex isbn10_no_dashes(R"((\d{9})[\d|\X])");
// with dashes
const std::regex isbn10_dashes1(R"((\d{1})\-(\d{5})\-(\d{3})\-[\d|\X])");
const std::regex isbn10_dashes2(R"((\d{1})\-(\d{3})\-(\d{5})\-[\d|\X])");
const std::regex isbn10_dashes3(R"((\d{1})\-(\d{4})\-(\d{4})\-[\d|\X])");
const std::regex isbn10_dashes4(R"((\d{1})\-(\d{5})\-(\d{3})\-[\d|\X])");
const std::regex isbn10_dashes5(R"((\d{2})\-(\d{5})\-(\d{2})\-[\d|\X])");
const std::regex isbn10_dashes6(R"((\d{1})\-(\d{6})\-(\d{2})\-[\d|\X])");
const std::regex isbn10_dashes7(R"((\d{1})\-(\d{7})\-(\d{1})\-[\d|\X])");
bool isbn10_check_digit_valid(std::string isbn10)
{
auto valid = false;
// split it
std::vector<char> split(isbn10.begin(), isbn10.end());
// if the very last character is an 'X', don't bother with it
if (split[9] == 'X')
{
return true;
}
// all digits
// validate the last digit (check digit)
int digit_sum = 0;
int digit_index = 1;
for (std::vector<char>::iterator it = split.begin(); it != split.end(); ++it)
{
digit_sum = digit_sum + ((*it - '0')*digit_index);
digit_index++;
}
valid = !(digit_sum%11);
return valid;
}
}
bool valid_isbn10(std::string isbn)
{
// can take ISBN-10, with or without dashes
auto valid = false;
// check if it is a valid ISBN-10 without dashes
if (std::regex_match(isbn, isbn10_no_dashes))
{
// validate the check digit
valid = isbn10_check_digit_valid(isbn);
}
// check if it is a valid ISBN-10 with dashes
if (std::regex_match(isbn, isbn10_dashes1) || std::regex_match(isbn, isbn10_dashes2) || std::regex_match(isbn, isbn10_dashes3) ||
std::regex_match(isbn, isbn10_dashes4) || std::regex_match(isbn, isbn10_dashes5) || std::regex_match(isbn, isbn10_dashes6) || std::regex_match(isbn, isbn10_dashes7))
{
// remove the dashes
isbn.erase(std::remove(isbn.begin(), isbn.end(), '-'), isbn.end());
// validate the check digit
valid = isbn10_check_digit_valid(isbn);
}
return valid;
}
}
Answer: When you're comparing against 8 different patterns, and then simply removing the - characters for 7 of those validations, why not just remove the - initially and then validate against the single remaining pattern?
Another thing to note: at the end of the patterns you have a character set, [\d|\X]. This actually will match one of:
a digit
literal |
literal X characters (you don't need \X though).
This should instead be:
\d{9}[\dX]
A general outline of how the code should work:
Remove all - from given string
Check if length of string is exactly 10
Validate against the pattern rewritten above
Check if last character is X
Validate individual digit sum and divisibility. | {
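A minimal sketch of that outline (the function name is mine). One deliberate change from the original code: a trailing 'X' is treated as the value 10, per the standard ISBN-10 rule, instead of the check being skipped entirely:

```cpp
#include <algorithm>
#include <cassert>
#include <regex>
#include <string>

// Sketch of the outline above: strip dashes first, then one pattern suffices.
bool valid_isbn10_sketch(std::string isbn)
{
    // 1. Remove all '-' from the given string.
    isbn.erase(std::remove(isbn.begin(), isbn.end(), '-'), isbn.end());

    // 2./3. Nine digits followed by a digit or 'X'; regex_match anchors the
    // whole string, so this also enforces a length of exactly 10.
    static const std::regex pattern(R"(\d{9}[\dX])");
    if (!std::regex_match(isbn, pattern))
    {
        return false;
    }

    // 4./5. Weighted digit sum (weights 1..10) must be divisible by 11;
    // 'X' can only appear in the last position and counts as 10.
    int sum = 0;
    for (int i = 0; i < 10; ++i)
    {
        const int digit = (isbn[i] == 'X') ? 10 : isbn[i] - '0';
        sum += digit * (i + 1);
    }
    return sum % 11 == 0;
}
```

This collapses the eight regexes into one and fixes the `[\d|\X]` character-set bug at the same time.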
"domain": "codereview.stackexchange",
"id": 39813,
"tags": "c++, c++11, regex"
} |
change publish rate of a topic | Question:
Is there any way to change the rate at which messages are published?
Edit: To update with more information. I would like to know all the ways to throttle a node using the ROS API, rather than ad-hoc programming practices. E.g. nothing that does (if loop_count % 5) -> Publish(msg)
How to throttle the topic via roslaunch
via command line ROS commands
via ROS api
Thanks!
Originally posted by DevonW on ROS Answers with karma: 644 on 2014-10-07
Post score: 3
Answer:
Sure there is, but it depends on what you really want to do: is this in your own node (read: own code), are you trying to throttle an existing node, or something else?
Please update your question with some more information.
Edit:
1 . How to throttle the topic via roslaunch
Your statement is a bit ambiguous (what should roslaunch do in your opinion?), but if starting another node is acceptable, then I think the throttle or drop nodes from topic_tools should work for you.
2 . via command line ROS commands
Afaik, no such thing exists. That would probably have to rely on a built-in throttling capability, which doesn't exist at the moment.
3 . via ROS api
Personally, I always think of topics & services as the ROS API, but I think you're referring to the functionality exposed by the C++/Python/X client libraries. See my comment on your 2nd bullet. There is no direct support for expressing don't-publish-this-at-more-than-X-hz right now.
[..] E.g nothing that does (if loop_count % 5) -> Publish(msg)
Do these options fall into that category?
use a ros::Rate with an appropriate period, see C++/Time - Sleeping and Rates. This obviously only works if your node is a source, or if you can somehow coalesce all messages received during r.sleep(), and base your own publications on that coalesced state.
use ros::Timer with an appropriate period, see C++/Timers. You'll have to deal with similar issues as with ros::Rate though.
use if ((now() - previous_) > desired_): admittedly primitive, but at least time-based (in contrast to your counting example) and the ROS C++ API supports it easily.
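A minimal, ROS-free sketch of that last time-based option (plain Python; in a real node the clock would come from rospy or ros::Time, and the class name here is mine):

```python
import time


class Throttle:
    """Publish gate: fires at most once per period, based on elapsed time."""

    def __init__(self, rate_hz, clock=time.monotonic):
        self.period = 1.0 / rate_hz
        self.clock = clock                 # injectable, e.g. for testing
        self.previous = float("-inf")      # so the first call always fires

    def ready(self):
        """Return True (and reset the timer) if a publish is due."""
        now = self.clock()
        if now - self.previous >= self.period:
            self.previous = now
            return True
        return False
```

In a subscriber callback you would then guard the publish with `if throttle.ready(): pub.publish(msg)` — time-based, in contrast to the counting example above.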
Finally, throttling / rate limiting / any kind of QoS will be much easier to achieve in ROS2.0: one of the fundamental properties of DDS middleware is their support for QoS policies, and any system built on top of such a middleware should be able to exploit that.
PS: this has been asked before, see (for instance):
Creating a throttle node
Throttle message rate for subscribers
Originally posted by gvdhoorn with karma: 86574 on 2014-10-07
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 19663,
"tags": "ros, rate, topic, publisher"
} |
How neutrons interact if not through an electromagnetic interaction? | Question: According to the question Do positively charged particles exchange photons? there was an answer
Yes. Photons are the carriers for the electromagnetic force, regardless of the charges involved.
followed by a comment
Unless of course the charge is zero ;)
Now I'm getting curious how neutrons get scattered of each other?
Answer: In condensed matter research, neutron scattering experiments are very useful to study the magnetic structure of materials.
Neutrons do indeed possess a magnetic moment, and thus interact with the local magnetic field. Electrons have a much larger magnetic moment, yet neutrons are used precisely because they have zero charge and do not interact electrically, but purely magnetically.
In the end it is still an EM interaction though. Microscopically, it is one of the charged quarks that interacts with the EM field.
At higher energies (read: particle accelerators), neutrons can also interact via the strong nuclear force.
"domain": "physics.stackexchange",
"id": 34881,
"tags": "nuclear-physics, interactions, neutrons, magnetic-moment, pauli-exclusion-principle"
} |
Pricing decisions using neural network | Question: I have a big list of spare parts with several parameters (material, weight, size, manufacturing complexity, ...). For some parts in this list, a price has either not been set or has to be adjusted in order to be in line with other parts. There are a few obvious and simple correlations in this dataset, for example:
if material and complexity is the same, bigger parts are more expensive;
if size and material is the same, more complex parts are more expensive;
for equal size and complexity, more expensive material leads to higher price.
Trying to figure out all these rules by hand and sticking them together seems to be an endless endeavour, so I thought about training a neural network with the priced parts' parameters (input) and prices (output) and let it figure out the prices for the parts which don't have a price yet. The decisions of the NN could be supervised by an expert who knows the parts and could manually figure out a price.
Question 1) Is this a good idea in general?
Question 2) If yes, what type of NN would be most suited for such a problem?
Answer: Question 1) Is this a good idea in general?
Solving this problem as a supervised learning regression problem is a fantastic idea and is the type of solution that will greatly benefit your company since it will translate to other similar and dissimilar problems much more easily than deterministic methods.
However using a neural network to solve this supervised learning regression problem is probably a very bad idea. Neural networks can add great value to certain problems, are among the very best algorithms for complex problems where computing power is not an issue, and have fascinating implications for the future of machine learning and artificial intelligence. But... they can be very tricky to train, require a lot of computing power, require a lot of data, and perform poorly in many cases.
I suggest you scope the problem using linear regression and then try a support vector regressor (SVM/SVR) or naive Bayes regressor. The analogous kernel methods in SVR work very well with limited data and provide surprisingly accurate results.
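A tiny sketch of the "scope it with linear regression first" step, using only the standard library so the mechanics are visible (with real data you would reach for scikit-learn's LinearRegression or SVR instead of hand-rolled gradient descent). All part numbers below are invented for illustration:

```python
# Invented parts dataset: (size, complexity, price).
parts = [
    (1.0, 1.0, 40.0),
    (2.0, 1.0, 60.0),
    (1.0, 2.0, 55.0),
    (3.0, 2.0, 95.0),
    (2.0, 3.0, 90.0),
]


def fit(data, lr=0.02, epochs=20000):
    """Fit price ~ w_size*size + w_cplx*complexity + bias by gradient descent."""
    w_size = w_cplx = bias = 0.0
    n = len(data)
    for _ in range(epochs):
        g1 = g2 = gb = 0.0
        for size, cplx, price in data:
            err = (w_size * size + w_cplx * cplx + bias) - price
            g1 += err * size
            g2 += err * cplx
            gb += err
        w_size -= lr * g1 / n
        w_cplx -= lr * g2 / n
        bias -= lr * gb / n
    return w_size, w_cplx, bias


def predict(model, size, cplx):
    w_size, w_cplx, bias = model
    return w_size * size + w_cplx * cplx + bias
```

A baseline like this, spot-checked by the pricing expert, tells you how much a fancier model actually buys you.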
Question 2) If yes, what type of NN would be most suited for such a problem?
If you must use a neural network then try playing with the problem. Start with a feed-forward neural network. Note that it will likely underperform an SVR. Then think about moving toward a convolutional neural network. Again, this is probably a very bad way to go. Try linear regression followed by other methods first.
Moving forward
The documentation for Scikit-Learn in Python, H2O in Java, or Weka provides a very shallow learning curve to jump on the merry-go-round and take a spin or two. Please make sure that you thoroughly understand cross validation and scoring metrics, as this is essential to continued, adequate progress.
Hope this helps! | {
"domain": "datascience.stackexchange",
"id": 920,
"tags": "neural-network"
} |
Addition of velocities | Question: Let's say a car ,in straight line motion ,has a speed of V m/s (in the absence of wind) and let the speed of wind be W m/s the in the same direction . What is the speed of the car in the presence of the wind ?
It is intuitively clear that it is V + W m/s . But how to prove it?
Answer:
It is intuitively clear that it is V + W m/s .
It's not.
This may roughly be correct for an airplane and all velocities referenced to the ground, but it's not true for a car.
The car moves by applying torque to the wheels. The resulting velocity is the result of the equilibrium between the power put out by the engine and all loss mechanisms: drive train, rolling resistance, tire loss, wind resistance, etc. Wind in the drive direction will reduce the loss through wind resistance somewhat, since the relative speed between the car and the air molecules goes down, but the overall effect on the car's velocity is much less than the wind speed.
"domain": "physics.stackexchange",
"id": 50319,
"tags": "newtonian-mechanics, kinematics, inertial-frames, relative-motion"
} |
Why can't motors be as elegant as human muscles? Why do we have to use electromagnetism to create movement? | Question: Pretty much as the title says. Why isn't it possible to build motors as microscopically and simply as they're built in humans? Why can't we recreate electronic nerves and muscles?
Answer: Human muscles take 9 months to grow and maybe 18 months to train for useful motions. It then takes years to refine precision and strength ... and still it falls short of an electric actuator arm in terms of holding power and precision ... but they last 80 years.
Robotic arms are optimized for ease of production, reliability, and strength. Biological arms are optimized for energy efficiency, among other things like being manufactured from cells.
"elegance" is hard to define, but I think a Kuka beats a human arm in terms of simplicity, strength, and precision any day. | {
"domain": "robotics.stackexchange",
"id": 1838,
"tags": "motor, design"
} |
AspNetCore - Injecting a Func | Question: I have an ASP.NET Core controller I am creating. The controller endpoint looks something like this right now:
[HttpPost("")]
public async Task<ActionResult<Thing>> AddThing([FromBody] string otherThingId)
{
// First I perform some validation here (null check, proper ID, etc).
// Next I get OtherThing to make a Thing out of it
// _getOtherThing is at the heart of what I'm trying to understand
var sample = await _getOtherThing(otherThingId);
// Finally I do some work to convert it to a Thing and send it back
return newThing;
}
_getOtherThing is a method that performs a very specific concrete call to another API to get the data I needed. It's a method that takes a string and returns a Task<OtherThing>. There are issues with this method as it is though, such as testing, sharing it in the code base, and swapping implementations later on.
To me, it seems like it's an external dependency. So it would make sense to pass it into the controller. The controller class does with the Repository it uses via DI like so:
public ThingController(IThingRepository thingRepo)
{
_thingRepo = thingRepo;
}
The interface and its concrete implementation are then supplied for injection in the Startup.cs file:
public void ConfigureServices(IServiceCollection services)
{
services.AddScoped<IThingRepository, ThingRepository>();
}
So I end up with two questions:
What is the most common/expected way to extract this function and then supply it to the controller?
If I did want to just supply a function, what is the most reasonable way to do it?
With respect to the first question - Here are two strategies I could think of. Are there others?
Supply the function directly to the class. What I came up with looks like this:
public ThingController(IThingRepository thingRepo, Func<string, Task<OtherThing>> getOtherThing)
{
_thingRepo = thingRepo;
_getOtherThing = getOtherThing;
}
And then during Startup.cs:
public void ConfigureServices(IServiceCollection services)
{
services.AddScoped<IThingRepository, ThingRepository>();
services.AddSingleton<Func<string, Task<OtherThing>>>(
OtherThingUtils.GetOtherThing
);
}
Convert the function into an interface/class pair and inject that:
interface IOtherThingProvider {
Task<OtherThing> getOtherThing(string id);
}
class OtherThingProvider : IOtherThingProvider {
public async Task<OtherThing> getOtherThing(string id)
{
// original code here
}
}
And then in the Startup.cs file:
public void ConfigureServices(IServiceCollection services)
{
services.AddScoped<IThingRepository, ThingRepository>();
services.AddSingleton<IOtherThingProvider, OtherThingProvider>();
}
Answer: You basically butcher the entire reason to dependency inject here.
Some IoCs let you defer injection by doing public MyConstructor(Func<IMyInterface> factory). This is fine, because IMyInterface is an interface: the concrete implementation will be invoked through the standard pipeline and it can have its own dependencies.
But your solution cuts off the DI pipeline half way through, and OtherThingUtils.GetOtherThing cannot benefit from DI at all. And the special Func<string, IType> construct is dangerously close to the service locator pattern.
I would create an interface,
interface IOtherThingProvider
{
Task<OtherThing> GetOtherThing(string id);
} | {
"domain": "codereview.stackexchange",
"id": 34088,
"tags": "c#, asp.net-core, .net-core"
} |
Why do the calicheamicins bind to DNA at the minor, rather than the major, groove? | Question: I am trying to understand why some drugs bind only to the minor groove and not to the major groove. More specifically, I am interested in calicheamicins.
They target DNA and cause strand scission. Calicheamicins bind with
DNA in the minor groove, wherein they then undergo a reaction
analogous to the Bergman cyclization to generate a diradical species.
This diradical, 1,4-didehydrobenzene, then abstracts hydrogen atoms
from the deoxyribose (sugar) backbone of DNA, which ultimately leads
to strand scission.[7] The specificity of binding of calicheamicin to
the minor groove of DNA has been demonstrated.
Answer: Accommodation in the major or minor groove
I am not in a position to generalize about all drugs that bind to the major groove of DNA, but at least one well-known example, actinomycin D, does so because it intercalates between the base-pairs in the double helix. Although other weak chemical interactions stabilize this binding, much of it is through base stacking with base-pairs. Hence binding requires it be inserted into the major groove in which the base pairs lie (see e.g. PDB 101 Molecule of the Month article).
Binding of calicheamicins, in contrast, involves interaction with two parts of the molecule. The ‘head’ of the molecule makes specific interactions with a TCCT sequence in the DNA (in the case of calicheamicin γ1), which is achieved from the minor groove (intercalation is not necessary) allowing the saccharide ‘tail’ to fit into the minor groove. (Ikemoto et al. 1995).
(Constructed from 3D Structure images on the Protein Data Bank website)
Sequence specificity
A concern of the poster is the fact that the interaction of calicheamicin is sequence specific (there is apparently a preference for d(T-C-C-T).d(A-G-G-A)) whereas access to the bases is restricted in the minor groove. This is addressed in the paper by Ikemoto et al., but without illustration, assuming you are able to visualize the chemical interactions in your mind. I have used Jmol to view the structure 2PIK, have prepared a couple of screen shots to help illustrate the details I shall quote from the text of the paper.
Cleavage of the DNA is performed by the enediyne aglycone (R), which does not make contact with the bases. Two of the regions that do interact with bases are the thio-sugar ring (B) and the aromatic ring (C). These are shown below in the complex of calicheamicin (coloured yellow) with a deoxy-oligonucleotide duplex, together with a close-up of the interactions of these rings.
In the left-hand frame the DNA has standard cpk colouring, and the phosphates (orange–red) lining the minor groove are evident. The base-pairs are perpendicular to the plane of the image. In the second frame I have tilted the image slightly to allow the rings of the bases to be seen, and coloured the bases (actually the whole nucleoside) red/white/green/blue for A/T/G/C. Quoting from the Ikemoto paper:
The thio sugar B is positioned edgewise in the minor groove and contacts the A20 residue through van der Waals and hydrogen bonding (B ring hydroxyl to N3 of the base) interactions. The aromatic ring C is positioned between the walls of the minor groove with its iodine and CH3 groups directed toward the floor of the minor groove.… The S-carbonyl linker, which adopts an orthogonal alignment relative to the plane of ring C (favored by the steric demands of the ortho aromatic ring substituents), bridges the minor groove and makes van der Waals contacts with the opposing walls of the groove.
This may not have the clarity of, say, Watson and Crick base-pairing, but neither does the interaction of proteins with particular DNA sequences — one needs to examine the multiple interactions that occur. What I think it does show is that this drug can make specific contact to bases, even though it binds in the minor groove. | {
"domain": "biology.stackexchange",
"id": 10584,
"tags": "dna, pharmacology, 3d-structure"
} |
Deriving photon propagator | Question: In Peskin & Schroeder's book on page 297 in deriving the photon propagator the authors say that
$$\left(-k^2g_{\mu\nu}+(1-\frac{1}{\xi})k_\mu k_\nu\right)D^{\nu\rho}_F(k)=i\delta^\rho_\mu \tag{9.57b}$$
With the solution given in the next line in equation (9.58) as
$$D^{\mu\nu}_F(k)=\frac{-i}{k^2+i\epsilon}\left(g^{\mu\nu}-(1-\xi) \frac{k^\mu k^\nu}{k^2}\right)\tag{9.58}$$
Which is the propagator. I can verify this equation by inserting $D^{\mu\nu}_F(k)$ into the first equation, but I have no idea how to actually solve $D^{\nu\rho}_F(k)$ from $(9.57b)$. If anyone can help, it would be much appreciated.
Answer: Make the ansatz $D^{\nu\rho}_F(k) = A\,g^{\nu\rho} + B\,k^{\nu}k^{\rho}$, with $A$ and $B$ two unknown functions of the scalar $k^2$. These are the only two possible Lorentz-covariant tensor structures, since $k^\mu$ is the only vector at hand. Simply plug this in and calculate the unknown functions.
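Carrying the suggestion out: inserting the ansatz $D^{\nu\rho}_F = A\,g^{\nu\rho} + B\,k^\nu k^\rho$ into (9.57b) and collecting the two tensor structures gives

$$\left(-k^2g_{\mu\nu}+\left(1-\tfrac{1}{\xi}\right)k_\mu k_\nu\right)\left(A\,g^{\nu\rho}+B\,k^\nu k^\rho\right)=-k^2A\,\delta^\rho_\mu+\left[\left(1-\tfrac{1}{\xi}\right)A-\tfrac{1}{\xi}\,k^2B\right]k_\mu k^\rho=i\,\delta^\rho_\mu$$

Matching the coefficients of the two independent structures $\delta^\rho_\mu$ and $k_\mu k^\rho$:

$$A=\frac{-i}{k^2},\qquad -\tfrac{1}{\xi}\,k^2B+\left(1-\tfrac{1}{\xi}\right)A=0\;\;\Rightarrow\;\;B=\frac{(\xi-1)A}{k^2}=\frac{i\,(1-\xi)}{k^4}$$

which reassembles into (9.58) once the $+i\epsilon$ prescription is restored.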
"domain": "physics.stackexchange",
"id": 92050,
"tags": "homework-and-exercises, quantum-field-theory, linear-algebra, propagator"
} |
Why Pauli called the swap matrix $σ_x$? Why not $σ_y$? | Question: Why Pauli called the following matrix $\:\sigma_x\:$ and not $\:\sigma_y$?
\begin{equation}
\sigma_x\boldsymbol{=}
\begin{bmatrix}
0 & 1 \vphantom{\tfrac{a}{b}}\\
1 & 0 \vphantom{\tfrac{a}{b}}
\end{bmatrix}
\tag{01}\label{01}
\end{equation}
Answer: Any vector in $\mathbb{R}^3$ can be represented by a $2\times2$ hermitian traceless matrix and vice versa. So, there exists a bijection (one-to-one and onto correspondence) between $\mathbb{R}^3$ and the space of $2\times2$ hermitian traceless matrices, let it be $\mathbb{H}$ :
\begin{equation}
\mathbf{r}\boldsymbol{=}(x,y,z)\in \mathbb{R}^3\;\boldsymbol{\longleftrightarrow} \;
\mathrm R=
\begin{bmatrix}
z & x\boldsymbol{-}iy \\
x\boldsymbol{+}iy & \boldsymbol{-}z
\end{bmatrix}
\in \mathbb{H}
\tag{01}
\end{equation}
From the usual basis of $\mathbb{R}^3$
\begin{equation}
\mathbf{e}_x\boldsymbol{=}\left(1,0,0\right),\quad
\mathbf{e}_y\boldsymbol{=}\left(0,1,0\right),\quad \mathbf{e}_z\boldsymbol{=}\left(0,0,1\right)
\tag{02}
\end{equation}
we construct a basis for $\mathbb{H}$
\begin{align}
\mathbf{e}_x & \boldsymbol{=}(1,0,0)\qquad \boldsymbol{\longleftrightarrow} \qquad \sigma_x\boldsymbol{=}
\begin{bmatrix}
\:\: 0 & \hphantom{\boldsymbol{-}}1\:\:\vphantom{\dfrac{a}{b}}\\
\:\: 1 & \hphantom{\boldsymbol{-}}0\:\:\vphantom{\dfrac{a}{b}}
\end{bmatrix}
\tag{03a}\\
\mathbf{e}_y & \boldsymbol{=}(0,1,0)\qquad \boldsymbol{\longleftrightarrow} \qquad \sigma_y\boldsymbol{=}
\begin{bmatrix}
\:\: 0 & \boldsymbol{-}i\:\:\vphantom{\dfrac{a}{b}}\\
\:\: i & \hphantom{\boldsymbol{-}}0\:\:\vphantom{\dfrac{a}{b}}
\end{bmatrix}
\tag{03b}\\
\mathbf{e}_z & \boldsymbol{=}(0,0,1)\qquad \boldsymbol{\longleftrightarrow} \qquad \sigma_z\boldsymbol{=}
\begin{bmatrix}
\:\: 1 & \hphantom{\boldsymbol{-}}0\:\:\vphantom{\dfrac{a}{b}}\\
\:\: 0 & \boldsymbol{-}1\:\:\vphantom{\dfrac{a}{b}}
\end{bmatrix}
\tag{03c}
\end{align}
where $\:\boldsymbol{\sigma}\equiv(\sigma_x,\sigma_y,\sigma_z)\:$ the Pauli matrices.
Note also that the matrix
\begin{equation}
U\boldsymbol{=}\cos\tfrac{\theta}{2}\,\mathrm I\boldsymbol{-}i\sigma_x\sin\tfrac{\theta}{2}
\boldsymbol{=}
\begin{bmatrix}
\:\: \cos\tfrac{\theta}{2} & \boldsymbol{-}i\sin\tfrac{\theta}{2}\:\:\vphantom{\dfrac{a}{b}}\\
\:\: \boldsymbol{-}i\sin\tfrac{\theta}{2} & \hphantom{\boldsymbol{-}} \cos\tfrac{\theta}{2}\:\:\vphantom{\dfrac{a}{b}}
\end{bmatrix}
\tag{04}
\end{equation}
is the unitary matrix representation of the rotation around the $x$-axis through an angle $\theta$. | {
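A quick numerical check of this correspondence (pure Python, with $2\times2$ complex matrices as nested lists; the helper names are mine): conjugating $\mathrm R$ from eq. (01) by $U$ from eq. (04) rotates the vector about the $x$-axis, e.g. $\mathbf{e}_y \mapsto \mathbf{e}_z$ for $\theta = \pi/2$.

```python
import math


def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]


def dagger(a):  # conjugate transpose
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]


def to_matrix(x, y, z):      # eq. (01):  r -> R
    return [[z, x - 1j * y], [x + 1j * y, -z]]


def from_matrix(m):          # inverse of eq. (01)
    return (m[1][0].real, m[1][0].imag, m[0][0].real)


def rotate_x(r, theta):      # R' = U R U^dagger, with U from eq. (04)
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    u = [[c, -1j * s], [-1j * s, c]]
    return from_matrix(matmul(matmul(u, to_matrix(*r)), dagger(u)))
```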
"domain": "physics.stackexchange",
"id": 67130,
"tags": "quantum-mechanics, angular-momentum, quantum-spin, terminology, notation"
} |
Do working physicists consider Newtonian mechanics to be "falsified"? | Question: In the comments for the question Falsification in Math vs Science, a dispute around the question of "Have Newtonian Mechanics been falsified?"
That's a bit of a vague question, so attempting to narrow it a bit:
Are any of Newton's three laws considered to be 'falsified theories' by any 'working physicists'? If so, what evidence do they have that they believe falsifies those three theories?
If the three laws are still unfalsified, are there any other concepts that form a part of "Newtonian Mechanics" that we consider to be falsified?
Answer: "Falsified" is more philosophical than scientific distinction.
Newton's laws have been falsified in some sense, but we still use them, since they are usually a good approximation and easier to use than relativity or quantum mechanics.
The "action at a distance" of Newtonian potentials has been falsified (finite speed of light...), but again, we use it every day.
So, in practical terms, no, Newton's laws are still not falsified, in the sense that they are not totally discredited in the scientific community. Classical mechanics is still in the curriculum of all universities, in a form more or less identical to that of 200 years ago (before relativity, quantum mechanics, and field theory).
Most concepts in physics fit more into the category of "methods" than "paradigms", so they can be used over and over again. And all current methods and laws fail and give "false" results when used outside their range of applicability.
The typical example of a "falsified" theory is the Ptolemaic system of the Sun & planets rotating around the Earth. However, philosophers usually omit the facts that:
The Ptolemaic system was experimentally pretty good at calculating planetary motions
Most mathematical and experimental methods of the new Heliocentric paradigm are the same as those of the old Ptolemaic one
So the falsification was more on the point of view, rather than in the methods. | {
"domain": "physics.stackexchange",
"id": 57537,
"tags": "newtonian-mechanics, models"
} |
Encoding before vs after train test split? | Question: Am new to ML and working on a dataset with lot of categorical variables with high cardinality.
I observed that in lot of tutorials for encoding like here, the encoding is applied after the train and test split.
Can I check why is it done so?
Why can't we apply the encoding even before the train test split?
Can't we apply the encoding to the full dataset and after encoding, split it into train and test sets?
What difference does it make?
Answer: If you perform the encoding before the split, it will lead to data leakage (train-test contamination), in the sense that the encoding (the integers of label encoders) is derived from the full dataset, including the validation portion, and then used by your models. This distorts the final prediction results (good validation scores but poor performance in deployment).
Instead, perform fit_transform on the train data only, then just transform the validation data, based on the encoding maps learned from the train data.
Almost all feature engineering, like standardisation, normalisation, etc., should be done after the train-test split. Hope it helps
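A pure-Python sketch of the fit-on-train / transform-on-validation pattern (it mirrors the shape of scikit-learn's LabelEncoder API; the class name is illustrative):

```python
class LabelEncoderSketch:
    """Category -> integer encoder that is fitted on training data only."""

    def fit_transform(self, values):
        # Learn the mapping from the TRAINING data alone.
        self.mapping = {v: i for i, v in enumerate(sorted(set(values)))}
        return [self.mapping[v] for v in values]

    def transform(self, values):
        # Validation/test data reuses the training map; a category never
        # seen in training raises KeyError instead of being silently
        # re-indexed, which surfaces the problem explicitly.
        return [self.mapping[v] for v in values]
```

Fitting on the full dataset before the split would fold validation-only categories into the map, which is exactly the leakage described above.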
"domain": "datascience.stackexchange",
"id": 10537,
"tags": "machine-learning, deep-learning, neural-network, classification, machine-learning-model"
} |
Qiskit primitives with commuting observables | Question: Is there a way of bundling commuting observables in qiskit primitive estimator to save the number of shots?
With the new feature of Qiskit primitives, it is easy to sample observables for a specific circuit on any backend. There is also a feature to run multiple circuits corresponding to an equal number of observables. I would like to run the same circuit on multiple observables. I would like to exploit their commutativity and reduce the number of shots. I am not sure if the current estimators already do this, or if there is any way I could do this and save the number of circuit executions
Answer: As mentioned in the answer linked above, there are different Estimator implementations from different providers. The implementations of the Aer Estimator and Qiskit Estimator and BackendEstimator are open source, and you can see that:
the Qiskit Estimator performs an exact statevector calculation (so no grouping is performed or needed)
both the Aer and Backend Estimator expose an abelian_grouping parameter that triggers the use of the group_commuting method from Qiskit's quantum_info module.
As for the Qiskit Runtime Estimator, the implementation is not open source, but my understanding is that it performs qubit-wise group commuting by default, at least for resilience_level=0 (might be for all resilience levels), and this option is NOT configurable by users for the time being.
I hope this helps. | {
"domain": "quantumcomputing.stackexchange",
"id": 4859,
"tags": "qiskit-runtime, qiskit"
} |
Description of this difference in wave speed among two mechanical transverse waves? | Question: Consider a string of arbitrary length. One end is fixed and the other is in your hand. Bring the end to some point away from the equilibrium (creating a transverse wave). Perform the same action but 'faster' relative to the last wave. The same exact procedure is being performed but one wave has a shorter wavelength than the other.
Say $\text{A}$ is the normal trial and $\text{B}$ is the faster trial. I would like to know specifically what it is that causes this change in wavelength. My guess is: The wave speed of the medium is fixed so that in $\text{B}$ you are supplying the entire wavelength in a 'shorter' amount of time relative to the wave speed than in $\text{A}$. If this is the right way to think about it then what would be the cause of the wave speed here? (I am looking for a complete description)
edit: Actually I think my guess was wrong. I don't believe $\text{A}$ and $\text{B}$ would be travelling at the same speed with different wavelengths. I think that $\text{B}$ would be faster. (mentioned in the comments) This is because $\text{B}$ supplies a greater initial velocity to the wave. My new guess is that the dependence of the wavelength is to the ratio of [the individual wave speeds] to [the rate at which you're moving the string with respect to the wave speed]. $\text{B}$ then has a smaller ratio than in $\text{A}$ so that $\text{A}$ has a longer wavelength.
Answer: As philip_0008 said, the wave speed in a string is usually given by $v=\sqrt{\frac{T}{\mu}}$, where $T$ is the tension and $\mu$ is the mass per unit length. This is the speed of the disturbance along the string. As soon as you displace one end of the string, that disturbance will begin to propagate along the string at that rate, regardless of how quickly you are moving your hand.
As you say in your original question, the wavelength of the wave produced depends on the time it takes your hand to complete one whole vibration. It just follows the wave equation $v=f\lambda$ or, if you like, $v=\frac{\lambda}{T}$ (with $T$ here the period). Since $v$ is fixed and determined by properties of the medium, a shorter $T$ gives a shorter $\lambda$.
The derivation of the speed equation above (which you'll find in most first year texts, or here on wikipedia), assumes the x-component of the tension (along the string) is constant. The y-component, transverse to the string, of tension is not constant. It's zero at the extreme positions of the wave and at a maximum at the points where the string is at the equilibrium position.
If you want to generate a bigger wave with you hand you apply a larger tension which accelerates the string more in the transverse direction. So moving you hand faster does make the string move faster transverse to its length but it does not make the disturbance move any faster along the string.
Now, if the amplitude of the wave on the sting is large, then the approximation that the horizontal component of the tension is constant does not hold and the equation above for the speed of the wave is not completely accurate. Also, if the rope is not perfectly flexible but rather has appreciable stiffness then the frequency of the wave will have an effect on its speed. | {
"domain": "physics.stackexchange",
"id": 34248,
"tags": "waves"
} |
Integration of Sinusoidal functions | Question: Since Differentiation of a sinusoidal function of a certain angular frequency gives a sinusoidal function of the same frequency, does the statement "Integration of a sinusoidal function of certain frequency gives again a sinusoidal of same frequency" holds true or not?
I am asking this as I recall I have read that the second statement does not hold good but I cannot figure out now why.
Answer: Yes and no.
If you recall from Calculus, if $g(t) = \frac{d}{dt} f(t)$, then $\int g(t)\ dt = f(t) + C$. So, since the derivative of a sinusoid is a sinusoid of the same frequency, if $g(t)$ is sinusoidal, its integral must be a sinusoid of the same frequency, plus a constant.
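Concretely,

$$\int \sin(\omega t)\,dt \;=\; -\frac{1}{\omega}\cos(\omega t) + C \;=\; \frac{1}{\omega}\sin\!\left(\omega t - \tfrac{\pi}{2}\right) + C,$$

a sinusoid of the same angular frequency $\omega$, scaled by $1/\omega$ and phase-shifted, plus the constant of integration.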
Whether that "plus a constant" bit is going to make you consider the integral of a sinusoid to not be a sinusoid is up to you. Depending on the problem at hand, that constant term may or may not be a deal breaker for the "sinusoid-ness" of the result. | {
"domain": "dsp.stackexchange",
"id": 10536,
"tags": "signal-analysis, continuous-signals, cosine"
} |
Why is SPT0418-47 ("the most distant Milky Way look-alike") expected to evolve into an elliptical galaxy? | Question: Phys.org's ALMA sees most distant Milky Way look-alike describes the image reconstruction of a strongly and very nicely lensed z = 4.2 galaxy by a by a foreground galaxy at z = 0.263 and says:
"What we found was quite puzzling; despite forming stars at a high rate, and therefore being the site of highly energetic processes, SPT0418-47 is the most well-ordered galaxy disc ever observed in the early Universe," stated co-author Simona Vegetti, also from the Max Planck Institute for Astrophysics. "This result is quite unexpected and has important implications for how we think galaxies evolve." The astronomers note, however, that even though SPT0418-47 has a disc and other features similar to those of spiral galaxies we see today, they expect it to evolve into a galaxy very different from the Milky Way, and join the class of elliptical galaxies, another type of galaxies that, alongside the spirals, inhabit the Universe today.
and links to Rizzo et al. (2020) in Nature: A dynamically cold disk galaxy in the early Universe. Also see (YouTube and ESO)
Question: Why is SPT0418-47 ("the most distant Milky Way look-alike") expected to evolve into an elliptical galaxy? Is there something about this particular observation that indicates that, or is that just what galaxies "back then" did, even if they had a disk-like phase?
Answer: The standard cold-dark-matter model ("Lambda CDM") says that galaxy formation is seeded by initial density fluctuations in the dark matter and gas. The earliest galaxy formation will happen in the densest such fluctuations (denser = stronger gravity = faster collapse), which will be local overdensities within an initial, larger-scale overdensity. Such regions will almost certainly have other local overdensities, which will also form early proto-galaxies. Since these are near to the first, they're more likely to merge early in the universe's history.
So if you see a massive galaxy at a high redshift (early in the universe's history), it most likely means that it's in a very dense region of the early universe -- e.g., the core of what will become a cluster of galaxies -- which means there will be other massive galaxies forming (or soon to form) nearby, which will probably merge with this galaxy fairly rapidly and turn it into an elliptical. (Plus, since this is happening in a dense region, the gravity of that region will draw in other galaxies not immediately nearby, leading to more mergers over time.) | {
"domain": "astronomy.stackexchange",
"id": 4743,
"tags": "galaxy, galactic-dynamics, gravitational-lensing"
} |
Devise algorithm to travel destinations in a way that you always have enough money for the next destination (getting paid at each destination as well) | Question: This problem probably falls under some category of algorithms but I don't know which.
A given restaurant has branches $b_1, \ldots, b_n$ around the world. A food critic has to visit all of the branches.
When the critic finishes his visit to branch $1 \le j\le n$, he gets paid $m_j$ money by his employer for the job.
From branch $j \le n-1$ he takes a flight to the next branch $b_{j+1}$; the flight costs $c_j$ money. From the last branch $b_n$ a flight goes to the first branch $b_1$ at a cost of $c_n$.
It is also given that the total amount of money the critic received equals the total cost of flights so:
$$
\sum_{j=1}^n m_j=\sum_{j=1}^n c_j
$$
At any given point the critic can only pay for the next flight out of the money he currently has.
Lastly, the critic can choose any branch which he will visit first and the first trip will be free.
Prove that the branch $j$ exists such that the critic will be able to start from $b_j$ and visit all branches and suggest an algorithm which will find such a $j$ in $\Theta(n)$ time.
The hint which was given to us is to use queue data structure.
I thought of the following algorithm:
1) Add all branches to a queue.
2) Check if we can travel to the next branch that is if $m_j \ge c_{j+1}$.
If no, that means we don't have enough money so push the branch to the tail of the queue.
If yes, check if $m_j+m_{j+1} \ge c_{j+1}+c_{j+2}$ that is if the critic does travel to the next branch will he have enough money to continue from that branch?
--If yes then advance one branch in the queue.
--If no, push the next branch to the tail of the queue.
After one traversal of the queue we should have branches in such an order that the critic can complete the journey and branch $j$ will be at the head of the queue.
Am I on the right track? I'm not sure this proves formally enough that such a $j$ indeed exists so the critic can complete the journey.
Answer: The algorithm that you suggested using a queue does not seem to be linear-time with respect to $n$, because you did not prove that each restaurant gets pushed in the queue only once (or at most a constant number of times). I suppose there are cases where a restaurant gets pushed into the queue nearly $n$ times.
However, there is a simple algorithm for this problem; I got the idea from a similar question in "Introduction to Algorithms" by Cormen et al. The main idea is to start from the first restaurant and fly as far as possible, until we run out of money. If we reach the end, then that was a valid starting point; otherwise, we restart from the place where we ran out of money. It is easy to see that the next candidate starting point is exactly the place where we ran out of money (because the previous restaurants did not pay enough money to sustain the journey).
Let $i \gets 1$;
While ($i \leq n$)
$~~~~$ Let $money \gets 0$;
$~~~~$ Let $starting \gets i$;
$~~~~$ While ($i \leq n$ and $money \geq 0$)
$~~~~~~~~$ Let $money \gets money + m_i - c_i$;
$~~~~~~~~$ Let $i \gets i + 1$;
$~~~~$ EndWhile;
EndWhile;
Output $starting$;
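The idea (greedily scan once, restarting from the point where the money runs out) translates into a short Python function; the name `find_start` and the 0-based indexing are my own choices:

```python
def find_start(m, c):
    """Return a 0-based index of a branch from which the critic can
    complete the whole tour, given sum(m) == sum(c)."""
    start, money = 0, 0
    for i in range(len(m)):
        money += m[i] - c[i]  # get paid at branch i, then pay for flight i
        if money < 0:
            # No branch in [start, i] can be a valid start: restart after i.
            start, money = i + 1, 0
    return start

# Example: m = [1, 3, 2], c = [2, 2, 2] -- starting at branch 1 works.
print(find_start([1, 3, 2], [2, 2, 2]))  # 1
```

Each branch is examined exactly once, so the running time is $\Theta(n)$.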
The main idea behind the proof is mentioned in my second paragraph. Once you start from a restaurant $b_i$ and run out of money at a restaurant $b_j$, you can be sure that none of the restaurants $b_i$ to $b_{j-1}$ is a good starting point.
How do we prove that such a $starting$ exists? We can show it by contradiction. We know that:
$$\sum_{1 \leq i \leq n} m_i = \sum_{1 \leq i \leq n} c_i,$$
which is equivalent to:
$$\sum_{1 \leq i \leq n} (m_i - c_i) = 0.~~~~(Eq. X)$$
Let us assume that there is no $starting$ for which the critic can begin at restaurant $b_{starting}$ and visit all restaurants while maintaining a non-negative amount of money in his pocket. We know that if, starting from restaurant $b_i$, we run out of money at restaurant $b_j$, then no restaurant $b_k$ ($i \leq k < j$) allows us to pass restaurant $b_j$ (we will definitely run out of money). Given our assumption that no such $starting$ exists, we get a set of pairs $(b_{i_1}, b_{j_1}), \cdots, (b_{i_m}, b_{j_m})$ (with $i_{x+1} = j_{x}$) that all result in running out of money. This means that for every pair $(b_{i_x}, b_{j_x})$ we have,
$$\sum_{i_x \leq y \leq j_x} (m_{y} - c_{y}) \leq 0,$$
which results in one of two cases: (1) the money becomes exactly zero, so the critic does not have money for the next flight and cannot continue flying, or (2) the overall summation becomes negative, which is in clear contradiction with our initial assumption in Eq. (X). | {
"domain": "cs.stackexchange",
"id": 9192,
"tags": "algorithms, time-complexity, priority-queues"
} |
Is dynamic topic remapping implemented in ROS2 Dashing? | Question:
Based on the documentation I have seen at the links below
https://design.ros2.org/articles/static_remapping.html (bottom of page)
https://www.osrfoundation.org/wordpress2/wp-content/uploads/2015/04/ROSCON-2014-Why-you-want-to-use-ROS-2.pdf (slides 23 -24)
It seems ros2 nodes are supposed to support dynamic remapping of their topics at runtime. However, I do not see any documentation on this. I feel this feature is useful during development and debugging, so I would like to see it implemented.
Is this feature implemented in Dashing? If not, is it still on the roadmap?
Originally posted by msmcconnell on ROS Answers with karma: 268 on 2020-02-07
Post score: 0
Answer:
No, it is not yet implemented and still on the roadmap: https://index.ros.org/doc/ros2/Roadmap/#new-features
Atm there is no plan for anybody to work on this in the foreseeable future.
Originally posted by Dirk Thomas with karma: 16276 on 2020-02-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by fmrico on 2023-05-23:
Do you know if it is still in the roadmap or if it has been implemented? | {
"domain": "robotics.stackexchange",
"id": 34400,
"tags": "ros, ros2, remapping"
} |
Conservation Genetics - Book recommendations | Question: Can you please give me some advice for a book in (evolutionary) conservation genetics that offers an in-depth review of the mathematical formulations used in this field.
I read the book Evolutionary Conservation Biology (Ferrière and Dieckmann, 2009) and I really liked it. I'm looking for another book of the same kind that goes deeper into Part A (Theory of Extinction), especially concerning the importance of population structure and the genetic load. I am also interested in landscape genetics.
Answer: This book, "A Primer of Conservation Genetics", would suit quite well, I think. In particular, chapter five deals with "Genetics and Extinction" and is preceded by a lot of population-genetics-based theory. A beginner might also combine it with "A Primer of Ecological Genetics" (Hartl & Conner), but you seem to have enough pop-gen knowledge not to need it!
The landscape genetics is perhaps a bit lacking from this book though. Maybe a better and more comprehensive solution, but slightly more advanced, is this book which does feature some aspects of landscape genetics in chapter 15 along with good population genetics type coverage of extinction. | {
"domain": "biology.stackexchange",
"id": 2563,
"tags": "evolution, population-dynamics, book-recommendation, population-genetics, conservation-biology"
} |
How do multiple linear neurons together allow for nonlinearity in a neural network? | Question: As I understand it, the point of architecting multiple layers in a neural network is so that you can have non-linearity represented in your deep network.
For example, this answer says: "To learn non-linear decision boundaries when classifying the output, multiple neurons are required."
When I watch online tutorials and whatnot, I see networks described as in the screenshot below. In cases like this, I see a series of linear classifiers:
We have a multiply, an add, a ReLU, then another multiply and add, all in series.
From studying math, I know that a composite function made out of linear functions is itself linear.
So how do you coax non-linearity out of multiple linear functions?
Answer: The phrase
"To learn non-linear decision boundaries when classifying the output, multiple neurons are required."
is NOT correct. More precisely, it should be:
"To learn non-linear decision boundaries when classifying the output, we need a non-linear activation function."
To understand why, imagine you have a network with many layers and nodes (the multiple neurons in your question). If you don't have a non-linear activation function such as ReLU or sigmoid, your network is just a linear combination of biases and weights, and it won't be able to classify a non-linear decision boundary. And if your inputs are linearly separable, you don't need a neural network at all...
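This is easy to verify numerically: composing two affine ("linear") layers with no activation in between collapses to a single affine map. A sketch with random weights:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
x = rng.normal(size=3)

# Two linear layers in series...
two_layers = W2 @ (W1 @ x + b1) + b2

# ...are exactly one linear layer with W = W2 @ W1 and b = W2 @ b1 + b2,
# so the extra layer adds no expressive power.
one_layer = (W2 @ W1) @ x + (W2 @ b1 + b2)

print(np.allclose(two_layers, one_layer))  # True
```

Inserting a ReLU between the two layers (`np.maximum(W1 @ x + b1, 0)`) breaks this collapse, which is exactly what makes depth useful.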
That's why neural networks almost always have a non-linear activation function. ReLU is the most popular, but there are other possibilities. When you stack a dozen non-linear units as in a neural network, your network becomes able to classify a non-linear decision boundary. The more you have, the better it can perform (but also the easier it is to overfit). | {
"domain": "datascience.stackexchange",
"id": 1116,
"tags": "neural-network"
} |
What is the best way to predict multiple outcomes from a single entity? | Question: Let's say I have three models: facial recognition, face landmark detection, and emotion recognition.
Now suppose I want to predict all three features from a single image. What should my approach be?
Should I combine those three models? Or
run the three models in three different threads?
Answer: All three models fit to single GPU
Since you have already trained the models and the models are separate (they do not share features), you could construct the computational graph so that there is only one input (your image), but that input is pushed to three different branches of the graph (each branch being one of your three models). At the output of the constructed graph, you will get three outputs (one from each of the three branches).
This way you will run all three models at once.
If you are using TF, it will look like this:
output_1, output_2, output_3 = sess.run(output_op, feed_dict={input_layer: input_image})
where output_op holds a list of the outputs from the three models (hence we unpack them into the three variables output_1, output_2 and output_3); input_layer is the tensor operation which takes the image and pushes it to the three branches as already described.
This is only possible if your GPU memory is large enough to fit all three models into the memory.
All three models do not fit to single GPU
In this case, assuming you have access to multiple GPUs, you could modify the computational graph which combines the three models so that each branch is placed on a different GPU.
Run one after another
Also, this can always be done.
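For completeness, the thread-based option from the question can be sketched with the standard library alone; the three model functions below are placeholders, not real models, so substitute your own predict calls:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder model functions -- substitute your real predict() calls.
def recognize_face(image):
    return {"identity": "person_42"}

def find_landmarks(image):
    return {"landmarks": [(10, 20), (30, 40)]}

def detect_emotion(image):
    return {"emotion": "happy"}

def predict_all(image):
    # Run the three independent models concurrently. This helps when each
    # model releases the GIL during inference (most DL frameworks do).
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(model, image)
                   for model in (recognize_face, find_landmarks, detect_emotion)]
        return [f.result() for f in futures]

print(predict_all("image.png"))
```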
If you are using TF, this link could be useful. | {
"domain": "datascience.stackexchange",
"id": 4496,
"tags": "machine-learning, deep-learning, predictive-modeling, cnn"
} |
Special Relativity Problem | Question: I am having trouble with the following problem:
Fry travels in a rocket ship towards Leela, at constant relative speed $v$: Fry
is delivering a pizza, which in its rest frame stays hot for exactly another
2 minutes. If Leela measures that Fry is 27 million kilometers away, then
calculate the minimal value of $v$ for which the pizza is hot when delivered.
My approach was to use speed = distance/time and account for time dilation, but I cannot figure out how to eliminate the Lorentz factor. The answer is meant to be $3c/5$. Any pointers would be appreciated! Thanks.
Answer: Since an answer has already been posted using time dilation and $\gamma$, here's an alternative method employing the invariant interval.
$$(c\tau)^2 = (c \Delta t)^2 - \Delta x^2$$
As specified, the proper time for the pizza is $\tau = 120s$
The displacement of the pizza is $\Delta x = 27 \cdot 10^6 km = 90$ light-seconds.
Solving for the elapsed coordinate time $\Delta t$ yields
$$\Delta t = \sqrt{ \tau^2 + \frac{\Delta x^2}{c^2}} = \sqrt{120^2 + 90^2} = 150s$$
Thus,
$$v = \frac{\Delta x}{\Delta t} = \frac{90}{150}c= \frac{3}{5}c$$ | {
"domain": "physics.stackexchange",
"id": 13868,
"tags": "homework-and-exercises, special-relativity, spacetime, time-dilation"
} |
A Neural Network | Question: I programmed a Neural Network in Python. Feedback of every kind is appreciated.
I tried to use some vectorization, but it turned out to become quite a mess. Because you can't append to NumPy arrays, I sometimes needed to keep NumPy arrays in lists, and sometimes I could use plain NumPy arrays.
Is there a way to make it look cleaner?
"""
Author: Lupos
Purpose: Practising coding NN
Date: 17.11.2019
Description: test NN
"""
from typing import List, Dict # used for typehints
import numpy as np # used for forward pass, weight init, ect.
# logging
import logging # used to log errors and info's in a file
from datetime import date # used to get a name for the log file
import os # used for creating a folder
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
class NeuralNetwork:
def __init__(self, x: List[float], y: List[float], nn_architecture: List[Dict], alpha: float, seed: int, custom_weights_data: List = [], custom_weights: bool = False) -> None:
"""
Constructor of the class Neural Network.
Parameters
----------
x : List[float]
Input data on which the Neural Network should get trained on
y : List[float]
Target/corresponding values for the input data.
nn_architecture : List[Dict]
Describes the architecture of the Neural Network.
alpha : float
The learning rate.
seed: int
Seed for numpy.For creating random values. For creating reproducible results.
Returns
-------
None
"""
self.level_of_debugging = logging.INFO
self.logger: object = self.init_logging(self.level_of_debugging) # initializing of logging
# Dimension checks
self.check_input_output_dimension(x, y, nn_architecture)
np.random.seed(seed) # set seed for reproducibility
self.input: List = self.add_bias(x)
self.y: List = y
self.output_model: float = np.zeros(y.shape)
self.alpha: float = alpha
self.layer_cache = {} # later used for derivatives
self.error_term_cache = []
self.nn_architecture: List[Dict] = nn_architecture
self.weights: List = [] # np.array([])
self.init_weights(custom_weights, custom_weights_data) # initializing of weights
self.w_d: List = [] # gardient in perspective to the weight
self.curr_layer: List = []
self.weight_change_cache: List = []
self.logger.info("__init__ executed")
# for visuliozing
self.x_train_loss_history = []
self.y_train_loss_history = []
self.bias_weight_tmp = []
def add_bias(self, x) -> List[float]:
x = np.array([np.insert(x, 0, 1)])
return x
def check_input_output_dimension(self, x, y, nn_architecture):
"""
Gets executed from the constructor "__init__". Is used
to check if the dimensions of input and output values correspond to the neuron size
in the input and output layer.
Parameters
----------
x
Input values
y
Output Values
nn_architecture : List[Dict]
Architecture of the neural network.
Returns
-------
None
"""
assert len(x[0]) == nn_architecture[0][
"layer_size"], 'Check the number of input Neurons and "X".' # check if the first element in "x" has the right shape
assert len(y[0]) == nn_architecture[-1][
"layer_size"], 'Check the number of output Neurons and "Y".' # check if the first element in "y" has the right shape
assert len(x) == len(y), "Check that X and Y have the corresponding values."
# mean square root
def loss(self, y: List[float], y_hat: List[float]) -> List[float]:
return np.sum(1 / 2 * (y - y_hat) ** 2)
def loss_derivative(self, y: List[float], y_hat: List[float]) -> List[float]:
y = np.array([y]).T
return np.array(-(y - y_hat))
def sigmoid(self, x: List[float]) -> List[float]:
return 1 / (1 + np.exp(-x))
def sigmoid_derivative(self, x: List[float]) -> List[float]:
return self.sigmoid(x) * (1 - self.sigmoid(x))
def relu(self, x: List[float]) -> List[float]:
return np.maximum(0, x)
def relu_derivative(self, x: List[float]) -> List[float]:
x[x <= 0] = 0
x[x > 0] = 1
return x
def linear(self, x: List[float]) -> List[float]:
return x
def activation_derivative(self, layer: Dict, curr_layer: List[float]) -> List[float]:
if layer["activation_function"] == "linear":
return np.array(self.linear(curr_layer))
elif layer["activation_function"] == "relu":
return np.array(self.relu_derivative(curr_layer))
elif layer["activation_function"] == "sigmoid":
return np.array(self.sigmoid_derivative(curr_layer))
else:
raise Exception("Activation function not supported!")
def communication(self, curr_epoch: int, curr_trainingsdata: int, data: List[float], target: List[float], how_often: int = 10) -> None:
"""
Gets executed from the method "train". Communicates information
about the current status of training progress.
Parameters
----------
i : int
A paramter that gets hand over. Current Iteration in a foor loop.
how_often: int
Is used to determine the frequently of updates from the training progress.
Returns
-------
None
"""
if curr_epoch % how_often == 0:
print("For iteration/trainings-example: #" + str(curr_epoch) + "/#"+ str(curr_trainingsdata))
print("Input: " + str(data))
print("Actual Output: " + str(target))
print("Predicted Output: " + str(self.output_model))
print("Loss: " + str(self.loss(y=target, y_hat=self.output_model)))
print("Value of last weight change: " + str(self.weight_change_cache[-1]))
print("\n")
def init_logging(self, level_of_debugging: str) -> object:
"""
Gets executed from the constructor "__init__". Initializes the logger.
Parameters
----------
level_of_debugging: {"logging.DEBUG", "logging.INFO", "logging.CRITICAL", "logging.WARNING", "logging.ERROR"}
Which error get logged.
Returns
-------
Object
return a logger object which is used to log errors.
"""
# creating a directory for "logs" if the directory doesnt exist
path = os.getcwd()
name = "logs"
full_path = path + "\\" + name
try:
if not os.path.isdir(full_path):
os.mkdir(full_path)
except OSError:
print("ERROR: Couldn't create a log folder.")
# create and configure logger
today = date.today() # get current date
today_eu = today.strftime("%d-%m-%Y") # european date format
LOG_FORMAT: str = "%(levelname)s - %(asctime)s - %(message)s" # logging format
logging.basicConfig(filename=full_path + "\\" + today_eu + ".log", level=level_of_debugging, format=LOG_FORMAT)
logger = logging.getLogger()
# Test logger
logger.info("------------------------------------------------")
logger.info("Start of the program")
logger.info("------------------------------------------------")
return logger
# TODO: "init_weights" is work in progress.
# TODO: "init_weights" init bias.
def init_weights(self, custom_weights: bool, custom_weights_data: List) -> List[float]:
"""
Gets executed from the constructor "__init__".
Initializes the weight in the whole Neural Network.
Returns
-------
List
Weights of the Neural Network.
"""
self.logger.info("init_weights executed")
for idx in range(0, len(self.nn_architecture) - 1): # "len() - 1" because the output layer doesn't has weights
if not custom_weights:
# "self.nn_architecture[idx]["layer_size"] + 1" "+ 1" because we also have a bias term
weights_temp = 2 * np.random.rand(self.nn_architecture[idx + 1]["layer_size"], self.nn_architecture[idx]["layer_size"] + 1) - 1
self.weights.append(weights_temp)
if custom_weights:
self.weights = custom_weights_data
return self.weights
def activate_neuron(self, x: List[float], layer: Dict) -> List[float]:
"""
Gets executed from the method "forward" and "full_forward".
Activates the neurons in the current layer with the specified activation function.
Parameters
----------
x: List[float]
This are the values which get activated.
layer: Dict
A Dictionary with different attributes about the current layer.
Returns
-------
List
Outputs a List with activated values/neurons.
"""
if layer["activation_function"] == "relu":
temp_acti = self.relu(x)
# add bias to cache when not output layer
if not layer["layer_type"] == "output_layer":
tmp_temp_acti_for_chache = self.add_bias(temp_acti)
else:
tmp_temp_acti_for_chache = temp_acti.T
# the name of the key of the dict is the index of current layer
idx_name = self.nn_architecture.index(layer)
self.layer_cache.update({"a" + str(idx_name): tmp_temp_acti_for_chache})
return temp_acti
elif layer["activation_function"] == "sigmoid":
temp_acti = self.sigmoid(x)
# add bias to cache when not output layer
if not layer["layer_type"] == "output_layer":
tmp_temp_acti_for_chache = self.add_bias(temp_acti)
else:
tmp_temp_acti_for_chache = temp_acti.T
# the name of the key of the dict is the index of current layer
idx_name = self.nn_architecture.index(layer)
self.layer_cache.update({"a" + str(idx_name): tmp_temp_acti_for_chache})
return temp_acti
else:
raise Exception("Activation function not supported!")
def forward(self, weight: List[float], x: List[float], layer: Dict, idx: int) -> List[float]:
"""
Gets executed from the method "full_forward". This method make´s one
forward propagation step.
Parameters
----------
weight : List[float]
The weights of each associated Neurons in a List.
x : List[float]
The Input from the current layer which gets multiplicated with the weights and summed up.
layer : Dict
A Dictionary with different attributes about the current layer.
Returns
-------
List
List with values from the output of the one step forward propagation.
"""
curr_layer = np.dot(weight, x.T)
# add bias to cache when not output layer
if not layer["layer_type"] == "output_layer":
tmp_curr_layer_for_chache = self.add_bias(curr_layer)
else:
tmp_curr_layer_for_chache = curr_layer.T
# the name of the key of the dict is the index of current layer
idx_name = self.nn_architecture.index(layer)
tmp_dict = {"z" + str(idx_name): tmp_curr_layer_for_chache}
self.layer_cache.update(tmp_dict) # append the "z" value | not activated value
curr_layer = self.activate_neuron(curr_layer, layer)
return curr_layer
# TODO: "full_forward" is work in progress
def full_forward(self, data):
"""
Gets executed from the method "forward_backprop". Makes the full forward propagation
through the whole Architecture of the Neural Network.
Returns
-------
List
List with the values of the output Layer.
"""
self.logger.info("full_forward executed")
self.layer_cache = {} # delete cache used from previous iteration
for idx in range(0, len(self.nn_architecture) - 1):
self.logger.debug("Current-index (full_forward methode): " + str(idx))
if self.nn_architecture[idx]["layer_type"] == "input_layer":
self.layer_cache.update({"z0": data})
self.layer_cache.update({"a0": data})
self.curr_layer = self.forward(self.weights[idx], data, self.nn_architecture[idx + 1], idx=idx) # "idx + 1" to fix issue regarding activation function
else:
self.curr_layer = self.add_bias(self.curr_layer)
self.curr_layer = self.forward(self.weights[idx], self.curr_layer, self.nn_architecture[idx + 1], idx=idx)
self.output_model = self.curr_layer
# TODO: "backprop" is work in progress
def backprop(self, target: List[float]) -> None: # application of the chain rule to find derivative
"""
Gets executed from the method "forward_backprop". This method handels
the backpropagation of the Neural Network.
Returns
-------
None
"""
self.weight_change_cache = []
self.error_term_cache = []
self.logger.info("Backprop executed")
for idx, layer in reversed(list(enumerate(nn_architecture))): # reversed because we go backwards
if not layer["layer_type"] == "input_layer": # if we are in the input layer
# calculating the error term
if layer["layer_type"] == "output_layer":
temp_idx = "z" + str(idx)
d_a = self.activation_derivative(layer, self.layer_cache[temp_idx])
d_J = self.loss_derivative(y=target, y_hat=self.output_model)
error_term = np.array([np.multiply(d_a.flatten(), d_J.flatten())])
self.error_term_cache.append(error_term)
tmp_matrix_weight = np.asarray(self.weights[idx - 1])
tmp_bias_weight_t = np.array(tmp_matrix_weight.T[0])
self.bias_weight_tmp.append([tmp_bias_weight_t])
else:
temp_idx = "z" + str(idx)
layer_cache_tmp_drop_bias = np.delete(self.layer_cache[temp_idx], 0, 1)
d_a = self.activation_derivative(layer, layer_cache_tmp_drop_bias)
d_J = 0
for item in reversed(self.error_term_cache):
tmp_matrix_weight = np.asarray(self.weights[idx - 1])
self.bias_weight_tmp.append([tmp_matrix_weight.T[0]])
weights_tmp_drop_bias = np.delete(self.weights[idx], 0, 1)
d_J = d_J + np.dot(weights_tmp_drop_bias.T, item.T)
error_term = d_a.T * d_J
error_term = error_term.T
self.error_term_cache.append(error_term)
err_temp = error_term.T
temp_idx = "a" + str(idx - 1)
cache_tmp = self.layer_cache[temp_idx]
cache_tmp = np.delete(cache_tmp, 0, 1) # delete bias
weight_change = err_temp * cache_tmp
self.weight_change_cache.append(weight_change)
# update weights
for idx in range(0, len(self.weight_change_cache)): # reversed because we go backwards
curr_weight = self.weights[-idx - 1]
curr_weight = np.delete(curr_weight, 0, 1) # delete bias
weight_change_tmp = self.weight_change_cache[idx]
total_weight_change = self.alpha * weight_change_tmp # updating weight
curr_weight = curr_weight - total_weight_change
self.weights[-idx - 1] = curr_weight
# update bias
if layer["layer_type"] == "output_layer":
for i in range(0, len(self.bias_weight_tmp)):
tmp_weight_bias = np.asarray(self.bias_weight_tmp[i])
tmp_error_term_bias = np.asarray(self.error_term_cache[i])
self.bias_weight_tmp[i] = tmp_weight_bias - (self.alpha * tmp_error_term_bias)
# insert bias in weights
for i in range(0, len(self.weights)):
self.weights[i] = np.insert(self.weights[i], obj=0, values=self.bias_weight_tmp[i], axis=1) # insert the weights for the biases
# TODO: "train" is work in progress
def train(self, how_often, epochs=20) -> None:
"""
Execute this method to start training your neural network.
Parameters
----------
how_often : int
gets handed over to communication. Is used to determine the frequently of updates from the training progress.
epochs : int
determines the epochs of training.
Returns
-------
None
"""
self.logger.info("Train-method executed")
for curr_epoch in range(epochs):
for idx, trainings_data in enumerate(x):
trainings_data_with_bias = self.add_bias(trainings_data)
self.full_forward(trainings_data_with_bias)
self.backprop(self.y[idx])
self.communication(curr_epoch, idx, target=self.y[idx], data=trainings_data, how_often=how_often)
self.x_train_loss_history.append(curr_epoch)
self.y_train_loss_history.append(self.loss(y[idx], self.output_model))
def predict(self):
"""
Used for predicting with the neural network
"""
print("Predicting")
print("--------------------")
running = True
while(running):
pred_data = []
for i in range(0, self.nn_architecture[0]["layer_size"]):
tmp_input = input("Enter " + str(i) + " value: ")
pred_data.append(tmp_input)
self.full_forward(np.asarray([pred_data], dtype=float))
print("Predicted Output: ", self.output_model)
print(" ")
running = input('Enter "exit" if you want to exit. Else press "enter".')
if running == "exit" or running == "Exit":
running = False
else:
running = True
def visulize(self):
data = {"x": self.x_train_loss_history, "train": self.y_train_loss_history}
data = pd.DataFrame(data, columns=["x", "train"])
sns.set_style("darkgrid")
plt.figure(figsize=(12, 6))
sns.lineplot(x="x", y="train", data=data, label="train", color="orange")
plt.xlabel("Time In Epochs")
plt.ylabel("Loss")
plt.title("Loss over Time")
plt.show()
if __name__ == "__main__":
# data for nn and target
x = np.array([[1, 0]], dtype=float)
y = np.array([[0, 1]], dtype=float)
# nn_architecture is WITH input-layer and output-layer
nn_architecture = [{"layer_type": "input_layer", "layer_size": 2, "activation_function": "none"},
{"layer_type": "hidden_layer", "layer_size": 2, "activation_function": "sigmoid"},
{"layer_type": "output_layer", "layer_size": 2, "activation_function": "sigmoid"}]
weights_data = [np.array([[2, 0.15, 0.2], [2, 0.25, 0.3]], dtype=float), np.array([[4, 0.4, 0.45], [4, 0.5, 0.55]], dtype=float)]
weights_data = weights_data
#, custom_weights=True, custom_weights_data=weights_data
NeuralNetwork_Inst = NeuralNetwork(x, y, nn_architecture, 0.1, 5)
NeuralNetwork_Inst.train(how_often=100, epochs=500)
NeuralNetwork_Inst.visulize()
Answer: Separation of concerns
The NeuralNetwork class is quite complex at the moment, since it implements network handling (training, ...), logging and even visualization. The good news is that there are already separate methods for them. My recommendation here would be to go one step further and move all the non-essential stuff (logging setup, visualization) out of the class. That will make the class much easier to maintain (and also to review). It will also very likely lead to greater flexibility, e.g. since the logging would not be hidden from the user.
Internal functions
There are quite a few internal/helper methods in the class that are only supposed to be used by the class itself, e.g. in __init__. As per the PEP 8 style guide, their names should start with a single underscore (e.g. def _check_input_output_dimension(...)) to mark them as "for internal use only" (there is no real private in Python). Following this convention makes it easier to tell the public and internal methods apart.
Activation and derivatives
All the activation functions and their derivatives are stateless, i.e. they don't really need to be instance methods. Consider removing them from the class and provide them as callbacks when describing the network structure. For example:
# they could also live in your library, maybe with a bit of documentation
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def sigmoid_derivative(x):
return sigmoid(x) * (1 - sigmoid(x))
# later or in an other file:
nn_architecture = [
{
"layer_type": "input_layer",
"layer_size": 2,
"activation_function": None
},
{
"layer_type": "hidden_layer",
"layer_size": 2,
"activation_function": {
"function": sigmoid,
"derivative": sigmoid_derivative
}
},
{
"layer_type": "output_layer",
"layer_size": 2,
"activation_function": {
"function": sigmoid,
"derivative": sigmoid_derivative
}
}
]
That will remove a lot of complexity from your implementation of various methods (e.g. activation_derivative and activate_neuron) and also makes it more extensible and flexible, since it's now up to the user to define new activation functions (and their derivative). Best practice implementations of the most common activation functions could still be part of your library, and you can even implement a helper function that does something like the following:
def get_activation(name):
if name == "sigmoid":
return {"function": sigmoid, "derivative": sigmoid_derivative}
elif name == "linear":
return {"function": linear, "derivative": linear_derivative}
elif ...:
...
# at the end, if no name matched
raise ValueError(f"No known activation function for name '{name}'")
or a dict
# a missing name would lead to a KeyError here, that maybe should be handled
# when used somewhere.
# also possible: implement get_activation from above using this dict,
# catch and transform the KeyError there
ACTIVATION = {
"sigmoid": {"function": sigmoid, "derivative": sigmoid_derivative},
"linear": {"function": linear, "derivative": linear_derivative},
...
}
This can also be hidden inside your network: if the user enters a string, as it is now, the network class uses either of the two approaches above to determine which functions to use, while still providing the possibility to pass custom functions as well.
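As a minimal sketch of that idea (the name resolve_activation and the fallback behaviour are my own assumptions, not part of the original code):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def sigmoid_derivative(x):
    return sigmoid(x) * (1 - sigmoid(x))

ACTIVATION = {
    "sigmoid": {"function": sigmoid, "derivative": sigmoid_derivative},
}

def resolve_activation(spec):
    # a string is looked up in the registry; anything else is assumed to be a
    # ready-made {"function": ..., "derivative": ...} dict supplied by the user
    if isinstance(spec, str):
        try:
            return ACTIVATION[spec]
        except KeyError:
            raise ValueError(f"No known activation function for name '{spec}'") from None
    return spec
```

The network would call resolve_activation on whatever the architecture dict contains, so both the string style and the callback style keep working.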
Type annotations and documentation
From what I can see, there are a few cases where the type annotations don't seem to fit. E.g.
def relu_derivative(self, x: List[float]) -> List[float]:
x[x <= 0] = 0
x[x > 0] = 1
return x
This won't work with List[float], but is tailored to numpy arrays. I'd try to annotate them with np.ndarray, but the numpy developers don't seem to have settled on a best practice in that regard yet (see this GitHub issue). I don't use type annotations all too much, so maybe I'm wrong here. But they are not binding, so there is not a lot that can go wrong in that regard apart from confusing other programmers and some tools like mypy ;-)
Since you are otherwise following the numpydoc convention, a quick note on that regard: most numpy functions that can work both with Python types (lists, tuples, ...) and numpy arrays, define the input/output type to be array_like (see np.sin for example).
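For what it's worth, a sketch of how the relu derivative could be annotated with np.ndarray (also rewritten so it no longer mutates the caller's array in place — an assumption on my part that this side effect is unwanted):

```python
import numpy as np

def relu_derivative(x: np.ndarray) -> np.ndarray:
    # build a new array: 1.0 where x > 0, else 0.0 (same result as the original)
    return np.where(x > 0, 1.0, 0.0)
```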
Logging
Logging is a great functionality to have at hand, but there can be vastly different needs. My recommendation in that regard would be not to impose any kind of details on the user. There is simply no need to force a European date format on somebody from somewhere else or force them to have their log written to a file, especially if they can neither control the name nor the location the log file is written to. Simply allow the user to pass an (optional) logger when building the network, and work with that. What happens if no logger is provided is up to you. Either setting up a simple console logger or no logging at all are sensible defaults in my opinion.
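A possible shape for that, assuming a heavily trimmed-down constructor (the class and method bodies here are placeholders, not the original implementation):

```python
import logging

class Network:
    def __init__(self, logger=None):
        # fall back to a module-level logger; impose no handlers, format or file
        self._logger = logger if logger is not None else logging.getLogger(__name__)

    def train(self):
        self._logger.info("starting training")
```

A user who wants file logging, their own date format, etc. simply configures a logger themselves and passes it in.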
Tool support
It was already mentioned in a comment on the other answer, that there are quite a few typos in comments and method names (e.g. visulize → visualize). There are tools like codespell or language plugins for the IDE of your choice (e.g. Code Spell Checker for VS Code) that can help you in that regard.
There are also a lot of other tools in the Python ecosystem that can help you to keep a consistent code style and the like. A non-exhaustive list can be found at this answer here on Code Review Meta.
That's it for now. I would strongly recommend to implement at least some of these changes before bringing the class up for another round of review. Including them will make it much easier to judge the implementation of the core algorithms itself, since I'd reckon they are a lot easier to follow then. | {
"domain": "codereview.stackexchange",
"id": 36838,
"tags": "python, python-3.x, numpy, neural-network"
} |
If Jupiter is a gas-giant then why don't its features change? | Question: A naive question. When we look at Jupiter, we see that its features didn't change largely over many years, for instance, the red-spot. If it is composed of gases and liquids, then why aren't the effects of mixing of these fluids visible?
My intuition is that due to very low temperatures ($-145\, ^{\circ}$C), diffusion of fluids doesn't occur and therefore the superficial appearance of Jupiter remains the same.
Answer: Believe it or not, Jupiter isn't too consistent. Take a look at these pictures, the first taken in 2009 and the second taken in 2010:
and
Quite the difference, eh? Why?
Jupiter's atmosphere is made of zones and belts. Zones are colder and are composed of rising gases; they are light-colored. Belts are warmer and are composed of falling gases; they are dark-colored. The reason the two don't intermix is because of constant flows of wind, similar to the jet stream. These winds make it hard for bands to mix.
There are two types of explanations for the jets. Shallow models say that the jets are caused by local disturbances. Deep models say that they are the byproduct of rotating cylinders comprising the mantle. At the moment, we don't know which explanation is correct. | {
"domain": "astronomy.stackexchange",
"id": 581,
"tags": "jupiter"
} |
How can tensorflow's session.run's fetch argument be an output of another function, without declaring that output's name as a placeholder | Question: I have the following code that I have simplified:
class someClass():
def __init__(self):
self.u_pred, self.v_pred = self.fnc(self.x_tf, self.y_tf)
self.sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(allow_soft_placement=True,
log_device_placement=True))
def predict(self, x, y):
tf_dict = {self.x_tf: x, self.y_tf: y}
u = self.sess.run(self.u_pred, tf_dict)
v = self.sess.run(self.v_pred, tf_dict)
return u, v
Here, how is sess.run able to fetch self.u_pred and self.v_pred when they are not even part of the graph, as they haven't been declared using tf.placeholder?
After going through this question, my understanding is that session calls the tf.placeholder and fetches the first argument passed and evaluates with the dictionary provided.
So in this case, is self.sess.run(self.u_pred, tf_dict) able to automatically find self.fnc() and plug in tf_dict = {self.x_tf: x, self.y_tf: y} to auto-evaluate the function?
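The fetch/feed mechanics in question can be illustrated with a toy stand-in (this is not TensorFlow's actual implementation, just a sketch of the idea):

```python
class Placeholder:
    pass

class Op:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

def run(fetch, feed):
    # walk the graph backwards from the fetched node; placeholders
    # get their values substituted from the feed dict
    if isinstance(fetch, Placeholder):
        return feed[fetch]
    return fetch.fn(*(run(i, feed) for i in fetch.inputs))

x_tf, y_tf = Placeholder(), Placeholder()
u_pred = Op(lambda a, b: a + b, x_tf, y_tf)  # like self.u_pred = self.fnc(...)
print(run(u_pred, {x_tf: 2, y_tf: 3}))       # prints 5
```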
Answer: TF graphs are frozen when you first create the session. Most tensorflow function definitions within a graph experience this event. As soon as this event happens, you are able to add new or existing functions. After you do this, the new or existing functions you added become frozen too. Note that you may be using an outdated version of tf. This is significant, as tf workflows may change after each major tf release, given that some tf functions may get deprecated. | {
"domain": "datascience.stackexchange",
"id": 12129,
"tags": "tensorflow"
} |
Permittivity $\mathcal{E}$ as a function of Voltage across a medium | Question: What I wanted to know was: is there a change in the dielectric permittivity constant as the voltage increases?
My question arose from the fact that for a breakdown voltage to be reached, the permittivity constant $\mathcal{E}_0$ must be a function of voltage.
Meaning that the capacitance of a capacitor must change as the voltage increases across it.
Further meaning that the charge as a function of voltage must not be linear for a (parallel plate) capacitor.
If it isn't, then the capacitance of a capacitor would also depend on the voltage across it.
So all my above questions boil down to one single question:
Is the permittivity constant $\mathcal{E}$ of a dielectric medium voltage dependent?
P.S. Plots please.
Answer: Generally speaking, the permittivity is not voltage dependent for voltages up to a substantial fraction of the breakdown voltage. So the capacitance of a capacitor is not voltage dependent.
Dielectric breakdown usually occurs due to some major change in the structure of the dielectric. The most obvious example is lightning. The permittivity of air is not voltage dependent until the field strength gets high enough to ionise the air, at which point the permittivity changes completely and you get a lightning bolt. Likewise, in a typical capacitor of the type used in your TV, at some point a high enough voltage will rip electrons free in the dielectric and you'll get a current flowing between the plates.
In both cases this isn't really a voltage dependence of the permittivity but rather a change in state of the dielectric that produces a new permittivity. | {
"domain": "physics.stackexchange",
"id": 3868,
"tags": "electrostatics, capacitance"
} |
My first project - mini hangman game | Question: I decided to polish my skills a little bit after learning about one dimensional arrays, and built my own mini project - hangman game.
I'd like to have some criticism, I know it's not perfect and far from it, but please be aware, it's my FIRST EVER real project with more than 20 + lines of code.
import java.util.Arrays;
import java.util.Scanner;
import java.util.Random;
public class Main {
static Scanner input = new Scanner(System.in);
public static void main(String[] args) {
String[] words = {"java", "hello"};
int random = (int)(0 + Math.random() * words.length);
String wordToGuess = words[random];
boolean isGuessed = false;
int counter=0;
int maxGuess = 7;
char[] chars = new char[wordToGuess.length()];
for (int i =0; i<chars.length; i++)
chars[i] = '-';
while (!isGuessed && counter <7) {
System.out.println("Enter your guess: ");
char guess = input.next().charAt(0);
counter++;
maxGuess--;
for (int i = 0; i < wordToGuess.length(); i++) {
if (wordToGuess.charAt(i) == guess)
chars[i] = guess;
}
if (doesContain(chars)) {
for (int j = 0; j < wordToGuess.length(); j++) {
System.out.print(chars[j] + " ");
}
System.out.println("Guess left: " + maxGuess);
} else {
System.out.println("Good job! The word is: ");
for (int j = 0; j < wordToGuess.length(); j++) {
System.out.print(chars[j] + " ");
}
System.out.print("Number of tries: ");
System.out.print(counter);
isGuessed = true;
}
}
}
public static boolean doesContain(char[] chars) {
for (int i =0; i<chars.length; i++) {
if (chars[i] == '-')
return true;
}
return false;
}
}
Answer: It is necessary to have consistent and correct (as per style convention) white space. This includes vertical alignment (such as indentation) and spacing between and around particular syntax.
Conventions on braces vary, but braces are often encouraged for single statement blocks. One reason is that a later revision may neglect to add braces when adding another statement to the block.
The following is how I might format the code. Note that I am not afraid to add extra blank lines to separate structures. I am not opposed to having three blank lines between methods, though I did not do that here.
import java.util.Arrays;
import java.util.Scanner;
import java.util.Random;
public class Main {
static Scanner input = new Scanner(System.in);
public static void main(String[] args) {
String[] words = {"java", "hello"};
int random = (int)(0 + Math.random() * words.length);
String wordToGuess = words[random];
boolean isGuessed = false;
int counter = 0;
int maxGuess = 7;
char[] chars = new char[wordToGuess.length()];
for (int i = 0; i < chars.length; i++) {
chars[i] = '-';
}
while (!isGuessed && counter < 7) {
System.out.println("Enter your guess: ");
char guess = input.next().charAt(0);
counter++;
maxGuess--;
for (int i = 0; i < wordToGuess.length(); i++) {
if (wordToGuess.charAt(i) == guess) {
chars[i] = guess;
}
}
if (doesContain(chars)) {
for (int j = 0; j < wordToGuess.length(); j++) {
System.out.print(chars[j] + " ");
}
System.out.println("Guess left: " + maxGuess);
} else {
System.out.println("Good job! The word is: ");
for (int j = 0; j < wordToGuess.length(); j++) {
System.out.print(chars[j] + " ");
}
System.out.print("Number of tries: ");
System.out.print(counter);
isGuessed = true;
}
}
}
public static boolean doesContain(char[] chars) {
for (int i = 0; i < chars.length; i++) {
if (chars[i] == '-') {
return true;
}
}
return false;
}
}
This blob concerns me.
String[] words = {"java", "hello"};
int random = (int)(0 + Math.random() * words.length);
String wordToGuess = words[random];
boolean isGuessed = false;
int counter = 0;
int maxGuess = 7;
char[] chars = new char[wordToGuess.length()];
It is best to keep separate what can be kept separate. I took a look at "x depends on y" written as x ⇒ y and came up with this.
chars ⇒ wordToGuess
wordToGuess ⇒ words
wordToGuess ⇒ random
random ⇒ words
Therefore we can break and reorder these variable declarations like so:
String[] words = {"java", "hello"};
int random = (int)(0 + Math.random() * words.length);
String wordToGuess = words[random];
char[] chars = new char[wordToGuess.length()];
boolean isGuessed = false;
int counter = 0;
int maxGuess = 7;
Then looking down a bit further there is a for-loop which only depends on chars, so we can pull that up.
String[] words = {"java", "hello"};
int random = (int)(0 + Math.random() * words.length);
String wordToGuess = words[random];
char[] chars = new char[wordToGuess.length()];
for (int i = 0; i < chars.length; i++) {
chars[i] = '-';
}
boolean isGuessed = false;
int counter = 0;
int maxGuess = 7;
There are seven variables in scope (i.e., accessible) for the rest of main. For each variable one must wonder "does this matter later?". This increases the effort to understand a block of code. Doing a text search I found that words and random are unused later. Therefore, let's more closely contain them.
String wordToGuess;
{
String[] words = {"java", "hello"};
int random = (int)(0 + Math.random() * words.length);
wordToGuess = words[random];
}
char[] chars;
{
chars = new char[wordToGuess.length()];
for (int i = 0; i < chars.length; i++) {
chars[i] = '-';
}
}
Now the program is well formatted and just by looking at dependencies we have improved the organisation (we did not have to know what the program does). Further improvements can come from rethinking some of how the program does its task. There seems to be quite a bit happening in the while-loop so I want to consider that first.
Reading the code and doing some string searching, I have determined what the while-loop reads and writes.
Reads & Writes
isGuessed
counter
maxGuess
chars
input
Reads Only
wordToGuess
Writes Only
N/A
Let's discover the purpose of each variable which is both read and written. Why these particularly? If a variable is only read it is invariant (unchanging) during the loop. If a variable is only written it has no effect on the loop behaviour.
input is used to read through stdin a character at a time, once each iteration.
counter is increased once per iteration. This is used to print the number of guesses thus far and to terminate the loop if !(counter < 7).
maxGuess is decreased once per iteration. This is used to print the number of guesses remaining.
chars is updated with the guessed letter once per iteration.
isGuessed is set to true only if the guessed word is complete. This is used to terminate the loop if !!isGuessed.
I see a redundancy between counter and maxGuess: counter = 7 - maxGuess, or equivalently maxGuess = 7 - counter. I think counting down to zero from maxGuess makes more sense than counting up to 7, so I will replace counter with maxGuess. Also I rename maxGuess to guessesRemaining and introduce a final variable maxGuesses assigned to 7.
boolean isGuessed = false;
final int maxGuesses = 7;
int guessesRemaining = maxGuesses;
while (!isGuessed && guessesRemaining > 0) {
System.out.println("Enter your guess: ");
char guess = input.next().charAt(0);
guessesRemaining--;
for (int i = 0; i < wordToGuess.length(); i++) {
if (wordToGuess.charAt(i) == guess) {
chars[i] = guess;
}
}
if (doesContain(chars)) {
for (int j = 0; j < wordToGuess.length(); j++) {
System.out.print(chars[j] + " ");
}
System.out.println("Guess left: " + guessesRemaining);
} else {
System.out.println("Good job! The word is: ");
for (int j = 0; j < wordToGuess.length(); j++) {
System.out.print(chars[j] + " ");
}
System.out.print("Number of tries: ");
System.out.print(maxGuesses - guessesRemaining);
isGuessed = true;
}
}
Final variables cannot be changed, so we know automatically that the most the while-loop can do with this variable is read it, which it does. Thus we have reduced the number of variables the loop both reads and writes and this can simplify comprehension.
While reading I found the method name doesContain does not suggest its purpose well. Thus I renamed it to isSolved. I also thought chars was too generic of a name so I renamed it to incompleteWord.
I found '-' was used as a placeholder letter in both main and isSolved. We want to avoid magic literals in our program. A "magic literal" is some literal (such as a string, character, or number) which occurs in one or more places and has an non-obvious and unstated reason for being the value it is. When reading code with magic literals we are not sure what the code does. When updating code with magic literals we are not sure where all we need to make the replacement, especially because there may be different reasons for a particular literal. To fix this we can simply name the literal.
static final char placeholderLetter = '-';
...
incompleteWord[i] = placeholderLetter;
...
if (incompleteWord[i] == placeholderLetter) {
There is much from here we can still do. However, I think this is a good place to stop seeing as this is your first substantial program.
I do not think giving advice or answers on how to implement a program from the abstract is on topic. Rather, this Q&A service is for suggesting improvements to code which is already working and understood. If you want to know how to add some ASCII art to your program then I suggest Stack Overflow.
This is my final version of your program after moving a couple more declarations around.
import java.util.Arrays;
import java.util.Scanner;
import java.util.Random;
public class Main {
static Scanner input = new Scanner(System.in);
static final char placeholderLetter = '-';
static final int maxGuesses = 7;
static final String[] words = {"java", "hello"};
public static void main(String[] args) {
String wordToGuess;
{
int random = (int)(0 + Math.random() * words.length);
wordToGuess = words[random];
}
char[] incompleteWord;
{
incompleteWord = new char[wordToGuess.length()];
for (int i = 0; i < incompleteWord.length; i++) {
incompleteWord[i] = placeholderLetter;
}
}
boolean isGuessed = false;
int guessesRemaining = maxGuesses;
while (!isGuessed && guessesRemaining > 0) {
System.out.println("Enter your guess: ");
char guess = input.next().charAt(0);
guessesRemaining--;
for (int i = 0; i < wordToGuess.length(); i++) {
if (wordToGuess.charAt(i) == guess) {
incompleteWord[i] = guess;
}
}
if (isSolved(incompleteWord)) {
System.out.println("Good job! The word is: ");
for (int j = 0; j < wordToGuess.length(); j++) {
System.out.print(incompleteWord[j] + " ");
}
System.out.print("Number of tries: ");
System.out.print(maxGuesses - guessesRemaining);
isGuessed = true;
} else {
for (int j = 0; j < wordToGuess.length(); j++) {
System.out.print(incompleteWord[j] + " ");
}
System.out.println("Guess left: " + guessesRemaining);
}
}
}
public static boolean isSolved(char[] incompleteWord) {
for (int i = 0; i < incompleteWord.length; i++) {
if (incompleteWord[i] == placeholderLetter) {
return false;
}
}
return true;
}
} | {
"domain": "codereview.stackexchange",
"id": 27220,
"tags": "java, beginner, hangman"
} |
Regarding Quasi-normal modes of black holes | Question: I am a Ph.D. student working on quasinormal modes of black holes. I am following the paper
https://ui.adsabs.harvard.edu/abs/1985ApJ...291L..33S/abstract which is perhaps the first paper on calculating quasinormal modes using the WKB approximation up to 1st order. The basic equation that describes perturbation of a Schwarzschild black hole is
$$\frac{d^2\psi}{dr_*^2} + \left\{\sigma^2 - \left[1 - \frac{2}{r}\right]\left[\frac{\lambda}{r^2} + \frac{2\beta}{r^3}\right]\right\}\psi = 0$$
(Equation 7 in the paper).
From my understanding, the term $[1 - (2/r)][\lambda/r^2 + (2\beta/r^3)]$ represents the potential. (Please correct me if I am wrong.)
My doubts are:
a] Where is this potential coming from? Is this the Regge-Wheeler potential?
b] Why is $\beta = 1, 0, -3$ for scalar, electromagnetic and gravitational perturbations respectively?
It would be very helpful if someone could clear my doubts or provide any helpful references. Thank you in advance!
Answer: The equation in question in is the (radial) Teukolsky equation, which describes black hole perturbations. It was derived by Saul Teukolsky in this 1973 paper. One of the remarkable aspects of the Teukolsky equation is that you can write a single equation with just one free parameter $s$ that describes the spin of the perturbing field. When $s=0$ the equation describes (massless) scalar perturbations. When $s=\pm 1$, the equation describes (massless) vector perturbations the most obvious example being the electromagnetic field, and when $s=\pm 2$ it describes (massless) tensor perturbations with the most obvious example the gravitational field. (Any half integer will work.)
In the equation of the paper you mention they use $\beta = 1 - s^2$, giving $\beta = 1$ for a scalar (s=0), $\beta = 0$ for an EM (s=-1) and $\beta = -3$ for a gravitational (s=-2) perturbation.
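With units $G = c = 1$ and black hole mass $M = 1$ as in the paper's equation, the potentials for the three spins can be written as a single family — a sketch assuming the standard Regge-Wheeler form, with $\lambda = \ell(\ell+1)$:

```latex
V_s(r) = \left(1 - \frac{2}{r}\right)
         \left[\frac{\ell(\ell+1)}{r^2} + \frac{2(1 - s^2)}{r^3}\right],
\qquad
\beta = 1 - s^2 =
\begin{cases}
\phantom{-}1 & s = 0 \ \text{(scalar)}\\
\phantom{-}0 & s = \pm 1 \ \text{(electromagnetic)}\\
-3 & s = \pm 2 \ \text{(gravitational)}
\end{cases}
```

This reproduces the $\beta = 1, 0, -3$ values quoted in the question.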
The equation in question is specialized to Schwarzschild, but the Teukolsky equation can also be written for perturbations of rotating Kerr black holes. | {
"domain": "physics.stackexchange",
"id": 92001,
"tags": "general-relativity, black-holes, perturbation-theory"
} |
How to create an executable in the $package/bin directory? | Question:
I tried to use this in my CMakefile:
rosbuild_add_executable(arp_core.arp arp_core_main.c)
INSTALL(TARGETS arp_core.arp RUNTIME DESTINATION bin)
but the exe does not go into the bin directory, it stays in the root of the package folder.
I also defined these:
set( CMAKE_INSTALL_PREFIX ${${CMAKE_PROJECT_NAME}_SOURCE_DIR})
set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)
Originally posted by Willy Lambert on ROS Answers with karma: 352 on 2011-02-26
Post score: 0
Answer:
You're not doing any installing, so the install command doesn't come into play. This should do it:
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_SOURCE_DIR}/bin)
see the docs at
cmake --help-full | less
You'd need to post your whole cmakelists for me to be sure.
Originally posted by Straszheim with karma: 426 on 2011-02-26
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 4878,
"tags": "rosbuild"
} |
Questions about fields | Question: My questions are the following:
I saw that a field was defined as a correspondence of a vector or a scalar to every point in space. I asked my teacher about this definition, and he told me that there is a better one: “a field is a region in space where a particle with a certain property (such as being charged or having a mass) would feel a force acting upon it”. My problem with his definition is that he ignored the fact that fields are vector or scalar quantities, not a “region in space” – a region in space has no magnitude and direction. Is his definition correct?
Later I read another definition – “a field is an influence an object has on the space around it”. Is there a way to make that definition more precise and rigorous? In my opinion, there is a problem with the word “influence” – this word is not defined in a precise and clear way – it can be misinterpreted.
I read that the field is the one who is exerting a force on an object, not the object itself.
I will try to be clearer – a charge creates a field around itself, and another charge wanders in that field. We then say that the force on the second charge is exerted by the field, not the charge that created the field. Could you explain this? It doesn’t make any sense – the field is just a model we created to describe the influence an object has on its environment. Is it just a convention that the field is the one exerting the force, not the charge that created the field?
How do the charges “sense” each other? Is it because of the field or the fact that they have the property of “being charged”?
Answer: Your three questions are three different ways of recasting a single problem: what is a field?
Let me start with what a field is not. It cannot be "a region of space" equipped with special properties. The reason is already evident in your comment, but I would add that there is no way to make sense of a negative or even complex region of space. There are historical reasons for this definition. Maxwell used this kind of definition to introduce the concept of field. But it should be seen as a pictorial way to speak about a physical property that shares with space a continuum nature. Unfortunately, this "region of space" definition is with us, and I am afraid it will remain, even if it is complete nonsense.
Now, what is a physical field? Stated this way, it looks like a tough question. It has been a long time since physics was concerned with the intrinsic nature of things, for the excellent reason that investigating how things work has proven much more fruitful than asking what things are.
To understand the concept of physical field, one has to remember the conceptual steps which brought about the status of physical system for this quantity.
The starting point is the description of force, in classical mechanics, as action at a distance. This description looks familiar to us, but it is not free from conceptual difficulties. How does a thing here know about the presence of another thing there? The classical definition of field is based on the finding that in some cases, the force on a test body which does not perturb the system too much depends only on a function of the point. This allows a formal shift from an action-at-a-distance concept to a local interaction between a body and a local quantity (the field). This description, alternative but equivalent to action at a distance, looks like just a formal trick at the static level. However, when time variations are taken into account, one discovers that some fields seem to have their own dynamics. It is possible to attach to them physical properties such as energy, momentum, and angular momentum. In other words, these auxiliary quantities for describing forces behave exactly like other entities that we consider physical systems. At this point, the conceptual shift is to assign the status of physical system to the field as well. We do not know what an electron or a proton is, but we know that it is something carrying some properties we can measure. In the same way, we can assign similar properties to an electromagnetic field, whatever its intrinsic nature is.
At this point, saying that charges sense each other through the intermediary of another physical system, the local field, looks more fundamental than speaking about action at a distance. The key point is that we are always speaking about our concepts to describe the world. We do not know what the world is. Still, this pragmatic approach works very well.
Summarizing, the concept of field is a useful concept to describe interactions in terms of quantities at the same point. It is convenient, but at the classical physics level, it would always be possible to maintain its auxiliary nature and remain with interaction at a distance. The price would be an extreme complexity of the resulting formalism. In particular, a relativistic description of interacting bodies without the concept of field would be significantly more complicated, not to mention the problem of the transition to Quantum Mechanics. | {
"domain": "physics.stackexchange",
"id": 72795,
"tags": "forces, field-theory, definition, interactions"
} |
While freezing ice from salt water, does the salt dissolve in the ice or is it separated out? | Question: I know that salt decreases the freezing point of water. We have freezing point (in Celsius), $\mathrm{T_f=-K_f\cdot m}$ (molality of salt). But, is it true that all of the solution doesn't freeze at the same temperature? (i.e. either parts of the ice form at temperature less than $0^\circ C$ but greater than $T_f$ or the process of freezing gets completed at a temperature less than $T_f$)
My question arises from the second part of the following question in my homework:
Two hypotheses came to my mind:
the molality of salt in the frozen part at temperature $T < 0$ is always such that the freezing point of that part is $T$
the molality of salt in the unfrozen part at temperature $T <T_f$ is always such that the freezing point of that part is $T$
Upon repeated trial and error, I found that the second hypothesis worked out to give the correct answer of $15g$ when I also assumed that no salt is being frozen with the ice. This was in accordance with the fact that icebergs in oceans (of salt water) are always composed of fresh water.
But this led to another question. Like salt, sugar is also a non-volatile solute. How are ice lollies sweet if sugar can't be frozen into ice?
I would like to know at which part of my deduction am I going wrong.
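As a numerical sketch of the second hypothesis (the cryoscopic constant of water and the quantities below are assumed for illustration only):

```python
K_f = 1.86  # °C·kg/mol, cryoscopic constant of water (assumed value)

def freezing_point(moles_salt, kg_liquid_water):
    # T_f = -K_f * m: depends only on the salt dissolved in the liquid part
    return -K_f * (moles_salt / kg_liquid_water)

# salt stays in the brine, so as pure ice freezes out the liquid shrinks,
# the molality rises, and the freezing point of the remaining brine drops
moles = 0.5
for kg_left in (1.0, 0.5, 0.25):
    print(kg_left, round(freezing_point(moles, kg_left), 2))
```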
Answer: Your process occurs under fractional crystallization, which is a separations process. Assuming that freezing takes place relatively "slowly" (e.g., under conditions where "flash" freezing doesn't occur), ice crystals will form in the salt water solution at a temperature below 0 C, and no salt will be incorporated in those ice crystals. As freezing proceeds, this obviously leads to higher salt concentrations in the remaining water, so the freezing temperature will drop as freezing proceeds. | {
"domain": "physics.stackexchange",
"id": 70317,
"tags": "thermodynamics, energy, water, phase-transition, ice"
} |
What utility does the tau bond model of orbital overlap have? | Question: In his book on molecular orbital theory, Molecular Orbitals and Organic Chemical Reactions, Ian Fleming notes that Pauling formulated an early alternative model to Hückel theory for explaining the bonding of simple conjugated polyenes. Evidently, this came to be called τ-bonding.
Fleming describes it as a modification to, or offshoot from, the hybridization model, in which orbitals similar to sp3-hybridized orbitals are combined. Fleming says that the τ-bond model makes the extent of the conjugation less obvious, but that the model
[…] might have some virtues, not present in the Hückel model, especially in trying to explain some aspects of stereochemistry.
I don't have access to the relevant primary literature, and the treatment of the subject in Fleming's book amounts to one paragraph with an accompanying diagram. A search online was relatively fruitless.
Is τ-bonding just the historical precursor to the bent/banana-bond model, or does it have some unique independent significance that makes it directly relevant to modern chemists?
What are the specific "virtues" that Fleming might have been referring to, and what, if any, advantages does the τ-bond model have over Hückel theory and/or other approaches grounded in MO theory?
Answer:
τ-Bonds provide an alternative description of electron density in alkenes and alkynes valid even in the modern chemistry.
A good example is the conformational preferences in propene with the lower energy of the “eclipsed” conformation [1].
Reference
Deslongchamps, G.; Deslongchamps, P. Bent Bonds, the Antiperiplanar Hypothesis and the Theory of Resonance. A Simple Model to Understand Reactivity in Organic Chemistry. Org. Biomol. Chem. 2011, 9 (15), 5321. DOI: 10.1039/C1OB05393K. | {
"domain": "chemistry.stackexchange",
"id": 1597,
"tags": "bond, molecular-orbital-theory, hybridization, bent-bond"
} |
Why can't a balance that can weigh to the nearest milligram be able to weigh 23.8 milligrams of something? | Question: I'm having trouble understanding the answer to this question:
Most chemistry laboratories have balances that can weigh to the nearest milligram, would it be possible to weigh $5.64\times10^{18}$ molecules of octadecane, $\ce{C18H38}$, on such a balance?
I worked through the equation and my end result was $\pu{0.00238 g}$ which is equal to $\pu{2.38 mg}$ so I thought that the answer was that it could, but the book states that the answer is that it cannot. Why can't it? Am I misunderstanding how weighing to the nearest $\pu{mg}$ works?
Answer: Accepting what is the "wrong" answer bugs me. Although I hope the point about significant figures has been made adequately, let me go into more detail.
QUESTION
Most chemistry laboratories have balances that can weigh to the nearest milligram, would it be possible to weigh $5.64×10^{18}$ molecules of octadecane, $\ce{C18H38}$, on such a balance?
Li Zhi correctly points out that octadecane has a molecular weight of 254.5 g/mole and that $5.64×10^{18}$ molecules would weigh 2.38 milligrams considering the significant figures.
Li Zhi makes the correct assessment when he says "It certainly cannot weigh that amount with adequate precision for most purposes I can think of."
Li Zhi is also correct when he says that "A balance with an ability to weigh to the nearest mg would, in a perfect world give you a mass of 2 or possibly 3 mg." But this is accuracy not precision.
Thus a balance has two factors to consider. Accuracy and precision. A balance which weighs to 1 mg has neither the accuracy nor the precision to weigh out 2.38 milligrams.
Accuracy: On average 2 mg will be 0.38 mg too low and 3 mg will be 0.62 mg too high. So either weight is biased.
Now if a chemist weighs out 2 mg 62% of the time and 3 mg 38% of the time, then on average the chemist weighs out 2.38 mg. (See how stupid this is?)
Let's look at the reverse too. On average 2 mg will contain $4.73\times10^{18}$ molecules and 3 mg will on average contain $7.10\times10^{18}$ molecules.
Precision: $5.64×10^{18}$ molecules of octadecane implies +/- 1 part in 564. If we assume the balance rounds, which is reasonable, then 2 mg is precise to 1 part in 4, not nearly enough.
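A quick back-of-the-envelope check (a sketch, assuming the textbook values for Avogadro's number and the molar mass of octadecane) reproduces the 2.38 mg figure that sits between the balance's 2 mg and 3 mg readings:

```python
AVOGADRO = 6.022e23      # molecules per mole
MOLAR_MASS = 254.5       # g/mol for octadecane, C18H38

molecules = 5.64e18
mass_mg = molecules / AVOGADRO * MOLAR_MASS * 1000  # grams -> milligrams
# mass_mg is about 2.38 mg, which a balance reading to the nearest
# milligram can only report as 2 mg or 3 mg
```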
The point here overall is that chemists make assumptions all the time to simplify problems using significant figures. Using significant figures has to become innate to a chemist so that one can cut through the Gordian knot offered by so many chemistry problems. | {
"domain": "chemistry.stackexchange",
"id": 7427,
"tags": "physical-chemistry"
} |
$C^\infty$, nonvanishing parallel vector field along geodesic, orthogonal to tangent | Question: The following question(s) showed up in my admittedly basic undergraduate research in general relativity/cosmology, and I was wondering if anybody could help me with it.
Let $(X, g)$ be a $n$-dimensional Riemannian manifold, and $\gamma: S^1 \to X$ a $C^\infty$ embedded closed geodesic.
If $n = 2m$, does there exist a $C^\infty$, nonvanishing parallel vector field $U(t)$ along $\gamma(t)$, which is orthogonal to $\gamma'(t)$?
What can we say if $X$ is not orientable?
Answer: We can always find such a vector field when $n=2m$ and $M$ is orientable.
Proof. Let $\gamma:[0,a]\to M$ be the geodesic in question, with $\gamma(0)=\gamma(a)=p$. Let $P_t:T_{p}M\to T_{\gamma(t)}M$ be the parallel transport along this geodesic. From Picard-Lindelöf, it is clear that $P_a\gamma'(0)=\gamma'(a)=\gamma'(0)$. Thus $P_a$ is an isomorphism of $T_pM$ that fixes $\gamma'(0)$. This means that $P_a$ is a rotation (since it preserves lengths and angles) in the subspace of $T_pM$ orthogonal to $\gamma'(0)$, $T_pM^\bot$. Thus, on $T_pM^\bot$, $P_a\in\mathrm{SO}(2m-1)$. We need the following Lemma: If $n$ is odd, any $O\in\mathrm{SO}(n)$ has at least one fixed point. Thus $P_a\lvert T_pM^\bot$ has at least one fixed point, $U$. Then define $U(t)=P_tU$, which is a vector field along $\gamma$ with the desired properties. $\quad\Box$
Proof of the Lemma. Consider $O\in\mathrm{SO}(n)$ as a matrix, it will have unit determinant. Equivalently, the product of its eigenvalues is one. If $O$ has a complex eigenvalue $\lambda$, it will be accompanied by the complex conjugate eigenvalue $\bar\lambda$ by this theorem. If $\lambda$ is real, we have $\lambda=\pm 1$ because $O$ leaves the length of the eigenvector $v$ invariant. But any negative eigenvalue must be accompanied by another negative one because the determinant (product of eigenvalues) is positive. Then, since $n$ is odd, after taking away the negative and complex eigenvalue pairs we are left with at least one positive eigenvalue $+1$. $\quad\Box$
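The Lemma is also easy to check numerically. The sketch below (an illustration, not part of the proof) builds a random element of $\mathrm{SO}(3)$ from a QR factorization and verifies that $+1$ occurs among its eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Q, R = np.linalg.qr(A)        # Q is orthogonal, i.e. Q is in O(3)
if np.linalg.det(Q) < 0:      # flip one column to force det(Q) = +1
    Q[:, 0] = -Q[:, 0]

eigenvalues = np.linalg.eigvals(Q)
# n = 3 is odd, so some eigenvalue equals +1 (up to rounding error);
# the corresponding eigenvector is a fixed point of the rotation
assert min(abs(eigenvalues - 1)) < 1e-10
```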
We get $\mathrm{SO}(n-1)$ instead of $\mathrm{O}(n-1)$ because parallel transport preserves orientation. To see this, let $\omega$ be the Riemannian volume form of $M$ and $E_1,\dotsc,E_n$ an oriented orthonormal basis of $T_pM$. Let $c:[0,a]\to M$, $c(0)=p$, be a smooth curve and $P_t$ its parallel transport. Since $P_t$ preserves angles and lengths, $P_tE_1,\dotsc,P_tE_n$ is an orthonormal basis of $T_{c(t)}M$. Consider the function $f(t)=\omega(P_tE_1,\dotsc,P_tE_n)$. Then, by definition, $f(0)=1$. Suppose that $f(a)<0$, which indicates a reversal of orientation at some point along $c$. By Picard-Lindelöf, $f(t)$ is smooth in $t$, so by the intermediate value theorem there is a $t^*\in[0,a]$ such that $f(t^*)=0$. But then $P_{t^*}E_1,\dotsc, P_{t^*}E_n$ is not an orthonormal basis of $T_{c(t^*)}M$, a contradiction.
In the odd-dimensional and nonorientable case, the best we can say is that $P_a\lvert T_pM^\bot\in \mathrm{O}(2m)$, which need not have a fixed point at all. In fact, the existence of such a $U(t)$ is equivalent to the existence of a fixed point. In the even-dimensional nonorientable case, we have $P_a\lvert T_pM^\bot\in \mathrm{O}(2m-1)$, which also need not have a fixed point. | {
"domain": "physics.stackexchange",
"id": 30995,
"tags": "general-relativity"
} |
Why don't ionic compounds show stereoisomerism? | Question: When I was reading about crystalline nature of ionic compounds, I came across the statement that ionic compound doesn't show stereoisomerism. What does that mean and can anybody explain the reason with an example?
Answer: They can, and they do. The secret is to have a chiral cation and separately a chiral anion. Letting $D^+$ be the dextrorotatory form of the cation, $L^+$ be the levorotatory form, and anologously for the anion, we then have four isomeric salts:
$D^+D^-$
$D^+L^-$
$L^+D^-$
$L^+L^-$
The first and second are diastereomers because only the anion is mirror-reflected; the cation is not mirror-reflected. Similarly for three of the other five possible pairs; only two of the six possible pairs are enantiomeric.
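That count can be verified mechanically (a sketch; the letters 'D' and 'L' simply label the handedness of the cation and anion):

```python
from itertools import combinations, product

def mirror(salt):
    # reflection inverts the handedness of both the cation and the anion
    return tuple('L' if c == 'D' else 'D' for c in salt)

salts = list(product('DL', repeat=2))       # (cation, anion): 4 salts
pairs = list(combinations(salts, 2))        # 6 distinct pairs
enantiomers = [p for p in pairs if mirror(p[0]) == p[1]]
diastereomers = [p for p in pairs if mirror(p[0]) != p[1]]
# 2 enantiomeric pairs and 4 diastereomeric pairs, as described above
```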
Being diastereomers instead of enantiomers, the first and second salts, for instance, may have different physical properties such as solubility in a given solvent. This may be used for separating different enantiomers of the anion (or, with a different pair such as first and third, separating different enantiomers of the cation). And it's actually done. See here for a brief summary and here for an example of diastereomeric salt formation. | {
"domain": "chemistry.stackexchange",
"id": 12145,
"tags": "stereochemistry, crystal-structure, ionic-compounds"
} |
Download stock data from Yahoo Finance | Question: This Python 3.4 script downloads stock data and puts it into an Excel file.
## Imports/Initiation
# Put a "#" in front of the one you don't want to use.
signs = 'a abc abt ace acn act adbe adi aet afl agn agu aig all alxn amgn amt amzn apa apc apd axp azo ba bac bam bax bbby bdx ben bfb bhi bhp biib bmy bp brk-b bud bwa bxp c cah cam cat cbs celg cern chkp ci cmcsa cme cmg cmi cnq cof cog coh cost cov cs csco csx ctsh ctxs cvs cvx dal dd deo dfs dgx dhr dis dlph dov dtv dva dvn ebay ecl el emc emn enb eog epd esrx esv etn f fb fdx fis flr gd ge gild gis glw gm gps gsk gww hal hd hes hmc hog hon hot hst hsy hum ice intc ip isrg jci jnj jpm kmp kmx ko kr krft kss l lly low lvs lyb m ma mar mat mcd mck mdlz mdt met mfc mhfi mmc mo mon mos mpc mrk mro mro ms msft mur myl nbl ne nem nke nlsn nov nsc nue nvs orcl orly oxy pcp pep pfe pg ph pm pnc pnr ppg pru psx px pxd qcom qqq regn rio rl rop rost rrc rsg sbux se shw sjm slb slm sndk spg stt stz su swk syk tck tel tjx tm tmo trow trv twc twx tyc ual unh unp ups utx v vfc viab vlo vno vz wag wdc wfc wfm wmb wmt wy wynn yhoo yum zmh'
#signs = '' # Testing purposes
dates = [1429228800 , 1431648000, 1434672000, 1442534400]
import time
from subprocess import call
import os
import sys
from datetime import datetime
# This shows all modules that failed to install rather than just the first.
importErrors = []
try:
import requests
except ImportError:
importErrors.append('requests')
try:
from lxml import html
except ImportError:
importErrors.append('lxml')
try:
import xlsxwriter
except ImportError:
importErrors.append('xlsxwriter')
if importErrors != []:
raise ImportError('Unable to import {}'.format(', '.join(importErrors)))
dt = datetime.fromtimestamp(time.time())
date = dt.strftime('%d-%m-%Y')
path = 'options_report_{}.xlsx'.format(date)
try:
excel = xlsxwriter.Workbook(path)
except:
sys.exit('Unable to open workbook. Please close it if it is open and try again.')
start = time.time()
try:
test_web = requests.get('http://yahoo.com')
except:
# raise ConnectionError('Unable to contact the Internet. Please check your connection and try again.')
pass
## Download Data
signs = signs.upper().replace('  ', ' ').split(' ')
site = 'https://finance.yahoo.com/q/op?s={}&date={}' # Call .format(sign, date)
left_col = "//div[@id='optionsCallsTable']//tbody/tr"
path_table = "//div[@id='optionsCallsTable']//tbody/tr[{}]/td/*//text()"
path_last = "//*[@id='yfs_l84_{}']//text()" # Call .format(sign)
site_2 = 'https://finance.yahoo.com/q/in?s={}+Industry' # .format(sign)
paths_info = ['//*[@id="yfi_rt_quote_summary"]/div[1]/div/h2/text()', '//tr[1]/td/a/text()', '//tr[2]/td/a/text()']
all_data = {}
for sign in signs:
all_data[sign] = {}
print('\n{:{}} ({:{}} of {})'.format(
sign, len(max(signs, key=len)) + 1, signs.index(sign) + 1,
len(str(len(signs))), len(signs)
), end='')
page = requests.get(site_2.format(sign))
tree = html.fromstring(page.text)
try:
all_data[sign]['Info'] = [tree.xpath(path)[0] for path in paths_info]
except IndexError:
print(' Error: stock does not exist.', end='')
else:
for date in dates:
all_data[sign][date] = []
print('.', end='')
page = requests.get(site.format(sign, date))
tree = html.fromstring(page.text)
left_data = tree.xpath(left_col) # So we know how many rows there are
exists = True
for row_n in range(len(left_data)):
temp_row = tree.xpath(path_table.format(row_n + 1))
try:
temp_row.insert(0, tree.xpath(path_last.format(sign))[0])
except IndexError as e:
exists = False
if exists:
all_data[sign][date].append(temp_row)
if not exists:
print(' Stock does not exist.', end='')
break
print() # Allow printing of the last line
download_end = time.time()
print('Download completed in {:.2f} seconds (average {:.2f} seconds per stock)'.format(download_end - start, (download_end - start) / len(signs)))
## Format Data
formats = [
'str', 'str', 'str', 'str', 'float',
'str', 'str_f', 'str', 'float', 'float',
'float', 'int', 'int', 'float', 'float_f',
'int_f', 'int_f', 'int_f', 'float_f', 'percent_f',
'percent_f', 'float_f', 'percent_f', 'percent_f', 'str_f'
]
headers = [
'co_symbol', 'company', 'industry', 'sector', 'Last',
'Option', 'exp_date', 'Call', 'Strike', 'Bid',
'Ask', 'Open interest', 'Vol', 'Last', '3/24/2015',
'days', '60000', ' $invested', ' $prem', ' prem%',
'annPrem%', ' MaxRet', ' Max%', 'annMax%', '10%'
]
data = []
for sign in all_data:
for date in all_data[sign]:
if date != 'Info':
for r in all_data[sign][date][:]:
# human-readable date = hrd
try:
hrd_lst = [r[2][-15:-9][x:x + 2] for x in range(0, 6, 2)]
except IndexError as ie:
raise IndexError(ie.args, r) from ie
hrd_str = '/'.join((hrd_lst[1], hrd_lst[2], hrd_lst[0]))
try:
row = [sign, all_data[sign]['Info'][0], all_data[sign]['Info'][1], all_data[sign]['Info'][2], r[0],
r[2], hrd_str, 'C', r[1], r[4],
r[5], r[9], r[8], r[3], '=IF(J{n}<F{n},(J{n}-F{n})+K{n},K{n})',
'=H{n}-P$6', '=ROUND(R$6/((F{n}-0)*100),0)', '=100*R{n}*(F{n}-0)', '=100*P{n}*R{n}', '=T{n}/S{n}',
'=(365/Q{n})*U{n}', '=IF(J{n}>F{n},(100*R{n}*(J{n}-F{n}))+T{n},T{n})', '=W{n}/S{n}', '=(365/Q{n})*X{n}', '=IF((ABS(J{n}-F{n})/J{n})<Z$6,"NTM","")']
except IndexError as ie:
raise IndexError(row) from ie
data.append(row)
# Check that everything that's supposed to be the same length is.
if len(formats) != len(headers) or len(headers) != len(data[0]):
raise Exception('The "formats" list, "headers" list, and rows in the data are not all the same length!')
for row in data:
for i, cell in enumerate(row):
if '_f' in formats[i]:
row[i] = str(row[i])
elif 'percent' in formats[i]:
row[i] = float(row[i].replace('%', ''))
else:
try:
row[i] = eval('{}(row[i])'.format(formats[i].replace('_f', '')))
except ValueError:
if '-' in row[i]:
row[i] = str(row[i])
## Output Data
write_start = time.time()
sheet = excel.add_worksheet()
r_offset, c_offset = 5, 1
pt = excel.add_format({'num_format': '0.00\%'})
ft = excel.add_format({'num_format': '0.00'})
it = excel.add_format({'num_format': '0'})
sr = excel.add_format({})
fa = excel.add_format({})
print('Writing data...', end='')
for i, header in enumerate(headers):
sheet.write(r_offset, i + c_offset, header)
r_offset += 1
for r, row in enumerate(data):
for c, cell in enumerate(row):
if '_f' in formats[c]:
sheet.write(r + r_offset, c + c_offset, cell.format(
n=str(r + r_offset + 1)), eval(formats[c][0] + formats[c][-3]))
else:
sheet.write(r + r_offset, c + c_offset, cell, eval(formats[c][0] +
formats[c][-1]))
excel.close()
## Finish Up
end = time.time()
print(' Completed in {:.2f} seconds'.format(end - write_start))
print('Script completed in {:.2f} seconds'.format(end - start))
try:
os.startfile(path)
except OSError:
print('Unable to open Excel. The file is called {}.'.format(path.split('/')[-1]))
print('Press Enter to exit')
end = input()
First, the import section. How can I make this less... er... bulky? It works, but doesn't seem very Pythonic.
Next, the formatting section. This is quite convoluted. How can I slim it down?
And finally, the writing section. Is there a better way to prevent conflicts with the actual filetype names that are in the formats list than just having the first and last letters?
Answer: A few general comments:
It's good that you're being proactive in trying to catch problems, but you could be losing valuable information. An Exception includes an error message and a traceback, both of which are useful for debugging. It's better to let those show up, so that you have all that information for debugging, than to wrap it with your own message that might be inaccurate or incorrect.
Use better variable names! It makes your code easier to read if your variable names match their intended function.
You should comment your code explaining why something is set up in a particular way: for example, the URL options in the finance.yahoo.com requests are meaningless to me because I've never used that API. You should explain roughly what these options are doing, at a high level at least, and what you expect to happen.
Here are some suggestions:
Drop the try ... except ImportError blocks. Although it might be useful to see all the modules which failed to import, you could actually be losing information. If the module fails to import, it might give you some information in the exception message that's more useful than simply knowing it went wrong. Don't throw that away; just do the import the usual way.
If the script crashes out at the first import, that's okay.
Note also that rather than testing if importErrors != [], you could just do if importErrors, because an empty list is implicitly coerced to False.
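That truthiness shortcut behaves like this (a minimal illustration):

```python
import_errors = []                  # an empty list is falsy
if import_errors:                   # so this branch is skipped entirely
    raise ImportError('Unable to import {}'.format(', '.join(import_errors)))

import_errors.append('lxml')        # a non-empty list is truthy
assert bool([]) is False and bool(import_errors) is True
```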
PEP 8, the Python style guide, has a few things to say about imports. In particular:
Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants.
You've inadvertently grouped the imports with your ImportError blocks. There's also a convention that module imports should be alphabetically ordered, which is a good habit to get into.
Another common convention is that module constants are in UPPERCASE to make them easy to spot. You should do this with your signs and dates variables, and also make the names more descriptive – starting at the top of your file, I don't know what they're for.
There's no need to go to the time module to get the current date; the datetime module can do that for you. This is how it works:
from datetime import datetime
today_date = datetime.now().strftime('%d-%m-%Y')
spreadsheet_path = 'options_report_{date}.xlsx'.format(date=today_date)
Note that I've also tried to give the variables more descriptive names.
Next, opening the workbook. Again, by trying to wrap the error in try ... except, you could be losing information. Two reasons why:
Using bare except is a bad idea, because it can catch errors you didn't mean to (such as SystemExit and KeyboardInterrupt). It's better to catch the specific exception you meant to, and let others bubble up.
When you try to open the workbook, there could be all sorts of problems – the file might not exist, there might be a file lock, it could be the wrong format, etc. That information will show up in the Exception, but not if you mask it with your own message.
Likewise, don't wrap the exception around requests.get('yahoo.com'). And don't save the result of that line – you don't do anything with it afterwards.
When you split signs, you can simplify this line. If you don't specify a separator, split() will just use whitespace, and compress consecutive runs of whitespace into a single split. From the stdtypes docs:
str.split([sep[, maxsplit]])
If sep is not specified or is None, a different splitting algorithm is applied: runs of consecutive whitespace are regarded as a single separator, and the result will contain no empty strings at the start or end if the string has leading or trailing whitespace.
So this line could be simply signs = signs.upper().split().
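For example (illustrative input, not the real ticker list):

```python
raw = '  a  abc abt\tace '
# split() with no separator collapses runs of whitespace (spaces, tabs)
# and drops leading/trailing whitespace entirely
assert raw.upper().split() == ['A', 'ABC', 'ABT', 'ACE']
# contrast with an explicit separator, which keeps empty strings:
assert raw.split(' ')[0] == ''
```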
Rather than assigning the values of all_data as empty dicts for every new key, you might want to look at collections.defaultdict. This is a rather handy object that mimics a dictionary, but allows you to set a default value. So you'd set it up as follows:
import collections
all_data = collections.defaultdict(dict)
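With that in place, missing keys spring into existence as empty dicts on first access (a small demonstration; the ticker 'XYZ' is made up):

```python
import collections

all_data = collections.defaultdict(dict)
# no all_data['XYZ'] = {} needed before the nested assignment:
all_data['XYZ']['Info'] = ['Example Corp', 'Some Industry', 'Some Sector']
all_data['XYZ'][1429228800] = []
```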
You can simplify this block of code:
try:
temp_row.insert(0, tree.xpath(path_last.format(sign))[0])
except IndexError as e:
exists = False
if exists:
all_data[sign][date].append(temp_row)
Rather than assigning to all_data after you've finished the try ... except block, do it within the try:
try:
temp_row.insert(0, tree.xpath(path_last.format(sign))[0])
all_data[sign][date].append(temp_row)
except IndexError as e:
exists = False
If you hit an Exception, you'll never run that line anyway, but now the code is a little simpler. | {
"domain": "codereview.stackexchange",
"id": 14370,
"tags": "python, python-3.x, excel, finance, web-scraping"
} |
Slow performance in looping and filling missing data | Question: I wrote some code that loops for specific data and then fills the missing cells in another sheet. The code works perfectly but it takes too much time to fill the missing cells (values).
What I tried to do is to test if Cell B is blank or not, then I created 2 variables:
x for activesheet values which need to be filled and
y for sheet 1 (source), and keep comparing until x matches y to take the value in front of the specific data.
The code I came up with:
Sub TraiterNoms()
Application.ScreenUpdating = False
Application.DisplayAlerts = False
Dim i As Variant
Dim CompareRange As Variant
Dim x As Variant
Dim y As Variant
Dim derlignE As Variant
Dim derlignC As Variant
derlignE = Range("A" & Rows.Count).End(xlUp).Row
derlignC = Sheets("Feuil1").Range("B" & Rows.Count).End(xlUp).Row
Set CompareRange = Sheets("Feuil1").Range("A:A").resize(derlignC, 1)
For i = Range("A" & Rows.Count).End(xlUp).Row To 1 Step -1
If Range("B" & i) = "" Then
For Each x In Range("A:A").resize(derlignE, 1)
For Each y In CompareRange
If x = y Then x.Offset(0, 1) = y.Offset(0, 1)
Next y
Next x
End If
Next i
Application.ScreenUpdating = True
Application.DisplayAlerts = True
End Sub
I feel like I could improve this code to make it more fluent. Would that be over-complicated?
What's there to say about this code?
Answer: General Observations
Note: Feuil is this Excel version's name for Sheet.
What the OP's code does is assign the date of the last occurrence of each Id in Feuil1 to the matching IDs on Feuil2. I assume that the OP is actually interested in the latest data because the data is sorted by date ascending.
It seems odd that there are multiple occurrences of Ids on Feuil2. I assume that this is because the OP is still testing.
The OP stated that he wants to "Test if Cell B is blank or not". The OP needed this to keep the last occurrence of the ID from being overwritten. I handle this by saving the latest date associated with an ID in the Dictionary lookup.
Neither of the lists has headers. Unless there is a compelling reason for this, add headers to your lists.
Performance
Collections are ideal for looking up values associated with Ids in a list. The values are stored as Key/Value pairs. There are many kinds of Collection but Scripting Dictionaries are the easiest to use. I will provide examples of using a Scripting Dictionary and a SortedList in my code below.
Working with your data in an Array is far more efficient than working with a Range. You will receive a small performance boost by Reading the data from an Array and a huge boost by writing the data to the Range in one operation using an Array. Always remember that Reading data is a cheap operation and Writing data is relatively expensive in comparison.
Reference: Excel VBA Introduction Part 25 - Arrays
In my code below I do not bother to turn off Application.ScreenUpdating, because using the lookups and, more importantly, writing the data from an Array to the worksheet in one operation is that fast.
Example 1: Dictionary - Match IDs
In this example I store the latest date associated with an ID as a Key/Value pair in a dictionary. I then create an array data2A to store the Ids to match and data2B to store the associated dates. Finally I write the associated dates data2B to the Feuil2 Column B.
Sub TraiterNoms1()
Dim data1 As Variant, data2A As Variant, data2B As Variant
Dim x As Long
Dim key As Variant
Dim dic As Object, Source As Range
Set dic = CreateObject("Scripting.Dictionary")
With Worksheets("Feuil1")
data1 = .Range("A1:G1", .Range("B" & Rows.Count).End(xlUp))
'Add the latest date with the IDs on Sheet1 to the Dictionary
For x = 1 To UBound(data1)
key = data1(x, 1)
If dic.Exists(key) Then
If dic(key) < data1(x, 7) Then dic(key) = data1(x, 7)
Else
dic.Add key, data1(x, 7)
End If
Next
End With
With Worksheets("Feuil2")
Set Source = .Range("A1", .Range("A" & Rows.Count).End(xlUp))
data2A = Source.Value
ReDim data2B(1 To UBound(data2A), 1 To 1)
For x = 1 To UBound(data2A)
key = data2A(x, 1)
data2B(x, 1) = dic(key)
Next
Source.Offset(0, 1).Value = data2B
End With
End Sub
Example 2: Dictionary - Write Unique IDs and Matching Values to Feuil2
Sub TraiterNoms2()
Dim data1 As Variant
Dim x As Long
Dim key As Variant
Dim dic As Object
Set dic = CreateObject("Scripting.Dictionary")
With Worksheets("Feuil1")
data1 = .Range("A1:G1", .Range("B" & Rows.Count).End(xlUp))
'Add the latest date with the IDs on Sheet1 to the Dictionary
For x = 1 To UBound(data1)
key = data1(x, 1)
If dic.Exists(key) Then
If dic(key) < data1(x, 7) Then dic(key) = data1(x, 7)
Else
dic.Add key, data1(x, 7)
End If
Next
End With
With Worksheets("Feuil2")
.Columns("A:B").ClearContents
.Range("A1:B1").Value = Array("Items", "Latest Date")
.Range("A2").Resize(dic.Count).Value = Application.Transpose(dic.Keys)
.Range("B2").Resize(dic.Count).Value = Application.Transpose(dic.Items)
End With
End Sub
Example 3: SortedList - Write Sorted Unique IDs and Matching Values to Feuil2
Sub TraiterNoms3()
Dim data1 As Variant, data2AB As Variant
Dim x As Long
Dim key As Variant
Dim sList As Object
Set sList = CreateObject("System.Collections.SortedList")
With Worksheets("Feuil1")
data1 = .Range("A1:G1", .Range("B" & Rows.Count).End(xlUp))
'Add the latest date with the IDs on Sheet1 to the Dictionary
For x = 1 To UBound(data1)
key = data1(x, 1)
If sList.Contains(key) Then
If sList(key) < data1(x, 7) Then sList(key) = data1(x, 7)
Else
sList.Add key, data1(x, 7)
End If
Next
End With
ReDim data2AB(1 To sList.Count, 1 To 2)
For x = 0 To sList.Count - 1
key = sList.getKey(x)
data2AB(x + 1, 1) = key
data2AB(x + 1, 2) = sList(key)
Next
With Worksheets("Feuil2")
.Columns("A:B").ClearContents
.Range("A1:B1").Value = Array("Items", "Latest Date")
.Range("A2").Resize(sList.Count, 2).Value = data2AB
End With
End Sub | {
"domain": "codereview.stackexchange",
"id": 29600,
"tags": "performance, vba, excel"
} |
Why does a non-functional retinoblastoma protein cause tumours in the cells of the retina specifically? | Question: I know that the name of the protein itself is the retinoblastoma protein - but that's only because the result of a pathogenic variant is retinoblastoma. I'm trying to kind of reverse engineer the name and figure out: why would a non-functional Rb protein (since the Rb protein is crucial to the cell cycle of all dividing cells in our body) primarily manifest itself in the form of an eye tumour? Is there anything special about the cells in the retina that make it more prone to developing such a tumour in relation to the function of the Rb protein?
Would anyone have any insight to offer?
Thanks.
Answer: From Wikipedia:
The retinoblastoma protein ... is a tumor suppressor protein that is dysfunctional in several major cancers
So, although it's commonly associated with retinoblastoma, it's not limited to a particular type of cancer.
Retinal cells are not sloughed off or replaced, and are subjected to high levels of mutagenic UV radiation, and thus most pRb knock-outs occur in retinal tissue (but it has also been documented in certain skin cancers in patients from New Zealand where the amount of UV radiation is significantly higher)
Since the gene is commonly damaged from UV radiation, the retina is particularly vulnerable. UV radiation does not penetrate very deeply, but the retina is exposed to light per its function.
Two forms of retinoblastoma were noticed: a bilateral, familial form and a unilateral, sporadic form. Sufferers of the former were over six times more likely to develop other types of cancer later in life, compared to individuals with sporadic retinoblastoma.[10] This highlighted the fact that mutated pRb could be inherited and lent support for the two-hit hypothesis
The two-hit hypothesis is the idea that often one functional copy of a tumor suppressor gene is sufficient. For people with familial cancers involving tumor suppressor genes, including the BRCA genes associated with breast cancer, usually what you have are people who are heterozygous for some non-functional version of the gene. That means that they only need one "hit" to the functional gene to facilitate development of cancer.
pRb restricts the cell's ability to replicate DNA by preventing its progression from the G1 (first gap phase) to S (synthesis phase) phase of the cell division cycle
Because pRb is associated with G1 to S phase transitions, you wouldn't expect it to be as important of a gate in differentiated cells in G0; that's why you're finding it in a childhood cancer: the affected cells need to be immature and dividing to be affected.
In summary, Rb loss-of-function mutations occur via UV damage. To be vulnerable to this damage, a cell needs to be actively dividing and exposed to UV. The developing retina fits these characteristics. People who inherit one non-functional Rb copy are much more likely to develop cancer, because they only require additional damage to one copy of the gene, not two. | {
"domain": "biology.stackexchange",
"id": 12072,
"tags": "physiology, cancer, eyes"
} |
Why is equilibrium achieved at different stages of a reaction? | Question: In other words, I want to know why some reactions attain equilibrium early in the reaction while some reactions obtain equilibrium at the end of the reaction.
Why is this the case?
Answer: If you define early or end of reaction by the how much the concentration of reactants change from initial reaction to once equilibrium is reached, it is because the equilibrium constant itself is essentially a ratio of the forward and reverse rate constants (can be approximated by the Arrhenius equation posted by t.c.). A high ratio means "more" products (at least a higher concentration) are present at equilibrium than reactants (so in a sense near the "end of the reaction"), since the forward rate is much higher than the reverse, a lower concentration of reactants compared to products is needed to maintain equilibrium. A low ratio means the opposite. | {
"domain": "chemistry.stackexchange",
"id": 2029,
"tags": "equilibrium"
} |
Is significand same as mantissa in IEEE754? | Question: I'm trying to understand IEEE 754 floating point. When I try to convert 0.3 from decimal to binary with an online calculator, it says the significand value is 1.2
Where does 1.2 come from?
I do understand the other bits, like the exponent and the sign bit.
0.3 converted to binary:
0.3 × 2 = 0.6 → digit 0
0.6 × 2 = 1.2 → digit 1
0.2 × 2 = 0.4 → digit 0
0.4 × 2 = 0.8 → digit 0
0.8 × 2 = 1.6 → digit 1
...
So
0.3 = 0.010011001... in binary
Apply the scientific notation in binary:
0.010011001 = 1.0011001 * 2 ^ (-2)
So the exponent is -2. And the normalized mantissa is 0011001...
I will not talk about the exponent bit and sign bit. Back to my question: what is the difference between significand and mantissa?
Answer: In base 2, the significand is a number of the form $1.b_1b_2\ldots$ where the $b_i$'s are base 2 digits. The mantissa is the digits $b_1b_2\ldots$.
More generally, in base $n$, the normalized significand $s$ and exponent $e$ of a positive number $x$ are the numbers such that $x = s \cdot n^{e}$, $1 \le s < n$ and $e$ is an integer (negative if $x < 1$). In the case of base 2, the integer part of $s$ is always $1$ (since the definition yields $1 \le s < 2$). So instead it's usual to write the number as $x = (1 + m) \cdot n^{e}$ with $0 \le m < 1$. The digits of $m$ in base 2 (or the fractional part of $s$, depending on who you ask) are called the (normalized) mantissa.
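In Python both decompositions are easy to inspect (a small sketch; float.hex shows the IEEE-style base-2 significand directly):

```python
import math

# math.frexp uses the convention x = m * 2**e with 0.5 <= m < 1
assert math.frexp(0.3) == (0.6, -1)

# float.hex uses the normalized convention 1.mantissa * 2**e;
# 0x1.333... is 1 + 3/16 + 3/256 + ... ≈ 1.2, with exponent -2
assert (0.3).hex() == '0x1.3333333333333p-2'
```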
For $x = 0.3$, we have $x = 1.2 \cdot 2^{-2}$ so for its representation in base 2, the exponent is $-2$, the significand is $1.2_{10} = 1.001\overline{1001}_2$ (where $x_n$ means the digits are written in base $n$ and $\overline{1001}$ means the digits are repeated infinitely many times). The mantissa is the digits after the point: $001\overline{1001}$. | {
"domain": "cs.stackexchange",
"id": 20134,
"tags": "floating-point"
} |
How do I mechanically convert 200 steps into 360 discrete degrees? | Question: I have a Nema 17 stepper motor that does 200 steps per revolution, or 400, 800, and 1600 micro-steps in micro-stepping mode. For convenience's sake I would like to somehow translate the 200 steps into 360 steps. What gear ratio / micro-stepper configuration do I need to convert 200 steps to 360 steps?
Update
If anyone wants an easy way to create an involute spur gear, I found this page, converted it to DXF in Illustrator, then extruded it in Rhino!
http://geargenerator.com/
Answer: You want a transmission with a ratio R such that $ R \cdot 200 / 360 $ gives an integer number of steps per degree. Then in controlling software you can program it to take e.g. 5 steps to move one degree.
As Brian Drummond mentioned in a comment, 36:20 is one possible ratio, giving 1 step per degree.
Some other options:
9:5, equivalent to 36:20 but with smaller gears.
9:1, gives 5 steps per degree, so greater torque but slower speed
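A small script (illustrative only; the constant names are my own) can verify which candidate ratios make $R \cdot 200 / 360$ an integer number of steps per degree:

```python
MOTOR_STEPS = 200
DEGREES = 360

def steps_per_degree(driver_teeth, driven_teeth):
    """Steps per degree for a driver:driven gear ratio, or None if not integral."""
    steps = driver_teeth * MOTOR_STEPS / (driven_teeth * DEGREES)
    return int(steps) if steps == int(steps) else None

for ratio in [(36, 20), (9, 5), (9, 1)]:
    print(ratio, steps_per_degree(*ratio))  # 1, 1, and 5 steps per degree
```

A direct-drive 1:1 "ratio" returns None, confirming that 200 native steps cannot hit whole degrees without a transmission.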
As to how to implement the transmission, you can do it with either gears or with belts. Some of the ratios above are very basic, I'm sure you could even find a premade 9:1 part. | {
"domain": "engineering.stackexchange",
"id": 1080,
"tags": "mechanical-engineering, gears, stepper-motor"
} |
A game called "Twerk" | Question: This is my first Python program and game. Can you please point out how I can improve and what code should be changed?
import pygame
from pygame import *
import random
import time
import os
import sys
from pygame.locals import *
black = (0,0,0)
white = (255,255,255)
pygame.init()
def game():
os.environ['SDL_VIDEO_CENTERED'] = '1'
mouse.set_visible(False)
screen = display.set_mode((800,500))
backdrop = pygame.image.load('bg.jpg').convert_alpha()
menu = pygame.image.load('green.jpg').convert_alpha()
ballpic = pygame.image.load('ball.gif').convert_alpha()
mouseball = pygame.image.load('mouseball.gif').convert_alpha()
display.set_caption('Twerk')
back = pygame.Surface(screen.get_size())
def text(text,x_pos,color,font2=28):
tfont = pygame.font.Font(None, font2)
text=tfont.render(text, True, color)
textpos = text.get_rect(centerx=back.get_width()/2)
textpos.top = x_pos
screen.blit(text, textpos)
start = False
repeat = False
while start == False:
for event in pygame.event.get():
if event.type == pygame.QUIT:
start = True
#falling = True
#finish = True
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_SPACE:
start = True
#game over screen
screen.blit(menu,[0,0])
pygame.display.set_caption("TWERK")
#Text
#"Welcome to Escape"
#needs replacing with logo
text("Twerk",60,white,300)
#"Instructions"
text("Instructions",310,white)
text("----------------------------------------------------------------------------------------",320,white)
text("Avoid the the enemies",340,white)
text("Last as long as you can!",360,white)
text("Press space to start",420,white)
pygame.display.flip()
while start == True:
positionx=[]
positiony=[]
positionxmove=[]
positionymove=[]
falling = False
finish = False
score=0
enemies=4
velocity=1
for i in range(enemies):
positionx.append(random.randint(300,400)+random.randint(-300,200))
positiony.append(random.randint(200,340)+random.randint(-200,100))
positionxmove.append(random.randint(1,velocity))
positionymove.append(random.randint(1,velocity))
font = pygame.font.Font(None, 28)
text = font.render('Starting Twerk... ', True, (100,100,100))
textRect = text.get_rect()
textRect.centerx = screen.get_rect().centerx
textRect.centery = screen.get_rect().centery
screen.blit(backdrop, (0,0))
screen.blit(text, textRect)
pygame.display.update()
game=time.localtime()
while start == True:
end=time.localtime()
score= (end[1]-game[1])*3600 + (end[4]-game[4])*60 + end[5]-game[5]
if score > 1: break
first=True
strtTime=time.localtime()
while not finish or falling:
screen.blit(backdrop, (0,0))
for i in range(enemies):
screen.blit(ballpic,(positionx[i],positiony[i]))
(mousex,mousey)=mouse.get_pos()
screen.blit(mouseball,(mousex,mousey))
display.update()
strt = time.localtime()
if first:
while True:
end=time.localtime()
score= (end[3]-strt[3])*3600 + (end[4]-strt[4])*60 + end[5]-strt[5]
if score > 3: break
first = False
if falling:
for i in range(enemies):
positionymove[i]=1000
positionxmove[i]=0
for i in range(enemies): positionx[i]=positionx[i]+positionxmove[i]
for i in range(enemies): positiony[i]=min(600,positiony[i]+positionymove[i])
if falling:
falling=False
for posy in positiony:
if posy<600: falling=True
if not falling:
for i in range(enemies):
for j in range(i+1,enemies):
if abs(positionx[i]-positionx[j])<20 and abs(positiony[i]-positiony[j])<20:
temp=positionxmove[i]
positionxmove[i]=positionxmove[j]
positionxmove[j]=temp
temp=positionymove[i]
positionymove[i]=positionymove[j]
positionymove[j]=temp
for i in range(enemies):
if positionx[i]>600: positionxmove[i]*=-1
if positionx[i]<0: positionxmove[i]*=-1
if positiony[i]>440: positionymove[i]*=-1
if positiony[i]<0: positionymove[i]*=-1
for i in range(enemies):
if abs(positionx[i]-mousex)<40 and abs(positiony[i]-mousey)<40:
falling = True
finish = True
start = False
endTime=time.localtime()
score= (endTime[3]-strtTime[3])*3600 + (endTime[4]-strtTime[4])*60 + endTime[5]-strtTime[5]
break
for event in pygame.event.get():
if event.type==KEYUP and event.key==K_ESCAPE:
finish=True
pygame.quit()
game()
Answer: Here, I think you wanted to exit the for loop once start was set to True, right?
Add a break after the assignment for both. You want to do this for most of the loops.
start = False
repeat = False
while start == False:
for event in pygame.event.get():
if event.type == pygame.QUIT:
start = True
break
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_SPACE:
start = True
break
You can combine some of these conditions, for example
for i in range(enemies):
    if positionx[i]>600 or positionx[i]<0: positionxmove[i]*=-1
    if positiony[i]>440 or positiony[i]<0: positionymove[i]*=-1
Note that positionx and positiony may best be stored together in some data structure - perhaps a tuple
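For instance (a sketch with hypothetical names, not code from the game), positions and velocities could live in one list of per-enemy records, which removes the need for index-parallel lists entirely:

```python
import random

ENEMIES = 4
VELOCITY = 1

# one record per enemy: [x, y, dx, dy] instead of four parallel lists
balls = [[random.randint(0, 500), random.randint(0, 440),
          random.randint(1, VELOCITY), random.randint(1, VELOCITY)]
         for _ in range(ENEMIES)]

# moving every enemy becomes a single loop over the records
for ball in balls:
    ball[0] += ball[2]                      # x += dx
    ball[1] = min(600, ball[1] + ball[3])   # y += dy, clamped to the floor
```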
Something like this
positionxmove.append(random.randint(1,velocity))
positionymove.append(random.randint(1,velocity))
can be reduced to
for p in [positionxmove, positionymove]:
p.append(random.randint(1,velocity))
The idea is to reduce repetition as much as you can.
Combine the loops. for e.g
for i in range(enemies): positionx[i]=positionx[i]+positionxmove[i]
for i in range(enemies): positiony[i]=min(600,positiony[i]+positionymove[i])
Should be
for i in range(enemies):
positionx[i]=positionx[i]+positionxmove[i]
positiony[i]=min(600,positiony[i]+positionymove[i])
Also remove all the magic numbers like 600. They should be named with some meaningful names, and those variables/constants should be used instead.
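For example (the constant names here are my own suggestion, not from the original code):

```python
# named constants replace the scattered magic numbers
RIGHT_WALL = 600
BOTTOM_WALL = 440

def bounce(position, move, low, high):
    """Reverse the move direction when position has left the [low, high] band."""
    return -move if position > high or position < low else move

print(bounce(610, 3, 0, RIGHT_WALL))  # -3 (hit the right wall)
print(bounce(300, 3, 0, RIGHT_WALL))  # 3 (still inside)
```

Besides readability, this makes a change such as resizing the window a one-line edit instead of a hunt through the loop bodies.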
You can reduce code such as this
temp=positionxmove[i]
positionxmove[i]=positionxmove[j]
positionxmove[j]=temp
temp=positionymove[i]
positionymove[i]=positionymove[j]
positionymove[j]=temp
to
for p in [positionxmove, positionymove]:
    p[i], p[j] = p[j], p[i]
that is, use parallel assignment for swapping. | {
"domain": "codereview.stackexchange",
"id": 1648,
"tags": "python, beginner, game, pygame"
} |
Compressing a 'char' array using bit packing | Question: I have a large array (around 1 MB) of type unsigned char (i.e. uint8_t). I know that the bytes in it can have only one of 5 values (i.e. 0, 1, 2, 3, 4). Moreover, we do not need to preserve '3's from the input, they can be safely lost when we encode/decode.
So I guessed bit packing would be the simplest way to compress it, so every byte can be converted to 2 bits (00, 01..., 11).
As mentioned, all elements of value 3 can be removed (i.e. saved as 0), which gives me the option to save '4' as '3'. While reconstructing (decompressing), I restore 3's to 4's.
I wrote a small function for the compression but I feel this has too many operations and is just not efficient enough. Any suggestions or hints on how to handle the operations more efficiently while maintaining readability will be of much help.
/// Compress by packing ...
void compressByPacking (uint8_t* out, uint8_t* in, uint32_t length)
{
for (int loop = 0; loop < length/4; loop ++, in += 4, out++)
{
uint8_t temp[4];
for (int small_loop = 0; small_loop < 4; small_loop++)
{
temp[small_loop] = *in; // Load into local variable
if (temp[small_loop] == 3) // 3's are discarded
temp[small_loop] = 0;
else if (temp[small_loop] == 4) // and 4's are converted to 3
temp[small_loop] = 3;
} // end small loop
// Pack the bits into write pointer
*out = (uint8_t)((temp[0] & 0x03) << 6) |
((temp[1] & 0x03) << 4) |
((temp[2] & 0x03) << 2) |
((temp[3] & 0x03));
} // end loop
}
Cross-posted from SO
Answer: Conditional jumps are murder on throughput due to branch-misprediction. Consider simply using a lookup-table instead:
const static uint8_t map[] = { 0, 1, 2, 0, 3 };
*out = (uint8_t)
( (map[in[0]] << 0)
| (map[in[1]] << 2)
| (map[in[2]] << 4)
| (map[in[3]] << 6));
There are many architectures where shifting is more expensive than masking, but I doubt there are any where the reverse holds. Thus, the following code, even though it probably requires one extra machine code instruction (3 shifts vs. 4 masks) is likely to be faster:
const static uint8_t map[] = { 0x00, 0x55, 0xaa, 0x00, 0xff };
*out = (uint8_t)
( (map[in[0]] & 0x03)
| (map[in[1]] & 0x0c)
| (map[in[2]] & 0x30)
| (map[in[3]] & 0xc0));
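The same lookup-table idea, sketched in Python for illustration (not a drop-in replacement for the C code); it also shows one way to handle a tail that is not a multiple of 4 bytes, by zero-padding:

```python
MAP = [0, 1, 2, 0, 3]  # 3 -> 0 (discarded), 4 -> 3

def compress_by_packing(data):
    """Pack each group of 4 input bytes (values 0..4) into one output byte."""
    out = bytearray()
    for i in range(0, len(data), 4):
        chunk = list(data[i:i + 4])
        chunk += [0] * (4 - len(chunk))  # zero-pad a short tail
        out.append((MAP[chunk[0]] << 6) | (MAP[chunk[1]] << 4)
                   | (MAP[chunk[2]] << 2) | MAP[chunk[3]])
    return bytes(out)

print(compress_by_packing(bytes([0, 1, 2, 3, 4])).hex())  # 18c0
```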
By the way, how do you handle the tail of up to 3 bytes? Or is your data guaranteed to be a multiple of 4 bytes long?
Also, your inner loop currently duplicates the first byte of every 4-byte-chunk four times.
Your comments are blatantly superfluous; at best they repeat the code. Say why you do something, not what you just did. | {
"domain": "codereview.stackexchange",
"id": 26574,
"tags": "c, bitwise, compression"
} |
labelpix, image annotation tool for object detection GUI python | Question: For the full code, please go to https://github.com/emadboctorx/labelpix
This is an object detection tool for drawing bounding boxes over images and saving the output to csv/hdf or yolo (You Only Look Once) format. Suggestions for improvement, features to add, and general feedback are more than welcome.
Features
Preview and edit interfaces.
Save bounding box relative coordinates to csv / hdf formats.
Save relative coordinates to yolo format
Instructions
Upload photos.
Add labels to session labels.
Click on a photo from the photo list.
Click on the desired label from the labels you added.
Activate edit mode.
Draw bounding boxes.
Switch photos by scrolling/clicking on images in the list.
Save data by entering filename_example.csv or filename_example.h5
You can also save to yolo formatted txt outputs.
For deleting any of the 3 right lists (session labels / Labels of the current image / Photo list) items, check item and press the delete button
Image
Code
labelpix.py:
from PyQt5.QtGui import QIcon, QPixmap, QPainter, QPen
from PyQt5.QtWidgets import (QMainWindow, QApplication, QDesktopWidget, QAction, QStatusBar, QHBoxLayout,
QVBoxLayout, QWidget, QLabel, QListWidget, QFileDialog, QFrame,
QLineEdit, QListWidgetItem, QDockWidget, QMessageBox)
from PyQt5.QtCore import Qt, QPoint, QRect
from settings import *
import pandas as pd
import cv2
import sys
import os
class RegularImageArea(QLabel):
"""
Display only area within the main interface.
"""
def __init__(self, current_image, main_window):
"""
Initialize current image for display.
Args:
current_image: Path to target image.
main_window: ImageLabeler instance.
"""
super().__init__()
self.setFrameStyle(QFrame.StyledPanel)
self.current_image = current_image
self.main_window = main_window
def get_image_names(self):
"""
Return:
Directory of the current image and the image name.
"""
full_name = self.current_image.split('/')
return '/'.join(full_name[:-1]), full_name[-1].replace('temp-', '')
def paintEvent(self, event):
"""
Adjust image size to current window.
Args:
event: QPaintEvent object.
Return:
None
"""
painter = QPainter(self)
current_size = self.size()
origin = QPoint(0, 0)
if self.current_image:
scaled_image = QPixmap(self.current_image).scaled(
current_size, Qt.IgnoreAspectRatio, Qt.SmoothTransformation)
painter.drawPixmap(origin, scaled_image)
def switch_image(self, img):
"""
Switch the current image displayed in the main window with the new one.
Args:
img: Path to new image to display.
Return:
None
"""
self.current_image = img
self.repaint()
@staticmethod
def calculate_ratios(x1, y1, x2, y2, width, height):
"""
Calculate relative object ratios in the labeled image.
Args:
x1: Start x coordinate.
y1: Start y coordinate.
x2: End x coordinate.
y2: End y coordinate.
width: Bounding box width.
height: Bounding box height.
Return:
bx: Relative center x coordinate.
by: Relative center y coordinate.
bw: Relative box width.
bh: Relative box height.
"""
box_width = abs(x2 - x1)
box_height = abs(y2 - y1)
bx = 1 - ((width - min(x1, x2) + (box_width / 2)) / width)
by = 1 - ((height - min(y1, y2) + (box_height / 2)) / height)
bw = box_width / width
bh = box_height / height
return bx, by, bw, bh
@staticmethod
def ratios_to_coordinates(bx, by, bw, bh, width, height):
"""
Convert relative coordinates to actual coordinates.
Args:
bx: Relative center x coordinate.
by: Relative center y coordinate.
bw: Relative box width.
bh: Relative box height.
width: Current image display space width.
height: Current image display space height.
Return:
x: x coordinate.
y: y coordinate.
w: Bounding box width.
h: Bounding box height.
"""
w, h = bw * width, bh * height
x, y = bx * width + (w / 2), by * height + (h / 2)
return x, y, w, h
def draw_boxes(self, ratios):
"""
Draw boxes over the current image using given ratios.
Args:
ratios: A list of [[bx, by, bw, bh], ...]
Return:
None
"""
img_dir, img_name = self.get_image_names()
to_label = cv2.imread(self.current_image)
to_label = cv2.resize(to_label, (self.width(), self.height()))
for bx, by, bw, bh in ratios:
x, y, w, h = self.ratios_to_coordinates(bx, by, bw, bh, self.width(), self.height())
to_label = cv2.rectangle(to_label, (int(x), int(y)), (int(x + w), int(y + h)), (0, 0, 255), 1)
temp = f'{img_dir}/temp-{img_name}'
cv2.imwrite(f'{img_dir}/temp-{img_name}', to_label)
self.switch_image(temp)
class ImageEditorArea(RegularImageArea):
"""
Edit and display area within the main interface.
"""
def __init__(self, current_image, main_window):
"""
Initialize current image for display.
Args:
current_image: Path to target image.
main_window: ImageLabeler instance.
"""
super().__init__(current_image, main_window)
self.main_window = main_window
self.start_point = QPoint()
self.end_point = QPoint()
self.begin = QPoint()
self.end = QPoint()
def paintEvent(self, event):
"""
Adjust image size to current window and draw bounding box.
Args:
event: QPaintEvent object.
Return:
None
"""
super().paintEvent(event)
qp = QPainter(self)
pen = QPen(Qt.red)
qp.setPen(pen)
qp.drawRect(QRect(self.begin, self.end))
def mousePressEvent(self, event):
"""
Start drawing the box.
Args:
event: QMouseEvent object.
Return:
None
"""
self.start_point = event.pos()
self.begin = event.pos()
self.end = event.pos()
self.update()
def mouseMoveEvent(self, event):
"""
Update size with mouse move.
Args:
event: QMouseEvent object.
Return:
None
"""
self.end = event.pos()
self.update()
def mouseReleaseEvent(self, event):
"""
Calculate coordinates of the bounding box, display a message, update session data.
Args:
event: QMouseEvent object.
Return:
None
"""
self.begin = event.pos()
self.end = event.pos()
self.end_point = event.pos()
x1, y1, x2, y2 = (self.start_point.x(), self.start_point.y(),
self.end_point.x(), self.end_point.y())
self.main_window.statusBar().showMessage(f'Start: {x1}, {y1}, End: {x2}, {y2}')
self.update()
if self.current_image:
bx, by, bw, bh = self.calculate_ratios(x1, y1, x2, y2, self.width(), self.height())
self.update_session_data(x1, y1, x2, y2)
current_label_index = self.main_window.get_current_selection('slabels')
if current_label_index is None or current_label_index < 0:
return
self.draw_boxes([[bx, by, bw, bh]])
def update_session_data(self, x1, y1, x2, y2):
"""
Add a row to session_data containing calculated ratios.
Args:
x1: Start x coordinate.
y1: Start y coordinate.
x2: End x coordinate.
y2: End y coordinate.
Return:
None
"""
current_label_index = self.main_window.get_current_selection('slabels')
if current_label_index is None or current_label_index < 0:
return
window_width, window_height = self.width(), self.height()
object_name = self.main_window.right_widgets['Session Labels'].item(current_label_index).text()
bx, by, bw, bh = self.calculate_ratios(x1, y1, x2, y2, window_width, window_height)
data = [[self.get_image_names()[1], object_name, current_label_index, bx, by, bw, bh]]
to_add = pd.DataFrame(data, columns=self.main_window.session_data.columns)
self.main_window.session_data = pd.concat([self.main_window.session_data, to_add], ignore_index=True)
self.main_window.add_to_list(f'{data}', self.main_window.right_widgets['Image Label List'])
class ImageLabeler(QMainWindow):
"""
Image labeling main interface.
"""
def __init__(self, window_title='Image Labeler', current_image_area=RegularImageArea):
"""
Initialize main interface and display.
Args:
window_title: Title of the window.
current_image_area: RegularImageArea or ImageEditorArea object.
"""
super().__init__()
self.current_image = None
self.label_file = None
self.current_image_area = current_image_area
self.images = []
self.image_paths = {}
self.session_data = pd.DataFrame(
columns=['Image', 'Object Name', 'Object Index', 'bx', 'by', 'bw', 'bh'])
self.window_title = window_title
self.setWindowTitle(self.window_title)
win_rectangle = self.frameGeometry()
center_point = QDesktopWidget().availableGeometry().center()
win_rectangle.moveCenter(center_point)
self.move(win_rectangle.topLeft())
self.setStyleSheet('QPushButton:!hover {color: orange} QLineEdit:!hover {color: orange}')
self.tools = self.addToolBar('Tools')
self.tool_items = setup_toolbar(self)
self.top_right_widgets = {'Add Label': (QLineEdit(), self.add_session_label)}
self.right_widgets = {'Session Labels': QListWidget(),
'Image Label List': QListWidget(),
'Photo List': QListWidget()}
self.left_widgets = {'Image': self.current_image_area('', self)}
self.setStatusBar(QStatusBar(self))
self.adjust_tool_bar()
self.central_widget = QWidget(self)
self.main_layout = QHBoxLayout()
self.left_layout = QVBoxLayout()
self.adjust_widgets()
self.adjust_layouts()
self.show()
def adjust_tool_bar(self):
"""
Adjust the top tool bar and setup buttons/icons.
Return:
None
"""
self.tools.setToolButtonStyle(Qt.ToolButtonTextUnderIcon)
if sys.platform == 'darwin':
self.setUnifiedTitleAndToolBarOnMac(True)
for label, icon_file, widget_method, status_tip, key, check in self.tool_items.values():
action = QAction(QIcon(f'../Icons/{icon_file}'), label, self)
action.setStatusTip(status_tip)
action.setShortcut(key)
if check:
action.setCheckable(True)
if label == 'Delete':
action.setShortcut('Backspace')
action.triggered.connect(widget_method)
self.tools.addAction(action)
self.tools.addSeparator()
def adjust_layouts(self):
"""
Adjust window layouts.
Return:
None
"""
self.main_layout.addLayout(self.left_layout)
self.central_widget.setLayout(self.main_layout)
self.setCentralWidget(self.central_widget)
def adjust_widgets(self):
"""
Adjust window widgets.
Return:
None
"""
self.left_layout.addWidget(self.left_widgets['Image'])
for text, (widget, widget_method) in self.top_right_widgets.items():
dock_widget = QDockWidget(text)
dock_widget.setFeatures(QDockWidget.NoDockWidgetFeatures)
dock_widget.setWidget(widget)
self.addDockWidget(Qt.RightDockWidgetArea, dock_widget)
if widget_method:
widget.editingFinished.connect(widget_method)
self.top_right_widgets['Add Label'][0].setPlaceholderText('Add Label')
self.right_widgets['Photo List'].selectionModel().currentChanged.connect(
self.display_selection)
for text, widget in self.right_widgets.items():
dock_widget = QDockWidget(text)
dock_widget.setFeatures(QDockWidget.NoDockWidgetFeatures)
dock_widget.setWidget(widget)
self.addDockWidget(Qt.RightDockWidgetArea, dock_widget)
def get_current_selection(self, display_list):
"""
Get current selected item data.
Args:
display_list: One of the right QWidgetList(s).
Return:
Image path or current row.
"""
if display_list == 'photo':
current_selection = self.right_widgets['Photo List'].currentRow()
if current_selection >= 0:
return self.images[current_selection]
self.right_widgets['Photo List'].selectionModel().clear()
if display_list == 'slabels':
current_selection = self.right_widgets['Session Labels'].currentRow()
if current_selection >= 0:
return current_selection
@staticmethod
def add_to_list(item, widget_list):
"""
Add item to one of the right QWidgetList(s).
Args:
item: str : Item to add.
widget_list: One of the right QWidgetList(s).
Return:
None
"""
item = QListWidgetItem(item)
item.setFlags(item.flags() | Qt.ItemIsSelectable |
Qt.ItemIsUserCheckable | Qt.ItemIsEditable)
item.setCheckState(Qt.Unchecked)
widget_list.addItem(item)
widget_list.selectionModel().clear()
def display_selection(self):
"""
Display image that is selected in the right Photo list.
Return:
None
"""
ratios = []
self.right_widgets['Image Label List'].clear()
self.current_image = self.get_current_selection('photo')
if not self.current_image:
return
self.left_widgets['Image'].switch_image(self.current_image)
image_dir, img_name = self.left_widgets['Image'].get_image_names()
for item in self.session_data.loc[self.session_data['Image'] == img_name].values:
self.add_to_list(f'{[[x for x in item]]}', self.right_widgets['Image Label List'])
ratios.append([x for x in item][3:])
self.left_widgets['Image'].draw_boxes(ratios)
def upload_photos(self):
"""
Add image(s) to the right photo list.
Return:
None
"""
file_dialog = QFileDialog()
file_names, _ = file_dialog.getOpenFileNames(self, 'Upload Photos')
for file_name in file_names:
image_dir, photo_name = '/'.join(file_name.split('/')[:-1]), file_name.split('/')[-1]
self.add_to_list(photo_name, self.right_widgets['Photo List'])
self.images.append(file_name)
self.image_paths[photo_name] = image_dir
def upload_vid(self):
pass
def upload_folder(self):
"""
Add images of a folder to the right photo list.
Return:
None
"""
file_dialog = QFileDialog()
folder_name = file_dialog.getExistingDirectory()
if folder_name:
for file_name in os.listdir(folder_name):
if not file_name.startswith('.'):
photo_name = file_name.split('/')[-1]
self.add_to_list(photo_name, self.right_widgets['Photo List'])
self.images.append(f'{folder_name}/{file_name}')
self.image_paths[file_name] = folder_name
def switch_editor(self, image_area):
"""
Switch between the display/edit interfaces.
Args:
image_area: RegularImageArea or ImageEditorArea object.
Return:
None
"""
self.left_layout.removeWidget(self.left_widgets['Image'])
self.left_widgets['Image'] = image_area(self.current_image, self)
self.left_layout.addWidget(self.left_widgets['Image'])
def edit_mode(self):
"""
Switch between the display/edit interfaces.
Return:
None
"""
if self.windowTitle() == 'Image Labeler':
self.setWindowTitle('Image Labeler(Editor Mode)')
self.switch_editor(ImageEditorArea)
else:
self.setWindowTitle('Image Labeler')
self.switch_editor(RegularImageArea)
self.display_selection()
def save_session_data(self, location):
"""
Save session data to csv/hdf.
Args:
location: Path to save session data file.
Return:
None
"""
if location.endswith('.csv'):
self.session_data.to_csv(location, index=False)
if location.endswith('h5'):
self.session_data.to_hdf(location, key='session_data', index=False)
def read_session_data(self, location):
"""
Read session data from csv/hdf
Args:
location: Path to session data file.
Return:
data.
"""
data = self.session_data
if location.endswith('.csv'):
data = pd.read_csv(location)
if location.endswith('.h5'):
data = pd.read_hdf(location, 'session_data')
return data
def save_changes_table(self):
"""
Save the data in self.session_data to new/existing csv/hdf format.
Return:
None
"""
if self.label_file:
location = self.label_file
old_session_data = self.read_session_data(location)
self.session_data = pd.concat([old_session_data, self.session_data], ignore_index=True)
self.session_data.drop_duplicates(inplace=True)
self.save_session_data(location)
else:
dialog = QFileDialog()
location, _ = dialog.getSaveFileName(self, 'Save as')
self.label_file = location
self.save_session_data(location)
self.statusBar().showMessage(f'Labels Saved to {location}')
def clear_yolo_txt(self):
"""
Delete txt files in working directories.
Return:
None
"""
working_directories = set(['/'.join(item.split('/')[:-1]) for item in self.images])
for working_directory in working_directories:
for file_name in os.listdir(working_directory):
if file_name.endswith('.txt'):
os.remove(f'{working_directory}/{file_name}')
def save_changes_yolo(self):
"""
Save session data to txt files in yolo format.
Return:
None
"""
if self.session_data.empty:
return
self.clear_yolo_txt()
txt_file_names = set()
for index, data in self.session_data.iterrows():
image_name, object_name, object_index, bx, by, bw, bh = data
image_path = self.image_paths[image_name]
txt_file_name = f'{image_path}/{image_name.split(".")[0]}.txt'
txt_file_names.add(txt_file_name)
with open(txt_file_name, 'a') as txt:
txt.write(f'{object_index!s} {bx!s} {by!s} {bw!s} {bh!s}\n')
self.statusBar().showMessage(f'Saved {len(txt_file_names)} txt files')
@staticmethod
def get_list_selections(widget_list):
"""
Get in-list index of checked items in the given QWidgetList.
Args:
widget_list: One of the right QWidgetList(s).
Return:
A list of checked indexes.
"""
items = [widget_list.item(i) for i in range(widget_list.count())]
checked_indexes = [checked_index for checked_index, item in enumerate(items)
if item.checkState() == Qt.Checked]
return checked_indexes
def delete_list_selections(self, checked_indexes, widget_list):
"""
Delete checked indexes in the given QWidgetList.
Args:
checked_indexes: A list of checked indexes.
widget_list: One of the right QWidgetList(s).
Return:
None
"""
if checked_indexes:
for q_list_index in reversed(checked_indexes):
if widget_list is self.right_widgets['Photo List']:
image_name = self.images[q_list_index].split('/')[-1]
del self.images[q_list_index]
del self.image_paths[image_name]
if widget_list is self.right_widgets['Image Label List']:
current_row = eval(f'{self.right_widgets["Image Label List"].item(q_list_index).text()}')[0]
row_items = dict(zip(self.session_data.columns, current_row))
current_boxes = self.session_data.loc[self.session_data['Image'] == current_row[0]]
for index, box in current_boxes[['bx', 'by', 'bw', 'bh']].iterrows():
if box['bx'] == row_items['bx'] and box['by'] == row_items['by']:
self.session_data = self.session_data.drop(index)
break
widget_list.takeItem(q_list_index)
def delete_selections(self):
"""
Delete all checked items in all 3 right QWidgetList(s).
Return:
None
"""
checked_session_labels = self.get_list_selections(self.right_widgets['Session Labels'])
checked_image_labels = self.get_list_selections(self.right_widgets['Image Label List'])
checked_photos = self.get_list_selections(self.right_widgets['Photo List'])
self.delete_list_selections(checked_session_labels, self.right_widgets['Session Labels'])
self.delete_list_selections(checked_image_labels, self.right_widgets['Image Label List'])
self.delete_list_selections(checked_photos, self.right_widgets['Photo List'])
def upload_labels(self):
"""
Upload labels from csv or hdf.
Return:
None
"""
dialog = QFileDialog()
file_name, _ = dialog.getOpenFileName(self, 'Load labels')
self.label_file = file_name
new_data = self.read_session_data(file_name)
labels_to_add = new_data[['Object Name', 'Object Index']].drop_duplicates().sort_values(
by='Object Index').values
self.right_widgets['Session Labels'].clear()
for label, index in labels_to_add:
self.add_session_label(label)
self.session_data = pd.concat([self.session_data, new_data], ignore_index=True).drop_duplicates()
if file_name:
self.statusBar().showMessage(f'Labels loaded from {file_name}')
def reset_labels(self):
"""
Delete all labels in the current session_data.
Return:
None
"""
message = QMessageBox()
answer = message.question(
self, 'Question', 'Are you sure, do you want to delete all current session labels?')
if answer == message.Yes:
self.session_data.drop(self.session_data.index, inplace=True)
self.statusBar().showMessage(f'Session labels deleted successfully')
def display_settings(self):
pass
def display_help(self):
pass
def add_session_label(self, label=None):
"""
Add label entered to the session labels list.
Return:
None
"""
labels = self.right_widgets['Session Labels']
new_label = label or self.top_right_widgets['Add Label'][0].text()
session_labels = [str(labels.item(i).text()) for i in range(labels.count())]
if new_label and new_label not in session_labels:
self.add_to_list(new_label, labels)
self.top_right_widgets['Add Label'][0].clear()
def remove_temps(self):
"""
Remove temporary image files from working directories.
Return:
None
"""
working_dirs = set(['/'.join(item.split('/')[:-1]) for item in self.images])
for working_dir in working_dirs:
for file_name in os.listdir(working_dir):
if 'temp-' in file_name:
os.remove(f'{working_dir}/{file_name}')
def closeEvent(self, event):
"""
Save session data, clear cache, and close with or without saving.
Args:
event: QCloseEvent object.
Return:
None
"""
if not self.label_file and not self.session_data.empty:
message = QMessageBox()
answer = message.question(self, 'Question', 'Quit without saving?')
if answer == message.No:
self.save_changes_table()
if self.label_file and not self.session_data.empty:
self.save_changes_table()
self.remove_temps()
event.accept()
if __name__ == '__main__':
test = QApplication(sys.argv)
test_window = ImageLabeler()
sys.exit(test.exec_())
settings.py
def setup_toolbar(qt_obj):
tools = {}
names = ['Upload photos', 'Upload Labels', 'Save', 'Save Yolo', 'Upload Photo Folder',
'Upload video', 'Edit Mode', 'Delete Selection(s)', 'Reset', 'Settings', 'Help']
icons = ['upload_photo6.png', 'labels.png', 'save3.png', 'yolo.png', 'upload_folder5.png', 'upload_vid3.png',
'draw_rectangle3.png', 'delete.png', 'reset4.png', 'settings.png', 'help.png']
methods = [qt_obj.upload_photos, qt_obj.upload_labels, qt_obj.save_changes_table, qt_obj.save_changes_yolo,
qt_obj.upload_folder, qt_obj.upload_vid, qt_obj.edit_mode, qt_obj.delete_selections,
qt_obj.reset_labels, qt_obj.display_settings, qt_obj.display_help]
keys = 'OLSYFVRDJAH'
tips = ['Select photos from a folder and add them to the photo list',
'Upload labels from csv, hdf',
'Save changes to csv or hdf',
'Save changes to txt files with Yolo format',
'Open a folder from the last saved point or open a new one containing '
'photos and add them to the photo list',
'Add a video and convert it to .png frames and add them to the photo list',
'Activate editor mode',
'Delete all selections(checked items)', 'Delete all labels in the current working folder',
'Display settings', 'Display help']
tips = [f'Press ⌘⇧{key}: ' + tip for key, tip in zip(keys, tips)]
key_shorts = [f'Ctrl+Shift+{key}' for key in keys]
check_status = [False, False, False, False, False, False, False, True, False, False, False, False]
assert len(names) == len(icons) == len(methods) == len(tips) == len(key_shorts)
for name, icon, method, tip, key, check in zip(names, icons, methods, tips, key_shorts, check_status):
tools[name] = [name, icon, method, tip, key, check]
return tools
Answer: setup_toolbar from settings.py
Maintaining several lists in parallel is tedious. setup_toolbar has a minimal check with assert len(names) == len(icons) == len(methods) == len(tips) == len(key_shorts) to help here, but even if all the lists have the same length, there is no way to make sure that the values are really consistent.
I'd propose to use something like the following:
def setup_toolbar(qt_obj):
tools = {
'Upload photos': {
'icon': 'upload_photo6.png',
'callback': qt_obj.upload_photos, 'key': 'O',
'hint': 'Select photos from a folder and add them to the photo list',
'checkable': False
},
# and so on ...
}
# auto-generate shortcuts and rich hints
for name, properties in tools.items():
shortcut = f'Ctrl+Shift+{properties["key"]}'
properties['shortcut'] = shortcut
# Mac symbols omitted out of laziness ;-)
properties['hint'] = f'Press {shortcut}: {properties["hint"]}'
# this is redundant, but is in line with the original code
properties['name'] = name
return tools
This should be more robust, since all the relevant parts are closer together. Using a dict here is also more robust than a list because the properties can now be accessed using their names instead of having to remember to order in the list. Of course, ImageLabeler.adjust_tool_bar would have to be adapted to this change.
More on a semantic note, maybe also replace Upload with Load or Open, since, at least in my opinion, "upload" is usually used when pushing some content onto a remote system or device. I guess that's not your intention.
labelpix.py
Imports
Imports should be sorted and grouped. Also, avoid wildcard * imports, especially if you only need a single function like setup_toolbar. With these changes the code looks as follows:
# built-in libraries
import os
import sys
# third-party libraries
import cv2
import pandas as pd
from PyQt5.QtCore import QPoint, QRect, Qt
from PyQt5.QtGui import QIcon, QPainter, QPen, QPixmap
from PyQt5.QtWidgets import (QAction, QApplication, QDesktopWidget,
QDockWidget, QFileDialog, QFrame, QHBoxLayout,
QLabel, QLineEdit, QListWidget, QListWidgetItem,
QMainWindow, QMessageBox, QStatusBar, QVBoxLayout,
QWidget)
# libraries from this module
from settings import setup_toolbar
The comments between the groups are only for educational purposes.
Path handling
The code handles paths at several points. Doing it "manually" like here in get_image_names
def get_image_names(self):
"""
Return:
Directory of the current image and the image name.
"""
full_name = self.current_image.split('/')
return '/'.join(full_name[:-1]), full_name[-1].replace('temp-', '')
is error-prone and won't work on Windows (and possibly other operating systems) where / is not used as path separator.
Fortunately, Python can help here. The code should use os.path.split or os.path.dirname/os.path.basename from the built-in os module, or the pathlib module, which provides a higher level, more OOP-like abstraction to the whole problem. Similarly, building paths should use os.path.join or the corresponding functionality from pathlib.
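As a sketch of what that might look like with pathlib (written as a free function here for brevity; note that str.removeprefix needs Python 3.9+, and the 'temp-' handling mirrors the original method):

```python
from pathlib import Path

def split_image_path(current_image):
    """Portable version of get_image_names: return (directory, bare image name)."""
    p = Path(current_image)
    # Path knows the platform's separator; removeprefix strips only a leading 'temp-'
    return str(p.parent), p.name.removeprefix('temp-')

print(split_image_path('photos/run1/temp-cat.png'))  # ('photos/run1', 'cat.png')
```

Compared to the original .replace('temp-', ''), removeprefix also avoids mangling an image that happens to contain 'temp-' somewhere in the middle of its name.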
General feedback
I tried to use the program on some more or less random example images. Since the aspect ratio of the image display area is fixed to that of the window, images become squished once you resize the window or simply if they don't have the correct aspect ratio. There also seems to be a bug where the original image seems to persist in the background (see screenshot below).
The image is similar to the one in a question of mine here on Code Review, so the circle on the left should really be a circle, not an ellipse.
Sometimes freshly drawn bounding boxes also vanished immediately and were not listed in the Image Label List on the right. I admit that I did not really try to look into this, so it might be a simple user error on my side.
I'd also prefer to have a little bit more control over where the label files are stored, or at least have some indication where they were put.
There is likely more to say about the code, but that's all for now. Maybe I will have another go at it later. Till then: Happy Coding! | {
"domain": "codereview.stackexchange",
"id": 37639,
"tags": "python, python-3.x, gui, pyqt"
} |
The difference between static library(.a) and shared object library(.so) | Question:
Dear All,
Could anybody explain what the difference between a static library (.a) and a shared object (.so) is among ROS library types?
How can I build them? Is there another type?
I have written a C++ class and built it as a separate ROS pkg available to other ROS pkgs. Is it a static library?
Thanks for your explanations.
Originally posted by A.M Dynamics on ROS Answers with karma: 93 on 2016-07-25
Post score: 0
Answer:
Whilst not directly an answer of mine, this topic is too old to answer afresh!
Stackoverflow Question 1
Stackoverflow Question 2
These explain it all fairly well.
Originally posted by NZNobody with karma: 156 on 2016-07-25
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 25345,
"tags": "ros, library"
} |
Why is space a vacuum? | Question: Why is space a vacuum? Why is space free from air molecules?
I heard that even space has a small but finite number of molecules. If so, won't there be a drag in space?
Answer:
Why is space a vacuum ?
Because, given enough time, gravity tends to make matter clump together. Events like supernovae that spread it out again are relatively rare. Also space is big. Maybe someone could calculate the density if visible matter were evenly distributed in visible space. I imagine it would be pretty thin.
(Later)
Space is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space.
Douglas Adams, The Hitchhiker's Guide to the Galaxy.
According to Wikipedia, the observable universe has a radius of 46.6 billion light years and contains about $10^{53}$ kg of matter.
One light year is about $9.5 \times 10^{15}\ m$ - so that is a radius of roughly $4.4 \times 10^{26}\ m$ and a volume of roughly $3.6 \times 10^{80}\ m^3$. So that means a density of roughly $2.8 \times 10^{-28}\ kg/m^3$.
If that matter were all Hydrogen, which has $6 \times 10^{26}$ atoms per kg, that would give us around $0.2$ atoms per $m^3$.
So if my horrible calculations are any guide (and I'm very likely to have made an error), space is a vacuum mostly because the amount of matter in the observable universe is negligible.
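Those calculations are easy to re-run. A quick Python check with the same input figures lands at roughly 0.17 hydrogen atoms per cubic metre, i.e. the same ballpark as the estimate above:

```python
import math

ly = 9.5e15                    # metres per light year, as quoted above
r = 46.6e9 * ly                # radius of the observable universe, m
volume = 4 / 3 * math.pi * r**3
density = 1e53 / volume        # kg / m^3, using the Wikipedia mass figure
atoms_per_m3 = density * 6e26  # hydrogen atoms per kg of matter
print(f"density ~ {density:.2e} kg/m^3, ~{atoms_per_m3:.2f} H atoms per m^3")
```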
Why is space free from air molecules ?
Well, air is what we call the mix of gases in Earth's atmosphere, so this is a question about space near Earth specifically.
Air is mostly molecular Nitrogen and Oxygen - $N_2$ and $O_2$. These are heavy enough that not many of them escape Earth's gravity. Also, space is big.
I heard that even space has a small but finite number of molecules. If so, won't there be a drag in space?
According to WIkipedia:
Intergalactic space contains a few hydrogen atoms per cubic meter. By comparison, the air we breathe contains about $10^{25}$ molecules per cubic meter.
That is such a large difference that space is effectively frictionless (at least for typical space vehicles constructed by humans).
Bumping against 1 hydrogen atom is very different to bumping against 10000000000000000000000000 Nitrogen molecules. | {
"domain": "physics.stackexchange",
"id": 18085,
"tags": "vacuum"
} |
Why do clothes dry at room temperature? | Question: When I leave wet clothes in the open air, they will get dry over time by themselves even at room temperature. I know that somehow the water becomes vaporized; it's not "disappearing". For that, it needs to get energy from its environment and probably it's getting this energy from heat. But since it is at the same temperature with the clothes or its environment, I don't understand how an energy transfer can occur. Doesn't this violate the definition of temperature, which is only defined to determine where heat will flow?
So the question can also be expressed as: How can water get heat (if not heat, what?) from its environment when there is no temperature difference so that it can evaporate?
Answer: Microscopically, both the water molecules in the air and the water molecules on the clothing are rapidly moving around due to their thermal energy. Every once in a while, a molecule on the clothing will have enough energy to break free; every once in a while, a molecule in the air will stick to your clothes. Because the humidity in your room is less than 100%, the first process will happen more often, so water will go into the air. Conversely, if you put your clothes in a sauna, where the humidity is more than 100%, your clothes will get more wet over time.
Now let's look macroscopically. Why should evaporation happen at all, if it costs energy to do it? The reason is that there's "more room" for the water molecules in the air than on your clothes, so it's more likely for things to bounce into the air (which is big) than land on your clothes (which are small). Formally, we say that the process is entropy-driven. (Since entropy counts available microstates, this is just the same thing.)
Increasing entropy and decreasing energy are separate goals. In this case, the energy and entropy effects oppose each other, but entropy wins; in general, you can tell which one wins by looking at the change in Helmholtz free energy, $F = U - TS$. | {
"domain": "physics.stackexchange",
"id": 27660,
"tags": "thermodynamics, temperature, everyday-life, water, evaporation"
} |
Identify Bad Products from given parameters using neural networks | Question: I have a problem at hand to identify good/bad products using given parameters. The number of parameters is on the order of 5000 and there are multiple values for the parameters. However, I do not have a labelled set of data which says which products are good or bad.
For Example, Say the parameters are AX, AY, AZ, B, C, DX, DY, etc.
Each of them has a different range. Is decision trees the right approach?
Can classification be applied to this problem?
Answer: No. Classification requires labelled data. Without labelled data there is no way to solve this. How would you do anything at all, if you don't know which of the products in the training set are good and which are bad? There's no basis for making a decision of any sort. | {
"domain": "datascience.stackexchange",
"id": 2776,
"tags": "neural-network, classification, decision-trees"
} |
Calculate speed of proton at infinity | Question: I have this problem to solve:
Two protons are on the x axis at x=-1 and x=0. An alpha particle is placed at x=2, and the proton on the left is released, find its speed at infinity.
I know that this involves the change in kinetic and potential energies of the particles, but I dont understand which particles are included in that calculation. Is the proton on the left neglected because it is the one released? An explanation in general would be appreciated.
For the purposes of following the rules in asking a homework question properly, the specific physics topics I was asking about were: which particles are included in the kinetic and potential energy calculation for determining the speed at infinity of a proton, and why. My effort in solving this question was finding out that work is the change in kinetic energy + change in potential energy, but not knowing which particles to include in the calculation. This should clear up any confusion that this question is off topic.
Answer: I'm assuming the proton at x=0 and the alpha particle at x=2 are held in place. I'll call these the "held" particles. There will be two forces which act on the "free" proton (at x=-1). I'm also assuming that this is just a classical mechanics problem, and will solve it that way.
Before we go into the calculations, let's think about what is going to happen. We expect the proton to feel a force from the two other particles. Because every particle is positive, we expect the free proton to be repelled by the other two particles and shoot off in the -x direction. Since the free proton was at rest when we released it (right?), it's going to convert all of its energy from potential energy to kinetic energy. At infinity, the proton will have shed all of its potential energy and reached its maximum speed. Sweet.
This proton got all of its energy from the held particles, right? So we need to calculate the potential energy from each of them. Since the free proton doesn't care about the interactions between the two held particles, we don't need to calculate that. We do need:
Energy #1: The potential energy from the held proton acting on the free proton.
Energy #2: The potential energy from the held alpha particle acting on the free proton.
Total Potential Energy = Energy #1 + Energy #2
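As a numerical sanity check of that sum (the problem never states units for the x coordinates, so metres are assumed here purely for illustration):

```python
k = 8.9875e9      # Coulomb constant, N m^2 / C^2
e = 1.602e-19     # elementary charge, C
m_p = 1.673e-27   # proton mass, kg

U1 = k * e * e / 1.0        # Energy #1: held proton at x=0, separation 1 m
U2 = k * (2 * e) * e / 3.0  # Energy #2: alpha (charge 2e) at x=2, separation 3 m

U_total = U1 + U2               # all of it becomes kinetic energy at infinity
v = (2 * U_total / m_p) ** 0.5  # from U_total = (1/2) m v^2
print(f"U_total = {U_total:.3e} J, v = {v:.3f} m/s")
```

With metre-scale separations the numbers come out tiny, which is expected: two isolated elementary charges a metre apart store very little energy.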
The Total Potential Energy is all the energy that proton can use in its speedy retreat. Since it will convert all of its potential energy into kinetic energy, we just use a formula for kinetic energy (which has velocity and mass in it, not the other ones!), and set that equal to our total potential energy. Do a little math, and find the free proton's velocity! That's your answer! | {
"domain": "physics.stackexchange",
"id": 13196,
"tags": "homework-and-exercises, speed, protons"
} |
Phase Spectrum of Signals | Question: I did fft of a fish's trajectory, because it looks periodic and I tried to find the frequency. However, I can't understand what does the phase spectrum below represent. Does this lower left to upper right curve have any special meaning?
Below is my Matlab code:
L = length(X);
Fs = 30; % Sampling frequency
T = 1/Fs; % Sampling period
t = (0:L-1)*T; % Time vector
figure;
plot(t,X);
X = X - mean(X);
y = fft(X);
z = fftshift(y);
ly = length(y);
f = (-ly/2:ly/2-1)/ly*Fs;
figure;
plot(f,abs(z))
title("Double-Sided Amplitude Spectrum of x(t)")
xlabel("Frequency (Hz)")
ylabel("|y|")
grid
tol = 1e-6;
z(abs(z) < tol) = 0;
theta = angle(z);
figure;
plot(f,theta/pi)
title("Phase Spectrum of x(t)")
xlabel("Frequency (Hz)")
ylabel("Phase/\pi")
grid
Answer: A few things to consider here
The DFT of a real signal is conjugate symmetric, i.e. $\varphi(-\omega) = - \varphi(\omega)$ so the left side of the graph is just the negative of the right half. Most people don't bother looking at negative frequencies since there is no independent information there.
Interpreting the phase without looking at the amplitude is tricky. A real measurement will always have some amount of noise and limited signal to noise ratio (SNR) at low frequencies. The phase of $z=0$ is undefined and the phase of a small but noisy amplitude is just a random number. That seem to be the case here below maybe 3 Hz or so.
The sudden jumps at 11Hz or 12 Hz are phase wrapping issues. You constrain the phase to the interval $[-\pi,+\pi]$. That means that if the "real" phase moves a small amount from, say 3.1 to 3.2, in your graph it jumps from 3.1 to -3.08. This can be improved by so-called "unwrapping" algorithms but that's not trivial for measured data with bad SNR at the band edges, so I recommend asking a different question around that.
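The wrapping effect is easy to reproduce in isolation; here is a small NumPy sketch (Matlab's unwrap function does the same job on theta before plotting):

```python
import numpy as np

# A phase that truly ramps 3.0 -> 3.3 rad gets folded back into [-pi, pi]
# once it crosses pi, producing the sudden jump described above.
true_phase = np.array([3.0, 3.1, 3.2, 3.3])
wrapped = np.angle(np.exp(1j * true_phase))
print(wrapped)             # last two entries show up near -3.08 and -2.98
print(np.unwrap(wrapped))  # unwrapping recovers the smooth 3.0 .. 3.3 ramp
```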
Above 13 Hz it looks very messy again. This could be a combination of wrapping and noise, especially once you hit the frequency range where the anti-aliasing filter kicks in.
So overall it looks like you have bad SNR at low frequencies and a somewhat noisy but fairly well defined monotonically increasing phase up to 13 Hz or so. That could be a property of the signal itself or some artifact of the data acquisition system or post-processing. Without cleaning up the graph first and also looking at the amplitude spectrum at the same time, that's very hard to tell. | {
"domain": "dsp.stackexchange",
"id": 12009,
"tags": "fft, fourier-transform, frequency-spectrum, phase"
} |
If the Einstein Field Equations are so hard to solve, how did Einstein know they were correct in the first place? | Question: Consider a formula like $y = mx + b$. For instance, $y = 2x + 3$. It is simple to check that $(1,5)$ is a solution, as is $(2,7)$, as is $(3,9)$, etc. So it's easy to see that $y =2x + 3$ is a useful equation in this case because it accurately describes a pattern between a bunch of numbers.
But if it's so hard to calculate exact solutions of the Einstein Field Equations, how did he verify they were correct? How did he even know to write them down without first doing calculations and then identifying the general formula?
To use my initial analogy, if I begin with a bunch of pairs of numbers, I then derive the equation $y = 2x +3$ as the equation that describes the pattern. But if solutions to the EFEs are so hard to find, how did Einstein find the EFEs in the first place?
Answer: It's not uncommon that the equations to describe a system are fairly simple but finding solutions is very hard. The Navier-Stokes equations are a good example - there's a million dollars waiting for the first person to make progress in finding solutions.
In the case of relativity, it became clear to Einstein fairly quickly that a metric theory was required so the equation needed was one that gave the metric as a solution. Einstein tried several variations before settling on the GR field equation. I believe one of the factors that influenced him was when Hilbert pointed out that the GR field equations followed from an obvious choice for the gravitational action.
I'm not sure if Einstein himself ever found an analytic solution to his own equations. However he used a linearised form of the equation to calculate the precession of Mercury and to calculate the deflection of light. The precession of Mercury was already known by then, so he knew (linearised) GR gave the correct answer there, but he had to wait a few years for Eddington's measurement of the deflection of light (though to modern eyes it seems likely that Eddington got the answer he wanted!).
The first analytic solution was Schwarzschild's solution for a spherically symmetric mass.
General relativity is one of the very few cases in science where a successful theory was devised purely on intellectual grounds rather than as a response to experimental data. Anyone who has suffered the pain of trying to learn GR can appreciate what an astonishing accomplishment this was, and why Einstein deserves every bit of the fame associated with him. | {
"domain": "physics.stackexchange",
"id": 19229,
"tags": "general-relativity"
} |
Extension vs Class For ViewModel | Question: What the advantages & disadvantages to each of these approaches pertaining to creating file that takes care of the view configuration to reduce a controller's file size.
Main Purposes Are:
Memory
Performance
Testing
Usability
This is the simplest working example capable of demonstrating the question, but when many views are present, each using many methods, do any of the above concerns change when comparing the Extension and ViewModel Class approaches?
Reminder: The ViewModel Class or Extension would be placed in a separate file.
ViewModel Approach:
class VC: UIViewController {
    lazy var viewModel: ViewModel = ViewModel(main: self)
    override func viewDidLoad() {
super.viewDidLoad()
initializeUI()
}
func initializeUI() {
viewModel.configureView()
}
}
class ViewModel {
private let main: UIViewController
init(main: UIViewController) {
self.main = main
}
func configureView() {
main.view.backgroundColor = UIColor.blue
}
}
Extension Approach:
class VC: UIViewController {
lazy var viewModel: ViewModel(main: self)
    override func viewDidLoad() {
super.viewDidLoad()
initializeUI()
}
func initializeUI() {
configureView()
}
}
extension VC {
func configureView() {
        view.backgroundColor = UIColor.blue
}
}
Answer: The viewModel property on the Extension example should be removed as it is redundant:
lazy var viewModel: ViewModel(main: self)
Irrespective of this, it seems unnecessary to create a separate ViewModel class solely for the purpose of encapsulating a configureView function. On that basis, the Extension approach seems like a much more sensible and performant way of managing this kind of configuration. | {
"domain": "codereview.stackexchange",
"id": 26703,
"tags": "object-oriented, comparative-review, swift, ios, mvvm"
} |
Time position in STFT output | Question: How can I work out which time position in the STFT output I am at?
For example, say I have 3987 time frames in the output of the STFT, my window length is 625 (Hamming), my hop size is 125 and the length of the signal is 2 seconds.
How can I estimate, for example, the first 5 milliseconds or a window of 10 milliseconds in the output of the STFT? I know that the number of time frames is calculated by
coln = 1+fix((xlen-wlen)/h);
But how is the equation in my situation?
Can I say that each time frame has a length of hop size?
Answer: Assuming 0-based vectors, hop size $H$, and window size $M$, the frame number $N_f$ will contain the DFT of a window that starts on sample $n_1 = N_f \times H$ and ends on sample $n_2 = n_1 + M -1$.
That means that for each STFT frame, you shift the window by $H$ samples. Note that $H$ and $M$ do not necessarily coincide. | {
"domain": "dsp.stackexchange",
"id": 4943,
"tags": "stft, time-frequency"
} |
How can I make a Qt project that uses the qextserialport library? | Question:
I wrote a program in Qt for receiving data from the serial port. I use qextserialport. I built this library (qextserialport) with qmake and use it,
but when I want to build this Qt project with rosmake to use it in ROS, I see this error:
In file included from /home/user/ros_workspace/qt_test/qextserialport_test/widget.cpp:1:0:
/home/user/ros_workspace/qt_test/qextserialport_test/widget.h:5:28: fatal error: qextserialport.h: No such file or directory
what should i do?
Originally posted by mr.karimi on ROS Answers with karma: 52 on 2013-12-09
Post score: 0
Answer:
In order to use qextserialport in ROS I suggest this CMake setup. This program writes "h" on the port and receives some data from the micro.
Cmake
cmake_minimum_required(VERSION 2.4.6)
include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)
find_package(Qt4 COMPONENTS QtCore QtGui)
INCLUDE(${QT_USE_FILE})
ADD_DEFINITIONS(${QT_DEFINITIONS})
link_directories(/usr/include/qt4/QtExtSerialPort)
rosbuild_init()
set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)
rosbuild_add_executable(first_test src/first_test.cpp)
target_link_libraries(first_test ${QT_LIBRARIES} -lqextserialport -lpthread)
and the program
/*
* first_test.cpp
*
* Created on: Dec 19, 2013
* Author: hamid
*/
#include "ros/ros.h"
#include <QtExtSerialPort/qextserialport.h>
#include "QDebug"
#include "QCoreApplication"
int main(int argc, char** argv)
{
ros::init(argc, argv, "first_test");
QCoreApplication app(argc, argv);
ros::NodeHandle n;
QextSerialPort *port;
QByteArray bytes;
QByteArray bytes2;
QString portName = QLatin1String("ttyUSB0");
port = new QextSerialPort(QString(portName), QextSerialPort::EventDriven);
port->setBaudRate(BAUD9600);
port->setFlowControl(FLOW_OFF);
port->setParity(PAR_NONE);
port->setDataBits(DATA_8);
port->setStopBits(STOP_1);
if (port->open(QIODevice::ReadWrite) == true)
{
qDebug() << "listening for data on" << port->portName();
}
else
{
qDebug() << "device failed to open:" << port->errorString();
}
bytes[0]='h';
int total = port->write(bytes,bytes.size());
qDebug() << total;
sleep(1);
int a = port->bytesAvailable();
qDebug() << a;
bytes2.resize(a);
port->read(bytes2.data(),bytes2.size());
qDebug() <<(QString::fromAscii(bytes2).toUcs4());
return 0;
}
Originally posted by Hamid Didari with karma: 1769 on 2013-12-22
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 16397,
"tags": "ros, qtcreator"
} |
Is water a gas at critical density, room temperature? | Question: I am quoting Chaikin, Lubensky, Principles of Condensed Matter Physics, p. 4.
Now suppose we have a closed container of water vapor at a density of 0.322 g/cc at room temperature. As the temperature is lowered...
It then proceeds to describe condensation, and says it is a continuous process because the system is at the critical density. However, it is definitely not at the critical temperature. How can it condense at a temperature lower than that? Is it a mistake?
Answer: If this state is prepared at room temperature (somehow), condensation will happen immediately, and the temperature will rise so much that the water will be boiling hot (some water will stay in the vapor phase), although it will still be less hot than the critical point.
Yes, it is a mistake. You can only prepare/maintain water vapor (or more properly, a single water phase) at that density at the critical point. Any cooler and it will condense into two phases. It's possible to forestall condensation somewhat by eliminating condensation nuclei, but this won't work near the critical point where phase changes are smoother.
"domain": "physics.stackexchange",
"id": 10329,
"tags": "thermodynamics, condensed-matter, physical-chemistry, phase-transition, critical-phenomena"
} |
Bulk boundary correspondence = difference in Chern numbers? | Question: In topological insulators the bulk boundary correspondence is frequently stated as the principle that the number of edge modes equals the difference in Chern numbers at that edge. I found this e.g. in "Topological Band Theory and the Z2 Invariant" by C.L. Kane, p. 18. However, there is no "derivation" of this in the text by C.L. Kane.
Can anyone present to me an argument (maybe even a proof) that the number of edge states corresponds to the change in Chern numbers? A source where I can find such an argument would also be great!
Answer: There seems to be no general yet simple derivation of this fact. However, there are some papers concerning the issue, though they involve mathematical techniques still too advanced for me. The works that I found are
Bulk-Edge Correspondence for Chern Topological Phases: A Viewpoint from a Generalized Index Theorem. T. Fukui, K. Shiozaki, T. Fujiwara, S. Fujimoto. J. Phys. Soc. Jpn. 81 114602, (2012); arXiv:1206.4410.
Edge states and the bulk-boundary correspondence in Dirac Hamiltonians. Roger S. K. Mong, Vasudha Shivamoggi. Phys. Rev. B 83, 125109 (2011); arXiv link
I also found some lecture notes on the topic also giving some more detailed background information at arXiv:1501.02874. Maybe it helps somebody. | {
"domain": "physics.stackexchange",
"id": 80367,
"tags": "solid-state-physics, topology, topological-insulators"
} |
Would a fish in a sealed ball swim normally? | Question: This question led me to wonder whether swimming would be the same experience for a fish in a full, sealed ball as it is normally.
If the fish is about 7cm from the walls of the tank, a pressure wave can propagate from the fish to the wall and back in .0001 seconds, while the time scale on which a fish wiggles is tenths of a second. So unlike the open ocean, the water surrounding the fish can all communicate with itself on the time scale that the fish wiggles, and unlike a normal fish tank, the water has nowhere to go and so can't change its shape.
Would the fish notice any hydrodynamic effects in a full, sealed tank compared to normal swimming?
Answer: It is not sound propagation but rather momentum diffusion that makes a fish swim. The rate at which momentum diffuses is determined by the kinematic viscosity, which for water is about $10^{-6} m^2/s$.
It takes minutes for momentum to diffuse in water over distances of centimeters, while the time scale over which a fish wiggles is tenths of a second. So, during a wiggle the water surrounding the fish doesn't 'communicate' with any wall, and the fish doesn't notice any anomalous hydrodynamic effects. | {
"domain": "physics.stackexchange",
"id": 5772,
"tags": "fluid-dynamics, propulsion"
} |
Harmonic Product Spectrum algorithm. I dont understand one step | Question: I was looking some Harmonic Product Spectrum algorithm examples and I came up with this:
//Implement Harmonic Product Spectrum
for(k = 0; k < buffer_size / 8; k++)
{
sum[k] = magFFT[k] * magFFT[2*k] * magFFT[3*k];
// find fundamental frequency (maximum value in plot)
if( sum[k] > max_value2 && k > 0 )
{
max_value2 = sum[k];
fund_freq = k;
}
}
fund_freq1 = fund_freq * 8000 / buffer_size;
What I dont understand is the reason behind the last line, I tested it and it works but I couldn't find an explanation for that multiplication. Can anyone help me please?
Source of code: SOURCE
Answer: "8000 / buffer_size" is frequency resolution, decide by sample length or "buffer_size",
you know the sampling interval is "1/8000", the whole time length 'T' of the sequence is "buffer_size/8000", then '1/T' is frequency resolution,
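A tiny sketch of that bin-to-frequency mapping (fs = 8000 is the question's sample rate; the buffer_size value here is just an assumed example):

```python
fs = 8000  # sample rate from the question's code
n = 1024   # assumed buffer_size, for illustration only

def bin_to_hz(k, fs, n):
    """Frequency represented by FFT bin k of an n-point FFT at sample rate fs."""
    return k * fs / n

print(bin_to_hz(0, fs, n))  # 0.0 -> the DC bin
print(bin_to_hz(1, fs, n))  # one resolution step: fs/n = 7.8125 Hz here
```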
In your code, if k is 0 it represents 0 Hz, the DC part of the signal; if k is 1 it represents 1/T = 8000/buffer_size Hz. In general, bin k corresponds to the frequency k * 8000 / buffer_size Hz. | {
"domain": "dsp.stackexchange",
"id": 5460,
"tags": "sound, pitch, peak-detection"
} |
Prove that adding any non Clifford gate to the Clifford group yields a universal gate set | Question: I have seen it claimed in multiple places that adding any non Clifford gate to the Clifford group yields a universal gate set. It is, however, not easy to find an accessible proof of this fact.
The question
https://cstheory.stackexchange.com/questions/34707/theorems-for-universal-set-of-quantum-gates-for-sud?noredirect=1&lq=1
cites Corollary 6.8.2 of Nebe et al. (Self-Dual Codes and Invariant Theory (Springer 2006)) as a proof of the fact that the Clifford group plus any other gate is universal.
This proof by Nebe makes heavy use of invariant theory and coding theory.
I'm looking for a more elementary or self contained proof of this result.
I would also be very happy with an answer along the lines of "Ian you are so silly, the proof in Nebe is actually quite elementary and self contained you just have to know blah blah blah [insert some actually clear explanation of why the ideas in the proof are elementary]."
Just to clarify, I understand the proof in Nielson and Chuang that single qubit gates together with cNot gates are universal.
And I am comfortable with the classification of the subgroups of $ PU_2=PSU_2=SO_3(\mathbb{R}), SU_2, U_2 $ in particular that the Clifford group on one qubit is a maximal closed subgroup and so adding any single qubit gate to the Clifford group gives a universal set for all single qubit gates and thus for all gates.
What I am really trying to understand is why adding any non Clifford gate, even a multi qubit gate, to the Clifford group yields a universal gate set.
Long comment: Ah you are totally right that naively taking the determinant 1 subgroup can go horribly wrong, thanks for catching that! I guess what I really meant is take the group
$$
sG:=\pi_2^{-1}[\pi_1(G)]=\{ (e^{i \theta} g) \in SU(d): g \in G \}
$$
where $ \pi_1: U(d) \to PU(d) $ and $ \pi_2: SU(d) \to PU(d) $ are the natural projections and $ \pi_2^{-1}[\pi_1(G)] $ just denotes taking the inverse image under $ \pi_2 $ of the group $ \pi_1(G) \subset PU(d) $ (see https://quantumcomputing.stackexchange.com/a/27232/19675 for another situation where I define and use this group $ sG $). And I'm not saying anything new because the group I describe is exactly the group you call $\overline G := \{ \det(U^\dagger)^{1/d} U \, | \, U \in G\}$ where by $ \det(U^\dagger)^{1/d} U $ you mean all possible $ d $ roots, so for example $ \det(H^\dagger)^{1/d} H $ is both $ iH $ and $ -iH $ and $ \det(I^\dagger)^{1/d} I $ is both $ I $ and $ -I $. So for $ G=\{ I, H \} \subset U(2) $ the corresponding subgroup of $ SU(2) $ is $ \{ I,-I,iH,-iH \} $ exactly as you say
Answer: There is at least one other way to prove this I'm aware of. The argument uses the concept of a unitary 2-design and how this restricts the representation theory of a group.
To avoid pathological cases, we restrict our attention to the special unitary group $SU(d)$. Note that we can regard any subgroup $G\subset U(d)$ as a subgroup of $SU(d)$ by taking $\overline G := \{ \det(U^\dagger)^{1/d} U \, | \, U \in G\}$.
The goal is to prove that the Clifford group plus any non-Clifford gate generates a dense subgroup of $SU(p^m)$ and hence is universal.
A unitary 2-design is a (finite) set $D\subset SU(d)$ such that
$$
\frac{1}{|D|} \sum_{U\in D} U^{\otimes 2} X (U^{\otimes 2})^\dagger = \int_{SU(d)} U^{\otimes 2} X (U^{\otimes 2})^\dagger \,\mathrm{d}U, \qquad \forall X \in \mathbb C^{d^2\times d^2}.
$$
Note that the right hand side projects onto the commutant of the tensor square representation, i.e. on the subspace of matrices which commute with the representation $U\mapsto U^{\otimes 2}$ of $SU(d)$.
The definition of unitary 2-design extends naturally to infinite sets endowed with an appropriate measure to perform the average on the LHS.
Fact 1: We call a subgroup $G\subset SU(d)$ a unitary 2-group if it is a unitary 2-design, i.e. we have
$$
\int_G U^{\otimes 2} X (U^{\otimes 2})^\dagger \,\mathrm{d}U = \int_{SU(d)} U^{\otimes 2} X (U^{\otimes 2})^\dagger \,\mathrm{d}U, \qquad \forall X \in \mathbb C^{d^2\times d^2}.
$$
By the above remark, this is equivalent to saying that the commutant of $G$ and $SU(d)$ is the same.
Fact 2: Let $\mathrm{Cl}_p(m)$ be the $m$-qudit Clifford group of local prime dimension $p$. Then, $\mathrm{Cl}_p(m)$ is a unitary 2-group. See e.g. Zhu: "Multiqubit Clifford groups are unitary 3-designs"
Fact 3: Let $V\in SU(p^m)\setminus \overline{\mathrm{Cl}_p(m)}$ be any non-trivial non-Clifford gate. Then $G:=\langle \overline{\mathrm{Cl}_p(m)}, V\rangle$ is an infinite subgroup. [Nebe, "The invariants of the Clifford group", Thm. 6.5 and 7.3]
Claim: $G$ is a unitary 2-group.
Proof: Since $G$ is a subgroup of $SU(p^m)$ the commutant of $G$ has to contain the commutant of $SU(p^m)$. Likewise, since $\mathrm{Cl}_p(m)$ is a subgroup of $G$, the commutant of $\mathrm{Cl}_p(m)$ has to contain the commutant of $G$. But the commutant of $\mathrm{Cl}_p(m)$ and $SU(d)$ is the same, since $\mathrm{Cl}_p(m)$ is a unitary 2-group. qed
Fact 4: Any finitely generated infinite unitary 2-group is dense in $SU(d)$. [A. Sawicki and K. Karnas, "Universality of single qudit gates", Cor. 3.5; See also Prop. 3 in my paper]
Final remark: We can also replace Nebe's argument on the maximal finiteness of the Clifford group ("Fact 3") with the classification of finite unitary group designs [Bannai et al. "Unitary t-groups"], which implies that any finite unitary 2-group that contains the Clifford group has to be the Clifford group (up to its centre). However, this result itself makes heavy use of the classification of finite groups and is thus at least on the same level as Nebe's proof. I would say it's vastly more difficult.
"domain": "quantumcomputing.stackexchange",
"id": 3575,
"tags": "quantum-gate, universal-gates, clifford-group"
} |
Reading text from 2 files | Question: I've got 2 simple text files:
besede.txt
To je vsebina datoteke z besedami. V njej so razlicne besede ki imajo lahko pomen ali pa tudi ne.
skrivno.txt
4 3 19 2 3 2 4 3
I wrote a program which receives these 2 files' names as arguments. It finds pairs of numbers in skrivno.txt and the first number in the pair is the index of the word in besede.txt; the second number in the pair is the index of the letter in that word.
So in this case I get:
4th word, 3rd letter - T
19th word, 2nd letter - E
3rd word, 2nd letter - S
4th word, 3rd letter - T
So, TEST.
Here is my program that does this. I'm open to suggestions on what I could improve and do better, etc.
public class meow {
public static void main(String args[]) throws Exception{
Scanner sc1 = new Scanner(new File(args[0])); //besede.txt
Scanner sc2 = new Scanner(new File(args[1])); //skrivno.txt
int inW = 0, inL = 0; //index of word, index of letter
String text = "", code = "";
while(sc1.hasNext()){
text += sc1.next() + " ";
}
String[] text2 = text.split(" ");
while(sc2.hasNext()){
inW = sc2.nextInt();
inL = sc2.nextInt();
if(inW == 0 && inL == 0){
code += " ";
}
code += text2[inW-1].charAt(inL-1);
}
System.out.printf("%s\n", code);
}
}
Answer: There are quite some things you can improve here, this list may not cover everything that could be improved though:
Adhere to the Java naming conventions. Classes are named in PascalCase for example, so it would be public class Meow. In other places it seems fine.
Use proper indentation everywhere; this definitely applies to the main method inside public class meow, as all items should be indented one level to the right. This may also apply to the while loops: I would prefer to write while(sc1.hasNext()){ as while (sc1.hasNext()) { for readability.
Variable names are not expensive. sc1, sc2 are dubious, but could be ok. However inW and inL are plain alarming; the fact that you need a comment to explain them is a bad sign. My suggestions are wordIndex and letterIndex.
Consider using a StringBuilder over dealing with raw strings. This means for example that:
String text = "", code = "";
while(sc1.hasNext()){
text += sc1.next() + " ";
}
could be rewritten as:
StringBuilder sb = new StringBuilder();
while (sc1.hasNext()) {
sb.append(sc1.next()).append(' ');
}
String text = sb.toString();
You have a bug in your second while-loop. You check for sc2.hasNext() and you take two times sc2.nextInt(). Your hasX() and nextX() calls should always match. This means that in the loop per iteration in total there should be two hasNextInt() calls with two nextInt() calls.
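To make the point concrete, here is one hypothetical way the reading loop could be restructured so that every nextInt() is guarded by its own hasNextInt() (the class and method names are mine, not from the original; the Scanner reads from a String here only to keep the example self-contained — in the real program it would wrap the File as before):

```java
import java.util.Scanner;

public class SecretDecoder {

    // Decodes pairs of (word index, letter index), both 1-based.
    static String decode(String words, String numbers) {
        String[] text = words.split(" ");
        StringBuilder code = new StringBuilder();
        Scanner sc = new Scanner(numbers);
        while (sc.hasNextInt()) {
            int wordIndex = sc.nextInt();
            if (!sc.hasNextInt()) {
                break; // dangling index with no letter position: stop cleanly
            }
            int letterIndex = sc.nextInt();
            code.append(text[wordIndex - 1].charAt(letterIndex - 1));
        }
        return code.toString();
    }

    public static void main(String[] args) {
        String words = "To je vsebina datoteke z besedami. V njej so"
                + " razlicne besede ki imajo lahko pomen ali pa tudi ne.";
        // prints "test" (the letters as they appear in the file)
        System.out.println(decode(words, "4 3 19 2 3 2 4 3"));
    }
}
```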
The same deal about StringBuilder applies here to your second while-loop for building code.
In the future you can consider switching to using the java.nio.Path API, over the old java.io.File API. I will not show more code here on how to do it, as it would completely change the structure of the program and is best used in conjunction with Java 8, which is not in the scope of this answer. | {
"domain": "codereview.stackexchange",
"id": 7125,
"tags": "java, file"
} |
Does the l1-norm work better than the l2-norm in minimization using a projection method? | Question: Given a vector of errors $e(x)$ obtained from variable $x$
In the following problem :
$\min_x \| e(x) \|$
Setting aside robustness and considering only convergence speed, does the l1 norm work better than the l2 norm with the projected gradient method?
Answer: l1 norms typically work better in Hamming space (boolean, binary lattice spaces) whereas l2 norms typically work better for real numbers in a real-valued space.
The reason for this is that in an integer lattice, a square root value may not make sense as the shortest path...
A norm typically defines a shortest path between two points. To figure out what is the best norm, you would need to figure out the ambient space where your data lies.
For a min type of function, an l_\infty norm can be practical whenever the problem can be reexpressed from min to max by flipping the values according to the highest possible value in the function space.
In CS, that bound is nothing more than the max integer value sometimes... | {
"domain": "cstheory.stackexchange",
"id": 2181,
"tags": "optimization"
} |
Project Euler #85: Find the rectangular grid with closest to 2M rectangles | Question: For one place that I interviewed at (for a Python developer position) I worked with one of the devs on two Project Euler problems, one being problem 85. We talked through different approaches and came up with solutions together, but we coded separately. He seemed impressed that we were able to get through two problems. After the interview, he asked me to clean up the code and submit it, which you can see below. He never got back to me after I sent him the code. I'd like to know what's wrong with my code and not make the same mistakes again.
Problem 85
By counting carefully it can be seen that a rectangular grid measuring 3 by 2 contains eighteen rectangles:
Although there exists no rectangular grid that contains exactly two million rectangles, find the area of the grid with the nearest solution.
# problem85.py
# Project Euler
# Nicolas Hahn
# int m, int n -> # of rectangles that can be made from m*n rectangle
def numRectangles(m,n):
return (m*n*(m+1)*(n+1))//4
# int 'num' -> side length of first square that has >='num' rectangles
def getLargestSquareLength(num):
rectNum = 0
length = 0
while rectNum < num:
length += 1
rectNum = numRectangles(length,length)
return length
# int 'num' -> area of rectangle with closest to 'num' rectangles
def closestTo(num):
rects = []
# start from a square, work towards a 1*n rectangle
length = getLargestSquareLength(num)
for i in range(1,length+1):
m = length-i
n = m+1
# find closest rectangle to 'num' with width m
while numRectangles(m,n) < num and m > 0:
n += 1
# store both the >num rectangle and <num rectangle with width m
rects.append((m,n,numRectangles(m,n)))
rects.append((m,n-1,numRectangles(m,n-1)))
# get closest number of rectangles, then compute area
m,n,r = sorted(rects, key=lambda k: abs(k[2]-num))[0]
return m*n
def main():
# correct answer = 2772
print(closestTo(2000000))
if __name__ == "__main__":
main()
Answer: Well, let's ignore the code, because starting to code at that stage was already the wrong move.
Let's look instead at a better starting-point:
Find a closed formula for the number of rectangles in a grid of given size (x, y):
N = \$\sum_{a=1}^x\sum_{b=1}^y(x-a+1)(y-b+1)\$
\$= \sum_{a=0}^{x-1}\sum_{b=0}^{y-1}(x-a)(y-b)\$
\$ = \frac 1 4 * (2x^2 - x*(x-1))(2y^2 - y*(y-1))\$
\$= \frac 1 4 * (x^2+x)(y^2+y)\$
Solve it for given y and N:
\$x^2+x-\frac{4*N}{y^2+y} = 0\$
\$x=\frac{-1 \pm \sqrt{1 + \frac{4*4*N}{y^2+y}}}2\$
\$x=-\frac 1 2 \pm \sqrt{\frac 1 4 + \frac{4*N}{y^2+y}}\$
(Only the positive solution is of interest)
Iterate over all possible short sides starting at 1, until the short side would exceed the long one, and for each compute the number of rectangles for the integer side lengths just below and just above the fractional solution.
The solution will be found in \$\Theta\left(\sqrt[4]N\right)\$. | {
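A short Python sketch of this approach (my own rendering of the math above — the function name is hypothetical, and this is not the interview code): for each short side y, take the positive root of the quadratic in x and test the integer widths on either side of it.

```python
from math import sqrt

def closest_area(target):
    """Area of the grid whose rectangle count is nearest to `target`."""
    best_area, best_diff = 0, float("inf")
    y = 1
    while True:
        # positive root of x^2 + x - 4*target/(y^2 + y) = 0
        x_exact = -0.5 + sqrt(0.25 + 4 * target / (y * y + y))
        if x_exact < y:  # the short side just became the long one: done
            break
        for x in (int(x_exact), int(x_exact) + 1):
            count = x * (x + 1) * y * (y + 1) // 4
            diff = abs(count - target)
            if diff < best_diff:
                best_diff, best_area = diff, x * y
        y += 1
    return best_area

print(closest_area(2_000_000))  # 2772
```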
"domain": "codereview.stackexchange",
"id": 16214,
"tags": "python, programming-challenge, python-3.x"
} |
Derivation of Inverse Fourier transform from forward Fourier transform | Question: Consider the Fourier pairs:
$$\psi(x,t) \stackrel{\mathrm{FT}}{\longleftrightarrow} \Psi(k,t)$$
$$\text{If } \quad \quad\Psi(k,t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \psi(x,t) e^{-ikx} \, dx \quad \quad \dots(i)$$
$$\text{then, can we derive: } \quad \quad \psi(x,t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \Psi(k,t) e^{ikx} \, dk \quad \quad \dots(ii) \quad ?$$
I have used comparison method to proof $eq.(ii) $
i.e., $$\text{consider:} \quad \quad x(t) \stackrel{\mathrm{FT}}{\longleftrightarrow} X(\omega)$$
$$\text{then, } \quad \quad X(\omega)=\int_{-\infty}^{\infty} x(t) e^{-i\omega t} \, dt \quad \quad \dots(iii)$$
$$\text{and, } \quad \quad x(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega) e^{i\omega t} \, d\omega \quad \quad \dots(iv)$$
Now, comparing $eq(iii)$ with $eq(i)$ , we find:
$w \to k$
$t \to x$
$x(t) \to \frac{\psi(x,t)}{\sqrt{2\pi}}$
$X(w) \to \Psi(k,t)$ , putting these values in $eq(iv)$, we get:
$$\frac{\psi(x,t)}{\sqrt{2\pi}}=\frac{1}{2\pi}\int_{-\infty}^{\infty} \Psi(k,t) e^{ikx} \, dk $$
$$\implies \psi(x,t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \Psi(k,t) e^{ikx} \, dk$$
But can we derive $eq(ii)$ from $eq(i)$? {without using any comparison}
Answer: I'll use the common DSP definition and notation of the Fourier transform and its inverse like given in Eqs $(iii)$ and $(iv)$ in the OP. Note that the argument $t$ in Eqs $(i)$ and $(ii)$ is irrelevant.
First of all, note that the Fourier transform of a delay $x(t)=\delta(t-\tau)$ is given by
$$\int_{-\infty}^{\infty}\delta(t-\tau)e^{-j\omega t}dt=e^{-j\omega \tau}\tag{1}$$
Consequently, we have to accept the following expression (in a distributional sense), following from the inverse Fourier transform of $(1)$:
$$\delta(t-\tau)=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{j\omega (t-\tau)}d\omega\tag{2}$$
Now we can prove $(iv)$ from $(iii)$:
$$\begin{align}\int_{-\infty}^{\infty}X(j\omega)e^{j\omega t}d\omega&=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}x(\tau)e^{-j\omega\tau}d\tau e^{j\omega t}d\omega\\&=\int_{-\infty}^{\infty}x(\tau)\underbrace{\int_{-\infty}^{\infty}e^{j\omega (t-\tau)}d\omega}_{2\pi\delta(t-\tau)} d\tau\\&=2\pi x(t)\end{align}\tag{3}$$ | {
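As a numerical sanity check of the same argument (my own sketch, not part of the answer): in the discrete case the delta identity $(2)$ becomes $\sum_k e^{j2\pi k(n-m)/N} = N\,\delta_{nm}$, so inserting the forward DFT into the inverse recovers the signal exactly.

```python
import cmath

def dft(x):
    """Forward discrete Fourier transform (plain sums, no FFT)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, with the 1/N factor playing the role of 1/(2*pi)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

x = [1.0, -2.0, 3.5, 0.25]
roundtrip = idft(dft(x))
print(all(abs(a - b) < 1e-12 for a, b in zip(x, roundtrip)))  # True
```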
"domain": "dsp.stackexchange",
"id": 9855,
"tags": "fourier-transform, derivation"
} |
Is the definition of a meter tautological? | Question: The speed of light is defined as $c=299{,}792{,}458\,\mathrm{m/s}$, and a meter is defined as the distance that light travels in a $1/299{,}792{,}458=1/c$ of a second, but then we would have defined a meter in terms of the speed of light, but we also defined the speed of light in terms of a meter, seems a bit circular for me.
My guess is that we defined a meter as the distance that light travels in a $1/299{,}792{,}458$ of a second so that the speed of light would be exactly $299{,}792{,}458\,\mathrm{m/s}$, but then why didn't we define it as the distance light travels in a $1/100$ of a second, that would make $c=100\,\mathrm{m/s}$, which is much more easy to remember and manage.
Please tell me if there are any ambiguities in my question, I'll do my best to fix them, thanks.
Answer: Theoretically, we have not defined the speed of light in terms of the metre. We have defined it as a specific distance (that light can cover in one second).
Now take that distance and divide it with $299792458$, and then you have a smaller portion of a distance. That portion is defined as a metre.
So, there's no circular metre definition here.
Why this number? you may reasonably ask. The answer is that while we can change the definitions of fundamental units such as the metre so that they become more future-safe and universally accessible and thus scrap an old definition, we can't just change their values to something entirely different. Because those fundamental units have already been in use in everything from research to daily life through centuries.
If we suddenly redefined the metre to be just $1/100$ of the distance covered by light in a second (which is an enormously long distance, by the way), then we would have to alter every ruler, every length scale, every textbook in the world, not to speak of altering people's uses, mindsets, traditions and so on. (Also, making the metre so enormously long as you suggest, might cause the use of the metre-unit to die out from every-day life and other units better fitting to the human-scale might become more used.)
Such a value-redefinition would be an enormously impractical task to implement - to get this through, you might want a better reason than just that the definition becomes easier to remember. Nevertheless, it is an interesting question that goes to the historical roots of how standardisation is done. | {
"domain": "physics.stackexchange",
"id": 89450,
"tags": "special-relativity, speed-of-light, definition, si-units, metrology"
} |
Which markers could suggest that there was extinct or extant life on Mars? | Question: Researchers who are involved in study of life on Mars are saying that there might be multicellular life present on Mars, today or in the past. Which traces, markers or environments on Mars could support this hypothesis and how it will be investigated?
Answer: One of the next big missions to Mars, named Mars 2020, is planned to depart from Earth for Mars in late July 2020. This mission involves a very capable rover, like Curiosity on steroids.
From the mission site:
The mission takes the next step by not only seeking signs of habitable conditions on Mars in the ancient past, but also searching for signs of past microbial life itself.
This means that there are two important objectives:
to find extinct life forms, in other words fossils, assuming there were conditions a long time ago permitting lifeforms to exist on Mars, likely 3-3.5 billion years ago. Why 3-3.5 Gy ago? This was during the Hesperian geological epoch, when water was likely a major agent in forming channels, lakes and rivers, so atmospheric and ground conditions possibly permitted the existence of life back then.
to find extant life forms, such as bacteria existing presently on the Martian surface or near-surface.
One component of the mission is to sample rocks that are likely life-bearing, then analyze and store them for later pickup by a future mission. Life-bearing rocks in this case may be found in old lake beds, for example iron-rich lacustrine mudstone.
Lacustrine mudstone originates from fine sediments deposited on a lake floor, a stable environment over a long time, as shown by layers like this (pictured by Curiosity's MastCam in 2014). Iron-rich rock is important because this chemistry favors the preservation of microbial lifeforms.
Source: https://www.nasa.gov/image-feature/jezero-crater-mars-2020s-landing-site
As can be seen in the picture, the big circle (a crater) is named Jezero crater and is the target landing site for Mars 2020. Its delta was formed by water flowing into a lake; clay-rich deposits were detected in the area. The crater exposes deeper (earlier) layers of the delta, making this an ideal exploration site to look for extinct or extant life.
"domain": "earthscience.stackexchange",
"id": 1900,
"tags": "planetary-science"
} |
How can I send JSON data to and from ROS? | Question:
I am currently working on a task that involves constant intercommunication between a web browser (using roslibjs) and the ROS system through the rosbridge server. Is there a way to send JSON-formatted data from the browser to a ROS node? I realize there is no JSON among the ROS system's standard data types.
Originally posted by qureeb on ROS Answers with karma: 41 on 2017-11-05
Post score: 1
Original comments
Comment by gvdhoorn on 2017-11-05:
I'm confused: if you are already using roslibjs and rosbridge_suite, you are sending (and receiving) JSON data to (and from) ROS.
Can you clarify what it is you actually want to do?
Answer:
(follow up to my comment)
If you want to send JSON encoded data as part of a message to a ROS node, then I don't believe there are any special provisions for that.
Most straightforward would probably be to define a new message that stores the JSON string in a field. Any receiving node will have to know it's JSON though.
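A minimal Python-side sketch of that idea (the payload and the std_msgs/String `data` field are illustrative assumptions, not a prescribed API): serialize the structure to a string before publishing, and parse it again in the subscriber callback.

```python
import json

# What the publishing side would place in the message's string field
# (e.g. the `data` field of a std_msgs/String):
payload = {"cmd": "goto", "target": {"x": 1.0, "y": 2.5}}
wire = json.dumps(payload)

# What the receiving node's callback would do with that field:
received = json.loads(wire)
print(received == payload)  # True
```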
Originally posted by gvdhoorn with karma: 86574 on 2017-11-05
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 29282,
"tags": "ros, rosbridge-server, roslibjs"
} |
n-qubit circuit run with > n qubits? | Question: I was reading the Qiskit tutorial on circuit properties and there is a section (Unitary Factors) which states that even though an example circuit is made of 12 qubits, it may not need 12 qubits to run.
The original circuit: here
The layers of the circuit: here
It is stated that: We can see that at the end of the computation there are three independent sets of qubits. Thus, our 12-qubit computation is actually two two-qubit calculations and a single eight-qubit computation.
I am struggling to understand where in this diagram it is illustrating that we only need two 2 qubit calculations and one 8 qubit calculation.
Answer: Look at the picture of layer #9. It tells you explicitly how to group the qubits. There's a pair (q0,q5), another pair (q3,q8) and the rest of the qubits (q1,q2,q4,q6,q7,q9,q10,q11). To see the relevance, look back to the circuit. Start with qubits 0 and 5. You can see that there's a two-qubit gate between them, but there are no two-qubit gates going from one of those to any other qubit. So qubits 0 and 5 will be in a product state with everything else (assuming they start in a product state), and you can perform that part of the computation independently from everything else. The same is true for the qubit pair 3 and 8.
"domain": "quantumcomputing.stackexchange",
"id": 979,
"tags": "qiskit, circuit-construction"
} |
A twist on the leaning ladder problem | Question:
For the first part, I know will need to form moment equations and resolve horizontally and vertically. But how do I decide which position to take for the dog while I form moment equations?
Answer: Looking just at the upper part of the diagram - the ramp leaning against the wall and the block - you can write down the horizontal and vertical components of force as a function of the location of the dog:
With this diagram in mind you can write down equations for $F_{1h}, F_{2h}, F_{1v}, F_{2v}$ as a function of $x$. You know that the net torque must be zero, that the total vertical force must equal zero, and that the total horizontal force must equal zero. Finally, you know that there is a relationship between $F_{1v}$ and $F_{1h}$ given by the coefficient of friction - and the same is true for the forces against the wall.
This gives you a total of five equations with five unknowns from which you can solve for $\mu$. You will find that $\mu$ is a function of $x$, and that it has its largest value when $x=0$ (for $x$ as drawn here - referenced to the wall).
In the spirit of "homework like" answers, I will leave it up to you to work out the details from here.
You asked for a further hint for part two.
For the cube to slide, the force of friction between the cube and the ground would have to be less than the force of friction between cube and ladder. But the only horizontal force is applied through friction. The same coefficient of friction applies at both points. The cube won't slide if the normal force between cube and ground is larger than between ladder and cube. And of course for any positive value of $k$, that will be true because... | {
"domain": "physics.stackexchange",
"id": 19784,
"tags": "homework-and-exercises, newtonian-mechanics"
} |
Input on where and how I can make improvements on my randomizer | Question: As my title states I'm just looking for a little input on how I can improve one of my side projects when I'm not doing my CS assignments. All my program is, is a primitive randomizer for Battlefield 4 classes and weapons (because I can never choose what to play). But since I am still new to java (second semester in) I was wondering if someone more experienced than I could shoot a few concepts my way on how I can make this better, if not more re-useable for other things. Here is my driver:
package BattlefieldClassRandomizer;
import java.util.Scanner;
public class BF4ClassRandomizerDriver {
public static void main(String[] args) {
BattlefieldClass random = new BattlefieldClass();
Scanner keyboard = new Scanner(System.in);
int numberOfClasses;
do{
System.out.print("Choose how many times to randomize: ");
numberOfClasses = keyboard.nextInt();
} while (numberOfClasses <= 0);
System.out.println("Here are your soldiers");
for (int r = 0; r < numberOfClasses; r++) {
random.randomizeChance();
random.classSelection();
random.randomizeChance();
random.weaponSelection();
System.out.println("Class:" + random.getClassType()+ "; Weapon:" + random.getWeaponType());
}
}
}
And here is the my entity class (clunky, I know):
package BattlefieldClassRandomizer;
//System.out.println(System.getProperty("user.dir"));
public class BattlefieldClass {
private String classType;
private double chance;
private String weaponType;
BattlefieldClass() {
classType = "Assault";
chance = 0.0;
weaponType = "Assault Rifle";
}
public String getClassType() {
return classType;
}
public double getChance() {
return chance;
}
public String getWeaponType() {
return weaponType;
}
public void setClassType(String newClassType) {
if ((classType.equals("Assault")) || (classType.equals("Engineer"))
|| (classType.equals("Support")) || classType.equals("Recon")) {
classType = newClassType;
}
}
public void setWeaponType(String newWeaponType) {
if ((weaponType.equals("Assault")) || (weaponType.equals("Engineer"))
|| (weaponType.equals("Support")) || weaponType.equals("Recon")) {
classType = newWeaponType;
}
}
public void randomizeChance() {
chance = Math.random();
}
public void classSelection() {
if ((chance >= 0) && (chance < .25)) {
classType = "Assault";
}
if ((chance >= .25) && (chance < .50)) {
classType = "Engineer";
}
if ((chance >= .50) && (chance < .75)) {
classType = "Support";
}
if (chance >= .75) {
classType = "Recon";
}
}
public void weaponSelection() {
if ((chance >= 0) && (chance < .25) && (classType.equals("Assault"))) {
weaponType = "Assault Rifle";
} else if ((chance >= .25) && (chance < .50) && (classType.equals("Assault"))) {
weaponType = "Carbine";
} else if ((chance >= .50) && (chance < .75) && (classType.equals("Assault"))) {
weaponType = "DMR";
} else if ((chance >= .75) && (classType.equals("Assault"))) {
weaponType = "Shotgun";
}
if ((chance >= 0) && (chance < .25) && (classType.equals("Engineer"))) {
weaponType = "PDW";
} else if ((chance >= .25) && (chance < .50) && (classType.equals("Engineer"))) {
weaponType = "Carbine";
} else if ((chance >= .50) && (chance < .75) && (classType.equals("Engineer"))) {
weaponType = "DMR";
} else if ((chance >= .75) && (classType.equals("Engineer"))) {
weaponType = "Shotgun";
}
if ((chance >= 0) && (chance < .25) && (classType.equals("Support"))) {
weaponType = "LMG";
} else if ((chance >= .25) && (chance < .50) && (classType.equals("Support"))) {
weaponType = "Carbine";
} else if ((chance >= .50) && (chance < .75) && (classType.equals("Support"))) {
weaponType = "DMR";
} else if ((chance >= .75) && (classType.equals("Support"))) {
weaponType = "Shotgun";
}
if ((chance >= 0) && (chance < .25) && (classType.equals("Recon"))) {
weaponType = "Sniper Rifle";
} else if ((chance >= .25) && (chance < .50) && (classType.equals("Recon"))) {
weaponType = "Carbine";
} else if ((chance >= .50) && (chance < .75) && (classType.equals("Recon"))) {
weaponType = "DMR";
} else if ((chance >= .75) && (classType.equals("Recon"))) {
weaponType = "Shotgun";
}
}
}
Answer: Bug
Your setWeaponType function sets classType, not weaponType.
You also do not check the input parameters (newClassType and newWeaponType), but classType and weaponType.
Brackets
I think that you use too many brackets; this makes your code harder to read. Generally, grouping complex boolean expressions with brackets is good, but you don't need to put a single expression in them (eg (classType.equals("Support"))).
Public Methods
Right now, you have to call 4 methods to use your class. Do you expect use cases where you need to call these methods separately? If not, just make them private and only create one public method (eg called generate).
Extract code to function
You do a lot of range checks in your code, so why not define a function for it:
// returns true if input is in between min and max (including min, excluding max).
private boolean inInterval(double input, double min, double max) {
return input >= min && input < max;
}
Then your classSelection function would look like this:
public void classSelection() {
if (inInterval(chance, 0, .25)) {
classType = "Assault";
} else if (inInterval(chance, 0.25, .50)) {
classType = "Engineer";
} else if (inInterval(chance, 0.50, .75)) {
classType = "Support";
} else if (inInterval(chance, .75, 1.0)) {
classType = "Recon";
}
}
Note also that I replaced your ifs with else-ifs, as only one condition can be true at a time. No need to check the others.
You can also use this function in your weaponSelection method.
You can do the same for the type checks:
private boolean isValidType(String type) {
return type.equals("Assault") || type.equals("Engineer")
|| type.equals("Support") || type.equals("Recon");
}
Then use this in your setWeaponType and setClassType functions to avoid duplicate code and to make it easier to add new types. You could also create a list or enum of types to make this a lot easier.
Use switch to simplify if statements
If you use a switch, your if statements in weaponSelection would look like this:
public void weaponSelection() {
switch(classType) {
case "Assault":
if (inInterval(chance, 0, .25)) {
weaponType = "Assault Rifle";
} else if (inInterval(chance, 0.25, .50)) {
weaponType = "Carbine";
} else if (inInterval(chance, 0.50, .75)) {
weaponType = "Carbine";
} else if (inInterval(chance, .75, 1.0)) {
weaponType = "Shotgun";
}
break;
case "Engineer":
[...]
break;
case "Support":
[...]
break;
case "Recon":
[...]
break;
}
}
I think that this would be a bit cleaner and more readable.
General structure
Right now, your class does two things: It selects classes, and - relatively independent of this - it selects weapons.
You might consider creating two classes, a WeaponSelector and a ClassSelector.
They can both extend a common base class and share some methods (like the inInterval and randomizeChance methods). You could also create an enum for the different classes. Then, your ClassSelector could return a value from this enum and pass it to the WeaponSelector, so it can make its choice based on it.
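As a sketch of that data-driven structure (class and method names are mine, and the weapon tables are copied from the original program — this is an illustration, not a drop-in replacement):

```java
import java.util.List;
import java.util.Map;
import java.util.Random;

// Hypothetical data-driven randomizer: the class/weapon tables replace the
// repeated if-chains, so adding a class is one map entry, not four branches.
public class LoadoutRandomizer {
    enum SoldierClass { ASSAULT, ENGINEER, SUPPORT, RECON }

    private static final Map<SoldierClass, List<String>> WEAPONS = Map.of(
        SoldierClass.ASSAULT,  List.of("Assault Rifle", "Carbine", "DMR", "Shotgun"),
        SoldierClass.ENGINEER, List.of("PDW", "Carbine", "DMR", "Shotgun"),
        SoldierClass.SUPPORT,  List.of("LMG", "Carbine", "DMR", "Shotgun"),
        SoldierClass.RECON,    List.of("Sniper Rifle", "Carbine", "DMR", "Shotgun"));

    private final Random random = new Random();

    // Picks a uniformly random class, then a uniformly random weapon for it.
    public String next() {
        SoldierClass[] classes = SoldierClass.values();
        SoldierClass chosen = classes[random.nextInt(classes.length)];
        List<String> weapons = WEAPONS.get(chosen);
        return chosen + ": " + weapons.get(random.nextInt(weapons.size()));
    }

    public static void main(String[] args) {
        LoadoutRandomizer randomizer = new LoadoutRandomizer();
        for (int i = 0; i < 3; i++) {
            System.out.println(randomizer.next());
        }
    }
}
```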
"domain": "codereview.stackexchange",
"id": 9512,
"tags": "java, random"
} |
Hydroxide as leaving group in Aldol Condensation in base | Question: My professor said that in the base-catalyzed aldol condensation, because there is a lot of hydroxide already in solution, hydroxide on the beta position on a ketone can act as a leaving group. What's the logic/reasoning behind this?
Answer: Quoting Organic Chemistry by Jonathan Clayden, Nick Greeves and Stuart Warren: these are elimination reactions. You cannot normally eliminate water from an alcohol in basic solution, as hydroxide is a bad leaving group. It is the carbonyl group that allows elimination here: these are E1cB reactions, with a second enolization allowing the loss of OH−.
The base-catalysed aldol reaction sometimes gives the aldol and sometimes the elimination product. The choice is partly based on conditions: the more vigorous the conditions (stronger base, higher temperature, longer time), the more likely elimination is to occur. It is also partly based on the structure of the reagents.
The key to what is going on is the carbonyl group. Negative charges are stabilized by conjugation with carbonyl groups. The proton that is removed in this elimination reaction is adjacent to the carbonyl group, and is therefore also rather acidic (pKa about 20). The resulting anion is stable enough to exist because it can be delocalized onto the carbonyl group.
The next step is:
The leaving group is not lost from the starting molecule, but from the conjugate base of the starting molecule, so this sort of elimination, which starts with a deprotonation, is called E1cB (cB for conjugate base).
"domain": "chemistry.stackexchange",
"id": 11955,
"tags": "organic-chemistry"
} |
How to set build order/dependencies | Question:
In my workspace I have created 3 packages - 2 nodes and a library. I use catkin_make to build, however it compiles the packages alphabetically; the nodes first then the library. I get compiler errors because the library isn't built yet
Eg..
make[2]: *** No rule to make target `/catkin_ws/devel/lib/libmrls_lib.so', needed by `catkin_ws/devel/lib/mrls_laser_range_finder/mrls_laser_range_finder_node'. Stop.
I can compile everything if I manually make the library first, eg..
catkin_make mrls_lib
I've added the library package to find_package() in CMakeLists.txt. Have I missed something?
Frank
Originally posted by frank on ROS Answers with karma: 11 on 2013-09-14
Post score: 0
Answer:
Do you have a package dependency in the package.xml?
Originally posted by tfoote with karma: 58457 on 2013-09-16
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by frank on 2013-09-16:
Ah ha! That was it. The <build_depend> tag. | {
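For reference, a sketch of what the fix looks like (the package names are taken from the question's error message; the rest of the package.xml is omitted):

```xml
<!-- package.xml of the node package, e.g. mrls_laser_range_finder -->
<build_depend>mrls_lib</build_depend>
<run_depend>mrls_lib</run_depend>
```

With the dependency declared, catkin_make topologically sorts the packages and builds mrls_lib before the nodes that link against it.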
"domain": "robotics.stackexchange",
"id": 15530,
"tags": "ros"
} |
two qubits measurment | Question: Let
$$\text{qubit} = \sqrt{\frac{1}{6}} (i∣00\rangle + ∣10\rangle − 2 ∣11\rangle) \, .$$
I need to calculate the probability for each state while measuring both qubits in the standard basis.
I did : for the 00 state : 1/6
for the 10: 1/6
for the 11: 4/6
Is that OK? Or do I need to normalize the result? Is it already normalized?
"We measure the first qubit in the standard basis. What are the probabilities of getting |0> and |1>? Given that we measured |1 >, what is the
state after measurement."
The only possibility for 0 on the first qubit is the 00 state, so it is just the probability for 00, which we calculated in the last question as 1/6.
For the first qubit to be 1 leads us to two states, 10 and 11, and therefore 4/6 + 1/6 = 5/6.
Am I right?
Afterwards "If we measure only the second qubit in the |+>; |-> basis, what
are the probabilities for the dierent results and what are the states after
measurement."
Now how am I supposed to do that? Do I calculate one in the standard basis and the second in the +/- basis?
Please help :)
Answer:
need to normalize the result? this is allready normalized?
Do the frequencies add up to 100%?
For the first qubit to be 1 leads us to two states, 10 and 11, and therefore 4/6+1/6=5/6. Am I right?
You can answer this with the same technique as for any measurement. A measurement needs a bunch of orthogonal finals states. You need to orthogonally project your qubit onto a subspace where each of those orthogonal vectors is either completely in your space or completely orthogonal to the subspace.
Just identify orthogonal vectors in your subspace and the vectors orthogonal to it. If your qubit can be written as a linear combination of those vectors then the parts orthogonal to the subspace go away and you have your projection. The squared length of your projection is the frequency this happens when you do it many times from the start.
Afterwards: "If we measure only the second qubit in the |+>, |-> basis, what are the probabilities for the different results and what are the states after measurement."
Same thing, find an orthonormal set of final states that span your collection of possible final states. Then write your qubit as a part orthogonal to that and a part that is in that space. Find the part in the space, its squared length is the probability, and that vector is the actual result.
So you should be able to ask yourself whether you have a basis for the final states, whether you have enough of them, and whether they are orthogonal. You should be able to write your qubit as a sum of two vectors: one that is a combination of those basis vectors and one that is orthogonal to them. Both of those can be checked, and if so you are good. Throw away the part that is orthogonal; now you have your result. Compute the squared length: that is your probability.
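These numbers are easy to verify explicitly. Here is a plain-Python sketch (the amplitudes come straight from the question; the variable names are invented here for illustration):

```python
from math import sqrt

s6 = sqrt(6)
# amplitudes in the standard basis, ordered |00>, |01>, |10>, |11>
psi = [1j / s6, 0.0, 1 / s6, -2 / s6]

# the state is already normalized: outcome probabilities are |amplitude|^2
probs = [abs(a) ** 2 for a in psi]          # [1/6, 0, 1/6, 4/6]

# measuring only the first qubit: sum the probabilities over the second qubit
p_first_0 = probs[0] + probs[1]             # 1/6
p_first_1 = probs[2] + probs[3]             # 5/6

# state after observing first qubit = 1: project onto |10>, |11> and renormalize
post = [0.0, 0.0, psi[2], psi[3]]
norm = sqrt(sum(abs(a) ** 2 for a in post))
post = [a / norm for a in post]             # (|10> - 2|11>)/sqrt(5)

# measuring only the second qubit in the |+>, |-> basis:
# <+|0> = <+|1> = 1/sqrt(2), <-|0> = 1/sqrt(2), <-|1> = -1/sqrt(2)
s2 = sqrt(2)
amp_plus  = [(psi[0] + psi[1]) / s2, (psi[2] + psi[3]) / s2]
amp_minus = [(psi[0] - psi[1]) / s2, (psi[2] - psi[3]) / s2]
p_plus  = sum(abs(a) ** 2 for a in amp_plus)    # 1/6
p_minus = sum(abs(a) ** 2 for a in amp_minus)   # 5/6
```

Note that p_plus + p_minus = 1, as it must for a complete measurement.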
It's just about writing a basis for the measurement. You can write $|0\rangle$ and $|1\rangle$ in terms of $|\pm\rangle$, so you can get it in that basis pretty quickly. | {
"domain": "physics.stackexchange",
"id": 24129,
"tags": "quantum-mechanics, homework-and-exercises, quantum-information"
} |
Cis–trans isomers and cyclic compounds | Question: Calcitriol is a steroid hormone found in human blood.
Calcitriol shows geometrical isomerism.
Give the number of geometrical isomers of calcitriol, including calcitriol.
I'm confused. I think it doesn't have a single geometric isomer, because rotation is restricted in cyclic compounds! What is the concept behind this?
Answer: For calcitriol:
Assuming only the Z and E isomers, 64 isomers are possible. For the rest of the isomers (3: ZZ, EZ, EE),
each will have 64 isomers.
For the Z and E isomers, the 64 isomers are:
[figure omitted]
Quoting
March's Advanced Organic Chemistry: Reactions, Mechanisms, and Structure, 6th edition
Michael B. Smith, Jerry March | {
"domain": "chemistry.stackexchange",
"id": 11889,
"tags": "organic-chemistry"
} |
Finding Velocity and Displacement from Acceleration | Question: First time posting, so please advise if and where I'm breaking protocol. Here is the question I was given:
A particle travels with acceleration given by $a(t)=(2e^{-t})i+(5\cos{t})j-(3\sin{t})k$. When the particle is located at $(1,-3,2)$ at time $t=0$ and is moving with a velocity given by $v(0)=4i-3j+2k$, find the velocity and displacement of the particle at any time $t>0$.
I know the relation between velocity and acceleration is $a(t)=dv/dt$, and displacement is the integral of $v(t)$ across an interval. But I'm not sure how to set this one up. Wouldn't the velocity just be the integral of acceleration? And then the displacement is the integral of the velocity? But why then was I given the position and velocity at $t=0$?
Answer: Yes, you are right that velocity is the integral of acceleration, and displacement is in turn the integral of velocity. The reason that you were given those points is that each integration produces a constant of integration $+C$. You can find the value of that constant by plugging in the given velocity and position at time $t=0$. For example, integrating the $i$-component of acceleration gives $v_x(t)=-2e^{-t}+C$, and the condition $v_x(0)=4$ forces $C=6$. | {
"domain": "physics.stackexchange",
"id": 27970,
"tags": "homework-and-exercises, classical-mechanics"
} |
Wess-Zumino Gauge in non-Abelian supersymmetric theory | Question: I've got a question concerning non-Abelian supersymmetric gauge theories.
Consider supersymmetric non-Abelian theory realized on chiral superfields $\Phi_i$ in a representation $R$ with matrix generators $T_{i}^{aj}$. Let us define supergauge transformation as
$$\Phi_i \rightarrow (e^{2\imath g_a \Omega^a T^a})_{i}{}^{j} \, \Phi_j.$$
The supergauge-invariant term in lagrangian is
$$\mathcal{L} = \Bigl[\Phi^{*i}\,(e^V)_i{}^j \, \Phi_j\Bigr]_D.$$
For this to be gauge-invariant, the non-Abelian gauge transformation for the vector field must be
$$e^V \rightarrow e^{\imath \Omega^\dagger}\,e^V\,e^{-\imath \Omega}.$$
Using Baker-Hausdorff formula, we obtain
$$V^a \rightarrow V^a + \imath(\Omega^{a*}-\Omega^a)+g_a \, f^{abc}\,V^b(\Omega^{c*}+\Omega^c)+...$$
Usually at this moment they argue that since the second term on the right side does not depend on $V^a$, one can always do a supergauge transformation to Wess-Zumino gauge by choosing $\Omega^{a*}-\Omega^a$ appropriately.
This is the moment that I don't get. What does it mean? Strictly speaking, the latter expression is complicated non-linear equation on components of $V^a$ superfield.
I guess they mean, that since the second term on r.h.s. doesn't depend on $V^a$, it's possible to solve it within the framework of perturbation theory in the coupling constant(s) $g_a$. Is it correct? If so, how to prove it strictly in all orders?
Answer: I) The gauge transformation of the real gauge field $V$ reads
$$ e^{\widetilde{V}} ~=~e^Xe^Ve^Y, \qquad X~:=~i\Omega^{\dagger}, \qquad Y~:=~-i\Omega. \tag{1}$$
We next use the following BCH formulas
$$ e^Xe^V~\stackrel{\rm BCH}{=}~e^{V+B({\rm ad} V)X+{\cal O}(X^2)}, \qquad e^Ve^Y~\stackrel{\rm BCH}{=}~e^{V+B(-{\rm ad} V)Y+{\cal O}(Y^2)}.\tag{2} $$
Keeping only linear orders in $\Omega$, we get
$$\begin{align}\widetilde{V}~&\stackrel{(1)+(2)}{=}~B({\rm ad} V)X+V+B(-{\rm ad} V)Y\cr
&~~~\stackrel{(4)}{=}~V+\frac{1}{2}[V,Y-X]+B_+({\rm ad} V)(X+Y),\end{align}\tag{3} $$
where
$$\begin{align} B(x)&~:=~\frac{x}{e^x-1}~=~\sum_{m=0}^{\infty}\frac{B_m}{m!}x^m~=~B_+(x)-\frac{x}{2}\cr
&~=~1-\frac{x}{2}+\frac{x^2}{12}-\frac{x^4}{720}+\frac{x^6}{30240}+{\cal O}(x^8)\end{align} \tag{4} $$
and
$$\begin{align} B_+(x) &~:=~\frac{B(x)+B(-x)}{2}~=~\frac{x/2}{\tanh\frac{x}{2}} \cr
&~=~1+\frac{x^2}{12}-\frac{x^4}{720}+\frac{x^6}{30240}+{\cal O}(x^8) \end{align} \tag{5} $$
are generating functions of Bernoulli numbers.
II) We would like $\widetilde{V}$ to be in WZ gauge
$$ \widetilde{V}~=~{\cal O}(\theta^2) .\tag{6} $$
For given $V$, $\widetilde{V}$, and $X-Y$, eqs. (3) and (6) together form an affine$^1$ equation in $X+Y=i\Omega^{\dagger}-i\Omega$. This formally has a solution if the operator
$$ B_+({\rm ad} V)~=~{\bf 1} + \ldots \tag{7} $$
is invertible, which is true, at least perturbatively. To finish the proof, one should write out the equation in its superfield components to check that the above affine shift mechanism really is realized at the component level. Recall e.g. that the gauge field $\widetilde{V}$ can not be gauged away completely (= put to zero), since $\Omega$ is a chiral superfield with not enough $\theta$'s to reach all components of $\widetilde{V}$, so to speak.
References:
S.P. Martin, A Supersymmetry Primer, arXiv:hep-ph/9709356; p.43.
--
$^1$ An affine equation is a linear equation with an inhomogeneous term/source term. | {
"domain": "physics.stackexchange",
"id": 40090,
"tags": "quantum-field-theory, supersymmetry, gauge-theory, gauge-invariance, gauge"
} |
How to set viso2_ros stereo parameters? | Question:
Hi everybody,
I'm trying to using stereo odometry with viso2_ros library. How to set the distance from left camera to right camera?
Which camera is related to base_link -> camera tf transformation? The right camera or the left camera?
Originally posted by aldo85ita on ROS Answers with karma: 252 on 2012-10-23
Post score: 2
Answer:
The base camera frame for a stereo system is the left optical frame.
The viso2_ros stereo odometer reads all information, including the baseline, from the corresponding camera info messages (i.e. <stereo>/left/camera_info and <stereo>/right/camera_info). Be sure to calibrate your stereo system using the camera_calibration package!
Originally posted by Stephan with karma: 1924 on 2012-11-15
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Lily on 2014-03-20:
Do you mean that if I want to use viso2_ros I should first use the camera calibration package you mentioned to get the calibration parameters? Can't I use other calibration method? How about the parameters in libviso2? Is that the same situation? Because my results of libviso2 always have scale problem and sometimes monocular trajectory seems better than stereo. I think maybe it is the parameters problem. | {
"domain": "robotics.stackexchange",
"id": 11483,
"tags": "ros, stereo, viso2-ros, libviso2"
} |
Difference in Energy Transfer Between Impacts and Nudges | Question: Say I have a table and I want to produce a vibration in the table. Would I be better off impacting the surface of the table (i.e. smacking the table) or nudging the surface (i.e. leaning on the surface and briefly pushing down hard)? Is there a difference in energy transfer between these two actions, even if they're performed with the same force?
This is a question that's been bothering me for a little while. Intuitively, it seems like an impact would deliver more force, but I'm not too sure.
Answer: This is a tricky question. If the average force is really the same, what is going to change is impulse, the integral of force over time
$$J = \int_0^{\tau} F(t) dt \simeq F_{av} \tau$$
When you push the surface you interact with it for a longer time ($\tau$), while when you smack it you interact with it for a shorter time. So if the average force $F_{av}$ is the same, by increasing the interaction time you are going to deliver a greater impulse, i.e. transfer more momentum to the system.
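As a concrete illustration (the force value and durations below are invented for the example), with the same average force, a hundred-fold longer contact time gives a hundred-fold larger impulse:

```python
def impulse(f_avg, duration, steps=10_000):
    """Riemann-sum approximation of J = integral of F(t) dt for a constant force."""
    dt = duration / steps
    return sum(f_avg * dt for _ in range(steps))

# same average force of 10 N, very different contact times
J_smack = impulse(10.0, 0.01)  # brief impact:   J ~ 0.1 N*s
J_push  = impulse(10.0, 1.0)   # sustained push: J ~ 10 N*s
```

The ratio of the two impulses equals the ratio of the interaction times, exactly as the integral above predicts for constant $F_{av}$.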
You intuitively feel that smacking the table is more effective because it is usually easier to deliver more force by exploiting the kinetic energy of the object we are using to strike. Just think about "pushing" a baseball with the bat instead of hitting it: doesn't feel very effective, right?
Also, if your objective is to damage (or even break) the table, then it is probably best to hit it. This is because if the force is delivered over a relatively long time interval the material will have time to bend and deform to absorb the energy without damage, while if the force is delivered in a short time it won't be able to do so and will usually suffer damage. Smacking water with your hands is an extreme example of what I'm saying, even though water obviously won't be damaged by the blow. Just think of breaking a small rock with a hammer: to break it by pushing it you will require a lot of force, but if you deliver a sharp hit you will be able to smash it easily. | {
"domain": "physics.stackexchange",
"id": 31066,
"tags": "forces, vibrations"
} |
Problem with ROS Networking | Question:
Hello,
I have already tried this tutorial,
http://www.ros.org/wiki/Robots/TurtleBot/Network%20Setup
and everything was perfect, so I don't have a problem with Turtlebot networking setup.
Now I'm trying to follow this tutorial
http://nootrix.com/2012/06/ros-networking/
On my workstation, first I run
ssh name@turtlebot_lap
and when I run
rosrun turtlesim turtlesim_node
I get this error
turtlesim_node: cannot connect to X server
Any suggestions will be appreciated.
Thanks!
Originally posted by TheSkyfall on ROS Answers with karma: 39 on 2013-02-17
Post score: 0
Answer:
I think that tutorial is not very clear. The turtlesim_node has to be run on the actual machine itself (i.e. in that tutorial, you have to run it on the r2d2 machine) and not through ssh. This is because turtlesim_node requires a GUI (X server), and you have to use "ssh -X name@turtlebot_lap" in order to see the GUI through ssh.
If you want to follow that tutorial, I would suggest using the r2d2 machine, open a terminal to ssh into c3po, and run roscore. Then, open a new terminal (this will be the r2d2 terminal) and run turtlesim_node.
Following the tutorial here would be easier.
Originally posted by weiin with karma: 2268 on 2013-02-17
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by bit-pirate on 2013-07-01:
@TheSkyfall please mark your question as answered, if this solution solved your problem. Thanks. | {
"domain": "robotics.stackexchange",
"id": 12930,
"tags": "turtlebot2, turtlebot, ros-groovy"
} |
Learning ROS2 without ROS1 background | Question:
Is it possible to learn ROS2 without knowing ROS1? Is it worth it to try?
Many ROS2 resources I see describe themselves with respect to the ROS1 resource that implemented that functionality.
Originally posted by johnconn on ROS Answers with karma: 553 on 2019-07-07
Post score: 0
Answer:
It's possible, but with the current state of ROS 2 material / documentation I would probably suggest to get at least some experience with ROS 1.
You'll probably have a much easier time matching ROS 1 domain concepts with their analogues in ROS 2.
You will have to "unlearn" some things though (nodelets, for instance, don't exist in ROS 2), but most of the time at least the concept is similar, which allows you to employ conceptual reuse (the idea of nodelets maps to ROS 2 nodes running in the same process and using zero-copy exchange of messages). That helps me personally tremendously.
Originally posted by gvdhoorn with karma: 86574 on 2019-07-08
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by gvdhoorn on 2019-07-08:
Related recent Q&A: #q327456.
Comment by pavel92 on 2019-07-08:
I agree with gvdhoorn. ROS has a steep learning curve, and ROS1 is pretty well documented, with a lot of materials, examples and implementations. In some cases ROS1 and ROS2 are used alongside each other, so it's good to be familiar with both of them. | {
"domain": "robotics.stackexchange",
"id": 33359,
"tags": "ros, ros2"
} |
Efficient URL Escape (Percent-Encoding) | Question: Here I have a simple algorithm to percent-encode any string.
(Specification from Wikipedia; note that this is not compatible with URLEncoder.encode())
Is this an efficient solution to the problem?
Using a StringBuilder should be efficient, but it doesn't seem great since every character is added to the StringBuilder individually. Could this have any significant impact?
This method works only for ASCII characters; that is sufficient for my use case.
private static String urlEscape(String toEscape){
//if null, keep null (no gain or loss of safety)
if (toEscape==null)
return null;
StringBuilder sb=new StringBuilder();
for (char character: toEscape.toCharArray())//for every character in the string
switch (character){//if the character needs to be escaped, add its escaped value to the StringBuilder
case '!': sb.append("%21"); continue;
case '#': sb.append("%23"); continue;
case '$': sb.append("%24"); continue;
case '&': sb.append("%26"); continue;
case '\'': sb.append("%27"); continue;
case '(': sb.append("%28"); continue;
case ')': sb.append("%29"); continue;
case '*': sb.append("%2A"); continue;
case '+': sb.append("%2B"); continue;
case ',': sb.append("%2C"); continue;
case '/': sb.append("%2F"); continue;
case ':': sb.append("%3A"); continue;
case ';': sb.append("%3B"); continue;
case '=': sb.append("%3D"); continue;
case '?': sb.append("%3F"); continue;
case '@': sb.append("%40"); continue;
case '[': sb.append("%5B"); continue;
case ']': sb.append("%5D"); continue;
case ' ': sb.append("%20"); continue;
case '"': sb.append("%22"); continue;
case '%': sb.append("%25"); continue;
case '-': sb.append("%2D"); continue;
case '.': sb.append("%2E"); continue;
case '<': sb.append("%3C"); continue;
case '>': sb.append("%3E"); continue;
case '\\': sb.append("%5C"); continue;
case '^': sb.append("%5E"); continue;
case '_': sb.append("%5F"); continue;
case '`': sb.append("%60"); continue;
case '{': sb.append("%7B"); continue;
case '|': sb.append("%7C"); continue;
case '}': sb.append("%7D"); continue;
case '~': sb.append("%7E"); continue;
default: sb.append(character);//if it does not need to be escaped, add the character itself to the StringBuilder
}
return sb.toString();//build the string, and return
}
Answer: You are in dire need of a lookup table. What you want to do is define a mapping somewhere else, and then just percent-encode all characters you have mapped. This gets rid of that unwieldy, huge and btw. hacky switch-statement.
consider the following:
StringBuilder sb = new StringBuilder(toEncode.length());
for (Character c : toEncode.toCharArray()) {
if (MAPPING.containsKey(c)) {
sb.append(MAPPING.get(c));
} else {
sb.append(c);
}
}
This should be equally fast, if not faster. It also makes use of a MAPPING you can change without significantly affecting how this method works. By the way, I am pre-specifying the StringBuilder's capacity, since it's more efficient when the backing array does not need to be resized too often.
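The table-driven approach is language-agnostic and easy to sanity-check. As an illustrative sketch (in Python for brevity; the escape table is transcribed from the switch above, and `url_escape` is a name invented here, not part of the original code), the whole method collapses to one dictionary lookup per character:

```python
# every character escaped by the original switch, mapped to "%XX" of its ASCII code
SPECIALS = " !\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~"
MAPPING = {c: f"%{ord(c):02X}" for c in SPECIALS}

def url_escape(s):
    # keep null as null, as in the Java original
    if s is None:
        return None
    return "".join(MAPPING.get(c, c) for c in s)
```

For example, `url_escape("a b!")` produces `"a%20b%21"`, matching what the Java switch emits.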
Oh, and another thing: your switch-case only works by sheer luck. continue; is not the correct keyword to use there; instead you should rely on break;, which allows you to add more work in the loop body, something that is currently more or less impossible. | {
"domain": "codereview.stackexchange",
"id": 15522,
"tags": "java, performance, url, escaping"
} |
Maximal subsets of a point set which fit in a unit disk | Question: Suppose that there is a set $P$ of $n$ points in the plane, and let $P_1, \dots, P_k$ be distinct subsets of $P$ such that all points in $P_i$ fit inside one unit disk for each $i$, $1\le i\le k$.
Moreover, each $P_i$ is maximal, i.e., no unit disk can cover a subset of $P$ that is a strict superset of $P_i$. Visually speaking, if we move a unit disk that covers $P_i$ to cover a point not in $P_i$, then at least one point which was inside that disk will become uncovered.
Here is an example (figure omitted): in that figure, there are three maximal subsets.
I don't know whether this problem has a name or was studied before, but here are my questions.
Can $k$ be exponential with respect to $n$?
If not, then can we find those maximal subsets in polynomial time w.r.t. $n$?
I think that there are exponentially many such subsets because of the following argument:
Suppose that the points are centers of some disks with radius $1/2$. If a subset of such points fit in a unit disk, then they form a clique. Since there are exponentially many cliques in a set of disks, then there should be exponentially many maximal subsets of this particular set of points that fit into a unit disk.
Answer: Let's define a function $f: \Bbb R \times \Bbb R \rightarrow \left [0, n \right ]$, returning the number of points from the set $P$ covered by a unit disk with its center at the point $(x, y)$. This is a piecewise constant function, and it's easy to see that its domain can be thought of as a planar subdivision, defined by all intersections of unit disks centered at points from the set $P$. This subdivision contains vertices (= intersection points), edges (= circular arcs) and faces (= pieces of the plane where the function $f$ returns the same value). We'll say that these faces are labeled by this value.
We'll assume that the planar graph, defined by this subdivision, is a 4-regular one - it's a common assumption, meaning that all the points in $P$ are in general position (each intersection point belongs to exactly two circles). An example of such subdivision for $n=3$ is below, where the label for each face (including the external one) is shown in red color.
As far as I understand, your maximal subsets $P_i$ can be associated with those faces of this subdivision which have labels larger than all labels of their neighboring faces. It's a kind of static interpretation of your "moving disk" semantics, which you used in your definition of the maximal subset.
The subdivision will contain as many faces as possible in the case when all the unit circles pairwise intersect each other. It can be shown, that in this case:
number of vertices is $n(n-1)$
number of edges is $2n(n-1)$
number of faces is $n(n-1)+2$
So, answers to your questions will be:
No, the number $k$ of maximal subsets is $O(n^2)$, because it can't be more than the number of all faces.
Yes, all the maximal subsets can be found in $O(n^3)$ time by naive scanning algorithm.
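To illustrate the polynomial bound, here is a sketch of a naive enumeration (an assumption-laden illustration: it uses the standard candidate-center trick of pushing a unit disk against pairs of points, which is equivalent in spirit to, but not literally, scanning the faces of the arrangement; `maximal_unit_disk_subsets` is a name invented here):

```python
from itertools import combinations
from math import dist, sqrt

def maximal_unit_disk_subsets(points):
    """Naively enumerate the maximal subsets of `points` coverable by a radius-1 disk."""
    # candidate centers: every point itself, plus the two centers of unit
    # circles passing through each pair of points at distance <= 2
    centers = list(points)
    for (ax, ay), (bx, by) in combinations(points, 2):
        d = dist((ax, ay), (bx, by))
        if d == 0 or d > 2:
            continue  # no unit circle passes through both points
        mx, my = (ax + bx) / 2, (ay + by) / 2
        h = sqrt(1 - (d / 2) ** 2)              # midpoint-to-center distance
        ux, uy = (by - ay) / d, (ax - bx) / d   # unit normal of the segment
        centers += [(mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)]

    eps = 1e-9
    covers = {frozenset(p for p in points if dist(p, c) <= 1 + eps)
              for c in centers}
    # a subset is maximal iff no other coverable subset strictly contains it
    return [s for s in covers if not any(s < t for t in covers)]
```

For example, for the points (0, 0), (0.5, 0) and (5, 5), this returns the two maximal subsets {(0, 0), (0.5, 0)} and {(5, 5)}. With $O(n^2)$ candidate centers and $O(n)$ work per center, the enumeration matches the $O(n^3)$ bound above.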
As for name of this problem - it looks like it may be related to some variations of the Unit Disk Cover problem. | {
"domain": "cs.stackexchange",
"id": 16843,
"tags": "time-complexity, computational-geometry"
} |