anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
Fountain codes and LDPC codes | Question: I need a small clarification about the difference between LDPC and fountain codes.
In LDPC codes, each parity bit is dependent on numerous data bits. Isn't that similar to fountain codes, where the encoded blocks are dependent on each other (I'm thinking of Luby transform codes)?
Also, my professor was explaining that it doesn't matter if some blocks are missed, as long as it receives a subset of the encoded blocks. Fountain codes can decode the message because it just keeps listening to receive more information as there is enough redundancy built in.
My doubt is that, if it just keeps listening, when does the transmitter transmit the next set of messages if it's always sending redundancy to make up for what could possibly be deleted? Especially since there is no feedback.
Or is it that the message sizes are so large and that the encoding is done so that the redundancy is built throughout the message so that even if certain blocks are lost, we can still decode the information?
Lastly, fountain code applications involve lossy connections, etc. In which case, why are they only applied to erasure channels?
Because if they are good at combating lossy connections, then they should also be able to effectively combat deletions, right?
Answer: LDPC codes are a subset of linear channel codes. In terms of functionality, they are similar to other linear channel codes. However, they are constructed using a sparse bipartite graph.
Fountain codes are rateless codes. These type of codes are more useful in broadcast/multicast settings. If we use fixed-rate codes in such settings, we need to make sure that each and every packet is correctly received at all receivers. This can become inefficient since each broadcast can potentially be re-iterated (depending on the re-transmission protocol) until all receivers receive all packets. Consider for example a case with 10 receivers, all have correctly received the packet #2 except one. This specific packet will be broadcast to all until the last receiver gets the packet. However, with a rateless code, the packet index is not a matter anymore (i.e. whether it is packet #2 or not). This is because a group of packets are selected at first and new encoded packets are generated randomly (hence, all randomly encoded packets look just like a new piece of information about the group and the index does not matter anymore). Any linearly independent subset of valid packets is enough to decode (acquire) the original group of packets.
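To make the "any linearly independent subset suffices" point concrete, here is a toy sketch of my own (not from the answer): a random linear fountain over GF(2), where the receiver keeps collecting random XOR combinations of k source packets until it has k linearly independent ones, then decodes by Gaussian elimination. (Real LT codes use a carefully designed sparse degree distribution instead of uniformly random masks, precisely to make decoding cheap.)

```python
import random

random.seed(0)

# Toy random linear fountain code over GF(2).  k source "packets" (bytes);
# each encoded packet is the XOR of a random nonzero subset of them, tagged
# with the subset mask.  The packet index is irrelevant: any k linearly
# independent encoded packets suffice to decode.
k = 4
source = [random.getrandbits(8) for _ in range(k)]

def encode():
    mask = random.randrange(1, 1 << k)      # random nonzero subset of sources
    payload = 0
    for i in range(k):
        if mask >> i & 1:
            payload ^= source[i]
    return mask, payload

pivots = {}                                  # pivot bit -> (mask, payload)
received = 0
while len(pivots) < k:
    mask, payload = encode()
    received += 1
    for p in sorted(pivots):                 # eliminate known pivot bits
        if mask >> p & 1:
            pmask, ppay = pivots[p]
            mask, payload = mask ^ pmask, payload ^ ppay
    if mask:                                 # linearly independent packet
        piv = min(b for b in range(k) if mask >> b & 1)
        pivots[piv] = (mask, payload)

decoded = [0] * k
for p in sorted(pivots, reverse=True):       # back-substitution
    mask, payload = pivots[p]
    for b in range(p + 1, k):
        if mask >> b & 1:
            payload ^= decoded[b]
    decoded[p] = payload
```

Here `received` typically exceeds k by a few packets; that overhead, plus decoding cost, is what the LT degree distribution is designed to keep small.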
Also, I can see you have difficulty understanding the acknowledgement (ACK) policy. There are many different types of such policies. Based on the ACK protocol, the transmitter stops and listens for ACKs from the receivers to find out their status in terms of the number or index of missing/collected packets. | {
"domain": "dsp.stackexchange",
"id": 4213,
"tags": "channelcoding, forward-error-correction"
} |
1-D set cover optimisation with connected subsets | Question: Given a 1-D universe $U$ (e.g. $\mathbb{Z}$) and a set-of-sets $\pmb{S}$, where each element of $\pmb{S}$ is a closed, connected subset of $U$ (e.g. $[a .. b]$ given $ a,b\in\mathbb{Z}$) and $\bigcup\pmb{S}=U$, what is the quickest way to find $\pmb{S}^*\subseteq \pmb{S}$ with the smallest cardinality such that $\bigcup \pmb{S}^* = U$?
In other words, is there a fast 1-D set cover optimisation algorithm given that the sets you're working from are closed-connected?
It seems to me that the naive greedy set cover will in fact be optimal here, but I'm not expert enough to prove it. Can anyone think of a counter-example?
Answer: The leftmost point must be covered by some interval. There is no harm in picking the longest interval that covers the leftmost point. Then iterate (after removing the interval and all covered points from the instance).
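The greedy step can be sketched as follows (my own illustration; the `(a, b)` tuple representation and the `None` return for uncoverable gaps are assumptions):

```python
def min_interval_cover(universe_lo, universe_hi, intervals):
    """Greedily cover the integer range [universe_lo, universe_hi] with
    closed intervals (a, b), a <= b.  Returns a minimum-cardinality list
    of intervals covering every point, or None if no cover exists."""
    intervals = sorted(intervals)            # sort by left endpoint
    chosen, i, covered_to = [], 0, universe_lo - 1
    n = len(intervals)
    while covered_to < universe_hi:
        best = None
        # among intervals covering the leftmost uncovered point,
        # pick the one reaching furthest right
        while i < n and intervals[i][0] <= covered_to + 1:
            if best is None or intervals[i][1] > best[1]:
                best = intervals[i]
            i += 1
        if best is None or best[1] <= covered_to:
            return None                      # gap: covered_to + 1 is uncoverable
        chosen.append(best)
        covered_to = best[1]
    return chosen
```

For example, `min_interval_cover(1, 10, [(1, 4), (2, 8), (5, 10), (9, 10)])` picks two intervals, `(1, 4)` and `(5, 10)`, which is optimal here.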
This is the well-known greedy algorithm for the "Interval Point Cover" problem;
see for instance the course page by Andranik Mirzaian for a full analysis. | {
"domain": "cstheory.stackexchange",
"id": 3065,
"tags": "set-cover"
} |
How to extract galaxy spectra for different radii in Python for spectra taken by long slit spectrograph? | Question: I am trying to extract the kinematics from the elliptical galaxy NGC 4697 using the Fourier Correlation Quotient (FCQ) algorithm described by Bender (http://adsabs.harvard.edu/full/1990A%26A...229..441B). I am working with the stellar spectrum of the K3-III star hd132345 as template and the galaxy spectra along the major axis of NGC 4697. Both spectra were taken by a long slit spectrograph. I have implemented a first version of the FCQ algorithm, my current problem is that I am not completely certain how to extract the galaxy spectra for different radii (I am new to working with astrophysical spectra in fits format). Underneath I will show my first lines of code for the data acquisition.
file_temp = dir + '/hd132345.fits' # template spectra: star hd132345 (K3-IIICN 2 B)
file_gal = dir + '/ngc_4697_major_axis.fits' # galaxy spectra along major axis from ngc 4697
hdu_temp = fits.open(file_temp)
hdu_gal = fits.open(file_gal)
hdr_temp = hdu_temp[0].header
hdr_gal = hdu_gal[0].header
data_gal = hdu_gal[0].data
data_temp = hdu_temp[0].data
# extract wavelength array and flux
flux_gal = data_gal[0]
flux_temp = data_temp
w_gal = WCS(hdr_gal, naxis=1, relax=False, fix=False)
loglam_gal = w_gal.wcs_pix2world(np.arange(len(flux_gal)), 0)[0]
w_temp = WCS(hdr_temp, naxis=1, relax=False, fix=False)
loglam_temp = w_temp.wcs_pix2world(np.arange(len(flux_temp)), 0)[0]
Plotted, flux_gal over loglam_gal and flux_temp over loglam_temp look as follows (spectra are rebinned in loglam). The redshift has not been removed yet. I was wondering why the absorption lines of the template are broader than those of the galaxy, since it actually should be the other way around.
The header of my galaxy fits file:
I understand that the keywords CRVAL1 and CRVAL2 describe the initial values for ln lambda (in Å) and the radius (in arcseconds), while CDELT1 and CDELT2 describe the increments in ln lambda and in radius. So I should have a spectrum for my galaxy every 0.2 arcseconds. I am unsure how to extract those from my input fits files and would be very happy about a response. Do I have to shift my galaxy spectra for all radii to the rest-wavelength frame, or can I somehow do it all in advance, since the redshift for all radii should be the same? I would be pleased about any tips or comments on this issue.
Thanks to Peter Erwin's comment, the results now look as follows.
The entire 2D-image spectral image of NGC-4697 along the major axis (displayed with SAOImage):
The spectrum of the galaxy center taken at row 597 (flux_gal_center = data_gal[597, :]):
Answer:
I was wondering why the absorption lines of the template are broader than those of the galaxy, since it actually should be the other way around.
You are correct that it should be the other way around. The reason the plot looks confusing is that you are not actually plotting the galaxy spectrum in the top panel; you are plotting some combination of noise and lack of any signal.
What has happened is that by defining the galaxy spectrum as data_gal[0], you have extracted the first (bottom) row from the 2D image, which is far away from the actual galaxy light. I would suggest displaying ngc_4697_major_axis.fits in a FITS image display program (such as SAOimage DS9). The spectrum corresponding to the center of the galaxy will be the brightest line running down the middle of the image. (The axes of the image are wavelength in the one direction and distance along the slit in the perpendicular direction.)
Here's an example from the Gemini GMOS instrument webpage. You can see the bright zone running across the middle of the image; this is the center of the galaxy, which gets fainter as you go up or down (i.e., along the slit in either direction):
Figure out what row or rows that corresponds to, and extract them via flux_gal = data_gal[n_row,:] (for a single row) or flux_gal = np.mean(data_gal[n_row1:n_row2,:], 0) to get the mean of rows n_row1 through (n_row2 - 1).
(Remember that Python and Numpy treat image coordinates as [row_number, column_number] = [y, x], where x and y correspond to the normal coordinates when the image is displayed in SAOimage DS9.)
To extract spectra at different radial distances from the galaxy center, do the same thing, but choose rows (or ranges of rows) above or below the row corresponding to the galaxy center. As you get further away from the center, the S/N will get worse, so you will probably want to start summing or averaging over multiple rows.
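A sketch of that extraction (using a synthetic array in place of the real `data_gal`; row 597 and CDELT2 = 0.2″/row are taken from the question, the array dimensions are made up):

```python
import numpy as np

# synthetic stand-in for the long-slit 2D image: rows = position along
# the slit, columns = wavelength (the real array is hdu_gal[0].data)
rng = np.random.default_rng(0)
data_gal = rng.normal(size=(1200, 2048))

center_row = 597                 # found by inspecting the image in DS9
cdelt2 = 0.2                     # arcsec per row, from the header

# single-row spectrum at the galaxy center
flux_center = data_gal[center_row, :]

# mean spectrum in a 5-row band centred ~5 arcsec from the nucleus
offset = int(round(5.0 / cdelt2))            # 5 arcsec -> 25 rows
band = slice(center_row + offset - 2, center_row + offset + 3)
flux_5arcsec = data_gal[band, :].mean(axis=0)
```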
Do I have to shift my galaxy spectra for all radii to the
rest-wavelength frame, or can I somehow do it all in advance, since
the redshift for all radii should be the same.
In general, you do not want to shift the galaxy spectrum (unless perhaps the FCQ algorithm requires it). The redshift of the individual spectra is one of the things you are trying to measure, after all.
The redshift for all radii will almost certainly not be the same, since the redshift at any given radius is the sum of the galaxy's redshift (Hubble flow + peculiar velocity of the galaxy) and the mean rotation velocity of the stars at that radius. Some galaxies ("slow rotators") may have almost no rotation, but NGC 4697 is a "fast rotator", and I believe the rotation velocity will reach $\sim \pm 100$ km/s at a radius of 10 arcsec away from the galaxy nucleus along the major axis. | {
"domain": "astronomy.stackexchange",
"id": 5703,
"tags": "observational-astronomy, galaxy, data-analysis, spectra, python"
} |
What is a material that allows air to pass but not water vapor? | Question: What are some materials that allow air to pass but not, or to a lesser extent, water vapor?
Here are two materials I am aware of:
Teflon is a material that allows water vapor to pass, but not air. Water itself can't pass.
What are some other sourceable substances which have this feature? What are some defining terms and measurements used to identify this property in a material, and how can the rate of water vapor / air transfer be manipulated in such a substance?
Answer: I just checked on Google and found the following results:
TEMISH (TM) of Nitto
Silicone (yes, you read that right) (fixed for spelling mistake). Apparently, the American company GE tested it out in the 1950s
Tyvek (R) of DuPont
Gore-Tex (R)
The stuff above can happen due to something called a "molecular sieve"; find a quick reading here. Basically, the material concerned is porous ("with holes"), but these holes are very, very, very small, at the atomic level. The holes can block bigger molecules/atoms (like water, H2O) but allow air (say oxygen, O2) to pass through. Any material with sufficiently small holes can "block" the water and "unblock" the air.
Of course, that is easier said than done, because water molecules have a size of nearly 3 angstroms (2.75 if you want a more precise value), so... yeah. Building a material like that is possible, but very hard. | {
"domain": "engineering.stackexchange",
"id": 2904,
"tags": "materials"
} |
Doubly linked list in Rust | Question: I have just started learning Rust, and in order to try to get the hang of references, ownership and mutability, I have attempted to make a doubly linked list. It now compiles, and the add function seems to be working as intended.
There is one implementation detail that I am curious about, and in general I'm wondering whether my code violates any good practice. Also, related to this, have I painted myself into any kind of corner if I want to extend this code? (Making add() keep the list ordered, using generics, implementing Iterator, and so on.)
use std::rc::{Rc, Weak};
use std::cell::RefCell;
#[derive(Debug)]
struct Node {
value: i32,
next: RefCell<Option<Rc<Node>>>,
prev: RefCell<Weak<Node>>,
}
impl Node {
pub fn add(&self, this: &Rc<Node>, i: i32) {
if let Some(ref r) = *self.next.borrow() {
r.add(r, i);
return;
};
*self.next.borrow_mut() = Some(Rc::new(Node {
value: i,
next: RefCell::new(None),
prev: RefCell::new(Rc::downgrade(&this)),
}));
}
}
#[derive(Debug)]
struct List {
head: RefCell<Option<Rc<Node>>>,
}
impl List {
pub fn add(&self, i: i32) {
if let Some(ref r) = *self.head.borrow() {
r.add(r, i);
return;
};
*self.head.borrow_mut() = Some(Rc::new(Node {
value: i,
next: RefCell::new(None),
prev: RefCell::new(Weak::new()),
}));
}
}
fn main() {
let list = List {
head: RefCell::new(None),
};
println!("{:?}", list);
list.add(1);
list.add(2);
println!("{:?}", list);
}
The detail I'm curious about is whether the if let parts can be done with a match. Or, more specifically, can it be done without the extra return call. No matter what I tried (both match and if let with else), the borrow() got in the way of the borrow_mut() in the None arm, and I couldn't get the pattern matching to work without the borrow().
Answer: Before looking at your code, I want to give a general notice: don't use linked lists. They are almost always the wrong choice of data structure. They are useful primarily pedagogically.
Furthermore, I should warn you that implementing a doubly-linked list will not look like typical Rust. You can't implement a doubly-linked list in Rust without using escape hatches like Rc, RefCell, or raw pointers. These escape hatches are necessary when doing low level things like implementing a data structure, but normal Rust code avoids them as much as possible. If you want to get used to "thinking in Rust" you should pick a different exercise.
The first thing about your code I note is the Node::add function. It takes a pointer to itself twice: self and this. Having two different pointers to the same thing is unhelpful. Just drop the &self parameter and refer to this consistently in the function. Then you can call the function as Node::add.
The second thing is that you have RefCells containing Rc's. However, the standard way is to have it the other way around. It should be Rc<RefCell<?>> not RefCell<Rc<?>>. You'll find that your current design does not let you modify the value of each node.
A third thing is the lack of mutable references. Your add method allows adding to your list without the list being mutable. That doesn't really make sense; your add method really should take a mutable reference to the list. You get away with this because of the previous point.
Finally, you would be better off not using recursion to add the item to the end of list. Recursion is much slower than iteration. It would make more sense to scan through the nodes looking for the end of the list in a loop than in a recursive function call. | {
"domain": "codereview.stackexchange",
"id": 32093,
"tags": "linked-list, rust"
} |
How does resonance fail in approximating chemical structures? | Question: In the book "Concise Inorganic Chemistry" by Prof. JD Lee, it says here:
These contributing structures do not actually exist. The $\ce{CO3^2-}$ does not consist of a mixture of these structures, nor is there an equilibrium between them. The true structure is somewhere in between and is called a resonance hybrid. Resonance was widely accepted in the 1950s but is now regarded at best as clumsy and inadequate, and at worst as misleading or wrong!
I want to know where exactly does resonance fail in approximating the structures?
Answer: Resonance structures are one model that explains bonding, but not a very good one
Resonance structures are widely misunderstood. Individual resonance structures are not real compounds that exist, but an approximation that assumes all structures are made from bonds consisting of two electrons. The true structure of a compound is thought to be an average mixture of the various possible resonance structures consisting of simple bonds.
The archetypal example is benzene. If we model benzene as a structure consisting of "normal" 2-electron bonds, we get a ring consisting of alternating single and double bonds. This contradicts the observed structure where all the bonds are the same (and are somewhere between the character of single and double carbon-carbon bonds). The resonance model of benzene consists of two structures with the normal double and single bonds swapped, giving an average overall structure where all the bonds are an average of single and double bonds.
For structures with more complexity than benzene, the resonance structures are more complex and frequently don't make equal contributions, nor do they do a good job of predicting the real structures. Molecular orbital based approaches (which recognise the difference between pi-bonds and sigma bonds and do a much better job of recognising the fractional character of many real bonds) are far better.
That is why resonance structures "fail". They make drawing simple structures easier, but they don't do a good job of predicting real structures. | {
"domain": "chemistry.stackexchange",
"id": 12648,
"tags": "bond, theoretical-chemistry, molecular-structure, resonance"
} |
How computationally powerful is an Arduino Uno board? | Question: What can an Arduino board such as the Uno really do? Of course simple things like controlling a couple of servos are very easy for it. However, I don't think an Uno board would be able to perform real-time 3D SLAM from point cloud data gathered from a Kinect sensor on a mobile robot, right? If the robot had any speed at all the Arduino wouldn't be able to keep up, correct? Could it do 2D SLAM while moving and be able to keep up? What about taking 1/10 of the points from the Kinect sensor and processing only those?
Basically, what are some examples of the resource limitations of such an Arduino board?
Answer: It depends on the number of landmarks in the feature map, how much time you're willing to invest in tuning the algorithm for speed, and a number of other parameters which you may or may not be able to control for a given application.
Edit: As a thought experiment, I think it would theoretically be powerful enough to do extremely simple near-real-time SLAM in a very contrived environment, such as the center of a gym floor with a few cones set out. It could take a single scan line from the Kinect and update a low-resolution, 2D internal map, updating periodically (say every 10 seconds).
The Uno's 2K RAM would probably be a deal breaker, but the Mega might have enough (8K) and there is a hack for upgrading it to 520K.
In practice, doing floating point matrix calculations on an 8-bit processor is not a good idea. | {
"domain": "robotics.stackexchange",
"id": 52,
"tags": "arduino, slam, kinect"
} |
Depth for UCC ansatz | Question: I tried to understand the notion of depth vs. Trotter step in a uccsd circuit for the vqe algorithm in qiskit (I use an older version, 0.15). When I tried understanding it for a small system like H2 and started changing the depth from 1 through 3, the whole circuit for U (the variational form, which in this case is uccsd) started repeating itself. Physically, this looked very much like increasing Trotter steps (or num_time_slices); for example, e^(A+B) ≈ (e^(A/3)e^(B/3))^3. Indeed, when I set num_time_slices as 2 and depth as 1, I got the same circuit as with the numbers switched. Can someone help me with how the two are different, viz., depth and Trotter step?
Answer: It is a bit strange that they decided to include "depth" as a parameter for the UCCSD circuit. As you noted, the depth just means the repetition of the circuit/var_form here. So in fact, we do not need it, as we can increase the number of time slices instead. Also note that in the new Qiskit release, "depth" is now called "reps", which, obviously, stands for repetitions.
Also note that UCCSD is a chemically motivated ansatz, so it only makes sense that the starting state, "initial_state", starts out as the Hartree-Fock state. | {
"domain": "quantumcomputing.stackexchange",
"id": 2036,
"tags": "qiskit, vqe"
} |
Simple Address Book- Python | Question: I am a Python beginner (well, a newbie programmer). I am creating a small address book, just as a fun project, nothing too serious. I would appreciate it if you reviewed it and advised me on how to make it better (it's not complete yet). Here's the link:
https://github.com/volf52/address-book
Here's the code:
main.py
import contact
import addressBook
import re
from sys import exit
import random
__author__ = 'Muhammad Arslan <rslnkrmt2552@gmail.com>'
app = addressBook.addressBook(str(raw_input("Enter name of book (Will be created if doesn't exist) \n> ")))
main_menu = '\n1. Show all contacts.\n2. Add contact.\n3. Search.\n4. Delete a contact.\n5.Update contact.\n6. Exit\n\n>'
def exitProg():
exitMessages = ['You have my permission to die.']
print random.choice(exitMessages)
exit(0)
def getOption(prompt):
inp = raw_input(prompt)
try:
inp = int(inp)
except ValueError:
print 'You should have selected a proper option.'
return 13
return inp
def showContacts():
print 'show all'
def addContact():
flag = 13
while flag == 13:
exp = map(lambda x: re.compile(x), [r'^([a-zA-Z]+)$', r'^(\+)?(\d)+$', r"(^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$)"])
fName = str(raw_input('Enter first name : ')).strip()
while not exp[0].match(fName):
fName = str(raw_input('\nWrong Input\nEnter (proper) first name : ')).strip()
lName = str(raw_input('Enter last name : ')).strip()
while not exp[0].match(lName):
lName = str(raw_input('\nWrong Input\nEnter (proper) last name : ')).strip()
pNum = str(raw_input('Enter phone number : ')).strip()
while not exp[1].match(pNum):
pNum = str(raw_input('\nWrong Input\nEnter (proper) number : ')).strip()
email = str(raw_input('Enter email(Blank for none) : ')).strip()
while not exp[2].match(email):
if not email:
break
email = str(raw_input('\nWrong Input\nEnter (proper) email : ')).strip()
print app.addEntry(contact.Contact(fName, lName, pNum, email))
while (flag < 1) or (flag > 3):
flag = getOption('\n1. Add another.\n2. Go to main menu\n3. Exit.\n\n> ')
if flag == 2:
break
elif flag == 3:
exitProg()
else:
flag = 13
def searchContact():
print 'search'
def removeContact():
name = str(raw_input('Enter first name of the contact: '))
print app.removeEntry(name)
def updateContact():
name = str(raw_input('Enter the first name of the contact: '))
msg, cont = app.searchEntry(name)
print msg
funcs = [showContacts, addContact, searchContact, removeContact, updateContact, exitProg]
while True:
inp = getOption(main_menu)
while inp < 1 or inp > 6:
print 'Input a proper number, moron.'
inp = getOption(main_menu)
funcs[inp - 1]()
contact.py
__author__ = 'Muhammad Arslan <rslnkrmt2552@gmail.com>'
class Contact(object):
"""Initialize a new contact object.
Takes in name and phone number. Other arguments are optional."""
def __init__(self, firstname, lastname, pNumber, email = ''):
super(Contact, self).__init__()
self.__firstName = firstname.lower()
self.__lastName = lastname.lower()
self.__pNumber = pNumber
self.__email = email
def __str__(self):
return self.getName() + '\t' + self.getNumber()
def __eq__(self , other):
return (self.getName() == other.getName()) or (self.getNumber() == other.getNumber())
def getName(self):
return self.__firstName[0].upper()+self.__firstName[1:] + ' ' + self.__lastName[0].upper() + self.__lastName[1:]
def getFirstName(self):
return self.__firstName
def getLastName(self):
return self.__lastName
def getNumber(self):
return self.__pNumber
def getEmail(self):
return self.__email
def setFirstName(self, newFName):
self.__firstName = newFName
def setLastName(self, newLName):
self.__lastName = newLName
def setName(self, fullName):
self.__firstName, self.__lastName = fullName.split(' ')
def setNumber(self, newNumber):
self.__pNumber = newNumber
def setEmail(self, newEmail):
self.__email = newEmail
addressBook.py
try:
import cPickle as pickle
except:
import pickle
from hashlib import sha256
__author__ = 'Muhammad Arslan <rslnkrmt2552@gmail.com>'
class addressBook():
"""Class : Addressbook"""
def __init__(self, name):
try:
self.__name = self.createName(name)+ '.db'
self.__db = open(self.__name, 'rb')
self.__entries = pickle.load(self.__db)
self.__db.close()
except:
self.__db = open(self.__name, 'wb')
self.__entries = {}
self.__db.close()
def __update(self):
self.__db = open(self.__name, 'wb')
pickle.dump(self.__entries, self.__db, -1)
self.__db.close()
def addEntry(self, contact):
name = contact.getFirstName()
if name in self.__entries:
return '\nContact already present.\n'
else:
self.__entries[name] = contact
self.__update()
return '\nContact added successfully.\n'
def removeEntry(self, name):
if name in self.__entries:
del self.__entries[name]
self.__update()
return '\nContact removed successfully.\n'
else:
return '\nName not found.\n'
def searchEntry(self, name):
name = name.lower()
if name in self.__entries:
return ('Contact found.', self.__entries[name])
else:
return ('Contact not found.', None)
def updateEntry(self, name, param, val):
name = name.lower()
val = val.lower()
if name in self.__entries:
k = self.__entries[name]
funcs = [k.setFirstName, k.setLastName, k.setName, k.setNumber, k.setEmail]
funcs[param-1](val)
return '\nContact updated successfully.\n'
else:
return '\nName not found.\n'
@staticmethod
def createName(mName):
hsh = sha256(mName).hexdigest()
return ''.join(hsh[1::3])
Answer: Like partially mentioned in the comments, you should use __setattr__(self, name, value) and __getattr__(self, name) for security if needed, and otherwise just allow direct access, instead of get_* and set_* functions.
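For instance, a hypothetical slimmed-down Contact (my sketch, not the reviewer's code) with plain attributes and a computed full name:

```python
class Contact(object):
    """Contact with attribute-style access instead of get/set methods."""

    def __init__(self, first_name, last_name, number, email=''):
        self.first_name = first_name.lower()
        self.last_name = last_name.lower()
        self.number = number
        self.email = email

    @property
    def name(self):
        # read access computes the display name from the stored parts
        return '%s %s' % (self.first_name.title(), self.last_name.title())

    @name.setter
    def name(self, full_name):
        # write access splits a full name back into the two parts
        self.first_name, self.last_name = full_name.lower().split(' ')
```

Callers then write `c.name` instead of `c.getName()`, and `c.name = 'John Doe'` instead of `c.setName('John Doe')`, with the same behavior.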
You can also use @property on individual attributes. | {
"domain": "codereview.stackexchange",
"id": 27138,
"tags": "python"
} |
Gazebo GUI crashed immediately on start! | Question:
Hi :)
After installing gazebo on Ubuntu 11.10 from the tarball, I tried typing "gazebo" in a terminal:
mohsen@mohsen-ThinkPad-R500:~$ gazebo
Gazebo multi-robot simulator, version 1.4.0
Copyright (C) 2013 Open Source Robotics Foundation.
Released under the Apache 2 License.
http://gazebosim.org
Gazebo multi-robot simulator, version 1.4.0
Copyright (C) 2013 Open Source Robotics Foundation.
Released under the Apache 2 License.
http://gazebosim.org
Msg Waiting for master.Msg Waiting for master
Msg Connected to gazebo master @ http://127.0.0.1:11345
Msg Publicized address: 192.168.14.139
Msg Connected to gazebo master @ http://127.0.0.1:11345
Msg Publicized address: 192.168.14.139
(gazebo:5099): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
mohsen@mohsen-ThinkPad-R500:~$
My gazebo GUI shows up and exits immediately without any error or warning!
What should I do? :)
===============================================================
After I installed "gtk2-engines-pixbuf" with Synaptic, there was no error or warning anymore, but gazebo still shut down immediately after launch!
mohsen@mohsen-ThinkPad-R500:~$ gazebo
Gazebo multi-robot simulator, version 1.4.0
Copyright (C) 2013 Open Source Robotics Foundation.
Released under the Apache 2 License.
http://gazebosim.org
Gazebo multi-robot simulator, version 1.4.0
Copyright (C) 2013 Open Source Robotics Foundation.
Released under the Apache 2 License.
http://gazebosim.org
Msg Waiting for master.Msg Waiting for master
Msg Connected to gazebo master @ http://127.0.0.1:11345
Msg Publicized address: 192.168.1.51
Msg Connected to gazebo master @ http://127.0.0.1:11345
Msg Publicized address: 192.168.1.51
mohsen@mohsen-ThinkPad-R500:~$
===============================================================
** I reinstalled Ubuntu, this time 12.04 64-bit, and tried to install the pre-compiled gazebo binary with Synaptic, but I still have this problem! :( **
===============================================================
mohsen@mohsen-ThinkPad-R500:~$ gdb --args gzserver empty.world
GNU gdb (Ubuntu/Linaro 7.4-2012.04-0ubuntu2.1) 7.4-2012.04
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:
<http://bugs.launchpad.net/gdb-linaro/>...
Reading symbols from /usr/bin/gzserver...Reading symbols from /usr/lib/debug/usr/bin/gzserver-1.5.0...done.
done.
(gdb) run
Starting program: /usr/bin/gzserver empty.world
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fffe5289700 (LWP 2511)]
Gazebo multi-robot simulator, version 1.5.0
Copyright (C) 2013 Open Source Robotics Foundation.
Released under the Apache 2 License.
http://gazebosim.org
[New Thread 0x7fffe4a88700 (LWP 2512)]
[New Thread 0x7fffdffff700 (LWP 2513)]
Msg Waiting for master
Msg Connected to gazebo master @ http://127.0.0.1:11345
Msg Publicized address: 192.168.1.52
[New Thread 0x7fffd6a68700 (LWP 2515)]
[New Thread 0x7fffd6267700 (LWP 2516)]
[New Thread 0x7fffd5866700 (LWP 2517)]
[New Thread 0x7fffd4c43700 (LWP 2518)]
[New Thread 0x7fffc3fff700 (LWP 2522)]
[New Thread 0x7fffc37fe700 (LWP 2523)]
[New Thread 0x7fffc2ffd700 (LWP 2524)]
=============================================================== (in another terminal)
mohsen@mohsen-ThinkPad-R500:~$ gdb gzclient
GNU gdb (Ubuntu/Linaro 7.4-2012.04-0ubuntu2.1) 7.4-2012.04
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:
<http://bugs.launchpad.net/gdb-linaro/>...
Reading symbols from /usr/bin/gzclient...Reading symbols from /usr/lib/debug/usr/bin/gzclient-1.5.0...done.
done.
(gdb) run
Starting program: /usr/bin/gzclient
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fffe431a700 (LWP 2509)]
Gazebo multi-robot simulator, version 1.5.0
Copyright (C) 2013 Open Source Robotics Foundation.
Released under the Apache 2 License.
http://gazebosim.org
[New Thread 0x7fffe3b19700 (LWP 2510)]
Msg Waiting for master.
Msg Connected to gazebo master @ http://127.0.0.1:11345
Msg Publicized address: 192.168.1.52
[New Thread 0x7fffe30ea700 (LWP 2514)]
[New Thread 0x7fffe24c7700 (LWP 2519)]
[New Thread 0x7fffd1548700 (LWP 2520)]
[New Thread 0x7fffd0d47700 (LWP 2521)]
[New Thread 0x7fffbc819700 (LWP 2525)]
[New Thread 0x7fffb7fff700 (LWP 2526)]
[New Thread 0x7fffb6b8e700 (LWP 2527)]
[New Thread 0x7fffb4def700 (LWP 2532)]
[New Thread 0x7fffa4027700 (LWP 2533)]
[New Thread 0x7fffa3826700 (LWP 2534)]
Program received signal SIGSEGV, Segmentation fault.
0x00007fffd2aae307 in ?? () from /usr/lib/fglrx/dri/fglrx_dri.so
(gdb) bt
#0 0x00007fffd2aae307 in ?? () from /usr/lib/fglrx/dri/fglrx_dri.so
#1 0x00007fffd2b2410e in ?? () from /usr/lib/fglrx/dri/fglrx_dri.so
#2 0x00007fffd2b24449 in ?? () from /usr/lib/fglrx/dri/fglrx_dri.so
#3 0x00007fffd2037421 in ?? () from /usr/lib/fglrx/dri/fglrx_dri.so
#4 0x00007fffd1f440b8 in ?? () from /usr/lib/fglrx/dri/fglrx_dri.so
#5 0x00007fffd20a35b7 in ?? () from /usr/lib/fglrx/dri/fglrx_dri.so
#6 0x00007fffd23c383d in ?? () from /usr/lib/fglrx/dri/fglrx_dri.so
#7 0x00007fffe0632a20 in Ogre::GLRenderSystem::bindGpuProgram(Ogre::GpuProgram*) () from /usr/lib/x86_64-linux-gnu/OGRE-1.7.4/RenderSystem_GL.so
#8 0x00007ffff1ca05e5 in Ogre::SceneManager::_setPass(Ogre::Pass const*, bool, bool) () from /usr/lib/x86_64-linux-gnu/libOgreMain.so.1.7.4
#9 0x00007ffff1c9b269 in Ogre::SceneManager::SceneMgrQueuedRenderableVisitor::visit(Ogre::RenderablePass*) () from /usr/lib/x86_64-linux-gnu/libOgreMain.so.1.7.4
#10 0x00007ffff1c52d59 in Ogre::QueuedRenderableCollection::acceptVisitorDescending(Ogre::QueuedRenderableVisitor*) const () from /usr/lib/x86_64-linux-gnu/libOgreMain.so.1.7.4
#11 0x00007ffff1c52e01 in Ogre::QueuedRenderableCollection::acceptVisitor(Ogre::QueuedRenderableVisitor*, Ogre::QueuedRenderableCollection::OrganisationMode) const ()
from /usr/lib/x86_64-linux-gnu/libOgreMain.so.1.7.4
#12 0x00007ffff1c9b7bc in Ogre::SceneManager::renderBasicQueueGroupObjects(Ogre::RenderQueueGroup*, Ogre::QueuedRenderableCollection::OrganisationMode) ()
from /usr/lib/x86_64-linux-gnu/libOgreMain.so.1.7.4
#13 0x00007ffff1c9a9e7 in Ogre::SceneManager::renderVisibleObjectsDefaultSequence() () from /usr/lib/x86_64-linux-gnu/libOgreMain.so.1.7.4
#14 0x00007ffff1c9f184 in Ogre::SceneManager::_renderScene(Ogre::Camera*, Ogre::Viewport*, bool) () from /usr/lib/x86_64-linux-gnu/libOgreMain.so.1.7.4
#15 0x00007ffff1b066ec in Ogre::Camera::_renderScene(Ogre::Viewport*, bool) () from /usr/lib/x86_64-linux-gnu/libOgreMain.so.1.7.4
#16 0x00007ffff1c6b9f8 in Ogre::RenderTarget::_updateViewport(Ogre::Viewport*, bool) () from /usr/lib/x86_64-linux-gnu/libOgreMain.so.1.7.4
#17 0x00007ffff1c6b91b in Ogre::RenderTarget::_updateAutoUpdatedViewports(bool) () from /usr/lib/x86_64-linux-gnu/libOgreMain.so.1.7.4
#18 0x00007ffff1c6b3be in Ogre::RenderTarget::updateImpl() () from /usr/lib/x86_64-linux-gnu/libOgreMain.so.1.7.4
#19 0x00007ffff1c6b95c in Ogre::RenderTarget::update(bool) () from /usr/lib/x86_64-linux-gnu/libOgreMain.so.1.7.4
#20 0x00007ffff72e0ac8 in gazebo::rendering::Camera::RenderImpl (this=0x32db2e0) at /tmp/buildd/gazebo-1.5.0/gazebo/rendering/Camera.cc:310
#21 0x00007ffff72e0b49 in Render (this=0x32db2e0) at /tmp/buildd/gazebo-1.5.0/gazebo/rendering/Camera.cc:301
#22 gazebo::rendering::Camera::Render (this=0x32db2e0) at /tmp/buildd/gazebo-1.5.0/gazebo/rendering/Camera.cc:295
#23 0x00000000004c5383 in operator() (this=<optimized out>) at /usr/include/boost/function/function_template.hpp:1013
#24 Signal (this=<optimized out>) at /tmp/buildd/gazebo-1.5.0/gazebo/common/Event.hh:126
---Type <return> to continue, or q <return> to quit---
Originally posted by Mohsen Hk on Gazebo Answers with karma: 31 on 2013-03-06
Post score: 1
Original comments
Comment by nkoenig on 2013-03-07:
Can you provide a backtrace on gzclient. See here: http://gazebosim.org/wiki/Help
Comment by Mohsen Hk on 2013-03-14:
i did it, my error added to my question.
Comment by nkoenig on 2013-03-19:
after the program segfaults in GDB, please enter bt then hit enter, and post the results. The bt command produces the backtrace information.
Comment by nkoenig on 2013-03-19:
Can you try removing the fglrx video driver? I think there is an open source alternative which may work better.
Comment by Mohsen Hk on 2013-03-19:
oh! it's work! @};- thank you :)
Comment by konradb3 on 2013-04-13:
it is a known problem with HD4xxx : https://bitbucket.org/osrf/gazebo/issue/132/startup-error-on-some-gpus#comment-2784443 The crash is caused by broken sky shaders.
Answer:
I found the answer!
sudo apt-get --purge remove fglrx*
sudo apt-get install build-essential cdbs fakeroot dh-make debhelper debconf libstdc++6 dkms libqtgui4 wget execstack libelfg0 dh-modaliases
sudo apt-get remove --purge fglrx-updates fglrx-amdcccle-update
for more details:
http://askubuntu.com/questions/159586/how-to-install-radeon-open-source-driver
https://help.ubuntu.com/community/RadeonHD
https://help.ubuntu.com/community/RadeonDriver
Originally posted by Mohsen Hk with karma: 31 on 2013-03-20
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by 4dahalibut on 2014-07-11:
If anybody finds this and tries to run the last line, try fglrx-amdcccle-updates instead | {
"domain": "robotics.stackexchange",
"id": 3092,
"tags": "gazebo-gui"
} |
Is the Dirac $\delta$-function necessarily symmetric? | Question: The Dirac $\delta$-function is defined as a distribution that satisfies these constraints:
$$ \delta (x-x') = 0 \quad\text{if}\quad x \neq x' \quad\quad\text{and}\quad\quad \delta (x-x') = \infty \quad\text{if}\quad x = x'$$
$$\int_{-\infty} ^{+\infty} \delta(x-x')\, dx = 1 $$
Some authors also impose another constraint: that the Dirac $\delta$-function is symmetric, i.e., $\delta(x)=\delta(-x)$.
Now my question is, do we need to separately impose the constraint that the Dirac $\delta$-function is symmetric, or does it follow automatically from the other constraints?
Well, to illustrate my query clearly, I am going to define a function like this:
$$ ξ(t)=\lim_{\Delta\rightarrow0^+} \frac{\frac{1}{3}{\rm rect}\left(\frac{2t}{\Delta}+\frac{1}{2}\right)+\frac{2}{3}{\rm rect}\left(\frac{2t}{\Delta}-\frac{1}{2}\right)}{\Delta} $$
where ${\rm rect}(x)$ is defined as: $$ {\rm rect}(x)= 1 \quad\text{if}\quad |x| < \frac{1}{2} \quad\quad\text{and}\quad\quad {\rm rect}(x)= 0 \quad\text{elsewhere}. $$
$ξ(t)$ is certainly not symmetric, but it does satisfy the following conditions,
$$ ξ(t)= 0 \quad\text{if}\quad t \neq 0 \quad\quad\text{and}\quad\quad ξ(t)= \infty \quad\text{if}\quad t = 0$$
$$\int_{-\infty} ^{+\infty} ξ(t)\,dt = 1 $$
Now, my question is, can we define $ξ(t)$ as Dirac Delta function or not?
Answer: "Delta function" is not a function, but a distribution. A distribution is a prescription for how to assign a number to a test function. A distribution may, but does not have to, have function values in the ordinary sense. In the case of the delta distribution, it does not have function values.
So statement like
$$
\delta(x) = \delta(-x) \quad\text{for all }x \tag{*}
$$
meaning "value of $\delta$ at $x$ equals value of $\delta$ at $-x$" is meaningless/invalid.
But statement
$$
\int dx~ \delta(x) f(x) = \int dx~\delta(-x) f(x) \quad \text{for all functions }f \tag{**}
$$
may be valid.
You can easily verify that the function of $\Delta$ and $x$ (the expression after the limit sign in the definition of $\xi$) does not satisfy either of these two statements (in the role of $\delta$). So it is not "symmetric".
The delta distribution can hypothetically satisfy only the second statement. Does it do so?
We can evaluate both sides of the equality. The left-hand side has value, by definition of $\delta(x)$, $f(0)$.
We can transform the right-hand side integral into
$$
\int dx~\delta(-x) f(x) = \int dy~\delta(y) f(-y)
$$
By definition of $\delta(y)$, the value of this integral is $f(0)$, the same as the left-hand side. So (**) is satisfied.
The equation $\delta(x) = \delta(-x)$ is thus a consequence of the definition of $\delta(x)$; it is not an independent assumption.
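This can also be seen numerically for an asymmetric nascent delta like the questioner's $ξ$ (a sketch of my own; the two-step function below is normalized to unit area, which differs from the question's normalization by a constant factor, and the test function is an arbitrary smooth choice):

```python
import math

def xi(x, delta):
    # asymmetric two-step approximation: weight 1/3 on (-delta/2, 0),
    # weight 2/3 on (0, delta/2), scaled so the total area is 1
    if -delta / 2 < x < 0:
        return (2 / delta) * (1 / 3)
    if 0 < x < delta / 2:
        return (2 / delta) * (2 / 3)
    return 0.0

def f(x):
    return math.cos(x) + x          # smooth test function with f(0) = 1

def integral(g, a, b, n=100_000):
    # simple midpoint rule
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

results = {}
for delta in (0.1, 0.001):
    lhs = integral(lambda x: xi(x, delta) * f(x), -delta, delta)    # ∫ ξ(x) f(x) dx
    rhs = integral(lambda x: xi(-x, delta) * f(x), -delta, delta)   # ∫ ξ(-x) f(x) dx
    results[delta] = (lhs, rhs)
    print(delta, lhs, rhs)
```

For $\Delta = 0.1$ the two integrals differ by roughly $\Delta/6$, while for $\Delta = 0.001$ both are already within about $10^{-4}$ of $f(0)=1$: the finite-$\Delta$ approximant is not symmetric, but the limiting distribution is.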
Your function $\xi$ may actually obey the second statement too (and thus be symmetric in that sense), even though the $\Delta$-dependent expression after the limit sign does not. This is similar for other approximations of delta distribution; the approximation may not have properties of $\delta$ (such as symmetry), but the limit does. | {
"domain": "physics.stackexchange",
"id": 75243,
"tags": "mathematical-physics, dirac-delta-distributions"
} |
Why is a threshold determined for Byzantine Fault Tolerance of an "Asynchronous" network? (where it cannot tolerate even one faulty node) | Question: In following answer (LINK: https://bitcoin.stackexchange.com/a/58908/41513), it has been shown that for Asynchronous Byzantine Agreement:
"we cannot tolerate 1/3 or more of the nodes being dishonest or we lose
either safety or liveness."
For this proof, the following conditions/requirements has been considered:
Our system is asynchronous.
Some participants may be malicious.
We want safety.
We want liveness.
A fundamental question is this:
Considering the well-known paper titled "Impossibility of Distributed Consensus with One Faulty Process" (LINK: https://apps.dtic.mil/dtic/tr/fulltext/u2/a132503.pdf)
showing that:
no completely asynchronous consensus protocol can tolerate even a single unannounced process death,
Can we still assume that the network is asynchronous? As in that case the network cannot tolerate even one faulty node.
Answer: The answer has to do with tracking the precise assumptions that are made in these different results. In short, while both results assume asynchrony, the "impossibility of distributed consensus with one faulty process" requires a stronger form of liveness and determinism, and that makes consensus impossible.
1. Impossibility of Distributed Consensus with One Faulty Process
This seminal result (of Michael Fischer, Nancy Lynch, and Michael Paterson) is about distributed consensus where the system is not just asynchronous, but also satisfies:
Determinism. The consensus algorithm does not use any randomness.
Liveness under message delays. Not only may messages be delayed arbitrarily, but we also must guarantee liveness even in the presence of such continued delays.
Let us see why these properties are too strong, and make distributed consensus impossible.
Consider an example: Alice, Bob, and Charlie are friends and want to decide on where to meet for dinner. It is possible that one of them unexpectedly goes AWOL (or decides they are not interested in being friends anymore) and stops responding to messages. In this case, the other two can still decide on a place to meet. Now what should they do?
The obvious approach would be that:
Alice just decides where to go, and tells Bob and Charlie.
But this does not work because Alice may be the one who goes AWOL.
So to fix this, the next most obvious approach might be:
Both Alice and Bob tell everyone else where to go. If everyone hears from Alice, then they will go where Alice says; otherwise, they will go where Bob says.
But this has a new problem. Suppose you are Charlie. If you hear from Alice, you know where to go. If you hear from neither of them, you wait to hear. The problem is when you have heard from Bob, but not Alice. Because there are arbitrary message delays, you cannot commit to go where Bob said: Alice might have said where to go, and you just have not received it yet!
So you are completely stuck, and if it happens that Alice has gone AWOL, you will just keep waiting forever.
The problem here is that we have no way to abort the transaction; no way to say, "OK, this isn't working, it's been too long and I haven't received a reply -- let's try again." Real-world consensus algorithms (the best known being Paxos) have the possibility that rounds fail due to network delays, and in this case they just try again, hoping for shorter network delays. Additionally, it is possible to get around the problem by using randomized protocols which usually work, and only go on forever with small (or even zero) probability.
2. Asynchronous Byzantine Agreement where less than $1/3$ of the nodes fail
The bitcoinSE post you link glosses over the issue of liveness, saying only that it is "the ability to continue to make forward progress". In fact, the above result shows that the strongest form of this is impossible, so we have to relax our requirements / assumptions.
Let's consider two examples. In Miguel Castro and Barbara Liskov's "Practical Byzantine Fault Tolerance", they achieve practical liveness with less than a third of nodes being faulty by assuming that message delays do not continue to grow indefinitely. As the authors state:
We guarantee liveness, i.e., clients
eventually receive replies to their requests, provided at most
$\frac{n-1}{3}$ replicas are faulty and $delay(t)$ does not
grow faster than $t$ indefinitely...This is a rather weak synchrony
assumption that is likely to be true in any real system
provided network faults are eventually repaired, yet it
enables us to circumvent the impossibility result in [9].
Here [9] is the impossibility result discussed above. In plain terms for our example, they avoid the above problem with Charlie by requiring a weak form of synchrony: Charlie does not simply have to keep waiting forever, as we know that message delays can only grow temporarily, and not indefinitely. (Of course the actual algorithm gets a lot more complex, but that is partly the conceptual idea of why liveness is possible.)
In Ran Canetti and Tal Rabin's "Fast Asynchronous Byzantine Agreement with Optimal Resilience", they use randomness to get liveness with less than $n/3$ Byzantine node failures. From their paper:
In this setting, we describe an $(\lceil\frac{n}{3}\rceil- 1)$-resilient Byzantine Agreement protocol. With
overwhelming probability all the non-faulty players complete the execution of the protocol. Conditioned on the
event that all the non-faulty players have completed the
execution of the protocol, they do so in constant expected
time.
Here $(\lceil\frac{n}{3}\rceil- 1)$-resilient just means less than a third of nodes are Byzantine.
Note the key words with overwhelming probability. So they have a probabilistic algorithm which has many possible "runs", and overwhelmingly most of them work. Note that the above impossibility result implies that there must always be some runs where liveness does not occur, i.e. there is no consensus:
Fischer, Lynch and Paterson’s [FLP] seminal impossibility result for deterministic protocols implies that any (randomized) protocol reaching BA must have nonterminating runs. | {
"domain": "cs.stackexchange",
"id": 15583,
"tags": "distributed-systems, security, consensus, fault-tolerance, byzantine"
} |
Bandpass filter that automatically adapts its bandwidth when a transient is detected (to avoid smoothing the transient) | Question: Let's say we want to isolate a band 1000 Hz +/- 50 Hz.
Obviously, limiting the bandwidth by applying a band-pass filter will always smooth out sharp transients somewhat (a Dirac impulse or a rectangular envelope / Heaviside step function requires all frequencies, so if we limit the bandwidth, we lose a part of them and they become smoother).
Question: are there some adaptive band-pass filters that would auto-extend their bandwidth for a short time when a transient is detected, in order not to lose the sharp transients?
In this example (1000 Hz sinusoid modulated by a rectangle envelope, input in blue):
the filter would of course still focus on the 1000 Hz +/- 50 Hz band, but it would extend its bandwidth near the transient so that the transient is not smoothed like with a normal filter (signal in red).
Does such an adaptive bandpass exist, and is it available easily in most languages (Matlab, Python, etc.)?
NB: on this graph there is nothing else except the 1000 Hz sinusoid, so you may wonder "why bandpass filtering?", but it's just an example; in the general case, it would be a broadband signal.
Answer: Ok, I think I got your point. You want a BPF, $H(z)$, that auto-extends its bandwidth according to the energy distribution in the magnitude spectrum. If you have a pure 1 kHz sinusoidal tone (which corresponds, in the frequency domain, to a Dirac delta located at $\omega_0=\pm2\pi\cdot 1000$ rad/s), you want to pass only frequencies in the 1k$\pm 50$ Hz range, and if you have a transient event with a white-noise-like distribution, you want an all-pass filter to preserve the sharp attack.
What you need is a resonator filter [1]:
$$
H(z)=\frac{(1-\lambda)\sqrt{1+\lambda^2-2\lambda\cos(2\omega_0)}}{1-(2\lambda\cos(\omega_0))z^{-1}+\lambda^2 z^{-2}},
$$
its behavior for different values of $\lambda \in [0,1]$ is like so:
so for $\lambda\to 0$ you will get a flat response to catch transient events, and for $\lambda\to 1$ you will have a localized filter at the desired frequency. Here, for illustration purposes, I set $w_0=\pi/2$, but you can change the desired frequency using the formula $w_0=2\pi F_0/F_s$.
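These two limits are easy to check numerically. The following sketch (my addition, in Python rather than Matlab) evaluates $|H(e^{j\omega})|$ directly from the transfer function above, with the same illustrative $w_0=\pi/2$:

```python
import cmath
import math

def resonator_gain(w, w0, lam):
    # |H(e^{jw})| for the resonator H(z) given above
    z = cmath.exp(1j * w)
    b0 = (1 - lam) * math.sqrt(1 + lam ** 2 - 2 * lam * math.cos(2 * w0))
    den = 1 - 2 * lam * math.cos(w0) * z ** -1 + lam ** 2 * z ** -2
    return abs(b0 / den)

w0 = math.pi / 2   # illustrative centre frequency
for lam in (0.0, 0.5, 0.99):
    print(lam, [round(resonator_gain(w, w0, lam), 4) for w in (0.2, w0, 2.8)])
```

At $\lambda = 0$ the response is exactly flat (all-pass); at $\lambda = 0.99$ the gain is exactly 1 at $w_0$ and drops to roughly 0.01 a few tenths of a radian away, which is the narrow-band behavior used for the steady tone.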
For setting $\lambda$ automatically you can use the spectral flatness estimator [2]:
$$
f = \frac{\left(\prod_{n=0}^{N-1}{x[n]}\right)^{1/N}}{\frac{1}{N}\sum_{n=0}^{N-1}{x[n]}},
$$
which is $f=1$, when the magnitude spectrum is completely flat, and $f=0$, when the magnitude spectrum is completely localized. Therefore, you can make $\lambda=1-f$. I wrote the following code to exemplify how you can apply this control:
Fs=16e3;                  % sampling rate [Hz]
F0=1e3;                   % desired centre frequency [Hz]
w0 = 2*pi*F0/Fs;          % normalized centre frequency [rad/sample]
x1 = [zeros(1,50),2*rand(1,50)-1];               % transient: silence, then a noise burst
x2 = 0.7*sin(w0.*[1:100])+0.3*rand(1,100);       % 1 kHz tone plus noise
x3 = 0.7*sin(3.5*w0.*[1:100])+0.3*rand(1,100);   % 3.5 kHz tone plus noise
y = [adaptiveResonatorFilter(x1,w0), adaptiveResonatorFilter(x2,w0), adaptiveResonatorFilter(x3,w0)];
plot([x1,x2,x3],'linewidth',2)
hold on
plot(y,'linewidth',2)
xlabel('Samples')
ylabel('Amplitude')
legend('Original','Filtered')
function y = adaptiveResonatorFilter(x,w0)
    X = fft(x);
    mX = abs(X);
    mX = mX/max(mX);
    % spectral flatness: geometric over arithmetic mean of the magnitude
    % spectrum (the 'g'/'a' options of mean are Octave-specific)
    sf = mean(mX,'g')/mean(mX,'a');
    % hard decision: flat spectrum -> all-pass (lambda = 0), peaky -> narrow
    % resonator (ifelse is Octave-specific; use an if/else branch in MATLAB)
    lambda = ifelse(0.5<1-sf, 0.99, 0.0);
    B = (1-lambda)*sqrt(1+lambda^2-2*lambda*cos(2*w0));  % numerator, unit gain at w0
    A = [1,-2*lambda*cos(w0), lambda^2];                 % poles at lambda*exp(+/-j*w0)
    [H,W] = freqz(B,A,linspace(-pi,pi,length(mX)));      % response on a symmetric grid
    Y = X .* fftshift(H);                                % filter in the frequency domain
    y = real(ifft(Y));
end
which gives the following output:
where you can see that transient part is kept untouched, the 1k Hz pure tone contaminated with noise has been cleared and the 3.5k Hz pure tone has been attenuated, as you wanted.
Note: I am taking this as the definition of "transient attack". Please correct me if I misunderstood.
M. Vetterli, P. Prandoni. Signal Processing for Communications. EPFL press.
https://en.wikipedia.org/wiki/Spectral_flatness | {
"domain": "dsp.stackexchange",
"id": 6822,
"tags": "filters, filter-design, adaptive-filters, envelope"
} |
When inverting a transfer function, solving for the input using the output does the causality status change | Question: suppose $y(n)=ax(n-1)+bx(n-2)+\dots$ ($y$ is the output and $x$ the input). What happens if I want to solve $x(n)$ from $y(n)$?
Z transform: $$Y(z)=G(z)X(z)\tag{1}$$
then $$X(z)=\frac{1}{G(z)}Y(z)\tag{2}$$
What are the properties of $1/G(z)$ ? If $(1)$ is causal what is the status of the inverse $(2)$? The roles of the poles and zeros have changed.
Answer: As you've pointed out, inversion leads to poles at locations of the zeros of the original transfer function and vice versa. Assuming that $G(z)$ is causal and stable (i.e., it has all its poles inside the unit circle), we have to distinguish $3$ cases:
$G(z)$ has at least some zeros outside the unit circle. This means its inverse has some poles outside the unit circle, and consequently, it cannot be causal and stable. If $G(z)$ has no zeros on the unit circle, there exists a stable impulse response corresponding to $1/G(z)$ but it cannot be causal. This is because a transfer function does not uniquely determine an impulse response. We can get different impulse responses corresponding to different regions of convergence of $1/G(z)$.
$G(z)$ has some of its zeros on the unit circle. No stable inverse exists because $1/G(z)$ has poles on the unit circle.
$G(z)$ has all its zeros inside the unit circle, i.e., it is a minimum-phase system. Consequently, $1/G(z)$ also has all its poles and zeros inside the unit circle (i.e., it is minimum-phase), and can be implemented by a causal and stable system. | {
"domain": "dsp.stackexchange",
"id": 5837,
"tags": "discrete-signals, z-transform, transfer-function, stability, causality"
} |
Are Atoms and ions actually mini solar systems? | Question: If you look at an Atom, don't you notice it works much like our solar system,
and if they are solar systems then we might be an Atom.
Answer: There are definitely some similarities between the solar system and the Bohr model. But it is a gross oversimplification, and even then there are many differences. There is no similarity between the solar system and the newer, much more accurate quantum mechanical model of the atom.
http://www.school-for-champions.com/science/atoms_solar_systems.htm#.Wm32Ya6WbIU
https://www.quora.com/What-if-our-Solar-System-is-an-atom-and-Sun-is-the-nucleus-and-we-are-sub-atomic-particles
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 46371,
"tags": "atomic-physics, solar-system"
} |
Dirac equation derivation | Question: I am working through a set of lecture notes containing a derivation of the Dirac equation following the historical route of Dirac. It states that Dirac postulated a hermitian first-order differential equation for a spinor field $\psi(x) \in \mathbb{C}^{n}$,
\begin{equation}
i \partial^{0} \psi(x)=\left(\alpha^{i} i \partial^{i}+\beta m\right) \psi(x),\tag{1}
\end{equation}
where the sum is over spatial indices only, and the hermitian property of $H_D:=\left(\alpha^{i} i \partial^{i}+\beta m\right)$ means that the coefficient matrices $\alpha^{i}, \beta \in \mathbb{C}^{m \times m}$ are hermitian. Next, the notes go on to derive that
\begin{align}\label{EQadawdwww}
\left\{\alpha^{i}, \alpha^{j}\right\}=2 \delta^{i j} I, \quad\quad\left\{\alpha^{i}, \beta\right\}=0,\quad\text{ and} \quad \beta^{2}=I.
\end{align}
It then goes on to state that, as expected for a Hamiltonian formulation of a theory, the ansatz above does not treat space and time on equal footing. This problem is claimed to be fixed by multiplying through by $\beta$ and rearranging:
\begin{align}
0=\left(i\left(\beta \partial^{0}-\beta \alpha^{i} \partial^{i}\right)-m\right) \psi(x).\tag{2}
\end{align}
First off, I don't see how (1) does not treat space and time on equal footing. Unlike the Schrödinger equation, (1) has space and time derivatives that are of the same order, so I don't see the problem. I'm also curious about the claim that Hamiltonian formulations generically have similar issues. I can't think of a convincing argument.
The next claim is equally puzzling. What have we actually changed in going from (1) to (2) that remedies the proposed unequal treatment of space and time?
Answer: The problem is that the matrix $\beta$ will mix the components of $\psi$ in the $m\beta\psi$ term, but not in the terms with $\partial ^i \psi$ in equation 1. That will prevent you from putting the components of the 4-derivative operator $\partial^\mu = (\partial^0, \partial^i)$ into a Lorentz-invariant product that operates like a scalar (roughly speaking) on $\psi$.
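For reference, the fix can be made explicit (a sketch, assuming metric signature $(+,-,-,-)$, so that $\partial_0 = \partial^0$ and $\partial_i = -\partial^i$): define
$$\gamma^0 := \beta, \qquad \gamma^i := \beta\alpha^i.$$
Multiplying (1) through by $\beta$ then gives
$$0 = \left(i\gamma^0\partial^0 - i\gamma^i\partial^i - m\right)\psi(x) = \left(i\gamma^\mu\partial_\mu - m\right)\psi(x),$$
and the anticommutation relations among the $\alpha^i$ and $\beta$ translate into the Clifford algebra $\{\gamma^\mu, \gamma^\nu\} = 2g^{\mu\nu}I$, so all four derivatives now enter through the single covariant combination $\gamma^\mu\partial_\mu$.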
Multiplying through by $\beta$ eliminates the problem. Since $\beta^2 = I$, the matrix is "taken off" of the term proportional to the scalar $m$, and it now pre-multiplies all of the components of the derivative operator (space and time) in the same way. The relative sign between the time and space derivatives point to an inner-product-type construction that you could make more explicit if you like, and $m$, as noted, is already a scalar. Now you have a "scalar-like" object multiplying your spinor. Again, you could formalize this. | {
"domain": "physics.stackexchange",
"id": 59485,
"tags": "quantum-mechanics, special-relativity, dirac-equation, spinors, dirac-matrices"
} |
if condition for approximate time subscriber | Question:
In my current code I subscribe to two topics using the approximate time policy; suppose the callback is called callbackTwoTopics.
Now there's a third topic that sometimes publishes and sometimes doesn't, but when it does publish I'd like to call another function, callbackThreeTopics, instead of callbackTwoTopics.
I know how to subscribe to three topics, but I don't know how to have a condition on a callback. Is it possible to do this in ROS cpp? If so, how?
Originally posted by rav728 on ROS Answers with karma: 3 on 2019-06-09
Post score: 0
Answer:
I don't think there is a specific way to do that using the existing sync policies. To handle your case you would have to implement a new sync policy https://github.com/ros/ros_comm/tree/melodic-devel/utilities/message_filters/include/message_filters/sync_policies
A simpler way would probably be to only sync the two high-frequency topics, buffer messages on the third topic, and dequeue messages from the buffer in the two-topic callback, with some timestamp checking of course.
Edit: Here is some pseudocode to get you started but you need to put some effort into solving your problem instead of searching for an existing implementation. Please mark this question as answered if this has helped.
#include <cmath>
#include <deque>

void processTwo(const T1 &t1, const T2 &t2);
void processThree(const T1 &t1, const T2 &t2, const T3 &t3);

std::deque<T3> buffer_t3;
const double sync_tolerance_seconds = 0.1; // Some window of tolerance
void callbackTwoTopics(const T1 &t1, const T2 &t2) {
if (buffer_t3.empty()) {
// You could also do timestamp checks here and decide to buffer your
// messages and wait for t3
processTwo(t1, t2);
return;
}
T3 t3 = buffer_t3.front();  // take a copy so popping it below leaves t3 valid
const ros::Time &stamp1 = t1.header.stamp;
const ros::Time &stamp3 = t3.header.stamp;
double time_diff_seconds = stamp1.toSec() - stamp3.toSec();
if (std::abs(time_diff_seconds) < sync_tolerance_seconds) {
// t3 is usable
buffer_t3.pop_front();
processThree(t1, t2, t3);
} else if (time_diff_seconds > 0) {
// t3 is too far in the future, leave it in the buffer for later
} else if (time_diff_seconds < 0) {
// t3 is too far in the past, drop it
buffer_t3.pop_front();
}
}
void callbackOtherTopic(const T3 &t3) {
buffer_t3.push_back(t3);
}
Originally posted by JustinBlack02 with karma: 131 on 2019-06-09
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by rav728 on 2019-06-09:
Thanks for your answer. I did a quick search on implementations of buffering and dequeue messages, but couldn't find too much. Would you be able to provide an example or point me to an example that does this? Also, what would function could you use to check timestamps when dequeuing messages?
Comment by rav728 on 2019-06-10:
any help would be greatly appreciated @JustinBlack02 !
Comment by JustinBlack02 on 2019-06-10:
It's impossible to clearly answer your question, you will have to figure out how precise of a solution you want on your own. I have edited the original answer with some pseudocode. | {
"domain": "robotics.stackexchange",
"id": 33146,
"tags": "ros, callback, roscpp, ros-kinetic"
} |
Gravitational Anomaly of a subsurface body | Question: How does the gravitational anomaly measured at the Earth's surface and produced by a subsurface body depend on its depth and on the density contrast of the body relative to its surroundings?
Answer: The gravitational anomaly (delta_g) changes linearly with the density contrast, and in proportion to the inverse square of the depth. That's simply a version of the 'Universal Law of Gravitation'.
Here is an example of the anomaly created by an anomalous spherical body with a density difference of delta_rho relative to the surrounding density. G is the known Gravitational Constant:
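As a rough numeric sketch of that example (my own illustration, treating the buried sphere as an equivalent point mass; all input values below are made-up):

```python
import math

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def sphere_anomaly(x, depth, radius, drho):
    # vertical gravity anomaly [m/s^2] at horizontal offset x [m] from the
    # point directly above a buried sphere (point-mass equivalent)
    m = (4.0 / 3.0) * math.pi * radius ** 3 * drho   # anomalous mass [kg]
    return G * m * depth / (x ** 2 + depth ** 2) ** 1.5

peak1 = sphere_anomaly(0.0, 1000.0, 200.0, 500.0)
peak2 = sphere_anomaly(0.0, 1000.0, 200.0, 1000.0)   # double the density contrast
peak3 = sphere_anomaly(0.0, 2000.0, 200.0, 500.0)    # double the depth
print(peak2 / peak1)   # ~2: peak anomaly is linear in density contrast
print(peak1 / peak3)   # ~4: peak anomaly falls off as 1/depth^2
```

Doubling the density contrast doubles the peak anomaly; doubling the depth quarters it.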
From Turcotte and Schubert, 2002, Geodynamics | {
"domain": "earthscience.stackexchange",
"id": 748,
"tags": "geophysics, geology, structural-geology"
} |
How to setup and control the dynamixels for the three omni-wheels? | Question:
I have XM430-W210-T motors. I can't find anything anywhere that will solve my problem... Any help will be appreciated!
Thank you
Originally posted by al_ca on ROS Answers with karma: 79 on 2020-08-07
Post score: 0
Answer:
I suggest you check out Dynamixel Workbench to familiarize yourself.
After that, check out the Dynamixel SDK, which will show you how to command your motors. There are very good examples given already; I think they are more than enough at least to move a robot, including examples (Python and C++) of how to use Dynamixel motors with the SDK.
Calculate the robot's velocity commands. A nice tutorial can be found here and its corresponding ROS example.
Now just connect steps 1 and 2 and your robot should move.
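For step 3, the inverse kinematics of a three-omni-wheel base is compact enough to sketch directly (my own illustration, not from the linked tutorial; the wheel angles, radii, and function name are made-up example values):

```python
import math

def omni3_wheel_speeds(vx, vy, wz, wheel_angles_deg=(90.0, 210.0, 330.0),
                       base_radius=0.15, wheel_radius=0.024):
    # Inverse kinematics for a 3-omni-wheel base: map a body twist
    # (vx, vy [m/s], wz [rad/s]) to wheel angular velocities [rad/s].
    speeds = []
    for a_deg in wheel_angles_deg:
        a = math.radians(a_deg)
        # linear rolling speed of wheel i, whose rolling direction is
        # tangent to the circle of mounted wheels
        v = -math.sin(a) * vx + math.cos(a) * vy + base_radius * wz
        speeds.append(v / wheel_radius)
    return speeds

# pure rotation: all three wheels turn at the same rate
print(omni3_wheel_speeds(0.0, 0.0, 1.0))
# pure translation along +y: the wheel facing +y does not turn,
# and the other two turn in opposite directions
print(omni3_wheel_speeds(0.0, 0.2, 0.0))
```

The three returned wheel speeds would then be sent as the goal velocities of the corresponding Dynamixel motors (after converting to whatever units your firmware expects).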
Originally posted by Tahir M. with karma: 213 on 2020-08-09
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by al_ca on 2020-08-11:
Thank you very much Tahir, I will try it and mark this answer | {
"domain": "robotics.stackexchange",
"id": 35387,
"tags": "ros-kinetic"
} |
How close to a black hole can an object orbit elliptically? | Question: How close to a black hole can an object orbit elliptically?
I know circular orbits are no longer stable at distances less than 3 times the Schwarzschild radius. But what about elliptical orbits?
Can an object have a semi-major axis or perihelion at a distance of less than 3 times the Schwarzschild radius?
Answer: A bound elliptical orbit around a Schwarzschild black hole must have $r > 2 r_s$ at all times (where $r_s = 2 M$ is the Schwarzschild radius). Deriving this result is a good exercise for students learning about the Schwarzschild geometry, so I won't go through all the details, but the basic sketch of the proof is as follows:
Recall that a massive particle moving in a Schwarzschild geometry is equivalent to a particle moving in a classical "effective potential" given by
$$
V_\text{eff}(r) = - \frac{M}{r} + \frac{\ell^2}{2 r^2} - \frac{M\ell^2}{r^3},
$$
where $M$ is the mass of the black hole and $\ell$ is the specific angular momentum of the particle.
Note that for a bound orbit, we must have $V_\text{eff}(r) < 0$ at all times.
Find the points at which $V_\text{eff}(r) = 0$ for a given value of $\ell$. This will be the closest possible value of perihelion for a bound orbit for a particular value of $\ell$.
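As a check of this step (and of the critical value $\ell = 4M$ singled out next), note that for $\ell = 4M$ the potential factors exactly as $V_\text{eff}(r) = -M(r-4M)^2/r^3$, which is never positive and has a double root at $r = 4M = 2r_s$. A quick sketch, in units $G = c = M = 1$ (my addition, not part of the original answer):

```python
M = 1.0
ell = 4.0 * M   # the critical specific angular momentum

def v_eff(r):
    # Schwarzschild effective potential for a massive particle (G = c = 1)
    return -M / r + ell ** 2 / (2 * r ** 2) - M * ell ** 2 / r ** 3

def v_factored(r):
    # closed form valid for ell = 4M: V = -M (r - 4M)^2 / r^3  <= 0 everywhere
    return -(M / r ** 3) * (r - 4 * M) ** 2

radii = [0.5 + 0.25 * k for k in range(40)]   # sample radii, r > 0
print(v_eff(4.0))                    # 0.0: double root at r = 4M = 2 r_s
print(max(v_eff(r) for r in radii))  # the potential is never positive
```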
Find the value of $\ell$ that allows for the closest perihelion. It turns out to be $\ell = 4M$, and for that value of the angular momentum you must have $r > 2 r_s$ to satisfy $V_\text{eff}(r) < 0$. | {
"domain": "physics.stackexchange",
"id": 84188,
"tags": "black-holes, orbital-motion"
} |
What are the nearest galaxies I can observe? | Question: What are the nearest galaxies I can observe using my Telescope? Does it require to be out of the city lights?
Answer: The nearest one is the one you are in: the Milky Way. It can be seen from any place, as it is all around us.
If you want to actually look at a whole galaxy in a single view, then the nearest galaxies that are easy to observe are the Small and Large Magellanic Clouds, and the Andromeda Galaxy.
There are galaxies nearer than those, but harder to observe. A quite complete list is at https://en.wikipedia.org/wiki/List_of_nearest_galaxies
"domain": "astronomy.stackexchange",
"id": 627,
"tags": "galaxy, telescope"
} |
What is the old (50's) convention on Dirac gamma matrices? | Question: What were the standard relations for gamma matrices in the mid 50's, when 4-vectors were represented by $(x_1, x_2, x_3, ict)$? In particular the values of $\gamma^\mu\gamma^\nu$
, the definition of $\bar{\psi}$ and $\gamma^5$.
Answer: Gregor Wentzel's 1949 book, "Quantum theory of fields", defines the Dirac matrices as
\begin{eqnarray}
\alpha ^{(\nu)} &=& \alpha ^{(\nu)*}\\
\alpha ^{(\mu)} \alpha ^{(\nu)} + \alpha ^{(\nu)} \alpha ^{(\mu)} &=& 2 \delta_{\mu\nu}
\end{eqnarray}
and the gamma matrices as ($\beta$ is $\alpha^{(4)}$)
\begin{eqnarray}
\gamma^{(k)} &=& -i \beta \alpha^{(k)}\\
&=& i \alpha^{(k)} \beta\\
\gamma^{(4)} &=& \beta
\end{eqnarray}
with properties
\begin{eqnarray}
\gamma^{(\nu)} &=& \gamma^{(\nu)*}\\
\gamma^{(\mu)} \gamma^{(\nu)} + \gamma^{(\nu)} \gamma^{(\mu)} &=& 2 \delta_{\mu\nu}
\end{eqnarray}
and the adjoint spinor is
\begin{eqnarray}
\psi^\dagger = i \psi^* \beta
\end{eqnarray}
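These relations can be checked numerically. The following sketch (my addition, not from the book) builds the standard Dirac representation and verifies the Euclidean Clifford algebra and hermiticity, reading Wentzel's $*$ as the Hermitian adjoint, which is how the symbol is often used in texts of that era:

```python
import itertools

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scal(c, A):
    return [[c * x for x in row] for row in A]

def dagger(A):
    n = len(A)
    return [[complex(A[j][i]).conjugate() for j in range(n)] for i in range(n)]

def block(A, B, C, D):
    # assemble a 4x4 matrix from 2x2 blocks [[A, B], [C, D]]
    return [A[0] + B[0], A[1] + B[1], C[0] + D[0], C[1] + D[1]]

I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
sigma = [[[0, 1], [1, 0]],
         [[0, -1j], [1j, 0]],
         [[1, 0], [0, -1]]]

alphas = [block(Z2, s, s, Z2) for s in sigma]   # alpha^(1..3), Dirac representation
beta = block(I2, Z2, Z2, scal(-1, I2))          # alpha^(4) = beta = diag(1, 1, -1, -1)
# gamma^(k) = -i beta alpha^(k),  gamma^(4) = beta
gammas = [mul(scal(-1j, beta), a) for a in alphas] + [beta]

def close(A, B, tol=1e-12):
    return all(abs(complex(A[i][j]) - complex(B[i][j])) < tol
               for i in range(4) for j in range(4))

I4 = block(I2, Z2, Z2, I2)
ok_clifford = all(
    close(add(mul(gammas[m], gammas[n]), mul(gammas[n], gammas[m])),
          scal(2 if m == n else 0, I4))
    for m, n in itertools.product(range(4), repeat=2))
ok_hermitian = all(close(g, dagger(g)) for g in gammas)
print(ok_clifford, ok_hermitian)   # True True
```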
Rivier in "On the quantum theory of fields" (1953) defines the matrices by
\begin{eqnarray}
\gamma^{i} &=& \beta a^i\\
\gamma^4 &=& \beta
\end{eqnarray}
with
\begin{eqnarray}
a^i = \begin{pmatrix}
0 & \sigma^i\\
\sigma^i & 0
\end{pmatrix},\ \beta = \begin{pmatrix}
(1) & 0\\
0 & -(1)
\end{pmatrix}
\end{eqnarray}
with the identities
\begin{eqnarray}
[\gamma^\mu, \gamma^\nu]_+ &=& 2 g^{\mu\nu}\\
\gamma^{i+} &=& -\gamma^i\\
\gamma^{4+} &=& \gamma^4
\end{eqnarray}
The adjoint spinor is defined by
\begin{eqnarray}
\bar{\psi} = \psi^+_b \beta_{ba}
\end{eqnarray}
Morse defines them in Methods of theoretical physics (1953) as
It's harder to find uses of $\gamma^5$ (most theories were not chiral), but you can check Theory of the Fermi interaction (1957), which puts it as $\gamma^5 = \gamma_x \gamma_y \gamma_z \gamma_t$, with
\begin{eqnarray}
\gamma^t = \begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix},\ \mathbf{\gamma} = \begin{pmatrix}
0 & \mathbf{\sigma}\\
- \mathbf{\sigma} & 0
\end{pmatrix},\ i\gamma^5 = -\begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix}
\end{eqnarray} | {
"domain": "physics.stackexchange",
"id": 65838,
"tags": "conventions, history, dirac-equation, complex-numbers, dirac-matrices"
} |
What flying insect is this? Resembles Wasp / Crane Fly / Moth? | Question: What insect is this?
Found this in southern Bangalore, India. Looked dangerous, flew really slowly with its body upright and legs spread out and landed on a wall.
Size: about 1.5 inches tall, 1.5 inches wide, when spread.
These photos are zoomed in and rather dull colored, but the actual color was a bright orange, like #FF9050 and dark black.
Answer: Pselliophora laeta, a species of crane fly (Tipulidae) in the Ctenophorinae subfamily.
Route to discovery: Reverse image searching my own photo on Google led me to a matching image on Naturalista (Familia Tipulidae), and in turn to iNaturalist, which pointed me to the genus Dictenidia. Further Googling for "Genus Dictenidia" brought up many genera of crane fly, which I visually scanned until I spotted it -- Pselliophora! That led to Wikipedia, and to this image, which mentions its source to be whatsthatbug, where the discussion cites Pselliophora laeta on India Nature Watch.
--
Source: https://www.inaturalist.org/observations/299363
Observed: Thiruvananthapuram, India; Jul 28, 2012 5:13 PM IST
Mating Crane Flies from India
Source: https://www.whatsthatbug.com/2010/09/30/mating-crane-flies-from-india/
Observed: Mumbai, India; Sep 30, 2010 1:43 AM IST | {
"domain": "biology.stackexchange",
"id": 8834,
"tags": "species-identification, entomology"
} |
How to show that translational invariance in $y$ of $H$ implies that it's an eigenstate of $p_y$?
I want to argue that this implies that the eigenstate of $H$ must be an eigenstate of $p_y$.
Although I can intuitively see any eigenstate of $p_y$ should have invariance under translation in $y$, I want to find a way to show this. In order to do this, how should I start?
Answer:
I want to argue that this implies that the eigenstate of $H$ must be an eigenstate of $p_y$.
Tough luck. It doesn't.
You are guaranteed one complete set of shared eigenstates between $H$ and $p_y$.
However, you are not guaranteed that every eigenstate of $H$ will be an eigenstate of $p_y$.
To get a simple counterexample, start off with the usual shared eigenstates,
$$
\psi_{k,n}(x,y) = \varphi_n(x-k/eB)e^{iky}
$$
(where $\varphi_n(x)$ is an eigenfunction of the harmonic oscillator with mass $m$ and cyclotron frequency $\omega_c=eB/m$), which are eigenstates of $H$ with eigenvalue $\hbar\omega_c(n+\tfrac12)$ and eigenstates of $p_y$ with eigenvalue $\hbar k$.
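As a numerical aside (my own sketch, not part of the original answer): taking $\hbar = 1$ and substituting arbitrary concrete profiles for $\varphi_n$, a finite-difference check shows that a sum of two such states with different $k$ fails to be a $p_y$ eigenstate:

```python
import cmath

def psi(x, y, k1=1.0, k2=2.0):
    # Concrete stand-ins for the shifted profiles phi_n(x - k/eB);
    # any two distinct functions of x suffice for this check.
    phi1 = cmath.exp(-x**2)
    phi2 = cmath.exp(-(x - 1.0)**2)
    return phi1 * cmath.exp(1j * k1 * y) + phi2 * cmath.exp(1j * k2 * y)

def p_y(x, y, h=1e-6):
    # p_y = -i d/dy (with hbar = 1), via a central finite difference
    return -1j * (psi(x, y + h) - psi(x, y - h)) / (2 * h)

# For a genuine p_y eigenstate, p_y(psi)/psi is the same constant everywhere.
r1 = p_y(0.0, 0.0) / psi(0.0, 0.0)
r2 = p_y(0.0, 1.0) / psi(0.0, 1.0)
print(abs(r1 - r2) > 1e-3)  # True: the ratio varies, so psi is not an eigenstate
```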
From those, construct the linear combination
\begin{align}
\tilde\psi(x,y)
& =
\psi_{k_1,n}(x,y) + \psi_{k_2,n}(x,y)
\\ & =
\varphi_n(x-k_1/eB)e^{ik_1y} + \varphi_n(x-k_2/eB)e^{ik_2y}
,
\end{align}
with the same $n$ but with different $k$. These are still eigenstates of $H$ but not eigenstates of $p_y$. QED. | {
"domain": "physics.stackexchange",
"id": 61199,
"tags": "quantum-mechanics, homework-and-exercises, momentum, conservation-laws, symmetry"
} |
Image Processing and applicability of 2D Fourier Transform | Question: As a newbie in the world of signal processing, I am having a hard time in appreciating image 2-D fourier transforms.
I am fully able to appreciate the concept of the 1-D Fourier transform. Essentially, given a random causal signal, it can be decomposed into sinusoids. We cross-correlate known sinusoids (basis functions) using the FT and obtain the frequency spectrum. This is just perfect.
Please help me understand why in image processing, transformation along both axes is needed.
Given a greyscale image, like the Lena image, stored in a matrix $M \times N$, I infer the following:
Pixel intensity varies from left to right
Pixel intensity also varies top to bottom
A FT performed on 1 Row, Left to Right, gives us a spectrum of frequencies belonging to that row
Similarly, such a FT can be individually performed on each of the rows.
So in the end, we end up with $M$ lists of frequencies, e.g., Freq-List[Row-0] = {f1, f2, f4 ... fj}, Freq-List[Row-5] = {f2, f11 ..}
With this data, the Row frequency lists, will we not be able to tell how each pixel is affected by frequencies of that row? Shouldn't the row frequencies be sufficient?
Will the frequency lists along the columns also have a bearing on the pixels?
Answer:
Please help me understand why in image processing, transformation along both axes is needed.
A one dimensional signal describes how a quantity varies across, usually, time. Time, commonly represented by the symbol $t$, is the only parameter required to describe completely the signal at $t$.
A two dimensional signal describes how a quantity varies across two parameters that are absolutely required to describe it completely. When referring to images, the quantity that is described is, usually, radiant flux. That is, how much "light" (more generally, radiation) is received by the sensor. In common handheld cameras, each pixel of an image describes how much visible light is received by some point in the scene that is viewed.
The complete set of pixels of an image describes the variation of visible light across the surface of the camera's sensor.
When applying the Fourier Transform to a one dimensional signal, the dimension of time is transformed to a dimension of frequency and the transform breaks the signal down to a sum of sinusoids.
When applying the Fourier Transform to a two dimensional signal, its two spatial dimensions are decomposed into sums of orthogonal spatial sinusoids. To cut a long story short, if the basis functions of the Fourier Transform were not orthogonal, this trick of decomposing and recomposing would not be possible. What does this look like? It looks like a carton of eggs:
The higher the spatial frequency, the smaller the eggs (more of them fit in the same length) and vice versa.
More formally, it looks like this:
And it doesn't even have to be "symmetrical", that is, each one of its dimensions may be supporting a different spatial frequency:
In this last image, we have many more sinusoidal cycles across the $x$ dimension than across the $y$ dimension.
Therefore, whereas in the one dimensional case, a signal as complicated as the voice of a singer is decomposed into a set of simple "whistles", in two dimensions, an image as complicated as Lena is decomposed into a set of elementary little blobs. In the first case, the signal is correlated with a series of progressively increasing frequency sinusoids, in the second case, exactly the same thing happens only now, the signal is a patch of pixels and the "sinusoid" is a patch of spatial frequencies that could vary differently across the $x$ and $y$ dimension.
Now, in terms of expressing this process via the one dimensional Fourier Transform (that performs this correlation process with one set of sinusoids), the same, is applied twice.
Consider Lena. And apply the Fourier Transform across each one of its rows. What do you get? You get a set of rows in the frequency domain. These describe the sets of sinusoids with which visible light varies across each row of the image. But!!! at this point, we know nothing about the set of sinusoids that describe visible light variation along the vertical.
Another way to "visualise" this, is to consider the DC bin of the row Fourier Transforms (frequency is zero). This tells you the average brightness of the pixels in each row but it still varies along the column direction! (i.e. we know nothing about the DC along the columns).
In other words, where we had an $f(x,y)$, we pass it through the Fourier Transform along the rows and we now have an $f(F_x, y)$. We are now in an intermediate state where one dimension is frequency and the other is still space.
For this reason, you apply the Fourier Transform once more along the columns of the Fourier Transform of the rows. (And to get back to the DC example, you now get one DC coefficient that describes the average brightness along the rows and the columns, that is, you get the average brightness of the image.)
Now, remember, the one dimensional Fourier Transform decomposes a signal ($x(t)$) into two series of "strength" coefficients, one for the strengths of the $\sin$ and one for the strength of the $\cos$ at different frequencies. The two dimensional Fourier Transform does exactly the same thing but now the "strength" coefficients are two dimensional. That is, some coefficient at pixel $i,j$ (in the Fourier Transformed image, i.e. the Frequency Domain) describes the contribution of a "carton of eggs" with different number of cycles in the $x$ and $y$ dimension to the whole image.
Generalising to three and higher dimensions is done similarly.
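The two-pass, rows-then-columns procedure described above can be checked numerically. The following is my own sketch (not from the original answer) using NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))  # tiny stand-in for a grayscale image

# 1-D FFT of every row, then 1-D FFT of every column of that result
rows = np.fft.fft(img, axis=1)          # the intermediate f(F_x, y) state
rows_then_cols = np.fft.fft(rows, axis=0)

# The library's 2-D transform performs exactly this separable procedure
print(np.allclose(rows_then_cols, np.fft.fft2(img)))  # True

# The (0, 0) bin is the overall DC term: the sum of all pixels,
# i.e. the image's average brightness times the number of pixels
print(np.isclose(rows_then_cols[0, 0], img.size * img.mean()))  # True
```

Because the basis is separable, applying the two 1-D passes in the opposite order (columns first, then rows) gives the same result.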
Hope this helps.
(Note, all images retrieved via Google Images and linked, rather than uploaded, to the post) | {
"domain": "dsp.stackexchange",
"id": 4271,
"tags": "image-processing, fourier-transform, 2d"
} |
One-Way Functions vs. perfectly binding commitments | Question: If OWFs exist, then statistically binding bit commitment is possible.[1]
Is it known that if OWFs exist then perfectly binding bit commitment is possible?
If no, is there a known black-box separation between them?
[1] http://en.wikipedia.org/wiki/Pseudorandom_generator_theorem and
http://en.wikipedia.org/wiki/Commitment_scheme#Bit-commitment_from_a_pseudo-random_generator
Answer: In a recent work with Rafael Pass, it is shown that without those extra complexity assumptions of Barak-Ong-Vadhan, noninteractive commitments cannot be based on one-way functions in a black-box way. In fact, even with these extra assumptions (when formalized as some kind of hitting property assumed in addition to one-wayness), a black-box separation still holds:
http://eprint.iacr.org/2012/523.pdf
(the construction of Barak-Ong-Vadhan is non-black-box). | {
"domain": "cstheory.stackexchange",
"id": 1576,
"tags": "cr.crypto-security, one-way-function"
} |
A thought experiment about neutrinos | Question: I don't understand all the details of Dirac mass, Majorana mass, and many other "deep" notions.
I have in mind a very simple thought experiment.
Because of neutrino oscillations we know neutrinos have mass. Thus their speed is less than $c$.
I imagine a beam of neutrinos created by some experiment in a lab. They are neutrinos, not antineutrinos, and have energy much larger than their rest mass. So they have left-handed helicity.
Now I imagine (this is a thought experiment, OK?) that some lab is moving, with respect to the one that created them, at a speed so very, very close to $c$ that it will overtake the beam, so fast that in the frame of this second lab, the speed of the beam appears to be directed towards the lab at a speed close to $c$ and in fact, opposite of their speed in the frame of the lab that created them.
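For a sense of scale (my own back-of-the-envelope, with assumed numbers: a rest energy of order 0.1 eV and a beam energy of 1 MeV, neither of which is specified in the question), the gap between the neutrino's speed and $c$ is tiny, so the overtaking lab must be boosted even closer to $c$:

```python
import math

# Assumed illustrative numbers, not from the question:
mc2 = 0.1     # neutrino rest energy, eV
E = 1.0e6     # beam energy, eV

gamma = E / mc2                           # Lorentz factor of the beam
one_minus_beta = 1.0 / (2.0 * gamma**2)   # leading-order 1 - v/c for gamma >> 1

print(f"gamma = {gamma:.0e}")             # 1e+07
print(f"1 - v/c ~ {one_minus_beta:.0e}")  # 5e-15
```

The exact relation is $1 - \beta = 1 - \sqrt{1 - 1/\gamma^2}$; the $1/(2\gamma^2)$ form is its large-$\gamma$ approximation, used here to avoid floating-point cancellation.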
In the frame of this new lab, the particles that are directed towards it have right-handed helicity.
Now two things can happen:
A) either they interact with the instruments in that lab with the same efficiency as in the original lab, and this means, since they have right-handed helicity, that they are now antineutrinos, as seen in this lab. So lepton number is not conserved.
B) or lepton number is conserved, they are still neutrinos, but having the right-handed helicity, which means the "wrong one" for neutrinos, they would interact much, much less than neutrinos of the correct, left-handed helicity. Then they are, in that lab, sterile neutrinos, but their rest mass is the same as for "normal neutrinos" and this sounds wrong.
So which is which? And please, don't throw me complicated notions that I cannot follow, Dirac vs Majorana mass, symmetry groups, chiral anomalies, etc.
Just tell me, A is right or B is right.
Thanks.
Well, thanks to you folks, I have learned something. I really mixed up chirality and helicity, and that has been cleared up.
I upvoted all of you, but I cannot accept an answer to a question so ill-posed.
But your answers only bring more questions.
Rather than editing this question, I think it would be better to ask a new one. I have to digest all this before asking a well-posed question (I hope).
If, by the time I am ready, from your answers or comments, I see a consensus that I should edit it rather than ask a new one, I shall oblige.
OK, so here is where my new question is.
Answer: This is the question about neutrino masses.
If you outrun a left-handed neutrino, and the right-handed particle interacts like an antineutrino, then the neutrino is its own antiparticle.
If you outrun a left-handed neutrino, and the right-handed particle is sterile because the weak interaction doesn’t couple to right-handed matter particles and the matter neutrino doesn't couple (at tree level) to electromagnetism or the strong force, then the neutrino and the antineutrino are different.
The problem is that both of these possibilities are consistent with all of the neutrino data that we have so far. Nobody knows the answer to your question.
Since there are now several comments and answers pointing out that chirality, unlike helicity, is invariant under boosts, I should clarify. A particle with Dirac-type mass cannot have definite chirality in its rest frame, and therefore cannot have constant chirality in any other frame. I was implicitly assuming that "outrunning" the neutrino took some finite amount of time from its creation in a left-chiral state, and that the neutrino in its rest frame had evolved into an incoherent mixture of left- and right-chiral components.
You can absolutely change whether (or better, how much) a particle with Dirac mass participates in the weak interaction by flipping its helicity; I spent fifteen years doing this with polarized beams of electrons and neutrons. | {
"domain": "physics.stackexchange",
"id": 89740,
"tags": "special-relativity, mass, neutrinos, chirality, helicity"
} |
Are the authors of the VAE paper writing the PDFs as a function of the random variables? | Question: Usually, I see the conventions:
discrete random variable is denoted as $X$,
the pmf is written as $P(X=x)$ or $p(X=x)$ or $p_{X}(x)$ or $p(x)$, where $x$ is an instance of $X$
a continuous random variable is denoted as $X$,
the pdf is denoted as $f_{X}(x)$ or $f(x)$, where $x$ is an instance of $X$; sometimes $p$ is used here too instead of $f$.
However, the VAE paper uses slightly different notation that I'm trying to understand
Let us consider some dataset $\mathbf{X}=\left\{\mathbf{x}^{(i)}\right\}_{i=1}^{N}$ consisting of $N$ i.i.d. samples of some continuous or discrete variable $\mathrm{x}$. We assume that the data are generated by some random process, involving an unobserved continuous random variable $\mathbf{z}$. The process consists of two steps: (1) a value $\mathbf{z}^{(i)}$ is generated from some prior distribution $p_{\boldsymbol{\theta}^{*}}(\mathbf{z}) ;(2)$ a value $\mathbf{x}^{(i)}$ is generated from some conditional distribution $p_{\boldsymbol{\theta}^{*}}(\mathbf{x} \mid \mathbf{z})$. We assume that the prior $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$ and likelihood $p_{\boldsymbol{\theta}^{*}}(\mathbf{x} \mid \mathbf{z})$ come from parametric families of distributions $p_{\boldsymbol{\theta}}(\mathbf{z})$ and $p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$, and that their PDFs are differentiable almost everywhere w.r.t. both $\boldsymbol{\theta}$ and $\mathbf{z}$. Unfortunately, a lot of this process is hidden from our view: the true parameters $\theta^{*}$ as well as the values of the latent variables $\mathrm{z}^{(i)}$ are unknown to us.
So I am looking at these:
$p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$
$p_{\boldsymbol{\theta}^{*}}(\mathbf{x} \mid \mathbf{z})$
dataset $\mathbf{X}=\left\{\mathbf{x}^{(i)}\right\}_{i=1}^{N}$
So I know the subscript for $\theta$ denotes those are the parameters for the pdf. It says "discrete variable $\mathrm{x}$", "unobserved continuous random variable $\mathbf{z}$", and "latent variables $\mathrm{z}^{(i)}$". At the top, where I wrote "discrete random variable $X$", it seems like that's the equivalent of "discrete variable $\mathrm{x}$" in this paper.
So, it looks like they're writing the PDFs as a function of the random variables. Is my assumption correct? Because it is different than the typical conventions I see.
edit: looks like his other paper has a notation guide, in the appendix, though it seems like he's conflating both random vector and instances of vector in the notation?
https://arxiv.org/pdf/1906.02691.pdf
Answer: When it comes to notation/terminology, often, people in machine learning are (a bit?) sloppy, which causes a lot of confusion, especially for newcomers to the field or people not very math-savvy. I was also confused about this notation at some point (see my last questions here, which are all about this confusing topic). See also this answer.
In the VAE paper, $\mathbf{X}$ is a dataset, as the authors write.
Your confusion also arises because the authors vaguely use the term "probability distribution", rather than pdf or pmf, to refer, for example, to $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$, which thus does not refer to a pdf or pmf. In fact, the authors also write
their PDFs are differentiable almost everywhere w.r.t. both $\boldsymbol{\theta}$ and $\mathbf{z}$
The $\mathbf{z}$ can refer to
a random variable, or
an input to the function $p_{\boldsymbol{\theta}^{*}}$
If it's the first case, then $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$ is the composition of 2 functions (because a rv is also a function).
If it's the second case, then $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$ is the evaluation of $p_{\boldsymbol{\theta}^{*}}$ at $\mathbf{z}$.
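To make the two readings concrete, here is my own illustration (not from the original answer): $p_{\boldsymbol{\theta}}$ as a plain function that we either evaluate at an input $z$ (case 2) or compose with a draw from the associated random variable (case 1):

```python
import math
import random

def make_gaussian_pdf(mu, sigma):
    """p_theta with theta = (mu, sigma): just a function of an input z."""
    def p(z):
        return math.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    return p

p_theta = make_gaussian_pdf(mu=0.0, sigma=1.0)

# Case 2: evaluate the pdf at an input variable z (not a random variable)
z = 0.0
print(round(p_theta(z), 4))  # 0.3989

# Case 1: compose with a random variable: draw a sample, then evaluate p at it
random.seed(0)
z_sample = random.gauss(0.0, 1.0)
print(p_theta(z_sample) > 0)  # True
```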
I think the 2nd case is the most likely. In addition, people are being sloppy here and use the notation $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$ (rather than just $p_{\boldsymbol{\theta}^{*}}$) to emphasize $p_{\boldsymbol{\theta}^{*}}$ is a function of some input variable (not random variable!), which we denote with the letter $\mathbf{z}$ to remind ourselves that $\mathbf{z}$ is associated with a random variable denoted with the same letter (and maybe also in bold and lowercase).
So, in this case, let's say we denote the random variable associated with $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$ with $\mathbf{z}$, then we could refer to this associated prior more explicitly as follows $p_\mathbf{z}(\mathbf{z})$ (but that would be even more confusing). It would have been a better idea to use $\mathbf{Z}$, but then we may use the upper case letters to denote matrices or sets (like the VAE paper), so we end up with this mess (which is one of the 2 mythical difficult problems well-known in Computer Science, i.e. naming things), which we need to learn to deal with or just ignore.
Conclusion: when I look at $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$, which has been referred to as a probability distribution, I think there's also some associated random variable, which people, in that same context, will probably denote as $\mathbf{z}$ or $\mathbf{Z}$. There may also be some input variable (not a random variable), which we denote by $\mathbf{z}$ or $z$. If they are not mentioned, then I just ignore that. I never think that $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$ is the composition of 2 functions (even if that's the case), because that case was never useful in my readings. | {
"domain": "ai.stackexchange",
"id": 3123,
"tags": "papers, variational-autoencoder, probability-distribution, notation"
} |
How to conveniently separate Cd impurities from ZnO without contaminating it? | Question: How could I separate Cd impurities from ZnO without contaminating it and without using overly expensive equipment or extremely dangerous solvents?
I don't want to contaminate it by introducing other heavy metals or being left with more than trace levels of undesired solvents.
I'd be fine losing some amount (say max 10-20%) of my ZnO in the process, if necessary.
Please ask me for any further clarity I could bring to this question.
I know nothing about this so I'd be happy with any contributions on how to easily separate any reasonable portion of $\ce{Cd}$ salts from $\ce{ZnO}$ in general (precipitating it out or solving it away from $\ce{ZnO}$ for instance). Cycling the zinc through a series of reaction and then back to ZnO would be fine also (if not leaving trace levels of anything biologically harmful).
Answer: Since you actually considered the use of a calutron, I assume that you are interested in separation of relatively small amounts of pure ZnO - on the order of a few grams.
Cadmium and zinc are similar in many respects, which makes me think of rare earths. One of the separation techniques used there is ion exchange chromatography. Resins with special affinity for many ions (either to catch or to reject them) are available. Some contaminants are not easy to remove by conventional ion exchange resins. In many cases, very specific resins have been developed for these contaminants. Selective resins from Rohm and Haas (www.lenntech.com) are available today for the removal of:
• Boron
• Cadmium, mercury and other heavy metals
• Chromate
• Lead
• Nickel
• Nitrate
• Perchlorate
and some other contaminants.
On the more researchy side, the stability constants of the tetraammine complexes of zinc and cadmium differ by a large factor, unlike their complexes with EDTA and similar chelators. For $\ce{Zn(NH3)4^2+}$, the dissociation constant is $9.8 \times 10^{-10}$; for $\ce{Cd(NH3)4^2+}$, it is $2.5 \times 10^{-7}$ (Reference Book of Inorganic Chemistry, Latimer and Hildebrand, 1951, 3rd ed., pp 135-6). The impure ZnO could be dissolved in $\ce{NH4Cl}$ (which will also dissolve CdO) and the solution run through a column. Here's the research: a column of what? Perhaps a column of the impure ZnO. Since the zinc complex is more stable (more soluble), it should concentrate in the first eluate, leaving CdO behind. Solid ZnO would rip NH3 off the cadmium complex and become soluble, so the ZnO will turn into CdO. I have no idea how useful this scheme could be, but the big difference in stability constants is an eye-catcher. | {
"domain": "chemistry.stackexchange",
"id": 12275,
"tags": "purification, separation-techniques"
} |
Are there still researches on new Data Structures? If so, what are some examples / scenarios? | Question: And I assume these new data structures are more context/field-specific?
Answer: Yes, there is. For instance, you can look at the SODA conference to find several examples of research papers published on data structures. Many of these are sophisticated and even esoteric. The field is too broad to summarize here. | {
"domain": "cs.stackexchange",
"id": 19200,
"tags": "data-structures, research"
} |
Do we need regular expression first or finite state automata in lexical anlysing? | Question: I'm a bit confused about the concept of finite state automata (FSA) and regular expression (RE) in lexical analysis. I have reading some books about compiler construction. At the part of tokenization, all the books I read talk about the regular expression first to recognize the tokens. For example, the regex below is to recognize the identifier:
([a-zA-Z] | _ | $)([a-zA-Z0-9] | _ | $)*
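(Aside, my own illustration and not from the original post: the identifier pattern above maps directly onto a two-state automaton, which is exactly the kind of machine a generated lexer executes.)

```python
def is_identifier(s):
    # Hand-coded DFA for the pattern ([a-zA-Z]|_|$)([a-zA-Z0-9]|_|$)*
    # Two states: 0 = nothing consumed yet, 1 = valid identifier so far.
    def first(c):
        return ('a' <= c <= 'z') or ('A' <= c <= 'Z') or c in '_$'
    def rest(c):
        return first(c) or ('0' <= c <= '9')
    state = 0
    for c in s:
        if state == 0:
            if not first(c):
                return False
            state = 1
        elif not rest(c):
            return False
    return state == 1

print(is_identifier('_foo42'))  # True
print(is_identifier('9lives'))  # False
```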
Then, they jumped to explain another technique which is finite state automata (FSA). As a result, some questions have come to my mind which are:
1- Which should I learn first, RE or FSA?
2- Programmatically, which should be converted to the other to build the lexer: RE ==> FSA or FSA ==> RE?
3- Since all tokens can be recognized by regular expressions, why do we need finite state automata?
Sorry for the many questions, but I really can't figure out how to start. Many thanks in advance.
Answer: A regular expression is a language used to describe a finite state automaton. It allows you to define the fsa without drawing nodes and edges all over the place. The two go hand in hand in that regard. | {
"domain": "cs.stackexchange",
"id": 10259,
"tags": "finite-automata, regular-expressions, compilers, lexical-analysis"
} |
Simple Rock, Paper, Scissors in Python | Question: I have looked at many different approaches to this game online, but as someone who isn't very experienced with Python, I may not be aware of what others are doing right.
import random
def RPS():
    print("You are playing Rock Paper Scisscors")
    comp_possible = 1,2,3
    score = [0,0]
    flag = 0
    while True:
        print("Enter your choice:")
        while True:
            choice = input('->')
            if choice == 'r' or choice == 'R' or choice == 'Rock' or choice == 'rock' or choice == '1':
                choice_identifier = 1
                break
            elif choice == 'S' or choice == 's' or choice == 'Scissors' or choice == 'sciccors' or choice == '2':
                choice_identifier = 2
                break
            elif choice == 'P' or choice == 'p' or choice == 'Paper' or choice == 'paper' or choice == '3':
                choice_identifier = 3
                break
            else:
                print('That\'s not an option in this game :)')
                print('Try again:')
                continue
        comp_choice = random.choice(comp_possible)
        if choice_identifier == comp_choice:
            print('It\'s a draw!')
            score[0] = score[0] + 1
            score[1] = score[1] + 1
        elif (choice_identifier == 1 and comp_choice == 2) or (choice_identifier == 2 and comp_choice == 3) or (choice_identifier == 3 and comp_choice == 1):
            print('You win!')
            score[0] = score[0] + 1
        else:
            print('You lose...')
            score[1] = score[1] + 1
        while True:
            answer = input('Play another round?')
            if answer == 'y' or answer == 'Y' or answer == 'yes' or answer == 'Yes' or answer == 'ye' or answer == 'Ye' or answer == 'sure' or answer == 'Sure':
                print(' Current score: You - ',score[0],' Computer - ', score[1])
                flag = 0
                break
            elif answer == 'n' or answer == 'N' or answer == 'no' or answer == 'No' or answer == 'nah' or answer == 'Nah':
                print('Thanks for playing! Final score: You - ',score[0],' Computer - ', score[1])
                flag = 1
                break
            else:
                print('Yay or nay...')
                continue
        if flag == 0:
            continue
        else:
            break

RPS()
What things in my code are, for example, horribly inefficient or are bad practices?
Answer: The game "Rock Paper Scissors" can be specified in terms of states.
Specification
The game is played by two players, playerA and playerB. Each player selects from among a set of three options {null, rock, paper, scissors}. null is used to represent the state before a player has chosen. Using an ordered pair (playerA_choice, playerB_choice) creates the possible game states:
(null, rock)
(null, paper)
(null, scissors)
(rock, null)
(rock, rock)
(rock, paper)
(rock, scissors)
(paper, null)
(paper, rock)
(paper, paper)
(paper, scissors)
(scissors, null)
(scissors, rock)
(scissors, paper)
(scissors, scissors)
There are three final states and three transition functions to them:
playerA_wins = (rock, scissors) | (paper, rock) | (scissors, paper)
playerB_wins = (rock, paper) | (paper, scissors) | (scissors, rock)
draw = (rock, rock) | (paper, paper) | (scissors, scissors)
The start state is:
(null, null)
The states:
(null, rock)
(null, paper)
(null, scissors)
(rock, null)
(paper, null)
(scissors, null)
are blocking and require further input from one of the players.
Implementing The Specification
This is a sketch of the game:
# Some Useful Names
null = "null"
rock = "rock"
paper = "paper"
scissors = "scissors"
# A Thesaurus (implemented as a dictionary)
synonyms = {"rock": rock,
            "paper": paper,
            "scissors": scissors,
            "stone": rock,
            "vellum": paper,
            "shears": scissors}
# Final States
game_is_draw = "Game is a Draw"
playerA_wins = "Player A Wins"
playerB_wins = "Player B Wins"
# Initial State
both_players_must_choose = "Both Players Must Choose"
# Transition States
playerA_must_choose = "Player A Must Choose"
playerB_must_choose = "Player B Must Choose"
# Transition Table (implemented as a dictionary)
transitions = {(null, null): both_players_must_choose,
               (null, rock): playerA_must_choose,
               (null, paper): playerA_must_choose,
               (null, scissors): playerA_must_choose,
               (rock, null): playerB_must_choose,
               (rock, rock): game_is_draw,
               (rock, paper): playerB_wins,
               (rock, scissors): playerA_wins,
               (paper, null): playerB_must_choose,
               (paper, rock): playerA_wins,
               (paper, paper): game_is_draw,
               (paper, scissors): playerB_wins,
               (scissors, null): playerB_must_choose,
               (scissors, rock): playerB_wins,
               (scissors, paper): playerA_wins,
               (scissors, scissors): game_is_draw}
# Simulate Initialization
playerA_choice = null
playerB_choice = null
# Simulate Players Choosing
playerA_choice = synonyms["stone"]
playerB_choice = synonyms["shears"]
# Main Logic
state = (playerA_choice, playerB_choice)
print(transitions[state])
Details of handling input and output should live at a higher layer of abstraction. It shouldn't matter to the game engine whether the game is between a human and a computer, two humans, or two computers. It shouldn't matter if it is being played on a laptop or over the internet.
Data Structures
A good rule of thumb is to replace complex logic with a data structure, and
if choice == 'r' or choice == 'R' or choice == 'Rock' or choice == 'rock' or choice == '1':
is the sort of code that is hard to understand and hard to maintain. Perhaps to the point where it is better to forgo the user friendly approach? Nay! A thesaurus is a good place to look for synonyms. Although Python lacks Thesauri, a dictionary will probably do. Now the game can be sold at Ye Local Renaissance Faire as Stone, Vellum, Shears!
Indeed, a dictionary is a good way to map each possible game state to the next state. The code does this in lieu of directly implementing the logic the specification uses to describe the final states.
The reason is maintainability. The mobile version of the game will offer an upgrade to Rock, Paper, Scissors, Spock, Lizard, currently in development. Once we get around to updating the dictionaries synonyms and transitions, the game will be done, profits will roll in, and we will never have to work again, unless sleeping on a stack of money counts as work.
But Really, Why All the Ceremony?
One of the real values that comes from using dictionaries is that a dictionary can be used to dispatch functions.
# Abbreviated Rock Paper Scissors
# Some Useful Names
rock = "rock"
paper = "paper"
def playerA_wins():
    print("Player A Wins")

def playerB_wins():
    print("Player B Wins")

transitions = {(paper, rock): playerA_wins,
               (rock, paper): playerB_wins}
transitions[(paper, rock)]()
transitions[(rock, paper)]()
Getting user input can be dispatched similarly by the dictionary. However, for the initial (null, null) state, threading might be justified so that playerA_choice() doesn't block playerB_choice() or vice versa... that's IO for 'ya.
Recommendations
Better variable names. Write about rocks and paper and scissors: rock = 1 makes the code more readable. Readability particularly helps the person writing the code.
Make the code more modular. Put all the initialization together. Put all the user interface someplace else. Separate out the string matching. Only put high level abstractions in the main loop.
Consider specifying the problem before writing code for a solution. At the core Rock, Paper, Scissors does not deal with synonyms. Writing a specification makes it clear that synonyms are a feature and encourages keeping their logic outside the main loop. This makes the code more readable. | {
"domain": "codereview.stackexchange",
"id": 12216,
"tags": "python, beginner, game, python-3.x, rock-paper-scissors"
} |
How to get param value in to arg in roslaunch? | Question:
I am using ros kinetic
I have ros parameter name "tal"
I want to get this param value into argument tag.
<arg name="tal_arg" value=??? />
I am using ros kinetic,
this post suggest to use eval, but I don't know how to do it.
Can you help?
Originally posted by Tal on ROS Answers with karma: 108 on 2018-12-06
Post score: 0
Original comments
Comment by pmuthu2s on 2018-12-06:
Did you try putting the value in double quotes?
Like this,
<arg name="foo" value="bar" />
as given in the link
Comment by PeteBlackerThe3rd on 2018-12-06:
I think the OP is asking how to put a dynamic parameter value into an argument not a literal value.
Comment by Tal on 2018-12-06:
I want to put the value of rosparm tal like $(param tal) into argument tal_arg.
Answer:
The solution you linked to involves making a simple python node which reads the desired parameter value and prints it to stdout. You can then use the value="{eval scriptname.py}" syntax to run that node and put the value it outputs into the argument.
Hope this helps
Originally posted by PeteBlackerThe3rd with karma: 9529 on 2018-12-06
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Tal on 2018-12-06:
Where can I put this python script?
Is there another way besides the ones in the link?
Comment by PeteBlackerThe3rd on 2018-12-06:
Ideally it should be inside a catkin package in the src directory. If you don't already have a workspace with a package setup then I recommend having a look at the tutorials to set one up and create a simple node.
Comment by just_a_normal_college_student on 2022-08-04:
Warning: The mentioned solution DOES NOT work! I tried to implement a python script as recommended, but the "eval" command is not able to interpret it. A more thorough explanation is given here: https://answers.ros.org/question/361411/running-a-python-node-using-eval-in-roslaunch/. As far as I know, there is currently no way to get params into args. An extension to add the "command" attribute to the arg tag is mentioned here: https://github.com/ros/ros_comm/issues/723, but it is not yet implemented as of today. | {
"domain": "robotics.stackexchange",
"id": 32133,
"tags": "ros, rosparam, roslauch, ros-kinetic"
} |
solidworks to urdf | Question:
Hi all,
What I want to do is convert a Solidworks assembly to a URDF file. I am using sw_urdf_exporter in indigo. The problem I have right now is that the coordinates of each link are not at the correct position. Since there is no way to specify the link coordinate in the property manager, I wonder whether there is a way to choose the coordinate for each link other than manually changing the coordinate values in the link properties after export.
Or should I be careful about something during configuration so that the coordinates of the links will be correct?
Thanks in advance for any help.
Originally posted by Oh233 on ROS Answers with karma: 55 on 2018-03-23
Post score: 0
Original comments
Comment by lagankapoor on 2018-03-24:
@Oh233 When you design, create a reference coordinate frame at the base of your robot and then choose that while configuring the base link; for the other links choose "Automatically configured". Then you can get the right solution.
Comment by Oh233 on 2018-03-24:
@lagankapoor I followed the way you suggested, but the problem is that all the joints it automatically detected are fixed. I wonder whether the coordinate system belongs to the joint during the configuration of each link. I want to manually define all coordinate systems. Thanks for your reply.
Comment by lagankapoor on 2018-03-27:
@Oh233 Sorry for the late reply. You then select the specific joint type, I mean which is revolute and which is prismatic. I did it this way and selected each link very carefully. If you have a screenshot, share it with me for more understanding.
Comment by Oh233 on 2018-03-27:
@lagankapoor Thank you so much for your reply. I think my problem has been solved. I first chose automatically generate for every link, and then went back to choose the coordinate I built for base link in configuration. It worked.
Comment by lagankapoor on 2018-03-28:
yeah So post Answer and mark it Answered
Answer:
Basically, the solution is to let it automatically generate the configuration after setting up all the mates, and then manually choose the coordinate system you set for base_link and leave the rest the same as before.
Originally posted by Oh233 with karma: 55 on 2018-04-09
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 30433,
"tags": "urdf, ros-indigo"
} |
Sort stack using two stacks | Question: My program is supposed to sort a stack such that the smallest items are on top. I may use a temporary stack, but I may not copy the elements into any other data structure. The program works overall, but I was wondering if there is anything I can do to make it more efficient/better.
import java.util.Stack;
public class SortStack {
Stack sorted;
public Stack sort(Stack unsorted) {
int temp2 = 0; // keeps track of the number on top of the sorted stack
while(!unsorted.isEmpty()) {
int temp1 = (int) unsorted.pop();
if(sorted == null) { // if sorted stack is empty, create it and push the first number onto it
sorted = new Stack();
sorted.push(temp1);
System.out.println(temp1 + " pushed from s1 to s2");
} else if(temp1 >= temp2) { // push onto sorted stack if what is popped from the original stack >= top of the sorted stack
sorted.push(temp1);
temp2 = temp1;
System.out.println(temp1 + " pushed from s1 to s2");
} else { // keep on popping from sorted stack to unsorted stack until unsortedTop < sortedTop
// this will make sure whatever is popped from original stack will not be less than the peek of the sorted stack
while(temp1 < temp2) {
int sortedPop = (int) sorted.pop();
unsorted.push(sortedPop);
System.out.println(sortedPop + " pushed from s2 to s1");
temp2 = (int) sorted.peek();
}
sorted.push(temp1); // after both stacks are balanced, push element to sorted stack
System.out.println(temp1 + " pushed from s1 to s2");
}
}
return sorted;
}
public static void main(String[] args) {
SortStack ss = new SortStack();
Stack unsorted = new Stack();
unsorted.push(7);
unsorted.push(10);
unsorted.push(5);
unsorted.push(12);
unsorted.push(8);
unsorted.push(3);
unsorted.push(1);
ss.sort(unsorted);
}
}
Answer: Generic type declaration
Your Stacks are missing the type parameter, i.e. in your case Stack<Integer>. This ensures they can only contain Integer objects.
static vs instance fields
sorted is the only instance field used in the only method of your SortStack class. This means it is likely better off as a method variable, and in turn your SortStack class becomes a utility class where sort(Stack<Integer> input) is now a static utility method.
Primitive unboxing
On a related note to the first point, you are implicitly relying on primitive unboxing, i.e. converting an Integer to int when you do this:
int temp1 = (int) unsorted.pop();
If a null is lurking in your unsorted object, this will fail.
Debugging and variable naming
If possible, use a logging framework like SLF4J so that you can configure when you want to see the debug statements, instead of simply printing via System.out.println(). Also:
" pushed from s1 to s2"
isn't all that meaningful: is s1 or s2 the sorted one? You may want to specify that. Same goes for temp1 and temp2; it's not possible to tell which holds the previous/current element from the unsorted or sorted stack.
Future considerations
If you don't want to restrict to sorting Stack<Integer> objects, you can generic-fy it as such:
public static <T> Stack<T> sort(Stack<T> input) {
// ...
}
However, you can't use your arithmetic-based comparison, so you can change that to:
public static <T extends Comparable<T>> Stack<T> sort(Stack<T> input) {
// ...
if (fromUnsorted.compareTo(fromSorted) >= 0) {
// ...
}
// ...
}
And if you will like to make it even more 'generic' and cater for non-Comparable types, you can specify a Comparator<T> to do so:
public static <T> Stack<T> sort(Stack<T> input, Comparator<T> comparator) {
// ...
if (comparator.compare(fromUnsorted, fromSorted) >= 0) {
// ...
}
// ...
} | {
"domain": "codereview.stackexchange",
"id": 20085,
"tags": "java, sorting, stack"
} |
Relativistic angular momentum confusing definition | Question: After reading Wikipedia, I'm confused by the relativistic angular momentum definition. OK for the 4-angular momentum tensor. But does it mean that the following more intuitive "angular momentum" will not be exactly conserved at high speeds in a reference frame at rest?
$${\bf M} = \sum_i r_i\times {\bf p_i},$$
where $\bf p_i$ is the relativistic momentum $\gamma m_iv_i$, for a system of masses $m_i$.
Answer: For Minkowski or Schwarzschild spacetimes, the quantity $$m\left(X^i\frac{dX^j}{d\tau} - X^j\frac{dX^i}{d\tau}\right)$$ is conserved for masses following geodesic trajectories. It results from the existence of certain Killing vectors.
In Minkowski spacetime, the geodesics are straight lines, and this reduces to the trivial fact that the relativistic angular momentum is just the distance to the line multiplied by the linear relativistic momentum (which is also conserved).
In Schwarzschild spacetime, it means that the conservation of angular momentum of classical elliptical orbits is an approximation to the conservation of the relativistic angular momentum. Here one assumes a single big mass $M$ and only one small orbiting mass $m$, where $M \gg m$. | {
"domain": "physics.stackexchange",
"id": 88203,
"tags": "special-relativity, angular-momentum"
} |
I don't understand why for resistors in AC circuits the phase angle between voltage and current is zero | Question: From the following graph I understand it is because they reach the voltage and the current reach their maximum values at the same time on the resistor, but I don't understand why this should be that way. What is the intuition behind this?
Answer: Why do you think the phase should be different? Which should lag or lead?
Ideal resistors have no lag. If you apply a voltage, the proportional, steady-state current appears immediately. As the voltage changes, the current changes alongside.
All real circuit elements have some non-zero inductance that would make the current deviate from this ideal. But for a small circuit with wires and resistors, the inductance and deviation is usually tiny enough to be ignored. | {
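A quick numerical way to see the zero phase shift (a sketch with made-up values, not part of the original answer): for an ideal resistor the current waveform is just the voltage waveform divided by R, so the two waveforms peak and cross zero at exactly the same instants.

```python
import math

# Ideal resistor: i(t) = v(t) / R, so the current is a scaled copy of
# the voltage waveform -- zero phase shift by construction.
R = 2.0                                        # ohms (made-up value)
v = lambda t: 5.0 * math.cos(2 * math.pi * t)  # volts, 1 Hz
i = lambda t: v(t) / R                         # amps

# Both waveforms are maximal at the same instants (t = 0, 1, 2, ...):
assert abs(v(0.0) - 5.0) < 1e-12 and abs(i(0.0) - 2.5) < 1e-12
# ...and cross zero at the same instants (t = 0.25, 0.75, ...):
assert abs(v(0.25)) < 1e-9 and abs(i(0.25)) < 1e-9
```

A capacitor or inductor, by contrast, relates current to the derivative or integral of the voltage, which is what introduces a nonzero phase angle.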
"domain": "physics.stackexchange",
"id": 84350,
"tags": "electric-circuits, electric-current, electrical-resistance, phase-diagram"
} |
ROS Answers SE migration: ROS fuerte sdf | Question:
Hi everybody,
I'm trying to switch between ROS electric and ROS fuerte. My problem is in launching the '.launch' file since the shell says that my model is deprecated (originally it was a '.xacro' file).
Does anybody know how to convert it into the new ROS format without rewriting my model from scratch?
Thank you,
Neostek
P.S. I also tried with the 'gzsdf' command following this guide
http://answers.ros.org/question/41766/gazebo-urdf-deprecated/
The real problem is that the model is missing some parts, and the warning message
Warning [parser.cc:348] Gazebo SDF has no gazebo element
Warning [parser.cc:291] DEPRECATED GAZEBO MODEL FILE
On July 1st, 2012, this formate will no longer by supported
continues to appear
Originally posted by Neostek on ROS Answers with karma: 156 on 2012-10-12
Post score: 0
Answer:
If the message you're seeing is a warning, check out this post
Otherwise, the way to convert from URDF to SDF is a 3 step process.
First, convert the xacro to an urdf using rosrun xacro xacro.py.
Then, from here,
rosrun gazebo urdf2model -f your_model.urdf -o old_gazebo_format.xml
roscd gazebo
source setup.bash
./gazebo/bin/gzsdf print old_gazebo_format.xml > your_model.sdf
Originally posted by phil0stine with karma: 682 on 2012-10-14
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 11340,
"tags": "ros, gazebo, sdf, xacro, ros-fuerte"
} |
Drawing transition diagram from transition table | Question: I have a DFA transition table like
\begin{array}{cc|c|c}
& & 0 & 1 \\ \hline
\to & p & qs & q \\
* & q & r & qr \\
* & qs & r & pqr \\
& r & s & p \\
* & s & t & p \\
& t & t & t \\
* & qr & rs & pqr \\
* & pqr & qrs & pqr \\
* & rs & s & p\\
* & qrs & rs & pqr
\end{array}
I am not able to draw the transition diagram as it's getting too complicated. Any help is appreciated.
EDIT: I drew the transition diagram but some of the lines were intersecting each other. Is there any way not to intersect those lines?
Answer: Try to use an online graph editor, like this one. In the settings, set it to have directed edges and custom labels, and type a triplet $(s_1,s_2,v)$ for an edge from $s_1$ to $s_2$ with $v$ written on the edge.
However, this won't allow you to create "accepting" states; when you draw this yourself, add them by hand... If you prefer a slightly worse-looking editor, but one that can also have accepting states, consider this automata drawer
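When hand-drawing produces too many crossing lines, another option (my suggestion, beyond the editors linked above) is to emit the table as Graphviz DOT text and let the `dot` layout engine minimize crossings; the starred (accepting) rows become double circles:

```python
# Sketch: emit the DFA transition table as Graphviz DOT text.
# Running "dot -Tpng dfa.dot -o dfa.png" would then lay it out.
table = {
    "p":   ("qs", "q"),    "q":   ("r", "qr"),     "qs": ("r", "pqr"),
    "r":   ("s", "p"),     "s":   ("t", "p"),      "t":  ("t", "t"),
    "qr":  ("rs", "pqr"),  "pqr": ("qrs", "pqr"),  "rs": ("s", "p"),
    "qrs": ("rs", "pqr"),
}
accepting = {"q", "qs", "s", "qr", "pqr", "rs", "qrs"}  # starred rows

def to_dot(table, accepting, start="p"):
    lines = ["digraph dfa {", "  rankdir=LR;"]
    for state in sorted(accepting):
        lines.append('  "%s" [shape=doublecircle];' % state)
    # A tiny invisible-ish node marks the start arrow into p.
    lines.append('  start [shape=point];')
    lines.append('  start -> "%s";' % start)
    for state, (on0, on1) in table.items():
        lines.append('  "%s" -> "%s" [label="0"];' % (state, on0))
        lines.append('  "%s" -> "%s" [label="1"];' % (state, on1))
    lines.append("}")
    return "\n".join(lines)

print(to_dot(table, accepting))
```

Automatic layout will not always avoid every crossing (this DFA may simply be non-planar), but it usually does far better than a first hand-drawn attempt.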
"domain": "cs.stackexchange",
"id": 18428,
"tags": "finite-automata, transition-systems"
} |
$\phi^4$ theory in higher dimensions | Question: For a scalar, the Lagrangian $$L = \partial_{\mu}\phi\partial^{\mu}\phi - m^2\phi^2 + \lambda \phi^4$$ seems to be particularly suited for 4 dimensional space-time. In four dimensions, the coupling constant, $\lambda$, ends up being dimensionless, and the theory is renormalizable. How does the interaction Lagrangian $\lambda \phi^4$ get modified for higher dimensions? Does it become $\lambda\phi^n$ in n-dimensions? Is the $\phi^4$ theory or $\phi^n$ theory renormalizable in higher dimensions?
Also, any literature reference would also be helpful.
Answer: Let $D$ be the spacetime dimension. The action is dimensionless for any $D$ (since the action has units of $\hbar$, and we set $\hbar=1$). Since $S = \int {\rm d}^D x \, \mathcal{L}$, and the volume element ${\rm d}^D x$ has mass dimension $-D$, this means the Lagrangian $\mathcal{L}$ has mass dimension $D$.
Assuming we have a weakly coupled scalar field theory, the scaling dimension of the field $\phi$ will be determined by the kinetic term, $\mathcal{L} \sim (\partial \phi)^2$ (if you like, in the free theory only the kinetic term and maybe mass term are there, so in the free theory these determine the scaling of the field, and then perturbative quantum corrections will only lead to small changes to the free theory mass dimension). Since derivatives have mass dimension $1$ in any dimension, and the Lagrangian has mass dimension $D$, in order for things to work, the field must have dimensions $(D-2)/2$. You can check that in $D=4$, this works out to say that the field should have dimension $1$, which is the case.
Then we can consider a general operator (term in the Lagrangian) of the form
\begin{equation}
\mathcal{L} \sim \lambda \partial^{n_d} \phi^{n_\phi}
\end{equation}
where $\lambda$ is a (possibly dimensionful) coupling constant; $n_d$ is the number of derivatives; and $n_\phi$ is the number of powers of $\phi$. Then the dimension of $\lambda$ is
\begin{equation}
\Delta_\lambda = D - n_d - n_\phi \frac{D-2}{2} = D + \left(1-\frac{D}{2}\right)n_\phi - n_d
\end{equation}
For $D=4, n_\phi=4, n_d=0$, this yields $\Delta_\lambda=0$, as you expect.
For an arbitrary $D$, with $n_\phi=4, n_d=0$, we have
\begin{equation}
\Delta_\lambda = 4 - D
\end{equation}
which is negative for all $D>4$; in other words, $\phi^4$ theory is power-counting non-renormalizable for all $D>4$.
Since we set up the formalism, we might as well look at a general interaction. Let's set $n_d=0$. Then the expression is
\begin{equation}
\Delta_\lambda = D + \left(1-\frac{D}{2}\right)n_\phi
\end{equation}
Then...
For $D=1$, this is always positive.
For $D=2$, the mass dimension is always $2$ for any operator. (2 dimensions is special in many ways).
For $D=3$, this simplifies to $\Delta_\lambda = 3 - \frac{n_\phi}{2}$, which is only nonnegative if $n_\phi \leq 6$. Therefore, the $\phi^6$ coupling is dimensionless in three dimensions. Higher powers of $\phi$ are nonrenormalizable.
For $D=4$, this becomes $\Delta_\lambda = 4 - n_\phi$, so only $\phi^3$ and $\phi^4$ theories are renormalizable.
For $D=5$, this becomes $\Delta_\lambda = 5 - \frac{3 n_\phi}{2}$, so only $\phi^3$ is renormalizable.
For $D=6$, we have $\Delta_\lambda = 6 - 2 n_\phi$, so $\phi^3$ is dimensionless (and therefore analogous to $\phi^4$ theory in $4$ dimensions). This is the theory that Srednicki bases the first third of his textbook on.
For $D \geq 7$, all terms with $n_\phi \geq 3$ have negative mass dimension, and so are not renormalizable.
Since $n_d$ contributes negatively to $\Delta_\lambda$, any interactions with derivatives can only possibly be renormalizable if the derivative-less version is. | {
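The coupling-dimension formula above is easy to tabulate mechanically; here is a small sketch (using exact rational arithmetic, so half-integer dimensions like the $D=5$ case come out exactly):

```python
from fractions import Fraction

# Mass dimension of the coupling for an operator with n_d derivatives
# and n_phi powers of phi in D spacetime dimensions:
#   Delta_lambda = D + (1 - D/2) * n_phi - n_d
def coupling_dimension(D, n_phi, n_d=0):
    return Fraction(D) + (1 - Fraction(D, 2)) * n_phi - n_d

# phi^4 in D=4 is classically marginal (dimensionless coupling):
print(coupling_dimension(4, 4))   # 0
# phi^3 in D=6 and phi^6 in D=3 are also marginal:
print(coupling_dimension(6, 3))   # 0
print(coupling_dimension(3, 6))   # 0
# phi^4 in D=5 has a negative-dimension coupling (non-renormalizable):
print(coupling_dimension(5, 4))   # -1
```

Scanning over $D$ and $n_\phi$ with this function reproduces the case list above: only finitely many $(D, n_\phi)$ pairs give $\Delta_\lambda \geq 0$.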
"domain": "physics.stackexchange",
"id": 81522,
"tags": "quantum-field-theory, lagrangian-formalism, renormalization, dimensional-analysis"
} |
Do women have testosterone? | Question: In a documentary on fitness I saw it was stated that women can't get big like men because of their low concentration of testosterone. If it is true that women have testosterone, where is it made? Why do some women, especially later in life, develop facial hair (though obviously not as much as men)? Do men also have "female" hormones in their body?
Answer: Yes, they do. The ovaries produce both testosterone and estrogen. Relatively small quantities of testosterone are released into your bloodstream by the ovaries and adrenal glands. Sex hormones are involved in the growth, maintenance, and repair of reproductive tissues [1].
The serum testosterone level in women with no acne, hirsutism, or menstrual dysfunction is 14.1 +/- 0.9 ng/dL (nanograms per decilitre) [2]. An average adult man has 270-1,070 ng/dL serum testosterone [3].
Men have female sex hormones too. For a prepubescent male, estrogen levels are expected to be between 1 and 3.7 ng/dL. During puberty, normal levels fall between 2.3 and 8.4 ng/dL. Levels for an adult male should be between 2.5 and 5 ng/dL [4].
References:
WebMD, LLC. Normal Testosterone and Estrogen Levels in Women.
Ayala C, Steinberger E, Smith KD, Rodriguez-Rigau LJ, Petak SM. Serum testosterone levels and reference ranges in reproductive-age women.
Alexia Severson. Testosterone Levels by Age. Healthline Networks, Inc.
wiseGEEK.com. What Are Normal Estrogen Levels in Men? (Measurements cited in picograms per millilitre, converted to ng/dL) | {
"domain": "biology.stackexchange",
"id": 2416,
"tags": "physiology, endocrinology"
} |
A variadic C function for concatenating multiple strings | Question: I have this function that takes a number $N$, and $N$ C strings, concatenates them, and returns the result:
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
char* mystrcat(int count, ...)
{
char** p;
char* result;
char* ptr;
size_t* len_array;
va_list ap;
int j;
size_t total_length;
if (count < 1)
{
return "";
}
va_start(ap, count);
p = malloc(sizeof(char*) * count);
len_array = calloc(count, sizeof(size_t));
total_length = 0;
for (j = 0; j != count; ++j)
{
p[j] = va_arg(ap, char*);
total_length += (len_array[j] = strlen(p[j]));
}
result = malloc(sizeof(char) * (total_length + 1));
ptr = result;
for (j = 0; j != count; ++j)
{
strcpy(ptr, p[j]);
ptr += len_array[j];
}
*ptr = '\0';
va_end(ap);
return result;
}
int main(int argc, char* argv[]) {
puts(mystrcat(5, "Hello", ", ", "world", ", ", "friends!"));
}
As always, please tell me anything that comes to mind.
Answer: Always return an allocated string, or never do it
This code here:
if (count < 1)
{
return "";
}
is bad because you return a static string whereas the normal case returns a string allocated by malloc. If the caller then tries to free this static string, the result is undefined behavior. You can fix this by either returning an allocated empty string:
if (count < 1)
{
return calloc(1, 1);
}
or by just returning NULL:
if (count < 1)
{
return NULL;
}
Memory leaks
Your function never frees p or len_array. Actually, I would probably not even use temporary arrays like these. If I were to write this function, I would measure the total length in one pass, and then copy the strings in a second pass.
Rewrite
After fixing the above issues, my rewrite of your function would look like this:
char *mystrcat(int count, ...)
{
va_list ap;
size_t len = 0;
if (count < 1)
return NULL;
// First, measure the total length required.
va_start(ap, count);
for (int i=0; i < count; i++) {
const char *s = va_arg(ap, char *);
len += strlen(s);
}
va_end(ap);
// Allocate return buffer.
char *ret = malloc(len + 1);
if (ret == NULL)
return NULL;
// Concatenate all the strings into the return buffer.
char *dst = ret;
va_start(ap, count);
for (int i=0; i < count; i++) {
const char *src = va_arg(ap, char *);
// This loop is a strcpy.
while (*dst++ = *src++);
dst--;
}
va_end(ap);
return ret;
} | {
"domain": "codereview.stackexchange",
"id": 22199,
"tags": "c, strings, variadic"
} |
Is spacetime simply connected? | Question: As I've stated in a prior question of mine, I am a mathematician with very little knowledge of Physics, and I ask here things I'm curious about/things that will help me learn.
This falls into the category of things I'm curious about. Have people considered whether spacetime is simply connected? Similarly, one can ask if it is contractible, what its Betti numbers are, its Euler characteristic, and so forth. What would be the physical significance of it being non-simply-connected?
Answer: I suppose there are many aspects to look at this from, anna v mentioned how Calabi-Yau manifolds in string theory (might?) have lots of holes, I'll approach the question from a purely General Relativity perspective as far as global topology.
Solutions of the Einstein equations themselves do not reveal anything about global topology except in very specific cases (most notably in 2+1 dimensions, i.e. two spatial and one temporal, where the theory becomes completely topological). A metric by itself doesn't necessarily place limits on the topology of a manifold.
Beyond this, there is one theorem of general relativity, called the Topological Censorship Hypothesis that essentially states that any topological deviation from simply connected will quickly collapse, resulting in a simply connected surface. This work assumes an asymptotically flat space-time, which is generally the accepted model (as shown by supernova redshift research and things of that nature).
Another aspect of this question is that the universe is usually considered homogeneous and isotropic in all directions; topological defects would mean this wasn't true. Although that really isn't a convincing answer per se...
"domain": "physics.stackexchange",
"id": 1254,
"tags": "general-relativity, spacetime, universe, differential-geometry, topology"
} |
Globally constant vector field in a curved spacetime | Question: Is it possible to define a globally constant vector field in a curved spacetime, that is a vector field for which the covariant derivative vanishes along every world line? The vector field $V^{\mu}=0$ would be one example, but are there other examples. Intuitively I would guess there are no other examples since going around loop on which a vector is parallel transported already leads to the fact that one ends up with a different vector than with which one started.
Answer: An important concept for reasoning about these things is holonomy, which describes how the tangent space at a given point transforms by parallel transport along a closed path starting and ending at this point. For an orientable Lorentzian manifold, the holonomy is an element of the Lorentz group $SO(1,3)$. All such elements, for all the paths, comprise the holonomy group of a given manifold. For a "generic" curved spacetime the holonomy group is the whole Lorentz group, but if the holonomy group is a proper subgroup we have a manifold of special holonomy. If the action of the holonomy group leaves a vector fixed, then there is a nontrivial parallel vector field, which is precisely the object OP is interested in. Of course, since the full Lorentz group does not have invariant subspaces, only special holonomy manifolds could admit a parallel vector field.
Example. Consider the manifold that is a direct product of curved 3D Riemannian space and a timelike factor $(\mathbb{R},-dt^2)$ . The metric for such a spacetime could be written as: $$ ds^2 = - dt^2 + g^{(3)}_{ij}(X)\,dX^idX^j.$$
It is easy to see that the parallel transport of a vector $A =\alpha\, \partial_t $ ($\alpha=\mathrm{const}$) would leave it unchanged. So $A$ is a parallel vector field on this static spacetime.
For Riemannian manifolds the de Rham decomposition theorem is a useful tool in classifying such parallel vector fields and the manifolds that admit them, but the situation is more complicated in the Lorentzian case. There we have a class of spacetimes with a parallel light-like vector field that has some importance in string/M theory. For a sampler, I suggest looking at this paper and following the references. | {
"domain": "physics.stackexchange",
"id": 60158,
"tags": "spacetime, differential-geometry, curvature, vector-fields, topology"
} |
Example of tree with > 6 vertices, tree would have depth = n after splay() deepest vertex | Question: How can one build a tree with more than 6 vertices such that, after splaying one of its deepest vertices, it has depth equal to the number of vertices? Is it possible?
UPD:
Example for n = 4:
insert 60
insert 10
insert 20
insert 50
splay(10)
You can use splay tree visualization
Answer: This is a good question whose answer can help us understand how splay tree tends to keep away from being imbalanced.
No, it is impossible for any rooted tree with more than 5 vertices to become a linear tree after splaying one of its deepest vertices, i.e. to have depth one less than the number of vertices.
Why?
For the sake of contradiction, let there be a rooted tree with more than 5 vertices which becomes a linear tree after splaying $x$, one of its deepest vertices. Consider the point of time just before the last splay step that moves $x$ to the root. There are three cases.
That last step is a zig step (the following graph) or its mirroring zag step. Since the resulting tree is linear, parts $A$ and $B$ are empty. That means the left subtree of the root before that step contained node $x$ only. Since $x$ is the deepest node, the whole tree has at most 3 nodes, which is not true.
That last step is a zig-zig step (the following graph) or its mirroring zag-zag. Since the resulting tree is linear, parts $A$, $B$ and $C$ are empty. That means the left subtree of the root before that step contained nodes $P$ and $x$ only. Part $D$ can have at most two nodes; otherwise, either $x$ was not the deepest node or the resulting tree is not linear. So the whole tree has at most 5 nodes, which is not true.
That last step is a zig-zag step (the following graph) or its mirroring zag-zig. However, after that step, the root will always have two children, which is not true.
Exercise. Draw a tree with 5 vertices that becomes a linear tree after splaying one of its deepest vertices. | {
"domain": "cs.stackexchange",
"id": 12884,
"tags": "data-structures, trees, splay-trees"
} |
What does multiplying two real-world values represent? | Question: I totally get what division means in the real world. "dollars / hour", well, that's the number of dollars you will make in one hour. "kilometers / gallon" is the distance you can go with a gallon of gas. Division means that given a certain amount of one thing, you'll get a certain amount of another thing.
I'm so good with division that you could give me a ratio I've never seen before and I can tell you what it means. "Burgers / McDonalds"? It's the average number of burgers a McDonalds will produce. "Dolphin / Miles"? Every mile you drive, you get this amount of dolphins.
Multiplication on the other hand makes no sense at all. What the heck is a foot-pound? And I don't really mean the definition. I mean, what does multiplication actually do for these two values? What does multiplying two values say about their relationship? What if I were to say kilometer-hours? Or dollar-kilograms? Or dolphin-miles? What would those things mean?
Bonus points go to explaining this in a very simple, clear manner.
Answer: One way to think of different dimensions multiplied together is as a weighting factor that helps transform your unit of measure into something else. I know that this doesn't sound that helpful, but think of the following.
A foot-pound is a unit of torque. It is a measure of 1 pound of force, applied 1 foot away from a pivot point. The distance is a weighting factor that transforms the force you apply into a torque. Larger/smaller distance transform your applied force into larger/smaller torques.
Similar things can be said for other units, such as Newton-meters, which transforms a Force (Newtons) into an energy by weighting it by the distance over which the force is applied. (I realize now that this is the same units as above, but for a different quantity, energy vs torque)
As for some of the stranger units you've listed above, for instance dolphin-miles, you could use this as a measure of how far a certain number of dolphins are from a certain point. Adding many dolphin-mile quantities together and dividing by the total number of dolphins gives you the average position of the dolphins.
You could also use this as a measure of the total distance traveled by a group of dolphins. If 10 dolphins each travel 10 miles, then you would have 100 dolphin-miles of travel. (the same goes for Frisbee's man-hours comment above, which is what reminded me to put this)
Admittedly, things do get weird, because the final unit has to be something that you can make sense of, but this is one way to think of it. | {
"domain": "physics.stackexchange",
"id": 27155,
"tags": "measurements"
} |
Can a computer count to infinity? | Question: So, could a computer count to infinity, assuming it was a supercomputer and had a near-unlimited amount of RAM and hard drive/solid state drive storage? I am being serious when I ask this.
[This is what I am asking]: Wouldn't infinity mean endless counting, and wouldn't that be infinity?
Answer: It depends on what you mean by "count to infinity". Specifically, how does the computer give output?
Consider the following questions:
Can a computer show, on its screen, all the numbers from 1 to infinity, increasing the number on screen by 1 every second?
Can a computer send, on the network line, a packet that contains a number, starting with 1 and increasing the number by 1 every second (splitting large numbers into multiple packets, assuming computation is very fast, and "every second" can be "every minute", etc., as necessary)?
then, the answer for the above is YES.
However,
Can a computer hold (in its memory/RAM/HD/whatever finite storage unit it connects to) a number, such that it starts with 1 and increases every x seconds?
then the answer is NO. At some point, the computer will run out of storage. This is because the storage is finite, and "counting to infinity" requires an unbounded amount of storage.
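The distinction can be phrased in code (a Python sketch of my own, for illustration): a generator can produce successive numbers indefinitely because only the current value is ever stored, while keeping every number counted so far needs unbounded storage.

```python
from itertools import count, islice

# count(1) yields 1, 2, 3, ... without end, but only the current value
# exists in memory at any moment -- this is the "output a number every
# second" sense in which counting can go on forever.
first_five = list(islice(count(1), 5))
print(first_five)  # [1, 2, 3, 4, 5]

# Storing *all* the numbers counted so far is the sense that fails:
# the commented loop below grows a list without bound and must
# eventually exhaust any finite storage.
# seen = []
# for n in count(1):
#     seen.append(n)
```

Even the first sense eventually hits a limit in practice, since the current value itself needs more and more digits, but that growth is only logarithmic in the count.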
"domain": "cs.stackexchange",
"id": 4867,
"tags": "counting, mathematical-programming"
} |
Nonzero charges medium | Question: In Maxwell's equations, it is always assumed that there is no free charge in the medium ($\rho=0$, whatever the medium: conductor, dielectric, ...).
1) In atoms, each electron is a (negative) charge carrier, so what exactly does "free charge" mean?
2) In which case we can have a medium (or Matter) with nonzero charges ($\rho\neq0$) ? Any example(s) ?
Answer: Actually, it is not always assumed that there are no free charges. It depends on the particular problem that one wants to study. When one wants to study free space propagation of light, for instance, then yes, we would assume that there are no free charges. The presence of free charges would mean that the light can interact with these charges and that would complicate things.
Now consider a piece of glass. It is made up of atoms each consisting of a charged nucleus surrounded by charged electrons. Yes, but these charges neutralize each other, so that on the scale of the wavelength of visible light propagating through this piece of glass, these charges would not have any effect. Therefore we say that a piece of glass does not have any free charges.
Usually, when it is stated that there are no free charges, this also includes currents. So when we say there are no free charges, the medium does not support currents; it is not a conductor. Otherwise, the electric field of the light would cause currents in the medium.
An example of a medium with free charges is a plasma. There one has free charges that can interact with electromagnetic waves (depending on its parameters). | {
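"Depending on its parameters" can be made concrete: a plasma reflects electromagnetic waves below its plasma frequency, which is set by the electron density. A quick sketch (the density is an assumed, roughly ionospheric value):

```python
import math

# Plasma frequency: omega_p = sqrt(n e^2 / (eps0 m_e))
e = 1.602e-19      # elementary charge (C)
eps0 = 8.854e-12   # vacuum permittivity (F/m)
m_e = 9.109e-31    # electron mass (kg)
n = 1e12           # assumed electron density (m^-3), roughly ionospheric

omega_p = math.sqrt(n * e**2 / (eps0 * m_e))
f_p = omega_p / (2 * math.pi)
print(f"plasma frequency ~ {f_p / 1e6:.1f} MHz")  # waves below this are reflected
```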
"domain": "physics.stackexchange",
"id": 34878,
"tags": "electromagnetism, electrostatics, electricity, charge, maxwell-equations"
} |
Can there be planets, stars and galaxies made of dark matter or antimatter? | Question: We know that the universe has more dark and anti matter as compared to normal matter. Can there be dark matter galaxies or antimatter galaxies?
Answer: Dark matter galaxies are possible but very speculative. On a theoretical level, they are hard to form because dark matter interacts only gravitationally (see Anders Sandberg's answer), which makes it hard to lose energy and become bound structures. On an observational level, they would be hard to detect. Gravitational lensing can do something, but since one cannot actually see the galaxy, it's also hard to say where the dark galaxy is -- if there is one at all.
Still, people have studied the idea, so it's not impossible.
Antimatter galaxies: At some level the idea that there are antimatter galaxies out there is appealing. For one, it could solve the baryon asymmetry problem at a stroke. It's also the case that an antimatter star would shine. From a long distance, it would also be practically indistinguishable from a "normal" star.
However, there are strong reasons to believe that there are no antimatter galaxies. That's because antimatter annihilates with normal matter, which leaves experimental signatures. If any part of the Earth were made of antimatter, it would immediately vanish in a flash, so we can be sure that the Earth is mostly matter. Similarly, if the Sun were made of antimatter, we would be quickly annihilated (thanks to the antimatter solar wind radiating from the anti-Sun), so we can be sure the Sun is also mostly matter. Similar arguments allow us to conclude that the Milky Way is almost entirely matter, the Local Group is almost entirely matter, etc, all the way up to the largest structures in the sky.
If antimatter galaxies exist, they are probably outside our observable universe, at which point some will argue it's no longer science. | {
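The "flash" can be quantified with E = mc² (a back-of-the-envelope sketch; the one-gram figure is an arbitrary example):

```python
# Annihilating a mass m of antimatter with an equal mass of matter
# releases E = 2 m c^2.
c = 2.998e8                  # speed of light (m/s)
m = 1e-3                     # one gram of antimatter (kg), arbitrary example
E = 2 * m * c**2             # energy released (J)
kilotons_tnt = E / 4.184e12  # 1 kiloton of TNT is about 4.184e12 J
print(f"{E:.2e} J, about {kilotons_tnt:.0f} kilotons of TNT")
```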
"domain": "astronomy.stackexchange",
"id": 5180,
"tags": "universe, dark-matter, antimatter"
} |
Defining simultaneity with a central light vs with clocks | Question: So there's the classic example of the relativity of simultaneity involving two people on a train, with a light source exactly between them. Moments after the lights turn on, observers on the train will say the light struck the passengers simultaneously, while folks on the ground looking into the train will see the light hit one of them first.
Now let's say the passengers start out next to each other, synchronize their clocks, and then slowly proceed to their respective ends of a 2 lightsecond long table.
At 2:59:59 PM according to their clocks, the light between them turns on. Now there are 4 events:
1) The light hits Passenger A (the one closer to the front of the train).
2) The light hits Passenger B.
3) Passenger A's clock ticks 3 PM.
4) Passenger B's clock ticks 3 PM.
Passengers on the train should all agree the four events are simultaneous, but what will people outside the train see? Will the clocks stay simultaneous with each other, or will 2 and 4 happen simultaneously, followed by 1 and 3? And regardless of the answer, can you justify it in terms of the fixed speed of light?
Answer: The simple answer to this is in two parts:
(1) In all reference frames, A's clock reads 3pm when the light hits passenger A and B's clock reads 3pm when the light hits passenger B.
(2) Only in the train's reference frame are the clocks synchronized. In any other uniformly moving reference frame, the clocks are not synchronized and, thus, the light hits one passenger first.
This is a profoundly important result from Special Relativity: the relativity of simultaneity.
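Both parts can be checked numerically with the Lorentz transformation (a sketch with c = 1, the light source at x = 0, passengers at x = ±1 light-second, and an assumed train speed of 0.6c): each light-hit and the corresponding 3 PM tick are co-located events, so they get identical ground-frame coordinates, while the two hits are no longer simultaneous.

```python
import math

def to_ground(t, x, v):
    """Transform train-frame coordinates (t, x) to the ground frame; c = 1."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t + v * x), g * (x + v * t)

v = 0.6  # assumed train speed as a fraction of c

# Train frame: both hits (and both 3 PM ticks) happen at t = 0,
# at x = +1 (passenger A, front) and x = -1 (passenger B, rear).
T_A, _ = to_ground(0.0, +1.0, v)   # same coordinates as A's 3 PM tick
T_B, _ = to_ground(0.0, -1.0, v)   # same coordinates as B's 3 PM tick
print(T_A, T_B)  # the rear hit (B) happens first in the ground frame
```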
I must address this comment below (at the time of this edit) from user12262:
-1 @Alfred Centauri: "Only in the trains' reference frame are the clocks synchronized. In any other uniformly moving reference frame,
the clocks are not synchronized" -- By Einstein's definition the
determination of whether two given clocks are synchronized, or not, is
solely a matter of those clocks and (by transitivity) of other
suitable members of "their frame"; not of any other participants
(members of "any other frame"). – user12262
From "It's About Time: Understanding Einstein's Relativity" on Google books: | {
"domain": "physics.stackexchange",
"id": 10016,
"tags": "special-relativity"
} |
Optimize an algorithm for preparing a dataset for machine learning | Question: I'm learning how to use R coming from a python background. I'm following Andrej Karpathy's zero-to-hero course, reimplementing it in R.
We start with a list of 32033 names. These names have to be broken into a format digestible by the network. For example, if the first name in the dataset is emma, we would represent it as such:
X | Y
________
... | e
..e | m
.em | m
emm | a
mma | .
... | etc
Each character is represented as a number so the data can later be stored as a tensor. I've written the following algorithm to do so:
XS <- vector("list",length(data)*block_size*2)
YS <- vector("numeric",length(data)+1)
stoi <- setNames(0:26,c('.',letters))
i = 1
for (item in data) {
context <- rep(0,block_size)
for (ch in strsplit(item,"")[[1]]) {
ch <- stoi[ch]
XS[[i]] <- context
YS[[i]] <- ch
context <- c(context[-1],ch)
i <- i + 1
}
}
return(list(XS,YS))
}
I would really appreciate any feedback on how to write more performant and idiomatic code in R that can accomplish this better than my implementation. Thank you.
Answer: Right now your code basically loops through words to process. For each word, it first uses strsplit to split it into characters, and then it loops through each character, maintaining a running vector of the last block_size characters it has encountered. It iteratively stores the running vectors into a list, which it eventually returns.
A few thoughts on this code:
Since each of your blocks is of the same size (as determined by your block_size variable), it would make more sense to me to actually build a matrix instead of a list. Probably you will find the matrix easier and faster to work with.
In R we strive to identify pre-implemented, vectorized code that does what we are working to accomplish. Usually this involves some amount of searching around to find the correct function. In your case, it turns out that there is a built-in function called embed that does basically what you're asking. For instance, here is the output for your "emma" example, with block size 3, where we separately compute the contributions to X and Y as defined in your question:
word <- "emma"
block_size <- 3
(wordLetters <- c(rep(".", block_size), strsplit(word, "")[[1]]))
# [1] "." "." "." "e" "m" "m" "a"
(Xpart <- embed(wordLetters, block_size)[,block_size:1])
# [,1] [,2] [,3]
# [1,] "." "." "."
# [2,] "." "." "e"
# [3,] "." "e" "m"
# [4,] "e" "m" "m"
# [5,] "m" "m" "a"
(Ypart <- c(strsplit(word, "")[[1]], "."))
# [1] "e" "m" "m" "a" "."
Once we have things working for one word, we can simply loop to combine them together across all the words we need to process. I'll also include the actual conversion to numbers (via stoi) as you defined in your code:
data <- c("emma", "hello")
block_size <- 3
stoi <- setNames(0:26,c('.',letters))
(XS <- do.call(rbind, lapply(data, function(word) {
wordLetters <- stoi[c(rep(".", block_size), strsplit(word, "")[[1]])]
embed(wordLetters, block_size)[,block_size:1]
})))
# [,1] [,2] [,3]
# [1,] 0 0 0
# [2,] 0 0 5
# [3,] 0 5 13
# [4,] 5 13 13
# [5,] 13 13 1
# [6,] 0 0 0
# [7,] 0 0 8
# [8,] 0 8 5
# [9,] 8 5 12
# [10,] 5 12 12
# [11,] 12 12 15
(YS <- unlist(lapply(data, function(word) {
unname(stoi[c(strsplit(word, "")[[1]], ".")])
})))
# [1] 5 13 13 1 0 8 5 12 12 15 0 | {
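Since the original course is in Python, here is a minimal Python re-sketch of the same windowing for comparison (illustrative only, not Karpathy's exact code):

```python
# '.' -> 0, 'a' -> 1, ..., 'z' -> 26, matching the stoi mapping above.
stoi = {ch: i for i, ch in enumerate('.abcdefghijklmnopqrstuvwxyz')}

def make_dataset(words, block_size=3):
    XS, YS = [], []
    for word in words:
        context = [0] * block_size
        for ch in word + '.':          # the trailing '.' is the end-of-word target
            XS.append(context)
            YS.append(stoi[ch])
            context = context[1:] + [stoi[ch]]
    return XS, YS

XS, YS = make_dataset(["emma"])
print(XS)  # [[0, 0, 0], [0, 0, 5], [0, 5, 13], [5, 13, 13], [13, 13, 1]]
print(YS)  # [5, 13, 13, 1, 0]
```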
"domain": "codereview.stackexchange",
"id": 44709,
"tags": "r, machine-learning, neural-network"
} |
Example of why IReadOnlyList is better than public List { get; private set; } | Question: Earlier today, I gave an answer to someone in which I recommended using IReadOnlyList<T>. Then I was asked why not just use a private setter, e.g. public IList<T> { get; private set; }? This was not an entirely unexpected question. I provided a small example as an update to my answer. However, my example really did not directly apply as a review of the OP's code. Thus, I thought I would post the example code here for its own review.
I am using C# and .NET Core 3.1.
First, there is a very simple User class.
namespace Read_Only_List_Example
{
public class User
{
// Intentionally a very simplified DTO class
public string Name { get; set; }
public bool IsAdmin { get; set; }
}
}
Secondly, there is some class that does something with a list of users.
using System.Collections.Generic;
using System.Linq;
namespace Read_Only_List_Example
{
public class SomeClassWithUsers
{
public SomeClassWithUsers(IEnumerable<User> users)
{
// This example requires independent copies of the user list.
UserList1 = users.ToList();
_users = users.ToList();
}
// SPOILER: just because we use a private setter does not mean this list is immune from external changes!
// Which is a way of saying that UserList1 is not entirely safe from the public.
public List<User> UserList1 { get; private set; }
// Here _users is private and safe from public eyes, as is UserList2.
private List<User> _users = new List<User>();
public IReadOnlyList<User> UserList2 => _users;
public static SomeClassWithUsers CreateSample()
{
// NOTE that none of the initial sample users are Admins or "evil" (yet).
var users = new List<User>()
{
new User() {Name = "Alice", IsAdmin = false },
new User() {Name = "Bob", IsAdmin = false },
new User() {Name = "Carl", IsAdmin = false },
new User() {Name = "Dan", IsAdmin = false },
new User() {Name = "Eve", IsAdmin = false },
};
return new SomeClassWithUsers(users);
}
}
}
And finally, we have Program.Main to give the simple example:
using System;
using System.Collections.Generic;
namespace Read_Only_List_Example
{
class Program
{
static void Main(string[] args)
{
var x = SomeClassWithUsers.CreateSample();
// Even though UserList1 has a private setter, I can still change individual members.
// Below, each user is made "evil" and granted full admin rights.
for (var i = 0; i < x.UserList1.Count; i++)
{
// Holy smokes! Someone can create an entirely new User.
x.UserList1[i] = new User() { Name = $"Evil {x.UserList1[i].Name}", IsAdmin = true };
}
Console.WriteLine("UserList1 - with a private setter - has been modified!");
DisplayUsers(x.UserList1);
// But I cannot alter UserList2 in any way since it is properly marked as a IReadOnlyList.
// You cannot compile the code below. See for yourself by uncommenting it.
//for (var i = 0; i < x.UserList2.Count; i++)
//{
// x.UserList2[i] = new User() { Name = $"Evil {x.UserList2[i].Name}", IsAdmin = true };
//}
Console.WriteLine("\nUserList2 - which is IReadOnlyList - remains unchanged.");
DisplayUsers(x.UserList2);
Console.WriteLine("\nPress ENTER key to close");
Console.ReadLine();
}
private static void DisplayUsers(IEnumerable<User> users)
{
foreach (var user in users)
{
Console.WriteLine($" {user.Name} {(user.IsAdmin ? "IS" : "is NOT")} an Admin.");
}
}
}
}
Here is an example of the console output:
UserList1 - with a private setter - has been modified!
Evil Alice IS an Admin.
Evil Bob IS an Admin.
Evil Carl IS an Admin.
Evil Dan IS an Admin.
Evil Eve IS an Admin.
UserList2 - which is IReadOnlyList - remains unchanged.
Alice is NOT an Admin.
Bob is NOT an Admin.
Carl is NOT an Admin.
Dan is NOT an Admin.
Eve is NOT an Admin.
Press ENTER key to close
There you go. I wanted to keep the example short and easy to follow, so things are kept to a minimum. I did try to employ DRY where possible. As for the one area where someone could say there is dead code in comments, I would point out that it is there intentionally as part of a learning exercise.
Here's what happens if you uncomment the code that tries to alter UserList2 from Main.
Answer: The conclusion misses the point
Your code technically does touch on what makes a list readonly, but the example you've used to display that behavior suggests a completely different problematic scenario, i.e. that of mutable objects. This by itself has nothing to do with lists, regardless of whether they're readonly or not.
So your example is not good. Not because the code doesn't work, but because it gets distracted by a completely unrelated problem, and the outcome you show is more related to that problem than it is to the readonly-ness of the collection.
UserList1 - with a private setter - has been modifed!
Evil Alice IS an Admin.
...
UserList2 - which is IReadOnlyList - remains unchanged.
Alice is NOT an Admin.
...
While technically you did change the list by creating new users and overwriting the old users, it's not really a good example. User is a mutable class, and in your example I would be perfectly capable of doing this:
for (var i = 0; i < x.UserList2.Count; i++)
{
x.UserList2[i].IsAdmin = true;
}
The mutability of your User class is a problem, but IReadOnlyList<T> does not protect you against that.
Had User been immutable, that's a different story. The combination of an immutable class contained in an IReadOnlyList<T> would guard against that.
But even then, you need to make sure that the object you expose as an IReadOnlyList<T> cannot be cast back to a mutable type, e.g:
IReadOnlyList<string> readOnlyList = new List<string>() { "a" };
(readOnlyList as List<string>).Add("b");
Console.WriteLine(String.Join(",", readOnlyList)); // prints "a,b"
So you really need many different components before you could validate your example as a valid example.
But this is supposed to be a simple example on the purpose of IReadOnlyList<T>, and you've really overcomplicated it with several unnecessary distractions.
So here's my attempt to provide a clear example of the difference:
My version of this answer
There's a difference between setting a list:
myObject.MyList = new List<string>();
and setting the members of a list:
myObject.MyList.Add("new value");
These are two different actions, each of which you can guard against, but in a different way.
Private setters guard against the list itself being set:
public class PublicSetListClass
{
public List<string> MyList { get; set; } = new List<string>() { "original" };
}
var myObject1 = new PublicSetListClass();
myObject1.MyList = new List<string>() { "new" }; // this is allowed
public class PrivateSetListClass
{
public List<string> MyList { get; private set; } = new List<string>() { "original" };
}
var myObject2 = new PrivateSetListClass();
myObject2.MyList = new List<string>() { "new" }; // this is NOT allowed!
But public setters do not guard against the list's content being altered:
myObject1.MyList.Add("added"); // this is allowed
myObject2.MyList.Add("added"); // this is ALSO allowed!
IReadOnlyList<T>, on the other hand, guards against the content of the list being altered:
// this is the same PublicSetListClass object from before
myObject1.MyList.Add("added"); // this is allowed
public class PublicSetReadOnlyListClass
{
public IReadOnlyList<string> MyList { get; set; } = new List<string>() { "original" };
}
var myObject3 = new PublicSetReadOnlyListClass();
myObject3.MyList.Add("added"); // this is NOT allowed
But IReadOnlyList<T> does not guard against the list itself being replaced!
myObject1.MyList = new List<string>() { "new" }; // this is allowed
myObject3.MyList = new List<string>() { "new" }; // this is ALSO allowed!
So if you want a list that cannot be replaced and whose content cannot be altered, you need to both use a private setter and use an IReadOnlyList<T> type (or any other readonly collection type):
public class PrivateSetReadOnlyListClass
{
public IReadOnlyList<string> MyList { get; private set; } = (new List<string>() { "original" }).AsReadOnly();
}
var myObject4 = new PrivateSetReadOnlyListClass();
myObject4.MyList = new List<string>() { "new" }; // this is NOT allowed
myObject4.MyList.Add("added"); // this is NOT allowed
Notice I also added the .AsReadOnly() cast to prevent consumers from casting this readonly list back to its mutable List<string> type. This would require the consumer to actively decide to recast it, but it should be guarded against when the consumer can be assumed to be malevolent.
To summarize, there are three different solutions at play here:
If you don't want the list to be overwritten, give it a private setter.
If you don't want the list's elements to be altered, make it a readonly list (or any other readonly collection type).
For further protection, ensure that the object you expose cannot be cast back to a writeable collection type
If you don't want the properties of the list elements themselves to be altered, then those elements' type must be immutable.
To make this list property, its elements, and its elements' properties truly immutable, you have to comply with all four of the bullet points.
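The same layering exists outside C#. A rough Python analogue of the "read-only container vs. mutable element" distinction (illustrative only; Python has no private setters, so only the last two bullet points map over):

```python
from dataclasses import dataclass, FrozenInstanceError

# A tuple is a read-only container, but it cannot protect mutable elements:
users = ({"name": "Alice", "is_admin": False},)
users[0]["is_admin"] = True          # allowed: the element itself is mutable

@dataclass(frozen=True)
class User:                          # an immutable element type
    name: str
    is_admin: bool

safe_users = (User("Alice", False),)
try:
    safe_users[0].is_admin = True    # blocked: frozen dataclass
except FrozenInstanceError:
    print("immutable element: assignment rejected")
```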
Comparing your answer to mine
This is obviously subjective, but I wanted to point out exactly what I changed about your approach:
In the beginning of the answer, I very quickly highlighted the two distinct behaviors we were comparing (setting a list vs setting the list's content) without elaborating. This helps readers give structure to the more verbose part of the answer that follows that introduction, which helps them understand that when they read the first behavior, they can already compare it to what the second behavior is going to be. This lowers the cognitive load as you've provided a thread to follow.
Compare this to your answer, where both the "first" and "second" parts don't actually address the concrete result. They are two preparatory sections (and not very small ones at that).
Additionally, by providing a terse summary of the content in the beginning, readers who already understand this problem (or even those who don't even know what a list is) can quickly decide that they don't need to read the whole thing. It's a nice-to-have, really.
The demo code is terse and to the point, directly using list.Add() and list = new ... and nothing else, to highlight the specific behaviors that we're addressing.
I broke up the demo code into small, independent pieces, each of which can be digested by themselves, as they all focus on one particular behavior. Each digestible snippet is max 3 lines long (class definition with one property, object initialization, using the object)
Comparatively, your code is formatted in a way that I need to read the whole thing before I can then understand the individual steps and why they are different - this requires a much bigger cognitive load. While I was able to follow it, keep in mind that your target audience is already learning about something that is new/foreign to them, so you want to reduce that cognitive load as much as possible.
I used string instead of User, since the specific type of your list elements doesn't actually matter when we're discussing list behavior by itself. The fact that your list types are generic doubly proves that point, though using a concrete class instead of a generic type parameter does lower the cognitive load somewhat. But if you use a complex type for that, you're actually increasing that cognitive load again.
In your example, there wasn't really a purpose to doing the same thing for all five elements of the array. So I stuck to a list with one element. This meant I could skip the for loops, which simplifies the example and again reduces the cognitive load. | {
"domain": "codereview.stackexchange",
"id": 38628,
"tags": "c#, .net-core"
} |
Dyson expansion for the density matrix | Question: I am following these notes and I am stuck on going from equation (37) to (38). In a nutshell, given
$$
\frac{d \tilde{\rho}(t)}{dt} =-i\alpha[\tilde{H}_I(t),\tilde{\rho}(t)], \quad (*)
$$
where $\tilde{A}$ is an operator in the interaction picture with $H_T=H_0+\alpha H_I$. This equation has the standard solution
$$
\tilde{\rho}(t) = \tilde{\rho}_0-i\alpha\int_0^tds [\tilde{H}_I(s),\tilde{\rho}(s)].
$$
We can iterate $(*)$ with the above definition to yield
$$
\frac{d \tilde{\rho}(t)}{dt} =-i\alpha[\tilde{H}_I(t),\tilde{\rho}_0]-\alpha^2 \int_0^tds [\tilde{H}_I(t),[\tilde{H}_I(s),\tilde{\rho}(s)]].
$$
One wishes to eliminate the dependence of $\rho$ on all previous times, so we take advantage of the fact that $\alpha$ is assumed small to iterate infinitely, obtaining
$$
\frac{d \tilde{\rho}(t)}{dt} =-i\alpha[\tilde{H}_I(t),\tilde{\rho}_0]-\alpha^2 \int_0^tds [\tilde{H}_I(t),[\tilde{H}_I(s),{\tilde{\rho}(t)}]] +\mathcal{O}(\alpha^3), \quad (**)
$$
where now the integrand no longer involves $\rho$ at earlier times.
That is what I don’t understand, I expected $(**)$ to be instead
$$
\frac{d \tilde{\rho}(t)}{dt} =-i\alpha[\tilde{H}_I(t),\tilde{\rho}_0]-\alpha^2 \int_0^tds [\tilde{H}_I(t),[\tilde{H}_I(s),{\tilde{\rho}_0}]] +\mathcal{O}(\alpha^3),
$$
why isn’t this the case?
Answer: According to "The Theory of Open Quantum Systems" by Breuer and Petruccione:
[…]
$$
\frac{d \tilde{\rho}(t)}{dt} =-i[\tilde{H}_I(t),\tilde{\rho}_0]-\int_0^tds [\tilde{H}_I(t),[\tilde{H}_I(s),\tilde{\rho}(s)]].
$$
In order to simplify the above equation further we perform the Markovian approximation, in which the integrand $\tilde{\rho}(s)$ is replaced by $\tilde{\rho}(t)$. In this way we obtain an equation of motion in which the time development of the state of the system at time $t$ only depends on the present state $\tilde{\rho}(t)$,
$$
\frac{d \tilde{\rho}(t)}{dt} =-i[\tilde{H}_I(t),\tilde{\rho}_0]-\int_0^tds [\tilde{H}_I(t),[\tilde{H}_I(s),\tilde{\rho}(t)]].
$$
This equation is called the Redfield equation […]
So it is just a Markovian approximation. | {
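To see the size of the replacement error (a sketch): from $(*)$, $\tilde{\rho}$ changes only at order $\alpha$, so inside the term that already carries $\alpha^2$,

```latex
\tilde{\rho}(s) = \tilde{\rho}(t) + \mathcal{O}(\alpha)
\;\Longrightarrow\;
\alpha^2 \int_0^t ds\, [\tilde{H}_I(t),[\tilde{H}_I(s),\tilde{\rho}(s)]]
= \alpha^2 \int_0^t ds\, [\tilde{H}_I(t),[\tilde{H}_I(s),\tilde{\rho}(t)]]
+ \mathcal{O}(\alpha^3).
```

By the same counting, substituting $\tilde{\rho}_0$ would also be accurate to $\mathcal{O}(\alpha^3)$; the point of choosing $\tilde{\rho}(t)$ is that it yields a time-local equation in the current state, which is exactly the Markovian approximation.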
"domain": "physics.stackexchange",
"id": 76598,
"tags": "quantum-mechanics, quantum-field-theory, condensed-matter, density-operator, open-quantum-systems"
} |
Proving Irregularity of $L = \{ a^mb^nb^n \mid nm \ge 3 \} $ | Question: I'm trying to prove the irregularity of the following language:
$$L = \{ a^mb^nb^n \mid nm \ge 3 \} $$
I tried to demonstrate that it doesn't satisfy the Pumping Lemma, but every word I tried seems to be pumpable.
Any hints or suggestions?
Answer: It is in fact regular: $L = a^*(bb)^* \setminus (a^*(bb)^* \setminus L)$, and $a^*(bb)^* \setminus L$ is regular:
$a^*(bb)^*\setminus L = a^* \cup (bb)^* \cup \{a^mb^{2n}|1\leq mn < 3\}$
The last language of the union is finite thus regular. | {
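The decomposition is easy to sanity-check by brute force over the exponents (a quick Python sketch):

```python
# Words in a^* (bb)^* have the form a^m b^(2n); they lie in the complement
# of L exactly when m*n < 3, which should match the union given above.
def in_complement(m, n):
    return m * n < 3

def in_union(m, n):
    return n == 0 or m == 0 or 1 <= m * n < 3

assert all(in_complement(m, n) == in_union(m, n)
           for m in range(50) for n in range(50))
print("decomposition verified for all m, n < 50")
```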
"domain": "cs.stackexchange",
"id": 18296,
"tags": "formal-languages, regular-languages, pumping-lemma"
} |
What is "power per unit frequency" in black body radiation? | Question: What is the meaning of power radiated by a black body per unit frequency?
If you have a black body with a frequency filter around it set to 530 nm and you calculate the energy radiated in 1 second, you will get a definite (not a differential) value. So where does "per unit frequency" come from? What is its physical meaning?
Edit:
There is a physical anomaly in my argument: suppose at 530 nm we get 5 watts of power, so at 530.01 nm the power will be approximately the same 5 watts, and similarly for 530.001 nm.
But I still don't get the flaw in my argument (even though I get that the result is wrong). What is it that I am missing?
Answer:
What is its physical meaning?
It means that, to find the power within a given bandwidth, one integrates the power spectral density (PSD) over that bandwidth.
Assuming the PSD is essentially flat over a 1 Hz bandwidth, the power at the output of an ideal bandpass filter, centered at 530 nm and with 1 Hz bandwidth, would be the value of the PSD at 530 nm multiplied by 1 Hz.
If the bandwidth were 1 mHz, the power would be 1000 times less. For an arbitrarily small bandwidth, the power would be arbitrarily small, i.e., the power at a specific frequency (wavelength) is infinitesimal.
Keep in mind that no physical bandpass filter has infinitesimal bandwidth.
suppose, at 530 nm we get 5 watt energy
As I wrote above, the power at a specific frequency (wavelength) is infinitesimal so you won't find a finite amount of power at 530 nm unless there is a delta function in the PSD there (for example, due to the output of an ideal 530 nm laser source). But the ideal blackbody spectrum is continuous. | {
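The bandwidth scaling can be made concrete with Planck's law for spectral radiance per unit frequency (a sketch; 5800 K is an assumed temperature):

```python
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

def planck_nu(nu, T):
    """Spectral radiance per unit frequency, W sr^-1 m^-2 Hz^-1."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

nu = c / 530e-9   # frequency corresponding to 530 nm
T = 5800.0        # assumed blackbody temperature (K)

# Over a narrow band the PSD is essentially flat, so power scales with bandwidth:
p_1hz = planck_nu(nu, T) * 1.0     # 1 Hz filter
p_1mhz = planck_nu(nu, T) * 1e-3   # 1 mHz filter: 1000x less power
print(p_1hz / p_1mhz)
```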
"domain": "physics.stackexchange",
"id": 60497,
"tags": "quantum-mechanics, classical-mechanics, classical-electrodynamics, thermal-radiation"
} |
In ROSJava,how can i transform the 'odom' frame to 'map' frame? | Question:
I want to transform the 'odom' frame to the 'map' frame, but I do not know how to do it. In RosJava I saw a class "Transform".
package org.ros.rosjava_geometry;
import geometry_msgs.Pose;
import geometry_msgs.PoseStamped;
import org.ros.message.Time;
import org.ros.namespace.GraphName;
import org.ros.rosjava_geometry.Quaternion;
import org.ros.rosjava_geometry.Vector3;
public class Transform {
private Vector3 translation;
private Quaternion rotationAndScale;
public static Transform fromTransformMessage(geometry_msgs.Transform message) {
return new Transform(Vector3.fromVector3Message(message.getTranslation()), Quaternion.fromQuaternionMessage(message.getRotation()));
}
public static Transform fromPoseMessage(Pose message) {
return new Transform(Vector3.fromPointMessage(message.getPosition()), Quaternion.fromQuaternionMessage(message.getOrientation()));
}
public static Transform identity() {
return new Transform(Vector3.zero(), Quaternion.identity());
}
public static Transform xRotation(double angle) {
return new Transform(Vector3.zero(), Quaternion.fromAxisAngle(Vector3.xAxis(), angle));
}
public static Transform yRotation(double angle) {
return new Transform(Vector3.zero(), Quaternion.fromAxisAngle(Vector3.yAxis(), angle));
}
public static Transform zRotation(double angle) {
return new Transform(Vector3.zero(), Quaternion.fromAxisAngle(Vector3.zAxis(), angle));
}
public static Transform translation(double x, double y, double z) {
return new Transform(new Vector3(x, y, z), Quaternion.identity());
}
public static Transform translation(Vector3 vector) {
return new Transform(vector, Quaternion.identity());
}
public Transform(Vector3 translation, Quaternion rotation) {
this.translation = translation;
this.rotationAndScale = rotation;
}
public Transform multiply(Transform other) {
return new Transform(this.apply(other.translation), this.apply(other.rotationAndScale));
}
public Transform invert() {
Quaternion inverseRotationAndScale = this.rotationAndScale.invert();
return new Transform(inverseRotationAndScale.rotateAndScaleVector(this.translation.invert()), inverseRotationAndScale);
}
public Vector3 apply(Vector3 vector) {
return this.rotationAndScale.rotateAndScaleVector(vector).add(this.translation);
}
public Quaternion apply(Quaternion quaternion) {
return this.rotationAndScale.multiply(quaternion);
}
public Transform scale(double factor) {
return new Transform(this.translation, this.rotationAndScale.scale(Math.sqrt(factor)));
}
public double getScale() {
return this.rotationAndScale.getMagnitudeSquared();
}
public double[] toMatrix() {
double x = this.rotationAndScale.getX();
double y = this.rotationAndScale.getY();
double z = this.rotationAndScale.getZ();
double w = this.rotationAndScale.getW();
double mm = this.rotationAndScale.getMagnitudeSquared();
return new double[]{mm - 2.0D * y * y - 2.0D * z * z, 2.0D * x * y + 2.0D * z * w, 2.0D * x * z - 2.0D * y * w, 0.0D, 2.0D * x * y - 2.0D * z * w, mm - 2.0D * x * x - 2.0D * z * z, 2.0D * y * z + 2.0D * x * w, 0.0D, 2.0D * x * z + 2.0D * y * w, 2.0D * y * z - 2.0D * x * w, mm - 2.0D * x * x - 2.0D * y * y, 0.0D, this.translation.getX(), this.translation.getY(), this.translation.getZ(), 1.0D};
}
public geometry_msgs.Transform toTransformMessage(geometry_msgs.Transform result) {
result.setTranslation(this.translation.toVector3Message(result.getTranslation()));
result.setRotation(this.rotationAndScale.toQuaternionMessage(result.getRotation()));
return result;
}
public Pose toPoseMessage(Pose result) {
result.setPosition(this.translation.toPointMessage(result.getPosition()));
result.setOrientation(this.rotationAndScale.toQuaternionMessage(result.getOrientation()));
return result;
}
public PoseStamped toPoseStampedMessage(GraphName frame, Time stamp, PoseStamped result) {
result.getHeader().setFrameId(frame.toString());
result.getHeader().setStamp(stamp);
result.setPose(this.toPoseMessage(result.getPose()));
return result;
}
public boolean almostEquals(Transform other, double epsilon) {
return this.translation.almostEquals(other.translation, epsilon) && this.rotationAndScale.almostEquals(other.rotationAndScale, epsilon);
}
public Vector3 getTranslation() {
return this.translation;
}
public Quaternion getRotationAndScale() {
return this.rotationAndScale;
}
public String toString() {
return String.format("Transform", new Object[]{this.translation, this.rotationAndScale});
}
public int hashCode() {
boolean prime = true;
byte result = 1;
int result1 = 31 * result + (this.rotationAndScale == null?0:this.rotationAndScale.hashCode());
result1 = 31 * result1 + (this.translation == null?0:this.translation.hashCode());
return result1;
}
public boolean equals(Object obj) {
if(this == obj) {
return true;
} else if(obj == null) {
return false;
} else if(this.getClass() != obj.getClass()) {
return false;
} else {
Transform other = (Transform)obj;
if(this.rotationAndScale == null) {
if(other.rotationAndScale != null) {
return false;
}
} else if(!this.rotationAndScale.equals(other.rotationAndScale)) {
return false;
}
if(this.translation == null) {
if(other.translation != null) {
return false;
}
} else if(!this.translation.equals(other.translation)) {
return false;
}
return true;
}
}
}
Originally posted by Tony10012 on ROS Answers with karma: 56 on 2016-12-18
Post score: 0
Answer:
I use this method; it may not be correct.
FrameTransform frameTransform = view.getFrameTransformTree().transform(GraphName.of("odom"), GraphName.of("map"));
See it
link text
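The geometry this Transform class implements, apply(v) = R·v + t with multiply as composition, can be sketched in 2D, which avoids quaternions (frame names and numbers below are made-up examples):

```python
import math

class Transform2D:
    """A 2D rotation-plus-translation, mirroring Transform's apply/multiply."""
    def __init__(self, tx, ty, theta):
        self.t = (tx, ty)
        self.theta = theta

    def apply(self, v):
        c, s = math.cos(self.theta), math.sin(self.theta)
        x, y = v
        return (c * x - s * y + self.t[0], s * x + c * y + self.t[1])

    def multiply(self, other):
        # Compose so that result.apply(v) == self.apply(other.apply(v)).
        nx, ny = self.apply(other.t)
        return Transform2D(nx, ny, self.theta + other.theta)

map_T_odom = Transform2D(1.0, 2.0, math.pi / 2)  # assumed pose of odom in map
odom_T_robot = Transform2D(0.5, 0.0, 0.0)        # assumed pose of robot in odom
map_T_robot = map_T_odom.multiply(odom_T_robot)
print(map_T_robot.apply((0.0, 0.0)))             # robot origin in the map frame
```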
Originally posted by Tony10012 with karma: 56 on 2016-12-20
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 26518,
"tags": "rosjava, android, transform"
} |
Safe Integer Library in C++ | Question: I've designed a single-file safe integer library in C++. It detects integer overflows and underflows before the undefined behavior occurs and throws the respective exceptions. I intend this to be portable, to not rely on undefined behavior, and to rely as little on implementation-defined behavior as possible.
safe_integer.hpp:
/*
* Copyright © 2020 James Larrowe
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <https://www.gnu.org/licenses/>.
*/
#ifndef SAFE_INTEGER_HPP
# define SAFE_INTEGER_HPP 1
#include <limits>
#include <stdexcept>
#include <type_traits>
template<typename I,
typename std::enable_if<std::is_integral<I>::value, bool>::type = true>
class safe_int
{
I val;
public:
typedef I value_type;
static constexpr I max = std::numeric_limits<I>::max();
static constexpr I min = std::numeric_limits<I>::min();
safe_int(I i) : val { i } { };
operator I() const { return val; }
I &operator=(I v) { val = v; return val; }
safe_int operator+()
{
return *this;
}
safe_int operator-()
{
if(val < -max)
throw std::overflow_error("");
return safe_int(-val);
}
safe_int &operator++()
{
if(val == max)
throw std::overflow_error("");
++val;
return *this;
}
safe_int &operator--()
{
if(val == min)
throw std::underflow_error("");
--val;
return *this;
}
safe_int operator++(int)
{
if(val == max)
throw std::overflow_error("");
return safe_int(val++);
}
safe_int operator--(int)
{
if(val == min)
throw std::underflow_error("");
return safe_int(val--);
}
safe_int &operator+=(I rhs)
{
if( val > 0 && rhs > max - val )
throw std::overflow_error("");
else if( val < 0 && rhs < min - val )
throw std::underflow_error("");
val += rhs;
return *this;
}
safe_int &operator-=(I rhs)
{
if( val >= 0 && rhs < -max )
throw std::overflow_error("");
if( val < 0 && rhs > max + val )
throw std::overflow_error("");
else if( val > 0 && rhs < min + val )
throw std::underflow_error("");
val -= rhs;
return *this;
}
safe_int &operator*=(I rhs)
{
if(val > 0)
{
if(rhs > max / val)
throw std::overflow_error("");
}
else if(val < 0)
{
if(val == -1)
{
if(rhs < -max)
throw std::overflow_error("");
goto no_overflow;
}
if(rhs > min / val)
throw std::underflow_error("");
}
no_overflow:
val *= rhs;
return *this;
}
safe_int &operator/=(I rhs)
{
if( rhs == -1 && val < -max )
throw std::underflow_error("");
else if(rhs == 0)
throw std::domain_error("");
val /= rhs;
return *this;
}
safe_int &operator%=(I rhs)
{
if( rhs == -1 && val < -max )
throw std::underflow_error("");
else if(rhs == 0)
throw std::domain_error("");
val %= rhs;
return *this;
}
safe_int operator+(I rhs)
{
return safe_int(val) += rhs;
}
safe_int operator-(I rhs)
{
return safe_int(val) -= rhs;
}
safe_int operator*(I rhs)
{
return safe_int(val) *= rhs;
}
safe_int operator/(I rhs)
{
return safe_int(val) /= rhs;
}
safe_int operator%(I rhs)
{
return safe_int(val) %= rhs;
}
safe_int &operator+=(safe_int rhs)
{
return *this += static_cast<I>(rhs);
}
safe_int &operator-=(safe_int rhs)
{
return *this -= static_cast<I>(rhs);
}
safe_int &operator*=(safe_int rhs)
{
return *this *= static_cast<I>(rhs);
}
safe_int &operator/=(safe_int rhs)
{
return *this /= static_cast<I>(rhs);
}
safe_int &operator%=(safe_int rhs)
{
return *this %= static_cast<I>(rhs);
}
safe_int operator+(safe_int rhs)
{
return safe_int(val) += static_cast<I>(rhs);
}
safe_int operator-(safe_int rhs)
{
return safe_int(val) -= static_cast<I>(rhs);
}
safe_int operator*(safe_int rhs)
{
return safe_int(val) *= static_cast<I>(rhs);
}
safe_int operator/(safe_int rhs)
{
return safe_int(val) /= static_cast<I>(rhs);
}
safe_int operator%(safe_int rhs)
{
return safe_int(val) %= static_cast<I>(rhs);
}
};
#endif
This should work on non-two's complement systems and with any integer type.
Here's a little example:
#include "safe_integer.hpp"
int main(void)
{
safe_int<int> i = 0;
i -= -0x80000000;
return 0;
}
Output:
terminate called after throwing an instance of 'std::overflow_error'
what():
Aborted
What I'm particularly interested in:
Are there any corner cases I've missed?
Is there any undefined behavior (probably not)?
Is there any way I can simplify all of the (somewhat redundant) operator overloads?
What I'm not interested in:
efficiency. I agree that my solution may not perform well, but my personal opinion is that leaving the checks in unconditionally, and not using undefined behavior to get a faster result, is worth the cost.
Answer: You can get rid of the goto in operator*= by adding an else to the if (val == -1) statement.
The % operator cannot underflow, as the result always has a magnitude less than the rhs value. So you don't need your underflow check (which is incorrect anyways, as it would throw rather than return a 0).
An "underflow" represents a number that is too small to represent, and is typically applied to floating point types. A calculation that gives a number that is negative and too large to store in the result (i.e., is less than min) is still an overflow, as the result has overflowed the storage space available. So all those places that you throw an underflow_error should be overflow_error (unless you're changing the usage of underflow to represent too large of a negative value).
How does the code behave if I instantiate a safe_int<unsigned>? The evaluation of -max in that case will not give the correct result, and possibly cause a compiler warning (for negation of an unsigned value). | {
"domain": "codereview.stackexchange",
"id": 43427,
"tags": "c++, c++11, integer"
} |
how to enable SIFT_GPU, GICP_BIN, and GICP_CODE | Question:
Forgive my ignorance if this is a silly question. I set these to =1 in CMakeLists.txt and ran
rosmake --rosdep-install rgbdslam and get the following error:
CMake Error at CMakeLists.txt:69 (FILE):
file COPY cannot find
"/home/lab/ros_workspace/rgbdslam/external/siftgpu/linux/bin/libsiftgpu.so".
Am I forgetting a simple step...? CUDA is up and running.
Originally posted by spitzbubchen on ROS Answers with karma: 1 on 2011-08-30
Post score: 0
Original comments
Comment by spitzbubchen on 2011-08-31:
Just wondering if there is a quick way to enable/install siftgpu and gicp for rgbdslam...? These do not seem to be ros packages. Or do I have to hunt them and their dependencies down on the internet...? If install instructions were added to the rgbdslam ros wiki that would be great.
Answer:
Siftgpu should be built automatically from the sources included in the rgbdslam package. If it doesn't, I am not sure why (it works for me). You can try and call "make" manually in /home/lab/ros_workspace/rgbdslam/external/siftgpu/linux
Originally posted by Felix Endres with karma: 6468 on 2011-08-31
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 6568,
"tags": "slam, navigation"
} |
Is temperature equilibrium actually reached in a mixer tap, or is it merely a jumbling of hot/cold droplets? | Question: Most kitchen and bathroom sink faucets have a mixer tap which blends the hot and cold inflows into a warm, uniform outflow stream. It's difficult to believe that complete heat transfer and temperature equilibrium is reached within the fraction of a second in which the hot/cold water flows through the mixer mechanism.
It makes me wonder if the mixer is doing nothing more than breaking the inflows into droplets and jumbling those droplets together, which yields the sensation (illusion) of a warm, uniform outflow.
Does the difference matter? In the extreme, it would. We can imagine some sort of super mixer tap that can take in an ultrahigh temperature inflow (like mafic lava) and a very low temperature inflow (like liquid helium) and blend them together. If complete heat transfer and equilibrium is actually reached within the mixer, then, with the right proportion of inflows, the warm outflow should be safe to touch. However, if the outflow is just a jumble of discrete hot/cold droplets, then it would be very dangerous to touch. I imagine the still-hot and still-cold droplets would cause both burning and freezing damage to the skin.
Answer: When each of the fluids is broken down into small parcels in intimate contact, the surface to volume ratio of each parcel becomes large, and the conductive heat transfer between parcels becomes enormously enhanced. The time for a parcel temperature to equilibrate with that of its neighbors is on the order of $t=D^2/\alpha$, where D is the nominal parcel diameter and $\alpha$ is the thermal diffusivity of the liquid. The thermal diffusivity of water is about 0.0015 cm^2/sec. For a 0.01 cm diameter parcel, what do you calculate for the equilibration time? | {
"domain": "physics.stackexchange",
"id": 53216,
"tags": "thermodynamics, fluid-dynamics, temperature, equilibrium"
} |
Converting from spherical to Cartesian in cosmology | Question: So I want to find the distance between two different objects at different redshifts. My plan so far is to calculate the line-of-sight (LOS) comoving distance using the formula $$D_C=D_H\int_0^z \ \frac{dx}{E(x)} $$ to find the distance from the observer to each object, giving two different LOS comoving values. Now this is where I get stuck: do the right ascension (RA) and declination (DEC) correspond to the azimuthal and polar angles respectively, with the LOS comoving distance as the radial component? Then from this can I simply convert from spherical to Cartesian and use basic geometry to find the distance between the points?
Answer: First object Right Ascension, Declination and Redshift: $\alpha_1, \ \delta_1, \ z_1$
Second object Right Ascension, Declination and Redshift: $\alpha_2, \ \delta_2, \ z_2$
Applying spherical trigonometry, the cosine of the angle $\theta$ between both objects is:
$$\cos \theta = \sin \delta_1 \sin \delta_2+\cos \delta_1 \cos \delta_2 \cos(\alpha_2-\alpha_1)$$
For a Flat Universe $\Omega_{K_0}=0$. For $z_i < 100$ we can neglect $\Omega_{R_0}$. Then, the distances to us now can be calculated using:
$$d_1=\frac{c}{H_0} \int_0^{z_1} \dfrac{dx}{\sqrt{\Omega_{M_0}(1+x)^3+\Omega_{\Lambda_0}}}$$
$$d_2=\frac{c}{H_0} \int_0^{z_2} \dfrac{dx}{\sqrt{\Omega_{M_0}(1+x)^3+\Omega_{\Lambda_0}}}$$
$c=299 \ 792 \ 458 \ m/s$
According to the results-2018 of the Planck Mission, the best values of the cosmological parameters are:
$H_0=67.66 \ (km/s)/Mpc$
$\Omega_{M_0}=0.3111$
$\Omega_{\Lambda_0}=0.6889$
The integral has no elementary primitive, but can be easily calculated by numerical methods.
Finally, to calculate the distance "$d$" between both objects now, use the law of cosines:
$$d=\sqrt{d_1^2+d_2^2-2 \ d_1 d_2 \cos \theta \, \, }$$
Regards | {
"domain": "physics.stackexchange",
"id": 49519,
"tags": "cosmology, coordinate-systems, geometry, distance, redshift"
} |
How is NaOH able to clear a solution of Paraformaldehyde? | Question: While making a solution of $4~\%$ para-formaldehyde for immunohistochemistry (to fix the tissue), we add para-formaldehyde into water and stir. It looks like a chalky supersaturated solution even after stirring with a magnetic stirrer. And as per protocol, you are supposed to add 2 drops of $2~\mathrm{N}\ \ce{NaOH}$ to clear the solution. It surprisingly turned crystal clear!
I was amazed at how just 2 drops could make such a huge difference. Does it react with para-formaldehyde? Does changing pH change solubility? But para-formaldehyde is a polymer, how could it suddenly become soluble? Does adding $\ce{NaOH}$ make it a monomer?
Answer: para-formaldehyde consists of long chains of the following type:
$$\ce{HO-CH2-O-[CH2-O]_{$n$}-CH2-OH}$$
When dissolved in water, this needs to be broken down into formaldehyde monomers (and then further into formalin):
$$\ce{(CH2O)_{$n$} + H2O <=> $n$ CH2O + H2O <=>> CH2(OH)2}$$
Acids and bases are both able to speed up the first step. Bases do so by deprotonating one end and causing a domino effect of bonds being broken and formed, with a hydroxide released on the other end. Acids do so by protonating one end, with the domino effect then occurring in the other direction.
$$\ce{HO-[CH2-O]_{$n$}-CH2-O-CH2-O-CH2-OH ->[\ce{OH-}] \\HO-[CH2-O]_{$n$}-CH2-O-CH2-O-CH2-O- ->\\ HO-[CH2-O]_{$n$}-CH2-O-CH2-O- + H2C=O -> \\HO-[CH2-O]_{$n$}-CH2-O- + 2 H2C=O ->}\\ \dots\\\ce{HO- + ($n$+3) H2C=O}$$ | {
"domain": "chemistry.stackexchange",
"id": 5761,
"tags": "organic-chemistry, acid-base, aqueous-solution, solubility, carbonyl-compounds"
} |
Why must the deuteron wavefunction be antisymmetric? | Question: Wikipedia article on deuterium says this:
The deuteron wavefunction must be
antisymmetric if the isospin
representation is used (since a proton
and a neutron are not identical
particles, the wavefunction need not
be antisymmetric in general).
I wonder why does the wave function need to be antisymmetric when isospin representation is used. I assume that if two somehow different particles are exchanged the total wavefunction changes sign. Is it so? Why?
Thanks
Answer: The neutron and proton may be viewed as the same particle - sometimes referred to as the nucleon. A proton is a "nucleon with the isospin up" and the neutron is a "nucleon with the isospin down". With this qualification, it's still true that nucleons are identical fermions, so their total wave function has to be antisymmetric.
In the simplest Ansatz, the total wave function is the tensor product of the isospin wave function, the spin wave function, and the orbital (spatial) wave function. An odd number of those factors has to be antisymmetric for the product to be antisymmetric. The (anti)symmetry of each factor is governed by the total isospin (0 antisymmetric, 1 symmetric); the total spin (0 antisymmetric, 1 symmetric); and the orbital angular momentum (even means symmetric, odd means antisymmetric). | {
"domain": "physics.stackexchange",
"id": 446,
"tags": "nuclear-physics, quantum-mechanics, symmetry"
} |
Would a ring magnet around a copper tube be slowed? | Question: If a ring-shaped magnet were dropped with a copper pole through its center, would it be slowed the same as a cylindrical magnet in a copper tube? If so, would it be slowed more or less?
Answer: In a word, yes.
Eddy currents are induced in conductors exposed to changing magnetic fields. Those currents specifically counter the change in magnetic flux the conductor is experiencing. That's Lenz's law. Or, more formally, Faraday's law. Part of Maxwell's equations describing how classical electromagnetism works on a fundamental level.
Translation:
Dropping a cylinder magnet in a metal tube? Slowed.
https://www.youtube.com/watch?v=H31K9qcmeMU
Dropping a ring magnet around a metal tube? Still slowed.
Sliding a ring magnet along a metal sheet? Slowed
Sliding a regular magnet along a metal sheet? Slowed
https://www.youtube.com/watch?v=4KsnKHsD3Ak
Swinging a conducting pendulum through poles of a horseshoe-type magnet? Slowed
https://www.youtube.com/watch?v=dHwvbywdpuQ
Dropping a magnet directly onto a conductor? Slowed
https://www.youtube.com/watch?v=Ajjoi2CfI20
Dropping a square magnet through a thick metal tube? Slowed
Rolling a thick metal tube towards a magnet? Slowed
Ahh, but plot twist. Leaving a magnet stationary inside a tube and then lifting the tube up? The magnet is sped up (since resisting the "move the magnet away from the tube" change in flux requires the magnet to move along with the tube).
https://www.youtube.com/watch?v=Q7leJTZ6E48
Rule of thumb: If you're trying to speed up the magnet or tube, eddy currents will slow them down. If you're trying to slow down the magnet or tube, eddy currents will speed them up. | {
"domain": "physics.stackexchange",
"id": 88070,
"tags": "electromagnetism, magnetic-fields, electromagnetic-induction, lenz-law"
} |
Observable Operator on a Superposition? | Question: I'm probably missing something obvious and basic here but I can't make sense of certain usages of Observables as present in basic treatments of Quantum Mechanics that i've come across.
$$ \hat{A}|\Psi\rangle = a|\Psi\rangle $$
The above equation implies to me that a single eigenket gives a single eigenvalue of $\hat{A}$.
However Ket Vectors that are composed of superpositions have multiple possible eigenvalues. Which leads me to believe that that equation is only valid for eigenkets which are Basis States.
However in the Schrödinger equation we have an Observable (Hamiltonian) acting on Wave Functions in Position Space which are composed of an infinite number of Basis States.
In this usage is it somehow assumed that every Basis State in the Position Basis corresponds to a single Energy Eigenstate? (I wouldn't think this would be the case. But what is the point/result of applying the Hamiltonian to any given Wave Function then?)
Further confusion arises from this because if the Energy is exactly known then shouldn't there be some sort of maximal uncertainty in time?
As a final question, is there any kind of useful interpretation of multiplying the eigenket by its eigenvalue as appears in the above Observable Equation? In all treatments I've seen, this multiplication is simply ignored and the eigenvalue itself is the only focus.
Answer:
However Ket Vectors that are composed of superpositions have multiple possible eigenvalues. Which leads me to believe that that equation is only valid for eigenkets which are Basis States.
The equation
\begin{align}
\hat A|\Psi\rangle = a|\Psi\rangle
\end{align}
holds only for eigenvectors of the operator $\hat A$. In general, there is a mathematical theorem, the spectral theorem, that says that for any hermitian (self-adjoint) operator $\hat A$ acting on a Hilbert space $\mathcal H$, there exists a basis of the Hilbert space composed of eigenvectors of $\hat A$. This tells us that any vector $|\psi\rangle$ in the Hilbert space can be written as a linear combination of eigenvectors of any given observable. Let's say, for example, that the basis of eigenvectors corresponding to observable $\hat A$ is denoted by $\{|a_1\rangle, |a_2\rangle, \dots\}$ where the vector $|a_i\rangle$ has eigenvalue $a_i$. Then for any state $|\psi\rangle$ in the Hilbert space, we can write
\begin{align}
|\psi\rangle = \sum_i c_i|a_i\rangle
\end{align}
is it somehow assumed that every Basis State in the Position Basis corresponds to a single Energy Eigenstate?
No. An eigenstate of one operator is not necessarily an eigenstate of another operator. If, however, two operators commute, then it is possible to find a basis for the Hilbert space comprised of vectors that are eigenstates of both operators (we usually call these "simultaneous eigenstates" of the two operators).
is there any kind of useful interpretation of multiplying the eigenket by its eigenvalue as appears in the above Observable Equation?
I'm not sure what you're exactly looking for here, but one fact is that if $|\Psi\rangle$ satisfies the eigenvalue equation, and if the system is prepared in that state, then a measurement of the observable $\hat A$ will return the corresponding eigenvalue with probability $1$. | {
"domain": "physics.stackexchange",
"id": 10191,
"tags": "quantum-mechanics, operators, schroedinger-equation, hilbert-space, observables"
} |
Why does hot water eventually turn cold, and why does cold water eventually turn hot? | Question: I get cold water from my tap and put it in my water bottle. Over the day as I drink it, the water loses its coldness and slowly becomes warmer.
On the other hand, if I get hot water (maybe I'm sick or something), over the day it doesn't turn "cold", but it definitely does get "less hot"
Why?
Is this because of equilibrium?
Answer: Yes, what you are describing is a system approaching thermal equilibrium.
The temperature of the water approaches the temperature of its surroundings as heat is transferred from the warmer object (either the air or the water) to the colder. This is basically true of any two objects in contact with each other; the temperature difference drives the flow of heat from the warmer object to the colder one. At first, when the temperature difference is large, this happens quickly, then more slowly as the difference in temperature gets small.
One complication to this simple idea is that the water is also evaporating, which is a cooling process. Particularly if the surrounding air is dry, the water can evaporate quickly enough to become a fair bit colder than its surroundings.
"domain": "chemistry.stackexchange",
"id": 8527,
"tags": "everyday-chemistry, equilibrium, water"
} |
In a uniformly accelerated motion experiment, the acceleration can be attained from $V_{ave}$ vs. $t/2$ and $x$ vs. $t^2$ graph. How is this possible? | Question: Specifically, in the experiment, we had to release a glider from an inclined plane (that had an angle of inclination of 10 degrees). We had to measure the time it takes to reach the final position. We had to calculate the average velocity from the five trials we did for each distance (50cm, 60cm, 70cm, 80cm, 90cm, and 100cm). After this, we had to plot the values on an x vs. y graph.
I am just confused as to why time in the first graph is divided by 2, and how the graph's slope is equal to the acceleration of the particle in the experiment. Why is $V_{ave}$ plotted against $t/2$ instead of just $t$? In the first graph, what does a non-zero y-intercept value mean?
As for the second graph, I am confused as to why time is squared, and how the graph's slope times 2 is equal to the acceleration of the particle in the experiment.
I've been stuck with these questions for days now, and I still couldn't figure it out. I'm very curious what the right answers are to these questions.
Answer: If an object is moving at constant acceleration then its position and speed are described by a set of five equations known as the suvat equations. The one we need to use here is:
$$ s = ut + \tfrac12 at^2 \tag{1}$$
where $t$ is the time, $a$ is the acceleration, $u$ is the initial speed and $s$ is the distance travelled. In your experiment you are starting the object from rest so $u=0$, and the equation simplifies to:
$$ s = \tfrac12 at^2 \tag{2}$$
This immediately explains your second graph, because if we graph the distance $s$ against $t^2$ we get a straight line going through the origin with gradient $\tfrac12a$. Your fit doesn't quite go through the origin, but this is likely to be experimental error.
To explain your first graph note that the average speed is the distance travelled divided by the time taken:
$$v_{av} = \frac{s}{t}$$
If we take equation (2) and divide both sides by $t$ we get:
$$ \frac{s}{t} = v_{av} = a~\left(\tfrac12 t\right) $$
So if we graph $v_{av}$ against $\frac12t$ we get a straight line through the origin with a gradient of $a$. | {
"domain": "physics.stackexchange",
"id": 98824,
"tags": "kinematics, acceleration, time, velocity, data-analysis"
} |
What would happen if a hydrogen bomb were to explode in Saturn's atmosphere? | Question: Purely hypothetical since any kind of testing in atmosphere/space is banned by international legislation/agreement.
The humans have already bombed Luna so ... what could be expected to happen on Saturn if a hydrogen bomb were to explode in its atmosphere? Would the explosion set the planet's atmosphere ablaze?
Answer: Nothing devastating would happen. When comet Shoemaker-Levy 9 hit Jupiter, with considerably more energy than an H-bomb, it made a big bang but Jupiter is still there.
Saturn's atmosphere can't burn because there is no free oxygen present. In fact there is regular lightning on Saturn, so if the atmosphere was going to catch fire it would have done so by now.
I wonder if you were thinking the H-bomb would start a hydrogen fusion reaction in Saturn's atmosphere. If so, no runaway fusion reaction would occur as the density and temperature is far too low. | {
"domain": "physics.stackexchange",
"id": 31302,
"tags": "nuclear-physics, astrophysics, planets, fusion, explosions"
} |
Why would we overexpress Sir2 by overexpressing its hypomorph (dSir2-EP2300) in C. elegans? | Question: Can't we just overexpress regular Sir2 in the paper? Rather than overexpress a reduced-function gene?
The paper is Burnett C, Valentini S, Cabreiro F, Goss M, Somogyvári M, Piper MD, Hoddinott M, Sutphin GL, Leko V, McElwee JJ, et al.. 2011. Absence of effects of Sir2 overexpression on lifespan in C. elegans and Drosophila. Nature 477: 482–5.
Answer: I have had a little time to look over this paper.
They do overexpress a native sir2 clone, called sir2.1 OE, which appears in high copy number. This strain was found in a previous publication to have a long lifespan... that is old news.
This paper sees sir2.1 expression levels as an oversimplification of the causes of longevity. When they create crosses of the sir2.1 OE strain with wildtype, you can see that the outcross, which is verified to have a high level of sir2 expression, no longer has an extraordinary lifespan.
This can be seen in Figure 1. http://www.nature.com/nature/journal/v477/n7365/fig_tab/nature10296_F1.html
So this paper is now asking: if sir2 levels do not convey the information that creates an extended lifespan, then what does? It must be some modulation of some protein that sir2 affects. They imply that the actual cause of the lifespan increase may have been a mutation somewhere else in the organism.
"However, longevity was not suppressed by sir-2.1 RNA interference (RNAi) ... indicating causation by factors other than sir-2.1, either on mDp4 or elsewhere in the genome."
"This implies that lifespan extension is due to transgene-linked genetic effects other than the overexpression of dSir2."
In the second half of the paper (Figure 2) the investigators move on to Drosophila work, where they look at how expressing constructs or inhibiting sir2 protein levels might affect lifespan. Although the dSir2(EP2300) / + construct (Drosophila sir2) with a wild-type gene promoter did not live quite as long as dSir2(EP2300) with a stronger promoter (dSir2(EP2300) / tub-GAL4), the promoter construct (tub-GAL4 / +) alone also had just as long a lifespan. How can this be? Not sure, but the expression of sir2 is clearly not the panacea we had hoped. Note however that the reduced-function gene still gave the same boost to lifespan. This shows dramatically that sir2 activity alone does not drive the longer lifespan.
Lastly, deletion constructs (dSir4.5/1.7), which should have lower-than-wild-type sir2 protein levels, had completely normal lifespans.
So the answer to your question; you need to test a hypothesis going in both directions - does increase of the protein really create a strong effect? Does decreasing it have a negative effect then? There are lots of other reasons to use knockouts and non functional genes in such an experiment, but those are the broad strokes. | {
"domain": "biology.stackexchange",
"id": 573,
"tags": "molecular-biology, senescence"
} |
Why we add a constant value column in our DataFrame sometimes? | Question: Currently I'm learning data science and I'm at the beginner stage. I have seen many times that we add a "constant" column to our data frame, with all row cells of that column having the value 1.
I need to know why we do so, and also what will happen if we don't use it.
Thank you.
Answer: In linear regression you need that column so that the fitted line is not constrained to pass through the origin. Think of the linear model $y = b_1 x_1 + b_2 x_2 + ...$. If all $x_i$ are 0, then $y$ must be 0; you need an additional parameter (the intercept) to escape that constraint. | {
"domain": "datascience.stackexchange",
"id": 5616,
"tags": "machine-learning, python, pandas, data-science-model, dataframe"
} |
Why don't we see these lanthanide species? | Question: For most lanthanide metals$^{[1]}$, the stable oxidation state is III. The general electronic structure$^{[2]}$ is $$\ce{[Xe] 4f^{0-14} 5s^2 5p^6 5d^{0-1} 6s^2}.$$
Elements that have the d-electron are La, Ce, Gd, and Lu. Furthermore, the f-subshell is considered relatively stable in the states $$f^0, \ f^7, \text{ and } f^{14}.$$
We can conclude that La, Gd, and also Lu easily form $\ce{E^3+}$ ions. Yet, as would be predicted by this easy approach, we would also see $$\ce{Sm+, Tm+}\ (f^7 \text{ and } f^{14}), \quad \ce{Pr^5+, Dy^5+}\ (f^0 \text{ and } f^7).$$
Why is this not the case?
$^{[1]}$ Ce, Pr, and Tb also have the oxidation state IV. Eu and Tm have the additional state II.
$^{[2]}$ Ordering (relative energy) changes with the number of electrons.
I strongly recommend having a look at these questions:
How come orbitals become 'core-like' when electrons are removed?
What is meant by 'electrons of like/unlike rotation'?
What's up with this quarter / three-quarter rule?
Original topic: Predominance of III oxidation state for lanthanides
Answer: This is really the exact same question but just in a different context of transition metals: Cr(II) and Mn(III) - their oxidizing and reducing properties?
The answer is because exchange energy (which is the reason behind the stability of half-filled and fully-filled shells) is not the main factor that decides the predominance of an oxidation state.
Whether an oxidation state can be reached depends on the balance between two things:
An energetic cost of having to ionise the element, i.e. the ionisation energy
The energetic payback obtained by forming covalent bonds / ionic bonds / solvation in solution / etc.
Exchange energy effects only give rise to minor variations in ionisation energies. The increase going from the nth to (n+1)th ionisation energy is far greater than any exchange effects.
Just like the magnesium example I gave in the other question, the lanthanides have very small second ionisation energies, and these can also be easily compensated for by bonding or ion-dipole interactions in solution. Take your example of $\ce{Sm+}$. The fact that it has a $\mathrm{f^7}$ configuration means that the second IE is marginally larger than would be expected. However, this marginal increase hardly changes the fact that the cost of ionising Sm a second time can be easily recouped.
Likewise, your example of $\ce{Pr^5+}$ is totally unrealistic because those 5th IEs are so huge that there is absolutely zero chance that they will give it up, no matter what exchange stabilisation there is.
The predominance of the +3 ionisation state for the lanthanides is therefore simply a consequence of IE1 + IE2 + IE3 being sufficiently small, and IE4 being way too large. That's all there is to it. Note how the variations in the graph below (due to exchange energy effects) are so small that all the IE graphs never cross over each other. IE5 is so huge that they didn't even bother plotting it!
(source: Inorganic Chemistry 6ed, Weller et al., p 630) | {
"domain": "chemistry.stackexchange",
"id": 5834,
"tags": "physical-chemistry, electronic-configuration, stability, oxidation-state, elements"
} |
ROS Answers SE migration: ROS2 Linking | Question:
I'm having trouble linking libraries in ROS2. I have a simple example posted here
I have two packages. awesome_library defines a C++ class in a library and fantastic_node defines a C++ executable that instantiates the class from awesome_library.
I am familiar with how to do this in ROS1, but in ROS2 I am getting the following error.
[ 50%] Linking CXX executable fantastic_node_node
CMakeFiles/fantastic_node_node.dir/src/fnode.cpp.o: In function `main':
fnode.cpp:(.text+0x17d): undefined reference to `awesome_library::Awesomeness::Awesomeness()'
collect2: error: ld returned 1 exit status
CMakeFiles/fantastic_node_node.dir/build.make:177: recipe for target 'fantastic_node_node' failed
make[2]: *** [fantastic_node_node] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/fantastic_node_node.dir/all' failed
From the command ament build --only-packages awesome_library fantastic_node
I've tried a couple different variations of the library installation in awesome_library and target linking in fantastic_node but have not found one that works, or a similar example.
Current attempts with allenh1's branch:
ros-bouncy-ament-tools:
Installed: 0.5.0-0bionic.20180719.223430
Candidate: 0.5.0-0bionic.20180719.223430
Version table:
*** 0.5.0-0bionic.20180719.223430 500
500 http://repo.ros2.org/ubuntu/main bionic/main amd64 Packages
100 /var/lib/dpkg/status
! ros2_ws/ > apt-cache policy python3-colcon-core
python3-colcon-core:
Installed: 0.3.12-1
Candidate: 0.3.12-1
Version table:
*** 0.3.12-1 500
500 http://repo.ros2.org/ubuntu/main bionic/main amd64 Packages
500 http://repo.ros2.org/ubuntu/main bionic/main arm64 Packages
100 /var/lib/dpkg/status
! ros2_ws/ > colcon build --packages-select awesome_library fantastic_node --symlink-install
Starting >>> awesome_library
Finished <<< awesome_library [0.27s]
Starting >>> fantastic_node
--- stderr: fantastic_node
CMakeFiles/fantastic_node_node.dir/src/fnode.cpp.o: In function `main':
fnode.cpp:(.text+0x17d): undefined reference to `awesome_library::Awesomeness::Awesomeness()'
collect2: error: ld returned 1 exit status
make[2]: *** [fantastic_node_node] Error 1
make[1]: *** [CMakeFiles/fantastic_node_node.dir/all] Error 2
make: *** [all] Error 2
---
Failed <<< fantastic_node [ Exited with code 2 ]
Summary: 1 package finished [0.75s]
1 package failed: fantastic_node
1 package had stderr output: fantastic_node
[0.860s] ERROR:colcon.colcon_notification.desktop_notification:Exception in desktop notification extension 'notify2': org.freedesktop.Notifications.MaxNotificationsExceeded: Exceeded maximum number of notifications
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/colcon_notification/desktop_notification/__init__.py", line 107, in notify
title=title, message=message, icon_path=icon_path)
File "/usr/lib/python3/dist-packages/colcon_notification/desktop_notification/notify2.py", line 50, in notify
self._last_notification.show()
File "/usr/lib/python3/dist-packages/notify2.py", line 188, in show
self.timeout, # expire_timeout
File "/usr/lib/python3/dist-packages/dbus/proxies.py", line 70, in __call__
return self._proxy_method(*args, **keywords)
File "/usr/lib/python3/dist-packages/dbus/proxies.py", line 145, in __call__
**keywords)
File "/usr/lib/python3/dist-packages/dbus/connection.py", line 651, in call_blocking
message, timeout)
dbus.exceptions.DBusException: org.freedesktop.Notifications.MaxNotificationsExceeded: Exceeded maximum number of notifications
Originally posted by David Lu on ROS Answers with karma: 10932 on 2018-11-01
Post score: 1
Original comments
Comment by gvdhoorn on 2018-11-01:
Unfortunately, GHFM doesn't work here :(
Answer:
you should probably use colcon if you're using bouncy:
colcon build --packages-select awesome_library fantastic_node
Edit: upon further review, I see you're linking to ${catkin_LIBRARIES}, which you probably shouldn't be doing.
In my ports, I usually do a
set(req_deps
"rclcpp"
# other libs
)
then, in lieu of ${catkin_LIBRARIES}, I would do this:
ament_auto_find_build_dependencies(REQUIRED ${req_deps})
and this
ament_auto_add_library(awesomeness src/aswesome.cpp)
ament_target_dependencies(awesomeness ${req_deps})
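Putting those pieces together, the library package's CMakeLists.txt might look roughly like this (a sketch only — the package, source file, and dependency names are taken from or assumed around the question, not verified against the actual repo):

```cmake
cmake_minimum_required(VERSION 3.5)
project(awesome_library)

find_package(ament_cmake_auto REQUIRED)

set(req_deps
  "rclcpp"
  # other deps
)

ament_auto_find_build_dependencies(REQUIRED ${req_deps})

# ament_auto_add_library also wires up include dirs and export boilerplate
ament_auto_add_library(awesome_library src/awesome.cpp)
ament_target_dependencies(awesome_library ${req_deps})

ament_auto_package()
```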
For an example of libraries, I'll point you to my openslam_gmapping port, and for a very minimal port (so you can see the whole diff), I'll point you to a port of rplidar_ros I did recently.
I'm sure I forgot something above, so let me know what it was. ;)
Originally posted by allenh1 with karma: 3055 on 2018-11-01
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by David Lu on 2018-11-01:
That results in the same error.
Comment by allenh1 on 2018-11-01:
Ya, noticed a bit more for you to change, haha. See above.
Comment by David Lu on 2018-11-01:
Okay, I've tried porting to ament_auto (of which there are few examples/documents that I've found). Results are in this branch: https://github.com/DLu/simple_ros2_example/tree/ros2auto
Still gets the same linking error.
Comment by allenh1 on 2018-11-02:
Hey! I just went ahead and played around with it some. PR is on your repo, linked here for convenience.
Comment by David Lu on 2018-11-02:
I tried your branch and it didn't work (see comments on the PR). It also seems like there should be a way to do it without ament_auto (since the migration guide and examples don't use it)
Comment by allenh1 on 2018-11-02:
I tried your branch and it didn't work
:( It definitely worked on my machine. Did you invoke colcon with --symlink-install?
do it without ament_auto
that makes sense. @dirk-thomas can you please explain what ament_auto is? I've forgotten the difference.
Comment by allenh1 on 2018-11-02:
Also, I didn't see the PR comments...
Comment by David Lu on 2018-11-02:
Oops, forgot to submit my review. Done now.
I've updated the above with some of the pertinent version info, exact command and output.
Comment by David Lu on 2018-11-02:
Any hints @dirk-thomas? | {
"domain": "robotics.stackexchange",
"id": 31998,
"tags": "ros, rclcpp, ros-bouncy, linking"
} |
How to add message field in a rosbag | Question:
I got a rosbag with a number of topics being played.
I am interested in adding a Header (more specifically, a frame_id) to the messages received, so that the 3D points in them are automatically displayed in RViz in the correct frame.
Is this possible? I know you can iterate over a rosbag with this type of loop:
for topic, msg, t in inbag.read_messages():
if topic == "/desired_topic":
outbag.write(topic,msg.h,t)
if topic not in ["/desired_topic"]:
outbag.write(topic,msg,t)
outbag.close()
But I don't know how to add a field to the incoming message and save it in a new bag.
Originally posted by thepirate16 on ROS Answers with karma: 101 on 2018-11-15
Post score: 0
Answer:
The rosbag_storage package described here allows you to access and modify the contents of a bag file without playing it back as you would normally have to. Using this you can look through all of the messages in a bag file and write messages into a bag file too.
However you cannot add a field into a message, the structure of messages is fixed and they can only contain the fields they are defined with. Some message types however include a normal type and 'Stamped' type which is the same message with an additional Header.
For example, if your bag file contains geometry_msgs/Pose messages, you could read these, create geometry_msgs/PoseStamped messages with the additional frame_id information in the header, and write them back to the file. Only some messages have a stamped version, though.
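A sketch of that conversion using the rosbag Python API (the topic name and frame_id are assumptions, and this needs a ROS environment to run, so it is illustrative only):

```python
import rosbag
from geometry_msgs.msg import PoseStamped

with rosbag.Bag('output.bag', 'w') as outbag:
    for topic, msg, t in rosbag.Bag('input.bag').read_messages():
        if topic == '/desired_topic':  # assumed to carry geometry_msgs/Pose
            stamped = PoseStamped()
            stamped.header.stamp = t          # reuse the bag timestamp
            stamped.header.frame_id = 'map'   # the frame RViz should use
            stamped.pose = msg
            outbag.write(topic, stamped, t)
        else:
            outbag.write(topic, msg, t)
```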
Hope this helps.
Originally posted by PeteBlackerThe3rd with karma: 9529 on 2018-11-15
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 32055,
"tags": "ros, rviz, rosbag, ros-kinetic, header"
} |
Interpreting the evaluation result of multiple linear regression | Question: I am learning the multiple linear regression model.
I've built a model and using R command:
summary(model)
I got this result:
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 253.2 on 44 degrees of freedom
Multiple R-squared: 0.3336, Adjusted R-squared: 0.2579
F-statistic: 4.405 on 5 and 44 DF, p-value: 0.002444
How can I interpret this result in order to make a decision about the goodness of the model? Specifically, what do the 44 degrees of freedom mean in this case?
Also, why do we have both adjusted and multiple R-squared values?
Answer: I am going to answer your questions one after another.
First, what do the 44 degrees of freedom mean?
They are the residual degrees of freedom: the number of observations minus the number of estimated parameters. Your F-statistic line says "5 and 44 DF", i.e. 5 predictors were fitted (plus an intercept), so the model was estimated on 50 data points and 50 - 5 - 1 = 44 degrees of freedom remain for estimating the residual error. Intuitively, each parameter you estimate "uses up" one observation's worth of information, and the remaining 44 are what the residual standard error of 253.2 is based on.
Second, what is the Multiple R-squared?
Here, for the purpose of interpreting it, Multiple R-squared is equivalent to the (simple) R-squared you would have for a linear regression model with a single predictor. Multiple R-squared tells us the share of the observed variance that is explained by the model. For example, if you have a Multiple R-squared of 0.79, it means that your model explains 79% of the observed variance in your data.
Third, what is the Adjusted R-squared and why do we need it?
There are several problems with Multiple R-squared.
Problem 1: Every time you add a predictor to a model, the R-squared increases, even if only by chance. It never decreases. Consequently, a model with more independent variables may appear to have a better fit simply because it has more independent variables.
Problem 2: If a model has too many predictors and high-order polynomial terms, it begins to model the random noise in the data. This condition is known as overfitting, and it produces misleadingly high R-squared values and a lessened ability to make predictions.
Problem 2 is largely a consequence of Problem 1, and this is where Adjusted R-squared comes in handy. Adjusted R-squared is an attempt at fixing these problems by penalizing the number of independent variables. It tells you the percentage of variation explained by only those independent variables that actually affect the dependent variable:
$$\bar{R}^2 = 1 - (1 - R^2)\,\frac{n-1}{n-k-1}$$
where:
n is the number of data points you have,
and k is the number of independent variables used to explain their distribution, excluding the constant
If you add more and more useless variables to a model, adjusted r-squared will decrease. If you add more useful variables, adjusted r-squared will increase.
Adjusted R-squared will always be less than or equal to R-squared. You only need the adjustment when working with samples; in other words, it isn't necessary when you have data from an entire population.
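The two R-squared figures in your summary output are consistent with this relationship; here is a quick check in Python (n = 50 observations and k = 5 predictors are inferred from the "5 and 44 DF" line, not stated directly in the question):

```python
r2 = 0.3336   # Multiple R-squared from summary(model)
n, k = 50, 5  # observations and predictors (44 residual DF = 50 - 5 - 1)

# Adjusted R-squared penalizes the predictor count
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(round(adj_r2, 4))  # 0.2579, matching the Adjusted R-squared in the output
```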
Here is an interesting series of articles which will help you understand how to use R-squared to interpret the results of your model even better.
Regression Analysis: How Do I Interpret R-squared and Assess the Goodness-of-Fit?
Multiple Regression Analysis: Use Adjusted R-Squared and Predicted R-Squared to Include the Correct Number of Variables | {
"domain": "datascience.stackexchange",
"id": 814,
"tags": "r, regression, linear-regression"
} |
Logging conditionally repetitively in code | Question: I am using Log4j2 for logging. Multiple operations return a success code: if status 0 is received, the operation succeeded; otherwise it failed. On success I log "*** is successful." and on failure I log "*** failed with status : " + status. This is repetitive throughout the code.
public class ReadCallbackManager {
//Log Object
private static final Logger LOGGER = LogManager.getLogger(ReadCallbackManager.class);
private static ReadCallback readCallback;
private static final int SUCCESS_STATUS = 0;
public int setHook(final ICallback aCallBack) {
readCallback = new ReadCallback(aCallBack);
final int status = readCallback.setHook();
if (status == SUCCESS_STATUS) {
LOGGER.info("Read callback hook is set successfully!!!");
} else {
LOGGER.info("Read Callback hook set failed with status : " + status);
}
return status;
}
}
So I thought I would create a wrapper around the Logger class to handle this if-else condition, as below:
public final class BacStacLogger {
public static final int SUCCESS_STATUS = 0;
//Log Object
private Logger logger;
public BacStacLogger(final Class<?> className) {
if (logger == null) {
logger = LogManager.getLogger(className);
}
}
public void infoHookByStatus(final int status, final String successMessage, final String failureMessage) {
if (status == SUCCESS_STATUS) {
logger.info(successMessage);
} else {
logger.info(failureMessage + " : " + status);
}
}
}
And I have updated my previous code as:
public class ReadCallbackManager {
//Log Object
private static final BacStacLogger LOGGER = new BacStacLogger(ReadCallbackManager.class);
private static ReadCallback readCallback;
public int setHook(final ICallback aCallBack) {
readCallback = new ReadCallback(aCallBack);
final int status = readCallback.setHook();
LOGGER.infoHookByStatus(status, "Read hook set successfully.", "Read hook set failed.");
return status;
}
}
Would you please provide your comments on it? Or if required, any improvements?
Answer: There are a few remarks that come to my mind.
Simplifying the logic
In your sample code, the two different log strings for success and failure don't differ too much. So I guess the following strings might be acceptable to you as well:
"Setting read callback hook: successful"
"Setting read callback hook: FAILED with status nnn"
Then you can create a static method statusText(int status) e.g. in a Utils class that returns "successful" for status==0 and "FAILED with status nnn" in all other cases. If I understood correctly, such a statusText() method should be re-usable all over your application, assuming that 0 always means success.
And then you can do the logging like:
LOGGER.info("Setting read callback hook: " + Utils.statusText(status));
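A minimal version of such a helper might look like this (the class and method names follow the suggestion above, but are otherwise assumptions):

```java
final class Utils {

    private Utils() {
        // utility class, not meant to be instantiated
    }

    /** Maps a numeric status code to a log-message fragment; 0 means success. */
    static String statusText(int status) {
        return status == 0 ? "successful" : "FAILED with status " + status;
    }
}
```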
But there are a few other issues:
Log Levels
Log4J has different log levels, and you shouldn't ignore that aspect. Depending on the consequences, a failure should typically get the ERROR level, at least the WARN level. So, in case of a non-zero status, LOGGER.error() seems to be a better choice.
INFO is meant for messages useful to the system administrator running the site. Messages targeting the developer should get a lower level, typically DEBUG or TRACE. I guess the success of setting some callback isn't important to the system admin. The admin typically only wants to see things needing his/her special attention. So, I'd change the success case to LOGGER.debug().
Of course, then my statusText() solution no longer fits, as that doesn't support different log levels.
Why not Exceptions?
Your code sticks to the 1970s status code pattern to communicate failure (return an integer where some values mean success and others mean failure). By the 1990s, the software industry had learned that error handling can be done in a better way using exceptions.
So, instead of returning int values and forcing each and every layer of your software to check for the success results, throw an exception if something like setting a callback failed. By embracing exceptions as the signal of failure, your code typically becomes
cleaner,
more readable,
more compact,
more focussed on the main task instead of the failures
and more robust.
The simple guidelines for exceptions are:
If a method can't fulfill its job (whatever you defined that job to be), it throws an exception.
If a method returns normally, the caller can rely on the fact it has successfully done its job.
You catch an exception only if you know a way how to continue successfully even after an internal failure, e.g. by retrying. | {
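For illustration, here is a rough sketch of what the exception-based style could look like; the exception class name and the status-carrying constructor are invented for this example, not part of the original code:

```java
/** Thrown when a lower-level call reports a non-zero status code (illustrative name). */
class HookFailureException extends RuntimeException {

    private final int status;

    HookFailureException(String operation, int status) {
        super(operation + " FAILED with status " + status);
        this.status = status;
    }

    int getStatus() {
        return status;
    }
}
```

The caller then simply invokes the operation and lets the exception propagate (or catches it only where a retry is actually possible), instead of checking an int at every layer.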
"domain": "codereview.stackexchange",
"id": 40106,
"tags": "java, logging"
} |
System Identification using LMS Adaptive Filter | Question: I just have a question about using an least-mean-squares algorithim adaptive filter for system identification. Consider the following
I am told that as the error converges to a small value, the adaptive filter coefficients w[k] will indeed represent the unknown system h[k]. Now, that doesn't make sense to me, since the noise n[k] is being used in the error calculation.
Won't the adaptive filter coefficients w[k] now represent the unknown system coefficients h[k] AND the noise n[k]?
Answer: Because an LMS estimator will, over time, "average out" uncorrelated zero-mean noise. It's pretty much in the name.
But yes, you're right, there is a noise component in the estimate; one of the qualities of an estimator is how little the noise variance influences the estimate variance after a given length of observation.
That's the case, however, for all estimators: you measure signal + noise, and you estimate parameters from that. The parameters must be somewhat noisy; otherwise, there's something broken with your noise model. | {
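To see the "averaging out" concretely, here is a small NumPy simulation of LMS system identification (the tap values, step size, and noise level are illustrative, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2])          # the "unknown" system to identify
N, mu = 20000, 0.01                     # sample count and LMS step size

x = rng.standard_normal(N)              # white excitation signal
noise = 0.05 * rng.standard_normal(N)   # zero-mean measurement noise n[k]
d = np.convolve(x, h)[:N] + noise       # observed output: system + noise

w = np.zeros_like(h)                    # adaptive filter coefficients
for n in range(len(h) - 1, N):
    x_vec = x[n - len(h) + 1 : n + 1][::-1]  # newest sample first
    e = d[n] - w @ x_vec                     # instantaneous error
    w += mu * e * x_vec                      # LMS update

print(w)  # close to h: the zero-mean noise averages out of the estimate
```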
"domain": "dsp.stackexchange",
"id": 7022,
"tags": "adaptive-filters"
} |
Can stimulated emission happen in nuclear energy states? | Question: We know that stimulated emission of photons can occur when photons induce an electron in a metastable state to drop down, creating a new photon which is identical to the original photon in terms of frequency, phase, and polarization.
I'm wondering, can the phenomenon of stimulated emission be extended to nuclear transitions as well? The existence of nuclear stimulated emission could mean that gamma ray lasers are possible.
Answer: It seems that research has reached the stage of creating a gamma ray laser, a proposal:
In the study, which is published in a recent issue of Physical Review Letters, Tkalya explains that a nuclear gamma-ray laser has to overcome at least two basic problems: accumulating a large amount of isomeric nuclei (nuclei in a long-lived excited state) and narrowing down the gamma-ray emission line. The new proposal fulfills these requirements by taking advantage of thorium’s unique nuclear structure, which enables some of the photons from an external laser to interact directly with thorium’s nuclei rather than its electrons.
This involves nuclear transitions, but the output is electromagnetic.
If you mean whether a proton, or alpha, laser is possible, i.e. incoming protons stimulating emission from an isotope with the same energy-level protons, this involves the strong interaction as well as electromagnetism, and I would think it very hard to achieve even if long-lived isotopes exist.
Neutrons are difficult to control but this might interest you. | {
"domain": "physics.stackexchange",
"id": 44587,
"tags": "nuclear-physics, laser, quantum-states"
} |
Does $m$ in this problem refer to the total mass of the system or just the mass of a single body? | Question: I am trying to figure something out in this problem:
I am having trouble with these types of problems because often I don't understand which forces I need to consider when setting up $F=ma$.
Here is what I have got so far:
I separated the force of gravity $m_1 g$ and $m_2 g$ acting on the relative bodies into the horizontal and vertical components. Because I can assume there is no friction the vertical component doesn't matter. For the forces along the plane I get:
$$F=ma \iff F_1-F_T=ma \iff m_1g\sin{\theta_1}-F_T=\color{red}{ma}\\ F= ma \iff F_2-F_T=ma \iff m_2g\sin{\theta_2}-F_T=\color{red}{ma}$$
Here is my first question: Does the $\color{red}{m}$ in both equations refer to the total mass $m_1+m_2$ of the system or only to the mass I am setting up the equation for? For example, should the first equation read: $$m_1g\sin{\theta_1}-F_T=m_1 a_1?$$
My second question: If the only force pulling $m_1$ up the slope is $F_T$ (tension), why can't I just say $F_T=m_2g$? It seems to me that $m_2g$ is the only thing causing $F_T$.
I hope my questions make sense.
Answer: The first thing you should do in such a problem is decide on the system(s) that you are going to consider and you will then apply $F=ma$ to that system.
In this case there are two systems (mass $m_1$ and mass $m_2$) as shown in the diagram below which are constrained to move with the same acceleration $a$ if the string connecting them is inextensible.
The next thing to do is to draw a free-body diagram for each of the systems, which you have done by considering only the forces acting in directions parallel to the inclined plane.
Now apply Newton's second law to each of the systems taking the positive directions as shown by $x_1$ and $x_2$.
$F_1-F_T = m_1 a$ and $F_T-F_2 = m_2 a$
Having set up these two equations you can eliminate $F_T$.
$F_1-F_2 = (m_1+m_2)a$
Having obtained this equation you might think "Why did I not just choose both masses as the system?", as the equation looks like an application of Newton's second law to such a system.
The reason that you should not do this is because $F=ma$ is actually a vector equation $\vec F = m \vec a$ and by considering each mass separately the application of $\vec F = m \vec a$ is easily converted into a scalar equation.
For mass $m_1$ the two forces acting on the mass are $F_1 \hat x_1$ and $-F_T \hat x_1$ and the acceleration is $a \hat x_1$ where $\hat x_1$ is the unit vector in the $x_1$ direction.
$\vec F = m \vec a \Rightarrow F_1 \hat x_1-F_T \hat x_1 = m_1 a \hat x_1\Rightarrow F_1-F_T = m_1 a$
and a similar thing can be done for mass $m_2$.
For the two masses you could introduce a single coordinate system and apply Newton's second law but to get to the final result would be very convoluted and messy. | {
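As a quick numerical sanity check that the two single-mass equations and the eliminated form agree (the masses, angles, and $g$ below are made-up illustrative values):

```python
import math

m1, m2, g = 2.0, 1.0, 9.8
theta1, theta2 = math.radians(30), math.radians(45)

F1 = m1 * g * math.sin(theta1)   # gravity component along slope 1
F2 = m2 * g * math.sin(theta2)   # gravity component along slope 2

# Eliminating F_T:  F1 - F2 = (m1 + m2) a
a = (F1 - F2) / (m1 + m2)

# Recover the tension from the first equation and verify the second
F_T = F1 - m1 * a
print(abs((F_T - F2) - m2 * a) < 1e-9)  # True: both equations hold
```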
"domain": "physics.stackexchange",
"id": 52290,
"tags": "homework-and-exercises, newtonian-mechanics, forces, free-body-diagram, string"
} |
Do you travel in the 5th dimension? | Question: If you travel freely through the first 3 dimensions, and you travel in a singular direction through time, how do you travel through the other dimensions, like the 5th?
Answer: Higher dimensions are pretty hypothetical. When they do appear in theory, it is usually in string theory (of which I am far from an expert). My rough understanding, though, is that the higher dimensions are looped.
For comparison, imagine that the universe was a hypersphere. Moving in one direction for long enough would bring you back to the beginning, a bit like how moving in one direction on a sphere will bring you back to the beginning after you have gone around that sphere.
Now instead of a hypersphere, imagine a hyperellipse, where the lengths of the axes are such that you could take a nice walk in a straight line, and end up where you began without having turned around.
My understanding is that the higher dimensions mentioned in string theories are so small that they loop around in incredible small distances.
A lot of smart physicists have been looking into string theories for years now, but those models are far from being as solid as quantum field theory or general relativity.
That being said, in order to move in another dimension, you would have to change your motion in that direction, and so somehow apply a force between you and an object that was already displaced in that extra dimension from our usual position.
Since all known forces act in the usual dimensions, I'm not sure how that could be accomplished.
After all that, the only thing that comes to mind is this: If there were two of the usual 3+1 space-times separated by a 5th dimension, maybe (and this is getting into scifi territory) a wormhole could connect them. | {
"domain": "physics.stackexchange",
"id": 35410,
"tags": "spacetime, spacetime-dimensions, space-travel, time-travel"
} |
What are angular sizes of thin and thick disks in our Galaxy? | Question: What are the ranges of galactic coordinates for the thin and thick disks of our Galaxy?
Answer: The sun is usually taken as the center of the galactic coordinate system. Our solar system is well within the thin disk of our galaxy (population I stars). So taking the headline question literally, we see the thin disk as well as the surrounding thick disk in almost every direction, meaning any galactic latitude and longitude.
The radius interval where we find mostly stars of the thin or thick disk varies with angle. Along the galactic plane the radius interval spans the galaxy (up to about 30 kiloparsecs, varying a bit with direction, because we are not in the center of the galaxy) for the thin disk, containing the sun as origin; perpendicular to the galactic plane it's about 1 kiloparsec in both directions for the thin disk, followed by a respective radius interval from 1 to 3 kiloparsecs for the thick disk, if I follow the Wikipedia articles referenced in the question.
The region dominated by the thick disk is the farther away from us the closer we look along the galactic plane.
There are certainly no sharp boundaries between the star populations. Actually the thick disk penetrates the thin disk, but it's much sparser populated.
The Gaia project may give a much more detailed answer about the distribution of stars within our galaxy in the next 5 to 10 years. | {
"domain": "astronomy.stackexchange",
"id": 240,
"tags": "milky-way, coordinate, size"
} |
Recursive LAMBDA() function to create a formula that adds internal separators to a string in Excel | Question: I have created a named function with signature PadInternal(base, width, paddingStr) where:
base is a string you want to add padding to
width is the length of the individual chunks
paddingStr is the string to pad with
called from a cell like:
PadInternal("Hello", 1, " ") = "H e l l o"
PadInternal("World", 3, "**") = "Wor**ld"
And here's the function:
Tag
Description
Name
PadInternal
Scope
Workbook
Comment
base : a string you want to add padding to | width : the length of the individual chunks | paddingStr : the string to pad with
Refers To
=LAMBDA(base,width,paddingStr, IF(LEN(base)<=width, base, LET(LHS, LEFT(base, width), RHS, RIGHT(base, LEN(base) - width), LHS & paddingStr & PadInternal(RHS,width,paddingStr))))
=LAMBDA(
base,
width,
paddingStr,
IF(
LEN(
base
) <= width,
base,
LET(
LHS,
LEFT(
base,
width
),
RHS,
RIGHT(
base,
LEN(
base
) - width
),
LHS & paddingStr &
PadInternal(
RHS,
width,
paddingStr
)
)
)
)
Questions
As this is my first time using recursive Lambda functions in Excel, I'd like some feedback. In particular:
Is my algorithm efficient - I was thinking something with TEXTJOIN may be faster?
How could this be improved to take a dynamic array as "base"?
Can I have default values for the arguments?
What about meta data (formatting of the tooltip, scope etc), are there other ways to make my function more accessible?
Answer: I thought about this for a while and came up with some (what I think are) improvements:
=LAMBDA(base,width,paddingStr,TEXTJOIN(paddingStr,,MID(base,SEQUENCE((LEN(base)/width)+1,,1,width),width)))
This would:
No longer require a recursive LAMBDA() which should prove to be faster (untested)
Uses TEXTJOIN() which can handle errors and empty values internally so there is no longer any worry about that.
The use of MID() negates the use of LEFT(), RIGHT() etc.
No longer be using LET() which may also be using internal memory. | {
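The chunk-and-join logic is easy to sanity-check outside Excel; here is a Python sketch of the same behaviour (not Excel code, just a model of it):

```python
def pad_internal(base, width, padding_str):
    """Split base into width-sized chunks and join them with padding_str."""
    chunks = [base[i:i + width] for i in range(0, len(base), width)]
    return padding_str.join(chunks)

print(pad_internal("Hello", 1, " "))   # H e l l o
print(pad_internal("World", 3, "**"))  # Wor**ld
```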
"domain": "codereview.stackexchange",
"id": 40288,
"tags": "excel, lambda"
} |
Pseudo Random Number Generator | Question: I recently watched this video about the random number generation in Super Mario World. The technique that is used, as seen in the image below, multiplies one of the seeds by 5 and adds 1, the other seed is multiplied by 2, and then depending on whether the 4th and 7th bits are the same, 1 is added. So as stated in the video, the sequence of numbers will repeat after 27776 successive calls.
In my implementation of a pseudo random number generator, I have used 16 bit values for the two seeds to allow for a greater range of numbers, and my get_rand() function returns the two 16 bit strings joined together, resulting in a 32 bit number. This means that the sequence of numbers repeats after 526838144 successive calls, which is far greater than that achieved with the pseudo random number generator used in Super Mario World.
I have also created a rand_int() and a random() function that allow for better use of the numbers generated; these simply scale the number returned by get_rand() using the ratio of 2 ** 32 to the size of the requested integer range.
The PRNG Test.py is there only so that I can make sure that all the functions work as expected, and it seems to provide evenly split pseudo random numbers. So I am just after a review of the PRNG.py file, as it is that which I would like to optimize and improve.
PRNG.py
#Make sure Seeds.txt exists
try:
file = open('Seeds.txt')
except FileNotFoundError:
file = open('Seeds.txt', 'a+')
file.write('0\n0')
#Gets the values of the seeds from the file
values = file.readlines()
file.close()
S = int(values[0].rstrip('\n'))
T = int(values[1])
def seed(seed_value):
'''Resets the seed for the PRNG to make values predictable'''
global S, T
with open('Seeds.txt', 'w') as file:
file.write(str(seed) + '\n' + str(seed))
file.close()
S = seed
T = seed
def update_seeds(S, T):
'''Generates the next two seeds'''
S = 5 * S + 1
try: bit_11 = '{0:b}'.format(T)[-11]
except IndexError: bit_11 = '0'
try: bit_16 = '{0:b}'.format(T)[-16]
except IndexError: bit_16 = '0'
if bit_11 == bit_16: T = 2 * T + 1
else: T = 2 * T
return S, T
def get_rand(): #Has 526838144 Possible numbers
'''Produces a random number in the range 0 to 2 ** 32'''
global S, T
S, T = update_seeds(S, T)
S = int('{0:b}'.format(S)[-16:], 2)
T = int('{0:b}'.format(T)[-16:], 2)
K = '{0:b}'.format(S ^ T)
S, T = update_seeds(S, T)
S = int('{0:b}'.format(S)[-16:], 2)
T = int('{0:b}'.format(T)[-16:], 2)
J = '{0:b}'.format(S ^ T)
with open('Seeds.txt', 'w') as file:
file.write(str(S) + '\n' + str(T))
file.close()
for i in range(16 - len(K)): K = '0' + K
for i in range(16 - len(J)): J = '0' + J
return int(K + J, 2)
def rand_int(a, b):
'''Produces in a random integer in the range a to b'''
difference = (b + 1) - a
factor = 2 ** 32 / difference
return a + int(get_rand() / factor)
def random():
'''Returns a random float between 0 and 1'''
return get_rand() / 2 ** 32
PRNG Test.py
import PRNG
l = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
for i in range(10000):
number = PRNG.rand_int(0, 9)
l[number] += 1
for i in l:
print(str(round(i / 10000 * 100, 2)) + '% :', i)
Answer: At the end I present some bit-manipulation tricks, but first some comments on the style of your script:
Docstrings usually use the double quote, """A docstrings""" – This is the first time I've seen docstrings using single quotes. PEP-0257 uses double-quotes all over.
Don't collapse try...except statements – It doesn't look good to collapse these statements. For example this one:
try: bit_16 = '{0:b}'.format(T)[-16]
except IndexError: bit_16 = '0'
would read a lot better as:
try:
bit_16 = '{0:b}'.format(T)[-16]
except IndexError:
bit_16 = '0'
Don't collapse if or for blocks, either – These are even worse to read, so please don't do:
if bit_11 == bit_16: T = 2 * T + 1
else: T = 2 * T
Insert the few extra newlines, and make it readable:
if bit_11 == bit_16:
T = 2 * T + 1
else:
T = 2 * T
On second looks, when it is readable, this could actually be rewritten to:
T = 2 * T + (bit_11 == bit_16)
In seed() you could use str.format() with reusable inputs – Your seed writing could become:
file.write("{0}\n{0}".format(str(seed)))
Not sure if you even need the str() in there... And a neat trick to get leading zeroes in the bit strings: "{:016b}".format(K).
Is seed() used, and does it work? – Within this method the variable seed is used, but which variable is that? And is this method used at all, or is it replaced by the writing of the file within get_rand()?
Use with(open...) at module initialization also – At the start of the module I would also use the with() construct. And there exist proper methods to test whether the file exists.
And if you insist on trying to open it manually, you could just as well complete the file reading and/or writing within it. Do however be aware of the possibility of the file write to also throw an exception if you're not allowed to write the file or similar errors.
Some basic bit manipulations
Get the final 16 bits of a number: S & 0xffff
Set a given bit \$n\$ (one-based, counting from the right): 1 << (n-1)
Extract a bit, and move to 0th position: (S >> (n-1)) & 1
Using this knowledge the get_rand() could be rewritten to:
def get_rand():
global S, T
S, T = update_seeds(S, T)
S = S & 0xffff
T = T & 0xffff
K = S ^ T
S, T = update_seeds(S, T)
S = S & 0xffff
T = T & 0xffff
J = S ^ T
with open('Seeds.txt', 'w') as file:
file.write("{}\n{}".format(S, T))
return int("{:016b}{:016b}".format(K, J), 2)
And with an extra helper method, the update_seeds could become:
def extract_bit(value, n):
    """Extract the n'th bit, and move back to 0 position."""
    return (value >> (n - 1)) & 1

def update_seeds(S, T):
    '''Generates the next two seeds'''
    return (5 * S + 1,
            2 * T + (extract_bit(T, 11) == extract_bit(T, 16)))
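As a quick check that the mask-based helper agrees with the original string-slicing approach (a standalone snippet, redefining the helper for self-containment):

```python
def extract_bit(value, n):
    """Extract the n'th bit (1-based from the right), moved to position 0."""
    return (value >> (n - 1)) & 1

for t in (0, 1, 0b10000010000, 123456, 65535):
    s = '{0:b}'.format(t)
    old_bit_11 = s[-11] if len(s) >= 11 else '0'   # the original approach
    assert int(old_bit_11) == extract_bit(t, 11)
print("mask-based bits match the string-based bits")
```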
"domain": "codereview.stackexchange",
"id": 25620,
"tags": "python, python-3.x, random"
} |
How a particle moves against a pressure gradient? | Question:
This is a photo of a compressor together with a diffuser. These engine parts convert the kinetic energy of the air to static pressure and supply air to internal combustion engines.
The somewhat complicated geometry is due to the fact that air moves against a pressure gradient and the designer wants to avoid flow separation.
My question is why air moves at all?
If force is the reason of acceleration, how is possible to accelerate against a force?
Answer: I will try to answer my own question.
The wheel is rotating. The rotating wheel is a non-inertial accelerating reference frame. The fictitious forces that arise in accelerating reference frames account for the force that pushes against static pressure.
Similar to when someone inflates a balloon: a field of velocities supports a field of static pressures. | {
"domain": "physics.stackexchange",
"id": 29953,
"tags": "newtonian-mechanics"
} |
Electric field strength away from a negative spherical charge | Question: I was wonder how would a graph of electrical field strength away from a spherical -ve charge graph would look. Since E= potential gradient I was guessing that since potential increases away from a negative charge , electric field strength would increase away from a negative charge also ?
Answer: The potential does increase. This means that the gradient of potential plotted against distance, r, from the charge is always positive. But the gradient, $\frac{dV}{dr},$ keeps decreasing in magnitude – just sketch the graph! The field strength in the r direction is given by$$E=\ –\frac{dV}{dr},$$so the field is in the –r direction and decreases in magnitude the further we go from the negative charge. | {
"domain": "physics.stackexchange",
"id": 53698,
"tags": "electrostatics, electric-fields"
} |
Test if arithmetic operation will cause undefined behavior | Question: I've written a command line calculator as an exercise to get ready for my upcoming first programming job. The whole program source is around 450 lines of code, which I think is too long for a single question. I'll therefore post the code in parts starting with this one.
Any software needs to validate its data. The functions below are used to test whether a basic arithmetic operation on two signed integers a and b will cause undefined behavior or not. They are checking if the result would be out of bounds and for division by zero. If they return zero the operation is then carried out and the result returned. If they return non-zero, an error message is printed out to the user. The logic of these functions is taken from this page of the SEI CERT C Coding Standard.
My main concern with this code is the is_undefined_mult function, which I don't think is very readable.
int is_undefined_add(int a, int b)
{
return (a > 0 && b > INT_MAX - a) ||
(a < 0 && b < INT_MAX - a);
}
int is_undefined_sub(int a, int b)
{
return (b > 0 && a < INT_MAX + b) ||
(b < 0 && a > INT_MAX + b);
}
int is_undefined_mult(int a, int b)
{
if (a > 0) {
if (b > 0) {
if (a > INT_MAX / b) {
return 1;
}
}
else {
if (b < INT_MIN / a) {
return 1;
}
}
}
else {
if (b > 0) {
if (a < INT_MIN / b) {
return 1;
}
}
else {
if (a != 0 && b < INT_MAX / a) {
return 1;
}
}
}
return 0;
}
int is_undefined_div(int a, int b)
{
return b == 0 || (a == INT_MIN && b == -1);
}
Answer: OP edited post.
Is the description backwards?
If they return non-zero the operation is then carried out and the result returned. If they return zero, an error message is printed out to the user.
int is_undefined_mult(int a, int b) {
if (a > 0) {
if (b > 0) {
if (a >= INT_MAX / b) {
return 1; // 1 is the overflow condition here.
This hints that using a macro or enumerated result would be less error prone than 0 or 1.
a == INT_MIN && b == -1 is only a problem with 2's complement:
int is_undefined_div(int a, int b) {
#if INT_MIN < -INT_MAX
if (a == INT_MIN && b == -1) return 1;
#endif
return b == 0;
}
As @JS1 noted, replace INT_MAX with INT_MIN in 2 places. Otherwise, per my tests, the functions are correct, excepting is_undefined_div().
[Edit]
Candidate is_undefined_mult() simplification - really just a collapsing of the if structure.
int is_undefined_mult1(int a, int b) {
if (a > 0) {
if (b > 0) {
return a > INT_MAX / b; // a positive, b positive
}
return b < INT_MIN / a; // a positive, b not positive
}
if (b > 0) {
return a < INT_MIN / b; // a not positive, b positive
}
return a != 0 && b < INT_MAX / a; // a not positive, b not positive
}
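Since the C logic divides with truncation toward zero, a Python cross-check (a sketch, using a truncating-division helper because Python's `//` floors) can brute-force this predicate against exact big-integer arithmetic over the boundary values:

```python
INT_MAX, INT_MIN = 2**31 - 1, -2**31

def cdiv(x, y):
    # C-style division: truncates toward zero (Python's // floors).
    q = abs(x) // abs(y)
    return q if (x < 0) == (y < 0) else -q

def is_undefined_mult(a, b):
    # Transcription of the collapsed C logic above.
    if a > 0:
        if b > 0:
            return a > cdiv(INT_MAX, b)      # a positive, b positive
        return b < cdiv(INT_MIN, a)          # a positive, b not positive
    if b > 0:
        return a < cdiv(INT_MIN, b)          # a not positive, b positive
    return a != 0 and b < cdiv(INT_MAX, a)   # a not positive, b not positive

edge = [INT_MIN, INT_MIN + 1, -46341, -2, -1, 0, 1, 2, 46341, INT_MAX - 1, INT_MAX]
for a in edge:
    for b in edge:
        exact_overflow = not (INT_MIN <= a * b <= INT_MAX)
        assert is_undefined_mult(a, b) == exact_overflow, (a, b)
```

(46341 is just past the square root of INT_MAX, so it exercises the multiplication boundary from both sides.)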
[Edit2]
A potential simplification with is_undefined_add()/is_undefined_sub().
int is_undefined_add1(int a, int b) {
return (a < 0) ? (b < INT_MIN - a) : (b > INT_MAX - a);
}
int is_undefined_sub1(int a, int b) {
return (b < 0) ? (a > INT_MAX + b) : (a < INT_MIN + b);
} | {
"domain": "codereview.stackexchange",
"id": 14145,
"tags": "c, integer"
} |
Mean squared displacement of a particle on a biased random walk | Question: Given a particle on a 1-D random walk with some drift velocity $\nu_d = \frac{\Delta x_d}{\Delta t}$, the position at some time step j is given by $$x_j=x_{j-1}+k_j L + \Delta x_d$$ where $L$ is the step length and $k_j \in \{-1,+1\}$ with equal probability. I tried calculating the mean square displacement using
$$\langle x_N^2 \rangle = \langle (x_{N-1} + k_N L + \Delta x_d)^2 \rangle = \langle x_{N-1}^2 \rangle + 2\Delta x_d \langle x_{N-1} \rangle + L^2 + \Delta x_d^2$$
Solving this recurrence gives that the mean squared displacement is exponential with respect to time, although intuitively, and when I graph it, it should be quadratic with respect to time. Where am I going wrong?
Answer: The easiest approach is to transform into coordinates that comove with the drift, i.e.
$$\tilde x_j = x_j - \Delta x_d\, j.$$
In those coordinates $\tilde x_j$, the problem reduces to a standard (fixed-step) random walk.
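For what it's worth, iterating the question's pair of moment recurrences numerically (with hypothetical values for $L$ and $\Delta x_d$) confirms that the mean squared displacement grows quadratically in the number of steps, $\langle x_N^2\rangle = N L^2 + (N\,\Delta x_d)^2$ for $x_0 = 0$, not exponentially:

```python
L, d = 1.0, 0.25      # step length and drift per step (hypothetical values)
m1, m2 = 0.0, 0.0     # <x_j> and <x_j^2>, starting from x_0 = 0
N = 1000
for _ in range(N):
    m2 = m2 + 2 * d * m1 + L**2 + d**2   # second-moment recurrence
    m1 = m1 + d                          # first-moment recurrence

# Quadratic, not exponential, in N:
assert abs(m2 - (N * L**2 + (N * d)**2)) < 1e-6
```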
Nevertheless, your approach is also fine. You likely made an error when solving the recurrence relation. Note that the recurrence relation consists of two equations,
$$\langle x_j\rangle = \langle x_{j-1}\rangle + \Delta x_d,$$
$$\langle x_j^2 \rangle = \langle x_{j-1}^2 \rangle + 2\Delta x_d \langle x_{j-1} \rangle + L^2 + \Delta x_d^2.$$ | {
"domain": "physics.stackexchange",
"id": 97778,
"tags": "biophysics, statistics, diffusion, stochastic-processes"
} |
Set cartesian limits with moveit! | Question:
Hello! I'm working with moveit in ros melodic and a Hyundai robot. When I generated the robot files I found a file labeled cartesian_limits.yaml as show below:
cartesian_limits:
max_trans_vel: 18
max_trans_acc: 20
max_trans_dec: -30
max_rot_vel: 15.7
Is it possible to set cartesian limits with moveit?
I want the robot to not move outside a defined box. Something like:
x_min_limit = -0.4
x_max_limit = 1.5
My Hyundai robot has this function and I want to implement it in the simulation to avoid weird paths.
Is it possible to set this kind of function in cartesian_limits.yaml file?
Originally posted by mth_sousa on ROS Answers with karma: 35 on 2022-07-08
Post score: 0
Answer:
Nope, that cartesian_limits.yaml is for velocities and accelerations and it's only used for the Pilz motion planner. We probably should rename that file to make it more clear - thanks for the reminder.
On ROS2 Rolling or Humble you can use constrained planning to do what you want. Tutorial here:
https://moveit.picknik.ai/main/doc/examples/planning_with_approximated_constraint_manifolds/planning_with_approximated_constraint_manifolds_tutorial.html
To be honest I'm not sure if the "constraint manifolds" planning works in ROS1. Probably not Melodic since it's quite old now.
Originally posted by AndyZe with karma: 2331 on 2022-07-08
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by AndyZe on 2022-07-08:
I would also recommend trying the PRM* planner rather than the default RRTConnect. RRTConnect is fast to plan but it produces wild paths often.
Comment by mth_sousa on 2022-07-08:
Thanks for your answer! I will check PRM* | {
"domain": "robotics.stackexchange",
"id": 37835,
"tags": "ros, moveit, ros-melodic, cartesian"
} |
In a frequency comb, how is $n$ determined? | Question: A frequency comb from a mode-locked laser produces a series of spectral lines with $f_n = nf_r + f_{ce}.$
$f_r$ is the frequency of pulses coming out of the laser and can be measured directly via an electronic counter. There is a trick for measuring $f_{ce}$ involving squaring the electric field and counting a beat frequency.
How is $n$ measured? For example, is the light put through a spectroscope giving a measurement of $\lambda_n$ that is accurate enough that we can infer $n$ from it, because the error $\Delta \lambda / \lambda < f_r / f_n?$
Answer: You basically just count the lines.
You use a high-precision spectrometer (i.e. one whose instrument linewidth is smaller than that of your comb) and you get the optical spectrum of the beam. This will get you a direct measurement of
$$
k_n = 2\pi/\lambda_n = 2\pi \ f_n/c
$$
for a bunch of lines spanning (say) $10\,000\leq n \leq 20\,000$. Generally this spectrum won't go all the way down to the DC regime (hence the lower bound on $n$) but it will get you a sharp enough measurement on the $k_n$ and therefore on $\Delta k = k_{n+1}-k_n$ (which will be constant throughout) that you can extrapolate the set of $k_n$'s that you do observe down to $n=0$ and keep enough precision in your extrapolation that the uncertainty in $n$ from the extrapolation is smaller than $1$ and therefore zero. | {
"domain": "physics.stackexchange",
"id": 53328,
"tags": "optics, spectroscopy"
} |
Understanding PDA and Equivalence of PDA and CFG | Question: When we wanted to construct a PDA for $0^n1^n$ the idea was to put all the zeroes (which is a part of the input string) to the stack associated with the PDA, and then pop each of them when we get a $1$ from the latter part of the input.
But when we try to prove that we can create a PDA for a given CFG, we put nonterminals and terminals in the stack and try to match them with the input and pop from the stack.
Why do we do something like this? For some problems we push part of the input to the stack and match the rest against it, while for other problems we do not push any input symbols and only use the input to compare?
Maybe I am missing some intuitive part of it.
Answer: Your question is, in essence,
Why does the proof that every context-free grammar can be converted to an equivalent PDA proceed in a particular way rather than in another way?
It's hard to answer such a question unless it gets more specific. For example, you can ask why the resulting PDA have to invoke nondeterminism. The answer is that some context-free languages cannot be accepted by a DPDA. Indeed, while this particular proof uses PDAs acting in a certain way, another proof might use PDAs acting in a different way.
One alternative such proof uses the Chomsky–Schützenberger representation theorem. The theorem states that every context-free language can be realized as
$$ h(D \cap R), $$
where $D$ is a Dyck language (the language of all correctly nested strings of parentheses of some fixed number of sorts), $R$ is a regular language, and $h$ is a homomorphism. This theorem, which can be proved directly using context-free grammars (see, for example, Context-free languages and pushdown automata by Autebert, Berstel and Boasson, from the Handbook of Formal Languages), allows one to convert a context-free grammar to a PDA which is more like the one for $0^n 1^n$, along the following lines:
Start with a PDA for $D$. This is a PDA that pushes whenever encountering a left parenthesis, and pops whenever encountering a right parenthesis (checking that the two parentheses, the one on the stack and the one being read, have the same type).
Construct a DFA/NFA for $R$, and use the product construction to construct a PDA for $D \cap R$.
Construct a PDA for $h(D \cap R)$ by replacing each transition on $\sigma$ to a transition on $h(\sigma)$.
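For intuition, here is a minimal sketch (not part of the formal construction) of how the resulting PDA behaves for $0^n1^n = D \cap 0^*1^*$: push on the "left parenthesis" 0, pop on the "right parenthesis" 1, with a flag enforcing the regular constraint $0^*1^*$:

```python
def accepts(s):
    """Stack-based check for 0^n 1^n (n >= 0)."""
    stack = []
    seen_one = False          # enforces the regular part R = 0*1*
    for ch in s:
        if ch == '0':
            if seen_one:      # a 0 after a 1 violates 0*1*
                return False
            stack.append('0')
        elif ch == '1':
            seen_one = True
            if not stack:     # more 1s than 0s
                return False
            stack.pop()
        else:
            return False
    return not stack          # every push matched by a pop

assert accepts("") and accepts("01") and accepts("000111")
assert not accepts("001") and not accepts("010") and not accepts("10")
```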
If you apply this construction to $0^n1^n$ (which you can realize as $D \cap 0^*1^*$, where $D$ is the Dyck language with a single type of left parenthesis $0$ and the corresponding right parenthesis $1$) then you get a PDA which is remarkably similar to the one you describe. | {
"domain": "cs.stackexchange",
"id": 10568,
"tags": "automata, context-free, pushdown-automata"
} |
Maximum number of contacts Gazebo ROS | Question:
Where and how can we set maximum number of contacts in Gazebo?
By default the max. num. of contacts is 20 "rosparam get /gazebo/max_contacts"
Setting the parameter to any other ( ex. 100) " rosparam set /gazebo/max_contacts 100 "
does not affect the simulation results; I am always getting 20 contacts using the gazebo plugin bumpers.
Originally posted by Nomad on ROS Answers with karma: 53 on 2015-03-31
Post score: 0
Original comments
Comment by Nomad on 2015-04-06:
Calling service rosservice call /gazebo/set_physics_properties and setting max_contacts to 100 does not work. After calling service rosservice call /gazebo/get_physics_properties the max_contacts is still 20.
Comment by GuiHome on 2015-04-08:
in https://bitbucket.org/osrf/gazebo/src/e01e520d15603b87efaffa26817bf6224f00f320/gazebo/physics/Contact.hh?at=gazebo_2.2
#define MAX_CONTACT_JOINTS 32
might be one reason which limits the number of contacts generated ?
Answer:
I think gazebo would react to a service call, not to a parameter change on the parameter server
maybe try using the
rosservice call /gazebo/set_physics_properties
and copy the current properties that you get with
rosservice call /gazebo/get_physics_properties
something like
rosservice call /gazebo/set_physics_properties "time_step: 0.001
max_update_rate: 1000.0
gravity:
x: 0.0
y: 0.0
z: 0.0
ode_config:
auto_disable_bodies: False
sor_pgs_precon_iters: 0
sor_pgs_iters: 50
sor_pgs_w: 1.3
sor_pgs_rms_error_tol: 0.0
contact_surface_layer: 0.001
contact_max_correcting_vel: 100.0
cfm: 0.0
erp: 0.2
max_contacts: 100"
Originally posted by GuiHome with karma: 242 on 2015-03-31
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Nomad on 2015-04-01:
@guiHome the objects in Gazebo freeze after calling this service. However in RVIZ, I see that frames move and there even collisions. It seems that simulation is working but the visualization in Gazebo fails. The maximum number of contacts is still 20.
Comment by GuiHome on 2015-04-08:
my suggested parameters have no gravity and settings that work on my side. You should copy the settings that worked for you using the get_physics_properties
I double checked and indeed the maximum contact does not change after setting it | {
"domain": "robotics.stackexchange",
"id": 21306,
"tags": "ros, gazebo, parameter"
} |
Calculating number of atoms per cell in a crystal. Nonsense question in class | Question: I don't agree at all with this question, and I think my teacher does not understand his topic. I hope you can prove me wrong.
The question was this one:
Potassium crystallises in a Body Centered Cubic way. Its density is 0.853 g/cm3, its molar mass is 39.9 g/mol.
Calculate the number of atoms per cell and its atomic packing factor.
My argument is this:
This question makes no sense because BCC means it has 2 atoms per cell and an atomic packing factor of 0.68. You might say: That's the theoretical model; what we want you to calculate is the actual thing.
And I would say: Well, you surely can't do it with the regular formula, because that formula depends on a perfectly organised crystal and even makes reference to the relation 4r=(3)^.5*a, which depends on a regular lattice.
When I do the calculations the way you want I get 1.929 atoms per cell, which is absurd. Does this mean some cells have more atoms than others? Well, if that's implied then there's no way to talk about a unit cell and make reference to the 4r=(3)^.5*a relationship I mentioned. Another interpretation of this figure (1.929) is that maybe atoms are of varying mass, and the mathematics is telling you atoms are incomplete. But I think that too is absurd.
How does one interpret these things? Or is it in fact nonsense (they question they gave me) ?
Thanks a lot.
Answer: Yes, that certainly seems like a rather odd question. Given that the crystal is BCC, that information alone means that there are 2 atoms per cell, and the atomic packing factor is $\pi\sqrt{3}/8\approx 0.680$. The density and molar mass are irrelevant additional pieces of information.
A sensible thing to ask for given those three pieces of information (BCC structure, density and molar mass) would be the implied atomic radius.
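A sketch of that radius calculation with the question's numbers (note the tabulated molar mass of potassium is closer to 39.10 g/mol, but we use 39.9 as given):

```python
N_A = 6.02214076e23            # Avogadro's number, 1/mol
rho, M = 0.853, 39.9           # density in g/cm^3, molar mass in g/mol

V_cell = 2 * M / (rho * N_A)   # BCC has exactly 2 atoms per cell -> cm^3
a = V_cell ** (1 / 3)          # cubic cell edge, cm
r = 3**0.5 / 4 * a             # BCC contact condition: 4r = sqrt(3) * a

print(f"a = {a * 1e10:.0f} pm, r = {r * 1e10:.0f} pm")
```

This comes out around a ≈ 538 pm and r ≈ 233 pm, in the neighbourhood of potassium's tabulated metallic radius, which is about the only sanity check this data supports.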
You don't say how the 1.929 atoms/cell was calculated, but if it was calculated from an atomic radius (which wasn't specified as a part of the problem), it wouldn't make sense to calculate things in that direction, because the 2 atoms per cell implied by the BCC crystal structure is a more precise piece of available information than what a table gives for potassium's atomic radius. Crystals do have crystallographic defects, and a metal like potassium is going to actually be polycrystalline instead of being one solid crystal, but I'm presuming you aren't being expected to make calculations based on those considerations, and even if you were, the calculations presumably would be expressed in a different way than as a non-integer number of atoms per cell. | {
"domain": "physics.stackexchange",
"id": 16664,
"tags": "homework-and-exercises, crystals"
} |
Is there any work underway to push the long baseline capabilities of the Event Horizon Telescope to sub-millimeter wavelengths? | Question: The Max Planck Institute for Radio Astronomy's press release Something is Lurking in the Heart of Quasar 3C 279; First Event Horizon Telescope Images of a Black-Hole Powered Jet shows a stunning montage of three Event Horizon telescope images at 7, 3 and 1.3 mm wavelengths (43, 86 and 230 GHz) demonstrating how the highest frequency in combination with the planet-sized baselines work together to produce observations at "an extreme 20 microarcsecond resolution", quoting the title of the April 5 2020 Astronomy and Astrophysics paper Kim et al. 2020 Event Horizon Telescope imaging of the archetypal blazar 3C 279 at an extreme 20 microarcsecond resolution.
From How does ALMA produce stable, mutually coherent ~THz local oscillators for all of their dishes? I know that ALMA's receivers can go as far as about 950 GHz.
Question: Is there work underway to increase the number of radiotelescope sites around the Earth with circa 1 THz receivers to push the long baseline capabilities of the Event Horizon Telescope to sub-millimeter wavelengths?
Answer:
Is there work underway to increase the number of radiotelescope sites around the Earth with circa 1 THz receivers to push the long baseline capabilities of the Event Horizon Telescope to sub-millimeter wavelengths?
Yes, regarding the Event Horizon Telescope network itself, this recent paper evaluates the efficacy of 40 new sites for observing submillimeter wavelengths at $\sim$ THz (really $\approx$ 300 GHz). They conclude that "A group of new sites with favorable transmittance and geographic placement leads to greatly enhanced imaging and science on horizon scales."
On a longer developmental timescale, there's also this white paper which proposes a space-based observatory TeraHertz Exploration and Zooming-in for Astrophysics (THEZA), which would aim to observe the THz regime. From the abstract:
The concept will open up a sizeable range of hitherto unreachable parameters of observational astrophysics. It unifies two major lines of development of space-borne radio astronomy of the past decades: Space VLBI (Very Long Baseline Interferometry) and mm- and sub-mm astrophysical studies with "single dish" instruments. It also builds upon the recent success of the Earth-based Event Horizon Telescope (EHT) -- the first-ever direct image of a shadow of the super-massive black hole in the centre of the galaxy M87. As an amalgam of these three major areas of modern observational astrophysics, THEZA aims at facilitating a breakthrough in high-resolution high image quality studies in the millimetre and sub-millimetre domain of the electromagnetic spectrum. | {
"domain": "astronomy.stackexchange",
"id": 6197,
"tags": "black-hole, radio-astronomy, radio-telescope, event-horizon-telescope"
} |
Different Rotation of Earth. Consequences? | Question: What if the rotation of the Earth about its own axis was in the direction North-South (instead of East-West) while the rotation about the sun remained as it is now? What would the consequences be other than "time zones" (meaning exposure to sunlight) would be fixed in horizontal sections/frames? I guess the main differences would be for countries that are thin in an either North-South sense, like Cuba, or in an East-West sense, like Chile. Is there a reason for why the rotation is in an East-West sense?
Answer: Don't worry about individual countries - there won't be any.
The earth's axis of rotation is close to perpendicular to the orbital plane around the sun (offset by 23 degrees). This means that the poles are always cold, and the equator is always warm. Except for the arctic/antarctic circles, sunlight falls everywhere on the planet every day.
If the earth's axis of rotation is parallel to its orbital plane, this changes everything. As the earth orbits the sun, you'll have one pole pointed directly at the sun, and 6 months later, the other pole will be pointed directly at the sun. Half of the world will be in complete darkness, and the other half will be burned by constant sunlight, before gradually swapping to be the other way around over the course of half a year.
A day/night cycle will take an entire year, not 24 hours. Life would likely have evolved very differently on a planet like this, so the concerns about time zones and country-level effects aren't what you should be worried about. Humans may not have evolved at all on a planet with such harsh, prolonged, and inescapable extremes. | {
"domain": "physics.stackexchange",
"id": 62379,
"tags": "earth"
} |
Tic-Tac-Toe in C++11 - follow-up 2 | Question: Previous question:
Tic-Tac-Toe in C++11 - follow-up
Is there any way to improve this code?
#include <iostream>
#include <cctype>
#include <algorithm>
#include <functional>
#include <array>
enum struct Player : char
{
none = '-',
first = 'X',
second = 'O'
};
std::ostream& operator<<(std::ostream& os, Player p)
{
return os << static_cast<char>(p);
}
enum struct Type : int
{
row = 0,
column = 1,
diagonal = 2
};
enum struct Lines : int
{
first = 0,
second = 1,
third = 2
};
class TicTacToe
{
public:
TicTacToe();
bool isFull() const;
void draw() const;
void turn(Player player);
bool check(Player player) const;
private:
bool applyMove(Player player, int position);
static const std::size_t mDim = 3;
std::array<Player, mDim * mDim> mGrid;
};
// utility functor to compute matching condition
template<int dim>
struct Match
{
Match(Type t, Lines i) : mCategory(t), mNumber(i){}
bool operator() (int number) const
{
switch (mCategory)
{
case Type::row:
return (std::abs(number / dim) == static_cast<int>(mNumber));
case Type::column:
return (number % dim == static_cast<int>(mNumber));
case Type::diagonal:
if (mNumber == Lines::first)
return ((std::abs(number / dim) - number % dim) == static_cast<int>(mNumber));
else
return ((std::abs(number / dim) + number % dim) == static_cast<int>(mNumber));
}
return false;
}
Type mCategory;
Lines mNumber;
};
TicTacToe::TicTacToe()
{
mGrid.fill(Player::none);
}
bool TicTacToe::applyMove(Player player, int position)
{
if (mGrid[position] != Player::none)
return false;
mGrid[position] = player;
return true;
}
bool TicTacToe::isFull() const
{
return 0 == std::count_if(mGrid.begin(), mGrid.end(),
[](Player i)
{
return i == Player::none;
});
}
bool TicTacToe::check(Player player) const
{
// check for row or column wins
std::array<bool, 8> win;
win.fill(true);
int j = 0;
// checking condition loop
std::for_each(mGrid.begin(), mGrid.end(),
[&](Player i)
{
int x = j++;
// columns
if (Match<mDim>(Type::column, Lines::first)(x))
win[0] &= i == player;
if (Match<mDim>(Type::column, Lines::second)(x))
win[1] &= i == player;
if (Match<mDim>(Type::column, Lines::third)(x))
win[2] &= i == player;
// rows
if (Match<mDim>(Type::row, Lines::first)(x))
win[3] &= i == player;
if (Match<mDim>(Type::row, Lines::second)(x))
win[4] &= i == player;
if (Match<mDim>(Type::row, Lines::third)(x))
win[5] &= i == player;
// diagonals
if (Match<mDim>(Type::diagonal, Lines::first)(x))
win[6] &= i == player;
if (Match<mDim>(Type::diagonal, Lines::third)(x))
win[7] &= i == player;
});
for (auto i : win)
{
if (i)
return true;
}
return false;
}
void TicTacToe::draw() const
{
//Creating a onscreen grid
std::cout << ' ';
for (auto i = 1; i <= mDim; ++i)
std::cout << " " << i;
int j = 0;
char A = 'A';
for (auto i : mGrid)
{
if (Match<mDim>(Type::column, Lines::first)(j++))
std::cout << "\n " << A++;
std::cout << ' ' << i << ' ';
}
std::cout << "\n\n";
}
void TicTacToe::turn(Player player)
{
char row = 0;
char column = 0;
std::size_t position = 0;
bool applied = false;
std::cout << "\n" << player << ": Please play. \n";
while (!applied)
{
std::cout << "Row(1,2,3,...): ";
std::cin >> row;
std::cout << player << ": Column(A,B,C,...): ";
std::cin >> column;
position = mDim * (std::toupper(column) - 'A') + (row - '1');
if (position < mGrid.size())
{
applied = applyMove(player, position);
if (!applied)
std::cout << "Already Used. Try Again. \n";
}
else
{
std::cout << "Invalid position. Try again.\n";
}
}
std::cout << "\n\n";
}
class Game
{
public:
Game() = default;
void run();
private:
TicTacToe mTicTacToe;
std::array<Player, 2> mPlayers{ { Player::first, Player::second } };
int mPlayer = 1;
void resultScreen(bool winner);
std::function<void()> display = std::bind(&TicTacToe::draw, &mTicTacToe);
std::function<void(Player)> turn = std::bind(&TicTacToe::turn, &mTicTacToe, std::placeholders::_1);
std::function<bool(Player)> win = std::bind(&TicTacToe::check, &mTicTacToe, std::placeholders::_1);
std::function<bool()> full = std::bind(&TicTacToe::isFull, &mTicTacToe);
};
void Game::run()
{
while (!win(mPlayers[mPlayer]) && !full())
{
mPlayer ^= 1;
display();
turn(mPlayers[mPlayer]);
}
resultScreen(win(mPlayers[mPlayer]));
}
void Game::resultScreen(bool winner)
{
display();
if (winner)
{
std::cout << "\n" << mPlayers[mPlayer] << " is the Winner!\n";
}
else
{
std::cout << "\nTie game!\n";
}
}
int main()
{
Game game;
game.run();
}
Answer: Here are some things that may allow you to improve your code:
Separate responsibilities
The Model-View-Controller design pattern is often useful for programs like this. Because the view in this case is essentially just printing the board to std::cout, we can simplify a bit and just have a model, the TicTacToe class, and a controller, the Game class. Here's what the TicTacToe class looks like:
class TicTacToe
{
public:
TicTacToe() = delete;
TicTacToe(const TicTacToe &t) = delete;
TicTacToe(const TicTacToe &&t) = delete;
TicTacToe(char ch, std::size_t dim)
: mDim(dim), emptychar(ch), remaining(mDim*mDim), grid(remaining, emptychar)
{ }
bool isNotFull() const { return remaining; }
bool isWinner(char player) const;
bool applyMove(char player, unsigned row, unsigned column);
friend std::ostream &operator<<(std::ostream &out, const TicTacToe &t) {
out << ' ';
for (std::size_t i = 1; i <= t.mDim; ++i)
out << " " << i;
std::size_t j = 0;
char A = 'A';
for (auto& i : t.grid)
{
if (j == 0) {
out << "\n " << A++;
j = t.mDim;
}
--j;
out << ' ' << i << ' ';
}
return out << "\n\n";
}
private:
const std::size_t mDim;
const char emptychar;
unsigned remaining;
std::vector<char> grid;
};
There are some differences in this class compared to yours, so I'll point out the salient features.
Delete automatic functions which are not wanted
The way I've defined the TicTacToe class requires values to be passed to the constructor. For that reason, I've deleted the default constructor, the copy constructor and the move constructor. This prevents the class from being misused and alerts the user of the class that some things are not supported.
Isolate the internal representation from the interface
The game is played on a square grid and not a linear array (even though that may be the internal representation), so the applyMove function in the revised version takes row and column arguments rather than a linear position value.
Allow for dynamic sizing
The dimension of the board in the revised version of the TicTacToe class is a const value that is initialized with a value passed to the constructor. This allows for more than one size game to be played without recompiling. Also, this required changing from a std::array to a std::vector.
Allow for any character representations
This version does not specify the representations for an empty square, or any of the player tokens. In particular, the emptychar member is initialized by the constructor. Perhaps more interesting is the fact that this class allows for more than two players. This can be seen most easily in the applyMove member function:
// Returns `false` if requested move was applied, otherwise true
bool TicTacToe::applyMove(char player, unsigned row, unsigned column)
{
unsigned position = row + mDim * column;
if ((position >= grid.size()) || (grid[position] != emptychar))
return true;
grid[position] = player;
--remaining;
return false;
}
Define logical functions in a way that makes them most useful
If we look at the original isFull routine, it was always being used as !isFull() so it seems that what's actually more useful is a method to check if the grid is not full. For this reason, the function is now isNotFull() in the redefined version.
Avoid inefficient algorithms
The original isFull routine counts empty squares each time it is called, but a more efficient (and simpler) way to do this is to simply keep a running count as the game is played.
Use clear function names
The original code has a function named check but it's not clear what it checks. I've renamed it to isWinner so that it's very clear now that it's checking to see if a particular player is a winner or not. I've also reimplemented it to work simply and efficiently no matter what size the array happens to be:
// returns true if the player is a winner
bool TicTacToe::isWinner(char player) const
{
// check for row or column wins
for(unsigned i = 0; i < mDim; ++i){
bool rowwin = true;
bool colwin = true;
for (unsigned j=0; j < mDim; ++j) {
rowwin &= grid[i*mDim+j] == player;
colwin &= grid[j*mDim+i] == player;
}
if (colwin || rowwin)
return true;
}
// check for diagonal wins
bool diagwin = true;
for (unsigned i=0; i < mDim; ++i)
diagwin &= grid[i*mDim+i] == player;
if (diagwin)
return true;
diagwin = true;
for (unsigned i=0; i < mDim; ++i)
diagwin &= grid[i*mDim+(mDim-i-1)] == player;
return diagwin;
}
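The index arithmetic here generalizes to any dim; a quick Python cross-check of the same row/column/diagonal indexing (hypothetical 3x3 boards encoded as row-major strings):

```python
def is_winner(grid, dim, player):
    # Same index arithmetic as the C++ version above (row-major grid).
    rows = any(all(grid[i*dim + j] == player for j in range(dim)) for i in range(dim))
    cols = any(all(grid[j*dim + i] == player for j in range(dim)) for i in range(dim))
    diag1 = all(grid[i*dim + i] == player for i in range(dim))
    diag2 = all(grid[i*dim + (dim - i - 1)] == player for i in range(dim))
    return rows or cols or diag1 or diag2

assert is_winner("XXX------", 3, "X")      # top row
assert is_winner("O--O--O--", 3, "O")      # first column
assert is_winner("X---X---X", 3, "X")      # main diagonal
assert is_winner("--X-X-X--", 3, "X")      # anti-diagonal
assert not is_winner("XOXOXOOXO", 3, "X")  # no line for X
```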
Revise the Game class to be a controller
In the interest in clearly separating responsibilities of the classes, here is the revised Game class:
class Game
{
public:
Game(std::size_t dim=3) : ttt(players[2], dim), player(1) {}
void run();
void run(const char *move);
void turn();
void showResult() const;
private:
const char players[3] = { 'X', 'O', '-' };
TicTacToe ttt;
int player;
};
The most significant change here is that the turn method is a method of Game rather than of TicTacToe. This is important because the controller actually controls the game; the model simply reacts to the applied controls. This makes some alternatives much easier to implement as I'll describe.
Put the player character representations within the Game class
The character representations for each player and an empty space are all solely concerns of the Game class. They don't need to be in global space as originally defined.
Validate user input carefully
The current code accepts such inputs as (0,B) which should be rejected. The revised code fixes this:
void Game::turn()
{
char row = 0;
char column = 0;
std::cout << "\n" << players[player] << ": Please play. \n";
for (bool pending = true; pending; )
{
std::cout << "Row(1,2,3,...): ";
std::cin >> row;
std::cout << players[player] << ": Column(A,B,C,...): ";
std::cin >> column;
column = std::toupper(column) - 'A';
row -= '1';
pending = column < 0 || row < 0 || ttt.applyMove(players[player], row, column);
if (pending)
std::cout << "Invalid position. Try again.\n";
}
std::cout << "\n\n";
}
Note that it also changes the sense of the boolean variable from applied to pending which somewhat simplifies the code and requires no negations.
Eliminate pointless obfuscation
The use of std::bind is really not needed in this program and makes the program that much harder to read and understand. The revised version of run doesn't need them and is easy to read and understand:
void Game::run()
{
while (!ttt.isWinner(players[player]) && ttt.isNotFull())
{
player ^= 1;
std::cout << ttt;
turn();
}
showResult();
}
Use const where possible
The resultScreen function doesn't and shouldn't modify the underlying Game class, and so it should be declared const. Also, I've changed the name of the function to a more descriptive showResult and eliminated the need to pass a variable.
void Game::showResult() const
{
    std::cout << ttt;
    if (ttt.isWinner(players[player]))
        std::cout << "\n" << players[player] << " is the Winner!\n";
    else
        std::cout << "\nTie game!\n";
}
Note that this code only yields correct results when the game has already ended; one could add a check ttt.isNotFull() to handle any mid-game requests for results.
Consider having the computer play by itself
By separating the turn function in Game from the applyMove function in TicTacToe, we have the first step toward having the potential for the computer to play against a human player. I haven't implemented that, but I did implement a means by which a game can be run automatically, given a fixed series of moves. That function looks like this:
void Game::run(const char *move)
{
    unsigned row, column;
    while (!ttt.isWinner(players[player]) && ttt.isNotFull() && *move)
    {
        player ^= 1;
        std::cout << ttt;
        row = *move++ - '1';
        column = std::toupper(*move++) - 'A';
        std::cout << "Applying " << players[player] << " to "
                  << row+1 << static_cast<char>('A'+column) << "\n";
        ttt.applyMove(players[player], row, column);
    }
    showResult();
}
Note that this part of the code lacks much in the way of error handling, but it's meant solely as illustration.
Putting it all together
Here's a sample main function that plays one 3x3 tie game automatically, and then allows for two humans to play a 4x4 game against each other:
#include <iostream>
#include <cctype>
#include <vector>
// TicTacToe and Game classes go here
int main()
{
    Game game1;
    game1.run("2B1A2A2C1C3A3B1B3C");
    Game game2(4);
    game2.run();
} | {
"domain": "codereview.stackexchange",
"id": 10823,
"tags": "c++, game, c++11, tic-tac-toe"
} |
ROS Namespace related confusion | Question:
I am developing a ROS Package on ROS Indigo in Ubuntu 14.04 LTS OS.
The package contains a launch file with the following content:
<launch>
  <arg name="host" default="172.17.69.137"/>
  <arg name="port" default="1357"/>
  <arg name="timeout" default="1000"/>
  <arg name="model" default="$(find my_package)/files/robot.urdf"/>
  <param name="robot_description" command="$(find xacro)/xacro --inorder $(arg model)"/>
  <param name="use_gui" value="true"/>
  <node name="robot_state_publisher" pkg="robot_state_publisher" type="state_publisher"/>
  <node name="receiver" pkg="my_package" type="receiver" output="screen">
    <param name="host" value="$(arg host)" />
    <param name="port" value="$(arg port)" />
    <param name="timeout" value="$(arg timeout)" />
  </node>
</launch>
The CPP source file uses the above parameters as shown below:
ros::init(argc, argv, "receive_potentio", ros::init_options::AnonymousName);
ros::NodeHandle nh;
ros::Publisher jointStatePub = nh.advertise<sensor_msgs::JointState>("joint_states", 1);
std::string host;
int timeout, port;
// the following doesn't work. However when relative is
// used, i.e. ros::NodeHandle nh("~") it works!
nh.getParam("host", host);
nh.getParam("port", port);
nh.getParam("timeout", timeout);
The node handle defined this way, i.e. ros::NodeHandle nh, doesn't get the params via nh.getParam("host", host). But when the private namespace is used, i.e. ros::NodeHandle nh("~"), nh.getParam("host", host) works. However, in this case ros::Publisher jointStatePub doesn't work!
I think I am missing something obvious here. How do I make both of them work together?
Originally posted by ravijoshi on ROS Answers with karma: 1744 on 2019-03-08
Post score: 0
Original comments
Comment by gvdhoorn on 2019-03-08:\
However in this case ros::Publisher jointStatePub doesn't work!
How does a Publisher "not work" exactly?
Answer:
<node name="receiver" pkg="my_package" type="receiver" output="screen">
  <param name="host" value="$(arg host)" />
  <param name="port" value="$(arg port)" />
  <param name="timeout" value="$(arg timeout)" />
</node>
Here, you're setting host, port and timeout as private parameters of the receiver node.
But here:
ros::NodeHandle nh;
[..]
nh.getParam("host", host);
nh.getParam("port", port);
nh.getParam("timeout", timeout);
You're trying to access those parameters as if they're "public" (ie: exist in some other namespace than the node's namespace).
That won't work.
// the following doesn't work. However when relative is
// used, i.e. ros::NodeHandle nh("~") it works!
And that makes sense, as ~ has a special meaning as a namespace name (which is what the first argument in that particular ros::NodeHandle ctor is): it refers to the private namespace of the node in which the NodeHandle is created.
So now you're reading private parameters from the private namespace. And that succeeds.
See also wiki/Parameter Server.
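For completeness, one way to make both behaviours work together (this is not part of the original answer, but a common roscpp idiom) is to create two NodeHandles: a public one for topics and a private one for the node's private parameters. A sketch, not compiled here since it needs a ROS workspace:

```cpp
// Sketch: one public NodeHandle for topics, one private ("~") NodeHandle
// for the parameters set inside the <node> element of the launch file.
#include <ros/ros.h>
#include <sensor_msgs/JointState.h>

int main(int argc, char **argv)
{
    ros::init(argc, argv, "receive_potentio", ros::init_options::AnonymousName);

    ros::NodeHandle nh;        // public namespace: topics resolve as before
    ros::NodeHandle pnh("~");  // private namespace of this node

    ros::Publisher jointStatePub =
        nh.advertise<sensor_msgs::JointState>("joint_states", 1);

    std::string host;
    int timeout, port;
    pnh.getParam("host", host);
    pnh.getParam("port", port);
    pnh.getParam("timeout", timeout);

    ros::spin();
}
```

With this split, joint_states still resolves in the public namespace while host, port and timeout are read from the node's private namespace.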
Originally posted by gvdhoorn with karma: 86574 on 2019-03-08
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 32616,
"tags": "ros-indigo"
} |
jquery update hidden and a tags | Question: First off, I'm not the greatest with jQuery but know enough to get by. I created this mock-up, based on my application, and wanted to see if there is a better way to update hidden values and a tags. When the form is saved, I use the sortorder to find the corresponding .ui-sortable li whose a tag and hidden fields get updated on the page. Right now I loop over all of them, checking each against the sortorder; is there a better way to do this?
https://jsfiddle.net/tjmcdevitt/vh9n5srL/35/
actual code from a project
$("#submit-form").submit(function (event) {
    event.preventDefault();
    var sortOrder = $(this).data('sortorder');
    // 'data' comes from an earlier server response (not shown here)
    $('.ui-sortable li').each(function (i) {
        if (i == sortOrder) {
            console.log('Updating');
            $("#References_0_UI").html(data.referenceText);
            $("#References_0_UI").attr("href", data.GuidelineExternalReference);
            $('input[name="References[0].Text"]').val(data.referenceText);
            $('input[name="References[0].Link"]').val(data.GuidelineExternalReference);
            $('input[name="References[0].GuidelineId"]').val(data.Value);
            console.log("Completed Task");
        }
    });
});
Answer: The main issue I see is that the looping doesn't accomplish anything useful, since you never reference the li that you iterate over - you can simply take the sortOrder from the form and concatenate it into the selectors you build.
Because you reference the #References_##_UI twice, consider saving it in a variable first - or, even better, since this is jQuery, you can chain methods on the selected collection.
It sounds like the data.referenceText is text, not HTML markup - in which case you should insert it into the DOM with .text, not with .html. (.text is faster and safer)
$('#submit-form').submit(function (event) {
    event.preventDefault();
    const sortOrder = $(this).data('sortorder');
    $(`#References_${sortOrder}_UI`)
        .text(data.referenceText)
        .prop('href', data.GuidelineExternalReference);
    $(`input[name='References[${sortOrder}].Text']`).val(data.referenceText);
    $(`input[name='References[${sortOrder}].Link']`).val(data.GuidelineExternalReference);
    $(`input[name='References[${sortOrder}].GuidelineId']`).val(data.Value);
});
The above looks mostly reasonable to me, but I'd change the HTML too, if that's permitted. Numeric-indexed IDs are never a good idea; IDs should be reserved for singular, unique elements. (You can also consider not using IDs at all, because every time there is one, a global variable is created, and globals can result in confusing behavior).
A related issue is that the submit handler here is attached to:
$('#submit-form').submit(
Since IDs must be unique in a document, this will only attach a listener to a single form, but it sounds like you have multiple forms whose submit events you want to listen for.
To solve the duplicate IDs and the numeric-indexed IDs, use the already-existing class to select the forms instead, and once you have a reference to the form in the handler, use .find to select its children elements that need to be populated.
Your 3 hidden inputs are somewhat repetitive. It might look somewhat tolerable now, but if you might add more, or for the general case of linking each data property name to a particular input, consider using an object or array linking each property to the selector:
const inputNamesByDataProps = {
    referenceText: 'Text',
    GuidelineExternalReference: 'Link',
    Value: 'GuidelineId',
};
$('.form-horizontal').on('submit', (event) => {
    event.preventDefault();
    // Arrow functions don't rebind 'this', so take the form from the event
    const $this = $(event.currentTarget);
    $this.find('a')
        .text(data.referenceText)
        .prop('href', data.GuidelineExternalReference);
    for (const [dataProp, inputName] of Object.entries(inputNamesByDataProps)) {
        $this.find(`input[name$=${inputName}]`).val(data[dataProp]);
    }
});
(The [name$=${inputName}] means: "Find an element whose name attribute ends with what's in inputName")
<form class="form-horizontal" data-sortorder="1">
    <p>Update Values</p>
    <a class="References_1_UI" href="www.current.com" target="_blank">Testing html</a>.
    <input type='hidden' name='References[1].Index' value="1">
    <input type='hidden' name='References[1].Link' value="www.oldlink.com">
    <input type='hidden' name='References[1].Id' value="88">
    <div class="text-center">
        <button type="submit">Save</button>
        <button type="button" data-dismiss="modal">Close</button>
    </div>
</form>
The input HTML name attributes look pretty repetitive too, but repetitive HTML usually isn't something to worry about, especially if your backend logic is easier to work with when the attributes are like References[1].Index. But if you wanted, you could change it to something like
<input type='hidden' name='sortorder' value="1">
<input type='hidden' name='Index' value="1">
<input type='hidden' name='Link' value="www.oldlink.com">
<input type='hidden' name='Id' value="88">
putting the [1] into the hidden sortorder input instead, and then parse the form values on the backend. | {
"domain": "codereview.stackexchange",
"id": 37908,
"tags": "javascript, jquery"
} |
How to assemble brushless motors and propellers? | Question: I'm building a quadcopter and I've received my motors and propellers.
What's the right way to assemble those together?
I'm not confident with what I've done, as I'm not sure the propeller would stay in place on a clockwise rotating motor.
I mean, if the motor rotates clockwise, will the screw stay tightly in place, even with the prop's inertia pushing counter-clockwise?
Here's what I've done (of course I'll tighten the screw...):
Answer: This is correct. Just make sure to insert a lever into the hole at the top to really tighten the nut! | {
"domain": "robotics.stackexchange",
"id": 137,
"tags": "brushless-motor"
} |
Why do high frequency waves have high energy | Question: It is known that electromagnetic waves with a high frequency possess a greater amount of energy than waves with lower frequencies. Why is this the case? Does it have anything to do with Planck's law?
Answer:
It is known that electromagnetic waves with a high frequency possess a greater amount of energy than waves with lower frequencies.
This isn't quite true. The energy carried by an electromagnetic wave is the product of two independent factors:
the energy of each individual photon, which is given by the Planck law $$E_\mathrm{photon}=h\nu,$$ in terms of the light's frequency $\nu$ and the Planck constant $h$, and
the number $N$ of photons present in the beam.
For light that's far away from the quantum regime, the total quantum-mechanical energy $E=Nh\nu$ transitions over into the classical regime, where it becomes better described by the classical intensity, which is proportional to the amplitude of the electric-field oscillations in the light. In that regime, the light can carry any amount of energy you wish to put into it.
However, that property fails to be true at low energies, where the photon number $N$ is of order $1$. In this regime, quantum mechanics takes over, and the light becomes incapable of carrying less energy than the photon energy $h\nu$: it either has one photon's worth, or none at all. And, because of the Planck law, this minimal energy increases with the frequency.
The reason this is important is that if you have a biological tissue that's absorbing, say, one UV photon's worth of energy spread over a bunch of infrared photons, then the energy absorbed by each individual molecule can be quite small. However, if the light is in the UV, then it's impossible to break that energy down into smaller chunks, and it's a single molecule that needs to take the entire hit, and if the photon energy is big enough then that will take the molecule over its damage threshold. | {
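To put numbers on this (the constants are standard; the wavelengths below are illustrative choices, not from the answer): by $E = h\nu = hc/\lambda$, one 250 nm UV photon carries 6.2 times the energy of one 1550 nm infrared photon, so the same absorbed energy arrives in far smaller chunks in the IR.

```python
# Single-photon energies E = h*nu = h*c/lambda for UV vs. infrared light.
h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy of one photon of the given wavelength, in joules."""
    return h * c / wavelength_m

E_uv = photon_energy(250e-9)   # 250 nm ultraviolet photon
E_ir = photon_energy(1550e-9)  # 1550 nm infrared photon
ratio = E_uv / E_ir            # = 1550/250 = 6.2
```

Delivering one UV photon's worth of energy with infrared light therefore takes about six photons, each individually carrying far less energy.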
"domain": "physics.stackexchange",
"id": 51526,
"tags": "electromagnetism, energy, waves, frequency"
} |
Finite Transformation of the Special Conformal Transformation | Question: In numerous discussion of the special conformal transformation, they cite the finite transformation as
$${x^\mu}'=\frac{x^\mu-b^\mu x^2}{1-2x\cdot b+b^2 x^2}$$
This can be found from integrating the infinitesimal conformal transformation
$$\delta x^\mu =2(b\cdot x)x^\mu-x\cdot x b^\mu$$
I found the derivation given as an answer on this site. I completely understand what they did, but at the end of the day they get the answer to be
$$x(t)=\frac{x_0-x^2_0 (tb)}{1-2x_0(tb)+x_0^2(tb)^2}$$
Their starting point was the differential equation $\dot{x}=2(b\cdot x)x-x^2 b$.
Why, exactly, is there a $t$ in the second equation but no $t$ in the first? Is $tb=b^\mu$? Also, where in the derivation I've cited do they only consider the case of $\mu=0$ (i.e., time)? Any clarification would be greatly appreciated.
Answer: @Trimok solved the problem most elegantly in his comment to the question cited, and since you are troubled by @Josh's simplifying changes of variables, $b^\mu\equiv \hat{b}~ t$,
I'll avoid them to merely integrate the variation
$$\delta x^\mu =2(b\cdot x)x^\mu-x\cdot x~ b^\mu$$
directly. It immediately implies
$$\delta \left (\frac{ x^\mu}{x^2}\right) = \frac{ \delta x^\mu}{x^2} -2 x^\mu \frac{ x\cdot \delta x }{x^4} = -b^\mu ~.$$
That is, the vector $x^\mu/x^2$ evolves by shifting along $-\hat{b}^\mu$, linearly in the magnitude $|b|$ of $b^\mu$, so with constant unit speed in this "pseudotime" $|b|$. Integrating this simplest of advections for finite pseudotime, we immediately get
$$
\frac{ x'^\mu}{x'^2}= \frac{ x^\mu}{x^2} -b^\mu .
$$
Square both sides, to get the normalization,
$$
\frac{ 1}{x'^2}= \frac{1}{x^2} +b^2 -2\frac{ x\cdot b}{x^2}= \frac{1-2x\cdot b+b^2 x^2 }{x^2},
$$
which divides the above vector equation to yield your conventional form for it, ∴
$${x^\mu}'=\frac{x^\mu-b^\mu x^2}{1-2x\cdot b+b^2 x^2}~.$$ | {
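As a sanity check, the finite formula can be verified numerically against the shift identity $x'^\mu/x'^2 = x^\mu/x^2 - b^\mu$. A minimal sketch with Euclidean dot products and arbitrary sample vectors (chosen purely for illustration):

```python
# Verify numerically that the finite SCT satisfies x'/x'^2 = x/x^2 - b.
def sct(x, b):
    """Finite special conformal transformation (Euclidean dot products)."""
    x2 = sum(v * v for v in x)
    b2 = sum(v * v for v in b)
    xb = sum(u * v for u, v in zip(x, b))
    den = 1 - 2 * xb + b2 * x2
    return [(x[i] - b[i] * x2) / den for i in range(len(x))]

x = [0.3, -1.2, 0.7, 2.0]    # arbitrary sample point
b = [0.1, 0.05, -0.2, 0.02]  # arbitrary transformation parameter

xp = sct(x, b)
x2 = sum(v * v for v in x)
xp2 = sum(v * v for v in xp)

# Check the "inverted point shifts by -b" identity component by component.
for i in range(4):
    assert abs(xp[i] / xp2 - (x[i] / x2 - b[i])) < 1e-12
```

The check passes for any $x$ and $b$ with a nonvanishing denominator, reflecting the inversion-translation-inversion structure of the transformation.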
"domain": "physics.stackexchange",
"id": 46718,
"tags": "group-theory, conformal-field-theory"
} |
Entropy change in sudden expansion | Question: For an irreversible sudden expansion from $V$ to $2V$, no heat is added during the expansion. However, the entropy changes by $N\log2$. I'm not sure how there can be a change in entropy without any heat added, since $dS = \frac{dQ}{T} = 0$. Of course, integration can yield $\Delta S = C$, where $C$ is a constant, but I'm not sure if this is the correct mathematical and physical way of thinking about this. Thank you for any and all help.
Answer: Your original equation is incorrect. The entropy change is not $\Delta S=\int{\frac{dq}{T}}$. The correct equation is$$\Delta S=\int{\frac{dq_{rev}}{T}}$$ where $dq_{rev}$ is the heat flow for an alternate reversible process between the same two end states. For such a reversible path, the heat flow will not be zero. | {
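To see the numbers, evaluate the corrected formula along a reversible isothermal expansion between the same end states (ideal gas assumed; the particle number and temperature below are illustrative): the reversible heat is $q_{rev} = NkT\ln 2$, so $\Delta S = q_{rev}/T = Nk\ln 2$, independent of $T$, even though the actual free expansion absorbed no heat.

```python
import math

# Entropy change of an ideal-gas free expansion V -> 2V, evaluated along an
# alternate *reversible* isothermal path between the same two end states.
k = 1.380649e-23  # Boltzmann constant, J/K

def delta_S(N, T, volume_ratio):
    """q_rev / T for a reversible isothermal expansion by volume_ratio."""
    q_rev = N * k * T * math.log(volume_ratio)  # heat absorbed reversibly
    return q_rev / T                            # the temperature cancels

N = 6.02214076e23                   # one mole of particles (illustrative)
dS = delta_S(N, T=300.0, volume_ratio=2.0)
# dS = N*k*ln(2), independent of the temperature of the isotherm
```

For one mole this gives $\Delta S \approx 5.76\ \mathrm{J/K}$, i.e. $R\ln 2$.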
"domain": "physics.stackexchange",
"id": 69658,
"tags": "thermodynamics, statistical-mechanics, entropy"
} |
Why is the time constant of RC circuits calculated the way it is? | Question: I learnt about the basics of RC circuits, taking the simple case of a single resistor and a capacitor connected in series with a battery.
But following this I faced several questions which involved a more complicated arrangements of resistors around a single capacitor.
My teacher and all the online articles I read told me to find the steady state charge separately and the time constant separately.
The charge part was fairly easy to find and understand.
However coming to the time constant part, everyone only mentioned that I had to replace the battery with a conducting path and find net resistance across the capacitor.
I managed to avoid this process by directly using differential equations to solve it; however, the process was greatly simplified with the first method.
But I could not form any intuition for why we replace the battery by a wire and find resistance over the capacitor.
Moving on I found several more complicated questions involving several capacitors and resistors like this:
For finding the time constant of a particular capacitor I was again told to replace the battery with a wire and also replace the other capacitors with wires.
All these methods seem very un-intuitive to me to be able to use properly, whereas proceeding in the general way is very time inefficient.
Can someone please help me understand why we do whatever we do here?
Answer: When you have to find steady-state voltages and currents with capacitors (and inductors) in the circuit, a rule of thumb is that no current flows in a branch with a capacitor, i.e. the capacitor acts as an open circuit.
So in the problem shown in your diagram no currents flow anywhere in the circuit and so the potential difference across each of the capacitors can immediately be stated.
In steady state inductors act as short circuits.
I do not think that the circuit that you have shown is best solved by the use of Thevenin's method.
It is a highly symmetric circuit and the time constant is relatively easy to find by using this symmetry.
Two methods spring to mind.
By symmetry nodes $a$ and $b$ are at the same potential so connecting a wire between those two nodes does not alter any currents or voltages in the circuit.
So you now have two resistors in parallel and two capacitors in parallel and the circuit can be analysed as a cell in series with a resistor and pair of parallel resistors and a pair of parallel capacitors.
Knowing how to can combine resistors/capacitors in parallel will enable you to find the time constant of the circuit.
The other way is to replace the left-hand resistor with two resistors of resistance $2R$ in parallel with one another.
Again this does not change any currents in the circuit: if the current in each branch with a capacitor in it is $I$, the current in the left-hand resistor is $2I$ and the potential difference across that resistor is $2IR$.
The two replacement resistors each have a current of $I$ passing through them and a potential difference of $2IR$ across them.
Now you have a cell which can be thought of as independently charging each capacitor through a series combination of resistances with resistance $2R$ and $R$.
Think of removing some unnecessary wires from the circuit and having two parallel loops connected to the cell.
Again the time constant is now easy to find.
However the use of Thevenin's (and Norton's) method is often the route to simplify a solution to a circuit problem. | {
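As a quick check that the two symmetry arguments agree, here is a small calculation assuming the symmetric circuit described above (one shared resistor $R$ feeding two identical branches, each with its own $R$ and $C$; the component values are arbitrary):

```python
# Time constant of the symmetric two-branch RC circuit, computed two ways.
R = 100.0  # ohms (arbitrary value)
C = 1e-6   # farads (arbitrary value)

# Method 1: join the equipotential nodes a and b. The branch resistors
# combine to R/2, the capacitors to 2C, and the equivalent capacitor sees
# the shared R in series with R/2 once the battery is replaced by a wire.
tau_nodes = (R + R / 2) * (2 * C)

# Method 2: split the shared resistor into two parallel 2R resistors,
# giving two independent loops, each a capacitor C charging through 2R + R.
tau_loops = (2 * R + R) * C

# Both symmetry arguments agree: tau = 3*R*C.
assert abs(tau_nodes - tau_loops) < 1e-12
```

Both routes give $\tau = 3RC$, which is the reassuring sign that the symmetry manipulations changed nothing physical.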
"domain": "physics.stackexchange",
"id": 69932,
"tags": "homework-and-exercises, electric-circuits, electric-current, electrical-resistance, capacitance"
} |