anchor | positive | source |
|---|---|---|
robot_state_publisher with SDF Files | Question:
Hi all,
I just want to ask about support for robot_state_publisher with SDF files: can we use it to broadcast the state of the robot to the TF transform library using SDF files (not URDF files)?
One related link: http://answers.ros.org/question/61097/using-robot_state_publisher-with-a-sdf-file/ but it is a couple of years old and I was wondering whether there has been any change.
Thanks.
Naman
Originally posted by Naman on ROS Answers with karma: 1464 on 2015-03-11
Post score: 8
Original comments
Comment by l0g1x on 2015-03-13:
I'm really interested in this as well, as I would like to visualize the robot in RViz, but my Gazebo model requires closed-loop joints, so I need to use an SDF description for Gazebo. I don't think you can pass both a URDF and an SDF into the robot_description parameter... not sure if I'm correct though.
Comment by Naman on 2015-03-13:
The issue is with robot_state_publisher converting joint states to TF transforms, which is only compatible with URDF files. There is no direct way to do this using SDF files. One option can be to add SDF support to get the TF transforms, which can be challenging, OR just maintain 2 copies (SDF and URDF).
Comment by owais2k12 on 2017-11-08:
I am facing a similar issue; has anyone solved this problem? If yes, kindly share the details. If not, can you explain a little how we can make use of URDF and SDF at the same time?
Answer:
There is an open ticket against gazebo to create a SDF to URDF conversion tool: https://bitbucket.org/osrf/gazebo/issue/1482/sdf-to-urdf-convertor
Since the URDF is used in several places in ROS, it makes sense to try to convert an SDF into a URDF so that it can be visualized by rviz and used with robot_state_publisher.
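Until such a tool exists, a rough structural translation is possible by hand. The sketch below is an illustration, not an official converter: it maps only the model/link/joint skeleton and deliberately ignores inertials, geometry, and Gazebo plugins, using just the Python standard library.

```python
# Hypothetical minimal SDF -> URDF skeleton converter (illustrative only).
import xml.etree.ElementTree as ET

def sdf_to_urdf(sdf_xml: str) -> str:
    """Map the links and joints of an SDF <model> onto a URDF <robot>.

    Only element names and the parent/child joint structure are carried
    over; everything SDF-specific is dropped, which is exactly why a
    full, lossless converter is hard to write.
    """
    sdf = ET.fromstring(sdf_xml)
    model = sdf.find("model")
    robot = ET.Element("robot", name=model.get("name", "converted"))
    for link in model.findall("link"):
        ET.SubElement(robot, "link", name=link.get("name"))
    for joint in model.findall("joint"):
        j = ET.SubElement(robot, "joint",
                          name=joint.get("name"),
                          type=joint.get("type", "fixed"))
        # SDF puts parent/child names in element text; URDF uses attributes.
        ET.SubElement(j, "parent", link=joint.findtext("parent"))
        ET.SubElement(j, "child", link=joint.findtext("child"))
    return ET.tostring(robot, encoding="unicode")
```

Note that URDF is strictly tree-structured, so models with closed kinematic loops (like the one mentioned in the comments) cannot be converted losslessly; any loop-closing joint would have to be dropped or approximated.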
Originally posted by ahendrix with karma: 47576 on 2015-05-26
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 21114,
"tags": "ros, sdf, robot-state-publisher, joint-state-publisher"
} |
System for tracking stock information | Question: Background: I'm designing a system (VB/WinForms) that uses a database(MS SQL Server 2008 R2) to track people, their stock account #'s, which stocks they are investing in, and the payout of those stocks.
Basically, just reading/writing to the database and some time-elapsed functionality. (One payout per year, etc.) I have forms where users are entering employees to the database, adding accounts, etc.
My issue: I never truly understood OOP, however I am getting more involved with it now, and while my program works, I want the code to be better (look better, be more flexible, utilize classes/objects more, etc.)
That being said, how can I make this code more object-oriented?
Note: I am also utilizing ReSharper, however that can only do so much and I don't want to have to rely on a tool for the rest of my career.
Note: This project utilizes an encryption class.
Imports System.Data.SqlClient
Public Class EmployeeUpdateFrm
Private _id As String
Dim _seqId As Integer
Private Sub Load(sender As Object, e As EventArgs) Handles MyBase.Load
Dim conn1 As New SqlConnection("Data Source=SQLTEST_HR,4000\SQLEXPRESS;Integrated Security=True")
Dim qry As String = "SELECT CMPNY_SEQ_ID, CMPNY_NM FROM CMPNY"
Dim ds1 As New DataSet()
Using da As New SqlDataAdapter(qry, conn1)
'fill data set 1 for combobox
da.Fill(ds1)
End Using
With CompanyCbx
'what the user sees
.DisplayMember = "CMPNY_NM"
'value behind each display member
.ValueMember = "CMPNY_SEQ_ID"
.DataSource = ds1.Tables(0)
.SelectedIndex = 0
End With
'close connection
conn1.Close()
Dim index As Integer = 1
'The vertical spacing between rows of controls relative to the textboxes.
Dim yMargin As Integer = 10
Dim query As String
'Create a new instance of the encryption class.
Dim strKey As String = "Key1"
Dim Crypto As ClsCrypt
Crypto = New ClsCrypt(strKey)
Dim eID As String = Crypto.EncryptData(_id)
query = "SELECT EMPL_SEQ_ID, EMPL_ID, EMPL_LAST_NM, EMPL_FIRST_NM, EMPL_PREFRD_NM, EMPL_BIRTH_DT, EMPL_MAIL_STN_CD,"
query &= " EMPL_ADDR1_TXT, EMPL_ADDR2_TXT, EMPL_CITY_NM, EMPL_STATE_CD, EMPL_POSTL_CD, EMPL_PYRL_CD, "
query &= " EMPL_FILE_NO, EMPL_SPRTN_DT, CMPNY_SEQ_ID, EMPL_ACTV_IND, BEG_DT, END_DT "
query &= " FROM EMPL"
query &= " WHERE EMPL_ID = @ID; "
'New DataSet object to hold employee records.
Using ds As New DataSet()
Using conn As New SqlConnection("Data Source=SQLTEST_HR,4000\SQLEXPRESS;Integrated Security=True")
Using da As New SqlDataAdapter()
da.SelectCommand = New SqlCommand(query, conn)
da.SelectCommand.Parameters.Add(New SqlParameter("@ID", eID))
da.Fill(ds)
End Using
End Using
If CompanyCbx.Items.Count > 0 Then
CompanyCbx.SelectedIndex = ds.Tables(0).Rows(0).Item(15).ToString
End If
Try
For i As Integer = 0 To ds.Tables(0).Rows.Count - 1
For z As Integer = 0 To ds.Tables(0).Columns.Count - 1
If z <> 0 Then
'Decrypt all rows and columns
ds.Tables(0).Rows(i)(z) = Crypto.DecryptData(ds.Tables(0).Rows(i)(z))
End If
Next
Next
Catch ex As Exception
MsgBox(ex.ToString)
End Try
End Using
End Sub
End Class
I also ask (without being opinionated), with programs like this (fairly small, read/write to database with no extraordinary functionality) how flexible is the coding? By that I mean, I don't really see how I could use something like:
Dim Person as New Person()
Class Person
Public firstName as String
Public lastName as String
etc.
Especially because I utilize DataSets so heavily.
Answer: If you want to learn how to use classes, the best approach is to use them while building a small program. I suggest you create a Person class and stop using DataSets. This will be a great learning experience.
Using bare numbers instead of constants can create a maintenance problem. At a glance, it's not easy to know what 15 means.
ds.Tables(0).Rows(0).Item(15)
Remove all the database queries from the UI. That will also make the code more readable, and you'll be able to reuse the functions if needed. You should also keep the connection string in one place.
Public Class EmployeeUpdateFrm
Private Sub Load(sender As Object, e As EventArgs) Handles MyBase.Load
LoadDropDown()
' ...
End Sub
Sub LoadDropDown()
Dim companies As List(Of Company)
companies = DAL.GetCompanies()
With CompanyCbx
'what the user sees
.DisplayMember = "Name"
'value behind each display member
.ValueMember = "Id"
.DataSource = companies
.SelectedIndex = 0
End With
End Sub
End Class
Public Class DAL
Private Shared _connectionString As String = "Data Source=SQLTEST_HR,4000\SQLEXPRESS;Integrated Security=True"
Public Shared Function GetCompanies() As List(Of Company)
' ...
End Function
End Class
Public Class Company
Public Property Id As Integer
Public Property Name As String
End Class | {
"domain": "codereview.stackexchange",
"id": 6966,
"tags": "object-oriented, vb.net"
} |
Question about Carnot two-phase power cycle | Question: The figure below is based on material presented in an MIT OpenCourseWare program on thermodynamics on the web. They were comparing a Carnot two-phase power cycle to a Rankine cycle with superheating (only the Carnot cycle is shown here).
I have issues with the isentropic compression process $a-b$ in diagram. In typical depictions of this cycle this process is shown in the two-phase region where the vapor phase is what is being compressed from the condenser temperature into saturated liquid at the boiler temperature.
In this diagram, however, the isentropic compression starts with saturated liquid at the condenser temperature and ends with liquid at the boiler temperature. This doesn’t seem possible. I would think compressing water isentropically should have an insignificant effect on its temperature. For example, isentropically compressing saturated liquid water from 10 kPa to liquid at 10 MPa increases the temperature only about 5 °C (based on the properties of compressed liquid water).
Is process $a-b$ possible, and, if so, how?
Answer: Process a-b corresponds to an isentropic (constant $s$) increase of $T$.
For an incompressible liquid, $T$ and $s$ are directly related ($\text{d}s = c_p \text{d}T/T$), so there is no way of changing one without the other and process a-b is impossible.
For a real liquid, you could theoretically add heat (which tends to increase $s$) while simultaneously compressing (which tends to decrease $s$), but it would be impractical - the pressure would get unmanageably high really quickly. | {
"domain": "physics.stackexchange",
"id": 52573,
"tags": "thermodynamics"
} |
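To make the thermodynamics in the preceding answer explicit: for a real liquid, writing $s = s(T, P)$ and using the Maxwell relation $(\partial s/\partial P)_T = -(\partial v/\partial T)_P = -\beta v$ gives

```latex
\text{d}s = \frac{c_p}{T}\,\text{d}T - \beta v\,\text{d}P
\quad\Longrightarrow\quad
\left(\frac{\partial T}{\partial P}\right)_{\!s} = \frac{\beta v T}{c_p}
```

Since $\beta v T / c_p$ is tiny for liquid water, the isentropic temperature rise across even a 10 MPa compression is at most a few kelvin, consistent with the questioner's estimate and far short of the condenser-to-boiler temperature jump the diagram implies.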
Limit scan angle of RPLidar A1 | Question:
I have RPLidar A1 device and I would like to know how I could limit the scan angle from 0 degrees to 180 degrees using Hector Mapping Slam.
Note: I am using ROS indigo on ubuntu 14.04.
Originally posted by Hamad on ROS Answers with karma: 1 on 2017-09-18
Post score: 0
Answer:
By using LaserScanAngularBoundsFilter
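For concreteness, a sketch of a laser_filters configuration using that filter; the chain name and angle values here are placeholders to adjust for your setup (angles are in radians):

```yaml
scan_filter_chain:
- name: angle_limit            # arbitrary name for this filter instance
  type: laser_filters/LaserScanAngularBoundsFilter
  params:
    lower_angle: 0.0           # keep scan points from 0 rad ...
    upper_angle: 3.14159       # ... up to pi rad (~180 degrees)
```

Loading this YAML for the laser_filters scan-filtering node and pointing Hector SLAM at the filtered scan topic is the usual pattern; the exact launch wiring depends on your setup.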
Originally posted by naveedhd with karma: 161 on 2017-09-22
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Hamad on 2017-09-27:
I have tried that but it did not work for me; it might be that there is something I am doing wrong. If you have any experience with this, please give instructions on how to run it.
Comment by naveedhd on 2017-09-28:
Try following the example launch and use yaml file given in the tutorial, by replacing with your desired angles. If you need explanation of any part, feel free to ask.
Comment by Hamad on 2017-10-09:
The problem is that when running ROS Indigo with the RPLidar view_slam launch, I do not know how to run it with the laser filter in order to limit my angles. If you know how to do this, please give me more explanation.
"domain": "robotics.stackexchange",
"id": 28877,
"tags": "ros, slam, navigation, mapping, hector"
} |
Quantum Computing, Qubit Creation/Entanglement | Question: I am currently a high school student researching quantum computing. I was referred to this site by Google and a friend. Currently I am researching the qubit part of quantum computing. My question is exactly how are qubits created in the lab, and how are they entangled? I don't expect the answers to be incredibly specific but a general overview would be of a great deal of help.
Answer: Take a proton (the nucleus of Hydrogen - everywhere in water) which has a spin, and since it's charged, it has North and South poles. If you measure it, the North pole points either up or down in your instrument.
If you embed it in a magnetic field, it will want to line up with that field, but it can't easily because it's spinning like a little gyroscope, so it precesses like a top.
The rate at which it precesses depends on the strength of the field, and that can be detected, and so you have Nuclear Magnetic Resonance, used everywhere in MRI machines.
By manipulating the field, you can put the proton into a state where it's "in-between" up and down. If you measure it, it will be either one or the other, but before you measure it, it's in a mixture of states, called a "superposition".
If you have some number of them, like for example four, by manipulating the field, you can put them all in a mixture state.
But it's not like four independent mixtures.
Rather it's one mixture of 16 possible states.
If you measure all of them at once, you could get any one of the 16 possible answers.
Each one of those states in the superposition is a fully-specified combination of bits, so it's like having 16 different 4-bit computers running in parallel,
but they're all running the same program at the same time.
The "program" consists of magnetic pulse trains that affect all the states at the same time.
That's called "quantum parallelism", and you can see that if you can put enough qubits into this superposition where every one of the 2^N combinations is equally likely, you can carry on 2^N computations in parallel.
Then, suppose one of those computations reaches a result that you want to know.
You have to get the result by measuring, but that's complicated to explain and may be a bit much for this answer.
P.S. One of the interesting aspects of quantum computation is that it has to be reversible.
So if you have an algorithm that you want to execute on a quantum computer, you have to make sure the algorithm can be run in reverse just as well as forward.
So for example, if you have a state machine where either state A or state B can transition to state C, it won't work in a quantum computer unless there is some memory of how C was entered, so the state transition can be "un-done".
P.P.S. Let me take another stab at how you get the results out of a quantum computer.
The method I'm familiar with is Lov Grover's Search Algorithm, for doing search in an unsorted table.
If the table contains M entries, you create a superposition with M states, one of which will "succeed".
Since the only way you can get information out is by measuring, what you need to do is adjust the probability amplitudes of the states so that the successful one has a high probability, so when you measure, it is the one you will most likely see.
That is done by a manipulation that transfers some of the probability from the unsuccessful states to the successful one.
Then the computation is run in reverse back to the beginning, then run forward again, and the probability-transfer operation is done again.
This is done several times, until the successful state has nearly all of the probability.
It's important not to do it too many times, because it will start having the opposite effect. | {
"domain": "physics.stackexchange",
"id": 1983,
"tags": "quantum-mechanics, experimental-physics, quantum-information, quantum-entanglement, quantum-computer"
} |
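The probability-transfer loop described in the answer can be simulated classically for small tables. This is only an illustrative sketch (a real quantum computer does this with unitary gates): the oracle flips the sign of the marked amplitude, and "inversion about the mean" transfers probability to it; running too many iterations overshoots, just as the answer warns.

```python
import math

def grover_success_prob(m, marked, iterations):
    """Track the marked entry's measurement probability during Grover iterations."""
    amp = [1 / math.sqrt(m)] * m              # uniform superposition over m states
    probs = []
    for _ in range(iterations):
        amp[marked] = -amp[marked]            # oracle: flip the marked amplitude
        mean = sum(amp) / m
        amp = [2 * mean - a for a in amp]     # inversion about the mean
        probs.append(amp[marked] ** 2)        # probability of measuring `marked`
    return probs
```

With a 16-entry table, the success probability climbs above 95% after about $(\pi/4)\sqrt{16} \approx 3$ iterations and then starts falling again.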
Second Order Elastic Analysis | Question: When analysing a structure using traditional methods (e.g. Hardy Cross, Slope Deflection, Force Method, etc.), it is a first-order elastic analysis. My question is, how would one analyse a statically indeterminate frame by hand but taking into account the second-order effects. This is more from an academic perspective and that is why I am asking for hand calculations using traditional methods or even direct stiffness method.
Answer: After completing some further research, I have come across a few approximate methods which were described in the book: Stability Design of Steel Frames. These include the two-cycle iterative method and modified slope deflection equations which can both analyse statically indeterminate structures while taking into account second-order effects. Thanks for all the answers. | {
"domain": "engineering.stackexchange",
"id": 3020,
"tags": "structural-engineering, structural-analysis"
} |
Sharing a database connection with multiple modules | Question: I am building what essentially could be viewed as a glorified database wrapper as a Python package, where I'm having several classes and functions spread out into different modules. My current problem is how to share the database connections with the different modules.
An example class in user.py:
class User(object):
def __init__(self, user_name, db_con):
self.user_name = user_name
self._db_con = db_con
Which results in my __init__.py looking like this:
from .user import User as _User
import MySQLdb as _mdb  # assumed; any DB-API driver aliased as _mdb works
def _connect_dbs():
econ = _mdb.connect(host='db.com',
user='user', passwd='pass',
db='db', charset='utf8')
return econ
_db = _connect_dbs()
def User(user_name):
return _User(user_name, _db)
Is this a good solution, or could I implement it in another manner? While this avoids global variables, it does result in some code I would rather not have to write for quite a few functions.
Answer: If you are creating a database wrapper, why not make a lib out of the wrapper and stick it into a lib directory that other modules/classes can import? So, your Users class would be in a directory structure like this:
project/
account.py
lib/
database.py
user.py
zombie.py
Your database.py would look something like:
import pymysql
class Database:
@staticmethod
def connect_dbs():
econ = pymysql.connect(host='db.com',
user='user', passwd='pass',
db='db', charset='utf8')
return econ
Then, in the user.py, you'd simply do an import like:
from .lib.database import Database as MyDatabase
Then, to get the connection, something along the lines of:
my_connection = MyDatabase.connect_dbs()
EDIT: As an example, see this repl.it:
class Database:
connection = None
def __init__(self):
print("Instantiating!")
def connect_dbs(self):
if self.connection is not None:
return self.connection
self.connection = 1
return self.connection
class User:
db = None
def __init__(self):
self.db = Database()
def save_user(self):
print(self.db.connect_dbs())
U = User()
for i in range(30):
U.save_user() # Only instantiates the connection once, then reuses it 30x | {
"domain": "codereview.stackexchange",
"id": 9892,
"tags": "python, database, closure"
} |
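A modern Python 3 way to get the same create-once, reuse-everywhere behavior as the `connection` attribute in the answer's example is to cache the factory function itself; `FakeConnection` below is just a stand-in for a real `pymysql.connect(...)` call.

```python
from functools import lru_cache

class FakeConnection:
    """Stand-in for the object a real pymysql.connect(...) would return."""

@lru_cache(maxsize=None)
def get_connection():
    # The first call creates the connection; every later call returns the
    # same cached object, mirroring the `self.connection is not None` check.
    return FakeConnection()
```

Every module can then simply `from lib.database import get_connection` and call it freely; the connection is still only created once per process.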
About lack of selective pressure | Question: In [1] it is stated that:
the frequency of comutations in FGFR3 and KRAS or PIK3CA and KRAS was lower than predicted by chance, suggesting ... a lack of selective pressure for both mutations to occur
It is not clear to me how lack of selective pressure may produce a frequency of comutations that is lower than predicted by chance. I'd expect the frequency to be the one predicted by chance, if no selective pressure is applied. What am I missing?
[1] S. E. Woodman and G. B. Mills, “Are oncogenes sufficient to cause human cancer?,” Proc. Natl. Acad. Sci. U.S.A., vol. 107, no. 48, pp. 20599–20600, Nov. 2010.
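To pin down what "predicted by chance" means here: under independence, the expected co-mutant count is just the product of the marginal mutation frequencies. A toy calculation follows; the KRAS numbers (12 mutants among 59 patients) are reported in the commentary, but the FGFR3 fraction is invented purely for illustration.

```python
# Expected co-mutation count under independence (toy numbers).
n = 59                 # patients with KRAS analysis (from the commentary)
p_kras = 12 / n        # observed KRAS mutant fraction, ~0.203
p_fgfr3 = 0.30         # hypothetical FGFR3 mutant fraction (NOT from the paper)

# If the two mutations occur independently, the expected number of
# patients carrying both is n * P(KRAS) * P(FGFR3).
expected_comutants = n * p_kras * p_fgfr3   # = 3.6 for these numbers
```

An observed count well below this expectation is what the authors describe as "lower than predicted by chance", though with only 12 mutants the statistical power to detect such a deficit is very low.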
Answer: Often the phenotype caused by a given allele depends on the alleles present in other genes; this is termed epistasis. In the context of tumors, selective pressure is primarily associated with the ability of cells to grow and divide (without dying). It could be possible for two mutations which each confer some tumorigenic properties to conflict with each other, such that when both mutations are present they interact to either kill the cell or otherwise block the effect of the other mutation. The expected result would be that few tumor cells would exist with that combination of mutations because they would proliferate less than those cells with just one.
However, I suspect they are just addressing a non-significant difference from chance that they felt was too big to ignore entirely in discussion. You left out the rest of the sentence including a key word 'either' which I think is entirely misleading here and I wish you would not have done that (emphasis mine):
frequency of comutations in FGFR3 and KRAS or PIK3CA and KRAS was lower than predicted by chance, suggesting either a lack of selective pressure for both mutations to occur or a negative interaction between the consequences of each mutation
Essentially they are hedging the statement that there could be a negative interaction (i.e., selective pressure against) having both mutations with the possibility that there is just no meaningful positive interaction and the observation of a less than chance co-occurrence is not a true result. Note that they have a very small population because these are clinical tumors, and KRAS analysis was only in 59 patients (with only 12 mutants), making the statistical power for interactions very low.
I agree that this wording is imprecise and a bit unclear, so your confusion is understandable, but I think there are two reasons why it is worded this way. First, I believe they are trying to emphasize a possible future research direction and are trying to do so without making a criticism of the paper they are reviewing, because I don't think they view it as a criticism. Had they said "This study doesn't have enough statistical power to determine if there is a negative interaction between these mutations," that would sound overly critical given they are discussing something that wasn't really the intent of the original study. Second, I think the simplest prior expectation (which they do not state but is implied) is that two mutations that contribute to tumor production individually will be even more tumorigenic when present together. By stating the other outcomes: no selective pressure for the two mutations to cooccur or a negative interaction, they are implicitly stating that the simpler outcome, a positive interaction, does not seem to be occurring.
There is also a definite issue of selection bias here (statistically, not evolutionarily). That is, they are investigating a certain type of tumor. If those tumors can be caused by either a mutation in one gene or a mutation in another, then the chance association of those mutations would be the chance in the general population/other types of tumors/noncancerous tissue, which would be low (because most have neither), rather than the chance in patients.
The article you refer to is just a commentary on another article; you should really read Hafner et al instead (and note the original paper doesn't talk much if at all about these co-occurrences). | {
"domain": "biology.stackexchange",
"id": 7807,
"tags": "genetics, population-genetics, natural-selection"
} |
Degrees of freedom of constrained rigid body | Question: A rigid body constrained at a distance $r$ from its center of mass with a ball-and-socket type constraint is considered to have only three (rotational) degrees of freedom. But isn't the rigid body's center of mass still technically translating when it rotates about the pivot of the constraint? This type of constrained rigid body can clearly have both linear and angular acceleration. So why isn't it considered to still have 6 degrees of freedom?
Answer: Because translational and angular velocities for such a system are tied together; once you know one, you can directly tell the other. Therefore the system has only three degrees of freedom. | {
"domain": "physics.stackexchange",
"id": 65461,
"tags": "rigid-body-dynamics, degrees-of-freedom"
} |
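The tie the answer mentions is the rigid-constraint relation: with the pivot fixed and the center of mass at position $\vec r(t)$ (with $|\vec r| = r$ constant),

```latex
\vec v_{\mathrm{cm}} = \vec\omega \times \vec r
```

so the translational velocity is fully determined by the angular velocity $\vec\omega$ and the orientation-dependent vector $\vec r$. Three orientation coordinates therefore specify the entire configuration, leaving three independent degrees of freedom.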
Medium sized Black Holes | Question: For many years the existence of medium-sized black holes (IMBH$^1$) has eluded scientists. BHs of several times the mass of our Sun have been found, as well as SMBHs with millions of solar masses. The masses of SMBHs and small BHs grow with time, as matter gets transferred to the accretion disc and later gets absorbed, or by merging. Quoting a paper from 2018:
Although many IMBH candidates have been identified, none are accepted as definitive; thus, their very existence is still debated.
My question is: If we know BH grow, where are the medium sized Black Holes? How come small sized ones are easier to spot than medium ones? Why is their mere existence being debated on?
$^1$ For the purposes of this question we could define an IMBH to be one of mass between about $10^2$ and $10^5$ solar masses (https://en.wikipedia.org/wiki/Intermediate-mass_black_hole).
Answer: This review details the state of the art of searching for intermediate-mass BH candidates, e.g. $m \sim 10^2 - 10^5$ M$_{\odot}$, from electromagnetic observations, covering "the few hundreds of nearby IMBH candidates found in dwarf galaxies, globular clusters, and ultra-luminous X-ray sources, as well as the possible discovery of a few seed BHs at high redshift."
You might find my answer to the question of Why did astronomers believe most or all stellar black holes had masses no greater than 15 solar masses? helpful.
Why is their mere existence being debated on?
Intermediate-mass BHs in active galactic nuclei (AGN) can be difficult to discover definitively, whereas stellar-mass BHs in X-ray binaries and supermassive BHs in AGN are comparatively easier to probe: intermediate-mass BHs typically do not have as strong a gravitational pull on the trajectories of stars and other cosmic material, and so do not produce as strong an X-ray signature.
The future is bright though! Just as with stellar-mass and supermassive BHs, gravitational-wave observations are expected to revolutionize the field of intermediate-mass BHs too. The space-based gravitational-wave observatory LISA, which will be online in a decade or so (hopefully), is expected to observe intermediate-mass BHs, although one should withhold too much optimism since this is uncertain. And the coming era of multiband gravitational-wave astronomy (low frequencies covered by LISA, high frequencies by LIGO, and the band in between by a DECIhertz detector) could be a boon for observing intermediate-mass BHs.
In fact, there is a binary black hole merger event detected by LIGO/Virgo, dubbed GW190521, in which the post-merger BH had a mass of ~150 M$_{\odot}$. This is the first direct observation of an intermediate-mass BH (i.e., a mass within $10^2 - 10^5$ M$_{\odot}$ at high confidence), and thus the first conclusive experimental evidence that the intermediate mass range can be populated by mergers of high stellar-mass BHs. How the high stellar-mass BBH system originated is currently speculative: it could have come from hierarchical mergers of stellar-mass progenitor BBHs in a dense stellar cluster, from a high-mass isolated stellar binary or a stellar binary in an accretion disk, or from more exotic possibilities... | {
"domain": "astronomy.stackexchange",
"id": 5702,
"tags": "black-hole, cosmology"
} |
Can a light absorbing bulb-like thing be made? | Question: A bulb emits light.
So can a bulb-like thing or anything be made such that it absorbs all/most of the light when turned on?
For example, when room is dark we switch on bulb and it spreads light. Let a room already be bright with daylight, so we turn on the light absorbent bulb/machine and it makes the room dark.
Here, light refers to visible light. But if this kind of device could be made, it could be extended to other lights too.
Answer: A light bulb is a source of light, continually emitting photons. But light, made up of photons, is not a fluid, so there is no corresponding sink or pump that can pull it in. Photons, to first order, do not interact with each other; they can only be absorbed given the corresponding energy levels in the atoms of the surfaces they impinge on.
Visible light can be absorbed by surfaces only if the light hits the surface. You could make a room with all black surfaces, which would absorb all the light, but not in the sense of a bulb-like sink. In any case, if there is no continuous source of light, the existing photons are extremely rapidly absorbed by the walls; that is how darkrooms for photography work. | {
"domain": "physics.stackexchange",
"id": 56809,
"tags": "visible-light, electromagnetic-radiation"
} |
Most jobs fail in my buildfarm | Question:
I configured a buildfarm and triggered an import upstream job with the build parameters as mentioned in the documentation, but most of the jobs failed to build. Also, is there any way one can educate me or point me to some material on how to start a job for my ROS package (which is in my git repo right now) on my buildfarm?
I tried starting a job on my buildfarm but it just clones the repo from GitHub and doesn't perform any build on it. I'm new to Jenkins and I'm learning.
Also I performed everything as mentioned here in this link.
I don't know whether I should be doing this, but I updated my Jenkins from 2.89 to 2.121. Please tell me if this is wrong. Also, in my lab everyone is currently developing in lunar, but I see that the distributions in ros_buildfarm_config show only jade, indigo and kinetic. Is this fine? Can I still build packages developed in lunar or melodic on the buildfarm I configured?
I apologize if this isn't much information. I can improve this post to help others who wants to setup a buildfarm. Inputs are welcome.
Thanks !
EDIT - 1
I scrapped the entire deployment and redid everything from scratch. I still ended up having all my jobs fail. This time I had my jenkins version set to default (2.89).
Originally posted by venkisagunner on ROS Answers with karma: 89 on 2018-07-19
Post score: 1
Answer:
I don't know whether I should be doing this but, I updated my jenkins to 2.121 from 2.89. Please tell me if this is wrong.
Jenkins 2.107.1 makes some substantial changes (https://jenkins.io/changelog-stable/#v2.107.1), mainly moving from a blacklist to a whitelist for serialize-able classes (https://jenkins.io/blog/2018/01/13/jep-200/) and I haven't had the time to audit our buildfarm for issues related to that. Anyone in the community who does use a more recent jenkins version is encouraged to share their experience in https://github.com/ros-infrastructure/buildfarm_deployment/issues/193
Also in my lab currently everyone is developing in lunar, but I see that the distributions in ros_buildfarm_config show only jade, indigo and kinetic. Is this fine ? Can I still build packages developed in lunar or melodic in the buildfarm I configured ?
You'll need to update your configuration in order to build packages for Lunar and Melodic. And to save overhead on rosdistros you don't need you'll want to comment out or remove indigo, jade, and kinetic from your ros_buildfarm_config index.yaml if you don't need to build for them.
Since you mention above that you want to build your own packages which aren't in the upstream rosdistro you'll also want to follow this guide https://github.com/ros-infrastructure/ros_buildfarm/blob/master/doc/custom_rosdistro.rst#use-a-custom-rosdistro-database
But, most of the jobs failed to build.
There are many reasons a job could fail. Could you post an example build log which will help identify possible causes.
how to start a job for my ros package (which is in my git repo right now) in my buildfarm ?
After setting up your custom rosdistro and updating your ros_buildfarm_config your buildfarm will use your rosdistro database to know what jobs to build. You can set up devel jobs without first creating a release but if you want to build install-able binaries for your package you will first need to run bloom and release it into your custom rosdistro. This Bloom tutorial http://wiki.ros.org/bloom/Tutorials/FirstTimeRelease is aimed at releasing packages to the upstream rosdistro but much of the information is still useful. If you do have questions about that process be sure to ask them separately so that this question stays on topic.
Originally posted by nuclearsandwich with karma: 906 on 2018-07-24
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by venkisagunner on 2018-07-24:
Thank you for your answer. Is there an example that I can refer to for creating a custom rosdistro? Also, is there an example that shows how to add melodic and lunar to the ros_buildfarm_config?
"domain": "robotics.stackexchange",
"id": 31306,
"tags": "ros, ros-lunar, buildfarm"
} |
What's a good Python HMM library? | Question: I've looked at hmmlearn but I'm not sure if it's the best one.
Answer: SKLearn has an amazing array of HMM implementations, and because the library is very heavily used, odds are you can find tutorials and other StackOverflow comments about it, so definitely a good start.
http://scikit-learn.sourceforge.net/stable/modules/hmm.html
PS: As of June 2019, this module is outdated (HMM support was removed from scikit-learn itself).
As of July 2019, you can use hmmlearn instead (pip3 install hmmlearn). | {
"domain": "datascience.stackexchange",
"id": 1907,
"tags": "python, markov"
} |
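Whichever library one picks for the preceding question, the core computation is the same. Here is a dependency-free sketch of the forward algorithm for a discrete HMM, purely for intuition and not tied to hmmlearn's or scikit-learn's API:

```python
def hmm_forward(pi, A, B, obs):
    """Likelihood P(obs) of an observation sequence under a discrete HMM.

    pi  : initial state probabilities, length n
    A   : n x n state transition matrix, A[r][s] = P(s | r)
    B   : n x k emission matrix, B[s][o] = P(symbol o | state s)
    obs : list of observed symbol indices
    """
    n = len(pi)
    # alpha[s] = P(obs[0..t], state_t = s), initialized at t = 0
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for t in range(1, len(obs)):
        # Propagate through the transition matrix, then weight by emission.
        alpha = [sum(alpha[r] * A[r][s] for r in range(n)) * B[s][obs[t]]
                 for s in range(n)]
    return sum(alpha)
```

Real libraries do the same recursion in log space (or with scaling) to avoid underflow on long sequences, and add Viterbi decoding and Baum-Welch training on top.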
What is being said here about the strange precession of DI Herculis? | Question: The binary star system DI Herculis was brought to the attention of the astronomy community by Edward Guinan because it has an observed precession far lower than the one predicted by General Relativity. In 2009, a research team "solved the problem" by pointing to DI Herculis' misaligned spin and orbital axes. Zimmerman, Guinan, & Maloney (2010) later replied with The Eclipsing Binary DI Herculis: One Mystery Solved, But Another Takes Its Place1
They state:
Further, we find evidence that the projected rotation axes of the stars may be precessing, since it appears that the value of V(rot)sini has increased over the past 30 years
Are they saying DI Herculis precession is changing?
1American Astronomical Society, AAS Meeting #215, id.419.34; Bulletin of the American Astronomical Society, Vol. 42, p.282
Answer: The quoted passage says explicitly only that the stars' observed velocity amplitude changes over time (due to precession); that does not imply that the precession itself changes, but merely that we can actually observe its effects in terms of seeing a different radial velocity amplitude ($V_{rot}\sin i$).
DI Herculis is assumed to undergo apsidal precession, thus the orientation of the ellipse relative to our line of sight (LOS) changes over time. We observe the radial velocity projected onto our LOS - thus $v_{orbit}\sin i$. Consider these cases $v_1$ and $v_2$ in the following graphics, observer on the right:
In case 1 (the vertical ellipse), the orbital velocity of the observed star is quite different: much smaller when it is moving away from us and much larger when moving towards us. This contrasts with the case where the semi-major axis of the orbit is aligned with our LOS: the minimum and maximum values for the velocity are equal.
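This asymmetry can be illustrated numerically with the standard radial-velocity curve $v_r(\nu) = K[\cos(\nu+\omega) + e\cos\omega]$, where $\omega$ is the argument of periastron and $e$ the eccentricity; the values below ($K=1$, $e=0.5$) are hypothetical, chosen only for illustration:

```python
import math

def rv_extremes(w, e=0.5, K=1.0, steps=10000):
    """Min/max of v_r(nu) = K*(cos(nu + w) + e*cos(w)) over one orbit."""
    vs = [K * (math.cos(2 * math.pi * i / steps + w) + e * math.cos(w))
          for i in range(steps)]
    return min(vs), max(vs)

# Line of apsides perpendicular to the LOS: asymmetric extremes (~ -0.5, 1.5)
print(rv_extremes(0.0))
# Semi-major axis along the LOS: symmetric extremes (~ -1.0, 1.0)
print(rv_extremes(math.pi / 2))
```

As the orientation $\omega$ changes due to apsidal precession, so do the observed velocity extremes.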
Precession causes the orientation of this ellipse to change over time - and that and its influence on the observed orbital velocities is what the authors hint at. | {
"domain": "astronomy.stackexchange",
"id": 6320,
"tags": "observational-astronomy, precession"
} |
Change of variable in function | Question: Suppose I have a function $h(\theta)$ measuring the height of a piston, with $\theta = \omega t$. I would like to know the vertical acceleration of this piston as $\omega$ changes at the point $\theta = \theta_0$. How would I differentiate $h$ to do this?
Answer: If $\omega$ is not constant, I don't really see a reason to write $\theta = \omega t$. You can do it, I just don't see why it would be convenient. It's probably better to just think of $\theta(t)$.
So, I'm assuming that you have actual expressions for the functions $h(\theta)$ and $\theta(t)$ somewhere, even if you didn't put them in your question. If I misunderstood, please correct me.
We will use the chain rule:
$$\frac{d}{dx} f(g(x)) = \frac{df}{dg}\Big(g(x)\Big) \, \frac{dg}{dx}(x)$$
More specifically:
$$\frac{d}{dt}f(\theta(t)) = \frac{df}{d\theta}\Big( \theta(t) \Big) \, \frac{d\theta}{dt}(t)$$
You are interested in the vertical acceleration, so that is the second derivative of $h$ with respect to time:
$$\begin{align}\frac{d^2}{dt^2} h(\theta(t)) &= \frac{d}{dt} \Bigg( \frac{dh}{d\theta} \bigg( \theta(t) \bigg) \, \frac{d\theta}{dt}(t) \Bigg) \\ &= \frac{d}{dt} \Bigg( \frac{dh}{d\theta} \bigg( \theta(t) \bigg) \Bigg) \, \frac{d\theta}{dt}(t) + \frac{dh}{d\theta} \bigg( \theta(t) \bigg) \, \frac{d^2\theta}{dt^2}(t) \\ &= \frac{d^2h}{d\theta^2} \bigg( \theta(t) \bigg) \bigg( \frac{d\theta}{dt} (t) \bigg)^2 + \frac{dh}{d\theta} \bigg( \theta(t) \bigg) \, \frac{d^2\theta}{dt^2}(t) \end{align}$$
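A quick numeric sanity check of this formula, with hypothetical example functions $h(\theta)=\cos\theta$ and $\theta(t)=t^2$ (any smooth pair would do):

```python
import math

h = math.cos
dh = lambda th: -math.sin(th)
d2h = lambda th: -math.cos(th)
theta = lambda t: t**2
dtheta = lambda t: 2 * t
d2theta = lambda t: 2.0

t0 = 0.7
# Chain-rule expression for the second time derivative
a_formula = d2h(theta(t0)) * dtheta(t0)**2 + dh(theta(t0)) * d2theta(t0)

# Central finite difference of h(theta(t)) as an independent check
eps = 1e-5
f = lambda t: h(theta(t))
a_numeric = (f(t0 + eps) - 2 * f(t0) + f(t0 - eps)) / eps**2

print(abs(a_formula - a_numeric) < 1e-4)  # True
```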
In your case, where you are interested in the acceleration at a specific angle $\theta = \theta_0$, you'll need to work out the values of all elements in the expression above at the moment that $\theta = \theta_0$ and use those to get your result. | {
"domain": "physics.stackexchange",
"id": 57298,
"tags": "classical-mechanics, kinematics, acceleration, differentiation, calculus"
} |
${\cal N} = 1$ SUSY Non-renormalization theorem | Question: In Ref. 1, on Page 53, the ${\cal N} = 1$ SUSY non-renormalization theorem is derived. One first specifies the symmetries of the general ${\cal N} = 1$ SUSY action in the superspace formalism, and then imposes holomorphicity of the Wilsonian effective action and finally takes limits of spurious fields. In this context my problem is as follows. After imposing consistency conditions due to the symmetries on the action, one arrives at the following form for the holomorphic term in the action
$$H ~=~ Y h(\Phi) + (\alpha X + g(\Phi))W^{\alpha}W_{\alpha},\tag{1} $$
where $X$ and $Y$ are the spurious fields. The contribution of this term to the action can be found after integrating with respect to the spacetime and $\theta$ coordinates. Immediately afterwards on Page 54, the following statement is made:
"In the limit $Y \to 0$, there is an equality $h(Φ) = W(Φ)$ at tree level, so $W(Φ)$ is not renormalized."
I don't understand the reasoning here. Since the spurion field $Y$ plays the role of coupling here, then shouldn't the same argument hold for any QFT potential in general which respects symmetries, which is obviously not true? What exactly is the role of holomorphicity in the above statement which makes it work?
References:
S. Krippendorf, F. Quevedo & O. Schlotterer, Cambridge Lectures on Supersymmetry and Extra Dimensions, arXiv:1011.1491; Subsection 4.2.2.
Answer: OP writes (v2):
Since the spurion field $Y$ plays the role of coupling here, then shouldn't the same argument hold for any QFT potential in general which respects symmetries, which is obviously not true?
TL;DR: The point is that the spurious superfield/coupling constant $Y$ has $R$-symmetry$^1$ charge 2, so that the Wilsonian action (with $R$-symmetry charge 0) cannot generate (and hence contain) holomorphic $F$-terms with higher powers of $Y$, unlike any old coupling constant.
Sketch proof of perturbative non-renormalization theorem for holomorphic $F$-terms:
This is explained in more detail in Ref. 3. The original ${\cal N}=1$ SUSY action with $X=0,Y=1$ is not $R$-symmetric. The holomorphic $F$-sector is
$$ W(\Phi) + f(\Phi) W^{\alpha}W_{\alpha}.\tag{A}$$
The extended ${\cal N}=1$ SUSY model with the spurious superfields $X$ and $Y$ has $R$-symmetry and the Peccei-Quinn symmetry
$$X\to X + \text{imaginary constant}.\tag{B}$$
The holomorphic $F$-sector is
$$Y W(\Phi) + (X+f(\Phi)) W^{\alpha}W_{\alpha}.\tag{C}$$
The Wilsonian effective action is defined as
$$\begin{align}
\exp&\left\{-\frac{1}{\hbar}W_c[J^H,\phi_L] \right\}\cr
~:=~~~&\int \! {\cal D}\frac{\phi_H}{\sqrt{\hbar}}~\exp\left\{ \frac{1}{\hbar} \left(-S[\phi_L+\phi_H]+J^H_k \phi_H^k\right)\right\}\cr~\stackrel{\text{Gauss. int.}}{\sim}
&{\rm Det}\left( (S_2)_{mn}\right)^{-1/2}
\exp\left\{-\frac{1}{\hbar} S_{\neq 2}\left[\phi_L+ \hbar \frac{\delta}{\delta J^H}\right] \right\} \cr
&\exp\left\{ \frac{1}{2\hbar} J^H_k (S_2^{-1})^{k\ell} J^H_{\ell} \right\}.\end{align}\tag{D}$$
The Wilsonian effective action consists only of connected Feynman diagrams. Only Feynman diagrams made entirely out of holomorphic $F$-vertices and $F$-propagators can generate $F$-terms in the Wilsonian effective action. The holomorphic $F$-sector of the Wilsonian effective action is protected by $R$-symmetry:
$$H ~=~ Y h(\Phi) + (\alpha X+g(\Phi)) W^{\alpha}W_{\alpha}.\tag{E}$$
In the limit $Y\to 0$ and $X\to \infty$ all heavy-field Feynman diagrams linear in $Y$ vanish beyond tree-level, so that the superpotential $W(\Phi)=h(\Phi)$ is unrenormalized perturbatively.
Let $Y=0$ from now on. In more detail, a $X$-vertex is proportional to $X$, while the gauge propagator is proportional to $1/X$. Therefore the number $L$ of loops is related to the number of $X$s,
$$\#(X)~=~V-I~=~1-L~\leq~1. \tag{F}$$
In the limit $X\to \infty$, we must also have $\#(X)\geq 0$. And $\#(X)=0$ corresponds to 1-loop, while $\#(X)=1$ corresponds to tree-level. We conclude that $\alpha=1$ and that $g(\Phi)-f(\Phi)$ contains only one-loop corrections. $\Box$
References:
S. Krippendorf, F. Quevedo & O. Schlotterer, Cambridge Lectures on Supersymmetry and Extra Dimensions, arXiv:1011.1491; Subsection 4.2.2.
N. Seiberg: arXiv:hep-ph/9309335 & arXiv:hep-ph/9408013.
S. Weinberg, Quantum Theory of Fields, Vol. 3; Section 27.6, p. 150-151.
--
$^1$ Explicitly, $\theta$, $\overline{\theta}$ and $Y$ are the only fundamental objects that carry $R$-charge, which is $1$, $-1$ and $2$, respectively. The super-field-strength $W_{\alpha}$ only has $R$-charge $1$ because it consists of one ${\cal D}$ and two $\overline{\cal D}$'s. Similarly, the chiral measure $\int d^2\theta~=~\partial^2_{\theta}$ has $R$-charge $-2$. We stress that this $R$-symmetry is not the conventional $R$-symmetry. | {
"domain": "physics.stackexchange",
"id": 54704,
"tags": "quantum-field-theory, renormalization, supersymmetry, effective-field-theory, superspace-formalism"
} |
How to estimate exposure in Roentgens/hr from 99m Tc at a given distance? | Question: I tried to find out how to calculate it myself, but it seems it's not something very straightforward... I hope it's not a bad question, sorry if it is!
So, this is known:
It's 99mTc
Let's assume it's point source
It's the gamma radiation
A = quantity is 50 MBq
E = photon energy is 140 keV
R = distance is 10 cm
I think I also need the radiation intensity (photon yield) for 99mTc before we go further - where can this be found?
After I know the photon yield value (y), I think I can find the amount of energy coming per second through a unit of area at the given distance:
ψ = AyE/(4πr^2) = y * 50*10^6 * 140*10^3 / (4 * 3.1416 * 10^2) ≈ y * 5,570 MeV/cm^2/s
Is this right so far?
But anyway even if "y" is known, this is where I stop...
How does this amount then translate into Roentgens/hr?? I'm really puzzled by how the Roentgen unit is defined - converting from known eV does not seem very straightforward.
Thanks!!
Answer: As @imabug suggested, it's really much simpler than I tried, with all the intermediate calculations already wrapped into well-known constants. Details are here; the page even contains the exposure rate constant for 99mTc:
https://en.wikipedia.org/wiki/Radiation_exposure#Exposure_rate_constant
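The same calculation as a short script (a sketch; the exposure rate constant is the one from the linked page):

```python
# Exposure rate from a Tc-99m point source via the exposure rate constant.
GAMMA = 0.72          # R*cm^2/(mCi*h), exposure rate constant for Tc-99m
A_mci = 50 / 37.0     # 50 MBq in mCi (1 mCi = 37 MBq), about 1.35 mCi
r_cm = 10.0           # distance from the point source

exposure_R_per_h = GAMMA * A_mci / r_cm**2
print(round(exposure_R_per_h * 1000, 2))  # ~9.73 mR/h
```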
For 50MBq, it's this:
50 MBq ≈ 1.35 mCi
Exposure rate constant for 99mTc: Γ = 0.720 R*cm^2/(mCi*h)
So the exposure at 10 cm is 0.72 * 1.35 / 10^2 ≈ 0.00973 R/h = 9.73 mR/h | {
"domain": "physics.stackexchange",
"id": 68034,
"tags": "nuclear-physics, radioactivity"
} |
Is there a kind of Noether's theorem for the Hamiltonian formalism? | Question: The original Noether's theorem assumes a Lagrangian formulation. Is there a kind of Noether's theorem for the Hamiltonian formalism?
Answer: Action formulation. It should be stressed that Noether's theorem is a statement about consequences of symmetries of an action functional (as opposed to, e.g., symmetries of equations of motion, or solutions thereof, cf. this Phys.SE post). So to use Noether's theorem, we first of all need an action formulation. How do we get an action for a Hamiltonian theory? Well, let us for simplicity consider point mechanics (as opposed to field theory, which is a straightforward generalization). Then the Hamiltonian action reads
$$ S_H[q,p] ~:=~ \int \! dt ~ L_H(q,\dot{q},p,t). \tag{1}$$
Here $L_H$ is the so-called Hamiltonian Lagrangian
$$ L_H(q,\dot{q},p,t) ~:=~\sum_{i=1}^n p_i \dot{q}^i - H(q,p,t). \tag{2}$$
We may view the action (1) as a first-order Lagrangian system $L_H(z,\dot{z},t)$ in twice as many variable
$$ (z^1,\ldots,z^{2n}) ~=~ (q^1, \ldots, q^n;p_1,\ldots, p_n).\tag{3}$$
Eqs. of motion. One may prove that Euler-Lagrange (EL) equations for the Hamiltonian action (1) leads to Hamilton's equations of motion
$$\begin{align} 0~\approx~&\frac{\partial S_H}{\partial z^I}
~=~\sum_{J=1}^{2n}\omega_{IJ}\dot{z}^J -\frac{\partial H}{\partial z^I}\cr\cr
\qquad\Updownarrow&\qquad\cr\cr
\dot{z}^I~\approx~&\{z^I,H\}\cr\cr
\qquad\Updownarrow&\qquad\cr\cr
\dot{q}^i~\approx~& \{q^i,H\}
~=~\frac{\partial H}{\partial p_i}\cr
\qquad \wedge&\qquad\cr
\dot{p}_i~\approx~& \{p_i,H\}~=~-\frac{\partial H}{\partial q^i}. \end{align}\tag{4}$$
[Here the $\approx$ symbol means equality on-shell, i.e. modulo the equations of motion (eom).] Equivalently, for an arbitrary quantity $Q=Q(q,p,t)$ we may collectively write the Hamilton's eoms (4) as
$$ \frac{dQ}{dt}~\approx~ \{Q,H\}+\frac{\partial Q}{\partial t}.\tag{5}$$
Returning to OP's question, the Noether theorem may then be applied to the Hamiltonian action (1) to investigate symmetries and conservation laws.
Statement 1: "A symmetry is generated by its own Noether charge."
Sketched proof: Let there be given an infinitesimal (vertical) transformation
$$\begin{align} \delta z^I~=~& \epsilon Y^I(q,p,t), \cr
&I~\in~\{1, \ldots, 2n\}, \cr
\delta t~=~&0,\end{align}\tag{6}$$
where $Y^I=Y^I(q,p,t)$ are (vertical) generators, and $\epsilon$ is an
infinitesimal parameter. Let the transformation (6) be a quasisymmetry of the Hamiltonian Lagrangian
$$ \delta L_H~=~\epsilon \frac{d f^0}{dt},\tag{7}$$
where $f^0=f^0(q,p,t)$ is some function. By definition, the bare Noether charge is
$$ Q^0~:=~
\sum_{I=1}^{2n}\frac{\partial L_H}{\partial \dot{z}^I} Y^I \tag{8}$$
while the full Noether charge is
$$ Q~:=~Q^0-f^0. \tag{9} $$
Noether's theorem then guarantees an off-shell Noether identity
$$\begin{align}
\sum_{I=1}^{2n}\dot{z}^I \frac{\partial Q}{\partial z^I}
&+\frac{\partial Q}{\partial t}\cr
~=~& \frac{dQ}{dt}\cr
~\stackrel{\text{NI}}{=}~&
-\sum_{I=1}^{2n} \frac{\delta S_H}{\delta z^I}Y^I \cr
~\stackrel{(4)}{=}~&
\sum_{I,J=1}^{2n}\dot{z}^I\omega_{IJ}Y^J\cr
&+\sum_{I=1}^{2n} \frac{\partial H}{\partial z^I}Y^I . \end{align} \tag{10}$$
By comparing coefficient functions of $\dot{z}^I$ on the 2 sides of eq. (10), we conclude that the full Noether charge $Q$ generates the quasisymmetry transformation
$$ Y^I~=~\{z^I,Q\}.\tag{11}$$
$\Box$
Statement 2: "A generator of symmetry is essentially a constant of motion."
Sketched proof: Let there be given a quantity $Q=Q(q,p,t)$ (a priori not necessarily the Noether charge) such that the infinitesimal transformation
$$\begin{align} \delta z^I~=~& \{z^I,Q\}\epsilon,\cr
&I~\in~\{1, \ldots, 2n\}, \cr
\delta t~=~&0,\cr
\delta q^i~=~&\frac{\partial Q}{\partial p_i}\epsilon, \cr
\delta p_i~=~& -\frac{\partial Q}{\partial q^i}\epsilon, \cr
&i~\in~\{1, \ldots, n\},\end{align}\tag{12}$$
generated by $Q$, and with infinitesimal parameter $\epsilon$, is a quasisymmetry (7) of the Hamiltonian Lagrangian. The bare Noether charge is by definition
$$\begin{align} Q^0~:=~&
\sum_{I=1}^{2n}\frac{\partial L_H}{\partial \dot{z}^I} \{z^I,Q\}\cr
~\stackrel{(2)}{=}~&
\sum_{i=1}^n p_i \frac{\partial Q}{\partial p_i}.\end{align}\tag{13}$$
Noether's theorem then guarantees an off-shell Noether identity
$$\begin{align} \frac{d (Q^0-f^0)}{dt}
~\stackrel{\text{NI}}{=}~&
-\sum_{I=1}^{2n}\frac{\delta S_H}{\delta z^I} \{z^I,Q\}\cr
~\stackrel{(2)}{=}~&
\sum_{I=1}^{2n}\dot{z}^I \frac{\partial Q}{\partial z^I} +\{H,Q\}\cr
~=~&\frac{dQ}{dt}-\frac{\partial Q}{\partial t} +\{H,Q\}. \end{align}\tag{14}$$
Firstly, Noether theorem implies that the corresponding full Noether charge $Q^0-f^0$ is conserved on-shell
$$ \frac{d(Q^0-f^0)}{dt}~\approx~0,\tag{15}$$
which can also be directly inferred from eqs. (5) and (14). Secondly, the off-shell Noether identity (14) can be rewritten as
$$\begin{align} \{Q,H\}+\frac{\partial Q}{\partial t}
~\stackrel{(14)+(17)}{=}&~\frac{dg^0}{dt}\cr
~=~~~&\sum_{I=1}^{2n}\dot{z}^I \frac{\partial g^0}{\partial z^I}+\frac{\partial g^0}{\partial t},\end{align}\tag{16} $$
where we have defined the quantity
$$ g^0~:=~Q+f^0-Q^0.\tag{17}$$
We conclude from the off-shell identity (16) that (i) $g^0=g^0(t)$ is a function of time only,
$$ \frac{\partial g^0}{\partial z^I}~=~0\tag{18}$$
[because $\dot{z}$ does not appear on the lhs. of eq. (16)]; and (ii) that the following off-shell identity holds
$$ \{Q,H\} +\frac{\partial Q}{\partial t}
~=~\frac{\partial g^0}{\partial t}.\tag{19}$$
Note that the quasisymmetry and the eqs. (12)-(15) are invariant if we redefine the generator
$$ Q ~~\longrightarrow~~ \tilde{Q}~:=~Q-g^0 .\tag{20} $$
Then the new $\tilde{g}^0=0$ vanishes. Dropping the tilde from the notation, the off-shell identity (19) simplifies to
$$ \{Q,H\} +\frac{\partial Q}{\partial t}~=~0.\tag{21}$$
Eq. (21) is the defining equation for an off-shell constant of motion $Q$.
$\Box$
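As a concrete numerical illustration of eq. (21): for the planar Kepler Hamiltonian (in hypothetical units $m=k=1$), both the angular momentum and the $x$-component of the Laplace-Runge-Lenz vector of Example 4 below have vanishing Poisson bracket with $H$. A sketch using central-difference derivatives:

```python
import math

def H(q1, q2, p1, p2):                      # planar Kepler Hamiltonian
    return 0.5 * (p1**2 + p2**2) - 1.0 / math.hypot(q1, q2)

def Lz(q1, q2, p1, p2):                     # angular momentum
    return q1 * p2 - q2 * p1

def A1(q1, q2, p1, p2):                     # x-component of the LRL vector
    return p2 * Lz(q1, q2, p1, p2) - q1 / math.hypot(q1, q2)

def poisson(F, G, z, h=1e-6):
    """{F,G} via central differences; z = (q1, q2, p1, p2)."""
    def d(fn, i):
        zp, zm = list(z), list(z)
        zp[i] += h
        zm[i] -= h
        return (fn(*zp) - fn(*zm)) / (2 * h)
    return d(F, 0) * d(G, 2) + d(F, 1) * d(G, 3) \
         - d(F, 2) * d(G, 0) - d(F, 3) * d(G, 1)

z = (1.2, -0.3, 0.4, 0.9)                   # arbitrary phase-space point
print(abs(poisson(Lz, H, z)) < 1e-6)        # True
print(abs(poisson(A1, H, z)) < 1e-6)        # True
```

Since neither quantity depends explicitly on time, $\{Q,H\}=0$ is exactly the off-shell condition (21).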
Statement 3: "A constant of motion generates a symmetry and is its own Noether charge."
Sketched proof: Conversely, if there is given a quantity $Q=Q(q,p,t)$ such that eq. (21) holds off-shell, then the infinitesimal transformation (12) generated by $Q$ is a quasisymmetry of the Hamiltonian Lagrangian
$$\begin{align}
\delta L_H
~\stackrel{(2)}{=}~~&
\sum_{i=1}^n\dot{q}^i \delta p_i
-\sum_{i=1}^n\dot{p}_i \delta q^i \cr
&-\delta H
+\frac{d}{dt}\sum_{i=1}^np_i \delta q^i \cr
~\stackrel{(12)+(13)}{=}&~
-\sum_{I=1}^{2n}\dot{z}^I
\frac{\partial Q}{\partial z^I}\epsilon\cr
&-\{H,Q\}\epsilon + \epsilon \frac{d Q^0}{dt}\cr
~\stackrel{(21)}{=}~~& \epsilon \frac{d (Q^0-Q)}{dt}\cr
~\stackrel{(23)}{=}~~& \epsilon \frac{d f^0}{dt},\end{align}\tag{22}$$
because $\delta L_H$ is a total time derivative. Here we have defined
$$ f^0~=~ Q^0-Q .\tag{23}$$
The corresponding full Noether charge
$$ Q^0-f^0~\stackrel{(23)}{=}~Q \tag{24}$$
is just the generator $Q$ we started with! Finally, Noether's theorem states that the full Noether charge is conserved on-shell
$$ \frac{dQ}{dt}~\approx~0.\tag{25}$$
Eq. (25) is the defining equation for an on-shell constant of motion $Q$.
$\Box$
Discussion. Note that it is overkill to use Noether's theorem to deduce eq. (25) from eq. (21). In fact, eq. (25) follows directly from the starting assumption (21) by use of Hamilton's eoms (5) without the use of Noether's theorem! For the above reasons, as purists, we disapprove of the common praxis to refer to the implication (21)$\Rightarrow$(25) as a 'Hamiltonian version of Noether's theorem'.
Interestingly, an inverse Noether's theorem works for the Hamiltonian action (1), i.e. a on-shell conservation law (25) leads to an off-shell quasisymmetry (12) of the action (1), cf. e.g. my Phys.SE answer here.
In fact, one may show that (21)$\Leftrightarrow$(25), cf. my Phys.SE answer here.
Example 4: The Kepler problem: The symmetries associated with conservation of the Laplace-Runge-Lenz vector in the Kepler problem are difficult to understand via a purely Lagrangian formulation in configuration space
$$ L~=~ \frac{m}{2}\dot{q}^2 + \frac{k}{q},\tag{26}$$
but may easily be described in the corresponding Hamiltonian formulation in phase space, cf. Wikipedia and this Phys.SE post. | {
"domain": "physics.stackexchange",
"id": 8409,
"tags": "lagrangian-formalism, symmetry, conservation-laws, hamiltonian-formalism, noethers-theorem"
} |
$G$-injective MPS and symmetry-broken phases | Question: First, a little bit of motivation. I was reading the paper "Matrix Product States and Projected Entangled Pair States" to try to learn more about MPS representations of symmetry broken states. There's a discussion of the GHZ states $\frac{|{0000...}\rangle \pm |{1111...}\rangle}{\sqrt{2}}$ which are the ground states deep in the symmetry-broken phase of the transverse field Ising model (i.e. the ground states of $-\sum_i \sigma^z_i \sigma^z_{i+1} - h\sum_i \sigma^x_i$ for $h=0$). The GHZ states have $A$ matrices of the form $$A^{[0]} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$$ and $$A^{[1]} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$$
The paper makes the following interesting statement:
Because the corresponding MPS is not injective, the tensors $A^i$ exhibit
a non-trivial symmetry $G$ on the virtual level represented by the
matrices that commute simultaneously with all $A^i$ ($\sigma^z$ in this GHZ
case). MPS with this property are called $G$-injective. Note that this
symmetry is dual to the physical symmetry which permutes the blocks
(represented by $\sigma^x$ in the case under consideration).
I interpret this as follows: being $G$-injective implies there exists a basis on the bond space for which the $A^i$ matrices are block diagonal. The set of block diagonal matrices commutes with symmetry operators that are themselves block diagonal and proportional to the identity on each block. However, it's not clear to me what is meant by the fact that these symmetry operators that commute with the $A$ matrices are "dual" to physical symmetry operators. It seems like the paper is making a claim that the symmetry operators that permute blocks are physical in some sense and related to the symmetry of the phase.
Indeed, $\prod_i \sigma^x_i$ is the symmetry operator for the transverse field Ising model, and $(\sigma^x)^\dagger A^i \sigma^x$ permutes the $1$ by $1$ blocks of the $A^i$ matrices, so it seems like there is a close connection.
My aim is to try to understand the theory above within a symmetry broken phase but away from any special points within that phase. My main interest in the theory above is that at the special GHZ point above, in the physical basis of $\sigma^x$ eigenstates, $|+\rangle$ and $|-\rangle$, we have that $$A^{[+]} = \frac{1}{\sqrt{2}} I$$ and $$A^{[-]} = \frac{1}{\sqrt{2}} \sigma^z.$$ This is a special form for the $A$ matrices - it makes them proportional to unitary matrices, and I believe this is related to successfully implementing a teleportation protocol via $\sigma^x$ measurement on the GHZ state. Suppose I am in a symmetry broken phase like that given by $-\sum_i \sigma^z_i \sigma^z_{i+1} - h\sum_i \sigma^x_i$ for very small but nonzero $h$. In this basis away from the GHZ point but still in the phase, could I always write the $A$ matrices as $A^{[+]} = I \otimes B^{[+]}$ and $A^{[-]} = \sigma^z \otimes B^{[-]}$? Here, the $B$ matrices are arbitrary matrices to help allow my bond-dimension to grow away from the special GHZ point.
Answer:
Suppose I am in a symmetry broken phase like that given by $-\sum_i
\sigma^z_i \sigma^z_{i+1} - h\sum_i \sigma^x_i$ for very small but
nonzero $h$. In this basis away from the GHZ point but still in the
phase, could I always write the $A$ matrices as $A^{[+]} = I \otimes
B^{[+]}$ and $A^{[-]} = \sigma^z \otimes B^{[-]}$?
The answer is yes, and I thank Frank Pollmann for a useful discussion. Note that in the symmetry broken phase, we can construct two symmetry-broken ground states in the thermodynamic limit. One symmetry broken ground state will be mostly up, with scattered downs, and the other will be mostly down, with scattered ups.
The mostly up state will be $$|\psi_u\rangle = \sum_{i_1,...,i_L} \text{Tr}[A_u^{[i_1]}A_u^{[i_2]}...A_u^{[i_L]}] |i_1 i_2,...,i_L\rangle$$
and the mostly down state will be
$$|\psi_d\rangle = \sum_{i_1,...,i_L} \text{Tr}[A_d^{[i_1]}A_d^{[i_2]}...A_d^{[i_L]}] |i_1 i_2,...,i_L\rangle$$
Note that the MPS matrices are related to one another by the symmetry transformation $\prod_i \sigma^x_i$, which interchanges the two symmetry broken ground states: $$A_u^{[0]} = A_d^{[1]}\,\,\text{ and }A_u^{[1]} = A_d^{[0]} $$
The symmetric ground state will be a cat state built out of an MPS with the direct sum of these matrices:
$$|\psi_c\rangle = \sum_{i_1,...,i_L} \text{Tr}[\begin{pmatrix} A_u^{[i_1]} & 0 \\ 0 & A_d^{[i_1]} \end{pmatrix}\begin{pmatrix} A_u^{[i_2]} & 0 \\ 0 & A_d^{[i_2]} \end{pmatrix}...\begin{pmatrix} A_u^{[i_L]} & 0 \\ 0 & A_d^{[i_L]} \end{pmatrix}] |i_1 i_2,...,i_L\rangle$$
Let's call the MPS in the symmetric cat state $$A_c^{[i]} = \begin{pmatrix} A_u^{[i]} & 0 \\ 0 & A_d^{[i]} \end{pmatrix}$$
Note that $$A_c^{[+]} = \frac{1}{\sqrt{2}} \left( \begin{pmatrix} A_u^{[0]} & 0 \\ 0 & A_d^{[0]} \end{pmatrix} + \begin{pmatrix} A_u^{[1]} & 0 \\ 0 & A_d^{[1]} \end{pmatrix} \right) = \frac{1}{\sqrt{2}} \begin{pmatrix} A_u^{[0]}+A_u^{[1]} & 0 \\ 0 & A_d^{[0]}+A_d^{[1]} \end{pmatrix}$$ and note that we can use the relationship between $A_u$ and $A_d$ to rewrite $A_d^{[0]}+A_d^{[1]}$ as $A_u^{[0]}+A_u^{[1]}$, giving
$$A_c^{[+]} = \frac{1}{\sqrt{2}} \begin{pmatrix} A_u^{[0]}+A_u^{[1]} & 0 \\ 0 & A_u^{[0]}+A_u^{[1]} \end{pmatrix} = I \otimes \frac{A_u^{[0]}+A_u^{[1]}}{\sqrt{2}}. $$
A similar computation shows that
$$A_c^{[-]} = \sigma^z \otimes \frac{A_u^{[0]}-A_u^{[1]}}{\sqrt{2}}.$$
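The postulated form can also be checked numerically at the GHZ point itself (a sketch; $A_u$ and $A_d$ here are the bond-dimension-1 GHZ tensors):

```python
import numpy as np

A_u = [np.array([[1.0]]), np.array([[0.0]])]  # mostly-up MPS (here: pure up)
A_d = [A_u[1], A_u[0]]                        # related by the Z2 symmetry

s = 1 / np.sqrt(2)

def block(Ai_u, Ai_d):
    # direct sum of the two symmetry-broken tensors
    return np.block([[Ai_u, np.zeros_like(Ai_u)],
                     [np.zeros_like(Ai_d), Ai_d]])

A_plus = s * (block(A_u[0], A_d[0]) + block(A_u[1], A_d[1]))
A_minus = s * (block(A_u[0], A_d[0]) - block(A_u[1], A_d[1]))

B = s * (A_u[0] + A_u[1])
Bm = s * (A_u[0] - A_u[1])
I, sz = np.eye(2), np.diag([1.0, -1.0])

print(np.allclose(A_plus, np.kron(I, B)))     # True
print(np.allclose(A_minus, np.kron(sz, Bm)))  # True
```

Replacing `A_u` by the MPS tensors of a generic symmetry-broken ground state (with larger bond dimension) leaves the structure of the check unchanged.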
These are exactly of the postulated form, confirming the guess. | {
"domain": "physics.stackexchange",
"id": 96468,
"tags": "symmetry, symmetry-breaking, spin-chains, tensor-network"
} |
Cleaning with Vinegar - what surfaces does it react badly with? Metal window sills ok? | Question: I'm cleaning up some mould around the home and I have some substance on my window sills which I'm not quite sure if it's dirt or mould.
I have read about using Vinegar to clean up mould, but an article I was reading awhile back said you shouldn't use Vinegar on certain things as it can eat away at it or something because it's acidic; I thought this may have been metal and/or painted substances but I may be wrong.
I know you shouldn't use it on granite, marble and natural stone, but was wondering if using it on the painted metal window sills is ok or should I use something like diluted Tea Tree Oil instead, or would that be too oily?
FYI I would be using White Vinegar.
Here is a pic of my window sills:
Answer: Avoid aluminum alloys when working with vinegar solutions!
With time, the pH effect of acetic acid can penetrate the protective oxide coating. The exposed Al will even react with water:
2 Al (s) + 6 H2O (l) --> 2 Al(OH)3 + 3 H2 (g)
and, in the presence of alloying metals, galvanic corrosion can occur (especially if the 'dirt' is rich in salts that can serve as a good electrolyte), which accelerates the attack on the aluminium.
To quote a source:
corrosivity environment for aluminium in food industry are foodstuffs with pH 3 – 5, such as fruit juices, jams and acidic canned fruits or hot gravies, sauces as well as dressings, vegetables and fish pickled in brines with 1 –3 % salt [1]. In particular, acetic acid needs to be taken into corrosion consideration, due to its wide usage in food industry (vegetable and fish pickling), and its representative properties among acids and juices in fruits, vegetables and other organic materials that can corrode metals. Although aluminium has a good resistance to acetic acid solution at room temperature, aluminium can corrode in almost any concentration of acetic acid at any temperature if the acid is contaminated with the proper species [2].
Further comments related to the presence of select metals:
Although aluminium has a good resistance to almost all the concentrations of acetic acid at room temperature, care must be taken that the metal is free of other impurities such as iron, copper, tin and lead even in traces [2]. With increasing purity of aluminium, its resistance to acetic acid solution increases, and 99.5% aluminium can be used for the majority of engineering purposes, but components added to its alloy can increase the corrosion of aluminium in acetic acid solution [14]. Since the metal corrosion occurs via electrochemical reactions at the interface between the metal and an electrolyte solution, electrochemical techniques are ideal for the study of corrosion processes.
And importantly, the effect of pH:
It is well known that aluminium resistance is related to the thin and compact layer of naturally formed oxide on aluminium surface, but this oxide layer is stable only in pH range 4-8. Lower or higher pH values caused prominent destroying of protective layer and thus the significant metal dissolution [15]. | {
"domain": "chemistry.stackexchange",
"id": 13159,
"tags": "organic-chemistry, food-chemistry, cleaning"
} |
how to publish kinect laser scan 2d image in a browser | Question:
Hello.. I am interested in publishing a kinect laser scan in a browser to be broadcast over the internet. Something close to a player laser scan image or an rviz 2dnav image would be perfect. I am currently using WebUi to control my turtlebot, with button control for cmd_vel and a webcam feed to view the environment. But I would like to transmit the kinect 2dnav scan, like in rviz, to get an idea of where I might be in a saved map, with a view of the current laser scan in the browser. I was thinking about broadcasting a screen image of the rviz window in place of the webcam image, but I hope that someone on here might be able to point me to a better solution.
Originally posted by Atom on ROS Answers with karma: 458 on 2012-08-12
Post score: 1
Answer:
rosjs / rosbridge are probably your best bet. This uses javascript to talk to a ROS master over the internet. There's a barebones tutorial on how to use the rosbridge visualization stuff here that can probably get you started: http://www.ros.org/wiki/rosjs_tutorials/Tutorials/Visualization
Originally posted by jbohren with karma: 5809 on 2012-08-14
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Atom on 2012-08-15:
Thanks Jonathan for the reply... I have known about rosjs and all the cool features coming out of the stack. But was intimidated on approaching it on my own but now seems like a better time than ever. Your link is definitely the place to start, do you know of any good examples that might be useful?
Comment by jbohren on 2012-08-15:
You can probably look at wviz: http://www.ros.org/wiki/wviz Unfortunately, their tutorials appear to be hollow. | {
"domain": "robotics.stackexchange",
"id": 10578,
"tags": "rviz, turtlebot, rosbridge"
} |
Flatten an array - loop or reduce? | Question: Problem: Concat all sub-arrays of a given array.
Example input: [[0, 1], [2, 3], [4, 5]]
Example output: [0, 1, 2, 3, 4, 5]
Solution A: Use a loop
var flattened = [];
for (var i = 0; i < input.length; ++i) {
    var current = input[i];
    for (var j = 0; j < current.length; ++j)
        flattened.push(current[j]);
}
Solution B: Use reduce
var flattened = input.reduce(function (a, b) {
    return a.concat(b);
}, []);
Solution B looks much shorter and easier to understand, but, it seems to be much more resource-wasteful - for each element of the input, a new array is created - the concatenation of a and b. On the contrary, solution A uses a single array 'flattened', which is updated during the loop.
So, my question is: which solution is better? Is there a solution that combines the best of both worlds?
Answer: Here is the performance test for these two and a couple more approaches (one suggested by @elclanrs in the comments).
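For reference, two alternative approaches of the kind such benchmarks typically include (both assume a single level of nesting, as in the question):

```javascript
var input = [[0, 1], [2, 3], [4, 5]];

// One-shot concat: spreads the sub-arrays as arguments of a single call
var flatA = [].concat.apply([], input);

// Single output array, bulk-pushing each sub-array
var flatB = [];
for (var i = 0; i < input.length; ++i) {
    Array.prototype.push.apply(flatB, input[i]);
}

console.log(flatA.join(','));  // 0,1,2,3,4,5
console.log(flatB.join(','));  // 0,1,2,3,4,5
```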
The performance differences will vary significantly across different browsers, and even across different versions of the same browser, as browsers these days try to optimize JavaScript very aggressively.
Unless you are dealing with very large arrays or this operation is performed repeatedly in quick succession, I would suggest you use the simplest and clearest approach. That being said, the loop solution is not that complex or long anyway (and it performs better than the others, especially on Firefox) | {
"domain": "codereview.stackexchange",
"id": 12606,
"tags": "javascript, comparative-review"
} |
Any OpenCV 3 with Kinetic? | Question:
Based on the recommended platforms found here:
http://www.ros.org/reps/rep-0003.html
Can any OpenCV 3 be installed with Kinetic? My project leans more toward OpenCV than ROS so I'd like to get OpenCV 3.4.6 (or any 3.4 really) if possible - I read somewhere the default with Kinetic was 3.1 or 3.2.
Also there is no compatibility with OpenCV 4.1 for any ROS distribution, correct?
Thanks!
Originally posted by cjk11091 on ROS Answers with karma: 11 on 2019-07-12
Post score: 1
Answer:
I believe that Kinetic comes with OpenCV 3.2 or 3.3 by default.
You can use newer versions of OpenCV with ROS, but you will have to modify and compile any other ROS packages which use OpenCV to use the same version you compiled.
So far there is no compatibility with OpenCV 4.1, even for Melodic or Dashing. I took a brief stab at building the package for OpenCV 4.1, but it wasn't as simple as upgrading between OpenCV 3 versions. You may be able to build OpenCV from scratch on each system, and then modify/recompile the other ROS packages that depend on OpenCV.
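For a catkin/CMake package, pinning the build against a custom OpenCV install might look like the following fragment (the install path and target name are hypothetical; `OpenCV_INCLUDE_DIRS` and `OpenCV_LIBS` are the variables exported by OpenCV's own CMake config):

```cmake
# Hypothetical CMakeLists.txt fragment: point a package at a locally
# built OpenCV 3.4 instead of the ROS-shipped version.
find_package(OpenCV 3.4 REQUIRED
  PATHS /usr/local/opencv-3.4.6   # wherever your custom build was installed
  NO_DEFAULT_PATH)
include_directories(${OpenCV_INCLUDE_DIRS})
target_link_libraries(my_node ${OpenCV_LIBS})
```

Every other ROS package in the workspace that links OpenCV would need the same treatment to avoid mixing two OpenCV versions in one process.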
Originally posted by daniel_dsouza with karma: 258 on 2019-07-12
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 33412,
"tags": "ros, ros-kinetic, opencv3"
} |
Sampling H(z) to get DFT | Question: Suppose that I have an $H(z)$ and I sample it to get a DFT of 15 values. Let's call this DFT $H_{1}[k]$. Then, suppose I inverse-transform $H(z)$ and grab the first 10 values of the sequence, and then I add 5 zeros. After this, I calculate the DFT of this new sequence. Let's call it $H_{2}[k]$. What differences would these two DFTs have between each other (if any)?
Answer: When grabbing the first 10 samples of $h_1[n]$ you are essentially multiplying the whole sequence $h_1[n]$ with the rectangular window
$$
w[n] =
\begin{cases}
1 & 0 \leq n < 10 \\
0 & \mbox{otherwise}
\end{cases}
$$
whose DFT is given by:
$$
\begin{align}
W\left[k\right]
&= \sum_{n=0}^{9} e^{-j 2\pi k n / 15} \\
&= \frac{1 - e^{-j 2\pi k \cdot 10/15}}{1 - e^{-j 2\pi k / 15}}
\end{align}
$$
By the circular convolution theorem:
Multiplication in the discrete-time domain becomes circular convolution in the discrete-frequency domain. Circular convolution in the discrete-time domain becomes multiplication in the discrete-frequency domain.
In other words,
$$
\mbox{DFT}\left\{h_1[n] \cdot w[n]\right\}
= \frac{1}{N} H_1\left[k\right] \otimes W\left[k\right]
$$
where $\otimes$ is the circular convolution operator.
This can be seen with the following matlab script (you can play around with h1 to see what happens with other waveforms):
n = [0:14]; % discrete-time index
k = [0:14];
h1 = sin(pi*n/14); % some arbitrary function
% window function
w = [ones(1,10) zeros(1,5)];
% time domain operation
h2 = h1.*w;
% convert to frequency domain
H2t = fft(h2);
% frequency domain operation
W = fft(w);
H1 = fft(h1);
H2f = cconv(W,H1,15) / 15; % length-15 circular convolution (the third argument is required; without it cconv returns the linear convolution)
% Compare H2t & H2f
figure(1);
subplot(2,1,1);
hold off; plot(k, abs(H2t), 'b');
hold on; plot(k, abs(H2t), 'bx');
hold on; plot(k, abs(H2f), 'ro');
title('Magnitude');
subplot(2,1,2);
hold off; plot(k, angle(H2t), 'b');
hold on; plot(k, angle(H2t), 'bx');
hold on; plot(k, angle(H2f), 'ro');
title('Phase');
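For readers without MATLAB, the same check can be sketched in numpy. Since numpy has no built-in `cconv`, the circular convolution below is computed through the DFT identity cconv(a, b) = IDFT{DFT{a} · DFT{b}}:

```python
import numpy as np

N = 15
n = np.arange(N)
h1 = np.sin(np.pi * n / 14.0)                   # same arbitrary test signal
w = np.concatenate([np.ones(10), np.zeros(5)])  # 10-sample rectangular window

# Time-domain operation: truncate h1 to its first 10 samples.
H2t = np.fft.fft(h1 * w)

# Frequency-domain operation: circular convolution of the DFTs, scaled by 1/N.
H1 = np.fft.fft(h1)
W = np.fft.fft(w)
cconv = np.fft.ifft(np.fft.fft(H1) * np.fft.fft(W))  # length-N circular conv
H2f = cconv / N

# The two routes agree, confirming DFT{h1*w} = (1/N) * (H1 circ-conv W).
```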
Note that if $h_1[n]$ is already no longer than 10 samples (i.e. if those truncated samples are already zero), then windowing in the time-domain will have no effect and similarly the circular convolution with $W[k]$ in the frequency domain will happen to produce $H_1[k] = \frac{1}{N} H_1[k] \otimes W[k]$ (but only as a special case). | {
"domain": "dsp.stackexchange",
"id": 3506,
"tags": "sampling, dft, z-transform"
} |
Relationship between fps and time stamps | Question: I am getting a bit confused about fps (frame rate) in a video.
I build a video (using python and opencv) using 600 frames and I specify 10 fps.
However I want to also record the timestamps for each frame in a text file.
If the time for the first frame is 0 ns what would be the time stamp for the second frame provided the values I gave above?
EDIT:
About my confusion: if we have 10 frames spaced 0.1 s apart, we have
| | | | | | | | | |
-------------------------------------------------------------
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9
As you can see, the total time spanned by the 10 frames is less than one second, so the rate would not be 10 fps
| | | | | | | | | |
-------------------------------------------------------------
0 0.11 0.22 0.33 0.44 0.56 0.67 0.78 0.89 1
Wouldn't this be the correct calculation?
Answer: Python implementation:
import numpy as np
frames = 600
# frames per second
fps = 10
# sampling time in seconds
t_s = np.arange(frames) / float(fps)
# sampling time in nano-seconds
t_ns = t_s * (10**9)
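To answer the timestamp question directly: consecutive frames are 1/fps = 0.1 s apart, so with the first frame at 0 ns, the second frame is stamped at 10^8 ns. A quick standalone check, using the same indexing scheme:

```python
import numpy as np

fps = 10
frames = 600
# frame k is stamped at k / fps seconds, i.e. the first frame at t = 0
t_ns = np.arange(frames) / float(fps) * 1e9

first_ns = int(round(t_ns[0]))    # 0 ns
second_ns = int(round(t_ns[1]))   # 100000000 ns = 0.1 s
```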
Regarding the confusion related to the duration, the problem is that the sampling does not start at $t=0$, where the first sample takes place. If you consider that the sampling starts at $t=-\frac{1}{F_s}$ then it would all make sense.
Please do not neglect the integration time, which is part of the sampling process. For example, in the attached figure the sampling starts at $t=0$ but the first sample is at $t=\frac{1}{F_s}$. | {
"domain": "dsp.stackexchange",
"id": 11168,
"tags": "video-processing"
} |
Why are general QFT states locally similar to the vacuum? | Question: Recently, when reading about the Black Hole Information Paradox, many authors have said that any state in a quantum field theory "looks" like the vacuum at sufficiently short distances. One example from a recent review by Marolf is
First recall that at sufficiently short distances any state of our quantum field will be well approximated by the vacuum. We must then also recall that the vacuum of quantum field theory contains divergent ultraviolet correlations between spacelike separated points.
I have not managed to find a detailed explanation why this should be so, either by doing a quick search on google or looking at my QFT textbooks, so I would appreciate if either a detailed explanation (with all the maths) or at least a heuristic (with maybe a link to some book/article) could be given.
Answer:
First recall that at sufficiently short distances any state of our quantum field will be well approximated by the vacuum. We must then also recall that the vacuum of quantum field theory contains divergent ultraviolet correlations between spacelike separated points.
Given that Marolf's review concerns quantum fields in the strongly curved space-times of black holes, I think this statement may concern the fact that physically reasonable field states $\omega$ are Hadamard states satisfying the Hadamard condition. That is, the singularity structure of their 2-point correlation function $\omega(\phi(x) \phi(y))$ has the general form
$$
\omega(\phi(x) \phi(y)) = \frac{u}{\sigma} + v\log\sigma + w \tag{*}
$$
where $\sigma(x, y)$ is the square geodesic distance between points $x$ and $y$, and $u$, $v$, $w$ are smooth functions.
The reason is that if the Fock space is constructed starting from a Hadamard vacuum state $\omega_0$, with a 2-point function $\omega_0(\phi(x) \phi(y))$ satisfying (*), then any allowed physical state $\omega$ will inherit the same singularity structure for its 2-point correlations and the same type of short range (ultraviolet) behavior as the vacuum state. In other words, at short distances physically allowed states "look like the vacuum" and share its divergent ultraviolet correlations.
In fact, the Hadamard condition is imposed precisely to ensure this behavior, because this is what is required to construct a meaningful (finite) stress-energy tensor. Specifically, finite stress-energy requires a finite $\omega(\phi^2(x))$, which is renormalized by "subtracting the vacuum"
$$
\omega(\phi^2(x)) \sim \lim_{x\rightarrow y} \Big(\omega(\phi(x) \phi(y)) - \omega_0(\phi(x) \phi(y)) \Big) < \infty
$$
If the Fock states had a different behavior at small scales compared to the vacuum, they would compromise the energy spectrum.
Source: A. Wipf, Quantum fields near black holes
Note: According to an argument by Srednicki, for a field in a vacuum state the entropy of entanglement of the degrees of freedom in a bounded region of space with the degrees of freedom outside that region is proportional both to the ultraviolet cutoff of the theory and to the area of the boundary of said region (no, no black hole involved). A similar argument may be constructed for two spacelike separated bounded regions and their mutual entanglement entropy. But Marolf's reference is to spacelike separated points, which in this context would mean vanishing boundary areas, etc. So I think it's much more probable that his reference to divergent ultraviolet correlations concerns instead the obviously divergent 2-point function. | {
"domain": "physics.stackexchange",
"id": 39377,
"tags": "quantum-field-theory, vacuum, qft-in-curved-spacetime"
} |
Determining most efficient algorithm for a problem | Question: This is a very straightforward question, and I apologize if it is a repeat. All I want to know is if there is any general method for determining how efficient the most efficient algorithm for some problem is, in terms of time. Barring that, are there any specific methods used that have accomplished the same for a particular problem? If this is not possible in general or at all what are the hurdles to this, and is there any research along these lines?
Answer: Note that there is a problem with no maximally efficient algorithm.
The problem "Is it the case that x has an even number of 1s and the Turing machine M halts within length(x) steps?" is such that: if M halts, then the problem takes linear time to solve, since for inputs whose length exceeds the amount of time it takes for M to halt, the algorithm will have to look at each bit of x; and if M does not halt, then the problem can be solved in constant time, since the answer is no for all x.
Therefore, deciding how efficient the most efficient algorithm for some problem is, to the
extent that makes sense, is at least as hard as the halting problem. On the other hand,
you may be interested in the circuit minimization problem and/or DFA minimization. | {
"domain": "cs.stackexchange",
"id": 4762,
"tags": "algorithms, time-complexity, efficiency"
} |
Will a 5200mAh 30C 22.2V 6S Lipo battery work with | Question: I am building a Quadcopter and I was wondering if a 5200mAh 30C 22.2V 6S Lipo battery will work with a 40Amp Esc's, MT4108 370 KV Motors, and GEMFAN 1470 Carbon Fiber Props. The over all payload will be about 5-6 pounds.
Answer: Your motors are rated for 466W max, which with a 6S battery gives about 20A max current draw per motor (this also tells you your ESCs are a good choice, provided that they can handle the 6S voltage).
I'd estimate the hover current at about 10A per motor (40A total), so you'd get a flight duration of t = Capacity / Current = 5.2 Ah / 40 A = 0.13 h = 7.8 min.
This flight duration sounds a bit small. I'd try for something that would get me to 10+ minutes, depending on the application.
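As a quick sanity check, the arithmetic above can be scripted; all numbers below are this answer's rough estimates, not measured values:

```python
# Rough flight-time estimate for the quadcopter described above.
pack_voltage = 22.2             # 6S LiPo, volts
motor_max_power = 466.0         # watts, per the motor spec
capacity_ah = 5.2               # 5200 mAh pack
hover_current_per_motor = 10.0  # amps, rough estimate
n_motors = 4

max_current_per_motor = motor_max_power / pack_voltage    # ~21 A, under the 40 A ESC rating
total_hover_current = hover_current_per_motor * n_motors  # 40 A
flight_time_min = capacity_ah / total_hover_current * 60  # ~7.8 minutes
```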
That said, the correct way to approach this, would be to get some real data from your power system, either experimentally or online. The course of action is:
(1) Set a bench test with one propeller and motor fixed on a thrust bench or scale.
(2) Run your motor to the maximum and record the current $I_{max}$. This will verify the motor and ESC maximum specs.
(3) Run your motor to produce thrust equal to 1/4 of the estimated quadrotor weight. This will give you your hover current, $I_H$.
(4) Use $I_H$ and your required flight time to calculate the required battery capacity. | {
"domain": "robotics.stackexchange",
"id": 927,
"tags": "quadcopter"
} |
Projectile with air resistance | Question: If you have a projectile with these variables.
$x_0 = 1, v_{0x} = 70, y_0 = 0, v_{0y} = 80, a_x = 0, a_y = -9.8$
I know how to plot these points with this equation.
$$ x = x_0 + v_{0x}t + \tfrac{1}{2}a_x t^2 $$
$$ y = y_0 + v_{0y}t + \tfrac{1}{2}a_y t^2 $$
I want to add air resistance to this problem. I know it's a sphere, so the drag coefficient is 0.47, and let's say the area is 0.5. I use this equation to find the resistance coefficient.
$$K = \tfrac{1}{2}C_p A_p$$ where $C_p$ is the drag coefficient and $A_p$ is the area of the sphere.
I then try to find the drag forces in x and y using these equations.
$$F_{dx} = K v_x^2$$
$$F_{dy} = K v_y^2$$
I then plug these in back into my initial x and y equations
$$ x = x_0 + v_{0x}t - \tfrac{1}{2}(F_{dx}/m + a_x)t^2 $$
$$ y = y_0 + v_{0y}t - \tfrac{1}{2}(F_{dy}/m + a_y)t^2 $$
I am having a hard time getting the right numbers and pictures when I use these equations. Am I doing something wrong here? Will someone please help me.
I would really appreciate any help.
Answer: Those first two equations you mentioned only work in the case of constant acceleration (for more info on this type of kinematics, go here: Can the equations of motion be used for both instantaneous and average quantities?). In your case, we clearly don't have constant acceleration if the force (which defines the acceleration) depends on how fast the object is going. Just picture it this way: first the object starts out with some speed, so there's air resistance which slows it down, so now it has less speed, therefore the air resistance is lesser. So there's a changing acceleration, and you can't apply those seemingly standard kinematics equations. I'm afraid if you don't know a bit of differential equations (or at least basic differential calculus) it'll be impossible for you to understand how to solve the problem (so learn calculus!). If you do know calculus, here's a really nice look at different cases with quadratic air resistance (the type of air resistance that's acting in your problem). Also, as is mentioned in the page I linked to, you'll find that in your particular case which is fairly general, there is no general solution (pardon the redundancy) to the differential equation that comes out of your situation; you can get an approximate numerical solution though. | {
"domain": "physics.stackexchange",
"id": 16163,
"tags": "homework-and-exercises, newtonian-mechanics, projectile, drag"
} |
How the cosmic inflation solves the horizon problem (an exact solution)? | Question: I am reading an article, Inflation and CMBR by Charles H. Lineweaver.
https://www.mso.anu.edu.au/~charley/papers/canberra.pdf (Page 5/13)
He explains the inflation period as the shrinking of the event horizon in the comoving coordinate system. This makes sense, since inflation was a period dominated by $\Lambda$. In this period the event horizon shrinks down to $0$ as time goes to infinity (in the future). And in the solution part of the horizon problem, the author defines a new surface of last scattering due to the inflation.
I am having trouble understanding how a shrinking event horizon can lead to a new surface of last scattering and solve the horizon problem.
Answer: Imagine two points, A and B, that are presently at opposite locations on the last scattering sphere. Light emitted from A and from B has only just now reached Earth, and since the proper distance between A and B is twice the last scattering distance, they have never exchanged light signals. In other words, their light cones, when projected from the last scattering surface to $\tau = 0$, do not intersect, as shown in the figure. And yet, all such points on the last scattering sphere are strikingly uniform, suggesting that they were in equilibrium by the time of last scattering. As there is no causal mechanism which could have brought about this equilibrium, this is considered a problem -- the horizon problem.
Inflation addresses this problem by providing such a mechanism. That the proper distance to the event horizon is shrinking in comoving coordinates is just another way of saying that space is expanding at a greater rate than the horizon is. Consider again points A and B, but now suppose that they have had time to exchange light signals before inflation begins, i.e. they are within each other's particle horizon's at the start of inflation. During inflation, since space expands at a greater rate than the horizon, points A and B will be pulled outside of each other's event horizons (as well as Earth's horizon). See the following sequence illustrating how a single point can be pulled outside an observer's horizon on account of the inflating space.
Now, when inflation ends, points A and B will appear to subtend an acausal distance on the last scattering sphere, but in reality they were once in causal contact. This has the effect of moving the last scattering surface ahead in time such that their lightcones indeed intersect at $\tau = 0$, i.e. that they have time to exchange light signals before last scattering. | {
"domain": "physics.stackexchange",
"id": 55906,
"tags": "cosmology, astrophysics, space-expansion, cosmological-inflation"
} |
Man jumps into a lake with a ball and the ball propels out | Question: In this video, a man throws up a ball and then jumps into a lake immediately followed by the ball. The ball then rockets out of the water with a tremendous speed. How does this happen?
My assumption was that the man jumps, breaking the surface tension of the water, and then hits the ball with his full power, such that his force and the water's buoyancy propel the ball upward.
However, many people in the comments believe that it is purely the pressure of the water that gave it such a high velocity. Is this correct?
Answer: By hitting the water, the man displaces a lot of water, creating a hole in the water. Water rushes back from the sides, gaining momentum. Once the hole is filled, the water still has momentum and thus jets upward, having no other place to go. This jet hits the ball propelling it upwards.
Since the ball is much lighter than the man, you only need to transfer a fraction of the man's initial potential energy to the ball to propel it upwards that much.
I assume that you need several attempts to get this one right, which is arguably the main part of the trick: It’s not surprising that a person jumping into water causes considerable local water jets. You can see similar effects in slow-motion videos of stones dropping into water, e.g. this one. The surprising thing is that such a jet exactly hits the ball in the right moment. This surprise becomes diminished if you consider that you can just repeat the trick until you get it right.
Looking at the video in slow-motion confirms this: The man jumps directly underneath the ball (such that the water surge will hit the ball); you can see the hole in the water and how it is filled and this is finished in just about the time that the ball hits the water.
In addition, the man may hit the ball, but I consider that unlikely.
Buoyancy cannot account for what you see: At most the ball can bounce up as high as its highest point in the initial throw. Surface tension certainly doesn’t play a role in this. | {
"domain": "physics.stackexchange",
"id": 93861,
"tags": "newtonian-mechanics, fluid-dynamics, projectile"
} |
Why is the sum of the square of the orbital coefficients 1 | Question: I'm reading up on molecular orbital theory and LCAO in Ian Fleming's "Molecular Orbitals and Organic Chemical Reactions" and I don't understand the logic here:
"When there are electrons in the
orbital, the squares of the c-values are a measure of the electron population in the neighbourhood of the atom
in question. Thus in each orbital the sum of the squares of all the c-values must equal one, since only one
electron in each spin state can be in the orbital."
I get the first part, but I don't see how c-values are related to spin, and wouldn't the total spin in an orbital equal 0? Any help would be appreciated, but since I haven't studied quantum mechanics yet, as simple an explanation as possible would be best.
Answer: The short answer is that $\sum |c|^2$ is basically equal to "the number of electrons in the orbital". You might think that this should be 2, not 1. But note that each molecular orbital actually comprises two different "spin orbitals". Each "spin orbital" contains one electron of a particular spin - so there is a spin-up orbital which can hold one spin-up electron, and a spin-down orbital which can hold a spin-down electron.
The value of $\sum |c|^2$ is calibrated with respect to these "spin orbitals", so the value has to be 1.
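For reference, here is where the normalization condition comes from mathematically. This is a sketch that neglects overlap between atomic orbitals (i.e. it assumes the $\phi_i$ are orthonormal, which is not exactly true in practice). Writing a spin orbital as a linear combination of atomic orbitals,
$$\psi = \sum_i c_i \phi_i,$$
the requirement that $\psi$ hold exactly one electron (i.e. that it be normalized) gives
$$\langle\psi|\psi\rangle = \sum_{i,j} c_i^* c_j \langle\phi_i|\phi_j\rangle = \sum_i |c_i|^2 = 1.$$
In a real calculation the atomic orbitals do overlap, so the middle sum keeps cross terms weighted by overlap integrals, but the idea is the same.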
Obviously, this raises far more questions: What is a spin orbital exactly? What is a molecular orbital exactly? Why does one MO have two spin orbitals? (Well, you might have some intuition as to that.) Why does $\sum |c|^2$ represent electron density? Why is the number taken with respect to the spin orbitals and not the molecular orbitals? I could provide answers for these, but unfortunately, no further detail is possible without going into quantum mechanics. If you want to truly understand this, there is no way around it. Of course, it doesn't mean you have to learn it right now. The only point is that it will only make more sense if and when you do it.
When you do, you will find that these topics are covered in standard QM texts. | {
"domain": "chemistry.stackexchange",
"id": 15969,
"tags": "molecular-orbital-theory, spin"
} |
Identify a Hamiltonian system consistent or not? | Question: I'm sorry if my question is too classic and basic.
Following the Dirac-Bergmann algorithm for the Hamiltonian formalism, I find that a Hamiltonian system is inconsistent if the Poisson bracket of a primary constraint with the Hamiltonian yields a contradiction such as $$\{f_{i},H\}=1\approx 0$$
If the Poisson brackets are equal to $0$ or produce new constraints, the Hamiltonian system is consistent.
Am I right about them?
Working on a trivial example, such as:
$$H=\frac{1}{2}(p^2+x^2)\\f=p^2+x^2+x^4$$
I take the Poisson bracket between them and obtain $4x^3p$. So is this consistent, given that no obvious contradiction such as $1$ appears?
Answer: I) In OP's last example, one gets formally a secondary constraint $x^3p\approx 0$.
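Explicitly, with $H=\frac{1}{2}(p^2+x^2)$ and $f=p^2+x^2+x^4$ as in the question, the consistency condition reads
$$\{f,H\} = \frac{\partial f}{\partial x}\frac{\partial H}{\partial p} - \frac{\partial f}{\partial p}\frac{\partial H}{\partial x} = (2x+4x^3)p - 2px = 4x^3p \approx 0,$$
which reproduces the OP's bracket and is equivalent to the secondary constraint $x^3p\approx 0$.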
However, assuming that $x$ and $p$ are real variables in the phase space $M:=\mathbb{R}^2$, then the primary constraint
$$f(x,p)~:=~ p^2+x^2+x^4 ~\approx~ 0 \tag{A}$$
by itself implies that
the constrained submanifold $C\subset M$ is just the origin:
$$ C~:=~f^{-1}(\{0\})~=~\{(0,0)\}, \tag{B}$$
i.e. $x\approx 0\approx p$. So all dynamics are killed/frozen, and the secondary constraint is automatically satisfied. In summary, OP's last example is a consistent but empty/trivial theory with no DOF.
II) That being said, let us rush to add that the constraint (A) does not fulfill a standard regularity condition, namely that the gradient $\vec{\nabla} f$ should not vanish on the constrained submanifold $C$, cf. e.g. my Phys.SE answer here. In fact, the gradient of (A) does vanish on $C$.
In general, it is much more demanding to perform the Dirac-Bergmann analysis for non-regular constraints because many standard results from differential geometry do not hold. | {
"domain": "physics.stackexchange",
"id": 47398,
"tags": "hamiltonian-formalism, hamiltonian, phase-space, constrained-dynamics, poisson-brackets"
} |
Difference between double-sided and single-sided AWGN noise after bandpass filtering? | Question: Consider a bandpass signal $s(t)$ with bandwidth $W$.
After bandpass filtering, let the output signal be $r(t)=s(t)+n(t)$
I have read a paper that denotes $n(t)$ as Gaussian noise with one-sided power spectral density $N_0$. Therefore, the noise power is $\sigma^2=\mathbb{E}\{n^2(t)\}=N_0W$.
What would be the purpose of denoting the noise as single-sided?
It seems that if we consider the noise as double-sided with power spectral density $\frac{N_0}{2}$, noise power is still $\sigma^2=\mathbb{E}\{n^2(t)\}=N_0W$ since we have to integrate over the negative frequencies and the positive frequencies.
What is the purpose of describing AWGN noise as single-sided versus double-sided? Considering the case of real signals, do both end up giving the same results?
Answer: Both descriptions give the same result. We often use the one-sided noise power spectral density (PSD) because for real-valued processes the negative frequencies are redundant, so defining the PSD for positive frequencies is sufficient. You just have to scale the noise power spectrum such that integrating the one-sided power spectral density (PSD) over the positive frequencies gives the same result as integrating the two-sided PSD over positive and negative frequencies. I.e., for white noise with a constant PSD, defining the one-sided PSD as $N_0$ means that we would need to define the two-sided PSD as $N_0/2$ such that the noise power remains the same: $N_0W=N_0/2\cdot 2W$ | {
"domain": "dsp.stackexchange",
"id": 9492,
"tags": "digital-communications, noise, power-spectral-density"
} |
2dnav_pr2 failed on 2DNavigationStackDemoWithSimple2DesksWorld | Question:
In doc page: http://www.ros.org/wiki/pr2_simulator/Tutorials/2DNavigationStackDemoWithSimple2DesksWorld
The 2dnav_pr2 package doesn't exist, so I can't execute
roslaunch 2dnav_pr2 rviz_move_base.launch
and
roslaunch rviz/rviz_move_base.launch
How can I solve this? Are there any up-to-date docs?
Originally posted by sam on ROS Answers with karma: 2570 on 2011-07-11
Post score: 0
Answer:
This tutorial is just outdated (last edit in January 2010). The launch files you are mentioning are not present anymore, but I guess they would just start up rviz for visualizing data from navigation. Have a look here for more info on rviz and the navigation stack.
Originally posted by Lorenz with karma: 22731 on 2011-07-12
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Lorenz on 2011-07-14:
I think it is a lot of work to keep all tutorials up-to-date so some might always be outdated. If you find problems, you can file a ticket. Since it is a wiki page, you can also edit the tutorial yourself, but I would only do that if I'm totally sure about what I'm doing ;)
Comment by sam on 2011-07-14:
Thank you~ Is there anyone who maintains the docs to keep them up to date?
"domain": "robotics.stackexchange",
"id": 6113,
"tags": "ros"
} |
Check if an element is present within a linked list - follow up | Question:
This is a follow up question from here. I have revised my code as
per the suggestions of peer reviewers. I still feel there is a lot of
scope for improvement in this code.
class Node(object):
def __init__(self,data, next=None):
self.data = data
self.next = next
class LinkedList(object):
def __init__(self):
self.head = None
self.size =0
def extend(self, seq = None):
"""extends list with the given sequence"""
for i in range(0, len(seq)):
node = Node(seq[i])
self.size +=1
node.next = self.head
self.head = node
def append(self, item):
"""append item to the end of list"""
node = Node(item)
node.next = self.head
self.head = node
self.size += 1
def printdata(self):
"""print elements of linked list"""
node = self.head
while node:
print node.data
node = node.next
def __iter__(self):
node = self.head
while node:
yield node.data
node = node.next
def __contains__(self, item):
"""checks whether given item in list"""
node = self.head
while node:
if node.data == item:
return True
node = node.next
def len(self):
"""returns the length of list"""
return self.size
def remove(self, item):
"""removes item from list"""
node = self.head
current = self.head.next
if node.data == item:
self.size -= 1
self.head = current
current = current.next
while current:
if current.data == item:
current = current.next
node.next = current
self.size -= 1
node = current
current = current.next
def __str__(self):
return str(self.data) + str(self.size)
test cases :
if __name__ == "__main__":
llist = LinkedList()
llist.extend([98,52,45,19,37,22,1,66,943,415,21,785,12,698,26,36,18,
97,0,63,25,85,24])
print "Length of linked list is ", llist.len()
llist.append(222)
print "Length of linked list is ", llist.len()
llist.remove(22)
print "Elements of linked list \n", llist.printdata()
print "Length of linked list is ", llist.len()
## Search for an element in list
while True:
item = int(raw_input("Enter a number to search for: "))
if item in llist:
print "It's in there!"
else:
print "Sorry, don't have that one"
Answer:
The construct
for i in range(0, len(seq)):
node = Node(seq[i])
is usually frowned upon. Consider a Pythonic
for item in seq:
node = Node(item)
I see no reason to default seq to None.
append does not append. It prepends.
remove has an unpleasant code duplication. To avoid special casing the head, consider using a dummy node:
def remove(self, item):
    dummy = Node(None, self.head)
    prev = dummy
    while prev.next:
        if prev.next.data == item:
            prev.next = prev.next.next
            self.size -= 1
        else:
            prev = prev.next
    self.head = dummy.next
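Because the consecutive-duplicate case is easy to get wrong (after unlinking a node you must not advance `prev`, or the new `prev.next` goes unchecked), here is a standalone sketch of the dummy-node pattern with a minimal `Node` class; the names here are illustrative:

```python
class Node(object):
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def remove_all(head, item):
    """Unlink every node holding `item`; return the new head."""
    dummy = Node(None, head)
    prev = dummy
    while prev.next:
        if prev.next.data == item:
            prev.next = prev.next.next  # unlink, but do not advance:
        else:                           # the new prev.next is unchecked
            prev = prev.next
    return dummy.next

# Build 3 -> 2 -> 2 -> 1 (by prepending, as the list class above does),
# then strip the consecutive 2s.
head = None
for x in [1, 2, 2, 3]:
    head = Node(x, head)
head = remove_all(head, 2)

out = []
while head:
    out.append(head.data)
    head = head.next
# out == [3, 1]
```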
printdata may (or shall?) use the iterator:
def printdata(self):
    for data in self:
        print data | {
"domain": "codereview.stackexchange",
"id": 30447,
"tags": "python, linked-list"
} |
Why does the inner wall of a smooth glass seemingly have rings when viewed from above? | Question: I was looking at my glass and I found these rings. Are they reflection of the upper circle of the glass?
Answer: More likely the rings are reflections of the bottom of the glass. Light that entered through the sides and was reflected at the bottom travels up the wall, bouncing repeatedly.
An experiment I did with one of my drinking glasses: rings appear in the drinking glass if a beam of light is aimed at the bottom of the glass. Then, if the bottom is lowered into water, the rings disappear again. This means the rings are due to internal reflections in the glass, because the water allows the light to escape from the glass. | {
"domain": "physics.stackexchange",
"id": 98183,
"tags": "visible-light, reflection"
} |
Using PPS pressure sensors in ROS | Question:
Recently we acquired touch sensor pads for our Barrett hands from PPS. I would like to use these sensors on linux and integrate them into ROS, but PPS offers no support in this direction. Yet I have seen that Joe Romano has used this system on the PR2, which I assume is linux based. I have checked out the repository and found the "hardware_interface.h" file, which has declarations for some classes involving this sensor, but I can't seem to find any implementation. Does anyone have ideas/resources I could use to tackle this?
Originally posted by Woutervdh on ROS Answers with karma: 1 on 2012-02-24
Post score: 0
Answer:
On the PR2, we have the fingertip sensors connected by an SPI interface to our custom gripper control board, which communicates with the rest of the robot over EtherCAT. This may be overly complicated for your setup.
If your sensors have an SPI interface, you probably want to find or build an SPI to computer interface; an Arduino and rosserial is probably a good place to start.
Originally posted by ahendrix with karma: 47576 on 2012-02-25
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 8374,
"tags": "ros, sensor"
} |
How do I calculate the Relative Humidity from number density and pressure? | Question: From some measurements I have determined and calculated the temperature and number density of water molecules in (m^-3) at a pressure of X Pa. How do I get the RH value out of those variables?
Answer: Dividing the number density by Avogadro's number gives you the number of moles of water per unit volume, $\rho_w$. The partial pressure of water vapor is then $$P_{water}=RT\rho_{w}$$ If, at temperature $T$, the equilibrium vapor pressure of water vapor is $P^*(T)$, the relative humidity is $$RH=\frac{RT\rho_w}{P^*}\times 100\%$$ | {
"domain": "physics.stackexchange",
"id": 98611,
"tags": "thermodynamics, humidity"
} |
How to simulate quantum entanglement variation in different quantum gates? | Question: I'm trying to study quantum entanglement variation during quantum computation with 4 qubit systems comprising a variety of quantum gates.
How can I simulate this variation on MATLAB? Is there any other alternatives?
Answer: Step 1: Choose a measure for the entanglement. There are a lot of entanglement measures, some of them are easier to calculate in MATLAB than others.
Step 2: Initialize your 4-qubit system's density matrix. It will be a 16x16 matrix since $2^4 = 16$. This means all calculations will be very fast in MATLAB and you will have no problem working with the density matrix rather than wavefunction. You could also work with the wavefunction, but a lot of entanglement measures are defined in terms of the density matrix so it's better to work with the density matrix for this type of thing.
Step 3: Apply your various gates. They will be unitaries, so this will just be some matrix multiplications like: $\rho_{t} = U\rho_{t-1} U^\dagger$, where $\rho_{t-1}$ is the density matrix before the gate is applied, and $\rho_{t}$ is the density matrix after the gate is applied.
Step 4: Calculate the your entanglement measure, which can be a function of the density matrix: $f(\rho_t)$.
Step 5: Now you have $f(\rho_1)$, $f(\rho_2)$, $f(\rho_3) \ldots $
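A minimal numpy sketch of steps 2-5, answering the "alternatives" part of the question (the gate sequence and the choice of entanglement measure, here the von Neumann entropy of one qubit's reduced state, are illustrative assumptions; the MATLAB version would be line-for-line analogous):

```python
import numpy as np

def entanglement_entropy(rho, keep):
    """Von Neumann entropy (bits) of the first `keep` qubits' reduced state."""
    d = 2 ** keep
    rest = rho.shape[0] // d
    r = rho.reshape(d, rest, d, rest)
    rho_a = np.einsum('ijkj->ik', r)  # partial trace over the other qubits
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]      # drop (near-)zero eigenvalues before the log
    return float(-np.sum(evals * np.log2(evals)))

# Step 2: 4-qubit register in |0000><0000|, a 16x16 density matrix.
psi = np.zeros(16)
psi[0] = 1.0
rho = np.outer(psi, psi)

# Step 3: an illustrative gate sequence -- Hadamard on qubit 0,
# then CNOT(0 -> 1); any other unitaries plug in the same way.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
U1 = np.kron(H, np.eye(8))     # H on the first qubit
U2 = np.kron(CNOT, np.eye(4))  # CNOT on the first two qubits

# Steps 4-5: apply each gate and record the entanglement of qubit 0
# with the other three qubits.
entropies = []
for U in (U1, U2):
    rho = U @ rho @ U.conj().T
    entropies.append(entanglement_entropy(rho, keep=1))

# entropies -> [0.0, 1.0]: the Hadamard creates superposition but no
# entanglement; the CNOT maximally entangles qubit 0 (1 bit of entropy).
```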
You now have the entanglement measure at each point in time, and you can plot this and see how your measure of etanglement varies over the course of applying all your gates. | {
"domain": "quantumcomputing.stackexchange",
"id": 424,
"tags": "entanglement, simulation"
} |
Is it possible to observe strong gravitational lensing with amateur telescopes? | Question: Strong gravitational lens systems like the Cosmic Horseshoe have been imaged by scientific space telescopes, but have any amateur astronomers accomplished this?
Or are amateur/small ground based telescopes not suited for this?
Answer: The cosmic horseshoe is beyond amateur instruments. It is a magnitude 20 object. In a large (2.5 m) professional telescope it looks like:
This image is taken from the SDSS III data. The object is small (10'', half the size of Mars at opposition), but that is not insurmountable for amateur equipment. It is, however, very dim. If an object like this were visible in moderate telescopes, it would have been found long ago. It is really only the sky surveys, and computerised searches of the survey data, that allow such objects to be found.
Processing the SDSS data can bring out more detail, as can be seen in the discovery paper https://arxiv.org/pdf/0706.2326.pdf
While this isn't visible, another example of gravitational lensing can be: the Einstein Cross is a gravitationally lensed quasar, and it is at the limits of amateur equipment. It is even smaller, but as it is composed of four point sources it can be seen. Skyhound recommends at least an 18'' telescope, but practically a 24'' to see the four components, along with exceptionally dark skies, fully adapted eyes and perfect conditions (especially perfect seeing: this object is small). Your reward for this is "four smudges of light embedded in the galaxy". The SDSS view gives an indication of how hard this object is. | {
"domain": "astronomy.stackexchange",
"id": 6374,
"tags": "telescope, amateur-observing, gravitational-lensing"
} |
Thermodynamic limit for an ideal gas | Question: In S. Salinas, Introduction to Statistical Physics (Springer, 2001), the author states (p. 68):"...the thermodynamical limit is essential to allow the connection between the average values of statistical mechanics and the macroscopic values of the thermodynamic quantities". However, on p. 80 the author states: "As a matter of fact, it is not even necessary to invoke the thermodynamic limit to obtain the equations of state of the ideal gas" and goes on to show:
$$S(E,V,N;\delta E)=k_B\ln\Omega(E,V,N;\delta E)=\frac{3}{2}k_B\ln E+k_B\ln V+f(N;\delta E),$$
where $f(N;\delta E)$ is a function of $N$ and $\delta E$.
Therefore,
$$\frac{1}{T}=\frac{\partial S}{\partial E}=\frac{3k_B}{2E}$$ and $$\frac{p}{T}=\frac{\partial S}{\partial V}=\frac{k_B}{V}.$$
Is there a contradiction here? How is the temperature (a macroscopic, averaged quantity) related to the derivative of the entropy without taking the thermodynamic limit?
Answer: The thermodynamic limit is a formal procedure required to get the usual thermodynamic properties from statistical mechanics since, generally, not all of them are unconditionally true for finite-size systems. For example, equilibrium thermodynamics requires that the Helmholtz free energy is an extensive function of the volume. In general, for a finite system, this is not the case: there is a contribution to free energy coming from the boundary layers that does not scale as the volume. The thermodynamic limit takes care of this problem by making negligible the boundary contribution with respect to the bulk. Moreover, finite-size systems have completely analytic thermodynamic potentials, thus excluding the possibility of phase transitions. It is the thermodynamic limit that allows getting a non-analytic function as the limit of an analytic sequence. A third role of the thermodynamic limit is eliminating the dependence of thermodynamic results on the specific statistical ensemble.
However, there are cases where the thermodynamic limit is not required to restore the proper thermodynamic behavior and ensemble independence, at least for some properties. This is rooted in an interplay between the Hamiltonian of the system and the ensemble. For example, in the case of a one-component classical ideal gas (non-interacting particles), the whole thermodynamics can be exactly reproduced even with a finite-size system in the canonical and grand canonical ensembles (some ensemble dependence may appear in the microcanonical ensemble, depending on the specific flavor of it). The reason can be related to the absence of interactions, which implies, among other consequences, no distinct behavior of the boundary layers and no phase transitions.
The existence of exceptional cases like the one you are reporting is not a contradiction. Actually, in your example, the critical observation is that the term containing the dependence on the ensemble is the function $f(N;\delta E)$. More precisely, the ensemble dependence is the dependence of the entropy on $\delta E$. It is generally possible to show that this dependence gives a vanishing contribution to the entropy per particle (or the entropy per unit volume) in the thermodynamic limit if $\delta E$ does not scale with the size. However, for a perfect gas, the $\delta E$ dependence is confined to an additive term, which exactly disappears when a partial derivative of the entropy is taken with respect to volume or energy.
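That cancellation can be checked symbolically. A sketch with sympy, where f stands in for the ensemble-dependent additive term $f(N;\delta E)$ (it carries no $E$ or $V$ dependence):

```python
import sympy as sp

E, V, N, kB = sp.symbols('E V N k_B', positive=True)
f = sp.Function('f')(N)  # stands in for f(N; delta E): no E or V dependence

# S(E, V, N; delta E) = (3/2) k_B ln E + k_B ln V + f(N; delta E)
S = sp.Rational(3, 2) * kB * sp.log(E) + kB * sp.log(V) + f

# The additive term drops out of both partial derivatives:
print(sp.diff(S, E))  # 3*k_B/(2*E), i.e. 1/T
print(sp.diff(S, V))  # k_B/V, i.e. p/T
```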
The relations $\frac{1}{T} = \frac{\partial S}{\partial E}$ and
$\frac{P}{T} = \frac{\partial S}{\partial V}$ are pure thermodynamic relations valid for every possible thermodynamic system. | {
"domain": "physics.stackexchange",
"id": 76503,
"tags": "thermodynamics, statistical-mechanics, ideal-gas"
} |
Retrieving mail attachments given criteria with VBA | Question: Scenario: I (together with great help from SO) finished this piece of code which downloads attachments from an e-mail account (outlook) with VBA.
Issue: Given the internal functioning of the code, and the sheer amount of e-mails to check (more than 800k), the code cannot finish in the available time. Last time I tried to run it, it went on for 8 days and then just stopped.
Question: Does anyone have any ideas on how to improve the efficiency of this code?
Obs: The code goes through a specified secondary e-mail account (first criterion), loops through the emails checking whether they are from a certain sender (second criterion), opens each file and checks if there are worksheets named "alpha" and "beta" (a very smart idea given to me on SO, to find the third criterion), and if so, saves the file in a final location.
Code:
Sub Get_Mail_Attachments()
Application.ScreenUpdating = False
Dim olApp As New Outlook.Application
Dim olNameSpace As Object
Dim olMailItem As Outlook.MailItem
Dim olFolder As Object
Dim olFolderName As String
Dim olAtt As Outlook.Attachments
Dim strName As String
Dim sPath As String
Dim i As Long
Dim j As Integer
Dim olSubject As String
Dim olSender As String
Dim sh As Worksheet
Dim LastRow As Integer
Dim TempFolder As String: TempFolder = VBA.Environ$("TEMP")
Dim wB As Excel.Workbook
'delete content except from row 1
ThisWorkbook.Worksheets("FileNames").Rows(2 & ":" & ThisWorkbook.Worksheets("FileNames").Rows.count).Delete
'set foldername and subject
olFolderName = ThisWorkbook.Worksheets("Control").Range("D10")
olSender = ThisWorkbook.Worksheets("Control").Range("D16")
sPath = Application.FileDialog(msoFileDialogFolderPicker).Show
sPathstr = Application.FileDialog(msoFileDialogFolderPicker).SelectedItems(1)
Set olNameSpace = olApp.GetNamespace("MAPI")
'check if folder is subfolder or not and choose olFolder accordingly
Set olFolder = olNameSpace.Folders("email@email.com").Folders("Inbox")
If (olFolder = "") Then
Set olFolder = olNameSpace.Folders("email@email.com").Folders("Inbox")
End If
'loop through mails
h = 2
For i = 1 To olFolder.Items.count
'check to see if it is an e-mail
If olFolder.Items(i).Class <> olMail Then
Else
Set olMailItem = olFolder.Items(i)
'check if the search name is in the email subject
If (InStr(1, olMailItem.Sender, olSender, vbTextCompare) <> 0) Then
With olMailItem
For j = 1 To .Attachments.count
strName = .Attachments.Item(j).DisplayName
'check if file already exists
If Not Dir(sPathstr & "\" & strName) = vbNullString Then
strName = "(1)" & strName
Else
End If
'Save in temp
.Attachments(j).SaveAsFile TempFolder & "\" & strName
ThisWorkbook.Worksheets("FileNames").Range("A" & h) = strName
'Open file as read only
Set wB = Workbooks.Open(TempFolder & "\" & strName, True)
DoEvents
'Start error handling
On Error Resume Next
Set sh = wB.Sheets("alpha")
Set sh = wB.Sheets("beta")
If Err.Number <> 0 Then
'Error = At least one sheet is not detected
Else
'No error = both sheets found
.Attachments(j).SaveAsFile sPathstr & "\" & strName
End If
Err.Clear
Set sh = Nothing
wB.Close
On Error GoTo 0
h = h + 1
Next j
End With
End If
End If
Next i
Application.ScreenUpdating = True
MsgBox "Download complete!", vbInformation + vbOKOnly, "Done"
End Sub
Answer: Some general review suggestions:
Ditch the Hungarian Notation. Most people implement it incorrectly through no fault of their own.
Add Option Explicit to the top of your code module. There are a couple of undefined variables in there.
Use clear and meaningful variable names. What is h? After spending some time pondering the code, it appears to be the row number where you're writing information in your logging workbook, so call it something like logRowNum. That will make it pretty explicit to the next person (or you in 8 days.)
You make reference to ThisWorkbook - it appears that you're expecting it to be your logging workbook, so Dim logWorkbook as Workbook and set it before opening any other workbooks. It appears that you don't have any issues with it for now but if you have an 8-day process running, someone could click on another workbook when it's open and ThisWorkbook now points at something else breaking your code.
Indent your code neatly. Rubberduck (I'm a big fan and hang out in their chat room, but haven't yet started contributing) will help you with that. (It'll also help with renaming variables and a ton of other things.) Also, eliminating a lot of those extra blank lines will make it a bit more readable.
Modify your If statements to test for what you're after instead of what you're not after. Having a blank True "statement" is very awkward to follow.
'change:
If olFolder.Items(i).Class <> olMail Then
Else
'some code here
End If
'to:
If olFolder.Items(i).Class = olMail Then
'some code here
End If
Eliminate empty Else clauses - they make me think you're going to write more code but haven't yet.
Specific to improving speed:
Work with arrays instead of the spreadsheet itself.
Turn off other screen updating things in Excel.
Instead of downloading all attachments, make sure you're only downloading Excel workbooks, no need to download a PDF if you're not going to do anything with it:
strName = .Attachments.Item(j).DisplayName
If InStr(1, strName, "xls", vbTextCompare) > 0 Then
'rest of the code here
End If
'loops to next email message
Short-cut your worksheet check, as well as ensuring they both exist:
Dim AlphaFound as Boolean
Dim BetaFound as Boolean
On Error Resume Next
Set sh = wB.Sheets("alpha")
If Err.Number = 0 Then 'alpha exists
AlphaFound = True
End If
If Not AlphaFound Then
'only look for "is alpha" if "alpha" isn't there
Err.Clear
Set sh = wB.Sheets("is alpha")
If Err.Number = 0 Then 'alpha exists
AlphaFound = True
End If
End If
If AlphaFound Then
Err.Clear
Set sh = wB.Sheets("beta")
If Err.Number = 0 Then 'beta exists
BetaFound = True
End If
End If
'turn off error ignoring as soon as possible vvv
On error goto 0
If AlphaFound and BetaFound then
.Attachments(j).SaveAsFile sPathstr & "\" & strName
End If
Ditch the Set wb = nothing. You're doing it 800,000 times and each time you do, Excel has to free the memory then reallocate it when you hit the next Set wb = something statement. Leave the object in memory until you're done processing. Actually, VBA will Set all objects to Nothing automatically at the end of the Sub for you, so there's no reason to do it for any of them. (Note that wb is the only object you're manually freeing.)
Put the trailing \ on all paths as early as possible to minimize concatenation later. (Don't know how much execution time it will save, but it will save typing time.)
With Application.FileDialog(msoFileDialogFolderPicker)
.Show
sPath = .SelectedItems(1) 'Show only returns a Long, not the chosen path
End With
If Right(sPath, 1) <> "\" Then
sPath = sPath & "\"
End If
You have a DoEvents in there, which is being a good Windows citizen, but you may have it a bit too frequently. Maybe move it to outside your attachment count loop so you only do it once per email. I hate to suggest another counter in there, but maybe one that counts processed emails - every time you process 10 emails with attachments, DoEvents and reset the counter. That way other things can still run on the machine, but you're not pausing your process too frequently. If this is running on a dedicated machine (for 8 days!) maybe make it every 100 messages... If the machine is dedicated to this process, you might get away with not having it in there at all. | {
"domain": "codereview.stackexchange",
"id": 25916,
"tags": "vba, excel, email, outlook"
} |
What gives an elementary particle a charge? | Question: We know that the proton is positive and the electron is negative. But where do the notions of negativity and positivity come from? Does charge come from some specific particles, or from their specific arrangement?
Answer: There are various layers at which one could address your question.
As with all observables in QM, the charge exists because there exists a self-adjoint operator associated with it. This operator corresponds to a (class of) experimental instruments that observers use to make measurements on the system, the collection of possible numerical outcomes being known as charges.
Another answer, perhaps deeper, could be that the existence of such an operator $Q$ (and thus the associated instrument) is guaranteed, as for most of the relevant operators in QM, by the Noether theorem associated with a $U(1)$ global symmetry of the action. We can thus say that the charge exists because the dynamics is invariant under a certain $U(1)$ symmetry group. Moreover, we know that $[Q,H]=0$ and therefore the electric charge is also conserved in time.
There are other layers. One is specific to the electric charge: it is not only conserved, it is also superselected, i.e. it commutes with all physical observables. This means that any physical state is always in a definite charge state. It is not possible to prepare a system with a non-definite electric charge (as it is instead possible, for example, with the spin, for which one can take linear combinations of the up and down states). In particular, any one-particle state will have definite electric charge.
Finally, since the charge is exactly conserved, any composite system made of more elementary stuff (say, the proton made of quarks and gluons) will have exactly the same charge as the sum of its elementary constituents. In other words, a system of ''free'' constituents and the bound constituents have the same net electric charge. For a proton we can say that the electric charge +1 comes from the fact that the up quarks carry charge 2/3 and the down quarks -1/3. Their electric charges are fixed, up to overall normalization, by the anomaly cancellation in the SM, which can be naturally explained in GUT theories (including, in this case, the normalization if the larger group is simple). | {
"domain": "physics.stackexchange",
"id": 14831,
"tags": "electrostatics, charge"
} |
How to calculate the altitude of a star given the hour angle, declination, and latitude? | Question: I'm trying to find the altitude of a star for observing, but all I have is the hour angle and declination of the star, along with latitude of the location I'm observing from. How can I find the altitude?
Answer: You can use this fundamental formula in spherical astronomy[1]
$$\sin a=\sin \phi \sin \delta + \cos \phi \cos \delta \cos H$$
where
$a$ is the wanted altitude,
$\phi$ is your latitude,
$\delta$ is the declination of the star, and
$H$ is the hour angle, measured in the clockwise direction.
Pay attention to the units! (Don't mix degrees, radians and grads. Common cause of error!)
Since I don't know how familiar you are with the trigonometric functions (I believe pretty well), note that you only get $\sin a$ using that formula. You need to take the $\arcsin$ of that value in order to get the wanted altitude $a$.
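Here is how the formula might look in code (a sketch; the function name and the sanity-check values are illustrative, and angles are converted to radians internally to avoid the unit mix-ups warned about above):

```python
import math

def altitude_deg(lat_deg, dec_deg, hour_angle_deg):
    """sin(a) = sin(phi) sin(delta) + cos(phi) cos(delta) cos(H)."""
    phi, dec, ha = (math.radians(x) for x in (lat_deg, dec_deg, hour_angle_deg))
    sin_a = math.sin(phi) * math.sin(dec) + math.cos(phi) * math.cos(dec) * math.cos(ha)
    sin_a = max(-1.0, min(1.0, sin_a))  # guard against rounding just past +/-1
    return math.degrees(math.asin(sin_a))

# Sanity checks: on the meridian (H = 0) a star culminates at
# a = 90 - latitude + declination; at the equator, a star on the
# celestial equator with H = 90 degrees sits on the horizon.
print(round(altitude_deg(52.0, 0.0, 0.0), 6))  # 38.0
print(round(altitude_deg(0.0, 0.0, 90.0), 6))  # 0.0
```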
The solution above is perfectly correct in theory (on competitions, exams, and for personal use), but if you are writing a program on computer, you might find the following useful:
The factor we haven't yet addressed is atmospheric refraction.[2] It causes the star to look higher than it really is. The effect is pretty small, in the range of a few arcminutes.
First you need to calculate the factor $R$ by the formula[3]
$$R=\frac{16.27''\cdot P}{273 + T}$$
where $P$ is the pressure in millibars and $T$ is the temperature in degrees Celsius. You are perfectly fine using just $R=60''=1'$. Then the apparent altitude of the star is given by $a'=R+a$. Again, pay attention to the units (everything in degrees or everything in minutes ...)
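A sketch of that correction in code (the constant and the formula are the ones quoted above; the default arguments are illustrative sea-level values, not part of the answer):

```python
def refraction_arcsec(pressure_mbar=1010.0, temp_c=10.0):
    """R = 16.27'' * P / (273 + T): P in millibars, T in degrees Celsius."""
    return 16.27 * pressure_mbar / (273.0 + temp_c)

# Typical sea-level conditions land close to the 60'' = 1' rule of thumb;
# the apparent altitude is then a' = a + R (with R converted to degrees).
print(round(refraction_arcsec(), 1))  # 58.1 (arcseconds)
```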
If you are interested in learning about positional or spherical astronomy, then I advise you to visit another Stack Exchange question about this topic. Personally, I have had great fun with Fundamental Astronomy. But in general, the formula is derived using the spherical law of cosines. | {
"domain": "astronomy.stackexchange",
"id": 5907,
"tags": "celestial-mechanics"
} |
Adding Angular Momenta Operators in QM | Question: Consider $j,m$ to be the angular momentum magnitude and $z$-projection eigenvalues corresponding to a total angular momentum operator $\hat{J}$, composed of angular momentum $\hat{J}_1$ and $\hat{J}_2$ with eigenvalues $j_1,m_1$ and $j_2,m_2$. We want to know what values $j$ and $m$ can take on in terms of $j_1,m_1,j_2,m_2$. It is commonly stated that
$$ \hat{J}_z = \hat{J}_{1z} + \hat{J}_{2z}, $$
from which one can immediately derive $m = m_1+m_2$.
What is the explanation for the simple addition of the $z$ operators? If there is some vectorial model explanation, then is it also true that $\hat{J}_x = \hat{J}_{1x} + \hat{J}_{2x}$, for example? Is there some other way to prove this?
Further, if we are looking at a vectorial model, why isn't it true that the magnitudes are the same, i.e. that $j = j_1+j_2$?
Answer: I think this is a great question. It also puzzled me for a while.
The key here is irreducible representations of the rotation group. You start with one quantum particle; the state of this quantum particle is $|\psi\rangle_1$, which is a vector in some Hilbert space $\mathcal{H}_1$. You also have a set of operators $\exp\left(iJ_{1x}\theta_x\right),\, \exp\left(iJ_{1y}\theta_y\right),\, \exp\left(iJ_{1z}\theta_z\right)$ that change this state to appear as it would appear to some other observer rotated by angles $-\theta_{x,y,z}$ around the corresponding axis.
More generally you have an operator
$R_1\left(\boldsymbol{\theta}\right)=\exp\left(i\left[J_{1x}\theta_x+J_{1y}\theta_y+J_{1z}\theta_z\right]\right)$
That changes your state into one that would be observed by another, 'rotated', observer.
What you are after is a description of the system that would be 'independent' of observer position. Whilst observers may not agree on all aspects of the state, they may agree on some of its aspects; more specifically, they will agree on whether the state is in a specific irreducible representation of $R_1\left(\boldsymbol{\theta}\right)$. More generally, all observers can agree on the decomposition of $|\psi\rangle_1$ into sub-spaces of $\mathcal{H}_1$ that are mapped into themselves by all $R_1\left(\boldsymbol{\theta}\right)$. The simplest form of this is spherical symmetry, i.e. all observers will agree if the state is spherically symmetric. However, there are other forms of this, and those are the irreducible representations. More specifically, the irreducible representations of the SO(3) Lie group, with elements $R_1\left(\boldsymbol{\theta}\right)$. If you look at the representation theory of this group, you will find that for a given representation it is sufficient, and much easier, to find irreducible representations of the Lie algebra (rather than the actual group), i.e. the irreducible representations of $\mathbf{J}_1=\left(J_{1x},\,J_{1y},\,J_{1z}\right)$.
Now consider two such particles. The full state of the system is now $|\psi_1\psi_2\rangle=|\psi\rangle_1\otimes|\psi\rangle_2$ that is a vector in the tensor product space of the two underlying Hilbert spaces, $\mathcal{H}_1\otimes\mathcal{H}_2$. The rotations of this state are now represented by:
$R_{12}\left(\boldsymbol{\theta}\right)=R_{1}\left(\boldsymbol{\theta}\right)R_{2}\left(\boldsymbol{\theta}\right)$
And you are still seeking to find irreducible sub-spaces of this new representation of SO(3) group. Assuming that $\left[\mathbf{J}_1, \mathbf{J}_2\right]=0$ we have:
$R_{12}\left(\boldsymbol{\theta}\right)=\exp(i\mathbf{J}_1.\boldsymbol{\theta})\exp(i\mathbf{J}_2.\boldsymbol{\theta})=\exp(i\left(\mathbf{J}_1+\mathbf{J}_2\right).\boldsymbol{\theta})=\exp(i\mathbf{J}_{12}.\boldsymbol{\theta})$
i.e. the Lie algebra of this new representation is simply $\mathbf{J}_{12}=\mathbf{J}_1+\mathbf{J}_2$. Therefore the irreducible subspaces you need to find are the irreducible sub-spaces of $\left(\mathbf{J}_1+\mathbf{J}_2\right)$. These will be the subspaces of $\mathcal{H}_1\otimes\mathcal{H}_2$ that all observers will agree on. These will also turn out to be subspaces with specific angular momentum numbers ($j$), but that's peculiarities of the SO(3) representation theory (see https://en.wikipedia.org/wiki/Casimir_element).
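The identity $\exp(i\mathbf{J}_{12}.\boldsymbol{\theta}) = \exp(i\mathbf{J}_1.\boldsymbol{\theta})\exp(i\mathbf{J}_2.\boldsymbol{\theta})$ can be checked numerically for two spin-1/2 particles. A NumPy-only sketch (assumptions: $\hbar = 1$ and a rotation about $x$ only; the matrix exponential is built from the eigendecomposition of the Hermitian generator):

```python
import numpy as np

def exp_i(H, t):
    """exp(i H t) for a Hermitian matrix H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * w * t)) @ V.conj().T

# Spin-1/2 generator J_x = sigma_x / 2 (hbar = 1)
Jx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
I2 = np.eye(2)

theta = 0.7  # arbitrary rotation angle
J12 = np.kron(Jx, I2) + np.kron(I2, Jx)  # J_12 = J_1 + J_2 on the product space

lhs = exp_i(J12, theta)                            # exp(i J_12 theta)
rhs = np.kron(exp_i(Jx, theta), exp_i(Jx, theta))  # R_1(theta) R_2(theta)
print(np.allclose(lhs, rhs))  # True
```

The equality holds because $\mathbf{J}_1\otimes I$ and $I\otimes\mathbf{J}_2$ commute, exactly as assumed in the derivation above.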
Sorry if my explanation is a bit muddled, but I hope it conveys the general idea. The reason you add up angular momentum operators is that you multiply the rotation operators, and the reason for that, is that you combine different Hilbert spaces through tensor products.
The point of this explanation is that it does not need classical mechanics, or even notion of angular momentum operator. The reasoning here can be conducted entirely in terms of observers and seeking to find unique ways to represent states of the system. The connection to classical angular momentum comes much later, you find quantity that is conserved as a result of isotropy of space ($j$), and in classical mechanics this quantity is angular momentum, so you link the two. | {
"domain": "physics.stackexchange",
"id": 63694,
"tags": "quantum-mechanics, hilbert-space, angular-momentum, operators"
} |
If the curve a particle moves in when calculating work is dependent on the force itself, why don't we find this path before doing work calculations? | Question: I am having some confusion getting my head around work done by a general force on a curve. Let's say I had some mass in space, so we aren't considering gravitational potential; if I apply some force that makes the particle move in a curve, how would I calculate the work done?
Would I need to find out the curve that the force creates? And then take the line integral of the force with that curve, also does this mean no other curve makes physical sense with this force since the force couldn't cause the particle to move in multiple different curves?
When in a force field I think I understand these questions, since I am applying a force against the field to make it move, or the field is making the particle move. But in the case of no field and an applied force, I don't understand how there can be a generic path: surely the path is determined by the force? Or is it dependent on initial conditions, i.e. the velocity and position of the particle initially?
In summary I think I am asking isn't the curve a particle moves in when calculating work dependent on the force itself? Why don't we ever find this path before doing work calculations?
Answer:
if I apply some force that makes the particle move in a curve how would I calculate the work done?
The work $W$ done by any force $\mathbf F$ along some path $C$ is, by definition, the line integral of that force along the path:
$$W=\int_C\mathbf F\cdot\text d\mathbf x$$
Would I need to find out the curve that the force creates? And then take the line integral of the force with that curve, also does this mean no other curve makes physical sense with this force since the force couldn't cause the particle to move in multiple different curves?
You need to find the path the particle moves along, yes. If the force you are finding the work of is the only force acting on the particle, then the path the particle moves along is the curve (trajectory) the force creates. Of course there could be other forces in general, and so you would need to know the trajectory of the particle due to all of the forces before you can find the work done by any particular force along this path.
In summary I think I am asking isn't the curve a particle moves in when calculating work dependent on the force itself?
Yes, the trajectory is dependent on all of the forces (the net force) acting on the particle. This is just Newton's second law.
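As a small numerical check of this (all values arbitrary): for a constant force acting alone on a unit mass, Newton's second law fixes the trajectory, and the line integral of that force along the resulting curve matches the change in kinetic energy, as the work-energy theorem demands:

```python
import numpy as np

m = 1.0
F = np.array([0.0, -9.8])   # the only force acting (arbitrary values)
v0 = np.array([3.0, 4.0])   # initial conditions also shape the trajectory

# Trajectory from Newton's second law: x(t) = v0 t + (F/m) t^2 / 2
t = np.linspace(0.0, 2.0, 100001)
x = np.outer(t, v0) + 0.5 * np.outer(t**2, F / m)
v = v0 + np.outer(t, F / m)

# W = integral over C of F . dx, approximated segment by segment
W = float(np.sum((x[1:] - x[:-1]) @ F))

dKE = 0.5 * m * (v[-1] @ v[-1] - v[0] @ v[0])
print(round(W, 6), round(dKE, 6))  # 113.68 113.68
```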
Why don't we ever find this path before doing work calculations?
What are you referring to, specifically? You need the specified path $C$ to find the work done. You might be getting confused with introductory physics problems where the path is already given to you ahead of time; this is just to make the problems easier, so students can grasp the concept of work and how to calculate it without also having to find the specific trajectory of the particle themselves. It is also easier for the asker of the question, who then doesn't need to contrive the specific forces required to make the particle move along the desired path. | {
"domain": "physics.stackexchange",
"id": 80931,
"tags": "newtonian-mechanics, classical-mechanics, work"
} |
Why aren't lenticular clouds moved by the wind? | Question: My book tells me that altocumulus lenticularis are not moved by wind but rather the air just flows through them without effect on their position. How come they are the only cloud type that is stationary?
Answer: Clouds form when particular conditions of temperature and humidity intersect.
In the case of lenticular clouds, those conditions occur at specific locations slightly downwind from the tops of mountains -- a standing wave forms in the air between areas of low and high humidity, much as standing waves form in river rapids.
Source: National Park Service
The standing waves can only form when wind is blowing over the mountain, and so the region of air where the standing waves create the conditions to form clouds is pinned down by the mountain.
The region doesn't move, but the air and water molecules do.
New moist air blows over the mountain, following the standing wave, and into a zone of lower temperature, where it condenses into the droplets that form the cloud.
Still following the wave, the droplets blow into an area of higher temperature and evaporate again.
Source: National Weather Service
Also, the standing waves that guide moist air to form lenticular clouds are an example of laminar flow.
Other clouds appear to move because the air where they form is moving turbulently, and their position is not controlled by temperature (they move within a layer at about the same temperature), only by moisture. | {
"domain": "earthscience.stackexchange",
"id": 2716,
"tags": "clouds"
} |
gazebo standalone ROS plugin crashes | Question:
Hello all,
I am using gazebo 1.7.1 standalone with Ros-Groovy and I want to write a plugin that works with ROS.
Actually, I wanted to recreate the plugin found at simulator_gazebo/gazebo_plugins/src/gazebo_ros_p3d.cpp and I followed the tutorial at http://gazebosim.org/wiki/Tutorials/1.5/ros_enabled_model_plugin even though it states "This tutorial may not work with Gazebo Quantal and Ros-Groovy"
When my model (with the plugin) is inserted, standalone gazebo crashes with gazebo: symbol lookup error: ../my_gazebo_plugins/lib/libmy_gazebo_ros_p3d.so: undefined symbol: _ZN2tf7resolveERKSsS. (gdb gazebo is not working)
According to the tutorial, I have created rospack.cmake and CMakeLists.txt, and GAZEBO_PLUGIN_PATH is appended. Also, I am still using rosbuild, not catkin.
What am I missing?
Thanks in advance
Peshala
Originally posted by peshala on Gazebo Answers with karma: 197 on 2013-04-24
Post score: 0
Answer:
Here is how I resolved the problem. I am describing the steps for GazeboRosP3D.
create your package just like a regular ROS package. I am still using rosbuild by the way.
pull the gazebo_ros_p3d.h and gazebo_ros_p3d.cpp from simulator_gazebo/gazebo_plugins and change the .cpp file to include the correct header (e.g. #include <my_gazebo_plugins/gazebo_ros_p3d.h>)
change your CMakeLists.txt as described by @AndreiHaidu in his answer. Also, add the following:
rosbuild_add_library(my_gazebo_ros_p3d src/gazebo_ros_p3d.cpp)
rosbuild_add_boost_directories()
rosbuild_link_boost(my_gazebo_ros_p3d thread)
change the manifest by adding
<depend package="tf"/> <!-- this is needed for the p3d plugin -->
in your model.sdf add
<plugin filename="libmy_gazebo_ros_p3d.so" name="my_gazebo_ros_p3d">
  <bodyName>chassis</bodyName>
  <topicName>/ground_truth_odom</topicName>
  <alwaysOn>true</alwaysOn>
  <updateRate>100.0</updateRate>
  <gaussianNoise>0.0</gaussianNoise>
  <frameName>map</frameName>
  <xyzOffset>0 0 0</xyzOffset>
  <rpyOffset>0 0 0</rpyOffset>
</plugin>
append the plugin path at /usr/share/gazebo-1.7/setup.sh by modifying
export GAZEBO_PLUGIN_PATH=/usr/lib/gazebo-1.7/plugins:<your plugin path>/my_gazebo_plugins/lib
source the gazebo setup.sh file
Based on the plugin, you may add different packages in the manifest.
Originally posted by peshala with karma: 197 on 2013-04-25
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by peshala on 2013-04-25:
also change DisconnectWorldUpdateStart to DisconnectWorldUpdateBegin in your .cpp file. This is described in https://bitbucket.org/osrf/drcsim/commits/7070d7a926eb930f0b9cd00534f38f2ecc99ced7 | {
"domain": "robotics.stackexchange",
"id": 3238,
"tags": "gazebo-plugin, groovy"
} |
Why would we need to know $c_p - c_v$ | Question: I understand that it's easier to calculate $c_p$ experimentally since we pretty much experience constant pressure conditions all our lives. It's also more difficult to calculate $c_p$ mathematically since $$c_p = \Bigg{(}\dfrac{\partial H}{\partial T}\Bigg{)}_p$$ Mathematically $$c_v= \Bigg{(}\dfrac{\partial E}{\partial T}\Bigg{)}_V$$Is easier to calculate, But I've just proved $c_p - c_v$ and it took me a while so I was wondering why would we need to know $$c_p-c_v = \Bigg{[}p + \Bigg{(}\dfrac{\partial E}{\partial V}\Bigg{)}_T\Bigg{]}\Bigg{(}\dfrac{\partial V}{\partial T}\Bigg{)}_p$$
Basically why would we need to know $c_p - c_v$ at all.
Answer: The difference between the heat capacities is the definition of the specific gas constant:
$$ R_{\text{specific}} = c_p - c_v $$
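As a quick numerical illustration (the dry-air heat capacities below are approximate textbook values, not from the answer):

```python
cp_air = 1005.0  # J/(kg K), approximate c_p of dry air
cv_air = 718.0   # J/(kg K), approximate c_v of dry air
R_specific = cp_air - cv_air
print(R_specific)  # 287.0 J/(kg K), the familiar specific gas constant of dry air
```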
And so depending on your use-case, that could be handy. Usually we use it the other way around to get either $c_p$ or $c_v$ from the universal gas constant and molecular weight and whichever heat capacity we know. | {
"domain": "physics.stackexchange",
"id": 44250,
"tags": "thermodynamics"
} |
Why did my recently cleaned glass break? | Question: Yesterday, I washed a glass with hot water and left it to dry in the following position:
After several minutes I heard a weird sound. I came back to the kitchen and found that the glass had broken; however, it did not look like it had fallen. It seemed like it had exploded (image taken after some of the pieces were cleaned up):
The weird thing was that it was shattered into a lot of small pieces:
My question: is it possible for my glass to break by itself in such a way and how?
Answer: The fact that the glass exploded into those blunted, blocky little pebbles proves that during its manufacture, there were significant amounts of compressive thermal stresses frozen into it.
These arise because as it solidifies from the molten state, glass shrinks slightly, and the first part of the glass to solidify is the outermost part (i.e., the exposed external surfaces). Then, when the still-molten interior solidifies a moment later, it contracts and in so doing tries to pull together the solidified "skin" with it- thereby applying a compressive stress to the skin.
This makes the resulting glass part significantly stronger, because to place the glass in tension and cause it to fail in brittle fracture requires that these residual compressive stresses be overcome by the applied load. This is in fact the mechanism by which tempered safety glass is made, as used in sliding glass doors and glass bathtub and shower enclosures.
When the applied stress is great enough to trigger brittle failure in the safety glass, the entire piece of glass explodes into a pile of blunt pebbles instead of a series of razor-sharp daggers which can do great harm.
Now, how to blow up a piece of tempered glass? If you rub it hard enough, it is possible to put microscratches into its surface, any one of which can redistribute the stresses in the part and initiate brittle fracture. Note also that water is very slightly soluble in glass, and can also get into the microscratches and alter the stress distributions in the part. Having one zone of the glass hotter than another zone makes this uneven stress distribution even worse. Then, when your back is turned, the part goes kaboom. | {
"domain": "physics.stackexchange",
"id": 65351,
"tags": "thermodynamics, glass"
} |
Difference between matrix representations of tensors and $\delta^{i}_{j}$ and $\delta_{ij}$? | Question: My question basically is, is the Kronecker delta $\delta_{ij}$ or $\delta^{i}_{j}$. Many tensor calculus books (including the one which I use) state it to be the latter, whereas I have also read many instances where they use the former. They cannot be the same, as they don't have the same transformation laws. What I think is that since
$\delta_{j}^{i} = (\delta_{j}^{i})'$, but $\delta_{ij}$ doesn't, so the latter cannot be a tensor. But the problem is that both have the same values: $1$ or $0$, depending upon the indices. So it makes me think that $\delta_{ij}$ is just the identity matrix $I$ and not a tensor, and $\delta^{i}_{j}$ is a function. But since $\delta_{j}^{i}$ also has the same output as $\delta_{ij}$, WHAT IS THE DIFFERENCE?
I think it could be matrix representations. IN GENERAL, is there a difference between matrix representations of $\delta_{ij}$, $\delta^{ij}$ and $\delta_{j}^{i}$ (or any other tensor for that matter). Please answer these (the difference between mixed indices AND matrix representations).
Answer: I) Let us for simplicity discuss tensors in the context of (finite-dimensional) vector spaces and multilinear algebra. [There is a straightforward generalization to manifolds and differential geometry.]
II) Abstractly in coordinate-free notation, the Kronecker delta tensor, or tensor contraction, is the natural pairing
$$\tag{1} V \otimes V^{*}~\stackrel{\delta}{\longrightarrow}~ \mathbb{F}$$
$$ v\otimes f ~\stackrel{\delta}{\mapsto}~ f(v) , \quad v\in V, \quad f\in V^{*},$$
between an $\mathbb{F}$-vector space $V$ and its dual vector space $V^{*}$.
III) If we choose a basis $(e_i)_{i\in I}$ for $V$, there is a dual basis $(e^{j*})_{j\in I}$ for $V^{*}$ such that
$$\tag{2} e^{j*}(e_i)~=~\delta^j_i~:=~\left\{ \begin{array}{rcl} 1 &\text{for}& i=j, \\ \\ 0 &\text{for}& i\neq j. \end{array}\right. $$
[Here we distinguish between covariant and contravariant indices.]
Then for a vector $v=\sum_{i\in I} v^i e_i\in V$ and a covector $ f=\sum_{j\in I} f_j e^{j*}\in V^{*}$, the contraction map (1) is
$$\tag{3} \delta(v,f)~=~ \sum_{i,j\in I} f_j \delta^j_i v^i~=~ \sum_{i\in I}f_i v^i. $$
In other words, $\delta^j_i$ is the matrix representation for the $\delta$ contraction map (1). It's an interesting fact that the matrix representation is independent of the choice of basis $(e_i)_{i\in I}$ for $V$, as long as we choose the corresponding dual basis for $V^{*}$ in the natural way. We often say that $\delta^j_i$ transforms as a tensor, or is a tensor.
IV) Now what about $\delta_{ij}$ with lower indices? Well, first we must introduce a symmetric bilinear form, or metric,
$$\tag{4} V\times V ~\stackrel{g}{\longrightarrow}~ \mathbb{F} $$
$$ g(v,w)=g(w,v) .$$
If we choose a basis $(e_i)_{i\in I}$ for $V$, then we can write
$$ \tag{5} g ~=~ \sum_{i,j\in I} g_{ij}~ e^{i*}\otimes e^{j*} .$$
Often we will choose a metric which is the unit matrix in a certain basis
$$\tag{6} g_{ij} ~=~\delta_{ij}~:=~\left\{ \begin{array}{rcl} 1 &\text{for}& i=j, \\ \\ 0 &\text{for}& i\neq j. \end{array}\right. $$
If we now choose another basis then the matrix representation $g_{ij}$ for the metric (4) will in general change. It will in general no longer be the unit matrix $\delta_{ij}$. We say that $\delta_{ij}$ does not transform as a tensor under general change of bases/coordinates.
In a nutshell, $\delta_{ij}$ with lower indices implicitly signals the presence of a metric (4), or in other words, a notion of length scale in the vector space $V$. It is important to realize that the choice of a metric (4) in $V$ is a non-canonical choice.
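The contrast between sections III and IV is easy to demonstrate numerically. Below is a minimal sketch (plain Python, $2\times 2$ matrices as nested lists, and an arbitrarily chosen non-orthogonal change-of-basis matrix $A$; the convention used is that mixed components transform as $A^{-1}\delta A$ and covariant components as $A^{T}gA$): the mixed delta stays the identity, while $g_{ij}=\delta_{ij}$ does not.

```python
def matmul(a, b):
    # Multiply two 2x2 matrices given as nested lists.
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(a):
    return [[a[j][i] for j in range(2)] for i in range(2)]

def inverse(a):
    # Inverse of a 2x2 matrix.
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det, a[0][0] / det]]

identity = [[1.0, 0.0], [0.0, 1.0]]

# An arbitrary non-orthogonal change-of-basis matrix.
A = [[2.0, 1.0], [0.0, 1.0]]

# Mixed tensor delta^i_j transforms as A^{-1} . delta . A  ->  still the identity.
mixed = matmul(inverse(A), matmul(identity, A))

# Covariant components g_ij = delta_ij transform as A^T . g . A  ->  not the identity.
covariant = matmul(transpose(A), matmul(identity, A))

print(mixed)      # identity again
print(covariant)  # [[4.0, 2.0], [2.0, 2.0]]
```

For an orthogonal $A$ (so $A^{T}=A^{-1}$) both computations coincide, which is exactly the statement in section V.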
V) However, once we are given a metric $g$, it is natural to study changes of bases/coordinates that preserve this metric $g$. These correspond to orthogonal transformations and $\delta_{ij}$ behaves as a covariant tensor under such orthogonal transformations. | {
"domain": "physics.stackexchange",
"id": 14401,
"tags": "differential-geometry, tensor-calculus, metric-tensor, covariance"
} |
How to determine the shape of hybridized atomic orbitals in VB theory? | Question: From diagrams, it's rather obvious how $sp$ orbitals are hybridized - the hybrids are just a composite of the $s$ and the $\pm p_{(x)}$ orbitals. However, $sp^2$ orbitals are not just composites of $s, \pm p_x, \pm p_y$, and the same goes for $sp^3$.
This incongruity leads to my question.
I am aware that the general shapes are trigonal planar and pyramidal, respectively. However, I recall learning of a more mathematical approach that used the wave function (I think?)
In my notes, I found this formula.
$$\Phi_i=\sum_{j=1}^nc_{ij}\phi_j, i=1,2,3...n$$
My understanding is that $n$ is the number of atomic orbitals being mixed (and thus also the number of hybrid orbitals produced). I believe $c_{ij}$ is a constant coefficient, and $\phi_j$ is an atomic orbital.
Later, for $sp^2,$ I have the following formulas:
\begin{align}
\require{cancel}
\Phi_1 &=c_{1,1}\cdot s+ \cancelto{0}{c_{1,2}\cdot p_x} + c_{1,3} \cdot p_y &&= c_{1,1}\cdot s+c_{1,3} \cdot p_y\\
\Phi_2 &=c_{2,1}\cdot s+ c_{2,2}\cdot p_x + c_{2,3} \cdot p_y &&= c_{2,1}\cdot s+ c_{2,2}\cdot p_x + c_{2,3} \cdot -p_y\\
\Phi_3 &=c_{3,1}\cdot s+ c_{3,2}\cdot p_x + c_{3,3} \cdot p_y &&= c_{3,1}\cdot s+ c_{3,2}\cdot -p_x + c_{3,3} \cdot -p_y \\
\end{align}
I have some recollection that the wave functions zeroes out for particular quantum number values, but I can't remember how it all fits together with this equation.
I couldn't find this formula online, so I'm hoping that it might make sense to someone here.
Thanks for the help!
Answer: What we call "Hybridization" is really just a mathematical transformation of an approximated wavefunction. Your basic summation formula describes how to build some sort of customized wavefunction by forming a linear combination of some set of "basis" orbitals; you will find this equation over and over if you look up "basis set expansion". Now, of course you can't just use any random coefficients $c_{ij}$ of your liking for this custom wavefunction; you have to adhere to some physical rules. Most importantly, the overall energy that the wavefunction represents must not change. I won't go into detail about the constraints, but if you are interested, google "unitary transformation".
For orbital hybridization, you first have to find a set of basis orbitals from a physically sound starting wavefunction. If you only look at an isolated atom, then atomic orbitals are obviously a useful choice. From there, you can form combinations of those orbitals (by some criteria that really aren't uniquely defined) while adhering to the rules of the unitary transformation. In the end, you will arrive at your preferred hybrid orbitals that fulfill your chosen criterion as best as they can while keeping the wavefunction energetically equivalent to the original one. Some of the coefficients $c_{ij}$ in your equations may come out to be 0, typically for reasons of symmetry. For example, an $sp^2$ hybrid that is oriented along the $y$ axis of your coordinate system would obviously only have contributions from the $p_y$ orbital, not from $p_x$ and $p_z$. But the exact values of the coefficients will generally depend on the specifics of your problem, most importantly the definition of your coordinate system. So you really can't say per se which coefficients will drop out to be 0 and which won't -- it's one of those "it depends" situations.
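To make the "it depends on orientation" point concrete, here is a sketch with one conventional choice of $sp^2$ coefficients (first hybrid pointing along $+x$; these particular numbers are illustrative, not unique, but any valid choice must pass the orthonormality check that the unitary-transformation constraint imposes):

```python
import math

# One conventional sp2 coefficient choice: rows are the hybrids Phi_1..Phi_3,
# columns are the coefficients of (s, p_x, p_y); hybrid 1 points along +x.
C = [
    [1 / math.sqrt(3),  math.sqrt(2) / math.sqrt(3),  0.0],
    [1 / math.sqrt(3), -1 / math.sqrt(6),  1 / math.sqrt(2)],
    [1 / math.sqrt(3), -1 / math.sqrt(6), -1 / math.sqrt(2)],
]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Unitarity in action: the hybrids are orthonormal, so the norm (and energy)
# of the total wavefunction is unchanged by the mixing.
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert abs(dot(C[i], C[j]) - expected) < 1e-12

# Each hybrid carries exactly 1/3 s character, as expected for sp2.
s_character = [row[0] ** 2 for row in C]
print(s_character)  # each entry is 1/3
```

Rotating the coordinate system gives a different but equally valid matrix $C$; only properties like orthonormality and the $1/3$ s character survive the choice.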
General Thoughts
Unfortunately, hybridization is a concept that is so misleading that I would consider its general use in chemistry to be outright physically incorrect. People invoke it to suggest and visualize bonding situations in molecules and to explain the resulting molecular shapes, but unfortunately always seem to forget some very basic undergraduate quantum chemistry in this process.
Hybrid orbitals are linear combinations of eigenfunctions to different energy eigenvalues of the underlying one-electron Hamiltonian; think $s$ and $p$ orbitals. As such, they are not solutions to the time-independent Schrödinger Equation anymore, which in turn means that they are time-dependent functions. This implies two things:
1.) Hybrid orbitals do not have a fixed shape; what you see visualized is just a single snapshot in time. The orbital shapes periodically "morph" back and forth and can at other points in time look nothing like those suggestive localized "bonding" orbitals anymore. In fact, the hybrid orbitals can even change places among each other over time if the symmetry of the system permits.
To better drive this point home, here are some (semi-quantitative) visualizations of the time development of the electron densities in the bonding and non-bonding orbitals in water; the C-C σ and π "banana" bonds in ethylene; and an sp³ hybrid in methane. In each case, the orbitals transform over time into something that is actually quite the opposite to their original "purpose": Bonding electron densities shift outwards to resemble something of anti-bonding character, and symmetry-equivalent orbitals migrate from one bonded pair of nuclei to another. At each point throughout these videos, one could stop the animation and take a freeze-frame of the orbital at that time, and it would still describe the exact same "localized electron pair" that was used in the beginning to explain how electron density is confined with a certain purpose to some specific region of the molecule.
2.) Hybrid orbitals do not have defined energies, since they are not solutions to the time-independent Schrödinger Equation. If you were to do any sort of "energy measurement" on a hybrid orbital, you would end up with a Schrödinger's Cat situation where you observe the energies of the constituent atomic orbitals with certain probabilities that are connected to the expansion coefficients $c_{ij}$.
3.) Hybridization is not stringently mathematically defined if one regards it as an orbital localization procedure. Different concepts exist for this localization process, which can yield qualitatively different results (see, for example, the discussion about "rabbit ears" lone pairs in water). This is also what ultimately gives rise to the weird fractional mixtures of s-p hybridization ratios that one sometimes finds discussed in the literature.
"MO and VB Theory are ultimately equivalent"
Quantum chemistry deals with two major theories to describe the electronic structure of molecules: Molecular orbital (MO) theory, which gives rise to canonical MOs that can generally be delocalized over an entire molecule and are thus very unappealing from a classical chemical perspective; and Valence-Bond (VB) theory, which attempts to reconcile the idea of isolated chemical bonds and electron pairs with quantum mechanics by postulating the existence of the localized hybrid orbitals that are in question here.
Now, in almost any discussion on the topic, it is at some point claimed that the two theories are ultimately equivalent and related to one another only through mathematical transformations. This is technically correct, and one can arrive at the total overall wavefunction and energy of the molecule either way. However, it is again also very misleading, because it suggests that the two theories sort of "meet in the middle" if you apply them in some strict, physically rigorous way. But what this statement actually means is that appropriately constructed hybrid orbitals can be used as a basis set for developing static wavefunctions with valid energy solutions to a one-electron molecular Hamiltonian; and that, when taking the totality of all hybrid orbitals together, you arrive at the same wavefunction as MO theory. In other words, "accurate" VB theory is simply MO theory with extra intermediate steps! The two theories are equivalent in the sense that at the invoked "limit of correctness", VB theory becomes MO theory; the latter never has to budge in the process, as it is a physically and mathematically sound treatment of the molecule to begin with.
Experimental (Counter-)Evidence
Up to this point, all arguments have been made from a purely theoretical perspective. The one concession that can be readily made to VB theory is that a combination of all hybrid orbitals again yields the total MO wavefunction for the molecule. As chemists, however, we are very often not just interested in a "holistic" description of molecules as a whole, but rather its individual electrons (or electron pairs). In that case, this equivalence is of no help. What we would really like to have is a direct look into whether the orbital hybridization concept holds up as a model in physical reality for the constituent electron pairs inside a molecule. In other words, can we interrogate electrons as to "what kind of orbitals they are in"?
Usually, this question immediately prompts responses that hinge on Photoelectron Spectroscopy (PES), where one measures the photon energies required to eject electrons from a molecule. An exemplary case of this is again the methane molecule: MO theory predicts a 3:1 ratio of two energetically distinct orbital types in the valence shell, whereas VB theory postulates its equivalent hybrid orbitals. The actual photoelectron spectrum reveals two distinct peaks at roughly a 3:1 intensity ratio, which is often taken as direct evidence for the veracity of the MO picture. VB proponents typically counter these claims by invoking involvement of the ionic product state of methane in the spectrum, and by requiring that it is not just one hybrid orbital that contributes to the spectrum, but a combination of all four of them (which, again, is simply the reconstruction of MOs).
Since the interpretation of these findings is contentious, they don't serve us well to find an answer as to whether hybrid or molecular orbitals are actually observable in any way. Luckily, there is another method we can use to investigate the nature of bound electrons, called Electron Momentum Spectroscopy (EMS). In essence, this technique allows the measurement of the momentum distribution of valence electrons in the initial state of the molecule without significant involvement of the ionic product, thus addressing one of the criticisms of the PES experiments. When compared to MOs or the extremely similar Kohn-Sham orbitals (KSOs) from Density Functional Theory (which are essentially MOs with inclusion of some electron correlation), it can be seen that the measurement data is almost perfectly explained by MOs or KSOs, but qualitatively incompatible with hybrid orbitals. (1,2)
Obviously, these observations lend strong support to the idea that orbitals can indeed be interpreted as a physically meaningful constructs for describing the behaviour of specific electrons in a molecule. In order to be observable in this way however, they must be eigenfunctions to the molecular Hamiltonian, which -- with the arguments above -- directly explains why MOs (or KSOs), but not hybrid orbitals, are able to describe these orbitals.
From my own perspective, the concept of orbital hybridization thus has a very convincing body of both theoretical and experimental evidence speaking against it, or at least against the way that it is usually invoked to explain molecular shapes and chemical bonding. As mentioned above, you can use VB theory and its hybrid orbitals in a sense that makes them physically rigorous; but it just means that you're actually back to doing MO theory again.
Unfortunately, at least some of the points I have addressed above -- especially the time dependence and lack of energy eigenvalues -- seem to rarely ever be mentioned when it comes to discussions of hybrid orbitals. This is undoubtedly because these topics are simply too advanced for a lot of the teaching contexts that hybrid orbitals appear in (although even high-level disputes in quantum chemistry journals generally fail to address these points too). But especially with regards to chemical education, we must ask ourselves whether teaching a concept "not because it is correct, but because it is simple" is really an appropriate thing to do. Obviously, the answer should be no; and I think that chemistry as an entire discipline needs to abandon the idea of hybrid orbitals in the same sense, and for the same reasons, that we abandoned Bohr's atom model. | {
"domain": "chemistry.stackexchange",
"id": 14458,
"tags": "quantum-chemistry, valence-bond-theory"
} |
Is the Planck length the smallest length that exists in the universe or is it the smallest length that can be observed? | Question: I have heard both that Planck length is the smallest length that there is in the universe (whatever this means) and that it is the smallest thing that can be observed because if we wanted to observe something smaller, it would require so much energy that would create a black hole (or our physics break down). So what is it, if there is a difference at all.
Answer: Short answer: nobody knows, but the Planck length is more numerology than physics at this point
Long answer: Suppose you are a theoretical physicist. Your work doesn't involve units, just math--you never use the fact that $c = 3 \times 10^8 m/s$, but you probably have $c$ pop up in a few different places. Since you never work with actual physical measurements, you decide to work in units with $c = 1$, and then you figure when you get to the end of the equations you'll multiply by/divide by $c$ until you get the right units. So you're doing relativity, you write $E = m$, and when you find that the speed of an object is .5 you realize it must be $.5 c$, etc. You realize that $c$ is in some sense a "natural scale" for lengths, times, speeds, etc. Fast forward, and you start noticing there are a few constants like this that give natural scales for the universe. For instance, $\hbar$ tends to characterize when quantum effects start mattering--often people say that the classical limit is the limit where $\hbar \to 0$, although it can be more subtle than that.
So, anyway, you start figuring out how to construct fundamental units this way. The speed of light gives a speed scale, but how can you get a length scale? Turns out you need to squash it together with a few other fundamental constants, and you get:
$$
\ell_p = \sqrt{ \frac{\hbar G}{c^3}}
$$
I encourage you to work it out; it has units of length. So that's cool! Maybe it means something important? It's REALLY small, after all--$\approx 10^{-35} m$. Maybe it's the smallest thing there is!
But let's calm down a second. What if I did this for mass, to find the "Planck mass"? I get:
$$
m_p = \sqrt{\frac{\hbar c}{G}} \approx 21 \mu g
$$
Ok, well, micrograms ain't huge, but to a particle physicist they're enormous. But this is hardly any sort of fundamental limit to anything. It isn't the world's smallest mass. Wikipedia claims that if a charged object had a mass this large, it would collapse--but charged point particles don't have even close to this mass, so that's kind of irrelevant.
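Both numbers are quick to check (a sketch using approximate CODATA-style values for the constants; the last digits are not exact):

```python
# Fundamental constants (SI units, approximate values).
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

# Planck length: sqrt(hbar * G / c^3)
planck_length = (hbar * G / c**3) ** 0.5

# Planck mass: sqrt(hbar * c / G)
planck_mass = (hbar * c / G) ** 0.5

print(planck_length)        # ~1.6e-35 m
print(planck_mass)          # ~2.18e-8 kg
print(planck_mass * 1e9)    # ~21.8 micrograms
```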
It's not that these things are pointless--they do make math easier in a lot of cases, and they tell you how to work in these arbitrary theorists' units. But right now, there isn't a good reason in experiment or in most modern theory to believe that it means very much besides providing a scale. | {
"domain": "physics.stackexchange",
"id": 36092,
"tags": "spacetime, quantum-gravity, physical-constants, discrete"
} |
Are the image data augmentation generators in Keras randomly applied | Question: I am working on an image classification problem and using data augmentation in Keras.
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=2,
    horizontal_flip=True)
I would like to know if the ImageDataGenerator applies the transformations randomly to images. That is, for example, a rotation may be applied to one image, whilst a flip may be applied to another image. I want to know if the decision to apply a rotation or a flip is randomly determined.
Answer: I suggest having a look at the relevant documentation.
There, it states:
rotation_range: Int. Degree range for random rotations.
...
horizontal_flip: Boolean. Randomly flip inputs horizontally.
Since each of the operations is described as applied randomly, I would say this means your images will be generated sometimes with and sometimes without the augmentation steps, independently from one another.
If that doesn't convince you, here is the relevant snippet from the source code:
if self.horizontal_flip:
    if np.random.random() < 0.5:
        x = flip_axis(x, img_col_axis)
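If it helps to see the logic in isolation, here is a stand-alone sketch of that per-image, per-transform coin flip (plain Python, no Keras; the function and key names are made up for illustration). Each augmentation is decided by its own independent random draw, so one image may get only a rotation, another only a flip, another both or neither:

```python
import random

def random_transform_decisions(rotation_range=2, horizontal_flip=True):
    """Decide, independently, which augmentations to apply to ONE image."""
    decisions = {}
    # Rotation: a random angle drawn from [-rotation_range, rotation_range].
    decisions["rotation"] = random.uniform(-rotation_range, rotation_range)
    # Flip: applied with probability 0.5, mirroring the Keras snippet above.
    decisions["flip"] = horizontal_flip and random.random() < 0.5
    return decisions

random.seed(0)
batch = [random_transform_decisions() for _ in range(1000)]

# Roughly half the images get flipped, and the flip decision says nothing
# about the rotation angle drawn for the same image.
flipped = sum(d["flip"] for d in batch)
print(flipped)  # roughly 500 out of 1000
```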
The code for the rotation step is a little more involved, but it is contained within the same class that I linked above. | {
"domain": "datascience.stackexchange",
"id": 2662,
"tags": "machine-learning, deep-learning, keras"
} |
Is conservation of momentum and energy valid for non-inertial frames? | Question: Conservation laws of momentum and energy are said to be the most basic principles of physics. Are they also valid for non-inertial frames, and in what way?
Answer: Regarding total momentum conservation, the point is that in non-inertial reference frames inertial forces are present acting on every physical object. Momentum conservation is valid in the absence of external forces.
However, if these forces are directed along a fixed axis, say $e_x$, or are always linear combinations of a pair of orthogonal unit vectors, say $e_x,e_y$, (think of a frame of axes rotating with respect to an inertial frame around the fixed axis $e_z$ with a constant angular velocity), conservation of momentum still holds in the orthogonal direction, respectively. So, for instance, in a non-inertial rotating frame about $e_z$, conservation of momentum still holds referring to the $z$ component.
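The rotating-frame example can be checked directly: with $\vec\omega = \omega\,e_z$, both the centrifugal force $-m\,\vec\omega\times(\vec\omega\times\vec r)$ and the Coriolis force $-2m\,\vec\omega\times\vec v$ have vanishing $z$-components, so no inertial force acts along $e_z$. A quick numerical sketch (arbitrary sample values for $\vec r$ and $\vec v$):

```python
def cross(a, b):
    # Cross product of two 3-vectors given as tuples.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

m = 1.0
omega = (0.0, 0.0, 2.0)   # rotation about the fixed z axis
r = (1.0, -0.5, 3.0)      # arbitrary position in the rotating frame
v = (0.3, 0.7, -1.2)      # arbitrary velocity in the rotating frame

centrifugal = tuple(-m * c for c in cross(omega, cross(omega, r)))
coriolis = tuple(-2 * m * c for c in cross(omega, v))

# Both inertial forces lie in the x-y plane: their z-components vanish,
# so the z-component of momentum is still conserved in the rotating frame.
assert centrifugal[2] == 0.0 and coriolis[2] == 0.0
print(centrifugal, coriolis)
```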
Mechanical energy conservation is a more delicate issue. A general statement is that, for a system of points interacting by means of internal conservative forces, a notion of conserved total mechanical energy can be given even in non-inertial reference frames provided a technical condition I go to illustrate is satisfied.
Let us indicate by $I$ an inertial reference frame and by $I'$ the used non-inertial frame. Assume that our physical system is made of a number of points interacting by means of conservative true forces depending on the differences of position vectors of the points, so that a potential energy is defined and it does not depend on the reference frame.
If the origin of $I'$ has constant acceleration with respect to $I$ and the same happens for the angular velocity $\omega$ of $I'$
referred to $I$ (it is constant in magnitude and direction), then only three types of inertial forces take place in $I'$ and all them are conservative but one which does not produce work (Coriolis' force). In this case, the sum of the kinetic energy in $I'$, the potential energy of the true forces acting among the points and the potential energy of the inertial forces appearing in $I'$ turns out to be conserved in time along the evolution of the physical system. | {
"domain": "physics.stackexchange",
"id": 31565,
"tags": "newtonian-mechanics, momentum, reference-frames, conservation-laws, inertial-frames"
} |
GMapping for ROS Melodic? | Question:
Is it possible to use gmapping under ROS Melodic?
Thanks,
Henne
Originally posted by Henne on ROS Answers with karma: 33 on 2019-07-09
Post score: 0
Answer:
To check the status of gmapping in ROS Melodic, see #q303253 (the procedure, not necessarily the answer itself of course).
For building it yourself from sources, see #q300480.
Originally posted by gvdhoorn with karma: 86574 on 2019-07-09
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by mgruhler on 2019-07-09:
I agree with the answer by @gvdhoorn
As I've just stumbled upon this: gmapping has been released into melodic two days ago. So it should be available with the next sync...
Comment by gvdhoorn on 2019-07-09:
@mgruhler: I've just updated #q300480. Not all of gmapping has been released, but it most likely will be (ref the referenced issue).
Comment by mgruhler on 2019-07-09:
@gvdhoorn: you are correct. I missed that the link to the rosdistro PR from ros-perception/slam_gmapping is actually pointing to the PR for ros-perception/openslam_gmapping. | {
"domain": "robotics.stackexchange",
"id": 33373,
"tags": "ros, ros-melodic"
} |
Travelling salesman using brute-force and heuristics | Question: I have implemented both a brute-force and a heuristic algorithm to solve the travelling salesman problem.
import doctest
from itertools import permutations


def distance(point1, point2):
    """
    Returns the Euclidean distance of two points in the Cartesian Plane.

    >>> distance([3,4],[0,0])
    5.0
    >>> distance([3,6],[10,6])
    7.0
    """
    return ((point1[0] - point2[0])**2 + (point1[1] - point2[1])**2) ** 0.5


def total_distance(points):
    """
    Returns the length of the path passing throught
    all the points in the given order.

    >>> total_distance([[1,2],[4,6]])
    5.0
    >>> total_distance([[3,6],[7,6],[12,6]])
    9.0
    """
    return sum([distance(point, points[index + 1]) for index, point in enumerate(points[:-1])])


def travelling_salesman(points, start=None):
    """
    Finds the shortest route to visit all the cities by bruteforce.
    Time complexity is O(N!), so never use on long lists.

    >>> travelling_salesman([[0,0],[10,0],[6,0]])
    ([0, 0], [6, 0], [10, 0])
    >>> travelling_salesman([[0,0],[6,0],[2,3],[3,7],[0.5,9],[3,5],[9,1]])
    ([0, 0], [6, 0], [9, 1], [2, 3], [3, 5], [3, 7], [0.5, 9])
    """
    if start is None:
        start = points[0]
    return min([perm for perm in permutations(points) if perm[0] == start], key=total_distance)


def optimized_travelling_salesman(points, start=None):
    """
    As solving the problem in the brute force way is too slow,
    this function implements a simple heuristic: always
    go to the nearest city.

    Even if this algoritmh is extremely simple, it works pretty well
    giving a solution only about 25% longer than the optimal one (cit. Wikipedia),
    and runs very fast in O(N^2) time complexity.

    >>> optimized_travelling_salesman([[i,j] for i in range(5) for j in range(5)])
    [[0, 0], [0, 1], [0, 2], [0, 3], [0, 4], [1, 4], [1, 3], [1, 2], [1, 1], [1, 0], [2, 0], [2, 1], [2, 2], [2, 3], [2, 4], [3, 4], [3, 3], [3, 2], [3, 1], [3, 0], [4, 0], [4, 1], [4, 2], [4, 3], [4, 4]]
    >>> optimized_travelling_salesman([[0,0],[10,0],[6,0]])
    [[0, 0], [6, 0], [10, 0]]
    """
    if start is None:
        start = points[0]
    must_visit = points
    path = [start]
    must_visit.remove(start)
    while must_visit:
        nearest = min(must_visit, key=lambda x: distance(path[-1], x))
        path.append(nearest)
        must_visit.remove(nearest)
    return path


def main():
    doctest.testmod()
    points = [[0, 0], [1, 5.7], [2, 3], [3, 7],
              [0.5, 9], [3, 5], [9, 1], [10, 5]]
    print("""The minimum distance to visit all the following points: {}
starting at {} is {}.
The optimized algoritmh yields a path long {}.""".format(
        tuple(points),
        points[0],
        total_distance(travelling_salesman(points)),
        total_distance(optimized_travelling_salesman(points))))


if __name__ == "__main__":
    main()
Answer: I enjoyed the first look at the code as it's very clean, you have
extensive docstrings and great, expressive function names. Now you know
the deal with PEP8, but except for the one 200 character long line I
don't think it matters much really.
There are a few typos with the wrong spelling "algoritmh".
The coordinates should be immutable 2-tuples. The reason being the
safety of immutable data-structures. YMMV, but that makes it really
obvious that those are coordinates as well.
optimized_travelling_salesman should make a defensive copy of
points, or you should otherwise indicate that it's destructive on that
argument.
Instead of if start is None: start = points[0] you could also use
start = start or points[0] to save some space while still being
relatively readable.
For the algorithms, the only thing I'd add is not to use square root if you
don't have to. You can basically create a distance_squared and use that
instead of distance because the relationship
between a smaller and bigger distance will stay the same regardless.
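Several of these suggestions combined into one sketch (immutable tuples for the coordinates, a defensive copy of the input list, and squared distances inside the nearest-neighbour comparison; the function name here is made up to avoid clashing with the original):

```python
def nearest_neighbour_path(points, start=None):
    # Same structure as optimized_travelling_salesman above, but with a
    # defensive copy so the caller's list is left untouched.
    must_visit = list(points)   # copy instead of aliasing the argument
    start = start or must_visit[0]
    path = [start]
    must_visit.remove(start)
    while must_visit:
        # Squared distance is enough for comparisons: no sqrt needed,
        # since the ordering of distances is unchanged.
        nearest = min(must_visit,
                      key=lambda p: (path[-1][0] - p[0]) ** 2 +
                                    (path[-1][1] - p[1]) ** 2)
        path.append(nearest)
        must_visit.remove(nearest)
    return path

cities = [(0, 0), (10, 0), (6, 0)]
print(nearest_neighbour_path(cities))  # [(0, 0), (6, 0), (10, 0)]
print(cities)                          # unchanged: [(0, 0), (10, 0), (6, 0)]
```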
That doesn't apply for the final output of course. Edit: And, as mentioned below by @JanneKarila, you can't use that for the brute-force version. | {
"domain": "codereview.stackexchange",
"id": 29803,
"tags": "python, traveling-salesman"
} |
Filtering Passband Signals using Complex Baseband Filtering | Question: So I'm given a pass band filter with specific transfer function $H_p(f)$, I want to implement this via baseband processing.
I already know how to take the input signal $u(t)$ and process it such that I get the I/Q components $u_I(t), u_Q(t)$.
I also have figured out that the following convolution relation holds for the filter output $y$'s I/Q components $$y_I=(u_I * h_I) - (u_Q*h_Q)$$ and $$y_Q = (u_I*h_Q) + (u_Q*h_I)$$ where $h_Q$ and $h_I$ are the filter's I/Q components.
I have run into an issue where I can't figure out a simple way to determine $h_Q$ and $h_I$.
I know for the complex envelope of the filter I have $$h(t) = h_I(t) + j\cdot h_Q(t)$$
I can take the fourier transform of this to obtain $H(f) = H_I(f) + j\cdot H_Q(f)$.
So essentially my issue is with how to obtain $H_I$ and $H_Q$.
My book basically gives the answer but I must be missing something since I can't figure out the reasoning behind it. They state that $$H_I(f) = (H(f) + H^*(-f))/2$$ and $$j\cdot H_Q(f) = (H(f)-H^*(-f))/2$$.
I'm guessing its some property to do with the fourier that I've forgotten but I'm hoping someone can explain the reasoning for it. Thanks!
Answer: Since $h_I(t)$ is the real part of $h(t)$ you have
$$h_I(t)=\frac12[h(t)+h^*(t)]\tag{1}$$
where $h^*(t)$ is the complex conjugate of $h(t)$. For the imaginary part you have
$$h_Q(t)=\frac{1}{2j}[h(t)-h^*(t)]\tag{2}$$
Since the Fourier transform of $h^*(t)$ is $H^*(-f)$1, the Fourier transforms of (1) and (2) are
$$H_I(f)=\frac12[H(f)+H^*(-f)]\\
H_Q(f)=\frac{1}{2j}[H(f)-H^*(-f)]$$
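These identities are easy to verify numerically on a DFT grid, where $H^*(-f)$ becomes $\overline{H[(-k)\bmod N]}$. A sketch with an arbitrarily chosen complex sequence (plain Python, no NumPy):

```python
import cmath

N = 8
# An arbitrary complex "impulse response" h[n] = h_I[n] + j*h_Q[n].
h = [complex(n % 3 - 1, (n * n) % 5 - 2) for n in range(N)]

def dft(x):
    # Naive length-N discrete Fourier transform.
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

H = dft(h)
H_I = dft([z.real for z in h])   # transform of the real part h_I
H_Q = dft([z.imag for z in h])   # transform of the imaginary part h_Q

for k in range(N):
    # On the DFT grid, H*(-f) corresponds to conj(H[(-k) mod N]).
    mirror = H[(-k) % N].conjugate()
    assert abs(H_I[k] - (H[k] + mirror) / 2) < 1e-9
    assert abs(H_Q[k] - (H[k] - mirror) / (2j)) < 1e-9

print("identities verified")
```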
1The fact that $H^*(-f)$ is the Fourier transform of $h^*(t)$ can be easily seen as follows:
With
$$H(f)=\int_{-\infty}^{\infty}h(t)e^{-j2\pi ft}dt$$
the Fourier transform of $h^*(t)$ is
$$\mathcal{F}\{h^*(t)\}=\int_{-\infty}^{\infty}h^*(t)e^{-j2\pi ft}dt=
\left[\int_{-\infty}^{\infty}h(t)e^{j2\pi ft}dt\right]^*=H^*(-f)$$ | {
"domain": "dsp.stackexchange",
"id": 2339,
"tags": "filters, fourier-transform, convolution, filtering, fourier"
} |
How to refactor function with string inputs to select a "mode" to avoid magic strings? | Question: See code from this question about the advantages of Enums.
from typing import Literal


def print_my_string(my_string: str, mode: Literal['l', 'u', 'c']):
    if mode == 'l':
        print(my_string.lower())
    elif mode == 'u':
        print(my_string.upper())
    elif mode == 'c':
        print(my_string.capitalize())
    else:
        raise ValueError("Unrecognised mode")
There is a single function which has multiple behaviors depending on mode flag that has been passed in. It is possible to communicate to users via the documentation, exception, and source code, that valid modes are 'l', 'u', and 'c'. But in this code these are essentially magic strings.
Code like this is used without issue all over the place, see scipy least_squares. Nonetheless I'm trying to better understand best practices.
One improvement would be to define constants within the module.
LOWER_MODE = 'l'
UPPER_MODE = 'u'
CAPITALIZE_MODE = 'c'


def print_my_string(my_string: str, mode: Literal['l', 'u', 'c']):
    if mode == LOWER_MODE:
        print(my_string.lower())
    elif mode == UPPER_MODE:
        print(my_string.upper())
    elif mode == CAPITALIZE_MODE:
        print(my_string.capitalize())
    else:
        raise ValueError("Unrecognised mode")
However, I often see Enums come up as a solution to this problem.
from enum import Enum
class StringMode(Enum):
LOWER_MODE = 'l'
UPPER_MODE = 'u'
CAPITALIZE_MODE = 'c'
def print_my_string(my_string: str, mode: StringMode):
if mode == StringMode.LOWER_MODE :
print(my_string.lower())
elif mode == StringMode.UPPER_MODE:
print(my_string.upper())
elif mode == StringMode.CAPITALIZE_MODE:
print(my_string.capitalize())
else:
raise ValueError("Unrecognised mode")
Somehow this seems nice, helpful for IDEs/code completion, etc. But there is one major downside it has for me (and which has me scratching my head when trying to use Enums to replace magic strings): if print_my_string is a public-facing method, then the user can't use this method without ALSO importing, understanding, and using the StringMode enum. I don't want to burden the user with this. I want to maintain the non-magic enum handling of options on the back end, but allow the users to continue to pass regular documented strings.
Also, checks like 'l' == StringMode.LOWER_MODE do not work. Instead I have to do 'l' == StringMode.LOWER_MODE.value. This isn't great because if I'm using print_my_string internally, I now can't use mode == StringMode.LOWER_MODE unless I check against BOTH StringMode.LOWER_MODE and StringMode.LOWER_MODE.value, which just complicates things.
Is there a nice way to handle this that avoids using magic strings on the backend by using Enums, but allows users to pass in simple string literals on the front-end?
Right now the second method I showed using hard coded constants is feeling more attractive. But it somehow feels like what I'm doing here is almost exactly what enums are meant for, I just can't see quite how to make it work.
Answer: Use an enum mixed with the str type -- this allows your users to use either the string literal, or the enum member, and allows you to do direct comparisons in your code:
class StringMode(str, Enum): # or StrEnum in Python 3.11+
LOWER = 'l'
UPPER = 'u'
CAPITALIZE = 'c'
def print_my_string(my_string, mode):
if mode == 'l':
print(my_string.lower())
elif mode == 'u':
print(my_string.upper())
elif mode == 'c':
print(my_string.capitalize())
else:
raise ValueError("Unrecognised mode")
This allows for backwards-compatibility, and forward progress. As an old user of your code, I don't need to change a thing; as a new user, I would do something like:
from jager import print_my_string, StringMode as SM
print_my_string('See how easy?', SM.LOWER) | {
"domain": "codereview.stackexchange",
"id": 44768,
"tags": "python, enum"
} |
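As a quick sanity check of the str-mixin approach from the answer above (a sketch; StringMode is redefined here so the snippet is self-contained):

```python
from enum import Enum

class StringMode(str, Enum):
    LOWER = 'l'
    UPPER = 'u'
    CAPITALIZE = 'c'

# Because each member is also a str, plain-string comparisons work directly:
assert StringMode.LOWER == 'l'
assert 'u' == StringMode.UPPER
# Enum-to-enum comparison still works, so internal code can use members:
assert StringMode.CAPITALIZE == StringMode.CAPITALIZE
# And members behave like strings where a str is expected:
assert 'mode is ' + StringMode.LOWER == 'mode is l'
```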
Call method of specific parent class | Question: This question is related to this topic.
Context:
class OperationDevice(Device):
def __init__(self):
super(OperationDevice, self).__init__()
def operation_start(self):
# Do operation specific code
self.start()
def operation_stop(self):
# Do operation specific code
self.stop()
class SimulationDevice(Device):
def __init__(self):
super(SimulationDevice, self).__init__()
def simulation_start(self):
# Do simulation specific code
self.fsm.start()
def simulation_stop(self):
# Do simulation specific code
self.fsm.stop()
class DualModeDevice(OperationDevice, SimulationDevice):
def __init__(self, mode='simulation'):
super(DualModeDevice, self).__init__()
self._mode = mode
self._start_map = {
'simulation': self.simulation_start,
'operation': self.operation_start
}
self._stop_map = {
'simulation': self.simulation_stop,
'operation': self.operation_stop
}
def start(self):
self._start_map[self._mode]()
def stop(self):
self._stop_map[self._mode]()
Here I have to give the methods in OperationDevice and SimulationDevice different names, like simulation_start and operation_start (because of the MRO).
I actually want to define the same method name for both class and be able to call each one from DualModeDevice or subclasses.
For example operation_start from OperationDevice and simulation_start from SimulationDevice will become start. Is it possible and how?
This solution is a way to switch between the classes OperationDevice and SimulationDevice based on the value of mode. I am wondering whether it is possible to automatically bind all methods from SimulationDevice when mode = "simulation" (and likewise from OperationDevice for "operation") without having to register each method (self._start_map, for example) in the constructor (__init__) of DualModeDevice.
Answer: You could use composition instead of inheritance:
class OperationDevice(Device):
def __init__(self):
super(OperationDevice, self).__init__()
def start(self):
# Do operation specific code
pass
def stop(self):
# Do operation specific code
pass
class SimulationDevice(Device):
def __init__(self):
super(SimulationDevice, self).__init__()
def start(self):
# Do simulation specific code
self.fsm.start()
def stop(self):
# Do simulation specific code
self.fsm.stop()
class DualModeDevice(Device):
def __init__(self, mode='simulation'):
super(DualModeDevice, self).__init__()
self._mode = mode
self._mode_map = {
'simulation': SimulationDevice(),
'operation': OperationDevice()
}
def start(self):
self._mode_map[self._mode].start()
def stop(self):
self._mode_map[self._mode].stop() | {
"domain": "codereview.stackexchange",
"id": 7839,
"tags": "python, inheritance"
} |
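The dispatch in the answer above can be exercised with a minimal, self-contained sketch (Device and self.fsm from the original are replaced by stubs here, purely so the example runs):

```python
class OperationDevice:
    def start(self):
        # Stand-in for the operation-specific code.
        return 'operation start'

class SimulationDevice:
    def start(self):
        # Stand-in for the simulation-specific code (self.fsm.start()).
        return 'simulation start'

class DualModeDevice:
    def __init__(self, mode='simulation'):
        self._mode = mode
        self._mode_map = {
            'simulation': SimulationDevice(),
            'operation': OperationDevice(),
        }

    def start(self):
        # Delegate to whichever composed device the current mode selects.
        return self._mode_map[self._mode].start()

assert DualModeDevice().start() == 'simulation start'
assert DualModeDevice(mode='operation').start() == 'operation start'
```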
Why $\vec F=m\vec a$ instead of $\vec F=m\vec v$? | Question: By $\vec F=m\vec a$, an object moving at a constant 10 $\text{km/s}$ has zero acceleration, i.e. no force on the object?
http://www.physicsforums.com/showthread.php?t=622711
Answer: Force is defined as something which can change the speed or direction of motion of an object. In other words, force causes acceleration. If you invert this statement, you see that a body undergoing acceleration must mean there is some force acting on the body.
Therefore, force and acceleration must be related somehow, and $F = mv$ is incorrect. The correct relation is $F = ma$.
Edit:
From the referenced url:
So if a car (say 2000kg) is travelling at a constant speed of say
70mph on the motorway, and I like to have a picnic in the center lane,
and that car hits me, the force it exerts should be zero because
F=2000 x 0 which is zero so no force exerted on me?
The car is traveling at a constant speed, so there is no net external force being exerted on the car. When the car hits you, obviously your velocity changes - you won't be enjoying the picnic anymore because you'd be in pieces, but if you were a rigid body, you would be intact but now you'd be moving at some velocity. That is to say, you underwent acceleration. The force on you, therefore, would be non-zero. By Newton's Third Law, the force on the car would also be non-zero and, without anything to counter this force (like the engine powering the wheels), the car would decelerate. | {
"domain": "physics.stackexchange",
"id": 13959,
"tags": "newtonian-mechanics, classical-mechanics, forces"
} |
Astronomical Constant in Astronomical units? | Question: I'm doing a computer simulation of the solar system and I'm having trouble working with big numbers (implementation specific problem). So what would be the Newtonian gravitational constant $G$ in relation with the Earth mass instead of kilograms and astronomical units instead of meters?
Answer: This is a typical "unit conversion" problem. Write $G$ in SI units:
$$G=6.6738\times10^{-11} \frac{\text{m}^3}{\text{kg}\cdot\text{s}^2}.$$
Now find out how many kilograms are in an Earth mass, and how many meters are in an astronomical unit. Also consider converting seconds to some other more convenient measure of time so that $G$ comes close to unity. (Thanks, Davidmh.)
All of this should help you convert units. See this page for further help. | {
"domain": "physics.stackexchange",
"id": 13764,
"tags": "homework-and-exercises, astronomy, unit-conversion"
} |
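Carrying out the conversion the answer describes, with the Earth mass and astronomical unit taken from standard values and the day chosen as the time unit (the specific constants below are assumptions of this sketch, not from the original post):

```python
# G in SI units, then converted to AU^3 / (Earth mass * day^2).
G_SI = 6.6738e-11          # m^3 / (kg * s^2)
M_EARTH = 5.9722e24        # kg per Earth mass
AU = 1.495978707e11        # m per astronomical unit
DAY = 86400.0              # s per day

# Multiply by kg per mass unit and s^2 per time unit^2, divide by m^3 per AU^3.
G_au = G_SI * M_EARTH * DAY**2 / AU**3
print(G_au)                # roughly 8.9e-10 AU^3 / (M_earth * day^2)
```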
Can you convert the Christoffel Symbol to the form of a scalar? | Question: Given some tensor $T_{\mu v}$, you can use the metric tensor to contract its indices, converting it into the form of a scalar:
$$g^{\mu v}T_{\mu v}=T$$
Even though the Christoffel Symbol is not a tensor, can you convert it to the form of a scalar in a similar fashion?
$$\Gamma_{abc} \rightarrow \Gamma$$
Maybe by contracting it with itself?
$$\Gamma_{abc} \Gamma^{abc} = \Gamma$$
If such a form of the Christoffel symbol is allowed, can you take the contravariant derivative of its scalar form?
I apologize if this question is a bit dull, I am fairly new to the subject. Any insight would be greatly appreciated!
Answer: It depends on what you mean by "convert the Christoffel symbol to the form of a scalar". If you simply mean as you suggested in your post, $\Gamma_{abc}\Gamma^{abc}$, then no that won't work (just consider a 1-dimensional manifold: in the obvious cartesian coordinate system, this is $0$, but if you change to some non-trivial coordinate system $\Gamma$ won't be $0$, and since there's only one component, this is very easily seen).
Recall that the transformation law for $\Gamma^a_{\,bc}$ between two coordinate charts is of the form
\begin{align}
\Gamma(y)&=\Gamma(x)\frac{\partial y}{\partial x}\frac{\partial x}{\partial y}\frac{\partial x}{\partial y}+ \frac{\partial y}{\partial x}\frac{\partial^2 x}{\partial y\partial y},
\end{align}
dressed up with appropriate indices. It is this second term which causes all the 'issues'. So, very roughly speaking, to create something tensorial, you have to consider differences in $\Gamma$'s. For instance, the Riemann curvature tensor field $R^a_{\,bcd}$ is made of $\Gamma$, its partial derivatives, and their differences:
\begin{align}
R&\sim\frac{\partial \Gamma}{\partial x}-\frac{\partial \Gamma}{\partial x}+\Gamma\Gamma-\Gamma\Gamma,
\end{align}
again with appropriate indices which I don't feel like filling in now.
You can definitely construct scalar functions out of the Riemann curvature by contraction, e.g the Ricci scalar curvature $R=g^{bd}R^a_{\,bad}$, or certain quadratic expressions like $R_{abcd}R^{abcd}$ or $R_{ab}R^{ab}$, where $R_{ab}:=R^{s}_{\,asb}$ is the Ricci curvature tensor. You can consider their derivatives etc. | {
"domain": "physics.stackexchange",
"id": 89516,
"tags": "differential-geometry, tensor-calculus, curvature"
} |
Metrology: How are the most accurate measurements made? | Question: How is a most accurate measurement made when there is no other equipment to verify it? Consider you base your apparatus on a set of theory and assumptions, and the result does not match prediction. How can it be determined if the theoretical basis is incorrect of if the fault lies with the measurement equipment?
Answer: Say there is a physical parameter $x$. We can measure it with many (ideally different) experiments to get a best estimate of $x$ with uncertainty which I'll denote by $\hat{x}_e \pm \delta x_e$. Likewise, we may have a theory which predicts the value $x$ from other physics parameters $\{y_1, \ldots, y_n\}$ each of which may be defined or measured with some uncertainty. This produces a theoretical estimate of $x$ which I denote by $\hat{x}_t \pm \delta x_t$.
If the $n$-sigma (choose your favorite value for $n$, maybe 1, 2, 3, or 5) error bars for $\hat{x}_e$ and $\hat{x}_t$ are non-overlapping then we say the two estimates are not consistent. Note that the definition could work for multiple experiments or multiple theoretical calculations which may be inconsistent.
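The overlap criterion above can be written as a small helper (a sketch; this uses the simple "error bars touch" version of the test):

```python
def consistent(x1, dx1, x2, dx2, n=2):
    # Two estimates are n-sigma consistent when their n-sigma error bars
    # overlap, i.e. the gap between central values is within n*(dx1 + dx2).
    return abs(x1 - x2) <= n * (dx1 + dx2)

# Overlapping 2-sigma intervals: consistent.
assert consistent(10.0, 0.5, 10.8, 0.3)
# Well-separated intervals: inconsistent.
assert not consistent(10.0, 0.1, 10.8, 0.1)
```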
If two estimates of $x$ are inconsistent then there are a few possibilities. The most likely possibility (especially if $n$ is large) is that one of the estimates is incorrect. Either the experiment isn't measuring what it think it's measuring or the theory is missing some other unknown effects. There is another possibility which is that the experiments were just extremely unlucky. If an experiment is performed and the result is plotted with a 5-sigma error bar there is still a tiny chance that the "true" value is not covered by that error bar.
In any case, when such inconsistencies are found in science it is typically an opportunity for more work on both the theoretical and experimental side to improve things or find the source of the discrepancy. A real world example of this is the proton radius puzzle. There I believe at least two experiments are inconsistent, and I'm not sure the state of the theory. I haven't read up on this in a while.
edit: To answer your question directly: Other than double checking work, we can't know if the problem is with the experiment or the theory. We can double check the work done on each end. We can perform new experiments or do different theory calculations to get at the same result. Otherwise we have to just improve both experiment and theory until the discrepancy is resolved, possibly discovering new science along the way. | {
"domain": "physics.stackexchange",
"id": 90048,
"tags": "experimental-physics, measurements, metrology"
} |
Scaffolding a genome with hybrid data | Question: I am assembling a ~500MB genome, and have ~150x long reads and ~200x 150bp PE short reads, with a ~400bp insert size.
I've done a lot of work with minimap+miniasm, and have what I think is a good set of unitigs and a .gfa file from miniasm. Depending on the parameters, I have a few hundred to ~1K unitigs. I can easily polish these unitigs with pilon, racon, and nanopolish, so that they are as error free as possible.
The question I have is: given access to
Polished (or not) unitigs
150x long reads (long enough to span almost all repeats in the genome)
200x 150bp PE short reads (~400bp insert size)
What would you do to scaffold a genome? And what are the pros and cons of your suggested approach?
What I need here is a draft genome. I would prefer a conservative approach that is more fragmented and has fewer misassemblies. So of course one option is just to keep the unitigs and make no attempt at scaffolding. But it does strike me there may be suitably conservative approaches I haven't considered. E.g. perhaps I could use ABYSS to scaffold my polished unitigs using my short-read data, thereby leveraging whatever information I can from the PE short reads?
Answer: Try LR_Gapcloser.
I've used L_RNA_Scaffolder for trying to scaffold a genome (which turned out to be more complex than I had expected). It looks like LR_Gapcloser (written by the same people) is similar, but designed specifically for scaffolding using long-reads.
That page also suggests PBJelly and GMCloser as competing tools. | {
"domain": "bioinformatics.stackexchange",
"id": 314,
"tags": "assembly, long-reads, illumina, scaffold"
} |
Why are wheels made to carry more mass on the circumference? | Question: We assume an external force $F$ parallel to the horizontal surface on the top edge of a cylindrical wheel with radius $r$ and mass $m $ and moment of inertia $I$.
For this cylinder to roll without slipping it should satisfy the condition:
$a = \alpha r$ ————(1)
(where $a$ is the translational acceleration and $\alpha$ is the angular acceleration.)
The friction ($f_s$) acts to balance the changes in a manner so that condition of rolling is met. First, it enhances the net external force ($F + f_s$) and hence the translational acceleration ($a$). Second, it constitutes a torque in anticlockwise direction inducing angular deceleration.
Applying Newton's second law for translation, the linear acceleration of the center of mass is given by :
$ a = \frac{F + f_s}{m}$ ————(2)
Similarly applying Newton's second law for rotation, the angular acceleration of the center of mass is given by:
$\alpha = \frac{r(F - f_s)}{I}$ ————(3)
Combining eqns. (1), (2) and (3), we get the expression for $f_s$:
$f_s = \frac{mr^2 - I}{mr^2 + I}\,F$
The source says:
For ring and hollow cylinder, $I = mr^2$. Thus, friction is zero even for accelerated rolling in the case of these two rigid body. This is one of the reasons that wheels are made to carry more mass on the circumference.
Now the part I don’t understand is that why do we want to reduce the friction here since eqn 2 says that more friction means more horizontal acceleration which is good for wheels I guess. Help will be really appreciated..
Answer: Yes, for your second equation increasing $f_s$ means increasing $a$, but you are blindly applying equations here instead of thinking about the physics.
You have imposed rolling without slipping conditions. Therefore, your equation $f_s = (\frac{mr^2 - I}{mr^2 + I}) * F$ gives you the required static friction force needed to prevent slipping. The larger this value is, the more static friction you need to prevent slipping. So you want this to be smaller. Making this $0$ means that you don't need static friction to prevent slipping, and hence applying your force to the top of the ring is sufficient to cause rolling without slipping. So, by increasing $I$ you are not "decreasing static friction". You are just decreasing what you need the static friction to be so slipping doesn't occur.
As a concrete example, you could push a ring on ice in this manner and there would not be slipping between the ice and the ring; the resulting translational motion from the applied force and resulting rotational motion from the applied torque ends up satisfying the rolling without slipping condition without requiring any additional help from static friction.$^*$
Contrast this to an example where $(\frac{mr^2 - I}{mr^2 + I}) * F>\mu_sN$. Then you can never get rolling without slipping because your required static friction force is larger than the maximum value it can have.
So this means that the more the moment of inertia, the more the "grip" on the road (which is favoured in wheels)?
No. Notice that none of your analysis has taken into account the material properties between the two surfaces. What you are essentially doing in your analysis is finding what static friction needs to be in order for rolling without slipping to occur.
A better way to look at this is to think of rolling without slipping as a "balance" between rotational and translational motion. We need these two types of motion to be related exactly right so that $v_\text{COM}=\omega r$. The moment of inertia is important here because that influences the rotational motion.
Where the "grip" comes into play is when you are comparing the required static friction force to the maximum value it could obtain for the materials in question.
$^*$Note that the same thing can be done with a solid cylinder if you apply the force half way between the center and the edge of the cylinder. In general if you apply a force a distance $\beta R$ (with $0\leq\beta\leq1$) then the required static friction force to prevent slipping is
$$f_s=\frac{\beta mr^2-I}{mr^2+I}F$$ | {
"domain": "physics.stackexchange",
"id": 66641,
"tags": "newtonian-mechanics, rotational-dynamics, rotational-kinematics, rotation"
} |
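The closed-form results in the answer above are easy to spot-check numerically (a sketch; the mass, radius and force values are arbitrary test numbers, not from the original post):

```python
def required_friction(F, m, r, I, beta=1.0):
    # Static friction needed for rolling without slipping when the force is
    # applied a distance beta*r above the center (beta=1 is the top edge).
    return (beta * m * r**2 - I) / (m * r**2 + I) * F

m, r, F = 2.0, 0.5, 10.0
# Ring / hollow cylinder (I = m r^2), force at the top: no friction needed.
assert abs(required_friction(F, m, r, m * r**2)) < 1e-12
# Solid cylinder (I = m r^2 / 2), force at the top: f_s = F / 3.
assert abs(required_friction(F, m, r, m * r**2 / 2) - F / 3) < 1e-12
# Solid cylinder with the force applied halfway up (beta = 1/2): f_s = 0.
assert abs(required_friction(F, m, r, m * r**2 / 2, beta=0.5)) < 1e-12
```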
Front end for system ordering | Question: I am developing a microservice that enables business partners to order systems for common customers:
/*
ordering.js - Terminal ordering JavaScript library.
(C) 2017 HOMEINFO - Digitale Informationssysteme GmbH
This library is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this library. If not, see <http://www.gnu.org/licenses/>.
Maintainer: Richard Neumann <r dot neumann at homeinfo period de>
Requires:
* jquery.js
* sweetalert.js
* homeinfo.js
* his.js
*/
'use strict';
var ordering = ordering || {};
/*
Static globals.
*/
ordering._BASE_URL = 'https://top.secret.com/endpoint';
ordering._ORDERS_ENDPOINT = ordering._BASE_URL + '/order';
ordering._CUSTOMERS_ENDPOINT = ordering._BASE_URL + '/customers';
ordering._CLASSES_ENDPOINT = ordering._BASE_URL + '/classes';
ordering._OSS_ENDPOINT = ordering._BASE_URL + '/oss';
/*
Runtime globals.
*/
ordering.orders = ordering.orders || [];
ordering.customers = ordering.customers || [];
ordering.classes = ordering.classes || [];
ordering.oss = ordering.oss || [];
/*
Displays an error using sweetalert.
*/
ordering.error = function (title, text, type) {
return function () {
swal({
title: title,
text: text,
type: type || 'error'
});
};
};
/*
Queries orders from the API.
*/
ordering._getOrders = function () {
return his.auth.get(ordering._ORDERS_ENDPOINT).then(
function (orders) {
ordering.orders = orders;
},
ordering.error('Fehler.', 'Konnte verfügbare Bestellungen nicht abfragen.')
);
};
/*
Queries customers from the API.
*/
ordering._getCustomers = function () {
return his.auth.get(ordering._CUSTOMERS_ENDPOINT).then(
function (customers) {
ordering.customers = customers;
},
ordering.error('Fehler.', 'Konnte verfügbare Kunden nicht abfragen.')
);
};
/*
Queries classes from the API.
*/
ordering._getClasses = function () {
return his.auth.get(ordering._CLASSES_ENDPOINT).then(
function (classes) {
ordering.classes = classes;
},
ordering.error('Fehler.', 'Konnte verfügbare Terminal-Klassen nicht abfragen.')
);
};
/*
Queries operating systems from the API.
*/
ordering._getOSs = function () {
return his.auth.get(ordering._OSS_ENDPOINT).then(
function (oss) {
ordering.oss = oss;
},
ordering.error('Fehler.', 'Konnte verfügbare Betriebssysteme nicht abfragen.')
);
};
/*
Performs a HIS login.
*/
ordering._login = function () {
var queryString = new homeinfo.QueryString();
return his.session.login(queryString.userName, queryString.passwd);
};
/*
Initializes the page.
*/
ordering.init = function () {
ordering._login().then(ordering._getData).then(ordering._initPage);
};
/*
Submit form.
*/
ordering._submitForm = function (event) {
event.preventDefault(); // Prevent the browser's default form submission.
his.auth.post(ordering._ORDERS_ENDPOINT, jQuery('#orders').serialize()).then(ordering._resetForm);
};
/*
Resets the form.
*/
ordering._resetForm = function () {
document.getElementById('orders').reset();
ordering.init();
};
/*
Initialization function.
*/
ordering._getData = function () {
var getOrders = ordering._getOrders();
var getCustomers = ordering._getCustomers();
var getClasses = ordering._getClasses();
var getOSs = ordering._getOSs();
return Promise.all([getOrders, getCustomers, getClasses, getOSs]);
};
/*
Refreshes the orders list.
*/
ordering._renderList = function () {
var tableContainer = document.getElementById('tableContainer');
tableContainer.innerHTML = '';
tableContainer.appendChild(ordering._dom.list());
};
/*
Renders the orders input.
*/
ordering._renderOrders = function () {
var formContainer = document.getElementById('formContainer');
formContainer.innerHTML = '';
formContainer.appendChild(ordering._dom.orders());
};
/*
Initializes the page.
*/
ordering._initPage = function () {
ordering._renderList();
ordering._renderOrders();
jQuery('#orders').submit(ordering._submitForm);
};
/*
Deletes the respective order.
*/
ordering._deleteOrder = function (id) {
return his.auth.delete(ordering._ORDERS_ENDPOINT + '/' + id).then(ordering._getData).then(ordering._renderList);
};
/*
Adds an order.
*/
ordering._addOrderForm = function () {
var orderForms = document.getElementById('orderForms');
var order = ordering._dom.order(true);
orderForms.appendChild(order);
};
/*
Deletes the respective order form.
*/
ordering._deleteOrderForm = function (index) {
var orderForms = document.getElementById('orderForms');
var order = document.getElementById('order_' + index);
orderForms.removeChild(order);
};
/*
DOM model.
*/
ordering._dom = ordering._dom || {};
ordering._dom.COLUMNS = [
'Konto', 'Bestelldatum', 'Bearbeitungsstatus', 'Kunde', 'Klasse', 'Betriebssystem', 'Adresse', 'Anmerkung',
'Stornieren'];
/*
ID of orders form.
*/
ordering._dom.ORDERS = 0;
/*
Builds an order set.
*/
ordering._dom.orders = function () {
var form = document.createElement('form');
form.setAttribute('id', 'orders');
var orderForms = document.createElement('div');
orderForms.setAttribute('id', 'orderForms');
orderForms.setAttribute('class', 'row row-center');
var firstOrder = ordering._dom.order();
orderForms.appendChild(firstOrder);
form.appendChild(orderForms);
form.appendChild(ordering._dom.buttonAddOrderForm());
form.appendChild(document.createTextNode(' '));
form.appendChild(ordering._dom.submit());
return form;
};
/*
Creates a button to add an order form input for a further terminal.
*/
ordering._dom.buttonAddOrderForm = function () {
var button = document.createElement('button');
button.setAttribute('class', 'btn btn-success');
button.setAttribute('onclick', 'ordering._addOrderForm();');
button.textContent = 'Weiteres Terminal hinzufügen.';
return button;
};
/*
Creates a button to delete the respective order inputs.
*/
ordering._dom.buttonDeleteOrderForm = function (index) {
var buttonDeleteOrderForm = document.createElement('button');
buttonDeleteOrderForm.setAttribute('onclick', 'ordering._deleteOrderForm(' + index + ');');
buttonDeleteOrderForm.setAttribute('class', 'btn btn-danger');
buttonDeleteOrderForm.textContent = 'X';
return buttonDeleteOrderForm;
};
/*
Creates a button to delete the respective order.
*/
ordering._dom.buttonDeleteOrder = function (id) {
var button = document.createElement('button');
button.setAttribute('class', 'btn btn-danger');
button.setAttribute('onclick', 'ordering._deleteOrder(' + id + ');');
button.textContent = 'X';
return button;
};
/*
Creates the title for the terminal order.
*/
ordering._dom.caption = function (index, additional) {
var caption = document.createElement('legend');
var textNode = document.createTextNode('Terminal ');
caption.appendChild(textNode);
if (additional) {
caption.appendChild(ordering._dom.buttonDeleteOrderForm(index));
}
return caption;
};
/*
Builds a single order.
*/
ordering._dom.order = function (additional) {
if (additional == null) {
additional = false;
}
var index = ordering._dom.ORDERS;
ordering._dom.ORDERS += 1;
var order = document.createElement('fieldset');
order.setAttribute('id', 'order_' + index);
order.setAttribute('class', 'row');
var caption = ordering._dom.caption(index, additional);
order.appendChild(caption);
var columnSelects = document.createElement('div');
columnSelects.setAttribute('class', 'col-md-6');
var customerSelect = ordering._dom.customerSelect(index);
columnSelects.appendChild(customerSelect);
var classSelect = ordering._dom.classSelect(index);
columnSelects.appendChild(classSelect);
var osSelect = ordering._dom.osSelect(index);
columnSelects.appendChild(osSelect);
var columnAddress = document.createElement('div');
columnAddress.setAttribute('class', 'col-md-6');
var address = ordering._dom.address(index);
columnAddress.appendChild(address);
order.append(columnSelects);
order.append(columnAddress);
return order;
};
/*
Builds a default option for a dropdown list.
*/
ordering._dom.defaultOption = function (text) {
var defaultOption = document.createElement('option');
var attributeSelected = document.createAttribute('selected');
defaultOption.setAttributeNode(attributeSelected);
var attributeDisabled = document.createAttribute('disabled');
defaultOption.setAttributeNode(attributeDisabled);
defaultOption.textContent = text;
return defaultOption;
};
/*
Builds the customer selection dropdown menu.
*/
ordering._dom.customerSelect = function (index) {
var select = document.createElement('select');
select.setAttribute('required', true);
select.setAttribute('name', 'customer_' + index);
select.setAttribute('class', 'form-control');
var defaultOption = ordering._dom.defaultOption('Kunde*');
select.appendChild(defaultOption);
for (var i = 0; i < ordering.customers.length; i++) {
var customer = ordering.customers[i];
var option = document.createElement('option');
option.setAttribute('value', customer.id);
option.textContent = customer.company.name;
select.appendChild(option);
}
return select;
};
/*
Builds the class selection dropdown menu.
*/
ordering._dom.classSelect = function (index) {
var select = document.createElement('select');
select.setAttribute('required', true);
select.setAttribute('name', 'class_' + index);
select.setAttribute('class', 'form-control');
var defaultOption = ordering._dom.defaultOption('Klasse*');
select.appendChild(defaultOption);
for (var i = 0; i < ordering.classes.length; i++) {
var class_ = ordering.classes[i];
var option = document.createElement('option');
option.setAttribute('value', class_.id);
option.textContent = class_.full_name;
select.appendChild(option);
}
return select;
};
/*
Builds the operating system selection dropdown menu.
*/
ordering._dom.osSelect = function (index) {
var select = document.createElement('select');
select.setAttribute('required', true);
select.setAttribute('name', 'os_' + index);
select.setAttribute('class', 'form-control');
var defaultOption = ordering._dom.defaultOption('Betriebssystem*');
select.appendChild(defaultOption);
for (var i = 0; i < ordering.oss.length; i++) {
var os = ordering.oss[i];
var option = document.createElement('option');
option.setAttribute('value', os.id);
option.textContent = os.name + ' ' + os.version;
select.appendChild(option);
}
return select;
};
/*
Builds the address input fields.
*/
ordering._dom.address = function (index) {
var address = document.createElement('fieldset');
address.setAttribute('class', 'form-group');
var street = document.createElement('input');
street.setAttribute('type', 'text');
street.setAttribute('required', true);
street.setAttribute('name', 'street_' + index);
street.setAttribute('placeholder', 'Straße*');
address.appendChild(street);
var houseNumber = document.createElement('input');
houseNumber.setAttribute('type', 'text');
houseNumber.setAttribute('required', true);
houseNumber.setAttribute('name', 'houseNumber_' + index);
houseNumber.setAttribute('placeholder', 'Hausnummer*');
address.appendChild(houseNumber);
var zipCode = document.createElement('input');
zipCode.setAttribute('type', 'text');
zipCode.setAttribute('required', true);
zipCode.setAttribute('name', 'zipCode_' + index);
zipCode.setAttribute('placeholder', 'PLZ*');
address.appendChild(zipCode);
var city = document.createElement('input');
city.setAttribute('type', 'text');
city.setAttribute('required', true);
city.setAttribute('name', 'city_' + index);
city.setAttribute('placeholder', 'Ort*');
address.appendChild(city);
var annotation = document.createElement('input');
annotation.setAttribute('type', 'text');
annotation.setAttribute('name', 'annotation_' + index);
annotation.setAttribute('placeholder', 'Anmerkung');
address.appendChild(annotation);
return address;
};
/*
Builds a submit button.
*/
ordering._dom.submit = function () {
var submit = document.createElement('input');
submit.setAttribute('type', 'submit');
submit.setAttribute('value', 'Kostenpflichtig bestellen.');
submit.setAttribute('class', 'btn btn-primary');
return submit;
};
/*
Builds the list of existing orders.
*/
ordering._dom.list = function () {
var table = document.createElement('table');
table.setAttribute('id', 'list');
table.setAttribute('class', 'table table-hover');
var thead = document.createElement('thead');
var tr = document.createElement('tr');
var th;
for (var i = 0; i < ordering._dom.COLUMNS.length; i++) {
th = document.createElement('th');
th.setAttribute('scope', 'col');
th.textContent = ordering._dom.COLUMNS[i];
tr.appendChild(th);
}
thead.appendChild(tr);
var tbody = document.createElement('tbody');
var empty = true;
for (var attr in ordering.orders) {
if (ordering.orders.hasOwnProperty(attr)) {
empty = false;
tbody.appendChild(ordering._dom.listItem(ordering.orders[attr]));
}
}
if (empty) {
var h2 = document.createElement('h2');
h2.setAttribute('align', 'center');
h2.textContent = 'Keine Bestellungen vorhanden.';
return h2;
}
table.appendChild(thead);
table.appendChild(tbody);
return table;
};
/*
Builds the list items for each existing order.
*/
ordering._dom.listItem = function (order) {
var row = document.createElement('tr');
var columnAccount = document.createElement('td');
columnAccount.textContent = order.account.name;
row.appendChild(columnAccount);
var columnIssued = document.createElement('td');
columnIssued.textContent = order.issued;
row.appendChild(columnIssued);
var columnAccepted = document.createElement('td');
var notProcessedYet = 'Noch nicht bearbeitet.';
switch (order.accepted) {
case null:
columnAccepted.textContent = notProcessedYet;
break;
case undefined:
columnAccepted.textContent = notProcessedYet;
break;
case true:
columnAccepted.textContent = 'Angelegt.';
break;
case false:
columnAccepted.textContent = 'Auftrag abgewiesen.';
break;
}
row.appendChild(columnAccepted);
var columnCustomer = document.createElement('td');
columnCustomer.textContent = order.customer.company.name + ' (' + order.customer.id + ')';
row.appendChild(columnCustomer);
var columnClass = document.createElement('td');
columnClass.textContent = order.class_.name;
row.appendChild(columnClass);
var columnOs = document.createElement('td');
columnOs.textContent = order.os.name + ' ' + order.os.version;
row.appendChild(columnOs);
var columnAddress = document.createElement('td');
var address = order.address;
columnAddress.textContent = address.street + ' ' + address.house_number + ', ' + address.zip_code + ' ' + address.city;
row.appendChild(columnAddress);
var columnAnnotation = document.createElement('td');
columnAnnotation.textContent = order.annotation || '-';
row.appendChild(columnAnnotation);
var columnDelete = document.createElement('td');
columnDelete.appendChild(ordering._dom.buttonDeleteOrder(order.id));
row.appendChild(columnDelete);
return row;
};
jQuery(document).ready(ordering.init);
The corresponsing HTML document is:
<!DOCTYPE HTML>
<head>
<meta charset="utf-8"/>
<title>Terminal Bestellung</title>
<link rel="stylesheet" href="https://libraries.homeinfo.de/bootstrap/latest/css/bootstrap.min.css">
<link rel="stylesheet" href="https://fonts.googleapis.com/icon?family=Material+Icons">
<link rel="stylesheet" href="https://libraries.homeinfo.de/sweetalert/dist/sweetalert.css">
<script src="https://libraries.homeinfo.de/jquery/jquery-latest.min.js"></script>
<script src="https://libraries.homeinfo.de/bootstrap/latest/js/bootstrap.min.js"></script>
<script src="https://libraries.homeinfo.de/sweetalert/dist/sweetalert.min.js"></script>
<script src="https://javascript.homeinfo.de/homeinfo.min.js"></script>
<script src="https://javascript.homeinfo.de/his/his.js"></script>
<script src="https://javascript.homeinfo.de/his/session.js"></script>
<script src="ordering.js"></script>
</head>
<body>
<h1 align="center">Ihre Bestellungen bei HOMEINFO</h1>
<br>
<div class="container">
<div id="tableContainer" class="row"></div>
<br>
<div id="formContainer" class="row"></div>
</div>
</body>
I'd like to have general feedback especially on the JavaScript code.
Answer: If you ask for specific questions, we can provide better feedback.
A few thoughts:
if you're including jQuery, go ahead and use it for DOM generation, as it will be much more concise and readable, instead of all the bulky createElement, setAttribute code. And the event handlers can be done through jQuery a little nicer.
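To illustrate the first point with a sketch: `inputHtml` below is a hypothetical stand-in showing the attribute-map idea in plain JavaScript (so it runs standalone); with jQuery loaded, the commented one-liner does the same thing against the real DOM.

```javascript
// Hypothetical helper: render an <input> HTML string from an attribute map,
// showing how a single map replaces repeated createElement/setAttribute calls.
// A boolean `true` becomes a bare attribute (like `required`).
function inputHtml(attrs) {
    var parts = Object.entries(attrs).map(function (pair) {
        return pair[1] === true ? pair[0] : pair[0] + '="' + pair[1] + '"';
    });
    return '<input ' + parts.join(' ') + '>';
}

// With jQuery loaded, the equivalent against the real DOM is a one-liner:
//   $('<input>', { type: 'text', required: true,
//                  name: 'city_' + index, placeholder: 'Ort*' }).appendTo(address);

console.log(inputHtml({ type: 'text', required: true, name: 'city_0', placeholder: 'Ort*' }));
```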
if you're allowed, you have good names for the functions in the comments, and I'd use them. For example, listItem would be more clearly buildListItem; ordering._dom.list would probably be better buildExistingOrdersTable (it's not a list!). I would find buildSubmitButton clearer than ordering._dom.submit. | {
"domain": "codereview.stackexchange",
"id": 31627,
"tags": "javascript, user-interface"
} |
Computing 'Robustness of Magic' of $n$-bit W states | Question: Question
What is the asymptotic robustness-of-magic of a $W$ state over $n$ qubits. Is it $\Theta(n)$? $\Omega(\sqrt{n})$? $O\left(\frac{n}{\lg n} \right)$?
Background
$W$ states are entangled states where exactly one qubit is ON and every qubit has an equal chance of being ON if measured. For example, $W_3 = \frac{1}{\sqrt{3}} \left( |100\rangle + |010\rangle + |001\rangle \right)$.
In the paper "Application of a resource theory for magic states to fault-tolerant quantum computing", Howard and Campbell associate a quantum state $\rho$ with a cost $R(\rho)$ called "robustness of magic" and defined as:
$$R(\rho) = \min_x \sum_i |x_i|; \rho = \sum_i x_i \sigma_i $$
Where each $\sigma_i$ is a state that can be prepared by a stabilizer circuit (i.e. using only H, S, CNOT, and measurement gates). In other words, you try to find a weighted combination of stabilizer states that produce your desired state, and the cost you minimize while doing so is the sum of the weights' magnitudes. The minimum such cost is the robustness of magic of the state.
What I've Tried
The best-scoring decomposition I've found, for the $n$-qubit $W$ state, is:
$$s_{j,k} = X_j \cdot \text{CNOT}_{j\rightarrow k} \cdot H_j \cdot |0\rangle$$
$$S_{j,k} = s_{j,k} \cdot s_{j,k}^\dagger$$
$$W_n = \left(\sum_{j=0}^{n-1} \sum_{k=j+1}^{n-1} \frac{1}{n} S_{j,k} \right) + \left(\sum_{j=0}^{n-1} \frac{2-n}{n}|j\rangle \langle j| \right)$$
Which achieves a cost of $\frac{1}{n} \cdot \frac{n(n-1)}{2} + \frac{n-2}{n}\cdot n = \frac{3}{2}n-\frac{5}{2}$.
The best circuit construction I know for producing a large W state has a T-count that also scales like $\Theta(n)$. For example, here is an example construction that shows how to create a $W_n$ state using $2n-4$ T gates if $n$ is a power of 2:
Note that, if you want to compare the $2n-4$ T-count to the $\frac{3}{2}n - \frac{5}{2}$ potential-robustness-of-magic of the best-I-found decomposition, you must account for the fact that a $|T\rangle$ state's robustness-of-magic is $\sqrt{2}$ (not $1$).
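As a sanity check on the $\sqrt{2}$ figure quoted above, here is a hedged sketch (the particular weights are my own, not taken from the paper) that verifies one explicit stabilizer decomposition of the single-qubit $|T\rangle$ state whose weights have total magnitude $\sqrt{2}$. This certifies only the upper bound $R(T) \leq \sqrt{2}$; the matching lower bound is from Howard and Campbell.

```python
# Single-qubit states as (amplitude of |0>, amplitude of |1>).
import cmath

def proj(state):
    """Rank-1 density matrix |s><s| as a nested 2x2 list."""
    a, b = state
    return [[a * a.conjugate(), a * b.conjugate()],
            [b * a.conjugate(), b * b.conjugate()]]

s = 2 ** -0.5
plus, minus, plus_i = (s, s), (s, -s), (s, 1j * s)   # stabilizer states
t_state = (s, s * cmath.exp(1j * cmath.pi / 4))      # magic |T> state

# One explicit decomposition rho_T = sum_i x_i sigma_i with sum_i |x_i| = sqrt(2):
decomp = [(0.5, plus), ((1 - 2 ** 0.5) / 2, minus), (2 ** -0.5, plus_i)]

total = [[0, 0], [0, 0]]
for w, st in decomp:
    p = proj(st)
    for i in range(2):
        for j in range(2):
            total[i][j] += w * p[i][j]

rho_t = proj(t_state)
err = max(abs(total[i][j] - rho_t[i][j]) for i in range(2) for j in range(2))
cost = sum(abs(w) for w, _ in decomp)
print(err < 1e-12, abs(cost - 2 ** 0.5) < 1e-12)
```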
Now, obviously, trying to find better and better decompositions is a reasonable way to upper-bound the robustness of magic... but this strategy will never create a lower-bound. What kinds of strategies do work for making lower bounds on this kind of problem?
Answer: Here is a proof that $R(W_n) \in \Omega(\sqrt[4]{n})$.
Given a $W_n$ state, it is possible to create $\lg n$ T states with Clifford operations and a single additional T gate. Basically:
Xor together all the qubits whose binary index has an odd number of 1s, apply a T gate to that xor'd value, then unxor.
Apply an S gate to every qubit whose binary index has a number of 1s that's 2 or 3 more than a multiple of 4.
Apply a Z gate to every qubit whose binary index has a number of 1s that's 4, 5, 6, or 7 more than a multiple of 8.
Compress the unary $W_n$ state down into a binary register.
Here's an example circuit turning $W_8$ into 3 $|T\rangle$ states:
I also tested turning $W_{16}$ into 4 $|T\rangle$ states.
We can produce $\lg n$ T states from a $W_n$ state and a single $T$ state. It must always be the case that an output's robustness of magic is less than or equal to the input's. Therefore:
$$R(W_n) \cdot R(T) \geq R(W_n \otimes T) \geq R(T^{\otimes \lg n})$$
In the paper "Application of a resource theory for magic states to fault-tolerant quantum computing" it is proven (see page 3) that:
$$R(T^{\otimes k}) \in \Omega(1.2^k)$$
Meaning $R(W_n) \cdot R(T)$ is greater than or equal to something asymptotically lowerbounded by $1.2^{\lg n}$. Since $R(T)$ is a constant, we can ignore it and deduce:
$$\begin{align}
R(W_n) &\in \Omega(1.2^{\lg n})
\\&= \Omega(n^{\lg 1.2})
\\&\subset \Omega(n^{0.263})
\\&\subset \Omega(\sqrt[4]{n})
\end{align}$$
Therefore $R(W_n)$ is asymptotically somewhere between $\sqrt[4]{n}$ (by synthesizing into Ts) and $n$ (via the decomposition I showed in the question). | {
"domain": "cstheory.stackexchange",
"id": 4319,
"tags": "quantum-information, asymptotics"
} |
Why does this block move backwards? | Question: In the diagram given below, a mass $m_1$ is placed on an inclined wedge of mass $m_2$. The question asks for the distance moved by the wedge when $m_1$ reaches the lowest point. The given solution says that, since there is no external force on the system, the center of mass doesn't move. My question is: why is there no external force? Block $m_1$ experiences a force $mg\sin x$ acting down the incline, and I don't know what this force is canceled by if I take the complete two-block system. Could someone also help me understand why the wedge moves backward? I don't see any force acting in the backward direction arising from $mg\sin x$. Any help would be appreciated.
Answer: As this is a pretty standard homework problem, I will only attempt to answer the conceptual question Why does the wedge move backwards?
Forces are responsible for changes in momentum. However, in this problem, there is only one external force acting on the two-block system, which is gravity. All the other forces are internal (the normal force of one block on the other, and so on). The external force of gravity acts only in the $y$ (vertical) direction, and there is thus no external force acting along the $x$ (horizontal) direction. As a result, the net momentum along $x$ is conserved.
I imagine the block is released from rest at the top of the wedge. In this case, the total momentum (and thus the $x-$component of the momentum as well) is zero. As the block begins to slide down the wedge, it is forced by the wedge to move along the angle $\theta$ and so it will have a velocity (and thus momentum) in the $x-$direction (in addition to the velocity in the $y$ direction).
However, we know that the net momentum of the wedge and the block in the $x$ direction must be zero as it is conserved, and so the wedge must also move in the opposite direction of the block in order to conserve the $x-$ component of the momentum.
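In symbols (a sketch of the argument above, taking rightward as positive $x$ and the system released from rest): since $p_x$ is conserved and initially zero,
$$m_1 v_{1,x} + m_2 v_{2,x} = 0 \quad\Longrightarrow\quad v_{2,x} = -\frac{m_1}{m_2}\,v_{1,x},$$
and integrating over the descent gives $m_1\,\Delta x_1 + m_2\,\Delta x_2 = 0$, so the wedge's displacement is opposite in direction to the block's.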
The same argument can be used to describe the motion of the centre of mass of the system: the centre of mass will indeed "fall" as an object would under gravity, but it will not move along $x$ since there is no net force along $x$. | {
"domain": "physics.stackexchange",
"id": 71249,
"tags": "homework-and-exercises, newtonian-mechanics, forces, free-body-diagram"
} |
Guessing a low entropy value in multiple attempts | Question: Suppose Alice has a distribution $\mu$ over a finite (but possibly very large) domain, such that the (Shannon) entropy of $\mu$ is upper bounded by an arbitrarily small constant $\varepsilon$. Alice draws a value $x$ from $\mu$, and then asks Bob (who knows $\mu$) to guess $x$.
What is the success probability for Bob? If he is only allowed one guess, then one can lower bound this probability as follows: the entropy upper bounds the min-entropy, so there is an element that has probability of at least $2^{-\varepsilon}$. If Bob chooses this element as his guess, his success probability will be $2^{-\varepsilon}$.
Now, suppose that Bob is allowed to make multiple guesses, say $t$ guesses, and Bob wins if one of his guesses is correct. Is there a guessing scheme that improves Bob's success probability? In particular, is it possible to show that Bob's failure probability decreases exponentially with $t$?
Answer: Bob's best bet is to guess the $t$ values with largest probability.
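A sketch of that strategy (hypothetical helper, with the distribution $\mu$ given as a list of probabilities): with $t$ guesses Bob names the $t$ most probable values, and he fails exactly when $x$ lands in the remaining tail.

```python
def failure_probability(mu, t):
    """Bob's failure probability when he guesses the t most probable values.

    mu: the distribution as a list of outcome probabilities (summing to 1).
    """
    top_t = sorted(mu, reverse=True)[:t]
    return 1.0 - sum(top_t)

# A low-entropy example distribution over four outcomes:
mu = [0.7, 0.1, 0.1, 0.1]
for t in range(1, 5):
    print(t, failure_probability(mu, t))
```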
If you're willing to use Rényi entropy instead, Proposition 17 in Boztaş' Entropies, Guessing and Cryptography states that the error probability after $t$ guesses is at most
$$ 1 - 2^{-H_2(\mu)\left(1-\frac{\log t}{\log n}\right)} \approx \ln 2 \left(1-\frac{\log t}{\log n}\right) H_2(\mu), $$
where $n$ is the size of the domain. Granted, the dependency on $t$ is pretty bad, and perhaps Boztaş was focused on a different regime of the entropy.
For the Shannon entropy, you can try to solve the dual optimization problem: given a fixed failure probability $\delta$, find the maximal entropy of such a distribution. Using the convexity of $-x\log x$, we know that the distribution $\mu$ has the form $a,b,\ldots,b;b,\ldots,b,c$, where $a\geq b\geq c$, $a+(t-1)b = 1-\delta$, and $c = \delta-\lfloor\frac{\delta}{b}\rfloor b$. We have $t-1+\lfloor\frac{\delta}{b}\rfloor$ values that get probability $b$. Conditioning on $s = \lfloor\frac{\delta}{b}\rfloor$, we can try to find $b$ which minimizes the entropy. For the correct value of $s$, this will be an internal point (at which the derivative vanishes). I'm not sure how to get asymptotic estimates using this approach. | {
"domain": "cstheory.stackexchange",
"id": 1721,
"tags": "it.information-theory, pr.probability"
} |
My rosbag info shows messages, but my ros msgs are empty | Question:
Below is my code snippet to record rostopics using the rosbag API. My messages are empty when I do a rostopic echo. I am not sure what is happening.
My bag is being recorded, and
rosbag info 17Apr17.bag gives me this output:
path: 17Apr17.bag
version: 2.0
duration: 8.9s
start: Apr 17 2017 18:57:55.21 (1492469875.21)
end: Apr 17 2017 18:58:04.08 (1492469884.08)
size: 1022.8 KB
messages: 10656
compression: none [2/2 chunks]
types: baxter_core_msgs/EndEffectorState [ade777f069d738595bc19e246b8ec7a0]
sensor_msgs/Image [060021388200f6f0f447d0fcd9c64743]
sensor_msgs/JointState [3066dcd76a6cfaef579bd0f34173e9fd]
tf2_msgs/TFMessage [94810edda583a504dfda3829e70d7eec]
topics: /camera/depth/image_raw 888 msgs : sensor_msgs/Image
/camera/rgb/image_color 888 msgs : sensor_msgs/Image
/cameras/left_hand_camera/image 888 msgs : sensor_msgs/Image
/cameras/right_hand_camera/image 888 msgs : sensor_msgs/Image
/robot/end_effector/left_gripper/state 888 msgs : baxter_core_msgs/EndEffectorState
/robot/end_effector/right_gripper/state 888 msgs : baxter_core_msgs/EndEffectorState
/robot/joint_states 888 msgs : sensor_msgs/JointState
/softkinetic_left/depth/image_raw 888 msgs : sensor_msgs/Image
/softkinetic_left/rgb/image_color 888 msgs : sensor_msgs/Image
/softkinetic_right/depth/image_raw 888 msgs : sensor_msgs/Image
/softkinetic_right/rgb/image_color 888 msgs : sensor_msgs/Image
tf 888 msgs : tf2_msgs/TFMessage
CODE:
#!/usr/bin/env python
import rosbag
import rospy
from random import randint
from tf2_msgs.msg import TFMessage
from baxter_core_msgs.msg import DigitalIOState
from std_msgs.msg import String
from sensor_msgs.msg import Image
from sensor_msgs.msg import JointState
from baxter_core_msgs.msg import EndEffectorState
from create_folder import FileRead

flag = 1
print flag
tfMsg = TFMessage()
num = FileRead()
print num[0]
bag = rosbag.Bag("/home/dhiraj/ros_ws/src/baxter_examples/scripts/"+str(num[0])+".bag", 'w')

# Flag = 0 will not record (callbackStop) the file and Flag = 1 will record the file
# I need to record the rosbag at the click of the buttons, hence this setup
def callbackStop(data):
    if data.state:
        global flag
        if flag == 1:
            print "closing bag"
            flag = 0
            print "closing bag with flag", flag
            bag.close()

def callbackRestart(data):
    if data.state == 0:
        print "ENTERING CALLBACK"
        global flag
        if flag == 1:
            print "RECORDING NEW BAG"
            bag.write("tf", tfMsg)
            bag.write("/cameras/right_hand_camera/image", Image())
            bag.write("/cameras/left_hand_camera/image", Image())
            bag.write("/robot/joint_states", JointState())
            bag.write("/robot/end_effector/right_gripper/state", EndEffectorState())
            bag.write("/robot/end_effector/left_gripper/state", EndEffectorState())
            bag.write("/softkinetic_left/rgb/image_color", Image())
            bag.write("/softkinetic_right/rgb/image_color", Image())
            bag.write("/softkinetic_left/depth/image_raw", Image())
            bag.write("/softkinetic_right/depth/image_raw", Image())
            bag.write("/camera/rgb/image_color", Image())
            bag.write("/camera/depth/image_raw", Image())

def tf_info():
    rospy.init_node("get_tf")
    rightStart = rospy.Subscriber("/robot/digital_io/right_shoulder_button/state", DigitalIOState, callbackRestart)
    bagClosRight = rospy.Subscriber("/robot/digital_io/right_itb_button2/state", DigitalIOState, callbackStop)
    bagCloseLeft = rospy.Subscriber("/robot/digital_io/left_itb_button2/state", DigitalIOState, callbackStop)
    leftRestart = rospy.Subscriber("/robot/digital_io/left_itb_button1/state", DigitalIOState, callbackSetFlag)
    rightRestart = rospy.Subscriber("/robot/digital_io/right_itb_button1/state", DigitalIOState, callbackSetFlag)
    rospy.spin()

if __name__ == '__main__':
    tf_info()
Originally posted by Joy16 on ROS Answers with karma: 112 on 2017-04-18
Post score: 0
Original comments
Comment by Thomas D on 2017-04-18:
What is your question?
Comment by Joy16 on 2017-04-18:
My rosbag messages contain no data. All fields in the ros messages are empty
Comment by Thomas D on 2017-04-18:
How are you playing back your bag files? How did you determine that your messages are empty? Are you setting the parameter use_sim_time to true?
Comment by Joy16 on 2017-04-18:
Ya. I set rosparam set use_sim_time true. I am playing my rosbag file as rosbag play --clock 17Apr18.bag -l. My messages are empty, as a rostopic echo on any topic gives me empty fields.
header:
  seq: 0
  stamp:
    secs: 0
    nsecs: 0
  frame_id: ''
data: []
Answer:
You are writing empty message objects into your bag file:
print "RECORDING NEW BAG"
bag.write("tf",tfMsg)
bag.write("/cameras/right_hand_camera/image",Image())
bag.write("/cameras/left_hand_camera/image",Image())
bag.write("/robot/joint_states",JointState())
bag.write("/robot/end_effector/right_gripper/state",EndEffectorState())
bag.write("/robot/end_effector/left_gripper/state",EndEffectorState())
bag.write("/softkinetic_left/rgb/image_color",Image())
bag.write("/softkinetic_right/rgb/image_color",Image())
bag.write("/softkinetic_left/depth/image_raw",Image())
bag.write("/softkinetic_right/depth/image_raw",Image())
bag.write("/camera/rgb/image_color",Image())
bag.write("/camera/depth/image_raw",Image())
bag.write() writes the message object that you give it to the bag file with the specific topic name. Since you're supplying default-constructed messages (which are empty), that's what it is writing to the bag file.
If you want to subscribe to a topic and record the messages on that topic to a bag, you need to either call the rosbag command-line tool from your program and let it subscribe, or create subscribers on each of the topics that you care about and then actually write the received messages to the bag file.
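A hedged sketch of the second option (topic names mirror the question; the ROS wiring in the comment assumes a ROS1 environment and is not verified here):

```python
def make_writer(bag, topic):
    """Return a subscriber callback that writes every *received* message
    into `bag` under `topic` (instead of a default-constructed message)."""
    def callback(msg):
        bag.write(topic, msg)
    return callback

# In a real node the wiring would look roughly like this (not executed here):
#   import rospy, rosbag
#   from sensor_msgs.msg import Image
#   bag = rosbag.Bag('out.bag', 'w')
#   rospy.init_node('recorder')
#   for topic in ('/camera/rgb/image_color', '/camera/depth/image_raw'):
#       rospy.Subscriber(topic, Image, make_writer(bag, topic))
#   rospy.on_shutdown(bag.close)  # close only after the callbacks stop firing
#   rospy.spin()
```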
Originally posted by ahendrix with karma: 47576 on 2017-04-18
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Joy16 on 2017-04-18:
Hi!
Thanks a lot! I am sorry, I had not posted the latest code. I am subscribing to these topics, but I am not sure how to pass them to bag.write()
Comment by Joy16 on 2017-04-18:
#This is giving me stop_writing_chunk error
def callbackt(data):
    bag.write("/cam", data)

def callbackRestart(data):
    if data.state == 0:
        global flag_bag
        if flag_bag == 1:
            rospy.Subscriber("/cam", Image, callback)
Comment by ahendrix on 2017-04-18:
You need to shut down the subscribers that write to the bag before you close the bag file.
Comment by ahendrix on 2017-04-18:
You should probably set up your subscribers once (instead of every time you get callbackRestart); note that because subscriber callbacks happen in a separate thread, the if flag_bag == 1: won't prevent callbacks. | {
"domain": "robotics.stackexchange",
"id": 27633,
"tags": "ros, rosbag, rosbag-record"
} |
Difference between rulefit & skope-rules Python Packages | Question: RuleFit is an implementation of a rule-based prediction algorithm based on the rulefit algorithm from Friedman and Popescu (PDF).
skope-rules
Skope-rules is a Python machine learning module built on top of scikit-learn and distributed under the 3-Clause BSD license.
Both the packages generate rules. Do both the packages rely on The RuleFit algorithm by Friedman and Popescu (2008), and implement the same algorithm?
Answer: In the Skope rules poster: http://2018.ds3-datascience-polytechnique.fr/wp-content/uploads/2018/06/DS3-309.pdf
The authors themselves draw the distinction:
However, our approach mainly differs in the way that decision rules are chosen: semantic deduplication based on variables composing each rule as opposed to L1-based feature selection (RuleFit).
In my personal opinion, having looked at the semantic deduplication code, I would be quite careful with it, as it makes some strong modifications to the logical rules by default.
"domain": "datascience.stackexchange",
"id": 9343,
"tags": "machine-learning, statistics, algorithms"
} |
Why don't I have a 1-in-2 chance of getting rained on if the forecast says 50% probability of precipitation? | Question: I'm trying to understand what weather forecasts mean more precisely. As I understand it from reading Wikipedia, blogs, etc., the percentage value for rain/precipitation that you see in a forecast is technically called the "probability of precipitation". To quote the National Weather Service webpage:
"Mathematically, PoP is defined as follows: PoP = C x A where "C" = the confidence that precipitation will occur somewhere in the forecast area, and where "A" = the percent of the area that will receive measureable precipitation, if it occurs at all.
So... in the case of the forecast above, if the forecaster knows precipitation is sure to occur ( confidence is 100% ), he/she is expressing how much of the area will receive measurable rain. ( PoP = "C" x "A" or "1" times ".4" which equals .4 or 40%.)"
This definition does not seem well stated to me, for the reason that "confidence" is (presumably) not uniform across an area. For instance, the phrase "how much of the area will receive measurable rain" seems odd, since a forecast would (presumably) only be able to give a probabilistic estimate for this area.
Let's cook up an example. Consider a town (the forecast area) consisting of two parts of equal area (called north side and south side). Each point in the north side will be rained on with 100% probability tomorrow, and each point in the south side will be rained on with 50% probability (at every point) tomorrow. What is the PoP in this example? At face value, the definition could be interpreted as implying that the PoP is 100%, since precipitation will occur somewhere in the forecast area. However, this value seems intuitively unsatisfying, since some people might not get any rain.
Here's what I might expect a more precise definition to be. If $A$ is the area and $C(x)$ is the "pointwise confidence function" depending on a point (location) $x$, then define
$$PoP = \frac{1}{area(A)}\int_{A} C(x)\, dx.$$
In words, this is just the expected value of $C(x)$, or the probability that a randomly located person would see rain over the specified time interval. In practice, of course the integral would be estimated based on actual measurement sites. If such a formula is indeed an accurate definition, then I'd be satisfied. In the above example, the PoP would be 75%. (The official definition could in essence be viewed as a shorthand that is more useful for those without any background in calculus.) If this definition is not correct, then some explanation would be helpful.
I've read web articles with statements like the following: "As a student and observer of meteorology, it constantly bums me out that people do not understand what it means when someone says there’s an “X% chance of rain” tomorrow. A 50 percent chance of rain does not mean there’s a 1-in-2 chance that you’re going to get wet."
It's not clear to me why "A 50 percent chance of rain means there’s a 1-in-2 chance that you’re going to get wet" would be an inaccurate interpretation of PoP. If the definition I suggested above is correct, then it is perfectly correct to say that a stationary observer placed at a random location in this scenario would have a 50% chance of getting wet. Am I missing something, or is this author being careless?
I don't have any background in meteorology, and in particular I don't have much of a sense of how PoP is actually computed in practice.
Answer: I agree that the PoP = C x A that we see on a lot of websites leaves something to be desired. It gets across the loose idea that the definition involves the area affected as well as the probability of occurrence, which is fine for most casual readers but can be frustrating for more inquisitive minds. How much are the confidence and the area estimated from ensemble forecasts versus expert judgement? How was it done before ensemble NWP started in the 1990s?
Anyway, there’s an interesting survey by Stewart et al (2016) that gives a broader overview of how different meteorologists use the phrase “probability of precipitation”. They categorise the PoP = C x A usage as following the NWS Operations Manual (1984), which is nicely summarized in a comment by Schaefer and Livingstone (1990). In short, your interpretation in terms of expectations is correct and, as you mention, they describe it in terms of a sum over a hypothetical network of points rather than a continuous integral:
$\textrm{PoP} = \frac{1}{N} \sum_{i=1}^N E[R_i]$,
where $R_i$ equals 1 if it rains at station $i$ and 0 if it doesn't. In your Pluieville example, this could be approximated as a town with one rain gauge in the north and one in the south, giving PoP = (1.0 + 0.5)/2 = 0.75.
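That station-average formula is simple enough to compute directly; here is a small sketch (the `pop` helper is hypothetical, not an NWS tool):

```python
def pop(station_rain_probs):
    """NWS-style PoP as the average of E[R_i] over a station network."""
    return sum(station_rain_probs) / len(station_rain_probs)

# Two-gauge town from the example: one gauge in the certain-rain north,
# one in the 50/50 south.
print(pop([1.0, 0.5]))  # 0.75
```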
I don't know for sure, but it wouldn't surprise me if these things are still calculated by subsampling gridded NWP models at actual rain gauge network locations, which would allow for comparable long-term verification of the probabilistic forecasts.
Stewart, A.E., C.A. Williams, M.D. Phan, A.L. Horst, E.D. Knox, and J.A. Knox, 2016: Through the Eyes of the Experts: Meteorologists’ Perceptions of the Probability of Precipitation. Wea. Forecasting, 31, 5–17, doi:10.1175/WAF-D-15-0058.1
National Weather Service, 1984: Zone and local forecasts. NWS Operations Manual W/OM15.
Schaefer, J.T. and R.L. Livingston, 1990: Operational Implications of the “Probability of Precipitation”. Wea. Forecasting, 5, 354–356, doi:10.1175/1520-0434(1990)005<0354:OIOTOP>2.0.CO;2 | {
"domain": "earthscience.stackexchange",
"id": 2298,
"tags": "weather-forecasting"
} |
Human spine: Is the coccyx really fused? | Question: Prologue: This post is about the adult (say, a 20 year old) human skeleton; I'm not particularly interested in infant skeletons.
The human spine is composed of cervical, thoracic, lumbar, sacral and coccygeal vertebrae
Image source
My school textbook (which has a history of possessing erroneous/outdated information) states:
The coccyx consists of 4 coccygeal vertebrae that are fused to become a single bone.
However, this Wikipedia article states:
Most anatomy books incorrectly state that the coccyx is normally fused in adults. In fact it has been shown that the coccyx may consist of up to five separate bony segments, the most common configuration being two or three segments.
Is Wikipedia correct?
If so, what is the (most common) number of bones that constitute the spine? Is the number of vertebrae that constitute the spine going to be different?
This other Wiki article appears to be particularly contentious:
A fully grown adult features 30 bones in the spine, whereas a child can have 33.
The cervical vertebrae (7)
The thoracic vertebrae (12)
The lumbar vertebrae (5)
The sacral vertebrae (5 at birth, later fused into one)
The coccygeal vertebrae (5 at birth, some or all of the bones fuse together but there seems to be a disagreement between researchers as to what the most common number should be. Some say the most common is 1, others say 2 or 3, with 4 being the least likely. It is counted as 1 in this article.)
I understand that there is little to no variation in the number of cervical, thoracic and lumbar vertebrae among humans, so they altogether constitute $\mathrm{7+12+5 = 24}$ vertebrae/bones (in this case, I believe "vertebrae" and "bones" are synonymous).
Assuming that the sacrum really is one fused bone (my question's regarding the coccyx, so I'm not going to question the anatomy of the sacrum in this post) consisting of five vertebrae (I think there's a bone-vertebra distinction here, so they aren't the same thing)
In that case, the cervical, thoracic, lumbar and sacral contributions are $\mathrm{ 24 + 1 = 25}$ bones, or $\mathrm{ 24 + 5 = 29}$ vertebrae. (Right?)
According to the Wikipedia article I cited above (the second one), the adult human spine has 30 bones.
So the coccyx contributes $\mathrm{30 - 29 = 1}$ bone to the spine, or $\mathrm{30 - 25 = 5}$ vertebrae to the spine.
In other words, the second Wiki article implies that the coccyx is a single (fused) bone. However, the first article states that the coccyx is really composed of 3-5 (say, 4 on an average) bones.
This is horribly contradictory.
If the finer points of my question was lost, I'll restate them here:
1) Is the coccyx a single bone or multiple bones?
2) How many bones are there in the adult human spine?
3) How many vertebrae are there in the adult human spine?
Answer: The Wikipedia article links to two papers. The first has data for 120 pain-free and 51 affected patients, recording the number of coccygeal segments in each. We can back-calculate a reasonable approximation of the segment distribution from their percentages:
1 Segment: 7%
2 Segments: 51%
3 Segments: 38%
4 segments: 4%
So 2 or 3 segments make up almost 90% of the population. Given that only 7% of the subjects had single-element coccyxes, I think it's safe to say the coccyx is usually made of multiple elements.
Whether you refer to those multiple elements as a single bone is largely a semantic argument. Since there is variation, it seems reasonable to call the collection of 5 variably fused terminal vertebrae the "coccyx".
As a practicing anatomist, I would say that there are 29 vertebrae, as in 29 vertebral elements (but you can't say 29 ossification centers, because each is made of multiple centers). These fuse into 24 free vertebrae, the sacrum (5 fused vertebrae) and one coccyx, which is variably fused. | {
"domain": "biology.stackexchange",
"id": 8148,
"tags": "human-anatomy, bone-biology"
} |
How to control turning in Navigation Stack | Question:
I have an omnidirectional skid-steer drive robot. I am implementing the Navigation Stack for it, using the default planners and default configuration. Whenever I give the bot a close-distance goal, it takes huge turns to correct its orientation and misses the target. What should I do to either make it take sharp turns or disable x velocity while rotating?
TrajectoryPlannerROS:
max_vel_x: 0.65
min_vel_x: 0
max_vel_theta: 2
min_in_place_vel_theta: 1
acc_lim_theta: 5.2
acc_lim_x: 2.5
acc_lim_y: 2.5
holonomic_robot: false
meter_scoring: true
xy_goal_tolerance: 0.2
yaw_goal_tolerance: 0.1
Originally posted by wolf on ROS Answers with karma: 21 on 2022-01-09
Post score: 0
Answer:
Try adjusting min_in_place_vel_theta: the default is 0.4, not 1.0. Setting it to 1 forces in-place rotations of at least 1 radian (about 57 degrees) per second.
This parameter sets the minimum rotational velocity allowed for the base while performing in-place rotations in radians/sec.
More info found here: http://wiki.ros.org/base_local_planner
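A sketch of the corresponding change to the configuration in the question (only the parameter named above; 0.4 is the documented base_local_planner default, so treat it as a starting point for tuning):

```yaml
TrajectoryPlannerROS:
  # Minimum in-place rotation speed. 1.0 rad/s (~57 deg/s) makes the robot
  # spin past the yaw goal; the base_local_planner default is 0.4.
  min_in_place_vel_theta: 0.4
```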
Originally posted by osilva with karma: 1650 on 2022-01-10
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by wolf on 2022-01-14:
Thanks! That made some improvement. | {
"domain": "robotics.stackexchange",
"id": 37331,
"tags": "ros, navigation, rviz"
} |
Why the most dominant programming languages didn't follow CSP thread model? | Question:
I was trying to ask this question in StackOverflow, but later realized that this question is more relevant to general computer science, not specific engineering problems. If you think it's not, please let me know.
Recently I've found out what CSP (Communicating Sequential Processes) is.
According to the article Bell Labs and CSP Threads:
Most computer science undergraduates are forced to read Andrew Birrell's “An Introduction to Programming with Threads.” The SRC threads model is the one used by most thread packages currently available. The problem with all of these is that they are too low-level. Unlike the communication primitive provided by Hoare, the primitives in the SRC-style threading module must be combined with other techniques, usually shared memory, in order to be used effectively...
Another article Share memory by communicating from Golang blog says:
Traditional threading models (commonly used when writing Java, C++, and Python programs, for example) require the programmer to communicate between threads using shared memory (...)
Go's concurrency primitives - goroutines and channels - provide an elegant and distinct means of structuring concurrent software. (These concepts have an interesting history that begins with C. A. R. Hoare's Communicating Sequential Processes.)
Based on what I've seen so far, because Hoare proposed CSP in 1978, it seems that there was no reason to use the SRC thread model in programming languages like C++ (1985), Java (1995) or Python (1990).
So my question is: why didn't the most dominant programming languages follow Hoare's thread model?
Here are my guesses:
Most programmers back then didn't know about Hoare's thread model.
Most programmers are used to traditional thread model.
What do you think?
Answer: Practically speaking, the C++ threading (memory) model is directly inspired by the Java Memory Model, and C followed. Hans Böhm was closely involved with the process and has a great resource list. (*)
You'll quickly note that your dates are pretty optimistic - this memory model did not exist in 1985 when the first C++ implementations were created. There's a practical reason for this. Even in 2000, it wasn't clear which model of parallel computing was going to win. Parallel extensions to mainstream languages were first defined as ad-hoc extensions by the hardware vendors, and later partially standardized by such things as POSIX threads and MPI.
Because of the challenges involved, you see that parallelism support is added first to languages used in high-performance computing. Even C++ was a bit late to the party; C and FORTRAN were the main languages. Java, by virtue of being late, had a chance to learn lessons from those. And since it was designed as a portable language, Java needed a clean memory model not tied to a particular hardware vendor's implementation.
So the common memory model can be traced to various actual hardware implementations that could be unified by a single definition. And this led to a secondary effect: because this was now a standard, software started to use it, and hardware vendors generally followed suit.
There's another winner in the parallelism space: GPGPUs. They entered the competition in a different way. NVidia's CUDA is a similar vendor extension to C, C++ and FORTRAN. This was possible because the GPU market justified independent development, and the general-purpose use of GPUs was a lucky coincidence.
* This development can be dated pretty accurately: the use of the Java memory model for C++ was presented at the 2001-10 ISO C++ meeting in Redmond, WA. | {
"domain": "cs.stackexchange",
"id": 15382,
"tags": "formal-languages, parallel-computing, concurrency, threads"
} |
[ROS2] Close all threads when using CTRL-C | Question:
Hi all!
In my project I'm spawning a thread which contains a while loop, simply like so:
std::thread worker([this]() { this->ballotCheckingThread(); });
std::unique_lock<std::mutex> lock(this->candidate_mutex_);
where the thread code is like this
void ballotCheckingThread() {
while (!this->checkVotingCompleted() && !this->checkForExternalLeader()) {
// Simulate some delay between checks
std::this_thread::sleep_for(std::chrono::milliseconds(200));
}
// Notify the first thread to stop waiting
cv.notify_all();
}
Unfortunately, when I use CTRL-C to terminate the node, this thread remains hanging and I have to manually kill it.
Right now in the main function I'm doing this:
int main(int argc, char *argv[]) {
std::cout << "Starting node..." << std::endl;
setvbuf(stdout, NULL, _IONBF, BUFSIZ);
rclcpp::init(argc, argv);
rclcpp::Node::SharedPtr node = std::make_shared<myNode>();
rclcpp::executors::MultiThreadedExecutor executor;
executor.add_node(node);
try {
executor.spin();
} catch (std::exception& e) {
std::cout << "rclcpp shutting down..." << std::endl;
rclcpp::shutdown();
return 0;
}
}
but I guess that's not enough. Is there a standard way to do this?
Originally posted by slim71 on ROS Answers with karma: 18 on 2023-07-17
Post score: 0
Answer:
In the end, I've found a solution.
Using a global variable std::weak_ptr<MyNode> MyNode::instance_;, I've initialized it in the main and registered it as a signal handler:
// Set the instance pointer to the shared pointer of the main node
MyNode::setInstance(node);
// Register the signal handler for SIGINT (CTRL+C)
signal(SIGINT, MyNode::signalHandler);
before adding and spinning the node.
I had to add some function to the node, of course:
class MyNode : public rclcpp::Node {
public:
std::atomic<bool> is_terminated_ {false};
void stopThread();
void startThread();
static void signalHandler(int signum);
static std::shared_ptr<MyNode> getInstance();
static void setInstance(rclcpp::Node::SharedPtr instance);
private:
std::thread helping_thread_;
static std::weak_ptr<MyNode> instance_; // Weak pointer to the instance of the node
};
void MyNode::setInstance(rclcpp::Node::SharedPtr instance) {
instance_ = std::static_pointer_cast<MyNode>(instance);
}
std::shared_ptr<MyNode> MyNode::getInstance() {
return instance_.lock();
}
void MyNode::signalHandler(int signum) {
// Stop the thread gracefully
std::shared_ptr<MyNode> node = getInstance();
if (node) {
node->stopThread();
}
rclcpp::shutdown();
}
void MyNode::startThread() {
if (!this->helping_thread_.joinable()) {
this->helping_thread_ = std::thread(&MyNode::spawnedThread, this);
}
}
void MyNode::stopThread() {
if (this->helping_thread_.joinable()) {
this->is_terminated_ = true;
this->helping_thread_.join();
}
}
where spawnedThread is the main function used in the additional thread.
I don't really like using global variables and such, but for now this works as intended.
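The core pattern here is independent of rclcpp: a signal handler may only safely set a lock-free atomic flag, the worker loop polls that flag, and the main thread joins the worker before exiting. Below is a minimal std-only sketch of that pattern (my own illustration, not the poster's code; all names are mine, and the self-delivered SIGINT stands in for the user's CTRL-C):

```cpp
#include <atomic>
#include <chrono>
#include <csignal>
#include <thread>

// A signal handler may only safely touch lock-free atomics (or a
// volatile sig_atomic_t), so the stop flag lives at namespace scope.
std::atomic<bool> g_stop{false};

void handle_sigint(int) { g_stop = true; }

// Install the handler, start a worker that polls the flag, deliver
// SIGINT to ourselves, and join the worker before returning.
bool run_and_stop_cleanly() {
    g_stop = false;
    std::signal(SIGINT, handle_sigint);

    std::thread worker([] {
        while (!g_stop) {
            std::this_thread::sleep_for(std::chrono::milliseconds(20));
        }
    });

    std::this_thread::sleep_for(std::chrono::milliseconds(60));
    std::raise(SIGINT);   // simulate CTRL-C

    worker.join();        // without this join, the thread is left hanging
    return g_stop;
}
```

In the poster's solution above, stopThread() plays the same role: it sets is_terminated_ (the flag the worker checks) and then joins helping_thread_.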
Originally posted by slim71 with karma: 18 on 2023-07-20
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 38460,
"tags": "ros"
} |
Invert n bits beginning at position p of an unsigned integer | Question: In an attempt to create a function that returns x with the n bits that begin at position p inverted, leaving the others unchanged - following the exercise in K&R - I've come up with the following piece of code.
Still, I find it unclear where to start counting, i.e. the result would be different if starting from the right. I am starting from the left - bit one is the first bit from the left.
unsigned invert(unsigned x, unsigned p, unsigned n)
{
return x ^ ((~0U >> (p - 1 + (sizeof(int) * CHAR_BIT - p + 1 - n))) << (sizeof(int) * CHAR_BIT - p + 1 - n));
}
Which I'd then simplify to:
unsigned invert(unsigned x, unsigned p, unsigned n)
{
return x ^ ((~0U >> (sizeof(int) * CHAR_BIT - n)) << (sizeof(int) * CHAR_BIT - p + 1 - n));
}
I'm not performing any boundary checks.
Is this good practice? Is there a cleaner solution?
Answer: When you find it "unclear where to start counting" you have encountered one of the greatest problems in software development: extracting useful specifications from your stakeholders! In these situations, when we can't just ask, we need to choose an interpretation that seems most useful and reasonable, and (this is the important bit) be very clear that we had to make a choice, and what we chose.
In this case, I would probably make a somewhat more specific name, and also add an explanatory comment:
/*
* Returns x with n bits inverted starting at bit p
* (counting from the left)
* E.g. invert_bits(0x0000, 4, 2) == 0x0c00 for 16-bit unsigned
*/
unsigned invert_bits(unsigned x, unsigned p, unsigned n)
That example in the comment would normally become one of the tests I use to verify the function.
I find it odd that we're working with unsigned yet we use sizeof (int) to calculate the width. Although we know that int and unsigned int have the same size, it's clearer to be consistent. Actually, I would just use sizeof x, which then makes it easier to re-use the code - e.g. if we write a version for unsigned long.
I think the arithmetic could be simplified if we consider that a mask with the leftmost p bits unset is ~0u >> p. Using invert_bits(0x0000, 4, 2) again, as in the comment, we can visualise what we're doing:
0000111111111111 mask_p = ~0u >> p;
0000001111111111 mask_n = mask_p >> n;
0000110000000000 mask_p ^ mask_n
As a completed function, we can reuse a single mask variable for this:
unsigned invert_bits(unsigned x, unsigned p, unsigned n)
{
unsigned mask = ~0u >> p;
mask ^= mask >> n;
return x ^ mask;
}
The nice thing about this is that it adapts automatically to the type's size, without needing any sizeof at all in the computation.
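To make the comment's example machine-checkable, as suggested earlier (a sketch; the expected constants in the checks assume a 32-bit unsigned, so the 16-bit value 0x0c00 from the comment becomes 0x0C000000):

```c
/* The final version again, repeated so this compiles standalone.
   Bit positions count from the left, as in the comment's example.
   Note that inverting the same field twice restores the original
   value -- a handy property to test. */
unsigned invert_bits(unsigned x, unsigned p, unsigned n)
{
    unsigned mask = ~0u >> p;   /* leftmost p bits clear */
    mask ^= mask >> n;          /* exactly n bits set */
    return x ^ mask;
}
```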
In case I've provided too much spoiler here, I provide some follow-up exercises:
Write the same function, but with the assumption that you count bits from the rightmost (least significant) bit.
Write a version for unsigned long.
Write versions that unconditionally set and reset the specified group of bits.
Write a main() that tests whether the above functions work as advertised. Remember to return EXIT_SUCCESS or EXIT_FAILURE as appropriate.
Which code can be shared in the answers to these questions? | {
"domain": "codereview.stackexchange",
"id": 40870,
"tags": "c, bitwise"
} |
Minimum Area Rectangle in Swift | Question: This is my solution to LeetCode – Minimum Area Rectangle in Swift
939. Minimum Area Rectangle
Given a set of points in the xy-plane, determine the minimum area of a rectangle formed from these points, with sides parallel to the x and y axes.
If there isn't any rectangle, return 0.
Example 1:
Input: [[1,1],[1,3],[3,1],[3,3],[2,2]]
Output: 4
Example 2:
Input: [[1,1],[1,3],[3,1],[3,3],[4,1],[4,3]]
Output: 2
class Solution {
struct Point_Dng: Hashable{
var x: Int
var y: Int
init(_ x: Int, _ y: Int) {
self.x = x
self.y = y
}
var hashValue: Int{
return x * 100000 + y
}
static func == (_ lhs: Point_Dng, _ rhs: Point_Dng) -> Bool{
return lhs.x == rhs.x && lhs.y == rhs.y
}
}
func minAreaRect(_ points: [[Int]]) -> Int {
let points_new = points.map({ (point: [Int]) -> Point_Dng in
return Point_Dng(point[0], point[1])
})
let set = Set(points_new)
var ans = Int.max
for point in points{
for point_piece in points{
if point[0] != point_piece[0] , point[1] != point_piece[1] , set.contains(Point_Dng(point[0], point_piece[1])) ,set.contains(Point_Dng(point_piece[0], point[1])) {
ans = min(ans, abs((point_piece[1] - point[1] ) * (point_piece[0] - point[0])))
}
}
}
if ans == Int.max {
return 0
}
else{
return ans
}
}
}
Note:
1 <= points.length <= 500
0 <= points[i][0] <= 40000
0 <= points[i][1] <= 40000
All points are distinct.
According to LeetCode's note, I improved the hash performance.
I turned
var hashValue: Int{
return "\(x)\(y)".hashValue
}
into
var hashValue: Int{
return x * 100000 + y
}
because Swift's tuple is not hashable.
The prior one will lead to “Time Limit Exceeded”
How can I improve it further?
In fact, I want to know whether there is something I missed in Swift, something outside my knowledge, because I kept my solution simple.
Answer: Naming
The meaning of some identifier names is hard to grasp:
What does Point_Dng stand for? Why not simply Point?
What is point_piece in the inner loop, and how is it different from
point in the outer loop?
set is too generic, what does it contain?
ans stands for “answer,” but actually contains the “minimal area” found so far.
Simplifications
As of Swift 4.2, the compiler automatically creates the required methods
for Equatable and Hashable conformance for a struct if all its members
are Equatable/Hashable.
A struct also has a default memberwise initializer if you don't define your
own.
The properties of a point are never mutated, so they can be declared as constants (with let).
This makes the struct Point as simple as
struct Point: Hashable {
let x: Int
let y: Int
}
The closure in
let points_new = points.map({ (point: [Int]) -> Point_Dng in
return Point_Dng(point[0], point[1])
})
can be simplified because the compiler can infer the argument type and the
return type automatically. Since the array is only needed for creating the
set, the assignments can be combined into one:
let pointSet = Set(points.map { point in Point(x: point[0], y: point[1]) })
Performance improvements
In the nested loop it suffices to consider only those pairs where one point
is the “lower left” and the other the “upper right” corner of a potential
rectangle. That reduces the number of tests, and makes the abs() call
redundant.
Putting it together
The following version was roughly twice as fast in my tests with
random arrays of 500 points (on a 3.5 GHz Intel Core i5 iMac, compiled
in Release mode, i.e. with optimizations):
class Solution {
struct Point: Hashable {
let x: Int
let y: Int
}
func minAreaRect(_ points: [[Int]]) -> Int {
let pointSet = Set(points.map { point in Point(x: point[0], y: point[1]) })
var minArea = Int.max
for lowerLeft in points {
for upperRight in points {
if upperRight[0] > lowerLeft[0]
&& upperRight[1] > lowerLeft[1]
&& pointSet.contains(Point(x: lowerLeft[0], y: upperRight[1]))
&& pointSet.contains(Point(x: upperRight[0], y: lowerLeft[1])) {
let area = (upperRight[0] - lowerLeft[0]) * (upperRight[1] - lowerLeft[1])
minArea = min(minArea, area)
}
}
}
return minArea == Int.max ? 0 : minArea
}
}
Further suggestions
Sorting the point array in increasing order of x-coordinates would allow
finding "lower left/upper right" pairs faster, potentially increasing the
performance. | {
"domain": "codereview.stackexchange",
"id": 33107,
"tags": "programming-challenge, swift"
} |
Why isn't image_transport_plugins available on ROS2 dashing? | Question:
Hello, I've been using image_transport on ROS crystal but now I would like to upgrade to dashing. I noticed ros-dashing-image-transport exists, but ros-dashing-image-transport-plugins doesn't. Why so, if it was available on crystal?
Originally posted by amargs on ROS Answers with karma: 1 on 2019-07-29
Post score: 0
Answer:
Apologies, this was likely me missing the package on the checklist when doing the release. I'll get it released for you.
Originally posted by mjcarroll with karma: 6414 on 2019-07-29
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by amargs on 2019-07-30:
Thank you! When will it be released?
Comment by mjcarroll on 2019-08-28:
This has been released now. | {
"domain": "robotics.stackexchange",
"id": 33544,
"tags": "ros2, ros-crystal, image-transport"
} |
How to best handle imbalanced text classification with Keras? | Question: I implemented a text classification model using Keras.
Most of the datasets that I use are imbalanced. Therefore, I would like to use SMOTE to handle said imbalance.
I tried both on plain text, and once the text was vectorized, but I don't seem to be able to apply SMOTE on text data.
I use imblearn and received the following error:
Expected n_neighbors <= n_samples, but n_samples = 3, n_neighbors = 6
How can I fix this error? And is SMOTE a good idea? If not, what other ways could I deal with class imbalance?
Answer: First of all, to reassure you, SMOTE should work on text data. SMOTE will work on any data type as long as there is a way to compute the distance between data points.
Based on the error message you receive, it seems that it's an implementation issue (adding part of your code or how much data you have would greatly help).
As the error states, you have only 3 samples but the method requires at least 6. My guess is that something went wrong and you should have much more than 3 samples. | {
"domain": "datascience.stackexchange",
"id": 7474,
"tags": "classification, keras, class-imbalance, smote, text-classification"
} |
Move Cartesian with MoveIt | Question:
Hello,
I would like to move the "Franka Emika Panda" robot along a Cartesian path in C++. I made the configuration recommended on the homepage, and I can already move the robot with joint-space motions through MoveIt. Then I tried the Cartesian movement following this link:
http://docs.ros.org/kinetic/api/moveit_tutorials/html/doc/move_group_interface/move_group_interface_tutorial.html#cartesian-paths
In this section they show how it works in simulation, but I would like to try it on the real robot. I hope I can get some tips to solve my problem.
EDIT: I add the source code and the launch-file which i use.
SOURCE CODE
//Cartesian Path
geometry_msgs::Pose target_pose3 = move_group.getCurrentPose().pose;
std::cout<< "Start position" << std::endl;
std::cout<< target_pose3 << std::endl;
std::vector<geometry_msgs::Pose> waypoints;
waypoints.push_back(target_pose3);
target_pose3.position.z -= 0.2;
waypoints.push_back(target_pose3); //down
target_pose3.position.y -= 0.2;
waypoints.push_back(target_pose3); //right
target_pose3.position.z += 0.2;
target_pose3.position.y += 0.2;
target_pose3.position.x -= 0.2;
waypoints.push_back(target_pose3);
// Scaling factor for the maximum velocity of each joint
move_group.setMaxVelocityScalingFactor(0.1);
moveit_msgs::RobotTrajectory trajectory;
const double jump_threshold = 0.5;
const double eef_step = 0.01;
move_group.computeCartesianPath(waypoints, eef_step, jump_threshold, trajectory);
moveit::planning_interface::MoveGroupInterface::Plan goal_plan;
goal_plan.trajectory_ = trajectory;
move_group.execute(goal_plan);
LAUNCH-FILE
<?xml version="1.0" ?>
<launch>
<arg name="robot_ip" />
<include file="$(find franka_control)/launch/franka_control.launch">
<arg name="robot_ip" value="172.16.0.2" />
<arg name="load_gripper" value="false" />
</include>
<include file="$(find panda_moveit_config)/launch/panda_moveit.launch">
<arg name="load_gripper" value="false" />
</include>
</launch>
When i start my source code then i get some information back from the console
[ INFO] [1551336690.500652276]: Received request to compute Cartesian path
[ INFO] [1551336690.501040209]: Attempting to follow 4 waypoints for link 'panda_link8' using a step of 0.010000 m and jump threshold 0.500000 (in global reference frame)
[ INFO] [1551336690.507374299]: Computed Cartesian path with 2 points (followed 2.597403% of requested trajectory)
[ INFO] [1551336690.508019615]: Execution request received
[ WARN] [1551336690.534495590]: Dropping first 1 trajectory point(s) out of 2, as they occur before the current time.
First valid point will be reached in 0.013s.
[ INFO] [1551336690.667268099]: Completed trajectory execution with status SUCCEEDED ...
[ INFO] [1551336690.667388163]: Execution completed: SUCCEEDED
Regards,
Markus
Originally posted by MarkusHHN on ROS Answers with karma: 54 on 2019-02-27
Post score: 0
Original comments
Comment by mlautman on 2019-03-05:
You should always visually inspect the generated plan before executing on hardware. The computeCartesianPath functionality has a known issue that could result in joint flips.
Comment by MarkusHHN on 2019-03-06:
thank you for the information. It is possible to solve or improve this problem?
Answer:
If I understand your question correctly, you probably just need to create/update a plan using the trajectory that computeCartesianPath fills in as an output parameter, and then call MoveGroup::execute with that plan.
Assuming the tutorial's notation, this should work:
my_plan.trajectory_ = trajectory;
move_group.execute(my_plan);
Originally posted by aPonza with karma: 589 on 2019-02-27
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by MarkusHHN on 2019-02-28:
Thank you for your answer, but the robot doesn't move to this position. I will add some source code and information. | {
"domain": "robotics.stackexchange",
"id": 32543,
"tags": "ros, moveit, ros-kinetic, cartesian"
} |
Faster than light galaxies/clusters? | Question: A few years ago in an astronomy course, we calculated some (transverse?) velocity of a moving object and got super luminal results. The answer was apparent and not physical velocity of the object. Hence no problem. But at the moment, I don't recall the solution to this apparent issue. Anyone?
Answer: The velocity calculation was distance*(change in angle). However, this does not take into account the changing time-delay of light: we see it sped-up because the time delay is decreasing, like a TV recording where you are fast-forwarding as you gradually catch up with real time. Fortunately, all we need to do to calculate the real speed is to account for the time delay, no weird relativity is necessary.
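As an aside (my addition, not part of the original answer), the worked example that follows is an instance of the standard apparent-superluminal-motion relation: for a source with radial approach speed $v_r$ and transverse speed $v_\perp$ (both in units of $c$), the apparent transverse speed is $v_\perp / (1 - v_r)$. A quick sketch:

```python
def apparent_transverse_speed(v_radial, v_transverse):
    """Apparent transverse speed (in units of c) of an approaching source,
    inflated because the light travel time keeps shrinking."""
    return v_transverse / (1.0 - v_radial)

# The example below: approaching at 0.8c with transverse speed 0.25c.
print(round(apparent_transverse_speed(0.8, 0.25), 6))   # 1.25 -- apparently superluminal
```

Plugging in the answer's numbers gives 0.25/(1 - 0.8) = 1.25, matching the 1.25 light seconds per apparent second worked out below.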
Suppose a far away object is approaching at $0.8c$ and has a transverse velocity of $0.25c$ (total velocity of $0.84c$). It emits a burst of light (in our frame) at time $t$ and $t+5$ seconds. It is $d$ light seconds from us at time $t$ and $d-4$ at $t+5$. Accounting for the time delay of light, we see flashes at time $t+d$ and $t+5+(d-4) = t+d+1$; we see them only $1$ second apart. In those $5$ seconds, it moved transversely a distance of $5\times0.25 = 1.25$ light seconds. Since it appeared to move $1.25$ light seconds in $1$ second, we see apparent superluminal motion. | {
"domain": "physics.stackexchange",
"id": 11654,
"tags": "cosmology, universe, space-expansion, faster-than-light, galaxies"
} |
nltk's stopwords returns "TypeError: argument of type 'LazyCorpusLoader' is not iterable" | Question: While trying to remove stopwords using the nltk package, the following error occurred:
from tqdm import tqdm
import nltk
from nltk.corpus import stopwords
preprocessed_reviews = []
for sentance in tqdm(final["Text"].values):
sentance = ' '.join(e.lower() for e in sentance.split() if e.lower() not in stopwords)
TypeError
Traceback (most recent call last)
<ipython-input-136-ac5c19fafd9c> in <module>()
---> 7 sentance = ' '.join(e.lower() for e in sentance.split() if e.lower() not in stopwords)
8 preprocessed_reviews.append(sentance.strip())
9
TypeError: argument of type 'LazyCorpusLoader' is not iterable
Answer: There are a couple of items that could be improved in your code:
nltk.corpus.stopwords is a nltk.corpus.util.LazyCorpusLoader. You might want stopwords.words('english'), a list of stop English words.
It can cause bugs to update a variable as you iterate through it, for example sentance in your code.
In your code preprocessed_reviews is not being updated.
You might want to tokenize instead of str.split().
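A further micro-optimization worth noting (my addition, not the original answer's): stopwords.words('english') builds a fresh list on every call, and membership tests on a list are O(n), so hoisting the stop list into a set once is noticeably faster on large corpora. A download-free sketch, using a tiny hard-coded stand-in for set(stopwords.words('english')) so it runs without the NLTK corpus:

```python
# Stand-in for set(stopwords.words('english')) -- build it once, outside the loop.
stop_set = {"the", "a", "i", "have", "been", "doing", "what", "should",
            "not", "over"}

def strip_stopwords(sentence, stop_words):
    # Plain whitespace split here; nltk.word_tokenize handles punctuation better.
    return ' '.join(t for t in sentence.lower().split() if t not in stop_words)

print(strip_stopwords("The quick brown fox jumps over the lazy dog", stop_set))
# quick brown fox jumps lazy dog
```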
Here is a revised version:
import nltk
from nltk.corpus import stopwords
from tqdm import tqdm
reviews_raw = ('The quick brown fox jumps over the lazy dog', "I have been doing what I should not been doing")
reviews_processed = []
for sentence in tqdm(reviews_raw):
reviews_processed.append(' '.join(token.lower() for token in nltk.word_tokenize(sentence) if token.lower() not in stopwords.words('english')))
assert reviews_processed == ['quick brown fox jumps lazy dog', ''] | {
"domain": "datascience.stackexchange",
"id": 5349,
"tags": "nlp, preprocessing, nltk, text"
} |
Finding short and fat paths | Question: Motivation: In standard augmenting path maxflow algorithms, the inner loop requires finding paths from source to sink in a directed, weighted graph. Theoretically, it is well-known that in order for the algorithm to even terminate when there are irrational edge capacities, we need to put restrictions on the paths that we find. The Edmonds-Karp algorithm, for example, tells us to find shortest paths.
Empirically, it has been observed that we might also want to find fat (is there a better term for this?) paths. For example, when using capacity scaling, we find shortest paths that can bear at least $\epsilon$ amount of flow. There is no restriction on how long the path can be. When we can no longer find any paths, we decrease $\epsilon$ and repeat.
I am interested in optimizing the choice of augmenting paths for a very specific application of max-flow, and I want to explore this tradeoff between short and fat paths. (Note: it is not necessary for me to always solve the problem. I am most interested in finding the largest lower bound on flow in the shortest amount of wall time.)
Question: Is there a standard way to interpolate between the shortest-path approach and the capacity-scaling approach? That is, is there an algorithm for finding paths that are both short and fat, where ideally some parameter would control how much length in the path we are willing to trade off for fatness? At the extremes, I'd like to be able to recover shortest paths on one end and capacity-scaling-style paths on the other.
Answer: In the spirit of your comment about "pretty good but not necessarily optimal," I present the following idea with absolutely no guarantee of optimality!
For completeness, here is the pseudocode that you referred to (Remark: the algorithm linked assumes edge capacities are integers between 1 and C and that flow and residual capacity values are integral):
Scaling-Max-Flow(G, s, t, C) {
foreach e ∈ E f(e) ← 0
Δ ← smallest power of 2 greater than or equal to C
G_f ← residual graph
while (Δ ≥ 1) {
G_f(Δ) ← Δ-residual graph
while (there exists augmenting path P in G_f(Δ)) {
f ← augment(f, C, P)
update G_f(Δ)
}
Δ ← Δ / 2
}
return f
}
Observe that when $\epsilon = 1$ ($\epsilon = \Delta$ in the pseudocode), you're just finding paths in shortest to longest order, and when $\epsilon$ is large, you're finding paths in (more or less) fattest to slimmest order. In fact, depending on the instance, the capacity-scaling algorithm finds paths in shortest to longest order within "buckets" of "enough flow."
Then, add another input parameter $0 \le \rho \le 1$ that represents how much you care about "fatness" vs "shortness." In order to ensure we're not massively affecting the runtime, we further require that $\rho$ is a rational number.
Then, each time $\epsilon$ is assigned a value, we additionally take the $\rho$-weighted arithmetic mean (I hope that's the correct term..) between 1 and its current value. That is,
$\epsilon \leftarrow (\rho)\epsilon + (1-\rho)$
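Concretely, the resulting schedule of $\Delta$ thresholds can be sketched as a generator (my own sketch, not part of the original answer; the fixed-point check handles the case where the update stalls at 1, so the schedule stays finite for every $\rho$):

```python
def delta_schedule(C, rho):
    """Thresholds for a rho-interpolated capacity-scaling loop.
    rho = 1 reproduces pure capacity scaling; rho = 0 collapses to a
    single phase at 1 (pure shortest augmenting paths)."""
    delta = 1.0
    while delta < C:                    # smallest power of 2 >= C
        delta *= 2.0
    delta = rho * delta + (1 - rho)     # rho-weighted mean with 1
    while delta >= 1.0:
        yield delta
        nxt = rho * (delta / 2.0) + (1 - rho)
        if nxt >= delta:                # reached the fixed point at 1
            break
        delta = nxt

print(list(delta_schedule(8, 0.5)))     # [4.5, 1.625]
```

For C = 8, rho = 1 yields the classic [8, 4, 2, 1], rho = 0 yields just [1], and intermediate rho values converge to 1 after only a few phases.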
For $\rho = 0$, we end up with a pure shortest path algorithm; for $\rho = 1$, we get a pure fattest path algorithm; and for $0 < \rho < 1$ we get something in between. In particular, for some middle value, $\epsilon$ will converge to $\le 1$ more quickly, so you'll get more shortest paths, and fewer fattest paths. | {
"domain": "cstheory.stackexchange",
"id": 254,
"tags": "ds.algorithms, graph-theory, graph-algorithms"
} |
Asymptotic Proofs - BigOh/BigTheta | Question: This is not homework, but from a past exam. I do not know how to solve this one at all. Can anyone please take the time and show me how to do these? Thank you.
Prove that $5^n \in O(6^n)$, but that $5^n \notin \Theta(6^n)$. (Doing this shows that $6^n$ bounds $5^n$ from above, but that $6^n$ is not the tightest such bound.)
I am familiar with the definitions, but it seems obvious: $5^n$ is clearly $< 6^n$, and $5^n$ is not $> 6^n$.
Answer: First we show that $5^n \in O(6^n)$. To do this, we need to find constants $n_0$ and $c$ such that, for all $n > n_0$, $5^n \leq c6^n$. Without getting into a deep philosophical discussion about how I arrived at these constants, you should be able to convince yourself that $n_0 = 1$ and $c = 1$ work.
Now, to show that $5^n \notin \Theta(6^n)$, we need to show that $5^n \notin \Omega(6^n)$. This is because we already know $5^n \in O(6^n)$ and $\Theta(6^n)$ is simply $O(6^n) \cap \Omega(6^n)$. Read that a few times to make sure we're on the same page. For simplicity's sake, we can stick with $O$ and show that $6^n \notin O(5^n)$, which is equivalent to our condition.
To show that $6^n \notin O(5^n)$, we need to show that for any $c$ we pick, there's an $n_0$ after which $6^n > c5^n$; in other words, no matter what constant we multiply $5^n$ by, $6^n$ always eventually wins. To do this, we can calculate $n_0$ from $c$. We solve:
$$6^n > c5^n$$
$$\frac{6^n}{5^n} > c$$
$$1.2^n > c$$
$$n > log_{1.2} c$$
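A quick numeric sanity check of this bound (my addition, not part of the original answer):

```python
import math

# The smallest integer exceeding log_{1.2}(c), for c = 10**100.
c = 10 ** 100
n0 = math.ceil(math.log(c, 1.2))
print(n0)   # 1263, the constant quoted in the next sentence
```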
We can choose the smallest integer greater than the RHS and get a value for $n_0$ that will break any $c$ we choose. For instance, for $c = 10^{100}$, my formula says that 1,263 should work for $n_0$. Plugging in, looks like we're good to go. | {
"domain": "cs.stackexchange",
"id": 3382,
"tags": "asymptotics, proof-techniques"
} |
Energy, power and action | Question: Through unit analysis, one can identify the following relationship linking energy, action and power:
$$\mathrm{energy}^2 = \mathrm{action} \times \mathrm{power}.$$
Alternatively, we rewrite this expression as:
$$\mathrm{power = \frac{energy^2}{action}};$$
or
$$\mathrm{action = \frac{energy^2}{power}}.$$
In light of this tight relationship, it seems odd that physicists prefer only to discuss energy and action when it comes to quantum field theory. It would seem that the inverse relationship between action and power would lead to an alternative formulation where the goal is to find extremum of power instead of action. So why is there very little discussion of power in physics?
Answer: We say that your formulae are "dimensionally correct", but that's it. They're dimensionally correct for a simple reason: energy is expressed in Joules, which is the geometric average of Joule-seconds, the units of action, and Joules per second, the units of power.
But if you can construct an equation that is dimensionally correct, it doesn't yet mean that it is a correct identity - and it is very far from being a demonstrably useful one. I can't imagine any sensible context in which it would be true that "energy squared equals action times power". Moreover, even if such an identity existed, there could be a numerical coefficient, and it is pretty unlikely that it would be one.
So you're just playing with units - and yes, the left hand side has the same unit as the right hand side. That's a necessary condition but not a sufficient one for an equation to be OK. For such an equation to make sense, you must actually know "power of what", "energy of what", and "action of what" you are inserting into the equation. Otherwise you don't know what you're doing.
There is no important law of "extreme power". Why there should be one? Physical laws can't be discovered by randomly combining verbs and names of physical quantities. Science is not equivalent to the business of monkeys who randomly type letters and after some time, they inevitably write Hamlet just like Shakespeare. There is a law that systems that may get rid of some energy - or dissipate it into heat - will eventually do it. That means that they will minimize their energy. So energy minimization exists and is important for the description of the state of objects that become static.
Also, there is the law of least action which can be used to derive the differential equations governing all of classical physics. But there's no law of "extreme power". There's even no context that could be waiting for such a new law. So "extremized power" may be helpful for consumers of electricity who try to save power, and/or for energy companies who try to maximize their power production. But both groups usually have some other considerations aside from the power itself, too. So your "law" never works.
Power itself is simply not as fundamental as energy or action, much like the permittivity change per unit viscosity and per unit entropy per day is not. The simple term "power" shouldn't confuse you - linguistic simplicity of some terms doesn't imply anything for physics. The "power" is still some "energy per unit time" which is a derived, rather than fundamental, concept. Energy and action are simply more fundamental and primary than power.
Best wishes
Lubos
P.S.: By the way, it's not difficult to produce as many dimensionally correct equations as many you want. In particular, every equation you write down is dimensionally OK in the Planck units because all quantities are dimensionless. Equivalently, one can always add the product of appropriate powers of universal constants - namely $c,\hbar,G,k_{Boltzmann},\epsilon_0$ - to all terms in an equation to transform a dimensionally wrong equation into a correct one. So the nontriviality or the "degree of shocking surprise" of the fact that you managed to construct a dimensionally correct equation involving energy, action, and power is exactly zero. | {
"domain": "physics.stackexchange",
"id": 276,
"tags": "energy, dimensional-analysis, power, action"
} |
How to prove polynomial time equivalence? | Question: Define the problem $W$:
Input: A multi-set of numbers $S$, and a number $t$.
Question: What is the smallest subset $s \subseteq S$ so that $\sum_{k \in s} k = t$, if there is one? (If not, return none.)
I am trying to find some polytime equivalent decision problem $D$ and provide a polytime algorithm for the non-decision problem $W$ assuming the existence of a polytime algorithm for $D$.
Here is my attempt at a related decision problem:
$\mathrm{MIN\text{-}W}$:
Input: A multi-set of numbers $S$, two numbers $t$ and $k$.
Question: Is there a subset $s \subseteq S$ so that $\sum_{x \in s} x = t$ and $|s| \leq k$?
Proof of polytime equivalence:
Assume $W \in \mathsf{P}$.
solveMIN-W(S, t, k):
1. S = sort(S)
2. Q = {}
3. for i=1 to k:
4. Q.add(S_i)
5. res = solveW(Q, t)
6. if res != none and res = t: return Yes
7. return No
I'm not sure about this algorithm though. Can anyone help please?
Answer: The problems you mention here are variations of a problem called SUBSET-SUM, FYI if you want to read some literature on it.
What is $S_i$ in your algorithm? If it's the $i$-th element, then you assume that the minimal set will use the smaller elements, which is not necessarily true - there could be a large element that is equal to $t$, in which case there is a subset of size $1$. So the algorithm doesn't seem correct.
However, as with many optimization problems that correspond to NP-complete problems, you can solve the optimization problem given an oracle to the decision problem.
First, observe that by iterating over $k$ and calling the decision oracle, you can find the minimal size $k_0$ of a subset whose sum equals $t$ (but not the actual subset).
After finding the size, you can remove the first element in $S$, and check whether there is still a subset of size $k_0$ - if not, then $s_1$ is surely in a minimal subset, so compute $t-s_1$, and repeat the process with the rest of $S$.
If $s_1$ does not affect the size $k_0$, then repeat the process with $S\setminus \{s_1\}$.
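The procedure just described can be sketched in runnable form. In the Python sketch below, `decide` is a brute-force stand-in for the assumed polynomial-time decision oracle (the names are mine, not from the question); the point is only that the reduction itself makes polynomially many oracle calls.

```python
from itertools import combinations

def decide(S, t, k):
    """Stand-in for the MIN-W decision oracle: is there a subset of S
    of size <= k summing to t? (Brute force here, only to make the
    reduction sketch runnable.)"""
    return any(sum(c) == t for r in range(k + 1)
               for c in combinations(S, r))

def solve_w(S, t):
    """Recover a minimum-size subset summing to t using only the oracle."""
    if not decide(S, t, len(S)):
        return None
    # Find the minimum size k0 by trying k = 0, 1, 2, ...
    k0 = next(k for k in range(len(S) + 1) if decide(S, t, k))
    chosen, remaining, target = [], list(S), t
    while k0 > 0:
        x = remaining.pop(0)
        if decide(remaining, target, k0):
            continue               # x is not needed; discard it
        chosen.append(x)           # x must be in some minimum subset
        target -= x
        k0 -= 1
    return chosen
```

For instance, `solve_w([3, 1, 4, 7], 8)` returns `[1, 7]`, a minimum-size subset summing to 8.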
It is not hard to verify that this process takes polynomial time. | {
"domain": "cs.stackexchange",
"id": 1191,
"tags": "complexity-theory, reductions, p-vs-np"
} |
Counting integers that are within intervals | Question: I am looking for a more efficient way than bruteforcing my way through in the following problem in python 3.
Problem statement:
Input:
An array of n integers, scores, where each score is denoted by scores_j
An array of q integers, lowerLimits, where each lowerLimits_i denotes
the lowerLimit for score range i.
An array of q integers, upperLimits, where each upperLimits_i denotes
the upperLimit for score range i.
Output: A function that returns an array of q integers where the value at each index i denotes the number of scores that are in the inclusive range [lowerLimits_i, upperLimits_i].
Constraints:
1 ≤ n ≤ 1e5
1 ≤ scores_j ≤ 1e9
1 ≤ q ≤ 1e5
1 ≤ lowerLimits_i ≤ upperLimits_i ≤ 1e9
Example:
Given scores= [5, 8, 7], lowerLimits = [3, 7], and upperLimits = [9, 7]
I want to check how many of the integers are contained in each interval (inclusive). In this example, the intervals are [3,9] and [7,7], and the result would be [3, 1].
My code looks like this:
def check(scores, lowerLimits, upperLimits):
res = []
for l, u in zip(lowerLimits, upperLimits):
res.append(sum([l <= y <= u for y in scores]))
return res
if __name__ == "__main__":
scores= [5, 8, 7]
lowerLimits = [3, 7]
upperLimits = [9, 7]
print(check(scores, lowerLimits, upperLimits))
Answer: If you sort your values, you can then make an iterator on the sorted list, forward it to the lower limit, count until the first value is reached that is larger than the upper limit and discard all further values.
The sorting will add \$\mathcal{O}(n\log n)\$ time complexity, but if you have a lot of values larger than (all) your upper bounds, you could get this back.
An implementation using itertools could be:
from itertools import dropwhile, takewhile
def graipher(scores, lower, upper):
scores = sorted(scores)
for l, u in zip(lower, upper):
s = iter(scores)
yield sum(1 for _ in takewhile(lambda x: x <= u, dropwhile(lambda x: x < l, s)))
Since the scores are now already sorted, you could even use bisect to find the right indices to insert the upper and lower limits. The difference between the two indices will give you the number of values in range:
from bisect import bisect_left, bisect_right
def graipher2(scores, lower, upper):
scores = sorted(scores)
for l, u in zip(lower, upper):
yield bisect_right(scores, u) - bisect_left(scores, l)
Both functions are generators. You can just call list() on them to consume them into a list, giving the same result as your code:
if __name__ == "__main__":
scores= [5, 8, 7]
lowerLimits = [3, 7]
upperLimits = [9, 7]
print(check(scores, lowerLimits, upperLimits))
print(list(graipher(scores, lowerLimits, upperLimits)))
print(list(graipher2(scores, lowerLimits, upperLimits)))
Finally, Python has an official style-guide, PEP8, which recommends using lower_case for variables and functions.
When running your function and my two functions on an input of the maximum defined size for scores and a single pair of limits, I get the following timings:
check: 249 ms ± 3.84 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
graipher: 77.3 ms ± 950 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
graipher2: 53.9 ms ± 772 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
When using a scores list of length 10 and the maximum defined size for the lengths of the limits, I get:
check: 2.8 s ± 112 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
graipher: 246 ms ± 2.77 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
graipher2: 73.1 ms ± 612 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
And finally, when using the maximum defined size for both scores and the limits, only graipher2 finishes in a reasonable time (I stopped the other ones after a few minutes):
graipher2: 247 ms ± 4.94 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
So, to summarize, sort your scores and use bisection. | {
"domain": "codereview.stackexchange",
"id": 32088,
"tags": "python, performance, interval"
} |
Distinction between contraction and application of a tensor in abstract index notation | Question: A somewhat similar question is this one but it is not quite the same.
I am getting used to the abstract index notation used for tensor algebra. So far so good, but there is one issue that concerns me. In General Relativity by Wald, it is discussed how the difference between two connections $\tilde{\nabla}$ and $\nabla$ induces a tensor $C$ of rank (1,2) in $T_pM$, and hence (by the index notation) we denote this tensor as ${C^c}_{ab}$. The problem is that Wald defines this tensor by the equation
$$\nabla_a\omega_b=\tilde{\nabla}_a \omega_b- {C^c}_{ab}\omega_c $$
Which seems funny, we can think of the last term in the RHS of the latter as the contraction of the (1,3) tensor $C\otimes \omega $ with respect to the first dual vector slot and the third vector slot. But rather than this odd way of defining the tensor $C$ we could make use of the index-free notation. So that we could define the tensor $C:V^{*}\times V\times V\to \mathbb{R}$ as $$\omega,t,s\mapsto (\tilde{\nabla}\omega)(t,s)-(\nabla\omega )(t,s) $$
which would be the natural way of defining a tensor "by its action". So the question is: Is my way of understanding the abstract index notation in this case the correct one, or rather does the definition of Wald refer to $$\sum_{k=1}^nC(e^{k*},t,s)\cdot \omega(e_{k})=(\tilde{\nabla}\omega)(t,s)-(\nabla\omega)(t,s) $$
where $\{e_1,\ldots, e_n \}$ is some basis of $T_pM$ and $\{e^{1*},\ldots, e^{n*} \}$ is its dual basis. Which is it? How could one know using the Abstract index notation?
Note that the first definition (the one I assume is the correct one) does indeed define a tensor, meanwhile a tensor is hardly ever characterized by a contraction. But the notation, as defined does indeed suggest the contraction interpretation.
Answer: The equation $$\nabla_a\omega_b=\tilde{\nabla}_a \omega_b- {C^c}_{ab}\omega_c$$
can be contracted with vector components $X^a$ and $Y^b$:
$$C^{c}_{ab}X^aY^b\omega_c = (\tilde{\nabla}_a\omega_b) X^aY^b - (\nabla_a\omega_b) X^a Y^b$$
which can be expressed in coordinate-free notation as
$$\mathbf C(\boldsymbol\omega,\mathbf X,\mathbf Y) = (\tilde{\nabla}_{\mathbf X}\boldsymbol \omega)(\mathbf Y) - (\nabla_{\mathbf X}\boldsymbol \omega)(\mathbf Y)$$
where $\nabla_\mathbf{X} =X^a \nabla_a$. Note in particular that $\nabla_{\mathbf X}$, which is the covariant directional derivative along $\mathbf X$, maps a covector field $\boldsymbol \omega$ to a covector field
$$\nabla_\mathbf{X} \boldsymbol \omega = (X^a\nabla_a\omega_b)\hat \epsilon^b = X^a(\partial_a \omega_b -\omega_c \Gamma^c_{ab})\hat\epsilon^b$$
You can use this to go backwards to verify that the coordinate-free expression above does indeed yield the expression you started with.
Lastly, note that the $(0,2)$-tensor with components $C^c_{ab}\omega_c$ could equivalently be viewed as the $(1,2)$-tensor $\mathbf C$ with the covector $\boldsymbol \omega$ plugged into the first slot (i.e. $\mathbf C(\boldsymbol \omega,\bullet,\bullet)$ ) or as the trace over the first and fourth index of the $(1,3)$-tensor ($\mathbf C \otimes \boldsymbol \omega$). If I'm interpreting the question correctly, the answer is that your two alternate interpretations are equivalent. | {
"domain": "physics.stackexchange",
"id": 67576,
"tags": "general-relativity, tensor-calculus, notation"
} |
Can I link to ROS libraries without CMAKE? | Question:
Every time I try to use a library built by CMAKE, I need to also use CMAKE. I don't want to use CMAKE. I just want to use the LIB, INCLUDEPATH, and DEPENDPATH variables in my Qt project. Is this possible?
Originally posted by drhalftone on ROS Answers with karma: 1 on 2017-09-12
Post score: 0
Original comments
Comment by ahendrix on 2017-09-12:
Please consider rephrasing this in a way that does not read as an attack on cmake.
Answer:
This post http://jbohren.com/tutorials/2014-02-10-roscpp-hello-world/#create-the-simplest-ros-c-program describes what catkin and cmake are doing under the hood, and how to build a simple ROS node on the command line without cmake.
The ROS packages that you install through apt-get also install pkgconfig files in /opt/ros/<distro>/lib/pkgconfig, if your build system is compatible with pkgconfig.
Originally posted by ahendrix with karma: 47576 on 2017-09-12
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 28835,
"tags": "ros"
} |
Sketching Phase Spectra Using Group Delay and Magnitude Spectra Informations | Question: I have only magnitude spectra and group delay information and I need to sketch phase spectra from this information. For example, group delay is given like this: $\tau_{g}(\omega) = c$ where c is a constant value.
I know the group delay is the negative of the derivative of the system's phase function. In this case, the phase function can be written like this: $\beta(\omega) = - c \omega + constant$, but how do I determine the $constant$ here? Is it possible to find the phase from this information?
Answer: If the magnitude spectrum is symmetric
$$M(\omega)=M(-\omega)\tag{1}$$
(as I assume), then your system is real-valued. The phase response of a real-valued system is asymmetric:
$$\phi(\omega)=-\phi(-\omega)\quad(\mod 2\pi)\tag{2}$$
This means that there can be two cases. In the first case the phase is continuous at $\omega=0$: either it goes through zero, i.e. $\phi(\omega)=-c\cdot\omega$, where $c$ is the constant group delay, or it equals $\pm \pi$ at $\omega=0$, which means $\phi(\omega)=-c\cdot\omega\pm\pi$.
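The continuous-phase case is easy to check numerically. Here is a sketch (Python/NumPy, my own example filter) using a real, symmetric (linear-phase) FIR filter, whose group delay is the constant $c=(N-1)/2=1$ sample:

```python
import numpy as np

# h = [1, 2, 1] is real and symmetric, so H(w) = exp(-1j*w) * (2 + 2*cos(w)):
# the phase is exactly phi(w) = -w and the group delay is c = 1 sample.
h = np.array([1.0, 2.0, 1.0])
w = np.linspace(0.01, np.pi - 0.01, 512)   # stay inside (0, pi)
H = np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h
phase = np.unwrap(np.angle(H))
group_delay = -np.gradient(phase, w)       # tau_g(w) = -d(phi)/d(w)
```

Here `group_delay` comes out as 1 everywhere on the grid, and the phase passes through zero as $\omega\to 0$, matching the first case.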
The phase jumps at $\omega=0$. This is only possible if the magnitude has a zero at $\omega=0$, i.e. $M(0)=0$. The phase jumps by $\pi$, which is simply a sign change of the (bipolar) amplitude function. In this case the phase is given by $$\phi(\omega)=\begin{cases}-\pi/2-c\cdot\omega,&\quad\omega>0\\\pi/2-c\cdot\omega,&\quad\omega<0\end{cases}$$ | {
"domain": "dsp.stackexchange",
"id": 2689,
"tags": "discrete-signals, signal-analysis, magnitude, linear-phase, group-delay"
} |
Do I need to use a callback function here, or is there another way? | Question: Here is the pertinent code:
index.html
<!-- DOM INITIALIZATION -->
<script type="text/javascript">
$().ready(function() {
getThemeInfo();
if (themeSelect==2) {
ReplaceJSCSSFile("css/skin1.css", "css/skin2.css", "css"); // overwrite CSS
}
AJAX_LoadResponseIntoElement("skinContainer", "skin" + themeSelect + ".txt", function() {
AJAX_LoadResponseIntoElement("contentdiv", "index.txt", initPage);
});
});
</script>
funcs.js
function initPage()
{
setContentDimensions();
replaceCSSMenu();
showContainer();
setContentPositions();
}
function setContentPositions()
{
var contentTop = findTop(document.getElementById('NavMenu')) + 4;
var contentLeft = findLeft(document.getElementById('kwick1')) + 226;
document.getElementById('contentdiv').style.top = (contentTop)+ "px";
}
Quick recap: It fetches the theme selection, and if it's not the default (1), then it changes the CSS file to skin2.css. Then, it fetches the page with AJAX and initializes it, and part of initialization is setting the div dimensions and positions.
Although the theme swap works perfectly through the button (code not shown here), it does not work in Opera if the theme setting, stored in cookies, is a non-default theme, causing the CSS to be swapped during the loading of the page (i.e. this code here). For whatever reason, the .top and .left of my "contentdiv", which are set in the setContentPositions() function, are wrong.
I'd assumed this was happening because the CSS styles were not loaded prior to the JavaScript attempting to set contentdiv's position. To test this theory, I put an alert() in setContentPositions() to test that contentTop & contentLeft were indeed wrong (they were), and then another alert() after the DOM init line that changes the CSS file. With the addition of the alert() after the CSS change within DOM init, it loads perfectly.
Why is the CSS file not processed by the time it does two AJAX fetches? Is a callback function the proper way to fix this?
Edit...
The code for some of the functions used in the above code was requested. Here it is:
function ReplaceJSCSSFile(oldfilename, newfilename, filetype){
var targetelement=(filetype=="js")? "script" : (filetype=="css")? "link" : "none";
var targetattr=(filetype=="js")? "src" : (filetype=="css")? "href" : "none";
var allElements=document.getElementsByTagName(targetelement);
for (var i=allElements.length; i>=0; i--){
if (allElements[i] && allElements[i].getAttribute(targetattr)!=null && allElements[i].getAttribute(targetattr).indexOf(oldfilename)!=-1){
var newelement=CreateJSCSSFile(newfilename, filetype);
allElements[i].parentNode.replaceChild(newelement, allElements[i]);
}
}
}
function CreateJSCSSFile(filename, filetype){
if (filetype=="js"){
var fileref=document.createElement('script');
fileref.setAttribute("type","text/javascript");
fileref.setAttribute("src", filename);
}
else if (filetype=="css"){
var fileref=document.createElement("link");
fileref.setAttribute("rel", "stylesheet");
fileref.setAttribute("type", "text/css");
fileref.setAttribute("href", filename);
}
return fileref;
}
function AJAX_LoadResponseIntoElement (elementId, fetchFileName, cfunc) {
var XMLHRObj;
if (window.XMLHttpRequest) { XMLHRObj=new XMLHttpRequest(); }
else { XMLHRObj=new ActiveXObject("Microsoft.XMLHTTP"); }
XMLHRObj.onreadystatechange=function()
{
if (XMLHRObj.readyState==4 && XMLHRObj.status==200)
{
document.getElementById(elementId).innerHTML=XMLHRObj.responseText;
cfunc();
}
}
XMLHRObj.open("GET",fetchFileName,true);
XMLHRObj.send();
}
Answer: I would suggest to take a different approach. It is complicated to get a callback for the complete loading of a CSS stylesheet loaded dynamically: see this Stack Overflow question for reference:
Is there any way to listen to the onload event for a <link> element?
Do not use JavaScript to set the content position: it is part of styling and should be done in CSS instead. Isn't the role of your CSS skins to modify the appearance of the page? | {
"domain": "codereview.stackexchange",
"id": 74,
"tags": "javascript, css"
} |
Does a material exist that reduces a magnetic field without being affected by the magnetic field itself? | Question: Consider a common bar magnet, magnet 1 resting on a surface with its North pole facing up. Suspended distance $y$ above it (supported side-to-side by a plastic tube) is a second, smaller bar magnet, magnet 2, with its North pole facing down. The magnetic forces between them exceed the force of gravity, and keep magnet 2 suspended. Consider some material, material-X that is moving towards the gap between the two magnets at an initial velocity $v$.
Does a material, material-X exist that would reduce the distance $y$ between the two magnets, and pass through the gap without changing velocity $v$?
Answer: The material you are looking for could be a super conductor.
These materials have zero electrical resistance and can thus expel penetrating field lines within the first few layers of the material. This phenomenon is called the Meissner effect and is the very definition of the superconducting state.
In your case of a plate between two magnets, this would definitively reduce $y$.
For the velocity:
Here, normally the eddy currents induced by a magnetic field lead to a loss of power, given by:
$$P = \frac{\pi^2 B_\text{p}^{\,2} d^2 f^2 }{6k \rho D},$$
In a superconductor, however, the Meissner effect keeps the field from penetrating the bulk, so no dissipative eddy currents are induced at all, and consequently
no kinetic energy should be lost, and thus the velocity will remain unchanged.
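For contrast, in a normal conductor the quoted loss formula gives a finite dissipation. A quick numeric sketch (Python, with illustrative copper-like values chosen by me; $k$ is simply taken as 1):

```python
import math

def eddy_loss(B_p, d, f, k, rho, D):
    """Thin-sheet eddy-current power loss per unit mass:
    P = pi^2 * B_p^2 * d^2 * f^2 / (6 * k * rho * D)."""
    return (math.pi ** 2 * B_p ** 2 * d ** 2 * f ** 2) / (6 * k * rho * D)

# Illustrative values: 1 T peak field, 1 mm sheet, 50 Hz, copper-like numbers.
P_cu = eddy_loss(B_p=1.0, d=1e-3, f=50.0, k=1.0, rho=1.7e-8, D=8960.0)
# Doubling the resistivity halves the dissipated power, so a poorer
# conductor loses less kinetic energy to eddy currents.
P_2rho = eddy_loss(B_p=1.0, d=1e-3, f=50.0, k=1.0, rho=3.4e-8, D=8960.0)
```

This makes the inverse dependence on $\rho$ in the formula concrete: `P_cu` is exactly twice `P_2rho`.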
There is only one problem:
Superconductors can only exist at very low temperatures, so this might not be realizable in the case of your machine... you would at least need a cooling system working with liquid nitrogen to cool it.
Other than superconductors, I do not see any possible material, because either the material is a conductor, in which case you always have losses due to the eddy currents (thus reducing $v$), or the material is not a conductor (in which case $y$ will not decrease). | {
"domain": "physics.stackexchange",
"id": 4744,
"tags": "electromagnetism"
} |
Getting partition boundaries in sorted integer array using only equalTo operator | Question: For my task, I need to find partition boundaries in sorted input. Values are expected to be repeated; I just need to find the range for each value. Please check the comment in the following output for an example.
For some reason, I cannot use the < or > operations; I only have the equality operator.
/*
Input:
Index : Value
0 : 1
1 : 1
2 : 1
3 : 2
4 : 2
5 : 2
6 : 2
Output:
Value 1 is till 2 index
Value 2 is till 6 index
*/
public void printPartitionBoundaries(ArrayList<Integer> array)
{
int f = 0;
int l = array.size()-1;
while (f < l) {
int cp = ((Integer)array.get(f)).intValue();
int of = f;
boolean done = false;
while (!done)
{
int m = (f + l)/2;
if ((l-f) <= 1) {
if ( l == array.size() -1 )
System.out.println("Value " + cp + " is till " + l + " index");
else
System.out.println("Value " + cp + " is till " + (l-1) + " index");
done = true;
break;
}
if (array.get(f).equals(array.get(l)) == false) {
if (array.get(f).equals(array.get(m)) == false)
l = m;
else
f = m;
} else {
f = m;
}
}
f = l;
l = array.size()-1;
}
}
Answer: In general single letter variables are okay, but here short names like first and last would be more readable.
Also, java has a very strong convention of always using braces {}, and an indentation of 4 spaces. (Historically C/C++ normally had 3, and one wished for less nesting of code due to the "superior" quality of the new language.)
List as interface is more generic than ArrayList, so do not overspecify and be unnecessarily restrictive.
Using an exclusive upperbound is often done in computer science, and indeed in the java APIs themselves. Here it would mean having partitions [f, l> like [a, b>, [b, c>, [c, d>, [d, e>. So at the end of the range searching loop f = l;.
public void printPartitionBoundaries(List<Integer> array)
{
int f = 0;
int l = array.size(); // Upperbound exclusive
while (f < l) {
int cp = array.get(f);
of (old first) is a leftover.
The loop flag done could be removed; a while (true) would work as well. Hence use the first if-condition as the while-condition.
Also m was calculated too early.
Also == false and == true should really not be seen.
The nested ifs can be simplified.
Mind that the condition treats two cases of 1 element and 0 elements. Simpler would be while (f < l).
while ((l-f) > 1)
{
int m = (f + l)/2;
if (!array.get(f).equals(array.get(l))
&& !array.get(f).equals(array.get(m))) {
l = m;
} else {
f = m;
}
}
I wonder about the correctness; somehow one expects f = m + 1; to have a loop variant (l-f) diminishing every step.
The following if-statement on l == array.size() should become:
System.out.println("Value " + cp + " is till " + (l-1) + " index");
Check: maybe the while-condition should become > 0, an extra step more.
f = l;
l = array.size();
}
}
Verifying the correctness I leave up to you.
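For comparison, the same equality-only run search can be sketched in Python (not a translation of the Java above, just the intended algorithm; the function name is my own). Only `==` is ever used to compare elements, relying on the input being sorted so that equal values are contiguous:

```python
def partition_boundaries(arr):
    """For each run of equal values in a sorted list, report the index of
    the run's last element, using only equality comparisons."""
    bounds = []
    f = 0
    while f < len(arr):
        lo, hi = f, len(arr) - 1
        # Invariant: arr[lo] == arr[f]; the run's last index is in [lo, hi].
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if arr[mid] == arr[f]:
                lo = mid          # still inside the run
            else:
                hi = mid - 1      # past the end of the run
        bounds.append((arr[f], lo))
        f = lo + 1
    return bounds
```

On the question's example input `[1, 1, 1, 2, 2, 2, 2]` this yields `[(1, 2), (2, 6)]`, matching the expected output.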
Resulting code:
public void printPartitionBoundaries(List<Integer> array)
{
int f = 0;
int l = array.size(); // Upperbound exclusive
while (f < l) { // f != l
int cp = array.get(f);
while ((l-f) > 1) // maybe better f != l too
{
int m = (f + l)/2;
if (!array.get(f).equals(array.get(l))
&& !array.get(f).equals(array.get(m))) {
l = m;
} else {
f = m;
}
}
System.out.println("Value " + cp + " is till " + (l-1) + " index");
f = l;
l = array.size();
}
} | {
"domain": "codereview.stackexchange",
"id": 28456,
"tags": "java, array, binary-search"
} |
What is the use/importance of the equilibrium constant Kc | Question: I understand what the equilibrium constant is, however when reading in my textbook it has occurred to me that I do not understand what it's actual use/importance is. One specification in my syllabus is "Understand the importance of the numerical value of Kc".
Thank you for any help.
I do not understand why this question has been put on hold. How can I demonstrate that I have put in an effort into understanding this question when I literally cannot find the answer on the internet or in any of my four textbooks? I have familiarised myself with Kc, however nowhere can I find its importance, hence the reason that I ask the question.
Answer: $K$ is the value of $Q$ at equilibrium. The expression for $Q$ and $K$ are the same however $Q$ describes the state of a system and $K$ describes the system at equilibrium. Le Chatelier's principle tells us that a system that has $Q\neq K$ will approach $K$. When $Q$ is not $K,$ the system is not at equilibrium.
K tells you where an equilibrium lies. For a given reaction, if $K$ is larger than 1 then the equilibrium lies towards the right. If $K$ is less than one then the equilibrium lies towards the left.
For a simple equilibrium such as acid dissociation, the forward reaction's equilibrium constant is
$$K = \frac{[\ce{H+}][\ce{A-}]}{[\ce{HA}]}$$
so you can compare the strengths of acids by comparing the equilibrium constant. Since the $K$ of nitric acid is larger than the $K$ of hydrofluoric acid we know that it is a stronger acid.
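To make the $Q$-versus-$K$ comparison concrete, here is a small numeric sketch (Python, with invented concentrations, purely for illustration; the function name is my own):

```python
def reaction_quotient(products, reactants):
    """Q for a reaction with all stoichiometric coefficients equal to 1:
    the product of product concentrations over reactant concentrations."""
    q = 1.0
    for c in products:
        q *= c
    for c in reactants:
        q /= c
    return q

K = 1.8e-5                                   # illustrative weak-acid K value
Q = reaction_quotient([1e-4, 1e-4], [0.10])  # [H+][A-] / [HA]
# Q = 1e-7 < K, so by Le Chatelier the dissociation shifts further right.
```

Comparing the computed `Q` against `K` immediately tells you which way the system will move to reach equilibrium.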
Hope this helps! | {
"domain": "chemistry.stackexchange",
"id": 5709,
"tags": "equilibrium"
} |
Force in a member of the loaded overhead sign truss | Question: I was solving problems from Meriam and Kraige's Engineering Mechanics: Statics vol. 1 and I couldn't work my way through this problem.
The question is to find out the force in member $DK$ of the loaded overhead sign truss.
My attempt
I tried eliminating members NM, FG, GT, and HI. I tried using the method of sections, but any section passing through DK cuts more than 3 members.
Please suggest how to approach this problem. Thanks.
Answer: This is a K-Truss. I will try to provide a solution.
Corrections: There should be a cyan member in the truss, otherwise, it is unstable.
As shown above, the system will be cut by a curved line (as highlighted by the red line). This is a special case of the method of sections wherein the rule allows cutting more than 3 members provided that all members except one are concurrent at a point. Separating the left side of the system gives:
The reactions have been solved by static equations. In the above image, we only need the horizontal unknowns $LK$ and $CD$, thus:
Summing up moments at C yields:
$$15\ (8)=LK\ (5)$$
$$LK\ =\ 24\ kN\ (Tension)$$
I will leave the sign as is so as to simplify the analysis of sections.
Next, summing up moments at L yields:
$$15\ (8)\ -\ CD\ (5) = 0$$
$$CD\ =\ -24\ kN$$
Note that we do not need to solve for the values of members BP and MP as they are not needed in the solution.
Next, we cut another section from the system as shown below:
and separating the left section:
Note that members $CD$ and $LK$ have been solved as denoted in yellow color. Two equations and two unknowns will be needed, thus, summing up forces along the x-axis and y-axis yields the following equations respectively:
$$QD\,[4/\sqrt{22.25}]\ +\ QK\,[4/\sqrt{22.25}]\ +\ CD\ +\ LK\ =\ 0$$
$$QD\,[2.5/\sqrt{22.25}]\ -\ QK\,[2.5/\sqrt{22.25}]\ -\ 5\ +\ 15\ =\ 0$$
$$QD\ =\ -9.434\ kN$$
$$QK\ =\ +9.434\ kN$$
Note that we still cannot use the method of joints at joint $D$; therefore, we will continue to solve for more members of the system as:
Separating the right sections:
Summing up moments at J yields:
$$20\ (8)\ +\ ED\ (5)\ =\ 0$$
$$ED\ = \ -32\ kN$$
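At this point the section results can be cross-checked numerically (a Python sketch that simply replays the balances above; forces in kN, and the relation $CD=-LK$ is taken from the text's results):

```python
import math

s = math.sqrt(22.25)       # length factor of the diagonals QD and QK

LK = 15 * 8 / 5            # moments about C: 15(8) = LK(5)
CD = -LK                   # equal magnitude, opposite sense (compression)
# x-balance: (QD + QK)(4/s) + CD + LK = 0  =>  QD = -QK
# y-balance: (QD - QK)(2.5/s) - 5 + 15 = 0
QK = 10 * s / (2 * 2.5)
QD = -QK
ED = -(20 * 8) / 5         # moments about J: 20(8) + ED(5) = 0
```

These reproduce $LK=24$, $CD=-24$, $QD=-9.434$, $QK=+9.434$ and $ED=-32$, and the x-balance indeed closes to zero.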
Now we can use method of joints at joint $D$ because there are only two unknowns left as:
Summing up forces along x-axis yields:
$$-DC\ -\ DQ\,[4/\sqrt{22.25}]\ +\ DE\ +\ DR\,[4/\sqrt{22.25}]\ =\ 0$$
$$DR\ = \ 0$$
Summing up forces in the y-axis yields:
$$-DQ\,[2.5/\sqrt{22.25}]\ -\ DK\ =\ 0$$
$$DK\ =\ 5\ kN$$ | {
"domain": "engineering.stackexchange",
"id": 1712,
"tags": "mechanical-engineering"
} |
Combining the energy from two optical fibers transmitting visible light | Question: Can this be done?
It wouldn't violate energy conservation...
I've tried looking at 1xN beamsplitters used in reverse. But it won't work, since using them in reverse also splits the incoming power, and a portion of it (commonly, 50%) gets lost inside the splitting device.
Assume laser light with the same wavelength in the fibers.
To complete the discussion, could you also comment on what will happen if the light in both fibers is solar, collected from free space?
Answer: Theoretically yes, though the answer to your question may depend on the properties of the two sources.
If, for example, the two sources have different wavelengths, then you need a glass surface with an optical coating that reflects the one wavelength and transmits the other. By the principle of reversibility, if you had one source with the two wavelengths mixed, you would be able to use the same coated surface to split the two wavelengths out, and if you reverse the direction, the paths taken should not change.
From your question though I get the sense that you are talking about two sources with the same wavelength. In that case they need to be coherent and in phase with each other, and you can use a regular 50-50 beamsplitter.
This also follows from the principle of reversibility, though it is not as obvious. If you have one source then you can certainly use a 50-50 beamsplitter to split it into two beams and couple those into optical fibers. So, you must be able to do the same thing in reverse. However as you correctly noted, if you simply aim both beams into the beamsplitter, you'll get two beams out.
The trick is to think of the reversed system slightly differently. In the reversed system, you have two input beams, one of intensity $I$ just like you expected, and a second input beam of zero intensity. These combine in the beamsplitter into two output beams of intensity $I/2$ which are in phase with each other.
Therefore, in the forward case, the two input beams must also be in phase with each other! If they are, then they will destructively interfere to produce a zero-intensity output beam at one port, and constructively interfere to produce the desired output beam of intensity $I$ at the other port.
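This interference bookkeeping is easy to check numerically. The sketch below uses one common real-valued matrix convention for a lossless 50-50 beamsplitter (a symmetric beamsplitter instead puts a 90° phase on reflection, which shifts the relative input phase needed for complete combination):

```python
import numpy as np

# Lossless 50-50 beamsplitter as a unitary matrix (real-valued convention).
BS = np.array([[1.0, 1.0],
               [1.0, -1.0]]) / np.sqrt(2.0)

a = 1.0                       # field amplitude in each input fiber
inputs = np.array([a, a])     # two coherent, in-phase inputs
outputs = BS @ inputs
powers = np.abs(outputs) ** 2
# powers -> [2.0, 0.0]: all input power (1 + 1) exits one port, while the
# other port is dark by destructive interference; total power is conserved.
```

The dark port is exactly the reverse of the zero-intensity "input" in the reversed picture described above.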
This is assuming an ideal lossless beam splitter with all beams perfectly aligned, etc. In reality, you will not get perfect transmission (nor would you in the reverse case) but you will get much better than 50%. | {
"domain": "physics.stackexchange",
"id": 34495,
"tags": "optics, energy, geometric-optics, fiber-optics, solar-cells"
} |
Stocks application using Web Api | Question: I was given a task to build a client-server application, using any technology I want.
The task was to build a database (it doesn't have to be a real database; it can be mocked). The client side should support more than one user.
I didn't use a real database; I just created some shares and a mechanism to update them from time to time using a random value.
I initialized the database with 2 users. I don't need to add or delete users, just to show I can support more than one.
Since I have a background in C# and WPF, I created 3 projects:
1. WPF/MVVM client side
2. common library
3. WebAPI - server side, which "includes" the database.
I would like you to please comment about the correctness of my implementation as if it was a code review for your team.
OOP design, usage of client server, please don't take into account I did it with WPF.
Assume you have half a day to work on the project and then submit.
I would appreciate any comments or questions.
1. WPF project/ MVVM - I used mvvm light tool kit
MainWindow.xaml
<Window x:Class="Client.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Title="MainWindow" Height="350" Width="525">
<Grid>
<Grid.RowDefinitions>
<RowDefinition Height="auto" />
<RowDefinition Height="auto" />
<RowDefinition Height="auto" />
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="Auto"></ColumnDefinition>
<ColumnDefinition Width="Auto"></ColumnDefinition>
</Grid.ColumnDefinitions>
<TextBlock Grid.Row="0" Grid.Column="0" Text="enter use name: " MinWidth="75"/>
<TextBox Grid.Row="0" Grid.Column="1" MinWidth="75" Text="{Binding UserName,Mode=TwoWay,UpdateSourceTrigger=PropertyChanged}"/>
<Button Grid.Row="1" Grid.Column="0" Content="Get All Shares" Command="{Binding GetAllSharesCommand,Mode=TwoWay}"/>
<Button Grid.Row="1" Grid.Column="1" Content="Get My Shares" Command="{Binding GetSharePerUserCommand,Mode=TwoWay}"/>
<DataGrid Grid.Row="2" ItemsSource="{Binding Shares}">
</DataGrid>
</Grid>
</Window>
HttpHandler.cs
namespace Client
{
public class HttpHandler
{
private HttpClient client;
public HttpHandler()
{
client = new HttpClient();
client.BaseAddress = new Uri("http://localhost:18702/");
client.DefaultRequestHeaders.Accept.Clear();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
}
public async Task<IEnumerable<Share>> GetallSharesAsync(string path)
{
IEnumerable<Share> shares = null;
HttpResponseMessage response = await client.GetAsync(path);
if (response.IsSuccessStatusCode)
{
shares = await response.Content.ReadAsAsync<IEnumerable<Share>>();
}
return shares;
}
public async Task<IEnumerable<Share>> GetSharePerUserAsync(string path)
{
IEnumerable<Share> shares = null;
HttpResponseMessage response = await client.GetAsync(path);
if (response.IsSuccessStatusCode)
{
shares = await response.Content.ReadAsAsync<IEnumerable<Share>>();
}
return shares;
}
public async Task<IDictionary<string, int>> GetAllUsersAsync(string path)
{
IDictionary<string, int> users2Id = null;
HttpResponseMessage response = await client.GetAsync(path);
if (response.IsSuccessStatusCode)
{
users2Id = await response.Content.ReadAsAsync<IDictionary<string, int>>();
}
return users2Id;
}
}
}
ClientViewModel.cs
namespace Client
{
public class ClientViewModel : ViewModelBase
{
private ObservableCollection<Share> _shares;
public ObservableCollection<Share> Shares
{
get { return _shares; }
set { _shares = value; }
}
private string _userName;
public string UserName
{
get { return _userName; }
set
{
_userName = value;
RaisePropertyChanged("UserName");
GetAllSharesCommand.RaiseCanExecuteChanged();
}
}
private RelayCommand _getAllSharesCommand;
public RelayCommand GetAllSharesCommand
{
get { return _getAllSharesCommand; }
set
{
_getAllSharesCommand = value;
}
}
private RelayCommand _GetSharesPerUserCommand;
public RelayCommand GetSharePerUserCommand
{
get { return _GetSharesPerUserCommand; }
set { _GetSharesPerUserCommand = value; }
}
private HttpHandler handler;
private Dictionary<string, int> _userName2Id;
public ClientViewModel()
{
GetAllSharesCommand = new RelayCommand(ExecuteGetAllShares, CanExecuteGetAllShares);
GetSharePerUserCommand = new RelayCommand(ExecuteGetSharePerUserCommand, CanExecuteGetSharePerUserCommand);
handler = new HttpHandler();
Shares = new ObservableCollection<Share>();
GetUsers();
}
private async void GetUsers()
{
IDictionary<string, int> userNames2ID = await handler.GetAllUsersAsync("api/users");
_userName2Id = new Dictionary<string, int>(userNames2ID);
}
private bool CanExecuteGetSharePerUserCommand()
{
return !String.IsNullOrEmpty(UserName);
}
private async void ExecuteGetSharePerUserCommand()
{
string temp = "api/shares" + "/" + _userName2Id[UserName];
try
{
IEnumerable<Share> tempShares = await handler.GetSharePerUserAsync(temp);
Shares.Clear();
foreach (var item in tempShares)
{
Shares.Add(item);
}
}
catch (Exception)
{
throw;
}
}
public bool CanExecuteGetAllShares()
{
return !String.IsNullOrEmpty(UserName);
}
public async void ExecuteGetAllShares()
{
try
{
IEnumerable<Share> tempShares = await handler.GetallSharesAsync("api/shares");
Shares.Clear();
foreach (var item in tempShares)
{
Shares.Add(item);
}
}
catch (Exception)
{
throw;
}
}
}
}
2. Common project
Share.cs
namespace Common
{
public class Share
{
public int Id { get; set; }
public string Name { get; set; }
public double Price { get; set; }
}
}
3. Server - WebApi project (yeah, I know, what a great name)
WebApiConfig.cs
namespace SharesApp
{
public static class WebApiConfig
{
public static void Register(HttpConfiguration config)
{
config.MapHttpAttributeRoutes();
config.Routes.MapHttpRoute(
name: "DefaultApi",
routeTemplate: "api/{controller}/{id}",
defaults: new { id = RouteParameter.Optional }
);
}
}
}
Controllers folder
SharesController.cs
namespace SharesApp.Controllers
{
public class SharesController : ApiController
{
//this is a mock for a real database; I'm not sure where I need to connect to the real DB
private static IDataBase _dataBase;
public SharesController()
{
if (_dataBase == null)
{
_dataBase = new SharesDataBase();
}
}
public IEnumerable<Share> GetAllShares()
{
try
{
return _dataBase.GetAllShares();
}
catch (Exception)
{
throw;
}
}
public IHttpActionResult GetUpdatedShares(int id)
{
IEnumerable<Share> share = null;
try
{
share = _dataBase.GetShareById(id);
}
catch (Exception)
{
throw;
}
if (share == null)
{
return NotFound();
}
return Ok(share);
}
}
}
UsersController.cs
namespace SharesApp.Controllers
{
public class UsersController : ApiController
{
private Dictionary<string, int> _userName2Id;
public UsersController()
{
_userName2Id = new Dictionary<string, int>();
_userName2Id.Add("user10", 1);
_userName2Id.Add("user20", 2);
}
public IDictionary<string, int> GetAllUserNames()
{
return _userName2Id;
}
public string GetUserNameById(int id)
{
if (!_userName2Id.ContainsValue(id))
{
return null;
}
return _userName2Id.FirstOrDefault(x => x.Value == id).Key;
}
}
}
Models folder
IDataBase.cs
namespace SharesApp.Models
{
public interface IDataBase
{
IEnumerable<Share> GetAllShares();
IEnumerable<Share> GetShareById(int id);
}
}
SharesDataBase.cs
namespace SharesApp.Models
{
public class SharesDataBase : IDataBase
{
//user name to list of shares names
const string INTC = "INTC";
const string MSFT = "MSFT";
const string TEVA = "TEVA";
const string YAHOO = "YAHOO";
const string P500 = "P500";
private List<Share> _shares;
private Random _random;
private int _maximum = 100;
private int _minimum = 1;
public Dictionary<int, List<string>> User2Shares { get; set; }
private Object thisLock = new Object();
public SharesDataBase()
{
_random = new Random();
User2Shares = new Dictionary<int, List<string>>();
//init the shares list
_shares = new List<Share>
{
new Share { Id = 1, Name = INTC, Price = 1 },
new Share { Id = 2, Name = MSFT, Price = 3.75 },
new Share { Id = 3, Name = TEVA, Price = 16.99},
new Share { Id = 4, Name = YAHOO, Price = 11.0},
new Share { Id = 5, Name = P500, Price = 5.55},
};
//init the users
User2Shares.Add(1, new List<string>() { INTC, MSFT, TEVA });
User2Shares.Add(2, new List<string>() { YAHOO, P500, TEVA });
Task.Run(()=>UpdateShares());
}
private void UpdateShares()
{
while (true)
{
System.Threading.Thread.Sleep(1000);// wait for 1 sec
lock (thisLock)
{
foreach (var item in _shares)
{
int tempRandom = _random.Next(1, 1000);
if (tempRandom % 100 == 0)
{
item.Price = _random.NextDouble() * (_maximum - _minimum) + _minimum;
}
}
}
}
}
public IEnumerable<Share> GetAllShares()
{
return _shares;
}
public IEnumerable<Share> GetShareById(int id)
{
if (!User2Shares.ContainsKey(id))
{
return null;
}
var listOfShares = User2Shares[id];
if (listOfShares.Count == 0)
{
//this userName doesn't have any shares
return null;
}
List<Share> sharesList = new List<Share>();
foreach (var name in listOfShares)
{
var res = _shares.FirstOrDefault(x => x.Name == name);
if (res != null)
{
sharesList.Add(res);
}
//share is missing from the server
}
return sharesList;
}
}
}
Answer: ClientViewModel
If a property setter does no validation, meaning it just stores the assigned value in a backing field, you should use an auto-implemented property instead, like so
public RelayCommand GetAllSharesCommand
{
get; set;
}
which makes the code shorter and more readable.
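Applied to the view model in question, both command properties collapse to one-liners while the constructor wiring stays exactly the same. This is only a sketch, assuming the MVVM Light `RelayCommand` and `ViewModelBase` used in the original code:

```csharp
public class ClientViewModel : ViewModelBase
{
    // Auto-implemented properties: the compiler generates the backing fields,
    // so the explicit _getAllSharesCommand / _GetSharesPerUserCommand fields go away.
    public RelayCommand GetAllSharesCommand { get; set; }
    public RelayCommand GetSharePerUserCommand { get; set; }

    public ClientViewModel()
    {
        // Wiring is identical to the original constructor.
        GetAllSharesCommand = new RelayCommand(ExecuteGetAllShares, CanExecuteGetAllShares);
        GetSharePerUserCommand = new RelayCommand(ExecuteGetSharePerUserCommand, CanExecuteGetSharePerUserCommand);
    }
}
```

Note that `UserName` still needs a full property with a backing field, because its setter does more than store the value (it raises `PropertyChanged` and re-evaluates the command).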
You should only catch exceptions that you can (and want to) handle. If all you have is
catch (Exception)
{
throw;
}
you can just omit the try..catch, because the exception will be handled somewhere else anyway. Here the try..catch only adds noise to your code.
As an example, the ExecuteGetAllShares() method would then look like this
public async void ExecuteGetAllShares()
{
IEnumerable<Share> tempShares = await handler.GetallSharesAsync("api/shares");
Shares.Clear();
foreach (var item in tempShares)
{
Shares.Add(item);
}
}
and would do the exact same thing.
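One caveat to keep in mind here (my own observation, not something the original answer spells out): because the command handlers are `async void`, an exception thrown after the `await` cannot be caught by any caller, so "handle it at another place" effectively means handling it inside the handler itself. A hedged sketch of what that could look like, assuming the request failure is the case worth handling:

```csharp
public async void ExecuteGetAllShares()
{
    try
    {
        IEnumerable<Share> tempShares = await handler.GetallSharesAsync("api/shares");
        Shares.Clear();
        foreach (var item in tempShares)
        {
            Shares.Add(item);
        }
    }
    catch (HttpRequestException ex)
    {
        // Catch only the exception type we can meaningfully handle here:
        // report the failed request to the user instead of rethrowing.
        MessageBox.Show("Could not load shares: " + ex.Message);
    }
}
```

This is different from the empty `catch (Exception) { throw; }` block: it names a concrete exception type and actually does something with it, which is exactly the "can/want to handle" criterion from above.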
"domain": "codereview.stackexchange",
"id": 26430,
"tags": "c#, asp.net-web-api"
} |