How is Memory Segmentation done in 8086? | Question: Basically what I know is that the 8086 can address up to 1 MB of locations, which are divided into 4 segments (code, data, extra and stack) of 64 KB each. But 64 KB * 4 is 256 KB, which doesn't add up to 1 MB (1024 KB). So what about the rest of the space?
Answer: The four segments you mentioned are not static: the segment registers can point to any 64 KB zone in the 1 MB memory. By changing the value of these registers we can point to other fragments of the memory.
The exact computation of the effective address is performed using 2 registers: a segment register (CS, DS, SS, ES) and an offset register (usually BX, SI, DI, BP). Each of those registers is 16-bit.
To get the 20-bit effective address (EA)
the CPU performs
EA = SEGMENT_REG * 10h + OFFSET_REG (mod 2^20).
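As an illustration (not from the original answer; the segment/offset values are arbitrary examples), the address computation can be sketched in a few lines:

```python
def effective_address(segment: int, offset: int) -> int:
    """20-bit physical address from 16-bit segment and offset (8086 style)."""
    return (segment * 0x10 + offset) & 0xFFFFF  # wrap to 20 bits

# Two different segment:offset pairs can name the same physical address:
print(hex(effective_address(0x1234, 0x5678)))  # 0x179b8
print(hex(effective_address(0x1000, 0x79B8)))  # 0x179b8
```

Note also the masking: `FFFF:FFFF` wraps around to `0FFEFh`, which is the famous A20 wrap-around of the 8086.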
It is easy to see that by changing the SEGMENT_REG/OFFSET_REG (0000h-FFFFh), the above equation allows one to span the entire 1MB memory address space (00000h-FFFFFh). | {
"domain": "cs.stackexchange",
"id": 16063,
"tags": "computer-architecture"
} |
What gives mass to dark matter particles? | Question: Assuming that dark matter is not made of WIMPs (weakly interacting massive particles), but interacts only gravitationally, what would be the possible mechanism giving mass to dark matter particles? If they don't interact weakly, they couldn't get mass from interacting with the Higgs field. The energy of gravitational interactions alone does not seem to be sufficient to account for a large particle mass. Would this imply that dark matter consists of a very large number of particles with a very small mass, perhaps much smaller than that of neutrinos? Or do we need quantum gravity to explain the origin of mass of dark matter?
Answer: I think this question contains a misconception unfortunately caused by popular science descriptions of the Standard Model.
The question seems to assume there needs to be some concrete source that particles "get" mass from, as if mass is a resource like money and the Higgs field is giving it out. But that's not right. In a generic field theory there is no issue adding a new field $\psi$ whose particles have mass. The only thing you have to do is make sure the Lagrangian has a term proportional to $\psi^2$.
You might protest that this violates the conservation of energy because the mass has to "come from" somewhere, but that's not right. Mass is the energy price for creating a particle. I don't create money by changing the pricetag of an item in a store.
The reason science popularizers say that mass must come from the Higgs mechanism is because of a peculiarity of the Standard Model (SM). The symmetries of the SM forbid a term such as $\psi^2$ for any field $\psi$ in the SM, so we need a trick to get a mass term. In brief, the Higgs field $\phi$ allows us to write terms like $\phi \psi^2$ which do respect the symmetry. This is an interaction term, but we can set up the Lagrangian so the Higgs field $\phi$ acquires a constant part, yielding the $\psi^2$ mass term we wanted.
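Schematically (a sketch with couplings, factors, and indices suppressed; $g$ and $v$ are illustrative symbols not used elsewhere in the answer):

```latex
g\,\phi\,\psi^2
\;\xrightarrow{\;\phi \,=\, v + h\;}\;
\underbrace{g\,v\,\psi^2}_{\text{mass term}} \;+\; g\,h\,\psi^2
```

Once $\phi$ acquires the constant part $v$, the first term has exactly the $\psi^2$ form of a mass term.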
However, once you start speculating about dark matter models, especially dark matter that does not interact with the electroweak force at all, these constraints don't apply and generically there is nothing forbidding a $\psi^2$ term. There's no need for any special mechanism for "giving" mass. You just treat mass exactly like you did in high school, intro mechanics and quantum mechanics: write it down, call it $m$ and call it a day. | {
"domain": "physics.stackexchange",
"id": 49474,
"tags": "mass, dark-matter, higgs, beyond-the-standard-model, weak-interaction"
} |
In an ideal gas, is the pressure dependent on the mass of the molecules? | Question: If you take the ideal gas law and substitute $\dfrac mM$, where $m$ is mass of the particles and $M$ is molecular weight, you can derive $D = \dfrac{MP}{RT}$ with algebraic manipulation, where $D$ is density.
From this, I initially thought that pressure was dependent on mass, since mass and pressure are both in the numerator on opposite sides. But then I realized that volume is proportional to mass, so essentially it would cancel out the mass in the numerator, making pressure not dependent on mass.
I'm still not $100$% sure about this conclusion. Would appreciate it if anyone can help.
Answer: Pressure, like temperature, is an intensive thermodynamic property. It does not depend on the mass.
Suppose you have a room filled with air (which can typically be considered an ideal gas), at 1 atmosphere pressure and room temperature of 20 C. If you divide the room into two equal parts by a wall, the pressure and temperature in each half will be the same, though the mass of the air in each room is half of that for the entire room.
$\frac{m}{M}$ is simply the number of moles, $n$, of the gas. $D$ is simply $\frac{m}{V}$. So your expression
$$D=\frac{MP}{RT}$$
is simply
$$PV=nRT$$
which is the ideal gas law, where $n$ is the number of moles of gas and $R$ is the universal gas constant.
So pressure is proportional to the number of moles of gas, not the mass of the gas. If you replace a given gas with one with double the molecular weight, for $n$ to be the same, the mass of the replacement gas has to be double the mass. Yet the pressure is the same. Pressure is not dependent on mass.
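A quick numerical check of this point (the values are arbitrary; $R$ in J/(mol K)):

```python
R = 8.314  # universal gas constant, J/(mol K)

def pressure(n_moles: float, T_kelvin: float, V_m3: float) -> float:
    """Ideal gas law: P = nRT / V."""
    return n_moles * R * T_kelvin / V_m3

# Same n, T, V -- helium (M = 4 g/mol) vs. argon (M = 40 g/mol).
n, T, V = 2.0, 293.15, 0.05
p_he = pressure(n, T, V)   # mass of gas: 8 g
p_ar = pressure(n, T, V)   # mass of gas: 80 g, ten times more
print(p_he == p_ar)        # True: pressure depends on n, not on mass
```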
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 69245,
"tags": "thermodynamics, pressure, ideal-gas, kinetic-theory"
} |
Missing Values in Data | Question: I have experienced that most of the datasets contain missing values, which make our task bit challenging.
Please let me know how to fill up those missing values in an efficient way? and is there any specific techniques to handle missing values?
Answer: Several methods are available for filling missing values in data.
Ignore the tuple: the simplest method, but not an effective one.
Fill in the missing value manually.
Use a global constant to fill in the missing value.
Use the attribute mean to fill in the missing value.
Use the attribute mean of all samples belonging to the same class as the given tuple.
Use the most probable value to fill in the missing value (this may be determined with regression, inference tools, or decision tree induction).
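In practice (the answer predates these tools), the attribute-mean method above is a one-liner with pandas; the column name here is a made-up example:

```python
import pandas as pd

df = pd.DataFrame({"age": [25, None, 31, None, 40]})

# Attribute-mean imputation: fill NaNs with the column mean
df["age"] = df["age"].fillna(df["age"].mean())
print(df["age"].tolist())  # [25.0, 32.0, 31.0, 32.0, 40.0]
```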
Reference:
Data Mining – Concepts and Techniques - JIAWEI HAN & MICHELINE KAMBER, ELSEVIER, 2nd Edition. | {
"domain": "datascience.stackexchange",
"id": 1987,
"tags": "data-mining, dataset, data-cleaning, data, missing-data"
} |
Presence of Diabatic heating term in the continuity equation | Question: The tropical circulation paper by Gill (1980) approximates vertical velocity as the sum of pressure and diabatic heating terms (equation 2.5 at page 449).
Since the pressure is already a function of Q, what's the need of writing Q explicitly in this expression? Doesn't it double count p?
Answer: In the book Atmosphere-Ocean Dynamics (Volume 30), the author A. E. Gill defines the perturbation pressure used in the paper that OP references as follows.
Consider the pressure of a reference system (an ocean/atmosphere at rest) and call it $p_0$.
It must be noted that a real atmosphere is never at rest, so we consider deviations from the reference system.
Since we are dealing with a vertical pressure gradient for OP's equation (2.5) (the gravitational force balances the vertical pressure gradient for hydrostatic large-scale circulations), $p_0$ is a function of $z$, where $z$ is the height coordinate.
Then the perturbation pressure $p'$ is defined in the following way:
$$p = p_0(z) + p' $$
So adding the perturbation pressure to the reference pressure gives the total pressure; $p'$ itself is the deviation from the reference state.
Similarly a perturbation density can be defined in the following way
$$\rho = \rho_0(z) + \rho' $$
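A standard step (not spelled out in the answer) shows why this decomposition is useful: the reference state satisfies hydrostatic balance on its own, so subtracting it leaves an equation for the perturbations only:

```latex
\frac{\partial p}{\partial z} = -\rho g,
\qquad
\frac{d p_0}{d z} = -\rho_0 g
\;\;\Longrightarrow\;\;
\frac{\partial p'}{\partial z} = -\rho' g
```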
The basis for doing this comes from classical mechanics Perturbation Theory
The question is: why is this being done?
We are looking for solutions that are essentially oscillatory in nature. In effect this linearization gets rid of the nonlinear cross terms, for which no oscillatory solution exists.
In the specific case of (2.5) from the paper you referenced, in addition to the Brunt–Väisälä frequency one needs to consider a "buoyancy forcing", especially in the tropics, because of the large contributions from diabatic heating. We are essentially talking about an "open system" where, in addition to adiabatic "work" being done, you also have to account for heat exchange between the system and the ambient environment. In reality this is treated as a "pseudoadiabatic" process.
Then the question is: how is the heating rate modeled? In the A. E. Gill book this is given by the following equation
$$ Q_H = -L_v \frac{Dq_w}{Dt}$$
where $L_v$ is the latent heat of vaporization and $q_w$ is the saturation water-vapor mixing ratio.
Equation (2.5) from OP's paper is derived in Gill's book (Chapter 9, equation 9.13.7); it will not be derived here but is stated as is:
$$\omega_n = \frac{\partial \widetilde \eta}{\partial t} + \widetilde b_n$$
Here $\widetilde \eta$ is the vertical coordinate (in OP's paper the vertical coordinate is pressure) and $\widetilde b_n$ is the rate of change of buoyancy per unit volume. This form is known as the buoyancy-forced shallow-water equations.
One can get more details about the perturbation theory for the atmosphere by looking at this link - Atmospheric Oscillations: Linear Perturbation Theory
Similarly if you want horizontal pressure perturbations you model the reference pressure as a function of x and y and then consider deviations thereof.
Useful reading
What is the meaning of pressure in the Navier-Stokes equation? | {
"domain": "earthscience.stackexchange",
"id": 1788,
"tags": "meteorology"
} |
Ansys multiphysics import deformed mesh to HFSS | Question: I want to run a coupled simulation in Ansys - RF heating and deformation of waveguide. Pictures of project layout and settings are here.
I calculated RF fields and surface losses in HFSS; imported them to steady-state thermal, calculated the temperature of the waveguide; calculated structure deformation in static structural. Now what is needed is to run this problem until it converges. But for some reason HFSS does not import the calculated deformed mesh from static structural. What else should I do to make it import the mesh? Can I then use the feedback iterator to automatically recalculate this problem?
Answer: The problem was with the workbench setup. The static structural node should be placed above the thermal node, not beside it. A more detailed description with pictures is available here. | {
"domain": "engineering.stackexchange",
"id": 2377,
"tags": "ansys, ansys-workbench, radio"
} |
Infinitary Counting Logics: 1-sorted vs. 2-sorted framework | Question: There are two ways to extend infinitary logic with counting:
Grädel's way (cf. p. 11):
We extend $L_{\infty\omega}$ by introducing a counting existential quantifier:
$$ \mathcal{A} \models \exists^{\geq m} x \varphi(x) \Leftrightarrow
\textrm{There are at least $m$ $a \in A$ s.t. } \mathcal{A} \models \varphi(a).$$
Introducing such a quantifier for $\mathsf{FO}$ does not increase the expressive power, because one can easily define it with the usual $\exists$-quantifiers. However, $C^1_{\infty\omega} \not\leq L_{\infty\omega}$
(One variable is sufficient to express that $|A|$ is even in $C_{\infty\omega}$ yet it is not definable in $L_{\infty\omega}$).
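For concreteness, the one-variable sentence witnessing this separation (a standard construction, not spelled out in the question) can be written as an infinitary disjunction, saying "for some $i$, there are exactly $2i$ elements":

```latex
\bigvee_{i \in \mathbb{N}} \Bigl( \exists^{\geq 2i} x\, (x = x) \;\wedge\; \neg\, \exists^{\geq 2i+1} x\, (x = x) \Bigr)
```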
Libkin's way (cf. p. 8):
We define a two-sorted framework. We have two kinds of variables now
Number variables in $N^A = \{ x \in \mathbb{N} \mid 0 \leq x \leq |A|-1\}$
Vertex variables (the old type) in $A$
Additionally, we have the natural ordering on the numbers. Furthermore, we introduce
$$ \#x.\varphi \in N^A$$
which represents the number of $x \in A$ satisfying $\mathcal{A} \models \varphi(x)$.
Libkin's way is at least as expressive as Grädel's way, since for
$\exists^{\geq m} x \varphi(x)$ we write
$$ \#x.\varphi \geq m.$$
I'm trying to define Graph Canonization with Infinitary Counting Logics with finitely many variables. Libkin's way is easier at first glance, since we have free numerical variables which can represent graph nodes, edges and so on. Graph Canonization with Fixed-Point Logics works very similarly. However, for the Grädel way there are a lot of results such as $k$-Pebble-Games and a connection to the $k$-dimensional Weisfeiler-Lehman Algorithm, which is a direct connection to the Graph-Isomorphism Problem.
Are the Libkin way and the Grädel way equivalent in expressiveness? Do the mentioned results work with the Libkin way? Are their respective restrictions with $k$ variables equivalent? How do I count the variables for the Libkin approach (numerical variables + vertex variables?)?
Any ideas how to define Graph Canonizations with the Grädel way?
Answer: The answer to the question "Can we directly express graph canonization Grädel's way?" is no.
However, we can substitute each $v<w$ in a $\{E,<\}$-formula with a $\{E\}$-formula (Libkin's way) by Logical Transduction (also known as Logical Interpretation). This $\{E\}$-formula is a sentence without free variables. Hence, we can use a construction (by Grädel and Otto in the same paper) to eliminate all number variables, which results in a Grädel-formula with the same number of vertex variables. | {
"domain": "cstheory.stackexchange",
"id": 2783,
"tags": "graph-theory, lo.logic, graph-isomorphism, descriptive-complexity, finite-model-theory"
} |
Using Threading Building Blocks with ROS | Question:
Hi,
I couldn't find any information about how to use ROS with Threading Building Blocks (TBB). If I add
<rosdep name="tbb" />
to my manifest.xml and do "rosdep install" I get the error "Failed to find rosdep tbb for package ... on OS:ubuntu version:natty". Do I have to create my own rosdep.yaml file? I'm working with ROS Electric on Ubuntu Natty.
Thanks for your help!
Originally posted by christiank on ROS Answers with karma: 113 on 2012-07-06
Post score: 1
Answer:
This is confusing, because rosdep changed significantly between Electric and Fuerte.
On Fuerte it resolves to libtbb-dev.
On Electric, it needs to be defined in some stack's rosdep.yaml.
What does this show?
$ rosdep where_defined tbb
UPDATE: your result shows that tbb is not defined by any stack in your $ROS_PACKAGE_PATH.
If you are developing a stack of your own, add a rosdep.yaml containing something like this stanza:
tbb:
  arch: intel-tbb
  ubuntu: libtbb-dev
  fedora: tbb-devel
Originally posted by joq with karma: 25443 on 2012-07-06
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by christiank on 2012-07-08:
the output is:
$ rosdep where_defined tbb
tbb defined in set([])
Comment by christiank on 2012-07-09:
thanks, this works :)
Finally, I had to add a command to link against tbb to my CMakeLists.txt:
target_link_libraries(mynode tbb) | {
"domain": "robotics.stackexchange",
"id": 10074,
"tags": "rosdep, ros-electric"
} |
Why is an ellipse not a self-intersecting curve? | Question: For a Hamiltonian which is time-independent, the phase trajectories don't intersect. But the Hamiltonian of a one-dimensional harmonic oscillator with constant energy, for example, has an elliptical phase trajectory. Since the phase space point comes back to the same point of the trajectory, how is an ellipse not a self-intersecting curve?
Answer: In the case of Hamiltonian mechanics, fixing the initial condition ($x$ and $p$ for a 2-dimensional phase space) uniquely fixes the time evolution. Trajectories in phase space cannot intersect, because if they did, the same initial condition would yield multiple trajectories. For a periodic trajectory (circle or ellipse) no such problem arises; that is why elliptic trajectories are allowed. Whether you call it self-intersecting or not is, I think, a question of nomenclature. | {
"domain": "physics.stackexchange",
"id": 41518,
"tags": "classical-mechanics, terminology, hamiltonian-formalism, phase-space, non-linear-systems"
} |
How to cross-compile ROS for BeagleBoard-xM (ARM) | Question:
I'd like to cross-compile ROS stacks for a BeagleBoard-xM. How should I proceed? I haven't got any experience with cross-compilation. I've read there is a stack for this purpose called eros, is it still alive? Or is it better to go with OpenEmbedded or CodeSourcery, or is rostoolchain.cmake enough?
I've already managed to build all the stacks necessary by compiling them directly on the BB-xM once (Ubuntu 10.10), but this takes a lot of time and is prone to errors (see here or here). So I'd like to learn how to cross-compile to avoid such problems in the future.
Any hints appreciated.
Originally posted by tom on ROS Answers with karma: 1079 on 2011-02-28
Post score: 0
Answer:
In answer to your question, I think you need to understand the role of each:
OpenEmbedded: it's a system builder; since you already have Ubuntu installed, you don't need it. It can build you toolchains, but there are probably easier ways to get a toolchain if you already have Ubuntu installed.
CodeSourcery: it's a toolchain (cross compiler + utilities). I haven't actually had any problems with these, and eros even has a package which helps you install a CodeSourcery g++/glibc toolchain on 32-bit systems.
Eros is not a system builder, nor a toolchain. It simply brings all the elements together so that you can cross-compile ros easily.
As for the situation you have described, you don't need to worry about a system builder. The next step is a toolchain, code sourcery should be ok, though I also see maverick now includes arm-linux toolchains. These latter toolchains will probably be a better fit (no chance of libc/libc++ mismatches) but I haven't tested them myself.
Once you have a toolchain, you can use eros to help you do a toolchain bridge (i.e. install the apache runtimes, log4cxx and boost into your toolchain) and globally configure the ros environment for cross compiling.
This will work well for simple systems, but if you're using large stacks of packages with a lot of other rosdeps, then it probably isn't viable, as you haven't got a good means of installing those rosdeps into your toolchain. Eros could add build scripts for some of these, though I'd like to see eros fill this need by using something like OpenEmbedded under the hood to do so. If you have Ubuntu already installed, a simple hack that might work is to copy the libs and headers for rosdeps from your board across into your toolchain.
Originally posted by Daniel Stonier with karma: 3170 on 2011-02-28
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 4898,
"tags": "ros, beagleboard, cross-compilation, rosbuild"
} |
Using data from a ROS node in a separate C++/Python program | Question:
Suppose there is a user-defined ROS node. I make it publish to a topic. Now i want to use this data published to the topic as input to a C++/Python program. So how do I
make the node publish to a topic?
route the data from the topic as input to another separate C++/Python program?
Any reference/tutorial with concrete coding examples is sought.
Originally posted by GoBaxter on ROS Answers with karma: 15 on 2015-02-02
Post score: 0
Original comments
Comment by BennyRe on 2015-02-03:
Did I understand you right that you do not want to subscribe to this topic. Instead you want this data as a program parameter?
Comment by GoBaxter on 2015-02-03:
Yes. I have a C++ function and the node is reading sensor data and processing it. I need the outputted data from this node as input for my C++ function. Will subscribing to the topic to which this node publishes work? I'm not sure.
Comment by gustavo.velascoh on 2015-02-04:
I guess that creating a subscriber node that executes your c++ function on callback should work... But I'm not sure what you are trying to do.
Comment by GoBaxter on 2015-02-04:
Yes, I think i should combine my code to a subscriber code and it should work.
Answer:
@gustavovelascoh is right.
Create a subscriber node and call your function from this subscriber.
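A minimal sketch of that pattern in Python (the topic name, message type, and processing function below are placeholders, not from the original thread; the rospy wiring is shown in comments so the data flow itself is runnable):

```python
def my_function(sensor_value):
    """Stand-in for your existing routine that needs the topic data as input."""
    return sensor_value * 2

results = []

def callback(data):
    # Called once per received message; hand the payload to your function.
    results.append(my_function(data))

# In an actual ROS node this callback is wired up roughly like this:
#   import rospy
#   from std_msgs.msg import Float64
#   rospy.init_node("my_processor")
#   rospy.Subscriber("sensor_topic", Float64, lambda msg: callback(msg.data))
#   rospy.spin()

# Simulate one incoming message to show the flow:
callback(21.0)
print(results)  # [42.0]
```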
Originally posted by BennyRe with karma: 2949 on 2015-02-05
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 20766,
"tags": "rosnode, rostopic"
} |
Under what conditions does Latent Heat hold? | Question: Let us say I have a liquid of mass $m$ at its boiling point and add heat to increase to cause turn it all into a vapour. Under what conditions will the heat I actually add be equal to:
$$Q=ml$$
Where $l$ is the specific latent heat of vaporization of the liquid.
Answer: The equation $$Q=ml$$
only holds in a system at constant pressure (isobaric) and constant temperature (isothermal); generically it is not true.
This can be shown as follows:
If we look at the change in the internal energy (which is a state function) of the system:
$$dU=dQ+dW$$
We see that for the above process we have:
$$\Delta U=ml-p(V_2-V_1)$$
Where $V_1$ and $V_2$ are the initial and final volumes respectively (generally $V_2>V_1$). Now take a different process which has the same initial and final state but which reaches it in the following steps: 1. free expansion to the final volume; 2. isochoric (constant-volume) vaporization of all of the liquid. In this process no work is done, but we still have the same change in internal energy (since it is a state function). In this case all of that energy must come from the heat supplied, which is thus simply not equal to $ml$ as one may naively expect; i.e., for this process the heat added would be
$$\tilde Q=ml-p(V_2-V_1)$$
which is actually less than that for the standard isobaric, isothermal phase change. | {
"domain": "physics.stackexchange",
"id": 27539,
"tags": "thermodynamics"
} |
Retrieving the most recent communication from a user | Question: Could someone review an accepted answer I gave on Stack Overflow?
The use-case is as follows:
Given a messaging system where a user can receive a message from a single user and send messages to one or more users, return the most recent communication (sent or received) between a passed userId and the individual(s) that user communicated with.
For the example, I have three tables:
Users
id user_name
1 Walker
2 John
3 Kate
Messages
id senderid body time
1 1 ignored 1 2010-04-01 00:00:00.000
2 1 ignored 2 2010-04-02 00:00:00.000
3 3 ignored 3 2010-04-03 00:00:00.000
4 1 msg A to john and kate 2010-04-10 00:00:00.000
5 3 msg b from kate to walker and john 2010-04-11 00:00:00.000
messages_recipients
id messageid userid
1 1 2
2 1 3
3 2 2
4 3 1
5 4 2
6 4 3
7 5 1
8 5 2
The data is tailored in such a way that I want a list of communications between user Walker and the people Walker has spoken with.
You can see a list of these messages by running the following SQL statement:
SELECT
u2.user_name AS Sender,
u1.user_name AS Receiver,
m.body,
m.time
FROM
messages m
JOIN
messages_recipients mr ON m.id = mr.messageid
JOIN
users u1 ON mr.userid = u1.id
JOIN
users u2 ON m.senderid = u2.id
ORDER BY
time DESC
Now that we have the test scenario, the part I want reviewed: returning the most recently communicated message between Walker, John, and Kate.
BEGIN
DECLARE @UserId INT = 1
--A. Main Query
SELECT
CASE
WHEN mtemp.senderid = 1 --@UserId
THEN
CONCAT('Message To: ', receivers.user_name)
ELSE
CONCAT('Message From: ' , senders.user_name)
END AS MessageType,
mtemp.body,
mtemp.time
FROM
messages mtemp
INNER JOIN users senders ON
mtemp.senderid = senders.id
INNER JOIN
(
--B. Inner Query determining most recent message (based on time)
-- between @UserID and the person @UserID
-- Communicated with (either as sender or receiver)
select userid,max(maxtime) as maxmaxtime from
(
--C.1. First part of Union Query Aggregating sent/received messages on passed @UserId
SELECT
m2.body,
kk.*
FROM
`messages` m2 INNER JOIN
(
SELECT DISTINCT
userid,
MAX(m.time) AS MaxTime
FROM
messages m INNER JOIN
messages_recipients mr ON m.id = mr.messageid AND
m.senderid = 1 --@UserId
GROUP BY
mr.userid
) kk on m2.time = kk.MaxTime and m2.senderid = 1 --@UserId
UNION
--C.2. Second part of Union Query Aggregating sent/received messages on passed @UserId
SELECT
m1.body,
jj.*
FROM
`messages` m1 INNER JOIN
----C.2a. Innermost query of users who sent messages to userid
(SELECT DISTINCT
senderid as userid,
MAX(m.time) AS MaxTime
FROM
messages m INNER JOIN
messages_recipients mr ON m.id = mr.messageid AND
mr.userid = 1 --@UserId
GROUP BY
m.senderid) jj on m1.time = jj.MaxTime and m1.senderid = jj.userid
) MaximumUserTime
group by
MaximumUserTime.userid
) AggregatedData on mtemp.time = AggregatedData.maxmaxtime
INNER JOIN users receivers on AggregatedData.userid = receivers.id
ORDER BY `time` DESC
END
To test in phpMyAdmin, you'll have to remove the comments, and the BEGIN/END and DECLARE statements as well. I just wanted to post this as it would look in a procedure.
When I run this query I get the following results:
MessageType body time
Message From: Kate msg b from kate to walker and john 2010-04-11 00:00:00.000
Message To: John msg A to john and kate 2010-04-10 00:00:00.000
That's the most recent communications concerning Walker among all those users who have communicated with Walker.
Is there a better way to run this query?
Answer: My solution has a similar complexity to yours (14 steps in EXPLAIN), assuming MySQL's query optimizer is smart enough. However, in my opinion, this formulation will be much easier to understand.
SELECT IF(recipientid,
CONCAT('Message To: ', recipient.user_name),
CONCAT('Message From: ', sender.user_name)) AS MessageType,
body,
time
FROM
( -- Join messages with recipients, relabeling userids in terms of interlocutor and self
SELECT messageid, time, body, NULL AS senderid, userid AS recipientid, userid AS interlocutor, senderid AS self
FROM messages_recipients
INNER JOIN messages
ON messages.id = messageid
UNION
SELECT messages.id, time, body, senderid, NULL, senderid, userid
FROM messages_recipients
INNER JOIN messages
ON messages.id = messageid
) AS thread_latest
LEFT OUTER JOIN users AS recipient
ON recipient.id = recipientid
LEFT OUTER JOIN users AS sender
ON sender.id = senderid
WHERE
-- Discard all but the latest message in each thread
NOT EXISTS (
SELECT messageid
FROM
(
SELECT messageid, time, userid AS interlocutor, senderid AS self
FROM messages_recipients
INNER JOIN messages
ON messages.id = messageid
UNION
SELECT messages.id, time, senderid, userid
FROM messages_recipients
INNER JOIN messages
ON messages.id = messageid
) AS thread_later
WHERE
thread_later.self = thread_latest.self AND
thread_later.interlocutor = thread_latest.interlocutor AND
thread_later.time > thread_latest.time
) AND
self = 1 --@UserId
ORDER BY time DESC;
The main insight is that once you relabel senders and recipients in terms of interlocutor and self, it's just a simple matter of filtering out the results. Retain only those messages where self is the user in question. Then, every row that has the same interlocutor conceptually constitutes a thread.
Notice that there is a subquery that appears twice. We can make it clearer by creating a view.
CREATE VIEW threads AS
-- Messages I sent
SELECT messageid, time, body, NULL AS senderid, userid AS recipientid, userid AS interlocutor, senderid AS self
FROM messages_recipients
INNER JOIN messages
ON messages.id = messageid
UNION
-- Messages I received
SELECT messages.id, time, body, senderid, NULL, senderid, userid
FROM messages_recipients
INNER JOIN messages
ON messages.id = messageid;
SELECT IF(recipientid,
CONCAT('Message To: ', recipient.user_name),
CONCAT('Message From: ', sender.user_name)) AS MessageType,
body,
time
FROM
threads AS thread_latest
LEFT OUTER JOIN users AS recipient
ON recipient.id = recipientid
LEFT OUTER JOIN users AS sender
ON sender.id = senderid
WHERE
NOT EXISTS (
SELECT messageid
FROM threads AS thread_later
WHERE
thread_later.self = thread_latest.self AND
thread_later.interlocutor = thread_latest.interlocutor AND
thread_later.time > thread_latest.time
) AND
self = 1 --@UserId
ORDER BY time DESC;
I'll take this opportunity to point out that this query is where PostgreSQL really shines. Two features in PostgreSQL (since version 8.4) make it easy. The WITH clause lets you define a helper view in the query itself. More importantly,
window functions let you partition the threads by interlocutor, which is precisely the tricky part about this problem.
WITH threads(messageid, time, body, senderid, recipientid, interlocutor, self) AS (
-- Messages I sent
SELECT messageid, time, body, NULL, userid, userid, senderid
FROM messages_recipients
INNER JOIN messages
ON messages.id = messageid
UNION
-- Messages I received
SELECT messages.id, time, body, senderid, NULL, senderid, userid
FROM messages_recipients
INNER JOIN messages
ON messages.id = messageid
)
SELECT CASE WHEN recipientid IS NOT NULL
THEN 'Message To: ' || recipient.user_name
ELSE 'Message From: ' || sender.user_name
END AS MessageType,
body,
time
FROM (
SELECT *,
RANK() OVER (PARTITION BY interlocutor ORDER BY time DESC) AS thread_pos
FROM threads
WHERE self = 1 --@UserId
) AS my_threads
LEFT OUTER JOIN users AS recipient
ON recipient.id = recipientid
LEFT OUTER JOIN users AS sender
ON sender.id = senderid
WHERE thread_pos = 1 -- Only the latest message per thread
ORDER BY time DESC; | {
"domain": "codereview.stackexchange",
"id": 4342,
"tags": "sql, mysql"
} |
Can Boosted Trees predict below the minimum value of the training label? | Question: I am using Gradient Boosted Trees (with CatBoost) for a regression task. Can GB trees predict a label that is below the minimum (or above the maximum) that was seen in the training?
For instance, if the minimum value of the label is 10, would GB trees be able to predict 5?
Answer: Yes, gradient boosted trees can make predictions outside the training labels' range. Here's a quick example:
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingRegressor
X, y = make_classification(random_state=42)
gbm = GradientBoostingRegressor(max_depth=1,
n_estimators=10,
learning_rate=1,
random_state=42)
gbm.fit(X,y)
preds = gbm.predict(X)
print(preds.min(), preds.max())
outputs -0.010418732339562916 1.134566081403055 (and make_classification produces labels of only 0 and 1).
Now, this is unrealistic for a number of reasons: I'm using a regression model for a classification problem, I'm using learning rate 1, depth only 1, no regularization, etc. All of these could be made more proper and we could still find an example with predictions outside the training range, but it would be harder to construct such an example. I would say that in practice, you're unlikely to get anything very far from the training range.
See the (more theoretical) example in this comment of an xgboost github issue, found via this cv.se post.
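For contrast, a quick check of the non-boosted case: a random forest's predictions are averages of training targets, so they can never leave the training range (the data and parameters here are arbitrary):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 10 + X.ravel()              # training labels live in [10, 20]

rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Even far outside the training inputs, predictions stay inside [y.min(), y.max()]
X_new = np.array([[-100.0], [5.0], [1000.0]])
preds = rf.predict(X_new)
print(y.min() <= preds.min() and preds.max() <= y.max())  # True
```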
To be clear, decision trees, random forests, and adaptive boosting all cannot make predictions outside the training range. This is specific to gradient boosted trees. | {
"domain": "datascience.stackexchange",
"id": 8934,
"tags": "regression, boosting, natural-gradient-boosting"
} |
AC power, EM waves and Poynting vector | Question: Question: Does electric energy propagate as electromagnetic waves and if that is the case, how? This is clearly not the same as propagating EM waves by an antenna whatsoever. If that was the case we wouldn't be filling the world with wires.
Some details about what I know about our electrical system below.
In AC power lines, electrons oscillate back and forth for miles and miles. If I understand this right, the electrons move in the direction of the electric field and the two are in sync. If that is the case, then there must be an electric field outside the wires and on the surface of the wire. The magnetic field changes direction orthogonally to the direction of the current and electric field and is also on the wire.
If I have that right, are there electromagnetic waves propagating from the wires every which way? Or is it linear, and if so which direction? Three-phase wires have certain distance from each other and typical home wirings are close to each other in one insulated solid configuration.
If this wave does exist, and the wires go on for miles, I just can't picture how this oscillation may be the source of EM waves. Does it happen at each point on the wire, knowing that the waves are ultra low frequency? Something does not add up in my head.
Also, the Poynting vector, being a cross product of the magnetic and electric field, does this mean that energy enters the conductors from outside the wires into the current carrying wires at all points inward orthogonal?
Furthermore, at the source of power generation, I assume EM waves exist with its own direction moving away in certain direction. Does it result in Poynting vector pointing outward at the source?
(I am interpreting the Poynting vector as being the energy density and its direction, and that it is neither in the direction of the electric or magnetic field, nor in the direction of EM propagation, nor in the direction of the current. That seems to cover all 3 dimensions. It doesn't sound right, as there is no 4th dimension.)
(Furthermore, I am interpreting that most of the EM field is not far field but near field, and I am not considering any backward propagation towards the source due to all the other capacitive and inductive loads.) And I assume that in essence it is a pretty complicated phenomenon.
Energy entering the wires at all points, is the most difficult for me to picture if that is the case. At the source I assume EM waves leave into the empty space as is energy density flux Poynting vector, they point away leaving the source into empty space, and the same for the load; energy flux pointing out leaving the load.
And the energy that enters the wires may not necessarily be the same energy that leaves the source, and the same with the energy that leaves the load. To my understanding, they don't have to be the same energy. So how on earth does the communication take place?
And at last, how does the energy propagate from the source to the load if not in some way by means of EM propagation? The energy certainly moves far faster than the electrons vibrating back and forth and in one direction too from the source to the load.
On another note, I do not have the slightest idea how energy moves in DC circuit. My question comes from the understanding that energy is delivered by means of photons/electromagnetic waves when it comes to currents, electricity phenomenon.
There is something I am surely missing. I can only think that this energy moving from the source to the load is not in the form of radiation or propagation which can not be in the direction of current either.
I hope I made some sense and my question is at least somewhat clear. Most likely, I will be asked to edit. It will end up the same long narrative, just pieced together bit by bit instead of all in one place.
Thank you in advance.
Answer:
Does electric energy propagate as electromagnetic waves and if that is the case, how? This is clearly not the same as propagating em waves by an antenna whatsoever.
Yes, it does propagate as electromagnetic waves, but it is what is known as the near field. This means that the energy propagates in the fields outside the wire (rather than in the wire), but the energy stays localized near the wire and does not radiate far away.
In AC power lines, electrons oscillate back and forth for miles and miles,
The electrons often oscillate less than 1 mm. In very high currents maybe 1 cm. To get an electron to oscillate a mile would require currents so high that it would instantly vaporize the wire.
I think you are thinking of the fields, which do have a long wavelength.
If I have that right, are there electromagnetic waves propagating from the wires every which way?
Yes, there are far field waves, but they carry very little energy.
Also, the Poynting vector, being a cross product of the magnetic and electric field, does this mean that energy enters the conductors from outside the wires into the current carrying wires at all points inward orthogonal?
No, you have the orientation wrong. There is a surface charge on the wire which makes the E field outside the wire point radially outward. The current in the wire produces a B field outside the wire which is circumferential. The cross product outside the wire is therefore along the wire. This is the energy that is transported to the load.
Inside the wire the E field points along the wire, while the B field remains circumferential. The cross product inside the wire is therefore radially inward. This is the energy that is lost to resistive heating in the wire itself.
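In cylindrical coordinates around the wire, with $\hat{r}$ radial, $\hat{\varphi}$ circumferential and $\hat{z}$ along the wire, the two cases above are just the right-hand rule applied to the cross product:

$$\mathbf S = \frac{1}{\mu_0}\,\mathbf E \times \mathbf B, \qquad \text{outside: } \hat{r} \times \hat{\varphi} = \hat{z} \ \text{(along the wire)}, \qquad \text{inside: } \hat{z} \times \hat{\varphi} = -\hat{r} \ \text{(radially inward)}.$$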
The Poynting vector points out from the source and in towards the load. The overall energy flow looks something like this:
I do not have the slightest idea how energy moves in DC circuit.
The principles are the same as above, except that there is no far field. | {
"domain": "physics.stackexchange",
"id": 81282,
"tags": "electromagnetic-radiation, electric-circuits, electric-current, poynting-vector"
} |
CUDA/C++ Host/Device Polymorphic Class Implementation | Question: I have an abstract class which acts as an interface for a variety of physical models providing Electric/Magnetic Fields as the result of a number of phenomena. I'm wondering if how I've done it is a good way to do it, or if there's a better way to achieve my goals.
My goals are:
A common interface for returning the field strength (and a few related things through different functions) according to a physical field model chosen by the user at runtime
The ability to call functions of this interface from host AND device with the same underlying implementation
As performant as possible
The way I've achieved this is to: specify a base BField class with __host__ __device__ specified pure virtual interface functions, and overwrite these with a number of derived classes (here DipoleB). On host, when an instance of the derived class is created, a mirror image of the instance is also created on device and a pointer to the on-device instance is stored on host. This on-device instance is also destroyed on host-instance destruction. The interface functions (here it's getBFieldAtS(double, double) and getGradBAtS(double, double)) are called on device by a __global__ kernel which is run over ~3.5mil particles.
BField.h (Base class):
#ifndef BFIELD_H
#define BFIELD_H
#include <string>
//CUDA includes
#include "host_defines.h"
class BField
{
protected:
BField** this_d{ nullptr };
#ifndef __CUDA_ARCH__ //host code
std::string modelName_m;
#else //device code
const char* modelName_m; //placeholder, not used
#endif /* !__CUDA_ARCH__ */
__host__ virtual void setupEnvironment() = 0; //define this function in derived classes to assign a pointer to that function's B Field code to the location indicated by BFieldFcnPtr_d and gradBFcnPtr_d
__host__ virtual void deleteEnvironment() = 0;
__host__ __device__ BField() {}
public:
__host__ __device__ virtual ~BField() {};
__host__ __device__ BField(const BField&) = delete;
__host__ __device__ BField& operator=(const BField&) = delete;
__host__ __device__ virtual double getBFieldAtS(const double s, const double t) const = 0;
__host__ __device__ virtual double getGradBAtS (const double s, const double t) const = 0;
__host__ virtual std::string name() const { return modelName_m; }
__host__ virtual BField** getPtrGPU() const { return this_d; } //once returned, have to cast it to the appropriate type
};
#endif
DipoleB.h (Derived):
#ifndef DIPOLEB_BFIELD_H
#define DIPOLEB_BFIELD_H
#include "BField\BField.h"
#include "physicalconstants.h"
constexpr double B0{ 3.12e-5 }; //won't change from sim to sim
class DipoleB : public BField
{
protected:
//Field simulation constants
double L_m{ 0.0 };
double L_norm_m{ 0.0 };
double s_max_m{ 0.0 };
//specified variables
double ILATDegrees_m{ 0.0 };
double ds_m{ 0.0 };
double errorTolerance_m{ 0.0 };
//protected functions
__host__ virtual void setupEnvironment() override;
__host__ virtual void deleteEnvironment() override;
__host__ __device__ double getSAtLambda(const double lambdaDegrees) const;
__host__ __device__ double getLambdaAtS(const double s) const;
public:
__host__ __device__ DipoleB(double ILATDegrees, double errorTolerance = 1e-4, double ds = RADIUS_EARTH / 1000.0);
__host__ __device__ ~DipoleB();
__host__ __device__ DipoleB(const DipoleB&) = delete;
__host__ __device__ DipoleB& operator=(const DipoleB&) = delete;
//for testing
double ILAT() const { return ILATDegrees_m; }
double ds() const { return ds_m; }
double L() const { return L_m; }
double s_max() const { return s_max_m; }
__host__ virtual void setds(double ds) { ds_m = ds; }
__host__ __device__ double getBFieldAtS(const double s, const double t) const override;
__host__ __device__ double getGradBAtS (const double s, const double t) const override;
__host__ double getErrTol() const { return errorTolerance_m; }
__host__ double getds() const { return ds_m; }
};
#endif
DipoleB.cu (Derived member function definition and some CUDA kernels):
#include "BField\DipoleB.h"
#include "device_launch_parameters.h"
#include "ErrorHandling\cudaErrorCheck.h"
#include "ErrorHandling\cudaDeviceMacros.h"
//setup CUDA kernels
__global__ void setupEnvironmentGPU_DipoleB(BField** this_d, double ILATDeg, double errTol, double ds)
{
ZEROTH_THREAD_ONLY("setupEnvironmentGPU_DipoleB", (*this_d) = new DipoleB(ILATDeg, errTol, ds));
}
__global__ void deleteEnvironmentGPU_DipoleB(BField** dipoleb)
{
ZEROTH_THREAD_ONLY("deleteEnvironmentGPU_DipoleB", delete ((DipoleB*)(*dipoleb)));
}
__host__ __device__ DipoleB::DipoleB(double ILATDegrees, double errorTolerance, double ds) :
BField(), ILATDegrees_m{ ILATDegrees }, ds_m{ ds }, errorTolerance_m{ errorTolerance }
{
L_m = RADIUS_EARTH / pow(cos(ILATDegrees * RADS_PER_DEG), 2);
L_norm_m = L_m / RADIUS_EARTH;
s_max_m = getSAtLambda(ILATDegrees_m);
#ifndef __CUDA_ARCH__ //host code
modelName_m = "DipoleB";
setupEnvironment();
#endif /* !__CUDA_ARCH__ */
}
__host__ __device__ DipoleB::~DipoleB()
{
#ifndef __CUDA_ARCH__ //host code
deleteEnvironment();
#endif /* !__CUDA_ARCH__ */
}
//B Field related kernels
__host__ __device__ double DipoleB::getSAtLambda(const double lambdaDegrees) const
{
//double x{ asinh(sqrt(3.0) * sinpi(lambdaDegrees / 180.0)) };
double sinh_x{ sqrt(3.0) * sinpi(lambdaDegrees / 180.0) };
double x{ log(sinh_x + sqrt(sinh_x * sinh_x + 1)) }; //trig identity for asinh - a bit faster - asinh(x) == ln(x + sqrt(x*x + 1))
return (0.5 * L_m / sqrt(3.0)) * (x + 0.25 * (exp(2.0*x)-exp(-2.0*x))); /* L */ //0.25 * (exp(2*x)-exp(-2*x)) == sinh(x) * cosh(x) and is faster
}
__host__ __device__ double DipoleB::getLambdaAtS(const double s) const
{// consts: [ ILATDeg, L, L_norm, s_max, ds, errorTolerance ]
double lambda_tmp{ (-ILATDegrees_m / s_max_m) * s + ILATDegrees_m }; //-ILAT / s_max * s + ILAT
double s_tmp{ s_max_m - getSAtLambda(lambda_tmp) };
double dlambda{ 1.0 };
bool over{ 0 };
while (abs((s_tmp - s) / s) > errorTolerance_m) //errorTolerance
{
while (1)
{
over = (s_tmp >= s);
if (over)
{
lambda_tmp += dlambda;
s_tmp = s_max_m - getSAtLambda(lambda_tmp);
if (s_tmp < s)
break;
}
else
{
lambda_tmp -= dlambda;
s_tmp = s_max_m - getSAtLambda(lambda_tmp);
if (s_tmp >= s)
break;
}
}
if (dlambda < errorTolerance_m / 100.0) //errorTolerance
break;
dlambda /= 5.0; //through trial and error, this reduces the number of calculations usually (compared with 2, 2.5, 3, 4, 10)
}
return lambda_tmp;
}
__host__ __device__ double DipoleB::getBFieldAtS(const double s, const double simtime) const
{// consts: [ ILATDeg, L, L_norm, s_max, ds, errorTolerance ]
double lambda_deg{ getLambdaAtS(s) };
double rnorm{ L_norm_m * cospi(lambda_deg / 180.0) * cospi(lambda_deg / 180.0) };
return -B0 / (rnorm * rnorm * rnorm) * sqrt(1.0 + 3 * sinpi(lambda_deg / 180.0) * sinpi(lambda_deg / 180.0));
}
__host__ __device__ double DipoleB::getGradBAtS(const double s, const double simtime) const
{
return (getBFieldAtS(s + ds_m, simtime) - getBFieldAtS(s - ds_m, simtime)) / (2 * ds_m);
}
//DipoleB class member functions
void DipoleB::setupEnvironment()
{// consts: [ ILATDeg, L, L_norm, s_max, ds, errorTolerance ]
CUDA_API_ERRCHK(cudaMalloc((void **)&this_d, sizeof(BField**)));
setupEnvironmentGPU_DipoleB <<< 1, 1 >>> (this_d, ILATDegrees_m, errorTolerance_m, ds_m);
CUDA_KERNEL_ERRCHK_WSYNC();
}
void DipoleB::deleteEnvironment()
{
deleteEnvironmentGPU_DipoleB <<< 1, 1 >>> (this_d);
CUDA_KERNEL_ERRCHK_WSYNC();
CUDA_API_ERRCHK(cudaFree(this_d));
}
Calling Functions:
__device__ double accel1dCUDA(const double vs_RK, const double t_RK, const double* args, BField** bfield, EField** efield) //made to pass into 1D Fourth Order Runge Kutta code
{//args array: [s_0, mu, q, m, simtime]
double F_lor, F_mir, stmp;
stmp = args[0] + vs_RK * t_RK; //ps_0 + vs_RK * t_RK
//Mirror force
F_mir = -args[1] * (*bfield)->getGradBAtS(stmp, t_RK + args[4]); //-mu * gradB(pos, runge-kutta time + simtime)
//Lorentz force - simply qE - v x B is taken care of by mu - results in kg.m/s^2 - to convert to Re equivalent - divide by Re
F_lor = args[2] * (*efield)->getEFieldAtS(stmp, t_RK + args[4]); //q * EFieldatS
return (F_lor + F_mir) / args[3];
}//returns an acceleration in the parallel direction to the B Field
__device__ double foRungeKuttaCUDA(const double y_0, const double h, const double* funcArg, BField** bfield, EField** efield)
{
// dy / dt = f(t, y), y(t_0) = y_0
// funcArgs are whatever you need to pass to the equation
// args array: [s_0, mu, q, m, simtime]
double k1, k2, k3, k4; double y{ y_0 }; double t_RK{ 0.0 };
k1 = accel1dCUDA(y, t_RK, funcArg, bfield, efield); //k1 = f(t_n, y_n), returns units of dy / dt
t_RK = h / 2;
y = y_0 + k1 * t_RK;
k2 = accel1dCUDA(y, t_RK, funcArg, bfield, efield); //k2 = f(t_n + h/2, y_n + h/2 * k1)
y = y_0 + k2 * t_RK;
k3 = accel1dCUDA(y, t_RK, funcArg, bfield, efield); //k3 = f(t_n + h/2, y_n + h/2 * k2)
t_RK = h;
y = y_0 + k3 * t_RK;
k4 = accel1dCUDA(y, t_RK, funcArg, bfield, efield); //k4 = f(t_n + h, y_n + h k3)
return (k1 + 2 * k2 + 2 * k3 + k4) * h / 6; //returns delta y, not dy / dt, not total y
}
__global__ void computeKernel(double** currData_d, BField** bfield, EField** efield,
const double simtime, const double dt, const double mass, const double charge, const double simmin, const double simmax)
{
unsigned int thdInd{ blockIdx.x * blockDim.x + threadIdx.x };
double* v_d{ currData_d[0] }; const double* mu_d{ currData_d[1] }; double* s_d{ currData_d[2] }; const double* t_incident_d{ currData_d[3] }; double* t_escape_d{ currData_d[4] };
if (t_escape_d[thdInd] >= 0.0) //particle has escaped, t_escape is >= 0 iff it has both entered and is outside the sim boundaries
return;
else if (t_incident_d[thdInd] > simtime) //particle hasn't "entered the sim" yet
return;
else if (s_d[thdInd] < simmin * 0.999) //particle is out of sim to the bottom and t_escape not set yet
{
t_escape_d[thdInd] = simtime;
return;
}
else if (s_d[thdInd] > simmax * 1.001) //particle is out of sim to the top and t_escape not set yet
{
t_escape_d[thdInd] = simtime;
return;
}
//args array: [ps_0, mu, q, m, simtime]
const double args[]{ s_d[thdInd], mu_d[thdInd], charge, mass, simtime };
v_d[thdInd] += foRungeKuttaCUDA(v_d[thdInd], dt, args, bfield, efield) / 2;
s_d[thdInd] += v_d[thdInd] * dt;
}
A few questions:
Am I achieving my goals in the most efficient way possible?
Are there any performance issues incurred by the fact that I'm creating one instance of a derived class on GPU and calling the interface function ~3.5 million * number of iterations times? That is, what are the implications of this many calls to a single member function?
This produces expected physical results (that is, calls to interface functions are producing the correct values because the particles behave appropriately), however when running through cuda-memcheck, I get a whole host of issues. I'm thinking this is because of how BField is set up and the fact that calling the (virtual) interface functions accesses something that would be outside the memory footprint of a Base instance:
[BField instance memory footprint][-------(x impl of virt fcn here)----DipoleB Instance footprint-------]
and cuda-memcheck doesn't think this should be valid. Does this sound feasible? Do I understand what is going on right?
Any non-optimal performance issues incurred by device-side dynamic allocation? Is there even another way to do this?
Answer: nice!
The code is generally rather good. I don’t have a lot to say about the language usage in the individual lines.
The idea of using an abstract base to do different implementations selected at runtime is classic, easy to follow, and costs one indirection at calling time. Any run-time system to choose between different impls would have the same cost at least.
I don’t understand the part about making a matching copy on the device.
Why do you have modelName_m dummied out on the device? Can’t you just leave it out? Since its only use is to be set from a lexical string literal, why not keep it as a const char*? The base class should take this as a constructor argument and the derived class supply it in the initializer list.
one thing stuck out…
setupEnvironment and deleteEnvironment are called from the constructor and destructor respectively. But they are virtual functions.
The object "is" of the type being constructed here, even if it is really of a derived class: it runs the base class ctors and then updates the vtable for the derived class and then runs the derived ctor body.
So, don’t call virtual functions from the ctor/dtor. Since the call resolves to the class currently under construction anyway, make the function non-virtual to document what actually happens.
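A minimal standalone sketch of that pitfall (class and function names here are illustrative, not from the posted code): the virtual call made while the base constructor runs resolves to the base's version, never to the override.

```cpp
#include <string>

struct Base {
    std::string called;
    Base() { setup(); }   // looks polymorphic, but the dynamic type is still Base here
    virtual ~Base() = default;
    virtual void setup() { called = "Base::setup"; }
};

struct Derived : Base {
    void setup() override { called = "Derived::setup"; }
};

// Constructing a Derived runs Base() first, whose call to setup()
// dispatches to Base::setup, not Derived::setup.
std::string whichSetupRuns() {
    Derived d;
    return d.called;
}
```

This is why a virtual call in a ctor misleads the reader: it silently does a non-virtual thing. (If the function were pure virtual and the call happened from the base ctor, the behaviour would be outright undefined.)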
other
Passing double everywhere when values have different meanings is a good way to crash your spacecraft on Mars. Consider a zero-overhead unit labeling template, or at the very least typedefs for different meanings even though they are not checked by the compiler.
angle_in_degrees should be the type name, not the variable name. But even better, just angle_t and the choice of degrees or whatever is made by the caller and it converts (perhaps at compile time!) to the unit that the type really holds. | {
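A sketch of what such a zero-overhead unit label could look like (everything here is hypothetical, not an existing library):

```cpp
// A double wrapped in a type tagged by its unit: same size and codegen as a
// plain double, but mixing units is a compile-time error.
template <typename UnitTag>
struct Quantity {
    double value;
    constexpr explicit Quantity(double v) : value{v} {}
};

struct DegreesTag {};
struct MetresTag {};

using Degrees = Quantity<DegreesTag>;
using Metres  = Quantity<MetresTag>;

constexpr double PI = 3.141592653589793;

// Conversions are explicit and can happen at compile time.
constexpr double toRadians(Degrees d) { return d.value * PI / 180.0; }

// toRadians(Metres{5.0});   // does not compile: lengths are not angles
```

DipoleB's constructor could then take a Degrees instead of a bare double, and accidentally passing a length or a tolerance where an angle is expected would be rejected by the compiler.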
"domain": "codereview.stackexchange",
"id": 30443,
"tags": "c++, polymorphism, cuda"
} |
Is it better to adjust the natural lighting (while recording the video) or to subsequently apply filters on the original video? | Question: For the purpose of object detection, is it better to adjust the natural lighting (while recording the video) or to apply filters (e.g. brightness filters, etc.) on the original video to make it brighter?
My intuition is that it shouldn't matter when you adjust the natural lighting or do it after with video filters.
Answer: Personally, I'd say as long as the object is visible don't do either. If the model has been well built and if lighting changes would help, the convolution operation weights would learn an operation similar to contrast or brightness changes.
On the other hand if the object visibility is an issue, then natural lighting changes would be better, due to the lack of potential artefacts a filter would create.
So overall, I'd say natural lighting changes should be more helpful (Assuming model is built well) and brightness filters would not be very helpful as the convolution operations would learn them if they were useful, also there would be artefacts in the input which can lead to the model learning irrelevant details.
Hope this helped! | {
"domain": "ai.stackexchange",
"id": 1502,
"tags": "convolutional-neural-networks, object-detection"
} |
Scaling data in a topic? | Question:
Let's say I have a node (that I didn't write) publishing an Int32 value data between 0 and 1. Is it possible to scale data so it's between 0 and 10 instead of 0 and 1? Ideally, I'm wondering if this can be done in a launch file, though I'm not sure.
Something like:
<remap from="/data" to="/data" * 10 />
if you see what I mean
Thanks in advance!
Originally posted by gerhmi on ROS Answers with karma: 5 on 2019-06-24
Post score: 0
Answer:
Let's say I have a node (that I didn't write) publishing an Int32 value data between 0 and 1. Is it possible to scale data so it's between 0 and 10 instead of 0 and 1?
I'm going to assume you meant to write:
Let's say I have a node (that I didn't write) publishing a Float32 value data between 0 and 1. Is it possible to scale data so it's between 0 and 10 instead of 0 and 1?
As an Int32 can only encode integer values, so it would only ever be 0 or 1, nothing in between.
This is something that can be done using topic_tools/transform.
One of the examples on that page does something very similar:
convert an orientation quaternion to Euler angles:
rosrun topic_tools transform /imu/orientation /euler geometry_msgs/Vector3 'tf.transformations.euler_from_quaternion([m.x, m.y, m.z, m.w])' --import tf
So for your specific example, it would probably be something like (haven't tested this):
rosrun topic_tools transform /input /output std_msgs/Float32 'm.data * 10.0'
As to your question:
I'm wondering if this can be done in a launch file, though I'm not sure.
Something like:
<remap from="/data" to="/data" * 10 />
No, that is not possible. remaps don't work like that.
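As a side note (not a remap, and untested here): topic_tools/transform can itself be started from a launch file by running it as a regular node, which gives you the launch-file workflow even though remap syntax cannot do it. An untested sketch, with the node name and topic names purely illustrative:

```xml
<launch>
  <!-- untested sketch: wraps the rosrun invocation above in a node tag -->
  <node pkg="topic_tools" type="transform" name="scale_data"
        args="/input /output std_msgs/Float32 'm.data * 10.0'" />
</launch>
```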
Originally posted by gvdhoorn with karma: 86574 on 2019-06-24
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gerhmi on 2019-06-24:
Thanks! I doubted it could be done as a remap, but you can always hope.
Comment by gvdhoorn on 2019-06-24:
PS: note btw that using std_msgs msgs is not recommended, as they have almost zero semantics. | {
"domain": "robotics.stackexchange",
"id": 33249,
"tags": "ros, remapping, ros-kinetic, rostopic"
} |
If an edge e doesn't belong to any mst, then what can we say about e? | Question: I'm trying to prove a statement "given a graph G=(V,E) and that no cycle C exists that contains only edge "e" and other edges whose weight is smaller than that of "e", prove that "e" must be in some mst".
I tried a proof by contradiction, but I'm not sure what I can say about "e" when it never belongs to any mst.
Answer: Suppose that you run Kruskal's algorithm, which computes an MST by repeatedly adding the cheapest edge that does not create a cycle.
Suppose, for contradiction, that your edge $e = uv$ belongs to no MST. Then when Kruskal considers $e$, the endpoints $u$ and $v$ must already be in the same connected component (otherwise Kruskal would add $e$, putting it in an MST).
That means there is a path from $u$ to $v$ consisting of edges already added, each strictly cheaper than $e$ (break ties so that $e$ is considered before other edges of equal weight).
Hence $e$ would be the strictly heaviest edge of some cycle, contradicting the assumption. So Kruskal adds $e$, and $e$ is in the MST it produces.
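The argument can be sketched in code (illustrative, not from the question): run Kruskal with ties broken in favour of $e$; whenever $e$ is never the strictly heaviest edge of a cycle, it gets picked.

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

struct Edge { int u, v; double w; bool isE; };

// Tiny union-find for Kruskal.
struct DSU {
    std::vector<int> parent;
    explicit DSU(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;
        parent[a] = b;
        return true;
    }
};

// Returns true iff the edge flagged isE ends up in the MST Kruskal builds.
bool kruskalPicksE(int n, std::vector<Edge> edges) {
    std::sort(edges.begin(), edges.end(), [](const Edge& a, const Edge& b) {
        if (a.w != b.w) return a.w < b.w;
        return a.isE > b.isE;              // tie-break: consider e first
    });
    DSU dsu(n);
    bool picked = false;
    for (const Edge& e : edges)
        if (dsu.unite(e.u, e.v) && e.isE) picked = true;
    return picked;
}
```

On a triangle where all weights are equal, $e$ is not strictly heaviest in the cycle and gets picked; make $e$ strictly heaviest and it does not.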
"domain": "cs.stackexchange",
"id": 21894,
"tags": "minimum-spanning-tree"
} |
How to explain the connection between the input layer and H1 of this CNN Architecture? | Question: I am currently reading the paper proposed by LeCun et al. for handwritten zip code recognition. There is this figure below visualizing the CNN architecture. But I do not really understand how the connection between Layer H1 and input layer makes sense. If there are 12 kernels with size 5x5, shouldn't the layer H1 be 12x144? Or is there any downsampling taking place here too?
Answer: Yes, the spatial dimensions (height and width) are reduced: the input is 16x16, H1 is 8x8 and H2 is 4x4.
Also see the first paragraph in the architecture section of the paper. (Quoted excerpt omitted.)
In modern terms you would say that they use a stride of 2. Which reduces the spatial dimensions accordingly.
EDIT (based on your comment)
The formula for the spatial output dimension $O$ of a (square shaped) convolutional layer is the following:
$$O = \frac{I - K + 2P}S + 1$$ with $I$ being the input size, $K$ being the kernel size, $P$ the padding and $S$ the stride. Now you might think that in your example $O = \frac{16 - 5 + 2*2}2 + 1 = 8.5$ (assuming $P=2$)
But take a closer look at how it actually plays out when the 5x5 kernel of layer H1 scans the 16x16 input image with a stride of 2:
As you can see from the light grey area the required and effective padding is actually not 2 on all sides. Instead for the width or height respectively it is 2 on one side and 1 on the other side, i.e. on average $(2+1)/2=1.5$.
And if you plug that into the equation to calculate the output size it gives: $O = \frac{16 - 5 + 2*1.5}2 + 1 = 8$. Accordingly the convolutional layer H1 will have spatial dimensions of 8x8. | {
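The asymmetric-padding bookkeeping is easier with the total (left + right) padding, i.e. the formula above with $2P$ replaced by $P_{\text{total}}$. A small sketch (the helper name is my own, and the H2 line assumes it pads the same way as H1):

```cpp
// O = (I - K + P_total) / S + 1, with P_total = left + right padding.
constexpr int convOutSize(int inSize, int kernel, int padTotal, int stride) {
    return (inSize - kernel + padTotal) / stride + 1;
}
// With the padding from the figure (2 on one side, 1 on the other, so
// padTotal = 3): (16 - 5 + 3) / 2 + 1 = 8, matching H1's 8x8 maps.
// Assuming the same scheme again: (8 - 5 + 3) / 2 + 1 = 4, matching H2's 4x4.
```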
"domain": "datascience.stackexchange",
"id": 6667,
"tags": "cnn, learning"
} |
Possibilties of chebyshev polynomial waveshaping | Question: When talking about harmonic distortion, or more specifically waveshaping, we say the order of distortion, can be solved from the equation:
$x^n$
Where n is the order of distortion. To my knowledge the complete result of any waveshaping function can be had by summing all of the different orders of harmonic distortion, each multiplied with an appropriate coefficient $a_n$.
$$\sum_{n=1}^{\infty} a_nx^n = WaveshaperFunction$$
Be aware that by feeding the equation with $x=sin(\omega t)$, the highest frequency any $x^n$ can produce is $sin(n*\omega t) $. However, higher orders can still produce multiple different frequencies for a sinusoidal input.
Chebyshev polynomials addresses the issue, removing any other frequencies limiting the harmonic distortion to only $sin(n*\omega t)$, producing an orthogonal result. The polynomials are defined as:
$T_0(x) = 1$
$T_1(x) = x$
$T_{n+1}(x)=2xT_n-T_{n-1}$
Now, since the result of putting any $sin(\omega t)$ through the chebyshev waveshaper is a sine with a frequency multiple of N, it is easy to say that any harmonic signal with the original signal's phase (or inverted phase) can be produced by summing over all the chebyshev polynomials. That is to say, summing over all the polynomials, any waveshaping function can be generated.
$$\sum_{n=1}^{\infty} b_nT_n(x) = WaveshaperFunction$$
Where $b_n$ represents the amplitude of nth generated harmonic, given a sine input. $T_n$ represents the cheyshev polynomial producing nth harmonic given sine input. I think there could be many advantages for thinking about time-invariant harmonic distortion or intermodulation distortion through this framework rather than through the before-mentioned orders.
As for the question, did I make a mistake somewhere? In particular, I am not so certain that any function can be generated by summing the Chebyshev polynomials. The Chebyshev polynomials seem not to care about the amplitude of the input sine wave (apart from some DC offset differences), yet I can easily generate a waveshaper function that does (any function that is linear near 0 and then changes). Re-framed: can the polynomials generate any possible function?
Answer: For $-1 <= x <= 1$, let's compare Chebyshev polynomials of the first kind, $T_n(x)$, and the basis functions of the Fourier cosine series, $F_n(x)$:
$F_n(x)=\cos(n \pi x)$
$T_n(x)=\cos(n\ \text{acos}\ x)$
Writing $T_n(x_T) = F_n(x_F)$ and solving for $x_F$ gives $x_F = (\text{acos}\ x_T) / \pi$, revealing that the Chebyshev polynomial series is simply an argument-warped version of the Fourier cosine series that maps the range $0 <= x_F <= 1$ to $-1 <= x_T <= 1$:
This exclusion of $x_F < 0$ makes irrelevant the constraint that the Fourier cosine series is only applicable to even functions, functions symmetric around $x_F = 0$, meaning that $f(x_F) = f(-x_F)$. The mapping function is well-behaved in the range of interest, so if the Fourier cosine series can represent any even function, then the Chebyshev polynomial series can represent any function.
Not knowing the amplitude of the input sinusoid, it is not possible to control the proportional amplitudes of the generated harmonics by the choice of the waveshaper. For example if the waveshaper is $f(x) = x^2 + x$:
$$f(\cos x) = \frac{1}{2} + \cos x + \frac{1}{2} \cos 2x\\
f(\frac{1}{2} \cos x) = \frac{1}{8} + \frac{1}{2} \cos x + \frac{1}{8} \cos 2x$$
The first one has the amplitude of the second harmonic at -6 dB compared to the fundamental, and the second one at -12 dB. Chebyshev polynomials are no miracle cure. The above polynomial $x^2 + x$ was a weighted sum of the first three Chebyshev polynomials. Any polynomial can be written as a weighted sum of Chebyshev polynomials, and vice versa.
The problem persists for pure Chebyshev polynomials, for example $f(x) = 4 x^3 - 3x$:
$$f(\cos x) = \cos 3x\\
f(\frac{1}{2} \cos x) = \frac{1}{8} \cos 3x - \frac{9}{8} \cos x$$
A Chebyshev polynomial series does have the advantage of better numerical stability than a simple power series, and if the amplitude of the input sinusoid is controlled somehow, it can be used to control the level of harmonics in the way you describe. | {
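Both the identity behind the answer, $T_n(\cos\theta) = \cos(n\theta)$, and the amplitude sensitivity of $T_3$ can be checked numerically (illustrative sketch; the names are mine):

```cpp
#include <cmath>

// Chebyshev recurrence: T0 = 1, T1 = x, T_{n+1} = 2x*T_n - T_{n-1}.
// Evaluated at x = cos(theta) this equals cos(n*theta).
double chebyshevT(int n, double x) {
    double tPrev = 1.0, tCur = x;          // T0, T1
    if (n == 0) return tPrev;
    for (int k = 1; k < n; ++k) {
        double tNext = 2.0 * x * tCur - tPrev;
        tPrev = tCur;
        tCur = tNext;
    }
    return tCur;
}

// The amplitude sensitivity from the answer: T3 maps a full-scale cosine to a
// pure third harmonic, but a half-scale input leaks the fundamental:
// T3((1/2)cos t) = (1/8)cos 3t - (9/8)cos t.
double t3OfScaledCos(double amplitude, double theta) {
    return chebyshevT(3, amplitude * std::cos(theta));
}
```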
"domain": "dsp.stackexchange",
"id": 3187,
"tags": "non-linear, shape-analysis"
} |
Can dark energy dominate from the Big-Bang? | Question: I'm studying the age of the universe for a universe dark energy dominated. Using Friedmann equation
$$
\left( \frac{\dot{a}}{a} \right)^2 = H_0^2 \left[ \Omega_R \cdot a^{-4} + \Omega_{NR} \cdot a^{-3} + \Omega_k \cdot a^{-2} + \Omega_{\Lambda} \right]
$$
with the conditions $\Omega_R=\Omega_{NR}=\Omega_k=0$, $\Omega_\Lambda=1$, I have found the following expression
$$
\frac{\dot{a}}{a} = H_0 \quad \to \quad \frac{da}{a} =H_0 \cdot dt
$$
If I integrate from the Big-Bang $(t=0, a(0)=0)$ to the present $(t=T, a(T)=1)$
$$
\int_0^1 \frac{da}{a} =H_0 \cdot \int_0^T dt
$$
but I have a singularity in the first integral. Does it mean that it is impossible to have a universe fully made of dark energy from the Big-Bang?
Answer: Solving the differential equation for the scale factor $a$ in this case we have explicitly
$$ \frac{da}{a} =H_0 \cdot dt\qquad \Longrightarrow\qquad a(t) \propto e^{H_0 t},$$
assuming $H_0$ is constant through time. From the expression above you can conclude that for this model there is no point in time where the scale factor was exactly zero. You might then speak of infinite negative time, or understand this as a model for some fraction of the cosmological history away from $a=0$.
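To make the divergence explicit, integrate from some small but nonzero scale factor $a_i$ instead of from zero:

$$\int_{a_i}^{1} \frac{da}{a} = H_0\,(T - t_i) \qquad\Longrightarrow\qquad T - t_i = \frac{1}{H_0}\,\ln\frac{1}{a_i} \;\to\; \infty \quad \text{as } a_i \to 0,$$

so the model pushes $a=0$ infinitely far into the past rather than to a finite Big-Bang time.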
"domain": "physics.stackexchange",
"id": 85417,
"tags": "cosmology, space-expansion, big-bang, dark-energy"
} |
Is there a table that sums up the parameters, the assumptions/symmetries, and the predictions of the standard model? | Question: The title says it all: is there a table that sums up the parameters, the assumptions/symmetries, and the (most important) predictions of the standard model?
Answer: Yes, indeed. There is a working group called the Particle Data Group that maintains the particle data listings, where they sum up all experimental results, have short reviews of the physics behind them, explain the implicit assumptions made, and give bounds on the most popular extensions of the Standard Model.
"domain": "physics.stackexchange",
"id": 13933,
"tags": "particle-physics, resource-recommendations, standard-model"
} |
Connecting a Sick laserscanner via ethernet | Question:
Hi,
We're working on our robot running Ubuntu 14.04 and ROS indigo. Right now we are trying to connect a Sick LMS 111 laserscanner via ethernet. We installed the drivers http://wiki.ros.org/LMS1xx and can start the node to do this.
Afterwards we tried to connect to the laserscanner using many ip's (found by ifconfig for example) but any of them gave back this error:
[ INFO] [1429170035.841400945]: Connecting to laser at : 192.168.0.1:2111
[ERROR] [1429170163.097305396]: Connection to LMS1xx device failed, retrying in 1 second.
To find the port we downloaded the Sopas Engineering Toolbox from Sick. This gave us ip 192.168.0.1 and tcp 2111.
To set the port we adjusted the launch file:
<launch>
<arg name="host" default="192.168.0.1:2112" />
<node pkg="lms1xx" name="lms1xx" type="LMS1xx_node" output="screen">
<param name="host" value="$(arg host)" />
</node>
</launch>
Does anyone have an idea on how to get data from the laserscanner?
Originally posted by Sander on ROS Answers with karma: 3 on 2015-04-16
Post score: 0
Answer:
Hi!
It is not necessary to indicate port in laser's ip address, just write <arg name="host" default="192.168.0.1" />.
Default port connection is set to 2111, which is the usual port SOPAS gives to sick lms 1xx laser scanners.
If this does not solve the problem, try adjusting your manual ethernet connection to suit a similar address to the laser's one.
Originally posted by danimtb with karma: 68 on 2015-05-16
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Sander on 2015-05-17:
You're correct. The problem was that we manually added a connection in Ubuntu with 192.168.0.1 as ip address. The trick was that the ip address has to be different from the ip address from the laserscanner. Thus when we added 192.168.0.2 as ip address on that ethernet port it worked.
Comment by danimtb on 2015-05-17:
Great to see it worked! I am using Sick LMS100 with LMS1xx node too and I'm able to connect to the laserscanner but after a few minutes the node stops sending data.
Could you tell the ethernet configuration you are using (IP, MASK, GATEWAY)? I suppose your ethernet connection is set to manual. Thx
Comment by Sander on 2015-05-18:
On edit connections, click add, go to the ethernet tab, set device MAC address to the right ethernet port, go to the IPv4 Settings tab, set method to Manual and enter these settings:
Address: 192.168.0.1
Netmask: 255.255.255.0
Leave Gateway open and click save.
Comment by danimtb on 2015-05-19:
Thanks for your comment, now it is working except when I connect my computer to a router via wifi at the same time.
If I disconnect wifi it works great but with wifi connection I have same problem: node connects to lms100 well but after a while it stops sending data...
Did you have this problem? | {
"domain": "robotics.stackexchange",
"id": 21453,
"tags": "sicklms, ubuntu, laserscanner, ubuntu-trusty, sick"
} |
relative path to material file | Question:
Hello everybody,
I am trying to add some custom materials. To do this I used the tag in the following way
<material>
<script>
<uri>file:///home/mago/Development/hydro/src/ugv_description/Media/materials.material</uri>
<name>materials/Black</name>
</script>
</material>
However, the project is shared via some repositories and I want the other users to run the simulations without having to change the path to the script.
The materials are defined in a ROS package that is shared as well, and I would like to reference it in the SDF file.
Is there any way of doing this?
Thank you,
Andrea
Originally posted by Andrea on Gazebo Answers with karma: 72 on 2014-02-01
Post score: 0
Answer:
You could use a relative path and the GAZEBO_RESOURCE_PATH environment variable. For example, in your case:
<material>
<script>
<uri>file://Media/materials.material</uri>
<name>materials/Black</name>
</script>
</material>
export GAZEBO_RESOURCE_PATH=$GAZEBO_RESOURCE_PATH:/home/mago/Development/hydro/src/ugv_description/
It's very common to use a bash script that sets GAZEBO_RESOURCE_PATH according to the directory where your resource files are located. Using this method you just need to remember to source this script.
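The lookup mechanism itself is simple. The sketch below is an illustration of the idea (not Gazebo's actual implementation): a relative `file://Media/...` URI is resolved by searching each colon-separated directory on the resource path, in order.

```python
import os
import tempfile

# Search each directory on a colon-separated resource path for the
# relative file, returning the first match (or None).
def resolve_uri(relative_path, resource_path):
    for directory in resource_path.split(os.pathsep):
        candidate = os.path.join(directory, relative_path)
        if os.path.isfile(candidate):
            return candidate
    return None

# Demo with a throwaway directory standing in for the package root.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "Media"))
open(os.path.join(root, "Media", "materials.material"), "w").close()
found = resolve_uri(os.path.join("Media", "materials.material"), root)
print(found is not None)  # True
```

This is why each user only needs the environment variable to point at their own checkout of the package; the SDF file itself stays path-free.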
Originally posted by Carlos Agüero with karma: 626 on 2014-02-01
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Andrea on 2014-02-05:
Thank you very much! I will write the script! | {
"domain": "robotics.stackexchange",
"id": 3546,
"tags": "sdformat"
} |
Rate law of reactions, what about the proportions? | Question: Consider the reaction
$$\ce{A + B + C -> D + E}$$
and $\mathrm{rate} = k[\ce{A}][\ce{B}]^2.$ My book said if we double the amount of $\ce{A},$ then the reaction rate will double.
I understand this, but what about the law of definite proportion?
Answer:
[OP, in a comment] oh ok, so if we double the amount of only one substance, that substance will get left?
Yes. The rate law tells you how fast product is being made. The change in the different species is still given by the chemical reaction equation (which tells you in which proportion the amount of species will change, not in which proportion they are present at the beginning or at the end of the reaction).
You can see that when you look at the definition of rate:
$$\mathrm{rate} = \frac{d[X_i]}{\nu_i dt}$$
where $\nu_i$ is the (signed, i.e. negative for reactants) stoichiometric factor of the $i$-th species. Dividing by the stoichiometric factor ensures that the law of definite proportions is followed.
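A quick numerical sketch of the rate law makes the book's claim concrete (the rate constant here is an invented value; only the ratios matter):

```python
# rate = k[A][B]^2 for A + B + C -> D + E; note [C] does not appear
# in this rate law at all.
k = 0.5  # hypothetical rate constant

def rate(conc_a, conc_b):
    return k * conc_a * conc_b ** 2

r0 = rate(1.0, 1.0)
print(rate(2.0, 1.0) / r0)  # doubling [A] doubles the rate: 2.0
print(rate(1.0, 2.0) / r0)  # doubling [B] quadruples it: 4.0
```

The law of definite proportions is untouched by this: however fast the reaction runs, A, B and C are still consumed in a 1:1:1 ratio.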
[OP in another comment] we cant simply put more of a substance to get more of the product, the reactants must have a fixed ratio.
The law of definite proportions is not about the amounts present, but about the change in those amounts. I can set up a reaction with 1000 times more elemental oxygen than hydrogen, but they will still react in a 1:2 ratio if the reaction yields water. | {
"domain": "chemistry.stackexchange",
"id": 13845,
"tags": "physical-chemistry, kinetics"
} |
Where can we find the application of Bayes's theorem in Bayesian optimisation with Gaussian processes | Question: I am trying to learn Bayesian optimisation by following this tutorial.
However, until now I don't get the relation between Bayes's theorem and the Gaussian process formalism.
Any ideas?
Answer: It is a 49 page long paper, so following observations are based only on a cursory reading.
The optimisation is for finding the best values of the parameters of a machine learning model's cost function.
Rather than finding a fixed value of the parameters, it is assumed that the parameters come from a statistical distribution, and the task is to find the nature/shape of this distribution.
Bayes' theorem tells you that if you have prior beliefs and evidence (data), you can go from the prior to the posterior.
They start with the assumption that the prior distribution of the parameters is Gaussian. The task is then to find the posterior.
Since there are multiple parameters and not a single variable, a Gaussian process comes into the picture rather than a single Gaussian-distributed random variable.
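To see the Bayes step in isolation, here is a one-dimensional sketch: a conjugate Gaussian prior-to-posterior update, which is the scalar analogue of the Gaussian-process posterior update used in Bayesian optimisation (all numbers below are made up):

```python
import math

# Prior: parameter ~ N(mu0, tau0^2); likelihood: data ~ N(parameter, sigma^2).
mu0, tau0 = 0.0, 1.0        # prior belief
sigma = 0.5                 # known observation noise
data = [0.9, 1.1, 1.0]      # evidence

# Conjugate update: posterior precision is the sum of prior precision
# and the data precision; posterior mean is the precision-weighted average.
n = len(data)
post_precision = 1 / tau0 ** 2 + n / sigma ** 2
mu_post = (mu0 / tau0 ** 2 + sum(data) / sigma ** 2) / post_precision
tau_post = math.sqrt(1 / post_precision)
print(round(mu_post, 3), round(tau_post, 3))  # 0.923 0.277
```

The posterior mean moves from the prior toward the data, and the posterior spread shrinks; a Gaussian process does the same update jointly over function values at many input points.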
Bayes' theorem comes into the picture while going from priors to posteriors. The optimisation is solved using a sampling technique like Monte Carlo sampling. Reading up on MCMC sampling methods will help you see the connection. | {
"domain": "datascience.stackexchange",
"id": 3553,
"tags": "optimization, bayesian, gaussian, hyperparameter-tuning"
} |
No subset of RE captures the set of TMs halting on themselves as input | Question: For a property $P$ of languages, which we identify with the set of languages satisfying it, let $L_P = \{ \langle M \rangle \mid L(M) \in P \}$ be the set of descriptions of Turing machines whose languages satisfy $P$.
How can we prove that there's no property $\displaystyle P\subseteq RE $ for which $ L_{P}=K $, where $K=\{\langle M \rangle \mid M(\langle M \rangle)\!\downarrow\} $?
Answer: According to the recursion theorem, there is a Turing machine $M$ such that $L(M) = \{\langle M \rangle\}$. If $P$ is a property such that $L_P = K$ then $\langle M \rangle \in K$ implies that $L(M) = \{ \langle M \rangle \} \in P$. Now it is easy to construct a different machine $M' \neq M$ such that $L(M') = \{ \langle M \rangle \}$. Since $L(M') \in P$ and $L_P = K$, we must have $\langle M' \rangle \in K$, yet $M'$ doesn't halt on $\langle M' \rangle$.
Note that this solution doesn't use the condition $P \subseteq RE$. | {
"domain": "cs.stackexchange",
"id": 8050,
"tags": "computability, turing-machines"
} |
Temperature of a System of molecules | Question: Suppose I have a closed system with N molecules in it which are vibrating and all motion equations (rotation, translation and vibration) of the system are known along with any EM field equations in the region. Given all these information how do we calculate (not measure) the temperature of this system?
Answer: AIB,
As you have read the temperature is a measure of the change of energy of a system to the change of the number of microstates, or:
$$T = \dfrac{\partial{E}}{\partial{S}}$$
Where (working in units where $k_B = 1$):
$$S = \ln \Omega$$
and $\Omega$ is the number of microstates.
However, it is important to understand what a microstate is. In general a microstate describes all of the exact positions and momenta of all the particles in a system (or particles in a box). The microstate is often thought of as a point in a 6N-dimensional space, where N is the number of particles and the factor of 6 comes from there being 3 position components and 3 momentum components in 3D space (however, one could certainly consider other parameters).
If we could know all of the positions and momenta of all the particles then we would know the precise microstate of the system. This precise microstate would have an exact energy associated with it. However, we do not have the ability to know the precise microstate of a system, so we must appeal to a statistical concept.
So at any one instant, there are a large number of microstates a system could be in. Again, each microstate is a unique description of the exact positions and momenta of all particles in the box. What this also suggests is that for any microstate, there are a large number of other microstates that would have the same energy associated with them.
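Counting microstates at fixed energy is easiest to see in a discrete toy model (quantised energy levels rather than the continuous position-momentum picture above). Here N oscillators share n indistinguishable energy quanta:

```python
from itertools import product
from math import comb, log

# Brute-force enumeration of all microstates of N oscillators whose
# occupation numbers sum to n (i.e. all states with the same total energy).
N, n = 3, 4
micro = [s for s in product(range(n + 1), repeat=N) if sum(s) == n]
omega = len(micro)
print(omega, comb(N + n - 1, n))  # 15 15: brute force matches the formula
entropy = log(omega)              # S = ln(Omega), in units of k
```

Each tuple in `micro` is one microstate; `omega` is the $\Omega$ entering $S = \ln \Omega$, and the temperature then follows from how $\ln \Omega$ changes as quanta are added.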
The question asked was:
"Given all these information how do we calculate (not measure) the temperature of this system?"
The answer to this question would be determined first by the range of all the free parameters and the values of those parameters that would keep the energy constant. The number of combinations possible within the energy constraint is the number of microstates of the system, the log of which is the entropy.
The next step is tricky, because we would have to ask how the energy changes with a change in entropy. This is where the partition function comes into play. Assuming the energy of each microstate is equivalent, if we look at the partition function we see:
$${Z} = \sum_i e^{-\frac{\partial{S}}{\partial{E}}{E_i}}$$
$$Z = Ne^{-\frac{1}{T}E_i}$$
$$N = Ze^{\frac{1}{T}E_i}$$
So we can see that the temperature is a variable that controls the growth of states if we keep the value of Z constant. If I know an exact microstate from which to derive the energy, then I still won't necessarily know anything about the temperature until I can partition the relevant space in question. In this sense, the temperature is a sort of hyperbolic phase factor which changes the range of states. | {
"domain": "physics.stackexchange",
"id": 325,
"tags": "statistical-mechanics, temperature"
} |
Strength of a nucleophile | Question: A nucleophile should be stronger if it can donate a pair of electrons more easily. We say that a more electronegative atom should be less nucleophilic, generally. I understand this is because the more EN an atom gets, the more strongly it pulls electrons towards itself. But why does the fact that such an atom would, generally, have more electron density not override this?
Answer: Electronegativity and nucleophilicity are different concepts. Electronegativity refers to how strongly an atom pulls bonding electrons towards itself. Nucleophilicity refers to the ability of an atom to donate a lone pair of electrons. Different factors affect nucleophilicity, such as orbital size, charge of the ion, size of the molecule, hybridisation, etc. | {
"domain": "chemistry.stackexchange",
"id": 11378,
"tags": "organic-chemistry"
} |
Microcanonical and canonical ensemble entropy comparison in Einstein solid | Question: Consider Einstein solid model ($N$ oscillators of same frequency $\omega$, where $n=\sum k_i $ with $k_i$ being the occupation number of single oscillators)
In microcanonical ensemble entropy is
$$S=k \ln (\frac{(N+n-1)!}{n! (N-1)!}) \sim_{\mathrm{thermodynamic \,\, lim. \,\, (N\to \infty)\,\,\,\,\,}} k \ln (\frac{(N+n)^{N+n}}{n^n (N)^N})\tag{1}$$
On the other hand, in the canonical ensemble, since the partition function is $Z=(\frac{1}{2\sinh(\beta\omega\hbar/2)})^N$, one finds that
$$S=\frac{kN}{2} [\beta\hbar\omega \coth(\beta\omega\hbar/2)-2 \ln(2\sinh(\beta\omega\hbar/2))]\tag{2}$$
Where $\beta=1/kT$.
I would like to show that two expressions are the same in the thermodynamic limit which was not taken for $(2)$.
Firstly I also got this relation between $\beta $ and $n$
$$\beta(n)=\frac{1}{\hbar \omega} \ln (\frac{N}{n}+1) \tag{3}$$
I tried to put $\beta(n)$ in $(2)$ and make an approximation for $N\to \infty $ in $(2)$ but I could not get to $(1)$ in any way.
What is the way to correctly approximate $(2)$ (with $\beta (n)$) to get to $(1)$?
Answer: Consider first the microcanonical ensemble. The total energy of the $N$ oscillators is denoted
$$E=\hbar\omega\left(n+{N\over 2}\right)
\ \Leftrightarrow\ n={E\over\hbar\omega}-{N\over 2}$$
i.e. $n$ is the sum of the $N$ quantum numbers $n_i$ of each oscillators. $n$ can be interpreted as the number of energy quanta in the system. The number of stationary states with energy $E$ is, as you wrote,
$$\Omega(E)={(N+n-1)!\over n!(N-1)!}$$
Therefore the microcanonical entropy is
$$S(E)=k_B\ln\Omega(E)=k_B\big[\ln(N+n-1)!-\ln n!-\ln (N-1)!\big]$$
Using Stirling approximation $\ln M!\sim M\ln M-M$, we get after cancellation of the three linear terms
$$S(E)\simeq k_B\big[(N+n-1)\ln(N+n-1)-n\ln n-(N-1)\ln(N-1)\big]$$
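As a quick numerical sanity check of where this is heading (a sketch in units where $\hbar\omega = k_B = 1$, using the $\beta(n)$ relation (3) quoted in the question), one can compare the exact microcanonical entropy with the canonical expression (2):

```python
from math import lgamma, log, sinh, tanh

# Exact microcanonical entropy  S/k = ln[(N+n-1)! / (n! (N-1)!)],
# evaluated with log-gamma to avoid huge factorials.
def s_micro(N, n):
    return lgamma(N + n) - lgamma(n + 1) - lgamma(N)

# Canonical entropy (2) per the formula in the question, for N oscillators.
def s_canon(N, beta):
    x = beta / 2  # x = beta*hbar*omega/2 in these units
    return N * (x / tanh(x) - log(2 * sinh(x)))

N, n = 10**6, 3 * 10**6
beta = log((N + n - 1) / n)  # inverse of n = (N-1)/(e^beta - 1)
print(round(s_micro(N, n) / N, 4), round(s_canon(N - 1, beta) / (N - 1), 4))
```

For large $N$ the two per-oscillator entropies agree to the printed precision, which is exactly what the rest of the derivation establishes analytically.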
The microcanonical temperature is defined as
$${1\over T}={\partial S\over\partial E}={\partial S\over\partial n}{dn\over dE}={k_B\over\hbar\omega}\big[\ln (N+n-1)-\ln n\big]$$
Invert this relation to write $n$ as a function of $\beta=1/k_BT$
$$\beta={1\over\hbar\omega}\ln{N+n-1\over n}\ \Leftrightarrow\
n={N-1\over e^{\beta\hbar\omega}-1}$$
i.e. the Bose-Einstein distribution ! Noting that
$$N-1+n=(N-1){e^{\beta\hbar\omega}\over e^{\beta\hbar\omega}-1}$$
the microcanonical entropy now reads
$$\eqalign{
S(E)&\simeq k_B\big[(N+n-1)\ln(N+n-1)-n\ln n-(N-1)\ln(N-1)\big)\cr
&=k_B\left[(N-1){e^{\beta\hbar\omega}\over e^{\beta\hbar\omega}-1}
\ln{(N-1)e^{\beta\hbar\omega}\over e^{\beta\hbar\omega}-1}-{N-1\over e^{\beta\hbar\omega}-1}\ln{N-1\over e^{\beta\hbar\omega}-1}-(N-1)\ln(N-1)\right]\cr
&=(N-1)k_B\Big[\Big(\underbrace{{e^{\beta\hbar\omega}\over e^{\beta\hbar\omega}-1}-{1\over e^{\beta\hbar\omega}-1}-1}_{=0}\Big)\ln(N-1)\Big.\cr
&\quad\quad\quad\quad\quad\quad\Big. +{e^{\beta\hbar\omega}\over e^{\beta\hbar\omega}-1}\ln{e^{\beta\hbar\omega}\over e^{\beta\hbar\omega}-1}-{1\over e^{\beta\hbar\omega}-1}\ln{1\over e^{\beta\hbar\omega}-1}\Big]\cr
&={(N-1)k_B\over e^{\beta\hbar\omega}-1}\Big[\beta\hbar\omega e^{\beta\hbar\omega}+\Big(e^{\beta\hbar\omega}-1\Big)\ln{1\over e^{\beta\hbar\omega}-1}\Big]\cr
}$$
The logarithm can be written as
$$\ln{1\over e^{\beta\hbar\omega}-1}
=\ln{e^{-\beta\hbar\omega/2}\over e^{\beta\hbar\omega/2}-e^{-\beta\hbar\omega/2}}=-{\beta\hbar\omega\over 2}-\ln 2\sinh{\beta\hbar\omega\over 2}$$
The microcanonical entropy is now
$$\eqalign{
S&=(N-1)k_B\Big[\beta\hbar\omega {e^{\beta\hbar\omega}\over e^{\beta\hbar\omega}-1}-{\beta\hbar\omega\over 2}-\ln 2\sinh{\beta\hbar\omega\over 2}\Big]\cr
&=(N-1)k_B\Big[{\beta\hbar\omega\over 2}{e^{\beta\hbar\omega}+1\over e^{\beta\hbar\omega}-1}-\ln 2\sinh{\beta\hbar\omega\over 2}\Big]\cr
&=(N-1)k_B\Big[{\beta\hbar\omega\over 2}{e^{\beta\hbar\omega/2}+e^{-\beta\hbar\omega/2}\over e^{\beta\hbar\omega/2}-e^{-\beta\hbar\omega/2}}-\ln 2\sinh{\beta\hbar\omega\over 2}\Big]\cr
&=(N-1)k_B\Big[{\beta\hbar\omega\over 2}{\cosh\beta\hbar\omega/2\over\sinh\beta\hbar\omega/2}-\ln 2\sinh{\beta\hbar\omega\over 2}\Big]\cr
&=(N-1)k_B\Big[{\beta\hbar\omega\over 2}{\rm cotanh}\ \!{\beta\hbar\omega\over 2}-\ln 2\sinh{\beta\hbar\omega\over 2}\Big]\cr
}$$
which is precisely the expression of the entropy in the canonical ensemble with the assumption $N-1\simeq N$. | {
"domain": "physics.stackexchange",
"id": 44619,
"tags": "thermodynamics, classical-mechanics, statistical-mechanics, solid-state-physics, entropy"
} |
The direction of static friction? | Question: In my book, the following are described to be the steps that one must follow if he wishes to find the direction of static friction force acting on an object:
"(1) Draw the F.B.D. (free body diagram) of the object in question with respect to the other object on which it is kept.
(2) Include the pseudo force also, if the contact surface is accelerating.
(3) Decide the direction of the resultant force and resolve this force into two components; one along the surface of contact and the other along the normal to this surface.
(4) The direction of static friction is opposite to the component of the resultant force along the contact surface."
Firstly, are these steps correct? Will I always obtain the correct answer by following these steps? Secondly, consider the following problem:
In questions like these, when I try to find the resultant force and then assign the direction to static friction, I find that my answer is incorrect! Are there some other steps to follow when I know that the object is definitely accelerating in one direction or another?
Edit 1: I approached the above question by doing exactly what my book suggests; first, I identified all the forces acting on the smaller block, being the normal force applied by the block A and its weight mg downward. Adding these two forces, I get another force that actually acts on the body like so:
I wonder if I'm not supposed to take either of the contact forces into consideration when drawing the F.B.D to figure out the direction of static friction acting on the body. Also, are there other steps for finding the direction of kinetic friction?
Edit 2: From Judge's answer, I have been able to figure out my mistake, it being that I completely missed step (3) and didn't resolve the resultant force into components! Sheesh. Sorry!
Much thanks in advance! :) Regards.
Answer:
Firstly, are these steps correct? Will I always obtain the correct answer by following these steps?
Yes and yes, those steps are correct :)
Are there some other steps to follow when I know that the object is definitely accelerating in one direction or another?
I don't think so. Let's work through your example using those steps:
Inertial frame of table
The 10kg block (B) has a weight force downwards, $\vec{W_B} = -mg \ \hat{y}$. A is accelerating into B, compressing it until B provides an equal and opposite reaction on A. We'll call A's force on B $\vec{N_{BA}}$ and it points right.
In an inertial frame (e.g. the table's) we skip this step.
Resultant force on B, $\vec{R_B}$ is the vector sum of all forces on B: $\vec{W_B} + \vec{N_{BA}}$, pointing down-right. Resolving $\vec{R}$ into components perpendicular and parallel to the slope yields
\begin{align}
\vec{R}_{parallel} &= \vec{W_B} \\
\vec{R}_{perpendicular} &= \vec{N_{BA}}
\end{align}
The direction of static friction, $\hat{Fr}_{BA}$ is the opposite to $\vec{R_{parallel}}$ : $$\hat{Fr}_{BA} = -\hat{R}_{parallel} = -\hat{W_B} = -(-\hat{y}) = \hat{y}$$
To stop the block slipping downwards, the frictional force must be equal and opposite to the downwards force
\begin{align}
\vec{Fr}_{BA} &= - \vec{W_B} \\
\mu |\vec{N_{BA}}| \hat{y} &= -(-mg)\hat{y} = mg\hat{y}
\end{align}
If you consider the horizontal components you can see that $|\vec{N_{BA}}| = m|\vec{a}|$, so
$$
|\vec{a}| = \frac{g}{\mu} = 19.62 \ \text{ms}^{-2}
$$
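The final number can be reproduced directly; note that $\mu = 0.5$ is the coefficient of static friction implied by the answer (it is not stated explicitly in the excerpt):

```python
# Minimum acceleration of block A such that static friction
# mu*N = mu*m*a can balance the weight m*g of block B.
g = 9.81   # m/s^2
mu = 0.5   # assumed coefficient of static friction
a_min = g / mu
print(a_min)  # 19.62
```

Any acceleration below this value gives too small a normal force, so friction cannot hold B up and it slips.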
Non-inertial frame of block B
Same as in inertial frame.
A's and B's frame is accelerating with respect to the table, so we add a pseudo force $\vec{P}$ pointing left. B is also accelerating downwards (due to gravity), so we add an additional pseudo force pointing up. We then add these 2 pseudo forces to every other object in the system (A and the table).
The resultant force on B is zero: $\vec{R}_B = 0$, which is not surprising in its own frame. To get static friction, consider what happens to A from B's perspective. The resultant force on A is $\vec{P}_{vert}$, which accelerates it upwards.
The direction of static friction on A from B is in the opposite direction to $\vec{P}_{vert}$
$$
\hat{Fr}_{AB} = -\hat{P}_{vert} = -\hat{y}
$$
Every action has an equal and opposite reaction (cheers Newton!), so we also have a static friction on B from A
$$
\hat{Fr}_{BA} = -\hat{Fr}_{AB} = -(-\hat{y}) = \hat{y}
$$
which is the same as in the inertial frame.
To stop A slipping upwards past B, the frictional force on A must be equal and opposite to the upwards force
\begin{align}
\vec{Fr_{AB}} &= - \vec{P_{vert}} \\
&= -(-W_B) \\
-\mu |\vec{N_{AB}}| \hat{y} &= -(--mg)\hat{y} = -mg\hat{y}
\end{align}
If you consider the horizontal components you can see that $|\vec{N_{AB}}| = m|\vec{a}|$, so
$$
|\vec{a}| = \frac{g}{\mu} = 19.62 \ \text{ms}^{-2}
$$
which is also the same as in the inertial frame... nailed it! :) Now if you're like me, you're probably thinking "that was the worst piece of nasty, hateful, misery I've ever seen." And you'd be right! My advice would be just pick an inertial frame and be done with it.
Edit 1: Simple example
Let's walk through a more simple example; just block A and the table (no block B).
Inertial frame
See the diagram below. I start by adding all the original forces (left) and then I add all the action-reaction pairs (right).
Skip this step, because there's no pseudo forces in inertial frames
Take the vector-sum of all forces on A to calculate the resultant force on A, $\vec{R_A}$
\begin{align}
\vec{R_A} &= \vec{D} + \vec{N_{AT}} + \vec{W_A} \\
&= D \hat{x} + (N_{AT} - W_A) \hat{y} \\
&= D \hat{x} + 0 \hat{y}
\end{align}
Now we resolve $\vec{R_A}$ into components that are parallel and perpendicular to the slope
\begin{align}
\vec{R}_{parallel} &= D \hat{x} \\
\vec{R}_{perpendicular} &= 0 \hat{y}
\end{align}
We know the direction of static friction is opposite to $\vec{R}_{parallel}$, so $\hat {Fr_{A}} = -\hat{x} $. Done :)
Non-inertial frame
Start with the same as before
We add pseudo forces to all objects, such that the resultant force on A is zero, which is what one would expect in a frame, in which it's not moving. However, the resultant force on the table is now not zero.
The resultant force on the table is the vector sum of all forces on it
\begin{align}
\vec{R_{T}} &= \vec{P_{horz}} + \vec{N_{TG}} + \vec{W_T} + \vec{W_A} + \vec{N_{AG}} \\
&= -P_{horz} \hat{x} + (N_{TG} - W_T + W_A - N_{AG})\hat{y} \\
&= -D \hat{x} + 0 \hat{y}
\end{align}
Resolving this into components:
\begin{align}
\vec{R}_{T, \ parallel} &= -D \hat{x} \\
\vec{R}_{T, \ perpendicular} &= 0 \hat{y}
\end{align}
The direction of static friction on the table is opposite $\vec{R}_{T, \ parallel}$:
$$\vec{Fr}_{TA} = -\vec{R}_{T, \ parallel} = -(-D) \hat{x} = D \hat{x}$$
Every action has an equal and opposite reaction, so as A is exerting a frictional force on the table rightwards, the table must be exerting a frictional force leftwards on A and voila
$$
\hat{Fr}_{AT} = -\hat{Fr}_{TA} = -\hat{x}
$$
Edit 2: Answer to comment
To be honest, pseudo forces make my brain explode. There's a helpful example over on wikipedia. In the example, the person is analogous to block B and the seat is analogous to block A. If you imagine being block B it's like being accelerated in a car; you get pushed back into your seat (which is the pseudo-force). Then at constant velocity you feel like you're not moving, but the road is moving under you.
It's always worth bearing in mind these useful rules:
The magnitude of static friction is always less than or equal to the magnitude of the driving force
Frictional forces, like all forces, come in action-reaction pairs
You can take any frame of reference you like (in classical mechanics). After all, what's stopping you?
Pseudo-forces are just the result of coordinate transformations :)
I'll leave what happens in A's frame as a fun exercise for you. | {
"domain": "physics.stackexchange",
"id": 33480,
"tags": "newtonian-mechanics, forces, friction, free-body-diagram"
} |
Why do batteries specifically vent *hydrogen* in the event of abuse? | Question: When a battery is subject to overcharging or overdischarging (including attempts to recharge a primary cell), it may vent hydrogen. It seems that every type of electrochemical battery I've read about or used specifically emits hydrogen under these circumstances.
Why is hydrogen emitted in all cases, and not some other gas or chemical? What role does hydrogen play in electrochemical batteries that other chemicals can't?
Answer: Lead storage batteries use an electrolyte of sulfuric acid and water. On overcharging you are electrolysing the water with emission of $\ce{H2}$ in one electrode and $\ce{O2}$ in the other.
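The amount of hydrogen evolved can be estimated with Faraday's law, assuming (hedged: this is an idealised sketch) that all of the overcharge current goes into electrolysing water, with two electrons per $\ce{H2}$ molecule:

```python
# Mass of H2 produced when a given overcharge current runs for a given
# time, via Faraday's law of electrolysis.
F = 96485.0   # C/mol, Faraday constant
M_H2 = 2.016  # g/mol, molar mass of H2

def grams_h2(current_amps, hours):
    charge = current_amps * hours * 3600.0  # total charge in coulombs
    return charge / (2 * F) * M_H2          # 2 electrons per H2 molecule

print(round(grams_h2(1.0, 1.0), 3))  # roughly 0.038 g per amp-hour
```

A small mass, but at ~11 L/g of gas volume it is more than enough to vent, which is why sealed cells need pressure relief.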
Other batteries use molecules with H and Li. The idea is to use light elements so you can accumulate more charge for the weight. | {
"domain": "chemistry.stackexchange",
"id": 521,
"tags": "electrochemistry, hydrogen"
} |
Why do I see a rainbow when I look at Insulfilm with my sunglasses? | Question: I'm wearing glasses with a sunglass clip-on. This means I have my regular glasses and, on top of them, I have a second pair of lenses that work as sunglasses and attach to my regular glasses using tiny magnets.
I'm also in a car, and I'm looking through a window layered with Insulfilm. Curiously, when I look through the Insulfilm with the sunglasses on, I see a rainbow pattern. If I keep my glasses on, but remove the sunglass clip-on, the rainbow is gone. If I look through a window without Insulfilm, the rainbow also seems to be gone. If I look through the Insulfilm with the sunglasses clip-on, but without my regular glasses, the rainbow is there.
I believe my sunglasses are polarized, and the rainbow fades away when I tilt my head a little.
Why exactly is this phenomenon happening? I guess it has something to do with polarization due to the rainbow fading when I tilt my head, but I can't figure out why it is a rainbow and why I can't see it without the sunglasses.
Answer: Trying not to introduce math:
Yes, it looks like your sunglasses are polarized.
What you are seeing is one of the possible behaviours of a birefringent material called photoelasticity. Essentially, birefringence means that the material interacts with light in a way that depends on its orientation. This is something that often happens in crystals that are very ordered structures, or polymer materials (materials made of long molecules). Insulfilm is likely mostly made of polymers. And films are usually made by stretching or pressing an initial block of material until it makes a film of the right thickness. After this process, the material tends to be stressed, and polymer chains tend to align in a particular direction. This introduces the orientation-dependent behaviour at the molecular level that leads to birefringence because light interacts with the molecules as it passes through the material.
Now, why can you see the effect only through polarizing glasses and not with your naked eyes or through regular glasses? The reason is that, to see these colorful birefringent effects, you need light that itself has orientation-dependent properties (I mean more than just the orientation introduced by its direction of propagation). In fact, the usual setup to see the rainbows in birefringent materials is to place the material between two polarizers. You already wear one, and light that arrives on the film is probably already partly polarized (for example by the sky). With a second polarizer you would be able to make the rainbow pattern more contrasted.
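The rainbow itself comes from the wavelength dependence of the effect. As a minimal sketch (film parameters invented for illustration): for a birefringent film between two crossed polarizers with its axis at 45°, the transmitted fraction is $\sin^2(\delta/2)$ with retardance $\delta = 2\pi\,\Delta n\,d/\lambda$, so different colors are transmitted differently:

```python
import math

# Transmission of a birefringent film (birefringence dn, thickness d)
# between crossed polarizers, film axis at 45 degrees to the polarizers.
def transmission(wavelength_nm, dn=0.005, thickness_nm=100_000):
    delta = 2 * math.pi * dn * thickness_nm / wavelength_nm  # retardance
    return math.sin(delta / 2) ** 2

for wl in (450, 550, 650):  # blue, green, red
    print(wl, round(transmission(wl), 3))
```

Since stretched film has spatially varying stress (and hence varying $\Delta n\,d$), each patch of the window favors a different color, producing the rainbow pattern.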
You can have a look at https://en.wikipedia.org/wiki/Birefringence for a broad overview of birefringence, and more specifically photoelasticity in that same page. | {
"domain": "physics.stackexchange",
"id": 95302,
"tags": "optics, visible-light, everyday-life, polarization, birefringence"
} |
Definition of Sexual Selection? | Question: How do you define Sexual Selection (SS)?
(One might want to subdivide SS into intra- and inter- SS to answer)
Is SS clearly different from Natural Selection (NS)?
Is SS nested within NS or are NS and SS two different and (anti- or not) parallel processes?
Are the evolutionary processes due to sexual conflict SS processes?
Sexual conflict arises when the reproductive interests of males and females are not aligned, generating sex-specific selection on shared traits such as mating rate (from this article)
(One may want to subdivide sexual conflict into intralocus and interlocus conflict to answer)
Are the evolutionary processes of sexually antagonistic genes SS processes?
Sexually antagonistic genes are those genes where one allele is favored in females while another allele is favored in males (my own definition)
Can SS be accurately defined or is there a continuum of kinds or processes from complete SS processes to complete NS processes?
Stated differently, is SS defined like the color orange (a continuum) or like a square and a triangle (no continuum)?
(I talk here about kinds of processes, not about the relative importance of SS and NS in explaining the evolutionary dynamic of a trait or a locus)
If you define SS based on some other concepts that might be subject to various definition, please define this other concept. For example, if one says "SS occurs on a trait whenever this trait undergoes directional selection due to sexual competition", he will necessarily need to define "sexual competition" as well.
Below is a list of scenarios where you can answer by yes or no whether SS is involved in the evolution of some feature. In all of these scenarios, we'll consider cases where the males have to find a mate…
The males need good ears to find the females before other males. Is SS involved in the adaptation of hearing abilities?
The males need good flying abilities (supposing we talk about flying animals) to have access to females before another male. Is SS involved in the adaptation to good flying abilities?
The males need good flying abilities to have access to females, although there is not much between-male competition because the vast majority of females will never be fertilized. Is SS involved in the adaptation to good flying abilities?
The males need good flying abilities to be able to fly a long time without spending too much energy, otherwise they will die before finding a female. Is SS involved in the adaptation to good flying abilities?
The males need good flying abilities to be able to fly a long time without spending too much energy, otherwise they will die before finding a female. There are quite a few females, but the competition to find a female that is still unmated is very rough. Is SS involved in the adaptation to good flying abilities?
The males need good camouflage because when they fly to find a mate they may get eaten by a predator and die before reproducing. Is SS involved in the adaptation to good camouflage?
The males need good camouflage because when they fly to find a mate they may get eaten by a predator and die before reproducing. Here the difference is that the predators are attracted to the spot if they can sense many males, so the presence of the other males influences a given male's probability of finding a mate. Is SS involved in the adaptation to good camouflage?
Answer:
Is SS clearly different from Natural Selection (NS)?
Is SS nested within NS or are NS and SS two different and (anti- or not) parallel processes?
Darwin, in The Descent of Man, and Selection in Relation to Sex, defined sexual selection as a type of selection that "depends on the advantage which certain individuals have over other individuals of the same sex and species, in exclusive relation to reproduction." Throughout, he distinguishes between natural and sexual selection. This suggests to me that Darwin thought of them as different processes. A spot check of various evolutionary texts shows the authors all using some variation of Darwin's definition.
In my opinion, the two are clearly different but not necessarily separate. Both NS and SS affect fitness, a major component of the evolutionary process. I tend to think of SS as nested within NS. NS affects fitness indirectly. Beneficial traits increase survivorship. The longer individuals survive, the more reproductive opportunities they will have and fitness will, on average, increase. But survivorship does not guarantee reproduction. Thus, I think natural selection acts indirectly on reproductive success.
SS operates directly on reproductive success because males selected by females will reproduce, increasing their fitness. Sex is, after all, a natural process. In my view, female choice is the "environment" of the male (as far as reproduction) so variation of male traits in the female choice "environment" differentially affects the fitness of the males. That is natural selection.
Is the evolutionary processes due to sexual conflict are SS processes?
Absolutely. Sexual selection ultimately arises because of a difference between genders of energy investment in gametes. Males produce lots of gametes at relatively little energetic cost. Females produce relatively few gametes with lots of energetic cost (think of tiny sperm vs large, yolk-filled eggs). It benefits males to mate with as many females as possible because, even if a mating produces no offspring, he lost very little of his energy investment. Females will be choosier because they can't afford to waste gametes. Thus, there is an inherent conflict of interest between males and females.
Are the evolutionary processes of sexually antagonistic genes SS processes?
Sexually antagonistic genes are those genes where one allele is favored in females while another allele is favored in males (my own definition)
Are these genes related exclusively to reproduction? If so, then that would be consistent with the original concept of SS. However, sexually antagonistic genes are not necessarily the result of or a cause of SS. For example, Rice (1992) argued that sexually antagonistic genes may explain the evolutionary reduction of recombination between primitive sex chromosomes.
Rice, W.R. 1992. Sexually antagonistic genes: Experimental evidence. Science 256: 1436-1439.
Can SS be accurately defined or is there a continuum of kinds or processes from complete SS processes to complete NS processes?
I think SS is accurately defined, but whether it is a type of NS or not is (I think) open for debate. As I described above, I think of SS as a particular type of NS, one that operates directly on reproductive success rather than indirectly.
Below is a list of scenarios where you can answer by yes or no whether SS is involved in the evolution of some feature. In all of these scenarios, we'll consider cases where the males have to find a mate…
The males need good ears to find the females before other males. Is SS involved in the adaptation of ears abilities?
Depends. Do males and females also use their ears for other purposes, such as detecting prey or predators? Is hearing ability much greater in males than in females? Darwin specifically addresses such a scenario. (Last sentence of page 256, first part of page 257.)
I think these ideas play out for most of your scenarios. It depends on if the traits differ between the sexes (not specified in your scenarios) and the traits lead directly to increased male reproductive success. But I will address one more.
The males need good camouflage because when they fly to find a mate they may get eaten by a predator and die before reproducing. Here the difference is that the predators are attracted to the spot if they can sense many males. So the presence of the other males influences the probability of finding a mate of a given male. Is SS involved in the adaptation to good flying abilities?
In this case, I would say not SS. The camouflage is necessary for survival, not direct reproductive success. This is natural selection. Increasing the probability of finding a mate is not the same as guaranteed reproduction.
"domain": "biology.stackexchange",
"id": 2594,
"tags": "evolution, genetics, natural-selection, sexual-selection, definitions"
} |
Data Access Layer Code | Question: Wondering if I could do this a little bit better. Currently, my DAL has a blank constructor that sets the connection string from the web.config.
private string cnnString;
public DAL()
{
cnnString = ConfigurationManager.ConnectionStrings["ApplicationServices"].ConnectionString;
}
Then each method in here is mapped directly to a stored procedure, and they ALL look something like this:
public bool spInsertFeedback(string name, string subject, string message)
{
int rows = 0;
SqlConnection connection = new SqlConnection(cnnString);
try
{
connection.Open();
SqlCommand command = new SqlCommand("[dbo].[spInsertFeedback]", connection);
command.CommandType = CommandType.StoredProcedure;
// params
SqlParameter messageName = new SqlParameter("@name", SqlDbType.VarChar);
messageName.Value = name;
SqlParameter messageSubject = new SqlParameter("@subject", SqlDbType.VarChar);
messageSubject.Value = subject;
SqlParameter messageText = new SqlParameter("@message", SqlDbType.VarChar);
messageText.Value = message;
// add params
command.Parameters.Add(messageName);
command.Parameters.Add(messageSubject);
command.Parameters.Add(messageText);
rows = command.ExecuteNonQuery();
}
catch
{
return (rows != 0);
}
finally
{
connection.Close();
}
return (rows != 0);
}
Obviously some return a DataSet or a list of an object or something, where as this one just returns whether or not any rows were affected.
However, each method does things this way, and I just feel like I have a lot of redundant code, and I'd like to simplify it. From the other classes, I'm calling the DAL like this:
DAL dal = new DAL();
bool success = dal.spInsertFeedback(name, subject, message);
return Json(success);
Thanks guys.
Answer: I would not recommend putting your dal code in a try-catch block. At least you should be logging or doing something about it; currently you're just returning false (or no value). If an exception occurs, let it happen, or put a try-catch on the function that calls it. Remember that exceptions vary (did the db server stop? was the sql malformed? these are all different cases.)
Using using is better practice for all IDisposable objects, so embrace it. If you run Visual Studio's code analysis tools (Ultimate Edition only, I guess) it would also tell you to do so.
My version of your code is similar to Jeff's (which I wanted to vote up but didn't have enough rep yet) with an exception, SqlCommand is also disposable and can/should be used with using block.
public bool spInsertFeedback(string name, string subject, string message)
{
int rows = 0;
using (var connection = new SqlConnection(cnnString))
{
connection.Open();
using (var command = new SqlCommand("[dbo].[spInsertFeedback]", connection))
{
command.CommandType = CommandType.StoredProcedure;
// params
SqlParameter messageName = new SqlParameter("@name", SqlDbType.VarChar);
messageName.Value = name;
SqlParameter messageSubject = new SqlParameter("@subject", SqlDbType.VarChar);
messageSubject.Value = subject;
SqlParameter messageText = new SqlParameter("@message", SqlDbType.VarChar);
messageText.Value = message;
// add params
command.Parameters.Add(messageName);
command.Parameters.Add(messageSubject);
command.Parameters.Add(messageText);
rows = command.ExecuteNonQuery();
}
}
return (rows != 0);
}
Also, I would suggest using connectionString instead of cnnString, which is not a proper name.
"domain": "codereview.stackexchange",
"id": 377,
"tags": "c#"
} |
Why is the size of Al3+ less than that of Li+? | Question:
Why is it so that the size of $\ce{Al^{3+}}$ is lesser than $\ce{Li+}$?
I'm a bit confused about this one as the aluminium(III) ion is composed of 10 electrons and 13 protons, while the lithium ion has only 2 electrons and 3 protons.
Electronic configuration of $\ce{Al^{3+}}$ is $\mathrm{1s^2 2s^2 2p^6}$ and that of $\ce{Li+}$ is $\mathrm{1s^2}$.
Now, it seems obvious that the size of $\ce{Al^{3+}}$ should be greater than that of $\ce{Li+}$, but it is actually the other way around. Why is this? I think that it is a kind of exception, but if anyone can explain the reason behind this, it will help me in my upcoming tests.
Answer:
Now, it seems obvious that the size of Al3+ should be greater than that of Li+
No it does not. It is true that electrons repel each other; however, they are all attracted towards the nucleus. The real 'size' of an atom or ion is the result of a compromise between electron-electron repulsion and electron-nucleus attraction.
I think that it is a kind of exception
Obviously, it is not.
Why is this?
Commonly, there are three considerations used to rationalize basic trends in atomic/ionic radii:
electron-electron repulsion (negative charges repel each other)
electron-nucleus attraction (negative charges are attracted towards positive ones)
electron shielding (electrons in shells closer to the nucleus 'shield' outer electrons from some of the charge of the nucleus)
While electrons repel each other, increasing nuclear charge more than compensates for it; so if not for the shell organisation of electrons, atoms would monotonically decrease in size toward the end of the periodic table, as is observable within each of periods 1-3.
However, inner electronic shells effectively shield the outer electrons from the nucleus, so they 'see' a nucleus that is a) larger in size and b) has charge equal to nuclear charge minus number of electrons in inner shells. Given that, outer electrons 'see' a nucleus with low charge density, so outer shells progressively increase in size within the same column.
In addition to that, electrons still repel each other, so positive ions are smaller than neutral atoms, and neutral atoms are smaller than negative ions.
Now, consider $\ce{Li+}$ and $\ce{Al^{3+}}$. For the former, 2 outer electrons feel the charge of 3 protons. For the latter, 8 outer electrons feel an effective charge of 11 (13 protons minus the 2 inner-shell electrons). Such a large charge easily beats all other factors, so the latter cation is much smaller than the former.
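As a toy illustration of the bookkeeping in the last paragraph, here is a hypothetical Python sketch of the simple screening model used above (the function name is mine, not any standard API): outer electrons 'see' the nuclear charge minus the inner-shell electron count.

```python
# Toy screening model from the answer:
# Z_eff = (number of protons) - (number of inner-shell electrons).
def effective_nuclear_charge(protons, inner_electrons):
    return protons - inner_electrons

# Li+ (1s^2): the 2 outer electrons have no filled shell beneath them.
z_li = effective_nuclear_charge(3, 0)

# Al3+ (1s^2 2s^2 2p^6): the 8 outer (n=2) electrons are screened
# by the 2 electrons in the 1s shell.
z_al = effective_nuclear_charge(13, 2)

print(z_li, z_al)  # 3 11
```

This only reproduces the numbers quoted in the answer; a more careful treatment would use something like Slater's rules.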
"domain": "chemistry.stackexchange",
"id": 1646,
"tags": "periodic-trends"
} |
How to fit an odd relationship with a function? | Question: Let's say there is a function $f$ such that $y = f(x)$. However, if $f$ is a piecewise function such that:
$$y = \begin{cases} 0 \quad x \leq 0 \\ 1 \quad x >0\end{cases} $$
How do I fit $f$ in that case?
Many thanks, guys.
Answer: The definition you gave is the definition of the function. This is called the Heaviside Step Function. There is no simple analytic way to express it (e.g. as a ratio, product, or composition of trigonometric functions, exponentials, or polynomials). Note that it is neither continuous nor differentiable at x = 0.
There are a couple of cool ways to represent it. The coolest and most intuitive way is as an integral of a Dirac Delta Function:
$$
H(x) = \int_{-\infty}^x { \delta(s)} \, \mathrm{d}s
$$
Note, though, that a Dirac Delta Function is itself not an "official" function, since it is not well-defined at x = 0. Check out Distribution Theory for some cool info on weird "functions" like this.
Now, I think you may be trying to approximate this function, because you asked how to "fit" it. Taken straight from Wikipedia:
For a smooth approximation to the step function, one can use the logistic function
$$
H(x) \approx \frac{1}{2} + \frac{1}{2}\tanh(kx) = \frac{1}{1+\mathrm{e}^{-2kx}},
$$
where a larger k corresponds to a sharper transition at x = 0. | {
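A quick Python sketch of this smooth approximation (the function name is mine); note that the tanh and logistic forms above are the same function, and that a larger k sharpens the transition:

```python
import math

# Smooth approximation to the Heaviside step: H(x) ~ 1 / (1 + exp(-2*k*x)).
def smooth_step(x, k):
    return 1.0 / (1.0 + math.exp(-2.0 * k * x))

# At x = 0 the approximation always gives the midpoint value 1/2,
# and away from the origin it saturates towards 0 or 1.
print(smooth_step(-3.0, 10))  # very close to 0
print(smooth_step(0.0, 10))   # 0.5
print(smooth_step(3.0, 10))   # very close to 1

# Identity with the tanh form: 1/2 + 1/2*tanh(k*x) == 1/(1 + exp(-2*k*x)).
print(abs(smooth_step(1.2, 3) - (0.5 + 0.5 * math.tanh(3 * 1.2))) < 1e-12)  # True
```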
"domain": "datascience.stackexchange",
"id": 330,
"tags": "statistics, predictive-modeling, regression"
} |
Drawing tangent vectors to the Bloch sphere with qutip | Question: I need to plot drawings of qubit dynamics on the Bloch sphere. I know QuTip allows to do such drawings but I specifically need to represent evolution velocities on the Bloch sphere so I need to draw tangent vectors. Is there a way I can do it with QuTip? More generally it would be good to be able to draw affine vectors with QuTip. Do you know of any way I can do it? If this cannot be done with QuTip do you know of any other software that allows me to do it?
Answer: First of all, qutip is not a visualisation library, even though it does provide some visualisation functionalities, mostly leveraging matplotlib.
However, because qutip does provide handy functionalities to plot Bloch spheres and points on it, it does make sense to ask how one can tweak such functionalities to for example add tangent vectors to the bloch sphere.
Reading the relevant doc page, this functionality does not seem to have been implemented.
However, having a look at the relevant source code, it does not seem particularly difficult to add manually additional vectors to the Bloch sphere.
In particular, at least in the current version of qutip, the vectors are added via the method plot_vectors, which creates the arrows using an Arrow3D object.
One can add any other vector to the sphere by simply adding new Arrow3D objects to the underlying matplotlib axes object.
Here is an example of how to do this:
import matplotlib.pyplot as plt
%matplotlib inline
import qutip
b = qutip.Bloch()
b.render(b.fig, b.axes)
new_arrow = qutip.bloch.Arrow3D(xs=[1, 1], ys=[0, .5], zs=[0, .5],
mutation_scale=b.vector_mutation,
lw=b.vector_width, arrowstyle=b.vector_style, color='blue')
b.axes.add_artist(new_arrow)
which gives
note that we need to call b.render explicitly instead of b.show because we need the axes object to be created, but the plot not to be drawn, before we add the new arrow.
Just change the coordinates given to the parameters xs, ys, zs to decide what vectors to draw.
As a final more or less unrelated note, you might also be interested in the drawing library plotly, which allows to build very nice 3d plots (see e.g. these examples). | {
"domain": "quantumcomputing.stackexchange",
"id": 709,
"tags": "programming, bloch-sphere, qutip"
} |
Homemade deep learning library: numerical issue with relu activation | Question: For the sake of learning the finer details of a deep learning neural network, I have coded my own library with everything (optimizer, layers, activations, cost function) homemade.
It seems to work fine when benchmarking it on the MNIST dataset using only sigmoid activation functions.
Unfortunately I seem to get issues when replacing these with relus.
This is what my learning curve looks like for 50 epochs on a training dataset of ~500 examples:
Everything is fine for the first ~8 epochs and then I get a complete collapse on the score of a dummy classifier (~0.1 accuracy). I checked the code of the relu and it seems fine. Here are my forward and backward passes:
def fprop(self, inputs):
    return np.maximum(inputs, 0.)

def bprop(self, inputs, outputs, grads_wrt_outputs):
    derivative = (outputs > 0).astype(float)
    return derivative * grads_wrt_outputs
The culprit seems to be the numerical stability of the relu. I tried different learning rates and many parameter initializers with the same result. Tanh and sigmoid work properly. Is this a known issue? Is it a consequence of the non-continuous derivative of the relu function?
Answer: I don't know exactly what the problem is, but maybe you could try checking the value of your gradients and see if they change a lot around the 8th epoch. | {
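One concrete way to do that check is to compare the bprop rule above against a finite-difference estimate of the gradient. This is a hedged sketch, valid only away from the kink at x = 0; all names besides fprop/bprop are illustrative:

```python
import numpy as np

def fprop(inputs):
    return np.maximum(inputs, 0.)

def bprop(inputs, outputs, grads_wrt_outputs):
    derivative = (outputs > 0).astype(float)
    return derivative * grads_wrt_outputs

rng = np.random.RandomState(0)
x = rng.randn(100)
x = x[np.abs(x) > 1e-3]    # stay away from the non-differentiable point at 0
g_out = rng.randn(x.size)  # pretend upstream gradients

analytic = bprop(x, fprop(x), g_out)

# Central finite differences: d relu(x)/dx ~ (relu(x+eps) - relu(x-eps)) / (2*eps)
eps = 1e-6
numeric = g_out * (fprop(x + eps) - fprop(x - eps)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # should be ~0 (floating-point noise)
```

If this check passes but training still collapses, the problem is more likely elsewhere (e.g. exploding gradients or learning-rate issues), which is consistent with monitoring the gradient values epoch by epoch.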
"domain": "datascience.stackexchange",
"id": 2819,
"tags": "machine-learning, deep-learning, backpropagation, activation-function, numerical"
} |
How do you differentiate an uncertainty function? | Question: The uncertainty of an ensemble of particles can be represented as (from the Schrodinger uncertainty relation):
$g(t) = (\langle p^2\rangle(t)-(\langle p\rangle(t))^2)(\langle q^2\rangle(t)-(\langle q\rangle(t))^2)-(\langle pq\rangle(t)-\langle p\rangle(t)\langle q\rangle(t))^2$
How do you differentiate this wrt time?
Answer: Since all $\langle \cdots \rangle$ are scalar functions of $t$, you will have to differentiate the expression with the usual differentiation rules. How this looks in detail will depend on your problem at hand, i.e. the form of the $\langle \cdots \rangle$ expressions. | {
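To make "the usual differentiation rules" concrete, write the first factor pair as a product and differentiate term by term (the covariance term is handled the same way, via the chain rule):

$$
\frac{d}{dt}\Big[\big(\langle p^2\rangle - \langle p\rangle^2\big)\big(\langle q^2\rangle - \langle q\rangle^2\big)\Big]
= \Big(\frac{d\langle p^2\rangle}{dt} - 2\langle p\rangle\frac{d\langle p\rangle}{dt}\Big)\big(\langle q^2\rangle - \langle q\rangle^2\big)
+ \big(\langle p^2\rangle - \langle p\rangle^2\big)\Big(\frac{d\langle q^2\rangle}{dt} - 2\langle q\rangle\frac{d\langle q\rangle}{dt}\Big)
$$

The remaining ingredient, the time derivative of each individual expectation value, comes from the problem at hand; in standard quantum mechanics, for example, from the Ehrenfest theorem $\frac{d}{dt}\langle A\rangle = \frac{i}{\hbar}\langle [H,A]\rangle + \langle \partial A/\partial t\rangle$.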
"domain": "physics.stackexchange",
"id": 41852,
"tags": "heisenberg-uncertainty-principle, differentiation"
} |
Electric field and electric potential for spherical shell | Question: We know that the electric field inside a spherical shell is 0, but the electric potential V inside a spherical shell is kQ/R (Q = charge on the spherical shell and R = radius of the shell).
We also know that V = Ed, where d is the distance of the point where we want to find the electric field or the potential.
My doubt is that, for a thin spherical shell, if we put in the value E = 0 (i.e. for a point inside the shell), then we get V = 0, but from the formula above we are clearly not getting V = 0. Where am I mistaken?
Answer:
V=Ed
is only valid for constant or uniform electric field.
The correct formula is $V=-\int \vec E \cdot d\vec l$. It gives a potential difference, not an absolute potential. The potential difference between any two points inside the shell is 0, but not their absolute potential.
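As a short worked example of that integral, put the reference point at infinity and integrate in to a point at $r < R$, using $E = kQ/r'^2$ outside the shell and $E = 0$ inside:

$$
V(r<R) = -\int_\infty^R \frac{kQ}{r'^2}\,dr' - \int_R^r 0\,dr' = \frac{kQ}{R}
$$

So E = 0 inside the shell makes V constant there, pinned to its surface value kQ/R, rather than zero.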
"domain": "physics.stackexchange",
"id": 70065,
"tags": "potential"
} |
Is group theory used at all in the study of supersymmetry? | Question: I have encountered Lie groups in gauge symmetry, but I was wondering if anyone could give me some specific examples of mathematical group theory concepts being used in determining the properties of SUSY particles, if any.
Answer: In general, whenever one is describing the symmetries of a theory in physics, one is automatically dealing with group theory, and supersymmetry is not an exceptional case.
The Coleman-Mandula no-go theorem in essence states that any QFT (under certain assumptions) can only realise symmetries being some direct product of the Poincaré group and an internal group. Supersymmetry naturally arises in this context by considering spinor generators. All this is set in the language of group and representation theory.
For the more mathematically inclined, as explained in Supersymmetry and Supergravity, one can arrive at supersymmetry by considering graded Lie algebras, and this extends to how superspace is naturally considered after studying graded vector spaces.
Though, the notion of a superfield and superspace arises directly from group theory too. If we think of the SUSY algebra as one of anti-commuting parameters, we can define a group element,
$$G(x,\theta,\bar\theta) = \exp \left(-ix^mP_m + \theta Q + \bar\theta \bar Q\right)$$
and by interpreting group multiplication as a motion in the parameter space, one can define differential operators $Q$ and $\bar Q$ for left multiplication (as well as for right multiplication). Naturally we arrive at the question: what do these operators act on? The answer is functions on this parameter space - superfields.
In the same way we can build Lorentz scalars, we can build objects transforming suitably under supersymmetry transformations with these superfields (made up of component fields), allowing us to construct Lagrangians, but the whole idea is first founded upon group theory. | {
"domain": "physics.stackexchange",
"id": 45102,
"tags": "quantum-field-theory, group-theory, supersymmetry, representation-theory"
} |
LeetCode 124: Binary Tree Maximum Path Sum | Question: I'm posting my code for a LeetCode problem. If you'd like to review, please do so. Thank you for your time!
Problem
Given a non-empty binary tree, find the maximum path sum.
For this problem, a path is defined as any sequence of nodes from some starting node to any node in the tree along the parent-child connections. The path must contain at least one node and does not need to go through the root.
Example 1:
Input: [1,2,3]
1
/ \
2 3
Output: 6
Example 2:
Input: [-10,9,20,null,null,15,7]
-10
/ \
9 20
/ \
15 7
Output: 42
Inputs
[1,2,3]
[-10,9,20,null,null,15,7]
[-10,9,20,null,null,15,7,9,20,null,null,15,7]
[-10,9,20,null,null,15,7,9,20,null,null,15,720,null,null,15,7,9,20,null,null,15,7]
[-10,9,20,null,null,15,7,9,20,null,null,15,720,null,null,15,7,9,20,null,null,15,7999999,20,null,null,15,7,9,20,null,null,15,720,null,null,15,7,9,20,null,null,15,7]
Outputs
6
42
66
791
8001552
Code
#include <cstdint>
#include <algorithm>
struct Solution {
int maxPathSum(TreeNode* root) {
std::int_fast64_t sum = INT_FAST64_MIN;
depthFirstSearch(root, sum);
return sum;
}
private:
static std::int_fast64_t depthFirstSearch(
const TreeNode* node,
std::int_fast64_t& sum
) {
if (!node) {
return 0;
}
const std::int_fast64_t left = std::max(
(std::int_fast64_t) 0,
depthFirstSearch(node->left, sum)
);
const std::int_fast64_t right = std::max(
(std::int_fast64_t) 0,
depthFirstSearch(node->right, sum)
);
sum = std::max(sum, left + right + node->val);
return std::max(left, right) + node->val;
}
};
Answer: There's not much to say about your answer, it looks fine! One could quibble over the names of variables, maybe left and right could be named left_sum and right_sum for example, and you could've used auto for the type of those two variables. But other than that I think there is nothing that can be improved. | {
"domain": "codereview.stackexchange",
"id": 38972,
"tags": "c++, beginner, algorithm, programming-challenge, c++17"
} |
Outliers handling | Question: I have a large dataset of >100 columns with nearly all types of data.
I want to remove outliers from my dataset for which purpose I've decided to use IQR.
The problem is that even when I apply quantiles of 0.25/0.75, I still get a significant amount of outliers in columns like ClientTotalIncome, etc. Further, by doing that, I eliminate more than 90% of the data.
My code in Python for outliers removal is as follows:
num_train = train.select_dtypes(include=['number'])
cat_train = train.select_dtypes(exclude=['number'])
Q1 = num_train.quantile(0.25)
Q3 = num_train.quantile(0.75)
IQR = Q3 - Q1
idx = ~((num_train < (Q1 - 1.5 * IQR)) | (num_train > (Q3 + 1.5 * IQR))).any(axis=1)
train_cleaned = pd.concat([num_train.loc[idx], cat_train.loc[idx]], axis=1)
Any ideas?
Answer: It's not always a good idea to remove data from your dataset.
In some circumstances - and income is a good example - your data will be skewed / long-tailed and so will lie outside of the interquartile range. This doesn't imply that there is anything wrong with the data, but rather that there is a disparity between observations.
Nevertheless, if you are set on removing observations perhaps you should consider scaling your features prior to determining which observations are outliers. For example, taking the log of a feature and then applying your outlier removal based on the log(variable).
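A minimal numpy-only sketch of that log-scaling idea (the income figures are made up for illustration):

```python
import numpy as np

# Apply the 1.5*IQR rule on log-scaled values instead of the raw ones
# (log1p also copes with zero incomes).
income = np.array([20_000, 35_000, 42_000, 55_000, 61_000,
                   75_000, 90_000, 120_000, 5_000_000])

log_income = np.log1p(income)
q1, q3 = np.percentile(log_income, [25, 75])
iqr = q3 - q1
keep = (log_income >= q1 - 1.5 * iqr) & (log_income <= q3 + 1.5 * iqr)

print(income[keep])  # the 5,000,000 observation is flagged; the rest survive
```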
Don't forget that IQR doesn't carry over well to categorical and ordered features. | {
"domain": "datascience.stackexchange",
"id": 4493,
"tags": "python, outlier"
} |
AT command in SIM900A GSM/GPRS module to find out originating address of an SMS | Question: I am using SIM900A for some purpose and want to know the number of the sender from where a message comes. I am unable to find the specific AT command related to receiving message which give me number from where latest message comes.
I have used AT+CNMI (it corresponds to notifications regarding the latest received message), but I am unable to find the sender number.
I have seen that AT+CMGL=<stat>[,<mode>] will give you a string which will have oa, i.e. the originating address, and once that is stored in a string I can easily parse it out, but I don't know the data format of that string. I need help or suggestions if somebody can help me out with any other possible solution.
Answer: Text mode vs PDU mode
The SIM900 module supports both text and PDU mode. Please note that some AT commands have different syntax depending on which mode is currently active.
To check which mode is currently active (0: PDU mode, 1: Text mode):
AT+CMGF?
Retrieving SMS's
You can use the CMGL command to retrieve unread, read, stored unsent, stored sent, or all messages.
If text mode is currently selected:
AT+CMGL=STAT,MODE
Parameters:
STAT:
"REC UNREAD" Received unread messages
"REC READ" Received read messages
"STO UNSENT" Stored unsent messages
"STO SENT" Stored sent messages
"ALL" All messages
MODE: (OPTIONAL)
0 Normal
1 Not change status of the specified SMS record
In other words, the following command should print all SMS messages:
AT+CMGL="ALL"
If PDU mode is currently selected:
AT+CMGL=STAT,MODE
Parameters:
STAT:
0 Received unread messages
1 Received read messages
2 Stored unsent messages
3 Stored sent messages
4 All messages
MODE: (OPTIONAL)
0 Normal
1 Not change status of the specified SMS record
In other words, the following command should print all SMS messages:
AT+CMGL=4
Example
If text mode is currently selected, and a CMGL command is issued, the following is an example of what could be expected (note there's a line break before the actual message starts).
AT+CMGL="ALL"
+CMGL: 1,"REC READ","+85291234567",,"07/05/01,08:00:15+32",145,37
It is easy to list SMS text messages.
1 : Message index
"REC READ" : Message status (it's been read before)
"+85291234567" : Source number (ie the person that sent you the sms)
"07/05/01,08:00:15+32" : Service center timestamp
145 : Type of address (0x91 indicates an international number)
37 : Length of message
Refer to Section 4.2.3, page 99, of the SIM900 AT command set for more information.
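To illustrate the parsing step the question mentions, here is a hypothetical Python sketch (the regex and helper name are mine) that pulls the originating address out of each +CMGL header line of a text-mode response:

```python
import re

# Text-mode header:  +CMGL: <index>,"<stat>","<oa>",...  -- the third
# quoted field is the originating address (sender number).
CMGL_HEADER = re.compile(r'^\+CMGL:\s*(\d+),"([^"]*)","([^"]*)"')

def senders(response):
    numbers = []
    for line in response.splitlines():
        match = CMGL_HEADER.match(line.strip())
        if match:
            numbers.append(match.group(3))
    return numbers

reply = '''+CMGL: 1,"REC READ","+85291234567",,"07/05/01,08:00:15+32",145,37
It is easy to list SMS text messages.'''

print(senders(reply))  # ['+85291234567']
```

On a microcontroller you would do the equivalent with plain string scanning, but the field layout is the same.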
External links
SIM900A AT Command set (V1.05: 2011-10-24)
Developers home — Tutorials regarding AT commands (I'd strongly recommend you take some time and go through the tutorials — especially sections 18 to 26) | {
"domain": "robotics.stackexchange",
"id": 218,
"tags": "arduino, microcontroller"
} |
Skin porosity in glans penis when erect | Question: (Apologies if this is a too weird/inappropriate question in this forum. I only thought that I ask it since it happens to be part of human life.)
The skin on a human typically blocks many substances that could be harmful for the organism. One area that is not covered by skin, however, is the "glans penis" (the tip of the penis).
Considering that the surface on this area seems to become more stretched during erection, it would look like erection causes the tip to allow more substances to pass through than otherwise.
Are there any studies about how erection affects the porosity in this area of the body?
Answer: So if you're interested in the detailed anatomy of the glans penis, I recommend this as a nice article to go to after you've checked general references. Opening two sentences are worth quoting here:
The human glans penis is covered by stratified squamous epithelium and
a dense layer of connective tissue equivalent to the dermis of typical
skin. Rete ridges of the epidermis are irregular and vary in height
depending on location, age, and presence or absence of a foreskin. The
papillary layer of the dermis blends into and is continuous with the
deep connective tissue forming the tunica albuginea of the corpus
spongiosum of the glans penis.
And as pointed out in The Last Word's answer, foreskin covers the penis if you look at humans in an evolutionary scale of time. The exception being an erection, which should lead to foreskin retraction (at least at some level).
So the question then becomes if the squamous epithelium and connective tissue are more prone to infection than the dermis. First, it's worth pointing out that the epidermis is not equally protective across the body. Think about the skin on your face vs. the bottom of your foot, or vs. a baby's.
I think comparing the glans penis directly to the surrounding tissue -- the prepuce (foreskin), fossa navicularis, and urethra -- makes the most sense. To make a plumbing analogy, is it the pipe (urethra), the fitting (glans penis), hole (fossa navicularis), or insulation (prepuce) that is most prone to infection?
Well, first we need to look at our pathogen, "what's doing the infecting?" For HIV, probably the urethra, but if you're interested in bacterial infection, the best things to look into are balanitis and balanoposthitis. That would be the inflammation of the glans or glans & prepuce. Here is a good review on infectious causes.
Unfortunately, there is going to be a strong sampling bias due to circumcision rates. Also, most studies are not going to be asking the specific question we are trying to answer here. And it would be hard to prove that infection occurred during the erection, not immediately before or after.
I have found little evidence of bacterial infection of the prepuce but not the glans penis. I do know of cases of the reverse however, which may indicate that the glans has a greater infection rate.
However, I think it's worth pointing out the difference is not likely to be drastic, and that the surface of glans has many similarities to skin. For example, it has an epidermis and dermis. It stretches quite well, and I think dryness will be an issue far before porosity. | {
"domain": "biology.stackexchange",
"id": 3237,
"tags": "sex"
} |
Is Palindrome subset of a regular language regular? | Question: Suppose we have $L$ being a regular language with alphabet $\Sigma$, if we define $M=\{ x \in \Sigma^{*} \mid xx^{R} \in L \}$, then we know that $M$ contains all half copies of palindrome strings of $L$, now here comes the question, is $M$ regular?
It is easy to see that when $L$ is finite, then $M$ is clearly also finite, hence regular. However, what if $L$ is infinite? I think it will be non-regular and tried the pumping lemma, but that does not work...
Answer: The subset of all palindromes in L is obviously not usually regular, take the simple example $a^*ba^*$ where the subset of palindromes $a^nba^n$ is not regular. But that is not the question. The question is about the set of all left halves of palindromes in L.
Assume you have an FSM for L (that is an FSM describing and defining L). You can take that FSM and use a simple algorithm to determine if w is in M:
Given a state S, define succ(S, a) as the state that the FSM will transition to from state S with input a, or nil if there is no transition for a. For a set of states T, define pred(T, a) as the set of states S such that succ(S, a) is not nil and an element of T.
To determine if w is in L, you would start with S = $S_0$. Then for every symbol s in w, you replace S with succ(S, s), exiting the loop if S is nil. Finally, w is in L if and only if S ≠ nil and S is an accepting state.
To determine if w is in M, you start with S = $S_0$ and T = set of accepting states. Then for every symbol s in w, you replace S with succ(S, s) and T with pred(T, s), exiting the loop if S is nil or T is empty. Finally, w is in M if and only if S ≠ nil and S is an element of T.
Instead of using this algorithm, you can quite easily construct an FSM for M. | {
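A minimal Python sketch of this procedure, with the DFA's transition function encoded as a dict (the names here are mine):

```python
# w is in M  iff  w w^R is in L: run the DFA forward from the start state
# while pulling the accepting set backwards with pred(); accept iff the
# forward state lands inside the backward set.
def in_M(w, delta, start, accepting):
    S = start
    T = set(accepting)
    for s in w:
        S = delta.get((S, s))  # succ(S, s); None if no transition
        T = {q for (q, a) in delta if a == s and delta[(q, a)] in T}  # pred(T, s)
        if S is None or not T:
            return False
    return S in T

# L = strings over {a, b} of even length: x x^R always lands in L, so M = Sigma*.
delta = {(0, 'a'): 1, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 0}
print(in_M('ab', delta, 0, {0}))    # True

# L = (aa)*: x x^R = a^(2|x|), which is always in L, so M = a*.
delta2 = {(0, 'a'): 1, (1, 'a'): 0}
print(in_M('aaa', delta2, 0, {0}))  # True

# L = {a}: x x^R has even length, so it can never equal "a"; M is empty.
delta3 = {(0, 'a'): 1}
print(in_M('a', delta3, 0, {1}))    # False
```

This runs in time linear in |w| (for a fixed automaton); as the answer notes, the same forward/backward idea can also be turned into an explicit FSM for M.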
"domain": "cs.stackexchange",
"id": 14704,
"tags": "formal-languages, regular-languages, finite-automata, pumping-lemma"
} |
Is it possible to be 'still' in space? | Question: So I was reading this answer about how galaxies are the fastest moving objects in the universe because space is expanding faster than the speed of light. This got me wondering, would it be possible to be 'still' inside of space so that galaxies are moving away from you and others towards you? If so, wouldn't this allow us to travel from place to place in the universe much quicker? I know that this wouldn't be something that's even close to possible today, but I was just curious if such a thing is possible. Thank you.
Answer: The so called expansion of the universe is not as trivial as most people think. What is happening, in fact, is that the distance between two points in space (note that I'm not talking about objects with velocities, but just coordinates in space) increases with time in a manner proportional to a given factor (in this case, the Hubble constant - which is actually not a constant in time, but let's ignore that for now).
This means that galaxies are not exactly moving away from a specific point with a particular velocity (i.e., a particular rest frame), but that the space between them is expanding. It is somewhat hard to wrap your head around the difference between these two situations, but this has to do with the geometry of space, and not with what's inside it.
It is widely accepted that the expansion of the universe is homogeneous and isotropic (the Cosmological Principle), which means that there is no "special" position (rest frame) on the universe, and whatever your velocity and your location, the universe will seem to expand the same way.
The best way to imagine this is to reduce to 2 dimensions expanding in a third dimension. For example, suppose that you live in a galaxy embedded on the surface (a 2D universe) of a balloon (completely unaware of the third dimension), with other galaxies distributed homogeneously. If, for some reason, the balloon is expanding isotropically and, every point of the surface will be moving away from every other point, so will every galaxy from each other. Think about this: is it possible to find a special, "still", point (without resorting to the elusive third dimension) where you will see galaxies moving from you in one side and galaxies moving towards you on the other side?
Our universe works similarly, but we're embedded on a three-dimensional space instead of a two-dimensional one. | {
"domain": "astronomy.stackexchange",
"id": 297,
"tags": "universe, speed"
} |
A variant of maximum matching: disjunctive constraints on the endpoints' degrees of edges in matching | Question: The question was first asked here. It describes the problem and a trivial greedy algorithm. Also, the accepted answer gave a proof of its NP-completeness.
Problem: Given a graph $G(V,E)$, find a subset of edges $M \subseteq E$. For each edge $(u,v) \in M$, $\left( \left(d(u) < c\right) \lor \left(d(v) < c\right) \right)$ holds, where $c=3$. $d(u)$ denotes the degree in $M$, i.e., $d(u)=|\{v : (u,v) \in M\}|$. We call this constraint a degree-constraint.
Output: The maximum sized (cardinality of edges) $M$.
The problem with $c=2$ is already NP-complete, as shown by Jukka Suomela. There is already a 2-approximation algorithm for general graphs. Jukka devised an $O(|V|c)$ DP for this problem when $G$ is restricted to trees.
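(To make the degree-constraint concrete, here is a tiny brute-force checker of my own — not part of the original question — that finds the largest valid $M$ on a small graph by exhaustive search:)

```python
from itertools import combinations

def valid(M, c):
    # Degree of each vertex within the chosen edge set M.
    deg = {}
    for u, v in M:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # Every edge needs at least one endpoint of degree < c in M.
    return all(deg[u] < c or deg[v] < c for u, v in M)

def brute_force(edges, c):
    # Largest valid subset, trying subsets from biggest to smallest.
    for k in range(len(edges), 0, -1):
        for M in combinations(edges, k):
            if valid(M, c):
                return list(M)
    return []

path = [(0, 1), (1, 2), (2, 3), (3, 4)]   # P_5
print(len(brute_force(path, 2)))           # 3: taking all four edges is invalid
```

Note that on a star $K_{1,4}$ with $c=3$ all four edges can be taken at once, since every edge has a leaf endpoint of degree $1 < c$.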
Answer: The case of trees and a constant $c$ seems to admit a straightforward dynamic programming algorithm.
You root your tree arbitrarily. You will start from the leaf nodes and propagate information towards the root. You will keep track of the following pieces of information for each subtree; here $u$ is a node and $T(u)$ is the subtree rooted at $u$:
$M_1(u)$ = best matching in $T(u)$ if $u$ has unlimited capacity,
$M_2(u)$ = best matching in $T(u)$ if $u$ has capacity $c - 1$,
$M_3(u)$ = best matching in $T(u)$ if $u$ has capacity $c$.
Note that $M_1(u)$ and $M_2(u)$ can be extended by adding the edge $(v,u)$, where $v$ is the parent of $u$.
All of these values are trivial to compute for a leaf node. Once you have computed these values for all subtrees of a node $v$, you can compute them for $v$; let $C$ consist of the children of $v$:
$M_1(v)$: For each $u \in C$ of $v$ we take either $M_2(u) + (v,u)$ or $M_3(u)$, whichever is better.
$M_2(v)$: We check all subsets $S \subseteq C$ of size at most $c - 2$. For each child $u \in S$ we take $M_1(u) + (v,u)$ or $M_2(u) + (v,u)$, whichever is better. For each child $u \notin S$ we take $M_1(u)$, $M_2(u)$, or $M_3(u)$, whichever is best.
$M_3(v)$: Similar to $M_2(v)$, but we consider subsets of size at most $c - 1$.
Everything is polynomial if $c$ is a constant. | {
"domain": "cstheory.stackexchange",
"id": 1062,
"tags": "cc.complexity-theory, graph-algorithms, approximation-algorithms, parameterized-complexity"
} |
Variadic Datablocks | Question: I would like to have my variadic template class reviewed.
First some explanation what it should do:
I'm writing an application where I read in blocks of data.
I know, before I read in the data, which formats to expect.
For example:
ID123.EnableValve := FALSE;
ID123.Position1 := 0;
ID123.Position2 := 2;
ID123.Position3 := 9;
Or:
UDT30.EnableValve := FALSE;
UDT30.This := 0;
UDT30.is := L#2;
UDT30.an := 9;
UDT30.Example := TRUE;
These data blocks share the following similarities:
Each line starts with an id (e.g. UDT30 or ID123).
This Device Id must be the same on each line. Otherwise we read a corrupt data block.
Then a type name follows (e.g. EnableValve or Position3).
The type names are unique. That means in one data block there is never
the same type twice (e.g. Position1 does not occur two times).
Then always the string := follows.
At the end the value follows (e.g. TRUE, 0, L#).
The count of lines is flexible (e.g. length = 4 or 5).
I want to do the following with the datablocks:
Reading in datablocks (from std::istream)
Access a specific row and modify its value (e.g. ID123.EnableValve := FALSE; becomes ID123.EnableValve := TRUE;)
Writing the datablock back (to std::ostream)
Also it must be possible to:
construct a new datablock out of Types
write the datablock (to std::ostream)
I want to inherit from this template and restrict the use of set and get for the rows. The inherited classes can decide with the protected set / get which lines they want to modify and which are just read only.
I used Google Test with Visual 2017 to create unit tests for the template.
Here you can see what the template can do.
Variadic_templatesTest.cpp
#include "pch.h"
#include "..\variadic_templates\Fake_types.h"
// hack to test protected methods in variadic datablock
// is there a better way??
#define protected public
#include "..\variadic_templates\Variadic_datablock.h"
#include <sstream>
TEST(Variadic_templatesTest, DefaultConstructor) {
Variadic_datablock t;
EXPECT_EQ(t.get_id(), std::string{});
}
TEST(Variadic_templatesTest, Constructor) {
std::string id{ "ID123" };
EnableValve enableValve{ "TRUE" };
Position1 position1{ "0" };
Position2 position2{ "2" };
Position3 position3{ "3" };
Variadic_datablock<EnableValve, Position1, Position2, Position3> t{
id,
enableValve,
position1,
position2,
position3
};
EXPECT_EQ(t.get_id(), id);
EXPECT_EQ(t.get_element<EnableValve>().m, enableValve.m);
EXPECT_EQ(t.get_element<Position1>().m, position1.m);
EXPECT_EQ(t.get_element<Position2>().m, position2.m);
EXPECT_EQ(t.get_element<Position3>().m, position3.m);
}
TEST(Variadic_templatesTest, SetElement) {
std::string id{ "ID123" };
EnableValve enableValve{ "TRUE" };
Variadic_datablock<EnableValve> t{
id,
enableValve,
};
EXPECT_EQ(t.get_element<EnableValve>().m, enableValve.m);
enableValve.m = "FALSE";
t.set_element(enableValve);
EXPECT_EQ(t.get_element<EnableValve>().m, enableValve.m);
}
TEST(Variadic_templatesTest, TestIOstream) {
std::string input = {
R"( ID123.EnableValve := FALSE;
ID123.Position1 := 0;
ID123.Position2 := 2;
ID123.Position3 := 9;
)"
};
std::istringstream ist{ input };
Variadic_datablock<EnableValve, Position1, Position2, Position3> t;
ist >> t;
EXPECT_EQ(t.get_id(), "ID123");
EXPECT_EQ(t.get_element<EnableValve>().m, "FALSE");
EXPECT_EQ(t.get_element<Position1>().m, "0");
EXPECT_EQ(t.get_element<Position2>().m, "2");
EXPECT_EQ(t.get_element<Position3>().m, "9");
std::ostringstream ost;
ost << t;
EXPECT_EQ(ost.str(), input);
}
Variadic_datablock.h
#pragma once
#include "Read_from_line.h"
#include <tuple>
#include <iostream>
#include <string>
#include <vector>
#include <typeinfo>
template<typename ...T>
class Variadic_datablock
/*
Requires elements in T... are unique (not repeat of types)
*/
{
public:
Variadic_datablock() = default;
explicit Variadic_datablock(std::string id, T... args)
:m_id{ std::move(id) },
m_data{ std::move((std::tuple<T...>(args...))) }
{
}
std::string get_id() const
{
return m_id;
}
protected:
template<typename Type>
Type get_element() const
{
return std::get<Type>(m_data);
}
template<typename Type>
void set_element(Type a)
{
std::get<Type>(m_data) = a;
}
std::tuple<T...> get_data() const
{
return m_data;
}
private:
std::string m_id{};
std::tuple<T...> m_data{};
template<typename ...T>
friend std::ostream& operator<<(std::ostream& os,
const Variadic_datablock<T...>& obj);
template<typename ...T>
friend std::istream& operator>>(std::istream& is,
Variadic_datablock<T...>& obj);
};
template<class Tuple, std::size_t n>
struct Printer {
static std::ostream& print(
std::ostream& os, const Tuple& t, const std::string& id)
{
Printer<Tuple, n - 1>::print(os, t, id);
auto type_name =
extract_type_name(typeid(std::get<n - 1>(t)).name());
os << " " << id << "." << type_name << " := "
<< std::get<n - 1>(t) << "; " << '\n';
return os;
}
};
template<class Tuple>
struct Printer<Tuple, 1> {
static std::ostream& print(
std::ostream& os, const Tuple& t, const std::string& id)
{
auto type_name =
extract_type_name(typeid(std::get<0>(t)).name());
os << " " << id << "." << type_name << " := "
<< std::get<0>(t) << "; " << '\n';
return os;
}
};
template<class... Args>
std::ostream& print(
std::ostream& os, const std::tuple<Args...>& t, const std::string& id)
{
Printer<decltype(t), sizeof...(Args)>::print(os, t, id);
return os;
}
template<typename ...T>
std::ostream& operator<<(
std::ostream& os, const Variadic_datablock<T...>& obj)
{
print(os, obj.m_data, obj.m_id);
return os;
}
template<class Tuple, std::size_t n>
struct Reader {
static std::istream& read(
std::istream& is, Tuple& t, std::string& last_id)
{
Reader<Tuple, n - 1>::read(is, t, last_id);
auto id = extract_id(is);
if (!is_expected_id(is, id, last_id)) {
return is;
}
last_id = id;
auto type_name = extract_type_name(is);
auto expected_name =
extract_type_name(typeid(std::get<n - 1>(t)).name());
if (!is_expected_name(is, type_name, expected_name)) {
return is;
}
dischard_fill(is);
// can we do this better and extract into type of
// actual dispatched tuple?
auto s = extract_line_value_type(is);
// prefered: std::get<n - 1>(t) = s but not possible?
std::get<n - 1>(t).insert(s);
return is;
}
};
template<class Tuple>
struct Reader<Tuple, 1> {
static std::istream& read(
std::istream& is, Tuple& t, std::string& last_id)
{
auto id = extract_id(is);
if (!is_expected_id(is, id, last_id)) {
return is;
}
last_id = id;
auto type_name = extract_type_name(is);
auto expected_name =
extract_type_name(typeid(std::get<0>(t)).name());
if (!is_expected_name(is, type_name, expected_name)) {
return is;
}
dischard_fill(is);
// can we do this better and extract into the type of
// actual dispatched tuple?
// typeid(std::get<0>(t)) s = extract_line_value_type(is); ?
auto s = extract_line_value_type(is);
// prefered: std::get<0>(t) = s but not possible?
std::get<0>(t).insert(s);
return is;
}
};
template<class... Args>
std::istream& read(
std::istream& is, std::tuple<Args...>& t, std::string& last_id)
{
Reader<decltype(t), sizeof...(Args)>::read(is, t, last_id);
return is;
}
template<typename ...T>
std::istream& operator>>(std::istream& is,
Variadic_datablock<T...>& obj)
{
std::tuple<T...> tmp{};
read(is, tmp, obj.m_id);
obj.m_data = std::move(tmp);
return is;
}
Read_from_line.h
#pragma once
#include <iosfwd>
#include <string>
std::string extract_id(std::istream& is);
bool is_expected_id(std::istream& is,
const std::string& id, const std::string& expected_id);
std::string extract_type_name(std::istream& is);
bool is_expected_name(std::istream& is,
const std::string_view& name, const std::string_view& expected_name);
void dischard_fill(std::istream& is);
std::string extract_line_value_type(std::istream& is);
std::string erase_whitespace_in_begin(const std::string& s);
std::string extract_type_name(const std::string& typeid_result);
Read_from_line.cpp
#include "Read_from_line.h"
#include <algorithm>
#include <iostream>
#include <sstream>
std::string extract_id(std::istream& is)
{
std::string id; // id e.g. K101 PI108
std::getline(is, id, '.');
id = erase_whitespace_in_begin(id);
return id;
}
bool is_expected_id(std::istream& is,
const std::string& id, const std::string& expected_id)
{
if (expected_id == std::string{}) {
return true;
}
if (id != expected_id) {
is.setstate(std::ios::failbit);
return false;
}
return true;
}
std::string extract_type_name(std::istream& is)
{
std::string type_name; // data e.g. DeviceType
is >> type_name;
return type_name;
}
bool is_expected_name(std::istream& is,
const std::string_view& name, const std::string_view& expected_name)
{
if (name != expected_name) {
is.setstate(std::ios::failbit);
return false;
}
return true;
}
void dischard_fill(std::istream& is)
{
std::string fill; // fill ":="
is >> fill;
}
std::string extract_line_value_type(std::istream& is)
{
std::string value; // value 10
std::getline(is, value, ';');
value = erase_whitespace_in_begin(value);
return value;
}
std::string erase_whitespace_in_begin(const std::string& s)
{
std::string ret = s;
ret.erase(0, ret.find_first_not_of(" \n\r\t"));
return ret;
}
std::string extract_type_name(const std::string& typeid_result)
{
std::string ret = typeid_result;
// case normal name class Test
if (ret.find('<') == std::string::npos) {
std::istringstream ist{ ret };
ist >> ret;
ist >> ret;
return ret;
}
// Case using Test = Test2<struct TestTag>
{
std::istringstream ist{ ret };
std::getline(ist, ret, '<');
std::getline(ist, ret, '>');
}
{
std::istringstream ist2{ ret };
ist2 >> ret; // read name such as struct or class
ist2 >> ret; // get typenameTag
}
ret.erase(ret.find("Tag"));
if (ret.find("EventMask") != std::string::npos) {
std::replace(ret.begin(), ret.end(), '_', '.');
}
return ret;
}
Fake_types.h
#pragma once
/*
Fake types used to test the template class.
*/
#include <iostream>
#include <string>
template<typename Tag>
struct Faketype {
std::string m;
void insert(std::string(s)) {
m = s;
}
};
template<typename Tag>
std::istream& operator>>(std::istream& is, Faketype<Tag>& obj)
{
is >> obj.m;
return is;
}
template<typename Tag>
std::ostream& operator<<(std::ostream& os, const Faketype<Tag>& obj)
{
os << obj.m;
return os;
}
using Position1 = Faketype<struct Position1Tag>;
using Position2 = Faketype<struct Position2Tag>;
using Position3 = Faketype<struct Position3Tag>;
// Only Special case. The file has EventMask.Address1
using EventMask_Address1 = Faketype<struct EventMask_Address1Tag>;
struct EnableValve
{
std::string m;
void insert(std::string(s)) {
m = s;
}
};
std::istream& operator>>(std::istream& is, EnableValve& obj)
{
is >> obj.m;
return is;
}
std::ostream& operator<<(std::ostream& os, const EnableValve& obj)
{
os << obj.m;
return os;
}
Please review, sorted by priority:
The Variadic_datablock class:
Is it a good design? What can be improved? Is it possible to change std::get<n - 1>(t).insert(s); to std::get<n - 1>(t) = s in the Reader?
Please let me know any smells. It's the first time I have used variadic templates. I left some comments in the template which indicate smells, but I don't know how to fix them.
The unit test for the Variadic_datablock class:
Are they good tests? Easy to understand? Is there something else you would test?
Can we get rid of the protected / public hack?
Read_from_line.h: This file provides helper functions for the overloaded istream operator of the Variadic_datablock.
Do not review:
Fake_types.h: It is a helper to emulate some datatypes, because the real datatypes I use in my application are more complicated and I wanted to simplify to focus on the template
edit: Let me know if you need additional information to review this.
Answer: Variadic_datablock:
class Variadic_datablock
/* Requires elements in T... are unique (not repeat of types) */
We could enforce this requirement with std::enable_if or a static_assert in the class, combined with some template-metaprogramming.
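For instance, a uniqueness check might look like the following sketch (it assumes C++17, which the question's tags suggest is available, for `std::bool_constant` and fold expressions):

```cpp
#include <type_traits>

// Compile-time check that a parameter pack contains no repeated types.
template <typename...>
struct are_unique : std::true_type {};

template <typename T, typename... Rest>
struct are_unique<T, Rest...>
    : std::bool_constant<(!std::is_same_v<T, Rest> && ...) &&
                         are_unique<Rest...>::value> {};

// Inside the class body:
//   static_assert(are_unique<T...>::value,
//                 "Variadic_datablock: element types must be unique");
```

This turns a violation of the documented requirement into a clear compile error instead of the cryptic one `std::get<Type>` would produce.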
Should an empty parameter pack be allowed? It looks like this would break the print / read functions, so we should probably check for that too.
What's the purpose of the protected member functions? Do we actually need inheritance or could we make everything public except m_id? That would make the class a lot easier to test. Access control in C++ is best used to prevent breaking class-invariants and hide complex internal functionality. The only invariant here seems to be that the ID shouldn't be changed after creation.
The constructor doesn't need to create a temporary tuple. We can move the arguments directly into m_data:
explicit Variadic_datablock(std::string id, T... args):
m_id{ std::move(id) },
m_data{ std::move(args)... }
{
}
Printer:
There's some unnecessary duplication. We could definitely abstract this bit into a print_element(os, std::get<n - 1>(t));
auto type_name =
extract_type_name(typeid(std::get<n - 1>(t)).name());
os << " " << id << "." << type_name << " := "
<< std::get<n - 1>(t) << "; " << '\n';
return os;
Since the Printer, Reader classes and the print and read functions aren't supposed to be used directly by the user, they could be placed in a detail (or similarly named) namespace.
typeid(x).name() is implementation defined. Several different types may have the same name, and the name can even change between invocations of the same program. In other words, it's not something we should use for serialization. I'd suggest adding a static const std::string data-member to each element class.
Reader:
Same issues as Printer.
It would be reasonable to merge the extract_id and is_expected_id into one function. This means we don't return state that may or may not be valid and then have to pass it into a separate function to check. Sticking with the input operator conventions, we get: std::istream& extract_id(std::istream& is, std::string& id, std::string const& expected_id);. We can use the state of the stream to indicate success / failure, and don't need the extra boolean. e.g.:
std::string id;
if (!extract_id(is, id, last_id)) // checks stream fail / bad bits (eof will be caught at next read)
return is;
One test-case with valid input calling the high-level stream operator is not enough to properly confirm the behavior. We need to check the behavior of the individual functions and think about edge cases. For example, for the extract_id function, one might expect the following:
TEST(Test_Reader, IStreamWithValidIDAndDot) { ... }
TEST(Test_Reader, IStreamWithOnlyDot) { ... }
TEST(Test_Reader, IStreamWithIDAndNoDot) { ... }
TEST(Test_Reader, EmptyIStream) { ... }
TEST(Test_Reader, IStreamWithValidIDAndDotAndBadbitSet) { ... }
TEST(Test_Reader, IStreamWithValidIDAndDotAndFailitSet) { ... }
TEST(Test_Reader, IStreamWithValidIDAndDotAndEofbitSet) { ... }
discard, not dischard.
Extra parentheses in FakeType::insert() and EnableValve::insert():
void insert(std::string(s)) {
m = s;
}
It's fine to take the argument by value, but we can then move it into place:
void insert(std::string s) {
m = std::move(s);
}
We have an operator>> for the FakeTypes, but we aren't using it?
I think we only need to set last_id once for a given datablock. | {
"domain": "codereview.stackexchange",
"id": 33510,
"tags": "c++, template, c++17, variadic"
} |
Checkup for a block of memory, containing a repeated single byte | Question: I was looking at this question on SO, because I needed something similar. There are eight answers to it, and I wasn't satisfied with any of them - so I've written my own solution (please see below). It's in C++, so I decided not to add the ninth answer to the C-oriented question.
The solution tries to use multibyte comparison, which is presumably supported by hardware. I typically call the compare function with unsigned long as the template parameter. Byte-by-byte comparison is needed only at the end, and only if the length of the memory block is not divisible by sizeof(T).
template <typename T>
inline auto repeatedByte(unsigned char const B)
{
T res = 0;
memset(&res, B, sizeof(T));
return res;
}
template <typename T>
auto compare(const void* const PTR, unsigned char const B, unsigned const N)
{
if (N > 0)
{
auto const value = repeatedByte<T>(B);
auto const n0 = N / sizeof(T);
auto i0 = 0U;
auto ptr0 = reinterpret_cast<const T*>(PTR);
while (i0 < n0 && *ptr0 == value) {++i0; ++ptr0;}
if (i0 >= n0)
{
if (N % sizeof(T) > 0)
{
auto i1 = n0 * sizeof(T);
auto ptr1 = reinterpret_cast<const unsigned char*>(ptr0);
while (i1 < N && *ptr1 == B) {++i1; ++ptr1;}
return (i1 >= N);
}
else
{
return true;
}
}
else
{
return false;
}
}
else
{
return false;
}
}
UPDATE. I've implemented all of @GSliepen's suggestions on how to improve this code - especially the one about data alignment, which looks most critical. All the updates can be found in this Github repository.
Answer: This doesn't account for alignment
Consider that PTR might not be aligned correctly for a T. You have to account for this first, for example by comparing the first few bytes individually until PTR is suitably aligned.
Depending on T and your CPU architecture, misaligned reads might either work fine, work but be much slower, crash your program, or worst of all: give you the wrong results.
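For example, a sketch of the idea, keeping the question's interface (the `memcpy` for the wide reads is my own addition; it also sidesteps strict-aliasing problems with the `reinterpret_cast` in the original):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

template <typename T>
bool compare(const void* ptr, unsigned char b, std::size_t n)
{
    auto p = static_cast<const unsigned char*>(ptr);
    // 1. Leading bytes, one at a time, until p is suitably aligned for T.
    while (n > 0 && reinterpret_cast<std::uintptr_t>(p) % alignof(T) != 0) {
        if (*p != b) return false;
        ++p; --n;
    }
    // 2. Aligned middle part, one T-sized chunk at a time.
    T pattern;
    std::memset(&pattern, b, sizeof(T));
    while (n >= sizeof(T)) {
        T chunk;
        std::memcpy(&chunk, p, sizeof(T));  // well-defined even without a T* alias
        if (chunk != pattern) return false;
        p += sizeof(T); n -= sizeof(T);
    }
    // 3. Trailing bytes.
    while (n > 0) {
        if (*p != b) return false;
        ++p; --n;
    }
    return true;
}
```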
Remove unnecessary if-statements
It's very unlikely anyone will call your function with N set to 0. The if-statement at the start is evaluated every time the function is called, so it wastes a few unnecessary cycles, and it makes the code look more complex. But your loops are already handling the case of N = 0 correctly, so I would just remove that outer if-statement entirely.
You might do the same with if (N % sizeof(T) > 0), although there it's of course more likely to avoid the inner loop, but consider that the loop itself would exit immediately after the first check of i1 < N, so there is very little to no performance loss in running it even if N % sizeof(T) == 0.
Finally, you can remove some unnecessary indentation by changing the order of the if and else branches, for example you can write:
if (i0 < n0) {
return false;
}
// no else
…
Use static_cast<>() where possible
static_cast<>() is the safest way to cast things. Use it where possible. Casting any pointer type to and from void* is possible with static_cast<>(), so there is no need for reinterpret_cast<>(). | {
"domain": "codereview.stackexchange",
"id": 45342,
"tags": "c++"
} |
How can (in Dirac's terminology) the product of two "real" linear operators be "not real"? | Question: I'm puzzled about a statement from Dirac's book, The principles of quantum mechanics, (§8, p.28):
As a simple example of this result, it should be noted that, if $\xi$
and $\eta$ are real, in general $\xi\eta$ is not real. This is an important difference from classical mechanics. However, $\xi\eta + \eta\xi$ is real, and so is $i(\xi\eta - \eta\xi)$. Only when $\xi$ and $\eta$ commute is $\xi\eta$ itself also real.
Here $\xi$ and $\eta$ are linear operators, so (I think) these could be represented as matrices.
However, how can a product of two real matrices be "not real"? What does Dirac mean when he says "is not real"? Is Dirac maybe talking about eigenvalues? So does Dirac mean that the product of two matrices with real eigenvalues could have imaginary eigenvalues? And does he want to say that a real symmetric matrix like $\xi\eta + \eta\xi$ and a purely imaginary antisymmetric matrix like $i(\xi\eta - \eta\xi)$ always have real eigenvalues?
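(To make my confusion concrete, here is a small numpy check of my own, using the Pauli matrices $\sigma_x$ and $\sigma_y$, which are both Hermitian:)

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Hermitian
sy = np.array([[0, -1j], [1j, 0]])               # Hermitian

def hermitian(a):
    return np.allclose(a, a.conj().T)

prod = sx @ sy                                   # equals i*sigma_z
print(hermitian(prod))                           # False: sx and sy do not commute
print(np.linalg.eigvals(prod))                   # purely imaginary: i and -i
print(hermitian(sx @ sy + sy @ sx))              # True
print(hermitian(1j * (sx @ sy - sy @ sx)))       # True
```

So here two matrices with real eigenvalues indeed have a product with imaginary eigenvalues, while the symmetrized combinations stay Hermitian.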
Answer: Flip back a page; Dirac uses real to mean Hermitian ($A^{\dagger} = A$) when talking about linear operators. So you can see that even if $A$ and $B$ are Hermitian, $AB$ won't be Hermitian unless they commute, whereas those linear combinations will be. | {
"domain": "physics.stackexchange",
"id": 12215,
"tags": "quantum-mechanics, operators, linear-algebra"
} |
How does the excess GPE of a mountain cause its base to melt? | Question: Weisskopf suggested that the Gravitational Potential Energy (GPE) of a vertical column of mountain rock of mass $m$ must be less than the latent heat of fusion $L_f$ of the rock, i.e. $$mgh<η L_fm $$
where $\eta$ is a dimensionless constant.
He arrived at this criterion solely by considering an energy balance in which excess GPE is transformed into the energy needed to melt the base of such a column of rock. However, does there exist a physical explanation of how this occurs, not just one based on the conservation of energy (although it is always true)?
In addition, the $h$ used in the GPE is the total height of the column. Should it not be $h$/2 instead (assuming uniform distribution of mass), considering the GPE of such a column is dependent on the height from its base to its centre of mass, not the full height?
References:
Weisskopf, V.F., “Physics of Mountains.” American Journal of Physics 54, 871 (1986).
Faraoni, V, "Alpine Physics." World Scientific, (2019).
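(For scale: rearranging the inequality gives $h < \eta L_f/g$. A quick back-of-the-envelope of my own, with illustrative numbers not taken from the papers:)

```python
g = 9.8            # m/s^2
L_f = 3e5          # J/kg, order of magnitude for the latent heat of fusion of rock
h_max = L_f / g    # bound on mountain height with eta = 1
print(h_max / 1e3) # ~30 km; eta < 1 pushes this down toward the ~10 km Everest scale
```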
Answer:
However, does there exist a physical explanation of how this occurs, not just one based on the conservation of energy
From Weisskopf's response to comments:
I have used the melting heat...as a measure of the energy...necessary to induce plastic flow.
In other words, viscous flow is broadly similar to melting because a fair fraction of the bonds are broken in each case; we expect (and find) that more refractory materials better resist creep. This is why deformation mechanism maps are commonly plotted using the homologous temperature (figure omitted).
Weisskopf further emphasizes that the broad equivalence is proposed as an order-of-magnitude comparison only. | {
"domain": "physics.stackexchange",
"id": 98584,
"tags": "thermodynamics, energy, energy-conservation, earth, geophysics"
} |
How to handle OS Dependant manifest.xml? | Question:
Hi all,
I am working with a dual gnulinux/xenomai system. The switch from one to the other is done like Orocos does, with the "OROCOS_TARGET" env variable. But when I have a package depending on another, I need to define two different manifests, for example:
<export>
<cpp cflags="-L${prefix}/lib -larp_core-gnulinux"/>
</export>
or
<export>
<cpp cflags="-L${prefix}/lib -larp_core-xenomai"/>
</export>
How could I handle this ?
Originally posted by Willy Lambert on ROS Answers with karma: 352 on 2011-11-08
Post score: 1
Answer:
You can restrict a specific set of flags using the os parameter of the <cpp> tag, like this:
<cpp os="osx" cflags="-I${prefix}/include" lflags="-L${prefix}/lib -Wl,-rpath,${prefix}/lib -lrosthread -framework CoreServices"/>
For more details, see the Manifest XML tags reference.
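Applied to the question's two targets, that could look like the following (a hypothetical sketch — it assumes rospack actually reports gnulinux and xenomai as OS names, which is worth verifying on your system before relying on it):

```xml
<export>
  <cpp os="gnulinux" cflags="-L${prefix}/lib -larp_core-gnulinux"/>
  <cpp os="xenomai"  cflags="-L${prefix}/lib -larp_core-xenomai"/>
</export>
```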
Originally posted by joq with karma: 25443 on 2011-11-08
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Willy Lambert on 2011-11-08:
thans for this usefull link. I tagged it preferred because it is the ROS way. But I'll be use the orocos one as my package is an orocos one :)
Comment by Willy Lambert on 2012-04-16:
do you know how ROS detects the current OS ? more precisely, how does he knows between gnulinux and xenomai the one to choose ? | {
"domain": "robotics.stackexchange",
"id": 7230,
"tags": "ros, manifest.xml"
} |
using simple autoencoder for feature selection | Question: I am using a simple autoencoder to extract informative features, and I have multiple questions:
I know that the extracted features will be a linear combination of the original features, so I assume that a feature with a larger mean weight (i.e., the largest contribution to the formation of the new features) is important and should be selected, but I don't know whether this is true.
The second thing is that I want to apply grid search to find the optimal hyperparameters for the model, but I can't get it to work; I'd appreciate any help with this.
Answer:
Autoencoders normally aren't linear models. If you make them linear (i.e. you create a shallow Autoencoder with linear activations) then you get exactly a PCA result. The power of Neural Networks is their non-linearity, if you want to stick with linearity go for PCA imho.
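On the second question: a plain hand-rolled loop over configurations is often all you need. A minimal sketch of my own (using scikit-learn's MLPRegressor trained to reconstruct its input as a stand-in autoencoder — the layer sizes and alphas are arbitrary illustrative choices):

```python
from itertools import product

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.RandomState(0)
X = rng.randn(300, 6)
X = np.hstack([X, X[:, :3] + 0.05 * rng.randn(300, 3)])  # add redundant features

X_tr, X_val = train_test_split(X, test_size=0.3, random_state=0)

best = None
for size, alpha in product([2, 4, 6], [1e-4, 1e-2]):
    ae = MLPRegressor(hidden_layer_sizes=(size,), alpha=alpha,
                      max_iter=3000, random_state=0)
    ae.fit(X_tr, X_tr)                      # target = input: an autoencoder
    err = np.mean((ae.predict(X_val) - X_val) ** 2)
    if best is None or err < best[0]:
        best = (err, size, alpha)

print("best (val MSE, hidden units, alpha):", best)
```

The same loop structure works with any framework; libraries like hyperopt just replace the exhaustive `product(...)` with a smarter search.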
Keep a Train-Validation-Test set split, and try different configurations of hyperparams checking their performance on Validation data. Alternatively there are many libraries, such as hyperopt, that let you implement more sophisticated Bayesian hyperparameter searches, but unless you want to be published at a conference or win some competition it's a bit overkill. If you're still interested, the internet is plenty of tutorials like this one. | {
"domain": "datascience.stackexchange",
"id": 11833,
"tags": "python, neural-network, feature-selection, autoencoder, grid-search"
} |
Heating a black body | Question: By definition, a black body has absorptivity = emissivity = 1. This means the black body radiates all the energy it absorbs.
Does this mean the black body can't be heated?
Answer: Every body in thermal equilibrium radiates the same amount of energy that it receives, otherwise its temperature would change until it attained equilibrium. This is not unique to black bodies.
Suppose an object, not necessarily a black body, is at a temperature $T_1$ and its surroundings are at a temperature $T_2$, then the rate of radiation by the object is:
$$ j_1 = \epsilon \sigma T_1^4 $$
and the rate its surrounds heat it is:
$$ j_2 = \epsilon \sigma T_2^4 $$
So the net heat flow is:
$$ j = j_2 - j_1 = \epsilon \sigma(T_2^4 - T_1^4) $$
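Plugging in illustrative numbers of my own (a perfect black body with $\epsilon = 1$ at $T_1 = 300\,\mathrm{K}$ in surroundings at $T_2 = 350\,\mathrm{K}$):

```python
sigma = 5.67e-8                      # Stefan-Boltzmann constant, W m^-2 K^-4
eps, T1, T2 = 1.0, 300.0, 350.0
j = eps * sigma * (T2**4 - T1**4)    # net flux into the body
print(j)                             # ~392 W/m^2 absorbed in excess of emission
```

The positive net flux means the body heats up until $T_1 = T_2$.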
If the internal and external temperatures aren't equal there will be a net heat flow and the object will heat or cool. The only special thing about a black body is that the emissivity, $\epsilon$, is equal to one, so the black body reaches equilibrium faster than any other type of body. | {
"domain": "physics.stackexchange",
"id": 13087,
"tags": "thermodynamics, thermal-radiation, blackbody"
} |
How can SNPs be well-defined if chromosome lengths differ? | Question: I don't have a background in biology, but am trying to learn more about genetics; I have watched many videos about SNPs and still feel confused about the concept of a single nucleotide polymorphism. According to Wikipedia, a SNP "is a germline substitution of a single nucleotide at a specific position in the genome". What I am most puzzled by is the concept of a 'specific position in the genome': if chromosome lengths differ, how can we talk of a 'specific position'?
For example, if we have two arrays of items that are of the same length: [A,C,C,C,A,T,G] and [A,C,G,C,A,T,G], then it is straightforward to see that the entries at the third position of each array are different. But with chromosomes, their lengths differ, so it might look like: [A,C,C,C,A,T,G] and [A,C,C,C,A,A,T,G]; now how can we say what corresponds to what? Do we say there is a SNP at the fifth/sixth base pair?
What about something more different like [A,C,C,C,A,A,T,G] and [C,C,A,A]? How do we know what corresponds to what? What are the SNPs?
What if two individuals each have a chromosome 1 that differs by many base pairs? How do we know what corresponds to what?
Answer: The reason we can compare SNPs between different people is because sequencing reads are aligned to a reference genome with a fixed set of coordinates for each chromosome (local alignment), rather than aligning the entire chromosomes of two different people (global alignment). For instance, 10:3424234 refers to the 3424234th base-pair on chromosome 10 on the reference genome.
Say we wish to compare this particular SNP between two different people; we can sequence both individuals and reads covering the 3424234th base-pair on chromosome 10 will be produced and aligned to the same part of the reference genome; we are certain that the reads originate from the same part of the genome in both individuals. Then, it's trivial to look at this position and see whether it differs between the two individuals. It therefore doesn't matter if one individual has chromosome 10 twice the length of the other (although this would never be the case in reality), as long as the read originates from the same part of the genome in both individuals, they can be compared. | {
"domain": "biology.stackexchange",
"id": 11801,
"tags": "genetics"
} |
expected number of sets generated by greedy set cover ? | Question: Most of the analyses of greedy set cover that I have seen study the approximation ratio. However, assume that each element of $T$ belongs, independently with a constant probability $p$, to each of the sets of $S$ (where $S = \{S_1, ...., S_k\}$). The question then is: what is the expected number of sets generated by greedy set cover in this case?
Answer:
Given that a cover exists, greedy will return a cover of expected size $O(p^{-1}\log n)$.
As long as $k$ is not too large, with high probability every cover has size $\Omega(p^{-1}\log n)$
This implies that (for $k$ not too large) greedy gives an $O(1)$-approximation with probability $1-\delta$, for any constant $\delta>0$.
Here is the upper bound, followed by the lower bound.
Upper bound
Conditioning on the event that there exists a set cover,
the greedy set-cover algorithm returns a set cover with expected size
$O(p^{-1}\log n)$.
Proof of upper bound.
Condition on the event that a set cover exists.
Assume without loss of generality that $k\ge (12/p)\ln(n)$
(otherwise, since greedy chooses at most $k$ sets, we are done).
We prove that there is a fractional set cover $X$ of expected size at most
$$\frac{2}{p} + n\exp(-pk/12).$$
By the assumption on $k$ this is $$O(1/p).$$
The upper bound will follow, because
(as is well-known) greedy returns a cover of size $O(\log n)$
times the size of any fractional set cover (and so, here, $O(p^{-1}\log n)$)
(see here or Chvatal).
Define the fractional cover $X$ in two stages.
Give each set $s$ weight $X_s = 2/(pk)$.
For each element $e$ such that $\sum_{e\in s} X_s < 1$,
choose any set $s$ containing $e$ and raise $X_s$ enough to fully cover $e$.
This clearly gives a fractional set cover. (Step 2 is well defined,
because we are conditioning on the event that there is a set cover,
that is, that each element is in some set.)
To finish we bound the expected total weight of $X$, that is, $\sum_s X_s$.
Stage 1 contributes exactly $k \cdot 2/(pk) = 2/p$ to the total weight.
Stage 2 contributes, in expectation, at most $n\exp(-pk/12)$,
because each of the $n$ elements has probability at most
$\exp(-pk/12)$ of being covered with weight less than 1
by stage 1.
(To verify this, fix any element.
The element is left insufficiently covered
iff it is contained in fewer than $kp/2$ sets.
The expected number of sets covering the element
is $kp$, so by a standard Chernoff bound the
probability that the number falls below $kp/2$
is at most $$\exp(-(1/2)^2 k p/3) = \exp(-kp/12).$$
This ignores the conditioning, but the conditioning
only decreases the chance that the element is contained
in fewer than $kp/2$ sets.)
QED
As an aside, note that if $k$ is large enough, then greedy will almost certainly return a cover of constant size. (E.g. if $k$ is $2n\cdot 2^n$ and $p=1/2$, then won't all possible subsets be present with high probability? And in that case greedy will return just one set of course.)
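For reference, the greedy algorithm analyzed above can be sketched in a few lines (a Python sketch, not part of the original argument; the random instance mirrors the model in the question, where each element belongs to each set independently with probability $p$):

```python
import random

def greedy_set_cover(universe, sets):
    """Repeatedly pick the set covering the most still-uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda s: len(s & uncovered))
        if not best & uncovered:
            return None  # some element lies in no set: no cover exists
        chosen.append(best)
        uncovered -= best
    return chosen

# Random instance: element e belongs to set S_i with probability p
random.seed(0)
n, k, p = 100, 60, 0.3
universe = range(n)
sets = [{e for e in universe if random.random() < p} for _ in range(k)]
cover = greedy_set_cover(universe, sets)
```

Averaging `len(cover)` over many random instances, one should see cover sizes on the order of $p^{-1}\ln n$, consistent with the bounds above.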
Lower bound
Assuming $p\le 1/2$, $k\le \exp(n^{1-\epsilon})$ for some $\epsilon\in [0,1/2]$,
and $s = \min((\epsilon/4p)\ln n, n^{\epsilon/2})$,
then the probability that there exists a set cover of size $s$ or less is $o(1)$.
Of course this implies that greedy does not return a set cover of size less than $s$.
The proof is probabilistic, using direct calculation and the naive union bound.
Hopefully there are no mistakes in the calculations.
Proof.
Fix $p$, $k$, $s$, $\epsilon$ as above.
The number of ways of choosing $s$ sets from the $k$ sets available
is ${k\choose s}$.
For any fixed collection of $s$ sets, the chance that it covers all
$n$ elements is $(1-(1-p)^s)^n$.
Combining these two observations, the expected number of size-$s$ covers among the $k$ sets is
$${k\choose s} (1-(1-p)^s)^n
~\le~ k^s \exp(-n(1-p)^s)
~\le~ \exp\big(s\ln(k) - n e^{-2sp}\big).$$
To complete the proof, one shows that the right-hand side above is $o(1)$
(under the assumptions on $k$ and $s$).
To do that, it suffices to show that
$n e^{-2sp} \rightarrow \infty$, and
$s\ln k \le (1/e) n e^{-2sp}$.
The first of these follows from the assumption on $s$.
The second reduces (taking logarithms) to
$$1+2ps + \ln s \le \ln(n) - \ln\ln k.$$
By the assumption on $k$ this reduces to
$$(1+2ps) + \ln s \le \epsilon \ln n.$$
This holds because the assumptions on $s$ imply that each of the two summands
on the left is at most $(\epsilon/2)\ln n$.
QED | {
"domain": "cstheory.stackexchange",
"id": 1757,
"tags": "ds.algorithms, pr.probability, set-cover, greedy-algorithms"
} |
Why is the weight of the hanging part of the chain equal to the friction on the remaining part of the chain kept on the table? | Question:
In order to prevent the chain from slipping, the friction on the part of the chain kept on the table should be equal to the weight of the hanging part of the chain. Why is that so?
Answer: Consider the horizontal forces acting on the part of the chain kept on the table. There are two: friction $F_f$ (acting to the left) and tension $F_T$ (acting to the right). For the forces to be balanced, it is necessary that $|F_f| = |F_T|$.
Now let's consider the vertical forces acting on the hanging part of the chain. Say this part of the chain has mass $m_2$. There are two forces: gravity $m_2g$ (acting downwards) and tension $F_T$ (acting upwards). For the forces to be balanced, it is necessary that $|m_2g| = |F_T|$.
Why do these tensions have equal magnitude? Essentially, the connection between the two parts of the chain "redirects" the tension force along the direction of the chain.
Thus $|F_f| = |m_2g|$. | {
"domain": "physics.stackexchange",
"id": 71093,
"tags": "newtonian-mechanics, friction"
} |
program that sends an email to everyone registered in the database | Question: I'm 16 years old and no one to help me, no one to give any advice or constructive criticism, I'm aimless. I'll leave the link to my last code (github), a program that sends an email to everyone registered in the database. I would like some advice and project/content ideas for me to evolve. Evaluate my project sincerely.
Class that connects to the database and sends the email:
import java.util.Properties;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.mail.Authenticator;
import javax.mail.Message;
import javax.mail.PasswordAuthentication;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;
import javax.swing.JOptionPane;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
public class JavaMailUtil {
Connection conn;
PreparedStatement ps;
ResultSet rs;
String sender = "testemailforjava16@gmail.com";
String senderPassword = "*******";
private static Connection connectionToMySql() throws SQLException {
Connection conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/emailproject?useTimezone=true&serverTimezone=UTC", "root", "");
return conn;
}
public void sendEmail(String title, String text) {
conn = null;
ps = null;
rs = null;
try {
conn = connectionToMySql();
ps = (PreparedStatement) conn.prepareStatement("select * from users");
rs = ps.executeQuery();
while (rs.next() ) {
Properties prop = new Properties();
prop.put("mail.smtp.auth", true);
prop.put("mail.smtp.starttls.enable", "true");
prop.put("mail.smtp.host", "smtp.gmail.com");
prop.put("mail.smtp.port", "587");
Session session = Session.getInstance(prop, new Authenticator() {
@Override
protected PasswordAuthentication getPasswordAuthentication() {
return new PasswordAuthentication(sender, senderPassword);
}
});
Message message = prepareMessage(session, sender, rs.getString(2), text, title);
Transport.send(message);
}
} catch (Exception e) {
JOptionPane.showMessageDialog(null, "Error!");
e.printStackTrace();
return;
} finally {
try {
if (conn != null) {
conn.close();
}
if (ps != null) {
ps.close();
}
if (rs != null) {
rs.close();
}
} catch (Exception e) {
e.printStackTrace();
}
}
JOptionPane.showMessageDialog(null, "Email successfully sent!");
}
private static Message prepareMessage(Session session, String sender, String recepient, String text, String title) {
try {
Message message = new MimeMessage(session);
message.setFrom(new InternetAddress(sender));
message.setRecipient(Message.RecipientType.TO, new InternetAddress(recepient));
message.setSubject(title);
message.setText(text);
return message;
} catch (Exception e) {
Logger.getLogger(JavaMailUtil.class.getName()).log(Level.SEVERE, null, e);
}
return null;
}
}
Swing Part:
package windows;
import java.awt.EventQueue;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JOptionPane;
import javax.swing.JTextField;
import project.JavaMailUtil;
import javax.swing.JButton;
import java.awt.event.ActionListener;
import java.awt.event.ActionEvent;
import java.awt.Font;
import javax.swing.JTextArea;
import javax.swing.SwingConstants;
public class MainWindow {
private JFrame frame;
private JTextField txtTitle;
/**
* Launch the application.
*/
public static void main(String[] args) {
EventQueue.invokeLater(new Runnable() {
public void run() {
try {
MainWindow window = new MainWindow();
window.frame.setVisible(true);
} catch (Exception e) {
e.printStackTrace();
}
}
});
}
/**
* Create the application.
*/
public MainWindow() {
initialize();
}
/**
* Initialize the contents of the frame.
*/
public void initialize() {
frame = new JFrame();
frame.setBounds(100, 100, 450, 300);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.getContentPane().setLayout(null);
frame.setLocationRelativeTo(null);
JLabel lbl1 = new JLabel("Title:");
lbl1.setFont(new Font("Tahoma", Font.PLAIN, 18));
lbl1.setBounds(10, 81, 43, 26);
frame.getContentPane().add(lbl1);
JLabel lbl2 = new JLabel("Text:");
lbl2.setFont(new Font("Tahoma", Font.PLAIN, 18));
lbl2.setBounds(10, 133, 43, 26);
frame.getContentPane().add(lbl2);
txtTitle = new JTextField();
txtTitle.setHorizontalAlignment(SwingConstants.CENTER);
txtTitle.setFont(new Font("Tahoma", Font.BOLD, 15));
txtTitle.setBounds(54, 77, 342, 37);
frame.getContentPane().add(txtTitle);
txtTitle.setColumns(10);
JButton buttonSend = new JButton("Send Email");
buttonSend.setFont(new Font("Tahoma", Font.PLAIN, 16));
buttonSend.setBounds(269, 227, 127, 23);
frame.getContentPane().add(buttonSend);
JTextArea txtText = new JTextArea();
txtText.setFont(new Font("Tahoma", Font.BOLD, 15));
txtText.setBounds(54, 136, 342, 80);
frame.getContentPane().add(txtText);
txtText.setLineWrap(true);
txtText.setWrapStyleWord(true);
JLabel lblNewLabel = new JLabel("Automatic Sender");
lblNewLabel.setFont(new Font("Tahoma", Font.BOLD | Font.ITALIC, 23));
lblNewLabel.setBounds(95, 22, 243, 37);
frame.getContentPane().add(lblNewLabel);
JButton buttonUsers = new JButton("Users");
buttonUsers.setFont(new Font("Tahoma", Font.PLAIN, 16));
buttonUsers.setBounds(54, 227, 83, 23);
frame.getContentPane().add(buttonUsers);
// TODO
buttonSend.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
JavaMailUtil javaMailUtil = new JavaMailUtil();
buttonSend.setText("Sending...");
JOptionPane.showMessageDialog(null, "Sending... Wait the alert!");
javaMailUtil.sendEmail(txtTitle.getText(), txtText.getText());
buttonSend.setText("Send Email");
}
});
}
}
Link: https://github.com/DaviRibeiro-prog/JAVA/tree/main/EmailSenderProject/src
Answer: Welcome to the fantastic world of development.
There is this great acronym in software development: S.O.L.I.D. You can refer to it at any time to evaluate the quality of your code.
However, in a two-class project, many of these principles are overkill. But there is one that you should try to use at all stages:
"S" for "Single responsibility"
Every class and method should have one responsibility
Your JavaMailUtil is responsible for connecting to the database, retrieving the users and sending emails. This is a bit too much.
It would be better to extract some functionality into different classes. I see three main roles in your current code:
Connect to the database
Retrieve the users
Send an email
The Data Access Object (DAO) and the Factory are two design patterns that you can apply in your program.
The Factory can be used to get a connection to the database, while the DAO can be used to get a list of users from that connection. Finally, your JavaMailUtil can use that DAO to retrieve the users.
Another good principle is the "D" for "Dependency inversion".
The issue with the code above is that the JavaMailUtil has a strong dependency on your DAO to access the users. And, again, its role is to send emails, not retrieve users. A good solution is to pass the users as a parameter.
So you'll need a new class that will be responsible for retrieving the users and calling the sendEmail method. It can be your MainWindow, but ideally it is another class that receives the commands from the user-facing interface and translates them into actions on your program. Those classes are called controllers and are part of the Model View Controller pattern, which is widely used in user-facing applications.
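To make the separation of responsibilities concrete, here is a compact sketch of the roles described above (in Python for brevity, since the patterns are language-agnostic; all names are hypothetical, and a real Mailer would wrap the javax.mail calls instead of collecting messages in a list):

```python
class ConnectionFactory:
    """Single responsibility: produce database connections."""
    def get_connection(self):
        raise NotImplementedError

class UserDao:
    """Single responsibility: retrieve users through a connection."""
    def __init__(self, factory):
        self.factory = factory

    def find_all_emails(self):
        conn = self.factory.get_connection()
        return [row["email"] for row in conn.query("select * from users")]

class Mailer:
    """Single responsibility: deliver messages to recipients."""
    def __init__(self):
        self.sent = []  # stand-in for real SMTP delivery

    def send(self, recipients, title, text):
        for recipient in recipients:
            self.sent.append((recipient, title, text))

class SendEmailController:
    """Receives the UI command and orchestrates the DAO and the mailer."""
    def __init__(self, dao, mailer):
        self.dao, self.mailer = dao, mailer

    def on_send_clicked(self, title, text):
        self.mailer.send(self.dao.find_all_emails(), title, text)
```

The button listener in the view would then only call the controller, and each class can be tested in isolation by substituting fakes for the factory or the mailer.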
I admit that I did not provide direct advice on your code. But with those few paragraphs you have enough to learn and improve your program quite a bit. | {
"domain": "codereview.stackexchange",
"id": 41513,
"tags": "java, sql, email"
} |
Can the nonlinear hydrodynamic forces be neglected when a capsized ship is righting? | Question: Can the nonlinear hydrodynamic forces be neglected when a capsized ship is righting?
Thank you for answering my question.
Answer: Without some detailed information this is impossible to answer.
In general, your inclusion of non-linear hydrodynamic forces probably depends a lot on your righting moment and sea-state conditions. If the dominant forces are the linear ones (large righting moment), then smaller effects from friction, turbulence, etc. on an irregular structure are less important. However, if the linear forces are marginal due to the steepness/period/size of waves, strength of wind, low righting moment, etc., then they become more critical and could prevent the vessel from righting -- or at least from righting on the first roll or in a predictable way. | {
"domain": "physics.stackexchange",
"id": 17219,
"tags": "forces, fluid-dynamics"
} |
Defining total energy in weak interaction involving a neutrino | Question: This question was inspired by the brief introduction to neutrino oscillations given in a Particle Physics course I took recently and has been bothering me for a while now. I hope to find some clarification here.
To the best of our knowledge mass (energy) eigenstates for neutrinos are distinct from flavour eigenstates, in the sense that the states in one representation are always a superposition of at least two states in the alternative representation. This means that when we talk about, say, an electron neutrino, the particle's energy is not well defined.
Now, neutrinos only interact via weak force. I assume that in order for the associated vertex to make sense (e.g. in a Feynman Diagram representation), the incoming neutrino must exist in a well defined flavour eigenstate. This however would imply that its energy is NOT well defined.
If the above is correct, does it mean we can no longer define a total energy for the interaction?
A similar question can be asked for quarks but the fact that neutrinos only interact weakly makes it clearer in my opinion.
Answer: This might be for K McDonald, 2016 to answer properly, by balancing exuberant jitterbugging angels on the head of a pin. I'll just let you remind yourself of the 11 orders of magnitude of irrelevance of your question, which I have to assume your instructor on neutrino oscillations emphasized to you before he went on to recondite and subtler phenomena.
A wave packet of comoving neutrino mass eigenstates hits your detector at E ~ 5MeV; let's take only two species, with masses m and m', and energies E and E', mixing maximally, and take a common momentum p for simplicity. Your un-normalized minimal wavepacket then is just
$$
e^{ipx -iEt } + e^{ipx -iE't } = e^{ip(x-t)} (e^{-it\frac{m^2}{2p}}+e^{-it\frac{m'^2}{2p}}),
$$
where we have expanded $E\sim p+m^2/2p$ for relativistic neutrinos.
So, yes, there is a "slop" of $\Delta E \sim (m^2-m'^2)/2E$ in the
wavepacket, of order $10^{-4} \cdot 10^{-7}$ eV, for typical $\Delta m^2\sim 10^{-4}$eV$^2$. The packet's energy spread is thus $10^{-11}$eV.
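Plugging in the same illustrative numbers (a quick numeric check, not in the original answer):

```python
# Energy "slop" of the wavepacket: dE ~ (m^2 - m'^2) / (2E)
E_nu = 5e6              # neutrino energy in eV (5 MeV)
dm2 = 1e-4              # mass-squared splitting in eV^2
dE = dm2 / (2 * E_nu)   # energy spread in eV
print(f"{dE:.1e} eV")   # prints "1.0e-11 eV"
```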
This is what you wish to probe by multiplying it by Ls of hundreds and hundreds of kilometers, and monitoring changes in the cosine envelope of the wavepacket; these huge distances are there for a reason; so, ipso facto, it is nothing you could resolve in centimeters or meters, nay, 10s of meters in your detector, if you imagined you'd meaningfully capture it, somehow.
You recall that pre-fab terms such as "electron neutrino" are merely terms of convenience: you may write your fundamental vertices without ever naming, or knowing about, an electron neutrino. Sure, go ahead, "properly" compute three separate vertices in 3 separate Feynman diagrams with 3 different neutrino mass eigenstates and energies, and fold them into the PMNS matrix coefficients, and sum them, integrate them over suitable ranges, etc... but why? Part of what you presumably learned in your course is to choose your battles wisely and just do the simplest possible calculation, but not simpler. That's why most/all people opt for the above convenience. Why would you fuss about $10^{-11}$eVs?
Nevertheless, Kirk's paper mentioned gives you a magnificent bibliography of papers that dared fuss... | {
"domain": "physics.stackexchange",
"id": 39196,
"tags": "particle-physics, energy-conservation, feynman-diagrams, neutrinos, weak-interaction"
} |
Quantum Computing Research Papers, on puzzles or game theory | Question: Are there any research papers focused on implementing a game/puzzle or game theory in Quantum Computing?
Answer: To start, I would read "The next stage: quantum game theory" by E.W.Piotrowski, J. Sladkowski. While the paper is from 2003, the authors discuss how developments in quantum computation allow the extension of the scope of game theory. It includes some basic history as well as some basic ideas and recent developments in quantum game theory.
These same authors also wrote a paper entitled "Quantum Bargaining Games" in 2001, which I would think will also be useful in your research. It's part of a larger analysis they did of "quantum-like" descriptions of market economics, with roots in the then recently developed quantum game theory.
I think these two papers would be great starting points.
I would also check out James Wootton's "Using a simple puzzle game to benchmark quantum computers" in which he created a puzzle game called "Quantum Awesomeness". I recommend this primarily because it's slightly related but completely awesome. | {
"domain": "quantumcomputing.stackexchange",
"id": 372,
"tags": "resource-request, research"
} |
What defines a microbial species? | Question: I know that microbes are not capable of sexual reproduction, thus sorting them into species according to "groups that can interbreed and generate fertile offspring" should not apply.
Answer: Your question is relative to the species concept that you are using. Mayr's biological species concept (BSC) is based on the ability to interbreed; a process-based definition. Most biologists use it, but most taxonomists, who are the people who actually describe species, use some variation of the phylogenetic species concept. The phylogenetic species concept is not based on process, but on fixed differences. Fixed differences are evidence of a lack of interbreeding, but the concept is not explicitly based on reproduction. Fixed morphological differences have been used to classify microbes, most often in culture. But currently, microbes are most often classified with a combination of morphological and genetic characters. The small subunit rRNA is sequenced and put through an algorithm for estimating species based on genetic distances. In short, your question depends a lot on which species concept you are using, and whether you would accept a distance-based DNA definition. But species need not be defined reproductively. | {
"domain": "biology.stackexchange",
"id": 8130,
"tags": "microbiology, terminology, asexual-reproduction"
} |
Why is the magnetic field created? | Question: From Oersted experiment we know " When an electric current is passed through a conducting wire, a magnetic field is produced around it." An electric current is a flow of electric charge or electron. My question is "How does a flow of electron or electric charge create magnetic field?"
Answer: I would suggest watching this by Veritasium.
Suppose you are the charge, moving along a current-carrying wire at a distance d from it.
Now the electrons in the wire are, relative to you, moving backwards, and since Einstein proposed his laws of relativity, the electrons get a little squished (length-contracted) relative to you.
So you will see that the electrons are crowded together and the wire is no longer neutral relative to you: you will observe an electrostatic force. In the ground frame we call it the magnetic force. | {
"domain": "physics.stackexchange",
"id": 47180,
"tags": "electricity, magnetic-fields, electrons, electric-current, charge"
} |
import hector quadrotor in a different gazebo environment | Question:
Hi,
I want to import the hector quadrotor into a different gazebo environment. My question is: how can this be done? Also, what should be the format of the environment's CAD model? Alternatively, are there different gazebo environments to choose from?
Please help
Originally posted by vacky11 on ROS Answers with karma: 272 on 2017-03-14
Post score: 1
Original comments
Comment by gvdhoorn on 2017-03-15:
Could you clarify what you mean with "gazebo environments"? I first thought you meant "worlds" (ie: simulation models of the world), but then I thought you might be asking about different simulators (ie: alternatives for Gazebo, like V-REP and others).
Comment by vacky11 on 2017-03-15:
Yes. I meant worlds
Answer:
I have figured out the way of changing the gazebo environment (world) for hector quadrotor. The hector_quadrotor_tutorials itself comes with a variety of gazebo worlds. These can be found in src/hector_gazebo/hector_gazebo_worlds/launch.
Originally posted by vacky11 with karma: 272 on 2017-03-15
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Airuno2L on 2017-03-15:
Feel free to accept you're answer as the correct answer by clicking the check mark next to it.
Comment by vacky11 on 2017-03-15:
@Airuno2L since I dont have 10 points I am not able to accept my own answer.
Comment by vacky11 on 2017-03-15:
But I have closed the question.
Comment by gvdhoorn on 2017-03-15:
You now have 10+ karma points, so please accept your own answer.
Comment by Airuno2L on 2017-03-15:
I hooked you up ;)
Comment by vacky11 on 2017-03-15:
answer accepted :)
Comment by vacky11 on 2017-03-15:
thanks @Airuno2L :) | {
"domain": "robotics.stackexchange",
"id": 27313,
"tags": "ros-indigo, hector-quadrotor"
} |
fovis_ros error | Question:
Hi,
I'm working with turtlebot and kinect (hydro) in ubuntu 12.04. I integrated the visual odometry (fovis_ros).
So I launched turtlebot_bringup minimal.launch to get odom and imu topics
and launched openni to get the depth camera topic and fovis_hydro_openni.launch:
<launch>
<arg name="camera" default="camera" />
<node pkg="nodelet" type="nodelet" args="manager" name="nodelet_manager" />
<node pkg="nodelet" type="nodelet"
name="convert_openni_fovis"
args="load depth_image_proc/convert_metric nodelet_manager">
<remap from="image_raw" to="$(arg camera)/depth_registered/sw_registered/image_rect_raw"/>
<remap from="image" to="$(arg camera)/depth/image_rect"/>
</node>
<node pkg="fovis_ros" type="fovis_mono_depth_odometer" name="kinect_odometer" >
<remap from="/camera/rgb/image_rect" to="$(arg camera)/rgb/image_rect_mono" />
<remap from="/camera/rgb/camera_info" to="$(arg camera)/rgb/camera_info" />
<remap from="/camera/depth_registered/camera_info"
to="$(arg camera)/depth_registered/sw_registered/camera_info" />
<remap from="/camera/depth_registered/image_rect" to="$(arg camera)/depth/image_rect" />
<param name="approximate_sync" type="bool" value="True" />
</node>
</launch>
and finally i launched the robot_pose_ekf.launch which fuse the three topics (odom,imu,and vo) to publish the odom_combined topic. But i had this error:
[ERROR] [1431006366.365384721]: Covariance specified for measurement on topic vo is zero
[ERROR] [1431006366.392443095]: filter time older than vo message buffer
I think that the problem comes from the visual odometry, because before adding visual odometry the robot_pose_ekf worked fine: it fused odom and imu data and published odom_combined
Any help please
Originally posted by sophye_turtlebot on ROS Answers with karma: 53 on 2015-05-07
Post score: 0
Answer:
In the fovis_ros wiki it's stated that pose and twist covariance is not published for the mono depth odometer. That's why robot_pose_ekf is telling you that the covariance for the "vo" topic is zero. The library libfovis itself does not provide that information, and neither does the wrapper.
If you want, you can change fovis_ros/src/odometer_base.hpp to set a fixed covariance to do some tests.
Originally posted by Miquel Massot with karma: 1471 on 2015-05-13
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 21632,
"tags": "ros, navigation, visual-odometry, turtlebot, robot-pose-ekf"
} |
Does equality of molarity or molality determine constancy of boiling (or freezing) points of different solutions? | Question: I encountered a question among papers for JEE (2005) that demands completing a statement:
Equimolar solutions in the same solvent have _______
There are four options and the correct answers are
same boiling point
and
same freezing point
I don't think this is correct as the change in boiling and freezing point depends on molality and not molarity. Hence, according to the options, the one saying different boiling and freezing points seems to be correct to me.
However, such exams very rarely have mistakes which are not corrected later. Please clarify.
Answer: At low concentrations for which the ideal bp elevation/fp depression expressions usually apply, molarity is linearly proportional to molality; therefore the statements are equivalent.
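To see this near-proportionality numerically, here is a small sketch (hypothetical numbers; it assumes a dilute aqueous NaCl solution whose density stays close to water's, ~1.0 g/mL):

```python
def molarity(n_mol, v_liters):
    return n_mol / v_liters

def molality(n_mol, v_liters, molar_mass, density):
    """density in g/mL; solvent mass = solution mass minus solute mass."""
    solvent_kg = (density * v_liters * 1000 - n_mol * molar_mass) / 1000.0
    return n_mol / solvent_kg

M_NaCl, d = 58.44, 1.0  # g/mol; dilute-solution density taken as water's
for n in (0.001, 0.01, 0.1):  # mol of solute in 1 L of solution
    ratio = molality(n, 1.0, M_NaCl, d) / molarity(n, 1.0)
    print(n, round(ratio, 4))  # ratio stays close to 1 as n -> 0
```

The ratio drifts away from 1 only as the solute starts to contribute noticeably to the solution's mass, which is exactly the dilute-solution regime discussed below.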
This webpage explains nicely why low concentrations are important:
Raoult's law only works for low concentration solutions. Why? Well, in order for our approximation to work, the interactions between the solute and solvent molecules must be nearly identical. If the interactions are stronger, then the heat of vaporization of the solvent will change and thus our whole approximation falls apart. Since we know that intermolecular forces vary greatly between molecules, the affect of those forces must be kept at a minimum by keeping the solute concentration real low.
In any case (as explained by additional statements in the abovementioned website, which discusses dissociation of ionic solutes), the statement made in the answer to your exam would not be true generally even if molality instead of molarity was mentioned.
The point of the question seems to be that a student should associate a universal response of the solvent that is independent of solute with colligative properties as described using Raoult's law, which is behavior most often approximated by dilute solutions (the statement that Raoult's law only applies to dilute solutions happens to be false, btw), and that in this realm of application molarity is approx linearly proportional to molality. | {
"domain": "chemistry.stackexchange",
"id": 12382,
"tags": "physical-chemistry, thermodynamics, solutions, boiling-point, melting-point"
} |
Counting the nodes of a unidirectional graph | Question:
Suppose that we have the above graph. The graph is directed and is unidirectional; that is, we can only traverse it from top to bottom.
We want to update the count of the nodes such that the count of any given node indicates the sum of the count of the node itself and the previous nodes, avoiding any duplicate nodes.
Example
For $ id:5 $ the count will be the sum of counts of $id:1,2,3,4$.
So, the count will be calculated as
$Count(id:5) = \sum Count(ancestors) + selfCount$.
So, the $Count(id:5) = 5$ in this case.
Implementation that has been thought of
One such implementation could be traversal from top to bottom, and updating the count levelwise (breadth-first search).
We could then add the count of previous nodes, and subtract the count of the LCA (lowest common ancestor) node.
Example
So, if we have to find the count of node $id:5$, we would get the following count:
$ Count(id:5) = Count(id:5) + Count(id:2) + Count(id:3) + Count(id:4) - 2*Count(id:1) = 1 + 2 + 2 +2 - 2*1 = 5$
Here is the final graph that we are getting.
Another example:
$ Count(id:7) = Count(id:7) + Count(id:5) + Count(id:6) - Count(id:4) = 5 + 3 - 2 +1 = 7 $
Problem
How do we remember the counts of ancestor nodes, when we are analyzing the counts of the current nodes?
It could be possible that the ancestor might not be immediately present, it could be far away at previous levels also.
Is there any algorithm that could solve the problem? I would really appreciate the help.
Answer: You can build a graph $G'=(V,E')$ that comprises the same set of vertices $V$ from the input graph $G=(V,E)$ and whose set of edges $E'$ includes an edge $(v,u)$ for every edge $(u,v) \in E$. That is, $G'$ is the same graph as $G$, but with the edges reversed; therefore, $G'$ is also topologically sorted (in the same way as $G$, but in the opposite direction). You are looking for the number of successors (+1) of every vertex in $G'$. This can be found using depth-first search, starting a new traversal from every vertex and counting the nodes that you find. The running time of this algorithm is $O(mn)$, where $n$ is the number of vertices of the graph and $m$ is the number of edges. D.W.'s answer provides some useful links with approaches with slightly better running times. | {
"domain": "cs.stackexchange",
"id": 8400,
"tags": "algorithms, graphs, data-structures"
} |
UV-Visible Spectroscopy in the analysis of sodium chloride in potato chips | Question: Here is the question and answer out of an exam paper:
Firstly, I thought UV-Visible can also use radiation in the visible spectrum. Also when analyzing sodium chloride (a molecule), then UV-Visible would be more appropriate than AAS, because AAS would be used for just Na (Sodium). Would I be wrong to have said UV-Visible?
Answer: As you know, NaCl solution is almost transparent in the visible region. UV-VIS spectroscopy can be used to determine the concentration of a soluble salt (see J. Phys. Chem. A 2008, 112, 2242-2247); however, the UV part of the spectrum is used instead of the visible part, so this does not fit the restrictions of your question.
Surely AAS has greater sensitivity and so is the best technique to use in this case. | {
"domain": "chemistry.stackexchange",
"id": 1098,
"tags": "homework, electrons, spectroscopy, spectrophotometry"
} |
Writing a given unitary in the same basis as the Hamiltonian (Operator Representation and Confusion) | Question: I have a simple question concerning how to write the representation of operators, such as unitaries, using a specific order for the basis elements. Let me give you an example.
Consider a tripartite system (composed of qubits) with the Hamiltonian $H = |1\rangle \!\langle 1 |$. The total Hamiltonian, which is additive, is represented as
$$
H_{\textrm{tot}} = H\otimes I\otimes I + I\otimes H \otimes I + I\otimes I \otimes H.
$$
Without loss of generality, we can assume that the energies are arranged in ascending order, leading to:
$$H_{\textrm{tot}}= \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 3 \\
\end{bmatrix}.$$
This matrix is represented using the convention that the sum of each element of the basis $|ijk \rangle := |i\rangle \otimes |j\rangle \otimes |k\rangle$ determines the energy in the block. For instance, block 1, $|000\rangle$, has an energy of $0+0+0 = 0$. Block 2 has an energy of 1, so the basis for this block comprises permutations of $|100\rangle$, and so on.
My question is: Assume that I want to construct a unitary operator that commutes with $H_{\textrm{tot}}$ whose action is given by:
\begin{align}
U|001\rangle = |010\rangle, \\
U|010\rangle = |001\rangle, \\
U|101\rangle = |110\rangle, \\
U|110\rangle = |011\rangle, \\
U|011\rangle = |101\rangle.
\end{align}
As before, I would construct such a unitary (which is block-diagonal) using the same basis elements that I used for the Hamiltonian. However, here's where I'm puzzled: both blocks with energies 1 and 2 have at least three different possible orderings. Which should I choose? For instance, using the convention:
$$
(|000\rangle, |001\rangle, |010\rangle, |100\rangle, |011\rangle, |101\rangle, |110\rangle, |111\rangle)
$$
I don't obtain the expected representation, which confuses me. I believe I have freedom to choose, but something is going wrong.
For example, I'm getting the following representation for $U$ (which is not correct according to what I should obtain):
$$
U = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{bmatrix}
$$
Answer: Short answer: You can choose any representation you want, and the one you chose is correct, as is the way you represented such a unitary matrix. Quantum mechanics doesn't prescribe a "correct" representation for states or operators, just as long as the representation is consistent.
Not so short answer: Typically, when using the Kronecker product (in languages such as Mathematica), it is standard to order the basis elements $|i,j,k\rangle$ in lexicographic order. For example, for three qubits, we would order the basis elements as:
$$|0,0,0\rangle, |0,0,1\rangle,|0,1,0\rangle, |0,1,1\rangle, |1,0,0\rangle, |1,0,1\rangle, |1,1,0\rangle, |1,1,1\rangle .$$
As a result, when representing the unitary transformation you provided as an example, we obtain the following matrix:
$$U=\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{pmatrix}.$$
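As a side check (not part of the original answer), the matrix above can be verified numerically. The sketch below, in Python with NumPy, builds $H_{\textrm{tot}}$ and $U$ in the lexicographic Kronecker ordering and confirms that $U$ is unitary and commutes with $H_{\textrm{tot}}$:

```python
import numpy as np

# Single-qubit Hamiltonian H = |1><1| and the additive three-qubit total.
H = np.diag([0.0, 1.0])
I = np.eye(2)
H_tot = (np.kron(np.kron(H, I), I)
         + np.kron(np.kron(I, H), I)
         + np.kron(np.kron(I, I), H))

def ket(bits):
    """Basis vector |b1 b2 b3> in lexicographic (Kronecker) order."""
    v = np.zeros(8)
    v[int(bits, 2)] = 1.0
    return v

# Build U column by column from its stated action;
# unlisted basis states map to themselves.
action = {'001': '010', '010': '001', '101': '110',
          '110': '011', '011': '101'}
U = np.zeros((8, 8))
for b in (format(i, '03b') for i in range(8)):
    U += np.outer(ket(action.get(b, b)), ket(b))

assert np.allclose(U @ U.T, np.eye(8))    # U is unitary (a permutation)
assert np.allclose(U @ H_tot, H_tot @ U)  # U commutes with H_tot
```

Since the map only permutes states of equal Hamming weight (equal energy), the commutation check passes, matching the block-diagonal picture.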
This is equivalent to yours, as they are connected via a permutation matrix. | {
"domain": "physics.stackexchange",
"id": 97929,
"tags": "quantum-mechanics, homework-and-exercises, hilbert-space, quantum-information, unitarity"
} |
Why do small gears used to turn big gears allow more force with less effort? | Question: If I have a small gear that I am turning a large gear with, from what I know that gives me the ability to move heavier things. Like a winch of sorts. Why does turning that little gear allow that mechanical advantage. I saw a video that said gears were really lots of levers. I can’t visualize the lever.
In a car, for example, why is first gear more powerful than fifth for initial movement? I don't know how to visualize it. In order for me to lift a car with a lever, I need a fulcrum close to the car and a fairly long effort arm (I guess). So where are the fulcrum and effort arm in the gear moving the car here?
Is my question worded correctly for what I ask in the body?
edit: Am I correct that gears in a car transmission are mostly for ratio aspect of things? if I look at two gears together, I can see the ratio and one turning twice as much as another. But what is missing is the longer arm that eased the effort. Is that "effort arm" replaced with a strong source of power (an engine) to make up for the missing long effort arm?
edit: This is the video I saw that made me think about it, https://youtu.be/odpsm3ybPsA?t=3m5s I started it at the point I'm asking about. It's pretty quick for the part I'm talking about, maybe 2 minutes. You see that guy turning the crank to move that little gear? See the long crank, the "effort arm"? That looks like a lever to me.
Then I saw images somewhere, that had similar ideas and it was showing the crank, like on a well crank, being the "circle" or wheel moving the axle which turns the small gear.
In a car though you don't have that big wheel to turn first. You have the small drive shaft. Is there a bigger wheel or gear I am missing? One that is rotating before the first small gear that turns the big one? edit: Is this the flywheel?
Answer: The work-energy principle says that the change in energy in a system is equal to the work done to the system. Conservation of energy means that the sum of inputs must equal the sum of outputs.
Work, in turn, is a force times a distance. That is, $W = Fd$.
So, if the sum of inputs and sum of outputs have to be equal and there's no friction, then:
$$(Fd)_{\mbox{in}} = (Fd)_{\mbox{out}}$$
So, let's talk about the lever first. Assuming both ends of the lever are rigid, motion of certain distance on the input requires motion of a certain distance on the output. How far the output moves for a particular input motion is determined by the location of the fulcrum. FYI, the study of how physical constraints determine input and output motions is called "kinematics."
So, if the input and output distances are fixed by physical constraints, and the input force is a given (you supply a force of $X$), then the only thing in the equation that can change to "balance" the input work and the output work is the output force.
That is, the output force varies as required to keep $(Fd)_{\mbox{out}}$ equal to $(Fd)_{\mbox{in}}$.
Hopefully this all makes sense so far.
A lever doesn't actually move strictly up-and-down, though. It rotates about the fulcrum. The actual distance the input traverses is $L_1\theta$, and the output moves $L_2\theta$, where $L_1$ is the length of the lever from the input side to the fulcrum, $L_2$ is the length of the lever from the output side to the fulcrum, and $\theta$ is the angle of how much the lever rotated.
Define the arc length, or distance actually traveled by the input or output end of the lever to be $s$. The input moves:
$$
s_1 = L_1\theta \\
$$
The output moves:
$$
s_2 = L_2\theta \\
$$
If you divide the output by the input, you can see that:
$$
\frac{s_2}{s_1} = \frac{L_2\theta}{L_1\theta} \\
$$
The thetas cancel, and you're left with:
$$
\frac{s_2}{s_1} = \frac{L_2}{L_1} \\
$$
which can be restated as:
$$
\boxed{s_2 = \left(\frac{L_2}{L_1}\right)s_1} \\
$$
The output distance traveled is equal to the input distance times the ratio of lever arm lengths. You can plug this back into the work equation:
$$
F_1 s_1 = F_2 s_2 \\
F_1 s_1 = F_2 \left(\frac{L_2}{L_1}\right)s_1 \\
$$
Cancel the $s_1$:
$$
F_1 = \left(\frac{L_2}{L_1}\right)F_2 \\
\boxed{F_2 = \left(\frac{L_1}{L_2}\right)F_1} \\
$$
So, these two boxed equations show that the output distance changes by $L_2/L_1$, but the output force changes by $L_1/L_2$. The output force can only go up if the output distance goes down, and vice-versa. This ability to "exchange" force for distance is referred to as "mechanical advantage."
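The boxed relations can be checked with a few made-up numbers (an illustrative sketch, not part of the original answer):

```python
# Ideal lever, no friction: example numbers for the boxed relations above.
L1, L2 = 2.0, 0.5     # input and output arm lengths, metres
F1, s1 = 100.0, 0.1   # input force (N) and input displacement (m)

s2 = (L2 / L1) * s1   # output moves a quarter of the distance...
F2 = (L1 / L2) * F1   # ...but pushes four times harder

# Work in equals work out:
print(F1 * s1, F2 * s2)   # 10.0 10.0  (joules)
```

A 100 N push over 10 cm becomes a 400 N push over 2.5 cm: force was traded for distance, with the work unchanged.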
Now, considering this, where a lever can't flip "around-the-world" because it would hit the ground or fall off the fulcrum, a pulley or gear can rotate continuously.
Where before, for the lever, the amount of output motion was dependent on the lengths of the lever arms, here the "levers" are actually gears, and their "lengths" are their radii.
That is, just like before:
$$
s_2 = \left(\frac{r_2}{r_1}\right) s_1 \\
$$
The only difference here is the change from lengths to radii. The arc length traversed, $s$, keeps the same form.
So, if the output gear's radius is very large and the input gear's radius is very small, you get:
$$
s_2 = \left(\frac{\mbox{big}}{\mbox{small}}\right)s_1 \\
s_2 = \left(\mbox{really big}\right) s_1 \\
$$
So now, revisiting the work equation:
$$
W = Fd \\
$$
but, as discussed, the distance traveled isn't quite linear, it's the arc the lever takes about the fulcrum, because the lever rotates about the fulcrum. So you could say instead, that:
$$
W = Fs \\
$$
But, from the definition of arc length:
$$
s = L\theta \\
$$
so, you could substitute:
$$
W = FL\theta \\
$$
You can view or group this two ways - the first is as I did the substitutions here:
$$
W = F(L\theta) \\
$$
but, you could also group that to read:
$$
W = (FL)\theta \\
$$
What is a force times a lever arm? A torque. So you can rewrite the equation as:
$$
W = \tau \theta \\
$$
This is the analogy between linear systems and rotational systems - a force is akin to a torque, and a distance is akin to an angular span. Important to note here that $\theta$ isn't the particular angle the system is currently at, it's the angle through which the system traversed. That is, $\theta = \theta_2 - \theta_1$.
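To put numbers on the rotational form $W = \tau\theta$ (again an illustrative sketch with made-up values): an ideal gear reduction trades angle for torque while conserving their product.

```python
import math

# Ideal gearbox with reduction ratio N: the output turns N times slower,
# the torque is multiplied by N, and the work tau * theta is conserved.
N = 5.0
tau_in = 10.0               # input torque (N*m)
theta_in = 2 * math.pi      # one full input revolution

theta_out = theta_in / N    # output angle is smaller...
tau_out = N * tau_in        # ...output torque is larger

assert abs(tau_in * theta_in - tau_out * theta_out) < 1e-9
```

This is the rotational twin of the force/distance trade-off for the lever.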
Anyways, hopefully the explanation of how the linear (lever) and rotational (pulley or gear) frames are related makes sense.
I think, more to your question, the "force multiplier" effect that you get with a lever arm, or gear, etc. is a tradeoff between applied force and applied distance.
You do the same amount of work to lift a 1000 pound rock up 1 foot as you do to lift a 1 pound rock up 1000 feet. If you don't have the strength to lift the 1000 pound rock directly, you can use mechanical advantage to trade the 1000 pounds for the 1000 feet.
:EDIT:
I drew a picture that hopefully illustrates the relationship between gears and levers. A lever has a fulcrum, which provides reaction forces, and a length on either side of the fulcrum.
A gear (pulley, etc.) is like a lever with equal lengths on either side of the fulcrum that is lashed/tied/bound to another lever.
The lashing that "ties" the two levers together is referred to as the gear mesh. Two teeth come into contact with one another, and that physical contact causes one "lever" (gear) to push the other.
I'll add a little more information, in the hopes that more detail will help cement the analogy rather than confuse you. Just like the example I've mentioned - two levers lashed together, if you lash them together too tightly then they're not actually able to move at all. The same can happen with a gear mesh - if the gears are too close together, the mesh is too tight and the assembly won't spin.
Conversely, if the lashing that binds the levers is too loose, then when you change direction there will be some dead band where the lashing is sagging. The input is able to rotate freely before the lashing snaps taut again, at which point the output starts to move. Again, the same thing happens in real gear systems - if the gears are too far apart, or the teeth are too narrow, then there is a void between one pair of teeth and the next. This is referred to as backlash.
Again, this might be too much information, but my hope is that you can understand that two gears are like two levers that have been tied together. If the binding (backlash) is too tight then the gears can't move, but if it's too loose then the output tends to get jerked around a lot as the "lashing" goes through the slack-taut-jerk-slack cycle, like a series of flicks instead of a continuous push. | {
"domain": "engineering.stackexchange",
"id": 1033,
"tags": "mechanical-engineering, gears"
} |
Hydrogen in a glass of water | Question: One of the great Sagan quotes is that we are made of star stuff - meaning the atoms in our bodies were formed from stellar nucleosynthesis.
However, what about the hydrogen in a glass of water? Would it be correct to say that all of the hydrogen in that glass was made during the epoch of recombination in the early universe? (I am specifically referring to the hydrogen atoms, not the H2 molecules, or oxygen for that matter)
Answer: Hydrogen wasn't really made at recombination, but in the hadron epoch, starting about one millisecond after the Big Bang and lasting about 1 second. Sure, it wasn't neutral hydrogen, but it was hydrogen nonetheless.
As you mention, neutral hydrogen was made at the epoch of recombination. But since the water doesn't contain free, neutral hydrogen atoms, this epoch isn't of much significance to the glass of water.
Water molecules were of course first formed after oxygen was created, which, together with all elements other than hydrogen, helium, and lithium, was made in stars. The first stars came around a few hundred million years after the Big Bang and started enriching the Universe with heavier elements, and rather soon after, water molecules were present (Bialy et al. (2015)).
Since you emphasize the word "all", if you want to be pedantic, not every single hydrogen nucleus was made during the hadron epoch, since you also have random fissions of helium nuclei in stars. They're just completely outnumbered by the number of fusions of hydrogen to helium. | {
"domain": "physics.stackexchange",
"id": 29014,
"tags": "cosmology, big-bang, atoms, nucleosynthesis"
} |
Grid minor in digraphs | Question: Thor Johnson, et al, in their paper: Directed Tree Width, introduced a definition for directed grid $J_k$, and they conjectured:
$(5.1)$ For every integer $k$ there exists an integer $N$ such that every
digraph with tree-width $N$ or more has a minor isomorphic to $J_k$.
And they continued by saying:
We have convinced ourselves that $(5.1)$ holds for planar digraphs,
but the general case is open.
And I'm looking for this unpublished paper (how they proved the conjecture for di-planar graphs), or related stuff in this case, actually how to use such a grid (I mean $J_k$).
Answer: There is a new preprint by Stephan Kreutzer and Ken-ichi Kawarabayashi, in which they apparently show that the statement (5.1) is true for all digraphs.
Stephan Kreutzer and Ken-ichi Kawarabayashi: The directed grid theorem. arXiv:1411.5681 [cs.DM]
EDIT (June 16, 2015):
A short version of their paper appears here:
Ken-ichi Kawarabayashi, Stephan Kreutzer. The Directed Grid Theorem. In: Rocco A. Servedio, Ronitt Rubinfeld (eds.), Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing 2015. pp. 655-664 | {
"domain": "cstheory.stackexchange",
"id": 2988,
"tags": "reference-request, graph-theory, co.combinatorics, graph-minor"
} |
Would bending of spacetime make tides an invisible effect? | Question: Similar to this question: How does general relativity explain tides?
But I’m specifically interested in if General Theory of Relativity would predict that bending spacetime means the water and land on earth bend together as the below authors suggest.
“According to the General Theory of Relativity, mass do not interact gravitational [sic] with each other by the gravity force, but because they bend the space. If, according to the General Theory of Relativity, gravity manifests itself as a curvature of space, then it is difficult to explain tides and outflows. At tides and outflows, the differences in water levels reach several meters. It follows that the changes in curvature of space, postulated by GTR, caused by variable gravitation of the Moon are very large. The problem is why tides and outflows are very clear, while the deformations of mainland are invisible. If the mass of Moon bends the space, it is the same when that space is filled with water and when it is filled with the mainland. Therefore, sea water should be deformed in the same way as the mainland. The banks should rise in the same way as the water. Then, of course, tides and outflows would be invisible. However, because they are visible, it follows that the mass of Moon does not distort space, but rather the matter. The water is malleable, and therefore it deforms more than stiff rocks. This proves that the assumption of GTR, that gravity is a curvature of space, is incorrect.”
(PDF) Gravitational Waves in Newton’s Gravitation and Criticism of Gravitational Waves Resulting from the General Theory of Relativity (LIGO). Available from: https://www.researchgate.net/publication/326034030_Gravitational_Waves_in_Newton's_Gravitation_and_Criticism_of_Gravitational_Waves_Resulting_from_the_General_Theory_of_Relativity_LIGO [accessed Feb 20 2023].
What, if anything, is wrong with this assessment? Would the bending of spacetime make the tides invisible?
Answer:
If the mass of Moon bends the space, it is the same when that space is filled with water and when it is filled with the mainland. Therefore, sea water should be deformed in the same way as the mainland. The banks should rise in the same way as the water. Then, of course, tides and outflows would be invisible.
The "therefore" is incorrect.
In the language of Newtonian gravity, tides happen because of a differential gravitational pull between different places on Earth and its center of mass acting as a forcing term for the system of all the bodies of water, whose response function is rather complex.
See this excellent SE answer or Theory of tides on Wikipedia.
The different response accounts for the fact that bodies of water move by much more compared to land.
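As a rough back-of-the-envelope sketch of that differential pull (not part of the original answer; the constants are approximate):

```python
# The Moon's pull at the near side, centre, and far side of the Earth.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_moon = 7.342e22    # mass of the Moon, kg
d = 3.844e8          # mean Earth-Moon distance, m
R_earth = 6.371e6    # Earth's radius, m

def a(r):
    """Magnitude of the Moon's gravitational acceleration at distance r."""
    return G * M_moon / r**2

a_center = a(d)
a_near = a(d - R_earth)
a_far = a(d + R_earth)

# Differential (tidal) accelerations relative to the Earth's centre are
# roughly thirty times smaller than the pull itself, but nonzero on both sides:
print(a_near - a_center)   # ~ +1.1e-6 m/s^2 (bulge toward the Moon)
print(a_center - a_far)    # ~ +1.1e-6 m/s^2 (bulge away from the Moon)
```

The near side is pulled harder than the centre and the centre harder than the far side, which is why there are two bulges; it is the *response* of water versus rock to this common forcing term that differs.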
The thing that differs between Newtonian gravity and general relativity is the way the forcing term is handled; in this low-velocity, low-density regime their predictions are almost identical, although the language is different.
In this regime, the Newtonian language is to consider any given particle (molecule of water, say) to have an acceleration due to gravity:
$$ \frac{\text{d}^2 x^i}{\text{d}t^2} = F^i_{\text{grav}} + F^i_{\text{other}}
$$
where the $F_{\text{other}}$ term accounts for all non-gravitational forces.
In the language of GR, the main equation is the geodesic one:
$$ \frac{\text{d}^2 x^\mu}{\text{d}s^2} + \Gamma^\mu_{\nu \rho} \frac{\text{d} x^\nu}{\text{d}s}
\frac{\text{d} x^\rho}{\text{d}s} = F^\mu_{\text{other}}
$$
which in this case reduces to
$$ \frac{\text{d}^2 x^i}{\text{d}t^2} \approx - \Gamma^i_{00} + F^i_{\text{other}}
$$
and in particular, again in this approximation, $- \Gamma^i_{00}$ is quite close to the Newtonian expression for the gravitational force $F^i_{\text{grav}}$. | {
"domain": "physics.stackexchange",
"id": 93934,
"tags": "general-relativity, spacetime, curvature, tidal-effect"
} |
Revolving pendulum contradicts laplace determinism | Question: A question came once into my mind
There is a pendulum with a string of length 1 metre, initially at rest. Now the point of suspension suddenly starts to move in uniform horizontal circular motion of radius 10 cm and angular velocity 1 s$^{-1}$. The bob of the pendulum will initially show some erratic motion, but after some time it will also begin to move in uniform horizontal circular motion with the same angular velocity. Your task is to find the radius of this circular motion of the bob.
The question appeared simple at first sight.
This is how I thought the motion would be: it looks like part of a conical pendulum. It is similar to what you see while riding a merry-go-round (the centrifugal effect causes the horses to swing slightly away from the axis). Note that at any instant the bob and the point of suspension have the same angular position. The surface of revolution of the string is a frustum. Now I can easily apply Newton's laws of motion and solve it. I did get an answer, but I don't remember it, and it's irrelevant if you are convinced that this is a possible solution.
I asked this question to my friend; he solved it and gave a different answer. I thought he had made a mistake, but I was surprised to see his approach. This is how he thought the motion should occur:
Here the bob has a slightly smaller radius. The surface of revolution of the string is in the shape of two inverted cones joined at the vertex. At any instant the angular position of the bob is opposite to that of the point of suspension. This also yields a second (also valid) answer to the question.
You can try to perform this experiment practically by attaching a heavy weight to the end of a rope and then revolving it by hand. Someone could make a good multiple-correct question from this by giving both answers in the options, but most people will only be able to think of one answer.
Anyway, my query is regarding Laplace determinism, which states that if all the conditions and parameters of a system are known at an instant, then the values of these parameters can be accurately predicted at any other instant. There seems to be a violation of that here, because the initial conditions were known and we are getting two possible answers. How is this possible?
Answer: There may be at least two different reasons for this difference. (I intentionally will not discuss which of them is applicable to this particular setup, but will present two reasons that may be at work in many similar situations.)
The first reason is that you consider only the stabilized part of the movement, while omitting the "[t]he bob of the pendulum will initially show some erratic motion" part. It may be that the resulting stabilized motion depends on the nature of suspension point movement during this phase.
You say that "[y]ou can try to perform this experiment practically by attaching some heavy weight to the end of a rope and then revolving it by hand", I presume that you mean that it is possible to intentionally get both configurations, and probably, with some experience you will learn how exactly you need to start rotation with your hand to get a selected configuration. Then this "how exactly you need to start rotation" will represent exactly the difference that you are looking for. You will have two different ways of moving your hand, and two different outcomes, and you will be able to intentionally choose one of them.
So the first reason is that there may be some observable differences in suspension point movement during initial stage that results in different configuration. (In particular, the acceleration of the suspension point when it moves from rest to circular movement may be important.)
Of course, this leave open the question "what will happen if the suspension points moves in an ideal circular trajectory". It is possible that only one specific configuration can be achieved, but to understand which one you will need to analyze this starting period. Nor your analysis (that looks only at stabilized movement), nor your experiment with hand (where you don't have precise control on suspension point movement) can be used to answer this question.
A simple example to illustrate this: consider an ideal sphere on an ideal horizontal surface. Let's mark some point on the sphere, and then push the sphere to start rolling. The sphere will stop at some point due to friction; let's ask: where will the marked point be on the sphere when it stops?
We can do a stationary analysis, but (assuming the mark itself is of zero weight) we will not get any definite answer; any orientation of the sphere will be a possible stationary solution. But if we analyze the movement and the initial position of the mark, then it will be rather easy to find where the mark will be located at the end.
And the second reason is deterministic chaos. In some systems, even minor perturbations in the initial conditions and/or in the external conditions during the motion can result in radically different outcomes. In this case, the choice of the resulting configuration will depend on minor perturbations in the initial state of the pendulum, or on minor external influences (wind, etc.) that we usually neglect in our analysis. So if you know the initial conditions and external influences to a very, very good precision, you know the result, but if you don't know them to the needed precision, the result can be seen as random.
Like above, this leaves open the question "what will happen in an ideal situation", but the difference from the first reason described above is that this question becomes unpractical in this case, because you never have an absolutely precise situation.
(Not to mention that the required "very high precision", if for example we talk about spatial position, can easily be much less than the size of an atom, which is beyond the limits of applicability of classical mechanics. That is, the result may radically change if you change the initial position by a fraction of atom size; but you can not require a classical mechanics initial state to be specified to that precision.)
This is very similar to the flipping coin problem. While the movement of the coin can be seen as absolutely deterministic, still the outcome can not be predicted in a reasonable way, because small alterations in the initial state can grow large enough to change the result.
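This sensitivity is easy to demonstrate with a toy system (a sketch using the logistic map as a stand-in, since the pendulum's equations are harder to iterate by hand):

```python
# Deterministic chaos illustrated with the logistic map x -> r*x*(1-x),
# a standard toy system with sensitive dependence on initial conditions.
r = 3.9                      # parameter in the chaotic regime
x, y = 0.2, 0.2 + 1e-12      # two initial states differing by one part in 10^12

max_sep = 0.0
for _ in range(200):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_sep = max(max_sep, abs(x - y))

# The tiny initial difference has grown to an order-one separation:
print(max_sep > 0.01)        # True
```

The dynamics are perfectly deterministic, yet a difference far below any measurable precision grows until the two trajectories are entirely unrelated — exactly the situation described above for the pendulum and the coin.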
UPD: I would also add a third possibility, namely that one of the configurations can be unstable. That is, it will be a valid solution of the equations, but any small variation will lead to the system leaving that configuration. Moreover, this means that such a configuration will not be reachable in the first place. You did not do any stability analysis, but if your manual experiment shows that both configurations are possible, then most probably both are stable. | {
"domain": "physics.stackexchange",
"id": 89552,
"tags": "newtonian-mechanics, determinism"
} |
Why is it $q^2$ for the individual count in hardy weinberg? | Question: My understanding:
In Hardy-Weinberg problems the frequency of a homozygous recessive genetic occurrence in a population is $q^2$. So if 1 in 100 people in a population have albinism (homozygous recessive disorder) then we say the frequency is $q^2=1/100$.
We then say, to find the frequency of the allele count $q$ that $q=\sqrt{1/100}= 1/10$.
I don't understand why we say this. Why would the allele count be the square root of the population frequency? There's 2 alleles per person. Why isn't it x2 or /2 instead?
I suppose my problem is understand what exactly is p and q.
Answer: First off, let me correct your equation: $q = \sqrt{\frac{1}{100}} = \frac{1}{10} = 0.1 ≠ 10$.
From allele frequency to genotype frequency
Imagine you were to randomly sample an allele from a population of alleles in which allele A is present at frequency $q$. What is the probability that you draw allele A? Answer: $P(A) = q$. Now put this allele back in the pool and imagine you have to draw two alleles. What is the probability that both alleles are A? Well, it is the probability that the first allele is A times the probability that the second allele is A: $P(A) \cdot P(A) = q \cdot q = q^2$.
From genotype frequency to allele frequency
Therefore, under Hardy-Weinberg conditions, if the allele A is at frequency $q$, then the homozygous genotype AA is at frequency $q \cdot q = q^2$. Now let's denote the frequency of the genotype AA by $f_{AA}$. You know that $f_{AA} = q^2$ from the above thought experiment. If you take the square root of both sides, you get $\sqrt{f_{AA}} = \sqrt{q^2} = q$. In words, the frequency of the allele A is the square root of the frequency of the genotype AA.
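The calculation can be condensed into a few lines (a sketch using the albinism numbers from the question):

```python
import math

f_aa = 1 / 100        # observed frequency of the homozygous recessive genotype

q = math.sqrt(f_aa)   # frequency of the recessive allele, q ~ 0.1
p = 1 - q             # frequency of the dominant allele, p ~ 0.9

carrier_freq = 2 * p * q   # Hardy-Weinberg heterozygote (carrier) frequency
print(q, p, carrier_freq)  # ~ 0.1  0.9  0.18
```

Note the bonus fact this makes visible: while only 1 in 100 people show the trait, about 18 in 100 are carriers.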
Recessivity and dominance
Btw, you'll note that those calculations say nothing about the patterns of dominance/recessivity of the alleles (the Hardy-Weinberg principle assumes no selection anyway). | {
"domain": "biology.stackexchange",
"id": 4032,
"tags": "genetics, population-genetics, hardy-weinberg"
} |
Securing access to PHP services from Javascript Ajax calls | Question: I am creating a generic PHP handler, or "dispatcher", for Ajax calls made from Javascript running on the page served by PHP. The handler will take posted data and, depending on a value, decide how to route the call further. WordPress is being used, but my question is not specific to WordPress (if it was I would have asked on the WordPress sub-site).
The PHP Ajax handler should be secured, not serving calls from anywhere other than the page that the visitor is viewing in the browser (assuming that Apache is not configured to restrict access to the handler). I have designed a security structure with this aim in mind, and would like to know if my approach is good or could be improved. Are there any weaknesses that could be exploited?
The design is quite simple. When the page is constructed for serving, a session cookie is created, and its value is an encrypted token, derived from the client's IP, and a random salt. The cookie's value, the encrypted token, is also used as the name of a temporary db record for 60 minutes, and the record's value is the salt that was used to generate the token.
$ajaxSalt = bin2hex(openssl_random_pseudo_bytes(30)); // Create random salt
$cookieAndTrans = md5(crypt($_SERVER['REMOTE_ADDR'], $ajaxSalt)); // We use cookie value as token
setCookie('qnrwp_ajax_cookie', $cookieAndTrans); // Set session cookie, for JS Ajax caller to echo back to us
set_transient('qnrwp_ajax_temp_salt_'.$cookieAndTrans, $ajaxSalt, 60 * MINUTE_IN_SECONDS); // Save salt for 60 mins
When the page is served and the client makes an Ajax request, the Javascript code will, as part of the data payload, send the value of the cookie to the handler.
Receiving the call, the handler will check that the transmitted cookie value matches the cookie value that was set initially. It then uses this value as the name of the temporary db record to call up (if 60 minutes haven't passed), and uses the value of the record, the salt, to confirm that the IP matches.
// ----------------------- Security check
if ($_POST['qnrwp_ajax_cookie'] !== $_COOKIE['qnrwp_ajax_cookie']) wp_die(); // Posted token must echo the cookie we set
$ajaxTrans = get_transient('qnrwp_ajax_temp_salt_'.$_POST['qnrwp_ajax_cookie']); // Look up the stored salt by token
if (!$ajaxTrans) wp_die(); // No record: token unknown or older than 60 minutes
if ($_POST['qnrwp_ajax_cookie'] !== md5(crypt($_SERVER['REMOTE_ADDR'], $ajaxTrans))) wp_die(); // Token must re-derive from client IP + salt
A couple of things to clarify to avoid confusion: md5() is primarily used as a replacement for bin2hex(), its weak crypto security just an added bonus, not something being relied upon - crypt() is used for good encryption. crypt() is able to generate a salt, but I prefer creating my own, as I'll store it in the temporary record (set/get_transient() in the code). The salt is quite long, which helps avoid clashes between different clients accessing the site at the same time. A clash is still possible, but I think unlikely, and even if it happened, it would not be catastrophic - if the clashing clients share their IP, one of them may find up to 120 minutes available after page load rather than 60, no big deal.
I believe my question is at quite an advanced level and would prefer if those with good knowledge of the subject write answers rather than try to engage in discussion in the comments. That said, if anything is unclear, feel free to ask in the comments. If you see weaknesses in the code, please try to provide concrete suggestions for improvement, rather than merely pointing out the loopholes. The most constructive answer will be accepted. I will wait a few days before accepting.
Please note I'm not open to suggestions of third-party tools or WordPress plugins - this is a case of "rolling my own".
Answer: I consider myself a security nazi, and you're asking security-related questions, so I'm going to do something I don't normally do and be pretty blunt here. Please don't take it personally.
You've completely missed the mark on security. Not to say that your solution is "insecure" (you haven't given us enough information to really decide that), but simply because, from a security perspective, most of what you are doing just doesn't make sense. Your code more-or-less looks like you threw together as many security-related functions as you could and called it secure. Granted, these things aren't making things less-secure, but it also isn't making you more-secure. I'm a firm believer that code that doesn't actually do anything should be thrown away. That is the case with most of your code here.
I think the problem you have is that you've missed out on the big picture: what is your code actually trying to accomplish? The answer, as near as I can tell, is to make sure that only the original viewer of a page can access the API calls related to that page. In other words, you are trying to identify users via a secret. In that context, here's the thing:
Don't use the remote address. The client's IP address can change for any number of reasons, completely breaking your system. This can happen as easily as someone opening your website on their phone with their home WIFI connection, and then stepping outside and switching to internet via their mobile carrier. When that happens their IP changes and suddenly your javascript refuses all subsequent requests. A system that breaks when the user moves 50 feet is definitely a problem. Moreover, this is not a perfect security step anyway. It is quite common for multiple users to share the same IP address, sometimes in contexts in which session stealing is most relevant (i.e. public wifi). This means that identifying users by IP Address causes UI problems, and is also not secure. Lose/Lose.
bin2hex is not the best choice. If you want to encode random bytes as a string (which is perfectly normal: strings, especially ASCII, move back and forth from server-to-client easily), then don't use bin2hex. For a given number of bytes, a hexadecimal string will be longer than an equivalent ASCII string. Instead store your random string as an ASCII string using only numbers and letters. You'll get shorter strings with the same amount of entropy. Granted, this is really just a nitpick: this won't impact your security either way. In essence, you are trying to generate a random string, and are doing that by taking random bytes and converting it to hex. It's better to just generate a random string, IMO.
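To illustrate the encoding point (a sketch in Python rather than PHP, since the idea is language-independent): the same 240 bits of entropy fit in a noticeably shorter alphanumeric string than in hex.

```python
import math
import secrets

n_bytes = 30  # same amount of randomness as openssl_random_pseudo_bytes(30)

# Hex encoding (what bin2hex gives): 2 characters per byte.
hex_token = secrets.token_bytes(n_bytes).hex()

# The same 240 bits of entropy packed into a base-62 alphanumeric string:
alphabet = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
n_chars = math.ceil(n_bytes * 8 / math.log2(len(alphabet)))
alnum_token = ''.join(secrets.choice(alphabet) for _ in range(n_chars))

print(len(hex_token))    # 60 characters
print(len(alnum_token))  # 41 characters, same entropy
```

Either way, the important property is that the token comes from a cryptographically secure random source; the encoding only affects how compactly it travels.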
Neither crypt nor md5 are doing anything for you at all. Again, focus on the big-picture goal here. You are (as near as I can tell) trying to generate a random string to act as a "secret" to later identify users. Encrypting things doesn't make them more random, and neither does hashing. It's especially pointless to hash an encrypted string. Encryption preserves data, while hashing destroys it. The fact that you would follow up one with the other suggests that you don't really understand what you are doing.
Checking for the value both in the cookie and post data doesn't make your app more secure: it just gives you more code to maintain on both the front-end and back-end. Moreover, it violates Kerckhoff's principle, which states that the only thing that should be a secret in any secure system is the actual secret (in this case, your key). You're effectively trying to rely on an extra security step to keep out attackers, but that security step is already public (because it is present in your front-end javascript code). As a result, it is pointless. Just let the value come up in the cookie that you already set.
So what should you do? I would suggest one of two things:
Best bet: authenticate users in this API call in exactly the same way that you authenticated them for the initial page load. The only reason you can't do that is if:
Your system is anonymous and people are not authenticated at all. In this case though, you don't need to do anything fancy at all: just use PHP's built in session system. Its entire purpose is to securely identify users from one page-view to another, and everything you have here is basically a poor-man's session system. You don't have to worry about errors in implementation if you use one that has been around and vetted for many years and by countless websites. Don't reinvent the wheel, especially when it comes to security.
Also, standard things apply:
Make sure to flag your cookies as HTTP-only
Make sure to configure your cookies to only be transmitted over HTTPS.
This won't protect against session stealing, but then again neither will your solution. That topic is a completely different discussion.
Forgot to say
If you don't want to use PHP sessions, then things can still be substantially simplified. To reiterate the big picture goal, all you really want is a unique (and unguessable) key that lets you identify visitors from page-view to page-view. You don't need crypt, md5, or salting to make that happen. All you need is, literally, a long random string. A 32 character string made from just numbers and letters (upper+lower case) gives you almost 200 bits of entropy, and is very easy to generate. As long as your random string generator has no weaknesses, no one is going to guess such a string. This becomes your session ID. Store it in a cookie, and get it out of the cookie when it comes back up. No need to encrypt or hash it: those don't increase your security at all. So in the end your code basically just looks like:
$sessionId = randomString(32);
setCookie('qnrwp_ajax_cookie', $sessionId);
set_transient('qnrwp_ajax_temp_salt_'.$sessionId, true, 60 * MINUTE_IN_SECONDS); // value, then expiration
Then:
$ajaxTrans = get_transient('qnrwp_ajax_temp_salt_'.$_COOKIE['qnrwp_ajax_cookie']);
if (!$ajaxTrans) wp_die();
// Also make sure and check session expiration
Also, you don't want to just die. You should return some sort of API error that your front-end AJAX call can turn into a more user friendly "Your session has expired" error message. Don't forget to check the session expiration. | {
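The random-string idea above is easy to get right with a modern CSPRNG. A minimal sketch in Python (not PHP — purely to illustrate the entropy math; the function name is mine):

```python
import math
import secrets
import string

# 62-symbol alphanumeric alphabet -> log2(62) ~ 5.95 bits per character.
ALPHABET = string.ascii_letters + string.digits

def random_session_id(length=32):
    """Cryptographically secure random alphanumeric ID."""
    return ''.join(secrets.choice(ALPHABET) for _ in range(length))

bits = 32 * math.log2(len(ALPHABET))
print(random_session_id(), f"~{bits:.0f} bits of entropy")  # ~191 bits
```

This is where the "almost 200 bits" figure comes from: 32 characters times roughly 5.95 bits each.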
"domain": "codereview.stackexchange",
"id": 28369,
"tags": "php, security, cryptography, ajax"
} |
Help finding motor to replace awning motor (want to jerry rig something together) | Question: The motor on my awning just bit the dust (again). This is the second time this has happened.
I'm looking to replace it with something that I can build myself, using the existing motor enclosure. You can see the exact motor I'm talking about below (amazon link). I just don't want to pay that much money for a replacement. I'd rather rip the burnt out one apart and buy a cheap motor to do the same thing.
What I'm struggling with is what motor to get that would handle a 20ft awning. And to fit in the existing enclosure, the diameter has to be 3cm.
I found the following motor, but I believe it may be way under powered to do it. Dimensions wise, it's like perfect for the enclosure that the fried motor is in.
https://www.amazon.ca/gp/product/B00HDDXBEY/ref=ox_sc_act_title_1?smid=A1IQ6DRJX762AU&psc=1
Here are some pictures of the burnt out motor (removed from the enclosure):
OEM Replacement motor:
DOMETIC 3310423.209U Torsion Assembly
https://www.amazon.ca/Dometic-3310423209U-Drive-Assembly-Awning/dp/B07L3JF78D/ref=asc_df_B07L3JF78D/?tag=googleshopc0c-20&linkCode=df0&hvadid=347072134503&hvpos=&hvnetw=g&hvrand=3888389780963375681&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9001077&hvtargid=pla-843687300391&psc=1
Answer: After ripping it all open, I believe I can repair the existing motor. I cleaned off the commutator and brushes; but it looks like I need to replace the brushes.
The windings all look good. I tested the contacts on the commutator with a multimeter and all were good.
I ordered new brushes; but I'm pretty confident this will fix it.
** On a side note **
I can tell that this thing was made to fail - the components around the brushes are super cheap, and I can tell it won't last long. | {
"domain": "engineering.stackexchange",
"id": 4261,
"tags": "motors"
} |
Tests for Salesforce controller concerning candidates and notes | Question: I have this controller and some test cases. Any corrections/advice are appreciated to make them better.
public with sharing class MTX_MatrixDetailViewController {
public transient String activities {get;set;}
public MTX_MatrixDetailViewController(){
String candidateId = ApexPages.currentPage().getParameters().get('id');
Map<String, String> filters = new Map<String, String>();
List<LightningActivity> actList = LightningActivityAccessor.getLightningActivitiesByType('Contact', candidateId, filters, 0);
//sort activities
actList = LightningActivityAccessor.sort(actList,'date','DESC');
activities = JSON.serialize(actList);
}
@AuraEnabled
public static MTX_MatrixWrapper fetchCandidateDetails(String candidateId, String matrixId){
MTX_MatrixWrapper wrapper = new MTX_MatrixWrapper();
wrapper.candidateRecord = MTX_MatrixAccessor.getCandidatesById(new List<String>{candidateId})[0];
wrapper.matrixCandidate = MTX_MatrixAccessor.getMatrixCandidatesById(new List<String>{matrixId})[0];
wrapper.currentMatrix = String.isNotBlank(wrapper.matrixCandidate.Matrix__c) ? MTX_MatrixAccessor.getMatrixById(wrapper.matrixCandidate.Matrix__c) : new MTX_Matrix__c( Name = Label.MTX_Candidates_Shared_with_Me );
wrapper.preferredMobile = MTX_MatrixAccessor.getPreferedMobileForCandidate(wrapper.candidateRecord.Master_People__c);
wrapper.candidates = MTX_MatrixAccessor.getMatrixCandidatesById(new List<String>{matrixId});
List<Matrix_User_Setting__c> userSettings = MTX_MatrixAccessor.getSettingsForCurrentUser(UserInfo.getUserId());
wrapper.userSettings = userSettings.isEmpty() == false ? userSettings[0] : MTX_MatrixService.createMatrixUserSetting(UserInfo.getUserId());
/*** XEngine ***/
XEngineUtils.postMatrixCandidatesEvent(wrapper.candidates, XEngineUtils.MTX_CAND_CANDIDATE_DETAILS);
/*** XEngine ***/
return wrapper;
}
@AuraEnabled
public static MTX_Matrix_Note__c getNewMatrixNote(String candidateId){
return new MTX_Matrix_Note__c(Candidate__c = candidateId);
}
@AuraEnabled
public static MTX_Matrix_Note__c saveNote(MTX_Matrix_Note__c note){
return MTX_MatrixService.saveNote(note);
}
@AuraEnabled
public static MTX_Matrix_Note__c getNoteById(Id noteId){
/*** XEngine ***/
XEngineUtils.postMatrixNoteEvent(noteId, XEngineUtils.NOTE_VIEW);
/*** XEngine ***/
return MTX_MatrixAccessor.getMatrixNoteById(noteId);
}
@AuraEnabled
public static void deleteSelectedNote(MTX_Matrix_Note__c note){
MTX_MatrixService.deleteCandidateNote(note);
}
}
Here are my test classes:
@isTest
public class MTX_MatrixDetailViewControllerTest {
//create test data
@testSetup static void setup() {
//setup FO full users (not inserted yet)
List<User> matrixUsersToGivePermission = MatrixTestFactory.createMatrixUsers(2);
//give FO full users matrix permission set (will insert the users)
matrixUsersToGivePermission = MatrixTestFactory.giveMatrixPermission(matrixUsersToGivePermission);
//setup spotlight
MatrixTestFactory.setupEliseConnection();
}
static testMethod void gettingnewNote() {
List<Contact> candidates = MatrixTestFactory.getCandidates();
List<User> matrixUsers = MatrixTestFactory.getMatrixUsers();
Test.startTest();
//MTX_Matrix_Note__c nt = new MTX_Matrix_Note__c();
System.runAs(matrixUsers[0]) {
MTX_MatrixService.getNewMatrixNote( candidates[0].Id );
}
Test.stopTest();
}
static testMethod void savingtheNote() {
//create a method for for getting notes in MatrixTestFactory
List<MTX_Matrix_Note__c> notes = MatrixTestFactory.getNotes();
List<User> matrixUsers = MatrixTestFactory.getMatrixUsers();
Test.startTest();
//MTX_Matrix_Note__c nt = new MTX_Matrix_Note__c();
System.runAs(matrixUsers[0]) {
MTX_MatrixService.saveNote(notes[0] );
}
Test.stopTest();
}
static testMethod void deletingspecificNote() {
List<User> matrixUsers = MatrixTestFactory.getMatrixUsers();
//create a method for for getting notes in MatrixTestFactory
List<MTX_Matrix_Note__c> notes = MatrixTestFactory.getNotes();
Test.startTest();
//MTX_Matrix_Note__c nt = new MTX_Matrix_Note__c();
System.runAs(matrixUsers[0]) {
MTX_MatrixService.deleteCandidateNote(notes[0]);
}
Test.stopTest();
}
static testMethod void retrievingNoteiD() {
List<User> matrixUsers = MatrixTestFactory.getMatrixUsers();
//create a method for for getting notes in MatrixTestFactory
List<MTX_Matrix_Note__c> notes = MatrixTestFactory.getNotes();
Test.startTest();
//MTX_Matrix_Note__c nt = new MTX_Matrix_Note__c();
System.runAs(matrixUsers[0]) {
MTX_MatrixService.deleteCandidateNote( notes[0]);
}
Test.stopTest();
}
}
Side note: Matrix Test Factory creates my test data, and when I call
MTX_MatrixService.deleteCandidateNote, MTX_MatrixService.saveNote,
MTX_MatrixService.getNewMatrixNote and MTX_MatrixAccessor.getMatrixNoteById, they are defined as
public static MTX_Matrix_Note__c saveNote(MTX_Matrix_Note__c note){
boolean addNote;
if(note.Id != null) { addNote = true; } else { addNote = false; }
upsert note;
/*** XEngine ***/
if(!addNote){
XEngineUtils.postMatrixNoteEvent(note.Id, XEngineUtils.NOTE_ADD);
} else {
XEngineUtils.postMatrixNoteEvent(note.Id, XEngineUtils.NOTE_UPDATE);
}
/*** XEngine ***/
return note;
}
public static void deleteCandidateNote(MTX_Matrix_Note__c note){
string matrixNoteJson = XEngineUtils.getMatrixNoteJson(note, XEngineUtils.NOTE_DELETE);
delete note;
/*** XEngine ***/
XEngineUtils.postEventJson(matrixNoteJson);
/*** XEngine ***/
}
/*retrieves single matrix note content by note id*/
public static MTX_Matrix_Note__c getMatrixNoteById(Id noteId){
List<MTX_Matrix_Note__c> notes = new List<MTX_Matrix_Note__c>();
notes = [SELECT Id, Note__c, CreatedById, CreatedBy.Name, LastModifiedDate FROM MTX_Matrix_Note__c WHERE Id =: noteId];
return notes.isEmpty() == false ? notes[0] : null;
}
Answer: Your unit tests contain no assertions about the behavior of your code, so they prove nothing other than that the code does not crash in this specific situation. These are commonly referred to as smoke tests.
There are three critical elements for any unit test, even before you start thinking about the overall coverage of your application's code paths:
Control environment and set up test data.
Execute functionality.
Validate results.
You are only doing (1) and (2) so far.
I'd recommend reviewing Salesforce Stack Exchange's canonical QA, How do I write an Apex unit test?, for more information and resources. In particular, you should definitely complete Unit Testing on the Lightning Platform on Trailhead. | {
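To make the three elements concrete, here is a hypothetical arrange/act/assert sketch. It is plain Python rather than Apex (Apex only runs on the Salesforce platform), and `save_note` is a toy stand-in, not the real `MTX_MatrixService.saveNote`:

```python
def save_note(notes, candidate_id, text):
    """Toy stand-in for a service call such as MTX_MatrixService.saveNote."""
    note = {"candidate": candidate_id, "text": text}
    notes.append(note)
    return note

def test_save_note_appends_and_returns_note():
    # 1. Control environment and set up test data.
    notes = []
    # 2. Execute functionality.
    result = save_note(notes, "cand-001", "hello")
    # 3. Validate results -- the step missing from the tests under review.
    assert len(notes) == 1
    assert result["candidate"] == "cand-001"

test_save_note_appends_and_returns_note()
print("assertion-backed test passed")
```

The same shape applies in Apex with `System.assertEquals` calls after `Test.stopTest()`.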
"domain": "codereview.stackexchange",
"id": 34640,
"tags": "unit-testing, salesforce-apex"
} |
What would the effects be on Earth if Jupiter was turned into a star? | Question: In Clarke's book 2010, the monolith and its brethren turned Jupiter into the small star nicknamed Lucifer. Ignoring the reality that we won't have any magical monoliths appearing in our future, what would the effects be on Earth if Jupiter was turned into a star?
At its closest and furthest:
How bright would the "back-side" of the earth be with light from Lucifer?
How much heat would the small star generate on earth?
How many days or months would we actually have night when we circled away behind the sun?
How much brighter would the sun-side of earth be when Lucifer and the sun both shine on the same side of the planet?
Answer: Before I start, I'll admit that I've criticized the question based on its improbability; however, I've been persuaded otherwise. I'm going to try to do the calculations based on completely different formulas than I think have been used; I hope you'll stay with me as I work it out.
Let's imagine that Lucifer becomes a main-sequence star - in fact, let's call it a low-mass red dwarf. Main-sequence stars follow the mass-luminosity relation:
$$\frac{L}{L_\odot} = \left(\frac{M}{M_\odot}\right)^a$$
Where $L$ and $M$ are the star's luminosity and mass, and $L_\odot$ and $M_\odot$ and the luminosity and mass of the Sun. For stars with $M < 0.43M_\odot$, $a$ takes the value of 2.3. Now we can plug in Jupiter's mass ($1.8986 \times 10 ^{27}$ kg) into the formula, as well as the Sun's mass ($1.98855 \times 10 ^ {30}$ kg) and luminosity ($3.846 \times 10 ^ {26}$ watts), and we get
$$\frac{L}{3.846 \times 10 ^ {26}} = \left(\frac{1.8986 \times 10 ^ {27}}{1.98855 \times 10 ^ {30}}\right)^{2.3}$$
This becomes $$L = \left(\frac{1.8986 \times 10 ^ {27}}{1.98855 \times 10 ^ {30}}\right)^{2.3} \times 3.846 \times 10 ^ {26}$$
which then becomes
$$L = 4.35 \times 10 ^ {19}$$ watts.
Now we can work out the apparent brightness of Lucifer, as seen from Earth. For that, we need the formula
$$m = m_\odot - 2.5 \log \left(\frac {L}{L_\odot}\left(\frac {d_\odot}{d}\right) ^ 2\right)$$
where $m$ is the apparent magnitude of the star, $m_\odot$ is the apparent magnitude of the Sun, $d_\odot$ is the distance to the Sun, and $d$ is the distance to the star. Now, $m_\odot = -26.73$ and $d_\odot$ is 1 (in astronomical units). $d$ varies. Jupiter is about 5.2 AU from the Sun, so at its closest distance to Earth, it would be ~4.2 AU away. We plug these numbers into the formula, and find
$$m = -6.25$$
which is a lot dimmer than the Sun. Now, when Jupiter is farthest from Earth, it is ~6.2 AU away. We plug that into the formula, and find
$$m = -5.40$$
which is dimmer still - although, of course, Jupiter would be completely blocked by the Sun. Still, for finding the apparent magnitude of Jupiter at some distance from Earth, we can change the above formula to
$$m = -26.73 - 2.5 \log \left(\frac {4.35 \times 10 ^ {19}}{3.846 \times 10 ^ {26}}\left(\frac {1}{d}\right) ^ 2\right)$$
By comparison, the Moon can have an average apparent magnitude of -12.74 at full moon - much brighter than Lucifer. The apparent magnitude of both bodies can, of course, change - Jupiter by transits of its moon, for example - but these are the optimal values.
While the above calculations really don't answer most parts of your question, I hope it helps a bit. And please, correct me if I made a mistake somewhere. LaTeX is by no means my native language, and I could have gotten something wrong.
I hope this helps.
Edit
The combined brightness of Lucifer and the Sun would depend on the angle of the Sun's rays and Lucifer's rays. Remember how we have different seasons because of the tilt of the Earth's axis? Well, the added heat would have to do with the tilt of Earth's and Lucifer's axes relative to one another. I can't give you a numerical result, but I can add that I hope it wouldn't be too much hotter than it is now, as I'm writing this!
Second Edit
Like I said in a comment somewhere on this page, the mass-luminosity relation really only works for main-sequence stars. If Lucifer was not on the main sequence... well, then none of my calculations would be right. | {
"domain": "astronomy.stackexchange",
"id": 887,
"tags": "star, the-sun, light, jupiter, heat"
} |
Why do meteorological seasons start earlier than astronomical seasons? | Question: In meteorology, the seasons always start at the beginning of the month in which the astronomical seasons start.
The astronomical seasons start around the 21st of a month, so I guess it would make more sense to start the meteorological season on the first day of the month following the start of the astronomical season.
Another, more logical reason to do this: for example, the meteorological winter starts on December 1 and ends on February 28 (or 29) of the next year, so meteorology actually measures in broken years. If the meteorological winter started on January 1 and ended on March 31, then all seasons would fit exactly within the same year.
So is there a reason why meteorologists do it this way or is it just arbitrarily chosen?
Answer: Meteorological seasons are based on temperature, whereas astronomical seasons are based on the position of the earth in relation to the sun. So meteorological winter is the three coldest months of the year (December, January, and February) and meteorological summer is the three warmest months of the year (June, July, and August). More information can be found on this NOAA web site. | {
"domain": "earthscience.stackexchange",
"id": 1345,
"tags": "meteorology, seasons, astronomy"
} |
Derivation of Spin Lagrangian | Question: Recently, I have been looking at the Lagrangians that describe particles of different spin. I know the Lagrangian takes the form $L =$ kinetic $-$ potential, and it seems these equations do take this form. But I don't see how they correspond to the kinetic or potential parts. How are these equations derived?
The Klein-Gordon equation spin 0
$$L=c^2\partial _\lambda \phi \partial^\lambda \phi ^* -\left(\frac{mc}{\hbar}\right)^2\phi \phi^*$$
Dirac Lagrangian spin 1/2
$$L=i\hbar c \tilde\psi \gamma ^\mu \partial _\mu \psi-mc^2\tilde\psi \psi $$
The Proca equation spin 1
$$L=-\frac{1}{16 \pi}F^{\mu \nu} F_{\mu \nu} +\frac{1}{8 \pi} \left(\frac{mc}{\hbar}\right)^2A^\mu A_\mu$$
I suppose I mean to say, is there one equation that you can get all three of these equations from?
Answer: As mentioned in the comment by @Chiral Anomaly, these are the Lagrangians for the particles of these spins. But it seems like you need a bit more than that.
We require our theories to be invariant under Lorentz transformations. We know that from special relativity. Now the first question is: what particles are "compatible" with this symmetry. To answer that we need to look at the irreducible representations of the Lorentz group. These essentially describe "objects" that transform into one another under Lorentz transformations, i.e. they don't mix up with other such objects. These irreducible representations are characterised by something called spin. Spin 0 is a scalar field (it doesn't transform under a Lorentz transformation); spin 1 is a vector (it transforms as a vector). Now it turns out there are also irreducible representations with half-integer spin: spin 1/2 describes a fermion, spin 3/2 etc.
Now that we have these particles, let us ask what theory we can write for them. This means: what Lorentz invariant Lagrangians can we write with these fields? The first step is to focus on what we call free Lagrangians. These are Lagrangians that are at most quadratic in the fields. Why do we first look at that? Because in the quantum theory we need to calculate a path integral, which is essentially an integral of $e^{i\int L_{\text{free}}}$ where $L_{\text{free}}$ is the Lagrangian. Now if $L_{\text{free}}$ is quadratic these are just Gaussian integrals and we can solve this exactly. The Lagrangians you mention are the quadratic Lorentz invariant Lagrangians for spin 0, 1/2, and 1.
Comments
As they are quadratic you can view them as being only the kinetic part (in classical physics the kinetic part is quadratic too). But unfortunately free fields are not very interesting, as these particles don't interact with one another. To make the theory interesting you need terms that mix up these fields and are higher than quadratic. Just as a potential energy in classical physics. You can now write the full Lagrangian as $L_{\text{free}}+ L_{\text{interaction}}$ and you can consider the interaction as a correction to the free theory and perform perturbation theory.
The spin one Lagrangian you mention is not the one we know describes spin one particles. Indeed we know from experiment that the spin one particles (photons) are massless so the $A^\mu A_\mu$ term is not there.
I have simplified many things and in doing so I will probably attract many comments/corrections. But this is really the gist of where these equations come from. | {
"domain": "physics.stackexchange",
"id": 78910,
"tags": "particle-physics, lagrangian-formalism, quantum-spin, field-theory"
} |
Superconductive magnet as a source of energy? | Question: Could we charge a superconductive magnet and use it as a source of energy?
Answer: Yes. The energy expended in producing the magnetic field can be recovered during the field's collapse, which occurs after the current producing the field is shut off.
For example, the amount of energy stored in the superconducting magnets that steer the particle beams in CERN's Large Hadron Collider is equal to the kinetic energy of a fully-loaded jumbo jet going 500 MPH. Shutting the magnets down requires dissipating all that energy, and if anything goes wrong during that process, parts of the collider will get blown to pieces in an instant.
Note that the energy stored in the magnetic field of a superconducting magnet did not get there for free. When running, those magnets and the machinery needed to support them consume as much electrical power as a small city, for which CERN pays the bill. | {
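For a rough feel of the comparison, here is a back-of-the-envelope check with assumed figures (the jet mass is my assumption, not CERN's number):

```python
# Assumed figures (mine, not CERN's): a loaded 747 is roughly 4e5 kg,
# and 500 mph is about 223.5 m/s.
mass = 4.0e5                # kg
v = 500 * 0.44704           # mph -> m/s
ke = 0.5 * mass * v**2
print(f"KE ~ {ke:.1e} J")   # ~1.0e10 J, i.e. on the order of 10 GJ

# The magnet side of the comparison is E = (1/2) L I^2 summed over the
# superconducting circuits -- the same order of magnitude.
```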
"domain": "physics.stackexchange",
"id": 54591,
"tags": "electromagnetism, magnetic-fields, energy-conservation, superconductivity"
} |
What axis of rotation should be used for rotational kinetic energy? | Question: I know the kinetic energy of a rigid object is
\begin{align}\tag{$1$}
KE = \frac{1}{2}mv^{2} + \frac{1}{2}I\omega^{2}
\end{align}
where $v$ is the velocity of the center of mass of the object, $\omega$ is its angular velocity, and $I$ is the moment of inertia about its center of mass.
Now the things is, shouldn't the moment of inertia be specified about an axis as opposed to a point? I can understand that the axis has to be through the center of mass, but which direction ought it to be oriented? If there is a specific axis we have to use, how do we calculate this axis for an arbitrary body undergoing arbitrary motion?
For an example, consider a uniform solid cylinder (radius $r$, height $h$) rolling without slipping at a constant velocity $v > 0$. I could consider the axis along the axial direction of the cylinder through the center of mass and obtain
$$ I = \frac{1}{2}mr^{2}. $$
We can consider $(1)$ with $\omega = v/r\ne 0$ with no issues. But couldn't I also consider a perpendicular axis oriented, say, vertically and through the center of mass? In that case,
$$ I' = \frac{1}{12}m(3r^{2} + h^{2}). $$
This is clearly different, and as the cylinder rolls, I would expect the angular velocity to be $\omega\,' = 0$. Wouldn't this change the result in $(1)$?
Answer: The expression you chose has a scalar $\omega$ as opposed to the (bi)vectorial quantity $\vec\omega$. Now, since you have stated that it is
$I$ is the moment of inertia about its centre of mass
Then you have only the choice of picking an axis (anti-)parallel to the direction of the angular velocity. If you want the more general form, then it will be
$$\text{RKE}=\frac12\vec\omega\cdot\vec{\vec I}\cdot\vec\omega$$
where the moment of inertial tensor then allows you to not specify the axis.
However, the kinetic energy of a rigid object is actually distributed between the LKE and RKE parts depending upon the reference point about which you evaluate it. For example, there is always an instantaneous axis about which all of the KE is RKE.
For example, if an object is rotating about a pivot, you might want to take the pivot as the centre, and then there will not be a need to cover LKE. Even if the pivot is temporary, say in a rolling ball, this is a helpful situation to consider. | {
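To see the scalar and tensor forms agree, here is a small numerical sketch (my own numbers) for the rolling cylinder from the question: contracting the full inertia tensor with $\vec\omega$ along the symmetry axis reproduces $\frac12 I\omega^2$ with $I = \frac12 mr^2$, and the perpendicular moment $I'$ never enters.

```python
# My numbers, not from the question: m = 2 kg, r = 0.5 m, h = 1 m, v = 3 m/s.
m, r, h, v = 2.0, 0.5, 1.0, 3.0

I_axial = 0.5 * m * r**2                    # moment about the symmetry axis
I_perp = m * (3 * r**2 + h**2) / 12.0       # the "other" moment I'
I = [[I_perp, 0.0, 0.0],
     [0.0, I_perp, 0.0],
     [0.0, 0.0, I_axial]]                   # inertia tensor, principal axes
w = [0.0, 0.0, v / r]                       # omega along the symmetry axis (z)

# RKE = (1/2) w . I . w -- only the axial component survives:
rke = 0.5 * sum(w[i] * I[i][j] * w[j] for i in range(3) for j in range(3))
lke = 0.5 * m * v**2
print(rke, rke + lke)  # 4.5 and 13.5 = (3/4) m v^2, as expected for rolling
```

Because $\vec\omega$ has no component along the perpendicular axes, the entries built from $I'$ contribute nothing to the contraction, which resolves the question's apparent ambiguity.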
"domain": "physics.stackexchange",
"id": 95698,
"tags": "newtonian-mechanics, rotational-dynamics, reference-frames, rigid-body-dynamics, moment-of-inertia"
} |
Calculating density of states given energy levels and degeneracy | Question: In my statistical mechanics class, my professors did a problem in which he calculated the density of states, however I am having trouble justifying his approach. I did the problem beforehand in an entirely different way, and got the same answer, but his method is much quicker and I'd like to understand why it works.
Consider a gas of $N$ identical spin-$0$ bosons confined by an isotropic three-dimensional potential. The energy levels in this potential are $ε_n = n \hbar \omega$ with $n$ a nonnegative integer and $ω$ the classical oscillation frequency. The degeneracy of level $n$ is $g_n = \dfrac{(n+1)(n+2)}2$.
Find a formula for the density of states, $D(ε)$ for an atom confined in this potential. Assume that $n\gg1$ and recall that the density of states $D(ε)$ is defined by the fact that $D(ε)dε$ is the number of orbitals of energy between $ε$ and $ε+dε$
My method:
The total number of energy levels below $ε_n$ will simply be:
$$N(ε_n) = \sum_{k = 0}^n g_k$$
Because $n \gg 1$, we can approximate this sum as an integral and $g_n$ as $\frac{n^2}2$
$$N(ε_n) \approx \int_0^n \frac{x^2}2 dx = \frac{n^3}6 = \frac{ε_n^3}{6 \hbar^3 \omega^3}$$
Taking the logarithm of both sides:
$$\log(N(ε_n)) = 3 \log(ε_n) - \log(6 \hbar^3 \omega^3)$$
Differentiating:
$$\frac{dN}{N} = 3 \frac{dε}{ε} \implies D(ε) = \frac{dN}{dε} = \frac{3N}{ε} = \frac{ε^2}{2 \hbar^3 \omega^3}$$
His method:
$n >> 1 \implies g_n \approx \frac{n^2}2$ is the degeneracy of the energy level $ε_n$. The spacing between energy levels is $ε_{n+1} - ε_n = \hbar \omega$
Apply $D(ε) = \frac{dN}{dε}$ in a discrete setting:
$$D(ε) = \frac{n^2/2}{\hbar \omega} = \frac{ε^2}{2 \hbar^3 \omega^3}$$
I can understand why $dε$ can be said to be $ε_{n+1} - ε_n$, but why does $dN = g_n$ here? It seems more reasonable, by this argument, that we would want $g_{n+1} - g_n$, which does not appear to yield the correct answer.
I believe it boils down to some confusion on what "the number of orbitals of energy between $ε$ and $ε + dε$" actually means. It's not an easy task to visualize when you are given discrete variables (despite them behaving somewhat continuously for $n\gg1$).
Answer: From first formula of your method for $n\gg 1$ we get
$$dN = N(\varepsilon_{n+1}) - N(\varepsilon_n) = g_{n+1} \approx g_n.$$
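A quick numerical check of both methods (my own script, in units where $\hbar\omega = 1$ so $ε_n = n$ and the level spacing is 1):

```python
# Numerical sanity check (my own, not from the course) in units where
# hbar*omega = 1, so eps_n = n and the level spacing d(eps) = 1.
n = 1000

def g(k):
    """Degeneracy of level k: (k+1)(k+2)/2."""
    return (k + 1) * (k + 2) // 2

N_exact = sum(g(k) for k in range(n + 1))   # levels 0..n, equals C(n+3,3)
N_approx = n**3 / 6
print(N_exact / N_approx)                   # -> ~1.006 for n = 1000

# The increment of N over one level spacing is exactly the degeneracy
# of the level crossed, which is why dN = g_n (not g_{n+1} - g_n):
dN = N_exact - sum(g(k) for k in range(n))  # N(eps_n) - N(eps_{n-1}) = g(n)
assert dN == g(n)
print(abs(dN - n**2 / 2) / dN)              # g_n ~ n^2/2 to ~0.3%
```

Both approximations agree with the exact sums at the sub-percent level for $n = 1000$, which is the sense in which $D(ε) = ε^2 / (2\hbar^3\omega^3)$ holds for $n \gg 1$.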
"domain": "physics.stackexchange",
"id": 45386,
"tags": "statistical-mechanics, differentiation, bosons, density-of-states"
} |
How to describe the languages of these regular expressions in plain English? | Question: This is actually a question my lecturer gave us, so I know that it's kind of homework, but I tried to answer this and still had no luck.
The question is
Give English description of the languages of the following regular
expressions.
$(1+\epsilon)(00^*1)^*0^*$
$(a^*b^*)^*aaa(a+b)^*$
$(a+ba)^*b^*$
I could only guess that the second one is the set of strings that have at least three consecutive $a$s. But I'm really clueless about the others. Can someone clarify this for me? Any help is highly appreciated.
Answer: The first regular expression captures all strings over $\{0,1\}$ not containing two consecutive 1s. It is not hard to check that any string in the language of the given regular expression has this property, and with some more work you can show the converse.
The second regular expression captures all strings over $\{a,b\}$ containing $aaa$ as a (consecutive) substring, as you also noticed.
The third regular expression captures all strings over $\{a,b\}$ not containing $bba$ as a (consecutive) substring, though this perhaps requires some verification. | {
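All three descriptions can be brute-force verified with Python's `re` module (my own verification script, not part of the answer; theory-notation `+` becomes `|`, and $\epsilon$ becomes an optional group):

```python
import re
from itertools import product

# Theory "+" -> "|", epsilon -> "?"; fullmatch anchors the whole string.
r1 = re.compile(r'(1?)(00*1)*0*')        # claim: no two consecutive 1s
r2 = re.compile(r'(a*b*)*aaa(a|b)*')     # claim: contains "aaa"
r3 = re.compile(r'(a|ba)*b*')            # claim: no "bba" substring

def words(alphabet, max_len):
    """Yield every string over the alphabet up to the given length."""
    for n in range(max_len + 1):
        for w in product(alphabet, repeat=n):
            yield ''.join(w)

for w in words('01', 9):
    assert bool(r1.fullmatch(w)) == ('11' not in w)
for w in words('ab', 9):
    assert bool(r2.fullmatch(w)) == ('aaa' in w)
    assert bool(r3.fullmatch(w)) == ('bba' not in w)
print("all three descriptions verified for strings up to length 9")
```

This is of course not a proof, but exhaustively checking short strings is a useful way to gain confidence in (or find counterexamples to) a proposed English description.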
"domain": "cs.stackexchange",
"id": 15650,
"tags": "regular-expressions"
} |
Improving readability of 3d math in Java | Question: Doing graphics in Java, I find myself writing lines like these regularly:
Vector3f b = mid.add( a.mult(yOffset + length / 2) ).add( up.cross(a).mult(xOffset) );
It gets even worse when I'm reusing objects.
Vector3f vTemp = VarPool.getVector3f();
Vector3f b = VarPool.getVector3f();
b.set(mid).addMutable( a.multAndStore(yOffset + length / 2, vTemp) ).addMutable( up.crossAndStore(a, vTemp).multMutable(xOffset) );
//...do some useful computation...
VarPool.recycle(vTemp, b);
I've tried coping using a combination of these two methods:
1. Place 'translations' in comments above dense lines
//i.e. Vector3f b = ( mid + a * (yOffset + length / 2) + (up X a) * xOffset );
//this calculates such and such
b.set(mid).addMutable( a.multAndStore(yOffset + length / 2, vTemp) ).addMutable( up.crossAndStore(a, vTemp).multMutable(xOffset) );
Disadvantages: Extra noise. 'Translation' comments can look like actual code that's been commented out. There's no getting around the real code being hard to read.
2. Break dense chains into more readable lines
b
.set(mid)
.addMutable( a.multAndStore(yOffset + length / 2, vTemp) )
.addMutable( ( vTemp = up.crossAndStore(a, vTemp)
.multMutable(xOffset) ) );
Disadvantages: Some calculations don't chain up quite as nicely. When different return types are involved (e.g. matrices, quaternions, scalars), chaining is not always logical or possible.
Any other suggestions? Switching language is not really viable.
Answer: @DaveNewton Not necessarily, sometimes it's Groovy :-)
Advantage
code/syntax compatible with java.
Operator overrides allow you to write your code so it looks more like your comment i.e. operators rather than methods
Disadvantage
Groovy is slower than java (but this can be reduced with Groovy++) | {
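For a concrete feel of what operator overrides buy you, here is a sketch in Python (the Groovy version would use `plus()`/`multiply()` the same way; the `Vec3` class and the numbers are mine, not the Java library's API):

```python
class Vec3:
    """Tiny 3-vector with overloaded + and * (scalar on the right)."""
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def __add__(self, o):
        return Vec3(self.x + o.x, self.y + o.y, self.z + o.z)

    def __mul__(self, s):          # vector * scalar
        return Vec3(self.x * s, self.y * s, self.z * s)

    def cross(self, o):
        return Vec3(self.y * o.z - self.z * o.y,
                    self.z * o.x - self.x * o.z,
                    self.x * o.y - self.y * o.x)

mid, a, up = Vec3(0, 0, 0), Vec3(1, 0, 0), Vec3(0, 0, 1)
y_offset, x_offset, length = 0.5, 2.0, 3.0

# Reads like the "translation" comment, not like the method chain:
b = mid + a * (y_offset + length / 2) + up.cross(a) * x_offset
print(b.x, b.y, b.z)  # 2.0 2.0 0.0
```

The expression line is nearly identical to the comment "translation" from the question, which is the main readability win of operator overloading.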
"domain": "codereview.stackexchange",
"id": 1995,
"tags": "java"
} |
Does a sulfuric acid/nitric acid mixture dissolve PTFE (ie. teflon)? | Question: In our lab, we commonly use a mixture of sulfuric acid and nitric acid to clean borosilicate glassware. I'd like to clean some PTFE pieces in there, too, but not sure if they will dissolve or not.
Does anyone have any first-hand experience with this, and know whether or not a sulfuric acid/nitric acid mixture dissolve PTFE?
Answer: Having used "sulfonitric acid" (a sulfuric and nitric acid mixture) for at least two decades, I can say that PTFE is definitely stable to such a mixture.
Safety on the use of "sulfonitric acid"
I know it is off-topic but I think that safety should be learned as light touches, not as a-difficult-to-swallow course. (Please tell me if it is not in accordance with the site's rules).
You can safely use "sulfonitric acid" (i.e. usually a mixture of one volume of conc. sulfuric acid poured slowly into one volume of conc. nitric acid) to clean glassware and PTFE-ware if you follow some common sense rules. Explosions (or at least too vivid reactions) reported were the direct consequences of a poor understanding of what "sulfonitric acid" does.
Handle it under a fume hood (I told you it was common sense!!)
NEVER pour sulfonitric acid on organic material as the latter will be oxidized in an explosive way! Every piece of glassware should first:
Be cleaned as much as possible with any solvent or manually removing solid substance which is stuck where it should not be,
Be thoroughly rinsed with a solvent miscible with water (acetone, methanol, ethanol...),
Be thoroughly rinsed with tap water, in order to eliminate most of any organic solvent,
Only then can you pour the sulfonitric acid onto your glassware and let it do its magic.
Do not think that a plastic which can seemingly resist any solvent will actually resist exposure to sulfonitric acid! PTFE will resist but not common plastic materials.
If you follow these rules, there is no reason why you should experience any dangerous reaction in your lifetime. Be especially careful with sintered funnels. | {
"domain": "chemistry.stackexchange",
"id": 6199,
"tags": "experimental-chemistry, cleaning"
} |
Suppose you have airflow through a narrow glass tube. Does the air slip at glass surface? | Question: I was just pondering a silly physics problem where air is made to flow though a narrow tube. For a viscous fluid and laminar flow through a pipe you have the
Hagen–Poiseuille equation.
$\Delta P=\frac{8\mu L Q}{\pi R^{4}}$
I believe it should apply for air, although thinking about it I am not sure quite why. A viscous fluid like treacle or marmite would stick to the glass surface, so that the velocity at the glass surface is zero. But in the case of air, is velocity actually zero at the glass surface, or would it slip? If I am right and it is essentially zero, then why? Thinking of air as just particles of relatively long root mean square distance apart, it is hard to see how a shear sticking effect could manifest at the gas/glass interface.
Answer: ‘Viscous’ doesn’t mean ‘sticky’. It means “the fluid exerts force against adjacent slip”.
The viscosity of air or water is less than treacle's, but it's still non-zero. Air next to a non-moving surface will feel a force that'll tend to make it stop. Smaller viscosity means less force, but that just means a less massive (i.e. thinner) layer is brought to a stop. | {
"domain": "physics.stackexchange",
"id": 62887,
"tags": "fluid-dynamics, drag, viscosity"
} |
Are hill climbing variations always optimal and complete? | Question: Are hill climbing variations (like steepest ascent hill climbing, stochastic hill climbing, random restart hill climbing, local beam search) always optimal and complete?
Answer: No, they are prone to get stuck in local maxima, unless the whole search space is investigated.
A simple algorithm will only ever move upwards; if you imagine you're in a mountain range, this will not get you very far, as you will need to go down before going up higher. You can see that going down a bit will have a net benefit, but the search algorithm will not be able to see that.
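This can be made concrete with a small sketch (my own illustration, not from the question; the one-dimensional function and all names are invented). A greedy climber started on the wrong slope gets stuck on a lower peak, while the random-restart variation from the question improves the odds of reaching the higher one:

```python
import random

def hill_climb(f, start, step=0.1, max_iters=1000):
    """Greedy ascent: move to a neighbour only while it improves f."""
    x = start
    for _ in range(max_iters):
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):
            return x  # local maximum: no uphill neighbour left
        x = best
    return x

def random_restart_hill_climb(f, restarts, lo=-10.0, hi=10.0, seed=0):
    """Parachute several climbers at random points; keep the best peak found."""
    rng = random.Random(seed)
    peaks = [hill_climb(f, rng.uniform(lo, hi)) for _ in range(restarts)]
    return max(peaks, key=f)

# Two peaks: a local maximum near x = -5 (height -1) and the
# global maximum near x = +5 (height 0).
f = lambda x: -abs(x - 5) if x > 0 else -abs(x + 5) - 1

stuck = hill_climb(f, start=-8.0)        # climbs to the lower peak and stops
best = random_restart_hill_climb(f, 20)  # very likely lands on the higher one
```

A single climber started at -8 ends near -5; with 20 restarts the higher peak near +5 is almost certainly found, but as the answer says, still not guaranteed.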
Random restart (and similar variations) allow you to do that, up to a point. Imagine you have ten people that you parachute over your mountain range, but they can only go upwards. Now you've got a better chance of finding a higher peak, but there's still no guarantee that any of them will reach the highest one. | {
"domain": "ai.stackexchange",
"id": 2733,
"tags": "search, proofs, hill-climbing, optimality, completeness"
} |
How can you check if a 2SAT problem has a bad loop | Question: I'm trying to figure out why this is true:
The clauses {a,b}, {b,~c}, {c,~a} constitute a 2SAT problem with an implication graph without bad loops.
Can someone show me how to illustrate this and indicate how I can find whether any 2SAT problem has a bad loop or not. Thank you so much :)
Answer: The implication graph is constructed using the following ingredients:
Vertices: One vertex for each variable, and one for each negation of a variable
Edges: For each clause {x,y}, create an edge from -x to y and one edge from -y to x
The following is the implication graph corresponding to your set of clauses, given as an edge list: -a$\to$b and -b$\to$a (from {a,b}); -b$\to$-c and c$\to$b (from {b,~c}); -c$\to$-a and a$\to$c (from {c,~a}).
A bad loop (I'm guessing this is what you want, haven't encountered that wording before) is a loop in the implication graph that contains both a variable and its negation (i.e. both x and -x). Clearly there is no such loop in your case.
Why "bad loop"? It is because each edge represents an implication that necessarily must hold for your 2-SAT instance. For example, the first clause reads "a or b". If this clause is to be satisfied, then we must have one of the two variables true. So:
If a isn't true (i.e. -a is true), then b must be true, which is the implication -a$\to$b
If b isn't true (i.e. -b is true), then a must be true, which is the implication -b$\to$a
In a "bad cycle", you have a cycle of such implications, so that x$\to$...$\to$-x$\to$...$\to$x. Such a cycle of implications can never be satisfied, otherwise x would be both true and false at the same time. So if you have any bad cycles, the 2-SAT formula is unsatisfiable!
In fact, a 2-SAT formula is unsatisfiable when, and only when, there is a bad cycle in the implication graph. So your formula is, by the absence of bad loops, satisfiable. The proof of satisfiability in the case of no bad loops even yields a poly-time algorithm to find a satisfying assignment (which is great : ) )
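For completeness, here is a small sketch of the bad-loop test (my own code, not from the question; the literal encoding and all names are made up, and plain reachability checks are used instead of a linear-time strongly-connected-components algorithm):

```python
def implication_graph(clauses):
    """Literals are nonzero ints; -v denotes the negation of v.
    Each clause (x, y) contributes the edges -x -> y and -y -> x."""
    edges = {}
    for x, y in clauses:
        edges.setdefault(-x, set()).add(y)
        edges.setdefault(-y, set()).add(x)
    return edges

def reachable(edges, src, dst):
    """Depth-first search: is there a directed path from src to dst?"""
    seen, stack = set(), [src]
    while stack:
        u = stack.pop()
        if u == dst:
            return True
        if u not in seen:
            seen.add(u)
            stack.extend(edges.get(u, ()))
    return False

def has_bad_loop(clauses):
    """A bad loop exists iff some v and -v lie on a common cycle."""
    g = implication_graph(clauses)
    variables = {abs(lit) for clause in clauses for lit in clause}
    return any(reachable(g, v, -v) and reachable(g, -v, v) for v in variables)

# a=1, b=2, c=3: the clauses {a,b}, {b,~c}, {c,~a} from the question.
print(has_bad_loop([(1, 2), (2, -3), (3, -1)]))            # False: satisfiable
print(has_bad_loop([(1, 2), (-1, 2), (1, -2), (-1, -2)]))  # True: unsatisfiable
```

The second call encodes the clearly unsatisfiable set (x or y), (not x or y), (x or not y), (not x or not y), where every variable is forced both ways.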
So how to assign values to the variables and satisfy your (or any) 2-SAT formula? I'll refer to https://en.wikipedia.org/wiki/2-satisfiability#Strongly_connected_components for the algorithm, but be warned that you need to know of the concept of "strongly connected component" to fully understand the algorithm. | {
"domain": "cs.stackexchange",
"id": 5675,
"tags": "complexity-theory, logic, satisfiability"
} |
arm_navigation problem with sia10d_mesh_arm_navigation | Question:
Hi there,
I'm fairly new to ROS and especially the ROS packages for industrial robotics and automation applications. When I was creating a package, I used the link "http://code.google.com/p/swri-ros-pkg/" to download "sia10d_mesh_arm_navigation". For that I created a package and the relevant folders with files (folders like config, launch and src). Then I tried to build the above-mentioned package and learned that I have to install the arm_navigation stack for its dependencies. After that, I tried to build the necessary packages from the arm_navigation stack. I was unsuccessful with ompl_ros_interface. It gave the following error message even though I had successfully built the ompl package.
mkdir -p bin
cd build && cmake -Wdev -DCMAKE_TOOLCHAIN_FILE=`rospack find rosbuild`/rostoolchain.cmake ..
[rosbuild] Building package ompl_ros_interface
[rosbuild] Including /opt/ros/fuerte/share/rospy/rosbuild/rospy.cmake
[rosbuild] Including /opt/ros/fuerte/share/roscpp/rosbuild/roscpp.cmake
[rosbuild] Including /opt/ros/fuerte/share/roslisp/rosbuild/roslisp.cmake
-- checking for module 'ompl'
-- package 'ompl' not found
CMake Error at /usr/share/cmake-2.8/Modules/FindPkgConfig.cmake:266 (message):
A required package was not found
Call Stack (most recent call first):
/usr/share/cmake-2.8/Modules/FindPkgConfig.cmake:320 (_pkg_check_modules_internal)
CMakeLists.txt:20 (pkg_check_modules)
-- Configuring incomplete, errors occurred!
[ rosmake ] Output from build of package ompl_ros_interface written to:
[ rosmake ] /home/sameera/.ros/rosmake/rosmake_output-20120525-015900/ompl_ros_interface/build_output.log
[rosmake-0] Finished <<< ompl_ros_interface [FAIL] [ 1.74 seconds ]
[ rosmake ] Halting due to failure in package ompl_ros_interface.
[ rosmake ] Waiting for other threads to complete.
[ rosmake ] Results:
[ rosmake ] Built 63 packages with 1 failures.
[ rosmake ] Summary output to directory
[ rosmake ] /home/sameera/.ros/rosmake/rosmake_output-20120525-015900
I was just wondering if there's a proper method to overcome the above problem.
Thanks in advance
Originally posted by samee on ROS Answers with karma: 3 on 2012-05-24
Post score: 0
Answer:
You shouldn't need to build arm_navigation from source - just apt-get install ros-fuerte-arm-navigation. If you do want to build from source on fuerte you'll need to install the separate ompl deb using 'sudo apt-get install ros-fuerte-ompl'. If you have other issues using ros-industrial code with fuerte
Originally posted by egiljones with karma: 2031 on 2012-05-25
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 9538,
"tags": "arm-navigation"
} |
Run-time analysis of distributed network decomposition algorithm | Question: I'm trying to understand the run-time analysis in the article [1].
The authors define the following notation: $g(n) = \hat O(f(n))$ if $g(n) = O(f(n)^{O(1)})$. In the run-time analysis of their algorithm, they find that
$$T(l) \leq \hat O(p\log n)10^{\log_{p/2} l} \equiv f(l,p)$$
where $n$ is the number of nodes in the network, $l$ is the number of nodes in the input to the (recursive) algorithm and $p$ is a parameter.
Then they claim that the minimum of $f(n,p)$ is attained when $p=2^{\Theta(\sqrt{\log n})} = \hat O(n^{\epsilon(n)})$ with $\epsilon(n) = \frac 1{\sqrt{\log n}}$. This is the part that I don't understand. How do you minimize $f(n,p)$?
[1] On the Complexity of Distributed Network Decomposition by A. Panconesi and A. Srinivasan (1996)
Answer: Calculus: any local minimum of the function $f(x)$ must be a point $x$ that satisfies $f'(x)=0$, where $f'(x)$ is the first derivative of $f(x)$. So, take the derivative, set it to zero, and find all solutions. Each solution is a candidate for the minimum. You then need to check each candidate to see which makes $f(x)$ smallest. You'll also need to check the "endpoints", e.g., the smallest and largest possible value for $x$ need to be added as candidates.
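Concretely, for the function at hand (a sketch of mine, not from the paper, writing all logarithms base 2): set $u = \log p$ and $c = \log 10$, and use $\log_{p/2} n = \frac{\log n}{\log p - 1} = \frac{\log n}{u-1}$, so that

$$\log f(n,p) = u + \frac{c\log n}{u-1} + O(\log\log n).$$

Setting the derivative with respect to $u$ to zero gives

$$1 - \frac{c\log n}{(u-1)^2} = 0 \quad\Longrightarrow\quad u = 1 + \sqrt{c\log n} = \Theta(\sqrt{\log n}),$$

so the minimizing choice is $p = 2^{\Theta(\sqrt{\log n})}$, at which point $\log f(n,p) = \Theta(\sqrt{\log n})$, i.e. $f(n,p) = 2^{\Theta(\sqrt{\log n})}$. Since $n^{\epsilon(n)} = 2^{\log n/\sqrt{\log n}} = 2^{\sqrt{\log n}}$ for $\epsilon(n) = \frac{1}{\sqrt{\log n}}$, this is exactly $\hat O(n^{\epsilon(n)})$, matching the claim.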
You can do the same for functions of two variables, and for asymptotics. | {
"domain": "cs.stackexchange",
"id": 5703,
"tags": "algorithms, algorithm-analysis, runtime-analysis"
} |
How to determine whether a large container is air-tight? | Question: In constructing a kitchen-waste digester at home, I use a 50 Litre HDPE drum. The base of the drum is holed with a plug fitted to allow drainage when necessary.
The top has two openings - one for inlet, the other to act as outlet for the generated fluid CO2/CH4.
The drum is to be used filled to about two-thirds, lying on its side. Therefore any leakage at the drain will be immediately visible. The same applies to the inlet pipe - it too shall be filled with water.
A heated iron nail was used to make the outlet hole. Then a ball-point pen was inserted into it; a very snug fit. I then melted a silicon glue-stick to attempt to seal any leaks at this point.
The problem at hand is to determine whether the outlet is air-tight. It is possible to dunk this container in water and test for bubbles; but the same test will not be possible with a larger container (300 litres) to be used if this experiment is successful.
Suggestions, anybody?
Answer: If the container is airtight, it should get more and more difficult to pump air into it. If this difficulty (i.e. the pressure in the tank) does not rise at all or flats out after a short time, you can be sure that there must be a leak somewhere.
However, take care not to increase the pressure too much, as that might break the sealing. | {
"domain": "physics.stackexchange",
"id": 5914,
"tags": "experimental-technique"
} |
Gaussian integrals in Feynman and Hibbs | Question: I was going through the calculation of the free-particle kernel in Feynman and Hibbs (pp 43). The book describes
$$
\left(\frac{m}{2\pi i\hbar\epsilon}\right)\int_{-\infty}^{\infty}\exp\left(\frac{im}{2\hbar\epsilon}[(x_2-x_1)^2+(x_1-x_0)^2]\right)dx_1 \tag{3.4}
$$
as a Gaussian integral. And the result of this integration is given as
$$
\left(\frac{m}{2\pi i\hbar \cdot 2\epsilon}\right)^{1/2}\exp\left(\frac{im}{2\hbar\cdot 2\epsilon}(x_2-x_0)^2\right). \tag{3.4}
$$
The only way I could do this integral was treating it as a Gaussian integral over $ix_1$, in which case I get the required solution but with a negative sign upfront. Is this the right way to do it, or is there something very obvious that I am not seeing?
On a related note, problem 3-8 (pp 63) also involves a similar "Gaussian" integral in order to obtain the kernel for the quantum harmonic oscillator. My technique of performing a Gaussian integral over $ix$ goes horribly wrong there and I am not even near the correct answer which should be of the form
$$
\frac{1}{\sqrt{2\pi\sin\epsilon}}\exp\left(\frac{i}{2\sin\epsilon}[(x_0^2+x_2^2)\cos\epsilon-2x_0x_2]\right)
$$
after performing the integration,
$$
\exp\left(\frac{i\cot\epsilon}{2}(x_0^2+x_2^2)\right)$$
$$\times \int\exp\left(\frac{i}{2\sin\epsilon}[2x_1^2\cos\epsilon-2x_1(x_0+x_2)]\right)dx_1.
$$
Any help will be greatly appreciated.
Answer:
The variable
$$\epsilon\equiv\Delta t^M~>~0$$ in eq. (3.4) is the change in Minkowski time.
In order not to deal with purely oscillatory integrands, Feynman uses a prescription
$$\Delta t^M\quad\longrightarrow\quad \Delta t^M-i0^+ $$
where an infinitesimal negative imaginary part is added to make the integrand exponentially decaying.
In other words, under a Wick rotation
$$\Delta t^E ~\equiv~ i \Delta t^M$$
to Euclidean time, $\Delta t^E$ should have a positive real part.
Eq. (3.4) in Ref. 1 then follows from the Gaussian integral
$$\begin{align} &\sqrt{\frac{m}{2\pi \hbar\Delta t^E_{21}}} \sqrt{\frac{m}{2\pi \hbar\Delta t^E_{10}}}\cr
&\int_{\mathbb{R}}\!\mathrm{d}x_1~
\exp\left\{-\frac{m}{2\hbar}\left[\frac{\Delta x_{21}^2}{\Delta t^E_{21}} +\frac{\Delta x_{10}^2}{\Delta t^E_{10}}\right]\right\}\cr
~=~&\sqrt{\frac{m}{2\pi \hbar\Delta t^E_{20}}}\exp\left\{-\frac{m}{2\hbar}\frac{\Delta x_{20}^2}{\Delta t^E_{20}}\right\},\end{align}$$
where
$$\begin{align} \Delta x_{ab}~:=~&x_a-x_b, \cr \Delta t_{ab}~:=~&t_a-t_b, \cr a,b~\in~&\{0,1,2\}.\end{align} $$
The above square root factor is the famous Feynman's fudge factor, which can be understood in the Hamiltonian phase space path integral as arising from a Gaussian momentum integration, cf. e.g. my Phys.SE answer here.
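As a quick numerical sanity check of the Euclidean-time Gaussian identity above (a sketch of mine, setting $m = \hbar = 1$ for convenience; these units are my assumption and not part of the original derivation), one can convolve two kernels over the intermediate point and compare against the single kernel over the combined interval:

```python
import numpy as np

def K(xa, xb, dt):
    """Euclidean free-particle kernel with m = hbar = 1 (my choice of units)."""
    return np.sqrt(1.0 / (2.0 * np.pi * dt)) * np.exp(-(xa - xb) ** 2 / (2.0 * dt))

# Integrate over the intermediate point x1 and compare against the
# single kernel over the combined Euclidean time interval.
x1 = np.linspace(-40.0, 40.0, 40001)
dx = x1[1] - x1[0]
lhs = np.sum(K(2.0, x1, 0.3) * K(x1, -1.0, 0.7)) * dx
rhs = K(2.0, -1.0, 0.3 + 0.7)

print(abs(lhs - rhs))  # tiny: the convolution reproduces the combined kernel
```

The Riemann sum converges extremely fast here because the integrand is a smooth, rapidly decaying Gaussian, so the two sides agree to near machine precision.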
References:
R.P. Feynman & A.R. Hibbs, Quantum Mechanics and Path Integrals, 1965. | {
"domain": "physics.stackexchange",
"id": 39814,
"tags": "quantum-mechanics, homework-and-exercises, path-integral, integration, wick-rotation"
} |
What is the relationship of the following compounds? | Question: What is the relationship of these following compounds? My guess is they are constitutional isomers. However, a couple of people said that they are enantiomers of each other. Which one is correct?
Answer: They are enantiomers. It may be difficult to see this in the way you drew it, but if you drew the structures in a proper decagon form, it may be more clear.
Think about it this way. If you were not given the stereoisomerism of the methoxy group, you would think they would be the same molecule by counting the carbons in each direction (you'll notice that methyl and methoxy groups are on carbon 6 and carbon 1 respectively).
Now, if you consider R or S configuration for the methoxy group, you'll see that they are enantiomers. This becomes clear when you apply Cahn–Ingold–Prelog priority rules to the carbon with the methoxy group. | {
"domain": "chemistry.stackexchange",
"id": 16984,
"tags": "organic-chemistry, stereochemistry"
} |
Tic Tac Toe C++ | Question: I made this Tic Tac Toe game and I would like to know your opinion and what can be improved. I also want advice about making it check if someone wins. I'm a 10th grader, so I still need to learn a lot.
#include<iostream>
#include<windows.h>
using namespace std;
int main()
{
char x1,x2,x3,x4,x5,x6,x7,x8,x9;
int x, i, a;
x1=x2=x3=x4=x5=x6=x7=x8=x9='+';
i=0;
cout << "+I+I+\n" << "+I+I+\n" << "+I+I+\n";
cout << "Tic Tac Toe\n";
system("pause");
system("CLS");
cout << "7I8I9\n" << "4I5I6\n" << "1I2I3\n";
cout << "Board with coordinates\n";
system("pause");
system("CLS");
while(i<9)
{
cout<< x7 << "I" << x8 << "I" << x9 << endl
<< x4 << "I" << x5 << "I" << x6 << endl
<< x1 << "I" << x2 << "I" << x3 << endl;
if(a%2==0)cout << "X Turn\n";
else cout << "O Turn\n";
cout << "Type coordinate of square(number) "; cin >> x;
if(x==1 && x1!='X' && x1!='O')if(a%2==0)x1='X', a++, i++;
else x1='O', a++, i++;
if(x==2 && x2!='X' && x2!='O')if(a%2==0)x2='X', a++, i++;
else x2='O', a++, i++;
if(x==3 && x3!='X' && x3!='O')if(a%2==0)x3='X', a++, i++;
else x3='O', a++, i++;
if(x==4 && x4!='X' && x4!='O')if(a%2==0)x4='X', a++, i++;
else x4='O', a++, i++;
if(x==5 && x5!='X' && x5!='O')if(a%2==0)x5='X', a++, i++;
else x5='O', a++, i++;
if(x==6 && x6!='X' && x6!='O')if(a%2==0)x6='X', a++, i++;
else x6='O', a++, i++;
if(x==7 && x7!='X' && x7!='O')if(a%2==0)x7='X', a++, i++;
else x7='O', a++, i++;
if(x==8 && x8!='X' && x8!='O')if(a%2==0)x8='X', a++, i++;
else x8='O', a++, i++;
if(x==9 && x9!='X' && x9!='O')if(a%2==0)x9='X', a++, i++;
else x9='O', a++, i++;
system("CLS");
}
cout<< x7 << "I" << x8 << "I" << x9 << endl
<< x4 << "I" << x5 << "I" << x6 << endl
<< x1 << "I" << x2 << "I" << x3 << endl;
cout<< "Who won?\n";
system("pause");
return 0;
}
Answer: Avoid platform specific code
If there is an alternative you should try to avoid platform specific code.
This entails avoiding #include <windows.h> and using things like system("pause"); which rely on commands the OS understands.
There are guides on how to wait for user input on the command line and on how to clear the screen on multiple platforms on SO.
I had to adapt your code to make it run on my machine.
Use arrays for repetitive data
You should definitely use an array instead of single variables.
char x1,x2,x3,x4,x5,x6,x7,x8,x9;
would become
char fields[9];
or even two dimensional
char fields[3][3];
You would have to change to zero based indexing for this so x4 would become fields[3] or fields[1][0] in the two dimensional case.
Use loops for repetitive tasks
The array opens up loop usage. For example, you might want to do the printing in a loop instead of a row. For the 2D array this would be:
for(int row = numberOfRows - 1; row >= 0; --row) {
for(int col = 0; col < numberOfColumns; ++col) {
cout << fields[row][col];
if(col < 2) {
cout << "I";
} else {
cout << "\n";
}
}
}
There are multiple occasions where you print the board in some way or another.
This calls for a function that does the printing:
void print(Board fields, std::string columnSeparator = "I", std::string rowSeparator = "\n") {
for(int row = 2; row >= 0; --row) {
for(int col = 0; col < 3; ++col) {
cout << fields[row][col];
if(col < 2) {
cout << columnSeparator;
} else {
cout << rowSeparator;
}
}
}
}
This function can be used like:
print(fields);
cout << "Tic Tac Toe\n";
or like
Board numbers{{'1', '2', '3'},
{'4', '5', '6'},
{'7', '8', '9'}};
print(numbers);
Don't abuse the comma operator
In your code you have many lines like
if(x==1 && x1!='X' && x1!='O')if(a%2==0)x1='X', a++, i++;
else x1='O', a++, i++;
You should not misuse the comma operator to chain multiple commands into an if. Instead, you should separate the commands by ; as usual and group them into a common scope like so:
if(x==1 && x1!='X' && x1!='O') {
if(a%2==0) {
x1='X';
a++;
i++;
} else {
x1='O';
a++;
i++;
}
}
Factor out common code
There are many similarities in the different branches of the if in the code above. When the trailing code of two branches of the same if is the same, you can extract it out of the if:
if(x==1 && x1!='X' && x1!='O') {
if(a%2==0) {
x1='X';
} else {
x1='O';
}
a++;
i++;
}
Generally, you should always use {} after control flow constructs like if, while, or for to avoid errors where adding a line does not add it to the correct loop/if.
Use the ternary operator where it improves readability
The nested if in the above code can be replaced by the ternary operator:
if(a%2==0) {
x1='X';
} else {
x1='O';
}
becomes
x1= (a%2==0) ? 'X' : 'O';
Separate validity checks from actions
Finally, you have the list of ifs that should set the appropriate field.
First, we need to adapt that to the 2D array. To do so, we have to calculate the row and column from the position x:
const int row = (x - 1) / numberOfColumns;
const int column = (x - 1) % numberOfColumns;
if(fields[row][column] == '+') {
fields[row][column] = (a%2==0) ? 'X' : 'O';
a++;
i++;
}
Duplicate variables
Reducing the code showed that variables i and a do exactly the same and will therefore have the same value. So why have two of them?
I also noticed that a was not initialized which results in undefined behavior.
Check user input, return meaningful errors
Every time we read values from the user, we have to consider what happens when the user enters an invalid value.
While your code does nothing wrong thanks to the many ifs, it would be better to make the input checks more explicit in the code and to tell the user what has gone wrong:
do{
cout << "Type coordinate of square(number) ";
cin >> x;
if(x < 1 || x > 9) {
cout << "Invalid value: Expected integer between 1 and 9!\n";
}
} while(!(0 < x && x < 10));
and similarly:
if(fields[row][column] == '+') {
fields[row][column] = (i%2==0) ? 'X' : 'O';
i++;
} else {
cout << "Field already taken!\n";
}
Naming
Names are important for the understanding of variables' meanings.
Generally, the farther away from the declaration a name is used the more descriptive it needs to be.
In your code I would rename as follows:
x1, x2, ... -> fields
x -> fieldIndex
i -> roundIndex
a -> removed as it duplicates i
Misc
You don't need to return 0; from main in C++. This is done by default if main ends before a return is encountered.
Don't using namespace std; it will bring you more trouble than it saves in typing.
Use correct formatting to improve the readability of your code. Usually, each level of nesting ({}) deepens the indentation.
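One more point: the question explicitly asked about checking whether someone wins, which the sections above don't otherwise cover. A minimal sketch of the logic (my own, in Python for brevity; porting it to a char fields[3][3] board is a matter of the same row/column/diagonal loops):

```python
def winner(fields):
    """Return 'X' or 'O' if a player has three in a row, else None.
    `fields` is a 3x3 list of lists holding 'X', 'O', or '+'."""
    lines = [list(row) for row in fields]                          # 3 rows
    lines += [[fields[r][c] for r in range(3)] for c in range(3)]  # 3 columns
    lines.append([fields[i][i] for i in range(3)])                 # main diagonal
    lines.append([fields[i][2 - i] for i in range(3)])             # anti-diagonal
    for line in lines:
        if line[0] != '+' and line.count(line[0]) == 3:
            return line[0]
    return None

print(winner([['X', 'X', 'X'],
              ['+', 'O', '+'],
              ['O', '+', '+']]))  # X
```

Call this once after every move: if it returns a player, announce the winner and stop; if the board fills with no winner, it's a draw.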
Code
This is the code that I came up with. There are still some issues that I would correct but this post has become long enough as it is. (Note that I have removed the screen clearing but it should be possible to reinsert it in a multi platform manner.)
#include <iostream>
#include <string>
constexpr int numberOfRows = 3;
constexpr int numberOfColumns = 3;
using Board = char[numberOfRows][numberOfColumns];
void print(Board fields, std::string columnSeparator = "I",
std::string rowSeparator = "\n") {
for (int row = numberOfRows - 1; row >= 0; --row) {
for (int col = 0; col < numberOfColumns; ++col) {
std::cout << fields[row][col];
if (col < numberOfColumns - 1) {
std::cout << columnSeparator;
} else {
std::cout << rowSeparator;
}
}
}
}
int main() {
Board fields;
for (int row = 0; row < numberOfRows; ++row) {
for (int column = 0; column < numberOfColumns; ++column) {
fields[row][column] = '+';
}
}
int fieldIndex, roundIndex = 0;
print(fields);
std::cout << "Tic Tac Toe\n";
std::cin.get();
Board numbers{{'1', '2', '3'}, {'4', '5', '6'}, {'7', '8', '9'}};
print(numbers);
std::cout << "Board with coordinates\n";
std::cin.get();
while (roundIndex < 9) {
print(fields);
bool xsTurn = roundIndex % 2 == 0;
if (xsTurn) {
std::cout << "X Turn\n";
} else {
std::cout << "O Turn\n";
}
do {
std::cout << "Type coordinate of square(number) ";
std::cin >> fieldIndex;
if (fieldIndex < 1 || fieldIndex > 9) {
std::cout << "Invalid value: Expected integer between 1 and 9!\n";
}
} while (!(0 < fieldIndex && fieldIndex < 10));
const int row = (fieldIndex - 1) / numberOfColumns;
const int column = (fieldIndex - 1) % numberOfColumns;
if (fields[row][column] == '+') {
fields[row][column] = (xsTurn) ? 'X' : 'O';
roundIndex++;
} else {
std::cout << "Field already taken!\n";
}
}
print(fields);
std::cout << "Who won?\n";
std::cin.get();
} | {
"domain": "codereview.stackexchange",
"id": 19091,
"tags": "c++, tic-tac-toe"
} |
Mandelbrot Set with OpenCL and SFML | Question: I don't know if I'm doing this correctly, using OpenCL and SFML together, but I know a little about both so I decided to make something with them. I've already tried implementing a pure C++ and SFML version of a Mandelbrot set generator, but it ran for 50 minutes and generated a 1000*1000 image that was not sufficiently detailed. I wrote this code today following an OpenCL tutorial about how to write a simple vector addition and trying to figure out how to implement my problem. So that's the reason I don't use matrices in my code, just vectors. I'm open to a solution that uses a huge matrix in OpenCL (I'm aware that would be much more efficient than my solution). Please help, devs with experience in OpenCL and the GPGPU approach!
#include <iostream>
#include <fstream>
#include <string>
#include <sstream>
#include <CL\opencl.h>
#include <SFML\Graphics.hpp>
#define DATA_SIZE 8192
int main(int argc, char* argv[])
{
sf::Image img;
img.create(DATA_SIZE, DATA_SIZE);
cl_int err;
size_t global; // global problem space
size_t local; // local problem space
cl_platform_id platform;
err = clGetPlatformIDs(1, &platform, NULL);
// Get first available platform
if (err != CL_SUCCESS) {
std::cerr << "Error, failed to find platform.\n";
std::cin.get();
return EXIT_FAILURE;
}
cl_device_id device_id;
err = clGetDeviceIDs(platform,
CL_DEVICE_TYPE_GPU,
1,
&device_id,
NULL);
if (err != CL_SUCCESS) {
std::cerr << "Error, failed to create device group.\n";
std::cin.get();
return EXIT_FAILURE;
}
cl_context context;
context = clCreateContext(0,
1,
&device_id,
NULL,
NULL,
&err);
if (!context) {
std::cerr << "Error, failed to create a compute context.\n";
std::cin.get();
return EXIT_FAILURE;
}
cl_command_queue commands;
commands = clCreateCommandQueue(context,
device_id,
0,
&err);
if (!commands) {
std::cerr << "Error, failed to create command queue.\n";
std::cin.get();
return EXIT_FAILURE;
}
const char* KernelSource = "__kernel void sqr(__global float* input,\n"
"const int row,\n"
"__global float* output){\n"
"int i = get_global_id(0);\n"
"float c_re = input[i];\n"
"float c_im = 1.5 - row*3.0/8192.;\n"
"int count = 0;\n"
"float x = 0., y = 0.;\n"
"while(x*x + y*y < 2. && count < 255 ){\n"
"float x_new = x*x - y*y + c_re;\n"
"y = 2*x*y + c_im;\n"
"x = x_new;\n"
"count++;\n"
"}\n"
"output[i] = count;\n"
"}\n";
cl_program program;
program = clCreateProgramWithSource(context,
1,
&KernelSource,
NULL,
&err);
err = clBuildProgram(program,
0,
NULL,
NULL,
NULL,
NULL);
if (err != CL_SUCCESS) {
size_t len;
char buffer[2048];
std::cerr << "Failed to build executable.\n";
clGetProgramBuildInfo(program, device_id,
CL_PROGRAM_BUILD_LOG,
sizeof(buffer), buffer, &len);
std::cerr << buffer << std::endl;
std::cin.get();
exit(1);
}
cl_kernel kernel;
kernel = clCreateKernel(program, "sqr", &err);
if (!kernel || err != CL_SUCCESS) {
std::cerr << "Error, failed to create compute kernel.\n";
std::cin.get();
exit(1);
}
float* data = new float[DATA_SIZE];
float* results = new float[DATA_SIZE];
int row;
cl_mem input;
cl_mem output;
for (int s = 0; s < DATA_SIZE; s++) {
row = s;
unsigned int count = DATA_SIZE;
for (int i = 0; i < count; i++) {
data[i] = -1.5 + 3.*i / (float)count;
}
input = clCreateBuffer(context,
CL_MEM_READ_ONLY, sizeof(float)*count,
NULL,
NULL);
output = clCreateBuffer(context, CL_MEM_WRITE_ONLY,
sizeof(float)*count,
NULL,
NULL);
if (!input || !output) {
std::cerr << "Error, failed to allocate device memory.\n";
std::cin.get();
exit(1);
}
err = clEnqueueWriteBuffer(commands,
input,
CL_TRUE,
0,
sizeof(float)*count,
data,
0,
NULL,
NULL);
if (err != CL_SUCCESS) {
std::cerr << "Error, failed to write to source array.\n";
std::cin.get();
exit(1);
}
err = 0;
err = clSetKernelArg(kernel,
0,
sizeof(cl_mem),
&input);
err |= clSetKernelArg(kernel,
1,
sizeof(int),
&row);
err |= clSetKernelArg(kernel,
2,
sizeof(cl_mem),
&output);
if (err != CL_SUCCESS) {
std::cerr << "Error, failed to set kernel args.\n";
std::cin.get();
exit(1);
}
err = clGetKernelWorkGroupInfo(kernel,
device_id,
CL_KERNEL_WORK_GROUP_SIZE,
sizeof(local),
&local,
NULL);
if (err != CL_SUCCESS) {
std::cerr << "Error, failed to retrieve kernel workgroup info.\n";
std::cin.get();
exit(1);
}
global = count;
err = clEnqueueNDRangeKernel(commands,
kernel,
1,
NULL,
&global,
&local,
0,
NULL,
NULL);
if (err) {
std::cerr << "Error: failed to execute kernel.\n";
std::cin.get();
exit(1);
}
clFinish(commands);
err = clEnqueueReadBuffer(commands,
output,
CL_TRUE,
0,
sizeof(float)*count,
results,
0,
NULL,
NULL);
if (err != CL_SUCCESS) {
std::cerr << "Failed to read output array.\n";
std::cin.get();
exit(1);
}
// Set the pixels in the img after the calculation
for (int i = 0; i < count; i++) {
img.setPixel(i, s, sf::Color::Color(0, (int)results[i], 0));
}
}
// Cleanup.
delete[] data;
delete[] results;
clReleaseMemObject(input);
clReleaseMemObject(output);
clReleaseProgram(program);
clReleaseKernel(kernel);
clReleaseCommandQueue(commands);
clReleaseContext(context);
// Save the image to a bitmap
img.saveToFile("mandelbrot.bmp");
return 0;
}
To be 100% honest. I do not understand everything in the code. However, I'm familiar with the basics. I'm looking forward for useful ideas.
Answer:
I'm open to a solution that uses a huge matrix in OpenCL (I'm aware
that would be much more efficient than my solution).
The kernel code doesn't share any data between workitems (though neighbouring workitems produce similar results; see the 2D kernel part). Each workitem works only on its own data in this program. So having a larger image just decreases the ratio of kernel launch overhead to the (computation + buffer copy) time, so the perceived throughput increases.
But,
Since each compute unit has SIMD, neighboring workitems should produce same or similar colors so decreasing local group size as much as possible should use those SIMDs better since difference in color means divergence in pipeline and bad for performance.
Think of drawing a filled circle, interior pixels need more work, outer part less work. Smaller tiles mean more efficient work distribution around the surface line.
2D Kernel
A scanline is not enough. Neighbouring pixels along the Y-axis can also have the same or similar values, so you should use a 2D-ndrange kernel and have the pixels Z-ordered or at least grouped into squares.
If each compute unit has 64 cores, try tiles of 8x8 instead of 2x16 or 16x2 because of pixel result divergence.
Even with a 1-D kernel, you can achieve the same performance.
Get group id, get group x and group y values from that using modulus and division.
Map a local group to a tile using modulus and division again so each local thread works on neighbours in a tile instead of a scanline.
// 64 threads per group(square mapped), 256x256 image
thread_group_x = get_group_id(0)%32 ---> 32 tiles along X axis
thread_group_y = get_group_id(0)/32 ---> 32 tiles along Y axis
thread_x = get_local_id(0)%8 ----> pixel-x inside tile
thread_y = get_local_id(0)/8 ----> pixel-y inside tile
---calculate---
---calculate end---
store(result, index=(thread_x+thread_group_x*8 + 256*(thread_y+thread_group_y*8)));
Pixel indices by 1D emulated to 2D kernel for 8k x 8k example with 64 local threads:
unsigned ix = (get_group_id (0)%1024)*8+get_local_id(0)%8;
unsigned iy = (get_group_id (0)/1024)*8+get_local_id(0)/8;
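The tile mapping can be checked quickly at the smaller 256x256 size used in the earlier example (a verification sketch of mine, not part of the answer; 8x8 tiles, 64 work-items per group, 32x32 = 1024 groups):

```python
# Emulate the 1D-to-2D index mapping: every (group, work-item) pair
# should land on a distinct pixel of the 256x256 image.
seen = set()
for group_id in range(1024):
    for local_id in range(64):
        ix = (group_id % 32) * 8 + local_id % 8
        iy = (group_id // 32) * 8 + local_id // 8
        seen.add((ix, iy))

# 1024 * 64 = 65536 pairs produce 65536 distinct pixels, so the
# mapping covers the whole image exactly once.
print(len(seen))  # 65536
```

The same arithmetic with the 1024/8 constants from the snippet above scales this to the 8k x 8k image.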
After data locality problem is solved, you can optimize for buffer copies to see actual compute performance instead of pci-e bottleneck. | {
"domain": "codereview.stackexchange",
"id": 24831,
"tags": "c++, opencl"
} |