DBSCAN - Space complexity of O(n)?
Question: According to Wikipedia, "the distance matrix of size $\frac{(n^2-n)}{2}$ can be materialized to avoid distance recomputations, but this needs $O(n^2)$ memory, whereas a non-matrix based implementation of DBSCAN only needs $O(n)$ memory." $\frac{(n^2-n)}{2}$ is basically the triangular matrix. However, it says that a non-matrix based implementation only requires $O(n)$ memory. How does that work? Regardless of what data structure you use, don't you always have to compute $\frac{(n^2-n)}{2}$ distance values? Wouldn't that still be $O(n^2)$ space complexity? Is there something I'm missing here? I'm working with a huge dataset and I would really like to cut down on memory usage. Answer: You can run DBSCAN without storing the distances in a matrix. The drawback is that each time you visit a point you have to recalculate all the relevant distances, which costs extra time. The space complexity, however, stays $O(n)$: the only things in memory at any single time are the positions of the $n$ points, their labels, the neighbors of the current point, and the neighbors of one particular neighbor in the case that that point turns out to be a core point.
{ "domain": "datascience.stackexchange", "id": 2977, "tags": "clustering, scalability, dbscan" }
Sulfur Trioxide - Ionic Character
Question: I am told that the sulfur trioxide molecule exhibits charge separation because of poor p-orbital overlap. Sulfur's 3p orbitals are much bigger than oxygen's 2p orbitals, and thus the $\ce{S=O}$ bond is best described as $\ce{S+ - O-}$. So is the author's argument suggesting that sulfur trioxide is unhybridized, and that it uses pure s orbitals to form sigma bonds and pure p orbitals to form pi bonds? Because that's the only way I can see sulfur as having three empty p orbitals to utilize. If we were to assign hybridization by counting the number of sigma pairs of electrons (3), we would arrive at $sp^2$ hybridization, and there aren't enough p orbitals to go around. Also, while we're on this topic, Wikipedia says that sulfur trioxide is best represented as having one $\ce{S=O}$ bond and two $\ce{S+ - O-}$ bonds. Now, does bond length differ between an $\ce{S+ - O-}$ and an $\ce{S=O}$ bond? Wikipedia suggests not, as only one bond length is given. If bond length is equivalent between the two, why? Answer: Sulfur trioxide is planar with 120 degree O-S-O bond angles. If the molecule were unhybridized and we used the 3 orthogonal p orbitals for our sigma bonds, then we would expect the O-S-O angle to be 90 degrees. Further, in the unhybridized scheme, there are not enough s orbitals to form 3 sigma bonds that are pure s. Considering only s and p orbital involvement, we could say that the sulfur is $\ce{sp^2}$ hybridized. This would produce 3 $\ce{sp^2}$ orbitals for sigma bonds from sulfur to oxygen and leave one p orbital for one pi bond. We can draw 3 resonance structures for this arrangement, so we would expect all bond lengths to be equivalent.
{ "domain": "chemistry.stackexchange", "id": 1719, "tags": "bond, lewis-structure" }
CSV to HTML Converter
Question: I have written a little program that converts a CSV file to an HTML table. It works for my purposes. But are there parts of my code that could be written more cleanly? Can the performance be improved? Are there any bugs? I searched for bugs and fortunately did not find any. Postscript: Maybe I should have provided some background information. I am working on database documentation that I am writing as an HTML document, because I don't like Word documents. However, creating a tabular description of the columns with dozens of tags is painful. That is why I wrote this script: now I only have to export the table information as CSV and can convert it directly, without having to enter many tags myself. This is also why there are no html and body tags: the tables created should not be separate HTML documents, but parts of a single, large HTML document. CsvToHtmlTable.java import java.io.BufferedReader; import java.io.FileReader; import java.io.IOException; import java.io.FileWriter; import java.util.List; import java.util.ArrayList; public class CsvToHtmlTable { public static void main(String[] args) { // print info and show user how to call the program if needed System.out.println("This program is tested only for UTF-8 files."); if (args[0].equalsIgnoreCase("help") || args[0].equalsIgnoreCase("-help") || args.length != 2) { System.out.println("java CsvToHtmlTable <input file> <output file>"); System.out.println("Example: java CsvToHtmlTable nice.csv nice.html"); System.exit(0); } String csvFile = args[0]; String outputFile = args[1]; // read lines of csv to a string array list List<String> lines = new ArrayList<String>(); try (BufferedReader reader = new BufferedReader(new FileReader(csvFile))) { String currentLine; while ((currentLine = reader.readLine()) != null) { lines.add(currentLine); } } catch (IOException e) { e.printStackTrace(); } //embrace <td> and <tr> for lines and columns for (int i = 0; i < lines.size(); i++) { lines.set(i, "<tr><td>" + lines.get(i) + "</td></tr>"); lines.set(i, lines.get(i).replaceAll(",", "</td><td>")); } // embrace <table> and </table> lines.set(0, "<table border>" + lines.get(0)); lines.set(lines.size() - 1, lines.get(lines.size() - 1) + "</table>"); // output result try (FileWriter writer = new FileWriter(outputFile)) { for (String line : lines) { writer.write(line + "\n"); } } catch (IOException e) { e.printStackTrace(); } } } How to call the program: java CsvToHtmlTable ExampleInput.csv ExampleOutput.html ExampleInput.csv Name,Vorname,Alter Ulbrecht,Klaus Dieter,12 Meier,Bertha,102 ExampleOutput.html <table border><tr><td>Name</td><td>Vorname</td><td>Alter</td></tr> <tr><td>Ulbrecht</td><td>Klaus Dieter</td><td>12</td></tr> <tr><td>Meier</td><td>Bertha</td><td>102</td></tr></table> Answer: CSVReader Reading a CSV file can be a complex task. While many CSV files are just comma-separated values, if a value contains a comma it must be surrounded by double quotes, and if the value contains double quotes, the double quotes themselves are doubled. To handle more than just basic CSV files, you really should use a CSV library, such as OpenCSV (com.opencsv:opencsv:5.0) or Apache Commons CSV (org.apache.commons:commons-csv:1.7). HTML Valid HTML Your code essentially just writes <table>...table data...</table>. This isn't proper HTML. You're missing <html>...</html> tags around the entire document, and <body>...</body> around the content. You should probably also have a <head>...</head>, perhaps with a nice <title>...</title>. Escaping If your CSV data contains any special characters, like <, >, and &, you really must escape them in the generated HTML table. Table Headings It looks like the first line of your table contains headings, not data. The first table row should probably be formatted with <th>...</th> tags instead of <td>...</td> tags.
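On the escaping point, a minimal helper is enough to illustrate the idea (an untested sketch of my own; in practice a library method such as Apache Commons Text's StringEscapeUtils.escapeHtml4 would be preferable):

```java
public class HtmlEscape {
    /** Replace the characters that are special in HTML with their entities. */
    static String escapeHtml(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escapeHtml("a < b & \"c\""));
        // prints: a &lt; b &amp; &quot;c&quot;
    }
}
```

Each table cell would then be written as writer.write("<td>" + escapeHtml(field) + "</td>") rather than with the raw field.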
Line by Line processing You are reading the entire CSV file into memory, and only when it has been loaded in its entirety are you writing it back out as HTML. This is very memory intensive, especially if the CSV file is huge! Instead, you could: open the CSV open the HTML file write the HTML prolog for each line read from CSV file: format and write line to HTML file write HTML epilog Untested, coding from the hip, without handling quoting in CSV or escaping any HTML entities in output: try (BufferedReader reader = new BufferedReader(new FileReader(csvFile)); FileWriter writer = new FileWriter(outputFile)) { writer.write("<html><body><table border>\n"); String currentLine; while ((currentLine = reader.readLine()) != null) { writer.write("<tr>"); for(String field: currentLine.split(",")) writer.write("<td>" + field + "</td>"); writer.write("</tr>\n"); } writer.write("</table></body></html>\n"); } catch (IOException e) { e.printStackTrace(); } XML & XSLT You may want to consider creating a CSV to XML translator. Your XML output might look like: <data input-file='ExampleInput.csv'> <person> <Name>Ulbrecht</Name> <Vorname>Klaus Dieter</Vorname> <Alter>12</Alter> </person> <person> <Name>Meier</Name> <Vorname>Bertha</Vorname> <Alter>102</Alter> </person> </data> And then you could use an XSLT Stylesheet to translate the XML to HTML, possibly in a browser without ever writing the HTML to a file.
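For the XML above, a matching stylesheet might look like the following (an untested sketch with the column names hardcoded for this example; a real stylesheet could instead derive the header row from the first person element):

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/data">
    <html><body>
      <table border="1">
        <tr><th>Name</th><th>Vorname</th><th>Alter</th></tr>
        <xsl:for-each select="person">
          <tr>
            <td><xsl:value-of select="Name"/></td>
            <td><xsl:value-of select="Vorname"/></td>
            <td><xsl:value-of select="Alter"/></td>
          </tr>
        </xsl:for-each>
      </table>
    </body></html>
  </xsl:template>
</xsl:stylesheet>
```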
{ "domain": "codereview.stackexchange", "id": 39366, "tags": "java, html, csv" }
Does magnetic field depend on $z$ inside a toroidal coil?
Question: Imagine a circular toroid coiled with a wire through which some current $I$ is flowing. Everywhere it is stated that the magnetic field inside this toroid can be calculated as $$\vec{B} = \mu \frac{NI}{2\pi r} \ \hat{\phi}$$ where $\mu$ is the magnetic permeability of the toroid, $N$ the number of turns of the coil, $r$ the distance to the center of the toroid and $\hat{\phi}$ the usual azimuthal unit vector in cylindrical coordinates. My question is: why doesn't the magnetic field depend on $z$, the vertical position? As I see it, there is no symmetry in $z$ that allows us to automatically discard this coordinate. Namely, as we move radially (varying $r$), the field changes because the situation differs from one radius to another: we get closer to (or further away from) the wires, and that makes the field vary. If we moved along the $z$ direction, the case would be analogous. If we center the coordinate system such that the plane $z=0$ slices the toroid in two halves, we can see that at $z=0$ the current has just a component in the $\hat{z}$ direction, but if we analyze this for any other value of $z$ the current acquires other components as well. So I don't see why the magnetic field would not depend on $z$. Does it depend on $z$ or not? If yes, how can one then calculate the actual magnetic field (the technique used for a square cross-section would no longer apply, I guess)? Answer: Yes, it does depend on $z$. If you think about it, it must depend on $z$ to satisfy the boundary conditions when you are close to the wires. You expect a value constant in $z$ only in the middle of the torus, where you can ignore the edge effects. Ampère's law only reduces to that simple formula if the path element $\mathrm{d}\boldsymbol{\ell}$ is parallel to the magnetic field $\mathbf{B}$. But anyway, I did the maths.
I have 20 current loops azimuthally distributed, so that the magnetic field magnitude in the $xy$ plane, at $z=0$, looks like this: The radius of each loop is $3$, and the torus is "centred" at $10$, so that its inner and outer radii are $7$ and $13$. Now let's look at the three components of the magnetic field at $x=10, y=0$: And then I plotted only $B_{\phi}$, still at $y=0$ but now varying $x$: You can see that $B_\phi$ is actually quite constant with $x$, provided that you are not too close to the edge (this time in the $x$ direction). This is wrong, though, as you'd expect the field to fall off $\propto 1/r$; I suspect it is an artifact of the finite number of current loops. Conclusion: Away from the edges, the field is essentially independent of $z$. The field is always in the azimuthal $\phi$ direction, as suggested by @Christophe in the comments.
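For readers who want to reproduce this kind of plot, here is a rough sketch of the computation (my own reconstruction, not the answerer's actual code): sum the Biot–Savart contributions of short straight segments approximating the 20 poloidal loops, with the same geometry (loop radius 3, major radius 10), in units where $\mu_0 I / 4\pi = 1$:

```java
public class ToroidField {
    /**
     * Biot-Savart sum over straight segments approximating nLoops poloidal current
     * loops of radius a, centred on a circle of major radius R0 in the xy plane.
     * Returns {Bx, By, Bz} at the field point (x, y, z).
     */
    static double[] fieldAt(double x, double y, double z,
                            int nLoops, double R0, double a, int segs) {
        double[] B = new double[3];
        for (int k = 0; k < nLoops; k++) {
            double phi = 2 * Math.PI * k / nLoops;
            double cx = R0 * Math.cos(phi), cy = R0 * Math.sin(phi);
            // each loop lies in the plane spanned by the radial unit vector and z-hat
            double rx = Math.cos(phi), ry = Math.sin(phi);
            for (int s = 0; s < segs; s++) {
                double t0 = 2 * Math.PI * s / segs, t1 = 2 * Math.PI * (s + 1) / segs;
                double[] p0 = {cx + a * Math.cos(t0) * rx, cy + a * Math.cos(t0) * ry, a * Math.sin(t0)};
                double[] p1 = {cx + a * Math.cos(t1) * rx, cy + a * Math.cos(t1) * ry, a * Math.sin(t1)};
                double[] dl = {p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2]};
                // vector from the segment midpoint (source) to the field point
                double[] r = {x - (p0[0] + p1[0]) / 2, y - (p0[1] + p1[1]) / 2, z - (p0[2] + p1[2]) / 2};
                double rn = Math.sqrt(r[0] * r[0] + r[1] * r[1] + r[2] * r[2]);
                double c = 1.0 / (rn * rn * rn);
                // dB ~ dl x r / |r|^3
                B[0] += c * (dl[1] * r[2] - dl[2] * r[1]);
                B[1] += c * (dl[2] * r[0] - dl[0] * r[2]);
                B[2] += c * (dl[0] * r[1] - dl[1] * r[0]);
            }
        }
        return B;
    }

    public static void main(String[] args) {
        // field at (10, 0, 0): on the torus centre-line, where phi-hat = y-hat
        double[] B = fieldAt(10, 0, 0, 20, 10, 3, 200);
        System.out.printf("B = (%.6e, %.6e, %.6e)%n", B[0], B[1], B[2]);
    }
}
```

At the point $(10,0,0)$ the symmetry of the loop arrangement forces $B_x$ and $B_z$ to cancel, leaving a purely azimuthal field, which is exactly the behaviour described above.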
{ "domain": "physics.stackexchange", "id": 69077, "tags": "electromagnetism, magnetic-fields" }
Which topic does MoveIt2 send to the robot simulation (rviz, gazebo) to accomplish the desired motion execution?
Question: I want to echo the topic responsible for sending the joint trajectory to the simulation environment (rviz, gazebo). However, I can't read this data. I would like to use this data for sending to the hardware interface of a real robot. OS version: Ubuntu 22.04. ROS version: ROS 2 Humble. ROS packages: Fanuc CRX-10iA/L description; MoveIt2 package generated via the MoveIt Setup Assistant. Thank you for your time! Answer: MoveIt uses the joint_trajectory_controller to process the trajectory. I'm not sure offhand whether it uses the topic or the action interface; if the latter, you cannot directly subscribe to a topic. rviz and gazebo get the data by different means. rviz shows the robot model from tf2 data, which is published by robot_state_publisher in a typical ros2_control setup. That publisher itself gets joint_states from the joint_state_broadcaster. Gazebo classic (or gz), which simulates the physical system, uses gazebo_ros2_control or gz_ros2_control as the hardware component. Hardware components and ros2_controllers exchange data through shared memory, so you cannot access that data directly. But joint_trajectory_controller publishes a topic ~/controller_state where you can find the data. To write a driver (called a hardware component) with the ros2_control framework, have a look at the demo section of the documentation.
{ "domain": "robotics.stackexchange", "id": 38588, "tags": "ros2, moveit, follow-joint-trajectory, jointtrajectory, ros2-control" }
What is the difference between work done and energy transfer
Question: I understand that work done is a form of energy transfer, but am I right in thinking that energy can be transferred without work being done? If so, what is it that makes the two different? In particular, in the case of thermodynamics, what is the difference between simply transferring energy to a gas (such as by heating) and actually doing work on the gas? Thanks. Answer: In thermodynamics, work is the negative of the change in internal energy due to a change in volume, usually holding entropy and particle numbers constant. This takes the form of a force pushing on the walls of the volume, which connects it to our conventional notion of work, $W = F~\Delta x$, as seen for example if we consider a cylinder of cross-section $A$, $$W = F~\Delta x = F~\frac AA~\Delta x = \frac FA~A\Delta x = P ~\Delta V.$$And the change in internal energy is just the negative of the work, $-P~\Delta V,$ due to the law of energy conservation. In fact we can also define $W = P~\Delta V$ even when we are not holding entropy and particle numbers constant: but then it is not necessarily the same as the change in internal energy. So for example if you compress an ideal gas it generally heats up; you could still speak of the work as $P~\Delta V$ at constant temperature, but "at constant temperature" means essentially "we squeeze this thing and it wants to become warmer, but we let energy out of the system through the walls until it comes back to the same temperature": the work has been negative, and perhaps the internal energy has still gone up, but it has not gone up as much as it would have had the walls been thermodynamic insulators. In these cases, however, we can often define a "free energy" (in this case $E - T S$) whose change is the negative of the work.
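As a concrete instance of the constant-temperature case described above (a standard worked example, not part of the original answer): take $n$ moles of ideal gas compressed isothermally and quasi-statically from $V_1$ to $V_2 < V_1$. The work done on the gas is $$W_{\text{on}} = -\int_{V_1}^{V_2} P\,\mathrm{d}V = -\int_{V_1}^{V_2} \frac{nRT}{V}\,\mathrm{d}V = nRT\ln\frac{V_1}{V_2} > 0,$$ yet since the internal energy of an ideal gas depends only on temperature, $\Delta U = 0$, so a heat $Q = -W_{\text{on}}$ must leave through the walls: work is done on the gas, but all of that energy is transferred back out as heat, which is exactly the distinction between the two transfer mechanisms the question asks about.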
{ "domain": "physics.stackexchange", "id": 46303, "tags": "thermodynamics, energy, work" }
Are atoms elementary?
Question: I'm reading the Wikipedia article on real neutral particles. I know that many different particles have been discovered inside the atom, which was once described as just neutrons + protons + electrons; for example the neutralino, et cetera, and finally the Higgs boson was discovered. I'm interested: are atoms still as fundamental as they were once presented? For example, take two atoms of lead (plumbum). Suppose a couple of very small "bosons" (or neutralinos, or gluons) become attached to one such atom. Will it emit something and remain the same atom of lead, or could it end up slightly different? Could we then have two pieces of lead, one of which consists of atoms that are a little bit different, with some extra gluons or bosons per atom? Or are we not able to test this yet? Answer: Atoms are not truly fundamental. In fact, even saying that atoms are made of electrons, protons and neutrons, and that those are fundamental particles, would be wrong: protons and neutrons are not fundamental, as they are composed of still smaller particles, quarks. For more information see this link: https://en.wikipedia.org/wiki/Standard_Model Hope my answer helps!
{ "domain": "physics.stackexchange", "id": 27291, "tags": "particle-physics, atomic-physics, atoms" }
Simple online auction system in Java
Question: I recently applied for a Java developer position and I was asked to complete a coding task. After review of the coding task, the company said they did not want to take me forward to interview. They refused to give feedback, so I have no idea what went wrong. I'd love some insights on what I did wrong. The task: You have been tasked with building part of a simple online auction system which will allow users to bid on items for sale. Provide a bid tracker interface and concrete implementation with the following functionality: Record a user's bid on an item Get the current winning bid for an item Get all the bids for an item Get all the items on which a user has bid You are not required to implement a GUI (or CLI) or persistent storage. You may use any appropriate libraries to help, but do not include the jars or class files in your submission. The code: The interface: package com.nbentayou.app.service; import com.nbentayou.app.exception.InvalidBidException; import com.nbentayou.app.model.Bid; import com.nbentayou.app.model.Item; import com.nbentayou.app.model.User; import java.util.List; import java.util.Optional; import java.util.Set; /** * The interface for a Bid Tracker. * This interface exposes methods allowing {@link User}s to post {@link Bid}s on {@link Item}s * and query the current state of the auction. */ public interface BidTracker { /** * Records a bid for a given item. 
* @param bid the bid to record * @param item the item to bid on * @throws InvalidBidException when the bid is invalid */ void recordUserBidOnItem(Bid bid, Item item) throws InvalidBidException; /** * @param item the item * @return the current winning bid (last valid bid), as an {@link Optional}, for the given item */ Optional<Bid> getCurrentWinningBidForItem(Item item); /** * @param item the item * @return the list of all bids made for the given item */ List<Bid> getAllBidsForItem(Item item); /** * @param user the user to get the list of items for * @return the list of all items bid on by the given user */ Set<Item> getAllItemsWithBidFromUser(User user); } The implementation package com.nbentayou.app.service; import com.nbentayou.app.exception.InvalidBidException; import com.nbentayou.app.model.Bid; import com.nbentayou.app.model.Item; import com.nbentayou.app.model.User; import java.util.*; import java.util.concurrent.ConcurrentHashMap; import java.util.stream.Collectors; /** * An implementation of the {@link BidTracker} interface with an in-memory data storage. * * Some limitations of the current implementation are: * - It stores the data about the auction in a ConcurrentHashMap. Other implementations could query a persistent * data storage (Database, flat files, ..) * - It assumes all the Users given as parameters are allowed to place a bid. * - It assumes all the Items given as parameters can be bid on. */ public class BidTrackerImpl implements BidTracker { private final Map<Item, List<Bid>> auctionBoard; public BidTrackerImpl() { auctionBoard = new ConcurrentHashMap<>(); } /** * Getter for the auction board. * It returns a copy of the auction board to ensure that no external object can modify it. 
* @return a copy of the auctionBoard */ public Map<Item, List<Bid>> getCurrentAuctionBoardCopy() { return new HashMap<>(auctionBoard); } @Override public void recordUserBidOnItem(Bid bid, Item item) throws InvalidBidException { checkForNull(bid); checkForNull(item); recordUserBidOnItemSync(bid,item); } // synchronized method ensuring that only one bid is processed at a time. private synchronized void recordUserBidOnItemSync(Bid bid, Item item) throws InvalidBidException { checkBidIsHighEnough(bid, item); addBidOnItem(item, bid); } private void checkBidIsHighEnough(Bid bid, Item item) throws InvalidBidException { Optional<Bid> currentWinningBid = getCurrentWinningBidForItem(item); if(currentWinningBid.isPresent() && bid.getValue() <= currentWinningBid.get().getValue()) { throw new InvalidBidException(String.format( "A bid of £%s on item %s is too low. It should be more than the current winning bid: £%s)", bid.getValue(), item, currentWinningBid.get().getValue())); } } @Override public Optional<Bid> getCurrentWinningBidForItem(Item item) { LinkedList<Bid> bids = new LinkedList<>(getAllBidsForItem(item)); return bids.isEmpty() ? 
Optional.empty() : Optional.of(bids.getLast()); } @Override public List<Bid> getAllBidsForItem(Item item) { checkForNull(item); return auctionBoard.getOrDefault(item, new ArrayList<>()); } @Override public Set<Item> getAllItemsWithBidFromUser(User user) { return auctionBoard.entrySet().stream() .filter(entry -> containsBidFromUser(entry.getValue(), user)) .map(Map.Entry::getKey) .collect(Collectors.toSet()); } /* Utility methods */ private boolean containsBidFromUser(List<Bid> bidsList, User user) { return bidsList.stream().anyMatch(bid -> bid.isFromUser(user)); } private void addBidOnItem(Item item, Bid bid) { List<Bid> bidsOnItem = auctionBoard.getOrDefault(item, new ArrayList<>()); bidsOnItem.add(bid); auctionBoard.put(item, bidsOnItem); } private void checkForNull(Item item) { if(item == null) throw new IllegalArgumentException("Item can't be null"); } private void checkForNull(Bid bid) { if(bid == null) throw new IllegalArgumentException("Bid can't be null"); if(bid.getUser() == null) throw new IllegalArgumentException("Bid's user can't be null"); } } Bid POJO package com.nbentayou.app.model; import java.util.Objects; /** * The representation of an auction bid from a user */ public class Bid { private final User user; private final int value; public Bid(User user, int value) { this.user = user; this.value = value; } public User getUser() { return user; } public int getValue() { return value; } public boolean isFromUser(User user) { return this.user.equals(user); } @Override public String toString() { return String.format("{ user: %s, value: %s }", user, value); } @Override public boolean equals(Object o) { boolean equals = false; if (o == this) { equals = true; } else if (o instanceof Bid) { Bid bid = (Bid) o; equals = Objects.equals(user, bid.user) && value == bid.value; } return equals; } @Override public int hashCode() { return Objects.hash(user, value); } } Item POJO package com.nbentayou.app.model; import java.util.Objects; /** * The representation of an 
item of the auction */ public class Item { private final String id; private final String name; private final String description; public Item(String id, String name, String description) { this.id = id; this.name = name; this.description = description; } @Override public String toString() { return String.format("{ id: %s, name: %s, description: %s }", id, name, description); } @Override public boolean equals(Object o) { boolean equals = false; if (o == this) { equals = true; } else if (o instanceof Item) { Item item = (Item) o; equals = Objects.equals(id, item.id) && Objects.equals(name, item.name) && Objects.equals(description, item.description); } return equals; } @Override public int hashCode() { return Objects.hash(id, name, description); } } User POJO package com.nbentayou.app.model; import java.util.Objects; /** * The representation of a user of the auction */ public class User { private final String id; private final String name; public User(String id, String name) { this.id = id; this.name = name; } @Override public String toString() { return String.format("{ id: %s, name: %s }", id, name); } @Override public boolean equals(Object o) { boolean equals = false; if (o == this) { equals = true; } else if (o instanceof User) { User user = (User) o; equals = Objects.equals(id, user.id) && Objects.equals(name, user.name); } return equals; } @Override public int hashCode() { return Objects.hash(id, name); } } InvalidBidException package com.nbentayou.app.exception; /** * A Generic {@link Exception} thrown when trying to make an invalid bid. * Could be for multiple reasons: The bid is too low, the item is not in the auction anymore, etc.. * The reason should be explained in the message or this Exception could be subclassed by finer grain Exceptions. 
*/ public class InvalidBidException extends Exception { public InvalidBidException(String message) { super(message); } } Unit tests package com.nbentayou.app.service; import com.nbentayou.app.exception.InvalidBidException; import com.nbentayou.app.model.Bid; import com.nbentayou.app.model.Item; import com.nbentayou.app.model.User; import org.junit.Before; import org.junit.Rule; import org.junit.Test; import org.junit.rules.ExpectedException; import java.util.*; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import java.util.stream.Collectors; import java.util.stream.IntStream; import static org.junit.Assert.*; public class BidTrackerImplTest { private final User user1 = new User("u1", "Nicolas Bentayou"); private final User user2 = new User("u2", "Randolph Carter"); private final User user3 = new User("u3", "Herbert West"); private final Item item1 = new Item("i1", "item1", "Brilliant!"); private final Item item2 = new Item("i2", "item2", "Brilliant!"); private final Item item3 = new Item("i3", "item3", "Brilliant!"); private BidTrackerImpl bidTracker; @Rule public final ExpectedException thrown = ExpectedException.none(); @Before public void initAuctionRoom() { bidTracker = new BidTrackerImpl(); } @Test public void recordUserBidOnItem_shouldAddAFirstBidToItem_whenBidIsValid() throws InvalidBidException { Bid bid = new Bid(user1, 10); bidTracker.recordUserBidOnItem(bid, item1); List<Bid> actualBidListOnItem1 = bidTracker.getCurrentAuctionBoardCopy().get(item1); List<Bid> expectedBidListOnItem1 = Collections.singletonList(bid); assertEquals(expectedBidListOnItem1, actualBidListOnItem1); } @Test public void recordUserBidOnItem_shouldAddSeveralBidsToItem_whenBidsAreValid() throws InvalidBidException { bidTracker.recordUserBidOnItem(new Bid(user1, 10), item1); bidTracker.recordUserBidOnItem(new Bid(user2, 20), item1); 
bidTracker.recordUserBidOnItem(new Bid(user1, 30), item1); List<Bid> actualBidsListOnItem1 = bidTracker.getCurrentAuctionBoardCopy().get(item1); List<Bid> expectedBidsListOnItem1 = Arrays.asList( new Bid(user1, 10), new Bid(user2, 20), new Bid(user1, 30)); assertEquals(expectedBidsListOnItem1, actualBidsListOnItem1); } @Test public void recordUserBidOnItem_shouldThrowIllegalArgumentException_whenItemIsNull() throws InvalidBidException { thrown.expect(IllegalArgumentException.class); thrown.expectMessage("Item can't be null"); bidTracker.recordUserBidOnItem(new Bid(user1, 10), null); } @Test public void recordUserBidOnItem_shouldThrowIllegalArgumentException_whenBidIsNull() throws InvalidBidException { thrown.expect(IllegalArgumentException.class); thrown.expectMessage("Bid can't be null"); bidTracker.recordUserBidOnItem(null, item1); } @Test public void recordUserBidOnItem_shouldThrowIllegalArgumentException_whenUserIsNull() throws InvalidBidException { thrown.expect(IllegalArgumentException.class); thrown.expectMessage("Bid's user can't be null"); bidTracker.recordUserBidOnItem(new Bid(null, 10), item1); } @Test public void recordUserBidOnItem_shouldThrowInvalidBidException_whenBidIsLowerThanCurrentlyWinningBid() throws InvalidBidException { thrown.expect(InvalidBidException.class); thrown.expectMessage("A bid of £5 on item { id: i1, name: item1, description: Brilliant! } is too low. It should be more than the current winning bid: £10)"); bidTracker.recordUserBidOnItem(new Bid(user1, 10), item1); Bid lowBid = new Bid(user2, 5); bidTracker.recordUserBidOnItem(lowBid, item1); } @Test public void recordUserBidOnItem_shouldThrowInvalidBidException_whenBidIsSameAsCurrentlyWinningBid() throws InvalidBidException { thrown.expect(InvalidBidException.class); thrown.expectMessage("A bid of £10 on item { id: i1, name: item1, description: Brilliant! } is too low. 
It should be more than the current winning bid: £10)"); bidTracker.recordUserBidOnItem(new Bid(user1, 10), item1); Bid sameBid = new Bid(user2, 10); bidTracker.recordUserBidOnItem(sameBid, item1); } @Test public void recordUserBidOnItem_shouldAddSeveralBidsToItem_whenSomeBidsAreInvalid() throws InvalidBidException { bidTracker.recordUserBidOnItem(new Bid(user1, 10), item1); bidTracker.recordUserBidOnItem(new Bid(user2, 20), item1); try { // invalid bid bidTracker.recordUserBidOnItem(new Bid(user3, 15), item1); } catch(InvalidBidException e) { /* Silencing the exception as it is irrelevant for this test */ } bidTracker.recordUserBidOnItem(new Bid(user1, 30), item1); List<Bid> bidsListOnItem1 = bidTracker.getCurrentAuctionBoardCopy().get(item1); List<Bid> expectedBidsOnItem1 = Arrays.asList( new Bid(user1, 10), new Bid(user2, 20), new Bid(user1, 30)); assertEquals(expectedBidsOnItem1, bidsListOnItem1); } @Test public void recordUserBidOnItem_shouldOnlyRecordValidBids_inAMultithreadedEnvironment() { AtomicInteger invalidBidsCount = new AtomicInteger(0); // Make 10000 bids on 4 different threads. 
        int totalNbBids = 10000;
        ExecutorService executor = Executors.newFixedThreadPool(4);
        IntStream.range(0, totalNbBids).forEach(
                i -> executor.submit(() -> {
                    try {
                        bidTracker.recordUserBidOnItem(new Bid(user1, i), item1);
                    } catch (InvalidBidException e) {
                        invalidBidsCount.incrementAndGet();
                    }
                })
        );
        shutDownExecutor(executor);

        List<Bid> actualBidsMade = bidTracker.getCurrentAuctionBoardCopy().get(item1);

        // asserting that all bids were processed
        assertEquals(totalNbBids, actualBidsMade.size() + invalidBidsCount.get());

        // asserting that the accepted bids for the item are all ordered by increasing value
        assertEquals(actualBidsMade, sortBidListByValue(actualBidsMade));

        // asserting that the last bid is for 9999
        Bid lastBidMade = actualBidsMade.get(actualBidsMade.size() - 1);
        assertEquals(totalNbBids - 1, lastBidMade.getValue());
    }

    private void shutDownExecutor(ExecutorService executor) {
        try {
            executor.awaitTermination(2, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            System.err.println("tasks interrupted");
        } finally {
            executor.shutdownNow();
        }
    }

    private List<Bid> sortBidListByValue(List<Bid> bidList) {
        return bidList.stream()
                .sorted(Comparator.comparing(Bid::getValue))
                .collect(Collectors.toList());
    }

    @Test
    public void getCurrentWinningBidForItem_shouldThrowIllegalArgumentException_whenItemIsNull() {
        thrown.expect(IllegalArgumentException.class);
        thrown.expectMessage("Item can't be null");
        bidTracker.getCurrentWinningBidForItem(null);
    }

    @Test
    public void getCurrentWinningBidForItem_shouldReturnEmptyOptional_whenItemHasNoBid() {
        Optional<Bid> bid = bidTracker.getCurrentWinningBidForItem(item1);
        assertEquals(Optional.empty(), bid);
    }

    @Test
    public void getCurrentWinningBidForItem_shouldReturnOptionalWithAValue_whenItemHasBids() throws InvalidBidException {
        bidTracker.recordUserBidOnItem(new Bid(user1, 10), item1);
        bidTracker.recordUserBidOnItem(new Bid(user2, 20), item1);
        Optional<Bid> bid = bidTracker.getCurrentWinningBidForItem(item1);
        assertTrue(bid.isPresent());
        assertEquals(bid.get(), new Bid(user2, 20));
    }

    @Test
    public void getAllBidsForItem_shouldThrowIllegalArgumentException_whenItemIsNull() {
        thrown.expect(IllegalArgumentException.class);
        thrown.expectMessage("Item can't be null");
        bidTracker.getAllBidsForItem(null);
    }

    @Test
    public void getAllBidsForItem_shouldReturnEmptyList_whenItemHasNoBid() {
        List<Bid> bids = bidTracker.getAllBidsForItem(item1);
        assertTrue(bids.isEmpty());
    }

    @Test
    public void getAllBidsForItem_shouldReturnTheCorrectListOfBids_whenItemHasBids() throws InvalidBidException {
        bidTracker.recordUserBidOnItem(new Bid(user1, 10), item1);
        bidTracker.recordUserBidOnItem(new Bid(user2, 20), item1);
        List<Bid> actualBids = bidTracker.getAllBidsForItem(item1);
        List<Bid> expectedBids = Arrays.asList(
                new Bid(user1, 10),
                new Bid(user2, 20));
        assertEquals(expectedBids, actualBids);
    }

    @Test
    public void getAllItemsWithBidFromUser_shouldReturnEmptySet_whenUserIsNull() {
        Set<Item> items = bidTracker.getAllItemsWithBidFromUser(null);
        assertTrue(items.isEmpty());
    }

    @Test
    public void getAllItemsWithBidFromUser_shouldReturnEmptySet_whenUserHasNoBid() {
        Set<Item> items = bidTracker.getAllItemsWithBidFromUser(user1);
        assertTrue(items.isEmpty());
    }

    @Test
    public void getAllItemsWithBidFromUser_shouldReturnCorrectItemSet_whenUserHasBids() throws InvalidBidException {
        bidTracker.recordUserBidOnItem(new Bid(user1, 10), item1); // bid on item1
        bidTracker.recordUserBidOnItem(new Bid(user2, 20), item1);
        bidTracker.recordUserBidOnItem(new Bid(user1, 30), item1); // second bid on item1
        bidTracker.recordUserBidOnItem(new Bid(user2, 10), item2);
        bidTracker.recordUserBidOnItem(new Bid(user3, 20), item2);
        bidTracker.recordUserBidOnItem(new Bid(user3, 10), item3);
        bidTracker.recordUserBidOnItem(new Bid(user1, 20), item3); // bid on item3

        Set<Item> itemList = bidTracker.getAllItemsWithBidFromUser(user1);
        Set<Item> expectedItemList = new HashSet<>(Arrays.asList(item1, item3));
        assertEquals(expectedItemList, itemList);
    }
}

Answer:

    @Override
    public void recordUserBidOnItem(Bid bid, Item item) throws InvalidBidException {
        checkForNull(bid);
        checkForNull(item);
        recordUserBidOnItemSync(bid,item);
    }

Did you use your auto-formatter? There's a missing space between the comma and item here.

    private void addBidOnItem(Item item, Bid bid) {
        List<Bid> bidsOnItem = auctionBoard.getOrDefault(item, new ArrayList<>());
        bidsOnItem.add(bid);
        auctionBoard.put(item, bidsOnItem);
    }

This could be rewritten to use computeIfAbsent:

    auctionBoard.computeIfAbsent(item, ignored -> new ArrayList<>()).add(bid);

Only slightly shorter, but it does help.

    @Override
    public Optional<Bid> getCurrentWinningBidForItem(Item item) {
        LinkedList<Bid> bids = new LinkedList<>(getAllBidsForItem(item));
        return bids.isEmpty() ? Optional.empty() : Optional.of(bids.getLast());
    }

Here there's no real reason to wrap the bids in a linked list.

    @Override
    public Optional<Bid> getCurrentWinningBidForItem(Item item) {
        List<Bid> bids = getAllBidsForItem(item);
        return bids.isEmpty() ? Optional.empty() : Optional.of(bids.get(bids.size() - 1));
    }

You could maybe turn that into a helper function, getLastOfList.

    private void checkForNull(Item item) {
        if(item == null)
            throw new IllegalArgumentException("Item can't be null");
    }

    private void checkForNull(Bid bid) {
        if(bid == null)
            throw new IllegalArgumentException("Bid can't be null");
        if(bid.getUser() == null)
            throw new IllegalArgumentException("Bid's user can't be null");
    }

To me, these two methods hurt the most. To start with, these methods shouldn't have been implemented by you.
Use Objects.requireNonNull:

    @Override
    public List<Bid> getAllBidsForItem(Item item) {
        checkForNull(item);
        return auctionBoard.getOrDefault(item, new ArrayList<>());
    }

would become

    @Override
    public List<Bid> getAllBidsForItem(Item item) {
        Objects.requireNonNull(item);
        return auctionBoard.getOrDefault(item, new ArrayList<>());
    }

or

    @Override
    public List<Bid> getAllBidsForItem(Item item) {
        return auctionBoard.getOrDefault(Objects.requireNonNull(item), new ArrayList<>());
    }

possibly with a custom message.

Second, but that's just personal preference, although that could make you a bad fit,

    private void checkForNull(Bid bid) {
        if(bid == null)
            throw new IllegalArgumentException("Bid can't be null");
        if(bid.getUser() == null)
            throw new IllegalArgumentException("Bid's user can't be null");
    }

this right here, with the if statements without braces? That's bad form. At least, to me it is.

And then of course there's the overloading for checkForNull where it takes either Item or Bid. Even though the two are completely unrelated. That's weird. I'd personally have named them validateItem and validateBid. You could have put various business logic in there. Maybe you can't have negative bids? I dunno. It's a bit of a shame they gave you no feedback, if I was interviewing you I'd be interested in pressing you on these points.

* The reason should be explained in the message or this Exception could be subclassed by finer grain Exceptions.

You've got a typo there, should be "finer grained Exceptions." ... it's a minor issue, though.

    /**
     * The interface for a Bid Tracker.
     * This interface exposes methods allowing {@link User}s to post {@link Bid}s on {@link Item}s
     * and query the current state of the auction.
     */
    public interface BidTracker {

        /**
         * Records a bid for a given item.
         * @param bid the bid to record
         * @param item the item to bid on
         * @throws InvalidBidException when the bid is invalid
         */
        void recordUserBidOnItem(Bid bid, Item item) throws InvalidBidException;

        /**
         * @param item the item
         * @return the current winning bid (last valid bid), as an {@link Optional}, for the given item
         */
        Optional<Bid> getCurrentWinningBidForItem(Item item);

        /**
         * @param item the item
         * @return the list of all bids made for the given item
         */
        List<Bid> getAllBidsForItem(Item item);

        /**
         * @param user the user to get the list of items for
         * @return the list of all items bid on by the given user
         */
        Set<Item> getAllItemsWithBidFromUser(User user);
    }

Ooh, documentation. How much of it is useful?

    /**
     * The interface for a Bid Tracker.
     * This interface exposes methods allowing {@link User}s to post {@link Bid}s on {@link Item}s
     * and query the current state of the auction.
     */
    public interface BidTracker {

That first sentence is useless. public interface BidTracker - "The interface for a Bid Tracker." Autocomplete in IDEs tends to give you the first sentence and you've just wasted it. The second sentence is better and actually explains what a BidTracker ... Service... actually does.

    /**
     * Records a bid for a given item.
     * @param bid the bid to record
     * @param item the item to bid on
     * @throws InvalidBidException when the bid is invalid
     */
    void recordUserBidOnItem(Bid bid, Item item) throws InvalidBidException;

The method recordUserBidOnItem Records a bid for a given item. Duhhh. It's redundant. The bid is the bid to record (you know, "recordUserBid"), then item is the item to bid on ("OnItem"). This is all just duplication. Also, it throws InvalidBidException when the bid is invalid. ... All of that information you have put in the documentation was literally IN the method signature! Look at it!

    void recordUserBidOnItem(Bid bid, Item item) throws InvalidBidException;

It records a user's bid on an item. You have to provide the bid and the item. Also, it might throw InvalidBidException (presumably because the bid is invalid). And it returns void, so I guess the recording is "saving" or something. There was no new information in the javadoc, as a result the javadoc is just meaningless noise. I don't blame you, from the problem brief you have no real business rules to describe here, but still. Don't put javadoc on methods just to tick a box.

    /**
     * @param item the item
     * @return the current winning bid (last valid bid), as an {@link Optional}, for the given item
     */
    Optional<Bid> getCurrentWinningBidForItem(Item item);

And here you do it again! Although I must admit - there's one nugget of information here. A "currentWinningBid" is the "last valid bid" for a given item. But the rest is noise. It can go.

    /**
     * @param item the item
     * @return the list of all bids made for the given item
     */
    List<Bid> getAllBidsForItem(Item item);

Useless.

    /**
     * @param user the user to get the list of items for
     * @return the list of all items bid on by the given user
     */
    Set<Item> getAllItemsWithBidFromUser(User user);

That's no List, that's a Set! So why are you calling it a list? Also, the rest of the documentation here is useless too. This entire interface would have been better off without method level javadoc.

The thing is, as a junior backend developer, yeah, you'd be pretty workable. Depending on how badly I needed people, I'd hire you, if the personality was a match. I don't know what kind of job you applied to. For a more experienced job, I'd expect better. On the other hand, I don't know how much time you spent on this. An hour? Three hours? A day? If it was a day I'd think it to be a bit much, but if you did this in an hour then I'd say it'd just need a bit of polishing.

I mean, I did kinda tear into it, but most of my comments were minor. I'm not really spotting any bugs either. I think the most important thing you should take away from this review is that you should have sensible javadoc.
And, where possible, provide an explanation from your point of view when you're presenting your work. Why did you make certain choices? For instance, why does a BidTracker accept arbitrary items to be bid on, items it has never seen before? Maybe you take it as a given that you can just let a BidTracker work like that for demonstration purposes. Maybe they see it as a law of Demeter violation.
{ "domain": "codereview.stackexchange", "id": 35570, "tags": "java, interview-questions" }
Reproducing the disappearing glass experiment with normal glass
Question: There is a famous disappearing glass experiment in which you use pyrex glass (IoR=1.47) and oils with a similar IoR to create the illusion that the glass has disappeared. Is there a liquid (mixture) that can reproduce this experiment with normal glass (IoR about 1.57)?

Answer: If the glass rod is surrounded by a liquid that has the same index of refraction, the speed of light will not change as it enters the rod, meaning there'll be no refraction. If there is no refraction, the rod will appear "invisible". So the idea here is to get a liquid substance that has the same refractive index as glass, $n_g\approx 1.57$.

The table in this article lists, at standard temperature and pressure or STP (20$^\circ$ Celsius, 1 atmosphere pressure), the refractive index of benzene (C$_6$H$_6$) at $n_1 \approx 1.50$ and for carbon disulfide (CS$_2$) at $n_2\approx 1.63$, where$^1$ a mixture of these two liquids (or perhaps a mixture of any of the liquids in the list, in the correct proportions) should do the trick.

But note that the refractive index of a mixture of liquids is not obtained by the simple adding together of proportional volumes (to get the required refractive index), but rather $$n_{\text mix}^{\text id} =\frac{\phi_1\phi_2(n_1 -n_2)^2}{\phi_1n_1 +\phi_2 n_2 +\sqrt{\phi_1 n_1^2 +\phi_2 n_2^2}}$$ where $n_{\text mix}^{\text id}$ is the ideal refractive index for the mixture, and $\phi_1 , \phi_2$ are mixing fractions.

$^1$ It appears as though such a mixture, or any other mixture of liquids in the list, will be non-reactive at STP, but I cannot be too sure. C$_6$H$_6$ and CS$_2$ are both clear liquids that should inertly mix (but the former is highly flammable and the latter is a neurotoxin and highly volatile!) at STP without leading to a liquid with noticeably different optical properties.
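As a rough first pass, before worrying about the non-linear correction above, one can estimate the mixing fractions from a simple volume-weighted average of the two indices. This sketch is only a starting point (the answer's own caveat applies: real mixtures deviate from the linear rule, so the true fraction would have to be trimmed experimentally):

```python
# Rough estimate of the CS2 volume fraction needed to index-match
# ordinary glass (n ~ 1.57), using a simple volume-weighted average.
# NOTE: this linear rule is only a first approximation; as stated
# above, real mixtures deviate from it, so treat the result as a
# starting point, not an exact recipe.
n_benzene = 1.50   # refractive index of C6H6 at STP
n_cs2 = 1.63       # refractive index of CS2 at STP
n_target = 1.57    # ordinary glass

# Solve  phi * n_cs2 + (1 - phi) * n_benzene = n_target  for phi
phi_cs2 = (n_target - n_benzene) / (n_cs2 - n_benzene)
phi_benzene = 1.0 - phi_cs2

print(f"CS2 fraction ~ {phi_cs2:.2f}, benzene fraction ~ {phi_benzene:.2f}")
```

On this crude estimate the mixture is a little over half CS$_2$ by volume; in practice one would titrate toward the index match while watching the submerged rod.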
{ "domain": "physics.stackexchange", "id": 82768, "tags": "optics, experimental-physics, refraction, glass" }
Comparing SN2 reaction rates
Question: I've read in a book that the main factor for determining SN2 reaction rate is steric hindrance. The lesser it is, the faster the reaction. So consider this question: $\ce{KI}$ in acetone undergoes SN2 reaction with each of $\ce{P,Q,R}$ and $\ce{S}$. The rates of reaction vary as: I do not understand how $\ce{S}$ is the fastest. In my opinion it should be the slowest as it is very sterically hindered. What is going on here? Source: Joint Entrance Exam (JEE) 2013 India Answer: There are a number of factors that can influence the rate of an $\mathrm{S_{N}2}$ reaction. Solvent, leaving group stability, attacking group nucleophilicity, steric factors and electronic factors. In the series of compounds you've presented, all of these parameters are held constant except for the steric and electronic factors. Considering only steric factors for a moment we would order the compounds as P>S~R>Q. But now let's consider electronic factors. Considering the allyl system in compound R, the transition state looks something like this image source The p orbitals in the adjacent double bond overlap with the p orbital at the $\mathrm{S_{N}2}$ reaction center (remember that $\mathrm{S_{N}2}$ transition states are roughly $\ce{sp^2}$ hybridized) and stabilizes the transition state through resonance delocalization. Note that the transition state is electron rich, there is a full unit of negative charge in the transition state. When a carbonyl is placed next to the $\mathrm{S_{N}2}$ reaction center, we replace the $\ce{C=C}$ double bond in the above drawing with a $\ce{C=O}$ double bond. Again there is overlap and resonance stabilization of the transition state. But in the carbonyl case the stabilization is even greater than with the $\ce{C=C}$ double bond because the carbonyl carbon is positively polarized and inductively stabilizes the electron rich (remember that negative charge) transition state even more. 
image source While we would expect the allyl case (R) to be faster than the ethyl case, in your series of compounds apparently it is not faster than the methyl case. (Note: often the allyl system will react a bit faster than the methyl analogue, but the two rates are usually close) However, the additional inductive stabilization from the carbonyl allows it to react faster than the methyl compound so our final reactivity order becomes S>P>R>Q.
{ "domain": "chemistry.stackexchange", "id": 3026, "tags": "organic-chemistry, reaction-mechanism, reactivity, nucleophilic-substitution" }
Change of temperature in the decomposition of an ideal diatomic gas into a monoatomic one at constant volume
Question: Let's suppose we have $N$ molecules of a diatomic ideal gas at temperature $T_1$ and pressure $P_1$ inside a constant volume $V$. The system is isolated from the surroundings so that the internal energy remains constant. The diatomic gas reacts, decomposing into a monoatomic one, according to this reaction: $A_2\rightarrow2A+Q$, where $Q$ is the heat released in the reaction (which is exothermic). I need to find the expression for the final temperature of the system, $T_2$, once all the molecules of the diatomic gas have reacted. I know that for ideal gases $U=nc_VT$, with $U$ the internal energy of the gas and $c_V$ its molar heat capacity, and that $c_V$ is different for diatomic and monoatomic gases. However, by combining these formulas I don't get the right solution. How could this be solved?

Answer: The heat of reaction of an ideal gas, $\Delta H_R(T_0)$, is defined as the change in enthalpy in going from pure reactants to pure products, holding the temperature $T_0$ constant by adding an amount of heat Q equal to $Q=\Delta H_R(T_0)$. So, for an exothermic reaction, $\Delta H_R(T_0)$ is negative. For the same change from reactants to products at constant $T_0$, the change in internal energy is $$\Delta U_R(T_0)=\Delta H_R(T_0)-\Delta (PV)=\Delta H_R(T_0)-(\Delta n)RT_0$$ In the present case, $\Delta n=1\ mole$. For the case of a reaction carried out adiabatically at constant volume, we need to apply Hess' law to get the change in temperature. This leads to $$\Delta U=\Delta U_R(T_0)+n_pC_{vp}(T-T_0)=0$$ where $n_P$ is the number of moles of product in the reaction formula and $C_{vp}$ is the weighted average molar heat capacity at constant volume of the product mixture. So, in the present case, $n_p=2$ and $C_{vp}$ is the molar heat capacity at constant volume of A.
So, combining the above two equations, we would have: $$2C_{vA}(T-T_0)=-\Delta H_R(T_0)+RT_0$$ This development depends on the fact that, for an ideal gas, both U and H are independent of pressure and specific volume.
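Putting the final relation to work numerically: the sketch below solves $2C_{vA}(T-T_0)=-\Delta H_R(T_0)+RT_0$ for $T$, taking $C_{vA}=\tfrac{3}{2}R$ for the ideal monoatomic product. The values of $T_0$ and $\Delta H_R$ are illustrative assumptions, not given in the question:

```python
# Final temperature of the adiabatic, constant-volume decomposition
# A2 -> 2A, from  2*C_vA*(T - T0) = -dH_R(T0) + R*T0  derived above.
# T0 and dH_R below are illustrative assumptions only.
R = 8.314          # J/(mol K)
C_vA = 1.5 * R     # molar Cv of the ideal monoatomic product A
T0 = 300.0         # K, initial temperature (assumed)
dH_R = -100e3      # J/mol, heat of reaction, negative = exothermic (assumed)

T = T0 + (-dH_R + R * T0) / (2 * C_vA)
print(f"T_final ~ {T:.0f} K")
```

For an exothermic reaction ($\Delta H_R<0$) the temperature necessarily rises, since both terms in the numerator are positive.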
{ "domain": "physics.stackexchange", "id": 65800, "tags": "homework-and-exercises, thermodynamics" }
Nuclear reactor control rods
Question: What is the relation between the control rod surface exposed inside a nuclear reactor and neutron energy? Is it linear? I mean, how does the neutron absorption rate change with the progressive immersion of control rods into the core?

Answer: What is the relation between the control rod surface exposed inside a nuclear reactor and neutron energy?

You mention control rod surface. Why? Do you understand how control rods work? They're inserted into the core and absorb neutrons. Now, why would the surface be a metric that matters? Perhaps this was just the most obvious thing to you at the time, so let me get into the complications here.

Cross sections are higher at thermal versus fast energies. The fast and thermal neutron flux both matter a great deal for typical light water reactors. The main difference is that the average path length of a neutron (before interacting) is much longer for fast neutrons than thermal neutrons. Now, since we've made this distinction, we can ask if the control rods absorb a significant fraction of the neutrons incident on their surface. The answer is mostly "yes" for thermal energies and "no" for fast energies. See, because the thermal flux has a much smaller mean free path, it is much bumpier because of the presence of different materials, including fuel, moderator, and absorbers. It would be mostly correct to suppose that some significant fraction of a thermal neutron beam, in percentage terms, is absorbed after entering the control rod surface. As an order of magnitude guidance, it would be more than 1%. The same is probably not true for the fast flux. Furthermore, according to transport/diffusion physics, if you were to reduce the fast flux by a significant percentage, you would necessarily suppress the fast flux in the entire region of the core.
Anyway, let me address your question by saying that under the unphysical assumption that a control rod absorbed all neutrons incident on its surface, its reactivity contribution would be proportional to its surface area, relying on a few other assumptions as well. Notably, if you have too many control rods in the same small area they will depress the flux around each other, so it's not 100% valid. As I've already pointed out, this doesn't fit current reactors.

Let's look at the other extreme. Say, for instance, that the control rods absorbed a negligible fraction of the flux relative to the total flux. This is much closer to normal reactor conditions. In this case, the reactivity contribution would be proportional to the volume of the control rods times the local flux value, and this is shown mathematically in several nuclear engineering textbooks using perturbation theory.

Is it linear?

Again, use perturbation theory and it's linear for small movements. But what does this physically correlate to? My answer is that it is with respect to the total amount of absorbing material times the local flux where it exists. This is because the control rods don't significantly affect the shape of the flux. In the thermal energies, however, this is a mediocre assumption, which is why people use actual computer codes to design reactors.

(math warning) Variables:

- Neutron flux $\phi$
- Absorption cross section $\sigma$
- Absorber atom number density $N$
- Rod cross-sectional area $A$
- Linear distance rod is moved $\Delta l$

Let's speak of an isotropic neutron flux of a single energy. Then the flux will be of a certain physical shape $\phi(\vec{r})$ and the entire point of using the words "perturbation theory" is to refer to the assumption that $\phi(\vec{r})$ doesn't change with the insertion of the control rod.
If the end of a control rod is at some given $\vec{r}$ in the core, then the change in reactivity due to movement of the rod will be $\phi(\vec{r}) N \sigma A \Delta l$, which will come out to units of $1/s$, representing the number of neutrons absorbed per second from the volume of the absorber element introduced. This affects the power of the core through a concept called the multiplication factor, which is how much the neutron chain reaction grows or declines for each neutron generation. $$k = \frac{ \text{Fission neutrons created} }{\text{Neutrons born} }$$ The absorber removes neutrons from the population that could cause a fission and thus create more neutrons, so it can be said to subtract from the numerator of that equation. The reactor changes flux (and thus power) over time as: $$\phi(t) \propto e^{t \frac{k-1}{\Lambda} }$$ You need not concern yourself with the details other than the fact that inserting absorbers causes the flux to decrease over time. (end math)

I mean, how does the neutron absorption rate change with the progressive immersion of control rods into the core?

This is the most objective part of the question. According to my prior arguments, the differential reactivity contribution depends on the flux value at the end of the rod being inserted. So let me go back to the linearity question. The reactivity contribution from control rods is highly nonlinear with respect to control rod position, because the flux in the core changes dramatically with vertical position. AdamRedwine points out, correctly, that the flux is well-smoothed by core design, but that statement is specific to the xy-plane. This is not true for the z-direction, where it is nearly a cosine shape for a PWR, something else for the complicated thermal-hydraulic and neutronic feedback of the BWR. Here is an example of a differential control rod worth curve.
It is greater than zero at the top and bottom of the core because some neutrons do leave the core, bounce off hydrogen, and then enter the core again to cause fission. For any small control rod movement it's linear. Sure. That follows from the fact that the above graph is continuous. The fact also implies that the integral control rod worth curve is smooth.
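The point-kinetics relation quoted in the math section can be illustrated with a toy calculation. The values of $k$ and $\Lambda$ below are made-up demonstration numbers, not taken from any real core:

```python
import math

# Toy illustration of phi(t) ~ exp(t*(k-1)/Lambda): inserting absorber
# (k < 1) makes the flux decay, withdrawing it (k > 1) makes it grow.
# Lambda and k values are arbitrary assumptions for demonstration.
Lam = 1e-4  # s, neutron generation time (assumed)

def flux(t, k, phi0=1.0):
    """Relative flux after time t for multiplication factor k."""
    return phi0 * math.exp(t * (k - 1.0) / Lam)

phi_subcritical = flux(0.01, k=0.999)    # rods in: flux falls
phi_supercritical = flux(0.01, k=1.001)  # rods out: flux rises
print(phi_subcritical, phi_supercritical)
```

Even a reactivity change of one part in a thousand moves the flux noticeably within hundredths of a second here, which is why (in real cores) delayed neutrons, absent from this toy model, are what make control practical.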
{ "domain": "physics.stackexchange", "id": 73127, "tags": "nuclear-physics, nuclear-engineering" }
Impact force in a fall
Question: I'm a climber and I constructed myself an anchor that I fixed to a rock wall. To test it, I hooked to it a steel cable 12 mm in section with a length of 2.8 m, and a concrete block of 30 kg at the other tip. I then dropped it from anchor level and it held. I am now wondering what kind of impact force was developed in this test. Can you help me please?

Answer: When the mass reaches its lowest point, the steel wire will have increased in length from $L$ to $L+x$. So equating the strain energy of the wire with the initial gravitational potential energy of the block: $$\frac{1}{2}kx^2 = mg(L+x) \approx mgL $$ which rearranges to $$ x = \sqrt{\frac{2mgL}{k}} $$ Note that $$ k = \frac{EA}{L} $$ where $E$ is Young's Modulus (about 200 GPa for steel), $A$ is the cross-sectional area of the wire, and $L$ is the length. The largest force that acts is then $$ f=kx=\sqrt{2mgEA} $$ I'll let you plug the numbers in.
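Plugging the question's numbers into the last expression, assuming $E \approx 200$ GPa and treating the 12 mm cable as a solid circular cross-section (a real stranded wire rope is less stiff than a solid rod, so this is an upper estimate):

```python
import math

# Peak force f = sqrt(2*m*g*E*A) for the rigid-anchor steel-cable drop.
# Assumptions: E = 200 GPa for steel, and the 12 mm cable treated as a
# solid circular cross-section (a stranded rope is less stiff, so this
# somewhat over-estimates the force).
m = 30.0    # kg, concrete block
g = 9.81    # m/s^2
E = 200e9   # Pa, Young's modulus of steel (assumed)
d = 0.012   # m, cable diameter
L = 2.8     # m, cable length

A = math.pi * d**2 / 4            # cross-sectional area
k = E * A / L                     # cable stiffness
x = math.sqrt(2 * m * g * L / k)  # maximum stretch
f = k * x                         # peak force, equals sqrt(2*m*g*E*A)

print(f"stretch ~ {x * 1000:.1f} mm, peak force ~ {f / 1000:.0f} kN")
```

Note that the peak force is independent of the cable length $L$ (it cancels in $f=\sqrt{2mgEA}$), and with these assumptions it comes out on the order of a hundred kN, which is why climbers fall on stretchy dynamic ropes rather than steel cable.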
{ "domain": "physics.stackexchange", "id": 16588, "tags": "newtonian-mechanics, momentum, free-fall" }
How can we ignore the diverging term $e^\infty$ in the integral?
Question: In Question (2.20) of Griffiths' Quantum Mechanics book, they have given this Solution. In the Solution of question 2.20(b), they omitted $e^{(ik-a) \infty}$ (or may have considered $e^{(ik-a) \infty}=0$) in this calculation. How can that be correct at all?

Answer: You can rewrite the indefinite integral as \begin{equation} e^{-a x} f(x) \end{equation} where $f(x)$ does not grow exponentially with $x$. (In fact $f(x) \sim e^{\pm i k x}$ is an oscillating function). In the limit $x\rightarrow \infty$, we have that $e^{-a x} f(x) \rightarrow 0$, since $e^{-ax} \rightarrow 0$ and $f(x)$ doesn't grow fast enough to cancel this behavior. This assumes ${\rm Re}( a )> 0$.
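A quick numerical check of this: truncating the integral of $e^{(ik-a)x}$ at a large upper limit and comparing against the closed form $1/(a-ik)$ for arbitrary test values $a>0$ and real $k$:

```python
import numpy as np

# Numerically verify that, for Re(a) > 0, the integral of
# exp((i*k - a)*x) from 0 to infinity converges to 1/(a - i*k):
# the e^{-a x} envelope kills the boundary term at infinity.
# a and k are arbitrary test choices.
a, k = 0.5, 2.0
x = np.linspace(0.0, 100.0, 400_001)  # e^{-a*100} ~ 2e-22, tail negligible
dx = x[1] - x[0]
integrand = np.exp((1j * k - a) * x)

# composite trapezoid rule over the truncated interval
numeric = np.sum((integrand[:-1] + integrand[1:]) / 2) * dx
analytic = 1.0 / (a - 1j * k)

print(abs(numeric - analytic))       # tiny
print(abs(integrand[-1]))            # the "boundary term" really is ~0
```

The magnitude of the integrand at the upper limit is $e^{-ax}$, which is why discarding $e^{(ik-a)\infty}$ is justified whenever ${\rm Re}(a)>0$.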
{ "domain": "physics.stackexchange", "id": 87969, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, fourier-transform, integration" }
Console Calculator in C#
Question: This is a calculator I've made in C#. Is there any way to improve it? Surely there is a way to get rid of all the ifs that are nested inside one another.

    namespace CalculatorCA
    {
        using System;
        using System.Collections.Generic;

        class Program
        {
            static void Main()
            {
                Dictionary<int, Func<double, double, double>> MyDict = new Dictionary<int, Func<double, double, double>>();
                FillDictionary(MyDict);
                bool KeepOn = true;
                while (KeepOn)
                {
                    Console.WriteLine("Enter the first number!");
                    double first;
                    if (double.TryParse(Console.ReadLine(), out first))
                    {
                        Console.WriteLine("Please choose one of the following:\n1-Add\n2-Substract\n3-Multiply\n4-Divide");
                        int choice;
                        if (int.TryParse(Console.ReadLine(), out choice))
                        {
                            if (MyDict.ContainsKey(choice))
                            {
                                Console.WriteLine("Enter the second number!");
                                double second;
                                if (double.TryParse(Console.ReadLine(), out second))
                                {
                                    Console.WriteLine("The result is : {0}", MyDict[choice](first, second));
                                }
                            }
                            else
                            {
                                Console.WriteLine("Invalid operation!");
                            }
                        }
                    }
                    Console.WriteLine("Continue?[Y/N]");
                    KeepOn = Console.ReadKey().Key == ConsoleKey.Y;
                    Console.Clear();
                }
            }

            public static void FillDictionary(Dictionary<int, Func<double, double, double>> myDictionary)
            {
                myDictionary.Add(1, (x, y) => x + y);
                myDictionary.Add(2, (x, y) => x - y);
                myDictionary.Add(3, (x, y) => x * y);
                myDictionary.Add(4, (x, y) => x / y);
            }
        }
    }

Answer: That's a pretty concise routine there. If you had to change it, I would add methods to parse the user input, so your program logic is less nested and we get back to the good old principle of each method having a single purpose. Normally in a simple app that has a menu structure I would say use case logic instead of if statements, as this will allow you to easily extend the options later, but your clever use of the dictionary of functions does away with this altogether.
    namespace CalculatorCA
    {
        using System;
        using System.Collections.Generic;

        class Program
        {
            static void Main()
            {
                Dictionary<int, Func<double, double, double>> MyDict = new Dictionary<int, Func<double, double, double>>();
                FillDictionary(MyDict);
                bool KeepOn = true;
                while (KeepOn)
                {
                    double first = ReadInput("Enter the first number!");
                    int choice = ReadInput("Please choose one of the following:\n1-Add\n2-Substract\n3-Multiply\n4-Divide", 1, MyDict.Count);
                    double second = ReadInput("Enter the second number!");
                    Console.WriteLine("The result is : {0}", MyDict[choice](first, second));

                    Console.WriteLine("Continue?[Y/N]");
                    KeepOn = Console.ReadKey().Key == ConsoleKey.Y;
                    Console.Clear();
                }
            }

            private static double ReadInput(string prompt)
            {
                Console.WriteLine(prompt);
                double userInput;
                if (double.TryParse(Console.ReadLine(), out userInput))
                    return userInput;
                Console.WriteLine(" - Sorry, I expected a double, try again");
                return ReadInput(prompt);
            }

            private static int ReadInput(string prompt, int? min = null, int? max = null)
            {
                Console.WriteLine(prompt);
                int userInput;
                if (int.TryParse(Console.ReadLine(), out userInput))
                {
                    if (userInput >= min.GetValueOrDefault(int.MinValue) && userInput <= max.GetValueOrDefault(int.MaxValue))
                        return userInput;
                    Console.WriteLine(" - Sorry, I expected an integer, between {0} and {1}", min, max);
                    return ReadInput(prompt, min, max);
                }
                Console.WriteLine(" - Sorry, I expected an integer, try again");
                return ReadInput(prompt, min, max);
            }

            public static void FillDictionary(Dictionary<int, Func<double, double, double>> myDictionary)
            {
                myDictionary.Add(1, (x, y) => x + y);
                myDictionary.Add(2, (x, y) => x - y);
                myDictionary.Add(3, (x, y) => x * y);
                myDictionary.Add(4, (x, y) => x / y);
            }
        }
    }
{ "domain": "codereview.stackexchange", "id": 15384, "tags": "c#, beginner, console, calculator" }
How to get separate histogram plots on the basis of the column value? How to detect which plot has the most deviation?
Question: I have a data frame (df) which has correlations calculated for different genes with respect to different ID combinations. I want to get a separate histogram plot based on the gene name (a separate plot for each gene). Also, after plotting, I want to know which genes have more deviation in their correlation values (i.e. more deviated from the median in each plot).

Example:

    df
    Gene  ID_1        ID_2        Correlation
    TNF   1175077258  1175075956  0.626115
    TNF   1175077261  1175075956  0.542499
    TNF   1175077261  1175077258  0.50113
    TNF   1175077263  1175075956  0.431587
    TNF   1175077263  1175077258  0.734858
    TNF   1175077263  1175077261  0.622206
    CCL2  952399885   952399878   0.632703
    CCL2  952399894   952399878   0.622244
    CCL2  952399894   952399885   0.697313
    CCL2  952399923   952399878   0.687826
    CCL2  952399923   952399885   0.612089
    CCL2  952399923   952399894   0.603834
    IL10  1068768000  1068767927  0.118955
    IL10  1068768147  1068767927  0.32038
    IL10  1068768147  1068768000  0.430287
    IL10  1068768409  1068767927  0.335264
    IL10  1068768409  1068768000  0.406426
    IL10  1068768409  1068768147  0.546452

I tried the below code, which only works when combining all genes together.

    import matplotlib.pyplot as plt
    import seaborn as sns

    plt.rcParams["figure.figsize"] = (8,6)
    g1 = sns.histplot(df.Correlation)
    g1.set(
        xlim=(-1,1),
        title="correlation"
    )
    g1.axvline(0.0, c='black', linestyle="--")
    g1.axvline(df.Correlation.median(), c='red', linestyle="--")

The result/plot I am expecting is this (I have created this example plot in Excel)

Answer: You could use the groupby() pandas function to group the dataframe by gene name, and then just loop through each group to plot the correlation values.
I think you can use the mean of the ratio (deviation / median) to compare deviation for different genes:

    import matplotlib.pyplot as plt

    for name, group in df.groupby('Gene'):
        group = group.copy()  # work on a copy to avoid SettingWithCopyWarning below
        median = group['Correlation'].median()
        deviation = abs(group['Correlation'] - median)
        ratio = deviation / median
        ratio_mean = ratio.mean()

        id_1 = group['ID_1'].astype(str)
        id_2 = group['ID_2'].astype(str)
        group['Combination'] = id_1 + '/' + id_2

        plt.figure(figsize=(8, 6))
        plt.bar(group['Combination'], group['Correlation'])
        plt.xlabel('Combination', fontsize=12, fontweight='bold')
        plt.ylabel('Correlation', fontsize=12, fontweight='bold')
        plt.xticks(rotation=45, ha="right")
        plt.title(f'{name} (deviation ratio: {ratio_mean.round(3)})')
        plt.tight_layout()
        plt.gca().spines['top'].set_visible(False)
        plt.gca().spines['right'].set_visible(False)
        plt.show()
{ "domain": "bioinformatics.stackexchange", "id": 2377, "tags": "python, visualization, correlation, pandas" }
Why hasn't evolution gotten rid of the appendix yet?
Question: A figure I have recently stumbled upon suggests that about 7% of the world population will, or have, had appendicitis in their lifetime. Cutting out the appendix was impossible until very recently. Even if the appendix only just started, in evolutionary terms, being infectable, it would still be capable of wiping out an entire population in no time at all, meaning some other population would adopt an alternative. No matter what the benefits, the costs would be detrimental and most definitely outweigh the former. We've had at least a million years (correct me here if I'm wrong) to get rid of the appendix, but we didn't, even though it kept on killing. It seems very counterintuitive that it's still here. Can anyone explain, please?

Edit: I recognise that an appendix serves some purpose to the body. My question is: aren't the benefits much less significant than the costs?

Answer: Appendicitis appears to be a fairly modern disease, likely caused by refined diets leading to slow bowel transit times resulting from low fibre content. We have some evidence to suggest that this is the case: no Bantu people were recorded as having any episodes of hospital admission for appendicitis in [1].

    Hospital records from three Johannesburg hospitals were reviewed in Erasmus' study, from 1929-1937. Appendicitis and overall admissions were recorded according to racial groups. Using these figures, 4% of white inpatients were admitted for appendicitis, compared to 1% of coloureds and 0% of black inpatients. Although these figures do not represent the true incidence rates in the general population, as hospital access and utilisation patterns are likely to have differed between the three racial groups, the important possibility of differential incidences between racial groups was raised. These trends were supported by findings from audits in Cape Town and Upington.
[2]

The change in diet in urban African natives has also led to the appearance of diverticular disease, which was previously unknown in that population. Diverticulitis is also an infection of a colonic pouch. [3]

So, it seems these are both modern diseases of low incidence, so both these factors mean little selection pressure exists, especially with modern medicine and surgery.

1. Erasmus JPF. The incidence of appendicitis in the Bantu. S Afr Med J. 1939;13:601-607.
2. Acute appendicitis in South Africa: a systematic review. 2015.
3. Emergence of diverticular disease in the urban South African black.

Disclaimer: I have had my appendix removed.
{ "domain": "biology.stackexchange", "id": 8446, "tags": "evolution, human-anatomy, human-evolution, population-biology" }
What will be the product if ethane reacts with SO2?
Question: At first I thought it was Reed's reaction, but then I looked up the reaction and the reactant was actually $\ce{SO2Cl2}$ (sulfuryl chloride). I couldn't find the product which would be produced when any alkane reacts with $\ce{SO2}$. Does it react, and what would be the product?

Answer: Alkanes can react with $\ce{SO_2}$ if the reaction is photoinitiated, via a hydrogen abstraction to form $\ce{HOSO^.}$ and organic radical $\ce{(R^.)}$ species. That reactivity is correlated with the energy required to break a C–H bond and the length of the alkane chain. You can find references here.
{ "domain": "chemistry.stackexchange", "id": 17669, "tags": "organic-chemistry, reaction-mechanism" }
Relying on short-circuit evaluation instead of using the IF control structure
Question: Examples of where I've started migrating to short-circuit evaluation: PHP $id=0;//initialized in case of no result $r=mysql_query(SQL); if($r && mysql_num_rows($r)>0){ list($id)=mysql_fetch_row($r); } becomes $id=0; $r=mysql_query(SQL); $r && mysql_num_rows($r)>0 && list($id)=mysql_fetch_row($r); JavaScript var Req=jQuery.ajax(OBJECT); //Something has happened and now i want to abort if possible if(Req && Req.abort){ Req.abort(); } becomes var Req=jQuery.ajax(OBJECT); //Something has happened and now i want to abort if possible Req && Req.abort && Req.abort(); I'm familiar with the short if style of if(Req && Req.abort) Req.abort(); but it feels clunkier than short-circuit evaluation. Answer: This is an abuse of short-circuit evaluation. It's not immediately obvious what the intent is. Sure, I can eventually see that you're getting an ID, calling Req.abort(), whatever. But I shouldn't have to decipher a line of code first to figure out what it does. Conciseness never trumps readability in source code. (Sure, minifying/packing JS has its benefits. But you don't write packed/minified code; you start out with decent code and then run a tool on it.) Since your intent is to do the one thing if the other stuff is true, the code should say that.
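The reviewer's point, that the code should state its precondition rather than make the reader decipher a chain, can be sketched in Python (a hypothetical request handle standing in for the jQuery object; names are illustrative, not from the question):

```python
class Request:
    """Hypothetical stand-in for an abortable AJAX request handle."""
    def __init__(self):
        self.aborted = False

    def abort(self):
        self.aborted = True

def safe_abort(req):
    # Guard clause: the precondition is stated up front, not buried
    # inside a short-circuit chain.
    if req is None or not hasattr(req, "abort"):
        return False
    req.abort()
    return True

# Short-circuit equivalent -- legal, but the reader must decipher it first:
# req is not None and hasattr(req, "abort") and req.abort()
```

Either form behaves the same; the guard clause simply reads as what it is.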
{ "domain": "codereview.stackexchange", "id": 8303, "tags": "javascript, php" }
Finding the minimum of two numbers
Question: For this flowchart on finding the minimum of 2 numbers: I've created XML for this flowchart: <start> <InOut> <in> a </in> <in> b </in> <out> min </out> </InOut> <calc> <ins> a=3 </ins> <ins> b=5 </ins> <ins> min=a </ins> </calc> <con gurd="min>b" type="if"> <calc> <ins>min=b</ins> </calc> </con> <con gurd="min>b" type="else"> <calc> <ins>min=a</ins> </calc> </con> </start> But I don't think it's good XML. What are your suggestions for improving it? Answer: There are some confusing aspects to your XML, and some practices that you may end up appreciating..... Technical Your XML is valid, but it does not follow best practices. The XML: <con gurd="min>b" type="if"> This should be written: <con gurd="min&gt;b" type="if"> Even though your XML is valid, the > in the attribute value can lead to confusion. Even the XML Syntax parser here on Stack Exchange is failing to parse that part right.... Naming XML is by nature relatively verbose. You are already 'paying' for that, so you may as well use it to your advantage. con should be condition, ins should be ..... instruction ? What is gurd ? Additionally, you should be consistent in cases. InOut should be inputoutput or something. Mixing cases for XML tag names is a problem. I recommend making it all lowercase. I would actually call it 'interface', and have 'input' and 'output' as child elements. Functional When you use XML, you need a parser. There is always a balance between what you parse, and what you let the parser parse. The 'text' component in the XML elements is technically called PCDATA (Parsed Character Data). The XML parser will parse that data looking for more XML. That's all it does. Your actual data has things like: min=a Is your application manually parsing that content as well? If it is, you should probably make your text content attribute values instead. Additionally, the white-space in your element text is scattered all over the place. 
Consider: <calc> <ins> a=3 </ins> <ins> b=5 </ins> <ins> min=a </ins> </calc> <con gurd="min>b" type="if"> <calc> <ins>min=b</ins> </calc> </con> In your first <ins> elements you have space surrounding the text. In your later <ins> there is no space. Is the space important? If it is, you should either use attributes, or set xml:space="preserve" on your XML. Update FYI, I ran your XML through xmllint and it output your XML as: <?xml version="1.0"?> <start> <InOut> <in> a </in> <in> b </in> <out> min </out> </InOut> <calc> <ins> a=3 </ins> <ins> b=5 </ins> <ins> min=a </ins> </calc> <con gurd="min&gt;b" type="if"> <calc> <ins>min=b</ins> </calc> </con> <con gurd="min&gt;b" type="else"> <calc> <ins>min=a</ins> </calc> </con> </start> xmllint --format your.xml is a fantastic commandline tool that validates and reformats your XML to be consistent. I recommend it to anyone using XML that has access to Linux
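The escaping point can be checked mechanically. A small Python sketch (using the standard library's ElementTree, not a tool from the answer) shows that both spellings parse to the same attribute value, and that round-tripping emits the escaped best-practice form:

```python
import xml.etree.ElementTree as ET

raw     = '<con gurd="min>b" type="if"/>'     # well-formed XML, but confusing
escaped = '<con gurd="min&gt;b" type="if"/>'  # the recommended spelling

a = ET.fromstring(raw)
b = ET.fromstring(escaped)
# A conforming parser decodes both to the same attribute value:
same = a.get("gurd") == b.get("gurd") == "min>b"

# Serializing escapes the '>' again, matching the best-practice form:
serialized = ET.tostring(a, encoding="unicode")
```

A raw `>` is allowed in attribute values by the XML spec (only `<` and `&` are forbidden there), which is why the original validates even though it trips up some tooling.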
{ "domain": "codereview.stackexchange", "id": 8497, "tags": "xml, xsd" }
How do I create a virtual NAO?
Question: How do I create a virtual NAO? I'm interested in simulating NAO, but I get errors. Install http://wiki.ros.org/nao/Installation/remote sudo apt-get install ros-hydro-nao-robot When I use roslaunch nao_bringup nao_sim.launch I get this at the end: ResourceNotFound: xacro ROS path [0]=/opt/ros/hydro/share/ros ROS path [1]=/home/vincent/NAO/nao_robot-devel ROS path [2]=/home/vincent/NAO/nao_robot ROS path [3]=/home/vincent/NAO/nao_gazebo_lastes ROS path [4]=/home/vincent/NAO/nao_gazebo_plugin ROS path [5]=/home/vincent/NAO/nao_extras ROS path [6]=/home/vincent/NAO/nao_description ROS path [7]=/home/vincent/NAO/nao_common ROS path [8]=/home/vincent/NAO/humanoid_msgs ROS path [9]=/opt/ros/hydro/share ROS path [10]=/opt/ros/hydro/stacks When continuing with another tutorial: http://wiki.ros.org/nao/Installation/compileFromVirtualNao Using sudo bash -c 'echo app-portage/gentoolkit >> /etc/portage/package.keywords' sudo bash -c 'echo dev-python/setuptools >> /etc/portage/package.keywords' I get a similar error: bash: /etc/portage/package.keywords: No existe el archivo o el directorio (No such file or directory) A lot of thanks!! Originally posted by vncntmh on ROS Answers with karma: 52 on 2014-09-03 Post score: 0 Answer: Hi, Running nodes from a remote computer is different from compiling from a virtual NAO. To start, I would suggest that you try to run your nodes remotely to get used to ROS and NAO before doing more complicated stuff. The following error: ResourceNotFound: xacro means that your computer cannot find the ROS package xacro (rospack find xacro). This package should be in the following directory: /opt/ros/hydro/share/xacro. If you cannot find it, install it using sudo apt-get install ros-hydro-xacro Compiling from a virtual NAO is something totally different. Aldebaran provides a virtual machine that virtualizes NAO's hardware and OS. This way you can compile nodes to run directly on your NAO and not on your computer. On the Gentoo virtual machine there is a folder /etc/portage. 
That kind of folder doesn't exist on a standard Ubuntu desktop like the one you're using. That's why it says that /etc/portage cannot be found. Hope this helps, Originally posted by marguedas with karma: 3606 on 2014-09-15 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 19292, "tags": "nao" }
What does it mean that particles are the quanta of fields?
Question: I saw the question What are field quanta? but it's a bit advanced for me and probably for some people who will search for this question. I learned QM but not QFT, but I still hear all the time that "particles are the quanta of fields" and I don't really understand what it means. Is there a simple explanation for people who know QM but not QFT? Answer: In terms of measurements of the field, we can regard the vacuum state as the state that has the least correlations between measurements of the field in different places. At an elementary level, we can call the vacuum state the zero-quantum field state. The statistics of field measurements in the vacuum state are also highly symmetric, being isotropic, homogeneous, and invariant under Lorentz boosts. As we add more quanta to the quantum field state, the correlations between field measurements in different places become more complex and less symmetrical. There are many significant differences between a classical field and a quantum field, but the most basic is that instead of talking just about the field at a particular place, we talk about the probability that the field will take one value or another, at many different places all at once. As a result, instead of describing modulations of a classical field --how a particular configuration of the field is different from the zero field--, we have to describe modulations of all the correlations of the field --how a particular configuration of the quantum field is different from the vacuum state. A classical field is a function of just one point in space time, something like $\phi(x)$, but a quantum field can best be understood (IMO) in terms of a function of many points, $W(x_1,x_2,...,x_n)$, that describes the correlations between the field at $n$ different places. For the vacuum state, these functions are called the Vacuum Expectation Values (VEVs), but we can equally well construct a similar function in any state. 
A quantum field operator $\hat\phi(x)$ describes how to construct correlation functions for all different $n$, in any state that we can construct in the Hilbert space of quantum field states. The exact changes that are made to the VEVs by the action of a single quantum field operator are in a mathematical sense elementary operations on the discrete structure that ultimately derives from the fact that we don't talk about 2½-point correlation functions, for example, we only talk about 2-point, 3-point, ..., $n$-point correlation functions. A single quantum of the field is often taken to be associated with a given frequency, but it can in general be associated with an arbitrarily complicated modulation of the correlation functions, at many different frequencies. The significance of this is that a quantum field operator is not just "add one quantum" (though it is that), it also says where in space-time to add the quantum, $\hat\phi(x)$. A single quantum, however, may be described as a superposition of different frequencies, or it may equally well be described as a superposition of different positions. In particular, $\hat\phi(x_1)+\hat\phi(x_2)$ is in many ways as good as a single quantum field operator as is $\hat\phi(x)$. The mathematics of creation and annihilation operators is given in answers to the question you linked to, in very bare form, so if that's not what you want then you must want something different. To put the above into two bullet points, $\bullet$ quantum field operators modulate the correlations that are present in the vacuum state, and $\bullet$ correlations are an intrinsically discrete concept because they are constructed as correlations between 2, 3, or any integer number of measurements. There is quite a lot of detail missing from the above, of course; I would particularly single out the issue of how to understand the consequences of measurement incompatibility at time-like separation. 
I have attempted not to speak too much beyond your question. If you remember your QM well enough, BTW, the quantized simple harmonic oscillator (SHO) can be regarded as the mathematical foundation of QFT; the quantum field can be constructed as an infinite number of interacting SHOs (albeit non-rigorously). But you can get that from many of the textbooks.
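The closing remark about the SHO as the foundation of QFT can be made concrete with the standard textbook mode expansion of a free scalar field (general background, not specific to this answer's correlation-function viewpoint):

```latex
\hat\phi(x) = \int \frac{d^3k}{(2\pi)^3}\,\frac{1}{\sqrt{2\omega_{\mathbf k}}}
\left(\hat a_{\mathbf k}\,e^{-ik\cdot x} + \hat a^\dagger_{\mathbf k}\,e^{+ik\cdot x}\right),
\qquad
[\hat a_{\mathbf k},\hat a^\dagger_{\mathbf k'}] = (2\pi)^3\,\delta^{(3)}(\mathbf k-\mathbf k')
```

Each mode $\mathbf k$ is an independent harmonic oscillator; $\hat a^\dagger_{\mathbf k}$ adds one quantum of definite momentum, and acting with $\hat\phi(x)$ superposes such additions over all momenta, which is the sense in which a single quantum can be a superposition of different frequencies or positions.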
{ "domain": "physics.stackexchange", "id": 3562, "tags": "quantum-mechanics, quantum-field-theory" }
Provable statements about genetic algorithms
Question: Genetic algorithms don't get much traction in the world of theory, but they are a reasonably well-used metaheuristic method (by metaheuristic I mean a technique that applies generically across many problems, like annealing, gradient descent, and the like). In fact, a GA-like technique is quite effective for Euclidean TSP in practice. Some metaheuristics are reasonably well studied theoretically: there's work on local search, and annealing. We have a pretty good sense of how alternating optimization (like k-means) works. But as far as I know, there's nothing really useful known about genetic algorithms. Is there any solid algorithmic/complexity theory about the behavior of genetic algorithms, in any way, shape or form ? While I've heard of things like schema theory, I'd exclude it from discussion based on my current understanding of the area for not being particularly algorithmic (but I might be mistaken here). Answer: Y. Rabinovich, A. Wigderson. Techniques for bounding the convergence rate of genetic algorithms. Random Structures Algorithms, vol. 14, no. 2, 111-138, 1999. (Also available from Avi Wigderson's home page)
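One concrete example of a provable statement: the (1+1) evolutionary algorithm on the OneMax problem (maximize the number of 1-bits in a length-$n$ string) has a rigorously proven expected optimization time of $\Theta(n \log n)$. A minimal sketch of that algorithm (illustrative, not taken from the cited paper):

```python
import random

def one_plus_one_ea(n, max_iters=200_000, seed=0):
    """(1+1) EA on OneMax: flip each bit independently with probability
    1/n, keep the mutated child if it is no worse than the parent.
    One of the few evolutionary algorithms with rigorous runtime bounds:
    expected optimization time is Theta(n log n)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for t in range(max_iters):
        if sum(x) == n:
            return x, t  # reached the optimum after t mutations
        y = [bit ^ (rng.random() < 1.0 / n) for bit in x]
        if sum(y) >= sum(x):
            x = y
    return x, max_iters
```

With n = 32 the optimum is typically reached after a few hundred mutations, consistent with the n log n scaling.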
{ "domain": "cstheory.stackexchange", "id": 1736, "tags": "ds.algorithms, ne.neural-evol" }
Why do we only see transitions corresponding to circularly polarized light in dichroic spectroscopy?
Question: I'll be using the diagrams here to explain a bit: https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=5393 In dichroic atomic vapor spectroscopy you have a constant longitudinal field through a vapor cell, you send linearly polarized light through the cell, and then measure the output on a photodetector. Because of the Zeeman effect, the energy levels split and you can find different transition frequencies through spectroscopy. So if you probe the cell with linearly polarized light with frequency $f_0$ (say, corresponding to the D1-line of rubidium), then you actually see two red-shifted and blue-shifted absorption peaks; this is because the linearly polarized light can be seen as a superposition of left-handed and right-handed circularly polarized light. I'm confused because it doesn't seem (or at least, no one has thought it useful to mention) that it's possible to still see the broad central peak at $f_0$ due to linearly polarized light still interacting with the atoms as linearly polarized. Those transitions are still available in terms of frequency differences (since the Zeeman splitting doesn't offset the $m_l = 0$ line), and I don't believe there are selection rules that I'm overlooking, but it would be a convenient explanation if that were the case. Answer: Good question. If we take the quantization axis to be the magnetic field direction then linearly polarized light propagating along the magnetic field direction is a superposition of $\sigma^+$ and $\sigma^-$. It has no $\pi$ component so it can’t drive the $0\rightarrow 0$ transition you are talking about. However, light which is propagating perpendicular to the magnetic field direction and which is polarized such that the electric field is parallel to the magnetic field is entirely composed of $\pi$ polarized light so it would drive the transition you are considering. 
It is a common confusion that circularly polarized light is always $\sigma$ light and linearly polarized light is always $\pi$ light, but this is not the case. The definition of $\sigma^{\pm}$ and $\pi$ light depends on the choice of quantization axis. TLDR: geometry matters when it comes to polarization and selection rules. Some other relevant questions and answers: $\pi, ~\sigma$ - atomic transitions with respect to a quantization axis $\pi$, $\sigma$ - atomic transitions with respect to the magnetic field axis $\pi$, $\sigma^\pm$ components with no magnetic field? The definition of a $\pi$ polarized photon?
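The geometry argument can be checked numerically. Below is a small sketch of the standard spherical-basis projection (sign conventions for the basis vectors vary between texts, but the intensities $|a_q|^2$ do not):

```python
import math

def spherical_amplitudes(ex, ey, ez):
    """Project a field (Ex, Ey, Ez) onto the spherical basis defined by
    a quantization axis along z (the magnetic field direction)."""
    s = 1.0 / math.sqrt(2.0)
    return {
        "sigma+": -s * (ex - 1j * ey),  # drives Delta m = +1
        "pi": complex(ez),              # drives Delta m = 0
        "sigma-": s * (ex + 1j * ey),   # drives Delta m = -1
    }

# Linear polarization with k along B (E in the x-y plane): no pi component,
# so half the intensity is sigma+ and half sigma-.
along_B = spherical_amplitudes(1.0, 0.0, 0.0)

# Perpendicular propagation with E parallel to B: pure pi light.
perp_B = spherical_amplitudes(0.0, 0.0, 1.0)
```

This reproduces the answer's two cases: no $0\rightarrow 0$ ($\pi$) transition for light propagating along B, and pure $\pi$ light when E is parallel to B.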
{ "domain": "physics.stackexchange", "id": 66634, "tags": "optics, atomic-physics, spectroscopy" }
Modeling Tri-Track with URDF and Simulate it in Gazebo
Question: Hi, I'm trying to model a Triangular Tracked Wheels robot, which I'm building it physically. (Something similar to Wall-E character). What would be the preferred approach in doing so? I know how to model the sprockets and track's links (I already have the meshes), but how to combine them in one thread and simulate them in Gazebo, is puzzling me. Originally posted by IronFist16 on ROS Answers with karma: 1 on 2018-09-16 Post score: 0 Answer: The support for tracked vehicles has been merged to Gazebo 9 and 11. It works only in ODE. You can have a look at the usage here: https://github.com/osrf/gazebo/blob/gazebo11/worlds/tracked_vehicle_simple.world . Non-deformable tri-tracks should be no problem. Originally posted by peci1 with karma: 1366 on 2020-04-24 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 31779, "tags": "gazebo, ros-kinetic" }
Compare text in two excel ranges, highlighting additions and strikethrough deletions
Question: Code is running ok but need to be reviewed for any limitations / possible improvements. Not sure will work in all scenarios because of the number of ifs and else and length of the macro. Referred to this question on Stack Overflow. Sub StringCompare2() Dim Rg_1 As Range, Rg_2 As Range, Rg_3 As Range Dim cL_1 As Range, cL_2 As Range, cL_3 As Range Dim arr_1, arr_2 Dim xTxt As String, i As Long, j As Long, Ln As Long Dim xDiffs As Boolean On Error Resume Next If ActiveWindow.RangeSelection.Count > 1 Then xTxt = ActiveWindow.RangeSelection.AddressLocal Else xTxt = ActiveSheet.UsedRange.AddressLocal End If lOne: Set Rg_1 = Application.InputBox("Range A:", "Kutools for Excel", xTxt, , , , , 8) If Rg_1 Is Nothing Then Exit Sub If Rg_1.Columns.Count > 1 Or Rg_1.Areas.Count > 1 Then MsgBox "Multiple ranges or columns have been selected ", vbInformation, "Kutools for Excel" GoTo lOne End If lTwo: Set Rg_2 = Application.InputBox("Range B:", "Kutools for Excel", "", , , , , 8) If Rg_2 Is Nothing Then Exit Sub If Rg_2.Columns.Count > 1 Or Rg_2.Areas.Count > 1 Then MsgBox "Multiple ranges or columns have been selected ", vbInformation, "Kutools for Excel" GoTo lTwo End If Set Rg_3 = Rg_2.Offset(0, 1) For Each cL_3 In Rg_3 i = i + 1 If cL_3.Offset(0, -2) = "" And cL_3.Offset(0, -1) = "" Then cL_3 = "" cL_3.Interior.Color = rgbLightGray GoTo NextCL3 End If If cL_3.Offset(0, -2) = "" And cL_3.Offset(0, -1) <> "" Then cL_3 = cL_3.Offset(0, -1) cL_3.Font.Color = vbBlue cL_3.Interior.Color = rgbOrange GoTo NextCL3 End If If cL_3.Offset(0, -2) <> "" And cL_3.Offset(0, -1) = "" Then cL_3 = cL_3.Offset(0, -2) cL_3.Font.Color = rgbLightGray cL_3.Font.Strikethrough = True cL_3.Interior.Color = rgbBlack GoTo NextCL3 End If If cL_3.Offset(0, -2) = cL_3.Offset(0, -1) Then cL_3 = cL_3.Offset(0, -1) cL_3.Font.Color = vbBlack cL_3.Interior.Color = rgbLightGreen GoTo NextCL3 End If arr_1 = Split(Rg_1(i, 1), " ", , vbTextCompare) arr_2 = Split(Rg_2(i, 1), " ", , vbTextCompare) For j = 0 
To UBound(arr_1) + UBound(arr_2) Ln = Len(cL_3) If Ln = 0 Then If arr_1(j) = arr_2(j) Then cL_3.Value = arr_2(j) cL_3.Font.Color = vbBlack cL_3.Font.Strikethrough = False Else cL_3.Value = arr_1(j) & " " & arr_2(j) cL_3.Characters(1, Len(arr_1(j))).Font.Strikethrough = True cL_3.Characters(1, Len(arr_1(j))).Font.Color = rgbLightGray cL_3.Characters(Len(arr_1(j)) + 2, Len(arr_2(j))).Font.Strikethrough = False cL_3.Characters(Len(arr_1(j)) + 2, Len(arr_2(j))).Font.Color = vbRed End If Else If j > UBound(arr_1) Then cL_3.Characters(Len(cL_3) + 1, Len(" " & arr_2(j))).Insert (" " & arr_2(j)) cL_3.Characters(Len(cL_3) - Len(" " & arr_2(j)) + 1, Len(" " & arr_2(j))).Font.Color = vbRed cL_3.Characters(Len(cL_3) - Len(" " & arr_2(j)) + 1, Len(" " & arr_2(j))).Font.Strikethrough = False Else If j > UBound(arr_2) Then cL_3.Characters(Len(cL_3) + 1, Len(" " & arr_1(j))).Insert (" " & arr_1(j)) cL_3.Characters(Len(cL_3) - Len(" " & arr_1(j)) + 1, Len(" " & arr_1(j))).Font.Color = rgbLightGray cL_3.Characters(Len(cL_3) - Len(" " & arr_1(j)) + 1, Len(" " & arr_1(j))).Font.Strikethrough = True Else If arr_1(j) = arr_2(j) Then cL_3.Characters(Len(cL_3) + 1, Len(" " & arr_2(j))).Insert (" " & arr_2(j)) cL_3.Characters(Len(cL_3) - Len(" " & arr_2(j)) + 2, Len(" " & arr_2(j))).Font.Color = vbBlack cL_3.Characters(Len(cL_3) - Len(" " & arr_2(j)) + 2, Len(" " & arr_2(j))).Font.Strikethrough = False Else cL_3.Characters(Len(cL_3) + 1, Len(" " & arr_1(j) & " " & arr_2(j))).Insert _ (" " & arr_1(j) & " " & arr_2(j)) cL_3.Characters(Len(cL_3) - Len(" " & arr_1(j) & " " & arr_2(j)) + 2, Len(arr_1(j))).Font.Strikethrough = True cL_3.Characters(Len(cL_3) - Len(" " & arr_1(j) & " " & arr_2(j)) + 2, Len(arr_1(j))).Font.Color = rgbLightGray cL_3.Characters(Len(cL_3) - Len(" " & arr_2(j)) + 2, Len(arr_2(j))).Font.Strikethrough = False cL_3.Characters(Len(cL_3) - Len(" " & arr_2(j)) + 2, Len(arr_2(j))).Font.Color = vbRed End If End If End If End If Next NextCL3: Erase arr_1: ReDim arr_1(0) Erase 
arr_2: ReDim arr_2(0) Next End Sub Answer: Here are some examples that show pretty odd results: What I would expect as a result is something like Especially here where it says that if and this were deleted and added at the same place: if this should definitely be black. Also here it says every single word was deleted and replaced by something else, while you could just do the following and keep at least to prove you wrong. Note that this is only one possible solution and there is more than one for each comparison. To find the best solution you would need to calculate all possibilities and use a good criterion to decide which one of them would be the best one. As I already explained in this answer, the problem to solve is way more complex than you see at first. Especially if you want a by-character solution like the OP stated and not a much simpler by-word solution (as you tried). I see no simple answer to the issue beyond what I showed in the linked answer (using the dynamic programming technique).
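The dynamic-programming approach the reviewer points to is what standard diff libraries implement. A word-level sketch in Python using `difflib` (an LCS-style matcher, not the reviewer's exact code) shows how an optimal alignment keeps common words instead of striking everything out:

```python
import difflib

def word_diff(old, new):
    """Word-level diff: list of (word, tag) pairs with tag in
    {'keep', 'del', 'add'}, built from an LCS-based alignment rather
    than the macro's pairwise index walking."""
    a, b = old.split(), new.split()
    out = []
    matcher = difflib.SequenceMatcher(a=a, b=b, autojunk=False)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            out.extend((w, "keep") for w in a[i1:i2])
        else:
            out.extend((w, "del") for w in a[i1:i2])
            out.extend((w, "add") for w in b[j1:j2])
    return out
```

The tags map directly onto the macro's formatting: 'keep' stays black, 'del' gets gray strikethrough, 'add' gets the highlight color.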
{ "domain": "codereview.stackexchange", "id": 38504, "tags": "vba, excel" }
Central Canada wildfires in June 2023 — at night it seems like less smoke? Why?
Question: I live in Maryland. I bought a HEPA air purifier with automatic, smart sensor-based operation to help with things like particulates during the central Canada wildfires. The air purifier's integrated air quality sensors constantly monitor fine dust particles (PM2.5), CO2, temperature and humidity. Its air quality sensors monitor room air quality throughout the day, adjusting fan speed and purification to the necessary level, saving energy when air cleaning is not needed. The air quality indicator projects a soft glow behind the air purifier to indicate current air quality, based on measurements from the internal air quality sensors. My purifier's air quality sensors indicate that air quality is better at night; the purifier automatically slows down at night. Why? Do the wildfires burn less at night? Or does less smoke wind down to the USA at night? Or something else? Answer: Your air filter is not a good way to qualify the smoke in your area, since it will run based on the air quality inside your home, which can vary quite differently from the outdoor air. Select a nearby site at EPA's fire.airnow.gov site to see how air quality changed during the June 2023 wildfire smoke event in your region. I selected a few monitors in your state and pasted a representative one below. It looks like the smoke got bad after midnight on June 7 and then again it got even worse after midnight on June 8. This phenomenon may be due to a shallower boundary layer at night, which kept smoke closer to the surface. When you woke up in the morning it was probably quite smoky, but as the day progressed there was convective activity that kept more of the smoke aloft. Yes, wildfires usually burn less at night because temperatures are cooler. However, wind, terrain, available fuels, and humidity play major roles in the dynamics of combustive activity in wildfires. The timing of smoke impacts depends on the meteorology and the geographic locations of the source and receptor. 
I wouldn't look for many patterns in the timing of the smoke effects that you experienced, since you are far away from the source with many dynamic meteorological phenomena in play. There is detailed satellite imagery with a PM2.5 monitor overlay at Aerosol Watch, if you would like to see how the event progressed through time.
{ "domain": "earthscience.stackexchange", "id": 2681, "tags": "wildfire" }
Did the night sky ever change in recorded history?
Question: I wonder whether there has ever been a major change of the firmament in recorded history, like changes in the positions of stars, changes in constellations, or stars disappearing after going supernova. There've been visible supernovae but had the star in question been visible to the naked eye before the supernova? If so, how did it change what constellation? Answer: There are a few ways to think about this question: Do the stars change positions in the sky ... such that the layout of say... major stars in constellations appear to move over time? Are there events that cause sudden changes (changes you might notice overnight ... or within a few weeks or months)? How long does it take to notice a change? Precession The Earth's axis mostly stays oriented in the same direction as we orbit the Sun. This is why we can get away with saying that Polaris is the "North Star" ... because it appears to be almost directly above Earth's North Pole regardless of what time of the day or night ... or what day of the year you choose to observe (it is roughly within 2/3rds of a degree from the pole). But it turns out the Earth's axis does "wobble" like a spinning top. This wobble is very slow and takes thousands of years to complete a cycle (nearly 26,000 years). At the end of this century (roughly the year 2100) Polaris will be a little closer to the pole ... only 1/2° away from the true pole (today it is about 2/3rds° away from the pole). That is as close as it will get ... and then our pole's precession cycle will start taking it farther away from the pole. This "precession" shift causes all stars in the sky to shift their position very slightly from year to year and it's mostly not enough to notice in a single human lifetime ... but it is easily noticed when measured across thousands of years. There are other more subtle cycles, but Precession is noticeable across large amounts of time (thousands of years). 
The position of the Sun in the sky at the equinoxes changes every year and this causes the Sun to appear to be moving across the zodiac over thousands of years. The sky has a coordinate system used to catalog the locations of stars and other objects. That coordinate system uses a "declination" to measure where a star is located relative to Earth's North/South ... it works much in the same way that latitude measures geographic locations on Earth. There is also a "right ascension" value that works a bit like longitude on Earth. Except Earth is spinning (and longitudes remain fixed to the planet) ... so we needed a way to mark a fixed position in space to have a sort of celestial longitude. That measurement is called Right Ascension and it needs a "0" position. So the zero position is established by the location of the Sun in the sky at the Vernal Equinox (the start of Northern Hemisphere Spring). It even got a name... the "First point of Aries" ... named because, when it was established a few thousand years ago, it was in Aries. Today ... it's migrated all the way over to Pisces (and is getting closer to the boundary of Aquarius). These are changes caused by Precession and noticeable ... but mostly only when measuring over hundreds of years or thousands of years. It is actually noticeable just from year to year ... but only with precise measurements. Precession causes the entire sky to shift in the same direction (and by a very subtle amount). This means that the locations of the constellations will shift ... but it won't explain changes to their shapes. For that, we have to look at other factors. In reality it isn't the "sky" that is shifting ... but rather the orientation of Earth's axis that changed. 
Here for example, is the Big Dipper asterism, but I've turned on Proper Motion vectors ... the vectors (shown in a light brown color) indicate the direction of that star's motion (relative to us) and longer vectors indicate greater proper motion. Notice that many of the stars of the Big Dipper are all moving in the same general direction at about the same speed ... including Alcor/Mizar, Alioth, Megrez, Phecda, and Merak. In this diagram, those stars are all moving up and to the left (relative to this frame). But more importantly... notice that Alkaid and Dubhe are going in roughly the opposite direction. These stars are moving down and to the right. This means that over thousands of years, this asterism will become distorted and will eventually not resemble the "Dipper" shape anymore. Incidentally, the stars in the dipper that are going in the same direction are all part of an Open Cluster called the Ursa Major Moving Group. It isn't as obvious that it's an open cluster because it is relatively close (not all stars in Ursa Major are members of the cluster.) Stars that are physically closer to our Sun may appear to have much faster proper motion. It isn't so much that the stars are necessarily physically moving faster ... but since they are close, the changes are easier to spot. Here, for example, is the proper motion of Rigel Kentaurus (aka Alpha Centauri). In this frame (current time) the star is in the upper-left section of this chart. It is in Centaurus (hence its name) but near Circinus. But notice it has a VERY long proper motion vector which is headed toward the right side of this chart in the direction of Hydra. Its proper motion vector is much longer than the other stars ... but Rigel Kentaurus is just a little over 4 light years away (one of our closest neighbors) -- so its relative motion is more noticeable. While Rigel Kentaurus makes the top-20 list for stars with the highest proper motion, it's not the winner ... 
that spot goes to Barnard's Star (in Ophiuchus). But Barnard's Star isn't very bright... at magnitude 9.5 it isn't noticeable to ordinary human vision and requires a telescope to see it. But it is moving just a little under 3x faster than Alpha Centauri A & B. If you're interested, here's a top-20 list: https://www.cosmos.esa.int/web/hipparcos/high-proper-motion Transient Events Certainly events such as bright comets would change the sky -- but those aren't stars. Super-Novae events, on the other hand, are stars ... and these are easily noticeable if they happen in our home galaxy. They don't happen very often. The estimate is that, on average, they happen in a galaxy our size about once every hundred years. But it's not like a clock where they happen with periodic regularity ... huge amounts of time may pass without any of them ... and then several could occur timed much closer together. Since Super-Novae events can be quite bright, they can even be visible in the daytime sky. They don't last particularly long ... they quickly brighten up ... then begin to dim over weeks and ... after a few months they may no longer be noticeable (but they will leave behind a Super Nova Remnant -- deep sky nebulae such as the Crab Nebula and Veil Nebula are examples of Super Nova Remnants.)
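Two of the rates above can be sanity-checked with quick arithmetic (the catalog values below are approximate and supplied for illustration, not taken from this answer):

```python
# Approximate proper motions, arcseconds per year (illustrative values):
PM_BARNARD = 10.36     # Barnard's Star, highest known proper motion
PM_ALPHA_CEN = 3.71    # Rigel Kentaurus (Alpha Centauri)
PRECESSION_PERIOD_YR = 25_772  # one axial precession cycle, in years

# Barnard's Star crosses the sky just under 3x faster than Alpha Centauri:
pm_ratio = PM_BARNARD / PM_ALPHA_CEN

# Precession shifts the whole sky by roughly 50 arcseconds per year,
# i.e. about 1.4 degrees per century:
precession_arcsec_per_yr = 360.0 * 3600.0 / PRECESSION_PERIOD_YR
```

Both numbers match the qualitative claims in the answer: precession is easy to measure year over year with precise instruments, but takes centuries to notice by eye.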
{ "domain": "astronomy.stackexchange", "id": 5638, "tags": "amateur-observing, history, night-sky" }
Vibration as a means of passing through solid objects?
Question: First of all, I think this is a weird question but I still want to know whether it's possible. There is a character in a cartoon network series who is really really fast. Sometimes he lets his own body vibrate really hard so he can go through for example a wall. My question is: Is it possible for materials to go through other materials when they are vibrated fast enough? Answer: I'm afraid that outside the cartoon world vibrating won't allow a solid object to pass through another solid object. Solids won't pass through each other because the electrons in one solid interact with electrons in the other solid. There's a question about this already, How can I stand on the ground? EM or/and Pauli?, and there are lots of answers to it discussing the issues involved. Basically electrons don't like to be squeezed too closely together, and to squeeze two solids into occupying the same space would take a ridiculously large amount of energy. That much energy would simply blow both solids apart. There is a sense in which you can pass matter through solids. If you take a very thin gold film and fire subatomic particles at it most of the subatomic particles will go straight through. This experiment was done in 1909 as part of the study of atomic structure. However while individual particles can pass through without hitting anything, anything bigger, even something as small as an atom, is very unlikely to make it through unscathed.
{ "domain": "physics.stackexchange", "id": 6264, "tags": "material-science" }
Conjugate momentum notation
Question: I was reading Peter Mann's Lagrangian & Hamiltonian Dynamics, and I found this equation (page 115): $$p_i := \frac{\partial L}{\partial \dot{q}^i}$$ where L is the Lagrangian. I understand this is the definition of conjugate momentum, but I wanted to know if there is a particular reason for the momentum index to be a lower index and the coordinate index to be an upper index. Is it simply the author's preference or is there a deeper reason? Answer: The index of generalized coordinates $q^1, \ldots, q^n$ is conventionally$^1$ a superscript/upper index in physics. The Lagrangian momenta $p_i:=\frac{\partial L}{\partial \dot{q}^i}$ have a subscript/lower index since they transform under general coordinate transformations $q^i\to q^{\prime j}=f^j(q,t)$ as components of a co-vector/1-form $p=p_i\mathrm{d}q^i$, cf. e.g. this Phys.SE post. -- $^1$ Be aware that many authors don't bother to make such notational distinctions.
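A short way to make the transformation property explicit (a standard textbook step, added here for completeness): under a point transformation $q^i \to q'^j = f^j(q,t)$ the velocities satisfy $\frac{\partial \dot q^i}{\partial \dot q'^j} = \frac{\partial q^i}{\partial q'^j}$ (the "cancellation of dots" identity), so the chain rule gives

```latex
p'_j \;=\; \frac{\partial L}{\partial \dot q'^{j}}
      \;=\; \frac{\partial \dot q^{i}}{\partial \dot q'^{j}}\,
            \frac{\partial L}{\partial \dot q^{i}}
      \;=\; \frac{\partial q^{i}}{\partial q'^{j}}\, p_i ,
```

which is precisely the transformation law of the components of a 1-form, justifying the lower index on $p_i$.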
{ "domain": "physics.stackexchange", "id": 77016, "tags": "classical-mechanics, lagrangian-formalism, momentum, differentiation, notation" }
Why are we even interested in solar cells under bias voltage?
Question: I couldn't find any answer on this super basic question. Some people on the internet say that you would not put a solar cell in an array under bias, others say that they bias themselves, but I don't understand how this would work: In a series circuit a solar cell would be biased by the adjacent solar cell, right? And it would be a reverse bias, correct? (Since + and - meet.) But then there is also a paper saying that reverse bias is detrimental, or is it meant in a way that there is a "healthy" reverse bias (normal working condition) and a "pathological" reverse bias (shunt resistances --> hot spots)? Please help me understand this, I am getting really desperate over this and can't find anyone who knows something about it. Answer: Solar cells are photovoltaic devices: they develop a photo-voltage when illuminated. In this sense they bias themselves. But that is a very confusing way of thinking about them as components in an electrical circuit. To get useful power out of a solar cell you must apply forward bias. The optimum bias is at the maximum power point (peak of the dashed curve). The IV curve (solid black line) of an illuminated diode enters three (two shown in the diagram) quadrants: Negative current, negative (reverse) voltage: photodetector Positive current, positive (forward) voltage: solar cell Negative current, positive (forward) voltage: light emitting diode For more background on this read this site, https://www.pveducation.org/pvcdrom/solar-cell-operation/iv-curve The same principle applies when the solar cells are interconnected in a solar module. There will be additional current and voltage matching constraints depending on how the module is interconnected, but the shape of the IV curve will still retain this fundamental feature of three quadrants. I recommend you answer your own question by playing around with a very simple SPICE model of a single solar cell.
Then make it more complicated by connecting multiple devices in series and then in parallel and see how the IV curve changes.
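Following that suggestion, even without SPICE you can sketch the ideal single-diode model in a few lines. This is my own illustration with assumed parameter values, not figures from the answer; note it uses the sign convention where positive current flows out of the cell, so the curve is flipped relative to the diode convention listed above.

```python
import numpy as np

# Ideal single-diode model: I(V) = I_L - I_0 * (exp(V / V_t) - 1).
# Illustrative (assumed) parameters: photocurrent [A], dark saturation
# current [A], thermal voltage [V]. Positive I flows out of the cell,
# so P = V * I is the power delivered to the load.
I_L, I_0, V_t = 3.0, 1e-9, 0.0258

V = np.linspace(-0.2, 0.7, 4001)
I = I_L - I_0 * (np.exp(V / V_t) - 1.0)
P = V * I

mpp = P.argmax()  # maximum power point: the optimal forward bias
print(f"open-circuit voltage ~ {V[I > 0][-1]:.3f} V")
print(f"max power point: V ~ {V[mpp]:.3f} V, P ~ {P[mpp]:.2f} W")
```

Sweeping the bias and maximizing `V * I` locates the maximum power point, which sits at a forward bias just below the open-circuit voltage; under reverse bias (`V < 0`) the photocurrent still flows but the cell delivers no power.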
{ "domain": "physics.stackexchange", "id": 72368, "tags": "electricity, electric-circuits, voltage, semiconductor-physics, solar-cells" }
Cannot see mesh file in Gazebo 1.2.5
Question: Hi, The simulation does not visualize the mesh of a robotic hand model in Gazebo 1.2.5, which under Gazebo 1.0.2 worked fine. The URI to the .dae file is correct in the sdf (v1.2) file. (I tried it with another .dae file from PR2 and it visualized it). The difference between the files was that mine had COLLADA version 1.4.0, and PR2 had version 1.4.1. So I imported it in Blender and exported it as version 1.4.1 and it still did not work. Here is my original .dae file, here it is after converting it to 1.4.1 and this is a PR2 .dae file which works. Any ideas? Thanks, Andrei Originally posted by AndreiHaidu on Gazebo Answers with karma: 2108 on 2012-12-12 Post score: 0 Answer: It looks like the collada file has incorrect units. Open up your original dae file, and go to this line: <unit meter="0.01" .../> Change to `meter="1.0"` Originally posted by nkoenig with karma: 7676 on 2012-12-12 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by AndreiHaidu on 2012-12-13: It worked, thanks a lot.
{ "domain": "robotics.stackexchange", "id": 2865, "tags": "gazebo, mesh, gazebo-1.2, collada" }
Pouring aluminum into another block of Aluminum
Question: I have a 120mmx120mm brushless fan with an Aluminum frame, which as you can imagine, isn't very thick. I want to thread the base, so I can mount it with a fairly large screw. For that I need to add more Aluminum so the screw has more to grip on. I'm thinking of creating a casting mold around the exterior side of the Aluminum frame (proof sealed) and then pouring molten Aluminum in it. I've seen videos of people pouring Aluminum and it seems to have a relatively low Heat Capacity compared to other metals (i.e. it will cool down and solidify before it melts the surface of the Aluminum frame), as I've seen Anthill Art being made without so much as scorching the earth. Also, as Aluminum is a good Thermal Conductor, the heat would dissipate throughout the frame pretty quickly. How would I alleviate the two problems of low Heat Capacity and being a good Thermal Conductor? I know that one always has to pour excess molten metal to the block being added to, but do you guys have better ideas?? Answer: This is unlikely to work very well. The biggest issue is that you won't get good adhesion between the two surfaces. Aluminium has a particularly resilient and inert surface oxide layer so molten aluminium won't just stick to the surface of solid aluminium. Welding it requires an inert atmosphere and reverse polarity current or alternating current to clean the surface. Pre-cleaning won't work either as the oxide layer forms very quickly. Equally the aluminium will shrink substantially as it cools so anything you cast onto the surface will in all likelihood just pop off. In practice, casting a new part from scratch using a known castable alloy is far more likely to work and not likely to be significantly more effort. Note also that cast aluminium is prone to porosity and needs to be de-gassed to produce reliable mechanical parts.
Secondly, without knowing the alloy you are working with and its initial condition there is no way of knowing how the heat will affect its mechanical properties. Aluminium anneals at a fairly low temperature, and if the existing part reaches that temperature it will be drastically weakened and there is a good chance that it will distort. A better solution would be to fabricate the extra part you need and either get it TIG welded on (even then there is some risk of distortion, and not all aluminium alloys are readily weldable) or to bond it on with adhesive, which is probably the only practical solution reasonably likely to be problem free.
{ "domain": "engineering.stackexchange", "id": 1457, "tags": "metals, aluminum, molding" }
Daemon intended to replace a Cronjob which reads and processes events from a queue
Question: The Daemon will read the output from a php command line interface, and process events until there are none left in the queue. Consider the sleep timer arbitrary. At the moment testOutput.php simple logs some strings to the console, some of which contain "EXITCODE-0", meaning there are no events in the queue. #include <cstdio> #include <iostream> #include <memory> #include <stdexcept> #include <string> #include <array> #include <map> #include <regex> // daemon functions #include <stdlib.h> #include <unistd.h> #include <signal.h> #include <sys/types.h> #include <sys/stat.h> #include <syslog.h> // logs #include <fstream> #include <chrono> #include <ctime> #include <iomanip> void logOutput(std::string); static void daemon(); std::string exec(const char* cmd); enum EventAction { eventActioned, eventOne, eventTwo }; static std::map<std::string, EventAction> mapValueToEvent; void initialise(); int main() { syslog(LOG_ERR, "bendaemon has started"); daemon(); initialise(); for(int i = 0; i < 20; ++i) { std::string result = exec("php -f /Users/easysigns/PlayGround/bash/readPhpOutput/testOutput.php"); switch(mapValueToEvent[result]) { case eventOne: // process event one logOutput("one"); break; case eventTwo: // process event two break; case eventActioned: // write some event complete log logOutput("zero"); sleep(20); break; default: // event not defined. 
write some error log break; } } syslog(LOG_ERR, "Terminating daemon"); closelog(); return EXIT_SUCCESS; } std::string exec(const char* cmd) { std::array<char, 4096> buffer; std::string result; std::shared_ptr<FILE> pipe(popen(cmd, "r"), pclose); if (!pipe) throw std::runtime_error("popen() failed!"); while (!feof(pipe.get())) { if (fgets(buffer.data(), 4096, pipe.get()) != nullptr) { result += buffer.data(); } } std::regex expression("EXITCODE-0"); if(std::regex_search(result, expression) == 1) { result = "1"; } else { result = "0"; } return result; } void initialise() { mapValueToEvent["1"] = eventOne; mapValueToEvent["2"] = eventTwo; mapValueToEvent["0"] = eventActioned; } void logOutput(std::string value) { std::ofstream logFile; logFile.open("/Users/easysigns/PlayGround/bash/readPhpOutput/logFile.txt", std::ios_base::app); std::time_t now = std::chrono::system_clock::to_time_t(std::chrono::system_clock::now()); logFile << "Time: " << std::put_time(std::localtime(&now), "%F %T") << " Event success: " << value << std::endl; return; } static void daemon() { pid_t pid; // Fork off the parent process pid = fork(); // An error occurred if (pid < 0) { exit(EXIT_FAILURE); } // Success: Let the parent terminate if (pid > 0) { exit(EXIT_SUCCESS); } // On success: The child process becomes session leader if (setsid() < 0) { exit(EXIT_FAILURE); } //TODO: Implement a working signal handler signal(SIGCHLD, SIG_IGN); signal(SIGHUP, SIG_IGN); // Fork off for the second time (detatch from terminal window) pid = fork(); // An error occurred if (pid < 0) { exit(EXIT_FAILURE); } // Success: Let the parent terminate if (pid > 0) { exit(EXIT_SUCCESS); } umask(0); // This may need to point to a different directory in the future chdir("/"); // Close all open file descriptors for (int x = (int)sysconf(_SC_OPEN_MAX); x >= 0; x--) { close (x); } // Open the log file openlog("bendaemon", LOG_PID, LOG_DAEMON); } Answer: I'm not going to comment on the OS interactions here, just the C++ 
code in general. prefer anonymous namespaces to the static keyword static is just a C backwards compatibility thing, the proper way to prevent a name from leaking out of a translation unit is an anonymous namespace: namespace { void daemon(); std::map<std::string, EventAction> mapValueToEvent; } Prefer using enum class to regular enum enum class EventAction {...}; This prevents a large number of namespace pollution issues, and is just a good habit to build. prefer using std::unordered_map to std::map Unless you have a reallllly good reason, std::unordered_map is almost always preferable. Why shared_ptr? std::shared_ptr<FILE> pipe(popen(cmd, "r"), pclose); While you get mad props for using RAII to manage the FILE pointer, you should be using a unique_ptr here, not a shared_ptr. initialise() is unnecessary You can simply initialize the map using a literal: static std::map<std::string, EventAction> mapValueToEvent = { {"1", eventOne}, {"2", eventTwo}, {"0", eventActioned} }; Logic error, exec() can only return "0" or "1" That doesn't feel right, since you handle "0", "1" or "2"
{ "domain": "codereview.stackexchange", "id": 28899, "tags": "c++, linux" }
Functional derivative of metric
Question: To take the functional derivative of some actions, we need to know the functional differential of the metric $g_{\mu \nu}(x)$. One of the formulae about that is: $$g_{\mu\nu}\delta g^{\mu\nu} = - g^{\mu\nu} \delta g_{\mu \nu},$$ but this confused me, because for arbitrary tensors it is true that $$A_{\mu\nu}B^{\mu\nu}=A^{\mu\nu}B_{\mu\nu}.$$ What is the difference? I suppose $\delta g^{\mu\nu}$ is a tensor because actions must be scalars, but I don't have any proof. Answer: The inverse metric tensor $(g^{-1})^{\mu\nu}$ is often written as $g^{\mu\nu}$ when no confusion can arise. Let us not use this shorthand notation here. OP's formula then reads $$ \delta(g^{-1})^{\mu\nu}~=~-(g^{-1})^{\mu\lambda}\delta g_{\lambda\kappa}(g^{-1})^{\kappa\nu} .$$ We can now raise and lower indices with the metric/inverse metric, e.g. $$\delta(g^{-1})^{\mu\nu}~=~-(\delta g)^{\mu\nu},$$ without running into inconsistencies.
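For completeness, the quoted formula follows from varying the defining relation of the inverse metric, $(g^{-1})^{\mu\lambda} g_{\lambda\nu} = \delta^{\mu}_{\ \nu}$ (a standard one-line derivation, not spelled out in the original answer):

```latex
0 \;=\; \delta\!\left[(g^{-1})^{\mu\lambda} g_{\lambda\nu}\right]
  \;=\; \delta(g^{-1})^{\mu\lambda}\, g_{\lambda\nu}
      + (g^{-1})^{\mu\lambda}\, \delta g_{\lambda\nu}
\quad\Longrightarrow\quad
\delta(g^{-1})^{\mu\kappa}
  \;=\; -(g^{-1})^{\mu\lambda}\, \delta g_{\lambda\nu}\, (g^{-1})^{\nu\kappa},
```

where the implication follows by contracting with $(g^{-1})^{\nu\kappa}$. Contracting the result with $g_{\mu\kappa}$ reproduces the OP's scalar identity $g_{\mu\nu}\,\delta(g^{-1})^{\mu\nu} = -(g^{-1})^{\mu\nu}\,\delta g_{\mu\nu}$.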
{ "domain": "physics.stackexchange", "id": 58806, "tags": "general-relativity, metric-tensor, tensor-calculus, variational-calculus" }
Show that the weight of an object suspended from two other objects is equal to the sum of the vertical components of the two weights
Question: The following shows a mass with weight $M$ suspended from two other masses, labelled $m$ The system is in equilibrium, and the question following from this diagram is to show that $M=2m\cos\theta$ My attempt was along the following lines: We immediately know two things; we know that weight $M=m_1g$ where $m_1$ is the mass of the object, and $g$ is the gravitational acceleration constant, and we know that the tension from either of the top two strings will be the force they exert on point $A$, $\implies$ that each of these forces $= mg$, and since we are only interested in the vertical components, we specifically need $mg\cos\theta$ Now, given that the system is in equilibrium, we know that forces $\uparrow = $ forces $\downarrow$. $$\implies M=2mg\cos\theta$$ $$m_1g=2mg\cos\theta$$ and here is where my issue lies, I know that to get the RHS to the form $2m\cos\theta$ I need to divide through by the GA constant, but that would also affect the LHS. Is my logic so far correct? Or is there a completely different way in which I can show that this equation is true for this diagram? Any help is appreciated, thank you. Answer: It looks like you simply misread a detail in the question. $M$ is not weight. It is mass. So you cannot do $M=m_1g$. You are actually almost done in the first step of the derivation, if you just fix the LHS so that $M$ is a mass instead of a weight.
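To spell out the step the question was missing (my own summary of the intended solution): the tension in each upper string equals the weight $mg$ of the mass it supports, and $M$ in the target formula is a mass, so vertical equilibrium at $A$ reads

```latex
Mg \;=\; 2T\cos\theta \;=\; 2mg\cos\theta
\quad\Longrightarrow\quad
M \;=\; 2m\cos\theta ,
```

and the factor of $g$ cancels cleanly from both sides once $M$ is read as a mass rather than a weight.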
{ "domain": "physics.stackexchange", "id": 44322, "tags": "homework-and-exercises, newtonian-mechanics, forces, weight" }
A method that calculates win streaks
Question: Right now, I have the following: A win_streak column on the User table. This method in game.rb I'm interpreting win_streak as follows: The number is the number of games in the streak The value (positive or negative) specifies whether those games were wins (positive) or losses (negative) game.rb: def adjust_streak u = self.user case self.result when 'Win' u.win_streak > 0 ? u.win_streak += 1 : u.win_streak = 1 when 'Loss' u.win_streak > 0 ? u.win_streak = -1 : u.win_streak -= 1 end u.save end I'm looking for a cleaner, and more succinct way to do this (e.g. keep track of win/loss streaks). Answer: You can use Array#min and Array#max to calculate the win/lose streak; You don't need to use self on getters; And you can use user.update instead of assigning and calling .save; I would also rename the win_streak column to streak; def adjust_streak streak = case result when 'Win' then [user.streak, 0].max + 1 when 'Loss' then [user.streak, 0].min - 1 end user.update streak: streak end
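For comparison, here is the same streak update written as a pure function in Python (a hypothetical port of the accepted answer's logic, not part of the Rails app): a positive value counts consecutive wins, a negative value consecutive losses.

```python
def adjust_streak(streak: int, result: str) -> int:
    # Mirror of the Ruby min/max version: a win resets a losing streak
    # to +1, a loss resets a winning streak to -1; otherwise the streak
    # simply extends by one in the same direction.
    if result == "Win":
        return max(streak, 0) + 1
    if result == "Loss":
        return min(streak, 0) - 1
    return streak  # e.g. a draw leaves the streak unchanged

print(adjust_streak(3, "Win"))    # 4: the winning streak extends
print(adjust_streak(-2, "Win"))   # 1: a win resets a losing streak
print(adjust_streak(2, "Loss"))   # -1: a loss resets a winning streak
```

Making the update a pure function of `(streak, result)` also makes the transition table trivial to unit-test, which is harder with the original method that mutates and saves the user record in place.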
{ "domain": "codereview.stackexchange", "id": 29541, "tags": "ruby, ruby-on-rails" }
Cavendish's experiment on concentric conducting shells
Question: In a paper there was a section addressing Cavendish's experiment on concentric conducting shells, which was basically the following: Two conducting spherical shells were put together (like a Russian doll) and a conducting link connected the two spheres. A charge was put on the outermost sphere, then the link was cut with a silk thread, the outermost sphere was taken off, and the inner one was checked for any electrical charge; the experiment showed that indeed it carries no charge. Then the author stated that this was a consequence of the inverse square law, and that if the electric field didn't follow this very law, then upon putting charges on the outermost sphere, charges would migrate through the conducting link to the smaller sphere and it would have a charge on it. How did the author deduce this conclusion? Why, if the electric field didn't have the inverse square behavior, would the charges migrate to the smaller sphere? Answer: How did the author deduce this conclusion? For the charges to move from the large sphere to the small sphere, there would have to be an electric field inside the large sphere. The inverse square law could be used to prove that the field inside a conductive hollow charged sphere is zero. It goes like this: For any given point inside a conductive hollow charged sphere, the fields produced by patches of charge, subtending equal solid angles on the opposite sides of the sphere, will cancel each other, because the charge magnitudes of the patches will be proportional to the square of their distances to the point of interest, while their fields will be inversely proportional to the square of the same distances. Why, if the electric field didn't have the inverse square behavior, would the charges migrate to the smaller sphere?
The inverse square law is pretty fundamental, so there is no reliable way to tell what exactly would happen, if it did not hold, but, ignoring that for the purpose of the discussion, this would, at least, mean that the proof above would not work. That does not automatically mean though that the charges would move from the large sphere to the small sphere. In fact, we can always argue that, as long as the charges repel each other, they should distribute themselves over the outer surface of the large sphere, because this is most favorable distribution from the potential energy standpoint, and, therefore there won't be any charges on the small sphere inside, even with a conducting link connecting the spheres.
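The cancellation argument can be checked numerically. The sketch below is my own illustration, with an assumed geometry (a uniformly charged unit shell and a test point on the axis at z = 0.5): it integrates the axial force component over the shell for a force law proportional to 1/r^p, which vanishes for p = 2 but not for, say, p = 3.

```python
import numpy as np

def axial_field(z_p, p, n=200000):
    # Midpoint rule over u = cos(theta) for source points on a unit shell.
    # The z-component of a 1/r**p force from a ring at polar angle theta is
    # (z_p - u) / d**(p+1), with d**2 = 1 + z_p**2 - 2*z_p*u.
    edges = np.linspace(-1.0, 1.0, n + 1)
    u = 0.5 * (edges[:-1] + edges[1:])
    du = edges[1] - edges[0]
    d2 = 1.0 + z_p**2 - 2.0 * z_p * u
    integrand = (z_p - u) / d2 ** ((p + 1) / 2)
    return 2.0 * np.pi * integrand.sum() * du

print(axial_field(0.5, 2))  # ~0: an inverse-square force cancels inside the shell
print(axial_field(0.5, 3))  # clearly nonzero for a 1/r^3 force law
```

So the exact cancellation of the opposing solid-angle patches is special to the exponent 2, which is what gives Cavendish-style null experiments their power.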
{ "domain": "physics.stackexchange", "id": 51038, "tags": "electrostatics, gauss-law, coulombs-law" }
Lagrangian mechanics, generalized coordinates, equations of motion
Question: Let $U(\vec{x}, \vec{v}) = -K \vec{A}(\vec{x})\cdot\vec{v}$ be the generalized potential, where $\frac{d}{dt}\vec{x} = \vec{v}$, $K$ is a constant and $\frac{\partial \vec{A}(\vec{x})}{\partial t} = 0$. Now I showed that $$F = \frac{d}{dt} \frac{\partial U}{\partial v} - \frac{\partial U}{\partial x} = K \vec{v} \times (rot\vec{A})\space \space (1)$$ From there I calculated the Lagrangian of a mass $m$ in this potential: $$L = \frac{1}{2}m\vec{v}^2 - U$$ and the Hamiltonian: $$H = \frac{(\vec{p}-K\vec{A})^2}{2m}$$ Is this correct? And from there I tried to calculate the equations of motion with $\frac{d}{dt} \frac{\partial L}{\partial v} - \frac{\partial L}{\partial x} = 0$, but I always get $0 = 0$, does anyone know why? Kind Regards EDIT: $\frac{d}{dt} \frac{\partial L}{\partial v} - \frac{\partial L}{\partial x} = \frac{d}{dt} \left(mv-\frac{\partial U}{\partial v}\right) + \frac{\partial U}{\partial x} = m\dot v - \left(\frac{d}{dt}\frac{\partial U}{\partial v} - \frac{\partial U}{\partial x}\right) = F - F = 0$ where the last conversion is true because of (1) and Newton's law $m\dot v = F$. Answer: You get the zero identity because you are using the equations of motion twice, once through the Euler-Lagrange equations and once more via the definition of the force $F$. Use only one. Newton's law: $$ \dot p = F = \frac{d}{dt} \frac{\partial U}{\partial v} - \frac{\partial U}{\partial x} = K \vec{v} \times (rot\vec{A})$$ or the Euler-Lagrange equations: $$ \frac{d}{dt}\left(p_i - K A_i\right) = \dot p_i - K \partial_jA_i\dot x^j = K\,\partial_iA_j\,\dot x^j - K\,\partial_jA_i\,\dot x^j, $$ where $\dot p_i = \partial_i L = -\partial_i U = K\,\partial_iA_j\,\dot x^j$, and you can see that both are the same equation, as you know, because you used it twice and got 0.
{ "domain": "physics.stackexchange", "id": 41622, "tags": "homework-and-exercises, electromagnetism, lagrangian-formalism, potential, hamiltonian-formalism" }
How do you pick the wavenumber at which the group velocitiy is evaluated?
Question: The equation for group velocity is $ v_g(k) = \frac{d\omega}{dk}.$ This is obviously a function of $k$ but typically the word is used as if there is a single group velocity and not a whole function. How do you assign a group velocity to a "group" ? Is there any clear cut mathematical definition which gives another condition or is there a special $\omega$ at which the function is typically evaluated ? Or is the word a misnomer and there isn't a single group velocity which can be assigned to a wavepacket ? Answer: Typically the "group" is a localized pulse in space. When you look at the k spectrum, it is typically also localized in k-space, with some average value $k_0$. (For a Gaussian wave packet, for example, the k spectrum, or Fourier transform, is also a Gaussian, and $k_0$ is the value where it peaks.) The group velocity then corresponds to $\text{d} \omega\,/\text{d} k$ evaluated at $k_0$. The choice is only ambiguous if the k-space distribution is such that the average value of k is undefined, though it is also helpful if the Taylor series expansion for $\omega(k)$ about $k_0$ converges quickly. If not, then the wave packet will tend to change shape as it propagates, which makes the idea of a group velocity fuzzier. UPDATE with more details: Suppose your wave packet is given by $\psi(x,t) = \int\limits_{-\infty}^\infty A(k) e^{i(kx-\omega t)} \mathrm{d} k$. Let the average wave number be $k_0$ (sorry for the notation conflict with OP's $k_0$), and expand your expression for $\omega(k)$ in a Taylor series about $k_0$, so that $\omega(k) = \omega(k_0) + \left(\dfrac{\partial \omega}{\partial k} \right)_{k = k_0} \left(k - k_0 \right) + \mathcal{O}((k - k_0)^2)$. 
As long as the range of relevant $k$ values is small enough that we can throw away the 2nd order and higher terms, we can use this to write $kx - \omega t \approx (k - k_0) x - \left(\dfrac{\partial \omega}{\partial k} \right)_{k = k_0}\left( k - k_0\right) t + k_0 x - \omega(k_0) t $, so that $\psi(x,t)$ becomes $\psi(x,t) \approx \int\limits_{-\infty}^\infty A(k) \exp\left[i (k - k_0)\left(x - \left(\dfrac{\partial \omega}{\partial k} \right)_{k = k_0} t \right) \right] e^{i(k_0 x - \omega(k_0) t)} \mathrm{d} k $. Inspecting this form, we see that it involves a plane wave traveling at the phase speed $\omega(k_0)/k_0$ (that's the last exponential factor, which comes out of the integral), modulated by an envelope traveling at the group speed $\left(\dfrac{\partial \omega}{\partial k} \right)_{k = k_0}$ (from the "x - vt" factor in the square brackets). So, as long as the 1st order Taylor series is a good approximation, the group velocity is $\partial \omega / \partial k$ computed at the average value of $k$.
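The stationary-phase picture above is easy to verify numerically. This is my own sketch, and the quadratic dispersion $\omega(k) = k^2$ is an assumed example (not from the answer): build a Gaussian packet peaked at $k_0$ and track where its envelope peak sits at time $t$.

```python
import numpy as np

# Gaussian wave packet with omega(k) = k**2, so the group velocity at k0
# is d(omega)/dk = 2*k0 while the phase speed is omega(k0)/k0 = k0.
k0, sigma = 10.0, 0.5
k = np.linspace(k0 - 5.0, k0 + 5.0, 1001)
A = np.exp(-((k - k0) ** 2) / (2.0 * sigma**2))  # k-spectrum peaked at k0

x = np.linspace(-20.0, 120.0, 1401)

def packet(t):
    # psi(x, t) = sum_k A(k) * exp(i * (k*x - omega(k)*t)), omega = k**2
    return np.exp(1j * (np.outer(x, k) - (k**2) * t)) @ A

t = 4.0
x_peak = x[np.abs(packet(t)).argmax()]
print(x_peak / t)  # ~ 2*k0 = 20: the group velocity, not the phase speed 10
```

The envelope peak moves at $d\omega/dk$ evaluated at the spectrum's peak $k_0$ (here $2k_0 = 20$), while the carrier crests move at the slower phase speed $k_0 = 10$, exactly as the first-order Taylor argument predicts.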
{ "domain": "physics.stackexchange", "id": 62133, "tags": "waves, dispersion" }
From an engineering standpoint, what is the purpose of the indent in a coffee lid?
Question: I've seen the same lids every day but I never really thought about their construction. There is an indent in a "Solo Travel Lid" for coffee cups that is just above the hole that you drink the coffee from. You can see the crescent shaped indent in the image. What is it for? Does it somehow increase fluid flow? Is it just a good spot for your upper lip? If so, why is there not a matching one for the nose? I tried googling this but I got no answer. Answer: From the "Solo Traveler" patent: A more specific object of the present invention is to provide a lid having an opening formed therethrough to enable drinking, and having a recess formed in the lid adjacent the opening to accommodate the upper lip of one drinking from the cup This is also illustrated in the patent diagrams. I came across this interesting article on the history, use, design, and styles of disposable lids. The Solo Traveler is prominently featured for its excellent aesthetics and design. The article reiterates that: the Solo Traveler lid was designed to accommodate the nose and lip of a drinker. In accomplishing this design goal, the necessary height of the lid made it useful for foam-topped gourmet coffees. Additionally, when upright, the recess acts as a reservoir (complete with drainhole) to catch drips/spills/splashes that might otherwise run down the cup and create the infamous coffee stain ring.
{ "domain": "engineering.stackexchange", "id": 539, "tags": "design, applied-mechanics" }
how can I create connections among three linux-based machines?
Question: I have a launch file that basically connects a Jetson TX1 and a Raspberry Pi 3 with a ubuntu host machine. And I need to configure all three machines such that, when roscore runs in the host, it will make connections to the other two boards. As far as hardware setup is concerned, there is an ethernet-based connection between my linux desktop (ubuntu 14.04) and the jetson tx1 (ubuntu 16.04). The raspberry pi 3 is connected to the jetson board via the USB-3.0-to-Ethernet adapter (AX11789). Following the hardware setup, the software setup is done. At first, the /etc/network/interfaces file in my desktop is edited as follows: auto eth0 iface eth0 inet static address 192.168.1.42 netmask 255.255.255.0 gateway 192.168.1.26 Then in my jetson, auto eth0 iface eth0 inet static address 192.168.1.26 netmask 255.255.255.0 Because it seems correct that the jetson be configured as a gateway (router), the ip address of the jetson should be used as a gateway address in both the desktop and the pi board. So in the pi board, auto eth0 iface eth0 inet static address 192.168.1.102 netmask 255.255.255.0 gateway 192.168.1.26 After this, the ping requests between the desktop and the jetson as well as between the jetson and the pi work perfectly, but the ping request between the desktop and the pi board does not work, stating Ping from 192.168.1.102: Destination host unreachable. Is there any critical information or step I am missing in this case? It may be due to the configuration with the Ethernet-to-usb adapter, but I'm not 100% certain about this. Originally posted by chanhyeoni on ROS Answers with karma: 62 on 2017-10-23 Post score: 0 Original comments Comment by gvdhoorn on 2017-10-23: Although you intend to use this for a ROS application, this is really more of a networking/Linux setup question. You will most likely get better answers faster if you ask it on stackoverflow or some other forum that deals with Linux configuration issues directly.
Answer: As @gvdhoorn mentioned, your question is not directly related to ROS, but here are my two cents: Your TX1 has two network interfaces (Ethernet and USB-to-Ethernet Adapter) that are isolated from each other. This means that even though you are using IP addresses on the same range for both adapters, they don't know how to pass messages from one interface to the other. You would think that setting the "gateway address" to the TX1 IP would be enough but it doesn't work like that. What you are looking for is called "interface bridging". This link https://help.ubuntu.com/community/NetworkConnectionBridge will give you further information. I hope this helps Originally posted by Martin Peris with karma: 5625 on 2017-10-23 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by ahendrix on 2017-10-23: It would be far less configuration to just connect all three computers to a small network switch. Comment by Martin Peris on 2017-10-23: Indeed!!
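As a concrete illustration of the bridging idea (an untested sketch with assumed interface names — the USB-to-Ethernet adapter may show up as eth1 or under a different name): with the bridge-utils package installed on the TX1, its /etc/network/interfaces could define a single bridge spanning both interfaces instead of giving each its own address:

```
auto br0
iface br0 inet static
    address 192.168.1.26
    netmask 255.255.255.0
    bridge_ports eth0 eth1
```

With the bridge up, all three machines sit on one layer-2 segment, so the desktop and the Pi can reach each other directly without any gateway configuration.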
{ "domain": "robotics.stackexchange", "id": 29171, "tags": "ros, network" }
Optimal strategy in repeated cake-cutting
Question: Alice and Bob play the famous game of divide and choose $n$ times with $n$ identical cakes. Each time, Alice cuts the cake into two pieces, and Bob chooses the piece he prefers. Alice does not know Bob's preferences, but she assumes that they come from a uniform distribution. What is the strategy that maximizes Alice's expected total value? Here is my solution so far. Let's assume that the cake is the interval $[0,1]$. Alice cuts the cake by choosing some $x\in [0,1]$. Let $h\in[0,1]$ be a point such that Bob is indifferent between the cake to the left of $h$ and the cake to the right of $h$. Let $V(x)$ be Alice's valuation of the interval $[0,x]$; it is a continuous and increasing function of $x$. Then: If $h> x$, then Bob picks $[x,1]$ and Alice gets $[0,x]$ and gains $V(x)$. If $h< x$, then Bob picks $[0,x]$ and Alice gets $[x,1]$ and gains $V(1)-V(x)$. If $h=x$ then, let's say Bob picks a piece at random so Alice gains on average $V(1)/2$. If Alice knows $h$, then she has a simple optimal strategy: If $V(h) > V(1)-V(h)$, then select $x^* = h-\epsilon$, for some $\epsilon>0$, and gain $\approx V(h)$. Otherwise, select $x^* = h+\epsilon$, for some $\epsilon>0$, and gain $\approx V(1)-V(h)$. Initially, Alice does not know $h$, so she assumes it is distributed uniformly in $[0,1]$. If Alice cuts at $x$ and Bob chooses left, this tells Alice that $h\in[0,x]$; if Bob chooses right, this tells Alice that $h\in[x,1]$. So, using binary search, Alice can estimate $h$ to an arbitrary precision. This seems to be an optimal strategy when $n\to\infty$: spend a lot of rounds initially for getting an accurate estimate of $h$, and use this estimate from then on to get the optimal possible value. I am stuck at the case where $n$ is finite and small. The question bugs me even for $n=2$: what is the optimal first cut? Answer: Let $v(x) = \frac{V(x) - V(0)}{V(1) - V(0)}$ so $v(x)$ nicely goes from $0$ to $1$. Maximizing $v$ also maximizes $V$.
To cover my ass for future mistakes regarding $<$ vs $\leq$, I will assume that you will never exactly guess $x = h$. Now, if $n = 1$, we can find the expected value from cutting at $x$ by integrating over all possible $h$: $$e_1(x) = E[v(x)] = \int_0^x (1 - v(x))dh + \int_x^1 v(x)dh = (1 - 2x)v(x) + x$$ Which allows you to maximize $e_1$ with differentiation to find the optimal $x$. Note that in the above example "all possible $h$" meant $[0, 1]$. But what if we know that $h \in [a, b]$ due to previous information? Then we get the following sum of integrals for when also $x \in [a, b]$: $$\int_a^x (1 - v(x))dh + \int_x^b v(x)dh = (a + b - 2x)v(x) - a + x$$ Which makes the total expected valuation: $$e_{1,a,b}(x) = E[v(x) | a \leq h \leq b] = \begin{cases} v(x) & \text{if $x < a$}\\ (a + b - 2x)v(x) - a + x & \text{if $a \leq x \leq b$}\\ 1 - v(x) &\text{if $x > b$} \end{cases}$$ Now let's analyze $n = 2$. Suppose our first cut is at $x$, then there is a $x$ chance $h \leq x$ and $1 - x$ chance $h \geq x$. In both those scenarios we will then choose our best possible $y$. This gives equation: $$e_2(x) = e_1(x) + x\left( \max_{0 \leq y \leq 1} e_{1,0,x}(y)\right) + (1-x)\left( \max_{0 \leq y \leq 1} e_{1,x,1}(y)\right)$$ $$\max_{0 \leq y \leq 1} e_{1,0,x}(y) = \max\left(\max_{0 \leq y \leq x} ((x - 2y)v(y) + y), \max_{x \leq y \leq 1} (1 - v(y))\right)$$ $$\max_{0 \leq y \leq 1} e_{1,x,1}(y) = \max\left(\max_{0 \leq y \leq x} v(y), \max_{x \leq y \leq 1} ((x + 1 - 2y)v(y) - x + y)\right)$$ Now let's say $v(x) = x$, a simple linear increase in desire for more cake. 
Now we can actually solve: $$\max_{0 \leq y \leq 1} e_{1,0,x}(y) = \max\left(\begin{cases}x(1-x)&0\leq x \leq \frac{1}{3}\\\frac{1}{8}(x+1)^2&\frac{1}{3} \leq x \leq 1\end{cases}, 1 - x\right)$$ $$\max_{0 \leq y \leq 1} e_{1,x,1}(y) = \max\left(x, \begin{cases}\frac{1}{8}(x-2)^2&0\leq x \leq \frac{2}{3}\\x(1-x)&\frac{2}{3} \leq x \leq 1\end{cases}\right)$$ Now $e_2$ is complicated, but is essentially a piecewise function which can be solved.
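The closed forms above are easy to sanity-check numerically (my own sketch, using $v(y) = y$): grid-search the best follow-up cut $y$ after Bob reveals $h \in [0, x]$ and compare against the piecewise expression.

```python
import numpy as np

ys = np.linspace(0.0, 1.0, 100001)

def best_left(x):
    # max over y of e_{1,0,x}(y) with v(y) = y:
    # (x - 2y)*v(y) + y on [0, x], and 1 - v(y) on [x, 1]
    vals = np.where(ys <= x, (x - 2.0 * ys) * ys + ys, 1.0 - ys)
    return vals.max()

for xv in (0.2, 0.5, 0.8):
    closed = max(xv * (1 - xv) if xv <= 1 / 3 else (xv + 1) ** 2 / 8, 1 - xv)
    print(xv, best_left(xv), closed)  # the grid search matches the closed form
```

The same brute-force pattern extends to evaluating $e_2(x)$ over a grid of first cuts, which is a quick way to locate the optimal first cut numerically before trusting the piecewise algebra.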
{ "domain": "cs.stackexchange", "id": 11629, "tags": "game-theory" }
Gmapping for indoor aerial vehicle?
Question: Hi. I would like to use gmapping for an indoor aerial vehicle, something similar to http://robotics.ccny.cuny.edu/blog/node/152 However, I think they used a custom Gmapping, not the stack on ROS. A clear path to achieve this would be: (1) subscribe to the IMU to correct laser scans with respect to attitude; (2) subscribe to height from some sensor (sonar, laser); (3) provide the corrected information to gmapping. If there is an implementation that does this, it would be great to know about it. If there is not, it would be good to know how to provide height information to gmapping. Originally posted by Hyon Lim on ROS Answers with karma: 314 on 2012-03-28 Post score: 0 Original comments Comment by maruchi on 2012-03-29: You might want to check hector_quadrotor. Answer: The names of this package have changed; now they are: https://github.com/ethz-asl/ethzasl_icp_mapping http://www.ros.org/wiki/ethzasl_icp_mapping Originally posted by Stéphane Magnenat with karma: 83 on 2013-03-12 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 8780, "tags": "slam, navigation" }
Is it possible to detect early if a model is bad?
Question: Let's say we have a model and have just started to fit it, the first epoch out of many. The first epoch shows awful results. Does it make sense to continue training, hoping the results will be better in the next epochs, or is it better to stop (and save time/resources) and try to change something? In my case, I have a classification model, and the first epoch gives loss=5.5, accuracy=1/n where n is the number of outputs. Subsequent epochs changed almost nothing, so the next 10 epochs were just a waste of time and resources. Is there a rule of thumb to check that you should stop and try something else? Answer: In general, if a model is no better than random guessing and does not improve, there's a problem. The "no better than random" part is to be expected at the beginning of training, as weights and biases are initialized randomly. The "does not improve" part requires actually training the model and seeing how things change. One tip for classification models I've found useful is to try to train a single-class detection first: is class A present, yes or no, and then generalize to more classes. The confusion matrix is also helpful once you have something working, to see where the model is making bad predictions, i.e. which classes in the input are misclassified.
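As a rough rule of thumb in code: compare validation accuracy against the chance level 1/n for a few epochs. A hedged Python sketch follows; the function name, patience and margin are illustrative choices, not taken from any particular framework.

```python
# A hedged sketch of a "stop early if stuck at chance" rule. Record
# validation accuracy per epoch and compare it against 1/n_classes.

def looks_stuck(acc_history, n_classes, patience=5, margin=0.02):
    """True if the last `patience` epochs never beat random guessing
    (accuracy 1/n_classes) by more than `margin`."""
    if len(acc_history) < patience:
        return False                     # too early to judge
    chance = 1.0 / n_classes
    return all(a <= chance + margin for a in acc_history[-patience:])

# Stuck at chance level on 10 classes for 5 epochs -> flag it:
assert looks_stuck([0.10, 0.11, 0.10, 0.09, 0.10], n_classes=10)
# Slowly improving -> keep training:
assert not looks_stuck([0.10, 0.15, 0.22, 0.30, 0.41], n_classes=10)
```

The margin and patience values are knobs to tune per problem, not universal constants.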
{ "domain": "datascience.stackexchange", "id": 11915, "tags": "training, accuracy" }
Given an array A[] and a number x, check for pair in A[] with sum as x
Question: LINQ/FP style: Assert.IsTrue(new[] {1,2,3}.AnyPairSum(4)); Where: static class LinqDemo { public static bool AnyPairSum(this IEnumerable<int> source, int sum) { var a = source.OrderBy(i => i).ToArray(); return scan(0, a.Length - 1); bool scan(int l, int r) => l >= r ? false : a[l] + a[r] > sum ? scan(l, r - 1) : a[l] + a[r] < sum ? scan(l + 1, r) : true; } } Answer: Your algorithm isn't the most efficient out there. As you process the input, you can keep track of the numbers you have already seen and which numbers would complement those to sum to x. For instance, in your example, x=4: You read 1, you know that if you encounter a x - 1 = 3, you have found a pair. Next you read a 2, you know that if you encounter a x - 2 = 2, you have found a pair. Then you encounter a 3, which we knew we needed earlier. We can return true. In code: public static bool AnyPairSumAlternative(this IEnumerable<int> source, int sum) { var complements = new HashSet<int>(); foreach(var item in source) { if (complements.Contains(item)) { return true; } try { checked { complements.Add(sum - item); } } catch (OverflowException) { // If sum - item overflows, that means that no two ints together can sum to sum. // We swallow the exception and don't add anything to complements, since the complement // clearly doesn't exist within the data type. } } return false; } This approach skips out of the sorting, and loops through the input list only once. Checking for inclusion in the Hashset is \$O(1)\$, so if I'm not mistaken, this takes the solution from \$O(n\log n)\$ to \$O(n)\$. As comment by @slepic points out, we need to be careful that sum - item doesn't overflow. If that happens, that automatically means that the complement cannot appear in the array, since it wouldn't fit in our datatype. To account for this, we can do the subtraction in checked context and catch any OverflowException.
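For readers outside C#, the same complement-set idea looks like this in Python (a sketch; Python integers do not overflow, so the checked-arithmetic guard from the C# version drops out):

```python
# The complement-set idea from the C# answer, sketched in Python: remember,
# for each item seen so far, which value would complete a pair summing to
# target, and check membership in O(1) per item.

def any_pair_sum(xs, target):
    complements = set()
    for x in xs:
        if x in complements:        # we saw target - x earlier in the list
            return True
        complements.add(target - x)
    return False

assert any_pair_sum([1, 2, 3], 4)       # 1 + 3
assert not any_pair_sum([1, 2, 3], 7)
assert not any_pair_sum([2], 4)         # a lone 2 is not a pair
```

As in the C# version, this makes a single pass, so it is O(n) time with O(n) extra space.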
{ "domain": "codereview.stackexchange", "id": 36510, "tags": "c#, functional-programming, linq" }
Is ALL transfer of energy by electromagnetic radiation classified as "heat"?
Question: In my uni notes my lecturer wrote that energy is either transferred as HEAT or WORK (he said "all other modes of energy transfer other than heat are called work"). Heat can obviously be transferred by convection, conduction or radiation. In many places it says that heat radiation is 'infrared radiation'. This makes sense as infrared is the type of radiation that makes us feel the warmest (although other electromagnetic radiations such as light and microwaves can also make us feel warm... but perhaps this is just due to a bit of infrared radiation being mixed in with them?). So, what I want to know is, are all the forms of electromagnetic radiation (such as x-rays, light, radio waves and microwaves) in addition to infrared classified as 'heat transfer of energy'? I have 2 hypotheses... (1) Perhaps my lecturer got it wrong, and energy can be transmitted/transferred by heat, work OR radiation; or (2) all types/wavelengths of electromagnetic radiation ARE in fact heat, but our bodies sense the infrared as heat/warmth more so than the other types, as it hits the outside of our bodies where there are heat sensors. For example, if we stand near lots of microwaves, we may not initially sense the thermal energy as 'heat' as it is heating up water inside our bodies rather than the heat sensors in the skin. However, our body would gradually heat up and we would start to feel the thermal energy we had acquired as heat. And another example: we can get sunburnt from UV rays even without feeling too hot (i.e. the burn can be out of proportion to the feeling of heat). This may be because the heat and therefore thermal energy we get from UV rays also doesn't interact with heat sensors in the skin as well as infrared can. So it is more of a 'human sense' issue of how well we can 'sense' heat rather than whether heat is actually transferred or not. Is either of my hypotheses correct? Or is there another explanation? Thank you!
Answer: So, what I want to know is, are all the forms of electromagnetic radiation (such as x-rays, light, radio waves and microwaves) in addition to infrared classified as 'heat transfer of energy'? No. There is a subtle difference between the transfer of energy and the transport of energy. All forms of "waves" transport energy from one location to another. On the other hand, the transfer of energy from a wave to a substance can occur by means of heat or work. Heat is energy transfer due solely to temperature difference. So if the temperature of the electromagnetic radiating source is greater than the temperature of the substance interacting with the radiation, energy is transferred in the form of heat. This is the case for infrared radiating sources. Work is energy transfer due to forces times translational displacement or torque times angular displacement. This is technically how microwave radiation cooks food. Microwave radiation doesn't "heat" food. From the wave view, it cooks primarily due to the interaction between the alternating electric field and the electric dipole of the water molecule. The field exerts a rapidly alternating torque on the dipole giving the dipole rotational kinetic energy. That rotational kinetic energy is then randomized by collisions into increasing the translational kinetic energy of the water molecules, and thus increasing the temperature of the molecules (increasing the temperature of the food). So technically, microwave ovens cook food by electromagnetic work. The manner in which electromagnetic radiation interacts with matter depends on the frequency of the wave (or energy of the photon, from the particle view). For an overview of this interaction, I recommend you visit this site: http://hyperphysics.phy-astr.gsu.edu/hbase/mod3.html Hope this helps.
{ "domain": "physics.stackexchange", "id": 78635, "tags": "thermodynamics, energy, electromagnetic-radiation" }
Huffman decoding for video
Question: I've been trying to implement a fast Huffman decoder in order to encode/decode video. However, I'm barely able to decode a 1080p50 video using my decoder. On the other hand, there are lots of codecs in ffmpeg that entropy decode 4-8 times faster. I have been trying to optimize and profile my code, but I don't think I can get it to run much faster. Does anyone have any suggestion as to how one can optimize Huffman decoding? My profiler says my application is spending most of the time in the following code: current = current->children + data_reader.next_bit(); *ptr = current->value; ptr = ptr + current->step; Here is the entire code: void decode_huff(void* input, uint8_t* dest) { struct node { node* children; // 0 right, 1 left uint8_t value; bool step; }; CACHE_ALIGN node nodes[512] = {}; node* nodes_end = nodes+1; auto data = reinterpret_cast<unsigned long*>(input); size_t table_size = *(data++); // Size is first 32 bits. size_t num_comp = *(data++); // Data size is second 32 bits. bit_reader table_reader(data); unsigned char n_bits = ((table_reader.next_bit() << 2) | (table_reader.next_bit() << 1) | (table_reader.next_bit() << 0)) & 0x7; // First 3 bits are n_bits-1. // Unpack huffman-tree std::stack<node*> stack; stack.push(nodes); // "nodes" is root while(!stack.empty()) { node* ptr = stack.top(); stack.pop(); if(table_reader.next_bit()) { ptr->step = true; ptr->children = nodes->children; for(int n = n_bits; n >= 0; --n) ptr->value |= table_reader.next_bit() << n; } else { ptr->children = nodes_end++; nodes_end++; stack.push(ptr->children+0); stack.push(ptr->children+1); } } // Decode huffman-data // THIS IS THE SLOW PART auto huffman_data = reinterpret_cast<long*>(input) + (table_size+32)/32; size_t data_size = *(huffman_data++); // Size is first 32 bits. 
uint8_t* ptr = dest; auto current = nodes; bit_reader data_reader(huffman_data); size_t end = data_size - data_size % 4; while(data_reader.index() < end) { current = current->children + data_reader.next_bit(); *ptr = current->value; ptr = ptr + current->step; current = current->children + data_reader.next_bit(); *ptr = current->value; ptr = ptr + current->step; current = current->children + data_reader.next_bit(); *ptr = current->value; ptr = ptr + current->step; current = current->children + data_reader.next_bit(); *ptr = current->value; ptr = ptr + current->step; } while(data_reader.index() < data_size) { current = current->children + data_reader.next_bit(); *ptr = current->value; ptr = ptr + current->step; } // If dest is not filled with num_comp, duplicate the last value. std::fill_n(ptr, num_comp - (ptr - dest), ptr == dest ? nodes->value : *(ptr-1)); } class bit_reader { public: typedef long block_type; static const size_t bits_per_block = sizeof(block_type)*8; static const size_t high_bit = 1 << (bits_per_block-1); bit_reader(void* data) : data_(reinterpret_cast<block_type*>(data)) , index_(0){} long next_bit() { const size_t block_index = index_ / bits_per_block; const size_t bit_index = index_ % bits_per_block; ++index_; return (data_[block_index] >> bit_index) & 1; } size_t index() const {return index_;} private: size_t index_; block_type* data_; }; Answer: You seem to work on single bit basis. This is very slow. You have to combine several bits together, look up the appropriate pattern in the table and then continue from there (if necessary) in the tree. You can see an outline to this solution presented here http://www.gzip.org/algorithm.txt. Of course, googling for "fast huffman decoding" reveals several papers on that subject as well. Reading them might be worthy as well.
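To make the suggestion concrete, here is a minimal Python sketch of table-driven decoding, the scheme outlined in the gzip notes linked above: peek K bits at once and resolve a whole codeword with a single table lookup instead of K tree steps. It is illustrative only, not a drop-in replacement for the C++ decoder, and all names are made up for the example.

```python
# A minimal Python sketch of table-driven Huffman decoding.

def build_table(codes):
    """codes: {symbol: bit string, e.g. 'a': '10'} for a prefix-free code.
    Returns (table, K) with table[i] = (symbol, code_length) for every
    K-bit integer i whose leading bits spell that symbol's code."""
    K = max(len(c) for c in codes.values())
    table = [None] * (1 << K)
    for sym, code in codes.items():
        pad = K - len(code)
        base = int(code, 2) << pad       # code left-aligned in K bits
        for low in range(1 << pad):      # fill every possible completion
            table[base | low] = (sym, len(code))
    return table, K

def decode(bits, table, K, n_symbols):
    """bits: '0'/'1' string. Decodes n_symbols symbols via table lookups."""
    out, pos = [], 0
    padded = bits + '0' * K              # allow peeking past the end
    while len(out) < n_symbols:
        chunk = int(padded[pos:pos + K], 2)   # peek K bits
        sym, length = table[chunk]            # one lookup per symbol
        out.append(sym)
        pos += length                         # consume only the code length
    return ''.join(out)

codes = {'a': '0', 'b': '10', 'r': '110', 'c': '1110', 'd': '1111'}
msg = 'abracadabra'
encoded = ''.join(codes[ch] for ch in msg)
assert decode(encoded, *build_table(codes), len(msg)) == msg
```

Real codecs cap K and fall back to a secondary table (or tree walk) for rare long codes, since the table grows as 2^K.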
{ "domain": "codereview.stackexchange", "id": 5496, "tags": "c++, performance, compression, video" }
How to identify hydrogen bonds and other non-covalent interactions from structure considerations?
Question: Chemistry is governed by a wide range of interactions, from ionic and covalent bonding, or other types of strong interactions, towards weaker types of bonding, attraction, or repulsion, that typically occur between molecules. The latter are often referred to as chemical, molecular, or non-covalent interactions. Popular examples include ion-dipole interactions, hydrogen bonds, and London dispersion forces. These non-covalent interactions play very important roles in DNA, protein-drug interactions, catalyst-substrate interactions, self-assembly, chemical reactions, among others. How can I identify and analyse hydrogen bonds and other non-covalent interactions, possibly from experimental and calculated structures? Answer: It is safe to say that there will always be intermolecular forces at play. By the time you consider these, you should already have a good idea about the molecules involved in your system. Based on the composition and molecular structures you can make certain assumptions. In a molecule it is straightforward to estimate (bond) polarities based on electronegativities, then infer from these how they might arrange. I will work an example based on the interactions between adenine and thymine later.[1] With the advent of the information age, tools that every chemist has at her or his disposal have become more sophisticated. We have access to many digital resources like databases,[2] or publication servers[3] to retrieve a vast amount of information. Molecular modelling,[4] or even more sophisticated quantum chemical calculations,[5] have become more important, and free tools are available for everyone to use. With that being said, here are some points that might help you identify hydrogen bonds and other non-covalent interactions. For that purpose, let's have a look at our example molecules: Immediately we can formulate a couple of assumptions based on the schematic representation.
In adenine there are five nitrogen atoms, which have a higher electronegativity than carbon. Negative partial charges will therefore be located mostly at these. Hydrogens in organic compounds usually carry positive partial charges; albeit $\ce{C-H}$ bonds tend to be a lot less polarised than $\ce{N-H}$ bonds. Whenever a hydrogen is involved in a bond, that bond can potentially act as a hydrogen-bond donor (see below). Similar observations can be made for thymine. Here we have two terminal oxygen atoms, which will carry a negative partial charge, since they have one of the highest electronegativities. These are often able to accept hydrogen bonds. On the other hand we also have $\ce{N-H}$ bonds, which can act as hydrogen bond donors. Charges & Electrostatic Potential Surfaces For many molecules, structures are readily available. If not, some molecular editors give you the possibility to use implemented force fields to optimise built (guessed) structures. Based on those you can already do a few analyses. One tool that is quite powerful for various tasks is Avogadro; it lets you read crystal structures, perform basic calculations and much more. If you are just playing around, this is a really good choice. For example, I have imported the crystal structure of adenine into Avogadro, optimised it, and calculated the electrostatic potential. Or, after extracting Cartesian coordinates, Molden lets you easily calculate the charges.[6] Hydrogen Bonds Many molecular editors try to guess hydrogen bonds based on their implemented cutoff values. That certainly is very helpful, but not everything can be automated in this way. And especially weaker interactions won't be found. One has to go a bit deeper, then.
As a nice and concise example I have picked an intermolecular 2:1 complex between adenine and thymine, for which the crystal structure is available.[1] There are two principal structural parameters to decide about hydrogen bonds: (a) The distance between the hydrogen $\ce{H}$ and the hydrogen-bond acceptor $\ce{Y}$ is significantly shorter than the sum of their respective van-der-Waals radii, $d(\ce{XH\bond{...}Y})<r_\mathrm{vdW}(\ce{H})+r_\mathrm{vdW}(\ce{Y})$.[7] (b) The angle around the hydrogen is nearly linear, $\angle(\ce{XH\bond{...}Y}) \approx 180^\circ$. For weakly polarised $\ce{XH}$ bonds, isotropic dispersion forces become more important (while the directional electrostatic and covalent contributions decrease), therefore the angle becomes more flexible. We easily see that the bond angles are close to what we expect for hydrogen bonds. I have reproduced a few values from Batsanov's paper below, with the caveat that the value for hydrogen strongly varies depending on the chemical environment from $\pu{110 - 161 pm}$, so I used the classic from Bondi.[7c] Since all the distances are around $\pu{200 pm}$, they are well below the threshold we set earlier. $$ \begin{array}{lr} \text{Element }\ce{Y} & r_\mathrm{vdW}(\ce{Y})/\pu{pm}\\\hline \ce{H} & \approx 120\\ \ce{C} & 196 \\ \ce{N} & 179 \\ \ce{O} & 171 \\\hline \end{array}\hspace{2ex} \begin{array}{lr}\\ \text{H-Bond }\ce{XH\bond{...}Y} & \sum r_\mathrm{vdW}(\ce{Y},\ce{H})/\pu{pm}\\\hline \ce{CH} & 316 \\ \ce{NH} & 299 \\ \ce{OH} & 291 \\\hline \end{array} $$ A quite interesting approach to revealing non-covalent interactions was presented by Johnson et al., and the corresponding program is easy to use and only requires Cartesian coordinates.[8] The surfaces between the molecules represent these interactions, where green represents weak interactions, typically found for dispersion. Blue represents stronger interactions, typically found for hydrogen bonds.
Red displays repulsive forces, typically found within ring or cage systems. If you have access to quantum chemical software, then you can obtain this plot also for wave function files (.wfn). Another possibility is to analyse the electron density in terms of the quantum theory of atoms in molecules (QTAIM).[9] For this you do need a wave function file. The analysis, however, is straightforward and will yield a bond path or not. If there is a bond path, we can estimate the strengths of these bonds with the methodology developed by Espinosa et al. According to this, the bond strength is approximately half the value of the potential energy density at the bond critical point. $$E_\mathrm{H-Bond} = \frac{1}{2}V(r_{\mathrm{BCP}[\ce{XH\bond{...}Y}]})$$ I have performed such a calculation at the DF-B97D3/def2-TZVPP level of theory with Gaussian 09. The optimised geometry will be at the end. \begin{array}{lr} \text{H-Bond} & E_\mathrm{H-Bond}/\pu{kJ mol-1}\\\hline \mathrm{N(37)H \cdots O(1)} & 46.6\\ \mathrm{N(5)H \cdots N(36)} & 38.5\\ \mathrm{C(41)H \cdots O(2)} & 3.2\\ \mathrm{N(20)H \cdots O(2)} & 50.5\\ \mathrm{N(3)H \cdots N(24)} & 29.9\\ \end{array} A general warning applies to the above. Absolute values of these are only approximate, but fall within the range of what is expected. A very nice side effect of this methodology is that it can be applied to intramolecular hydrogen bonds, too. Concluding remarks Dispersive interactions and hydrogen bonds become more and more important in rational reaction design, be it for understanding the molecular structure of biomolecules or as a guiding principle for catalyst-substrate interactions. With further development of computer technology, it should become more accessible to everyone. I hope this post demonstrates that gaining more insight can actually be quite easy (and free). Notes and References (a) Based on the structure from S. Chandrasekhar, T. R. Naik, S. K. Nayak, T. N. Row, Bioorg. Med. Chem. Lett.
2010, 20 (12), 3530-3533. DOI: 10.1016/j.bmcl.2010.04.131 PMID: 20493694 CSD: 739016 (b) Adenine, CSID: 185 (c) S. Mahapatra, S. K. Nayak, S. J. Prathapa, T. N. Guru Row, Cryst. Growth Des. 2008, 8 (4), 1223–1225. DOI: 10.1021/cg700743w, CSD: 652573 (d) Thymine, CSID: 1103 (e) G. Portalone, L. Bencivenni, M. Colapietro, A. Pieretti, F. Ramondo, Acta Chemica Scand. 1999, 53, 57-68. DOI: 10.3891/acta.chem.scand.53-0057, CSD: 136916 (a) The Cambridge Structural Database (CSD), https://www.ccdc.cam.ac.uk/ (b) Crystallography Open Database (COD), http://www.crystallography.net/cod/ (c) Computational Chemistry Comparison and Benchmark DataBase, http://cccbdb.nist.gov/ (Only for 1799 small molecules and atoms) (d) Handbook of Chemistry and Physics, http://hbcponline.com/faces/contents/ContentsSearch.xhtml (e) ... (a) SciFinder, https://www.cas.org/products/scifinder (b) Google Scholar, https://scholar.google.de/ (c) Web of Science, (formerly known as Web of Knowledge) http://www.webofknowledge.com/ (d) ... (a) MolCalc, http://molcalc.org/ (b) Pitt Quantum Repository, https://pqr.pitt.edu/ (At the time of writing it was dead.) Github: pittquantum (c) Many open source molecular editors include the possibility to use force field calculations. For example: Avogadro, molden (d) For more on molecular modelling in the open source domain see S. Pirhadi, J. Sunseri, D. R. Koes, J. Mol. Graph. Model. 2016, 69, 127-143. An updated online version of this catalog can be found at https://opensourcemolecularmodeling.github.io. (a) For an extensive, but not necessarily complete, list of quantum chemistry software see Wikipedia. (b) For the purpose of this demonstration I will be using the proprietary software Gaussian. (c) To view crystal structures, Mercury can be obtained (for free) from ​the Cambridge Crystallographic Data Centre (CCDC), which also hosts CSD. 
https://www.ccdc.cam.ac.uk/solutions/csd-system/components/mercury/ (a) Tutorial for Avogadro (b) Tutorial for molden (a) A concise (and as far as I can tell newest) list of van-der-Waals radii of many elements can be found in S. S, Batsanov, Inorg. Mat. 2001, 37, 871-885. DOI: 10.1023/A:1011625728803 (mirrored pdf) (b) A list of van-der-Waals radii can also be found on Wikipedia (c) A. Bondi, J. Phys. Chem. 1964, 68, 441-451. doi: 10.1021/j100785a001 (a) The original publication: E. R. Johnson, S Keinan, P. Mori-Sánche§, J. Contreras-García, A. J. Cohen, W. Yang, J. Am. Chem. Soc. 2010, 132, 6498-6506. DOI: 10.1021/ja100936w (b) The presentation of the program: J. Contreras-Garcia, E. R. Johnson, S. Keinan, R. Chaudret, J-P. Piquemal, D. N. Beratan, W. Yang, J. Chem. Theory Comput. 2011, 7, 625-632. DOI: 10.1021/ct100641a (c) Download the code: http://www.lct.jussieu.fr/pagesperso/contrera/nciplot.html (d) You'll also need VMD (Visual Molecular Dynamics) from the University of Illinois (a) A very brief introduction can be found on Wikipedia. The corresponding book: Bader, Richard (1994). Atoms in Molecules: A Quantum Theory. USA: Oxford University Press. ISBN 978-0-19-855865-1. (publisher) (b) Multiwfn - A Multifunctional Wavefunction Analyzer; http://sobereva.com/multiwfn/ corresponding paper: T. Lu, F. Chen, J. Comput. Chem. 2012, 33, 580-592. DOI: 10.1002/jcc.22885 (c) Startup script (and examples) for Linux version: https://github.com/polyluxus/runMultiwfn.bash (shameless self-plug) (d) E. Espinosa, E. Molins and C. Lecomte, Chem. Phys. Lett., 1998, 285, 170–173. Appendix Optimised Structure of the adenine-thymine 2:1 complex calculated at DF-B97D3/def2-TZVPP in Gaussian 09 Rev. 
E.01 45 E(RB97D3/def2TZVPP/W06) = -1388.51095169 O 19.03780 11.79565 1.63996 O 14.74303 13.36808 1.71115 N 15.07940 11.09762 1.64043 H 14.04669 10.93403 1.64128 N 16.88238 12.55083 1.67499 H 17.23771 13.53695 1.70257 C 17.83067 11.52964 1.63864 C 17.29332 10.17534 1.60048 C 15.51719 12.40225 1.67775 C 15.94232 10.03610 1.60357 H 15.46489 9.06076 1.57654 C 18.24710 9.02139 1.56030 H 18.89656 9.08263 0.68030 H 18.90669 9.02997 2.43483 H 17.71085 8.06866 1.53480 N 8.28060 10.09397 1.65692 H 7.68055 9.28386 1.63653 N 8.91986 12.24943 1.71837 N 10.48101 9.01368 1.60709 N 11.91467 12.94472 1.71757 H 12.92612 13.11883 1.71578 H 11.26613 13.71385 1.74609 C 10.03193 11.42408 1.68464 N 12.26612 10.63929 1.64378 C 11.41986 11.70069 1.68284 C 9.65939 10.07380 1.64587 C 7.89700 11.42267 1.70074 H 6.85459 11.71169 1.71748 C 11.75955 9.39245 1.60917 H 12.49648 8.59107 1.57896 N 21.22700 16.46826 1.76840 N 17.31372 17.43807 1.81488 H 16.80939 18.31038 1.84233 N 19.61652 18.27309 1.82815 C 18.69253 17.30751 1.80467 N 17.71219 15.24141 1.74956 N 20.64990 14.21957 1.70686 H 19.98577 13.44513 1.68592 H 21.63800 14.02512 1.69520 C 20.27365 15.50797 1.74532 C 16.77892 16.17095 1.78070 H 15.71578 15.97320 1.77996 C 18.91965 15.92447 1.76371 C 20.85479 17.75435 1.80728 H 21.67003 18.47621 1.82413
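As a small practical follow-up, criterion (a) above (the van-der-Waals distance cutoff) is easy to script as a first-pass screen over measured contacts. The sketch below hard-codes the radii quoted in the table; the function names and the angle cutoff are my own illustrative choices, not from any of the cited programs.

```python
# Criterion (a) as a quick first-pass screen: flag an H...Y contact when
# d(H...Y) is below r_vdW(H) + r_vdW(Y). Radii (pm) from the table above.

R_VDW = {'H': 120, 'C': 196, 'N': 179, 'O': 171}

def hbond_distance_ok(d_pm, acceptor):
    return d_pm < R_VDW['H'] + R_VDW[acceptor]

def angle_ok(theta_deg, cutoff=120.0):
    # criterion (b): X-H...Y roughly linear; weaker bonds tolerate bending,
    # so this cutoff is a deliberately loose, illustrative choice
    return theta_deg >= cutoff

# The N-H...O contacts discussed above sit near 200 pm, well inside
# 120 + 171 = 291 pm:
assert hbond_distance_ok(200, 'O')
assert not hbond_distance_ok(300, 'O')
assert angle_ok(165)
```

A real screen would of course read the coordinates (such as the appendix above) and compute the distances and angles itself.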
{ "domain": "chemistry.stackexchange", "id": 11425, "tags": "theoretical-chemistry, intermolecular-forces, hydrogen-bond, dipole" }
Linear charge density of a path on a surface
Question: My problem is somewhat general. I do not think it has been posted before; however, I am new to Physics Stack Exchange so please, if I'm wrong, feel free to let me know. I will give an example problem and then talk about the general case I'm interested in. Given the outer surface of a cylinder with height $l$ with a surface density $$\sigma(\theta,z)$$ how do I get the linear charge density of a path $$\theta(z)$$ on the surface? I realize that if $\sigma$ is constant on the surface and the path is perpendicular to the symmetry axis of the cylinder it should be $$\lambda = \frac{\sigma}{l}$$ However, this does not make sense looking at the dimensions. Also, I am searching for a more general insight. Given a volume charge density $$\rho(x,y,z)$$ (if the charge density can be expressed as a surface, $\rho$ would just be a surface charge density with a $\delta$-distribution) how do I get the linear/surface charge density of a path/surface (could be sphere, cylinder, plane etc.) which lies in the same volume? I would be very happy if you could direct me to a book/website where this is explained or, even better, explain it here. This problem has been bugging me a lot. Answer: Let's take a region having charge density $\rho (x,y,z)$. From now on I will only be dealing in Cartesian coordinates; however, you can easily switch to any other coordinate system if needed. Also, in the following answer, I am assuming charge distributions having finite characteristic parameters (volume charge density or surface charge density or linear charge density). Mathematical derivation Surface charge density Let's choose a surface $S(x,y,z)$ having an infinitesimal thickness $\mathrm d t$. Now let's choose an infinitesimal area element on the surface, at the point $(x_0,y_0,z_0)$, having an area $\mathrm d A$.
Thus the charge contained in that infinitesimal volume formed by $\mathrm dA$ and $\mathrm dt$ is $$\mathrm dq =\rho(x_0,y_0,z_0)\:\mathrm dA\:\mathrm dt\tag{1}$$ Now the surface charge density is defined as $\sigma =\mathrm d q/\mathrm dA$. Using this and equation $(1)$, we get $$\sigma(x_0,y_0,z_0)=\frac{\rho(x_0,y_0,z_0)\:\mathrm dA\:\mathrm dt}{\mathrm dA}=\rho(x_0,y_0,z_0)\:\mathrm dt$$ However, since we are talking about a surface, the thickness is infinitesimally small, so the surface charge density ($\sigma$) must vanish. Linear charge density Applying the above process to linear charge density, we get (here, our infinitesimal volume element is a cuboid): $$\mathrm d q=\rho(x_0,y_0,z_0)\:\mathrm dl \:\mathrm dh \:\mathrm dw$$ where $\mathrm dl$ is the infinitesimal length element of the curve, $\mathrm dh$ is the thickness of the line and $\mathrm dw$ is the depth of the line. Now using the definition of linear charge density ($\lambda=\mathrm dq/\mathrm dl$), we get $$\lambda(x_0,y_0,z_0)=\frac{\rho(x_0,y_0,z_0)\:\mathrm dl \:\mathrm dh \:\mathrm dw}{\mathrm dl}=\rho(x_0,y_0,z_0) \:\mathrm dh \:\mathrm dw$$ which again gives us a zero linear charge density. Let's, instead, try finding the linear charge density of a curve located on a surface having surface charge density $\sigma(x,y,z)$. Applying the above process, we see that we can now drop the depth term ($\mathrm dw$), since there is no depth to a 2D surface. Thus we get $$\lambda(x_0,y_0,z_0) = \sigma (x_0, y_0,z_0)\:\mathrm dh$$ Again, the linear charge density vanishes. This implies that you cannot have a surface of $N-1$ dimensions with a finite (relevant) charge density inside an $N$-dimensional space having finite (relevant) charge density everywhere. Physical explanation There's a nice and intuitive way of seeing why this isn't possible. Imagine a finite $N$-dimensional space.
Now let's, for the sake of argument, assume that all the hypersurfaces inside that $N$-dimensional space have a non-zero finite charge density everywhere. If this is true, then we can find the charge contained by each such surface, which would be finite. Now, infinitely many such surfaces exist, and to make up the finite $N$-dimensional space, you would need infinitely many of these $N-1$-dimensional hypersurfaces. This implies that the final charge contained in our space is equal to the sum of the charges contained in each of the infinitely many hypersurfaces. But this implies that the charge contained in our space is infinite, since we are adding a finite non-zero charge (for each surface) infinitely many times. But we already assumed that the charge density of our finite $N$-dimensional space is finite everywhere, so the charge contained in that finite space must be finite as well. This shows we have a contradiction, implying that both of our initial assumptions, (1) a finite space having finite charge density and (2) a hypersurface having finite non-zero charge density, cannot be simultaneously true. Hence, we have reached the same conclusion the math suggested. Charge distributions involving Dirac delta functions In the following part, I am only considering a specific example, where I will be trying to convert surface charge density to linear charge density. It won't be hard to generalise this to other scenarios as well. Let's say the surface charge density is of the form $$\sigma(\mathbf r)=q(\mathbf r) \delta (\mathbf s)$$ where $\delta$ is the Dirac delta function, $q:V\to \mathbb R$ is a function from the vector space to the real numbers, and $\mathbf s=f(\mathbf r)$, where $f:V\to V$ is a function mapping vectors to vectors in the vector space. Let the solution of the equation $\mathbf s=f(\mathbf r)=\boldsymbol{0}$ be the curve $\gamma$. Now, let's find the linear charge density at a point $\mathbf r_0$ lying on the curve $\gamma$.
To do that, we need to determine the thickness of our curve. Notice that the magnitude of a first-order infinitesimal change in $\mathbf s$ corresponds to translating the curve $\gamma$, forming a new curve $\gamma '$, which doesn't intersect $\gamma$. The collection of such neighbouring curves makes up a "thick" curve, say $\Gamma$. So $\Gamma$ is essentially an area, which, at any point, has a thickness $\mathrm d \mathbf r$ (i.e. the change in the position vector of that point, which was initially on the curve). Thus, writing the change in $f$ up to the first linear term, we get $$ f(\mathbf r)+\frac{\mathrm df(\mathbf r)}{\mathrm dr}\mathrm d r=\mathbf s + \mathrm d\mathbf s$$ But we know that initially $\mathbf r$ lay on the curve $\gamma$, so $\mathbf s=f(\mathbf r)=0$. Applying this to the above equation, we get $$\frac{\mathrm d f(\mathbf r)}{\mathrm dr}\mathrm d r= \mathrm d\mathbf s$$ Taking the magnitude of both sides, we get $$\left|\frac{\mathrm d f(\mathbf r)}{\mathrm dr}\right | \mathrm dr = \mathrm ds$$ Rearranging the above expression, we get the thickness $\mathrm d r$ as $$\mathrm dr =\frac{\mathrm ds}{\left|\frac{\mathrm d f(\mathbf r)}{\mathrm dr}\right |}$$ Now we have the thickness at every point. Let's take a small element at $\mathbf r_0$ of length $\mathrm dl$. Thus the charge of that element would be \begin{align} \mathrm dq &=\left(\int \frac{q(\mathbf r) \delta (\mathbf s)}{ \left|\frac{\mathrm d f(\mathbf r)}{\mathrm dr}\right |} \mathrm ds \right) \mathrm dl\\ \mathrm dq&=\frac{q(\mathbf r_0)}{\left|\frac{\mathrm d f(\mathbf r)}{\mathrm dr}\right |_{\mathbf r=\mathbf r_0}}\mathrm dl \end{align} Using the definition of linear charge density, $\lambda=\mathrm dq/\mathrm dl$, we get $$\lambda(\mathbf r_0)=\frac{q(\mathbf r_0)}{\left|\frac{\mathrm d f(\mathbf r)}{\mathrm dr}\right |_{\mathbf r=\mathbf r_0}}$$ This is the final expression.
However, note that the function given at the start must be such that $\left|\frac{\mathrm d f(\mathbf r)}{\mathrm dr}\right |\neq 0$ for all $\mathbf r$ on the curve $\gamma$.
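To make the delta-function bookkeeping concrete, here is a small numerical sanity check of the result $\lambda(\mathbf r_0)=q(\mathbf r_0)/\left|\mathrm df/\mathrm dr\right|$. The charge profile $q$, the choice $f(x,y)=2y$ (so the charged curve is the x-axis and $|\mathrm df/\mathrm dr|=2$ there), and the narrow Gaussian standing in for the delta function are all illustrative assumptions, not part of the original question:

```python
import numpy as np

# sigma(x, y) = q(x) * delta(f(x, y)) with f(x, y) = 2*y, so the
# predicted linear density along the x-axis is lambda(x) = q(x) / 2.

def q(x):
    return 1.0 + x**2  # arbitrary smooth charge profile (an assumption)

def delta_approx(s, eps=1e-3):
    # narrow Gaussian standing in for the Dirac delta
    return np.exp(-s**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

x0 = 0.7                                   # point on the curve to test
y = np.linspace(-0.05, 0.05, 100001)       # cut across the curve's thickness
dy = y[1] - y[0]
sigma = q(x0) * delta_approx(2 * y)        # surface density along the cut
lam_numeric = np.sum(sigma) * dy           # integrate across the thickness
lam_predicted = q(x0) / 2.0

print(lam_numeric, lam_predicted)          # should agree closely
```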
{ "domain": "physics.stackexchange", "id": 70485, "tags": "electrostatics, charge" }
Electric Potential Field of Parallel Electrodes within Grounded Shell
Question: Any help would be greatly appreciated. I think I need to solve this using boundary potential conditions and Laplace's equation. Here is the system I'm attempting to model: The red rods are metal electrodes at the same potential ($V_r$) and on the same $x$ plane (offset equally from the $z$ axis). The orange cylinder is at $V=0$. I need the potential field map in 3D Cartesian coordinates, with variables of electrode voltage, electrode radius, electrode offset from the center axis, shell radius, overall system length. Update 1: I have simulated the system using FEMM, but in order to incorporate the answer with an applied magnetic field and particle motion, I need the solution in equation form. Update 2: I have this reference for a single coaxial electrode (http://www.ittc.ku.edu/~jstiles/220/handouts/Example%20The%20Electorostatic%20Fields%20of%20a%20Coaxial%20Line.pdf) in polar coordinates. Answer: Brief Summary Numerically, the mean value property of harmonic functions allows you to get an approximate solution to boundary value problems relatively quickly. Often you can improve convergence to a solution with a good initial guess, however, so analytical approaches can still be useful. Consider the limit of an infinitely long cylinder. There is a simple closed form expression for the potential in the limit where the electrode radius vanishes, and for non-vanishing radius it is possible to use an iterative approach to find a sequence of analytic approximations. Detailed Summary Firstly, as mentioned in the comments, for any harmonic function $\phi(x,y)$ there is a holomorphic function $f(x+iy)$ such that $\phi(x,y)$ is the real-part of $f$. Another crucial property of solutions to Laplace's equation is invariance under conformal mappings. If $\nabla^2\phi(x,y)=0$ and $f$ is a conformal map (loosely), then $\nabla^2\phi(f(x+iy))=0$ as well. This property can be used to simplify the problem somewhat. 
Let $R$ be the radius of the large cylinder, and let $\sigma(z)=Ai\frac{i-z/R}{i+z/R}$ (where $A$ is a real constant). You can check that this map sends the interior of the large cylinder (viewed as a disk in the complex plane $\mathbb C$) to the upper half plane $\mathbb H$. You can also check that it maps circles to other circles, so the electrode cylinders are mapped into two rescaled circles in the upper half plane. Moreover, the centers of the electrodes are mapped to a line parallel to the real axis, and both circles in $\mathbb H$ have equal radius. As promised, the picture: In the new coordinates, the potential $\phi$ must vanish on the real line $z=\bar z$, and have some constant value $V$ on two circles in the upper half plane. To solve this, we can invoke another standard trick of boundary value problems: the method of image charges. If we extend $\phi$ by simultaneously reflecting it across the real-line while reversing its sign, we get a larger (but more symmetric) boundary value problem. Now I use the fact that a harmonic function is the real part of a holomorphic function. Let $\psi(x,y)$ be the potential produced by charges on the upper right cylinder (centered at point $z_0$ with respect to the origin, which is set at the 'center of gravity' of the augmented problem), and let $f(z-z_0)$ be the holomorphic function that $\psi$ is the real part of ($f$ is viewed as a function defined near the origin but displaced by $z_0$). Then "by symmetry", the fields generated by the other cylinders are (proceeding counter-clockwise) $\overline{f(-\bar z-z_0)}=\bar f(-z-\bar z_0)$, $-f(-z-z_0)$, and $-\overline{f(\bar z-z_0)}=-\bar f(z-\bar z_0)$. The holomorphic field (call it $F(z)$) corresponding to the full potential $\phi$ is then just the sum of these individual 'reflected' holomorphic copies of $f(z-z_0)$. 
To satisfy the boundary conditions specified, we need to ensure that the sum of all $f$ and its reflections is purely imaginary plus a constant real part on any of the circular boundaries of electrodes. Because of the extra symmetry brought using the conformal mapping, we can focus on just one of these boundary circles. For simplicity, let's choose the factor $A$ so that the boundary circles all have radius 1. Assuming the (locally) holomorphic function $f(z)$ has a singular expansion about $z=0$ (i.e. $f(z-z_0)$ is [potentially] singular near $z=z_0$), we can write $f(z)=\alpha \ln z + \sum_{n\in \mathbb Z} c_n z^n$. I'll also assume that $c_n=0$ for $n> 0$. Calling $g(z-z_0)$ the sum of the three reflections of $f(z-z_0)$, we assume $g(z)$ is analytic at $z=0$, so we can write $g(z)=\sum_{n\geq 0}b_n z^n$. In order for the real part of $f(z)+g(z)$ to be constant on the unit circle, we need each nonzero frequency sector in the "Fourier expansion" to be purely imaginary. Since $f(z)$ has a singular expansion while $g(z)$ has an analytic expansion, this means imposing conditions on $b_n$ and $c_{-n}$. In order for $b_nz^n+c_{-n}z^{-n}\big|_{\partial \mathbb D}=b_ne^{in\theta}+c_{-n}e^{-in\theta}$ to be imaginary, we need $c_{-n}=-\bar b_n=-\frac{1}{n!}\overline{g^{(n)}(0)}$, where $g^{(n)}$ is the $n$-th complex derivative of $g$. From this, we can write \begin{align*} f(z)=\alpha\ln z-\sum_{n\geq 0}\frac{1}{n!}\overline{g^{(n)}(0)}z^{-n}, \end{align*} and since $\overline{g(z)}=\sum_{n=0}^\infty \frac{1}{n!}\overline{g^{(n)}(0)}\overline{z^n}$, we have $f(z)=\alpha\ln z-\overline{g(\bar z^{-1})}=\alpha\ln z - \bar g(z^{-1})$. 
Using the original definition of $g(z-z_0)=-\bar f(z-\bar z_0)-f(-z-z_0)+\bar f(-z-\bar z_0)$, we have a closed (functional) equation for $f(z)$: \begin{align*} f(z)=\alpha\ln z + f(\bar z_0-z_0+z^{-1})+\bar f(-2\bar z_0-z^{-1})-f(-\bar z_0-z_0-z^{-1}) \end{align*} In cases where the radii of the electrodes are very small, it is reasonable to keep only the first term $\alpha \ln z$, so that $\psi(x,y)=\alpha\ln|z|$. Then $\phi(x,y)$ is given by $\alpha (\ln |z-z_0|+\ln|z+\bar z_0|-\ln|z+z_0|-\ln|z-\bar z_0|)$, and the potential in the original coordinates ($s$ say) is given by $\phi(\sigma(s))$, where $\sigma$ was the Moebius transformation defined earlier. This approximation for $\phi$ (in the original domain) might also be a good starting point for numerical relaxation methods. For finite-radius electrodes, it is possible to use the above relation for $f$ iteratively to improve the accuracy of the approximation. The expansion can be represented diagrammatically as a sum over nodes of a sort of truncated tree, where each leaf is associated with a logarithm of a finite continued fraction.
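As a quick check of the thin-electrode formula, the four-logarithm potential can be evaluated numerically: it must vanish identically on the real axis (the grounded boundary of the half-plane problem) while being non-trivial elsewhere. The values of $\alpha$ and $z_0$ below are arbitrary illustrative choices, not values from the original problem:

```python
import numpy as np

# phi(z) = alpha*(ln|z - z0| + ln|z + conj(z0)| - ln|z + z0| - ln|z - conj(z0)|)
# is the vanishing-radius approximation with image charges; by construction
# it should be zero on the real axis.

alpha = 1.0
z0 = 1.5 + 2.0j  # image-charge position in the upper half plane (an assumption)

def phi(z):
    z = np.asarray(z, dtype=complex)
    return alpha * (np.log(abs(z - z0)) + np.log(abs(z + np.conj(z0)))
                    - np.log(abs(z + z0)) - np.log(abs(z - np.conj(z0))))

x = np.linspace(-10, 10, 101)          # points on the grounded real axis
print(np.max(np.abs(phi(x))))          # ~0 up to rounding
print(phi(1j))                          # nonzero off the axis
```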
{ "domain": "physics.stackexchange", "id": 31943, "tags": "electrostatics" }
A graph representation with an adjacency matrix in Python
Question: I'm trying to create a graph representation in Adj Matrix in Python. I'm not sure if this is the best pythonic way. class Graph(object): def __init__(self, edge_list): self.edge_list = edge_list def add_edge(self, edge_list): self.edge_list.append(edge_list) def adj_mtx(self): v = 0 counter = set() for src, dest in self.edge_list: counter.add(src) counter.add(dest) v = len(counter) mtx = [[0 for y in range(v)]for x in range(v)] for e in self.edge_list: src, dest = e src = src - 1 dest = dest - 1 mtx[src][dest] = 1 return mtx edge_list = [(1,2), (2,3), (1,3)] graph = Graph(edge_list) mtx = graph.adj_mtx() assert mtx == [[0, 1, 1], [0, 0, 1], [0, 0, 0]] graph.add_edge((3,3)) mtx = graph.adj_mtx() assert graph.adj_mtx() == [[0, 1, 1], [0, 0, 1], [0, 0, 1]] Answer: Use third party libraries if possible Almost anytime you want to do something, you probably want to use someone else's code to do it. In this case, whenever you're working with graphs in Python, you probably want to use NetworkX. Then your code is as simple as this (requires scipy): import networkx as nx g = nx.Graph([(1, 2), (2, 3), (1, 3)]) print nx.adjacency_matrix(g) g.add_edge(3, 3) print nx.adjacency_matrix(g) Friendlier interface It seems unnecessarily cumbersome to have to explicitly initialize an empty Graph this way: g = Graph([]). I think a better implementation would be something like class Graph(object): def __init__(self, edge_list=None): self.edge_list = edge_list if edge_list is not None else [] Similarly, I think the parameters for add_edge could use work. For one, it isn't a list of edges that you're passing it - it is a single edge, encoded as a list/tuple/collection of some kind. I also find it a little annoying that I have to create a tuple/list/whatever to actually add the edge to the graph - I'd rather just pass in the two end-points. 
I'd prefer something like this: def add_edge(self, first, second): self.edge_list.append((first, second)) Encoding your graph At this point, however, I have to ask why you're representing your graph as a list of edges - to me, the most intuitive way to think of a graph (and how I've implemented one in the past) is to use a dictionary - to use your example, I'd probably encode it like this { 1: {2, 3}, 2: {1, 3}, 3: {1, 2} } Then, instead of searching through an entire list to find a given edge, you just have to perform a quick dictionary lookup for one of the endpoints, and then a theoretically smaller iteration over a (hopefully) shorter list. I've also seen versions that use nested dictionaries very effectively. Your adj_mtx function Note - I didn't change anything about the encoding here, I just used your implementation. You could clean this up a bit. You don't need to initialize v where you do. It would also be easier if you kept the nodes in a set and added them whenever you added an edge. In general, prefer xrange in Python 2, although that makes compatibility trickier - I generally use a library like six to handle things like that, although if you don't need everything you can write your own file (good name is usually compatibility.py) that has only the changes you need. Instead of nested comprehensions, just do [0] * v. Also, if you don't care about a variable (such as x or y) use _ to identify it. You can split up iteration variables in a for loop - for src, dest in self.edge_list is cleaner than for e in self.edge_list: src, dest = e. Overall you could use more descriptive names in this function. 
I'd probably write it something like this: def adj_mtx(self): count = len(self.nodes) matrix = [[0]*count for _ in range(count)] for src, dest in self.edge_list: src -= 1 dest -= 1 matrix[src][dest] = 1 return matrix Additionally, it seems like adj_mtx should just be called adjacency_matrix, and I would rather it be a property (potentially with caching) than a function. Imagine something like this: class Graph(object): _adjacency_matrix = None def __init__(self, edge_list=None): self.edge_list = edge_list if edge_list is not None else [] self.nodes = set() self.cache_valid = False def add_edge(self, first, second): edge = first, second self.edge_list.append(edge) self.nodes.update(edge) self.cache_valid = False @property def adjacency_matrix(self): if not self.cache_valid: count = len(self.nodes) matrix = [[0]*count for _ in range(count)] for src, dest in self.edge_list: src -= 1 dest -= 1 matrix[src][dest] = 1 self._adjacency_matrix = matrix self.cache_valid = True return self._adjacency_matrix def clear_cache(self): self.cache_valid = False self._adjacency_matrix = None Note that if someone adds edges to the list manually, or does other weird things, then the caching won't work.
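For completeness, here is a minimal sketch of the dictionary-of-sets encoding discussed above, together with a conversion back to an adjacency matrix. It stores directed edges (matching the asserts in the question) and assumes nodes are labelled 1..n; the function names are my own:

```python
# Build the adjacency-dictionary encoding from an edge list, then
# derive the adjacency matrix from it.

def to_adjacency_dict(edge_list):
    adjacency = {}
    for src, dest in edge_list:
        adjacency.setdefault(src, set()).add(dest)
        adjacency.setdefault(dest, set())  # make sure every node appears
    return adjacency

def to_matrix(adjacency):
    count = len(adjacency)
    matrix = [[0] * count for _ in range(count)]
    for src, neighbours in adjacency.items():
        for dest in neighbours:
            matrix[src - 1][dest - 1] = 1
    return matrix

adjacency = to_adjacency_dict([(1, 2), (2, 3), (1, 3)])
print(to_matrix(adjacency))  # [[0, 1, 1], [0, 0, 1], [0, 0, 0]]
```

The dictionary makes edge lookups cheap, and the matrix is derived on demand instead of being stored.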
{ "domain": "codereview.stackexchange", "id": 18625, "tags": "python, graph" }
Potential energy curve for intermolecular distance
Question: (source: a-levelphysicstutor.com) I'm trying to understand this curve better, but I can't quite figure out what "negative potential energy" means. The graph should describe a molecule oscillating between $A$ and $B$; however, where I'm stuck is that the PE is equal at $A$ and $B$, so why does this mean $r$ will increase at $A$ (repel) and decrease at $B$? Answer: Suppose that two molecules are at distance $B$ and have zero kinetic energy. There's a lower potential energy position at $C$ and therefore the molecules will attract. They will convert $\epsilon$ potential energy into kinetic energy and reach $C$. Now, the law of inertia states, and the fact that they have positive kinetic energy indicates, that they will maintain their state of motion at $C$ towards $A$. Going towards $A$ they will gain potential energy by converting kinetic energy into it (in other words, slowing down). Once at the distance $A$, they will have gained exactly $\epsilon$ potential energy and will therefore have zero kinetic energy. At this point the cycle will repeat inverted: they will move towards a distance of $C$ because it's lower energy, surpass it and reach $B$ with zero kinetic energy, which will make the cycle repeat from the start.
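The energy bookkeeping in the answer can be illustrated numerically: release a particle (unit mass, arbitrary units) from rest at one turning point of a Lennard-Jones-style well and let it oscillate. Total energy KE + PE is conserved, so the kinetic energy vanishes again exactly where the PE returns to its starting value, i.e. at the other turning point. The potential chosen here is an illustrative stand-in for the curve in the question:

```python
# Integrate the motion with velocity Verlet and record the turning points.

def U(r):
    return r**-12 - 2 * r**-6          # minimum U = -1 at r = 1

def F(r):
    return 12 * r**-13 - 12 * r**-7    # F = -dU/dr

r, v, dt = 0.95, 0.0, 1e-4             # start at rest on the repulsive side
E0 = 0.5 * v**2 + U(r)
r_min, r_max = r, r
for _ in range(200000):                 # velocity Verlet steps
    a = F(r)
    r += v * dt + 0.5 * a * dt**2
    v += 0.5 * (a + F(r)) * dt
    r_min, r_max = min(r_min, r), max(r_max, r)

E = 0.5 * v**2 + U(r)
print(E0, E)                # total energy is conserved
print(r_min, r_max)         # the two turning points (A and B)
print(U(r_min), U(r_max))   # PE nearly equal at both turning points
```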
{ "domain": "physics.stackexchange", "id": 11399, "tags": "potential-energy, molecules" }
Finding a maximum-weight base of a matroid, in reverse
Question: Given a weighted matroid with positive weights, we can find an independent set with maximum weight with a greedy algorithm: Start with an empty set (by definition of matroid, it is independent). Add an element with maximum weight among all elements whose addition keeps the current set independent (if there are no such elements, terminate). My goal is to find out whether this approach can also work in reverse. My first attempt was: Start with a set containing all elements. As long as the set is NOT independent, remove the element with minimum weight. This attempt proved to be false. Here is an example: There are 8 elements, divided into two groups: A and B. The matroid contains all subsets that have at most two elements of each group. The weights are: Group A: 303,302,301,300. Group B: 203,202,201,200. The forward-greedy approach picks 303, 302, 203, 202, which is optimal. But the reverse-greedy approach removes 200,201,202,203,300,301, and leaves only 302,303. But, I think the following alternative algorithm should work: Start with a set containing all elements. As long as the set is not independent, remove the element with minimum weight among all elements whose removal leaves a base in the current set. So, in the above example, this algorithm removes 200 and 201, but does not remove 202 and 203 (since then no base will remain), then removes 300 and 301, and stops with the maximum-weight independent set. I am not sure about the complexity of this algorithm. Particularly, how much time does it take to calculate whether the current set (after removing an element) contains a base? Answer: Let $M$ be a matroid with non-negative weight function $w$ and rank function $r$ (the rank of a set of points $S \subseteq M$ is the maximal size of an independent subset of $S$). Suppose that the matroid has rank $m$. You propose the following algorithm: $S = M$. While $|S| > m$: remove from $S$ a minimum weight element $x$ such that $r(S-x) = m$. Return $S$. 
This algorithm clearly returns a base. Let $W(S)$ denote the maximum weight of an independent subset of $S$. We prove by induction on the steps of the algorithm that $W(S) = W(M)$. The base case is clear. For the inductive step, suppose that $S$ contains a base $B$ of weight $W(M)$, and suppose that we removed an element $x \in S$ to obtain the new set $S'$. If $x \notin B$ then clearly $W(S') = W(M)$. Otherwise, let $B'$ be a base contained in $S'$ (which exists by the definition of the algorithm). Since $|B'| > |B-x|$, there exists an element $y \in B' \setminus B$ such that $B'' = B-x+y$ is a base. By definition, $w(y) \geq w(x)$, and so $$W(S') \geq w(B'') = w(B)-w(x)+w(y) \geq w(B) = W(M),$$ completing the proof. Answering your question, it is a classical result that the rank function can be computed efficiently using an independence oracle. Perhaps you can come up with the algorithm.
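Taking up that invitation, here is a sketch of the reverse-greedy algorithm in Python, computing ranks greedily through an independence oracle (which is valid for matroids). Run on the partition-matroid example from the question, it keeps exactly the maximum-weight base; the function names are my own:

```python
# Rank of a set = size of a greedily-built maximal independent subset.
def rank(elements, independent):
    picked = []
    for x in elements:
        if independent(picked + [x]):
            picked.append(x)
    return len(picked)

def reverse_greedy(elements, independent, weight):
    S = sorted(elements, key=weight)   # ascending: cheapest candidates first
    m = rank(S, independent)
    while len(S) > m:
        # A removable element always exists in a matroid (proved above).
        for x in S:
            if rank([y for y in S if y != x], independent) == m:
                S.remove(x)            # minimum-weight removable element
                break
    return set(S)

# The partition matroid from the question: at most two elements per group.
group_a = {303, 302, 301, 300}
group_b = {203, 202, 201, 200}

def independent(subset):
    return (sum(x in group_a for x in subset) <= 2
            and sum(x in group_b for x in subset) <= 2)

base = reverse_greedy(group_a | group_b, independent, weight=lambda x: x)
print(sorted(base))  # [202, 203, 302, 303] -- the maximum-weight base
```

Each step makes O(|S|) rank calls, and each greedy rank computation makes O(|S|) oracle calls, so this naive version uses O(n^3) oracle calls overall.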
{ "domain": "cs.stackexchange", "id": 7167, "tags": "optimization, combinatorics, greedy-algorithms, matroids" }
Do we have roomba_500_series stacks for ros groovy?
Question: roomba_500_series package for groovy. Is this released, or does the existing one for fuerte and electric work for groovy as well? Originally posted by Devasena Inupakutika on ROS Answers with karma: 320 on 2013-03-07 Post score: 1 Answer: It's been tested with ros groovy on roomba 521 and it works fine by following similar steps as for ros electric and ros fuerte. Could anyone please close this question if satisfied with the answer, as it's been tested. Originally posted by Devasena Inupakutika with karma: 320 on 2013-03-08 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 13242, "tags": "ros" }
Why don't we call this irreversible?
Question: I've read that a quasi-static process in which entropy changes only because of heat exchange: $\Delta S=\int \frac {\delta q} T$ is not called irreversible. The name irreversible is reserved for processes in which work is not ideal (for example if there is friction). However, these spontaneous processes are actually irreversible. So why is it not called irreversible? Answer: This is a widespread misunderstanding, and it is based on poor notation. The relationship between heat exchange and entropy growth is $$ \Delta S \ge \int \frac{dq}{T} $$ for a general process. An equality sign is only correct here in the restricted case of reversible processes: $$ \Delta S = \int \frac{dq_{\rm rev}}{T}. $$ If a heat exchange is taking place by a reversible process, then what is happening is that the heat is flowing across an infinitesimal temperature difference. In the limit, that temperature difference will be zero and then the heat flow is truly reversible: the heat can then flow in either direction, and just an infinitesimal change in conditions will reverse the flow. One often feels that heat flow is "spontaneous" and therefore irreversible, but this is no more true of heat flow than it is of mechanical work when a volume changes. If there is a tiny pressure difference between two sides of a frictionless barrier, then the barrier will move "spontaneously", only of course it is not so much spontaneous as dictated by the pressure difference. If that pressure difference is vanishingly small then such a process is thermodynamically reversible. Similar statements apply to heat flow when it is taking place slowly and without restriction. The word "slowly" here means that the relevant systems move through a sequence of equilibrium states so the process is called quasistatic; the phrase "without restriction" is needed to say that there is no thermally insulating barrier or something like that, which could create a temperature gradient. 
One can say the same thing by saying "without hysteresis". These are various different ways of expressing the same concept.
{ "domain": "physics.stackexchange", "id": 76624, "tags": "thermodynamics, entropy, reversibility" }
Why do we fine-tune language models and not just include the data in the pre-training datasets?
Question: One question about the pre-training & fine-tuning process for language models: why is it better to fine-tune using a small dataset rather than including the fine-tuning dataset in the pre-training dataset? Or am I misunderstanding, and normally the fine-tuning dataset is already included in the pre-training one, and we only change the learning parameters to better fit the data properties? Any paper reference is very welcome! Thank you. Answer: The main reason is that, normally, the tasks on which you pretrain and finetune the model are different, e.g. masked language modeling vs. sequence classification or tagging. This is because unlabeled data is abundant and labeled data is scarce, and pretraining on a language modeling task allows you to use a lot of data in a scarce data setup. This is, for instance, commented on in the ULMfit paper, which was one of the ones that started the fine-tuning LMs wave: While we have shown that ULMFiT can achieve state-of-the-art performance on widely used text classification tasks, we believe that language model fine-tuning will be particularly useful in the following settings compared to existing transfer learning approaches (Conneau et al., 2017; McCann et al., 2017; Peters et al., 2018): a) NLP for non-English languages, where training data for supervised pretraining tasks is scarce; b) new NLP tasks where no state-of-the-art architecture exists; and c) tasks with limited amounts of labeled data (and some amounts of unlabeled data). Also, we apply transfer learning (first pretrain, then finetune) and not multitask learning (train both losses together), because what we want is to have good performance in the downstream task, and we don't care so much about the pretraining task apart from being a means to improve our results in the downstream one.
{ "domain": "datascience.stackexchange", "id": 9887, "tags": "machine-learning, nlp" }
Proof of $Dq-qD=1$ where $D=\frac{\partial }{\partial q}$ is the differential operator
Question: Can anyone provide the proof of $Dq-qD=1$ where $D=\frac{\partial }{\partial q}$ refers to the differential operator? Or if it's something special to quantum mechanics, why is it? Does this follow from $[\hat{q},\hat{p}] =i\hbar ~{\bf 1}$? Answer: A hint: $$ Dxf(x)= f(x)+xDf(x) $$ $$ xDf(x) $$ Take the difference and you get $f(x)$, i.e. $1\cdot f(x)$.
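The hint can be made concrete symbolically: apply $(Dq - qD)$ to an arbitrary test function $f(q)$ and check that what survives is $f(q)$ itself, i.e. the commutator acts as the identity operator. A quick check with sympy:

```python
import sympy as sp

q = sp.symbols('q')
f = sp.Function('f')(q)

Dq_f = sp.diff(q * f, q)   # D(q f) = f + q f'  by the product rule
qD_f = q * sp.diff(f, q)   # q D f = q f'

print(sp.simplify(Dq_f - qD_f))  # f(q)
```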
{ "domain": "physics.stackexchange", "id": 5479, "tags": "quantum-mechanics, homework-and-exercises, operators" }
Succinct Problems in $\mathsf{P}$
Question: The study of Succinct representation of graphs was initiated by Galperin and Wigderson in a paper from 1983, where they prove that for many simple problems like finding a triangle in a graph, the corresponding succinct version is $\mathsf{NP}$-complete. Papadimitriou and Yannakakis furthered this line of research, and proved that for a problem $\Pi$ which is $\mathsf{NP}$-complete/$\mathsf{P}$-complete, the corresponding Succinct version, namely Succinct $\Pi$, is, respectively, $\mathsf{NEXP}$-complete and $\mathsf{EXP}$-complete. (They also show that if $\Pi$ is $\mathsf{NL}$-complete, then Succinct $\Pi$ is $\mathsf{PSPACE}$-complete.) Now my question is, are there any problems $\Pi$ known for which the corresponding Succinct version is in $\mathsf{P}$? I'd be interested in knowing about any other related results (both positive and impossibility results, if any) which I might have missed above. (I couldn't locate anything of interest by a google search, since search words like succinct, representation, problems, graphs lead to almost any complexity result! :)) Answer: Here is an interesting problem whose succinct version has interesting properties. Define Circuit-Size-$2^{n/2}$ to be the problem: given a Boolean function as a $2^n$-bit string, does this function have a circuit of size at most $2^{n/2}$? Note this problem is in $NP$. One way to define Succinct-Circuit-Size-$2^{n/2}$ would be: for a constant $k$, given an $n$-input, $n^k$-size circuit $C$, we want to know if its truth table is an instance of Circuit-Size-$2^{n/2}$. But this is a trivial problem: all inputs which are actual circuits are yes-instances. So this problem is in $P$. A more general way to define Succinct-Circuit-Size-$2^{n/2}$ would be: we are given an arbitrary circuit $C$ and want to know if its truth table encodes an instance of Circuit-Size-$2^{n/2}$. 
But if $n$ is the number of inputs to $C$, $m$ is the size of $C$, and $m \leq 2^{n/2}$, we can automatically accept: the input itself is a witness for the language. Otherwise, we have $m \geq 2^{n/2}$. In that case, the input length $m$ is already huge, so we can try all possible $2^n$ assignments in $m^{O(1)}$ time, obtain the truth table of the function, and now we are back to the original $NP$ problem again. So this is a problem in $NP$ whose succinct version is also in $NP$. This problem is believed to be not $NP$-hard; see the paper by Kabanets and Cai (http://www.cs.sfu.ca/~kabanets/Research/circuit.html)
{ "domain": "cstheory.stackexchange", "id": 1405, "tags": "cc.complexity-theory, reference-request, complexity-classes" }
How to compute the energy of two signals?
Question: Say I have two signals, one being $g_1(t)$ and the other one being $g_2(t)$, and say that I am being asked to compute the energy of $g_3(t)$, where $$g_3(t) = g_1(t) + g_2(t)$$ Is it correct to compute the energies of $g_1(t)$ and $g_2(t)$ separately and simply add the two values to find the energy of $g_3(t)$, or is it correct to compute it as one value? I.e. $$\int_ {-T}^{T} \big[g_1(t)+g_2(t)\big]^2 \,dt $$ Answer: You simply have to apply the definition of energy. Assuming all signals involved are real, the energy of $g_3(t)$ is given by \begin{align} E &= \int_{-\infty}^\infty g_3^2(t) \, dt \\ &= \int_{-\infty}^\infty \big(g_1(t) + g_2(t)\big)^2 \, dt \\ &= \int_{-\infty}^\infty g_1^2(t) \, dt + 2\int_{-\infty}^\infty g_1(t)g_2(t) \, dt + \int_{-\infty}^\infty g_2^2(t) \, dt. \end{align} One interesting case is where $g_1(t)$ and $g_2(t)$ are orthogonal, in which case the middle integral is zero. In that case, the energy of $g_3(t)$ is the energy of $g_1(t)$ plus the energy of $g_2(t)$. One simple example is two rectangular pulses that do not overlap.
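The non-overlapping-pulses example from the answer is easy to check numerically: the cross term integral of $g_1 g_2$ is zero, so $E_3 = E_1 + E_2$. The pulse positions and amplitudes below are arbitrary choices for illustration:

```python
import numpy as np

t = np.linspace(-5, 5, 100001)
dt = t[1] - t[0]

g1 = np.where((t > -3) & (t < -1), 2.0, 0.0)   # pulse on (-3, -1)
g2 = np.where((t > 1) & (t < 3), 1.5, 0.0)     # pulse on (1, 3); no overlap
g3 = g1 + g2

E1 = np.sum(g1**2) * dt
E2 = np.sum(g2**2) * dt
E3 = np.sum(g3**2) * dt
cross = np.sum(g1 * g2) * dt

print(cross)        # 0.0 -- the pulses are orthogonal (they never overlap)
print(E1 + E2, E3)  # both approximately 8.0 + 4.5 = 12.5
```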
{ "domain": "dsp.stackexchange", "id": 6095, "tags": "signal-energy" }
Common eigenkets of spherically symmetric Hamiltonian
Question: In a QM text it states: "Consider a spinless particle subjected to a spherically symmetrical potential. The wave equation is known to be separable in spherical coordinates, and the energy eigenfunctions can be written as $$\langle \mathbf{x'}| n,l,m \rangle = R_{nl}(r)Y_{l}^{m}(\theta, \phi)$$ where the position vector $\mathbf{x}'$ is specified by the spherical coordinates $r, \theta$ and $\phi$." Can we consider that we obtain this by the following steps: $$|n,l,m \rangle = |n,l \rangle \otimes |l,m \rangle$$ hence $$\langle x'|n, l, m \rangle = (\langle \vec{r}| \otimes \langle \theta, \phi|)(|n, l \rangle \otimes |l, m \rangle) = (\langle \vec{r}| n, l \rangle)\otimes(\langle \theta, \phi | l, m \rangle) = R_{nl}(r)Y_{l}^{m}(\theta, \phi)$$ Is what I wrote nonsense or is there something there? The question was motivated by the fact that $\langle \theta, \phi|l,m\rangle = Y_{l}^{m}(\theta, \phi)$... hence what we have left is $R_{nl}(r)$. Thanks. Answer: Yeah, that's a reasonable way to understand that structure. The reason you don't see it that often is because it's not very useful, but it's a valid analysis.
{ "domain": "physics.stackexchange", "id": 40414, "tags": "quantum-mechanics, angular-momentum, hamiltonian, observables" }
Israel first junction condition
Question: I am studying Eric Poisson's book A Relativist's Toolkit: The Mathematics of Black-Hole Mechanics and it states that the metric, $g_{\alpha \beta}$, can be expressed as a distribution-valued tensor: $$g_{\mu\nu} = \Theta(\ell) g^{+}_{\mu\nu}+\Theta(-\ell) g^{-}_{\mu\nu}\tag{1} \, .$$ According to the book, differentiating this equation, one should get the following: $$g_{\mu\nu , \gamma} = \Theta(\ell) g^{+}_{\mu\nu , \gamma}+\Theta(-\ell) g^{-}_{\mu\nu, \gamma} + \epsilon \delta(\ell)\left[ g_{\mu \nu} \right] n_{\gamma}\tag{2} \, .$$ However, I can't reach the last term. The book also says that from $(1)$ to $(2)$, the following identity is used: $$n_{\alpha} = \epsilon \partial_{\alpha}\ell \, .$$ Nonetheless, I can't calculate the last term of Eq. $(2)$. Can you help me, please? $\ell$ is the proper distance (or time) along the geodesics. $$ n^{\alpha} n_{\alpha} = \epsilon = \begin{cases} 1, \text{if the hypersurface is timelike}\\ -1, \text{if the hypersurface is spacelike}\\ \end{cases} $$ Answer: I found the "trick". So, after differentiating, we have the following expression: $$g_{\mu\nu , \gamma} = \Theta(\ell) g^{+}_{\mu\nu , \gamma}+\Theta(-\ell) g^{-}_{\mu\nu, \gamma} + \left(\partial_{\gamma}\Theta(\ell)\right) g^{+}_{\mu\nu} + \left(\partial_{\gamma}\Theta(-\ell)\right) g^{-}_{\mu\nu }\, .$$ We know the identity $\frac{d\Theta(\ell)}{d\ell} = \delta(\ell)$, so the idea is to manipulate $\partial_{\gamma}\Theta(\ell)$ into that form. $$\partial_{\gamma}\Theta(\ell) = \frac{\partial\Theta(\ell)}{\partial x^{\gamma}} =\frac{\partial \ell}{\partial x^{\gamma}}\frac{\partial \Theta(\ell)}{\partial \ell} =\frac{\partial \ell}{\partial x^{\gamma}}\frac{d \Theta(\ell)}{d \ell} = \frac{\partial \ell}{\partial x^{\gamma}}\delta(\ell) = \epsilon n_{\gamma} \delta(\ell) \, ,$$ where we have used the following trick. 
Consider the next identity: $$n_{\gamma} = \epsilon \partial_{\gamma}\ell \iff \epsilon n_{\gamma} = \epsilon^{2} \partial_{\gamma}\ell \, ,$$ since $\epsilon = 1 \vee -1 \rightarrow \epsilon^{2} = 1$, thus we may write: $$ \epsilon n_{\gamma} = \epsilon^{2} \partial_{\gamma}\ell \iff \frac{\partial \ell}{\partial x^{\gamma}}= \epsilon n_{\gamma}$$ Finally, we obtain: $$ g_{\mu\nu , \gamma} = \Theta(\ell) g^{+}_{\mu\nu , \gamma}+\Theta(-\ell) g^{-}_{\mu\nu, \gamma} + \delta(\ell)n_{\gamma} \left(g_{\mu \nu}^{+} - g_{\mu \nu}^{-} \right) \\ \iff g_{\mu\nu , \gamma} = \Theta(\ell) g^{+}_{\mu\nu , \gamma}+\Theta(-\ell) g^{-}_{\mu\nu, \gamma} + \epsilon \delta(\ell) \left[g_{\mu \nu} \right]n_{\gamma} \, ,$$ as we wanted.
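The two identities doing the work here, $\frac{d\Theta(\ell)}{d\ell}=\delta(\ell)$ and the chain-rule step $\partial_\gamma\Theta(\ell)=\delta(\ell)\,\partial_\gamma\ell$, can be checked symbolically (with a generic function standing in for $\ell(x^\gamma)$):

```python
import sympy as sp

ell, x = sp.symbols('ell x')
f = sp.Function('f')(x)  # stand-in for ell as a function of a coordinate

print(sp.diff(sp.Heaviside(ell), ell))  # DiracDelta(ell)
print(sp.diff(sp.Heaviside(f), x))      # DiracDelta(f(x))*Derivative(f(x), x)
```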
{ "domain": "physics.stackexchange", "id": 93472, "tags": "general-relativity, metric-tensor, boundary-conditions, dirac-delta-distributions" }
Biot-Savart law and surface charges on moving plates
Question: Question: A large parallel plate capacitor with uniform surface charge $\sigma$ on the upper plate and $-\sigma$ on the lower plate is lower with a constant speed V as in the figure. Use Ampere's law with the appropriate Amperian loop to find the magnetic field between the plates and also above and below them. By Ampere's law: $\vec{\bigtriangledown }\times \vec{B}=\mu_{0}\vec{J}+\mu_{0}\epsilon\frac{\partial \vec{E}}{\partial t}$ The current sheet due to the moving surface charge plate is $\vec{K}=\left \langle 0,\pm\sigma,0 \right \rangle$. Since the charges are constant across the surface, it can be expected that the current is steady/ constant so this is a case of magnetostatic. Additionally, the fact that the current is on the x-y plane suggests there is no x and y dependence. By the Biot-Savart law for surface current: $\vec{B}\left ( \vec{r} \right )=\frac{\mu_{0}}{4 \pi}\int \frac{\vec{K}\left ( \vec{r'} \right )\times \hat{\eta}}{\left \| \vec{\eta} \right \|^{2}}da'$ where $\vec{\eta}=\vec{r}-\vec{r'}$ where $\vec{r}$ is the vector distance from the origin to the field point and $\vec{r'}$ is the vector distance from origin to the source charge. We expect the magnetic field not to be in the $\hat{y}$ direction due to the cross product in the integrand. However, I do understand why the magnetic field does not have a $\hat{z}$ direction. In fact, I am unfamiliar with the right hand rule for 'surface' current. Edit:I figured that if I rotate the plate 180 degrees about the z-axis in the CCW direction, the direction of the surface current changes-opposite to the direction of the surface current before the rotation-but the magnetic field continues to point in the positive z-axis which is a contradiction. Would someone kindly clear my doubts? Thanks in advance. Answer: I wasn't sure how to proceed with this question, since the figure makes it seem like a finite capacitor and does not provide the dimensions. 
I will attack the problem with the assumption that the capacitor is infinite in the x-y plane. Now, there is one way to figure out the direction of the net B-field: logical reasoning as Griffiths does (Introduction to Electrodynamics, 3rd Edition, p.226, example 5.8) for the case of one plate with surface current $K$. Note that the current is in the +x direction in this example. First of all, what is the direction of B? Could it have any x-component? No: A glance at the Biot-Savart law (5.39) reveals that B is perpendicular to K. Could it have a z-component? No again. You could confirm this by noting that any vertical contribution from a filament at $+y$ is canceled by the corresponding filament at $-y$. But there is a nicer argument: Suppose the field pointed away from the plane. By reversing the direction of the current, I could make it point toward the plane (in the Biot-Savart law, changing the sign of the current switches the sign of the field). But the z-component of B cannot possibly depend on the direction of the current in the xy plane. (Think about it!) So B can only have a y-component, and a quick check with your right hand should convince you that it points to the left above the plane and to the right below it. Griffiths then proceeds to make an Amperian loop and finds the B-field. So, in your case, where the current is in the +y direction, you are correct that there cannot be a z component. There will only be an x component. Now for the real fun. Since you put that Biot-Savart law for surface current in your question, I thought I might as well use it and go ahead and show that the net B-field is in the x-direction the integration way. 
So here goes. We know $\vec{B}\left ( \vec{r} \right )=\frac{\mu_{0}}{4 \pi}\int \frac{\vec{K}\left ( \vec{r'} \right )\times \vec{\eta}}{\left \| \vec{\eta} \right \|^{3}}da'$ where $\vec{\eta}=\vec{r}-\vec{r'}$, $\vec{r}$ is the vector from the origin to the field point and $\vec{r'}$ is the vector from the origin to the source point. Here $\vec{K}=\sigma v\,\hat{y}$. Take any point $(x_1,y_1,z_1)$ at which we want the net B-field, and consider the contribution from the surface current at the general source point $(x,y,0)$: $\vec{\eta}=(x_1-x)\hat{x}+(y_1-y)\hat{y}+z_1\hat{z}$ ${\vec{K} ( \vec{r'} )\times \vec{\eta}}=\sigma v\,\hat{y} \times \big((x_1-x)\hat{x}+(y_1-y)\hat{y}+z_1\hat{z}\big) =\sigma v\big(z_1\hat{x}-(x_1-x)\hat{z}\big)$ Now for the integral, which I take over the entire xy plane: $\vec{B}\left ( \vec{r} \right )=\frac{\mu_{0}\sigma v}{4 \pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{z_1\hat{x}-(x_1-x)\hat{z}}{\big( {(x_1-x)^{2}+(y_1-y)^{2}+{z_1}^{2}} \big)^{3/2}}\,dx\,dy$ Separating this into two integrals, one for the x-component and one for the z-component: the latter vanishes by symmetry (its integrand is odd in $x_1-x$), and the former gives $\frac{\mu_{0}\sigma v}{4 \pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{z_1\,\hat{x}}{\big( {(x_1-x)^{2}+(y_1-y)^{2}+{z_1}^{2}} \big)^{3/2}}\,dx\,dy =\frac{\mu_{0}\sigma v}{4 \pi}\cdot\frac{2 \pi z_1}{|z_1|}\,\hat{x}$ So you get: $\vec{B}=\frac{\mu_0\sigma v}{2}\hat{x}$ for $z>0$ (above the positive plate), and $\vec{B}=-\frac{\mu_0\sigma v}{2}\hat{x}$ for $z<0$ (below the positive plate). Extending these results to both plates (adding the contributions from both the $\sigma$ and the $-\sigma$ plate), you will find that $\vec {B}=-\mu_0\sigma v\,\hat{x}$ between the plates and $\vec {B}=0$ above and below. I hope I addressed all your doubts!
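As a numerical sanity check on the last step (my own addition, not part of the original answer), the x-component integrand $z_1/\big((x_1-x)^2+(y_1-y)^2+z_1^2\big)^{3/2}$ should integrate to $2\pi z_1/|z_1|$ over the infinite plane. A crude midpoint Riemann sum over a large but finite square plate (half-width L chosen arbitrarily) gets close to $2\pi$, with a truncation deficit of roughly $2\pi z/L$:

```python
import math

def sheet_integral(L, z, n):
    """Midpoint Riemann sum of z / (x^2 + y^2 + z^2)**1.5 over the square [-L, L]^2.

    Over the infinite plane the exact value is 2*pi*sign(z); truncating at
    half-width L leaves out roughly 2*pi*z/L of it.
    """
    h = 2.0 * L / n
    total = 0.0
    for i in range(n):
        x = -L + (i + 0.5) * h
        for j in range(n):
            y = -L + (j + 0.5) * h
            total += z / (x * x + y * y + z * z) ** 1.5
    return total * h * h

approx = sheet_integral(L=50.0, z=1.0, n=600)
print(approx, 2 * math.pi)   # ~6.17 vs 6.283... -- the gap shrinks as L grows
```

Doubling L halves the gap to $2\pi$, consistent with the infinite-sheet result $B=\mu_0\sigma v/2$ on each side of the plate.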
{ "domain": "physics.stackexchange", "id": 33070, "tags": "electromagnetism, electrostatics, magnetic-fields, classical-electrodynamics" }
Carbon tetraradical
Question: I was just reading through Reactions: The private life of atoms by Peter Atkins and I noticed that in Chapter 3, the chapter on the combustion reaction, the author writes: "As we watch we see $\ce {CH_4}$ being whittled down to naked $\ce {C}$ as its $\ce{H}$ atoms are stripped away by radical attack." "Naked $\ce {C}$" here seems to refer to a carbon atom with four unpaired electrons. This seems rather surprising to me, as I was thinking that such a reactive and unstable species could not possibly form. Thus, I would like to ask if a "naked $\ce {C}$" can possibly exist, though it may have a really short lifetime. Additionally, I would like to ask if this "naked $\ce {C}$" is considered a tetraradical or a biradical, because in the ground state of carbon there are only two unpaired electrons. Answer: In its ground state, naked carbon is the triplet $^3P$, with two metastable singlet states $^1D$ and $^1S$ ($^1D$ being the one that participates in most reactions), while the tetraradical is the least stable state. Atomic carbon can be synthesized by several different methods. Graphite vaporization by an electric arc or laser can be employed. Photolysis of suitable precursors can also produce atomic carbon: carefully designed diazo compounds could be good precursors; e.g., a suitable diazo compound produces a carbene on heating, which immediately collapses to form benzene and atomic carbon. The problem with this approach is that the products can react further. Another approach is to use diazotetrazole, but the drawback here is that the diazonium chloride obtained after the first step is extremely explosive. I would also like to comment about the reactivity of atomic carbon since it is rather interesting.
This reactive species is known to insert into C-H bonds, and it can also abstract carbonyl oxygens to form carbenes. If it is reacted with water, carbohydrates form. When reacted with alkenes, cumulenes are usually obtained. It inserts into C-Br and C-Cl bonds, but abstracts fluorine from compounds containing C-F bonds. More about this rather interesting chemistry can be read in "Reactive Intermediate Chemistry" by Robert A. Moss, Matthew S. Platz and Maitland Jones, Jr.
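The term symbols quoted above ($^3P$, $^1D$, $^1S$) can be recovered by brute force from the 15 Pauli-allowed microstates of the $2p^2$ configuration. The following short script (an illustration I added, not from the original answer) uses the standard procedure of repeatedly peeling off the block with the largest $M_L$ and, within it, the largest $M_S$:

```python
from itertools import combinations
from collections import Counter

# Enumerate the Pauli-allowed microstates of p^2:
# ml in {-1, 0, +1}; tms is twice m_s, i.e. +1 or -1 for m_s = +/- 1/2.
spin_orbitals = [(ml, tms) for ml in (-1, 0, 1) for tms in (+1, -1)]
micro = Counter()
for a, b in combinations(spin_orbitals, 2):
    micro[(a[0] + b[0], a[1] + b[1])] += 1   # key is (M_L, 2*M_S)

L_letter = {0: "S", 1: "P", 2: "D"}
terms = []
while sum(micro.values()) > 0:
    L = max(ML for (ML, _), n in micro.items() if n > 0)
    tS = max(tMS for (ML, tMS), n in micro.items() if n > 0 and ML == L)
    terms.append(f"{tS + 1}{L_letter[L]}")   # multiplicity 2S+1, then the L letter
    for ML in range(-L, L + 1):              # remove one (2L+1)(2S+1) block
        for tMS in range(-tS, tS + 1, 2):
            micro[(ML, tMS)] -= 1

print(terms)   # ['1D', '3P', '1S'] -- 5 + 9 + 1 = 15 microstates accounted for
```

Hund's rules then pick the maximum-multiplicity term, $^3P$, as the ground state, in agreement with the answer.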
{ "domain": "chemistry.stackexchange", "id": 9492, "tags": "inorganic-chemistry, atoms, combustion, radicals" }
Citation showing minors are topological minors for subcubic graphs
Question: If $G$ is a graph with maximum degree 3 and is a minor of $H$, then $G$ is a topological minor of $H$. Wikipedia cites this result from Diestel's "Graph Theory". It's listed as Prop 1.7.4 in the latest version of the book. The book lacks proof or citation. Are the whereabouts known for an (original) proof of this? Furthermore, is there a reference proving that if $G$ is a path or a subdivision of a claw and is a minor of $H$ then $G$ is a subgraph of $H$? It's mentioned here briefly but lacks reference. Answer: If $G$ is a graph with maximum degree 3 and is a minor of $H$, then $G$ is a topological minor of $H$. Since $G$ is a minor of $H$, $G$ can be obtained from $H$ by deleting edges, isolated vertices and performing edge contractions. It's also easy to show that we can insist that the subgraph operations are done first, i.e., we can first perform all the edge and vertex deletions and then perform all the edge contractions. Moreover, let us restrict the definition of "edge contraction" to disallow contracting edges where one of the vertices has degree 1. Contracting such an edge is the same as deleting it, so this will not change the definition of graph minors. Let $H'$ be the graph obtained from $H$ by performing all the edge/vertex deletions first. $H'$ still contains $G$ as a minor. If we show that $H'$ contains $G$ as a topological minor then we're done, since the definition of topological minor also allows edge/vertex deletions. Since $G$ can be obtained from $H'$ by edge contraction only, $H'$ and all intermediate graphs must have maximum degree 3 since there is no way to decrease the maximum degree of a graph by performing an edge contraction. (This would have been possible if we had allowed the contraction of edges incident on a vertex of degree 1.) So consider any step in the conversion of $H'$ to $G$. The only types of edges we can contract are those with both degree-2 vertices or one degree-2 vertex and one degree-3 vertex. 
(All other combinations don't work. For example, edges with two degree-3 vertices will give rise to a vertex of degree 4 when contracted.) And now we're done, since if $H_1$ is obtained from $H_2$ by contracting an edge with two degree-2 vertices, then $H_2$ can be obtained from $H_1$ by performing edge subdivision on that edge. Similarly for an edge with one degree-3 vertex and one degree-2 vertex. Thus $H'$ can be obtained from $G$ by performing edge subdivisions only, which means $G$ is a topological minor of $H'$ and thus $H$. If $G$ is a path or a subdivision of a claw and is a minor of $H$ then $G$ is a subgraph of $H$ This is easy to show once we have the previous result. Since paths and subdivisions of claws have maximum degree 3, if $G$ is a minor of $H$ it is also a topological minor of $H$. This means there is a subgraph of $H$ that can be obtained from $G$ by only performing edge subdivisions. Now it's easy to show by induction that every edge subdivision of a path or subdivision of a claw leads to a graph which contains the original as a subgraph. For example, subdividing a path of length k leads to a path of length k+1, which contains the path of length k as a subgraph. Similarly for subdivisions of a claw. We also needed this result for a paper once, so we included a short proof in our paper. You can find the result in Quantum query complexity of minor-closed graph properties. It's mentioned on page 13. However, this fact is hidden away in the proof of something else and isn't stated explicitly as a theorem. What's also interesting is that there is a converse to this theorem: The only graphs $G$ for which containing $G$ as a minor is equivalent to containing $G$ as a subgraph are those in which each connected component is a path or a subdivision of a claw.
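The degree bookkeeping at the heart of the proof is easy to check mechanically. Below is a small sketch (my own, using an adjacency-set representation; not from the original answer) showing that contracting an edge between two degree-3 vertices with disjoint neighbourhoods creates a degree-4 vertex, while contracting an edge between two degree-2 vertices is exactly the inverse of an edge subdivision:

```python
def contract(adj, u, v):
    """Contract edge {u, v} in a simple graph given as {vertex: set_of_neighbors}.

    v is merged into u; parallel edges and self-loops that would arise are
    discarded, matching the usual simple-graph definition of a minor.
    """
    new = {w: set(ns) for w, ns in adj.items() if w != v}
    new[u] = (adj[u] | adj[v]) - {u, v}
    for w in new:
        if w != u and v in new[w]:
            new[w].discard(v)
            new[w].add(u)
    return new

# Two stars glued by an edge between their degree-3 centers (disjoint neighbourhoods).
H = {"u": {"a", "b", "v"}, "v": {"c", "d", "u"},
     "a": {"u"}, "b": {"u"}, "c": {"v"}, "d": {"v"}}
G = contract(H, "u", "v")
print(sorted(G["u"]))   # ['a', 'b', 'c', 'd'] -- a degree-4 vertex appeared

# Contracting a 2-2 edge undoes a subdivision: a 4-vertex path collapses to a 3-vertex path.
P4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(contract(P4, 2, 3))   # {1: {2}, 2: {1, 4}, 4: {2}}
```

This is only an illustration of the degree argument, not a full minor-testing routine.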
{ "domain": "cstheory.stackexchange", "id": 971, "tags": "reference-request, graph-theory, co.combinatorics, graph-minor" }
What's the connection between complex charge, complex dipole, and complex electric field?
Question: In a standard EM course, the complex EM field is emphasized and studied. However, the complex dipole moment, such as the time-harmonic dipole moment $\vec{p} \exp{(i\omega t)}$, receives less attention, since the complex EM field is usually introduced via vacuum waves after the study of electric/magnetic multipoles. Some related posts could be found: What is the complex dipole moment? and Complex charge of a Hertzian Dipole. What's the connection between complex charge, complex dipole, and complex electric field? In particular, is there any connection of the complex dipole and complex charge with Jones calculus, i.e. polarization? Answer: In principle, one could use the complex notation to indicate an orientation of a dipole or the polarisation of an electric field. However, this is not what your question is implying. You specifically asked about the time-varying complex notation $e^{i\omega t}$, so this is what I am going to address. In this context the complex notation is merely a mathematical convenience. It helps us to perform calculations without invoking trigonometric relations. However, note that this mathematical convenience has its limitations: it is only allowed if we perform linear operations, like multiplying by a scalar or adding several fields together. While the complex notation is handy for doing the math, the physical quantities are always real numbers. Hence, we are actually only interested in either their real or their imaginary part -- which one we choose is arbitrary.
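The "linear operations only" caveat can be illustrated numerically (a sketch I added, with arbitrary amplitudes and phases): taking the real part commutes with addition and real scaling, but not with multiplication, which is why energy-like quantities must be computed from the real fields rather than by multiplying phasors directly:

```python
import cmath

w, t = 2.0, 0.37                          # arbitrary angular frequency and instant
E1 = 3.0 * cmath.exp(1j * w * t)          # phasor field 1
E2 = 2.0 * cmath.exp(1j * (w * t + 0.8))  # phasor field 2, shifted by 0.8 rad

# Linear operations commute with taking the real (physical) part:
assert abs((E1 + E2).real - (E1.real + E2.real)) < 1e-12
assert abs((5.0 * E1).real - 5.0 * E1.real) < 1e-12

# A nonlinear operation does not: Re(E1*E2) != Re(E1)*Re(E2) in general,
# so products of fields must be formed from the real parts
# (or via the time-averaged phasor rules with their factor of 1/2).
print((E1 * E2).real, E1.real * E2.real)
```

For these particular numbers the two printed values differ substantially, confirming the caveat.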
{ "domain": "physics.stackexchange", "id": 66597, "tags": "electromagnetism, polarization, complex-numbers, dipole, dipole-moment" }
Isn't the data insufficient in this problem?
Question: Take a look at this problem: a 1000 kg roller coaster car is towed at a constant speed up a 40 meter hill; what is the work done by the tow rope? Don't we need the slope of the hill and the static friction coefficient to solve this problem? Answer: Apparently, it is assumed that rolling friction can be neglected. In this case, the answer does not depend on the slope of the hill (the kinetic energy does not change, and the change in potential energy depends only on the difference in altitude).
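Under that assumption the numbers can be plugged in directly: at constant speed the rope's work equals the gain in potential energy, $W = mgh$, independent of the slope. (Taking the 40 m as the vertical rise and $g = 9.8\ \mathrm{m/s^2}$; both are assumptions, since the problem statement is terse.)

```python
m = 1000.0    # kg, mass of the car
g = 9.8       # m/s^2, assumed value of g
h = 40.0      # m, assumed vertical rise of the hill

# Constant speed + no friction: the rope's work equals the potential-energy gain.
W_rope = m * g * h
print(W_rope)   # ~3.9e5 J, regardless of the slope
```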
{ "domain": "physics.stackexchange", "id": 23403, "tags": "homework-and-exercises, newtonian-mechanics, work" }
fatal error: beginner_tutorials/AddTwoInts.h: No such file or directory , #include "beginner_tutorials/AddTwoInts.h"
Question: /home/mayankfirst/catkin_ws/src/beginner_tutorials/src/add_two_ints_client.cpp:2:43: fatal error: beginner_tutorials/AddTwoInts.h: No such file or directory #include "beginner_tutorials/AddTwoInts.h" ^ compilation terminated. /home/mayankfirst/catkin_ws/src/beginner_tutorials/src/add_two_ints_server.cpp:2:43: fatal error: beginner_tutorials/AddTwoInts.h: No such file or directory #include "beginner_tutorials/AddTwoInts.h" ^ compilation terminated. make[2]: *** [beginner_tutorials/CMakeFiles/add_two_ints_client.dir/src/add_two_ints_client.cpp.o] Error 1 make[2]: Leaving directory `/home/mayankfirst/catkin_ws/build' make[1]: *** [beginner_tutorials/CMakeFiles/add_two_ints_client.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... make[2]: *** [beginner_tutorials/CMakeFiles/add_two_ints_server.dir/src/add_two_ints_server.cpp.o] Error 1 make[2]: Leaving directory `/home/mayankfirst/catkin_ws/build' make[1]: *** [beginner_tutorials/CMakeFiles/add_two_ints_server.dir/all] Error 2 make[1]: Leaving directory `/home/mayankfirst/catkin_ws/build' make: *** [all] Error 2 Invoking "make -j4 -l4" failed ############################################################################################# CMAKE list cmake_minimum_required(VERSION 2.8.3) project(beginner_tutorials) ## Find catkin macros and libraries ## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz) ## is used, also find other catkin packages find_package(catkin REQUIRED COMPONENTS roscpp rospy std_msgs genmsg ) ## System dependencies are found with CMake's conventions # find_package(Boost REQUIRED COMPONENTS system) ## Uncomment this if the package has a setup.py. 
This macro ensures ## modules and global scripts declared therein get installed ## See http://ros.org/doc/api/catkin/html/user_guide/setup_dot_py.html # catkin_python_setup() ################################################ ## Declare ROS messages, services and actions ## ################################################ ## To declare and build messages, services or actions from within this ## package, follow these steps: ## * Let MSG_DEP_SET be the set of packages whose message types you use in ## your messages/services/actions (e.g. std_msgs, actionlib_msgs, ...). ## * In the file package.xml: ## * add a build_depend and a run_depend tag for each package in MSG_DEP_SET ## * If MSG_DEP_SET isn't empty the following dependencies might have been ## pulled in transitively but can be declared for certainty nonetheless: ## * add a build_depend tag for "message_generation" ## * add a run_depend tag for "message_runtime" ## * In this file (CMakeLists.txt): ## * add "message_generation" and every package in MSG_DEP_SET to ## find_package(catkin REQUIRED COMPONENTS ...) ## * add "message_runtime" and every package in MSG_DEP_SET to ## catkin_package(CATKIN_DEPENDS ...) ## * uncomment the add_*_files sections below as needed ## and list every .msg/.srv/.action file to be processed ## * uncomment the generate_messages entry below ## * add every package in MSG_DEP_SET to generate_messages(DEPENDENCIES ...) 
## Generate messages in the 'msg' folder # add_message_files( # FILES # Message1.msg # Message2.msg # ) ## Generate services in the 'srv' folder add_message_files(DIRECTORY msg FILES Num.msg) add_service_files(DIRECTORY srv FILES AddTwoInts.srv) ## Generate actions in the 'action' folder # add_action_files( # FILES # Action1.action # Action2.action # ) ## Generate added messages and services with any dependencies listed here generate_messages(DEPENDENCIES std_msgs) ################################### ## catkin specific configuration ## ################################### ## The catkin_package macro generates cmake config files for your package ## Declare things to be passed to dependent projects ## INCLUDE_DIRS: uncomment this if you package contains header files ## LIBRARIES: libraries you create in this project that dependent projects also need ## CATKIN_DEPENDS: catkin_packages dependent projects also need ## DEPENDS: system dependencies of this project that dependent projects also need catkin_package(CATKIN_DEPENDS roscpp rospy std_msgs genmsg) # INCLUDE_DIRS include # LIBRARIES beginner_tutorials # CATKIN_DEPENDS roscpp rospy std_msgs # DEPENDS system_lib #) ########### ## Build ## ########### ## Specify additional locations of header files ## Your package locations should be listed before other locations # include_directories(include) #include_directories( # ${catkin_INCLUDE_DIRS} #) ## Declare a cpp library # add_library(beginner_tutorials # src/${PROJECT_NAME}/beginner_tutorials.cpp # ) ## Declare a cpp executable # add_executable(beginner_tutorials_node src/beginner_tutorials_node.cpp) ## Add cmake target dependencies of the executable/library ## as an example, message headers may need to be generated before nodes # add_dependencies(beginner_tutorials_node beginner_tutorials_generate_messages_cpp) ## Specify libraries to link a library or executable target against # target_link_libraries(beginner_tutorials_node # ${catkin_LIBRARIES} # ) ############# ## 
Install ## ############# # all install targets should use catkin DESTINATION variables # See http://ros.org/doc/api/catkin/html/adv_user_guide/variables.html ## Mark executable scripts (Python etc.) for installation ## in contrast to setup.py, you can choose the destination # install(PROGRAMS # scripts/my_python_script # DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION} # ) ## Mark executables and/or libraries for installation # install(TARGETS beginner_tutorials beginner_tutorials_node # ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION} # LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION} # RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION} # ) ## Mark cpp header files for installation # install(DIRECTORY include/${PROJECT_NAME}/ # DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION} # FILES_MATCHING PATTERN "*.h" # PATTERN ".svn" EXCLUDE # ) ## Mark other files for installation (e.g. launch and bag files, etc.) # install(FILES # # myfile1 # # myfile2 # DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION} # ) ############# ## Testing ## ############# ## Add gtest based cpp test target and link libraries # catkin_add_gtest(${PROJECT_NAME}-test test/test_beginner_tutorials.cpp) # if(TARGET ${PROJECT_NAME}-test) # target_link_libraries(${PROJECT_NAME}-test ${PROJECT_NAME}) # endif() ## Add folders to be run by python nosetests # catkin_add_nosetests(test) include_directories(include ${catkin_INCLUDE_DIRS}) add_executable(talker src/talker.cpp) target_link_libraries(talker ${catkin_LIBRARIES}) add_dependencies(talker beginner_tutorials_generate_messages_cpp) add_executable(listener src/listener.cpp) target_link_libraries(listener ${catkin_LIBRARIES}) add_dependencies(listener beginner_tutorials_generate_messages_cpp) add_executable(add_two_ints_server src/add_two_ints_server.cpp) target_link_libraries(add_two_ints_server ${catkin_LIBRARIES}) add_dependencies(add_two_ints_server beginner_tutorials_gencpp) add_executable(add_two_ints_client src/add_two_ints_client.cpp) 
target_link_libraries(add_two_ints_client ${catkin_LIBRARIES}) add_dependencies(add_two_ints_client beginner_tutorials_gencpp) After running rossrv show beginner_tutorials/AddTwoInts I get the following mayankfirst@mayankfirst-Aspire-V5-571:~/catkin_ws/beginner_tutorials$ rossrv show beginner_tutorials/AddTwoInts Unknown srv type [beginner_tutorials/AddTwoInts]: Cannot locate message [AddTwoInts]: unknown package [beginner_tutorials] on search path [{'rosconsole': ['/opt/ros/indigo/share/rosconsole/srv'], 'hector_quadrotor_pose_estimation': ['/opt/ros/indigo/share/hector_quadrotor_pose_estimation/srv'], 'catkin': ['/opt/ros/indigo/share/catkin/srv'], 'qt_dotgraph': ['/opt/ros/indigo/share/qt_dotgraph/srv'], 'image_view': ['/opt/ros/indigo/share/image_view/srv'], 'hector_gazebo_worlds': ['/opt/ros/indigo/share/hector_gazebo_worlds/srv'], 'urdf': ['/opt/ros/indigo/share/urdf/srv'], 'rosgraph': ['/opt/ros/indigo/share/rosgraph/srv'], 'rqt_py_console': ['/opt/ros/indigo/share/rqt_py_console/srv'], 'nodelet_topic_tools': ['/opt/ros/indigo/share/nodelet_topic_tools/srv'], 'rqt_graph': ['/opt/ros/indigo/share/rqt_graph/srv'], 'nodelet_tutorial_math': ['/opt/ros/indigo/share/nodelet_tutorial_math/srv'], 'qt_gui': ['/opt/ros/indigo/share/qt_gui/srv'], 'filters': ['/opt/ros/indigo/share/filters/srv'], 'hector_quadrotor_controller_gazebo': ['/opt/ros/indigo/share/hector_quadrotor_controller_gazebo/srv'], 'hector_map_tools': ['/opt/ros/indigo/share/hector_map_tools/srv'], 'controller_manager_tests': ['/opt/ros/indigo/share/controller_manager_tests/srv'], 'pointcloud_to_laserscan': ['/opt/ros/indigo/share/pointcloud_to_laserscan/srv'], 'smclib': ['/opt/ros/indigo/share/smclib/srv'], 'roslib': ['/opt/ros/indigo/share/roslib/srv'], 'roscpp_serialization': ['/opt/ros/indigo/share/roscpp_serialization/srv'], 'diagnostic_msgs': ['/opt/ros/indigo/share/diagnostic_msgs/srv'], 'rosbuild': ['/opt/ros/indigo/share/rosbuild/srv'], 'rosclean': ['/opt/ros/indigo/share/rosclean/srv'], 
[... long list of installed-package srv search paths trimmed ...] 'pr2_mechanism_model': ['/opt/ros/indigo/share/pr2_mechanism_model/srv']}] Originally posted by mayankfirst on ROS Answers with karma: 31 on 2015-07-12 Post score: 2 Original comments Comment by BennyRe on 2015-07-13: Please post your CMakeLists.txt Comment by dornhege on 2015-07-13: ...and the workspace layout. Answer: It looks like the headers for your service files are not being generated before the programs that use them are compiled. You can declare this build dependency using the add_dependencies() cmake command.
Try adding the following to your CMakeLists.txt: add_dependencies(add_two_ints_server ${beginner_tutorials_EXPORTED_TARGETS}) add_dependencies(add_two_ints_client ${beginner_tutorials_EXPORTED_TARGETS}) Originally posted by ahendrix with karma: 47576 on 2015-07-15 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by mayankfirst on 2015-07-15: But even after adding the above dependencies it gives the same error Comment by mayankfirst on 2015-07-15: There may be some problem in SRV when I try to execute rossrv show beginner_tutorials/AddTwoInts It gives the following message Unknown srv type [beginner_tutorials/AddTwoInts]: Cannot locate message [AddTwoInts]: unknown package [beginner_tutorials] on search path [{'rosconsole': ['/opt/ros/ind Comment by ahendrix on 2015-07-15: it looks like your rossrv show error is because you haven't sourced your workspace. Comment by mayankfirst on 2015-07-18: Sir, I do not have rossrv show error. I am having the compilation error and getting the following message fatal error: beginner_tutorials/AddTwoInts.h: No such file or directory #include "beginner_tutorials/AddTwoInts.h Comment by mayankfirst on 2015-07-18: I have also sourced my workspace using . ~/catkin_ws/devel/setup.bash Comment by UserK on 2015-08-10: I added what you suggested and I get: -- Using these message generators: gencpp;genlisp;genpy -- beginner_tutorial: 1 messages, 1 services CMake Error at beginner_tutorial/CMakeLists.txt:182 (add_dependencies): add_dependencies called with incorrect number of arguments Comment by ahendrix on 2015-08-10: @UserK: It looks like your variable name and your package name don't match; your package name is beginner_tutorial but the variable name you're using is beginner_tutorials. Comment by user23fj239 on 2016-04-27: It would be great to have ompl renamed. These warning msgs keep popping up, even in rqt_graph. 
argsl ~/catkin_ws1/bags$ WARNING: Package "ompl" does not follow the version conventions. It should not contain leading zeros (unless the number is 0). WARNING: Package "ompl" does not follow... Comment by ahendrix on 2016-04-27: @user23fj239 you can find ompl's bug tracker and file an issue for them to change their version numbering scheme. Ohterwise this isn't really the appropriate place to complain (and I don't even see an ompl warning in this particular question) Comment by user23fj239 on 2016-04-28: oh ok, I will send ompl a short note: Issue link to github
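For readers hitting the same missing-header error, here is a sketch of where the accepted fix sits in a minimal service-using CMakeLists.txt. This follows the beginner_tutorials layout from the wiki tutorial; the package, target, and file names are assumptions to adapt to your own package:

```cmake
# Minimal sketch (not the full file) -- assumes a package named
# beginner_tutorials with srv/AddTwoInts.srv, as in the wiki tutorial.
find_package(catkin REQUIRED COMPONENTS roscpp std_msgs message_generation)
add_service_files(FILES AddTwoInts.srv)
generate_messages(DEPENDENCIES std_msgs)
catkin_package(CATKIN_DEPENDS message_runtime)

add_executable(add_two_ints_server src/add_two_ints_server.cpp)
add_executable(add_two_ints_client src/add_two_ints_client.cpp)
target_link_libraries(add_two_ints_server ${catkin_LIBRARIES})
target_link_libraries(add_two_ints_client ${catkin_LIBRARIES})

# The fix: make each executable wait for the generated AddTwoInts.h
add_dependencies(add_two_ints_server ${beginner_tutorials_EXPORTED_TARGETS})
add_dependencies(add_two_ints_client ${beginner_tutorials_EXPORTED_TARGETS})
```

The key point is that the add_dependencies() calls must name the same targets created by add_executable(), and the variable prefix (beginner_tutorials here) must match your actual package name.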
{ "domain": "robotics.stackexchange", "id": 22159, "tags": "ros, tutorials, beginner" }
Why is the common envelope ejected in some accretor-donor systems?
Question: As an example, let us consider a binary system of a neutron star and an evolved star (e.g. a red giant) that has expanded, filled its Roche lobe, and started the mass transfer onto the neutron star. Under certain conditions such mass transfer can become a runaway process, the neutron star is engulfed, and the matter that leaves the outer layers of the donor quickly surrounds the binary system, so the two objects find themselves orbiting each other in a dense gaseous environment. Such a phase is generally referred to as the Common Envelope phase. In many papers I have read the sentence: The common envelope phase ends either with the ejection of the common envelope or by the merger of the two objects still inside the envelope. I understand the latter possibility, that is, that the dynamical interaction of the two objects in a dense environment would lead the system to the loss of orbital energy, the shrinking of their orbital separation, and, through further loss by gravitational wave emission, to merging. What puzzles me is: why, and how, is it possible that the common envelope is instead ejected? Answer: The envelope of the larger star may be rather weakly bound. Roughly speaking, the gravitational binding energy of the envelope is $-GMm/r$, where $M$ is the mass interior to the envelope, $m$ is the mass of the envelope and $r$ is its characteristic radius. When material is accreted onto the neutron star, a fraction of the kinetic energy on impact with the surface will be released as radiation - roughly $Gm_{\rm NS} m_{\rm acc}/2r_{\rm NS}$, where $m_{\rm NS}$ and $r_{\rm NS}$ are the mass and radius of the neutron star and $m_{\rm acc}$ is the accreted mass - and absorbed in the envelope. The envelope could be ejected if the radiated energy exceeds the modulus of the envelope binding energy. This could happen because even though $m_{\rm acc}$ may be small compared with the mass of the engulfing star, $r_{\rm NS}\ll r$.
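To see why a small accreted mass can be enough, here is a rough back-of-the-envelope estimate of the answer's argument. All numbers (core and envelope masses, envelope radius, neutron star mass and radius) are assumed illustrative values, not taken from the question:

```python
# Order-of-magnitude check: how much mass must the neutron star accrete
# for the released energy to unbind the envelope? Assumed numbers: a
# 1 Msun envelope around a 1 Msun core at r ~ 100 Rsun, and a 1.4 Msun,
# 10 km neutron star.
G = 6.674e-11          # m^3 kg^-1 s^-2
Msun = 1.989e30        # kg
Rsun = 6.957e8         # m

M_core = 1.0 * Msun    # mass interior to the envelope (assumed)
m_env = 1.0 * Msun     # envelope mass (assumed)
r_env = 100 * Rsun     # characteristic envelope radius (assumed)

m_ns = 1.4 * Msun
r_ns = 1.0e4           # m

# |binding energy| of the envelope, ~ G M m / r
E_bind = G * M_core * m_env / r_env

# energy released per kg accreted onto the neutron star, ~ G m_ns / (2 r_ns)
e_acc = G * m_ns / (2 * r_ns)

# accreted mass needed to unbind the envelope
m_acc = E_bind / e_acc
print(m_acc / Msun)    # a tiny fraction of a solar mass (~2e-7 here)
```

With these numbers the required accreted mass is of order $10^{-7}\,M_\odot$, illustrating the answer's last sentence: because $r_{\rm NS}\ll r$, a negligible $m_{\rm acc}$ can release enough energy.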
{ "domain": "physics.stackexchange", "id": 93393, "tags": "astrophysics, stars, neutron-stars, stellar-evolution, accretion-disk" }
Does any chirality-dependent inorganic chemical reaction exist?
Question: In particle physics, there are reactions that are not mirror-symmetric. In organic chemistry, particularly among complex proteins, there are reactions producing only a single chirality of molecules. But what about the intermediate regime? Does any inorganic chemical reaction exist with asymmetric chirality behavior? I suspect it might be possible with highly asymmetric electron clouds, probably with a significant EM octupole moment. Answer: A crystalline form of silicon dioxide known as α-quartz is optically active and can spontaneously crystallize in either of the two optically active forms. However, which one of the two you get is determined purely by chance. This, however, is different from "asymmetry" in particle physics; see here for details.
{ "domain": "chemistry.stackexchange", "id": 10789, "tags": "inorganic-chemistry, chirality" }
LinkedList class implementation in Python
Question: I was solving this problem for implementing LinkedList and ListNode classes, and Insert and Remove methods for LinkedList. I was wondering if I can get any code review and feedback. class ListNode: def __init__(self, value=None): self.value = value self.next = None self.prev = None def __repr__(self): """Return a string representation of this node""" return 'Node({})'.format(repr(self.value)) class LinkedList(object): def __init__(self, iterable=None): """Initialize this linked list and append the given items, if any""" """Best case Omega(1)""" """Worst case O(n)""" self.head = None self.tail = None self.size = 0 self.length = 0 if iterable: for item in iterable: self.append(item) def __repr__(self): """Return a string representation of this linked list""" # this actually helps me to see what's inside linkedList return 'LinkedList({})'.format(self.as_list()) def setHead(self,head): self.head = head def getNext(self): return self.next def setNext(self,next): self.next = next def size(self): """ Gets the size of the Linked List AVERAGE: O(1) """ return self.count def as_list(self): """Return a list of all items in this linked list""" items = [] current = self.head while current is not None: items.append(current.value) current = current.next return items def append(self, item): new_node = ListNode(item) if self.head is None: self.head = new_node # Otherwise insert after tail node else: self.tail.next = new_node # Update tail node self.tail = new_node # Update length self.size += 1 def get_at_index(self, index): """Return the item at the given index in this linked list, or raise ValueError if the given index is out of range of the list size. 
Best case running time: O(1) if it's head or no value, Worst case running time: O(n) if it's in the tail [TODO]""" if not (0 <= index < self.size): raise ValueError('List index out of range: {}'.format(index)) counter = self.head for i in range(index): counter = counter.next return counter.value def prepend(self, item): """Insert the given item at the head of this linked list""" """Best case Omega(1)""" """Worst case O(1)""" new_node = ListNode(item) # Insert before head node new_node.next = self.head # Update head node self.head = new_node # Check if list was empty if self.tail is None: self.tail = new_node # Update length self.size += 1 def insert(self, value, index): """ Inserts value at a specific index BEST: O(1) WORST: O(i -1) """ if not (0 <= index <= self.size): raise ValueError('List index out of range: {}'.format(index)) node = ListNode(value) prev = self.get_at_index(index) if index == 0: self.append(value) elif index == self.size: self.append(value) else: new_node = ListNode(value) node = self.head previous = None for i in range(index): previous = node node = node.next previous.next = new_node new_node.next = node self.size += 1 def remove(self, index): """Best case Omega(1)""" """Worst case O(n)""" previous = None current = self.head while current is not None: if current.value == self.get_at_index(index): if current is not self.head and current is not self.tail: previous.next = current.next if current is self.head: self.head = current.next if current is self.tail: if previous is not None: previous.next = None self.tail = previous return previous = current current = current.next return -1 def contains(self, value): """Return an item in this linked list if it's found""" """Best case Omega(1)""" """Worst case O(n)""" current = self.head # Start at the head node while current is not None: if value == current.value: return True current = current.next # Skip to the next node return False def test_linked_list(): ll = LinkedList() print(ll) print('Appending 
items:') ll.append('A') print(ll) ll.append('B') print(ll) ll.append('C') print(ll) print('head: {}'.format(ll.head)) print('testing: Getting items by index:') #ll = LinkedList(['A', 'B', 'C']) print(ll) ll.insert('AA', 0) print(ll) ll.remove(1) print(ll) if __name__ == '__main__': test_linked_list() It also passes all of these tests: # ############## TESTING ############### # ########################################################### from io import StringIO import sys # custom assert function to handle tests # input: count {List} - keeps track out how many tests pass and how many total # in the form of a two item array i.e., [0, 0] # input: name {String} - describes the test # input: test {Function} - performs a set of operations and returns a boolean # indicating if test passed # output: {None} def expect(count, name, test): if (count is None or not isinstance(count, list) or len(count) != 2): count = [0, 0] else: count[1] += 1 result = 'false' error_msg = None try: if test(): result = ' true' count[0] += 1 except Exception as err: error_msg = str(err) print(' ' + (str(count[1]) + ') ') + result + ' : ' + name) if error_msg is not None: print(' ' + error_msg + '\n') class Capturing(list): def __enter__(self): self._stdout = sys.stdout sys.stdout = self._stringio = StringIO() return self def __exit__(self, *args): self.extend(self._stringio.getvalue().splitlines()) sys.stdout = self._stdout # compare if two flat lists are equal def lists_equal(lst1, lst2): if len(lst1) != len(lst2): return False for i in range(0, len(lst1)): if lst1[i] != lst2[i]: return False return True print('ListNode Class') test_count = [0, 0] def test(): node = ListNode() return isinstance(node, object) expect(test_count, 'able to create an instance', test) def test(): node = ListNode() return hasattr(node, 'value') expect(test_count, 'has value property', test) def test(): node = ListNode() return hasattr(node, 'next') expect(test_count, 'has next property', test) def test(): node = 
ListNode() return node is not None and node.value is None expect(test_count, 'has default value set to None', test) def test(): node = ListNode(5) return node is not None and node.value == 5 expect(test_count, 'able to assign a value upon instantiation', test) def test(): node = ListNode() node.value = 5 return node is not None and node.value == 5 expect(test_count, 'able to reassign a value', test) def test(): node1 = ListNode(5) node2 = ListNode(10) node1.next = node2 return (node1 is not None and node1.next is not None and node1.next.value == 10) expect(test_count, 'able to point to another node', test) print('PASSED: ' + str(test_count[0]) + ' / ' + str(test_count[1]) + '\n\n') print('LinkedList Class') test_count = [0, 0] def test(): linked_list = LinkedList() return isinstance(linked_list, object) expect(test_count, 'able to create an instance', test) def test(): linked_list = LinkedList() return hasattr(linked_list, 'head') expect(test_count, 'has head property', test) def test(): linked_list = LinkedList() return hasattr(linked_list, 'tail') expect(test_count, 'has tail property', test) def test(): linked_list = LinkedList() return hasattr(linked_list, 'length') expect(test_count, 'has length property', test) def test(): linked_list = LinkedList() return linked_list is not None and linked_list.head is None expect(test_count, 'default head set to None', test) def test(): linked_list = LinkedList() return linked_list is not None and linked_list.tail is None expect(test_count, 'default tail set to None', test) def test(): linked_list = LinkedList() return linked_list is not None and linked_list.length == 0 expect(test_count, 'default length set to 0', test) print('PASSED: ' + str(test_count[0]) + ' / ' + str(test_count[1]) + '\n\n') print('LinkedList Contains Method') test_count = [0, 0] def test(): linked_list = LinkedList() return (hasattr(linked_list, 'contains') and callable(getattr(linked_list, 'contains'))) expect(test_count, 'has contains method', test) 
def test(): linked_list = LinkedList() linked_list.append(5) linked_list.append(10) linked_list.append(15) return linked_list.contains(15) is True expect(test_count, 'returns True if linked list contains value', test) def test(): linked_list = LinkedList() linked_list.append(5) linked_list.append(10) linked_list.append(15) return linked_list.contains(8) is False expect(test_count, 'returns False if linked list contains value', test) print('PASSED: ' + str(test_count[0]) + ' / ' + str(test_count[1]) + '\n\n') Answer: Great job, looks really good! A few minor thoughts, on top of the existing answers: class LinkedList(object): ... You don't need to explicitly inherit from object if you're writing python3 code that doesn't require to be python2 compatible. This happens implicitly in python3. You've already done this for Node. Check out new style vs. old style classes if you want to learn more about this. class LinkedList: ... Moreover, you could define a function to add multiple values, such as: def extend(self, vals): for val in vals: self.append(val) You can also use that in your __init__ if initial values are provided. Additionally, you could define an __iter__ function that implements a generator. This could help you with tasks for which you don't want to use to_list() and allocate the memory for a list. def __iter__(self): curr = self.head while curr: yield curr curr = curr.next Lastly, I don't like using next as a variable, because it's already built-in, but that's not going to cause any trouble here.
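Putting the answer's two suggestions together, here is a minimal self-contained sketch (a stripped-down node and list, not the full class from the question; __iter__ here yields values rather than nodes, a small readability tweak on the answer's version):

```python
# Minimal linked list demonstrating extend() and a generator-based
# __iter__, so list(ll) and `for v in ll` work without as_list().
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self, iterable=None):
        self.head = None
        self.tail = None
        if iterable:
            self.extend(iterable)

    def append(self, value):
        node = Node(value)
        if self.head is None:
            self.head = node
        else:
            self.tail.next = node
        self.tail = node

    def extend(self, values):
        # add many values without duplicating the append logic
        for value in values:
            self.append(value)

    def __iter__(self):
        # lazily walk the nodes; no intermediate list is allocated
        curr = self.head
        while curr:
            yield curr.value
            curr = curr.next

ll = LinkedList(['A', 'B'])
ll.extend(['C', 'D'])
print(list(ll))  # -> ['A', 'B', 'C', 'D']
```

Because __iter__ is a generator, membership tests like `'C' in ll` also work for free, which could replace the hand-written contains() method.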
{ "domain": "codereview.stackexchange", "id": 31600, "tags": "python, algorithm, programming-challenge, unit-testing, linked-list" }
Help with the Price equation
Question: The Price equation describes mathematically the evolution of a population of units from one generation to the next. $$\bar{w}\Delta \bar{z} = \mathrm{Cov}(w_i,z_i) + E(w_i\Delta z_i)$$ I would like to know how to actually apply the equation to some data. Perhaps a simple online "walk-through" type guide to the Price equation would help. It should simply show the calculation of the Price equation using numbers from an example population. For example, I'd like to see how the Price equation is applied to the following scenario: A population, $P$, of 5 individuals reproduces to produce population $P'$. The trait value of the $i^{th}$ individual is $z_{i}$, where $z_1$, $z_2$ and $z_3$ all equal 1, $z_4$ and $z_5$ both equal 2, and $\bar{z} = 1.4$. Absolute fitness is $w_i$ for the $i^{th}$ individual, where $w_1$, $w_2$ and $w_3$ all equal 1, and $w_4$ and $w_5$ both equal 5. Relative fitnesses, $\omega_i$, are $\omega_{z=1} = 0.077$ and $\omega_{z=2} = 0.385$. Thus the population $P'$ has $n = 13$, with 3 individuals where $z = 1$ and 10 individuals where $z = 2$, and $\bar{z}' = 1.769$. $\Delta z$ is the transmission bias and is equal to 0 in this case (perfect transmission of the trait score $z$). The value $\Delta \bar{z} = \mathrm{Cov}(w,z)/\bar{w} = \dots$
Here's an R script to create the above information: # Define two trait values: z1 = 1 z2 = 2 # Define two fitness values: w1 = 1 w2 = 5 # Set number of units possesing each trait in P population: n1 = 3 n2 = 2 # Create data df = data.frame(c(rep(z1,n1),rep(z2,n2)),c(rep(w1,n1),rep(w2,n2))) colnames(df) = c("z","w") df$omega = df$w / sum(df$w) n_P = length(df$z) n_O = sum(df$w) z_P_bar = mean(df$z) z_O_bar = sum(df$w*df$z) / sum(df$w) omega_z1 = mean(df$omega[df$z==z1]) omega_z2 = mean(df$omega[df$z==z2]) # Parental population size: n_P # Offspring population size: n_O # Parental mean trait: z_P_bar # Offspring mean trait: z_O_bar # Realtive fitnesses: omega_z1 omega_z2 Answer: Here is a simple example using your data in which both terms of the Price equation are needed, since the value of the character for $z_2 $ changes in the second generation. I used your suggested change $z_2'=(9\cdot 2 + 1\cdot 3)/10 = 2.1$. The Price equation or theorem is: $$(1)\hspace{10mm}w\cdot \Delta z = \text{cov}(z_i,w_i) + E(w_i\Delta z_i) $$ While the idea here is just to give a credible calculation using both terms in the right side of the equation, and while the intuition behind the equation is essentially mathematical (or is it? $^1$), Steven Franks' paper, 'George Price's Contribution to Evolutionary Genetics', J. Theor. Biol. 175 (1995), 373-88, is a good introduction. His characterization of the right side may be the best one can do: "The two terms may be thought of as changes due to selection and transmission, respectively. The covariance between fitness and character value gives the change in the character caused by differential reproductive success. The expectation term is a fitness weighted measure of the change in character values between ancestor and descendant. The full equation describes both selective changes within a generation and the response to selection..." p. 376. We work with reference to the following data and will define variables below. 
$$\begin{array}{c | c | c | } n_i & 3 & 2 \\ \hline z_i &1 & 2 \\ \hline w_i &1& 5 \\ \hline n_i' & 3 & 10 \\ \hline z_i' & 1 & 2.1 \\ \hline \end{array}$$ All cites WK are to the Wiki page on the Price equation: (1) We can write $\text{cov} (w_i, z_i)$ as $E(w_iz_i) - wz.\hspace{10mm}$ WK eq. (7). Explicitly: $$ \text{Cov}(w_i,z_i) = E(w_iz_i)- wz = \frac{ w_1z_1n_1 + w_2z_2n_2}{n_1+n_2} - wz $$ $$ = \frac{1\cdot1\cdot3 + 5\cdot2\cdot2}{3+2} - (2.6)(1.4) =\frac{23}{5} - (2.6)(1.4)= 0.96$$ (2) We can write $E(w_i\Delta z_i) = E(w_i z_i') - E(w_i z_i).\hspace{10mm}$ WK eq. (8). Now $E(w_i z_i') = \frac{1}{n}\sum w_iz_i'n_i\hspace{40mm} $ WK eq. (9a). $ = \frac{1}{2 + 3}(1\cdot 1\cdot 3 + 5\cdot(2.1)\cdot 2) = 24/5.$ $E(w_iz_i) = 23/5$ from (1) above. So $ E(w_i z_i') - E(w_i z_i) = \frac{24}{5} - \frac{23}{5} = 1/5$ (3) Finally we have $\Delta z = z' - z = \frac{2.1(10) + 3(1)}{13}-\frac{3\cdot 1+2\cdot 2}{5} = \frac{29}{65} $ and $w = \frac{3\cdot 1+ 2\cdot 5}{3+2} = \frac{13}{5} = 2.6$ So $$w \Delta z = (2.6) \frac{29}{65} = 1.16 $$ $$= \text{cov}(z_i,w_i) + E(w_i \Delta z_i) = 0.96 + .2 = 1.16 $$ $n_i$ is the number of elements in group $i.$ $n_i'$ is the same for the filial generation. $z_i$ is the value of a character for group $i.$ $z_i'$ is the same for offspring of $z_i.$ $w_i$ is a fitness weight, often the number of offspring an element $z_i$ will have. $w$ is the average fitness for the parental generation. $z$ is the average value of z for parental generation; $z'$ is the average of z for the filial generation. $z_2' - z_2$ (for example) is the average value of $z_2$ in the second filial generation minus the average of $z_2$ in the parent's generation. $^1$ This site condenses some critical discussion of the Price equation. I have not worked through it yet but it addresses some lingering doubts I have about just what this equation means. Strongly recommended.
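The arithmetic above can be checked mechanically. The following short script (plain Python rather than the question's R, purely as a sketch) recomputes both sides of the Price equation for this example:

```python
# Numeric check of the worked example: both sides of the Price equation
# should come out to 1.16.
n  = [3, 2]        # group sizes in the parental population
z  = [1.0, 2.0]    # parental character values
w  = [1.0, 5.0]    # fitnesses (offspring per parent)
zp = [1.0, 2.1]    # offspring character values (z2' = 2.1)

N     = sum(n)
w_bar = sum(wi * ni for wi, ni in zip(w, n)) / N          # 2.6
z_bar = sum(zi * ni for zi, ni in zip(z, n)) / N          # 1.4

# offspring mean, weighted by number of offspring w_i * n_i
z_bar_prime = (sum(wi * ni * zpi for wi, ni, zpi in zip(w, n, zp))
               / sum(wi * ni for wi, ni in zip(w, n)))    # 24/13
dz_bar = z_bar_prime - z_bar                              # 29/65

# cov(w_i, z_i) = E(w_i z_i) - w_bar * z_bar
cov  = sum(wi * zi * ni for wi, zi, ni in zip(w, z, n)) / N - w_bar * z_bar
# E(w_i * dz_i), with dz_i = z_i' - z_i
Ewdz = sum(wi * (zpi - zi) * ni
           for wi, zpi, zi, ni in zip(w, zp, z, n)) / N

lhs = w_bar * dz_bar    # ~1.16
rhs = cov + Ewdz        # 0.96 + 0.2 = ~1.16
print(lhs, rhs)
```

This reproduces the numbers derived above: $\text{cov} = 0.96$, $E(w_i\Delta z_i) = 0.2$, and $\bar w\,\Delta\bar z = 1.16$ on both sides.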
{ "domain": "biology.stackexchange", "id": 2278, "tags": "evolution, mathematical-models, population-genetics" }
What is the length of a unit cell of CuCl assuming that it is fcc?
Question: The density of $\ce{CuCl}$ is given – $x\ \mathrm{g/cm^3}$. The crystal structure is assumed to be fcc. My teacher is claiming that we can apply the formula $$\rho=\frac{Z \cdot M}{a^3 \cdot N_\mathrm A}$$ And he took $M = 35.5\ \mathrm{g\ mol^{-1}}+63.5\ \mathrm{g\ mol^{-1}} = 99\ \mathrm{g\ mol^{-1}}$ and $Z = 4$, as it is fcc. But my question is: how can we take $M = 99\ \mathrm{g\ mol^{-1}}$? It means every particle in the fcc lattice has mass $99\ \mathrm u$, doesn't it? Answer: Well, yes, depending on the detail you want to go into. You can find a representation of the crystal structure of CuCl at, for example, http://www.webelements.com/compounds/copper/copper_chloride.html - you will see that the Cu atoms do indeed form a simple fcc lattice. The Cl atoms are on sites that form an interpenetrating fcc lattice. The end result is the zinc blende (ZnS) lattice prototype, very familiar to semiconductor folks as the crystal structure for GaAs and associated III-V compounds. If all sites were occupied by the same atom, this would be known as diamond cubic, commonly seen in diamond, Si, and Ge (amongst others). But, yes, one way to consider the crystal (particularly for your purposes here) is as an fcc crystal with a Cu-Cl basis. Just don't expect the x-ray diffraction to correspond to fcc.
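As a numerical illustration of the formula, one can solve it for the cell edge $a$. The density $4.14\ \mathrm{g/cm^3}$ below is an assumed literature value for CuCl, standing in for the $x$ given in the actual problem:

```python
# Solving rho = Z*M / (a^3 * N_A) for the edge length a, with an
# assumed density of ~4.14 g/cm^3 for CuCl (substitute your own x).
N_A = 6.022e23       # 1/mol
Z   = 4              # formula units per cell (zinc-blende / fcc-type)
M   = 99.0           # g/mol, Cu (63.5) + Cl (35.5)
rho = 4.14           # g/cm^3 (assumed)

a_cm = (Z * M / (rho * N_A)) ** (1.0 / 3.0)
a_angstrom = a_cm * 1e8
print(round(a_angstrom, 2))  # ~5.42 Angstrom
```

Note that $M$ here is the mass of one CuCl formula unit, and $Z = 4$ counts formula units (one Cu plus one Cl each) per cell, which is why using $99\ \mathrm{g\,mol^{-1}}$ is consistent.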
{ "domain": "chemistry.stackexchange", "id": 1726, "tags": "crystal-structure, solid-state-chemistry, density" }
subscriber in rqt plugin python
Question: I'm writing a rqt plugin in python, but can't get the subscriber to work. I want the string from "controller/mode" to show in "lineControlMode". For some reason this doesn't happen. Hope some of you can help me. import os import rospkg import rospy from qt_gui.plugin import Plugin from python_qt_binding import loadUi from python_qt_binding.QtWidgets import QWidget, QGraphicsView from std_msgs.msg import String class MyPlugin(Plugin): def __init__(self, context): #PLUGIN CODE super(MyPlugin, self).__init__(context) # Give QObjects reasonable names self.setObjectName('MyPlugin') rp = rospkg.RosPack() # Process standalone plugin command-line arguments from argparse import ArgumentParser parser = ArgumentParser() # Add argument(s) to the parser. parser.add_argument("-q", "--quiet", action="store_true", dest="quiet", help="Put plugin in silent mode") args, unknowns = parser.parse_known_args(context.argv()) if not args.quiet: print 'arguments: ', args print 'unknowns: ', unknowns # Create QWidget self._widget = QWidget() ui_file = os.path.join(rp.get_path('rqt_vortex_control_mode'), 'resource', 'MyPlugin.ui') # Extend the widget with all attributes and children from UI file loadUi(ui_file, self._widget) # Give QObjects reasonable names self._widget.setObjectName('MyPluginUi') # Add widget to the user interface context.add_widget(self._widget) #MY CODE self._widget.lineControlMode.setReadOnly(True) self._widget.lineControlMode.setText("init") #Subscriber self.sub = rospy.Subscriber("/uranus_dp/controller/mode", String, self.callback) def shutdown_plugin(self): self.sub.unregister() def callback(self, mode): self._widget.lineControlMode.setText(mode) Here is the link to my repository just in case: https://github.com/vortexntnu/rov-gui Originally posted by fonstein on ROS Answers with karma: 43 on 2017-02-09 Post score: 0 Original comments Comment by lucasw on 2017-02-11: The plugin appears to work, but when you publish to /uranus_dp/controller/mode the 
lineControlMode widget doesn't show anything? Answer: Does the "init" test message successfully show up? If so then try this at the bottom of the init: temp = String("test string") self._widget.lineControlMode.setText(temp) If that works re-enable the subscriber and put a rospy.loginfo(mode) in the callback and publish test messages to the topic, and see if the callback is getting triggered. Originally posted by lucasw with karma: 8729 on 2017-02-11 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by fonstein on 2017-02-15: Thanks, that helped a lot. Got it to work now! Comment by lucasw on 2017-02-15: What was the issue? Did setText not like taking a String? Comment by fonstein on 2017-02-15: I had to cast the data to String. There was also some bugs on the Subscriber, but with your tips on debugging I got it to work.
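The fix the asker alludes to in the comments ("I had to cast the data") comes down to the fact that a std_msgs/String message is an object wrapping its text in a .data field, so the callback should call setText(mode.data), not setText(mode). A ROS-free sketch of that distinction, where String and FakeLineEdit are illustrative stand-ins rather than the real ROS/Qt types:

```python
# Stand-in for std_msgs.msg.String: the payload lives in .data
class String:
    def __init__(self, data=""):
        self.data = data

# Stand-in for the Qt line edit: setText only accepts a plain str
class FakeLineEdit:
    def __init__(self):
        self.text = None
    def setText(self, s):
        if not isinstance(s, str):
            raise TypeError("setText expects str, got %r" % type(s))
        self.text = s

line_control_mode = FakeLineEdit()

def callback(mode):
    # Buggy version: line_control_mode.setText(mode)  -> TypeError,
    # because `mode` is the message object, not the string inside it.
    line_control_mode.setText(mode.data)

callback(String("position hold"))
print(line_control_mode.text)  # -> position hold
```

In the real plugin the same change applies inside MyPlugin.callback: unwrap the message with mode.data before handing it to the Qt widget.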
{ "domain": "robotics.stackexchange", "id": 26971, "tags": "rqt" }
Failed to put the turtlebot in full mode
Question: I have roslaunch turtlebot_bringup minimal.launch running on the turtlebot laptop and the turtlebot dashboard running on my workstation computer, and the icons are green aside from the breakers and the mode icon. When I attempt to switch the turtlebot's mode to full mode I get the error message: "Failed to put the turtlebot in full mode: service call failed with error: service [/turtlebot_node/set_operation_mode] unavailable" Any ideas as to why I cannot switch modes? Originally posted by mbeard033 on ROS Answers with karma: 1 on 2012-03-13 Post score: 0 Original comments Comment by Ryan on 2012-03-13: Is the mode icon grey or red? Do the breaker icons fail to work as well? Answer: This type of error can occur if you do not have complete network connectivity between computers. Guide to Network Setup Troubleshooting This can also happen if you restart the master without restarting the dashboard. Originally posted by tfoote with karma: 58457 on 2012-03-13 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 8575, "tags": "ros, turtlebot, turtlebot-dashboard" }
All possible electromagnetic Lorentz invariants that can be built into the electromagnetic Lagrangian?
Question: Given that the electromagnetic Lagrangian density $$ \mathcal{L}~=~-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}~=~\frac{1}{2}(E^2-B^2) $$ is a Lorentz invariant, how many other electromagnetic invariants exist that can be incorporated into the electromagnetic Lagrangian? Answer: As mentioned in the comments, to find all possible terms we normally only consider local, gauge-invariant, Lorentz-invariant interactions. There are in fact an infinite number of these. This is easiest understood using the Lagrangian. The gauge-invariant field strength tensor is given by \begin{equation} F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu \end{equation} The only other tensors with Lorentz indices are \begin{equation} \epsilon_{\alpha\beta\gamma\ldots} \quad , \quad g_{\mu\nu} \end{equation} To lowest order in $F$ the only non-zero invariants are: \begin{equation} F_{\mu\nu}F^{\mu\nu} \quad , \quad \epsilon_{\alpha\beta\gamma\delta}F^{\gamma\delta}F^{\alpha\beta} \end{equation} If we restrict ourselves to terms with mass dimension $4$ or lower, these are the only options (such terms are called renormalizable). However, one can also write down other invariants which have higher mass dimension. One such example is the mass-dimension-six term \begin{equation} \partial^\mu F_{\mu\nu}\, \partial^\alpha F_\alpha{}^{\nu} \end{equation} Such terms are small at low energies and are often ignored. In general there are an infinite number of allowed (non-renormalizable) terms in the Lagrangian. Though it may not be trivial, such terms could be written in terms of the electric and magnetic fields to find the different combinations of $\bf E$ and $\bf B$ that form Lorentz invariants.
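For completeness, the two quadratic invariants can be written in terms of $\bf E$ and $\bf B$. This is a sketch assuming the $(+,-,-,-)$ metric with $\epsilon^{0123}=+1$; the overall signs flip under other conventions:

```latex
% The two quadratic invariants in terms of E and B
% (mostly-minus metric, \epsilon^{0123} = +1; signs are convention dependent)
F_{\mu\nu}F^{\mu\nu} = 2\left(B^2 - E^2\right),
\qquad
F_{\mu\nu}\tilde{F}^{\mu\nu} = -4\,\mathbf{E}\cdot\mathbf{B},
\quad\text{where}\quad
\tilde{F}^{\mu\nu} = \tfrac{1}{2}\,\epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}.
```

The first reproduces the Lagrangian in the question up to the factor $-\tfrac{1}{4}$; the second is, up to normalization, the $\epsilon F F$ term above. Being a pseudoscalar, it can only enter the Lagrangian as a parity-violating (topological, $\theta$-like) term.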
{ "domain": "physics.stackexchange", "id": 12495, "tags": "electromagnetism, special-relativity, invariants" }
Chemistry behind Gale's coffee maker in Breaking Bad
Question: Is there a scientific basis for the coffee making equipment which Gale Boetticher describes in Breaking Bad? He talks about maintaining the right conditions for bringing out the coffee flavor without degrading it. Answer: Yes, there is a scientific basis, but I think you can't do anything with the apparatus shown in the figure. Here you can find much information about it. I will try to summarize. It seems that extracting coffee at a lower temperature makes it taste better. Gale thinks that the amount of quinic acid is the key variable that makes good coffee. In fact, some studies (McCamey, D. A.; Thorpe, T. M.; and McCarthy, J. P. Coffee Bitterness. In "Developments in Food Science." Vol 25. 169-182. 1990.) report that the bitterness of coffee is due to quinic acid, although it is not the only substance involved (see this link). I don't have any reference about the dependence between temperature and method of extraction and the quantity of quinic acid extracted. By the way, the apparatus should extract the coffee in a more effective way. The vacuum pump seems to suggest the use of vacuum to make the water boil at a lower temperature. But the connections in my opinion make no sense, so I will try to describe them: It all starts with an autoclave that is connected to a Gast vacuum pump. This is connected to an Allihn condenser (not the right thing to do, because that is where you should connect the cooling water), and then to a heated Erlenmeyer flask that is connected to a strange steel cylinder (I don't know what it is, maybe a filter), then to a Florence flask and then to another steel cylinder (...). The inspiration comes from a Florence siphon, but this is much simpler and is generally something like this: Here you can read and here you can watch how it works.
Of course this is not as nice and intriguing as the previous apparatus, and is not a geeky thing that makes us think we are dealing with a great chemist, so the authors decided to make the apparatus more appealing but completely useless!
{ "domain": "chemistry.stackexchange", "id": 1404, "tags": "organic-chemistry, everyday-chemistry, experimental-chemistry, home-experiment, chemistry-in-fiction" }
Time in the simple Heat Equation
Question: I was teaching the simple heat equation to a pharmacy student, namely the equation $$Q = m\mathcal{C} \Delta T$$ And I was thinking: this equation gives us the amount of energy necessary to raise the temperature of a mass $m$ of a particular substance from $T_1$ to $T_2$. If I know the density and the volume of the substance I can always find the mass, but let's suppose I know it's a liter of water inside a pot. OK, being water, I already know the volume, the mass and the specific heat. What about a formula (maybe a modification of this one?) that tells me how much time it will take to make a liter of water boil? I know it depends on how powerful the flame of the stove is, but... is there some equation as a function of time? Answer: The 'go to' time-dependent equation for solving transient heat problems is Fourier's heat equation (here in 3D and Cartesian coordinates): $$\frac{\partial T}{\partial t}=\kappa\Big(\frac{\partial^2 T}{\partial x^2}+\frac{\partial^2 T}{\partial y^2}+\frac{\partial^2 T}{\partial z^2}\Big)$$ Where $\kappa$ is the thermal diffusivity: $$\kappa=\frac{k}{\rho c_p}$$ Fourier's equation is derived from Fourier's law: $$q=-k\Big(\frac{\partial T}{\partial x}+\frac{\partial T}{\partial y}+\frac{\partial T}{\partial z}\Big)$$ Where $q$ is the heat flux: heat flow per unit of area through a surface. In the absence of work done, the change in internal energy per unit volume of the material, $\Delta Q$, becomes: $$\Delta Q=\rho c_p\Delta T$$ What about a formula (maybe a modification of this?) that tells me how much time it will take to make a liter of water boil? I know it depends on how powerful the flame of the stove is, but... is there some equation as a function of time? Application of Fourier's law or the heat equation depends highly on the actual real-world problem you're trying to solve, so a 'one-fits-all' formula does not exist.
In the case of the simple heating problem of a kettle on a stove we can use the following simple derivation. We'll assume there is a contact area $A$ between the kettle and stove and that the stove temperature is at a constant $T_{\infty}$. We'll neglect heat losses (convection, radiation) from the kettle. The rate of heat transfer between stove and kettle is then given by Newton's cooling/heating law: $$\frac{dQ}{dt}=uA(T_{\infty}-T)\tag{1}$$ Where $T$ is the kettle temperature (assumed homogeneous) and $u$ the overall heat transfer coefficient. As the kettle heats up by an infinitesimal amount $dT$, by differentiating: $$Q = mc_p \Delta T$$ $$\implies dQ=mc_pdT$$ Divide both sides by $dt$: $$\frac{dQ}{dt}=mc_p\frac{dT}{dt}\tag{2}$$ With the identity $(1)=(2)$ we get the differential equation: $$mc_p\frac{dT}{dt}=uA(T_{\infty}-T)\tag{3}$$ If we integrate $(3)$ between $(0,T_1)$ and $(t,T_2)$ we get: $$-\frac{uA}{mc_p}t=\ln\Big[\frac{T_{\infty}-T_2}{T_{\infty}-T_1}\Big]\tag{4}$$ $(4)$ then allows us to calculate the time $t$ to heat from $T_1$ to $T_2$. Note that as $t \to \infty$, $T_2 \to T_{\infty}$.
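Equation (4) is straightforward to evaluate numerically. Here is a minimal sketch; the heat transfer coefficient $u$ and contact area $A$ below are illustrative guesses, not values from the answer:

```python
import math

def heating_time(m, c_p, u, A, T_inf, T1, T2):
    """Time to heat from T1 to T2, solving eq. (4) for t:
    t = -(m * c_p) / (u * A) * ln((T_inf - T2) / (T_inf - T1))."""
    return -(m * c_p) / (u * A) * math.log((T_inf - T2) / (T_inf - T1))

# 1 L of water (~1 kg), c_p = 4186 J/(kg K); u and A are made-up plausible values.
t_boil = heating_time(m=1.0, c_p=4186.0, u=500.0, A=0.02,
                      T_inf=500.0, T1=20.0, T2=100.0)
```

Consistent with (4), the time diverges as $T_2 \to T_{\infty}$ and halves when $uA$ doubles.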
{ "domain": "physics.stackexchange", "id": 34848, "tags": "thermodynamics, time" }
pandas.isna() vs pandas.DataFrame.isna()
Question: I've seen the two documentation pages for pandas.isna() and pandas.DataFrame.isna() but the difference is still unclear to me. Could someone explain the difference to me using examples? Answer: They call the same underlying method, so there is no functional difference. Calling the dataframe member function is preferred for OOP patterns, but there are many redundancies/aliases in pandas and python in general. In case you are curious, here is how the source code breaks down (it is a mess). The DataFrame (pandas/core/frame.py) method is simply: def isna(self): return super().isna() Where DataFrame extends NDFrame (implemented in pandas/core/generic.py). NDFrame subsequently invokes: def isna(self): return isna(self).__finalize__(self) Which was imported here: from pandas.core.dtypes.missing import isna, notna In pandas/core/dtypes/missing.py: def isna(obj): return _isna(obj) The _isna function is later aliased as _isna = _isna_new because there is a deprecated method _isna_old(obj). The _isna_new(obj) function then performs the logic operations: def _isna_new(obj) if is_scalar(obj): return libmissing.checknull(obj) # hack (for now) because MI registers as ndarray elif isinstance(obj, ABCMultiIndex): raise NotImplementedError("isna is not defined for MultiIndex") elif isinstance(obj, type): return False elif isinstance( obj, ( ABCSeries, np.ndarray, ABCIndexClass, ABCExtensionArray, ABCDatetimeArray, ABCTimedeltaArray, ), ): return _isna_ndarraylike(obj) elif isinstance(obj, ABCGeneric): return obj._constructor(obj._data.isna(func=isna)) elif isinstance(obj, list): return _isna_ndarraylike(np.asarray(obj, dtype=object)) elif hasattr(obj, "__array__"): return _isna_ndarraylike(np.asarray(obj)) else: return obj is None Ultimately, the DataFrame method passes itself as a parameter to the same function that you call with pandas.isna().
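Since both paths end in the same `_isna` call, a quick sanity check (assuming pandas and NumPy are installed) confirms the results are identical:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan], "b": [None, "x"]})

top_level = pd.isna(df)   # module-level function
method = df.isna()        # DataFrame method, same underlying implementation

# The boolean masks are element-for-element identical.
same = top_level.equals(method)
```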
{ "domain": "datascience.stackexchange", "id": 6144, "tags": "pandas, similar-documents" }
Package management system
Question: A short time ago, I discovered the LinuxFromScratch project. After getting a system up and working (after much struggling), I realized that if I wanted to continue using LFS, some sort of package management would be quite nice. Of course I could have installed Pacman, apt-get, rpm, etc. like any sane person, perhaps. Instead, I was interested in creating my own simple 'package management' system that would keep track of files that belonged to a certain package, etcetera. I have attached several files, two of which I think are particularly in need of review: package.py – a class that describes information about a package such as its name, its version, what its dependencies are, etcetera fakeroot.py – this file is in charge of installing all of a package's files to the filesystem from a fakeroot, adding records of the installed files to a table in a database called Files, etcetera package.py: import io_crate, os.path, sqlite3, core_regex, datetime, io_output class Package: crate_extension = '.crate' database_location = 'proto.db' def __init__(self, name, verbosity = 0, fp = '/home/duncan/Documents/fakeroot', rp = '/home/duncan/Documents/install', ap = '/usr/src/archive/', cp = '/home/duncan/Documents/package/'): # Setup database stuff self.connection = sqlite3.connect(self.database_location) self.connection.text_factory = str self.db_cursor = self.connection.cursor() # Setup path and name self.name = name self.fakeroot_path = os.path.join(fp, self.name) self.root = rp self.archive_path = ap self.crate_path = os.path.join(cp, self.name) + self.crate_extension # Setup description taken from .crate file crate_contents = io_crate.read_crate(self.crate_path) self.description = crate_contents[0][1] self.homepage = crate_contents[1][1] self.optional_deps = crate_contents[2][1] self.recommended_deps = crate_contents[3][1] self.required_deps = crate_contents[4][1] self.verbosity = verbosity def add_to_db(self): """Adds self.name to the package database.""" if 
self.is_in_db(): return 0 else: # no need to try..except this because is_in_db. self.db_cursor.execute('INSERT INTO Packages VALUES(?, ?);', (self.name, datetime.datetime.today())) io_output.vbprint(self.verbosity, '{} added to the databased'.format(self.name)) return 1 def remove_from_db(self): """Removes self from the database of packages.""" if self.is_in_db(): self.db_cursor.execute('DELETE FROM Packages WHERE Package=?;', (self.name,)) io_output.vbprint(self.verbosity, '{} removed from database'.format(self.name)) return 1 return 0 def is_in_db(self): """Checks if the name of self is contained in the packages database.""" try: self.db_cursor.execute('SELECT * FROM Packages WHERE Package=?;', (self.name,)) except: print 'Couldn\'t read the database at {}'.format(self.database_location) if not self.db_cursor.fetchone(): return 0 return 1 def __del__(self): self.connection.commit() self.connection.close() fakeroot.py: import os, md5_gen, shutil, io_output class Fakeroot(): def __init__(self, package): self.dirs = [] # A list based on the files in the fakeroot. self.files = [] # A list of all the directories to be created in package.root. self.links = [] # A list of links from the fakeroot self.package = package for root, dirs, files in os.walk(package.fakeroot_path): for f in files: new_dir = os.path.normpath(os.path.join(package.root, root[len(package.fakeroot_path) + 1:len(root)])) src = os.path.join(root, f) dest = os.path.join(new_dir, f) if (os.path.islink(src)): self.links.append([root, new_dir, f]) else: self.files.append([src, dest]) for d in dirs: self.dirs.append(os.path.join(package.root, root[len(package.fakeroot_path) + 1: len(root)], d)) def create_dirs(self): # Go through self.dirs and check to see if a directory exists. If it does not, create it. for d in self.dirs: if not os.path.exists(d): # If the directory does not exist, run the equivalent of os.makedirs(d) # mkdir -p on it. 
io_output.vbprint(self.package.verbosity, 'DD {}'.format(d)) else: io_output.vbprint(self.package.verbosity, 'UU {}'.format(d)) continue def remove_dirs(self): for d in reversed(self.dirs): # remove the directories that are the highest in the tree first. if os.path.isdir(d): # If the directory exists if not os.listdir(d): # and is empty... os.rmdir(d) # remove it. io_output.vbprint(self.package.verbosity, '-- {}'.format(d)) else: # If it is not empty. io_output.vbprint(self.package.verbosity, 'UU {}'.format(d)) else: # If it does not exist. io_output.vbprint(self.package.verbosity, '?? {}'.format(d)) def copy_files(self): for f in self.files: if os.path.exists(f[1]): # If the file exists, show that it is being used. print 'Overwrite {}???'.format(f[1]) # TODO # TODO: "Code" for overwiting stuff goes here. # TODO: If yes, copy the file and add it to the DB. # TODO: Perhaps an option to overwrite all files could be useful? # TODO continue else: # If it does not exist, try: # try... shutil.copy2(f[0], f[1]) # copying it! self.add_to_db(f) io_output.vbprint(self.package.verbosity, '++ {}'.format(f[1])) except: io_output.vbprint(self.package.verbosity, 'Failed to copy a file...rolling back changes.') #self.remove_files() break def remove_files(self): for f in self.files: if os.path.exists(f[1]): os.remove(f[1]) self.remove_from_db(f) io_output.vbprint(self.package.verbosity, '-- {}'.format(f[1])) else: io_output.vbprint(self.package.verbosity, '?? {}'.format(f[1])) def create_links(self): for l in self.links: try: if not os.path.exists(os.path.join(l[1], l[2])): linkto = os.path.join(l[1], os.readlink(os.path.join(l[0], l[2]))) os.symlink(linkto, os.path.join(l[1], l[2])) io_output.vbprint(self.package.verbosity, 'LL {}'.format(l[2])) else: print 'Overwrite existing link {}??'.format(l[2]) # TODO: See above todo for more info. ^^^^ except: print 'Couldn\'t find the specified fakeroot!' 
break def remove_links(self): for l in self.links: try: os.remove(os.path.join(l[1], l[2])) io_output.vbprint(self.package.verbosity, '-- {}.'.format(l)) except: raise OSError('\nFailed to remove the link `{}`'.format(os.path.join(l[1], l[2]))) def add_to_db(self, f): """Returns 0 if the db can't be read, 1 if the file is added.""" try: self.package.db_cursor.execute('INSERT INTO Files VALUES(?, ?, ?, ?);', (f[1], md5_gen.generate(f[1]), os.path.getsize(f[1]), self.package.name)) return 1 except: return 0 def remove_from_db(self, f): self.package.db_cursor.execute('DELETE FROM Files WHERE Path=?;', (f[1],)) In regard to these two files, I feel that my design is somewhat crappy. In particular, the design of the Package and Fakeroot objects seems bad. Additionally, the many similar methods in fakeroot.py for copying files / removing files seem somewhat redundant. What do you think? io_output.py: def vbprint(verb_l, message): """Take a level of verboseness, `verb_l`, and a message to be printed. If verb_l > 0, the message is printed.""" if verb_l != 0: print message md5_gen.py (the idea for this code was found by searching Google for something like "md5 python"): import hashlib def generate(filename): """Return the md5 hash of filename. Keyword arguments: filename -- The path and name of the file to calculate the SHA1 hash of """ # Returns the md5 hash of the given file. md5 = hashlib.md5() with open(filename, 'rb') as file: while True: block = file.read(2**10) if not block: break md5.update(block) return md5.hexdigest() io_crate.py -- used to get information from files about packages: from re import split def read_crate(package): """Opens a crate file, reads the variables, and returns them as a sorted list of tuples. Keyword arguments: package -- A Package object, as defined in package.py """ # Returns 0 if the crate file cannot be read, 1 on success. 
try: crate_text = open(package).read() except IOError: print 'Failed to read the crate file {}.'.format(package.crate_path) return 0 processed = split("=.*?\"|\".*?\n", crate_text) lastline = "" final = [] for line in processed: if lastline.isupper(): if '_DEP' in lastline: final.append((lastline, split(' ', line))) else: final.append((lastline, line)) lastline = line return sorted(final, key=lambda pos: pos[0]) Sample .crate file: DESCRIPTION="The most foo of all bars." HOMEPAGE="http://www.foo.it/" REQUIRED_DEP="foobar-11-2" RECOMMENDED_DEP="foobus-1.7" OPTIONAL_DEP="foon-7.6a" Answer: Possible Problems Don't use exact paths. I do not have a user called 'duncan', nor do I wish to. Instead use ~/Documents/fakeroot. However, I would argue that this is the point of /tmp, as you should want to purge the files from the system afterwards. My Arch install doesn't come with SQL preinstalled, so I highly doubt LFS would (Sure you can install it but that's what your package manager is meant to do). This is poor design for a package manager IMO. verbosity and io_output can be replaced with logging. io_crate should be added to package.py rather than be an import. except should always have what you are guarding against! If you ^C out of the program, while it's in the try, it will continue execution. And print 'Couldn't read the database at ...'. And it will prevent sys.exit. read_crate shouldn't suppress the IOError. All that achieves is an index error when you expect an array as output. And a confusing trace-back. You should always close a file. A simple way to do this is with with. with open(package) as f: f.read() You should read *.crate line by line to make processing easier. with open(package) as f: for line in f: ... You should use the correct apostrophes for the correct content. '=.*?"|".*?\n' is much easier to read. You should change how for line in processed works. It makes no sense, as you don't go line by line, you go keyword, value, keyword, value, ... 
You should return a dictionary from read_crate, as then it is more reliable and extendable. md5_sum.py should be added to fakeroot.py. It's also debatable if package.py should too. It's a small program... md5_sum.py seems pretty good. But md5 is not sha1. I dislike your Fakeroot It might just be me, but self.dirs is horribly memory hungry. Each index will have package.root + root[...] and then its name. I would just use a generator. Let's not do os.path.join(package.root, root[...]) on every file. Just store it in a variable. If you make a generator for self.dirs, you won't need to use reversed. Instead pass topdown=False to os.walk. As you don't say how you use the program, I'm assuming that you do something like: root = Fakeroot(...) root.create_dirs() root.copy_files() root.create_links() ... root.remove_links() root.remove_files() root.remove_dirs() Instead I would recommend that you make a purge function. And do the 'create' stuff in the __init__, or another single function. Your comments are mostly pointless. # If it does not exist, and try: # try... are just bad. If you asked any programmer, and quite a few non-programmers, they would understand that if not os.path.exists() means the path does indeed not exist. It was hard to read your code; limit your lines to 79 characters for best support. As shown, you need to scroll here on CR, and some code went out of my editor... Extra points Simplify read_crate def read_crate(path): with open(path) as f: return { key: val for key, val in ( line.split('=', 1) for line in f ) } This leads to the simple usage in Package. self.crate = io_crate.read_crate(self.crate_path) self.description = self.crate['DESCRIPTION'] Making purge. This should be a simple os.walk in reverse. You shouldn't need to make more functions, as they'll be 1/2 liners. And would be more readable in the algorithm.
def purge(self): for root, dirs, files in os.walk(package.root, topdown=False): for name in files: path = os.path.join(root, name) try: os.remove(path) except ...: logging.warn('-- ' + path) else: logging.info('-- ' + path) if not os.path.islink(path): self.remove_from_db(path) for name in dirs: path = os.path.join(root, name) try: os.rmdir(path) except ...: logging.warn('-- ' + path) else: logging.info('-- ' + path) Note the use of Python's logger! It's a nice tool. You can even use it to save the logs to files! Copying the tree This is the opposite of purge, and so, I'd recommend you just do that. But the loop should have package.root + root[...] in the first for. Like: fakeroot_len = len(package.fakeroot_path) + 1 for root, dirs, files in os.walk(package.fakeroot_path): fakeroot = os.path.join(package.root, root[fakeroot_len:len(root)]) for name in files: ... for name in dirs: ...
{ "domain": "codereview.stackexchange", "id": 16881, "tags": "python, python-2.x, linux, installer" }
Simple Ratelimiter Feedback
Question: I'm preparing for coding interviews and I wanted to get feedback on this question which asks you to write a simple ratelimiter that can handle 5 requests every 2 seconds for example. Since it is for a coding interview setting, I don't think it needs to be perfect. I looked at the guava Ratelimiter for inspiration for this. Please let me know if I have made any errors or any improvements that can be made. Any feedback is appreciated. public class RateLimit { private double maxCapacity; private double refillRate; private double availableTokens; private long lastRefill; public RateLimit(double maxCapacity, long windowInSeconds) { this.maxCapacity = maxCapacity; this.lastRefill = Instant.now().toEpochMilli(); this.refillRate = TimeUnit.SECONDS.toMicros(windowInSeconds) / maxCapacity ; this.availableTokens = maxCapacity; } public boolean acquire() { return acquire(1, true); } public boolean acquire(double requiredPermits) { return acquire(requiredPermits, true); } public boolean tryConsume(double requiredPermits) { return acquire(requiredPermits, false); } private synchronized boolean acquire(double requiredToken, boolean shouldBlock) { refill(); boolean acquired = false; if (availableTokens >= requiredToken) { availableTokens -= requiredToken; acquired = true; } else if (shouldBlock) { double missingTokens = requiredToken - availableTokens; try { TimeUnit.MICROSECONDS.sleep((long) (refillRate * missingTokens)); availableTokens = 0D; lastRefill = Instant.now().toEpochMilli(); acquired = true; } catch (InterruptedException e) { System.out.println("Sleep interrupted"); } } return acquired; } private void refill() { long now = Instant.now().toEpochMilli(); if (now > lastRefill) { long elapsedTime = now - lastRefill; double tokensToBeAdded = Math.floor(TimeUnit.MILLISECONDS.toMicros(elapsedTime) / refillRate); if (tokensToBeAdded > 0) { availableTokens = Math.min(maxCapacity, availableTokens + tokensToBeAdded); lastRefill = now; } } } Answer: Testability
Whilst not a prerequisite for a review, @Marc is correct that unit tests would be helpful here and you would usually want to include them in your exercise submission. You want to demonstrate both that you know the code works and, as importantly, that you've considered how easy it is to test. Time in particular can be challenging to unit test if classes have not been constructed with it as a consideration. You may wish to take a look at this Stack Overflow question for some suggestions about how you could improve your time testability. Naming One of the things that struck me as a little odd was that your blocking methods are called acquire and your non-blocking method is called tryConsume. I'm not sure I understand the disconnect; is there a reason it couldn't be called tryAcquire? Have you been asked to provide both blocking and non-blocking versions of the limiter as part of the exercise? final Fields that don't change after they've been initialized (such as refillRate) should be marked as final to indicate that you're not planning on changing them. Starvation and Flooding Let's take your example of 5 requests every 2 seconds. If I call acquire(500), nothing happens for 200 seconds, then 500 messages get sent (probably within a single second). Is this the expected behaviour? During the wait time, while the required number of allocations is being waited on, no other threads are able to acquire and send either. It's possible that this is expected/desired behaviour; however, it seems like a potential flaw, and particularly since a greedy blocking call could mean that the non-blocking request is never successful, it is something I'd expect them to ask about if you get to interview. Things to consider: do you need to allow a single call to reserve X permits, or would it be better to have them acquire single permits at a time as they send each message?
If you need to support multiple permits, should there be a limit on the maximum number of permits that can be requested in one go (such as the size of the window or maxCapacity).
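The testability point above can be made concrete by injecting the clock. This is a Python sketch of the same token-bucket idea (not the author's Java code), with a fake clock making the tests deterministic:

```python
import time

class TokenBucket:
    """Token-bucket limiter with an injectable clock so tests are deterministic."""

    def __init__(self, max_capacity, window_seconds, clock=time.monotonic):
        self.max_capacity = float(max_capacity)
        self.refill_per_second = max_capacity / float(window_seconds)
        self.tokens = float(max_capacity)
        self.clock = clock                 # time.monotonic in production
        self.last_refill = clock()

    def _refill(self):
        now = self.clock()
        self.tokens = min(self.max_capacity,
                          self.tokens + (now - self.last_refill) * self.refill_per_second)
        self.last_refill = now

    def try_acquire(self, permits=1):
        # Non-blocking: refill based on elapsed time, then consume if possible.
        self._refill()
        if self.tokens >= permits:
            self.tokens -= permits
            return True
        return False

class FakeClock:
    """Deterministic clock: tests advance time explicitly."""
    def __init__(self):
        self.t = 0.0
    def advance(self, dt):
        self.t += dt
    def __call__(self):
        return self.t

clock = FakeClock()
bucket = TokenBucket(max_capacity=5, window_seconds=2, clock=clock)
burst = [bucket.try_acquire() for _ in range(6)]   # the sixth call should fail
```

The same pattern carries over to Java by injecting a `Clock` (or ticker) through the constructor instead of calling `Instant.now()` directly.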
{ "domain": "codereview.stackexchange", "id": 41210, "tags": "java, thread-safety" }
Is there any mechanism that prevents DNA of an eaten entity's cell from affecting that of own?
Question: When we eat food, I heard that our digestive enzymes disassemble the DNA of the eaten cells so that the DNA cannot affect us. Then, is there any mechanism (that prevents the eaten cell's DNA from affecting us genetically) at the cell-scale level? Or is it only at the organ-system level (e.g. the digestive system)? It seems that germs and viruses don't have the mechanism, since we use them as vectors for genetic transfer. But I want to know if nucleated cells have the mechanism. Answer: Cells have cell membranes and sometimes cell walls that keep random molecules from entering the cell. Only small, uncharged molecules can pass freely through the membrane. Complex structures of proteins embedded in the cell membrane regulate the passage of charged or large molecules. Pathogens like viruses or bacteria have developed equally complex structures of proteins that can fool the proteins in the cell membrane to gain access to the interior of the cell. In other words, the cell membrane is almost always going to keep naked DNA from entering the cell, but viruses can get around this by packaging their DNA in protein capsules that can compromise the cell membrane. To create GMOs, viruses are often used to transport the foreign DNA into the cell. Alternatively, the cell membrane can be attacked with chemicals, high voltages, or even mechanically punched holes that allow DNA to pass through the membrane. Bacterial cells will sometimes allow the passage of naked DNA, but this is very rare in plant and animal cells.
{ "domain": "biology.stackexchange", "id": 8432, "tags": "genetics, cell-biology, microbiology" }
Scanning from Book - Gradient Removal
Question: This sample image: is warped on the left side. I don't want to dewarp it, but just to remove the shadow gradient (caused by the book's spine). I guess the gradient can be calculated from the upper part of the image, and that information can be used for reconstruction. First I thought of applying some layer transformation in Photoshop. I took the upper part with a clear gradient and scaled it vertically to cover the whole image in a new layer. Then I did a layer difference: but while the upper part is acceptably reconstructed, the colored part is not. Does anyone have an idea how to approach this in Photoshop, or Python/SciPy/OpenCV, or Matlab? Answer: The simplest approach would be to divide by the gradient rather than subtract it. Here's what the result looks like:
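A minimal NumPy sketch of the divide-out idea. Estimating the shading from the clean top rows is an assumption on my part; the answer doesn't specify how the gradient is obtained:

```python
import numpy as np

def remove_shading(img, flat_rows=20):
    """Divide out an illumination gradient estimated from the top rows.

    img: 2-D float array in [0, 1]; the top `flat_rows` rows are assumed to
    show blank paper, so their column-wise mean approximates the shading.
    """
    shading = img[:flat_rows].mean(axis=0)     # per-column illumination estimate
    shading = np.clip(shading, 1e-6, None)     # avoid division by zero
    return np.clip(img / shading[np.newaxis, :], 0.0, 1.0)

# Synthetic demo: a flat page with a shading ramp that darkens toward the spine.
page = np.full((100, 50), 0.9)
ramp = np.linspace(1.0, 0.4, 50)
scanned = page * ramp[np.newaxis, :]
restored = remove_shading(scanned)             # blank paper normalizes to white
```

Note that division normalizes blank paper to 1.0 (white), whereas subtraction would leave a multiplicative residue on the darker columns, which is why the colored part failed in the layer-difference attempt.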
{ "domain": "dsp.stackexchange", "id": 762, "tags": "image-processing" }
Energy density, pressure and temperature for massive neutrinos in cosmology
Question: I want to be able to numerically compute the mean energy density and pressure for a massive neutrino species in cosmology, at any given scale factor $a$. These are given in terms of the distribution function $f(a, p)$ as $$ \rho(a) = 2\int \frac{4\pi p^2 dp}{(2\pi)^3} f(a,p)\sqrt{m^2 + p^2}\,, $$ $$ P(a) = 2\int \frac{4\pi p^2 dp}{(2\pi)^3} f(a,p)\frac{p^2}{3\sqrt{m^2 + p^2}}\,, $$ where $m$ is some mass. In the early Universe, typical momenta were much larger than the mass, $\sqrt{\langle p^2\rangle}\gg m$ (the neutrinos are relativistic), in which case the Fermi-Dirac distribution of the neutrinos was $$ f(a, p) = \biggl[\exp\biggl(\frac{p}{T(a)}\biggr) + 1\biggr]^{-1} $$ with some temperature $T(a)$. At later times, the neutrinos keep this distribution but with a decreasing $T(a)$. I know the temperature at early times, and so I can compute $\rho(a)$ and $P(a)$ at these times. As long as the neutrinos remain relativistic, the temperature goes like $T(a)\propto a^{-1}$. When they are completely non-relativistic, I think this changes to $T(a)\propto a^{-2}$, and so there exists some transition period in which $T(a)$ goes from one behavior to the other. How can I calculate this behavior, so that I can define $f$ at any time $a$ and ultimately compute $\rho(a)$ and $P(a)$ for all $a$, without treating the neutrinos as just ultra-relativistic or completely non-relativistic? Answer: It turns out that $T(a)\propto a^{-1}$ holds for all $a$, even at non-relativistic times. This can e.g. be seen from computing the number density $n(a)$ (simply the integral over $f$ without any additional weight), from which one finds $n\propto T^3$. After decoupling, conservation of neutrino number implies $n\propto a^{-3}$ regardless of whether the neutrinos are relativistic or not, and so $T\propto a^{-1}$.
The confusion about $T\propto a^{-2}$ stems from the fact that this relation holds for a non-relativistic species in thermal equilibrium, which the neutrinos are not after the time of decoupling.
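Both claims are easy to check numerically. A sketch in natural units (the prefactor $2\cdot 4\pi/(2\pi)^3$ is omitted since it cancels in ratios; the grid parameters are arbitrary choices):

```python
import numpy as np

def _trapz(y, x):
    # Plain trapezoidal rule, kept explicit for portability.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def rho_P_n(T, m, pmax_factor=30.0, npts=20001):
    """Energy density, pressure and number density for the Fermi-Dirac
    distribution f = 1/(exp(p/T) + 1); overall prefactor omitted."""
    p = np.linspace(0.0, pmax_factor * T, npts)   # f is negligible beyond ~30 T
    f = 1.0 / (np.exp(p / T) + 1.0)
    E = np.sqrt(m**2 + p**2)
    rho = _trapz(p**2 * f * E, p)
    P = _trapz(p**4 * f / (3.0 * np.maximum(E, 1e-300)), p)
    n = _trapz(p**2 * f, p)
    return rho, P, n
```

With $m = 0$ this reproduces $\rho = 3P$, and doubling $T$ multiplies $n$ by $8$, confirming $n\propto T^3$; with $m\gg T$ the pressure becomes negligible compared to $\rho$, as expected for a non-relativistic species.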
{ "domain": "physics.stackexchange", "id": 53258, "tags": "general-relativity, statistical-mechanics, cosmology, neutrinos, gravitational-redshift" }
Where exactly will a hydraulic jump occur?
Question: I know that a hydraulic jump can occur only when the Froude number is higher than 1 (supercritical flow), and I am familiar with the Bélanger equation. However, I feel like I have a fundamental misunderstanding about hydraulic jumps: if a jump can occur when the Froude number is larger than 1, what prevents the flow in rivers from "jumping" immediately whenever its Froude number becomes slightly more than 1? I mean, how can a fluid develop a very high Froude number without making a hydraulic jump? According to what I read, in industrial applications such jumps are specially induced on the fluid to dissipate its energy. So I want to understand how it's done. Answer: First, a hydraulic jump is where fluid will transition from supercritical (having a Froude number greater than one) to subcritical (having a Froude number less than one). When the Froude number is near one, the jump is very weak and somewhat gradual such that you might not even be able to tell it is "jumping". To figure out where the jump occurs you need to know where the Froude number makes this transition. To help with this is an extremely useful equation for modeling open channel flow: $$\frac{dy}{dx}=S\frac{1-\left(\frac{y_n}y\right)^3}{1-\left(\frac{y_c}y\right)^3}$$ Where $y$ represents the depth of flow in a riverbed, $x$ is the distance along the flow, $S$ is the slope of the riverbed (positive being flowing downhill, negative, flowing uphill), $y_c$ is the depth of flow corresponding to $Fr=1$ and $y_n$ is the normal depth, the depth of flow corresponding to flow that is balancing the frictional losses with gravitational gains. $$y_c=\sqrt[3]{\frac{q^2}g}$$ $$y_n=\sqrt[3]{\frac{q^2}{C^2S}}$$ Where $q$ is the flow rate per width of river, $C$ is Chezy's coefficient (approximated as constant), and $g$ is acceleration due to gravity. A depth greater than $y_c$ corresponds to a slower velocity, and thus subcritical flow (having a Froude number less than 1).
When this is the case, the bottom of the fraction is positive, lying between 0 and 1. Similarly, when the depth is less than $y_c$ the flow is supercritical and the bottom of the fraction is negative. In this range, therefore, if the top of the fraction is also negative, then the flow will be getting deeper, but if it is positive, the flow will be getting shallower. This corresponds to a depth of flow less than and greater than $y_n$ respectively. This means that while the flow is supercritical, the depth of flow will always be moving towards $y_n$. This means that if you have a steep slope such that $y_n$ is less than $y_c$, the depth of flow will move to and then stay at $y_n$. If you would like the flow to have a hydraulic jump, simply decrease the slope until $y_n$ is greater than $y_c$; then the flow will deepen at progressively faster rates until you reach the hydraulic jump. Note that since $q$ is based on your width, it's also possible to modify the width rather than the slope to cause $y_n$ to be greater than $y_c$. But wait! After the hydraulic jump the flow will be subcritical, which means the flow downstream can now influence the water level and push the hydraulic jump upstream. How far upstream? Depends on the flow downstream. In subcritical flows it is easiest to work upstream. If you try to work downstream from an initial guess, you'll end up with non-physical flows unless you happen to guess perfectly. This is because in subcritical flows, as you move downstream, the depth moves away from the normal depth. If you're less than the normal depth, you can quickly end up with a negative depth, and if you're more than the normal depth you can end up with a depth that would overflow your walls quite quickly. However, if you work backwards, going upstream, then the depth always moves towards the normal depth, which should always give a valid depth. So then the question is where to get your initial condition? Downstream of subcritical flow, and upstream of supercritical flow.
The opposite transition of a hydraulic jump. This transition is always smooth and thus according to the equation always happens when $y=y_n=y_c$ with $y_n$ decreasing downstream. An extreme example would be the lip of a waterfall or the edge of an overflowing dam. For a given flow rate, the depth will always be easy to predict at these choke points. As an added bonus, the slope at these choke points is largely flow-rate independent: $$y_c=y_n$$ $$\sqrt[3]{\frac{q^2}g}=\sqrt[3]{\frac{q^2}{C^2S}}$$ $$\frac1g=\frac1{C^2S}$$ $$S=\frac{g}{C^2}$$ So anytime the slope goes from shallower than $\frac{g}{C^2}$ to steeper there might be a transition to supercritical flow right there, and anytime the slope goes from steeper than $\frac{g}{C^2}$ to shallower there might be a hydraulic jump a bit upstream. I say might because it is possible that a choke point further downstream is causing the depth to be too high to cause any transitions (you could be at the bottom of a lake, for example). However, if the slope is constant and shallower than $\frac{g}{C^2}$ for a long time (many miles), then odds are, near the top of this section the depth will have converged to $y_n$. So by this shortcut you don't have to start all the way back from a choke point. In either case, you can model the flow going upstream from the subcritical section and downstream from the supercritical section. Then, having modeled the supercritical section, you can plot vs position the depth of flow that would result from a hydraulic jump at that location. Any time that plot intersects the subcritical plot is a location where a hydraulic jump could be stable. Equations are derived in full in MIT's OpenCourseWare chapter on open channels. My analysis of those equations was also guided by the analysis presented there.
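The key quantities from the answer fit in a few lines. A sketch; the values of $q$ and $C$ are arbitrary illustrations:

```python
def critical_depth(q, g=9.81):
    """y_c = (q^2 / g)^(1/3): depth at which Fr = 1."""
    return (q**2 / g) ** (1.0 / 3.0)

def normal_depth(q, C, S):
    """y_n = (q^2 / (C^2 S))^(1/3): depth where friction balances gravity."""
    return (q**2 / (C**2 * S)) ** (1.0 / 3.0)

def critical_slope(C, g=9.81):
    """Slope at which y_n = y_c for any flow rate: S = g / C^2."""
    return g / C**2

q, C = 2.0, 50.0                 # m^2/s per unit width; Chezy coefficient
S_crit = critical_slope(C)       # ~0.0039
y_c = critical_depth(q)
y_steep = normal_depth(q, C, 2.0 * S_crit)   # steep slope: y_n < y_c, supercritical
y_mild = normal_depth(q, C, 0.5 * S_crit)    # mild slope: y_n > y_c, jump possible
```

Comparing $y_n$ with $y_c$ along the channel is exactly the test the answer describes for deciding whether a reach can sustain supercritical flow or must jump.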
{ "domain": "engineering.stackexchange", "id": 1559, "tags": "fluid-mechanics, hydraulics, open-channel-flow" }
Stepwise regression
Question: I think that both forward selection and backward selection should give the same results if the evaluation model is deterministic and using the same variables gives the same results. Is this true? If so, what are the reasons for choosing one method over the other? Answer: No, it's not true. There is no guarantee that they will give the same results. They are a heuristic. Neither one is guaranteed to find the optimal subset of features, and they might produce two different subsets. For instance, the effectiveness of a set of features might depend in some nonlinear and complicated way on the particular set you choose. So, in general, the problem is just the general optimization problem of finding a set $S \subseteq \{1,2,\dots,n\}$ that maximizes $f(S)$, where $f$ is some function (here, the accuracy of the model on a cross-validation set). Since $f$ can be arbitrary, forward selection and backward selection can give different results, and neither is guaranteed to yield the optimal set of features.
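To see this concretely, here is a contrived but fully deterministic score $f(S)$ on three features (the numbers are purely illustrative): 'b' and 'c' are weak alone but interact strongly, while 'a' is the best single feature. Greedy forward and backward selection of a 2-feature subset then disagree:

```python
# Deterministic "evaluation" score for each feature subset.
SCORES = {
    frozenset(): 0.0,
    frozenset("a"): 3.0, frozenset("b"): 1.0, frozenset("c"): 1.0,
    frozenset("ab"): 3.5, frozenset("ac"): 3.5, frozenset("bc"): 5.0,
    frozenset("abc"): 5.2,
}

def score(s):
    return SCORES[frozenset(s)]

def forward_select(features, k):
    """Greedily add the feature that most improves the score, k times."""
    chosen = set()
    for _ in range(k):
        best = max((f for f in features if f not in chosen),
                   key=lambda f: score(chosen | {f}))
        chosen.add(best)
    return chosen

def backward_select(features, k):
    """Greedily drop the feature whose removal hurts the score least."""
    chosen = set(features)
    while len(chosen) > k:
        worst = max(chosen, key=lambda f: score(chosen - {f}))
        chosen.remove(worst)
    return chosen

fwd = forward_select("abc", 2)   # grabs 'a' first, so it never reaches {'b', 'c'}
bwd = backward_select("abc", 2)  # drops 'a', keeping the interacting pair
```

Forward selection commits to 'a' early and ends with a score of 3.5, while backward selection keeps the interacting pair {'b', 'c'} scoring 5.0; both are deterministic, yet they disagree, and neither search is guaranteed to find the global optimum.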
{ "domain": "cs.stackexchange", "id": 14938, "tags": "machine-learning" }
Can a free falling observer localize the event horizon by calculations?
Question: I think that in general relativity we can always transform a curve from one coordinate system to another. My intuition says that a freely falling observer would locate the event horizon as a light-like curve. Is my intuition right? I know that nothing special happens at the event horizon for a free-falling observer, so can he measure the event horizon? Does this imply any issues with the complementarity principle?
{ "domain": "physics.stackexchange", "id": 19493, "tags": "general-relativity, black-holes, quantum-information, coordinate-systems, black-hole-firewall" }
What is $\hat{p}|x\rangle$?
Question: Trying to solve a QM harmonic oscillator problem I found out I need to calculate $\hat{p}|x\rangle$, where $\hat{p}$ is the momentum operator and $|x\rangle$ is an eigenstate of the position operator, with $\hat{x}|x\rangle = x|x\rangle$. It turns out that $\hat{p}|x\rangle$ should be equal to $-i\hbar\frac{ \partial |x\rangle }{\partial x}$. But what does this even mean? I'm not sure what to do with an expression like that. If I try to find $\langle x | \hat{p} | x_0 \rangle$, I end up with $-i\hbar \frac{\partial}{\partial x} \delta(x-x_0)$, which I'm pretty sure is not allowed even in physics, where we would scoff at phrases like "$\delta$ is really a distribution and not a function". Edit: Motivation. I have solved Heisenberg's equation and found, for a Hamiltonian $H = p^2/2m + m\omega^2x^2/2$, that $\hat{x}(t) = \hat{x}_0 \cos(\omega t) + \frac{\hat{p}_0}{m\omega} \sin(\omega t)$. I am given that the initial state is $|\psi(0)\rangle = |x_0\rangle$, i.e., an eigenstate of position, and I have a hunch that by finding $\hat{x}(t)|x_0\rangle$ I might find a way to get $|\psi(t)\rangle$. But this is not what I need help with; I am now curious as to what $\hat{p}|x\rangle$ might be, never mind my original problem. Answer: The formula you quote is sort of correct, but I would encourage you to flip it around. More specifically, the old identification of momentum as a derivative means that $$\langle x |\hat p|\psi\rangle=-i\hbar\frac{\partial}{\partial x}\langle x |\psi\rangle,$$ and from this you can 'cancel out' the $\psi$ to get $$\langle x |\hat p=-i\hbar\frac{\partial}{\partial x}\langle x |.$$ (More specifically, these are both linear functionals from the relevant Hilbert space into $\mathbb C$ and they agree on all states in the Hilbert space, so they must agree as linear functionals.) 
This is the form that you actually use in everyday formulations, so it makes the most sense to just keep it this way, though you can of course take its hermitian conjugate to find an expression for $\hat p|x\rangle$. Regarding your discomfort at the derivative of a delta function, I should assure you it is a perfectly legitimate object to deal with. It is best handled via integration by parts: for any well-behaved function $f$ $$ \int_{-\infty}^\infty \delta'(x)f(x)\text dx = \left.\delta(x)f(x)\vphantom\int\right|_{-\infty}^\infty-\int_{-\infty}^\infty \delta(x)f'(x)\text dx = -f'(0). $$ In essence, this defines a (different) distribution, which can legitimately be called $\delta'$. It is this distribution which shows up if you substitute $|\psi\rangle$ for $|x_0\rangle$ in my first identity: $$\langle x |\hat p|x_0\rangle=-i\hbar\delta'(x-x_0).$$
{ "domain": "physics.stackexchange", "id": 13935, "tags": "quantum-mechanics, operators" }
Is there a natural restriction of VO logic which captures P or NP?
Question: The paper Lauri Hella and José María Turull-Torres, Computing queries with higher-order logics, TCS 355 197–214, 2006. doi: 10.1016/j.tcs.2006.01.009 proposes logic VO, variable-order logic. This allows quantification over orders over the variables. VO is quite powerful and can express some non-computable queries. (As pointed out by Arthur Milchior below, it actually captures the whole of the analytical hierarchy.) The authors show that the fragment of VO obtained by allowing only bounded universal quantification over the order variables exactly expresses all c.e. queries. VO allows the order variables to range over the natural numbers, so bounding the order variables is clearly a natural condition to impose. Is there a (nice) fragment of VO that captures P or NP? As an analogy, in classical first-order logic allowing quantification over sets of objects gives a more powerful logic called second-order logic or SO. SO captures the whole of the polynomial hierarchy; this is usually written as PH = SO. There are restricted forms of SO capturing important complexity classes: NP = $\exists$SO, P = SO-Horn, and NL = SO-Krom. These are obtained by imposing restrictions on the syntax of allowed formulas. So there are straightforward ways to restrict SO to obtain interesting classes. I would like to know if there are similar straightforward restrictions of VO that are roughly the right level of expressivity for P or NP. If such restrictions are not known I would be interested in suggestions for likely candidates, or in some arguments why such restrictions are unlikely to exist. I have checked the (few) papers that cite this one, and checked the obvious phrases on Google and Scholar, but found nothing obviously relevant. Most of the papers dealing with logics more powerful than first-order don't seem to deal with restrictions to bring down the power into the realm of "reasonable" computations, but seem content to dwell in the c.e. 
universe of arithmetic and analytic classes. I'd be happy with a pointer or a non-obvious phrase to search on; this might be well known to someone working in higher-order logics. Answer: Note: This is not really answering the question, this is just some comments posted as an answer. :) Note that in VO, one is defining sets over the set of natural numbers (similar to sets defined in recursion theory), whereas in the descriptive complexity setting (SO, $\exists$-SO, SO-Horn) we are talking about finite structures. An SO formula in the former setting will give not $PH$ but the whole analytical hierarchy, as Arthur Milchior has written in his answer. IMHO, a better comparison would be with bounded arithmetic theories. I don't think you can get below c.e. sets unless you bound all quantifiers to finite domains; to get $P$ or $NP$, the size of the domains would have to be very small. Is the presence of one unbounded quantifier enough to capture c.e. sets? The problem is that you probably want the language to be without extra symbols like equality, addition, and multiplication (right?). If we had them, then by the MRDP theorem, Diophantine formulas (first-order existential quantifiers in front of an equality of two polynomials) would capture c.e. sets. If we are not allowing these symbols in the language, the problem is more complicated: one can use higher-order quantifiers to define them, but that would increase the quantifier complexity. So if I want to give a short answer to your question about a single quantifier: I don't know. If we can express $AC^0$ relations in the language, then a single unbounded existential quantifier would suffice for capturing c.e. sets; the reason is that $AC^0$ can check that a string $c$ is the computation of a TM $e$ on input $x$. First-order formulas bounded by polynomials in the presence of equality, addition, and multiplication capture PH, so if we have them in the language the answer is positive, but as I said you are probably looking for a language without these symbols.
Some additional comments: Assume that we have a restriction of VO which can express at least $AC^0$. Then a single unbounded existential quantifier of number type in front of those restricted formulas will give the whole c.e. sets.
{ "domain": "cstheory.stackexchange", "id": 218, "tags": "cc.complexity-theory, complexity-classes, lo.logic, computability, descriptive-complexity" }
Is there any way to plot ROC curves from Weka?
Question: I am using some algorithms from Weka. I would like to plot the ROC curves of some algorithms for comparison. Is it possible, and how? Answer: In the Weka Explorer, go to the Classify tab and train/test your algorithm. The result buffer appears in the bottom-left box, under the section labeled "Result list". Right-click the result buffer, click "Visualize threshold curve", then select the class you want to analyze. To save the ROC curve as an image, hold Shift + Alt and left-click on the graph.
{ "domain": "datascience.stackexchange", "id": 1792, "tags": "machine-learning, data-mining, dataset, java, weka" }
Matrix of Matrices in Python
Question: I want to create a matrix where each entry itself is a random matrix. What would be a good way to represent this? It is not necessary, but some hints on how to implement your proposed solution in Python would be useful. Answer: What you are looking for is just a higher-dimensional array. Let's say you want a 5x5 array where each element is a 2x2 array of random numbers. You can easily represent that as a 5x5x2x2 array. If you are happy using NumPy, import numpy A = numpy.random.rand(5,5,2,2) will create a 5x5x2x2 array. You can use the first two indices to access your 2D random arrays. print(A[1,1]) results in something like: [[ 0.68527006 0.37304491] [ 0.23281899 0.46951261]]
{ "domain": "cs.stackexchange", "id": 6161, "tags": "programming-languages, matrices" }
Grabbing city names and numbers from a CSV file
Question: I've got information that is imported from a CSV file that my site grabs every day with PHP. I'm just learning regex, so I'm able to do what I need to do, but I'm looking to get more efficient with my coding. Some examples of the kinds of strings that would come in would be: 15 SE NORFOLK 5 NNE OAKLAND 1 S LOS ANGELES 1 NW SACRAMENTO BOSTON It's basically numbers, then directional letters, then city name. Sometimes there aren't numbers and letters, so I'm checking to see first and then just deleting them with preg_replace if there are (I just need the city name). Here's the expression I have that works: $location = preg_replace('/^[\d]+[\s]+[a-zA-z]+[\s]/', '', $string); I know with regex there are a bunch of different ways to do different things, so I'm curious if there's a more efficient way to do what I'm doing. Answer: Leave out the square brackets: ^\d+\s+[A-Z]+\s+ Square brackets are used for "character classes", like [A-Z] which matches exactly one letter out of the range A to Z. \d already is a class: all the numbers.
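For anyone who wants to sanity-check the simplified pattern, the same expression behaves identically under Python's re module (the character-class syntax used here is the same in PCRE and Python):

```python
import re

# The answer's simplified pattern, tried against the sample strings
# from the question.
pattern = re.compile(r'^\d+\s+[A-Z]+\s+')
samples = ["15 SE NORFOLK", "5 NNE OAKLAND", "1 S LOS ANGELES",
           "1 NW SACRAMENTO", "BOSTON"]
for s in samples:
    print(pattern.sub('', s))
# NORFOLK, OAKLAND, LOS ANGELES, SACRAMENTO, BOSTON
```

The prefix is stripped only when a leading number and directional letters are present, so "BOSTON" passes through unchanged.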
{ "domain": "codereview.stackexchange", "id": 8534, "tags": "php, regex, csv" }
Domain events in the unit of work pattern
Question: I've used the unit of work pattern to wrap my business logic. (Note that the application has three states: Logedout, LogedIn, Loaded) public class LogoutUnitOfWork { public void Execute() { new UnloadUnitOfWork().Execute(false); // navigate to the "logout view" // do some business related work // navigate to the "login view" (Logedout state) } } public class UnloadUnitOfWork { public void Execute(bool navigate = true) { // navigate to the "unloading view" // do some business related work _legacyService.Unload(); if(navigate) { // => navigate to the "logedin view" (LogedIn state) } } } // not relevant for now public class LoadUnitOfWork {} public class LoginUnitOfWork {} Until now none of the work units was directly listening to an event. I have a place where I listen to domain events and execute a unit of work if necessary. public void ConfigureApplicationEvents() { _eventAggregator.GetEvent<TimeoutEvent>().Subscribe(() => { new UnloadUnitOfWork().Execute(); } ... } Or I execute these units of work from the UI (when a button was clicked). But I have a legacy dependency which also publishes events. Now I have to listen to these events and then navigate to the view which represents the new state. The problem is that I do not know where the event came from. _legacyService.Unloading += (sender, args) => { // navigate to the "unloading view" }; _legacyService.Unloaded += (sender, args) => { // where should I navigate to? // I need to know if the event came from the LogoutUnitOfWork, UnloadUnitOfWork or from the legacy dependency directly.
}; At the moment I solve this like the following: var setShouldNavigateQueue = new Queue<Action>(); var shouldNavigate = true; _legacyService.Unloading += (sender, args) => { // navigate to the "unloading view" }; _legacyService.Unloaded += (sender, args) => { // do some business related work if (shouldNavigate) { // => navigate to the "logedin view" (LogedIn state) } // as the legacy service reports back after the unit of work events // we execute the actions that have been queued // (should be only one which sets `shouldNavigate` to the default value) while (setShouldNavigateQueue.Any()) { setShouldNavigateQueue.Dequeue().Invoke(); } }; // these events come from the unit of work _eventAggregator.GetEvent<UnloadingEvent>().Subscribe(navigateToTarget => { // the unit of work knows if a navigation should occur after the unloading // this navigation won't happen for example when the `LogoutUnitOfWork` was // executed because it will navigate by itself shouldNavigate = navigateToTarget; }); _eventAggregator.GetEvent<UnloadedEvent>().Subscribe(() => { // the unit of work reports first that the unloading is done // push the action which sets the `shouldNavigate` to the default value into the queue setShouldNavigateQueue.Enqueue(() => shouldNavigate = true); }); public class LogoutUnitOfWork { public void Execute() { new UnloadUnitOfWork().Execute(false); // navigate to the "logout in progress view" // do some business related work // navigate to the "login view" (Logedout state) } } public class UnloadUnitOfWork { public void Execute(bool navigate = true) { _eventAggregator.GetEvent<UnloadingEvent>().Publish(navigate); // navigate to the "unloading in progress view" // do some business related work _legacyService.Unload(); _eventAggregator.GetEvent<UnloadedEvent>().Publish(); } } But this is too hard to read and could be misleading to others. How could I change my code to make it simpler? Should I wait and also listen for these legacy events in my unit of work?
(I would find this strange, as the unit of work then could start by itself, which is a concept not yet found in my app.) PS: If I left out any information you need, or if I wrote anything confusing, please let me know so that I can improve this post and my writing style. Answer: I solved my problem by checking the current active view and only doing the navigation when the "unloading view" is active. Because the UnloadUnitOfWork calls the CleanUp method (which does the navigation) before the event of the legacy service, I can decide there where I need to navigate to. public class LogoutUnitOfWork { public void Execute() { new UnloadUnitOfWork().Execute(false); // navigate to the "logout view" // do some business related work // navigate to the "login view" (Logedout state) } } public class UnloadUnitOfWork { public void Execute(bool navigate = true) { // navigate to the "unloading view" // do some business related work _legacyService.Unload(); CleanUp(navigate); } public void CleanUp(bool navigate) { // this method is called twice // in case the unloading happened through an // event from the legacy service it will only be called once if(!IsUnloadingViewActive()) return; // do some business related work if(navigate) { // => navigate to the "logedin view" (LogedIn state) } } } _legacyService.Unloading += (sender, args) => { // navigate to the "unloading view" if not yet active }; _legacyService.Unloaded += (sender, args) => { new UnloadUnitOfWork().CleanUp(true); }; Maybe I'm going to introduce state management to remove the dependency on the views. I've now added state management. The units of work won't be called from the events or UI anymore. They will only be called by the states. Furthermore I've added a so-called IApplicationTransitionState which will persist details about why and how the state change was called. View-specific code is implemented in these IApplicationTransitionState's. Business-specific code will still be implemented in the UnitOfWork's.
public class StateContext : IApplicationState { private IApplicationState _currentState; private IApplicationTransitionState _transitionState; public bool IsInTransition => !(_transitionState?.HasEnded ?? true); public static StateContext Instance { get; } = new StateContext(); private StateContext() { } public void Login() => TransiteState(() => _currentState.Login()); public void Logout() => TransiteState(() => _currentState.Logout()); public void Load() => TransiteState(() => _currentState.Load()); public void Unload() => TransiteState(() => _currentState.Unload()); public bool CanLogin() => _currentState.CanLogin(); public bool CanLogout() => _currentState.CanLogout(); public bool CanLoad() => _currentState.CanLoad(); public bool CanUnload() => _currentState.CanUnload(); public void EndTransition() { _transitionState.EndTransition(); } private void ThrowIfInTransition() { if(IsInTransition) throw new Exception(); } private void TransiteState(Func<IApplicationTransitionState> transitAction) { ThrowIfInTransition(); _transitionState = transitAction.Invoke(); Task.Run(_transitionState.StartTransition); } } public interface IApplicationState { IApplicationTransitionState Login(); IApplicationTransitionState Logout(); IApplicationTransitionState Load(); IApplicationTransitionState Unload(); bool CanLogin(); bool CanLogout(); bool CanLoad(); bool CanUnload(); } public interface IApplicationTransitionState { bool HasEnded { get; set; } void StartTransition(); void EndTransition(); } The implementation is the following. I've left out the LoginTransitionState and LoadingTransitionState.
public enum UnloadReason { LoginOut, Unloading, Event } public enum LoginReason { Login, Loading } public class StartedState : IApplicationState { public IApplicationTransitionState Load() => new LoginTransitionState { Reason = LoginReason.Load }; public IApplicationTransitionState Login() => new LoginTransitionState { Reason = LoginReason.Login }; public IApplicationTransitionState Logout() => throw new Exception(); public IApplicationTransitionState Unload() => throw new Exception(); public bool CanUnload() => false; public bool CanLogout() => false; public bool CanLoad() => true; public bool CanLogin() => true; } public class LogedInState : IApplicationState { public IApplicationTransitionState Load() => new LoadingTransitionState(); public IApplicationTransitionState Logout() => new LogoutTransitionState(); public IApplicationTransitionState Unload() => throw new Exception(); public IApplicationTransitionState Login() => throw new Exception(); public bool CanLogin() => false; public bool CanUnload() => false; public bool CanLoad() => true; public bool CanLogout() => true; } public class LoadedState : IApplicationState { public IApplicationTransitionState Logout() => new UnloadingTransitionState { Reason = UnloadReason.LoginOut }; public IApplicationTransitionState Unload() => new UnloadingTransitionState { Reason = UnloadReason.Unloading } public IApplicationTransitionState Login() => throw new Exception(); public IApplicationTransitionState Load() => throw new Exception(); public bool CanLogin() => false; public bool CanLoad() => false; public bool CanLogout() => true; public bool CanUnload() => true; } public class LoginTransitionState : IApplicationTransitionState { } public class LoadingTransitionState : IApplicationTransitionState { } public class LogoutTransitionState : IApplicationTransitionState { public bool HasEnded { get; set; } public void StartTransition() { // navigate to the "login out view" new LogoutUnitOfWork().Execute(); } public void 
EndTransition() { HasEnded = true; // navigate to the "login view" } } public class UnloadingTransitionState : IApplicationTransitionState { public bool HasEnded { get; set; } public UnloadReason Reason { get; set; } public void StartTransition() { // navigate to the "unloading view" new UnloadUnitOfWork().Execute(); } public void EndTransition() { HasEnded = true; if(Reason != UnloadReason.LoginOut) { // navigate to the "load view" } else { StateContext.Instance.Logout(); } } } public class LogoutUnitOfWork { public void Execute() { if(StateContext.Instance.CanUnload()) StateContext.Instance.Unload(); // logout } } public class UnloadUnitOfWork { public void Execute() { _legacyService.Unload(); } } _legacyService.Unloading += (sender, args) => { if(StateContext.Instance.CanUnload()) StateContext.Instance.Unload(); }; _legacyService.Unloaded += (sender, args) => { if(StateContext.Instance.IsInTransition) StateContext.Instance.EndTransition(); }; The EndTransition method will either be called from outside of the state management (e.g. from a legacy service event), or from within, when no external dependency exists.
{ "domain": "codereview.stackexchange", "id": 27662, "tags": "c#, design-patterns, event-handling" }
What is the lower bound on retrieving an item in a collection if no arrays(Random access memory) are allowed?
Question: I know that retrieving an item in a collection can be done in $O(1)$ time (on average) using hash tables. I would like to know if there is an algorithm that could be as performant without using arrays. My main motivation for this question is to know if it is possible to build a retrieve-element algorithm in $O(1)$ time using just functional programming. It seems that the lower bound is $\Omega(\log(n))$ using trees $T = (a, (T, T)) \text{ | ( )}$. Answer: Without arrays, $\Omega(\log n)$ time is needed. Without arrays, the memory address you access is entirely determined by the control-flow path, i.e., the sequence of control-flow decisions (if construed appropriately) made during the execution. At each step, you can only choose between one of two (or a fixed finite number of) choices for control flow. So, after $t$ steps of execution, there can be at most $O(2^t)$ different control-flow paths, and at most $O(2^t)$ different memory positions that could potentially be accessed. For the data structure operation to be correct, it needs to be possible to access every one of the $n$ memory positions, so you basically need $2^t \ge n$, which means you need $t \ge \log n$.
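As a concrete illustration of meeting the $\Omega(\log n)$ bound without arrays, here is a sketch (my own, not from the answer) of a purely functional binary search tree built from nested tuples, matching the $T = (a, (T, T))$ shape in the question; with a balanced tree, lookup follows one branch per comparison:

```python
# A purely functional ("no arrays") dictionary as a binary search tree of
# nested tuples: node = (key, value, left, right), empty tree = None.
# insert never mutates; it rebuilds the path from the root (persistence).
def insert(t, k, v):
    if t is None:
        return (k, v, None, None)
    key, val, left, right = t
    if k < key:
        return (key, val, insert(left, k, v), right)
    if k > key:
        return (key, val, left, insert(right, k, v))
    return (k, v, left, right)  # replace existing value

def lookup(t, k):
    # one comparison per level: O(log n) on a balanced tree
    while t is not None:
        key, val, left, right = t
        if k == key:
            return val
        t = left if k < key else right
    return None

t = None
for i in range(16):
    t = insert(t, i, i * i)
print(lookup(t, 7))  # 49
```

Note that inserting keys in sorted order, as above, degenerates the tree into a spine; a self-balancing variant (e.g. a red-black tree) is what actually guarantees the logarithmic bound.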
{ "domain": "cs.stackexchange", "id": 19962, "tags": "time-complexity, data-structures, lambda-calculus, functional-programming, lower-bounds" }
What causes Paresthesia (Pins and Needles) at a cellular level?
Question: I've looked it up in plenty of places like the Wikipedia page and such, and it is clear that the most common cause of paresthesia is either a fair amount of pressure on a specific patch of skin causing lack of blood flow to the specific nerve endings in that limb (not to be confused with a stop in blood flow to the limb altogether), or a much stronger amount of pressure on that patch of skin for a shorter amount of time. This, although the most common cause, is not nearly the only one, which could be anywhere from simply sleeping on the wrong side of the bed to a lethal injection. I'm wondering what happens to the nerve cells that are affected at a cellular level, and what causes it at a cellular level; that is, at the level where pounds per square inch are no longer the relevant description. Answer: Underneath the superficial layers of your skin there are receptors which sense pressure, temperature and pain. These receptors are part of the peripheral nervous system, which senses stimuli, and they take the message conveying details about the stimulus to the somatosensory cortex of the brain. This is where the perception of pain, burning, pressure etc. is ultimately made. To take the simplest example, if you stop blood flow for a short amount of time in a limb, these receptors are activated and will send signals to the brain that are interpreted as tingling or numbness. With more severe pain, different receptors are activated which, again, project to the same brain area, but a different message is read out. If the pressure on the limb is removed, the receptors will go back to normal function as blood flow is restored. https://en.wikipedia.org/wiki/Nociceptor http://www.scholarpedia.org/article/Mammalian_mechanoreception
{ "domain": "biology.stackexchange", "id": 1988, "tags": "human-biology, neuroscience, cell-biology, sleep, blood-circulation" }
Determining intermediate pressure between two turbines
Question: A steam power plant employs two adiabatic turbines in series. Steam enters the first turbine at 650 °C and 7000 kPa and discharges from the second turbine at 20 kPa. The system is designed for equal power outputs from the two turbines, based on a turbine efficiency of 78% for each turbine. Determine the temperature and pressure of the system in its intermediate state between the two turbines. What is the overall efficiency of the two turbines together with respect to isentropic expansion of the steam from the initial to final state? How should I determine the pressure between the two turbines? I have the solution from a solution manual (prob. 8.6), but I just can't understand why it takes pressures around 700 kPa and does an interpolation. What we know: $S_3 = S_2 = S_1$ (if working isentropically, where $S$ is entropy) and $x$ (the quality of the exiting vapor). Note: we could determine the properties of the intermediate flow from the single property $S$ only if it were two-phase, but the intermediate flow is assumed to be superheated. I'm sure of the problem's data, and that's all we have. Answer: The solution manual simply assumes a set of four trial values of intermediate pressure (within which, it turns out, the actual intermediate pressure lies): $$P_2 = \begin{pmatrix}725\\750\\775\\800\end{pmatrix} \text{kPa}$$ For each pressure in the set, the one that results in an equal power output from the two turbines (or the closest value to $\triangle W = 0$) is the actual intermediate pressure. After these trials, we interpolate linearly to find the value of $P_2$ for which the work difference is zero.
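The trial-and-interpolate procedure can be sketched numerically. The $\Delta W$ values below are placeholders of my own (the real ones come from steam-table calculations at each trial pressure), so the resulting pressure is only illustrative:

```python
# Sketch of the solution-manual procedure: evaluate the power mismatch
# dW = W_turbine1 - W_turbine2 at a few trial intermediate pressures, then
# interpolate linearly for the pressure where dW = 0.  The dW values are
# hypothetical, not from steam tables.
P2 = [725.0, 750.0, 775.0, 800.0]   # trial pressures, kPa
dW = [-8.0, -2.5, 3.1, 8.9]         # hypothetical power mismatch, kW

# find the bracket where dW changes sign and interpolate linearly
for (p_lo, p_hi), (w_lo, w_hi) in zip(zip(P2, P2[1:]), zip(dW, dW[1:])):
    if w_lo <= 0.0 <= w_hi:
        P2_root = p_lo + (0.0 - w_lo) * (p_hi - p_lo) / (w_hi - w_lo)
        break

print(round(P2_root, 1))  # ~761.2 kPa with these placeholder numbers
```

With the placeholder mismatches the sign change falls between 750 and 775 kPa, which is why the manual only tabulates pressures in that neighborhood.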
{ "domain": "engineering.stackexchange", "id": 761, "tags": "mechanical-engineering, thermodynamics" }
catkin_make not running
Question: I installed ROS Kinetic. When I try to run catkin_make it gives the following error: The program 'catkin_make' is currently not installed. You can install it by typing: sudo apt install catkin When I run sudo apt install catkin it gives: Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: catkin : Depends: python-catkin-pkg but it is not going to be installed E: Unable to correct problems, you have held broken packages. How can I resolve this? Originally posted by dhruvt on ROS Answers with karma: 1 on 2018-08-15 Post score: 0 Original comments Comment by billy on 2018-08-16: I can't help you but can anticipate questions coming. Please update the question with below info. What hardware? What version of operating system? What method did you use to install, source or binaries? Did you go through the tutorials line by line to start? http://wiki.ros.org/ROS/StartGuide Answer: This looks like you may have missed section 1.6 from the installation instructions here. Try executing the following command in your terminal before catkin_make: source /opt/ros/kinetic/setup.bash Adding this command to your .bashrc file as described will mean that catkin_make is always available in any terminal. Originally posted by PeteBlackerThe3rd with karma: 9529 on 2018-08-16 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 31552, "tags": "ros-kinetic" }
An error of "ERROR PYTHONPATH"
Question: Hi, I want to make my ROS system clean by not having this one error. The error as I pasted below after invoking the command "roswtf". How should I go about this? Output: Loaded plugin tf.tfwtf No package or stack in context ================================================================================ Static checks summary: Found 1 error(s). ERROR PYTHONPATH [/opt/ros/diamondback/ros/core/roslib/src:/opt/ros/unstable/ros/core/roslib/src:/opt/ros/diamondback/ros/core/roslib/src:] is invalid: Multiple roslib directories in PYTHONPATH (there should only be one) ================================================================================ ROS Master does not appear to be running. Online graph checks will not be run. ROS_MASTER_URI is [http://localhost:11311] Thanks in advance.. Originally posted by alfa_80 on ROS Answers with karma: 1053 on 2011-02-21 Post score: 0 Original comments Comment by mjcarroll on 2011-02-21: Please don't put [SOLVED] in the title. Also, don't mark every question as a community wiki, there is no reason for that. Answer: Your ROS_PACKAGE_PATH is not set correctly. It should never mix multiple distributions. When you follow the environment setup instructions, make sure you only source one version of setup.bash in your ~/.bashrc. Originally posted by joq with karma: 25443 on 2011-02-21 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by alfa_80 on 2011-02-21: The first 2 lines are commented, no effect at all. Comment by alfa_80 on 2011-02-21: In my ~/.bashrc file, they are correct I think as the related lines are like this: #source /opt/ros/cturtle/setup.sh #source /opt/ros/diamondback/setup.bash source /home/shah/ni/setup.bash source /opt/ros/diamondback/setup.bash
{ "domain": "robotics.stackexchange", "id": 4820, "tags": "ros-diamondback, rospack" }
What definition of change in potential energy applies to a multi-particle system?
Question: I read a text: We define change in potential energy of the system corresponding to conservative internal forces as:-. $$\Delta U = -W = -\int \mathbf{F}\cdot d\mathbf{r}$$ as the system goes from one configuration to another. I can't understand what $d\mathbf{r}$ represents when we talk about a multi-particle system. Answer: You can use that equation for each particle in the system separately. Then you can add up everything at the end. In other words: $$\Delta U_{system}=\sum_i(\Delta U)_i=-\sum_i\int\mathbf F_i\cdot\text d\mathbf r_i$$ Where the sum over $i$ covers all particles of the system.
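A quick numerical illustration of the sum (my own example, with uniform gravity standing in for a conservative force): each particle contributes $-\int\mathbf F_i\cdot d\mathbf r_i$ along its own displacement, and only the vertical component of $d\mathbf r_i$ does work:

```python
# Two particles in uniform gravity F_i = (0, -m_i g).  Each particle's dU is
# the negative of the work F_i . dr_i along its own path; the system's dU is
# the sum over particles.
g = 9.81
particles = [
    {"m": 2.0, "dr": (1.0, 3.0)},   # particle 1: 1 m sideways, 3 m up
    {"m": 0.5, "dr": (-2.0, 1.0)},  # particle 2: 2 m sideways, 1 m up
]

dU = 0.0
for p in particles:
    Fx, Fy = 0.0, -p["m"] * g       # force on this particle
    dx, dy = p["dr"]
    work = Fx * dx + Fy * dy        # F_i . dr_i
    dU += -work                     # dU_i = -W_i

print(dU)  # 2.0*9.81*3 + 0.5*9.81*1 = 63.765 J
```

The horizontal motion drops out, as expected, since gravity has no horizontal component; each $d\mathbf r_i$ is that particle's own displacement, which is the point of the answer.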
{ "domain": "physics.stackexchange", "id": 53835, "tags": "newtonian-mechanics, work, potential-energy" }
Minimum speed of a baseball to cause nuclear fusion in air?
Question: After reading XKCD's "what would happen if a baseball were travelling at the speed of light" (https://what-if.xkcd.com/1/), I'm curious how fast a slug or baseball would actually need to go, at minimum, to fuse air molecules. He assumed that the baseball had a velocity of 0.99999 times the speed of light. But what if it were 0.5 times the speed of light, for example? Also, would there be any fusion if the "fast" baseball collided with a wall of heavier elements such as iron or cobalt instead of plain old air? Answer: In order to cause fusion the kinetic energy of a particle needs to overcome the Coulomb barrier $$V_c=\frac{e^2}{4\pi \epsilon_0}\frac{Z_1 Z_2}{R_1+R_2}$$ where the $Z_i$ are the atomic numbers, the $R_i$ the nuclear radii, and $e$ is the electron charge. For hydrogen-hydrogen this is $0.96\times 10^{-13}$ J and for nitrogen-nitrogen $1.95\times 10^{-12}$ J. A moving object presumably imparts its velocity $v$ on a boundary layer that then collides with air, so the relative velocity is $2v$ and the kinetic energy will be $(1/2)Av^2$, with $A$ the mass of the colliding particle (the diatomic nature of the gas probably doesn't matter here, but maybe the kinetic energy is $Av^2$ if the other atom piles up behind the first one to really press it). So we get fusion when $(1/2)Av^2=V_c$ (with a negligible nudge downwards because of the Maxwell-Boltzmann velocity distribution of the gas that means some molecules will be faster, and maybe a slightly bigger one for Maxwell-Jüttner distributed plasma around the projectile, but I will ignore these.) So I get a "fusion speed" of $$v_{fusion}=\sqrt{2V_c/A}.$$ For hydrogen this is 3.57% c, for nitrogen 4.3% c. This is (I think) not going to reach the Lawson criterion for fusion very well, so it is not going to be a major energy source for the ensuing mayhem.
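A quick numerical check of those percentages (the barrier energies are from the answer; the atomic masses are standard values I've assumed, since the answer doesn't state which it used):

```python
import math

# Back-of-the-envelope check of the fusion-speed numbers in the answer.
c   = 2.998e8            # speed of light, m/s
m_u = 1.6605e-27         # atomic mass unit, kg

def fusion_speed(V_c, A_kg):
    """v such that (1/2) * A * v^2 = V_c."""
    return math.sqrt(2.0 * V_c / A_kg)

frac_H = fusion_speed(0.96e-13, 1.008 * m_u) / c    # hydrogen-hydrogen
frac_N = fusion_speed(1.95e-12, 14.007 * m_u) / c   # nitrogen-nitrogen

print(f"H: {100*frac_H:.2f}% c, N: {100*frac_N:.2f}% c")  # ~3.57% and ~4.32% c
```

Both come out in line with the figures quoted in the answer (3.57% c for hydrogen, about 4.3% c for nitrogen).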
{ "domain": "physics.stackexchange", "id": 47629, "tags": "special-relativity, speed-of-light, nuclear-physics" }