Quark Transitions and Conservation of Energy
Question: My question is essentially: How is conservation of energy upheld in quark transitions? Several quark transitions seem to break energy conservation, such as $$ d \rightarrow c W^{-}$$ $$d \rightarrow t W^{-}$$ where the masses are $$m_{d} \approx 3-7 \ \textrm{MeV}$$ $$m_{c} \approx 1.25\ \textrm{GeV}$$ $$m_{t} \approx 172\ \textrm{GeV}$$ $$m_{W^{-}} \approx 80.4 \ \textrm{GeV}$$ Are all these transitions (and others) forbidden according to conservation of energy? Clearly they are perfectly allowed transitions, and so my intuition tells me the reason is due to the effective masses of quarks within a bound system. Answer: "my intuition tells me the reason is due to the effective masses of quarks within a bound system" - Your intuition confuses you because you are trying to use a type of "conservation of mass" (identifying mass with energy), a classical concept, instead of conservation of energy and momentum, which makes life simple. One has to work with four-vectors. If the coupling constants exist, and the center-of-mass energy of the "system" under consideration allows the generation of the rest masses of the particles, the reaction can become physical. A simpler example than quarks comes from inverse beta decay. The free neutron decays into a proton because it is energetically allowed, as the proton has a smaller mass than the neutron. BUT in a nucleus the inverse process is allowed, taking energy from the nuclear system, the proton turning into a neutron: $β^{+}$ decay. Having said the above, I am curious where you found a d to top transition, which would have to go through higher-order diagrams, still following four-momentum algebra. This is the table of the straightforward couplings in quark decays:
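The four-momentum bookkeeping described above can be sketched numerically. Here is a minimal check, using the rough masses quoted in the question, of which decays are kinematically allowed for a free particle at rest (the helper and its numbers are illustrative only):

```python
# Approximate rest masses in MeV (rough figures as quoted in the question;
# treat them as illustrative, not authoritative).
m = {
    "d": 5.0,        # down quark (question quotes a 3-7 MeV range)
    "c": 1250.0,     # charm quark
    "W": 80400.0,    # W boson
    "n": 939.565,    # neutron
    "p": 938.272,    # proton
    "e": 0.511,      # electron
}

def decay_allowed_at_rest(parent_mass, product_masses):
    """A decay of a FREE particle at rest is kinematically allowed only if the
    parent rest mass exceeds the sum of the product rest masses (the surplus
    becomes kinetic energy of the products)."""
    return parent_mass > sum(product_masses)

# Free neutron decay n -> p + e (+ antineutrino, ~massless): allowed.
neutron_ok = decay_allowed_at_rest(m["n"], [m["p"], m["e"]])

# A *free* d -> c + W transition: forbidden; the charm and W rest masses
# vastly exceed the d mass, so the energy must come from the rest of the
# system, exactly as the answer explains.
free_d_to_cW_ok = decay_allowed_at_rest(m["d"], [m["c"], m["W"]])
```
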
{ "domain": "physics.stackexchange", "id": 49835, "tags": "particle-physics, quarks" }
Pubchem: list all compounds for which Kovats retention indices are available
Question: Pubchem recently started listing Kovats retention indices, e.g. for alpha ionone: https://pubchem.ncbi.nlm.nih.gov/compound/5282108 I was wondering though if there is a way to get a list of all Pubchem compound ids which have this info available? Ideally via a PUG REST query, and a JSON or XML export option? Note: doing a search https://www.ncbi.nlm.nih.gov/sites/myncbi/searches/save?db=pccompound&qk=14 https://www.ncbi.nlm.nih.gov/sites/myncbi/searches/save?db=pccompound&qk=12 will get me all records uploaded by NIST and NIST Chemistry Webbook - this will already get me close. But how can I get the list of all these pubchem IDs, ideally using some PUG REST query? (it has to be used programmatically from R) Answer: Ha NCBI helpdesk just replied with this: The easiest way to do this is to use the PubChem Classification Browser to access the list of Compound records with Kovats data. You can find the PubChem Classification Browser here: https://pubchem.ncbi.nlm.nih.gov/classification 1) Click on the "Select classification" pull-down and select "PubChem Compound TOC". 2) Under "Data type counts to display" click on "Compound". 3) Scroll down to the TOC listing and expand the "Chemical and Physical Properties" section and then "Experimental Properties". 4) Look for the entry "Kovats Retention Index" with the number of counts next to it. 5) Click on the number of counts (currently 79,656) to take you to PubChem Compound results list populated with those records shown. (https://www.ncbi.nlm.nih.gov/pccompound?DbFrom=pchierarchy&Cmd=Link&Db=pccompound&LinkName=pchierarchy_pccompound&IdsFromResult=1856976) To save the list of CIDs for these records there are two options.... 1) Click "Send to" (near the top). 2) Select "File". 3) Select "UI List" 4) Click "Create file" OR Click "Structure Download" (on the right)& follow the directions....to download the SDF or XML formats of these records. 
If you don't need the 3D versions of the records/images this will work fine, HOWEVER if you want the "3D Records/Images" versions of these - there is a limit of 50,000. So you'd need to download the CID list and split it in two to upload in this tool for downloading the file. I hope this helps. NCBI User Services
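For programmatic access (from R or anything that can issue HTTP requests), one possible complement to the Classification Browser route is to pull just the Kovats section for a given CID via PUG View. A sketch, with the caveat that the endpoint pattern and the heading parameter are assumptions to be verified against the PUG View documentation before relying on them:

```python
from urllib.parse import quote

# Sketch (not verified against the live service): PubChem's PUG View API can
# return a single heading of a compound record, e.g. the Kovats data for
# alpha-ionone (CID 5282108). Fetch the resulting URL with any HTTP client.
def pug_view_heading_url(cid, heading, fmt="JSON"):
    base = "https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound"
    return "{}/{}/{}?heading={}".format(base, cid, fmt, quote(heading))

url = pug_view_heading_url(5282108, "Kovats Retention Index")
```
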
{ "domain": "chemistry.stackexchange", "id": 4159, "tags": "organic-chemistry, cheminformatics" }
Transport Block calculation in LTE Downlink
Question: I would like to understand the relation between the TB size calculation in the TS36.213 document and in the simulator (https://www.nt.tuwien.ac.at/research/mobile-communications/vienna-lte-a-simulators/). I’m going to make an example so maybe you can help me to see this relation: Method 1 (from the TS36.213) Calculation Procedure for downlink(PDSCH) is as follows : i) refer to TS36.213 Table 7.1.7.1-1 ii) get I_TBS for using MCS value (Let’s assume MCS is 1. in this case, I_TBS is 1 ) iii) refer to TS36.213 Table7.1.7.2.1 iv) go to column header indicating the number of RB (Let’s assume that RB is 50) v) go to row header ‘1’ which is I_TBS vi) we would get 1800 (if the number of RB is 50 and I_TBS is 9) vii) (This is Transport Block Size per 1 ms for one Antenna) And this is method 2 (from the Vienna LTE System Level simulator): i) This is the formula: TB_size_bits = max(8*round(1/8*(the_RB_grid.sym_per_RB_nosync .* num_assigned_RB .* modulation_order .* coding_rate * 2))-24,0); ii) if the_RB_grid.sym_per_RB_nosyn= 80 iii) num_assigned_RB=50 iv) modulation_order = 2 v) coding_rate=0.0762 vi) That gives TB_size_bits = 1195 bits Do you know how I can go from one method to another or if there is any relation? Thank you. Best regards, Natalia. 
Answer: I found this ppt: http://www.ece.drexel.edu/walsh/Gwanmo-Nov11-2.pdf (slides 9-12), which shed a bit of light on how method 1 calculates the TB size. For data, there are: 10 OFDM symbols per subframe x 12 subcarriers = 120 (we are counting the 6 RS symbols and excluding the PDCCH symbols). Therefore, 120 symbols x 50 RBs x 2 bits/symbol (QPSK) = 12000 bits. For QPSK we know that the efficiency is 0.15 (approx.), so: 12000 x 0.15 = 1800 bits, as in Table 7.1.7.2.1 of 3GPP TS 36.213 for 50 RBs and I_TBS=1 (sorry, in my question I made a mistake and wrote I_TBS 9 the second time: can't edit it in the forum). However, in the simulator, using method 2, we are considering: number of non-sync symbols per RB = 12×7 - 4 (RS symbols) = 80 OFDM symbols per slot. 80 non-sync symbols x 50 RBs x 2 bits/symbol x 0.0762 x 2 slots = 1219 bits; 1219 bits - 24 bits for the CRC = 1195 bits :) Please let me know if you don't agree with these methods. Thanks, BTS.
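The arithmetic of both methods can be reproduced in a few lines (a sketch using the figures above; the 0.15 QPSK efficiency is the approximation from the slides, not an exact spec value):

```python
# Method 1 (36.213-style back-of-envelope):
data_symbols_per_rb = 10 * 12            # 10 data OFDM symbols x 12 subcarriers = 120
raw_bits_m1 = data_symbols_per_rb * 50 * 2   # 50 RBs, 2 bits/symbol (QPSK) = 12000
tb_m1 = raw_bits_m1 * 0.15               # approximate QPSK efficiency
# -> 12000 * 0.15 = 1800 bits, matching Table 7.1.7.2.1 for 50 RBs, I_TBS=1

# Method 2 (Vienna-simulator-style, before rounding to whole bytes):
raw_bits_m2 = 80 * 50 * 2 * 0.0762 * 2   # non-sync syms x RBs x bits/sym x rate x slots
tb_m2 = round(raw_bits_m2) - 24          # subtract the 24-bit CRC
# -> round(1219.2) - 24 = 1195 bits
```
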
{ "domain": "dsp.stackexchange", "id": 3608, "tags": "communication-standard" }
How Do Common Pathfinding Algorithms Compare To Human Process
Question: This might border on computational cognitive science, but I am curious as to how the process followed by common pathfinding algorithms (such as A*) compares to the process humans use in different pathfinding situations (given the same information). Are these processes similar? Answer: Humans tend to choose not strictly optimal, but close-to-shortest solutions. So you'll need to look at fuzzy (approximate) algorithms, not at A*. The closest algorithms to human thinking I'm aware of are Contraction Hierarchies, on par with Reach-based pruning. When I need to find a path between A and B on the map, I do a quick overview, taking into account whether there is a river crossing or some other obstacle, looking for some general routes, and then adding details that could shorten the path.
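For contrast with the coarse-to-fine strategy described in the answer, here is what the systematic side of the comparison looks like: a minimal A* sketch on a toy grid (the grid and helper names are illustrative), expanding nodes strictly in order of f = g + h rather than sketching a rough route first.

```python
import heapq

def astar(grid, start, goal):
    """4-connected A* on a grid of strings, '#' = wall.
    Returns the shortest path length, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]   # entries are (f, g, position)
    best_g = {start: 0}
    while open_heap:
        f, g, pos = heapq.heappop(open_heap)
        if pos == goal:
            return g
        if g > best_g.get(pos, float("inf")):
            continue  # stale heap entry, a cheaper route was found already
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = ["....",
        ".##.",
        "...."]
dist = astar(grid, (0, 0), (2, 3))   # detour around the wall
```
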
{ "domain": "cs.stackexchange", "id": 185, "tags": "algorithms, graphs, artificial-intelligence" }
Write a function that takes a list of dicts and combine them
Question: There is an exercise in the book Python Workout: 50 Ten-minute Exercises that asks to write a function to do the following: Write a function that takes a list of dicts and returns a single dict that combines all of the keys and values. If a key appears in more than one argument, the value should be a list containing all of the values from the arguments. Here is my code:

def combine_dicts(dict_list):
    combined = {}
    for dict_ in dict_list:
        for key, value in dict_.items():
            if key not in combined.keys():
                combined[key] = [value]
            else:
                combined[key] = combined[key] + [value]
    return dict([((key, value[0]) if len(value) == 1 else (key, value))
                 for key, value in combined.items()])

what are some ideas that I can use to improve my code, make it more pythonic? Example input and outputs:

>>> combine_dicts([{'a': [1, 2]}, {'a':[3]}, {'a': [4, 5]}])
{'a': [[1, 2], [3], [4, 5]]}
>>> combine_dicts([{'a': 1, 'b':2}, {'c':2, 'a':3}])
{'a': [1, 3], 'b': 2, 'c': 2}

Answer: At first glance your code looks pretty pythonic to me. I would suggest the following changes: When you are adding a new value to an existing key's list, you use list concatenation to create a new list: combined[key] = combined[key] + [value]. What you should do instead is append the new value to the existing list: combined[key].append(value). In the return statement you create a list of tuples using a list comprehension, which is then converted to a dictionary. Instead you can directly use a dictionary comprehension to create the dictionary more elegantly:

return {key: (value[0] if len(value) == 1 else value) for key, value in combined.items()}

The resulting code looks as follows:

def combine_dicts(dict_list):
    combined = {}
    for dict_ in dict_list:
        for key, value in dict_.items():
            if key not in combined.keys():
                combined[key] = [value]
            else:
                combined[key].append(value)
    return {key: (value[0] if len(value) == 1 else value)
            for key, value in combined.items()}
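One further variant, a suggestion beyond the review above (not from the book): collections.defaultdict removes the membership test entirely, while the final dictionary comprehension stays the same.

```python
from collections import defaultdict

def combine_dicts(dict_list):
    # defaultdict(list) creates the per-key list on first access,
    # so no `if key not in combined` branch is needed.
    combined = defaultdict(list)
    for d in dict_list:
        for key, value in d.items():
            combined[key].append(value)
    # Unwrap singleton lists, exactly as in the reviewed version.
    return {key: (vals[0] if len(vals) == 1 else vals)
            for key, vals in combined.items()}

result = combine_dicts([{'a': 1, 'b': 2}, {'c': 2, 'a': 3}])
```
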
{ "domain": "codereview.stackexchange", "id": 44136, "tags": "python, hash-map" }
How could I shorten my code in this old graded homework?
Question: I have this old graded homework and I am looking for ways to improve or shorten the code; I have some lines with over 200 chars.

to_pairs(network) does the opposite.
n_friends(network, n) gets a network (the second presentation) and returns a set of names of persons that have exactly n friends.
lonely(network) returns a set of names of people with only a single friend.
most_known(network) returns the name of the person with the most friends.
common_friends(network, name1, name2) returns a set of common friends of the two persons.
by_n_friends(network) returns a dictionary with keys from 1 to len(mreza) - 1 and the corresponding sets of people with that number of friends. For instance, for the small network, the function must return {1: {"D"}, 2: {"A", "B"}, 3: {"C"}}. See the example for the large network in the tests.
suggestions(network) returns a list of pairs that are not friends but have at least one common friend. The pair must be sorted alphabetically (e.g. ("Ana", "Berta") and not ("Berta", "Ana")).
clique(network, names) returns True if all persons from the group names know each other, and False otherwise.
most_commons(network) returns the pair with the most mutual friends.
strangers(network, names) returns True if the group names contains absolute strangers - not even one pair knows each other - and False otherwise.
is_cover(network, names) returns True if the group "covers" the entire network in the sense that every person in the network is either in the group or is a friend of someone in the group.
triangles(network) computes the number of "triangles" - triplets of people who know each other.
minimal_cover(network) returns the smallest set of names that cover the network (in the sense described at is_cover, above).

Here's the explanation and here's a picture.
Big network: Small network: And here's my code:

def to_dict(pairs):
    from collections import defaultdict
    paired = defaultdict(set)
    for k, v in pairs:
        paired[k].add(v)
        paired[v].add(k)
    return dict(paired)

def to_pairs(network):
    return {(k, v) for k, vs in network.items() for v in vs if k < v}

def n_friends(network, n):
    return {k for k, v in network.items() if len(v) == n}

def lonely(network):
    return {k for k, v in network.items() if len(v) == 1}

def most_known(network):
    return max(network, key=lambda k: len(network[k]))

def common_friends(network, name1, name2):
    return set(network[name1].intersection(network[name2]))

def by_n_friends(network):
    return {x: {k for k, v in network.items() if len(v) == x}
            for x in range(1, len(network))}

def clique(network, names):
    import itertools as it
    return ({tuple(sorted(y)) for y in it.combinations(names, 2)}
            <= {tuple(sorted(x)) for x in to_pairs(network)})

def most_commons(network):
    friends = {}
    for pair in to_pairs(network):
        for elem in pair:
            friends.setdefault(elem, set()).update(pair)
    return next(iter({pair for pair in to_pairs(network)
                      if ({pair: len(friends[pair[0]] & friends[pair[1]])
                           for pair in to_pairs(network)})[pair]
                      == max({pair: len(friends[pair[0]] & friends[pair[1]])
                              for pair in to_pairs(network)}.values())}))

def strangers(network, names):
    import itertools as it
    return not bool([(x, y) for x in set(it.permutations(names, 2))
                     for y in to_pairs(network) if x == y])

def suggestions(network):
    return {tuple(sorted([k, k1])) for k in network.keys() for k1 in network.keys()
            if k not in network[k1] and len(network[k] & network[k1]) != 0 and k != k1}

def is_cover(network, names):
    return not [k for k in network.keys()
                if k not in names and network[k] & names == set()]

def triangles(network):
    return len({tuple(sorted([k1, k2, k3]))
                for k1 in network.keys() for k2 in network.keys() for k3 in network.keys()
                if k1 != k2 and k2 != k3 and k1 != k3
                and (k1 in network[k3] and k2 in network[k3] and k1 in network[k2])})

def minimal_cover(network):
    import itertools as it
    if len(network) <= 4:
        bar1 = list(map(set, list(network.keys())))
        for st in bar1:
            if is_cover(network, st):
                return st
    else:
        bar = list(map(set, sorted(it.combinations(list(network), 3))))
        for st in bar:
            if is_cover(network, st):
                return st

The homework is followed by unit tests made by the professor; I will not post them. Edit: The goal was to do the homework with one-liners. Thank you.

Answer: The longest lines can be rewritten thus:

def are_friends(network, a, b):
    return a in network[b]

def clique(network, names):
    """Returns True if all persons from the group names know each other,
    and False otherwise."""
    return all(are_friends(network, a, b) for a, b in it.combinations(names, 2))

def most_commons(network):
    """The pair with the most mutual friends."""
    return max(to_pairs(network),
               key=lambda p: len(common_friends(network, p[0], p[1])))

def suggestions(network):
    def has_common_friend(a, b):
        return len(common_friends(network, a, b)) > 0
    return [(a, b) for a, b in it.combinations(sorted(network.keys()), 2)
            if not are_friends(network, a, b) and has_common_friend(a, b)]

def triangles(network):
    """The number of triplets of people who know each other."""
    return sum(1 for t in it.combinations(network.keys(), 3) if clique(network, t))

Reuse functions, as also suggested by ChatterOne, and make more consistent use of itertools.combinations. Also use built-in functions such as all and max whenever appropriate. Remove unnecessary parentheses. Divide long function calls and list comprehensions to prevent lines from getting too long.
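One more point worth making: minimal_cover above only tries single names and 3-element subsets, so it is not guaranteed minimal in general. A brute-force sketch that checks subset sizes in increasing order (fine for homework-sized networks; is_cover is redefined here so the snippet is self-contained, and the sample network is illustrative):

```python
import itertools as it

def is_cover(network, names):
    # Every person is in the group or has a friend in the group.
    return all(k in names or network[k] & names for k in network)

def minimal_cover(network):
    # Try subset sizes 1, 2, 3, ...; the first cover found is minimal.
    # Exponential in the worst case, but fine for small networks.
    for size in range(1, len(network) + 1):
        for combo in it.combinations(sorted(network), size):
            if is_cover(network, set(combo)):
                return set(combo)

# The small network from the problem statement: C knows everyone but D's
# only friend is C, so {"C"} alone covers the whole network.
small = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
cover = minimal_cover(small)
```
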
{ "domain": "codereview.stackexchange", "id": 24001, "tags": "python, python-3.x, homework" }
What is the fastest known algorithm for finding conjugacy classes?
Question: Given a finite group $G$ of size $n$ by its table representation, I want to compute the conjugacy classes of $G$. A trivial algorithm seems to take $O(n^2)$ operations ($b = g^{-1}a g$ type checking): for each pair (I know I don't need to do it for all pairs) of elements in $G$, check $b = g^{-1}a g$. I tried to search on the internet but did not find anything specific about it in general. Question: What is the fastest known algorithm for finding conjugacy classes? Answer: I believe this can be done in $O(n \log n)$ on a RAM, where $n=|G|$; I don't know a reference so I'll just write it down here (but surely this is not original). Nor do I know if this is the fastest known, but clearly you can't do it faster than $\Omega(n)$, so it's gotta be close. Let's assume the group elements are denoted in the computer by the integers $1,\dotsc,n$, with $g_i \in G$ being the group element corresponding to the integer $i$. Assume WLOG that $g_1$ is the identity. 1) Find a generating set $\Gamma$ of size $\leq \log_2 n$ in $O(n \log n)$ time:

G = list of group elements
Gamma = []
Gr = new graph with vertex set G and no edges    // O(n) steps
found = new array of length n, initialized to all false    // O(n) steps
for i = 2 to n:    // never need to add identity to generating set
    if not found[i] then
        Gamma.append(i)
        // Now update the graph Gr by adding new edges
        // corresponding to the new generator G[i] = g_i
        for each vertex g in Gr:
            Gr.addEdge(g, g*G[i])
        end for
        do BFS on Gr starting from 1, ignoring any vertices already found
        (for any vertex j encountered, this sets found[j] = true)
    end if
end for

2) Build the following sparse graph (that is, in the edge list representation, not the dense adjacency matrix representation): vertices are the elements of $G$, and there is an (undirected) edge $(g,h)$ if $h = \gamma g\gamma^{-1}$ for some generator $\gamma \in \Gamma$. This is a graph with $n$ vertices and maximum degree $O(\log n)$.
Its connected components are the conjugacy classes, and finding them can be done by BFS (say) in time $O(v + e) = O(n + n \log n) = O(n \log n)$.
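A rough Python rendering of this generator-based idea (a sketch, not tuned to achieve the stated O(n log n) bound; S3 is built below only to provide a multiplication table to run it on):

```python
from itertools import permutations

def make_s3_table():
    """Multiplication table of S3: table[i][j] = index of g_i * g_j,
    with index 0 the identity (the identity permutation sorts first)."""
    elems = sorted(permutations(range(3)))
    compose = lambda p, q: tuple(p[q[k]] for k in range(3))
    idx = {p: i for i, p in enumerate(elems)}
    return [[idx[compose(p, q)] for q in elems] for p in elems]

def conjugacy_classes(table):
    n = len(table)
    inv = [row.index(0) for row in table]      # g * g^-1 = identity
    # Step 1: greedily pick generators; any element not reachable from the
    # current generating set becomes a new generator.
    gens, reachable = [], {0}
    for i in range(1, n):
        if i not in reachable:
            gens.append(i)
            frontier = list(reachable)
            while frontier:                    # closure under right multiplication
                x = frontier.pop()
                for g in gens:
                    y = table[x][g]
                    if y not in reachable:
                        reachable.add(y)
                        frontier.append(y)
    # Step 2: conjugacy classes = connected components under conjugation
    # by the generators alone (conjugations compose, so this suffices).
    seen, classes = set(), []
    for i in range(n):
        if i in seen:
            continue
        comp, stack = set(), [i]
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            for g in gens:
                stack.append(table[table[inv[g]][x]][g])   # g^-1 * x * g
        seen |= comp
        classes.append(comp)
    return classes

classes = conjugacy_classes(make_s3_table())
```

For S3 this recovers the familiar three classes: the identity, the three transpositions, and the two 3-cycles.
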
{ "domain": "cstheory.stackexchange", "id": 4320, "tags": "ds.algorithms" }
Why do petrol engines have lower compression ratio than diesel engine?
Question: Petrol has a higher self-ignition temperature than diesel, but still, petrol engines have lower compression ratios than diesel engines. As the self-ignition temperature of petrol is higher, shouldn't we be able to compress it more? I am not saying compress petrol to the point of auto-ignition, which can cause knocking, but just have a higher compression ratio than diesel. Is it because when the combustion of the petrol-air mixture starts, the moving flame front starts to compress the remaining air-fuel mixture, which could reach the point of self-ignition when using a higher CR? Answer: It's not because of the fuel, but because of the process. The diesel cycle differs fundamentally from the Otto (petrol/gasoline) cycle. In the Otto cycle, where fuel is present in the cylinder while compressing, the compression is limited by the auto-ignition temperature of the fuel, whatever fuel is used. The compression warms up the mixture, ideally to just before auto-ignition. The spark plug adds the needed flame source to start the combustion. In the diesel cycle, fuel is added only when compression has already taken place; the temperature in the combustion chamber is way higher than the auto-ignition point of the fuel, which is why the fuel combusts as soon as it's injected into the cylinder. This removes the limit set by the auto-ignition temperature when choosing a compression ratio. Thus, the ratio can be higher, up to where materials begin to pose a problem. Compression is the very reason diesels are more efficient: the compression ratio can be higher, and they also always run at 'wide open throttle', giving high compression.
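The efficiency gain from a higher compression ratio can be made concrete with the ideal Otto-cycle formula η = 1 − r^(1−γ). This is a textbook idealization (a real diesel cycle follows a different formula), used here only to show why a higher r is worth chasing; the compression ratios below are illustrative:

```python
# Ideal Otto-cycle thermal efficiency as a function of compression ratio r.
gamma = 1.4                      # ratio of specific heats for air
otto_eff = lambda r: 1 - r ** (1 - gamma)

eta_petrol = otto_eff(10)        # petrol-territory compression ratio
eta_diesel_like = otto_eff(20)   # diesel-territory compression ratio
# Roughly 0.60 vs 0.70: doubling r buys a sizeable ideal-cycle gain.
```
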
{ "domain": "engineering.stackexchange", "id": 2258, "tags": "automotive-engineering" }
Spring MVC verification email sending service
Question: I have a @Service to send verification emails. Here is the code of the service:

package my.package.service.impl;

//imports

@Service
public class EmailSenderServiceImpl implements EmailSenderService {

    private final static Logger LOGGER = LoggerFactory.getLogger( EmailSenderServiceImpl.class );

    @Autowired
    EmailVerificationTokenService emailVerificationService;

    @Autowired
    EmailService emailService;

    @Override
    public boolean sendVerificationEmail( final String appUrl, final Locale locale, final UserDto user ) {
        final String token = UUID.randomUUID().toString();
        emailVerificationService.insertToken( user.getIdUser(), token );

        final String body = createEmailBody( appUrl, locale, user, token );
        final String to = user.getEmail();
        final String subject = MessageResourceUtility.getMessage( "e.verify.email.subject", null, locale );

        return sendEmail( body, to, subject );
    }

    private boolean sendEmail( final String body, final String to, final String subject ) {
        try {
            if( emailService.sendEmail( to, subject, body ) ) {
                return true;
            } else {
                LOGGER.debug( "Email NOT sent" );
                return false;
            }
        } catch( final MessagingException e ) {
            LOGGER.error( "Error sending email" );
            return false;
        }
    }

    private String createEmailBody( String appUrl, final Locale locale, final UserDto user, final String token ) {
        appUrl = appUrl.substring( 0, appUrl.lastIndexOf( "/" ) );
        appUrl += "/accountVerification?token=" + token;
        appUrl += "&username=" + user.getUserName();

        final Object[] array = new Object[] { user.getName(), user.getSurname(), appUrl };
        final String body = MessageResourceUtility.getMessage( "e.verify.email", array, locale );
        return body;
    }
}

And here is the test class:

package my.package.app.service;

//imports

@RunWith( PowerMockRunner.class )
@PrepareForTest( MessageResourceUtility.class )
public class EmailSenderServiceTest {

    private static final String URL = "https://www.ysi.si/register/";

    @InjectMocks
    EmailSenderService emailSenderService = new EmailSenderServiceImpl();

    @Mock
    EmailService emailService;

    @Mock
    EmailVerificationTokenService emailVerificationTokenService;

    Locale locale;
    UserDto user;

    @Before
    public void setUp() throws Exception {
        MockitoAnnotations.initMocks( this );
        user = new UserDto();
        when( emailService.sendEmail( Matchers.anyString(), Matchers.anyString(), Matchers.anyString() ) ).thenReturn( true );
        mockStatic( MessageResourceUtility.class );
    }

    @Test
    public void shouldCreateValidationToken() throws Exception {
        emailSenderService.sendVerificationEmail( URL, locale, user );
        verify( emailVerificationTokenService, times( 1 ) ).insertToken( Matchers.anyInt(), Matchers.anyString() );
    }

    @Test
    public void shouldReturnFalseWhenError() throws Exception {
        when( emailService.sendEmail( Matchers.anyString(), Matchers.anyString(), Matchers.anyString() ) ).thenThrow( new MessagingException() );
        final boolean returnValue = emailSenderService.sendVerificationEmail( URL, locale, user );
        assertEquals( "Returns false when error", false, returnValue );
    }

    @Test
    public void shouldReturnFalseWhenEmailNotSent() throws Exception {
        when( emailService.sendEmail( Matchers.anyString(), Matchers.anyString(), Matchers.anyString() ) ).thenReturn( false );
        final boolean returnValue = emailSenderService.sendVerificationEmail( URL, locale, user );
        assertEquals( "Returns false when email not sent", false, returnValue );
    }

    @Test
    public void shouldSendAnEmailWithTheToken() throws Exception {
        final boolean returnValue = emailSenderService.sendVerificationEmail( URL, locale, user );
        verify( emailService, times( 1 ) ).sendEmail( Matchers.anyString(), Matchers.anyString(), Matchers.anyString() );
        assertEquals( "Returns true when success", true, returnValue );
    }
}

I have the feeling I am not unit testing in the proper way. I feel like the test code is too coupled to production code. Am I right? Answer: Thanks for sharing the code!
As far as I can see, you're running your tests through the public interface of the class under test, and you verify its results and its communication with its dependencies. That is exactly how it should be. There are just a few issues:

Avoid PowerMock. IMHO, having to use PowerMock(-ito) is a surrender to bad design. In your case the problem is caused by MessageResourceUtility. There is no rule that classes providing utility methods must declare them static.

Avoid verify with any* matchers. Test cases should be as explicit as possible. That means you should verify the parameters of the called methods against concrete values whenever possible. E.g., you could verify that the ID given by the user is passed to the dependency by the CUT. The cause here is similar to the above: the dependency on the UUID class is hidden behind static access. The problem is that UUID is provided by the JVM, so you cannot simply convert it into an "instance-able" utility class. You should encapsulate it in a facade class which you can pass in as a dependency.
{ "domain": "codereview.stackexchange", "id": 26327, "tags": "java, unit-testing, email" }
Node management tool
Question: Does anyone know about a node management GUI? Something that allows you to get an overview of the running nodes, manage starting and stopping nodes, see if any node has crashed, etc. I guess you could think of it as a GUI on top of roslaunch or something. Originally posted by Hordur on ROS Answers with karma: 544 on 2011-05-19 Post score: 0 Answer: This doesn't exist. You're not the first to think of it and some informal design has been done, but there is nothing usable. Originally posted by Straszheim with karma: 426 on 2011-06-17 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 5604, "tags": "ros, nodes, gui" }
Do microorganisms contain water?
Question: This may sound like a bit of a strange question, but I am very new to biology. I would like to ask: do microorganisms like viruses, bacteria, amoebas, etc. also contain water, as every living thing contains water? Answer: The short answer is yes, pretty much!

Bacteria and eukaryotic microorganisms

Bacteria and eukaryotic microorganisms (including amoebas) have a membrane that separates the interior from the exterior. And yes, they have water inside, in which all chemical reactions take place.

Viruses

Viruses, on the other hand, do not really have a membrane that separates the interior from the exterior. They are really just a bunch of proteins stuck together. As such, it is hard to tell whether you would consider the water in which those proteins float part of the organism or not. Note, however, that some viruses have a viral envelope (which can be derived from a host cell membrane). In such viruses, there is more clearly an interior and an exterior, and yes, there is water in the interior too! However, there is (with very few exceptions) no metabolism inside this envelope. This, by the way, is part of the reason viruses are not considered alive.

Dehydrated living things

Note that some organisms can survive with very little water. Some seeds can survive extremely strong dehydration. For example, some tardigrades can survive with less than 1% water in their body (see this New York Times article).
{ "domain": "biology.stackexchange", "id": 8013, "tags": "evolution, bacteriology, virology" }
gazebo camera frame is inconsistent with rviz + opencv convention
Question: It looks like the gazebo camera frame convention is not the same as rviz and opencv, as the image below shows. In opencv, z is pointing into the image (the blue axis), x is right (the red axis), and y is down (the green axis), while in the gazebo camera x is pointing into the image, z is up, and y is left, which is similar to the robot convention of x being forward and z up. The image above is using an rviz/Camera to overlay the purple grid on the frame generated from the gazebo camera plugin; instead of the grid overlaying properly on the ground and going off toward the horizon, rviz thinks the camera is pointed at the ground. This example is running the gazebo_ros_demos rrbot_gazebo and rrbot_control launch files, and using standard Ubuntu 14.04 + Jade packages. I cross-posted https://github.com/ros-simulation/gazebo_ros_pkgs/issues/424 - or is it the fault of rviz/Camera and opencv, and every node calling opencv camera projection functions should rotate first? Or should every node on either side have options to support either frame? (Or do options exist already and I've missed them?) My short term solution is going to be to republish every frame out of gazebo with a rotated camera frame in the header (and the urdf/xacro can create the corrected frame, or it could be sent to tf from the same republishing node).
Originally posted by lucasw on ROS Answers with karma: 8729 on 2016-04-21 Post score: 3

Answer: The xacro needs to create the optical frame like this, and the sensor uses it for frameName:

<!-- generate an optical frame http://www.ros.org/reps/rep-0103.html#suffix-frames
     so that ros and opencv can operate on the camera frame correctly -->
<joint name="camera_optical_joint" type="fixed">
  <!-- these values have to be these values otherwise the gazebo camera image
       won't be aligned properly with the frame it is supposedly originating from -->
  <origin xyz="0 0 0" rpy="${-pi/2} 0 ${-pi/2}"/>
  <parent link="camera_link"/>
  <child link="camera_link_optical"/>
</joint>

<link name="camera_link_optical">
</link>

<gazebo reference="camera_link">
  <sensor type="camera" name="camera1">
    ...
    <plugin name="camera_controller" filename="libgazebo_ros_camera.so">
      ...
      <frameName>camera_link_optical</frameName>
      ...

This shows the correctly generated optical frame - the Camera overlay RobotModel arm is seamless with the gazebo camera image: There is a PR for gazebo_ros_demos to get this fix in; since that is where the basic gazebo + ros tutorial points, it really should be working correctly. https://github.com/ros-simulation/gazebo_ros_demos/pull/15 It's possible the other gazebo ros sensors need to be handled similarly (but maybe the depth sensors had this solved within the plugin?). Originally posted by lucasw with karma: 8729 on 2016-04-21 This answer was ACCEPTED on the original site Post score: 7 Original comments Comment by athul on 2019-03-15: Thanks a lot! This fix works very fine. Comment by martinerk0 on 2021-04-14: Can you do this purely with an SDF model? Comment by rezenders on 2021-05-13: This worked for me with the camera link having rpy="0 0 0", but I rotated the camera_link -${pi/2} in the roll axis (rpy="-${pi/2} 0 0") and the image I get in rviz is rotated to the opposite side. Am I doing something wrong or is it supposed to be like this?
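The rpy = (-pi/2, 0, -pi/2) in the optical joint can be sanity-checked numerically. This standalone sketch (not ROS code) assumes URDF's extrinsic X-Y-Z rpy convention, i.e. R = Rz(yaw) · Ry(pitch) · Rx(roll), and REP 103 body axes for the parent frame (x forward, y left, z up):

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# rpy = (-pi/2, 0, -pi/2); pitch is zero so Ry drops out.
R = matmul(rot_z(-math.pi / 2), rot_x(-math.pi / 2))

# Column j of R is optical-frame axis j expressed in the parent (body) frame.
col = lambda j: [round(R[i][j]) for i in range(3)]
optical_x, optical_y, optical_z = col(0), col(1), col(2)
# Expect: optical x = -y (right), optical y = -z (down), optical z = +x (forward),
# which is exactly the opencv/rviz optical convention.
```
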
{ "domain": "robotics.stackexchange", "id": 24424, "tags": "ros, gazebo, rviz, camera, ros-jade" }
Simple inbox functionality using JavaScript and Ajax
Question: I've been developing a simple CMS both for practical use and learning about Laravel, and I decided to implement a simple service that makes it possible for the administrators to send messages to one another. However, the finished JavaScript code, while completely functional, ended up being a bit sketchy so I'm looking for ways to improve its performance and security. Any feedback is greatly appreciated. This is the JavaScript code that sends Ajax calls to server (based on search term, whether we want the sent messages or the incoming messages) and formats and reloads the table shown in my CMS:

var page = 1;
var perPage = 10;
var lastPage = 1;
var searchTerm = '';
var category = 'inbox';

function nextPage () {
    // Incrementing the global 'page' variable
    page++;
    reloadList();
}

function prevPage () {
    // Decrementing the global 'page' variable
    page--;
    reloadList();
}

function search () {
    // Resetting the global 'page' variable
    page = 1;
    /* reloadList() automatically detects the search terms and sends the
       appropriate AJAX request based on the value of the 'searchTerm' variable */
    searchTerm = $("#searchBox").val();
    reloadList();
}

function changeCat (cat) {
    // Resetting the global 'page' variable
    page = 1;
    // Resetting the global 'searchTerm' variable
    $("#searchBox").val('');
    searchTerm = '';
    // Setting the global 'category' variable
    category = cat;
    // Hackish way of setting the correct header for the page
    $("#tab-header").html($("#"+cat+"-pill-header").html());
    // Deactivating the old pill menu item
    $("#category-pills>li.active").removeClass("active");
    // Activating the selected pill menu item
    $("#"+cat+"-pill").addClass('active');
    reloadList();
}

function toggleStar (id) {
    // Calling the server to toggle the 'starred' field for 'id'
    var data = 'id=' + id + '&_token=' + pageToken;
    $.ajax({
        type: "POST",
        url: '/admin/messages/toggle-star',
        data: data,
        success: function(data) {
            reloadList();
        },
        error: function(data) {
            showAlert('danger', 'Error', 'Something went wrong. <a href="{{ Request::url() }}">Try again</a>', 'alerts');
        }
    });
}

function deleteChecked () {
    // Getting all the checked checkboxes
    var checkedBoxes = $("#messages>tr>td>input:checkbox:checked");
    // Determining the 'url' based on if this is a soft delete or a permanent delete
    var url = '/admin/messages/trash-checked';
    if (category == 'trash')
        url = '/admin/messages/delete-checked';
    // Filling the 'deleteIds' array with the soon to be deleted message ids
    var deleteIds = [];
    checkedBoxes.each(function () {
        deleteIds.push($(this).attr('id'));
    });
    // Calling the server to delete the messages with ids inside our 'deleteIds' array
    var data = 'deleteIds=' + JSON.stringify(deleteIds) + '&_token=' + pageToken;
    $.ajax({
        type: "POST",
        url: url,
        data: data,
        success: function(data) {
            // Checking to see if the messages were deleted
            if (data == 'true') {
                // Hackish way of getting the count of unread messages
                var navCount = +($(".navbar-messages-count").html());
                checkedBoxes.each(function () {
                    // Getting the id of the deleted message
                    var id = $(this).attr('id');
                    // Determining if it was on display in the messages section in navbar
                    if ($("#navbar-message-" + id).length) {
                        // Hiding the message
                        /* We are hiding, and not removing, the element to ease the
                           task of redisplaying in case of message restore */
                        $("#navbar-message-" + id).hide();
                        // Decrementing the count of unread messages
                        navCount--;
                    }
                });
                // Updating the count of unread messages shown on numerous places in the panel
                $(".navbar-messages-count").html(navCount);
            }
            reloadList();
        },
        error: function(data) {
            showAlert('danger', 'Error', 'Something went wrong. <a href="{{ Request::url() }}">Try again</a>', 'alerts');
        }
    });
}

function restoreChecked () {
    // Getting all the checked checkboxes
    var checkedBoxes = $("#messages>tr>td>input:checkbox:checked");
    // Filling the 'restoreIds' array with the soon to be restored message ids
    var restoreIds = [];
    checkedBoxes.each(function () {
        restoreIds.push($(this).attr('id'));
    });
    // Calling the server to restore the messages with ids inside our 'restoreIds' array
    var data = 'restoreIds=' + JSON.stringify(restoreIds) + '&_token=' + pageToken;
    $.ajax({
        type: "POST",
        url: '/admin/messages/restore-checked',
        data: data,
        success: function(data) {
            // Checking to see if the messages were restored
            if (data == 'true') {
                // Hackish way of getting the count of unread messages
                var navCount = +($(".navbar-messages-count").html());
                checkedBoxes.each(function () {
                    // Getting the id of the restored message
                    var id = $(this).attr('id');
                    // Determining if it was on display in the messages section in navbar before getting deleted
                    if ($("#navbar-message-" + id).length) {
                        // Redisplaying the message
                        $("#navbar-message-" + id).show();
                        // Incrementing the count of unread messages
                        navCount++;
                    }
                });
                // Updating the count of unread messages shown on numerous places in the panel
                $(".navbar-messages-count").html(navCount);
            }
            reloadList();
        },
        error: function(data) {
            showAlert('danger', 'Error', 'Something went wrong. 
<a href="{{ Request::url() }}">Try again</a>', 'alerts'); } }); } function reloadList () { // Emptying out all the table rows $("#messages").html(''); // Checking to see if we're on the trash pill item in order to add or remove the restore button accordingly if (category == 'trash') { if (!$("#restore-checked").length) { var restoreButtonHtml = '<button type="button" class="btn btn-default btn-sm" id="restore-checked" onclick="restoreChecked()"><i class="fa fa-rotate-left"></i></button>'; $("#mailbox-controls").append(restoreButtonHtml); } } else { if ($("#restore-checked").length) $("#restore-checked").remove(); } var i = 0; // Getting information to build rows for our table getPageRows(page, perPage, '/admin/messages/show-'+category, searchTerm, pageToken, function (data) { lastPage = data['last_page']; rowData = data['data']; // If messages were available if (rowData.length == 0) { var tableRow = '<tr><td style="text-align:center;">No data to show</td></tr>'; $("#messages").append(tableRow); } // Looping through available messages for (i = 0; i < rowData.length; i++) { // Making a table row for each message var tableRow = makeMessageRow(rowData[i]['id'], rowData[i]['starred'], rowData[i]['toOrFrom'], rowData[i]['read'], rowData[i]['subject'], rowData[i]['sent']); $("#messages").append(tableRow); } // Disabling the previous page button if we're at page one if (page == 1) { $("#prev_page_button").prop('disabled', true); } else { $("#prev_page_button").prop('disabled', false); } // Disabling the next page button if we're at the last page if (page == lastPage) { $("#next_page_button").prop('disabled', true); } else { $("#next_page_button").prop('disabled', false); } }); } // Gets paginated, searched and categorized messages from server in order to show in the inbox section in the panel function getPageRows (page, perPage, url, search, token, handleData) { // Calling the server to get 'perPage' of messages var data = 'page=' + page + '&perPage=' + perPage + 
'&search=' + search + '&_token=' + token; $.ajax({ type: "POST", url: url, data: data, success: function(data) { handleData(data); }, error: function(data) { showAlert('danger', 'Error', 'Something went wrong. <a href="{{ Request::url() }}">Try again</a>', 'alerts'); } }); } // Makes a table row based on given information for the inbox section in the panel function makeMessageRow (id, starred, sender_name, read, subject, sent) { var tableRow = '<tr><td><input type="checkbox" id="'+id+'"></td><td class="mailbox-star">'; if (starred) { tableRow += '<a href="javascript:void(0)" onclick="toggleStar('+id+')"><i class="fa fa-star text-yellow"></i></a>'; } else { tableRow += '<a href="javascript:void(0)" onclick="toggleStar('+id+')"><i class="fa fa-star-o text-yellow"></i></a>'; } tableRow += '</td><td class="mailbox-name"><a href="#">'+sender_name+'</a>'; if (!read) { tableRow += '<span class="label label-primary message-label">unread</span>'; } tableRow += '</td><td class="mailbox-subject">'+subject+'</td><td class="mailbox-date">'+sent+'</td>'; return tableRow; } Answer: Style guide I'd recommend picking a style guide to keep all of your whitespace consistent. I've formatted yours according to semistandard. Having a style guide could be considered a form of bike-shedding, but it lets you have your IDE focus on making your code readable while you can just focus on making it work. Try to limit the use of jQuery You're using jQuery quite a lot in your code, namely to access elements. If you can afford to (and, really, you should be able to), use standard JavaScript to access DOM elements. One of the main benefits of using the standard methods is that the browser will cache repeated accesses to document.getElementById() cheaply: jQuery won't do this (at least, not the last time I checked). Comments You have some comments that are describing what you're doing, like: // Incrementing the global 'page' variable. 
These bring absolutely no value to your code; it's pretty obvious when you see the ++ operator that you're doing that. Stick to having your comments describe why they are doing what they are doing, rather than what they are doing. Conversely, as your script is quite long you should try and add some descriptions to each function (if it is not already obvious what they do) so that other people (including yourself) understand what your code is doing in 6 months when you have to come to maintain it. We have a standard format for that called jsdoc. There are other versions of this standard, but they all use the same sort of format.

Double equals

Use triple equals instead. Here is why. You almost never want to use the double equals and many linters will yell at you for using it.

Limit use of variables to their appropriate scope

One of your variables (var i = 0) is initialised outside of a loop, but only used inside that loop. It's best practice to try and limit the amount of scope a variable touches (especially a mutable one) as it makes it easier to reason about where that variable is used.

javascript:void(0)/onclick

No! This breaks semantic HTML. If you have <a href='javascript:void(0)'>, what you actually want is a <button> instead. A link and a button have two very different semantics. Do not use inline JavaScript like onclick: use event handlers instead. Any good content security policy (which helps protect against XSS) will disable inline JavaScript and thus those attributes will not work.

Generating HTML elements

I'd strongly recommend generating HTML elements based on template tags in your HTML body and using Document.cloneNode instead of pulling together JavaScript strings in your code to help with separation of concerns. This would also work very nicely with something like lodash's template elements. I did originally start refactoring your code to take into account all of these things, but unfortunately it just started taking far too much time.
There's a lot of room for improvement here, but I hope what I've given you helps. I'd strongly suggest trying to reduce the amount of usage of jQuery and refactoring your templates out of your JavaScript, though. This code is very long and is quite hard to follow and it will easily become that sort of code that in 6 months you wish you could just scrap it and start over. This is primarily because your view logic is very intertwined with your business logic.
{ "domain": "codereview.stackexchange", "id": 20902, "tags": "javascript, ajax, laravel" }
Parsing of text file to a table
Question: I was able to make a program that parse my samtools output file into a table that can be viewed easily in excel. While I was able to get the final result, is there a recommended improvements that be done for my code. I am trying to improve my python skills and transferring over from C and C++. The strategy I used was to split the data I need based on the "+" sign and make it into an array. Then select the elements of the array that were the information that I needed and write it to my file. My example input: 15051874 + 0 in total (QC-passed reads + QC-failed reads) 1998052 + 0 secondary 0 + 0 supplementary 0 + 0 duplicates 13457366 + 0 mapped (89.41% : N/A) 13053822 + 0 paired in sequencing 6526911 + 0 read1 6526911 + 0 read2 10670914 + 0 properly paired (81.75% : N/A) 10947288 + 0 with itself and mate mapped 512026 + 0 singletons (3.92% : N/A) 41524 + 0 with mate mapped to a different chr 31302 + 0 with mate mapped to a different chr (mapQ>=5) My output: FileName Total Secondary Supplementary duplicates mapped paired in sequencing read1 read2 properly paired with itself and mate mapped singletons with mate mapped to a different chr with mate mapped to a different chr (mapQ>=5) 10_HK_S22.merged.samtools.flag.txt 26541257 2332283 0 0 22895440 24208974 12104487 12104487 19003826 19632880 930277 69030 52261 My Program: outFile = open("output.count.txt", "w+") #windows platform add the r os.chdir(r"Susceptible\featurecounts") #open the output file to be able to write output. outFile.write("FileName\tTotal\tSecondary\tSupplementary\tduplicates\tmapped\tpaired in sequencing\tread1\t" "read2\tproperly paired\twith itself and mate mapped\tsingletons\twith mate mapped to a different chr\twith mate mapped to a different chr (mapQ>=5)\n") #Iterate through files in directory with the following ending for file in glob.glob(".flag.txt"): #open file after retrieving the name. 
with open(file, 'r') as counts_file: #empty list/array for storing the outputs list = [] #add the file name to array. list.append(file) #get values from output file. for line in counts_file: list.append(line.split('+')[0]) #write list to file for item in list: outFile.write("%s\t" % item) #write a newline outFile.write("\n") #close the output file outFile.close() Answer: Use with ... as ...: statements to open files, and automatically close them. Then you don't have to clutter up your program with explicit close statements. outFile = open("output.count.txt", "w+") # ... code here #close the output file outFile.close() Becomes: with open("output.count.txt", "w+") as outFile: # ... code here This is ugly and unreadable: outFile.write("FileName\tTotal\tSecondary\tSupplementary\tduplicates\tmapped\tpaired in sequencing\tread1\t" "read2\tproperly paired\twith itself and mate mapped\tsingletons\twith mate mapped to a different chr\twith mate mapped to a different chr (mapQ>=5)\n") The \t runs into the next field name, so the eye sees "tTotal". It would be better to actually list your field names in a readable form, and let the computer properly separate them: fields = ["FileName", "Total", "Secondary", "Supplementary", "duplicates", "mapped", "paired in sequencing", "read1", "read2", "properly paired", "with itself and mate mapped", "singletons", "with mate mapped to a different chr", "with mate mapped to a different chr (mapQ>=5)"] outFile.write("\t".join(fields) + '\n') Looping through one iterable, processing each one and creating a new list be often done cleaner using list comprehension: list = [] #add the file name to array. list.append(file) #get values from output file. 
for line in counts_file: list.append(line.split('+')[0]) Could become (without the "file" at the start of the list): values = [ line.split('+')[0] for line in counts_file ] But you take the resulting list and add a \t character between each value, so maybe instead: values = "\t".join( line.split('+')[0] for line in counts_file ) Now, you want to print out the values to the outFile, with the file at the start. f-strings are a new feature in Python. They let you format a string with local variables interpolated into the string. This makes it easy: outFile.write(f"{file}\t{values}\n") As a bonus, each line doesn't end with a trailing tab character. Resulting code would be something like: with open("output.count.txt", "w+") as outFile: fields = ["FileName", "Total", "Secondary", "Supplementary", "duplicates", "mapped", "paired in sequencing", "read1", "read2", "properly paired", "with itself and mate mapped", "singletons", "with mate mapped to a different chr", "with mate mapped to a different chr (mapQ>=5)"] outFile.write("\t".join(fields) + '\n') for file in glob.glob("*.flag.txt"): with open(file, 'r') as counts_file: values = "\t".join( line.split('+')[0] for line in counts_file ) outFile.write(f"{file}\t{values}\n")
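As a quick sanity check of the join-based approach, here is a self-contained sketch (the `io.StringIO` stand-in for a flagstat file and the added `.strip()` are mine, not from the original post; `.strip()` also drops the space that `split('+')` leaves before the `+`):

```python
import io

# Stand-in for one samtools flagstat file (first three lines of the sample input)
counts_file = io.StringIO(
    "15051874 + 0 in total (QC-passed reads + QC-failed reads)\n"
    "1998052 + 0 secondary\n"
    "0 + 0 supplementary\n"
)

# Everything before the first '+' is the QC-passed count for that line
values = "\t".join(line.split('+')[0].strip() for line in counts_file)
print(values)  # -> '15051874\t1998052\t0' (tab-separated)
```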
{ "domain": "codereview.stackexchange", "id": 32173, "tags": "python, python-3.x" }
How should classification be done for a very small data set?
Question: I am looking at data from the London Data Store based on social characteristics between London boroughs. Since there are only about 30 London boroughs, the data sets I am looking at are naturally very small. For example, I might be fitting regression/correlations to a plot of about 30 points. What are appropriate ways to conduct classification on such small data sets, and why? 'Why' is important. I was thinking of something like SVM, or Naive Bayes. Or regression if the data is continuous. What are very inappropriate ways to conduct classification here? Answer: I don't think you need a classification algorithm at all; with data this small you can often use your basic understanding of the data and domain knowledge to do the classification. Because the number of data points is so low, a model cannot give you good, generalised results. Even if you apply a complex algorithm like an SVM or a neural network, it is of little use, as there is simply too little data. If you still want to apply a machine learning algorithm, then try Naive Bayes or a decision tree: these are basic, simple algorithms and can do the job.
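To make the "simple models only" advice concrete, a standard way to get an honest accuracy estimate out of ~30 points is leave-one-out cross-validation (not mentioned in the answer, but common practice at this sample size). A pure-Python sketch on made-up borough-like data, using a nearest-centroid rule as the simple classifier:

```python
import random

random.seed(0)

# Hypothetical stand-in for ~30 boroughs: 2 features, 2 classes
data = ([([random.gauss(0, 1), random.gauss(0, 1)], 0) for _ in range(15)]
        + [([random.gauss(3, 1), random.gauss(3, 1)], 1) for _ in range(15)])

def nearest_centroid(train, x):
    """Predict the class whose training-set centroid is closest to x."""
    centroids = {}
    for label in {c for _, c in train}:
        pts = [p for p, c in train if c == label]
        centroids[label] = [sum(dim) / len(pts) for dim in zip(*pts)]
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(centroids[c], x)))

# Leave-one-out CV: each of the 30 points is held out once,
# so every fold still trains on 29 points.
correct = sum(nearest_centroid(data[:i] + data[i + 1:], x) == y
              for i, (x, y) in enumerate(data))
accuracy = correct / len(data)
print(f"LOO accuracy: {accuracy:.2f}")
```

With so few points, every single held-out prediction moves the estimate by ~3%, which is itself a good illustration of why complex models are hard to justify here.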
{ "domain": "datascience.stackexchange", "id": 2296, "tags": "classification, dataset" }
remove gravity from acceleration on Myo armband IMU measurements
Question: I'm working with the Myo armband through the myo_ros package. The device is able to provide IMU measurements comprising orientation, linear acceleration and angular velocity. The orientation is expressed wrt an (unknown) reference frame, chosen when the device is turned on. We can refer to this frame as myo_ref. The linear acceleration is expressed wrt the myo_raw frame, which is a north-west-up moving frame attached to the device. I want to manually remove the gravity component from the accelerometer data. These are the steps I'm doing:
- calibration: I record the orientation when the accelerometer measures +9.81 over the z-axis (so I'm sure the device is aligned with the Earth's z-axis pointing upward). This orientation, let's call it q_ref2aligned, is used to publish a static transformation, describing the new frame myo_aligned wrt the frame myo_ref;
- each IMU measurement has an orientation, let's call it q_ref2raw, which expresses the current pose of the armband wrt the frame myo_ref;
- To the best of my knowledge, the inverse quaternion of q_ref2aligned, that is q_aligned2ref, describes the transformation from the frame myo_aligned to the frame myo_ref;
- q_aligned2ref * q_ref2raw = q_aligned2raw should represent the current orientation of the armband wrt the frame aligned with the Earth's z-axis, right?
- if lin_acc is the acceleration recorded in the current IMU measurement (so wrt the myo_raw frame) and G = [0, 0, 9.81] is the gravity vector, if I multiply lin_acc by q_aligned2raw and then subtract G I should be able to remove the gravity component, correct?

To accomplish this, I first turn q_aligned2raw into a rotation matrix M with tf.transformations.quaternion_matrix, then I use the matrix-vector multiplication with lin_acc and finally just subtract G. Am I missing something? This approach fails. Here are some experiments: 1.
IMU lin_acc reading [x, y, z]: [-0.32561143, -0.80924016, 9.88805286]
expected lin_acc after rotation: [~0, ~0, ~9.81]
observed lin_acc after rotation: [-1.76936953, -4.4546028, 8.69254434]
2. IMU lin_acc reading [x, y, z]: [-0.19153613, -0.01915361, -9.62947908]
expected lin_acc after rotation: [~0, ~0, ~9.81]
observed lin_acc after rotation: [1.58807182, 9.41955642, -1.23040848]
3. IMU lin_acc reading [x, y, z]: [-0.09576807, -9.61990227, 2.36068284]
expected lin_acc after rotation: [~0, ~0, ~9.81]
observed lin_acc after rotation: [-8.92865455, -4.05394425, 1.40327425]
4. IMU lin_acc reading [x, y, z]: [-0.36391865, 9.62947908, 0.70389529]
expected lin_acc after rotation: [~0, ~0, ~9.81]
observed lin_acc after rotation: [-8.56518971, 3.71455092, -2.48885704]
5. IMU lin_acc reading [x, y, z]: [9.60553706e+00, 4.78840332e-03, 9.57680664e-03]
expected lin_acc after rotation: [~0, ~0, ~9.81]
observed lin_acc after rotation: [1.43719352, 7.26609646, -6.11594423]
6. IMU lin_acc reading [x, y, z]: [-10.07480059, -0.16280571, 0.09576807]
expected lin_acc after rotation: [~0, ~0, ~9.81]
observed lin_acc after rotation: [1.86515326, 7.72080671, -6.20061538]
Answer: I see this kind of question a lot, and my answer is always the same - use the Madgwick filter. Here's the original site but if it's not working here's another and the Github page. It's PhD-level work that's available, for free, already written, in C, C#, and Matlab. You don't need to implement your own calibration routine, and:
- The odds of your accelerometer reading exactly 9.8100000 on the z axis are very small, and even if they did
- You're not checking accelerations on the other axes, so you aren't discriminating between gravity and motion, and even if you were
- You don't appear to be taking any angular velocities into account, so it's not clear how you're developing q_ref2raw, which in turn is critical to your conversion.
For reference, I took the magnitude (norm) of all of your example cases and I got the following: 9.9265, 9.6314, 9.9058, 9.5520, 9.6055, 10.077. None of these are 9.81, and so your conversion will never get any of those readings to [0, 0, (+/-)9.81]. Maybe there are readings on the other axes? I do get what you mean, though, in that the 9.81-ish readings aren't in the z position, but your algorithm isn't provided in detail here and I don't think it's suitable anyways. Have you tried troubleshooting to determine when/if you're hitting the 9.81 on the z-axis? You may be looking at stale/uninitialized rotations. Whatever the case, again, Madgwick has you covered. Use the free, open-source algorithm that works.
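For reference, the gravity-subtraction step itself is simple once a trustworthy orientation is available — which, per the above, should come from a fusion filter such as Madgwick's rather than a one-shot calibration. A dependency-free sketch (the frame names follow the question; quaternion sign/order conventions vary between libraries, so verify against a known pose, and whether you need the quaternion or its inverse depends on that convention):

```python
def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z).

    Uses v' = v + 2*w*(u x v) + 2*u x (u x v), with u the vector part of q.
    """
    w, x, y, z = q
    u = (x, y, z)

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    uv = cross(u, v)
    uuv = cross(u, uv)
    return tuple(v[i] + 2 * (w * uv[i] + uuv[i]) for i in range(3))

G = (0.0, 0.0, 9.81)

# Stationary sensor already aligned with the Earth frame (identity quaternion):
q_aligned2raw = (1.0, 0.0, 0.0, 0.0)
lin_acc = (0.0, 0.0, 9.81)

earth_acc = quat_rotate(q_aligned2raw, lin_acc)
motion = tuple(a - g for a, g in zip(earth_acc, G))
print(motion)  # -> (0.0, 0.0, 0.0): gravity removed for a stationary sensor
```

Note that, as the answer points out, if the orientation estimate is stale or built from a single noisy calibration reading, this subtraction will leave exactly the kind of large residuals seen in the experiments above.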
{ "domain": "robotics.stackexchange", "id": 2558, "tags": "ros, imu, accelerometer" }
Separation between coarse correlated equilibria and correlated equilibria
Question: I am looking for examples of techniques for proving price of anarchy bounds that have the power to separate the price of anarchy over coarse correlated equilibria (the limiting set of no-external-regret dynamics) from the price of anarchy over correlated equilibria (the limiting set of no-swap-regret dynamics). Are natural separations of this type known? One obstruction towards separating these two classes is that the most natural (and common) way to prove price of anarchy bounds is to observe only that at equilibrium, no player has any incentive to deviate to playing his action at OPT, and to somehow use this to connect the social welfare at some configuration to the social welfare of OPT. Unfortunately, any proof that the price of anarchy over coarse correlated equilibria is small that only considers deviations of each player to a single alternative action (say the action from OPT) necessarily also holds for correlated equilibria, and so cannot provide a separation. This is because the only difference between a coarse correlated equilibrium and a correlated equilibrium is the ability of a player in a correlated equilibrium to simultaneously consider multiple deviations, conditioned on his signal of the play profile drawn from the equilibrium distribution. Are such separations known? Answer: Fix M>>1>>e and look at the following two player coordination game (both players get the same utility):

M   | 1+e | 2e  | e
1+e | 1   | e   | 0
2e  | e   | M   | 1+e
e   | 0   | 1+e | 1

The second and fourth row and column are strictly dominated so any correlated equilibrium cannot have them in its support, thus it would be on the sub-game:

M  | 2e
2e | M

for which every correlated equilibrium would give each player more than M/2 utility. On the other hand, consider the joint probability distribution giving probability 1/2 to each of the 1's, and thus utility 1 to each player. The claim is that this is a coarse equilibrium.
In a coarse equilibrium the possible deviations of the row player are to one of the pure strategies independently of the outcome of the joint distribution. Now if it is only known that the column player is mixing evenly between the 2nd and 4th column, then the maximum utility the row player can get is 0.5+e < 1, so deviation is not profitable.
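The claim — and the failure of the correlated-equilibrium condition — is easy to verify numerically for concrete values of M and e (my choice below; the game is symmetric, so checking the row player suffices):

```python
M, e = 100.0, 0.01

# The coordination game from the answer (both players receive u[i][j])
u = [
    [M,     1 + e, 2 * e, e    ],
    [1 + e, 1,     e,     0    ],
    [2 * e, e,     M,     1 + e],
    [e,     0,     1 + e, 1    ],
]

# Proposed joint distribution: probability 1/2 on each of the two "1" cells
dist = {(1, 1): 0.5, (3, 3): 0.5}

eq_payoff = sum(p * u[i][j] for (i, j), p in dist.items())

# Coarse (external-regret) deviation: commit to a fixed row k up front
coarse_dev = max(sum(p * u[k][j] for (_, j), p in dist.items())
                 for k in range(4))

# Swap deviation: after seeing the recommendation, switch to the best row
# (the column is determined by the recommendation here)
swap_dev = sum(p * max(u[k][j] for k in range(4))
               for (_, j), p in dist.items())

print(eq_payoff, coarse_dev, swap_dev)
# eq_payoff = 1.0; no fixed-row deviation beats it (coarse_dev = 0.51),
# but the swap deviation does (swap_dev = 1.01): a coarse correlated
# equilibrium that is not a correlated equilibrium.
```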
{ "domain": "cstheory.stackexchange", "id": 284, "tags": "gt.game-theory, online-learning" }
Rust function to read the first line of a file, strip leading hashes and whitespace
Question: I’m writing a Rust function for getting a title based on the first line of a file. The files are written in Markdown, and the first line should be a heading that starts with one or more hashes, followed by some text. Examples: # This is a top-level heading ## This is a second-level heading #### Let's jump straight to the fourth-level heading I want to throw away the leading hashes, discard any leading/trailing whitespace, and return the remaining string. Example outputs: "This is a top-level heading" "This is a second-level heading" "Let's jump straight to the fourth-level heading" Assume that, for now, I’m not worried about edge cases like a first line that’s only whitespace and hashes, or a file whose first line is pathologically long. This is the program I’ve written to do it: use std::fs; use std::io::{BufRead, BufReader}; use std::path::PathBuf; /// Get the title of a Markdown file. /// /// Reads the first line of a Markdown file, strips any hashes and /// leading/trailing whitespace, and returns the title. fn title_string(path: PathBuf) -> String { // Read the first line of the file into `title`. let file = match fs::File::open(&path) { Ok(file) => file, Err(_) => panic!("Unable to read title from {:?}", &path), }; let mut buffer = BufReader::new(file); let mut first_line = String::new(); let _ = buffer.read_line(&mut first_line); // Where do the leading hashes stop? let mut last_hash = 0; for (idx, c) in first_line.chars().enumerate() { if c != '#' { last_hash = idx; break } } // Trim the leading hashes and any whitespace let first_line: String = first_line.drain(last_hash..).collect(); let first_line = String::from(first_line.trim()); first_line } fn main() { let title = title_string(PathBuf::from("./example.md")); println!("The title is '{}'", title); } I’m fairly new to Rust, and I’m sure I’m doing stuff that isn’t as optimal or idiomatic as it could be. Particular questions: Is this idiomatic Rust? 
Is there a better way to strip leading characters from a string? I looked in the documentation for std::string::String and couldn’t see anything. The String::from() feels a bit inefficient. Is there anything unsafe that could easily crash (the panic! aside)? Answer: Clippy returns a helpful suggestion: warning: returning the result of a let binding from a block. Consider returning the expression directly. #[warn(let_and_return)] on by default |> |> first_line |> ^^^^^^^^^^ note: this expression can be directly returned |> |> let first_line = String::from(first_line.trim()); |> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Don't ignore Results by using let _ =! You should always return them or use expect or unwrap. The loop could be simplified by using take_while: let last_hash = first_line.chars().take_while(|&c| c == '#').count(); However, treating one character as one byte is a bad idea because strings are UTF-8 encoded. UTF-8 is a variable-length encoding. You can use char_indices instead. It's slightly more efficient to take a slice of the string, instead of drain and collect here. It avoids one extra allocation. Change your function to accept any type that implements BufRead; this will allow you to write easier unit tests. Actually add those unit tests! There's no need to create a PathBuf, you aren't pushing path components on. You could just make a &Path, but most functions accept any type that can be converted to a Path (AsRef<Path>). &str implements that. There's no need to take a reference to something being passed to println! or panic!. These macros automatically take a reference. use std::fs; use std::io::BufReader; use std::io::prelude::*; /// Get the title of a Markdown file. /// /// Reads the first line of a Markdown file, strips any hashes and /// leading/trailing whitespace, and returns the title. 
fn title_string<R>(mut rdr: R) -> String where R: BufRead, { let mut first_line = String::new(); rdr.read_line(&mut first_line).expect("Unable to read line"); // Where do the leading hashes stop? let last_hash = first_line .char_indices() .skip_while(|&(_, c)| c == '#') .next() .map_or(0, |(idx, _)| idx); // Trim the leading hashes and any whitespace first_line[last_hash..].trim().into() } /// Read the first line of the file into `title`. fn main() { let path = "./example.md"; let file = match fs::File::open(path) { Ok(file) => file, Err(_) => panic!("Unable to read title from {}", path), }; let buffer = BufReader::new(file); let title = title_string(buffer); println!("The title is '{}'", title); } #[cfg(test)] mod test { use super::title_string; #[test] fn top_level_heading() { assert_eq!(title_string(b"# This is a top-level heading".as_ref()), "This is a top-level heading") } #[test] fn second_level_heading() { assert_eq!(title_string(b"## This is a second-level heading".as_ref()), "This is a second-level heading"); } #[test] fn fourth_level_heading() { assert_eq!(title_string(b"#### Let's jump straight to the fourth-level heading".as_ref()), "Let's jump straight to the fourth-level heading"); } } You should also investigate using a real Markdown parser to avoid nasty pitfalls.
{ "domain": "codereview.stackexchange", "id": 21118, "tags": "beginner, io, rust" }
Momentum space in spherical coordinate
Question: In statistical mechanics we write the volume element in momentum space as $4\pi p^2 dp$ when counting microstates in phase space, but I don't understand why we write it in spherical polar coordinates. Is it always suitable to write it in spherical polar coordinates? I read somewhere that we can write it this way because momentum in every direction is equally likely to occur, so that $\langle p_x ^2\rangle =\langle p_y^2\rangle = \langle p_z^2\rangle$, or something like this, but I don't understand it completely. So please explain this. Answer: It's not a good idea to think of this in general terms, i.e. "in statistical mechanics, this is what we do". We do it when it's a useful mathematical trick to do it. That's it. In your specific case, I suppose you're talking about a Hamiltonian that depends only on $|p|^2$, and not on $p$'s components, due to isotropy of the system. So, if you're dealing with an integral of the form $$ \int e^{-H(|p|^2)}d^3p$$ it's smarter to turn it into $$ \int e^{-H(|p|^2)}4\pi |p|^2\,d|p|.$$ This is purely maths, no physics here. If this doesn't answer your question, then we may need to know more about the specific calculation you're trying to solve. But yes, your intuition is correct, this is due to the fact that your system doesn't differentiate between $x, y, z$ directions.
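The change of variables is easy to check numerically for a concrete isotropic case, e.g. $H = |p|^2$, where the full 3-D Cartesian integral $\int e^{-|p|^2}\, d^3p$ equals $\pi^{3/2}$; a quick sketch:

```python
import math

# Radial form of the isotropic integral: int_0^inf e^{-p^2} 4*pi*p^2 dp,
# approximated by a simple Riemann sum (the integrand is negligible past p ~ 10)
dp = 1e-3
radial = sum(4 * math.pi * p * p * math.exp(-p * p) * dp
             for p in (i * dp for i in range(1, 20001)))

exact = math.pi ** 1.5  # value of the 3-D Cartesian Gaussian integral
print(radial, exact)    # both ~ 5.568
```

The agreement is just the statement that for an integrand depending only on $|p|$, the three Cartesian integrals collapse to one radial integral weighted by the spherical shell volume $4\pi p^2\,dp$.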
{ "domain": "physics.stackexchange", "id": 49230, "tags": "statistical-mechanics" }
What is the difference between $\frac{DA^\mu}{D\lambda}$ and $\frac{DA^\mu}{d\lambda}$?
Question: I earlier asked this question How can you have $\frac{DA^\mu}{d\tau}$? I am now wondering: What is the difference between $\frac{DA^\mu}{D\lambda}$ and $\frac{DA^\mu}{d\lambda}$? In the linked question the answer and comments explain what $\frac{DA^\mu}{d\lambda}$ and some research tells me that $\frac{DA^\mu}{D\lambda}$ is called the intrinsic or total derivative. The forms of the two derivatives seem to be the same though, if this is the case why the use of different notation? Answer: $\frac{DA^\mu}{D\lambda}=\frac{DA^\mu}{d\lambda}$ are two notations for the same object.
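For reference, in most textbooks both notations abbreviate the same covariant derivative along the curve $x^\mu(\lambda)$ (a standard definition, not taken from the linked question):

```latex
\frac{DA^\mu}{D\lambda} \;=\; \frac{DA^\mu}{d\lambda}
\;\equiv\; \frac{dA^\mu}{d\lambda}
  + \Gamma^\mu{}_{\nu\sigma}\, A^\nu \,\frac{dx^\sigma}{d\lambda}
```

which is why the two forms look identical: the choice between $D\lambda$ and $d\lambda$ in the denominator is purely a matter of an author's notational taste.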
{ "domain": "physics.stackexchange", "id": 24782, "tags": "general-relativity, notation, differentiation" }
Could a person weigh so much as to cause gravitational lensing?
Question: I'm a bit familiar with the concept of gravitational lensing. I also believe that all objects have some gravitational force, even if it's minuscule. Would an object as massive as a person cause any gravitational lensing? Even an extremely minuscule amount from any position? Bonus points if you somehow quantify the magnitude of this lensing. I hope the title is ok! It adds a bit of humor and is quite literally the question at hand. We can even assume a 1000 lbs person if it helps. Answer: Most physics phenomena are continuous - that is they happen in all situations to varying degrees. From Wikipedia, the angle deflection from gravitational lensing is $$ \theta=\frac{4GM}{rc^2}=\frac{4G(100\text{ kg})}{(0.25\text{ m})c^2}=10^{-24}\text{ radians} $$ where $r$ is the distance of closest approach between the light beam and the massive object (can't get so close that the light beam is simply blocked). That's a pretty small angle deflection. A laser pointed at a target from very far away would be deflected $10^{-22}$ m away from the center of the target if said person stood next to the beam's path 100 m away from the target.
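The arithmetic is easy to reproduce; a quick check of both numbers (constants rounded, so only the order of magnitude matters):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
M = 100.0       # kg, the mass used in the answer
r = 0.25        # m, closest approach of the beam to the person

theta = 4 * G * M / (r * c ** 2)   # deflection angle in radians
offset = theta * 100.0             # lateral miss after 100 m of travel

print(f"theta  ~ {theta:.1e} rad")  # ~1.2e-24 rad
print(f"offset ~ {offset:.1e} m")   # ~1.2e-22 m
```

Both values match the answer to within rounding: roughly $10^{-24}$ radians of deflection and a $10^{-22}$ m shift at the target, many orders of magnitude below anything measurable.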
{ "domain": "physics.stackexchange", "id": 95611, "tags": "general-relativity, spacetime, curvature, estimation, gravitational-lensing" }
kinect camera not detected
Question: I wanted to set up the kinect with ROS. I am running ROS-electric. I installed openni_kinect using sudo apt-get install ros-electric-openni-kinect. I then ran roslaunch openni_launch openni.launch to see if it would launch. Now when I run this command it returns with the everything shown below. aksat@ubuntu:~/ros_workspace/ros_repos$ roslaunch openni_launch openni.launch ... logging to /home/aksat/.ros/log/cf0669fa-490d-11e1-b4ab-0026828752b8/roslaunch-ubuntu-24440.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server http://ubuntu:48521/ SUMMARY ======== PARAMETERS * /rosdistro * /camera/driver/rgb_frame_id * /camera/driver/rgb_camera_info_url * /camera/depth_registered/rectify_depth/interpolation * /camera/driver/depth_frame_id * /camera/depth/rectify_depth/interpolation * /rosversion * /camera/driver/device_id * /camera/driver/depth_camera_info_url NODES /camera/depth/ rectify_depth (nodelet/nodelet) metric_rect (nodelet/nodelet) metric (nodelet/nodelet) disparity (nodelet/nodelet) points (nodelet/nodelet) /camera/rgb/ debayer (nodelet/nodelet) rectify_mono (nodelet/nodelet) rectify_color (nodelet/nodelet) / camera_nodelet_manager (nodelet/nodelet) camera_base_link (tf/static_transform_publisher) camera_base_link1 (tf/static_transform_publisher) camera_base_link2 (tf/static_transform_publisher) camera_base_link3 (tf/static_transform_publisher) /camera/ driver (nodelet/nodelet) register_depth_rgb (nodelet/nodelet) points_xyzrgb_depth_rgb (nodelet/nodelet) /camera/ir/ rectify_ir (nodelet/nodelet) /camera/depth_registered/ rectify_depth (nodelet/nodelet) metric_rect (nodelet/nodelet) metric (nodelet/nodelet) disparity (nodelet/nodelet) auto-starting new master process[master]: started with pid [24454] ROS_MASTER_URI=http://localhost:11311 setting /run_id to cf0669fa-490d-11e1-b4ab-0026828752b8 process[rosout-1]: started with pid [24467] started 
core service [/rosout] process[camera_nodelet_manager-2]: started with pid [24479] process[camera/driver-3]: started with pid [24480] process[camera/rgb/debayer-4]: started with pid [24481] process[camera/rgb/rectify_mono-5]: started with pid [24482] [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat process[camera/rgb/rectify_color-6]: started with pid [24483] process[camera/ir/rectify_ir-7]: started with pid [24485] process[camera/depth/rectify_depth-8]: started with pid [24491] process[camera/depth/metric_rect-9]: started with pid [24496] process[camera/depth/metric-10]: started with pid [24498] process[camera/depth/disparity-11]: started with pid [24508] process[camera/depth/points-12]: started with pid [24515] process[camera/register_depth_rgb-13]: started with pid [24520] process[camera/depth_registered/rectify_depth-14]: started with pid [24522] process[camera/depth_registered/metric_rect-15]: started with pid [24532] process[camera/depth_registered/metric-16]: started with pid [24538] process[camera/depth_registered/disparity-17]: started with pid [24539] process[camera/points_xyzrgb_depth_rgb-18]: started with pid [24542] process[camera_base_link-19]: started with pid [24546] process[camera_base_link1-20]: started with pid [24549] process[camera_base_link2-21]: started with pid [24559] process[camera_base_link3-22]: started with pid [24560] [ERROR] [1327685946.423785481]: Failed to load nodelet [/camera/rgb/rectify_mono] of type [image_proc/rectify]: According to the loaded plugin descriptions the class image_proc/rectify with base class type nodelet::Nodelet does not exist. 
Declared types are depth_image_proc/convert_metric depth_image_proc/disparity depth_image_proc/point_cloud_xyz depth_image_proc/point_cloud_xyzrgb depth_image_proc/register image_view/disparity image_view/image openni_camera/OpenNINodelet openni_camera/driver pcl/BAGReader pcl/BoundaryEstimation pcl/ConvexHull2D pcl/EuclideanClusterExtraction pcl/ExtractIndices pcl/ExtractPolygonalPrismData pcl/FPFHEstimation pcl/FPFHEstimationOMP pcl/MomentInvariantsEstimation pcl/MovingLeastSquares pcl/NodeletDEMUX pcl/NodeletMUX pcl/NormalEstimation pcl/NormalEstimationOMP pcl/NormalEstimationTBB pcl/PCDReader pcl/PCDWriter pcl/PFHEstimation pcl/PassThrough pcl/PointCloudConcatenateDataSynchronizer pcl/PointCloudConcatenateFieldsSynchronizer pcl/PrincipalCurvaturesEstimation pcl/ProjectInliers pcl/SACSegmentation pcl/SACSegmentationFromNormals pcl/SegmentDifferences pcl/StatisticalOutlierRemoval pcl/TestListener pcl/TestPingPong pcl/TestTalker pcl/VFHEstimation pcl/VoxelGrid test_nodelet/ConsoleTest test_nodelet/Plus uvc_camera/CameraNodelet uvc_camera/StereoNodelet [FATAL] [1327685946.425377640]: Service call failed! [camera/rgb/rectify_mono-5] process has died [pid 24482, exit code 255]. 
log files: /home/aksat/.ros/log/cf0669fa-490d-11e1-b4ab-0026828752b8/camera-rgb-rectify_mono-5*.log [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [ INFO] [1327685948.744306297]: No devices connected.... 
waiting for devices to be connected [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [ERROR] [1327685949.849244546]: Failed to load nodelet [/camera/depth_registered/rectify_depth] of type [image_proc/rectify]: According to the loaded plugin descriptions the class image_proc/rectify with base class type nodelet::Nodelet does not exist. Declared types are depth_image_proc/convert_metric depth_image_proc/disparity depth_image_proc/point_cloud_xyz depth_image_proc/point_cloud_xyzrgb depth_image_proc/register image_view/disparity image_view/image openni_camera/OpenNINodelet openni_camera/driver pcl/BAGReader pcl/BoundaryEstimation pcl/ConvexHull2D pcl/EuclideanClusterExtraction pcl/ExtractIndices pcl/ExtractPolygonalPrismData pcl/FPFHEstimation pcl/FPFHEstimationOMP pcl/MomentInvariantsEstimation pcl/MovingLeastSquares pcl/NodeletDEMUX pcl/NodeletMUX pcl/NormalEstimation pcl/NormalEstimationOMP pcl/NormalEstimationTBB pcl/PCDReader pcl/PCDWriter pcl/PFHEstimation pcl/PassThrough pcl/PointCloudConcatenateDataSynchronizer pcl/PointCloudConcatenateFieldsSynchronizer pcl/PrincipalCurvaturesEstimation pcl/ProjectInliers pcl/SACSegmentation pcl/SACSegmentationFromNormals pcl/SegmentDifferences pcl/StatisticalOutlierRemoval pcl/TestListener pcl/TestPingPong pcl/TestTalker pcl/VFHEstimation pcl/VoxelGrid test_nodelet/ConsoleTest test_nodelet/Plus uvc_camera/CameraNodelet uvc_camera/StereoNodelet [ERROR] [1327685949.849482732]: Failed to load nodelet [/camera/ir/rectify_ir] of type [image_proc/rectify]: According to the 
loaded plugin descriptions the class image_proc/rectify with base class type nodelet::Nodelet does not exist. Declared types are depth_image_proc/convert_metric depth_image_proc/disparity depth_image_proc/point_cloud_xyz depth_image_proc/point_cloud_xyzrgb depth_image_proc/register image_view/disparity image_view/image openni_camera/OpenNINodelet openni_camera/driver pcl/BAGReader pcl/BoundaryEstimation pcl/ConvexHull2D pcl/EuclideanClusterExtraction pcl/ExtractIndices pcl/ExtractPolygonalPrismData pcl/FPFHEstimation pcl/FPFHEstimationOMP pcl/MomentInvariantsEstimation pcl/MovingLeastSquares pcl/NodeletDEMUX pcl/NodeletMUX pcl/NormalEstimation pcl/NormalEstimationOMP pcl/NormalEstimationTBB pcl/PCDReader pcl/PCDWriter pcl/PFHEstimation pcl/PassThrough pcl/PointCloudConcatenateDataSynchronizer pcl/PointCloudConcatenateFieldsSynchronizer pcl/PrincipalCurvaturesEstimation pcl/ProjectInliers pcl/SACSegmentation pcl/SACSegmentationFromNormals pcl/SegmentDifferences pcl/StatisticalOutlierRemoval pcl/TestListener pcl/TestPingPong pcl/TestTalker pcl/VFHEstimation pcl/VoxelGrid test_nodelet/ConsoleTest test_nodelet/Plus uvc_camera/CameraNodelet uvc_camera/StereoNodelet [ERROR] [1327685949.849677181]: Failed to load nodelet [/camera/rgb/debayer] of type [image_proc/debayer]: According to the loaded plugin descriptions the class image_proc/debayer with base class type nodelet::Nodelet does not exist. 
Declared types are depth_image_proc/convert_metric depth_image_proc/disparity depth_image_proc/point_cloud_xyz depth_image_proc/point_cloud_xyzrgb depth_image_proc/register image_view/disparity image_view/image openni_camera/OpenNINodelet openni_camera/driver pcl/BAGReader pcl/BoundaryEstimation pcl/ConvexHull2D pcl/EuclideanClusterExtraction pcl/ExtractIndices pcl/ExtractPolygonalPrismData pcl/FPFHEstimation pcl/FPFHEstimationOMP pcl/MomentInvariantsEstimation pcl/MovingLeastSquares pcl/NodeletDEMUX pcl/NodeletMUX pcl/NormalEstimation pcl/NormalEstimationOMP pcl/NormalEstimationTBB pcl/PCDReader pcl/PCDWriter pcl/PFHEstimation pcl/PassThrough pcl/PointCloudConcatenateDataSynchronizer pcl/PointCloudConcatenateFieldsSynchronizer pcl/PrincipalCurvaturesEstimation pcl/ProjectInliers pcl/SACSegmentation pcl/SACSegmentationFromNormals pcl/SegmentDifferences pcl/StatisticalOutlierRemoval pcl/TestListener pcl/TestPingPong pcl/TestTalker pcl/VFHEstimation pcl/VoxelGrid test_nodelet/ConsoleTest test_nodelet/Plus uvc_camera/CameraNodelet uvc_camera/StereoNodelet [ERROR] [1327685949.849873087]: Failed to load nodelet [/camera/depth/rectify_depth] of type [image_proc/rectify]: According to the loaded plugin descriptions the class image_proc/rectify with base class type nodelet::Nodelet does not exist. 
Declared types are depth_image_proc/convert_metric depth_image_proc/disparity depth_image_proc/point_cloud_xyz depth_image_proc/point_cloud_xyzrgb depth_image_proc/register image_view/disparity image_view/image openni_camera/OpenNINodelet openni_camera/driver pcl/BAGReader pcl/BoundaryEstimation pcl/ConvexHull2D pcl/EuclideanClusterExtraction pcl/ExtractIndices pcl/ExtractPolygonalPrismData pcl/FPFHEstimation pcl/FPFHEstimationOMP pcl/MomentInvariantsEstimation pcl/MovingLeastSquares pcl/NodeletDEMUX pcl/NodeletMUX pcl/NormalEstimation pcl/NormalEstimationOMP pcl/NormalEstimationTBB pcl/PCDReader pcl/PCDWriter pcl/PFHEstimation pcl/PassThrough pcl/PointCloudConcatenateDataSynchronizer pcl/PointCloudConcatenateFieldsSynchronizer pcl/PrincipalCurvaturesEstimation pcl/ProjectInliers pcl/SACSegmentation pcl/SACSegmentationFromNormals pcl/SegmentDifferences pcl/StatisticalOutlierRemoval pcl/TestListener pcl/TestPingPong pcl/TestTalker pcl/VFHEstimation pcl/VoxelGrid test_nodelet/ConsoleTest test_nodelet/Plus uvc_camera/CameraNodelet uvc_camera/StereoNodelet [FATAL] [1327685949.850351918]: Service call failed! [FATAL] [1327685949.850990073]: Service call failed! [FATAL] [1327685949.853094181]: Service call failed! [FATAL] [1327685949.853464489]: Service call failed! [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [camera/rgb/debayer-4] process has died [pid 24481, exit code 255]. log files: /home/aksat/.ros/log/cf0669fa-490d-11e1-b4ab-0026828752b8/camera-rgb-debayer-4*.log [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [camera/ir/rectify_ir-7] process has died [pid 24485, exit code 255]. log files: /home/aksat/.ros/log/cf0669fa-490d-11e1-b4ab-0026828752b8/camera-ir-rectify_ir-7*.log [camera/depth/rectify_depth-8] process has died [pid 24491, exit code 255]. 
log files: /home/aksat/.ros/log/cf0669fa-490d-11e1-b4ab-0026828752b8/camera-depth-rectify_depth-8*.log [camera/depth_registered/rectify_depth-14] process has died [pid 24522, exit code 255]. log files: /home/aksat/.ros/log/cf0669fa-490d-11e1-b4ab-0026828752b8/camera-depth_registered-rectify_depth-14*.log [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat [ERROR] [1327685950.506497003]: Failed to load nodelet [/camera/rgb/rectify_color] of type [image_proc/rectify]: According to the loaded plugin descriptions the class image_proc/rectify with base class type nodelet::Nodelet does not exist. Declared types are depth_image_proc/convert_metric depth_image_proc/disparity depth_image_proc/point_cloud_xyz depth_image_proc/point_cloud_xyzrgb depth_image_proc/register image_view/disparity image_view/image openni_camera/OpenNINodelet openni_camera/driver pcl/BAGReader pcl/BoundaryEstimation pcl/ConvexHull2D pcl/EuclideanClusterExtraction pcl/ExtractIndices pcl/ExtractPolygonalPrismData pcl/FPFHEstimation pcl/FPFHEstimationOMP pcl/MomentInvariantsEstimation pcl/MovingLeastSquares pcl/NodeletDEMUX pcl/NodeletMUX pcl/NormalEstimation pcl/NormalEstimationOMP pcl/NormalEstimationTBB pcl/PCDReader pcl/PCDWriter pcl/PFHEstimation pcl/PassThrough pcl/PointCloudConcatenateDataSynchronizer pcl/PointCloudConcatenateFieldsSynchronizer pcl/PrincipalCurvaturesEstimation pcl/ProjectInliers pcl/SACSegmentation pcl/SACSegmentationFromNormals pcl/SegmentDifferences pcl/StatisticalOutlierRemoval pcl/TestListener pcl/TestPingPong pcl/TestTalker pcl/VFHEstimation pcl/VoxelGrid test_nodelet/ConsoleTest test_nodelet/Plus uvc_camera/CameraNodelet uvc_camera/StereoNodelet [FATAL] [1327685950.507506957]: Service call failed! [camera/rgb/rectify_color-6] process has died [pid 24483, exit code 255]. 
log files: /home/aksat/.ros/log/cf0669fa-490d-11e1-b4ab-0026828752b8/camera-rgb-rectify_color-6*.log [ INFO] [1327685951.744851947]: No devices connected.... waiting for devices to be connected [ INFO] [1327685954.745357505]: No devices connected.... waiting for devices to be connected [ INFO] [1327685957.745839540]: No devices connected.... waiting for devices to be connected [ INFO] [1327685960.746328466]: No devices connected.... waiting for devices to be connected [ INFO] [1327685963.746806281]: No devices connected.... waiting for devices to be connected [ INFO] [1327685966.747305849]: No devices connected.... waiting for devices to be connected ^C[camera_base_link3-22] killing on exit [camera_base_link2-21] killing on exit [camera_base_link-19] killing on exit [camera_base_link1-20] killing on exit [camera/points_xyzrgb_depth_rgb-18] killing on exit [camera/depth_registered/disparity-17] killing on exit [camera/depth_registered/metric-16] killing on exit [camera/depth_registered/metric_rect-15] killing on exit [camera/register_depth_rgb-13] killing on exit [camera/depth/points-12] killing on exit [camera/depth/disparity-11] killing on exit [camera/depth/metric-10] killing on exit [camera/depth/metric_rect-9] killing on exit [camera/driver-3] killing on exit [camera_nodelet_manager-2] killing on exit [rosout-1] killing on exit [master] killing on exit shutting down processing monitor... ... shutting down processing monitor complete done As you can see there are a number of errors which occur for some reason and I am unsure as to why these occur? They were not there before yet they suddenly turned up after removing redundant lines from my .bashrc. 
The edited bit of my .bashrc now looks like this: source /opt/ros/electric/setup.bash source ~/ros_workspace/setup.sh source ~/ros_workspace/ros_repos/opencv2_overlay/setup.sh export ROS_PACKAGE_PATH=~/ros_workspace:$ROS_PACKAGE_PATH export ROS_PACKAGE_PATH=~/ros_workspace/ros_repos:$ROS_PACKAGE_PATH export LD_LIBRARY_PATH=/usr/include/opencv export PKG_CONFIG_PATH=/usr/include/opencv alias cv="g++ -I/usr/include/opencv -lcv -lcxcore -lcvaux -lhighgui -lm" The main error that arises is when it says [ INFO] [1327684365.007993055]: No devices connected.... waiting for devices to be connected. The Kinect is definitely connected. I have tried multiple Kinects in multiple USB ports. I also tried dmesg to check it was connected and it returned: [ 7638.116188] usb 1-2: new high speed USB device using ehci_hcd and address 52 [ 7638.249939] hub 1-2:1.0: USB hub found [ 7638.250117] hub 1-2:1.0: 3 ports detected [ 7639.009053] usb 1-2.2: new full speed USB device using ehci_hcd and address 53 [ 7640.544918] usb 1-2.1: new high speed USB device using ehci_hcd and address 54 [ 7642.080965] usb 1-2.3: new high speed USB device using ehci_hcd and address 55 so it's definitely there, and lsusb also shows that the Kinect is connected. I've also tried sudo apt-get remove ros-electric-openni-kinect and then reinstalling it, yet it still amounts to the same thing. I've been trying to get it to work for the last two days but haven't been able to get anywhere! Does anybody have any ideas? Any help is much appreciated! Thank you!! Originally posted by AksatShah on ROS Answers with karma: 59 on 2012-01-27 Post score: 0 Answer: I've solved the issue! There were two errors to solve here: First of all, the Kinect camera driver had not been set up correctly, so the package had to be re-built, and that fixed that issue. I'm not sure why it didn't build correctly the first time. Secondly, my ROS workspace had a conflict with the image_proc folder in openni. 
This, I believe, had been created while trying to create an OpenCV overlay. Thank you very much for all the help. Originally posted by AksatShah with karma: 59 on 2012-02-01 This answer was ACCEPTED on the original site Post score: 1
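A side note on the repeated `[rospack] opendir error [No such file or directory] while crawling /home/aksat/aksat` lines in the output above: rospack prints one of these for every ROS_PACKAGE_PATH entry (or directory it links to) that it cannot open, so they usually point at a stale path exported somewhere in the shell setup. A quick way to spot such dead entries (a Python sketch; the example path is assumed from the log output, not from any confirmed configuration):

```python
import os

def missing_entries(ros_package_path):
    """Return ROS_PACKAGE_PATH entries that do not exist on disk.

    rospack emits an "opendir error ... while crawling <dir>" warning for
    every directory it cannot open, so these are the paths to clean up.
    """
    return [d for d in ros_package_path.split(":") if d and not os.path.isdir(d)]

# Example shaped like the question's environment (entries are placeholders):
print(missing_entries("/opt/ros/electric/stacks:/home/aksat/aksat"))
```

Dropping dead entries from ROS_PACKAGE_PATH (or from whatever setup.sh exports them) silences the crawl warnings; it is separate from the nodelet plugin errors, which the rebuild described in the answer fixed.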
{ "domain": "robotics.stackexchange", "id": 8020, "tags": "kinect, openni-kinect" }
How to build the quantum circuit corresponding to a given unitary matrix?
Question: I have the following matrix for a circular quantum walk import numpy as np T = np.array([[0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0], ]) How to obtain the quantum circuit which does the operation given by the matrix T? Answer: In Qiskit, you can do it by simply passing your unitary matrix to the QuantumCircuit.unitary method, specifying the qubits indices your operator is acting on. In your case, T is a 32$\times$32 matrix, so it acts on a 5-qubits circuit as follows: from qiskit import QuantumCircuit num_qubits = 5 qc = QuantumCircuit(num_qubits) qc.unitary(T, qubits=range(num_qubits)) qc.draw('mpl')
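One sanity check worth doing before the `qc.unitary` call: Qiskit rejects matrices that are not unitary, and for a real 0/1 permutation matrix like `T` unitarity reduces to $T T^{T} = I$. A NumPy-only sketch, using a small cyclic-shift matrix as a stand-in for the full `T` above:

```python
import numpy as np

# Stand-in for the question's T: a 32x32 cyclic-shift permutation matrix,
# i.e. one step of a walk around a 32-node ring.
P = np.eye(32)[np.roll(np.arange(32), 1)]

# A real matrix is unitary iff P @ P.T == I; every permutation matrix passes,
# so the same check applied to T should print True before building the circuit.
print(np.allclose(P @ P.T, np.eye(32)))  # -> True
```

If the check fails, `qc.unitary` would raise anyway, but testing up front makes it clear whether the problem is the matrix or the circuit construction.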
{ "domain": "quantumcomputing.stackexchange", "id": 4672, "tags": "qiskit, quantum-gate, circuit-construction, matrix-representation" }
Please explain the shapes of the orbitals
Question: For a lot of years, I had been believing that a sphere was the most stable 3-dimensional shape. But after coming across the p, d and f-orbitals, I am unable to comprehend the fact that these orbitals have such crude shapes. Can we prove that these shapes (dumbbell and crossed dumbbell) are the regions in which the electrons would be most stable (no wave functions please, I am only in high school)? P.S.: An intuitive answer would be much more appreciated than a mathematical one. P.P.S.: This is not a duplicate question. I don't want to know how the shapes came to be like that. I know it is due to the probability functions; I want to know if it is just a rule of nature, or whether we can show it is because of stability. Answer: I'm guessing that you have read about orbital shapes in the Wikipedia article, or done a Google search on the term. In general, such "orbitals" shown are typically calculated for a lone electron, not any "real" multi-electron configuration. Think of an orbital like a single loop of cotton fiber wound into a cotton ball. The planetary model of the electron-nucleus pair indicates that the electron is a solid ball following the fiber. The quantum model of the electron-nucleus pair indicates that the electron has no fixed position, but that it is essentially at all places on the loop at the same time. So how big is the cotton ball in diameter? It has no limiting diameter; it depends on how hard you squeeze! So if I catch a hydrogen atom with my magic tweezers, there is a finite probability according to the Schrödinger wave equation that its electron can be in orbit even further away than the dwarf planet Pluto, or inside the nucleus! So when we talk of the "size" of an ion, it is somewhat of an artificial abstraction, and the Schrödinger wave equation can't be "the gospel truth." (I don't mean to badly disparage the Schrödinger wave equation, for it is very useful.) 
Not to leave you totally confused: the size of an ion does have some real physical significance. For instance, consider table salt, $\ce{NaCl}$. This is really $\ce{Na+}$ and $\ce{Cl-}$. Using x-ray diffraction we can measure how close the ions are, so we know their "size." Starting with the Aufbau principle for the atoms, chemists can predict the electron structure of an atom, say carbon. Given the electron structure of the atoms, say carbon and hydrogen, chemists can then predict how carbon and hydrogen will bond to form molecules. The fact that chemists can make predictions about molecular structure is the "proof" that the models work. However, weird things do happen with real electron orbitals. For instance, if there were a single "normal electron configuration" for chromium, then there would be only one chromium chloride. However, chemists have synthesized three: Chromium(II) chloride, also known as chromous chloride. Chromium(III) chloride, also known as chromic chloride or chromium trichloride. Chromium(IV) chloride. So there must be weird configurations that are stabilized by some sort of "hybridization." As another example, all the C-H bonds in methane, $\ce{CH4}$, are the same because the four carbon orbitals (one 2s, and three 2p) of the carbon atom hybridize into four $sp^3$ orbitals that are equivalent. To put such orbital shapes into perspective, a chemist thinks a unicorn is simply a hybrid of a horse and a rhinoceros. So a model helps in making predictions, but you can't have a fixated belief that the model is the whole truth. Another example would be showing you a picture of a blueberry pie. Would just such a picture "explain" a blueberry pie? In order for the picture to have meaning you must have some underlying knowledge. You know in general what a pie is. You know that pies contain fruit pieces, so this kind is made from blueberries. You know that pies are sweet.
{ "domain": "chemistry.stackexchange", "id": 4433, "tags": "electrons, orbitals" }
Noetic on Jammy?
Question: Are there any known downsides of using the packages hosted at https://packages.ubuntu.com/, vs. those at http://packages.ros.org/ros/ubuntu? Some background: Noetic is only supported on Ubuntu Focal, by design. We'd like to run Noetic on Jammy. I notice that https://packages.ubuntu.com/ hosts a bunch of apt packages like jammy/ros-desktop-full. I've installed that package into a container, and it seems to work fine (roscore starts, anyway). I'm surprised that I have never seen anyone suggest simply installing the Ubuntu metapackages (which seems much simpler). Originally posted by Rick Armstrong on ROS Answers with karma: 567 on 2022-05-06 Post score: 0 Original comments Comment by lucasw on 2022-05-07: See also #q399664 - I should update my answer there but what I've seen since is that ros-desktop-full and related works well- I've gone well beyond just running roscore but no serious stress tests yet. But it is missing a lot of standard stuff, some packages weren't ready in time for 22.04 (maybe 22.04.x will have them)- and a lot else isn't even in debian testing/unstable yet, so those require compiling, many packages require fixes to compile (mostly small ones, boost::placeholders::_1 comes up a lot)- which then ought to get upstreamed into noetic sources or land in ros-o and eventually may become available in 22.04 apt packages. Answer: I'm surprised that I have never seen anyone suggest simply installing the Ubuntu metapackages (which seems much simpler). Because of wiki/UpstreamPackages. And see also “upstream packages” increasingly becoming a problem on ROS Discourse. We'd like to run Noetic on Jammy. The official recommendation for this sort of scenario (ie: running ROS on an unsupported OS) would be to use Docker. Or build from source. Unofficially, there is an ongoing effort to enable building Noetic on Ubuntu releases newer than 20.04: ros-o. This does not provide binary packages, so you'd have to build from source for now. 
Originally posted by gvdhoorn with karma: 86574 on 2022-05-07 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Rick Armstrong on 2022-05-07: Thanks for taking the time to answer; there's a great deal of information there to help us keep from "stepping in it". BTW, I think our motivation for attempting to run on Jammy is so that we can use Python 3.10. At first blush, I thought "sure, why not?", but I will now circle-back and really understand what we're trying to achieve. Comment by gvdhoorn on 2022-05-07: Just to make sure: I merely answered your question as to why these packages aren't recommended more (often). You could have legitimate reasons for wanting to use them. That would be something else. re: Python 3.10: that should be possible if you build things from source. Success would depend on whether Python 3.10 has deprecated (or finally removed) certain things. Again: I'm not telling you not to use the upstream packages, but I feel you need to be aware of the pros-and-cons before concluding "roscore started, so it seems they work".
{ "domain": "robotics.stackexchange", "id": 37647, "tags": "ros" }
Adding sound powers together
Question: I am a developer working on a software application which incorporates some basic acoustic simulations. I am trying to assist another developer in implementing equations provided to us by the team's math and science expert (who is unfamiliar with the specific programming language we're using for the project and can't implement the formulas himself). One area of confusion has been in calculating the total strength of a signal. We have a set of inputs such as this: 40hz | 20 200hz | 32 500hz | 26 Where the first number is the frequency in hertz, and the second number is the power of that frequency. We can convert the power to decibels using the formula $ dB = 10 Log_{10}(power) $, and we can convert decibels back to powers using $ power = 10^{dB/10} $ The hangup right now: we have a data set of frequencies and powers, like above. We need to get the total signal strength in dB. The math expert originally told us to calculate the sum, in decibels, like this: $ dB = \frac{1}{n} \sum_{i=1}^n power[i]^2 $ After I sought clarification on several apparent issues with this formula, he changed the formula to this: $ dB = 10Log_{10} (\sum_{i=1}^n power[i]^2) $ In other places when we add two decibel levels together we do it like this: $ a_{dB} + b_{dB} = 10Log_{10}( 10^\frac{a}{10} + 10^\frac{b}{10}) $ which is simply converting from dB to power, adding, then converting back to dB. If we extended that for more than two operands, it seems like the result would be this: $ dB = 10Log_{10} (\sum_{i=1}^n power[i]) $ He said that simply adding the powers isn't accurate for a large set of powers, and provided a link to this page. He did not clearly explain how the information on that page related to the work we're doing. Having that page thrown at me greatly confused me, as it introduced the subject of coherent vs. incoherent sounds which we hadn't previously discussed and initially appeared to suggest that formulas 2 and 3 are both correct. 
The math expert did confirm that our signals are incoherent. After researching more on my own, the impression I've gotten is that we would square the summed values if they were pressures: $ dB = 10Log_{10} (\sum_{i=1}^n pressure[i]^2) $ but that equation 2 would be the appropriate equation when using powers. When I tried to seek clarity on this he kept talking instead about his thoughts on root-mean-square (which we currently aren't using) and said to stick with equation 2 for now. What is the correct equation to use when summing a set of powers from incoherent sounds together and converting to decibels? #2, #3, or something else? Answer: Well, generally speaking, the right way to add power(s) is #3. As you already stated, the power is related to the square of the pressure, so #4 is pretty much identical to #3. Now, the concept of coherent and incoherent sources is easier to grasp when dealing with pressure. So, when you are dealing with coherent sources, the total pressure you may experience ranges from double the pressure of your sources (in case they have the same pressure and are at the same phase) to zero (same pressure, opposite phase). In this case, it is more convenient to add the pressure of the contributing sources. When you are dealing with incoherent sources it is more convenient to add their powers. Since we consider power to be given by the square of pressure you can see that $$P_{tot} = \left( p_{A} + p_{B} \right)^{2} = p_{A}^{2} + p_{B}^{2} + 2p_{A}p_{B}$$ Due to the fact that the sources are incoherent the last term of the equation vanishes (since you have a computer science background you could think of the two signals as one-dimensional vectors which are orthogonal to each other. This is what incoherent means in practice and this is why the last product vanishes). Thus, you end up with $$P_{tot} = p_{A}^{2} + p_{B}^{2} \implies P_{tot} = P_{A} + P_{B}$$ where in the above equations $p$ denotes pressure and $P$ denotes power. 
In case you are given dB values, you would first have to convert back to linear values with the formula you already have ($P = 10^{\frac{dB}{10}}$), add the values and then convert back to dB with the deciBel formula. The final result is identical to #3. But, once more, if you consider how we reached the conclusion that $P_{tot} = p_{A}^{2} + p_{B}^{2}$, you will realise that if you add pressures you end up with #4, which as stated above is identical due to the fact that $p^{2} = P$.
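As a numerical sanity check on the answer above, summing the linear powers directly (formula #3) gives the same total level as converting each dB value back to a power, summing, and converting once at the end (a quick sketch; the example powers are the ones from the question):

```python
import math

def db_from_powers(powers):
    # Formula #3: sum the linear powers, then convert once to dB
    return 10 * math.log10(sum(powers))

def db_from_levels(levels_db):
    # Convert each dB level back to linear power, sum, convert back to dB
    return 10 * math.log10(sum(10 ** (db / 10) for db in levels_db))

powers = [20, 32, 26]                          # incoherent components, linear units
levels = [10 * math.log10(p) for p in powers]  # the same components as dB levels

total_a = db_from_powers(powers)
total_b = db_from_levels(levels)
print(abs(total_a - total_b) < 1e-9)  # True: the two routes agree
```

For incoherent sources this is the whole story; the coherent case would require tracking phases (i.e., working with pressures), not just powers.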
{ "domain": "physics.stackexchange", "id": 69381, "tags": "acoustics" }
Work done by a spring on a block
Question: A spring has a block attached to one end and the other end is attached to a wall. As the block is displaced right or left, we know the spring force is $F_{sp} = -kx$, always in the direction opposite to the displacement of the block. So work done by the spring on the block is negative (equal to $-\frac{k}{2}x^2$), which can be verified from the graph of the function $F_{sp} = -kx$. Area under the curve gives work done by $F_{sp}$, so it makes sense that the area between the curve and the $x$-axis is always negative. But when the block is moved to the left ($-x$) such that the spring is compressed, and the block is released such that it moves from $-x$ to $0$, work done by the spring on the block is positive, equal to $\frac{k}{2}x^2$, which can be shown mathematically using integrals. But how does the graph show that the work done by the spring is positive as the block moves from $-x$ to $0$? I am not able to understand it graphically, because the area between the graph and the $x$-axis is always negative. I understand it mathematically, but want to understand it graphically. Am I missing something? Answer: You might have confused displacement with position. The definition of work is force times displacement, where displacement = final position - initial position. Since the concepts of final and initial depend on the direction in which the block moves, when the block goes from $-x$ to $0$, the displacement actually equals $$ (0) - (-x_{max}) = x_{max} $$ Because the graph tells us that $F_{sp}$ is indeed positive for $-x_{max} < x < 0$, by definition, the work done by the spring is positive. This is to say that although the graph is the same, how you use its coordinates to calculate area depends on whether your block moves to the left or to the right. It's also interesting to realize that, unlike displacement, $F_{sp}$ depends only on the position of the block.
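The sign argument can be checked with a quick numerical integration of $W = \int F\,dx$ for $F = -kx$ over the two motions (a sketch; k and x_max are arbitrary illustrative values):

```python
k = 2.0        # arbitrary spring constant
x_max = 0.5    # arbitrary amplitude
n = 100000     # integration steps

def work(x_start, x_end):
    # Midpoint-rule integration of W = integral of (-k x) dx;
    # dx carries the sign of the motion, which is the whole point.
    dx = (x_end - x_start) / n
    total = 0.0
    for i in range(n):
        x = x_start + (i + 0.5) * dx
        total += -k * x * dx
    return total

w_stretch = work(0.0, x_max)    # block moves 0 -> +x_max: negative work
w_release = work(-x_max, 0.0)   # block moves -x_max -> 0: positive work
print(w_stretch, w_release)     # approx -k*x_max**2/2 and +k*x_max**2/2
```

The same area under the same curve contributes with opposite sign once the direction of motion (the sign of dx) is taken into account.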
{ "domain": "physics.stackexchange", "id": 67626, "tags": "newtonian-mechanics, work, spring" }
How do I explain qubits to my cousin who is 8 years old?
Question: How do I explain qubits to my cousin who is 8 years old? Answer: I think coin flipping could be a good example. At any given time you can express the state of the flip as 50% on tails and 50% on heads. It's when you make the measurement - flipping the coin - that you find out which of the two states the coin is in for that measurement.
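For a slightly older cousin, the analogy can even be played with in code: the "state" stays 50/50 until each measurement forces one definite outcome (a toy simulation of the coin analogy, not real quantum mechanics):

```python
import random

def measure(p_heads=0.5):
    # Before measuring, the 'qubit coin' is 50% heads / 50% tails;
    # each measurement yields exactly one definite outcome.
    return "heads" if random.random() < p_heads else "tails"

random.seed(0)  # fixed seed so the run is reproducible
results = [measure() for _ in range(10000)]
frac_heads = results.count("heads") / len(results)
print(frac_heads)  # close to 0.5 over many repeated flips
```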
{ "domain": "quantumcomputing.stackexchange", "id": 1316, "tags": "quantum-state, experimental-realization" }
Is there a non-regular and a regular language where the non-regular one is not a subset of the regular one and the union is regular?
Question: Do there exist languages $L_1$, $L_2$ where $L_1$ is non-regular, $L_2$ is regular, $L_1\not\subset L_2$, and $L_1 \cup L_2$ is regular? Answer: I'm going by the question in the title, i.e., that $L_1\not\subset L_2$. In the body of the question you instead wrote $L_1\subsetneq L_2$, which would in contrast mean that $L_1$ is a strict subset of $L_2$. Let $L_1$ be any non-regular language. Let $x\in L_1$ be some element of $L_1$. Then set $L_2 = \Sigma^*\setminus\{x\}$. $L_2$ is clearly regular, since there's a trivial NFA for it that accepts any word except $x$. $L_1$ is by definition not regular. $L_1\setminus L_2=\{x\}\neq\emptyset$ and therefore $L_1 \not\subset L_2$. Finally, $L_1\cup L_2 = \Sigma^*$, which is clearly a trivial regular language.
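The construction can be made concrete by picking a specific non-regular $L_1$; here $L_1 = \{a^n b^n : n \ge 1\}$, $x = $ "ab", and $L_2 = \Sigma^* \setminus \{x\}$ (a sketch that represents each language by a membership predicate):

```python
def in_L1(w):
    # L1 = { a^n b^n : n >= 1 }, the classic non-regular language
    n = len(w) // 2
    return len(w) % 2 == 0 and n >= 1 and w == "a" * n + "b" * n

x = "ab"  # a fixed element of L1

def in_L2(w):
    # L2 = Sigma* \ {x}: regular, accepts every word except x
    return w != x

def in_union(w):
    return in_L1(w) or in_L2(w)

print(in_L1(x), in_L2(x))  # True False -> x witnesses that L1 is not a subset of L2
print(all(in_union(w) for w in ["", "a", "ab", "ba", "aabb", "abab"]))  # True: the union is Sigma*
```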
{ "domain": "cstheory.stackexchange", "id": 4299, "tags": "regular-language" }
Can two different charge distributions produce same electric field all over the space?
Question: If I have a complicated charge distribution and it is producing an electric field, is it possible to get the same electric field all over the space by arranging the charges differently? Does nature allow that? Answer: Of course, it cannot be done. The charge distribution $\rho(x)$ is fixed by the Maxwell equation $\mathrm{div}\, \vec{E} = \rho/\varepsilon_0$ (in SI units) for a given electric field $\vec{E}(x)$.
{ "domain": "physics.stackexchange", "id": 36310, "tags": "electrostatics, electric-fields, charge, coulombs-law" }
Was Russia's original territory unfavorable for agriculture?
Question: How favourable were the soil and climate in Russia's landmass of the 1500s for growing crops in comparison with Northern European countries? Answer: The area outlined covers several climatic zones. The northeastern part was, and to some extent still is, permafrost at shallow depth, so obviously this was not good for agriculture. Further south most of the area outlined is covered in podzols, which are generally sandy and nutrient-poor. Therefore, where water availability is not a limiting constraint (and we have no rainfall records for the 1500s), and where it is not too cold, the soils could have supported agriculture - but only just. They would most likely not have been high-yielding. Even today, such soils need a lot of extra nutrients and additional organic carbon to give good results. Politics apart, maybe this is why much of Russia has remained so poor for so long?
{ "domain": "earthscience.stackexchange", "id": 597, "tags": "climate, soil, agriculture" }
Lloyd's mirror problem
Question: My question is about the reflection of the interference pattern from the screen in the mirror itself (Image B). As you can see, the interference pattern reflected in the mirror looks like Young's classic interference pattern rather than the shifted one that formed on the screen. Why wouldn't it also be shifted in the mirror, as it is on the screen? Why do you think this happens, and how is it even possible? (I'm a dentistry student, not a physics student; I'm just curious.) Answer: Each reflection introduces a phase shift of $180^\circ$, or $\pi$ radians. When you look at the screen, the mirror ray suffers one reflection, and thus you see a shifted Young's pattern. This doesn't happen with other methods that have no mirrors. When you look into the mirror, the reflected ray from the screen has also suffered a phase shift. Since the relevant thing is the difference between the two, the extra shift no longer matters. In the same way, 2 ideal reflections leave the ray invariant $(2\pi)$, and three reflections are like one.
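The phase-shift accounting in the answer can be checked numerically with the two-beam intensity formula $I \propto \cos^2((\delta + \varphi)/2)$, where $\delta$ is the path-difference phase and $\varphi$ is $\pi$ per reflection (a sketch in arbitrary units):

```python
import math

def intensity(path_diff_phase, reflections):
    # Two-beam interference; each reflection contributes a pi phase shift
    phi = path_diff_phase + reflections * math.pi
    return math.cos(phi / 2) ** 2

# At zero path difference:
print(round(intensity(0.0, 0), 12))  # 1.0 -> bright fringe (classic Young's pattern)
print(round(intensity(0.0, 1), 12))  # 0.0 -> dark fringe (Lloyd's mirror, as seen on the screen)
print(round(intensity(0.0, 2), 12))  # 1.0 -> two shifts cancel, so it looks like Young's again
```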
{ "domain": "physics.stackexchange", "id": 56027, "tags": "quantum-mechanics, optics, visible-light, waves, interference" }
Static thread safe configuration class
Question: I wrote this class. I would be very interested in your feedback how much thread safe this class is. I know users of this class must still use some kind of synchronization when using this class, but I am ok with it, they will do such synchronization. To save your time and energy, I am more interested in thread safety related feedback of this code, but you can also comment on other aspects of code too, if you wish. using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.Text; using System.Threading.Tasks; namespace dppClientModuleNET { public enum ApplicationMode { SOFTWARE_HSM = 0, POSTER_VER = 1 } /// <summary> /// This static class is used to manage parameters of this DLL. /// You usually initialize it only once using the Init method, /// and then mainly query the class for different parameter /// values. Properties are mainly only readable. /// You can also deinitialize this class. /// This class has been written with thread safety in mind- /// use with care. /// </summary> static class DppModuleParameters { private static bool m_isInited = false; // Is initialized or not? 
static readonly object m_locker = new object(); // Locker private static ushort m_softwareVersion = 0x0250; // Software version private static ApplicationMode m_applicationMode = ApplicationMode.SOFTWARE_HSM; // Build type private static string m_logDirPath = ""; // Log directory private static uint m_connectTimeoutMS = 0; // Connect timeout private static uint m_responseTimeoutMS = 0; // Response timeout private static uint m_indexHost = 0; // Host index private static int m_gComPortNumber = 0; // Com port number - this was used as global variable in C++ private static List<SocketStructure> m_HostAddresses = new List<SocketStructure>(); // List of host addresses private static string m_KeysFileName = ""; // Path to the keys file private static List<Key_t> m_DecryptedKeys = new List<Key_t>(); // Array of decrypted keys // Getter: Is module initialized? public static bool isInited() { lock (m_locker) { return m_isInited; } } // Get software version public static int GetSoftwareVersion() { lock (m_locker) { if (m_isInited == false) { throw new DppModuleException(DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_NOT_INITIALIZED), "Please initialize module parameters class first"); } return m_softwareVersion; } } // Get log path public static string GetLogPath() { lock (m_locker) { if (m_isInited == false) { throw new DppModuleException(DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_NOT_INITIALIZED), "Please initialize module parameters class first"); } return m_logDirPath; } } // Get connect timeout public static uint GetConnectTimeout() { lock (m_locker) { if (m_isInited == false) { throw new DppModuleException(DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_NOT_INITIALIZED), "Please initialize module parameters class first"); } return m_connectTimeoutMS; } } // Get build type public static ApplicationMode GetBuildMode() { lock (m_locker) { if (m_isInited == false) { throw new DppModuleException(DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_NOT_INITIALIZED), "Please initialize 
module parameters class first"); } return m_applicationMode; } } // Get response timeout public static uint GetResponseTimeout() { lock (m_locker) { if (m_isInited == false) { throw new DppModuleException(DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_NOT_INITIALIZED), "Please initialize module parameters class first"); } return m_responseTimeoutMS; } } // Get index host public static uint GetIndexHost() { lock (m_locker) { if (m_isInited == false) { throw new DppModuleException(DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_NOT_INITIALIZED), "Please initialize module parameters class first"); } return m_indexHost; } } // Set index host public static void SetIndexHost(uint host) { lock (m_locker) { if (m_isInited == false) { throw new DppModuleException(DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_NOT_INITIALIZED), "Please initialize module parameters class first"); } m_indexHost = host; } } // Get COM port number public static int GetComPortNumber() { lock (m_locker) { if (m_isInited == false) { throw new DppModuleException(DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_NOT_INITIALIZED), "Please initialize module parameters class first"); } return m_gComPortNumber; } } // Get list of host addresses // NOTE: Makes a deep copy of the host address array and returns that public static List<SocketStructure> GetHostAddressesArray() { lock (m_locker) { // Make a deep copy of the list of the host addresses List<SocketStructure> tmp = new List<SocketStructure>(); for (int i = 0; i < m_HostAddresses.Count(); i++) { SocketStructure s = new SocketStructure(); s.IP = m_HostAddresses[i].IP; s.port = m_HostAddresses[i].port; tmp.Add(s); } return tmp; } } // Getter for keys file name public static string GetKeysFileName() { lock (m_locker) { if (m_isInited == false) { throw new DppModuleException(DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_NOT_INITIALIZED), "Please initialize module parameters class first"); } return m_KeysFileName; } } // GetKeys // NOTE: Makes a deep copy of the keys 
array and returns that public static List<Key_t> GetKeysArray() { lock (m_locker) { // Make a copy of the list of the keys List<Key_t> tmp = new List<Key_t>(); for (int i = 0; i < m_DecryptedKeys.Count(); i++) { Key_t s = new Key_t(); s.KeyName = m_DecryptedKeys[i].KeyName; for (int j = 0; j < 8; j++) { // Copy each key separately s.MasterKey[j] = m_DecryptedKeys[i].MasterKey[j]; s.SessionKey[j] = m_DecryptedKeys[i].SessionKey[j]; } tmp.Add(s); } return tmp; } } /// <summary> /// Initialize fields of the DppModuleParameters class. Initialization should be done once. /// Otherwise you will get exception. /// </summary> /// <param name="errorInfo">[OUT] Error info structure</param> /// <param name="logDirPath">log path</param> /// <param name="hsm">Hardware security module parameter </param> /// <param name="hostAddresses">Prepaid Server addresses (";"-separated example: "x.x.x.x:yyyy;second.server.name:yyyy")</param> /// <param name="connectTimeoutMS"> Connection timeout in ms (0-default value: 15000ms) </param> /// <param name="responseTimeoutMS"> Server response timeout in ms (0-default value: 45000ms) </param> /// <param name="softwareVersion"> [OUT] Module version </param> /// <param name="indexTCIP">Index to which TCP host to connect; default value is 0</param> /// <returns>status</returns> public static int Initialize(ref DppErrorInfo_t errorInfo, string logDirPath, string hsm, string hostAddresses, uint connectTimeoutMS, uint responseTimeoutMS, ref ushort? 
softwareVersion, uint indexTCIP, ApplicationMode buildmode) { // Lock lock (m_locker) { try { try { // We don't allow this structure to be null if (errorInfo == null) return DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_INVALID_ARG); // Just clean the error structure errorInfo.Code = 0; errorInfo.ActionCode = 0; errorInfo.SysCode = 0; errorInfo.Description = ""; errorInfo.DescriptionFromServer = ""; errorInfo.Code = DppGlobals.dppERR_SUCCESS; // Store build mode m_applicationMode = buildmode; // ....................... // Module parameter object already initialized? if (m_isInited) throw new DppModuleException(DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_ALREADY_INITIALIZED), "Parameters module already initialized. Deinitialize first please."); // Pass out software version if out param is not null if (softwareVersion != null) softwareVersion = m_softwareVersion; // Is log directory empty? throw an exception if (String.IsNullOrEmpty(logDirPath)) throw new DppModuleException(DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_INVALID_ARG), "Log path not specified"); // List of host addresses string is null or empty? if (String.IsNullOrEmpty(hostAddresses)) throw new DppModuleException(DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_INVALID_ARG), "Host list not specified"); // If HSM is NULL throw a module error exception // if it is empty string we are Okay if (hsm == null) throw new DppModuleException(DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_INVALID_ARG), "HSM not given"); // Extract HSM string and store COM port number in instance variable if (TranslateHSM(hsm) < 0) throw new DppModuleException(DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_INVALID_ARG), "Wrong HSM specified"); // ....................... 
// Parse host addresses and store them string[] firstSplit = hostAddresses.Split(';'); for (int i = 0; i < firstSplit.Length; i++) { string[] secondSplit = firstSplit[i].Split(':'); if (secondSplit.Length != 2) throw new DppModuleException(DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_INVALID_ARG), "ParseHostAddresses: List of socket addresses is in not correct format"); SocketStructure sockstruct = new SocketStructure(); sockstruct.IP = secondSplit[0].Trim(); sockstruct.port = Int32.Parse(secondSplit[1]); m_HostAddresses.Add(sockstruct); } // List of host addresses empty? if (m_HostAddresses.Count() == 0) { throw new DppModuleException(DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_INVALID_ARG), "Host address list not specified"); } // Set time out parameters m_connectTimeoutMS = (connectTimeoutMS != 0 ? connectTimeoutMS : DppGlobals.ConnectTimeOutDefault); m_responseTimeoutMS = (responseTimeoutMS != 0 ? responseTimeoutMS : DppGlobals.ResponseTimeOutDefault); // Set log dir path of the logger, also store the path m_logDirPath = logDirPath; DppLogger.LogDirectory = logDirPath; // Software HSM? 
if (m_applicationMode != ApplicationMode.POSTER_VER) { // Get name of the key file // Note: Since module is not initialized yet, we need to pass along some parameters // otherwise other classes can't use getters to access them DppModuleParameters.GetKeyFileName(buildmode, m_gComPortNumber); // Read key file DppModuleParameters.ReadKeyFile(); } m_indexHost = indexTCIP; // Mark as initialized - this is final step m_isInited = true; } // Catch module error catch (DppModuleException ex) { ex.FillErrorStruct(ref errorInfo); } // Catch OS error catch (DppOSException ex) { ex.FillErrorStruct(ref errorInfo); } // Server error catch (DppServerException ex) { ex.FillErrorStruct(ref errorInfo); } // Catch general exception catch (Exception ex) { DppUtilities.FillErrorStructWithGeneralException(ref ex, ref errorInfo); } } catch (Exception ex) { // Some unexpected exception occured probably in the catch clauses, return error code return DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_UNKNOWN); } // Return the module code from the data structure. return errorInfo.Code; } } /// <summary> /// Deinitialize function /// </summary> /// <param name="errorInfo">[OUT] error structure</param> /// <param name="pIndexTCIP">[IN/OUT] index of host</param> /// <returns>status</returns> public static int DeInit(ref DppErrorInfo_t errorInfo, ref uint? pIndexTCIP) { // Lock lock (m_locker) { try { try { // Just clean the error structure errorInfo.Code = 0; errorInfo.ActionCode = 0; errorInfo.SysCode = 0; errorInfo.Description = ""; errorInfo.DescriptionFromServer = ""; // Pass out index if (pIndexTCIP != null) pIndexTCIP = m_indexHost; m_indexHost = 0; // Clear out log directory m_logDirPath = ""; DppLogger.LogDirectory = ""; // Clear out other parameters m_HostAddresses.Clear(); m_connectTimeoutMS = 0; m_responseTimeoutMS = 0; // Software HSM? 
if (m_applicationMode != ApplicationMode.POSTER_VER) { // Yes, clear decrypted keys m_DecryptedKeys.Clear(); } m_isInited = false; } catch (DppModuleException ex) { ex.FillErrorStruct(ref errorInfo); } catch (DppOSException ex) { ex.FillErrorStruct(ref errorInfo); } catch (DppServerException ex) { ex.FillErrorStruct(ref errorInfo); } catch (Exception ex) { DppUtilities.FillErrorStructWithGeneralException(ref ex, ref errorInfo); } } catch (Exception ex) { // Some unexpected exception occured probably in the catch clauses, return error code return DppUtilities.MAKE_HRESULT(DppGlobals.dppERR_UNKNOWN); } return errorInfo.Code; } } // Extract COM port number from supplied string. // Supplied string should be in form "TYPE=COM;NUMBER=3" private static int TranslateHSM(string hsm) { if (m_applicationMode == ApplicationMode.SOFTWARE_HSM) { // Exit if this is software HASP build return 0; } // Perform splitting and extraction string[] split1 = hsm.Split(';'); if (split1.Length == 2) { string[] splitTmp1 = split1[0].Split('='); if (splitTmp1[1] != "COM") return -1; string[] splitTmp2 = split1[1].Split('='); if (splitTmp2[0] != "NUMBER") return -1; // Extract the port number m_gComPortNumber = int.Parse(splitTmp2[1]); } else { return -1; } return 0; } /// <summary> /// Parse keys from the key file /// </summary> /// <param name="inBuffer">byte array representation of a text file which contains keys stored as hex string on separate lines</param> /// <param name="bufferSize">size of the byte array</param> private static void ParseTextFile(byte[] inBuffer, uint bufferSize) { string line = ""; using (Stream stream = new MemoryStream(inBuffer)) { using (StreamReader reader = new StreamReader(stream)) { while (true) { // Read text file line by line line = reader.ReadLine(); if (line == null) { break; } string[] parameters = line.Split(';'); if (parameters.Length == 3) { Key_t k = new Key_t(); // Copy key name k.KeyName = parameters[0]; // Copy master key byte[] mk = 
DppUtilities.HexStringToByteArray(parameters[1]); Array.Copy(mk, k.MasterKey, 8); // Copy session key byte[] sk = DppUtilities.HexStringToByteArray(parameters[2]); Array.Copy(sk, k.SessionKey, 8); // Add to the global array of keys m_DecryptedKeys.Add(k); } } } } } /// <summary> /// Retrieve path of the file where keys are stored /// </summary> private static void GetKeyFileName(ApplicationMode b, int compport) { // Get folder where DLL resides and make sure path is terminated string dllFolder = System.IO.Path.GetDirectoryName(new System.Uri(System.Reflection.Assembly.GetExecutingAssembly().CodeBase).LocalPath); if (!dllFolder.EndsWith("\\")) dllFolder += "\\"; // Call to get serial number function DppPosterApi.GetSerialNumber(ref dllFolder, 0 /* should be size of string but not needed in C#*/, 0, DppGlobals.HS_READ_TIMEOUT, 0, b, compport); // Store the result in a global variable m_KeysFileName = dllFolder + ".enc"; } /// <summary> /// Read the key file and get keys out of it. /// </summary> /// <returns></returns> private static int ReadKeyFile() { // Clear the global keys array m_DecryptedKeys.Clear(); // Software HASP? if (m_applicationMode != ApplicationMode.SOFTWARE_HSM) { if (m_gComPortNumber <= 0) throw new Exception("Wrong port number"); } // Open file for reading data using (FileStream stream = File.Open(m_KeysFileName, FileMode.Open, FileAccess.Read, FileShare.Read)) { // Get file length long length = new System.IO.FileInfo(m_KeysFileName).Length; byte[] buffer = new byte[length + 1]; // Read the file contents DppUtilities.ReadFileFully(stream, buffer, (int)length); if (m_applicationMode == ApplicationMode.SOFTWARE_HSM) { // Decrypt file contents DppCryptography.DecryptKeyFile(buffer, (uint)length, buffer); } // Parse keys from the text file ParseTextFile(buffer, (uint)length); } return 0; } } } Answer: Locking As I read the code I can see that any public method has big lock (m_locker) which should be fine for thread safety. 
Any transactional usage would need another locking, but that is not a problem of your static class - well, you could somehow expose the locking (e.g. creating an IDisposable helper for code like using(DppModuleParameters.Locker()) transaction()) but it looks fine to me as it is. Private members do not use locking. That is fine too. Style Well, it looks like you are halfway between C++ and C#. Namespace starting with lowerCase, UPPER_CASE_WITH_UNDERSCORE constants (as enum in C# is more like enum class in C++, not the good old enum). DppGlobals.dppERR_INVALID_ARG and DppUtilities.MAKE_HRESULT ...hmm, nothing to add - that's C++, not C#. Second Look (added) Well, it is as thread-safe as it can be, if that is your only concern, but the whole class is... ehm, ugly, sorry. Why DeInit? Why Get/Set methods instead of properties? Why locking at all? I would rather think about the singleton pattern (init once, live forever) and remove locks inside simple getters (but leave the simple check to throw if you forget to call Init). Or it can be a full class - just create it with proper args and keep the parsed info until you free the class. I can see only one setter - SetIndexHost, but m_indexHost is only used in a getter and DeInit; I guess you can use it to index some of the lists... so why not return read-only collections? Maybe I am missing something here (the expected usage), but I would really design it in a completely different way: Normal class (not static) Read-only (getters and readonly collections) Tiny view objects if you really need to change the index - create a helper class that can access the collections and return the indexed value. Something more... immutable Static reference to a normal class What I had in mind was a static reference to the class, where static Lib TheLib = new Lib() is like calling Init and theLib = null is like DeInit, and you can even have public Lib Default { get; set; } in it. But if you cannot redesign, you cannot. You got my thoughts; now it is up to you.
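The redesign sketched in the review - parse everything once at construction, expose read-only state, drop the per-getter locks - could look roughly like this (a Python sketch of the idea rather than the original C#; all names are illustrative):

```python
class ModuleParameters:
    """Parse everything in the constructor and expose read-only state.
    The object is immutable after construction, so getters need no locking."""

    def __init__(self, host_addresses, connect_timeout_ms=15000):
        if not host_addresses:
            raise ValueError("Host list not specified")
        parsed = []
        for entry in host_addresses.split(";"):
            ip, _, port = entry.partition(":")
            if not port:
                raise ValueError("Bad socket address: %r" % entry)
            parsed.append((ip.strip(), int(port)))
        self._hosts = tuple(parsed)  # immutable: callers cannot mutate it
        self._connect_timeout_ms = connect_timeout_ms

    @property
    def hosts(self):
        return self._hosts

    @property
    def connect_timeout_ms(self):
        return self._connect_timeout_ms

# 'Init' is just constructing the object; 'DeInit' is dropping the reference.
params = ModuleParameters("10.0.0.1:9000;server.example:9001")
print(params.hosts)  # (('10.0.0.1', 9000), ('server.example', 9001))
```

Because nothing can change after `__init__`, concurrent readers need no lock at all; validation errors surface at construction time instead of on every getter.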
{ "domain": "codereview.stackexchange", "id": 16634, "tags": "c#, multithreading, .net" }
What is the Big O of T(n)?
Question: I have a homework problem in which I should find the formula and the order of $T(n)$ given by $$T(1) = 1 \qquad\qquad T(n) = \frac{T(n-1)}{T(n-1) + 1}\,. $$ I've established that $T(n) = \frac{1}{n}$, but now I am a little confused. Is $T(n) \in O(\frac{1}{n})$ the correct answer for the second part? Based on the definition of big-O we have that $$O(g(n)) = \{f(n) \mid \exists c, n_0>0\text{ s.t. } 0\leq f(n) \leq cg(n)\text{ for all } n\geq n_0\}\,.$$ This holds for $f(n) = g(n) = \frac{1}{n}$, so, based on the definition, $O(\frac{1}{n})$ should be correct; but in the real world it's impossible for an algorithm to be faster than $O(1)$. Answer: Yes, all functions $f(n)$ satisfy $f(n) \in O(f(n))$. The definitions are meaningful even if $f(n)$ isn't the running time of any algorithm. Indeed, this notation comes from number theory, where $f(n)$ is usually some error term. Even in computer science, big O notation is sometimes used when analyzing algorithms for something other than running time or space requirements.
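The closed form is easy to confirm by just unrolling the recurrence (a quick numerical check):

```python
def T(n):
    # T(1) = 1, T(n) = T(n-1) / (T(n-1) + 1)
    t = 1.0
    for _ in range(2, n + 1):
        t = t / (t + 1)
    return t

# The recurrence matches the closed form 1/n (up to floating-point rounding):
for n in (1, 2, 5, 50):
    print(n, T(n), 1 / n)
```

(Induction gives the same thing: if $T(n-1) = \frac{1}{n-1}$, then $T(n) = \frac{1/(n-1)}{1/(n-1)+1} = \frac{1}{n}$.)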
{ "domain": "cs.stackexchange", "id": 3484, "tags": "algorithm-analysis, asymptotics, recurrence-relation" }
Can we understand the strong reflectivity of metals from band theory?
Question: I know that solids, including metals, have electronic bands and bandgaps. If we consider some typical metal such as copper, we know that it strongly reflects visible light. From the point of view of bands, this means that the bandgap energy must be large in comparison to the energy of the visible light and therefore, it cannot be absorbed. Is it so? I am not sure. Answer: From the point of view of bands, this means that the bandgap energy must be large in comparison to the energy of the visible light and therefore, it cannot be absorbed. I don't think so. When there is a large gap and photons cannot be absorbed, the material is transparent to visible light. That is the case for some insulators, such as quartz. In order to absorb a photon, it is necessary that the energy gap between adjacent electronic bands allows a transition of one of the electrons. Normally, the momentum of the electron ($\hbar k$) is much higher than the momentum of the photon, and momentum conservation requires that there be an available state in a higher energy band with (almost) the same $k$. After the photon is absorbed, the electron returns to the lower energy band, scattering the incoming light, which explains the bright reflective surface of metals. Metals don't have large band gaps the way insulators do, so electrons can find available states in higher energy bands.
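The energy scales in the answer are easy to put numbers on: visible photons carry roughly 1.6-3.1 eV, far below the band gap of an insulator like quartz (around 9 eV), which is why quartz transmits them (a sketch using E = hc/λ with standard constants):

```python
H_EV_S = 4.135667696e-15   # Planck constant, eV*s
C_M_S = 2.99792458e8       # speed of light, m/s

def photon_energy_ev(wavelength_nm):
    # E = h c / lambda, with the wavelength given in nanometres
    return H_EV_S * C_M_S / (wavelength_nm * 1e-9)

print(round(photon_energy_ev(650), 2))  # red light: ~1.91 eV
print(round(photon_energy_ev(400), 2))  # violet light: ~3.1 eV
# Both sit far below a ~9 eV gap: no absorption, so the material is transparent.
```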
{ "domain": "physics.stackexchange", "id": 65370, "tags": "condensed-matter, solid-state-physics, electronic-band-theory, metals, phonons" }
Cost of material to 3D print
Question: Can someone please share the typical cost of material to 3D print an object like a Raspberry Pi case? Thank you. Answer: This depends on what you mean by "cost". If you've got your own (or access to a) 3D printer and you're just paying for raw material then you can calculate the raw material cost per unit volume (cubic inches, cubic centimeters, whatever your preferred units are), then determine the volume of the part to be printed, then multiply the two together. Typically the CAD software used to create the 3D model will have a button that gives you the volume of the part. Sometimes, as in selective laser sintering (SLS), the high temperatures involved degrade the unused nylon powder. This means that some percentage of the powder needs to be exchanged after some number of builds, and you'll wind up paying a higher cost than strictly the cost per unit volume of the powder (to pay for the replacement of the unused powder). If you don't have access to the printer, then you'll wind up paying the same costs mentioned above, plus whatever their going rate is for time on the machine, time to set up the part, time to inspect/depowder/clean the part, shipping, and markup.
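The volume-times-unit-cost arithmetic described above can be sketched in a few lines (the price, density, volume, and waste figures below are placeholders for illustration, not real quotes):

```python
def material_cost(part_volume_cm3, price_per_kg, density_g_cm3, waste_factor=1.0):
    # cost = volume * density * price-per-mass, scaled by any powder/support waste
    mass_kg = part_volume_cm3 * density_g_cm3 / 1000.0
    return mass_kg * price_per_kg * waste_factor

# Hypothetical Raspberry Pi case: ~40 cm^3 of PLA at ~$25/kg with 10% waste
cost = material_cost(40.0, 25.0, 1.24, waste_factor=1.1)
print(round(cost, 2))  # material-only cost in dollars; service fees come on top
```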
{ "domain": "robotics.stackexchange", "id": 955, "tags": "3d-printing" }
Applying an adjustable force to a small area
Question: What type of mechanical device can I use and program to apply a controllable force, within the range of what a human can apply, to an area about the size of a fingertip? It is desirable for the applied force to be precise and known. Answer: I agree that compressed air is the way to go. There are many vendors that sell small cylinders and even have bumpers available for the end of the actuator rod that could approximate a fingertip. For control, you don't mention any requirements, but if you simply need manually controlled on/off application of force, a regulator with some simple manual valving should do the trick; just make sure you have a way to vent the air in the cylinder when you're trying to release the force. Keep in mind that the force will be applied quite suddenly unless you control the flow into the cylinder with a flow control valve or orifice as well. What I've done for applications like this is to use an I/P transducer, controlled by computer, which lets me precisely control the pressure applied to the cylinder by varying the current flowing in a loop, typically 4-20 mA. (There are also voltage-controlled versions, but I have no experience with them.) This way, you could ramp the pressure at any rate you wish and the process is much more repeatable.
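Sizing such a cylinder is just F = P·A, so fingertip-scale forces fall out of small bores and modest regulator pressures (a sketch; the bore and pressure are illustrative numbers):

```python
import math

def cylinder_force_newtons(bore_mm, gauge_pressure_kpa):
    # F = P * A for a pneumatic cylinder (extend stroke, friction ignored)
    area_m2 = math.pi * (bore_mm / 2000.0) ** 2  # bore in mm -> radius in m
    return gauge_pressure_kpa * 1000.0 * area_m2

# A 10 mm bore at 100 kPa gauge gives a gentle, fingertip-scale push:
force = cylinder_force_newtons(10.0, 100.0)
print(round(force, 2), "N")  # ~7.85 N; vary the regulated pressure to vary the force
```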
{ "domain": "engineering.stackexchange", "id": 286, "tags": "mechanical-engineering" }
Does a megabyte represent (2^8)^1000000 bits in base2? Isn't that more than enough combinations to represent everything?
Question: Every introductory CompSci resource I can find goes through how there are 8 bits in a byte, etc. So a byte can store some 2^8 values in binary. Then when asking about four bytes, we can assume this is (2^8)^4=2^32. But I can't find anywhere whether this pattern holds up to the total amount of hard drive or memory storage. For instance, if even a single megabyte can really hold (2^8)^1000000 bits, isn't this a number so large that it could store all data in existence everywhere in the universe many times over? The number of possible combinations from a number that large would surely never be reached. Yet in reality, we all know a megabyte isn't much. I can't help but feel somewhere that the exponentiation must stop, and we instead multiply bytes together. Or is it really the case that the numbers can get this big? Such as a gigabyte representing ((2^8)^1000)^3 bits? If I can represent a number, let's say, 1 million in 32 bits, and I wanted to store 10^50 numbers this large, wouldn't the required bits be 2^32*10^50? First of all, I'd never need to store 10^50 of any unit of data on my hard drive; that's astronomically massive. And secondly, 2^32*10^50 is quite a small number, far below (2^8)^1000000. So what's really happening here that we need so much storage, and a megabyte isn't much? Answer: A megabyte represents $8\cdot10^6$ bits. We can write $2^{8\cdot10^6}$ distinct bit-strings using this much memory. But this number is not the same as how many such bit-strings we can store! We can only store a single bit-string of size $8\cdot10^6$ bits in $1$ megabyte. To understand this better, consider a single page of a document. On a single-spaced page with a 12-point font, we can put around 3000 characters. The number of possible pages that could be written in English on this single page of the document would be $26^{3000}$. But how many characters can we actually write on such a page? Quite simply, 3000.
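The distinction the answer draws - the number of possible contents versus the amount actually stored - can be put in numbers (a sketch; the digit count is computed from log10 rather than by materializing the huge integer):

```python
import math

BITS_PER_MEGABYTE = 8 * 10**6

# Number of *distinct* contents a megabyte can be in: 2**(8*10**6).
# Even just writing that count in decimal takes ~2.4 million digits:
digit_count = math.floor(BITS_PER_MEGABYTE * math.log10(2)) + 1
print(digit_count)        # 2408240 decimal digits in 2**(8*10**6)

# But what a megabyte *stores* at any one time is a single bit-string:
print(BITS_PER_MEGABYTE)  # 8000000 bits
```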
{ "domain": "cs.stackexchange", "id": 16425, "tags": "binary" }
Returning BindingList
Question: I came to work at a place not really understanding data binding. I have been working with it for about a year now, and I'd like to get some clarification on what is going on. Particularly, I'd like to know if there is any difference between the following two types of calls. Given this simplistic example of the data: public class TableType { public int ID; } public System.Data.Linq.Table<TableType> TableTypes { get { return this.GetTable<TableType>(); } } Version 1: private BindingList<TableType> _tableTypeList; public BindingList<TableType> TableTypesList1 { get { if (_tableTypeList == null) { var temp = TableTypes.OrderBy(t => t.ID); _tableTypeList = ((IListSource)temp).GetList() as BindingList<TableType>; } return _tableTypeList; } } The Senior Developer here wrote lots of database code using Version 1. Looking at it, you would think that it would prevent redundant database calls, but that does not seem to be the case. A breakpoint on the code never gets hit again. Version 2: public BindingList<TableType> TableTypesList2 { get { var query = from t in TableTypes orderby t.ID select t; var result = new BindingList<TableType>(query.ToList()); return result; } } The Senior Developer left about 6 months ago, so now I am writing the database calls. I have been using Version 2, which looks much cleaner to me and appears to do the exact same thing. Is one version any better than the other? Is there anything one version does that another does not? How do I get a better feel for what is going on? Particularly, I was surprised to find that another call to TableTypesList1 did not call the getter. Answer: Is one version any better than the other? It depends on what you actually want. V1 is a one-time operation. The table is going to be filled only once and subsequent calls will use cached data. V2 on the other hand will be executed on each access. One cannot say which one to choose. You must know what would work better for you.
Is there anything one version does that another does not? Yes. V1 will get the data only once and V2 every time. This means that it might be acceptable to let the first one remain a property, because subsequent calls will be fast and only the first-time initialization might take some time (depending on how much data it needs to get). The value it returns is stored in a backing field and it returns the same value every time. The second solution, on the other hand, clearly behaves like a method because it can return different lists on each call, so I would make V2 a method. How do I get a better feel for what is going on? This if statement is the answer: if (_tableTypeList == null) { var temp = TableTypes.OrderBy(t => t.ID); _tableTypeList = ((IListSource)temp).GetList() as BindingList<TableType>; } I was surprised to find that another call to TableTypesList1 did not call the getter. It probably did, but if you put the breakpoint inside the if block, it's obvious that this block won't be hit again.
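The behavioural difference between the two versions is lazy caching versus a fresh query on every access. A language-agnostic sketch of the two property styles (a hypothetical Python stand-in with a counter instead of a real database):

```python
class Repo:
    """Hypothetical stand-in for the data context; db_calls counts round-trips."""
    def __init__(self):
        self.db_calls = 0
        self._cached = None

    def _query(self):
        self.db_calls += 1               # stands in for the actual database hit
        return [3, 1, 2]

    @property
    def table_v1(self):
        # Version 1: lazy-initialized backing field; the query runs once,
        # and every later access returns the same cached list.
        if self._cached is None:
            self._cached = sorted(self._query())
        return self._cached

    @property
    def table_v2(self):
        # Version 2: a fresh query (and a fresh list) on every access.
        return sorted(self._query())

repo = Repo()
assert repo.table_v1 is repo.table_v1      # same cached object both times
assert repo.db_calls == 1                  # one query despite two accesses
assert repo.table_v2 is not repo.table_v2  # distinct lists each time
assert repo.db_calls == 3                  # each access queried again
```

Which style is right depends on whether stale data is acceptable, exactly as the answer says; note also that the cached version hands every caller the same list instance, so mutations are shared.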
{ "domain": "codereview.stackexchange", "id": 25537, "tags": "c#, linq, comparative-review, databinding" }
lookupTransform is throwing error
Question: I have a static transformation, which I am trying to fetch using lookupTransform. However, it is not working and throws the following error: "base" passed to lookupTransform argument target_frame does not exist. Below is the code snippet: tf2_ros::Buffer tfBuffer; tf2_ros::TransformListener tfListener(tfBuffer); try { // get the latest available transformation geometry_msgs::TransformStamped t = tfBuffer.lookupTransform("base", "kinect2_link", ros::Time(0)); } catch (tf2::TransformException& ex) { ROS_ERROR_STREAM("Unable to fetch static transformation. " << ex.what()); } I tried to debug the issue. The transformation is available in rostopic. Please see below ravi@lab:~/ros_ws$ rostopic echo /tf_static transforms: - header: seq: 0 stamp: secs: 1522910703 nsecs: 637645959 frame_id: base child_frame_id: kinect2_link transform: translation: x: 0.824234432376 y: 0.100365580648 z: 0.140681429475 rotation: x: -0.345864766717 y: 0.656658630544 z: -0.629654174274 w: 0.229592305828 A similar question was asked here but unfortunately, it doesn't contain any answer. My question is how to get the static transformations from tf using C++. I am using ROS Indigo on Ubuntu 14.04 LTS OS. Originally posted by ravijoshi on ROS Answers with karma: 1744 on 2018-04-05 Post score: 0 Original comments Comment by gvdhoorn on 2018-04-05: Unless we've run into a regression or undiscovered bug, this might be just a matter of not waiting long enough for the buffer to actually contain those transforms. See #q287540 for a possible duplicate. Comment by ravijoshi on 2018-04-05: I need to check the reference, you provided. However I have a question regarding wait time for buffer. I am trying to access static transformation, which is never going to change. Is the wait concept applies to this scenario as well? Comment by gvdhoorn on 2018-04-05: I would have to verify that for static transforms.
But in general instantiating a buffer and listener and then using them immediately is not going to work. The infrastructure needs some time to be able to gather all the information (essentially: receive TF msgs and process it). Comment by tfoote on 2018-04-05: Yes even static transform information must be propagated after the listener is created. It needs time to set up the communication channels and for the messages to be sent from all the sources. Comment by ravijoshi on 2018-04-05: @tfoote: I see. I am going to initialize the buffer with a wait time. But how do we decide this wait time? Too much wait is useless and too less doesn't work ! Comment by gvdhoorn on 2018-04-05: tf2_ros::Buffer has a method canTransform(..) that you can use to see whether the necessary information is present in the local buffer. Answer: Overall you need to wait for the transforms to become available. All calls should use a mechanism to check if the transform is available and have a retry or wait policy. There's a tutorial here: http://wiki.ros.org/tf2/Tutorials/tf2%20and%20time%20%28C%2B%2B%29 In general one may fail, but a later piece of data might be usable later once the transform data becomes available. Originally posted by tfoote with karma: 58457 on 2018-04-05 This answer was ACCEPTED on the original site Post score: 1
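The canTransform(..) suggestion from the comments is an instance of a generic wait-with-deadline loop. A sketch of that pattern (plain Python; the predicate is a stand-in for calling tfBuffer.canTransform(target, source, time) on a real buffer):

```python
import time

def wait_until(predicate, timeout=1.0, poll=0.01):
    """Poll `predicate` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll)
    return predicate()                   # one final check at the deadline

# Stand-in for a TF buffer that only becomes ready after some setup delay.
ready_at = time.monotonic() + 0.05
assert wait_until(lambda: time.monotonic() >= ready_at, timeout=1.0)
assert not wait_until(lambda: False, timeout=0.05)   # never ready: times out
```

In actual ROS code the same effect is available directly: lookupTransform accepts a timeout argument, which waits internally instead of failing immediately.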
{ "domain": "robotics.stackexchange", "id": 30552, "tags": "ros, tf2, tf-static, transform-listener, ros-indigo" }
Refactoring a collection of if statements that contain 2 arguments.
Question: At the moment I have seven if statements that resemble the following code: if(hit.collider.gameObject.tag == "Colour1" && start_time > look_at_time) { new_colour1.ChangeObjectMaterialColour(hit.collider.gameObject.renderer.material.color); var colums = GameObject.FindGameObjectsWithTag("column"); foreach( GameObject c in colums) c.GetComponent<MeshRenderer>().materials[1].color = new_colour1.orignalMaterial; } else if(hit.collider.gameObject.tag == "Colour2" && start_time > look_at_time) { new_colour2.ChangeObjectMaterialColour(hit.collider.gameObject.renderer.material.color); var colums = GameObject.FindGameObjectsWithTag("column"); foreach( GameObject c in colums) c.GetComponent<MeshRenderer>().materials[1].color = new_colour2.orignalMaterial; } Each statement is roughly 6 lines of code, takes up a lot of space, and can be a little tricky to read. What I want to do is find a way to refactor this so that my code is a little less clunky and doesn't take up too much space. I had thought about changing my collection of if statements into a switch statement, but I discovered that switch statements can't handle two arguments like I have above. Is there any other way I can refactor my code but keep the same functionality, or am I stuck with my collection of if statements? Answer: We can start by looking at what is duplicated. For example, the only difference between the two blocks of code is the string "Colour1" or "Colour2" in the if statement, and the variable new_colour1, which is replaced with new_colour2. From here I'd suggest something like the following: //This should be declared once - e.g. class level not method level.
var colorDict = new Dictionary<string, NewColorType> { {"Colour1", new_colour1}, {"Colour2", new_colour2} }; NewColorType newColor; if(start_time > look_at_time && colorDict.TryGetValue(hit.collider.gameObject.tag, out newColor)) { newColor.ChangeObjectMaterialColour(hit.collider.gameObject.renderer.material.color); var colums = GameObject.FindGameObjectsWithTag("column"); foreach( GameObject c in colums) c.GetComponent<MeshRenderer>().materials[1].color = newColor.orignalMaterial; }
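The table-driven refactoring above is language-independent. A hypothetical Python sketch of the same shape — the shared time guard is hoisted out, and one map lookup replaces the whole if/else-if chain:

```python
def apply_colour(tag, start_time, look_at_time, colours):
    """One shared time guard, then a single dictionary lookup per tag."""
    if start_time <= look_at_time:
        return None                  # the second condition, checked once
    return colours.get(tag)          # replaces the whole if/else-if chain

# Hypothetical tag -> colour table standing in for colorDict in the answer.
colours = {"Colour1": "red", "Colour2": "blue"}

assert apply_colour("Colour1", 10, 5, colours) == "red"
assert apply_colour("Colour7", 10, 5, colours) is None   # unknown tag: no-op
assert apply_colour("Colour1", 3, 5, colours) is None    # guard fails: no-op
```

Adding an eighth colour then means adding one table entry, not a seventh copy of the block.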
{ "domain": "codereview.stackexchange", "id": 4846, "tags": "c#" }
FFT of square wave: maths and actual numbers do not coincide
Question: this fact is driving me insane... Theory states that the Fourier transform of a square wave is a sinc function, but if I compute the fft of a synthetic (perfect) square wave of amplitude 1 and frequency 1 and I check the numbers, I see them drift from a sinc function. Phases seem to change linearly through the bins, instead of being all constant as expected, and real values (the odd ones of course) are all strangely equal to 2/framesize instead of being zero as I would expect. In fact, if I synthesize the same square wave in the frequency domain by simply using a sinc function and I do the IFFT and check the output, I notice that the square wave so obtained is almost perfect, were it not for some tiny ringing at the edges (Gibbs?). How is this? What, then, is the "correct" function representing it in the frequency domain, if not a complex sinc as it should be? Or perhaps what I noticed is simply due to the fact that, since we are dealing with discrete quantities (FFT = DFT), things work slightly differently than in the abstract case (Fourier transform in the pure math sense), and therefore some little modifications (scaling?) are required? Thanks in advance Answer: The continuous-time Fourier transform of a single rectangular pulse $p(t)$ of duration $[-d,d]$ is: $$ P(\Omega) = \frac{ 2 \sin(\Omega d) }{\Omega } = 2 d \cdot \text{sinc}( \frac{\Omega d}{\pi} ) \tag{1} $$ where $\Omega$ is the frequency in radians per second. You cannot represent $p(t)$ or $P(\Omega)$ using a sampled-data computer system because $p(t)$ is not bandlimited. However, there is an analogous definition of a rectangular pulse in discrete time, defined as a sequence $x[n]$ of duration $[-d,d]$, with $d$ an integer, whose discrete-time Fourier transform is: $$X(e^{j\omega}) = \frac{ \sin( \omega (2d+1) /2 ) }{ \sin( \omega /2 )} \tag{2} $$ which is called a periodic sinc or a Dirichlet kernel. Note that this is not a simulation of the continuous case.
Using the computer with an N-point DFT/FFT, the implementation of the discrete-time case will be $$X[k] = X(e^{j\omega})|_{\omega_k = \frac{2\pi}{N}k} = \frac{ \sin( \frac{\pi k}{N} (2d+1)) }{ \sin( \frac{\pi k}{N} )} \tag{3} $$ A Matlab/Octave implementation is as follows d = 3; % rectangular pulse in [-d,d] N = 32; % DFT/FFT length k = 0:N-1; % DFT index X = sin((pi/N)*(2*d+1)*k)./sin((pi/N)*k); X(1) = 2*d+1; % fix the 0/0 indeterminacy at k = 0 x = real(ifft(X,N)); % compute inverse DFT on X figure,stem(k,x); title('rect-pulse from inverse DFT of periodic-sinc'); Note that the circularity of the DFT implies that negative indices of the pulse $x[n]$ appear at the right edge.
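The same check in NumPy, mirroring the Matlab snippet above (a sketch): the inverse DFT of the Dirichlet kernel recovers the rectangular pulse exactly, with no Gibbs ringing, because no spectral truncation has taken place.

```python
import numpy as np

d, N = 3, 32                              # pulse on n = -d..d, DFT length N
k = np.arange(N)
with np.errstate(divide="ignore", invalid="ignore"):
    X = np.sin(np.pi * (2 * d + 1) * k / N) / np.sin(np.pi * k / N)
X[0] = 2 * d + 1                          # limit of the 0/0 term at k = 0

x = np.real(np.fft.ifft(X))               # inverse DFT of the periodic sinc
expected = np.zeros(N)
expected[: d + 1] = 1                     # samples n = 0..d
expected[N - d:] = 1                      # samples n = -d..-1, wrapped to the right edge
assert np.allclose(x, expected)           # an exact rect pulse -- no ringing
```

Ringing only appears when the spectrum is truncated (e.g. zeroing high bins before the IFFT), which is the continuous-intuition trap the question fell into.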
{ "domain": "dsp.stackexchange", "id": 7905, "tags": "ifft" }
Current operator, why is this form valid up to second order in $q$?
Question: Context In Bernevig's textbook Topological insulators and topological superconductors, an approximate form of the current operator in momentum space is derived. It is said to be valid to second order in momentum, but I don't understand why. They start with a generic lattice Hamiltonian with translational invariance, so that it can be written in momentum space: $$ H=\sum_{ij} c^\dagger_{i}h_{ij}c_{j}, \quad H=\sum_{\mathbf{k}}c_{\mathbf{k}}^\dagger h_\mathbf{k} c_{\mathbf{k}}.\tag{1}$$ Here, $i,j$ correspond to the lattice sites and $\mathbf{k}$ to momentum. To obtain the current, they use the continuity equation in momentum space: $$ \dot{\rho}(\mathbf{x})+\nabla\cdot \mathbf{J}(\mathbf{x})=0 \implies \dot{\rho}_\mathbf{q}-i\mathbf{q}\cdot\mathbf{J}_\mathbf{q}=0\tag{2} .$$ Next, they use the density operator in momentum space and the Heisenberg equation of motion to write: $$ \rho_\mathbf{q}=\frac{1}{\sqrt{N}}\sum_\mathbf{k}c^\dagger_{\mathbf{k+q}}c_\mathbf{k}\implies -i\mathbf{q}\cdot\mathbf{J}_\mathbf{q}=-\dot{\rho}_\mathbf{q}=i[\rho_\mathbf{q},H], \tag{3} $$ which after some algebra becomes $$ \mathbf{q}\cdot \mathbf{J}_\mathbf{q}=\frac{1}{\sqrt{N}}\sum_\mathbf{k} (h_{\mathbf{k}+\mathbf{q}}-h_\mathbf{k})c_{\mathbf{k}+\mathbf{q}}^\dagger c_\mathbf{k}. \tag{4}$$ After using $h_{\mathbf{k}+\mathbf{q}}-h_\mathbf{k} \approx \mathbf{q}\cdot \partial_\mathbf{k} h_\mathbf{k}$, it is possible to identify the current operator by comparing both sides of equation (4) as: $$ \mathbf{J}_\mathbf{q}=\frac{1}{\sqrt{N}}\sum_\mathbf{k} \frac{\partial h_\mathbf{k}}{\partial \mathbf{k}} c^\dagger_{\mathbf{k}+\mathbf{q}}c_\mathbf{k}$$ I understand everything up to this point. Clearly, this form of the current operator is valid to first order in $\mathbf{q}$.
However, they say that by shifting $\mathbf{k}\rightarrow \mathbf{k}-\mathbf{q}/2$ the approximation becomes valid to second order: We can make a better approximation, valid to second order in $\mathbf{q}$, by shifting $\mathbf{k}\rightarrow \mathbf{k}-\mathbf{q}/2$: $$ \mathbf{q}\cdot \mathbf{J}_\mathbf{q}=\frac{1}{\sqrt{N}}\sum_\mathbf{k}(h_{\mathbf{k}+\mathbf{q}/2}-h_{\mathbf{k}-\mathbf{q}/2})c^\dagger_{\mathbf{k}+\mathbf{q}/2}c_{\mathbf{k}-\mathbf{q}/2}\\=\frac{1}{\sqrt{N}}\sum_\mathbf{k}\left(\frac{\partial h_\mathbf{k}}{\partial \mathbf{k}}\cdot \mathbf{q} \right)c^\dagger_{\mathbf{k}+\mathbf{q}/2}c_{\mathbf{k}-\mathbf{q}/2}+\mathcal{O}(q^2)\tag{3.8}$$ The linear term $q$ is important to get right in some cases, and hence the shift performed is very important. The current operator at small $q$ is, hence, $$ \mathbf{J}_\mathbf{q}=\frac{1}{\sqrt{N}}\sum_\mathbf{k} c^\dagger_{\mathbf{k}+\mathbf{q}/2}\frac{\partial h_\mathbf{k}}{\partial \mathbf{k}}c_{\mathbf{k}-\mathbf{q}/2}\tag{3.9}$$ I don't understand: Why does shifting the momentum make $J_\mathbf{q}$ valid to second order? By looking at the second line of (3.8) and equation (3.9), is equation (3.9) then valid only to first order in $q$? Answer: this is due to the Taylor expansion. Consider the function $F(x)$ with some Taylor expansion about $x_0$, then while $F(x_0+\delta x) = F(x_0) + \delta x \partial_x F(x_0)$ to first order in $\delta x$, if we want to look at the difference of $F$ at points around $x_0$ we have $$F(x_0+\delta x) - F(x_0-\delta x) = 2\delta x \partial_x F(x_0)$$ which is true to second order in $\delta x$ because that order cancelled out between the two contributions. Similarly $$ h(k+\frac{q}{2}) - h(k-\frac{q}{2}) = h(k) + \frac{q}{2}\partial_k h(k) + \frac{q^2}{8}\partial^2_k h(k) - h(k) + \frac{q}{2}\partial_k h(k) - \frac{q^2}{8}\partial^2_k h(k) + O(q^3) = q \partial_k h(k) + O(q^3)$$
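The cancellation described here is the same one that makes central finite differences second-order accurate, which is easy to check numerically. A sketch with exp standing in for a smooth $h_k$, since its exact derivative is known:

```python
import math

h = math.exp                  # smooth test function with known derivative: h' = h
k0, q = 0.7, 1e-3
exact = math.exp(k0)          # the true derivative h'(k0)

forward = (h(k0 + q) - h(k0)) / q              # one-sided difference: error O(q)
central = (h(k0 + q / 2) - h(k0 - q / 2)) / q  # symmetric: the O(q) error cancels

# At the same step q, the symmetric form is dramatically more accurate:
assert abs(central - exact) < abs(forward - exact) / 1000
```

Halving the step halves the forward-difference error but quarters the central one, which is exactly the "valid to one order higher" statement in the answer.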
{ "domain": "physics.stackexchange", "id": 90583, "tags": "quantum-mechanics, operators, approximations, topological-insulators, lattice-model" }
Solve 3CNF in Poly-Time with Satisfiability Oracle
Question: The problem: Given an algorithm A which can tell whether any 3CNF formula is satisfiable in poly-time, develop an algorithm B that calculates a solution for the formula, also in poly-time, using A as a sub-routine. The only idea I have is to negate some literals (which we choose how?) and check again whether the (initially satisfiable) formula is still satisfiable - which would mean that the values of the negated literals are somehow "essential" in any valid solution. But this idea is way too vague, and maybe it is not the right direction at all. Any hints and solutions are welcome! Answer: Hint: assign values to the variables one at a time and call algorithm A on the resulting formula. If algorithm A reports that the resulting formula is satisfiable, or that it is not, what does that tell you about the last variable assignment?
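For concreteness, here is a sketch of the self-reduction the hint points at (Python, with a brute-force solver standing in for the polynomial-time oracle A — the point is that algorithm B makes only n + 1 oracle calls; the oracle's own cost is whatever A costs):

```python
from itertools import product

def brute_force_sat(clauses, n):
    """Stand-in for oracle A: True iff the 3CNF formula is satisfiable.
    Clauses are tuples of nonzero ints; literal v means x_v, -v means NOT x_v."""
    for bits in product([False, True], repeat=n):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in c) for c in clauses):
            return True
    return False

def substitute(clauses, var, value):
    """Fix x_var := value: drop satisfied clauses, strip falsified literals."""
    out = []
    for c in clauses:
        if (var in c and value) or (-var in c and not value):
            continue                          # clause already satisfied
        out.append(tuple(l for l in c if abs(l) != var))
    return out

def find_assignment(clauses, n, oracle=brute_force_sat):
    """Algorithm B: one oracle call per variable recovers an assignment."""
    if not oracle(clauses, n):
        return None                           # nothing to find
    assignment = []
    for v in range(1, n + 1):
        trial = substitute(clauses, v, True)
        if oracle(trial, n):                  # x_v = True keeps it satisfiable
            assignment.append(True)
            clauses = trial
        else:                                 # then x_v = False must work
            assignment.append(False)
            clauses = substitute(clauses, v, False)
    return assignment

# (x1 or x2 or x3) and (not x1 or x2 or not x3) and (not x2 or x3 or x1)
f = [(1, 2, 3), (-1, 2, -3), (-2, 3, 1)]
a = find_assignment(f, 3)
assert a is not None
assert all(any((lit > 0) == a[abs(lit) - 1] for lit in c) for c in f)
```

The invariant is the whole argument: if the formula is satisfiable and fixing x_v = True makes it unsatisfiable, then x_v = False must preserve satisfiability, so no backtracking is ever needed.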
{ "domain": "cstheory.stackexchange", "id": 5808, "tags": "complexity-classes, sat, np, boolean-formulas" }
When is precision more important over recall?
Question: Can anyone give me some examples where precision is important and some examples where recall is important? Answer: For rare-cancer data modeling, anything that doesn't account for false negatives is a crime, so recall is a better measure than precision. For YouTube recommendations, false negatives are less of a concern, so precision is the better measure there.
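Written out from the confusion-matrix counts, the trade-off is direct (a sketch with made-up labels):

```python
def precision_recall(y_true, y_pred):
    """Precision = TP/(TP+FP) penalizes false alarms;
    recall = TP/(TP+FN) penalizes misses."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    return tp / (tp + fp), tp / (tp + fn)

# An aggressive screening model: it misses nothing (recall 1.0)
# at the cost of extra false alarms (precision drops).
y_true = [1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0]
p, r = precision_recall(y_true, y_pred)
assert r == 1.0      # no false negatives -- what the cancer case demands
assert p == 0.5      # half the flags are false alarms -- bad for recommendations
```

An aggressive screener trades precision for recall; a conservative recommender does the opposite.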
{ "domain": "datascience.stackexchange", "id": 10267, "tags": "machine-learning, model-evaluations" }
simulated turtlebot kinect data
Question: Hello, Is it possible to get a simulated raw image or create a point cloud in RViz with a simulated Kinect and a simulated TurtleBot "out of the box"? Because I can only get laser data posted on the /scan topic. I tried to build a point cloud but there is no published topic to do that. Is that even possible in a simulated environment, or do I have to reroute some of the data to another topic? I'm using Ubuntu 11.10 and Electric. I would really appreciate any tips because I'm new to ROS :) Thank you. Originally posted by Grega Pusnik on ROS Answers with karma: 460 on 2012-02-26 Post score: 0 Answer: This is a bug in the gazebo.urdf.xacro file. The camera frame names were not correct and therefore the gazebo plugins couldn't initialize properly. It's fixed in 330:4253a4e5f257 . I'll push out a release today with the fix, turtlebot 0.9.2 Originally posted by mmwise with karma: 8372 on 2012-02-28 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by karthik on 2012-02-28: It worked fine for me before i updated my packages recently. I started using real turtle n hence couldn't notice this issue. Anyways thanks for the update :) Comment by SL Remy on 2012-02-29: I've done this as well previously, although I may have done an update since then. In that test I added a table to the world and viewed the pointcloud in rviz. Comment by karthik on 2012-02-29: Ya i have performed mapping n ran navigation stack also in a office environment that i created.
{ "domain": "robotics.stackexchange", "id": 8404, "tags": "kinect, turtlebot" }
Isn't counting sort a better sorting algorithm for linked lists rather than merge sort?
Question: I have to analyze how every sorting algorithm's complexity would change if it had to sort a linked list and which of all of them is the most efficient. I find everywhere that merge sort is the most efficient one for linked lists, but I don't understand why counting sort isn't better. I think it would still be O(n). Am I missing something? Same with bucket sort and radix sort, would the complexity change with a linked list? Answer: You are comparing apples and orange-colored bicycles here. Merge Sort is a stable Comparison Sort. As such, it only requires that there must be a Total Preorder over the elements. Comparison Sorts have an asymptotic worst-case step complexity lower bound of $\Omega(n\log{}n)$. Therefore, Merge Sort's asymptotic worst-case step complexity is as good as it gets for Comparison Sorts. There cannot be a Comparison Sort with a better asymptotic worst-case step complexity. Counting Sort is a Non-Comparison Sort. Non-Comparison Sorts can be faster than Comparison Sorts, but they can only do this by exploiting some additional information or structure about the inputs that makes them less general than Comparison Sorts. For example, Counting Sort can only sort small non-negative integers, but has an asymptotic worst-case step complexity of $\mathcal{O}(n + |r|)$, where $|r|$ is the size of the range of possible inputs. Merge Sort can sort elements drawn from an arbitrarily large range, such as integers, which is something Counting Sort cannot do. Radix Sort can sort anything that can be ordered lexicographically. Its asymptotic worst-case step complexity is $\mathcal{O}(wn)$, where $w$ is the length of the sort key. Merge Sort can sort elements based on an arbitrarily large sort key, which is something Radix Sort cannot do. Bucket Sort works by distributing the elements into buckets, then sorting those buckets using a Comparison Sort.
The worst-case is when all elements end up in the same bucket, in which case Bucket Sort simply degrades to whatever sorting algorithm is used to sort the buckets. Since the sorting algorithm for the buckets is a Comparison Sort, its asymptotic worst-case step complexity cannot be better than $\mathcal{O}(n\log{}n)$, and thus neither can Bucket Sort's. Typically, Bucket Sort is used with an Insertion Sort for sorting the buckets, which has an asymptotic worst-case step complexity of $\mathcal{O}(n^2)$. In order to actually reap the benefits of Bucket Sort, a priori knowledge about the distribution of inputs is required. In other words, unless you have some additional a priori knowledge about the elements of the linked list, you can only use Comparison Sorts, and Counting Sort, Radix Sort, Bucket Sort, and friends are not even an option.
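To make the trade-off concrete, here is a minimal Counting Sort sketch (Python). Every step is a sequential pass, so nothing about it breaks on a linked list — what matters is the assumption: the keys must be small non-negative integers, and the key range |r| shows up directly in the cost.

```python
def counting_sort(xs, max_key):
    """O(n + |r|) -- only valid because keys are integers in [0, max_key]."""
    counts = [0] * (max_key + 1)
    for x in xs:                         # one forward pass over the input
        counts[x] += 1
    out = []
    for value, c in enumerate(counts):   # one pass over the whole key range
        out.extend([value] * c)
    return out

assert counting_sort([3, 1, 4, 1, 5, 9, 2, 6], max_key=9) == [1, 1, 2, 3, 4, 5, 6, 9]
# Sorting even a single element still costs a pass over `counts`:
# the |r| term is unavoidable, which a comparison sort never pays.
```

For general keys (arbitrary integers, strings, objects with a comparator), this precondition fails, and Merge Sort's O(n log n) is the best available — which is the answer's point.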
{ "domain": "cs.stackexchange", "id": 21936, "tags": "sorting" }
Electronic-vibrational-rotational Transition
Question: I'm trying to simulate the fluorescence spectrum for the first time and have run into several problems. The dipole matrix element for a transition between 2 different electronic states is as follows: $$\begin{aligned} \boldsymbol{M}_{i k} &=\int \chi_{i}^{*}\left[\int \Phi_{i}^{*} \boldsymbol{p}_{\mathrm{el}} \Phi_{k} \mathrm{d} \tau_{\mathrm{el}}\right] \chi_{k} \mathrm{d} \tau_{\mathrm{N}} \\ &=\int \chi_{i}^{*} \boldsymbol{M}_{i k}^{\mathrm{el}}(R) \chi_{k} \mathrm{d} \tau_{\mathrm{N}} \end{aligned}$$ The total molecular wave function is $\psi(\boldsymbol{r}, \boldsymbol{R})=\Phi(\boldsymbol{r}, \boldsymbol{R}) \times \chi(\boldsymbol{R})$ and the nuclear wave function is $\chi_{\mathrm{N}}=\psi_{\mathrm{vib}} \cdot Y(\vartheta, \varphi)$. The matrix element becomes $$\begin{array}{c} \boldsymbol{M}_{i k}=\boldsymbol{M}_{i k}^{\mathrm{el}}\left(R_{\mathrm{e}}\right) \int \psi_{\mathrm{vib}}^{*}\left(v_{i}\right) \psi_{\mathrm{vib}}\left(v_{k}\right) \mathrm{d} R \\ \cdot \int Y_{J_{i}}^{M_{i}} \hat{\boldsymbol{p}}_{\mathrm{el}}(\vec{r}) Y_{J_{k}}^{M_{k}} \sin \vartheta \mathrm{d} \vartheta \mathrm{d} \varphi \end{array}$$ Why does the electronic contribution to the molecular dipole moment reappear in the second integral?
I thought that when we take $\boldsymbol{M}_{i k}^{\mathrm{el}}$ out of the integral, since it's an integral over electronic coordinates, what is left should be the integral of $\psi_{\mathrm{vib}} \cdot Y(\vartheta, \varphi)$: $$\int Y_{J_{i}}^{M_{i}} Y_{J_{k}}^{M_{k}} \sin \vartheta \mathrm{d} \vartheta \mathrm{d} \varphi \cdot \int \psi_{\mathrm{vib}}^{*}\left(v_{i}\right) \psi_{\mathrm{vib}}\left(v_{k}\right) \mathrm{d} R$$ [What I think the matrix element should be] The formulas are from "Atoms, Molecules and Photons: An Introduction to Atomic-, Molecular- and Quantum Physics" (Wolfgang Demtröder), p. 346. Answer: I believe the key fact is that the electronic transition dipole moment is a vector quantity, and the unit vector in its direction $\hat{p}$ is what is included in the angular integral. So the electronic contribution does not reappear in the second integral; this integral over a unit vector $\hat p$ in the direction of $\bf p$ can only be $\leq 1$. See equation 9.129 in the same textbook and the description that immediately follows for what I believe is an equivalent situation. Edit: You make a good point in your comments and I wasn't clear enough before. The key fact is that ${\bf M}_{ik}$ is a vector quantity in the lab frame. This means its vector components can be multiplied by electric fields in the lab frame, for example laser fields whose polarization is best expressed in the lab frame. As you mentioned, ${\bf p}_{el}$ is a function of the electronic coordinates only, but only when expressed in the molecule-fixed frame. If you want to talk about the interaction of a laser field with the electrons of a rotating molecule, you will need to rotate the components of ${\bf p}_{el}$ into the lab frame. To do this rotation, you need a Wigner rotation matrix -- something like ${\mathcal D}_{M\Omega}^{(J)}(\vartheta,\varphi,0)^*$.
The rotation matrix is a function of the nuclear orientation, and so when he writes $\hat p_{\rm el}$ in the second integral, he is suppressing the fact that $\hat p_{\rm el}$ is implicitly a function of the nuclear orientation $(\vartheta,\varphi)$. Demtröder appears to have swept this complexity under the rug, but all the gory detail is in the textbook by Brown & Carrington, Chapter 5. In the last sentence of the paragraph following Eq. (9.135b), Demtröder very briefly alludes to this issue. He says: Different from the nuclear part of the molecular dipole moment in Eq. (9.124), which is directed along the internuclear axis, the electronic part of the dipole moment can have any direction in the molecular coordinate system. The transformation to the space fixed system is therefore more complicated and has to use a transformation matrix, which contains the Euler angles. Here "the Euler angles" refers to those of the nuclei.
{ "domain": "physics.stackexchange", "id": 66238, "tags": "quantum-mechanics, spectroscopy, vibrations, dipole-moment, fluorescence" }
C++ WAVE file reader: library-like structure, safety, readability
Question: Introduction I have released a small a WAVE file reader with a mutex/lock-based caching mechanism, as a header-only library. The general purpose of the library is to read WAVE files into floating points, in a way that handles repeated sequential requests for audio data without hanging on disk reads. I am looking for some criticism of the code but also of the structure of the project. Is there anything unsafe about the code, or is there any misuse of language constructs? Does the project provide everything you would expect from a public-facing library? Code #pragma once #include <vector> #include <string> #include <mutex> #include <thread> #include <iostream> #include <istream> #include <set> constexpr size_t WAV_HEADER_DEFAULT_SIZE = 44u; //! Wave header /*! Reads and stores the wave header in the same way it appears in the WAV file. */ struct WAV_HEADER { /*! * Read the first 44 bytes of the input stream into the header. */ bool read(std::istream& s) { if (s.good()) { s.seekg(0u); s.read(&m_0_headerChunkID[0], 4); s.read((char*)&m_4_chunkSize, 4); s.read(&m_8_format[0], 4); s.read(&m_12_subchunk1ID[0], 4); s.read((char*)&m_16_subchunk1Size, 4); s.read((char*)&m_20_audioFormat, 2); s.read((char*)&m_22_numChannels, 2); s.read((char*)&m_24_sampleRate, 4); s.read((char*)&m_28_byteRate, 4); s.read((char*)&m_32_bytesPerBlock, 2); s.read((char*)&m_34_bitsPerSample, 2); s.read(&m_36_dataSubchunkID[0], 4); s.read((char*)&m_40_dataSubchunkSize, 4); } return s.good(); } /*! 
* Checks whether the header is in a format that can be read by waveread */ bool valid() const { return (std::string{ &m_0_headerChunkID[0],4u } == std::string{ "RIFF" }) && // RIFF std::string{ &m_8_format[0],4u } == std::string{ "WAVE" } && // WAVE format m_16_subchunk1Size == 16 &&// PCM, with no extra parameters in file m_20_audioFormat == 1 && // uncompressed m_32_bytesPerBlock == (m_22_numChannels * (m_34_bitsPerSample / 8)) && // block align matches # channels and bit depth ( (m_34_bitsPerSample == 8) || (m_34_bitsPerSample == 16) || (m_34_bitsPerSample == 24) || (m_34_bitsPerSample == 32) // available bit depths ); } /*! * Clear all data in the header setting values to 0 or "nil\0" */ void clear() { auto cpy = [](char from[4], char to[4]) // since strcpy is deprecated on windows and strcpy_s absent on *nix. { for (size_t i{ 0u }; i < 4u; ++i) to[i] = from[i]; }; char none[4]{ "nil" }; cpy(none, m_0_headerChunkID); m_4_chunkSize = 0; cpy(none, m_8_format); cpy(none, m_12_subchunk1ID); m_16_subchunk1Size = 0; m_20_audioFormat = 0; m_22_numChannels = 0; m_24_sampleRate = 0; m_28_byteRate = 0; m_32_bytesPerBlock = 0; m_34_bitsPerSample = 0; cpy(none, m_36_dataSubchunkID); m_40_dataSubchunkSize = 0; } /*! 
* Samples per channel */ int samples() const { return (m_40_dataSubchunkSize / ((m_22_numChannels * m_34_bitsPerSample) / 8)); } char m_0_headerChunkID[4]; /*!< Header chunk ID */ int32_t m_4_chunkSize; /*!< Chunk size*/ char m_8_format[4]; /*!< Format */ char m_12_subchunk1ID[4]; /*!< Subchunk ID */ int16_t m_16_subchunk1Size; /*!< Subchunk size*/ int16_t m_20_audioFormat; /*!< Audio format */ int16_t m_22_numChannels; /*!< Number of channels*/ int32_t m_24_sampleRate; /*!< Sample rate of a single channel */ int32_t m_28_byteRate; /*!< Number of bytes per sample*/ int16_t m_32_bytesPerBlock; /*!< Number of bytes per block (where a block is a single sample from each channel)*/ int16_t m_34_bitsPerSample; /*!< Bits per sample */ char m_36_dataSubchunkID[4]; /*!< Detailed description after the member */ int32_t m_40_dataSubchunkSize; /*!< Detailed description after the member */ }; static_assert(sizeof(WAV_HEADER) == WAV_HEADER_DEFAULT_SIZE, "WAV File header is not the expected size."); //! Wave reader /*! Reads audio from an input stream. */ class Waveread { public: //! Constructor /*! * \param stream the input stream * \param cacheSize the size of the cache. This should usually be a reasonable multiple of the size of the set of samples you expect to read each time you call audio(). * \param cacheExtensionThreshold Within interval [0,1]. When a caller gets audio, how far into the cache should the caller go before the cache is triggered to be extended? 
*/ Waveread( std::unique_ptr<std::istream>&& stream, size_t cacheSize = 1048576u, double cacheExtensionThreshold = 0.5 ) : m_stream{ stream.release() }, m_header{}, m_data{}, m_dataMutex{}, m_cachePos{ 0u }, m_opened{ false }, m_cacheSize{ cacheSize }, // 1MB == 1048576u m_cacheExtensionThreshold{ cacheExtensionThreshold } { if (m_cacheExtensionThreshold < 0.0) m_cacheExtensionThreshold = 0.0; else if (m_cacheExtensionThreshold > 1.0) m_cacheExtensionThreshold = 1.0; m_header.clear(); } Waveread(const Waveread&) = delete; Waveread& operator=(const Waveread&) = delete; //! Move Constructor /*! * \param other Another waveread object. The move constructor enables the placement of waveread objects in containers using std::move(). * For instance you can do: * Waveread a{stream}; * std::vector<Waveread> readers; * readers.push_back(std::move(a)) * a is now unusable, but the vector now contains the wavereader. */ Waveread(Waveread&& other) noexcept : m_stream{ }, m_header{ }, m_data{ }, m_dataMutex{}, m_cachePos{ other.m_cachePos }, m_opened{ other.m_opened }, m_cacheSize{ other.m_cacheSize }, m_cacheExtensionThreshold{ other.m_cacheExtensionThreshold } { std::lock_guard<std::mutex> l{ other.m_dataMutex }; m_data = other.m_data; m_header = other.m_header; m_stream.reset(other.m_stream.release()); } //! Reset /*! * Resets the wavereader, clearing all data. * \param stream a new std::istream to read a wave file from. */ void reset(std::unique_ptr<std::istream>&& stream) { std::lock_guard<std::mutex> lock{ m_dataMutex }; m_stream = std::move(stream); m_data.clear(); m_header.clear(); m_cachePos = 0u; m_opened = false; } //! Open /*! * Loads the wave header from file, and fills the cache from the start. */ bool open() { if (!m_opened) { m_header.read(*m_stream.operator->()); if (m_header.valid()) { m_opened = true; load(0u, m_cacheSize); return true; } else return false; // we couldn't open it } return true; // we didn't open it, but it was already opened. } //! Close /*! 
 * Closes the wavereader.
 */
void close()
{
    std::lock_guard<std::mutex> lock{ m_dataMutex };
    m_stream->seekg(0u);
    m_data.clear();
    m_header.clear();
    m_cachePos = 0u;
    m_opened = false;
}

//! Audio
/*!
 * Get interleaved floating point audio samples in the interval (-1.f,1.f).
 * \param startSample index of first sample desired
 * \param sampleCount number of samples needed including first sample
 * \param channels Which channels would you like to retrieve. Zero-indexed. If channels are out of bounds, then their modulus with the channel count will be taken. This means if you ask for channels {0,1} from a mono file, you will retrieve two copies of the mono channel, interleaved.
 * \param stride for each channel, when getting samples, skip every n samples where n == stride.
 * \param interleaved determines how samples are ordered:
 *        true provides {C1S1, C2S1, ..., CMS1, C1S2, C2S2, ..., CMS2}
 *        false provides {C1S1, C1S2, ..., C1SN, C2S1, C2S2, ..., C2S2, ...}
 */
std::vector<float> audio(
    size_t startSample,
    size_t sampleCount,
    std::set<int> channels = std::set<int>{ 0,1 },
    size_t stride = 0u,
    bool interleaved = true
)
{
    if (!open())
        return std::vector<float>{};

    size_t startSample_ch_bit{ startSample * m_header.m_32_bytesPerBlock };
    size_t sampleCount_ch_bit{ sampleCount * m_header.m_32_bytesPerBlock };

    if ((startSample_ch_bit + sampleCount_ch_bit) >= (size_t)m_header.m_40_dataSubchunkSize) // case1: out of bounds of file
    {
        if (startSample_ch_bit >= (size_t)m_header.m_40_dataSubchunkSize) // case1A: read starts out of bounds
            return std::vector<float>{};
        else // case1B: read starts within bounds, ends out of bounds
        {
            load(startSample_ch_bit, m_header.m_40_dataSubchunkSize - startSample_ch_bit);
            return samples(0u, m_header.m_40_dataSubchunkSize - startSample_ch_bit, channels, stride, interleaved);
        }
    }
    else if (startSample_ch_bit >= m_cachePos && (startSample_ch_bit + sampleCount_ch_bit) <= (m_cachePos + m_data.size())) // case2: within cache
    {
        std::vector<float> result{ samples(startSample_ch_bit - m_cachePos, sampleCount_ch_bit, channels, stride, interleaved) };
        if (startSample_ch_bit > (m_cachePos + (size_t)(m_data.size() * m_cacheExtensionThreshold))) // case2A: approaching end of cache
        {
            std::thread extendBuffer{ &Waveread::load, this, m_cachePos + (size_t)(m_cacheSize * m_cacheExtensionThreshold), m_cacheSize };
            extendBuffer.detach();
        }
        return result;
    }
    else // case3: within file, outside of cache
    {
        if (load(startSample_ch_bit, m_cacheSize > sampleCount_ch_bit ? m_cacheSize : sampleCount_ch_bit)) // load samplecount or cachesize, whichever is greater.
            return samples(0u, sampleCount_ch_bit, channels, stride, interleaved);
        else
            return std::vector<float>{};
    }
}

//! Get header file
const WAV_HEADER& header() const { return m_header; }
//! Get size of cache
const size_t& cacheSize() const { return m_cacheSize; }
//! Get start position of cache
const size_t& cachePos() const { return m_cachePos; }
//! Has the file been opened
const bool& opened() const { return m_opened; }
//! Get cache extension threshold: this is the fraction of the cache that is read before it is extended.
const double& cacheExtensionThreshold() const { return m_cacheExtensionThreshold; }
//! Set cache extension threshold. Does not extend the cache until audio() has been called. Function will halt until the last load operation has finished.
void setCacheExtensionThreshold(const double& cacheExtensionThreshold)
{
    std::lock_guard<std::mutex> l{ m_dataMutex };
    m_cacheExtensionThreshold = cacheExtensionThreshold;
}
//! Set cache size. Does not extend the cache until audio() has been called. Function will halt until the last load operation has finished.
void setCacheSize(const size_t& csize)
{
    std::lock_guard<std::mutex> l{ m_dataMutex };
    m_cacheSize = csize;
}

private:
//! Load data into the cache
bool load(size_t pos, size_t size) // method will offset read by header size
{
    std::lock_guard<std::mutex> lock{ m_dataMutex };
    size_t truncatedSize{ (pos + size) < (size_t)m_header.m_40_dataSubchunkSize ? size : (size_t)m_header.m_40_dataSubchunkSize - pos };
    if (pos < (size_t)m_header.m_40_dataSubchunkSize)
    {
        m_stream->seekg(((std::streampos)pos + (std::streampos)sizeof(WAV_HEADER))); // add header size
        m_data.resize(truncatedSize);
        if (m_stream->good())
        {
            m_stream->read(reinterpret_cast<char*>(&m_data[0]), truncatedSize);
            m_cachePos = pos;
            m_stream->clear(std::iostream::eofbit);
            return m_stream->good();
        }
    }
    return false;
}

//! Transform cached bytes into floats.
/*!
 * \param posInCache
 * \param size
 * \param channels
 * \param stride
 * \param interleaved
 */
std::vector<float> samples(
    size_t posInCache,
    size_t size,
    std::set<int> channels = std::set<int>{},
    size_t stride = 0u,
    bool interleaved = true) // posInCache is pos relative to cachepos.
{
    std::vector<float> result{};
    if (
        (posInCache + size) <= m_data.size() && // if caller is not overshooting the cache
        !channels.empty() // if caller has provided channels
        )
    {
        std::lock_guard<std::mutex> lock{ m_dataMutex };
        size_t bpc{ m_header.m_32_bytesPerBlock / (size_t)m_header.m_22_numChannels }; // bytes per channel
        if (interleaved)
            switch (m_header.m_34_bitsPerSample)
            {
            case 8: // unsigned 8-bit
                for (size_t i{ posInCache }; i < (posInCache + size); i += (m_header.m_32_bytesPerBlock * (1u + stride)))
                {
                    for (auto ch : channels)
                    {
                        // NOTE: (a) see narrow_cast<T>(var) (b) addition defined in C++ as: T operator+(const T &a, const T2 &b);
                        // EXCEPTIONS: Integer types smaller than int are promoted when an operation is performed on them.
                        size_t cho{ (ch % m_header.m_22_numChannels) * bpc };
                        result.emplace_back((float)
                            (m_data[i + cho] - 128) // unsigned, so offset by 2^7
                            / (128.f)); // divide by 2^7
                    }
                }
                break;
            case 16: // signed 16-bit
                for (size_t i{ posInCache }; i < (posInCache + size); i += (m_header.m_32_bytesPerBlock * (1u + stride)))
                {
                    for (auto ch : channels)
                    {
                        size_t cho{ (ch % m_header.m_22_numChannels) * bpc };
                        result.emplace_back((float)
                            ((m_data[i + cho]) | (m_data[i + 1u + cho] << 8))
                            / (32768.f)); // divide by 2^15
                    }
                }
                break;
            case 24: // signed 24-bit
                for (size_t i{ posInCache }; i < (posInCache + size); i += (m_header.m_32_bytesPerBlock * (1u + stride)))
                {
                    for (auto ch : channels)
                    {
                        size_t cho{ (ch % m_header.m_22_numChannels) * bpc };
                        // 24-bit is different to others: put the value into a 32-bit int with zeros at the (LSB) end
                        result.emplace_back((float)
                            ((m_data[i + cho] << 8) | (m_data[i + 1u + cho] << 16) | (m_data[i + 2u + cho] << 24))
                            / (2147483648.f)); // divide by 2^31
                    }
                }
                break;
            case 32: // signed 32-bit
                for (size_t i{ posInCache }; i < (posInCache + size); i += (m_header.m_32_bytesPerBlock * (1u + stride)))
                {
                    for (auto ch : channels)
                    {
                        size_t cho{ (ch % m_header.m_22_numChannels) * bpc };
                        result.emplace_back((float)
                            (m_data[i + cho] | (m_data[i + 1u + cho] << 8) | (m_data[i + 2u + cho] << 16) | (m_data[i + 3u + cho] << 24))
                            / (2147483648.f)); // signed, so divide by 2^31
                    }
                }
                break;
            default:
                break;
            }
        else
            switch (m_header.m_34_bitsPerSample)
            {
            case 8: // unsigned 8-bit
                for (auto ch : channels)
                {
                    size_t cho{ (ch % m_header.m_22_numChannels) * bpc };
                    for (size_t i{ posInCache }; i < (posInCache + size); i += (m_header.m_32_bytesPerBlock * (1u + stride)))
                    {
                        // NOTE: (a) see narrow_cast<T>(var) (b) addition defined in C++ as: T operator+(const T &a, const T2 &b);
                        // EXCEPTIONS: Integer types smaller than int are promoted when an operation is performed on them.
                        result.emplace_back((float)
                            (m_data[i + cho] - 128) // unsigned, so offset by 2^7
                            / (128.f)); // divide by 2^7
                    }
                }
                break;
            case 16: // signed 16-bit
                for (auto ch : channels)
                {
                    size_t cho{ (ch % m_header.m_22_numChannels) * bpc };
                    for (size_t i{ posInCache }; i < (posInCache + size); i += (m_header.m_32_bytesPerBlock * (1u + stride)))
                    {
                        result.emplace_back((float)
                            ((m_data[i + cho]) | (m_data[i + 1u + cho] << 8))
                            / (32768.f)); // divide by 2^15
                    }
                }
                break;
            case 24: // signed 24-bit
                for (auto ch : channels)
                {
                    size_t cho{ (ch % m_header.m_22_numChannels) * bpc };
                    for (size_t i{ posInCache }; i < (posInCache + size); i += (m_header.m_32_bytesPerBlock * (1u + stride)))
                    {
                        // 24-bit is different to others: put the value into a 32-bit int with zeros at the (LSB) end
                        result.emplace_back((float)
                            ((m_data[i + cho] << 8) | (m_data[i + 1u + cho] << 16) | (m_data[i + 2u + cho] << 24))
                            / (2147483648.f)); // divide by 2^31
                    }
                }
                break;
            case 32: // signed 32-bit
                for (auto ch : channels)
                {
                    size_t cho{ (ch % m_header.m_22_numChannels) * bpc };
                    for (size_t i{ posInCache }; i < (posInCache + size); i += (m_header.m_32_bytesPerBlock * (1u + stride)))
                    {
                        result.emplace_back((float)
                            (m_data[i + cho] | (m_data[i + 1u + cho] << 8) | (m_data[i + 2u + cho] << 16) | (m_data[i + 3u + cho] << 24))
                            / (2147483648.f)); // signed, so divide by 2^31
                    }
                }
                break;
            default:
                break;
            }
    }
    return result;
}

bool m_opened; /*!< Has the file been opened */
std::unique_ptr<std::istream> m_stream; /*!< Input stream */
WAV_HEADER m_header; /*!< Holds structure of header when opened, used in subsequent operations. */
std::vector<uint8_t> m_data; /*!< Cached data holding part of the data chunk of the WAV file. */
std::mutex m_dataMutex; /*!< Mutex to lock data when buffer is being extended */
size_t m_cachePos; /*!< At what point, from the start of the data chunk (i.e. cachePos == idx - 44u), does the cached data in m_data begin at.
 */
size_t m_cacheSize; /*!< How big should the cache (all channels) be in bytes */
double m_cacheExtensionThreshold; /*!< Within interval [0,1]. When a caller gets audio, how far into the cache should the caller go before the cache is triggered to be extended? */
};
```

Answer: Ensure correct order of constructor member initializers

My compiler warns me that in the constructors of Waveread, m_cachePos and m_opened are initialized in the wrong order. The order of the constructor initializer list should match the order in which the member variables are declared; otherwise they might be initialized in a different order than you specify, which could be a problem if they depend on each other in some way. I would also prefer using default member initializers for those member variables that don't depend on the arguments passed to the constructors. So for example:

Waveread(
    std::unique_ptr<std::istream>&& stream,
    size_t cacheSize = 1048576u,
    double cacheExtensionThreshold = 0.5
):
    m_stream{ stream.release() },
    m_cacheSize{ cacheSize },
    m_cacheExtensionThreshold{ cacheExtensionThreshold }
{
    ...
}

...

private:
    bool m_opened{};
    std::unique_ptr<std::istream> m_stream;
    WAV_HEADER m_header{};
    std::vector<uint8_t> m_data{};
    std::mutex m_dataMutex{};
    size_t m_cachePos{};
    size_t m_cacheSize;
    double m_cacheExtensionThreshold;
};

Are you sure your mutexes are locked correctly?

If you really want multiple threads to be able to access the same instance of class Waveread, then you had better be prepared for them to access the instance at the same time in the most inconvenient places. For example, what happens if two threads call open() simultaneously? It might happen that both see that m_opened is false, then they both call m_header.read(...), and likely one of them will read the actual header while the other will read the data right after the header. In what order will m_header.valid() be called? When will m_opened = true be set?
There are many combinations; there's at least one where m_opened will be set to true and the function will return true, but the values in m_header will be garbage. Either always lock the mutex when doing anything with member variables, or don't have any mutex in your class and leave it up to the callers to handle concurrent access.

Do you really need to cache yourself?

The general purpose of the library is to read WAVE files into floating-point samples, in a way that handles repeated sequential requests for audio data without hanging on disk reads. Virtually all operating systems that you run on desktop computers and servers already have sophisticated cache mechanisms to handle repeated access to the same data on disk. So you are duplicating what the OS already does for you. If you do lots of small reads, then there is some virtue in doing caching in your code, because it will avoid the overhead of system calls. However, if this is a concern, then it is probably even better to use memory-mapping to map the whole WAV file into memory.

Avoid repetition

Whenever you are repeating the same thing two or more times, you should immediately start to find some way to avoid the repetition. This can be done by using for-loops or creating functions, or perhaps by reorganizing the code a bit. In audio(), there are three cases being handled separately, but I think this can be simplified by first checking how much data has to be loaded into the cache, then converting startSample into the correct offset into the cache; once that is done, you can do a single return samples(...) statement:

std::vector<float> audio(...)
{
    ...
    size_t offset;
    bool load_ok;

    if (/* out of bounds */)
    {
        load_ok = load(startSample_ch_bit, m_header.m_40_dataSubchunkSize - startSample_ch_bit);
        offset = 0;
    }
    else if (/* within cache */)
    {
        load_ok = true; // Everything has already been loaded
        offset = startSample_ch_bit - m_cachePos;
    }
    else /* within file, outside cache */
    {
        load_ok = load(startSample_ch_bit, m_cacheSize > sampleCount_ch_bit ? m_cacheSize : sampleCount_ch_bit);
        offset = 0;
    }

    if (load_ok)
        return samples(offset, sampleCount_ch_bit, channels, stride, interleaved);
    else
        return {};
}

In samples(), you have a lot of repetition handling the different sample formats. Try to create a generic function that can convert arbitrarily sized integers. Let the compiler worry about optimizing it. Then just write the code to handle the different ways of channel interleaving. For example:

static float convert_sample(const uint8_t *data, size_t len)
{
    // Initialize a 32-bit integer with one or zero bits,
    // depending on whether we need to sign-extend the input data
    int32_t value;
    if (len > 1 && data[len - 1] & 0x80)
        value = -1;
    else
        value = 0;

    // Copy the input data into value, assuming everything is little-endian
    memcpy(&value, data, len);

    // Return it as a float scaled between -1.0 and 1.0
    // (8-bit samples are unsigned, so shift them down to be centered on zero)
    return static_cast<float>(value) / (1u << (len * 8 - 1)) - (len > 1 ? 0.f : 1.f);
}

...

std::vector<float> samples()
{
    ...
    const size_t sample_len = m_header.m_34_bitsPerSample / 8;

    if (interleaved)
    {
        for (size_t i{ posInCache }; ...)
        {
            for (auto ch: channels)
            {
                size_t cho{...};
                result.emplace_back(convert_sample(&m_data[i + cho], sample_len));
            }
        }
    }
    else
    {
        for (auto ch: channels)
        {
            for (size_t i{ posInCache }; ...)
            {
                size_t cho{...};
                result.emplace_back(convert_sample(&m_data[i + cho], sample_len));
            }
        }
    }

    return result;
}

Avoid unnecessary floating point math

Don't write (size_t)(m_cacheSize * 0.5), write m_cacheSize / 2.
The answer should be the same, and if it isn't, it's because on 64-bit machines a double has less precision than a size_t, and with large enough values this will become noticeable. Also, it's quite costly to convert an integer to a double and back; not only do you waste some CPU cycles converting between the two, it might cause an interrupt where the operating system has to restore the FPU state after a lazy FPU context switch (not really an issue in your code since you already convert samples to float in samples(), but just so you know).

Don't create throwaway threads

I was wondering why you needed locking at all, but then I spotted this in audio():

std::thread extendBuffer{ &Waveread::load, this, m_cachePos + (size_t)(m_cacheSize * 0.5), m_cacheSize };
extendBuffer.detach();

You are creating threads here, but you don't care what they are doing and detach them immediately. But what if I am calling audio() in quick succession, with startSample varying wildly at each call? Maybe there are two threads that want to read audio from different places, and they each call load() in turn? If reading data from disk were really so slow that you need the caching, then this will potentially create a large number of threads that slow down the system, and potentially cause the wrong data to be in the cache at the time it will be used. It is in fact possible that you create a thread to extend the buffer, but it does not start immediately; then there is another call to audio(), which hits case 1 or case 3 and calls load(), and between that call to load() and the call to samples(), the background thread executes its load(). That means samples() will read from the wrong data. It's hard to reason about threads you no longer have any control over. I would just trust the operating system: it already performs caching for you, and will likely read ahead for you as well.
Avoid using a std::set for channels

A std::set is quite an expensive data structure to use to pass the desired channels to audio(): it allocates memory on the heap and builds a balanced tree of nodes. Furthermore, you pass it by value, so a copy has to be made. A better data structure would be a std::bitset, but unfortunately it needs to know its size up front, and since WAV files support up to 65535 channels, that is not great. Another option would be a std::vector<bool>. Be sure to pass these structures by const reference:

std::vector<float> audio(
    size_t startSample,
    size_t sampleCount,
    const std::vector<bool> &channels = {true, true},
    size_t stride = 0u,
    bool interleaved = true
)
{
    ...
}

To iterate over them, you would do something like:

for (size_t ch = 0; ch < channels.size(); ++ch)
{
    if (channels[ch])
    {
        size_t cho{ (ch % ...) };
        ...
    }
}
{ "domain": "codereview.stackexchange", "id": 39618, "tags": "c++, c++11, library, audio" }
Amplitude ranges for different bit-depths
Question: In Python, I am using the library Soundfile (which uses the library libsndfile) for digital sound processing. I am working with different sound files coming from different internet databases, and therefore they have different bit sizes: either 16, 24 or 32 bit. If I understood correctly, when these sound files were recorded (with whatever device they were recorded), that device had an ADC with some bit precision, and therefore the amplitudes recorded by that device were mapped to the corresponding ranges:

16 bit: -32768 to +32767
24 bit: -8388608 to +8388607
32 bit: -2147483648 to +2147483647

Does it mean that the audio files with 32 bit have a higher amplitude than the rest? I guess not, right? Let's assume that all the devices had a microphone with the same sensitivity. Then the only difference is that the audio files recorded with the 24- and 32-bit devices were able to capture sounds louder than the maximum value of 32768, which for that particular sensitivity had some corresponding voltage value, right? But, again, if we assume that their microphones had the same sensitivity, a value of amplitude 32768 in a 16-bit precision file would mean the same loudness as a value of amplitude 32768 in a 24-bit precision file, right? Thanks!

Answer: The fixed-point bit-width representation of an audio signal is not an indication of the physical loudness of the signal. It is just a relative representation of the signal. ADCs will take in an analog signal with some specified max amplitude, say 2V peak-to-peak, for example. Then, this 2V range is quantized into 256 levels for an 8-bit converter. For a 16-bit converter, that same voltage range could be quantized into 65536 levels, and for 24 bits it is quantized into 2^24 levels. An ADC is thus normalized with respect to full scale, not to an LSB (i.e.
not all ADCs - 8/16/24 bit - have a common LSB representation; otherwise an 8-bit ADC would have a full scale on the order of a few mV if a 16-bit ADC has a full scale of a volt). The max voltage input is mapped roughly to the max digital representation (or close to it, usually). Then, a max value for an 8-bit, signed ADC is 127 and can represent the same voltage as the max value of a 16-bit, signed ADC of 32767.

As an aside, if you have 32-bit samples for audio, this is simply a high-precision representation to avoid introducing quantization noise; no audio will have such good fidelity that it truly requires 32 bits to represent it. That said, 24 bits is not a very convenient data type, so 32 bits is sometimes used for fixed-point representation to do better than 16 bits, which is marginal for high-fidelity representation and operation. Even for a 16- or 24-bit converter, the SNR is not limited by the quantization of the fixed-point representation; the associated analog circuitry will typically dominate the spurious-free dynamic range specification. Do a search for "16 bit audio ADC" and look at the datasheets of various converters if you want to get familiar with typical specs.

Finally, looking at different microphones, you will find varying sensitivities and SNR, of course. The output of a microphone is nonetheless amplified to match the input voltage range of an ADC when sampling the mic output. Imagine the mic's noise being some SNR below the max output level of the mic (plus amplifier). This noise is then sampled by the ADC. An 8-bit ADC may be sufficient for a poor mic (i.e. the noise floors of the mic and ADC are comparable), while a 24-bit converter is overkill for all but the best of mics. It is not that the signal out of a high-quality mic is higher voltage; it is that the noise level of the mic is much lower (and closer to the noise floor of the high-fidelity ADC).
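The full-scale normalization the answer describes can be sketched in a couple of lines of Python (a hypothetical helper for illustration; soundfile itself already hands you floats scaled this way when you read with the default float dtype):

```python
def to_float(sample, bits):
    """Scale a fixed-point sample by its full-scale value, mapping it into [-1.0, 1.0)."""
    return sample / float(1 << (bits - 1))

# The same physical voltage (here, half of full scale) yields the same float
# no matter which bit depth it was stored at:
half_16 = to_float(16384, 16)    # half of full scale in a 16-bit file
half_24 = to_float(4194304, 24)  # half of full scale in a 24-bit file
print(half_16, half_24)  # 0.5 0.5
```

This is exactly why files of different bit depths end up directly comparable after loading: each is normalized to its own full scale, not to a common LSB.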
{ "domain": "dsp.stackexchange", "id": 6267, "tags": "audio, python, digital, analog, bitdepth" }
Stereoisomers of compounds named tribromocyclobutane
Question: Here is the link to the page in which I don't understand the answer to the last question (Question 13). The question states: Draw all of the constitutional isomers and stereoisomers of compounds named tribromocyclobutane (all structures must contain a cyclobutane ring). The following image exists as a part of the answer. But why is the following image not part of the answer? Are the above two images not diastereomers (hence stereoisomers)?

Answer: The second configuration shown cannot exist, simply because of the geometry of the orbitals at the carbon center. The angle between the bromines (those on the same carbon) is much less than the tetrahedral angle of 109.5°, even though there is no forced strain.
{ "domain": "chemistry.stackexchange", "id": 3879, "tags": "organic-chemistry, stereochemistry, isomers" }
How to esterify phenol with oxalic acid?
Question: Is it possible to fully esterify phenol with oxalic acid in order to get diphenyl oxalate? What catalyst should I use, and is the synthesis possible to perform at home with hobby equipment?

Answer: I don't know for sure, but it could work, provided some conditions are fulfilled:

use oxalyl chloride
use low temperature, in order to try to suppress fragmentation of the intermediate (semi-ester) into the simple monoacyl chloride (by CO loss): in such cases $\ce{(COCl)2}$ behaves much like phosgene ($\ce{COCl2}$)
use the phenol as a dissociated salt (sodium, potassium) in polar solvents such as sulfolane or acetonitrile; this is optional and possibly risky, due to enhanced competitive "C-acylation" if ortho/para positions are free
use pyridine-based solvents as reactivity enhancers (they convert the acyl chloride to acyl immonium salts); optionally, even stronger catalysts such as DMAP (p-dimethylamino pyridine) could work

Anyway, to sum up, the major risks stem from:

"C-acylation" (which is generally kinetically slower but irreversible and thermodynamically favored over "O-acylation"), if some suitable position is available
formation of diaryl carbonates (on extrusion of CO)

I guess this last drawback could be at least reduced by working at low temperature.
{ "domain": "chemistry.stackexchange", "id": 5366, "tags": "organic-chemistry, synthesis, esters, phenols" }
How can I show that the speed of light in vacuum is the same in all reference frames?
Question: I have regularly heard that the Michelson-Morley experiment demonstrates that the speed of light is constant in all reference frames. By doing some research I have found that it actually demonstrated that the luminiferous aether probably didn't exist and that the speed of light didn't vary depending on which direction the planet was travelling in. I don't see how it demonstrated that motion towards a light source for instance doesn't affect the observer's speed relative to the light, as there were no moving parts in the experiment. The other sources I've looked at which say that the Michelson Morley experiment proved nothing like this one: Is the second postulate of Einstein's special relativity an axiom? and this one: How can we show that the speed of light is really constant in all reference frames? tend to say that Maxwell's equations were actually more significant to Einstein as they predict that light moves at a constant velocity, and this velocity has to be relative to something (or in relativity's case, everything). That something was thought to be the aether, but in the absence of that why could it not be relative to whatever emitted it? It seems like a more obvious immediate conclusion to come to than the idea that it's the same relative to everyone and all the counterintuitive results that ensue. Another idea is that the speed of light is the universal speed limit and therefore must have a fixed value just to work under galilean relativity. But then that argument goes in circles: "Why can't you go faster than the speed of light?" "Because otherwise your mass becomes infinite." "Why does your mass become infinite?" "Because of Einstein's special relativity." But this is based on the original fact that you can't go faster than the speed of light, so there's no argument I can find which completely answers why the speed of light has to be constant, other than that it has been regularly tested since. 
So my questions are: Is there something I'm missing about the Michelson-Morley experiment or Maxwell's equations which explains my objections and definitively shows that the speed of light is constant and that it is impossible to go faster than it? If not, is there any other specific example, ideally one which would have been available to Einstein, which I can use to explain to people with no knowledge of relativity why it is the case?

Answer: For a basic treatment of the Michelson-Morley experiment please see [1]. It's not important to know the technical details of the experiment to answer your questions, though. The only relevant thing is the result; let me put it in basic terms since you seem to struggle with the "physics slang": While the total velocity of a ball thrown from a truck is the sum of the velocity of the ball relative to the truck and the velocity of the truck relative to the observer, the velocity of a light beam emitted from the truck is not. Rather, the velocity of the light beam seems completely independent of the velocity of the truck. Michelson and Morley didn't have a truck; they had the earth orbiting the sun. Please make it clear to yourself that this experimental fact can be explained by stating that the speed of light is constant. If I say to you that the speed of light is constant in every frame of reference, then the above result isn't surprising at all to you.

But you want more. You want me to prove to you that the speed of light is universally constant. I cannot. There will never be an experiment that shows that this axiom is universally true. How should one ever construct such an experiment? How should one, for example, test the theory in the Andromeda galaxy? It's impossible, but it doesn't matter: why not just stick with the axiom, as long as we can explain everything we see around us with it? As you already said, there's an interesting connection between the invariance of the speed of light and Maxwell's equations.
One can indeed prove that the speed of light has to be constant; otherwise, Maxwell's theory can't be true for all inertial frames. But this is no proof that can convince you either, since accepting Maxwell's equations is no different from accepting the invariance of the speed of light. Furthermore, the basis of Einstein's theory is not the invariance of the speed of light, but the invariance of the speed of action, which cannot be concluded from Maxwell's theory, even though it's a reasonable guess. Physical theories are not provable. But as long as they comply with reality, we accept them as truths.

Addendum: I recommend this short lecture for laymen by R. Feynman on the topic. Feynman and I present a very similar line of reasoning.
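The truck example above can be made quantitative with the relativistic velocity-addition formula (not derived in the answer, but it is the rule that replaces the Galilean sum u + v):

```python
C = 299_792_458.0  # speed of light in m/s

def add_velocities(u, v, c=C):
    """Relativistic composition of collinear velocities; reduces to u + v when u*v << c^2."""
    return (u + v) / (1 + u * v / c**2)

# Light emitted from a "truck" moving at half the speed of light still travels at c:
print(add_velocities(C, 0.5 * C))  # 299792458.0
```

Whatever sub-luminal v you plug in, composing it with c returns exactly c, which is the invariance the Michelson-Morley result points to, and for everyday speeds the formula is indistinguishable from the ordinary sum.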
{ "domain": "physics.stackexchange", "id": 60124, "tags": "special-relativity, speed-of-light, inertial-frames, maxwell-equations, faster-than-light" }
jQuery countdown - accuracy
Question: I'm using this simple plugin to show a simple countdown in my pages; what I would like is to keep it more accurate, because sometimes it seems it isn't accurate. This is the plugin:

/*
countdown is a simple jquery plugin for countdowns
Dual licensed under the MIT (http://www.opensource.org/licenses/mit-license.php)
and GPL (http://www.opensource.org/licenses/gpl-license.php) licenses.
@source: http://github.com/rendro/countdown/
@author: Robert Fleischmann
@version: 1.0.0
*/
(function() {
  (function($) {
    $.countdown = function(el, options) {
      var getDateData,
        _this = this;
      this.el = el;
      this.$el = $(el);
      this.$el.data("countdown", this);
      this.init = function() {
        _this.options = $.extend({}, $.countdown.defaultOptions, options);
        if (_this.options.refresh) {
          _this.interval = setInterval(function() {
            return _this.render();
          }, _this.options.refresh);
        }
        _this.render();
        return _this;
      };
      getDateData = function(endDate) {
        var dateData, diff;
        endDate = Date.parse($.isPlainObject(_this.options.date) ? _this.options.date : new Date(_this.options.date));
        diff = (endDate - Date.parse(new Date)) / 1000;
        if (diff < 0) {
          diff = 0;
          if (_this.interval) {
            _this.stop();
          }
        }
        dateData = {
          years: 0,
          days: 0,
          hours: 0,
          min: 0,
          sec: 0,
          millisec: 0
        };
        if (diff >= (365.25 * 86400)) {
          dateData.years = Math.floor(diff / (365.25 * 86400));
          diff -= dateData.years * 365.25 * 86400;
        }
        if (diff >= 86400) {
          dateData.days = Math.floor(diff / 86400);
          diff -= dateData.days * 86400;
        }
        if (diff >= 3600) {
          dateData.hours = Math.floor(diff / 3600);
          diff -= dateData.hours * 3600;
        }
        if (diff >= 60) {
          dateData.min = Math.floor(diff / 60);
          diff -= dateData.min * 60;
        }
        dateData.sec = diff;
        return dateData;
      };
      this.leadingZeros = function(num, length) {
        if (length == null) {
          length = 2;
        }
        num = String(num);
        while (num.length < length) {
          num = "0" + num;
        }
        return num;
      };
      this.update = function(newDate) {
        _this.options.date = newDate;
        return _this;
      };
      this.render = function() {
        _this.options.render.apply(_this, [getDateData(_this.options.date)]);
        return _this;
      };
      this.stop = function() {
        if (_this.interval) {
          clearInterval(_this.interval);
        }
        _this.interval = null;
        return _this;
      };
      this.start = function(refresh) {
        if (refresh == null) {
          refresh = _this.options.refresh || $.countdown.defaultOptions.refresh;
        }
        if (_this.interval) {
          clearInterval(_this.interval);
        }
        _this.render();
        _this.options.refresh = refresh;
        _this.interval = setInterval(function() {
          return _this.render();
        }, _this.options.refresh);
        return _this;
      };
      return this.init();
    };
    $.countdown.defaultOptions = {
      date: "June 7, 2087 15:03:25",
      refresh: 1000,
      render: function(date) {
        _hey_html = "";
        if (date.years > 0) {
          _hey_html += '<span class="countdown-years" title="years left">' + date.years + 'year </span>';
        }
        return $(this.el).html(_hey_html + '<span class="countdown-days" title="days left"> ' + date.days + ' </span><span class="countdown-hours" title="hours left"> ' + (this.leadingZeros(date.hours)) + '<small> h </small> </span><span class="countdown-min" title="minutes left">' + (this.leadingZeros(date.min)) + '<small> m </small> </span><span class="countdown-sec" title="seconds left">' + (this.leadingZeros(date.sec)) + '<small> s </small></span>');
      }
    };
    $.fn.countdown = function(options) {
      return $.each(this, function(i, el) {
        var $el;
        $el = $(el);
        if (!$el.data('countdown')) {
          return $el.data('countdown', new $.countdown(el, options));
        }
      });
    };
    return void 0;
  })(jQuery);
}).call(this);

I think there may be some kind of delay when parsing dates in this plugin. The dates are PHP-generated as a Unix timestamp, and then I do this:

$(function(){
  $.each($('.countdown'), function() {
    var _element = '.countdown-' + $(this).attr("id");
    var _id = $(this).attr("id");
    if ($(_element).length > 0) {
      var _datetime = $(_element).attr('data-expiration').toLocaleString();
      var d = new Date(_datetime).getTime();
      var result = new Date(d);
      _datetime = d;
      init_countdown(_id, _element, _datetime);
    }
  });
});

This is the html:

<div data-expiration="Jan 01, 2013 20:01:15" id="25" class="span12 countdown label-expiring countdown-25">

Sometimes it doesn't show the real dates; it seems to be delayed by about 20/30 minutes, and I can't understand why. Any help appreciated, thanks!

Answer: I do not quite understand why your code has so many conversions from date to date, when your tag attribute already had a correct date format. I've done some adjustments to the source code; I've tested it and I do not see any kind of delay in the source code or in the plugin.
This is the source code and you can test here: simple mode, full mode, mixed mode.

function init_countdown(id, element, datetime, fullmode) {
  var endDate = datetime;
  if (!fullmode) {
    $(element).countdown({
      date: endDate
    });
  } else {
    $(element).countdown({
      date: endDate,
      render: function(data) {
        var el = $(this.el);
        el.empty()
          .append("<div>" + this.leadingZeros(data.years, 4) + " <span>years</span></div>")
          .append("<div>" + this.leadingZeros(data.days, 3) + " <span>days</span></div>")
          .append("<div>" + this.leadingZeros(data.hours, 2) + " <span>hrs</span></div>")
          .append("<div>" + this.leadingZeros(data.min, 2) + " <span>min</span></div>")
          .append("<div>" + this.leadingZeros(data.sec, 2) + " <span>sec</span></div>");
      }
    });
  }
}

$(function(){
  var mode = 1;
  $.each($('.countdown'), function() {
    var _element = '.countdown-' + $(this).attr("id");
    var _id = $(this).attr("id");
    if ($(_element).length > 0) {
      var _datetime = $(_element).attr('data-expiration');
      init_countdown(_id, _element, _datetime, (mode ^= 1));
    }
  });
});

HTML:

<body>
  <div id="25" class="span12 countdown label-expiring countdown-25" data-expiration="Jan 01, 2014 20:01:15">
  </div>
  <hr />
  <div id="26" class="span12 countdown label-expiring countdown-26" data-expiration="Jan 01, 2015 20:01:15">
  </div>
  <hr />
  <div id="27" class="span12 countdown label-expiring countdown-27" data-expiration="Jan 01, 2016 20:01:15">
  </div>
</body>
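For what it's worth, offsets of tens of minutes with countdowns like this usually come from date-string parsing (new Date("Jan 01, 2013 20:01:15") is interpreted in the browser's local timezone, and the toLocaleString() call on the attribute string in the question does nothing useful), not from the timer itself. Since the dates are PHP-generated anyway, a robust option is to emit the Unix timestamp and derive the parts from a plain millisecond difference. A minimal sketch (hypothetical helper, not part of the plugin):

```javascript
// Break a millisecond difference into countdown parts, with no date parsing involved.
function dateData(diffMs) {
  var diff = Math.max(0, Math.floor(diffMs / 1000));
  var data = {};
  data.days = Math.floor(diff / 86400); diff -= data.days * 86400;
  data.hours = Math.floor(diff / 3600); diff -= data.hours * 3600;
  data.min = Math.floor(diff / 60); diff -= data.min * 60;
  data.sec = diff;
  return data;
}

// Usage with a PHP-emitted timestamp in seconds, e.g. data-expiration="1357070475":
// var parts = dateData(expirationSeconds * 1000 - Date.now());
console.log(dateData(90061000)); // { days: 1, hours: 1, min: 1, sec: 1 }
```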
{ "domain": "codereview.stackexchange", "id": 2920, "tags": "javascript, jquery, plugin, datetime" }
Why is the ideal gas law only valid for hydrogen?
Question: I got this question in school: Explain, based on the properties of an ideal gas, why the ideal gas law only gives good results for hydrogen. We know that the ideal gas law is $$P\cdot V=n\cdot R\cdot T$$ with $P$ being the pressure, $V$ the volume, $n$ the amount of substance, $R$ the gas constant and $T$ the temperature (Source: Wikipedia - "Ideal gas"). An ideal gas must fulfill the following:

The particles have an infinitely small volume (or no volume),
The particles do not interact with each other through attraction or repulsion,
The particles can interact through elastic collisions.

Now, why does only hydrogen sufficiently fulfill these conditions? I initially assumed that the reason is that it has the smallest volume possible, as its nucleus only consists of a single proton. However, two things confuse me: (Let's first assume that my first idea was correct and the reason is the nucleus' scale/volume) helium's nucleus consists of two protons and two neutrons. It is therefore four times as large as hydrogen's nucleus. However, hydrogen's nucleus is infinitely many times larger than an ideal gas molecule (which would have no volume), so why does the factor of $4$ significantly affect the accuracy of the ideal gas law, while the infinitely larger gap between hydrogen's nucleus and an ideal gas particle doesn't? My first idea is not even true, as atoms do not only consist of their nucleus. In fact, most of their volume comes from their electrons. In both hydrogen and helium, the electrons are in the same atomic orbital, so the volume of the atoms is identical. That leaves only the collisions or the interactions as possible explanations for why the ideal gas law should work only for hydrogen. For both of these, I do not see why they should be any different for hydrogen and helium (or at least not to such a degree that it would significantly affect the validity of the ideal gas law). So where am I wrong here? Note: I do not consider this a homework question.
The question is not directly related to the actual problem; rather, I question whether the initial statement of the task is correct (as I tested every possible explanation and found none to be sufficient). Update I asked my teacher and told them my doubts. They agreed with my (and yours from the answers, of course!) points but still were of the opinion that hydrogen is the closest to an ideal gas (apparently, they were taught so in university). They also claimed that the mass of the gas is relevant (which would be the lowest for hydrogen; but I doubt that since there is no $m$ in the ideal gas equation) and that apparently, when measuring, hydrogen is closest to an ideal gas. As I cannot do any such measurements by myself, I would need some reliable sources (some research paper would be best: Wikipedia and some Q&A site including SE - although I do not doubt that you know what you are talking about - are not considered serious or reliable sources). While I believe that asking for specific sources is outside the scope of Stack Exchange, I still would be grateful if you could provide some sources. I believe it is in this case okay to ask for reference material since it is not the main point of my question. Update 2 I asked a new question regarding the role of mass for the elasticity of two objects. Also, I'd like to mention that I do not want to speak badly of my teacher, since I like their lessons a lot and they would never tell us something wrong on purpose. This is probably just a misconception. Answer: The short answer is that ideal gas behavior is NOT only valid for hydrogen. The statement you were given in school is wrong. If anything, helium acts more like an ideal gas than any other real gas. There are no truly ideal gases, only gases that sufficiently approach ideal gas behavior to enable the application of the ideal gas law. Generally, a gas behaves more like an ideal gas at higher temperatures and lower pressures. 
This is because the internal potential energy due to intermolecular forces becomes less significant compared to the internal kinetic energy of the gas, and the size of the molecules becomes much, much less than their separation. Hope this helps.
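To put rough numbers on "there are no truly ideal gases, only gases that approach ideal behavior": a sketch (not part of the original answer) using the van der Waals equation with approximate textbook constants shows that both hydrogen and helium stay within about 0.1% of ideal behavior near room temperature and 1 bar.

```python
# Rough sketch with textbook van der Waals constants (approximate values):
# compressibility factor Z = PV/RT for one mole; Z = 1 for an ideal gas.
R = 0.083145  # gas constant in L*bar/(mol*K)

vdw = {"H2": (0.2476, 0.02661),  # (a [L^2*bar/mol^2], b [L/mol])
       "He": (0.0346, 0.02380)}

def compressibility(a, b, V, T):
    P = R * T / (V - b) - a / V**2  # van der Waals pressure for 1 mol
    return P * V / (R * T)

T, V = 300.0, 24.9  # molar volume corresponding to roughly 1 bar at room temperature
Zs = {gas: compressibility(a, b, V, T) for gas, (a, b) in vdw.items()}
for gas, Z in Zs.items():
    print(f"{gas}: Z = {Z:.5f}")  # both land within ~0.1% of the ideal Z = 1
```

Neither gas is singled out; at ambient conditions both are very nearly ideal, which is the answer's point.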
{ "domain": "physics.stackexchange", "id": 69709, "tags": "homework-and-exercises, thermodynamics, physical-chemistry, ideal-gas" }
Show that $\mathrm{d}S=\frac{1}{T}\,\mathrm{d}U+\frac{1}{T}\,P\,\mathrm{d}V-\frac{1}{T}\,\mu\,\mathrm{d}N$
Question: I need help to show that \begin{align*}\mathrm{d}S &=\left(\frac{\partial S}{\partial U}\right)_{V,N}\mathrm{d}U+\left(\frac{\partial S}{\partial V}\right)_{U,N}\mathrm{d}V+\left(\frac{\partial S}{\partial N}\right)_{V,U}dN\tag{1}\\&=\frac{1}{T}\mathrm{d}U+\frac{1}{T}P\mathrm{d}V-\frac{1}{T}\mu\mathrm{d}N\tag{2}\end{align*} where $U$ is the Internal Energy of the system; $S$ is the Entropy of the system; $N$ is the Number of Particles in the system; $V$ is the Volume of the system; $P$ is the systems' Pressure; $T$ is the absolute (thermodynamic) temperature of the system and $\mu$ is the Chemical Potential of the system. I know that the coefficients of $\mathrm{d}U$,$\,\mathrm{d}V$ and $\mathrm{d}N$ must match for equations $(1)$ and $(2)$ ie. $$\left(\frac{\partial S}{\partial U}\right)_{V,N}=\frac{1}{T}\tag{A}$$ $$\left(\frac{\partial S}{\partial V}\right)_{U,N}=\frac{1}{T}P\tag{B}$$ $$\left(\frac{\partial S}{\partial N}\right)_{V,U}=-\frac{1}{T}\mu\tag{C}$$ But I simply have no idea how to show $(\mathrm{A})$, $(\mathrm{B})$ and $(\mathrm{C})$. So this means I am stuck at the very beginning and hence cannot show my attempt at providing a solution (reason for question closure). For context I have added the pages of my text that shows the equivalence of equations $(1)$ and $(2)$: Could someone please help me show that \begin{align*}&\left(\frac{\partial S}{\partial U}\right)_{V,N}\mathrm{d}U+\left(\frac{\partial S}{\partial V}\right)_{U,N}\mathrm{d}V+\left(\frac{\partial S}{\partial N}\right)_{V,U}dN\\&=\frac{1}{T}\mathrm{d}U+\frac{1}{T}P\mathrm{d}V-\frac{1}{T}\mu\mathrm{d}N\end{align*} Any hints or tips are greatly appreciated. Thanks. Answer: $$\left(\frac{\partial S}{\partial U}\right)_{V,N}=\frac{1}{T}\tag{A}$$ Is defined as an expression for temperature and is not derived. Once they teach you entropy, they use it to tighten the definition of temperature. 
$$\left(\frac{\partial S}{\partial V}\right)_{U,N}=\frac{1}{T}P\tag{B}$$ is also introduced by reasoned argument, as the imposed definition of pressure (once you get the $P$ on its own), rather than by any derivation, and I am inclined to believe this is because it would involve a Legendre transformation, which would take too long to explain, as well as being slightly off-topic. $$\left(\frac{\partial S}{\partial N}\right)_{V,U}=-\frac{1}{T}\mu\tag{C}$$ The final expression involves extending the thermodynamic equation to include "chemical work", and you get it from $$\mathrm{d}U= T\mathrm{d}S -P\mathrm{d}V + \mu \mathrm{d}N$$ Now $U, S, V, N$ are all capable of change in the above equation. So imagine we hold the variables $U, V$ fixed, so that $$0 = T\mathrm{d}S + \mu \mathrm{d}N$$ which leads to $$ \mu = -T \left(\frac{\partial S}{\partial N}\right)_{V,U}$$ and hence to $$\left(\frac{\partial S}{\partial N}\right)_{V,U}=-\frac{1}{T}\mu\tag{C}$$
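As a numerical sanity check (an illustration, not part of the original argument): for the Sackur–Tetrode entropy of a monatomic ideal gas, where $U = \tfrac{3}{2}NkT$ and $PV = NkT$, finite differences reproduce relations (A) and (B). Here $k$ is set to 1 and the constant `c` (which absorbs the mass and Planck-constant factors) is set to 1 for simplicity.

```python
# Numerical sanity check of (A) and (B) on the Sackur-Tetrode entropy.
import math

k = 1.0  # Boltzmann's constant, in units where k = 1

def S(U, V, N, c=1.0):
    """Sackur-Tetrode entropy S(U, V, N) of a monatomic ideal gas."""
    return N * k * (math.log(c * (V / N) * (U / N) ** 1.5) + 2.5)

def partial(f, x, h=1e-6):
    """Central finite-difference derivative of f at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

U, V, N = 3.0, 2.0, 1.0
T = 2 * U / (3 * N * k)  # from U = (3/2) N k T
P = N * k * T / V        # ideal gas law

dS_dU = partial(lambda u: S(u, V, N), U)  # should equal 1/T, relation (A)
dS_dV = partial(lambda v: S(U, v, N), V)  # should equal P/T, relation (B)
print(dS_dU, 1 / T)
print(dS_dV, P / T)
```

Both partial derivatives agree with $1/T$ and $P/T$ to the accuracy of the finite difference, consistent with the definitions above.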
{ "domain": "physics.stackexchange", "id": 34039, "tags": "homework-and-exercises, thermodynamics, entropy, chemical-potential" }
Difference between VC-130 and VC-131 steel
Question: I am a draftsman. I design mechanical parts and machinery. Some parts work like knives or sleeves, so to reduce friction we specify the ideal material to manufacture these parts. The steels VC-130 and VC-131 are among the options. What is the usage of steel grades VC-130 and VC-131? What is the difference in applications between them, for tool and gear projects? Answer: VC-130 and VC-131 are names from the Brazilian standard. A comparison can be found at Paulo Sergio's website. Table 4 suggests that VC-130 is equivalent to AISI-D3 and VC-131 is equivalent to AISI-D6. However, these stainless steels are low sulfur (see the Villares VC-131 page). I think the "Tool & Die Steels" site has the composition wrong. The main difference between the two steels is the tungsten content. Tungsten tends to reduce pitting corrosion (see the Outokumpu page).
{ "domain": "engineering.stackexchange", "id": 669, "tags": "mechanical-engineering, steel" }
Is potassium sulfate a good electrolyte for the production of oxy-hydrogen?
Question: I am currently conducting an electrolysis experiment using a 30 amp power supply with a voltage ranging from 10 to 16 volts. I am thinking of using potassium sulfate (K2SO4) dissolved in water as an electrolyte. I am using graphite rods as the electrodes. My primary goal is to produce hydrogen and oxygen gases through the electrolysis process. However, before proceeding further, I would like to clarify a few points: -Is potassium sulfate a suitable pH-neutral electrolyte for my intended purpose of producing only hydrogen and oxygen gases? -Will the electrolysis of potassium sulfate generate any harmful byproducts that I should be aware of? -Are there any specific considerations or precautions I should keep in mind while using graphite rods as the electrodes with potassium sulfate electrolyte? I appreciate any insights, suggestions, or relevant information that can help me optimize my experiment and ensure the safe and efficient production of hydrogen and oxygen gases. Thank you! Answer: Potassium sulfate is a good electrolyte for electrolytic purposes. The only risk is the production of persulfates as a secondary product at the anode if the current is too high. The intensity of the current depends on the surface of the anode. Apart from this, nothing disagreeable will occur. Graphite electrodes can do the job of course, but you should know that they will progressively be corroded by the electrolysis, and graphite powder will slowly be produced and fall down through the solution. The only parameter to check is the temperature. With a 30 A power supply, the temperature of the solution will quickly increase; you should not let it boil.
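For a sense of scale, a hedged back-of-the-envelope estimate (assuming 100% current efficiency, which a real cell will not quite reach, e.g. if some persulfate forms) via Faraday's law:

```python
# Hedged estimate of gas production at 30 A, assuming 100% current efficiency.
F = 96485.0   # C/mol, Faraday constant
Vm = 24.5     # L/mol, molar gas volume near 25 C and 1 atm

I = 30.0      # current in amperes
t = 3600.0    # one hour, in seconds

mol_e = I * t / F    # moles of electrons passed in one hour
mol_H2 = mol_e / 2   # cathode: 2 H2O + 2 e- -> H2 + 2 OH-
mol_O2 = mol_e / 4   # anode:   2 H2O -> O2 + 4 H+ + 4 e-

print(f"H2: {mol_H2:.3f} mol ~ {mol_H2 * Vm:.1f} L per hour")
print(f"O2: {mol_O2:.3f} mol ~ {mol_O2 * Vm:.1f} L per hour")
```

Roughly half a mole of hydrogen (and half as much oxygen) per hour, which also hints at the heat the cell must dissipate.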
{ "domain": "chemistry.stackexchange", "id": 17521, "tags": "electrochemistry, electrolysis, hydrogen, oxygen" }
Keeping track of people's relationships using a people relation table
Question: I am trying to track relationships among people. I came up with my own solution, but I'm wondering if there might be another way or better way of doing this. To keep it simplified, I'll post just the bare bones. Let's say I have created the tables: person, person_relation, and people_relation for a MySQL database using the following code: DROP TABLE IF EXISTS person; DROP TABLE IF EXISTS person_relation; DROP TABLE IF EXISTS people_relation; CREATE TABLE `person` ( `id` int(10) NOT NULL auto_increment primary key, `name` varchar(32) NOT NULL ) ENGINE=MyISAM DEFAULT CHARSET=utf8; CREATE TABLE `people_relation` ( `id` int(2) NOT NULL auto_increment primary key, `relation` varchar(32) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; CREATE TABLE `person_relation` ( `id` int(10) NOT NULL auto_increment primary key, `parent_id` int(10) NOT NULL, `child_id` int(10) NOT NULL, `relation_id` int(10) NOT NULL, FOREIGN KEY (`parent_id`) REFERENCES person(`id`), FOREIGN KEY (`child_id`) REFERENCES person(`id`), FOREIGN KEY (`relation_id`) REFERENCES people_relation(`id`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; INSERT INTO `person` (`id`, `name`) VALUES (1, 'John Jr'), (2, 'Jane'), (3, 'Mark'), (4, 'John'), (5, 'Betty'); INSERT INTO `people_relation` (`id`, `relation`) VALUES (1, 'HUSBAND'), (2, 'WIFE'), (3, 'FATHER'), (4, 'MOTHER'), (5, 'SON'), (6, 'DAUGHTER'), (7, 'GRANDFATHER'), (8, 'GRANDMOTHER'), (9, 'GRANDSON'), (10, 'GRANDDAUGHTER'), (11, 'GREAT GRANDFATHER'), (12, 'GREAT GRANDMOTHER'), (13, 'GREAT GRANDSON'), (14, 'GREAT GRANDDAUGHTER'), (15, 'GODFATHER'), (16, 'GODMOTHER'), (17, 'GODSON'), (18, 'GODDAUGHTER'), (19, 'BROTHER'), (20, 'SISTER'), (21, 'MOTHER-IN-LAW'), (22, 'FATHER-IN-LAW'), (23, 'DAUGHTER-IN-LAW'), (24, 'SON-IN-LAW'); INSERT INTO `person_relation` (`id`, `parent_id`, `child_id`, `relation_id`) VALUES (1, 1, 2, 1), (2, 1, 3, 3), (3, 1, 4, 5), (4, 1, 5, 5), (5, 2, 1, 2), (6, 2, 3, 4), (7, 2, 4, 23), (8, 2, 5, 23), (9, 3, 1, 5), (10, 3, 2, 5), (11, 3, 4, 9), 
(12, 3, 5, 9), (13, 4, 1, 3), (14, 4, 2, 22), (15, 4, 3, 7), (16, 4, 5, 1), (17, 5, 1, 4), (18, 5, 2, 21), (19, 5, 3, 8), (20, 5, 4, 2); So, in my example, I am capturing 5 people, all family related, where I make connections to each of them with a unique relation: spoken plainly, a John Jr. has listed: a wife, a son, a mother, and a father; and Jane has a husband, a son, a father-in-law, and a mother-in-law, etc. The records with 5 people end up being 20 records in the person_relation table. If I add, let's say a new daughter for John Jr. and Jane. The table will then have presumably 30 records to account all the relations. Does this sound about right? This problem seems like it could explode to be a very large table. Do you know maybe a better way to do this? Or do you think I am on the right track? Answer: It seems fine for me for generic cases. Later you might need some denormalization or caching if it's too slow but it really depends on the usage. (If you have any special requirement you should share it, edit the question, please.) Four things to consider: varchar(32) for name could be too small. Consider unique indexes for the person_relation(parent_id, child_id, relation_id) triplet and the people_relation.relation attribute. 
For example: Currently, you can insert the same relation twice: mysql> select * FROM person_relation; Empty set (0.00 sec) mysql> INSERT INTO `person_relation` (`parent_id`, `child_id`, `relation_id`) VALUES (1, 2, 1); Query OK, 1 row affected (0.00 sec) mysql> INSERT INTO `person_relation` (`parent_id`, `child_id`, `relation_id`) VALUES (1, 2, 1); Query OK, 1 row affected (0.00 sec) mysql> select * FROM person_relation; +----+-----------+----------+-------------+ | id | parent_id | child_id | relation_id | +----+-----------+----------+-------------+ | 23 | 1 | 2 | 1 | | 24 | 1 | 2 | 1 | +----+-----------+----------+-------------+ 2 rows in set (0.00 sec) A unique index could prevent that: CREATE UNIQUE INDEX person_relation_uniq_idx ON person_relation (parent_id, child_id, relation_id); For example: mysql> DELETE FROM person_relation; Query OK, 2 rows affected (0.00 sec) mysql> CREATE UNIQUE INDEX person_relation_uniq_idx -> ON person_relation -> (parent_id, child_id, relation_id); Query OK, 0 rows affected (0.31 sec) Records: 0 Duplicates: 0 Warnings: 0 mysql> INSERT INTO `person_relation` (`parent_id`, `child_id`, `relation_id`) VALUES (1, 2, 1); Query OK, 1 row affected (0.00 sec) mysql> INSERT INTO `person_relation` (`parent_id`, `child_id`, `relation_id`) VALUES (1, 2, 1); ERROR 1062 (23000): Duplicate entry '1-2-1' for key 'person_relation_uniq_idx' mysql> SELECT * FROM person_relation; +----+-----------+----------+-------------+ | id | parent_id | child_id | relation_id | +----+-----------+----------+-------------+ | 25 | 1 | 2 | 1 | +----+-----------+----------+-------------+ 1 row in set (0.00 sec) people_relation.relation should probably be declared NOT NULL. people_relation and person_relation are easy to mix up; these names are too similar to each other. I'd rename people_relation to relation_type.
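The duplicate-row pitfall and the unique-index fix are not MySQL-specific; a self-contained reproduction (a sketch using Python's sqlite3 module rather than MySQL, so the DDL differs slightly):

```python
# Reproduce the duplicate-relation problem and the unique-index fix in sqlite3.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person_relation ("
            "  id INTEGER PRIMARY KEY AUTOINCREMENT,"
            "  parent_id INTEGER NOT NULL,"
            "  child_id INTEGER NOT NULL,"
            "  relation_id INTEGER NOT NULL)")
con.execute("CREATE UNIQUE INDEX person_relation_uniq_idx "
            "ON person_relation (parent_id, child_id, relation_id)")

con.execute("INSERT INTO person_relation (parent_id, child_id, relation_id) "
            "VALUES (1, 2, 1)")
try:
    # Second identical insert must now be rejected by the unique index.
    con.execute("INSERT INTO person_relation (parent_id, child_id, relation_id) "
                "VALUES (1, 2, 1)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

rows = con.execute("SELECT COUNT(*) FROM person_relation").fetchone()[0]
print(duplicate_rejected, rows)  # True 1
```

With the index in place, only one copy of each (parent, child, relation) triplet can exist.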
{ "domain": "codereview.stackexchange", "id": 20387, "tags": "sql, mysql" }
Approximating all independent sets of size k in a graph
Question: Given an undirected graph, I need an algorithm that outputs all the independent sets of size >= k (constant) in the graph. I know the problem is NPC, and I do not want to use the exponential brute-force solution. What I am looking for is to hear about the best solutions currently known to this problem. If someone can refer me to some known polynomial approximation algorithms that achieve good results, that would be very helpful. Answer: In general, the maximum independent set problem cannot be approximated to within a constant factor in polynomial time unless $P = NP$. You can do better, however, if you restrict yourself to special graph classes. For example, in interval graphs, the maximum independent set problem can be solved in polynomial time. As another example, in planar graphs, the maximum independent set problem can be approximated to within any ratio $r < 1$ in polynomial time. There is a lot more information provided here: https://cstheory.stackexchange.com/questions/2503/maximal-classes-for-which-largest-independent-set-can-be-found-in-polynomial-tim http://www.graphclasses.org/classes/problem_Independent_set.html If you're interested in actual implementations, the Wikipedia page has a few references: https://en.wikipedia.org/wiki/Independent_set_(graph_theory)#Software_for_searching_maximum_independent_set
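To make the interval-graph remark concrete (a sketch, not from the original answer): a maximum independent set of an interval graph is exactly a maximum set of pairwise disjoint intervals, which the classic earliest-right-endpoint greedy finds in $O(n \log n)$ time.

```python
def max_independent_intervals(intervals):
    """Earliest-right-endpoint greedy: a maximum independent set in an
    interval graph is a maximum set of pairwise disjoint intervals."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:  # does not overlap anything already chosen
            chosen.append((start, end))
            last_end = end
    return chosen

ivs = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
best = max_independent_intervals(ivs)
print(best)  # [(1, 4), (5, 7), (8, 11)]
```

Here touching endpoints are treated as non-overlapping; flip the `>=` to `>` for the other convention.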
{ "domain": "cs.stackexchange", "id": 6277, "tags": "complexity-theory, graphs" }
Does this Hamiltonian have correct dimensions?
Question: In a homework problem, I was given a Hamiltonian for the interaction of two spin $1$ particles: $$H = a\vec{S}_1\cdot\vec{S}_2 + b\bigl(\vec{S}_1\cdot\vec{S}_2\bigr)^2$$ where $\vec{S}_i$ are both spin-1 operators, and $a,b\in\mathbb{R}$. I found that the eigenvalues of $\vec{S}_1\cdot\vec{S}_2$ are proportional to $\hbar^2$, so the energies that this Hamiltonian yields are a linear combination of $\hbar^2$ and $\hbar^4$, which cannot be true, because the dimensions are not compatible. If they wrote $\vec{\sigma}_i$ or $\frac{\vec{S}_i}{\hbar}$ instead of $\vec{S}_i$, things would make more sense, but that's not the case. Is there really a problem, or am I missing something? Answer: Yes, there is a problem. Based on the information you've given, the terms have incompatible dimensions, $\hbar^2$ and $\hbar^4$ respectively, as you concluded. Normally, I would expect $a$ and $b$ to have appropriate units to make everything work out, but if the problem really says $a,b\in\mathbb{R}$ as you said, then they are unitless and that makes the expression inconsistent. I suppose it's possible that they are using $\vec{S}_i$ to mean $\vec{\sigma}_i$ or $\vec{\sigma}_i/2$ or some such thing, but that would be unusual, and if so it should be stated somewhere in the context in which the problem was given.
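A small illustration of the question's premise (not from the original problem): using $\vec S_1\cdot\vec S_2 = \tfrac12\bigl(S_{\rm tot}^2 - S_1^2 - S_2^2\bigr)$ for two spin-1 particles, the eigenvalues come out as $-2\hbar^2$, $-\hbar^2$ and $+\hbar^2$, i.e. proportional to $\hbar^2$ exactly as stated, so the first term of $H$ indeed carries units of $\hbar^2$ and the second of $\hbar^4$.

```python
# Eigenvalues of S1.S2 for two spin-1 particles, via
# S1.S2 = (S_tot^2 - S1^2 - S2^2) / 2, in units of hbar^2.
hbar = 1.054571817e-34  # J*s

s1 = s2 = 1
eigs = {}
for s in range(abs(s1 - s2), s1 + s2 + 1):  # total spin s = 0, 1, 2
    # Eigenvalue of S1.S2 in units of hbar^2:
    val = 0.5 * (s * (s + 1) - s1 * (s1 + 1) - s2 * (s2 + 1))
    eigs[s] = val * hbar**2                 # dimensionful value

for s, v in sorted(eigs.items()):
    print(s, v / hbar**2)  # -2.0, -1.0, 1.0 -- all proportional to hbar^2
```
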
{ "domain": "physics.stackexchange", "id": 36323, "tags": "homework-and-exercises, quantum-spin, units, hamiltonian, dimensional-analysis" }
Can we see sound with our eyes?
Question: Is there a type of sound within our visual spectrum that we can see with our eyes? Answer: What we perceive as "sound" are (mechanical) oscillations of molecules from the source to the ear. This is for example why you cannot hear anything in a vacuum, because there is no matter to oscillate. Light on the other hand is an electromagnetic wave. Therefore, there cannot be sound in our visual spectrum. It's in the wrong category. However, you might instead ask, "are there sound waves that you can perceive with your eye?" In principle: if the sound pressure is high enough, you should be able to "feel" the pressure oscillations in the eye (and anywhere else in your body), so your brain would notice the sound even if you couldn't hear it, but still you can't see it. If instead you ask yet another question, namely whether you can hear a sound and also believe that you have seen some light alongside, so that it seems that you have "seen" the sound, then you are no longer in the realm of pure physics: this is a known information processing error in many brains, where sometimes acoustic signals are interpreted by the visual cortex. This is also known (especially in its more extreme forms) as synesthesia, but clearly belongs to biology and neuroscience.
{ "domain": "physics.stackexchange", "id": 27815, "tags": "acoustics, biophysics, vision, perception" }
How to compare viscosity of alcohol to water?
Question: Are there any simple at-home experiments to test the relative viscosity of alcohol to water? If they are the same or practically the same, are there any household liquids that are less viscous than water? If so, how can I demonstrate this to first-graders? Answer: A common experiment would be to have two concentric cylinders. The outer one can rotate almost freely (with some slight and constant retardation/braking applied), while the inner one is driven by a motor, like so. Between them is a layer of the liquid that you wish to determine the viscosity of. The speed of the outer cylinder is a measure for the viscosity. However, since you want to explain this to first-graders, you might wanna start with something more obvious than the difference in viscosity between ethanol and water. How about the difference in viscosity between water and honey? As a demonstration experiment, you could use a funnel in which you pour an equal amount of either liquid, and then measure the time it takes until it has passed through the funnel. This should be more tangible and easier to understand. You could also prepare beakers with water, honey and maybe peanut butter, and have your students run a spatula through all of them. That way they can experience the difference in force that is necessary to move the spatula firsthand.
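For a rough feel of the numbers (illustrative values only, and assuming laminar flow): in Hagen–Poiseuille flow the volumetric rate is $Q = \pi\,\Delta P\,r^4/(8\mu L)$, so the time to drain a fixed volume through a narrow outlet scales linearly with viscosity $\mu$. Honey is then thousands of times slower than water, while ethanol is only about 20% slower, which is why the water/honey funnel race is the better classroom demo.

```python
# Illustrative numbers only: rough room-temperature viscosities in mPa*s,
# and drain time proportional to viscosity (laminar-flow assumption).
viscosities = {"water": 1.0, "ethanol": 1.2, "honey": 5000.0}

t_water = 10.0  # assumed reference: water drains through the funnel in ~10 s
drain_times = {liquid: t_water * mu / viscosities["water"]
               for liquid, mu in viscosities.items()}

for liquid, t in drain_times.items():
    print(f"{liquid}: ~{t:.0f} s")
```
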
{ "domain": "chemistry.stackexchange", "id": 312, "tags": "home-experiment" }
Compressing a vector into a specific range
Question: I want to solve the following task --- I am given a vector of integers. I want to 'compress' this list, where the elements will be replaced with numbers 0 through n - 1, where n is the number of unique elements in the vector, such that the relative ordering among the elements is preserved, i.e., if previously vector[index1] < vector[index2], then that is still true after the compression. In other words, each element is replaced with its ranking among the source vector. For example, given the vector {1, 3, 10, 6, 3, 12}, the elements will be replaced with 0 through 4 since there are 5 unique values. To preserve ordering, it will be transformed into {0, 1, 3, 2, 1, 4}. Right now, to complete this, I use the following algorithm: #include <iostream> #include <map> #include <set> #include <vector> using namespace std; int compress_vector(vector<int>& vec) { // function compresses the vector passed in by reference // and returns the total number of unique elements map<int, int> m; set<int> s; for (auto i : vec) s.insert(i); int counter = 0; for (auto i : s) { m[i] = counter; counter++; } for (auto& i : vec) i = m[i]; return s.size(); } int main() { vector<int> vec = { 1, 3, 10, 6, 3, 12 }; int size = compress_vector(vec); // Should output "0 1 3 2 1 4", then a newline, then a "5" for (auto i : vec) cout << i << " "; cout << endl; cout << size; return 0; } However, I feel this function is quite messy --- it uses maps, sets, and counters. While this is functional, is there a faster or cleaner way to do this? Thanks Answer: Algorithm improvements I really don’t understand what this algorithm is supposed to be doing. Your description of it is incomplete and vague (for example, you say vector[index1] < vector[index2]… but what is index1 and index2?), and the single example is not illuminating. It looks like you’re just trying to get the sorted position of each element in the vector. 
So it’s very possible that there is a much better algorithm that can solve this; there’s no way for me to know when what you’re trying to do makes no sense to me. However, I can look at what your existing implementation is doing and at least improve on that a bit. You use a set to get all the unique elements in the vector, and then you use a map to get… I dunno, something something count (the sorted index?). I can’t help you with the second part. But I can help you with the first part, because you don’t need the set. You can use the map to get the unique values, like so: auto compress_vector(std::vector<int>& vec) { auto m = std::map<int, int>{}; // You don't need the set. You can use the map's keys to get the unique // values. for (auto i : vec) m[i] = 0; // And then you can recover the "set" of unique values by just iterating // on the map's keys. auto counter = 0; for (auto [i, _] : m) m[i] = counter++; // The above loop may be optimized by avoiding the duplicate lookup: // for (auto& p : m) // p.second = counter++; for (auto& i : vec) i = m[i]; return m.size(); } You could also go the other way, and keep the set but ditch the map: auto compress_vector(std::vector<int>& vec) { auto s = std::set<int>{}; for (auto i : vec) s.insert(i); // The values of the map are just the indices of the set. for (auto& i : vec) i = std::distance(s.begin(), s.find(i)); return s.size(); } Depending on a lot of factors, it might be more efficient to ditch the set and use a sorted vector instead: auto compress_vector(std::vector<int>& vec) { // Take care of the degenerate case of an empty input vector first. // // Not strictly necessary, but will save a lot of work. 
if (vec.empty()) return std::size_t{0}; // You could also do this: // if (vec.size() == 1) // { // vec[0] = 0; // return std::size_t{1}; // } // A second container, either a map, set, or second vector, is probably // unavoidable, because we need to keep track of the original values while // also changing the vector's contents, in order to know the sorted // indices. // // Note that this is the lazy way of building the sorted vector. If there // are a lot of duplicate values, it *might* be faster to: // 1) reserve vec.size() // 2) for each element in vec, do a lower_bound() search, to find if // it's already in sorted, and if not, then you have the insert // position auto sorted = vec; std::sort(sorted.begin(), sorted.end()); sorted.erase(std::unique(sorted.begin(), sorted.end()), sorted.end()); // Transform each element in the input vector into the index of the // element in the sorted vector. std::transform(vec.begin(), vec.end(), vec.begin(), [&sorted](auto i) { // Instead of std::find(), you could also use a binary search, // like std::lower_bound(). return static_cast<int>( std::distance( sorted.begin(), std::find(sorted.begin(), sorted.end(), i) ) ); } ); return sorted.size(); } Code review using namespace std; This is always a bad idea. You can probably get away with it in simple, toy programs, but you should never do it in real code. int compress_vector(vector<int>& vec) “Out” parameters (function parameters taken by non-const reference, and then modified with the “return” value) are generally not a great idea. They usually make functions harder to use, because they put the onus on the user to set up space for the return. What if I want the original vector and the “compressed” vector? Now I have to deal with the pain of setting up the result manually, rather than just writing auto result = compress_vector(input);. And if the vector I want to “compress” is already const (which is often the case), it’s on me to make a copy again. 
I know there is an argument that out parameters can be more efficient, but that doesn’t really apply here, because you’re creating whole maps, sets, and/or copies of the vector in the function anyway. An even better design would be to take an output iterator argument. for (auto i : vec) s.insert(i); Don’t do this. Saving a single line in the function just isn’t worth the risk of missing the hard-to-spot loop body and introducing bugs. If your function is so long that you actually need to save a line or two in order to fit it on screen, your function is too long in any case, and should be broken up. for (auto i : vec) s.insert(i); int counter = 0; for (auto i : s) { m[i] = counter; counter++; } for (auto& i : vec) i = m[i]; Space out the code. There are THREE loops here, jammed all together. That is three entirely separate logical sections of the function. Each section should be separated from the others by a blank line, to make it clear where the function’s “paragraphs” are. (And, of course, both the first and last loops should not be single lines.) You should also consider using algorithms instead of naked loops, for two reasons. First, they make your code clearer: a naked loop could be doing ANYTHING… but an algorithm spells out exactly what is happening. Also, algorithms are much easier to optimize. So the big glob of code above could be: std::for_each(vec.begin(), vec.end(), [&s](auto i) { s.insert(i); }); auto counter = 0; std::for_each(s.begin(), s.end(), [&m, &counter](auto i) { m[i] = counter++; }); std::for_each(vec.begin(), vec.end(), [&m](auto& i) { i = m[i]; }); Of course, all three algorithms are for_each() here, because I don’t really understand what the loops are supposed to be doing. for_each() is what you use when nothing else makes more sense. You’ll note that in the modified algorithm I wrote above, I used more specific algorithms, like transform(), unique(), and find() because I understood what was going on. return s.size(); You have a bug here. 
s.size() gives an unsigned type, which is problematic enough, but the real issue is that the type may be (and often is) larger than an int. But you are forcing it to be crammed into an int, which may cause truncation, or other weirdness. If you’re absolutely sure you want compress_vector() to return int, then you should at least assert that the size of the vector is smaller than the maximum value of int. Or, on the other hand, maybe you don’t really want compress_vector() to return int? I don’t understand what you want from this function, so I can’t guess which is the right answer. In main(): for (auto i : vec) cout << i << " "; Once again, this should not be on a single line. Also, you probably don’t want a whole string constant for just a space; you probably mean ' ', not " ". cout << endl; std::endl really makes no sense here. If you want a newline, use a newline: std::cout << '\n';. return 0; You don’t need this in main(). Summary Because your description of the problem is so vague and incomplete, and your examples of the intended usage are so limited and unrevealing, it’s impossible to give good recommendations. There is just so much about this function that is unexplained, and so many unanswered questions: Does it really need to modify the input vector; could it not simply return a new vector instead? Why does it return the number of unique elements; is that really important information? Why does it return that value as an int, rather than std::size_t or std::vector::size_type? And so on and so forth. The best I can do is offer base-level suggestions going by the literal operations in the given code; in other words, I can only give suggestions for tuning the existing algorithm… I can’t suggest better algorithms, if there are any. That said, there are certainly some corners you could cut, and some inefficiencies you could remove. You don’t need a set and a map… that’s just overkill in just about any situation. 
You might even get away with a sorted vector, which should be way more efficient than either a map or a set… especially if you do a binary search.
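As a side-by-side comparison (a sketch in Python rather than C++, and not part of the original review), the whole "rank each value among the sorted uniques" idea fits in a few lines:

```python
# Coordinate compression: replace each element by its rank among the unique
# values, preserving relative order; returns the number of unique values.
def compress(vec):
    rank = {v: i for i, v in enumerate(sorted(set(vec)))}
    vec[:] = [rank[v] for v in vec]  # rewrite in place, like the C++ version
    return len(rank)

vec = [1, 3, 10, 6, 3, 12]
n = compress(vec)
print(vec, n)  # [0, 1, 3, 2, 1, 4] 5
```

This matches the question's expected output for the sample input.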
{ "domain": "codereview.stackexchange", "id": 41009, "tags": "c++, algorithm, vectors" }
Find minimum number of points which intersect overlapping arcs
Question: Say I have a circle of a fixed radius, with overlapping arc intervals along its edge. I want to return a minimum set of Points which intersects all arcs in $n^2$ time. I'm having some trouble proving that my algorithm works. I'm getting caught up with arc intervals that span $0,2\pi$. For example I have the intervals (in degrees): [45,90],[315,50],[300,25],[310,350] - when I do a sweep I find that the first degree with the least intersects (2) - is 25 - committing to this yields needing 3 points {25,45,310}, whereas the optimal solution is 2 points {~45,~350}. I'm thinking I can maybe just exclude that angle of the least intersects - but I'm not sure how to prove I won't run into a similar problem with a different set of arcs? So far I have: # place Intervals in a minHeap based on "starting" radian/degree O(n) # i = sweep intervals to find first radian/degree with least intersects O(n) p = float(+inf) P = {} while minHeap is not None: I = minHeap.deleteMin() if I.start>I.end: I.end += 360 # trying to normalize end if if I.start < i: minHeap.insert(new interval with I.start + 360 and I.end +360) #too early to evaluate else: if I.last <= p: p = I.last #looking for the counterclockwise radian/degree which covers most arcs end if if I.first > p: #found an interval out of reach - updating trackers p = p % 360 #normalizing P[p] = true #will eventually return P p = I.last #updating to the new most counterclockwise rad/degree end if end if end while return P Answer: A little abstraction would help both reasoning about and solving the problem efficiently, I think. For every point $x$ on the circle, there is some set $S_x$ of pairwise overlapping arcs that intersect this point. Now, if there is some other point $y$ such that $S_x \subseteq S_y$, you can always use $y$ instead of $x$ in a solution to your problem. We can therefore focus on a set of representative points $x_1,\ldots,x_m$ such that each set $S_{x_1},\ldots,S_{x_m}$ is maximal. 
There are at most $n$ such points (the proof of which I will leave as an exercise) and you should be well able to find this set in $O(n^2)$ time. Now, if the arcs don't wrap all around the circle, i.e. there is some point on the circle that doesn't touch any arcs, then you can look at the arcs as intervals on a straight line. Given the right infrastructure (you should sort the arcs both by starting and ending angle, again easily doable in $O(n^2)$ time) you can solve this problem greedily in $O(n)$ time (cf. this post). Finally, to solve the problem, you do as follows: For each $1\leq i\leq m$, you ask the following question: "If I use $x_i$, how many other points must I use in order to cover the arcs that are not in $S_{x_i}$?" These arcs naturally do not wrap around the circle. You can therefore calculate this in linear time, and return the minimum set of points over all tries. Since $m\leq n$, this will only take $n^2$ time in total.
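The greedy step for the non-wrapping subproblem can be sketched as follows (an illustration, not from the original answer): sort by right endpoint, stab at the earliest right endpoint, and skip everything that point already covers. The question's example arcs are unrolled past 360° here, which is valid because the circle has been cut in the arc-free gap between 90° and 300°.

```python
def min_stabbing_points(intervals):
    """Greedy for the non-wrapping case: sort by right endpoint, place a
    point at the earliest right endpoint, skip every interval it covers."""
    points = []
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if not points or points[-1] < start:
            points.append(end)
    return points

# The question's arcs [45,90],[315,50],[300,25],[310,350], unrolled past 360:
arcs = [(45, 90), (315, 410), (300, 385), (310, 350)]
pts = min_stabbing_points(arcs)
print(pts)  # [90, 350]: two points, matching the optimal solution
```

Running this inner routine once per representative point, as the answer describes, handles the wrap-around case.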
{ "domain": "cs.stackexchange", "id": 18012, "tags": "algorithm-analysis, sorting, greedy-algorithms, intervals, selection-problem" }
Deciding length of quadcopter arms
Question: How does a quadcopter's arm length affect stability? In my view, I'll have better control of the copter with longer arms, at the cost of higher stresses in the arms, and it doesn't affect lift capabilities. Answer: For the most part, it will increase the gain of the controller. "doesn't affect lift capabilities" - Adding weight to something that flies always decreases lift capabilities. However, this influence is likely very small. So here's your quadrocopter with 1 DOF rotating around an axis: $$a\ddot r + b\dot r + c r$$ The general differential equation¹ for a mechanical system. $r$ is the angle of rotation and $a$, $b$ and $c$ are coefficients describing the system. (¹ This was a blatant lie, for the obvious lack of the other side of the equal sign.) You have probably concluded by now that the missing side is the load that you apply. This is usually in the form of rotors, producing thrust. For simplicity's sake let's assume that this can be modelled as a force $f$. This force is applied at a distance from the center of rotation and that's where the arm length $l$ comes into play: $$a\ddot r + b\dot r + c r = l f$$ Transforming into... $$as^2 R + bsR + cR = l F$$ getting the transfer function of the system that we are interested in: $$\frac{R}{F}= \frac{l}{as^2 + bs + c}$$ $\frac{R}{F}$ can be understood as "if I add this much $f$, how much $r$ do I get back?" $l$ is basically the gain. It is a constant factor; written a little differently, this becomes more obvious: $$\frac{R}{F}= l\cdot\frac{1}{as^2 + bs + c}$$ How much $r$ do I get back? Can be answered as "$l$ times something". With a bigger $l$ you get more bang for your buck (or more rotation for your force, respectively). Which means your motors don't have to go to 11 all the time. But what about stability? Stability can be determined from the roots of the denominator, called poles. The question essentially is how does $l$ influence $a$, $b$ and $c$ and how does that affect stability? 
a - moment of inertia: While it's unclear how the mass is distributed in the system, one can assume that, based on the general formula, $l$ has a quadratic influence on $a$, that is $a = a(l^2)$. That means increasing $l$ will increase $a$ a lot.
b - damping: This is hard to estimate. I guess most of the damping in the system comes from wind resistance. Increasing $l$ will only add a little surface to the copter, hence wind resistance will not increase much (if at all). I conclude that $l$ has little to no influence on $b$.
c - spring coefficient: there's certainly an influence, but you want to keep that as minimal as possible by design, because you want to make the arms as stiff as possible. Nobody likes wobbly structures.
Now where are the poles? $$s_{1/2} = -\frac{b}{2a} \pm\frac{\sqrt{b^2-4ac}}{2a}$$ The important part for stability is that $s_{1/2}$ has a negative real part. Increasing $a$ due to increasing $l$ certainly reduces the negativity of the term, but it will not change it to be positive. The conclusion of this very rough estimation is that the system will not become unstable when the arm length is increased. Of course, this is a very handwavy estimate without knowing any of the actual values. If you want more, go right ahead: quadrocopters are a popular topic not just among enthusiasts but also in academia, so you can find a lot of papers about them, giving more insight into more detailed models of the system. I think the rough estimate given in this answer is sufficient to explain the influence of the length of the arm.
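The pole argument can be checked numerically. A throwaway sketch with made-up coefficients (the values of $a$, $b$, $c$ below are pure assumptions; only the scaling $a \propto l^2$ is taken from the reasoning above):

```python
import cmath

def poles(a, b, c):
    """Roots of a*s^2 + b*s + c, the denominator of R/F."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

a, b, c = 1.0, 0.5, 2.0              # assumed baseline coefficients
for scale in (1.0, 2.0, 4.0):        # arm length x1, x2, x4 -> inertia x1, x4, x16
    s1, s2 = poles(a * scale ** 2, b, c)
    assert s1.real < 0 and s2.real < 0   # poles stay in the left half-plane
```

The real parts shrink toward zero as the inertia grows (the response gets slower and less damped), but they never cross into the right half-plane, matching the conclusion above.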
{ "domain": "robotics.stackexchange", "id": 774, "tags": "control, quadcopter, stability" }
Is a Phase-Locked Loop essential for BPSK signal reception?
Question: I'm a noob in DSP, studying BPSK communication between a speaker and a microphone using an acoustic signal. I read an article on this BPSK communication which says it needs a Phase-Locked Loop (PLL) so that both ends communicate precisely. But in my textbook, there is no mention of a PLL in BPSK demodulation. I want to communicate Tx/Rx within close range to get the channel taps corresponding to the multipath effects in my room. In this case, is a PLL essential to synchronize the signal at both ends? Or can I communicate well without a PLL and still get the channel taps? Answer: There are three different kinds of synchronization in a passband digital communications system:
Carrier synchronization: the receiver needs to know the exact frequency and phase of the carrier used by the transmitter.
Symbol synchronization: the receiver needs to know the optimum instants to sample the matched filter's output.
Frame synchronization: the receiver needs to know where each transmitted "character" (or word, or byte) starts.
So, if you're doing passband BPSK, you'll need to perform all three. Carrier synchronization is typically done with a PLL (or a Costas loop) and you can find the details in several answers on this website. Many textbooks ignore these subjects. A textbook that does a great job of teaching these three aspects of a receiver is "Telecommunications Breakdown" by Johnson and Sethares, an early version of which is available for free from the authors' website. Having said that, there is a way to avoid doing carrier synchronization, at the cost of energy efficiency, by upconverting the baseband BPSK signal using AM DSB-LC instead of the more common AM DSB-SC. So, you first design your baseband signal: $$s_{BB}(t) = \sum_k a_k p(t-kT_p),$$ where $a_k \in \lbrace -1, 1 \rbrace$, $T_p$ is the pulse interval, and $p(t)$ is the pulse shape. 
Then, the DSB-LC passband signal is $$s_{PB}(t) = (A_m + s_{BB}(t)) A_c \cos(2\pi f_c t),$$ where $A_m > -\min(s_{BB}(t))$ (so that $(A_m + s_{BB}(t))>0$) and $A_c$ is the carrier amplitude. Then, the baseband signal can easily be recovered by a simple envelope detector (identical to conventional analog modulation).
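A toy numerical sketch of this DSB-LC trick (all parameter values here - carrier frequency, sample rate, $A_m$ - are arbitrary choices, not from the answer): because the envelope $A_m + s_{BB}(t)$ never goes negative, a crude magnitude-peak detector recovers the bits with no knowledge of the carrier phase.

```python
import math

def dsb_lc(bits, A_m=2.0, A_c=1.0, fc=50.0, fs=1000.0, spb=100):
    """DSB-LC upconversion of rectangular-pulse BPSK; spb = samples per bit."""
    s = []
    for k, b in enumerate(bits):
        a = 1.0 if b else -1.0                      # BPSK symbol in {-1, +1}
        for n in range(spb):
            t = (k * spb + n) / fs
            s.append((A_m + a) * A_c * math.cos(2 * math.pi * fc * t))
    return s

def envelope_detect(s, spb=100, A_m=2.0):
    """Crude per-bit envelope detector: peak of |s| over each bit interval."""
    bits = []
    for k in range(len(s) // spb):
        env = max(abs(x) for x in s[k * spb:(k + 1) * spb])  # ~ A_m + a_k
        bits.append(env > A_m)                               # threshold at A_m
    return bits

tx = [True, False, True, True, False]
assert envelope_detect(dsb_lc(tx)) == tx
```

A real receiver would use a rectifier plus lowpass filter rather than a per-bit peak, but the point is the same: the carrier phase never enters the detection.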
{ "domain": "dsp.stackexchange", "id": 5942, "tags": "bpsk, acoustics, multipath, pll" }
Local inertial frame by linear transform?
Question: By the equivalence principle, one can find a local inertial frame at every point of spacetime. This is then usually used to introduce the general spacetime metric, as a back-transform of the Minkowski metric. The transformation from the Minkowski metric to a general metric depends only on the first derivatives of the coordinate transformation. This means that locally we can get an inertial frame by applying a linear coordinate transform. It is puzzling because in motivating examples we always get accelerating transforms, such as a falling elevator. However, it seems that the acceleration does not play a role in the final formula. What is happening? Is this because a linear transform is enough to diagonalize the metric, but the quadratic part (i.e., the acceleration) is needed to make the Christoffel symbols vanish? Answer: Yes, you were on the right track. If you want to recover the Minkowski metric at an event, then yes, a simple linear coordinate change is enough starting from a general metric. However, when you talk about an inertial frame and the equivalence principle, you need to do more than that. You need to kill the first spatial derivatives of the metric, or equivalently the Christoffel symbols. Physically, this would mean that you don't have inertial forces precisely at this event. In this case, a simple linear transformation won't do. Mathematically, you usually introduce normal coordinates and the exponential map. Btw, this is the best you can do, since second-order derivatives can be combined to construct gauge-invariant quantities like the curvature. Hope this helps.
{ "domain": "physics.stackexchange", "id": 96029, "tags": "general-relativity, reference-frames, inertial-frames, equivalence-principle" }
How close can planets form to one another?
Question: With the NASA announcement today regarding the discovery of a system containing seven earth-sized planets (3 within the habitable zone), I wondered about the seemingly crowded conditions. What principles guide the formation of planets and the distances between them? Are there laws governing this that are well established? How close have we observed two planets form? Answer: If you're thinking about how close planets can be, you should probably consider each planet's Hill sphere, the region in which it can retain satellites. Fang & Margot (2013) did an analysis of Kepler data and found that planets had mean values of $\Delta = 21.7$, where $\Delta$ is a parameter given for two adjacent planets by $$\Delta=\frac{a_2-a_1}{R_{H1,2}}$$ where the $a$s are the semi-major axes and $R_{H1,2}$ is the mutual Hill radius. One system the authors consider is Kepler-11, which has 6 planets, all with semi-major axes $\leq0.466\text{ AU}$ and with only one semi-major axis greater than $0.25\text{ AU}$. The smallest $\Delta$ there is approximately $5.7$, although all the other $\Delta$s are quite small. Kepler-36, with only two planets, still has a $\Delta$ of $4.7$. According to the Nature paper about TRAPPIST-1, all seven planets have semi-major axes within $\sim0.063\text{ AU}$. They have mean $\Delta$s of $10.5\pm1.9$ - not much different from the Kepler-11 planets, because they have smaller Hill spheres. They may be closer together, but they can be much closer together without having stability problems. Additionally, they are in a "near-resonant" configuration. How close planets can be depends strongly on their masses, then, which in turn determines their mutual Hill radii, which determines stability. All that said, the authors believe that the TRAPPIST-1 planets may have migrated in from further out, thus entering the resonances. 
Without more information, we can't know whether this is the case; if they did migrate, then this is not really an example of planets forming close to each other.
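For concreteness, here is a small sketch of the $\Delta$ parameter; the mutual Hill radius is taken as $R_H = [(m_1+m_2)/(3M_\ast)]^{1/3}\,(a_1+a_2)/2$, which is a commonly used definition (check Fang & Margot for the exact convention they use):

```python
def mutual_hill_delta(a1, a2, m1, m2, m_star):
    """a1 < a2: semi-major axes; planet masses in the same units as m_star."""
    r_hill = ((m1 + m2) / (3.0 * m_star)) ** (1.0 / 3.0) * (a1 + a2) / 2.0
    return (a2 - a1) / r_hill

# two Earth-mass planets, 0.05 AU apart, around a Sun-mass star
M_EARTH = 3.0e-6   # Earth mass in solar masses (approx.)
d = mutual_hill_delta(1.00, 1.05, M_EARTH, M_EARTH, 1.0)
```

This gives $\Delta \approx 3.9$, far below the Kepler mean of $21.7$ quoted above, so such a pair would sit near the stability limit for two planets.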
{ "domain": "astronomy.stackexchange", "id": 2130, "tags": "planet, exoplanet, planetary-formation" }
What's the number of data points in a Welch-based PSD?
Question: In the Welch periodogram Power Spectral Density estimate, we divide the N-long signal into K segments, each of length L with overlap step D, such that N = L + (K-1)*D. In this paper, just above eqn (7), the authors mention that the DFT step is to be done on each (windowed) segment (of length L) such that the length of the resulting DFT is L/2. Finally all DFTs are averaged and normalized. At the end, you get a PSD of a max. length L/2. Why is that? I would expect the length of the resulting PSD to be related to N, not to L... Answer: And that's absolutely correct. As you mentioned, the procedure can be described as follows: Take the long signal of length $N$ and slice it, with some overlap and a window, into segments of length $L$. For each of these segments calculate the DFT. Assuming no zero padding, you will get a spectrum with $L$ points - the same as the segment length. Because the signals are real, we just care about the first half of the spectrum (of the squared magnitude, to be correct), thus we take only $L/2$ points. Do this procedure for each segment and you will end up with lots of vectors (the number depends on the overlap and the signal length $N$) of length $L/2$. Now you need to average all the spectra. This is simply an average for each frequency bin across the number of segments, and the PSD is calculated. Let's assume you end up with 50 segments of length $L$; then you must average 50 spectra of length $L/2$. The longer your signal, the more segments will be averaged (a better estimate), but obviously their length does not change - you chose it in the beginning.
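A bare-bones sketch of that procedure (using a naive $O(L^2)$ DFT just to make the bookkeeping concrete; the signal, segment length and step below are illustrative):

```python
import cmath

# Welch-style averaging: slice, DFT each segment of length L, keep the
# first L/2 bins, then average bin-by-bin across segments.
def welch_psd(x, L, D):
    """x: real signal; L: segment length; D: step between segment starts."""
    segs = [x[i:i + L] for i in range(0, len(x) - L + 1, D)]
    half = L // 2
    acc = [0.0] * half
    for s in segs:
        for k in range(half):                       # only the first L/2 bins
            X = sum(s[n] * cmath.exp(-2j * cmath.pi * k * n / L)
                    for n in range(L))
            acc[k] += abs(X) ** 2 / L               # periodogram of one segment
    return [a / len(segs) for a in acc]             # average across segments

# the PSD length is L/2 regardless of the total signal length N:
x = [1.0, 0.0, -1.0, 0.0] * 8                       # N = 32, period-4 tone
assert len(welch_psd(x, L=8, D=4)) == 4
```

With L = 8 the period-4 test tone lands in bin 2, and making the signal longer only adds segments to the average; the output stays L/2 points.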
{ "domain": "dsp.stackexchange", "id": 7271, "tags": "power-spectral-density" }
Finite potential well with quantised energy
Question: In a finite potential well like the one in the figure, is the potential constant between $-L/2$ and $L/2$? Since the energy is quantised, if I'm in the second excited state, would the potential still be constant and equal to $0$, so that the energy is only kinetic? Answer: You have to be careful to distinguish between the potential $V(x)$, which determines the dynamics of the system, and the potential energy of the particle. The total energy of a particle in an energy eigenstate (the ground state or any of the excited states) is constant and well-defined. However, because the wavefunction of the particle is spread out through space, the "potential energy" of the particle is not well-defined. In a finite potential well, part of the wavefunction extends into the region where $V>0$. Therefore, you cannot conclude that the potential energy of the particle is zero. There is a non-zero probability of finding the particle in a region where $V(x)\neq0$. Therefore, if you did an experiment to measure the potential energy of the particle, sometimes you would measure zero, and sometimes you would measure $V_0$. You can calculate the expectation value of the potential energy operator (which is well-defined) using the formula $$\langle V\rangle=\int_{-\infty}^\infty V(x)|\psi(x)|^2\,dx.$$ You will find it is greater than zero.
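As a quick numerical illustration of that formula (the wavefunction here is a made-up normalized Gaussian that leaks past $\pm L/2$, not the actual eigenstate of the well):

```python
import math

def expectation_V(V0, L, sigma, xmax=10.0, n=20000):
    """Midpoint-rule estimate of the integral of V(x)|psi(x)|^2 dx,
    with V(x) = 0 inside |x| < L/2 and V(x) = V0 outside."""
    dx = 2 * xmax / n
    num = 0.0
    norm = 0.0
    for i in range(n):
        x = -xmax + (i + 0.5) * dx
        p = math.exp(-x * x / sigma ** 2)    # |psi|^2 up to normalization
        norm += p * dx
        if abs(x) > L / 2:
            num += V0 * p * dx               # only the tails pick up V0
    return num / norm

v = expectation_V(V0=5.0, L=2.0, sigma=1.0)
assert 0.0 < v < 5.0                         # strictly between 0 and V0
```

The result is simply $V_0$ times the probability of finding the particle outside the well: greater than zero whenever the tails are non-zero, as the answer states.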
{ "domain": "physics.stackexchange", "id": 68532, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, potential-energy" }
Problem of android_tutorial_camera
Question: Hello, I have a problem running android_tutorial_camera (http://www.ros.org/wiki/android_core). First of all, what is the meaning of "Make sure that your Android device is connected to the same network as your Linux machine (e.g., on the same wifi network)" on the ros core wiki? Does it mean that both the Linux machine and the Android device should have the same gateway, like Linux machine: 100.0.0.1 / Android device: 100.0.0.2? Anyway, for now, my configuration is like this: the Linux PC is using a wired network (125...) and the Android tablet is connected to a wireless network (153...). The problem is that I cannot see the camera image of the Android device on the Linux PC where ROS is running. I have tried to find a solution on ROS Answers and did the following: Checking the network connection between the Linux PC and the Android tablet with the ping command. $ export ROS_IP=XXX.XXX.XXX.XXX (Linux computer IP) - $ export ROS_HOSTNAME=XXX.XXX.XXX.XXX (Same as ROS_IP) However there is no change. I tried this example on a Galaxy Tab 7 inch (Android v2.2.3) and a Galaxy Nexus (v4.2). This is the message when I run roscore (XXX.XXX.XXX.XXX is the same as the Linux computer IP):
$ roscore
. . .
auto-starting new master
process[master]: started with pid [22361]
ROS_MASTER_URI=http://XXX.XXX.XXX.XXX:11311/
setting /run_id to ***********************************
process[rosout-1]: started with pid [22374]
started core service [/rosout]
When I check the rostopic, it is as follows:
$ rostopic list
/camera/camera_info
/camera/image/compressed
/rosout
/rosout_agg
What more can I do? Thank you. Originally posted by zieben on ROS Answers with karma: 118 on 2013-05-06 Post score: 0 Original comments Comment by Alexandr Buyval on 2013-05-06: Hi, what is rxgraph outputting? Is the node on Android connected with the nodes on the PC? Comment by zieben on 2013-05-07: Thank you for your comment. I can see the message in the Info of rxgraph like this: ERROR: Communication with node[http://XXX.XXX.XXX.XXX:39112/] failed! 
Answer: Thank you for your comment. I found one solution. If I use the Linux machine and the Android device on the same local network (e.g. Linux machine: 192.168.0.10, Android device: 192.168.0.20), this tutorial works perfectly. However, it still isn't working when I use them on different networks. So I want to change my question: how do I set up my Linux PC's network settings? For now, e.g.:
Linux PC physical IP: 142.153.234.12
ROS_IP = 142.153.234.12
ROS_HOSTNAME = 142.153.234.12
ROS_MASTER_URI = http://localhost:11311/
Android device physical IP: 128.123.24.123
and I entered http://142.153.234.12:11311/ in the camera_tutorial_app as the master URI. Is that right? Thank you. Originally posted by zieben with karma: 118 on 2013-05-08 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 14082, "tags": "android-core, rosjava, android" }
Making lists symmetrical by merging elements
Question: My code is based around the following problem: http://orac.amt.edu.au/cgi-bin/train/problem.pl?set=aio16int&problemid=903 It's about making a series of plots of land symmetrical by knocking down fences, but out of the context of this problem, the code takes a list, makes it symmetrical by adding elements together, and prints / writes the number of changes needed to achieve this. My main concern is speed: the code works fine but it is taking too long to solve cases with long lists (up to 100,000 elements, where each element is an integer up to 10,000). Any feedback welcomed!

infile = open("farmin.txt", "r")
outfile = open("farmout.txt", "w")
n = int(infile.readline())
plots = infile.readline().split()
infile.close()
plots = [int(i) for i in plots]
fencescut = 0
leftpos = 0
rightpos = 0

def mergeplots(direc, n1, n2):
    global plots
    if direc == "l":
        plots[n1] = plots[n1] + plots.pop(n2)
    else:
        plots[len(plots) - 1 - n1] = plots[len(plots) - 1 - n1] + plots.pop(len(plots) - 1 - n2)

while leftpos <= len(plots) / 2 and rightpos <= len(plots) / 2:
    if plots[len(plots) - 1 - rightpos] - plots[leftpos] > 0:  # left fence needs to be cut
        fencescut += 1
        mergeplots("l", leftpos, leftpos + 1)
    elif plots[len(plots) - 1 - rightpos] - plots[leftpos] < 0:  # right fence needs to be cut
        fencescut += 1
        mergeplots('r', rightpos, rightpos + 1)
    else:  # no fence needs to be cut
        leftpos += 1
        rightpos += 1

outfile.write(str(fencescut))
outfile.close()

Answer: Removing elements from the middle of the list is expensive: list.pop(k) takes $O(n)$ time in the worst case (every element after index k has to shift down), and therefore the entire program is quadratic. However, physically removing fences is not necessary. You only need to track the widths of the two current plots. Counting rightpos up from 0 is dubious. It is confusing to the reader, and complicates mergeplots unnecessarily. I recommend making it count down from len(plots) - 1. As a side note, since you don't need to actually merge the plots (see above), mergeplots itself is not needed.
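The suggestion above can be sketched as a linear-time pass that never calls pop; this is one possible rewrite, not the only one (it works on a scratch copy and folds the smaller outer plot inward, which amounts to tracking the two current widths):

```python
# O(n) fence-cut count: compare the outermost plots; when they differ,
# merge the smaller one into its neighbour (one fence cut) and retry.
def fences_cut(plots):
    a = list(plots)                 # scratch copy; original stays intact
    i, j, cuts = 0, len(a) - 1, 0
    while i < j:
        if a[i] == a[j]:            # outer plots already match
            i += 1
            j -= 1
        elif a[i] < a[j]:           # cut the fence to the right of plot i
            a[i + 1] += a[i]
            i += 1
            cuts += 1
        else:                       # cut the fence to the left of plot j
            a[j - 1] += a[j]
            j -= 1
            cuts += 1
    return cuts

assert fences_cut([1, 2, 3]) == 1      # merge 1+2 to match the 3
assert fences_cut([2, 2, 2, 2]) == 0   # already symmetrical
```

Each step either matches a pair or performs exactly one merge, so the loop is linear in the number of plots.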
{ "domain": "codereview.stackexchange", "id": 26784, "tags": "python, performance, array" }
Is there any energy loss if two blocks collide with a spring in between?
Question: If two blocks collide with a spring in between, at the moment when the spring reaches max compression is any energy lost? One can look at it as an inelastic collision, since the two blocks would be going at the same speed, and then assume energy loss. But does the spring make a difference? Answer: In physics, two bodies collide if the velocity (or momentum) of one body is changed due to the other; this does not mean that they need to be in contact. A perfectly elastic collision is a collision in which 100% of the kinetic energy is regained: the kinetic energy converted into potential energy during the collision is fully converted back. A perfectly inelastic collision means that none of that kinetic energy is regained (this does not mean that the whole kinetic energy is lost). The gain or loss of kinetic energy is characterised by the coefficient of restitution ($e$). The coefficient of restitution for a perfectly elastic collision is 1 and that for a perfectly inelastic collision is 0, but this does not mean that there is a direct relation between the coefficient of restitution and the gain in kinetic energy. You can see this in the figure below. Now for the case of a spring between the two objects, everything will be the same, except that the spring will deform in place of the objects themselves: in a perfectly elastic collision the objects regain their shape after deformation during the collision, while in a perfectly inelastic collision they do not regain their shape but stick to each other. For the case of the spring, in a perfectly inelastic collision the spring will not regain its original shape but will be locked at its maximum compression. You can see this post for reference. Hope it helps. :) Edit: The energy in a perfectly inelastic collision gets transferred to the surroundings of the system. 
In other words, when there is a perfectly inelastic collision (as the name suggests, it is between two inelastic bodies) the bodies cannot regain their shape because they are inelastic (for example: clay), and since they are inelastic they do not store any elastic potential energy. The same goes for the spring: if you are considering a perfectly inelastic collision, then the spring is inelastic, i.e. it cannot regain its shape (which would violate the definition of a spring), and the energy lost in the collision will be transferred to the surroundings in other forms.
{ "domain": "physics.stackexchange", "id": 83927, "tags": "newtonian-mechanics, momentum, energy-conservation, collision, spring" }
Symmetry transformations: a doubt about the relations that we assume true
Question: When we deal with symmetry transformations in quantum mechanics, we assume that if before the symmetry transformation we have $ \hat A | \phi_n \rangle = a_n|\phi_n \rangle,$ and after the symmetry transformation we have $ \hat A' | \phi_n' \rangle = a_n'|\phi_n' \rangle,$ then $a_n'=a_n$. I think the reason for this relation is that $\hat A$ and $\hat A'$ are equivalent observables (for example the energy in two different frames of reference). The problem is that, if $\hat A=\hat X$ where $\hat X$ is the position operator, then this relation seems wrong, because we would have $ \hat X | x \rangle = x|x \rangle$ and $ \hat X' | x' \rangle =x|x' \rangle$ both true, which would mean that the position eigenstate seen from two different frames of reference is seen at the same coordinates. How can this be true if the systems are, for example, translated relative to each other? Answer: This is not an assumption, it is a requirement for consistency. The symmetry transformation acts on operators and states, it does not act on numbers. So the equation $A\lvert \psi_n \rangle = a_n\lvert \psi_n\rangle$ simply becomes $A'\lvert \psi_n'\rangle = a_n\lvert \psi_n'\rangle$ after applying the transformation. This equation must be true for any linear transformation on the space of states, regardless of whether it is a symmetry or not. So when the transformation is a translation by $a$, it acts as $\hat{x}\mapsto \hat{x} - a$ on the position operator and $\lvert x\rangle \mapsto \lvert x + a\rangle$ on its eigenstates. The equation $\hat{x}\lvert x\rangle = x\lvert x\rangle$ becomes $(\hat{x}-a)\lvert x + a\rangle = x\lvert x + a\rangle$. There is nothing inconsistent about this - note that the transformed equation does not claim that $\lvert x + a\rangle$ would be a position eigenstate with eigenvalue $x$, but instead says that $\lvert x + a\rangle$ is an eigenstate of $\hat{x}-a$ with eigenvalue $x$.
{ "domain": "physics.stackexchange", "id": 66024, "tags": "quantum-mechanics, hilbert-space, operators, symmetry" }
How is a CCD able to differentiate between different colors?
Question: According to Wikipedia, Digital color cameras generally use a Bayer mask over the CCD. Each square of four pixels has one filtered red, one blue, and two green [...]. The result of this is that luminance information is collected at every pixel, but the color resolution is lower than the luminance resolution. So unlike in 3CCD, where light is split onto 3 different chips, in a regular CCD the same chip (same semiconductor) is used to capture different colors (after the light passes through the Bayer mask). But if it's the same semiconductor (same bandgap), how can it be sensitive to red, green and blue at the same time? Answer: Silicon is sensitive to a range of wavelengths from UV to near IR.
(Figure: range of a Kodak CCD's quantum efficiency - the proportion of photons that are detected - at each wavelength.)
The same thing applies to 3-CCD cameras: the 3 color CCDs are identical; it is only the color-splitting prism that decides which one detects what color.
{ "domain": "physics.stackexchange", "id": 29514, "tags": "semiconductor-physics" }
Why is hydrogen gas so highly reactive?
Question: Why is Hydrogen so reactive? What makes it combustible? Answer: Hydrogen is not particularly reactive. For example, just mixing hydrogen and oxygen gas will not cause a reaction at room temperature, but many metal elements oxidize at least on the surface in air. The most reactive elements in my opinion are fluorine in the non-metals, and the alkali metals like sodium and potassium. It is combustible because oxygen wants electrons and takes them from hydrogen to form water.
{ "domain": "chemistry.stackexchange", "id": 2953, "tags": "everyday-chemistry" }
INDUCTANCE depends on the number of turns in a solenoid. Is this the case with RELUCTANCE as well?
Question: The total flux ($\Phi$) through a solenoidal inductor of length $l$ and $N$ turns is proportional to the current through the inductor and the inductance $L$ of the inductor according to $$\Phi =L \cdot I $$ $$\Rightarrow \Phi =\frac{\pi r^2\mu_0 N^2 }{l}\cdot I \tag{1}$$ Clearly, in this case, if we double the number of turns (whilst holding the length and current constant) we increase the total flux by a factor of 4. This is because the inductance of a solenoid depends quadratically on $N$. Now I have just learnt about the concept of reluctance (denoted $R$) and magnetomotive force (denoted $m.m.f$), where $m.m.f$ is defined to be $m.m.f \equiv N\cdot I$. These two quantities are related to each other by the total flux via the formula $$m.m.f = R\cdot \Phi \tag{2}$$ Now applying the above formula (eq. 2) to the case of the solenoidal inductor, if we hold the current constant but double the number of turns ($N$) of the inductor, the $m.m.f$ only doubles (since $m.m.f \equiv N\cdot I$). But according to equation (1), the total flux must quadruple since $\Phi \propto N^2$. Thus in order for equation (2) to remain valid, the reluctance must necessarily halve. That is, in order for equations (1) and (2) to be simultaneously valid, the reluctance must be inversely proportional to the number of turns, so that $R\propto \frac{1}{N}$. However, everywhere I look, I find that reluctance is always given by equations that are independent of $N$. For instance, Wikipedia states that $R=\frac{l}{\mu A}$. How can this be? How can equations (1) and (2) be simultaneously true whilst reluctance is independent of $N$? Any help on this issue would be most appreciated! Answer: Reluctance is independent of the number of turns. Your confusion results from the fact that people use the word flux to refer to two closely related but different quantities. Recall that magnetic flux is the integral of $\vec B$ through a surface. To speak of a flux, we must first define what this surface is. 
In the analysis of magnetic circuits, this surface is usually the cross section of a relevant part of the magnetic circuit. In this case, this would be the cross section of the inductor winding with area $\pi r^2$. With this definition, the flux is $\Phi_1 = \pi r^2 B$ where $\vec B$ is the magnetic field within the coil (assumed uniform here for simplicity). From the reluctance of the magnetic circuit, we have $\mathcal F=\mathcal R\Phi_1 $, where $\mathcal F=NI$ is the mmf and $\mathcal R$ is the reluctance. The other magnetic flux we may speak of is the one through the winding, more relevant for electrical analysis, e.g. through the definition of inductance. Consider the loop formed by the coil, including whatever external circuit it may be connected to. This loop is the boundary of a surface, and it is this surface through which we calculate the flux to define inductance, because this is the flux whose rate of change gives the electromotive force along the loop. It is not necessarily easy to visualize this, but each magnetic field line in the inductor core intersects the surface bounded by the loop $N$ times, so the flux is $\Phi_2=N\pi r^2B=N\Phi_1.$ Inductance is calculated as $L =\Phi_2/I$. Putting everything together, $$L=\frac {\Phi_2} I = \frac {N\Phi_1} I = \frac {N^2} {\mathcal R}$$ which is proportional to $N^2$, as expected.
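A quick numeric sketch of these relations for an air-core solenoid (the dimensions below are made up): doubling $N$ leaves the reluctance unchanged but quadruples the inductance $L = N^2/\mathcal{R}$.

```python
import math

MU0 = 4e-7 * math.pi                 # vacuum permeability, H/m

def solenoid(N, length, r, I):
    """Reluctance, core flux and inductance of an ideal air-core solenoid."""
    A = math.pi * r ** 2
    R = length / (MU0 * A)           # reluctance: independent of N
    phi1 = N * I / R                 # core flux, from mmf = R * phi1
    L = N ** 2 / R                   # inductance, from flux linkage N * phi1
    return R, phi1, L

R1, _, L1 = solenoid(N=100, length=0.1, r=0.01, I=1.0)
R2, _, L2 = solenoid(N=200, length=0.1, r=0.01, I=1.0)
assert R2 == R1                      # reluctance unchanged
assert abs(L2 / L1 - 4.0) < 1e-9     # inductance quadrupled
```

The core flux $\Phi_1$ only doubles with $N$ (the mmf doubles at fixed reluctance), while the flux linkage $N\Phi_1$ quadruples, which is exactly the resolution of the question's apparent paradox.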
{ "domain": "physics.stackexchange", "id": 88513, "tags": "electromagnetism, magnetic-fields, electric-circuits, electric-current, electromagnetic-induction" }
A problem about Huffman codeword
Question: Under Huffman encoding, under what circumstances will the codeword length of each character be equal? (Suppose the number of characters is a power of 2.) I think that if all characters have the same frequency, then the codeword lengths of all characters will be the same. Surprisingly, if there are two characters whose frequencies differ by $2$, and the frequencies of the others are equal, the codeword lengths of all characters are still the same. The main problem is: can I say that, for each constant $c$, if there are two characters whose frequencies differ by $c$, and all remaining characters have equal frequency, then the codeword length of each character is equal? Answer: Assume there are $2^n$ symbols, $n \geq 2$, and the frequencies of the most common and the two least common symbols are $f$, $g$, and $h$. We could use a code of $n-1$ bits for the most common and $n+1$ bits for the two least common symbols. This will reduce the average code length if $f > g+h$. You were specifically asking for the frequencies $2^{-n} + c/2$, $2^{-n}$, and $2^{-n} - c/2$, so that's the case for $c > 2^{-n}/2$.
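The equal-frequency case is easy to check by running Huffman's algorithm and reading off the code lengths; a small sketch (tie-breaking by insertion order is an arbitrary choice and does not affect the lengths in these examples):

```python
import heapq

def huffman_lengths(freqs):
    """Return {symbol: codeword length} for a frequency dict."""
    depth = {s: 0 for s in freqs}
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    nxt = len(heap)                    # unique tie-breaker ids
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        for s in a + b:
            depth[s] += 1              # every symbol in a merge gains one bit
        heapq.heappush(heap, (f1 + f2, nxt, a + b))
        nxt += 1
    return depth

# 2^3 equally frequent symbols -> every codeword has length 3
eq = huffman_lengths({chr(65 + i): 1 for i in range(8)})
assert set(eq.values()) == {3}
```

Conversely, with one symbol dominating, e.g. frequencies {A: 10, B: 1, C: 1, D: 1}, we have $f > g + h$ and A gets a 1-bit codeword, matching the answer's criterion.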
{ "domain": "cs.stackexchange", "id": 17981, "tags": "greedy-algorithms, huffman-coding" }
Hardness of approximation of the 3 colorability problem
Question: If we have a polynomial-time $c$-approximation algorithm, with $c<\frac{4}{3}$, for graphs whose chromatic number is $\geq k$, then $NP=P$; how does one prove such statements? I also have some sort of explanation of this statement: it's NP-hard to distinguish between graphs that have chromatic number $k$ and chromatic number $c \cdot k$ when $c<\frac{4}{3} \quad \forall k\geq 3$. Answer: Hint: Consider planar graphs.
{ "domain": "cs.stackexchange", "id": 1369, "tags": "algorithms, np-hard, approximation" }
How does a B-tree guarantee the self-balancing property?
Question: Let's say we have a B-tree of order 4 with 3 levels, and all nodes are completely filled. Now suppose we want to insert a key with a value greater than all keys present in the tree. For this, we have to go to the rightmost node at the last level. As this last level is also filled, how can we proceed, since all the nodes are full? We can't do a split here. So we are left with creating a new node below the rightmost node. But this tree contradicts the balancing property. I want to know the intuition behind the self-balancing nature. Answer: That's not how insertion into a full B-tree works. You can read about this in detail in CLRS chapter 18, section 18.2. In brief, when you want to insert you start at the root and work your way down to the appropriate leaf node, splitting any full nodes you find on the way. That way, when you come to insert the new value, you are guaranteed that the parent of the node where you do the insert isn't full. There is a special case when the root is full, when you create a new empty root as a parent to the old root, so that the old root can be split. This is the only time that the tree increases in height, and of course it causes every leaf to increase its depth by 1, maintaining the same-depth property.
{ "domain": "cs.stackexchange", "id": 18564, "tags": "b-tree" }
Will a satellite escape from the bounds of Earth's gravity if its orbital velocity is increased to escape velocity?
Question: If the orbital velocity $(v=\omega r)$ is increased to the escape velocity at that particular orbit, will the satellite move off to infinity? I know that it will skid out of that orbit, as vehicles do when their speed exceeds the design speed of a banked road. But this satellite has been revolving; shouldn't it just move to another orbit and keep revolving, instead of moving to infinity, since gravity still acts on it? Answer: Increasing the speed will increase the orbit. At some point the speed corresponds to an infinitely large orbit, meaning that the object will never return. This is called escape velocity. So, by definition, if an object such as a satellite - initially in orbit or stationary, that doesn't matter - reaches escape velocity, then it will not just reach a higher orbit but will never come back. By definition. If it did come back and just reached a higher orbit, then we wouldn't have called it escape velocity in the first place. And as to the note on gravity, be aware that gravity always acts on an object - also on objects that move with escape velocity. The velocity is just large enough to outweigh the effect of gravity continuously.
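A quick numeric check of the boundary case (the radius below is an arbitrary low-Earth-orbit value): the circular orbital speed is $\sqrt{GM/r}$ and the escape speed is $\sqrt{2GM/r}$, so at escape speed the total specific energy $\tfrac12 v^2 - GM/r$ is exactly zero, the textbook criterion for an unbound trajectory.

```python
import math

G = 6.674e-11          # gravitational constant, SI units
M_EARTH = 5.972e24     # Earth mass, kg
r = 6.771e6            # ~400 km altitude orbit, metres

v_orb = math.sqrt(G * M_EARTH / r)       # circular orbital speed
v_esc = math.sqrt(2 * G * M_EARTH / r)   # escape speed at the same radius

assert abs(v_esc / v_orb - math.sqrt(2)) < 1e-12   # always a factor sqrt(2)
energy = 0.5 * v_esc ** 2 - G * M_EARTH / r        # specific orbital energy
assert abs(energy) < 1e-3                          # zero: unbound, by definition
```

Any speed above v_orb but below v_esc gives a negative total energy, i.e. a larger but still bound (elliptical) orbit, which is exactly the distinction the answer draws.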
{ "domain": "physics.stackexchange", "id": 83072, "tags": "gravity, newtonian-gravity, rotational-dynamics, satellites" }
Can I discard missing alleles?
Question: I am converting a biallelic VCF into an SQL table and one of my tables will be something like: SAMPLE | custom variant identifier | Genotype -------|---------------------------|--------- 0001 | 3456 | ./. My question is whether, for the diploid ./. or haploid . missing alleles, I can just drop them to save space. My thinking is that since the dot represents missing data, the ./. entries are just placeholders in the VCF but provide no information at all. Is this correct? Should I retain the rows with ./.? Showing the end of one line of the VCF (really long lines because of high N): FORMAT SampleX SampleY GT:AD:DP:GQ:PL 0/0:3,0:3:4:0,4,45 ./.:3,0:3:.:0,0,0 I thought to also show the result after bcftools norm -m to split to biallelic and queried per sample. sample_id chr@pos@ref@alt genotype sample1 chr1@10158@ACCCT@A 0/0 sample2 chr1@10158@ACCCT@A ./. <- does this provide any info or can I discard? sample3 chr1@10158@ACCCT@A 0/0 sample4 chr1@10158@ACCCT@A ./. sample5 chr1@10158@ACCCT@A 0/0 Answer: If you can, do not discard missing allele information. Generally you often benefit from being able to distinguish between a position that you interrogated and genotyped as REF (reference) versus one that you interrogated but could not determine the genotype (e.g. due to low coverage in the sequencing or probing issues in a microarray setup) and which usually is encoded as a missing (./. if ploidy = 2) genotype. A missing genotype tells us that the genotype at this position could be REF, but could also be some other base; we simply do not know. Knowing this is critical and/or useful depending on the downstream analysis. For example it is quite important and informative when merging datasets.
Since formats like VCF or PLINK do not record non-variant positions, if you discarded missing genotypes prior to merging, how can you distinguish between genotypes that truly were REF in one dataset (fully invariant position) and recorded missing genotypes at the same position which you decided to discard? You will end up assuming REF status for everything not recorded, and this will inevitably lead to batch effects in your combined dataset. For a better discussion see this blog post On the other hand, if, as in your particular situation, your relational database will not incorporate external databases in order to produce joint datasets, perhaps discarding missing genotypes is all fine.
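As a toy illustration (hypothetical sample names, mirroring the per-sample table above) of what is lost when the ./. rows are dropped:

```python
from collections import Counter

genotypes = {
    "sample1": "0/0", "sample2": "./.", "sample3": "0/0",
    "sample4": "./.", "sample5": "0/0",
}

def classify(gt):
    alleles = gt.replace("|", "/").split("/")
    if "." in alleles:
        return "missing"   # interrogated, but the genotype could not be called
    if set(alleles) == {"0"}:
        return "hom_ref"   # confidently homozygous reference
    return "variant"

counts = Counter(classify(gt) for gt in genotypes.values())
# Dropping the './.' rows would make the two unknowns indistinguishable from
# hom-ref calls in any later merge, silently turning 3/5 confident REF into
# an implied 5/5 -- a classic source of batch effects.
```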
{ "domain": "bioinformatics.stackexchange", "id": 2293, "tags": "vcf, genomics, bcftools" }
Why is $\displaystyle \int F\cdot ds = \displaystyle\int F\cdot v dt$?
Question: I don't want the argument to depend on algebraic manipulations of infinitesimals, but rather on the substitution needed for the integral to obtain this result. Answer: You have $$ \int_{s(t_0)}^{s(t_1)} F(s)\cdot ds = \int_{t_0}^{t_1}F(s(t))\cdot s'(t)dt = \int_{t_0}^{t_1}F(s(t))\cdot v\cdot dt $$ which is exactly the definition of integral substitution in standard analysis. (I'm referring to this definition of substitution, i.e. $\varphi(t) = s(t)$.)
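A numerical sanity check of the substitution, with an arbitrary illustrative choice $F(s)=s^2$ and $s(t)=t^3$ on $t\in[0,1]$ (both integrals should equal $1/3$):

```python
def trapezoid(f, a, b, n=100_000):
    """Composite trapezoid rule for a smooth f on [a, b]."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

F = lambda s: s ** 2          # force as a function of position
s = lambda t: t ** 3          # position as a function of time
v = lambda t: 3 * t ** 2      # velocity, s'(t)

lhs = trapezoid(F, 0.0, 1.0)                         # ∫ F(s) ds,  s in [0, 1]
rhs = trapezoid(lambda t: F(s(t)) * v(t), 0.0, 1.0)  # ∫ F(s(t)) v(t) dt
assert abs(lhs - rhs) < 1e-6   # both ≈ 1/3, as the substitution rule predicts
```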
{ "domain": "physics.stackexchange", "id": 80170, "tags": "newtonian-mechanics, work, integration" }
What is the angle of twist in end point A?
Question: Can anyone please explain this question to me? I am okay with analysing a gear assembly with 2 gears at a time, but the third one here is confusing me. Answer: There are a few steps in this problem. The first one is to figure out the torque to which each shaft is subjected. The second one is to determine the twisting angle. Here you are making the assumption that the disks do not deform. So, regarding the torques (if you know the basics about gear assemblies this is obvious): the short rod (L/2) is subjected to a torque T the long rod (L) is subjected to a torque $\frac{3}{2} T$ The magnitude of the twist is given by: $$\Delta \theta = \frac{M_{t,i} L_{i}}{G_i J_i}$$ where $J_i =\frac{\pi d^4}{32}$ Therefore the twist is: for the short rod $$\Delta\theta_s = \frac{T L }{2 \;G \;J}$$ for the long rod $$\Delta \theta_L = \frac{3 T L }{2\; G \;J}$$ However, because the gear ratio from long to short is 3/2, twisting one unit on the long rod (DE) will result in 3/2 units of rotation on the short. Therefore the final twist at point A is: $$\Delta\theta_A=\Delta\theta_s +\frac{3}{2}\Delta \theta_L $$ $$\Delta\theta_A=\frac{T L }{2 G \;J} +\frac{3}{2}\frac{3 T L }{2 G J} $$ $$\Delta\theta_A=\frac{T L }{G J} \left (\frac{1}{2}+\left(\frac{3}{2}\right)^2\right)$$ $$\Delta\theta_A=\frac{11}{4}\frac{T L }{G J} $$ Finally, substituting $J =\frac{\pi d^4}{32}$ (if everything went OK): $$\Delta\theta_A=88\frac{T L }{G \pi d^4} $$
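The coefficients can be double-checked with exact rational arithmetic (a quick sanity check of the algebra above, nothing more):

```python
from fractions import Fraction

gear_ratio = Fraction(3, 2)

# Twist of each rod expressed in units of T*L/(G*J):
theta_short = Fraction(1, 2)   # (T)(L/2)/(G J)
theta_long = Fraction(3, 2)    # (3T/2)(L)/(G J)

# The gear ratio folds the long rod's twist into point A by a factor of 3/2.
theta_A = theta_short + gear_ratio * theta_long
assert theta_A == Fraction(11, 4)

# Substituting J = pi*d^4/32 multiplies the coefficient by 32:
assert theta_A * 32 == 88      # i.e. theta_A = 88 T L / (G pi d^4)
```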
{ "domain": "engineering.stackexchange", "id": 3882, "tags": "mechanical-engineering, gears, torque, solid-mechanics" }
Do H$_2$ fuel cells function the same as H?
Question: Is the hydrogen molecule broken apart first, or does the $H_2$ react directly with the dielectric/anode/cathode to produce electricity? E.g. http://www.fuelcellstore.com/horizon-aerostak-a200 Answer: In the standard hydrogen electrode (SHE) the electrode used is made of platinum, which is inert and does not participate in any reactions occurring in the electrochemical cell but provides its surface for conduction of electrons. The following reaction takes place if the SHE acts as cathode: 2H+ (aq) + 2e- -> H2 (g) {reduction half-reaction} And if the SHE acts as anode, the reaction taking place is: H2 (g) -> 2H+ (aq) + 2e- {oxidation half-reaction}. In the case of a fuel cell: At Anode: 2H2 (g) + 4OH- (aq) -> 4H2O (l) + 4e- At Cathode: O2 (g) + 2H2O (l) + 4e- -> 4OH- (aq) Overall Reaction: 2H2 (g) + O2 (g) -> 2H2O (l) A fuel cell consists of porous carbon electrodes containing suitable catalysts (finely divided Pt or Pd) incorporated in them to increase the rate of the electrode reactions. Let's say that we choose the SHE as one electrode and, as the other, a zinc electrode dipped in zinc sulphate solution. According to the electrochemical series, zinc has a lower reduction potential than the SHE, so it will have a hard time undergoing reduction. So the zinc electrode acts as anode (where oxidation takes place) and the SHE as cathode (where reduction takes place). Now, let one electrode be the SHE and the other be copper dipped in copper sulphate solution. According to the electrochemical series, copper has a higher reduction potential than the SHE, so it will undergo reduction more easily than the SHE. So the copper electrode acts as cathode and the SHE as anode. The SHE can act as either cathode or anode depending on the other electrode's reduction potential: if the reduction potential of the other electrode is higher than that of the SHE, the SHE acts as anode, and it acts as cathode if the other electrode has a lower reduction potential than the SHE.
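The two half-reactions of the alkaline fuel cell combine into a standard cell potential. The potentials below are textbook standard values; this is a back-of-the-envelope sketch, not a model of a real (lossy) cell:

```python
# Standard electrode potentials (V vs. SHE) for the alkaline H2/O2 fuel cell
E_cathode = 0.401    # O2 + 2 H2O + 4 e- -> 4 OH-
E_anode = -0.828     # 2 H2O + 2 e- -> H2 + 2 OH-  (runs in reverse at the anode)

E_cell = E_cathode - E_anode   # standard cell potential
assert abs(E_cell - 1.229) < 0.005
# Same ~1.23 V as the acidic H2/O2 cell: the overall reaction
# 2 H2 + O2 -> 2 H2O is identical, only the charge carrier (OH- vs H+) differs.
```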
{ "domain": "physics.stackexchange", "id": 37419, "tags": "electricity, hydrogen, molecules" }
What aerodynamic equation should be used to determine the drop in air pressure over an airfoil embedded within a vertical pipe?
Question: I have a conceptual idea for a thruster for a VTOL aircraft/drone which should produce lift by utilizing an upward-flowing airflow to produce low air pressure over the top surface of an airfoil that is embedded within a vertical pipe. I would like to be able to calculate what the drop in air pressure will be, but since I am neither an aerospace nor an aeronautical engineer, I do not know which aerodynamic equation to use to determine this. I have made some CAD drawings of this conceptual VTOL aircraft/drone thruster to help illustrate and explain how the thruster should produce lift/thrust. (Note: The embedded airfoil is fastened to the vertical pipe with screws and it is colored orange to make it stand out from the rest of the thruster.) I am thinking of using a Drag Equation that I found on a NASA website to calculate the drop in air pressure, but I'm not sure if it is the right one to use. Reference https://www.grc.nasa.gov/www/k-12/airplane/drageq.html So, my question is what aerodynamic equation should be used to determine the drop in air pressure over an airfoil embedded within a vertical pipe? Answer: The equation you quote isn't a physical law, it's a definition of $C_{\textrm{d}}$. You will need it, but for finding the value of $C_{\textrm{d}}$, there's no substitute for doing a scale model experiment, geometrically similar to and at the same Reynolds number as (and unless the Mach number is very small, also at the same Mach number as) the situation about which you want to make predictions.
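For the scale-model matching the answer describes, the Reynolds number is the quantity to hold fixed. The numbers below are placeholders for illustration, not values for this particular design:

```python
rho = 1.225      # air density at sea level, kg/m^3
mu = 1.81e-5     # dynamic viscosity of air, Pa*s

def reynolds(V, L):
    """Re = rho * V * L / mu for flow speed V (m/s) and chord length L (m)."""
    return rho * V * L / mu

Re_full = reynolds(V=10.0, L=0.5)    # hypothetical full-scale airfoil
Re_half = reynolds(V=20.0, L=0.25)   # half-scale model at double the speed
assert abs(Re_full - Re_half) < 1.0  # matched Re -> comparable drag coefficient
# Caveat: doubling the speed also doubles the Mach number, which is one reason
# real facilities sometimes use pressurized (higher-density) wind tunnels
# to raise Re without raising the speed.
```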
{ "domain": "engineering.stackexchange", "id": 4717, "tags": "mechanical-engineering, fluid-mechanics, pressure, aerospace-engineering, aerodynamics" }
Why can the charge density be expressed as the Bloch wave integral in the Brillouin zone?
Question: While studying the Berry phase, I saw a formula saying that in an insulating crystal the charge density can be written as a sum over bands of a Brillouin-zone integral of the Bloch states. I don't understand the second term; I know the wave function in it is a Bloch wave. As I understand it, it should be written as: $$ \begin{align} &\sum_{n}\sum_{\mathbf{k}\in\mathrm{BZ}} |\psi_{n\mathbf{k}}(\mathbf{r})|^2 \\ =&\sum_{n} \dfrac{1}{\Delta\mathbf{k}}\int_{\mathrm{BZ}} |\psi_{n\mathbf{k}}(\mathbf{r})|^2 \,d\mathbf{k}\\ =&\dfrac{N\Omega}{(2\pi)^3}\sum_{n} \int_{\mathrm{BZ}} |\psi_{n\mathbf{k}}(\mathbf{r})|^2 \,d\mathbf{k} \end{align} $$ Here $\Omega$ is the primitive cell volume, $\Omega^*$ is the Brillouin zone volume, $N$ is the number of primitive cells in the crystal, and $\Delta\mathbf{k}$ is the $\mathbf{k}$-space volume per allowed state: $$ \Delta\mathbf{k} = \dfrac{\Omega^*}{N}=\dfrac{\frac{(2\pi)^3}{\Omega}}{N} = \dfrac{(2\pi)^3}{N\Omega} $$ Compared with the original formula, there is an extra lattice volume $N\Omega$; why is this? Answer: If you are looking for a charge density you can't have an integration over all space. You can, however, for each point of space account for the "particle density" at that region, and that means a term like $|\psi|^2$. Your wave function can be decomposed in the Bloch functions $\psi_{n\mathbf{k}}$, thus you need to sum over $n$ (which accounts for electrons in every possible band) and integrate over all possible values of $\mathbf{k}$, i.e. over the BZ (this accounts for every possible quasimomentum allowed for an electron).
{ "domain": "physics.stackexchange", "id": 90273, "tags": "quantum-mechanics, wavefunction, solid-state-physics, charge" }
How to find the closest N to the power of X to the given number?
Question: Let's say we have number 4920 and we want to find the closest $n^x$ to 4920 2 ^ 12 = 4096 but it's not the closest possible $n^x$, for example 17 ^ 3 = 4913 is closer to 4920 The question is, how do we find the closest $n^x$, where $n$ and $x$ are less than 2 digits long? (because $4913^1$ is not what I want) Answer: If $n$ and $x$ are both less than 100, there are only 10,000 combinations to check (including duplicates like $2^4 = 4^2$). That's nothing. You could precompute all of them and then implement the lookup using binary search.
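The brute force is trivial to code (here requiring x ≥ 2 so the trivial first power is excluded):

```python
def closest_power(target, limit=100):
    """Closest n**x to target with 2 <= n < limit and 2 <= x < limit."""
    best = None
    for n in range(2, limit):
        for x in range(2, limit):
            p = n ** x
            if best is None or abs(p - target) < abs(best[0] - target):
                best = (p, n, x)
            if p > 2 * target:   # larger x for this n only moves further away
                break
    return best

print(closest_power(4920))   # -> (4913, 17, 3), i.e. 17**3
```

The early break is optional; even without it the full 100 × 100 grid is, as the answer says, nothing.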
{ "domain": "cs.stackexchange", "id": 9482, "tags": "algorithms, arithmetic, number-theory, integers" }
Zeolites, odor control, and the witchcraft of sunlight
Question: Various companies offer "volcanic crystals" or "volcanic rock" available in small mesh bags for odor control. The best description I have found is that these are made up largely of Zeolites. All of these manufacturers suggest that the rocks will trap odors and that you just recharge the rocks every 6 months by putting them out in direct sunlight. This sounds like witchcraft! Can someone explain why putting the rocks in sunlight (versus other light) recharges them and makes them effective at trapping odors again? How does the light "release" the odor? Is it the heat of the sunlight, or could this be used in the winter as well? Answer: Zeolites are very similar to clays, with one key difference. The molecular structure of clays is rather compact. In contrast, the molecular structure of zeolites has tiny molecular-sized holes, and these holes are wont to connect. The result is a porous, tunnel-filled structure at the molecular level. The resulting tunnels make zeolites very good at absorbing substances that fit very nicely in those tunnels. What those substances are depends on the zeolite. There are a large number of naturally occurring zeolites, and an even larger number of manufactured zeolites. Which zeolite absorbs what substance depends very much on the nature of the holes in the zeolite. As an example, the International Space Station uses a particular kind of zeolite ("zeolite 5A") to selectively absorb carbon dioxide from the breathing atmosphere. Zeolites would need to be discarded after becoming saturated if all they did was absorb stuff. That's not the case. It's rather easy to make zeolites relinquish the stuff they have absorbed because the absorption mechanism is rather weak. All it takes is a smallish amount of energy. Zeolites used in industrial settings are subjected to heat, noxious chemicals, or both to make them relinquish the stuff they have absorbed.
In addition to having a large number of industrial uses, zeolites have apparently become attractive to the "all natural" crowd. (I suspect the larger number of "non-natural" zeolites are verboten amongst this crowd.) Exposing zeolites to sunlight means they absorb heat and ultraviolet light. This is indeed one way to make zeolites reject whatever has been absorbed. That's not very efficient; it would be much more efficient to bake those zeolites and/or expose them to a series of caustic chemical baths. However, that wouldn't play well with the "all natural" crowd.
{ "domain": "earthscience.stackexchange", "id": 1461, "tags": "geology, volcanology, rocks" }
time.sleep() in a node
Question: I have a node where I need to wait for a specific amount of time. Can I just use time.sleep() in python or do I need something else? Thanks, luketheduke Originally posted by luketheduke on ROS Answers with karma: 285 on 2016-04-30 Post score: 4 Answer: You should definitely NOT use time.sleep() The ROS way of doing this is: rate = rospy.Rate(1) # 1 Hz # Do stuff, maybe in a while loop rate.sleep() # Sleeps for 1/rate sec Or the similar: rospy.sleep(1) # Sleeps for 1 sec In both cases, you'll ofc need to import rospy Here's the API documentation: http://docs.ros.org/jade/api/rospy/html/rospy.rostime-module.html Originally posted by spmaniato with karma: 1788 on 2016-04-30 This answer was ACCEPTED on the original site Post score: 7 Original comments Comment by luketheduke on 2016-04-30: I only want to delay once, not repeatedly. How would I implement that? Comment by spmaniato on 2016-04-30: It's the second code snippet above: rospy.sleep(1) # Sleeps for 1 sec Adapt it to how many seconds you want to delay. Comment by spmaniato on 2016-05-01: @luketheduke, if my answer was correct / helpful, please "mark it as correct" by clicking on the checkmark button to its left. It will help people who read this question in the future. Thanks. Comment by Rufus on 2021-05-30: What is the benefit of using rospy.sleep instead of time.sleep? Is it just that it is interruptible? Comment by marko525 on 2022-04-29: +1 on @Rufus question comment. It's a very strong NOT, without any explanation.
{ "domain": "robotics.stackexchange", "id": 24527, "tags": "ros, rospy, ros-indigo" }
Is this the best message delay algorithm?
Question: In my application I am attempting to queue outgoing messages by the following rules By default they should be sent no less than messageDelay apart Some messages can be sent immediately, completely bypassing all queued messages Additionally, some immediately sent messages can reset the waiting time of queued messages If this is confusing, this is what it would look like on the wire | Normal |Immediate |Immediate-reset| M----------M---M------M----M----------M .... This was originally handled by a dedicated OutputThread and queue, but for various reasons outside of the scope of this question I am attempting to implement this with ReentrantLock and Condition, so that when any of the methods finally returns, the message is known to have been sent (or an Exception is thrown meaning it didn't). However this is my first time using ReentrantLock, and while I think I've implemented it correctly, I need someone to verify that it's correct since this is an extremely difficult thing to test protected final ReentrantLock writeLock = new ReentrantLock(true); protected final Condition writeNowCondition = writeLock.newCondition(); public void sendRawLine(String line) { //[Verify state code] writeLock.lock(); try { sendRawLineToServer(line); //Block for messageDelay. If rawLineNow is called with resetDelay //the condition is tripped and we wait again while (writeNowCondition.await(getMessageDelay(), TimeUnit.MILLISECONDS)) { } } catch (Exception e) { throw new RuntimeException("Couldn't pause thread for message delay", e); } finally { writeLock.unlock(); } } public void rawLineNow(String line, boolean resetDelay) { //[Verify state code] writeLock.lock(); try { sendRawLineToServer(line); if (resetDelay) //Reset the writeNowCondition.signalAll(); } finally { writeLock.unlock(); } } Is this the correct way to implement ReentrantLock based on my requirements above? Is there a better way (NOT USING THREADS) to do this?
Answer: I can see some problems with the code: As @fge mentions, catching Exception is problematic. You should catch the exceptions that you are expecting to occur, and let any other unchecked exceptions propagate. Wrapping the exceptions in RuntimeException is a bad idea unless it is really necessary. It is better to either let them propagate, or wrap them in a custom checked or unchecked exception. The behaviour of the code doesn't match your description in one respect. You talk about queuing messages. In fact, messages are being sent immediately in all cases, and you are delaying after sending the message in the sendRawLine case. In reality, you are blocking client threads rather than using a queue data structure and a separate output thread. This might be the right thing to do ... but it might be the wrong thing. It depends whether the client threads should be blocked until their respective messages are sent. Either way, your characterization of this as a "message queuing" problem is a bit misleading. Re: #3) The idea was to delay the next message from sending. Otherwise all calls to sendRawLine would have a weird pause before doing anything. I'm looking into using the current time though to determine if waiting is even necessary. First, that is NOT what your description of "message queuing" implies ... to me. And consider me as an exemplar of someone else reading your code. Second, (if I understand you correctly) using the current time to decide whether to wait before a send would require that something keeps track of when each thread last sent a message. Third, there is actually a good reason to not wait before sending a message. If you do wait first, then by the time the wait period has finished, the message could be rather out of date ... especially if there were a number of resets.
Finally, it strikes me that you may be over-using threads here, and/or that it might be better for the entity (thread or FSM) responsible for generating messages to check whether a message needs to be sent before generating it.
{ "domain": "codereview.stackexchange", "id": 4020, "tags": "java, multithreading, locking" }
Why does Penicillin only affect bacterial cell walls
Question: I was quite fascinated by the feature Should Science Pull the Trigger on Antiviral Drugs—That Can Blast the Common Cold? in this month's Wired magazine. They explain that Penicillin is effective at killing bacteria because it interferes with the growth of bacterial cell walls. How does Penicillin do that exactly? And why does it not dissolve human cells as well? Answer: Bacteria have a mesh-like structure surrounding their plasma membrane called a cell wall. The cell wall is made up of peptidoglycan polymers that form a rigid crystalline structure that helps protect the osmotic pressure of the bacterial cytoplasm. Penicillin and other β-lactams work by inhibiting the final step of peptidoglycan synthesis, which prevents transpeptidation (crosslinking) of the peptidoglycan molecules. This leads to the death of the bacterium by osmotic pressure due to the loss of the cell wall. This drug doesn't affect human cells because they lack a cell wall surrounding their plasma membrane.
{ "domain": "biology.stackexchange", "id": 248, "tags": "pharmacology, bacteriology, antibiotics" }
Can a semi-decidable problem be also decidable?
Question: As far as I understand, a semi-decidable (recursively enumerable) problem could be: decidable (recursive) or undecidable (nonrecursively enumerable) This post made me wonder if this is not conventionally followed. This is my answer to it and as far as I understand it is correct: A semidecidable problem (or equivalently a recursively enumerable problem) could be: Decidable: If the problem and its complement are both semidecidable (or recursively enumerable), then the problem is decidable (recursive). Undecidable: If the problem is semidecidable and its complement is not semidecidable (that is, is not recursively enumerable). Important note: Remember that a decidable (recursive) problem is also semidecidable (recursively enumerable). Conversely, if a problem is not recursively enumerable (semidecidable), then it is not recursive (decidable). What the Wikipedia entry says is that: Partially decidable problems that are not decidable are called undecidable. In general, a semidecidable problem (recursively enumerable) could be decidable (recursive) or undecidable (nonrecursively enumerable). Also note that a problem and its complement could both (or just one of them) be not even semi-decidable (nonrecursively enumerable). Also note that, if a problem is recursive, its complement is also recursive. Is it conventionally (always) understood this way? Is there some literature that presents a semi-decidable (partially decidable, recursively enumerable) problem as equivalent to an undecidable one? Answer: Yes, a recursively enumerable language may be either decidable or undecidable. To see this, you just look at the definitions of the terms. A language $L$ is recursive (aka decidable) if there is a Turing machine that halts for all inputs, accepting every word in $L$ and rejecting every word not in $L$.
$L$ is recursively enumerable (aka semi-decidable) if there is a Turing machine that halts and accepts any input in $L$ and, for any input not in $L$, it either halts and rejects or it does not halt. Therefore, every recursive language is recursively enumerable. The machine that decides the recursive language is a special case of the machine required for a recursively enumerable language: specifically, it is allowed to either loop forever or reject for words not in $L$; in fact, it always rejects and never uses the option of looping forever.
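The "both semidecidable implies decidable" direction mentioned in the question can be illustrated by dovetailing two semi-decision procedures. This is a toy sketch: the step-bounded functions stand in for running a Turing machine for n steps:

```python
def decide(accepts_within, rejects_within, x):
    """Dovetail two semi-deciders: run each for 1, 2, 3, ... steps.
    If x is in L, accepts_within(x, n) is eventually True for some n;
    if x is not in L, rejects_within(x, n) eventually is. Exactly one
    of the two must fire, so this loop always terminates -- which is
    why 'L and its complement both r.e.' gives a decider for L."""
    n = 1
    while True:
        if accepts_within(x, n):
            return True
        if rejects_within(x, n):
            return False
        n += 1

# Toy semi-deciders for L = {even numbers}, each "discovering" its answer
# only after x simulation steps:
accepts = lambda x, n: x % 2 == 0 and n > x
rejects = lambda x, n: x % 2 == 1 and n > x

assert decide(accepts, rejects, 4) is True
assert decide(accepts, rejects, 7) is False
```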
{ "domain": "cs.stackexchange", "id": 2244, "tags": "terminology, computability, undecidability, semi-decidability" }
Is there a PSPACE-intermediate language?
Question: Suppose PH is strictly contained in PSPACE. Is there a problem in PSPACE that is not in PH and not PSPACE-complete? I encountered a language that is in PSPACE. The question is whether it's in PH. So far I don't have any success in either proving it's in PH or proving it's PSPACE-complete. I wonder if there is a language that's not in PH and not PSPACE-complete given PH $\ne$ PSPACE. Answer: This is corollary 13 in Uwe Schöning's paper "A uniform approach to obtain diagonal sets in complexity classes": Corollary 13: If $\mathsf{PSpace} \neq \mathsf{PH}$, then there exist sets in $\mathsf{PSpace}$ which are not $\mathsf{PSpace\text{-}complete}$ w.r.t. $\leq^{\mathsf{P}}_T$ and which are not in the polynomial hierarchy. Proof: If $\mathsf{PSpace} \neq \mathsf{PH}$, then QBF is not in the polynomial hierarchy. Hence $A_1 = \emptyset$, $A_2 = QBF$, $C_1 = \{\mathsf{PSpace\text{-}complete} \text{ w.r.t. } \leq^{\mathsf{P}}_T \}$, $C_2 = \mathsf{PH}$, satisfy the hypothesis of the main theorem. Main theorem: Let $A_1$, $A_2$ be recursive sets and $C_1$, $C_2$ be classes of recursive sets with the following properties: $A_1 \notin C_1$ and $A_2 \notin C_2$; $C_1$ and $C_2$ are recursively presentable; $C_1$ and $C_2$ are closed under finite variations. Then there exists a recursive set $A$ such that: $A \notin C_1$; $A \notin C_2$; and if $A_1 \in \mathsf{P}$ and $A_2\notin \{ \emptyset, \Sigma^* \}$, then $A \leq^{\mathsf{P}}_m A_2$. (note: I updated the names to current ones) There are two interpretations for monotone QBF: QBFs where the quantifier-free part is monotone in all variables. Then this is in $\mathsf{P}$ as Ryan noted.
Because the quantifier-free part is monotone in the quantified variables, we can remove the quantifiers and replace the existentially quantified variables with 1 and the universally quantified variables with 0; the original quantified formula is true iff this modified quantifier-free formula is true, and this reduces the problem to monotone formula evaluation, which is in (and complete for) $\mathsf{NC^1} \subseteq \mathsf{P}$. (If we are taking the supremum of a function over a variable and the function is monotone in that variable, we only need to compute the value of the function at the maximal value of that variable.) QBFs which are monotone in the input variables. This is $\mathsf{PSpace}$-complete under $\mathsf{AC^0}$ reductions; the reduction of QBF to this version is simple and is similar to the proof for monotone boolean formula evaluation being complete for $\mathsf{NC^1}$ or monotone circuit value being complete for $\mathsf{P}$.
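The first interpretation's reduction is easy to sketch: for a formula monotone in every quantified variable, set existential variables to 1 and universal variables to 0, then evaluate the quantifier-free part. The tuple encoding below is an arbitrary toy representation:

```python
def eval_monotone_qbf(prefix, formula):
    """prefix: list of ("exists"|"forall", var); formula: nested tuples
    ("var", name) | ("and", f, g) | ("or", f, g), monotone in all vars.
    Monotonicity means the sup over an existential variable is attained
    at 1 and the inf over a universal variable at 0, so no search is needed."""
    env = {v: (q == "exists") for q, v in prefix}

    def ev(f):
        if f[0] == "var":
            return env[f[1]]
        left, right = ev(f[1]), ev(f[2])
        return (left and right) if f[0] == "and" else (left or right)

    return ev(formula)

# forall x exists y (x or y)  is true;  forall x exists y (x and y) is false
assert eval_monotone_qbf([("forall", "x"), ("exists", "y")],
                         ("or", ("var", "x"), ("var", "y"))) is True
assert eval_monotone_qbf([("forall", "x"), ("exists", "y")],
                         ("and", ("var", "x"), ("var", "y"))) is False
```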
{ "domain": "cstheory.stackexchange", "id": 1025, "tags": "cc.complexity-theory, complexity-classes, polynomial-hierarchy" }
Why do same-charge particles repel each other? Would anti-particles exhibit the contrary behaviour?
Question: Coulomb's Law states that same-charge particles repel each other and opposite-charge particles attract each other, with the force given by $F = k_e \frac{q_1 q_2}{r^2}$. This behaviour is ultimately related to Maxwell's equations and the electromagnetic interaction. Would anti-particles act the opposite way as well in relation to this force of repulsion and attraction? The real question is that if particles and anti-particles are really opposite, they would always attract and remain together. (A Feynman diagram would show the annihilation of an electron and a positron (antielectron), creating a photon that later decays into a new electron–positron pair.) We can conclude that at the quantum level, things really are different. Answer: Anti-particles have the opposite charge of their respective particle, i.e., electrons have a charge of -e and positrons have a charge of +e, so yes, in accordance with Coulomb's law, they attract each other. When particles and anti-particles collide, they are annihilated and energy is released. Here is a good source of reading https://www.britannica.com/science/annihilation
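The sign convention in Coulomb's law already encodes the whole story (standard constants; positive force means repulsion along the line joining the charges):

```python
k_e = 8.9875e9    # Coulomb constant, N*m^2/C^2
e = 1.602e-19     # elementary charge, C
r = 1e-10         # separation, m (about an atomic diameter)

def coulomb(q1, q2, r):
    return k_e * q1 * q2 / r ** 2   # > 0 repulsive, < 0 attractive

assert coulomb(-e, -e, r) > 0   # electron-electron: repel
assert coulomb(+e, +e, r) > 0   # positron-positron: repel, just like electrons
assert coulomb(-e, +e, r) < 0   # electron-positron: attract (and annihilate)
```

So antiparticles do not obey an "opposite" law; they obey the same law with the opposite charge, which is exactly why particle-antiparticle pairs attract.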
{ "domain": "physics.stackexchange", "id": 95741, "tags": "electromagnetism, electric-fields, feynman-diagrams, antimatter, coulombs-law" }
How many planets are there in this solar system?
Question: So, in school (that's a long time ago) they have been teaching us there are 9 planets in our solar system. Mercury Venus Earth Mars Jupiter Saturn Uranus Neptune Pluto But every now and then I keep reading stories about another "dwarf planet" (Eris, discovered in 2005) that - depending on what source tells the story - is another planet according to the astronomical definition, while other sources say that it isn't a planet. Some even say Pluto isn't a planet anymore either. The result: I'm confused due to the contradicting stories. Even Wikipedia isn't clear about Eris and only writes (emphasis mine): NASA initially described it as the Solar System's tenth planet. Initially? So, is it a 10th planet or not? Fact is, there is another "something" out there and it surely seems to look like a planet. Yet, some people keep stating there are 9 planets in our solar system, while others say there are more than 9 planets, and then again there are people stating that the latest definition of "planet" has kicked out Pluto too so there are actually fewer than 9 planets in our solar system. Trying to get a definite, official, and astronomically correct answer I can actually rely on, I'm therefore asking: How many planets are there in this solar system? EDIT The "Definition of planet" at Wikipedia doesn't really help either, as it states: Many astronomers, claiming that the definition of planet was of little scientific importance, preferred to recognize Pluto's historical identity as a planet by "grandfathering" it into the planet list.* * Dr. Bonnie Buratti (2005), "Topic — First Mission to Pluto and the Kuiper Belt; "From Darkness to Light: The Exploration of the Planet Pluto"", Jet Propulsion Laboratory. Retrieved 2007-02-22. So, if you link somewhere to provide proof, it would be great if you could point me to a more trusted source than Wikipedia. Ideally, an astronomical trusted source and/or paper. 
Answer: In addition to Undo's fine answer, I would like to explain a bit about the motivation behind the definition. When Eris was discovered, it turned out to be really, really similar to Pluto. This posed a bit of a quandary: should Eris be accepted as a new planet? Should it not? If not, then why keep Pluto? Most importantly, this pushed to the foreground the question what, exactly, is a planet, anyway? This had been ignored until then because everyone "knew" which bodies were planets and which ones were not. However, with the discovery of Eris, and the newly-realized potential of more such bodies turning up, this was no longer really an option, and some sort of hard definition had to be agreed upon. The problem with coming up with a hard definition that decides what does make it to planethood and what doesn't is that nature very rarely presents us with clear, definite lines. Size, for example, is not a good discriminant, because solar system bodies come in a continuum of sizes from Jupiter down to meter-long asteroids. Where does one draw the line there? Any such size would be completely arbitrary. There is, however, one characteristic that has a sharp distinction between some "planets" and some "non-planets", and it is the amount of other stuff in roughly the same orbit. This is still slightly arbitrary, because it's hard to put in numbers exactly what "roughly" means in this context, but it's more or less unambiguous. Consider, then a quantity called the "planetary discriminant" µ, equal to the ratio of the planet's mass to the total mass of other bodies that cross its orbital radius and have non-resonant periods (so e.g. Neptune doesn't count as sharing Pluto's orbit) up to a factor of 10 longer or shorter (to rule out comets, which has little effect in practice). This is still a bit arbitrary (why 10?) but it's otherwise quite an objective quantity. 
Now take this quantity and calculate it for the different bodies you might call planets, comparing it to both the objects' mass, and their diameter, or with an arbitrary horizontal axis, in order of decreasing discriminant, Suddenly, a natural hard line emerges. If you look only at the mass and the diameter of the objects (shown in the insets above the plots), then there is a pretty continuous spread of values, with bigger gaps between the gas giants and the terrestrial planets than between Mercury and Eris/Pluto. However, if you look at the planetary discriminant, on the vertical axis, you get a very clear grouping into two distinct populations, separated by over four orders of magnitude. There's a finite set of bodies that have "cleared their orbits", and some other bodies which are well, well behind in that respect. This is the main reason that "clearing its orbital zone" was chosen as a criterion for planethood. It relies on a distinction that is actually there in the solar system, and very little on arbitrary human decisions. It's important to note that this criterion need not have worked: this parameter might also have come out as a continuum, with some bodies having emptier orbits and some others having slightly fuller ones, and no natural place to draw the line, in which case the definition would have been different. As it happens, this is indeed a good discriminant. For further reading, I recommend the Wikipedia article on 'Clearing the neighbourhood', as well as the original paper where this criterion was proposed, What is a planet? S Soter, The Astronomical Journal 132 no.6 (2006), p. 2513. arXiv:astro-ph/0608359. which is in general very readable (though there are some technical bits in the middle which are easy to spot and harmless to skip), and from which I took the discriminant data for the plots above. 
Edit: I must apologize for having included, in previous versions of this post, an incorrect plot, caused by taking data from Wikipedia without verifying it. In particular, the planetary discriminant for Mars was wrong (1.8×10^5 instead of 5.1×10^3), which now puts it below Neptune's instead of just below Saturn's, but the overall conclusions are not affected. The Mathematica code for the graphics is available at Import["http://goo.gl/NaH6rM"].

... and, as a final aside: Pluto is awesome. It was visited in July 2015 by the New Horizons probe, which found a world that was much more rich, dynamic, and active than anyone expected, including what appear to be churning lakes of solid nitrogen ringed by mountains of water ice, among other marvels. (Note the image has been colour-enhanced to bring out the variety of surface materials; the true-colour version of this image is here.) I, personally, don't feel it's at all necessary to 'grandfather' Pluto into the list of planets to really feel the awe at the amazing place it is - it's perfectly OK for it to be a cool place with cool science, that is also not a planet.
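The two-population argument above can be checked numerically. The sketch below uses approximate planetary discriminants taken from Soter (2006); the exact values are not critical, only the size of the gap between the two groups:

```ruby
# Approximate planetary discriminants mu = M / m (planet mass over the
# aggregate mass of other bodies sharing its orbital zone), roughly the
# values tabulated in Soter (2006).
discriminants = {
  "Mercury" => 9.1e4,  "Venus"   => 1.35e6, "Earth"  => 1.7e6,
  "Mars"    => 5.1e3,  "Jupiter" => 6.25e5, "Saturn" => 1.9e5,
  "Uranus"  => 2.9e4,  "Neptune" => 2.4e4,
  "Ceres"   => 0.33,   "Pluto"   => 0.077,  "Eris"   => 0.10
}

# Split at mu = 1: bodies that dominate their zone vs. those that don't.
planets, dwarfs = discriminants.partition { |_, mu| mu > 1.0 }

# The smallest "cleared orbit" value (Mars) still exceeds the largest
# "uncleared" value (Ceres) by more than four orders of magnitude.
gap = planets.map { |_, mu| mu }.min / dwarfs.map { |_, mu| mu }.max
puts "gap spans %.1f orders of magnitude" % Math.log10(gap)
```

Any threshold anywhere inside that four-decade gap classifies the bodies identically, which is what makes the criterion robust.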
{ "domain": "astronomy.stackexchange", "id": 6808, "tags": "solar-system, planet, dwarf-planets" }
How is temperature defined in non-equilibrium?
Question: I see that temperature is always defined in equilibrium. But there are systems which are not in equilibrium with their environment. How is temperature defined in these cases? Humans, for example, have a body temperature, though they are not in equilibrium with their environment. How is temperature defined in this case?

Answer: We can only talk about temperature in a nonequilibrium system when such a system is locally in thermal equilibrium. A nonequilibrium system does not have one specific temperature, as it is not in equilibrium (as you point out). We can, however, define a temperature at every point, provided that locally the system is in equilibrium. We can, in that case, put a thermometer at that point and, as soon as the thermometer comes into equilibrium with our system at that point, measure its temperature there.

As Alireza points out, in the example of a human (not being in thermal equilibrium with its environment) we can still talk about the local temperature of the human body (which inside the body will be higher than at the surface of the skin). Using a thermometer we can locally measure the body temperature at the point of contact. Typically we are interested in the 'core temperature' of the body, which is why we try to put the thermometer as 'deep' inside the body as possible (e.g. in an armpit or ear).
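The idea of a temperature defined at every point can be illustrated with a toy numerical sketch (the numbers are made up for illustration): a 1D rod held at a "core" temperature on one end and a "skin" temperature on the other, relaxed by explicit finite-difference heat diffusion. Each cell is assumed to be locally in equilibrium, so T[i] is well defined even though the rod as a whole has no single temperature:

```ruby
n     = 11
temps = Array.new(n, 300.0)
temps[0], temps[-1] = 310.0, 295.0   # fixed boundary temperatures (K)
alpha = 0.4                          # diffusion number (stable below 0.5)

# Explicit finite-difference relaxation of the heat equation.
500.times do
  updated = temps.dup
  (1..n - 2).each do |i|
    updated[i] = temps[i] + alpha * (temps[i - 1] - 2 * temps[i] + temps[i + 1])
  end
  temps = updated
end

# At steady state the profile is (nearly) linear between the two ends:
# a different local temperature at every point, none of them "the"
# temperature of the rod.
puts temps.map { |t| t.round(1) }.join(", ")
```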
{ "domain": "physics.stackexchange", "id": 50151, "tags": "thermodynamics, temperature, non-equilibrium" }
Print lyrics of 99 Bottles of Beer
Question: I read 99 Bottles of OOP, and one of the offhand comments was that doing the 99 bottles problem with composition was another route that one could take (the book used inheritance). Here is my attempt. Here are the lyrics needed: Beer Song

Some difficulties I had:

1) Implementation of successor was tricky. I could not simply pass in a successor object, because then BottleNumber(99) would need to hold BottleNumber(98), which would need to hold... Instead I used successor_number and generated a successor when needed.

2) The factory seemed messy - the arguments for the initialize method stacked up and up. Named arguments only made things longer. Sometimes I had to implement a default_object and other times I could use default named parameters. Should this be standardized throughout?

Comments welcome.

    class BeerSong
      def verse(number)
        bottle_number = BottleNumber.for(number)
        "#{bottle_number} of beer on the wall, #{bottle_number} of beer.\n".capitalize +
          "#{bottle_number.action}, #{bottle_number.successor} of beer on the wall.\n"
      end

      def verses(starting, ending)
        starting.downto(ending).map do |number|
          verse(number)
        end.join("\n")
      end

      def song
        verses(99, 0)
      end
    end

    class BottleNumber
      attr_reader :number, :container, :pronoun, :quantity, :action, :successor_number

      class << self
        def for(number)
          return number if number.is_a? BottleNumber
          case number
          when 0
            BottleNumber.new(number, quantity: 'no more', successor_number: 99,
                             action: 'Go to the store and buy some more')
          when 1
            BottleNumber.new(number, container: 'bottle', pronoun: 'it')
          else
            BottleNumber.new(number)
          end
        end
      end

      def initialize(number, container: 'bottles', pronoun: 'one', quantity: nil,
                     action: nil, successor_number: nil)
        @number = number
        @container = container
        @pronoun = pronoun
        @quantity = quantity || default_quantity
        @action = action || default_action
        @successor_number = successor_number || default_successor_number
      end

      def to_s
        "#{quantity} #{container}"
      end

      def default_successor_number
        number - 1
      end

      def default_quantity
        number.to_s
      end

      def default_action
        "Take #{pronoun} down and pass it around"
      end

      def successor
        BottleNumber.for(successor_number)
      end
    end

Answer: Some comments from looking through your code:

Successor: I think that's fine - how else were you going to do it? Metz does exactly the same thing, doesn't she? When successor is called, a new bottle with number - 1 is created - unless of course the bottle number is zero, in which case you start right back at 99.

Knowledge of the arguments and their order: Consider this:

    BottleNumber.new(number, quantity: 'no more', successor_number: 99, action: 'Go to the store and buy some more')

I don't like this. Why? Because every time you need to instantiate a bottle you need to KNOW what goes in there, and you also need to know the order in which the arguments go in. You could eliminate the need to know the argument order by passing in a hash. That's probably the only bit of criticism I can add.

Inheritance: For this particular problem, inheritance seems like a better fit - it just seems a lot cleaner than dealing with the messiness of passing in those parameters. Anyway, those are just my thoughts and I hope you find them of some use.
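The reviewer's hash suggestion can be sketched in miniature (stripped down to two attributes; the names follow the question's code but this is a hypothetical refactoring, not the reviewer's own): merge caller-supplied options over a defaults hash, so callers need to know neither the argument order nor the full parameter list.

```ruby
class BottleNumber
  # Defaults live in one place; callers override only what they care about.
  DEFAULTS = { container: 'bottles', pronoun: 'one' }

  attr_reader :number, :container, :pronoun

  def initialize(number, options = {})
    opts = DEFAULTS.merge(options)
    @number    = number
    @container = opts[:container]
    @pronoun   = opts[:pronoun]
  end

  def to_s
    "#{number} #{container}"
  end
end

puts BottleNumber.new(6).to_s                       # "6 bottles"
puts BottleNumber.new(1, container: 'bottle').to_s  # "1 bottle"
```

The trade-off is that a plain hash loses the automatic unknown-keyword check that Ruby's keyword arguments give you, so a typo like `contaner:` is silently ignored instead of raising an ArgumentError.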
{ "domain": "codereview.stackexchange", "id": 25475, "tags": "object-oriented, ruby" }
Why under Lorentz transformations the Higgs boson is a scalar field and under $SU(2)$ it is a doublet?
Question: I am a bit confused about this difference. My understanding is that when we build a $G$-bundle, where $G$ is a gauge group, we have a representation $\rho:G\to GL(V)$ that acts on the fibers of the $G$-bundle. Now if we want to act with $SU(2)$, for example, on a scalar field $\phi$, we should use a one-dimensional representation, since $\phi:M\to\mathbb{C}$, right? But how, during this process, does the field acquire two components? I would say that $\phi:M\to \mathbb{C}$ and $\phi:M\to \mathbb{C}^2$ are sections of different bundles, so how can they be the same?

PS: I would appreciate an answer in terms of fiber bundles.

Answer: When we say scalar, spinor, vector, and so on, field, we mean which representation of the frame bundle the field belongs to; or, in index notation, which spacetime indices the field carries: none, spinor, vector, and so on. We can combine this with internal symmetries, which are $G$-bundles for some gauge group $G$, for example $SU(2)$. In indices this is some additional internal index. For example, the gauge potential in QCD is usually written $A_{\mu a}$, where $\mu$ is the vector index and $a$ the color ($\operatorname{ad} SU(3)$) index.

The way to do this is that if $E,F$ are vector bundles over $M$, then there exists a bundle $E \otimes F$ over $M$ whose fiber over each point is the tensor product of the fibers of $E$ and $F$. The structure group of this product bundle is the product of the structure groups of $E$ and $F$. Thus we can speak of things like an $SU(2)$ singlet scalar or an $SU(3)$ triplet spinor. In the former case $E$ is the trivial line bundle and $F$ the $SU(2)$ doublet bundle.

[The proof of this theorem consists of writing the statement out in a local section and checking that the transition maps work properly. What is needed for this is that $u \otimes v$ is smooth in both arguments, using the usual notion of derivatives on finite-dimensional vector spaces. Thus the statement generalizes to functors like $\wedge, \oplus$.]
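The combination of the two representations can be made concrete in one formula; the notation below is the standard one for the Standard Model Higgs doublet (with $\tau^a$ the Pauli matrices):

```latex
% The Higgs field is a section of the trivial (scalar) bundle tensored
% with the rank-2 SU(2) bundle: no spacetime index, one internal doublet index.
\Phi(x) = \begin{pmatrix} \phi^{+}(x) \\ \phi^{0}(x) \end{pmatrix},
\qquad
\Phi(x) \;\longmapsto\; e^{\,i\,\alpha^{a}(x)\,\tau^{a}/2}\,\Phi\!\left(\Lambda^{-1}x\right).
```

Under a Lorentz transformation $\Lambda$ only the argument changes (the trivial representation of the frame bundle acts on the value, so $\Phi$ is a scalar), while under a gauge transformation only the two internal components mix (the fundamental representation of $SU(2)$ acting fiberwise, so $\Phi$ is a doublet).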
{ "domain": "physics.stackexchange", "id": 32262, "tags": "special-relativity, gauge-theory, field-theory, group-representations, higgs" }