Light Front Dynamics and Infinite Momentum Frame
Question: What is the relationship between Light Front Dynamics (one of the forms of dynamics pioneered by Dirac) and the infinite momentum frame? In the literature, it is claimed that the two are very different and should not be confused. This is also reiterated in L. Susskind's string theory course (available on YouTube). Answer: Lenny is confused, I think. They're closely related, pretty much the same thing. Light front dynamics is pretty much the same approach as the "light cone gauge", except that "light front dynamics" etc. is used by people close to nuclear physics and "light cone gauge" is used by the stringy and nearby researchers. In both cases, we want to slice the spacetime along light-like slices such as $ct+z=0$, which we call "one moment"; that's the limit of the slices in the rest frames of infinitely boosted observers (in the $z$ direction in this case). For gauge fields, we also want to impose some light-cone gauge for the components, like $A_t+A_z=0$. Both of them are really the limit of the "infinite momentum frame" for the momentum going to infinity, up to some conventional rescaling of quantities by the infinite momentum, and whenever one may discuss things in the "infinite momentum frame", it is possible to actually take the equations to the limit and rewrite all the calculations in the light cone gauge. In the light cone gauge, all the components etc. are actually finite, like the $P^\pm$ components of the momentum. In the infinite momentum frame, the energy and one component of the momentum are... infinite, but the scaling with the infinite number is universal and may be dropped, and that's how we get the finite light-cone-gauge formulae. These issues were a source of considerable confusion e.g. in the Banks-Fischler-Shenker-Susskind paper founding Matrix theory, which used the terminology of the "infinite momentum frame". 
People who realized that this was nothing else than a contrived way to talk about the light cone gauge, like myself, have been presenting all the results as results in the light cone gauge – which is discretized in the Matrix theory case. Lenny (who was inexperienced in the modern machinery of the light cone gauge although he was a co-father of IMF of a sort) rediscovered these well-known things many months later and made a big deal but there has never been any real deal about it. The infinite momentum frame is just an outdated, awkward way to use the same approach as the light front dynamics. If someone needs a reference stating that they're equivalent, take e.g. http://arxiv.org/pdf/nucl-th/9804029.pdf It says, on page 5, "It [light front dynamics] is also equivalent to the usual equal time formalism in the infinite momentum frame".
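For concreteness, one common convention for the light-cone quantities mentioned above (factors of $\sqrt{2}$ vary between communities):

$$ x^{\pm} = \frac{x^0 \pm x^3}{\sqrt{2}}, \qquad p^{\pm} = \frac{p^0 \pm p^3}{\sqrt{2}} $$

A boost with rapidity $\beta$ along $z$ acts as $x^\pm \to e^{\pm\beta} x^\pm$, so the infinite-boost ("infinite momentum") limit only rescales the light-cone components rather than mixing them; dividing out the universal factor $e^\beta$ is exactly the "conventional rescaling" that leaves the finite light-cone-gauge quantities such as $P^\pm$.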
{ "domain": "physics.stackexchange", "id": 11639, "tags": "quantum-field-theory, string-theory, reference-frames" }
Sequencing a specific region of a genome
Question: First off, I'm new to bioinformatics and I am learning about DNA sequencing. Let's say I knew that a specific region of a genome contained information about a disease (whether or not a person had the disease). It would make sense that we would only want to sequence that part of the genome in order to make a detector for this disease. Would it be possible to 'cut out' this region of the genome and only sequence that part (so we don't have to sequence the entire thing)? If not, how would we sequence only this part of the genome? (It doesn't make sense to sequence the other parts, as they don't give us information.) Thanks in advance. Answer: Yes, it is possible to sequence a specific region of the genome. The method is called targeted resequencing. Resequencing is basically sequencing something that has already been sequenced. This means that instead of assembling all of your sequence reads from scratch, you can just align them to the reference sequence (in your case, the entire human genome has been sequenced). Targeted resequencing means you're sequencing a specific region, such as a gene. This requires enrichment of the specific DNA you wish to sequence, which can be done by PCR amplification (using primers that flank your desired sequence) or hybridisation (using probes complementary to your desired sequence that are fixed to a surface), among other methods. For information on methods for enriching a DNA sequence, give this a read: Mamanova, L. et al. Target-enrichment strategies for next-generation sequencing. Nat. Methods 7, 111-118 (2010).
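A toy sketch of why flanking primers select one region (pure string matching on an invented sequence; real primer design also weighs melting temperature, specificity, GC content, and the reverse primer binds the complementary strand):

```python
# Hypothetical miniature 'genome' and primer binding sites, for illustration only.
reference = "TTGACCTagcGATTACAGGCATcgaTTGCCA".upper()
fwd, rev_binding = "GATT", "CGAT"  # invented sites flanking the target region

start = reference.find(fwd)
end = reference.find(rev_binding, start) + len(rev_binding)
amplicon = reference[start:end]  # only this stretch gets amplified, then sequenced
print(amplicon)  # GATTACAGGCATCGAT
```

Everything outside the primer pair is never amplified, which is why only the region of interest ends up on the sequencer.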
{ "domain": "biology.stackexchange", "id": 2979, "tags": "bioinformatics" }
Python error when trying the Autonomy package on ROS Hydro
Question: I run a launch file for the iRobot Create 2 (https://github.com/AutonomyLab/create_autonomy) and hit an issue that I think is related to Python. My setup: Ubuntu 12.04, ROS Hydro, iRobot Create 2.

First I run, without ca_description/launch/create_2.launch:

$ roslaunch ca_driver create_2.launch [desc:=false] [publish_tf:=true]

and it starts communication with the iRobot. Next I run the launch file ca_description/launch/create_2.launch and get OSError: [Errno 8] Exec format error:

Traceback (most recent call last):
  File "/opt/ros/hydro/lib/python2.7/dist-packages/roslaunch/__init__.py", line 279, in main
    p.start()
  File "/opt/ros/hydro/lib/python2.7/dist-packages/roslaunch/parent.py", line 257, in start
    self._start_infrastructure()
  File "/opt/ros/hydro/lib/python2.7/dist-packages/roslaunch/parent.py", line 206, in _start_infrastructure
    self._load_config()
  File "/opt/ros/hydro/lib/python2.7/dist-packages/roslaunch/parent.py", line 121, in _load_config
    self.config = roslaunch.config.load_config_default(self.roslaunch_files, self.port, verbose=self.verbose)
  File "/opt/ros/hydro/lib/python2.7/dist-packages/roslaunch/config.py", line 428, in load_config_default
    loader.load(f, config, verbose=verbose)
  File "/opt/ros/hydro/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 698, in load
    self._load_launch(launch, ros_config, is_core=core, filename=filename, argv=argv, verbose=verbose)
  File "/opt/ros/hydro/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 670, in _load_launch
    self._recurse_load(ros_config, launch.childNodes, self.root_context, None, is_core, verbose)
  File "/opt/ros/hydro/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 614, in _recurse_load
    self._param_tag(tag, context, ros_config, verbose=verbose)
  File "/opt/ros/hydro/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 95, in call
    return f(*args, **kwds)
  File "/opt/ros/hydro/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 240, in _param_tag
    value = self.param_value(verbose, name, ptype, *vals)
  File "/opt/ros/hydro/lib/python2.7/dist-packages/roslaunch/loader.py", line 466, in param_value
    p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE)
  File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
    raise child_exception
OSError: [Errno 8] Exec format error

Originally posted by science00000 on ROS Answers with karma: 20 on 2016-12-07. Post score: 0

Original comments

Comment by mgruhler on 2016-12-08: This seems like there are problems with access rights or the shebang. create_2.launch just starts one node, robot_state_publisher, and it uses xacro to evaluate the urdf. Is xacro installed? From source? Did you copy something over using a Windows PC? Did you change the launch file?

Comment by science00000 on 2016-12-08: Hi, I tried to check the xacro file with rosrun xacro xacro.py ~/dev/catkin_ws/src/create_autonomy/ca_description/urdf/create_2.urdf.xacro and it outputs the error: xacro.XacroException: Some parameters were not set for macro xacro:create_base

Comment by science00000 on 2016-12-08: In the file create_base.urdf.xacro the parameter wheel_separation is not set to a specific value. I tried setting one but got another error: xacro.XacroException: Invalid parameter "wheel_separation" while expanding macro "xacro:create_base"

Comment by science00000 on 2016-12-09: Hi mig, I have modified the launch file for debugging and then get the error OSError: [Errno 8] Exec format error. Thank you :)

Answer: Maybe it has something to do with macro default parameters added in Indigo (not in Hydro)? I guess xacro would not be happy with this line in particular. You could try removing all the parameters that have a default and just hardcode the values in create_base_gazebo.urdf.xacro. 
Originally posted by jacobperron with karma: 1870 on 2016-12-08. This answer was ACCEPTED on the original site. Post score: 0

Original comments

Comment by science00000 on 2016-12-09: jacobperron Your point is correct :) I have updated as below

Comment by science00000 on 2016-12-09: ......
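As mgruhler's comment hints, [Errno 8] Exec format error means the kernel refused to exec a file, typically because its shebang line is missing or mangled (e.g. by Windows CRLF line endings after copying files over from a Windows PC). A minimal sketch reproducing it with a hypothetical throwaway script:

```python
import errno
import os
import stat
import subprocess
import tempfile

# A file that is executable but whose first line is not a valid '#!' shebang
# makes the kernel return ENOEXEC, which Python reports as
# "OSError: [Errno 8] Exec format error" -- the same error roslaunch hit.
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write("echo hello\n")  # note: no '#!/bin/sh' first line
    path = f.name
os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)

try:
    subprocess.Popen([path])
except OSError as e:
    print(e.errno == errno.ENOEXEC)  # True on Linux
```

Checking the first two bytes of the script roslaunch tries to run (they should be `#!`) is a quick way to confirm this diagnosis.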
{ "domain": "robotics.stackexchange", "id": 26434, "tags": "ros, create-autonomy, create2" }
Unit of Angular velocity
Question: Why is the angular velocity $\omega$ always written in $rad/sec$? Is there anything wrong if I write it in $degrees/sec$? If not, then why do almost all books use $rad/sec$? Answer: $\omega$ is the angular velocity, not the angular displacement. You can write it in deg/sec if you wish. The reason rad/sec is used is that the identities $\frac{d}{dx}\cos(x) = -\sin(x)$ and $\frac{d}{dx}\sin(x) = \cos(x)$ only hold when $x$ is measured in radians.
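A quick numerical check of why those derivative identities force radians: the difference quotient of $\sin$ matches $\cos$ only when the argument is in radians; in degrees it picks up a factor of $\pi/180$.

```python
import math

h = 1e-6

# In radians: d/dx sin(x) == cos(x)
x = 1.0  # rad
deriv = (math.sin(x + h) - math.sin(x)) / h
print(abs(deriv - math.cos(x)) < 1e-4)  # True

# In degrees: the same difference quotient gives (pi/180) * cos, not cos
xd = 30.0  # degrees
deriv_deg = (math.sin(math.radians(xd + h)) - math.sin(math.radians(xd))) / h
print(abs(deriv_deg - math.cos(math.radians(xd))) < 1e-4)                    # False
print(abs(deriv_deg - (math.pi / 180) * math.cos(math.radians(xd))) < 1e-8)  # True
```

So formulas like $v = \omega r$ or $\frac{d}{dt}\sin(\omega t) = \omega\cos(\omega t)$ come out clean only if $\omega$ is in rad/sec; in deg/sec every such formula would carry stray $\pi/180$ factors.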
{ "domain": "physics.stackexchange", "id": 44613, "tags": "conventions, units, dimensional-analysis, angular-velocity" }
Determining ppm concentrations from food label information for ICP-OES analysis?
Question: Overview: I am looking to determine what ppm values of trace metals are present in a particular formula, while considering digestion, dilution, and detection limits for ICP-OES. For example, one serving is 15 g of powdered formula, which is then dissolved in 150 mL of solution, according to the label. The label reads 15 micrograms of Mn. Goal: Find the concentration of a metal from analysis of 1 g of formula digested and then diluted to 100 mL. Attempt: 15 micrograms of Mn per 15 g of powdered formula. As micrograms per gram are equivalent to ppm (15 ug/15 g = 1.0 ppm), there should be 1 ppm of Mn per 15 g of dry formula. Taking 1 g of the formula, digesting it, and diluting to 100 mL: 100 mL/1 g x (X read) = 1 ug/g (ppm). The AAS or ICP-OES read value should then be 0.01 ppm. This would be well within the detection limit for ICP-OES. However, I am not sure if my above approach is correct. Is there something obvious that I am missing? Any insight would be appreciated. Answer: You should not have a single sample preparation for all the elements; the sample preparation will differ element by element. Your calculations have subtle misconceptions. Don't convert to concentrations, but work with masses for such problems. So, you have 15 $\mu$g Mn in 15 g sample. This means you have 1 $\mu$g Mn in 1 g sample (see, I am avoiding calculating ppm in the solid as yet). So, if you dissolve 1 $\mu$g of Mn in 100 mL of water, your final solution Mn concentration is 0.001 mg Mn / 0.1 L, or 0.01 mg/L Mn. So our numerical values match, but your statement is slightly problematic: "there should be 1 ppm of Mn per 15 g of dry formula." No, even if you take one ton of sample, it will still be 1 ppm Mn (w/w), because concentration does not depend on sample mass or volume. Analyte masses do, so if you take a larger sample mass, the analyte mass will increase too, but its concentration in the sample will remain the same. This is the reason to work with masses and then calculate concentration at the end. 
As you already know, AAS is less sensitive than ICP-OES in general, but it is less expensive. The critical info is called the characteristic concentration in AAS for each element, which is defined as the concentration that will produce 0.0044 absorbance units under optimized conditions in an air-acetylene flame on a standard 10 cm burner head. Clearly, 0.01 ppm Mn is out of the question on FAAS, because even under the best conditions, 0.05 ppm will produce an absorbance of only 0.0044. Such a value is prone to tons of errors. As a result you need a more concentrated sample, say 10-fold more concentrated. I will leave it as an exercise. This is a screenshot of the so-called Cookbook of Perkin Elmer's AAS. "The AAS or ICP-OES read value should then be 0.01 ppm." Don't trust machine-calculated concentrations. Make a calibration curve yourself. No instrument can tell you the concentration without a calibration curve.
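The mass-first bookkeeping recommended above can be written out directly (values taken from the question's label):

```python
# Work with analyte masses throughout; convert to concentration only at the end.
mn_per_serving_ug = 15.0  # label: 15 micrograms Mn
serving_mass_g = 15.0     # per 15 g of powder
sample_mass_g = 1.0       # mass actually digested
final_volume_L = 0.100    # diluted to 100 mL

# Mass of Mn carried into the digest:
mn_in_sample_ug = mn_per_serving_ug / serving_mass_g * sample_mass_g  # 1 ug

# Concentration of the final solution presented to the instrument:
conc_mg_per_L = (mn_in_sample_ug / 1000.0) / final_volume_L
print(round(conc_mg_per_L, 6))  # 0.01 (mg/L, i.e. 0.01 ppm in solution)
```

Note that the 1 ppm (w/w) figure for the solid never enters the calculation; only the analyte mass and the final volume matter, which is exactly the point of the answer.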
{ "domain": "chemistry.stackexchange", "id": 15994, "tags": "everyday-chemistry, analytical-chemistry" }
Parallel Job Consumer using TPL
Question: I need to provide a service (either a Windows Service or an Azure Worker Role) which will handle the parallel execution of jobs. These jobs could be anything from importing data to compiling reports, or sending out mass notifications, etc.

Database Design: The whole process of running these jobs is persistent. JobDefinition stores a varchar reference to the IJob concrete type class. JobInstance is an instance of JobDefinition which needs to be executed as a job. JobInstanceEvent stores the process of executing an instance (the change of states).

Class Structure: In order to create new jobs, the simple IJob interface needs to be implemented:

public interface IJob
{
    bool Execute();
}

JobHandler is responsible for deserializing the serialized varchar in the database and also saving state changes:

public class JobHandler
{
    public string State
    {
        get { return jobInstance.State; }
        set
        {
            using (MyEntities context = new MyEntities())
            {
                context.JobInstances.Attach(jobInstance);
                jobInstance.State = value;
                context.SaveChanges();
            }
            SaveJobEvent(String.Empty);
        }
    }

    private JobInstance jobInstance;

    public JobHandler(JobInstance job)
    {
        jobInstance = job;
    }

    public IJob CreateJobObject()
    {
        Type type = Type.GetType(jobInstance.JobDefinition.FullAssemblyType);
        XmlSerializer serializer = new XmlSerializer(type);
        using (StringReader reader = new StringReader(jobInstance.SerialisedObject))
        {
            return serializer.Deserialize(reader) as IJob;
        }
    }

    public void SaveJobEvent(string jobEventMessage)
    {
        using (MyEntities context = new MyEntities())
        {
            context.JobInstances.Attach(jobInstance);
            jobInstance.JobInstanceEvents.Add(new JobInstanceEvent()
            {
                State = jobInstance.State,
                Date = DateTime.UtcNow,
                Message = jobEventMessage
            });
            context.SaveChanges();
        }
    }
}

The JobConsumer is what will run in the service. It performs two primary parallel tasks:

Adding items to the blocking queue (through polling)
Executing items from the blocking queue (through thread blocking)

It is able to run multiple jobs concurrently (ParallelExecutionSlots).

public class JobConsumer
{
    public IJobSource JobSource { get; private set; }
    public BlockingCollection<JobHandler> JobQueue { get; private set; }
    public int ParallelExecutionSlots { get; private set; }

    private bool isPopulating;
    private CancellationTokenSource consumerCancellationTokenSource;

    public JobConsumer(IJobSource jobSource, int parallelExecutionSlots)
    {
        JobQueue = new BlockingCollection<JobHandler>();
        JobSource = jobSource;
        ParallelExecutionSlots = parallelExecutionSlots;
        consumerCancellationTokenSource = new CancellationTokenSource();
        StartPopulatingQueue();
    }

    public void StartPopulatingQueue()
    {
        StartPopulatingQueue(500);
    }

    public void StartPopulatingQueue(int pollingDelay)
    {
        if (!isPopulating)
        {
            isPopulating = true;
            var task = Task.Factory.StartNew(() =>
            {
                while (isPopulating)
                {
                    foreach (var job in JobSource.ReadNewJobs())
                    {
                        job.State = "Queued";
                        JobQueue.Add(job);
                    }
                    Tasks.Task.Delay(pollingDelay).Wait();
                }
            })
            .ContinueWith(t =>
                {
                    isPopulating = false;
                    JobConsumerLog.LogException("JobConsumer populating process failed", t.Exception);
                },
                CancellationToken.None,
                TaskContinuationOptions.OnlyOnFaulted,
                TaskScheduler.Current);
        }
    }

    public void StopPopulatingQueue()
    {
        isPopulating = false;
    }

    public void StartConsumer()
    {
        Task.Factory.StartNew(() =>
        {
            SemaphoreSlim slotsSemaphore = new SemaphoreSlim(ParallelExecutionSlots);
            foreach (var job in JobQueue.GetConsumingEnumerable(consumerCancellationTokenSource.Token))
            {
                slotsSemaphore.Wait();
                JobHandler jobHandler = job;
                Task.Factory.StartNew(() =>
                {
                    try
                    {
                        ExecuteJob(jobHandler);
                    }
                    finally
                    {
                        slotsSemaphore.Release();
                    }
                },
                TaskCreationOptions.LongRunning);
            }
            consumerCancellationTokenSource.Token.ThrowIfCancellationRequested();
        }, consumerCancellationTokenSource.Token)
        .ContinueWith(t =>
            {
                JobConsumerLog.LogException("JobConsumer execution process failed", t.Exception);
            },
            CancellationToken.None,
            TaskContinuationOptions.OnlyOnFaulted,
            TaskScheduler.Current)
        .ContinueWith(t =>
            {
                JobConsumerLog.LogException("JobConsumer execution process stopped", t.Exception);
            },
            CancellationToken.None,
            TaskContinuationOptions.OnlyOnCanceled,
            TaskScheduler.Current);
    }

    public void StopConsumer()
    {
        consumerCancellationTokenSource.Cancel();
    }

    private void ExecuteJob(JobHandler job)
    {
        job.State = "Executing";
        try
        {
            if (job.CreateJobObject().Execute())
            {
                job.State = "Successful";
            }
            else
            {
                job.State = "Failed";
            }
        }
        catch (Exception ex)
        {
            job.State = "FailedOnException";
        }
    }
}

Feedback on the overall design would be much appreciated. I'm looking for a maintainable and extensible solution. I am using .NET 4.5. (Note: this is a second iteration of the following post)

Answer: Only a few minor things:

In StartPopulatingQueue, it's unclear to the reader what unit pollingDelay is in. Two classic solutions: change it to be a TimeSpan (I usually prefer this since it provides the most flexibility), or add the unit suffix to the parameter name, like Ms for milliseconds or Sec for seconds, etc.

CreateJobObject can potentially return null if the deserialized object cannot be cast to IJob, in which case ExecuteJob will throw a NullReferenceException, which is typically not very meaningful. You should throw a more meaningful exception on deserialization (as mentioned in the comment by Roman, you could use a direct cast instead of as, which would throw an InvalidCastException).

It feels wrong to me that the consumer defines the job execution states. The consumer seems largely responsible for managing the job queue and it should stick to that single responsibility. Hence I think Execute should be moved into JobHandler. I'd also consider moving CreateJobObject to JobInstance so JobInstance is responsible for providing the actual object instance. 
Then JobHandler is just concerned with the state changes of the job.
{ "domain": "codereview.stackexchange", "id": 16626, "tags": "c#, design-patterns, serialization" }
Unexpectedly low melting point of Aluminium
Question: According to Wikipedia, the melting point of aluminium is 933.47 K (660 °C) and the melting point of magnesium is 923 K (650 °C), yet the melting point of sodium is merely 370.87 K (98 °C). A huge difference between sodium and magnesium is expected, but the small difference between magnesium and aluminium confused me. Can anyone explain? Much appreciated. Answer: Aluminium, magnesium and sodium form metallic bonds instead of covalent bonds (like diamond, as discussed). The metallic bond strength depends on: the number of electrons that become delocalized from the metal; the charge of the cation (metal); the size of the cation. Sodium has one delocalized electron, compared to magnesium (2) and aluminium (3). Its cation is also larger than the magnesium and aluminium cations, and it has a lower charge. This is why the strength of the metallic bond is lower in $\ce{Na}$ than it is in $\ce{Mg}$ and $\ce{Al}$, and hence the melting point is lower. This is also reflected in their crystal structures: $\ce{Na}$ has a body-centered cubic crystal with 68% packing efficiency, whereas $\ce{Mg}$ and $\ce{Al}$ have HCP and CCP structures respectively. Both of these structures have a packing efficiency of 74%. This means that for a given unit volume, $\ce{Mg}$ and $\ce{Al}$ have more atoms in it than $\ce{Na}$. This implies stronger bonds and a higher melting point.
{ "domain": "chemistry.stackexchange", "id": 13089, "tags": "theoretical-chemistry, solid-state-chemistry" }
Magnetic force calculation for parallel wires using Maxwell stress tensor. Issue with shear forces
Question: I am trying to calculate the forces between permanent magnets and ferromagnetic surfaces with the Maxwell stress tensor, using image theory and the Biot-Savart law. However, I discovered a weird behavior regarding shear forces, where I somehow must be using the Maxwell stress tensor wrong. I can break the problem down to the force calculation between two parallel wires carrying current in the same direction. The calculation of the flux density is quite easy based on Ampère's law. For two parallel wires aligned along the y axis, the $x$ component of the flux density on the centerline between the permanent magnets ($y = 50$ in the graphic) must be zero. In theory I can now find the magnetic forces in 2D by utilizing the Maxwell stress tensor: $$ F_x = \int_{-\infty}^\infty \frac{1}{\mu_0}\left( B_x(x,0)^2 - \frac{1}{2}\left(B_x(x,0)^2+B_y(x,0)^2\right) + B_x(x,0)\, B_y(x,0) \right) dx $$ $$ F_y = \int_{-\infty}^\infty \frac{1}{\mu_0}\left(B_x(x,0)\, B_y(x,0) + B_y(x,0)^2 - \frac{1}{2}\left(B_x(x,0)^2+B_y(x,0)^2\right)\right) dx $$ Note: in the graphic the $0$ is at the $y$ value $50$; I just kept the zero in the equation to explain it here. This works fine for $F_y$: the result matches the Lorentz force equation well. For $F_x$, however, the result and the equation itself make no sense at all. I would expect zero force in the $x$ direction. However, if we insert $B_x(x,0) = 0$ (based on symmetry) into the equation for $F_x$ we get $$ F_x = \int_{-\infty}^\infty \frac{1}{\mu_0}\left( 0 - \frac{1}{2}\left(0^2+B_y(x,0)^2\right) + 0 \right) dx = -\int_{-\infty}^\infty \frac{1}{2\mu_0} B_y(x,0)^2\, dx \neq 0 $$ Because $B_y$ is squared, the integrand always has the same sign, resulting in a net force in the $x$ direction, which makes no sense at all. Can anybody explain to me where my mistake in using the Maxwell stress tensor is? Answer: You've applied the stress tensor incorrectly. 
If we want to find the force in the $i$-direction on a collection of charges & currents using the stress tensor, then it is $$ F_i = \oint T_{ij} n_j da, $$ where $\hat{n}$ is the normal to the surface (and I'm using Einstein summation). In your case, for the lower half-space you have $$ \hat{n} = \hat{y} \quad \Rightarrow \quad n_x = n_z = 0, \quad n_y = 1 $$ and so $$ F_x = \oint T_{xy} \, da \qquad F_y = \oint T_{yy} \, da. $$ Note that the $i$ index has to match on both sides of the equation, and that the $j$ index is set equal to $y$ by virtue of the fact that $\hat{n}$ only points in the $y$-direction. You were calculating instead $$ F_x = \oint (T_{xx} + T_{xy}) \, da \qquad F_y = \oint (T_{xy} + T_{yy}) \, da; $$ both expressions were technically incorrect, but you got the right result for $F_y$ because $T_{xy} = 0$ in this case.
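A numerical sanity check of the answer's surface-integral expressions, for two parallel wires (assumed current and spacing; force per unit length in $z$, fields of infinite straight wires from Ampère's law):

```python
import numpy as np

mu0 = 4e-7 * np.pi
I = 2.0   # current in each wire (A), both flowing in +z (assumed)
a = 0.05  # wires at y = +a and y = -a, so separation d = 2a (assumed)

# Field on the midplane y = 0 from the two wires:
x = np.linspace(-500.0, 500.0, 1_000_001)
k = mu0 * I / (2 * np.pi)
Bx = k * a / (x**2 + a**2) - k * a / (x**2 + a**2)  # cancels exactly by symmetry
By = 2 * k * x / (x**2 + a**2)

# The surface normal is n = +y_hat, so only the T_xy and T_yy components enter:
Txy = Bx * By / mu0                          # shear  -> F_x
Tyy = (By**2 - 0.5 * (Bx**2 + By**2)) / mu0  # normal -> F_y

dx = x[1] - x[0]
trapz = lambda f: dx * (f.sum() - 0.5 * (f[0] + f[-1]))
Fx, Fy = trapz(Txy), trapz(Tyy)

Fy_lorentz = mu0 * I**2 / (2 * np.pi * 2 * a)  # standard parallel-wire result
print(abs(Fx) < 1e-15)                         # True: no shear force
print(abs(Fy - Fy_lorentz) / Fy_lorentz < 0.01)  # True: matches Lorentz force
```

With the correct prescription $F_x = \oint T_{xy}\,da$, the shear force vanishes identically on the midplane (since $B_x = 0$ there), while $F_y = \oint T_{yy}\,da$ reproduces the familiar attraction $\mu_0 I^2 / (2\pi d)$ per unit length.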
{ "domain": "physics.stackexchange", "id": 93204, "tags": "electromagnetism, forces, magnetic-fields, maxwell-equations" }
Receive array of data and access employee info fields
Question: This method receives an array of data and does some stuff in a database. I need to get some reviews on it if possible.

public function doSomeStuff($arr = array())
{
    $id = $arr['Employee']['id'];
    $name = $arr['Employee']['name'];
    $status = $arr['Employee']['status'] == 'Disabled' ? 0 : 1;
    $user_id = $arr['Employee']['user_id'];

    $query = "update `mytable` set `status` = $status, `name`=$name WHERE `user_id` = ?";
    self::_runthis($query, array($user_id));
}

I am looking to see if this is fool-proof for the data it will receive and process. Answer:

Instead of accessing the Employee key every time, just pass in the array rooted at the Employee key.

What happens if all of the array indexes are not set? You should either check for that or use an object on which you know they all exist. This is important mostly for data integrity (what if you create an Employee array somewhere and forget a status or something?), but accessing non-set array keys also issues a notice.

I suspect there's something wrong with your class design as a whole. In particular, the same class should probably not have a method to update an employee and a method to run a query (you don't happen to have something extending some vague DB class, do you...?).

Why is name not a placeholder like the user_id?

Use real names when posting code here -- doSomeStuff is very vague, and that hampers our ability to review.

A text status should probably not be passed into this, since it seems like you're dealing with a low-level record here.

Don't quote entity names unless you need to. It breaks compatibility across SQL dialects for no reason.

What if $name has spaces? What if it has a ' in it? Look into SQL injection and prepared statements. (They're not just about security; they're also about correctness. In its current form, your code has a very major bug.)

A default value for the parameter shouldn't be provided. Would you want someone to call $obj->doSomeStuff();
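The placeholder point above is language-agnostic. A minimal sketch in Python's sqlite3 (the table and values are made up for illustration) shows how a bound parameter safely carries a value that would break string interpolation:

```python
import sqlite3

# In-memory toy database standing in for `mytable` from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (user_id INTEGER, name TEXT, status INTEGER)")
conn.execute("INSERT INTO mytable VALUES (1, 'old name', 1)")

# Both values travel as bound parameters, so a name containing a quote
# (or an injection attempt) cannot alter the statement itself.
name, status, user_id = "O'Brien", 0, 1
conn.execute(
    "UPDATE mytable SET status = ?, name = ? WHERE user_id = ?",
    (status, name, user_id),
)
print(conn.execute("SELECT name, status FROM mytable").fetchone())
# ("O'Brien", 0)
```

Interpolating `$name` directly into the SQL string, as the reviewed code does, would produce a syntax error (or worse) for exactly this input.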
{ "domain": "codereview.stackexchange", "id": 2473, "tags": "php, php5" }
A type for prime numbers
Question: I have recently discovered that in type theory there is a concept of a "predicate type", which is a type \$A\$ formed out of all members of the underlying type \$U\$ that satisfy a given predicate function pred :: u -> Bool. I wondered if we can do that in Haskell, so I took this library for primes and decided to make a type for the prime numbers. This is how it looks:

module Data.Numbers.Primes.Type
  ( Prime
  , getValue
  , getIndex
  , primeIndex
  , getPrime
  , maybePrime
  ) where

import Data.List (elemIndex)
import Data.Numbers.Primes

data Prime int = Prime { getValue :: int, getIndex :: Int } deriving Show

instance Integral int => Enum (Prime int) where
  toEnum = getPrime
  fromEnum = getIndex

instance Eq (Prime int) where
  x == y = getIndex x == getIndex y

instance Ord (Prime int) where
  x `compare` y = getIndex x `compare` getIndex y

-- | If a given number is prime, give its index.
primeIndex :: (Integral n, Integral i) => n -> Maybe i
primeIndex x
  | isPrime x = fromIntegral <$> elemIndex x primes
  | otherwise = Nothing

-- | Give n-th prime.
getPrime :: (Integral n, Integral int) => n -> Prime int
getPrime n = Prime (primes !! fromIntegral n) (fromIntegral n)

-- | If a given number is prime, give it back wrapped as such.
maybePrime :: (Integral n, Integral int) => n -> Maybe (Prime int)
maybePrime x
  | isPrime x = Prime (fromIntegral x) <$> primeIndex x
  | otherwise = Nothing

I'm certain this code is flawed in many ways yet unbeknownst to me, of which I hope the condescending reader would let me know so I could improve. However, there are a few points that I'm suspicious about even now: Am I dealing with the index type the right way? Is it good to "cache" a prime's index inside the data object? Am I doing it right? Would it be possible and/or better to cache the indices implicitly via a memoizing function? 
On the pros side I see one less explicit data field whose consistency could be compromised; on the cons side, we will not be able to easily save and restore data objects between program runs. Is an Int the right type for storing indices of primes, in light of the facts that no prime has a negative index and there is a type Word for bounded unsigned integer values? (For some reason, Prelude makes no use of Word for, say, functions that deal with list indices, so, suspecting that this design choice may have had a good motivation, I have for now refrained from using Word either.) Maybe the index type should rather be polymorphic over the Integral class? Is it good to use fromIntegral everywhere to make the types of the functions that deal with indices more general? Is this type actually safe, as it's intended to be? That is, can I be sure there is no way to construct a Prime data object containing an arbitrary value or an inconsistent index? For example, if I used a newtype (as I did in one of the drafts) and a derived Enum instance, I would be able to toEnum any number into a prime. Does the design ensure this will never be the case? (At least in a reasonable usage scenario involving a non-malevolent user.) Answer: Unfortunately, it's not safe. That's due to the records. 
If I know any Prime, I can construct a new Prime:

import Data.Numbers.Primes.Type

example :: Prime -> Prime
example p = p { getIndex = 0, getValue = 0 } -- whoops

So you want to get rid of the records in your type and write the getters by hand:

data Prime int = Prime int Int

getValue :: Prime int -> int
getValue (Prime v _) = v

getIndex :: Prime int -> Int
getIndex (Prime _ idx) = idx

Alternatively rename them, rebind them and don't export the record selectors:

data Prime int = Prime { _getIndex :: Int, _getValue :: int }

getValue :: Prime n -> n
getValue = _getValue

getIndex :: Prime n -> Int
getIndex = _getIndex

Also, you can improve maybePrime:

maybePrime :: Integral n => n -> Maybe (Prime n)
maybePrime x = Prime x <$> primeIndex x

Either x is prime and primeIndex returns Just idx, or it isn't. There is no need to check x twice. maybePrime is usually called a "smart constructor", by the way.

Am I dealing with the index type the right way?

That depends on your use-case. If you need the index of the Prime often, it makes sense to cache it. If you don't need the index often, don't. You could provide an IndexedPrime that always contains the index, though:

type Index = Int

data IndexedPrime int = IndexedPrime Index int
newtype Prime int = Prime int

You could split those into separate modules, but that's a matter of personal preference.

Is an Int the right type for storing indices of primes, in the light of the facts that no prime has a negative index and there is a type Word for bounded unsigned integer values?

That's a good question. The answer is: yes, Int is a right type, but it's not the right type. Most of the time, you want to use the result of getIndex in some operation that wants an Int, not a Word, just like you want to use take (a - b). Note that I said that Int is a correct type. You can of course use Word.

Is it good to use fromIntegral everywhere to make the types of the functions that deal with indices more general?

It can lead to interesting behaviour. For example, I can use

exampleInput :: Int
exampleInput = 13^9 + 33 + 13 -- yes, that's a prime. found by lucky guess :)

exampleOutput :: Maybe (Prime Word8)
exampleOutput = maybePrime exampleInput
-- exampleOutput = Just (Prime {getValue = 195, getIndex = 333})

That's why I changed maybePrime's type, to ensure that the conversion has to be made explicit by the user before they apply maybePrime. Overall, good ideas and implementation, but try to keep the fromIntegral parts in your own code to a minimum. By the way, if you're interested in predicate types, have a look at Liquid Haskell.
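The smart-constructor idea is not Haskell-specific. A rough Python analogue (invented names, trial division standing in for the primes library) validates once at construction and keeps the wrapper immutable, so a Prime can never hold a composite value:

```python
from dataclasses import dataclass

def _is_prime(n: int) -> bool:
    # Simple trial division; fine for illustration.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

@dataclass(frozen=True)  # frozen: no field mutation after construction
class Prime:
    value: int
    def __post_init__(self):
        if not _is_prime(self.value):
            raise ValueError(f"{self.value} is not prime")

def maybe_prime(n: int):
    """Smart constructor: Prime on success, None otherwise."""
    try:
        return Prime(n)
    except ValueError:
        return None

print(maybe_prime(13))  # Prime(value=13)
print(maybe_prime(14))  # None
```

As in the Haskell version, the guarantee only holds if the module exports `maybe_prime` and clients do not reach into the class directly.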
{ "domain": "codereview.stackexchange", "id": 27562, "tags": "haskell, primes, type-safety" }
How/when is calculus used in Computer Science?
Question: Many computer science programs require two or three calculus classes. I'm wondering, how and when is calculus used in computer science? The CS content of a degree in computer science tends to focus on algorithms, operating systems, data structures, artificial intelligence, software engineering, etc. Are there times when Calculus is useful in these or other areas of Computer Science? Answer: I can think of a few courses that would need Calculus, directly. I have used bold face for the usually obligatory disciplines for a Computer Science degree, and italics for the usually optional ones. Computer Graphics/Image Processing, and here you will also need Analytic Geometry and Linear Algebra, heavily! If you go down this path, you may also want to study some Differential Geometry (which has multivariate Calculus as a minimum prerequisite). But you'll need Calculus here even for very basic things: try searching for "Fourier Transform" or "Wavelets", for example -- these are two very fundamental tools for people working with images. Optimization, non-linear mostly, where multivariate Calculus is the fundamental language used to develop everything. But even linear optimization benefits from Calculus (the derivative of the objective function is absolutely important) Probability/Statistics. These cannot be seriously studied without multivariate Calculus. Machine Learning, which makes heavy use of Statistics (and consequently, multivariate Calculus) Data Science and related subjects, which also use lots of Statistics; Robotics, where you will need to model physical movements of a robot, so you will need to know partial derivatives and gradients. Discrete Math and Combinatorics (yes!, you may need Calculus for discrete counting!) -- if you get serious enough about generating functions, you'll need to know how to integrate and derivate certain formulas. And that is useful for Analysis of Algorithms (see the book by Sedgewick and Flajolet, "Analysis of Algorithms"). 
Similarly, Taylor Series and calculus can be useful in solving certain kinds of recurrence relations, which are used in algorithm analysis. Analysis of Algorithms, where you use the notion of limit right from the start (see Landau notation, "little $o$" -- it's defined using a limit) There may be others -- this is just off the top of my head. And, besides that, one benefits indirectly from a Calculus course by learning how to reason and explain arguments with technical rigor. This is more valuable than students usually think. Finally -- you will need Calculus in order to, well, interact with people from other Exact Sciences and Engineering. And it's not uncommon that a Computer Scientist needs to not only talk but also work together with a Physicist or an Engineer.
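The limit-based definition of little-$o$ mentioned above can be made concrete numerically: $f(n) = o(g(n))$ means $f(n)/g(n) \to 0$. A small Python sketch (illustrative example functions, not from the answer):

```python
# Numerical illustration of little-o: f(n) = o(g(n)) iff f(n)/g(n) -> 0.
# Here f(n) = n*log2(n) (think mergesort) and g(n) = n**2 (think bubble
# sort), so the ratio should shrink toward 0 as n grows.
import math

def ratio(n):
    return (n * math.log2(n)) / n ** 2

ratios = [ratio(10 ** k) for k in range(1, 7)]

# Strictly decreasing and heading to 0: n*log n = o(n^2).
assert all(a > b for a, b in zip(ratios, ratios[1:]))
assert ratios[-1] < 1e-4
```

No symbolic machinery is needed for the intuition: watching the ratio collapse over a few orders of magnitude of $n$ is exactly the limit the Landau definition talks about.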
{ "domain": "cs.stackexchange", "id": 6489, "tags": "education, mathematical-analysis" }
Why does the mass on the cart-pole have to fall?
Question: Not sure if I am posting this question in the correct community, as it relates primarily to reinforcement learning. Apologies early on if this is not so. In reinforcement learning many algorithms exist for 'solving' the cart-pole problem; that of balancing a mass on the edge of a stick, connected to a cart on a hinge, which has 1 DoF. There is TD learning, Q-learning and many other on and off-policy methods. There is also the more recent, model-based policy search method PILCO. What I am really wondering, I suppose, is more of a physics question: is there a need for active control? Why is it not possible to find the one point for the cart, which prevents the mass from moving, even incrementally, left or right as it sits atop the pole? Why does it always 'fall'? Answer: The upright pole is in a position of unstable equilibrium. If the pole deviates from the vertical by an angle $\theta$ then the torque rotating the pole away from the vertical is: $$ T = mg \frac{\ell}{2} \sin\theta $$ The moment of inertia of a pole about one end is $m\ell^2/3$, so the angular acceleration will be: $$ \frac{d^2\theta}{dt^2} = \frac{3 g}{2\ell} \sin\theta $$ The point is that even the tiniest deviation from the vertical, i.e. a non-zero value of $\theta$, will make the pole accelerate farther from the vertical and it will eventually fall. In the real world the pivot isn't a point and will have some friction, so in practice you probably could balance the pole. How easy it would be to balance would depend on exactly how the system had been made.
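The instability is easy to see by integrating the answer's equation. A rough Python sketch (assumed values: $g = 9.81$ m/s², pole length $\ell = 1$ m), using a simple Euler scheme:

```python
# Integrate theta'' = (3g / 2l) * sin(theta) from an almost-upright
# start.  Even a microradian of initial tilt grows until the pole has
# fallen right over: the upright position is an unstable equilibrium,
# which is why the cart needs active control.
import math

g, l = 9.81, 1.0
theta, omega = 1e-6, 0.0        # start almost exactly upright, at rest
dt = 1e-3
max_dev = 0.0
for _ in range(5000):           # simulate 5 seconds
    alpha = (3.0 * g) / (2.0 * l) * math.sin(theta)
    omega += alpha * dt
    theta += omega * dt
    max_dev = max(max_dev, abs(theta))

# The deviation has grown from 1e-6 rad to order 1 rad within seconds.
assert max_dev > 1.0
```

Linearizing $\sin\theta \approx \theta$ shows why: the deviation grows like $\cosh(\sqrt{3g/2\ell}\; t)$, i.e. exponentially, so no fixed cart position can hold it.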
{ "domain": "physics.stackexchange", "id": 26634, "tags": "newtonian-mechanics, equilibrium, stability" }
Why does $\alpha=1$ mean batch MC Learning?
Question: Here is a part of slide 4 from the link: https://tao.lri.fr/tiki-download_wiki_attachment.php?attId=1683 Why does $\alpha=1$ mean batch MC Learning? I do not see this clearly when I compare it with the averaging-returns formula. Answer: I did not read the link, but I am giving a standard derivation as can be found in Sutton and Barto, for instance. The average return formula can be reformulated: \begin{align} V_{n+1}(s) &= \frac{1}{n}\sum_{i=1}^{n}R_{i} \\ &= \frac{1}{n}\left(R_{n} +\sum_{i=1}^{n-1}R_{i} \right) \\ &= \frac{1}{n}\left(R_{n} +(n-1) \frac{1}{n-1}\sum_{i=1}^{n-1}R_{i} \right) \\ &= \frac{1}{n}\left(R_{n} + (n-1) V_{n}(s) \right) \\ &= \frac{1}{n}\left(R_{n} + nV_{n}(s) - V_{n}(s) \right) \\ &= V_{n}(s) + \frac{1}{n} (R_{n} - V_{n}(s)) \end{align} where $V_{n}$ is the estimate of the value function after $n-1$ averages over the return of the state $s$. We can interpret each $i$ as a single visit to $s$ or as a batch of visits of $s$ in which case $R_{i}$ would be the averaged return of this batch. Setting $\frac{1}{n} = \alpha$ directly results in the formula for incremental updates. Note that if we set $n = \alpha = 1$ we just set the value function of the state to the return of the first episode or the average of the first batch $V(s) = R_{1}$.
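The derivation can be checked numerically. This Python sketch (with made-up returns) confirms that the incremental rule reproduces the plain average, and that a constant $\alpha = 1$ simply overwrites the estimate with the most recent return, i.e. the batch-MC case when each $R$ is itself a batch average:

```python
# Incremental rule V <- V + (1/n)(R - V) equals the running mean of the
# returns; with constant alpha = 1 the update degenerates to V = R.
returns = [2.0, 4.0, 9.0, 1.0]

V = 0.0
for n, R in enumerate(returns, start=1):
    V += (1.0 / n) * (R - V)
assert abs(V - sum(returns) / len(returns)) < 1e-12   # mean is 4.0

V = 0.0
alpha = 1.0
for R in returns:
    V += alpha * (R - V)   # with alpha = 1 this is just V = R
assert V == returns[-1]
```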
{ "domain": "ai.stackexchange", "id": 3690, "tags": "reinforcement-learning" }
Was the reason that Computers were invented to solve a philosophical question about the foundations of mathematics?
Question: This guy asserts: I’ll say it — the computer was invented in order to help to clarify … a philosophical question about the foundations of mathematics. (This problem being Entscheidungsproblem - The Decision Problem) The reference here states that the Church-Turing thesis was attempting to answer this question. My question is - is it true that modern computers are a byproduct of trying to solve 'The Decision Problem'? (My intuition told me that modern computers were more a byproduct of trying to break Nazi encryption codes). (perhaps with some pre-war German influence). Answer: I can see his point, but I think he's really (deliberately?) confusing computation (and the mathematics thereof) and computers. A computer is certainly a device for performing computation, but what Church and Turing created was a (well, two, but they're "the same") theoretical (read mathematical) model of the process of computation. That is, they define a mathematical system which (if you believe the Church-Turing thesis) captures what it is possible to compute on any machine that can perform mechanical computation (mechanical in the sense that it can be automated, and yes, that's a little hand wavy, but that's another story). Computers don't work like Turing Machines (or the Lambda calculus, which doesn't even pretend to be a machine). Bits of them look kind of similar, and indeed Turing does play an important role in the development of modern computers, but they're not a byproduct of the maths, any more than aeroplanes are a byproduct of the dynamics that describe airflow across their wings.
{ "domain": "cstheory.stackexchange", "id": 1759, "tags": "computability, ho.history-overview, machine-models" }
Snake game for Windows console
Question: I would appreciate any suggestions on how to make my code better and more efficient. #include <stdio.h> #include <time.h> #include <windows.h> struct SnakeNode { int x; int y; struct SnakeNode *next; }; struct Food { int x; int y; int isEaten; }; void Gotoxy(int column,int row); int CreateScoreFile(); void CreateSnake(struct SnakeNode **snake); void Graphics(struct SnakeNode *snake,struct Food food,int score,int highscore); int isSnake(int x,int y,struct SnakeNode *snake); void CreateFood(struct Food *food,struct SnakeNode *snake); int GetSnakeSize(struct SnakeNode *snake); struct SnakeNode * GetListItem(struct SnakeNode *snake,int index); int lose(struct SnakeNode *snake); void SaveScore(int score); void Physics(struct SnakeNode **snake,int *direction,struct Food *food,int *score,int *highscore,int *endgame); void DestroySnake(struct SnakeNode **snake); char map[20][50] = {"##################################################", "# #", "# #", "# #", "# #", "# #", "# #", "# #", "# #", "# #", "# #", "# #", "# #", "# #", "# #", "# #", "# #", "# #", "# #", "##################################################"}; int main() { int endgame = 0; int gamespeed = 25; int score = 0; int highscore = 0; int direction = 1; struct SnakeNode *snake = NULL; struct Food food; food.isEaten = 1; srand(time(NULL)); highscore = CreateScoreFile(); CreateSnake(&snake); do { CreateFood(&food,snake); Graphics(snake,food,score,highscore); Sleep(gamespeed); system("CLS"); Physics(&snake,&direction,&food,&score,&highscore,&endgame); }while(endgame == 0); DestroySnake(&snake); return 0; } void Gotoxy(int column,int row) { COORD c; c.X = column; c.Y = row; SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE),c); } int CreateScoreFile() { FILE *scfile; int highscore = 0; scfile = fopen("Score file.txt","r"); if(scfile == NULL) { scfile = fopen("Score file.txt","w"); fprintf(scfile,"%d",highscore); } else { fscanf(scfile,"%d",&highscore); } fclose(scfile); return highscore; } void 
CreateSnake(struct SnakeNode **snake) { struct SnakeNode *hnext = NULL; struct SnakeNode *hprev = NULL; int x = 27; int y = 10; int size = 0; *snake = (struct SnakeNode *)malloc(sizeof(struct SnakeNode)); (*snake)->x = x; (*snake)->y = y; (*snake)->next = NULL; hprev = *snake; hnext = (*snake)->next; while(hnext != NULL || size < 4) { x--; hnext = (struct SnakeNode *)malloc(sizeof(struct SnakeNode)); hnext->x = x; hnext->y = y; hnext->next = NULL; hprev->next = hnext; hprev = hnext; hnext = hnext->next; size++; } } void Graphics(struct SnakeNode *snake,struct Food food,int score,int highscore) { int x,y; struct SnakeNode *temp = NULL; for(y=0;y<20;y++) { for(x=0;x<50;x++) { printf("%c",map[y][x]); } printf("\n"); } Gotoxy(food.x,food.y); printf("$"); temp = snake; while(temp != NULL) { Gotoxy(temp->x,temp->y); printf("*"); temp = temp->next; } Gotoxy(0,20); printf("\nScore: %d",score); if(score <= highscore) { printf(" ------ Highscore: %d",highscore); } else { printf(" ------ Highscore: %d",score); } } int isSnake(int x,int y,struct SnakeNode *snake) { struct SnakeNode *temp = NULL; temp = snake; while(temp != NULL) { if(temp->x == x && temp->y == y) { return 1; } temp = temp->next; } return 0; } void CreateFood(struct Food *food,struct SnakeNode *snake) { if(food->isEaten) { food->x = rand()%48+1; food->y = rand()%18+1; food->isEaten = 0; do { if(isSnake(food->x,food->y,snake)) { food->x = rand()%48+1; food->y = rand()%18+1; } }while(isSnake(food->x,food->y,snake)); } } struct SnakeNode * GetListItem(struct SnakeNode *snake,int index) { int i; struct SnakeNode *node = NULL; node = snake; for(i=0;i<index;i++) { node = node->next; } return node; } int GetSnakeSize(struct SnakeNode *snake) { int size = 0; while(snake != NULL) { size++; snake = snake->next; } return size; } int lose(struct SnakeNode *snake) { struct SnakeNode *iterator; iterator = snake->next; while(iterator != NULL) { if(snake->x == iterator->x && snake->y == iterator->y) { return 1; } iterator = 
iterator->next; } return 0; } void SaveScore(int score) { FILE *savscore; int filescore; savscore = fopen("Score file.txt","r"); if(savscore != NULL) { fscanf(savscore,"%d",&filescore); if(score > filescore) { fclose(savscore); savscore = fopen("Score file.txt","w"); fprintf(savscore,"%d",score); } } fclose(savscore); } void Physics(struct SnakeNode **snake,int *direction,struct Food *food,int *score,int *highscore,int *endgame) { int i; struct SnakeNode *lnode = NULL; struct SnakeNode *fnode = NULL; struct SnakeNode *nnode = NULL; if(GetAsyncKeyState(VK_RIGHT)) { if(*direction != 3) { *direction = 1; } } else if(GetAsyncKeyState(VK_LEFT)) { if(*direction != 1) { *direction = 3; } } else if(GetAsyncKeyState(VK_UP)) { if(*direction != 2) { *direction = 4; } } else if(GetAsyncKeyState(VK_DOWN)) { if(*direction != 4) { *direction = 2; } } if(*direction == 1) { for(i=GetSnakeSize(*snake)-1;i>0;i--) { lnode = GetListItem(*snake,i); fnode = GetListItem(*snake,i-1); lnode->x = fnode->x; lnode->y = fnode->y; } (*snake)->x = (*snake)->x + 1; if((*snake)->x > 48) { (*snake)->x = 1; } } else if(*direction == 3) { for(i=GetSnakeSize(*snake)-1;i>0;i--) { lnode = GetListItem(*snake,i); fnode = GetListItem(*snake,i-1); lnode->x = fnode->x; lnode->y = fnode->y; } (*snake)->x = (*snake)->x - 1; if((*snake)->x < 1) { (*snake)->x = 48; } } else if(*direction == 4) { for(i=GetSnakeSize(*snake)-1;i>0;i--) { lnode = GetListItem(*snake,i); fnode = GetListItem(*snake,i-1); lnode->x = fnode->x; lnode->y = fnode->y; } (*snake)->y = (*snake)->y - 1; if((*snake)->y < 1) { (*snake)->y = 18; } } else { for(i=GetSnakeSize(*snake)-1;i>0;i--) { lnode = GetListItem(*snake,i); fnode = GetListItem(*snake,i-1); lnode->x = fnode->x; lnode->y = fnode->y; } (*snake)->y = (*snake)->y + 1; if((*snake)->y > 18) { (*snake)->y = 1; } } if(lose(*snake)) { *endgame = 1; Graphics(*snake,*food,*score,*highscore); SaveScore(*score); Beep(250,250); Sleep(2000); } if((*snake)->x == (*food).x && (*snake)->y == 
(*food).y) { (*food).isEaten = 1; *score = *score + 10; Beep(1000,25); lnode = GetListItem(*snake,GetSnakeSize(*snake)-1); nnode = (struct SnakeNode *)malloc(sizeof(struct SnakeNode)); nnode->x = lnode->x; nnode->y = lnode->y; nnode->next = NULL; lnode->next = nnode; } } void DestroySnake(struct SnakeNode **snake) { struct SnakeNode *temp = NULL; while(*snake != NULL) { temp = *snake; *snake = (*snake)->next; free(temp); } } Answer: Inefficient movement I'm going to focus this review on one specific part of your code, which is the part where you move the snake. This part could be greatly improved; I'll show several areas below. Don't use pointers everywhere You currently pass in pointers such as *direction and *snake to your functions. But you don't actually change the value of *direction or *snake. So instead of passing pointers to these, just pass the values. That way, instead of *direction and *snake, you could just write direction and snake. Extract common code The big thing that stands out is that there is a lot of repetition in the code. There are four directions of movement and each direction uses the same huge chunk of code. Here is one direction: else if(*direction == 3) { for(i=GetSnakeSize(*snake)-1;i>0;i--) { lnode = GetListItem(*snake,i); fnode = GetListItem(*snake,i-1); lnode->x = fnode->x; lnode->y = fnode->y; } (*snake)->x = (*snake)->x - 1; if((*snake)->x < 1) { (*snake)->x = 48; } } In each of the four directions, you do the part where you move the segments of the snake forward (except for the head). So you could extract all that common code out. // This movement part was repeated 4 times before. Now it's here once. for(i=GetSnakeSize(snake)-1;i>0;i--) { lnode = GetListItem(snake,i); fnode = GetListItem(snake,i-1); lnode->x = fnode->x; lnode->y = fnode->y; } // These four parts just move the head in various directions. 
if (direction == 1) { snake->x++; if (snake->x > 48) { snake->x = 1; } } else if (direction == 2) { // etc. } \$O(n^2)\$ algorithm Your snake segment movement is inefficient. For each segment, it finds its node in the linked list, then finds the node in front of it, and then moves the segment. This ends up being \$O(n^2)\$, where \$n\$ is the length of the snake. You could do it in \$O(n)\$ time with a single pass through the snake like this: if (snake != NULL) { int prevX = snake->x; int prevY = snake->y; for (struct SnakeNode *seg = snake->next; seg != NULL; seg = seg->next) { int curX = seg->x; int curY = seg->y; seg->x = prevX; seg->y = prevY; prevX = curX; prevY = curY; } }
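The single-pass shift translates directly to other languages. A hedged Python sketch (hypothetical `move_right` helper, snake as a head-first list of `(x, y)` pairs, mirroring the 1..48 wrap of the C code):

```python
# O(n) single-pass move: each body segment takes the old position of the
# segment ahead of it, then the head advances and wraps at the walls.
def move_right(snake):
    prev = snake[0]                      # remember the old head position
    for i in range(1, len(snake)):
        snake[i], prev = prev, snake[i]  # shift the body in one pass
    x, y = snake[0]
    snake[0] = (1 if x + 1 > 48 else x + 1, y)

snake = [(27, 10), (26, 10), (25, 10)]   # head first
move_right(snake)
assert snake == [(28, 10), (27, 10), (26, 10)]
```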
{ "domain": "codereview.stackexchange", "id": 14964, "tags": "c, console, windows, snake-game" }
How much could this motor lift and how fast?
Question: How much could this motor lift, and how fast? If I connect a belt to its axis about 1 cm away, or even 10 cm away, from its tip. Torque: 1.5 Nm Peak Torque: 10 Nm Power: 9-30A, to 1000W. https://www.alibaba.com/product-detail/86BLF40-24v-48v-1000W-big-brushless_60644013087.html?spm=a2700.7724838.2017115.56.f5f862211YuM1C What kind of motor would I need to be able to lift 50 kg about 1 m high, in say 4 seconds? And to control finely the exact position within the 1 m; sometimes it should lift to 20 cm, sometimes to 30 cm. Is a stepper or servo motor capable of this task? Would a linear actuator be better? I read those have a low duty cycle, and they are slow, so they would lift it slowly and wait for the next time. Answer: Torque is measured in N·m (newton-metres). A newton is kg·m/s², so torque has units kg·m²/s². Torque $= r \times F = r \times m \times a$, where $s$ is distance, $t$ is time, $a$ is acceleration, and $r$ is the radius $= 1/10$ m. From $s = \frac{1}{2} a t^2$ with $s = 1$ m and $t = 4$ s: $a = 2s/t^2 = 1/8$ m/s². Plug it in: $1.5 = \frac{1}{10}\, m\, \frac{1}{8}$, so $m = 1.5 \times 10 \times 8 = 120$ kg. That seems too high to me, but that is what I get. (It is too high: this balance neglects gravity, which dominates when lifting; the belt force must supply $m(g+a)$, not just $ma$.)
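As a cross-check in code, using the question's numbers ($r = 0.1$ m, continuous torque 1.5 N·m, 1 m lift in 4 s) and including gravity, which the bare $\tau = r\,m\,a$ estimate leaves out:

```python
# Torque estimate with gravity included: the belt must supply
# F = m * (g + a), so the liftable mass is m = (tau / r) / (g + a).
g = 9.81                    # m/s^2
r, tau = 0.1, 1.5           # pulley radius (m), continuous torque (N*m)
s, t = 1.0, 4.0             # lift height (m) and time (s)

a = 2 * s / t ** 2          # from s = (1/2) a t^2  ->  a = 0.125 m/s^2
F = tau / r                 # max belt force: 15 N
m = F / (g + a)             # liftable mass once gravity is counted

assert abs(a - 0.125) < 1e-12
assert 1.4 < m < 1.6        # about 1.5 kg at this radius
```

So at a 10 cm radius this motor's continuous torque lifts roughly 1.5 kg, not 120 kg; lifting 50 kg would need either far more torque or a much smaller effective radius (i.e. gearing).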
{ "domain": "engineering.stackexchange", "id": 2255, "tags": "motors, robotics, stepper-motor" }
Shuffling a list while keeping order relative to related elements
Question: I'm looking to shuffle a list of the elements $a_1,\dots, a_6, \dots, e_1, \dots, e_6$ while keeping two rules: if I loop through the list and filter out a specific letter or number it should be in order: $a_1, a_2, a_3 \dots$ or $a_1, b_1, c_1 \dots$ How can I shuffle the list keeping these rules? I'm using python, so if there's a library that'd be great. Otherwise, just a generic way I could tackle this problem. Here's an example of a shuffle that would fit the criteria: $a_1, b_1, a_2, b_2, c_1, a_3, d_1, c_2, d_2, e_1, a_4, b_3,$ $c_3, d_3, b_4, d_4, c_4, a_5, e_2, d_5, e_3, c_5, a_6, b_5, e_4, a_7, $ $b_6, c_6, b_7, d_6, e_5, e_6, c_7, d_7, e_7$ Answer: Something like this perhaps? from random import randint letter_count = [0] * 5 for _ in range(35): while True: letter = randint(0,4) if letter == 0 or letter_count[letter-1] > letter_count[letter]: if letter_count[letter] < 7: break letter_count[letter] += 1 print(chr(97+letter),letter_count[letter]) It works, in a quick and dirty fashion, but it does not have a uniform distribution. You would calculate the probability of a sequence occurring by dividing by the number of possible elements at each step. At some point in the example you might have 5 possible elements and the probability of the sequence occurring would be 1/(some multiple of 5). For other sequences there would never be 5 possible elements and the probability of that sequence would be 1/(not a multiple of 5).
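Whatever generator is used, the two ordering rules are easy to verify after the fact. A small checker (hypothetical `valid` helper, not from the answer) makes the constraints concrete:

```python
# Checks both rules on a list of (letter, number) pairs: filtering by
# any one letter must give numbers 1, 2, 3, ... in order, and filtering
# by any one number must give letters a, b, c, ... in order.
def valid(seq):
    letters = {l for l, _ in seq}
    numbers = {n for _, n in seq}
    by_letter = all(
        [n for l2, n in seq if l2 == l] == sorted(n for l2, n in seq if l2 == l)
        for l in letters)
    by_number = all(
        [l for l, n2 in seq if n2 == n] == sorted(l for l, n2 in seq if n2 == n)
        for n in numbers)
    return by_letter and by_number

ok = [("a", 1), ("b", 1), ("a", 2), ("c", 1), ("b", 2)]
bad = [("b", 1), ("a", 1)]      # b1 before a1 breaks the number rule
assert valid(ok) and not valid(bad)
```

This is handy for testing any candidate shuffler, uniform or not, against randomly generated outputs.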
{ "domain": "cs.stackexchange", "id": 9576, "tags": "algorithms, randomness" }
Optimise Game of Life in Rust
Question: I recently picked up Rust and was making a CLI for Conway's Game of Life. I got it working but, looking back at it, there are places it could be improved. The main one is the function that generates the next board. Specifically, the part of that function which calculates the amount of alive neighbours a cell has. fn next_step(game_array: [[bool; WIDTH]; HEIGHT]) -> [[bool; WIDTH]; HEIGHT] { let mut next_state = [[false; WIDTH]; HEIGHT]; let mut neighbours: u8; // Will never be above 8 const HEIGHT_I: isize = HEIGHT as isize; const WIDTH_I: isize = WIDTH as isize; const NEIGHBOUR_LIST: [[isize; 2]; 8] = [[-1, -1], [-1, 0], [-1, 1], [ 0, -1], [ 0, 1], [ 1, -1], [ 1, 0], [ 1, 1]]; for rownum in 0..HEIGHT { for cellnum in 0..WIDTH { // FROM HERE neighbours = 0; for [j, k] in NEIGHBOUR_LIST { // This will break if width and height are set to larger than isize::MAX if game_array[(((rownum as isize + j % HEIGHT_I) + HEIGHT_I) % HEIGHT_I) as usize] [(((cellnum as isize + k % WIDTH_I) + WIDTH_I) % WIDTH_I) as usize] { neighbours += 1; } } // TO HERE // This is the cleanest way I could find to implement Life rules if neighbours == 3 || (game_array[rownum][cellnum] && neighbours == 2) { next_state[rownum][cellnum] = true; } } } next_state } I wanted the edges of the board to loop and this is the best way I could find to check the neighbours of a cell. However, it is verbose and hard to read. Is there a better way that I am missing? HEIGHT and WIDTH are the height and width of the board and will never be above isize::MAX. Answer: Your solution looks good! 
I would suggest the following improvements in order to make it more readable: Use type aliases to improve the readability: type State = [[bool; WIDTH]; HEIGHT]; Extract the coordinate wrapping logic into its own type to make it easier to understand: #[derive(Debug, Copy, Clone)] struct Coord { value: usize, max: usize, } impl Coord { fn new(value: usize, max: usize) -> Self { Coord { value, max, } } fn increment(&mut self) { self.value += 1; if self.value >= self.max { self.value = 0; } } fn decrement(&mut self) { self.value = self.value.checked_sub(1).unwrap_or(self.max - 1); } } Create Cell and NeighborsIter structures that encapsulate the logic of iterating over all neighbours of a given cell: #[derive(Debug, Copy, Clone)] struct Cell { x: Coord, y: Coord, } impl Cell { fn new(x: usize, y: usize) -> Self { Cell { x: Coord::new(x, HEIGHT), y: Coord::new(y, WIDTH), } } fn into_neighbors_iter(self) -> NeighborsIter { NeighborsIter::new(self) } } struct NeighborsIter { state: usize, cell: Cell, } impl NeighborsIter { fn new(init: Cell) -> Self { Self { state: 0, cell: init, } } } impl Iterator for NeighborsIter { type Item = Cell; fn next(&mut self) -> Option<Self::Item> { match self.state { 0 => { self.cell.x.decrement(); self.cell.y.decrement() } 1 => { self.cell.y.increment() } 2 => { self.cell.y.increment() } 3 => { self.cell.x.increment() } 4 => { self.cell.x.increment() } 5 => { self.cell.y.decrement() } 6 => { self.cell.y.decrement() } 7 => { self.cell.x.decrement() } _ => { return None; } } self.state += 1; Some(self.cell) } } Separate the counting of live neighbors into its own function: fn count_neighbors(&self, cell: Cell) -> u8 { let mut neighbours: u8 = 0; for cell in cell.into_neighbors_iter() { if self.state[cell.x.value][cell.y.value] { neighbours += 1; } } neighbours } Create a Board struct and move the next_step function into it as a method: #[derive(Debug)] struct Board { state: State } impl Board { pub fn new() -> Self { Board { state: [[false; 
WIDTH]; HEIGHT] } } pub fn from(state: State) -> Self { Board { state } } pub fn next_step(self) -> Board { let mut next_state = [[false; WIDTH]; HEIGHT]; for row in 0..HEIGHT { for col in 0..WIDTH { match self.count_neighbors(Cell::new(row, col)) { 3 => next_state[row][col] = true, 2 if self.state[row][col] => next_state[row][col] = true, _ => {} } } } Board { state: next_state } } fn count_neighbors(&self, cell: Cell) -> u8 { let mut neighbours: u8 = 0; for cell in cell.into_neighbors_iter() { if self.state[cell.x.value][cell.y.value] { neighbours += 1; } } neighbours } } Final Code: const WIDTH: usize = 10; const HEIGHT: usize = 10; type State = [[bool; WIDTH]; HEIGHT]; #[derive(Debug, Copy, Clone)] struct Coord { value: usize, max: usize, } impl Coord { fn new(value: usize, max: usize) -> Self { Coord { value, max, } } fn increment(&mut self) { self.value += 1; if self.value >= self.max { self.value = 0; } } fn decrement(&mut self) { self.value = self.value.checked_sub(1).unwrap_or(self.max - 1); } } #[derive(Debug, Copy, Clone)] struct Cell { x: Coord, y: Coord, } impl Cell { fn new(x: usize, y: usize) -> Self { Cell { x: Coord::new(x, HEIGHT), y: Coord::new(y, WIDTH), } } fn into_neighbors_iter(self) -> NeighborsIter { NeighborsIter::new(self) } } struct NeighborsIter { state: usize, cell: Cell, } impl NeighborsIter { fn new(init: Cell) -> Self { Self { state: 0, cell: init, } } } impl Iterator for NeighborsIter { type Item = Cell; fn next(&mut self) -> Option<Self::Item> { match self.state { 0 => { self.cell.x.decrement(); self.cell.y.decrement() } 1 => { self.cell.y.increment() } 2 => { self.cell.y.increment() } 3 => { self.cell.x.increment() } 4 => { self.cell.x.increment() } 5 => { self.cell.y.decrement() } 6 => { self.cell.y.decrement() } 7 => { self.cell.x.decrement() } _ => { return None; } } self.state += 1; Some(self.cell) } } #[derive(Debug)] struct Board { state: State } impl Board { pub fn new() -> Self { Board { state: [[false; WIDTH]; HEIGHT] } 
} pub fn from(state: State) -> Self { Board { state } } pub fn next_step(self) -> Board { let mut next_state = [[false; WIDTH]; HEIGHT]; for row in 0..HEIGHT { for col in 0..WIDTH { match self.count_neighbors(Cell::new(row, col)) { 3 => next_state[row][col] = true, 2 if self.state[row][col] => next_state[row][col] = true, _ => {} } } } Board { state: next_state } } fn count_neighbors(&self, cell: Cell) -> u8 { let mut neighbours: u8 = 0; for cell in cell.into_neighbors_iter() { if self.state[cell.x.value][cell.y.value] { neighbours += 1; } } neighbours } } Performance: After benchmarking the original implementation from the author against my new implementation, it is clear that the original modulus approach is significantly slower, on top of being harder to read. Experiment 1 parameters: Grid - 100x100 Starting state - Randomly generated (but same between the two methods) Game steps - 1000 Experiment 1 average execution time: Original implementation: 381.86 ms New implementation: 162.13 ms Experiment 2 parameters: Grid - 10x10 Starting state - Randomly generated (but same between the two methods) Game steps - 10000 Experiment 2 average execution time: Original implementation: 33.209 ms New implementation: 14.175 ms Additional information: The library used for the benchmarking is criterion. And the raw data can be seen below. 
game-of-life new impl, (100x100), random state, 1000 steps time: [158.70 ms 162.13 ms 166.53 ms] Found 7 outliers among 100 measurements (7.00%) 2 (2.00%) high mild 5 (5.00%) high severe --------------------------------------------------------------------- game-of-life original impl, (100x100), random state, 1000 steps time: [381.63 ms 381.86 ms 382.13 ms] Found 6 outliers among 100 measurements (6.00%) 2 (2.00%) low mild 2 (2.00%) high mild 2 (2.00%) high severe -------------------------------------------------------------------- game-of-life new impl, (10x10), random state, 10000 steps time: [13.847 ms 14.175 ms 14.580 ms] Found 10 outliers among 100 measurements (10.00%) 2 (2.00%) high mild 8 (8.00%) high severe --------------------------------------------------------------------- game-of-life original impl, (10x10), random state, 10000 steps time: [33.176 ms 33.209 ms 33.247 ms] Found 9 outliers among 100 measurements (9.00%) 2 (2.00%) low mild 4 (4.00%) high mild 3 (3.00%) high severe
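For a language-neutral view of what is being benchmarked, here is a Python sketch contrasting per-lookup modulo arithmetic with precomputed wrapped indices (an illustration of the idea, not a port of the Coord code above); both count live neighbours on a torus:

```python
# Two equivalent wrap-around strategies for toroidal neighbour counts:
# modulo on every lookup vs. index tables computed once up front.
W = H = 5
board = [[False] * W for _ in range(H)]
board[0][0] = board[0][4] = board[4][0] = True   # three live cells

def count_mod(r, c):
    total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr or dc:
                total += board[(r + dr) % H][(c + dc) % W]
    return total

up = [H - 1] + list(range(H - 1))     # wrapped "index - 1" table
down = list(range(1, H)) + [0]        # wrapped "index + 1" table
def count_precomp(r, c):
    rows = (up[r], r, down[r])
    cols = (up[c], c, down[c])        # square board, so tables are shared
    return sum(board[rr][cc] for rr in rows for cc in cols) - board[r][c]

assert all(count_mod(r, c) == count_precomp(r, c)
           for r in range(H) for c in range(W))
assert count_mod(4, 4) == 3           # wraps to reach all three corners
```

The table-based variant trades a tiny amount of setup for cheaper inner-loop lookups, which is the same trade the review's benchmark measures.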
{ "domain": "codereview.stackexchange", "id": 43722, "tags": "rust, game-of-life" }
What is the correct name for 2,4-diethyl-4-ethoxyhexane?
Question: A classmate asks you to draw the structure of 2,4-diethyl-4-ethoxyhexane, which he cannot find in a chemical reference manual. Draw the structure of 2,4-diethyl-4-ethoxyhexane. (Question continued...) The reason your classmate cannot find the compound in a chemical reference manual is that 2,4-diethyl-4-ethoxyhexane is the incorrect name for the compound. What is the correct name? Here are my steps: I count the longest carbon chain = 7 carbons = -heptane I look at all the substituents: 1 ethoxy group, 1 ethyl group and 1 methyl group I number the carbon chain starting from the end closer to the ethoxy group (alphabetical order) I add all the substituents and the carbon chain together In this case, this is the name I come up with: 3-ethoxy-3-ethyl-5-methylheptane, which is incorrect. Can somebody tell me where I went wrong, please? Answer: In short, your answer is correct. The steps that you have taken are correct, so it's likely the online homework tool has a key mistake.
{ "domain": "chemistry.stackexchange", "id": 8922, "tags": "organic-chemistry, nomenclature, ethers" }
Topic to be subscribed to using laser_height_estimation
Question: I would like to give a try using the laser_height_estimation package to estimate the height of my flying robot as could be found here. What topic should I subscribe to? I couldn't find it in its tutorial. Thanks in advance. EDIT: I found there are 2 of them: /mav/height_to_base and /mav/height_to_footprint. Which one is more relevant, and what are their datatypes? Originally posted by alfa_80 on ROS Answers with karma: 1053 on 2012-03-09 Post score: 0 Answer: With our quadrotor, the base frame is at the center of motion, and the footprint frame is where the legs are. So when the MAV is on the ground, the height to base might be around 0.18 meters, but the height to footprint will be close to 0. You can subscribe to whichever topic is more convenient in your case. Make sure you include a static publisher from base to footprint. The messages published are of type mav_msgs::Height. Originally posted by Ivan Dryanovski with karma: 4954 on 2012-03-09 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by alfa_80 on 2012-03-09: But then, do you think, will this package work for my hardware configuration as in the provided link? I'm worried, it's meant to work with your hardware setup (laser mounting) only. Comment by alfa_80 on 2012-03-09: I couldn't configure your msg type correctly, am I right to include this "#include <mav_msgs/Height.h>" and in manifest.xml, I add " "? Comment by Ivan Dryanovski on 2012-03-09: yes that include looks correct
{ "domain": "robotics.stackexchange", "id": 8542, "tags": "ros" }
Why is the Toffoli gate not sufficient for universal quantum computation?
Question: I know that there are papers (cf. arXiv:quant-ph/0205115) out there which prove that the Toffoli gate by itself is not enough for universal quantum computation, but I haven't had the time to go through the whole proof. Could someone give me the crux of the proof or the intuition for this? Answer: The Toffoli gate is just a permutation. If you start in a known basis state, application of a Toffoli just changes it into another basis state, one that you can easily calculate classically (after all, it’s a decision based on looking at 3 bit values). Repeating that doesn’t change anything. To make it universal, you need to add something like Hadamard which introduces superposition.
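The permutation claim can be made concrete with a few lines of Python (an illustration, not part of the original answer), treating the Toffoli gate as a map on 3-bit basis states:

```python
# The Toffoli gate merely permutes the 8 computational basis states: it
# flips the target bit exactly when both controls are 1.  So repeated
# Toffolis keep a basis state a basis state, trackable classically.
def toffoli(state):
    c1, c2, t = state            # state = (control1, control2, target)
    return (c1, c2, t ^ (c1 & c2))

basis = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
image = [toffoli(s) for s in basis]

assert sorted(image) == basis              # bijection: a pure permutation
assert toffoli((1, 1, 0)) == (1, 1, 1)     # the only swapped pair
assert all(toffoli(toffoli(s)) == s for s in basis)   # self-inverse
```

A Hadamard, by contrast, sends $|0\rangle$ to $(|0\rangle + |1\rangle)/\sqrt{2}$, a state no permutation of basis states can produce; that is the missing ingredient for universality.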
{ "domain": "quantumcomputing.stackexchange", "id": 1099, "tags": "quantum-gate, universal-gates" }
Create a binary tree, code review request
Question: Ok, code reviewers, I want you to pick my code apart and give me some feedback on how I could make it better or simpler. (Generics would be added a bit later.) public class CreateABinaryTree { private TreeNode root; public CreateABinaryTree() { } /** * Constructs a binary tree in order of elements in an array. * After the number of nodes in the level have maxed, the next * element in the array would be a child of leftmost node. * * http://codereview.stackexchange.com/questions/31334/least-common-ancestor-for-binary-search-tree/31394?noredirect=1#comment51044_31394 */ public CreateABinaryTree(List<Integer> items) { this(); create(items); } private static class TreeNode { TreeNode left; int element; TreeNode right; TreeNode(TreeNode left, int element, TreeNode right) { this.left = left; this.element = element; this.right = right; } } private void create (List<Integer> items) { root = new TreeNode(null, items.get(0), null); final Queue<TreeNode> queue = new LinkedList<TreeNode>(); queue.add(root); final int half = items.size() / 2; for (int i = 0; i < half; i++) { if (items.get(i) != null) { final TreeNode current = queue.poll(); final int left = 2 * i + 1; final int right = 2 * i + 2; if (items.get(left) != null) { current.left = new TreeNode(null, items.get(left), null); queue.add(current.left); } if (right < items.size() && items.get(right) != null) { current.right = new TreeNode(null, items.get(right), null); queue.add(current.right); } } } } } Answer: I only see one really questionable thing in your code. Why do you call this() in your constructor? Your parameter-less constructor does absolutely nothing, so this line is irrelevant. If anything, I would actually even remove that constructor entirely so that your API can only be used with your meaningful and useful one. Also, as a side note, the following line opens you up to an exception, since you never check whether items is null (a NullPointerException) or empty (an IndexOutOfBoundsException): 
root = new TreeNode(null, items.get(0), null);
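For illustration, the same level-order construction (element $i$'s children live at indices $2i+1$ and $2i+2$) can be sketched in Python, with the empty-input guard the reviewer asks for:

```python
# Level-order tree build from a list: a queue pairs each dequeued node
# with the index arithmetic left = 2i + 1, right = 2i + 2.
from collections import deque

class Node:
    def __init__(self, value):
        self.value, self.left, self.right = value, None, None

def build(items):
    if not items:                       # guard the empty case up front
        return None
    root = Node(items[0])
    queue = deque([root])
    for i in range(len(items)):
        node = queue.popleft()
        left, right = 2 * i + 1, 2 * i + 2
        if left < len(items):
            node.left = Node(items[left])
            queue.append(node.left)
        if right < len(items):
            node.right = Node(items[right])
            queue.append(node.right)
    return root

root = build([1, 2, 3, 4, 5])
assert (root.left.value, root.right.value) == (2, 3)
assert (root.left.left.value, root.left.right.value) == (4, 5)
```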
{ "domain": "codereview.stackexchange", "id": 4627, "tags": "java, algorithm, tree" }
Hamiltonian of a simple graph
Question: I have a spin system: As shown in the picture, there are two spins S1 and S2, and a pair of interactions between them. One is a ferromagnetic interaction and the other is an antiferromagnetic interaction. I am trying to calculate the Hamiltonian of this system. The Hamiltonian of the system is: $$ H = -J_F S1_z S2_z +J_{AF} S1_z S2_z $$ $S1_z$ is the spin matrix for the Z direction for spin 1 and $S2_z$ is the spin matrix for the Z direction for spin 2. If we allow two random values for $J_F$ and $J_{AF}$, -0.5 and 0.5 respectively, the Hamiltonian of the system is as follows. $$ H = 0.5 S1_z S2_z + 0.5 S1_z S2_z $$ $$ = S1_z S2_z $$ $$ = \begin{pmatrix} 1 & 0 \\ 0 & -1 \\ \end{pmatrix} \times \begin{pmatrix} 1 & 0 \\ 0 & -1 \\ \end{pmatrix} $$ $$ = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ \end{pmatrix} $$ Have I calculated the Hamiltonian correctly? Answer: The Hamiltonian of this system lives in a 4-dimensional Hilbert space since you have two spin-$1/2$ particles. Therefore, you should represent the spin matrices in this four-dimensional space like this: $S_1^z=\begin{pmatrix} -0.5 & 0 &0 &0 \\ 0&-0.5 &0 &0 \\ 0 &0 &0.5 &0 \\ 0 &0 &0 &0.5 \end{pmatrix}$ , $S_2^z=\begin{pmatrix} -0.5 & 0 &0 &0 \\ 0&0.5 &0 &0 \\ 0 &0 &-0.5 &0 \\ 0 &0 &0 &0.5 \end{pmatrix}$ The order of the four states along the rows and columns is $|DD\rangle,|DU\rangle, |UD\rangle, |UU\rangle$ where $U$ stands for spin up and $D$ stands for spin down. In this case $S_1^z.S_2^z=\begin{pmatrix} 0.25 & 0 &0 &0 \\ 0&-0.25 &0 &0 \\ 0 &0 &-0.25 &0 \\ 0 &0 &0 &0.25 \end{pmatrix}$
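The answer's $4\times4$ matrices can be reproduced with a small Kronecker product. A Python sketch (basis order $|DD\rangle, |DU\rangle, |UD\rangle, |UU\rangle$ as in the answer):

```python
# On two spin-1/2 sites, S1z = Sz (x) I and S2z = I (x) Sz, where (x)
# is the Kronecker product.  Both are diagonal, so their product is too.
def kron(A, B):
    """Kronecker product of two square matrices given as nested lists."""
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)]
            for i in range(n)]

Sz = [[-0.5, 0.0], [0.0, 0.5]]   # diag(-1/2, +1/2) in the |D>, |U> basis
I2 = [[1.0, 0.0], [0.0, 1.0]]

S1z = kron(Sz, I2)
S2z = kron(I2, Sz)

assert [S1z[i][i] for i in range(4)] == [-0.5, -0.5, 0.5, 0.5]
assert [S2z[i][i] for i in range(4)] == [-0.5, 0.5, -0.5, 0.5]
# Diagonal of S1z . S2z, matching the matrix at the end of the answer:
assert [S1z[i][i] * S2z[i][i] for i in range(4)] == [0.25, -0.25, -0.25, 0.25]
```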
{ "domain": "physics.stackexchange", "id": 6780, "tags": "statistical-mechanics, condensed-matter" }
How do I find transfer functions from a state space representation?
Question: Suppose I have a MIMO system in state space representation, for example: $A=\begin{bmatrix} 1 &2 &3 \\ 4&5 &6 \\ 7&8 &9 \end{bmatrix}$ $B=\begin{bmatrix} 2 &3 \\ 5& 7\\ 9 & 1 \end{bmatrix}$ $C=\begin{bmatrix} 6 &7 &11 \end{bmatrix}$ $D=0$ I have used random numbers to fill these matrices. I am using Matlab. Now, suppose I want to find the transfer function from the input $u$ to an output $x_2$ for example, how is it possible to do this? I know that I can create the state space representation from these matrices. So, suppose I want the state space representation of a plant, I would do this: G = ss(A,B,C,D) and if I want to get the transfer function from it I could do: G = ss2tf(A,B,C,D) and so from here I could plot the frequency response: bode(G) but now, suppose I want to obtain the transfer function from the disturbance to the output, or the transfer function from the input $u$ to the output, how can I do this? [EDIT] For example, how do I obtain the sensitivity function from a state space representation of a MIMO system? Or the control sensitivity function? Answer: or the transfer function from the input to the output, how can I do this? Is that not what your $G$ is? If you want to find the transfer function from just one element of $u$ to the output, then either delete the columns of $B$ that don't pertain to that element of $u$ and get your transfer function, or just look at the column of the transfer function that matches the element you want. but now, suppose I want to obtain the transfer function from the disturbance to the output, Then you would make a column for $B$ (or make a new $B$) that represents the effect that the disturbance has on the system, and extract the transfer function from that. Edit I neglected to include the actual math that Matlab is doing under the hood.
This is twice bad -- once because I did it, and twice because I really don't like people who just push the Matlab buttons without understanding what's actually going on. If you have a system in state space representation: $$\begin{split}x_k = A x_{k-1} + B u_k \\ y_k = C x_{k-1} + D u_k\end{split}$$ then you can take the $z$ transform: $$\begin{split}X(z) = A\ X(z) z^{-1} + B\ U(z) \\ Y(z) = C\ X(z) z^{-1} + D\ U(z)\end{split}$$ Then (leaving out steps, the grader will have words with me, but it's all linear algebra) you can solve for $Y/U$: $$\frac{Y(z)}{U(z)} = C \left(I z - A\right)^{-1} B + D$$ Note that a nice thing about doing it this way is that if $B$ and $C$ happen to be single column and single row, then you get a nice, traditional scalar transfer function -- but if they're multi-dimensional, you get a very natural matrix representation for a transfer function that just drops naturally out of the math.
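As a sanity check of the formula $G(z) = C(zI - A)^{-1}B + D$, here is a rough pure-Python sketch that evaluates it at a single point $z$ (Matlab's ss2tf computes the full polynomial form; this only spot-checks values, and the helper names are made up for illustration):

```python
def solve(M, B):
    """Solve M X = B for the matrix X with Gauss-Jordan elimination and
    partial pivoting (M: n x n, B: n x m, all plain nested lists)."""
    n, m = len(M), len(B[0])
    aug = [row_m[:] + row_b[:] for row_m, row_b in zip(M, B)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        for r in range(n):
            if r != col:
                factor = aug[r][col] / aug[col][col]
                for c in range(col, n + m):
                    aug[r][c] -= factor * aug[col][c]
    return [[aug[r][n + c] / aug[r][r] for c in range(m)] for r in range(n)]

def tf_from_ss(A, B, C, D, z):
    """Evaluate G(z) = C (zI - A)^(-1) B + D at one point z."""
    n = len(A)
    M = [[(z if i == j else 0.0) - A[i][j] for j in range(n)] for i in range(n)]
    X = solve(M, B)  # X = (zI - A)^(-1) B, one column per input
    return [[sum(Ci[k] * X[k][j] for k in range(n)) + D[i][j]
             for j in range(len(B[0]))] for i, Ci in enumerate(C)]

# Two decoupled first-order modes, so G(z) = 1/(z - 0.5) + 1/(z - 0.25)
A = [[0.5, 0.0], [0.0, 0.25]]
B = [[1.0], [1.0]]
C = [[1.0, 1.0]]
D = [[0.0]]
g = tf_from_ss(A, B, C, D, z=2.0)[0][0]
print(g)  # 1/1.5 + 1/1.75 = 1.238095...
```

For a MIMO system the returned matrix has one entry per input/output pair, exactly as the answer describes.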
{ "domain": "dsp.stackexchange", "id": 8186, "tags": "matlab, transfer-function, control-systems" }
Will water be superconducting if we add enough pressure?
Question: Recently, room-temperature superconductors have drawn a lot of attention. Previous studies have shown that $H_2S$ is superconducting near room temperature given enough pressure. I wonder if water ($H_2O$) will be superconducting if we compress it hard enough. Has anyone tried? If not, why? Answer: There is a theoretical prediction that pure $\mathrm{H_2O}$ will become a superconductor at pressures above $5$ TPa with a critical temperature of $T_c = 1.8\,\mathrm{K}$. Such pressures are currently not experimentally achievable, but there is another prediction that a few percent doping with nitrogen would make water a superconductor with $T_c \sim 60\,\mathrm{K}$ at an accessible pressure of only $0.15\,\mathrm{TPa}$. Unfortunately, nitrogen and water don't seem to mix well under these conditions. An experimental study could find no sign of ice doping by nitrogen at high pressures up to $0.14\,\mathrm{TPa}$.
{ "domain": "physics.stackexchange", "id": 94347, "tags": "experimental-physics, superconductivity" }
Connect Four game
Question: I wrote this a couple months ago and just wanted to get some feedback on it. In the printBoard() function, you see I commented out the system function. I've read this is considered bad practice. Does anyone know of any alternative to clearing the terminal window? I thought it'd be a nice touch to clear the window instead of just having the updated board appear below the previous. #include <stdio.h> #include <string.h> #include <stdlib.h> #define BOARD_ROWS 6 #define BOARD_COLS 7 void printBoard(char *board); int takeTurn(char *board, int player, const char*); int checkWin(char *board); int checkFour(char *board, int, int, int, int); int horizontalCheck(char *board); int verticalCheck(char *board); int diagonalCheck(char *board); int main(int argc, char *argv[]){ const char *PIECES = "XO"; char board[BOARD_ROWS * BOARD_COLS]; memset(board, ' ', BOARD_ROWS * BOARD_COLS); int turn, done = 0; for(turn = 0; turn < BOARD_ROWS * BOARD_COLS && !done; turn++){ printBoard(board); while(!takeTurn(board, turn % 2, PIECES)){ printBoard(board); puts("**Column full!**\n"); } done = checkWin(board); } printBoard(board); if(turn == BOARD_ROWS * BOARD_COLS && !done){ puts("It's a tie!"); } else { turn--; printf("Player %d (%c) wins!\n", turn % 2 + 1, PIECES[turn % 2]); } return 0; } void printBoard(char *board){ int row, col; //system("clear"); puts("\n ****Connect Four****\n"); for(row = 0; row < BOARD_ROWS; row++){ for(col = 0; col < BOARD_COLS; col++){ printf("| %c ", board[BOARD_COLS * row + col]); } puts("|"); puts("-----------------------------"); } puts(" 1 2 3 4 5 6 7\n"); } int takeTurn(char *board, int player, const char *PIECES){ int row, col = 0; printf("Player %d (%c):\nEnter number coordinate: ", player + 1, PIECES[player]); while(1){ if(1 != scanf("%d", &col) || col < 1 || col > 7 ){ while(getchar() != '\n'); puts("Number out of bounds! 
Try again."); } else { break; } } col--; for(row = BOARD_ROWS - 1; row >= 0; row--){ if(board[BOARD_COLS * row + col] == ' '){ board[BOARD_COLS * row + col] = PIECES[player]; return 1; } } return 0; } int checkWin(char *board){ return (horizontalCheck(board) || verticalCheck(board) || diagonalCheck(board)); } int checkFour(char *board, int a, int b, int c, int d){ return (board[a] == board[b] && board[b] == board[c] && board[c] == board[d] && board[a] != ' '); } int horizontalCheck(char *board){ int row, col, idx; const int WIDTH = 1; for(row = 0; row < BOARD_ROWS; row++){ for(col = 0; col < BOARD_COLS - 3; col++){ idx = BOARD_COLS * row + col; if(checkFour(board, idx, idx + WIDTH, idx + WIDTH * 2, idx + WIDTH * 3)){ return 1; } } } return 0; } int verticalCheck(char *board){ int row, col, idx; const int HEIGHT = 7; for(row = 0; row < BOARD_ROWS - 3; row++){ for(col = 0; col < BOARD_COLS; col++){ idx = BOARD_COLS * row + col; if(checkFour(board, idx, idx + HEIGHT, idx + HEIGHT * 2, idx + HEIGHT * 3)){ return 1; } } } return 0; } int diagonalCheck(char *board){ int row, col, idx, count = 0; const int DIAG_RGT = 6, DIAG_LFT = 8; for(row = 0; row < BOARD_ROWS - 3; row++){ for(col = 0; col < BOARD_COLS; col++){ idx = BOARD_COLS * row + col; if(count <= 3 && checkFour(board, idx, idx + DIAG_LFT, idx + DIAG_LFT * 2, idx + DIAG_LFT * 3) || count >= 3 && checkFour(board, idx, idx + DIAG_RGT, idx + DIAG_RGT * 2, idx + DIAG_RGT * 3)){ return 1; } count++; } count = 0; } return 0; } Answer: Some comments, in no particular order: 1: For applications like this, I think it is perfectly legitimate to use system(). It's not exactly elegant, but in this particular case I think it's fine. You could do: void clearScreen() { #ifdef WIN32 system("cls"); #else system("clear"); #endif // WIN32 } 2: Consider using the C99 or C11 standards. They allow a couple of things that make the code easier to read and maintain. For example, substitute your #defines with const int BOARD_ROWS = 6. 
This helps debugging (because you see BOARD_ROWS and not 6 in the compiler and debugger output), adds scope to the variables and generally avoids most of the problems with #defines. C99 and C11 don't require that variable definitions appear at the beginning of their scope. This means you can keep variables closer to their relevant context. The most useful example for this is for loops. In C99 and C11, you can write for (int row = 0; row < BOARD_ROWS; ++row) C99 also introduced a bool type via <stdbool.h> (a macro for the built-in _Bool type, not merely a typedef for int). This makes the code more readable. Usage example: bool done = false; while (!done) { ... done = true; }. The bool type is of course available in C11 as well. 3: What is the value of row after int row, col = 0;? It's indeterminate, so row can have any value. If you mean to initialize both variables, which you should, you need to write int row = 0, col = 0;. (There's a similar gotcha in C. What's the type of p2 in this snippet: int *p1, p2;? It's a plain int, only p1 is an int*. To make both variables pointers, write int *p1, *p2.) 4: Consider using printf consistently instead of puts. 5: Consider preferring pre-increment operators. row++ means "increment row, then return a copy of what it looked like before incrementing". ++row means "increment row", which is what you mean. It's not a big difference in this case, and the compiler can optimize away the extra copy, but I like the habit of writing exactly what you mean. If you ever start writing C++-code with user-defined increment operators, this can actually be important. 6: Try to always initialize your variables, rather than set their value at a later time. It's faster, cleaner and safer. Note that static and global variables are guaranteed to be zero-initialized when the program starts, but it doesn't hurt to be explicit. 7: Full variable names are easier to read, and there's no good reason not to type them out. Prefer index over idx, DIAGONAL_RIGHT over DIAG_RGT and so on.
8: Personally, I prefer for (;;) over while (1). The former is commonly read out loud as "forever". 9: Consider adding documentation and comments to your code. For example Doxygen comments before each function. 10: Code that is not self-explanatory should be factored out into separate functions with meaningful names. Consider your while(getchar() != '\n');. Putting this into a function called either flushInputBuffer() or waitForEnter() conveys intent and what is going on much better than the original code. 11: Consider changing your whitespace to increase readability. Instead of return 0; } int diagonalCheck(char *board){ write return 0; } int diagonalCheck(char *board) { to emphasize that the closing bracket belongs to the topmost function, not the bottom one. Also consider whitespace before brackets, for example. This is a matter of preference, the most important thing is that you're consistent and that the code is readable. 12: Break up long lines into more, shorter lines. This is a tremendous help for readability, and especially helps when doing source control merges, debugging in a debugger and other cases where your window width is limited. It's common to limit lines to 80-100 characters. 13: Consider sorting #includes in alphabetical order. 14: Consider using unit tests. 15: There are several things you are doing that are good. For example using symbolic rather than literal constants, i.e. const int DIAG_RGT = 6, DIAG_LFT = 8;. Your program is also fairly well divided into functions, and most variable names are reasonably named. Keep doing these things.
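Following the spirit of suggestion 10 (factor intent into named helpers), the three separate win-check functions in the reviewed code all reduce to one stride-based scan over the flattened board: strides 1, 7, 6, and 8 correspond to horizontal, vertical, and the two diagonals on a 7-column board. A hypothetical Python sketch of that indexing (not a drop-in replacement for the C code):

```python
BOARD_ROWS, BOARD_COLS = 6, 7

def check_win(board):
    """Check a flattened Connect Four board (list of 42 chars, ' ' = empty)
    for four in a row, using one stride per direction."""
    for row in range(BOARD_ROWS):
        for col in range(BOARD_COLS):
            idx = BOARD_COLS * row + col
            strides = []
            if col <= BOARD_COLS - 4:
                strides.append(1)                    # horizontal
            if row <= BOARD_ROWS - 4:
                strides.append(BOARD_COLS)           # vertical
                if col >= 3:
                    strides.append(BOARD_COLS - 1)   # diagonal toward lower-left
                if col <= BOARD_COLS - 4:
                    strides.append(BOARD_COLS + 1)   # diagonal toward lower-right
            for s in strides:
                cells = [board[idx + k * s] for k in range(4)]
                if cells[0] != ' ' and len(set(cells)) == 1:
                    return True
    return False
```

The bounds checks on `row` and `col` replace the `count <= 3` / `count >= 3` bookkeeping of the original `diagonalCheck`.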
{ "domain": "codereview.stackexchange", "id": 3981, "tags": "c, console, connect-four" }
Straight cosmic string energy-momentum tensor and the cosmic strings EoS
Question: Consider a simple infinite straight "cosmic" string of negligible thickness, in flat spacetime. The string energy-momentum tensor has the following components (in the string proper frame, and using cartesian coordinates system with the $z$ axis oriented along the string) : \begin{equation}\tag{1} T^{ab} = \left(\begin{array}{cccc} \rho & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & -\, \tau \end{array}\right), \end{equation} where $\rho$ is the string's energy density (which includes some Dirac deltas) and $\tau > 0$ is the string tension (I'm using the $\eta = (1, -1, -1, -1)$ convention). In general $\tau \ne \rho$. Now, consider a large collection of random strings covering all of space. On average, the "fluid" of strings is described by the following tensor: \begin{equation}\tag{2} \langle \, T^{ab} \rangle = \left(\begin{array}{cccc} \rho & 0 & 0 & 0\\ 0 & p & 0 & 0\\ 0 & 0 & p & 0\\ 0 & 0 & 0 & p \end{array}\right). \end{equation} So the trace of (1) and (2) give $$\tag{3} T = \rho + \tau = \rho - 3 p. $$ In cosmology it is frequently stated that the equation of state $p = -\, \frac{1}{3} \: \rho$ describes a fluid of strings (and $p = -\, \frac{2}{3} \: \rho$ is associated to a fluid of "cosmic walls"). Substituting this EoS into (3) gives $\tau = \rho$, which is just a special case. So how can we justify that $p = -\, \frac{1}{3} \: \rho$ describes a fluid of strings? How could we justify that $\tau = \rho$ for a string? What if $\tau \ne \rho$? Answer: How could we justify that $τ=ρ$ for a string? Enhanced symmetry, which amounts to a simpler and more natural description. Note, that the tensor $T_{ab}=\rho \mathop{\mathrm{diag}}(1,0,0,-1)$ is invariant under Lorentz boosts along the $z$ direction, the string energy momentum tensor is simply proportional to the metric induced on the string with a constant coefficient. 
Such a string does not have a preferred frame and its description can be made invariant under worldsheet coordinate reparametrizations. So the dynamics of such a string can be derived from the Nambu–Goto action. Plus, we have a “microscopic” mechanism for the appearance of such cosmic strings from phase transitions in theories with spontaneous symmetry breaking. The prototypical example of such a theory is the Abelian Higgs model with Lagrangian: $$ \mathcal{L}=-\frac14 F_{μν} F^{μν}-|D_{μ} \Phi|^2 -V(|\Phi|), $$ where the potential has a typical “mexican hat” shape with its minimum achieved for a nonzero vev of $\Phi$. A necessary condition for the existence of a stable string solution, a nontrivial first homotopy group of the vacuum manifold (in this case a circle, $|\Phi|=\eta$), is fulfilled here, and indeed this model has a string solution with finite energy per unit length. What if $τ≠ρ$? Then the string has a preferred frame. This could mean that there is a whole worldsheet field theory “living” on the string, with its own evolution equations that cannot be derived just from energy–momentum conservation. One has to specify such a theory and possibly couple it to background fields other than the metric. So how can we justify that $p=−\frac13 ρ$? The context for such justifications is cosmology. If there are excitation modes of strings with an equation of state corresponding to e.g. massive or massless particles, the energy contained in them would be diluted by the expansion of the Universe and we would be left with only the $τ=ρ$ contributions. Note that such modes could still leave observable consequences for structure formation, etc. For more, see the review: Hindmarsh, M.B., & Kibble, T. W. B. (1995). Cosmic strings. Reports on Progress in Physics, 58(5), 477, doi:10.1088/0034-4885/58/5/001, arXiv. and a more recent but less detailed review: Copeland, E. J., & Kibble, T. W. B. (2010). Cosmic strings and superstrings. 
Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 466(2115), 623-657. doi:10.1098/rspa.2009.0591.
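To make the $p=-\frac13\rho$ average explicit: a straight string along the unit vector $\hat n$ carries spatial stress $T^{ij} = -\tau\, n^i n^j$ (the $z$-oriented case of eq. (1) is $n^i n^j = \delta^i_3 \delta^j_3$). For an isotropic distribution of string orientations,

```latex
\langle n^i n^j \rangle = \tfrac{1}{3}\,\delta^{ij}
\quad\Longrightarrow\quad
\langle T^{ij} \rangle = -\,\tau\,\langle n^i n^j \rangle
  = -\,\frac{\tau}{3}\,\delta^{ij}
\quad\Longrightarrow\quad
p = -\,\frac{\tau}{3}\,,
```

so the equation of state $p = -\frac13\rho$ indeed presupposes the Nambu–Goto relation $\tau = \rho$ discussed in the answer.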
{ "domain": "physics.stackexchange", "id": 73864, "tags": "general-relativity, cosmology, stress-energy-momentum-tensor, cosmic-string" }
Do a bar magnet's and Earth's north magnetic poles have the same polarity?
Question: Are the north magnetic pole of the earth and the north pole of a bar magnet of the same polarity? Please Explain. I don't seem to understand the question. Answer: No. The earth's North Pole is a "magnetic south pole". With a compass, the part of the needle that points north is a magnetic north pole. With magnets, opposite poles attract, so what we call North on the earth must be magnetic south to attract the needle's north.
{ "domain": "physics.stackexchange", "id": 37844, "tags": "homework-and-exercises, electromagnetism, geomagnetism, ferromagnetism" }
Change in Luminosity of a planet
Question: When viewed from the sun, the brightness of a planet with a given size and albedo changes according to the fourth power of the inverse distance. I found this statement in an encyclopedia but I can't find a mathematical proof of it. Answer: The brightness of light reflected from the planet depends on the brightness of the incident light and the distance ($d$) from the planet to the observer. Light obeys an inverse square law (think of light as spreading out in a sphere), so the brightness of the planet is inversely proportional to the square of the distance from the planet to the observer ($B \propto \frac{1}{d^2}$), assuming things like the planet being fully illuminated, etc. But the brightness of the incident light also obeys an inverse square law with respect to the distance from the planet to the sun ($r$): the brightness of the incident light is proportional to $\frac{1}{r^2}$. So the brightness is proportional to $\frac{1}{d^2} \times \frac{1}{r^2}$. But if your observer is near the sun, then $d=r$ and so the brightness is proportional to $\frac{1}{r^4}$.
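A quick numeric sketch of the same argument (the albedo and radius prefactors are arbitrary placeholders; only the scaling with distance matters):

```python
def reflected_brightness(r, d, albedo=0.3, radius=1.0):
    """Toy model: incident flux at the planet falls as 1/r^2, and the
    reflected flux seen by the observer falls as 1/d^2. The albedo and
    radius constants only set the overall scale."""
    incident = 1.0 / r**2
    return albedo * radius**2 * incident / d**2

# Observer near the sun: d = r, so doubling r should cut brightness by 2^4 = 16
b1 = reflected_brightness(r=1.0, d=1.0)
b2 = reflected_brightness(r=2.0, d=2.0)
print(b1 / b2)  # 16.0
```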
{ "domain": "astronomy.stackexchange", "id": 6497, "tags": "luminosity" }
Equation for relativistic kinetic energy
Question: Relativistic kinetic energy is given by $K.E. = (\gamma-1)m_0c^2$, where $m_0$ is the rest mass. Can it also be given by $K.E. = \frac{1}{2}\gamma m_0v^2$, where $v$ is the velocity of the particle? Answer: In special relativity, the total energy of an object is given by: $$E = \sqrt{ (pc)^2 + (m_0 c^2)^2} = \sqrt{(\gamma m_0 v c)^2 +(m_0 c^2)^2}$$ where $p$ is the relativistic momentum $(\gamma m_0 v)$. The kinetic energy is the total energy minus the rest energy, so: $$E_K = E - m_0 c^2 = \sqrt{(\gamma m_0 v c)^2 +(m_0 c^2)^2} - m_0 c^2$$ If you rearrange this equation you get: $$ E_K^2 + 2 E_K m_0 c^2 - (\gamma m_0 v c)^2 =0 \ .$$ This is a quadratic equation and the positive root is $$E_K = m_0 c^2 (\gamma -1) $$ in agreement with your first equation and as documented by Wikipedia. If you take the Taylor expansion of this expression, the first three terms are: $$0 + \frac 1 2 m_0 v^2 + \frac 3 8 \frac{m_0 v^4} {c^2} $$ When $v$ is small compared to $c$ the terms with powers of $v$ greater than 2 become negligible and the equation approximates to the Newtonian expectation $ 1/2 \ m_0 v^2$. The Newtonian equation is not strictly correct and is superseded by the relativistic equation, but for velocities encountered in everyday mechanics the Newtonian equation is a reasonable and convenient approximation. Your second equation $\gamma (1/2 \ m_0 v^2)$ is not equal to either the relativistic or the Newtonian equation for kinetic energy. For example, if we assume $$1/2 \ \gamma m_0 v^2 = m_0 c^2 (\gamma -1) $$ this equivalence eventually reduces to $$\frac{v^2}{c^2} = 0$$ which can only be true if $v=0$. How come it reduces to $v^2/c^2=0$? $$1/2 \ \gamma \ m_0 v^2 = m_0 c^2 (\gamma -1) $$ $$ \gamma \ v^2/c^2 = 2 (\gamma -1) $$ Substituting $x$ for $v^2/c^2$: $$ \frac{x}{\sqrt{1-x} } = \frac{2 (1 - \sqrt{1-x})}{\sqrt{1-x}} $$ $$ {x} = 2 (1 - \sqrt{1-x}) $$ $$ 2 \sqrt{1-x} = 2-x $$ $$ 4 (1-x) = (2-x)^2 $$ $$ -x^2 = 0$$ $$ x = 0$$
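A short numeric check of the answer's conclusion, comparing the correct expression $(\gamma-1)m_0c^2$ with the proposed $\frac12\gamma m_0 v^2$ and the Newtonian $\frac12 m_0 v^2$ (sketch in units where $c = 1$):

```python
import math

def gamma(v):
    """Lorentz factor, in units where c = 1."""
    return 1.0 / math.sqrt(1.0 - v ** 2)

def ke_relativistic(m, v):
    """Correct relativistic kinetic energy (gamma - 1) m c^2."""
    return (gamma(v) - 1.0) * m

def ke_hybrid(m, v):
    """The proposed (incorrect) expression (1/2) gamma m v^2."""
    return 0.5 * gamma(v) * m * v ** 2

m = 1.0
# At low speed both stay close to the Newtonian value (1/2) m v^2 ...
v = 0.01
newton = 0.5 * m * v ** 2
print(ke_relativistic(m, v) / newton)  # approx 1.000075 (= 1 + 3v^2/4 + ...)
print(ke_hybrid(m, v) / newton)        # approx 1.00005  (= gamma)
# ... but at relativistic speed they disagree badly:
v = 0.9
print(ke_relativistic(m, v))  # approx 1.294
print(ke_hybrid(m, v))        # approx 0.929
```

The two expressions only coincide at $v=0$, exactly as the algebra in the answer shows.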
{ "domain": "physics.stackexchange", "id": 99925, "tags": "special-relativity, energy, mass, inertial-frames, mass-energy" }
Definition of Disjointness for binary strings
Question: Basically, most definitions of disjointness are such that $DISJ(A, B) = 1$ if $A \cap B = \emptyset $ and $DISJ(A, B) = 0$ otherwise. My confusion is about the influence of $0$s here. For example, is an all-zeros string considered to be $\emptyset$, so that the result will be $1$ no matter what the other string is? Another case: do two disjoint strings, e.g., $01$ and $10$, become non-disjoint after the same number of $0$s is appended to each string? Currently my thought, based on most research papers in the communication complexity field, is that $DISJ(A, B) = 0$ iff there exists one entry that is $1$ in both $A$ and $B$. I would like some clarification and a detailed definition with a source. Answer: We identify a binary string with a set in the following way: $x \in \{0,1\}^n$ corresponds to the set $\{ i \in [n] : x_i = 1 \}$. In particular, $0^n$ is identified with the empty set.
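The identification in the answer can be written out directly; a small sketch that also answers both of the question's cases (the all-zeros string is the empty set, and appending $0$s never changes the set):

```python
def to_set(x):
    """Interpret a binary string as the set {i : x_i = 1}."""
    return {i for i, bit in enumerate(x) if bit == '1'}

def disj(a, b):
    """DISJ(a, b) = 1 iff the associated sets are disjoint."""
    return 1 if to_set(a).isdisjoint(to_set(b)) else 0

print(disj("0000", "1011"))  # 1: the all-zeros string is the empty set
print(disj("01", "10"))      # 1
print(disj("0100", "1000"))  # 1: appending 0s never changes the set
print(disj("11", "01"))      # 0: position 1 is 1 in both strings
```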
{ "domain": "cs.stackexchange", "id": 17272, "tags": "disjoint-sets" }
What is the difference between ROS Multimaster and ROS Master-Slave?
Question: I am trying to understand the difference between ROS Multimaster and the ROS Master-Slave API, but it is confusing me. Please help me understand the difference between them. There are a lot of tutorials available on the web, but none of them explains what Multimaster and the Master-Slave API are; there is no in-depth explanation of them. Originally posted by MuhammadDanyial on ROS Answers with karma: 13 on 2019-08-28 Post score: 0 Original comments Comment by tfoote on 2019-08-28: Please edit your question to provide more details about what you're specifically referring to. There are several terms similar to what you refer to but your question does not have enough information to tell exactly what you want to compare. Links would be helpful. Comment by MuhammadDanyial on 2019-08-29: I have added more explanation to my question, is it enough now? Answer: The multimaster package that you link to is a ROS package that was last released many years ago and hasn't been supported for several distros. If you're looking for multi-master capabilities I'd recommend checking out multimaster_fkie or rocon_multimaster The Master-Slave API you link to is the set of internal APIs that ROS uses under the hood to coordinate between nodes. There are no tutorials for it as it's not considered a user-facing API and you're generally not recommended to interact with it directly. Originally posted by tfoote with karma: 58457 on 2019-08-29 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 33705, "tags": "ros2" }
Spherical Bessel Equation has different forms?
Question: I am trying to thoroughly solve the infinite spherical well potential problem that is introduced in Griffiths' Introduction to QM, Chapter 4. To solve the radial part of the equation, one must solve the Bessel equation, which I know is: $$x^2\frac{d^2Y}{dx^2}+x\frac{dY}{dx}+(x^2-\nu^2)Y=0$$ To arrive at this, I begin manipulating the Radial equation: $$\frac{d}{dr}\left(r^2\frac{dR}{dr}\right)-\frac{2mr^2}{\hbar^2}(V(r)-E)R=l(l+1)R$$ Upon applying the derivative and using that inside the spherical well the potential is zero: $$\frac{d^2R}{dr^2}+2r\frac{dR}{dr}+(k^2r^2-l(l+1))R=0$$ If I use a substitution such as $x=kr$, and call $\nu^2={l(l+1)}$ I can arrive at: $$\frac{d^2R}{dx^2}+2x\frac{dR}{dx}+(x^2-\nu^2)R=0$$ but this is not the Bessel Equation! Alternatively, I could have used the substitution $u(r)=rR(r)$, which yields: $$r^2\frac{d^2u}{dr^2}+(k^2r^2-l(l+1))u=0$$ Even after using the same substitution as before, I only get: $$x^2\frac{d^2Y}{dx^2}+(x^2-\nu^2)Y=0$$ However, this is not the Bessel equation either. My question is: How can I get to the Bessel equation from either of these? Or is that not possible? In Griffiths' book, he states that the $u=rR$ form of the radial equation is solved by the spherical Bessel functions, so it should be the Bessel equation. And if I can indeed get to the Bessel equation, I would face another problem: The solutions of the Bessel equation that agree with Griffiths' are those with $\nu=\pm 1/2$, but since $\nu^2=l(l+1)$, I don't see how we could ever get to that result Answer: To arrive at this, I begin manipulating the Radial equation: $$\frac{d}{dr}\left(r^2\frac{dR}{dr}\right)-\frac{2mr^2}{\hbar^2}(V(r)-E)R=l(l+1)R$$ You forgot some factors of $r$ while manipulating this equation. 
The above is equivalent to $$r^2\frac{d^2R}{dr^2} +2r\frac{dR}{dr}-\frac{2mr^2}{\hbar^2}(V(r)-E)R=l(l+1)R$$ Dividing by $r^2$, and using $V(r)=0$ inside the well, you get the following $$\frac{d^2R}{dr^2} +\frac{2}{r}\frac{dR}{dr}+\left(\frac{2mE}{\hbar^2}-\frac{l(l+1)}{r^2}\right)R=0$$ Which is the spherical Bessel equation.
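One can check the corrected equation numerically: for $l=1$ and $k^2 = 2mE/\hbar^2 = 1$, the closed-form spherical Bessel function $j_1(x) = \sin x/x^2 - \cos x/x$ should satisfy $R'' + \frac{2}{x}R' + \left(k^2 - \frac{l(l+1)}{x^2}\right)R = 0$. A rough finite-difference sketch:

```python
import math

def j1(x):
    """Closed-form spherical Bessel function j_1(x) = sin x / x^2 - cos x / x."""
    return math.sin(x) / x**2 - math.cos(x) / x

def residual(R, x, l, k=1.0, h=1e-4):
    """|R'' + (2/x) R' + (k^2 - l(l+1)/x^2) R| using central differences."""
    d1 = (R(x + h) - R(x - h)) / (2 * h)
    d2 = (R(x + h) - 2 * R(x) + R(x - h)) / h**2
    return abs(d2 + (2.0 / x) * d1 + (k**2 - l * (l + 1) / x**2) * R(x))

for x in (0.5, 2.0, 7.3):
    print(residual(j1, x, l=1) < 1e-5)  # True at each sample point
```

The residual vanishes (up to finite-difference error), consistent with the answer's derivation.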
{ "domain": "physics.stackexchange", "id": 88587, "tags": "quantum-mechanics, schroedinger-equation, differential-equations, special-functions" }
A fraction of the code
Question: Following up on this post, and including some major changes suggested, here's the revised code. Changes include: No longer keeping an IFormatProvider at instance level. Removed IFormatProvider constructor parameters. Introduced ToString(IFormatProvider) overload. Changed decimal to float, to leverage float.NaN for 0-denominator fractions. Implemented more operators. Reimplemented Equals and CompareTo per recommendations. Fixed a bug in ToString(string, IFormatProvider). Added XML comments on all public members. As the code file grew, it became apparent that I was going to need a way of grouping code sections. I did not want to use #region because... it's a question of principles. It's irrational, I just don't want to use #region. So I regrouped all static members into a partial struct (note: some code lines and XML comments were reformatted to avoid horizontal scrolling): /// <summary> /// A fractional representation of a rational number. /// </summary> public partial struct Fraction { /// <summary> /// An empty <c>Fraction</c> (0/0). /// </summary> public static readonly Fraction Empty = new Fraction(); /// <summary> /// A <c>Fraction</c> representation of the integer value 0. /// </summary> public static readonly Fraction Zero = new Fraction(default(int)); /// <summary> /// A <c>Fraction</c> representation of the integer value 1. /// </summary> public static readonly Fraction One = new Fraction(1); /// <summary> /// Represents the smallest possible value for a <see cref="Fraction"/>. /// </summary> public static readonly Fraction MinValue = new Fraction(1, int.MinValue); /// <summary> /// Represents the largest possible value for a <see cref="Fraction"/>. /// </summary> public static readonly Fraction MaxValue = new Fraction(int.MaxValue, 1); /// <summary> /// Returns a simplified/reduced representation of a fraction. 
/// </summary> /// <param name="fraction">The fraction to simplify.</param> /// <returns> /// Returns a new <see cref="Fraction"/>, /// a simplified representation of this instance (if simplification is possible). /// </returns> public static Fraction Simplify(Fraction fraction) { if (fraction.IsUndefined) { return new Fraction(fraction); } var gcd= GetGreatestCommonDenominator(fraction.Numerator, fraction.Denominator); var numerator = fraction.Numerator / gcd; var denominator = fraction.Denominator / gcd; return new Fraction(numerator, denominator); } private static int GetGreatestCommonDenominator(int numerator, int denominator) { return denominator == 0 ? numerator : GetGreatestCommonDenominator(denominator, numerator % denominator); } private static readonly Regex _parserRegex = new Regex(@"^\s*?(?<numerator>\d+)\s*?/\s*?(?<denominator>\d+)\s*?$"); /// <summary> /// Converts the string representation of a fraction /// into its <c>Fraction</c> equivalent. /// A return value indicates whether the conversion succeeded. /// </summary> /// <param name="s">A string containing the fraction to convert.</param> /// <param name="result"> /// When this method returns, contains the <c>Fraction</c> /// equivalent to the specified string. 
/// </param> /// <returns>Returns <c>true</c> if conversion is successful.</returns> public static bool TryParse(string s, out Fraction result) { var syntaxMatch = _parserRegex.Match(s); if (!syntaxMatch.Success) { result = Fraction.Zero; return false; } var numerator = int.Parse(syntaxMatch.Groups["numerator"].Value); var denominator = int.Parse(syntaxMatch.Groups["denominator"].Value); result = new Fraction(numerator, denominator); if (!result.IsUndefined) { result = result.Simplify(); } return true; } public static explicit operator float(Fraction fraction) { return fraction.ToFloat(); } public static bool operator ==(Fraction fraction1, Fraction fraction2) { return fraction1.Equals(fraction2); } public static bool operator ==(Fraction fraction, int value) { return fraction.Equals(new Fraction(value)); } public static bool operator ==(Fraction fraction, float value) { Fraction result; if (Fraction.TryParse(value.ToString(), out result)) { return fraction.Equals(result); } return false; } public static bool operator !=(Fraction fraction1, Fraction fraction2) { return !(fraction1 == fraction2); } public static bool operator !=(Fraction fraction, int value) { return !(fraction == value); } public static bool operator !=(Fraction fraction, float value) { return !(fraction == value); } public static Fraction operator ++(Fraction fraction) { return new Fraction(fraction.Numerator + 1, fraction.Denominator); } public static Fraction operator +(Fraction fraction, int value) { return fraction + new Fraction(value); } public static Fraction operator +(Fraction fraction1, Fraction fraction2) { int numerator = (fraction1.Numerator * fraction2.Denominator) + (fraction1.Denominator * fraction2.Numerator); int denominator = (fraction1.Denominator * fraction2.Denominator); var result = new Fraction(numerator, denominator).Simplify(); return result; } public static Fraction operator --(Fraction fraction) { return new Fraction(fraction.Numerator - 1, fraction.Denominator); } 
public static Fraction operator -(Fraction fraction, int integer) { return fraction - new Fraction(integer); } public static Fraction operator -(Fraction fraction1, Fraction fraction2) { var subtrator = new Fraction(fraction2.Numerator * -1, fraction2.Denominator); return fraction1 + subtrator; } public static Fraction operator /(Fraction fraction, int integer) { return fraction / new Fraction(integer); } public static Fraction operator /(Fraction fraction1, Fraction fraction2) { var divisor = new Fraction(fraction2.Denominator, fraction2.Numerator); return fraction1 * divisor; } public static Fraction operator *(Fraction fraction, int integer) { return fraction * new Fraction(integer); } public static Fraction operator *(Fraction fraction1, Fraction fraction2) { var numerator = fraction1.Numerator * fraction2.Numerator; var denominator = fraction1.Denominator * fraction2.Denominator; var result = new Fraction(numerator, denominator).Simplify(); return result; } } That left all instance members in their own file: /// <summary> /// A fractional representation of a rational number. /// </summary> [Serializable] public partial struct Fraction : IFormattable, IComparable, IComparable<Fraction>, IEquatable<Fraction> { private readonly int _numerator; private readonly int _denominator; /// <summary> /// Copy constructor. /// Creates a new <c>Fraction</c> instance based on the specified value. /// </summary> /// <param name="fraction"></param> public Fraction(Fraction fraction) : this(fraction.Numerator, fraction.Denominator) { } /// <summary> /// Creates a new <c>Fraction</c> with the denominator being 1. /// </summary> /// <param name="numerator"></param> public Fraction(int numerator) : this(numerator, 1) { } /// <summary> /// Creates a new <c>Fraction</c> with specified numerator and denominator. 
/// </summary> /// <param name="numerator"></param> /// <param name="denominator"></param> public Fraction(int numerator, int denominator) { _numerator = numerator; _denominator = denominator; } /// <summary> /// Gets the numerator (get-only). /// </summary> public int Numerator { get { return _numerator; } } /// <summary> /// Gets the denominator (get-only). /// </summary> public int Denominator { get { return _denominator; } } /// <summary> /// Gets a value indicating whether this instance is defined. /// Returns true when the fraction is a division by zero. /// </summary> public bool IsUndefined { get { return _denominator == default(int); } } /// <summary> /// Simplifies/reduces the fraction. /// </summary> /// <returns> /// Returns a simplified representation of this instance, /// if simplification is possible. /// </returns> public Fraction Simplify() { return Fraction.Simplify(this); } /// <summary> /// Creates a <c>float</c> representation of the <see cref="Fraction"/>. /// </summary> /// <returns> /// Returns the result of dividing the <c>Numerator</c> by the <c>Denominator</c>, /// or <c>float.NaN</c> when the <c>Denominator</c> is zero. /// </returns> public float ToFloat() { return IsUndefined ? float.NaN : (float)_numerator / (float)_denominator; } /// <summary> /// Returns a value indicating whether this instance and a specified object /// represent the same value. /// </summary> /// <param name="obj">Any <c>Fraction</c> or <c>float</c>-convertible value.</param> /// <returns></returns> public override bool Equals(object obj) { if (obj is Fraction) { return Equals((Fraction)obj); } return ToFloat().Equals((float)obj); } /// <summary> /// Returns the hash code for this instance. /// </summary> /// <returns></returns> public override int GetHashCode() { return ToFloat().GetHashCode(); } /// <summary> /// Converts this fraction into a string representation, /// using a default <see cref="FractionFormatter"/>. 
/// </summary> /// <returns></returns> public override string ToString() { return ToString(FractionFormatter.Default); } /// <summary> /// Converts this fraction into a string representation, /// using specified <c>IFormatProvider</c>. /// </summary> /// <param name="provider"></param> /// <returns></returns> public string ToString(IFormatProvider provider) { return ToString(null, provider); } /// <summary> /// Converts this fraction into a string representation, /// using specified <c>format</c> and <c>IFormatProvider</c>. /// </summary> /// <param name="format"></param> /// <param name="provider"></param> /// <returns></returns> public string ToString(string format, IFormatProvider provider) { if (provider is ICustomFormatter) { return ((ICustomFormatter)provider).Format(format, this, provider); } return FractionFormatter.Default.Format(format, this, FractionFormatter.Default); } /// <summary> /// Compares this instance to a specified object and /// returns an indication of their relative values. /// </summary> /// <param name="obj"></param> /// <returns></returns> public int CompareTo(object obj) { if (obj is int) { return CompareTo(new Fraction((int)obj)); } else if (obj is string) { Fraction fraction; if (Fraction.TryParse(obj as string, out fraction)) { return CompareTo(fraction); } } return CompareTo((Fraction)obj); } /// <summary> /// Compares this instance to specified <c>Fraction</c> and /// returns an indication of their relative values. /// </summary> /// <param name="other"></param> /// <returns></returns> public int CompareTo(Fraction other) { if (IsUndefined || other.IsUndefined) { // let the framework handle NaN comparisons return ToFloat().CompareTo(other.ToFloat()); } long left = _numerator * other.Denominator; long right = _denominator * other.Numerator; return left.CompareTo(right); } /// <summary> /// Returns a value indicating whether /// this instance is equal to a specified <c>Fraction</c> value. 
/// </summary> /// <param name="other"></param> /// <returns></returns> public bool Equals(Fraction other) { return CompareTo(other) == 0; } } Here's a screenshot of a Solution Explorer view showing all members and overloads: Is everything consistent? What could be improved? @mjolka suggested to write GetGreatestCommonDenominator iteratively - I've found this code online and I find the recursive method is easier to read: static int GetGreatestCommonDenominator(int numerator, int denominator) { int remainder; while (denominator != 0) { remainder = numerator % denominator; numerator = denominator; denominator = remainder; } return numerator; } Am I sacrificing something here? The FractionFormatter has also undergone some minor changes, including it here for completion: public class FractionFormatter : IFormatProvider, ICustomFormatter { private readonly CultureInfo _culture; public FractionFormatter(CultureInfo culture) { _culture = culture; } public static FractionFormatter Default { get { return new FractionFormatter(CultureInfo.CurrentUICulture); } } public object GetFormat(Type formatType) { return (formatType == typeof(ICustomFormatter)) ? 
this : null; } public string Format(string format, object arg, IFormatProvider formatProvider) { var fraction = (Fraction)arg; if (string.IsNullOrEmpty(format)) { return string.Format(_culture, "{0}/{1}", fraction.Numerator, fraction.Denominator); } var result = string.Format(_culture, "{0:" + format + "}", fraction.ToFloat()); return result; } } Same with MathJaxFractionFormatter: public class MathJaxFractionFormatter : IFormatProvider, ICustomFormatter { public enum MathJaxFractionSize { Normal, Large } private static readonly CultureInfo _culture = typeof(FractionFormatter).Assembly.GetName().CultureInfo; private readonly string _delimiter; private readonly MathJaxFractionSize _size; public MathJaxFractionFormatter() : this("$", MathJaxFractionSize.Normal) { } public MathJaxFractionFormatter(string delimiter, MathJaxFractionSize size) { _delimiter = delimiter; _size = size; } public object GetFormat(Type formatType) { return (formatType == typeof(ICustomFormatter)) ? this : null; } public string Format(string format, object arg, IFormatProvider formatProvider) { var fraction = (Fraction)arg; if (string.IsNullOrEmpty(format)) { var keyword = _size == MathJaxFractionSize.Normal ? "\\frac" : "\\dfrac"; return string.Format(_culture, "{2}{3}{{{0}}}{{{1}}}{2}", fraction.Numerator, fraction.Denominator, _delimiter, keyword); } return fraction.ToString(format, _culture); } } Now this will print a culture-sensitive 33.333 %: Console.WriteLine(new Fraction(1, 3).ToString("p3", new MathJaxFractionFormatter())); And this will print $\frac{1}{3}$ as expected: Console.WriteLine(new Fraction(1, 3).ToString(new MathJaxFractionFormatter())); Passing in a CultureInfo.InvariantCulture will simply cause ToString to use FractionFormatter.Default. Console.WriteLine("5/25 with InvariantCulture: {0}", new Fraction(5, 25).ToString(CultureInfo.InvariantCulture)); Outputs 5/25 with InvariantCulture: 5/25. 
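The GCD question at the end ("Am I sacrificing something here?") is easy to probe outside C#. A minimal Python sketch of both forms of Euclid's algorithm, for illustration only:

```python
# Euclid's algorithm, iterative form (mirrors the C# while loop shown above).
def gcd_iterative(numerator, denominator):
    while denominator != 0:
        numerator, denominator = denominator, numerator % denominator
    return numerator

# Recursive form: the same mathematics, one call frame per division step.
def gcd_recursive(numerator, denominator):
    if denominator == 0:
        return numerator
    return gcd_recursive(denominator, numerator % denominator)

print(gcd_iterative(5, 25), gcd_recursive(5, 25))  # 5 5
```

For 32-bit operands the recursion is at most a few dozen frames deep (the worst case is consecutive Fibonacci numbers), so nothing is sacrificed beyond a little call overhead; the iterative form trades that for mutable locals.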
Answer: I'd second mjolka's comments that the behaviour of ++ and -- is confusing, and that 0 would be preferable to default(int) NaN and infinity You have a single case for a zero denominator called IsUndefined. By comparison, Single has IsInfinity, IsPositiveInfinity, IsNegativeInfinity and IsNaN. Likewise, it has static NaN, NegativeInfinity and PositiveInfinity constants. For consistency, I would consider renaming Empty to NaN, and splitting IsUndefined into three cases, for the numerator < 0 (negative infinity), == 0 (NaN) and > 0 (positive infinity). These can be used when converting to and from float. Parsing You should use int.TryParse rather than int.Parse. For example, if the numerator or denominator is larger than int.Max, you want to return false rather than throwing an exception. Int32.TryParse (and similar methods) actually guarantee a particular value to their out parameter. It's hard to think of a situation where this is important, but if you wanted to be super-conscientious you might consider doing the same. Note that unlike most numeric types, default(Fraction) is actually not Fraction.Zero but Fraction.Empty. You might want to think about which of these you'd rather return: 0/1 for consistency with other numeric types, or 0/0 for consistency with default. And either way, you could state your decision explicitly in your xml comments. You might also consider having a Parse method like other numeric types. It's not crucial, but it saves the consumer some work in the (common) situation that an invalid input is considered exceptional. Simplification It seems a little unpredictable when fractions get simplified. I wouldn't necessarily expect that I could define a fraction 2/4 and it would remain that way until I multiplied it by 1, at which point it would become 1/2.
Especially by making Simplify public, I think you imply that simplification is something that should work in a way understandable to a consumer, without having to read through all your method implementations. You could go through and try to use it consistently, but I think the simplest way would be to do the simplification inside the constructor so that all fractions are guaranteed to always be simplified. I can't think of a convincing reason that you'd want to work with unsimplified fractions. In this case you'd want to make methods relating to simplification private. And obviously they would need to take a numerator and denominator as parameters rather than fractions. Avoidable overflows This maybe falls into the category of "nice to have", but you should try to ensure you don't hit overflows inside arithmetic operations when it's avoidable. For example, if you have a defined as int.Max/2 + 1, then a/2 * 2/a would overflow, when it should equal 1/1. For multiplication, if you have a/b * c/d, both already simplified, then the easiest way to do this would be to first define and simplify a/d and c/b, then multiply those. I don't think there's any improvement that you can make for addition as long as the input fractions are already simplified. EDIT: Actually, one thing you can do is, instead of using the product of the two denominators, use the least common multiple. You'll need to work out what to multiply the two numerators by before adding them. This one is actually probably more important than multiplication because even adding 1/50000 + 1/50000 will overflow as it is now. Comparison According to msdn documentation for IComparable.CompareTo: The parameter, obj, must be the same type as the class or value type that implements this interface; otherwise, an ArgumentException is thrown. I can see why you might want to support comparing to an integer, but comparing to a string really seems like overkill.
It's also potentially dangerous, as it doesn't adhere to all of these rules from the same article: A.CompareTo(A) must return zero. If A.CompareTo(B) returns zero, then B.CompareTo(A) must return zero. If A.CompareTo(B) returns zero and B.CompareTo(C) returns zero, then A.CompareTo(C) must return zero. If A.CompareTo(B) returns a value other than zero, then B.CompareTo(A) must return a value of the opposite sign. If A.CompareTo(B) returns a value x not equal to zero, and B.CompareTo(C) returns a value y of the same sign as x, then A.CompareTo(C) must return a value of the same sign as x and y. Since strings compare alphabetically, it's easy to imagine a combination of two strings and a fraction which would compare inconsistently. e.g.: var quarter = "1/4"; var third = new Fraction(1,3); var half = "1/2"; Console.Out.WriteLine(quarter.CompareTo(half)); // 1 Console.Out.WriteLine(third.CompareTo(half)); // -1 Console.Out.WriteLine(third.CompareTo(quarter)); // 1 GetHashCode As it is, it would be possible to get two instances where Equals returns true but which have a different hash code due to float rounding. I did a quick test and I believe that 1/3 would have a different hash code to 5592409/16777227, for example. Enforcing that all fractions are always simplified as I described before would fix this. Alternatively you could use Tuple.Create(_numerator, _denominator).GetHashCode(), though I'm not sure about the performance impact of this. You could also look into formulae for generating hash codes from two integers. Or, for explicit consistency with the Equals method, you could find the long product of the numerator and the denominator and take the hash code of that. IConvertible For added support, you might consider implementing the IConvertible interface. This is relatively straightforward and probably best demonstrated in the linked article.
For specific types such as Int32, Int64 and Boolean you might want to implement your own conversion, but otherwise, the simplest approach is to convert first to a common numeric type (probably float) then call the corresponding IConvertible method on the result.
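The cross-cancelling trick for multiplication described above can be sketched quickly. This is a hedged Python illustration (Python ints do not overflow, so the point here is only that the intermediate products stay small):

```python
from math import gcd

# For a/b * c/d, both fractions already reduced: reduce a against d and
# c against b first, then multiply, so intermediate products stay small.
def multiply(a, b, c, d):
    g1 = gcd(abs(a), abs(d))
    g2 = gcd(abs(c), abs(b))
    return (a // g1) * (c // g2), (b // g2) * (d // g1)

# The answer's example: with a = int.Max/2 + 1, (a/2) * (2/a) should be 1/1
# without ever forming an oversized intermediate product.
a = 2**31 // 2 + 1
print(multiply(a, 2, 2, a))  # (1, 1)
```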
{ "domain": "codereview.stackexchange", "id": 8514, "tags": "c#, formatting, rational-numbers" }
Simple mathematical operations (add, sub, mul, div) in C++11 template
Question: I made a simple script to implement basic mathematics operations by using variadic functions. I would like to know if my implementation is correct. The code only works for Visual C++ compiler Nov 2013 CTP (CTP_Nov2013). #include <iostream> #include <string> #include <exception> using namespace std; template <typename T> T add(T&& item = T()) { return forward<T>(item); } template <typename T, typename ... Types> auto add(T&& first, Types&& ... rest) -> decltype(first + add(forward<Types>(rest)...)) { return forward<T>(first) + add(forward<Types>(rest)...); } template <typename T> T sub(T&& i = T()) { return forward<T>(i); } template <typename T, typename ... Types> auto sub(T&& first, Types&& ... rest) -> decltype(forward<T>(first) - sub(forward<Types>(rest)...)) { return forward<T>(first) - sub(forward<Types>(rest)...); } template <typename T> T multiple(T&& i = T()) { return forward<T>(i); } template <typename T, typename ... Types> auto multiple(T&& first, Types&& ... rest) -> decltype(forward<T>(first) * multiple(std::forward<Types>(rest)...)) { return forward<T>(first) * multiple(forward<Types>(rest)...); } template <typename T> T divide(T&& item = T()) { return forward<T>(item); } template <typename T, typename ... Types> auto divide(T&& first, Types&& ... rest) -> decltype(forward<T>(first) / divide(forward<Types>(rest)...)) { if (divide(forward<Types>(rest)...) == 0) throw "Opps divided by Zero"; return forward<T>(first) / divide(forward<Types>(rest)...); } template<typename... Types> void termnate(Types&&...) { std::cout << '\n'; } template<typename... Types> void result(Types&&... t) { termnate{ ([&]{ std::cout << forward<Types>(t) << ' '; }(), 1)... 
}; } int main() { result( add(1, 2.5, 3, 4, 5), " = ", "1 + 2.5 + 3 + 4 + 5"); result(sub(sub(sub(sub(1, 2.5), 3), 4), 5), " = ", "1 - 2.5 - 3 - 4 - 5"); // OK = -13.5 result(multiple(1, 2.5, 3, 4, 5), " = ", "1 x 2.5 x 3 x 4 x 5"); result(divide(divide(divide(divide(1, 2.5), 3), 4.5), 5), " = ", "1 / 2.5 / 3 / 4.5 / 5"); // OK = .005926 cout << add("\nTest ", "template: ", string("PASS\n")); } Answer: When passing around rvalue references, the recommended practice is to forward the parameters to ensure that their value category is preserved: template <typename T, typename ... Types> auto add(T&& first, Types&& ... rest) -> decltype(first + add(std::forward<Types>(rest)...)) { return first + add(std::forward<Types>(rest)...); } Note that you have to explicitly provide the template type to std::forward(). It would deduce the wrong type otherwise.
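One point worth noting about the question's code, independent of forwarding: the recursive expansion of sub computes first - sub(rest...), i.e. a right fold, which is why sub(1, 2.5, 3, 4, 5) would not equal 1 - 2.5 - 3 - 4 - 5 and why the question nests the calls by hand. A small Python illustration of the two associativities:

```python
from functools import reduce

nums = [1, 2.5, 3, 4, 5]

# Left fold: (((1 - 2.5) - 3) - 4) - 5, the usual meaning of a - b - c - ...
left = reduce(lambda x, y: x - y, nums)

# Right fold: 1 - (2.5 - (3 - (4 - 5))), which is what the recursive
# variadic sub() template expands to.
def right_fold_sub(xs):
    if len(xs) == 1:
        return xs[0]
    return xs[0] - right_fold_sub(xs[1:])

print(left, right_fold_sub(nums))  # -13.5 2.5
```

add and multiple are unaffected because + and * are associative; only sub and divide need the nesting.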
{ "domain": "codereview.stackexchange", "id": 11041, "tags": "c++, c++11, template, variadic" }
Get the running time of forest disjoint sets
Question: If you have a forest implementation of disjoint sets, and have the union by weight/rank heuristic where you append the smaller one. Then why is the worst case running time Θ(m log n)? (m is the number of disjoint sets and n is the number of Make Set operations.) Answer: You've asked two questions: Why is the tree height Θ(log n)? Why is the runtime Θ(m log n)? This answer addresses both questions. I'll start off with a review of the ranks of the different trees to go over some basics, then discuss these time bounds. One trick that might be useful here is to think of the minimum number of nodes that can be in a tree of height n. Once we have that insight, we can try to relate the maximum runtime to the maximum height of any tree in the forest. When we initially create all of the trees, they each have height 0 and have a total of 1 node in them. Now, suppose we want to create a tree of height 1. The only way to do that would be to merge together two trees of height 0 to form a new tree of height 1. This would have a total of 2 nodes in it, and that's the minimum number of nodes possible. Next, suppose we want to create a tree of height 2. To do that, we have to merge together two trees of height 1. Each of those trees has a minimum of 2 nodes in them, so the minimum number of nodes possible in a tree of height 2 would be 4. To make a tree of height 3, we need to merge two trees of height 2, each of which has at least 4 nodes. This means that we would end up with eight nodes in the tree. If we look so far, we see this pattern: Height 0: At least 1 node. Height 1: At least 2 nodes. Height 2: At least 4 nodes. Height 3: At least 8 nodes. It looks like a tree of height n will have at least 2^n nodes in it. This makes intuitive sense - the only way to make a tree of height n + 1 is to merge together two trees of height n, so the minimum number of nodes in a tree of height n + 1 should be at least double the number of nodes in a tree of height n.
So now let's suppose we do a series of n different union operations to link trees together. What is the largest possible tree we can make? Suppose that we want to make a tree of height k. In order to do that, we first need to make two trees of height k - 1. Each of those trees is formed from two trees of height k - 2 each, and those trees are formed from two trees of height k - 3 each, etc. Eventually, we bottom out at trees of height 0. If we work out a recurrence relation, we get the following expression for the total number of union operations required to get a tree of height k: U(0) = 0 U(k + 1) = 2U(k) + 1 If we expand out the terms in this recurrence, we get U(0) = 0 U(1) = 1 U(2) = 3 U(3) = 7 U(4) = 15 ... U(k) = 2^k - 1 In other words, if we want to get to a tree of height k, we need to do 2^k - 1 link operations. This means that if we do a total of n union operations, the maximum tree height we can possibly make has height Θ(log n). This explains the runtime bound you were curious about. Creating the disjoint set forest takes time O(m), where m is the number of elements. Each union operation requires us to walk up to the root of a tree whose height is at most Θ(log n), so if we do a total of n of these, the overall runtime is at worst Θ(m + n log n). Note that this is different from the bound you've proposed. However, if you make the assumption that m = Θ(n) (that is, you do about the same number of union operations as there are nodes in the graph), then we get that Θ(m + n log n) = Θ(n + n log n) = Θ(n log n) = Θ(m log n), which matches the bound you've described. Hope this helps!
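The 2^k-node lower bound can also be checked empirically. A minimal Python sketch of union by rank (no path compression), building the worst-case tree by pairwise merging exactly as in the recurrence:

```python
import math

# Union by rank without path compression: a root's rank grows only when
# two trees of equal rank merge, so a tree of rank r holds >= 2**r nodes.
parent = []
rank = []

def make_set():
    parent.append(len(parent))
    rank.append(0)

def find(x):
    while parent[x] != x:
        x = parent[x]
    return x

def union(x, y):
    rx, ry = find(x), find(y)
    if rx == ry:
        return
    if rank[rx] < rank[ry]:
        rx, ry = ry, rx
    parent[ry] = rx               # attach the shorter tree under the taller
    if rank[rx] == rank[ry]:
        rank[rx] += 1

n = 1024
for _ in range(n):
    make_set()

# Worst case: merge pairs, then pairs of pairs, ... (the U(k) recurrence).
step = 1
while step < n:
    for i in range(0, n, 2 * step):
        union(i, i + step)
    step *= 2

print(max(rank))  # 10, i.e. log2(1024): the height grows only logarithmically
```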
{ "domain": "cs.stackexchange", "id": 1201, "tags": "data-structures" }
Function "between" for "Everything in One Line" library
Question: This is a function to determine if a number is between x and y. It is part of a library of mine called EOL or "Everything in One Line". #include <stdbool.h> #ifndef _WINDOWS_H #include <windows.h> #endif bool between (WORD n, WORD x, WORD y) { return (n >= x && n <= y) ? (true) : (false); } Answer: There's no need to use the conditional operator here. This would be enough: bool between (WORD n, WORD x, WORD y) { return n >= x && n <= y; } Other than this, the variable names are not descriptive. It is not clear that x specifies the lower bound and y the higher bound. bool between (WORD value, WORD low, WORD high) { return value >= low && value <= high; } Additionally, I have to question a little bit the usage for a "Everything in One Line" library. If the code that you are writing is trivial one-liners, it might be easier for other programmers to quickly re-write the actual code than to use your library. Most of the times I could use a between function, I would probably write the actual code in-line, without using an additional function for it at all. However, writing a "Everything in One Line"-library can of course be an educating experience for yourself.
{ "domain": "codereview.stackexchange", "id": 11699, "tags": "c, windows" }
What are theoretically sound programming languages for graph problems?
Question: There are numerous graph theoretic tools/packages. Each with its pros and cons. What should be the semantics/syntax of a programming language meant to solve graph theoretic problems? Answer: You might want to look at The Graph Programming Language GP. From the linked page: GP (for Graph Programs) is a rule-based, nondeterministic programming language for solving graph problems at a high level of abstraction, freeing programmers from handling low-level data structures. The core of GP consists of four constructs: single-step application of a set of conditional graph-transformation rules, sequential composition, branching and iteration. Sandra Steinert devoted her PhD thesis to the topic. There's also a Hoare logic for reasoning about the correctness of such programs.
{ "domain": "cstheory.stackexchange", "id": 524, "tags": "graph-theory, pl.programming-languages, software" }
How to efficiently loop through paragraphs and make simple changes with Word VBA (Specially reverse loop to Delete Paragraphs)
Question: This is regarding my answer to the SO post How to remove paragraph marks with different format in MS-Word. My primary question: is there any way the performance of the code could be improved to operate on a document of the size intended by the OP (2.2 MB, with 2.1K pages, 871K words, 4.6M characters including spaces)? Secondly, is there a simple way or workaround that I am missing (making things unnecessarily complicated) to perform the same task efficiently? Here I reproduce my code after adding some futile measures to improve performance of the code with a file of the size specified by the OP. Option Explicit Sub ReplacePara() Dim Para As Paragraph, Xstr As String, Rng As Range Dim i As Long, ln As Long, tm As Double, PrCnt As Long Dim PrvChrSize As Integer, NextChrSize As Integer Dim PrvChrFont As String, NextChrFont As String Dim PrvChrItalic As Boolean, NextChrItalic As Boolean tm = Timer 'Following measures added to improve performance 'but on the contrary they were found to increase the time taken Application.ScreenUpdating = False With Options .Pagination = False .CheckSpellingAsYouType = False .CheckGrammarAsYouType = False End With With ActiveDocument PrCnt = .Paragraphs.Count Debug.Print PrCnt For i = .Paragraphs.Count To 1 Step -1 Set Para = .Paragraphs(i) ln = Para.Range.Characters.Count If ln > 1 Then With Para.Range.Characters(ln - 1).Font PrvChrSize = .Size PrvChrFont = .Name PrvChrItalic = .Italic End With If i < .Paragraphs.Count Then With .Paragraphs(i + 1).Range.Characters(1).Font NextChrSize = .Size NextChrFont = .Name NextChrItalic = .Italic End With Else NextChrSize = 0 NextChrFont = "" NextChrItalic = False End If End If 'Debug.Print i, PrvChrSize, PrvChrFont, NextChrSize, NextChrFont If (PrvChrSize = 15 And (PrvChrFont = "Arial" Or PrvChrItalic = True)) _ And (NextChrSize = 15 And (NextChrFont = "Arial" Or NextChrItalic)) Then Para.Range.Characters(ln).Text = " " End If .UndoClear 'If PrCnt < 1000 Then Debug.Print i & "/" & PrCnt Next End With
With Options .Pagination = True .CheckSpellingAsYouType = True .CheckGrammarAsYouType = True End With Application.ScreenUpdating = True Debug.Print " Seconds taken:" & Timer - tm End Sub The added measures were actually found to increase the time taken (from 3-odd minutes to 4-odd minutes) with documents of 124 pages. I haven't ventured far enough to go for the LockWindowUpdate API. Though the code tested OK with documents of around 100 pages, I could not finish the task with a makeshift giant file of around 2.4 K pages. It virtually crashes Word (not recovering from 'Not responding' mode). I created the file with a simple code stub from the sample file linked by the OP in the SO post. The code stub is also reproduced for ease of testing. Sub makebig() Dim Rng As Range, MyRange As Range Dim Wd As Document Dim x As Long Set Wd = ThisDocument Set Rng = Wd.Content Rng.Copy For x = 1 To 2000 Set MyRange = Wd.Content MyRange.EndOf Unit:=wdStory, Extend:=wdMove MyRange.Paste Next End Sub Running the code with the sample file twice (1st time with For x = 1 To 2000 and second time with For x = 1 To 1) will produce a file of about 2.4 K pages. For getting a file of 124 pages from the sample file, 200 loops are sufficient. Answer: Answering my own question to share with the community the height of stupidity in my code, and how over 1 or 2 sleepless nights of testing the problem was gradually reduced to workable. First, the height of stupidity is the line If i < .Paragraphs.Count Then: even after knowing well that while working with such a giant document interaction with the document is to be kept to a minimum, I resorted to .Paragraphs.Count in every loop of 56 K paragraphs, whereas .Paragraphs.Count had already been assigned to the variable PrCnt. Also, the If is used only to avoid the error of trying to access the next para in the last loop of paragraphs, and is intended to act once only. Replacing it with PrCnt made the code somewhat stable, and it could run through while disabling writes on the document.
Next, thanks to @Ryan Wildry's suggestion, I tried to go for a For Each loop. Since I am deleting paragraph marks, I went for a forward For Each loop to test the conditions and collect the paragraph numbers of the paragraphs to be deleted into an array. This loop takes only 1-2 minutes to collect the desired information from 56 K paras. Now after completing the loop, I started replacing the paragraphs in a reverse loop in this fashion For i = UBound(ParaNumToDelete) To 1 Step -1 .Paragraphs(ParaNumToDelete(i)).Range.Characters(LnArr(i)).Text = " " but this was also found to take around 6-7 hours to complete the 2.4 K page, 56 K paragraph (and 16 K paragraph marks to replace) document: 150 seconds to replace 50 paragraphs at the start of the loop (i.e. near the bottom of the document) and hardly 1-2 seconds to replace 50 paragraphs at the end of the loop (i.e. near the start of the document). Code execution was unstable and feared to go into a Not Responding state even with a single click. So one more measure was added, to save the document at say every 200 replacements, so the code could be run again to complete the uncompleted task any time later. Finally, again thanks to @Ryan Wildry's comment, a Range array was created in the first For Each loop (taking 1-2 minutes) and then the range array was iterated in reverse order to replace paragraph marks.
It takes only around 10 minutes to complete without save (or around 15 minutes with save at 200 replacement) The final code: Sub TestPara() Dim Para As Paragraph, PrvLn As Long, xRng As Range, PrvRng As Range Dim i As Long, ln As Long, tm As Double, PrCnt As Long Dim ChrSize As Integer, PrvChrSize As Integer, LastChrSize As Integer Dim ChrFont As String, PrvChrFont As String, LastChrFont As String Dim ChrItalic As Boolean, PrvChrItalic As Boolean, LastChrItalic As Boolean Dim OnOff As Boolean, DelCnt As Long, DoSave As Boolean Dim RngArr() As Range, Pos As Long tm = Timer TurnOn False With ActiveDocument PrCnt = .Paragraphs.Count Debug.Print PrCnt DelCnt = 0 PrvChrSize = 0 PrvChrFont = 0 PrvChrItalic = False PrvLn = 0 i = 1 For Each Para In .Paragraphs ln = Para.Range.Characters.Count Pos = Para.Range.End Set xRng = ActiveDocument.Range(Pos - 1, Pos) If ln > 1 Then With Para.Range.Characters(ln - 1).Font LastChrSize = .Size LastChrFont = .Name LastChrItalic = .Italic End With With Para.Range.Characters(1).Font ChrSize = .Size ChrFont = .Name ChrItalic = .Italic End With Else LastChrSize = 0 LastChrFont = 0 LastChrItalic = False ChrSize = 0 ChrFont = 0 ChrItalic = False End If If (ChrSize = 15 And ChrFont = "Arial" And ChrItalic) _ And (PrvChrSize = 15 And PrvChrFont = "Arial" And PrvChrItalic) Then DelCnt = DelCnt + 1 ReDim Preserve RngArr(1 To DelCnt) Set RngArr(DelCnt) = PrvRng End If PrvChrSize = LastChrSize PrvChrFont = LastChrFont PrvChrItalic = LastChrItalic PrvLn = ln Set PrvRng = xRng If i Mod 2000 = 0 Then Debug.Print i & "/" & PrCnt End If i = i + 1 Next Debug.Print " paragraph to delete:" & DelCnt Debug.Print " Seconds taken to Calc:" & Timer - tm TurnOn False DoSave = True For i = UBound(RngArr) To 1 Step -1 RngArr(i).Text = " " If i Mod 1000 = 0 Then .UndoClear DoEvents Debug.Print i, Timer - tm End If If DoSave Then If i Mod 200 = 0 Then .Save DoEvents Debug.Print "Save at delete Countdown " & i & "/" & Timer - tm End If End If Next End With 
Debug.Print " Delete completed in Seconds:" & Timer - tm TurnOn Debug.Print " pagination Completed:" Debug.Print " Seconds taken:" & Timer - tm End Sub Sub TurnOn(Optional OnOff As Boolean = True) Application.ScreenUpdating = OnOff With Options .Pagination = OnOff .CheckSpellingAsYouType = OnOff .CheckGrammarAsYouType = OnOff End With End Sub Hope my ordeal will help the community in a similar situation
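The reverse iteration used here is the standard pattern whenever items are deleted by index: each removal shifts every later index, so indices recorded in a forward scan stay valid only if deletion proceeds from the end backwards. A language-neutral Python sketch of the effect:

```python
letters = list("abcdef")
to_delete = [1, 2]            # indices recorded in a forward scan ('b' and 'c')

# Deleting in reverse: earlier indices are unaffected by later removals.
for i in sorted(to_delete, reverse=True):
    del letters[i]
print(letters)                # ['a', 'd', 'e', 'f'] -- correct

# Deleting forward with the same recorded indices removes the wrong items,
# because the first deletion shifts everything after it one slot left.
letters = list("abcdef")
for i in to_delete:
    del letters[i]
print(letters)                # ['a', 'c', 'e', 'f'] -- 'b' and 'd' removed
```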
{ "domain": "codereview.stackexchange", "id": 35119, "tags": "vba, ms-word" }
openni depth image problem
Question: Can someone tell me why my depth image looks like that? I decompress my images of a bag like that: <node name="DEPTH_decompressed" type="republish" pkg="image_transport" output="screen" args="compressed in:=/openni/depth_registered/image_raw raw out:=/openni/depth_registered/image_raw" required="true" /> When I look at the thumbnails in rosbag, the image looks fine but when I use rviz or RGBD_SLAM, the image looks like 2 depth images side by side... My system: Ubuntu 12.04 64Bit - ROS Fuerte Originally posted by madmax on ROS Answers with karma: 496 on 2013-06-10 Post score: 1 Answer: Not sure if that is the problem, but it looks like your out topic is the same as the in topic. Try changing the name of the out topic and set rgbdslam to listen to that topic. UPDATE: I can reproduce the problem using the bagfile linked below and the following commands (in various terminals): roscore rosparam set use_sim_time true rosbag play -l --clock tunnel_openni.bag rosrun image_transport republish compressed in:=/openni/depth_registered/image_raw raw out:=/openni/depth_registered/image_raw rosrun image_view image_view image:=/openni/depth_registered/image_raw raw I also see the correct thumbnails in rxbag. The camera_info message seems correct. So I guess this is either a bug in your bagfile recording setup or the camera driver. I do not know how to solve this or what could be the cause. Sorry. My system is ros fuerte on Ubuntu 11.10. Originally posted by Felix Endres with karma: 6468 on 2013-06-10 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by madmax on 2013-06-10: No, that doesn't make a difference. Comment by Felix Endres on 2013-06-11: Do rviz and image_view display the depth image correctly? Comment by madmax on 2013-06-11: No, only when I directly look with the rqt plugin 'rosbag' on the thumbnails of the depth image topic they look correct. Comment by Felix Endres on 2013-06-13: That is weird. 
Could you make a short bag file available for me for debugging? Comment by madmax on 2013-06-16: Here is a little bag file. You just have to decompress the images. https://dl.dropboxusercontent.com/u/15268092/tunnel_openni.bag Comment by madmax on 2013-06-20: Thank you for looking at the bag file! Seems like the data is unusable...
{ "domain": "robotics.stackexchange", "id": 14493, "tags": "kinect, openni, rosbag, depth-image, image-transport" }
Why is the graph of CMB/black-body radiation asymptotic?
Question: Speaking of this graph of blackbody radiation, I see that the graph goes to 0 asymptotically: As we go to higher and higher frequencies, the energy of a single photon becomes increasingly high. Wouldn't there exist a point where the energy of a single photon is too large, and so there would be zero radiance at that frequency? I am currently reading "The Inflationary Universe" by Alan Guth, who introduces this blackbody graph and subsequently the motivation for photons by way of an analogy of visiting a bank to open an account but discovering the minimum deposit is larger than you have, so you must leave without doing anything. I had thought I understood this analogy to mean that at arbitrarily high frequencies, the energy cost to emit a photon at that frequency exceeds the thermal energy of the body. Answer: Black body radiation is a statistical description, i.e. it assumes there are enough photons that they are distributed according to Boltzmann's law. At energies high enough for a single photon to equal the total energy of the system this assumption breaks down and the black body description will no longer apply. But by the time the energy has got this high, the intensity predicted from the black body description will be so low as to be indistinguishable from zero, so this isn't a serious limitation.
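The "indistinguishable from zero but never exactly zero" behaviour follows directly from Planck's law, B_nu(T) = (2 h nu^3 / c^2) / (exp(h nu / k T) - 1): the exponential in the denominator suppresses the tail without ever making it vanish. A short numerical sketch at the CMB temperature:

```python
import math

h = 6.62607015e-34    # Planck constant, J s
kB = 1.380649e-23     # Boltzmann constant, J/K
c = 2.99792458e8      # speed of light, m/s

def planck(nu, T):
    """Spectral radiance B_nu(T) in W sr^-1 m^-2 Hz^-1."""
    x = h * nu / (kB * T)
    return (2 * h * nu**3 / c**2) / math.expm1(x)

T = 2.725  # CMB temperature in kelvin
for nu in (1e11, 1e12, 1e13):   # 100 GHz, 1 THz, 10 THz
    print(nu, planck(nu, T))
# The tail is exponentially suppressed, yet strictly positive at every frequency.
```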
{ "domain": "physics.stackexchange", "id": 16526, "tags": "cosmology, radiation, big-bang, cosmic-microwave-background, blackbody" }
Ros SSH not working
Question: Hi, I have followed the ROS SSH tutorial and it is working perfectly. However, I have changed both the 'talker' and 'listener' files into a launch file respectively. The talker.launch is on the client and the listener.launch is on the server. On the server I have launched an empty-world Gazebo as well. When I launch the following in sequence, the talker.launch does transmit data over to the server: roscore on the server Set rosclient to the rosserver launch talker.launch on client launch empty world gazebo launch listener.launch However, when I change the sequence by launching Gazebo first, the talker.launch does not transmit data over. "Topic is there but inside is empty". The following is the sequence: roscore on server Set rosclient to rosserver launch empty world gazebo launch talker.launch on client launch listener.launch It did not work, so I tried rostopic echo /"Talker_topic"; output: no message received and simulated time active. May I know if this is an SSH bug? Originally posted by loguna on ROS Answers with karma: 98 on 2020-11-05 Post score: 0 Answer: Update: Seems like changing rate.sleep() to time.sleep() did the trick. I still don't know why though. Originally posted by loguna with karma: 98 on 2020-11-06 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 35720, "tags": "ros-melodic, ssh" }
Positive deviation from ideal behaviour
Question: I am confused about positive deviation from ideal behaviour and why it occurs. 1) Molecular attractions play the dominant role in positive deviation, OR 2) Molecular volume plays the dominant role in positive deviation. I am confused between these two statements. Answer: If, in a binary solution of two components A and B, the interaction between the unlike components A-B is weaker than the interactions between the like components A-A and B-B, then molecules of A (or B) will find it easier to escape from the A-B mixture than from their pure states (A-A or B-B). This results in a greater vapour pressure of each component of the solution than expected on the basis of Raoult's law, and hence the total vapour pressure will also be higher than in the case of an ideal solution. Thus the deviation is positive. In such a solution $\Delta_{mix}H$ is positive because energy is required to break A-A and B-B bonds. For such solutions the dissolution process is endothermic, i.e. the solubility will increase with temperature (Le Chatelier's principle). $\Delta_{mix}V$ is also positive because there is a decrease in the magnitude of the intermolecular forces in the solution; the molecules are loosely held, and therefore the volume on mixing increases.
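The "easier escape" argument can be made quantitative with Raoult's law plus activity coefficients greater than 1; here is a rough sketch using a one-parameter Margules model, with entirely made-up pure-component vapour pressures and Margules parameter:

```python
import math

# Hypothetical pure-component vapour pressures (kPa) and Margules parameter
p_A_pure, p_B_pure = 100.0, 60.0
A_margules = 0.8   # A > 0 models weaker A-B interactions, so gamma > 1

def total_pressure(x_A):
    """Ideal and real total vapour pressure for mole fraction x_A of A."""
    x_B = 1.0 - x_A
    # One-parameter Margules activity coefficients
    gamma_A = math.exp(A_margules * x_B**2)
    gamma_B = math.exp(A_margules * x_A**2)
    p_ideal = x_A * p_A_pure + x_B * p_B_pure                 # Raoult's law
    p_real = x_A * gamma_A * p_A_pure + x_B * gamma_B * p_B_pure
    return p_ideal, p_real

p_ideal, p_real = total_pressure(0.5)
# gamma > 1 everywhere, so the real total pressure exceeds the ideal one:
# a positive deviation from Raoult's law.
```

The sign of the Margules parameter is what encodes "A-B weaker than A-A and B-B"; flipping it negative would model a negative deviation instead.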
{ "domain": "chemistry.stackexchange", "id": 7810, "tags": "vapor-pressure" }
Frictional force doesn't depend on surface area, but why does this application work?
Question: I know friction doesn't depend on surface area, and the professor has been demonstrating the same in all the previous lectures. But in this lecture he shows an application where friction helps in balancing a large weight ($T_2$) with a much smaller weight ($T_1$). He further says increasing the angle over which the rope is in contact with the cylinder increases the frictional force and helps in balancing a much larger weight $T_2$. Doesn't this contradict the fact that friction doesn't depend on surface area? Hope I gave all the details so seeing the video is not required, but here it is. Answer: Frictional force does not directly depend on surface area, but it does depend on the normal reaction force. Consider two cubic bodies made of the same material. The bigger body will have more weight, and higher friction will act on it. This is not a direct consequence of the bigger body having a higher surface area. Generally bodies with higher surface area have more weight, so the frictional force acting on them is higher.
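The rope-around-cylinder setup in the question is the classic capstan (Euler–Eytelwein) relation, $T_2 = T_1 e^{\mu\theta}$: the holding ratio grows with wrap angle (via the normal force the rope presses on the drum), not with contact area. A small numeric sketch, where the friction coefficient and wrap angles are illustrative choices:

```python
import math

def holding_ratio(mu, theta):
    """Capstan equation: max T2/T1 a rope can hold after wrap angle theta (rad)."""
    return math.exp(mu * theta)

mu = 0.3  # illustrative rope-on-drum friction coefficient
for turns in (1, 2, 3):
    theta = 2 * math.pi * turns
    # each extra full turn multiplies the holding ratio by the same factor exp(2*pi*mu)
    print(turns, holding_ratio(mu, theta))
```

The exponential growth per turn is why a sailor can hold a ship with a few wraps of rope around a bollard, with no contradiction of the area-independence of friction.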
{ "domain": "physics.stackexchange", "id": 63112, "tags": "newtonian-mechanics, forces, friction, string" }
Is $TdS = dQ$ true for processes involving mass transfer?
Question: In his famous thermodynamics textbook, Callen writes The identification of $-PdV$ as the mechanical work and of $TdS$ as the heat transfer is valid only for quasi-static processes. My question -- is this true for all processes? That is, $dQ = TdS$ is often shown for closed systems by rearranging the first law: $$dQ = dU - dW$$ and the total differential for $dU$, $$dU = TdS - PdV.$$ For open systems, this total differential becomes $$dU = TdS - PdV + \sum \mu_idN_i;$$ does the first law correspondingly become $$dQ = dU - dW - \sum \mu_idN_i?$$ If so, is the first law not just a tautology (or at least redundant) given the total differential of $dU$? Answer: The first law in differential form for an open system is indeed $$dU = T\,dS - P\,dV + \sum \mu_i\,dN_i;$$ we can add energy to a system by heating it, doing work on it, or adding matter. Notably, however, the first law expressed using heat and work is not the equation you wrote but $$dQ = dU - dW - \sum h_i\,dN_i,$$ where the molar enthalpy $h=Ts+\mu$ appears rather than the chemical potential. The reason is that incoming species bring their own entropy $s_i$. For the same reason, $T\,dS$ can no longer be associated with $dQ$; we now have $T\,dS=dQ+T\sum s_i\,dN_i$. (With this, verify that the two equations above are equivalent; mechanical work is still defined as $dW=-P\,dV$.) These points are discussed in Knuiman and Barneveld, "On the relation between the fundamental equation of thermodynamics and the energy balance equation in the context of closed and open systems," J. Chem. Educ. 2012, 89, 968–972. Edit: There's an objection expressed in the comments that the relation $T\,dS=dQ+T\sum s_i\,dN_i$ applies only at constant pressure and constant temperature.
Although Knuiman and Barneveld make these assumptions for simplicity, using a single pressure reservoir and a single temperature reservoir, multiple reservoirs could be used to reversibly bring the system of interest to any other pressure and temperature. The relation appears in many locations elsewhere without any constant-pressure or constant-temperature constraints mentioned. For example, De Groot and Mazur in Non-equilibrium Thermodynamics give the entropy flow as $$\boldsymbol{J_s}=\frac{1}{T}\boldsymbol{J^\prime_q}+\sum_{k=1}^n s_k\boldsymbol{J_k},$$ where $\boldsymbol{J^\prime_q}$ is the "heat flow" and $s_k$ is the "partial specific entropy of component $k$" (with diffusive flow $\boldsymbol{J_k}$). They state, "Written in this way the entropy flux contains the heat flow $\boldsymbol{J^\prime_q}$ and a transport of partial entropies with respect to the barycentric velocity $\boldsymbol v$." There's no mention of any constant-pressure or constant-temperature constraint. Larson and Pings in "The condition for chemical equilibrium in open systems" write "For an open system, the second law of thermodynamics may be written $T\,dS = \delta q + T\,dS^{(i)}_\text{irr} + T\,dS^{(e)},$ where $dS^{(i)}_\text{irr}$ and $dS^{(e)}$ are internal and external contributions to the differential change in the entropy of the system, due to irreversible chemical reaction and viscous dissipation [absent in this context], and to material entering or leaving under different conditions than in the system." There's no mention of any constant-pressure or constant-temperature constraint. Thipse writes in Advanced Thermodynamics that "The entropy balance equation for an open thermodynamic system is $\frac{dS}{dt}=\sum_{k=1}^K \dot{M}_k\hat{S}_k+\frac{\dot Q}{T}+\dot{S}_\text{gen}$, where $\sum_{k=1}^K \dot{M}_k\hat{S}_k$ = The net rate of entropy flow due to the flows of mass into and out of the system (where $\hat{S}$ = entropy per unit mass). 
$\frac{\dot{Q}}{T}$ = The rate of entropy flow due to the flow of heat across the system boundary. $\dot{S}_{gen}$ = The rate of internal generation of entropy within the system [absent in this context]." There's no mention of any constant-pressure or constant-temperature constraint. And so on.
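The claimed equivalence of the two forms of $dQ$ is easy to verify numerically; here is a sketch for a single species with arbitrary illustrative values for the state and the differentials (using $dW=-P\,dV$ and $h=Ts+\mu$ as in the answer):

```python
import math

# Arbitrary illustrative state and differentials (single species, SI-like units)
T, P = 300.0, 1.0e5          # temperature (K), pressure (Pa)
s, mu = 2.0, 50.0            # molar entropy and chemical potential
dS, dV, dN = 0.1, 0.01, 0.001

h = T * s + mu                       # molar enthalpy, h = Ts + mu
dU = T * dS - P * dV + mu * dN       # fundamental equation

# First law with the enthalpy of the incoming matter:
dQ_first_law = dU + P * dV - h * dN          # dQ = dU - dW - h dN, with dW = -P dV
# Entropy balance: T dS = dQ + T s dN  =>  dQ = T dS - T s dN
dQ_entropy = T * dS - T * s * dN

# The two expressions for dQ agree, confirming the equivalence the answer asks
# the reader to verify.
```

Substituting the fundamental equation into the first-law form cancels the $-P\,dV$ and $\mu\,dN$ terms and leaves exactly $T\,dS - Ts\,dN$, term by term.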
{ "domain": "physics.stackexchange", "id": 95560, "tags": "thermodynamics, energy, entropy" }
Exposition of categorical models of type theory from type-theoretic perspective
Question: Are there any formalizations or expositions of categorical models from a type-theoretic point of view? What I have in mind, to get a better grasp of categorical models of dependent types, is treating categories as a type in some type-theoretic universe and showcasing how to pass from an inference system (as, let's say, an indexed type) for a universe-free type theory into a category, and then back to properties of the inference system such as consistency (as external non-inhabitance of an empty type), perhaps showcasing what goes wrong for an inconsistent inference system. Are there currently any formalizations of this form? Answer: For simple type theories, there's a very simple dictionary: judgement ↔ category, type ↔ object, context ↔ (monoidal) product, term ↔ morphism. However, interpreting dependent type theories is much more subtle, because the interpretation of a type depends on the values of the context: the type in context $b:2 \vdash \mathsf{if}\,b\,\mathsf{then}\, \Bbb{N} \,\mathsf{else}\,1$ is either the type of natural numbers or the unit type, depending on the value of $b$. The obvious thing to do when interpreting type theories in $\mathrm{Set}$ is just to interpret $\Gamma$ as the collection of closed substitutions inhabiting $\Gamma$, and model $\Gamma \vdash A$ as a map $|\Gamma| \to \mathrm{Set}$. But that's not the only way! Here's another: Imagine that we have a closed type $\Sigma b:2.\,\mathsf{if}\,b\,\mathsf{then}\, \Bbb{N} \,\mathsf{else}\,1$. This basically pairs a value for $b$ with a value of the dependent type. Consider the first projection $\pi_1 : (\Sigma b:2.\,\mathsf{if}\,b\,\mathsf{then}\, \Bbb{N} \,\mathsf{else}\,1) \to 2$ If $\pi_1(b,v) = \mathsf{true}$, then we know that $v$ must be a natural number. If $\pi_1(b,v) = \mathsf{false}$, then we know that $v$ is a unit value. So the inverse image $\pi_1^{-1}(\mathsf{true}) = \Bbb{N}$, and the inverse image $\pi_1^{-1}(\mathsf{false}) = 1$.
So in general, we can think of a judgement $\Gamma \vdash A$ as being interpreted as the inverse images of $\pi_1 : (\Sigma \gamma:\Gamma. A) \to \Gamma$ This gives you the key idea for understanding models of dependent type theory. You need: A category $\Bbb{C}$ where objects are contexts and morphisms are substitutions. So a well-formed context $\Gamma\,\mathsf{ok}$ is an object $|\Gamma|$ in $\Bbb{C}$. A type-in-context $\Gamma \vdash A$ is a $\Bbb{C}$-morphism $T : |A| \to |\Gamma|$. The intuition is that you model the dependency of $A$ on $\Gamma$ via the inverse image operation. So if $\gamma \in |\Gamma|$, we want $T^{-1}(\gamma) = A(\gamma)$. To model empty contexts, we use a unit object 1. To model context extension, $|\Gamma, A|$, what we do is note that $id_{\Gamma} : |\Gamma| \to |\Gamma|$ and $T : |A| \to |\Gamma|$, and then take the pullback of $id_{|\Gamma|}$ and $T$. Set-theoretically, this is the "fiber product" -- it is the set of pairs $(\gamma, a)$ where $a \in T^{-1}(\gamma)$. A term-in-context $\Gamma \vdash e : A$ is a map $|e| : |\Gamma| \to |\Gamma,A|$ such that $|e|; p_1 = id_{|\Gamma|}$. Here, $p_1$ is the first projection of the pullback $|\Gamma, A|$. What this says is that $|e|(\gamma) = (\gamma, a)$, where $a \in T^{-1}(\gamma)$. Finally, I've been a little low-level here. It's technically more convenient to use the "slice category" construction when working with maps $|A| \to |\Gamma|$, because that packages up some of the coherence conditions (like the one in 5) in a nice way. To a computer scientist, this feels like a very inside-out way of thinking about indexed objects, but (a) people who know tell us it works better in the long run, and (b) it's kind of fun. Try to see how to compose a type in context $T : |A| \to |\Gamma|$ with a substitution $\sigma : |\Delta| \to |\Gamma|$ and you'll learn why category theorists talk about substitution as pullback!
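The inverse-image picture has a tiny finite model worth playing with; this is a hypothetical sketch (names and the truncation of ℕ are my own choices), with the context $b:2$ and the dependent type from the answer:

```python
# Toy model: context Gamma = 2, dependent type A(b) = (truncated) N if b else 1
Gamma = [True, False]

def A(b):
    """The dependent type, truncating N to {0, 1, 2} to keep the model finite."""
    return [0, 1, 2] if b else [()]

# Total space: Sigma b:2. A(b), with pi1 the first projection
total = [(b, a) for b in Gamma for a in A(b)]

def fiber(b):
    """Inverse image pi1^{-1}(b): recovers A(b) from the projection alone."""
    return [a for (b2, a) in total if b2 == b]

# A term-in-context b:2 |- e : A is a section of pi1: for each b it picks an
# element of the fiber over b, and pi1 composed with it is the identity.
section = {b: fiber(b)[0] for b in Gamma}
```

The point of the sketch is that `fiber` reconstructs the family `A` purely from the projection map, which is exactly how a type-in-context is recovered from the morphism $T : |A| \to |\Gamma|$.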
{ "domain": "cstheory.stackexchange", "id": 5547, "tags": "type-theory, ct.category-theory, model-theory" }
C++ Minimalistic Unit Testing Library
Question: I was looking for a unit testing library for C++, and I found many. Unfortunately, I decided I didn't want to use any of them. I decided to make my own. This is the result. I made heavy use of the preprocessor, and followed the convention of leading underscore to mean non-published. Everything is done in all-caps, to try to follow good conventions and to avoid name collisions. Please note that this code is published under the MIT license. unittest.h #pragma once #include <vector> #include <iostream> // Helper Macros #define _STR_HELP(x) #x #define _STR(x) _STR_HELP(x) #define ESCAPE(...) __VA_ARGS__ // TEST_CASE(std::map<int, int>) would fail. Wrap in this Macro to not fail. #define _FILE_LINE __FILE__ ":" _STR(__LINE__) // Assertion Macros #define _GENERATE_MESSAGE(test) _FILE_LINE " Failed " #test "." // Message in almost every assertion /* * Assertions are intended to be used within a TEST_CASE macro. We return EXIT_FAILURE if we fail the assertion, * meaning that the TEST_CASE fails. Assertions can also be used straight inside the main function, in which case * the main function will exit. They are intended to be used within the TEST_CASE macro, so it's not recommended to do * that. */ #define _ASSERT(test, message) if (!(test)) {\ std::cerr << message << std::endl;\ return EXIT_FAILURE;\ } // One argument assertion. This generates a message to display. #define _ASSERT_1(test) _ASSERT(test, _GENERATE_MESSAGE(test)) // Two argument assertion, which appends the user-defined message to the generated one. #define _ASSERT_2(test, additional) _ASSERT(test, _GENERATE_MESSAGE(test) " " additional) // Helper macro to get the correct assertion (out of _ASSERT_1 or _ASSERT_2) #define _GET_ASSERTION(_1, _2, NAME, ...) NAME // Public Macro for actually calling the ASSERT #define ASSERT(...) _GET_ASSERTION(__VA_ARGS__, _ASSERT_2, _ASSERT_1)(__VA_ARGS__) // More advanced macro than ASSERT; it tells the values of each of the arguments as part of the message. 
#define ASSERT_EQ(a, b) {\ const auto& _VAL_1 = a;\ const auto& _VAL_2 = b;\ if (_VAL_1 != _VAL_2) {\ std::cerr << _FILE_LINE " Failed " #a " == " #b ". Actual Values:\n" \ #a ": " << _VAL_1 << "\n" \ #b ": " << _VAL_2 << std::endl;\ return EXIT_FAILURE;\ }\ } /* * Convenience struct for storing test cases and all related data into a vector. */ struct _TEST_CASE { const char *name; int (*function)(); bool result; }; std::vector<_TEST_CASE> _TEST_CASES; // All test cases are added to this global vector. // Test Execution Macros #define TEST_CASE(name, code) _TEST_CASES.push_back({ name, [](){ code; return EXIT_SUCCESS; }, false }); #define RUN_TESTS() std::cout << std::endl; \ for (_TEST_CASE& testCase : _TEST_CASES) {\ testCase.result = EXIT_SUCCESS == (*testCase.function)();\ if (!testCase.result) {\ std::cerr << "Failed test case " << testCase.name << "\n";\ }\ }\ std::cout << "Results: ";\ for (_TEST_CASE testCase : _TEST_CASES) {\ std::cout << (testCase.result ? '.' : 'F');\ }\ std::cout << std::endl << std::endl; Answer: Unfortunately, I decided I didn't want to use any of them. I decided to make my own. I went the same way with my unit testing (for personal projects I develop at home), but for production code this is a bad decision to make. I started my unit testing lib roughly two years ago, and every two or three weeks I keep adding features to it (and it is still not complete). Here are some things I would not do (and why): I made heavy use of the preprocessor [...] That's a bad call. Ideally, you should only use the preprocessor when no other alternative exists. In this case, many, many alternatives exist. and followed the convention of leading underscore to mean non-published. This potentially causes your code to exhibit UB, because a leading underscore followed by a capital letter is reserved for standard library implementers (I think). You also used the same coding convention for code and macros (please don't).
The way you use macros ensures client code cannot avoid using them to write tests. If you redesign your API to not rely on macros, you can then add the macros later with minimal effort. This will make your code maintainable (it's easier to maintain C++ functions than macros) and will not impose macros on the client code. Some features you may wish to add (complementing the list provided by Loki): test suite support automatic processing of exceptions in your unit tests: expected exceptions (testing that your code correctly identifies and reacts to error scenarios) unexpected exceptions (should cause your tests to fail gracefully and report the errors) code checkpoints: this is a (usually transparent) feature that marks the last executed line in a test (last unit test API file and line, or last _ASSERT macro call, for example); if an unexpected exception occurs, that location is reported, automatically restricting the range of code you have to check to fix the issue. disconnected/customized reporting of results; ideally, you should be able to plug a file writer, an XML logger or anything else into a unit test suite and generate the same test output report in various formats. Other problems: the code is monolithic (you cannot choose to use something other than std::cerr in the macros, because it is hard-coded instead of being injected into the code). the code is difficult to maintain (this is a classic problem of abusing macros) As a point of comparison, here's how unit tests look with my (custom) library: void bad_command(unittest::test_context& ctx) { // tested scenario here ctx.check_equal(1, 2); // will fail: 1 != 2 } int main(int argc, char* argv[]) { unittest::runtime_args args{ argv, argv + argc }; auto suite = unittest::make_test_suite("test-utility-apis", std::cout, args); suite.add_test("bad_command", bad_command); // one call per unit test return suite.run(); } This code contains no macros. The passing of runtime args
to the test suite allows for: selection of output format, filtering of executed tests based on args and (probably in the future) more runtime arguments (parallel execution, etc).
{ "domain": "codereview.stackexchange", "id": 15553, "tags": "c++, unit-testing, library, macros" }
Missing resource hokuyo_node
Question: I am trying to use a Hokuyo rangefinder, following the tutorial, but fail at the first step: rosdep install hokuyo_node rviz It seems that hokuyo_node is not in ROS_PACKAGE_PATH: ERROR: Rosdep cannot find all required resources to answer your query Missing resource hokuyo_node ROS path [0]=/opt/ros/fuerte/share/ros ROS path [1]=/home/dbarry/fuerte_workspace/sandbox ROS path [2]=/opt/ros/fuerte/stacks ROS path [3]=/opt/ros/fuerte/share ROS path [4]=/opt/ros/fuerte/share/ros But hokuyo_node is in what seems an odd spot: /opt/ros/fuerte/stacks/simulator_gazebo/gazebo_plugins/ and adding that path to ROS_PACKAGE_PATH does not help. I must be missing something obvious, but need help figuring it out. Using Ubuntu 12.04, ROS fuerte Originally posted by dan on ROS Answers with karma: 875 on 2012-11-14 Post score: 1 Answer: The hokuyo_node in gazebo_plugins is just an executable. You are missing the package. You can install it using sudo apt-get install ros-fuerte-laser-drivers Or you can install it locally using rosco hokuyo_node This requires the rosinstall bundle: sudo apt-get install python-rosinstall Originally posted by jbarry with karma: 280 on 2012-11-14 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by dan on 2012-11-14: Excellent answer! I updated the wiki here: http://www.ros.org/wiki/hokuyo_node/Tutorials/UsingTheHokuyoNode
{ "domain": "robotics.stackexchange", "id": 11752, "tags": "ros, hokuyo-node, hokuyo" }
To how many significant figures do I reduce my absolute uncertainty if it has more significant figures than my measurement?
Question: They say that the absolute uncertainty in my measurement should always be reduced to 1 significant figure, but I don't understand why. Answer: The key point is the estimation of the standard deviation of the estimated measurement standard deviation (see Stats SE). The exact formula is complicated, with extensive Gamma-function involvement. The essential information is that the relative uncertainty of $s$ decreases very slowly with the number of measurements. It can be approximated by invoking the Stirling approximation for the factorial of large numbers: $$SD(s)=s\cdot \sqrt{\mathrm{e}\cdot \left(1-\frac1n\right)^{(n-1)}-1 }$$ where SD is the standard deviation, $s$ is the estimate of the standard deviation $\sigma$, $\mathrm{e}$ is the base of the natural logarithm, and $n$ is the number of samples. The estimated relative error of $s$ is about 33% for 5 samples, about 20% for 13 samples, and about 10% for 50 samples. So 1 significant digit for $s$ is a reasonable rule, unless the measurement sample set is large enough.
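The quoted percentages follow directly from the approximation; here is a quick check (the formula is the answer's Stirling-based approximation, not the exact Gamma-function result):

```python
import math

def rel_sd_of_s(n):
    """Approximate relative uncertainty SD(s)/s for a sample of size n."""
    return math.sqrt(math.e * (1 - 1 / n) ** (n - 1) - 1)

# Reproduces the figures quoted above:
#   n = 5  -> roughly 33%
#   n = 13 -> roughly 20%
#   n = 50 -> roughly 10%
for n in (5, 13, 50):
    print(n, rel_sd_of_s(n))
```

With a ~20-30% relative error on the uncertainty itself for typical small samples, a second significant figure in the stated uncertainty carries essentially no information, which is the rationale for the one-digit rule.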
{ "domain": "chemistry.stackexchange", "id": 12277, "tags": "analytical-chemistry" }
Spring compression and Momentum
Question: I am asked to rank a series of elastic collisions in order of greatest time of max compression to least time of max compression for several vehicles with varying masses and velocities, which strike a spring with a spring constant k. I can determine the momentum in each case, as I am given the masses and velocities. Additionally, I can determine each of their kinetic energies. I know that the kinetic energy of the car will be converted into potential energy in the spring: $$1/2mv^2 =1/2kx^2$$ Also, I know the impulse on the car is going to be $$Ft= Δvm $$ $$ t=Δvm/F$$ I also know that the force on the spring will be $F=kx$, but I am not sure how the magnitude of the momentum of the cars is going to relate to the time of maximum spring compression. I am asking for a little guidance in my reasoning. Answer: The question seems a bit odd because "time of maximum spring compression" is an odd concept. The spring compression is a function of time, and the time of maximum spring compression is zero because it's an instant, not a time interval. Maybe the question means the time interval from the time the car first touches the spring to the time of greatest compression. Assuming this is the case, and bearing in mind that because this is a homework question we're only allowed to give hints, the trick to doing this question is to realise that the spring behaves as a simple harmonic oscillator, i.e. the compression of the spring from the moment the car touches it will be: $$d = A \sin(\alpha t)$$ where $A$ and $\alpha$ are some constants that you need to calculate. The problem simplifies a lot if you think about the relation between the period of a harmonic oscillator and the amplitude of oscillation.
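Spelling out the SHM hint numerically: the time from first contact to maximum compression is a quarter of the oscillator period, $t = \frac{\pi}{2}\sqrt{m/k}$, so it depends on the mass but not on the impact speed. A short sketch with made-up numbers (spring constant and masses are illustrative):

```python
import math

def time_to_max_compression(m, k):
    """Quarter period of SHM: time from first contact to maximum compression."""
    return (math.pi / 2) * math.sqrt(m / k)

k = 4.0e4   # hypothetical spring constant, N/m
# Heavier car -> longer time to max compression; the impact speed v only
# sets the amplitude A = v*sqrt(m/k), not the duration of the quarter cycle.
for m in (800.0, 1000.0, 1500.0):
    print(m, time_to_max_compression(m, k))
```

This is the amplitude-independence of the SHM period that the last sentence of the answer is nudging toward.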
{ "domain": "physics.stackexchange", "id": 3736, "tags": "homework-and-exercises, momentum" }
Causal inverse of $h[n]=\delta[n]-\alpha\delta[n-1]$
Question: Find the causal inverse of $$h[n]=\delta[n]-\alpha\delta[n-1]$$ We have $h[0]=1$ and $h[1]=-\alpha$; also $h[n]=0$ for $n>1$. From the formula $$ h_i[n]=\sum_{i=1}^n\frac{h[n]h_i[n-i]}{h[0]} $$ we should have the recursive difference equation $$ h_i[n]=-\alpha h_i[n-1] $$ However, this result is different from the book Digital Signal Processing (Proakis). Also notice that the book stated that $h[n]=0$ for $n≥\alpha$, which does not make sense to me. Answer: Of course it should be $n\ge 2$; that's a typo in your edition. The sign of the formula in your question is wrong. It should be $$h[0]h_I[n]=-\sum_{k=1}^nh[k]h_I[n-k],\qquad n>0\tag{1}$$ With $h[0]=1$, $h[1]=-\alpha$, and $h[n]=0$ for $n>1$, Eq. $(1)$ simplifies to $$h_I[n]=-h[1]h_I[n-1]=\alpha h_I[n-1],\qquad n>0\tag{2}$$ And since $h_I[0]=1/h[0]=1$ you obtain the result given in the book.
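The resulting inverse $h_I[n]=\alpha^n u[n]$ can be sanity-checked by direct convolution; a short sketch (truncating the infinite inverse at N samples, so only the first N output samples are meaningful):

```python
alpha = 0.5                      # any |alpha| < 1 gives a stable causal inverse
N = 12
h = [1.0, -alpha]                # h[n] = delta[n] - alpha*delta[n-1]
h_inv = [alpha**n for n in range(N)]   # h_I[n] = alpha^n for n >= 0

# Direct convolution (h * h_I)[n] over the first N samples
conv = [sum(h[k] * h_inv[n - k] for k in range(len(h)) if 0 <= n - k < N)
        for n in range(N)]
# conv should be the unit impulse: 1, 0, 0, ... up to the truncation point,
# since alpha^n - alpha * alpha^(n-1) = 0 for n >= 1.
```

For $|\alpha| \ge 1$ the recursion still holds formally, but the causal inverse is no longer stable.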
{ "domain": "dsp.stackexchange", "id": 10706, "tags": "filters, discrete-signals, finite-impulse-response, infinite-impulse-response, deconvolution" }
Glycemic Index and two-hour blood glucose response curve (AUC)? Where are the AUC charts?
Question: I understand the "Glycemic Index" (GI) of foods is calculated using the area under a 2-hour glucose/blood response curve (AUC) after a 24-hour fast and consuming carbohydrates. Therefore a higher GI indicates there is more glucose available in the bloodstream during these 2 hours. It seems the GI is a general indicator of glucose/blood levels over 2 hours and has little information on when the spike of carbohydrate-to-glucose conversion occurs or when carbohydrates are first converted to glucose. Is this a true statement? I searched for the AUC charts online but found little information. I suspect the AUC charts would show when the glucose/blood levels first increase and how they are affected over time. Does anyone know where to find these charts for foods? Also, is this the correct forum to be asking this in? Unsure if Beta Medical Sciences Stack Exchange is appropriate. Answer: You could search something like: https://scholar.google.com/scholar?hl=en&q=glycemic+index+AUC+glucose and your results will include papers that calculate the glycemic index for some food based on glucose measurements. For example this one: Robert, S. D., Ismail, A. A. S., Winn, T., & Wolever, T. (2008). Glycemic index of common Malaysian fruits. Asia Pacific Journal of Clinical Nutrition, 17(1). includes exactly this kind of glucose response figure. You can expect that the conversion of carbohydrates to glucose begins immediately after eating, so it's mostly the rate that varies by food, producing different curves.
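For concreteness, the incremental AUC behind the GI is just the trapezoidal area of the response curve above the fasting baseline; a sketch with entirely made-up glucose readings:

```python
# Hypothetical 2-hour glucose response (times in minutes, glucose in mmol/L)
times = [0, 15, 30, 45, 60, 90, 120]
glucose = [4.5, 6.0, 7.5, 7.0, 6.2, 5.3, 4.6]
baseline = glucose[0]   # fasting value

# Incremental area under the curve (iAUC): area above the baseline,
# computed with the trapezoidal rule
rise = [max(g - baseline, 0.0) for g in glucose]
iauc = sum((rise[i] + rise[i + 1]) / 2 * (times[i + 1] - times[i])
           for i in range(len(times) - 1))

# The GI is then 100 * iAUC(test food) / iAUC(reference glucose),
# averaged over subjects.
```

Since the iAUC collapses the whole curve into one number, two foods with the same GI can indeed have differently shaped curves (early spike vs. slow rise), which is the point of the question.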
{ "domain": "biology.stackexchange", "id": 12140, "tags": "human-biology, nutrition, glucose" }
RGBDSlam_v2 error with rosdep install
Question: Hey, so I'm running hydro compiled from source on XUbuntu 13.10 ARM for Odroid, and I've been smashing my head against an error for a while now. When I try to run rosdep install rgbdslam after downloading all the right stuff based on this guide http://felixendres.github.io/rgbdslam_v2/, I get this error: odroid@odroid:~/rgbdslam_catkin_ws$ rosdep install rgbdslam ERROR: the following packages/stacks could not have their rosdep keys resolved to system dependencies: rgbdslam: No definition of [message_runtime] for OS version [saucy] Does anybody know why this might be or how to fix it? I've tried just skipping the step and just using catkin_make, but there are a lot of dependencies that I don't know how to acquire, so I'd like to get this to work. Originally posted by 4dahalibut on ROS Answers with karma: 15 on 2014-06-27 Post score: 1 Answer: The rosdep command you tried only works for debian installs. It checks, for all packages that are (recursive) dependencies of rgbdslam, whether they are installed from apt and if not, tries to install them. Since they are not available on saucy, the command fails. For from-source installs you need to specify the --ignore-src option to make rosdep ignore packages found in the ROS_PACKAGE_PATH, i.e. the ones in your from-source base install: rosdep install -i rgbdslam Originally posted by demmeln with karma: 4306 on 2014-06-30 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by 4dahalibut on 2014-06-30: This worked, thanks.
{ "domain": "robotics.stackexchange", "id": 18419, "tags": "ros, rosdep, rgbdslam-v2" }
Improvement to Luhn-Checksum Algorithm in Java
Question: I just made a small Luhn checksum validator. Description of the Luhn checksum: every digit of the integer gets added together individually, but every second digit instead gets doubled before being added. If the doubled value has two digits, its individual digits get added instead. It works, but is there a way to make it more elegant and maybe more efficient, while maintaining readability? The while loop seems a bit ugly to me, and I'm not completely satisfied with the name getLuhn, but I don't know how else to name it. public class Luhn { public static void main(String[] args) { System.out.println(validate(1762483)); } public static boolean validate(int id){ int totalSum = 0; while(id>0){ totalSum += id%10; id /= 10; if(id>0){ totalSum += getLuhn(id%10); id /= 10; } } return (totalSum%10 == 0); } private static int getLuhn(int id){ id *= 2; return id%10 + id/10; } } Every comment is appreciated <3 From cleaner code, over more idiomatic Java, to improvements in performance. Answer: There is not much to improve in your code; it's compact and efficient. These are a few suggestions: Input validation When the input number is negative the result is always true. To avoid confusion you can throw an exception. public static boolean validate(int id) { if (id < 0) { throw new IllegalArgumentException("Input cannot be negative."); } // .. } Clarity The Luhn checksum algorithm is described very well on Wikipedia and by you in your question, but your implementation is not easy to follow. For example: totalSum += id%10; Here the last digit of id is added to totalSum. Adding a method (with the explanation of the operation in the name) makes it more readable: totalSum += getRightMostDigit(id); Same for: id /= 10; This operation removes the last digit of id, which can be changed to: id = dropRightMostDigit(id); I would also change the input variable name from id to number, but this is personal taste. Performance It's hard to improve performance and keep readability for your method.
The only change I would suggest is to replace the getLuhn method with a static array. This change makes it two times faster on my machine and gets rid of the additional method. Code refactored public static boolean validate(int number) { if (number < 0) { throw new IllegalArgumentException("Input cannot be negative."); } // Array containing: // - for index in [0,4]: the double of the index value // - for index in [5,9]: the sum of the digits of the doubled index value. E.g. index = 6 -> 6*2 = 12 -> 1+2 = 3 int[] luhn = new int[] { 0, 2, 4, 6, 8, 1, 3, 5, 7, 9 }; int totalSum = 0; while (number > 0) { totalSum += getRightMostDigit(number); number = dropRightMostDigit(number); if (number > 0) { totalSum += luhn[getRightMostDigit(number)]; number = dropRightMostDigit(number); } } return totalSum % 10 == 0; } private static int getRightMostDigit(int number) { return number % 10; } private static int dropRightMostDigit(int number) { return number / 10; } Personal opinion Many implementations of the Luhn Checksum accept a String as input, so they can be used to validate credit cards or simply to operate on numbers bigger than an int. What is the use case of your algorithm? The purpose of your implementation can also be included as a comment, it will help others to understand whether they need it or not.
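As a language-agnostic cross-check of the checksum rule (handy when refactoring the Java version), here is the same algorithm sketched in Python; it agrees with the original validate on the example number from main:

```python
def luhn_valid(number):
    """Luhn checksum: double every second digit from the right; a doubled
    value above 9 contributes the sum of its digits (equivalently, d*2 - 9)."""
    total = 0
    for i, ch in enumerate(reversed(str(number))):
        d = int(ch)
        if i % 2 == 1:       # every second digit, counting from the right
            d *= 2
            if d > 9:
                d -= 9       # same as summing the two digits of d
        total += d
    return total % 10 == 0
```

Flipping any single digit changes the total modulo 10, which is exactly the class of typing errors the Luhn checksum is designed to catch.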
{ "domain": "codereview.stackexchange", "id": 38967, "tags": "java, algorithm" }
Parse YAML file with nested parameters as a Python class object
Question: I would like to use a YAML file to store parameters used by computational models developed in Python. An example of such a file is below: params.yaml reactor: diameter_inner: 2.89 cm temperature: 773 kelvin gas_mass_flow: 1.89 kg/s biomass: diameter: 2.5 mm # mean Sauter diameter (1) density: 540 kg/m^3 # source unknown sphericity: 0.89 unitless # assumed value thermal_conductivity: 1.4 W/mK # based on value for pine (2) catalyst: density: 1200 kg/m^3 # from MSDS sheet sphericity: 0.65 unitless # assumed value diameters: [[86.1, 124, 159.03, 201], microns] # sieve screen diameters surface_areas: values: - 12.9 - 15 - 18 - 24.01 - 31.8 - 38.51 - 42.6 units: square micron Parameters for the Python model are organized based on the type of computations they apply to. For example, parameters used by the reactor model are listed in the reactor section. Units are important for the calculations, so the YAML file needs to convey that information too. I'm using the PyYAML package to read the YAML file into a Python dictionary. To allow easier access to the nested parameters, I use an intermediate Python class to parse the dictionary values into class attributes. The class attributes are then used to obtain the values associated with the parameters.
Below is an example of how I envision using the approach for a much larger project: params.py import yaml class Reactor: def __init__(self, rdict): self.diameter_inner = float(rdict['diameter_inner'].split()[0]) self.temperature = float(rdict['temperature'].split()[0]) self.gas_mass_flow = float(rdict['gas_mass_flow'].split()[0]) class Biomass: def __init__(self, bdict): self.diameter = float(bdict['diameter'].split()[0]) self.density = float(bdict['density'].split()[0]) self.sphericity = float(bdict['sphericity'].split()[0]) class Catalyst: def __init__(self, cdict): self.diameters = cdict['diameters'][0] self.density = float(cdict['density'].split()[0]) self.sphericity = float(cdict['sphericity'].split()[0]) self.surface_areas = cdict['surface_areas']['values'] class Parameters: def __init__(self, file): with open(file, 'r') as f: params = yaml.safe_load(f) # reactor parameters rdict = params['reactor'] self.reactor = Reactor(rdict) # biomass parameters bdict = params['biomass'] self.biomass = Biomass(bdict) # catalyst parameters cdict = params['catalyst'] self.catalyst = Catalyst(cdict) example.py from params import Parameters pm = Parameters('params.yaml') # reactor d_inner = pm.reactor.diameter_inner temp = pm.reactor.temperature mf_gas = pm.reactor.gas_mass_flow # biomass d_bio = pm.biomass.diameter rho_bio = pm.biomass.density # catalyst rho_cat = pm.catalyst.density sp_cat = pm.catalyst.sphericity d_cat = pm.catalyst.diameters sa_cat = pm.catalyst.surface_areas print('\n--- Reactor Parameters ---') print(f'd_inner = {d_inner}') print(f'temp = {temp}') print(f'mf_gas = {mf_gas}') print('\n--- Biomass Parameters ---') print(f'd_bio = {d_bio}') print(f'rho_bio = {rho_bio}') print('\n--- Catalyst Parameters ---') print(f'rho_cat = {rho_cat}') print(f'sp_cat = {sp_cat}') print(f'd_cat = {d_cat}') print(f'sa_cat = {sa_cat}') This approach works fine but when more parameters are added to the YAML file it requires additional code to be added to the class objects. 
I could just use the dictionary returned from the YAML package but I find it easier and cleaner to get the parameter values with a class interface. So I would like to know if there is a better approach that I should use to parse the YAML file? Or should I organize the YAML file with a different structure to more easily parse it? Answer: You could use a nested parser, using pint to do the unit parsing: from pint import UnitRegistry, UndefinedUnitError UNITS = UnitRegistry() def nested_parser(params: dict): for key, value in params.items(): if isinstance(value, str): try: value = UNITS.Quantity(value) except UndefinedUnitError: pass yield key, value if isinstance(value, dict): if value.keys() == {'values', 'units'}: yield key, [i * UNITS(value['units']) for i in value['values']] else: yield key, dict(nested_parser(value)) if isinstance(value, list): values, unit = value yield key, [i * UNITS(unit) for i in values] dict(nested_parser(yaml.safe_load(params))) {'reactor': {'diameter_inner': <Quantity(2.89, 'centimeter')>, 'temperature': <Quantity(773, 'kelvin')>, 'gas_mass_flow': <Quantity(1.89, 'kilogram / second')>}, 'biomass': {'diameter': <Quantity(2.5, 'millimeter')>, 'density': <Quantity(540.0, 'kilogram / meter ** 3')>, 'sphericity': <Quantity(0.89, 'dimensionless')>, 'thermal_conductivity': <Quantity(1.4, 'watt / millikelvin')>}, 'catalyst': {'density': <Quantity(1200.0, 'kilogram / meter ** 3')>, 'sphericity': <Quantity(0.65, 'dimensionless')>, 'diameters': [<Quantity(86.1, 'micrometer')>, <Quantity(124, 'micrometer')>, <Quantity(159.03, 'micrometer')>, <Quantity(201, 'micrometer')>], 'surface_areas': [<Quantity(12.9, 'micrometer ** 2')>, <Quantity(15, 'micrometer ** 2')>, <Quantity(18, 'micrometer ** 2')>, <Quantity(24.01, 'micrometer ** 2')>, <Quantity(31.8, 'micrometer ** 2')>, <Quantity(38.51, 'micrometer ** 2')>, <Quantity(42.6, 'micrometer ** 2')>]}} You might need to make your units understandable for pint, but for me that just meant changing the microns 
to µm and square micron to µm², and unitless to dimensionless. Using it statically: configuration = dict(nested_parser(yaml.safe_load(params))) # reactor reactor_config = configuration['reactor'] d_inner = reactor_config['diameter_inner'] temp = reactor_config['temperature'] mf_gas = reactor_config['gas_mass_flow'] print('\n--- Reactor Parameters ---') print(f'd_inner = {d_inner}') print(f'temp = {temp}') print(f'mf_gas = {mf_gas}') Or dynamically: for part, parameters in nested_parser(yaml.safe_load(params)): print(f'--- {part} Parameters ---') for parameter, value in parameters.items(): print(f'{parameter} = {value}') print('\n') You can check out the pint documentation on string formatting to format the units the way you want.
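If adding a pint dependency isn't an option, the same nested-walk idea can be done with the standard library only. The sketch below merely splits "value unit" strings into (magnitude, unit) tuples rather than building real unit objects; the function names `parse_quantity` and `parse_params` are made up for illustration, not from any library.

```python
# Stdlib-only sketch of the nested-parser idea: recursively walk the
# loaded YAML dict and split "value unit" strings into (float, unit)
# tuples. Illustrative only -- no dimensional analysis is performed.

def parse_quantity(text):
    """Split a string like '2.89 cm' into (2.89, 'cm').

    Returns the string unchanged if it does not start with a number.
    """
    head, _, tail = text.partition(' ')
    try:
        return (float(head), tail)
    except ValueError:
        return text

def parse_params(node):
    """Recursively convert unit-bearing strings in a nested structure."""
    if isinstance(node, dict):
        # the {'values': [...], 'units': '...'} convention from the YAML file
        if node.keys() == {'values', 'units'}:
            return [(v, node['units']) for v in node['values']]
        return {k: parse_params(v) for k, v in node.items()}
    # the [[v1, v2, ...], 'unit'] convention, e.g. the sieve diameters
    if isinstance(node, list) and len(node) == 2 and isinstance(node[1], str):
        values, unit = node
        return [(v, unit) for v in values]
    if isinstance(node, str):
        return parse_quantity(node)
    return node

params = {'reactor': {'diameter_inner': '2.89 cm', 'temperature': '773 kelvin'},
          'catalyst': {'diameters': [[86.1, 124], 'microns']}}
parsed = parse_params(params)
print(parsed['reactor']['diameter_inner'])  # (2.89, 'cm')
```

This keeps the YAML layout unchanged; pairing it with pint later only means replacing the tuple construction with `UNITS.Quantity(...)`.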
{ "domain": "codereview.stackexchange", "id": 30247, "tags": "python, python-3.x, yaml" }
Momentum expected value derivation. From classical form to quantum operator
Question: I found in many places that the average momentum of a particle is given by: $$\langle p\rangle =\int_{-\infty}^{\infty}\psi^* \left ( \frac{\hbar}{i} \right ) \frac{ \partial \psi}{\partial x} \: \mathrm{d}x $$ I think that it comes from considering the classical momentum: $$\langle p\rangle=m\frac{\mathrm{d}\langle x\rangle}{\mathrm{d}t}$$ and that the expected value of the position is given by: $$\langle x\rangle = \int_{-\infty}^{\infty}x\: \left | \psi(x,t) \right |^2 \: \mathrm{d}x $$ But when replacing $\langle x\rangle$ and differentiating inside the integral I don't know how to handle the derivatives of $\psi$ for getting the average momentum formula. Any suggestion? Answer: $m \frac{d}{dt} \langle x \rangle = m \frac{d}{dt} \int dx \ \Psi^* x \Psi$ Use product rule to get the above into the form: $= m \int dx \left[\frac{\partial \Psi^*}{\partial t} x\Psi + \Psi^* \frac{\partial x}{\partial t} \Psi + \Psi^* x \frac{\partial \Psi}{\partial t} \right] \ \ \ \ \ -(1)$ The second term contains $\frac{\partial x}{\partial t}$, which is $0$. This is just calculus. Now comes the crucial step of imposing physics: the Schrodinger equation: $ i \hbar \frac{\partial \Psi}{\partial t} = \hat{H} \Psi$ (and also $ -i \hbar \frac{\partial \Psi^*}{\partial t} = \hat{H} \Psi^*$) Write the operator $\hat{H}$ in terms of second order spatial derivative (acting on $\Psi$ and $\Psi^*$). Through Schrodinger's equation, you get a relation between second order spatial derivative and first order time derivative. Replace the first order time derivatives in $(1)$ with second order spatial derivatives. And then integrate by parts, to reduce the second order spatial derivative to first order spatial derivative, using the boundary condition that $\frac{d\Psi}{dx}\left(\text{and} \frac{d\Psi^*}{dx}\right)$ go to zero at infinity. 
After a few steps of algebra, you get it to the form $\int dx \ \Psi^* \left( -i\hbar \frac{\partial \Psi}{\partial x} \right)$, which is what you wanted: $\langle \hat{p} \rangle$. This (or a similar) calculation is usually given in various resources. Can you take it from here and do it yourself?
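As a numerical sanity check of the result (separate from the derivation itself), one can discretize a Gaussian wave packet $\psi(x) = e^{ik_0 x}e^{-x^2/2}$ and verify that $\int \psi^* (-i\hbar)\,\partial_x \psi \, dx$ returns the packet's mean momentum $\hbar k_0$. Units with $\hbar = 1$ and the grid parameters below are arbitrary choices.

```python
# Numerical check: <p> = ∫ psi* (-i ħ d/dx) psi dx should equal ħ*k0
# for psi(x) = exp(i*k0*x) * exp(-x**2/2). Working in units with ħ = 1.
import cmath

HBAR = 1.0
K0 = 2.0           # mean wavenumber of the packet
N = 4000
L = 10.0           # integrate over [-L, L]; psi is negligible outside
dx = 2 * L / N

xs = [-L + i * dx for i in range(N + 1)]
psi = [cmath.exp(1j * K0 * x - x * x / 2) for x in xs]

# normalize so that ∫ |psi|^2 dx = 1
norm = sum(abs(p) ** 2 for p in psi) * dx
psi = [p / norm ** 0.5 for p in psi]

# central finite difference for psi', then <p> = ∫ psi* (-i ħ psi') dx
p_expect = 0.0
for i in range(1, N):
    dpsi = (psi[i + 1] - psi[i - 1]) / (2 * dx)
    p_expect += (psi[i].conjugate() * (-1j * HBAR) * dpsi).real * dx

print(p_expect)  # ≈ 2.0, i.e. ħ*k0
```

The imaginary part of the integrand (coming from the real Gaussian envelope) integrates away, which is the discrete counterpart of the boundary terms vanishing in the integration by parts above.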
{ "domain": "physics.stackexchange", "id": 41888, "tags": "quantum-mechanics, momentum, wavefunction, schroedinger-equation" }
rosserial Teensy bluetooth problem
Question: Hello, I'm trying to use rosserial with Teensy 3.5 through bluetooth. My first step: change the serial port from Serial to Serial1. I follow the thread: https://answers.ros.org/question/198247/how-to-change-the-serial-port-in-the-rosserial-lib-for-the-arduino-side/?answer=295159#post-id-295159 It compiles with the Arduino Mega board but not with Teensy 3.5 (or other Teensy boards). I use: Ubuntu 16.04, Kinetic ros, Arduino 1.8.5. The Hello World example code: http://wiki.ros.org/rosserial_arduino/Tutorials/Hello%20World I tried two ways: Modify line 73 in the code arduino.1.8.5/libraries/ros_lib/ArduinoHardware.h iostream = &Serial1; Replace: ros::NodeHandle nh; with: class NewHardware : public ArduinoHardware { public: NewHardware():ArduinoHardware(&Serial1, 57600){}; }; ros::NodeHandle_<NewHardware> nh; The error when I try to compile is: .../arduino-1.8.5/libraries/ros_lib/ArduinoHardware.h:67:5: note: no known conversion for argument 1 from 'HardwareSerial*' to 'usb_serial_class*' no matching function for call to 'ArduinoHardware::ArduinoHardware(HardwareSerial*, int)' ArduinoHardware.h: http://docs.ros.org/jade/api/rosserial_arduino/html/ArduinoHardware_8h_source.html Originally posted by jordiguerrero on ROS Answers with karma: 16 on 2018-06-24 Post score: 0 Original comments Comment by jayess on 2018-06-24: What is your original problem (i.e., please update your question with a copy and paste of the error)? Why are you modifying the source code of a library? Can you please update your question with a copy and paste of the code that you wrote? Comment by gvdhoorn on 2018-06-25: @jordiguerrero: please post your last edit as an answer, and then accept your own answer. Comment by gvdhoorn on 2018-06-25: Also: I don't see #define USE_TEENSY_HW_SERIAL in your last edit? Comment by jordiguerrero on 2018-06-25: Ok, I make the changes. Sorry for any inconvenience, it is my first post... Comment by gvdhoorn on 2018-06-25: No need to apologise. 
Just keep it in mind for next time. Answer: SOLVED: I forgot to add the following define: #define USE_USBCON It was solved in the teensy forum, thank you to the Theremingenieur user: https://forum.pjrc.com/threads/52928-rosserial-bluetooth-teensy-problem The modified Hello World sample: /* * rosserial Publisher Example * Prints "hello world!" */ // Use the following line if you have a Leonardo or MKR1000 #define USE_USBCON #include <ros.h> #include <std_msgs/String.h> //ros::NodeHandle nh; class NewHardware : public ArduinoHardware { public: NewHardware():ArduinoHardware(&Serial1, 57600){}; }; ros::NodeHandle_<NewHardware> nh; std_msgs::String str_msg; ros::Publisher chatter("chatter", &str_msg); char hello[13] = "hello world!"; void setup() { nh.initNode(); nh.advertise(chatter); } void loop() { str_msg.data = hello; chatter.publish( &str_msg ); nh.spinOnce(); delay(1000); } Originally posted by jordiguerrero with karma: 16 on 2018-06-25 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 31083, "tags": "ros, bluetooth, rosserial, ros-kinetic" }
Asymptotics of the sum of a geometric series
Question: I have a parameter $q$ which is the probability of selecting a vertex (among $n$ vertices...) to be in a certain set. I am constructing the sets in an iterative way, having the vertex $v_i$ be in the set with probability $\leq \left(1 - q\right)^i$ (I'm not going into details about the order of the vertices for which this is performed, as I don't think it's vital). By that I can tell that the expected size of my set is $\Sigma_i \left(1-q\right)^i$. Now the part which baffles me is where it says: $$\Sigma_i \left(1-q\right)^i = O\left( \frac{1}{q} \right)$$ So I know $q$ represents the probability for a vertex to be selected. So $q<1$.... But as I recall the formula for the sum of a geometric series is: $$\frac{a_1 \cdot \left( \left(1-q\right)^i -1\right)}{1-q-1} = \frac{\left(1-q\right) \cdot \left( \left(1-q\right)^i -1\right)}{-q}$$ Seems more like $O\left( q^i \right)$ than $O\left( \frac{1}{q} \right)$. This is a silly question, but I would like to know where I got confused... Answer: There is an abuse of notation in your question: you are using $i$ as both the index for the summation and as the total number of terms in the sum (in the closed formula). Anyway, the ratio of the geometric series is $1-q$, but you mixed it up with $q-1$ in the closed formula. For simplicity consider the infinite sum $\sum_{i=0}^{\infty} (1-q)^i$, which is clearly an upper bound to the quantity you are after (and only affects the result by a multiplicative constant): $$ \sum_{i=0}^{\infty} (1-q)^i = \frac{1}{1 - (1-q)} = \frac{1}{q}. $$
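A quick numerical check of the bound in the answer: the partial sums of $\sum_i (1-q)^i$ always stay below $1/q$ and approach it as more terms are added.

```python
# Partial sums of the geometric series sum_i (1-q)^i are bounded by 1/q.
def geometric_sum(q, n):
    """Partial sum of (1 - q)**i for i = 0 .. n-1."""
    return sum((1 - q) ** i for i in range(n))

for q in (0.1, 0.3, 0.7):
    partial = geometric_sum(q, 1000)
    print(q, partial, 1 / q)   # the partial sum sits just below 1/q
```

With `q = 0.1`, 1000 terms already put the partial sum indistinguishably close to the limit `10`, since the remainder `(0.9)**1000` is astronomically small.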
{ "domain": "cs.stackexchange", "id": 21629, "tags": "asymptotics" }
The Coriolis force bending a railway
Question: Suppose a very long railway line goes from South Africa to Sweden, and then it's decided to move the entire railway line, sliding it 1 km to the north (leaving aside the difficulty of moving and the force required). Would the railway bend because of the Coriolis force? I mean, every point of the railway would have a different tangential velocity, so when moving, we have different speeds at different times, then that's acceleration, then force, so if those are different forces applied to different points of the rail, then perhaps it would bend. Answer: Of course there would be forces trying to bend the track, but they would be tiny. Each segment of the track would be under the action of the $-2m \Omega \times v$ Coriolis force. Note that the Coriolis force only depends on velocities, not accelerations as you stated! In other words, there is the Coriolis acceleration, $-2\Omega\times v$, and you may see that it's tiny for realistic velocities of the moving track. Note that the direction of this acceleration is East-West at each point and its magnitude is zero at the equator where $\Omega$ and $v$ in your experiment have the same direction. So the Coriolis force would mostly try to rotate the track around the axis going through its equator point. Obviously, the mechanical constraints would compensate this force and it would be largely invisible. But if you consider a very similar setup to the tracks - namely rivers going to the North - they will eventually make their troughs asymmetric because of the Coriolis force. Albert Einstein was obsessed by this problem and wrote papers about it. ;-) If your concern was that the shape of the track - along the length - doesn't change as you move it North, it doesn't matter. The laws of mechanics still recognize that the metal is moving and the Coriolis force depends on the actual velocity, and not just the velocity that is easily seen. ;-)
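To see how tiny, here is a back-of-envelope evaluation of $|-2\Omega \times v|$. The 1 m/s sliding speed and 45° latitude are assumptions chosen just to get a representative number out.

```python
# Size of the Coriolis acceleration |2 Ω × v| for a track slid northward.
# The sliding speed and latitude are illustrative assumptions.
import math

OMEGA = 2 * math.pi / 86164.0   # Earth's rotation rate (sidereal day), rad/s
v = 1.0                         # assumed northward sliding speed, m/s
lat = math.radians(45)

# For horizontal northward motion, the horizontal (east-west) component
# of the Coriolis acceleration is 2*Omega*v*sin(latitude).
a_cor = 2 * OMEGA * v * math.sin(lat)
print(a_cor)   # ~1e-4 m/s^2, roughly a hundred-thousandth of g
```

Even at a brisk 1 m/s the sideways acceleration is around $10^{-4}\ \mathrm{m/s^2}$, easily absorbed by the rail's mechanical constraints, exactly as the answer states.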
{ "domain": "physics.stackexchange", "id": 546, "tags": "classical-mechanics, energy, coriolis-effect" }
Compton Effect Explanation
Question: Can someone brief me about the Compton effect and why it happens? I searched everywhere, read a CERN article too, but couldn't understand it. Answer: When a highly energetic photon (like a gamma or X-ray photon) hits a charged particle like an electron, the photon loses some energy in the inelastic collision and the electron gets scattered. The energy lost by the photon equals the energy gained by the scattered electron. This process of inelastic scattering of an electron by a photon is called Compton scattering, and the phenomenon is called the Compton effect. This experiment demonstrates the particle nature of radiation, like the photoelectric effect does. Since the energy of the incident photon is reduced, its wavelength should increase and its frequency should decrease, as per the relation $$E=h\nu=\frac{hc}{\lambda}$$ (This is why the yellow photon turned into a red photon in the animation.) Hence the wavelength of the scattered photon will be greater than that of the incident photon. This process differs from the photoelectric effect in that, in the photoelectric effect, a photon is completely absorbed by the electron. The absorbed energy appears as the work function + the kinetic energy of the electron (in the case of metals). Photons usually work on the principle of "all or nothing": the complete photon energy is either absorbed or not, i.e., a photon is not partially absorbed. This is used in explaining the photoelectric effect. But here, due to inelastic scattering, the photon can transfer part of its momentum ($\displaystyle{p=\frac{E}{c}=\frac{h}{\lambda}}$) to the electron. The X-rays are highly energetic (in the range of keV), with energy far greater than the binding energy of an atomic electron. So once the photon is incident on an atomic electron, the electron becomes free. Compton's experiment proved that light can behave as a stream of particle-like objects (quanta of energy), whose energy is proportional to the light wave's frequency. 
If the photon is of low but sufficient energy, corresponding to visible light or soft X-rays, it can eject an electron from its host atom entirely (a process known as the photoelectric effect), instead of undergoing Compton scattering. Consider an electron at rest. An X-ray photon comes from the left and is incident on the electron as shown. The electron gains some energy by transfer of momentum, as expected in particle collisions. So the photon loses some energy and the electron gains some. Let $\lambda$ be the wavelength of the incident photon and $\lambda^\prime$ be that of the scattered photon. The original energy of the photon will now be equal to the sum of the energy gained by the electron and the energy of the scattered photon, as required by the conservation of energy. Here $\theta$ represents the scattering angle of the photon. Compton also included the possibility that the interaction of the photon with the electron would sometimes accelerate the electron to speeds sufficiently close to the velocity of light, requiring the application of Einstein's special relativity theory to properly describe its energy and momentum. The basic principle used in the derivation of Compton scattering is the conservation of energy and momentum. Hence $$E_\gamma+E_e=E_{\gamma^\prime}+E_{e^\prime}\longrightarrow{conservation\space of\space energy}$$ where the left-hand side indicates the energy of the photon and electron before the collision and the right-hand side indicates the energy of the photon and electron after the collision. (The prime indicates that the parameter is associated with scattering). 
Also $$\vec{p_{\gamma}}+\vec{p_{e}}=\vec{p_{\gamma^\prime}}+\vec{p_{e^\prime}}$$ Since the initial momentum of the electron at rest is zero, we write $$\vec{p_{\gamma}}=\vec{p_{\gamma^\prime}}+\vec{p_{e^\prime}}\longrightarrow{conservation\space of\space momentum}$$ Now considering the relativistic effects, $$E_e=m_e c^2 \space \space \space (m_e-rest\space mass\space of\space electron)$$ $$E_{e^\prime}=\sqrt{(p_{e^\prime}c)^2+(m_e c^2)^2}$$ Referring to the conservation of energy equation $$\frac{hc}{\lambda}+m_e c^2=\frac{hc}{\lambda^\prime}+\sqrt{(p_{e^\prime}c)^2+(m_e c^2)^2}$$ Rearranging both sides and squaring $$(p_{e^\prime}c)^2= (\frac{hc}{\lambda}+m_e c^2-\frac{hc}{\lambda^\prime})^2-m_e^2 c^4$$ or $$(p_{e^\prime}c)^2=(\frac{hc}{\lambda})^2+(\frac{hc}{\lambda^\prime})^2+(\frac{1}{\lambda}-\frac{1}{\lambda^\prime})2hcm_e c^2-\frac{2h^2c^2}{\lambda \lambda^\prime}\longrightarrow(1)$$ From this expression we can find the magnitude of the scattered electron's momentum. It is to be seen that the momentum gained by the scattered electron will be greater than the momentum lost by the photon. (This is a consequence of relativistic effects: even though the initial momentum of the electron is zero, it has a rest energy.) Now, from the conservation of momentum equation, we write $$\vec{p_{e^\prime}}=\vec{p_\gamma}-\vec{p_{\gamma^\prime}}$$ By invoking the scalar product $$ p_{e^\prime}^2=\vec{p_{e^\prime}}\cdot \vec{p_{e^\prime}}=(\vec{p_\gamma}-\vec{p_{\gamma^\prime}})\cdot (\vec{p_\gamma}-\vec{p_{\gamma^\prime}})$$ or by the rule of cosines $$ p_{e^\prime}^2=p_\gamma^2+p_{\gamma^\prime}^2-2p_\gamma p_{\gamma^\prime} \cos\theta$$ Multiplying both sides by $c^2$, we have $$ p_{e^\prime}^2 c^2=p_\gamma^2 c^2+p_{\gamma^\prime}^2 c^2-2c^2 p_\gamma p_{\gamma^\prime} \cos\theta$$ For a photon $E=pc=hc/\lambda$. So, the first two terms on the right-hand side of the above equation represent the square of the energies of the incident and scattered photons respectively. 
Hence we write $$ p_{e^\prime}^2 c^2=(\frac{hc}{\lambda})^2+(\frac{hc}{\lambda^\prime})^2-\frac{2h^2c^2\cos\theta}{\lambda \lambda^\prime}\longrightarrow(2)$$ Comparing both equations (1) and (2) we have $$(\frac{1}{\lambda}-\frac{1}{\lambda^\prime})2hcm_e c^2-\frac{2h^2c^2}{\lambda \lambda^\prime}=-\frac{2h^2c^2\cos\theta}{\lambda \lambda^\prime}$$ or $$(\lambda^\prime-\lambda)m_e c-h=-h\cos\theta\Rightarrow (\lambda^\prime-\lambda)m_e c={h}(1-\cos\theta)$$ or $${\color{red}{ \Delta\lambda=(\lambda^\prime-\lambda)=\frac{h}{m_ec} (1-\cos\theta)}}$$ which gives the shift in wavelength between the scattered and incident photons, called the Compton shift. It is clear that when $\theta=0^\circ$, there is no shift in wavelength, which means that if the incident photon travels undeviated, there is no change in its energy; there was no electron in its path. When $\theta=180^\circ$, the incident photon is reflected back and the shift in wavelength is maximum; this maximum shift corresponds to the maximum energy that the electron can gain.
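To get a feel for the size of the effect, here is the final formula evaluated numerically; the scale is set by the Compton wavelength $h/(m_e c) \approx 2.43$ pm.

```python
# Plugging numbers into the Compton shift formula derived above:
# Δλ = (h / m_e c) (1 - cos θ)
import math

H = 6.62607015e-34       # Planck constant, J*s
M_E = 9.1093837015e-31   # electron rest mass, kg
C = 2.99792458e8         # speed of light, m/s

compton_wavelength = H / (M_E * C)   # ~2.426e-12 m

def compton_shift(theta_deg):
    """Wavelength shift Δλ in meters for a scattering angle in degrees."""
    return compton_wavelength * (1 - math.cos(math.radians(theta_deg)))

print(compton_shift(0))     # 0: no shift for an undeviated photon
print(compton_shift(90))    # ~2.43 pm
print(compton_shift(180))   # ~4.85 pm, the maximum (backscattering) shift
```

Since the shift is a few picometers regardless of the incident wavelength, it is only a noticeable fraction of $\lambda$ for X-rays and gamma rays, which is why the effect is not seen with visible light.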
{ "domain": "physics.stackexchange", "id": 30656, "tags": "electromagnetic-radiation, atomic-physics, scattering" }
Why does this Biot-Savart expression need to be changed when $z < a$?
Question: I have to represent the magnetic field of this set up with a power series. I'm asked to give an expression for $z > b$ and $z < a$. Why would the expression change for either of them, can't both situations be represented by the same series? Answer: Your one task is to come up with an approximation for the inner region, i.e. for $0\le z < a$. This can be done by a power-series of the form $$\sum_{n=0}^\infty c_n z^n.$$ This series will be quickly convergent for small $z$, but probably divergent for $z\ge a$. Your other task is to come up with an approximation for the outer region, i.e. for $b\lt z < \infty$. This can be done by a power-series of the form $$\sum_{n=0}^\infty c_n \frac{1}{z^n}.$$ This series will be quickly convergent for large $z$, but probably divergent for $z\le b$. Hence you won't be able to get a power-series convergent for the whole range of $z$.
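The two-regions point can be made concrete with a toy function that has exactly this behavior. The function $1/(R - z)$ below is just a stand-in (not the actual Biot-Savart integrand): its expansion in powers of $z$ converges only for $|z| < R$, and its expansion in powers of $1/z$ only for $|z| > R$.

```python
# Toy illustration of why an inner and an outer series are both needed:
# 1/(R - z) expanded in z (inner region) vs. in 1/z (outer region).
def inner_series(z, R, terms=50):
    """1/(R - z) = (1/R) * sum (z/R)^n, valid for |z| < R."""
    return sum((z / R) ** n for n in range(terms)) / R

def outer_series(z, R, terms=50):
    """1/(R - z) = -(1/z) * sum (R/z)^n, valid for |z| > R."""
    return -sum((R / z) ** n for n in range(terms)) / z

R = 1.0
print(inner_series(0.5, R), 1 / (R - 0.5))   # agree for z < R
print(outer_series(2.0, R), 1 / (R - 2.0))   # agree for z > R
```

Swapping the series between regions makes the terms grow instead of shrink, which is the divergence the answer describes.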
{ "domain": "physics.stackexchange", "id": 86828, "tags": "electromagnetism" }
Why do the stars in the galaxy core move so fast?
Question: There are a bunch of stars orbiting the black hole in the center of our galaxy. These stars move at huge speeds. Why do we see this? Why does the black hole not impose any noticeable time dilation on these stars? Answer: Great question Lucas. The velocity of an object in orbit around a massive body can be expressed roughly as $v(R)\sim \sqrt{ \frac{GM}{R}}$ The closer you are to the mass (e.g. the black-hole), the bigger $v(R)$ becomes. It turns out, for a black-hole like the one at the galactic center, with stars about 100 AU away.... they travel at about 500 km/s---fast! Now, the effects of general relativity are only significant when you're near the event horizon. In this case, even though the stars are relatively 'close' (about $10^{15}$ cm), they're still about 1000 times further away than the event horizon! And so the effects of general relativity (e.g. time dilation) are very very small (in this case, currently unobservable at all).
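For concreteness, here is the arithmetic behind the "1000 times further away than the event horizon" estimate. The $4\times 10^6$ solar-mass figure for the central black hole is an assumed round value used only for illustration.

```python
# Rough numbers: Schwarzschild radius of the galactic-center black hole
# versus the ~10^15 cm orbital distance quoted in the answer.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8              # speed of light, m/s
M_SUN = 1.989e30         # solar mass, kg

M_BH = 4e6 * M_SUN       # assumed mass of the central black hole
r_star = 1e13            # ~10^15 cm quoted above, converted to meters

r_s = 2 * G * M_BH / C ** 2   # Schwarzschild radius r_s = 2GM/c^2
print(r_s)                    # ~1.2e10 m
print(r_star / r_s)           # ~10^3: the stars orbit ~1000 horizon radii out
```

At ~1000 Schwarzschild radii, the gravitational time dilation factor differs from 1 by only a part in a thousand or so, consistent with the answer's point that the effect is far too small to observe for these stars.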
{ "domain": "physics.stackexchange", "id": 3140, "tags": "black-holes, astrophysics, milky-way, newtonian-mechanics" }
Aggregating recent trading data from Bitcoin derivatives exchange
Question: As a component of some trading software, I wrote function parse_ticks(), as part of an Exchange class that models the Bitcoin derivatives exchange BitMEX. The purpose of parse_ticks() is to convert the previous minute's stored tick data (a list of individual trades, obtained by calling all_ticks = self.ws.get_ticks()) into a dictionary which aggregates the OHLCV values from the list of the previous minute's ticks. The list of trades is a list of dicts, each dict containing a timestamp, size of trade, whether the tick was a buy or sell, etc. parse_ticks() gets invoked once per instrument when a minute elapses. In this scenario, I am watching two instruments, so parse_ticks() gets called twice at the start of each minute. I timed parse_ticks() run time for 2400 minutes (2400 observations) with as many background processes and programs disabled as possible, and obtained these results (in seconds): mean run time: 0.0318425 min run time: 0.00458 max run time: 0.07958 std dev: 0.02276709988 There's a massive range here, with the min and max substantially different, and the std dev is almost as large as the mean. How would I lower the std. dev of parse_ticks() run time, and get the mean run time closer to the minimum observation of 0.00458? Is this kind of optimisation within the scope of python? EDIT: To answer the questions in comments (thank you all for your help): ws.get_ticks() has no outbound API calls, it looks like this: def get_ticks(self): return self.data['trade'] where data['trade'] is a list containing ticks saved from a websocket stream in realtime. The 'trade' list is trimmed by 30% if it goes above 10000 elements, so that aspect (constantly-increasing amount of data to parse) is constant, once the limit is reached. So the data is already available when parse_ticks() is called. The amount of ticks DOES vary, so some variance can be explained by that. But surely not the extreme range of 0.075? (min - max). 
The run times are seemingly random, there are long and short runs interspersed throughout all the observations. self.bars = {} self.symbols = ["XBTUSD", "ETHUSD"] self.ws = Bitmex_WS() def parse_ticks(self): """Return a 1-min OHLCV dict, given a list of the previous minutes tick data.""" all_ticks = self.ws.get_ticks() target_minute = datetime.datetime.utcnow().minute - 1 ticks_target_minute = [] tcount = 0 # search from end of tick list to grab newest ticks first for i in reversed(all_ticks): try: ts = i['timestamp'] if type(ts) is not datetime.datetime: ts = parser.parse(ts) except Exception: self.logger.debug(traceback.format_exc()) # scrape prev minutes ticks if ts.minute == target_minute: ticks_target_minute.append(i) ticks_target_minute[tcount]['timestamp'] = ts tcount += 1 # store the previous-to-target-minute bar's last # traded price to use as the open price for target bar if ts.minute == target_minute - 1: ticks_target_minute.append(i) ticks_target_minute[tcount]['timestamp'] = ts break ticks_target_minute.reverse() # reset bar dict ready for new bars self.bars = {i: [] for i in self.symbols} # build 1 min bars for each symbol for symbol in self.symbols: ticks = [i for i in ticks_target_minute if i['symbol'] == symbol] bar = self.build_OHLCV(ticks, symbol) self.bars[symbol].append(bar) # self.logger.debug(bar) Answer: We (and it sounds like you as well) do not have enough information to answer this question. But some good news first: I cannot see any region where your method is obviously wasting time (like searching with in for some item in a growing list). Before you concern yourself more with this, consider that even if the variance is large, your maximum time is less than a tenth of a second, for two instruments. In other words, you could be monitoring more than 1500 instruments before you get close to a maximum processing time on the order of your frequency (one minute). So ask yourself if you are performing needless premature optimization. 
Will this code run with \$\mathcal{O}(1000)\$ instruments, for longer than two days? If not, you can stop right here. If yes, or as an academic exercise, continue on. As far as I can see, the runtime of this code will be largely determined by two factors: How long self.ws.get_ticks() takes. Does this method connect to the internet to get its data? Then the variance might actually be the variance in establishing the connection and getting the data. This could be influenced by your internet connection, but also the current load on the server you are trying to connect to. In this case there is nothing you can do. How many elements are returned. The actual function getting the data might take longer for more elements, but also the processing would, since you iterate over all elements of the list. The only way to know is to gather more data. Individually time the call to self.ws.get_ticks(), collect the len(all_ticks) and plot everything in a time-ordered way. Maybe this will help you to discover something interesting. Here are some possibilities of what you could discover: The server where you get the information also has some updating frequency, so only every five minutes will there be data for you. They have a rate limit, which makes all requests in-between fail (which is faster than transmitting a bunch of data). The call actually returns all data every time, and you just break when you have reached the point of the last minute. In this case each successive call is more expensive, so of course this increases the standard deviation. Try to find a way to pass the start date to the call. 
And here is a small class that can help you keep track of different timers I just came up with: from time import perf_counter from statistics import mean, median, stdev from collections import defaultdict class Timer: durations = defaultdict(list) def __init__(self, name=""): self.name = name def __enter__(self): self.start = perf_counter() def __exit__(self, *args): Timer.durations[self.name].append(perf_counter() - self.start) @staticmethod def calc_stats(x): return {"mean": mean(x), "std": stdev(x), "median": median(x), "min": min(x), "max": max(x)} @staticmethod def stats(): return {name: Timer.calc_stats(x) for name, x in Timer.durations.items()} With some example usage: from time import sleep import pandas as pd import matplotlib.pyplot as plt from timer import Timer for n in range(10): with Timer("a"): sleep(0.5) with Timer("b"): sleep(0.1 * n) print(pd.DataFrame(Timer.stats())) # a b # mean 0.500559 0.450504 # std 0.000021 0.303079 # median 0.500552 0.450502 # min 0.500534 0.000006 # max 0.500602 0.901003 plt.plot(Timer.durations["a"], label="a") plt.plot(Timer.durations["b"], label="b") plt.legend() plt.xlabel("Iteration") plt.ylabel("Time [s]") plt.show() Some other comments: Instead of self.bars = {i: [] for i in self.symbols} you can use self.bars = defaultdict(list), like I did in the Timer class. Try to avoid single letter variables. They are OK in a few cases, but i (and n) are usually reserved for integers. Use x, or maybe even better, tick. Don't directly compare type, use isinstance(ts, datetime.datetime) instead. This also allows subclasses. Don't lie in your docstring. You say "Return a 1-min OHLCV dict, given a list of the previous minutes tick data.", but almost none of this is true. The method does not return anything and it also does not take any parameters as input! If you know the exception to expect, catch only that. At least you don't have a bare except, but except KeyError and maybe some specific error from the parser would be better. 
This way you don't miss an unexpected error. You should also ask yourself what your code does if an error occurs. I think it will just use the previous iteration's ts, which would duplicate data. Just hope that you never have a problem in the first timestamp!
{ "domain": "codereview.stackexchange", "id": 36182, "tags": "python, performance, algorithm, python-3.x, parsing" }
Create GUI components with states
Question: I am asking about the right way to make a component that holds some state. This includes a JButton that saves a color in it, or a list item that saves a certain object. So when those GUI components fire an event, I can use the saved states to do something with it. My way is like this: Make a subclass of the required component, like a subclass of JButton. Make a Listener for this new subclass: in the listener, check if the event source is the subclass, convert it and then use the stored data. class ColorButton extends JButton { static class Listener implements ActionListener{ @Override public void actionPerformed(ActionEvent actionEvent) { Object source = actionEvent.getSource(); if( source.getClass() == ColorButton.class) { ColorButton t = (ColorButton) source; t.getComponent().setBackground(t.getColor()); } } } //states i want to be saved private Color c; private Component comp; ColorButton(Component comp, Color c) { setColorChanger(comp, c); } /* ...... ...... rest of constructors added with those additions ...... */ private void setColorChanger(Component comp, Color c) { this.comp = comp; this.c = c; } Color getColor() { return c; } Component getComponent() { return comp; } } And I use it this way: JPanel panel = new JPanel(); ColorButton.Listener l = new ColorButton.Listener(); JButton b = new ColorButton("Blue", panel, Color.BLUE); JButton r = new ColorButton("Red", panel, Color.RED); r.addActionListener(l); b.addActionListener(l); panel.add(b); panel.add(r); add(panel); I am wondering: is this way okay? I feel it is very boring to make this for every component that should hold a certain state. Is there a better way? Answer: Instead of comparing the class with if( source.getClass() == ColorButton.class) you can use if (source instanceof ColorButton). This is preferable in case you would make a subclass of ColorButton. If it's an instance of a subclass of ColorButton it is still an instanceof but source.getClass() != ColorButton.class. 
With the way that you are using the code above, I would save the color in the listener instead of in the button. This way you don't need the if statement in your code at all, since you can pass the appropriate type to the Listener. In this case, .setBackground is declared in the JComponent class so all you need to make sure is that it's a JComponent that gets passed to the Listener, which is no problem: static class Listener implements ActionListener { private Color color; private JComponent comp; public Listener(JComponent comp, Color color) { this.color = color; this.comp = comp; } @Override public void actionPerformed(ActionEvent actionEvent) { comp.setBackground(this.color); } } By using this, you don't need the ColorButton subclass, instead you create multiple instances of your listener: JPanel panel = new JPanel(); JButton b = new JButton("Blue"); JButton r = new JButton("Red"); b.addActionListener(new Listener(panel, Color.BLUE)); r.addActionListener(new Listener(panel, Color.RED)); panel.add(b); panel.add(r); add(panel);
{ "domain": "codereview.stackexchange", "id": 4959, "tags": "java, swing" }
DRCSIM: BDI modes change the hand force/torque sensor behavior
Question: The hand force/torque sensor behaves well until the BDI modes are changed. While in STAND mode the output of the sensor is correct; however, when the mode is changed to MANIPULATE, for example, the force/torque sensor starts giving nonsense output. Hand force sensor while changing BDI modes Originally posted by Alberto Romay on Gazebo Answers with karma: 11 on 2013-05-15 Post score: 0 Original comments Comment by dcconner on 2013-05-20: I was able to duplicate this issue using the control_mode_switch demo, and hacking to use a set of seemingly reasonable kd_position gains and k_effort related to moving arms and torso, but leaving legs under BDI control. See https://bitbucket.org/osrf/drcsim/issue/273/bdi-modes-change-the-hand-force-torque for specific gains Answer: It was determined that reducing the kd_position gains avoids this problem. Originally posted by gerkey with karma: 1414 on 2013-05-22 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 3292, "tags": "gazebo-sensor, drcsim" }
Analogue of scalar potential in London equation
Question: The London equations to describe superconductivity phenomenologically are: $$ E = \frac{\partial}{\partial t}(\Lambda J)$$ $$B = -c \nabla \times (\Lambda J)$$ with $\Lambda = m/ne^2$. It is interesting to notice that the quantity $\Lambda J$ plays the role of an effective vector potential above, so it is natural to ask if there is another quantity that plays the role of an effective scalar potential in the London equations. It would be great if someone could point me to relevant discussions in books or in the literature. Answer: Including a static scalar potential in the London equations doesn't make much sense, since we are talking about the inside of a metal, where any such potential is screened, even in a non-superconducting state. In other words, a constant potential is the only option. Update In contradiction to what I have written above, the London equations do include the electric field - via the time derivative of the vector potential. However, this electric field cannot be re-expressed in terms of a scalar potential, i.e. the London equations are not gauge-invariant - which appears to be a known problem, see, e.g., here.
{ "domain": "physics.stackexchange", "id": 71232, "tags": "condensed-matter, potential, superconductivity" }
How much are Neptune's and Uranus' orbits perturbed by Pluto and other KBOs?
Question: Reading another question, I came across the argument that inconsistencies in Neptune's and Uranus' orbits led to the discovery of Pluto. This brief write-up mentions Percival Lowell and William H. Pickering by name as two people who looked for the theoretical 9th planet. My question is, were inconsistencies in Neptune's and Uranus' orbits detectable by early 20th century methods, or were these scientists just chasing calculation or estimation errors with dreams of finding the next planet? My (guess) is that Pluto is much too small and too far away, and just one of many similar-mass Kuiper belt objects, so it's unlikely that it was really discovered by measured inconsistencies in Uranus'/Neptune's orbits (despite the fact that there are articles that say that's precisely how Pluto was discovered). In a nutshell, that's the question. Was the wobble that Pluto causes on Neptune's orbit observable by 1910-1930 observation methods? And related, was the wobble caused by the entire Kuiper belt (some 20 to 50 times the mass of little Pluto, but much more spread out) detectable by 1910-1930 methods? Hard math isn't necessary to me for an answer, as I find the math hard to follow sometimes, so feel free to answer with or without mathematical calculations. Answer: Since (1) the mass of the Pluto/Charon system is very small compared with Neptune's, and (2) their orbits are in 2:3 resonance, the perturbation of Neptune's orbit would have been undetectable. The University of Rochester has a good explanation of this. The Accidental Discovery of Pluto Later supposed perturbations of the orbits of Uranus and Neptune suggested the presence of yet another planet beyond the orbit of Neptune. Eventually, in 1930, a new planet Pluto was discovered, but we now know that the calculations in this case were also in error because of an incorrect assumption about the mass of the new planet.
It is now believed that the supposed deviations in the orbits of Neptune and Uranus were errors in measurement, because the actual properties of Pluto would not have accounted for the supposed perturbations. Thus, the discovery of Pluto was a kind of accident.
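To put a rough number on why the perturbation was undetectable, here is a back-of-the-envelope comparison of Pluto's gravitational pull on Neptune with the Sun's. The masses, distances, and the assumed minimum Neptune-Pluto separation are standard reference values I've supplied for illustration; they are not from the answer above.

```python
# Order-of-magnitude check: Pluto's pull on Neptune vs. the Sun's pull.
# All numeric values are assumed textbook figures, used only to illustrate
# the scale of the effect.
G = 6.674e-11           # m^3 kg^-1 s^-2, gravitational constant
AU = 1.496e11           # m, astronomical unit
M_SUN = 1.989e30        # kg
M_PLUTO = 1.46e22       # kg, Pluto + Charon system

r_sun_neptune = 30.1 * AU      # Neptune's orbital radius
r_pluto_neptune = 11.0 * AU    # rough minimum separation; the 2:3
                               # resonance keeps the two bodies far apart

a_sun = G * M_SUN / r_sun_neptune**2
a_pluto = G * M_PLUTO / r_pluto_neptune**2
ratio = a_pluto / a_sun        # fractional perturbation of Neptune's motion
```

The ratio comes out around a few times $10^{-8}$, i.e. tens of parts per billion of the Sun's pull, far below anything early 20th century astrometry could have resolved.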
{ "domain": "astronomy.stackexchange", "id": 2772, "tags": "orbit, orbital-mechanics, pluto" }
Increase static pressure
Question: I have an application (a tube with parts in it) which needs a certain static pressure in it in order to sustain an air flow. Unfortunately, the fan I wanted to use in a push configuration is not able to create this pressure. Thus I thought that I could install another fan with a higher CFM rating at the end of the tube (in a pull configuration), and thereby increase the static pressure. Would that be useful, or is there another way to increase the static pressure in the tube? The schematic is: Neither F1 nor F2 is able to provide the necessary static pressure alone. Answer: In order to get the air through the pipe, the pressure rise of the first fan has to be high enough that the pressure loss within the tube is compensated. Since your first fan (F1) is not powerful enough, you can add a second fan (F2) to raise the pressure at the end of the pipe to ambient pressure, or use a more powerful fan (F3) instead. The sketch shows the static pressure on the y-axis and the tube on the x-axis
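The reason the second fan helps can be sketched numerically: fans in series add their pressure rises at each flow rate, so the combined curve crosses the duct-loss curve at a higher flow. The linearised fan curves and the quadratic duct-loss coefficient below are made-up illustrative numbers, not data from the post.

```python
# Sketch: operating point of one fan vs. two fans in series against a
# quadratic duct loss. All fan parameters here are illustrative.
def fan_dp(q, dp_max, q_max):
    """Linearised fan curve: full pressure at zero flow, zero at q_max."""
    return max(0.0, dp_max * (1.0 - q / q_max))

def duct_loss(q, k=500.0):
    """Turbulent-like duct loss, proportional to Q^2."""
    return k * q * q

def operating_point(supply):
    """Bisect for the flow where supplied pressure equals duct loss."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if supply(mid) > duct_loss(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

q_single = operating_point(lambda q: fan_dp(q, 100.0, 0.5))
q_series = operating_point(lambda q: fan_dp(q, 100.0, 0.5) + fan_dp(q, 120.0, 0.6))
```

With these numbers the series arrangement moves the operating flow from roughly 0.29 to roughly 0.37 (in the sketch's arbitrary flow units), which is exactly the "F1 + F2 compensates the loss" picture in the answer.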
{ "domain": "engineering.stackexchange", "id": 478, "tags": "fluid-mechanics" }
6D localization with 6 lasers
Question: I have to know where a multi-rotor is in a rectangular room, via 6 lasers, 2 on each axis. The problem is like this: Inputs: Room: square => 10 meters by 10 meters; 6 positions of the lasers: fixed on the frame; 6 orientations of the lasers: fixed on the frame; the 6 measurements of the lasers; the quaternion from the IMU of my flight controller (PixHawk). The origin is centered on the gravity center of the multi-rotor and defined as if the walls are perpendicular to each axis (the normal of the wall in X is (-1,0,0)). Output: position in 3D (X,Y,Z); angular position (quaternion). Since I have the angular position of the multi-rotor, I rotated the laser positions and orientations via the quaternion, then extrapolated via the 6 measurements, and I got the 3 walls (the orientations of the walls are trivial, so only one point is enough to determine each wall's position). Unfortunately, I noticed that the yaw (rotation about Z) measurement from the PixHawk is unreliable. So I should measure the yaw from the lasers, but I have not succeeded in doing it. Even if the 2D problem is easy, I am lost in 3D. Does someone know if it [an algorithm to get the XYZ position and quaternion from 6 measurements] exists somewhere? Or what is the right way to approach this problem? The question: How could I get the yaw from 2 measurements from 2 lasers for which I know the original position, orientation, and the pitch and roll? NOTE: Green pointers are the origin position; red pointers are the "final" position, but could be rotated around the red circle (due to yaw). Answer: Solution: Is there another solution without pre-rotating vectors? I finally got a solution, and here it is. Python, ROS geometry library, numpy. My actual code/maths in short: 1) Rotate the position & orientation of the lasers by roll & pitch. The axes='sxyz' means: static axes, apply roll, pitch, yaw. quaternion_matrix creates a 4x4 transformation matrix from the quaternion.
laser = (1, 1, 1, 0)        # laser position
orientation = (1, 0, 0, 0)  # laser orientation

roll, pitch, _ = list(euler_from_quaternion(q, axes='sxyz'))
q = quaternion_from_euler(roll, pitch, 0, axes='sxyz')

laser = numpy.dot(quaternion_matrix(q), laser)
orientation = numpy.dot(quaternion_matrix(q), orientation)

2) Algebraic solution: rotation around Z as a function of the yaw t

laser = [-sin(t)*laser[1] + cos(t)*laser[0],
          cos(t)*laser[1] + sin(t)*laser[0],
          laser[2]]
orientation = [-sin(t)*orientation[1] + cos(t)*orientation[0],
                cos(t)*orientation[1] + sin(t)*orientation[0],
                orientation[2]]

3) Algebraic solution: extrapolation from the measurements as a function of the yaw. Important notice: since the rotation does not scale vectors, the denominator of the K factor is a constant. We can therefore simplify by precomputing the length of the orientation vector.

M = 100  # distance
K = sqrt(M**2 / (orientation[0]**2 + orientation[1]**2 + orientation[2]**2))

PointOnWall = [K * orientation[0] + laser[0],
               K * orientation[1] + laser[1],
               K * orientation[2] + laser[2]]

4) Algebraic solution: from this, with two lasers, get the walls. The two "PointOnWall" equations should give enough data to get the yaw. Knowing this is a (-1,0,0) normal, I can find 2 planes from the two points:

5) Algebraic solution: measure the yaw. Substituting one plane into the other (via XMaxima), we get:

def getYaw(position1, orientation1, measure1,
           position2, orientation2, measure2):
    length1 = length(orientation1)
    length2 = length(orientation2)
    k1 = measure1 / length1
    k2 = measure2 / length2
    numerator = -k2*orientation2[0] + k1*orientation1[0] + position1[0] - position2[0]
    denominator = -k2*orientation2[1] + k1*orientation1[1] + position1[1] - position2[1]
    return atan(numerator / denominator)

As expected, roll & pitch DO NOT interfere, since the positions and orientations are pre-rotated.
{ "domain": "robotics.stackexchange", "id": 994, "tags": "algorithm, geometry" }
Why is the 4D $U(1)$ electric 1-form symmetry a global symmetry?
Question: I am reading the paper Generalised Global Symmetries to understand higher-form symmetries. The first example in Section 4 that the authors talk about is the free Maxwell theory in 4d, i.e., pure $U(1)$ gauge theory in 4D. In this example, they talk about two 1-form symmetries: electric $U(1)_e$, with 2-form current $j_e \sim \star F$, and magnetic $U(1)_m$, with 2-form current $j_m \sim F \equiv dA$. It is mentioned that the action of the electric 1-form symmetry on the gauge field $A$ is a shift by a flat connection $\lambda$, i.e., $d\lambda=0$. A flat connection is not necessarily a constant, then why is this called a global symmetry? A related fact discussed in the same section is that the topological symmetry operator corresponding to electric 1-form symmetry is given by $$U_\alpha^e(M_2)=\exp\left(i\frac{2\alpha}{g^2}\int_{M_2}\star F\right),$$ where $M_2$ is a 2d submanifold in 4d spacetime, $g$ is the gauge coupling constant, and $e^{i\alpha}\in U(1)_e$ is the corresponding group element. Here, $\alpha$ is indeed a constant. How is $\alpha$ related to $\lambda$ above? Answer: This is what I think makes the $U(1)$ electric 1-form symmetry global. The action of shift of the gauge field by a flat connection on a Wilson line operator is $$ A \rightarrow A + \lambda\implies W_n(C) \rightarrow W_n(C) \exp \left( i n \oint_C \lambda \right). $$ Let $\alpha(C)=\oint_C \lambda$, which is the group parameter for the above transformation. Now, consider two Wilson operators defined over two different loops $C$ and $C'$. As long as the difference $L$ of $C$ and $C'$ lies in the trivial class of $H_1(X)$, where $X$ is the spacetime, we have $$ \left( \oint_C - \oint_{C'} \right)\lambda = \oint_{L=\partial S} \lambda = \int_S d\lambda = 0, $$ because $\lambda$ is a flat connection. Here, existence of the 2d manifold $S$ is guaranteed by $L$ belonging to trivial class in $H_1(X)$. 
So, $\alpha(C) = \alpha(C')$ for all such $C$ and $C'$ which are smoothly connected to each other. "Global" then means that the group parameter $\alpha$ is independent (in the above sense) of $C$ (a 1D manifold), which is possible whenever $\lambda$ is a flat connection. This is a generalisation of the notion of a global 0-form symmetry, where the group parameter $\alpha$ is independent of $x$ (a 0D manifold), the position of the local operator.
{ "domain": "physics.stackexchange", "id": 60328, "tags": "electromagnetism, symmetry, gauge-theory" }
Why is work done in a spring positive?
Question: We know that a stretched spring obeys Hooke's law, such that $F=-kx$. We can find the potential energy of stretching/compressing this spring by $x$, given by: $$U_x-U_0=-\int_0^x F.dx = \frac{1}{2}kx^2 $$ Setting $U_0=0$ as reference, we have $U_x=\frac{1}{2}kx^2$. However, this is also sometimes described as the work done by the spring. Shouldn't the work done $W$ be given by $\int F.dr$, such that $W=-\Delta U = -\frac{1}{2}kx^2$ in this case? Isn't the work done by the spring negative? Also, in this case the potential energy comes out to be negative. In general, can we set any point as reference, set it to $0$, and perform the integral between any two limits, to get either a positive or a negative $U$? For example, for forces of the form $r^{-n}, (n>1)$, we usually take the reference at $r=\infty$ and integrate from $\infty$ to some point $r$. For forces of the form $r^n$, we usually take $0$ as the reference and integrate from $0$ to some $r$. In general, we are free to choose any reference and any limits, even though some are much more convenient, right? In theory, we can choose any point, right? As long as we have: $$U_a-U_b=-\int_b^a F.dx$$ we can choose any $a$ and $b$, and set either of $U_a$ or $U_b$ to be the reference and equal to $0$, right? Answer: Setting $x=0$ as the reference point means you are looking at the work done by the spring from $x=0$ to the end position $x$. Since $W=-\Delta U=-\frac12kx^2$, this will always be negative, which makes sense since the spring force always points towards $x=0$, and thus will point opposite the displacement. In general $$W_{a\to b}=-(U(x_b)-U(x_a))=\frac12k(x_a^2-x_b^2)$$ and this is positive whenever $x_a^2>x_b^2$.
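The final formula can be checked numerically: integrating the spring force $F=-kx$ along the displacement reproduces $W_{a\to b}=\frac12k(x_a^2-x_b^2)$, and the sign comes out negative whenever the stretch increases. The values of $k$, $x_a$, $x_b$ below are arbitrary illustrative choices.

```python
# Numerical check of W_{a->b} = k/2 (x_a^2 - x_b^2) for F = -kx,
# using a midpoint Riemann sum for the work integral (illustrative values).
k = 3.0
x_a, x_b = 0.2, 0.5   # stretch increases, so the spring does negative work

n = 100000
dx = (x_b - x_a) / n
# W = integral of F dx along the displacement from x_a to x_b
w_numeric = sum(-k * (x_a + (i + 0.5) * dx) * dx for i in range(n))
w_formula = 0.5 * k * (x_a**2 - x_b**2)
```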
{ "domain": "physics.stackexchange", "id": 84632, "tags": "forces, potential, work, potential-energy, spring" }
Can I fine-tune the BERT on a dissimilar/unrelated task?
Question: In the original BERT paper, section 3 (arXiv:1810.04805), it is mentioned: "During pre-training, the model is trained on unlabeled data over different pre-training tasks." I am not sure if I correctly understood the meaning of the word "different" here. Does "different" mean a different dataset or a different prediction task? For example, if we pre-train BERT on a "sentence-classification" task with a big dataset, should I then fine-tune it again on the same "sentence-classification" task on a smaller, task-specific dataset, or can I use the trained model for some other task such as "sentence-tagging"? Answer: The sentence "During pre-training, the model is trained on unlabeled data over different pre-training tasks." means that BERT was pre-trained on normal textual data on two tasks: masked language model (MLM) and next sentence prediction (NSP). There were no other classification/tagging labels present in the data, as the MLM predicts the text itself and the NSP label is derived from the textual data itself. Both tasks were trained simultaneously from a single textual dataset that was prepared to feed the input text and the expected outputs for both tasks. Therefore, "different" here refers to the two pre-training tasks I mentioned: MLM and NSP. When fine-tuning, you do not need to train again on the same sentence classification task; you simply train it on the task you need. It is perfectly fine to fine-tune BERT on a sentence tagging task on your own dataset.
{ "domain": "datascience.stackexchange", "id": 8532, "tags": "bert, transformer, language-model, tokenization" }
Angular momentum Bohr's model
Question: I have been trying to derive the speed, radius, etc. in the hydrogen atom using Bohr's postulates without neglecting the Coulombic attraction on the proton. I know that they will be revolving around their centre of mass with the same angular speed. But I have this one doubt. Do we write $$L=\frac{nh}{2\pi}$$ for the electron with respect to their centre of mass or with respect to the proton (nucleus)? Answer: The expression you have there looks like that of the electron relative to the proton. The equation $$L=\frac{nh}{2\pi}$$ can be derived from the de Broglie relation $p = h/\lambda$. Consider an electron "orbiting" (classically speaking) about a proton (which we take to be the origin). Its orbital angular momentum will be given by $$L=rp$$ with $r$ and $p$, of course, being the orbital radius and linear momentum respectively. By demanding that an integer number of wavelengths fit into the circumference, $$\lambda = \frac{2\pi r}{n},$$ then $$L = rp = r\left(\frac{nh}{2\pi r}\right) = \frac{nh}{2\pi} = n\hbar$$ as required.
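On the centre-of-mass point in the question: the standard way to include the proton's motion is to keep every Bohr-model formula but replace the electron mass by the reduced mass $\mu = m_e m_p/(m_e+m_p)$; $L=n\hbar$ then refers to the angular momentum of the relative motion. A quick numerical sketch (with standard constant values I've supplied, not from the answer) shows how small the correction is:

```python
# Bohr radius with the bare electron mass vs. the reduced mass.
# Constants are standard reference values, included for illustration.
import math

hbar = 1.0546e-34   # J s
m_e = 9.109e-31     # kg, electron mass
m_p = 1.6726e-27    # kg, proton mass
e = 1.602e-19       # C
eps0 = 8.854e-12    # F/m

def bohr_radius(mass):
    # a = 4*pi*eps0*hbar^2 / (mass * e^2)
    return 4 * math.pi * eps0 * hbar**2 / (mass * e**2)

a0 = bohr_radius(m_e)            # fixed-nucleus value, ~0.529 angstrom
mu = m_e * m_p / (m_e + m_p)     # reduced mass of the two-body problem
a0_cm = bohr_radius(mu)          # centre-of-mass corrected radius
rel_shift = (a0_cm - a0) / a0    # fractional correction, = m_e/m_p
```

The fractional shift equals $m_e/m_p \approx 5\times10^{-4}$, which is why neglecting the proton's motion is usually harmless.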
{ "domain": "physics.stackexchange", "id": 91477, "tags": "angular-momentum, atomic-physics, atoms, hydrogen, orbitals" }
Overheating/Jamming MG996R servo
Question: I have recently purchased my first ever servo, a cheap unbranded Chinese MG996R servo, for £3.20 on eBay. I am using it in conjunction with an Arduino Servo shield (see below): As soon as it arrived, before even plugging it in, I unscrewed the back and ensured that it had the shorter PCB, rather than the full-length PCB found in MG995 servos. So, it seems to be a reasonable facsimile of a bona-fide MG996R. I read somewhere (shame I lost the link) that they have a limited life, due to the resistive arc in the potentiometer wearing out. So, as a test of its durability, I uploaded the following code to the Arduino, which just constantly sweeps from 0° to 180° and back to 0°, and left it running for about 10 to 15 minutes, in order to perform a very simple soak test.

#include <Servo.h>

const byte servo1Pin = 12;

Servo servo1;  // create servo object to control a servo
               // twelve servo objects can be created on most boards

int pos = 0;   // variable to store the servo position

void setup() {
  servo1.attach(servo1Pin);  // attaches the servo on pin 12 to the servo object
  Serial.begin(9600);
}

void loop() {
  pos = 0;
  servo1.write(pos);  // tell servo to go to position in variable 'pos'
  Serial.println(pos);
  delay(1000);        // waits 1 s for the servo to reach the position

  pos = 180;
  servo1.write(pos);  // tell servo to go to position in variable 'pos'
  Serial.println(pos);
  delay(1000);        // waits 1 s for the servo to reach the position
}

When I returned, the servo was just making a grinding noise and no longer sweeping; rather, it seemed to be stuck in the 0° position (or the 180° position). I picked the servo up and, whilst not hot, it was certainly quite warm. A quick sniff also revealed the hot, burning smell of motor windings. After switching off the external power supply and allowing it to cool, the servo began to work again. However, the same issue occurred a little while later. Again, after allowing it to rest, upon re-powering, the servo continues to work.
However, I am reluctant to continue with the soak test, as I don’t really want to burn the motor out just yet. Is there a common “no-no” of not making servos sweep from extreme to extreme, so that one should “play nice” and just perform 60° sweeps, or is the cheapness of the servo the issue here? I am powering the servo from an external bench supply, capable of 3 A, so a lack of current is not the issue. Please note that I also have a follow-up question, Should a MG996R Servo's extreme position change over time? Answer: You might be driving the servo beyond its limits. As noted in the pololu.com article “Servo control interface in detail”: If you keep making the pulses wider or narrower, servos will keep moving farther and farther from neutral until they hit their mechanical limits and possibly destroy themselves. This usually isn’t an issue with RC applications since only about half of the mechanical range is ever used, but if you want to use the full range, you have to calibrate for it and be careful to avoid crashing the servo into its limit. Similarly, the arduino.cc article about Servo.writeMicroseconds says (with emphasis I added): On standard servos a parameter value of 1000 is fully counter-clockwise, 2000 is fully clockwise, and 1500 is in the middle. [...] some [...] servos respond to values between 700 and 2300. Feel free to increase these endpoints until the servo no longer continues to increase its range. Note however that attempting to drive a servo past its endpoints (often indicated by a growling sound) is a high-current state, and should be avoided. I suggest running a calibration, where you drive the servo to different angles, to find out what parameter values are large enough to drive the servo to its extreme positions without overdriving it.
{ "domain": "robotics.stackexchange", "id": 821, "tags": "arduino, rcservo" }
Does potential difference or current drive electrolysis?
Question: Would a high voltage (of the order of a few score kV) drive electrolysis? Or does it require a large current and low voltage? Alternatively, does electrolysis require both a large voltage and a large current? Answer: For any electrochemical reaction, electrons are transferred from the reductant to the oxidant. By definition, any flow of charge per unit time is an electric current. However, in the case of electrolysis, the electron transfer is not spontaneous. An external energy source is required for the reaction to take place. The energy (work) provided per unit charge is called the voltage. For any electrolysis reaction to occur therefore requires a supply of electrical energy, which means both a voltage and an electric current are needed. Specifically, the amount of work (energy) resulting from a voltage $V$ and a current $I$ applied for a time $t$ is given by: Work produced by electrical energy, $W = V\cdot I\cdot t$. For a given electrolytic reaction to proceed, the required minimum voltage which must be applied is determined from the Gibbs function, $E^\circ_\text{cell}=\frac{-\Delta G^\circ}{nF}$, where $E^\circ_\text{cell}$ is the minimum voltage needed for the electrolysis reaction to occur, $\Delta G^\circ$ is the change in Gibbs free energy under standard conditions, $n$ is the number of electrons transferred and $F$ is Faraday's constant (96,485 coulombs per mole). For example, if pure water is placed in an 'electrolytic cell' with two non-reactive electrodes (eg: platinum), electrons forced into the 'cell' by an electrical source (eg: battery) will react with water molecules at the cathode, forcing them to lyze (split) into hydrogen ions and hydroxide ions. $\ce{H2O(aq) ->[\text{elect}] H+(aq) + OH- (aq)}$ At the surface of the 'anode', hydroxide ions will be oxidised (donate electrons) to form oxygen gas.
$\ce{4OH- (aq) -> 2H2O + O2 + 4e- }$ (anode half-cell reaction) Meanwhile, at the surface of the 'cathode', hydrogen ions will accept electrons to form hydrogen gas. $\ce{2H+(aq) + 2e- -> H2(g)}$ (cathode half-cell reaction) The overall cell reaction is therefore: $\ce{2H2O(aq) -> 2H2(g) + O2(g)}$ ; $\Delta G^\circ=237.2\ \mathrm{kJ/mol}$ The positive Gibbs function for this reaction indicates that the reaction will not occur spontaneously but requires an external energy source (eg: electrical energy), which is reflected in the negative cell potential. The minimum (theoretical) voltage required is: $E^\circ_\text{cell}=\frac{-\Delta G^\circ}{nF}=\frac{-237.2 \times 10^{3}}{2\times 96485}=-1.23\ \mathrm{V}$ However, in practice, a slightly larger voltage of 1.48 V is required, since the enthalpy (heating) of the products results in slightly lower efficiency, which is manifested as an overpotential of about 0.25 V. Once this critical voltage level is exceeded, the electrolysis reaction proceeds at a rate determined largely by the current, since the current represents the rate at which charge is delivered to the system. Basically, the higher the current, the more molecules will react (electrolyze) and the more product (hydrogen gas) will form per unit time.
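The minimum-voltage arithmetic from the answer can be reproduced in a couple of lines, using the same $\Delta G^\circ$, $n$ and $F$ quoted above:

```python
# Reproducing E_cell = dG / (nF) for water electrolysis, with the values
# given in the answer (per mole of water split, n = 2 electrons).
F = 96485.0      # C/mol, Faraday constant
dG = 237.2e3     # J/mol, standard Gibbs energy change
n = 2            # electrons transferred

e_cell = dG / (n * F)   # magnitude of the minimum voltage, ~1.23 V
```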
{ "domain": "chemistry.stackexchange", "id": 2130, "tags": "physical-chemistry, electrochemistry" }
Order of basicity
Question: I tried using the conjugate acid method to decide stability, but I’m unable to differentiate between b, c, and d based on the inductive effect. There is no conjugation, so no resonance. Answer: In aprotic or weakly polar solvents like chlorobenzene, solvated ions cannot form, so there is no solvent effect. The order is then 'similar' to that in the gas phase. Coming to the question: on protonation, option A has 2 alpha hydrogens (with respect to N+), B has 1 alpha hydrogen, C has 6 alpha hydrogens and D has 4 alpha hydrogens. The number of hyperconjugative structures equals the number of alpha hydrogens, so we would have C>D>A>B based on hyperconjugative stability. But actually, B>A. This is because in B there is the possibility of a hydride shift, which shifts the + charge on nitrogen to the carbon connected to it. On doing so, B has 4 alpha hydrogens. Hence C>D>B>A is the answer. (Although D and B now have the same number of alpha hydrogens, and hence the same number of hyperconjugative structures, D is more stable than B, as B would require some amount of energy to perform the hydride shift.)
{ "domain": "chemistry.stackexchange", "id": 10907, "tags": "organic-chemistry" }
Hypergeometric Function: Differential Equation
Question: In Birrell & Davies, QFT in Curved Spacetime, it is written that the following differential equation can be solved in terms of hypergeometric functions: $$(\partial_t^2 +(k^2+c(t)m^2))\phi(t)=0.$$ But there is no reference and no method listed. Could somebody please help me solve this equation for $c(t)=(a+b\cdot \operatorname{tanh}(dt))$? Answer: This example in Birrell & Davies is quite tricky, and in order to get the exact answer given, you need to manipulate the differential equation and solve it by hand as far as you can get. This involves quite a bit of algebraic manipulation and properties of the hypergeometric functions, but the outline is this. You want to solve the equation $$\frac{d^2\chi_k}{d\eta^2}+[k^2+(A+B\tanh(\rho\eta))m^2]\,\chi_k=0.\tag{1}$$ This can be solved with the substitution $$u=\frac{1}{2}[1+\tanh(\rho\eta)].\tag{2}$$ Next define the variables as in Birrell & Davies: $$\omega_{\mathrm{in}}^2=k^2+m^2(A-B)\\ \omega_{\mathrm{out}}^2=k^2+m^2(A+B)\\ \omega_{\pm}=\frac{1}{2}(\omega_{\mathrm{out}}\pm\omega_{\mathrm{in}}).\tag{3}$$ Now making the substitutions $(2)$ and $(3)$ in $(1)$, and performing some algebraic manipulations involving partial fractions, you arrive at $$\frac{d^2\chi_k}{du^2}+\Big[\frac{1}{u}-\frac{1}{1-u} \Big]\frac{d\chi_k}{du}+\frac{1}{4\rho^2}\Big[\frac{\omega_{\mathrm{in}}^2}{u}+\frac{\omega_{\mathrm{out}}^2}{1-u} \Big]\frac{\chi_k}{u(1-u)}=0.\tag{4}$$ You could manipulate this further into the hypergeometric differential equation, but the easiest way (to me) is to solve this with Mathematica. Notice, however, that $(4)$ has singularities at $u=0$ and $u=1$. These correspond to the asymptotic values $\eta\to -\infty$ and $\eta\to\infty$, so you get the asymptotic mode solutions by investigating the solutions of $(4)$ at the singular points. Substituting $(2)$ into the solution, and after quite a bit of algebra, you should arrive at the solutions $(3.87)$ and $(3.89)$.
{ "domain": "physics.stackexchange", "id": 36214, "tags": "homework-and-exercises, differential-equations, qft-in-curved-spacetime" }
Unsupervised Classification for documents
Question: I'm trying to create a classifier that requires less "manual" work from the user. By less manual work, I mean that there won't be an initial phase of manual labeling of a training set, as in (supervised) machine learning. My dataset is composed of instances that differ considerably by class. They are documents containing orders for specified products for different clients, and every client has its own template. For example I have: [Client A] Image Date: xxx Order: Products: Table [Client B] Date: xxx Order Image Products: table Image Now I'm doing the classification by a simple check on every document for the presence of a specified feature, which is manually identified by a user (by area and using edit distance). The classes are really different (in some cases), and with an unsupervised classifier like agglomerative clustering the classes are split really well. After that, using measures like TF/ICF, the features (in my case I use tokenized and normalized text as features) with the greatest values are often the ones that are used in my manual classification. The criteria I use to stop the clustering iterations vary (I have different configurations), such as a maximum distance or a maximum number of clusters. After that, I thought that once the clusters are created, a user at the end will label each cluster, identifying the class by the best TF/ICF (term frequency, inverse cluster frequency) features found in each cluster. And after that the clusters will be used as a "classifier". I know that this approach will lead to worse classification, but that's not a problem. The problem is that when two classes are really similar (I have classes in which the only difference is the customer code, for example), they are really difficult to split. Any idea on how to approach this problem? And is there a way in which my algorithm can find out if there is a "new class" in the stream?
Answer: If you have a good amount of instances for every class, you can try using a density-based approach for clustering, with algorithms like DBSCAN. If you can label at least some of the documents, you can use semi-supervised learning. Usually, when SSL is used for clustering, you need to specify "cannot link" and "must link" constraints for some pairs of instances, which is basically labeling some instances. One algorithm that follows this approach is HMRF-KMeans (Hidden Markov Random Fields K-Means).
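To make the density-based suggestion concrete, here is a minimal, dependency-free sketch of the idea behind DBSCAN on 1-D feature values (the production implementation lives in scikit-learn as sklearn.cluster.DBSCAN; the data points and parameters below are made up for illustration):

```python
# Toy density-based clustering in the spirit of DBSCAN: points with at
# least min_pts neighbours within eps seed clusters; isolated points
# become noise (label -1). Illustrative sketch, not the full algorithm.
def dbscan(points, eps, min_pts):
    """Label each point with a cluster id, or -1 for noise."""
    def neighbours(i):
        return [j for j, q in enumerate(points)
                if abs(points[i] - q) <= eps]

    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1              # noise (may be claimed later)
            continue
        labels[i] = cluster
        queue = list(seeds)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster     # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbours(j)
            if len(more) >= min_pts:    # j is a core point: keep expanding
                queue.extend(more)
        cluster += 1
    return labels

# two dense groups plus one outlier (made-up 1-D feature values)
data = [0.0, 0.1, 0.2, 0.15, 5.0, 5.1, 5.2, 9.9]
labels = dbscan(data, eps=0.3, min_pts=3)
```

The noise label (-1) is also one practical answer to the "new class in the stream" part of the question: documents that match no existing dense region fall out as noise and can be flagged for review.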
{ "domain": "datascience.stackexchange", "id": 1213, "tags": "classification, clustering, unsupervised-learning" }
Lovasz theta of even cycle
Question: How does one show that the Lovász theta of an even $n$-cycle is $\frac{n}{2}$? Why is the Lovász theta of such cycles not of the form $\frac{n \cos(\frac{\pi}{n})}{1+\cos(\frac{\pi}{n})}$? Could someone provide a derivation of the Lovász theta number for an even cycle? It is clear that the Shannon capacity is $\frac{n}{2}$. Why is the cosine form tight only for odd cycles? Answer: The Lovász theta function is bounded between the independence number and the clique cover number (the chromatic number of the complementary graph). For even cycles, both numbers are $n/2$. For example, for the $6$-cycle with vertices $1,2,3,4,5,6$, there are independent sets of size $3$, e.g. $\{1,3,5\}$, and the graph can be covered with the $3$ cliques $\{1,2\},\{3,4\},\{5,6\}$.
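The sandwich bound in the answer can be verified by brute force for $C_6$: the independence number and a clique cover of size $3$ pin $\vartheta$ at $n/2=3$, while the odd-cycle cosine formula evaluates to something strictly smaller (so it cannot be the right value for even cycles):

```python
# Brute-force sandwich check for the 6-cycle (vertices 0..5).
from itertools import combinations
from math import cos, pi

n = 6
edges = {frozenset((i, (i + 1) % n)) for i in range(n)}

def independent(s):
    """True if no two vertices of s are adjacent in the cycle."""
    return all(frozenset(p) not in edges for p in combinations(s, 2))

# independence number alpha(C6) by exhaustive search
alpha = max(len(s) for r in range(n + 1)
            for s in combinations(range(n), r) if independent(s))

# a cover of C6 by 3 cliques (here: 3 disjoint edges)
cover = [{0, 1}, {2, 3}, {4, 5}]
assert all(frozenset(c) in edges for c in cover)

# the odd-cycle formula, evaluated at even n for comparison
cosine_formula = n * cos(pi / n) / (1 + cos(pi / n))
```

Since $\alpha = 3$ and the cover has $3$ cliques, $3 \le \vartheta(C_6) \le 3$, whereas the cosine expression gives about $2.78$.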
{ "domain": "cs.stackexchange", "id": 3574, "tags": "graphs, information-theory" }
How to pass multiple arguments to rospy node through command line?
Question: I'm looking for a notation like this rosrun my_package my_node.py arg1 arg2 arg3 that I can run in the terminal, which will enable me to pass arguments to a ROS node implemented in Python. So something like: rosrun my_package read_topics.py arg1:='/topic1' arg2:='/topic2' and be able to access those inside the Python code, like for example subscriber = rospy.Subscriber(arg1, Bool, callback) publisher = rospy.Publisher(arg2, Bool, queue_size=1) How can I do that? EDIT: Based on the answers, the right approach is to pass arguments in the format rosrun my_package my_node.py _one_topic:="/topic1" _another_topic:="/topic2" __name:="my_node_name" and access the arguments inside the script as publisher = rospy.Publisher(rospy.get_param('~one_topic'), Bool, queue_size=1) subscriber = rospy.Subscriber(rospy.get_param('~another_topic'), Bool, callback) The node name can be passed using the special parameter __name, because we need to know the node name prior to initializing the rospy node. This works for me. Thank you both. Originally posted by kump on ROS Answers with karma: 308 on 2019-06-17 Post score: 0 Answer: The rosrun documentation should help here: It's also possible to pass a ~parameter using the following syntax (replace the ~ with an _): rosrun package node _parameter:=value Example: rosrun my_package my_node _my_param:=value Note: So something like: rosrun my_package read_topics.py arg1:='/topic1' arg2:='/topic2' The exact syntax you're using in your example is actually the syntax you'd use for remapping topics from the command line with rosrun. And be able to access those inside the python code like for example subscriber = rospy.Subscriber(arg1, Bool, callback) publisher = rospy.Publisher(arg2, Bool, queue_size=1) that's not how this would work. You'd access the parameters as private parameters (Using Parameters in rospy).
Originally posted by gvdhoorn with karma: 86574 on 2019-06-17 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by kump on 2019-06-17: @gvdhoorn So this approach works when I access the arguments after node initialization. But how can you pass the name of the node? You need to know the name before rospy node initialization. Comment by ChuiV on 2019-06-17: You can set the node name via parameter with the __name:=node_name param (note the two underscores!) Then use rospy.init_node("some_node_name") After you initialize, you can get the node name with rospy.get_name() The value of get_name() depends on whether or not you set the __name from the command line.
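To make the remapping-argument convention concrete, here is a small standalone Python sketch of how _name:=value tokens become private parameters and __name:=... is treated specially. This is not the real rospy implementation, and the helper name is made up for illustration:

```python
# Hypothetical sketch (NOT the real rospy code) of how command-line
# arguments of the form _name:=value become private parameters, and
# __name:=... is set aside as a special argument (here, the node name).
def parse_ros_args(argv):
    params, special = {}, {}
    for arg in argv:
        if ":=" not in arg:
            continue  # plain arguments are left for the node itself
        key, value = arg.split(":=", 1)
        if key.startswith("__"):
            special[key] = value           # e.g. __name, __log
        elif key.startswith("_"):
            params["~" + key[1:]] = value  # private parameter, '~' prefix
    return params, special

params, special = parse_ros_args(
    ["my_node.py", "_one_topic:=/topic1", "_another_topic:=/topic2",
     "__name:=my_node_name"])
print(params)   # {'~one_topic': '/topic1', '~another_topic': '/topic2'}
print(special)  # {'__name': 'my_node_name'}
```

The '~' prefix mirrors how such values are later read back with rospy.get_param('~one_topic') in the accepted answer.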
{ "domain": "robotics.stackexchange", "id": 33205, "tags": "rospy, ros-kinetic" }
Can all $O(n)$ problems be solved without nested loops?
Question: There are examples of algorithm implementations that contain nested loops but are of complexity O(n), and some of them have corresponding implementations that contain no nested loops. So here comes a question: can all such implementations be simplified or converted to an implementation with only top-level loops? Namely, can all problems that have an $O(n)$ algorithm be solved with an algorithm without nested loops? Answer: You can write an interpreter for any reasonable instruction set using a single loop with a very, very long if / else if / else if... statement. That should cover about all solvable problems. You can calculate the Ackermann function with one loop without recursion (not in the lifetime of the universe for n >= 4, but in principle).
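As a minimal illustration of the question's premise, any fixed nesting can be mechanically rewritten as a single loop by encoding the loop indices into one counter. A Python sketch with made-up helper names:

```python
# Two equivalent O(n*m) scans of an n-by-m grid: the nested version,
# and a single loop recovering (i, j) from one counter with divmod.
def pairs_nested(n, m):
    out = []
    for i in range(n):
        for j in range(m):
            out.append((i, j))
    return out

def pairs_flat(n, m):
    out = []
    for k in range(n * m):
        i, j = divmod(k, m)  # same iteration order as the nested loops
        out.append((i, j))
    return out

assert pairs_nested(3, 4) == pairs_flat(3, 4)
```

This index-arithmetic trick is a special case of the answer's broader point: a single loop with a big dispatch body can simulate any control flow.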
{ "domain": "cs.stackexchange", "id": 13504, "tags": "algorithms, complexity-theory, time-complexity, asymptotics" }
Calculation of VC dimension of simple neural network
Question: Suppose I have a perceptron with one hidden layer, whose input is one real number $x \in \mathbb{R}$, and the activation function of the output layer is the threshold function: $$ \theta(x) = \begin{cases} 0, x \leq 0 \\ 1, x > 0 \end{cases} $$ The hidden layer may contain $k$ units and I would like to calculate the VC dimension of this feed-forward neural network. The VC dimension is defined as the cardinality of the maximal set that can be shattered by properly adjusting the weights of the neural network. The threshold functions have a VC dimension of $n+1$, where $n$ is the number of input neurons, because an $(n-1)$-dimensional hyperplane can split $n+1$ points in general position in any way. So when considering the results in the first layer, we have a VC dimension of $2$ for each gate, and the total number of points that can be separated by the activation is $2 k$. Then we have a vector $\in \mathbb{R}^k$ to be processed to output, and the output unit has a dimension $k + 1$. Do I understand correctly that the resulting VC dimension of this simple neural network is: $$ 2 k + k + 1 = 3k + 1 $$ Answer: I do not believe this is correct. The entire network will represent a piecewise-constant function with at most $k+1$ pieces, and has VC dimension $k+1$. Each hidden neuron is a step function, and together there are at most $k$ jump points among them. Taking a linear combination of those, we still cannot create any new jump points, so at the output neuron before activation, we have a piecewise-constant function with at most $k$ jump points, and arbitrary constant values on the $k+1$ intervals between them. After the activation, it's just piecewise constant with values 0 and 1 on at most $k+1$ intervals.
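The answer's counting argument can be checked by brute force for small k: a labeling of sorted points on the line is realizable by a {0,1}-valued piecewise-constant function with at most k jumps exactly when the labels change at most k times. A Python sketch (helper names are made up):

```python
from itertools import product

# Supporting the answer: the network computes a {0,1}-valued
# piecewise-constant function of x with at most k jump points, so a
# labeling of sorted points is realizable iff the labels change value
# at most k times along the line.
def realizable(labels, k):
    changes = sum(a != b for a, b in zip(labels, labels[1:]))
    return changes <= k

def shatterable(m, k):
    # m distinct sorted points are shattered iff every labeling is realizable
    return all(realizable(lab, k) for lab in product([0, 1], repeat=m))

k = 3
assert shatterable(k + 1, k)      # k+1 points can always be shattered
assert not shatterable(k + 2, k)  # alternating labels need k+1 jumps
```

The failing labeling for k+2 points is the alternating one, which needs k+1 jumps, so the VC dimension is exactly k+1 as the answer states.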
{ "domain": "datascience.stackexchange", "id": 8598, "tags": "neural-network, perceptron, vc-theory" }
Non-local field redefinition and $S$-matrix
Question: It is known that for local field redefinitions for which the LSZ formula is valid: $$\langle 0|\phi(x)|p\rangle \neq 0$$ field redefinitions don't change the S-matrix. (See QMechanic's answer to Equivalence Theorem of the S-Matrix) But is it true for non-local field redefinitions? For instance if I take a field redefinition of the form: $$\psi(x)=e^{-l\Box} \phi(x)$$ Will the S-matrix be invariant under this? From QMechanic's explanation linked above and the answer by AccidentalFourierTransform here it would seem that the answer should be yes. Edit: My main concerns are Is such a transformation invertible? Is this a concern? Any boundary condition on $\psi$ would translate to infinitely many boundary conditions on $\phi$. Is this relevant? Answer: Claim 1. If $\psi(x)$ is an arbitrary operator that satisfies \begin{equation} \langle 0|\psi(x)|p\rangle\neq 0\tag1 \end{equation} then it is a valid interpolating field, and as such, it can be used in the LSZ formula. The proof can be found in any introductory text, such as Weinberg. Claim 2. If we assume that \begin{equation} \langle 0|\phi(x)|p\rangle\neq 0\tag2 \end{equation} and define \begin{equation} \psi(x)\overset{\mathrm{def}}=\mathrm e^{-\ell\partial^2}\phi(x)\tag3 \end{equation} then we have \begin{equation} \langle 0|\psi(x)|p\rangle\neq 0\tag4 \end{equation} The proof is straightforward. One just needs to use $\langle0|\phi(x)|p\rangle=c\mathrm e^{ipx}$ for some non-zero constant $c$, and the trivial identity $\mathrm e^{-\ell\partial^2}\mathrm e^{ipx}=\mathrm e^{\ell p^2}\mathrm e^{ipx}$. Conclusion: the non-local redefinition $(3)$ is a valid redefinition, and the $S$ matrix is invariant under it.
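The identity used in the proof of Claim 2 can be checked numerically: acting on $\mathrm e^{ipx}$, each $\partial^2$ brings down a factor $(ip)^2=-p^2$, so the operator series for $\mathrm e^{-\ell\partial^2}$ sums to the scalar $\mathrm e^{\ell p^2}$. A quick Python check of this series, for illustrative values of the parameters:

```python
import cmath
import math

# Numerical check of exp(-l d^2/dx^2) exp(ipx) = exp(l p^2) exp(ipx):
# each application of d^2/dx^2 to exp(ipx) multiplies it by (ip)^2 = -p^2,
# so the operator exponential reduces to an ordinary scalar series.
l, p, x = 0.3, 1.7, 0.5

series = sum(
    (-l) ** n * (-p ** 2) ** n / math.factorial(n) * cmath.exp(1j * p * x)
    for n in range(40)          # 40 terms is far past convergence here
)
closed = cmath.exp(l * p ** 2) * cmath.exp(1j * p * x)

assert abs(series - closed) < 1e-12
```
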
{ "domain": "physics.stackexchange", "id": 41936, "tags": "quantum-field-theory, s-matrix-theory, non-locality" }
How do I add a repository Ubuntu Bionic
Question: My output of cat /etc/apt/sources.list.d/ros-latest.list is deb http:packages.ros.org/ros/ubuntu main So I want to replace this as it's missing the distribution name. Searching at https://repogen.simplylinux.ch/generate.php I am shown the following: ###### Ubuntu Main Repos deb http://us.archive.ubuntu.com/ubuntu/ main deb-src http://us.archive.ubuntu.com/ubuntu/ main ###### Ubuntu Update Repos deb http://us.archive.ubuntu.com/ubuntu/ -security main deb-src http://us.archive.ubuntu.com/ubuntu/ -security main My question is: How do I replace my malformed repository with the one I found at the repogen website? Thank you Originally posted by jrock on ROS Answers with karma: 5 on 2019-05-15 Post score: 0 Original comments Comment by gvdhoorn on 2019-05-16: Is deb http:packages.ros.org/ros/ubuntu main a typo? It seems to be missing the // between http: and packages.ros.org/... Comment by gvdhoorn on 2019-05-16: How do I replace my malformed repository with the one I found at the repogen website? I wouldn't use repogen for this: it seems to be slightly out-of-date (only lists Bionic (alpha 2)), it doesn't help you configure the ROS repositories and seems to be more geared towards setting up the main sources.list. For ROS you'd use an add-on file (ros-latest.list) which is not part of sources.list, but placed in /etc/apt/sources.list.d/ (which is a directory, not a file). Answer: It's just a text file, so you can edit it using nano, vim, joe, gedit, anything. You will need to use sudo though, as it's a file in a non-world-writable location (ie: only root may change it). On my system (ROS Melodic on Ubuntu Bionic) ros-latest.list contains: deb http://packages.ros.org/ros/ubuntu bionic main Note the bionic between http://packages.ros.org/ros/ubuntu and main.
The following command will write this line to the ros-latest.list file: echo "deb http://packages.ros.org/ros/ubuntu bionic main" | sudo tee /etc/apt/sources.list.d/ros-latest.list Originally posted by gvdhoorn with karma: 86574 on 2019-05-16 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by jrock on 2019-05-17: Excellent. Thank you. This worked.
{ "domain": "robotics.stackexchange", "id": 33024, "tags": "ros2" }
Regarding cancer cells and telomeres
Question: If cancer cells have telomeres, are they different from the telomeres in non-cancerous cells? Would cancer cell telomeres be somehow 'set up' to function almost indefinitely; in other words, are 'they' very 'robust' or 'durable'? If these cancer cell telomeres are 'durable', could their functioning cause any apoptosis mechanisms to 'shut down'? Answer: In a normal cell, during each replication the telomere is shortened slightly due to the end replication problem, as you probably know. As mutations occur and a normal cell begins to exhibit cancerous characteristics, it needs a way to stop the self-destruction which happens when the telomeres become too short. In fact it is the cancer cells themselves which 'short-circuit' the shortening process and cause the telomeres to stay long enough. 90% of tumors do this by activating telomerase, an enzyme complex which elongates telomeres and keeps them from getting too short, so that the cell is effectively immortal from this sort of destruction. Telomerase is not active in most cells, except for stem, germ, and hair follicle cells, but cancer cells use this to their advantage and activate it to stay immortal. The other 10% or so employ a method called ALT, or Alternative Lengthening of Telomeres, a not yet thoroughly understood process which involves the recombination of tandem repeats between sister chromatids. This page explains how cancer cells bypass the telomere shortening process well, and also provides a series of further reading on telomeres and telomere shortening.
{ "domain": "biology.stackexchange", "id": 3036, "tags": "cell-biology, cancer, telomere" }
Are we choosing the same action in every step in SARSA?
Question: Here is the pseudocode for SARSA (which I took from here) Do we only select one action at the very beginning and then we always choose the same action for each step? Does it really make sense to choose the same initially chosen action $a$ regardless of the state $s$? Answer: Do we only select one action at the very beginning and then we always choose the same action for each step? No. The pseudocode is clear on this, by using the word Choose and referencing a policy. If you were expected to take the same action again, then the pseudocode already has the previous action in variable a, so it would not need to state anything about making a choice or using a policy. The $a \leftarrow a'$ notation is a common way to describe copying values*, so the variable a is changed at the end of each loop. Does it really make sense to choose the same initially chosen action $a$ regardless of the state $s$? Not in this case. Some learning algorithms do use a form of "sticky" exploration where a single exploratory action is committed to for multiple time steps. It can be useful in some environments. But not basic SARSA as described in the question. * It avoids the ambiguity of $=$ as assignment or equality operator. You may also see $:=$ for assignment as an alternative, but for instance Sutton & Barto use $\leftarrow$ consistently, and a lot of RL literature follows this convention.
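To make the $a \leftarrow a'$ step concrete, here is a toy SARSA loop in Python on a 5-state corridor; the environment and constants are made up for illustration. The point is the last line of the loop body: the next action is re-chosen from the current policy at every step and then copied into a, never fixed once at the start:

```python
import random

# Toy SARSA on a 5-state corridor: start at state 0, episode ends at
# state 4, reward -1 per step (so shorter paths are better).
random.seed(1)
GOAL, ACTIONS = 4, (-1, +1)
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
alpha, gamma, eps = 0.5, 1.0, 0.1

def policy(s):  # epsilon-greedy policy derived from Q
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

for episode in range(300):
    s, a = 0, policy(0)          # initialize S, choose A from S
    for t in range(100):         # cap episode length
        s2 = min(max(s + a, 0), GOAL)
        r = -1.0
        if s2 == GOAL:           # terminal transition: target is just r
            Q[(s, a)] += alpha * (r - Q[(s, a)])
            break
        a2 = policy(s2)          # Choose A' from S' using policy from Q
        Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])
        s, a = s2, a2            # s <- s', a <- a': copy, not reuse forever

# the learned greedy policy moves right in every non-terminal state
assert all(Q[(s, +1)] > Q[(s, -1)] for s in range(GOAL))
```
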
{ "domain": "ai.stackexchange", "id": 3139, "tags": "reinforcement-learning, sarsa" }
Reaction speed of Na2CO3 with vinegar vs that produced by NaHCO3 with CH3COOH
Question: I'm working on a project requiring the easy production of a gas not corrosive to "normal" household materials (e.g. carbon dioxide). I settled on vinegar (acetic acid) and baking soda or washing soda, as it is very easily available to most people. Looking at the chemical formula for both sodium carbonate and sodium bicarbonate, I can assume that roughly the same amount of gas will be produced by the reaction of both with vinegar. However, is there any difference at all in reaction speed? I don't care about the end product as long as it can be washed down the drain. Answer: Sodium carbonate, $\ce{Na2CO3}$, will not create gas when mixed with an equimolar portion of acetic acid (i.e. $1\ mol\ \ce{Na2CO3}:1\ mol\ \ce{CH3COOH}$). The reaction would proceed as follows: $$\ce{Na2CO3} + \ce{CH3COOH} \ce{->} \ce{NaHCO3} + \ce{CH3COONa}.$$ Therefore, you would need another equimolar portion of acetic acid to protonate the bicarbonate ion, according to this reaction: $$ \ce{NaHCO3} + \ce{CH3COOH} \ce{->} \ce{CH3COONa} + \ce{H2O} + \ce{CO2_{(g)}}.$$ This should illustrate that using sodium carbonate simply causes you to use an extra portion of acetic acid to make the same amount of gas. Thus, sodium bicarbonate and acetic acid will produce gas in less overall time than sodium carbonate and acetic acid. When starting from sodium carbonate, the reaction will proceed through both reactions above. When starting with sodium bicarbonate, only the second reaction must proceed. Technically, the rate of gas evolution will be the same once gas formation begins (assuming equal starting molarities). However, the sodium carbonate reaction wouldn't produce any gas until the first reaction had completely finished, taking more time overall.
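The stoichiometric consequence of the answer's two reactions can be put as one line of arithmetic: per mole of acetic acid, sodium bicarbonate yields twice the CO2 of sodium carbonate (assuming the acid is the limiting reagent). A trivial Python sketch with made-up function names:

```python
# Moles of CO2 per mole of acetic acid, assuming acid is limiting:
# Na2CO3 + 2 CH3COOH -> 2 CH3COONa + H2O + CO2   (both steps combined)
# NaHCO3 +   CH3COOH ->   CH3COONa + H2O + CO2
def co2_from_carbonate(mol_acid):
    return mol_acid / 2.0   # two protons are needed per CO2 released

def co2_from_bicarbonate(mol_acid):
    return mol_acid         # one proton per CO2 released

acid = 0.5  # mol of acetic acid, e.g. from household vinegar
assert co2_from_bicarbonate(acid) == 2 * co2_from_carbonate(acid)
```
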
{ "domain": "chemistry.stackexchange", "id": 4980, "tags": "acid-base" }
Why is the phase velocity in a transmission line not affected by its geometry?
Question: When deriving the speed of light in vacuum, one usually starts from Maxwell's equations, does some calculus and finds a wave solution with the phase velocity $c = 1/\sqrt{\mu_0 \varepsilon_0}$. This is clear to me. When analyzing transmission lines, the approach is not quite the same. One starts with a model of distributed inductance, capacitance, resistance and conductance along the transmission line from which one obtains the telegrapher's equations. Solving these equations (with a sinusoidal ansatz) again leads to a wave solution of the form $\exp(\mathrm i \omega t - \gamma x)$ where $$ \gamma = \sqrt{(R + \mathrm i \omega L) (G + \mathrm i \omega C)} . $$ For a lossless transmission line ($R = 0 = G$) we thus find $v_{\mathrm{ph}} = \omega / \operatorname{Im}(\gamma) = 1 / \sqrt{L C}$. Now, in the literature I've been looking at (on calculating $v_{\mathrm{ph}}$ for coplanar waveguides, but this is unimportant, I suspect) it is stated as an obvious fact that in the absence of any dielectric except vacuum $v_{\mathrm{ph}} = c$ in the transmission line. This is not obvious to me at all. Shouldn't the waveguide geometry (which determines $L$ and $C$, after all) be able to affect this value? The idea seems to be that only the space in which the fields propagate determines this velocity and that the conductors, carrying merely currents and charges, can't affect it. Can you justify this fact and make it intuitive for me? Does this still hold with a lossy transmission line? That is, if I "switch on" the resistance $R > 0$ without changing the geometry of an existing transmission line or any nearby dielectrics, will $L$ and $C$ "magically" adjust in order to keep $\operatorname{Im}(\gamma)$ (and thus $v_{\mathrm{ph}}$) constant? Answer: This is an excellent question but you must restrict it to the case of a homogeneous transmission line, otherwise it is not true.
The line must consist of a pair of parallel conductors that individually may have different but otherwise arbitrary cross sections, one may contain the other as in a coaxial line, and the lines are embedded in a propagating medium that is homogeneous both in cross-section and along the axis. If you have that, then the fundamental mode of propagation is essentially the static $E$ and $H$ fields along the line, and because of the assumed homogeneity that field has no longitudinal component; that is, it is a TEM mode. The fields can be derived from a scalar potential $\Phi$ so that with $\kappa = \omega \sqrt{\epsilon \mu}$: $$ \mathbf E_t =\nabla_t \Phi e^{-\mathfrak j \kappa z} \qquad E_z=0\\ \mathbf H_t = \pm \sqrt{\frac{\epsilon}{\mu}} \mathbf {\hat z} \times \mathbf E_t \qquad H_z=0 \\ \nabla^2_t \Phi = 0 \\ \Phi({\mathcal{C}_1}) =\Phi_1 \qquad \Phi({\mathcal{C}_2}) =\Phi_2$$ where $\mathcal{C_1}$ and $\mathcal{C_2}$ are the contours of the conductors at which the scalar potential $\Phi$ takes the given constant values. If the medium is lossy then the $\epsilon$ and $\mu$ are complex quantities. The intuition you are asking for is in the static nature of this propagating field: it is a slight modification, with restriction to the conductors, of a plane wave whose phase velocity then depends only on the medium. This does not mean that a different, non-TEM, mode may not propagate at a geometry-dependent speed, and, in fact, the phase velocity of all the other modes, be they TE or TM, does depend on the shape of the contours and their separation.
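A quick way to see the geometry cancel is to take the textbook per-unit-length L and C of a vacuum coaxial line: the $\ln(b/a)$ geometry factor drops out of the product $LC$, leaving $\mu_0\varepsilon_0$ for any radii, so $v_{\mathrm{ph}}=1/\sqrt{LC}=c$. A Python sketch:

```python
import math

# For a vacuum coax with inner radius a and outer radius b:
#   L = mu0/(2 pi) * ln(b/a)   [H/m]
#   C = 2 pi eps0 / ln(b/a)    [F/m]
# The ln(b/a) factor cancels in L*C, so v_ph = 1/sqrt(L*C) = c
# independently of the geometry.
mu0 = 4e-7 * math.pi           # H/m (exact to well within float precision)
eps0 = 8.8541878128e-12        # F/m
c = 299792458.0                # m/s

def v_phase_coax(a, b):
    L = mu0 / (2 * math.pi) * math.log(b / a)
    C = 2 * math.pi * eps0 / math.log(b / a)
    return 1.0 / math.sqrt(L * C)

for a, b in [(0.5e-3, 1.75e-3), (1e-3, 10e-3)]:
    assert abs(v_phase_coax(a, b) - c) / c < 1e-9
```

The same cancellation happens for any TEM line, which is the answer's point: L and C each depend on geometry, but their product is fixed by the medium alone.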
{ "domain": "physics.stackexchange", "id": 93100, "tags": "electromagnetism, electric-circuits, speed-of-light" }
writing your own local planner? Template?
Question: Hi all I want to write my own local planner to use with the navigation stack. I already have written my code; what is left is to modify it so it can be used with the navigation stack. I am wondering, is there a template or any guidance for writing your own local planner? Originally posted by Gazer on ROS Answers with karma: 146 on 2013-08-15 Post score: 1 Answer: The template is the BaseLocalPlanner interface class. You'll need to make that into a plugin and that should be it. Originally posted by dornhege with karma: 31395 on 2013-08-16 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by M_wasiel13 on 2016-10-26: Hello Could you be more precise? I cloned the repository from https://github.com/ros-planning/navigation.. Is there any tutorial on how to edit base_local_planner and implement my own planner? Best regards Mateusz Wasielewski
{ "domain": "robotics.stackexchange", "id": 15278, "tags": "ros" }
How are bound states handled in QFT?
Question: QFT seems very well suited to handle scattering amplitudes between particles represented by the fields in the Lagrangian. But what if you want to know something about a bound state without including it as an extra field? For example, suppose we have electron+proton QED (ignoring the proton's structure): $$\mathcal{L} = -\frac14 (F_{\mu\nu})^2 + \bar{\psi_e} (i\not \partial -m_e)\psi_e + \bar{\psi_p} (i\not \partial -m_p)\psi_p - e \bar{\psi_e} \not A \psi_e + e \bar{\psi_p}\not A \psi_p$$ I can use this with no problem to calculate Rutherford scattering or similar processes. But this Lagrangian should also have the hydrogen atom hidden in it somewhere. For example, I may want to use QFT to calculate the binding energy of hydrogen. Or I might want to calculate the probability of firing an electron at a proton and getting hydrogen plus photons as a result. How can this be done? Obviously this is a broad subject, so I'm just looking for an outline of how it goes. Answer: The conventional way to handle bound states in relativistic quantum field theory is the Bethe-Salpeter equation. An old but very informative survey paper on the Bethe-Salpeter equation is M.M. Broido, Green functions in particle physics, Reports on Progress in Physics 32 (1969), 493-545. The hydrogen atom is in QFT usually treated in an approximation where the proton is treated as an external Coulomb field (and some recoil effects are handled perturbatively). The basics are given in Weinberg's QFT book Vol. 1 (p.560 for the Bethe-Salpeter equation and Chapter 14 for 1-electron atoms). Weinberg notes on p.560 that the theory of relativistic effects and radiative corrections in bound states is not yet in entirely satisfactory shape. This quote from 1995 is still valid today, 20 years later. On the other hand, quantum chemists routinely use relativistic quantum mechanical calculations for the prediction of properties of heavy atoms.
For example, the color of gold or the fluidity of mercury at room temperature can be explained only through relativistic effects. They use the Dirac-Fock approximation of QED.
{ "domain": "physics.stackexchange", "id": 34349, "tags": "quantum-field-theory, particle-physics, quantum-electrodynamics" }
Do all classical-statistical critical lattice models have emergent conformal invariance?
Question: I understand that any quantum lattice model at the critical point which can be described by a massless relativistic quantum field theory has emergent conformal invariance. My question is: what about classical-statistical lattice models at the critical point? Do they all have emergent conformal symmetry, and if not, are there any counterexamples? Answer: I understand that any quantum lattice model at the critical point which can be described by a massless relativistic quantum field theory has emergent conformal invariance. This is not always true. For example, free quantum electrodynamics in $(2+1)$d is scale invariant and relativistic but not conformal. However, there is a proof that in $(1+1)$ dimensions, scale+Lorentz symmetry implies conformal symmetry. See the following Stack Exchange thread for more information: Does dilation/scale invariance imply conformal invariance? With that said, it appears that an enormous number of relevant examples with emergent scale+rotational symmetry also have emergent conformal symmetry, so there has been a lot of work trying to understand whether one can get conformal symmetry by adding some other reasonable assumptions. My question is what about classical lattice models at the critical point. Do they all have such a property and if not are there any counterexamples? For classical models, you will at least want emergent scale and rotational symmetry at the critical point before you can ask if conformal symmetry emerges (this is the analog to requiring Lorentz invariance in the quantum case). For example, the Pokrovsky-Talapov critical point between incommensurate and commensurate phases in two dimensions involves highly anisotropic scaling between the two dimensions, so it is scale invariant but not conformal. However, if you have rotational symmetry and scale invariance, then in two dimensions you also have conformal invariance.
{ "domain": "physics.stackexchange", "id": 61904, "tags": "conformal-field-theory, phase-transition, lattice-model, scale-invariance" }
Are there knots in DNA?
Question: This question may sound silly. How is it that DNA molecules do not become totally entangled over time in the cell? If I take long strings in a box and shake it, that would create knots, and I wouldn't be able to easily isolate each string. But during mitosis, the chromosomes are totally isolated from each other, while otherwise the DNA is floating freely in the nucleus. Answer: I think a key factor is that DNA molecules are not passive bits of string that are left to move freely (which is what causes knots in string-like things that are left alone too long). For one thing the DNA molecule is an active part of the cell metabolism, as it is constantly being transcribed (to generate RNA and proteins) or copied or corrected and so on. And as part of its role in metabolism the DNA molecule is closely associated with RNA molecules, proteins and specifically proteins that the DNA is wrapped around called "histones". The Wikipedia page for "Chromatin" (what chromosomes are made of, basically) describes various aspects of this organization, including: That DNA which codes genes that are actively transcribed ("turned on") is more loosely packaged and associated with RNA polymerases (referred to as euchromatin) while that DNA which codes inactive genes ("turned off") is more condensed and associated with structural proteins (heterochromatin). So DNA is constantly being manipulated by all kinds of proteins and enzymes that affect its shape and position in space; it isn't just lying there. And insofar as it might get into knots, that would likely be part of the manipulations of the enzymes and proteins, or would fairly easily be dealt with by them.
This doesn't mean DNA doesn't have knots however, in fact it appears it very much can: DNA AND KNOT THEORY (from The Institute for Environmental Modelling at the University of Tennessee) Conclusions: Principles of topology give cell biologists a quantitative, powerful, and invariant way to measure properties of DNA. Principles of knot theory have helped elucidate the mechanisms by which enzymes unpack DNA. Additionally, topological methods have been influential in determining the left handed winding of DNA around histones. Measuring changes in crossing number have also been instrumental in understanding the termination of DNA replication and the role of enzymes in recombination. (note this paper looks at DNA in E.coli, which does not organize its DNA in chromatin the way Eukaryotes do). Untangling DNA (from the Nature blog CreatureCast, 2013) This link includes a video about topoisomerases, enzymes also referred to in the previous link that unentangle DNA. A Monte Carlo Study of Knots in Long Double-Stranded DNA Chains (PLOS Computational Biology, 2016) Quote from the abstract: Even though our coarse-grained model is only based on experimental knotting probabilities of short DNA strands, it reproduces the correct persistence length of DNA. This indicates that knots are not only a fine gauge for structural properties, but a promising tool for the design of polymer models. The active site of the SET domain is constructed on a knot (Nature Structural Biology, 200) A knot or not a knot? SETting the record ‘straight’ on proteins (Computational Biology and Chemistry, 2003) These papers discuss potential knots within a histone protein, not DNA, but illustrate that topology and knots can be important in macromolecules: A knot within the SET domain helps form the methyltransferase active site, where AdoHcy binds and lysine methylation is likely to occur.
A novel knot found in the SET domain is examined in the light of five recent crystal structures and their descriptions in the literature. Using the algorithm of Taylor it was established that the backbone chain does not form a true knot. Discovery of a predicted DNA knot substantiates a model for site-specific recombination (Science, 1985) This link is a bit painful to read (and from 1985 so I don't know if it was superseded, but it has over 200 citations) so I will leave it at the title. And finally this article, which I looked at last but probably should have looked at first, because the first sentence of the abstract literally answers your question: Direct observation of DNA knots using a solid-state nanopore (Nature Nanotechnology, 2016) Long DNA molecules can self-entangle into knots. Experimental techniques for observing such DNA knots (primarily gel electrophoresis) are limited to bulk methods and circular molecules below 10 kilobase pairs in length. Here, we show that solid-state nanopores can be used to directly observe individual knots in both linear and circular single DNA molecules of arbitrary length.
{ "domain": "biology.stackexchange", "id": 6789, "tags": "genetics, dna, molecular-genetics" }
Buildfarm issues with shared libraries
Question: Hi all, I just decided to release for indigo a package that I had working for hydro and could not solve a problem with an external shared library. My package is avt_vimba_camera, a driver for Allied Vision Technologies cameras using ethernet (mainly). For that I need to use the AVT SDK called VIMBA. In my package, I have the VIMBA libraries and I "find_library" them from the CMakeLists.txt, which checks for the appropriate architecture (32 vs 64 bit). The package itself compiles on my machine and on another machine with no issue at all, but on jenkins, the library cannot be found (example). What is the difference on these buildfarms that causes the library to be "NOTFOUND"? Originally posted by Miquel Massot on ROS Answers with karma: 1471 on 2014-09-01 Post score: 1 Answer: Each job on the build farm starts with a "fresh" image which does not have any packages installed. Then it clones your repo and uses the information in the package.xml files to install declared dependencies. I don't see any reference to the SDK in your package.xml (https://github.com/srv/avt_vimba_camera/blob/7295168f7c3465cc9a864640e8366f87663b56d3/package.xml). That might be a reason why the build farm does not install it and therefore CMake can't find the library. On your development machine you will have the SDK installed and therefore the library can be found. Originally posted by Dirk Thomas with karma: 16276 on 2014-09-01 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Miquel Massot on 2014-09-02: It is in lib/{32,64}bit -> https://github.com/srv/avt_vimba_camera/tree/indigo/lib/32bit This SDK has to be manually installed, that's why I unzipped it in lib/ and in include/ so that my executables can find it. Comment by Dirk Thomas on 2014-09-02: Nothing searches your sources' lib folder for libraries. Also you don't install those libraries which means they would not even be part of the Debian package.
Maybe on your development machine you have the libraries additionally in a different location?
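For completeness, a hedged sketch of the kind of install rule the last comment refers to; the path and pattern are illustrative only (whether vendoring the SDK this way is appropriate also depends on its license), and the destination variable is the standard catkin one:

```cmake
# Illustrative only: install the vendored shared libraries so they end
# up in the Debian package the build farm produces.
install(DIRECTORY lib/64bit/
        DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
        FILES_MATCHING PATTERN "*.so*")
```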
{ "domain": "robotics.stackexchange", "id": 19253, "tags": "ros, buildfarm, bloom-release, jenkins" }
Hypervalency in elements in the second period
Question: In my experience, most texts that address hypervalency say that it only occurs for elements in the 3rd period and onwards. This explains the occurrence of $\ce{Cl2O7}$ or dichlorine heptoxide. However, some 2nd period nonmetals like $\ce{C}$ and $\ce{O}$ show hypervalency. Examples: $\ce{CH5}$ - This is unlikely to occur but it does sometimes happen that carbon bonds to 5 atoms instead of 4. $\ce{H3O+}$ - Here oxygen is hypervalent. How is it possible for carbon and oxygen to each have 9 electrons if each orbital only holds 2 electrons? Do they switch between electrons or something? Answer: Normally when we talk about a single covalent bond, we are referring to a 2-centre 2-electron bond, which means that there are two electrons holding two atoms together. Carbon never forms 5 bonds. The only exception that I know of is the $\ce{CH5+}$ methanium cation, the bonding in which can be explained by a 3-centre-2-electron bond. The same kind of bond appears in diborane ($\ce{B2H6}$). In both cases, the octet rule (or duplet rule in the case of the bridging hydrogens in diborane) is not violated. It is just that those 2 electrons are shared amongst 3 different atoms, so each "bond" is effectively half a bond (in MO theory parlance we say that the bond order is 0.5). You could think of it as three of the C-H bonds being normal 2-electron bonds, and two of the C-H bonds being half-bonds (having one electron each). The total number of electrons around carbon is therefore $3 \times 2 + 1 + 1 = 8$. The neutral species $\ce{CH5}$ does not exist, because it has one more electron than the $\ce{CH5+}$ cation. That would mean that you either have to put 9 electrons around carbon, or put 3 electrons around hydrogen, both of which are of course not allowed. The hydronium ion $\ce{H3O+}$ is not actually hypervalent. It is similar to the ammonium ion $\ce{NH4+}$ in that a dative bond is formed from the lone pair on O to a $\ce{H+}$ ion.
{ "domain": "chemistry.stackexchange", "id": 3574, "tags": "electrons, covalent-compounds" }
Do i need to use ROSBRIDGE for controlling my robot from my PC?
Question: I am using Ubuntu on my PC and have ROS Indigo on my robot. Now I want to control it completely using the PC and also want to see video taken by the on-board camera. Do I need to use ROSBRIDGE? Without rosbridge, is it possible to control the bot? Thank you in advance! Originally posted by swati shirke on ROS Answers with karma: 1 on 2016-06-17 Post score: 0 Answer: No. ROS handles distributed processing "out of the box" (in this case, ROS nodes both on the robot and on your PC). You need to read the basic documentation and gain a working understanding of ROS nodes, the ROS master, and pub/sub functionality (at a minimum). Try starting at the ROS core components page and go from there. Also, directly (1st sentence) from the rosbridge wiki page: Rosbridge provides a JSON API to ROS functionality for non-ROS programs. Which is quite clearly not what you need. Originally posted by kramer with karma: 1470 on 2016-06-17 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 24969, "tags": "ros" }
If the angular momentum is conserved in a system whose moment of inertia is increased, its kinetic energy decreases
Question: The working that shows $\Delta K.E.$ is less than zero is attached. But where does this energy go? It is transformed back to rotational KE if the moment of inertia decreases. How would you account for the change in kinetic energy? Answer: As pointed out in many comments, the concept of work is integral here. In Newtonian mechanics, we have the equation: $$K_{initial} + W = K_{final}$$ Here the work is done by changing $I$, the moment of inertia. To do this, the mass distribution of the body needs to be shifted further from the axis of rotation. Naturally, at least some particles of the body need to be moved away from the axis of rotation. To do this an external force is required. Displacement of these particles is against the centripetal force causing the rotation - such as tension. This leads to the particle, and hence the system, losing kinetic energy. (Negative work is being done by the centripetal force on the particle.) Exactly how much energy is lost is given by the $\Delta E$ you have calculated. The above is just an attempt to provide an intuitive view of conservation of angular momentum in terms of linear dynamics.
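As a concrete sanity check of the sign of $\Delta K.E.$, here is a minimal C++ sketch. With $L$ conserved, the rotational kinetic energy is $KE = L^2/(2I)$, so $KE$ must drop when $I$ grows. The numeric values used below are illustrative assumptions, not taken from the question:

```cpp
#include <cassert>

// Rotational kinetic energy at fixed angular momentum L: KE = L^2 / (2 I).
double rotationalKE(double L, double I) { return L * L / (2.0 * I); }

// Energy change when the moment of inertia goes from I1 to I2 while L is
// conserved; negative whenever I2 > I1 (mass moved away from the axis).
double deltaKE(double L, double I1, double I2) {
    return rotationalKE(L, I2) - rotationalKE(L, I1);
}
```

For example, with $L = 10$, $I_1 = 2$, $I_2 = 5$ (SI units), the kinetic energy falls from 25 J to 10 J; the missing 15 J is exactly the negative work done by the centripetal force described above.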
{ "domain": "physics.stackexchange", "id": 38309, "tags": "newtonian-mechanics, angular-momentum, rotational-dynamics, energy-conservation, work" }
Adjoint of the time-evolution operator
Question: The time-evolution operator $\hat U$ is defined so that $\Psi(x,t)=\hat U(t)\Psi(x,0)$. In terms of the Hamiltonian, it is expressed as $\hat{U}(t)=\exp \left(-\frac{i t}{\hbar} \hat{H}\right)$. I'm trying to calculate the adjoint $\hat U^\dagger(t)$. My attempt at a solution It must satisfy $\langle \hat{U}(t) \Psi(x,0) | \Phi(x,0) \rangle=\langle \Psi(x,0) | \hat{U}^\dagger(t)\Phi(x,0) \rangle$, so $$\int _{-\infty}^{+\infty} \hat{U}^\star(t) \Psi^\star(x,0) \Phi(x,0) dx=\int _{-\infty}^{+\infty} \Psi^\star(x,0)\hat{U}^\dagger(t)\Phi(x,0)dx$$ I know that $\hat U$ is unitary, so $\hat U^\dagger(t)=\hat U^{-1}(t)=\hat U^{\star}(t)$, but, without using this information, could the expression of $\hat U^\dagger(t)$ be deduced from the expression above? Answer: Probably it is cleaner to do it by series. \begin{equation} \begin{split} U^\dagger(t)&=\left(\sum_{n=0}^\infty \frac{1}{n!}\left(\frac{-it}{\hbar} \right)^n H^n\right)^\dagger\\ &=\sum_{n=0}^\infty \frac{1}{n!}\left(\left(\frac{-it}{\hbar} \right)^n \right)^\dagger (H^n)^\dagger\\ &=\sum_{n=0}^\infty \frac{1}{n!}\left(\frac{it}{\hbar} \right)^n H^n\\ &=\exp(it H/\hbar), \end{split} \end{equation} since $H$ is Hermitian. Another possibility is to start with the Schrödinger equation, compute the adjoint, and finally derive and solve an equation for $U^\dagger$ provided that $\langle\Psi(t)| = \langle\Psi(t=0)|U^\dagger(t)$.
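The series result $\hat U^\dagger(t)=\exp(it\hat H/\hbar)$ can be checked numerically in the simplest non-trivial case. The C++ sketch below (my own example, not from the answer) takes $\hat H=\sigma_z$ with $\hbar=1$, so $\hat U(t)$ is diagonal with entries $e^{\mp it}$ and its adjoint is just the complex conjugate of each diagonal entry:

```cpp
#include <cassert>
#include <complex>

// Sanity check of U^dagger(t) = exp(+i t H / hbar) for H = sigma_z
// (eigenvalues +1 and -1), with hbar = 1.  U(t) = exp(-i t H) is then
// diagonal, and the adjoint of a diagonal matrix is the elementwise
// complex conjugate of its diagonal.
std::complex<double> U(int k, double t) {
    double eig = (k == 0) ? 1.0 : -1.0;   // k-th eigenvalue of sigma_z
    return std::exp(std::complex<double>(0.0, -t * eig));
}

std::complex<double> Udag(int k, double t) {
    return std::conj(U(k, t));            // adjoint of a diagonal matrix
}
```

One can then verify both unitarity, $U^\dagger U=1$, and the identity $U^\dagger(t)=U(-t)=\exp(+itH)$ entry by entry.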
{ "domain": "physics.stackexchange", "id": 71042, "tags": "quantum-mechanics, homework-and-exercises, operators, schroedinger-equation, time-evolution" }
General bytes to number converter (c++)
Question: I've written a bytes to number converter for cpp similar to the python int.from_bytes. It does not make any assumptions on: The endianness of the input buffer The endianness of the system The endianness consistency of the system ( = no assumption that all numeric types have the same endianness ) Additionally the source buffer can be smaller than the target. #include <cstdint> #include <cstring> #include <span> #include <stdexcept> namespace Binary { // Used to indicate the endianness of a type // Better readability than a simple bool enum class Endianness { Little, Big, }; namespace Int { // Base is the underlying type that will be used to store the eventual result template< typename Base > Base FromBytes( std::span< std::uint8_t > inBuffer, Binary::Endianness inEndianness ) { // Check if we have enough space if( sizeof( Base ) < inBuffer.size() ) { throw std::length_error( "unable to fit data in requested type" ); } // Determine endianness of the Base type on this system union { Base i; char c[ sizeof( Base ) ]; } test = { .i = 1 }; bool tmp = test.c[ sizeof( Base ) - 1 ] == '\1'; Binary::Endianness type_endianness = tmp ? Binary::Endianness::Big : Binary::Endianness::Little; // Always initialize your variables Base result = Base{ 0 }; std::uint8_t * destination = ( std::uint8_t * ) &result; // If we have big endian and we have a smaller buffer the first bytes should // remain zero, meaning we should start filling further back if( type_endianness == Binary::Endianness::Big ) { destination += sizeof( Base ) - inBuffer.size(); } if( type_endianness == inEndianness ) { // equal endianness => simple copy memcpy( destination, inBuffer.data(), inBuffer.size() ); } else { // differing endianness => copy in reverse for( size_t i = 0; i < inBuffer.size(); i+=1 ) { destination[ i ] = inBuffer[ inBuffer.size() - i - 1 ]; } } return result; } } } Are there any improvements I could make? Answer: I like the enum. Except that it is different from the one C++20 defines. 
We're missing some review context, like the range of use cases we're trying to address. This submission contains no unit tests or other demo code. It's unclear if the caller is likely to call into this repeatedly through an array loop. Perhaps "exotic" 24-bit quantities motivated this code. use constexpr Every call to FromBytes asks about native endianness. It's a nice piece of code, very clear, but we should cache the result as it won't change. Or better, find the result at compile time. Even better, dispense with it and rely on std::endian::native. Or write down the underlying requirements so we understand what systems are being targeted and what language / compiler variants are within scope. nit: Assigning tmp = test.c[0] == '\1'; seems a little more convenient. But perhaps you wanted the order for the ternary to be "big, little" for aesthetic reasons. Consider renaming tmp to is_big. Delete uninformative comments like "Always initialize your variables". I am glad to see that .size() is the first thing we check. Good. nit: I find the second form slightly easier to read. ... inBuffer[ inBuffer.size() - i - 1 ] ... inBuffer[ inBuffer.size() - 1 - i ] Why? I read it as "constant minus i" where I can reason about the constant location independent of the loop. Rather than having to puzzle out the numeric relationship between size and i, observe it is always positive, and then decrement to get to the zero-th element. document language version std::reverse_copy has been available since C++17, but we choose not to use it, unclear why. Using a standard routine might yield performance identical to a naïve loop. But maybe a vendor went to some trouble to vectorize it for your target processor, and it will go quicker. Use it for the same reason you prefer memcpy(). Tell us which language variants are acceptable for this project, write it down so future maintainers will know the rules. 
I was slightly surprised to not see ntohl mentioned, nor be64toh, but maybe {3, 5, 6, 7}-byte inBuffers are just as important for the use case we're addressing. Those standard functions should absolutely appear in your automated test suite. document the sign bit The brief review context explicitly says that we implement a well-known interface. We do support a corresponding endianness parameter. But interpretation of the source sign bit is left implicit. Presumably it matches the Base type. This should be explicitly written down. (Also, "Base" seems a slightly odd name as it suggests "Radix" which isn't relevant here. Maybe "NumericType"?) API design Imagine the caller is assigning a great many 16-bit integers via an array loop. I worry about whether we're giving the compiler a good chance at vectorizing such a bulk assignment. Consider offering a second method that moves K integers of identical size. Imagine the input numbers are already in native order and they have a standard {16, 32, 64}-bit size. Would it help your use case if this routine was given the flexibility to report no-op by returning address of input buffer, avoiding the memcpy() ? This code achieves its design goals. Before merging it to main it should be better documented and should be accompanied by unit tests. I would be happy to delegate or accept maintenance tasks on this codebase.
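To make a couple of the review's suggestions concrete, here is one possible sketch with the native-endianness probe cached behind a function-local static and the manual reverse loop replaced by std::reverse_copy. It is my own illustration, not a drop-in replacement: it stays within C++17 (hence std::vector rather than std::span), and the names are arbitrary.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

enum class Endianness { Little, Big };

// Probe native byte order once and cache it.  (Under C++20 this whole
// function could be replaced by a comparison against std::endian::native.)
Endianness NativeEndianness() {
    static const Endianness cached = [] {
        const std::uint16_t probe = 1;
        unsigned char first;
        std::memcpy(&first, &probe, 1);
        return first == 1 ? Endianness::Little : Endianness::Big;
    }();
    return cached;
}

template <typename NumericType>
NumericType FromBytes(const std::vector<std::uint8_t>& buf, Endianness e) {
    assert(buf.size() <= sizeof(NumericType));
    NumericType result{0};
    auto* dst = reinterpret_cast<std::uint8_t*>(&result);
    // On a big-endian host a short buffer fills the rear (low-order) bytes,
    // leaving the leading high-order bytes zero.
    if (NativeEndianness() == Endianness::Big)
        dst += sizeof(NumericType) - buf.size();
    if (NativeEndianness() == e)
        std::memcpy(dst, buf.data(), buf.size());           // same order
    else
        std::reverse_copy(buf.begin(), buf.end(), dst);     // opposite order
    return result;
}
```

The behaviour matches the original on both host byte orders, e.g. a 3-byte big-endian buffer {0x12, 0x34, 0x56} decodes to 0x123456 in a uint32_t.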
{ "domain": "codereview.stackexchange", "id": 44608, "tags": "c++, converting, byte" }
What is formed when hydrogen peroxide reacts with titanium?
Question: I just got night contact lenses, and you are supposed to use a "one-step" cleaning solution to clean them over a period of six hours. At the bottom of the lens case, there is a "titanium plate neutralizer", which causes the liquid to bubble. What is this gas? Edit: it was platinum, not titanium Answer: Hydrogen peroxide will damage your eye tissue; that's why there are enzymes in our cells to catalyze the conversion of $HOOH$ to water and oxygen gas. The enzymes are actually contained in peroxisomes (the name is self-explanatory).* The neutralizer serves the same function of catalyzing the otherwise relatively slow decomposition of $HOOH$ into water and oxygen gas; hydrogen peroxide will also decompose in the presence of light, and that's why bottles of hydrogen peroxide are opaque - generally black or brown (not transparent like bottles of witch hazel or isopropyl alcohol). I have seen platinum used as a catalyst in contact cleaning solutions but I suppose titanium can be used as well. *Edit: peroxisomes actually create $HOOH$ upon catabolism of cell by-products but can also break down its own toxic byproduct.
{ "domain": "chemistry.stackexchange", "id": 1187, "tags": "inorganic-chemistry, everyday-chemistry, cleaning" }
Effect on pressure for a decrease in temperature with respect to height
Question: If I were to have a decrease in temperature with height, $T = f(z)$, I was wondering if I could calculate the pressure like so: Could I simply combine $$ \frac{dP}{dz} = -\rho g, $$ with the ideal gas law: $P=\rho RT$ To get: $$ \frac{dP}{dz} = -\frac{P}{RT} g $$ Rearranged to: $$ \int{\frac{1}{P}dP} = -\frac{g}{R} \int \frac{1}{T} dz,$$ then we can say $ T = f(z) $, therefore: $$ \int{\frac{1}{P}dP} = -\frac{g}{R} \int \frac{1}{f(z)} dz$$ which leads to $$ \ln\left(\frac{P_2}{P_1}\right) = -\frac{g}{R} \int \frac{1}{f(z)} dz $$ then simply rearrange for the pressure $P_2$ I was wondering if my methodology is correct, and if so, what assumptions are to be made for this to be true. Answer: I was wondering if my methodology is correct, and if so what assumptions are to be made for this to be true. You assumed hydrostatic equilibrium. This is invalid for the uppermost reaches of the atmosphere, and is only approximately valid for the bulk of the atmosphere. This assumption does give a good overall picture of the atmosphere (up to the turbopause), but it misses key impacts such as weather. You factored the gas constant $R$ out of the integral as a constant. It's not constant. Hot, humid air has a higher specific gas constant than does dry air. Simplifying the integral by treating the gas constant as constant misses some physics, but it does give a reasonable picture, and a better picture than obtained by treating temperature as constant as well. The gravitational acceleration $g$ also is not constant. It decreases with altitude, so strictly speaking, you shouldn't have factored $g$ out of the integral. This is a tiny, tiny effect.
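For the common special case of a linear lapse rate, $f(z) = T_0 - \Gamma z$, the integral in the question evaluates in closed form: $\int_0^z dz'/f(z') = -\frac{1}{\Gamma}\ln(T_2/T_1)$, so $P_2 = P_1\,(T_2/T_1)^{g/(R\Gamma)}$. The C++ sketch below illustrates this; the ISA-like constants ($g = 9.80665\ \mathrm{m/s^2}$, $R = 287.05\ \mathrm{J/(kg\,K)}$, $\Gamma = 0.0065\ \mathrm{K/m}$) are illustrative assumptions, not part of the question.

```cpp
#include <cassert>
#include <cmath>

// Pressure from the hydrostatic integral with a linear lapse rate
// T(z) = T0 - Gamma * z:
//   ln(P2/P1) = -(g/R) * Integral dz / T(z) = (g / (R * Gamma)) * ln(T2/T1)
//   =>  P2 = P1 * (T2 / T1)^(g / (R * Gamma))
double pressureAtAltitude(double P1, double T0, double Gamma, double z,
                          double g = 9.80665, double R = 287.05) {
    const double T2 = T0 - Gamma * z;              // temperature at height z
    return P1 * std::pow(T2 / T0, g / (R * Gamma));
}
```

With sea-level values $P_1 = 101325$ Pa, $T_0 = 288.15$ K, this gives roughly half the sea-level pressure at $z = 5000$ m, consistent with the standard-atmosphere picture (subject to the caveats about $R$ and $g$ in the answer).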
{ "domain": "physics.stackexchange", "id": 23542, "tags": "homework-and-exercises, thermodynamics, fluid-statics" }