Negation of 8-bit hexadecimal
Question: I am looking for a mathematical formula / algorithm to find the negation of an 8-bit hexadecimal number without having to expand it into binary form. E.g., 0000BDDA -> 48602, FFFF4226 -> -48602. I need to get from 0000BDDA to FFFF4226 without two's complement / expanding it to binary. I've been cracking my head at it but to no avail. Any ideas? Answer: Use the same formula as you do for two's complement. First, compute the complement of the number by complementing each hex digit. Then add 1. Complementing works using the following table: $$ 0 \leftrightarrow F, 1 \leftrightarrow E, 2 \leftrightarrow D, 3 \leftrightarrow C, 4 \leftrightarrow B, 5 \leftrightarrow A, 6 \leftrightarrow 9, 7 \leftrightarrow 8. $$ Example: starting with 0000BDDA, we first get FFFF4225, and after adding 1, we get FFFF4226. Why does this work? Exercise.
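The digit-complement-then-add-one recipe is easy to sanity-check in a few lines of Python (the helper name `hex_negate` is mine, not from the answer):

```python
def hex_negate(s: str) -> str:
    """Two's-complement negation of a fixed-width hex string, done
    digit by digit: replace each digit d with F - d, then add 1."""
    width = len(s)
    # Digit-wise complement, per the table in the answer (0<->F, 1<->E, ...).
    complemented = "".join(format(15 - int(d, 16), "X") for d in s)
    # Add 1, wrapping around at the fixed width.
    negated = (int(complemented, 16) + 1) % (16 ** width)
    return format(negated, "0{}X".format(width))

print(hex_negate("0000BDDA"))  # FFFF4226, matching the worked example
print(hex_negate("FFFF4226"))  # back to 0000BDDA
```

The `int` round-trip is only a convenience for handling the carry when adding 1; the digit table itself is what replaces the binary expansion.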
{ "domain": "cs.stackexchange", "id": 5333, "tags": "discrete-mathematics" }
Is there a general way of solving the Maxwell equations?
Question: Is there some method for solving differential equations that can be applied to the Maxwell equations to always get a solution for the electromagnetic field, even a numerical one, regardless of the specifics of the problem? Say you want to design a series of steps that you can hand to a student so that they can obtain E and B for any problem. The instructions don't have to be simple or understandable to someone without the proper background, but is it possible? Answer: You need to be more precise about exactly what problem you're solving and what the inputs are. But if you're considering the general problem of what electromagnetic fields are produced by a given configuration of electric charge and current over spacetime, then the general solution is given by Jefimenko's equations.
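For reference (standard SI form; worth double-checking signs against a textbook), Jefimenko's equations express the fields directly in terms of the charge density $\rho$ and current density $\mathbf{J}$, evaluated at the retarded time $t_r = t - R/c$ with $R = |\mathbf{r}-\mathbf{r}'|$: $$\mathbf{E}(\mathbf{r},t)=\frac{1}{4\pi\varepsilon_0}\int\left[\frac{\hat{\mathbf{R}}}{R^2}\,\rho(\mathbf{r}',t_r)+\frac{\hat{\mathbf{R}}}{cR}\,\frac{\partial\rho(\mathbf{r}',t_r)}{\partial t}-\frac{1}{c^2R}\,\frac{\partial\mathbf{J}(\mathbf{r}',t_r)}{\partial t}\right]\mathrm{d}^3r', \qquad \mathbf{B}(\mathbf{r},t)=\frac{\mu_0}{4\pi}\int\left[\frac{\mathbf{J}(\mathbf{r}',t_r)}{R^2}+\frac{1}{cR}\,\frac{\partial\mathbf{J}(\mathbf{r}',t_r)}{\partial t}\right]\times\hat{\mathbf{R}}\,\mathrm{d}^3r'.$$ These are "recipe-ready" in the sense the question asks about: given $\rho$ and $\mathbf{J}$ over spacetime, the fields follow by direct (retarded) integration.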
{ "domain": "physics.stackexchange", "id": 61685, "tags": "electromagnetism, maxwell-equations, differential-equations" }
What information about a meteor's trajectory, size, or height can be derived from a single location?
Question: If one sees a meteor, is there any way to get even a rough approximation of its height, entry angle, size, or other characteristic without triangulation from another position? If it appeared as a point source and got uniformly brighter, you'd know to take a step aside. And if it appeared on one horizon, traveled overhead, and disappeared over the other, you'd be able to say "Well, that was a shallow angle of attack." But short of those scenarios (which are, probably for the best, rare), is there anything? Answer: If you see it staying in one spot you can infer that it is moving directly along your line of sight, as you say. But in the more likely event that you see it move across the sky you can determine that it is moving in a given plane only. Unless you have some independent way of judging its size or distance you can't tell any more. This actually happens all the time when a bug flies in front of someone's camera and they think they've seen an interstellar visitor.
{ "domain": "physics.stackexchange", "id": 7893, "tags": "astronomy, meteors" }
Does the k-fold FORRELATION problem lie in BQP or $BQP^O$?
Question: It is known that Simon's problem lies in $BQP^O$ (it is an oracular problem). It even proves that there exists an oracle $O$ with $BPP^O\neq BQP^O$; that is, it separates the classes in the oracle/query model of computation. Meanwhile, the factoring problem lies in $BQP$ (non-oracular). I reviewed the FORRELATION problem in the linked paper. An oracle for the function $f$ is provided as part of the problem. The k-fold FORRELATION problem is shown (in the paper) to be (promise) BQP-complete. I find the definition mentioned in section 1.1.3 (page 4) to be oracular. My query is: Is the k-fold FORRELATION problem in BQP or $BQP^{O'}$? Or am I missing some detail/context? Answer: There is a subtle difference in the terminology used in the paper; the wording of Theorem 5 makes it clearer. If you have been given an explicit construction (say, as a circuit) of the functions $f_i$, then k-fold FORRELATION is a promise-BQP-complete problem. Otherwise, mere black-box access to the $f_i$ does not even make it eligible to be called a BQP problem.
{ "domain": "quantumcomputing.stackexchange", "id": 5551, "tags": "complexity-theory, oracles, bqp" }
Classifier optimization
Question: Suppose we have a set E of entities. Each entity is described by a set P of binary properties (i.e. each element e of E has a defined true/false value for each element p of P), with |P| >> |E|. We now want to select a subset of fixed size (e.g., 10) of P that will enable us to distinguish the elements in E as accurately as possible. In extensions of this basic scenario, the properties could assume numeric values. Has this problem been studied? Under what name? A practical example: there is a set of 100 bacterial species that are characterized for the presence or absence of 1000 genes. We now want to select a subset of 10 genes that, upon typing of a novel sample, will tell us which bacterial species that sample represents. Answer: If you need the optimal answer, the best solution I know is exhaustive search: try all ${|P| \choose 10}$ different subsets, and see which is best. The running time of this will be $O(|P|^{10})$, though, which is probably too high to be feasible. Given this, you will probably need to accept solutions that are heuristics or not guaranteed to give an absolutely optimal answer. One standard approach is to use a greedy algorithm: you iteratively build up a set of properties, one by one. At each step, if your set is currently $S$, you choose the property $p$ that makes $S \cup \{p\}$ as accurate as possible, and then add $p$ to $S$. To turn this into a full algorithm, you need to decide how you want to measure/evaluate each candidate $S \cup \{p\}$. For comparison, you can also look at the ID3 algorithm. Rather than trying to pick a set of size 10, it tries to pick a decision tree of depth 10, so it's not solving exactly the same problem: but it is similar. The metric used at each step to evaluate the candidates is the information gain; you could do the same, but for a set rather than a tree.
In the machine learning literature, there is a lot of work on feature selection: given a large number of possible features, the goal is to pick a subset of the features that makes the classifier as accurate as possible. You could explore that literature and see if any of those methods are effective in your domain.
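The greedy procedure described above can be made concrete. Here is a minimal Python sketch, using "number of entity pairs told apart" as the accuracy measure; the measure and all names are illustrative choices, not prescribed by the answer:

```python
from itertools import combinations

def pairs_distinguished(entities, props):
    """Score a property subset by how many entity pairs it tells apart:
    two entities are distinguished if they differ on at least one chosen property."""
    return sum(
        1
        for a, b in combinations(entities, 2)
        if any(a[p] != b[p] for p in props)
    )

def greedy_select(entities, all_props, k):
    """Iteratively add the property that most improves the score,
    as in the greedy algorithm from the answer."""
    chosen = []
    for _ in range(k):
        remaining = [p for p in all_props if p not in chosen]
        if not remaining:
            break
        best = max(remaining, key=lambda p: pairs_distinguished(entities, chosen + [p]))
        chosen.append(best)
    return chosen

# Toy data: 4 entities described by 5 binary properties.
entities = [
    {"p0": 0, "p1": 0, "p2": 1, "p3": 0, "p4": 1},
    {"p0": 0, "p1": 1, "p2": 1, "p3": 0, "p4": 0},
    {"p0": 1, "p1": 0, "p2": 0, "p3": 0, "p4": 1},
    {"p0": 1, "p1": 1, "p2": 0, "p3": 0, "p4": 0},
]
print(greedy_select(entities, ["p0", "p1", "p2", "p3", "p4"], 2))
```

Note that greedy selection carries no optimality guarantee; it can be trapped by properties that are only useful in combination.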
{ "domain": "cs.stackexchange", "id": 5829, "tags": "optimization" }
Shouldn't the Sampling Theorem imply that there should be no information loss at all after a signal is processed?
Question: I am very, very new to signal processing (I only started a few days ago), so please bear with me, because I may be missing the big picture. Consider an arbitrary signal $x(t)$ with varying frequency to be processed by a computer. Given the right sampling rate w.r.t. the frequency, shouldn't the Sampling Theorem essentially imply that no information will be lost at all after the sampled signal is converted back to its continuous form, since there will be a sufficient number of discrete points for which the signal is described? Answer: For no information to be lost on conversion back to continuous form, the signal would first need to be perfectly band-limited, and you would need an ideal reconstruction filter. A perfectly band-limited signal is infinite in extent. Since you want this arbitrary signal to be processed by a computer, your computer would need infinite memory. You would also have to wait an infinite amount of time for your perfect reconstruction filter to settle. These finite sampling time and filter length problems also completely ignore other potential lossy issues, such as quantization, sampling clock jitter, etc. One might be able to process an infinite signal that can be represented in certain closed symbolic forms within finite time and resources, but there does not seem to be a method to convert an arbitrary signal into such a form. Back in the real world (stuff you can buy or make), one would normally accept an information loss from imperfect band-limiting, finite filter length, jitter, etc. that is around or below the quantization and numeric noise floors. This allows the processing to happen using an amount of RAM that one can afford, and to hopefully finish within one's lifetime, thus leading to information loss and imperfect reconstruction.
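The finite-length caveat can be seen numerically. In this small pure-Python sketch (my own toy setup, not from the answer), a band-limited sine sampled above the Nyquist rate is reconstructed by Whittaker-Shannon sinc interpolation, but because only finitely many samples enter the sum, the reconstruction error is small rather than exactly zero:

```python
import math

def sinc(x):
    """Normalized sinc, the ideal reconstruction kernel."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

f, fs = 3.0, 10.0          # 3 Hz sine, sampled at 10 Hz (above Nyquist for 3 Hz)
T = 1.0 / fs
N = 2000                   # finite window; the ideal sum needs all integer n
samples = [math.sin(2 * math.pi * f * n * T) for n in range(-N, N + 1)]

def reconstruct(t):
    """Truncated Whittaker-Shannon interpolation from 2N+1 samples."""
    return sum(
        s * sinc((t - (i - N) * T) / T) for i, s in enumerate(samples)
    )

t = 0.0123                 # an off-grid instant
err = abs(reconstruct(t) - math.sin(2 * math.pi * f * t))
print("truncation error:", err)  # small, but not zero
```

Growing N shrinks the error only slowly, since the sinc tails decay like 1/n; this is the "infinite extent" point of the answer in miniature.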
{ "domain": "dsp.stackexchange", "id": 2322, "tags": "discrete-signals, sampling, digital" }
Quasigroups, congruences and recognizable subsets
Question: My question refers to the draft of Mathematical Foundations of Automata Theory, IV.2.1 (pages 89ff in the pdf). I will repeat everything necessary nevertheless: Let $M,N$ be monoids and $\varphi: M \rightarrow N $ a monoid morphism. We say that a subset $L$ of $M$ is recognizable by $\varphi$ if there is a subset $P$ of $N$ such that $L = \varphi^{-1}(P)$. As is known, the rational languages are precisely the recognizable subsets of $\Sigma^\ast$. Furthermore, we define an equivalence relation $R_\varphi$ by $u R_\varphi v :\Leftrightarrow \varphi(u)=\varphi(v)$. This relation is a congruence relation, that is $\forall s,t,u,v \in M:s R_\varphi t \Rightarrow usv~R_\varphi~utv$. We say that a congruence relation $R$ saturates $L$ if for all $u \in L$, $uRv$ implies $v \in L$. Then in the above document, the following proposition (IV.2.2, page 90) is stated: Let $\varphi : M \rightarrow N$ be a monoid morphism and let $L$ be a subset of $M$. The following conditions are equivalent: (1) $L$ is recognised by $\varphi$ (2) $L$ is saturated by $R_\varphi$ (3) $\varphi^{-1}(\varphi(L))=L$ Proof. (1) implies (2). If $L$ is recognised by $\varphi$, then $L=\varphi^{-1}(P)$ for some subset $P$ of $N$. Thus if $x \in L$ and $x R_\varphi y$, one has $\varphi(x) \in P$ and since $\varphi(x)=\varphi(y), y \in \varphi^{-1}(P)=L$. (2) implies (3). If $x \in \varphi^{-1}(\varphi(L))$, there is $y \in L$ such that $\varphi(x) = \varphi(y)$, that is $x R_\varphi y$. Thus, $x \in L$, and $\varphi^{-1}(\varphi(L)) \subseteq L$ follows. "$\supseteq$" is trivial. (3) implies (1). Let $P:=\varphi(L)$, then $\varphi^{-1}(P)=L$. I want to weaken the assumptions made in the proposition. Namely, assume that the $N$ in the above definitions is a proper groupoid (in fact, I need to deal with loops), that is, associativity and identity are lost.
Further, $\varphi$ need not be a morphism (the relation $R_\varphi$, defined in the same way as above, a priori need not be a congruence anymore), and we restate (1) accordingly (as, formally, the notion of a "recognizable subset" is not defined for groupoids). Then my questions are: (1) Is $R_\varphi$ still a congruence if we demand $\varphi$ to be a morphism of groupoids (loops)? (2) Does the proposition still hold? Does it if we assume that $R_\varphi$ is not a congruence? (3) What other sources (preferably monographs) are there that you recommend as comprehensive introductions to algebraic automata theory, esp. concerned with the role of monoids, semigroups (and maybe more general structures such as groupoids) and their connection with automata and languages? Personally, I think they are both true: as for (2), neither the morphism properties, nor associativity, nor congruences seem to be used in the proof. And for (1), I don't see where one would need associativity to prove it. But it may well be that I overlooked something (which is why I ask...). Answer: (1) For any function $\varphi:E\to F$, the relation $R_\varphi$ as you define it is always an equivalence relation on elements of $E$. But the notion of congruence depends on the laws you have on your sets, and $R_\varphi$ will be a congruence for the operations that are preserved by $\varphi$. Notice that your formulation with $usv$ is no longer well-defined without associativity: you have to state it separately for $us$ and $sv$. (2) Let us weaken the hypotheses even more. Let $\varphi:E\to F$ be any function, without any structure assumed on $E$ and $F$. Then if we take the definitions you wrote for "recognised" and "saturated", the 3 propositions are still equivalent, as your proofs still work.
The thing you will lose by considering arbitrary sets (or groupoids or loops) is that you won't carry over the nice monoid structure of $\Sigma^*$, so you lose almost all the tools of regular language theory, like the Myhill-Nerode equivalence, Green's relations, and so on... For (3), I think Jean-Eric Pin's webpage (including the document you mention) contains good surveys of this field.
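The answer's claim in (2), that the three conditions stay equivalent for an arbitrary function between bare sets, can be sanity-checked exhaustively on small examples. A throwaway Python sketch (all names are mine):

```python
from itertools import product

def saturated(phi, E, L):
    """Condition (2): x in L and phi(x) = phi(y) imply y in L."""
    return all(y in L for x in L for y in E if phi[x] == phi[y])

def preimage_of_image(phi, E, L):
    """phi^{-1}(phi(L)); condition (3) asks that this equal L.
    Condition (1) is equivalent to (3) for free: if L = phi^{-1}(P)
    for some P, one may always take P = phi(L) as the witness."""
    image = {phi[x] for x in L}
    return {x for x in E if phi[x] in image}

E, F = [0, 1, 2, 3], ["a", "b"]
cases = 0
# Enumerate every function phi: E -> F and every subset L of E.
for values in product(F, repeat=len(E)):
    phi = dict(zip(E, values))
    for mask in range(2 ** len(E)):
        L = {x for x in E if mask & (1 << x)}
        assert saturated(phi, E, L) == (preimage_of_image(phi, E, L) == L)
        cases += 1
print("(2) <=> (3) verified in", cases, "cases")
```

This is of course no proof, only a check that the structure-free equivalence holds on every function from a 4-element set to a 2-element set.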
{ "domain": "cs.stackexchange", "id": 1479, "tags": "formal-languages, reference-request, check-my-proof" }
Performing CRUD between 4 tables
Question: This code works perfectly fine but I have repeated many lines of code in this controller. I need some suggestions on how to optimize this code and remove repetition, with focus on the create and update methods. <?php namespace App\Http\Controllers\PrivateJob; use Illuminate\Http\Request; use App\Http\Controllers\Controller; use App\Models\catagory; use App\Models\city; use App\Models\province; use App\Models\sector; use App\Models\private_jobadb; use App\Models\privatejobcity; use App\Http\Requests\storePrivateNewspaperFormValidation; use App\Models\catagory_private_jobadb; class privateJobController extends Controller { public function dropDownData() { $newspaper['catagory'] = catagory::all(); $newspaper['province'] = province::all(); $newspaper['sector'] = sector::all(); $newspaper['city'] = city::all(); return $newspaper; } /** * Display a listing of the resource. * * @return \Illuminate\Http\Response */ public function index() { $private_job = private_jobadb::all(); return view('admin.private_jobs.all_priate_jobs', compact('private_job')); } /** * Show the form for creating a new resource. * * @return \Illuminate\Http\Response */ public function create() { $newspaper = $this->dropDownData(); return view('admin.private_jobs.create_Private_jobs', compact('newspaper')); } /** * Store a newly created resource in storage. 
* * @param \Illuminate\Http\Request $request * @return \Illuminate\Http\Response */ public function store(storePrivateNewspaperFormValidation $request) { $values = $request->input(); foreach ($values as $key => $result) { if ($key == 'city_id') { $city = array($key => $result); } elseif ($key == 'catagory_id') { $catagory = array($key => $result); } } unset($values['city_id']); unset($values['catagory_id']); if ($request->hasFile('image')) { $request->file('image'); $filename = $request->image->getClientOriginalName(); $originalfile['image'] = $request->image->storeAs('public/newpaper_jobs', $filename); $values['slug'] = str_slug($values['company_name'] . '-' . rand(1, 1000), '-'); $data = array_merge($values, $originalfile); $private_job_ad = new private_jobadb($data); $private_job_ad->save(); $insertedId = $private_job_ad->id; foreach ($city['city_id'] as $value) { $privatejobcity = new privatejobcity(); $privatejobcity->fill(['city_id' => $value, 'private_jobabd_id' => $insertedId]); $privatejobcity->save(); } foreach ($catagory['catagory_id'] as $value) { $catagory_private_jobadb = new catagory_private_jobadb(); $catagory_private_jobadb->fill(['catagory_id' => $value, 'private_jobads_id' => $insertedId]); $catagory_private_jobadb->save(); } //a flash message shold be shown that data successfully created flash('Data successfully added')->success(); return back(); } else $values['slug'] = str_slug($values['company_name'] . '-' . 
rand(1, 1000), '-'); $private_job_ad = new private_jobadb($values); $private_job_ad->save(); $insertedId = $private_job_ad->id; foreach ($city['city_id'] as $value) { $privatejobcity = new privatejobcity(); $privatejobcity->fill(['city_id' => $value, 'private_jobabd_id' => $insertedId]); $privatejobcity->save(); } foreach ($catagory['catagory_id'] as $value) { $catagory_private_jobadb = new catagory_private_jobadb(); $catagory_private_jobadb->fill(['catagory_id' => $value, 'private_jobads_id' => $insertedId]); $catagory_private_jobadb->save(); } //a flash message should be shown that image is't selected flash('Data successfully added')->success(); return back(); } public function show($id) { // } /** * Show the form for editing the specified resource. * * @param int $id * @return \Illuminate\Http\Response */ public function edit($id) { $data = $this->dropDownData(); $private_job = private_jobadb::with('cities', 'catagory')->where('id', $id)->get()->first(); $result = $private_job->cities->map(function ($data) { return $data['id']; })->all(); $single_catagory = $private_job->catagory->map(function ($data) { return $data['id']; })->all(); return view('admin.private_jobs.edit_private_job', compact('data', 'private_job', 'result', 'single_catagory')); } /** * Update the specified resource in storage. 
* * @param \Illuminate\Http\Request $request * @param int $id * @return \Illuminate\Http\Response */ public function update(Request $request, $id) { $values = $request->input(); foreach ($values as $key => $result) { if ($key == 'city_id') { $city = array($key => $result); } elseif ($key == 'catagory_id') { $catagory = array($key => $result); } } unset($values['city_id']); unset($values['catagory_id']); if ($request->hasFile('image')) { $request->file('image'); $filename = $request->image->getClientOriginalName(); $originalfile['image'] = $request->image->storeAs('public/private_job', $filename); $data = array_merge($values, $originalfile); $updated_this_id = private_jobadb::findOrFail($id); $updated_this_id->fill($data); $updated_this_id->save(); catagory_private_jobadb::where('private_jobads_id', $id)->delete(); foreach ($catagory['catagory_id'] as $value) { $catagory_private_jobadb = new catagory_private_jobadb(); $catagory_private_jobadb->fill(['private_jobads_id' => $id, 'catagory_id' => $value]); $catagory_private_jobadb->save(); } privatejobcity::where('private_jobabd_id', $id)->delete(); foreach ($city['city_id'] as $value) { $privatejobcity = new privatejobcity(); $privatejobcity->fill(['city_id' => $value, 'private_jobabd_id' => $id]); $privatejobcity->save(); } //a flash message shold be shown that data successfully updated flash('Data successfully updated')->success(); return back(); } else //a flash message should be shown that data is updated without image flash('Data successfully updated without new image old image willbe used')->success(); $updated_this_id = private_jobadb::findOrFail($id); $updated_this_id->fill($values); $updated_this_id->save(); return back(); } /** * Remove the specified resource from storage. 
* * @param int $id * @return \Illuminate\Http\Response */ public function destroy($id) { $deletedRows = private_jobadb::where('id', $id)->delete(); if ($deletedRows) { flash('Data successfully added')->success(); return back(); } else { flash('Data not deleted')->error(); redirect()->back(); } } } Answer: I need some suggestions on how to optimize this code and remove repetition, with focus on the create and update methods. If you aren't familiar with the principle Don't repeat yourself I recommend reading about it: The Don’t Repeat Yourself (DRY) principle states that duplication in logic should be eliminated via abstraction; Duplication is Waste Adding additional, unnecessary code to a codebase increases the amount of work required to extend and maintain the software in the future. Duplicate code adds to technical debt. Whether the duplication stems from Copy Paste Programming or poor understanding of how to apply abstraction, it decreases the quality of the code. Duplication in process is also waste if it can be automated. Manual testing, manual build and integration processes, etc. should all be eliminated whenever possible through the use of automation.1 Looking at the store and update methods I see a lot of redundancy. I would abstract out code to save related data - e.g. private function _savePrivateJobCities($cities, $id) { foreach ($cities as $value) { $privatejobcity = new privatejobcity(); $privatejobcity->fill(['city_id' => $value, 'private_jobabd_id' => $id]); $privatejobcity->save(); } } private function _saveCatagoryPrivateJobadbAssociations($catagoryIds, $jobadbId) { foreach ($catagoryIds as $value) { $catagory_private_jobadb = new catagory_private_jobadb(); $catagory_private_jobadb->fill(['catagory_id' => $value, 'private_jobads_id' => $jobadbId]); $catagory_private_jobadb->save(); } } Then those methods can be called in the update and store methods. 
Those methods also begin with the same 15 lines (taking the result of calling $request->input(), saving the city_id and catagory_id values in separate variables and then calling unset() on those elements). Perhaps the goal is to avoid passing those elements to the call to ->fill() on the model variable. I tested passing excess fields to ->fill() and saw no error. But to ensure no excess fields are passed, one could get the list of fillable fields from the model using ->getFillable(), then find the intersection using array_intersect_key() and array_flip(). $fillValues = array_intersect_key($values, array_flip($updated_this_id->getFillable())); $updated_this_id->fill($fillValues); Also - in update(), there are no curly brackets following the else... Is the goal to only call flash() in the else case, or that plus updating by $id? } else //a flash message should be shown that data is updated without image flash('Data successfully updated without new image old image willbe used')->success(); $updated_this_id = private_jobadb::findOrFail($id); Presuming there should be curly braces around the update lines as well, that code can be abstracted to another method as well: private function _updatePrivatejobadb($values, $id) { $updated_this_id = private_jobadb::findOrFail($id); $updated_this_id->fill($values); $updated_this_id->save(); } Then update() can be simplified as follows: /** * Update the specified resource in storage. 
* * @param \Illuminate\Http\Request $request * @param int $id * @return \Illuminate\Http\Response */ public function update(Request $request, $id) { $values = $request->input(); if ($request->hasFile('image')) { $request->file('image'); $filename = $request->image->getClientOriginalName(); $originalfile['image'] = $request->image->storeAs('public/private_job', $filename); $data = array_merge($values, $originalfile); $this->_updatePrivatejobadb($data, $id); catagory_private_jobadb::where('private_jobads_id', $id)->delete(); $this->_saveCatagoryPrivateJobadbAssociations($values['catagory_id'], $id); privatejobcity::where('private_jobabd_id', $id)->delete(); $this->_savePrivateJobCities($values['city_id'], $id); //a flash message should be shown that data successfully updated flash('Data successfully updated')->success(); return back(); } else { //a flash message should be shown that data is updated without a new image flash('Data successfully updated without new image; old image will be used')->success(); $this->_updatePrivatejobadb($values, $id); return back(); } } The store method has a lot of duplicate code in the two if...else blocks. The else after the extra code to store the image could be removed. Then it can be simplified like below: /** * Store a newly created resource in storage. * * @param \Illuminate\Http\Request $request * @return \Illuminate\Http\Response */ public function store(storePrivateNewspaperFormValidation $request) { $values = $request->input(); if ($request->hasFile('image')) { $request->file('image'); $filename = $request->image->getClientOriginalName(); $originalfile['image'] = $request->image->storeAs('public/newpaper_jobs', $filename); $values = array_merge($values, $originalfile); } $values['slug'] = str_slug($values['company_name'] . '-' . 
rand(1, 1000), '-'); $private_job_ad = new private_jobadb($values); $private_job_ad->save(); $insertedId = $private_job_ad->id; $this->_savePrivateJobCities($values['city_id'], $insertedId); $this->_saveCatagoryPrivateJobadbAssociations($values['catagory_id'], $insertedId); //a flash message should be shown that the image isn't selected flash('Data successfully added')->success(); return back(); } 1 http://deviq.com/don-t-repeat-yourself/
{ "domain": "codereview.stackexchange", "id": 27377, "tags": "php, object-oriented, laravel, controller" }
How does inhalation work?
Question: In school, we learn that during inhalation, the diaphragm expands, causing air to get sucked into our lungs. You can feel this suction by putting your hand over your mouth while inhaling. Why is that? Does the expanded capacity of the lungs cause the air outside my body to rush into my body to, shall we say, keep the lungs full? Answer: You are correct that your chest muscles are in fact pulling the lungs "open," which creates a pressure differential and draws air into the lungs. When the muscles relax, the chest cavity collapses to its original state, expelling the air (not 100% of it!). You may have heard of a "collapsed lung" injury. What happens there is that the lung is ripped loose from the surrounding muscles, and thus cannot be reinflated.
{ "domain": "physics.stackexchange", "id": 26220, "tags": "thermodynamics, fluid-dynamics, pressure, air" }
Function to find the kth match (2)
Question: This code comes from suggestions in the comments here. I will still write it like the first time, so you do not need to read the old question. I created a UDF some time ago and have used it pretty often since then in different ways (but mainly to compare a "history" of data, like having different filters at the same time). While there is no need to change anything at this state, I'd like to get some suggestions on how to improve it without the loss of any functionality. To make it short, it is a LOOKUP function which goes for the kth match and returns a reference. Public Function LOOKUPK(lookup_value As String, lookup_vector As Range, Optional result_vector As Range, _ Optional count_num As Long = 1, Optional case_sens As Boolean) As Variant 'Application.Volatile 'if you encounter errors remove the first >'< LOOKUPK = CVErr(2023) If count_num - 1 <> Abs(Round(count_num - 1)) Then Exit Function ' no natural Number or 0 => exit function If lookup_vector.Areas.Count > 1 Then Set lookup_vector = lookup_vector.Areas(1) 'only first area to work with Set lookup_vector = Intersect(lookup_vector.Parent.UsedRange, lookup_vector) 'skip ranges for speed If result_vector Is Nothing Then 'no output => make one If lookup_vector.Rows.Count = 1 Xor lookup_vector.Columns.Count = 1 Then 'only 1 row/column + no output = inputrng = outputrng Set result_vector = lookup_vector ElseIf lookup_vector.Rows.Count > 2 And lookup_vector.Columns.Count = 2 Then '2 columns + >2 rows => split for vlookupk-mode Set result_vector = lookup_vector.Columns(2).Cells Set lookup_vector = lookup_vector.Columns(1).Cells ElseIf lookup_vector.Rows.Count = 2 And lookup_vector.Columns.Count > 2 Then '>2 columns + 2 rows => split for hlookupk-mode Set result_vector = lookup_vector.Rows(2).Cells Set lookup_vector = lookup_vector.Rows(1).Cells Else Exit Function 'not supported range-size => exit function End If Else 'got output => check for everything If result_vector.Areas.Count > 1 Then Set result_vector = result_vector.Areas(1) 
'only first area to work with Set result_vector = Intersect(result_vector.Parent.UsedRange, result_vector) 'skip ranges for speed If Not (result_vector.Columns.Count = 1 Xor result_vector.Rows.Count = 1) Then Exit Function 'not supported range-size => exit function End If If Not (lookup_vector.Columns.Count = 1 Xor lookup_vector.Rows.Count = 1) Then Exit Function 'not supported range-size => exit function If Not case_sens Then lookup_value = LCase(lookup_value) 'case doesn't matter => make it *lower* Dim cell_count As Long 'for count in For Each Dim cell_value As Variant 'the value in For Each For Each cell_value In lookup_vector.Value cell_count = cell_count + 1 If case_sens Then 'case does matter - check directly If cell_value Like lookup_value Then count_num = count_num - 1 Else 'case doesn't matter - check lower only If LCase(cell_value) Like lookup_value Then count_num = count_num - 1 End If If count_num = 0 Then Exit For 'item found - skip future loops Next If count_num = 0 Then 'only return something if desired match was found If result_vector.Columns.Count = 1 Then 'only 1 column => select row Set LOOKUPK = result_vector(cell_count, 1) Else 'only 1 row => select column Set LOOKUPK = result_vector(1, cell_count) End If End If End Function How to use it: LOOKUPK(lookup_value,lookup_vector,[result_vector],[count_num],[case_sens]) lookup_value (required) The value to search for. lookup_value can be used with wildcards like *. lookup_vector (required) The range to look in. It needs to contain only one row or column if the result_vector is submitted. If no result_vector is set and the lookup_vector contains only 1 row/column, the result_vector will be set to the lookup_vector. If no result_vector is set and the lookup_vector contains 2 columns while holding more than 2 rows, the first column will be set as the lookup_vector while the second column will become the result_vector (and vice versa). Having a range of 2x2 will result in an error. 
result_vector (optional) The range where the value to output is located. Needs to contain only one row or column. Does not need to be the same type (row/column) as the lookup_vector. count_num (optional) Indicates the kth match to work with. If not submitted, the first occurrence will be used. case_sens (optional) If set to TRUE, the search will be case sensitive. Edit: There was a big failure in the code that I didn't notice until now: it compared with = instead of Like. Really sorry. :( Answer: What's special about 2023? CVErr(2023) Introduce a constant so Mr(s). Maintainer knows what this error code means. This is a lot of logic to parse for something so simple. If count_num - 1 <> Abs(Round(count_num - 1)) Then Exit Function ' no natural Number or 0 => exit function Extract a private method. If IsNegative(count_num) Then Exit Function Note that now the comment is superfluous and can be removed. Often, comments are a sign that we can express the code more clearly by creating a new abstraction. You're accessing lookup_vector.Rows.Count and lookup_vector.Columns.Count more than often enough to extract a well-named variable. Instead of simply exiting when something is unsupported, consider returning an error. It's not always a good idea for UDFs, but can be a great idea. It depends on your use case. Instead of lower-casing strings for case-insensitive comparison, use the StrComp function. It's theoretically faster. (Theoretically; I've not benchmarked it.)
{ "domain": "codereview.stackexchange", "id": 17510, "tags": "performance, vba, excel" }
Why do position operators in orthogonal directions commute?
Question: In three dimensions, we have $\hat x$, $\hat y$, $\hat z$ as the position operators in the three orthogonal directions. If the components of angular momentum don't commute, why must these all commute? I can't seem to find an answer elsewhere. For example, this states that the "coordinate operators clearly commute" without explanation. Is this an experimental fact or a postulate or something else? Answer: This is an assumption that seems to be borne out by the experimental evidence thus far. Non-commutative quantum mechanics is a speculative theory that introduces a degree of non-commutativity between the components of the position operator. This introduces a sort of minimum length scale into the theory, beyond which it is not possible to localize a particle. This is a generic prediction of a certain perspective on quantum theories of gravity. In these theories, it is expected that probing particles at high enough energies will lead to the postulated non-commutativity.
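As a concrete illustration of why the standard assumption is natural (my own toy example, not from the answer): in the position representation, $\hat x$ and $\hat y$ both act by multiplication by a coordinate, so on any discretization they are diagonal matrices in the position basis and trivially commute:

```python
# Toy illustration: on a grid, x and y act by multiplication by the
# coordinate, i.e. as diagonal matrices in the position basis, so
# their commutator vanishes identically.
xs = [-1.0, 0.0, 1.0]
ys = [-1.0, 0.0, 1.0]

# Basis states |x_i, y_j>, flattened; X and Y are diagonal.
points = [(x, y) for x in xs for y in ys]
n = len(points)
X = [[points[i][0] if i == j else 0.0 for j in range(n)] for i in range(n)]
Y = [[points[i][1] if i == j else 0.0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

XY, YX = matmul(X, Y), matmul(Y, X)
commutator_norm = max(abs(XY[i][j] - YX[i][j]) for i in range(n) for j in range(n))
print(commutator_norm)  # diagonal matrices always commute
```

The non-commutative theories mentioned in the answer give up exactly this simultaneous diagonalizability of the position components.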
{ "domain": "physics.stackexchange", "id": 93400, "tags": "quantum-mechanics, operators, heisenberg-uncertainty-principle, commutator, non-commutative-theory" }
Issue with arduino and ROS after creating a new message
Question: I am getting creation of publisher failed: Checksum does not match I am getting creation of subscriber failed: Checksum does not match This error comes when we make a new arduino message and then do catkin_make and try to publish without flashing the arduino and changing the MD5. I was stuck on this problem for a day as I was trying to edit the MD5 in the header to match the checksum. Solution: 1) copy the MD5 from the error you get to the header file 2) catkin_make 3) flash the arduino I am a new user and this is a workaround which I found. If anyone has a better solution please suggest it. I have put this up so that anyone who gets stuck on the same problem can find the solution. Originally posted by angshumanG on ROS Answers with karma: 21 on 2017-02-25 Post score: 0 Answer: The MD5 sum in the message definition is how ROS checks that message formats match. Messages with MD5 sums that don't match don't have the same format or the same serialization, and if you try to force them to be used together by editing the MD5 sums to match, you will get garbled or corrupt data. If you change the message definitions that are used by your arduino, you should run make_libraries.py to regenerate the ROS arduino libraries, and then rebuild your arduino project and flash it again. And as you already know, if you change the message definitions you also need to run catkin_make to recreate C++ and python libraries and recompile anything that uses them on the host side. Originally posted by ahendrix with karma: 47576 on 2017-02-26 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by angshumanG on 2017-02-27: I am getting the error on running rosrun rosserial_arduino make_libraries.py make_libraries.py generates the Arduino rosserial library files. It requires the location of your Arduino sketchbook/libraries folder. rosrun rosserial_arduino make_libraries.py <output_path> Comment by angshumanG on 2017-02-27: I got it. 
I was not putting the period after the command rosrun rosserial_arduino make_libraries.py .
{ "domain": "robotics.stackexchange", "id": 27134, "tags": "ros, arduino" }
Proving program termination in the $\lambda$-calculus
Question: Turing's Checking a large routine: Finally the checker has to verify that the process comes to an end. Here again he should be assisted by the programmer giving a further definite assertion to be verified. This may take the form of a quantity which is asserted to decrease continually and vanish when the machine stops. To the pure mathematician it is natural to give an ordinal number. How do you apply this to a program written in the untyped $\lambda$-calculus? Answer: This may take the form of a quantity which is asserted to decrease continually and vanish when the machine stops. Lambda calculus evaluation is a sequence of beta reduction steps. So for the lambda calculus (with or without types: types don't affect evaluation), you want a quantity (a positive integer) that decreases at each reduction step. Such a quantity exists if and only if the lambda-term is normalizable (according to a chosen reduction strategy $\to$). Proof: suppose there is a function $f : \mathbf{\Lambda}(M_0) \to \mathbb{N}$ where $\mathbf{\Lambda}(M_0)$ is the set of lambda-terms $M$ such that $M_0 \to^\star M$, such that if $M \to M'$ then $f(M) \gt f(M')$. Then the length of a reduction chain from $M_0$ is bounded by $f(M_0)$ and in particular cannot be infinite, so $M_0$ is strongly normalizable. Conversely, if $M_0$ is strongly normalizable, define $f(M)$ as the length of the longest reduction starting at $M$.
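A tiny illustration of such a decreasing quantity (my own example, not from the answer), taking $f(M)$ to be the length of the longest reduction starting at $M$:

```latex
(\lambda x.\,x)\,\big((\lambda y.\,y)\,z\big)
\;\to\; (\lambda y.\,y)\,z
\;\to\; z,
\qquad f = 2,\; 1,\; 0.
```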
{ "domain": "cs.stackexchange", "id": 18078, "tags": "lambda-calculus, software-verification, termination" }
Why doesn't Kirchhoff's Law work when a battery is shorted with an ideal wire?
Question: Kirchhoff's law states that the voltages around any closed loop sum to zero. The law is true as the electric field is conservative in circuits. Why can we not apply the law here? Why doesn't the law hold here despite the fact that the electric field is conservative and the voltages should add up to $0$? Answer: Just to complement the other answers: This isn't really about Kirchhoff's law. Rather, it is about an idealised situation that does not have a solution at all. When you draw such a diagram, you can think of it in two ways: As a sketch of a real circuit. Then the voltage source is, e.g. a battery or a power supply, and the line is a wire. You can connect them this way, and something will happen (possibly, something will break or catch fire). As an idealised circuit. Then the voltage source maintains a fixed (presumably nonzero) voltage $V$ between the poles and supplies whatever current is necessary. The wire has no resistance, inductance or capacitance -- it will carry any current and produce zero voltage drop. You immediately see that you cannot satisfy both conditions. Hence, this idealised circuit does not admit a solution. UPDATE To extend this a bit: You can approximate the behaviour of real devices with combinations of ideal circuit elements. For a battery, a common way is a series connection of an ideal voltage source and a resistor (see e.g. wikipedia), and a real wire would be an ideal wire with, again, a resistor (and possibly inductance and capacitance, see wikipedia again). So in your case, you would have to include two resistors: An internal resistance $R_\text{int}$, which you can think of as part of the battery, and a wire resistance $R_\text{w}$, which really is distributed along all of the real wire and not a localised element. Then you will have a current $$I=\frac{V}{R_\text{int}+R_\text{w}}\,$$ and an "external voltage", i.e. 
the voltage across the series combination of the ideal voltage source and the internal resistance, of $$U_\text{ext}=V-I\cdot R_\text{int}=V\left(1-\frac{R_\text{int}}{R_\text{int}+R_\text{w}}\right)\,.$$ In the fully idealised case $R_\text{int}=R_\text{w}=0$, these expressions are ill-defined. You can look at two possible limiting cases: "Superconducting wire": If $R_\text{w}=0$ but $R_\text{int}\neq0$, i.e. a superconducting ideal wire shorting a real battery, the current is limited by the internal resistance and the external voltage is zero (and the battery will likely overheat). "Real wire on ideal battery": If, on the other hand, $R_\text{int}=0$ but $R_\text{w}\neq0$, the current is limited by the wire resistance, and the external voltage is just $V$.
{ "domain": "physics.stackexchange", "id": 67790, "tags": "electric-circuits, electrical-resistance, voltage, batteries, short-circuits" }
How to use Inception v3 in Tensorflow
Question: I am trying to import Inception v3 in TensorFlow. I wish to apply it after reading this tutorial on object detection. Answer: Keras, now fully merged with the new TensorFlow 2.0, allows you to call a long list of pre-trained models. If you want to create an Inception V3, you do: from tensorflow.keras.applications import InceptionV3 That InceptionV3 you just imported is not a model itself, it's a class. You now need to instantiate an InceptionV3 object, with: my_model = InceptionV3() at this point, my_model is a Keras model with the architecture and trained weights of Inception V3, which you can re-train, freeze, save, and load as you need. Check also the full list of models available in the module, it's great.
{ "domain": "datascience.stackexchange", "id": 6443, "tags": "machine-learning, deep-learning, tensorflow, computer-vision" }
Dynamic equations for propagation of light through dust
Question: For smoothly inhomogeneous media light propagation can be modeled by Maxwell's equations with non-constant $\varepsilon$ and $\mu$. This allows to relatively easily model propagation in the medium with varying index of refraction. But suppose a medium with refraction index $n_0$ is randomly seeded with huge number of grains with $n_1$, with some density distribution $\rho(\vec r)$, and the light wavelength is much smaller than the grain size. This would correspond to propagation of light through inhomogeneous dust or fog. My question: is there any equation, which under these conditions models the dust as a solid medium and allows to compute propagation of light in such a medium? It seems there should be something based on geometric optics, but working in terms of light intensity instead of individual rays. Maybe some sort of diffusion equation? Answer: Much of scattering theory falls under the heading of Mie scattering. It examines how light scatters off uniform spheres with a given electromagnetic permittivity. In fact, it provides exact solutions to Maxwell's equations in this case. The original work was done in Mie 1908 (in German). Various further approximations can be made, leading to things like Rayleigh scattering in the long-wavelength limit. Short wavelengths would indeed be more akin to ray tracing. Note, though, that you would have to have some rather large particles to be considered much larger than the wavelength of optical light. Also, Mie theory (which always works, but which will require more and more terms in its series solution for smaller and smaller wavelengths) provides an interesting result here: the extinction (scattering plus absorption) cross section asymptotically approaches twice the geometric cross section expected from ray tracing, due to diffraction effects. This is the "extinction paradox", though there is nothing particularly mysterious about it. 
The general idea is that Mie theory (or something more advanced, like the discrete dipole approximation) gives you numbers for absorption -- characterized by an optical depth $\tau_1$ -- and how much light is scattered out of its original trajectory -- characterized by the "degree of elongation of the scattering indicatrix" $\tilde{\omega}_1$. Then you can apply this to get a flux diffusion equation, as is done by Piotrowski 1956 and 1961. Piotrowski was particularly interested in sunlight scattering through clouds, and he notes that the equation of interest depends only on $\tau_1 (1 - \tilde{\omega}_1/3)$. In the limit that each particle interaction scatters the photon uniformly across the whole sphere, $\tilde{\omega}_1 \to 0$ and diffusion only depends on $\tau_1$. This is to be expected, since it is the same as the photon being absorbed and re-emitted with each interaction. More gory details can be found in Irvine 1965. Roughly: One's microscopic scattering theory returns some function $\Phi$ (essentially $\tilde{\omega}_1$ from before) of cosines of scattering angles. $\Phi$ is integrated over an angle to yield $F$. The source function ("external source function" in that paper) $J_1$ is defined from $F$ and $\tau$. Two coupled equations are solved, relating specific intensity $I$, its derivative with respect to $\tau$, $J_1$, and $J$ (a convolution of $F$ and $I$).
{ "domain": "physics.stackexchange", "id": 18418, "tags": "visible-light, scattering, diffusion" }
Average quantity over Keplerian orbit
Question: I have been working through some lecture notes and am quite confused about something. I am trying to understand how to average a quantity over an orbit (Keplerian) but I am struggling to get a clear idea on this. The notes I am using are: http://www.sns.ias.edu/sites/default/files/isima1.pdf So I am trying to do the exercise on page 8, but have no idea how to get the solutions shown. Answer: The time average of a periodic quantity $Q(t)$ is, by definition, $$\langle Q \rangle=\frac{1}{T}\int_0^TQ(t)\,dt$$ where $T$ is the period. For Keplerian orbits, it is usually easiest to change the integration variable to the angular coordinate $\psi$ and express all quantities being integrated in terms of $\psi$. The exercise thus consists of doing a variety of angular integrals. There is a typo in one of the results.
{ "domain": "physics.stackexchange", "id": 67617, "tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, mathematical-physics, orbital-motion" }
What does it mean when we say that power of a bulb is 10 W? Since $V/I=$ resistance is a constant, how can power $=VI$ be a constant?
Question: My question is simple. In an ideal situation, at constant temperature, we know that a normal appliance like a filament bulb has a straight voltage-vs-current graph, meaning its resistance is constant, or voltage is directly proportional to current. Now, we also have bulbs of desired power available, e.g. 10 W, 20 W, 100 W bulbs etc. Since I understand that power = V x current, the power for a bulb cannot be a constant if its resistance is assumed constant. Simple mathematical reasoning can confirm that. So, what does it mean when we say that a certain bulb is a 10 W bulb? Does it simply mean that it would consume 2 times the energy at a given voltage if it replaces a 5 W bulb? Answer: The power rating given on lightbulbs always refers to the power at a specified operational voltage (which is always given together with the power or implied by the type of socket). The power at different voltages is not easily predictable, as the resistance of the filament will vary strongly with temperature (which depends on the dissipated power). Furthermore, fluorescent lamps and LED lamps which have electronic components to control the lamp will usually not work at all at other voltages than the specified operational one, so a power rating that does not refer to the nominal operational voltage will not make any sense here anyway.
{ "domain": "physics.stackexchange", "id": 25468, "tags": "electricity, electric-current, electrical-resistance, voltage, power" }
Searching for a performance bug in a C++20 pathfinding algorithm (NBA*)
Question: (See the next iteration.) I have this pathfinding algorithm: DirectedGraph.hpp #ifndef COM_GITHUB_CODERODDE_DIRECTED_GRAPH_HPP #define COM_GITHUB_CODERODDE_DIRECTED_GRAPH_HPP #include <cstddef> // for std::size_t #include <sstream> #include <stdexcept> #include <unordered_map> #include <unordered_set> namespace com::github::coderodde::directed_graph { template<typename Node = int> class DirectedGraph { private: std::unordered_map<Node, std::unordered_set<Node>> child_map_; std::unordered_map<Node, std::unordered_set<Node>> parent_map_; std::unordered_set<Node> nodes_; std::size_t number_of_arcs_; public: DirectedGraph() : number_of_arcs_{ 0 } {} bool addNode(Node const& node) { if (!hasNode(node)) { child_map_[node] = {}; parent_map_[node] = {}; nodes_.insert(node); return true; } return false; } bool hasNode(Node const& node) { return nodes_.contains(node); } bool removeNode(Node const& node) { if (!hasNode(node)) { return false; } number_of_arcs_ -= child_map_[node].size() + parent_map_[node].size(); child_map_.erase(node); parent_map_.erase(node); nodes_.erase(node); return true; } bool addArc(Node const& tail, Node const& head) { bool state_changed = false; if (!hasNode(tail)) { addNode(tail); state_changed = true; } if (!hasNode(head)) { addNode(head); state_changed = true; } if (!child_map_[tail].contains(head)) { child_map_[tail].insert(head); state_changed = true; } if (!parent_map_[head].contains(tail)) { parent_map_[head].insert(tail); state_changed = true; } if (state_changed) { number_of_arcs_++; } return state_changed; } bool hasArc(Node const& tail, Node const& head) { if (!child_map_.contains(tail)) { return false; } return child_map_[tail].contains(head); } bool removeArc(Node const& tail, Node const& head) { if (!child_map_.contains(tail)) { return false; } if (!child_map_[tail].contains(head)) { return false; } child_map_[tail].erase(head); parent_map_[head].erase(tail); number_of_arcs_--; return true; } std::unordered_set<Node>* 
getParentNodesOf(Node const& node) { return &parent_map_[node]; } std::unordered_set<Node>* getChildNodesOf(Node const& node) { return &child_map_[node]; } std::unordered_set<Node> const& getNodes() const { return nodes_; } std::size_t getNumberOfNodes() const { return nodes_.size(); } std::size_t getNumberOfArcs() const { return number_of_arcs_; } }; template<typename Node = int> std::string buildNonExistingArcErrorMessage( Node const& tail, Node const& head) { std::stringstream ss; ss << "The arc (" << tail << ", " << head << ") does not exist."; return ss.str(); } class NonExistingArcException : public std::logic_error { public: NonExistingArcException(std::string const& err_msg) : std::logic_error{ err_msg } {} }; template<typename Node = int, typename Weight = double> class DirectedGraphWeightFunction { private: std::unordered_map<Node, std::unordered_map<Node, Weight>> weight_map_; public: void addWeight(Node const& tail, Node const& head, Weight weight) { weight_map_[tail][head] = weight; } void removeWeight(Node const& tail, Node const& head) { if (!weight_map_.contains(tail) || !weight_map_[tail].contains(head)) { return; } weight_map_[tail].erase(head); } Weight getWeight(Node const& tail, Node const& head) { if (!weight_map_.contains(tail) || !weight_map_[tail].contains(head)) { throw NonExistingArcException{ buildNonExistingArcErrorMessage(tail, head) }; } return weight_map_[tail][head]; } }; } // End of namespace com::github::coderodde::directed_graph. 
#endif // COM_GITHUB_CODERODDE_DIRECTED_GRAPH_HPP Pathfinders.NBAstar.hpp #ifndef COM_GITHUB_CODERODDE_GRAPH_PATHFINDERS_NBA_STAR_HPP #define COM_GITHUB_CODERODDE_GRAPH_PATHFINDERS_NBA_STAR_HPP #include "DirectedGraph.hpp" #include "Pathfinders.SharedUtils.hpp" #include <algorithm> #include <cstdlib> #include <queue> #include <sstream> #include <stdexcept> #include <unordered_map> #include <unordered_set> #include <vector> namespace com::github::coderodde::pathfinders { using namespace com::github::coderodde::directed_graph; using namespace com::github::coderodde::pathfinders::util; template<typename Node = int, typename Weight = double> void stabilizeForward( DirectedGraph<Node>& graph, DirectedGraphWeightFunction<Node, Weight>& weight_function, HeuristicFunction<Node, Weight>& heuristic_function, std::priority_queue< HeapNode<Node, Weight>*, std::vector<HeapNode<Node, Weight>*>, HeapNodeComparator<Node, Weight>>& OPEN_FORWARD, std::unordered_set<Node>& CLOSED, std::unordered_map<Node, Weight>& distance_map_forward, std::unordered_map<Node, Weight>& distance_map_backward, std::unordered_map<Node, Node*>& parent_map_forward, Node const& current_node, Node const& target_node, Weight& best_cost, const Node** touch_node_ptr) { std::unordered_set<Node>* children = graph.getChildNodesOf(current_node); for (Node const& child_node : *children) { if (CLOSED.contains(child_node)) { continue; } Weight tentative_distance = distance_map_forward[current_node] + weight_function.getWeight(current_node, child_node); if (!distance_map_forward.contains(child_node) || distance_map_forward[child_node] > tentative_distance) { OPEN_FORWARD.push( new HeapNode<Node, Weight>( child_node, tentative_distance + heuristic_function.estimate(child_node, target_node))); distance_map_forward[child_node] = tentative_distance; Node* node_ptr = new Node{ current_node }; parent_map_forward[child_node] = node_ptr; if (distance_map_backward.contains(child_node)) { Weight path_length = tentative_distance 
+ distance_map_backward[child_node]; if (best_cost > path_length) { best_cost = path_length; *touch_node_ptr = &child_node; } } } } } template<typename Node = int, typename Weight = double> void stabilizeBackward( DirectedGraph<Node>& graph, DirectedGraphWeightFunction<Node, Weight>& weight_function, HeuristicFunction<Node, Weight>& heuristic_function, std::priority_queue< HeapNode<Node, Weight>*, std::vector<HeapNode<Node, Weight>*>, HeapNodeComparator<Node, Weight>>& OPEN_BACKWARD, std::unordered_set<Node>& CLOSED, std::unordered_map<Node, Weight>& distance_map_forward, std::unordered_map<Node, Weight>& distance_map_backward, std::unordered_map<Node, Node*>& parent_map_backward, Node const& current_node, Node const& source_node, Weight& best_cost, const Node** touch_node_ptr) { std::unordered_set<Node>* parents = graph.getParentNodesOf(current_node); for (Node const& parent_node : *parents) { if (CLOSED.contains(parent_node)) { continue; } Weight tentative_distance = distance_map_backward[current_node] + weight_function.getWeight(parent_node, current_node); if (!distance_map_backward.contains(parent_node) || distance_map_backward[parent_node] > tentative_distance) { OPEN_BACKWARD.push( new HeapNode<Node, Weight>( parent_node, tentative_distance + heuristic_function.estimate(parent_node, source_node))); distance_map_backward[parent_node] = tentative_distance; Node* node_ptr = new Node{ current_node }; parent_map_backward[parent_node] = node_ptr; if (distance_map_forward.contains(parent_node)) { Weight path_length = tentative_distance + distance_map_forward[parent_node]; if (best_cost > path_length) { best_cost = path_length; *touch_node_ptr = &parent_node; } } } } } template<typename Node = int, typename Weight = double> Path<Node, Weight> runBidirectionalAstarAlgorithm( DirectedGraph<Node>& graph, DirectedGraphWeightFunction<Node, Weight>& weight_function, HeuristicFunction<Node, Weight>* heuristic_function, Node& source_node, Node& target_node) { 
checkTerminalNodes(graph, source_node, target_node); std::priority_queue< HeapNode<Node, Weight>*, std::vector<HeapNode<Node, Weight>*>, HeapNodeComparator<Node, Weight>> OPEN_FORWARD; std::priority_queue< HeapNode<Node, Weight>*, std::vector<HeapNode<Node, Weight>*>, HeapNodeComparator<Node, Weight>> OPEN_BACKWARD; std::unordered_set<Node> CLOSED; std::unordered_map<Node, Weight> distance_map_forward; std::unordered_map<Node, Weight> distance_map_backward; std::unordered_map<Node, Node*> parent_map_forward; std::unordered_map<Node, Node*> parent_map_backward; OPEN_FORWARD .push(new HeapNode<Node, Weight>(source_node, Weight{})); OPEN_BACKWARD.push(new HeapNode<Node, Weight>(target_node, Weight{})); distance_map_forward[source_node] = Weight{}; distance_map_backward[target_node] = Weight{}; parent_map_forward[source_node] = nullptr; parent_map_backward[target_node] = nullptr; const Node* touch_node = nullptr; Weight best_cost = std::numeric_limits<Weight>::max(); Weight total_distance = heuristic_function ->estimate( source_node, target_node); Weight f_cost_forward = total_distance; Weight f_cost_backward = total_distance; while (!OPEN_FORWARD.empty() && !OPEN_BACKWARD.empty()) { if (OPEN_FORWARD.size() < OPEN_BACKWARD.size()) { HeapNode<Node, Weight>* top_heap_node = OPEN_FORWARD.top(); OPEN_FORWARD.pop(); Node current_node = top_heap_node->getElement(); delete top_heap_node; if (CLOSED.contains(current_node)) { continue; } CLOSED.insert(current_node); if (distance_map_forward[current_node] + heuristic_function->estimate(current_node, target_node) >= best_cost || distance_map_forward[current_node] + f_cost_backward - heuristic_function->estimate(current_node, source_node) >= best_cost) { // Reject the 'current_node'! 
} else { // Stabilize the 'current_node': stabilizeForward<Node, Weight>( graph, weight_function, *heuristic_function, OPEN_FORWARD, CLOSED, distance_map_forward, distance_map_backward, parent_map_forward, current_node, target_node, best_cost, &touch_node); } if (!OPEN_FORWARD.empty()) { f_cost_forward = OPEN_FORWARD.top()->getDistance(); } } else { HeapNode<Node, Weight>* top_heap_node = OPEN_BACKWARD.top(); OPEN_BACKWARD.pop(); Node current_node = top_heap_node->getElement(); delete top_heap_node; if (CLOSED.contains(current_node)) { continue; } CLOSED.insert(current_node); if (distance_map_backward[current_node] + heuristic_function->estimate(current_node, source_node) >= best_cost || distance_map_backward[current_node] + f_cost_forward - heuristic_function->estimate(current_node, target_node) >= best_cost) { // Reject the 'current_node'! } else { // Stabilize the 'current_node': stabilizeBackward<Node, Weight>( graph, weight_function, *heuristic_function, OPEN_BACKWARD, CLOSED, distance_map_forward, distance_map_backward, parent_map_backward, current_node, source_node, best_cost, &touch_node); } if (!OPEN_BACKWARD.empty()) { f_cost_backward = OPEN_BACKWARD.top()->getDistance(); } } } cleanPriorityQueue(OPEN_FORWARD); cleanPriorityQueue(OPEN_BACKWARD); if (touch_node == nullptr) { cleanParentMap(parent_map_forward); cleanParentMap(parent_map_backward); throw PathDoesNotExistException{ buildPathNotExistsErrorMessage(source_node, target_node) }; } Path<Node, Weight> path = tracebackPath( *touch_node, parent_map_forward, parent_map_backward, weight_function); cleanParentMap(parent_map_forward); cleanParentMap(parent_map_backward); return path; } } // End of namespace 'com::github::coderodde::pathfinders'. 
#endif // COM_GITHUB_CODERODDE_GRAPH_PATHFINDERS_NBA_STAR_HPP Pathfinders.SharedUtils.hpp #ifndef COM_GITHUB_CODERODDE_PATHFINDERS_UTIL_HPP #define COM_GITHUB_CODERODDE_PATHFINDERS_UTIL_HPP #include "DirectedGraph.hpp" #include <queue> #include <stdexcept> #include <string> #include <vector> using namespace com::github::coderodde::directed_graph; namespace com::github::coderodde::pathfinders::util { template<typename Node = int, typename Weight = double> class HeuristicFunction { public: virtual Weight estimate(Node const& tail, Node const& head) = 0; virtual ~HeuristicFunction() { } }; template<typename Node = int, typename Weight = double> class Path { private: std::vector<Node> nodes_; DirectedGraphWeightFunction<Node, Weight> weight_function_; public: Path(std::vector<Node> const& nodes, DirectedGraphWeightFunction<Node, Weight> const& weight_function) : weight_function_{ weight_function } { for (Node e : nodes) { nodes_.push_back(e); } } Node operator[](std::size_t index) { return nodes_[index]; } std::size_t length() { return nodes_.size(); } Weight distance() { Weight total_distance = {}; for (std::size_t i = 0; i < nodes_.size() - 1; ++i) { total_distance += weight_function_.getWeight( nodes_[i], nodes_[i + 1]); } return total_distance; } }; class PathDoesNotExistException : public std::logic_error { public: PathDoesNotExistException(std::string const& err_msg) : std::logic_error{ err_msg } {} }; class NodeNotPresentInGraphException : public std::logic_error { public: NodeNotPresentInGraphException(std::string const& err_msg) : std::logic_error{ err_msg } {} }; template<typename Node = int, typename Weight = double> struct HeapNode { private: Weight distance_; Node element_; public: HeapNode(Node const& element, Weight const& distance) : distance_{ distance }, element_{ element } { } [[nodiscard]] Node const& getElement() noexcept { return element_; } [[nodiscard]] Weight const& getDistance() noexcept { return distance_; } }; template<typename Node = int, 
typename Weight = double> class HeapNodeComparator { public: bool operator()(HeapNode<Node, Weight>* first, HeapNode<Node, Weight>* second) { return first->getDistance() > second->getDistance(); } }; template<typename Node = int> std::string buildSourceNodeNotInGraphErrorMessage(Node source_node) { std::stringstream ss; ss << "There is no source node " << source_node << " in the graph."; return ss.str(); } template<typename Node = int> std::string buildTargetNodeNotInGraphErrorMessage(Node target_node) { std::stringstream ss; ss << "There is no target node " << target_node << " in the graph."; return ss.str(); } template<typename Node = int> void checkTerminalNodes(DirectedGraph<Node> graph, Node source_node, Node target_node) { if (!graph.hasNode(source_node)) { throw NodeNotPresentInGraphException{ buildSourceNodeNotInGraphErrorMessage(source_node) }; } if (!graph.hasNode(target_node)) { throw NodeNotPresentInGraphException{ buildTargetNodeNotInGraphErrorMessage(target_node) }; } } template<typename Node = int> std::string buildPathNotExistsErrorMessage(Node source_node, Node target_node) { std::stringstream ss; ss << "There is no path from " << source_node << " to " << target_node << "."; return ss.str(); } template<typename Node = int, typename Weight = double> Path<Node, Weight> tracebackPath(Node& target_node, std::unordered_map<Node, Node*>& parent_map, DirectedGraphWeightFunction<Node, Weight>& weight_function) { std::vector<Node> path_nodes; Node previous_node = target_node; path_nodes.push_back(target_node); while (true) { Node* next_node = parent_map[previous_node]; if (next_node == nullptr) { std::reverse(path_nodes.begin(), path_nodes.end()); return Path<Node, Weight>{path_nodes, weight_function}; } path_nodes.push_back(*next_node); previous_node = *next_node; } } template<typename Node = int, typename Weight = double> Path<Node, Weight> tracebackPath( const Node& touch_node, std::unordered_map<Node, Node*>& forward_parent_map, std::unordered_map<Node, 
Node*>& backward_parent_map, DirectedGraphWeightFunction<Node, Weight>& weight_function) { std::vector<Node> path_nodes; Node previous_node = touch_node; path_nodes.push_back(touch_node); while (true) { Node* next_node = forward_parent_map[previous_node]; if (next_node == nullptr) { std::reverse(path_nodes.begin(), path_nodes.end()); break; } path_nodes.push_back(*next_node); previous_node = *next_node; } Node* next_node = backward_parent_map[touch_node]; while (next_node != nullptr) { path_nodes.push_back(*next_node); next_node = backward_parent_map[*next_node]; } return Path<Node, Weight>{path_nodes, weight_function}; } template<typename Node = int, typename Weight = double> void cleanPriorityQueue( std::priority_queue<HeapNode<Node, Weight>*, std::vector<HeapNode<Node, Weight>*>, HeapNodeComparator<Node, Weight>>&queue) { while (!queue.empty()) { HeapNode<Node, Weight>* heap_node = queue.top(); queue.pop(); delete heap_node; } } template<typename Node = int> void cleanParentMap(std::unordered_map<Node, Node*> parent_map) { for (const auto p : parent_map) { // One 'p.second' will be 'nullptr', but we can "delete" it too: delete p.second; } parent_map.clear(); } }; // End of namespace 'com::github::coderodde::pathfinders::util'. 
#endif // COM_GITHUB_CODERODDE_PATHFINDERS_UTIL_HPP

main.cpp

    #include "DirectedGraph.hpp"
    #include "Pathfinders.API.hpp"
    #include "Pathfinders.SharedUtils.hpp"
    #include <chrono>
    #include <cstddef>
    #include <iostream>
    #include <random>

    constexpr std::size_t NUMBER_OF_NODES = 100 * 1000;
    constexpr std::size_t NUMBER_OF_ARCS = 500 * 1000;
    constexpr double SPACE_WIDTH = 10000.0;
    constexpr double SPACE_HEIGHT = 10000.0;
    constexpr double DISTANCE_FACTOR = 1.1;

    using namespace com::github::coderodde::directed_graph;
    using namespace com::github::coderodde::pathfinders;
    using namespace com::github::coderodde::pathfinders::api;

    class EuclideanCoordinates {
    private:
        double x_;
        double y_;

    public:
        EuclideanCoordinates(double x = 0.0, double y = 0.0) : x_{ x }, y_{ y } {}

        double distanceTo(EuclideanCoordinates const& other) const {
            const auto dx = x_ - other.x_;
            const auto dy = y_ - other.y_;
            return std::sqrt(dx * dx + dy * dy);
        }
    };

    class MyHeuristicFunction : public HeuristicFunction<int, double> {
    private:
        std::unordered_map<int, EuclideanCoordinates> map_;

    public:
        MyHeuristicFunction(std::unordered_map<int, EuclideanCoordinates> map)
            : map_{ map } {}

        MyHeuristicFunction(const MyHeuristicFunction& other) : map_{ other.map_ } {}

        MyHeuristicFunction(MyHeuristicFunction&& other) {
            map_ = std::move(other.map_);
        }

        MyHeuristicFunction& operator=(const MyHeuristicFunction& other) {
            map_ = other.map_;
            return *this;
        }

        MyHeuristicFunction& operator=(MyHeuristicFunction&& other) {
            map_ = std::move(other.map_);
            return *this;
        }

        ~MyHeuristicFunction() {}

        double estimate(int const& tail, int const& head) override {
            const auto point1 = map_[tail];
            const auto point2 = map_[head];
            return point1.distanceTo(point2);
        }
    };

    class GraphData {
    private:
        DirectedGraph<int> graph_;
        DirectedGraphWeightFunction<int, double> weight_function_;
        MyHeuristicFunction heuristic_function_;

    public:
        GraphData(DirectedGraph<int> graph,
                  DirectedGraphWeightFunction<int, double> weight_function,
                  MyHeuristicFunction heuristic_function)
            : graph_{ graph },
              weight_function_{ weight_function },
              heuristic_function_{ heuristic_function } {}

        DirectedGraph<int>& getGraph() { return graph_; }

        DirectedGraphWeightFunction<int, double>& getWeightFunction() {
            return weight_function_;
        }

        HeuristicFunction<int, double>& getHeuristicFunction() {
            return heuristic_function_;
        }
    };

    EuclideanCoordinates getRandomEuclideanCoordinates(
            std::mt19937& mt,
            std::uniform_real_distribution<double> x_coord_distribution,
            std::uniform_real_distribution<double> y_coord_distribution) {
        double x = x_coord_distribution(mt);
        double y = y_coord_distribution(mt);
        EuclideanCoordinates coords{ x, y };
        return coords;
    }

    GraphData createRandomGraphData(std::size_t number_of_nodes,
                                    std::size_t number_of_arcs) {
        DirectedGraph<int> graph;
        DirectedGraphWeightFunction<int, double> weight_function;
        std::vector<int> node_vector;
        node_vector.reserve(number_of_nodes);

        std::random_device rd;
        std::mt19937 mt(rd());
        std::uniform_int_distribution<std::size_t> uniform_distribution(0, number_of_nodes - 1);
        std::uniform_real_distribution<double> x_coord_distribution(0, SPACE_WIDTH);
        std::uniform_real_distribution<double> y_coord_distribution(0, SPACE_HEIGHT);
        std::unordered_map<int, EuclideanCoordinates> coordinate_map;

        for (size_t node_id = 0; node_id < number_of_nodes; ++node_id) {
            graph.addNode((int) node_id);
            node_vector.push_back((int) node_id);
            EuclideanCoordinates coords =
                getRandomEuclideanCoordinates(mt, x_coord_distribution, y_coord_distribution);
            coordinate_map[(int) node_id] = coords;
        }

        for (size_t i = 0; i < number_of_arcs; ++i) {
            std::size_t tail_index = uniform_distribution(mt);
            std::size_t head_index = uniform_distribution(mt);
            int tail = node_vector[tail_index];
            int head = node_vector[head_index];
            EuclideanCoordinates tail_coords = coordinate_map[tail];
            EuclideanCoordinates head_coords = coordinate_map[head];
            graph.addArc(tail, head);
            weight_function.addWeight(tail, head,
                tail_coords.distanceTo(head_coords) * DISTANCE_FACTOR);
        }

        MyHeuristicFunction heuristic_function{ coordinate_map };
        GraphData graph_data(graph, weight_function, heuristic_function);
        return graph_data;
    }

    class Milliseconds {
    private:
        std::chrono::high_resolution_clock m_clock;

    public:
        auto milliseconds() {
            return std::chrono::duration_cast<std::chrono::milliseconds>
                (m_clock.now().time_since_epoch()).count();
        }
    };

    int main() {
        GraphData graph_data = createRandomGraphData(NUMBER_OF_NODES, NUMBER_OF_ARCS);

        try {
            Milliseconds ms;
            std::random_device rd;
            std::mt19937 mt(rd());
            std::uniform_int_distribution<int> dist(0, NUMBER_OF_NODES - 1);
            int source_node = dist(mt);
            int target_node = dist(mt);

            std::cout << "Source node: " << source_node << "\n";
            std::cout << "Target node: " << target_node << "\n";

            std::cout << "--- Dijkstra's algorithm: ---\n";
            auto start_time = ms.milliseconds();

            Path<int, double> path = findShortestPath()
                .in(graph_data.getGraph())
                .withWeights(graph_data.getWeightFunction())
                .from(source_node)
                .to(target_node)
                .usingDijkstra();

            auto end_time = ms.milliseconds();
            std::cout << "Path:\n";

            for (size_t i = 0; i < path.length(); ++i) {
                std::cout << path[i] << "\n";
            }

            std::cout << "Path distance: " << path.distance() << "\n";
            std::cout << "Duration: " << (end_time - start_time) << " ms.\n\n";

            std::cout << "--- Bidirectional Dijkstra's algorithm: ---\n";
            start_time = ms.milliseconds();

            path = findShortestPath()
                .in(graph_data.getGraph())
                .withWeights(graph_data.getWeightFunction())
                .from(source_node)
                .to(target_node)
                .usingBidirectionalDijkstra();

            end_time = ms.milliseconds();
            std::cout << "Path:\n";

            for (size_t i = 0; i < path.length(); ++i) {
                std::cout << path[i] << "\n";
            }

            std::cout << "Path distance: " << path.distance() << "\n";
            std::cout << "Duration: " << (end_time - start_time) << " ms.\n\n";

            std::cout << "--- A* algorithm: ---\n";
            start_time = ms.milliseconds();

            path = findShortestPath()
                .in(graph_data.getGraph())
                .withWeights(graph_data.getWeightFunction())
                .from(source_node)
                .to(target_node)
                .withHeuristicFunction(graph_data.getHeuristicFunction())
                .usingAstar();

            end_time = ms.milliseconds();
            std::cout << "Path:\n";

            for (size_t i = 0; i < path.length(); ++i) {
                std::cout << path[i] << "\n";
            }

            std::cout << "Path distance: " << path.distance() << "\n";
            std::cout << "Duration: " << (end_time - start_time) << " ms.\n\n";

            //// NBA* /////////////////////////////////////////////////////////////
            std::cout << "--- Bidirectional A* (NBA*) algorithm: ---\n";
            start_time = ms.milliseconds();

            path = findShortestPath()
                .in(graph_data.getGraph())
                .withWeights(graph_data.getWeightFunction())
                .from(source_node)
                .to(target_node)
                .withHeuristicFunction(graph_data.getHeuristicFunction())
                .usingBidirectionalAstar();

            end_time = ms.milliseconds();
            std::cout << "Path:\n";

            for (size_t i = 0; i < path.length(); ++i) {
                std::cout << path[i] << "\n";
            }

            std::cout << "Path distance: " << path.distance() << "\n";
            std::cout << "Duration: " << (end_time - start_time) << " ms.\n\n";
        } catch (NodeNotPresentInGraphException const& err) {
            std::cout << err.what() << "\n";
        } catch (PathDoesNotExistException const& err) {
            std::cout << err.what() << "\n";
        }

        return 0;
    }

Now, what bothers me is that NBA* runs with -O3 optimization in around 200 milliseconds, whereas this demo of the same algorithm in Java, on a graph with the same topological properties, runs in only 29 milliseconds. I suspect that I am implicitly invoking copy assignments/constructors, but I am not sure about that. Please, help. (The entire (Visual Studio 2022) project lives here.) Answer:

Avoid manual new and delete

I see a lot of new and delete statements. Those are rarely needed in modern C++, and usually point to a problem. In particular, you declare a lot of priority queues like so:

    std::priority_queue<HeapNode<Node, Weight>*,
                        std::vector<HeapNode<Node, Weight>*>,
                        HeapNodeComparator<Node, Weight>> OPEN_FORWARD;

And then proceed to create HeapNodes with new and add them to the queue.
However, STL containers already allocate memory for you, so instead of having them allocate memory for pointers, and you allocating more memory for the actual HeapNode objects, and everything becoming slower because of the added pointer indirection, ideally you should just be able to write:

    std::priority_queue<HeapNode<Node, Weight>> OPEN_FORWARD;

You can if you make the member functions of HeapNode const, and add an operator<() so the container can directly compare elements without needing a HeapNodeComparator. To add a new element, instead of:

    OPEN_FORWARD.push(new HeapNode<Node, Weight>(source_node, Weight{}));

You can write:

    OPEN_FORWARD.emplace(source_node, Weight{});

And accessing the top element:

    HeapNode<Node, Weight>* top_heap_node = OPEN_FORWARD.top();
    OPEN_FORWARD.pop();
    Node current_node = top_heap_node->getElement();
    delete top_heap_node;

Now can be simplified to:

    Node current_node = OPEN_FORWARD.top().getElement();
    OPEN_FORWARD.pop();

You also don't need cleanPriorityQueue anymore. You can do something similar for the parent maps. Currently though, you use nullptrs to indicate that a node doesn't have a parent. Instead, either use find() to check if an element is in the std::unordered_map, or store the parent as std::optional<Node> (although I would prefer the former).

Consider using different data structures

runBidirectionalAstarAlgorithm() uses a large number of containers to store information: two priority queues, an unordered set and four unordered maps. Most of these, possibly even all of them, will at one point contain as many Node objects as there are in the input graph. That is a lot of duplication. Consider that even if Node is just a small type like an int (and not, say, a std::string or something even more complicated), a std::unordered_map<Node, ...> will still have to allocate memory for each entry it stores, and has to do bookkeeping for the allocated memory, which greatly increases the amount of memory used per Node.
You can already reduce that by combining the auxiliary information you want to store for each Node while the algorithm is running in a single struct:

    struct Info {
        bool closed;
        Weight distance_forward;
        Weight distance_backward;
        std::optional<Node> parent_forward;
        std::optional<Node> parent_backward;
    };

    std::unordered_map<Node, Info> info;

The above map info replaces CLOSED, distance_map_forward, distance_map_backward, parent_map_forward and parent_map_backward. Apart from reducing the memory used by these data structures, having only one map also reduces the number of parameters you have to pass to the stabilize*() functions. Something similar can be done in class DirectedGraph. Instead of nodes_, child_map_ and parent_map_ being different containers and having to add new nodes to all three of them, just do something like:

    struct Adjacency {
        std::set<Node> children;
        std::set<Node> parents;
    };

    std::unordered_map<Node, Adjacency> nodes_;

Even better would be to add weights in there as well, so you don't need a separate DirectedGraphWeightFunction.

Add more const

You are already using const in a lot of places, but there is more that can be made const. I already mentioned the member functions of HeapNode, but also the parameters graph and weight_function of runBidirectionalAstarAlgorithm() should be const, and when you do that you will find out you need to make a bunch more member functions const.

Use std::function to pass heuristic_function

Instead of creating an abstract base class HeuristicFunction<Node, Weight>, which then must be inherited from, consider passing a std::function<Weight(const Node&, const Node&)> instead. This will still allow you to use a class to store the heuristic function (if you rename estimate() to operator()()), but now it will also allow you to use free functions or lambda expressions.
For example, in createRandomGraphData() you could then write:

    GraphData graph_data(graph,
        weight_function,
        [map = std::move(coordinate_map)](const Node& tail, const Node& head) {
            return map.at(tail).distanceTo(map.at(head));
        }
    );

Naming things

I would avoid using ALL_CAPS names for anything but macros and enum constants. Just write open, closed and so on for the priority queues and sets. Some terms in graph theory overlap terms in computer science, which might cause some confusion. Consider arc: while this is a valid name for a connection between two nodes, some might think it is part of a circle, and think it might cover more than just two nodes. The DirectedGraphWeightFunction doesn't look like a regular function; it's just a container that holds the weights for each arc in the graph. STL containers like std::unordered_map also have a concept called node handles. If you want to avoid confusion, I suggest renaming things like so:

    node -> vertex
    arc -> edge
    DirectedGraphWeightFunction -> WeightMap

(or even better, merge it somehow into DirectedGraph if you never need to support multiple different weight functions for a given graph).
{ "domain": "codereview.stackexchange", "id": 42999, "tags": "c++, performance, algorithm, pathfinding, c++20" }
Difference in spectrum of green laser and green LED
Question: In an experiment I conducted, I used a spectrometer to find the spectrum of a green laser and a green LED. This is what I found:

LED spectrum:

Laser spectrum:

Why is the spectral width of the LED so wide compared with that of the laser? Answer: In short, because they produce light using completely different mechanisms. In fact, the light produced in a green laser is actually frequency-doubled infrared light (which is part of why cheap green laser pointers that don't properly filter out the leftover infrared light are dangerous).
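The frequency-doubling point can be checked with quick arithmetic. The sketch below assumes a 1064 nm infrared pump line (the usual Nd-doped laser line in cheap green pointers); the specific wavelength is an assumption, not something stated in the answer:

```python
# Second-harmonic generation doubles the optical frequency,
# which halves the wavelength: 1064 nm IR in, 532 nm green out.
c = 299_792_458.0      # speed of light in vacuum, m/s
lam_pump = 1064e-9     # assumed infrared pump wavelength, m
f_pump = c / lam_pump  # pump frequency, Hz
f_shg = 2 * f_pump     # frequency after doubling
lam_shg = c / f_shg    # resulting wavelength, m
```

This is exactly the 532 nm line seen in green-laser spectra, while an LED's spontaneous emission spans a broad band of energies around its bandgap.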
{ "domain": "physics.stackexchange", "id": 56900, "tags": "visible-light, electromagnetic-radiation, laser, light-emitting-diodes" }
Boiling sodium hydroxide in stainless steel cup: Solution turning to a blue color
Question: I boiled highly concentrated sodium hydroxide in a stainless steel cup. This created a blackish layer on the bottom of the cup and turned the colour of the sodium hydroxide solution bluish. Am I right to assume that there was some oxidation happening at the surface of the steel? Are any oxides of metals present in common stainless steel known to have a blue colour when dissolved in an aqueous solution? Answer: You say stainless steel? Stainless steel is an alloy of $\ce{Ni, Cr, Fe}$ with other trace elements, and owes its apparent resistance to corrosion to a protective, adherent coating of mixed chromium, nickel, and iron oxides. A large part of it is probably $\ce{Cr^3+}$, which is amphoteric and will dissolve in hydroxide solution. Once the protective coating is breached, chromium will react in base similarly to aluminum. The potential to $\ce{[Cr(OH)4]^-}$ is about $\pu{+1.2 V}$. Stainless steel can be much more reactive than pure iron if the protective layer is continually disrupted.
{ "domain": "chemistry.stackexchange", "id": 15588, "tags": "inorganic-chemistry, acid-base, metal, alloy" }
Are RNN or LSTM appropriate Neural Networks approaches for multivariate time-series regression?
Question: Dear Data Science community, For a small project, I've started working on neural networks as a regression tool, but I am still confused about the possibilities of some variants. Here's what I am aiming to do: I have multiple input data time series $X(t)=[X_1(t), X_2(t), X_3(t),X_4(t)]$, and multiple target data time series which I want to model, $Y(t)=[Y_1(t), Y_2(t)]$. All data are available for training. I aim to train my model/regression on an interval $[t_0,t_n]$, and then be able to apply it on a larger, different interval. I know that the relations between my $Y$ and $X$ are non-linear, but also that I need to take into account lag, or inertia. For example, $Y_1(t)$ depends on $X_1([t-dt_1,t])$ and $X_2([t-dt_2,t])$. All $dt_n$ are different, but I have an approximate idea of how 'far' I need to reach. With this in mind, through some research I have been guided to focus on Recurrent Neural Networks (RNN) and Long Short Term Memory (LSTM) networks. I aim to use TensorFlow/Keras to work on this. However, after some reading, I'm getting confused by those solutions. Many people present them in prediction applications, which supposedly means that those networks use data from a time interval (either $X$ or $Y$ in my case) on an interval $[t_0,t_n]$ to predict $Y(t_{n+1})$. But my objective is to use $X([t_0,t_n])$ to model $Y([t_0,t_n])$ (on the same time interval). I am getting confused by this notion of "prediction". Therefore, are RNN and LSTM networks appropriate solutions for my multivariate time series regression/model project? Or am I already going the wrong way? As a beginner in this field, any reference or link to resources, tutorials, or demos is also gladly welcome. Answer: Here is a really good source to begin multivariate time-series forecasting in Keras using LSTMs.

I aim to train my model/regression on an interval $[t_0, t_n]$ and then be able to apply it on a larger different interval.
There's no harm in this as long as you perform the right kind of multi-step forecasting. If your problem requires you to train on $[t_0, t_n], \text{for some } n < 100$, produce $y_1(t_{n+1}), y_2(t_{n+1})$ as outputs, and then APPLY the model on $[t_n, t_n+100]$, there will be issues in implementation, as most ML models will require you to provide the same shape as you did when you were training your model. Here's where sliding across the time-series will help you. Simply put, $$ \begin{align} \text{Train on: }& [t_0, t_n] &\text{ Output: }& y_1(t_{n+1}), y_2(t_{n+1}) \\ \text{Predict on: }& [t_1, t_{n+1}] &\text{ Output: }& y_1(t_{n+2}), y_2(t_{n+2})\\ \text{Predict on: }& [t_2, t_{n+2}] &\text{ Output: }& y_1(t_{n+3}), y_2(t_{n+3})\\ & \vdots & & \vdots\\ \end{align} $$ I know that relation between my Y and X are non-linear, but also that in need to take into account lag or inertia. For example, Y1(t) is dependent of X1([t−dt1,t]) and X2([t−dt2,t]). All dtn are different, but I have an approximate idea of how 'far' I need to reach. Well, you can model your time-series data as $X_1(t-k),\cdots,X_1(t), X_2(t-d),\cdots,X_2(t)$ as $X$ (input) and $X_1(t+1), X_2(t+1)$ as $y$ (target), where $k$ and $d$ refer to different lags for each variable.
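The sliding-window scheme described above can be written down directly. The helper below is an illustrative sketch: the array shapes and the single shared lag length are my assumptions (with per-variable lags $dt_n$ you would simply take the largest one):

```python
import numpy as np

def make_windows(X, Y, lag):
    """Pair each length-`lag` window of the inputs X (shape [T, n_inputs])
    with the targets at the step right after the window, i.e. the model
    sees X over [t-lag, t) and outputs Y(t)."""
    xs, ys = [], []
    for t in range(lag, len(X)):
        xs.append(X[t - lag:t])  # inputs over [t-lag, t)
        ys.append(Y[t])          # targets y1(t), y2(t)
    return np.stack(xs), np.stack(ys)
```

A Keras LSTM would then take inputs of shape (lag, n_inputs) and produce the two targets; "applying the model on a larger interval" just means sliding the same fixed-size window over the new data.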
{ "domain": "datascience.stackexchange", "id": 4414, "tags": "neural-network, time-series, regression, lstm" }
Big-O bounds on the k-th largest element of iid Gaussians
Question: I'm interested in the following problem. Let $X_1, \dots, X_n$ be iid samples with a $N(0,1)$ distribution. Let $X_{[k]}$ be the $k$-th largest element of $\{X_1, \dots, X_n\}$, so e.g. $X_{[1]} = \max_i X_i$. Are there simple big-O bounds on $X_{[k]}$ for, say, all $k < n/2$? I know that $X_{[1]} = O(\sqrt{\log{n}})$ (see here), and more generally, $X_{[c]} = O(\sqrt{\log{n}})$ for constant $c$ (see here). But what about $X_{[\log{n}]}$ or $X_{[n/4]}$, for example? My motivation is that I want to calculate $\sum_{i=1}^k E[X_{[i]}]$ for various values of $k$. Thanks! Answer: This is not a complete answer by any means, but just a quick estimate on $\mathbb{E}[\sum_{i=1}^k X_{[i]}]$ that is slightly better than the trivial bound of $O(k\sqrt{\log n})$. If this is your goal, I would think it is easier to go directly for it than consider any given $X_{[k]}$. Let $X_S=\sum_{i\in S} X_i$ for a subset $S\subseteq [n]$ and $Y_k=\sum_{i=1}^k X_{[i]}$. Using the same technique as in the first link, \begin{align} \exp(t\mathbb{E}[Y_k])&\leq \mathbb{E}[\exp(tY_k)]\\ &=\mathbb{E}\bigg[\max_{S\subseteq [n]:\vert S\vert=k}\exp(tX_S)\bigg]\\ &\leq \sum_{S\subseteq [n]:\vert S\vert=k}\mathbb{E}[\exp(tX_S)]\\ &=\sum_{S\subseteq [n]:\vert S\vert=k}\exp\bigg(\frac{kt^2}{2}\bigg)\\ &={n \choose k}\exp\bigg(\frac{kt^2}{2}\bigg). \end{align} Taking (natural) logarithms, we find $$ \mathbb{E}[Y_k]\leq \frac{\log{n \choose k}}{t}+\frac{kt}{2}. $$ Optimizing gives $t=\sqrt{\frac{2\log{n \choose k}}{k}}$ so we deduce $$ \mathbb{E}[Y_k]\leq \sqrt{2k\log{n \choose k}}. $$ For $k=o(n)$, it is known that $\log{n \choose k}=(1+o(1))k\log(n/k)$, so in this regime, we get $$ \mathbb{E}[Y_k]\leq k\sqrt{2(1+o(1))\log(n/k)}, $$ which for $k=\log n$, say, is a bit better than the naive bound. When $k=\alpha n$ for some constant $\alpha$, one instead has $\log{n \choose k}=(1+o(1))H(\alpha)n$, where $H(p)=-p\log p-(1-p)\log(1-p)$ is the entropy function. 
So in this regime, that would give $$ \mathbb{E}[Y_k]\leq n\sqrt{2(1+o(1))\alpha H(\alpha)}. $$ Lower bounds added for completeness: for linear $k$, one can get decent linear lower bounds that degrade as $\alpha\to 0$ but is tight at $\alpha=1/2$. Let $\alpha<1/2$, then by the Chernoff bound, $$ \Pr(X_{\alpha n+1}\leq 0)\leq \exp\bigg(\frac{-(1/2-\alpha)^2n}{2}\bigg). $$ It is known that for $i<j$, the conditional distribution of $X_{[i]}$ given $X_{[j]}=x$ is the same as the unconditional distribution of $X_{[i]}$ taken from a sample of size $j-1$ conditioned to be larger than $x$. As we are summing over all $i\leq \alpha n$ and taking expectations, we find that $$ \mathbb{E}[\sum_{i=1}^{\alpha n} X_{[i]}\vert X_{\alpha n+1}\geq 0]\geq \mathbb{E}[\sum_{i=1}^{\alpha n} Y_i], $$ where the $Y_i$ are normal random variables conditioned to be larger than $0$, which are known to have expectation $\sqrt{2/\pi}$. We also find that $$ \mathbb{E}[\sum_{i=1}^{\alpha n} X_{[i]}\vert X_{\alpha n+1}\leq 0]\geq 0, $$ as for each $x<0$, conditioning on $X_{\alpha n}=x$, by the same reasoning we are now taking expectations over a sample of size $\alpha n$ normal random variables conditioned to be at least $x$, which by symmetry of the normal distribution is at least $0$ (formally, we are using that the sum over the whole sample makes the labelling of the order statistics irrelevant). We conclude that $$ \mathbb{E}[\sum_{i=1}^{\alpha n} X_{[i]}]\geq \bigg(1-\exp\bigg(\frac{-(1/2-\alpha)^2n}{2}\bigg)\bigg)\alpha n\sqrt{2/\pi}=(1-o(1))\alpha n \sqrt{2/\pi}. $$ This holds for all $\alpha<1/2$, and by monotonicity of the left hand side in $\alpha$ for $\alpha<1/2$ and taking limits, we can further conclude using the upper bound in the comments $$ \mathbb{E}[\sum_{i=1}^{n/2} X_{[i]}]=(1-o(1))n\sqrt{1/(2\pi)}. $$ Obviously similar lower bounds hold for $k=o(n)$ but these aren't super useful as they are of vastly different order than the upper bound. 
Numerical Simulations: Numerical simulations suggest that these bounds are surprisingly decent in the two regimes mentioned in the post. Note that for any $k$, the function $f(x_1,\ldots,x_n)=\sum_{i=1}^k x_{[i]}$ is Lipschitz, so by Gaussian concentration, one expects simulations to roughly approximate the expectation. For instance, for $k=\log(n)$, here is a plot of the upper bound with that of simulation, with the top curve being the naive bound of $\sqrt{2}\log(n)^{3/2}$ and the lower curve the given bound: In the linear $k$ regime, for the few values I tried, the constant of the upper bounds seems to be off by a factor of at most $3$ (at $\alpha=1/2$, where we have computed the exact asymptotics). For instance, for $k=n/4$, the plot with the bound looks like: The upper bounds seem pretty good for small constant $\alpha$, while the lower bounds degrade. For instance, here is $\alpha=.001$: Here's one more plot (because I can't resist) with $k=\sqrt{n}$: Hope this helps!
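For readers who want to reproduce the comparison, here is a minimal Monte Carlo sketch of the simulation described above (the particular values of $n$, $k$ and the trial count used in the test are arbitrary choices, not the ones behind the plots):

```python
import math
import numpy as np

def upper_bound(n, k):
    """The bound sqrt(2 k log C(n, k)), with the log-binomial via lgamma."""
    log_binom = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    return math.sqrt(2 * k * log_binom)

def mean_topk_sum(n, k, trials, rng):
    """Monte Carlo estimate of E[Y_k] = E[sum of the k largest of n iid N(0,1)]."""
    samples = rng.standard_normal((trials, n))
    topk = np.sort(samples, axis=1)[:, -k:]  # the k largest entries per trial
    return topk.sum(axis=1).mean()
```

Since $Y_k$ is a Lipschitz function of the sample, Gaussian concentration makes even a modest number of trials give a tight estimate of the expectation.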
{ "domain": "cstheory.stackexchange", "id": 4752, "tags": "lower-bounds, pr.probability, upper-bounds" }
Two masses on a frictionless surface connected with a spring
Question: I have a problem with an assignment with two masses on a frictionless plane connected with a spring. Both masses are 1 kg, and the distance between them (the length of the spring) is 0.4 m. The spring constant k = 8 N/m. The first mass is given an initial velocity of 1 m/s. The assignment is based on Walter Lewin's lecture here at 48 minutes: http://videolectures.net/mit801f99_lewin_lec15/ I'm asked to write a Python program displaying the positions of both masses, but I'm struggling with the formulas. This is what I have so far: Acceleration of each object (m = 1 kg): $$m_1:\ \mathrm{d}^2x_1/\mathrm{d}t^2 = -k(x_1-x_2+L)$$ $$m_2:\ \mathrm{d}^2x_2/\mathrm{d}t^2 = -k(x_2-x_1-L)$$ Equations: $$\omega=\sqrt{k/m}=\sqrt{8}$$ $$x = A\cos(\omega t)+B\sin(\omega t)$$ For m1 (t=0, pos=0 and v=1): $$x_1=(1/\omega)\cos(\omega t)/2$$ I divide by 2 to get the oscillation of just m1 (wrong?). For m2 (t=0, pos=0.4, v=0): $$0.4\cos(\omega t)$$ Again, I divide by 2 to get the oscillation of just m2. Problem: How do I make m2 dependent on m1's position? And how do I make them "move" along as they should? I have tried to create a function for the center of mass, CM(t)=0.2*0.5t, to use as a reference of movement, but I can't get it right. Can someone give me a pointer in the right direction? Answer: Once the mass is released, the center of mass will move at a constant velocity. Superposed on that is the relative motion of the two masses - first towards each other, then away. They will be in exact antiphase, so the center of mass has constant velocity. Your mistake was to set x up as a cosine function - that implies that it is at an extreme of position at t=0, when in fact it is at an extreme of velocity (so the position is "in the middle" and you need to use a sin function). That leads me to the following (you will need to check this and prove individual steps along the way, which I did not give... 
it's not solving the differential equations but jumping to the answer - but I think that you should be able to work towards this answer with the above hint).

    import numpy as np
    import math
    import matplotlib.pyplot as plt

    k = 8
    l = 0.4
    m1 = 1.0
    m2 = 1.0
    m_reduced = (m1*m2)/(m1+m2)
    omega = np.sqrt(k / m_reduced)
    t = np.linspace(0, 8.0 * math.pi / omega, 500)
    a1 = 1.0 / (2*omega)  # because we know dx/dt at t=0 - but half is due to c.o.m.
    a2 = -a1
    v_com = 0.5  # since one mass moves at 1 m/s and the other is initially stationary
    x1 = a1 * np.sin(omega * t) + v_com * t
    x2 = a2 * np.sin(omega * t) + v_com * t + l

    plt.figure()
    plt.plot(t, x1, label='x1')
    plt.plot(t, x2, label='x2')
    plt.xlabel('time (s)')
    plt.ylabel('position (m)')
    plt.legend()
    plt.show()
{ "domain": "physics.stackexchange", "id": 20681, "tags": "homework-and-exercises, harmonic-oscillator, spring" }
Trying to find an idiomatic Rust way of calling a series of functions and early out'ing on failure of one
Question: I would like to condense down a bunch of function calls that occur sequentially, and need to early out so they don't waste more computation later on. I've been able to get it down to the following, but I am wondering if there's a cleaner way of doing this or not. Here is an MCVE:

    pub struct MyStruct {
        // ...
    }

    impl MyStruct {
        pub fn new(data: &[u8]) -> Option<MyStruct> {
            let result = MyStruct {
                // initialize fields
            };

            // CAN I DO THIS PART BETTER?
            result.process_stuff1()?;
            result.process_stuff2()?;
            // ...
            result.process_stuffn()?;

            Some(result)
        }

        fn process_stuff1(&self) -> Option<()> {
            // ...
            println!("Ran process_stuff1");
            Some(())
        }

        fn process_stuff2(&self) -> Option<()> {
            // ...
            println!("Ran process_stuff2");
            None
        }

        // ...

        fn process_stuffn(&self) -> Option<()> {
            // ...
            println!("Ran process_stuffn");
            Some(())
        }
    }

    fn main() {
        let data: [u8; 1] = [0];
        match MyStruct::new(&data[..]) {
            Some(_) => println!("Success"),
            None => println!("Failure")
        }
    }

with the expected output (I made process_stuff2 fail just to make sure it did in fact short circuit):

    Ran process_stuff1
    Ran process_stuff2
    Failure

I feel like I'm abusing the language a bit, but it is nicer to read than something like

    if !result.process_stuff1() { None }
    if !result.process_stuff2() { None }
    // ...etc

Is there a better way of doing this? I was trying to see if there was some kind of all(...) function that I could call. I am new to Rust so there might be better ways of doing this, and you should not assume I know a lot. 
Answer: Technically you could use iterators to do this:

    pub fn new(data: &[u8]) -> Option<MyStruct> {
        let result = MyStruct {
            // initialize fields
        };

        [
            Self::process_stuff1,
            Self::process_stuff2,
            Self::process_stuffn,
        ]
        .into_iter()
        .map(|f| f(&result))
        .collect::<Option<()>>()?;

        Some(result)
    }

The reason this works is that FromIterator (the trait collect() uses) is implemented for Option as an early exit (aborting iteration), and is implemented for () to just return () (any number of ()s become ()). However, I wouldn't recommend actually writing this code - it is less clear than what you have, and no more concise unless you have a lot more functions to call. I think you should keep the code structure you have, unless there is some way to express your multiple functions as, say, one function with different parameters. One thing I do think you should consider changing is returning a Result instead of an Option. The logic with ? is exactly the same, but you can return an "error" value reporting which of the sequence of functions failed. This can be key to efficiently diagnosing either program bugs or problems with the input data. (Of course, this may be unnecessary for reasons in your actual application, like obvious side effects of each function.)
{ "domain": "codereview.stackexchange", "id": 43943, "tags": "rust" }
Why does a voltmeter give a positive reading for a path in the same direction as the electric field?
Question: Straight from my textbook: If the direction of the path from initial location to final location is the same as the direction of the electric field, the potential difference is negative. Yet a voltmeter will provide a positive reading if you put the positive lead at the location with higher potential and the negative lead at the location with lower potential. Why is this? Answer: Yet a voltmeter will provide a positive reading if you put the positive lead at the location with higher potential But this is precisely what one should expect given the quote in your question. Consider a conductor with resistance $R$, oriented vertically, and with a constant downward electric (conventional) current through. In this case, there is a downward directed electric field within the conductor. Consider placing the voltmeter leads at the same point on the conductor - the voltmeter will read zero (assuming no time varying fields). Now, leave the black lead at that point and move the red lead to a point on the conductor below the black lead so that the red lead is moved in the direction of the electric field. The voltmeter reads a negative value since the black lead is at a higher potential than the red lead. This is consistent with the quote in your question. Conversely, moving the red lead to a point on the conductor above the black lead moves the red lead against the direction of the electric field and the voltmeter reads a positive value; the red lead is at a higher potential than the black lead.
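The sign convention can be checked with a toy calculation. The field magnitude and the linear potential below are illustrative assumptions (a uniform field along a straight conductor), not part of the answer:

```python
# Uniform field of magnitude E pointing toward decreasing y ("down"),
# so the potential V(y) = E * y increases opposite to the field.
E = 10.0  # V/m, illustrative value

def voltmeter_reading(y_red, y_black):
    """Reading = V(red lead) - V(black lead)."""
    return E * y_red - E * y_black
```

Moving the red lead below the black lead (i.e. in the direction of the field) gives a negative reading; moving it above (against the field) gives a positive one, matching both the textbook quote and the voltmeter behaviour.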
{ "domain": "physics.stackexchange", "id": 21114, "tags": "electromagnetism, electric-fields, voltage" }
Probabilistic gold standard vs Deterministic gold standard
Question: I understand that we call something a gold standard when it involves human intervention/judgement/review. But can someone help me understand the difference between a probabilistic gold standard and a deterministic gold standard? For example: "patient has cancer or not" - a binary response - is a deterministic gold standard, which can be provided by humans. Whereas "patient has a 60% chance of having cancer and a 40% chance of not having cancer" - am I right to understand that this is called a probabilistic gold standard, but that it can't be produced by humans? Can any human/doctor, for example, say this patient has a 60% chance of having cancer and a 40% chance of not having cancer? Answer: No, a deterministic gold standard is when you know for certain. If a person does not have a diagnosis, then he doesn't have the disease/condition. Doctors are not supposed to give a probability, but we as human beings always like to know the likelihood. For example, person A, who is 27 years old and has the coronavirus, is highly unlikely to die of the virus, but it's not impossible.
{ "domain": "datascience.stackexchange", "id": 7101, "tags": "machine-learning, deep-learning, classification, data-mining, supervised-learning" }
What is meant by "complexity of carbon sources"?
Question: My biology book states "life on earth depends upon carbon based molecules, most of these food sources are also carbon based. Depending upon the complexity of these carbon sources, different organisms can then use different kinds of nutritional processes" Please explain this statement. Answer: Consider all the different types of carbon molecules one can find on earth. Proteins. Sugars like cellulose and starch. Alkanes like methane and tar. CO2. Alcohols. A boatload of variety. Now think of how each one is used or can be used by an organism. I will chow down on a potato. And some alcohol. Termites eat cellulose. Anaerobic bacteria use methane. Plants eat up the CO2. One carbon compound nothing eats is CaCO3, or limestone. I am not sure why. Oh - also diamond. Not many things can metabolize a diamond. Although maybe they don't get the opportunity to try...
{ "domain": "biology.stackexchange", "id": 7179, "tags": "human-biology, molecular-biology, cell-biology, microbiology, structural-biology" }
How is it possible that combustion of coal releases similar energy as TNT explosion while intuitively we would not expect that?
Question: According to Wikipedia, the energy released in a TNT explosion is $4 \times 10^6$ J/kg. https://en.wikipedia.org/wiki/TNT According to the web, combustion of coal releases around $24 \times 10^6$ J/kg. https://www.world-nuclear.org/information-library/facts-and-figures/heat-values-of-various-fuels.aspx This looks rather counter-intuitive: TNT is famous for its explosion, thus I would expect that it releases a lot of energy, but actually, it seems much smaller than coal combustion... How is it possible that the combustion of coal releases energy similar to a TNT explosion, while intuitively we would not expect that? Answer: There are some marked differences that make $\text{TNT}$ far more suitable than the combustion of coal for explosive purposes. Firstly, the decomposition reaction of $\text{TNT}$: $$2 \text{C}_7\text{H}_5\text{N}_3\text{O}_6 \to 3 \text{N}_2 + 5 \text{H}_2 + 12 \text{CO} + 2 \text{C}$$ proceeds far faster than the combustion reaction of coal: $$\text{C}+\text{O}_2 \to \text{CO}_2$$ Secondly, the decomposition of $\text{TNT}$ produces far more gaseous reaction products than the combustion of coal: respectively $10\text{ mol}$ of gas per $\text{mol}$ of $\text{TNT}$ versus $1\text{ mol}$ of gas per $\text{mol}$ of coal (and the latter requires $1\text{ mol}$ of $\text{O}_2$ for the combustion to take place). It's the production of gaseous reaction/decomposition products that makes a good explosive: the super-fast build-up of gas inside the shell makes the pressure increase until the shell bursts, releasing all its energy at once.
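The gas bookkeeping in the answer follows directly from the two balanced equations; the snippet below just restates that stoichiometry (it is a mole count, not an energy calculation):

```python
# Per the balanced equations: 2 TNT -> 3 N2 + 5 H2 + 12 CO + 2 C(s),
# while coal combustion is C + O2 -> CO2.
tnt_gas_mol_per_mol = (3 + 5 + 12) / 2  # gaseous product moles per mole of TNT
coal_gas_produced = 1                   # mol CO2 per mole of carbon burned
coal_oxygen_needed = 1                  # mol O2 that must be drawn from outside

# TNT carries its own oxidizer and releases 10 mol of gas per mole of fuel,
# all at once; coal needs external O2 and yields only 1 mol of gas per mole.
```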
{ "domain": "physics.stackexchange", "id": 71082, "tags": "energy" }
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()
Question: Firstly, I have a pandas series of recommended products (recmd_prdt_list). In this series there is a possibility of the presence of deleted products. So as to remove deleted products from the recommended products, I did the following:

    recmd_prdt_list = user_lookup['Recommended items']
    recmd_prdt_list

    0    PLV08, PLPD04, PBC07, 555, PLF02, 963, PLF07, ...
    1    123, 345, R922, Asus009, AIMAC, Th001, SAM S9,...
    2    LGRFG, LG, 1025, COFMH, 8048, BY7, PLHL4, 569,...
    3    COFMH, 5454, 8048, 1025, LG, len123, Th001, PL...
    4    LGRFG, AIM-Pro, 569, Asus009, PLHL3, PL04, PLH...
    5    PLV08, PLF09, PLF02, PBC04, PLF07, AIM-Pro, PL...

    type(recmd_prdt_list)
    pandas.core.series.Series

DataFrame of product status:

    product_status

    ItemCode   Status  DeletedStatus
    AIMAC      2       True
    AIM-Pro    2       True
    SAM S9     2       True
    SH MV      2       True
    COFMH      2       True
    LGRFG      2       True

    type(product_status)
    pandas.core.frame.DataFrame

    first_row = user_lookup['Recommended items'][0]
    first_row
    'PLV08, PLPD04, PBC07, 555, PLF02, 963, PLF07, HG8, jealous21, 4'

    type(first_row)
    str

Converting the str to a list:

    first_row_list = list(first_row.split(","))
    first_row_list
    ['PLV08', ' PLPD04', ' PBC07', ' 555', ' PLF02', ' 963', ' PLF07', ' HG8', ' jealous21', ' 4']

From the first row I took the first itemcode to check the deleted status:

    product_details = product_status.loc[product_status['ItemCode'] == 'PLV08']
    product_details

    ItemCode  Status  DeletedStatus
    PLV08     2       False

    type(product_details)
    pandas.core.frame.DataFrame

    product_details['DeletedStatus']
    693    False
    Name: DeletedStatus, dtype: bool

So as to check the deleted status of each product in the respective row and save it to a new list, 
I wrote the following code : itemcode = 'PLV08' activ_product = [] if itemcode in product_status['ItemCode'].values: print(itemcode) product_details = product_status.loc[product_status['ItemCode'] == itemcode] print(product_details) if product_details['Status'] == 2 & product_details['DeletedStatus'] == 'False': activ_product.append(itemcode) Error : PLV08 ClientId ItemCode Status DeletedStatus 499 2213 PLV08 2 False --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-35-9507e1ada5f7> in <module>() 5 product_details = product_status.loc[product_status['ItemCode'] == itemcode] 6 print(product_details) ----> 7 if product_details['Status'] == 2 & product_details['DeletedStatus'] == 'False': 8 activ_product.append(itemcode) ~/.virtualenvs/sysg_python3/lib/python3.5/site-packages/pandas/core/generic.py in __nonzero__(self) 951 raise ValueError("The truth value of a {0} is ambiguous. " 952 "Use a.empty, a.bool(), a.item(), a.any() or a.all()." --> 953 .format(self.__class__.__name__)) 954 955 __bool__ = __nonzero__ ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). How do I solve this error? Answer: First of all, to make logical tests in Python, you should not use & for single-value equality tests (see this), and you should not put quotation marks around the boolean values False and True. Now, concerning your specific error: when writing product_details['Status'] and product_details['DeletedStatus'] you get a Series each time, and you cannot take the truth value of a logical and between two Series. If you have unique item codes, you can use: if product_details.iloc[0]['Status'] == 2 and product_details.iloc[0]['DeletedStatus'] == False: activ_product.append(itemcode) It will simply select the first row of product_details and subset the desired column so that the result is a single value and you can compare it.
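For completeness, the per-item check can also be done in one vectorized pass. The sketch below is my addition; the toy `product_status` table mirrors the columns from the question, but its values are made up:

```python
import pandas as pd

# Toy stand-in for the question's product_status table (values made up).
product_status = pd.DataFrame({
    "ItemCode": ["PLV08", "AIMAC", "AIM-Pro"],
    "Status": [2, 2, 2],
    "DeletedStatus": [False, True, True],
})

row = "PLV08, PLPD04, PBC07"                 # one row of 'Recommended items'
codes = [c.strip() for c in row.split(",")]  # strip the stray spaces

# Element-wise masks combined with & -- note the parentheses, since &
# binds tighter than ==, which is another pitfall of the original line.
mask = (
    product_status["ItemCode"].isin(codes)
    & (product_status["Status"] == 2)
    & ~product_status["DeletedStatus"]       # keep non-deleted items only
)
activ_product = product_status.loc[mask, "ItemCode"].tolist()
print(activ_product)  # ['PLV08']
```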
{ "domain": "datascience.stackexchange", "id": 5960, "tags": "python, pandas, dataframe" }
Design a quantization scheme around a desired quantization noise distribution
Question: I have a - perhaps naive - question regarding the quantization noise/error. Assuming the goal is not the performance of the quantizer but rather being able to model the quantization noise exactly - say, Gaussian with a certain mean and variance - are there any methods that would let one achieve this? There are no assumptions about the input distribution; it is most likely sampled from a mixture of different distributions. Edit for further explanation: My intention is to quantize a dataset, e.g. a set of images or general tensors. I have found out that the quantization noise is correlated with the input distribution, which gets in the way of modelling it independently. If one uses dithering, then AFAIK the noise is decoupled from the input distribution. But still the question remains: can I design a quantization scheme, e.g. lattice quantization, so that I can derive the exact mean/std of the Gaussian noise pdf? Answer: Let's get some terminology in place first. For simplicity I will use time-discrete signals, but it's the same for time-continuous signals as well. We assume an amplitude-continuous signal $x[n]$ that is turned into an amplitude-discrete signal $y[n]$ using the quantization process $Q$, so we have $$y[n] = Q\{x[n]\}$$ The quantization noise $q[n]$ is simply the difference $$q[n] = y[n]-x[n]$$ The quantization process is time invariant if the process quantizes the signal always the same way, or $Q\{x[n-N]\} = y[n-N]$. Most quantizers are time invariant simply because they want to minimize the quantization error, and "round to the nearest" does exactly that. For a time-invariant quantizer, quantization noise and signal are highly correlated. A simple example: if the signal is periodic, the quantization noise will also be periodic with the same period. The standard way of decorrelating noise and signal is to add a dither. This breaks the time invariance of the quantizer but also adds quantization noise. Dither design is always a trade-off.
For a time-invariant quantizer, the probability density function (PDF) of the quantization noise can be directly calculated from the PDF of the input signal and the quantization process. The PDF is the first derivative of the distribution function. With this out of the way, we can start diving into the questions. I have found out that the quantization noise is correlated with input distribution I don't think that's the case. The quantization noise is correlated to the input signal, but not to its distribution. Signals can be uncorrelated even if they have the same distribution, and correlated signals can have totally different distributions. I think you are confusing two different concepts here. If one uses dithering, then AFAIK the noise is decoupled from the input distribution. Again, no. Dithering de-correlates the quantization noise from the input signal, but that has nothing to do with either distribution. can I design a quantization scheme, e.g. lattice quantization, so that I can derive the exact mean/std of the Gaussian noise pdf? I think what you are asking here is "can I design a quantization process so that the PDF of the quantization noise is always the same regardless of the input signal's distribution". The answer to that one is "not really". The PDF of the quantization noise is a function of the PDF of the input signal. Consider the example of a binary input signal: for a time-invariant quantizer the quantization noise will also be binary, no matter how you quantize it. You can soften this up with a dither, but in order to get Gaussian quantization noise from a binary input signal the dither would basically have to drown the input signal completely. Things get a little easier if you can make simplifying assumptions.
For example, if we assume a uniform quantizer, a reasonably "smooth" PDF of the input signal and a large number of quantization steps, we can derive that the quantization noise is uniformly distributed over the quantization interval and that this is independent of the input's PDF. So I guess the final answer here is "maybe". It really depends on how you can constrain the problem and what assumptions you can make. If you want Gaussian quantization noise, you can try a uniform quantizer with a Gaussian-distributed dither, play around with the level of the dither and see if there is a sweet spot that works for you.
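To illustrate the last point numerically, here is a small sketch (my addition, with arbitrary parameter values) comparing the noise of a plain uniform quantizer with a subtractively dithered one for a "smooth" Gaussian input. In both cases the noise statistics come out close to the uniform prediction $\sigma = \Delta/\sqrt{12}$:

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.1                      # quantization step

def quantize(x):
    return np.round(x / delta) * delta

# A "smooth" input signal spanning many quantization steps:
x = rng.normal(0.0, 1.0, 200_000)

# Plain (time-invariant) quantizer: noise approx. uniform on [-delta/2, delta/2)
q_plain = quantize(x) - x

# Subtractively dithered quantizer: add a uniform dither before quantizing
# and subtract it afterwards -- the noise is uniform and uncorrelated with
# the input, regardless of the input PDF.
d = rng.uniform(-delta / 2, delta / 2, x.shape)
q_dith = (quantize(x + d) - d) - x

for name, q in [("plain", q_plain), ("dithered", q_dith)]:
    print(f"{name:8s} mean={q.mean():+.5f} std={q.std():.5f} "
          f"(uniform prediction std={delta / np.sqrt(12):.5f})")
```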
{ "domain": "dsp.stackexchange", "id": 11801, "tags": "quantization" }
why the perpendicular area for calculating the electric flux?
Question: When calculating the electric flux over a certain area using the formula: $$\Phi=\int_s \vec E \cdot d \vec A$$ Why does the electric field vector have to be parallel to the area vector? In other words, why is only the field perpendicular to the area considered while calculating the flux? I don't quite understand why, if the electric field vectors are not passing through the area perpendicularly, this results in a lower value for the electric flux. Answer: Definition: Flux by definition is the amount of a quantity going out of or entering a surface. Intuition: In the above diagram, the black line represents the surface for which the flux is being calculated and the red lines represent the direction of the flow of a quantity. In the above diagram, the quantity represented by the red lines is moving parallel to the surface. The quantity is not leaving the surface nor is any quantity entering the surface. Therefore, the flux is zero. In the above diagram, the quantity represented by the red lines is leaving — or entering, depending on your perspective — the surface. Therefore, there is a net flux through the surface. Mathematical definition: A vector dot product gives you the projection of a vector along another vector. The area vector is defined as the area in magnitude with direction normal to the surface. Consider the following: $$\phi = \vec{a}\cdot\vec{b}$$ The above equation gives the amount of $\vec{a}$ that is along the direction of $\vec{b}$, times the magnitude of $\vec{b}$. It is equivalent to taking the scalar projection of $\vec{a}$ and multiplying it with the magnitude of $\vec{b}$. As the flux by definition is numerically equal to the amount of quantity leaving the surface, we are concerned with the quantity passing perpendicularly through the surface. In this situation, the dot product helps us implicitly capture the above fact. $$\phi = \int\vec{E}\cdot d\vec{A}$$
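A tiny numerical illustration (my addition, with made-up field values) of how the dot product picks out only the component of $\vec{E}$ along the surface normal:

```python
import numpy as np

E = np.array([3.0, 0.0, 4.0])      # uniform field, V/m (made-up numbers)
A = 2.0                            # flat surface area, m^2
n_hat = np.array([0.0, 0.0, 1.0])  # unit normal of the surface

flux = np.dot(E, n_hat) * A        # only E_z, the perpendicular part, counts
print(flux)  # 8.0 (= 4.0 V/m * 2 m^2; the E_x component contributes nothing)

# Tilt the surface so its normal is perpendicular to E: the flux vanishes.
n_perp = np.array([0.0, 1.0, 0.0])
print(np.dot(E, n_perp) * A)  # 0.0
```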
{ "domain": "physics.stackexchange", "id": 53996, "tags": "electrostatics, electricity, electric-fields" }
How to use pcl_ros with ROS 2 Eloquent?
Question: Hello everybody, I am trying to port a ROS 1 object tracking package (https://github.com/praveen-palanisamy/multiple-object-tracking-lidar) to ROS 2 Eloquent. The package I need to port depends on the pcl_ros package (from the perception_pcl repo: https://index.ros.org/p/pcl_ros/github-ros-perception-perception_pcl/#Eloquent). However, I have read from the documentation that it has CATKIN as its build type, and this makes me a bit confused about its usage. In particular, I want to build it from source to make it available as a dependency, but I don't know how to deal with it to make it work with ROS 2. As far as I know, ROS 2 uses AMENT_CMAKE and not CATKIN. Is there a way to use it with ROS 2? If yes, could you describe to me how? OS: Ubuntu 18.04 ROS 2 distro: Eloquent (from source) Thank you in advance Originally posted by MaxCode on ROS Answers with karma: 13 on 2020-08-18 Post score: 1 Answer: pcl_ros has not been fully ported to ROS 2 yet. Only pcl_conversions has been completed. If you have need for pcl_ros, I could always use help porting it! Originally posted by stevemacenski with karma: 8272 on 2020-08-18 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by MaxCode on 2020-08-19: Thank you very much for your answer. Now I will analyze the situation to find a solution that suits my needs Comment by stevemacenski on 2020-08-19: Can you mark the answer as correct then? Comment by fabbro on 2022-02-24: Is there a way to see where the porting process is at now? Thanks
{ "domain": "robotics.stackexchange", "id": 35433, "tags": "ros, ros2, pcl, catkin, colcon" }
Kinect configuration
Question: Hi, I have a Kinect 1 and I installed the package openni_launch. When I run rosrun openni_launch openni.launch, I receive: No devices connected .... waiting for devices to be connected The green LED of the Kinect flashes. What can I do, please? Originally posted by Emilien on ROS Answers with karma: 167 on 2016-05-23 Post score: 0 Answer: Try this roslaunch freenect_launch freenect.launch Originally posted by ROSkinect with karma: 751 on 2016-05-23 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 24716, "tags": "ros" }
How to check if 2 quantum bits are orthogonal?
Question: How would you check if 2 qubits are orthogonal with respect to each other? I need to know this to solve this problem: You are given $2$ quantum bits:$$ \begin{align} |u_1\rangle &= \cos\left(\frac{x}{2}\right) |0\rangle + \sin\left(\frac{x}{2}\right)e^{in} |1\rangle \tag{1} \\[2.5px] |u_2\rangle &= \cos\left(\frac{y}{2}\right) |0\rangle + \sin\left(\frac{y}{2}\right)e^{im} |1\rangle \tag{2} \end{align} $$where $m-n = \pi$ and $x+y=\pi$. Answer: Keep in mind that $|0\rangle$ and $|1\rangle$ are orthonormal basis vectors of a two-dimensional complex vector space (over the field of complex numbers). To check whether $|u_1\rangle$ and $|u_2\rangle$ are orthogonal you'll have to check whether the standard inner product $\langle u_1|u_2\rangle$ is $0$. Here $\langle u_1|$ refers to the bra vector corresponding to the ket vector $|u_1\rangle$. In matrix notation that would simply mean that $\langle u_1|$ is the complex conjugate transpose a.k.a Hermitian conjugate of $|u_1\rangle$. In your case: $|u_1\rangle = \cos(\frac{x}{2}) \begin{bmatrix} 1 \\ 0 \end{bmatrix} + (\sin(\frac{x}{2}))e^{in} \begin{bmatrix} 0 \\ 1 \end{bmatrix}$ and $|u_2\rangle = \cos(\frac{y}{2}) \begin{bmatrix} 1 \\ 0 \end{bmatrix} + (\sin(\frac{y}{2}))e^{im} \begin{bmatrix} 0 \\ 1 \end{bmatrix}$ Now, you'll find $\langle u_1|$ to be: $\cos(\frac{x}{2}) \begin{bmatrix} 1 & 0 \end{bmatrix} + (\sin(\frac{x}{2}))e^{-in} \begin{bmatrix} 0 & 1 \end{bmatrix}$ Now, simply carry out the multiplication of the matrices $\langle u_1|$ and $ |u_2\rangle$. If it turns out to be $0$, they're orthogonal. Or else they're not orthogonal.
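A quick numerical check of this recipe (my addition, not part of the original answer) confirms that under the stated constraints $x+y=\pi$ and $m-n=\pi$ the inner product $\langle u_1|u_2\rangle$ vanishes for any $x$ and $n$, since it reduces to $\cos(\tfrac{x}{2})\cos(\tfrac{y}{2}) - \sin(\tfrac{x}{2})\sin(\tfrac{y}{2}) = \cos(\tfrac{x+y}{2}) = \cos(\tfrac{\pi}{2}) = 0$:

```python
import numpy as np

def ket(angle, phase):
    """|u> = cos(angle/2)|0> + sin(angle/2) e^{i*phase} |1>."""
    return np.array([np.cos(angle / 2),
                     np.sin(angle / 2) * np.exp(1j * phase)])

rng = np.random.default_rng(1)
residuals = []
for _ in range(5):
    x = rng.uniform(0, np.pi)
    n = rng.uniform(0, 2 * np.pi)
    y = np.pi - x            # constraint x + y = pi
    m = n + np.pi            # constraint m - n = pi
    # np.vdot conjugates its first argument, so this is <u1|u2>
    inner = np.vdot(ket(x, n), ket(y, m))
    residuals.append(abs(inner))

print(max(residuals))  # effectively zero: the states are orthogonal
```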
{ "domain": "quantumcomputing.stackexchange", "id": 190, "tags": "quantum-state" }
ROS for an autonomous boat
Question: As part of an internship I was asked to design and develop the core control system for an autonomous small-scale (2 m length) solar vessel to be able to sail around the Baltic Sea. The boat should be able to sail following predefined waypoints but, thanks to AIS, a camera (collision avoidance) and a path-planning algorithm, redefine its route according to the obstacles sensed. For the hardware part, it runs a Raspberry Pi with the high-level navigation system and an Arduino to control the propeller and actuators as well as provide basic navigation functions in case of Raspberry failure. Now, before digging into coding I checked for existing solutions and found the ROS (Robot Operating System) middleware, which comes with interesting abstractions for multi-threading and multi-processing, and message exchange locally and among diverse hardware architectures (Raspberry and Arduino in this case). However, I am concerned that ROS would add considerable load on the Raspberry processor and increase power consumption, that it would prevent fine-grained control over hardware, and that it would probably cause system instability in the long run. The control software has to access sleep functions on sensors and on the Pi itself, in case of power shortages, to dynamically suspend and restart processes, and it needs to run 24/7 for months without human interaction. Is ROS suited for these tasks or should I think about creating custom software from scratch? Thanks Answer: ROS will work fine for this task. It will add some additional overhead for your Raspberry Pi but it is fairly small provided you only install the Robotic or Base configuration instead of the Full configuration and are using a headless (no GUI) Raspberry Pi install. My company uses ROS on self-driving cars and ROS has, so far, never been the cause of a fault - it has always been hardware- or coding-related. Here is your trade-off: ROS Pros: Fairly complete, stable messaging framework.
Allows inter-process message passing without network overhead (see nodelets). Large library of pre-defined message types and utilities (like tf which does coordinate transforms with very little effort). Community support for portions of your framework that you wouldn't otherwise have. ROS Cons: Some overhead associated with running roscore and the message serialization/deserialization process. Without implementing the features on your own, there is very little in the way of Quality of Service regarding message transport (this is coming in ROS 2.0). Only supports Python and C++. This is not a problem for some people but could be a drawback for others.
{ "domain": "robotics.stackexchange", "id": 1464, "tags": "ros, research, automation" }
Taking derivatives of traces over matrix products
Question: I started with evaluating the following derivative with respect to a general element of an $n\times n$ matrix, $$\frac{\partial}{\partial X_{ab}}\left(\mathrm{Tr}{(XX)}\right)$$ I wrote out the trace in index notation in order to get a sense of how I might take the derivative term-by-term: $$\mathrm{Tr}{(XX)} = X_{ij}X_{ji} = \sum_{j=1}^n X_{1j}X_{j1} + \sum_{j=1}^n X_{2j}X_{j2} + \dotsm + \sum_{j=1}^n X_{nj}X_{jn}$$ From this, it appears to me that $$\frac{\partial}{\partial X_{ab}}\left(\mathrm{Tr}{(XX)}\right) = 2X_{ba}$$ I am struggling with a more complicated case, the following derivative, $$\frac{\partial}{\partial X_{ab}}\left(\mathrm{Tr}{\left([X,Y]^2\right)}\right).$$ First I aimed to simplify using the properties of the trace over a product, i.e. \begin{align} \mathrm{Tr}{\left([X,Y]^2\right)} &= \mathrm{Tr}{\left(\left(XY-YX\right)^2\right)} = \mathrm{Tr}{\left(XYXY-XYYX-YXXY+YXYX\right)}\\ &= 2\mathrm{Tr}{\left(XYXY-XYYX\right)} \end{align} where in the last line I've used the cyclic property of the trace. Now I express this in index notation, \begin{align} \mathrm{Tr}{\left([X,Y]^2\right)} &= 2\mathrm{Tr}{\left(X_{ik}Y_{km}X_{ml}Y_{lj}-X_{ik}Y_{km}Y_{ml}X_{lj}\right)}\\ &= 2\left(X_{ik}Y_{km}X_{ml}Y_{li}-X_{ik}Y_{km}Y_{ml}X_{li}\right) \end{align} Now to evaluate the derivative, \begin{align} \frac{\partial}{\partial X_{ab}}\left(2\left(X_{ik}Y_{km}X_{ml}Y_{li}-X_{ik}Y_{km}Y_{ml}X_{li}\right)\right) &= 2\frac{\partial}{\partial X_{ab}}\left(X_{ik}Y_{km}\left(X_{ml}Y_{li}-Y_{ml}X_{li}\right)\right)\\ &= 2\frac{\partial}{\partial X_{ab}}\left(X_{ik}Y_{km}\left[X,Y\right]_{mi}\right)\\ &= 2Y_{km}\left(\left(\frac{\partial}{\partial X_{ab}}X_{ik}\right)\left[X,Y\right]_{mi} + X_{ik}\left(\frac{\partial}{\partial X_{ab}}\left[X,Y\right]_{mi}\right) \right) \end{align} At this point I'm a bit stuck. I don't feel confident with the last few steps. 
Alternatively I've tried looking at these terms as explicit summations, but I get a bit bogged down by the fact that there are four summation variables in each term. I think there is a more efficient way of thinking about these terms, but it has escaped me so far. Answer: You can simplify the computations considerably by writing: $$\operatorname{Tr}\left(XY-YX\right)^2 = A_{ij}A_{ji}$$ where $$A_{ij}=X_{ik}Y_{kj}-Y_{ik}X_{kj}$$ Using $$\frac{\partial A_{ij}}{\partial X_{ab}} = \delta_{ia}Y_{bj}-\delta_{jb}Y_{ia}$$ You then find that the derivative is: $$2\left(\delta_{ia}Y_{bj}-\delta_{jb}Y_{ia}\right) A_{ji} = 2 \left(Y_{bj}A_{ja} -A_{bi}Y_{ia}\right) $$ It's then easy to substitute the expression for $A$ in here and write out the expression in terms of $X$ and $Y$.
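One can sanity-check both results numerically. The sketch below (my addition, with random test matrices) compares the analytic gradients — $2X_{ba}$ from the question and $2(YA-AY)_{ba}$ from the answer — against central finite differences; since both traces are quadratic in $X$, the finite differences are exact up to roundoff:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
X = rng.normal(size=(n, n))
Y = rng.normal(size=(n, n))
eps = 1e-6

def f(X):
    A = X @ Y - Y @ X
    return np.trace(A @ A)          # Tr([X, Y]^2)

def fd_grad(g):
    """Central finite-difference gradient of a scalar function g(X)."""
    out = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            dX = np.zeros((n, n)); dX[a, b] = eps
            out[a, b] = (g(X + dX) - g(X - dX)) / (2 * eps)
    return out

# dTr(XX)/dX_ab = 2 X_ba, i.e. the transpose of 2X:
g1 = fd_grad(lambda Z: np.trace(Z @ Z))
print(np.max(np.abs(g1 - 2 * X.T)))  # tiny: agrees with 2 X_ba

# dTr([X,Y]^2)/dX_ab = 2 (Y A - A Y)_{ba} with A = [X, Y]:
A = X @ Y - Y @ X
grad_analytic = 2 * (Y @ A - A @ Y).T
grad_fd = fd_grad(f)
print(np.max(np.abs(grad_fd - grad_analytic)))  # tiny: they agree
```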
{ "domain": "physics.stackexchange", "id": 68069, "tags": "homework-and-exercises, differentiation, calculus, trace, matrix-elements" }
Universe expanding - Interstellar space, Intergalactic space or Intersupercluster space?
Question: When we say that the universe is expanding, the space between everything is not. The space between atoms, the space between my house and school, and the space between the Earth and the Sun remain the same, due to the four fundamental forces of nature. However, at what scale does the gravitational force become small enough for us to see the expansion of space? Is it between the stars - interstellar? Is it between the galaxies - intergalactic? Is it between the superclusters - intersupercluster? Or something greater that I'm not aware of? Answer: You are basically asking at what scale dark energy overcomes gravity. Now, inside a galaxy, gravity dominates and space is not expanding. And the other fundamental forces are much stronger on the quantum scale, so the EM and strong forces keep atoms together; these are not expanding either. Now between galaxy clusters, space is expanding, but the rate of expansion is not as fast as in the voids between superclusters. Yes, on the large scale, the rate of expansion is uniform, but as you go from the space between galaxy clusters to the voids between superclusters, the rate gets faster. Please see here: https://physics.stackexchange.com/a/482814/132371
{ "domain": "physics.stackexchange", "id": 58765, "tags": "cosmology, universe, space-expansion" }
Why don't what I can define as "aerophilic viruses" establish colonies?
Question: Throughout my life I have seen bacterial and fungal colonies of various sizes and colors, mostly on food or in hot, humid, dirty environments. Putting bacteria and microfungi, as well as other microorganisms such as microalgae and archaea, aside, I ask here as a non-biologist: Why do what I can define as "aerophilic viruses" (viruses that can survive significant amounts of time in the air, and especially on surfaces interacting with the air - around 12 hours or more) not establish colonies? If such "aerophilic viruses" are so common and constantly going through evolution (perhaps faster than any other organism), why is a microscope needed to see them, and why can one not see gatherings of them at the macro level? Answer: Viruses cannot reproduce on their own; they can only reproduce by taking over a cellular organism. For this reason, the idea of a 'virus colony' does not make sense. You could have a colony of mold or bacteria that is infected by a virus, but a virus by itself could never create a single offspring, let alone a macroscopic colony. If there were some type of mutated virus that could reproduce independently, then I would argue that it would be a new type of life and not a virus.
{ "domain": "biology.stackexchange", "id": 10399, "tags": "microbiology" }
Relative Sign In XXZ Chain
Question: This is a relatively simple question that I just want confirmation on. In literature, I have seen 2 ways of writing the Heisenberg XXZ Chain: 1.) $H = -J \sum_{n=1}^{N}\left(S_n^xS_{n+1}^x+ S_n^yS_{n+1}^y + \Delta S_n^zS_{n+1}^z\right)$ 2.) $H = \sum_{n=1}^{N}\left(S_n^xS_{n+1}^x+ S_n^yS_{n+1}^y + \Delta S_n^zS_{n+1}^z\right)$ I would like to understand how to reconcile the 2 expressions. In 2.), when $\Delta = -1$, we say the chain is ferromagnetic. In expression 1.), the chain is ferromagnetic when $\Delta = 1, J > 0$. Thus, it seems to me that what is most important is actually the relative sign between the ZZ terms and the XX, YY terms? How can one go from expression 1.) to expression 2.)? Answer: The two expressions can be shown to be equivalent using a canonical transformation that implements a spin rotation by $\pi$ about the spin $z$ axis on every other site. Explicitly, this transformation may be chosen to take $$ S_{2n}^x \rightarrow S_{2n}^x, \quad S_{2n+1}^x\rightarrow -S_{2n+1}^x,\\ S_{2n}^y \rightarrow S_{2n}^y, \quad S_{2n+1}^y\rightarrow -S_{2n+1}^y,\\ S_{n}^z \rightarrow S_{n}^z $$ This flips the sign of the $S_n^x S_{n+1}^x$ and $S_n^y S_{n+1}^y$ terms.
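The transformation can be checked by brute force on a small chain. The sketch below (my addition) builds the open-chain XXZ Hamiltonian for $N=4$ spin-1/2 sites and verifies that conjugating by $\sigma_z$ on every other site — which is the $\pi$ rotation about the spin $z$ axis, up to a phase — flips the sign of the XX and YY terms while leaving the ZZ term untouched:

```python
import numpy as np

# Single-site spin-1/2 operators (hbar = 1)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, n, N):
    """Embed a single-site operator at site n of an N-site chain."""
    out = np.eye(1)
    for k in range(N):
        out = np.kron(out, op if k == n else I2)
    return out

def H(N, delta, sign_xy=1.0):
    h = np.zeros((2**N, 2**N), dtype=complex)
    for n in range(N - 1):          # open chain
        h += sign_xy * (site_op(sx, n, N) @ site_op(sx, n + 1, N)
                        + site_op(sy, n, N) @ site_op(sy, n + 1, N))
        h += delta * site_op(sz, n, N) @ site_op(sz, n + 1, N)
    return h

N, delta = 4, 0.7
# sigma_z (= 2*sz) on the odd sites, identity on the even sites:
U = np.eye(1)
for n in range(N):
    U = np.kron(U, 2 * sz if n % 2 else I2)

# U H U^dagger flips the sign of the XX and YY terms only:
lhs = U @ H(N, delta) @ U.conj().T
rhs = H(N, delta, sign_xy=-1.0)
print(np.max(np.abs(lhs - rhs)))  # ~0: the two Hamiltonians are unitarily equivalent
```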
{ "domain": "physics.stackexchange", "id": 100416, "tags": "condensed-matter, spin-chains" }
When I use output.name[0]="Tjoint" there is an error: Segmentation fault
Question: The code is as follows: #include "ros/ros.h" #include "std_msgs/String.h" #include "sensor_msgs/JointState.h" #include <sstream> class SubscribeAndPublish { public: SubscribeAndPublish() { //topic you want to publish pub_=n.advertise<sensor_msgs::JointState>("/arm/joint_states",1000); //subscribe sub_=n.subscribe("/joint_states",1000,&SubscribeAndPublish::callback,this); } void callback(const sensor_msgs::JointState::ConstPtr& msg) { sensor_msgs::JointState output; output.name=msg->name; // output.name[0]="Tjoint output.position=msg->position; output.header=msg->header; pub_.publish(output); } private: ros::NodeHandle n; ros::Publisher pub_; ros::Subscriber sub_; }; int main(int argc, char **argv) { ros::init(argc, argv, "talker"); SubscribeAndPublish SAPobject; ros::spin(); return 0; } Originally posted by buaawanggg on ROS Answers with karma: 1 on 2017-05-03 Post score: 0 Original comments Comment by gvdhoorn on 2017-05-03: This is not a question. Please format your code properly (use the Preformatted text button) and explain what you are trying to do, what doesn't work and why you think that doesn't work. Comment by buaawanggg on 2017-05-03: thx for your advice ,i try to write a node to subscribe the topic “joint-states” , this topic have messages of 6 joints ,but what i need is only the message of “joint0” and publish the message of “joint0” to topic “arm/joint_states” Comment by buaawanggg on 2017-05-03: i want to use the “void callback“” to filter the message Comment by jarvisschultz on 2017-05-04: I've provided an answer to your question, and reformatted your original question. I was only able to do this because I had already answered your original question; so I knew what you were asking. As @gvdhoorn indicated, please put more effort into writing clear and nicely formatted questions Answer: You can't call output.name[0] without first allocating at least one entry in the std::vector<std::string> data that is the output.name field. 
You should likely use something like std::vector::push_back or std::vector::resize to allocate the memory. The same is true for the position field. Here's a quick rewrite of your callback as an example: void callback(const sensor_msgs::JointState::ConstPtr& msg) { sensor_msgs::JointState output; output.header = msg->header; // create space for our single joint name and position: output.name.resize(1); output.position.resize(1); // find correct joint and store in output: std::vector<std::string>::const_iterator name_it = msg->name.begin(); std::vector<double>::const_iterator position_it = msg->position.begin(); for (; name_it != msg->name.end() && position_it != msg->position.end(); ++name_it, ++position_it) { if ( (*name_it).compare("joint1") == 0) { // then we've found the joint! output.name[0] = *name_it; output.position[0] = *position_it; pub_.publish(output); break; } } return; } Originally posted by jarvisschultz with karma: 9031 on 2017-05-04 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 27795, "tags": "ros" }
Algorithmic approach to finding two vectors that span a plane
Question: I am working on an experiment where I need to align a magnetic field to be parallel to a nanoscale wire embedded in a microscale planar structure. My tools for doing so are two-fold: the planar structure produces a signal $S_P$ (a resonance frequency) that is maximal when the field is fully in plane and that drops off in a typical angle-dependent fashion when not in plane; this dependence is smooth and monotonic and essentially noise free, but the visibility quickly reduces when out of plane as the frequency moves out of our measurement window. Similarly (but subject to more stringent conditions) I have a signal $S_W$ produced by the wire that is maximal when the field is in the plane of the planar structure and parallel to the wire; also a resonance frequency. It quickly goes down in an angular fashion both w.r.t. the in-plane angle and the out-of-plane angle. My task is then, with these tools in hand, to align the magnetic field parallel to the wire. Given that the field is homogeneous over scales much larger than the structures, I believe one can reduce this problem to a more general problem of aligning a 3D vector with spherical coordinates $\vec{v} = (r, \phi, \theta)$ (red) to be parallel to a 1D line (blue) that lies in a 2D plane with unknown vector span (green), at some unknown angle $\xi$ w.r.t. that plane. To do this alignment, I had the following idea. In spherical coordinates I can 'algorithmically' control $\phi$ and $\theta$ of the (red) magnetic field vector while monitoring $S_P$, to find two vectors that span the plane. I use these two vectors to set up a new polar coordinate system of that plane, in which I can then sweep the polar angle to find $\xi$ and maximize $S_W$. The second part should be trivial, but the first part is what my question is about. How do I find two vectors that span the plane, controlling $\phi$ and $\theta$ and monitoring $S_P$?
As it currently stands, I can rather easily find a single vector that works by simply varying $\phi$ and $\theta$ until $S_P$ reaches a maximum in both parameters simultaneously, but how do I go from there to find a linearly independent vector that is also in the plane? How should I algorithmically 'walk' through $\phi,\theta$ space to find that other vector? I say walk because what I envision the method to be like is that I start from the local maximum, 'overrotate' in $\theta$, bring $S_P$ back up with $\phi$ and overrotate that one a bit, and then go back to $\theta$ and repeat. That will walk me through the angle space, but how to use this to find the second vector I am not sure. Context: Since this is a place where we discuss physics, it might be interesting to give some context. We are working on a 'gatemon' type device (see https://arxiv.org/abs/1503.08339), which is essentially a transmon qubit with a semiconducting nanowire as the junction, and proximity-effect-induced superconducting islands made out of semiconductor segments with superconductor shells. It has a charging energy and a Josephson energy and thus a resonance frequency $S_W$, and we couple it to a planar waveguide with resonance frequency $S_P$. We're interested in studying its properties in a magnetic field, but the field needs to be in the plane of the waveguide to keep the waveguide superconducting, and parallel to the wire to keep the wire superconducting. So I'm working on a good way to get that alignment as precise as possible. Answer: Consider the following picture: Green is an arbitrary plane through the origin (corresponding to the plane in the picture of the question), red the xz plane and blue the xy plane. Angles are also taken from there. If you set $\phi=0$ and sweep $\theta$ clockwise, you'll get a maximum of $S_P$ when you reach the black arrow. Likewise, for $\theta=90^\circ$, $S_P$ will be maximal when $\phi$ describes the yellow arrow.
Both arrows define the plane (except for corner cases when the green plane is one of the coordinate planes). Why do you think this doesn't work?
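The two sweeps suggested in the answer can be turned into a simple algorithm. The sketch below is my construction: a made-up plane normal stands in for the unknown plane, and a smooth fake $S_P$ that peaks when the field lies in the plane stands in for the measured resonance. The two one-parameter sweeps then recover two linearly independent in-plane vectors:

```python
import numpy as np

def sph_to_cart(phi, theta):
    # Physics convention: theta measured from +z, phi in the xy plane.
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# Hypothetical plane normal; in the experiment it is unknown and S_P is
# a measured resonance. Here we fake S_P as a smooth function that is
# maximal when the field is in the plane (n . v = 0).
n_true = np.array([0.3, -0.5, 0.8])
n_true = n_true / np.linalg.norm(n_true)

def S_P(phi, theta):
    return 1.0 - np.dot(n_true, sph_to_cart(phi, theta))**2

def argmax_sweep(f, grid):
    vals = [f(g) for g in grid]
    return grid[int(np.argmax(vals))]

grid = np.linspace(0, np.pi, 20001)
# Sweep 1: fix phi = 0, vary theta -> first in-plane vector
th1 = argmax_sweep(lambda t: S_P(0.0, t), grid)
v1 = sph_to_cart(0.0, th1)
# Sweep 2: fix theta = pi/2, vary phi -> second in-plane vector
ph2 = argmax_sweep(lambda p: S_P(p, np.pi / 2), grid)
v2 = sph_to_cart(ph2, np.pi / 2)

# Both vectors are (nearly) orthogonal to the true normal...
print(abs(np.dot(v1, n_true)), abs(np.dot(v2, n_true)))  # both ~0
# ...and linearly independent, so together they span the plane:
print(np.linalg.norm(np.cross(v1, v2)))  # clearly nonzero
```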
{ "domain": "physics.stackexchange", "id": 45698, "tags": "experimental-physics, vectors, geometry" }
Cooling a satellite
Question: Satellites are isolated systems; the only way for them to transfer body heat to outer space is thermal radiation. There are solar panels, so there is a continuous energy flow into the system. There is no airflow to transfer the accumulated heat to outer space easily. What kinds of cooling systems are used in satellites? Answer: Typically, satellites use radiative cooling to maintain thermal equilibrium at a desired temperature. How they do this depends greatly on the specifics of the satellite's orbit around Earth. For instance, sun-synchronous satellites typically always have one side in sunlight and one side in darkness. These are particularly easy to keep cool because you can apply a white coating to the sunward side and a black coating to the dark side. The white coating has a low value for radiative absorption while the black coating has a high value for radiative emission. This means the satellite can absorb as little light as possible while emitting more thermal radiation. Different types of satellites have different strategies for cooling, but in general, cooling is achieved by applying functional coatings to the spacecraft that lower or raise the absorptivity/emissivity/reflectivity of its different surfaces. While designing a satellite, the space engineers perform thermal analyses and lots of calculations to determine which surfaces need to have what absorption values in order for the satellite to maintain the desired temperature. It's hard for me to be more specific than this. But this is the reason any good space engineer knows how to find a coating with the desired absorptivity/emissivity values within a day or two.
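As a rough illustration of how the coating choice sets the temperature, here is a back-of-the-envelope sketch (my addition; the $\alpha/\varepsilon$ values are illustrative guesses, and a one-sided flat plate facing the Sun in steady state is assumed — real thermal analyses are far more involved):

```python
S = 1361.0          # solar constant near Earth, W/m^2
sigma = 5.670e-8    # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_T(alpha, epsilon):
    # Steady state for a one-sided plate:
    # absorbed alpha*S = emitted epsilon*sigma*T^4
    return (alpha * S / (epsilon * sigma)) ** 0.25

# Illustrative (guessed) absorptivity/emissivity pairs:
for name, alpha, eps in [("white paint", 0.20, 0.90),
                         ("black paint", 0.95, 0.90),
                         ("bare metal", 0.15, 0.05)]:
    T = equilibrium_T(alpha, eps)
    print(f"{name:12s} alpha/eps={alpha/eps:4.2f} -> T = {T:5.1f} K")
```

The equilibrium temperature depends only on the ratio $\alpha/\varepsilon$, which is exactly the quantity the coating selection controls.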
{ "domain": "physics.stackexchange", "id": 63660, "tags": "thermodynamics, thermal-radiation, satellites, cooling" }
Contradiction between energy conservation and radiation damping force
Question: We know that if a charged particle is accelerated, it will radiate. According to the Larmor formula, the power is $$P = {2 \over 3} \frac{q^2 a^2}{ c^3}$$ in Gaussian units, where $a$ is the acceleration. An accelerated particle will also experience a radiation damping force (the Abraham–Lorentz force) $$\mathbf{F}_\mathrm{rad} = { 2 \over 3} \frac{ q^2}{ c^3} \mathbf{\dot{a}}$$ The equation of motion of the particle should be $$m \frac{d \mathbf{v}}{dt}= \mathbf{F}_{ext}+ { 2 \over 3} \frac{ q^2}{ c^3} \mathbf{\dot{a}} \tag{1}$$ But a contradiction arises when the external force is a constant force. In this case, the particle has constant acceleration ($\mathbf{\dot a}=0$). But it will still radiate energy ($P\neq0$). The work done by the external force $\mathbf{F}_{ext}$ will be entirely converted into the particle's kinetic energy (due to the uniform acceleration), but the particle will still radiate energy to infinity. Does this violate energy conservation? Note 1: For a constant force, the uniform acceleration solution $\mathbf{\dot a}=0$ is still a solution of the new equation (1). That is, $\dot a=0$ is a solution of $a=b$; certainly this solution ($\dot a=0$) is also a solution of $a= b+d \dot a$, which reduces to $a=b$. Note 2: Even if you take relativistic effects into consideration, you still cannot resolve this contradiction. The relativistic generalization of the radiation damping force is given in Landau's The Classical Theory of Fields (76.2): $$F_{rad}^\mu= \frac{2 e^2}{3 c}(\frac{d^2 u^{\mu}}{ds^2}-(u^\mu u^\alpha)\frac{d^2 u_\alpha}{ds^2})$$ So we see that with a constant external force $F_{ext}^\mu=\text{constant}$ and the equation of motion $$m\frac{du^{\mu}}{ds}=F^{\mu}_{ext}+F^{\mu}_{rad}\tag{2}$$ the constant $4$-acceleration $\frac{du^{\mu}}{ds}=\frac{F^{\mu}_{ext}}{m}$, $\frac{d^2 u^{\mu}}{ds^2}=0$ is still a solution of equation (2). So there is still no radiation damping force.
Answer: Equation (1) is only an approximate equation of motion for a charged body of non-zero dimensions, because the expression $$ \frac{2}{3}\frac{q^2}{c^3}\dot{\mathbf a} $$ only approximates the total internal EM force acting on such a body due to the non-rectilinear motion of its parts. Because of this approximation, the equation does not reflect the law of conservation of energy; it only gives an approximate description of the motion of the body as a whole. The exact expression for the net internal force would be much more complicated: it would involve internal degrees of freedom of the charged body and the interaction of its parts. For very simple models of such a charged body (like a uniformly charged sphere), the net force can be expressed as an infinite series, and the above term is just one term of this series. In an exact model of the charged body where conservation of energy is present and can be proven, the energy that the system radiates is balanced by a decrease of internal energy inside and near the charged body. That includes the EM energy in the vicinity of the body. But if the internal degrees of freedom and dimensions of the body are neglected so that it becomes a point, this internal energy becomes invisible and the remaining visible degrees of freedom manifest a violation of energy conservation. This is just a consequence of simplifying the model of the extended charged body so much; it is not an indication that there is something wrong with the general theory. See also my answer here: Does a constantly accelerating charged particle emit EM radiation or not?
{ "domain": "physics.stackexchange", "id": 40044, "tags": "electromagnetism, electromagnetic-radiation, energy-conservation, classical-electrodynamics" }
Using array as parameter in service
Question: I am using array as input and output in a service. I am trying to extend the existing tutorial for getting started. Below is the client code- #include "ros/ros.h" #include "beginner_tutorials/AddTwoInts.h" #include <cstdlib> int main(int argc, char **argv) { ros::init(argc, argv, "add_two_array_client"); ros::NodeHandle n; ros::ServiceClient client = n.serviceClient<beginner_tutorials::AddTwoInts>("add_two_array"); beginner_tutorials::AddTwoInts srv; std::vector<double> a(3); std::vector<double> b(3); a.push_back(1.0); a.push_back(2.0); a.push_back(3.0); b.push_back(4.0); b.push_back(5.0); b.push_back(6.0); srv.request.a = a; srv.request.b = b; if (client.call(srv)) { for (int i = 0; i < 3; i++) { ROS_INFO("Sum: %f", (float)srv.response.sum[i]); } } else { ROS_ERROR("Failed to call service add_two_array"); return 1; } return 0; } Following is the server code- #include "ros/ros.h" #include "beginner_tutorials/AddTwoInts.h" bool add(beginner_tutorials::AddTwoInts::Request &req, beginner_tutorials::AddTwoInts::Response &res) { res.sum.resize(3); for (int i = 0; i < 3; i++) { res.sum[i] = req.a[i] + req.b[i]; ROS_INFO("request: x=%f, y=%f", req.a.at(i), req.b[i]); ROS_INFO("sending back response: [%f]", res.sum[i]); } return true; } int main(int argc, char **argv) { ros::init(argc, argv, "add_two_array_server"); ros::NodeHandle n; ros::ServiceServer service = n.advertiseService("add_two_array", add); ROS_INFO("Ready to add two ints."); ros::spin(); return 0; } The srv file looks like following- float64[] a float64[] b --- float64[] sum While running, the input parameters shows 0 value. Below is the snippet from terminal- [ INFO] [1465212136.691882517, 8990.505000000]: request: x=0.000000, y=0.000000 [ INFO] [1465212136.691913058, 8990.505000000]: sending back response: [0.000000] How to use array as parameter in service? 
Originally posted by ravijoshi on ROS Answers with karma: 1744 on 2016-06-06 Post score: 1 Original comments Comment by ravijoshi on 2016-06-06: The existing tutorial can be found here http://wiki.ros.org/ROS/Tutorials/WritingServiceClient(c%2B%2B) Answer: Simple, but tricky error: You are using the vector's fill constructor here: std::vector<double> a(3); std::vector<double> b(3); a and b will already be filled with zeros like this: a = [0 0 0] b = [0 0 0] with the command push_back you'll add another three values: a = [0 0 0 1 2 3] b = [0 0 0 4 5 6] but these values won't be used in the calculation on your server, since your loop only iterates over the first three elements. So either don't fill your vector in the beginning or don't use push_back Originally posted by JohnDoe2991 with karma: 305 on 2016-06-06 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by ravijoshi on 2016-06-06: Thanks a lot.. You really saved my time. I was debugging it but couldn't figure out the actual mistake.
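The mechanics of the bug are easy to see in a few lines. This is a hypothetical Python sketch of the same pitfall, since a list pre-filled with `[0.0] * 3` behaves like the C++ fill constructor `std::vector<double> a(3)`:

```python
# Analog of std::vector<double> a(3): pre-filled with three zeros.
a = [0.0] * 3
b = [0.0] * 3

# Analog of push_back: appends AFTER the pre-filled zeros.
for value in (1.0, 2.0, 3.0):
    a.append(value)
for value in (4.0, 5.0, 6.0):
    b.append(value)

print(a)  # [0.0, 0.0, 0.0, 1.0, 2.0, 3.0]

# The server loop only reads the first three elements, so it sums zeros.
sums = [a[i] + b[i] for i in range(3)]
print(sums)  # [0.0, 0.0, 0.0]
```

As the answer says, the C++ fix is either to start from an empty vector before calling push_back, or to keep the pre-sized vector and assign by index instead.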
{ "domain": "robotics.stackexchange", "id": 24826, "tags": "ros-indigo" }
Is force frame dependent?
Question: If we look at a particle moving in the positive $x$ direction from a frame that is accelerating in the positive $y$ direction then its acceleration will be different from that in an inertial frame and hence the force on the particle must also be different in this frame. Is this right? Answer: As described in the question, you have only said that the frame is moving. But if motion of the frame is uniform, then acceleration is not changed and therefore the force is not changed (Newton II). If the frame is accelerating relative to an inertial frame, then there will be an inertial force (or pseudo force). Inertial forces are distinct from active forces (inertial literally means not acting). The inertial force accounts for the difference in the acceleration of the particle when seen from the accelerating frame.
{ "domain": "physics.stackexchange", "id": 65992, "tags": "forces, reference-frames" }
Why is Zener Diode connected in parallel to the load?
Question: I’m a newbie to the topic of circuits so sorry if this seems a stupid question. This is the picture given in my textbook of a Zener circuit. First of all, isn’t Vz = the fluctuating DC input voltage? As far as I understood, potential difference across parallel circuits is the same, so it should be the same across both the diode and the load, right? Secondly, how does the diode work? Once it’s crossed its ‘breakdown’ stage, what exactly happens when the voltage fluctuates? How does the diode regulate it? My guess is that it ensures the same current flows through the load no matter how much the current in the circuit changes (due to a change in voltage), and since V=IR (to reason mathematically) and current and resistance are the same, V is also the same. Am I right? Answer: Think about how the Zener diode works. It is conducting for voltages above 0, and for voltages below $-V_z$. When the voltage is above $0$ the diode will conduct a current and the voltage over $R_L$ will be $0$. The load will however experience a voltage when the value is $-V_z < V < 0$, because then the diode is not conducting and we have a voltage over it; the current goes through $R_L$. $V_z$ is the max voltage $R_L$ will experience. If this is known, the current going through the closed loop can be calculated, and thus you can calculate other parameters as well. If you plot the voltage over $R_L$ you should get a signal where we only have the lower part of a sinusoid that is clipped at $V_z$ (given that the input voltage is a sinusoid). A Zener diode is a semiconductor that permits Zener breakdown at a specific voltage, which means that the diode will conduct a current even in the backward direction. In practice this current depends on the voltage applied, but given the simplicity of the circuit above, it should be enough to use the ideal diode functionality where the current is the same no matter the voltage applied.
When the Zener voltage is passed, the voltage over the circuit will be $V_z$; thus the current/voltage on $R$ will be constant as long as the diode is conducting. Something that'd help you could be to redraw the circuit for the scenarios when the diode is and is not conducting, replacing the diode with an open circuit and a short circuit. And keep in mind that current seeks the path of lowest resistance.
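Here is a minimal numerical sketch of the ideal-diode behavior described above; the breakdown magnitude and input amplitude are made-up illustrative values, not taken from the textbook figure. The model follows the answer: the load is shorted to 0 for positive input, sees the input while $-V_z < V < 0$, and is clamped at $-V_z$ below breakdown:

```python
import math

V_Z = 3.0  # assumed breakdown magnitude in volts, illustrative only

def load_voltage(v_in):
    """Ideal-Zener model of the load voltage described in the answer."""
    if v_in >= 0.0:
        return 0.0   # forward-biased diode shorts the load
    if v_in <= -V_Z:
        return -V_Z  # breakdown clamps the load at -Vz
    return v_in      # diode off: the load sees the input

# A 5 V sinusoid input: the output is the lower lobe, clipped at -Vz.
samples = [load_voltage(5.0 * math.sin(2 * math.pi * t / 100))
           for t in range(100)]

assert max(samples) == 0.0   # positive half-cycle is shorted out
assert min(samples) == -V_Z  # negative half-cycle is clamped
```

Plotting `samples` reproduces the clipped lower lobe of the sinusoid that the answer describes.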
{ "domain": "physics.stackexchange", "id": 62936, "tags": "electric-circuits, electrical-engineering" }
Quantum Hamiltonian and Classical Hamiltonian Rotationally Invariant
Question: From Shankar's QM book, pg. 310, it was said that the quantum Hamiltonian $H$ is rotationally invariant whenever the classical Hamiltonian $\mathcal{H}$ is rotationally invariant. For infinitesimal rotations, the classical variables $(x,y,p_x,p_y)$ transform as $$\bar{x}=x-y\epsilon$$ $$\bar{y}=y+x\epsilon$$ $$\bar{p}_x=p_x - p_y\epsilon$$ $$\bar{p}_y=p_y+p_x\epsilon$$ The classical Hamiltonian $\mathcal{H}$ is rotationally invariant if $\mathcal{H}(\bar{x},\bar{y},\bar{p}_x,\bar{p}_y)=\mathcal{H}(x,y,p_x,p_y).$ The quantum operators $(X,Y,P_x,P_y)$ transform in a similar way: $$U^\dagger XU=X-Y\epsilon$$ $$U^\dagger Y U = X\epsilon + Y$$ $$U^\dagger P_x U=P_x - P_y\epsilon$$ $$ U^\dagger P_y U=P_x\epsilon+ P_y $$ where $U$ is the infinitesimal rotation operator. The quantum Hamiltonian is rotationally invariant if $H(U^\dagger X U,U^\dagger Y U,U^\dagger P_x U,U^\dagger P_y U)=H(X,Y,P_x,P_y)$. Now, since the operators $X,P_x$ and $Y,P_y$ do not commute like the classical variables $x,p_x$ and $y,p_y$, how can we be sure that the quantum Hamiltonian is invariant whenever the classical one is? For example, consider that in the expansion of the classical Hamiltonian $\mathcal{H}(\bar{x},\bar{y},\bar{p}_x , \bar{p}_y)$ we have two terms that cancel each other: $$\mathcal{H}(\bar{x},\bar{y},\bar{p}_x , \bar{p}_y)=...+xp_x-xp_x+...=\mathcal{H}(x,y,p_x,p_y)$$ How can we be sure that in the expansion of the quantum Hamiltonian $H(U^\dagger X U,U^\dagger Y U,U^\dagger P_x U,U^\dagger P_y U)$ these two terms will be $$H(U^\dagger X U,U^\dagger Y U,U^\dagger P_x U,U^\dagger P_y U)=...+XP_x-X P_x+...=H(X,Y,P_x,P_y)$$ which cancel, instead of $$H(U^\dagger X U,U^\dagger Y U,U^\dagger P_x U,U^\dagger P_y U)= ... + XP_x - P_xX+...\neq H(X,Y,P_x,P_y)$$ which do not cancel? Answer: You have hit upon a recondite problem of quantization, which rarely crops up in practical problems: A classical $\cal H$ has several, indeed, many different quantum Hs which have it as their common classical limit.
Of those, one chooses the ones that share the same symmetry (e.g., rotational) as $\cal H$, all things being equal, but this is not formally or theologically compulsory! Here is an artificial, simplistic toy example to illustrate the formal point. Consider two quantum Hermitian Hamiltonians $$ H_1= XP_y -YP_x ,\\ H_2= XP_y -YP_x +i\lambda X [X,P_x], $$ both of which have the same classical limit $$ {\cal H}=xp_y- yp_x, $$ which is rotationally invariant. You may see directly that $H_1$ is rotationally invariant, but $H_2$ isn't. Usually, when in doubt, people choose $H_1$ in quantizing ${\cal H}$, depending on context, as it, unlike its evil twin, $H_2$, inherits the rotational symmetry of the classical system. This is a discretionary choice.
{ "domain": "physics.stackexchange", "id": 90617, "tags": "quantum-mechanics, operators, symmetry, hamiltonian, rotation" }
Forces and moments on Massless link
Question: I am the TA for a dynamics class, and I've come across an odd problem whose answer I can't explain and don't remember how to look up. I have the following problem from an old solution manual that looks like the following: In the description of the problem, links 2 and 4 are said to have mass, but link 3 is assumed to be massless. The force F is known and the objective is to solve for the torque T2. In the solution manual, the author has the following equations for link 3: Sum of forces in the x direction: $F_{23x} + F_{43x} = 0$ Sum of forces in the y direction: $F_{23y} + F_{43y} = 0$ Sum of moments about A: $r_3 \times F_{43} = 0 \rightarrow r_3\cos(\theta_3)F_{43y} - r_3\sin(\theta_3)F_{43x} = 0$ $F_{23x}$ refers to the "x component of the force from link 2 on link 3". The force equations seem okay to me, but the moment equation seems fishy. What is the right way to deal with a massless, rigid link in a dynamics problem like this? Do I even need to write equations for this link? Answer: The reason massless links need to have no net force or moment is that an unbalanced force or moment would accelerate them infinitely. Another way to think about this is to consider Euler's first law for a rigid body, $$\mathbf{F}_G = m \, \mathbf{a}_{G/O},$$ where $\mathbf{F}_G$ is the net force on the center of mass, and $\mathbf{a}_{G/O}$ is the inertial acceleration of the center of mass. (It does not matter where we consider the "center of mass" to be for a massless link.) If the body has no mass, the right-hand side is zero, and the forces on the body must sum to zero. Similarly, for Euler's second law, $$\mathbf{M}_O = I_O \, \boldsymbol{\alpha},$$ where $\mathbf{M}_O$ is the sum of the moments about an inertial point $O$, $I_O$ is the mass moment of inertia about $O$, and $\boldsymbol{\alpha}$ is the angular acceleration of the body. A massless body has no moment of inertia, so the right-hand side is zero, and the sum of the moments must also be zero.
As for why you need these equations for the massless link in this problem: they allow you to solve for three of the four unknown reaction forces on the link, and you will need both components of $\mathbf{F}_{23}$ to determine the moment about $G2$. As an aside, I believe if $F_{43x}$ and $F_{43y}$ are oriented with $x$ and $y$ in the diagram, then the moment equation should be $$\ell \sin(60) \, F_{43x} + \ell \cos(60) \, F_{43y} = 0,$$ if $\ell$ is the length of $AB$ and the counterclockwise direction is positive.
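As a quick sanity check of that moment equation, here is a short numeric sketch with made-up values ($\ell$ and $F_{43x}$ are arbitrary illustrative numbers; none of these values come from the original problem):

```python
import math

ell = 0.5                 # link length AB in meters, illustrative
theta = math.radians(60)  # link angle from the x axis
F43x = 10.0               # assumed x-component of the pin reaction, N

# Solve the moment balance  ell*sin(theta)*F43x + ell*cos(theta)*F43y = 0
F43y = -math.tan(theta) * F43x

# The moment about A vanishes, as required for the massless link.
moment_about_A = ell * math.sin(theta) * F43x + ell * math.cos(theta) * F43y
assert abs(moment_about_A) < 1e-9

# Force balance then fixes the other pin reaction on the link.
F23x, F23y = -F43x, -F43y
assert F23x + F43x == 0 and F23y + F43y == 0
```

Note the resulting reaction is directed along the link (a two-force member), which is exactly what the zero-moment condition enforces.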
{ "domain": "physics.stackexchange", "id": 29558, "tags": "homework-and-exercises, newtonian-mechanics, forces, free-body-diagram" }
Isometric equivalence of purifications of quantum states
Question: Following the notes here (Quantum Information Theory Tips 5 at ETH), we state the following result. For any quantum state $\rho_A$ and purifications $\vert\psi\rangle_{AB}$ and $\vert\phi\rangle_{AC}$, there exists an isometry $V_{B\rightarrow C}$ such that $(I_A\otimes V_{B\rightarrow C})\vert\psi\rangle_{AB} = \vert\phi\rangle_{AC}$. Consider now $\rho_{A} = \frac{\mathbb{1}_A}{2}$, the maximally mixed state, and the following purifications. $$|\psi\rangle_{A B}=\frac{1}{\sqrt{2}}\left(|0\rangle_{A}|+\rangle_{B}+|1\rangle_{A}|-\rangle_{B}\right) \quad \text{and} \quad|\phi\rangle_{A C}=\frac{1}{\sqrt{2}}\left(|0\rangle_{A}|000\rangle_{C}+|1\rangle_{A}|110\rangle_{C}\right)$$ Is it true that there is an isometry $V'_{C\rightarrow B}$ such that $(I_A\otimes V'_{C\rightarrow B})\vert\phi\rangle_{AC} = \vert\psi\rangle_{AB}$? Note that here $\text{dim}(\mathcal{H}_C) > \text{dim}(\mathcal{H}_B)$. If yes, how is this consistent with the following definition of isometries, which states that they go from a smaller Hilbert space to a larger Hilbert space only? Let $\mathcal{H}$ and $\mathcal{H}^{\prime}$ be Hilbert spaces such that $\operatorname{dim}(\mathcal{H}) \leq \operatorname{dim}\left(\mathcal{H}^{\prime}\right)$. An isometry $V$ is a linear map from $\mathcal{H}$ to $\mathcal{H}^{\prime}$ such that $V^{\dagger} V=I_{\mathcal{H}}$. Equivalently, an isometry $V$ is a linear, norm-preserving operator, in the sense that $\| |\psi\rangle \|_{2} = \| V|\psi\rangle \|_{2}$ for all $|\psi\rangle \in \mathcal{H}$. This is related to my previous question here but I am still not sure about this dimensional problem.
Answer: An isometry is a map such that $$ \langle Vx,Vy\rangle=\langle x,y\rangle$$ If the image of $V$ has smaller dimension than its domain, then clearly this property cannot hold, since if we have an orthonormal basis $$ \langle x_i,x_j\rangle=\delta_{ij}$$ we cannot have $$\langle Vx_i,Vx_j\rangle=\delta_{ij}\tag{$*$} $$ because there aren't enough orthogonal vectors in the image of $V$. Instead you can have a partial isometry, i.e. a map $V$ such that $(*)$ holds for a subset $\{x_j\}_{j=1}^{d_V}$, where $d_V$ is the dimension of the image of $V$, and that sends the other vectors to $0$. In practice this means projecting your initial space onto a subspace of the same dimension as the image of $V$ and then applying an isometry. More precisely, a partial isometry is a map that is an isometry on the orthogonal complement of its kernel. What ort1426 says is correct and enough in my opinion; this already shows isometric equivalence, but a more complete statement could be: Let $|\psi\rangle_{AB}$ and $|\psi'\rangle_{AC}$ be two purifications of $\rho_A$. Then there exists a partial isometry $V_{B\to C}$ such that $V|\psi\rangle=|\psi'\rangle$. You already know how to prove the case where $\mathrm{dim}(B)\leq \mathrm{dim}(C)$; then $V$ is an isometry or a unitary (which are special cases of partial isometry, despite the names). If $\mathrm{dim}(B)> \mathrm{dim}(C)$, consider a Schmidt decomposition of $|\psi\rangle$ and $|\psi'\rangle$: $$ |\psi\rangle_{AB}=\sum_{k=1}^{r} s_k |\alpha_k\rangle|\beta_k\rangle\\|\psi'\rangle_{AC}=\sum_{k=1}^{r} s_k |\alpha_k\rangle|\beta_k'\rangle$$ The $\alpha_k$ are equal because the states must both partial-trace to $\rho_A$. We clearly have $r<\mathrm{dim}(C)$.
Extend the $|\beta_k\rangle$ to a basis of $B$ arbitrarily and define $$ V_{B\to C}|\beta_k\rangle=\begin{cases} |\beta_k'\rangle \quad &\textrm{if } k\leq r\\ 0 \quad &\textrm{otherwise} \end{cases}$$ $V$ is a partial isometry and has the desired property. Basically, you didn't need such a big Hilbert space to begin with, as the rank of the Schmidt decomposition is smaller than the dimension of your auxiliary space anyway, and $V$ throws away the useless dimensions by projection.
{ "domain": "physics.stackexchange", "id": 67603, "tags": "quantum-mechanics, hilbert-space, quantum-information, unitarity, quantum-states" }
A simple search-and-replace algorithm
Question: In recent times, I've been using the following algorithm repeatedly in different languages (Java, Python, ...) to search and replace strings. I don't like this implementation very much, it's very procedural and not very descriptive, but I have yet to come up with a better one. How could the implementation be improved? from typing import Callable import re def find_and_replace(pattern: str, text: str, replace: Callable[[re.Match], str]) -> str: last_end = 0 output = [] for match in pattern.finditer(text): text_before = text[last_end:match.start()] if text_before: output.append(text_before) replacement = replace(match) output.append(replacement) last_end = end text_remaining = text[last_end:] if text_remaining: output.append(text_remaining) return ''.join(output) Answer: wrong signature def find_and_replace(pattern: str, ... You intended pattern: re.Pattern, Recommend you routinely use mypy or similar type checker if you choose to add optional type annotations. It is common knowledge that "Comments lie!". Often they're initially accurate, and then the code evolves with edits while the comments remain mired in the past, out of sync. So at best we will "mostly believe" the comments that appear. A type signature offers stronger promises, since we expect that lint nonsense was cleaned up during routine edit-debug and CI/CD cycles. Posting code for review which doesn't meet that expectation is problematic. wrong code last_end = end You intended to assign ... = match.end(). I cannot imagine how one could "test" this source code without noticing a fatal NameError. missing documentation The review context merely mentioned that this will "search and replace strings", similar to what the identifier promises. It's unclear how the proposed functionality would differ from what the str or re libraries offer, though the signature does offer a hint. 
The hint is not enough, and we are not even offered so much as a motivating example of the kinds of Turing machines we might usefully pass in. missing test suite This submission contained no doctest, no automated tests of any kind. One reason we write tests is to exercise the code, for example to discover a NameError. Another reason is to educate engineers who might want to call into this library routine. Here is a test that might usefully have been included in the OP. import unittest class FindAndReplaceTest(unittest.TestCase): def test_find_and_replace(self) -> None: pattern = re.compile(r"\w+") text = "... Hello, world!" def replace(match: re.Match) -> str: return "<" + str(match.group(0).upper()) + ">" self.assertEqual( "... <HELLO>, <WORLD>!", find_and_replace(pattern, text, replace), ) superfluous tests if text_before: output.append(text_before) ... if text_remaining: output.append(text_remaining) There's no need to test for non-empty, as empty string is the additive identity when catenating. The ''.join(output) result won't be altered in the slightest if we tack on a hundred empty entries with: output += [""] * 100 So as a minor cleanup, simply elide the ifs.
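Collecting the review's points (the `re.Pattern` annotation, the intended `last_end = match.end()` assignment, and eliding the superfluous emptiness tests), a corrected version of the routine might look like this:

```python
import re
from typing import Callable


def find_and_replace(pattern: re.Pattern, text: str,
                     replace: Callable[[re.Match], str]) -> str:
    """Replace each match of pattern in text with replace(match)."""
    last_end = 0
    output = []
    for match in pattern.finditer(text):
        output.append(text[last_end:match.start()])  # text before match
        output.append(replace(match))
        last_end = match.end()
    output.append(text[last_end:])  # trailing text after the last match
    return ''.join(output)


# The reviewer's test case, as a bare check:
pattern = re.compile(r"\w+")
result = find_and_replace(pattern, "... Hello, world!",
                          lambda m: "<" + m.group(0).upper() + ">")
assert result == "... <HELLO>, <WORLD>!"
```

Empty slices are appended harmlessly, since the empty string is the identity for concatenation, exactly as the review observes.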
{ "domain": "codereview.stackexchange", "id": 45379, "tags": "python, python-3.x, strings" }
Lower bound on running time for solving 3-SAT if P = NP
Question: Is there a lower bound on the running time for solving 3-SAT if P = NP. For instance, is it known that 3-SAT can't be solved in linear time? What about quadratic? Answer: These are some of the best time and space bounds known. In this area the research has gone to the direction of giving bounds in both time and space. My understanding is that superlinear time bounds without a space constraint are not known. Time-Space Lower Bounds for Satisfiability Lance Fortnow, Richard Lipton, Dieter van Melkebeek We establish the first polynomial time-space lower bounds for satisfiability on general models of computation. We show that for any constant $c$ less than the golden ratio there exists a positive constant $d$ such that no deterministic random-access Turing machine can solve satisfiability in time $n^c$ and space $n^d$, where $d$ approaches 1 when $c$ does.
{ "domain": "cs.stackexchange", "id": 2253, "tags": "complexity-theory, satisfiability, lower-bounds" }
Viscosity and Density
Question: So I'm reading my textbook and it says that the magnitude of air resistance $f(v)$ can be given as a Taylor expansion: $$f(v) = bv + cv^2 = f_{\text{linear}} + f_{\text{quadratic}}$$ The linear term, $bv$, is related to the viscosity of the medium while the quadratic term is related to the density of the medium. If something is more dense, shouldn't it be more viscous as well? Or are viscosity and density not related? Answer: No, in general, viscosity and density are not related. For example, for gases, we have: Maxwell's calculations show that the viscosity coefficient is proportional to the density, the mean free path, and the mean velocity of the atoms. On the other hand, the mean free path is inversely proportional to the density. So an increase in density due to an increase in pressure doesn't result in any change in viscosity. Alternatively, to directly disprove your assertion "If something is more dense, shouldn't it be more viscous as well?", we need look no further than oil and water: Since most oils float on water, their density must be less than that of water, and yet, oils are undeniably more viscous than water.
{ "domain": "physics.stackexchange", "id": 16161, "tags": "kinematics" }
Why are high levels of epinephrine and low levels of cortisol signs of stress?
Question: According to my psychology textbook (Psychological Science: Modeling Scientific Literacy), high levels of epinephrine and low levels of cortisol are signs of stress. However, earlier in the textbook it is shown that the autonomic response of the human body releases norepinephrine and epinephrine, while the hypothalamic pituitary adrenal (HPA) axis pathway releases cortisol. So it seems to me that signs of stress would be high levels of both of these things. Is my textbook wrong or am I just misunderstanding what stage of stress it's describing? Answer: So it seems to me that signs of stress would be high levels of both of these things. Yes. Wikipedia claims [1] the same: Stressors [...] activate the HPA axis, though via different pathways. [...] Stressors that are uncontrollable, threaten physical integrity, or involve trauma tend to have a high, flat diurnal profile of cortisol release (with lower-than-normal levels of cortisol in the morning and higher-than-normal levels in the evening) resulting in a high overall level of daily cortisol release. On the other hand, controllable stressors tend to produce higher-than-normal morning cortisol. The same is stated on a webpage of the Society for Endocrinology [2]: In addition, in response to stress, extra cortisol is released to help the body to respond appropriately. Yet, there are conditions in which stress can lower cortisol levels. One of them is psoriasis [3]: [...] patients with persistently high levels of stressors seem to have a specific psychophysiological profile of lowered cortisol levels and may be particularly vulnerable to the influence of stressors on their psoriasis. And some psychiatric disorders [4]: Several stress-associated neuropsychiatric disorders, notably posttraumatic stress disorder and chronic pain and fatigue syndromes, paradoxically exhibit somewhat low plasma levels of the stress hormone cortisol.
References: Wikipedia contributors, "Hypothalamic–pituitary–adrenal axis," Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Hypothalamic%E2%80%93pituitary%E2%80%93adrenal_axis&oldid=619115237 (accessed August 7, 2014). Society for Endocrinology. Hormones. Cortisol (2013). Available from http://www.yourhormones.info/hormones/cortisol.aspx (accessed 07.08.2014) Evers AW, Verhoeven EW, Kraaimaat FW, de Jong EM, de Brouwer SJ, Schalkwijk J, Sweep FC, van de Kerkhof PC. How stress gets under the skin: cortisol and stress reactivity in psoriasis. Br. J. Dermatol. 2010 Nov;163(5):986-91. doi: 10.1111/j.1365-2133.2010.09984.x. PubMed PMID: 20716227. Yehuda R, Seckl J. Minireview: Stress-related psychiatric disorders with low cortisol levels: a metabolic hypothesis. Endocrinology. 2011 Dec;152(12):4496-503. doi: 10.1210/en.2011-1218. PubMed PMID: 21971152.
{ "domain": "biology.stackexchange", "id": 2613, "tags": "human-biology, psychology" }
Why Does ASK Modulation Create Fourier Sidebands?
Question: I know why analog amplitude modulation has sidebands; it is related to (fc+fd) and (fc-fd). But what about DAM? ASK (DAM) is a type of digital modulation, and there are only two states: the carrier signal for "1" or nothing for "0". The expected behavior on a spectrum is seeing only one peak at fc, but there are many sidebands at fc+3fd, fc-3fd, fc+5fd, fc-5fd, etc. Can someone explain why we see these Fourier-series-pattern sidebands? fc: carrier frequency fd: data signal frequency Answer: If you did a continuous on-off keying of a 10101010... pattern, then you would see sidebands as described, since this is simply an up-conversion of the Fourier Transform of a 50% duty-cycle square wave (moved to any carrier frequency). However, if the data pattern for this case of rectangular on-off keying was random, the resulting spectrum would be continuous and under the envelope of a Sinc function (Sinc squared as a power spectrum). The first case itself is also under the envelope of the same Sinc function in frequency, but since the underlying pattern was not random (in this case the single-pulse on-off cycle) and repeated at a specific repetition rate, only integer harmonics of that repetition rate can be in the spectrum, with all the even harmonics falling on the nulls of the Sinc envelope (and thus a square wave with 50% duty cycle only has odd-order harmonics). This is demonstrated below, showing a single rectangular pulse and the magnitude (in dB) of its Fourier Transform. If this was a carrier frequency turned on and off, the same spectrum would result, just at that particular carrier (DC is just another carrier frequency). When we repeat that pulse in time at a periodic rate, the spectrum can only have non-zero content at integer multiples of that periodic rate (just as implied by the Fourier Series Expansion), but will still be within the envelope of the base pulse (in this case a Sinc function).
So in the second case it is repeated at a 50% duty cycle, such that the even harmonics are in the nulls of the Sinc (the waveform repeats at $1/(2T)$). The third case is the same pulse duration repeating at a 25% duty cycle, so the 4th harmonic lands on the first null (with the waveform repeating at $1/(4T)$). For example, if we generated a long pseudo-random sequence of such pulses with a pattern that repeats once per second, with each pulse having a duration of 1 millisecond, we would see spectral lines every 1 Hz under the envelope of a Sinc function with its nulls at 1 kHz spacing. Thus we see that in the extreme case of a random pattern that never repeats, the spectrum will again be continuous, as in the first case of a single pulse (with the primary difference that it will be a power spectrum rather than an energy spectrum, since if the pulse continues for all time, it will have infinite energy but finite power).
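The odd-harmonics claim for the 50% duty-cycle pattern is easy to verify numerically. Here is a pure-Python sketch (a plain DFT, no signal-processing library assumed) showing that the even harmonics of one period of an on-off pattern vanish while the odd ones do not:

```python
import cmath

N = 64
# One period of a 50% duty-cycle on-off keying baseband pattern.
x = [1.0] * (N // 2) + [0.0] * (N // 2)

def dft(signal, k):
    """k-th DFT coefficient of the signal."""
    n = len(signal)
    return sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
               for t in range(n))

# Even harmonics (other than DC) vanish; odd harmonics do not.
assert abs(dft(x, 2)) < 1e-9
assert abs(dft(x, 4)) < 1e-9
assert abs(dft(x, 1)) > 1.0
assert abs(dft(x, 3)) > 1.0
```

Changing `x` to a 25% duty-cycle pattern (`[1.0] * (N // 4) + [0.0] * (3 * N // 4)`) instead puts the null on the 4th harmonic, matching the third case above.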
{ "domain": "dsp.stackexchange", "id": 9913, "tags": "digital-communications, frequency-spectrum, modulation, fourier-series, amplitude-modulation" }
How to transform true and false to 1 and 0?
Question: I use the following code: var newValue = ($(event.target).is(":checked")) ? 1 : 0; var oldValue = (!$(event.target).is(":checked")) ? 1 : 0; sendMessage(this.name, oldValue, newValue); Is there any better approach? Answer: sendMessage(this.name, +!event.target.checked, +event.target.checked) Not your sending sendMessage(text, !bool, bool) which is silly just send one bool. Also $(thing).is(":checked") is stupid (jQuery ಠ_ಠ)
{ "domain": "codereview.stackexchange", "id": 1552, "tags": "javascript, jquery" }
How can we know the time frames for events in the early universe?
Question: I just finished watching Into the Universe with Stephen Hawking (2010). Specifically the third episode titled 'The Story of Everything.' In the episode Hawking is explaining the mainstream theories of the events just after the Big Bang. What confuses me is that he just said at the beginning that the laws of physics were different and time (among other things) did not exist as we know it during this early stage in the life of the universe, but then goes on to say certain events took certain amounts of time. I don't know if the numbers match, but the same issue exists on this Wikipedia article. How can we even guess the time frames of these events if time itself is largely dependent on the current state of the universe? Answer: When you see the history of the universe plotted against time, the time used is the comoving time i.e. the time measured by a clock that is at rest with respect to the universe around it. This is the time co-ordinate used in the FLRW metric, which is a solution to the equations of GR that, as far as we can tell, gives a good description of the universe back to very early times. Earlier than around the Planck time after the Big Bang we expect the notion of time to become imprecise because it isn't possible to measure times shorter than the Planck time. Without a working theory of quantum gravity it isn't possible to comment further. However for all times later than the Planck time we expect time to be a good co-ordinate and be well behaved. This allows physicists to calculate at what time the various stages in the evolution of the universe happened. Response to comment Let me attempt to phrase my answer more broadly. The first point is that in relativity (both Special and General) you need to be careful talking about time. For example you've probably heard that time runs more slowly when you move at speeds near the speed of light. 
However there is a well-defined standard time that cosmologists use for describing the history of the universe. We call this comoving time. So when you hear statements like "the universe is 13.7 billion years old" we mean it's 13.7 billion years old in comoving time. You don't need to know how comoving time is defined, just that it gives us a good timescale for describing the history of the universe back to the Planck time. Which brings us to ... You've probably also heard of Heisenberg's uncertainty principle. Again I'll gloss over the details, but one side effect of the uncertainty principle is that it's impossible to measure times less than about $5 \times 10^{-44}$ seconds. I don't know of a simple way to explain this to someone who isn't familiar with quantum mechanics, so I'm afraid you'll have to take this on faith. And this brings us back to Hawking's programme. As long as the times we are interested in are greater than $5 \times 10^{-44}$ seconds we can define the time using comoving time so we can assign reliable times to cosmological events like the electroweak transition. But for times close to $5 \times 10^{-44}$ seconds the whole notion of a "time when something happens" becomes meaningless because it's fundamentally impossible to measure times that short. I'd guess this is what Hawking means when he says time ceases to exist.
{ "domain": "physics.stackexchange", "id": 6602, "tags": "time" }
Basis for the Generalization of Physics to a Different Number of Dimensions
Question: I am reading this really interesting book by Zwiebach called "A First Course in String Theory". Therein, he generalizes the laws of electrodynamics to the cases where dimensions are not 3+1. It's an intriguing idea but the way he generalizes seems like an absolute guess with no sound basis. In particular, he generalizes the behavior of electric fields to the case of 2 spatial and 1 temporal dimensions by maintaining $\vec{\nabla} \cdot \vec{E} = \rho$. But I struggle to understand why. I could have maintained that $|\vec{E}|$ falls off as the square of the inverse of the distance from the source. Essentially, there is no way to differentiate between Coulomb's law and Gauss's law in the standard 3+1 dimensions--so how can I prefer one over the other in the other cases? To me, it seems like it becomes purely a matter of one's taste as to which mathematical form seems more generic or deep--based on that one guesses which form would extend its validity in the cases with the number of dimensions different than that in which the experiments have been performed. But, on the other hand, I think there should be a rather sensible reason behind treating the laws in the worlds with a different number of dimensions this way--considering how seriously physicists talk about these things. So, I suppose I should be missing something. What is it? Answer: Great question. First of all, you're absolutely right that until we find a universe with a different number of dimensions in the lab, there's no single "right" way to generalize the laws of physics to different numbers of dimensions - we need to be guided by physical intuition or philosophical preference. But there are solid theoretical reasons for choosing to generalize E&M to different numbers of dimensions by choosing to hold Maxwell's equations "fixed" across dimensions, rather than, say, Coulomb's law, the Biot-Savart law, and the Lorentz force law.
For one thing, it's hard to fit magnetism into other numbers of dimensions while keeping it as a vector field - the defining equations of 3D magnetism, the Lorentz force law and the Biot-Savart law, both involve cross products of vectors, and cross products can only be formulated in three dimensions (and also seven, but that's a weird technicality and the 7D cross product isn't as mathematically nice as the 3D one). For another thing, a key theoretical feature of 3D E&M is that it is Lorentz-invariant and therefore compatible with special relativity, so we'd like to keep that true in other numbers of dimensions. And the relativistically covariant form of E&M much more directly reduces to Maxwell's equations in a given Lorentz frame than to Coulomb's law. For a third thing, 3D E&M possesses a gauge symmetry and can be formulated in terms of the magnetic vector potential (these turn out to be very closely related statements). If we want to keep this true in other numbers of dimensions, then we need to use Maxwell's equations rather than Coulomb's law. These reasons are all variations on the basic idea that if we transplanted Coulomb's law into other numbers of dimensions, then a whole bunch of really nice mathematical structure that the 3D version possesses would immediately fall apart.
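To make the trade-off concrete (this short derivation is mine, not Zwiebach's text): if we hold Gauss's law fixed, the field of a point charge $q$ in $d$ spatial dimensions follows from integrating $\vec{\nabla} \cdot \vec{E} = \rho$ over a ball of radius $r$:

```latex
\oint_{S^{d-1}} \vec{E} \cdot d\vec{A} = q
\;\Longrightarrow\;
E(r)\,\Omega_{d-1}\,r^{d-1} = q
\;\Longrightarrow\;
E(r) = \frac{q}{\Omega_{d-1}\,r^{d-1}},
```

where $\Omega_{d-1}$ is the surface area of the unit $(d-1)$-sphere. For $d = 3$ this reproduces the $1/r^2$ Coulomb law, while for $d = 2$ it gives $E \propto 1/r$ and a logarithmic potential; so "keep Gauss's law" and "keep the inverse-square law" really are different generalizations once $d \neq 3$.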
{ "domain": "physics.stackexchange", "id": 40255, "tags": "electromagnetism, gauss-law, spacetime-dimensions, coulombs-law" }
How to determine if my GBM model is overfitting?
Question: Below is a simplified example of an h2o gradient boosting machine model using R's iris dataset. The model is trained to predict sepal length. The example yields an r2 value of 0.93, which seems unrealistic. How can I assess if these are indeed realistic results or simply model overfitting? library(datasets) library(h2o) # Get the iris dataset df <- iris # Convert to h2o df.hex <- as.h2o(df) # Initiate h2o h2o.init() # Train GBM model gbm_model <- h2o.gbm(x = 2:5, y = 1, df.hex, ntrees=100, max_depth=4, learn_rate=0.1) # Check Accuracy perf_gbm <- h2o.performance(gbm_model) rsq_gbm <- h2o.r2(perf_gbm) ----------> > rsq_gbm [1] 0.9312635 Answer: The term overfitting means the model is learning relationships between attributes that only exist in this specific dataset and do not generalize to new, unseen data. Just by looking at the model accuracy on the data that was used to train the model, you won't be able to detect if your model is or isn't overfitting. To see if you are overfitting, split your dataset into two separate sets: a train set (used to train the model) a test set (used to test the model accuracy) A 90% train, 10% test split is very common. Train your model on the train set and evaluate its performance both on the test and the train set. If the accuracy on the test set is much lower than the model's accuracy on the train set, the model is overfitting. You can also use cross-validation (e.g. splitting the data into 10 sets of equal size, for each iteration use one as test and the others as train) to get a result that is less influenced by irregularities in your splits.
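The split-and-compare check takes only a few lines. Here is a rough sketch using scikit-learn's gradient boosting in place of h2o (same tree count, depth, and learning rate as the question; note the species column is omitted here, so it is an illustration of the method, not a reproduction of the h2o model):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

iris = load_iris()
# Predict sepal length (column 0) from the other numeric columns
X, y = iris.data[:, 1:], iris.data[:, 0]

# 90% train / 10% test split, as suggested above
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

model = GradientBoostingRegressor(n_estimators=100, max_depth=4,
                                  learning_rate=0.1, random_state=0)
model.fit(X_tr, y_tr)

r2_train = r2_score(y_tr, model.predict(X_tr))
r2_test = r2_score(y_te, model.predict(X_te))
# A train R^2 far above the test R^2 is the signature of overfitting.
```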
{ "domain": "datascience.stackexchange", "id": 3549, "tags": "machine-learning, r, accuracy, overfitting, gbm" }
Frequency Translation in MATLAB
Question: I have to design a generic baseband filter. I have a C++ code which gives me filter coefficients based on the input parameters such as no. of taps, no. of bands, band edges, amplitudes and weights (code uses remez exchange algorithm). I import the filter coefficients in MATLAB, do fft and plot the graph of it. I have to shift the graph such that its center is the center of baseband. I know that if I multiply filter coefficients with an exponential function and then do fft the resulting frequency domain output will be shifted in frequency $$x(t)\, e^{j 2\pi f_0 t} \implies X(f-f_0)$$. But I don't know how to do it practically. Do I just have to multiply my filter coefficient vector with an exponential like exp^(j*2*pi*f0) and then take fft and plot? I tried doing that but the results of both the graphs are exactly the same. Answer: You are partly right. However, it's not the filter coefficients $a$,$b$, but the filter impulse response $h[n]$ which you have to multiply with the exponential term $e^{j \omega_0 n}$ to shift its frequency response to a desired center of frequency $\omega_0$. Note that for FIR filters the coefficients are such that $a=1$ and $b[n]=h[n]$; i.e., the impulse response equals the coefficients $b[k]$, but that's not so with the IIR filters.
Assume that you have an FIR filter with coefficients $a=1$ and $b[k]$ so that its impulse response is: $$h[n] = b[n]$$ And let the frequency response of the filter be $H(e^{j \omega})$, then the frequency response of the new filter $h_+[n] = e^{j \omega_0 n} h[n]$ will be $$ h_+[n] = e^{j \omega_0 n} h[n] \implies H_+(e^{j \omega}) = H(e^{j (\omega - \omega_0)}) $$ and similarly the frequency response of the new filter $h_-[n] = e^{-j \omega_0 n} h[n]$ will be $$ h_-[n] = e^{-j \omega_0 n} h[n] \implies H_-(e^{j \omega}) = H(e^{j (\omega + \omega_0)}) $$ Note if you want a real impulse response filter, say $h_0[n]$, then you should add those two sub-shifted filters: $$ h_0[n] = h_+[n] + h_-[n] \implies H_0(e^{j \omega}) = H_+(e^{j \omega}) + H_-(e^{j \omega}) = H(e^{j (\omega - \omega_0)}) + H(e^{j (\omega + \omega_0)}) $$ You can achieve the same result by the following multiplication then: $$ h_0[n] = 2\cos(\omega_0 n) h[n] \implies H_0(e^{j \omega}) = H(e^{j (\omega - \omega_0)}) + H(e^{j (\omega + \omega_0)}) $$ MATLAB implementation would be as follows: h = fir1(64, 0.1); % prototype lowpass filter L = length(h); w0 = 0.5*pi; % bandpass filter new frequency n = [0:L-1]; % discrete-time index vector h0 = 2*cos(w0*n).*h; % new bandpass filter % Display prototype lowpass FIR filter figure,subplot(2,1,1) stem(n,h);title('Prototype Lowpass filter h[n]'); subplot(2,1,2) plot(linspace(-1,1,1024), abs(fftshift(fft(h,1024)))); title('DTFT magnitude |H(e^{j \omega})| of lowpass filter h[n]'); % Display new bandpass FIR filter figure,subplot(2,1,1) stem(n,h0);title('Bandpass filter h_0[n]'); subplot(2,1,2) plot(linspace(-1,1,1024), abs(fftshift(fft(h0,1024)))); title('DTFT magnitude |H_0(e^{j \omega})| of bandpass filter h_0[n]');
{ "domain": "dsp.stackexchange", "id": 5698, "tags": "matlab, filters, filter-design" }
Trivago hotels price checker
Question: I've decided to write my first project in Python. I would like to hear some opinion from you. Description of the script: Generate Trivago URLs for 5 star hotels in specified city. Scrape these URLs to get prices. Save results as SQL queries in text file Execute queries from text file to upload results to database. Settings: The date from which the script is going to check prices. Number of days to check. City to check. Name of file with results of the script. I wrote this script to learn how website scraping in Python works, and how to use SQL. I also wanted to learn something about object-oriented programming. __author__ = '' import datetime import pymysql import lxml.html as lh import re import sys from selenium import webdriver class TrivagoPriceChecker(): from_year = '' from_month = '' from_day = '' days_number = '' city_id = '' hotel_id = '' result_file = '' browser = webdriver.PhantomJS() def __init__(self): print("Trivago Price Checker ver 1.0") def generate_url(self): from_date = datetime.date(int(self.from_year), int(self.from_month), int(self.from_day)) to_date = datetime.date(int(self.from_year),int(self.from_month),int(self.from_day)) + datetime.timedelta(days=int(self.days_number)) url_list = [] while(from_date < to_date): day_plus = from_date + datetime.timedelta(days=1) url = 'http://www.trivago.pl/?aDateRange%5Barr%5D=' + str(from_date) + '&aDateRange%5Bdep%5D=' + str(day_plus) + '&iRoomType=7&iPathId=' + str(self.city_id) + '&iGeoDistanceItem=' + str(self.hotel_id) + '&iViewType=0&bIsSeoPage=false&bIsSitemap=false&' url_list.append(url) from_date += datetime.timedelta(days=1) return url_list def get_hotel_price(self, hotel_url): self.browser.get(hotel_url) content = self.browser.page_source website = lh.fromstring(content) for price in website.xpath('//*[@id="js_item_' + str(self.hotel_id) + '"]/div[1]/div[2]/div[2]/strong[2]'): return price.text def save_result(self): date = datetime.date(int(self.from_year), int(self.from_month), 
int(self.from_day)) file = open(self.result_file, "a") counter = 1 for result in self.generate_url(): try: price = self.get_hotel_price(result).strip() price = re.sub('[^0-9]', '', price) sql_query = "INSERT INTO prices (hotel, city, adate, price) VALUES('" + str(self.hotel_id) +"','" + str(self.city_id) + "','" + str(date) + "','" + str(price) + "');" file.write(sql_query) file.write('\n') print('[' + str(counter) + '/' + str(self.days_number) + '] Hotel ID: ' + str(self.hotel_id)) except AttributeError: print('[' + str(counter) + '/' + str(self.days_number) + '] Hotel ID: ' + str(self.hotel_id) + ' Sold out!') counter += 1 date = date + datetime.timedelta(days=1) file.close() poland = { "poznan": {"city_id": 86470, "hotel_id": [1711505, 163780, 932461, 1164703]}, "warszawa": {"city_id": 86484, "hotel_id": [1503333, 93311, 93181, 93268, 106958, 106956, 127649, 106801, 107386, 93245, 154078, 107032]}, "sopot": {"city_id": 95266, "hotel_id": [228481, 164126, 922891]}, "gdansk": {"city_id": 86490, "hotel_id": [102961, 1008151, 102944, 1503323]}, "krakow": {"city_id": 86473, "hotel_id": [931575, 925925, 102937, 148894, 125181, 930571, 114768, 125763, 106926, 102947, 131257]}, "wroclaw": {"city_id": 86485, "hotel_id": [122767, 123690, 2873646, 1300328, 1511989, 121719]}, "ilawa": {"city_id": 110111, "hotel_id": [2728378]}, "bydgoszcz": {"city_id": 86475, "hotel_id": [936931]}, "kolobrzeg": {"city_id": 114376, "hotel_id": [1288624, 1393804, 3185658, 1217228]}, "mikolajki": {"city_id": 110236, "hotel_id": [2873760]}, "rzeszow": {"city_id": 86472, "hotel_id": [2591078]}, "zakopane": {"city_id": 112161, "hotel_id": [408841, 1828491, 320661]}, "ostroda": {"city_id": 110301, "hotel_id": [966969]}, "czeladz": {"city_id": 458329, "hotel_id": [2030401]}, "gietrzwald": {"city_id": 110071, "hotel_id": [2733447]}, "krynica_zdroj": {"city_id": 111696, "hotel_id": [1226658]}, "tychy": {"city_id": 86502, "hotel_id": [164039]}, "kielce": {"city_id": 86471, "hotel_id": [1941137]}, 
"miedziana_gora": {"city_id": 470673, "hotel_id": [2175600]}, "brojce": {"city_id": 467917, "hotel_id": [412116]}, "ustka": {"city_id": 93762, "hotel_id": [3082744]}, "lublin": {"city_id": 86481, "hotel_id": [3083850]}, "choczewo": {"city_id": 113541, "hotel_id": [3135678]}, "dziwnow": {"city_id": 114306, "hotel_id": [3213582]}, "ustron": {"city_id": 114126, "hotel_id": [966089]}, "szczawnica": {"city_id": 112051, "hotel_id": [1259175]}} def check_city(from_year, from_month, from_day, days_number, city, result_file): worker = TrivagoPriceChecker() worker.from_year = from_year worker.from_month = from_month worker.from_day = from_day worker.days_number = days_number worker.result_file = result_file if city in poland: worker.city_id = poland[city]["city_id"] print(worker.city_id) for x in poland[city]["hotel_id"]: worker.hotel_id = x worker.save_result() else: print("City not found!") exit() def export_results(db_host, db_port, db_user, db_password, db_name, query_file): connection = pymysql.connect(host=str(db_host), port=db_port, user=str(db_user), passwd=str(db_password), db=str(db_name)) query = connection.cursor() file = open(query_file,"r") progress = 0 for line in file: try: query.execute(line) progress += 1 print(progress) except: pass connection.commit() file.close() connection.close() if __name__ == "__main__": if len(sys.argv) == 7: check_city(str(sys.argv[1]), str(sys.argv[2]), str(sys.argv[3]), str(sys.argv[4]), str(sys.argv[5]), str(sys.argv[6])) else: print("Example usage: main.py 2015 02 01 30 sopot sopot.txt") Answer: Style and bad practices Even ignoring Caridorc's point about splitting the imports, they aren't alphabetical. They should be. Wrap your lines! There is no reason to even consider a 265-character long line. Stop converting: you're going back and forth with datatypes like no tomorrow. There's no reason for from_year to ever be a string if it's required to be a valid integer. Convert once and keep it so.
You even do str(sys.argv[1]), str(sys.argv[2]), str(sys.argv[3]), str(sys.argv[4]), str(sys.argv[5]), str(sys.argv[6]) despite each argument already being a string! Really you should be doing int(sys.argv[1]), int(sys.argv[2]), int(sys.argv[3]), int(sys.argv[4]), sys.argv[5], sys.argv[6] ! Plurals. hotel_ids, not hotel_id. generate_urls, not generate_url. __init__ is your initializer. Let it initialize. Don't do this monstrosity: worker = TrivagoPriceChecker() worker.from_year = from_year worker.from_month = from_month worker.from_day = from_day worker.days_number = days_number worker.result_file = result_file It should be worker = TrivagoPriceChecker(from_year, from_month, from_day, days_number, result_file) Globals! This is important enough to get its own section. When you do class TrivagoPriceChecker(): from_year = '' from_month = '' from_day = '' days_number = '' city_id = '' hotel_id = '' result_file = '' browser = webdriver.PhantomJS() you make a particularly evil type of global: one that can be shadowed at any time by any assignment statement to it through self. You don't need these "default"s at all: just make a proper __init__. Even if you did want defaults like these, you'd want to set them in __init__. Even if you think you want a global browser, you probably want it shared manually for extensibility in the future. Programming with grace Files and pymysql support with. 
Use connection = pymysql.connect(host=db_host, port=db_port, user=db_user, passwd=db_password, db=db_name) with connection as query, open(query_file, "r") as file: progress = 0 for line in file: try: query.execute(line) progress += 1 print(progress) except: pass Also consider enumerate connection = pymysql.connect(host=db_host, port=db_port, user=db_user, passwd=db_password, db=db_name) with connection as query, open(query_file, "r") as file: for progress, line in enumerate(file): try: query.execute(line) print(progress) except: pass Also, reduce the "area" of the try and the number of things it catches. I see that stupid exceptions are thrown, but you should at least avoid catching BaseException (which includes things like KeyboardInterrupt and SystemExit) and just do connection = pymysql.connect(host=db_host, port=db_port, user=db_user, passwd=db_password, db=db_name) with connection as query, open(query_file, "r") as file: for progress, line in enumerate(file): try: query.execute(line) except Exception: pass print(progress) For that matter: exceptions Don't ever do this: if city in poland: # code else: print("City not found!") exit() Not only are you hiding the error by separating it from the condition that triggers it, you're calling exit directly! You're even calling the wrong exit! Do: if city not in poland: raise ValueError("City not found!") # code When you do except AttributeError: there's a large block of code that could be at fault. Use try: # small piece of code except AttributeError: # ... else: # rest of code to narrow this down to something maintainable. In this case, you shouldn't even be using try ... except. DRY, not DRY DRY You have, after prior simplification: from_date = datetime.date(self.from_year, self.from_month, self.from_day) to_date = datetime.date(self.from_year, self.from_month, self.from_day) + datetime.timedelta(days=self.days_number) Why? 
This should just be from_date = datetime.date(self.from_year, self.from_month, self.from_day) to_date = from_date + datetime.timedelta(days=self.days_number) In save_result you again do date = datetime.date(self.from_year, self.from_month, self.from_day) This is silly; you call this a lot. In __init__, just initialize self.from_date and self.to_date and drop the other nonsense. Don't date = date + datetime.timedelta(days=1) use date += datetime.timedelta(days=1) You do this just above; surely you know of it. Use formatting instead of str(a) + 'xxx' + str(b) + 'xxx' + ...: sql_query = ( "INSERT INTO prices (hotel, city, adate, price)" " VALUES('{self.hotel_id}','{self.city_id}','{date}','{price}');" ).format(self=self, date=date, price=price) Which brings me to... SQL INJECTION! You should escape. The simple way is just sql_query = ( "INSERT INTO prices (hotel, city, adate, price)" " VALUES('{hotel_id}','{city_id}','{date}','{price}');" ).format( hotel_id=pymysql.escape_string(self.hotel_id), city_id=pymysql.escape_string(self.city_id), date=date, price=price ) In reality, any executing query should use better mechanisms (like interpolating directly with cursor.execute) but you're writing to a file so this isn't as easy. And finally You pass hotel_id as a parameter to save_result by setting an instance variable. This is ugly; just pass a parameter. city_id should probably be passed the same way for symmetry. I've done a little more cleaning and came up with this.
__author__ = 'Mateusz Ostaszewski' import datetime import lxml.html as lh import re import sys import pymysql from selenium import webdriver class TrivagoPriceChecker: def __init__(self, browser, year, month, day, days_number, result_file): print("Trivago Price Checker ver 1.0") self.browser = browser self.from_date = datetime.date(year, month, day) self.days_number = days_number self.result_file = result_file def generate_urls(self, city_id, hotel_id): url = ( "http://www.trivago.pl/" "?aDateRange%5Barr%5D={}" "&aDateRange%5Bdep%5D={}" "&iRoomType=7&iPathId={}" "&iGeoDistanceItem={}" "&iViewType=0" "&bIsSeoPage=false" "&bIsSitemap=false" ) def make_url(day_num): day = self.from_date + datetime.timedelta(days=day_num) return day, url.format(day, day + datetime.timedelta(days=1), city_id, hotel_id) return [make_url(day) for day in range(self.days_number)] def get_hotel_price(self, hotel_id, hotel_url): self.browser.get(hotel_url) content = self.browser.page_source website = lh.fromstring(content) # Get first if exists, otherwise return None for price in website.xpath('//*[@id="js_item_{}"]/div[1]/div[2]/div[2]/strong[2]'.format(hotel_id)): return price.text def save_result(self, city_id, hotel_id): def esc(x): return pymysql.escape_string(str(x)) query = "INSERT INTO prices (hotel, city, adate, price) VALUES('{}','{}','{{}}','{{}}');\n" query = query.format(esc(hotel_id), esc(city_id)) with open(self.result_file, "a") as file: for day, result in self.generate_urls(city_id, hotel_id): price = self.get_hotel_price(hotel_id, result) if price is not None: price = re.sub('[^0-9]', '', price.strip()) file.write(query.format(esc(day), esc(price))) soldout = " Sold out!"
if price is None else "" print('[{}/{}] Hotel ID: {}{}'.format(day, self.days_number, hotel_id, soldout)) poland = { "poznan": {"city_id": 86470, "hotel_ids": [1711505, 163780, 932461, 1164703]}, "warszawa": {"city_id": 86484, "hotel_ids": [1503333, 93311, 93181, 93268, 106958, 106956, 127649, 106801, 107386, 93245, 154078, 107032]}, "sopot": {"city_id": 95266, "hotel_ids": [228481, 164126, 922891]}, "gdansk": {"city_id": 86490, "hotel_ids": [102961, 1008151, 102944, 1503323]}, "krakow": {"city_id": 86473, "hotel_ids": [931575, 925925, 102937, 148894, 125181, 930571, 114768, 125763, 106926, 102947, 131257]}, "wroclaw": {"city_id": 86485, "hotel_ids": [122767, 123690, 2873646, 1300328, 1511989, 121719]}, "ilawa": {"city_id": 110111, "hotel_ids": [2728378]}, "bydgoszcz": {"city_id": 86475, "hotel_ids": [936931]}, "kolobrzeg": {"city_id": 114376, "hotel_ids": [1288624, 1393804, 3185658, 1217228]}, "mikolajki": {"city_id": 110236, "hotel_ids": [2873760]}, "rzeszow": {"city_id": 86472, "hotel_ids": [2591078]}, "zakopane": {"city_id": 112161, "hotel_ids": [408841, 1828491, 320661]}, "ostroda": {"city_id": 110301, "hotel_ids": [966969]}, "czeladz": {"city_id": 458329, "hotel_ids": [2030401]}, "gietrzwald": {"city_id": 110071, "hotel_ids": [2733447]}, "krynica_zdroj": {"city_id": 111696, "hotel_ids": [1226658]}, "tychy": {"city_id": 86502, "hotel_ids": [164039]}, "kielce": {"city_id": 86471, "hotel_ids": [1941137]}, "miedziana_gora": {"city_id": 470673, "hotel_ids": [2175600]}, "brojce": {"city_id": 467917, "hotel_ids": [412116]}, "ustka": {"city_id": 93762, "hotel_ids": [3082744]}, "lublin": {"city_id": 86481, "hotel_ids": [3083850]}, "choczewo": {"city_id": 113541, "hotel_ids": [3135678]}, "dziwnow": {"city_id": 114306, "hotel_ids": [3213582]}, "ustron": {"city_id": 114126, "hotel_ids": [966089]}, "szczawnica": {"city_id": 112051, "hotel_ids": [1259175]} } def check_city(year, month, day, days_number, city, result_file): if city not in poland: raise ValueError("City not 
found!") browser = webdriver.PhantomJS() worker = TrivagoPriceChecker(browser, year, month, day, days_number, result_file) city_id = poland[city]["city_id"] print(city_id) for hotel_id in poland[city]["hotel_ids"]: worker.save_result(city_id, hotel_id) def export_results(db_host, db_port, db_user, db_password, db_name, query_file): connection = pymysql.connect(host=db_host, port=db_port, user=db_user, passwd=db_password, db=db_name) with connection as query, open(query_file, "r") as file: for progress, line in enumerate(file): try: query.execute(line) except Exception: pass print(progress) def main(): try: _, year, month, day, days_number, city, result_file = sys.argv except ValueError: raise SystemExit("Example usage: main.py 2015 02 01 30 sopot sopot.txt") check_city(int(year), int(month), int(day), int(days_number), city, result_file) if __name__ == "__main__": main()
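As a footnote to the SQL-injection point above: when you do execute queries directly instead of writing them to a file, let the driver bind the values for you. A sketch with the standard-library sqlite3 module (pymysql works the same way, with %s placeholders instead of ?):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (hotel TEXT, city TEXT, adate TEXT, price TEXT)")

# A hostile value that would break naive string formatting:
row = ("93311", "86484", "2015-02-01", "0'); DROP TABLE prices; --")

# Placeholders: the driver quotes/escapes, so the payload is stored as plain data
conn.execute("INSERT INTO prices (hotel, city, adate, price) VALUES (?, ?, ?, ?)", row)

stored = conn.execute("SELECT price FROM prices").fetchone()[0]
```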
{ "domain": "codereview.stackexchange", "id": 11609, "tags": "python, beginner, web-scraping" }
Layman Interpretation: Quantum Factoring Algorithm
Question: I must firstly express that I know only a little about quantum computing and my knowledge comes largely from popular science texts and the media. So, I'm hoping that somebody will be able to help me to correct my understanding of quantum computing. My understanding is as follows: a qubit acts as though it is in both states at once (1 and 0) a register of n qubits can act as though it is in any of $2^n$ states. this has obvious benefits in terms of a factoring algorithm: we can identify whether any of these states is a 'correct' solution to a problem. however, I understand that although we can identify whether or not there is a correct state, we can not necessarily observe the state which is correct So, my assumption up to now has been that we can 'pin' one or more of the qubits by replacing with a classical bit, and observe whether or not the remaining set of states still contain a correct solution. (Simple!) My problem is that this would seem to lead to a solution to the factoring problem in $O(\log n)$ time, by passing down and pinning each of the bits. I haven't proved that out but it feels right, based on my assumptions. However, Shor's algorithm takes $O((\log n)^3)$ and doesn't seem that simple. I'd like to know which of my assumptions are wrong, but Wikipedia's description of Shor's algorithm seems intractable to me. Can you help identify my misconception? Which of my points of understanding are correct/incorrect? Thanks! Answer: The problem stems from the fact that your understanding of quantum computing (as outlined in the question) is incorrect. Certainly quantum computers are poorly explained in most popularizations. A quantum computer cannot simply check if a solution exists within a superposition deterministically, let alone in unit time.
The most general representation of the quantum state of $n$ qubits is the $2^n \times 2^n$ density matrix $\rho$, where the diagonal entries correspond to the probability of finding the system in a particular classical state, and the off-diagonal entries indicate the phase of the superposition (going to 0 for classical probability distributions). Any quantum algorithm can be seen as a unitary operation followed by some measurement in the computational basis. Given an initial input state $\rho$, it will evolve to $\rho' = U\rho U^\dagger$. The most general set of measurements that can be performed are called positive operator valued measurements (POVMs), where each measurement outcome $i$ is associated with a positive semi-definite Hermitian operator $P_i$ and the probability of outcome $i$ is given by $p_i = \mbox{Tr}(P_i \rho)$ (and hence $\sum_i P_i = \mathbb{I}$). Within these constraints the operations you describe simply are not possible.
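A tiny numerical illustration of the measurement rule (my own toy example, not from the answer): for a single qubit, the computational-basis projectors form the simplest POVM, and $p_i = \mathrm{Tr}(P_i \rho)$ just reads off the diagonal of $\rho$:

```python
import numpy as np

# A valid one-qubit density matrix: Hermitian, trace 1, positive semi-definite
rho = np.array([[0.75, 0.25],
                [0.25, 0.25]])

# Computational-basis projectors; P0 + P1 = identity, so the probabilities sum to 1
P0 = np.diag([1.0, 0.0])
P1 = np.diag([0.0, 1.0])

probs = [float(np.trace(P @ rho).real) for P in (P0, P1)]
print(probs)  # [0.75, 0.25]
```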
{ "domain": "cstheory.stackexchange", "id": 1481, "tags": "quantum-computing, big-picture, factoring" }
How do big rocks split in half?
Question: So, recently I was on a trip in which I could observe a lot of huge rocks (1 to 20 meters in diameter), and I found odd how in several occasions rocks would be split in two parts, with a neat division. An instance of what I am talking about is given in google images. I was wondering how such "neat" splitting can happen: is it uniquely the result of impacts with other rocks or can it be caused also by thermic shock or some other mechanism? My impression is that an impact with the exact energy to split the rock in half (and not crush it in many pieces) could be even more rare than a fluctuation of thermal nature so extreme to split the boulder, but I could be wrong and this thermal splitting could just be too unlikely (like finding all molecules of a roomful of gas in the same spot). However especially in those cases in which the split rock is not surrounded by other fragments I would tend to say that the splitting should not be due to an impact. But again, I could be wrong. Summing up, the question is: which between impact, thermal shock and any other explanation should be considered the main responsible for the split boulders that can be found around the world? (and how do other factors compare to the main one?) Answer: I will answer one part of the question and invite others to add to this. Rocks are brittle, which means that they tend to fracture when struck. As in any brittle material, the fracture usually begins at a pre-existing crack in the rock that is close to the impact point. A fracture will be self-propagating if the energy required to create a pair of free surfaces along the crack path is supplied by strain energy that is stored in the material. 
Since most old rocks at the earth's surface were originally formed deep underground, it is the usual thing for them to contain significant amounts of residual compressive stress that was locked into their microstructure when they either solidified from melt or underwent recrystallization during metamorphosis underground. Once exposed in a large solid mass at the surface, it's then common for the rock mass to crack apart in order to relieve those frozen-in stresses. This is especially true for granitic rocks, in which the exposed rock mass exfoliates in huge slabs- Yosemite valley being a prime example- leaving rounded domes of granite behind. This means it is possible with a little luck to neatly split a boulder in half if it has residual stresses in it and if it has a pre-existing crack in it.
{ "domain": "physics.stackexchange", "id": 58162, "tags": "thermodynamics, statistical-mechanics, geophysics, nature" }
Can we evolve 0 and 1?
Question: Is it possible to combine or create conditional statements of 0 and 1, and optimize with an evolutionary algorithm (given that all computers use a binary system)? There may be an algorithm that maps input and output to 0 and 1, or a conditional statement that edits a conditional statement. An example of a binary conditional statement is if 11001 then 01110. Just as molecules are combined to form living beings, we could begin with the most fundamental operations (0 and 1, if then) to develop intelligence. Answer: Is it possible to combine or create conditional statements of 0 and 1, and optimize with an evolutionary algorithm (given that all computers use a binary system)? Yes, evolutionary algorithms are very general, and can be used to modify almost any source data structure, including logic trees or even executable code, provided you have some measure of fitness to optimise for. This can be taken to a highly abstract level, such as the paper Evolution of evolution: Self-constructing Evolutionary Turing Machine case study where an evolutionary algorithm is used to optimise other evolutionary algorithms which solve tasks using generic models of computing. However, there are two important caveats: There needs to be a measurement phase to establish the fitness of the algorithm. This can be very complex, depending on the problem you attempt to solve. Genetic algorithms may be very general optimisers (capable of finding optimal solutions for problems when other algorithms may fail), but may also be very inefficient and slow depending on the size of the allowed genomes, and how the search space is structured relative to available genetic operations. begin with the most fundamental operations (0 and 1, if then) to develop intelligence. 
Provided the fitness measure allows for the expression of intelligence, then this seems theoretically possible - in the sense that there are no known reasons why a sufficiently complex logical machine could not be intelligent by any measure we have of abstract intelligence (excluding measures deliberately constructed to exclude computational models such as "intelligence is the capability of a living system . . .") However, such a project faces some barriers which currently look insurmountable: There is no formal measure of general intelligence to use as a fitness function. This could be worked around using an e-life approach of providing a suitable rich virtual environment and allowing agents to compete for resources in the hope that the most competitive agents would exhibit intelligent behaviour - but that begs the question of how you could recognise and select those agents through any objective measure. Any environment rich enough to select for general intelligence, whilst simulating low-level agent logic is likely to require a lot of computation. Our one example of evolving basic building blocks into beings we call intelligent took billions of years, whilst processing billions upon billions of separate evolving entities at any one time. These last two points imply a computational cost far beyond current technology, so there is no route to actually running this experiment for real.
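The first, uncontroversial part (evolving bit strings against a fitness measure) fits in a few lines. A minimal sketch of my own, where the toy OneMax measure stands in for whatever measurement phase a real problem would need:

```python
import random

random.seed(0)
GENOME_LEN, POP, ELITE = 16, 20, 10

def fitness(genome):
    # OneMax: number of 1-bits; a stand-in for a problem-specific fitness measure
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit independently with probability `rate`
    return [bit ^ (random.random() < rate) for bit in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == GENOME_LEN:
        break
    # Elitism: keep the best half, refill with mutated copies of survivors
    pop = pop[:ELITE] + [mutate(random.choice(pop[:ELITE])) for _ in range(POP - ELITE)]

best = max(fitness(g) for g in pop)
```

The hard part, as the answer says, is not this loop but replacing OneMax with a fitness function rich enough to select for intelligence.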
{ "domain": "ai.stackexchange", "id": 1165, "tags": "machine-learning, genetic-algorithms, evolutionary-algorithms" }
Calculate Compressibility of water
Question: I have 1 litre of water at 30 Celsius, and there is a pressure of 10 mega psi acting on it. I want to calculate the volume decrease caused by the pressure. Thanks Answer: The compression of a substance (liquid or solid) under pressure is described by the bulk modulus, $K$. The bulk modulus is a function of the compression, so the compression is given by a differential equation: $$ \frac{\text{d}V}{V} = -\frac{\text{d}P}{K} \tag{1} $$ In many cases we can approximate $K$ as constant, in which case equation (1) becomes: $$ \frac{\Delta V}{V_0} = -\frac{\Delta P}{K} \tag{2} $$ where $V_0$ is the original volume. So to do your calculation you just need to Google for the bulk modulus of water in the pressure range you're interested in. Your question could be interpreted as asking how you calculate the bulk modulus from first principles. This would require a quantum mechanics calculation of the structure of the material. You'd need a biiiiiig computer!
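To put numbers to this, here is a short Python sketch of the constant-$K$ approximation. The bulk modulus value ($\approx 2.2\ \mathrm{GPa}$ near room temperature) is an assumed textbook figure, not something from the answer, and at the questioner's 10 mega-psi the compression is far outside the linear regime, so the integrated (exponential) form is included as well.

```python
import math

K_WATER = 2.2e9  # Pa; approximate bulk modulus of water near room temperature (assumed value)

def delta_v_linear(v0, delta_p, k=K_WATER):
    """Small-compression estimate from equation (2): dV = -V0 * dP / K."""
    return -v0 * delta_p / k

def v_integrated(v0, delta_p, k=K_WATER):
    """Integrating dV/V = -dP/K with constant K gives V = V0 * exp(-dP/K),
    which stays sensible even when dP is not small compared to K."""
    return v0 * math.exp(-delta_p / k)

# Example: 1 litre of water under 100 atm (~1.013e7 Pa)
dv = delta_v_linear(1.0, 1.013e7)  # about -0.0046 L, i.e. a ~4.6 mL decrease
```

For the original 10 mega-psi ($\approx 6.9\times10^{10}$ Pa) the ratio $\Delta P / K$ exceeds 1, so neither formula should be trusted without pressure-dependent bulk modulus data.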
{ "domain": "physics.stackexchange", "id": 17365, "tags": "pressure, water, volume" }
robot_localization ukf not publishing map->odom
Question: I am trying to use the ukf node in the robot_localization but this node is not publishing the transform from the frame map to odom and the following error message appear: Could not obtain transform from odom to map. Error was "map" passed to lookupTransform argument target_frame does not exist. My config file is the following: frequency: 30 sensor_timeout: 0.1 two_d_mode: true transform_time_offset: 0.0 transform_timeout: 0.0 print_diagnostics: true debug: false debug_out_file: ~/.ros/robot/rDebug.log map_frame: map odom_frame: odom base_link_frame: base_link world_frame: map odom0: odom odom0_config: [true, true, false, false, false, true, false, false, false, false, false, false, false, false, false] odom0_queue_size: 2 odom0_nodelay: false odom0_differential: false odom0_relative: false odom0_pose_rejection_threshold: 5 odom0_twist_rejection_threshold: 1 pose0: pose pose0_config: [true, true, false, false, false, true, false, false, false, false, false, false, false, false, false] pose0_differential: true pose0_relative: false pose0_queue_size: 5 pose0_rejection_threshold: 2 pose0_nodelay: false use_control: true stamped_control: false control_timeout: 0.2 control_config: [true, false, false, false, false, true] acceleration_limits: [1.3, 0.0, 0.0, 0.0, 0.0, 3.4] deceleration_limits: [1.3, 0.0, 0.0, 0.0, 0.0, 4.5] acceleration_gains: [0.8, 0.0, 0.0, 0.0, 0.0, 0.9] deceleration_gains: [1.0, 0.0, 0.0, 0.0, 0.0, 1.0] process_noise_covariance: [0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.025, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.025, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.015] initial_estimate_covariance: [1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9] alpha: 0.001 kappa: 0 beta: 2 My inputs are odometry and a PoseWithCovarianceStamped. The frame_ids in the message does not have slashes. Odometry/filtered is not being published as well. All other transformations are okay including odom->base_link. What am I doing wrong? Is it possible to use the robot_localization package having a known map and a laser scan? Update: I am using the differential drive gazebo plugin to generate the odom->base_link transform. 
Input_messages: Odometry input message: header: seq: 346 stamp: secs: 34 nsecs: 723000000 frame_id: odom child_frame_id: base_link pose: pose: position: x: 3.09824971028e-06 y: -4.12930698334e-11 z: 0.0 orientation: x: 0.0 y: 0.0 z: -0.000308331906876 w: 0.999999952466 covariance: [1e-05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1000000000000.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1000000000000.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1000000000000.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001] twist: twist: linear: x: -6.59023772555e-06 y: 4.06376386433e-09 z: 0.0 angular: x: 0.0 y: 0.0 z: -6.0041686542e-05 covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] Pose input message: header: seq: 88 stamp: secs: 110 nsecs: 20000000 frame_id: odom pose: pose: position: x: 0.0339823272081 y: -0.0283254735522 z: 0.0 orientation: x: 0.0 y: 0.0 z: -0.000438910551745 w: 0.999999903679 covariance: [1.4186971384333447e-05, -2.3090090053301537e-06, 0.0, 0.0, 0.0, 0.0, -2.309008550582803e-06, 1.7632721210247837e-05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.844621237978572e-06, 0.0, 0.0, 0.0, 0.0, 0.0, -3.6789799651160138e-06, 0.0, 0.0, 0.0, 1.844621237978572e-06, -3.678979510368663e-06, 2.932679535661009e-06] Originally posted by agbj on ROS Answers with karma: 83 on 2017-03-29 Post score: 0 Original comments Comment by Tom Moore on 2017-04-11: Can you please change the formatting of your config file to use the code formatting (tiny box with ones and zeros)? Also, please post a sample message from every input. What are you using to generate the odom->base_link transform? 
Answer: Your question is already answered here: Using robot_localization with amcl Originally posted by Orhan with karma: 856 on 2017-04-07 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by agbj on 2017-04-10: Thank you for the answer, but to publish both the pose and odometry in the same frame and get the transform from map to odom, I think I have to publish the odometric information in the map frame. Correct me if I am wrong. But the node that publishes the odometry does not know how to transform the pose. Comment by Orhan on 2017-04-10: That is explained in the link above. If you tune correctly (remappings etc.) it publishes transforms between frames automatically. Comment by agbj on 2017-04-10: The transform between map and odom is being published by the ukf node, but it is incorrect because in rviz the robot just stands still, and I think it is because I am publishing both the pose and odometry in the odom frame. When I use only my localization node the robot is able to localize itself reasonably.
{ "domain": "robotics.stackexchange", "id": 27461, "tags": "navigation, robot-localization, transform" }
How to prove the structured program theorem?
Question: Wikipedia: The structured program theorem [...] states that [...] any algorithm can be expressed using only three control structures. They are Executing one subprogram, and then another subprogram (sequence) Executing one of two subprograms according to the value of a boolean expression (selection) Executing a subprogram until a boolean expression is true (iteration) This theorem is developed in the following papers: C. Böhm, "On a family of Turing machines and the related programming language", ICC Bull., 3, 185–194, July 1964. C. Böhm, G. Jacopini, "Flow diagrams, Turing Machines and Languages with only Two Formation Rules", Comm. of the ACM, 9(5): 366–371, 1966. Unfortunately, the first one is practically unavailable, and the second one, in addition to being a bit cryptic (at least for me), refers to the first, so I have trouble understanding the proof. Can anyone help me? Is there a modern paper or book which presents the proof? Thanks. UPDATE To be exact, I would like to understand the second part of the CACM paper (section 3). The authors write in section 1 the following: In the second part of the paper (by C. Böhm), some results of a previous paper are reported [8] and the results of the first part of this paper are then used to prove that every Turing machine is reducible into, or in a determined sense is equivalent to, a program written in a language which admits as formation rules only composition and iteration. Here [8] refers to the unavailable ICC Bulletin paper. It is easy to see that the above quote from Wikipedia refers to this second part of the CACM paper (the Turing machine serves as a precise definition of algorithms; "composition" means sequence; an iteration can replace a selection). Answer: The Böhm-Jacopini theorem essentially says that a program based on 'goto' is functionally equivalent to one consisting of 'while' and 'if'.
Sketch of proof based on the following lecture notes: Say you have a program that consists of a sequence of statements $S_i$; prefix each statement with a label $S_i^{\prime} \gets L_i: S_i$ and update existing goto statements to point to the right locations. Declare a location variable, $l \gets 1$, and wrap the prefixed statements in a while loop that will continue until the last statement is reached, $\textbf{while} \ (l \neq M) \ \textbf{do} \ S^{\prime}$, where $M$ is the label of the final (halt) statement. Apply the following rewrite rules to $S^{\prime}$: Goto rule: Replace $L_i \ \textbf{goto} \ L_j$ with $\textbf{if} \ (l = i) \ \textbf{then} \ l \gets j$ If-goto rule: Replace $L_i : \textbf{if} \ \text{(cond)} \ \textbf{then} \ \textbf{goto} \ L_j$ with $\textbf{if} \ (l = i \land \text{(cond)}) \ \textbf{then} \ l \gets j \ \textbf{else} \ l \gets l + 1$ Otherwise rule: Replace $L_i : S_i$ with $\textbf{if} \ l = i \ \textbf{then} \ S_i, \quad l \gets l + 1$ The resulting program is then free of goto statements, showing the correspondence between the two paradigms. A more formal proof is provided in your second reference. Update After studying the paper in greater detail, this is my interpretation: Based on the definitions in CACM, let $\mathcal{B}^{\prime} = \mathcal{D}(\alpha, \lbrace \lambda, R \rbrace)$ be the class of Turing machines represented by the language $\mathcal{P}^{\prime}$ that covers flow diagrams. Let $\mathcal{B}^{\prime \prime} = \mathcal{E}(\alpha, \lbrace \lambda, R \rbrace)$ be the class of Turing machines represented by the language $\mathcal{P}^{\prime \prime}$ that covers the transformation described above. According to CACM, the author wants to prove that if a Turing machine exists in the family of Turing machines described by $M \in \mathcal{B}^{\prime}$, then there exists a corresponding Turing machine in the family of Turing machines described by $M^{*} \in \mathcal{B}^{\prime \prime}$.
Sketch of the author's proof: He does this by drawing from the derived relationship $M \in \mathcal{B}^{\prime}$ and $M \in \mathcal{E}(\cdots)$ (definition (5)) and then sets up the relationship $M^{*} \in \mathcal{E}^{*}(\cdots)$. For each component of $\mathcal{E}(\cdots)$ he draws a one-to-one mapping with those of $\mathcal{E}^*(\cdots)$. Thus by establishing the bridge between $M$ and $M^{*}$ through $\mathcal{E}(\cdots)$ the author proves $M \in \mathcal{B}^{\prime} \implies M^{*} \in \mathcal{B}^{\prime\prime}$.
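To make the three rewrite rules tangible, here is a small, hypothetical Python illustration (mine, not from the paper or the lecture notes): a countdown written as a labelled goto program in the comments, then rendered as a single while loop driven by a location variable l, exactly as the goto, if-goto and otherwise rules prescribe.

```python
def countdown_structured(n):
    """Structured rendering of the goto program:
         L1: if n == 0 goto L4
         L2: n = n - 1; record n
         L3: goto L1
         L4: halt
    """
    out = []
    l, M = 1, 4           # location variable; M labels the halt statement
    while l != M:
        if l == 1:        # if-goto rule: branch by updating l
            l = 4 if n == 0 else 2
        elif l == 2:      # otherwise rule: do the work, then l <- l + 1
            n -= 1
            out.append(n)
            l = 3
        elif l == 3:      # goto rule: jump by assigning l
            l = 1
    return out
```

Running `countdown_structured(3)` walks the labels 1, 2, 3, 1, … until n reaches 0 and control jumps to the halt label — no goto statement appears, only sequence, selection and iteration.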
{ "domain": "cs.stackexchange", "id": 1855, "tags": "computability, programming-languages, computation-models" }
Which chemical reaction to use to remove carbon/peat from sieved earth/gravel?
Question: I produce gravel for my aquarium by sieving and rinsing earth from my garden. After boiling and disinfecting it (with a solution against parasites), the rest of the organic material is composted or becomes invisible duff inside the gravel during the initial cycling of the aquarium. The only thing remaining visible are peat and other carbon particles of the size of the gravel, which I currently need to remove mechanically. Is there a way to remove them, shrink them, or reduce them to small pieces with a (chain of) chemical reaction(s) that: can be performed with only chemicals and equipment which can be bought at a supermarket or building supplies store; doesn't harm the fish after introducing them into the aquarium some weeks after the treatment - both through toxins and an unstable pH value (more than +/- 0.5); and leaves the color and the size of the gravel untouched, because my motivation to use custom gravel is mostly aesthetic and reducing the size would risk blocking my undergravel filter (it has a filter chamber with a separator for gravel of a certain size)? I thought about: burning the particles, but that seems wasteful in terms of energy and requires some equipment (burner, fireproof permeable net, etc.) I'm looking for a solution for some kg of 1 mm to 3 mm gravel in a 100 l aquarium filled with drinking water with a pH value of 7.2 to 8. In case it's necessary to have more exact info on what the particles in the gravel consist of, I'll ask a separate question about how I can figure this out. Answer: It's easier to use flotation or froth flotation to separate out the less-dense peat (silt?) and carbon (I assume from the activated-carbon filter) than to use a chemical reaction. Simple flotation would depend on making a dense solution (e.g. saturated Epsom salt, $\ce{MgSO4}$) and shaking the gravel mix in it... if the liquid's density is greater than the average density of silt with its entrained air, the gravel will sink and silt float, to be skimmed off.
You can reuse the liquid for more separation. Froth flotation relies on a surfactant wetting the silt and keeping it suspended while the gravel settles out. Try a bit of hand dish-washing detergent.
{ "domain": "chemistry.stackexchange", "id": 7247, "tags": "home-experiment" }
Binary Search Tree Implementation in Java
Question: Looking for critiques on my BST implementation, particularly to check if I'm building the common operational methods and traversal algorithms properly. Code has been lightly tested, and I've tried to make it as legible as possible. public class BinarySearchTree<T extends Comparable<T>> { private BinaryTreeNode<T> root; private int size; /** * Recursively inserts an element into the tree. * @param value value to insert. * @return true if insertion was successful, false if otherwise. */ public boolean insert(T value) { try { this.root = insert(this.root, value); this.size++; return true; } catch(Exception e) { return false; } } private BinaryTreeNode<T> insert(BinaryTreeNode<T> node, T value) { if(node == null) { return new BinaryTreeNode(value); } int result = value.compareTo(node.getData()); if(result <= 0) { node.setLeft(insert(node.getLeft(), value)); } else { node.setRight(insert(node.getRight(), value)); } return node; } /** * Recursively removes a specified element from the tree. * @param value value to remove. * @return true if removal was successful, false if otherwise. */ public boolean remove(T value) { try { this.root = remove(this.root, value); this.size--; return true; } catch(Exception e) { return false; } } private BinaryTreeNode<T> remove(BinaryTreeNode<T> node, T value) { if(!isEmpty()) { while(node != null) { int result = value.compareTo(node.getData()); if (result == 0) { if (node.getLeft() != null && node.getRight() != null) { node = findMin(node.getRight()); remove(node.getRight(), node.getData()); } else { node = (node.getLeft() != null) ? node.getLeft() : node.getRight(); } } else if (result < 0) { node = node.getLeft(); } else { node = node.getRight(); } } } return node; } /** * Recursively finds a specified value within the tree, if it exists. * @param value value to find. * @return node containing the search value, null if value cannot be found. 
*/ public BinaryTreeNode<T> find(T value) { return find(this.root, value); } private BinaryTreeNode<T> find(BinaryTreeNode<T> node, T value) { if(node == null) { return node; } int result = value.compareTo(node.getData()); if(result == 0) { return node; } else if(result < 0) { return find(node.getLeft(), value); } else { return find(node.getRight(), value); } } /** * Returns a node containing the minimum value within tree, if it exists. * @return value containing minimum-value node, null if value cannot be found. */ public BinaryTreeNode<T> findMin() { return findMin(this.root); } private BinaryTreeNode<T> findMin(BinaryTreeNode<T> node) { if(!isEmpty()) { while(node.getLeft() != null) { node = node.getLeft(); } return node; } return null; } /** * Returns a node containing the maximum value within the tree, if it exists. * @return value containing the maximum-value node, null if value cannot be found. */ public BinaryTreeNode<T> findMax() { return findMax(this.root); } private BinaryTreeNode<T> findMax(BinaryTreeNode<T> node) { if(!isEmpty()) { while(node.getRight() != null) { node = node.getRight(); } return node; } return null; } /** * Recursively traverses the tree using In-Order Traversal. * @return a Doubly-Linked List representing the traversal order. */ public ArrayList<BinaryTreeNode<T>> traverseInOrder() { return traverseInOrder(this.root, new ArrayList()); } private ArrayList<BinaryTreeNode<T>> traverseInOrder(BinaryTreeNode<T> node, ArrayList<BinaryTreeNode<T>> order) { if(node == null) { return order; } traverseInOrder(node.getLeft(), order); order.add(node); traverseInOrder(node.getRight(), order); return order; } /** * Recursively traverses the tree using Pre-Order traversal. * @return a Doubly-Linked List representing the traversal order. 
*/ public ArrayList<BinaryTreeNode<T>> traversePreOrder() { return traversePreOrder(this.root, new ArrayList()); } private ArrayList<BinaryTreeNode<T>> traversePreOrder(BinaryTreeNode<T> node, ArrayList<BinaryTreeNode<T>> order) { if(node == null) { return order; } order.add(node); traversePreOrder(node.getLeft(), order); traversePreOrder(node.getRight(), order); return order; } /** * Recursively traverses the tree using Post-Order traversal. * @return a Doubly-Linked List representing the traversal order. */ public ArrayList<BinaryTreeNode<T>> traversePostOrder() { return traversePostOrder(this.root, new ArrayList<BinaryTreeNode<T>>()); } private ArrayList<BinaryTreeNode<T>> traversePostOrder(BinaryTreeNode<T> node, ArrayList<BinaryTreeNode<T>> order) { if(node == null) { return order; } traversePostOrder(node.getLeft(), order); traversePostOrder(node.getRight(), order); order.add(node); return order; } /** * Determines whether or not a value exists within the tree, using Depth-First Search. * Uses a wrapper method to initialize objects required for search traversal. * @param data value to search for. * @return true if the value exists within the tree, false if otherwise. 
*/ public boolean depthFirstSearch(T data) { if(getSize() <= 0) { return false; } Stack<BinaryTreeNode<T>> stack = new Stack(); stack.push(this.root); return depthFirstSearch(stack, data); } private boolean depthFirstSearch(Stack<BinaryTreeNode<T>> stack, T data) { HashMap<BinaryTreeNode<T>, VisitStatus> visited = new HashMap(); while(!stack.isEmpty()) { BinaryTreeNode<T> current = stack.pop(); visited.put(current, VisitStatus.Visiting); if(current.getData().equals(data)) { return true; } if(current.getRight() != null) { if(visited.containsKey(current.getRight())) { if(visited.get(current.getRight()).equals(VisitStatus.Unvisited)) { stack.push(current.getRight()); } } else { stack.push(current.getRight()); } } if(current.getLeft() != null) { if(visited.containsKey(current.getLeft())) { if(visited.get(current.getLeft()).equals(VisitStatus.Unvisited)) { stack.push(current.getLeft()); } } else { stack.push(current.getLeft()); } } visited.put(current, VisitStatus.Visited); } return false; } /** * Determines whether or not a value exists within the tree, using Breadth-First Search. * Uses a wrapper method to initialize objects required for search traversal. * @param data value to search for. * @return true if the value exists within the tree, false if otherwise. 
*/ public boolean breadthFirstSearch(T data) { if(getSize() <= 0) { return false; } LinkedList<BinaryTreeNode<T>> queue = new LinkedList(); queue.addLast(this.root); return breadthFirstSearch(queue, data); } public boolean breadthFirstSearch(Queue<BinaryTreeNode<T>> queue, T data) { HashMap<BinaryTreeNode<T>, VisitStatus> visited = new HashMap(); while(!queue.isEmpty()) { BinaryTreeNode<T> current = queue.remove(); visited.put(current, VisitStatus.Visiting); if(current.getData().equals(data)) { return true; } if(current.getLeft() != null) { if(visited.containsKey(current.getLeft())) { if(visited.get(current.getLeft()).equals(VisitStatus.Unvisited)) { queue.add(current.getLeft()); } } else { queue.add(current.getLeft()); } } if(current.getRight() != null) { if(visited.containsKey(current.getRight())) { if(visited.get(current.getRight()).equals(VisitStatus.Unvisited)) { queue.add(current.getRight()); } } else { queue.add(current.getRight()); } } } return false; } /** * Gets and returns the root of the tree. * @return a node representing the root of the tree. */ public BinaryTreeNode<T> getRoot() { return this.root; } /** * Returns an array representing the current tree. * @param clazz underlying tree data type. * @return an array containing properly-ordered tree values. */ public T[] toArray(Class<T> clazz) { return toArray((T[])Array.newInstance(clazz, this.size), 0, this.root); } private T[] toArray(T[] arr, int i, BinaryTreeNode<T> node) { if(node == null || i > this.size - 1) { return arr; } arr[i] = node.getData(); arr = (node.getLeft() != null) ? toArray(arr, (2 * i) + 1, node.getLeft()) : arr; arr = (node.getRight() != null) ? toArray(arr, (2 * i) + 2, node.getRight()) : arr; return arr; } /** * Builds a tree from a specified array. * @param arr array of source values. */ public void toTree(T[] arr) { for(int i = 0; i < arr.length; i++) { insert(arr[i]); } } /** * Determines whether or not the tree is empty. * @return true if tree is empty, false if otherwise. 
*/ public boolean isEmpty() { if(this.root == null) { return true; } return false; } /** * Returns the current size of the tree. * @return an integer representing the size of the tree. */ public int getSize() { return this.size; } } BinaryTreeNode class: public class BinaryTreeNode<T> extends TreeNode<T> { private BinaryTreeNode<T> left; private BinaryTreeNode<T> right; public BinaryTreeNode() {} public BinaryTreeNode(T data) { super(data); } public BinaryTreeNode(BinaryTreeNodeBuilder<T> builder) { this.data = builder.data; this.left = builder.left; this.right = builder.right; } public BinaryTreeNode<T> getLeft() { return left; } public void setLeft(BinaryTreeNode<T> left) { this.left = left; } public BinaryTreeNode<T> getRight() { return right; } public void setRight(BinaryTreeNode<T> right) { this.right = right; } public static class BinaryTreeNodeBuilder<T> { private T data; private BinaryTreeNode<T> left; private BinaryTreeNode<T> right; public BinaryTreeNodeBuilder<T> data(T data) { this.data = data; return this; } public BinaryTreeNodeBuilder<T> left(BinaryTreeNode<T> left) { this.left = left; return this; } public BinaryTreeNodeBuilder<T> right(BinaryTreeNode<T> right) { this.right = right; return this; } } } TreeNode class: public class TreeNode<T> { protected T data; public TreeNode() {} public TreeNode(T data) { this.data = data; } public T getData() { return data; } public void setData(T data) { this.data = data; } } VisitStatus enum: public enum VisitStatus { Unvisited, Visiting, Visited } Answer: Enum members They should be uppercase. They're constants. enum VisitStatus { UNVISITED, VISITING, VISITED } TreeNode This is not a useful abstraction. Inheritance should be avoided unless there's a very good reason to use it. Favour composition. In this case, just consolidate TreeNode into BinaryTreeNode. Builder pattern Your code doesn't actually use your builder, but I'll comment on it anyway. 
Having a public constructor which takes a builder is unusual: public BinaryTreeNode(BinaryTreeNodeBuilder<T> builder) { I would make this private. To then instantiate the buildable, it's much more common to have a build method on your builder: @Override public BinaryTreeNode<T> build() { return new BinaryTreeNode<>(data, left, right); } Optionally, you can have your builder implement a builder interface, such as org.apache.commons.lang3.builder.Builder, or define your own like so: interface Builder<T> { T build(); } See Item 2 in Josh Bloch's Effective Java for a good example of the pattern. In your case you don't gain much from having a builder as your node only has 3 properties. Aim for immutability You have a setData method which is never used. Conceptually does it make sense to change a node's data once it's been created? Probably not. Remove the setter and declare data as final. The fewer fields that can change in a class, the easier that class becomes to work with. If a class is completely immutable, you get benefits like thread-safety without having to synchronize. If a class must be mutable - for example, for performance reasons - then make it as immutable as possible. Don't blanket catch exceptions In some methods, you catch all exceptions and return false to indicate a failure: try { this.root = insert(this.root, value); this.size++; return true; } catch(Exception e) { return false; } There's any number of low-level problems you could be masking here that have nothing to do with your code, for example an OutOfMemoryError. Enough has been said about this already. See this question, for example. Don't repeat yourself You might have been able to observe that your private findMin and findMax methods have almost identical implementations. 
You can consolidate them into one method which takes a function as an additional parameter: private BinaryTreeNode<T> findExtreme(BinaryTreeNode<T> node, final Function<BinaryTreeNode<T>, BinaryTreeNode<T>> getter) { if(!isEmpty()) { while(getter.apply(node) != null) { node = getter.apply(node); } return node; } return null; } Then you can change your public methods to: return findExtreme(this.root, BinaryTreeNode::getLeft); and return findExtreme(this.root, BinaryTreeNode::getRight); Prefer abstractions There are numerous places where you return or expect an ArrayList. This is needlessly restrictive. It is preferable to use more abstract concepts, i.e. interfaces such as List or Collection, because it allows you to change the implementation later - perhaps to improve performance - without breaking the code of clients who call your methods. Use type inference to your advantage Here, for example, you can get away with using the diamond operator to remove the need to repeat the type twice: public ArrayList<BinaryTreeNode<T>> traversePostOrder() { return traversePostOrder(this.root, new ArrayList<BinaryTreeNode<T>>()); } The ArrayList instantiation can simply become new ArrayList<>(). Don't use raw types Here you are using the raw type of HashMap. You should never need to use raw types after Java 1.5. HashMap<BinaryTreeNode<T>, VisitStatus> visited = new HashMap(); Once again, you can use the diamond operator to infer the correct type from the left-hand side. if(condition) { return true; } else { return false; } If you ever find yourself writing something in the above format - and you do: public boolean isEmpty() { if(this.root == null) { return true; } return false; } Then you can always simplify it to return condition;. In your case: return this.root == null; This is a little quirk of boolean logic that beginners often don't spot. I know it took me a while to get it. Once you do, you'll cringe any time you see someone write it the way you have!
I'm not an algorithms guy so I haven't commented on correctness or efficiency.
{ "domain": "codereview.stackexchange", "id": 29285, "tags": "java, tree, breadth-first-search, depth-first-search" }
Difference between baking soda and baking powder
Question: I learnt that the chemical formula for baking soda and baking powder is NaHCO3. However, when I do a simple experiment, which is to put vinegar into both substances, I get different observations. When I put vinegar into baking soda, I get a fast and fizzy reaction. But when I put vinegar into baking powder, instead of a fast and fizzy reaction, I get a slow and bubbly reaction. So, my question is: what properties does each substance have that make their reactions with vinegar different, although their chemical formulas are the same? Answer: As LDC3 says, baking soda is simply sodium bicarbonate, $\ce{NaHCO3}$. When one adds an acid, the following reaction occurs: $\ce{HCO3- + HA -> H2CO3 + A-}$ and the carbonic acid releases carbon dioxide: $\ce{H2CO3 -> CO2 + H2O}$ (these are both equilibrium reactions, but as you can observe when you add vinegar, a lot of gas is produced and the gas is generally lost, pushing the equilibrium towards making carbon dioxide). Baking powder is a mixture of sodium bicarbonate and one or more weak acids. When dry, no reactions occur, but when wet, the components dissolve and can react. The observed differences when mixing vinegar with baking powder probably result from the fact that there is less sodium bicarbonate in a given volume of baking powder than baking soda, and baking powder often contains starch, which can influence the properties of the added liquid, giving the bubbles a different appearance.
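As a back-of-the-envelope companion to the reactions above: each mole of $\ce{NaHCO3}$ ultimately releases one mole of $\ce{CO2}$, so the gas volume is easy to estimate. The Python sketch below assumes excess vinegar, complete reaction and ideal-gas behaviour; the constants are standard values, not taken from the answer.

```python
R = 8.314          # J/(mol*K), gas constant
M_NAHCO3 = 84.007  # g/mol, molar mass of sodium bicarbonate

def co2_volume_litres(mass_nahco3_g, temp_k=298.15, pressure_pa=101325):
    """Volume of CO2 (litres) released by `mass_nahco3_g` grams of baking
    soda reacting completely with excess acid (1:1 mole ratio)."""
    n_co2 = mass_nahco3_g / M_NAHCO3
    volume_m3 = n_co2 * R * temp_k / pressure_pa  # ideal gas law: V = nRT/P
    return volume_m3 * 1000.0

# One gram of baking soda gives roughly 0.29 L of CO2 at room conditions,
# which is why the pure baking-soda reaction looks so vigorous.
```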
{ "domain": "chemistry.stackexchange", "id": 2223, "tags": "inorganic-chemistry, everyday-chemistry, home-experiment, food-chemistry" }
How to query a network for a bacterium, specifically Streptomyces caatingaensis or Streptomyces thioluteus?
Question: I have a list of gene/protein IDs and I could not query a network using STRING, BioGRID, IntAct and IIS for Streptomyces thioluteus. The bacterium I am working with is Streptomyces caatingaensis. Since it is newer than others, I thought I would find an interaction network by looking for Streptomyces thioluteus (which shares one pathway I am analyzing; the others do not). As a result, I have this list of proteins, but I could not query any network from the IDs. Sometimes I could not find the bacterium in the tools, and when it was found, as in IIS, the result was zero interactions. Is there anything I could use to get some network that could serve as insight for the analysis?
{ "domain": "bioinformatics.stackexchange", "id": 294, "tags": "database, proteins, networks" }
Convert carry-add loop to a map or reduce to one-liner
Question: I want to perform standard additive carry on a vector. The base is a power of 2, so we can swap modulus for bitwise AND. def carry(z,direction='left',word_size=8): v = z[:] mod = (1<<word_size) - 1 if direction == 'left': v = v[::-1] accum = 0 for i in xrange(len(v)): v[i] += accum accum = v[i] >> word_size v[i] = v[i] & mod print accum,v if direction == 'left': v = v[::-1] return accum,v Is there any way to make this function even tinier? Answer: Your code has a lot of copying in it. In the default case (where direction is left) you copy the array three times. This seems like a lot. There are various minor improvements you can make, for example instead of v = v[::-1] you can write v.reverse() which at least re-uses the space in v. But I think that it would be much better to reorganize the whole program so that you store your bignums the other way round (with the least significant word at the start of the list), so that you can always process them in the convenient direction. The parameters direction and word_size are part of the description of the data in z. So it would make sense to implement this code as a class to keep these values together. And then you could ensure that the digits always go in the convenient direction for your canonicalization algorithm. class Bignum(object): def __init__(self, word_size): self._word_size = word_size self._mod = 2 ** word_size self._digits = [] def canonicalize(self, carry = 0): """ Add `carry` and canonicalize the array of digits so that each is less than the modulus. """ assert(carry >= 0) for i, d in enumerate(self._digits): carry, self._digits[i] = divmod(d + carry, self._mod) while carry: carry, d = divmod(carry, self._mod) self._digits.append(d) Python already has built-in support for bignums, anyway, so what exactly are you trying to do here that can't be done in vanilla Python? 
Edited to add: I see from your comment that I misunderstood the context in which this function would be used (so always give us the context!). It's still the case that you could implement what you want using Python's built-in bignums, for example if you represented your key as an integer then you could write something like:

def fold(key, k, word_size):
    mod = 2 ** (k * word_size)
    accum = 0
    while key:
        key, rem = divmod(key, mod)
        accum += rem
    return accum % mod

but if you prefer to represent the key as a list of words, then you could still implement the key-folding operation directly:

from itertools import izip_longest

class WordArray(object):
    def __init__(self, data = [], word_size = 8):
        self._word_size = word_size
        self._mod = 1 << word_size
        self._data = data

    def fold(self, k):
        """
        Fold array into parts of length k, add them with carry, and
        return a new WordArray. Discard the final carry, if any.
        """
        def folded():
            parts = [xrange(i * k, min((i + 1) * k, len(self._data)))
                     for i in xrange((len(self._data) + 1) // k)]
            carry = 0
            for ii in izip_longest(*parts, fillvalue = 0):
                carry, z = divmod(sum(self._data[i] for i in ii) + carry, self._mod)
                yield z
        return WordArray(list(folded()), self._word_size)
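The integer-based fold above can be exercised directly; the key value here is arbitrary and chosen so the two folded chunks are easy to read in hex:

```python
def fold(key, k, word_size):
    # Fold an integer key into chunks of k words, adding chunks with wraparound.
    mod = 2 ** (k * word_size)
    accum = 0
    while key:
        key, rem = divmod(key, mod)
        accum += rem
    return accum % mod

# A 32-bit key folded into two 8-bit words: 0x1234 + 0xABCD = 0xBE01
result = fold(0x1234ABCD, k=2, word_size=8)
print(hex(result))  # 0xbe01
```

Because the key is represented as a plain Python int, the carry handling in the original loop disappears entirely into divmod.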
{ "domain": "codereview.stackexchange", "id": 3084, "tags": "python, converting" }
Mathematically, how does the exchange integral for a closed-shell system reduce to zero?
Question: I am looking at the Hartree-Fock method of approximation for a closed-shell two-electron system. I have the basis functions $$ \chi_1(\vec{x}_1) = \psi_1(\vec{r}_1) \alpha(s_1) \\ \chi_2(\vec{x}_2) = \psi_1(\vec{r}_2) \beta(s_2) $$ where $\vec{x}_i$ is the combined spatial and spin coordinates. The opposite spins are illustrated by the $\alpha$ and $\beta$ labels. The above gives the total wave function to be the following Slater determinant $$\Psi(\vec{r}_1,\vec{r}_2,s_1,s_2) = \begin{vmatrix} \psi_1(\vec{r}_1)\alpha(s_1) & \psi_1(\vec{r}_1)\beta(s_1) \\ \psi_1(\vec{r}_2)\alpha(s_2) & \psi_1(\vec{r}_2)\beta(s_2) \end{vmatrix} $$ Let's bypass a lot of math, and just look at the exchange integral, where the spin and spatial parts have been separated: $$ \int ds_1 \alpha^*(s_1)\beta(s_1) \cdot \int ds_2 \beta^*(s_2)\alpha(s_2) \cdot \int d\vec{r}_1 \int d\vec{r}_2 \psi_1^*(\vec{r}_1) \psi_1^*(\vec{r}_2) \dfrac{1}{r_{12}} \psi_1(\vec{r}_2) \psi_1(\vec{r}_1) $$ So the spin integrals do not survive in this case, due to the closed shells, but I am not sure if I see why. Is there some orthonormality for the spin states at play here? Answer: Let's do the math as explicit as possible. So, the exchange integral is defined as follows, $$ \langle \chi_1(1) \chi_2(2) \lvert r_{12}^{-1} \rvert \chi_2(1) \chi_1(2) \rangle := \sum\limits_{m_{s1}=-1/2}^{+1/2} \sum\limits_{m_{s2}=-1/2}^{+1/2} \iint\limits_{-\infty}^{+\infty} \bar{\chi}_1(\vec{x}_1) \bar{\chi}_2(\vec{x}_2) r_{12}^{-1} \chi_2(\vec{x}_1) \chi_1(\vec{x}_2) \mathrm{d} \vec{r}_1 \mathrm{d} \vec{r}_2 \, , $$ where I used a more appropriate label for the spin coordinate ($m_{s}$) as well as distinguished between summation over discrete variables $m_{s1}$ and $m_{s2}$ and integration over continuous $\vec{r}_1$ and $\vec{r}_2$ ones. 
Then, for the restricted spin orbitals, $$ \chi_1(\vec{x}_1) = \psi_1(\vec{r}_1) \alpha(m_{s1}) \, , \\ \chi_2(\vec{x}_2) = \psi_1(\vec{r}_2) \beta(m_{s2}) \, , $$ where $\alpha$ and $\beta$ are the so-called "spin up" and "spin down" spin functions defined as follows, $$ \alpha(m_{s}) = \begin{cases} 1, & m_{s} = +1/2 \\ 0, & m_{s} = -1/2 \end{cases} \, , \quad \beta(m_{s}) = \begin{cases} 0, & m_{s} = +1/2 \\ 1, & m_{s} = -1/2 \end{cases} \, , $$ we then get $$ \langle \chi_1(1) \chi_2(2) \lvert r_{12}^{-1} \rvert \chi_2(1) \chi_1(2) \rangle \\ = \sum\limits_{m_{s1}=-1/2}^{+1/2} \sum\limits_{m_{s2}=-1/2}^{+1/2} \iint\limits_{-\infty}^{+\infty} \bar{\psi}_1(\vec{r}_1) \alpha(m_{s1}) \bar{\psi}_1(\vec{r}_2) \beta(m_{s2}) r_{12}^{-1} \psi_1(\vec{r}_1) \beta(m_{s1}) \psi_1(\vec{r}_2) \alpha(m_{s2}) \, \mathrm{d} \vec{r}_1 \mathrm{d} \vec{r}_2 \\ = \sum\limits_{m_{s1}=-1/2}^{+1/2} \sum\limits_{m_{s2}=-1/2}^{+1/2} \alpha(m_{s1}) \beta(m_{s2}) \beta(m_{s1}) \alpha(m_{s2}) \iint\limits_{-\infty}^{+\infty} \bar{\psi}_1(\vec{r}_1) \bar{\psi}_1(\vec{r}_2) r_{12}^{-1} \psi_1(\vec{r}_1) \psi_1(\vec{r}_2) \, \mathrm{d} \vec{r}_1 \mathrm{d} \vec{r}_2 \, , $$ where we used the fact that spin functions $\alpha$ and $\beta$ are real-valued, so that $\bar{\alpha} = \alpha$ and $\bar{\beta} = \beta$. At this point we could concentrate our attention exclusively on the coefficient in front of the double integral above, since we could show that it is equal to zero, so that the whole expression vanishes regardless of the value of the integral. 
So, the coefficient can be written as follows, $$ \sum\limits_{m_{s1}=-1/2}^{+1/2} \sum\limits_{m_{s2}=-1/2}^{+1/2} \alpha(m_{s1}) \beta(m_{s2}) \beta(m_{s1}) \alpha(m_{s2}) \\ = \alpha(-1/2) \beta(-1/2) \beta(-1/2) \alpha(-1/2) + \alpha(-1/2) \beta(+1/2) \beta(-1/2) \alpha(+1/2) \\ + \alpha(+1/2) \beta(-1/2) \beta(+1/2) \alpha(-1/2) + \alpha(+1/2) \beta(+1/2) \beta(+1/2) \alpha(+1/2) \\ = 0 + 0 + 0 + 0 \\ = 0 \, , $$ where all the terms vanish due to at least either $\alpha(-1/2) = 0$ or $\beta(+1/2) = 0$, so that we indeed get $$ \langle \chi_1(1) \chi_2(2) \lvert r_{12}^{-1} \rvert \chi_2(1) \chi_1(2) \rangle = 0 \, . $$ It can be noted that there is a shortcut in proving that the coefficient above is zero that uses the defining properties of spin functions. First, we could rearrange spin functions as follows, $$ \sum\limits_{m_{s1}=-1/2}^{+1/2} \sum\limits_{m_{s2}=-1/2}^{+1/2} \alpha(m_{s1}) \beta(m_{s2}) \beta(m_{s1}) \alpha(m_{s2}) = \sum\limits_{m_{s1}=-1/2}^{+1/2} \alpha(m_{s1}) \beta(m_{s1}) \sum\limits_{m_{s2}=-1/2}^{+1/2} \beta(m_{s2}) \alpha(m_{s2}) \, . $$ Secondly we note that by the very definition of $\alpha$ and $\beta$ presented above, $$ \sum\limits_{m_{s}=-1/2}^{+1/2} \alpha(m_{s}) \beta(m_{s}) = \alpha(-1/2) \beta(-1/2) + \alpha(+1/2) \beta(+1/2) = 0 \cdot 1 + 1 \cdot 0 = 0 \, , $$ which is already enough to establish that the whole exchange integral vanishes. The same is, of course, true for the second factor as well, $$ \sum\limits_{m_{s}=-1/2}^{+1/2} \beta(m_{s}) \alpha(m_{s}) = \beta(-1/2) \alpha(-1/2) + \beta(+1/2) \alpha(+1/2) = 1 \cdot 0 + 0 \cdot 1 = 0 \, . 
$$ It is also quite customary to use Dirac bracket notation for such expressions over spin orbitals, so that one could write these findings concisely as $$ \langle \alpha \lvert \beta \rangle = \langle \beta \lvert \alpha \rangle = 0 \, , $$ where $\langle \alpha \lvert \beta \rangle$, for instance, is defined as follows, $$ \langle \alpha \lvert \beta \rangle := \sum\limits_{m_{s}=-1/2}^{+1/2} \bar{\alpha}(m_{s}) \beta(m_{s}) \, . $$ In general, for two spin functions $\gamma_1$ and $\gamma_2$, the expression $\langle \gamma_1 \lvert \gamma_2 \rangle$, defined as follows, $$ \langle \gamma_1 \lvert \gamma_2 \rangle := \sum\limits_{m_{s}=-1/2}^{+1/2} \bar{\gamma_1}(m_{s}) \gamma_2(m_{s}) \, , $$ is zero if the functions correspond to different spin states and one otherwise, which can be expressed as follows, $$ \langle \gamma_1 \lvert \gamma_2 \rangle = \delta_{\gamma_1 \gamma_2} \, . $$ Finally, I would like to make a short comment on "integrating" over discrete spin coordinates rather than summing over them. One could indeed define $$ \langle \gamma_1 \lvert \gamma_2 \rangle := \int \bar{\gamma_1}(m_s) \gamma_2(m_s) \mathrm{d} m_s \, , $$ and then think of integration over the discrete $m_s$ variable as being reduced to summation. But the integration here is just a "symbolic shorthand" since, strictly speaking, we have a summation over the discrete $m_s$ variable from the get-go. We could just symbolically write it down as an integration as well, if we think it looks more cute for some reason. In exactly the same way the exchange integral can also be symbolically defined as follows, $$ \langle \chi_1(1) \chi_2(2) \lvert r_{12}^{-1} \rvert \chi_2(1) \chi_1(2) \rangle := \iint \bar{\chi}_1(\vec{x}_1) \bar{\chi}_2(\vec{x}_2) r_{12}^{-1} \chi_2(\vec{x}_1) \chi_1(\vec{x}_2) \mathrm{d} \vec{x}_1 \mathrm{d} \vec{x}_2 \, , $$ where the "integration" is informally done over joint spin-spatial coordinates of the electrons. 
And again, strictly speaking, a summation is done over the spin coordinates and an integration over spatial ones, as it is written explicitly in the very beginning of the answer.
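The vanishing spin sums above can be checked numerically by tabulating $\alpha$ and $\beta$ over the two values of $m_s$; a trivial check, but it mirrors the sums term by term:

```python
# alpha(m_s) and beta(m_s) tabulated over m_s = (+1/2, -1/2)
alpha = [1, 0]  # "spin up": 1 at m_s = +1/2, 0 at m_s = -1/2
beta = [0, 1]   # "spin down": 0 at m_s = +1/2, 1 at m_s = -1/2

overlap_ab = sum(a * b for a, b in zip(alpha, beta))  # <alpha|beta>
overlap_aa = sum(a * a for a in alpha)                # <alpha|alpha>

# The exchange coefficient factorises into <alpha|beta> * <beta|alpha>,
# so it vanishes because each factor is zero
coeff = overlap_ab * overlap_ab
print(overlap_ab, overlap_aa, coeff)  # 0 1 0
```
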
{ "domain": "chemistry.stackexchange", "id": 4421, "tags": "quantum-chemistry" }
Counting the number of forces
Question: Newton's third law says that if object A "exerts" a force on object B called $F_{AB}$ then there is a force which B exerts on A, $F_{BA}$, such that $F_{BA} = -F_{AB}$. With this in mind, consider the following two situations: A person stands on earth and the earth induces a force $mg$ on him where $m$ is the person's mass and $g$ is the acceleration due to gravity. By the third law, this means the earth has a reciprocal force $-mg$. However, the person also stands on the surface of the earth and hence he exerts a force $mg$ onto the surface, and hence again by the third law, the surface exerts a reciprocal force back onto the person $-mg$. Hence, there is a total force of $-2mg$ on the person. Is this reasoning correct? Two planets $A$ and $B$ exist and planet $A$ exerts a gravitational force $F_{AB}$ on $B$ by Newton's law of universal gravitation. Hence by the third law there is a reciprocal $F_{BA}$ which $B$ exerts on $A$. However, we can invoke Newton's law of universal gravitation again from the perspective of planet $B$ and say that $B$ exerts a force $G_{BA}$ on $A$ which is of course equal to $F_{BA}$. Does this mean that the total amount of force on planet $A$ is $2F_{BA}$? Or am I incorrectly understanding the assumptions which are needed to invoke the laws I've used? Answer: Hence, there is a total force of $-2mg$ on the person. Is this reasoning correct? No, you have the direction of one of the forces wrong. The contact force and the gravitational force on the person act in opposite directions. The contact force pushes upwards and the gravitational force pulls downwards. They add together for a net force of 0, which is consistent with the fact that the person is not accelerating. Does this mean that the total amount of force on planet $A$ is $2F_{BA}$? No. The gravitational force and the Newton's 3rd law force are the same force. They are not two different forces.
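The first point of the answer reduces to one line of arithmetic; the mass and $g$ below are arbitrary illustrative numbers:

```python
m, g = 70.0, 9.81   # person's mass (kg) and gravitational acceleration (m/s^2)
gravity = -m * g    # gravitational force on the person, downward (negative)
normal = m * g      # contact (normal) force from the ground, upward (positive)
net = gravity + normal
print(net)          # 0.0 -- the person does not accelerate
```
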
{ "domain": "physics.stackexchange", "id": 91557, "tags": "newtonian-mechanics, forces, free-body-diagram" }
Simplified calculation of the amount of fuel required for a trip — are my calculations faulty?
Question: I've been always fascinated with how easily scifi characters travel around the Solar system and sometimes the galaxy. They just hop into a spacecar and go wherever they want. So I've come up with a thought experiment that reproduces such a trip with modern technology. Only existing means of propulsion are allowed (i. e. no hyperdrive, lightfold, wormholes, etc), but existing technology can be imagined as idealized, perfected. The goal is to make a roundtrip to the outskirts of the solar system. Let's imagine that we would like to reach a Voyager, chip a tiny piece off it as a souvenir and bring it back home. This is what a scifi hero could do in a single episode! The "Swordfish II" customized MONO racer fueling up on Ganymede, Jupiter. It's imagined to be capable, all by itself, of rising off Ganymede and flying around the moons of Jupiter, as well as rising from Earth to the orbit around Earth. It only needs assistance for crossing the Solar system in like a day. Here are the "givens": We want to make the trip in a reasonable amount of time. Like weeks, not years. Since that would cause life-threatening overload (G factor), we're gonna send an automatic probe. The probe weighs exactly 1 kg, which includes hull, electronics, manipulator, engine, batteries, etc — everything except fuel. We're assuming that the probe is travelling a straight line in open space with no gravity acting on it. We do not account for the weight of fuel tanks hull. Let's assume that fuel tanks are made of fuel and are consumed efficiently. The engine is capable of efficiently burning huge amounts of fuel in small amounts of time so that it reaches cruising speed quickly and spends most of the travel time flying with inertia. We do not account for the weight of the souvenir we're chipping off Voyager. Or let's say we're making a close-up film photo of it and carrying the film there and back, weight of the film accounted in the weight of the probe. 
The important part is to stop at Voyager and then stop back at Earth. At the start of the flight, the probe and its target are stationary in relation to each other. I'm not specifying the parameters of the engine, the specific distance, time restrictions, etc. Those numbers are arbitrary and Voyager is just a legend (that might not even work out, I hope I'm not violating the speed of light...). ⚠ Now the tricky part. Let's say that I have accounted for all those factors and calculated that in order to reach the initial speed necessary to meet time restrictions, the probe will need to burn 1000 kg of fuel. This is a "given". But the problem is that the probe will have no fuel to stop. It will fly past Voyager, being able neither to carefully pinch a fragment off Voyager, nor return home. In other words, the trip has four legs: → → ← ← accelerating towards Voyager, decelerating, accelerating towards home, decelerating. And my calculation of 1000 kg of fuel only accounted for the first leg, which would result in the loss of the probe! In order to stop the probe and complete the second leg, it would need another 1000 kg of fuel, but there's no gas station in the middle. Our only option is to take extra fuel with us, which will increase the weight of the payload. So the question of my thought experiment is: given that 1000 kg of fuel are needed for a 1 kg probe to complete just the first leg of the trip in the desired time, how much fuel does the probe need to take in order to complete four legs? My understanding is that this problem can be solved with a simple proportion:

              Payload   Fuel
Current leg      A        B
Next leg        A+B       ?

And the general answer is X = (A+B)*B/A. 
Here's the specific solution and answer, rounded down to the order of magnitude:

              Payload    Fuel
One leg       1 kg       10^3 kg
Two legs      10^3 kg    10^6 kg
Three legs    10^6 kg    10^9 kg
Four legs     10^9 kg    10^12 kg

So my answer to the thought experiment is that more than a trillion (10^12) kg of fuel will be necessary for the described roundtrip of a 1 kg probe. That's more than the weight of 166 pyramids of Giza! That also compares to roughly 1% of crude oil reserves on Earth. I can imagine that getting this much weight to orbit would require burning ALL of Earth's oil. The question for this StackOverflow post: is my reasoning correct? Can the payload-to-fuel ratio of 1-to-1000 (provided as a "given" for the first leg of the trip) be extrapolated like this with a simple proportion? My friends say that this problem cannot be solved without knowledge of the parameters of the engine, etc. They also say that my computation would require multiple more engines for each subsequent leg. But I believe that the parameters of the engine would only be necessary to learn how much time it will take for the probe to complete four legs. But that's not part of the question. Time has been accounted for in the "given" for the first leg and is no longer a concern. They also say that since the ship gets lighter as it burns fuel, this problem cannot be solved with a simple proportion, and some complex formulas are needed. Answer: Voyager is about 122 AU out so to get there in, say 2 weeks, is an average speed of about 60 AU/week. Light covers an AU in 8.3 minutes so $c$ is about 1200 AU/week. Your probe won't go anywhere near light-speed so you can do the whole thing using Newtonian mechanics. You accelerate linearly for 1 week, turn round then decelerate to a stop after another week. Then do the same coming home, arriving back four weeks after you left. Your average speed is 60 AU/week and peak speed at turnaround is 120 AU/week. Start at the arrival home. 
1 kg payload slowing down from 120 AU/week over the course of a week using 999 kg of fuel. Now repeat the calculation for leg 3; accelerating from the distant point to the mid-point where you arrive with 1000 kg. Therefore you had to start at Voyager with $10^6$ kg of fuel. Repeat again for leg 2; slowing down from mid-point on the way out - you had to start with $10^9$ kg. One more time for the initial leg - you had to start with $10^{12}$ kg of fuel. That's not so bad - if your fuel is as dense as water, you could fit it in a spherical tank about 1.2 km in diameter.
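The proportion used in both the question and the answer can be spelled out in a few lines. This follows the post's own idealisation of a fixed 1000:1 fuel-to-payload ratio per leg; a real analysis would use the Tsiolkovsky rocket equation instead:

```python
ratio = 1000           # kg of fuel per kg of delivered mass for one leg (the "given")
payload = 1            # kg, the probe itself

mass = payload
for leg in range(4):   # four legs: accelerate, decelerate, accelerate, decelerate
    mass *= ratio      # everything needed downstream is "payload" for the earlier leg

fuel = mass - payload
print(fuel)            # ~1e12 kg, matching the answer
```

Working backwards from the final leg, as the answer does, gives the same numbers: each leg multiplies the required launch mass by the mass ratio.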
{ "domain": "physics.stackexchange", "id": 59810, "tags": "homework-and-exercises, newtonian-mechanics, kinematics, propulsion, space-travel" }
Purely mechanical question about torques in human body
Question: I'm trying to program an active ragdoll animation system for my game, and I've been stuck on this question for a while. Let's imagine a body falling backwards as shown in the scheme below. My question is: purely mechanically, what prevents the butt muscle there from providing enough torque to make the body stand upright again? Or to even slow down the fall? I'm asking because right now I'm applying the same constant amount of torque to make my ragdoll stand upright. And when the body starts to fall, this torque acts unnaturally, making it stand under forces that should destabilize it, or slowing down the fall when it actually gets destabilized. Answer: Your question is "what prevents the butt muscle there from providing enough torque to make the body stand upright again?" The answer is the limited traction from the ground. What you are trying to do (if you have a very heavy head) is move the center of mass over the legs, by keeping everything stiff and bending at the hip in the direction shown. The requirement for that to happen is that the bodies (3), (2) and (1) above remain still and do not respond to the muscle torque. So you can argue that the forces to keep (3) in equilibrium come from (2) and the forces to keep (2) in equilibrium come from (1). But the forces to keep the foot on the ground (not sliding or lifting) come from the contact normal $N$ and more importantly the friction $F$ and reaction moment $M$ (our feet are flat and not single points for this reason). The way things are connected creates a large system of equations which in planar form means three equations for each of the 4 bodies. That is 12 equations for the 3 degrees of freedom (joints at (1), (2) and (3)) and 9 constraint forces (forces and torques at each joint). 
$$ \begin{aligned} \mathbf{F}_1 & = m_1 (\ddot{\mathbf{x}}_1 - \mathbf{g}) + \mathbf{F}_2 \\ \mathbf{F}_2 & = m_2 (\ddot{\mathbf{x}}_2-\mathbf{g}) + \mathbf{F}_3 \\ \mathbf{F}_3 & = m_3 (\ddot{\mathbf{x}}_3-\mathbf{g}) + \mathbf{F}_4 \\ \mathbf{F}_4 & = m_4 (\ddot{\mathbf{x}}_4 -\mathbf{g}) \\ {\tau}_1 +\mathbf{b}_1 \times \mathbf{F}_1 & = \mathrm{I}_1 \ddot{\theta}_1 + {\tau}_2 + \mathbf{n}_1 \times \mathbf{F}_2 \\ {\tau}_2 + \mathbf{b}_2 \times \mathbf{F}_2 & = \mathrm{I}_2 \ddot{\theta}_2 + {\tau}_3 + \mathbf{n}_2\times \mathbf{F}_3\\ {\tau}_3+\mathbf{b}_3 \times \mathbf{F}_3 & = \mathrm{I}_3 \ddot{\theta}_3 + {\tau}_4 + \mathbf{n}_4\times \mathbf{F}_4\\ {\tau}_4+\mathbf{b}_4 \times \mathbf{F}_4 & = \mathrm{I}_4 \ddot{\theta}_4 \\ \end{aligned}$$ and your question being, how does the torque ${\tau}_4$ affect the combined center of mass acceleration $$ \ddot{\mathbf{x}}_C = \frac{m_1 \ddot{\mathbf{x}}_1 + m_2 \ddot{\mathbf{x}}_2 + m_3 \ddot{\mathbf{x}}_3 + m_4 \ddot{\mathbf{x}}_4}{m_1+m_2+m_3+m_4} $$ I am unable to explain it in simpler terms, other than what makes the parts move to the right is external forces that act to the right, like the friction $F$ I drew above. YouTube some videos of people getting up or righting themselves while wearing skis and you will notice the technique of stiffening the muscles while trying to lean forwards. Now you will notice that without poles it is impossible to do so because the skis don't have enough traction.
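The last formula, the combined center-of-mass acceleration, is just a mass-weighted average of the segment accelerations; a minimal sketch with made-up segment masses and accelerations:

```python
# Planar case: each segment i has mass m_i and acceleration (ax_i, ay_i).
masses = [4.0, 10.0, 30.0, 8.0]  # feet, legs, torso, head (kg, illustrative)
accels = [(0.0, 0.0), (0.1, -0.2), (0.3, -0.5), (0.4, -0.6)]  # m/s^2, illustrative

total_m = sum(masses)
ax_c = sum(m * a[0] for m, a in zip(masses, accels)) / total_m
ay_c = sum(m * a[1] for m, a in zip(masses, accels)) / total_m
print(ax_c, ay_c)
```

For a ragdoll controller the takeaway is that internal joint torques cancel pairwise; only external forces (gravity, the contact normal, friction) can change this weighted average.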
{ "domain": "physics.stackexchange", "id": 45359, "tags": "newtonian-mechanics, torque" }
Can multiple linear regression using the least squares(OLS) method, also be used to solve simple linear regression problems? Would both be equivalent?
Question: Simple Linear Regression reference: https://online.stat.psu.edu/stat462/node/93/ Multiple Linear Regression reference: https://online.stat.psu.edu/stat462/node/131/ I see that the way to calculate the coefficients of simple linear regression is different from multiple linear regression. The formula for calculating multiple linear regression coefficients uses an inverse matrix. While the simple linear regression formula does not need the inverse matrix. Simple linear regression, learn the relationships between two continuous quantitative variables. But multiple linear regression is used when we have more variables. This leaves me with a question: If I know the multiple linear regression formula to calculate the coefficients, if I apply the multiple linear regression formula using only 2 quantitative variables, should the results be equivalent to the simple linear regression formula? Can multiple linear regression using the least squares(OLS) method, also be used to solve simple linear regression problems? Would both be equivalent? Answer: Yes, multiple linear regression using the ordinary least squares (OLS) method can be used to solve simple linear regression problems, and the results would be equivalent. Simple linear regression involves one independent variable X and one dependent variable Y, and models the relationship as: Y = beta0 + beta1 * X + e where beta0 is the intercept, beta1 is the slope coefficient, and e is the error term. In contrast, multiple linear regression involves more than one independent variable X1, X2, ..., Xn and one dependent variable Y, and models the relationship as: Y = beta0 + beta1 * X1 + beta2 * X2 + ... + betan * Xn + e where beta0 is the intercept, beta1, beta2, ..., betan are the slope coefficients, and e is the error term. 
For simple linear regression, the coefficients are calculated using straightforward formulas:

beta1 = sum((Xi - mean(X)) * (Yi - mean(Y))) / sum((Xi - mean(X))^2)

and

beta0 = mean(Y) - beta1 * mean(X)

Multiple linear regression generalizes this to more than one independent variable, but when reduced to a single variable, the matrix algebra used to compute the coefficients simplifies to the same formulas as above. Here's a Python example that demonstrates that you get equivalent coefficients using simple and multiple linear regression:

from sklearn.linear_model import LinearRegression
from sklearn.datasets import make_regression
import numpy as np

# Generate some data with three features
X, y = make_regression(n_samples=100, n_features=3, noise=10)

# Isolate the first feature for simple linear regression
X_simple = X[:, 0].reshape(-1, 1)  # Only use the first feature

# Replace the last 2 features with random noise for multiple linear regression
# to avoid multicollinearity between features
X[:, 1:] = np.random.normal(size=(100, 2))

# Fit Simple Linear Regression with only the first feature
simple_lr = LinearRegression()
simple_lr.fit(X_simple, y)

# Fit Multiple Linear Regression with all three features
multiple_lr = LinearRegression()
multiple_lr.fit(X, y)

# Coefficients from Simple Linear Regression
print("Simple Linear Regression (using only the first feature):")
print(f"Intercept: {simple_lr.intercept_:.4f}, Slope for first feature: {simple_lr.coef_[0]:.4f}")

# Coefficients from Multiple Linear Regression
print("\nMultiple Linear Regression (using all three features):")
print(f"Intercept: {multiple_lr.intercept_:.4f}, Slopes for all features: {multiple_lr.coef_}")

# Check if the coefficient for the first feature in both models are close
assert np.isclose(simple_lr.coef_[0], multiple_lr.coef_[0], rtol=1e-1), "The coefficient for the first feature from both models should be close."
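The claim that the matrix formula collapses to the simple-regression formulas for a single predictor can also be checked without scikit-learn, by solving the normal equations directly on synthetic data (the coefficients 2.0 and 3.0 below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 + 3.0 * x + rng.normal(scale=0.1, size=50)

# Simple linear regression closed-form formulas
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

# Multiple-regression machinery on the same data: design matrix with an
# intercept column, solved via the normal equations (X^T X) beta = X^T y
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)

print(beta, (b0, b1))  # the two approaches agree up to floating point
```
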
{ "domain": "ai.stackexchange", "id": 4031, "tags": "linear-regression" }
What are the justifying foundations of statistical mechanics without appealing to the ergodic hypothesis?
Question: This question was listed as one of the questions in the proposal (see here), and I didn't know the answer. I don't know the ethics on blatantly stealing such a question, so if it should be deleted or be changed to CW then I'll let the mods change it. Most foundations of statistical mechanics appeal to the ergodic hypothesis. However, this is a fairly strong assumption from a mathematical perspective. There are a number of results frequently used in statistical mechanics that are based on Ergodic theory. In every statistical mechanics class I've taken and nearly every book I've read, the assumption was made based solely on the justification that without it calculations become virtually impossible. Hence, I was surprised to see that it is claimed (in the first link) that the ergodic hypothesis is "absolutely unnecessary". The question is fairly self-explanatory, but for a full answer I'd be looking for a reference containing development of statistical mechanics without appealing to the ergodic hypothesis, and in particular some discussion about what assuming the ergodic hypothesis does give you over other foundational schemes. Answer: The ergodic hypothesis is not part of the foundations of statistical mechanics. In fact, it only becomes relevant when you want to use statistical mechanics to make statements about time averages. Without the ergodic hypothesis statistical mechanics makes statements about ensembles, not about one particular system. To understand this answer you have to understand what a physicist means by an ensemble. It is the same thing as what a mathematician calls a probability space. The “Statistical ensemble” wikipedia article explains the concept quite well. It even has a paragraph explaining the role of the ergodic hypothesis. 
The reason why some authors make it look as if the ergodic hypothesis was central to statistical mechanics is that they want to give you a justification for why they are so interested in the microcanonical ensemble. And the reason they give is that the ergodic hypothesis holds for that ensemble when you have a system for which the time it spends in a particular region of the accessible phase space is proportional to the volume of that region. But that is not central to statistical mechanics. Statistical mechanics can be done with other ensembles and furthermore there are other ways to justify the canonical ensemble, for example it is the ensemble that maximises entropy. A physical theory is only useful if it can be compared to experiments. Statistical mechanics without the ergodic hypothesis, which makes statements only about ensembles, is only useful if you can make measurements on the ensemble. This means that it must be possible to repeat an experiment again and again and the frequency of getting particular members of the ensemble should be determined by the probability distribution of the ensemble that you used as the starting point of your statistical mechanics calculations. Sometimes however you can only experiment on one single sample from the ensemble. In that case statistical mechanics without an ergodic hypothesis is not very useful because, while it can tell you what a typical sample from the ensemble would look like, you do not know whether your particular sample is typical. This is where the ergodic hypothesis helps. It states that the time average taken in any particular sample is equal to the ensemble average. Statistical mechanics allows you to calculate the ensemble average. If you can make measurements on your one sample over a sufficiently long time you can take the average and compare it to the predicted ensemble average and hence test the theory. 
So in many practical applications of statistical mechanics, the ergodic hypothesis is very important, but it is not fundamental to statistical mechanics, only to its application to certain sorts of experiments. In this answer I took the ergodic hypothesis to be the statement that ensemble averages are equal to time averages. To add to the confusion, some people say that the ergodic hypothesis is the statement that the time a system spends in a region of phase space is proportional to the volume of that region. These two are the same when the ensemble chosen is the microcanonical ensemble. So, to summarise, the ergodic hypothesis is used in two places:

1. To justify the use of the microcanonical ensemble.
2. To make predictions about the time average of observables.

Neither is central to statistical mechanics, as 1) statistical mechanics can and is done for other ensembles (for example those determined by stochastic processes) and 2) often one does experiments with many samples from the ensemble rather than with time averages of a single sample.
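The "time average equals ensemble average" statement can be illustrated with a toy ergodic system: a two-state Markov chain whose single-trajectory time average converges to the stationary (ensemble) probability. This is only a cartoon of the hypothesis, not a derivation:

```python
import random

random.seed(1)
p, q = 0.3, 0.1  # flip probabilities P(0 -> 1) and P(1 -> 0)
state, time_in_1, steps = 0, 0, 200_000

for _ in range(steps):
    if state == 0:
        if random.random() < p:
            state = 1
    elif random.random() < q:
        state = 0
    time_in_1 += state

time_avg = time_in_1 / steps  # time average along ONE sample path
ensemble_avg = p / (p + q)    # stationary probability of state 1, ~0.75
print(time_avg, ensemble_avg)  # the time average approaches the ensemble average
```

A non-ergodic counterexample is immediate: set p = q = 0, and the time average is forever pinned at the initial state, whatever the ensemble says.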
{ "domain": "physics.stackexchange", "id": 6394, "tags": "statistical-mechanics, mathematical-physics, specific-reference, foundations, ergodicity" }
Display in RViz the collision geometry tag of an URDF link
Question: Is it possible to display in RViz the collision geometry tag (box, sphere, cylinder etc.) of an URDF link tag? Originally posted by andrestoga on ROS Answers with karma: 188 on 2019-07-31 Post score: 1 Answer: Yes, if you add a RobotModel display in rviz and expand its options in the Displays panel, you should see a checkbox for "Collision Enabled". If you check it, you can see the collision models for your robot that were read from the URDF. A nice example of that can be found here. Originally posted by adamconkey with karma: 642 on 2019-07-31 This answer was ACCEPTED on the original site Post score: 1
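For concreteness, the collision geometry that the RobotModel display renders comes from each link's <collision> tag in the URDF; a minimal illustrative link (the link name, mesh path, and dimensions below are made up):

```xml
<link name="base_link">
  <visual>
    <geometry><mesh filename="package://my_robot/meshes/base.dae"/></geometry>
  </visual>
  <collision>
    <!-- rendered by rviz when "Collision Enabled" is checked -->
    <geometry><box size="0.4 0.3 0.1"/></geometry>
  </collision>
</link>
```

With "Collision Enabled" checked, rviz draws the box above in place of the visual mesh, which makes it easy to spot collision shapes that are misaligned or oversized.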
{ "domain": "robotics.stackexchange", "id": 33564, "tags": "rviz, urdf, ros-melodic, collision, newtags" }
ROS2 message_filters Synchronizer compilation error
Question: Hi, I am having trouble compiling my code using the message_filters Synchronizer in ROS2 Foxy. The tests included in the library compile and run fine, but the code below does not want to compile on multiple systems with the following configurations:

OS: Ubuntu 20.04
Distro: ROS2 Foxy
message_filters: 3.2.5 (apt version)

My code:

#include <rclcpp/rclcpp.hpp>
#include <std_msgs/msg/string.hpp>

#include <message_filters/subscriber.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <message_filters/sync_policies/exact_time.h>
#include <message_filters/synchronizer.h>

using namespace std::placeholders;

class SyncNode : rclcpp::Node{
private:
    using approximate_policy = message_filters::sync_policies::ApproximateTime<std_msgs::msg::String, std_msgs::msg::String>;
    typedef message_filters::Synchronizer<approximate_policy> Synchronizer;

    message_filters::Subscriber<std_msgs::msg::String> sub1;
    message_filters::Subscriber<std_msgs::msg::String> sub2;
    std::unique_ptr<Synchronizer> sync;

    void callback(const std_msgs::msg::String::SharedPtr msg1, const std_msgs::msg::String::SharedPtr msg2);

public:
    SyncNode() : Node("test_message_filters_node"){
        sub1.subscribe(this, "/message_1");
        sub2.subscribe(this, "/message_2");
        sync.reset(new message_filters::Synchronizer<approximate_policy>(approximate_policy(10), sub1, sub2));
        sync->registerCallback(std::bind(&SyncNode::callback, this, _1, _2));
    }
};

void SyncNode::callback(const std_msgs::msg::String::SharedPtr msg1, const std_msgs::msg::String::SharedPtr msg2){
    RCLCPP_INFO(this->get_logger(), "Messages synced: Callback activated");
}

Compiling with GCC and G++ 9.4, I get the following error: This code block was moved to the following github gist: https://gist.github.com/answers-se-migration-openrobotics/bc8479199b6272c53ecfcf236b219485 Trying to compile with Clang and Clang++ 10 yields the following errors: In file included from 
/home/benjamin/Documents/Mechatronics/ros2/dev_ws/src/test_message_filters/src/test_sync.cpp:3: In file included from /home/benjamin/Documents/Mechatronics/ros2/dev_ws/src/test_message_filters/include/test_message_filters/msg_node.hpp:4: In file included from /opt/ros/foxy/include/message_filters/sync_policies/approximate_time.h:52: /opt/ros/foxy/include/message_filters/signal9.h:272:12: error: no matching member function for call to 'addCallback' return addCallback<const M0ConstPtr&, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /opt/ros/foxy/include/message_filters/synchronizer.h:298:20: note: in instantiation of function template specialization 'message_filters::Signal9<std_msgs::msg::String_<std::allocator<void> >, std_msgs::msg::String_<std::allocator<void> >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>::addCallback<const std::_Bind<void (SyncNode::*(SyncNode *, std::_Placeholder<1>, std::_Placeholder<2>))(std::shared_ptr<std_msgs::msg::String_<std::allocator<void> > >, std::shared_ptr<std_msgs::msg::String_<std::allocator<void> > >)> >' requested here return signal_.addCallback(callback); ^ /home/benjamin/Documents/Mechatronics/ros2/dev_ws/src/test_message_filters/include/test_message_filters/msg_node.hpp:32:15: note: in instantiation of function template specialization 'message_filters::Synchronizer<message_filters::sync_policies::ApproximateTime<std_msgs::msg::String_<std::allocator<void> >, std_msgs::msg::String_<std::allocator<void> >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType> >::registerCallback<std::_Bind<void (SyncNode::*(SyncNode *, std::_Placeholder<1>, std::_Placeholder<2>))(std::shared_ptr<std_msgs::msg::String_<std::allocator<void> > >, 
std::shared_ptr<std_msgs::msg::String_<std::allocator<void> > >)> >' requested here sync->registerCallback(std::bind(&SyncNode::callback, this, _1, _2)); ^ /opt/ros/foxy/include/message_filters/signal9.h:170:14: note: candidate function template not viable: no known conversion from 'typename _Bind_helper<__is_socketlike<const _Bind<void (SyncNode::*(SyncNode *, _Placeholder<1>, _Placeholder<2>))(shared_ptr<String_<allocator<void> > >, shared_ptr<String_<allocator<void> > >)> &>::value, const _Bind<void (SyncNode::*(SyncNode *, _Placeholder<1>, _Placeholder<2>))(shared_ptr<String_<allocator<void> > >, shared_ptr<String_<allocator<void> > >)> &, const _Placeholder<1> &, const _Placeholder<2> &, const _Placeholder<3> &, const _Placeholder<4> &, const _Placeholder<5> &, const _Placeholder<6> &, const _Placeholder<7> &, const _Placeholder<8> &, const _Placeholder<9> &>::type' (aka '_Bind<std::_Bind<void (SyncNode::*(SyncNode *, std::_Placeholder<1>, std::_Placeholder<2>))(std::shared_ptr<std_msgs::msg::String_<std::allocator<void> > >, std::shared_ptr<std_msgs::msg::String_<std::allocator<void> > >)> (std::_Placeholder<1>, std::_Placeholder<2>, std::_Placeholder<3>, std::_Placeholder<4>, std::_Placeholder<5>, std::_Placeholder<6>, std::_Placeholder<7>, std::_Placeholder<8>, std::_Placeholder<9>)>') to 'const std::function<void (const shared_ptr<const String_<allocator<void> > > &, const shared_ptr<const String_<allocator<void> > > &, const shared_ptr<const NullType> &, const shared_ptr<const NullType> &, const shared_ptr<const NullType> &, const shared_ptr<const NullType> &, const shared_ptr<const NullType> &, const shared_ptr<const NullType> &, const shared_ptr<const NullType> &)>' for 1st argument Connection addCallback(const std::function<void(P0, P1, P2, P3, P4, P5, P6, P7, P8)>& callback) ^ /opt/ros/foxy/include/message_filters/signal9.h:222:14: note: candidate function template not viable: no known conversion from 'typename _Bind_helper<__is_socketlike<const 
_Bind<void (SyncNode::*(SyncNode *, _Placeholder<1>, _Placeholder<2>))(shared_ptr<String_<allocator<void> > >, shared_ptr<String_<allocator<void> > >)> &>::value, const _Bind<void (SyncNode::*(SyncNode *, _Placeholder<1>, _Placeholder<2>))(shared_ptr<String_<allocator<void> > >, shared_ptr<String_<allocator<void> > >)> &, const _Placeholder<1> &, const _Placeholder<2> &, const _Placeholder<3> &, const _Placeholder<4> &, const _Placeholder<5> &, const _Placeholder<6> &, const _Placeholder<7> &, const _Placeholder<8> &, const _Placeholder<9> &>::type' (aka '_Bind<std::_Bind<void (SyncNode::*(SyncNode *, std::_Placeholder<1>, std::_Placeholder<2>))(std::shared_ptr<std_msgs::msg::String_<std::allocator<void> > >, std::shared_ptr<std_msgs::msg::String_<std::allocator<void> > >)> (std::_Placeholder<1>, std::_Placeholder<2>, std::_Placeholder<3>, std::_Placeholder<4>, std::_Placeholder<5>, std::_Placeholder<6>, std::_Placeholder<7>, std::_Placeholder<8>, std::_Placeholder<9>)>') to 'void (*)(const std::shared_ptr<const std_msgs::msg::String_<std::allocator<void> > > &, const std::shared_ptr<const std_msgs::msg::String_<std::allocator<void> > > &, const std::shared_ptr<const message_filters::NullType> &, const std::shared_ptr<const message_filters::NullType> &, const std::shared_ptr<const message_filters::NullType> &, const std::shared_ptr<const message_filters::NullType> &, const std::shared_ptr<const message_filters::NullType> &, const std::shared_ptr<const message_filters::NullType> &, const std::shared_ptr<const message_filters::NullType> &)' for 1st argument Connection addCallback(void(*callback)(P0, P1, P2, P3, P4, P5, P6, P7, P8)) ^ /opt/ros/foxy/include/message_filters/signal9.h:180:14: note: candidate template ignored: substitution failure: too many template arguments for function template 'addCallback' Connection addCallback(void(*callback)(P0, P1)) ^ /opt/ros/foxy/include/message_filters/signal9.h:186:14: note: candidate template ignored: substitution failure: 
too many template arguments for function template 'addCallback' Connection addCallback(void(*callback)(P0, P1, P2)) ^ /opt/ros/foxy/include/message_filters/signal9.h:192:14: note: candidate template ignored: substitution failure: too many template arguments for function template 'addCallback' Connection addCallback(void(*callback)(P0, P1, P2, P3)) ^ /opt/ros/foxy/include/message_filters/signal9.h:198:14: note: candidate template ignored: substitution failure: too many template arguments for function template 'addCallback' Connection addCallback(void(*callback)(P0, P1, P2, P3, P4)) ^ /opt/ros/foxy/include/message_filters/signal9.h:204:14: note: candidate template ignored: substitution failure: too many template arguments for function template 'addCallback' Connection addCallback(void(*callback)(P0, P1, P2, P3, P4, P5)) ^ /opt/ros/foxy/include/message_filters/signal9.h:210:14: note: candidate template ignored: substitution failure: too many template arguments for function template 'addCallback' Connection addCallback(void(*callback)(P0, P1, P2, P3, P4, P5, P6)) ^ /opt/ros/foxy/include/message_filters/signal9.h:216:14: note: candidate template ignored: substitution failure: too many template arguments for function template 'addCallback' Connection addCallback(void(*callback)(P0, P1, P2, P3, P4, P5, P6, P7)) ^ /opt/ros/foxy/include/message_filters/signal9.h:270:14: note: candidate template ignored: substitution failure: too many template arguments for function template 'addCallback' Connection addCallback( C& callback) ^ /opt/ros/foxy/include/message_filters/signal9.h:264:14: note: candidate function template not viable: requires 2 arguments, but 1 was provided Connection addCallback(void(T::*callback)(P0, P1, P2, P3, P4, P5, P6, P7), T* t) ^ /opt/ros/foxy/include/message_filters/signal9.h:258:14: note: candidate function template not viable: requires 2 arguments, but 1 was provided Connection addCallback(void(T::*callback)(P0, P1, P2, P3, P4, P5, P6), T* t) ^ 
/opt/ros/foxy/include/message_filters/signal9.h:252:14: note: candidate function template not viable: requires 2 arguments, but 1 was provided Connection addCallback(void(T::*callback)(P0, P1, P2, P3, P4, P5), T* t) ^ /opt/ros/foxy/include/message_filters/signal9.h:246:14: note: candidate function template not viable: requires 2 arguments, but 1 was provided Connection addCallback(void(T::*callback)(P0, P1, P2, P3, P4), T* t) ^ /opt/ros/foxy/include/message_filters/signal9.h:240:14: note: candidate function template not viable: requires 2 arguments, but 1 was provided Connection addCallback(void(T::*callback)(P0, P1, P2, P3), T* t) ^ /opt/ros/foxy/include/message_filters/signal9.h:234:14: note: candidate function template not viable: requires 2 arguments, but 1 was provided Connection addCallback(void(T::*callback)(P0, P1, P2), T* t) ^ /opt/ros/foxy/include/message_filters/signal9.h:228:14: note: candidate function template not viable: requires 2 arguments, but 1 was provided Connection addCallback(void(T::*callback)(P0, P1), T* t) I have no idea what is wrong with the code, which is based on multiple ROS2 Foxy examples that work fine. Could someone please point out what I am missing? Thank you! Originally posted by BVM97 on ROS Answers with karma: 13 on 2022-10-09 Post score: 1 Answer: The arguments for your callback should be const std::shared_ptr<const...>&, so in your case const std_msgs::msg::String::ConstSharedPtr msg1. So ConstSharedPtr instead of SharedPtr. Originally posted by Wilco Bonestroo with karma: 159 on 2022-10-10 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 38027, "tags": "ros2, c++, message-filters" }
Luminosity of black hole accretion disc
Question: The luminosity of a black hole accretion disc gaining mass at a rate $\frac{\mathrm{d}M}{\mathrm{d}t}$ can be estimated as $$\frac{1}{12}\frac{\mathrm{d}M}{\mathrm{d}t}c^2$$ That is a substantial proportion of the rest mass of the in-falling matter. The linked document explains that the factor 1/12 is because at less than 3 times the event horizon radius the matter "spirals in without radiating more energy". What causes matter in the accretion disc not to radiate beyond this limit? It is twice the radius of the photon sphere; is the reason general relativity? Answer: Three times the Schwarzschild radius corresponds to the closest stable circular orbit around a black hole. The general idea is that as matter moves in towards the black hole it gets stuck in an accretion disc where angular momentum has to be moved outwards in order to allow the matter to move inwards. The generic mechanism is some sort of viscosity, which heats the gas and hence you get radiation. However, once the matter gets inside $3r_s$, that problem disappears. There are no stable orbits, no angular momentum loss or viscosity is needed, and the material is able to flow (rapidly) straight into the black hole. Thus when we observe black hole accretion discs we expect them to be truncated at $3r_s$. So I think the argument then is along these lines: the gravitational potential energy of unit mass falling to $3r_s$ is converted into an orbital kinetic energy of $0.5v^2 = GM/6r_s$ per unit mass and the rest is converted to radiation. Thus $$L = \left[\frac{GM}{3r_s} - \frac{GM}{6r_s}\right] \frac{dM}{dt}$$ $$ L = \frac{GM}{6r_s} \frac{dM}{dt} = \frac{1}{12}c^2 \frac{dM}{dt} .$$
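As a quick sanity check of the final step: with $r_s = 2GM/c^2$, the liberated energy per unit infalling mass, $GM/6r_s$, reduces to $c^2/12$ independently of the black hole mass. A short Python sketch (the numeric constants are standard SI values; the mass is an arbitrary illustrative choice):

```python
# Check that GM/(6 r_s) reduces to c^2/12 when r_s = 2GM/c^2.
G = 6.674e-11        # gravitational constant, SI units
c = 2.998e8          # speed of light, m/s
M = 2.0e30           # ~1 solar mass, kg (any value works)

r_s = 2 * G * M / c**2                # Schwarzschild radius
energy_per_kg = G * M / (6 * r_s)     # liberated energy per unit infalling mass

print(energy_per_kg / c**2)           # -> 1/12 = 0.08333...
```

Changing M leaves the ratio untouched, which is why the luminosity formula carries only the accretion rate and $c^2$.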
{ "domain": "astronomy.stackexchange", "id": 1210, "tags": "black-hole, accretion-discs" }
Residual catalyst leaching from PETE
Question: I am trying to look for healthy options for 5 gallon water jugs. The problem is all the plastic 5 gallon water jugs on the market are either made of #1 PETE (BPA free) or #7 (which contains BPA). It seems #1 PETE is the healthier option, but it turns out #1 PETE is known to leach a bit of antimony oxide ($\ce{Sb2O3}$ catalyst, used in its synthesis) into water. So my question is: if I bathe the jug in water, so that the antimony oxide can leach into the water, and wash the jug one more time, can all the antimony be removed that way? Answer: Let's ignore the safety aspect here, and focus on the "if I bathe the jug in water so that all the antimony can leach into the water and wash the jug one more time, can all the antimony be removed that way". In short, no. There have been a number of studies on antimony in PET bottles. They look at various factors that influence the rate of leaching of Sb from PET bottles. These factors include temperature, UV, colour of bottle, pH, bottle volume and time. There is evidence that leaching is still occurring well beyond 6 months. One study suggests that leaching levels out after about 800 days. Another study calculates that the total levels of Sb in the plastic are very unlikely to be completely leached out. Factors that have a significant influence on the amount of Sb leached include:
time stored (major influence)
temperature (higher temps have major influence)
volume of bottle (smaller volume bottles have higher SA/vol ratio and increase leaching)
type/origin of PET (significant differences between the colour of bottles used in one study)
I'm not aware of a study that attempts to find the conditions whereby Sb no longer leaches into the water, but effectively you would need to bathe the bottle at high temperature for several days at a time, repeatedly, over months. That's just not going to happen.
Some useful references for you include: doi.org/10.1016/j.watres.2007.07.048 doi.org/10.1016/j.scitotenv.2009.04.025 doi.org/10.1021/es061511+
{ "domain": "chemistry.stackexchange", "id": 9465, "tags": "everyday-chemistry, safety, polymers, esters" }
Why do aluminum, magnesium, and calcium form white precipitates when mixed with sodium hydroxide?
Question: Why do aluminum, magnesium, and calcium form white precipitates when mixed with sodium hydroxide? Shouldn't there be no reaction? Because sodium is more reactive than all of those elements, it should be impossible for sodium to be displaced by those three elements. YouTube video - "NaOH + Al - sodium hydroxide and aluminum" Answer: When adding aluminium, calcium or magnesium, you are most likely adding them as chlorides. They dissociate almost fully, similarly to sodium hydroxide. Then you have a solution with these four species:
Sodium cations
Calcium (for example) cations
Hydroxide anions
Chloride anions
The precipitate that forms depends on the solubility of the different combinations of anions and cations. Sodium chloride and hydroxide are soluble so they remain in solution. Metal chlorides are also soluble. However calcium, magnesium, and aluminium hydroxides are insoluble, so they form the precipitate. You're basically neutralising the basic solution, forming a solution of sodium chloride and a "basic" solid precipitate.
{ "domain": "chemistry.stackexchange", "id": 10195, "tags": "precipitation" }
What did I do wrong with this simple filter build?
Question: I tried to put everything I have learned from people here together to code my first filter from scratch. Unfortunately, it didn't go well and I'm not getting the expected output. The math/code became quite big and messy, which makes it hard to read. But I went through it a few times and couldn't find an error. I'm not sure what I've done wrong. I feel like perhaps I have a big idea wrong about how to do this. Maybe I took a wrong turn somewhere. Here is the circuit: This is for a physical modeling purpose, so it is necessary to have everything in terms of the component resistor, capacitor, and inductor values as drawn. I attempted to work through this by first solving the impedance of the parallel and series components, working out a Laplace transfer function, then substituting $s=\frac{1-z^{-1}}{T}$, expressing it in terms of per sample input/output delay, and then writing some simple C++ for it. But the output is just an impulse of sound then it overloads. It's not filtering as expected. Any ideas what I've done wrong? I've tried to write this out as clearly as I can in terms of the steps I've taken in the hopes it might make sense. 
Parallel Component: $\frac{1}{R_1} = \frac{1}{sL} + \frac{1}{R_d+\frac{1}{sC}}$ $R_1 = \frac{1}{\frac{1}{sL} + \frac{1}{R_d+\frac{1}{sC}}}$ $R_1 = \frac{Ls(CR_ds + 1)}{CLs^{2} + CR_ds+1}$ Series Component: $R_2 = 2R_g$ Transfer Function: $V_{out} = \frac{R_2}{R_2+R_1} V_{in}$ $V_{out} = \frac{2R_g}{2R_g + \frac{Ls(CR_ds + 1)}{CLs^{2} + CR_ds + 1}} V_{in}$ $V_{out}(s) = \frac{2R_g(CLs^{2} + CR_ds + 1)}{2CR_gLs^{2} + 2CR_gR_ds+CLR_ds^{2} + 2R_g + Ls} V_{in}(s)$ Substituting $s=\frac{1-z^{-1}}{T}$: $_{Numerator} = \frac{-2 R_g (-C L z^{-2} + 2 C L z^{-1} - C L + C R_d T z^{-1} - C R_d T - T^{2})}{T^{2}}$ $_{Denominator} = \frac{2 C R_g L z^{-2} - 4 C R_g L z^{-1} + 2 C R_g L + 2 C R_g T R_d - 2 C R_g T R_dz^{-1} + C L R_d- 2C L R_dz^{-1}+C L R_dz^{-2} + 2 R_g T^2 - L T z^{-1} + L T}{T^2}$ Canceling the $1/T^{2}$: $_{Numerator} = -2 R_g (-C L z^{-2} + 2 C L z^{-1} - C L + C R_d T z^{-1} - C R_d T - T^{2})$ $_{Denominator} = 2 C R_g L z^{-2} - 4 C R_g L z^{-1} + 2 C R_g L + 2 C R_g T R_d - 2 C R_g T R_dz^{-1} + C L R_d-2C L R_dz^{-1}+C L R_dz^{-2} + 2 R_g T^2 - L T z^{-1} + L T$ Cross Multiplying: $_{Leftside} = 2 C R_g L V_{out}[n-2] - 4 C R_g L V_{out}[n-1] + 2 C R_g LV_{out}[n] + 2 C R_g T R_dV_{out}[n] - 2 C R_g T R_dV_{out}[n-1] + C L R_dV_{out}[n]-2C L R_dV_{out}[n-1]+C L R_dV_{out}[n-2] + 2 R_g T^2V_{out}[n] - L T V_{out}[n-1] + L TV_{out}[n]$ $_{Leftside} = V_{out}[n] (2 C R_g L + 2 C R_g T R_d +C L R_d + 2 R_g T^2 + L T) + 2 C R_g L V_{out}[n-2] - 4 C R_g L V_{out}[n-1] - 2 C R_g T R_dV_{out}[n-1] -2C L R_dV_{out}[n-1]+C L R_dV_{out}[n-2] - L T V_{out}[n-1]$ $_{Rightside} = -2 R_g (-C L V_{in}[n-2] + 2 C L V_{in}[n-1] - C LV_{in}[n] + C R_d T V_{in}[n-1] - C R_d TV_{in}[n] - T^{2}V_{in}[n])$ Final Equation: $V_{out}[n] (2 C R_g L + 2 C R_g T R_d +C L R_d + 2 R_g T^2 + L T) = -2 R_g (-C L V_{in}[n-2] + 2 C L V_{in}[n-1] - C LV_{in}[n] + C R_d T V_{in}[n-1] - C R_d TV_{in}[n] - T^{2}V_{in}[n]) - (2 C R_g L V_{out}[n-2] - 4 C R_g L V_{out}[n-1] - 2 C R_g T 
R_dV_{out}[n-1] -2C L R_dV_{out}[n-1]+C L R_dV_{out}[n-2] - L T V_{out}[n-1])$ $V_{out}[n] = \frac{-2 R_g (-C L V_{in}[n-2] + 2 C L V_{in}[n-1] - C LV_{in}[n] + C R_d T V_{in}[n-1] - C R_d TV_{in}[n] - T^{2}V_{in}[n]) - (2 C R_g L V_{out}[n-2] - 4 C R_g L V_{out}[n-1] - 2 C R_g T R_dV_{out}[n-1] -2C L R_dV_{out}[n-1]+C L R_dV_{out}[n-2] - L T V_{out}[n-1])}{2 C R_g L + 2 C R_g T R_d +C L R_d + 2 R_g T^2 + L T }$ Code: class PhysicalFilter{ public: void setSampleRate(double sampleRateIn){ T = 1/sampleRateIn; } float filterSample(float inputSample, float C, float L, float R_d, float R_g){ input_2 = input_1; input_1 = input; input = inputSample; output_2 = output_1; output_1 = output; float numerator = -2 * R_g * ((-C * L * input_2) + (2 * C * L * input_1) - (C * L * input) + (C * R_d * T * input_1) - (C * R_d * T * input) - (T * T * input)) - ((2 * C * R_g * L * output_2) - (4 * C * R_g * L * output_1) - (2 * C * R_g * T * R_d * output_1) - (2 * C * L * R_d * output_1) + (C * L * R_d * output_2) - (L * T * output_1)); float denominator = (2 * C * R_g * L) + (2 * C * R_g * T * R_d) + (C * L * R_d) + (2 * R_g * T * T) + (L * T); output = numerator/denominator; return output; } private: float input = 0.f; float input_1 = 0.f; float input_2 = 0.f; float output = 0.f; float output_1 = 0.f; float output_2 = 0.f; float C = 1.f; float L = 1.f; float R_g = 1.f; float R_d = 1.f; float T = 1/44100.f; } Answer: Your analog transfer function looks OK. 
For the sake of clarity - and to reduce the chance of making errors - I'd just rewrite it as $$H_a(s)=G\cdot\frac{s^2+as + b}{s^2+cs + d}\tag{1}$$ with $$\begin{align}G&=\frac{2R_g}{R_d+2R_g}\\a&=\frac{R_d}{L}\\b&=\frac{1}{LC}\\c&=G\left(a+\frac{1}{2R_gC}\right)\\d&=G\cdot b\end{align}$$ Then you can use the backward Euler transformation on the general second-order transfer function given by $(1)$, resulting in $$H_d(z)=\frac{G}{1+cT+dT^2}\cdot\frac{1+aT+bT^2-(2+aT)z^{-1}+z^{-2}}{1-\frac{2+cT}{1+cT+dT^2}z^{-1}+\frac{1}{1+cT+dT^2}z^{-2}}\tag{2}$$ Now you can check if your discrete-time transfer function is correct. The next step is to check your code. I would suggest to rewrite it to implement a general biquad transfer function: $$H(z)=\frac{b_0+b_1z^{-1}+b_2z^{-2}}{1+a_1z^{-1}+a_2z^{-2}}\tag{3}$$ You can test your biquad routine by supplying some known coefficients $b_i$ and $a_i$ (e.g., for a standard low pass filter, etc.). As soon as you're convinced that the routine works properly, you can test it with the coefficients of your design. With what you have now you can't really separate the design and the implementation, so testing and debugging becomes problematic.
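The suggested debugging route (implement the general biquad $(3)$ once, then feed it known coefficients) can be sketched in plain Python; the class name and the known-answer test values below are illustrative, not taken from the answer:

```python
class Biquad:
    """Direct form I implementation of
    H(z) = (b0 + b1 z^-1 + b2 z^-2) / (1 + a1 z^-1 + a2 z^-2)."""
    def __init__(self, b0, b1, b2, a1, a2):
        self.b = (b0, b1, b2)
        self.a = (a1, a2)
        self.x1 = self.x2 = 0.0   # input delay line
        self.y1 = self.y2 = 0.0   # output delay line

    def process(self, x):
        b0, b1, b2 = self.b
        a1, a2 = self.a
        y = b0 * x + b1 * self.x1 + b2 * self.x2 - a1 * self.y1 - a2 * self.y2
        self.x2, self.x1 = self.x1, x
        self.y2, self.y1 = self.y1, y
        return y

# Known-answer test: the FIR averager b = [0.5, 0.5, 0], a = [0, 0]
# must have impulse response [0.5, 0.5, 0, 0, ...].
f = Biquad(0.5, 0.5, 0.0, 0.0, 0.0)
impulse = [1.0, 0.0, 0.0, 0.0]
response = [f.process(x) for x in impulse]
print(response)   # -> [0.5, 0.5, 0.0, 0.0]
```

Once a routine like this passes a few such checks, the coefficients derived from $(2)$ can be dropped in, so a wrong output then points at the design rather than the implementation.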
{ "domain": "dsp.stackexchange", "id": 8233, "tags": "filters, discrete-signals, lowpass-filter, laplace-transform, c++" }
Namespace without roslaunch
Question: Hi, I'd like to run a ROS node within a namespace but without roslaunch. I have to track down a problem related to a multi-robot scenario, because I use namespaces to start the ROS nodes for each robot. It looks like there is a problem with one node and I'd like to debug it, so I'd like to start the node with a debugger but within a namespace. Is this possible? Greetings Originally posted by Markus Bader on ROS Answers with karma: 847 on 2014-03-24 Post score: 0 Answer: You should be able to set the ROS_NAMESPACE environment variable to your namespace. Everything in the node should be pushed to the namespace. Originally posted by dornhege with karma: 31395 on 2014-03-24 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Markus Bader on 2014-03-24: Thank you so much, that's exactly what I needed.
{ "domain": "robotics.stackexchange", "id": 17402, "tags": "ros, roslaunch, namespace, rosrun" }
What is the physical meaning of the vanishing antidiagonal of the matrix representation of the $\textbf{n}\cdot\textbf{S}$ operator?
Question: I encountered a problem where I had to use $\textbf{n}\cdot{\textbf{S}}$; the matrix I obtained has all zeros on its antidiagonal. What does it mean physically that the antidiagonal of this matrix is 0, for any $\textbf{n}$? Answer: All spin matrices for integer spin j have this property in the spherical basis (not in the Cartesian one, of course). It is easy to see that. They are $(2j+1)\times (2j+1)$-dimensional matrices. $S_z$ is diagonal, but its central eigenvalue is 0 (for integer spins). $S_+$ is a raising operator by one, so only the first parallel above the diagonal is nonzero; however, it has length $2j$, so it does not "shadow" the central diagonal 0, and so the second, third, ... parallels vanish, yielding 0s. The analogous argument holds for $S_-$, and hence for linear combinations of raising and lowering operators. You are done. As indicated, this is a fluke property of the spherical basis. In the Cartesian basis, you'd have nonzero antidiagonals for $S_y$. I don't see much physics in a basis-dependent statement, then.
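For concreteness (assuming the spin-1 case, the smallest integer spin), with $\mathbf n=(\sin\theta\cos\phi,\,\sin\theta\sin\phi,\,\cos\theta)$ the operator in the standard spherical ($S_z$-eigenstate) basis is

```latex
\mathbf{n}\cdot\mathbf{S}
  = \hbar
    \begin{pmatrix}
      \cos\theta & \dfrac{\sin\theta}{\sqrt{2}}\,e^{-i\phi} & 0 \\[6pt]
      \dfrac{\sin\theta}{\sqrt{2}}\,e^{i\phi} & 0 & \dfrac{\sin\theta}{\sqrt{2}}\,e^{-i\phi} \\[6pt]
      0 & \dfrac{\sin\theta}{\sqrt{2}}\,e^{i\phi} & -\cos\theta
    \end{pmatrix}
```

The antidiagonal entries $(1,3)$, $(2,2)$, $(3,1)$ vanish for every $\theta$ and $\phi$, illustrating the general argument: the central diagonal entry is the zero eigenvalue of $S_z$, and the single nonzero raising/lowering parallels never cross the antidiagonal for integer $j$.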
{ "domain": "physics.stackexchange", "id": 57015, "tags": "quantum-mechanics, homework-and-exercises, operators, quantum-interpretations" }
Logistic Regression cost function - what if ln(0)?
Question: I am building logistic regression from scratch. The simplified cost function I am using is (from the machine learning course on Coursera): $$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log h_\theta(x^{(i)}) + (1-y^{(i)})\log(1-h_\theta(x^{(i)}))\right]$$ In a specific case during learning, one observation in the training set has $y = 0$, but the specific choice of betas in $$h_\theta(x) = g(z) = \frac{1}{1+e^{-z}}$$ makes $g(z) = h(x) = 1$, because e.g. $z > 50$. In this case the right side of $J$ is $(1 - 0) \cdot \log(1 - 1)$, which is $-\infty$ (I am doing my calculations in Python). I understand that in this case the value of the cost function should be high, because the predicted probability of $y = 1$ is very big while the truth is that it actually is 0. Is the problem the approximation of $g(50)$ as 1 instead of something like 0.999999? Or is there some more fundamental error in my logic? Because of this specific example, the summation of the cost over all observations is nan (not a number) in my code. Answer: In practice, an offset is used to avoid log explosion due to values close to zero. For example $\hat{\text{log}}(x)=\text{log}(x + \text{1e-6})$.
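A minimal Python sketch of both the failure and the suggested fix (the function name and structure are illustrative; the offset value 1e-6 is the one given in the answer):

```python
import math

def log_loss(y, h, eps=0.0):
    """Single-observation logistic cost -[y*log(h) + (1-y)*log(1-h)],
    with an optional offset eps to avoid log(0)."""
    return -(y * math.log(h + eps) + (1 - y) * math.log(1 - h + eps))

# Saturated sigmoid: for z > 50, g(z) rounds to exactly 1.0 in doubles,
# because e^-50 ~ 2e-22 is below double precision next to 1.
h = 1 / (1 + math.exp(-50))

# Naive cost for y = 0 blows up: log(1 - 1) = log(0).
try:
    naive = log_loss(0, h)       # math.log(0.0) raises in Python
except ValueError:
    naive = float('inf')

# With the offset, the cost is large (the prediction is badly wrong)
# but finite, so the sum over observations stays a number.
stable = log_loss(0, h, eps=1e-6)
print(stable)                    # ~13.8, i.e. -log(1e-6)
```

So it is not a "fundamental error of logic": the model's penalty is genuinely meant to be huge here, and the offset (or clipping h away from 0 and 1) just keeps it representable.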
{ "domain": "datascience.stackexchange", "id": 4831, "tags": "logistic-regression, cost-function" }
Why can't the DFT be implemented this way?
Question: I am trying to implement the discrete Fourier transform.Here is my code in MATLAB: sym k n = 1:4; disp(n); y = n-2>=0; z = n-4>=0; x = y-z; X(k) = sum(x.*exp(-1i*2*pi*k.*n/4)); l = 1:4; subplot(1,2,1); stem(l,abs(X(l))); subplot(1,2,2); stem(l,angle(X(l))); I get these results for magnitude and phase: However when I try to put the values [0,1,1,0] in this DFT calculator I don't get the same results. Where is the error in my code? Answer: You're missing the 0 frequency. Recall the definition of the DFT: $$X[k] = \sum_{n =0}^{N-1}x[n]e^{-j2\pi \frac{k}{N}n}\quad\quad k = 0, 1, \dots, N-1$$ n = 1:4; y = n-2>=0; z = n-4>=0; x = y-z; for k = 1:4 X(k) = sum(x .* exp(-1j*2*pi*(k-1).*(n-1)/4)); end gives the correct result.
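The corrected loop can be cross-checked with a direct Python implementation of the same definition (standard library only; function names are illustrative). For x = [0, 1, 1, 0] the expected magnitudes are $|X| = [2, \sqrt2, 0, \sqrt2]$:

```python
import cmath

def dft(x):
    """Naive DFT straight from the definition, k, n = 0 .. N-1."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [0, 1, 1, 0]
X = dft(x)
mags = [abs(v) for v in X]
print([round(m, 6) for m in mags])   # -> [2.0, 1.414214, 0.0, 1.414214]
```

Note both k and n run from 0, which is exactly the off-by-one the MATLAB fix handles with the (k-1) and (n-1) terms.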
{ "domain": "dsp.stackexchange", "id": 11732, "tags": "matlab, discrete-signals, fourier-transform, dft" }
catkin build fails after upgrading ROS distro
Question: Recently I upgraded my Ubuntu from 16.04 to 18.04, thus I reinstalled ROS Melodic after uninstalling Kinetic. roscore and some other basic ROS packages work fine, however when I try to build my workspace with catkin build the following error message appears: CMake Error at /home/user1/projects/catkin_ws/src/custom_package/CMakeLists.txt:10 (find_package): By not providing "Findcatkin.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "catkin", but CMake did not find one. Could not find a package configuration file provided by "catkin" with any of the following names: catkinConfig.cmake catkin-config.cmake Add the installation prefix of "catkin" to CMAKE_PREFIX_PATH or set "catkin_DIR" to a directory containing one of the above files. If "catkin" provides a separate development package or SDK, be sure it has been installed. make: *** [cmake_check_build_system] Error 1 The package and workspace were working fine before (on another Melodic machine). I tested creating a new empty workspace with catkin init but catkin build still throws the same error message. I also did source /opt/ros/melodic/setup.bash. My cmake version is 3.10.2. Does anyone know how to solve this? Many thanks! Originally posted by rfn123 on ROS Answers with karma: 146 on 2020-06-18 Post score: 0 Answer: Ok, somehow I could build a new empty workspace with catkin build after I reinstalled the ros-melodic-catkin package. In the old workspace, doing catkin clean and then rebuilding also worked. Originally posted by rfn123 with karma: 146 on 2020-06-18 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 35160, "tags": "ros-melodic, catkin" }
Where does the reaction force go when we slip?
Question: I have been trying to wrap my head around friction and how it works, and I can't quite understand the concept of slipping. When I walk, I push the ground backwards and hence the ground pushes me forwards (which is the friction force). Hence friction is what makes me move forward. However, on a slippery plane, when I walk forward, doesn't the plane also apply a force equal in magnitude and opposite in direction, hence making me move forward? Why do I slip? Shouldn't I be moving forward? Answer: However on a slippery plane when I walk forward, doesn't the plane also apply a force equal in magnitude and opposite in direction, hence making me move forward? Unless the plane is totally frictionless, there will still be kinetic (sliding) friction. You will still be applying a force backwards on the plane, and the plane will still exert an equal and opposite force forwards on you, but the force will be less since kinetic friction is generally less than static friction. Hence you may still accelerate forward, but at a slower rate than when you are not slipping and the friction was static. An example is a car that attempts to accelerate too quickly and loses traction. The car still moves forward but at a much slower rate and with the wheels spinning ("burning rubber" as they say.) On the other hand, if the plane were frictionless, when you push your leg backwards there would be no friction force to oppose it and act forward. It would be equivalent to moving your foot backwards without touching the surface. Taking the car example, if the road were frictionless (or close to it) the wheel would spin in place and there would be little or no movement forward of the car. Hope this helps.
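The car example can be made quantitative with a short Python sketch. For a traction-limited drive, the maximum forward acceleration is $a = \mu g$; the friction coefficients below are illustrative values, not from the answer:

```python
g = 9.81          # m/s^2
mu_static = 0.9   # roughly rubber on dry asphalt (illustrative value)
mu_kinetic = 0.6  # sliding friction is lower (illustrative value)

# Max forward acceleration the tyres can transmit: a = F/m = mu*m*g/m = mu*g
a_gripping = mu_static * g    # wheels rolling without slipping
a_spinning = mu_kinetic * g   # wheels spinning ("burning rubber")

print(a_gripping, a_spinning)  # ~8.83 vs ~5.89 m/s^2: still forward, just slower
```

The spinning-wheel acceleration is still positive, matching the point above: slipping reduces the forward force, it does not remove it (unless the surface is truly frictionless).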
{ "domain": "physics.stackexchange", "id": 98676, "tags": "newtonian-mechanics, forces, friction" }
Cross-differentiation to derive the maxwell relation $\left(\frac{\partial T}{\partial V}\right)_S = -\left(\frac{\partial P}{\partial S}\right)_V$
Question: How can I use $T=\left(\frac{\partial E}{\partial S}\right)_V$ and $P=-\left(\frac{\partial E}{\partial V}\right)_S$ to derive $$\left(\frac{\partial T}{\partial V}\right)_S = -\left(\frac{\partial P}{\partial S}\right)_V$$ The book (Statistical Physics by F. Mandl) suggests that $\frac{\partial^2 E}{\partial V \partial S} = \frac{\partial^2 E}{\partial S \partial V}$, but I can't make the connection as to how this helps. He calls this "cross-differentiation", i.e. to form the two derivatives given. However, how can his suggestion be true? If we take the 2nd derivative of $E$ using the equation for $T$ and $P$ given above, then $$\partial T=\left(\frac{\partial^2 E}{\partial S^2}\right)$$ $$\partial P=-\left(\frac{\partial^2 E}{\partial V^2}\right)$$ Answer: Assuming the functions are well-behaved (continuous and differentiable), you can change the order of differentiation. $$ \left(\frac{\partial T}{\partial V}\right)_S=\frac{\partial}{\partial V}\left(\frac{\partial E}{\partial S}\right) = \frac{\partial}{\partial S}\left(\frac{\partial E}{\partial V} \right) = -\left(\frac{\partial P}{\partial S}\right)_V$$
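The equality of mixed partials can be spot-checked numerically for any smooth toy function $E(S,V)$; here $E = S^2/V$, an arbitrary choice purely for illustration, comparing $(\partial T/\partial V)_S$ with $-(\partial P/\partial S)_V$ via central finite differences:

```python
def E(S, V):
    return S**2 / V          # arbitrary smooth toy "energy" function

h = 1e-5
S0, V0 = 1.3, 0.7            # arbitrary evaluation point

def T(S, V):                 # T = (dE/dS)_V, central difference
    return (E(S + h, V) - E(S - h, V)) / (2 * h)

def P(S, V):                 # P = -(dE/dV)_S, central difference
    return -(E(S, V + h) - E(S, V - h)) / (2 * h)

dT_dV = (T(S0, V0 + h) - T(S0, V0 - h)) / (2 * h)
dP_dS = (P(S0 + h, V0) - P(S0 - h, V0)) / (2 * h)

print(dT_dV, -dP_dS)         # the two sides of the Maxwell relation agree
```

Analytically, $T = 2S/V$ gives $(\partial T/\partial V)_S = -2S/V^2$, and $P = S^2/V^2$ gives $-(\partial P/\partial S)_V = -2S/V^2$ as well, which is what the numbers reproduce.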
{ "domain": "physics.stackexchange", "id": 29920, "tags": "homework-and-exercises, thermodynamics, statistical-mechanics, maxwell-relations" }
Problem in derivation of propagator of vector meson
Question: I have been reading A. Zee's book on QFT. In chapter I.5, equation (2) is given as, $$\left[(\partial^2 + m^2) g^{\mu\nu} - \partial^\mu \partial^\nu\right] D_{\nu\lambda}(x) = \delta^\mu_\lambda \delta^4(x).$$ In order to go to momentum space, we write, $$D_{\nu\lambda}(x) \equiv \int \frac{d^4 k}{(2\pi)^4}\, D_{\nu\lambda}(k)\, e^{ikx}.$$ Plugging this into the above equation, we get, $$\left[-(k^2-m^2) g^{\mu\nu} + k^\mu k^\nu\right] D_{\nu\lambda}(k) = \delta^\mu_\lambda.$$ I am fine up to this point. After this, Zee writes, $$D_{\nu\lambda}(k) = \frac{-g_{\nu\lambda} + k_\nu k_\lambda/m^2}{k^2 - m^2}.$$ Can someone explain how this final step is obtained? Answer: A standard method of solving for the propagator (which can be extended to e.g. the photon propagator as well) involves taking the ansatz $D_{\nu\lambda}(k)=Ag_{\nu\lambda}+Bk_\nu k_\lambda$, since these are the only rank-2 tensors you can form from the independent variable $k^\mu$. You insert this into the propagator equation in momentum space: $$\left[-(k^2-m^2) g^{\mu\nu} + k^\mu k^\nu\right] (Ag_{\nu\lambda}+Bk_\nu k_\lambda) = \delta^\mu_\lambda$$ From here, it's simply a matter of expanding out the terms and solving for $A$ and $B$, which should come out to be $A = \frac{-1}{k^2-m^2}$ and $B = \frac{1}{m^2(k^2-m^2)}$. The derivation that you have provided is incorrect since you multiply both sides by $g_{\mu\nu}$, but $\nu$ is already used as a fully contracted/dummy index (you cannot mix free and dummy indices in a tensor equation).
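The quoted $A$ and $B$ can also be verified numerically by contracting the two tensors for an arbitrary numeric $k^\mu$ and $m$; a pure-Python sketch with metric signature $(+,-,-,-)$, all variable names being illustrative:

```python
# Metric g_{mu nu} = g^{mu nu} = diag(1, -1, -1, -1)
g = [[1.0 if mu == nu else 0.0 for nu in range(4)] for mu in range(4)]
for i in range(1, 4):
    g[i][i] = -1.0

k_up = [0.9, 0.2, -0.4, 0.1]                        # arbitrary k^mu
k_dn = [g[mu][mu] * k_up[mu] for mu in range(4)]    # k_mu (diagonal metric)
m = 1.7
k2 = sum(k_up[mu] * k_dn[mu] for mu in range(4))    # k^2 = k_mu k^mu

A = -1 / (k2 - m**2)
B = 1 / (m**2 * (k2 - m**2))

# M^{mu nu} = -(k^2 - m^2) g^{mu nu} + k^mu k^nu
M = [[-(k2 - m**2) * g[mu][nu] + k_up[mu] * k_up[nu] for nu in range(4)]
     for mu in range(4)]
# D_{nu lam} = A g_{nu lam} + B k_nu k_lam
D = [[A * g[nu][lam] + B * k_dn[nu] * k_dn[lam] for lam in range(4)]
     for nu in range(4)]

# Contracting over nu must give the identity delta^mu_lam
prod = [[sum(M[mu][nu] * D[nu][lam] for nu in range(4)) for lam in range(4)]
        for mu in range(4)]
print(prod[0][0], prod[0][1])   # -> 1.0 and ~0.0
```

Algebraically this is just the expansion in the answer: the $\delta^\mu_\lambda$ coefficient forces $-(k^2-m^2)A = 1$, and the $k^\mu k_\lambda$ coefficient forces $m^2 B + A = 0$.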
{ "domain": "physics.stackexchange", "id": 74610, "tags": "homework-and-exercises, quantum-field-theory, propagator" }
Running several nodes under a "top" node
Question: I have organized my code in such a way that when I run a "main" file, it calls methods in other classes, from other packages, to accomplish a task. I soon realized that the classes I have written require a rospy node so that they can communicate with ROS_MASTER. But I cannot initialize other nodes under the "main" node, as I think ROS only allows one node to be initialized, and not a node inside another. Example: The classes in the other packages use SimpleActionClient/action client, which uses actionlib; that requires initializing a node to publish messages and call services. I had only initialized a node for the "main" method. And since I cannot initialize any more nodes, the code does not really do anything. Any advice on how to resolve this issue? Or any alternative ideas? Thanks! Originally posted by Nash on ROS Answers with karma: 207 on 2011-05-11 Post score: 0 Original comments Comment by Nash on 2011-05-12: Sorry about that!, hope this edit makes sense.. Comment by tfoote on 2011-05-11: Can you update your question to clarify what you're trying to do, and what isn't working? Possibly provide an example. From your description everything sounds doable. Answer: From what I understand you want several classes of your program to use ros-node functionality. You can just pass them the node handle in their constructor and store it in a member variable. See here for such a constructor definition. Originally posted by Felix Endres with karma: 6468 on 2011-05-11 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Nash on 2011-05-12: Thank You!..it works like a charm!
{ "domain": "robotics.stackexchange", "id": 5559, "tags": "ros, simpleactionclient, node" }
Internal energy change during phase change
Question: Ice at $\mathrm{0\ ^\circ C}$ is converted to water at $\mathrm{0\ ^\circ C}$. If $\Delta H$ for the transition of ice to water is $\mathrm{1440~cal}$, calculate the change in internal energy. Since internal energy $\Delta U=nC_V\Delta T$ and $\Delta T=0$, shouldn't $\Delta U=0$? But if I use $ \Delta H=\Delta U +\Delta nRT$, I get $\Delta U=\Delta H\neq 0$. Answer: You are considering the $\Delta H$ and the $\Delta U$ between the following two thermodynamic equilibrium states: State 1: 1 mole of ice at 0 C and 1 atm. State 2: 1 mole of liquid water at 0 C and 1 atm. The relationship between $\Delta H$ and $\Delta U$ at constant pressure is: $$\Delta H=\Delta U + p\Delta V$$where V is molar volume. What is the molar volume of ice? What is the molar volume of liquid water? What is $\Delta V$?
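To make the answer's point concrete, here is a quick numerical sketch. The molar volumes are assumed approximate values (they are not given in the problem): roughly 19.65 cm³/mol for ice and 18.02 cm³/mol for liquid water at 0 °C.

```python
# Quick check that p*dV is negligible next to dH for melting 1 mol of ice
# at 1 atm, using assumed approximate molar volumes.
p = 101_325.0              # Pa (1 atm)
v_ice = 19.65e-6           # m^3/mol, assumed approximate value
v_water = 18.02e-6         # m^3/mol, assumed approximate value
dV = v_water - v_ice       # negative: liquid water is denser than ice

p_dV_cal = p * dV / 4.184  # convert J -> cal

dH = 1440.0                # cal, given in the problem
dU = dH - p_dV_cal         # Delta U = Delta H - p*Delta V

print(f"p*dV  = {p_dV_cal:+.4f} cal")   # a few hundredths of a calorie
print(f"dU   ~= {dU:.2f} cal")          # essentially equal to dH
```

So $\Delta U \approx \Delta H$ here, but not because $\Delta T = 0$: the formula $\Delta U = nC_V\Delta T$ applies to an ideal gas with no phase change, while a phase transition changes internal energy at constant temperature.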
{ "domain": "chemistry.stackexchange", "id": 4922, "tags": "thermodynamics" }
Is $E=\hbar \omega$ correct for massive particles?
Question: From Planck's relation we can say that the energy of a photon is $$E=h\nu=\hbar \omega \, .$$ where $\hbar \equiv h / 2\pi$. On the other hand, the energy of a free particle can be expressed as $$E=\frac{p^2}{2m}$$ and from de Broglie relation we have $$p = \frac{h}{\lambda} = \hbar k \, .$$ So, we can write the dispersion relation $$\omega = \frac{\hbar k^2}{2m} \, .$$ Is this correct? We are mixing a photon energy with a particle energy. The energy of a particle in its most general way is: $$E = \sqrt{p^2 c^2 + m^2c^4} \, .$$ For a photon we have $m=0$, which implies $E = pc = h\nu$, but this doesn't happen for a massive particle. So, is there something that I'm missing? Answer: $\renewcommand{\ket}[1]{|#1\rangle}$ It is correct that the kinetic energy of a massive particle in the non-relativistic limit is $$E = p^2 / 2m \, . \tag{1}$$ It is also correct that for plane waves (i.e. free particle eigenstates), the momentum is related to the wave number via $$p = \hbar k \, . \tag{2}$$ Therefore, as proposed in the question, the frequency of a free massive particle in a plane wave state and in the non-relativistic limit, is $$\omega = \frac{\hbar k^2}{2m} \, . \tag{3}$$ Is this correct? We are mixing a photon energy with a particle energy. Actually yes, it is correct. The relation $$E = h \nu = \hbar \omega \tag{4}$$ is actually a very general relation in non-relativistic quantum mechanics, not limited to photons. From one point of view, this relation comes from Schrodinger's equation for the time evolution of a quantum state $$i \hbar \frac{d \ket{\Psi}}{dt} = H \ket{\Psi} \, . \tag{5}$$ If the state $\ket{\Psi}$ has definite energy (in the case of a free particle the definite energy states are plane waves) then we can replace $H\ket{\Psi}$ with $E\ket{\Psi}$ and we get $$i \hbar \frac{d\ket{\Psi}}{dt} = E \ket{\Psi} \, . \tag{6}$$ From here, if we assume that the state has a sinusoidal time dependence $\exp(-i \omega t)$ then we get $E = \hbar \omega$. 
That time dependence can be assumed because it is a solution to the equation, and any other solution can be written as a linear superposition of such solutions. If you haven't learned about this idea yet don't worry about it. Note, however, that you could also choose $\exp(i \omega t)$ in which case you get $E = -\hbar \omega$. I won't get into the meaning of positive and negative energies in quantum mechanics in this post. If you're interested in that please ask a separate question though, because it is an interesting topic. In relativistic situations things are different. The Schrodinger equation no longer works (at least not in quite the same way) and you have to use things like the Dirac equation, or go to a Lagrangian formulation. You can still derive dispersion relations and you essentially get the equation with the square root as written in the main question.
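A quick numerical illustration of why the dispersion relation $\omega = \hbar k^2/2m$ is sensible: its group velocity $d\omega/dk$ reproduces the classical particle velocity $p/m$, while the phase velocity is only half of that. (The electron mass and wave number below are arbitrary illustrative choices.)

```python
# Numerical check that omega = hbar*k^2/(2m) has group velocity p/m.
hbar = 1.054_571_817e-34        # J*s
m = 9.109_383_7015e-31          # kg -- electron mass, an illustrative choice

def omega(k):
    return hbar * k**2 / (2.0 * m)

k0 = 1.0e10                     # wave number in 1/m, illustrative
dk = 1.0e3

v_group = (omega(k0 + dk) - omega(k0 - dk)) / (2.0 * dk)  # d(omega)/dk
v_phase = omega(k0) / k0
v_classical = hbar * k0 / m     # p/m with p = hbar*k

print(v_group / v_classical)    # ~1: the wave packet moves at the particle speed
print(v_phase / v_classical)    # ~0.5: phase velocity is half the group velocity
```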
{ "domain": "physics.stackexchange", "id": 25329, "tags": "quantum-mechanics, mass-energy, dispersion" }
Vincenty's distance Direct formulae numpy
Question: I've refactored a function from the pygc library used to generate the great_circle. The Vincenty's equation below can be found here. Destination given distance & bearing from start point (direct solution) Again, this presents Vincenty’s formulæ arranged to be close to how they are used in the script. $$ \begin{align} a, b =&\ \textrm{major & minor semi-axes of the ellipsoid} \\ f =&\ \textrm{flattening }(a−b)\ /\ a \\ φ_1, φ_2 =&\ \textrm{geodetic latitude} \\ L =&\ \textrm{difference in longitude} \\ s =&\ \textrm{length of the geodesic along the surface of the ellipsoid (in the same units as }a \textrm{ & } b \textrm{)} \\ α1, α2 =&\ \textrm{azimuths of the geodesic (initial/final bearing)} \\ \\ \tan U_1 =&\ (1−f) \cdot \tan φ_1 &\textrm{(U is ‘reduced latitude’)} \\ \cos U_1 =&\ 1 / \sqrt{1 + \tan^2 U_1},\qquad \sin U_1 = \tan U_1 \cdot \cos U_1 &\textrm{(trig identities; §6)} \\ \sigma_1 =&\ \arctan(\tan U_1 / \cos \alpha_1) \\ \sin α =&\ \cos U_1 \cdot \sin α_1 &(2) \\ \cos^2 α =&\ 1 − \sin^2 α &\textrm{(trig identity; §6)} \\ u^2 =&\ \cos^2 α \cdot \frac{a^2−b^2}{b^2} \\ A =&\ 1 + \frac{u^2}{16384} \cdot \left\{4096 + u^2 \cdot [−768 + u^2 · (320 − 175 \cdot u^2)]\right\} & (3) \\ B =&\ u^2/1024 \cdot \left\{256 + u^2 · [−128 + u^2 · (74 − 47 · u^2)]\right\} & (4) \\ σ =&\ \frac{s}{b \cdot A} & \textrm{(first approximation)} \\ \end{align} \\ $$ iterate until change in σ is negligible (e.g. 
10-12 ≈ 0.006mm) { $$ \begin{align} \cos 2σ_m =&\ \cos(2σ_1 + σ) & (5) \\ Δσ =&\ B \cdot \sin σ \cdot \left\{\cos 2σ_m + B/4 · [\cos σ · (−1 + 2 \cdot \cos^2 2σ_m) \\ − B/6 \cdot \cos 2σ_m · (−3 + 4 \cdot \sin^2 σ) · (−3 + 4 · cos² 2σ_m)]\right\} & (6) \\ σʹ =&\ s / b·A + Δσ & (7) \\ \end{align} $$ } $$ \begin{align} φ_2 =&\ \arctan(\sin U_1 \cdot \cos σ + \cos U_1 \cdot \sin σ \cdot \cos α_1 \\ &\ / (1−f) \cdot \sqrt{\sin^2 α + (\sin U_1 \cdot \sin σ − \cos U_1 \cdot \cos σ · \cos α_1)^2}) & (8) \\ λ =&\ \arctan(\sin σ · \sin α_1 / \cos U_1 · \cos σ − \sin U_1 · \sin σ \cdot \cos α_1) & (9) \\ C =&\ f/16 \cdot \cos^2 α · [4 + f · (4 − 3 \cdot \cos^2 α)] & (10) \\ L =&\ λ − (1−C) \cdot f \cdot \sin α · \left\{σ + C \cdot \sin σ \cdot [\cos 2σ_m + C · \cos σ · (−1 + 2 \cdot \cos^2 2σ_m)]\right\} & (11) \\ λ_2 =&\ λ_1 + L \\ α_2 =&\ \arctan\left( \frac{\sin α}{−(\sin U_1 · \sin σ − \cos U_1 · \cos σ \cdot \cos α_1)}\right) & (12) \\ \end{align} $$ Where: \$φ_2, λ_2\$ is destination point \$α_2\$ is final bearing (in direction \$p_1 \rightarrow p_2\$) constants and imports """Vincenty'distance Direct formulae""" from typing import Tuple from numpy import ( vectorize, arctan, arctan2, tan, sin, cos, sqrt, pi ) # a: Semi-major axis = 6 378 137.0 metres A = 6_378_137.0 # b: Semi-minor axis ≈ 6 356 752.314 245 metres B = 6_356_752.314_245 # f = flattening (a−b)/a F = (A - B) / A # (a²−b²)/b² ( lat reduction ) F2 = (pow(A, 2) - pow(B, 2)) / pow(B, 2) TWO_PI = 2.0 * pi function upsilon2: Callable[[float], Tuple[float, float]] = (lambda u2: ( # A = 1 + u²/16384 · {4096 + u² · [−768 + u² · (320 − 175 · u²)]} (1 + (u2 / 16384) * (4096 + u2 * (-768 + u2 * (320 - 175 * u2)))), # B = u²/1024 · {256 + u² · [−128 + u² · (74 − 47 · u²)]} ((u2 / 1024) * (256 + u2 * (-128 + u2 * (74 - 47 * u2)))) )) def vincenty_direct( latitude: float, longitude: float, azimuth: float, distance: float) -> Tuple[float, float]: """ Returns: lat and long of projected point """ azimuth = azimuth + TWO_PI 
if azimuth < 0.0 else( azimuth - TWO_PI if azimuth > TWO_PI else azimuth ) # tan U1 = (1−f) · tan φ1 tan_u1 = (1 - F) * tan(latitude) # cos U1 = 1 / √1 + tan² U1, sin U1 = tan U1 · cos U1 cos_u1 = arctan(tan_u1) # σ1 = atan(tan U1 / cos α1) sigma1 = arctan2(tan_u1, cos(azimuth)) # sin α = cos U1 · sin α1 sin_alpha = cos(cos_u1) * sin(azimuth) # cos² α = 1 − sin² α cos2_alpha = 1 - pow(sin_alpha, 2) ( # A = 1 + u²/16384 · {4096 + u² · [−768 + u² · (320 − 175 · u²)]} alpha, # B = u²/1024 · {256 + u² · [−128 + u² · (74 − 47 · u²)]} (4) beta # u² = cos² α · (a²−b²)/b² ) = upsilon2(cos2_alpha * F2) def sigma_recursion(sigma_0: float) -> Tuple[float, float]: # cos 2σm = cos(2σ1 + σ) cos_2sigma_m: float = cos(2 * sigma1 + sigma_0) # Δσ = B · sin σ · {cos 2σm + B/4 · [cos σ · (−1 + 2 · cos² 2σm) # − B/6 · cos 2σm · (−3 + 4 · sin² σ) · (−3 + 4 · cos² 2σm)]} delta_sigma: float = ( # B · sin σ · beta * sin(sigma_0) * # cos 2σm + B/4 · (cos_2sigma_m + (beta / 4) * # [cos σ · (cos(sigma_0) * # (−1 + 2 · cos² 2σm) - (-1 + 2 * pow(cos_2sigma_m, 2) - # − B/6 · cos 2σm · (beta / 6) * cos_2sigma_m * # (−3 + 4 · sin² σ) · (-3 + 4 * pow(sin(sigma_0), 2)) * # (−3 + 4 · cos² 2σm)] (-3 + 4 * pow(cos_2sigma_m, 2))) ) )) # σʹ = s / b·A + Δσ sigma = (distance / (B * alpha)) + delta_sigma # iterate until change in σ is negligible (e.g. 
10-12 ≈ 0.006mm) if abs((sigma_0 - sigma) / sigma) > 1.0e-9: sigma_recursion(sigma) return sigma, cos_2sigma_m sigma, cos_2sigma_m = sigma_recursion( # σ = s / (b·A) (first approximation) (distance / (B * alpha))) # φ2 = atan(sin U1 · cos σ + cos U1 · sin σ · cos α1 / latitude = arctan2( (sin(cos_u1) * cos(sigma) + cos(cos_u1) * sin(sigma) * cos(azimuth)), # (1−f) · √sin² α + (sin U1 · sin σ − cos U1 · cos σ · cos α1)² ) ((1 - F) * sqrt(pow(sin_alpha, 2) + pow(sin(cos_u1) * sin(sigma) - cos(cos_u1) * cos(sigma) * cos(azimuth), 2))) ) def omega(): # λ = atan(sin σ · sin α1 / cos U1 · cos σ − sin U1 · sin σ · cos α1) _lambda = arctan2( (sin(sigma) * sin(azimuth)), (cos(cos_u1) * cos(sigma) - sin(cos_u1) * sin(sigma) * cos(azimuth)) ) # C = f/16 · cos² α · [4 + f · (4 − 3 · cos² α)] c_sigma = ( (F / 16) * cos2_alpha * (4 + F * (4 - 3 * cos2_alpha)) ) # L = λ − (1−C) · f · sin α · {σ + C · sin σ · [cos 2σm + C · cos σ · (−1 + 2 · cos² 2σm)]} return ( _lambda - (1 - c_sigma) * F * sin_alpha * ( sigma + c_sigma * sin(sigma) * ( cos_2sigma_m + c_sigma * cos(sigma) * (-1 + 2 * pow(cos_2sigma_m, 2)) ))) # return longitude + omega return latitude, longitude + omega() direct = vectorize(vincenty_direct) usage import numpy as np from vincenty import direct def dev_dist(project_seconds=np.array([900, 1800, 2700, 3600])): """dev""" # position latitude = 33.01 longitude = -98.94 # direction azimuth = -171.95 # speed meters_per_second = 11 # distance projection distance = meters_per_second*project_seconds # in rads rad_lat, rad_lon, rad_azi = np.deg2rad((latitude, longitude, azimuth)) # degs = np.around(np.rad2deg( direct(rad_lat, rad_lon, rad_azi, distance)), decimals=2) points = np.swapaxes(degs, 0, 1) times = [f"+{x:02.0f}min"for x in project_seconds/60] projection = dict(zip(times, points.tolist())) if __name__ == '__main__': dev_dist() distance (meters) >>> [ 9900 19800 29700 39600] degs >>> [[ 32.92 32.83 32.74 32.66] [-98.95 -98.97 -98.98 -99. 
]] points >>> [[ 32.92 -98.95] [ 32.83 -98.97] [ 32.74 -98.98] [ 32.66 -99. ]] projection >>> {'+15min': [32.92, -98.95], '+30min': [32.83, -98.97], '+45min': [32.74, -98.98], '+60min': [32.66, -99.0]} Answer: Not a great idea to from numpy import, since there are so many symbols needed that that can lead to significant namespace pollution. The typical approach is just import numpy as np. Rather than pow(, 2) you can just **2. upsilon2 should not be a lambda, and should be a regular function instead. The angle normalisation of azimuth = azimuth + TWO_PI if azimuth < 0.0 else( azimuth - TWO_PI if azimuth > TWO_PI else azimuth ) is not necessary, and even if it were, you should just do azimuth = np.mod(azimuth, TWO_PI) Don't recurse! Python has a very shallow stack capacity and no tail optimization. Just iterate - and this is quite easy with your existing function. Your indentation sometimes deviates from 4 spaces. Try to keep this consistent. Rather than _lambda, to escape a keyword it's more frequent to see lambda_. Don't vectorize(vincenty_direct). You can write an actual vectorised implementation with very little modification to your original implementation. Add some tests. The only ones that I have shown are to test against regression. Don't np.around. Keep full precision. There are better ways to truncate precision for print if that's what you're looking for. Your use of .swapaxes can be replaced with .T. You would benefit from being a little more granular with your functions, cutting them out of closure scope to make dependencies more explicit. Other benefits are that profiling - if that ever becomes necessary - is much easier. 
Suggested """Vincenty's distance Direct formulae""" from pprint import pprint import numpy as np # a: Semi-major axis = 6_378_137.0 metres A = 6_378_137.0 # b: Semi-minor axis ≈ 6_356_752.314_245 metres B = 6_356_752.314_245 # f = flattening (a−b)/a F = 1 - B/A # (a²−b²)/b² ( lat reduction ) F2 = (A / B)**2 - 1 TWO_PI = 2 * np.pi def get_upsilon2(u2: float) -> tuple[float, float]: # u² = cos² α · (a²−b²)/b² return ( # A = 1 + u²/16384 · {4096 + u² · [−768 + u² · (320 − 175 · u²)]} 1 + u2/16384 * (4096 + u2 * (-768 + u2 * (320 - 175*u2))), # B = u²/1024 · {256 + u² · [−128 + u² · (74 − 47 · u²)]} u2/1024 * (256 + u2 * (-128 + u2 * (74 - 47*u2))), ) def sigma_iteration(sigma_0: np.ndarray, sigma1, alpha, beta, distance) -> tuple[np.ndarray, np.ndarray]: # cos 2σm = cos(2σ1 + σ) cos_2sigma_m = np.cos(2 * sigma1 + sigma_0) # Δσ = B · sin σ · {cos 2σm + B/4 · [cos σ · (−1 + 2 · cos² 2σm) # − B/6 · cos 2σm · (−3 + 4 · sin² σ) · (−3 + 4 · cos² 2σm)]} delta_sigma = ( # B · sin σ · beta * np.sin(sigma_0) * # cos 2σm + B/4 · ( cos_2sigma_m + beta/4 * # [cos σ · np.cos(sigma_0) * ( # (−1 + 2 · cos² 2σm) - -1 + 2*cos_2sigma_m**2 - # − B/6 · cos 2σm · beta/6 * cos_2sigma_m * # (−3 + 4 · sin² σ) · (-3 + 4*np.sin(sigma_0)**2) * # (−3 + 4 · cos² 2σm)] (-3 + 4*cos_2sigma_m**2) ) ) ) # σʹ = s / b·A + Δσ sigma = distance/B/alpha + delta_sigma return sigma, cos_2sigma_m def get_sigma(distance: np.ndarray, alpha: float, beta: float, sigma1: float) -> tuple[np.ndarray, np.ndarray]: # σ = s / (b·A) (first approximation) sigma = distance / B / alpha # iterate until change in σ is negligible (e.g. 
10-12 ≈ 0.006mm) while True: old_sigma = sigma sigma, cos_2sigma_m = sigma_iteration(old_sigma, sigma1, alpha, beta, distance) if np.all(np.abs(old_sigma/sigma - 1) <= 1e-12): return sigma, cos_2sigma_m def get_omega( azimuth: float, sigma: np.ndarray, cos_u1: float, sin_alpha: float, cos_2sigma_m: np.ndarray, cos2_alpha: float, ) -> np.ndarray: # λ = atan(sin σ · sin α1 / cos U1 · cos σ − sin U1 · sin σ · cos α1) lambda_ = np.arctan2( (np.sin(sigma) * np.sin(azimuth)), (np.cos(cos_u1) * np.cos(sigma) - np.sin(cos_u1) * np.sin(sigma) * np.cos(azimuth)) ) # C = f/16 · cos² α · [4 + f · (4 − 3 · cos² α)] c_sigma = ( F/16 * cos2_alpha * (4 + F*(4 - 3*cos2_alpha)) ) # L = λ − (1−C) · f · sin α · {σ + C · sin σ · [cos 2σm + C · cos σ · (−1 + 2 · cos² 2σm)]} return ( lambda_ - (1 - c_sigma)*F*sin_alpha * ( sigma + c_sigma * np.sin(sigma) * ( cos_2sigma_m + c_sigma * np.cos(sigma) * (-1 + 2 * cos_2sigma_m**2) ) ) ) def get_latitude(azimuth: float, sigma: np.ndarray, cos_u1: float, sin_alpha: float) -> np.ndarray: # φ2 = atan(sin U1 · cos σ + cos U1 · sin σ · cos α1 / latitude = np.arctan2( (np.sin(cos_u1) * np.cos(sigma) + np.cos(cos_u1) * np.sin(sigma) * np.cos(azimuth)), # (1−f) · √sin² α + (sin U1 · sin σ − cos U1 · cos σ · cos α1)² ) ( (1 - F) * np.sqrt( sin_alpha**2 + ( np.sin(cos_u1) * np.sin(sigma) - np.cos(cos_u1) * np.cos(sigma) * np.cos(azimuth) )**2 ) ) ) return latitude def vincenty_direct( latitude: float, longitude: float, azimuth: float, distance: np.ndarray, ) -> tuple[np.ndarray, np.ndarray]: """ Returns: lat and long of projected point """ # Not necessary: azimuth = np.mod(azimuth, TWO_PI) # tan U1 = (1−f) · tan φ1 tan_u1 = (1 - F) * np.tan(latitude) # cos U1 = 1 / √1 + tan² U1, sin U1 = tan U1 · cos U1 cos_u1 = np.arctan(tan_u1) # σ1 = atan(tan U1 / cos α1) sigma1 = np.arctan2(tan_u1, np.cos(azimuth)) # sin α = cos U1 · sin α1 sin_alpha = np.cos(cos_u1) * np.sin(azimuth) # cos² α = 1 − sin² α cos2_alpha = 1 - sin_alpha**2 alpha, beta = 
get_upsilon2(cos2_alpha * F2) sigma, cos_2sigma_m = get_sigma(distance, alpha, beta, sigma1) latitude = get_latitude(azimuth, sigma, cos_u1, sin_alpha) omega = get_omega(azimuth, sigma, cos_u1, sin_alpha, cos_2sigma_m, cos2_alpha) return latitude, longitude + omega def dev_dist(project_seconds=None) -> None: if project_seconds is None: project_seconds = np.linspace(900, 3600, 4) # position latitude = 33.01 longitude = -98.94 # direction azimuth = -171.95 # speed meters_per_second = 11 # distance projection distance = meters_per_second * project_seconds # in rads rad_lat, rad_lon, rad_azi = np.deg2rad((latitude, longitude, azimuth)) degs = np.rad2deg( vincenty_direct(rad_lat, rad_lon, rad_azi, distance) ) points = degs.T assert np.allclose( points, ( (32.921612216500890, -98.95482179485320), (32.833221427146340, -98.96961414990395), (32.744827642960495, -98.98437722206711), (32.656430874926910, -98.99911116737137), ), atol=1e-12, rtol=0, ) times = (f"+{x:02.0f}min" for x in project_seconds / 60) projection = dict(zip(times, points.tolist())) pprint(projection) if __name__ == '__main__': dev_dist() Output {'+15min': [32.92161221650089, -98.9548217948532], '+30min': [32.83322142714634, -98.96961414990395], '+45min': [32.744827642960495, -98.98437722206711], '+60min': [32.65643087492691, -98.99911116737137]}
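The "Don't recurse!" point deserves a standalone illustration, since `sigma_recursion` in the question would only fail on very slow convergence. The toy fixed-point update below is deliberately slow-converging (it is not Vincenty's iteration) to show how a recursive refinement loop hits CPython's shallow default recursion limit (usually 1000 frames) while the equivalent `while` loop runs at constant stack depth:

```python
# Toy fixed-point update x -> 0.9995*x + 0.0005 (fixed point x = 1),
# contrived so that it needs tens of thousands of refinement steps.
def refine_recursive(x):
    x_new = x * 0.9995 + 0.0005
    if abs(x_new - x) > 1e-12:
        return refine_recursive(x_new)   # one stack frame per step
    return x_new

def refine_iterative(x):
    while True:
        x_new = x * 0.9995 + 0.0005
        if abs(x_new - x) <= 1e-12:
            return x_new
        x = x_new                        # constant stack depth

try:
    refine_recursive(1000.0)
    blew_stack = False
except RecursionError:
    blew_stack = True

print(blew_stack)                  # True on a default CPython interpreter
print(refine_iterative(1000.0))    # converges to ~1.0 with no stack problem
```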
{ "domain": "codereview.stackexchange", "id": 43057, "tags": "python, python-3.x, numpy, coordinate-system, geospatial" }
Gauss law when dealing with materials with conductivity
Question: Suppose we have a parallel plate capacitor filled with two dielectric materials, one with conductivity $\sigma_1$ and permittivity $\epsilon_1$ and the other one with conductivity $\sigma_2$ and permittivity $\epsilon_2$. Each dielectric has thickness equal to half of the distance that separates the plates. The capacitor is connected to a battery of potential $V$. I am asked to find the electric field between the plates. Applying Gauss law, I find that the electric displacement vector inside the capacitor is equal to the surface charge density, $\sigma$. From here, I can calculate $\sigma$, supposing we are dealing with linear dielectrics: $V = \int_0^\frac{d}{2} \frac{D}{\epsilon_1} dl + \int_\frac{d}{2}^d \frac{D}{\epsilon_2} dl = \frac{\sigma d \left( \epsilon_1 + \epsilon_2 \right)}{2\epsilon_1\epsilon_2} \iff \sigma = \frac{2V\epsilon_1\epsilon_2}{d(\epsilon_1+\epsilon_2)}$ From here I conclude that: $E_1 = \frac{\sigma}{\epsilon_1} = \frac{2V\epsilon_2}{d(\epsilon_1+\epsilon_2)}$ $E_2 = \frac{\sigma}{\epsilon_2} = \frac{2V\epsilon_1}{d(\epsilon_1+\epsilon_2)}$ The problem is that, according to my professor, the solution to this part of the exercise is: $E_1 = \frac{2V\sigma_2}{d(\sigma_1+\sigma_2)}$ $E_2 = \frac{2V\sigma_1}{d(\sigma_1+\sigma_2)}$ which he obtains by imposing boundary conditions and calculating the current densities. My question is: why is my procedure wrong? What have I assumed that is not correct? Answer: Your technique is not wrong... if $\sigma_1=\sigma_2=0$. See, if the materials you are working with can conduct current, the parts of the system with free charges will not just be the plates: any part of the (partially) conducting media can also have free charges brought by the currents that flow in these materials. This is your mistaken assumption.
In this problem, the interface between $\epsilon_1,\sigma_1$ and $\epsilon_2,\sigma_2$ can carry a free charge density $\sigma'$, since the system in equilibrium can have brought charge there from either plate. Assuming this, your potential equation now reads $$V = \int_0^\frac{d}{2} \frac{D_-}{\epsilon_1} dl + \int_\frac{d}{2}^d \frac{D_+}{\epsilon_2} dl,\ \text{ with }\ D_\pm=\sigma\pm\sigma'.$$ $$\implies V = \frac{\sigma d \left( \epsilon_1 + \epsilon_2 \right)}{2\epsilon_1\epsilon_2} + \frac{\sigma' d \left( \epsilon_1 - \epsilon_2 \right)}{2\epsilon_1\epsilon_2}.$$ The idea is that this new interface charge density is a free parameter. Its value depends on the current equilibrium state, since that is our other boundary condition; in other words, the system's equilibrium state can only be fully described by both charge and current boundary conditions. Your professor simplifies this by first finding the current equations and seeing that they do not depend on the charge boundary conditions (and thus determine the voltage and electric fields fully on their own). (Edit:) A nice way to illustrate this is to assign zero or infinite conductivity to one of the parts. For example, in the case where $\sigma_1=0$, there cannot be any current flowing in the first region. This necessarily means that $E_2=0$, since otherwise the current coming from the 2nd plate would always want to flow into the first region. For another example, if $\sigma_2\to\infty$, the second part, again, cannot have any electric field inside of it since, well, there are no electric fields inside (perfect) conductors. These things happen regardless of the dielectric properties.
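A quick numerical sketch of the professor's route, with hypothetical material values chosen only for illustration: in the steady state the current density must be the same in both layers, $J = \sigma_1 E_1 = \sigma_2 E_2$, and the fields must still add up to the applied voltage, $E_1 d/2 + E_2 d/2 = V$. Solving that 2×2 system reproduces the closed forms quoted in the question:

```python
# Steady-state solution with hypothetical material values (illustrative only).
# Conditions: sigma1*E1 == sigma2*E2 (current continuity) and
# E1*d/2 + E2*d/2 == V (voltage across the stack).
sigma1, sigma2 = 3.0, 5.0    # S/m, assumed
V, d = 12.0, 0.01            # volts, metres, assumed

# Substitute E2 = (sigma1/sigma2)*E1 into the voltage constraint:
E1 = V / ((d / 2.0) * (1.0 + sigma1 / sigma2))
E2 = (sigma1 / sigma2) * E1

# The professor's closed forms from the question:
E1_ref = 2.0 * V * sigma2 / (d * (sigma1 + sigma2))
E2_ref = 2.0 * V * sigma1 / (d * (sigma1 + sigma2))

print(E1, E1_ref)   # these agree
print(E2, E2_ref)   # these agree
```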
{ "domain": "physics.stackexchange", "id": 63965, "tags": "electromagnetism, capacitance, gauss-law" }
Generalized Geography on graphs of bounded treewidth
Question: Generalized Geography (GG) is played on a directed graph where a token is moved along arcs alternatively by two players. The vertices from which the token leaves are deleted. When a player cannot play anymore he loses (and his opponent wins). GG is PSPACE-complete (see page 11, Figure 1 of "On the complexity of some two-person perfect-information games" of Schaefer) while its variant on undirected graphs is polynomial time solvable (see for instance here). In the technical report Complexity of Path Forming Games of Hans L. Bodlaender (1989), it is shown that GG becomes polynomial time solvable in graphs of bounded treewidth. I have two questions concerning that result: 1) Do you agree the algorithm is designed for undirected GG? (which is anyway in P for general graphs). Look at figure 4.1 for instance, plus the fact that the treewidth is not re-defined for directed graphs. 2) As mentioned by Saeed in a related post, "The problem is PSPACE-complete even on digraphs of directed tree width 1". Now, even if we consider (directed) GG, and the treewidth of the undirected graph obtained by changing each arc into an edge, it seems to be untrue that GG is easier with bounded treewidth: take any instance of QBF with bounded pathwidth (which remains PSPACE-complete, result of Atserias and Oliva), number the variables by their order of appearance in bags going from "left to right", and do the reduction of Schaefer (see also the link given by Saeed). The treewidth of the undirected version of the digraph produced by this reduction is bounded. No? In view of 1) and 2) why do we think that bounded treewidth makes GG easier? Answer: So, in short, the question is, since QBF is PSPACE-complete for bounded pathwidth formulas, why isn't Geography PSPACE-complete for bounded pathwidth graphs? I think the problem with this hardness argument is that the reduction from QBF to Geography by Schaefer does not preserve the pathwidth/treewidth of the formula. 
The reason is that, in Schaefer's construction, the variables must be added in the graph in the order in which they are quantified, NOT the order of the path decomposition. Hence, you cannot (in an obvious way) guarantee that the resulting digraph has small (undirected) treewidth.
{ "domain": "cstheory.stackexchange", "id": 3168, "tags": "cc.complexity-theory, graph-algorithms, time-complexity" }
Translational invariance implying diagonal representation in momentum space
Question: I have just come across something in my reading of Peskin and Schroeder that claims that because a function, in this particular case a two-point correlation function, is translationally invariant, it automatically has a diagonal momentum space representation. I am not seeing this relationship, and I was hoping someone could clarify this for me! Answer: This is just a property of Fourier transformations. If the correlation function is translational invariant then, by definition, the position space representation $D(x,y)$ transforms as $D(x+a,y+a) = D(x,y)$ for any constant $a$. Thus $D(x,y) = D(x-y,0)$ and so the correlator depends only on the difference $x-y$. For simplicity, we'll define $D(x-y) = D(x-y,0)$. Fourier transform this over both $x$ and $y$ and you'll find it diagonal in momentum space. For example, in one dimension, \begin{eqnarray} \tilde{D}(p,q) &\sim& \int dx dy e^{ipx + iqy} D(x-y)\\ &=& \int du dy e^{ipu} e^{i(p+q)y} D(u) \\ &\sim& \tilde{D}(p) \delta(p+q) \end{eqnarray} where I'm neglecting to keep track of the constants. The result is diagonal in momentum space by virtue of the delta-function, enforcing $q=-p$.
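The same statement can be checked numerically in a discrete setting: on a periodic lattice of $N$ sites, a kernel that depends only on $x-y$ is a circulant matrix, and conjugating it with the DFT matrix diagonalises it exactly, with the diagonal given by the Fourier transform of the kernel. A small sketch:

```python
# Discrete analogue: a translation-invariant kernel D[x, y] = f((x - y) mod N)
# is circulant, and the unitary DFT matrix diagonalises it.
import numpy as np

N = 8
rng = np.random.default_rng(0)
f = rng.normal(size=N)                      # arbitrary kernel values f(u)

x = np.arange(N)
D = f[(x[:, None] - x[None, :]) % N]        # depends on the difference only

# Unitary DFT matrix F[p, x] = exp(-2j*pi*p*x/N) / sqrt(N)
F = np.exp(-2j * np.pi * np.outer(x, x) / N) / np.sqrt(N)

D_momentum = F @ D @ F.conj().T             # momentum-space representation

off_diag = D_momentum - np.diag(np.diag(D_momentum))
print(np.max(np.abs(off_diag)))             # numerically zero: diagonal
```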
{ "domain": "physics.stackexchange", "id": 25195, "tags": "quantum-field-theory, momentum, conservation-laws, symmetry" }
gmapping: "Skipping XML Document..."
Question: Firing up an instance of pointcloud_to_laserscan_node (not the nodelet) on Indigo + Ubuntu gets [ERROR] [1509665358.223587616]: Skipping XML Document "/opt/ros/indigo/share/gmapping/nodelet_plugins.xml" which had no Root Element. This likely means the XML is malformed or missing. Two questions: Why would pointcloud_to_laserscan be looking for a file in slam_gmapping? Can we safely ignore this? Originally posted by Rick Armstrong on ROS Answers with karma: 567 on 2017-11-02 Post score: 0 Original comments Comment by gvdhoorn on 2017-11-03: May I suggest a topic title change? The error message specifically states that /opt/ros/../gmapping/nodelet_plugins.xml has a problem. pointcloud_to_laserscan is not mentioned anywhere. Comment by Rick Armstrong on 2017-11-28: Just saw this comment. Done. Answer: update: PR#56 was merged, new version of gmapping is in the shadow repository and should be available in the regular repositories after the next sync. This is probably ros-perception/slam_gmapping#55. Can you verify there is a nodelet_plugins.xml in the /opt/ros/indigo/share/gmapping directory? If not, that is the cause. The CMakeLists.txt is not install(..)ing that file (at least in the current version), which makes things work in a devel space, but not in an install space. As the binary pkgs contain only install(..)ed artefacts, nodelet_plugins.xml is missing from those. This seems to have been introduced in ros-perception/slam_gmapping#41, which added the nodelet version of gmapping. Why would pointcloud_to_laserscan be looking for a file in slam_gmapping? it isn't. It looks that way, but in reality the rospkg plugin infrastructure is just iterating over all ROS pkgs it knows and trying to load the nodelet_plugins.xml file that gmapping has listed in its package manifest. You just happen to be starting pointcloud_to_laserscan. Can we safely ignore this? Yes. If you're not intending to use the nodelet version of gmapping, you should be fine. 
Edit: I've submitted a PR to fix this: ros-perception/slam_gmapping#56. Note that gmapping will have to go through a full release cycle before the binaries on the repositories contain this fix. Originally posted by gvdhoorn with karma: 86574 on 2017-11-03 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 29265, "tags": "ros, pointcloud-to-laserscan, xml" }