Rate equation of first law of thermodynamics
Question: The mathematical expression for the first law of thermodynamics is: $\delta{Q} = dE + \delta{W}$ where E is the total stored energy of the system. One of my books mentions that we consider a time interval $\delta{t}$ during which an amount of heat $\delta{Q}$ crosses the control surface and an amount of work $\delta{W}$ is done by the control mass. So dividing the initial equation by $\delta{t}$, we get $\frac{\delta{Q}}{\delta{t}} = \frac{dE}{\delta{t}} +\frac{\delta{W}}{\delta{t}}$ Taking the limit of each of these quantities as $\delta{t}$ approaches zero, we have $\lim_{\delta{t}\to 0 }\frac{\delta{Q}}{\delta{t}} = \lim_{\delta{t}\to 0 } \frac{dE}{\delta{t}} + \lim_{\delta{t}\to 0 } \frac{\delta{W}}{\delta{t}}$ $\implies \frac{dQ}{dt} = \frac{dE}{dt} + \frac{dW}{dt}$ $\therefore \dot{Q} = \frac{dE}{dt} + \dot{W}$ My question: how can we assume the $\delta$ in $\delta{t}$ isn't the $\delta$ used for path-dependent variables? Is time path dependent then? And what is going on after we take the limits: how does $\delta$ change to $d$? Answer: $\dot q$ is the heating from the outside across the system boundary, $\dot w$ is the external work done by the system on the environment. Both $\dot q$ and $\dot w$ are directly measurable quantities that have no direct relationship to the so-called state variables of the system that absorbs the working (work rate) and heating ("heat" rate). When, say, $\dot q$ is integrated over time we get a finite amount of "heat" absorbed by the system, $\Delta Q = \int_{t_0}^{t_1}\dot q \, dt$, but here $\dot q$ is a function of time only and of nothing else. The 1st law can indeed be written as $\dot q = \frac{dE}{dt} + \dot w$ where $E$ is *the* internal energy of the system, and Truesdell and his followers always do it this way; see the subject under the heading *Rational Thermodynamics*. 
In fact, writing the 1st law in the differential (or better said infinitesimal) form as $\delta q = dE + \delta w$ is just a tacit acknowledgment that the fundamental quantities are heating not "heat", or working not "work". Of course if you prefer infinitesimals then you can always write it equivalently using $\delta q = \dot q dt$ or $\delta w =\dot w dt$. (The real fun starts when you combine the 1st law in the rate equation form with the 2nd law of thermodynamics in its rate equation form ...)
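Since $\dot q$ is a function of time only, the finite heat $\Delta Q = \int_{t_0}^{t_1}\dot q\,dt$ can be computed directly from the heating history, with no reference to any state variable. As a rough numerical sketch (the constant heating profile is a made-up example, not from the answer):

```python
def total_heat(q_dot, t0, t1, steps=100_000):
    """Integrate a heating rate q_dot(t) [W] over [t0, t1] [s]
    with the midpoint rule, giving the heat transferred in joules."""
    dt = (t1 - t0) / steps
    return sum(q_dot(t0 + (i + 0.5) * dt) * dt for i in range(steps))

# A constant 5 W of heating for 2 s transfers 10 J, regardless of
# what the system's state variables are doing in the meantime.
print(total_heat(lambda t: 5.0, 0.0, 2.0))
```

The midpoint rule here is just a stand-in for any quadrature; only the time history of $\dot q$ enters.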
{ "domain": "physics.stackexchange", "id": 78106, "tags": "thermodynamics" }
Identify this large (swamp) fly?
Question: Can anyone identify this large fly? I encountered it in Okefenokee Swamp (Georgia, U.S.A.) on May 1, 2015. Weather: 53-78°F (avg. 66°F), avg humidity = 60, dew point = 47°F. The fly was about the size of an adult thumb (more accurately approx. 5cm long and 2cm wide) -- this fly was HUGE! It was fairly docile and allowed me to take a good close-up or 2. Answer: Tabanus nigrovittatus is my initial guess: the "greenhead horse fly." Source This is probably the biggest give-away for the genus Tabanus (namely the features of the veins at the wing tip, denoted R4 and R5, versus other Diptera), and I am still searching out a good identification key to solidify the ID. Furthermore, since it is also known as the salt marsh greenhead, a swamp is a good candidate place to find these (1). What's particular here is that entomologists have gone through and collected many specimens, and data about their wing venation, and based on the Comstock-Needham system you can pin presence/absence of major veins, and branching of said veins, to specific types of insects. In this case, R refers to the Radius or 3rd longitudinal vein, and the number refers to the part of the branching that reaches the margin of the wing. Pg 156 here helps us ID Tabanus vs other flies. One problem with identifying these spp. is that you need to get a good look at the sternites on the underside as well, if you look at how they ID Tabanus sudeticus here, but the photos tend to match nicely with those of T. nigrovittatus you can widely find.
{ "domain": "biology.stackexchange", "id": 5039, "tags": "species-identification, zoology, entomology" }
Longest substrings of common length with the same parity
Question: Given two sequences $a$ and $b$, find the largest $x$ such that in $a$ there is a substring $A$ and in $b$ a substring $B$ meeting these conditions: the length of both $A$ and $B$ is equal to $x$; the sum of elements in $A$ has the same parity as the sum of elements in $B$. Lengths of $a$ and $b$ are up to $5\times10^5$, so a simple $O(n^2)$ solution won't do. Example: $a = [0, 1, 2, 3, 4, 5]$ $b = [3, 1, 3, 6]$ Answer: 3 (one of the possible solutions is $A = [2, 3, 4], B = [3, 1, 3]$). I've thought about it for hours and can't find a solution. How to do this in linear or linearithmic time complexity? I'm quite sure the problem can be simplified to, for each index $i$, storing only the sum of elements up to $i$ modulo $2$. However, it doesn't help me much. (The problem comes from a rather old Israeli book תכנות תחרותי: סביב אתגרים ('Competitive programming') by Mordechai Ben-Ari. The book isn't well known and I couldn't find any solution in Hebrew, so I translated the problem into English for a better chance of getting an answer.) Any help will be appreciated. Answer: Here is an $O(n\log^2 n)$ solution. The first step is to reduce to an easier problem: Given $x_1,\ldots,x_n \in \{0,1\}$, determine, for all $0 \leq i \leq n$ and $b \in \{0,1\}$, whether there is a contiguous subarray of length $i$ whose sum is equal to $b$ modulo 2. Applying this to both $a$ and $b$, we can solve the original problem in $O(n)$. The basic idea is to use divide and conquer. Given an array $x_1,\ldots,x_n$, divide it into two arrays $x_1,\ldots,x_m$ and $x_{m+1},\ldots,x_n$ of roughly equal size. Every contiguous subarray of $x_1,\ldots,x_n$ of length $i$ with parity $b$ has one of the following forms: Contiguous subarray of $x_1,\ldots,x_m$ of length $i$ with parity $b$. Contiguous subarray of $x_{m+1},\ldots,x_n$ of length $i$ with parity $b$. 
The concatenation of a subarray $x_{m-i_1+1},\ldots,x_m$ of parity $b_1$ with a subarray $x_{m+1},\ldots,x_{m+i_2}$ of parity $b_2$, where $i_1+i_2 = i$ and $b_1+b_2 \equiv b \pmod{2}$. We can determine whether subarrays of the first two forms exist, for all $i$ and $b$, by running the procedure recursively on the two halves. The interesting part is the third form. We start by determining the parity of all arrays of the form $x_{m-i_1+1},\ldots,x_m$ and of the form $x_{m+1},\ldots,x_{m+i_2}$ for all $i_1,i_2$, which takes time $O(n)$. Let $L_{b_1}$ be the set of $i_1$ such that $x_{m-i_1+1},\ldots,x_m$ has parity $b_1$, and define $R_{b_2}$ analogously. Using FFT (exercise), for each $b_1,b_2$ we can determine in $O(n\log n)$ the set $X_{b_1b_2}$ of $i$ such that there exist $i_1 \in L_{b_1}$ and $i_2 \in R_{b_2}$ summing to $i$. Given the sets $X_{b_1b_2}$ for all $b_1,b_2$, we can determine whether subarrays of the third form exist, for all $i$ and $b$. In total, denoting the running time of our algorithm by $T(n)$, we get the recurrence $$ T(n) = T(\lfloor n/2 \rfloor) + T(\lceil n/2 \rceil) + O(n\log n), $$ whose solution is $T(n) = O(n\log^2 n)$.
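For reference, here is a brute-force $O(n^2)$ sketch of the reduction described above (not the divide-and-conquer/FFT algorithm): it tabulates, for each length $i$, which parities are achievable by some contiguous subarray, then scans lengths downward for a common parity. Function names are my own.

```python
def parity_table(xs):
    """For each length i, the set of parities b achievable by some
    contiguous subarray of xs of length i (brute force, O(n^2))."""
    n = len(xs)
    pre = [0]  # prefix sums modulo 2
    for x in xs:
        pre.append((pre[-1] + x) % 2)
    table = {i: set() for i in range(n + 1)}
    for i in range(1, n + 1):
        for s in range(n - i + 1):
            table[i].add((pre[s + i] - pre[s]) % 2)
    return table

def longest_common_parity(a, b):
    """Largest x such that a and b each contain a length-x contiguous
    subarray, and the two sums have the same parity."""
    ta, tb = parity_table(a), parity_table(b)
    for i in range(min(len(a), len(b)), 0, -1):
        if ta[i] & tb[i]:
            return i
    return 0

print(longest_common_parity([0, 1, 2, 3, 4, 5], [3, 1, 3, 6]))  # matches the example
```

This reproduces the example's answer of 3 and can serve as a correctness oracle when implementing the fast algorithm.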
{ "domain": "cs.stackexchange", "id": 12537, "tags": "algorithms, strings, efficiency, substrings, algorithm-design" }
How do astronomers estimate the redshift of galaxy clusters?
Question: I don't understand how astronomers estimate the redshift of a cluster. As far as I understand, a cluster of galaxies is something really "big", so I expect that different galaxies in the cluster have different redshifts. Is the redshift of the cluster some average of the redshifts of the galaxies in the cluster? Answer: Yes, it is the average redshift of the galaxies that belong to a cluster. There is of course an uncertainty in that, but a typical velocity dispersion among galaxies in a massive galaxy cluster is $\sigma_v \sim 1000$ km/s. So the redshift error due to the uncertainty in the mean is $$\Delta z \sim \frac{\Delta v}{c} = \frac{\sigma_v}{c\sqrt{n}}= \frac{0.0033}{\sqrt{n}},$$ where $n$ is the number of galaxies in the cluster with a measured redshift.
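A sketch of the arithmetic in the answer (the function name is my own; $\sigma_v = 1000$ km/s as quoted):

```python
import math

C_KMS = 299_792.458  # speed of light in km/s

def cluster_redshift(zs, sigma_v_kms=1000.0):
    """Mean redshift of the member galaxies and the error on that mean,
    Delta z ~ sigma_v / (c * sqrt(n))."""
    n = len(zs)
    z_mean = sum(zs) / n
    dz = sigma_v_kms / (C_KMS * math.sqrt(n))
    return z_mean, dz
```

Note that $\sigma_v/c = 1000/299792.458 \approx 0.0033$, reproducing the $0.0033/\sqrt{n}$ figure; with $n = 100$ measured members the uncertainty is already down to a few times $10^{-4}$.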
{ "domain": "physics.stackexchange", "id": 69023, "tags": "cosmology, astronomy, space-expansion, galaxies, redshift" }
Methods / Algorithms for rank scales based on cumulative scoring
Question: Say you have an organization that requires employees to participate in a Q&A site similar to StackOverflow - questions and answers are voted upon, selected answers get extra points, certain behaviors boost your score etc. What we need to do is assign a rating from 1-100 to these users with even distribution. The behaviors that add points: Ask a question [fixed] Answer a question [fixed] Receive an upvote on a question [determined by relative ranking] Receive an upvote on an answer [determined by relative ranking] Have your answer selected [determined by relative ranking] Respond to a comment, etc. [fixed] Likewise, there are behaviors that subtract points. If a user with a high ranking upvotes a question asked by a lower-ranking user, more points should be awarded than in the inverse situation. Likewise if a lower-ranking user downvotes a higher-ranking user's question, the impact should be minimal compared to the inverse. There should be a limit to this impact though, so that a high-ranking user doesn't unintentionally destroy any momentum of a low-ranking user by issuing a powerful downvote. We have a few challenges here: How do we determine how many points to assign to each type of behavior, with actor/recipient relative rank taken into account? I'm thinking we just assign a flat number to each behavior, that number decided relative to the importance of the other behaviors, and then have a variable score that can alter the score if there is a wide variance between the users. The mechanics of this - does the score double at most? - are unclear. How do we assign this rank? This one is a little easier - I'm thinking we just order the users according to score and then split the dataset into 100 sections, assigning each "chunk" a number 1-100. Should we be worried about these numbers getting "very big"? 
The scenario described above has been trivialized; actions taken by these users may happen hundreds of times per day, so the scores can become very high, very quickly. Is there a way we can keep this under control while avoiding a large number of duplicate scores? How do we define the "fixed" scores as the total scores become very big? Over time we may have users with hundreds of thousands of points - but the fixed-score behaviors should still reward them. They should reward lower-ranking users more than higher-ranking users. I don't know if there are some standard practices, algorithms, or terminologies that I should be aware of when facing a problem like this - any input would be appreciated. Answer: To solve challenges #3 and #4, let's limit the overall available rank volume. For example, the sum of this rank over all the users will be 1 (100%). From challenge #2 I understood that you accept 2 different ranks: (1) a place from 1 to 100, and (2) a simple sum of all earned points (fixed and relative). Did I get it right? If so, there is no need to worry about unlimited growth or fixed-score inflation. Let's just use percentages, not 1-100 ranks. These percentage ranks could be calculated based on interaction behaviors (vote/selecting answer/etc), using a PageRank-like algorithm. Such an algorithm will consider all previous reactions (and the ranks of the acting users) received by a given user. Unfortunately, you cannot use the PageRank algorithm "as is", because it supports only "positive" links, but you can look for its extensions. For example, look at this paper with a PageRank extension for both positive and negative links (as users can down vote). You can iteratively estimate the percentage rank (TrustRank, TR) using this algorithm. The second task is to calculate the reward/penalty rate in points for each single action. Let's determine (predefine) a maximal reward/penalty rate (X) for each type of action. 
We will then use a coefficient to discount it, based on the TrustRanks of the acting users (e.g., author and voter). A slightly modified sigmoid will map this ratio from the [-Inf,+Inf] range to [0,1]. Here, for peer users you will have ~0.5 of the predefined maximal rate. If the "voter" has a TR twice that of the "author", the "author" will receive ~0.75 of the predefined value, and so on. You can tune the steepness with an additional parameter, or try to find some other mapping transformation function. Anyway, now simply multiply the maximal penalty/reward by this coefficient, and you'll get the number of points you need to deduct or add. The only issue I see is a user with zero TR - such a user as a voter will "give" nothing, and as an object of voting will receive the maximal amount of points regardless of the voter's rank. To avoid this, you can predefine a minimal TR (like 1e-10), and not let a user's TR fall below this value.
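A minimal sketch of the mapping described above, assuming the "[-Inf,+Inf] ratio" is the log of the TR ratio fed through a logistic sigmoid (function names, the log base, and the default steepness are my own choices; with these defaults, peers get exactly 0.5 of the maximal rate and a voter with twice the author's TR yields about 0.73, close to the ~0.75 quoted):

```python
import math

def reward_points(max_points, voter_tr, author_tr, min_tr=1e-10, steepness=1.0):
    """Discount a predefined maximal reward by a sigmoid of the
    log-ratio of the two users' TrustRanks."""
    v = max(voter_tr, min_tr)   # clamp zero-TR users, as the answer suggests
    a = max(author_tr, min_tr)
    x = steepness * math.log2(v / a)   # 0 for peers, +/-Inf in the extremes
    coef = 1.0 / (1.0 + math.exp(-x))  # logistic sigmoid: maps to (0, 1)
    return max_points * coef
```

The sigmoid's symmetry also caps the damage a high-ranking downvoter can do: the coefficients for the two directions of any pair always sum to 1.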
{ "domain": "datascience.stackexchange", "id": 1269, "tags": "data-mining, statistics, algorithms, distribution" }
React app that receives data via AJAX requests, displays the data and makes it searchable
Question: I have recently started using React and wish to know if my code adheres to the React coding style, whether I am following the right approach towards solving the problem, and whether there is any way I can make the code better. var React = require('react'); var Bootstrap = require('react-bootstrap'); var xhr = require('superagent'); var Input = Bootstrap.Input; var Panel = Bootstrap.Panel; var Polls; var View = React.createClass({ getInitialState: function() { return { selectedPoll: '', query: '' }; }, componentDidMount: function() { var self = this; xhr.get('/polls').end(function(err, res) { if(err) { console.log(err); } else { Polls = res.body; if(self.isMounted()){ self.setState({ selectedPoll: res.body, query: '' }); } } }); }, _onchange: function(value) { var result = []; (Polls).map(function(val) { if(val.uuid.toLowerCase().indexOf(value) !== -1 || val.question_text.toLowerCase().indexOf(value) !== -1) { result.push(val); } }); this.setState({ query: value, selectedPoll: result }); }, render:function() { return ( <div id="Container"> <div id="Sidebar"> <SearchBar change={this._onchange} val={this.state.query} /> <PollList data={this.state.selectedPoll} /> </div> <div id="PollWindow"> <PollView /> </div> </div> ); } }); var SearchBar = React.createClass({ doSearch: function(e) { this.props.change(e.target.value.toLowerCase()); }, render: function() { return ( <Input type="text" value={this.props.val} onChange={this.doSearch} /> ) } }); var PollList = React.createClass({ render: function() { var arr = []; if(this.props.data !== ''){ this.props.data.map(function(value) { var head = value.uuid; var body = value.question_text; arr.push(<div key={head}><Panel header={head} key={head}>{body}</Panel></div>); }); } return ( <div>{arr}</div> ); } }); var PollView = React.createClass({ render: function() { return ( <p>Dummy space</p> ); } }); module.exports = View; Answer: Looks pretty idiomatic :) The one major thing you might want to do is make use of propTypes to more 
explicitly define your component APIs. PropTypes help set expectations for what values should/can be. For instance, in SearchBar your doSearch() method looks like this.props.change(e.target.value.toLowerCase()); There is an implicit assumption that change is never going to be null, and that it's a function. You can codify that (and guard against dumb mistakes) by adding propTypes: { change: React.PropTypes.func.isRequired } Alternatively, you can leave off isRequired and provide a default value for change (perhaps a noop function). In PollList this line is odd (you also do this in _onchange of View): this.props.data.map(function(value) { var head = value.uuid; var body = value.question_text; arr.push(<div key={head}><Panel header={head} key={head}>{body}</Panel></div>); }); .map() already returns a new array, so there is no need to push into arr; either just return the child elements or use a forEach()
{ "domain": "codereview.stackexchange", "id": 12926, "tags": "javascript, react.js" }
package moveit_simple_grasps in kinetic
Question: Hey everyone, I have written a small plugin for ROS after this tutorial: http://wiki.ros.org/pluginlib/Tutorials/Writing%20and%20Using%20a%20Simple%20Plugin Now I have already installed a lot of packages so that I can do a simple catkin_make in the ws, but now I am getting this error: -- Could not find the required component 'moveit_simple_grasps'. The following CMake error indicates that you either need to install the package with the same name or change your environment so that it can be found. CMake Error at /opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:83 (find_package): Could not find a package configuration file provided by "moveit_simple_grasps" with any of the following names: moveit_simple_graspsConfig.cmake moveit_simple_grasps-config.cmake Now I have already installed this package via apt: % sudo apt-get install ros-kinetic-simple-grasping but I am getting the same error as mentioned before. I have only found the requested package for hydro and indigo... can someone tell me what package I should install to get my tutorial compiled? Thx in advance! :-) BL Originally posted by blightzyear on ROS Answers with karma: 3 on 2018-03-05 Post score: 0 Original comments Comment by blightzyear on 2018-03-12: Can nobody help me? ;-( Answer: Now I have already installed this package via apt: % sudo apt-get install ros-kinetic-simple-grasping but I am getting the same error as mentioned before. The package you installed is probably not the correct one: note how the error message mentions moveit_simple_grasps, not simple-grasping. The former may be found here: moveit_simple_grasps. The latter here: simple_grasping. Originally posted by gvdhoorn with karma: 86574 on 2018-03-12 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by blightzyear on 2018-03-13: Thank you for the info! :-)
{ "domain": "robotics.stackexchange", "id": 30210, "tags": "ros, ros-kinetic, ubuntu" }
What actually are virtual particles?
Question: What actually are virtual particles? In various places around physics SE, documentaries and occasional news headlines, I see the term "virtual particles", normally virtual photons. I have tried researching it, but I'm not at a level of understanding yet to be able to grasp what's going on. If someone could explain it in a simple manner, that would be great. Answer: This is the table of particles on which the standard model of elementary particle physics is founded: These particles are completely and uniquely characterized by their mass and quantum numbers, like spin, flavour, charge... The standard model is a mathematical model based on a Lagrangian which contains the interactions of all these particles, and it is framed in the four dimensions of special relativity. This means that the mass of each particle, called the rest mass (because it is the invariant mass the particle has in its rest frame), is given in terms of energy and momentum by: $$m_0^2c^2 = \left(\frac Ec\right)^2 - ||\mathbf p||^2$$ in natural units where $c= 1,$ $$m_0^2 = E^2 -||\mathbf p||^2$$ The standard model Lagrangian allows the calculation of cross-sections and lifetimes for elementary particles and their interactions, using Feynman diagrams, which are an iconic representation of complicated integrals: Only the external lines are measurable and observable in this model, and the incoming and outgoing particles are on the mass shell. The internal lines in the diagrams carry only the quantum numbers of the exchanged named particle, in this example a virtual photon. These "photons", instead of having a mass of zero as they do when measured/observed, have a varying mass imposed by the integral under which they have "existence". The function of the virtual line is to keep the quantum number conservation rules and help as a mnemonic. 
It does not represent a "particle" that can be measured, but a function necessary for the computation of cross-sections and lifetimes according to the limits of integration entering the problem under study. p.s. my answer to this other question might be relevant in framing what a particle is.
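The on-shell/off-shell distinction in the answer is just the value of $m_0^2 = E^2 - \lVert\mathbf p\rVert^2$: zero for a real photon, anything else (even negative) for a virtual line. A trivial numerical sketch in natural units (the function name is my own):

```python
def mass_shell_sq(E, p):
    """m0^2 = E^2 - |p|^2 in natural units (c = 1), for a particle of
    energy E and momentum 3-vector p. Real photons give exactly 0;
    internal (virtual) lines can give any value, including negative."""
    return E * E - sum(pi * pi for pi in p)

print(mass_shell_sq(2.0, (2.0, 0.0, 0.0)))  # on-shell massless particle
print(mass_shell_sq(1.0, (2.0, 0.0, 0.0)))  # spacelike "virtual" exchange
```

A negative value corresponds to spacelike momentum transfer, which no free particle can carry — one way to see why the internal line "does not represent a particle that can be measured".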
{ "domain": "physics.stackexchange", "id": 70542, "tags": "virtual-particles" }
Project Euler #2 (Even Fibonacci numbers) in Swift
Question: I figured working through Project Euler problems in Swift would be a good way to learn any tips or tricks. For example, tuples are something I'm not used to using, but they proved useful here. Using things makes me more confident with them, so perhaps this will help me find other uses for them in the future. Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be: 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ... By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms. Here is my solution: func swap(inout left: Any, inout right: Any) { let swap = left left = right right = swap } func findNextEvenFibonacci(var firstFibonacci: Int = 0, var secondFibonacci: Int = 1) -> (Int, Int) { do { firstFibonacci += secondFibonacci swap(&firstFibonacci, &secondFibonacci) } while firstFibonacci % 2 != 0 return (firstFibonacci, secondFibonacci) } let maxFibonacci: Int = 4_000_000 var fibonacci: (first: Int, next: Int) = (0,1) var sumOfEvenFibonnaccis: Int = 0 while fibonacci.next < maxFibonacci { fibonacci = findNextEvenFibonacci(firstFibonacci: fibonacci.first, secondFibonacci: fibonacci.next) sumOfEvenFibonnaccis += fibonacci.first } let answer = sumOfEvenFibonnaccis Answer: The following is similar to Flambino's approach, but creates a Swift SequenceType so that you can use the Swift library functions filter() and reduce() to iterate over the elements: struct FibonacciSequence : SequenceType { let upperBound : Int func generate() -> GeneratorOf<Int> { var current = 1 var next = 1 return GeneratorOf<Int>() { if current > self.upperBound { return nil } let result = current current = next next += result return result }; } } let fibseq = lazy(FibonacciSequence(upperBound: 4_000_000)) let sum = reduce(fibseq.filter { $0 % 2 == 0 }, 0) { $0 + $1 } See here for more information about sequences and generators. 
This nice application made me actually understand how you can create your own sequences. lazy() creates a LazySequence, whose filter() method returns the elements "on demand", e.g. as needed in the summation. The (non-lazy) filter() function, in contrast, would return an array of all even Fibonacci numbers before starting the summation. (Kudos to @jtbandes for his suggestion). Update: Due to major changes in the Swift programming language, the above code does not compile anymore with the current Swift 2.1/Xcode 7. Here is an updated version for anybody's convenience: struct FibonacciSequence : SequenceType { let upperBound : Int func generate() -> AnyGenerator<Int> { var current = 1 var next = 1 return anyGenerator { if current > self.upperBound { return nil } let result = current current = next next += result return result }; } } let fibseq = FibonacciSequence(upperBound: 4_000_000).lazy let sum = fibseq.filter { $0 % 2 == 0 }.reduce(0) { $0 + $1 }
{ "domain": "codereview.stackexchange", "id": 9233, "tags": "programming-challenge, fibonacci-sequence, swift" }
Tree Utility in Python
Question: I wrote this piece of code to mimic the *nix tree utility, however, I am not happy with the pad_info=[] part. It's used to make padding according to whether the parents were the last child of their parents or not. Is there any alternate (and a more elegant) way to do this? import os def tree(path, depth=1, max_depth=100, print_hidden=False, pad_info=[]): '''Print contents of directories in a tree-like format By default, it prints up to a depth of 100 and doesn't print hidden files, ie, files whose name begin with a '.' returns number of files, number of directories it encountered ''' fileCount, dirCount = 0, 0 files = sorted(os.listdir(path), key=lambda s: s.lower()) if not print_hidden: files = [os.path.join(path, x) for x in files if not x.startswith('.')] for i, file in enumerate(files): padding = ['| ' if x == 'nl' else ' ' for x in pad_info] padding = ''.join(padding) filename = os.path.basename(file) is_last = i == len(files) - 1 prefix = '`-- ' if is_last else '|-- ' print '%s%s%s' % (padding, prefix, filename) if os.path.isdir(file): dirCount += 1 new_pad_info = pad_info + (['l'] if is_last else ['nl']) fc, dc = tree(os.path.join(path, file), depth=depth+1, max_depth=max_depth, print_hidden=print_hidden, pad_info=new_pad_info) fileCount += fc dirCount += dc else: fileCount += 1 return fileCount, dirCount Answer: Some minor style and code comments: Be consistent in using snake_case for variable names – Most names are good, but then a fileCount and dirCount sneak in. Also try to avoid using abbreviations like fc and dc Avoid building lists twice – You first build the list of files from os.listdir(), and then rebuild it using files in combination with print_hidden. This can be avoided using a list comprehension with an if statement like [a for a in a_list if a == something] Avoid one-time intermediate variables – If you use a variable just once, it can be argued that it is better to just use the expression directly. 
In your code this applies to padding, filename and prefix Avoid hiding predefined variables and functions – Don't use reserved words like file in your code, which hides the original file. When supplying all variables, you don't need to prefix them by name – In your recursive call to tree you prefix them, which is not needed as the order and presence are already stated. Instead of storing text, store a boolean in your pad_info – To avoid string comparisons and storage, it makes more sense to store a boolean value for whether the directory is the last in the current directory or not. This can further simplify your code by using a preset list to choose which text to be displayed directly, as True = 1 and False = 0 when used in a list context. Maybe a personal preference, but use """ for docstrings – If you are consistently using """ for docstrings, this allows for intermediate ' in the text, as well as enabling you to comment out larger portions of code securely using the other option of '''. Add option to remove output – As your code returns the file and directory counts, it could be viable to foresee a situation where you are not interested in actually viewing the output. This could be added as another option to your function. Use internal function to hide inner implementation details - It could be argued that this is a good candidate for using a function within the function. This internal function could be the one actually handling the recursion, and only having the parameters path and depth. In other words, you could hide some of the inner workings of your function using an inner function, and let the outer function be a simple call to the inner function doing the work for you. Notice also that the inner function has full access to the outer function's variables, so there is no need to pass them around (or treat them as globals). BUG: Your code doesn't respect the max_depth variable – You don't have anything bailing out if the depth is too high... 
Feature: No marking of empty directories – If max_depth is reduced or a directory is empty, there is no marking to indicate that the current entry is actually a directory. My refactored code Here is the code when applying most of these comments to it: import os FOLDER_PATTERN = ['| ', ' '] FILE_PATTERN = ['|-- ', '`-- '] def tree(path, do_output=True, print_hidden=False, max_depth=100): """Print file and directory tree starting at path. By default, it prints up to a depth of 100 and doesn't print hidden files, ie. files whose name begin with a '.'. It can be modified to only return the count of files and directories, and not print anything. Returns the tuple of number of files and number of directories """ def _tree(path, depth): file_count, directory_count = 0, 0 files = sorted((os.path.join(path, filename) for filename in os.listdir(path) if print_hidden or not filename.startswith('.')), key=lambda s: s.lower()) files_count = len(files) for i, filepath in enumerate(files, start = 1): # Print current file, based on previously gathered info if do_output: print('{}{}{}'.format( ''.join(FOLDER_PATTERN[folder] for folder in parent_folders), FILE_PATTERN[i == files_count], os.path.basename(filepath))) # Recurse if we find a new subdirectory if os.path.isdir(filepath) and depth < max_depth: # Append whether current directory is last in current list or not parent_folders.append(i == files_count) # Print subdirectory and get numbers subdir_file_count, subdir_directory_count = \ _tree(os.path.join(filepath), depth+1) # Back in current directory, remove the newly added directory parent_folders.pop() # Update counters file_count += subdir_file_count directory_count += subdir_directory_count + 1 elif os.path.isdir(filepath): directory_count += 1 else: file_count += 1 return file_count, directory_count parent_folders = [] return _tree(path, 1)
{ "domain": "codereview.stackexchange", "id": 17149, "tags": "python, file-system" }
Exterior and Covariant Derivatives
Question: Is the following guaranteed to be true for any covariant vector $f_\mu$ (1-form $\boldsymbol{f}$) in the absence of torsion? $$\nabla_{[\alpha}\nabla_{\beta}f_{\mu]}=\partial_{[\alpha}\partial_{\beta}f_{\mu]}=\boldsymbol{d}\boldsymbol{d}\boldsymbol{f}=0,$$ where $\nabla_{\alpha}$ is the covariant derivative, $\partial_{\beta}$ is the partial derivative, and $\boldsymbol{d}$ is the exterior derivative, and brackets in the subscript means antisymmetrization. $\partial_{[\alpha}\partial_{\beta}f_{\mu]}=\boldsymbol{d}\boldsymbol{d}\boldsymbol{f}$ is just the definition of $\boldsymbol{d}$ and is everywhere in textbooks, and the fact that it is zero is also all over the place. So my real question is: Is the following always true? $\nabla_{[\alpha}\nabla_{\beta}f_{\mu]}=\partial_{[\alpha}\partial_{\beta}f_{\mu]}$ Answer: I think you need the first Bianchi identity (which holds for torsion-free connections). Your antisymmetrized expression is the LHS of $$ [\nabla_a, \nabla_b] f_c+ [\nabla_b, \nabla_c] f_a+[\nabla_c, \nabla_a] f_b =- f_d({R^d}_{cab}+ {R^d}_{abc}+{R^d}_{bca})\\\qquad\qquad =0 $$ I think my minus sign is correct for the covariant case of the commutator/curvature expression. Added comment: Actually your derivation using $d^2=0$ is also quite correct --- so my first Bianchi identity route can be turned around to give a neat (and previously unknown to me) proof of the first Bianchi identity for torsion-free connections.
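Spelling out the step connecting the antisymmetrized second derivative to the cyclic commutator sum (with the sign convention $[\nabla_a,\nabla_b]f_c = -{R^d}_{cab}f_d$ used in the answer):

```latex
3!\,\nabla_{[a}\nabla_{b}f_{c]}
  = [\nabla_a,\nabla_b]f_c + [\nabla_b,\nabla_c]f_a + [\nabla_c,\nabla_a]f_b
  = -\bigl({R^d}_{cab} + {R^d}_{abc} + {R^d}_{bca}\bigr)\,f_d
```

The six permutations in the antisymmetrization pair up into the three commutators because, for a torsion-free connection, the commutator acting on a 1-form produces only curvature terms (no leftover $\nabla f$ term), and the result vanishes by the first Bianchi identity ${R^d}_{[abc]} = 0$.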
{ "domain": "physics.stackexchange", "id": 51306, "tags": "homework-and-exercises, general-relativity, differential-geometry, tensor-calculus, differentiation" }
VBA introspection library for SQLite
Question: SQLiteDB VBA Library is a set of VBA helper functions for the SQLite engine. The primary motivation behind this project is to provide convenient access to extended introspection features. Because of its generality, ADODB/ADOX libraries provide only limited metadata information. The Introspection subpackage of this library relies on the generic SQL querying mechanism and specialized SQL queries. It facilitates access to complete information about both the features of the active engine used and objects/attributes of the attached database. The library uses the ADODB package and relies on the Christian Werner's SQLiteODBC driver (I use a custom compiled binary that embeds a recent version of SQLite, as described here). Class diagram Note, this post only covers the core functionality. Further documentation is available on GitHub, and complete source code, tests, and examples are available from the project repository. The three classes at the top of the diagram form the Introspection subpackage responsible for metadata-related SQL code, with SQLiteSQLDbInfo being the top-level object. It implements a portion of the functionality, proxies functionality provided by SQLiteSQLDbIdxFK, and encapsulates SQLiteSQLEngineInfo. SQLiteSQLDbInfo '@Folder "SQLiteDB.Introspection" '@ModuleDescription "SQL queries for retrieving SQLite database metadata." 
'@PredeclaredId '@Exposed '@IgnoreModule ProcedureNotUsed Option Explicit Private Type TSQLiteSQLDbInfo Schema As String Engine As SQLiteSQLEngineInfo End Type Private this As TSQLiteSQLDbInfo Private Sub Class_Initialize() this.Schema = "main" Set this.Engine = SQLiteSQLEngineInfo End Sub Private Sub Class_Terminate() Set this.Engine = Nothing End Sub '''' @ClassMethodStrict '''' This method should only be used on the default instance '''' '@DefaultMember '@Description "Default factory" Public Function Create(Optional ByVal Schema As String = "main") As SQLiteSQLDbInfo Dim Instance As SQLiteSQLDbInfo Set Instance = New SQLiteSQLDbInfo Instance.Init Schema Set Create = Instance End Function Public Sub Init(ByVal Schema As String) this.Schema = Schema End Sub '@Description "Exposes SQLiteSQLEngineInfo introspection queries" Public Property Get Engine() As SQLiteSQLEngineInfo Set Engine = this.Engine End Property '@Description "Generates a query returning the list of attached databases" Public Property Get Databases() As String Databases = "SELECT name, file FROM pragma_database_list" End Property '''' @Proxy '@Description "Generates a query returning all non-system database objects." 
Public Function GetDbSchema(Optional ByVal Schema As String = vbNullString) As String GetDbSchema = SQLiteSQLDbIdxFK.DbSchema(IIf(Len(Schema) > 0, Schema, this.Schema)) End Function '''' @Proxy '@Description "Generates a query returning all non-system database objects, but triggers" Public Function DbSchemaNoTriggers(Optional ByVal Schema As String = vbNullString) As String DbSchemaNoTriggers = SQLiteSQLDbIdxFK.DbSchemaNoTriggers(IIf(Len(Schema) > 0, Schema, this.Schema)) End Function '''' @Proxy '@Description "Generates a query returning triggers" Public Function Triggers(Optional ByVal Schema As String = vbNullString) As String Triggers = SQLiteSQLDbIdxFK.Triggers(IIf(Len(Schema) > 0, Schema, this.Schema)) End Function '''' For some reason, running SELECT * FROM <schema>.pragma_integrity_check '''' with several attached databases gives the result as if <schema> is '''' ignored and all attached databases are checked. Prefer to run this '''' check when the only attached database is the one being checked. '@Description "Generates a query running integrity check." Public Property Get CheckIntegrity() As String CheckIntegrity = "SELECT * FROM pragma_integrity_check" End Property '''' For some reason, running SELECT * FROM <schema>.pragma_foreign_key_check '''' with several attached databases gives the result as if <schema> is '''' ignored and all attached databases are checked. Prefer to run this '''' check when the only attached database is the one being checked. '@Description "Generates a query running integrity check." Public Property Get CheckFKs() As String CheckFKs = "SELECT * FROM pragma_foreign_key_check" End Property '''' @Proxy '@Description "Generates a query returning database tables." 
Public Function Tables(Optional ByVal Schema As String = vbNullString) As String Tables = SQLiteSQLDbIdxFK.Tables(IIf(Len(Schema) > 0, Schema, this.Schema)) End Function '''' @Proxy '@Description "Generates a query returning all foreing keys in the SQLite database" Public Property Get ForeingKeys() As String ForeingKeys = SQLiteSQLDbIdxFK.ForeingKeys(this.Schema) End Property '''' @Proxy '@Description "Generates a query returning all indices in the SQLite database" Public Function Indices(Optional ByVal NonSys As Boolean = True) As String Indices = SQLiteSQLDbIdxFK.Indices(this.Schema, NonSys) End Function '''' @Proxy '''' See the called class for details '@Description "Generates a query returning child columns for all foreing keys and corresponding indices." Public Property Get FKChildIndices() As String FKChildIndices = SQLiteSQLDbIdxFK.FKChildIndices(this.Schema) End Property '''' @Proxy '''' See the called class for details '@Description "Generates a query returning similar indices." Public Property Get SimilarIndices() As String SimilarIndices = SQLiteSQLDbIdxFK.SimilarIndices(this.Schema) End Property '@Description "Generates a query returning table's columns." Public Function TableColumns(ByVal TableName As String) As String Guard.EmptyString TableName TableColumns = "SELECT * " & _ "FROM " & this.Schema & ".pragma_table_xinfo('" & TableName & "')" End Function '@Description "Generates a query returning table's columns with placeholder columns." Public Function TableColumnsEx(ByVal TableName As String) As String Guard.EmptyString TableName TableColumnsEx = "SELECT * , 0 AS [unique], '' as [check], '' as [collate] " & _ "FROM " & this.Schema & ".pragma_table_info('" & TableName & "')" End Function '@Description "Generates a query returning table's SQL." 
Public Function TableSQL(ByVal TableName As String) As String Guard.EmptyString TableName TableSQL = "SELECT sql " & _ "FROM sqlite_master " & _ "WHERE type = 'table' AND name = '" & TableName & "'" End Function '@Description "Generates a query returning table's foreign keys." Public Function TableForeingKeys(ByVal TableName As String) As String TableForeingKeys = "SELECT * " & _ "FROM " & this.Schema & ".pragma_foreign_key_list('" & TableName & "')" End Function SQLiteSQLDbIdxFK Bulky code related to database indices and foreign keys goes into a separate module. '@Folder "SQLiteDB.Introspection" '@ModuleDescription "SQL queries for retrieving detailed information on database indices and foreign keys." '@PredeclaredId '''' '''' Logically, this module is a part of SQLiteSQLDbInfo, and this FK/IDX code is '''' placed in a separate module simply to isolate the large amount of SQL code. '''' All methods of this module are exposed by SQLiteSQLDbInfo via composition. '''' This class is not supposed to be used directly, and it does not need to be '''' instantiated: all functionality can be used via the default instance. '''' Option Explicit '''' @ClassMethod '''' This method can also be used on the default instance '''' '''' Generates an SQLite query returning database tables, skipping '''' system tables (prefixed with "sqlite_") and ordering by ROWID '''' (in order of creation). If requested, a CTE WITH term is '''' generated. 
'''' '''' Args: '''' Schema (string, optional, "main"): '''' Schema name/alias '''' CTEWITH (boolean, optional, False): '''' If True, format as a CTE WITH term '''' '''' Returns: '''' String, containing the query '''' '''' Examples: '''' >>> ?SQLiteSQLDbIdxFK.Tables '''' SELECT name, sql '''' FROM main.sqlite_master '''' WHERE type = 'table' AND (name NOT LIKE 'sqlite_%') '''' ORDER BY ROWID ASC '''' '''' >>> ?SQLiteSQLDbIdxFK.Tables(, True) '''' t AS ( '''' SELECT name, sql '''' FROM main.sqlite_master '''' WHERE type = 'table' AND (name NOT LIKE 'sqlite_%') '''' ORDER BY ROWID ASC '''' ) '''' '@Description "Generates a query returning database tables." Public Function Tables(Optional ByVal Schema As String = "main", _ Optional ByVal CTEWITH As Boolean = False) As String Dim Indent As String Dim Query As String Indent = IIf(CTEWITH, " ", vbNullString) Query = Indent & Join(Array( _ "SELECT tbl_name, sql", _ "FROM " & Schema & ".sqlite_master", _ "WHERE type = 'table' AND (name NOT LIKE 'sqlite_%')", _ "ORDER BY ROWID ASC" _ ), vbNewLine & Indent) Tables = IIf(CTEWITH, "t AS (" & vbNewLine & Query & vbNewLine & ")", Query) End Function '''' @ClassMethod '''' This method can also be used on the default instance '''' '''' Generates an SQLite query returning database views ordered by ROWID '''' (in order of creation). '''' '''' Args: '''' Schema (string, optional, "main"): '''' Schema name/alias '''' '''' Returns: '''' String, containing the query '''' '''' Examples: '''' >>> ?SQLiteSQLDbIdxFK.Views '''' SELECT tbl_name, sql '''' FROM main.sqlite_master '''' WHERE type = 'view' '''' ORDER BY ROWID ASC '''' '@Description "Generates a query returning database views." 
Public Function Views(Optional ByVal Schema As String = "main") As String Views = Join(Array( _ "SELECT tbl_name, sql", _ "FROM " & Schema & ".sqlite_master", _ "WHERE type = 'view'", _ "ORDER BY ROWID ASC" _ ), vbNewLine) End Function '''' @ClassMethod '''' This method can also be used on the default instance '''' '''' Generates an SQLite query returning database triggers ordered by ROWID '''' (in order of creation). '''' '''' Args: '''' Schema (string, optional, "main"): '''' Schema name/alias '''' '''' Returns: '''' String, containing the query '''' '''' Examples: '''' >>> ?SQLiteSQLDbIdxFK.Triggers '''' SELECT tbl_name, sql '''' FROM main.sqlite_master '''' WHERE type = 'trigger' '''' ORDER BY ROWID ASC '''' '@Description "Generates a query returning database triggers." Public Function Triggers(Optional ByVal Schema As String = "main") As String Triggers = Join(Array( _ "SELECT tbl_name, sql", _ "FROM " & Schema & ".sqlite_master", _ "WHERE type = 'trigger'", _ "ORDER BY ROWID ASC" _ ), vbNewLine) End Function '''' @ClassMethod '''' This method can also be used on the default instance '''' '''' Generates an SQLite query returning all non-system database objects '''' ordered by type (tables, indices, views, triggers) and then by ROWID. '''' The query returns two columns (sql, type_id). '''' '''' Args: '''' Schema (string, optional, "main"): '''' Schema name/alias '''' '''' Returns: '''' String, containing the query '''' '''' Examples: '''' >>> ?SQLiteSQLDbIdxFK.DbSchema '''' SELECT sql, (CASE type '''' WHEN 'table' THEN 0 '''' WHEN 'index' THEN 1 '''' WHEN 'view' THEN 3 '''' ELSE 4 '''' END) AS type_id '''' FROM main.sqlite_master '''' WHERE name NOT like 'sqlite_%' '''' ORDER BY type_id, _ROWID_ '''' '@Description "Generates a query returning all non-system database objects." 
Public Function DbSchema(Optional ByVal Schema As String = "main") As String DbSchema = Join(Array( _ "SELECT sql, (CASE type", _ " WHEN 'table' THEN 0", _ " WHEN 'index' THEN 1", _ " WHEN 'view' THEN 2", _ " ELSE 3", _ " END) AS type_id", _ "FROM " & Schema & ".sqlite_master", _ "WHERE name NOT like 'sqlite_%'", _ "ORDER BY type_id, _ROWID_" _ ), vbNewLine) End Function '''' @ClassMethod '''' This method can also be used on the default instance '''' '''' Generates an SQLite query returning all non-system database objects, '''' except for triggers, ordered by type (tables, indices, views) and '''' then by ROWID. The query returns two columns (sql, type_id). '''' '''' Args: '''' Schema (string, optional, "main"): '''' Schema name/alias '''' '''' Returns: '''' String, containing the query '''' '''' Examples: '''' >>> ?SQLiteSQLDbIdxFK.DbSchemaNoTriggers '''' SELECT sql, (CASE type '''' WHEN 'table' THEN 0 '''' WHEN 'index' THEN 1 '''' ELSE 2 '''' END) AS type_id '''' FROM main.sqlite_master '''' WHERE (name NOT like 'sqlite_%') AND type <> 'trigger' '''' ORDER BY type_id, _ROWID_ '''' '@Description "Generates a query returning all non-system database objects." Public Function DbSchemaNoTriggers(Optional ByVal Schema As String = "main") As String DbSchemaNoTriggers = Join(Array( _ "SELECT sql, (CASE type", _ " WHEN 'table' THEN 0", _ " WHEN 'index' THEN 1", _ " ELSE 2", _ " END) AS type_id", _ "FROM " & Schema & ".sqlite_master", _ "WHERE (name NOT like 'sqlite_%') AND type <> 'trigger'", _ "ORDER BY type_id, _ROWID_" _ ), vbNewLine) End Function '''' @ClassMethod '''' This method can also be used on the default instance '''' '''' Generates an SQLite query returning base info on database indices ordering '''' by ROWID (in order of creation). If requested, a CTE WITH term is generated. 
'''' '''' Args: '''' Schema (string, optional, "main"): '''' Schema name/alias '''' CTEWITH (boolean, optional, False): '''' If True, format as a CTE WITH term '''' '''' Returns: '''' String, containing the query '''' '''' Examples: '''' >>> ?SQLiteSQLDbIdxFK.IndexBase '''' SELECT ROWID AS id, name AS idx_name, tbl_name, sql '''' FROM main.sqlite_master '''' WHERE type='index' '''' ORDER BY ROWID ASC '''' '''' >>> ?SQLiteSQLDbIdxFK.IndexBase(, True) '''' ib AS ( '''' SELECT ROWID AS id, name AS idx_name, tbl_name, sql '''' FROM main.sqlite_master '''' WHERE type='index' '''' ORDER BY ROWID ASC '''' ) '''' '@Description "Generates a query returning indices (base info)." Public Function IndexBase(Optional ByVal Schema As String = "main", _ Optional ByVal CTEWITH As Boolean = False) As String Dim Indent As String Dim Query As String Indent = IIf(CTEWITH, " ", vbNullString) Query = Indent & Join(Array( _ "SELECT ROWID AS id, name AS idx_name, tbl_name, sql", _ "FROM " & Schema & ".sqlite_master", _ "WHERE type = 'index'", _ "ORDER BY ROWID ASC" _ ), vbNewLine & Indent) IndexBase = IIf(CTEWITH, "ib AS (" & vbNewLine & Query & vbNewLine & ")", Query) End Function '''' @ClassMethod '''' This method can also be used on the default instance '''' '''' Generates an SQLite CTE WITH term for a foreign key list. '''' '''' Args: '''' Schema (string, optional, "main"): '''' Schema name/alias '''' '''' Returns: '''' String, containing the CTE WITH term '''' '''' Examples: '''' >>> ?SQLiteSQLDbIdxFK.pForeignKeyList '''' fkl AS ( '''' SELECT tbl_name AS child_table, [from] AS child_col0, '''' [table] AS parent_table, [to] AS parent_col0, '''' on_update, on_delete, id AS fk_id, seq AS fk_seq '''' FROM t '''' Join main.pragma_foreign_key_list(t.tbl_name) '''' ORDER BY child_table, fk_id '''' ) '''' '@Description "Generates a query returning a foreign key CTE WITH term." 
Public Function pForeignKeyList(Optional ByVal Schema As String = "main") As String pForeignKeyList = Join(Array( _ "fkl AS (", _ " SELECT tbl_name AS child_table, [from] AS child_col0,", _ " [table] AS parent_table, [to] AS parent_col0,", _ " on_update, on_delete, id AS fk_id, seq AS fk_seq", _ " FROM t", _ " JOIN " & Schema & ".pragma_foreign_key_list(t.tbl_name)", _ " ORDER BY child_table, fk_id", _ "),", _ "fk AS (", _ " SELECT *, group_concat(child_col0, ', ') AS child_cols,", _ " group_concat(parent_col0, ', ') AS parent_cols,", _ " min(fk_seq) AS min_fk_seq", _ " FROM fkl", _ " GROUP BY child_table, fk_id", _ " ORDER BY child_table, fk_id", _ ")" _ ), vbNewLine) End Function '''' @ClassMethod '''' This method can also be used on the default instance '''' '''' Generates an SQLite CTE WITH term for index info & list. '''' For each index list info and join the tables. Only use <index name> here. '''' For multi-column indices, keep the row with the first column and generates '''' a column list. Generate database-wide list of additional index info columns '''' from the per-table index lists. '''' '''' Args: '''' Schema (string, optional, "main"): '''' Schema name/alias '''' '''' Returns: '''' String, containing the CTE WITH term '''' '@Description "Generates a query returning a CTE WITH term for index info & list." 
Public Function pIndexInfoList(Optional ByVal Schema As String = "main") As String pIndexInfoList = Join(Array( _ "ii AS (", _ " SELECT ib.idx_name, min(ii.seqno) AS seqno, ii.name AS col0_name, group_concat(ii.name, ', ') AS columns", _ " FROM ib", _ " JOIN " & Schema & ".pragma_index_info(ib.idx_name) AS ii", _ " GROUP BY idx_name", _ "),", _ "il AS (", _ " SELECT name AS idx_name, seq AS idx_seq, [unique], origin, partial", _ " FROM t", _ " JOIN " & Schema & ".pragma_index_list(tbl_name)", _ ")" _ ), vbNewLine) End Function '''' @ClassMethod '''' This method can also be used on the default instance '''' '@Description "Generates a query returning all foreing keys in the SQLite database" Public Function ForeingKeys(Optional ByVal Schema As String = "main") As String Dim StmtParts(0 To 5) As String StmtParts(0) = "WITH" '''' List all db tables StmtParts(1) = Tables(Schema, True) & "," '''' For each table, list foreign keys and join them to get a list of all foreign '''' keys for the DB. Each row contains info on a foreign key for a single column. '''' Yield a single row per foreign key, including multi-column keys. For multi-column '''' keys, keep the row with the first column and generates a column list. StmtParts(2) = pForeignKeyList(Schema) StmtParts(3) = "SELECT *" StmtParts(4) = "FROM fk AS foreign_keys" StmtParts(5) = "ORDER BY child_table, fk_id" ForeingKeys = Join(StmtParts, vbNewLine) End Function '''' @ClassMethod '''' This method can also be used on the default instance '''' '''' Generates an SQLite query returning database indices, ordering by ROWID. '''' If "NonSys" = True, skip auto indices (prefixed with "sqlite_autoindex_"). 
'''' '@Description "Generates a query returning all indices in the SQLite database" Public Function Indices(Optional ByVal Schema As String = "main", _ Optional ByVal NonSys As Boolean = True) As String Dim StmtParts(10 To 26) As String StmtParts(10) = "WITH" '''' List all db tables StmtParts(11) = Tables(Schema, True) & "," '''' List all db indices StmtParts(12) = IndexBase(Schema, True) & "," '''' For each index list info and join the tables. Only use <index name> here. For '''' multi-column indices, keep the row with the first column and generates a column list. '''' Generate database-wide list of additional index info columns from the per-table index lists StmtParts(13) = pIndexInfoList(Schema) & "," '''' After taking care of multi-row descriptions, add aditional columns from index list StmtParts(14) = "idx AS (" StmtParts(15) = " SELECT ib.id, ib.idx_name, ib.tbl_name, ii.col0_name, ii.columns, ib.sql" StmtParts(16) = " FROM ib, ii" StmtParts(17) = " ON ib.idx_name = ii.idx_name" StmtParts(18) = ")," '''' Join additional info columns with index-wise list StmtParts(19) = "iex AS (" StmtParts(20) = " SELECT idx.*, il.idx_seq, il.[unique], il.origin, il.partial" StmtParts(21) = " FROM idx, il" StmtParts(22) = " WHERE idx.idx_name = il.idx_name" StmtParts(23) = ")" StmtParts(24) = "SELECT *" StmtParts(25) = "FROM iex AS indices" StmtParts(26) = IIf(NonSys, _ "WHERE idx_name NOT LIKE 'sqlite_autoindex_%'" & vbNewLine, vbNullString) & _ "ORDER BY id" Indices = Join(StmtParts, vbNewLine) End Function '''' @ClassMethod '''' This method can also be used on the default instance '''' '''' Indices on child columns of foreing key relations are not mandatory, '''' but generally should be defined. Database engine does not control whether '''' such indices are defined. This query return a summary table showing all '''' child columns and corresponding indices in the "idx_name" column. 
If this '''' field is empty for a particular child column, the corresponding index has '''' not been defined. '''' '@Description "Generates a query returning child columns for all foreing keys and corresponding indices." Public Function FKChildIndices(Optional ByVal Schema As String = "main") As String Dim StmtParts(10 To 34) As String StmtParts(10) = "WITH" StmtParts(11) = Tables(Schema, True) & "," StmtParts(12) = IndexBase(Schema, True) & "," StmtParts(13) = pIndexInfoList(Schema) & "," StmtParts(14) = "idx AS (" StmtParts(15) = " SELECT ib.id, ib.idx_name, ib.tbl_name, ii.col0_name, ii.columns, ib.sql" StmtParts(16) = " FROM ib, ii" StmtParts(17) = " ON ib.idx_name = ii.idx_name" StmtParts(18) = ")," StmtParts(19) = "iex AS (" StmtParts(20) = " SELECT idx.*, il.idx_seq, il.[unique], il.origin, il.partial" StmtParts(21) = " FROM idx, il" StmtParts(22) = " WHERE idx.idx_name = il.idx_name AND partial = 0" StmtParts(23) = ")," StmtParts(24) = pForeignKeyList(Schema) & "," '''' Join indices and foreign keys tables to see which child columns do not have indices. '''' Multi-column indices, having the child column set as the "prefix" are accepted. StmtParts(25) = "fki AS (" StmtParts(26) = " SELECT fk.child_table, fk.child_cols, fk.parent_table, fk.parent_cols," StmtParts(27) = " iex.idx_name" StmtParts(28) = " FROM fk" StmtParts(29) = " LEFT JOIN iex" StmtParts(30) = " ON fk.child_table = iex.tbl_name AND fk.child_cols = substr(iex.columns, 1, length(fk.child_cols))" StmtParts(31) = ")" StmtParts(32) = "SELECT *" StmtParts(33) = "FROM fki AS fkeys_childindices" StmtParts(34) = "ORDER BY child_table, child_cols" FKChildIndices = Join(StmtParts, vbNewLine) End Function '''' @ClassMethod '''' This method can also be used on the default instance '''' '''' If IDX1 indexes columns (A, B) and IDX2 indexes columns (A, B, C), that is '''' IDX1 indexes a "prefix" of IDX2, IDX2 can replace IDX1. 
On the other hand, '''' depending on statistics (if for any given pair (A, B), there are very few '''' rows), IDX2 may not be justifiable (unless it is the primary key). This '''' query aims to return all such similar ("prefix") indices, though it has not '''' been thoughroughly verified. It may return some "false" positive. Whether '''' it can miss indices is not clear. '''' '@Description "Generates a query returning similar indices." Public Function SimilarIndices(Optional ByVal Schema As String = "main") As String Dim StmtParts(10 To 39) As String StmtParts(10) = "WITH" StmtParts(11) = Tables(Schema, True) & "," StmtParts(12) = IndexBase(Schema, True) & "," StmtParts(13) = pIndexInfoList(Schema) & "," StmtParts(14) = "idx AS (" StmtParts(15) = " SELECT ib.id, ib.idx_name, ib.tbl_name, ii.col0_name, ii.columns" StmtParts(16) = " FROM ib, ii" StmtParts(17) = " ON ib.idx_name = ii.idx_name" StmtParts(18) = ")," StmtParts(19) = "iex AS (" StmtParts(20) = " SELECT idx.*, il.idx_seq, il.[unique], il.origin, il.partial" StmtParts(21) = " FROM idx, il" StmtParts(22) = " WHERE idx.idx_name = il.idx_name" StmtParts(23) = ")," StmtParts(24) = "fdup AS (" StmtParts(25) = " SELECT tbl_name, col0_name, count(*) AS group_size" StmtParts(26) = " FROM iex" StmtParts(27) = " WHERE partial = 0" StmtParts(28) = " GROUP BY tbl_name, col0_name" StmtParts(29) = " HAVING group_size > 1" StmtParts(30) = ")," StmtParts(31) = "idup AS (" StmtParts(32) = " SELECT iex.*, fdup.group_size" StmtParts(33) = " FROM iex" StmtParts(34) = " JOIN fdup" StmtParts(35) = " ON iex.tbl_name = fdup.tbl_name AND iex.col0_name = fdup.col0_name" StmtParts(36) = ")" StmtParts(37) = "SELECT *" StmtParts(38) = "FROM idup AS similar_indices" StmtParts(39) = "ORDER BY tbl_name, col0_name, columns" SimilarIndices = Join(StmtParts, vbNewLine) End Function SQLiteSQLEngineInfo Engine-related code goes in this module. 
'@Folder "SQLiteDB.Introspection" '@ModuleDescription "SQL queries for retrieving information about the engine configuration and available features." '@PredeclaredId '@Exposed '@IgnoreModule ProcedureNotUsed '''' All methods in this module are class methods and can be safely called on the default instance '''' @ClassModule Option Explicit '@Description "Generates query returning available SQLite collations" Public Property Get Collations() As String Collations = "SELECT * FROM pragma_collation_list AS collations ORDER BY name" End Property '@Description "Generates query returning compile options" Public Property Get CompileOptions() As String CompileOptions = "SELECT * FROM pragma_compile_options AS compile_options" End Property '@Description "Generates query returning available SQLite functions" Public Property Get Functions() As String Functions = "SELECT * FROM pragma_function_list AS functions ORDER BY name" End Property '@Description "Generates query returning available SQLite modules" Public Property Get Modules() As String Modules = "SELECT * FROM pragma_module_list AS modules ORDER BY name" End Property '@Description "Generates query returning available SQLite pragmas" Public Property Get Pragmas() As String Pragmas = "SELECT * FROM pragma_pragma_list AS pargmas ORDER BY name" End Property '@Description "Generates query returning SQLite version" Public Property Get Version() As String Version = "SELECT sqlite_version() AS version" End Property ADOlib.RecordsetToQT This routine outputs record data from an ADODB.Recordset onto an Excel worksheet via the QueryTable feature. The ADODB.Recordset object is directly provided to the QueryTable constructor keeping the code compact and the process efficient. 
'@Description "Outputs Recordset to Excel Worksheet via QueryTable" Public Sub RecordsetToQT(ByVal AdoRecordset As ADODB.Recordset, ByVal OutputRange As Excel.Range) Attribute RecordsetToQT.VB_Description = "Outputs Recordset to Excel Worksheet via QueryTable" Guard.NullReference AdoRecordset Guard.NullReference OutputRange Dim QTs As Excel.QueryTables Set QTs = OutputRange.Worksheet.QueryTables '''' Cleans up target area before binding the data. '''' Provided range reference used to indicate the left column and '''' Recordset.Fields.Count determines the width. '''' If EntireColumn.Delete method is used, Range object becomes invalid, so '''' a textual address must be saved to reset the Range reference. '''' However, when multiple QTs are bound to the same worksheet, '''' EntireColumn.Delete shifts columns to the left, so the target range '''' may not be clear. EntireColumn.Clear clears the contents. Dim FieldsCount As Long FieldsCount = AdoRecordset.Fields.Count Dim QTRangeAddress As String QTRangeAddress = OutputRange.Address(External:=True) Dim QTRange As Excel.Range '@Ignore ImplicitActiveSheetReference Set QTRange = Range(QTRangeAddress) QTRange.Resize(1, FieldsCount).EntireColumn.Clear '@Ignore ImplicitActiveSheetReference Set QTRange = Range(QTRangeAddress) Dim WSQueryTable As Excel.QueryTable For Each WSQueryTable In QTs WSQueryTable.Delete Next WSQueryTable Dim NamedRange As Excel.Name For Each NamedRange In QTRange.Worksheet.Names NamedRange.Delete Next NamedRange Set WSQueryTable = QTs.Add(Connection:=AdoRecordset, Destination:=QTRange.Range("A1")) With WSQueryTable .FieldNames = True .RowNumbers = False .PreserveFormatting = True .RefreshOnFileOpen = False .BackgroundQuery = True .RefreshStyle = xlInsertDeleteCells .SaveData = False .AdjustColumnWidth = True .RefreshPeriod = 0 .PreserveColumnInfo = True .EnableEditing = True End With WSQueryTable.Refresh QTRange.Worksheet.UsedRange.Rows(1).HorizontalAlignment = xlCenter End Sub Answer: Micro-review: 
'@IgnoreModule ProcedureNotUsed I used to sprinkle this around too, however there are a few reasons not to use it: It indicates your integration tests - if held in the same project files - do not hit this code path, which should be fixed or ignored case-by-case IMO. The (relatively) new '@EntryPoint annotation is usually a better indication about a public API. '@IgnoreModule means if you refactor, then different routines may be targeted by this annotation to the ones you had originally intended - that's fine if the module is truly dedicated to just public API and the annotation will always be valid. But sometimes you are ignoring procedures which might not be just API methods. For example, in SQLiteSQLDbInfo: Public Sub Init(ByVal Schema As String) this.Schema = Schema End Sub That could be Friend Sub because it's not public API, and it is vitally important that you do not forget to call this method if you refactor the Create method and accidentally drop the call to Init. ProcedureNotUsed can help with that without having to implement a factory interface. Also RD lets you define an ignore reason for each '@Ignore[Module] annotation using a colon: Dim QTRange As Excel.Range '@Ignore ImplicitActiveSheetReference Set QTRange = Range(QTRangeAddress) ... could be: Dim QTRange As Excel.Range '@Ignore ImplicitActiveSheetReference: QTRangeAddress is a fully qualified external range Set QTRange = Range(QTRangeAddress) ... that said, if this code made its way into SheetX, then Range implicitly refers to SheetX.Range which fails if QTRange is in SheetY, so better be on the safe side and use the fully qualified Application.Range So for a workbook with 2 sheets Sheet1 and Sheet2, the following code: Sub t() Debug.Print Range("[Book1]Sheet1!$A$1").Address(external:=True) End Sub ... fails in Sheet2 since it references Sheet1, but: Sub t() Debug.Print Application.Range("[Book1]Sheet1!$A$1").Address(external:=True) End Sub ... prints "[Book1]Sheet1!$A$1" as expected
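One further aside on testing: the introspection SQL that this library generates can be smoke-tested outside VBA against the same engine. A minimal Python sketch (assuming the bundled SQLite is 3.16+, where table-valued pragma functions are available; the `person` table is made up for illustration, and the schema qualifier is dropped for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

# Same shape of query as SQLiteSQLDbInfo.TableColumns generates
cols = con.execute(
    "SELECT name, type FROM pragma_table_info('person')"
).fetchall()
print(cols)  # [('id', 'INTEGER'), ('name', 'TEXT')]

# Engine-level introspection, as in SQLiteSQLEngineInfo
version = con.execute("SELECT sqlite_version() AS version").fetchone()[0]
collations = [row[0] for row in
              con.execute("SELECT name FROM pragma_collation_list ORDER BY name")]
con.close()
```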
{ "domain": "codereview.stackexchange", "id": 41964, "tags": "sql, vba, excel, database, sqlite" }
Some nuances on Group and Subgroup Isomorphism?
Question: (1) Is it known that Group Isomorphism is in $\mathsf{coNP}$, or is it only conjectured? Is there a good reference for $\mathsf{coNP}$-ness in similar situations? (2) Is subgroup isomorphism $\mathsf{NP\text-complete}$? (3) Is there an explicit relation between group cardinality and number of generators and bit size of generators that is used in the presentation of the group? For instance if the group has cardinality $2^{\log^c n}$ is there a bound on size and number of generators? Answer: (1) In terms of structural complexity classes (as opposed to just upper bounds on deterministic time), for general Group Isomorphism, the known upper bounds are essentially the same as for Graph Isomorphism, namely $\mathsf{coAM} \cap \mathsf{SZK}$. However, Arvind and Toran showed that Solvable Group Isomorphism is in $\mathsf{NP} \cap \mathsf{coNP}$ under a relatively benign derandomization assumption (in particular, weaker than that currently needed to show Graph Isomorphism is in $\mathsf{coNP}$). Although this isn't general Group Isomorphism, as solvable groups are widely believed to contain the hardest cases of Group Isomorphism, this is "pretty close." As it is conjectured that $\mathsf{coAM} = \mathsf{coNP}$, it is also conjectured that Group Isomorphism is in $\mathsf{coNP}$. In the Cayley table model, many people believe that Group Isomorphism is even in $\mathsf{P}$. (2) Subgroup Isomorphism in the Cayley table model is unlikely to be $\mathsf{NP}$-complete, as it has a quasi-polynomial time algorithm. Namely, $H$ has at most $\log_2|H|$ generators; try all possible mappings of these generators into $G$ to see if any gives an injective homomorphism. This takes time $|G|^{\log_2|H| + O(1)}$. (3) Every finite group has a generating set of size at most $\log_2|G|$ (simple exercise). For the rest of this part of the question, it really depends on how the group is presented. However, note that how the group is given as input - e.g.
Cayley table, generating permutations, generating matrices, generators-and-relations, black-box - also has a significant effect on the complexity of the corresponding algorithms. It is known that there are $n^{\Theta(\log^2 n)}$ groups of order $\leq n$, so by counting, groups of order $\leq n$ in general need $\Theta(\log^3 n)$ bits to describe. It is an open question whether there is always a presentation with generators-and-relations of poly-logarithmic size.
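The generating-set bound in (3) is easy to check by brute force on a tiny example; a sketch (not from the answer) using $S_3$ represented as permutation tuples, with composition as the group operation:

```python
import math
from itertools import combinations, permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]] for permutations written as tuples
    return tuple(p[i] for i in q)

def generated(gens, identity):
    # BFS closure of gens under composition; for a finite group this
    # is exactly the subgroup they generate
    seen = {identity}
    frontier = [identity]
    while frontier:
        nxt = []
        for g in frontier:
            for s in gens:
                h = compose(g, s)
                if h not in seen:
                    seen.add(h)
                    nxt.append(h)
        frontier = nxt
    return seen

G = set(permutations(range(3)))   # S_3, order 6
identity = tuple(range(3))
min_gens = min(k for k in range(1, len(G) + 1)
               if any(generated(set(c), identity) == G
                      for c in combinations(sorted(G), k)))
print(min_gens, math.log2(len(G)))  # 2 <= log2(6) ~ 2.58, as the bound promises
```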
{ "domain": "cstheory.stackexchange", "id": 3489, "tags": "graph-isomorphism, gr.group-theory" }
What information is sufficient for describing a thermodynamic system?
Question: For a single-component system, why are the energy, volume, and number of particles sufficient for describing the thermodynamics of the system? Why just three variables and those three variables in particular? In the book that I am using (Callen, $\textit{Thermodynamics and an Introduction to Thermostatistics}$) he postulates that the macroscopic equilibrium state is characterized by the energy, volume, and particle numbers of its components, but what is the reason for this? Answer: Typically, the state variables used to identify a macrostate are N, V, E for an isolated system, or you can replace E with T if the system is in contact with a heat bath. Other choices are also possible (the different formulations are related by the Legendre transform). Note that other substances may require an enlarged set of state variables (for example, magnets or superfluids, see https://arxiv.org/abs/cond-mat/0405111 ). Generally, apart from V (that sets the "size") and the internal energy E (that is related to the microscopic Hamiltonian), the other variables should be "conserved quantities" under the (possibly dissipative) dynamics of the system: for a simple system N is a conserved quantity (it is a "Noether charge"); other systems (like superfluids) may have extra conserved quantities.
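To illustrate why $(E, V, N)$ suffice for a simple system: once the fundamental relation $S(E,V,N)$ is known, every other quantity follows by differentiation, e.g. $1/T = \partial S/\partial E$. A numerical sketch using the monatomic ideal gas entropy with all constants ($k_B$, $m$, $h$) set to 1, so the expected result is $E = \tfrac{3}{2}NT$:

```python
import numpy as np

N, V = 100.0, 10.0

def S(E):
    # Monatomic ideal-gas (Sackur-Tetrode-type) entropy, constants set to 1:
    # S = N [ ln( (V/N) * (E/N)^{3/2} ) + const ]
    return N * (np.log((V / N) * (E / N) ** 1.5) + 2.5)

E = 300.0
h = 1e-5
inv_T = (S(E + h) - S(E - h)) / (2 * h)   # 1/T = dS/dE by central difference
T = 1.0 / inv_T
print(T, 2 * E / (3 * N))  # both ~ 2.0: E = (3/2) N T is recovered
```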
{ "domain": "physics.stackexchange", "id": 75858, "tags": "thermodynamics" }
Denoising a signal
Question: I'm starting hydraulic experiments, where I'd have to measure velocity in an unsteady flow with a device called an Acoustic Doppler Velocimeter. In DSP terms, I'd have a nonstationary signal in the shape of waves (in the figure below: the instantaneous velocity (cm/s) as a function of time (s) at one point; the period is about 70 s in my case). This signal contains the mean component (mean velocity) and noise (turbulence). My goal is to extract the mean velocity. I have looked up DSP and found many interesting models (Hilbert-Huang Transform, Wavelet Transform, Short-Time Fourier Transform) to denoise. The only problem is that, in the steady case, they need about 3 minutes of measurement at one point so that they can average (arithmetic averaging) and filter out this noise. Since my flow is unsteady, I'd probably need more. Besides, my signal lasts about 1.5 minutes. So I'm a little bit lost: can I still apply the denoising models (they're applied in the literature)? Thank you! Answer: As @matthewjpollard has already indicated, you may be taking too complex an approach to a possibly simpler problem. You should always begin with the simplest solution. I hope the following OCTAVE code can convince you of the value of this principle. Note that I've used the simplest (yet more complex than simple polynomials) model of a nonlinear wave fitting your description. A more realistic model might be required for a thorough investigation. clc; clear all; close all; N = 256; % cosine period (and signal length) M = 32; % tail length of signal n=[0:N-1]; % time-index x1 = 1-cos(2*pi*n/N); % wanted signal x1 = [x1, zeros(1,M)]; % append a zero tail (for practical purposes here) x2 = 0.15*randn(1,N+M); % enough wideband Gaussian noise, to be added to our signal x = x1 + x2; % total signal as noise + wanted b = fir1(64,0.01); % simple FIR LP filter of order 64 and wc=0.01*pi y = filter(b,1,x); % Filter the input signal and obtain the result.
figure, plot(x1); title('what you want')
figure, plot(x); title('what you have')
figure, plot(y); title('what you get')
The result is: (note the delay of the filter...) Depending on your accuracy requirement, you may investigate other filter types too.
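If OCTAVE isn't at hand, the same idea can be sketched in Python with a plain moving average; the window length, noise level, and wave model below are illustrative assumptions of this sketch, not values from the question:

```python
import numpy as np

# Sketch: a moving-average smoother in place of the FIR filter above.
# The window length (31) and noise level (0.15) are arbitrary assumptions.
n = np.arange(256)
signal = 1 - np.cos(2 * np.pi * n / 256)       # slow "mean velocity" wave
noisy = signal + 0.15 * np.random.default_rng(1).normal(size=n.size)

window = 31
kernel = np.ones(window) / window
smoothed = np.convolve(noisy, kernel, mode='same')

# the smoothed trace tracks the underlying wave far better than the raw one
print(np.abs(smoothed - signal).mean() < np.abs(noisy - signal).mean())
```

As with the FIR filter, a longer window gives a smoother mean-velocity estimate at the cost of temporal resolution (and some distortion near the edges of the record).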
{ "domain": "dsp.stackexchange", "id": 6514, "tags": "fft, wavelet, stft, hilbert-transform, moving-average" }
Different definitions of the expectation value of a quantum operator, including the density operator?
Question: In a QM lecture we are told that the average value of an observable in a mixed state (when the system is in state $|\psi_j\rangle$ with probability $p_j$, and given total probability $\sum_j p_j=1$) is $\langle \hat{A}\rangle=\sum_i p_i\langle \psi_i|\hat{A}|\psi_i\rangle$, which is something that is understood to me. Then later he defines the expectation value to be $\langle \hat{A}\rangle=Trace(\hat{\rho}\hat{A})$ with $\hat{\rho}=\sum_i p_i|\psi_i\rangle\langle \psi_i|$. Could someone point me to how to show both definitions are the same, or how they are related? Answer: Remember that for any operator $\hat O$ the trace is the sum of the diagonal components, so that $$Tr(\hat{O}) = \sum_{k=1}^{n} \langle k|\hat{O}|k\rangle=O_{kk}$$ Also note that repeated indices are usually assumed to be summed over, but I will keep your notation for clarity. We can now write $$Tr(\hat{\rho}\hat{A}) = \sum_{k=1}^{n} \sum_{i=1}^{n}p_i\langle k|\psi_i\rangle\langle \psi_i|\hat{A}|k\rangle$$ $$\ \ \ \ \ \ \ \ \ \ \ \ \ \ =\sum_{k=1}^{n}\sum_{i=1}^{n}p_i\langle \psi_i|\hat{A}|k\rangle\langle k|\psi_i\rangle$$ and these steps should be understandable. Noting that $\sum_k|k\rangle\langle k|= \hat I$, the identity operator, you should be able to get the result you need. That is, this result for the trace will reduce to your first expression for the average value $\langle \hat A \rangle$. The above equation certainly looks neater if you remove the $\sum$ symbols, so that $$Tr(\hat{\rho}\hat{A}) = p_i\langle k|\psi_i\rangle\langle \psi_i|\hat{A}|k\rangle = p_i\langle \psi_i|\hat{A}|k\rangle\langle k|\psi_i\rangle$$ by remembering that repeated indices are summed over the space of states and that $\sum_k|k\rangle\langle k|$ is the identity.
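The equivalence is also easy to check numerically; here is a small sketch (the random states, probabilities, and observable are placeholders chosen for illustration) comparing both expressions:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                           # Hilbert-space dimension

# three random normalized states |psi_i> with probabilities p_i, sum p_i = 1
states = rng.normal(size=(3, d)) + 1j * rng.normal(size=(3, d))
states /= np.linalg.norm(states, axis=1, keepdims=True)
p = np.array([0.5, 0.3, 0.2])

A = rng.normal(size=(d, d))
A = A + A.T                                     # a Hermitian observable

# density operator rho = sum_i p_i |psi_i><psi_i|
rho = sum(pi * np.outer(psi, psi.conj()) for pi, psi in zip(p, states))

lhs = np.trace(rho @ A).real                    # Tr(rho A)
rhs = sum(pi * (psi.conj() @ A @ psi).real for pi, psi in zip(p, states))
print(np.isclose(lhs, rhs))   # -> True
```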
{ "domain": "physics.stackexchange", "id": 72425, "tags": "quantum-mechanics, density-operator" }
An apple sunk in water, looks fine. What could it mean?
Question: I washed a dozen apples just now and one of them sank in water. Also, surprisingly, I was not able to find an answer via googling; all top results were about why they float. It is the smallest one. I don't know the variety name, but AFAIK it is an apple and tastes like an apple. Could the fact that it sank mean some internal spoilage? Answer: Image Source: Infovisual: Apple I think that the density of the endocarp is greater than that of the mesocarp, and in that small apple a greater proportion of endocarp is present relative to mesocarp, making its average density greater than that of water and thus making it sink. Vice versa for the bigger apples.
{ "domain": "biology.stackexchange", "id": 11663, "tags": "fruit" }
The total number of stereoisomers of 1,2-dibromo-3,4-dichlorocyclobutane
Question: I tried writing all the geometrical isomers for 1,2-dibromo-3,4-dichlorocyclobutane, and then checked whether they were optically active or not. And I am getting the answer as 10: However, the given answer is 8. I can't really figure out where I went wrong. I am not sure whether the last one is a stereoisomer or not. Answer: I think if the question is asking for the number of stereoisomers, OP is correct about 10. If it is about optically active isomers, then the answer is 8, as shown in the following image: Since the given molecule contains four chiral centers, theoretically it should have a maximum of $2^4 = 16$ stereoisomers. The best way to find the correct number of isomers is to assign the $R/S$ (Cahn-Ingold-Prelog) configuration for each chiral center. Since the molecule has at least one plane of symmetry based on the stereoisomer, as shown for structure 1, the structure can be named in two different ways: For example, structure 1 could be $(1S,2R,3R,4S)$ going counter-clockwise (red numbering) or $(1R,2S,3S,4R)$ going clockwise. Thus, the corresponding mirror image would have the $(1R,2S,3S,4R)$ (counter-clockwise, red numbering) or $(1S,2R,3R,4S)$ (clockwise) configuration. As shown in the image, these four configurations are identical. Thus, structure 1 is not optically active (meaning its mirror image is superimposable, because it has a plane of symmetry as indicated) and is called a meso-isomer. Yet, it is a stereoisomer. The same is true for structure 4, which is hence optically inactive (the second meso-isomer). However, structures 2, 3, 5, and 6 have non-superimposable mirror images, as indicated in the above image (showing the assigned $R/S$ configurations). As a consequence, these four and their mirror images are optically active (a total of eight structures). Thus, 1,2-dibromo-3,4-dichlorocyclobutane has 10 stereoisomers: 8 optically active isomers and 2 meso-isomers.
OP's doubt about the last structure can be resolved by assigning the corresponding $R/S$-configurations as well: For the given structure it is $(1S,2S,3R,4R)$ (clockwise) or $(1S,2S,3R,4R)$ (counter-clockwise; same as clockwise). The assigned mirror image is $(1R,2R,3S,4S)$ (clockwise) or $(1R,2R,3S,4S)$ (counter-clockwise; same as clockwise). There are no coinciding names. Keep in mind that there is no plane of symmetry, because it is 1,2-dibromo versus 3,4-dichloro here. Thus, structure 6 is optically active.
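The counting argument above can be reproduced with a short brute-force enumeration. This sketch encodes the answer's reasoning as assumptions: configurations related by the ring renumbering C1↔C2, C3↔C4 are the same stereoisomer, and a stereoisomer is meso exactly when its mirror image (all labels flipped) lands in the same equivalence class:

```python
from itertools import product

def canon(cfg):
    # the two valid ring numberings are related by swapping C1<->C2 and C3<->C4
    alt = (cfg[1], cfg[0], cfg[3], cfg[2])
    return min(cfg, alt)

def mirror(cfg):
    # reflection inverts every stereocenter: R <-> S
    flip = {'R': 'S', 'S': 'R'}
    return tuple(flip[c] for c in cfg)

# all 2^4 = 16 label assignments, collapsed to distinct stereoisomers
stereoisomers = {canon(c) for c in product('RS', repeat=4)}
meso = [s for s in stereoisomers if canon(mirror(s)) == s]
print(len(stereoisomers), len(meso), len(stereoisomers) - len(meso))
# -> 10 2 8: ten stereoisomers, two meso, eight optically active
```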
{ "domain": "chemistry.stackexchange", "id": 15971, "tags": "organic-chemistry, isomers, cis-trans-isomerism, stereochemistry" }
How is the training complexity of the NNLM in word2vec calculated?
Question: I was reading this paper on word2vec, and came across the following description of a feedforward NNLM: It consists of input, projection, hidden and output layers. At the input layer, N previous words are encoded using 1-of-V coding, where V is size of the vocabulary. The input layer is then projected to a projection layer P that has dimensionality N × D, using a shared projection matrix. As only N inputs are active at any given time, composition of the projection layer is a relatively cheap operation. The following expression is given for the computational complexity per training example: Q = N×D + N×D×H + H×V. The last two terms make sense to me: N×D×H is roughly the number of parameters in a dense layer from the N×D-dimensional projection layer to the H hidden neurons, and analogously for H×V. The first term, however, I expected to be V×D, since the mapping from a one-hot encoded word to a D-dimensional vector is done via a V×D-dimensional matrix. I came to that conclusion after reading this referenced paper and this SO post where the workings of the projection layer are explained in more detail. Perhaps I have misunderstood what is meant by "training complexity". Answer: Yes. Technically your understanding is correct; i.e. if all input neurons were active, the computational complexity would be as you said: Q = V$\times$D + N$\times$D$\times$H + H$\times$V. So, if there are V words, the input, of course, will be a 1-of-V one-hot vector. And since there are N words in the input, the input matrix will be of size N$\times$V. If D is the dimensionality of the embedded vector, the projection matrix will be of size V$\times$D. The product of the input matrix and the projection matrix will then have dimensions (N$\times$V)$\cdot$(V$\times$D) = N$\times$D. But remember that this is a sequence problem and not all input words are available at any given time. For an N-gram model, only N inputs are active at any given time.
Hence, of the V rows of the projection matrix, only the N rows corresponding to the active one-hot inputs are referenced/updated at any given time; the remaining rows are referenced/updated at other times. As a result, at any given time, the input matrix effectively has dimension N$\times$N, and the projection matrix only has to be N$\times$D. The product of these two matrices (N$\times$N)$\cdot$(N$\times$D) still has dimensions N$\times$D. Hence the total complexity is Q = N$\times$D + N$\times$D$\times$H + H$\times$V. But in practice, this is implemented as a look-up table rather than a matrix multiplication, as mentioned here.
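A minimal numpy sketch (dimensions and word ids are illustrative) of why the projection step costs only N×D: the one-hot multiplication collapses to a row lookup:

```python
import numpy as np

V, D, N = 10000, 64, 4       # vocab size, embedding dim, context words
# shared V x D projection matrix
P = np.random.default_rng(0).normal(size=(V, D))

word_ids = [17, 523, 42, 9001]   # the N active one-hot inputs
# Equivalent to multiplying N one-hot rows of length V by P,
# but costs only N*D instead of N*V*D:
projection = P[word_ids]
print(projection.shape)   # -> (4, 64)
```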
{ "domain": "ai.stackexchange", "id": 3536, "tags": "neural-networks, word-embedding, word2vec, computational-complexity" }
Derivation of the redshift of photons emitted from the edge of a Schwarzschild black hole
Question: It seems that there is no nice derivation (either on this website or elsewhere on the web) of the standard formula for the redshift of a photon emitted in the Schwarzschild metric, as observed by an observer at infinity. The formula is: $$ \frac{\lambda_{\inf}}{\lambda_{\rm emitted}} = \left(1 - \frac{2GM}{c^{2}r}\right)^{-1/2}$$ Can someone show this derivation starting from the Schwarzschild metric? Answer: First, let's consider the frequency that would be observed by an observer at distance $R$ from the center of the Schwarzschild spacetime (working in units with $c=1$). We can identify the energy-momentum four vector of the photon with the tangent vector of the null geodesic describing the photon's trajectory, $p^\mu = \frac{{\rm d}x^\mu}{{\rm d}\lambda}$, where $\lambda$ is an affine parameter. The frequency $\omega(R)$ seen by an observer at radius $R$ is the time component $p^0$ of this vector in the locally inertial coordinates. Define $e^\mu_t$ to be a four vector of unit length in the time direction (in other words, this vector has components (1,0,0,0) in the locally inertial frame). Then we can define the frequency seen by an observer at $R$ in a coordinate-independent way as \begin{equation} \omega(R) = g_{\mu\nu} e_t^\mu p^\nu \end{equation} So much for the observed frequency. To relate this to the emitted frequency, we can use the fact that the Schwarzschild metric enjoys several Killing vectors. The relevant one in this context is the vector $\partial_t$ associated with time translation invariance. The components of this vector $\xi^\mu$ in Schwarzschild coordinates are $\xi^0=1, \xi^r=\xi^\phi=\xi^\theta=0.$ Therefore, \begin{equation} g_{\mu\nu}\xi^\mu \xi^\nu = g_{00} = -\left(1-\frac{2GM}{R}\right). \end{equation} Since $\xi^\mu$ is a Killing vector, we have that $g_{\mu\nu}\xi^\mu p^\nu$ is a constant along a geodesic.
Using this fact, we can compute the constant, which we will call $E$ \begin{equation} E = g_{\mu\nu} \xi^\mu p^\nu \end{equation} Now the Killing vector at position $x$, in turn, is related by an overall scaling factor to the unit time vector of a locally inertial observer at that position. The unit time vector $e_t^\mu$ satisfies $g_{\mu\nu}e^\mu_t e^\nu_t=-1$, and therefore \begin{equation} e^\mu_t = \left(1-\frac{2GM}{R}\right)^{-1/2}\xi^\mu \end{equation} Using this relationship, we can express $\omega(R)$ as \begin{equation} \omega(R) = g_{\mu\nu} e^\mu_t p^\nu = \left(1-\frac{2GM}{R}\right)^{-1/2} g_{\mu\nu} \xi^\mu p^\nu = \left(1-\frac{2GM}{R}\right)^{-1/2} E \end{equation} Since $E$ is constant, \begin{equation} \frac{\omega(R_1)}{\omega(R_2)} = \frac{\lambda(R_2)}{\lambda(R_1)} = \left[\frac{1-\frac{2GM}{R_2}}{1-\frac{2GM}{R_1}}\right]^{1/2} \end{equation} Taking the limit $R_2\rightarrow \infty$ and setting $R_1=R_{\rm em}$ recovers the gravitational redshift formula asked about in the original question.
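As a quick numerical illustration (the helper name and the chosen radius are this sketch's own, not part of the derivation), the redshift factor for a static emitter:

```python
import math

G = 6.674e-11          # gravitational constant, SI
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # one solar mass, kg

def redshift_factor(r, M):
    """1 + z = (1 - 2GM/(c^2 r))^(-1/2) for a static emitter at radial coordinate r."""
    rs = 2 * G * M / C**2          # Schwarzschild radius
    return (1 - rs / r) ** -0.5

# emitter hovering at twice the Schwarzschild radius of a solar-mass black hole
rs = 2 * G * M_SUN / C**2
print(redshift_factor(2 * rs, M_SUN))   # -> 1.414..., i.e. sqrt(2)
```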
{ "domain": "physics.stackexchange", "id": 73307, "tags": "general-relativity, black-holes, metric-tensor, redshift" }
Transforming time series into static features?
Question: I'm working on a side project where I have a mixture of static data and time series, and the goal would be to perform clustering on the data. There are a bunch of data sources, but basically the main thing would be some static information about users (like age, sex, location etc.) and some time series data (user 123 did xyz at 2pm, then yxz at 3pm, then yyy at 4pm). The goal would be to perform a clustering/segmentation via unsupervised learning to create user segments. Most of the data I have is of the time-series kind, but I'd like to incorporate both the time series and the static data into my model. The question is, would it be viable to transform the time series data into static data? If yes, what would be a method for this? Or, what would be some methods to perform clustering on time series data? I'm currently thinking maybe an autoencoder could help me somehow, but I'm not entirely sure how. What are some common methods for this (if any)? Can you maybe give me some pointers on where to start looking? Thank you! Answer: So the first thing that comes to mind for me is to ask: what is the end goal? Are you trying to classify users by how active they are and at what times? If you are, then I would refer you to this paper. The relevant section is 2.3, where they explain that there are two main approaches to dealing with this issue in the literature. "The first approach is to feed time-series features to RNN and then concatenate with static features." "The second approach for combining the two types of features is to include the time invariant features as part of the temporal features and feed them together to RNN units." In short, you can either train it all in one model, or first use the time series model and feed its output to another model together with the time-invariant features. Where the paper mentions these solutions, there are further citations to other sources that discuss this further.
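A simpler baseline the question itself hints at is to turn each series into static summary features and cluster the concatenation; everything in this sketch (the features, counts, and cluster number) is an illustrative assumption:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# hypothetical data: 500 users, 24 hourly activity counts each, plus 2 static features
rng = np.random.default_rng(0)
series = rng.poisson(3, size=(500, 24)).astype(float)
static = rng.normal(size=(500, 2))               # e.g. scaled age, region code

# collapse each time series into static summary features
summaries = np.column_stack([
    series.mean(axis=1), series.std(axis=1),
    series.max(axis=1), series.argmax(axis=1),   # peak-activity hour
])

X = StandardScaler().fit_transform(np.hstack([summaries, static]))
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(labels.shape)   # -> (500,)
```

Which summaries to use is domain-specific; mean/std/extremes are only a starting point before reaching for RNNs or autoencoders.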
{ "domain": "datascience.stackexchange", "id": 9927, "tags": "machine-learning, time-series, clustering, transformation" }
Three Tanks: 2 @ 100 gallons, 1 @ 300 gallons
Question: I'm a physics noob with a question that will probably stump nobody in this forum, but which Google can't seem to answer straightforwardly for me. Here it is: I want to build three hot tubs which will all be connected via two 2-inch tubes at the bottom. The tubs will all be the same height, and situated level with one another. Two will hold 150 gallons of water, one will hold 300 gallons of water. Will the additional volume in the 300 gallon tank equate to more pressure at the 2-inch tubes, such that the water level might not be the same across all three tanks? Put another way, if I sit in one of the 150 gallon tanks, will the water I displace move to the adjacent 300 gallon tank? Answer: Firstly, you need to know that the fluid pressure at a point in a liquid is $P = h \rho g$, where $h$ is the height of the liquid column above that point, and $\rho$ is the density of the liquid. Also, if three different vessels containing the same liquid are connected, then the level of liquid in the three vessels is the same, irrespective of the size or shape of the vessels. Now, in your case, since the tanks are connected and the water can flow freely between them, there is no way the liquid level can end up different in the three tanks. If, however, you control the flow of liquid in the pipe, then the height of liquid in the 300 gallon tank can be higher than the others, and that will amount to more pressure on the pipe at the bottom, as per the equation I've stated previously. If you lower yourself into one tank, and the liquid is allowed to flow freely in the pipes, then at first the water in the tank in which you're lowering yourself will rise, but then water will flow through the pipes until the liquid level is the same throughout the tanks. But keep in mind that the upthrust will still act upon you.
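The key relation $P = h\rho g$ is easy to put into numbers (the depth value below is an arbitrary example, not from the question):

```python
RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def gauge_pressure(depth_m):
    """Hydrostatic pressure P = h*rho*g at a depth below the free surface, in Pa."""
    return depth_m * RHO * G

# same water level in every connected tub -> same pressure at the bottom pipes,
# no matter how many gallons sit in each tub
print(gauge_pressure(0.9))   # ~8829 Pa for water 0.9 m deep
```

Note the pressure depends only on the depth below the surface, not on the tank's volume, which is why the 300 gallon tub exerts no extra pressure at the pipes once the levels equalize.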
{ "domain": "physics.stackexchange", "id": 44482, "tags": "homework-and-exercises, fluid-dynamics, estimation, fluid-statics, viscosity" }
Factor $1/\sqrt{2\pi}$ in the normalization of wave function packet
Question: My book has started using the wave packet definition as follows (time independent form): $$\Psi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} A(k) \ e^{ikx}dx$$ I do not understand where the $1/\sqrt{2\pi}$ comes from in this definition. First, I thought it has something to do with normalization however, I can't seem to prove this to myself. $$\Psi'(x) = N \int_{-\infty}^{\infty} A(k) \ e^{ikx}dx$$ $$\Psi'(x)^{\ast} = N \int_{-\infty}^{\infty} A^{\ast}(k) \ e^{-ikx}dx$$ $$\Psi'(x) \Psi'(x)^{\ast}= N^2 \int_{-\infty}^{\infty} A(k) \ e^{ikx}dx \int_{-\infty}^{\infty} A^{\ast}(k) \ e^{-ikx}dx = N^2 \int_{-\infty}^{\infty} A(k) A^{\ast}(k)dx = N^2 A(k) A^{\ast}(k)$$ The last step I justify by the conditions that the wave functions must approach zero as you go from $\pm \infty$. $$P = 1 = N^2 \int_{-\infty}^{\infty} A(k) A^{\ast}(k)dk$$ I am not sure where to go from here. Does this term actually come from the normalization? If so, how can I show this. Answer: The Fourier-transform operators $$ \hat F = \int \mathrm dx \frac{e^{ikx}}{\sqrt{2\pi}} \qquad \hat F{}^{-1} = \int \mathrm dk \frac{e^{-ikx}}{\sqrt{2\pi}} $$ are prettier if you attach a $\sqrt{2\pi}$ to them both, instead of the asymmetric $$ \hat F_\text{ugly} = \int \mathrm dx \ {e^{ikx}} \qquad \hat F{}_\text{ugly}^{-1} = \int \mathrm dk \frac{e^{-ikx}}{{2\pi}} $$ You should think of $\Psi(x) = \hat F {}^{-1} A(k)$ as your wavefunction, but $A(k) = \hat F\Psi(x)$ as also your wavefunction, in momentum space. You’re currently stuck because the normalization of $\Psi$ is related to the normalization of $A$. Perhaps abetting your confusion: in the current version of your question (v3), you erroneously have $\Psi(x) = \hat F A(k)$. That’s impossible, because the integral on the right side eats up the dummy variable $x$, so the left side should be a function of $k$. Likewise you have to be careful about the dummy variables when you are computing your overlap integrals. 
You expect that \begin{align} 1 = \left< \Psi | \Psi \right> &= \int \mathrm dx\ \Psi^*(x) \Psi(x) \\ &= \int \mathrm dx \left( \int\mathrm dk A^*(k) \frac{e^{+ikx}}{\sqrt{2\pi}} \right) \left( \int\mathrm dk’ A(k’) \frac{e^{-ik’x}}{\sqrt{2\pi}} \right) \end{align} You’re going to exploit $ \int\mathrm dx\ e^{i(k-k’)x} \sim \delta(k-k’) $ and get rid of two of the integrals, leaving you with $$ 1= \left<\Psi|\Psi\right>_x = \left<A|A\right>_k $$ which is what I meant about the normalizations being related. But you’re going to look up and/or prove that there is the correct number of factors of $\sqrt{2\pi}$ involved, rather than trusting my one-squiggle relationship. If you have been putting off reading about Fourier transforms, today is a good day.
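The bookkeeping can also be checked numerically; this sketch (the grid and the Gaussian packet are arbitrary choices) verifies that with the symmetric $1/\sqrt{2\pi}$ convention the x-space and k-space norms agree:

```python
import numpy as np

# a normalized Gaussian packet on a fine grid
x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 4)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)     # normalize in x-space

# A(k) in the symmetric 1/sqrt(2*pi) Fourier convention
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
A = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)

norm_k = np.sum(np.abs(A)**2) * (k[1] - k[0])   # <A|A> in k-space
print(round(norm_k, 6))   # -> 1.0
```

With an asymmetric convention the $2\pi$ would instead show up entirely in one of the two norms, which is exactly why the symmetric split is "prettier".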
{ "domain": "physics.stackexchange", "id": 85191, "tags": "quantum-mechanics, wavefunction, fourier-transform, normalization" }
Gauge invariant Chern-Simons Lagrangian
Question: I have to prove the (non-abelian) gauge invariance of the following Lagrangian (for a certain value of $\lambda$): $$\mathcal L= -\frac14 F^{\mu\nu}_aF_{\mu\nu}^a + \frac{k}{4\pi}\epsilon^{\mu\nu\rho}\operatorname{tr}[A_{\rho}\partial_{\mu}A_{\nu} + \lambda A_{\mu}A_{\nu}A_{\rho}]$$ Is there an easier way than writing out every component of A and going to a first-order gauge transformation? Because it seems really ugly to me. Answer: There is a nice reason for this, which Witten often explains. Imagine that your three dimensional space is the boundary of a four-dimensional space; for example, you can imagine that space is the surface z=0 of regular four dimensional space x,y,z,t. Further, you can imagine that space is closed into a sphere, which doesn't affect things except for some boundary conditions at infinity (the physics shouldn't care about such things; also note that this is implicitly Euclidean). If you close the three dimensional space-time into a sphere, the interior of the sphere is like the rest of the values of z for the plane case. You can extend any 3 dimensional gauge field configuration to the imaginary fourth dimension arbitrarily, so that any gauge field on the surface of the sphere can be extended to many different gauge fields on the interior. On the interior, you can construct the manifestly gauge invariant operator: $$ \epsilon_{\mu\nu\lambda\sigma} F^{\mu\nu}F^{\lambda\sigma} = F\tilde{F}$$ It is important to note that this quantity is a perfect divergence: $$ F\tilde{F} = \partial_\mu J^\mu_\mathrm{CS} $$ where J is the Chern-Simons current in 4 dimensions. Using Stokes' theorem, for any four-dimensional gauge field configuration $$ \int F\tilde{F} = \int d(*J) = \int_\partial *J $$ where the last equality is Stokes' theorem, and the previous equality is writing the divergence of a current as the Poincare dual of a three-form.
So the manifestly gauge invariant $F\tilde{F}$ integral for any gauge field on the interior of the sphere is equal to the integral of the three-form $*J$ on the boundary of the sphere. So the integral of $*J$ must be gauge invariant. I didn't work out the actual form of $*J$, but it is the quantity you are trying to prove gauge invariant. Although Witten's argument is conceptually illuminating, so it is the correct argument, verifying gauge invariance explicitly is not much more difficult than understanding all parts of the argument. Still, it is good to know the conceptual reason, because the reason Chern-Simons-style terms are important is exactly that they are the boundary terms of integrals of those gauge-invariant field tensor combinations which are perfect derivatives.
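For reference, the boundary term can be written down explicitly. Up to overall normalization (conventions and signs differ between sources), the four-dimensional identity and the resulting three-dimensional integrand are the standard Chern-Simons expressions, which fix the coefficient in the question to $\lambda = 2/3$:

```latex
\epsilon^{\mu\nu\lambda\sigma}\,\mathrm{tr}\,F_{\mu\nu}F_{\lambda\sigma}
  = \partial_\mu J^\mu_{\mathrm{CS}},
\qquad
J^\mu_{\mathrm{CS}} \propto \epsilon^{\mu\nu\lambda\sigma}\,
  \mathrm{tr}\!\left(A_\nu \partial_\lambda A_\sigma
    + \tfrac{2}{3}\,A_\nu A_\lambda A_\sigma\right),
% equivalently, in form language:
d\,\mathrm{tr}\!\left(A\wedge dA + \tfrac{2}{3}\,A\wedge A\wedge A\right)
  = \mathrm{tr}\,(F\wedge F).
```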
{ "domain": "physics.stackexchange", "id": 12004, "tags": "homework-and-exercises, quantum-field-theory, gauge-invariance, chern-simons-theory" }
Does scikit-learn have a forward selection/stepwise regression algorithm?
Question: I am working on a problem with too many features and training my models takes way too long. I implemented a forward selection algorithm to choose features. However, I was wondering does scikit-learn have a forward selection/stepwise regression algorithm? Answer: No, scikit-learn does not seem to have a forward selection algorithm. However, it does provide recursive feature elimination, which is a greedy feature elimination algorithm similar to sequential backward selection. See the documentation here
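For completeness, a minimal RFE sketch on synthetic data (all parameters here are arbitrary). Note also that scikit-learn later added `SequentialFeatureSelector` (from version 0.24), which does support forward selection:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# synthetic problem with 20 features, 5 of them informative
X, y = make_classification(n_samples=200, n_features=20,
                           n_informative=5, random_state=0)

# greedy backward elimination down to 5 features
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5)
rfe.fit(X, y)
print(rfe.support_.sum())   # -> 5 features kept
```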
{ "domain": "datascience.stackexchange", "id": 9953, "tags": "feature-selection, scikit-learn" }
Madlibs Program
Question: I just started to learn Python, and wrote this Madlibs program that works the way I intended it to, so I do not need help debugging, just some advice and tips to improve the code or make it simpler. Program and Madlibs text files on Google Drive Madlibs Reader
#Date 8SEP14
#introduction and file select
print ("Welcome to Madlibs!")
print ("Please select from the list below which Madlib Adventure you would like to do! ")
#display list of available madlibs
from os import listdir
for each in listdir():
    if each.endswith('txt'):
        print (each)
file_choice = input("What Madlib would you like to do? ")
#if input is not good then repeats until a good input is entered
while file_choice not in listdir():
    file_choice = input("Please try again and don't forget the .txt at the end ")
#opening file and setting variables
with open(file_choice,"r",encoding = "utf-8") as my_file:
    word_types = ["Verb","Adjective","Noun","Last-Name","Illness","Number","Place","Silly-Word","Body-Part","Past-Tense-Verb","Adverb","Verb-Ending-ING","Noun-Plural","Celebrity","Movie","Verb-Ending-ED","Mean-Nickname","Liquid","Cute-Animal-Plural","Article-of-Clothing","Celebrity-1","Celebrity-2"]
    user_inputs = []
    user_inputs_index = []
    my_read_file = my_file.read()
    new_read_file = my_read_file.split()
    x = 0
    for item in new_read_file:
        if item in word_types and user_inputs_index == []:
            user_inputs.append(item)
            user_inputs_index.append(new_read_file.index(item))
        else:
            if item in word_types:
                user_inputs.append(item)
                user_inputs_index.append(new_read_file.index(item,(int(user_inputs_index[-1])+1)))
    for each in user_inputs:
        user_inputs[x] = input("Please Enter a(n) " + each + ": ")
        x += 1
    z = 0
    for word in user_inputs:
        y = int(user_inputs_index[z])
        new_read_file[y] = word
        z += 1
    print (" ".join(new_read_file))
    my_file.close()
Answer: One thing I notice here is that you don't follow the DRY principle: Don't Repeat Yourself.
Always make an attempt to not repeat yourself, because it a) cuts down on code, b) makes code easier to modify, and c) usually runs more efficiently. For example:
for item in new_read_file:
    if item in word_types and user_inputs_index == []:
        user_inputs.append(item)
        user_inputs_index.append(new_read_file.index(item))
    else:
        if item in word_types:
            user_inputs.append(item)
            user_inputs_index.append(new_read_file.index(item,(int(user_inputs_index[-1])+1)))
This loop checks item in word_types two times in a row (which is inefficient), and in each case you do user_inputs.append(item) when you would do it regardless (which makes it harder to modify). Try changing it to:
for item in new_read_file:
    if item in word_types:
        user_inputs.append(item)
        if user_inputs_index == []:
            user_inputs_index.append(new_read_file.index(item))
        else:
            user_inputs_index.append(new_read_file.index(item,(int(user_inputs_index[-1])+1)))
PS: instead of writing else: if...: you can simply write elif...:
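In the same spirit, the manual counters x and z in the original can be dropped with enumerate(), which pairs each item with its index. A small standalone illustration (the .lower() call just stands in for the original input(...) prompt):

```python
# enumerate() yields (index, item) pairs, replacing the manual x/z counters
user_inputs = ["Noun", "Verb", "Adjective"]
for i, word_type in enumerate(user_inputs):
    user_inputs[i] = word_type.lower()   # stand-in for input(...)
print(user_inputs)   # -> ['noun', 'verb', 'adjective']
```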
{ "domain": "codereview.stackexchange", "id": 9479, "tags": "python, beginner, game, python-3.x" }
Does atomic time slow down at faster speeds?
Question: Continuing with my questioning regarding two different "times" (but hopefully within the scope of inquiry this time): if two asteroids are in orbit around the solar system, one at 10,000 kph and another at 1,000,000 kph (100 times faster), relativity says that time will pass differently for each relative to the other. Let's further say that they each contain 2 tons of uranium-238. Ignoring external forces and using only time dilation, will each asteroid have the same amount of uranium-238 left after 10,000 Earth years? Answer: Will each asteroid have the same amount of uranium 238 after 10,000 Earth years? No. Time dilation affects the rate of radioactive decay and other particle decay rates. This has been experimentally verified in many experiments, starting in the 1940s with the measurement of the lifetimes of relativistic muons created when cosmic rays interact with the upper atmosphere. See this Wikipedia article for more details.
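The size of the effect can be estimated directly from time dilation plus the exponential decay law; a sketch using the question's speeds (the half-life value is hard-coded here, and the effect at these non-relativistic speeds is tiny):

```python
import math

HALF_LIFE_YEARS = 4.468e9      # U-238 half-life
C_KPH = 1.079e9                # speed of light in km/h (approx.)

def remaining_fraction(earth_years, speed_kph):
    """Fraction of U-238 left after earth_years for a sample moving at speed_kph."""
    gamma = 1.0 / math.sqrt(1.0 - (speed_kph / C_KPH) ** 2)
    proper_time = earth_years / gamma      # less time elapses on the fast asteroid
    return 0.5 ** (proper_time / HALF_LIFE_YEARS)

slow = remaining_fraction(10_000, 10_000)
fast = remaining_fraction(10_000, 1_000_000)
print(fast > slow)   # -> True: the faster asteroid has decayed slightly less
```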
{ "domain": "physics.stackexchange", "id": 89972, "tags": "special-relativity, inertial-frames, time-dilation, observers" }
How do antioxidants work?
Question: A friend and I are doing a project that involves the process behind manufacturing Jet A-1, and specifically the specifications highlighted in DEF 91-91. I was interested to see the information at the bottom of page 8. Specifically: Why are 2,6-ditertiary-butyl-phenol, 2,6-ditertiary-butyl-4-methyl-phenol and 2,4-dimethyl-6-tertiary-butyl-phenol used rather than "more common" antioxidants like ascorbic acid? How do these antioxidants function, and do they work any differently from biochemical antioxidants? What factors determine the effectiveness of an antioxidant? Why is a mixture of chemicals used at different ratios rather than just one? Do they work more or less effectively at different temperatures, for example? Answer:
1) Solubility. Typical liquid fuels consist entirely of hydrocarbons, are non-polar, and thus cannot dissolve polar molecules like ascorbic acid.
2) The molecules mentioned in the provided quote can form stable radicals on loss of the $\ce{OH}$ group hydrogen, i.e. they act as a radical trap, transforming active radicals into non-active particles. This is a typical mechanism by which antioxidants work.
3) It strongly depends on the definition of effectiveness employed. Different applications demand different properties.
4) Because in industry people typically do not bother with isolation of individual products if a mixture can work just as well. Sure, one can isolate the individual products of phenol alkylation, but it is tiresome work, so why bother if the mixture works just fine?
{ "domain": "chemistry.stackexchange", "id": 9137, "tags": "organic-chemistry, mixtures, fuel" }
How to download and install ROS Fuerte in 2019?
Question: I am currently trying to implement someone else's project code to recreate their results. They've left build instructions, and running instructions, which is great. The only issue is, the original project was implemented on Ubuntu 12.04 Precise Pangolin with ROS Fuerte, and is too sprawling for me to have a reasonable chance of successfully migrating it to a newer Ubuntu/ROS. I was able to get a virtual machine running with Ubuntu 12.04 on it. When it came to installing ROS Fuerte, I followed the instructions here, but when I try to install the pre-built binaries with apt-get (Step 1.4), I get the error E: Unable to locate package ros-fuerte-desktop-full I'm assuming that because Fuerte has long since passed its EOL date, the binaries have been taken off the server. I also tried to install it from source, but ran into an error in step 1.2.2: ERROR in config: Unable to download URL [http://packages.ros.org/cgi-bin/gen_rosinstall.py?rosdistro=fuerte&variant=desktop-full&overlay=no]: HTTP Error 404: Not Found Have these files been moved somewhere? What is the best way to go about installing ROS Fuerte in 2019? Originally posted by saltus on ROS Answers with karma: 131 on 2019-12-20 Post score: 0 Original comments Comment by jarvisschultz on 2019-12-20: Does this post help in working around the disappearance of the gen_rosinstall.py script from packages.ros.org when you try to build from source.? https://answers.ros.org/question/82421/fuerte-source-install-gen_rosinstallpy-not-found/ Comment by jarvisschultz on 2019-12-20: The answer from @gvdhoorn is likely much more useful. I focused in on why you were having the source build error, but he's right, avoiding building from source is definitely preferable Answer: I'm assuming that because Fuerte has long since passed its EOL date, true. the binaries have been taken off the server. The main repositories have been cleaned up and don't serve the Fuerte binaries any more. 
However, the SnapshotRepository still has everything available. Follow the instructions there; don't be confused by the fact that fuerte is not listed in the table on that page, but do realise there is only a final snapshot available, as explained in the Snapshots of End-Of-Life ROS distributions section. That should allow you to install the last builds of all Fuerte packages. I also tried to install it from source, Please don't do this, unless absolutely necessary. I was able to get a virtual machine running with Ubuntu 12.04 on it. As an alternative you could consider using the Docker images for legacy ROS versions. That could also let you avoid the VM, which is always a good thing imho. Originally posted by gvdhoorn with karma: 86574 on 2019-12-20 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by saltus on 2019-12-20: Thanks for the tip! The SnapshotRepository is exactly what I'm looking for. That could also let you avoid the VM, which is always a good thing imho. I agree, I originally created a partition to install 12.04 on, but getting the hardware drivers to work with my current setup was a nightmare. So I figured the VM was a necessary evil. I'll definitely take a look at those Docker images and see if they're a better fit for what I'm trying to do.
{ "domain": "robotics.stackexchange", "id": 34179, "tags": "ros, binaries, ros-fuerte, rosinstall" }
Compare two datasets in Excel VBA
Question: I made the following code to take two reports and compare them, to show the end user elements which are missing from one of the reports so they can make the adjustments needed. This is the main part of the process, where the data within the two reports is processed. It's working with around 85K lines in one report and 60K lines in the other, which are located on sheet1 and sheet2 within the same workbook (an earlier macro clears and pulls the data in from where they live). It's taking around 15 minutes to run (I've got a quad-core machine with 4 GB of RAM; it takes over an hour to run on the older dual-core machines in the office). Still easier than running it manually, but it was suggested that this could run in seconds with some improvements.
Sub processdata()
    Application.ScreenUpdating = False
    Application.Calculation = xlCalculationManual
    Application.EnableEvents = False
    Application.DisplayAlerts = False

    Dim XXXXLen As Long
    With Sheets("Input - XXXXwebnew")
        XXXXLen = .Cells(.Rows.Count, "A").End(xlUp).Row
    End With

    'add concatenate ref column in column A on Input XXXXWebNew
    Sheets("INPUT - XXXXwebnew").Select
    Columns("A:A").Select
    Selection.Insert Shift:=xlToRight
    Sheets("INPUT - XXXXwebnew").Range("A1:A" & XXXXLen) = "=CONCATENATE(E1,""_"",G1,""_"",I1)"
    Application.Calculate
    Sheets("Input - XXXXwebnew").Range("a1:a" & XXXXLen).Copy
    Sheets("Input - XXXXwebnew").Range("a1:a" & XXXXLen).PasteSpecial xlPasteValues

    'picks up config products and moves them from E (input - XXXXwebnew) to A on (workings) tab
    Workbooks("workingmodel.xlsm").Sheets("WORKINGS").Range("a2:a" & XXXXLen + 1).value _
        = Workbooks("workingmodel.xlsm").Sheets("INPUT - XXXXWebNew").Range("e1:e" & XXXXLen).value

    'picks up simple products and moves them from A (input - XXXXwebnew) to A on (workings) tab
    'set a second dim which is the dim XXXXlen x 2
    Dim XXXXlen2 As Long
    XXXXlen2 = XXXXLen + XXXXLen
    Workbooks("workingmodel.xlsm").Sheets("WORKINGS").Range("a" & XXXXLen + 2 & ":a" & XXXXlen2 + 1).value _
        = Workbooks("workingmodel.xlsm").Sheets("INPUT - XXXXWebNew").Range("a1:a" & XXXXLen).value

    'remove all duplicates
    Sheets("workings").Range("$A$1:$A$" & XXXXlen2 + 1).RemoveDuplicates Columns:=1, Header:=xlYes

    'dim set for Workings tab length of data
    Dim WorkLen As Long
    With Sheets("WORKINGS")
        WorkLen = .Cells(.Rows.Count, "A").End(xlUp).Row
    End With

    'brings first formula in, calculates, C&Psp
    Sheets("workings").Range("b2:b" & WorkLen) = "=IF(LEN(A2)=12,""CONFIG"",""SIMPLE"")"
    Application.Calculate
    Sheets("workings").Range("b2:b" & WorkLen).Copy
    Sheets("workings").Range("b2:b" & WorkLen).PasteSpecial xlPasteValues

    'Sheets("workings").Range("c1") = "does it appear within XXXX_all (code means yes / #N/A means no)"
    'define length of XXXX_all
    Dim XXXXallLen As Long
    With Sheets("INPUT - XXXX_all")
        XXXXallLen = .Cells(.Rows.Count, "A").End(xlUp).Row
    End With

    'building the various dimensions required for a dynamic vba vlookup
    Dim sheetXXXX_all As String
    sheetXXXX_all = "INPUT - XXXX_all"
    Dim XXXXalllookup As String
    XXXXalllookup = ("'" & sheetXXXX_all & "'!$A$1:$m$" & XXXXallLen)
    Sheets("workings").Range("c2:c" & WorkLen) = "=left(VLOOKUP(A2," & XXXXalllookup & ",1,FALSE),12)"
    Application.Calculate
    Sheets("workings").Range("c2:c" & WorkLen).Copy
    Sheets("workings").Range("c2:c" & WorkLen).PasteSpecial xlPasteValues

    'Sheets("workings").Range("d1") = "is it enabled"
    Sheets("workings").Range("d2:d" & WorkLen) = "=VLOOKUP(A2," & XXXXalllookup & ",2,FALSE)"
    Application.Calculate
    Sheets("workings").Range("d2:d" & WorkLen).Copy
    Sheets("workings").Range("d2:d" & WorkLen).PasteSpecial xlPasteValues

    'Sheets("workings").Range("e1") = "does it have an image 0 = no #N/A = product code doesn't exist"
    Sheets("workings").Range("e2:e" & WorkLen) = "=VLOOKUP(A2," & XXXXalllookup & ",4,FALSE)"
    Application.Calculate
    Sheets("workings").Range("e2:e" & WorkLen).Copy
    Sheets("workings").Range("e2:e" & WorkLen).PasteSpecial xlPasteValues

    'does
description has a character" Sheets("workings").Range("f2:f" & WorkLen) = "=IF(LEN(VLOOKUP(A2," & XXXXalllookup & ",4,FALSE))=0,""NO DESC"",""FINE"")" Application.Calculate Sheets("workings").Range("f2:f" & WorkLen).Copy Sheets("workings").Range("f2:f" & WorkLen).PasteSpecial xlPasteValues 'Sheets("workings").Range("g1") = "RRRP Price" Sheets("workings").Range("g2:g" & WorkLen) = "=IF(VLOOKUP(A2," & XXXXalllookup & ",6,FALSE)<0.1,""NO PRICE"",""PRICE EXISTS"")" Application.Calculate Sheets("workings").Range("g2:g" & WorkLen).Copy Sheets("workings").Range("g2:g" & WorkLen).PasteSpecial xlPasteValues 'Sheets("workings").Range("h1") = "UK Price" Sheets("workings").Range("h2:h" & WorkLen) = "=IF(VLOOKUP(A2," & XXXXalllookup & ",13,FALSE)<0.1,""NO PRICE"",""PRICE EXISTS"")" Application.Calculate Sheets("workings").Range("h2:h" & WorkLen).Copy Sheets("workings").Range("h2:h" & WorkLen).PasteSpecial xlPasteValues 'Sheets("workings").Range("I1") = "Current stock greater than 0" Sheets("workings").Range("i2:i" & WorkLen).FormulaR1C1 = "=IF(RC[-7]=""config"",IF(SUMIF('Input - XXXXwebnew'!C[-4],WORKINGS!RC[-8],'Input - XXXXwebnew'!C[11])<0.1,""NO STOCK"",""HAS STOCK""),IF(VLOOKUP(RC[-8],'Input - XXXXwebnew'!C[-8]:C[12],20,FALSE)>0,""HAS STOCK"",""NO STOCK""))" Application.Calculate Sheets("workings").Range("i2:i" & WorkLen).Copy Sheets("workings").Range("i2:i" & WorkLen).PasteSpecial xlPasteValues Application.ScreenUpdating = True Application.Calculation = xlCalculationAutomatic Application.EnableEvents = True Application.DisplayAlerts = True End Sub Answer: One possible speedup would be to remove all copy/ pastespecial values and just do a single one at the end, just after turning calculation back on: Sheets("workings").Range("C2:I" & WorkLen).Value2 = Sheets("workings").Range("C2:I" & WorkLen).Value2 In addition, since you are looking up the same information over and over (all VLOOKUP functions share the same first arguments), you should consider adding a column which holds 
the MATCH function and from the other columns use its result as an argument for the INDEX function. So suppose we'll use column Z for the MATCH: Sheets("workings").Range("Z2:Z" & WorkLen) = "=MATCH(A2," & XXXXalllookup & ",0)" Then column D would become (it fetches its data from col B): Sheets("workings").Range("d2:d" & WorkLen) = "=INDEX(" & "'" & sheetXXXX_all & "'!$B$1:$B$" & XXXXallLen & ",Z2)"
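The MATCH-once / INDEX-many suggestion is the spreadsheet analogue of building a key-to-row index once and reusing it for every derived column. A hypothetical Python sketch of the same idea (the sample keys and columns are invented, not from the workbook):

```python
# Hypothetical sketch of the answer's MATCH-once / INDEX-many idea: instead of
# re-searching the lookup table for every derived column (one VLOOKUP per
# column per row), build a key -> row index once, then each column is an O(1)
# fetch against the stored position.

lookup_rows = [
    # (key, enabled, image_count, description, rrp)
    ("SKU_A_1", "yes", 1, "widget", 9.99),
    ("SKU_B_2", "no", 0, "", 0.0),
]
row_index = {row[0]: i for i, row in enumerate(lookup_rows)}  # the one "MATCH"

def fetch(key, col):
    # The "INDEX" step: reuse the stored position for every column
    i = row_index.get(key)
    return lookup_rows[i][col] if i is not None else None

report_keys = ["SKU_A_1", "SKU_B_2", "SKU_MISSING"]
enabled = [fetch(k, 1) for k in report_keys]
has_desc = [("FINE" if fetch(k, 3) else "NO DESC") if k in row_index else "N/A"
            for k in report_keys]
```

The payoff grows with the number of derived columns: the search cost is paid once per key rather than once per key per column.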
{ "domain": "codereview.stackexchange", "id": 24870, "tags": "beginner, vba, excel, time-limit-exceeded" }
What is the reason behind a good smell?
Question: I know that volatile substances create smells, but how is a good smell produced? Are there any common characteristics of compounds that have a good smell? Answer: There is a very wide variety of chemicals that have a good smell - and the flavor and fragrance industry has discovered and profited from thousands of them. To have a good smell, a compound first of all has to have a low molecular weight and be able to easily evaporate into air so it can be carried to your nose--i.e. a "volatile organic compound". Esters are a good class to look at for starters, and are responsible for many of the fruity odors. For example, amyl acetate is a simple compound, easily made in the lab, that smells like bananas. Another important class is terpenes: hydrocarbons made from 5-carbon units that include pine and cedar odors. Some odor components like those of chocolate are more complex, have many components, and are hard to reproduce exactly. One way to think of good (and bad) smells is how over time animals and humans have evolved to appreciate the smells of some things (like fruit or cooking onions) as good and other things like rotting flesh as bad, since the former is good for you and the latter definitely bad. If you got it backwards, you would not survive for long. But flowers, for example, smell good but do not materially benefit us. Maybe we have some residual bee genes in us.
{ "domain": "chemistry.stackexchange", "id": 8116, "tags": "organic-chemistry, smell" }
RobotC Code Malfunction (VEX Robotics Clawbot)
Question: I have a standard VEX Clawbot, which I've been trying to make go straight for some time. I've been following this guide: http://www.education.rec.ri.cmu.edu/products/cortex_video_trainer/lesson/3-5AutomatedStraightening2.html This is my code: #pragma config(I2C_Usage, I2C1, i2cSensors) #pragma config(Sensor, I2C_1, , sensorQuadEncoderOnI2CPort, , AutoAssign ) #pragma config(Sensor, I2C_2, , sensorQuadEncoderOnI2CPort, , AutoAssign ) #pragma config(Motor, port1, leftMotor, tmotorVex393_HBridge, openLoop, driveLeft, encoderPort, I2C_1) #pragma config(Motor, port10, rightMotor, tmotorVex393_HBridge, openLoop, reversed, driveRight, encoderPort, I2C_2) //*!!Code automatically generated by 'ROBOTC' configuration wizard !!*// void GOforwards() { nMotorEncoder[rightMotor]=0; nMotorEncoder[leftMotor]=0; int rightEncoder = abs(nMotorEncoder[rightMotor]); int leftEncoder = abs(nMotorEncoder[leftMotor]); wait1Msec(2000); motor[rightMotor] = 60; motor[leftMotor] = 60; while (rightEncoder < 2000) { if (rightEncoder > leftEncoder) { motor[rightMotor] = 50; motor[leftMotor] = 60; } if (rightEncoder < leftEncoder) { motor[rightMotor] = 60; motor[leftMotor] = 50; } if (rightEncoder == leftEncoder) { motor[rightMotor] = 60; motor[leftMotor] = 60; } } motor[rightMotor] = 0; motor[leftMotor] = 0; } task main() { GOforwards(); } I am using integrated Encoders. When I run the code my robot runs without stopping and the Encoder values diverge quickly. This is a video of the code running from the debugger windows: https://www.youtube.com/watch?time_continue=2&v=vs1Cc3xnDtM I am not sure why the power to the wheels never changes, or why it seems to believe that the Encoder values are equal... much less why it runs off into oblivion when the code should exit the while loop once the right encoder's absolute value exceeds 2000. Any help would be appreciated. Answer: It looks like you don't ever update the encoder values after you initialize them. 
The problem seems to be that maybe you're thinking that the left and right encoder values will update automatically, but that is generally not the case. The exception might be if those variable names are actually protected variable names that have special meaning in your development environment - more on that later. I don't know about VEX robotics, but I'll caution you that your left and right encoders are based on nMotorEncoder and, just like how leftEncoder and rightEncoder doesn't get updated anywhere in your loop, nMotorEncoder doesn't get updated anywhere either. So, that said, try using the following: void GOforwards() { nMotorEncoder[rightMotor]=0; nMotorEncoder[leftMotor]=0; int rightEncoder = abs(nMotorEncoder[rightMotor]); // <----- Define int leftEncoder = abs(nMotorEncoder[leftMotor]); // <----- Define wait1Msec(2000); motor[rightMotor] = 60; motor[leftMotor] = 60; while (rightEncoder < 2000) { rightEncoder = abs(nMotorEncoder[rightMotor]); // <----- Update leftEncoder = abs(nMotorEncoder[leftMotor]); // <----- Update if (rightEncoder > leftEncoder) { motor[rightMotor] = 50; motor[leftMotor] = 60; } if (rightEncoder < leftEncoder) { motor[rightMotor] = 60; motor[leftMotor] = 50; } if (rightEncoder == leftEncoder) { motor[rightMotor] = 60; motor[leftMotor] = 60; } } motor[rightMotor] = 0; motor[leftMotor] = 0; } By adding the two lines inside the while loop, now you are checking and updating the encoder counts every time you step into a new loop. This is one of the tricky things with debugging - you have to be looking at the correct variables! In your video, you are looking at leftMotor and rightMotor, and what appears to be fields for those motors. You were NOT looking at the actual variables that control which case gets selected! Those variables are leftEncoder and rightEncoder. 
And again, final warning - it looks like you're counting on things like motor and nMotorEncoder to be function names or keywords that do something, because otherwise it just looks like you enter an infinite loop that never actually writes any values to the motors. I would have expected something more along the lines of a speed assignment followed by writing that speed to a function, but that second step never happens. For example, I would have thought you'd do something like the following: motor[leftMotor] = 60; motor[rightMotor] = 60; WriteMotorSpeed(motor); But that last function, WriteMotorSpeed() never happens. Similarly, there is no function that returns an updated encoder count. Here too it looks like you're expecting the nMotorEncoder variable name to auto-magically update, where I would have expected you to need to call a function that returns an updated value, like the following: nMotorEncoder = GetEncoderCounts([leftMotor,rightMotor]); or something similar. I could absolutely be wrong; it looks like you're using ROBOTC, and also it's VEX robotics, so there could 100% be a set of pre-defined keywords or key variable names that hold special meaning in that development environment, but again I'm just pointing out what I see versus what I would expect to see. I would highly suggest you check the documentation to see if those values automatically update or if there is, in fact, a function you should be calling to read/write encoder counts and motor speeds.
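The effect of re-reading the encoders inside the loop can be illustrated with a small Python simulation (this is a made-up encoder model, not RobotC: each tick an encoder advances by motor power times a per-wheel gain, with the right wheel slightly weaker):

```python
# Simulation of the corrected control loop: the key point is that the encoder
# counts are re-read ("updated") on every iteration, so the correction logic
# actually sees fresh values and the faster wheel gets throttled.

def drive_straight(gain_left=1.00, gain_right=0.95, target=2000):
    left_count = right_count = 0.0
    left_power = right_power = 60
    while right_count < target:
        # Re-check the encoder values every iteration, as in the fixed code
        if right_count > left_count:
            right_power, left_power = 50, 60   # right ahead: slow right
        elif right_count < left_count:
            right_power, left_power = 60, 50   # left ahead: slow left
        else:
            right_power = left_power = 60
        # Hypothetical plant model: counts advance with power * gain per tick
        left_count += left_power * gain_left * 0.1
        right_count += right_power * gain_right * 0.1
    return left_count, right_count

l, r = drive_straight()
```

With the values frozen at their initial reading (as in the original code), the loop condition would never change and the robot would drive forever; with the re-read in place, the counts stay within a few ticks of each other until the target is reached.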
{ "domain": "robotics.stackexchange", "id": 1350, "tags": "robotc, quadrature-encoder, vex" }
In online one-step actor-critic, why do the weight updates become less significant as the episode progresses?
Question: The Reinforcement Learning book by Richard Sutton et al., section 13.5, shows an online actor-critic algorithm. Why do the weight updates depend on the discount factor via $I$? It seems that the closer we get to the end of the episode, the less we value our newest experience $\delta$. This seems odd to me. I thought the discounting in the recursive formula for $\delta$ itself was enough. Why do the weight updates become less significant as the episode progresses? Note this is not eligibility traces, as those are discussed separately, later in the same chapter. Answer: This "decay" of later values is a direct consequence of the episodic formula for the objective function for REINFORCE: $$J(\theta) = v_{\pi_\theta}(s_0)$$ That is, the expected return from the first state of the episode. This is equation 13.4 in the book edition that you linked in the question. In other words, if there is any discounting, we care less about rewards seen later in the episode. We mainly care about how well the agent will do from its starting position. This is not true for all formulations of policy gradients. There are other, related, choices of objective function. We can formulate the objective function as caring about the returns from any distribution of states, but in order to define it well, we do need to describe the weighting/distribution somehow, it should be relevant to the problem, and we want to be able to get approximate samples of $\nabla J(\theta)$ for policy gradient to work. The algorithm you are asking about is specifically for improving policy for episodic problems. Note you can set $\gamma = 1$ for these problems, so the decay is not necessarily required. As an aside (because someone is bound to ask): Defining $J(\theta)$ with respect to all states equally weighted could lead to difficulties e.g.
the objective would take less account of a policy's ability to avoid undesirable states, and it would require a lot of samples from probably irrelevant states in order to estimate it. These difficulties would turn up as a hard to calculate (or maybe impossible) expectation for $\nabla J(\theta)$
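The role of $I$ is easiest to see in code. Below is a minimal sketch of the one-step actor-critic update loop following the structure of the book's pseudocode; the toy 4-state chain environment, the step sizes, and the seed are all invented for illustration:

```python
import numpy as np

def softmax_policy(theta, s):
    prefs = theta[s]
    e = np.exp(prefs - prefs.max())
    return e / e.sum()

def run_episode(gamma=0.9, alpha_w=0.1, alpha_theta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n_states, n_actions = 4, 2          # state 3 is terminal
    w = np.zeros(n_states)              # critic: tabular state values
    theta = np.zeros((n_states, n_actions))
    s, I, I_history = 0, 1.0, []
    while s != 3:
        I_history.append(I)
        pi = softmax_policy(theta, s)
        a = rng.choice(n_actions, p=pi)
        s2 = s + 1 if a == 1 else s     # action 1 advances the chain
        r = 1.0 if s2 == 3 else 0.0
        v_s2 = 0.0 if s2 == 3 else w[s2]
        delta = r + gamma * v_s2 - w[s]         # one-step TD error
        w[s] += alpha_w * delta                 # critic update (no I factor)
        grad = -pi; grad[a] += 1.0              # grad log pi for tabular softmax
        theta[s] += alpha_theta * I * delta * grad  # actor update, scaled by I
        I *= gamma                  # I = gamma^t: later steps weigh less in J
        s = s2
    return I_history

hist = run_episode()
```

The factor $I$ is exactly $\gamma^t$, so the actor update at step $t$ is down-weighted by how much the return from $s_0$ discounts rewards reached at time $t$, which is the point of the answer above; with $\gamma = 1$, $I$ stays at 1 and no decay occurs.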
{ "domain": "ai.stackexchange", "id": 960, "tags": "machine-learning, reinforcement-learning, discount-factor, actor-critic-methods" }
Do Kleene star and complement commute?
Question: I am having a hard time solving the following problem. Are there any languages for which $$ \overline{L^*} = (\overline{L})^* $$ Assuming $\emptyset^* = \emptyset$, if I consider $\Sigma = \{a\}$ and $L = \Sigma^*$, I get that $L^* = L$ and that $\overline{L^*} = \emptyset$. For the right side I get $\overline{L} = \emptyset$ and $(\overline{L})^* = \emptyset$. Thus, both sides are equal. Is it true that $\emptyset^* = \emptyset$? Answer: Hint: The star of a language always contains the empty string. The complement of a language containing the empty string never does. With that in mind, look at the left and the right hand sides of your proposed equality.
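The hint can be checked by brute force over strings up to a fixed length. The sample language and the length bound below are arbitrary choices for illustration:

```python
from itertools import product

# Brute-force check of the hint over Sigma = {'a','b'}, truncated to strings of
# length <= n: the star of any language contains the empty string, so the
# complement of a star never does, while the star of a complement always does.

SIGMA = "ab"

def all_strings(max_len):
    return {"".join(p) for k in range(max_len + 1)
            for p in product(SIGMA, repeat=k)}

def star_up_to(L, max_len):
    # Strings of length <= max_len obtainable by concatenating members of L
    # (the empty concatenation gives "", so "" is always included).
    result, frontier = {""}, {""}
    while frontier:
        nxt = {u + v for u in frontier for v in L
               if len(u + v) <= max_len} - result
        result |= nxt
        frontier = nxt
    return result

n = 4
universe = all_strings(n)
L = {"a", "bb"}                      # an arbitrary sample language
lhs = universe - star_up_to(L, n)    # complement of L*, truncated to len <= n
rhs = star_up_to(universe - L, n)    # star of the complement, truncated
```

The two sides always disagree on the empty string, which is the whole point of the hint.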
{ "domain": "cs.stackexchange", "id": 2373, "tags": "formal-languages, regular-languages, closure-properties" }
How to learn physics?
Question: I am an engineering student (CSE) in India, but recently I have developed a strong love for physics. I want to learn physics and understand it deeply. I know physics is the search for the deep fundamental laws of nature. Does that mean I need to start from first principles and extrapolate from them? How should I learn physics? Answer: The resources you use actually don't matter as much as how you study them -- the best way to understand stuff in physics (and also math) is to look at a variety of sources and figure out the basic insights/axioms -- often empirically rooted insights -- that lead to the subject, then derive the entire subject for yourself. This means you stumble across all the insights that the guy who first discovered it came into, and you have a complete intuition (and the formalisation of this intuition) of the entire structure of the field. Then you go back, and think about how you could have come up with those initial insights yourself. For instance -- if you're learning special relativity, you first have a scattered understanding of a bunch of seemingly (somewhat) disconnected laws and theorems in the theory -- you've got the dynamics equations someplace, then you see the Lorentz transformations, etc. You know something about how $c$ is the "speed of things in spacetime", and different speeds are just different angles in spacetime, yada yada. So you start drawing these spacetime diagrams, seeing what results you can get from them -- you realise you're dealing with linear transformations, and you spend some time determining exactly what the transformation is (it's a skew), and reconcile this with what you earlier thought (that it's a rotation). This allows you to come up with rapidities, and you discover things like Minkowski dot products and the Lorentz group. Now you have a complete theory of spacetime, and can figure all sorts of things from it. But then you realise -- you can't figure out everything from it.
From what you've already skimmed from some sources, you know things about how momentum transforms, and how mass transforms. You try to derive these, but keep encountering circularities in proofs you find online. Then you realise, the issue is really a definitional one -- how do you define momentum in relativity? How do you define energy in relativity? The natural way to define these comes to you in the form of conservation laws, so you immediately start formulating some thought experiments, and after some trial and error, you have discovered a completely non-circular series of arguments that allow you to define dynamics in relativity. And then you look closely at the expressions you've derived for energy and momentum -- and voila! They seem to follow the exact same relationship as time and space do. So you've stumbled across four-vectors, and you have a complete theory of relativistic mechanics, and your understanding of it is so deep you can solve seemingly any problem with it. The reason this skim-and-discover method works so effectively, is that you can rediscover all the important insights for yourself, but do it without taking centuries, because you already have the starting point, but can still develop the experience of having discovered them via the "think about how you could've discovered the starting point yourself" stuff. It's also important to study mathematics alongside physics -- there are often extremely strong connections -- almost equivalences -- between certain areas of math and certain areas of physics (perhaps it's because the math was developed for the physics). E.g. special relativity and linear algebra (the basic geometric stuff), quantum mechanics and linear algebra (the more abstract, general stuff), general relativity and differential (at least Riemannian) geometry, etc. Learning them side-by-side helps you get the full set of insights, it's like learning two similar languages (or languages that share a similar script) side-by-side. 
With that said, resources still do matter, and the best resources are those which derive things fully, giving close attention to avoid circularity. It's also useful to have good exercises, but sometimes you can make these up yourself. Definitely avoid popular science ("math-free" books), and in the same spirit avoid any texts that claim to specifically avoid a certain mathematical formalism (e.g. "non-calculus-based" mechanics), because usually this formalism is the best, most appropriate and relevant formalism for the task (avoiding it because you don't understand the formalism is like using a glue-stick to assemble your home because you're out of nails). It's also important to learn roughly "in order" -- you don't need to be very steadfast about this, you can learn general relativity before statistical physics, but you probably shouldn't learn string theory before statistical physics (even though technically "you can", you won't learn to think like a physicist if you do). Here's a reasonable ordering, with good resources that satisfy the description I mentioned: Newtonian mechanics, classical gravity -- Jewett & Serway Learn alongside single-variable calculus and differential equations. Electromagnetism -- you actually don't need to know this in full detail before going to special relativity, but you need a theoretical understanding of the Maxwell equations and how they predict a wave if you want to understand the empirical motivations behind relativity. You should also have a good understanding of basic stuff like polarisation before learning quantum mechanics (it's one of the justifications for Born's law). Learn alongside multivariable calculus. Analytical mechanics, at least basic Lagrangian -- The IITs in India have a very nice lecture series on this, google "V Balakrishnan NPTEL classical mechanics". Learn alongside calculus of variations. 
Special relativity -- learn from all over the place, if you ever get really stuck (for a week or so), consult the Feynman lectures (I don't like some of his dynamics proofs, though), or better Einstein's original paper (there's a good translation with comments from Stephen Hawking in "A stubbornly persistent illusion"). Learn alongside first-year linear algebra. Thermodynamics, statistical physics -- there are many good books on the subject, but I'd recommend you pick up a graduate-level text right away, at least if you know all the undergrad stuff (it's impossible not to). Learn alongside statistics (distributions and stuff). Quantum mechanics -- NPTEL is good again, but there are other sources too (you can start with the wikipedia article on matrix mechanics, and a recent 3blue1brown video called "some light on quantum mechanics"). Learn while discovering abstract linear algebra for yourself. General relativity -- Schutz is your god, it's a wonderful textbook, much better than any of the overrated pap from Zee. Learn alongside Riemannian geometry. Quantum field theory and related -- this is extensive, it's what much of the mid-20th century was about. There are many standard texts, I think Tom Banks has a good book on this, and Zee actually has a reasonable book here. But Weinberg's volumes are the most extensive, you should at least use these to fall back on. Then of course you'll be ready for the really advanced stuff -- string theory and all -- I could recommend some stuff, but there are mathematical prerequisites, and frankly I don't feel like adding too much new content to this answer.
{ "domain": "physics.stackexchange", "id": 9264, "tags": "soft-question, education" }
Problem with turtlebot computers setup
Question: Hello, I am encountering problems when trying to set up the computers for running a turtlebot. Here is the setup info: Turtlebot netbook - Electric - 192.168.1.3 (tbot) Workstation - Groovy - 192.168.1.2 (work) ROS_MASTER_URI : "http://192.168.1.3:11311" ROS_IP : "192.168.1.3" (set on both computers) I can ssh into tbot from work and vice versa. I can ping both machines from each other. netcat listening works fine. But using "rostopic list" on the workstation gives an "Unable to communicate with master!" error. Also, according to the tutorials, if I use "rostopic echo /diagnostics" on the netbook, it should report a warning saying that "diagnostics has not been published", but in this case, it just prints some diagnostic info saying that /dev/ttyUSB0 is not connected. roscore is indeed running on 192.168.1.3:11311, but the workstation is unable to find it. Looking forward to your suggestions. Thanks in advance. Originally posted by SaiHV on ROS Answers with karma: 275 on 2013-03-09 Post score: 0 Original comments Comment by Jon Stephan on 2013-03-09: The master is usually port 11311, are you sure yours is 11611? Comment by SaiHV on 2013-03-09: Sorry, that was a typo. I typed this from Windows. Corrected. Comment by bit-pirate on 2013-03-14: Does Windows not support "3" any more? :-D Comment by SaiHV on 2013-03-14: No, I meant I forgot what the port was as I was in Windows and couldn't run roscore to check. Answer: Hello, As I mentioned, I did a clean install of linux on the turtlebot and installed ROS Groovy and the turtlebot apps, and the setup started working. Thanks for your comments. Originally posted by SaiHV with karma: 275 on 2013-03-14 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Chik on 2013-03-14: congratulations
{ "domain": "robotics.stackexchange", "id": 13275, "tags": "ros, turtlebot" }
A question from Quantum Scattering theory Sakurai
Question: This is a question from scattering theory - the solution for an incoming plane wave and outgoing scattered wave: In Sakurai's Modern Quantum Mechanics, I encountered the following line: Furthermore, because of angular momentum conservation, this must hold for each partial wave separately. In other words, the coefficients of $e^{ikr}/r$ must be the same in magnitude as the coefficients of $e^{-ikr}/r$. Why is that? In particular, how does it mathematically follow from $\int_S \textbf{j}\cdot d\textbf{S}=0$? Answer: I think you are speaking about elastic scattering of a plane wave off a potential. The first point is that you must follow the conservation laws. Energy and momentum are obvious. We know that angular momentum is also conserved, but here it is used for a trick. The plane wave actually contains all angular momenta, so you decompose the plane wave into a sum of partial waves by $L$. Since your system is elastic scattering (no source, no sink), you should have $\int_S \textbf{j} \cdot d\textbf{S} = 0$. That means that what comes IN ($e^{-ikr}/r$) should also go OUT ($e^{ikr}/r$). And it should work partial wave by partial wave... there is no way to change $L$ in an isolated system. And from here you get the rules for the coefficients - the same magnitudes, so that the squares of the coefficients are the same.
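A sketch of the flux computation (my notation, not Sakurai's): write the large-$r$ form of the $\ell$-th partial wave and push the probability current through a large sphere,

```latex
% Large-r radial form of the l-th partial wave (A = incoming, B = outgoing):
\psi_\ell(r,\theta) \;\sim\; \left( A_\ell\,\frac{e^{-ikr}}{r} + B_\ell\,\frac{e^{ikr}}{r} \right) P_\ell(\cos\theta)

% Probability current through a large sphere S (N_l > 0 is the angular
% normalization from integrating |P_l|^2 over solid angle):
\oint_S \mathbf{j}\cdot d\mathbf{S}
  \;=\; \frac{\hbar}{m}\,\operatorname{Im}\oint r^2\, d\Omega\; \psi^*\,\partial_r\psi
  \;=\; \sum_\ell \frac{\hbar k}{m}\left(|B_\ell|^2 - |A_\ell|^2\right) N_\ell

% Cross terms between different l drop out by orthogonality of the P_l, and
% because a central potential cannot mix different l channels (angular
% momentum conservation), the balance must hold channel by channel:
\oint_S \mathbf{j}\cdot d\mathbf{S} = 0
  \;\Longrightarrow\; |A_\ell| = |B_\ell| \quad \text{for every } \ell .
```

The total flux vanishing alone would only constrain the sum; it is the decoupling of the $\ell$ channels that promotes it to a per-partial-wave statement, which is exactly the step Sakurai attributes to angular momentum conservation.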
{ "domain": "physics.stackexchange", "id": 37648, "tags": "quantum-mechanics, scattering" }
Kinetic mixing, and a bare mass?
Question: I've been reading the following classic paper by Bob Holdom "Two $U(1)$s and $\epsilon$ charge shifts", and I'm attempting to derive the expression for $\chi$. In particular, I am computing the following one-loop vacuum polarization diagram $$ \Pi_{12}^{\mu\nu}(p) = e_1 e_2\left(\int d^4 k\,\operatorname{Tr}\!\left[\gamma^\mu\frac{i(\gamma^\rho k_\rho + m_1)}{k^2 - m_1^2}\gamma^\nu\frac{i(\gamma^\sigma (k+p)_\sigma + m_1)}{(k+p)^2 - m_1^2}\right] - (1\leftrightarrow2)\right). $$ One typically computes such diagrams in general dimension $d$ and then analytically continues to $d = 4$. However, in this special case where there are two fermions with the special charge assignments $(e_1,e_2)$ and $(e_1,-e_2)$ as in 1, the divergences cancel, and the diagram can be computed without regularization. The result is (up to numerical factors) $$ \Pi_{12}^{\mu\nu}(p)\sim (m_1^2 - m_2^2) g^{\mu\nu} + (p^\mu p^\nu - p^2 g^{\mu\nu})\Pi_{12}(p). $$ This appears to violate gauge invariance. However, I think this may be because it is inconsistent to not use a regulator for this particular diagram, even though the result is exactly finite, since other diagrams do require a regulator. And indeed, if I compute $\Pi_{12}^{\mu\nu}(p)$ in general dimension $d$, then the mass term goes away, but this feels sketchy to me... Is my reasoning sensible? Answer: You are evaluating two integrals of the form $$ A\sim \int \frac{1}{k^2},\qquad B=A_{1\leftrightarrow2} $$ In the end you are interested in $A-B$. As you correctly point out, the difference of the integrands is integrable, so it seems that $A-B$ requires no regularization. This is not actually correct: the two terms are individually divergent, so $A-B$ is actually an $\infty-\infty$ indeterminate. The only way to evaluate this difference meaningfully is to regulate: with the regulator in place, both integrals are convergent, and their difference is well-defined. Next, introduce counter-terms, renormalize, etc., and remove the regulator at the very end.
If you do the algebra correctly, you'll see that the end-result does not agree with your initial, naive computation of $A-B$ where you didn't introduce a regulator. The correct result is, happily, gauge-invariant. The incorrect result is not, but of course who cares about wrong results.
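A one-dimensional caricature (my own example, not from the paper) of why the $\infty - \infty$ difference needs a regulator:

```latex
% Regulate both logarithmically divergent integrals with the same cutoff:
A - B \;=\; \int_0^\Lambda \frac{dk}{k+m_1} - \int_0^\Lambda \frac{dk}{k+m_2}
      \;=\; \ln\frac{\Lambda+m_1}{m_1} - \ln\frac{\Lambda+m_2}{m_2}
      \;\xrightarrow{\;\Lambda\to\infty\;}\; \ln\frac{m_2}{m_1}

% But shift k -> k + a in the first integrand before subtracting (harmless for
% convergent integrals) and the "same" difference gives
\int_0^\Lambda \frac{dk}{k+a+m_1} - \int_0^\Lambda \frac{dk}{k+m_2}
      \;\xrightarrow{\;\Lambda\to\infty\;}\; \ln\frac{m_2}{m_1+a}
```

The naive answer depends on how the two divergent pieces are lined up before subtracting; the spurious $(m_1^2 - m_2^2)\,g^{\mu\nu}$ term in the unregulated vacuum polarization arises the same way, since loop-momentum shifts are only legitimate once a regulator is in place.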
{ "domain": "physics.stackexchange", "id": 97896, "tags": "quantum-field-theory, photons, renormalization, regularization, self-energy" }
LeetCode: Merge k sorted lists C#
Question: https://leetcode.com/problems/merge-k-sorted-lists/ Merge k sorted linked lists and return it as one sorted list. Analyze and describe its complexity. Example: Input: [ 1->4->5, 1->3->4, 2->6 ] Output: 1->1->2->3->4->4->5->6 using System.Collections.Generic; using Heap; using Microsoft.VisualStudio.TestTools.UnitTesting; namespace LinkedListQuestions {/// <summary> /// https://leetcode.com/problems/merge-k-sorted-lists/ /// </summary> [TestClass] public class MergeKSortedLists { [TestMethod] public void LeetCodeExample() { ListNode l1 = new ListNode(1); l1.next = new ListNode(4); l1.next.next = new ListNode(5); ListNode l2 = new ListNode(1); l2.next = new ListNode(3); l2.next.next = new ListNode(4); ListNode l3 = new ListNode(2); l3.next = new ListNode(6); ListNode[] arr = new ListNode[] { l1, l2, l3 }; ListNode res = MergeKSortedListsTest.MergeKLists(arr); } } public class MergeKSortedListsTest { public static ListNode MergeKLists(ListNode[] lists) { if (lists == null || lists.Length == 0) { return null; } MinHeap<ListNode> heap = new MinHeap<ListNode>(new ListNodeComparer()); ListNode dummy = new ListNode(0); ListNode tail = dummy; foreach (ListNode node in lists) { if (node != null) { heap.Add(node); } } while (heap.Count > 0) { tail.next = heap.ExtractDominating(); tail = tail.next; if (tail.next != null) { heap.Add(tail.next); } } return dummy.next; } public class ListNodeComparer : Comparer<ListNode> { public override int Compare(ListNode o1, ListNode o2) { if (o1.val < o2.val) return -1; else if (o1.val == o2.val) return 0; else return 1; } } } } you do not need to review the MinHeap code! 
public abstract class Heap<T> : IEnumerable<T> { //capacity of the Queue private const int InitialCapacity = 0; private int _capacity = InitialCapacity; //items in the queue private T[] _heap = new T[InitialCapacity]; //last item in the queue private int _tail = 0; //when growing the queue you multiply by Growfactor private const int GrowFactor = 2; // if the min size is 0 you grow the queue size by at least Min grow private const int MinGrow = 1; //how to compare the keys protected Comparer<T> Comparer { get; private set; } //store how many Items are in the queue public int Count { get { return _tail; } } //shows the current capacity of the queue public int Capacity { get { return _capacity; } } //this function is used in BubbleUp and in GetDominating protected abstract bool Dominates(T x, T y); protected Heap() : this(Comparer<T>.Default) { } protected Heap(Comparer<T> comparer) : this(Enumerable.Empty<T>(), comparer) { } protected Heap(IEnumerable<T> collection) : this(collection, Comparer<T>.Default) { } protected Heap(IEnumerable<T> collection, Comparer<T> comparer) { if (collection == null) { throw new ArgumentNullException("collection"); } if (comparer == null) { throw new ArgumentNullException("comparer"); } Comparer = comparer; foreach (var item in collection) { if (Count > Capacity) { Grow(); } _heap[_tail++] = item; } for (int i = 0; i < Parent(_tail - 1); i++) { BubbleDown(i); } } public void Add(T item) { if (Count == Capacity) { Grow(); } _heap[_tail++] = item; BubbleUp(_tail - 1); } //when adding a new item we bubble the item from tail-1 up to its' //correct position private void BubbleUp(int i) { if (i == 0 || Dominates(_heap[Parent(i)], _heap[i])) { return; //correct domination (or root) } Swap(i, Parent(i)); BubbleUp(Parent(i)); } // when adding new items into the queue from the CTOR // we bubble them down private void BubbleDown(int i) { int dominatingNode = Dominating(i); if (dominatingNode == i) { return; } Swap(i,dominatingNode); 
BubbleDown(dominatingNode); } public T GetMin() { if (Count == 0) { throw new InvalidOperationException("Heap is empty"); } return _heap[0]; } private void Swap(int i, int j) { T tmp = _heap[i]; _heap[i] = _heap[j]; _heap[j] = tmp; } public static int Parent(int i) { return (i + 1) / 2 - 1; } private void Grow() { int newCapcity = _capacity * GrowFactor + MinGrow; var newHeap = new T[newCapcity]; Array.Copy(_heap, newHeap, _capacity); _heap = newHeap; _capacity = newCapcity; } //return from 0 up to Count public IEnumerator<T> GetEnumerator() { return _heap.Take(Count).GetEnumerator(); } IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); } //when we bubble down, when building the queue from a list in the CTOR // we need to understand who is dominating i, and we swap them, // and call bubbleDown again private int Dominating(int i) { int dominatingNode = i; dominatingNode = GetDominating(YoungChild(i), dominatingNode); dominatingNode = GetDominating(OldChild(i), dominatingNode); return dominatingNode; } private int GetDominating(int newNode, int dominatingNode) { if (newNode < _tail && !Dominates(_heap[dominatingNode], _heap[newNode])) { return newNode; } else { return dominatingNode; } } private static int YoungChild(int i) { return 2 * (i + 1) - 1; } private static int OldChild(int i) { return YoungChild(i) + 1; } public T ExtractDominating() { if (Count == 0) throw new InvalidOperationException("Heap is empty"); T ret = _heap[0]; _tail--; Swap(_tail, 0); BubbleDown(0); return ret; } } public class MinHeap<T> : Heap<T> { public MinHeap() : this(Comparer<T>.Default) { } public MinHeap(Comparer<T> comparer) : base(comparer) { } public MinHeap(IEnumerable<T> collection) : base(collection) { } public MinHeap(IEnumerable<T> collection, Comparer<T> comparer) : base(collection, comparer) { } protected override bool Dominates(T x, T y) { return Comparer.Compare(x, y) <= 0; } } Answer: One thing I don't like is, that you merge "in place" - that is: the input 
linked lists change as a side effect. I would expect them to be untouched by the method. Consider making a new linked list as the result. As a micro-optimization you could probably save a couple of ticks, if the input lists contain a lot of duplicate values, by iterating to the first node with a greater value in the second loop: while (heap.Count > 0) { ListNode node = heap.ExtractDominating(); tail.next = node; while (node.next != null && node.val == node.next.val) { node = node.next; } if (node.next != null) { heap.Add(node.next); } tail = node; }
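The same duplicate-skipping idea can be sketched in Python (a hedged translation of the suggestion, not the original C# — the `ListNode` shape and the `merge_unique` name are illustrative), which also builds a fresh result list so the input lists stay untouched:

```python
import heapq

class ListNode:
    def __init__(self, val, next=None):
        self.val, self.next = val, next

def merge_unique(lists):
    """Merge sorted linked lists into a NEW list, skipping duplicate values."""
    # (value, index, node): the index breaks ties so nodes never get compared.
    heap = [(n.val, i, n) for i, n in enumerate(lists) if n]
    heapq.heapify(heap)
    dummy = tail = ListNode(None)
    while heap:
        val, i, node = heapq.heappop(heap)
        # Skip successive nodes carrying the same value before re-adding.
        nxt = node.next
        while nxt is not None and nxt.val == val:
            nxt = nxt.next
        if nxt is not None:
            heapq.heappush(heap, (nxt.val, i, nxt))
        if tail.val != val:              # also drop duplicates across lists
            tail.next = ListNode(val)    # fresh node: inputs are untouched
            tail = tail.next
    return dummy.next
```

Because the result is built from fresh nodes, the caller's lists can be reused afterwards, which addresses the "in place" complaint above.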
{ "domain": "codereview.stackexchange", "id": 35931, "tags": "c#, programming-challenge, linked-list, mergesort, heap" }
What is in the space between neurons in a brain?
Question: When neuron animations are displayed, you frequently see neurons and axons arranged in a lattice with a lot of empty space between them. I'm interested in whether there is indeed empty space in the brain, or if it is filled with some sort of fluid. I've checked an article on cerebrospinal fluid but am not sure that it is present all throughout the brain. The reason I'm asking is that I'm thinking of neurotransmitters - they are released in synapses, but I'm not sure how they stay there - are they suspended in some liquid as well? Answer: Not so empty, actually. The human brain has a mass of ~1.5 kg and a volume of ~1200 cc (a little bigger for men, a little smaller for women), so it is heavier than water by a good margin. While it has cerebrospinal fluid, that only occupies the subarachnoid space (the space below the skull and above the cortex, contained between two layers: the pia mater and the arachnoid membrane) and the ventricular system (several spaces inside the brain, remnants of the embryological development of the brain). Neuron density may vary widely, depending mainly on the particular characteristics of neuron cell types and their interconnections. But besides neurons, there's a lot of infrastructure inside the brain. For example: Astroglia: a type of glial cell which participates in the formation of the blood-brain barrier (supporting the endothelial cells), the nourishing of neurons, and the maintenance of ion and neurotransmitter concentrations, among others. They also keep most of the tissue in place. Microglia: small cells with immune (phagocytic) functions inside the brain. Radial glia: a more specialized precursor cell that also participates in neuronal migration in the brain. Oligodendrocytes: cells responsible for the insulation (myelination) of axons. Neuroepithelial cells: the stem cells in the brain.
Neuroglia, which includes the first four cell types above, accounts for ~90% of tissue in human brain (http://classes.biology.ucsd.edu/bipn140.WI13/documents/Gliamorethanjustbrainglue.pdf). Also, when you collect some billions of axons, they can be quite representative in terms of mass and volume. The ratio between white matter and gray matter is close to one for humans, but lower for smaller mammals (http://www.pnas.org/content/97/10/5621.full.pdf). About the last part of your question, the synaptic cleft (the space between pre-synaptic and post-synaptic neurons) is a salty solution: water with high concentration of sodium and chloride ions (and also calcium ions, neurotransmitters and a lot more). These ion concentrations are fundamental to the generation of action potentials, neural signaling, and the general dynamics of the brain.
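As a quick sanity check on the figures quoted above, the mass and volume numbers do put the brain's average density above that of water:

```python
# Back-of-the-envelope density check using the figures quoted in the answer.
mass_g = 1500      # ~1.5 kg
volume_cc = 1200   # ~1200 cc
density = mass_g / volume_cc   # g/cc; water is 1.00 g/cc
print(f"average brain density ~ {density:.2f} g/cc")  # ~1.25 g/cc
```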
{ "domain": "biology.stackexchange", "id": 11854, "tags": "neuroscience, brain" }
How do you determine the charge of the metal ion in a single replacement reaction?
Question: Say we have a simple single replacement reaction between a salt and a metal $$\ce{2 AgNO3 + Cu -> Cu(NO3)2 + 2 Ag}$$ We know that in normal circumstances silver always has a $+1$ charge, while copper typically varies between $+1$ and $+2$. Now I have been told that in copper nitrate, the copper ion will (almost?) always be in a $+2$ state, but this raises the question of how we would know that. Why can't the reaction be $$\ce{AgNO3 + Cu -> CuNO3 + Ag}$$ instead? Say we were to have a reaction between silver sulfate and copper. What charge would the copper take on? How do we know? Answer: Say we were to have a reaction between silver sulfate and copper. What charge would the copper take on? How do we know? Silver sulfate is not very water-soluble. Let us say that we use a generic soluble Ag(I) salt and we wish to know what the resulting charge on the copper ion will be after the reaction. If we were discovering/studying this reaction for the first time, we might do the following: The first is the purely observational way: we make a solution of a silver salt in water and place a copper plate in the solution. After some time, the solution turns blue. Using the previous knowledge that Cu(II) solutions in water are typically blue, one can make an educated guess that the resulting ion is Cu(II), not Cu(I). Cu(I) is colorless. The second option is the purely analytical way: we let copper and silver nitrate react completely (i.e., with excess copper), and after the reaction, we physically separate the silver particles and the remaining copper by filtration. We evaporate the solution and analyze the blue crystals chemically. The resulting formula after analysis will suggest that the copper must be in the Cu(II) form; otherwise the elemental analysis data would not agree with a Cu(I) salt. The third approach is the electrochemical one, with which you might not yet be familiar: one can start with half-cells and look at electrode potential values.
The question is, if Cu is given a choice to react with silver ions in water, what will be the most favorable product? The Ag(I)/Ag half-cell has an electrode potential of +0.80 V, the Cu(I)/Cu half-cell +0.52 V, and the Cu(II)/Cu half-cell +0.34 V. Which half-cell of copper will give a larger positive potential difference with the silver half-cell? It is the Cu(II)/Cu half-cell, which means that if Cu(s) reacts with Ag(I), the thermodynamically favored product will be Cu(II), not Cu(I).
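The electrode-potential comparison can be written out as a small calculation (the potentials are the values quoted in the answer; the helper name is illustrative):

```python
# Standard reduction potentials quoted in the answer, in volts.
E = {"Ag+/Ag": 0.80, "Cu+/Cu": 0.52, "Cu2+/Cu": 0.34}

def cell_potential(cathode, anode):
    """E_cell = E_cathode - E_anode for a galvanic cell."""
    return E[cathode] - E[anode]

# Copper is oxidised (anode), Ag+ is reduced (cathode).
to_cu1 = cell_potential("Ag+/Ag", "Cu+/Cu")    # Cu -> Cu(I):  0.28 V
to_cu2 = cell_potential("Ag+/Ag", "Cu2+/Cu")   # Cu -> Cu(II): 0.46 V
```

The larger positive cell potential for the Cu(II) couple is what makes Cu(II) the thermodynamically favored product.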
{ "domain": "chemistry.stackexchange", "id": 16665, "tags": "redox, transition-metals" }
In speech recognition applications, how are different voice amplitudes handled? (i.e loud vs. quiet tones)
Question: I am doing some research related to real-time voice/hotword recognition engines. In most current implementations, input is divided into frames (overlapping or not), and audio features are extracted per frame (most common being MFCCs) and fed into a Hidden Markov Model or a Neural Network of sorts. Most papers I read address issues such as noise removal/reduction (using methods like Cepstral Mean Normalization), however I couldn't find any mention of how different voice amplitudes are handled. For example, I can train an engine to recognize my voice when I speak normally, however if I change my volume (speak louder), then the extracted features would look different (same shape but larger magnitude). Since this is a real-time system, I am not sure how real-time normalization is even possible, or whether it should be applied on the voice samples, or the extracted features. Or perhaps it is solved by training the system on variations in the speaker's volume? Your help is greatly appreciated. Answer: Some of the channel effects can indeed be removed by doing Cepstral Mean Subtraction/Normalization. Nevertheless that generally applies only to "convolutive" distortions that are constant. Additive distortions, e.g. white noise or babble noise, usually cannot be removed via CMS. But like you said, this topic is handled via different noise-removal methods. As for the immunity of MFCCs with respect to different audio levels, you can achieve it easily by taking the appropriate coefficients. Generally the first MFCC coefficient is obtained by fitting the constant value curve ($\cos(0)$) to your log-energy filter banks. Therefore it is highly correlated to the RMS energy of your signal. If you remove that coefficient (often called 'static') then in theory you make your model volume (gain) independent. The remaining coefficients are not really related to the energy of your signal.
Usually researchers drop the first coefficient, but add its first and second derivatives as the $\Delta$ and $\Delta\Delta$ features. Obviously there is more to it than that. For example, the Lombard effect might take place, which obviously changes the envelope of your signal.
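The gain-independence claim is easy to verify numerically: scaling a signal by a gain multiplies its power spectrum by the gain squared, which adds a constant to the log filter-bank energies, and a DCT maps a constant vector entirely into coefficient 0. A sketch with a hand-rolled DCT-II (the filter-bank values are fake, illustrative numbers):

```python
import numpy as np

def dct2(x):
    """Plain (unnormalised) DCT-II -- the transform applied to
    log filter-bank energies to obtain cepstral coefficients."""
    n = len(x)
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    return (np.cos(np.pi * k * (2 * i + 1) / (2 * n)) * x).sum(axis=1)

rng = np.random.default_rng(0)
log_fb = rng.uniform(-5.0, 5.0, 26)        # fake log filter-bank energies
gain = 10.0                                # signal amplitude scaled by 10
log_fb_loud = log_fb + 2 * np.log(gain)    # power scales by gain**2

c_quiet, c_loud = dct2(log_fb), dct2(log_fb_loud)
# Only coefficient 0 (the "static" one) changes with the gain.
```

Dropping `c[0]` therefore removes the gain dependence without touching the spectral-shape information in the remaining coefficients.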
{ "domain": "dsp.stackexchange", "id": 2931, "tags": "speech-recognition, real-time, amplitude, normalization" }
How to correctly link opencl with ros?
Question: I have just installed OpenCL on my Ubuntu 18 system based on this tutorial. I also found a similar question here, but I didn't understand it properly. Here are the folders inside the workspace: Well, I tried the following, but only the opencl_headers line does not create any error. include_directories(~/intel-compute-runtime/workspace/opencl_headers/CL) target_link_libraries(~/intel-compute-runtime/workspace/build_igc/BiFModule/clang_build/install/lib) Originally posted by dinesh on ROS Answers with karma: 932 on 2019-01-08 Post score: 0 Answer: This is working for me: include_directories( # include ${catkin_INCLUDE_DIRS} /usr/include/CL/ ) add_executable(save_pattern src/save_pattern.cpp) target_link_libraries(save_pattern ${catkin_LIBRARIES} /usr/lib/x86_64-linux-gnu/libOpenCL.so) Originally posted by dinesh with karma: 932 on 2019-01-11 This answer was ACCEPTED on the original site Post score: 0
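As a possibly more portable alternative to the hard-coded paths, CMake ships a FindOpenCL module (available since CMake 3.1) that locates the headers and library itself and exposes them as `OpenCL_INCLUDE_DIRS` and `OpenCL_LIBRARIES` — a hedged sketch, not tested against this particular workspace:

```cmake
find_package(OpenCL REQUIRED)

include_directories(
  ${catkin_INCLUDE_DIRS}
  ${OpenCL_INCLUDE_DIRS}
)

add_executable(save_pattern src/save_pattern.cpp)
target_link_libraries(save_pattern
  ${catkin_LIBRARIES}
  ${OpenCL_LIBRARIES}
)
```

This avoids baking an absolute `/usr/lib/x86_64-linux-gnu/libOpenCL.so` path into the build, so it should keep working if the OpenCL runtime is installed elsewhere.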
{ "domain": "robotics.stackexchange", "id": 32246, "tags": "ros-melodic, cmake" }
Where does the extra kinetic energy of the rocket come from?
Question: Consider a rocket in deep space with no external forces. Using the formula for linear kinetic energy $$\text{KE} = mv^2/2$$ we find that adding $100\ \text{m/s}$ while initially travelling at $1000\ \text{m/s}$ will add a great deal more energy to the ship than adding $100 \ \text{m/s}$ while initially at rest: $$(1100^2 - 1000^2) \frac{m}{2} \gg (100^2) \frac{m}{2}.$$ In both cases, the $\Delta v$ is the same, and is dependent on the mass of fuel used, hence the same mass and number of molecules is used in the combustion process to obtain this $\Delta v$. So I'd wager the same quantity of chemical energy is converted to kinetic energy, yet I'm left with this seemingly unexplained $200,000\ \text{J/kg}$ more energy, and I'm clueless as to where it could have come from. Answer: You've noted that at high velocities, a tiny change in velocity can cause a huge change in kinetic energy. And that means that the thrust due to burning fuel seems to be able to contribute an arbitrarily high amount of energy, possibly exceeding the chemical energy of the fuel itself. The resolution is that all of this logic applies to the fuel too! When the fuel is exhausted, it loses much of its speed, so the kinetic energy of the fuel decreases a lot. The extra kinetic energy of the rocket comes from this extra contribution, which can be arbitrarily large. Of course, the kinetic energy of the fuel didn't come from nowhere. If you don't use gravity wells, that energy came from the fuel you burned previously, which was used to speed up both the rocket and all the fuel inside it. So everything works out -- you don't get anything for free. For those that want more detail, this is called the Oberth effect, and we can do a quick calculation to confirm it. Suppose the fuel is ejected from the rocket with relative velocity $u$, a mass $m$ of fuel is ejected, and the rest of the rocket has mass $M$. By conservation of momentum, the velocity of the rocket will increase by $(m/M) u$. 
Now suppose the rocket initially has velocity $v$. The change in kinetic energy of the fuel is $$\Delta K_{\text{fuel}} = \frac12 m (v-u)^2 - \frac12 mv^2 = \frac12 mu^2 - muv.$$ The change in kinetic energy of the rocket is $$\Delta K_{\text{rocket}} = \frac12 M \left(v + \frac{m}{M} u \right)^2 - \frac12 M v^2 = \frac12 \frac{m^2}{M} u^2 + muv.$$ The sum of these two must be the total chemical energy released, which shouldn't depend on $v$. And indeed, the extra $muv$ term in $\Delta K_{\text{rocket}}$ is exactly canceled by the $-muv$ term in $\Delta K_{\text{fuel}}$. Sometimes this problem is posed with a car instead of a rocket. To understand this case, note that cars only move forward because of friction forces with the ground; all that a car engine does is rotate the wheels to produce this friction force. In other words, while rockets go forward by pushing rocket fuel backwards, cars go forward by pushing the Earth backwards. In a frame where the Earth is initially stationary, the energy associated with giving the Earth a tiny speed is negligible, because the Earth is heavy and energy is quadratic in speed. Once you switch to a frame where the Earth is moving, slowing the Earth down by the same amount harvests a huge amount of energy, again because energy is quadratic in speed. That's where the extra energy of the car comes from. More precisely, the same calculation as above goes through, but we need to replace the word "fuel" with "Earth". The takeaway is that kinetic energy differs between frames, changes in kinetic energy differ between frames, and even the direction of energy transfer differs between frames. It all still works out, but you must be careful to include all contributions to the energy.
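The bookkeeping above is easy to check numerically: the rocket's kinetic-energy gain grows with the initial speed $v$, but the total released (rocket plus exhaust) does not depend on $v$. A small sketch (the masses and exhaust speed are arbitrary illustrative values):

```python
def energy_budget(v, M=1000.0, m=1.0, u=3000.0):
    """KE changes of the rocket and the ejected fuel for one burn
    starting at speed v. Momentum conservation gives the rocket a
    speed gain of dv = (m/M)*u. (M, m, u are illustrative values.)"""
    dv = (m / M) * u
    dK_rocket = 0.5 * M * (v + dv) ** 2 - 0.5 * M * v ** 2
    dK_fuel = 0.5 * m * (v - u) ** 2 - 0.5 * m * v ** 2
    return dK_rocket, dK_fuel
```

Evaluating at two different initial speeds shows the rocket gaining far more energy at high $v$ while the total stays fixed, exactly as the $\pm muv$ cancellation predicts.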
{ "domain": "physics.stackexchange", "id": 100375, "tags": "newtonian-mechanics, energy-conservation, conservation-laws, rocket-science, propulsion" }
Attaching objects to the robot's body in diamondback
Question: Hello everybody, I followed this tutorial: Attaching objects to the robot's body I just wanted to attach a cylinder to the end effector of my robot arm to avoid collisions between the environment and the object carried by the robot... but it does not work... ros::Publisher att_object_in_map_pub = nh.advertise<mapping_msgs::AttachedCollisionObject>("attached_collision_object", 10); mapping_msgs::AttachedCollisionObject att_object; att_object.link_name = "link_name"; att_object.object.id = "box"; att_object.object.operation.operation = mapping_msgs::CollisionObjectOperation::ADD; att_object.object.header.frame_id = "link_name"; att_object.touch_links.push_back("contact_link"); att_object.object.header.stamp = ros::Time::now(); geometric_shapes_msgs::Shape object; object.type = geometric_shapes_msgs::Shape::CYLINDER; object.dimensions.resize(2); object.dimensions[0] = radius*1.2; object.dimensions[1] = hight*1.2; geometry_msgs::Pose pose; pose.position.x = 0.0; pose.position.y = 0.0; pose.position.z = object.dimensions[1]/2.0; pose.orientation.x = 0; pose.orientation.y = 0; pose.orientation.z = 0; pose.orientation.w = 1; att_object.object.shapes.push_back(object); att_object.object.poses.push_back(pose); att_object_in_map_pub.publish(att_object); The arm and the collision-free path planning work fine, but the attached object is ignored... Can anyone tell me which "Display Type" I have to add in rviz to get the attached object displayed? Which "Display Type" is "Known objects" in the tutorial? Any ideas what could cause my problem? Are there any visionaries in the community? Update: I also followed this tutorial: Adding known objects to the collision environment Worked!!! but of course in this case the object is not attached to the arm... I also solved my problem with the "DisplayType"...
Type: Markers, Topic: /collision_model_markers/environment_server This worked for the tutorial "Adding known objects to the collision environment" I hope it's the same for the other one... Update 2: @ egiljones The link names are different for my robot. The code above is part of a function and the link names are given as arguments. I don't think that this is problem... Anyway, thx for the answer... Originally posted by lechn_do on ROS Answers with karma: 1 on 2012-02-09 Post score: 0 Answer: Is there an actual link on your robot called "link_name"? If not, this code fragment will not actually do anything - you must specify a valid link that's associated with your robot to have the AttachedObject take effect. This link need not actually touch the object, but when you update the position of that link it will also update the position of any attached bodies. Add to the 'touch_links' any additional links that it's ok for the attached object to contact - if you don't do this for all links that will be in constant contact with the object then planning won't work, as every state will be in collision. Originally posted by egiljones with karma: 2031 on 2012-02-29 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 8167, "tags": "ros, grasping, arm-navigation, manipulation, ros-diamondback" }
Python 3 code to generate simple crossword puzzles from a list of words/anagrams
Question: This code takes a list of words that all (pairwise) share at least one letter, e.g. words = ["forget", "fret", "for", "tort", "forge", "fore", "frog", "fort", "forte", "ogre"] and creates what I'm calling a "sparse crossword" puzzle that looks like this. -------FORT ----------O ------F-FOR ----FROGR-T ----F-R-E-- --OFORGET-- --G-R-E---- FORTE------ --E-------- I'd greatly appreciate any feedback on either the algorithm for inserting words into the puzzle (which results is a pretty "formulaic" grid) and the general structure/style of the Python code as well. import enum import itertools import math import numpy as np import random @enum.unique class Direction(enum.Enum): ACROSS = enum.auto() DOWN = enum.auto() def __str__(self): return("ACROSS" if self is Direction.ACROSS else "DOWN") def get_deltas(self): delta_r = int(self == Direction.DOWN) delta_c = int(self == Direction.ACROSS) return(delta_r, delta_c) @staticmethod def random(): return random.choice(list(Direction)) class GridWord: def __init__(self, word: str, r: int, c: int, direction: Direction): if not isinstance(word, str): raise TypeError("word must be a string") if not (isinstance(r, int) and isinstance(c, int) and r >= 0 and c >= 0): raise ValueError("Row and column positions must be positive integers") if not isinstance(direction, Direction): raise TypeError("Direction must be an enum of type Direction") self.word = word.upper() self.r1 = r self.c1 = c self.direction = direction self.delta_r, self.delta_c = self.direction.get_deltas() self.__len = len(self.word) self.r2 = self.r1 + (self.__len - 1)* self.delta_r self.c2 = self.c1 + (self.__len - 1)* self.delta_c def __str__(self): return(f"{self.word}, ({self.r1}, {self.c1}) -- ({self.r2}, {self.c2}), {self.direction}") def __len__(self): return(self.__len) def __contains__(self, item): if isinstance(item, str): # The left operand is a string return(item in self.word) elif isinstance(item, tuple) and len(item) == 2 and isinstance(item[0], 
int) and isinstance(item[1], int): # The left operand is a tuple that contains two integers, i.e. a # coordinate pair return(self.r1 <= item[0] and item[0] <= self.r2 and self.c1 <= item[1] and item[1] <= self.c2) else: raise TypeError("'in <GridWord>' requires string or coordinate pair as left operand") def __getitem__(self, item): try: return(self.word[item]) except: raise def intersects(self, other): if not isinstance(other, GridWord): raise TypeError("Intersection is only defined for two GridWords") if self.direction == other.direction: raise ValueError("Intersection is only defined for GridWords placed in different directions") for idx1, letter1 in enumerate(self.word): for idx2, letter2 in enumerate(other.word): rr1 = self.r1 + idx1*self.delta_r cc1 = self.c1 + idx1*self.delta_c rr2 = other.r1 + idx2*self.delta_c # because the direction is reversed cc2 = other.c1 + idx2*self.delta_r if letter1 == letter2 and rr1 == rr2 and cc1 == cc2: return(True) return(False) def overlaps(self, other): if not isinstance(other, GridWord): raise TypeError("Overlap check is only defined for two GridWords") if self.direction == other.direction: return((self.r1, self.c1) in other or (other.r1, other.c1) in self) for idx, letter in enumerate(self.word): rr = self.r1 + idx*self.delta_r cc = self.c1 + idx*self.delta_c if (rr, cc) in other: return(True) return(False) def adjacent_to(self, other): if not isinstance(other, GridWord): raise TypeError("Adjacency is only defined for two GridWords") if self.direction != other.direction: return(False) for delta in [-1, 1]: for idx in range(self.__len): r = self.r1 + idx*self.delta_r + delta*self.delta_c c = self.c1 + idx*self.delta_c + delta*self.delta_r if (r, c) in other: return(True) # (-1) point directly to the left of (or above) a word placed across # (or down) # # (1) point directly to the right of (or below) a word placed across # (or down) if delta == -1: r = self.r1 + delta * self.delta_r c = self.c1 + delta * self.delta_c elif 
delta == 1: r = self.r2 + delta * self.delta_r c = self.c2 + delta * self.delta_c if (r, c) in other: return(True) return(False) class Grid: def __init__(self, num_rows = 50, num_cols = 50): self.num_rows = num_rows self.num_cols = num_cols self.grid = np.full([self.num_rows, self.num_cols], "") self.grid_words = [] def __str__(self): s = "" for i in range(self.num_rows): for j in range(self.num_cols): s += self.grid[i][j] if self.grid[i][j] != "" else "-" s += "\n" return(s) def __approximate_center(self): center = (math.floor(self.num_rows / 2), math.floor(self.num_cols / 2)) return(center) def __insert_word(self, grid_word): if not isinstance(grid_word, GridWord): raise TypeError("Only GridWords can be inserted into the Grid") delta_r, delta_c = grid_word.direction.get_deltas() for idx, letter in enumerate(grid_word.word): self.grid[grid_word.r1 + idx*delta_r, grid_word.c1 + idx*delta_c] = letter self.grid_words.append(grid_word) def __word_fits(self, word: str, r: int, c: int, d: Direction): # Make sure we aren't inserting the word outside the grid if ((d == Direction.DOWN and r + len(word) >= self.num_rows) or (d == Direction.ACROSS and c + len(word) >= self.num_cols)): return(False) grid_word = GridWord(word, r, c, d) check = False for gw in self.grid_words: if grid_word.adjacent_to(gw): # If the word is adjacent to any other words in the grid, we can # exit right away because it doesn't fit return(False) if grid_word.overlaps(gw): if d == gw.direction: # If the word overlaps another word that is placed in the # same direction, we can exit right away return(False) elif not grid_word.intersects(gw): # If the word overlaps another word that is placed in the # other direction but DOESN'T intersect it (i.e. 
the overlap # doesn't happen on the same letter in each word), we can # exit right away return(False) else: check = True else: # If the word doesn't overlap the current word (already in the # grid) that's being checked, we don't know yet whether or not # we CAN or CANNOT place it on the grid pass return(check) def __scan_and_insert_word(self, word): if not isinstance(word, str): raise TypeError("Only strings can be inserted into the puzzle by scanning") if len(self.grid_words) == 0: self.__insert_word(GridWord(word, *self.__approximate_center(), Direction.random())) return(None) for d, r, c in itertools.product(list(Direction), range(self.num_rows), range(self.num_cols)): if self.__word_fits(word, r, c, d): grid_word = GridWord(word, r, c, d) self.__insert_word(grid_word) break def scan_and_insert_all_words(self, words): for word in words: self.__scan_and_insert_word(word) def __randomly_insert_word(self, word): if not isinstance(word, str): raise TypeError("Only strings can be randomly inserted into the puzzle") if len(self.grid_words) == 0: self.__insert_word(GridWord(word, *self.__approximate_center(), Direction.random())) return(None) num_iterations = 0 while num_iterations <= 10000: rand_r = random.randint(0, self.num_rows - 1) rand_c = random.randint(0, self.num_cols - 1) d = Direction.random() if self.__word_fits(word, rand_r, rand_c, d): grid_word = GridWord(word, rand_r, rand_c, d) self.__insert_word(grid_word) break num_iterations += 1 def crop(self): min_c = min([word.c1 for word in self.grid_words]) min_r = min([word.r1 for word in self.grid_words]) max_c = max([word.c2 for word in self.grid_words]) max_r = max([word.r2 for word in self.grid_words]) cropped_grid = Grid(max_r - min_r + 1, max_c - min_c + 1) for grid_word in self.grid_words: cropped_word = GridWord(grid_word.word, grid_word.r1 - min_r, grid_word.c1 - min_c, grid_word.direction) cropped_grid.__insert_word(cropped_word) return(cropped_grid) random.seed(1) words = ["forget", "fret", "for", 
"tort", "forge", "fore", "frog", "fort", "forte", "ogre"] g = Grid() g.scan_and_insert_all_words(words) print(g.crop()) Answer: Validation This: words = ["forget", "fret", "for", "tort", "forge", "fore", "frog", "fort", "forte", "ogre"] should probably receive some kind of validation to confirm that each word shares at least one letter. One simple way to do this is a set: iterate through each word, adding each letter of the word to the set. Then do a second iteration to ensure that the set intersects with each word. Enum name def __str__(self): return("ACROSS" if self is Direction.ACROSS else "DOWN") is not necessary. You should be able to simply do: def __str__(self): return self.name Implicit tuple This: return(delta_r, delta_c) does not require parens, nor does this: return(True) Dunders self.__len = len(self.word) Don't name this variable with two underscores - that usually has a special meaning. (The same applies to __word_fits.) Even so, you don't need this variable at all - just use len(self.word) in your __len__ method. Combined comparison self.r1 <= item[0] and item[0] <= self.r2 and self.c1 <= item[1] and item[1] <= self.c2 becomes self.r1 <= item[0] <= self.r2 and self.c1 <= item[1] <= self.c2 Don't no-op except Delete this try block, since it does nothing: try: return(self.word[item]) except: raise Coordinate nomenclature rr1 and cc1 and their ilk are probably better expressed as yy1 and xx1, etc. Overlap detection You have some long loops to detect spatial overlap. Instead, consider building up an index structure that is composed of nested lists. Indexing [y][x] into the list can get you an inner structure that contains all words at that location, and for each of them, the offset into the word. This will be fairly cheap memory-wise and will greatly improve your runtime. It will also make __word_fits much nicer. 
Or semantics s += self.grid[i][j] if self.grid[i][j] != "" else "-" can become s += self.grid[i][j] or '-' Grid.str Much ink has been spilled on the evils of successive immutable string concatenation. This is what StringIO is built for, so do that instead. Or, if you're feeling fancy, write a long, horrible '\n'.join(...) comprehension. Branch trimming if d == gw.direction: return(False) elif not grid_word.intersects(gw): return(False) becomes if d == gw.direction or not grid_word.intersects(gw): return False And delete this branch entirely (you can keep the comment of course): else: # If the word doesn't overlap the current word (already in the # grid) that's being checked, we don't know yet whether or not # we CAN or CANNOT place it on the grid pass
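A few of the suggested fixes, collected into a runnable sketch (the `render` helper is our illustrative stand-in for `Grid.__str__`, not the reviewer's code):

```python
import enum
import random

@enum.unique
class Direction(enum.Enum):
    ACROSS = enum.auto()
    DOWN = enum.auto()

    def __str__(self):
        return self.name            # no manual ACROSS/DOWN mapping needed

    def get_deltas(self):
        # (delta_row, delta_col): DOWN moves rows, ACROSS moves columns.
        return int(self is Direction.DOWN), int(self is Direction.ACROSS)

    @staticmethod
    def random():
        return random.choice(list(Direction))

def render(grid):
    """Join-based rendering: no successive string concatenation,
    and `cell or '-'` replaces the explicit != "" comparison."""
    return "\n".join("".join(cell or "-" for cell in row) for row in grid)
```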
{ "domain": "codereview.stackexchange", "id": 36395, "tags": "python, python-3.x" }
Location Selection Algorithm in Solar Engineering
Question: This is a practical problem in energy generation with heliostats. We have a number of heliostats basically forming the shape of a doughnut. The facility needs to deploy hubs on those heliostats. One hub can support 12 harnesses (a kind of cable) and one harness can support 16 heliostats; namely, one hub can support 192 heliostats. The cost of deploying a hub and the cost of harness per meter are given. In other words, every harness starts from a hub and can connect at most 16 heliostats. Deployment of a hub results in a cost. Also, the cost of a harness results from the distance from the hub to the furthest heliostat on this harness. However, there is no distance between a heliostat and its connecting harness. Currently my approach is to use k-means to cluster the whole region and run a greedy algorithm on distance from the harness for each split region, though I do not think this is a good approach. This is a pattern generated by my algorithm. As you can see, some harnesses only support 1 or 2 heliostats and result in a waste of harness. Our job is to optimize the cost. In the end, the algorithm should return the locations of hubs, the harness topology patterns, and the total cost. Can anyone suggest potential algorithms for this? (Every point on the image is a heliostat; there are 40981 in total) ----------- Updated 14/12/2016 ----------- More details on the question, thanks to D.W's suggestion: Coordinates of heliostats are given. These coordinates correspond to real-world metrics (meters). The placement of hubs is not constrained. However, one harness cannot be longer than 45 meters due to voltage requirements.
----------- Updated 15/12/2016 ----------- The clarification on "furthest" in "the cost of a harness results from the distance from the hub to the furthest heliostat on this harness": Furthest is in the sense of "last", not "furthest" in the sense of "having the largest Euclidean distance from the hub" Answer: The problem looks messy, so there probably isn't a clean algorithm to produce a globally optimal solution. Instead, I suggest you use some combination of heuristics to try to find a nearly-optimal solution in a reasonable amount of time, perhaps by iteratively making small incremental improvements. Any candidate solution comprises three elements: Location of the hubs. Assignment of each heliostat to a hub. Routing of the harnesses. One way to make an incremental improvement is to repeatedly fix two of these three and optimize the other (holding the other two fixed), and rotate through which one you change. You seem to have a reasonable strategy to obtain an initial candidate solution, and you could iterate from there. I'll sketch in more detail each of these three optimizations, which you could rotate through repeatedly until you cannot make any further improvement: Harness optimization. Hold the location of the hubs and the assignment of which hub each heliostat is associated with fixed (to be the same as in the current candidate solution). The goal is to optimize the routing of the harness wiring. For candidate algorithms for that, see the other question. My suspicion is that this step is the one where there are the greatest opportunities for improving your solution, and where it's worth devoting the most effort to. Hub location optimization. Hold the heliostat-to-hub assignments and harness routing fixed. Pick a single hub and the heliostats associated with it. Then it's easy to fine-tune the location of the hub to minimize overall cost, under the assumption that you don't change the topology/routing of the harnesses.
In particular, each harness contributes one edge from the hub $r$ to the heliostats $h_1,h_2,\dots,h_{12}$ it is immediately connected to. Now we want to find a location for $r$ that minimizes the sum of distances $d(r,h_1)+\dots + d(r,h_{12})$. This is the classic geometric-median (Fermat-Weber) problem, a convex problem that can be solved with standard convex optimization or simply with gradient descent; since we're in only 2 dimensions I would expect any method to converge rapidly. You can do this separately for each hub. It would also be possible to try a random perturbation to the location of the hub, re-apply harness optimization, see whether this reduced the overall cost, and if so accept that change to the hub location. This might be more expensive than it is worth. Heliostat assignment to hubs. Hold the location of hubs fixed, and assume we're not going to make radical changes to the harness topology. We can try to fine-tune the assignment of heliostats to hubs in a number of ways. One approach is to pick a heliostat $h$ that is currently assigned to hub $r_1$ but is relatively close to another heliostat that's wired to a different hub $r_2$, try swapping which hub it is associated with, and see if this leads to any reduction in cost. To determine whether it leads to a reduction in cost you could re-run harness optimization from scratch in both cases (which might be slow). Or, as a fast heuristic, you could remove $h$ from its current harness, find the nearest heliostat $h'$ that's currently connected to $r_2$, attach $h$ to $h'$ (splicing it into the harness for $h'$), and see whether this reduces the cost. Another approach would be to do a more ambitious optimization. Pick a pair of hubs $r_1,r_2$ and the set of heliostats currently connected to either of them. Now try to apply joint harness optimization to that set of heliostats and those two hubs simultaneously, jointly optimizing the routing of the harnesses and which hub each heliostat is connected to.
The methods in that other question can be generalized to solve this problem. When finding the solution we may find that one or two heliostats switch which hub they are connected to. Now repeat this for every pair of adjacent hubs. However, my guess is that heliostat re-assignment might not yield many gains (compared to the naive method of assigning each heliostat to the hub it is closest to), so it might not be worth implementing these more sophisticated methods. Overall algorithm. In summary, we start from some initial solution (e.g., selected using the method described in the question) and then repeatedly rotate through the following three operations: Harness optimization: fine-tune the routing of the harnesses. Hub location optimization: fine-tune the location of the hubs. Heliostat re-assignment: fine-tune the assignment of heliostats to hubs. There is no guarantee that this will lead to a global optimum, i.e., the absolute lowest cost possible; it could get stuck in a local minimum. There are various methods for dealing with that (e.g., applying random perturbations; re-starting from multiple randomly chosen initial solutions), which you could experiment with as well. Hopefully this will help you find a solution that is "good enough", or at least get you started on some methods to try. Good luck!
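The hub-location step can be made concrete with Weiszfeld's fixed-point iteration for the geometric median, which minimizes the sum of Euclidean distances from a hub to its directly connected heliostats. This is only a sketch; the point set below is made up for illustration:

```python
import math

def weiszfeld(points, iters=500, eps=1e-12):
    """Geometric median: the point minimizing the sum of Euclidean
    distances to `points`, via Weiszfeld's fixed-point iteration."""
    # start from the centroid
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        wsum = wx = wy = 0.0
        for px, py in points:
            d = math.hypot(x - px, y - py)
            if d < eps:           # iterate landed exactly on a data point
                return px, py
            w = 1.0 / d           # inverse-distance weights
            wsum += w
            wx += w * px
            wy += w * py
        x, y = wx / wsum, wy / wsum
    return x, y

# hypothetical harness: one hub serving four heliostats
heliostats = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
hub = weiszfeld(heliostats)
print(hub)  # close to (1.0, 1.0) by symmetry
```

Each Weiszfeld step is a weighted average of the heliostat positions, so the objective decreases monotonically; run it once per hub.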
{ "domain": "cs.stackexchange", "id": 7916, "tags": "algorithms, graphs, optimization" }
Do you break the metallic bond when you break a metal into two pieces?
Question: As we all know, metallic ions are surrounded by a sea of electrons. If we continually bend or stretch a metal, say, iron, it will break. Does that mean we break the metallic bond at the subatomic level? Answer: Yes, you are breaking metallic bonds. Similarly, when you crush a crystal of salt, you are breaking ionic bonds. There aren't that many good examples of breaking covalent bonds1, though breaking a polymer like nylon usually will do it. The "physical" breaking of bonds is not a very exotic thing. Remember that while the energy required to break every bond in a gram of substance is a lot, the energy required to break just a slice of those bonds isn't much. A very common question that arises is "what happens with the open valencies left behind when you physically break bonds?". Usually, gases from the atmosphere adsorb onto the metal surface, forming hydrogen bonds. A note: When you do this, there is nothing going on at the subatomic level, only at the atomic level. 1. This is not due to the strength of covalent bonds -- indeed, these are usually weaker than ionic bonds. This happens because most covalent substances are made up of smaller molecules (as opposed to macroscopic lattices) held together by hydrogen bonding/Van der Waals forces, and it is these bonds that break when you break a covalent substance.
{ "domain": "chemistry.stackexchange", "id": 589, "tags": "bond, metal" }
Simple ring/circular buffer C++ class
Question: I've got this simple ring/circular buffer class:

template<class T, size_t sz>
class CircularBuffer {
    std::array<T, sz> buffer;
    size_t head;
    size_t tail;
    bool isFull;

public:
    CircularBuffer() : head{0}, tail{0}, isFull{false} { }

    void put(T item) {
        buffer[head] = item;
        head = (head + 1) % sz;
        if (isFull) {
            tail = (tail + 1) % sz;
        }
        isFull = head == tail;
    }

    T get() {
        auto result = buffer[tail];
        tail = (tail + 1) % sz;
        isFull = false;
        return result;
    }

    bool empty() const { return tail == head; }

    size_t capacity() const { return sz; }

    size_t size() const {
        if (isFull) return sz;
        if (head >= tail) return head - tail;
        return sz + head - tail;
    }
};

And I was looking for clarification on a few things, to take advantage of C++ features. First, the new constexpr keyword: what here, if anything, should I apply it to? (I'm assuming the size_t size() const member function could use it? Anything else?) Second, all of these member functions are quite small; should they be inlined? Third, in the T get() member function I do auto result = buffer[tail];. Should I use auto& instead, or some other variant (or even just T/T&)? Should that be const, as it's not modified within the function and only potentially modified once a copy is returned via the function's return value? Any other feedback is welcome! Answer: Interface Naming Functions returning a bool should be phrased as a question. empty should be is_empty instead. Yes, the standard library does it wrong too, leading to confusion like "I used vector.empty();, but it didn't empty my vector. Why?" get should be pop or pop_get. Getters are not supposed to change the object. Note that it is impossible to write get with the strong exception guarantee, which is the reason why std::vector::pop_back returns void instead of the element. constexpr Currently you can mark all your functions constexpr. Sometimes it is possible to evaluate the result of your CircularBuffer at compile time.
That probably rarely comes up, but there is no good reason not to do it (yet). Generality Type restrictions There are limits for what Ts I can use your CircularBuffer with. T must be copyable and default constructible. That means I cannot use a struct Foo{ Foo(int); }; or a std::unique_ptr<int>. Arguably those should be allowed. Move-Only Supporting move-only types is possible by using std::move in the appropriate spots, mainly buffer[head] = std::move(item); and auto result = std::move(buffer[tail]);. Just try to use a CircularBuffer<std::unique_ptr<int>> and the compiler will tell you about each spot. Non-Default-Constructible To be able to use CircularBuffer<Foo> you would need to delay constructing objects until the user uses put. You can achieve that by changing std::array<T, sz> buffer; to alignas(alignof(T)) std::array<char, sz * sizeof(T)> buffer;. That way no Ts are default constructed. When you add an element in put you have to placement new the element: new (&buffer[head * sizeof(T)]) T(std::move(item));. get then has to call std::destroy_at(reinterpret_cast<T*>(&buffer[tail * sizeof(T)])); (or just call the destructor). This makes things more complicated and also reinterpret_cast and new are not constexpr. Brick Types Some types like std::mutex cannot be copied or moved, but you could still support them. To do that, offer an emplace function similar to std::vector::emplace_back that constructs the T in place from a given list of arguments. get Return Type Returning a T by value seems reasonable. You are taking out the element. Returning a T & instead seems dangerous, because usage of the buffer will eventually change the value you got. Maybe add 2 peek functions instead that return a reference to the current object without removing it. One of the functions would be T &peek() and the other const T &peek() const. 
Bugs empty When Full

CircularBuffer<int, 3> b;
b.put(1);
b.put(2);
b.put(3);
std::cout << std::boolalpha << b.empty();

That should really not print true. Over- and Underflow If I put more items into the buffer than it has space for, it silently overwrites objects. If I try to get items without putting items in, it simply returns uninitialized objects, which is undefined behavior for builtins. This is my fault for using your container incorrectly, but you could be nice and add an assert so that I can find my bug more easily. inline Your functions are already implicitly marked inline, which affects linkage and has nothing to do with inlining. Whether inlining is the right choice is a complicated case-by-case question that you should leave to your compiler. Only write inline explicitly for its linkage meaning (allowing the same definition to appear in multiple translation units, e.g. in a header), which since C++17 you can also do for variables.
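The empty-when-full bug is independent of the language. Here is a quick Python model of the same head/tail/isFull state machine (my own transliteration, not the reviewed C++) showing the corrected emptiness test, which must also consult the full flag:

```python
class RingBuffer:
    """Python model of the reviewed buffer's index arithmetic."""
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.cap = capacity
        self.head = self.tail = 0
        self.is_full = False

    def put(self, item):
        self.buf[self.head] = item
        self.head = (self.head + 1) % self.cap
        if self.is_full:                     # overwrite: drop the oldest
            self.tail = (self.tail + 1) % self.cap
        self.is_full = self.head == self.tail

    def pop(self):
        item = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.cap
        self.is_full = False
        return item

    def is_empty(self):
        # head == tail alone is ambiguous: it also holds when full.
        return not self.is_full and self.head == self.tail

b = RingBuffer(3)
for v in (1, 2, 3):
    b.put(v)
print(b.is_empty())  # False -- the original `tail == head` test would say True
```

The same one-line change (`!isFull && head == tail`) fixes the C++ `empty()` under review.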
{ "domain": "codereview.stackexchange", "id": 34471, "tags": "c++, circular-list" }
Determine whether three sides form a valid triangle, and classify the triangle
Question: This is my first Python program, and whilst I am new to Python I would like to keep to good practice. Below is a short program to work out what type a triangle is, and whether the lengths make a valid triangle. I have tried to use as little documentation for this as possible, to try to get used to how things work in Python, so I can only imagine the mistakes I have made. However, please do mention anything that should have been done better.

# Determine if triangle is Scalene, Isosceles or Equilateral
# Also works out if lengths can make a triangle
from decimal import *

getcontext().prec = 3
getcontext().rounding = ROUND_HALF_UP

# Needs to be divided to re-set decimal place I think
a = Decimal(input("Length of side a = ")) / 1
b = Decimal(input("Length of side b = ")) / 1
c = Decimal(input("Length of side c = ")) / 1

if a != b and b != c and a != c:
    print("This is a Scalene triangle")
    triangle_type = 'Scalene'
elif a == b and c == b:
    print("This is an Equilateral triangle")
    triangle_type = 'Equilateral'
else:
    print("This is an Isosceles triangle")
    triangle_type = 'Isosceles'

def is_valid_triangle(a, b, c, triangle_type):
    if triangle_type == 'Equilateral':
        return True  # all same lengths will be a valid triangle
    elif triangle_type == 'Isosceles' or triangle_type == 'Scalene':
        if a == b:
            return a + b > c
        elif b == c:
            return b + c > a
        elif a == c:
            return a + c > b
        else:  # This will be the scalene triangle
            return a + b > c
    else:
        # Message is unclear as could be lengths are negative or correct int type not used
        return False

print('Is this a valid triangle?', is_valid_triangle(a, b, c, triangle_type))

Answer: Add a __name__ == "__main__" guard, and move the logic into a function separate from the I/O:

def triangle_type(a, b, c):
    '''Return a string indicating the type of triangle
    (Equilateral, Isosceles, Scalene, Impossible)
    '''
    # implementation here...
def main():
    getcontext().prec = 3
    getcontext().rounding = ROUND_HALF_UP
    # Needs to be divided to re-set decimal place I think
    a = Decimal(input("Length of side a = ")) / 1
    b = Decimal(input("Length of side b = ")) / 1
    c = Decimal(input("Length of side c = ")) / 1
    print(f"This is a {triangle_type(a, b, c)} triangle")

if __name__ == "__main__":
    main()

In the implementation, we can save a lot of "or" tests by sorting the lengths before we start:

a, b, c = sorted([a, b, c])
if a + b <= c:  # N.B. automatically catches a < 0, since b <= c
    return 'Impossible'
if a != b != c:
    return 'Scalene'
elif a == c:
    return 'Equilateral'
else:
    return 'Isosceles'

Modified code

def triangle_type(a, b, c):
    '''
    Return a string indicating the type of triangle
    (Equilateral, Isosceles, Scalene, Impossible)
    '''
    a, b, c = sorted([a, b, c])
    if a + b <= c:
        return 'Impossible'
    if a != b != c:
        return 'Scalene'
    if a == c:
        return 'Equilateral'
    return 'Isosceles'

def main():
    a = input("Length of side a: ")
    b = input("Length of side b: ")
    c = input("Length of side c: ")
    print(f"({a}, {b}, {c}) is a {triangle_type(a, b, c)} triangle")

if __name__ == "__main__":
    main()

Further improvement Use the doctest module to write the tests:

def triangle_type(a, b, c):
    '''
    Return a string indicating the type of triangle
    (Equilateral, Isosceles, Scalene, Impossible)

    >>> triangle_type(1, 1, 2)
    'Impossible'
    >>> triangle_type(-1, -1, -1)
    'Impossible'
    >>> triangle_type(1, 1.0, 1)
    'Equilateral'
    >>> triangle_type(1, 2, 2)
    'Isosceles'
    >>> triangle_type(2, 3, 2)
    'Isosceles'
    >>> triangle_type(2, 3, 4)
    'Scalene'
    '''
    a, b, c = sorted([a, b, c])
    if a + b <= c:
        return 'Impossible'
    if a != b != c:
        return 'Scalene'
    if a == c:
        return 'Equilateral'
    return 'Isosceles'

if __name__ == "__main__":
    import doctest
    doctest.testmod()
{ "domain": "codereview.stackexchange", "id": 35410, "tags": "python, beginner, python-3.x" }
Calculating the pH of a mixture of Na2HPO4 and Na3PO4?
Question: I know there are more questions about this on the forum, but I was just wondering: when we mix both solutions, would we have to consider the equilibria corresponding to $\mathrm{p}K_\mathrm{a2}$ and $\mathrm{p}K_\mathrm{a3}$, or just the latter? Answer: I'll give a roundabout answer based on significant figures. The whole truth is that any time you add any phosphate ion into an aqueous solution, you will have all four phosphate species ($\ce{H3PO4}$, $\ce{H2PO4^-}$, $\ce{HPO4^{2-}}$, and $\ce{PO4^{3-}}$) in solution. The relative amounts depend on the three equilibrium equations and the total concentration of all of the phosphate species. For phosphoric acid the three pKa's are different enough that only two phosphate species will have a "significant" concentration at whatever pH the solution is at. Here is where the answer gets fuzzy. What is "significant"? To account for 99% of the species (2 significant figures) is typically good enough, and at most two species would need to be considered. However, if you want to account for 99.99999999999% of the species (12 significant figures), then you're going to have to consider all four phosphate species. So the gist is: how many significant figures do you need in the calculations? What is typically done is to simplify the four equilibrium equations to the two "significant" ones (maybe only one species at high or low pH), and then calculate the concentrations of the last two species using the found concentrations of the first two. In reality there is another consideration. If you look at the pKa values for phosphoric acid, Wikipedia lists pKa1 = 2.148, pKa2 = 7.198, and pKa3 = 12.319. There are only three significant figures in each of these equilibrium constants. (Only the mantissa counts, not the characteristic.) So you can only have three significant figures for any given phosphate species.
So, to three significant figures, for any sort of mixture of $\ce{Na2HPO4}$ and $\ce{Na3PO4}$ salts you'll need to consider both $pKa_2$ and $pKa_3$ and you'll end up with a quadratic equation to solve. The simplifying assumption is that $$\ce{[H3PO4] + [H2PO4^-] << [HPO4^{2-}] + [PO4^{3-}]}$$ An "exact" iterative solution, considering all four species, can easily be solved via a computer program, but it is really messy to do such a calculation by hand. As a final check you should calculate the concentrations for $\ce{H3PO4}$ and $\ce{H2PO4^-}$ and verify that the assumption holds. (For phosphoric acid if the assumption doesn't hold to three significant figures, then normalizing the calculated values would yield a "good enough" result to 3 significant figures.)
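As the answer notes, the "exact" calculation over all four species is easy for a computer. A sketch (the example concentrations are arbitrary, and activity coefficients are ignored, as in the answer) that solves the full charge balance by bisection on pH:

```python
# pKa's for phosphoric acid (the Wikipedia values quoted in the answer)
KA1, KA2, KA3 = 10**-2.148, 10**-7.198, 10**-12.319
KW = 1e-14

def ph_of_mixture(c_hpo4, c_po4):
    """pH of a mixture of Na2HPO4 (c_hpo4 mol/L) and Na3PO4 (c_po4 mol/L),
    ignoring activity coefficients; solves the charge balance by bisection."""
    c_t = c_hpo4 + c_po4            # total phosphate
    c_na = 2 * c_hpo4 + 3 * c_po4   # sodium contributed by the two salts

    def charge_imbalance(ph):
        h = 10.0 ** -ph
        # distribution fractions of H2PO4- / HPO4^2- / PO4^3-
        d = h**3 + h**2 * KA1 + h * KA1 * KA2 + KA1 * KA2 * KA3
        a1 = h**2 * KA1 / d
        a2 = h * KA1 * KA2 / d
        a3 = KA1 * KA2 * KA3 / d
        # positive charge minus negative charge; decreases as pH rises
        return c_na + h - KW / h - c_t * (a1 + 2 * a2 + 3 * a3)

    lo, hi = 0.0, 14.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if charge_imbalance(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(ph_of_mixture(0.1, 0.1))  # approx pKa3 plus a small correction, ~12
```

Because all four species enter the charge balance, this reproduces the two-species approximation automatically whenever the pKa's are well separated.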
{ "domain": "chemistry.stackexchange", "id": 7250, "tags": "acid-base, ph, ions" }
Are there non-hadronic jets?
Question: Hello, I am new to jet physics and I read here that 'Hadronic jets are amongst the most striking phenomena in high-energy physics'. My limited understanding of jets is that they are defined by the hadrons produced in the decay process: jets are cone-like shapes starting from the collision point, observed as energy/radiation showers of hadrons in the detector calorimeters. I want to know if there are non-hadronic jets, and if so, how they are defined, or whether I am missing something fundamental. Any introductory textbook suggestions on the subject are very welcome. Thank you in advance. Answer: There exists a notion of jets in QED: one such definition of a jet arises from the $\sigma_{2\rightarrow2}$ electron-to-muon scattering. Analysis of Feynman diagrams at the next order suggests the inclusion of the two radiative emission diagrams as well, allowing us to calculate $\sigma_{2\rightarrow2}$ as $\sigma_{\rm total} - \sigma_{2\rightarrow3}$, where $\sigma_{2\rightarrow3}$ is $\sigma(e^+e^-\rightarrow\mu^+\mu^-\gamma)$. Now, in calculating $\sigma_{2\rightarrow3}$, we usually employ "experimental regularisation"$^\dagger$ to counter the IR divergence from the radiative corrections, and posit that the detectors can only detect photons with energy lower than some $E_\gamma$, and that they can only distinguish between the $\mu$ and $\gamma$ impact at an angle greater than $\theta'$. Thus the final two-body state will also be parameterised by $E_\gamma$ and $\theta'$, forming a Sterman-Weinberg jet. In general, however, QCD hadronic jets are much easier to detect in practice. $^\dagger$ Of course, theoretical particle physicists usually prefer to use a photon mass $m_\gamma$ as a regulator during calculations.
{ "domain": "physics.stackexchange", "id": 74579, "tags": "particle-physics, proton-decay" }
Gan paper: sampling the distribution
Question: In the GAN paper it is said on page 3, Figure 1: "The lower horizontal line is the domain from which z is sampled, in this case uniformly. The horizontal line above is part of the domain of x. The upward arrows show how the mapping x = G(z) imposes the non-uniform distribution pg on transformed samples" For those who want to see the figure: I wanted to know what that would mean in a practical case. Let's say you are working with images that are normalized between [0,..,1]; this would be the domain of x as referred to in the paper, right? Does this mean that I would have to sample my z from the domain of x, i.e. [0,..,1]? In most implementations I see people taking points randomly using things such as: np.random.randn(latent_dim) Answer: Let's say you are working with images that are normalized between [0,..,1]; this would be the domain of x as referred to in the paper, right? No, the domain of X would be "images of [whatever they contain (e.g. dogs)] normalized between 0 and 1". Does this mean that I would have to sample my z from the domain of x, i.e. [0,..,1]? No, they are two different domains and the generator $G$ maps between them. In the paragraph you linked, the authors just point out that the generator $G$ is a function that maps the input data (i.e. random vectors following a uniform distribution in $[0, 1]$) to the output data (e.g. images of dogs) and that the mapping is non-regular, meaning that very different inputs may lead to similar outputs and vice versa.
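The point that a uniform z induces a non-uniform $p_g$ can be seen with a toy example (the quadratic "generator" here is entirely made up for illustration, standing in for a learned network):

```python
import random

random.seed(0)

def G(z):
    """Toy 'generator': a deterministic map from latent z to output x."""
    return z * z

# sample z uniformly on [0, 1] and push it through G
zs = [random.random() for _ in range(100_000)]
xs = [G(z) for z in zs]

# uniform z has mean 0.5; the transformed samples have mean E[z^2] = 1/3,
# and half of them land below 0.25 -- the induced p_g piles up near 0
print(sum(xs) / len(xs))                          # approx 1/3
print(sum(1 for x in xs if x < 0.25) / len(xs))   # approx 0.5
```

The latent and output domains here are both subsets of $[0,1]$, yet the distributions differ, which is exactly the "upward arrows" picture in Figure 1.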
{ "domain": "datascience.stackexchange", "id": 10491, "tags": "machine-learning, deep-learning, gan, generative-models" }
Java TicTacToe MVC with Singleplayer mode
Question: To practice the MVC pattern and unit testing in Java, I decided to make a simple TicTacToe console application. The features of this app are: Multiplayer mode; Singleplayer mode (that should always result in a draw). My questions are: Have I applied the concept of the MVC pattern correctly? I implemented unit tests for the SimpleAI class. Are my tests appropriate? Can I make them more dynamic (right now they just test specific cases)? Are there any heavy no-noes in my code I should watch out for in the future? Here is the Github link: https://github.com/Baumgartner-Lukas/TTT.git Code: View:

import controller.GameController;
import controller.SimpleAI;
import model.GameBoard;
import view.GameFieldView;
import java.io.IOException;

public class TicTacToe {
    public static void main(String[] args) throws IOException {
        GameBoard model = new GameBoard();
        GameFieldView view = new GameFieldView();
        SimpleAI sai = new SimpleAI();
        GameController controller = new GameController(model, view, sai);
        controller.play();
    }
}

Model: Stones:

package model;

public enum Stone {
    X("X"), O("O"), NONE(" ");

    private final String stone;

    Stone(String stone) {
        this.stone = stone;
    }

    @Override
    public String toString() {
        return stone;
    }
}

GameBoard:

package model;

public class GameBoard {
    public static final int SIZE = 3;
    public static final int TURNS = SIZE * SIZE;
    private Stone grid[][] = new Stone[SIZE][SIZE];

    // Fill the new GameBoard with NONE(" ") Stones
    public GameBoard() {
        for (int r = 0; r < SIZE; r++) {
            for (int c = 0; c < SIZE; c++) {
                grid[r][c] = Stone.NONE;
            }
        }
    }

    public Stone getStone(int row, int col) {
        return grid[row][col];
    }

    public void setStone(int row, int col, Stone stone) {
        grid[row][col] = stone;
    }
}

Controller:

package controller;

import model.GameBoard;
import model.Stone;
import view.GameFieldView;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class GameController {
    private BufferedReader reader;
    private GameFieldView view;
    private GameBoard model;
    private SimpleAI simpleAI;
    private boolean SimpleAIisActive = false;

    public GameController(GameBoard model, GameFieldView view, SimpleAI sai) {
        this.model = model;
        this.view = view;
        this.simpleAI = sai;
        this.reader = new BufferedReader(new InputStreamReader(System.in));
    }

    // counter to determine which player's turn it is and when the max. turns are reached
    private int counter = 0;

    public void play() throws IOException {
        int[] input = new int[2];
        System.out.println("Single- or Multiplayer (S | M): ");
        String opt = reader.readLine();
        if (opt.trim().toLowerCase().equals("s")) {
            SimpleAIisActive = true;
        }
        while (counter < GameBoard.TURNS) {
            // print Field
            view.printGameField(model);
            if (SimpleAIisActive && counter % 2 == 1) {
                simpleAI.updateGameBoard(model);
                if (hasWon()) {
                    System.out.printf("%n AI has won! You noob %n");
                    view.printGameField(model);
                    return;
                }
                System.out.printf("%n AI-Turn %n%n");
                view.printGameField(model);
                counter++;
            }
            // prompt players for their turn
            try {
                input = prompt();
                while (input[0] < 0 || input[0] > 2 || input[1] < 0 || input[1] > 2) {
                    System.err.printf("Row and Col must be between 0 - 2. %n");
                    input = prompt();
                }
                while (!isValidMove(input)) {
                    System.err.printf("This field is already taken! %n");
                    input = prompt();
                }
            } catch (IOException ioe) {
                System.err.println("Error reading input");
                ioe.printStackTrace();
            }
            placeStone(input);
            if (hasWon()) {
                view.printGameField(model);
                System.out.printf("%nPlayer %d has won! GG EZ %n", counter % 2 + 1);
                return;
            }
            counter++;
        }
        view.printGameField(model);
        System.out.println("Game finished with a draw!");
    }

    /**
     * For readability
     *
     * @return returns true if one of those conditions is true
     */
    private boolean hasWon() {
        return checkStraight() || checkDiagonal();
    }

    /**
     * Checks if there are 3 same Stones in a diagonal line
     *
     * @return returns true if 3 Stones are found. False if not.
     */
    private boolean checkDiagonal() {
        // middle Stone
        Stone s = model.getStone(1, 1);
        return s != Stone.NONE && (
                s == model.getStone(0, 0) && s == model.getStone(2, 2) ||
                s == model.getStone(0, 2) && s == model.getStone(2, 0));
    }

    /**
     * Checks if there are 3 same Stones in a straight line
     *
     * @return returns true if 3 Stones are found. False if not.
     */
    protected boolean checkStraight() {
        int i = 0;
        while (i < 3) {
            Stone sCol = model.getStone(0, i);
            Stone sRow = model.getStone(i, 0);
            if (sCol == model.getStone(1, i) && sCol == model.getStone(2, i) && sCol != Stone.NONE) return true;
            if (sRow == model.getStone(i, 1) && sRow == model.getStone(i, 2) && sRow != Stone.NONE) return true;
            i++;
        }
        return false;
    }

    /**
     * Checks if the user puts a Stone on a valid (empty) position on the board
     *
     * @param input row and col of field where to set the stone
     * @return returns true if the input field is empty
     */
    protected boolean isValidMove(int[] input) {
        int row = input[0];
        int col = input[1];
        return (model.getStone(row, col) == Stone.NONE);
    }

    protected void placeStone(int[] input) {
        int row = input[0];
        int col = input[1];
        if (counter % 2 == 0) {
            model.setStone(row, col, Stone.X);
        } else {
            model.setStone(row, col, Stone.O);
        }
    }

    /**
     * Prompts the player for the position where to set the stone
     *
     * @return returns the input array: [0] = row, [1] = col
     * @throws IOException if reading the input fails
     */
    private int[] prompt() throws IOException {
        int player;
        int[] input = new int[2];
        player = counter % 2 + 1;
        System.out.println("==========");
        System.out.printf("It is player %d's turn! %n", player);
        System.out.println("Give Row: ");
        input[0] = Integer.parseInt(reader.readLine());
        System.out.println("Give Col: ");
        input[1] = Integer.parseInt(reader.readLine());
        return input;
    }
}

Singleplayer "AI":

package controller;

import model.GameBoard;
import model.Stone;

import static java.util.concurrent.ThreadLocalRandom.current;

public class SimpleAI {
    int counter = 1;

    public SimpleAI() {
    }

    /**
     * public method to use in the GameController class
     * @param model model of the current game board
     */
    protected void updateGameBoard(GameBoard model) {
        alwaysDraw(model);
        counter++;
    }

    /**
     * Adds stones randomly on the field. Easiest difficulty.
     * @param model model of the current game board
     */
    private void addRandomStone(GameBoard model) {
        int row = getRandomNumber();
        int col = getRandomNumber();
        while (model.getStone(row, col) != Stone.NONE) {
            row = getRandomNumber();
            col = getRandomNumber();
        }
        model.setStone(row, col, Stone.O);
    }

    /**
     * Adds stones to the board in a way that should always lead to a draw
     * @param model model of the current game board
     */
    private void alwaysDraw(GameBoard model) {
        // if there is no stone set in the middle, set a stone in the middle
        if (counter == 1) {
            if (model.getStone(1, 1) == (Stone.NONE) && counter == 1) {
                model.setStone(1, 1, Stone.O);
            } else {
                // if there is a stone in the middle, set the stone in one of the corners
                model.setStone(getRandomEvenNumber(), getRandomEvenNumber(), Stone.O);
            }
        } else {
            if (!checkDiagonal(model)) {
                if (!checkRows(model)) {
                    if (!checkCols(model)) {
                        if (!checkCorners(model)) {
                            checkStraights(model);
                        }
                    }
                }
            }
        }
    }

    /**
     * checks if there is a free space on any of the middle lanes (0:1, 2:1, 1:0, 1:2)
     * @param model current model of the game board
     */
    private void checkStraights(GameBoard model) {
        int r = getRandomNumber();
        int c = getRandomNumber();
        if (model.getStone(r, c) == Stone.NONE && (r + c > 0 && r + c < 4)) {
            model.setStone(r, c, Stone.O);
        } else {
            checkStraights(model);
        }
    }

    /**
     * checks if any of the corners of the game board is free to set a stone
     * @param model current model of the game board
     * @return true if there was a free corner and a friendly stone was set,
     *         false if no corner was empty
     */
    private boolean checkCorners(GameBoard model) {
        int cornerCount = 0;
        for (int r = 0; r < 2; r++) {
            for (int c = 0; c < 2; c++) {
                if (model.getStone(r * 2, c * 2) == Stone.X) {
                    cornerCount++;
                    if (cornerCount < 2 && model.getStone(r * 2, c * 2) == Stone.NONE) {
                        model.setStone(r * 2, c * 2, Stone.O);
                        return true;
                    }
                }
            }
        }
        return false;
    }

    /**
     * Checks if there are two enemy stones already in a diagonal position. If so, make the according counter move.
     * If there is no enemy stone in the middle, skip that check.
     * @param model model of the current game board
     * @return false if there is no enemy stone in the middle,
     *         true if there are two enemy stones on a diagonal and a counter move was made
     */
    private boolean checkDiagonal(GameBoard model) {
        if (model.getStone(1, 1) != Stone.X) return false;
        if (model.getStone(1, 1) == Stone.X && model.getStone(0, 0) == Stone.X && model.getStone(2, 2) != Stone.O) {
            model.setStone(2, 2, Stone.O);
            return true;
        } else if (model.getStone(1, 1) == Stone.X && model.getStone(0, 2) == Stone.X && model.getStone(2, 0) != Stone.O) {
            model.setStone(2, 0, Stone.O);
            return true;
        } else if (model.getStone(1, 1) == Stone.X && model.getStone(2, 0) == Stone.X && model.getStone(0, 2) != Stone.O) {
            model.setStone(0, 2, Stone.O);
            return true;
        } else if (model.getStone(1, 1) == Stone.X && model.getStone(2, 2) == Stone.X && model.getStone(0, 0) != Stone.O) {
            model.setStone(0, 0, Stone.O);
            return true;
        }
        return false;
    }

    /**
     * Checks all rows for two enemy stones in the same row
     * @param model model of the current game board
     * @return false if there are no two enemy stones in the same row,
     *         true if there are two enemy stones in the same row and a counter move was made
     */
    private boolean checkRows(GameBoard model) {
        for (int r = 0; r < 3; r++) {
            int stoneCount = 0;
            for (int c = 0; c < 3; c++) {
                if (model.getStone(r, c) == Stone.X) {
                    stoneCount++;
                } else if (model.getStone(r, c) == Stone.O) {
                    stoneCount--;
                }
            }
            if (stoneCount == 2) {
                counterMoveRow(model, r);
                return true;
            }
        }
        return false;
    }

    /**
     * Checks columns for enemy stones
     * @param model model of the current game board
     * @return false if there are no two enemy stones in the same column,
     *         true if there are two enemy stones in the same column and a counter move was made
     */
    private boolean checkCols(GameBoard model) {
        for (int c = 0; c < 3; c++) {
            int stoneCount = 0;
            for (int r = 0; r < 3; r++) {
                if (model.getStone(r, c) == Stone.X) {
                    stoneCount++;
                } else if (model.getStone(r, c) == Stone.O) {
                    stoneCount--;
                }
            }
            if (stoneCount == 2) {
                counterMoveCol(model, c);
                return true;
            }
        }
        return false;
    }

    /**
     * Sets a friendly stone in the appropriate position
     * @param model model of the current game board
     * @param c column in which the two enemy stones were found
     */
    private void counterMoveCol(GameBoard model, int c) {
        for (int r = 0; r < 3; r++) {
            if (model.getStone(r, c) == Stone.NONE) model.setStone(r, c, Stone.O);
        }
    }

    /**
     * Sets a friendly stone in the appropriate position
     * @param model model of the current game board
     * @param r row in which the two enemy stones were found
     */
    private void counterMoveRow(GameBoard model, int r) {
        for (int c = 0; c < 3; c++) {
            if (model.getStone(r, c) == Stone.NONE) model.setStone(r, c, Stone.O);
        }
    }

    /**
     * generates a random integer in the range 0 to 2
     * @return random int between 0 and 2
     */
    private int getRandomNumber() {
        return current().nextInt(0, 3);
    }

    /**
     * generates an even random number (0 or 2),
     * used for setting a stone in one of the corners
     * @return random even int (0 or 2)
     */
    private int getRandomEvenNumber() {
        return current().nextInt(0, 2) * 2;
    }
}

Tests:

package controller;

import model.Stone;
import org.junit.Before;
import model.GameBoard;
import org.junit.Test;

import static org.junit.Assert.*;

public class SimpleAITest {
    private GameBoard model;
    private SimpleAI sai;

    @Before
    public void setUp() throws Exception {
        model = new GameBoard();
        sai = new SimpleAI();
        sai.counter = 2;
    }

    @Test
    public void stoneIsSetCorrectlyRowCheck() {
        setUpForRowCheck();
        sai.updateGameBoard(model);
        assertEquals(Stone.O, model.getStone(0, 2));
    }

    @Test
    public void stoneIsSetCorrectlyColCheck() {
        setUpForColCheck();
        sai.updateGameBoard(model);
        assertEquals(Stone.O, model.getStone(0, 2));
    }

    @Test
    public void stoneIsSetCorrectlyDiagonalCheck() {
        setUpForDiagonal();
        sai.updateGameBoard(model);
        assertEquals(Stone.O, model.getStone(2, 0));
    }

    @Test
    public void stoneIsSetCorrectlyStraightCheck() {
        setUpForStraight();
        sai.updateGameBoard(model);
        assertTrue(model.getStone(1, 0) == Stone.O || model.getStone(1, 2) == Stone.O);
    }

    private void setUpForStraight() {
        model.setStone(0, 1, Stone.X);
        model.setStone(2, 1, Stone.X);
        model.setStone(1, 1, Stone.X);
    }

    private void setUpForRowCheck() {
        model.setStone(0, 0, Stone.X);
        model.setStone(0, 1, Stone.X);
        model.setStone(2, 0, Stone.X);
        model.setStone(2, 2, Stone.X);
        model.setStone(1, 1, Stone.O);
        model.setStone(2, 1, Stone.O);
    }

    private void setUpForColCheck() {
        model.setStone(2, 2, Stone.X);
        model.setStone(1, 2, Stone.X);
        model.setStone(1, 1, Stone.O);
    }

    private void setUpForDiagonal() {
        model.setStone(1, 1, Stone.X);
        model.setStone(0, 2, Stone.X);
        model.setStone(2, 2, Stone.O);
    }
}

Answer: Thanks for sharing your code. Have I applied the concept of the MVC pattern correctly? No. In the MVC pattern the controller manipulates the model, and the view handles user interaction: displaying the model's current state, taking the user input, and passing it to the controller. In your implementation the controller does the user interaction. I implemented unit tests for the SimpleAI class.
Are my tests appropriate? Unit tests have more than one goal: UTs verify the desired behavior of the tested code; UTs document the current behavior of the tested code; UTs are examples of how to use the tested code. You can judge for yourself how well your code reaches each goal... Can I make them more dynamic (now they just test for a specific case)? Unit tests are meant to be specific. Each test method verifies a single assumption about the behavior of the tested code. Therefore you cannot write "generic" tests to be reused with other code under test. Are there any heavy no-noes in my code I should watch out for in the future? Naming Finding good names is the hardest part in programming. So always take your time to think carefully about your identifier names. Please read (and follow) the Java Naming Conventions. Your variable SimpleAIisActive should start with a lower case letter, and since it holds a boolean it should start with is, has, can or alike, so it might be isSimpleAiActive. avoid single character names Since the number of characters is quite limited in most languages you will soon run out of names. This means that you either have to choose another character which is not so obviously connected to the purpose of the variable, and/or you have to "reuse" variable names in different contexts. Both make your code hard to read and understand for other persons. (Keep in mind that you are that other person yourself if you look at your code in a few months!) On the other hand, in Java the length of identifier names is virtually unlimited. There is no penalty in any way for long identifier names. So don't be stingy with letters when choosing names. prefer OOish solutions over procedural approaches There is nothing wrong with procedural approaches in general, but Java is an object oriented (OO) programming language and if you want to become a good Java programmer then you should start solving problems in an OO way. But OOP doesn't mean to "split up" code into random classes.
The ultimate goal of OOP is to reduce code duplication, improve readability and support reuse as well as extending the code. Doing OOP means that you follow certain principles which are (among others): information hiding / encapsulation single responsibility / separation of concerns same level of abstraction KISS (Keep it simple (and) stupid.) DRY (Don't repeat yourself.) "Tell! Don't ask." Law of Demeter ("Don't talk to strangers!") In your code the change of the current user is an example of a procedural approach. You have a counter variable and calculate the current user based on it each time. If you considered the current user to be an object represented by its Stone, you could have it this way: private static final int CURRENT_PLAYER = 0; private final List<Stone> players = new ArrayList<>(Arrays.asList(Stone.X,Stone.O)); //... protected void placeStone(int[] input) { int row = input[0]; int col = input[1]; model.setStone(row, col, players.get(CURRENT_PLAYER)); } //... protected void updateGameBoard(GameBoard model) { alwaysDraw(model); players.add(players.remove(CURRENT_PLAYER)); } //... System.out.printf("%nPlayer %d has won! GG EZ %n", players.get(CURRENT_PLAYER));
{ "domain": "codereview.stackexchange", "id": 29649, "tags": "java, beginner, mvc, tic-tac-toe, ai" }
Maximum number of Stabilizer Generators?
Question: The Pauli group, $P_n$, is given by $$P_n=\{ \pm 1, \pm i\}\otimes \{ I,\sigma_x,\sigma_y,\sigma_z\}^{\otimes n}$$ Abelian subgroups of this which do not contain the element $(-1)*I$ correspond to a stabilizer group. If there are $r$ generators of one such subgroup, $\mathcal{G}$, then the $+1$ eigenspace has $2^{n-r}$ basis elements. This then leads to the natural question of whether we have that $r\le n$ and how can it be proved (either way)? I guess a (valid?) proof would be along the lines that if $r \gt n$ we would have a basis of fractional dimension - this is not allowed, so $r\le n$. But if one exists I would prefer a proof considering only the group properties and not the space on which it acts. Answer: Consider a subgroup $G$ of the Pauli group with at least one operator that acts non-trivially on some qubit. Given any qubit $j$, for which the group contains an operator $S_j$ which acts on $j$ non-trivially, there is a Clifford group operator $C_j$ such that $C_j S_j C_j^\dagger = Z_j$, acting on qubit $j$ alone. (Why?) If $G_j = \{ C_j S C_j^\dagger \,\vert\, S \in G \}$ and $G$ is abelian, then $G_j = \langle Z_j \rangle \oplus G'_j$, where $G'_j$ does not act on qubit $j$. (Why?) By induction, we can transform any abelian subgroup on $n$ qubits to a group with at most $n+1$ generators, where up to $n$ of them act on a single qubit with a $Z$ operator. (And what then would the remaining one be?) From this, we can prove that a stabiliser group on $n$ qubits has at most $n$ generators; and with only a little more work, we can show that a stabiliser group with $r$ generators stabilises a subspace of dimension $2^{n-r}$.
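The counting argument in the answer can also be checked concretely in the binary symplectic picture, where each $n$-qubit Pauli (up to phase) becomes a $2n$-bit vector $(x|z)$, commutation becomes the symplectic inner product over GF(2), and independence of generators becomes linear independence over GF(2). Below is a rough illustrative sketch of that bookkeeping (not from the answer; the three Pauli strings are made-up commuting generators on $n = 3$ qubits):

```python
def pauli_to_symplectic(pauli):
    # map a Pauli string like "XZI" (phases ignored) to a binary (x|z) vector
    x = [1 if p in "XY" else 0 for p in pauli]
    z = [1 if p in "ZY" else 0 for p in pauli]
    return x + z

def commute(u, v):
    # symplectic inner product over GF(2): 0 iff the two Paulis commute
    n = len(u) // 2
    sp = sum(a * b for a, b in zip(u[:n], v[n:])) \
       + sum(a * b for a, b in zip(u[n:], v[:n]))
    return sp % 2 == 0

def gf2_rank(vectors):
    # Gaussian elimination over GF(2); the rank is the number of
    # independent generators of the group the vectors represent
    rows = [list(v) for v in vectors]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

gens = ["XXI", "IXX", "ZZZ"]               # hypothetical commuting set, n = 3
vecs = [pauli_to_symplectic(g) for g in gens]
all_commute = all(commute(u, v) for u in vecs for v in vecs)
n_independent = gf2_rank(vecs)             # 3 here, satisfying r <= n
```

For a genuine stabilizer group the commutation condition forces these vectors to span an isotropic subspace, whose dimension is at most $n$ — which is exactly the $r \le n$ bound the answer derives by Clifford conjugation.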
{ "domain": "quantumcomputing.stackexchange", "id": 172, "tags": "stabilizer-code" }
What types of features are used in a large-scale click-through rate prediction problem?
Question: Something that I often see in papers (example) about large-scale learning is that click-through rate (CTR) problems can have up to a billion features for each example. In this Google paper the authors mention: The features used in our system are drawn from a variety of sources, including the query, the text of the ad creative, and various ad-related metadata. I can imagine a few thousand features coming from this type of source, I guess through some form of feature hashing. My question is: how does one get to a billion features? How do companies translate user behavior into features in order to reach that scale of features? Answer: That really is a nice question, although once you're Facebook or Google etc., you have the opposite problem: how to reduce the number of features from many billions to, let's say, a billion or so. There really are billions of features out there. Imagine that in your feature vector you have billions of possible phrases that the user could type into a search engine. Or that you have billions of web sites a user could visit. Or millions of locations from which a user could log in to the system. Or billions of mail accounts a user could send mails to or receive mails from. Or, to switch a bit, consider a social-networking-site-like problem. Imagine that in your feature vector you have billions of users whom a particular user could either know or be in some degree of separation from. You can add billions of links that a user could post in his SNS feed, or millions of pages a user could 'like' (or do whatever the SNS allows him to do). Similar problems may be found in many domains from voice and image recognition, to various branches of biology, chemistry etc. I like your question, because it's a good starting point to dive into the problems of dealing with the abundance of features. Good luck in exploring this area! UPDATE due to your comment: Using features other than binary is just one step further in imagining things.
You could somehow cluster the searches, and count frequencies of searches for a particular cluster. In an SNS setting you could build a vector of relations between users defined as degree of separation instead of a mere binary feature of being or not being friends. Imagine logs that global corporations are holding on millions of their users. There's a whole lot of stuff that can be measured in a more detailed way than binary. Things become even more complicated once we're considering an online setting. In such a case you do not have time for complicated computations and you're often left with binary features since they are cheaper. And no, I am not saying that the problem becomes tractable once it's reduced to a magical number of a billion features. I am only saying that a billion features is something you may end up with after a lot of effort in reducing the number of dimensions.
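To make the scale concrete: in practice, such unbounded categorical events (queries typed, sites visited, pages liked) are commonly mapped into a fixed-width sparse binary vector via the hashing trick, so the nominal feature space can be enormous while each example activates only a handful of indices. A rough sketch of the idea (the event strings and bucket count below are made up for illustration, not from the answer):

```python
import hashlib

def hashed_feature_indices(events, n_buckets=2**24):
    # the "hashing trick": each raw "namespace=value" event string is hashed
    # into one of n_buckets slots; the example's feature vector is the sparse
    # binary vector with ones at exactly these indices
    indices = set()
    for event in events:
        digest = hashlib.md5(event.encode("utf-8")).hexdigest()  # stable across runs
        indices.add(int(digest, 16) % n_buckets)
    return indices

user_events = [                        # hypothetical raw events for one example
    "query=cheap flights to paris",
    "visited_site=example.com",
    "liked_page=12345",
]
active = hashed_feature_indices(user_events)
```

With, say, $2^{30}$ buckets the model nominally has a billion weights, yet each example touches only as many indices as it has events — which is how billion-feature CTR models remain tractable.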
{ "domain": "datascience.stackexchange", "id": 282, "tags": "machine-learning, classification, bigdata, dataset" }
Stoichiometric calculation of composition of a compound
Question: I have a question from a high school review package. I don't need the answer, but one thing that is bugging me is that I feel as if I do not understand the question at all. I don't know what it wants me to find. Can anyone please tell me what the question is about? The question is: Copper oxide is a black powder. It can be decomposed by heating it with an excess of charcoal, a form of carbon. The charcoal reacts with the copper oxide to produce copper and carbon dioxide. Any excess charcoal that was used can be separated from the copper by adding water. The charcoal will float on the water while the more dense copper will sink to the bottom. The charcoal can then be skimmed off. Mass of copper produced = $\pu{2.76 g}$ Mass of copper oxide used = $\pu{3.45 g}$ a) Use this data to determine the simplest formula of copper oxide. b) Predict the valence/charge of the copper in the copper oxide formed. I basically got: $$\frac{2.76}{63.546} = \pu{0.043 mol}$$ The balanced chemical equation that I deduced was $$\ce{2CuO + C -> 2Cu + CO2}$$ However, I am not sure that my understanding of the question is correct. Is there a mistake in my understanding? Note: I was able to solve the question. Answer: This is an example of a classical method of determining chemical formulas by elemental analysis. You are not supposed to know the chemical formula of copper oxide, nor are they asking you to write a balanced equation. All they are telling you is that (i) Mass of pure oxide of copper = 3.45 g (ii) Mass of copper obtained after reduction (we do not need to know the chemical equation) = 2.76 g Use mass balance: how many grams of oxygen atoms must be there in that oxide? Now you know the masses of copper and oxygen. Two options: (a) Use moles and find the molar ratio between the two elements (b) Alternatively, if you do not need to invoke the mole concept ... old school way. Just for trying: think about the fact that 16 (exactly 16) mass units is the mass of 1 oxygen atom.
What mass of copper atoms would be associated with 16 units of O if copper's atomic mass is 63.54 units? Use mass balance and some arithmetic.
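The mass-balance arithmetic the answer sketches works out as follows (a quick numerical check using standard atomic masses; the question itself only asks for the method):

```python
m_oxide = 3.45                   # g of copper oxide used
m_copper = 2.76                  # g of copper produced
m_oxygen = m_oxide - m_copper    # mass balance: 0.69 g of oxygen was in the oxide

mol_copper = m_copper / 63.55    # atomic mass of Cu is about 63.55
mol_oxygen = m_oxygen / 16.00    # atomic mass of O is 16.00
ratio = mol_copper / mol_oxygen  # comes out very close to 1, i.e. a 1:1 ratio
```

A 1:1 Cu:O ratio gives the simplest formula CuO, and since oxide carries a 2- charge, the copper here is Cu(II) — answering parts a) and b).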
{ "domain": "chemistry.stackexchange", "id": 14243, "tags": "physical-chemistry, stoichiometry" }
Does the angular velocity of a rotating rod change when its axis of rotation changes?
Question: If a rigid rod of length $l$ rotates on one end around the origin with an angular velocity of $\omega$, and suddenly the end fixed to the origin is released, allowing the rod to move freely without any external forces on it, what is the new angular velocity of the rod? Around what axis would the rod now rotate? I think that the new angular velocity would still be $\omega$ but it would now be around the center of mass of the rod. This would conserve angular momentum relative to the rod's center of mass and the magnitude of velocity of the rod's center of mass would also remain unchanged (thus conserving the total kinetic energy of the rod). Is my thinking correct or does the angular momentum change because the axis of rotation is now different and the rod's moment of inertia relative to this new axis of rotation changes? Answer: The rod exactly before detaching (let's call $t = -\epsilon$) from the center of rotation (A) has an angular velocity $\omega$ and angular momentum $L = I\omega$, where $I = \frac{1}{3}ml^2$. Just after the rod is released (let's call $t = \epsilon$), the COM is moving with a constant velocity $v = \omega \frac{l}{2}$, because that was its velocity at $t = -\epsilon$. The velocity of A is zero at $t = \epsilon$. So its relative velocity with respect to the COM is $v_A = -\omega\frac{l}{2}$. The opposite end of the rod has a velocity of $v_B = \omega l$ at the same time.
The new angular velocity with respect to the COM is: $$\omega_1 = \frac{(\omega l - \omega\frac{l}{2})}{\frac{l}{2}} = \omega$$ The new angular momentum with respect to A is the sum of $\mathbf r \times \mathbf p_{COM}$ plus the spin angular momentum with respect to the COM: $$L_1 = \frac{l}{2} m\omega \frac{l}{2} + I_1\omega$$ The new moment of inertia with respect to the center of mass is $I_1 = \frac{ml^2}{12}$ So, $$L_1 = m\frac{l^2}{4}\omega + \frac{ml^2}{12}\omega = m\frac{l^2}{3}\omega = L$$ The kinetic energy was only rotational: $$E = \frac{1}{2}I\omega^2 = \frac{1}{2}m\frac{l^2}{3}\omega^2 = \frac{1}{6}ml^2\omega^2$$ And later it is translational and rotational: $$E_1 = \frac{1}{2}mv^2 + \frac{1}{2}I_1\omega^2 = \frac{1}{2}m(\omega \frac{l}{2})^2 + \frac{1}{2}(\frac{ml^2}{12})\omega^2 = \frac{1}{6}ml^2\omega^2 = E$$
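The conservation claims in this derivation are easy to verify numerically; the sketch below uses arbitrary made-up values for $m$, $l$ and $\omega$ (it is an illustration, not part of the original answer):

```python
m, l, w = 2.0, 3.0, 5.0   # arbitrary mass (kg), length (m), angular velocity (rad/s)

# before release: pure rotation about the fixed end A
I_end = m * l**2 / 3
L_before = I_end * w
E_before = 0.5 * I_end * w**2

# after release: COM translation plus spin about the COM at the same omega
v_com = w * l / 2
I_com = m * l**2 / 12
L_after = (l / 2) * m * v_com + I_com * w          # orbital (about A) + spin
E_after = 0.5 * m * v_com**2 + 0.5 * I_com * w**2  # translational + rotational
```

Both the angular momentum about A and the total kinetic energy come out identical before and after release, confirming that the rod keeps spinning at $\omega$, now about its center of mass.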
{ "domain": "physics.stackexchange", "id": 87461, "tags": "angular-momentum, rotation, moment-of-inertia" }
Bug on wall identification request
Question: I've been finding a few of these on the same wall of my apartment the past month. Size is maybe 3-5 mm. I live in an apartment in Toronto, Canada. Thanks in advance! Answer: Without clearer photos it is pretty hard to say; however, I think it is likely to be one of the Dermestidae family of beetles. These include a bunch of common pests in the household, including "carpet beetles" (Anthrenus sp.), and the larder beetle (Dermestes lardarius). I think this is most likely to be a carpet beetle, given the light/dark mottled pattern on the shell.
{ "domain": "biology.stackexchange", "id": 11861, "tags": "species-identification, entomology" }
Calculating the B flux density of a flat spiral coil with N turns
Question: Consider the following topology for a flat spiral coil (with an air gap in the middle). It's been a very long time since I have used Maxwell's Equations and now I find myself trying to figure out the $\vec{B}$ flux density of the following arrangement, assuming a current is flowing through the coiled wire. The primary assumption here is that the current in the wire cannot jump due to thin insulation between each winding. My initial guess was to construct an Ampèrian Loop across just either the lhs or rhs wire group and then compute a line integral of the enclosed current. My other idea was to construct an Ampèrian Loop that spans just the central wire and the air gap; however, I'm generally at a loss due to being pretty damn rusty. Any help with this matter would be greatly appreciated. Regards, Vhaanzeit Answer: Assuming that you want to know the $B$ field in the center: for a single loop with radius $R$, the field is $B=\mu_0I/(2R)$. Then sum over all the loops.
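If each winding is approximated as a closed circular loop, as the answer suggests, the centre field is just a sum of single-loop contributions $B_i = \mu_0 I / (2 r_i)$. A rough sketch of that sum (the inner radius, turn spacing and turn count below are made-up illustration values):

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def spiral_center_field(current, r_inner, pitch, n_turns):
    # approximate the flat spiral as n_turns concentric circular loops with
    # radii r_i = r_inner + i*pitch, and sum mu0*I/(2*r_i) at the centre
    return sum(MU0 * current / (2.0 * (r_inner + i * pitch))
               for i in range(n_turns))

B = spiral_center_field(current=1.0, r_inner=0.01, pitch=0.001, n_turns=10)
```

Note the inner turns dominate the sum, since each contribution falls off as $1/r_i$.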
{ "domain": "physics.stackexchange", "id": 31228, "tags": "electromagnetism" }
On regression to minimize log distance rather than distance
Question: Suppose I have a lot of points $ x_i \in \mathbb{R}^N $ with corresponding non-negative labels $ y_i \in \mathbb{R} $ and I want to do regression and make a prediction on some new datapoint $ x^* \in \mathbb{R}^N $ for which I don't have a label. Is there a name for the procedure of choosing a parametric model $ f_\theta : \mathbb{R}^N \rightarrow \mathbb{R} $ so as to minimize the cost function $ \sum_i {|\log(f_\theta(x_i)) - \log(y_i)|^2 } $ rather than $ \sum_i{|f_\theta(x_i) - y_i|^2} $? It seems that minimizing the difference between logs has some nice properties, and I'm surprised I didn't see this discussed in Bishop's machine learning book, for example. I thought of this when I was considering a house pricing problem, where I figured I cared more about the percentage by which I was wrong than the pure difference. After all, in my application (and I'm sure many others like it), being wrong by \$50,000 is terrible for a \$60,000 home, but it's okay for a \$2.5M home. Any data science veterans reading this who have used a cost function like the one I suggested above with logs, or who can tell me what it's called (if it has a formal name)? Answer: There is a loss called Root Mean Squared Log Error (RMSLE): $\sqrt{\frac{1}{n}\sum_{i=1}^n{(\log(y_i + 1) - \log(\hat{y_i} + 1))^2}}$ (do not forget the $+1$ as the $\log$ is not defined at $0$) You will find a brief explanation and discussion here. It has also been used in competitions as for example here.
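The RMSLE formula is straightforward to compute directly. The sketch below (with made-up prices) also shows why it captures the house-pricing intuition from the question: a \$50,000 error counts far more against a \$60,000 home than against a \$2.5M one:

```python
import math

def rmsle(y_true, y_pred):
    # Root Mean Squared Log Error; the +1 keeps the log defined at 0
    n = len(y_true)
    return math.sqrt(sum((math.log(p + 1) - math.log(t + 1)) ** 2
                         for t, p in zip(y_true, y_pred)) / n)

cheap_home = rmsle([60_000], [110_000])        # $50k error on a $60k home
pricey_home = rmsle([2_500_000], [2_550_000])  # $50k error on a $2.5M home
```

Under squared error both predictions would be penalized identically; under RMSLE the error on the cheap home is roughly thirty times larger, because the loss is (approximately) relative rather than absolute.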
{ "domain": "datascience.stackexchange", "id": 6817, "tags": "regression" }
How to calculate moment of inertia double rotation?
Question: How do you calculate the moment of inertia of a disk if it has double rotation, as shown in the picture below? Answer: This is done using the parallel axis theorem $$I = I_{cm} + mr^2$$ where $I_{cm}$ is the moment of inertia of the disc and $I$ is the moment of inertia with respect to the athlete, $m$ is the mass of the disc and $r$ is the perpendicular distance between the athlete’s axis and the axis of the disk. So if you can calculate the moment of inertia of the disc and have the other values above, the rest should be straightforward.
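Following the answer's formula, and taking $I_{cm} = \frac{1}{2}mR^2$ for a uniform disc about its own axis (valid when that axis is parallel to the athlete's, as in the answer's setup), the computation is one line. The numbers below are made up, not from the question:

```python
def disc_inertia_about_athlete(m, R, r):
    # parallel axis theorem: I = I_cm + m*r^2, with I_cm = (1/2) m R^2
    # for a uniform disc about its own (parallel) axis
    i_cm = 0.5 * m * R**2
    return i_cm + m * r**2

I = disc_inertia_about_athlete(m=2.0, R=0.11, r=0.9)   # hypothetical values
```

With $r = 0$ the result reduces to the disc's own $I_{cm}$, as it should.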
{ "domain": "physics.stackexchange", "id": 75211, "tags": "newtonian-mechanics, rotational-dynamics, reference-frames, moment-of-inertia" }
Clarification needed in the concept of apparent depth & real depth
Question: I understood the concept of apparent depth from here: But one thing I didn't understand is, will there be a difference between the real depth and apparent depth when we are looking not at an angle as shown above but vertically downward (along the normal) as shown in the figure below: According to me, real depth and apparent depth should be the same, because the light rays coming out of the object are not undergoing any refraction. If I'm right, then the answer to the question below must be 6 m/s, but it's 8 m/s. How come? A bird is flying 3 m above the surface of water. If the bird is diving vertically down with speed = 6 m/s, his apparent velocity as seen by a stationary fish underwater is: (A) 8 m/s (B) 6 m/s (C) 12 m/s (D) 4 m/s Answer: I did the experiment by putting a ruler into a glass of water, and I found that the perception of reduced depth remained apparent as close to vertical as I could get (too close to vertical I couldn't see the marks on the ruler). Note that your final result for $n_w$ $$n_w = \frac{D_r}{D_a}$$ does not contain any dependence on the angle of view, so $D_a$ remains less than $D_r$ arbitrarily near the vertical. At the vertical your calculation involves zero divided by zero so we can't do the calculation there, but at an infinitesimally small angle away from vertical you get $D_a$ < $D_r$ so I would assume the result holds at zero incident angle as well. That presumably explains your fish's-eye view. Response to claw's comment: This (rather rough!) diagram shows the bird as seen by the fish: I'm going to use your assumption that i and r are small so i = sine(i) = tan(i) and the same for r. I'm also assuming that the distance from the water to the bird, $h$, is much greater than the distance of the fish under the water, so: $$\frac {w}{h} = tan(i) = i$$ where $w$ is the distance from the bird's body to the tip of the wing i.e. half the wingspan (I chose it to be half the wingspan to avoid messing around with factors of two).
Now the fish sees the bird in the position I've drawn in red i.e. at some height, $h'$ and with some wing length, $w'$, and: $$\frac {w'}{h'} = tan(r) = r$$ Now we know from Snell's law that sine(r) = sine(i)/n, and using our approximation that $i$ and $r$ are small we get: $$\frac {w'}{h'} = \frac{1}{n_w} \frac {w}{h}$$ Now there's a key point to be made. Assuming the fish doesn't have binocular vision, our fish only knows w'/h' i.e. the fish can't tell if it's a huge bird far away or a small bird very near. To make any progress we need to assume that the fish knows what the wingspan of the bird is, i.e. the fish knows that w' = w. If we know w' = w we can divide both sides by w and get: $$\frac {1}{h'} = \frac{1}{n_w} \frac {1}{h}$$ or with a simple rearrangement: $$h' = n_wh$$ And this is the result we need. The velocity seen by the fish, $v'$, is just dh'/dt: $$v' = \frac{dh'}{dt} = \frac{d(n_wh)}{dt} = n\frac{dh}{dt} = n_wv$$
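Plugging the small-angle result $h' = n_w h$ into the multiple-choice question at the top of the post reproduces option (A):

```python
n_water = 4.0 / 3.0   # refractive index of water
v_bird = 6.0          # m/s, the bird's true diving speed

# differentiating h' = n_w * h with respect to time gives v' = n_w * v
v_apparent = n_water * v_bird   # the speed the fish perceives
```

So the fish sees the bird approach at 8 m/s even for a purely vertical dive, consistent with the answer's experiment.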
{ "domain": "physics.stackexchange", "id": 7652, "tags": "optics, geometric-optics, refraction" }
Schema.org Microdata validation plugin
Question: I am starting to write a plugin that is aiming to validate a piece of markup against the requirements of schema.org. I was hoping I could get some more tips on how to improve the structure of my code. Is this how you would go about this if you were to do it? $.fn.hasAttr = function(name) { return this.attr(name) !== undefined; }; $.fn.outerHTML = function(s) { return $(this).clone().wrap('<div>').parent().html(); }; $.fn.getOpeningTag = function (s) { return $(this).outerHTML().slice(0, $(this).outerHTML().indexOf(">") + 1) } $.fn.validateSchema = function(options) { var defaults = { // for a list of international postcode regex see: http://www.thalesjacobi.com/Regex_to_validate_postcodes // The one used in the example is from the UK localPostCodeFormat: new RegExp("^([Gg][Ii][Rr] 0[Aa]{2})|((([A-Za-z][0-9]{1,2})|(([A-Za-z][A-Ha-hJ-Yj-y][0-9]{1,2})|(([A-Za-z][0-9][A-Za-z])|([A-Za-z][A-Ha-hJ-Yj-y][0-9]?[A-Za-z])))) {0,1}[0-9][A-Za-z]{2})$"), utilities : { missAtrr : function (target, attrName) { if (!target.hasAttr(attrName)){ defaults.returnError(target, "The element " + target.getOpeningTag() + " is missing the required attribute " + attrName); } }, isValidPostCode : function (target) { if (!defaults.localPostCodeFormat.test(target.text())) { defaults.returnError(target, target.getOpeningTag() + " does not have a valid post code"); } } }, returnError : function (target, errorMessage) { var errorContainer = $('<span class="schemaError"></span>'); target.before(errorContainer); errorContainer.css({ 'color' : 'red', 'border': 'solid 1px red' }).text(errorMessage); } // more utils to be added }; // Extend our default options with those provided.
var opts = $.extend(defaults, options); var schemaElements = { postalAddress : $(this).find('[itemtype*=PostalAddress]'), itemProps : $(this).find("[itemprop]"), emptyProps : $(this).find("[itemprop='']"), postCode : $(this).find("[itemprop=postalCode]"), email : $(this).find("[itemprop=email]"), outofplaceChild : $(this).find("[itemscope]").siblings("[itemprop]")// more dom refs to be added } // validation stuff // // PostalAddress should have itemscope if (schemaElements.postalAddress){ defaults.utilities.missAtrr(schemaElements.postalAddress, "itemscope"); } // No itemscope can be left empty if (schemaElements.emptyProps){ $.each(schemaElements.emptyProps, function () { defaults.returnError($(this), $(this).getOpeningTag() + " can not be left without a value"); }); } // postcode should match the post code format of that country if (schemaElements.postCode){ defaults.utilities.isValidPostCode(schemaElements.postCode); } // No itemprop can exist without having a itemscope parent if (schemaElements.outofplaceChild){ $.each(schemaElements.outofplaceChild, function () { defaults.returnError($(this), $(this).getOpeningTag() + " Does not have a itemscope parent"); }); } // more rules to be added }; $(function(){ $('body').validateSchema(); }); Answer: You could replace .fn.getOpeningTag with: $.fn.getOpeningTag = function (s) { if( this[0] && this[0].nodeType === 1 ) { return "<" + this[0].tagName.toLowerCase() + ">"; } }; I saw a 600x performance improvement in Chrome. These are methods on jQuery objects so doing $(this) is redundant; this will already be a jQuery object. Do not put stuff like validateSchema in jQuery.fn. It is not a method that operates on a collection of DOM elements; make it a class that consists of many small methods. That way you avoid the 9000 configuration options pattern. function SchemaValidation() { } SchemaValidation.prototype = { ...
}; var schemaValidation = new SchemaValidation( a, b, c ); schemaValidation.addElements( e, f, g ); schemaValidation.validate(); Of course you don't have to do it that way but for god's sake do not have it as a jQuery method, it doesn't make any sense. Especially in this case you aren't even using the jQuery object the method is called on. Well, technically you are but that's superfluous.
{ "domain": "codereview.stackexchange", "id": 1180, "tags": "javascript, jquery, microdata" }
Mean difference calculator for any int combination
Question: What it does Calculates the mean difference for a list of any combination of ints (I think). What I want to know The only other way I can think of doing this is using recursion. I’d like to know if there are other better ways to do this. Also, the reason I did this is because I wanted to calculate the mean difference between time stamps I have in another script, so I used this as an exercise; that one will be more difficult since it’s 24 hr time, when it gets to the part of switching from: 12:00 -> 00:00 Code: nums = [3, 1, 2, 5, 1, 5, -7, 9, -8, -3, 3] l = len(nums) - 1 diff_list = [] for i in range(l): diff = nums[0] - nums[1] if diff < 0: diff = nums[1] - nums[0] diff_list.append(diff) nums.pop(0) mean = sum(diff_list)/len(diff_list) print(round(mean, 1)) Answer: Reinventing the wheel Now I will try not to sound too harsh, but did you attempt to google the problem at hand before starting? Building habits in programming is important, but it is equally important to build upon existing code. A quick search gives me the following useful links here, here and here. General comments l = len(nums) - 1 This needs a better name. A good rule of thumb is to avoid single letter variables. Secondly this variable is only used once, and thus is not needed. As you could have done for i in range(len(nums)-1): which is just as clear. As mentioned the next part could be shortened to diff = abs(nums[0] - nums[1]) Perhaps the biggest woopsie in your code is nums.pop(0) for two reasons: It modifies your original list. Assume you have calculated the mean differences, but now want to access the first element in your list: nums[0] what happens? Secondly pop is an expensive operation, as it shifts the indices for every element in the list for every pop. Luckily we are iterating over the indices so we can use them to avoid popping.
Combining we get for i in range(len(nums) - 1): diff = abs(nums[i+1] - nums[i]) diff_list.append(diff) However, this can be written in a single line if wanted as other answers have shown. zip is another solution for a simple oneliner, though it should be slightly slower due to slicing. I do not know how important performance is to you, so zip might be fine [abs(j - i) for i, j in zip(nums, nums[1:])] If speed is important it could be worth checking out numpy Improvements Combining everything, adding type hints and structure and a numpy version we get import numpy as np from typing import List def element_difference_1(nums: List[int]) -> List[int]: return [abs(j - i) for i, j in zip(nums, nums[1:])] def element_difference_2(nums: List[int]) -> List[int]: return [abs(nums[i + 1] - nums[i]) for i in range(len(nums) - 1)] def mean_value(lst: List[int]) -> float: return sum(lst) / len(lst) def mean_difference(nums: List[int], diff_function, rounding: int = 1) -> float: num_diffs = diff_function(nums) mean = mean_value(num_diffs) return round(mean, rounding) def mean_difference_np(lst) -> float: return np.mean(np.abs(np.diff(lst))) if __name__ == "__main__": nums = [3, 1, 2, 5, 1, 5, -7, 9, -8, -3, 3] print(mean_difference(nums, element_difference_1)) print(mean_difference(nums, element_difference_2)) print(mean_difference_np(np.array(nums)))
{ "domain": "codereview.stackexchange", "id": 41541, "tags": "python-3.x, mathematics" }
Accessing Data inside Presentation Layer from API: Laravel 5.2.27
Question: I wrote the code to fetch JSON data. For that I tried to do this in 2 projects. First Laravel project which has API code for database interaction only. Sample code in API controller class CountryAPI extends Controller { public function AllCountries() { return response()->json(['CountryList' => \App\Models\CountryModel::all()]); } } Second Laravel project which is a Presentation Layer that has code to access data from an API. So, the database is not exposed directly in this layer; instead a URL is being used to access data. Sample code in the Presentation Layer class CountryController extends Controller { public function AllCountries() { $url = "http://localhost/API/public/Countries"; $json = file_get_contents($url); $json_data = json_decode($json, true); return view('Country.List', array("Countries" => $json_data["CountryList"])); } } Then inside the View it is like below. @foreach($Countries as $Country) <tr class="odd pointer"> <td class=" ">{{$Country["Country"]}}</td> <td class=" ">{{$Country["CountryCode"]}}</td> </tr> @endforeach This is just a beginning to learn how to isolate the database code from the Presentation Layer. Question: Is there a problem with the points below? Code Quality Approach to fetch data in the Presentation Layer from an API. Website users will send a request to the website, and then the website will send a request to the database server through an API. So each request will have one indirect additional request. Is that good? The reason I ask is that data has to be accessed from Android as well as from the website. Answer: This looks like a good place to use a shared repository. If you used a repository that was shared between your Android API server app and your web server app, you could avoid the need for inter-server communications and translating to and from json. You'd get better performance and the repository would function as the middle tier of your 3-tier architecture.
Repositories in Laravel are a fairly large topic so I'll just point you at Laracasts for that. For using a shared model in more than one project, this thread gives an example of doing it via composer: https://laracasts.com/discuss/channels/general-discussion/share-models-between-two-laravel-projects?page=1 That's not to say there is anything particularly wrong with the architecture you're describing. It shouldn't have too much of a performance hit but it will have a hit from the communications aspect and the translation to and from json aspect. Both of which are small as long as you stay within your local LAN. You seem to be accessing localhost for the API so it shouldn't be a problem.
{ "domain": "codereview.stackexchange", "id": 19520, "tags": "php, laravel" }
Implementing joinWithSeparator on a Character array
Question: I recently answered a Stack Overflow question about transforming an array of Characters into a String in Swift. Looking at the question and the Swift standard lib, it appears that there is a method joinWithSeparator, but the current implementation only supports SequenceType instances where the Element is a String. As an exercise to myself, I wanted to write an extension on SequenceType that could flatten an array of Characters into a String: extension SequenceType where Generator.Element == Character { @warn_unused_result public func joinWithSeparator(separator: String) -> String { var str = "" self.enumerate().forEach({ str.append($1) if let arr = self as? [Character], endIndex: Int = arr.endIndex { if $0 < endIndex - 1 { str.append(Character(separator)) } } }) return str } } How can I further optimize this code? Would it be possible to replace the forEach loop with flatMap? Or is a map call inappropriate since I need to append each character to a String, rather than create a new array? Answer: There are two problems with your method: Problem #1: The method takes a separator: String parameter, but actually it is expected that a single character string is passed, and it crashes at Character(separator) if a multi-character string is passed: let myChars: [Character] = ["a", "b", "c"] let joined = myChars.joinWithSeparator("==") print(joined) // fatal error: Can't form a Character from a String containing more than one extended grapheme cluster The solution is easy: change the parameter to separator: Character. Btw, this makes the method faster. Problem #2: The method is an extension of SequenceType, but it actually works only for Arrays. For arbitrary sequences of characters, the optional cast as? [Character] fails. 
This is silently ignored and the separator not inserted: let charSeq = Repeat(count: 4, repeatedValue: Character("x")) let joined = charSeq.joinWithSeparator(",") print(joined) // Output: xxxx Two more remarks before I suggest possible solutions: The name of the local variable var str = "" is quite non-descriptive. I would call it result or joinedString (but that may be opinion based). Your loop self.enumerate().forEach { /* do something with `$0` (index) and `$1` (char) */ } is correct, but I would write it as for (index, char) in self.enumerate() { /* do something with `index` and `char` */ } So how can problem #2 be solved? You cast the sequence to an array in order to get the index of the last element, so that you can insert the separator after all elements but the last. It becomes simpler if you think the other way around: Insert the separator before all elements but the first. No need to determine the endIndex anymore: public func joinWithSeparator(separator: Character) -> String { var result = "" for (idx, char) in self.enumerate() { if idx > 0 { result.append(separator) } result.append(char) } return result } This has about the same speed as your method when applied to an array, but works for arbitrary character sequences. You asked: Would it be possible to replace the forEach loop with flatMap? and the answer is "yes": public func joinWithSeparator(separator: Character) -> String { let joinedChars = self.enumerate().flatMap { (idx, char) in idx == 0 ?
[ char ] : [ separator, char ] } return String(joinedChars) } The shortest implementation (but not the fastest) would be to convert all characters to strings, and use the existing method to join strings: public func joinWithSeparator(separator: Character) -> String { return self.map { String($0) }.joinWithSeparator(String(separator)) } Performance comparison Here is my test code (compiled in Release mode on a 3.5 GHz iMac): let myChars = Array(Repeat(count: 1_000_000, repeatedValue: Character("x"))) let d1 = NSDate() let j = myChars.joinWithSeparator(",") let d2 = NSDate() print(d2.timeIntervalSinceDate(d1)) Results: Your original code: 0.163 sec. Your code, with the separator parameter changed to Character: 0.104 sec. My first suggestion: 0.095 sec. Using flatMap: 0.276 sec. Convert all characters to strings: 0.305 sec.
{ "domain": "codereview.stackexchange", "id": 20736, "tags": "strings, swift" }
Critical temperature and lattice size with the Wolff algorithm for 2d Ising model
Question: When I run my implementation of the Wolff algorithm on the square Ising model at the theoretical critical temperature I get subcritical behaviour. The lattice primarily just oscillates between mostly positive and mostly negative states. I find that I need to increase the temperature to get behaviour that looks critical. At first I thought it was a bug in the program, but the behaviour actually makes sense. On an infinite lattice the Wolff algorithm should produce clusters of all sizes at $T_C$. This means that most of the clusters it tries to produce are larger than the lattice used in the simulation, and most clusters end up reaching the boundaries and filling most of the lattice. It also goes back to the point Kadanoff always made that true criticality is only possible in infinite systems. I find that I do get critical looking behaviour at slightly higher temperatures than theory predicts. The required temperature increases with decreasing lattice size. Is there any literature on this effect? How do people compensate for it in practice? Is there a formula for the temperature adjustments for different lattice sizes? Answer: The first thing to realize is that there are no "true" phase transitions (in the sense of non-analytic behaviour of thermodynamic potentials) in finite systems. This is the main difficulty one faces when analysing phase transitions using (most) computer simulation schemes. In particular, such simulations are only reliable as long as the observed correlation length is significantly smaller than the system's linear size. However, when there is a second-order phase transition, the correlation length diverges at the critical point, which implies that close to the "true" critical temperature, the behavior observed in a finite system will be smoothed out (and, it turns out, the natural finite-volume analogue of the critical point is shifted, see below). 
Now, of course, a large enough finite system will still display a behaviour that "resembles" a phase transition, but with its singularities smoothed out. To extrapolate results to infinite systems then requires (i) the determination of finite-volume analogues of the limiting quantities (in particular the critical temperature), (ii) examining how these finite-volume quantities change when the system's size is increased. In order to help with this extrapolation procedure, physicists have devised various finite-size scaling theories. I assume that you are working on a torus (i.e., with periodic boundary conditions) of linear size $L$. This is the simplest case, as far as finite-size effects are concerned, since one then avoids the additional difficulties related to the presence of the system's boundary. The first detailed finite-size scaling theory was developed by Ferdinand and Fisher in 1969 in a classical paper published in Phys. Rev. 185, 832. They used the exact results available for the two-dimensional Ising model to analyze finite-size effects on the free energy and specific heat. The specific heat of a finite-volume Ising model does not diverge. However, it still displays a sharp increase in a narrow region around the "true" critical point $T_c$. Ferdinand and Fisher proposed to define the finite-volume analogue $T_c(L)$ of the critical temperature as the value of the temperature at which the specific heat is maximal. They then argued that $$ T_c - T_c(L) \sim L^{-1/\nu}\,, $$ where $\nu$ is the critical exponent of the correlation length, which is given by $\nu=1$ for the two-dimensional model.
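The extrapolation step described above can be sketched numerically. Assuming the shift law $T_c(L) = T_c + a\,L^{-1/\nu}$ with $\nu = 1$, a linear least-squares fit in $L^{-1/\nu}$ recovers the infinite-volume critical temperature from the intercept. The data below are synthetic (generated to lie exactly on the shift law), not actual simulation output:

```python
import math

def extrapolate_tc(sizes, tcs, nu=1.0):
    # least-squares line tc(L) = tc_inf + a * L**(-1/nu);
    # the intercept is the infinite-volume estimate
    xs = [L ** (-1.0 / nu) for L in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(tcs) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, tcs)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - a * mx

# Onsager's exact 2d Ising critical temperature (units J = k_B = 1)
tc_exact = 2.0 / math.log(1.0 + math.sqrt(2.0))   # ~2.269

sizes = [8, 16, 32, 64]
fake_tc = [tc_exact + 0.5 / L for L in sizes]     # synthetic pseudo-data
print(extrapolate_tc(sizes, fake_tc))             # recovers ~2.269
```

In practice one would feed in the measured specific-heat-maximum temperatures $T_c(L)$ for several lattice sizes; the quality of the extrapolation then depends on how well the leading shift law describes the data at the sizes simulated.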
{ "domain": "physics.stackexchange", "id": 24099, "tags": "statistical-mechanics, simulations, phase-transition, ising-model, critical-phenomena" }
How do I use ML models to estimate current stress level based on past data?
Question: I am new to machine learning and I cannot understand the difference between estimating the current stress level and predicting future stress levels based on historical data. I have been told these are two different problems and require different approaches. I have a dataset with the features and the stress level column, which is the target. Now, if I want to estimate the current stress level, does this mean I have to generate lag-based features and then use a time-based split for training and testing? Answer: Yes, these two scenarios are different. Estimating current stress level - Your target variable here is stress level and the features are heart rate and blood pressure. To estimate the current stress level, the feature values are already known to you: they are measured in the present, since you can record heart rate and blood pressure with devices. So you can directly apply machine learning models, or even operations research mathematical models, to calculate the stress level. Estimating future stress level - Your target variable is still stress level and the features are still heart rate and blood pressure. To estimate a future stress level, you would not know the future heart rates and blood pressures; those are unknown to you. The extra step here is to impute these unknown values. So you would have to consider different imputation techniques for these features; one common approach is to use the lag values of both heart rate and blood pressure, and then use that dataset with your machine learning models.
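The "lag-based features" the question asks about can be generated with a few lines. This is a hypothetical sketch (pure Python, with made-up heart-rate numbers) of turning a series into supervised rows for the forecasting setup:

```python
def lag_features(series, lags=(1, 2, 3)):
    # each row of X holds [x[t-1], x[t-2], x[t-3]]; the target is x[t]
    X, y = [], []
    for t in range(max(lags), len(series)):
        X.append([series[t - k] for k in lags])
        y.append(series[t])
    return X, y

heart_rate = [70, 72, 71, 75, 78, 76, 80]   # made-up values
X, y = lag_features(heart_rate)
print(X[0], y[0])   # [71, 72, 70] 75
```

For estimating the *current* level, no lags are needed: the current feature values feed the model directly. A time-ordered train/test split matters only in the forecasting setup, where a shuffled split would leak future information into training.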
{ "domain": "datascience.stackexchange", "id": 12036, "tags": "machine-learning, time-series, regression, feature-engineering, predict" }
Time dilation when observed from each frame
Question: I have just begun with special relativity so pardon me if my question seems too obvious. In the books I am following, there is an example of time dilation which says: The half life of muons is $\tau$ (in the proper frame of the muon). We have a muon beam moving with a speed of $0.999 c$ and so the time taken for the beam intensity to reduce to half, in the lab frame, would be $\tau \gamma$ where $\gamma$ is the Lorentz factor for this beam. However we can also say that the people in the lab frame would have aged only $\frac{\tau}{\gamma}$ with respect to the muon frame, because the muons feel that the people in the lab frame are going backwards at the same speed. Correct me if I am wrong, but I feel that it is a contradiction, that seen one way the observers in the lab have aged $\tau \gamma$, and the other way $\frac{\tau}{\gamma}$. Where am I going wrong? Answer: Yes, you're right that something is not OK here. How come time in one frame seems to slow down, while in the other one it seems to speed up? Short answer: it's not what happens. Actually, from both frames, the time in the other frame appears to slow down. While this sounds impossible at first, it turns out special relativity comes with a whole bunch of strange phenomena. This is one of them. Have you heard of the Minkowski diagram? It's an intuitive, visual representation of the relation of different reference frames. For any reference frame, events (=points in spacetime) appear to happen at the same time only if the line that connects them is parallel with the x (space) axis of the frame. From the black frame, A and B appear to happen at the same time, and if you look closely, OB is shorter than OA. Time seems to run slower in the blue frame. But from the blue frame, B and C appear to happen at the same time. OC is shorter than OB, so time seems to run slower in the black frame.
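The Lorentz factor quoted in the question is easy to evaluate; a small numerical sketch for $v = 0.999c$:

```python
import math

def lorentz_gamma(beta):
    # beta = v / c
    return 1.0 / math.sqrt(1.0 - beta * beta)

g = lorentz_gamma(0.999)
print(round(g, 2))   # ~22.37: the lab frame sees the beam's half-life as tau * gamma
```

So at $0.999c$ the muon beam's half-life, as measured in the lab, is stretched by a factor of about 22 relative to the proper-frame value $\tau$.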
{ "domain": "physics.stackexchange", "id": 51960, "tags": "special-relativity, time-dilation" }
How can we let a car, of which the front (and rear) wheels are tightly connected by a solid rod, make a turn?
Question: Consider a car with four equal wheels. The front (and rear) wheels are tightly connected by a solid rod (i.e. both the front and rear wheels rotate with the same angular velocity). How can we let this car make a turn? The wheels on the right (or left) side must rotate with the same angular velocity as the wheels on left (or right) side, that's for sure. Do we, for example, have to give full gas to make both wheels on one side slip, while the wheels on the other side are not slipping but rolling (by placing the wheels on one side on a different material than on the other side)? What are the possibilities? Answer: In short, the car cannot make a turn without one wheel skidding in this case if the axle rigidly links both wheels and forces the constraint you speak of. Do we, for example, have to give full gas to make both wheels on one side slip, while the wheels on the other side are not slipping but rolling (by placing the wheels on one side on a different material than on the other side)? Something like this is true; it is the role of the Differential Gearbox to allow both wheels to spin at different angular speeds such that their average angular speed is held constant (and equal to a fixed multiple of that of the driveshaft) and also so that torque can be imparted to both wheels notwithstanding the different angular speeds. Fun Fact: Although the Differential's name refers only to different angular speeds and has nothing to do with differential geometry, the Differential is an essential device in realizing a South Pointing Chariot, which can be used in an elegant intuitive explanation of the notion of parallel transport and connexion in geometry, see: Mariano Santander, "The Chinese South-Seeking chariot: A simple mechanical device for visualizing curvature and parallel transport", Am. J. Phys. 
60 #9 pp782-787 (1992) Indeed, a lone wheel of nonzero width cannot make a turn without skidding for the same reasons you have identified, unless the wheel has a conical profile of the correct angle. This latter phenomenon is much less extreme because the path curvature varies much less over the tyre's width than it does between wheels. Indeed, the elastic deformation wrought in the tyre owing to the nonuniform path curvature across the tyre's width is what begets the steering torque that a wheel tracking a curved path imparts on the car in steering mechanisms. This unavoidable slight skid is what leads to inevitable tyre wear from turns. It is also highly apparent if you drive slowly on very polished surfaces, such as polished concrete in some underground carparks; as you turn, you can hear a loud, squeaky-rubber kind of sound rather like the one you hear when stroking a blown-up rubber balloon.
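To see why a rigid axle must skid, compare the speeds the two wheels need in a turn. A purely kinematic sketch (hypothetical numbers; slip, tyre width, and dynamics ignored):

```python
def wheel_speeds(v, turn_radius, track):
    # v: speed of the car's centre along the turn;
    # track: lateral distance between the left and right wheels
    inner = v * (turn_radius - track / 2.0) / turn_radius
    outer = v * (turn_radius + track / 2.0) / turn_radius
    return inner, outer

inner, outer = wheel_speeds(v=10.0, turn_radius=10.0, track=1.5)
print(inner, outer)   # 9.25 10.75 -- a rigid axle forces both to the same value
```

The differential lets each wheel run at its required speed while keeping their average tied to the driveshaft; without it, the difference shown above has to be absorbed by one wheel skidding.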
{ "domain": "physics.stackexchange", "id": 45844, "tags": "rotational-dynamics, friction" }
Preventing oversell, allocation of limited resources with overlapping properties
Question: I am trying to solve the problem of preventing oversell of limited resources. Consider resources (people) who are described by a set of properties, where each property belongs to a different category (example properties from four categories: male, age 25-30, 2 children, interested in games). Buyers want to allocate access to resources. Buyers can specify a subset of categories and one property from each category (example: allocate 1000 males, age 25-30, or allocate 100 females, age 25-30, interested in music). In my real-life example I have 6m+ possible sets of properties (profiles), where for each set of properties I know how many profiles exist. My initial approach was to build a graph like the one below: and then traverse it using edge weights, for instance validating whether demand for 100 females, age2 can be satisfied: check if size(female, age2) < 100 for each parent: check if size(parent) < 100 and go to 2. for each child: check if size(child) < 100 * weight(edge(node, child)) go to 1. (The above algorithm is simplified, as it does not prevent visiting the same node multiple times.) It all works fine when the graph is small; however, when the number of nodes and edges (dependencies) between nodes (profile universe groups) grows, it does not scale very well. Consider this example: a large graph, 6m nodes, 20m+ edges; a buyer wants to allocate 1000 males (and there are only males and females in the gender category); the algorithm would start with the top-level 'male' node, which probably has 10m+ outgoing edges, so 10m+ checks would be required (and probably each of those 10m outgoing edges has incoming edges which need to be checked as well). I tried to find a different approach but failed. I tried to google existing solutions, but it seems I am unable to even name the problem properly. Any reference to similar problems would be a good starting point for me. Thanks for comments/help.
Two more graphs to present the exponential growth of the graph: 3 categories 4 categories Update Regarding size, assuming 8 categories of properties where each category has 2, 6, 6, 6, 6, 8, 1140, 150 values respectively, then the estimated number of profiles is: 2*6^4*8*1140*150 ~= 3.5 * 10^9. Number of nodes in graph: at least 7 * 10^9, number of edges in graph: at least 140 * 10^9. Update #2 The formula for the number of nodes is: $\sum_{i<n}\prod_{k<i \atop j_1, j_2, ..., j_k < n} s_{j_{1}} ... s_{j_{k}}$ where $n$ is the number of categories and $s_x$ is the size of category $x$. So in my example there would be 11'169'108'657 nodes. Update #3 As per @Raphael's advice - I have reduced the number of nodes and now the formula is: $\sum_{i<n-M}\prod_{k<i \atop j_1, j_2, ..., j_k < n} s_{j_{1}} ... s_{j_{k}}$ where $M<n$, and it is assumed that the distribution of resources across the smallest slices of the universe is equal. At the same time I removed a lot of edges from the graph. Example of sub-graph size reduction: Answer: So the data structure holding pointers for every possible combination of classifiers is huge. Sure, but why build it at all? Don't overengineer this! Just store the profiles in a database and do one (linear time) filtering sweep for each query, i.e. select/count on demand. For a few million records, that should require no further preprocessing. If the number of requests is large and/or you need really small response times, you can think about caching, or creating equivalence classes along some popular classifiers, or along classifiers with few large classes. Then, the linear sweep has to be done only on small lists. For example, you can divide your database along gender and age (assuming these are included in most customer queries) $\qquad \{m, w, o\} \times \{0..5, 10..15, \dots, 95..100\}$ where the values obviously depend on your data. Then, each query will require only a few of these small lists, and can even parallelise if you store the individual chunks separately.
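The answer's "one linear filtering sweep per query" can be sketched in a few lines. The profile counts below are made up; a real table would hold the 6m+ profiles from the question:

```python
# toy profile table: (properties, how many such people exist)
profiles = [
    ({'gender': 'f', 'age': '25-30', 'interest': 'music'}, 40),
    ({'gender': 'f', 'age': '25-30', 'interest': 'games'}, 70),
    ({'gender': 'm', 'age': '25-30', 'interest': 'games'}, 120),
]

def available(query):
    # one linear sweep: a profile matches if it agrees with every
    # (category, value) pair the buyer specified
    return sum(count for props, count in profiles
               if all(props.get(k) == v for k, v in query.items()))

def can_allocate(query, wanted):
    return available(query) >= wanted

print(available({'gender': 'f', 'age': '25-30'}))   # 110
print(can_allocate({'gender': 'm'}, 1000))          # False
```

This replaces the graph traversal entirely: the cost per query is one pass over the profile rows, regardless of how many category combinations exist, which is exactly why no pointer structure over all combinations needs to be built.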
{ "domain": "cs.stackexchange", "id": 1452, "tags": "algorithms, databases, counting" }
Is a free particle one on which there's no NET force or one on which there's no force at all?
Question: I'm getting different definitions from different sources. Some claim that free particles have no forces acting on them at all (i.e. even if a particle has forces acting on it such that they cancel, it's not free). Other sources explicitly state that there is no net force (like this). Other sources, when I google this, state that free particles are "free from external influence" - which is kind of vague and can be interpreted either way. Can someone resolve the confusion? Answer: Forces often come from force fields like gravitational or electromagnetic. These differ in different parts of space, so they may cancel in one point but not in another. Now, suppose a particle is at a point where all forces cancel (and there's a non-empty set of forces acting on it in superposition). Let's now perturb its position. In general, the forces now won't cancel, and the particle will experience acceleration. Suppose that the forces are all directed into the initial point where no net force acted on the particle. Then this initial point would be the point of stable equilibrium. Now, if the particle has too small a kinetic energy, it will be bound in the potential well, unable to escape it. I think it's fair to say that in the case described above a particle, even at the point of equilibrium, where all forces cancel, is not free (it can't escape arbitrarily far given arbitrarily long time). So, I think the case where there's no net force, but the set of forces is non-empty, shouldn't be included in the definition of a free particle.
{ "domain": "physics.stackexchange", "id": 65855, "tags": "classical-mechanics" }
How to fill an ArrayList of ArrayLists with a Left Join?
Question: I have a class Employee that contains an ArrayList of Projects. I'm storing the Employees in one table, and the Projects in another. I'm trying to find the best way to create an ArrayList of Employee objects based on a result set. Simply creating an ArrayList of Employees based on a result set is pretty straightforward, but I'm finding that filling an ArrayList of Employees that each contains an ArrayList of Projects isn't so simple. Right now, my getEmployees() function is using two nested SQL queries to accomplish this, something like this: public ArrayList<Employee> getEmployees() { PreparedStatement ps1 = null; ResultSet rs1 = null; PreparedStatement ps2 = null; ResultSet rs2 = null; ArrayList<Employee> employees = new ArrayList<Employee>(); String query1 = "SELECT * FROM employees " + "ORDER BY employee_id ASC"; try { ps1 = conn.prepareStatement(query1); rs1 = ps1.executeQuery(); while (rs1.next()) { Employee employee = new Employee(); int employeeID = rs1.getInt("employee_id"); employee.setEmployeeID(employeeID); employee.setName(rs1.getString("employee_name")); // Get projects for this employee ArrayList<Project> projects = new ArrayList<Project>(); String query2 = "SELECT * FROM projects " + "WHERE employee_id = ?"; ps2 = conn.prepareStatement(query2); ps2.setInt(1, employeeID); rs2 = ps2.executeQuery(); while (rs2.next()) { Project project = new Project(); project.setProjectID(rs2.getInt("project_id")); project.setName(rs2.getString("project_name")); projects.add(project); } employee.setProjects(projects); employees.add(employee); } return employees; } catch (SQLException e) { e.printStackTrace(); } finally { try { // close result sets and prepared statements } catch (SQLException e) { e.printStackTrace(); } } return null; } The above code works, but it seems messy to me. Is there a way to do this without having to nest two separate SQL queries?
I've tried using a LEFT JOIN to get a single result set back, but I was unable to figure out a way to use that single result set to fill the ArrayList of Employees, each containing an ArrayList of Projects. EDIT: Assume the tables have the following structures: employees table: employee_id int(11) NOT NULL PRIMARY KEY AUTO_INCREMENT employee_name varchar(60) NOT NULL projects table: project_id int(11) NOT NULL PRIMARY KEY AUTO_INCREMENT employee_id int(11) NOT NULL project_name varchar(60) NULL Answer: A standard way to do this is through break-processing, where you track one value, and when it changes, you do something special.... but you may find it easier to do a more unstructured system: Map<Integer, List<Project>> employeeProjects = new HashMap<>(); Map<Integer, Employee> employees = new TreeMap<>(); // TreeMap ... sorted by employeeID // join the tables. String select = " select e.employee_id, e.employee_name, p.project_id, p.project_name " + " from employees e left outer join projects p on e.employee_id = p.employee_id" + " order by e.employee_id, p.project_name"; // do the select..... while (rs.next()) { Integer employeeID = rs.getInt("employee_id"); Employee emp = employees.get(employeeID); if (emp == null) { emp = new Employee(); emp.setName(rs.getString("employee_name")); emp.setEmployeeID(employeeID); employees.put(employeeID, emp); // create a new list for this employee employeeProjects.put(employeeID, new ArrayList<Project>()); } String projectName = rs.getString("project_name"); if (!rs.wasNull()) { List<Project> projects = employeeProjects.get(employeeID); Project proj = new Project(); proj.setID(rs.getInt("project_id")); proj.setName(projectName); projects.add(proj); } } rs.close(); Then, once you have the data structured the way you want, you can: List<Employee> result = new ArrayList<>(); for (Employee emp : employees.values()) { emp.setProjects(employeeProjects.get(emp.getEmployeeID())); result.add(emp); } return result;
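The same break-processing idea translates directly to other languages. A hypothetical Python sketch, grouping joined rows that arrive sorted by employee (as the SQL ORDER BY guarantees):

```python
from itertools import groupby
from operator import itemgetter

# (employee_id, employee_name, project_id, project_name) from a LEFT JOIN;
# NULLs (None) appear for employees with no projects
rows = [
    (1, 'Ann', 10, 'Apollo'),
    (1, 'Ann', 11, 'Borealis'),
    (2, 'Bob', None, None),
]

employees = []
for (emp_id, name), grp in groupby(rows, key=itemgetter(0, 1)):
    # drop the NULL padding rows the LEFT JOIN produces
    projects = [(pid, pname) for _, _, pid, pname in grp if pid is not None]
    employees.append({'id': emp_id, 'name': name, 'projects': projects})

print(employees[1])   # {'id': 2, 'name': 'Bob', 'projects': []}
```

Note that `groupby` only merges *consecutive* rows with equal keys, which is exactly why the ORDER BY on employee_id matters: it is the break-processing assumption made explicit.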
{ "domain": "codereview.stackexchange", "id": 6203, "tags": "java, sql" }
How to get enclosed spaces from a series of connected nodes
Question: I have a bunch of connected walls in a list and the data for them is like so: Wall { Node A; Node B; } Node { float x; float y; } I want to find the rooms from the connected walls as an array of connected points to represent each room's perimeter. This is an example visually of what i am trying to find: The red dots are the nodes, and the lines are the walls, the numbers are the identified rooms that the walls created. The walls can be at any angle, not sure if that matters though. I am wondering what algorithms exist that can help me solve this problem, what is the best way to approach this? Answer: One approach is to use a data structure for representing a planar graph. Each node corresponds to a vertex in the graph, and each wall corresponds to an edge in the graph. Then, you are looking for the set of faces in this graph. Standard data structures for representing planar graphs should make it easy to retrieve the set of faces. For instance, one standard data structure is the DCEL data structure. It explicitly contains one record for each face, so once you have converted this to a DCEL data structure, then it is straightforward to iterate over all faces. There are standard algorithms for constructing a DCEL data structure from the set of vertices and edges. Or, instead of a DCEL, it looks like you could alternatively use a quad-edge data structure or a winged-edge data structure. The keyword is to look for data structures for polygon meshes. This has been studied in great detail in the computer graphics and computational geometry fields. Alternatively, you could solve your problem directly. For each node, find all of the walls associated with it, sort them by their angle, and store that sorted list associated with the node. After doing that for all nodes, then you can iterate through all rooms.
Pick a wall, then you can find the room to the "right" of that wall by simulating the left-hand rule: stand to the right of that wall, put your left hand on the wall, and walk forward, going in a circle around the perimeter of the room. To simulate that rule, as you walk forward, you'll walk to the endpoint of the current wall; at that node $d$, to find the next wall you proceed to, look in the sorted list of walls incident on $d$, and find the next one in sorted order, then follow that wall. It might be a bit trickier to work out the details of this, than to use an existing implementation of a DCEL data structure.
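The "sort walls by angle at each node, then walk with one hand on the wall" procedure described above can be sketched directly. A hypothetical Python version; every directed wall is assigned to exactly one traced cycle, and one of the returned cycles is always the unbounded outer face:

```python
import math
from collections import defaultdict

def trace_faces(nodes, walls):
    # nodes: {id: (x, y)}; walls: list of (a, b) node-id pairs
    nbrs = defaultdict(list)
    for a, b in walls:
        nbrs[a].append(b)
        nbrs[b].append(a)
    for v in nbrs:   # sort each node's neighbours counter-clockwise
        nbrs[v].sort(key=lambda w: math.atan2(nodes[w][1] - nodes[v][1],
                                              nodes[w][0] - nodes[v][0]))
    faces, seen = [], set()   # seen: directed walls already on a face
    for a, b in walls:
        for u, v in ((a, b), (b, a)):
            if (u, v) in seen:
                continue
            face, cu, cv = [], u, v
            while (cu, cv) not in seen:
                seen.add((cu, cv))
                face.append(cu)
                ring = nbrs[cv]                         # arrived at cv from cu:
                i = ring.index(cu)                      # continue with the next
                cu, cv = cv, ring[(i - 1) % len(ring)]  # wall clockwise of cu
            faces.append(face)
    return faces

square = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
print(trace_faces(square, [(0, 1), (1, 2), (2, 3), (3, 0)]))
# two cycles: the room itself and the unbounded outer face
```

The face count can be sanity-checked with Euler's formula $V - E + F = 2$ for a connected planar graph; for the square, $4 - 4 + F = 2$ gives the two faces traced above.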
{ "domain": "cs.stackexchange", "id": 15182, "tags": "algorithms, graphs" }
Binary encoding and its interpretation in Python
Question: I have a column named Street that has 2 values: Paved and Gravel. Here is what print(train[binary_columns[0]].unique().tolist()) gives me: ['Pave', 'Grvl'] I want to encode these values in binary like this: df['Street'] = df['Street'].replace(['Pave', 'Grvl'], [1, 0]) But I wonder if this is a good idea. Wouldn't the computer interpret this as Pave > Grvl? How does the computer differentiate between binary and integer encoding? Answer: Your categorical variable has two levels, so there is no actual difference between dummy-coding and simply entering the variable into the analysis. That is, to dummy code you would create one new variable with two values, but your original variable is already one variable with two values. Dummy-coding is important for variables with more than two possible values. So, in this case the computer won't consider Pave > Grvl. But if a variable has more than two levels, then you should use dummy variables. For your data, you can use pandas.get_dummies() or sklearn's one-hot encoder to achieve your result.
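A minimal illustration of the answer's point, in plain Python with hypothetical column values: with two levels, one 0/1 column already carries all the information, while three or more levels call for one indicator column per level:

```python
street = ['Pave', 'Grvl', 'Pave', 'Grvl']

# two levels: a single 0/1 column is already a full (dummy) encoding
binary = [1 if s == 'Pave' else 0 for s in street]

# three or more levels need one indicator column per level,
# so no spurious order like Flat < Gable < Hip is implied
def one_hot(values):
    levels = sorted(set(values))
    return [[int(v == lvl) for lvl in levels] for v in values]

roof = ['Flat', 'Gable', 'Hip', 'Gable']   # hypothetical 3-level column
print(one_hot(roof))
```

In a real pipeline the same two encodings correspond to `df[col].map(...)` for the binary case and `pandas.get_dummies()` (or sklearn's one-hot encoder) for the multi-level case, as the answer notes.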
{ "domain": "datascience.stackexchange", "id": 5385, "tags": "machine-learning, python, dataframe, kaggle, binary" }
Why does the electric field dominate in light?
Question: I read a book on the wave property of light where the author mentioned that the electric field, instead of the magnetic field, dominates the light property. I don't understand why. In Maxwell's theory, a light field has an electric and magnetic field at the same time and they are perpendicular. Also, in some books, where they consider the polarization, they only use the electric field as an example. For example, if the vibration of the electric field is up and down, it cannot go through a polarizer which is oriented 90 degrees to the vibration direction of the field, so no light goes through the polarizer. But what happened to the magnetic field? The magnetic field is perpendicular to the electric field, so in this case, the magnetic field should pass the polarizer, and we should have outgoing light -- but we don't. Why is this so? Answer: Materials, and certainly materials transparent to light, have few magnetic properties. They are not composed of atoms that have strong ferromagnetism. But all atoms have strong electric fields. This means that light, as it goes through a transparent medium, has only a small probability of interacting with the medium via its magnetic field component; the medium is mainly transparent to it. Take the wire grid polariser as a simpler example It consists of a regular array of fine parallel metallic wires, placed in a plane perpendicular to the incident beam. Electromagnetic waves which have a component of their electric fields aligned parallel to the wires induce the movement of electrons along the length of the wires. Since the electrons are free to move in this direction, the polarizer behaves in a similar manner to the surface of a metal when reflecting light; and the wave is reflected backwards along the incident beam (minus a small amount of energy lost to joule heating of the wire). A wire-grid polarizer converts an unpolarized beam into one with a single linear polarization. Coloured arrows depict the electric field vector.
The diagonally-polarized waves also contribute to the transmitted polarization. Their vertical components are transmitted, while the horizontal components are absorbed and reflected. The magnetic component in this setup cannot interact with the free electrons in the metal of the wire to affect the absorption of the light the way the electric component can.
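The convention of describing polarization by the electric field shows up in Malus's law, which is stated for the angle between the polarizer's transmission axis and the E-field direction. A quick numerical sketch for an ideal polarizer:

```python
import math

def transmitted_fraction(theta_deg):
    # Malus's law for an ideal polarizer: I/I0 = cos^2(theta),
    # theta measured between the E-field and the transmission axis
    return math.cos(math.radians(theta_deg)) ** 2

print(transmitted_fraction(0))    # 1.0 (E-field aligned with the axis)
print(transmitted_fraction(90))   # ~0  (crossed polarizer blocks the light)
```

This is exactly the situation in the question: at 90 degrees the transmitted intensity is zero even though the magnetic field is then aligned with the axis, because it is the electric field that drives the electrons in the material.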
{ "domain": "physics.stackexchange", "id": 77469, "tags": "electromagnetism, polarization, electromagnetic-radiation" }
Dynamic variables in PHP from enum
Question: Related to the question Verb conjugator for French, I was asked a question on whether one could summarize all the $exceptionIs<NAME_OF_EXCEPTION> = $exceptionmodel-> getValue() === ExceptionModel::NAME_OF_EXCEPTION lines. In other words, can one make dynamic variables out of the comparison of a value combined with all values of an enum? As I'm a bit rusty in php, I made this version, and ask you whether this is a good solution, or if it needs major refactoring. To avoid posting other peoples code, I've mocked the ExceptionModel class, and replaced the output of ExceptionModel::getConstants() with a predefined array. This is to give you working code to review. The original code is located on github, as the classes ExceptionModel and Enum. <?php // A mockup of the original ExceptionModel inheriting from Enum class ExceptionModel { const NO_EXCEPTIONS = 'no_exception'; const ALLER = 'aller'; const AVOIR_IRR = 'avoir_irr'; const ETRE_IRR = 'etre_irr'; // ... many more lines ... function getConstants() { // ... returns array of constants ... } } function myFunction($exception) { // In final version, it should use the following line // $exceptionModels = ExceptionModel::getConstants(); // ... but for now, use this array $exceptionModels = array ( "NO_EXCEPTIONS" => 'no_exception', "ALLER" => 'aller', "AVOIR_IRR" => 'avoir_irr', "ETRE_IRR" => 'etre_irr' ); // Generate dynamic variables testing for equality of // of $exception and an Enum value from ExceptionModel foreach ($exceptionModels as $constName => $constValue) { ${'exceptionIs' . $constName} = $exception === $constValue; } if ($exceptionIsALLER) { echo "ExceptionModel is aller. "; } else if ($exceptionIsAVOIR_IRR) { echo "ExceptionModel is avoir_irr"; } else { echo "ExceptionModel was neither, it is: " .
$exception; } echo "\n"; } echo "<pre>"; myFunction("none"); myFunction(ExceptionModel::ALLER); myFunction(ExceptionModel::AVOIR_IRR); echo "</pre>"; ?> This correctly produces the output: ExceptionModel was neither, it is: none ExceptionModel is aller. ExceptionModel is avoir_irr Answer: An alternative approach to creating the temporary variables, is to use reflection and automatically define test functions for equality to any given enum value. If extending the base Enum class, or the ExtensionModel class, with the following function: function __call($func, $param) { $func_prefix = substr($func, 0, 2); $func_const = substr($func, 2); if ($func_prefix == "is") { $reflection = new ReflectionClass(get_class($this)); return $this->getValue() === $reflection->getConstant($func_const); } } Then it is legal to do stuff like in the following test function: function myFunction(ExceptionModel $exceptionModel, Tense $tense) { if ($exceptionModel->isALLER() && $tense->isPresent() ) { ... do something ... } given that both ExceptionModel and Tense inherit from the Enum class. In other words, now you can do $enumobject->is<ENUM_VALUE>() for any enum value inheriting from Enum.
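The reflection trick in the answer has analogues in other languages; for comparison, a hypothetical Python sketch using `__getattr__` to synthesise `is<CONST>()` predicates on demand, mirroring the PHP `__call` approach:

```python
class EnumValue:
    def __init__(self, constants, value):
        self._constants = constants   # name -> value, like getConstants()
        self._value = value

    def __getattr__(self, name):
        # called only for missing attributes: synthesise isNAME()
        # for every known constant NAME
        if name.startswith('is') and name[2:] in self._constants:
            return lambda: self._value == self._constants[name[2:]]
        raise AttributeError(name)

consts = {'NO_EXCEPTIONS': 'no_exception', 'ALLER': 'aller',
          'AVOIR_IRR': 'avoir_irr', 'ETRE_IRR': 'etre_irr'}
e = EnumValue(consts, 'aller')
print(e.isALLER(), e.isAVOIR_IRR())   # True False
```

As with the PHP version, asking about an unknown constant fails loudly (here with AttributeError), which is preferable to the silent undefined-variable behaviour of the dynamic-variable approach in the question.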
{ "domain": "codereview.stackexchange", "id": 17005, "tags": "php" }
How the object will fall?
Question: If we push a regular object from a table (take a cuboid), can you predict whether it will turn while falling, and which face of it will hit the ground? For example, if you accidentally push a book from a table and it falls down, why does it sometimes fall face down and not face up? Can you predict this? Answer: In order to predict which side an object will land on, generally we need a few pieces of information: The shape of the object and its size. This determines how many "sides" the object has in the first place (where "sides" is defined here to be "configurations that an object can land in, up to a rotation about the vertical axis and an arbitrary translation"). For some objects, the "sides" are simple to define: a cube has six, a tetrahedron has four, and an octahedron has eight, for example. For others, they may not look like "sides" in the normal sense: a sphere has one "side" by our definition, because there is one configuration that it can land in, and a cylinder, even a thin one like a coin, has three sides, since it can technically land on the round edge. The shape and size of the object also determine the effect of air resistance, which is a very significant factor in objects with large cross-sectional area and low mass (like a feather). The weight distribution in the object. This determines its center of gravity, which in turn determines the "tipping points" for each side and influences how the object ultimately reacts when it hits the ground. For example, a pair of "loaded dice" gives you different results than a pair of regular dice, because the loaded dice have an uneven weight distribution that makes the tipping-point angle quite large for one of the sides. That side is likely to be on the bottom when the die lands. The initial conditions when the object was pushed from the table. There are a few relevant quantities here: initial height of the object above the ground (i.e.
the height of the table), the initial horizontal and vertical velocities of the object, the initial angular velocity of the object and the axis of rotation, and the initial orientation of the object as it leaves the table. The initial conditions, along with the object's shape, size, and weight distribution, determine the object's final orientation when it first hits the ground. How the object reacts when it hits the ground, and how the ground reacts when it is hit by the object. This includes all of the information relating to the elasticity (i.e. does it bounce?) and/or fracture toughness (i.e. does it shatter or break?) of both the object and the ground. This information matters from the first impact with the ground to the final orientation. Once we have all of that information, we can use a simulation engine that takes all of these into account to predict with reasonable accuracy how an object will land.
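The role of the initial conditions can be made concrete. Ignoring air resistance and everything that happens at impact, the orientation at first contact follows from just the fall height and the spin; a hypothetical sketch with made-up numbers:

```python
import math

def landing_angle(height, omega, theta0=0.0, g=9.81):
    # orientation (radians) when the object first reaches the ground,
    # for a fall from `height` with constant spin `omega` (rad/s);
    # air resistance and the impact itself are ignored
    t_fall = math.sqrt(2.0 * height / g)
    return (theta0 + omega * t_fall) % (2.0 * math.pi)

# a book nudged off a 0.8 m table while spinning at 8 rad/s
print(round(landing_angle(0.8, 8.0), 2))   # ~3.23 rad: roughly flipped over
```

Tiny changes in the push change `omega` and `theta0`, and hence the landing orientation, which is why the outcome of an accidental nudge is so hard to predict in practice even though the free-fall part is deterministic.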
{ "domain": "physics.stackexchange", "id": 54882, "tags": "newtonian-mechanics, forces" }
Add one to integer represented as an array
Question: I am practicing interview questions and have written the following code for the given task.

Task: Given a non-negative number represented as an array of digits, add 1 to the number (increment the number represented by the digits). The digits are stored such that the most significant digit is at the head.

Code:

class Solution:
    # @param A : list of integers
    # @return a list of integers
    def plusOne(self, A):
        Rev_A = A[::-1]
        i = 0
        while i <= len(A)-1:
            if Rev_A[i] != 9:
                Rev_A[i] += 1
                carry = 0
                break
            else:
                Rev_A[i] = 0
                carry = 1
            i += 1
        if carry == 1:
            Rev_A.append(1)
        while True:
            if Rev_A[-1] == 0:
                Rev_A.pop()
            else:
                break
        return Rev_A[::-1]

    def __init__(self):
        self.list = []

Test cases:

if __name__ == "__main__":
    A = Solution()
    ## Test Cases
    print A.plusOne([0])
    print A.plusOne([9,0,0])
    print A.plusOne([6,9,9])
    print A.plusOne([9,9,9])
    print A.plusOne([0,9,9,9])
    print A.plusOne([0,0,0,1,2,3])

How can this code be better? Answer: Run the code through pycodestyle, then understand and apply all of the recommended changes. This is tedious the first couple times, but you'll learn to write much more idiomatic Python. The class is pointless - you might as well just write a bare function instead. __init__ also creates a field which is then never used. Names like A make the code harder to read. Naming your variables what they actually are makes the code much easier to read. I would swap around your first if/else to avoid the negative comparison. You use a number for carry, but it might as well be boolean because it can never be more than 1. Personally I consider this code hard to understand. You reverse the list twice (making a copy both times), add and remove to it.
This is my take on it:

def increment(digits):
    for index, digit in reversed(list(enumerate(digits))):
        if digit == 9:
            digits[index] = 0
        else:
            digits[index] += 1
            return digits
    digits.insert(0, 1)
    return digits

if __name__ == '__main__':
    print(increment([0]))
    print(increment([9,0,0]))
    print(increment([6,9,9]))
    print(increment([9,9,9]))
    print(increment([0,9,9,9]))
    print(increment([0,0,0,1,2,3]))

- Walk the list in reverse, one element at a time
- Set trailing nines to zero
- If any number is not nine, increment it and we're done
- If we get through the whole list, it must have been all nines, so we add a one to the start and return

This code treats an empty list as zero, which may or may not be allowed in the exercise.
{ "domain": "codereview.stackexchange", "id": 30499, "tags": "python, array" }
What's the approximation factor of this Max k-Cut approximation?
Question: I'm thinking about an approximation algorithm for Max k-Cut. A simple and a more involved approximation algorithm can be found here. The Max k-Cut problem is defined as follows. The input is a graph G = (V, E) and an integer k, with n = |V|; the question asks for a partition of G into k disjoint sets such that the total number of edges between the parts is maximized. My algorithm is a greedy strategy and works as follows (someone else may already have had a similar idea, but I'm not aware of it): Start with each vertex in a group by itself, and at each step, combine the two groups that have a minimum number of edges between them. Repeat this until the number of groups shrinks to $k$. Is there a known approximation guarantee for this algorithm? Answer: This is called the edge-contraction heuristic, for which an approximation guarantee of $(k-1)/(k+1)$ can be shown. See section 3 of the work by Kahruman et al. for reference. Imagine that whenever we combine two nodes u, v into a group, instead of forming a group we contract the edge (u, v) between these two nodes, and update the weights of all the edges incident to the newly formed node. Repeat the procedure until there are only k nodes left. Lemma 3.1 in the above paper shows that the sum of weights of the first $i$ edges being contracted (denoted $W_i$) satisfies the following inequality: $$W_i \leq \frac{2iW}{(n-1)(n-i+1)} \text{,}$$ where $W$ is the sum of weights of all edges. This can be derived from $$W_{i+1} \leq W_i + \frac{W-W_i}{\binom{n-i}{2}} \text{,}$$ which holds because we always contract the lightest pair, so the weight of the contracted edge cannot exceed the average weight.
One can see that the cut formed by this algorithm has weight $W_C = W-W_{n-k}$, and by observing that $W \geq W^*$, where $W^*$ is the weight of the optimum cut, we have our desired approximation ratio: $$W_C \geq W-\frac{2(n-k)W}{(n-1)(k+1)} \geq \frac{k-1}{k+1}W^* \text{.}$$ For k=2, the algorithm gives a 1/3-approximation to the Max cut problem.
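The greedy strategy from the question is easy to prototype. Below is a hedged sketch in Python (my own illustration for unweighted graphs, not code from the cited paper; function names and tie-breaking are arbitrary choices):

```python
import itertools

def crossing(a, b, edges):
    # number of edges with one endpoint in group a and the other in group b
    return sum(1 for u, v in edges if (u in a and v in b) or (u in b and v in a))

def greedy_max_k_cut(n, edges, k):
    """Edge-contraction heuristic: start with singleton groups and repeatedly
    merge the two groups with the fewest edges between them until k remain."""
    groups = [{v} for v in range(n)]
    while len(groups) > k:
        i, j = min(itertools.combinations(range(len(groups)), 2),
                   key=lambda p: crossing(groups[p[0]], groups[p[1]], edges))
        groups[i] |= groups[j]
        del groups[j]
    # cut value: edges whose endpoints ended up in different groups
    cut = sum(1 for u, v in edges
              if not any(u in g and v in g for g in groups))
    return groups, cut

# 4-cycle: merging the two non-adjacent pairs recovers the optimal 2-cut
groups, cut = greedy_max_k_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)], k=2)
```

On the 4-cycle the heuristic first merges non-adjacent singletons (zero edges between them) and ends with the bipartition {0,2} / {1,3}, cutting all four edges.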
{ "domain": "cstheory.stackexchange", "id": 627, "tags": "ds.algorithms, graph-algorithms, approximation-algorithms, max-cut" }
Field At Point Due to Paramagnetic Material in External Field
Question: TL;DR: How do you calculate the field at a given point when both an external field and a paramagnetic material are present? My overarching question has to do with the effect an induced (volume) magnetization, $\mathbf{M}$, has on the surrounding magnetic field in a magnetostatic formulation. From Griffiths (4th ed, Eq. 5.89 on pg. 255), we know that an isolated magnetic dipole, $\mathbf{m}$, will produce a magnetic field, $\mathbf{B}$, at a distance from the magnetic dipole (I will call this arbitrary point $P$): $$ \mathbf{B}_{\rm dip}(\mathbf{r}) = \frac{\mu_0}{4 \pi}\frac{1}{r^3}[3(\mathbf{m} \cdot \mathbf{\hat{r}})\mathbf{\hat{r}}-\mathbf{m}]$$ Now suppose that, in addition to the dipole, I turn on a homogeneous external magnetic field, $\mathbf{B}_0$. After waiting for a sufficiently long time such that the dipole aligns with the external magnetic field, the field at point $P$ should be the superposition of the field due to $\mathbf{B}_0$ and the field due to the dipole. Now instead of the dipole, let's say that there is paramagnetic material present. Accordingly, if I flip on an external magnetic field, $\mathbf{B}_0$, the material polarizes with a magnetization, $\mathbf{M}_0$. However, at the point $P$ (which is outside the domain of the paramagnetic material), the classical formulation (I believe) is that the total magnetic field, $\mathbf{B}$, would not be dependent on the magnetization, $\mathbf{M}_0$, of the paramagnetic material. If we follow the definition in Eq. 6.18 of Griffiths ($\mathbf{H} \equiv \frac{1}{\mu_0}\mathbf{B} - \mathbf{M}$), then we would see that the $\mathbf{B}$ field at point $P$ would only be due to the free current that is causing $\mathbf{B}_0$ in the first place.
However, if I envision the paramagnetic material as a collection of many small magnetic dipoles, and the application of an external field aligns enough of the dipoles in the direction of the external field such that the net magnetization is assumed to be in the direction of the external field, how could I now say that the coordination of these tiny dipoles within the paramagnetic material does not contribute to the magnetic field, $\mathbf{B}$ at the point $P$? How do I rectify the seeming contradiction I described above (or maybe I'm misguided and overlooked something)? What would be the effect of the paramagnetic material on the magnetic field, $\mathbf{B}$ outside its domain? It seems from a footnote on pg. 5 of Blundell's Magnetism in Condensed Matter that there would be an effect. "A magnetized sample will also influence the magnetic field outside it, as well as inside it (considered here), as you may know from playing with a bar magnet and iron filings." How should I think about this? Answer: The magnetization does produce a contribution to the field outside the magnetized region. That's all there is to say, barring the calculation which will depend on the geometry. But there is one commonly realized geometry where the field owing to the magnetization stays inside the region: the long cylinder. It is like a solenoid. In the formula $$ {\bf H} = \frac{1}{\mu_0} {\bf B} - {\bf M}, $$ $\bf M$ is the dipole moment per unit volume at some location, $\bf B$ is the total magnetic field at that location owing to everything (i.e. owing to all currents and magnetic dipoles), and $\bf H$ is the quantity defined by this equation. Ordinarily all the above (${\bf M},\, {\bf B}, \, {\bf H}$) refer to values after spatial averaging over a region large enough to smooth over the atomic structure of any material which may be present. Let's consider the case of a short magnetized bar. Suppose there are no free currents anywhere, ${\bf j}_{\rm f} = 0$ throughout all of space. 
All we have are the aligned magnetic dipoles which make up the magnetized bar. Now one might form the intuition, from Maxwell's equation $\nabla \times {\bf H} = {\bf j}_{\rm f}$ (for static problems), that ${\bf j}_{\rm f}$ is "the source" of $\bf H$ and so if there is no ${\bf j}_{\rm f}$ anywhere then ${\bf H} = 0$. This intuition is wrong. For, consider the Maxwell equation $\nabla \cdot {\bf B} = 0$. This tells us that $$ \nabla \cdot {\bf H} = - \nabla \cdot {\bf M}. $$ But this means that $\nabla \cdot {\bf M}$ acts as a source of $\bf H$ just as surely as charge density acts as a source of $\bf E$. In the case of the magnetized bar, $\nabla \cdot {\bf M} \ne 0$ at the edges of the bar (and it might be non-zero inside the bar too, but for uniform magnetization this divergence will be zero inside the bar). So we do have a source of $\bf H$ in this example. It means $\bf H$ cannot be zero at the edges of the bar, and therefore it is not zero elsewhere because the field equations for empty space guarantee its continuity.
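To make the superposition described in the question concrete, here is a small numerical sketch (Python/NumPy, with made-up illustrative values) that evaluates Griffiths' point-dipole formula and adds a uniform external field:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def b_dipole(m, r_vec):
    """Field of a point dipole with moment m (A*m^2) at displacement r_vec (m):
    B = mu0/(4*pi*r^3) * (3 (m . r_hat) r_hat - m)."""
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return MU0 / (4 * np.pi * r**3) * (3 * np.dot(m, r_hat) * r_hat - m)

# dipole aligned with a uniform external field along z (illustrative values)
B0 = np.array([0.0, 0.0, 1e-3])   # external field, T
m = np.array([0.0, 0.0, 1.0])     # dipole moment, A*m^2
P = np.array([0.0, 0.0, 0.1])     # point 10 cm above the dipole, on its axis

B_total = B0 + b_dipole(m, P)     # superposition at P
```

On the dipole axis the dipole term reduces to $\mu_0 m/(2\pi r^3)$ (here $2\times 10^{-4}\,$T), which the sketch reproduces; off-axis points just need a different $P$.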
{ "domain": "physics.stackexchange", "id": 86625, "tags": "electromagnetism, magnetic-fields, magnetic-moment" }
How can I construct sorting network for $k$ numbers
Question: How can I construct a sorting network for $k$ numbers? My goal is to implement sorting networks in Java for $k$ in the range $[3,32]$. To be even more specific, I only want to sort integers. I found an implementation in this article (pages 2-3), but I don't understand it. I have been trying to convert this problem to SAT. I started with a simple non-optimal network: $[01, 12, \ldots, (n-1)n, 01, 12, \ldots, (n-2)(n-1), \ldots, 01, 12, 01]$ (source). The idea is to convert it to SAT, find the shortest equisatisfiable SAT formula, and convert it back to a network representation. The problem is that in the network the order of comparisons is important, so I don't know how to convert it to SAT. It sounds like someone has already tried to do something like this, but I don't understand it completely. Related question. Answer: There is a research angle here, dating at least to Knuth's Art of Computer Programming and presumably earlier, of finding optimal sorting networks for low $n$. Finding optimal sorting networks is intractable except for small $n$, but it has been done up to about $n=10$, e.g. as in this recent notable paper, also using SAT. Details about how to reduce the problem to SAT are in the paper. Basically, a large SAT encoding is built that asserts "these boolean variables configure a circuit that sorts all inputs for size $n$". (The more non-research angle is to use existing sorting algorithms or sorting network configurations, as mentioned in the paper by Har-Peled you cite, to generate the (non-optimal) circuits; this is more like a CS/EE exercise.) Optimal Sorting Networks Daniel Bundala, Jakub Závodný This paper settles the optimality of sorting networks given in The Art of Computer Programming vol. 3 more than 40 years ago. The book lists efficient sorting networks with n <= 16 inputs.
In this paper we give general combinatorial arguments showing that if a sorting network with a given depth exists then there exists one with a special form. We then construct propositional formulas whose satisfiability is necessary for the existence of such a network. Using a SAT solver we conclude that the listed networks have optimal depth. For n <= 10 inputs where optimality was known previously, our algorithm is four orders of magnitude faster than those in prior work.
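As a non-research baseline, the simple network from the question — repeated passes of adjacent compare-exchanges, i.e. bubble sort written as a fixed comparator list — can be sketched as follows (my own illustration, not code from the cited papers):

```python
from itertools import permutations

def bubble_network(n):
    """Comparator list [(0,1), (1,2), ...] in shrinking passes for n wires."""
    return [(i, i + 1) for p in range(n - 1) for i in range(n - 1 - p)]

def apply_network(network, values):
    """Apply a fixed comparator sequence; which pairs are compared
    never depends on the data, only on the network."""
    v = list(values)
    for i, j in network:
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

# exhaustively check that the network sorts every input of size 5
net = bubble_network(5)
ok = all(apply_network(net, p) == list(sorted(p)) for p in permutations(range(5)))
```

Because the comparator sequence is fixed in advance, correctness can be checked exhaustively for small sizes (or, by the zero-one principle, on all 0/1 inputs only).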
{ "domain": "cstheory.stackexchange", "id": 2626, "tags": "sorting" }
ARtoolkit - rviz
Question: How can I use the ARtoolkit library with rviz? (Ubuntu 10.04, ROS electric) Originally posted by Janina on ROS Answers with karma: 11 on 2012-05-13 Post score: 0 Answer: You can use the package ar_pose. It runs the ARToolkit marker detection and publishes both detection messages and transforms that can easily be visualized in rviz. See the package wiki page for screenshots and a demo video. Originally posted by Stephan with karma: 1924 on 2012-05-13 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 9375, "tags": "ros" }
Efficiently selecting the median and elements to its left and right
Question: Suppose we have a set $S = \{ a_1,a_2,a_3,\ldots , a_N \}$ of $N$ coders. Each coder has a rating $R_i$ and a number of gold medals $E_i$ they have won so far. A software company wants to hire exactly three coders to develop an application. For hiring three coders, they developed the following strategy: They first arrange the coders in ascending order of ratings and descending order of gold medals. From this arranged list, they select the three middle coders. E.g., if the arranged list is $(a_5,a_2,a_3,a_1,a_4)$ they select the coders $(a_2,a_3,a_1)$. Now we have to help the company by writing a program for this task. Input: The first line contains $N$, i.e. the number of coders. The second line contains the rating $R_i$ of the $i$th coder. The third line contains the number of gold medals bagged by the $i$th coder. Output: Display one line containing the sum of gold medals earned by the three coders the company will select. Answer: This is the problem of selecting the $k$th smallest element from a list, solved by a class of algorithms called selection algorithms. There exist deterministic linear-time selection algorithms, so your problem can be solved in linear time by selecting the $(n/2-1)$th, $(n/2)$th, and $(n/2+1)$th smallest elements from the original unsorted list.
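For concreteness, here is a short Python sketch of the selection rule (my own hypothetical illustration, assuming odd $n \geq 3$ as in the example; it sorts in $O(n \log n)$, which a linear-time selection algorithm could replace, as the answer notes):

```python
def medal_sum_of_middle_three(ratings, medals):
    """Arrange coders by rating ascending (medals descending as tie-break),
    then sum the medals of the three middle coders."""
    order = sorted(range(len(ratings)), key=lambda i: (ratings[i], -medals[i]))
    mid = len(order) // 2                      # index of the median coder
    return sum(medals[i] for i in order[mid - 1:mid + 2])

# the example: arranged list (a5, a2, a3, a1, a4) -> select (a2, a3, a1)
ratings = [40, 20, 30, 50, 10]   # hypothetical values for a1..a5
medals = [1, 2, 3, 4, 5]
result = medal_sum_of_middle_three(ratings, medals)   # medals of a2 + a3 + a1
```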
{ "domain": "cs.stackexchange", "id": 133, "tags": "algorithms, algorithm-design" }
OFDM training symbol format
Question: For OFDM synchronisation, we are using two training symbols according to the Schmidl and Cox method of frequency and timing synchronization. The paper says: The first OFDM training symbol has only even numbered subcarriers, carrying a PN sequence. The result is two identical half-symbols in the time domain, each consisting of Nc/2 samples. The second training symbol consists of even numbered subcarriers that are differentially modulated with respect to the even numbered subcarriers of the first training symbol using a PN sequence. The odd numbered subcarriers of the second training symbol can be used for data, pilot or reference symbols. I have two questions on this. In OFDM, every subcarrier corresponds to one symbol, which is a complex value. So what does it mean to say that the even numbered subcarriers in a symbol are modulated using a PN sequence, and how does that ensure that the result is two identical halves of a time-domain symbol? What is differential modulation using a PN sequence, and how is it achieved?
You might have seen this before in a description of discrete-time interpolation: insert zeros, then add a lowpass filter to eliminate the spectrum duplicates. The property is dual in the sense that it works the other way also. When assigning symbols to subcarriers, you're working in the frequency domain. An inverse DFT is used to generate the time-domain signal for transmission. If you set all of the odd-numbered subcarriers to zero, then you're generating a frequency-domain signal that is zero-stuffed by a factor of two. When you inverse transform that to the time domain, you get the same effect: the resulting symbol consists of two periodic half-symbol waveforms. I haven't read the paper in detail, but I'm assuming their synchronization technique takes this redundancy into account in some way.
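The half-symbol property is easy to verify numerically. In this NumPy sketch (an illustration, with a random BPSK sequence standing in for the PN sequence), loading only the even-numbered subcarriers and taking an inverse FFT produces a time-domain symbol whose two halves are identical:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                      # subcarriers per OFDM symbol
X = np.zeros(N, dtype=complex)
X[::2] = rng.choice([-1.0, 1.0], N // 2)    # PN-like BPSK on even subcarriers only

x = np.fft.ifft(X)                          # time-domain training symbol

# even-only spectrum <=> period N/2 in time: the two halves match
halves_match = np.allclose(x[:N // 2], x[N // 2:])
```

If any odd-numbered subcarrier were non-zero, its $e^{j\pi k}=-1$ factor would break the symmetry and the two halves would differ, which is what the Schmidl–Cox timing metric relies on.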
{ "domain": "dsp.stackexchange", "id": 942, "tags": "modulation, digital-communications, dsp-core, ofdm" }
Extracting non-duplicate cells in a particular matrix with repeated entries
Question: Consider a board of $n \times n$ cells, where $n = 2k$, $k \geq 2$. Each of the numbers from $S = \left\{1,...,\frac{n^2}{2}\right\}$ is written to two cells so that each cell contains exactly one number. How can I show that $n$ cells $c_{i, j}$ can be chosen, with one cell per row and one cell per column, such that no pair of cells contains the same number? This was an example problem for an exam I'm studying for. I have tried for several hours now, but I can't get it right. I think random permutations can help here, but I am not sure. Answer: Choose a permutation $\pi$ uniformly at random, and let $P = \{ a_{i, \pi(i)} \mid i\in [n]\}$. The set $P$ contains exactly one element in each row and each column of the given matrix $A$. Now consider any pair of entries in $A$ with the same value. If those two entries lie in the same row or the same column, they cannot both be in $P$. If those two entries are in different rows and columns of $A$, then the probability that both entries lie in $P$ is exactly $\frac{1}{n(n-1)}$. There are $n^2/2$ different values in the matrix. So the expected number of values with both entries in $P$ is at most $\frac{n^2/2}{n(n-1)} = \frac{n}{2(n-1)}$. If $n\ge 4$, this expected value is less than $1$, which implies that the probability of choosing no matching pair must be positive.
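The probabilistic argument also suggests a simple randomized algorithm: sample permutations until one selects $n$ distinct numbers. A small sketch (my own illustration, with a hand-made $4 \times 4$ board in which each of $1..8$ appears exactly twice):

```python
import random

def find_distinct_selection(board, tries=10000, seed=0):
    """Pick one cell per row via a random permutation of columns; retry
    until the n selected cells carry n distinct numbers (or give up)."""
    rng = random.Random(seed)
    n = len(board)
    for _ in range(tries):
        perm = rng.sample(range(n), n)
        picked = [board[i][perm[i]] for i in range(n)]
        if len(set(picked)) == n:
            return perm
    return None

board = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [8, 7, 6, 5],
         [4, 3, 2, 1]]
perm = find_distinct_selection(board)
```

Since the expected number of colliding pairs is below 1, a valid permutation exists and random sampling finds one quickly in practice.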
{ "domain": "cs.stackexchange", "id": 217, "tags": "combinatorics, probability-theory" }
Magnifying glass geometrical shapes variants
Question: https://en.wikipedia.org/wiki/Magnifying_glass As we see, the magnifying glass is a circle. Can we design and construct magnifying glasses with other shapes, viz. triangle, rectangle, hexagon, kite? Will it affect the magnification of the object we observe with the magnifying glass after changing the circular shape to a different shape? Answer: Do some experimentation. That's physics' primary source of knowledge. Partly cover a magnifying glass with some opaque material so that the remaining part gets a triangular, rectangular, hexagonal, ... shape. Does it still function the way you expect a magnifying glass to?
{ "domain": "physics.stackexchange", "id": 73230, "tags": "glass" }
Why is bismuthine (BiH3, or bismuth trihydride) unstable if bismuth is considered to be the most stable heavy element?
Question: Bismuth-209 is considered to be the most stable heavy element, though it is weakly radioactive. Given that, why does bismuthine ($\ce{BiH3}$, or bismuth trihydride) have a half-life of only 20 minutes and is the least stable hydride in its group? Answer: By putting words in incorrect order it is quite easy to arrive at nonsense. Bismuth is by far not the most stable heavy element; indeed, it is not particularly stable at all. Instead, it is the most heavy stable element (if we disregard its radioactivity, that is). Think of these two definitions for a while. Think how different they are. Moreover, this is about nuclear stability, which has absolutely nothing to do with chemical stability of the element, which in turn (besides not being well defined) has nothing to do whatsoever with stability of its hydride. I sincerely recommend to all chemistry students to abandon using the word "stability" altogether, for it seems to cause a great deal of confusion every single time someone uses it.
{ "domain": "chemistry.stackexchange", "id": 6834, "tags": "inorganic-chemistry, metal" }
How to calculate Big O of $T(n) = aT(n^b) + f(n)$?
Question: I'm a student studying Big O. I know that we can solve $T(n) = aT(\frac{n}{b}) + f(n)$ by comparing $n^{\log_b{a}}$ to $f(n)$, giving $O(n^{\log_b{a}} + f(n))$. Today I was faced with $T(n) = T(\sqrt n) + 1$, which I think is $O(\log\log n)$, and I was also faced with $T(n) = T(\sqrt n) + O(\log\log n)$, which I think is $O(\log^2\log n)$. I'm wondering what the formula (or method) is to solve any problem of this type (my guess is $O(f(n)a^n\log\log n)$). So: how to calculate Big O of $T(n) = aT(n^b) + f(n)$ with $0<b<1$? Thanks in advance. Answer: How to calculate Big O of $T(n) = aT(n^b) + f(n)$ with $0<b<1$? The powerful technique you are searching for is variable substitution. Let $S(m)=T(2^m)$. Then $$S(m)=T(2^m)=aT(2^{mb}) + f(2^m)=aS(mb)+g(m),$$ where $g(m)=f(2^m)$. Now we have a recurrence relation for $S(m)$, to which we might be able to apply the master theorem. Here are some examples. If $f(n)$ is a constant, so is $g(m)$. If $a=1$ and $b=\frac12$, then we know that $S(m)=O(\log m)$. Hence, $$T(n)=S(\log n)=O(\log \log n).$$ If $f(n)=\log n$, $a=1$ and $b=\frac23$, then $S(m)=S(2m/3)+m$. So $S(m)=O(m)$. Hence, $$T(n)=S(\log n)=O(\log n).$$ If $f(n)=\log\log n$, $a=1$ and $b=\frac12$, then $S(m)=S(m/2)+\log m$. So $S(m)=O((\log m)^2)$. Hence, $$T(n)=S(\log n)= O((\log\log n)^2).$$ Note that the above reasoning is rather loose, as $\log n$ and $\sqrt n$ might not be integers when $n$ is. A lot of careful sandwiching together with some kind of continuity or monotonicity is needed to establish the result rigorously.
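The first example can be sanity-checked numerically. A Python sketch (with an arbitrary base case $T(n) = 1$ for $n \le 2$): for $n = 2^{2^k}$ the recursion unwinds exactly $k$ times, matching $O(\log\log n)$.

```python
import math

def T(n):
    """T(n) = T(sqrt(n)) + 1, with base case T(n) = 1 for n <= 2."""
    if n <= 2:
        return 1
    return T(math.sqrt(n)) + 1

# for n = 2^(2^k) the value is k + 1: the square-root chain
# 2^16 -> 2^8 -> 2^4 -> 2^2 -> 2 halves the exponent each step
depth = T(2 ** 16)
```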
{ "domain": "cs.stackexchange", "id": 14169, "tags": "complexity-theory, time-complexity, big-o-notation" }
SpamAssassin spam analyzer script in PHP based on sa-learn command
Question: I wrote a small script to analyse spam messages that are spam false negatives, meaning that they are spam in nature but happened to end up in your INBOX folder because your spam filter failed to detect them correctly (I personally use SpamAssassin and unfortunately it rarely happens). The goal is to have a chance to analyse spam messages that are put in the spam folder manually, by running a script as a cron job. Obviously, all spam messages will be analysed regardless of which email client you're using (Thunderbird, Kaiten Mail or Roundcube), because all you need is to have a message moved into the spam folder. Example of my working PHP script:

<?php
// Check if there are messages in the SPAM folder that don't have the [SPAM] label (e.g. manually moved from INBOX)
exec("grep -L '\[SPAM\]' /home/domainexample.ru/Maildir/.Junk/cur/* 2> /dev/null", $spam_messages);
if (!empty($spam_messages)) {
    $sa_learn = '/usr/bin/sa-learn --spam';
    foreach ($spam_messages as $spam_message) {
        // Learn a message that we believe is spam
        $sa_learn .= ' ' . $spam_message;
        $marked_as_spam = file_get_contents($spam_message);
        // Adding the [SPAM] flag to the subject of the analyzed message
        $marked_as_spam = str_replace("Subject:", "Subject: [SPAM] ", $marked_as_spam);
        file_put_contents($spam_message, utf8_encode($marked_as_spam));
    }
    // Logging the results
    $sa_learn .= ' > ' . '/home/domainexample.ru/.spamassassin/logs/' . date('d.m.Y-G:i') . '_analyzer.log';
    // Executing analyzer in background
    shell_exec("nohup $sa_learn 2> /dev/null & echo $!");
    // Cleaning cached messages in Dovecot
    shell_exec("rm -f /var/lib/dovecot/index/domainexample.ru/.Junk/*");
}
?>

I would like a few ideas (in case there are any) for improving this already working PHP script, but most importantly, I would like to learn how to write the same script in Perl and/or Bash.
Could you please suggest some ideas for improving the current PHP script, along with providing pure and fully working examples of the same scenarios in Perl and/or Bash? Answer: Here's an untested Bash version:

#!/usr/bin/env bash
set -o errexit -o noclobber -o nounset
while IFS= read -r -u 9 path
do
    /usr/bin/sa-learn --spam "$path" \
        > "/home/domainexample.ru/.spamassassin/logs/$(date +%d.%m.%Y-%H:%M)_analyzer.log" \
        2>&1 &
    sed -i -e 's/^Subject:/Subject: [SPAM] /' "$path"
    rm "$path"
done 9< <(grep -FL '[SPAM]' /home/domainexample.ru/Maildir/.Junk/cur/* 2> /dev/null)

Some changes from the original: Uses grep's -F option to speed up the search. Runs sa-learn repeatedly instead of once, to avoid having to accumulate the data; this shouldn't slow down the execution since the processes are backgrounded. Deletes files as soon as they are processed, to avoid reprocessing if the previous run failed.
{ "domain": "codereview.stackexchange", "id": 3919, "tags": "php, bash, perl" }
Why is it ok to calculate the reward based on a hidden state?
Question: I'm looking at this source code, where the reward is calculated with reward = cmp(score(self.player), score(self.dealer)) Why is it ok to calculate the reward based on a hidden state? A player only sees the dealer's first card. self.dealer[0] Answer: The code you reference is not part of the learning agent. It is part of: class BlackjackEnv(gym.Env): If, as in this case, the environment is provided entirely by software simulation, it is absolutely necessary for it to include a full working model of all state transitions and rewards. That is independent of whether any hidden state makes the problem harder. Why is it ok to calculate the reward based on a hidden state? In the case of Blackjack, this can be treated not as a hidden state that would affect the outcome if only known, but as randomness in the environment over which the agent has no control. Critically, the dealer has no options to behave differently depending on the unknown card, and the dealer's eventual score is entirely unaffected by the player's earlier choices. It is a subtle difference. If you applied the same environment rules to Poker, where an opponent could behave differently depending on this hidden knowledge, then a simple MDP model is not enough theory to result in an optimal solution. In that case, you would need to look into Partially Observable MDPs (POMDPs). Note this would not affect reward calculation in the environment, just the choices of which agent type to use. If you are just learning RL, you probably don't know of any algorithms that could solve this yet. In practice, a lot of problems are somewhere between a classic MDP and a POMDP - they contain elements which, if the agent could know them, may allow it to achieve a higher expected reward. 
In many cases though, these elements can either be treated as random (as here in Blackjack) and thus the system is still theoretically an MDP, or they have a very small effect on the optimal policy, so can be ignored for practical purposes (e.g. think of all the physical details in a real cart pole balancing system - friction, temperature, flexing motions, etc).
{ "domain": "ai.stackexchange", "id": 828, "tags": "reinforcement-learning, rewards" }
ROS-Industrial support for Kinetic
Question: I was a little bit shocked when I saw that the ROS-I installation page suggests installation with/on Hydro: http://wiki.ros.org/Industrial/Install. Now, does anyone have experience with ROS-I running with Kinetic? How much pain would I need to go through? Originally posted by Borob on ROS Answers with karma: 111 on 2017-02-10 Post score: 1 Answer: This is a 'known issue': ros-industrial/ros_industrial_issues#40. The tutorial is outdated (and technically, it does not really suggest installing Hydro: it documents how to install various ROS-Industrial packages on ROS Hydro. But it is outdated in any case. Edit: I've just updated the page to at least make that more clear). Now, does anyone have experience with ROS-I running with Kinetic? How much pain would I need to go through? That will completely depend on which packages you'd like to use. Without knowing that, I can only tell you that almost all packages will build from source in a Catkin workspace; they just haven't all been released. If you can update your question and include which packages you need, I can provide more detailed instructions if needed. Edit: So I would like to use the universal_robot, ur_modern_driver, ur_modern_driver, moveit, basically anything that I need to do ik, calculate trajectories and planning. I think the first three are a little problematic. You list ur_modern_driver twice, but: moveit has been released for Kinetic, so that is just an apt-get away (note btw that MoveIt is not a ROS-Industrial package); universal_robot has a kinetic-devel branch and is being used by many people on Kinetic already. As for ur_modern_driver: there is currently no development targeting Kinetic, but incompatibilities are mostly confined to the ros_control parts. If you don't need those, you should be able to compile and run everything. If you need / want the hardware_interface, there is at least one fork that addressed the incompatibilities. See ur_modern_driver#58 for more info.
Originally posted by gvdhoorn with karma: 86574 on 2017-02-10 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Borob on 2017-02-10: Thanks! So I would like to use the universal_robot, ur_modern_driver, ur_modern_driver, moveit, basically anything that I need to do ik, calculate trajectories and planning. I think the first three are little problematic. Comment by Borob on 2017-02-14: Great thank you!
{ "domain": "robotics.stackexchange", "id": 26977, "tags": "ros-kinetic" }
`ROS_INFO_STREAM_NAMED(name, msg)` isn't streaming to the appropriate name
Question: Let's say I have these names: "A", "B", "C" There is a weird case where I use ROS_INFO_STREAM_NAMED(name, msg) and it streams the message to "A", even though the variable name is "B" or "C". However, when I hard code the name in: if (name == "B") ROS_INFO_STREAM_NAMED(name, msg) or ROS_INFO_STREAM_NAMED("B", msg) then it works. I am calling ROS_INFO_STREAM_NAMED(name, msg) in my own log function which is located in a C++ interface. Classes A, B, and C all extend said interface and thus have access to log. Every ROS spin cycle I would call on A's read function, which calls on B's read function, which calls on C's read function. In each of these read functions, they would just log("reading"). But when I try to filter them out by name using rqt_logger_level, ros.<my package>.A would control the levels for what is supposed to be B's and C's messages. Originally posted by C-Dog on ROS Answers with karma: 36 on 2015-12-17 Post score: 0 Original comments Comment by gvdhoorn on 2015-12-18: I think it would help if you included the sources you are referring to. Could you edit your question and add some snippets (be sure to format them using the Preformatted text button). Answer: Actually I found the answer to my question here: https://github.com/ros/ros_comm/issues/561 Originally posted by C-Dog with karma: 36 on 2015-12-18 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 23252, "tags": "ros, logger, rosconsole, logging" }
How is my Game Loop?
Question: I wrote a game loop for a game I'm going to be making: check the current FPS, try to keep the FPS at a hardcoded level, and draw and update the game. There are two classes, the abstract class and the implementing class. Any suggestions?

Game

public abstract class Game {
    private int FRAMES_PER_SECOND;
    private boolean running = true;
    long targetTime;
    private long runningFPS;

    protected Game(int fps) {
        setTargetFPS(fps);
    }

    public void setTargetFPS(int fps) {
        this.FRAMES_PER_SECOND = fps;
        targetTime = 1000 / FRAMES_PER_SECOND;
    }

    public void run(JPanel panel, BufferedImage image) {
        int currentFPS = 0;
        long counterstart = System.nanoTime();
        long counterelapsed = 0;
        long start;
        long elapsed;
        long wait;
        targetTime = 1000 / FRAMES_PER_SECOND;
        while (running) {
            start = System.nanoTime();
            processInput();
            update();
            // time to update and process input
            elapsed = System.nanoTime() - start;
            wait = targetTime - elapsed / 1000000;
            if (hasTimeToDraw(wait)) {
                // CREATE AND ANTIALIAS GRAPHICS
                Graphics2D g = image.createGraphics();
                g.addRenderingHints(new RenderingHints(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON));
                g.setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING, RenderingHints.VALUE_TEXT_ANTIALIAS_ON);
                g.setRenderingHint(RenderingHints.KEY_RENDERING, RenderingHints.VALUE_RENDER_QUALITY);
                // Draw
                draw(g);
                g.dispose();
                panel.repaint();
                // Account for the time it took to draw
                elapsed = System.nanoTime() - start;
                wait = targetTime - elapsed / 1000000;
            }
            counterelapsed = System.nanoTime() - counterstart;
            currentFPS++;
            // at the end of every second
            if (counterelapsed >= 1000000000L) {
                // runningFPS is how many frames we processed last second
                runningFPS = currentFPS;
                currentFPS = 0;
                counterstart = System.nanoTime();
            }
            // don't want to wait for negative time
            if (wait < 0) wait = 0;
            try {
                Thread.sleep(wait);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public long getCurrentFPS() {
        return runningFPS;
    }

    private boolean hasTimeToDraw(long wait) {
        // Not really sure how to implement this method...
        // Maybe just time the draw method and hardcode it in?
        return true;
    }

    public void stop() {
        running = false;
    }

    public abstract void processInput();

    public abstract void update();

    public abstract void draw(Graphics2D g);
}

GameFrame

public class GameFrame extends JFrame {
    private static final long serialVersionUID = 1L;
    private static final int WIDTH = 800;
    private static final int HEIGHT = 800;
    private JPanel panel;
    private BufferedImage image = new BufferedImage(WIDTH, HEIGHT, BufferedImage.TYPE_3BYTE_BGR);
    private Game game = new Game(60) {
        int i = 0;
        int x = 5;

        @Override
        public void processInput() {
            // TODO Auto-generated method stub
        }

        @Override
        public void update() {
            GameFrame.this.setTitle("FPS: " + getCurrentFPS());
            if (i > WIDTH || i < 0) x = -x;
            i += x;
        }

        @Override
        public void draw(Graphics2D g) {
            g.setColor(Color.BLACK);
            g.fillRect(0, 0, WIDTH, HEIGHT);
            g.setColor(Color.RED);
            g.fillRect(i, 50, 20, 53);
        }
    };

    public GameFrame() {
        panel = new JPanel() {
            private static final long serialVersionUID = 1L;

            @Override
            protected void paintComponent(Graphics g) {
                g.drawImage(image, 0, 0, null);
            }
        };
        panel.setPreferredSize(new Dimension(WIDTH, HEIGHT));
        this.add(panel);
        this.pack();
        this.setVisible(true);
        this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    }

    public void run() {
        game.run(panel, image);
    }

    public static void main(String[] args) {
        new GameFrame().run();
    }
}

Answer:

    long start;
    long elapsed;
    long wait;

This is not Pascal; declare your variables where they get initialized. This is not always possible, but maybe one such variable per ten classes is unavoidable.

    // don't want to wait for negative time
    if (wait < 0) wait = 0;
    try {
        Thread.sleep(wait);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }

Better not to change a variable if you don't have to, as it makes the code harder to understand. Doing Thread.sleep(wait < 0 ? 0 : wait); is simpler, doing Thread.sleep(Math.max(0, wait)); is even simpler, and so is if (wait > 0) Thread.sleep(wait);. The main problem is that your method is far too long. Extracting some methods would help a lot, e.g., Graphics2D g = newNiceGraphics(image); and sleepUninterruptibly(wait);. Enough for now....
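The reviewer's clamping and method-extraction suggestions translate to any language. As an illustration (not the original Java, and the function names here are my own), a minimal Python sketch of the same frame-pacing arithmetic:

```python
import time

def frame_wait_ms(target_ms, elapsed_ns):
    """Remaining time budget for this frame in ms, clamped at zero --
    the reviewer's Math.max(0, wait) suggestion."""
    return max(0, target_ms - elapsed_ns // 1_000_000)

def run_frames(update, draw, fps=60, frames=3):
    """A tiny fixed-FPS loop: update, draw, then sleep off the leftover budget."""
    target_ms = 1000 // fps
    for _ in range(frames):
        start = time.perf_counter_ns()
        update()
        draw()
        elapsed = time.perf_counter_ns() - start
        time.sleep(frame_wait_ms(target_ms, elapsed) / 1000)
```

With the clamp factored into its own small function, the negative-wait special case disappears from the loop body, which is the kind of extraction the answer is asking for.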
{ "domain": "codereview.stackexchange", "id": 21146, "tags": "java, game" }
Is there an infinite amount of wavelengths of light? Is the EM spectrum continuous?
Question: The electromagnetic spectrum is a continuum of wavelengths of light, and we have labels for some ranges of these and numerical measurements for many. Question: Is the EM spectrum continuous, such that between two given wavelengths (e.g. 200nm and 201nm) there is an infinite number of distinct wavelengths of light? Or is there some cut-off of precision with which light might exist (e.g. can light only have whole-number wavelengths when measured in nanometers, etc.)? Answer: Yes, there is an uncountable infinity of possible wavelengths of light. In general the frequency spectrum for electromagnetic radiation (e.g. light, radio, etc.) is continuous, and thus between any two frequencies there is an uncountable infinity of possible frequencies (just as there is an uncountable number of numbers between 1 and 2). Two things to consider in practice: There are situations in which the only relevant frequencies are discrete (such as the modes in a cavity). For any given experimental measurement you will always have a finite precision or bandwidth with which you can measure, and so although light at 200nm and 200.01nm is in principle different, you might not be able to tell in practice.
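The cavity-mode exception in the answer is easy to make concrete: an idealized 1-D cavity of length L only supports standing waves with wavelengths λ_n = 2L/n, a discrete set, unlike free space. A small sketch (assuming an ideal, lossless cavity with perfectly reflecting walls):

```python
def cavity_wavelengths_nm(length_nm, n_modes):
    """Allowed standing-wave wavelengths lambda_n = 2L/n for an ideal 1-D cavity."""
    return [2 * length_nm / n for n in range(1, n_modes + 1)]

# A 400 nm cavity supports 800, 400, ~266.7, 200, ... nm; every wavelength
# in between is forbidden, whereas in free space all of them are allowed.
modes = cavity_wavelengths_nm(400, 4)
```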
{ "domain": "physics.stackexchange", "id": 20300, "tags": "visible-light, electromagnetic-radiation, wavelength, discrete" }
OpenAI: What is the difference between model "gpt-3.5-turbo" and "gpt-3.5-turbo-0301"?
Question: I have performed an API call to OpenAI's endpoint https://api.openai.com/v1/models . The endpoint lists the currently available engines, and provides basic information about each one, such as the owner and availability. As a logged-in user, I get a JSON response of 63 models. These are the most recent ones (currently), formatted, shown with release date. 59: "11/28/2022, 2:40:35 AM : text-davinci-003" 60: "12/16/2022, 8:01:39 PM : text-embedding-ada-002" 61: "2/27/2023, 10:13:04 PM : whisper-1" 62: "2/28/2023, 7:56:42 PM : gpt-3.5-turbo" 63: "3/1/2023, 6:52:43 AM : gpt-3.5-turbo-0301" I notice that there are 2 very similar models, "gpt-3.5-turbo" and "gpt-3.5-turbo-0301", with gpt-3.5-turbo-0301 released only 11 hours after gpt-3.5-turbo. What is the difference between these two model versions? It does not seem to be a glitch or a misnaming error. Why did OpenAI bother to include both of them, and why didn't they drop the inferior version? (I haven't experimented with these two models in any way yet. I might do this very soon. However, I thought I might as well ask here; informing others in this forum might have some benefit.) Answer: Taken from here: https://platform.openai.com/docs/models/gpt-3-5 . That page describes gpt-3.5-turbo as an alias that points to the most recent model version, while gpt-3.5-turbo-0301 is a dated snapshot from March 1, 2023 that stays fixed rather than being updated. Beyond that, I think it's literally an update, but I do not know the specifics of what that update changes; I looked through the documentation, but this was all I could find.
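The sorting the asker did by hand can be scripted against the /v1/models response. The sketch below works offline on sample entries shaped like that response (a real call needs an API key and the `created` Unix timestamps here are illustrative, not authoritative):

```python
def newest_models(models, k=2):
    """Return the k most recently created model ids, oldest-to-newest."""
    return [m["id"] for m in sorted(models, key=lambda m: m["created"])][-k:]

# Sample entries mimicking the JSON objects returned by GET /v1/models.
sample = [
    {"id": "text-davinci-003", "created": 1669599635},
    {"id": "gpt-3.5-turbo", "created": 1677610602},
    {"id": "gpt-3.5-turbo-0301", "created": 1677649963},
]
```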
{ "domain": "ai.stackexchange", "id": 3783, "tags": "open-ai, chatgpt, large-language-models" }
Basic question: mobile robot cmd_vel
Question: I'm on cturtle, the base is maverick, and the robot node is on ARM lucid and cturtle, with an Arduino as the actual robot controller. My physical robot is a small, cheap 4-wheel drive using skid steering. I'm limited physically, so my mobile robot is in the apartment, which is not small but tricky, with a bad floor surface for odometry and some areas of magnetic variance... which I've got managed to a point. And some pretty tight doorways. The robot has a compass, odometers, and a Sharp IR sensor. I am using a crude gyro setup to detect, via the gyro, when the compass reading shifts while the robot is not actually turning. The upshot is that what works best is driving to a bearing and distance in straight lines, then stopping and skid-turning to the new bearing. So I have my robot programmed from the Arduino to follow waypoints. The surface magnetics and motors/odometry are not up to variable-speed turns on the go... I tried and lost that one. From a ROS perspective I've got odom and laser scan working and sending the fixed transform base_link -> base_scan. So, manually driving the Arduino around, things look quite good in rviz. I haven't started mapping yet, as I'm following the tutorials carefully. I'm just writing (plagiarising) a joystick teleop package and starting to hit some fundamentals. For teleop I can interface quite easily to the BeagleBoard/Arduino, but I'm concerned I'm on the wrong path for move_base / cmd_vel / base_controller. Could I please just make sure I understand the output of cmd_vel, i.e. x and z: I'm taking it that x is velocity and z is the rate of angle change, or are these x distance and z bearing? If the former (velocities) is correct, which is what I fear, how do I obtain/convert and manage this alongside my waypoint data without screwing up the nav stack? The Arduino side of my robot is not dumb and has taken quite a lot of work to overcome some issues. Sorry to ask this one, I've been searching for a while.
Dave Originally posted by davo on ROS Answers with karma: 42 on 2011-07-13 Post score: 1 Answer: If you take a look at the definition of the geometry_msgs/Twist message, it's fairly straightforward. Generally, the coordinates are aligned such that linear/x is the forward/backward direction for your robot. Since your robot is likely non-holonomic, you can probably ignore linear/y and linear/z. Angular/z is in-plane rotation, and you can ignore angular/x and angular/y. Linear velocities are generally in meters/s, and angular velocities in radians/s. Originally posted by Dan Lazewatsky with karma: 9115 on 2011-07-13 This answer was ACCEPTED on the original site Post score: 6
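For a stop-and-turn waypoint robot like the asker's, one way to bridge the two views is to integrate the Twist command over a time step, recovering the distance-and-heading quantities the Arduino side already understands. This is a plain dead-reckoning sketch (a simple Euler step, not the actual ROS API):

```python
import math

def integrate_twist(x, y, theta, v, omega, dt):
    """Integrate linear.x (v, in m/s) and angular.z (omega, in rad/s)
    over dt seconds for a planar, non-holonomic robot."""
    x_new = x + v * math.cos(theta) * dt
    y_new = y + v * math.sin(theta) * dt
    theta_new = theta + omega * dt
    return x_new, y_new, theta_new

# Driving straight at 0.5 m/s for 2 s from the origin, heading 0:
pose = integrate_twist(0.0, 0.0, 0.0, 0.5, 0.0, 2.0)  # -> (1.0, 0.0, 0.0)
```

In the same spirit, a pure angular.z command for dt seconds is just a skid turn through omega*dt radians, which matches the drive-straight-then-turn behaviour the asker already has working.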
{ "domain": "robotics.stackexchange", "id": 6123, "tags": "ros, navigation, base-controller" }
Handling large imbalanced data set
Question: I have an imbalanced data set consisting of some tens of millions of text strings, each with thousands of features created from uni- and bigrams; additionally I also have the string length and the entropy of the string as features. It is a multiclass data set (40-50 classes), and it is imbalanced: some classes can be 1000x smaller than the largest class. I have restricted the data to 1 million strings per class as a maximum, otherwise the imbalance could be even larger. Because of this I want to use over-sampling to improve the data for the underrepresented classes. I have looked into ADASYN and SMOTE from the Python imblearn package. But when I run it, the process eats up all my RAM and the swap memory, and soon after the process gets killed, I assume because the memory is not enough. My question is now how to best proceed. Obviously my data is too large to be over-sampled as it is. I have thought of two options, but I cannot make out which is the most "correct". I send in only one underrepresented class together with the largest class, and repeat this for each underrepresented class; I am not sure if this could mean that classes might start to overlap, though. Or I instead under-sample the data, maybe down to 100k samples per class. This might reduce the data enough that I can run over-sampling on the less represented classes (with 1k-10k samples). Any other options that are more appropriate that I have missed?
Downsampling is not a bad idea, but it depends on the difficulty of your task, because you do end up throwing away information. You could look at some learning curves for your model as a function of the number of samples; if performance looks relatively capped, this wouldn't be a big issue. A lot of models allow for weighting classes in your loss function. If we have 10,000 of class A and 1,000 of class B, we could weight class B 10x, which means mistakes on B count much harder and the model will focus relatively more on samples from class B. You could try this, but I could see it going wrong with extreme imbalances. You can even combine these methods: downsample your biggest classes, upsample your smaller classes, and use weights to balance them perfectly. EDIT: Example of the batch option. We have 4x A, 2x B and 1x C, so our set is:

A1 A2 A3 A4 B1 B2 C1

Regular upsampling would go to:

A1 A2 A3 A4 B1 B2 B1 B2 C1 C1 C1 C1

But this will not fit in our memory in a big-data setting. What we do instead is keep only our original data in memory (it could even be on disk) and keep track of where we are for each class (so they are separated by target):

A: A1 A2 A3 A4
B: B1 B2
C: C1

Our first batch takes one of each class: A1 B1 C1. Now our C class is empty, which means we reinitialize it and shuffle it (in this case it's only one example):

A: A2 A3 A4
B: B2
C: C1

Next batch: A2 B2 C1. B and C are empty, so reinitialize them and shuffle:

A: A3 A4
B: B2 B1
C: C1

The next batch is A3 B2 C1, and our last one of the epoch would be A4 B1 C1. As you can see, we have the same distribution as the full-memory option, but we never keep more in memory than the original samples, and the model always gets balanced, stratified batches.
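The round-robin walk-through in the answer can be written as a short generator. This is a sketch under the assumption that the per-class sample lists (or references to on-disk samples) fit in memory:

```python
import random

def stratified_batches(by_class, seed=0):
    """Yield batches with exactly one sample per class, reshuffling each
    class's pool whenever it empties (mirrors the A/B/C walk-through).
    One epoch = one pass over all samples of the biggest class."""
    rng = random.Random(seed)
    pools = {c: list(samples) for c, samples in by_class.items()}
    biggest = max(len(samples) for samples in by_class.values())
    for _ in range(biggest):
        batch = []
        for c, samples in by_class.items():
            if not pools[c]:                 # class exhausted: refill and shuffle
                pools[c] = list(samples)
                rng.shuffle(pools[c])
            batch.append(pools[c].pop())
        yield batch

data = {"A": ["A1", "A2", "A3", "A4"], "B": ["B1", "B2"], "C": ["C1"]}
batches = list(stratified_batches(data))     # 4 balanced batches of 3
```

Each batch is balanced across classes while memory never holds more than the original samples, which is exactly the property the answer is after.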
{ "domain": "datascience.stackexchange", "id": 9971, "tags": "python, bigdata, multiclass-classification, class-imbalance" }
Regarding VisualDSP++
Question: I've written source code in C in VisualDSP++. The platform is the SHARC processor ADSP-21062. After the build is complete, running the program displays the message "Instruction timed out with PC at: 0x20a5a". What may be the error in my C code? Can you please help resolve it? Answer: The processor is stalled. For example, you may be trying to read a peripheral register when there is no data available. This can have many different causes, so without further detail it is impossible to assess. The first step would be to look at program memory at address 0x20a5a and see what the processor was trying to do. Take a look at the disassembly window.
{ "domain": "dsp.stackexchange", "id": 1539, "tags": "dsp-core" }