Memory cells (addresses) read into a cache
Question: How does one determine what memory cells (addresses) are being read into a cache? For example, if we read at address "C000EAFCH", which memory cells will be read into the cache if the cache is 64 bytes and each cache line is 8 bytes? If we use direct mapping, would one say that the 4 previous bytes and the 4 next bytes from the address are being read? In that case I would say that addresses C00EAF8H to C00EAFFH are being read. That would mean 4 bytes before the address, then the byte at the address, and 3 more addresses are read. However, the one who asked this question wrote that the correct answer is C00EAF8 to C00EAFFH. I don't know why the 'H' disappeared in C00EAF8, but maybe it's a typo. To be honest, I don't even know what the 'H' stands for, as it is not a hexadecimal character. Maybe someone can clarify this? I also wonder: why doesn't one read the byte at the address and then the next 7 bytes? Does this reading of 4 bytes before and 3 bytes after have to do with direct mapping? Answer: Cache lines are aligned. That means that the bottom-most bits of the start address of a cache line are always 0s. If you have 8-byte cache lines, they will always be read in address ranges $[8i, 8(i+1) - 1]$. For example, if you read address 117 (decimal) = 1110101 (binary), then the 8-byte cache block that gets read in is the set of bytes in the range 112 (decimal) = 1110000 (binary) through 119 (decimal) = 1110111 (binary).
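The alignment rule in the answer is easy to check in code. Here is a small Python 3 sketch (the function name is mine, for illustration) that masks off the low bits of an address to find the containing cache line:

```python
LINE_SIZE = 8  # bytes per cache line; must be a power of two

def cache_line_range(addr, line_size=LINE_SIZE):
    """Return the (first, last) byte addresses of the aligned cache line
    holding addr: the low log2(line_size) bits of the start are zeroed."""
    start = addr & ~(line_size - 1)
    return start, start + line_size - 1

# The answer's example: address 117 lies in the 8-byte line [112, 119].
print(cache_line_range(117))  # -> (112, 119)

# The question's 7-digit address, read as hex: 0xC00EAFC falls in the
# line 0xC00EAF8 .. 0xC00EAFF, matching the stated correct answer.
print(tuple(hex(a) for a in cache_line_range(0xC00EAFC)))
```

The same masking works for any power-of-two line size, which is why the fetched range depends only on alignment, not on where inside the line the requested byte sits.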
{ "domain": "cs.stackexchange", "id": 4308, "tags": "cpu-cache" }
Do Tardigrades preserve water or replace water?
Question: If you find something funny in my argument, then please pardon me, as I lack knowledge of biology. I was reading an article titled "Water Bears Can Replace All The Fluid In Their Bodies With A Glass Matrix". Being extremely new to biology, I am a bit confused. Do tardigrades preserve water through this protein, or do they entirely change their biological structure to flush away all the water present in their body and rely on these proteins? Does this mean that their bodies no longer contain water? Thanks. Answer: Neither: they are desiccating. Some water remains, but too little to sustain normal processes. The matrix just prevents all the normally destructive side effects of desiccation that would destroy the cellular machinery; the cellular processes are still essentially stopped.
{ "domain": "biology.stackexchange", "id": 8116, "tags": "microbiology" }
Scrapy spider for products on a site
Question: I recently submitted a code sample for a web scraping project and was rejected without feedback as to what they didn't like. The prompt, while I cannot give it here verbatim, basically stated that I needed to write a spider to crawl a site for product items. They suggested using a generic spider to scrape the site in question while using URL rules for efficiency. They gave links to documentation in case you hadn't used Scrapy before. I felt like this meant that they didn't mind hiring people unfamiliar with their toolset. Speaking of which, we could only use pyquery for DOM traversal; I usually would have opted for pure lxml and XPaths. I understood the concept of using rules to limit extraneous requests, but after noticing that the site in question contained a sitemap, I decided to start there instead. I do know that they explicitly said not to use any outside libraries, which is why I didn't use Pillow for image processing. However, I did cheat and use requests for some other things that the actual spider didn't utilize. But again, I wasn't told why my code wasn't good enough, so at this point I would like to learn why.
# -*- coding: utf-8 -*-
import scrapy
from scrapy.spiders.sitemap import *
from pyquery import PyQuery as pq
from oxygendemo.items import OxygendemoItem
import oxygendemo.utilities
from oxygendemo.utilities import *


class OxygenSpider(SitemapSpider):
    print 'MY SPIDER, IS ALIVE'
    name = "oxygen"
    allowed_domains = ["oxygenboutique.com"]
    sitemap_urls = ['http://www.oxygenboutique.com/sitemap.xml']
    sitemap_rules = generate_sitemap_rules()
    ex_rates = get_exchange_rates()

    def parse_sitemap_url(self, response):
        self.logger.info('Entered into parse_sitemap_url method')
        self.logger.info('Received response from: {}'.format(response.url))
        self.logger.debug('Response status: {}'.format(response.status))
        item = OxygendemoItem()
        d = pq(response.body)
        parsed_url = urlparse.urlparse(response.url)
        base_url = get_base(parsed_url)
        product_info = d('.right div#accordion').children()
        image_links = d('div#product-images tr td a img')
        description = product_info.eq(1).text().encode('ascii', 'ignore')
        item['code'] = str(parsed_url[2].lstrip('/')[:-5])
        item['description'] = description
        item['link'] = parsed_url.geturl()
        item['name'] = d('.right h2').text()
        gbp_price = {
            'prices': d('.price').children(),
            'discount': 0
        }
        item['gbp_price'], item['sale_discount'] = get_price_and_discount(
            gbp_price
        )
        if 'error' not in self.ex_rates:
            item['usd_price'] = "{0:.2f}".format(
                item['gbp_price'] * self.ex_rates['USD']
            )
            item['eur_price'] = "{0:.2f}".format(
                item['gbp_price'] * self.ex_rates['EUR']
            )
        else:
            item['usd_price'], item['eur_price'] = ['N/A'] * 2
        item['designer'] = d('.right').find('.brand_name a').text()
        item['stock_status'] = json.dumps(
            determine_stock_status(d('select').children()))
        item['gender'] = 'F'  # Oxygen Boutique carries women's clothing
        item['image_urls'] = fetch_images(image_links, base_url)
        item['raw_color'] = get_product_color_from_description(description)
        yield item

This is the utilities module I used:

# -*- coding: utf-8 -*-
import requests
import json
import urlparse
from pyquery import PyQuery as pq
import re


def get_base(parsed_url):
    base_url = parsed_url[0] + '://' + parsed_url[1]
    base_url = base_url.encode('ascii', 'ignore')
    return base_url


def get_exchange_rates():
    ''' return dictionary of exchange rates
        with british pound as base currency '''
    url = 'http://api.fixer.io/latest?base=GBP'
    try:
        response = requests.get(url)
        er = json.loads(response.content)['rates']
        return er
    except:
        return {'error': 'Could not contact server'}


def determine_stock_status(sizes):
    result = {}
    for i in xrange(1, len(sizes)):
        option = sizes.eq(i).text()
        if 'Sold Out' not in option:
            result[option] = 'In Stock'
        else:
            size = option.split(' ')[0]
            result[size] = 'Sold Out'
    return result


def determine_type(short_summary):
    short_summary = short_summary.upper()
    S = {'HEEL', 'SNEAKER', 'SNEAKERS', 'BOOT', 'FLATS', 'WEDGES', 'SANDALS'}
    J = {'RING', 'NECKLACE', 'BANGLE', 'CHOKER', 'COLLIER', 'BRACELET',
         'TATTOO', 'EAR JACKET'}
    B = {'BAG', 'PURSE', 'CLUTCH', 'TOTE'}
    A = {'PINNI', 'BLOUSE', 'TOP', 'SKIRT', 'KNICKER', 'DRESS', 'DENIM',
         'COAT', 'JACKET', 'SWEATER', 'JUMPER', 'SHIRT', 'SKINNY', 'SHORT',
         'TEE', 'PANTS', 'JUMPSUIT', 'HIGH NECK', 'GOWN', 'TROUSER', 'ROBE',
         'PLAYSUIT', 'CULOTTE', 'JODPHUR', 'PANTALON', 'FLARE', 'CARDIGAN',
         'VEST', 'CAMI', 'BEDSHORT', 'PYJAMA', 'BRALET', 'TUNIC', 'HOODY',
         'SATEEN', 'BIKER', 'JEAN', 'SWEAT', 'PULL', 'BIKINI',
         'LE GRAND GARCON'}
    types = {'B': B, 'S': S, 'J': J, 'A': A}
    for key, val in types.iteritems():
        for t in val:
            if t in short_summary:
                return key
    else:
        return 'R'  # Tag as accessory as failsafe


def fetch_images(image_links, base_url):
    ''' base_url will come as unicode
        change to python string '''
    images = []
    for image in image_links:
        images.append(urlparse.urljoin(base_url, image.attrib['src']))
    return images


def get_price_and_discount(gbp_price):
    if gbp_price['prices']('.mark').text() == '':  # No discount
        gbp_price['discount'] = '0%'
        orig_price = float(gbp_price['prices'].parent().text()
                           .encode('ascii', 'ignore'))
    else:  # Calculate discount
        prices = gbp_price['prices']
        orig_price = "{0:.2f}".format(float(prices('.mark').text()))
        new_price = "{0:.2f}".format(float(gbp_price['prices'].eq(1).text()))
        gbp_price['discount'] = "{0:.2f}"\
            .format(float(orig_price) / float(new_price) * 100) + '%'
    return float(orig_price), gbp_price['discount']


def get_raw_image_color(image):
    ''' Note that the Pillow imaging library would be perfect for this task.
        But external libraries are not allowed via the constraints noted
        in the instructions. Example: Image.get_color(image) could be used
        with Pillow. '''
    # only import the Pillow image library if this is used
    from PIL import Image
    im = Image.open(image)
    colors = im.getcolors()
    if colors is None:
        return None
    else:
        return colors[0]  # Not functional at this point


def get_product_color_from_description(description):
    ''' Will go this route to avoid external imports '''
    description = description.upper().split(' ')
    colors = ('BLACK', 'WHITE', 'BLUE', 'YELLOW', 'ORANGE', 'GREY', 'PINK',
              'FUSCIA', 'RED', 'GREEN', 'PURPLE', 'INDIGO', 'VIOLET')
    for word in description:
        for color in colors:
            if word == color:
                return color.lower()
    else:
        return None


def generate_sitemap_rules():
    d = pq(requests.get('http://www.oxygenboutique.com').content)
    # Proof of concept regex can be found here --> http://regexr.com/3c0lc
    designers = d('ul.tame').children()
    re_front = r'(http:\/\/)(www\.)(.+\/)((?!'
    re_back = r').+)'
    re_middle = 'products|newin|product|lingerie|clothing'
    for li in designers:
        ''' This removes 36 requests from the queue '''
        link = pq(li.find('a')).attr('href').rstrip('.aspx')
        re_middle += '|' + link
    return [(re_front + re_middle.replace('-', r'\-') + re_back,
             'parse_sitemap_url')]

OxygendemoItem() declaration:

import scrapy
from scrapy import Field


class OxygendemoItem(scrapy.Item):
    code = Field()           # unique identifier (retailer's perspective)
    description = Field()    # detailed description
    designer = Field()       # manufacturer
    eur_price = Field()      # full (non-discounted) price
    gender = Field()         # F - female, M - male
    gbp_price = Field()      # full (non-discounted) price
    image_urls = Field()     # list of urls representing the item
    link = Field()           # url of product page
    name = Field()           # short summary of the item
    raw_color = Field()      # best guess of color. Default = None
    sale_discount = Field()  # % discount for sale item where applicable
    stock_status = Field()   # dictionary of sizes to stock status
    ''' size: quantity
        Example: { 'L': 'In Stock', 'M': 'In Stock',
                   'S': 'In Stock', 'XS': 'In Stock' } '''
    # 'A' = apparel, 'B' = bags, 'S' = shoes, 'J' = jewelry, 'R' = accessories
    type = Field()
    usd_price = Field()      # full (non-discounted) price

Answer: Well, to start with, you have bad practices in your imports. It's recommended to stay away from from module import * because doing that imports things without explicitly declaring their names. Without realising it, you could be overwriting other functions, including builtins, if the module was made carelessly. Instead use just import module or from module import func1, func2, CONST. Especially though, don't do this:

import oxygendemo.utilities
from oxygendemo.utilities import *

It's totally redundant to have the first line since you're then ignoring it to import everything.
In case you don't know, you can still alias plain imports:

import oxygendemo.utilities as util

So you don't even need to worry about the name being too long. Also, OxygenSpider is not laid out properly. You have loose code that should probably be in an __init__ function. Let me show you how this works in the interpreter:

>>> class A:
        print "Printing class A"

Printing class A

So what happened there? The print statement was run when the class was created. I haven't created any object yet, so what happens when I create one?

>>> A()
<__main__.A instance at 0x0000000002CA5588>
>>> b = A()
>>>

Nothing. It's not printing the message that you intended to appear when creating an OxygenSpider object. If you were to wrap it in __init__, though, it would. __init__ is a special function that runs when a new object is created, like so:

>>> class A:
        def __init__(self):
            print "Printing this object"

>>> A()
Printing this object
<__main__.A instance at 0x0000000002113488>
>>> b = A()
Printing this object

You see now? Nothing happens after the class is created, but when actual objects are created, __init__ gets run. You should put the whole opening block of OxygenSpider in a function like that. The variables should be assigned as self.var, the constants should be in UPPER_SNAKE_CASE, and constant lists should be tuples instead. Tuples are made with () and are basically like lists, except they cannot be changed. However, since you're inheriting from SitemapSpider, you also need to call its __init__ from yours, so that your base class is initialised before you run your particular __init__ code. There's a good explanation in this Stack Overflow answer.

class OxygenSpider(SitemapSpider):
    def __init__(self):
        super(OxygenSpider, self).__init__()
        print 'MY SPIDER, IS ALIVE'
        self.NAME = "oxygen"
        self.ALLOWED_DOMAINS = ("oxygenboutique.com",)
        self.SITEMAP_URLS = ('http://www.oxygenboutique.com/sitemap.xml',)
        self.sitemap_rules = generate_sitemap_rules()
        self.ex_rates = get_exchange_rates()

(Note the trailing commas: without them the parentheses make plain strings, not one-element tuples.) Also, printing when creating an object just to say it's created isn't very nice anyway; you should remove that.
{ "domain": "codereview.stackexchange", "id": 16377, "tags": "python, web-scraping, scrapy" }
Binary Classification [Text] based on Embedding Distance?
Question: I was just informed this community was a better fit for my SO question. I am wondering if I can use Milvus or Faiss (L2 or IP or...) to classify documents as similar or not based on distance. I have vectorized text from news articles and stored it in both Milvus and Faiss to try them out. What I don't want to do is retrain a model every time I add new article embeddings and have to worry about data set balance, whether I have to change my LR, etc. I would like to store embeddings and return the Top-1 result for each new article that I'm reading; if the distance is "close", save that new article to Milvus/Faiss, else discard it. Is that an acceptable approach to binary classification of text from your point of view? If so, with DistilBERT embeddings, is magnitude (L2) a better measurement or orientation (IP)? When I say "close", this isn't a production idea for work, just an idea that I can't think through or find other people explaining online; I would expect the accuracy of "close" to be some ballpark threshold. As a cosine similarity example (Figure 1): if OA and OB exist in the Milvus/Faiss DB and I search with a new embedding OC, I would get OB closest to OC at 0.86, and if the threshold for keeping is say > 0.51, I would keep OC. As an L2 example (Figure 1): if A' and B' exist in my Milvus/Faiss DB and I search for C' with a threshold of say < 10.5, I would reject C', as B' is closest to C' at 20.62. Figure 1 - medium article Answer: There are two levels to your question: Conceptual - Yes, you can perform an approximate nearest neighbor search on text documents that have been embedded. What you call binary classification is more commonly called anomaly detection when the data is not labeled. Oftentimes in anomaly detection there is a threshold for similar or not. Implementation - Milvus is a database. Faiss is a vector library. The specific implementation will depend on how the system is architected.
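As a rough Python 3 sketch of the keep/discard rule described in the question (the function names are mine, the 2-d vectors are toy stand-ins for OA/OB/OC, and the 0.51 threshold is just the question's illustrative value):

```python
import math

def cosine(u, v):
    """Cosine similarity: compares orientation only, ignores magnitude."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def l2(u, v):
    """Euclidean (L2) distance: sensitive to magnitude."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def keep_new_embedding(query, stored, threshold=0.51):
    """Top-1 search: keep the new article iff its best cosine match
    against the stored embeddings exceeds the threshold."""
    best = max(cosine(query, v) for v in stored)
    return best > threshold

stored = [(1.0, 0.0), (0.6, 0.8)]  # toy stand-ins for OA, OB
query = (0.8, 0.6)                 # toy stand-in for OC
print(keep_new_embedding(query, stored))  # -> True (best match is 0.96)
```

In a real Milvus/Faiss setup the index performs the Top-1 search for you; note that inner product (IP) on L2-normalised vectors is equivalent to cosine similarity, which is why normalising the embeddings before indexing is a common choice.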
{ "domain": "datascience.stackexchange", "id": 11495, "tags": "machine-learning, deep-learning, classification, word-embeddings" }
Reason for the convention about polarization states
Question: I'd like to know if there is a special reason for limiting the convention of polarization state to waves that can be split into just two components of equal frequency. Answer: Galmeida, I think you are thinking of the Jones vector polarization formalism, which works for plane waves in a homogeneous, linear, isotropic medium (like air, amorphous glass, or vacuum). The reasons they are defined this way are the following. Any electric field can classically be decomposed into a superposition of plane waves via a Fourier transform. So we can think about manipulating plane wave components, which correspond to single frequencies. In homogeneous, linear, isotropic media, the propagation direction of a plane wave is the $k$-vector, which is perpendicular to the electric and magnetic field directions. If we only consider the electric field, then its vector will trace some shape in the plane perpendicular to the $k$-vector. Since the shape traced is in a plane, we only need two numbers (say x and y) to describe the electric field vector at any given time/spatial point. So the reason that we define the single frequency is because we usually think about plane waves, and the reason that the Jones formalism only uses two components is because in a lot of materials that we care about, the electric field vector is always perpendicular to the $k$-vector. This can be generalized (and has been) to a full 3-d vector and spectral (all wavelengths) polarization description. See Emil Wolf and his work on coherence and polarization, or Goodman's Statistical Optics, for more details.
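For concreteness, here are a few standard Jones vectors in that two-component basis (the x and y components of the field in the plane perpendicular to the $k$-vector):

$$ \hat{e}_x = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \hat{e}_{45^\circ} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \hat{e}_{\mathrm{RCP}} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -i \end{pmatrix} $$

The relative phase between the two components (the $-i$ in the circular case) is what distinguishes linear from circular polarization at a single frequency.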
{ "domain": "physics.stackexchange", "id": 5878, "tags": "electromagnetism, terminology, polarization" }
Stability of the summation function
Question: Stability of the below function was asked. $$ T(x[n]) = \sum_{k=n_0}^n x[n] $$ I understand that $x[n]$ shouldn't exceed $M$ value. $x[n]\le M$. I mean, we are looking for if the input is bounded, what will the output be. However, i didn't understand from where $|n-n_0|M$ came from in the below function. $$ \left |T(x[n]) \right| \le \sum_{k=n_0}^n \left| x[n] \right| \le |n-n_0|M $$ Answer: Well, we are summing over $k$ in the range $n_0$ to $n$. The number of elements in the sum is therefore $n-n_0$. We also know that the sum must be positive and that all terms in the sum ($x[n]$) have absolute value less than $M$. To account for $n-n_0$ being negative, but our sum MUST be positive, we take its absolute value.
{ "domain": "dsp.stackexchange", "id": 3874, "tags": "discrete-signals" }
Regarding ros::WallTime
Question: Sequence 1: this is my message package:

#Num.msg
Header header
string data

This is my terminal output when I type $ rosmsg show Test/Num:

Header header
  uint32 seq
  time stamp
  string frame_id
string data

Question 1: how do I upgrade the "time" to wall-clock time, which will show the current time in "hr.min.sec" format?

Sequence 2: this is my publisher node:

#include "ros/ros.h"
#include "Test/Num.h"
#include <sstream>
using namespace std;

int main(int argc, char **argv)
{
  ros::init(argc, argv, "publisher");
  ros::NodeHandle m;
  ros::Publisher chatter_pub = m.advertise<Test::Num>("request", 1000);
  ros::Rate loop_rate(10);
  int counter = 0;
  while (ros::ok())
  {
    Test::Num msg1;
    msg1.header.stamp = ros::Time::now();
    msg1.header.frame_id = "/current time";
    std::stringstream sst;
    sst << "Can you hear me, hello..." << counter;
    msg1.data = sst.str();
    ROS_INFO("%s", msg1.data);
    chatter_pub.publish(msg1);
    loop_rate.sleep();
    ++counter;
    ros::spinOnce();
  }
  return 0;
}

Question 2: in the above code, in the line "msg1.header.stamp = ros::Time::now();", what will the member be instead of "stamp" in the case of ros::WallTime::now()?

Sequence 3: this is my subscriber node:

#include "ros/ros.h"
#include "Test/Num.h"
#include <sstream>
using namespace std;

void Request(const Test::Num& msg2)
{
  Test::Num newMessage;
  newMessage.data = msg2.data;
  newMessage.header.stamp = ros::Time::now();
  newMessage.header.frame_id = "/current time";
  ROS_INFO("I heard: [%s]", msg2.data);
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "subscriber");
  ros::NodeHandle m;
  ros::Subscriber request_sub = m.subscribe("request", 1000, Request);
  ros::spin();
  return 0;
}

Question 3: if I want to use a ros::Duration to subtract two time stamps to get the response time (the time the message was published on the request topic minus the time it was received from the request topic), what will the command be?

Question 4: with the rosmake command I am getting the following two warnings:

1. warning: cannot pass objects of non-POD type ‘const struct std::basic_string<char, std::char_traits, std::allocator >’ through ‘...’; call will abort at runtime
2. warning: format ‘%s’ expects type ‘char*’, but argument 8 has type ‘int’

and with the rosrun Test publisher command, it shows "illegal instruction".

Originally posted by muin028 on ROS Answers with karma: 13 on 2012-09-18
Post score: 0

Answer: Please open single questions in the future unless they are regarding the same problem.

1./2.: Just leave it as it is and use ros::WallTime instead of ros::Time. There is no need to rename the parameter; just put some time in there, and if you want, use WallTime.
3.: You can just subtract the two times with the normal "-" operator. You will automatically get a ros::Duration.
4.: Add a .c_str() when you output a std::string to ROS_...() calls.

Originally posted by dornhege with karma: 31395 on 2012-09-18
This answer was ACCEPTED on the original site
Post score: 2

Original comments

Comment by muin028 on 2012-09-19: you said "There is no need to rename the parameter, if you want to use WallTime", but I am getting this kind of error: error: no match for ‘operator=’ in ‘msg1.Test::Num_std::allocator<void >::header.std_msgs::Header_std::allocator<void >::stamp = ros::WallTime::now()’

Comment by muin028 on 2012-09-19: "just put some time in there" - can you please clarify that sentence a bit more? Thanks in advance.

Comment by dornhege on 2012-09-19: They are both a ros::TimeBase, so you can construct a ros::Time from the sec/nsec parameters of a WallTime. "Just put some time in there" means you can use a ros::Time that you constructed in any way you want (e.g. from a WallTime), as long as it represents a time. However, you cannot change the Header type.

Comment by muin028 on 2012-09-20: thanks for the reply. ros::Time and ros::WallTime are different classes; of them, the header's stamp field can take a ros::Time but not a ros::WallTime. I tried lots of times and it keeps saying the same thing. I think there may be a bug in the ros::WallTime class.

Comment by dornhege on 2012-09-20: They are both instantiations of the same template. You can thus create a ros::Time from the sec/nsec of a ros::WallTime and will get a corresponding object.
{ "domain": "robotics.stackexchange", "id": 11056, "tags": "ros" }
Helper function to solve Project Euler question 26
Question: Project Euler problem 26 asks us to:

Find the value of d < 1000 for which 1/d contains the longest recurring cycle in its decimal fraction part.

I wrote this function in Python to find the decimal representation of a rational number p/q. How can I improve it? Please also suggest good coding style.

#! /usr/bin/env python
# -*- coding: utf-8 -*-
"""This program converts a rational number into its decimal representation.

A rational number is a number of the form p/q where p and q are integers
and q is not zero. The decimal representation of a rational number is
either terminating or non-terminating but repeating.
"""

def gcd(a, b):
    """Computes gcd of a, b using Euclid's algorithm."""
    if not isinstance(a, int) or not isinstance(b, int):
        return None
    a = abs(a)
    b = abs(b)
    while b != 0:
        a, b = b, a % b
    return a

def decimal(p, q):
    """Computes the decimal representation of the rational number p/q.

    If the representation is non-terminating, then the recurring part is
    enclosed in parentheses. The result is returned as a string.
    """
    if not isinstance(p, int) or not isinstance(q, int):
        return ''
    if q == 0:
        return ''
    abs_p = abs(p)
    abs_q = abs(q)
    s = (p / abs_p) * (q / abs_q)
    g = gcd(abs_p, abs_q)
    p = abs_p / g
    q = abs_q / g
    rlist = []
    qlist = []
    quotient, remainder = divmod(p, q)
    qlist.append(quotient)
    rlist.append(remainder)
    if remainder == 0:
        return str(quotient)
    while remainder != 0:
        remainder *= 10
        quotient, remainder = divmod(remainder, q)
        qlist.append(quotient)
        if remainder in rlist:
            break
        else:
            rlist.append(remainder)
    qlist = map(str, qlist)
    if remainder:
        recur_index = rlist.index(remainder) + 1
        dstring = qlist[0] + '.' + ''.join(qlist[1:recur_index]) + \
            '(' + ''.join(qlist[recur_index:]) + ')'
        if s < 0:
            dstring = '-' + dstring
    else:
        dstring = qlist[0] + '.' + ''.join(qlist[1:])
        if s < 0:
            dstring = '-' + dstring
    return dstring

if __name__ == '__main__':
    p = raw_input('p: ')
    q = raw_input('q: ')
    try:
        p = int(p)
        q = int(q)
        if q == 0:
            raise ValueError
        print '%d/%d =' % (p, q), decimal(p, q)
    except ValueError:
        print 'invalid input'

Answer: Trying to generate the actual string representation is interesting for understanding the problem, but you do not need to keep all that complexity in your code for problem 26. Indeed, you are only interested in the length of the cycle for 1/d. Thus, what I did was to: remove all the logic corresponding to string construction at the end of the function; simplify argument handling by removing p (always 1) and considering only positive q (renamed d); notice that we can actually return the value directly from the while loop, and 0 if we get out of the loop; notice that we don't actually need the contents of qlist, but we do need its length; notice that the first iteration is just an iteration like the others, which removes a bit of code; notice that we don't need the quotients anymore. At the end, here's what the code looks like:

def cycle_length(d):
    """Computes the length of the recurring cycle in the decimal
    representation of the rational number 1/d if any, 0 otherwise
    """
    if not isinstance(d, int) or d <= 0:
        raise ValueError("cycle_length(d): d must be a positive integer")
    rlist = []
    qlist_len = 0
    remainder = 1
    while remainder:
        remainder = remainder % d
        if remainder in rlist:
            return qlist_len - rlist.index(remainder)
        rlist.append(remainder)
        remainder *= 10
        qlist_len += 1
    return 0

if __name__ == '__main__':
    for d in range(1, 20):
        try:
            print '1/%s =' % (d), cycle_length(int(d))
        except ValueError:
            print 'invalid input'
{ "domain": "codereview.stackexchange", "id": 5774, "tags": "python, project-euler, mathematics" }
How to understand the notion of boolean query via Immerman's definition
Question: A query is any mapping $I:STRUC[\sigma] \to STRUC[\tau]$ that is polynomially bounded. A boolean query is a map $I_b: STRUC[\sigma] \to \{0,1\}$. A boolean query can also be thought of as the subset: $$ \{ A \in STRUC[\sigma] \ \mid \ I_b(A) = 1 \} $$ From this definition of query, used by Immerman in his book "Descriptive Complexity", how are boolean queries and general queries related? It seems from the definition that they are two entirely different things. Is there an intuitive way to link the two concepts? Thank you! Answer: Boolean queries are a special case of general queries. You can take $\tau$ to be some convenient vocabulary and choose a pair of $\tau$-structures to represent "true" and "false". For a concrete example, take $\tau$ to be the vocabulary with a single nullary relation symbol $T$. Now, associate "true" with the $\tau$-structure $\mathfrak{T}$ that has $|\mathfrak{T}|=\emptyset$ and $T^\mathfrak{T}=\{\langle\,\rangle\}$ (i.e., the nullary relation that contains the empty tuple) and "false" with the structure $\mathfrak{F}$ that has $|\mathfrak{F}|=\emptyset$ and $T^\mathfrak{F}=\emptyset$ (the nullary relation that doesn't contain the empty tuple). Empty universes are a bit awkward, since they mean that $(\forall x\,\varphi) \rightarrow \exists x\,\varphi$ isn't a tautology. If you don't like that and/or you don't like nullary relations, make the relation $T$ unary and use a one-element universe.
{ "domain": "cs.stackexchange", "id": 13374, "tags": "complexity-theory, logic, descriptive-complexity, finite-model-theory" }
The most appropriate way to implement a heap is with an array rather than a linked list, why is this?
Question: The most appropriate way to implement a heap is with an array rather than a linked list. Why is this? I don't completely understand it. Is it because an array is easier to traverse? Answer: It doesn't make any sense at all to implement a heap as a linked list. (The most common kinds of) heaps are inherently binary trees. You can store a heap in an array because it's easy to compute the array index of a node's children: the children of the node at index K live at indices 2K+1 and 2K+2 if indices start at 0 (or at indices 2K and 2K+1 if indices start at 1). It's massively more efficient to find the Kth element of an array than the Kth element of a linked list. Advantages of storing a heap as an array rather than as a pointer-based binary tree include the following. Lower memory usage (no need to store three pointers for every element of the heap). Easier memory management (just one object allocated, rather than N). Better locality of reference (the items in the heap are relatively close together in memory rather than scattered wherever the allocator put them).
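The index arithmetic from the answer is short enough to demonstrate directly; the helper names here are mine, and heapq is Python's standard array-backed binary min-heap:

```python
import heapq

def children(k):
    """0-based array layout: indices of the children of the node at index k."""
    return 2 * k + 1, 2 * k + 2

def parent(k):
    """Inverse mapping: the parent of the node stored at index k > 0."""
    return (k - 1) // 2

heap = [3, 9, 1, 4, 7]
heapq.heapify(heap)  # in-place: array order now satisfies the heap property
assert heap[0] == 1  # the minimum sits at the root, index 0

# every node is <= its children wherever those child indices exist
for k in range(len(heap)):
    for c in children(k):
        if c < len(heap):
            assert heap[k] <= heap[c]
```

Because the tree is complete, the array has no gaps, which is what makes this flat layout (and its O(1) parent/child arithmetic) possible in the first place.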
{ "domain": "cs.stackexchange", "id": 9023, "tags": "data-structures" }
Is it possible to achieve a level of "truly zero" concentration?
Question: If we take some aqueous solution and dilute it further and further, will the concentration of the solution ever get to zero? I would say no, simply because total dilution implies that all the molecules of the solute have literally disappeared. But the fact that I am unable to figure out where the molecules have gone doesn't make my argument compelling at all. I am led to believe that there is a far better answer and/or explanation to my question. UPDATE: While it is true that this question closely resembles the linked one, the answer provided here is a lot better, as it gives much deeper insight into the dilution process. Answer: It depends how you dilute it. If you take an aqueous solution of A and just add pure water (absolutely 100% water), the concentration of A will never quite be zero. In this case, however, you will reach a point where the concentration of A is so small that it can be considered zero for your applications. If, however, you dilute the solution, take a sample, then dilute that sample (and so on), you could reach a concentration of exactly 0 M. Imagine you have diluted the solution enough that it contains exactly one molecule of A. When you take your sample for the next dilution, if this molecule isn't in the sample, the concentration will be exactly zero. If it does happen to be in the sample, it could be left behind when you draw the next sample, or the next, and so on. In practice, though, the water you use for the dilution will likely contain impurities. You may not achieve exactly 0 M, but the concentration could be so small that it is undetectable and has no measurable consequence.
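The serial-dilution argument in the answer is essentially probabilistic: each sampling step carries the last remaining molecule along only with probability equal to the volume fraction sampled. A small Python 3 Monte Carlo sketch (all names and numbers here are illustrative, not from the question):

```python
import random

def survival_probability(transfers, sample_fraction, trials=100000):
    """Estimate the chance that one remaining solute molecule is still
    present after a chain of sample-and-dilute steps, where each step
    carries the molecule along with probability sample_fraction."""
    survived = 0
    for _ in range(trials):
        if all(random.random() < sample_fraction for _ in range(transfers)):
            survived += 1
    return survived / trials

random.seed(0)
# Taking a 10% sample three times: the expected survival rate is
# 0.1 ** 3 = 0.001, so a concentration of exactly zero is very likely.
print(survival_probability(3, 0.10))
```

The estimate converges to sample_fraction ** transfers, which is why, after a few transfers, ending up with exactly zero molecules becomes overwhelmingly likely.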
{ "domain": "chemistry.stackexchange", "id": 14316, "tags": "aqueous-solution, concentration, molecules" }
How to Prepare a Buffer Solution?
Question: Problem: I am given the task of preparing three buffer solutions at pH $10$, $9.5$, and $9.0$. I have available concentrated ammonia and $\pu{3M}$ hydrochloric acid. The buffer capacity desired is $\pu{0.1 M}$. Attempt at solving: By adding $\ce{HCl}$ to a solution containing ammonia, the ammonia will completely consume the strong acid. The resulting solution will contain ammonia and its conjugate acid, the ammonium ion $\ce{NH4+}$. $$\ce{NH3 + H+ -> NH4+}$$ where $$[{\ce{NH4+}]_\mathrm {final}}= [{\ce{HCl}]_\mathrm{initial}}$$ and $$[{\ce{NH3}]_\mathrm{final}} = [{\ce{NH3}]_\mathrm {initial}} - [{\ce{HCl}]_\mathrm {initial}}$$ I think at this point I should make some assumptions: $\pu{500 mL}$ buffer to be prepared, $\mathrm pK_\mathrm {a,\ce{NH4+}}=9.25$, $[\ce{NH3}] = \pu{2 M}$, ammonia in ethanol solution. From here I think, from the Henderson-Hasselbalch equation, $$\mathrm{pH} = \mathrm pK_\mathrm a + \log \left(\frac{[\ce{A^-}]}{[\ce{HA}]}\right)$$ $$\mathrm{pH} = \mathrm pK_\mathrm a + \log \left (\frac{[\ce{NH3}]}{[\ce{NH4+}]}\right)$$ $$\implies \mathrm{pH} = \mathrm pK_\mathrm a + \log \left(\frac{[\ce{NH3}]}{[\ce{NH3}] - [\ce{HCl}]}\right)$$ Making the pH = 10 solution: $$10 = 9.25 + \log\left(\frac{[\ce{NH3}]}{[\ce{NH4+}]}\right)$$ It is pretty visible at this point, though, that I don't exactly know how to proceed. I have reached one variable, but through processes that would not really hold up under scrutiny. The $[\ce{NH3}]$ that I have as my last variable is either $\ce{[NH3]}_\mathrm {initial}$ or $\ce{[NH3]}_\mathrm {final}$, but I have no idea how to determine which it is. I also feel I may have applied the wrong method from the start. If there is anything I can do to add clarity or stop this post from being thrown out, please let me know so I can remedy the situation asap. Answer: You have some of the chemistry wrong. Looking at your assumptions, concentrated ammonia is typically an aqueous solution, as is concentrated hydrochloric acid.
You could have ethanol solutions saturated with $\ce{HCl}$ and $\ce{NH3}$ but that would be unusual. Concentrated ammonia in aqueous solution is about 18 molar and is usually notated as concentrated ammonium hydroxide. Since all three buffer solutions are in alkaline solution, all the $\ce{HCl}$ will be reacted according to the following reaction: $\ce{NH3 + HCl -> NH4^+ + Cl^-}$ The $\ce{Cl^-}$ anion is a spectator anion and will not affect the pH. $\ce{\text{pKa}_{ammonia}= 9.25}$ is sound. Using the given pKa, at pH = 9.25, $\ce{[NH3] = [NH4^+]}$. So: at pH 9.0 there will be a bit more $\ce{[NH4^+]}$ than $\ce{[NH3]}$. at pH 9.5 there will be a bit more $\ce{[NH3]}$ than $\ce{[NH4^+]}$. at pH 10.0 there will be even more $\ce{[NH3]}$ relative to $\ce{[NH4^+]}$ than at pH 9.5. The ratio of $\ce{[NH3]}$ to $\ce{[NH4^+]}$ can be calculated from the pH using the Henderson-Hasselbalch equation, which for the ammonia/ammonium equilibrium is: $$\mathrm{pH} = 9.25 + \log\left(\frac{[\ce{NH3}]}{[\ce{NH4^+}]}\right)$$ The problem as stated however seems incomplete. If you really must make the buffer solution using only concentrated ammonium hydroxide and 3 molar HCl, then there is a unique solution for each buffer. However if you can add additional water, then there is no unique solution and you'd need to know the buffer capacity for each of the buffers. It seems really odd to have buffer solutions as concentrated as 18 molar. I'll explain further that the equilibrium equation $\text{K}_a = \dfrac{\ce{[NH3][H^+]}}{\ce{[NH4^+]}}$ should really be written in terms of the activities of the species, not the concentrations. Below about 0.1 molar the activity and concentration are pretty equal. But in solutions as concentrated as 18 molar that assumption is dicey, and some correction would need to be made since the activity of the various species would be less than their actual concentrations.
Working towards a solution for 0.10 molar buffers Edit 1/19/2017, noon So now the problem is that 0.1 molar buffers are needed for pH values of 10, 9.5, and 9.0. (I'll assume that pH 10 means 10.0, so one significant figure in concentration.) Even though we are going to work with the pKa of $\ce{[NH4^+]}$, let's not forget that we are starting with "pure" ammonium hydroxide, which ionizes as: $\ce{NH3 + H2O <--> NH4^+ + OH^-}$ In "pure" ammonium hydroxide not much of the ammonia will ionize to ammonium, so we can assume $\ce{[OH^-] = [NH4^+]}$ and the pH will be 11+. So we can add various amounts of HCl and create the needed buffers. However there is a consideration. A 0.1 molar buffer means that, given 1 liter of solution: If 0.1 moles of a strong acid is added, the pH drops no more than 1 pH unit. If 0.1 moles of a strong base is added, the pH increases no more than 1 pH unit. Now a buffer would be made "best" at the pKa or pKb value of a chemical so that the buffer capacity is the same for either a strong acid or a strong base. The buffers at 9.0 and 9.5 are reasonably close to the pKa of ammonium, which is 9.25. So these buffers will guard against nearly the same amount of acid as base. However the buffer at pH 10.0 is a considerable distance from the pKa of ammonium. So it will take much less of a strong base like NaOH to change the pH to 11.0 than a strong acid like HCl to change the pH to 9.0. So assume that: The 0.1 molar buffer capacity means "at least 0.1 molar". You're making 1 liter of each of the solutions. You only need volumes to about 5% for the reagents. The last assumption is a "gotcha" of sorts. I haven't worked through the solution, but I don't think there is a way to solve all the constraints directly in one pass. I think the "exact" solution would have to be calculated iteratively. No problem for a computer, but it is painful by hand. So overall this is trying to avoid iterating by using the quadratic equation.
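The Henderson-Hasselbalch bookkeeping above is easy to mechanize. A minimal sketch in Python (the pKa of 9.25 and the 0.1 M total come from the discussion; the function names are my own):

```python
def base_acid_ratio(pH, pKa=9.25):
    """Henderson-Hasselbalch: pH = pKa + log10([NH3]/[NH4+])."""
    return 10 ** (pH - pKa)

def buffer_composition(pH, total=0.1, pKa=9.25):
    """Split a fixed total [NH3] + [NH4+] concentration for a target pH."""
    r = base_acid_ratio(pH, pKa)
    nh4 = total / (1 + r)        # [NH4+]
    return total - nh4, nh4      # ([NH3], [NH4+])

for pH in (9.0, 9.5, 10.0):
    nh3, nh4 = buffer_composition(pH)
    print(f"pH {pH}: [NH3] = {nh3:.4f} M, [NH4+] = {nh4:.4f} M")
```

Since each mole of HCl converts one mole of NH3 to NH4+, the moles of HCl to add per liter equal the target [NH4+] in each case.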
{ "domain": "chemistry.stackexchange", "id": 11201, "tags": "physical-chemistry, acid-base, everyday-chemistry, buffer" }
Cheap launching objects to orbit
Question: Why hasn't the slingatron been successful? Same thing with the gigantic nail clipper. Since we don't need great force when the arm length increases, why couldn't we build a gigantic nail clipper orbital launcher :) ? Provided of course that we had material able to sustain great compression stress. Disclaimer: no, I'm not affiliated in any way with the creators, don't know them, not even the same country. I just thought the idea was cool. Answer: Because of the centrifugal force. I had made some calculations back when I was looking for a good MSc thesis subject because I wanted to work on a railgun or a slingshot in orbit, but it turned out it was impossible because of that, since it would need an astronomically long arm not to crush everything in the satellite. You are looking at reaching orbital speeds of several kilometres per second... The centripetal acceleration provided by the arm (or in that case, the spiral structure) and undergone by the structure of the satellite is: $$a_c=\frac{v^2}{R}$$ Which means that to reach a relatively low speed of 3 km/s (GEO), even with a 1 km radius the centripetal acceleration is $9000\ \mathrm{m/s^2}$, roughly 900 g, so every kilogram of payload effectively pulls with about 900 kg-f. Not so big? Take into account the fact that it's very simplified. This is only to reach an orbital speed. In practice the delta-V for insertion is different, since you have gravity to battle as you go up and you're doing it less efficiently with a slingshot than a launcher because you don't have a vertical phase at the beginning... And they already require more than 10 km/s of delta-V to get to GEO (about 10,000 g, i.e. 10 tonnes-f per kilogram!). I'll let you have a look at Hohmann transfers as a simplistic example to get values.
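The answer's numbers are easy to reproduce. A quick sketch (the 3 km/s speed and 1 km radius come from the answer; g = 9.81 m/s² is a standard assumption):

```python
G = 9.81  # standard gravity, m/s^2

def centripetal_acceleration(v, R):
    """a = v^2 / R for circular motion at speed v (m/s) and radius R (m)."""
    return v ** 2 / R

a_geo = centripetal_acceleration(3_000.0, 1_000.0)   # 3 km/s on a 1 km arm
print(f"{a_geo:.0f} m/s^2 = {a_geo / G:.0f} g")      # ~9000 m/s^2, ~900 g

a_dv = centripetal_acceleration(10_000.0, 1_000.0)   # the ~10 km/s delta-V case
print(f"{a_dv:.0f} m/s^2 = {a_dv / G:.0f} g")        # ~10^5 m/s^2, ~10,000 g
```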
{ "domain": "physics.stackexchange", "id": 12238, "tags": "orbital-motion" }
exclude variables with no variation during prediction?
Question: I am working on a binary classification problem. I do have certain input categorical variables such as gender, ethnicity etc. But all the records have the same value. Meaning, all 10K records in my dataset have female as the gender value. Same for ethnicity as well. Is it okay to straight away exclude these variables during model building? Or is it important to retain them? Since there is no variation (in these variables between output classes), I assume they will not be contributing anything to the output. Can you help me with this? Answer: If you have only females in your dataset, adding a gender feature to the model input will not improve it. The technical explanation of why it won't help changes between models, but the intuition is simple - the model tries to find correlation between the features and the labels, and the correlation between any variable and a fixed-value variable is zero. You didn't directly ask about it, but it's worth mentioning that if the classification problem is related to gender, the model will work better on females than on males because you don't have data about them. That will be true whether you add those features or not. I talked about the gender feature as an example, but the answer is valid for any other feature.
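The standard mechanical version of this check is to drop zero-variance columns before training. A minimal NumPy sketch (the column meanings and values are made up for illustration; scikit-learn's `VarianceThreshold` does the same thing):

```python
import numpy as np

# columns: gender (constant), age, city_code (constant) -- hypothetical data
X = np.array([
    [0, 23.0, 7],
    [0, 31.0, 7],
    [0, 27.0, 7],
], dtype=float)

variances = X.var(axis=0)
keep = variances > 0          # True only for columns that actually vary
X_reduced = X[:, keep]
print(keep, X_reduced.shape)  # only the varying "age" column survives
```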
{ "domain": "datascience.stackexchange", "id": 10476, "tags": "machine-learning, deep-learning, neural-network, classification, data-mining" }
Total charge from a charge density
Question: I was trying to calculate the total charge from a charge density that has a very strange form involving delta functions. The charge density is $$\rho(\vec r) = - \vec d . \nabla \delta(\vec r)$$ $d$ is a vector in 3 dimensions. How to find the integral of such a function? Answer: Since this is a simple enough problem to do, I'll do it in one dimension and let you do it in three! In one dimension, $$\rho(x) = -d \frac{\text{d}}{\text{d} x} \delta (x).$$ The derivative of the delta function (very much like the delta function itself) is only properly defined when you act it on a function. It's easy to show (using integration by parts) that $$\int_{-\infty}^\infty f(x)\,\,\delta'(x - x_0)\,\, \text{d} x = - f'(x_0),$$ meaning that while the delta function "picks out" the value of the function at $x_0$, the derivative of the delta function picks out the (negative of the) value of the function's derivative at $x_0$! Using this, it should be easy to see that $$Q = \int_{-\infty}^\infty \rho(x)\,\text{d} x = + \frac{\text{d}}{\text{d} x} (d) = 0,$$ if $d$ is a constant. I leave it to you to see how this generalises (very simply!) to three dimensions.
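The sifting property of $\delta'$ used in the answer can be checked symbolically. A sketch with SymPy, which writes $\delta'(x)$ as `DiracDelta(x, 1)` (the test function $f(x)=x^3+2x$ is an arbitrary choice of mine):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = x**3 + 2*x                      # arbitrary smooth test function, f'(0) = 2

# integral of f(x) * delta'(x) over the real line should be -f'(0) = -2
picked = sp.integrate(f * sp.DiracDelta(x, 1), (x, -sp.oo, sp.oo))
print(picked)

# 1-D analogue of the dipole density: rho = -d * delta'(x) with constant d
d = sp.symbols('d', real=True)
Q = sp.integrate(-d * sp.DiracDelta(x, 1), (x, -sp.oo, sp.oo))
print(Q)                            # total charge vanishes for constant d
```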
{ "domain": "physics.stackexchange", "id": 71607, "tags": "electrostatics, charge, density, dirac-delta-distributions, dipole" }
Gauge fixing and degrees of freedom
Question: Today, my friend (@Will) posed a very intriguing question - Consider a complex scalar field theory with a $U(1)$ gauge field $(A_\mu, \phi, \phi^*)$. The idea of gauge freedom is that two solutions related by a gauge transformation are identified (unlike a global transformation where the solutions are different but give rise to the exact same observables), i.e. $$(A_\mu(x), \phi(x), \phi^*(x)) ~\sim~ (A_\mu(x) + \partial_\mu \alpha(x), e^{i \alpha(x)}\phi, e^{-i \alpha(x)}\phi^*(x)).$$ The process of "gauge fixing" is to pick one out of the many equivalent solutions related via gauge transformation. The usual procedure of gauge fixing is to impose a condition on $A_\mu$ so that one picks out one of the solutions. His question was the following: Instead of imposing a gauge condition on $A_\mu$, why do we not impose a gauge condition on $\phi$? Wouldn't this also pick out one the many equivalent solutions? Shouldn't this also give us the same observables? If so, why do we not do this in practice? After a bit of discussion, we came to the following conclusion: The idea of gauge symmetry comes from the requirement that a quantum theory involving fields $(A_\mu, \phi, \phi^*)$ have a particle interpretation in terms of a massless spin-1 particles and 2 spin-0 particles. However, prior to gauge fixing, the on-shell degrees of freedom include those of a massless spin-1 particle and 3 spin-0 fields ($A_\mu \equiv 1 \otimes 0,~\phi,\phi^* \equiv 0$). We would now like to impose a gauge condition to get rid of one scalar degree of freedom. There are two ways to do this - Impose gauge condition on $A_\mu$ so that $A_\mu \equiv 1$. Now, $A_\mu$ corresponds to a massless spin-1 particle and the complex scalar corresponds to two spin-0 particles. This is what is usually done. Impose a gauge condition on $\phi$. For instance, one can require that $\phi = \phi^*$. We now have a real field corresponding to a spin-0 particle. 
However, $A_\mu$ still contains the degrees of freedom of both a massless spin-1 and a spin-0 particle. I claimed that the second gauge fixing procedure is completely EQUIVALENT to the first one. However, the operator that now creates a massless spin-1 particle is some nasty, possibly non-Lorentz-invariant combination of $A_0, A_1, A_2$ and $A_3$. A similar statement holds for the spin-0 d.o.f. in $A_\mu$. Thus, the operators on the Hilbert space corresponding to the particles of interest are not nice. It is therefore not pleasant to work with such a gauge fixing procedure. In summary, both gauge fixing procedures work. The first one is "nice". The second is not. Is this conclusion correct? NOTE: By the statement $A_\mu \equiv 1$, I mean that $A_\mu$ contains only a massless spin-1 d.o.f. Answer: If $\phi$ is non-zero, fixing the phase of $\phi$ is a perfectly valid gauge condition. It's used frequently in Standard Model calculations involving the Higgs field, where it goes by the name unitarity gauge. This is a nice gauge in some ways, because it makes manifest the fact that there's a massive vector field in the system. Edit: Some caution is required with unitary gauge. It's a complete gauge when you can reasonably treat $\phi$ as non-zero, because it uses every degree of freedom in the gauge transformation. This means for example that it's OK to use in perturbative calculations around a Higgs condensate. But when $\phi$ can vanish, the phase function isn't uniquely defined, which means the gauge transformation is not invertible. In that case, unitary gauge isn't quite a complete gauge fixing.
{ "domain": "physics.stackexchange", "id": 8779, "tags": "quantum-field-theory, degrees-of-freedom, gauge-symmetry, gauge" }
Angular momentum conservation along translating axis of rotation
Question: Say you have a cylinder rolling on the ground; could we take any possible axis as its axis of rotation? Could we take an axis parallel to the central axis of the cylinder? What about one perpendicular to the floor on which it is rolling and passing through the center? In my school we were only taught to take inertia along the x, y and z axes, hence the confusion. Also, how would computing the inertia look when the $r$ term in the inertia integral is a function of time? i.e.: $$ I = \int (r(t))^2 dm$$ This question is different from this one: Is angular momentum conserved in all possible axis of rotation (given no external torque)? Because there I got an answer in reference to a stationary frame, and here I'm asking about a moving frame of reference. Answer: You need to be specific about what point or axis you are taking your angular momentum about. You must remember that the equation $$ {\boldsymbol \tau}= \frac{d{\bf L}}{dt} $$ is not generally true. Here $$ \tau= \sum_i ({\bf r}_i- {\bf R})\times {\bf F}_i $$ is the torque about point ${\bf R}$ and $$ {\bf L} =\sum_i ({\bf r}_i- {\bf R})\times m_i \dot {\bf r}_i $$ is the angular momentum about ${\bf R}$. Applied torque only equals the rate of change of angular momentum when $$ \dot {\bf R}\times \sum_i m_i \dot {\bf r}_i=0. $$ The usual cases of this are if ${\bf R}$ satisfies one of these conditions: 1) ${\bf R}$ is stationary 2) ${\bf R}$ is the center of mass 3) $\dot {\bf R}$ is parallel to the velocity of the center of mass
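The extra term $\dot{\bf R}\times \sum_i m_i \dot{\bf r}_i$ is easy to see numerically. A sketch with NumPy: a single force-free particle (so the torque is zero) and a reference point moving with constant velocity (all numerical values are made up):

```python
import numpy as np

m = 2.0
v = np.array([1.0, 0.5, 0.0])    # particle velocity (constant: no forces, zero torque)
u = np.array([0.0, 3.0, 0.0])    # velocity of the reference point R(t)
r0 = np.array([0.0, 0.0, 0.0])   # particle position at t = 0
R0 = np.array([1.0, 0.0, 0.0])   # reference point at t = 0

def L(t):
    """Angular momentum about the moving point R(t) = R0 + u t."""
    r = r0 + v * t
    R = R0 + u * t
    return np.cross(r - R, m * v)

dt = 1e-6
dL_dt = (L(dt) - L(0.0)) / dt
# torque is zero, yet dL/dt = -u x (m v) is nonzero: the torque equation fails
# for a moving point unless one of the three conditions in the answer holds
assert np.allclose(dL_dt, -np.cross(u, m * v), atol=1e-6)
print(dL_dt)
```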
{ "domain": "physics.stackexchange", "id": 67405, "tags": "newtonian-mechanics, angular-momentum, rotational-dynamics, conservation-laws" }
Check if two strings are permutations of one another
Question: I am working my way through "Cracking the coding interview" problems and have finished implementing this question: "Given two strings, write a method to decide if one is a permutation of the other". Can anyone give me any feedback on my implementation regarding efficiency or improvements? public static boolean containsPremutation(String firstString, String secondString){ //Checking if either string is empty. if(firstString.isEmpty() || secondString.isEmpty()) return false; //Checking if one string is larger than the other. if(firstString.length() > secondString.length() || secondString.length() > firstString.length()) return false; //First convert the Strings into char[] char [] charFirstString = firstString.toCharArray(); char [] charSecondString = secondString.toCharArray(); //Secondly, alphabetize/sort the char arrays Arrays.sort(charFirstString); Arrays.sort(charSecondString); //Thirdly, create new Strings out of the sorted char arrays String alphaFirstString = new String(charFirstString); String alphaSecondString = new String(charSecondString); //Now you can begin comparing each char in the Strings. // Begin iterating at the same char and if they are not the same char return false //otherwise continue iterating. for (int i = 0; i < alphaFirstString.length(); i++){ if(alphaFirstString.charAt(i) != alphaSecondString.charAt(i)) return false; } return true; Answer: This question has used the word "permutation" for two strings, and in standard English what that's asking for is "is one the anagram of the other"? Using "anagram" as a search would help you find a lot of other information on this problem, especially on Code Review. Your solution is significantly over-engineered. You have broken down the problem into too many steps, and you've missed an opportunity to show how you would avoid code duplication.
Let's break down your code into its stages: Check each string is valid input Check the strings are the same length Extract the characters from each string Sort each string's characters Create a new String from each input's sorted characters. Iterate over each of the sorted string's characters and compare to the other string. Return the match status Now, let's strip out the redundant parts... for example, you don't need to convert the sorted characters back to a string only to loop over each character again.... get rid of stage 5 and convert stage 6 to operate over the sorted array. Also, stage 1, that's a bug. Two empty strings are anagrams of each other... agreed? Get rid of that check. Now, what's left is two identical operations performed on two different inputs, which are then compared, so extract the common logic into a function (Code reuse is a good thing): private static final char[] sortedChars(String input) { char [] chars = input.toCharArray(); Arrays.sort(chars); return chars; } Then, your code looks like: public static boolean containsPremutation(String firstString, String secondString){ char[] firstChars = sortedChars(firstString); char[] secondChars = sortedChars(secondString); .... } Now what? Instead of converting them to strings, we can use the core library's Arrays class to do the comparisons for us: return Arrays.equals(firstChars, secondChars); The full code would be: private static final char[] sortedChars(String input) { char [] chars = input.toCharArray(); Arrays.sort(chars); return chars; } public static boolean containsPremutation(String firstString, String secondString){ char[] firstChars = sortedChars(firstString); char[] secondChars = sortedChars(secondString); return Arrays.equals(firstChars, secondChars); } Looking at the code that way, you can see other issues too... generally, "Hungarian Notation" is frowned on in Java Code Style - don't have String appended to variable names like firstString and secondString ...
We know that they are strings. Do we even need the variables in the contains... method? Could it just be: public static boolean containsPremutation(String first, String second){ return Arrays.equals(sortedChars(first), sortedChars(second)); } As mentioned in a comment, it's Permutation and not Premutation.
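As a side note, the sort-and-compare approach is O(n log n); the same multiset comparison can be done in O(n) by counting characters. A language-agnostic sketch of the counting approach, written here in Python (the function name is my own):

```python
from collections import Counter

def is_permutation(first, second):
    """Two strings are permutations of each other iff their
    character multisets (counts per character) are equal."""
    if len(first) != len(second):   # cheap early exit, mirrors the length check
        return False
    return Counter(first) == Counter(second)

print(is_permutation("listen", "silent"))  # True
print(is_permutation("", ""))              # True: empty strings are anagrams
print(is_permutation("ab", "abb"))         # False
```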
{ "domain": "codereview.stackexchange", "id": 24567, "tags": "java, array, interview-questions" }
Calculating the power of a signal
Question: I'm trying to calculate the power of a signal, and my tutor has given me this formula to do it. I've spent the past while building a program, and now that the foundations are there it's time to implement the maths side of it. The problem is I can't actually read it. Would someone be able to transcribe it and explain it to me? Answer: As @jojek has already said in the comments, the formula reads $$ P_\mathrm{x} = \dfrac{1}{T}\sum_{t=1}^{T}x^2(t) $$ As $t$ usually denotes continuous time I find this formulation a little bit odd. What your tutor probably actually meant is $$ P_\mathrm{x} = \dfrac{1}{T}\int_{t=0}^{T}x^2(t) \mathrm{d}t $$ It calculates the average power of the (continuous-time) signal $x(t)$ of length $T$. In computer programs we're dealing with discrete-time (and discrete-valued) signals, of course, and the average power of the digital signal $x_n$ is given by $$ \tilde P_\mathrm{x} = \dfrac{1}{N}\sum_{n=1}^{N}x_n^2, $$ where $N$ is the length of $x_n$ in samples.
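The discrete formula is one line of code. A sketch with NumPy, using a unit-amplitude sine sampled over exactly one period, whose average power should come out as $A^2/2 = 0.5$:

```python
import numpy as np

N = 1000
n = np.arange(N)
x = np.sin(2 * np.pi * n / N)   # one full period of a unit-amplitude sine

P = np.mean(x ** 2)             # (1/N) * sum_n x_n^2
print(P)                        # ~0.5, the textbook average power of a sine
```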
{ "domain": "dsp.stackexchange", "id": 2425, "tags": "signal-analysis, sound" }
“Add two numbers given in reverse order from a linked list”
Question: Original Question Problem Description You are given two non-empty linked lists representing two non-negative integers. The digits are stored in reverse order and each of their nodes contain a single digit. Add the two numbers and return it as a linked list. You may assume the two numbers do not contain any leading zero, except the number 0 itself. Link to LeetCode. Only code submitted to LeetCode is in AddTwoNumbersHelper and AddTwoNumbers. I've written this so that anyone can compile and run this program on their machines with: g++ -std=c++11 -Wall -Wextra -Werror Main.cpp -o main ; ./main I'm looking for feedback on this LeetCode question after modifying the code from feedback in my Original Question. Here is what has changed: The solution now uses recursion The main logic is not spaghetti code anymore Dealt with dangling pointers What I'd like some feedback on I'm still a bit unclear on the best practice for how to handle the namespace and usings, as well as how to find the time and space complexity for this. 
#include <cstddef> #include <iostream> #include <ostream> #include <stddef.h> #include <stdlib.h> using std::cout; using std::endl; using std::ostream; using std::string; struct Node { int val; Node *next; Node(int val) : val(val), next(NULL) {} void PrintAllNodes() { Node *current = new Node(0); current = this; std::string nodeString = "LinkedList: "; int val = 0; while(current != NULL) { val = current->val; nodeString = nodeString + " -> ( " + std::to_string(val) + " ) "; current = current->next; } std::cout << nodeString << '\n'; } void Append(int i) { if (this->next == NULL) { Node *n = new Node(i); this->next = n; } else { this->next->Append(i); } } }; class Solution { public: Node *AddTwoNumbers(Node *l1, Node *l2); private: Node *AddTwoNumbersHelper(Node *l1, Node *l2, int sum, int carry, Node *current, Node *head); }; Node *Solution::AddTwoNumbers(Node *l1, Node *l2) { Node *head = new Node(0); Node *current = head; int sum = 0; int carry = 0; return AddTwoNumbersHelper(l1, l2, sum, carry, current, head); } Node *Solution::AddTwoNumbersHelper(Node *l1, Node *l2, int sum, int carry, Node *current, Node *head) { if (l1 == NULL && l2 == NULL) { head = head->next; return head; } sum = 0; if (l1 == NULL) { sum = l2->val + carry; } else if (l2 == NULL) { sum = l1->val + carry; } else if (l1 != NULL && l2 != NULL) { sum = l1->val + l2->val + carry; } if (sum >= 10) { carry = sum / 10; sum -= 10; } else { carry = 0; } Node *next = new Node(sum); current->next = next; if (l1 == NULL) { return AddTwoNumbersHelper(l1, l2->next, sum, carry, current->next, head); } else if (l2 == NULL) { return AddTwoNumbersHelper(l1->next, l2, sum, carry, current->next, head); } else if (l1 != NULL && l2 != NULL) { return AddTwoNumbersHelper(l1->next, l2->next, sum, carry, current->next, head); } return head; } /** * Input: (2 -> 4 -> 3) + (5 -> 6 -> 4) * Output: 7 -> 0 -> 8 * Explanation: 342 + 465 = 807. 
*/ void ProveBasicCase() { cout << "\n\nBasic case\n"; Solution s; Node *l1 = new Node(2); l1->Append(4); l1->Append(3); Node *l2 = new Node(5); l2->Append(6); l2->Append(4); Node *n = new Node(0); n = s.AddTwoNumbers(l1, l2); n->PrintAllNodes(); } /** * Input: (2 -> 4 -> 3) + (5 -> 6) * Output: 7 -> 0 -> 4 * Explanation: 342 + 65 = 407. */ void ProveUnEqualListSize() { cout << "\n\nUneven List sizes\n"; Solution s; Node *l1 = new Node(2); l1->Append(4); l1->Append(3); Node *l2 = new Node(5); l2->Append(6); Node *n = new Node(0); n = s.AddTwoNumbers(l1, l2); n->PrintAllNodes(); } /** * Input: (9) + (1 -> 9 -> 9 -> 9 -> 8 -> 9 -> 9) * Output: 0 -> 0 -> 0 -> 0 -> 9 -> 9 -> 9 * Explanation: 9 + 9989991 = 9990000 */ void ProveDoubleCarry() { cout << "\n\nDouble Carry\n"; Solution s; Node *l1 = new Node(9); Node *l2 = new Node(1); l2->Append(9); l2->Append(9); l2->Append(9); l2->Append(8); l2->Append(9); l2->Append(9); Node *n = new Node(0); n = s.AddTwoNumbers(l1, l2); n->PrintAllNodes(); } int main() { cout << "mr.robot prgm running...\n"; ProveBasicCase(); ProveUnEqualListSize(); ProveDoubleCarry(); return 0; } Answer: #include <cstddef> #include <iostream> #include <ostream> #include <stddef.h> #include <stdlib.h> You've got some unnecessary includes here. <cstddef> and <stddef.h> are the C++ and C versions of the same content. You should be using the C++ version (<cstddef>) and not the C version. Unless you are targeting C++03 and earlier or require the std::ostream definitions, you don't need to include <ostream> with <iostream>. With C++11, <iostream> is guaranteed to include <istream>/<ostream> as needed for the global objects std::cout and friends. The only other use for <ostream> is std::endl, which you don't use anyway. using std::cout; using std::endl; using std::ostream; using std::string; Neither endl or ostream appear beyond these using-declarations. string is used once and is qualified with std::. You should remove those. 
Avoid using declarations (using std::cout) and directives (using namespace std;) at the global scope of header files. All subsequent lookups in translation units that include your header will pollute the global namespace with those symbols, leading to compilation errors due to ambiguous usage, maintenance, and reuse problems. Namespaces help separate identifiers and make interfaces explicit. If you want to opt into argument dependent lookup, use these declarations/directives in the smallest scope possible. struct Node { int val; Node *next; Node(int val) : val(val), next(NULL) {} void PrintAllNodes() {/*...*/} void Append(int i) {/*...*/} }; Rather than extending through modification a class you do not own (ListNode from LeetCode), you can extend through composition. Write an iterator adaptor to traverse the provided forward list that inspects elements. Write another iterator adaptor to insert after your current node. With these two iterators, you can utilize these C structures with the C++ standard library. In your own code, use nullptr instead of NULL. nullptr is a well-specified and very restrictive type, which isn't vulnerable to the type deduction errors that NULL is. Node *current = new Node(0); Avoid new/delete. You create a new node, but you do not delete it. Who is responsible for cleaning up after this node? Node *current = new Node(0); current = this; /* ... */ while(current != NULL) { val = current->val; nodeString = nodeString + " -> ( " + std::to_string(val) + " ) "; current = current->next; } Don't introduce a variable until you need it and keep the scope of variables as narrow as possible. When there is an obvious loop variable, prefer a for-statement. For long lists, you are likely to have multiple reallocations of nodeString. Since you are just constructing the stream to be streamed out, consider bypassing the string construction and just stream the pieces directly. 
std::cout << "LinkedList: -> ( " << val << " ) "; for (Node* current = next; current != nullptr; current = current->next) { std::cout << " -> ( " << current->val << " ) "; } std::cout << '\n'; Node *Solution::AddTwoNumbers(Node *l1, Node *l2) { Node *head = new Node(0); Every time you add two lists together, you leak this head node. Node *Solution::AddTwoNumbersHelper(Node *l1, Node *l2, int sum, int carry, Node *current, Node *head) { if (l1 == NULL && l2 == NULL) { head = head->next; return head; } Here, head is advanced and returned. Nobody cleans up the head that was incremented from. What happens if both lists are the same length and there is a carried 1 after summing both lists? int sum = 0; There is no reason for sum to exist at this point. Declaring sum where you reset it in the helper allows it to be used as scratch space. if (sum >= 10) { carry = sum / 10; sum -= 10; } else { carry = 0; } Clang/GCC converts the division to a multiplication, but you can reduce the strength of that operation further if you think about how digits work when added together. Consider the maximum digit (9) and add it with another maximum digit (9). The result (18) will never carry more than one, even if you added a carry from a previous addition (18 + 1 = 19). With that knowledge, we can remove the division and set carry to be whether the digit overflowed. carry = sum > 9; if (carry) { sum -= 10; } carry will either be 1 (overflowed) or 0.
#define DOCTEST_IMPLEMENT_WITH_MAIN #include <doctest.h> struct NodeList { /* ... */ }; template <typename... Args> NodeList* make_node_list(Args... args) { /*...*/ } NodeList* sum_lists(NodeList* list1, NodeList* list2) { /* ... */ } TEST_CASE("Basic case") { auto l1 = make_node_list(2, 4, 3); auto l2 = make_node_list(5, 6, 4); auto l3 = sum_lists(l1, l2); CHECK(l3 != nullptr); CHECK(l3->next != nullptr); CHECK(l3->next->next != nullptr); CHECK(l3->next->next->next == nullptr); CHECK(l3->val == 7); CHECK(l3->next->val == 0); CHECK(l3->next->next->val == 8); } I'm still a bit unclear on the best practice for how to handle the namespace and usings Use them when you want to opt into ADL or want to use literals. Keep their uses limited to the narrowest scope required and never in the global scope of a header file (or file included by a header). Read more here. as well as how to find the time and space complexity for this. \$\mathcal{O}(m+n)\$ time where \$m\$ and \$n\$ are the lengths of each list. Space will just be the max of \$m\$ and \$n\$. An iterative way to think about it is, while list1 is not exhausted if list2 is exhausted sum list1 elements w/carry into result when list1 is exhausted, append carry if still carrying return result sum current element from list1, list2, & carry into result sum list2 w/carry into result when list2 is exhausted, append carry if still carrying return result
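The iterative outline at the end of the review is language-agnostic. Here is a sketch of the same carry logic in Python, using plain lists of reversed digits instead of linked nodes (the function name is my own):

```python
def add_reversed_digits(a, b):
    """Sum two non-negative numbers stored as reversed digit lists."""
    result, carry, i = [], 0, 0
    while i < len(a) or i < len(b) or carry:
        s = carry
        if i < len(a):
            s += a[i]
        if i < len(b):
            s += b[i]
        carry, digit = divmod(s, 10)   # carry is always 0 or 1, as the review notes
        result.append(digit)
        i += 1
    return result

print(add_reversed_digits([2, 4, 3], [5, 6, 4]))        # 342 + 465 = 807
print(add_reversed_digits([9], [1, 9, 9, 9, 8, 9, 9]))  # 9 + 9989991 = 9990000
```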
{ "domain": "codereview.stackexchange", "id": 31802, "tags": "c++, programming-challenge, linked-list" }
Is fission/fusion to iron the most efficient way to convert mass to energy?
Question: Is fission/fusion of any element to iron-56 (or nickel-62?) the best way to convert mass to energy, that doesn't involve black holes? In other words, will we be always limited to convert only about 1% of the mass available to energy? Are there other ways (using strangelets? antimatter?) to go beyond that limit? I exclude black holes as, as I understand, you can only extract a finite amount of energy by reducing their spin, so they are not viable for energy production on a cosmological scale. Answer: Matter-antimatter annihilation, such as an electron annihilating with a positron to form two high-energy photons, can convert 100% of the mass into radiation. So fission and fusion are far from the most efficient ways to convert mass into other forms of energy. Unfortunately, the universe appears to contain almost no antimatter.
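The "about 1%" figure in the question can be recovered from rough nuclear numbers. A sketch (the ~8.8 MeV binding energy per nucleon for iron-group nuclei and the ~939 MeV nucleon rest energy are approximate textbook values, my assumptions):

```python
binding_per_nucleon_mev = 8.8    # approx. binding energy per nucleon near Fe-56/Ni-62
nucleon_rest_energy_mev = 939.0  # approx. nucleon rest energy (m c^2)

# fraction of rest mass released fusing free nucleons all the way to iron:
fraction = binding_per_nucleon_mev / nucleon_rest_energy_mev
print(f"{fraction:.2%}")         # just under 1%, vs. 100% for annihilation
```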
{ "domain": "physics.stackexchange", "id": 59095, "tags": "nuclear-physics, mass-energy, antimatter, fusion" }
Jordan-Wigner Transformations on fermionic system
Question: I've been trying to use Jordan-Wigner Transformations on a given fermionic Hamiltonian. The given Hamiltonian is: $$ \hat{H}= -\sum_{m=1}^{N}(J_z \hat{S}_{m}^{z} \hat{S}_{m+1}^{z} + \frac{J_{\perp}}{2}(\hat{S}_{m}^{+}\hat{S}_{m+1}^{-}+\hat{S}_{m}^{-}\hat{S}_{m+1}^{+}))$$ and the form of the Jordan-Wigner Transformations given are: $$\hat{S}_{m}^{+}=\hat{c}_{m}^{\dagger} e^{i \pi \sum_{j<m}\hat{n}_{j}} $$ $$\hat{S}_{m}^{-}= e^{-i \pi \sum_{j<m}\hat{n}_{j}} \hat{c}_{m}$$ $$\hat{S}_{m}^{z}=\hat{c}_{m}^{\dagger}\hat{c}_{m} - \frac{1}{2}$$ The answer that I manage to get/how far I get in my calculation is: $$\hat{H}= -\sum_{m=1}^{N}(J_z (\frac{1}{4} - \frac{1}{2}\hat{c}_{m}^{\dagger}\hat{c}_{m} - \frac{1}{2}\hat{c}_{m+1}^{\dagger}\hat{c}_{m+1} +\hat{c}_{m}^{\dagger}\hat{c}_{m}\hat{c}_{m+1}^{\dagger}\hat{c}_{m+1}) + \frac{J_{\perp}}{2}(\hat{c}_{m}^{\dagger}\hat{c}_{m+1} + \hat{c}_{m}\hat{c}_{m+1}^{\dagger})).$$ However, the answer given in the textbook is: $$\hat{H}= -\sum_{m=1}^{N}(J_z (\frac{1}{4} - \hat{c}_{m}^{\dagger}\hat{c}_{m} +\hat{c}_{m}^{\dagger}\hat{c}_{m}\hat{c}_{m+1}^{\dagger}\hat{c}_{m+1}) + \frac{J_{\perp}}{2}(\hat{c}_{m}^{\dagger}\hat{c}_{m+1} + \hat{c}_{m+1}^{\dagger}\hat{c}_{m})).$$ Is anyone able to help me make the last step from my answer to the given answer? Answer: For the $J_z$ part you're basically there already. Just note that with periodic boundary conditions $c^\dagger_{N+1}=c^\dagger_1$ and you're free to change the summation index, so $$ \sum_{m=1}^{N} \hat{c}_{m+1}^{\dagger}\hat{c}_{m+1} = \sum_{m=1}^{N} \hat{c}_{m}^{\dagger}\hat{c}_{m}. $$ For the $J_\perp$ part you've made a sign error for one of the terms. A good way for you to locate the error might be to Jordan-Wigner-transform both $S_{m+1}^+S_m^-$ and $S_m^-S_{m+1}^+$. Obviously these should give the same result, but you may find one more straightforward than the other.
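One way to build confidence in manipulations like these is to construct the Jordan-Wigner operators explicitly for a tiny chain and check them numerically. A NumPy sketch for N = 3 (the matrix convention for $\sigma^-$ is my choice; with the number operator $n = \operatorname{diag}(0,1)$, the string factor $e^{i\pi n_j}$ is just the Pauli $Z$ on site $j$):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])                  # e^{i pi n_j} on one site, n = diag(0, 1)
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma^-: annihilates the occupied state

N = 3  # sites; Hilbert space dimension 2^N

def c(m):
    """Jordan-Wigner annihilation operator on site m (0-indexed):
    a string of Z's on sites j < m, then sigma^-, then identities."""
    ops = [Z] * m + [sm] + [I2] * (N - m - 1)
    return reduce(np.kron, ops)

cs = [c(m) for m in range(N)]
for a in range(N):
    for b in range(N):
        # canonical fermionic anticommutation relations
        acomm = cs[a] @ cs[b].conj().T + cs[b].conj().T @ cs[a]
        expected = np.eye(2 ** N) if a == b else np.zeros((2 ** N, 2 ** N))
        assert np.allclose(acomm, expected)
        assert np.allclose(cs[a] @ cs[b] + cs[b] @ cs[a], 0.0)
print("JW operators satisfy {c_a, c_b^dag} = delta_ab and {c_a, c_b} = 0")
```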
{ "domain": "physics.stackexchange", "id": 84234, "tags": "hamiltonian, fermions, many-body, spin-models, spin-chains" }
How can I show that applying Hamiltonian dynamics recovers the original wave equation?
Question: Problem Consider the wave equation: $$ \frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}\tag{1}$$ with $ u = u(t, x)$ over domain $x \in [0, l] = \Omega$. This can be represented as a Hamiltonian system with generalized coordinates $p = \dot{u}$ and $q = u$. Then the Hamiltonian is defined as: $$ \mathcal{H}(p, q) = \int_{\Omega}\left[ \frac{1}{2} p^2 + \frac{1}{2}c^2 \left(\frac{\partial q}{\partial x}\right)^2\right] \; dx \tag{2}$$ with dynamics $$\dot{q} = \frac{\delta \mathcal{H}}{\delta p }\quad\text{and}\quad\dot{p} = - \frac{\delta \mathcal{H}}{\delta q }.\tag{3}$$ (See 6.1 in https://arxiv.org/abs/1407.6118) I am trying to recover the original form (1) using the Hamiltonian formulation above. Here is what I've tried: I have a background in mathematics, but less so in physics (other than an intro sequence that did not cover Hamiltonian mechanics). I am currently a graduate student trying to understand this derivation for my research. I originally started by treating the dynamics for $\dot{q}$ and $\dot{p}$ as standard partial derivatives, but ran into a couple of issues; at which point I tried using the concept of the total variation, and was starting from this definition $$\delta \mathcal{H} = \frac{\partial \mathcal{H}}{\partial p} \delta p + \frac{\partial \mathcal{H}}{\partial q} \delta q. \tag{4}$$ But I seem to be missing something and am getting stuck: the integral over $\Omega$ is still present in my derivation so it seems like the dynamics of $\dot{q}$ require integrating over the entire domain $\Omega$ - this seems wrong. When taking the derivative with respect to $q$, I obtain the term $\frac{\partial}{\partial q}\left( \left(\frac{\partial q}{\partial x}\right)^2\right)$ which I think should yield $ 2\left(\frac{\partial q}{\partial x}\right) \cdot \frac{\partial}{\partial q}\left(\frac{\partial q}{\partial x}\right)$. 
However I don't know exactly how to evaluate $\frac{\partial}{\partial q}\left(\frac{\partial q}{\partial x}\right)$ and this doesn't seem to lead to the right result. Assistance with showing the derivation, or even just recommended readings/references would be greatly appreciated. Answer: $$ \frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}\tag{1}$$ with $ u = u(t, x)$ over domain $x \in [0, l] = \Omega$. This can be represented as a Hamiltonian system with generalized coordinates $p = \dot{u}$ and $q = u$. the Hamiltonian is defined as: $$ \mathcal{H}(p, q) = \int_{\Omega}\left[ \frac{1}{2} p^2 + \frac{1}{2}c^2 \left(\frac{\partial q}{\partial x}\right)^2\right] \; dx \tag{2}$$ with dynamics $$\dot{q} = \frac{\delta \mathcal{H}}{\delta p }\quad\text{and}\quad\dot{p} = - \frac{\delta \mathcal{H}}{\delta q }.\tag{3}$$ I am trying to recover the original form (1) using the Hamiltonian formulation above. To orient yourself, recall the standard Hamiltonian equations of motion for a system with $N$ coordinates labeled by index $i$: $$ \frac{\partial H}{\partial p_i} = \dot q_i $$ $$ \frac{\partial H}{\partial q_i} = -\dot p_i\;. $$ When we start to treat the index $i$ as continuous rather than discrete, we switch to a continuous label $x$ and we switch from a sum over $i$ to an integral over $x$. We also switch to saying that $H$ is a functional of $q(x)$ rather than a function of the $q_i$, etc. The continuum generalization of Hamilton's equations of motion are: $$ \frac{\partial H}{\partial p(x)} = \dot q(x) $$ and $$ \frac{\partial H}{\partial q(x)} = -\dot p(x)\;, $$ where I am going to keep using the $\partial$ notation rather than the $\delta$ notation, even for functional derivative, but you can use whatever notation you like. 
When we take the "functional derivative" we consider what happens to the functional $H$ when the function $q(x)$ is changed to $q(x)+\delta q(x)$ and we define the functional derivative $\frac{\partial H}{\partial q(x)}$ as: $$ \delta H \equiv \int \frac{\partial H}{\partial q(x)}\delta q(x) dx + O(\delta q^2) \tag{5} $$ For example, in your Hamiltonian, let $q \to q+\delta q$. Then $$ H \to H + \int dx c^2\frac{\partial q}{\partial x}\frac{\partial \delta q}{\partial x} + O(\delta q^2) \tag{6} $$ $$ = H - \int dx c^2\frac{\partial^2 q}{\partial x^2}\delta q + O(\delta q^2)\;, \tag{7} $$ which shows that the functional derivative wrt q(x) is: $$ \frac{\partial H}{\partial q(x)} = -c^2\frac{\partial^2 q}{\partial x^2} $$ Similarly: $$ \frac{\partial H}{\partial p(x)} = p(x) \equiv \dot q(x) $$ Then, also using Hamilton's canonical equations, we see that: $$ -c^2\frac{\partial^2 q}{\partial x^2} = \frac{\partial H}{\partial q(x)} = -\dot p(x) = -\ddot q(x) $$ Or, cancelling the minus sign and switching back to $u=q$: $$ c^2 \frac{\partial^2 u}{\partial x^2} = \ddot u $$
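This result can also be checked numerically (a sketch, not part of the original argument): discretize $q(x)$ on a grid, take a finite-difference derivative of the discretized $H$ with respect to a single grid value, and compare with $-c^2\,\partial^2 q/\partial x^2$:

```python
import numpy as np

# Discretize q(x) on [0, l]; here only the potential part of H matters.
c, l, n = 2.0, 1.0, 400
x = np.linspace(0.0, l, n)
dx = x[1] - x[0]
q = np.sin(2 * np.pi * x)  # test profile with q(0) = q(l) = 0

def H(q):
    # 0.5 * c^2 * integral of (dq/dx)^2 dx, via forward differences
    return 0.5 * c**2 * np.sum(np.diff(q)**2 / dx)

# numerical functional derivative at an interior grid point j:
# dH/dq(x_j) ~ (1 / dx) * (partial H / partial q_j)
j, eps = n // 3, 1e-5
e = np.zeros(n)
e[j] = eps
num = (H(q + e) - H(q - e)) / (2 * eps * dx)

# analytic functional derivative: -c^2 * d^2 q / dx^2 at x_j
ana = -c**2 * (q[j + 1] - 2 * q[j] + q[j - 1]) / dx**2
print(num, ana)  # the two agree up to rounding
```

The factor $1/\Delta x$ converts the partial derivative with respect to the single grid value $q_j$ into the functional derivative density defined in (5).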
{ "domain": "physics.stackexchange", "id": 88737, "tags": "homework-and-exercises, waves, hamiltonian-formalism, hamiltonian, variational-calculus" }
Geometric optics- lenses
Question: I was wondering why light rays converge at a biconvex lens and diverge at a biconcave lens. Does it relate to how light rays behave in a convex or a concave mirror? Also, is it possible to predict or sketch how light rays would behave using the angle of incidence and the angle of refraction? Answer: Here is what I think. At P1, by Snell's law, since the glass is a denser medium (as compared to air), the velocity of light is less than that in air. Therefore, the light ray deflects towards the normal. Afterwards, at P2, since the air is a rarer medium than glass, by Snell's law, the light gets deflected away from the normal. In the same way you can explain the behaviour at a biconcave lens. Hope this helps :)
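The bending at each surface follows directly from Snell's law, $n_1 \sin\theta_1 = n_2 \sin\theta_2$; a small numerical sketch (the refractive indices are illustrative values):

```python
import math

def refract(theta_i_deg, n1, n2):
    # Snell's law: n1 * sin(theta_i) = n2 * sin(theta_t)
    s = n1 * math.sin(math.radians(theta_i_deg)) / n2
    return math.degrees(math.asin(s))

# entering glass (denser medium): the ray bends toward the normal
theta_glass = refract(30.0, 1.0, 1.5)
print(theta_glass)   # about 19.47 degrees, i.e. less than 30

# leaving glass (rarer medium): the ray bends away from the normal
theta_air = refract(theta_glass, 1.5, 1.0)
print(theta_air)     # back to about 30 degrees
```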
{ "domain": "physics.stackexchange", "id": 40411, "tags": "reflection, refraction, geometric-optics, lenses" }
Sampling numbers from a weighted set that sum to constant value
Question: So I have a multi-set of positive integers $S = \{n_1, n_2, \dots\}$ with associated weights $W = \{w_1, w_2, \dots\}$. I want to sample some numbers, without replacement, from $S$ according to weights $W$ such that the sampled numbers sum to a given constant positive integer $k$. The naive way to do it, as I understand it, is to find all combinations of numbers from $S$ that sum to $k$, add up their weights, and choose one of the combinations according to the sum of weights. But doing that is $O(|S|^k)$, which is unacceptably large. What I'm doing right now is a weighted shuffle of $S$ according to weights $W$, picking numbers in the obtained order until the requirement is fulfilled. If, for example, the order obtained is $3, 2, 3, 2, 1$ and $k = 4$, then I'll pick $3$, ignore $2$ (as picking it would exceed $k = 4$) and pick $1$. This approach turns out to be $O(n\log n)$. Does anyone have a better idea for doing this sampling in a way that is faster than exponential but more robust than what I'm doing now? Answer: There's a close connection between counting the number of solutions and randomly sampling from the set of solutions. Any time you need to randomly sample, it's often helpful to ask yourself how you'd count the number of solutions, and then you can often turn that into a way to randomly sample. So, one approach is to use dynamic programming to count the number of ways to select numbers from $S$ that sum to $k$ (weighted by their weights $W$), then use that to help you sample from this space. Let me spell out the details more. Define $$f(S,W,k) = \sum_I \prod_{i \in I} w_i,$$ where $I$ ranges over all sets of indices such that $\sum_{i \in I} n_i = k$.
Notice that if all the weights were 1, then $f(S,W,k)$ would count the number of ways to select these numbers; in general, with arbitrary weights, you can think of $f(S,W,k)$ as a weighted count, where you sum up the weights of each candidate combination, and the weight of a combination is the product of the weights of the numbers selected. You can compute $f(S,W,k)$ using dynamic programming, using the recurrence $$f(\{n_1,\dots,n_j\},W,k) = f(\{n_1,\dots,n_{j-1}\},W,k) + w_j f(\{n_1,\dots,n_{j-1}\},W,k-n_j).$$ Now you want to sample a set $I$ with probability proportional to $\prod_{i \in I} w_i$. This can be done, using your algorithm for computing $f(S,W,k)$. In particular, if $S=\{n_1,\dots,n_m\}$, then flip a coin with heads probability $${f(\{n_1,\dots,n_{m-1}\},W,k) \over f(\{n_1,\dots,n_m\},W,k)}.$$ If it is heads, then don't include $n_m$ in the solution: instead, recursively sample some numbers from $\{n_1,\dots,n_{m-1}\}$ that sum to $k$, and output that as the random sample. If it is tails, then do include $n_m$ in the solution: recursively sample some numbers from $\{n_1,\dots,n_{m-1}\}$ that sum to $k-n_m$, then add $n_m$ to that combination, and output that as the random sample. You can see that this induces the correct probability distribution on samples. Overall, the running time will be comparable to the time to compute $f(S,W,k)$, or in other words, the running time will be $O(|S| \cdot k)$. This is much better than the $O(|S|^k)$ naive solution. It can still be very slow if $k$ is enormous, but if $k$ is not too big, it might be perfectly satisfactory.
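This counting-then-sampling scheme fits in a few lines of Python (a sketch; a memoized recursion replaces the explicit DP table, and all names are illustrative):

```python
import random
from functools import lru_cache

def sample_subset(nums, weights, k, rng=random):
    """Sample an index set I with sum(nums[i] for i in I) == k, with
    probability proportional to the product of weights[i] over I."""

    @lru_cache(maxsize=None)
    def f(j, t):
        # weighted count of ways to choose indices among nums[:j] summing to t
        if t == 0:
            return 1.0
        if j == 0 or t < 0:
            return 0.0
        return f(j - 1, t) + weights[j - 1] * f(j - 1, t - nums[j - 1])

    if f(len(nums), k) == 0.0:
        raise ValueError("no subset of nums sums to k")

    chosen, t = [], k
    for j in range(len(nums), 0, -1):
        # exclude nums[j-1] with probability f(j-1, t) / f(j, t)
        if rng.random() * f(j, t) >= f(j - 1, t):
            chosen.append(j - 1)
            t -= nums[j - 1]
    return chosen

picks = sample_subset([1, 2, 3, 2], [1.0, 2.0, 1.0, 1.0], 4, random.Random(0))
print(sorted(picks))  # either [0, 2] (sum 1+3) or [1, 3] (sum 2+2)
```

Each distinct pair $(j, t)$ is computed once and cached, so the whole procedure costs $O(|S| \cdot k)$ as stated.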
{ "domain": "cs.stackexchange", "id": 13443, "tags": "algorithms, combinatorics, probability-theory, sampling" }
How Does Lead Block Radiation
Question: How is it that lead can block radiation, and why are things lead-lined? In the Indiana Jones 4 movie he climbs inside a lead-lined fridge and somehow survives the blast and radiation. Answer: Lead is used to block radiation because: It is very dense. This means that the number of interactions that a radiation particle will undergo over a fixed distance is higher, which causes the radiation to attenuate. It has a high proton number Z. This means that the charged radiation particles will scatter through large angles, also causing attenuation.
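Quantitatively, attenuation through a shield follows the Beer-Lambert law, $I = I_0 e^{-\mu x}$; a small sketch (the value of $\mu$ below is a rough illustrative figure for gamma rays around 1 MeV in lead, not a reference datum):

```python
import math

mu = 0.8  # approximate linear attenuation coefficient of lead, per cm (illustrative)
for x_cm in (1, 5, 10):
    frac = math.exp(-mu * x_cm)
    print(f"{x_cm:2d} cm of lead transmits {100 * frac:.2f}% of the gamma beam")
```

Even a modest thickness suppresses the beam by orders of magnitude, which is why lead lining is so common.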
{ "domain": "physics.stackexchange", "id": 15397, "tags": "nuclear-physics, radiation" }
What does a negative coefficient of determination mean for evaluating ridge regression?
Question: Judging by the negative result displayed by my ridge.score() I am guessing that I am doing something wrong. Maybe someone could point me in the right direction?

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Create a practice data set for exploring Ridge Regression
data_2 = np.array([[1, 2, 0], [3, 4, 1], [5, 6, 0], [1, 3, 1], [3, 5, 1], [1, 7, 0], [1, 8, 1]], dtype=np.float64)

# Separate X and Y
x_2 = data_2[:, [0, 1]]
y_2 = data_2[:, 2]

# Train Test Split
x_2_train, x_2_test, y_2_train, y_2_test = train_test_split(x_2, y_2, random_state=0)

# Scale the training data
scaler_2 = StandardScaler()
scaler_2.fit(x_2_train)
x_2_transformed = scaler_2.transform(x_2_train)

# Ridge Regression
ridge_2 = Ridge().fit(x_2_transformed, y_2_train)
x_2_test_scaled = scaler_2.transform(x_2_test)
ridge_2.score(x_2_test_scaled, y_2_test)

Output is: -4.47

EDIT: From reading the scikit-learn docs this value is the R$^2$ value. I guess the question is though, how do we interpret this? Answer: A negative value means you're getting a terrible fit - which makes sense if you create a test set that doesn't have the same distribution as the training set. From the sklearn documentation: The coefficient $R^2$ is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a $R^2$ score of 0.0.
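The quoted definition can be reproduced in a few lines, which makes the interpretation concrete: 1.0 for a perfect fit, 0.0 for the constant-mean predictor, and negative for anything worse than predicting the mean (a sketch):

```python
import numpy as np

def r2(y_true, y_pred):
    u = ((y_true - y_pred) ** 2).sum()           # residual sum of squares
    v = ((y_true - y_true.mean()) ** 2).sum()    # total sum of squares
    return 1 - u / v

y = np.array([0.0, 1.0, 0.0, 1.0])
print(r2(y, y))                                  # 1.0: perfect predictions
print(r2(y, np.full_like(y, y.mean())))          # 0.0: the constant-mean model
print(r2(y, np.array([1.0, 0.0, 1.0, 0.0])))     # -3.0: worse than the mean
```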
{ "domain": "datascience.stackexchange", "id": 4597, "tags": "machine-learning, scikit-learn, ridge-regression" }
Testing distance between characters in a string
Question: Here is another challenge from Coderbyte. I found this one challenging, although not quite as much as the previous two I've posted. Based on the feedback I received on my earlier posts, I structured this solution as a function. It is probably better than my previous solutions but still could be improved. In particular, I am wondering if there is a way to condense my while loops, perhaps by determining the a and b indices at the same time or maybe by building a helper function? It seems like there is redundancy there. Resetting the index variable strikes me as particularly clumsy or inelegant, but it had to happen somehow. I'm also not too happy with splitting the string and then joining the modified array to deal with the spaces. Any feedback on these issues or other suggestions to improve the code would be greatly appreciated. Here's the challenge, slightly modified: Take a str parameter and return true if there are any occurrences of characters "a" and "b" separated by exactly 3 places (i.e. "lane borrowed" would result in true because there are exactly three characters between a and b). Otherwise return the string false. (While I was posting this, I realized I had not accounted for spaces. My interpretation of the instructions is that spaces do not count as characters. I have updated my code accordingly.)
var str = prompt("Please enter a string: ").split("");

function test(str){
    var index = 0;
    var arrayA = [];
    var arrayB = [];
    while (index >= 0){ //removes spaces
        index = str.indexOf(" ");
        if (index >= 0) {
            str.splice(index, 1);
        }
    }
    str = str.join("");
    index = 0;
    while(str.indexOf("a", index) != -1){ //Identifies indices of "a"
        arrayA.push(str.indexOf("a", index));
        index = str.indexOf("a", index) + 1;
    }
    index = 0;
    while(str.indexOf("b", index) != -1){ //Identifies indices of "b"
        arrayB.push(str.indexOf("b", index));
        index = str.indexOf("b", index) + 1;
    }
    for(var i = 0; i < arrayA.length; i++){ //determines if any a's and b's are 3 characters apart
        for(var j = 0; j < arrayB.length; j++){
            if (Math.abs(arrayA[i] - arrayB[j]) === 3){
                return true;
            }
        }
    }
    return false;
}

console.log(test(str));

Here's a revised version:

function ABCheck(str) {
    str = str.split("");
    for (var i = 0; i < str.length; i++){
        if (str[i] === " "){
            str.splice(i, 1);
            console.log(str);
        }
    }
    for (var i = 0; i < str.length; i++){
        if (str[i] === "a" && (str[i-3] === "b" || str[i+3] === "b")){
            return true;
        }
    }
    return false;
}

Answer: I would recommend using a regular expression. Regex is designed for solving pattern matching and string manipulation problems. Although it can look esoteric and terse it is well worth the effort to learn since it is very powerful and fairly ubiquitous. It is very likely your programming language/shell/editor of choice supports a flavor of regex.

var str = prompt("Please enter a string: ");

function test(str){
    return /a\w{3}b/.test(str) || /b\w{3}a/.test(str);
}

console.log(test(str));
{ "domain": "codereview.stackexchange", "id": 4768, "tags": "javascript, strings, programming-challenge" }
Did Huygens understand light to be a transverse wave or a longitudinal wave?
Question: We have this source that claims Huygens "assumed light to be longitudinal", which contradicts this source which claims "Huygens believed that light was made up of waves vibrating up and down perpendicular to the direction of the wave propagation". I must admit, neither source is entirely accurate or reliable, so what exactly was Huygens formulation regarding the nature of light? Answer: Possibly interesting quote from the "Note by the translator" section (page ix) of "Treatise On Light" by Huygens, Christiaan https://archive.org/details/treatiseonlight031310mbp/page/n10/mode/1up (bolding mine) The Treatise on Light of Huygens has, however, withstood the test of time: and even now the exquisite skill with which he applied his conception of the propagation of waves of light to unravel the intricacies of the phenomena of the double refraction of crystals, and of the refraction of the atmosphere, will excite the admiration of the student of Optics. It is true that his wave theory was far from the complete doctrine as subsequently developed by Thomas Young and Augustin Fresnel, and belonged rather to geometrical than to physical Optics. If Huygens had no conception of transverse vibrations, of the principle of interference, or of the existence of the ordered sequence of waves in trains, he nevertheless attained to a remarkably clear understanding of the principles of wave-propagation; and his exposition of the subject marks an epoch in the treatment of Optical problems.
{ "domain": "physics.stackexchange", "id": 89419, "tags": "optics, waves, visible-light, history, huygens-principle" }
Are these collisions equivalent?
Question: Similar to the question of whether two cars with a velocity of 50 mph each colliding is the same as one car colliding with a wall at 100 mph, I was wondering if the same amount of energy is produced when hitting a stationary object at 50 mph as when hitting an object that's moving away from you at 20 mph with your velocity being 70 mph? Answer: Yes, you have to consider what the frame of reference is for the colliding objects. The relative velocity between the objects is given by $v = v_1 - v_2$ where $v_2$ is the velocity of the struck object and $v_1$ is the velocity of the car. In the first situation the relative velocity is $(50 - 0)\ \rm{mph}$. Thus the first object will hit the standing object at $50\ \rm{mph}$. In the second situation, the relative velocity is $(70 - 20)\ \rm{mph}$ so again the car will hit the moving object at $50\ \rm{mph}$. In both cases, the impact happens at the same relative velocity even though in one case the struck object is moving and the car is going faster. The acceleration of the objects is the same in both situations since they both change by $50\ \rm{mph}$ during the impact time with everything else kept equal. Edit: In the elastic collision equations, you must first set the velocities relative to each other. By setting the relative velocity to $50\ \rm{mph}$ in both equations, you will find that the collisions are the same. Note: Kinetic energy depends on the frame of reference. If the ground is the frame of reference, then the total energy in the first situation is $\frac 12\cdot 50^2 = 1250\ \rm{units}$. In the second situation the total energy is $\frac 12\cdot70^2 + \frac 12\cdot 20^2 = 2650\ \rm{units}$. Assuming equal masses and an elastic collision, in the first situation the object that gets hit will move at $50\ \rm{mph}$ relative to the ground. The amount of kinetic energy the object gained was $1250\ \rm{units}$ relative to the ground.
In the second situation, the impacted object now moves at $70\ \rm{mph}$, and gained $\frac 12\cdot70^2 - \frac 12\cdot20^2 = 2250\ \rm{units}$ of kinetic energy relative to the ground. From the perspective of the ground, the energy transferred to the object in the second situation is greater, but has no effect on the actual impact of the objects. The impact of the collision will still happen at the same relative velocity, so the objects in the collisions still experience the same change in momentum. The only difference is that in order to come to a stop after the collision, it will take more energy in the second situation than the first so this may impact what forces the objects experience as they come to stop on the ground (consider frictional forces on asphalt).
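The bookkeeping above can be verified with the standard one-dimensional elastic-collision formulas (a sketch assuming equal unit masses, for which the two velocities simply exchange):

```python
def elastic(v1, v2, m1=1.0, m2=1.0):
    # 1D elastic collision: post-collision velocities from
    # momentum and kinetic-energy conservation
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

# case 1: 50 mph into a stationary object
# case 2: 70 mph into an object receding at 20 mph
for v1, v2 in [(50.0, 0.0), (70.0, 20.0)]:
    u1, u2 = elastic(v1, v2)
    gained = 0.5 * u2**2 - 0.5 * v2**2  # KE gained by the struck object (unit mass)
    print(f"closing speed {v1 - v2}, separation speed {u2 - u1}, KE gained {gained}")
```

Both cases give a closing speed of 50 mph before and after, while the kinetic energy gained by the struck object (1250 vs 2250 units) differs, exactly as in the frame-of-reference discussion above.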
{ "domain": "physics.stackexchange", "id": 52955, "tags": "momentum, energy-conservation, conservation-laws, inertial-frames, collision" }
Synch problem with rosbag and rqt_plot
Question: Hi All, I'm building a pose estimator and to assess it I'm using a ground truth from a Vicon system. Both systems publish a PoseStamped message. When I rosbag play a dataset and I try to visualize it with rqt_plot, it seems like the two signals have different velocities. At the beginning they are synchronized but since one of the two is "faster" than the other, the effect is that it seems to stretch with respect to the other. The two sources publish with a different frequency (Vicon is 100 Hz, the other is like 200 Hz) but I thought that ROS was taking care of the synch between messages. Furthermore there are absolute timestamps! Any ideas? I'm running Hydro with Ubuntu 12.04. I tried different graphic backends but nothing changes. EDIT: even using simulation time with rosparam set /use_sim_time true doesn't seem to work (actually the plot with PyQtGraph has a line that restarts from the beginning, so maybe the sim time param is not effective). EDIT 2: we found some synchronization problems inside one of the two computers. We corrected them using Chrony, but the stretching problem still remains. What we actually discovered is that during the real execution the two topics are perfectly synchronized (we checked during the execution with side-by-side rostopic echo terminals for both topics). Instead, when we do a rosbag play the timestamps are different!!! And one of the two has a time that flows SLOWER!!! I use, as suggested, rosbag play --clock -l mybag.bag with use_sim_time set as true. Thanks in advance. Originally posted by mark_vision on ROS Answers with karma: 275 on 2014-05-15 Post score: 0 Answer: Guys, I got it. It was a problem caused by an RT task running. See http://answers.ros.org/question/167401/synch-problem-with-chrony-and-xenomai/ Originally posted by mark_vision with karma: 275 on 2014-05-26 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 17955, "tags": "rosbag, synchronization, rqt-plot" }
How do I play and record audio at the same time?
Question: So this question is more related to Matlab itself. Some background info: I'm trying out the Exponential Sine Sweep method to obtain the Room Impulse Response with my laptop speakers and microphone for testing. I can generate the sine sweep perfectly but I have trouble playing and recording it without a lot of lag. I'm now trying to use "audioPlayerRecorder" from the Audio System Toolbox but when I do:

f1 = 80;     % Start frequency (Hz)
f2 = 22000;  % Stop frequency (Hz)
T = 3;       % Sweep duration in seconds
fs = 44100;  % Sampling frequency
[sweep, inverse] = ESS_generator(f1, f2, T, fs);
playRec = audioPlayerRecorder(fs);
recording = playRec(sweep);

I get an error on the last line saying:

Error using coder.internal.assert (line 33)
No full-duplex audio device detected

But I know that my laptop is able to use the microphone and speakers simultaneously so I'm thinking maybe the error gets thrown because of a missing configuration? I'm running the code on Linux with the ALSA audio drivers. I'm using external speakers through the AUX audio port, and the default laptop microphone. Answer: I suspect that your problem is to do with having your speakers and microphone on different ALSA audio devices or subdevices. You need to find the available sound cards for recording and playback on your system. The following commands will tell you this information:

aplay -l
arecord -l

You will need to define a default full-duplex sound card using your choice of devices you see in the output of the above commands. The way to define this full-duplex card (made of two different soundcard devices) is to define it in your ~/.asoundrc file. This example shows you how to do that, however I will repeat it here. The requirement is that you use plughw as your device name because the clocks on the two sound cards will drift and you will need resampling of some form. But you can play with this option. So for example if you want to take input from a device on the first card (hw:0) and output to the second device on the second card (hw:1,1) then you would write the following in your ~/.asoundrc file:

pcm.!default {
    type asym
    playback.pcm "plughw:1,1"
    capture.pcm "plughw:0"
}
ctl.!default {
    type hw
    card 1
}
{ "domain": "dsp.stackexchange", "id": 11683, "tags": "matlab, audio, impulse-response" }
What is the difference between "equivariant to translation" and "invariant to translation"
Question: I'm having trouble understanding the difference between equivariant to translation and invariant to translation. In the book Deep Learning. MIT Press, 2016 (I. Goodfellow, A. Courville, and Y. Bengio), one can find on the convolutional networks: [...] the particular form of parameter sharing causes the layer to have a property called equivariance to translation [...] pooling helps to make the representation become approximately invariant to small translations of the input Is there any difference between them or are the terms interchangeably used? Answer: Equivariance and invariance are sometimes used interchangeably in common speech. They have ancient roots in maths and physics. As pointed out by @Xi'an, you can find previous uses (anterior to Convolutional Neural Networks) in the statistical literature, for instance on the notions of the invariant estimator and especially the Pitman estimator. However, I would like to mention that it would be better if both terms kept separate meanings, as the prefix "in-" in invariant is privative (meaning "no variance" at all), while "equi-" in equivariant refers to "varying in a similar or equivalent proportion". In other words, one in- does not vary, the other equi- does. Let us start from simple image features, and suppose that image $I$ has a unique maximum $m$ at spatial pixel location $(x_m,y_m)$, which is here the main classification feature. In other words: an image and all its translations are "the same". An interesting property of classifiers is their ability to classify in the same manner some distorted versions $I'$ of $I$, for instance translations by all vectors $(u,v)$. The maximum value $m'$ of $I'$ is invariant: $m'=m$, the value is the same; while its location will be at $(x'_m,y'_m)=(x_m-u,y_m-v)$, and is equivariant, meaning that it varies "equally" with the distortion.
The precise formulations given (in mathematical terms) for equivariance depend on the class of objects and transformations one considers: translation, rotation, scale, shear, shift, etc. So I prefer here to focus on the notion that is most often used in practice (I accept the blame from a theoretical stand-point). Here, translations by vectors $(u,v)$ of the image (or some more generic actions) can be equipped with a structure of composition, like that of a group $G$ (here the group of translations). One specific $g$ denotes a specific element of the translation group (translational symmetry). A function or feature $f$ is invariant under the group of actions $G$ if for all images in a class, and for any $g$, $$f(g(I)) = f(I)\,.$$ In other words: if you change the image by action $g$, the values for feature or function $f$ are the same. It becomes equivariant if there exists another mathematical structure or action (often a group again) $G'$ that reflects the transformations (from $G$) in $I$ in a meaningful way. In other words, such that for each $g$, you have some (unique?) $g' \in G'$ such that $$f(g(I)) = g'(f(I))\,.$$ In the above example on the group of translations, $g$ and $g'$ are the same (and hence $G'=G$): an integer translation of the image reflects as the exact same translation of the maximum location. This is sometimes referred to as "same-equivariance". Another common definition is: $$f(g(I)) = g(f(I))\,.$$ I however used potentially different $G$ and $G'$ because sometimes $f(.)$ and $g(.)$ do not lie in the same domain. This happens for instance in multivariate statistics (see e.g. Equivariance and invariance properties of multivariate quantile and related functions, and the role of standardisation). But here, the uniqueness of the mapping between $g$ and $g'$ allows one to get back to the original transformation $g$.
Often, people use the term invariance because the equivariance concept is unknown, or everybody else uses invariance, and equivariance would seem more pedantic. For the record, other related notions (esp. in maths and physics) are termed covariance, contravariance, differential invariance. In addition, translation-invariance, at least approximate, or in envelope, has been a quest for several signal and image processing tools. Notably, multi-rate (filter-banks) and multi-scale (wavelets or pyramids) transformations have been designed in the past 25 years, for instance under the hood of shift-invariant, cycle-spinning, stationary, complex, dual-tree wavelet transforms (for a review on 2D wavelets, A panorama on multiscale geometric representations). The wavelets can absorb a few discrete scale variations. All these (approximate) invariances often come with the price of redundancy in the number of transformed coefficients. But they are more likely to yield shift-invariant, or shift-equivariant features.
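The running example (maximum value vs. maximum location) can be illustrated numerically, using a circular shift as the translation action $g$ (a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))
shift = (2, 3)
g_img = np.roll(img, shift, axis=(0, 1))  # the action g: translate the image

# f = maximum value is invariant: f(g(I)) == f(I)
print(g_img.max() == img.max())  # True

# f = location of the maximum is equivariant: it moves by the same shift
loc = np.unravel_index(img.argmax(), img.shape)
g_loc = np.unravel_index(g_img.argmax(), g_img.shape)
print(loc, g_loc)  # g_loc equals loc + shift (mod 8, from the circular roll)
```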
{ "domain": "datascience.stackexchange", "id": 11305, "tags": "neural-network, deep-learning, convolution" }
How to get length of array of geometry_msg:Points in service handle?
Question: I'm trying to find the length of an array of geometry_msgs::Point passed to a service handle, so I can iterate over each Point, apply a transform and then push onto a response. When I use sizeof(req.points) I get "24" regardless of the number of points actually in the message.

TransformPoints.srv:

geometry_msgs/Point[] points
---
geometry_msgs/Point[] points

The callback:

static bool pointsCallback(image_transform::TransformPoints::Request &req, image_transform::TransformPoints::Response &res)
{
    int no_points = sizeof(req.points);
    int point_size = sizeof(geometry_msgs::Point);
    std::cout << no_points << std::endl;
    std::cout << point_size << std::endl;
    std::cout << req.points[0] << std::endl;
    std::cout << req.points[1] << std::endl;

Will output something like this:

24 # sizeof(req.points)
24 # sizeof(geometry_msgs::Point);
x: 0
y: 1
z: 0
x: 1
y: 2
z: 0

Originally posted by LukeAI on ROS Answers with karma: 131 on 2020-08-06 Post score: 0 Answer: Hi @LukeAI, What you want is not sizeof, which returns the size in bytes of the object, but the size function, which returns the number of elements in the vector. So in your case, since the points message fields are std vectors, it would be something like:

auto req_size = req.points.size();

Regards. Originally posted by Weasfas with karma: 1695 on 2020-08-06 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gvdhoorn on 2020-08-06: I would perhaps even suggest using a range-based for-loop instead. Comment by LukeAI on 2020-08-06: Weasfas answered my question, as stated. but using the range-based for-loop is what I really wanted. I didn't realise that an array in a msg or srv was a std vector. Comment by gvdhoorn on 2020-08-06: http://wiki.ros.org/msg#Fields
{ "domain": "robotics.stackexchange", "id": 35380, "tags": "ros, c++, ros-kinetic, services, array" }
What is this species of green frog found in store-bought flowers in France?
Question: Today I found a frog in a big flower composition bought at a french shop (living in Grenoble, south of France). It started snowing today, and I'd like to keep it until it's warmer outside. So could someone tell me what species it is ? (To check what it eats and if it's legal to keep it) Thanks ! (I'll edit with more pictures if it's needed and if she moves) Edit : Answer: This looks very much like a tree frog of which there are many similar looking species. Without knowing where your flowers came from or more characteristics about the specimen, it'll be difficult to provide an accurate answer*. One possibility may be the European tree frog (Hyla arborea). According to Wikipedia: Members of the H. arborea species complex are the only representatives of the widespread tree frog family (Hylidae) indigenous to mainland Europe. which is supported by genetic work by Stöck et al. 20081 Description [source]: Small (3.2-5 cm in length) Slender body shape with long legs and smooth dorsal skin+ Usually green (but variable) with dark brown lateral stripe across eye caudally to groin and whitish ventral skin surface. Gripping discs on toes The range of this species also includes much of France: Image Source + Although the description states this species has smooth dorsal skin (possibly like the OP's specimen -- I can't tell for sure), many of the pictures I've seen seem to suggest that the dorsal skin of this species is actually rather granular. For example, see here. *Feel free to update your post with additional info and perhaps a better response will be added. Citations: 1 Stöck, M., Dubey, S., Klütsch, C., Litvinchuk, S.N., Scheidt, U. and Perrin, N., 2008. Mitochondrial and nuclear phylogeny of circum-Mediterranean tree frogs from the Hyla arborea group. Molecular Phylogenetics and Evolution, 49(3), p.1019.
{ "domain": "biology.stackexchange", "id": 9453, "tags": "species-identification, zoology, herpetology" }
Experimental bounds on Lorentz-violating dispersion relation
Question: It has been predicted by several background-independent approaches to Quantum Gravity (like LQG or spinfoams) that the physical dispersion relations in vacuum could take the following form: $$ p^2 = E^2 \left( 1 + \xi \frac{E}{E_P} + {\cal O}\left( \frac{E^2}{E_P^2} \right) \right), $$ where $E$ is the observed energy of the massless particle, $p$ is the observed absolute value of the 3-momentum and $E_P = \text{const}$ is Planck's energy. The dimensionless coefficient $\xi$ is to be calculated by theory. This ansatz is in apparent contradiction with Special Relativity, but these dispersion relations are actually a consequence of spacetime being discrete at the Planck scale. The idea is that it may be correct far beyond the domain of validity of SR. For observable energies $E \ll E_P$ we have $E = p$. My question is: what are the most restrictive experimental bounds on the value of $\xi$? I've heard claims that tight bounds have been obtained from investigating the spectrum of distant astrophysical gamma-ray sources. Ideally, I want a reference to a peer-reviewed paper explicitly stating the experimental bound on $\xi$. Answer: The astrophysical gamma-ray sources to which you are referring are gamma-ray bursts, particularly the short bursts. These systems are ideal for testing Lorentz invariance because they can lead to the emission of high-energy (tens of GeV) photons in a short interval of time, allowing for measurements of the potential delay in arrival of photons of different energies. The strictest limit on Lorentz invariance violation set so far in the literature comes from Abdo et al. 2009, in which they use the detection of a photon of approximately 30 GeV from the short GRB 090510. In your notation, they find $$|\xi| < 0.82$$ at about the 99% confidence limit. With less conservative assumptions, the limit drops even lower.
To gain a more complete understanding of their analysis, you need to read the supplemental information, where they enumerate the various limits as they make their assumptions less and less conservative. (Note that what they call $\xi_1$ in their paper is actually $1/\xi$ in your notation, and what they report is $\xi_1 > 1.22$).
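To get a feel for the scale involved, here is a rough back-of-envelope sketch (my own illustration, not taken from the paper; the light-travel distance to the z ≈ 0.9 burst is an assumed round number, and cosmological corrections are ignored). To first order, the extra travel delay of a photon of energy $E$ over a distance $D$ is roughly $\Delta t \sim \xi\,(E/E_P)\,(D/c)$:

```python
E = 30.0          # photon energy in GeV (the GRB 090510 photon)
E_P = 1.22e19     # Planck energy in GeV
D = 5.8e25        # assumed light-travel distance to z ~ 0.9, in metres
c = 3.0e8         # speed of light in m/s
xi = 0.82         # the reported 99% bound

# First-order dispersion delay relative to a low-energy photon
dt = xi * (E / E_P) * (D / c)
print(dt)  # a few tenths of a second
```

Since the 30 GeV photon arrived within a fraction of a second of the low-energy emission, values of $|\xi|$ much above unity would have produced a visibly larger lag — that is the intuition behind the bound.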
{ "domain": "physics.stackexchange", "id": 38201, "tags": "general-relativity, special-relativity, experimental-physics, resource-recommendations, loop-quantum-gravity" }
"For small values of n, O(n) can be treated as if it's O(1)"
Question: I've heard several times that for sufficiently small values of n, O(n) can be thought about/treated as if it's O(1). Example: The motivation for doing so is based on the incorrect idea that O(1) is always better than O(lg n), is always better than O(n). The asymptotic order of an operation is only relevant if under realistic conditions the size of the problem actually becomes large. If n stays small then every problem is O(1)! What is sufficiently small? 10? 100? 1,000? At what point do you say "we can't treat this like a free operation anymore"? Is there a rule of thumb? This seems like it could be domain- or case-specific, but are there any general rules of thumb about how to think about this? Answer: This is largely piggy-backing on the answers already posted, but may offer a different perspective. It's revealing that the question discusses "sufficiently small values of n". The whole point of Big-O is to describe how processing grows as a function of what's being processed. If the data being processed stays small, it's irrelevant to discuss the Big-O, because you're not interested in the growth (which isn't happening). Put another way, if you're going a very short distance down the street, it may be equally fast to walk, use a bicycle, or drive. It may even be faster to walk if it would take a while to find your car keys, or if your car needs gas, etc. For small n, use whatever's convenient. If you're taking a cross-country trip, then you need to look at ways to optimize your driving, your gas mileage, etc.
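A tiny, machine-dependent illustration of this point (my own sketch, not from the answer): for a handful of elements, an O(n) linear scan of a list is competitive with an O(1) hash lookup, because constant factors dominate at small n.

```python
import timeit

small = list(range(8))       # n = 8: "sufficiently small"
small_set = set(small)

# Time 100k membership tests each way; absolute numbers vary by machine,
# and for n this small the two are typically in the same ballpark.
linear = timeit.timeit(lambda: 7 in small, number=100_000)
hashed = timeit.timeit(lambda: 7 in small_set, number=100_000)
print(f"linear scan: {linear:.4f}s, set lookup: {hashed:.4f}s")
```

The crossover point is exactly the kind of domain-specific constant the answer warns about: it depends on the data, the operations, and the hardware.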
{ "domain": "cs.stackexchange", "id": 17523, "tags": "asymptotics" }
gazebo_ros_pkgs build error on saucy/indigo
Question: I'm trying to build gazebo_ros_pkgs on saucy/indigo, and I'm getting the following build error: [ 14%] [ 15%] Generating dynamic reconfigure files from cfg/GazeboRosCamera.cfg: /home/scpeters/ws/gazebo_ros_pkgs/devel/include/gazebo_plugins/GazeboRosCameraConfig.h /home/scpeters/ws/gazebo_ros_pkgs/devel/lib/python2.7/dist-packages/gazebo_plugins/cfg/GazeboRosCameraConfig.py Generating dynamic reconfigure files from cfg/Hokuyo.cfg: /home/scpeters/ws/gazebo_ros_pkgs/devel/include/gazebo_plugins/HokuyoConfig.h /home/scpeters/ws/gazebo_ros_pkgs/devel/lib/python2.7/dist-packages/gazebo_plugins/cfg/HokuyoConfig.py [ 15%] Generating dynamic reconfigure files from cfg/GazeboRosOpenniKinect.cfg: /home/scpeters/ws/gazebo_ros_pkgs/devel/include/gazebo_plugins/GazeboRosOpenniKinectConfig.h /home/scpeters/ws/gazebo_ros_pkgs/devel/lib/python2.7/dist-packages/gazebo_plugins/cfg/GazeboRosOpenniKinectConfig.py ... Traceback (most recent call last): File "/home/scpeters/ws/gazebo_ros_pkgs/src/gazebo_ros_pkgs/gazebo_plugins/cfg/Hokuyo.cfg", line 41, in from driver_base.msg import SensorLevels ImportError: No module named msg Traceback (most recent call last): File "/home/scpeters/ws/gazebo_ros_pkgs/src/gazebo_ros_pkgs/gazebo_plugins/cfg/GazeboRosCamera.cfg", line 6, in from driver_base.msg import SensorLevels ImportError: No module named msg Traceback (most recent call last): make[2]: *** [/home/scpeters/ws/gazebo_ros_pkgs/devel/include/gazebo_plugins/HokuyoConfig.h] Error 1 File "/home/scpeters/ws/gazebo_ros_pkgs/src/gazebo_ros_pkgs/gazebo_plugins/cfg/GazeboRosOpenniKinect.cfg", line 6, in from driver_base.msg import SensorLevels make[2]: *** Waiting for unfinished jobs.... 
ImportError: No module named msg make[2]: *** [/home/scpeters/ws/gazebo_ros_pkgs/devel/include/gazebo_plugins/GazeboRosCameraConfig.h] Error 1 [ 16%] [ 20%] make[2]: *** [/home/scpeters/ws/gazebo_ros_pkgs/devel/include/gazebo_plugins/GazeboRosOpenniKinectConfig.h] Error 1 [ 23%] [ 28%] make[1]: *** [gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_plugins_gencfg.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... Any ideas? I've cross-posted to the gazebo_ros_pkgs issue tracker. Originally posted by scpeters on ROS Answers with karma: 111 on 2014-04-07 Post score: 0 Answer: It looks like an issue with a missing dependency for dynamic_reconfigure. I've submitted a pull request to fix it based on advice from @dirk-thomas. Originally posted by scpeters with karma: 111 on 2014-04-07 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 17559, "tags": "gazebo, ros-indigo" }
How does rtabmap fuse wheel odometry with visual odometry?
Question: I set up my robot with turtlebot and kinect and execute rtabmap. -- http://wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot I hid the image of the Kinect sensor and moved the turtlebot. And I moved the turtlebot to calculate the travel distance only with the kinect image. I think the rtabmap algorithm is internally fusing image information and wheel odometry. How are you fusing? And do you have your own paper about fusing wheel odometry and visual odometry? Thank you ^ ^ Originally posted by JunJun on ROS Answers with karma: 26 on 2017-01-06 Post score: 0 Answer: Hi, rtabmap doesn't fuse wheel odometry and visual odometry. There is only one odometry input to rtabmap node (so one or the other can be used). If you want to fuse odometries, look at the robot_localization package for example, then use its odometry output as the odometry input of rtabmap. cheers Originally posted by matlabbe with karma: 6409 on 2017-01-09 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by JunJun on 2017-01-11: I have a question. Where does RTABMAP use the incoming odometry? Comment by matlabbe on 2017-01-12: Some examples here: http://wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot Comment by JunJun on 2017-01-12: Hmm, I want to know how the rtabmap algorithm uses odometry internally. Comment by matlabbe on 2017-01-12: The map's graph has two kinds of links: odometry and loop closure. Odometry is used to set the transform between two consecutive nodes added to map's graph. Comment by an99990 on 2022-07-14: How would one use your visual odometry and fuse it with wheel odometry? If my odom_topic is my wheel odometry, does it mean that there's no visual odometry done? ty Comment by matlabbe on 2022-10-09: rtabmap node doesn't do odometry, it just consumes it. If you don't start rgbd_odometry or stereo_odometry nodes, you can feed rtabmap directly with your own odom topic.
{ "domain": "robotics.stackexchange", "id": 26654, "tags": "slam, navigation, odometry, kinect, rtabmap" }
Help me understand how Pulse-Amplitude Modulation (PAM) works
Question: Reading on the Internet I learned that PAM is basically a sampling technique which converts an analog signal into a discrete signal. It modulates a train of pulses by varying the amplitudes of the individual pulses in a regularly timed sequence based on the analog wave. (Multiplying the analog signal with a train of pulses) In my mind I have the idea of PAM being similar to AM, but instead of an analog carrier we have a train of pulses. On the other hand, in some textbook I read that the way PAM works is by grouping bits from a stream into symbols, and each symbol corresponds to a pulse with a certain amplitude. This seems to me like a completely different process than the first definition. So what is true? And if both are true, when is the second way to do PAM used? Answer: If you define pulse amplitude modulation (PAM) as a method where a train of pulses is generated with the information coded by the amplitude of these pulses, then both methods comply with that definition. In analog PAM, the amplitudes are defined by samples of the analog source signal, whereas in digital PAM, the amplitudes are discrete and are defined by the bit pattern and some fixed mapping between bit patterns and discrete amplitude values. In both cases, the PAM signal is given by $$s(t)=\sum_kA_kp(t-kT)\tag{1}$$ where $A_k$ are the amplitudes representing the analog or digital source signal, $p(t)$ is the pulse, and $T$ is the symbol period. The signal $(1)$ is a baseband PAM signal, which can be modulated by a carrier to obtain passband PAM. Using complex symbols $A_k$ on a rectangular grid in the complex plane and two orthogonal carriers gives you quadrature amplitude modulation (QAM). Take a look at this answer for more information on passband PAM.
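Equation $(1)$ is easy to make concrete. Below is a minimal sketch of digital baseband 2-PAM with a rectangular pulse; the bit-to-amplitude mapping and the oversampling factor are my own illustrative choices, not anything mandated by the answer.

```python
bits = [1, 0, 1, 1]
amplitudes = [1.0 if b else -1.0 for b in bits]  # A_k: bit 1 -> +1, bit 0 -> -1
sps = 8                                          # samples per symbol, i.e. T = 8 samples

# s(t) = sum_k A_k * p(t - kT) with a rectangular p(t): each symbol
# contributes sps identical samples of height A_k, placed back to back.
s = [A for A in amplitudes for _ in range(sps)]

print(s[::sps])  # first sample of each symbol interval: [1.0, -1.0, 1.0, 1.0]
```

Analog PAM is the same construction with $A_k$ taken from samples of the analog source instead of from a bit mapping; passband PAM/QAM then follows by mixing $s(t)$ with a carrier.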
{ "domain": "dsp.stackexchange", "id": 8100, "tags": "modulation, amplitude" }
Effects of Groups on the Nucleophilicity of Nitrogen
Question: The reactivity of nitrogen mustards increases with increasing nucleophilicity of the central nitrogen atom. Select the most and least reactive from each of the following groups of nitrogen mustards. My Attempt I know that II will be the most reactive since the methyl group will donate electron density to the nitrogen atom due to inductive effects, making it more nucleophilic and hence more reactive. I also thought that III will be the least reactive since the ester group pulls more electron density away from the nitrogen atom compared to the carbonyl group. I also thought in general that esters are stronger deactivators than carbonyl groups. However, the answer is I. After searching on the internet, it turns out that esters and carbonyl groups are roughly equally strong deactivators. So how am I supposed to determine which one will be less nucleophilic? Answer: Your analysis of "pulls more electron density away" is a bit too simplistic. Both of them are approximately equally electron-withdrawing via the inductive effect, but in terms of resonance, which is the main factor here, they are quite different. Compound I is an amide. Amides are non-nucleophilic on nitrogen because the N lone pair is delocalised into the C=O π* orbital. In fact, amides act as nucleophiles via the carbonyl oxygen and not the nitrogen. In compound III, there is also the same delocalisation, but a way of looking at it is that the other oxygen (on the -OMe group) also wants to donate its lone pair into the C=O π*. In a sense this means that there is some degree of competition between the nitrogen and the oxygen, and the nitrogen lone pair is no longer delocalised to such a great extent. Formally speaking, I think the best way of putting it is that the HOMO of compound III has a greater coefficient on nitrogen than the HOMO of compound I. You can also draw resonance forms to show that there is a smaller contribution from the resonance form with positively charged nitrogen.
{ "domain": "chemistry.stackexchange", "id": 5492, "tags": "organic-chemistry, amines, nucleophilicity, amides" }
Calculate viewing times at a location - when location has an obstruction
Question: My most convenient viewing location is my backyard. It also has an obstructed view of the sky - trees. This means that just knowing the rise and set times of a planet does not let me know when, or if, I can view a planet at my location. For example, let us say I look up my favorite planet, Saturn, and discover that it will rise at 9pm and set at 4am. The rise time will give me when it clears the horizon. This does not give me the information I need to know. I need to know how many degrees above the horizon it will be at culmination and at what time it will reach its meridian passage. This tells me if the planet will rise higher than my obstruction. The planet needs to rise higher than the red line in my graphic for me to see it. I have been trying to find a chart, either in book form or online, that includes the degrees above the horizon at which a planet will culminate. Does anyone know of such a resource? Answer: I don't know of a chart in a book or online, but I know of some other options: Nightshift (Android app) will give you transit times and show you graphs. SkEye (Android app) will show you graphs. Heavens Above's Planet Summary will show you transit times and altitude in degrees for any given time and location (set location at top right of page).
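For a quick numeric check without a chart, the altitude at meridian passage follows from just your latitude and the object's declination. This helper is my own addition (it ignores atmospheric refraction):

```python
def culmination_altitude(latitude_deg, declination_deg):
    """Altitude in degrees of an object as it crosses the meridian."""
    return 90.0 - abs(latitude_deg - declination_deg)

# Example: a planet at declination -15 deg seen from latitude +40 (northern
# hemisphere) culminates 35 deg above the southern horizon.
print(culmination_altitude(40.0, -15.0))  # 35.0
```

If the result clears the red line in your diagram, the object will be visible for some window around its transit time.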
{ "domain": "astronomy.stackexchange", "id": 5845, "tags": "horizon" }
Intensity of spectral lines
Question: Why are certain spectral lines more intense than others, that is why certain transitions between levels have greater probabilities of occurrence than others? Answer: The transition probability between two levels is given by the Fermi golden rule: $$w_{i\rightarrow f} = \frac{2\pi}{\hbar}|V_{i,f}|^2\delta(E_i - E_f \pm \hbar\omega).$$ Thus, the principal factor that determines the intensity of a transition is the matrix element, $V_{i,f}$, which is different for different levels. Frequently one deals with the dipolar coupling $$\hat{V} = - \mathbf{d}\cdot\mathbf{E},$$ where the dipolar matrix element is given by $$\mathbf{d}_{i,f} = \int d^3\mathbf{r}\psi_i(\mathbf{r})^*\mathbf{r}\psi_f(\mathbf{r}), $$ which is obviously different for the different pairs of the states $i, f$. Another reason for different line intensity is different final density of states of the background radiation that causes the spontaneous emission.
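A standard worked consequence of the dipole matrix element (my addition, not part of the answer): since $\mathbf{r}$ is odd under parity, transitions between states of equal parity are forbidden. Writing $\psi_i(-\mathbf{r}) = P_i\,\psi_i(\mathbf{r})$ with $P_i = \pm 1$, the substitution $\mathbf{r} \to -\mathbf{r}$ in the integral gives

```latex
\mathbf{d}_{i,f}
  = \int d^3\mathbf{r}\,\psi_i(\mathbf{r})^*\,\mathbf{r}\,\psi_f(\mathbf{r})
  = -\,P_i P_f \int d^3\mathbf{r}\,\psi_i(\mathbf{r})^*\,\mathbf{r}\,\psi_f(\mathbf{r})
  = -\,P_i P_f\,\mathbf{d}_{i,f},
```

so $\mathbf{d}_{i,f} = 0$ unless $P_i P_f = -1$. Such "forbidden" lines can still appear through weaker couplings (e.g. magnetic dipole or electric quadrupole) and are correspondingly faint.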
{ "domain": "physics.stackexchange", "id": 66826, "tags": "quantum-mechanics, electromagnetic-radiation, radiation, atoms, intensity" }
Am I finding the force correctly?
Question: I am trying to see if I am doing a problem correctly. Problem Suppose you carry a 50kg sack of potatoes up two flights of stairs, a total height of 10m. How much work did you do? If it took you 20 seconds, what was your power output? (I am NOT looking for the answer, I am just trying to see if I am doing this correctly). Based on what I have learned, to find Work you must do this equation: $W = Fd$ (force x displacement) This is where I am a little confused. Not sure what the force is, but going off of my notes, to find the force you must do this equation: $F = ma$ (mass x acceleration). I already know the mass is 50kg, but to find acceleration I must find it in m/s^2. (Where I think I am getting this wrong). So I know that our meters is 10, and our seconds is 20 (10 m high, and 20 seconds going up stairs), but to find it squared I would do 10/20 / 20 m/s^2 which equals 0.025 m/s^2 for my acceleration. Going back to finding force I can now plug in my acceleration of 0.025 as follows: $F = (50kg) \times (0.025m/s^2) J$ $F = 1.25 N$ $W = (1.25) \times (10)$ $W = 12.5 J$ Did I do this correctly? If not, instead of answers, let me know what I did wrong, and maybe like a hint at what I should do next. I appreciate any help given. Answer: You are confused by using $F=ma$, since, as you walk up the stairs, you are not accelerating, because you walk with a constant velocity. On the sack, we find a net force $F=0$. But, there is gravity acting, such that the force that you exert is just opposite to gravity: $F=mg$. The work you put in is then $W=F\times d$, and thus $W=mgh$. Surprise, surprise, this is also exactly the change in potential energy in the system, which you might have in your notes.
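For anyone checking their result afterwards, here is a numeric evaluation of the answer's formulas (my addition, assuming $g = 9.8\ \mathrm{m/s^2}$; skip it if, like the asker, you'd rather work it out yourself):

```python
m, g, h, t = 50.0, 9.8, 10.0, 20.0  # mass (kg), gravity (m/s^2), height (m), time (s)

W = m * g * h   # work done against gravity, in joules
P = W / t       # average power, in watts
print(round(W, 2), round(P, 2))  # 4900.0 245.0
```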
{ "domain": "physics.stackexchange", "id": 14103, "tags": "homework-and-exercises, forces, acceleration" }
Loading an urdf model from the parameter server with hydro
Question: Migrating from groovy to hydro, I encountered the following problem: When I try to load an urdf model from the parameter server with the following code: urdf::Model urdfModel; if(!nh.hasParam(urdfName)) { return; } if(!urdfModel.initParam(urdfName)) { return; } where urdfName is a std::string (e.g., "/robot_description"), using hydro the program dies and I receive the following error message (with gdb): Program received signal SIGILL, Illegal instruction. 0x00007ffff4e6af0f in std::vector<float, std::allocator<float> >::_M_insert_aux(__gnu_cxx::__normal_iterator<float*, std::vector<float, std::allocator<float> > >, float const&) () from /usr/lib/libpcl_common.so.1.7 (gdb) bt #0 0x00007ffff4e6af0f in std::vector<float, std::allocator<float> >::_M_insert_aux(__gnu_cxx::__normal_iterator<float*, std::vector<float, std::allocator<float> > >, float const&) () from /usr/lib/libpcl_common.so.1.7 #1 0x00007ffff3005cbb in ?? () from /opt/ros/hydro/lib/liburdfdom_world.so #2 0x00007ffff2fff447 in ?? () from /opt/ros/hydro/lib/liburdfdom_world.so #3 0x00007ffff2ff3d97 in urdf::parseURDF(std::string const&) () from /opt/ros/hydro/lib/liburdfdom_world.so #4 0x00007ffff32503a3 in urdf::Model::initString(std::string const&) () from /opt/ros/hydro/lib/liburdf.so #5 0x00007ffff324f60e in urdf::Model::initParam(std::string const&) () from /opt/ros/hydro/lib/liburdf.so The same code and urdf model work fine with groovy. I am running Lubuntu 12.10 with ROS hydro (urdf 1.10.18, urdfdom 0.2.10). Edit: I am using an i7 CPU. The output of valgrind is: vex amd64->IR: unhandled instruction bytes: 0xC5 0xFA 0x10 0x2 0xC5 0xFA 0x11 0x0 ==28164== valgrind: Unrecognised instruction at address 0x7bd2f0f. ==28164== at 0x7BD2F0F: std::vector<float, std::allocator<float> >::_M_insert_aux(__gnu_cxx::__normal_iterator<float*, std::vector<float, std::allocator<float> > >, float const&) (in /usr/lib/libpcl_common.so.1.7.0) ==28164== by 0x9A13CBA: ??? 
(in /opt/ros/hydro/lib/liburdfdom_world.so) ==28164== by 0x9A0D446: ??? (in /opt/ros/hydro/lib/liburdfdom_world.so) ==28164== by 0x9A01D96: urdf::parseURDF(std::string const&) (in /opt/ros/hydro/lib/liburdfdom_world.so) ==28164== by 0x97C83A2: urdf::Model::initString(std::string const&) (in /opt/ros/hydro/lib/liburdf.so) ==28164== by 0x97C760D: urdf::Model::initParam(std::string const&) (in /opt/ros/hydro/lib/liburdf.so) ==28164== by 0x7703F0A: <my_code> ==28164== Your program just tried to execute an instruction that Valgrind ==28164== did not recognise. There are two possible reasons for this. ==28164== 1. Your program has a bug and erroneously jumped to a non-code ==28164== location. If you are running Memcheck and you just saw a ==28164== warning about a bad jump, it's probably your program's fault. ==28164== 2. The instruction is legitimate but Valgrind doesn't handle it, ==28164== i.e. it's Valgrind's fault. If you think this is the case or ==28164== you are not sure, please let us know and we'll try to fix it. ==28164== Either way, Valgrind will now raise a SIGILL signal which will ==28164== probably kill your program. ==28164== ==28164== Process terminating with default action of signal 4 (SIGILL) ==28164== Illegal opcode at address 0x7BD2F0F ==28164== at 0x7BD2F0F: std::vector<float, std::allocator<float> >::_M_insert_aux(__gnu_cxx::__normal_iterator<float*, std::vector<float, std::allocator<float> > >, float const&) (in /usr/lib/libpcl_common.so.1.7.0) ==28164== by 0x9A13CBA: ??? (in /opt/ros/hydro/lib/liburdfdom_world.so) ==28164== by 0x9A0D446: ??? 
(in /opt/ros/hydro/lib/liburdfdom_world.so) ==28164== by 0x9A01D96: urdf::parseURDF(std::string const&) (in /opt/ros/hydro/lib/liburdfdom_world.so) ==28164== by 0x97C83A2: urdf::Model::initString(std::string const&) (in /opt/ros/hydro/lib/liburdf.so) ==28164== by 0x97C760D: urdf::Model::initParam(std::string const&) (in /opt/ros/hydro/lib/liburdf.so) ==28164== by 0x7703F0A: <my_code> ==28164== ==28164== HEAP SUMMARY: ==28164== in use at exit: 414,808 bytes in 6,624 blocks ==28164== total heap usage: 13,630 allocs, 7,006 frees, 807,298 bytes allocated ==28164== ==28164== LEAK SUMMARY: ==28164== definitely lost: 4 bytes in 1 blocks ==28164== indirectly lost: 0 bytes in 0 blocks ==28164== possibly lost: 148,127 bytes in 1,866 blocks ==28164== still reachable: 266,677 bytes in 4,757 blocks ==28164== suppressed: 0 bytes in 0 blocks ==28164== Rerun with --leak-check=full to see details of leaked memory ==28164== ==28164== For counts of detected and suppressed errors, rerun with: -v ==28164== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0) I am wondering, why the code runs with groovy but not with hydro. Have there been made any changes that could explain this behavior? Originally posted by alice on ROS Answers with karma: 11 on 2014-06-05 Post score: 1 Original comments Comment by bchr on 2014-06-05: +1, and memory corruption is more likely. You may want to run valgrind on your code, but it might not be able to find the source of the error. Answer: Illegal instruction could point to your program trying to use some feature not supported by your CPU (or some serious memory corruption). What type of CPU are you using? Originally posted by gvdhoorn with karma: 86574 on 2014-06-05 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 18174, "tags": "urdf, ros-hydro" }
Is there any biological evidence that is not suggestive of or seems to disprove evolution?
Question: Often, in science, when we have evidence that doesn't fit our paradigm, we bend it until the paradigm collapses. Although there is plenty of decent evidence for evolution, is there anything that does not fit the evolutionary paradigm? Is there any biological evidence that is not suggestive of or seems to disprove evolution? Or is everything merely hunky-dory? Answer: Evolution is a broad field of knowledge. There are definitely a few elements of our current theory of evolution that do not perfectly match observations. However, those concern small details and might not be of much interest to you. Here are a few examples: We don't know the relative importance of background selection and selective sweeps at explaining genome-wide variation in genetic diversity. We don't really have much of an idea what fraction of speciation happens in sympatry. We don't understand well how recombination rate evolves to differ between the sexes. We don't really understand the limits and/or costs of phenotypic plasticity that make phenotypic plasticity less frequent than we would otherwise expect. We don't really understand the patterns of genetic diversity on sex chromosomes at the intersection with the pseudo-autosomal region (PAR). We don't fully understand how much expansion load (a type of mutation load caused by the sampling of individuals at the expansion edge of a population) there is in human populations. We don't really know whether the mitochondrion (and other double-membraned organelles) was first an endo-parasite or an endo-symbiont. If you are asking whether there is evidence suggesting that evolution is not happening or that humans and chimpanzees do not actually share a common ancestor but were created independently, then no, there is no such evidence. You should have a look at the related post Is Evolution a fact?
{ "domain": "biology.stackexchange", "id": 8055, "tags": "evolution" }
How do viral envelopes contain molecules coded for by viral genes when they are derived from the host cell's plasma membrane?
Question: I have been studying viruses from "Biology: A Global Approach" by Campbell, Urry, et al. Regarding viral envelopes in animal viruses, the textbook writes, "..the viral envelope is usually derived from the host cell's plasma membrane, although all or most of the molecules of this membrane are specified by viral genes." I understand that the capsid proteins and glycoproteins on the surface of the viral envelope are coded for by viral genes, but the envelope formation happens in a process similar to exocytosis, where the new viral capsids are wrapped in the membrane as they bud from the cell. How can the viral envelope contain molecules coded by the viral genes (apart from the surface glycoproteins)? Answer: I don't have access to the book, so I can't be sure how they meant that statement, but most likely "all or most of the molecules" in the lipid membrane excludes the lipid molecules themselves. One of the defining characteristics of viruses is that they don't do their own metabolism. Even if they did, viral genes could only specify the enzymes for lipid synthesis, and these wouldn't be included in the membrane. However, viruses often DO manipulate the host cell's lipid metabolism. From this review of Lipid Interactions During Virus Entry and Infection: Large enveloped viruses are not the only viruses that manipulate cellular lipid metabolism. Microarray analysis of HCV infected cells show significant changes in the expression of genes involved in lipid metabolism (Woodhouse et al., 2010). Recent transcriptomic and proteomic analyses indicate that the expression of host genes involved in lipid biosynthesis, degradation, and transport is profoundly altered by HCV; in particular cholesterol biosynthesis genes were found to be upregulated (Woodhouse et al., 2010). Parallel lipidomics analysis also showed changes in selected lipid species, particularly phospholipids and sphingomyelin (Diamond et al., 2010).
This suggests that HCV reprogramming of host lipid metabolism attempts to maintain host homeostasis in spite of the elevated demand of metabolic precursors by the virus. So, membranes in the cell are composed of different types of lipids, and they aren't equally good for making virus membranes. Some viruses have been shown to upregulate production of the lipids they need, so in that sense those lipids are products of virus genes. That's a bit of a stretch, though, imo. For proteins in the viral membrane, they aren't necessarily all glycoproteins, though it looks like most are. M1 and M2 membrane proteins in influenza aren't glycosylated, for example. Here's an image from another review showing virus budding through the plasma membrane: One kind of virus budding. Viral glycoproteins, inserted into the cellular membrane at the endoplasmic reticulum and processed through the Golgi to the plasma membrane (see Figure 3), associate with the assembled viral nucleocapsid. The direct association pictured here is characteristic of togaviruses. For other viruses, possessing helical nucleocapsids, the association is mediated by a peripheral membrane protein. Cellular membrane proteins are excluded from the envelope of the mature virion. This may occur during assembly, as pictured, or by prior formation of a viral membrane patch (or raft), before the nucleocapsid arrives at the membrane. In this image, you can see the host membrane proteins (the white ovals) being excluded from the patch where the viral proteins (black shapes) accumulate and which later envelops the nucleocapsid. This process is likely what your textbook is referring to.
{ "domain": "biology.stackexchange", "id": 11862, "tags": "cell-biology, virology" }
Battleships Player vs Computer Algorithm
Question: I am a Year 12 student studying CS for my A-Levels. I have had previous Python experience but it is obviously not to professional/industry standard. Therefore, my goal for this review is for my code to be criticised, allowing me to improve towards reaching such standards. The program is a computer version of the Battleships game. I have taken out the PvP and "Just Shoot Ships" methods. I believe the most interesting part is the PVE() function. This is where the logic is implemented for the computer to play against the player. Here is what I think I'd like a review on: The use of variables and their names The use of classes - how I've passed objects into functions; I think my use of classes and passing in objects is fairly inconsistent. I would like to know if there is a way I could improve this. The use of lists and whether it is good practice to use jagged arrays in such a way. The efficiency of my algorithm in general if possible. The effectiveness of commenting and how it can be improved to be more useful in a team environment # Battleships 2K16 # Completed 11/11/16, 13:00 # Developed by Tommy Kong # This is a Battleships game.
# It allows players to:
# Play against each other
# Play against a computer
# Practice shooting ships

import random, time

# Class for creating a grid
class Grid:
    # Create gird of width and height
    def __init__(self,width,height):
        self.grid = []
        self.pieces = [[0],[1],[2],["miss"],["hit"]]
        for y in range(height):
            row = []
            for x in range(width):
                row.append("")
            self.grid.append(row)

    # Add piece to given coordinates
    def addPiece(self,piece,side):
        for pieceSet in self.pieces:
            if pieceSet[0] == side:
                pieceSet.append(piece)
        for coord in piece:
            self.grid[coord[1]][coord[0]] = side

    # Function checks if the grid still has pieces of a certain side
    def isDefeated(self,side):
        for row in self.grid:
            for piece in row:
                if piece == side:
                    return False
        return True

    # Display the grid for the user
    def show(self):
        # Adding the top coordinates
        print("-" * 45)
        line = ""
        for i in range(10):
            line += "| " + str(i) + " "
        print(line + "| |")
        print("-" * 45)
        # Adding the individual cells
        for y in range(len(self.grid)):
            line = ""
            for x in self.grid[y]:
                if x == "":
                    line += "| - "
                elif x == 0:
                    line += "| - "
                elif x == 1:
                    line += "| - "
                elif x == 2:
                    line += "| - "
                elif x == "miss":
                    line += "| X "
                elif x == "hit":
                    line += "| O "
            # Adding side coordinates
            line += "| " + str(y) + " |\n"
            for x in self.grid[y]:
                line+="----"
            line += "-----"
            print(line)

    # Returns if a grid is empty
    def isEmpty(self,x,y):
        if self.grid[y][x] == "":
            return True
        else:
            return False

    # Returns the value inside a coord
    def getCoordValue(self,x,y):
        return self.grid[y][x]

# Class which handles the battleships game
class Battleships:
    # Allow the user to choose from 3 options
    def __init__(self):
        while True:
            print("Welcome to Battleships!")
            print("You can - ")
            print("[1] PvP\n[2] PvE\n[3]Just Shoot Ships")
            decision = 0
            while decision < 1 or decision > 3:
                try:
                    decision = int(input("What would you like to do: "))
                except:
                    decision = 0
            if decision == 1:
                self.PVP()
            elif decision == 2:
                self.PVE()
            elif decision == 3:
                self.JSS()

    # Adds a ship
    def addShip(self,grid,side,length):
        orientation = 0
        while orientation < 1 or orientation > 2:
            try:
                orientation = int(input("Would you like the ship to be horizontal [1] or vertical [2]: "))
            except:
                orientation = 0
        if orientation == 1:
            rootCoord = self.inputCoord(10-(length),9)
        elif orientation == 2:
            rootCoord = self.inputCoord(9,10-(length))
        ship = []
        # Whilst valid ship has not been added
        while True:
            currentShip = []
            # Add coords depending on length
            for i in range(length):
                if orientation == 1:
                    currentShip.append([rootCoord[0]+i,rootCoord[1]])
                elif orientation == 2:
                    currentShip.append([rootCoord[0],rootCoord[1]+i])
            # Test that the coords are not filled already
            validShip = True
            for coord in currentShip:
                if grid.isEmpty(coord[0],coord[1]) == False:
                    # If any coords are filled then the ship is invalid
                    validShip = False
                    print("There are already ships existing there!")
                    return self.addShip(grid,side,length)
            # If ship is valid then stop trying and return ship coords
            if validShip:
                ship = currentShip
                return ship

    # Function returns list of ship lengths that has been sunk
    def getSunkShips(self,grid,side):
        # List of sunk ships
        sunkShips = []
        # Go through the pieces array in grid object
        for ship in range(1,len(grid.pieces[side])):
            sunkStatus = []
            # For each ship coordinate in a ship
            for shipCoord in grid.pieces[side][ship]:
                # If the coordinate can be found in the hit list then that part has been sunk
                sunk = False
                for hitCoord in range(1,len(grid.pieces[4])):
                    if shipCoord == grid.pieces[4][hitCoord][0]:
                        sunk = True
                        break
                sunkStatus.append(sunk)
            # Go through the sunk parts and if all of it is sunk then the ship is sunk
            sunkShip = True
            for status in sunkStatus:
                if status == False:
                    sunkShip = False
                    break
            if sunkShip == True:
                sunkShips.append(ship+1)
        return sunkShips

    # Method for when the user wants to play against the computer
    def PVE(self):
        # Create grids
        grids = [Grid(10,10),Grid(10,10)]
        print("Now you are going to add your ships.")
        # Add ships for player 1
        print("Player 1 add ships.")
        print("You input the root coordinate of the ship. They extend to the right or down, depending on orientation.")
        # Add ships of length 2 - 6
        for shipLength in range(5):
            print("Add ship of length {}".format(shipLength+2))
            ship = self.addShip(grids[0],1,shipLength+2)
            grids[0].addPiece(ship,1)
            input("Press enter to continue...")
            self.clearScreen()
        print("Okay, the grids are set!")
        self.clearScreen()
        # Add ships for computer
        grids[1].grid = self.makeShips(grids[1],0,[1,1,1,1,1])
        turn = 1
        # Lists of coords the computer should shoot next
        compWaitList = [[]]
        # Coords the computer has tried
        compShotList = []
        compSunkShips = []
        compPreviousHits = []
        # While there are ships on both side
        while grids[0].isDefeated(1) == False and grids[1].isDefeated(0) == False:
            # If it is player 1's turn
            if turn == 1:
                print("Player 1's turn to shoot.")
                grids[1].show()
                validMove = False
                while validMove == False:
                    # Get shot input and try to shoot
                    # If shot is not invalid then update the grid
                    shot = self.inputCoord(9,9)
                    potentialGrid = self.shoot(grids[1],0,shot)
                    if potentialGrid != "invalid":
                        grids[1].grid = potentialGrid
                        validMove = True
                    else:
                        continue
                input("Press enter to continue.")
                self.clearScreen()
                print("Current grid for Player 1.")
                grids[1].show()
                # Check game is won or not
                if grids[1].isDefeated(0) == True:
                    self.clearScreen()
                    print("Player 1 wins!")
                    input("Press enter to continue...")
                    self.clearScreen()
                    break
                # If game is not won, tell the players of any full ships they have sunk.
                self.sunkShips = self.getSunkShips(grids[1],0)
                if len(self.sunkShips) >= 1:
                    print("Player 1 has sunk...")
                    for ship in self.sunkShips:
                        print("Ship of length {}.".format(ship))
                else:
                    print("No ships have yet been sunk.")
                input("Press enter for Computer's turn.")
                self.clearScreen()
                turn = 2
            # Computer's turn
            if turn == 2:
                print("Computer's turn to shoot.")
                validShot = False
                # Get a possible x and y coordinate to shoot
                while validShot == False:
                    x = -1
                    y = -1
                    if compWaitList == [[]]:
                        while x < 0 or x > 9:
                            x = random.randint(0,9)
                        while y < 0 or y > 9:
                            y = random.randint(0,9)
                    # Else take the first coord from the waiting list
                    else:
                        if compWaitList[0] != []:
                            x = compWaitList[0][0][0]
                            y = compWaitList[0][0][1]
                            compWaitList[0].pop(0)
                        else:
                            x = compWaitList[1][0][0]
                            y = compWaitList[1][0][1]
                            compWaitList[1].pop(0)
                    alreadyShot = False
                    # If proposed x and y coordinate is in shot list then repeat loop
                    for coord in compShotList:
                        if coord == [x,y]:
                            alreadyShot = True
                            break
                    if alreadyShot == False:
                        validShot = True
                print("Computer is deciding...")
                time.sleep(1)
                # Shoot with coords and also add them to used coords
                compShotList.append([x,y])
                potentialGrid = self.shoot(grids[0],1,[x,y],True)
                print("Computer shot at [{},{}].".format(x,y))
                # If it was a hit then try adding smart coords to wait list
                if potentialGrid[1] == "hit":
                    print("The computer hit!")
                    grids[0].show()
                    # If there has been previous hit of importance and there are still possible hit locations
                    if compPreviousHits != [] and compWaitList != []:
                        # If the number of sunk ship increases, get the sunk length then remove the last n-1 possible perpendicular coords
                        if compSunkShips != self.getSunkShips(grids[0],1):
                            sunkShipLength = self.getSunkShips(grids[0],1)
                            print(compSunkShips)
                            print(sunkShipLength)
                            for length in compSunkShips:
                                if sunkShipLength[0] == length:
                                    sunkShipLength.pop(0)
                            compSunkShips = self.getSunkShips(grids[0],1)
                            compWaitList[0] = []
                            # Move the previous hit to last, to be removed
                            compPreviousHits.append(compPreviousHits[0])
                            compPreviousHits.pop(0)
                            compPreviousHits.append([x,y])
                            del compWaitList[len(compWaitList)-sunkShipLength[0]:]
                            if compWaitList == []:
                                compWaitList.append([])
                            del compPreviousHits[len(compPreviousHits)-sunkShipLength[0]:]
                        # Else find relation of the two coords
                        else:
                            # Set limits to relating to whether they're on the edge and tets relation to last hit
                            if compPreviousHits[0][0] == x:
                                # Add next parallel coord (in relation to the hit and previosuly hit coord) to high priority, and perpendicular coords to lowest priority
                                # This is so there is a higher probability of another hit
                                if compPreviousHits[0][1] < y:
                                    compWaitList += [[[[x+1,y],[x-1,y]]]]
                                    if y != 9:
                                        compWaitList[0] = [[[x,y+1]]] + compWaitList[0]
                                else:
                                    compWaitList += [[[[x+1,y],[x-1,y]]]]
                                    if y != 0:
                                        compWaitList[0] = [[x,y-1]] + compWaitList[0]
                            elif compPreviousHits[0][1] == y:
                                if compPreviousHits[0][0] < x:
                                    compWaitList += [[[x,y-1],[x,y+1]]]
                                    if x != 9:
                                        compWaitList[0] = [[x+1,y]] + compWaitList[0]
                                else:
                                    compWaitList += [[[x,y-1],[x,y+1]]]
                                    if x != 0:
                                        compWaitList[0] = [[x-1,y]] + compWaitList[0]
                            compPreviousHits.append(compPreviousHits[0])
                            compPreviousHits.pop(0)
                            compPreviousHits = [[x,y]] + compPreviousHits
                    else:
                        # Add adjacent coords to the waiting list, depending on position on the grid
                        if x == 0:
                            if y == 0:
                                compWaitList[0] = [[x+1,y]]
                                compWaitList.append([[x,y+1]])
                            elif y == 9:
                                compWaitList[0] = [[x+1,y]]
                                compWaitList.append([[x,y-1]])
                            else:
                                compWaitList[0] = [[x+1,y]]
                                compWaitList.append([[x,y-1],[x,y+1]])
                        elif x == 9:
                            if y == 0:
                                compWaitList[0] = [[x-1,y]]
                                compWaitList.append([[x,y+1]])
                            elif y == 9:
                                compWaitList[0] = [[x-1,y]]
                                compWaitList.append([[x,y-1]])
                            else:
                                compWaitList[0] = [[x-1,y]]
                                compWaitList.append([[x,y-1],[x,y+1]])
                        elif y == 0:
                            compWaitList[0] = [[x-1,y]]
                            compWaitList.append([[x+1,y],[x,y+1]])
                        elif y == 9:
                            compWaitList[0] = [[x-1,y]]
                            compWaitList.append([[x+1,y],[x,y-1]])
                        else:
                            compWaitList[0] = [[x-1,y]]
                            compWaitList.append([[x+1,y],[x,y-1],[x,y+1]])
                    compPreviousHits.append([x,y])
                    grids[0].grid = potentialGrid[0]
                    validMove = True
                else:
                    print("The computer missed!")
                    grids[0].show()
                # Check game is won or not
                if grids[0].isDefeated(1) == True:
                    self.clearScreen()
                    grids[0].show()
                    print("Player 2 wins!")
                    input("Press enter to continue...")
                    self.clearScreen()
                    break
                self.sunkShips = self.getSunkShips(grids[0],1)
                if len(self.sunkShips) >= 1:
                    print("Computer has sunk...")
                    for ship in self.sunkShips:
                        print("Ship of length {}.".format(ship))
                else:
                    print("No ships have yet been sunk.")
                input("Press enter for Player 1's turn.")
                self.clearScreen()
                turn = 1
        return

    # Function takes in a grid, the opposing side, and the coordinates for the shot
    def shoot(self,grid,oSide,shot,isComputer=False):
        # Get value in the coord to be shot
        coordValue = grid.getCoordValue(shot[0],shot[1])
        # If the opponent is the computer
        if oSide == 0:
            # If value is miss or hit, it is an invalid move
            if coordValue == "miss":
                print("You've already shot there! Was a miss!")
                return "invalid"
            elif coordValue == "hit":
                print("You've already shot there! Was a hit!")
                return "invalid"
            # If blank, miss
            elif coordValue == "":
                print("Miss!")
                grid.addPiece([shot],"miss")
                return grid.grid
            # If computer piece, hit
            elif coordValue == 0:
                print("Hit!")
                grid.addPiece([shot],"hit")
                return grid.grid
        elif oSide == 1:
            if isComputer == True:
                if coordValue == "":
                    grid.addPiece([shot],"miss")
                    return [grid.grid,"miss"]
                elif coordValue == 1:
                    grid.addPiece([shot],"hit")
                    return [grid.grid,"hit"]
            else:
                if coordValue == "miss":
                    print("You've already shot there! Was a miss!")
                    return "invalid"
                elif coordValue == "hit":
                    print("You've already shot there! Was a hit!")
                    return "invalid"
                # If shooting at side 2 (own), then it is invalid
                elif coordValue == 2:
                    print("You cannot shoot your own ships!")
                    return "invalid"
                elif coordValue == "":
                    print("Miss!")
                    grid.addPiece([shot],"miss")
                    return grid.grid
                # If opponet is 1 and you shoot 1 then it is hit
                elif coordValue == 1:
                    print("Hit!")
                    grid.addPiece([shot],"hit")
                    return grid.grid
        elif oSide == 2:
            if coordValue == "miss":
                print("You've already shot there! Was a miss!")
                return "invalid"
            elif coordValue == "hit":
                print("You've already shot there! Was a hit!")
                return "invalid"
            # If shooting at side 1 (own), then it is invalid
            elif coordValue == 1:
                print("You cannot shoot your own ships!")
                return "invalid"
            elif coordValue == "":
                print("Miss!")
                grid.addPiece([shot],"miss")
                return grid.grid
            # If opponet is 2 and you shoot 2 then it is hit
            elif coordValue == 2:
                print("Hit!")
                grid.addPiece([shot],"hit")
                return grid.grid

    # Function takes in a grid, and the number of different ships wanted to add
    def makeShips(self,grid,side,shipLengths):
        # Add ships of varying lengths
        for length in range(len(shipLengths)):
            # Adds amount of ships for that length
            for amount in range(shipLengths[length]):
                ship = self.makeShip(grid,length+2)
                grid.addPiece(ship,side)
        return grid.grid

    # Function returns array of coordinates for a ship
    def makeShip(self,grid,length):
        ship = []
        # Randomise orientation
        orientation = random.randint(1,2)
        # Whilst valid ship has not been added
        while True:
            currentShip = []
            # Get root position depending on orientation
            if orientation == 1:
                x = random.randint(0,10-length)
                y = random.randint(0,9)
            elif orientation == 2:
                x = random.randint(0,9)
                y = random.randint(0,10-length)
            # Add coords depending on length
            for i in range(length):
                if orientation == 1:
                    currentShip.append([x+i,y])
                elif orientation == 2:
                    currentShip.append([x,y+i])
            # Test that the coords are not filled already
            validShip = True
            for coord in currentShip:
                if grid.isEmpty(coord[0],coord[1]) == False:
                    # If any coords are filled then the ship is invalid
                    validShip = False
            # If ship is valid then stop trying and return ship coords
            if validShip:
                keepTrying = False
                ship = currentShip
                return ship

    # Function takes in coordinate inputs
    def inputCoord(self,maxX,maxY):
        x = -1
        y = -1
        # While the coordinates are not within grid params
        while x < 0 or x > maxX:
            try:
                x = int(input("Enter X coordinate: "))
            except:
                x = -1
        while y < 0 or y > maxY:
            try:
                y = int(input("Enter Y coordinate: "))
            except:
                y = -1
        return [x,y]

    #Clears the console
    def clearScreen(self):
        print("\n" * 100)

game = Battleships()

Answer: I'll mention some small things to make the code look a little more

    def __init__(self,width,height):
        self.grid = []
        self.pieces = [[0],[1],[2],["miss"],["hit"]]
        for y in range(height):
            row = []
            for x in range(width):
                row.append("")
            self.grid.append(row)

You can write the inner loop with a list comprehension

    row = ["" for x in range(width)]

In addition since self.pieces are never changed from instance to instance, you can move them out of the init

    self.pieces = [[0],[1],[2],["miss"],["hit"]]

    def __init__(self,width,height):
        self.grid = []
        for y in range(height):
            row = ["" for _ in range(width)]
            self.grid.append(row)

---

    for pieceSet in self.pieces:
        if pieceSet[0] == side:
            pieceSet.append(piece)

This is equivalent to a list comprehension (or a filter) that might be easier to read. also reusing peiceSet for both the iterating variable and the list seems weird and possibly buggy

    pieceSet = [piece for peice in self.pieces if pieceSet[0] == side]

---

    for row in self.grid:
        for piece in row:
            if piece == side:
                return False
    return True

This is a good case for the any function

    for row in self.grid:
        if any(piece == side for piece in row):
            return False
    return True

You could reduce this to a single any with two loops inside it, but it gets a bit long and unwieldy.
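For the curious, the single-`any` variant alluded to above can be written with one generator expression walking every cell; whether that reads better than the two-loop version is a matter of taste (this sketch is mine, not from the original answer):

```python
class Grid:
    def __init__(self, width, height):
        self.grid = [["" for _ in range(width)] for _ in range(height)]

    def isDefeated(self, side):
        # Defeated when no cell anywhere still holds a piece of this side.
        return not any(piece == side for row in self.grid for piece in row)

g = Grid(3, 3)
g.grid[1][2] = 1
print(g.isDefeated(1))  # False: a side-1 piece remains
print(g.isDefeated(2))  # True: no side-2 pieces on the board
```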
    for y in range(len(self.grid)):
        line = ""
        for x in self.grid[y]:
            if x == "":
                line += "| - "
            elif x == 0:
                line += "| - "
            elif x == 1:
                line += "| - "
            elif x == 2:
                line += "| - "
            elif x == "miss":
                line += "| X "
            elif x == "hit":
                line += "| O "

The only place y is used is as an index. To quote Raymond Hettinger "There must be a better way". We can use enumerate to keep y, but make the iteration look a bit nicer. The conditions can be shortened a little too, though not everyone would say it is an improvement.

    for y, row in enumerate(self.grid):
        line = ""
        for cell in row:
            if cell in ("", 0, 1, 2):
                line += "| - "
            elif cell == "miss":
                line += "| X "
            elif cell == "hit":
                line += "| O "
        ...

---

    def isEmpty(self,x,y):
        if self.grid[y][x] == "":
            return True
        else:
            return False

Ahhhhhhhhhhhh, I really dislike this as there exists a much nicer (and more intuitive way)

    def isEmpty(self, x, y):
        return self.grid[y][x] == ""

---

    def getCoordValue(self,x,y):
        return self.grid[y][x]

I'm not sure this is useful, anywhere it is used could just directly index the array

---

    while decision < 1 or decision > 3:
        try:
            decision = int(input("What would you like to do: "))
        except:
            decision = 0

This seems like a pretty good candidate for a method.

    def getValueInRange(start, end, message=""):
        """Get a value from a user that must be bound to a range
        start and end are exclusive from this range"""
        choice = start  # something outside the range
        while choice < start or choice > end:
            try:
                choice = int(input(message))
            except ValueError:
                pass
        return choice

There is no need to overwrite choice with the default again and again if it fails. Also a more explicit error makes debugging and reading easier. Finally I only catch on ValueError as something unexpected would be better off thrown back up rather than silently dying here.
    while True:
        currentShip = []
        # Add coords depending on length
        for i in range(length):
            if orientation == 1:
                currentShip.append([rootCoord[0]+i,rootCoord[1]])
            elif orientation == 2:
                currentShip.append([rootCoord[0],rootCoord[1]+i])
        # Test that the coords are not filled already
        validShip = True
        for coord in currentShip:
            if grid.isEmpty(coord[0],coord[1]) == False:
                # If any coords are filled then the ship is invalid
                validShip = False
                print("There are already ships existing there!")
                return self.addShip(grid,side,length)
        # If ship is valid then stop trying and return ship coords
        if validShip:
            ship = currentShip
            return ship

There is a lot going on here, so I'll show the improvements I would make and explain them after

    currentShip = []
    # Add coords depending on length
    for i in range(length):
        if orientation == 1:
            attemptCoords = [rootCoord[0]+i, rootCoord[1]]
        elif orientation == 2:
            attemptCoords = [rootCoord[0], rootCoord[1]+i]
        if not grid.isEmpty(attemptCoords[0], attemptCoords[1]):
            print("There are already ships existing there!")
            return self.addShip(grid, side, length)
        currentShip.append(attemptCoords)
    return currentShip

There is no need for a while True. Before adding the coords to the currentShip, we can check if they are in an empty part of the grid. If they aren't we abort early. The boolean validShip doesn't actually do anything, if you reach that part of the code you know it has to be True. That means it can be removed

    for shipCoord in grid.pieces[side][ship]:
        # If the coordinate can be found in the hit list then that part has been sunk
        sunk = False
        for hitCoord in range(1,len(grid.pieces[4])):
            if shipCoord == grid.pieces[4][hitCoord][0]:
                sunk = True
                break
        sunkStatus.append(sunk)
    # Go through the sunk parts and if all of it is sunk then the ship is sunk
    sunkShip = True
    for status in sunkStatus:
        if status == False:
            sunkShip = False
            break
    if sunkShip == True:
        sunkShips.append(ship+1)

It is never necessary to have a condition with == True or == False.
This is another good place for any and all to appear

    for shipCoord in grid.pieces[side][ship]:
        # If the coordinate can be found in the hit list then that part has been sunk
        hitShips = grid.pieces[4]
        sunk = any(shipCoord == hitShips[hitCoord][0] for hitCoord in range(1, len(hitShips)))
        sunkStatus.append(sunk)
    # Go through the sunk parts and if all of it is sunk then the ship is sunk
    sunkShip = all(sunkStatus)
    if sunkShip:
        sunkShips.append(ship+1)

I will admit the code right now is not the most readable, but I think that is down to variable names now, as opposed to what is going on in the code. The rest of the code looks kinda dense right now, I would suggest going through the above suggestions and any other replies you get, make some improvements and post again with the next iteration of code. I hope to see you post an update soon, all the best!
{ "domain": "codereview.stackexchange", "id": 23365, "tags": "python, algorithm, object-oriented, game, array" }
What different lines of reasoning and traditions lead to the conclusion that Software Engineering is or isn't part of Computer Science?
Question: Background: Some people consider Software Engineering as a branch of Computer Science, while others consider that they are, or should be, separate. The former stance seems to be well presented in written works. On Wikipedia, Software Engineering is classified as Applied Computer Science, along with, e.g., Artificial Intelligence and Cryptography. The ACM Computing Classification system places SE under Software, along with, e.g., Programming Languages and Operating Systems. CSAB has also considered SE as part of Computer Science, and considered that [...] it includes theoretical studies, experimental methods, and engineering design all in one discipline. [...] It is this close interaction of the theoretical and design aspects of the field that binds them together into a single discipline. [...] Clearly, the computer scientist must not only have sufficient training in the computer science areas to be able to accomplish such tasks, but must also have a firm understanding in areas of mathematics and science, as well as a broad education in liberal studies to provide a basis for understanding the societal implications of the work being performed. While the above seems to reflect my own view, there is also the stance that the term Computer Science should be reserved for what is sometimes called Theoretical Computer Science, such as Computability Theory, Computational Complexity Theory, Algorithms and Data Structures, and that other areas should be split off into their own disciplines. In the introductory courses I took for my CS degree, the core of CS was defined via the questions "what can be automated?" (Computability Theory) and "what can be automated efficiently?" (Computational Complexity Theory). The "how" was then explored at length in the remaining courses, but one could well consider SE being so far from these core questions that it shouldn't be considered part of CS. 
Even here on CS.SE, there has been debate about whether SE questions are on-topic, reflecting the problematic relationship between CS and SE. Question: I'm wondering what lines of reasoning and traditions within Computer Science might lead to one conclusion or the other: that SE is, or should be, part of CS or that it is not. (This implies that answers should present both sides.) Answer: (I did some extensive searching and found material that answers my question. I liked Patrick87's answer, but I found this to be more complete.) The answer to the question lies in a careful examination of the philosophy of Computer Science. In Computer Science, three intellectual traditions meet (or collide, if you wish) in a single discipline: the theoretical tradition; the empirical tradition; and the engineering tradition. The theoretical tradition concerns itself with creating hypotheses or theorems, and proving them in a mathematical fashion. Its aim is the construction of coherent axiomatic systems of thought. The empirical tradition concerns itself with forming hypotheses, models, and predictions, collecting data from experiments, and analysing the results. Its aim is to investigate and explain phenomena. Finally, the engineering tradition concerns itself with stating requirements and specifications, and with designing, implementing, and testing systems based on these requirements and specifications. Its aim is to construct systems and solve concrete instances of problems. Each of these traditions comes with a set of assumptions about the aims and means of scientific inquiry. The traditions are not unique to Computer Science; they are general traditions that can be found to differing degrees in other disciplines. Perhaps the clearest examples are mathematics (theoretical tradition), physics (empirical tradition), and construction engineering (engineering tradition). Computer Science, though, operates in the intersection of all three traditions. 
However, depending on one's particular focus within Computer Science, and one's familiarity with other parts of Computer Science, one might emphasize one of these traditions to the degree that the other two appear alien. As noted in Patrick87's answer, the educational setting can emphasize a certain intellectual tradition, which may lead someone to a certain kind of demarcation of Computer Science that either includes or does not include Software Engineering. Similarly, one may later adopt a view of science that includes or excludes one or more of the three traditions, or parts of them. For example, one may consider only the theoretical and empirical traditions to fulfil one's criteria for "science", and consider engineering non-scientific. One may also consider the three traditions to be on a value continuum, with one tradition being superior to the others (e.g. valuing the theoretical tradition most, the empirical tradition less, and the engineering tradition least). So the lines of reasoning are rooted in the abovementioned traditions. Based on this, the answer to the question is that considering Software Engineering as part or not as part of Computer Science stems from one's understanding of science in general, and from one's understanding of the philosophy of Computer Science in particular. The following articles go into considerable depth on this issue and summarise a lot of the viewpoints that have been put forward.

Tedre, Matti (2011) Computing as a Science: A Survey of Competing Viewpoints. Minds & Machines 21(3):pp.361-387.

Tedre, Matti (2009) Computing as Engineering. Journal of Universal Computer Science 15(8):pp.1642-1658

Tedre, Matti (2007) Know Your Discipline: Teaching the Philosophy of Computer Science. Journal of Information Technology Education 6(1):pp.105-122.
{ "domain": "cs.stackexchange", "id": 479, "tags": "software-engineering" }
Index theorem and UV and IR face of chiral anomaly
Question: The index theorem in a theory with fermions and gauge fields implies a relation between the index $n_{+}-n_{-}$ of the Dirac operator and the integral $\nu$ over the Chern characteristic class of the EM field: $$ \tag 1 n_{+} - n_{-} = \nu $$ Let's focus on 4D. The index theorem is obtained by computing the anomalous Jacobian $$ J[\alpha] = \text{exp}\left[-2i\alpha \sum_{n = 1}^{N = \infty}\int d^{4}x_{E}\psi^{\dagger}_{n}\gamma_{5}\psi_{n}\right] $$ Here $n$ labels the eigenfunctions of the Dirac operator $$ D_{I}\gamma_{I}, \quad D_{I} \equiv i\partial_{I} - A_{I} $$ On the one hand, this is a badly defined quantity, $$ J[\alpha] \simeq \text{exp}\left[i\alpha \lim_{x \to y}\text{Tr}(\gamma_{5})\delta (x - y)\right], $$ so it requires UV regularization. The explicit form of this regularization is fixed by the requirements of gauge and ''euclidean'' invariance, leading to the introduction of a function $f\left( \left(\frac{D_{I}\gamma_{I}}{M}\right)^{2}\right)$, with $M$ being the regularization parameter. On the other hand, by using the regularization, it is not hard to show that the exponent is equal to $-2i\alpha (n_{+}-n_{-})$. Since this number gives the difference of zero modes, it depends only on IR properties of the theory. Moreover, $\nu$ is also determined by the behavior of the gauge fields at infinity, making it an IR-defined number. Because of this puzzle, I want to ask: does the index theorem provide a relation between the IR nature (zero modes, large-scale topology) and the UV nature (regularization required) of the chiral anomaly? More precisely, I know the "spectral flow" interpretation of the chiral anomaly, according to which the anomaly is the collective motion of chiral charge from the UV world to the IR one. Does the index theorem provide this interpretation? Answer: The index theorem implies that in a given topological sector $\nu$ there are $n_L,n_R$ L/R zero modes such that $n_L-n_R=\nu$. These are solutions of the 4D euclidean Dirac equation $\gamma\cdot D\psi=0$.
In particular, $\psi$ must be normalizable in 4D. Now (for simplicity) go to temporal gauge and look at the associated Dirac equation $\partial_t\psi = i\alpha\cdot D\psi$. For smoothly varying fields the 4D solutions must correspond to adiabatic solutions of the type $$ \psi(x,t) = \psi(x,-\infty) \exp(-\int^t_{-\infty}\epsilon(t') dt')\, . $$ Now the only way that $\psi$ is normalizable is that $\epsilon$ changes sign as $t$ goes from $-\infty$ to $+\infty$. This means that the spectral flow of the Dirac Hamiltonian $H$ is equal to the chiral imbalance of the 4D zero modes, which by the index theorem is determined by 4D topology.
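One standard intermediate step, worth making explicit (it is implicit in the question's regulated Jacobian; assuming a regulator with $f(0)=1$ that falls off at infinity): the nonzero eigenmodes of the hermitian euclidean Dirac operator pair up under $\gamma_5$ and drop out of the regulated trace, so only the zero modes survive:

```latex
% {gamma_5, D-slash} = 0 flips the eigenvalue of each nonzero mode:
i\slashed{D}\,\psi_n = \lambda_n \psi_n
  \quad\Longrightarrow\quad
i\slashed{D}\,(\gamma_5 \psi_n) = -\lambda_n\,(\gamma_5 \psi_n)

% so for lambda_n != 0 the modes psi_n and gamma_5 psi_n are orthogonal:
\lambda_n \neq 0
  \quad\Longrightarrow\quad
\int d^{4}x_{E}\, \psi_n^{\dagger}\gamma_5\psi_n = 0

% hence in the regulated sum only the zero modes contribute:
\sum_n \int d^{4}x_{E}\, \psi_n^{\dagger}\gamma_5
  f\!\left(\frac{\lambda_n^{2}}{M^{2}}\right)\psi_n
  = (n_{+} - n_{-})\, f(0) = n_{+} - n_{-}
```

This is exactly the sense in which the UV-regulated Jacobian collapses to an IR count of zero modes.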
{ "domain": "physics.stackexchange", "id": 35401, "tags": "quantum-field-theory, renormalization, topology, quantum-anomalies, chirality" }
Question about Compton scattering experiment
Question: I learnt that Compton scattering means an incoming photon hits an electron at rest; the electron goes off at a deviated angle $\phi$ and a photon is given off at an angle $\theta$. I suppose that all photons have the same properties, so is the resultant photon the same photon as the original one (like reflection but with an increase in wavelength)? Or does the electron absorb the photon and then release a new photon? Answer: It is not only semantic, but more a question of physical point of view. In the usual (relativistic) derivation of the Compton shift in wavelength it is depicted as a collision between two "particles", so the answer would be "the same photon". However, there is no mandatory need of photons to interpret the Compton effect. If you quantize the matter (electron) and not the EM field (photon), the "collision" is a regular Bragg reflection or diffraction of the EM wave on the "grating" formed by the electronic wave function (superposition of incoming and outgoing electron). In this point of view, the frequency of the EM wave remains unchanged in the frame where this matter standing wave is stationary, and the frequency shift in the lab frame simply results from the (relativistic) Doppler effect. The answer would be "no photon at all". Furthermore, in the frame of the fully quantized theory, you could try to draw the Feynman diagram associated to this effect, which (at lowest order) would imply a vertex with two photon lines. Such a vertex is allowed in the frame of Coulomb gauge quantization, but not in the, though physically equivalent (at least at moderate energy), Lorentz gauge quantization, which will describe it as a two-step process: absorption plus emission. Hence you will answer "the same photon" in the former point of view, and "another photon" in the latter.
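Whichever picture one adopts, the measurable content both must reproduce is the same wavelength shift; for reference, the standard Compton formula (obtained from 4-momentum conservation for a photon scattering off an electron at rest, not derived in the answer above) is:

```latex
% Compton shift for a photon scattered through angle theta:
\lambda' - \lambda = \frac{h}{m_e c}\left(1 - \cos\theta\right),
\qquad
\frac{h}{m_e c} \approx 2.43 \times 10^{-12}\ \text{m (the electron Compton wavelength)}
```

This is the "increase in wavelength" the question refers to: it vanishes at $\theta = 0$ and is largest for backscattering, $\theta = \pi$.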
{ "domain": "physics.stackexchange", "id": 76130, "tags": "quantum-mechanics" }
Should one log transform discrete numerical variables?
Question: I am working on a Linear Regression problem and one of the assumptions of a Linear Regression model is that the features should be Normally Distributed. Hence to convert my non-linear features to linear, I am performing several transformations like log, box-cox, square-root transformation etc. I have both discrete and continuous numerical variables (an example of each along with their histograms and qq plot is given):

[Figure: continuous variable histogram and QQ plot]

[Figure: discrete variable histogram and QQ plot]

From the qq plot of the continuous variable, we can see there are points that do not lie on the red line and hence it needs some kind of transformation. So I might try different transformations to see which results in a Normal Distribution and hence make the points fall on the red line. But what about the discrete variable? From the qq plot of the discrete variable, all the points are forming a horizontal line so will transforming them make them fall on the red line? Should I proceed the same as I do in the case of a continuous variable, or is there some other method? Answer: First of all, in standard linear regression there is no assumption of normality for the features. More than that, standard linear regression, known also as fixed effects linear regression, is a linear regression model where the input variables are given, so they are not random variables. Under that model only the target variable is a random variable $$y = X\beta + \epsilon, \epsilon \sim \cal{N}(0,\sigma^2) $$ There are quite a few assumptions for a linear model, like independent and homoskedastic noise, additive features and so on. There are times you need to transform features to meet those assumptions. I enumerate some cases I encountered:

Your model is linear (yes, I know, it sounds redundant, but it is not). The point is that the input variables should be additive to form hyperplanes (in 2 dimensions this is a line), otherwise your model doesn't work.
Imagine you collect observations about gravitational attraction, which comes from the formula: $F_G = -\frac{G m_1 m_2}{r^2}$ (inverse square law - Force is proportional to the product of the masses and inversely proportional to the square of the distance between them). And you want to regress $F_G$ as a linear model from input variables $m_1$, $m_2$, $r$ ($G$ is a constant). Obviously modeling that like $F_G = \beta_0 + \beta_1 m_1 + \beta_2 m_2 + \beta_3 r + \epsilon$ will not work at all, it will be wild. But taking logarithms of all variables involved, your data will be linearly additive. Most of the time you do not know the laws which govern your data, but with careful inspection of the relations between your input variables you could eventually get them to be in good shape for a linear model. For that you should see how multiple inputs correlate and contribute together, not each feature independently.

homoskedastic noise - this means constant variance of the error, or in plain English, the variance of the error should not depend on the input variables. Imagine that your output variable is a volume, something like $v^d$, where $d$ is the dimensionality of the space. Now an error of 1 centimeter for a cube of side length 10 is much lower than the error in volume for a cube of side 100. Basically the observed variance would increase significantly for large values of the target variable. Again, most of the time you might not know "laws" about your variables, but graphical inspection can help a lot to isolate such kinds of deviations, and adjustments could be made. In the example I gave, again a logarithmic transformation (even box-cox, power transform, etc) could help you a lot.

discrete variables - here you have different options. You could have an ordinal variable here like temperature low, medium or high. Those cases could be encoded like a numerical column, but you have to pay attention to the values you have for each level.
Those values will have implications for the values of the coefficients, and the values you give should make some sense. If your discrete variables encode mutually exclusive factors like eye color (brown, blue, green, whatever), you had better encode those as binary variables (one less than the number of levels, since otherwise the regression will not work due to the impossibility of matrix inversion). Now the above discussion covers cases when your discrete variables are given as factors (possibly text). But this also applies to numeric encodings. If the color is encoded as 1, 2, 3, .., instead of strings, you should transform that into binary variables if they are nominal factors. If you have an ordinal variable you perhaps could leave it as such, other than the case when you have a clue regarding better, more appropriate values. I will try to give here another example. Suppose you have encoded numerically the magnitude of an earthquake and you have something like 1 for (1-3 Richter), 2 for (4-6 Richter), 3 for (7-9 Richter) and 4 for a catastrophic one. You could maintain the same values, or maybe you could try to use the fact that the Richter scale is an exponential scale, where magnitude 3 is 10 times smaller than magnitude 4 (or similar, I do not remember precisely, but you can get the idea). In that case you could substitute those values with $10^i$ instead of $i$ for a better alignment with the linear model. Some things could not be repaired. For example, if your data comes from a time series where you have a clear and strong dependence of observations on past observations, that kind of problem could not be easily solved, and perhaps you should take another approach anyway, since fixed effects linear models are not recommended for such cases. As a conclusion, you could study the assumptions of linear regression more, try to understand what you could do to check, or at least to inspect and study, whether those assumptions are met, and see if the data could be corrected reasonably.
This should be your target, to make your data aligned with the linear model assumptions, if this is the model you want to use. All transformations of data should be governed by this idea. And of course, please remember what you have done to transform the data, to apply the same to future predictions. Eventually to invert the target transformations if you want results in the original target distribution. [later edit]: I added some ideas on discrete variables.
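The gravity example above can be sketched numerically (variable names and the use of plain least squares are my own; I set $G=1$ and fit the magnitude of the force so the logarithm is defined). A linear fit on the raw variables leaves large residuals, while the same fit on the logged variables is essentially exact, because $\log F = \log m_1 + \log m_2 - 2\log r$ is linear.

```python
import numpy as np

rng = np.random.default_rng(0)
m1 = rng.uniform(1, 100, 500)
m2 = rng.uniform(1, 100, 500)
r = rng.uniform(1, 10, 500)
F = m1 * m2 / r**2          # magnitude of F_G, in units where G = 1

def rms_residual(X, y):
    """RMS residual of an ordinary least-squares fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sqrt(np.mean((A @ coef - y) ** 2)))

raw = rms_residual(np.column_stack([m1, m2, r]), F)
logged = rms_residual(np.column_stack([np.log(m1), np.log(m2), np.log(r)]),
                      np.log(F))
# log F = log m1 + log m2 - 2 log r holds exactly, so 'logged' is ~0
print(raw, logged)
```

The same inspection works when you do not know the law: plot residuals against each input and against fitted values, and try monotone transformations until the relation looks additive.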
{ "domain": "datascience.stackexchange", "id": 10271, "tags": "python, linear-regression, feature-engineering, transformation" }
What do we mean by polynomially upper bounded and lower bounded
Question: I just came across this asymptotic bound: $(\log n)!= \Theta \left(n^{\log \log n}\right)$ Which had the following remark: Hence, polynomially lower bounded but not upper bounded. I found this here: https://gateoverflow.in/12928/%24-log-and-log-log-are-polynomially-bounded-anybody-can-prove I am confused about how we can conclude that it is polynomially lower bounded but not upper bounded. Could someone please help me understand this. Answer: A polynomial is of the form $n^{c}$ where $c$ is a constant. Now let $f(n)$ be a constant function. Being a constant function, we have $f(n) \in O(\lg n)$ $\implies \lg(f(n)) \in O(\lg(\lg n))$. Now $\lg(f(n))$ is yet another constant function, so let it be $g(n)$. From the previous implication we have $g(n) \in O(\lg(\lg n))$ $\implies n^{g(n)} \in O(n^{\lg(\lg n)})$ $\implies n^{\lg(\lg n)} \in \Omega(n^{g(n)})$. But $g(n)$ is after all a constant, so let it be $c$; then from the previous implication, $n^{\lg(\lg n)} \in \Omega(n^{c})$. Now $n^{c}$ is a polynomial and hence $n^{\lg(\lg n)}$ is polynomially lower-bounded. We cannot have it the other way around, i.e. $n^{\lg(\lg n)}$ upper bounded by a polynomial, simply because of the observation $n^{c} \in O(n^{\lg(\lg n)})$, as clearly $\lg(\lg n)$ shoots up faster (no doubt) than $c$, a constant.
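For intuition about where the bound itself comes from (my sketch, not part of the original answer), compare the logarithms of the two sides using Stirling's approximation $\lg(m!) = \Theta(m \lg m)$ with $m = \lg n$:

$$\lg\bigl((\lg n)!\bigr) = \Theta(\lg n \cdot \lg\lg n), \qquad \lg\bigl(n^{\lg\lg n}\bigr) = \lg n \cdot \lg\lg n,$$

so the two sides agree up to constant factors in the exponent, i.e. $(\lg n)! = n^{\Theta(\lg\lg n)}$, and the argument above then shows this dominates every fixed polynomial $n^{c}$ while being dominated by none.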
{ "domain": "cs.stackexchange", "id": 16349, "tags": "algorithms, asymptotics" }
ground state of spin chain with $Z_i X_{i+1} Z_{i+2}$ interaction
Question: the problem comes from the transverse field Ising model, with an extra 3-spin interaction term $$H=H_0+H_1+H_2=-h\sum_{i=1}^{N}X_i -\lambda_1 \sum_{i=1}^{N-1}Z_i Z_{i+1}-\lambda_2 \sum_{i=1}^{N-2}Z_i X_{i+1} Z_{i+2}$$ Now, I am only interested in the extreme cases of $H_0, H_1, H_2$ separately. The ground states of $H_0$ and $H_1$ are clear to us: $H_0$ has the unique ground state $|\rightarrow\rightarrow\rightarrow\rightarrow\rightarrow\rightarrow\rightarrow\rangle$; $H_1$ has the doubly degenerate ground states $|\uparrow\uparrow\uparrow\uparrow\uparrow\uparrow\uparrow\uparrow\rangle$ and $|\downarrow\downarrow\downarrow\downarrow\downarrow\downarrow\downarrow\downarrow\rangle$. My question is: what are the ground states of $H_2=-\lambda_2 \sum_{i=1}^{N-2}Z_i X_{i+1} Z_{i+2}$ in the spin representation? What I know: the degeneracy is 4, the ground state energy is $E_{H_2}=-\lambda_2 (N-2)$, and excitations all have energy $2\lambda_2$. Although the Jordan-Wigner transformation can diagonalize the entire problem in the fermion picture, the ground state is not clear in the spin representation. I am expecting the ground state of $H_2$ to have some non-trivial entanglement structure. Answer: This Hamiltonian is known in quantum information as the cluster state Hamiltonian (though usually with some additional boundary terms). Its ground state, the cluster state, is a resource for measurement based quantum computing (in two dimensions). The cluster state exhibits symmetry protected topological (SPT) order, that is, it has a non-trivial entanglement structure and exhibits hidden long-range order witnessed by string order parameters, and it has been found that this SPT order protects the ability to perform measurement based computation. The SPT order also relates to the 4-fold degenerate ground state with open boundary conditions. 
To answer your direct question, the ground state of the cluster state Hamiltonian is obtained by starting from $$ |\phi_\ell\rangle|+\rangle|+\rangle\cdots|+\rangle|\phi_r\rangle $$ with arbitrary boundary conditions $|\phi_\bullet\rangle$, and applying a Controlled-Z operation between all adjacent sites. You can find a lot of papers on the topic when searching for the corresponding keywords (cluster states, SPT, measurement based computation). If you ask more specific questions, I can try to answer them more specifically.
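This construction is easy to check numerically for a small chain (a sketch of mine, with $N=4$, $\lambda_2=1$, and boundary states $|+\rangle$): prepare $|{+}{+}{+}{+}\rangle$, apply CZ between neighbouring sites, and verify the result is an eigenstate of $H_2$ with energy $-(N-2)$.

```python
import numpy as np

N = 4
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def op(ops):
    """Tensor product acting with ops[q] on qubit q, identity elsewhere."""
    m = np.array([[1.]])
    for q in range(N):
        m = np.kron(m, ops.get(q, I2))
    return m

# |+>^N, then CZ on each neighbouring pair: the amplitude picks up a
# factor -1 whenever both qubits of the pair are 1
psi = np.full(2**N, 1.0 / np.sqrt(2**N))
for q in range(N - 1):
    for b in range(2**N):
        if (b >> (N - 1 - q)) & 1 and (b >> (N - 2 - q)) & 1:
            psi[b] *= -1.0

# H2 = -(Z_0 X_1 Z_2 + Z_1 X_2 Z_3)   (open chain, lambda_2 = 1)
H2 = -(op({0: Z, 1: X, 2: Z}) + op({1: Z, 2: X, 3: Z}))
print(np.allclose(H2 @ psi, -(N - 2) * psi))  # True
```

The check works because conjugating $X$ by CZ gives $X\otimes Z$, so the product of CZs maps the stabilizers $X_i$ of $|+\rangle^{\otimes N}$ onto the $Z_{i-1}X_iZ_{i+1}$ terms of the Hamiltonian.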
{ "domain": "physics.stackexchange", "id": 41619, "tags": "quantum-information, spin-models, ground-state, spin-chains" }
Why do the longitudinal acoustic and optical branches have the same energy (and thus frequency)?
Question: Context: Solid state physics, lattice dynamics. Question: At point $X$ the LO and LA branches have the same energy. Is this something that can be expected before doing the experiment, or is it just a coincidence? Answer: This is expected. For simplicity, consider a one dimensional lattice with two atoms per unit cell; let's call them A and B. This misses some details of the three-dimensional Si structure, but it gets to the kernel of the argument. First look at what is happening at the $\Gamma$ point. For the LO branch, each "molecule" in the lattice is vibrating in phase with all the others, in particular with its nearest neighbor. Let me try to depict the displacements of the atoms in the first four unit cells. $<\, > \;\; <\, > \;\; <\, > \;\; < \,>\;\;\cdots$ Each symbol in this diagram represents the displacement of one atom: first A: left, first B: right, second A: left, second B: right, and so on. The LA phonon at $\Gamma$ would be represented this way: $> > \;\; > > \;\; > > \;\; > > \;\; > > \cdots $ This diagram represents the displacements: first A: right, first B: right, second A: right, second B: right, and so on. None of the bonds are displaced from their equilibrium length, so the energy of the LA mode at $\Gamma$ is zero, as shown in the dispersion relation. Now let's look at what happens at the zone boundary. Here, the phase of the displacement of an atom is the negative of that of its nearest neighbors. For the LO phonon at the $X$ point: $<\,>\;\;>\,<\;\;<\,>\;\;>\,<\;\cdots$ Look at the first unit cell. The first B and the second A move together, as do the second B and third A, but in the opposite direction. And the LA phonon at the $X$ point: $>\,>\;\;<\,<\;\;>\,>\;\;<\,<\;\;\cdots$ Look at the first unit cell. The first A and the first B move together, as do the second A and second B, but in the opposite direction. The patterns of motion for the LA and LO phonons at $X$ are nearly the same! 
Not exactly the same, because for clarity I've chosen a model where A and B are different, with inter-molecular spacing different from intra-molecular spacing. Things are different in silicon because all the species are the same, and the members of the unit cell don't line up one-dimensionally, but things work out very much like this model, and the patterns of motion for the LA and LO phonons at $X$ are exactly the same!
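The one-dimensional picture can also be checked against the textbook dispersion of a diatomic chain (my own sketch, nearest-neighbour springs $K$, masses $m_1, m_2$): the two branches at the zone boundary $ka=\pi$ are $\omega^2_\pm = K(1/m_1 + 1/m_2) \pm K\,|1/m_1 - 1/m_2|$, so with equal masses the LA and LO frequencies coincide there, while unequal masses open a gap.

```python
import numpy as np

def branch_freqs(K, m1, m2, ka):
    """Squared frequencies (acoustic, optical) of a 1D diatomic chain
    with nearest-neighbour spring constant K."""
    s = 1.0 / m1 + 1.0 / m2
    disc = np.sqrt(s**2 - 4.0 * np.sin(ka / 2) ** 2 / (m1 * m2))
    return K * (s - disc), K * (s + disc)

K = 1.0
ac, opt = branch_freqs(K, 1.0, 2.0, np.pi)   # unequal masses: gap at X
print(opt - ac)                              # positive gap

ac, opt = branch_freqs(K, 1.0, 1.0, np.pi)   # equal masses: branches touch
print(opt - ac)                              # ~0
```

With equal masses the square root vanishes at $ka=\pi$, which is the algebraic version of "the two displacement patterns become the same motion".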
{ "domain": "physics.stackexchange", "id": 12993, "tags": "solid-state-physics" }
In ROS, is it possible to do dynamic reconfigure with "nested" parameters?
Question: Not positive I have the terminology right, but the idea is that I have a ROS 1 node that can have multiple "layers" of namespaces for some of the parameters. So the parameter list is something like: /perception_node/map_size /perception_node/sensor_1/max_range /perception_node/sensor_2/max_range It doesn't have to be set up this way, the key here is that there are multiple "sensor" objects that have the same parameter names, but can have different values. And the sensor names themselves should ideally be configurable from the launch file or a YAML file. I'm able to get it up and running so that it all initializes correctly, I can have a sensor struct like this: struct Sensor { string sensor_name; // < This gets set in the constructor float max_range; void configure(ros::NodeHandle & nodeHandle) { max_range = nodeHandle.param(sensor_name + "/max_range"); } }; And that works, but I'd really like to use dynamic reconfigure with this setup too. I've used dynamic reconfigure before on other projects, so I know the basics of creating a ".cfg" file and everything. But dynamic reconfigure doesn't seem to like parameter names with "/" symbols in them, at least when I tried the following it failed to compile: gen = ParameterGenerator() gen.add("sensor_1/max_range", float_t ...etc.... My initial idea was that since the ".cfg" file is (I think) really a python file, I could do something clever here and read the sensor names from a configuration file rather than hard coding it like in the above example. I also tried using groups, but it looks like you're not allowed to use the same parameter name more than once even if you do this: sensor_1_params = gen.addGroup("sensor_1") sensor_1_params.add("max_range", float_t ...etc... sensor_2_params = gen.addGroup("sensor_2") sensor_2_params.add("max_range", float_t ...etc... #Nope! 
This fails to compile too. The only thing I can think of right now is to get rid of the slashes, so that the parameters are called something like "sensor_1_max_range" and "sensor_2_max_range" and so on. The issue being that it would make the config callback kind of ugly, because on the C++ side I'd have to do something like: if (sensor_name == "sensor_1") { max_range = config.sensor_1_max_range; } else if (sensor_name == "sensor_2") { max_range = config.sensor_2_max_range; } // And many more lines... I also feel like this kind of defeats the purpose of having the sensor names in a configuration file, because now any time we want to add or remove sensors we need to edit the configuration file AND the C++ code. So, I guess I'm wondering: is it possible to actually do something like what I want? Have people had to deal with this before? Or am I better off restructuring things? Answer: Yes, multiple sets of dynamic_reconfigure parameters are supported for a single ROS node. You need to create a Server for each path that has this kind of parameter. You can reuse a .cfg class name for different paths if you want to. An example is move_base, which provides dynamic parameters for multiple items within its /move_base namespace. Each path is associated with a separate dynamic_reconfigure::Server object. The following lines of code are from file dwa_planner_ros.cpp in the navigation.git repo, where name is a relative path like "DWAPlannerROS": ros::NodeHandle private_nh("~/" + name); dsrv_ = new dynamic_reconfigure::Server<DWAPlannerConfig>(private_nh); Take a look at the rest of that file to get the full picture.
{ "domain": "robotics.stackexchange", "id": 38589, "tags": "ros, dynamic-reconfigure" }
MOs of Benzene using group theory
Question: I am trying to analyze the $\sigma$ orbitals of the benzene molecule using group theory, doing the same thing that is done for the $\pi$ orbitals. The symmetry group of benzene is $D_{6h}$; considering the $sp^{2}$ orbitals of benzene and applying the symmetry operations to these orbitals, I can find the characters of the representation by which they transform, and I can decompose this representation into the irreducible representations of the group. Doing this, I found $$\Gamma = 2 A_{1g} \oplus 2E_{2g} \oplus B_{1u} \oplus B_{2u} \oplus 2E_{1u} $$ Now, I can construct the projection operators and with them I get the symmetrized orbitals for each representation. The molecular orbitals will be the linear combinations of these orbitals. My question is: to which function should I apply the projection operators? To the $sp^{2}$ functions? Answer: Your decomposition is for the hybrid orbitals that point from carbon to carbon, omitting the ones that point out towards the hydrogens, so you really want to project starting from one of the hybrid orbitals. 
First off we correct your decomposition: $$\Gamma=A_{1g}\oplus A_{2g}\oplus B_{1u}\oplus B_{2u}\oplus2 E_{1u}\oplus 2E_{2g}$$ We only need half the operations of $D_{6h}$ because all the hybrid orbitals transform the same under $\sigma_h$, so we write out the matrices for the group operations of the subgroup $D_6$ and a set of matrix representations for each irreducible representation and the result of each group operation acting on $\phi_1$, first half: $$\begin{array}{c|cccccc}R&E&C_6&C_6^2&C_6^3&C_6^4&C_6^5\\\hline A_{1g}&1&1&1&1&1&1\\ A_{2g}&1&1&1&1&1&1\\ B_{1u}&1&-1&1&-1&1&-1\\ B_{2u}&1&-1&1&-1&1&-1\\ E_{1u}&\begin{bmatrix}1&0\\0&1\end{bmatrix}&\begin{bmatrix}\frac12&-\frac{\sqrt3}2\\\frac{\sqrt3}2&\frac12\end{bmatrix}&\begin{bmatrix}-\frac12&-\frac{\sqrt3}2\\\frac{\sqrt3}2&-\frac12\end{bmatrix} &\begin{bmatrix}-1&0\\0&-1\end{bmatrix}&\begin{bmatrix}-\frac12&\frac{\sqrt3}2\\-\frac{\sqrt3}2&-\frac12\end{bmatrix}&\begin{bmatrix}\frac12&\frac{\sqrt3}2\\-\frac{\sqrt3}2&\frac12\end{bmatrix}\\ E_{2g}&\begin{bmatrix}1&0\\0&1\end{bmatrix}&\begin{bmatrix}-\frac12&-\frac{\sqrt3}2\\\frac{\sqrt3}2&-\frac12\end{bmatrix}&\begin{bmatrix}-\frac12&\frac{\sqrt3}2\\-\frac{\sqrt3}2&-\frac12\end{bmatrix} &\begin{bmatrix}1&0\\0&1\end{bmatrix}&\begin{bmatrix}-\frac12&-\frac{\sqrt3}2\\\frac{\sqrt3}2&-\frac12\end{bmatrix}&\begin{bmatrix}-\frac12&\frac{\sqrt3}2\\-\frac{\sqrt3}2&-\frac12\end{bmatrix}\\ R\phi_1&\phi_1&\phi_3&\phi_5&\phi_7&\phi_9&\phi_{11}\end{array}$$ Ummm... yeah, I assume the $x$-axis points right in your illustration and the $y$-axis points up. Also I number the hybrid orbitals $1$ to $12$ counterclockwise starting with the one just above the $x$-axis in the first quadrant. I will label the dihedral $C_2$ rotation axes $a$ to $f$ with $a$ parallel to the $x$-axis counterclockwise in $30°$ increments. 
So now we can produce the second half of our table: $$\begin{array}{c|cccccc}R&C_{2a}&C_{2b}&C_{2c}&C_{2d}&C_{2e}&C_{2f}\\\hline A_{1g}&1&1&1&1&1&1\\ A_{2g}&-1&-1&-1&-1&-1&-1\\ B_{1u}&1&-1&1&-1&1&-1\\ B_{2u}&-1&1&-1&1&-1&1\\ E_{1u}&\begin{bmatrix}1&0\\0&-1\end{bmatrix}&\begin{bmatrix}\frac12&\frac{\sqrt3}2\\\frac{\sqrt3}2&-\frac12\end{bmatrix}&\begin{bmatrix}-\frac12&\frac{\sqrt3}2\\\frac{\sqrt3}2&\frac12\end{bmatrix} &\begin{bmatrix}-1&0\\0&1\end{bmatrix}&\begin{bmatrix}-\frac12&-\frac{\sqrt3}2\\-\frac{\sqrt3}2&\frac12\end{bmatrix}&\begin{bmatrix}\frac12&-\frac{\sqrt3}2\\-\frac{\sqrt3}2&-\frac12\end{bmatrix}\\ E_{2g}&\begin{bmatrix}1&0\\0&-1\end{bmatrix}&\begin{bmatrix}-\frac12&\frac{\sqrt3}2\\\frac{\sqrt3}2&\frac12\end{bmatrix}&\begin{bmatrix}-\frac12&-\frac{\sqrt3}2\\-\frac{\sqrt3}2&\frac12\end{bmatrix} &\begin{bmatrix}1&0\\0&-1\end{bmatrix}&\begin{bmatrix}-\frac12&\frac{\sqrt3}2\\\frac{\sqrt3}2&\frac12\end{bmatrix}&\begin{bmatrix}-\frac12&-\frac{\sqrt3}2\\-\frac{\sqrt3}2&\frac12\end{bmatrix}\\ R\phi_1&\phi_{12}&\phi_2&\phi_4&\phi_6&\phi_8&\phi_{10}\end{array}$$ Then it remains to apply projection operators: $$\Gamma_j^{(\mu,i)}=\sum_{R\in G}D_{ij}^{(\mu)}\left(R^{-1}\right)R\phi_1$$ (Note the $R^{-1}$ in the above formula) $$\Gamma_{A_{1g}}=\phi_1+\phi_3+\phi_5+\phi_7+\phi_9+\phi_{11}+\phi_{12}+\phi_2+\phi_4+\phi_6+\phi_8+\phi_{10}$$ $$\Gamma_{A_{2g}}=\phi_1+\phi_3+\phi_5+\phi_7+\phi_9+\phi_{11}-\phi_{12}-\phi_2-\phi_4-\phi_6-\phi_8-\phi_{10}$$ $$\Gamma_{B_{1u}}=\phi_1-\phi_3+\phi_5-\phi_7+\phi_9-\phi_{11}+\phi_{12}-\phi_2+\phi_4-\phi_6+\phi_8-\phi_{10}$$ $$\Gamma_{B_{2u}}=\phi_1-\phi_3+\phi_5-\phi_7+\phi_9-\phi_{11}-\phi_{12}+\phi_2-\phi_4+\phi_6-\phi_8+\phi_{10}$$ $$\Gamma_{E_{1u},x}^{(1)}=\phi_1+\frac12\phi_3-\frac12\phi_5-\phi_7-\frac12\phi_9+\frac12\phi_{11}+\phi_{12}+\frac12\phi_2-\frac12\phi_4-\phi_6-\frac12\phi_8+\frac12\phi_{10}$$ 
$$\Gamma_{E_{1u},y}^{(1)}=\frac{\sqrt3}2\phi_3+\frac{\sqrt3}2\phi_5-\frac{\sqrt3}2\phi_9-\frac{\sqrt3}2\phi_{11}+\frac{\sqrt3}2\phi_2+\frac{\sqrt3}2\phi_4-\frac{\sqrt3}2\phi_8-\frac{\sqrt3}2\phi_{10}$$ $$\Gamma_{E_{1u},x}^{(2)}=-\frac{\sqrt3}2\phi_3-\frac{\sqrt3}2\phi_5+\frac{\sqrt3}2\phi_9+\frac{\sqrt3}2\phi_{11}+\frac{\sqrt3}2\phi_2+\frac{\sqrt3}2\phi_4-\frac{\sqrt3}2\phi_8-\frac{\sqrt3}2\phi_{10}$$ $$\Gamma_{E_{1u},y}^{(2)}=\phi_1+\frac12\phi_3-\frac12\phi_5-\phi_7-\frac12\phi_9+\frac12\phi_{11}-\phi_{12}-\frac12\phi_2+\frac12\phi_4+\phi_6+\frac12\phi_8-\frac12\phi_{10}$$ $$\Gamma_{E_{2g},x^2-y^2}^{(1)}=\phi_1-\frac12\phi_3-\frac12\phi_5+\phi_7-\frac12\phi_9-\frac12\phi_{11}+\phi_{12}-\frac12\phi_2-\frac12\phi_4+\phi_6-\frac12\phi_8-\frac12\phi_{10}$$ $$\Gamma_{E_{2g},2xy}^{(1)}=\frac{\sqrt3}2\phi_3-\frac{\sqrt3}2\phi_5+\frac{\sqrt3}2\phi_9-\frac{\sqrt3}2\phi_{11}+\frac{\sqrt3}2\phi_2-\frac{\sqrt3}2\phi_4+\frac{\sqrt3}2\phi_8-\frac{\sqrt3}2\phi_{10}$$ $$\Gamma_{E_{2g},x^2-y^2}^{(2)}=-\frac{\sqrt3}2\phi_3+\frac{\sqrt3}2\phi_5-\frac{\sqrt3}2\phi_9+\frac{\sqrt3}2\phi_{11}+\frac{\sqrt3}2\phi_2-\frac{\sqrt3}2\phi_4+\frac{\sqrt3}2\phi_8-\frac{\sqrt3}2\phi_{10}$$ $$\Gamma_{E_{2g},2xy}^{(2)}=\phi_1-\frac12\phi_3-\frac12\phi_5+\phi_7-\frac12\phi_9-\frac12\phi_{11}-\phi_{12}+\frac12\phi_2+\frac12\phi_4-\phi_6+\frac12\phi_8+\frac12\phi_{10}$$
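Because every group element just permutes the twelve hybrids, the one-dimensional projections can be verified numerically (a sketch of mine; the permutation action and the character rows are read off from the tables earlier in the answer, and the helper names are my own).

```python
import numpy as np

n = 12  # phi_1..phi_12 stored at indices 0..11

def perm(f):
    """Permutation matrix sending basis index i to f(i) mod 12."""
    P = np.zeros((n, n))
    for i in range(n):
        P[f(i) % n, i] = 1.0
    return P

# C6^j rotates phi_k -> phi_{k+2j}; the C2 axis at 30m degrees sends
# phi_k -> phi_{2m-k+1} (1-indexed), consistent with the R*phi_1 rows
ops = [perm(lambda i, j=j: i + 2 * j) for j in range(6)] \
    + [perm(lambda i, m=m: 2 * m - 1 - i) for m in range(6)]

chars = {  # characters over [E, C6, ..., C6^5, C2a, ..., C2f]
    "A1g": [1] * 12,
    "A2g": [1] * 6 + [-1] * 6,
    "B1u": [1, -1] * 6,
    "B2u": [1, -1] * 3 + [-1, 1] * 3,
}

phi1 = np.zeros(n)
phi1[0] = 1.0

def project(irrep):
    """Unnormalized projection of phi_1 onto a 1D irrep of D6."""
    return sum(c * (P @ phi1) for c, P in zip(chars[irrep], ops))

print(project("A1g"))  # all +1: phi_1 + phi_2 + ... + phi_12
print(project("B1u"))  # the +,-,-,+ pattern of Gamma_B1u
```

The output reproduces the $\Gamma_{A_{1g}}$, $\Gamma_{A_{2g}}$, $\Gamma_{B_{1u}}$, $\Gamma_{B_{2u}}$ combinations given above; the two-dimensional $E$ projections would need the full matrix elements $D_{ij}^{(\mu)}$ instead of characters.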
{ "domain": "chemistry.stackexchange", "id": 12134, "tags": "group-theory" }
Express the maximum work from a voltaic cell
Question: The net cell reaction of an electrochemical cell and its standard potential is given below: $$\ce{ Mg + 2Ag+ ->Mg^{2+} + 2Ag} \ \ \ \ \ \ \ \ E^\circ=3.17\:\mathrm{V}$$ The question is to find the maximum work obtainable from this electrochemical cell if the initial concentrations of $\ce{Mg^{2+}}=0.1\ \mathrm{M}$ and of $\ce{Ag+}=1\ \mathrm{M}$. The solution just uses the Nernst equation to find the potential at these concentrations and uses $\Delta G=-nFE$ to calculate the Gibbs free energy change, which is then equated to the maximum work obtainable. But this is just the work obtainable per mole at this concentration only. As soon as the reaction proceeds, the concentrations change and so does the value of $\Delta G$ per mole, and therefore the maximum work obtainable is different for different concentrations. Therefore, to find the maximum work obtainable, shouldn't we first calculate the equilibrium constant of this reaction (I calculated it to be $K_c=1.89\times10^{107}$), then the equilibrium concentrations (which are probably $\ce{Mg^{2+}}=0.6\ \mathrm{M}, \ \ce{Ag+}=0.178\times10^{-53}\ \mathrm{M}$) and then use something like: $$ \int_{\mathrm{in}}^{\mathrm{eq}}\Delta G* \mathrm{d\,M} $$ where $\Delta G$ is per mole and $\mathrm{d}M$ is a small amount in moles by which the reaction proceeds at that concentration. Please help me verify my method and to proceed further. Answer: There is a simpler approach to this problem. 
First, since both initial concentrations are not 1 M, you need to determine $E$ for the initial conditions, where $Q$ is the reaction quotient $Q=\frac{[\ce{Mg^{2+}}]}{[\ce{Ag^{+}}]^2}$: $$E=E^\circ-\dfrac{RT}{nF}\ln{Q}$$ The value of $n$ for this reaction is $n=2$, and assuming $T=298\ \mathrm{K}$: $$E_i=3.17\ \mathrm{V}-\dfrac{(8.314 \dfrac{\text{J}}{\text{K}\cdot \text{mol}})(298 \text{ K})}{(2)(9.648\times10^4 \dfrac{\text{C}}{\text{mol}})}\cdot\ln{\dfrac{0.1 \text{ M}}{(1.0 \text{ M})^2}}\approx 3.20 \text{ V}$$ You can then go and convert this potential into free energy or maximum work or whatever else you want. At equilibrium $E=0$ and $Q=K_c$, so we can calculate the equilibrium constant if we want to compare it to $Q$ and determine which direction is favored. We don't need to. Since $E_i>0$, this reaction proceeds toward the products. Now, as the reaction proceeds the concentrations of $\ce{Ag+}$ and $\ce{Mg^{2+}}$ will change. From the reaction stoichiometry: $$-\Delta \ce{[Ag+]}= 2\Delta \ce{[Mg^{2+}]}$$ After some time $0.25 \text{ M}$ of $\ce{Ag+}$ has been consumed, so $\Delta \ce{[Mg^{2+}]}=0.125\text{ M}$ and $[\ce{Ag+}]=0.75 \text{ M}$ and $[\ce{Mg^{2+}}]=0.225 \text{ M}$. The value of $Q$ has changed and so has the value of $E$: $$E=3.17\ \mathrm{V}-\dfrac{(8.314 \dfrac{\text{J}}{\text{K}\cdot \text{mol}})(298 \text{ K})}{(2)(9.648\times10^4 \dfrac{\text{C}}{\text{mol}})}\cdot\ln{\dfrac{0.225 \text{ M}}{(0.75 \text{ M})^2}}\approx 3.18 \text{ V}$$ As the reaction proceeds, the value of $E$ decreases, which means the amount of work that can be done by the reaction also decreases. The maximum value of $E$ occurs at the initial state, and so the maximum work per mole that this cell can do happens at the initial state. Since $E$ decreases as the reaction proceeds, so too does the work the cell can do at any moment. If you want the total work, then you need to integrate $E$ over the charge passed, i.e. over the extent of reaction $\xi$ (which in turn determines $Q$): $$W_\text{total}=\int_{0}^{\xi_\text{eq}}nF\,E\,\mathrm{d}\xi$$
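The two Nernst evaluations are easy to re-run numerically (my own sketch; the exact decimals depend on the constants used, and the key point is only that $E$ decreases as the reaction proceeds):

```python
import math

R, F, T = 8.314, 96485.0, 298.0   # J/(K mol), C/mol, K
n_e, E0 = 2, 3.17                 # electrons transferred, E standard (V)

def E_cell(mg, ag):
    """Nernst equation for Mg + 2 Ag+ -> Mg2+ + 2 Ag."""
    Q = mg / ag**2
    return E0 - (R * T) / (n_e * F) * math.log(Q)

E_initial = E_cell(0.1, 1.0)      # [Mg2+] = 0.1 M, [Ag+] = 1.0 M
E_later = E_cell(0.225, 0.75)     # after 0.25 M of Ag+ has been consumed
print(E_initial > E_later > 0)    # True: E, hence available work, decreases
```

Evaluating more points along the reaction coordinate gives the integrand for the total-work integral.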
{ "domain": "chemistry.stackexchange", "id": 11753, "tags": "physical-chemistry, electrochemistry, thermodynamics, ions" }
run the roslaunch command in Qt creator
Question: I tried to run the roslaunch command in Qt but the error is: sh: 1: /opt/ros/indigo/bin/roslaunch/roslaunch: not found and I tried to download the plugin for ROS, but it still doesn't work. Here is my code: QProcess * exec; exec =new QProcess(this); exec->setProcessChannelMode(QProcess::MergedChannels); //exec->start("gnome-terminal", QStringList()<<"source ~/catkin_ws/devel/settup.bash; roslaunch p2os_launch pioneer_joy_drive.launchS ") QString command = "/opt/ros/indigo/bin/roslaunch/roslaunch p2os_launch pioneer_joy_drive.launch"; exec->start(command); system(qPrintable(command)); Originally posted by Nicky0201 on ROS Answers with karma: 1 on 2016-11-21 Post score: 0 Answer: Have you tried writing only "roslaunch" instead of "/opt/.../roslaunch"? Don't know if it helps, but here's one solution for a similar situation: link text Originally posted by FábioBarbosa with karma: 137 on 2017-09-28 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 26300, "tags": "ros, qt" }
How to include a .launch file on a remote machine from a .launch file?
Question: I'm trying to write a launch file that automatically ssh's and launches turtlebot_bringup minimal.launch on my turtlebot from my computer. I'm using Indigo. I have something like this: <launch> <machine name="machname" address="mybot" user="myuser"/> <include file="/opt/ros/indigo/share/turtlebot_bringup/launch/minimal.launch" machine="machname"/> </launch> as suggested in this question. However, like some of the commenters in that question, I'm getting an "unknown attribute 'machine'" error. What's the correct way to do this? Thanks in advance for the help! Originally posted by JKostas on ROS Answers with karma: 23 on 2016-02-28 Post score: 2 Answer: From reading through the source for roslaunch, and from looking at the other posts on that question, I'm pretty sure the undocumented feature you're trying to use was removed in later versions of roslaunch. Instead, you might want to try using the default attribute of the machine tag, to make all nodes run on the remote machine: <launch> <machine name="machname" address="mybot" user="myuser" default="true"/> <include file="/opt/ros/indigo/share/turtlebot_bringup/launch/minimal.launch"/> </launch> Originally posted by ahendrix with karma: 47576 on 2016-02-28 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by JKostas on 2016-03-02: Thanks!! I had other issues trying to run the XML above, and decided to give up and use a keyboard macro (xdotool's "key" command + python + python's subprocess module). However, while the XML may not work perfectly, this answer is, in general, a great solution to this issue. Thanks again!
{ "domain": "robotics.stackexchange", "id": 23936, "tags": "ros, roslaunch, include, network, ssh" }
Machine epsilon - find epsi
Question: 4.2 Machine Epsilon Find the floating point number epsi that has the following properties: 1.0+epsi is greater than 1.0, and let m be any number less than epsi; then 1.0+m is equal to 1.0. epsi is called machine epsilon. It is of great importance in understanding floating point numbers. I have written the following program to try and find my machine epsilon value to a given search depth (10). Do you have any ideas how I could write this program better? (defpackage :find-epsi (:use cl)) (in-package :find-epsi) (defun smaller-scale (&OPTIONAL (epsi 1.0)) (if (> (+ 1.0 epsi) 1.0) (smaller-scale (/ epsi 10)) epsi)) (defun bigger (epsi inc-unit) (if (< (+ 1.0 epsi) 1.0) (bigger (+ epsi inc-unit) inc-unit) epsi)) (defun smaller (epsi dec-unit) (if (> (+ 1.0 epsi) 1.0) (smaller (+ epsi dec-unit) dec-unit) epsi)) (defun find-epsi (&OPTIONAL (search-depth 10) (epsi (smaller-scale)) (incdec-unit epsi)) (if (= search-depth 0) epsi (find-epsi (1- search-depth) (bigger (smaller epsi incdec-unit) incdec-unit) incdec-unit))) (format t "epsi: ~a ~%" (find-epsi)) It seems that it should be much simpler to find epsilon than I originally thought. What do you think about the following program? (defpackage :find-epsi (:use cl)) (in-package :find-epsi) (defun find-epsi (&OPTIONAL (epsi 1.0)) (if (> (+ 1.0 epsi) 1.0) ; if the variable epsi is still significant (find-epsi (/ epsi 2)) ; halve it and try again epsi)) ; otherwise, we have found epsilon (format t "epsi: ~a ~%" (find-epsi)) Answer: If we assume that a float is represented in memory as a (sign, mantissa, exponent) tuple, and assume a radix of 2, then we can find the machine epsilon exactly. That is, if we can assume the machine stores floats using base-2 representations of the mantissa and exponent, then we know that: The machine will store a value of 1 in floating point exactly - this would be stored as 1 for the mantissa, and 0 for the exponent, i.e. 1 * 2^0. 
The machine will store all powers of two that it can represent using a single bit in the mantissa, and by varying the exponent. E.g. 1/4 could be represented as 1 * (2 ^ -2). Any representable power of two will be stored without losing information. 1 + epsi will be the smallest value greater than 1 that can be stored in the mantissa of the floating-point number. EDIT The second version looks much better than the first, but I believe there's an off-by-one error in the number of times you recurse in find-epsi. I suggest that you create a test function, to see if your result is the machine epsilon: (defun epsi-sig-p (epsi) (> (+ 1.0 epsi) 1.0)) You'll probably find that (epsi-sig-p (find-epsi)) is NIL... This also suggests that you can refactor find-epsi (under the DRY principle) to use this test function: (defun find-epsi (&OPTIONAL (epsi 1.0)) (if (epsi-sig-p epsi) (find-epsi (/ epsi 2)) epsi)) but we didn't change the behavior to fix the calculation, yet. For this, I'd suggest another routine, to check whether we should try the next possible epsi: (defun next-epsi (epsi) (/ epsi 2)) (defun is-epsi-p (epsi) (and (epsi-sig-p epsi) (not (epsi-sig-p (next-epsi epsi))))) (defun find-epsi (&OPTIONAL (epsi 1.0)) (if (is-epsi-p epsi) epsi (find-epsi (next-epsi epsi)))) (is-epsi-p (find-epsi)) should return T, now.
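For comparison, the same halving search in Python (my addition, not part of the original Lisp), with the significance test factored out so the off-by-one cannot occur: stop as soon as the next candidate would no longer be significant.

```python
import sys

def significant(eps):
    """True when 1.0 + eps is distinguishable from 1.0."""
    return 1.0 + eps > 1.0

def find_eps():
    eps = 1.0
    # keep halving while the *next* candidate is still significant
    while significant(eps / 2):
        eps /= 2
    return eps

print(find_eps() == sys.float_info.epsilon)  # True for IEEE-754 doubles (2**-52)
```

Because Python floats are IEEE-754 doubles, the result matches the value the runtime itself reports in sys.float_info.epsilon.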
{ "domain": "codereview.stackexchange", "id": 160, "tags": "lisp, common-lisp, floating-point" }
Dynamic Programming Solution to 0,1 KnapSack Problem
Question: I am trying to understand the DP solution to the basic knapsack problem. However, even after reading through a variety of tutorials, it's still beyond my comprehension. I am taking an algorithmics course and need to solve questions based on the classic knapsack problem. However, unless I understand the classic problem clearly, I won't be able to make any headway on the advanced ones. Please help point me in the right direction. The recursive equation is way too confusing for me. Answer: The key to understanding a dynamic programming problem is understanding the recursive definition, and this can be daunting. For this problem we start with n objects labeled 1 to n. We define $O(k,W)$ to be the optimal value for the first $k$ items with a total weight $W$; we need to be able to define this in terms of subproblems, since without that, dynamic programming does not work. To add some quick definitions: $W_k$ is the weight of the $k$th item and $V_k$ is its value. So we define the recursive formula in 2 cases: The first case is $O(k,W) = O(k-1,W) \quad \text{if } W_k > W$, meaning the $k$th item weighs more than our target weight, so it cannot be added even if nothing else was in the bag. The second case is the decision case, $O(k,W) = \max\bigl(O(k-1,W),\, O(k-1,W-W_k)+V_k\bigr)$. It means that either we don't add item $k$ to our set, in which case our best so far is $O(k-1,W)$, or we do add it, in which case we only have $W-W_k$ weight remaining to be filled with items 1 to $k-1$ and our total value is $O(k-1,W-W_k)+V_k$, which is our recursive value plus the value of the new item. The optimal solution is then $O(n,\text{MaxWeight})$. Now for the dynamic programming part, the best way to start out is with our standard grid: the horizontal axis is $k$, so at any point $i$ we have the first $i$ items available, and the vertical axis is the total weight, from zero to the weight the question was asking. 
The first row and column will have a value of zero, since with no weight or no items our bag has no value. (A worked grid example was shown here in the original answer.) We walk through the algorithm by moving down each column, one column at a time. So at any grid location we check 3 other grid points, which correspond to the values from the recursive definition. If we are at grid point $(k,W)$ and item $k$ has a weight of $W_k$ and a value of $V_k$, we first check whether $W<W_k$; if so, we use the square above it, since our item is too big, which is $O(k-1,W)$. If the item is not too big, we check location $(k-1,W)$, which is $O(k-1,W)$, and compare it to the value at $(k-1,W-W_k)$ plus the item's value, which is $O(k-1,W-W_k)+ V_k$; since we are going column by column from smallest to largest, these are already calculated. If we keep doing this we will eventually get to the bottom right corner, which is the solution to the problem. The trick to this is that we start at the smallest values and save the answers, so for larger ones the recursion is only a quick lookup instead of a full call.
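The recurrence translates almost line-for-line into code (a sketch of mine; item $k$ is values[k-1] / weights[k-1] below because of 0-based list indexing):

```python
def knapsack(values, weights, max_weight):
    n = len(values)
    # O[k][W] = best value achievable with the first k items and capacity W
    O = [[0] * (max_weight + 1) for _ in range(n + 1)]
    for k in range(1, n + 1):
        for W in range(max_weight + 1):
            if weights[k - 1] > W:
                O[k][W] = O[k - 1][W]                 # item k cannot fit
            else:
                O[k][W] = max(O[k - 1][W],            # leave item k out
                              O[k - 1][W - weights[k - 1]] + values[k - 1])
    return O[n][max_weight]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220 (take items 2 and 3)
```

Filling the grid this way takes O(n * max_weight) time, which is exactly the "save the answers so the recursion is a lookup" idea.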
{ "domain": "cs.stackexchange", "id": 3036, "tags": "dynamic-programming, knapsack-problems" }
What are AE Salts?
Question: I saw this sentence as I was reading Agatha Christie's Five Little Pigs: Coniine and AE Salts comes under Schedule I of the Poisons Acts. So I've been wondering what are the mentioned "AE salts"? It's such a little detail but I'm interested to know. Answer: "Five Little Pigs" was published around 1942-43. Agatha Christie also worked in pharmacy around that period of time, and, most likely, referenced Poisons Acts of 1933 by Royal Pharmaceutical Society of Great Britain. It is also worth noticing that translations of the paragraph vary quite a bit: English: "He was very distressed by the whole thing, poor gentleman. As well he might be! Blamed himself for his drug brewing-and the coroner blamed him for it too. Coniine and AE Salts comes under Schedule I of the Poisons Acts. He came in for some pretty sharp censure. He was a friend of both parties, and it hit him very hard-besides being the kind of county gentleman who shrinks from notoriety and being in the public eye." Russian, missing "and AE Salts": "Он был очень расстроен случившимся, бедный джентельмен. И правильно! Винил себя за то, что приготовил эту настойку, - и коронер тоже винил его в этом. Кониум входит в список ядовитых веществ №1. Мистеру Блейку было выражено порицание в самой резкой форме. Он дружил и с мистером, и с миссис Крейл, а потому случившееся переживал особенно болезненно, не говоря уж о том, что ему, как человеку, постоянно живущему в деревне, такая популярность была совершенно ни к чему." German, entirely missing "Coniine and AE Salts comes under Schedule I of the Poisons Acts": "Er war höchst unglücklich darüber, der arme Mann. Er wurde von Gewissensbissen wegen seiner Giftmischerei geplagt, und der Gerichtsarzt machte ihm auch schwere Vorwürfe." Which makes me think that "AE salts" is probably not that important, and slightly expands the definition of the poison. 
I didn't find the original Schedule I of the Poisons Acts 1933, though its brief overview states that it mostly consists of various alkaloids and their salts. At that period of time alkaloids were not synthesized, but obtained from plants by extraction, so I would suggest AE stands for Alkaloid Extraction.
{ "domain": "chemistry.stackexchange", "id": 8450, "tags": "chemistry-in-fiction" }
Maximum speed in a spring-mass system
Question: I am studying energy right now and I can use only gravitational potential energy, elastic energy and kinetic energy to solve some problems. My doubt is how one can prove that the maximum speed of a mass hanging on a spring is reached at the middle of the elongation of the spring. The exercise shows the following situation: A block with mass $m$ is attached to an ideal spring (no mass) with elastic constant $k$. The block is released when the spring is in its natural state. Consider no friction and that the system is conservative. The question asks for the maximum speed of the block using energy only. I solved it already (with $m=0.1$ $[kg]$ and $k=10\left[\frac{N}{m}\right ]$ and assuming that the point $0[m]$ is where the spring reaches its maximum elongation and the block is released from a height of $h\;[m])$. Then, I got that the maximum speed is $1[\frac{m}{s}]$, but I assumed that the maximum speed is reached at the middle, so I want to know why this is true in these ideal conditions. Answer: The total energy you have is Kinetic + Potential, and here the potential energy has two parts: elastic and gravitational. Since energy is conserved, the kinetic energy is maximum where the total potential energy is minimum, and that happens exactly where the net force on the block vanishes, i.e. where $kx = mg$, so $x = mg/k$ measured from the natural length. The block is released at $x = 0$ and momentarily stops at the maximum elongation $x = 2mg/k$ (where all the gravitational energy lost has been stored in the spring), so the point of zero net force, $x = mg/k$, is precisely the middle of the elongation, and that is where the speed is maximum.
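To see this numerically, here is a short check (my own sketch, assuming $g = 10\,\mathrm{m/s^2}$, which reproduces the OP's $1\,\mathrm{m/s}$): scan the speed over the whole range of elongations and confirm the maximum occurs at $x = mg/k$, the midpoint of the motion.

```python
import numpy as np

m, k, g = 0.1, 10.0, 10.0   # kg, N/m, m/s^2 (g = 10 as the OP apparently used)

# Measure x downward from the natural length of the spring.
# Energy conservation from the release point (x = 0, at rest):
#   m g x = (1/2) k x^2 + (1/2) m v^2
x = np.linspace(0.0, 2 * m * g / k, 100001)      # full range of the motion
e = m * g * x - 0.5 * k * x**2                   # kinetic energy at each elongation
v = np.sqrt(np.clip(2 * e / m, 0.0, None))       # speed (clip guards round-off at the turning point)

i = np.argmax(v)
print(v[i])         # ~1.0 m/s, the maximum speed
print(x[i])         # ~0.1 m, which equals mg/k, the midpoint of [0, 2mg/k]
```

The maximum lands exactly at the equilibrium elongation $mg/k = 0.1\,\mathrm{m}$, halfway between release and maximum stretch.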
{ "domain": "physics.stackexchange", "id": 64174, "tags": "energy, energy-conservation, potential-energy" }
Find pairs of integers (a, b) in an array such that a = b + k in linear time - elements are not unique
Question: A while ago, I was asked to solve a question similar to this: We are given an array arr and we would like to find all pairs of items (a, b) where a = b + k. The items are NOT unique and it is also possible to have k = 0. I know that if items are unique, this problem can be solved in linear time by using a hashmap. However, when items are NOT unique, I think that the problem becomes very different. See this example: arr = [1, 1, 1, 1, 1] k = 0 The expected output is: (1, 1), (1, 1), (1, 1), (1, 1) // For the first element (1, 1), (1, 1), (1, 1) // For the second element (1, 1), (1, 1) // For the third element (1, 1) // For the fourth element As it is obvious to me, in the worst case (the above example) the output is of size $n \choose 2$, which is $\Theta (n^2)$. How is it possible to have a linear algorithm, when the output size is definitely $\Theta (n^2)$? My interviewer insisted that it is possible to still solve it in linear time, if the correct data structure is used. Answer: If you must write out all of the pairs individually, then the overall problem takes quadratic time because the running time of this last post-processing step is $O(n^2)$. If you are allowed to represent the output in a different way, then you can simply keep track of distinct pairs and associate them with a counter (see run-length encoding). Thus, you can compute the solution in $O(n)$ time using the following algorithm: Initialize an auxiliary array (or hash map) $Aux$. Perform a first linear scan of the input array $Arr$ and use the auxiliary array to keep track of how many times each of the elements appears. For example, $Aux[2]=3$ indicates that $2$ appears $3$ times in $Arr$. Perform a second linear scan of the input array $Arr$. During the $i$th iteration, $a=Arr[i]$ and you must look for the element $b =a-k$, which appears $c =Aux[b]$ times in $Arr$. If $c \gt 0$, then you will add the pair $(a,b)$ to your solution, associated with the count $c$. 
Two special cases that must be considered at step 3: If the pair already exists in your solution, you can simply increment the existing counter by $c$. If $k=0$, you must add the pair $(a,b)$ only if $c \gt 1$. The counter associated with this pair will not be set to (or incremented by) $c$, but, rather, by $c-x-1$, where $x$ stores how many times we have seen $a$ before.
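A minimal Python sketch of the two-pass counting algorithm described above (my own illustration). The output is run-length encoded as a map from each distinct pair to its count, which keeps the representation linear even when the explicit pair list is $\Theta(n^2)$:

```python
from collections import Counter

def pairs_with_difference(arr, k):
    """Run-length-encoded pairs (a, b) with a = b + k, in O(n) expected time."""
    counts = Counter(arr)          # first linear scan: value -> multiplicity
    seen = Counter()               # how many times each value has appeared so far
    result = Counter()             # (a, b) -> number of occurrences of that pair
    for a in arr:                  # second linear scan
        b = a - k
        c = counts[b]
        if k == 0:
            c = c - seen[a] - 1    # avoid self-pairing and double counting
        if c > 0:
            result[(a, b)] += c
        seen[a] += 1
    return dict(result)

print(pairs_with_difference([1, 1, 1, 1, 1], 0))   # {(1, 1): 10}, i.e. C(5, 2) pairs
print(pairs_with_difference([1, 3, 3, 5], 2))      # {(3, 1): 2, (5, 3): 2}
```

The first call reproduces the worst case from the question: ten pairs, but stored in constant space as a single counter.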
{ "domain": "cs.stackexchange", "id": 8572, "tags": "algorithms, time-complexity, runtime-analysis, search-algorithms, arrays" }
How is algorithm complexity modeled for functional languages?
Question: Algorithm complexity is designed to be independent of lower level details but it is based on an imperative model, e.g. array access and modifying a node in a tree take O(1) time. This is not the case in pure functional languages. The Haskell list takes linear time for access. Modifying a node in a tree involves making a new copy of the tree. Should there then be an alternate modeling of algorithm complexity for functional languages? Answer: If you assume that the $\lambda$-calculus is a good model of functional programming languages, then one may think: the $\lambda$-calculus has a seemingly simple notion of time-complexity: just count the number of $\beta$-reduction steps $(\lambda x.M)N \rightarrow M[N/x]$. But is this a good complexity measure? To answer this question, we should clarify what we mean by complexity measure in the first place. One good answer is given by the Slot and van Emde Boas thesis: any good complexity measure should have a polynomial relationship to the canonical notion of time-complexity defined using Turing machines. In other words, there should be a 'reasonable' encoding $tr(.)$ from $\lambda$-calculus terms to Turing machines, such that for some polynomial $p$, it is the case that for each term $M$ of size $|M|$: $M$ reduces to a value in $p(|M|)$ $\beta$-reduction steps exactly when $tr(M)$ reduces to a value in $p(|tr(M)|)$ steps of a Turing machine. For a long time, it was unclear whether this could be achieved in the λ-calculus. The main problems are the following. There are terms that produce normal forms (in a polynomial number of steps) that are of exponential size. Even writing down the normal forms takes exponential time. The chosen reduction strategy plays an important role. For example, there exists a family of terms which reduces in a polynomial number of parallel β-steps (in the sense of optimal λ-reduction), but whose complexity is non-elementary (meaning worse than exponential).
The paper "Beta Reduction is Invariant, Indeed" by B. Accattoli and U. Dal Lago clarifies the issue by showing a 'reasonable' encoding that preserves the complexity class P of polynomial time functions, assuming leftmost-outermost call-by-name reductions. The key insight is that the exponential blow-up can only happen for 'uninteresting' reasons, which can be defeated by proper sharing. In other words, the class P is the same whether you define it counting Turing machine steps or (leftmost-outermost) $\beta$-reductions. I'm not sure what the situation is for other evaluation strategies. I'm not aware that a similar programme has been carried out for space complexity.
{ "domain": "cs.stackexchange", "id": 8844, "tags": "algorithm-analysis, runtime-analysis, computation-models, functional-programming" }
Generating ikfast plugin for 5 DOF robot
Question: Hi everyone, we are currently working on an ERICC 1 robot (probably >30 years old) and need to update its control systems, etc. Considering that it is a 5 DOF robot, we are having a lot of difficulty with the inverse kinematics problem. It seems that KDL is able to find a solution every once in a while but it is not reliable (< 10 % of the time and not validated). After a quick search, we found that the encouraged method for < 6 DOF robots is to use ikfast, which we are currently trying to implement. We are following the tutorial. The URDF we are using was created by us and a package containing all the meshes and the URDF can be found here: Robot_description package (sorry for the download format, I am not used to using git or anything else, I hope it is not too bad). From these files, we are able to convert the URDF to collada successfully as explained in the tutorial using: rosrun collada_urdf urdf_to_collada my_robot.urdf ericc_collada.dae Then, when we visualize it using openrave, we can see the robot as in the following picture (can't upload, too new on the group): ERICC collada openrave http://oi57.tinypic.com/2virv5x.jpg If we look at it quickly, we can see that it was imported properly (at least visually) but there is no reference frame at the tip of the end effector as I expected when reading a previous post about the 5 DOF katana arm.
If I try to generate the solver, using : python `openrave-config --python-dir`/openravepy/_openravepy_/ikfast.py --robot=ericc_collada.dae --iktype=translationdirection5d --baselink=0 --eelink=5 --savefile=ikfast_ericc It tries various methods and end up with this error message Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/openravepy/_openravepy_/ikfast.py", line 6121, in <module> chaintree = solver.generateIkSolver(options.baselink,options.eelink,options.freeindices,solvefn=solvefn) File "/usr/lib/python2.7/dist-packages/openravepy/_openravepy_/ikfast.py", line 1639, in generateIkSolver chaintree = solvefn(self, LinksRaw, jointvars, isolvejointvars) File "/usr/lib/python2.7/dist-packages/openravepy/_openravepy_/ikfast.py", line 2055, in solveFullIK_TranslationDirection5D endbranchtree2 += self.solveAllEquations(AllEquations,curvars=curvars,othersolvedvars = self.freejointvars+usedvars,solsubs = solsubs,endbranchtree=endbranchtree) File "/usr/lib/python2.7/dist-packages/openravepy/_openravepy_/ikfast.py", line 4264, in solveAllEquations return self.addSolution(solutions,AllEquations,curvars,othersolvedvars,solsubs,endbranchtree,currentcases=currentcases) File "/usr/lib/python2.7/dist-packages/openravepy/_openravepy_/ikfast.py", line 4340, in addSolution return [solution[0].subs(solsubs)]+self.solveAllEquations(AllEquations,curvars=newvars,othersolvedvars=othersolvedvars+[var],solsubs=solsubs+self.Variable(var).subs,endbranchtree=endbranchtree,currentcases=currentcases) File "/usr/lib/python2.7/dist-packages/openravepy/_openravepy_/ikfast.py", line 4321, in solveAllEquations raise self.CannotSolveError('failed to find a variable to solve') __main__.CannotSolveError: 'failed to find a variable to solve' Any suggestions as to how I could fix this issue ? 
SETUP informations: Ubuntu 12.04 LTS 64 bits, ROS hydro, Openrave 0.8 Best regards, Pascal Fortin Originally posted by pascal.fortin on ROS Answers with karma: 16 on 2014-11-04 Post score: 0 Answer: Edit (2017-09-26): I've written an updated version of this in #q263925. That also uses a Docker image to avoid having to install OpenRave on the ROS machine (which is non-trivial on current versions of Ubuntu). For some of my 5dof manipulators, I've run into the same issue. What worked for me was to wrap the Collada file describing your manipulator in an OpenRAVE robot definition file (that is not official terminology). This provides OpenRAVE IKFast with enough information to be able to generate a plugin for your robot. This also requires passing different parameters to be passed to the openrave.py script (I used version 0.8): openrave0.8.py --database inversekinematics --robot=/path/to/collada_file_with_manipulator.xml --iktype=translationdirection5d --iktests=100 The iktests parameter value was just a default, you can make it larger or smaller. Unfortunately I cannot find my collada_file_with_manipulator.xml right now, so I cannot provide it to you, but I used something like: <robot file="/path/to/converted.urdf.dae"> <Manipulator name="YOUR_NAME"> ... </Manipulator> </robot> Note that you don't need to manually edit the Collada file you got by converting your urdf, you can reference it in your wrapper model definition using the file attribute. I used the following pages for information: OpenRAVE Custom XML Format, in particular the Defining Manipulators section Translation3D failed to find a variable to solve Originally posted by gvdhoorn with karma: 86574 on 2014-11-12 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 19958, "tags": "kinematics, inverse, ikfast, openrave, robot" }
Is dynamics in GR unique?
Question: Quote: "... a map $h$ of an open set $\Theta$ of a Banach space $B_1$ into a Banach space $B_2$ is Lipschitz in $\Theta$ if there exists $k\in \mathbb{R}$ such that $||h(a)-h(b)||\leq k||a-b||$ for $(a,b)\in\Theta\times \Theta$". Quote: "0.33 The Cauchy Theorem. The differential equation $$(*)~~~~~~~~~~~~ x'=f(t,x),x(t_0)=x_0$$ has a unique continuous solution $x(t)$ if $f(t,x)$ is Lipschitz in $x$ on $I\times \Omega$." Quote: "0.34 Remark. If $f(t,x)$ is not Lipschitz, we cannot say anything in general. But if the Banach space is $\mathbb{R}^n$, then there is a solution of (*), but it may not be unique." This made me think about GR: near a black hole the curvature goes to infinity, so the relevant functions cannot be Lipschitz there. Does this imply that the dynamics around a black hole is not unique? Answer: This is why cosmic censorship is considered to be so important -- you are saved from this conclusion if all of the infinite curvature points are hidden behind horizons, and therefore, the exterior of the black holes can still be globally hyperbolic.
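The remark about non-uniqueness can be made concrete with a textbook example (my own illustration, not from the quoted book): the right-hand side $f(t,x)=\sqrt{x}$ is continuous but not Lipschitz at $x=0$, and the IVP $x'=\sqrt{x}$, $x(0)=0$ admits more than one solution. A quick sympy check for $t \ge 0$:

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)

x1 = sp.Integer(0)   # the trivial solution
x2 = t**2 / 4        # a second solution through the same initial condition

# Both satisfy x' = sqrt(x) with x(0) = 0, so the solution is not unique:
print(sp.simplify(sp.diff(x1, t) - sp.sqrt(x1)))   # 0
print(sp.simplify(sp.diff(x2, t) - sp.sqrt(x2)))   # 0
print(x1.subs(t, 0), x2.subs(t, 0))                # 0 0
```

Two distinct continuous curves through the same initial point: exactly the failure of uniqueness the Cauchy theorem rules out under the Lipschitz hypothesis.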
{ "domain": "physics.stackexchange", "id": 62082, "tags": "general-relativity, differential-geometry, causality, differential-equations, determinism" }
Formation of splint bones in ungulates
Question: I do know that the splint bones are vestigial (reduced, non-functional toes in this case) in odd- and even-toed ungulates. Is there an evolutionary reason why this is so? What were the conditions that led to it? Answer: One easy way to make an animal faster (especially if it is larger than a housecat) is to make the ends of the limbs lighter. The more mass there is near the end of a limb, the more energy it takes to move it and the slower you can swing it; that translates into moving slower, because you cannot get the limb back into position for the next step fast enough. Because of this, reduction in the mass and number of toes in the lower limb is an adaptation we see over and over again in animals focusing on running speed. We see it in dinosaurs, and we see it in both odd-toed and even-toed ungulates. We see five toes reduced to three then eventually one toe, or sometimes we see a reduction from four toes to two paired toes instead. Source
{ "domain": "biology.stackexchange", "id": 8008, "tags": "evolution, vestigial" }
How to prove $S^\dagger\neq S$ in context of transformations in Dirac equation?
Question: We know that $S^{-1}=\gamma^0S^\dagger\gamma^0$, which is not in general equal to $S^\dagger$. How does one explicitly prove this? I have searched the books but could not find this and am at a loss on how to proceed. Even some pointers would be helpful. Answer: We have $S = \exp\left(-\frac i4\omega_{\mu\nu}\sigma^{\mu\nu}\right)$, where $\sigma^{\mu\nu} = \frac i2[\gamma^\mu, \gamma^\nu]$. This gives $S^\dagger = \exp\left(\frac i4\omega_{\mu\nu}\sigma^{\mu\nu\dagger}\right)$. Now, try to prove $\sigma^{\mu\nu\dagger} = \gamma^0\sigma^{\mu\nu}\gamma^0$, which will give the desired result.
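The identity $\sigma^{\mu\nu\dagger} = \gamma^0\sigma^{\mu\nu}\gamma^0$ (which follows from $\gamma^{\mu\dagger} = \gamma^0\gamma^\mu\gamma^0$) can also be checked numerically in the Dirac representation. This is my own sanity-check sketch, not part of the original answer:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac-representation gamma matrices
g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])
gammas = [g0] + [np.block([[0 * I2, s], [-s, 0 * I2]]) for s in (sx, sy, sz)]

def sigma(mu, nu):
    """sigma^{mu nu} = (i/2) [gamma^mu, gamma^nu]"""
    gm, gn = gammas[mu], gammas[nu]
    return 0.5j * (gm @ gn - gn @ gm)

# Check sigma^{mu nu, dagger} = gamma^0 sigma^{mu nu} gamma^0 for all mu, nu.
for mu in range(4):
    for nu in range(4):
        s = sigma(mu, nu)
        assert np.allclose(s.conj().T, g0 @ s @ g0)

# Since (gamma^0)^2 = 1, the identity exponentiates:
# gamma^0 S^dagger gamma^0 = exp(+i/4 w.sigma) = S^{-1}.
print("sigma identity holds")
```

The last comment is the point of the exercise: conjugating the exponential term by term with $\gamma^0$ flips the sign of the exponent, giving $S^{-1}=\gamma^0 S^\dagger\gamma^0$.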
{ "domain": "physics.stackexchange", "id": 91381, "tags": "homework-and-exercises, quantum-field-theory" }
Can I model process noise as a known "error" in my dynamics while designing a Kalman Filter?
Question: Suppose I am modelling the dynamics of a robot and using a Kalman filter to obtain estimates of some state. I have certain terms in my equation which correspond to data not accessible to this robot (states of other robots, etc.). 1) Is it fair to model these as process noise by assuming these terms to evolve based on some random process and act "Gaussian like"? This also raises another important question: 2) Why does the Kalman Filter require us to have a positive definite covariance associated with the process noise? How do I interpret this in the real world when I am writing down process noise as some unmodeled physical terms? Answer: The Extended Kalman filter (EKF) effectively does that: unmodelled nonlinearities are taken into account by assuming higher process or measurement noise, depending on where the nonlinearity appears.
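As an illustration of question 1), here is a deliberately simple 1-D sketch (entirely my own toy example, not a general guarantee): the true state is pushed by a hidden input $u_t$, say another robot's influence, that our filter cannot model, and we absorb it into an inflated process-noise covariance $Q$.

```python
import numpy as np

rng = np.random.default_rng(0)

# True dynamics: x_{t+1} = x_t + u_t + w_t, where u_t is a hidden input
# (e.g. another robot's action) that our filter has no access to.
n = 300
q_true, r = 0.01, 0.25                     # true process / measurement noise variances
u = 0.1 * np.sin(0.05 * np.arange(n))      # slowly varying unmodeled term

x, xs, zs = 0.0, [], []
for t in range(n):
    x += u[t] + rng.normal(0.0, np.sqrt(q_true))
    xs.append(x)
    zs.append(x + rng.normal(0.0, np.sqrt(r)))
xs = np.array(xs)

def kalman_1d(zs, q, r):
    """Scalar Kalman filter for the model x_{t+1} = x_t + w, z = x + v."""
    est, P, out = 0.0, 1.0, []
    for z in zs:
        P += q                      # predict (F = 1, no input modeled)
        K = P / (P + r)             # Kalman gain
        est += K * (z - est)        # measurement update
        P *= (1.0 - K)
        out.append(est)
    return np.array(out)

naive = kalman_1d(zs, q_true, r)            # ignores the hidden input entirely
inflated = kalman_1d(zs, q_true + 0.04, r)  # folds u_t into a larger Q

print(np.mean((naive - xs) ** 2), np.mean((inflated - xs) ** 2))
```

In this particular run the inflated-$Q$ filter has the smaller mean-squared error: trusting the measurements more compensates for the unmodeled drift, which is exactly the spirit of the EKF answer above. Note, regarding question 2), that the inflated $Q$ must remain positive so that $P$ stays a valid (positive) variance.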
{ "domain": "dsp.stackexchange", "id": 3267, "tags": "filters, noise, kalman-filters, covariance" }
Controllability: Rank VS Determinant
Question: Assume that we have our state space model: $$\dot{x} = Ax + Bu$$ I have a system matrix $A$ and an input matrix $B$ and if I do in MATLAB: >> rank(ctrb(A, B)) I get the rank of the controllability matrix. But if I do this in MATLAB: >> det(ctrb(A, B)) I can get a nonzero number. That means that the system is controllable. But which is the best method to use? Determine if the system is controllable by using the criterion >> det(ctrb(A, B)) =/= 0 or rank(ctrb(A, B)) = n where $n$ is the length of the state vector, or the dimension of $A$. Answer: In general, the controllability matrix $$C=\begin{pmatrix}B &AB &A^2B &\cdots &A^{n-1}B\end{pmatrix}$$ is not square: $A$ is $n\times n$ whereas $B$ is $n\times m$, resulting in $C$ being $n\times mn$, so its determinant does not exist. The most general criterion is therefore on the rank. However, if $B$ has only one column, then $\det C$ exists. However, the computation of a determinant may not be numerically stable when the matrix is singular or nearly singular. By that I mean that floating-point round-offs may give you a small but non-zero determinant even though the rank is less than $n$, and the determinant should therefore be zero. On the contrary, almost every algorithm to compute the rank I know of is numerically stable (this is the case in Matlab iirc). So even if $B$ has only one column, I would recommend using the rank.
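A small numerical illustration of both points (my own example, using NumPy in place of MATLAB): for a single-input system the controllability matrix happens to be square, but as soon as $B$ has more than one column the determinant does not even exist, while the rank test always applies.

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, A^2 B, ..., A^{n-1} B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Single-input double integrator: controllable.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = ctrb(A, B)
print(np.linalg.matrix_rank(C))   # 2  -> full rank, controllable
print(np.linalg.det(C))           # -1.0 (square only because B has one column)

# Two-input system: C is 2x4, so det(C) is undefined, but the rank still works.
B2 = np.eye(2)
C2 = ctrb(A, B2)
print(C2.shape, np.linalg.matrix_rank(C2))   # (2, 4) 2
```

This mirrors the answer's recommendation: the rank criterion covers every shape of $B$, whereas the determinant is a special case at best.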
{ "domain": "physics.stackexchange", "id": 41078, "tags": "simulations, linear-systems" }
Adjoint of the Quantum Momentum Operator
Question: I'm studying quantum mechanics and I have a question about the momentum operator. We have that the momentum operator is given by \begin{equation*} p = -i\hbar\nabla \end{equation*} and so its adjoint is given by \begin{equation*} p^{\dagger} = i\hbar\nabla^{\dagger}. \end{equation*} The momentum operator is also self-adjoint, so \begin{equation*} -i\hbar\nabla = i\hbar\nabla^{\dagger} \quad\Leftrightarrow\quad \nabla^{\dagger} = -\nabla \end{equation*} in some sense. However, I haven't run across any explanation of in what sense such a relation might hold, or if I'm just completely incorrect. I appreciate any guidance on this! Edit: My thought process is to examine $\nabla$ in a weak sense in Hilbert space, keeping in mind the boundary conditions imposed on wave functions, but I haven't made any useful conclusions. Edit: The domain on which I mean the momentum operator to act is the set of quantum states. Answer: Consider a vector space $V$ with an inner product $\langle\cdot,\cdot\rangle:V\times V\rightarrow\mathbb{C}.$ Given an operator $A:V\rightarrow V$, the adjoint is defined as the unique$^1$ operator satisfying $$\langle A^\dagger\phi,\psi\rangle:=\langle\phi, A\psi\rangle\tag{1}\label{1} \quad\forall\phi,\psi\in V.$$ In OP's case the vector space is the Hilbert space $L^2(\mathbb{R}^3)$ with the inner product $$\langle\phi,\psi\rangle:=\int_{\mathbb{R}^3}\phi^\ast(\vec{r})\psi(\vec{r})d^3\vec{r}\tag{2}\label{2}$$ Let us now rewrite the RHS of \eqref{1} with $A=\nabla$ and see if we can recast it in a form analogous to the LHS.
$$\langle\phi, \nabla\psi\rangle=\int_{\mathbb{R}^3}\phi^\ast(\vec{r})\nabla\psi(\vec{r})\,d^3\vec{r}=\underbrace{\oint_{\partial\mathbb{R}^3}\phi^\ast(\vec{r})\psi(\vec{r})\,d\vec{S}}_{=0}-\int_{\mathbb{R}^3}[\nabla\phi^\ast(\vec{r})]\psi(\vec{r})\,d^3\vec{r}\tag{3}\label{3}:=\langle (-\nabla)\phi,\psi\rangle.$$ Where in the first step we have used integration by parts (the divergence theorem) together with the requirement that the wavefunctions vanish at infinity, so the boundary term is zero. Comparing \eqref{1} and \eqref{3}, we conclude that $$\nabla^\dagger=-\nabla\tag{4}\label{4}$$ in $L^2(\mathbb{R}^3)$. $^1$ The situation is more delicate for infinite dimensional spaces, in which we should be interested in the present case as $L^2(\mathbb{R}^3)$ is infinite-dimensional, and more should be said about the domain of the adjoint operator. In other words, we're not being rigorous here. The definition I gave works fine in the finite dimensional case, though.
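The integration-by-parts step can be checked symbolically in one dimension (my own example functions, chosen so that they decay at infinity and the boundary term vanishes):

```python
import sympy as sp

x = sp.symbols('x', real=True)
phi = sp.exp(-x**2)          # test functions that vanish at infinity,
psi = x * sp.exp(-x**2)      # so the boundary term is zero

# <phi, d/dx psi>  versus  <(-d/dx) phi, psi>
lhs = sp.integrate(sp.conjugate(phi) * sp.diff(psi, x), (x, -sp.oo, sp.oo))
rhs = sp.integrate(sp.conjugate(-sp.diff(phi, x)) * psi, (x, -sp.oo, sp.oo))

print(sp.simplify(lhs - rhs))   # 0, confirming the adjoint relation on these states
```

This is exactly equation (3) in one dimension: the derivative moves to the other slot of the inner product at the cost of a minus sign.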
{ "domain": "physics.stackexchange", "id": 94385, "tags": "quantum-mechanics, operators, hilbert-space, momentum, mathematical-physics" }
Why are hail storms always brief?
Question: It can rain all day, and it can snow for hours on end, but hail always seems to fall for short periods only. Why don't you get prolonged periods of hail? Answer: Hail is created within a thunderstorm by updrafts that carry water droplets to an altitude where temperatures are lower than the freezing point for water (32°F or 0°C). This freezing process forms a hailstone. This process can repeat itself each time increasing the size of the hailstone until the hailstones become too heavy to be picked up again. The creation of hail requires high wind speeds that may only be sustained in the storm for a brief period of time. Longer periods of hail are possible but typically require large strong storms such as tornadic supercell thunderstorms. References: How hail is created. Size of Hail vs wind speed
{ "domain": "earthscience.stackexchange", "id": 1023, "tags": "meteorology" }
Reversing physics law in classical mechanics?
Question: Leonard Susskind in one of his videos mentions: Link - It includes the timestamp so you don't have to wind. "Simply starting some place, letting it evolve for a long period of time and then letting it evolve with the reverse law of physics: if you come back to the same place every time, then your law of physics is deterministic." He says this for the example of a 3-state system (Head, Tail, Feet). The law of physics is H->T->F, but with a very small probability, one in a million, at some instant of time, if the current state is H, it can just say: "I will hold still". Due to this, what could happen is the following sequence: HTFHTFHHTF... (note the two consecutive H's in the sequence). Now, what I actually understand is that if you let the system evolve for 10 million units of time, and then decide to reverse, there's a chance you won't get back to the original starting point (H). You might end up at F or T. In this case, we can't call the system or law deterministic. But now, read the quote from Leonard again. In our example, there's still a chance that reversing the system could get back to the original configuration (H). If so, would it be deterministic as Leonard says? I don't think so, because if you reverse, you go backwards one step at a time and there will be instants of time where you will think it's F, for example, but in reality it was T. The funny thing is, though, that you won't be able to detect each value at each instant of time. Going backwards, you will write down the sequence FTHFTHFTH because you don't know where the fluctuation happened, so you won't get every value/state correctly. I thought determinism was: if you have a system and you let it evolve for 10 instants of time, when you go backwards, you should be able to derive each state at each instant of time (in our case 10). But Leonard only says: just getting back to the original is enough. What don't I understand?
Answer: I think you got the essential ideas correctly, but Professor Susskind didn't mean to imply that just getting back to the original state is enough, as you say, to be certain that a law of physics is deterministic. Rather, evolving the present state with the reverse law, and comparing the final result to the original state, is a way to test whether the system is deterministic; it is a necessary condition but not a sufficient one. I think that the reason he chose to focus on a situation where we let a system evolve for a long time and then apply the reverse law for that length of time, is that in reality the state of a system becomes more and more complex and "drifts" apart farther from its initial state, which wouldn't really apply to the three states system of $\text{HTF}$ he describes. The space of states of most physical systems, even simple ones like atoms, is immensely larger. So as he mentions, if we only allow evolution in a very small time interval, it isn't a great way to test reversibility because it may be that just by accident the system didn't have a chance to interact with its environment and hence change state, so in such a case it may appear to be reversible, and hence possibly also deterministic. As an aside, what Prof. Susskind means by a "deterministic" law in this lecture is really one which is both deterministic & reversible, or in other words deterministic both into the past and future. One can imagine a law that is future deterministic but irreversible.
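The HTF example can be made concrete with a short simulation (my own illustration, not Prof. Susskind's code): run the law H->T->F forward with one "hold still" glitch injected, then apply the exact reverse law for the same number of steps, and the system fails to return to its initial state.

```python
FORWARD = {"H": "T", "T": "F", "F": "H"}   # the law H -> T -> F -> H -> ...
REVERSE = {v: k for k, v in FORWARD.items()}

def evolve(state, steps, law, glitch_at=None):
    """Apply the law repeatedly; at step `glitch_at` the state holds still."""
    for t in range(steps):
        state = state if t == glitch_at else law[state]
    return state

start = "H"
clean = evolve(evolve(start, 10, FORWARD), 10, REVERSE)
glitched = evolve(evolve(start, 10, FORWARD, glitch_at=4), 10, REVERSE)

print(clean)     # 'H'  -- with no glitch, the reverse law returns to the start
print(glitched)  # 'F'  -- one rare glitch, and reversal misses the initial state
```

Passing the forward-then-reverse test is what a deterministic, reversible law must do; a single stochastic fluctuation is enough to fail it.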
{ "domain": "physics.stackexchange", "id": 95595, "tags": "classical-mechanics, determinism" }
In Einstein's "relativity of simultaneity" thought experiment, would not the passenger on the train see a dimmer signal?
Question: This is the updated, more precise question--is this a paradox?: Suppose a rocket traveling close to the velocity of light which emits a single photon from its midpoint at point A, illustrated below. The rocket is equipped with a single detector drawn in green at the front of the rocket. The velocity of light is independent of the velocity of the source, and thus an earthbound observer will note the photon's spherically-symmetric probabilistic wavefront expanding in the form of the larger red circle C. An observer on the rocket will note the photon's spherically-symmetric probabilistic wavefront expanding in the form of the smaller black circle D. Let us run this single-photon experiment numerous times. Because the detector illustrated in green occupies a larger portion of the smaller Circle D, the observer on the spaceship will see the photon detected more often by the detector than will the earthbound observer. Because the detector illustrated in green occupies a smaller portion of circle C, the earthbound observer will see the photon detected less often at the detector than the rocket's observer. One could imagine surrounding both Circle C and Circle D with similar detectors along the entire circumference. One could perform the single-photon experiment numerous times on numerous trips, using only the detectors on Circle C or only the detectors on Circle D. On average, the earthbound observer will see the photon hit the illustrated green detector less often than will the observer on the rocket. Can both the observer on earth and the rocket be right? Is there not a paradox here? Suppose two flashes of light of equal intensity. If one is measured from further away than the other, it will appear dimmer. And so it is that the passenger on Einstein's train will see the lightning flash behind him to be dimmer than the one in front of him.
In Einstein's book on relativity, he writes, We suppose a very long train travelling along the rails with the constant velocity v and in the direction indicated in Fig 1. People travelling in this train will with advantage use the train as a rigid reference-body (co-ordinate system); they regard all events in reference to the train. Then every event which takes place along the line also takes place at a particular point of the train. Also the definition of simultaneity can be given relative to the train in exactly the same way as with respect to the embankment. As a natural consequence, however, the following question arises : Are two events (e.g. the two strokes of lightning A and B) which are simultaneous with reference to the railway embankment also simultaneous relatively to the train? We shall show directly that the answer must be in the negative. When we say that the lightning strokes A and B are simultaneous with respect to be embankment, we mean: the rays of light emitted at the places A and B, where the lightning occurs, meet each other at the mid-point M of the length A → B of the embankment. But the events A and B also correspond to positions A and B on the train. Let M1 be the mid-point of the distance A → B on the travelling train. Just when the flashes (as judged from the embankment) of lightning occur, this point M1 naturally coincides with the point M but it moves towards the right in the diagram with the velocity v of the train. If an observer sitting in the position M1 in the train did not possess this velocity, then he would remain permanently at M, and the light rays emitted by the flashes of lightning A and B would reach him simultaneously, i.e. they would meet just where he is situated. Now in reality (considered with reference to the railway embankment) he is hastening towards the beam of light coming from B, whilst he is riding on ahead of the beam of light coming from A.
Hence the observer will see the beam of light emitted from B earlier than he will see that emitted from A. Observers who take the railway train as their reference-body must therefore come to the conclusion that the lightning flash B took place earlier than the lightning flash A. We thus arrive at the important result: Events which are simultaneous with reference to the embankment are not simultaneous with respect to the train, and vice versa (relativity of simultaneity). Every reference-body (co-ordinate system) has its own particular time; unless we are told the reference-body to which the statement of time refers, there is no meaning in a statement of the time of an event. And so it is that the passenger on Einstein's train will see the lightning flash behind him to be dimmer than the one in front of him. Is this not true? Indeed if the passenger is traveling very close to c, the flash from behind him will appear to be very, very dim, as the intensity of light falls off as $1/r^2$. Is this not true? Let us replace the two lightning strikes with light bulbs which the stationary observer standing at M observes to flash at the exact same time, just like the lightning strikes did. Will not the observer on the train conclude that the lightbulb behind him is dimmer than the one in front of him? (@knzhou answers "Yes" below in the comments.) Suppose then we consider a traveler on a spaceship with two light bulbs at either end, replacing the lightning strikes. The space ship is traveling at .9 c relative to the earth. Will the traveler not see the flash from the light bulb behind him to be dimmer? Answer: A quick reprise of the situation: This is the view in the embankment rest frame at $t=0$. We'll take the train to have a length $2d$ and we'll choose the origins so the middle of the train is at the origin in both frames.
To find out where the flashes occur in the train frame we use the Lorentz transformations and the results are: $$ F^\prime_{\,1}(t,x) = \left(\gamma\frac{vd}{c^2}, -\gamma d\right) $$ $$ F^\prime_{\,2}(t,x) = \left(-\gamma\frac{vd}{c^2}, \gamma d\right) $$ So according to the passenger on the train the distance to both flashes is the same, but flash $2$ happens before flash $1$ so the passenger sees the light from $F_2$ before he sees the light from $F_1$. The thought experiment is really intended to show breakdown of simultaneity i.e. that in the embankment frame the flashes are simultaneous while in the train frame they are not. However we can extend the experiment to consider intensity as well. In the train frame the flashes occur at an equal distance, so the light from them travels an equal distance and therefore the $1/r^2$ factor is the same for both. In the embankment frame the light travels different distances because the light from $F_1$ travels a longer distance to reach the passenger than the light from $F_2$ does, so the $1/r^2$ factor is different for the two flashes. How do we explain the difference? The solution is simply that for the passenger on the train the light is Doppler shifted because the flashes are moving relative to him. Remember that Doppler shift changes intensity as well as frequency. Although the passenger sees the light from both flashes travelling the same distance, he sees $F_1$ to be red shifted and less intense while $F_2$ is blue shifted and more intense. Now we consider the situation where you replace the lightning flashes with light bulbs that are stationary with respect to the train. Now the passenger sees both flashes as equal brightness. How do we explain what the embankment observer sees? And again the solution is just the Doppler shift. Now the embankment observer sees $F_1$ to be blue shifted and $F_2$ to be red shifted i.e. $F_1$ is brighter than $F_2$. 
So even though the light from $F_1$ has to travel farther, it's brighter to start with.
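The Lorentz transforms quoted in this answer are easy to check numerically. A minimal sketch (not part of the original answer; the values of $v$ and $d$ are illustrative, with units chosen so that $c = 1$):

```python
import math

# Check the transforms of the two flash events: in the embankment frame both
# occur at t = 0, at x = -d (rear, F1) and x = +d (front, F2).
c, v, d = 1.0, 0.6, 1.0
gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)

def to_train_frame(t, x):
    """Lorentz-transform an embankment event (t, x) into the train frame."""
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

t1, x1 = to_train_frame(0.0, -d)   # flash F1 at the rear of the train
t2, x2 = to_train_frame(0.0, +d)   # flash F2 at the front
# Expect F'_1 = (+gamma*v*d/c^2, -gamma*d), F'_2 = (-gamma*v*d/c^2, +gamma*d):
# equal distances from the passenger, but F2 strictly earlier than F1.
```

With $v = 0.6c$ this gives $t'_1 = +0.75\,d/c$ and $t'_2 = -0.75\,d/c$ at equal distances $\pm\gamma d$, exactly the displayed formulas: simultaneity breaks down even though the distances agree.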
{ "domain": "physics.stackexchange", "id": 32166, "tags": "special-relativity, spacetime, time-dilation, relative-motion, thought-experiment" }
Why does the 'Jacobian of at least one combination of $n$ functions shall be different from zero'?
Question: I've started reading The Variational Principles of Mechanics by Cornelius Lanczos; here is the concerned excerpt from p. 11: The generalized coordinates $q_1,q_2,\ldots, q_n$ may or may not have a geometrical significance. It is necessary however that the functions $$x_1= f_1(q_1,q_2,\ldots, q_n),\\ .......................\\ .......................\\ z_N= f_{3N}(q_1,q_2,\ldots, q_n).$$ shall be finite, continuous and differentiable, and that the Jacobian of at least one combination of $n$ functions shall be different from zero. These conditions may be violated at certain singular points, which have to be excluded from consideration. ... While I could get that the functions must be 'finite, continuous and differentiable' but couldn't get the condition that the 'Jacobian of at least one combination of $n$ functions shall be different from zero'. Can anyone tell me what is the necessity of this condition? What does this actually mean? Or why do the functions need to follow this? Answer: The conditions about (i) differentiability of the functions and (ii) the maximal rank of the corresponding rectangular Jacobian matrix are regularization conditions imposed to simplify the mathematical analysis of the physical problem, in particular to legitimate the possible future use of the inverse function theorem. In the affirmative case, the functions are called independent. See also this related Phys.SE post. Physical systems that do not meet these regularization conditions are more difficult to analyse.
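A small illustration of the nonzero-Jacobian condition (my example, not Lanczos's): for the polar map $x = q_1\cos q_2$, $y = q_1\sin q_2$ the Jacobian determinant is $q_1$, so the map is a legitimate coordinate change away from $q_1 = 0$, and the origin is precisely the kind of singular point that has to be excluded.

```python
import math

def jacobian_det(q1, q2, h=1e-6):
    """Central-difference estimate of det(d(x, y)/d(q1, q2)) for the polar map."""
    f = lambda a, b: (a * math.cos(b), a * math.sin(b))
    dx_dq1 = (f(q1 + h, q2)[0] - f(q1 - h, q2)[0]) / (2 * h)
    dy_dq1 = (f(q1 + h, q2)[1] - f(q1 - h, q2)[1]) / (2 * h)
    dx_dq2 = (f(q1, q2 + h)[0] - f(q1, q2 - h)[0]) / (2 * h)
    dy_dq2 = (f(q1, q2 + h)[1] - f(q1, q2 - h)[1]) / (2 * h)
    return dx_dq1 * dy_dq2 - dx_dq2 * dy_dq1
```

Wherever the determinant is nonzero the $q$'s can locally be recovered from the $x$'s (inverse function theorem); at $q_1 = 0$ it vanishes, and indeed the angle $q_2$ is not recoverable there.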
{ "domain": "physics.stackexchange", "id": 30880, "tags": "classical-mechanics, lagrangian-formalism, differential-geometry, constrained-dynamics" }
Bounded energy states for the infinite well with delta potential at origin
Question: So I have this potential $$ V(x) = \begin{cases} -a\delta(x) & -b < x < b \\ \infty & |x| \geq b \end{cases} $$ Solving the time-independent Schrödinger equation for bound energy states $(E<0)$ gave me: $$ \psi_1(x) = Ae^{kx} + Be^{-kx}\mbox{, }-b<x<0 \\ \psi_2(x) = Ce^{kx} + De^{-kx}\mbox{, }0<x<b $$ Where $k=\sqrt{\frac{-2mE}{\hbar^2}}$ and $\psi(x) = 0$ elsewhere. Then, I also got these boundary conditions: $$ \psi_1(x\rightarrow 0) = \psi_2(x\rightarrow 0) <=> A + B = C+D \tag{1} $$ $$ \frac{d\psi_2}{dx}\Big|_{\epsilon^+} - \frac{d\psi_1}{dx}\Big|_{\epsilon^-} = \frac{2m}{\hbar^2}\int_{\epsilon^-}^{\epsilon^+}dx\,\delta(x)\psi(x) <=> k(C-D-A+B) = \frac{2m}{\hbar^2}\psi(0) \tag{2} $$ $$ \psi(b) = \psi_2(b) = Ce^{kb} + De^{-kb} = 0 \tag{3} $$ $$ \psi(-b) = \psi_1(-b) = Ae^{-kb} + Be^{kb} = 0 \tag{4} $$ From these, I got $A=D$ and $B=C$, where $B=-Ae^{-2kb}$. Now, when I plugged these into (2), I got $$ \tanh(kb) = -\frac{\hbar^2k}{m} $$ where, if I let $z = kb$, I have $$ \tanh(z) = -\frac{\hbar^2z}{mb} $$ for which only $z = 0$ is a solution, implying that $k=0$ and $\psi_1(x)=\psi_2(x)=0$. What did I miss? I can't find where I'm messing up. Edit: I missed the $-a$ of the potential. Plugging it in, I get this final equation: $$ \tanh(kb) = \frac{k\hbar^2}{ma} $$ which has a solution under a constraint. If we let $z=kb$: $$ \tanh(z) = \frac{\hbar^2}{mab}z $$ then we have a solution if $\frac{\hbar^2}{mb}<a$. Answer: I think your analysis is correct in that there are no nontrivial eigenstates with $E \leq 0$. See this answer: https://physics.stackexchange.com/a/80120/37496 However because of the infinite potential well we only need $E < \infty$ for bound states, so even positive energy states will be bound states. This is the same as in the case of the infinite square well with no delta potential, where all of the eigenstates are bound states with positive energy.
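The corrected transcendental equation from the question's edit can be checked numerically. A sketch in dimensionless form, writing it as $\tanh z = sz$ with $s = \hbar^2/(mab)$ (the value of $s$ below is illustrative):

```python
import math

def bound_state_z(s, hi=50.0, tol=1e-12):
    """Positive root of tanh(z) - s*z = 0 by bisection; None if s >= 1."""
    if s >= 1.0:
        return None                  # only z = 0 solves it: no E < 0 state
    g = lambda z: math.tanh(z) - s * z
    lo = 1e-9                        # g > 0 just to the right of the origin
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

z = bound_state_z(0.5)    # s = 1/2 < 1, so a nontrivial root exists
```

The threshold $s < 1$ is exactly the constraint $\frac{\hbar^2}{mb} < a$ stated at the end of the question: the line $sz$ must start below the slope of $\tanh z$ at the origin for the curves to cross away from $z = 0$.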
{ "domain": "physics.stackexchange", "id": 68798, "tags": "quantum-mechanics, homework-and-exercises, potential, schroedinger-equation" }
Using Flow graph to find maximum matching
Question: I recently submitted an answer to the following question (homework in algorithms course): A guy has m shirts, n pants, and p belts. He wants to make the maximum amount of outfits while abiding by these rules: Every item can appear at most in one outfit Not every pant can go with every shirt (we have an n x m boolean matrix representing legal combinations) Not every pant can go with every belt (again we have an n x p boolean matrix) My attempt: create a bipartite graph where V1 = { v | v is a pair of a shirt and a belt }, V2 = { v | v is a pant }. We draw an edge between a vertex v from V1 and vertex u from V2 only if the three make a legal combination. After constructing the graph we create a source connected to all the vertices of V1 and a sink connected to all the vertices of V2 and use the Ford-Fulkerson algorithm to find the maximum flow, which yields the maximum matching of the graph. My T.A. said this was wrong since it won't give me the correct answer, but didn't explain why. Is he right? If he is, where is the fallacy of my solution? (I'm not looking for a better solution, I have seen a better one already, just want to know if this particular one is correct) Answer: Your TA is correct. Your algorithm is wrong. Here is a simple counterexample for your algorithm. We have two shirts, two pants and one belt. All combinations are legal. The bipartite graph has 2 nodes representing two shirt-belt pairs and 2 nodes representing two pants. There is an edge between any shirt-belt pair and any pant. The maximum matching of the bipartite graph consists of two matches. However, since there is only one belt, we can make at most one outfit.
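The counterexample can be verified by brute force. A sketch (the encoding is mine, not from the original answer): 2 shirts, 2 pants, 1 belt, every combination legal.

```python
from itertools import combinations, product

shirts, pants, belts = [0, 1], [0, 1], [0]
triples = list(product(shirts, pants, belts))    # all legal outfits

def max_outfits():
    """Largest set of outfits sharing no shirt, pant, or belt."""
    for k in range(len(triples), 0, -1):
        for combo in combinations(triples, k):
            # each of the 3 item types must be used at most once per outfit
            if all(len({t[i] for t in combo}) == k for i in range(3)):
                return k
    return 0

# The proposed bipartite graph: V1 = shirt-belt pairs, V2 = pants; here every
# pair-pant edge exists, so the maximum matching has size min(|V1|, |V2|).
matching_size = min(len(shirts) * len(belts), len(pants))
```

The matching of size 2 pairs (shirt 0, belt 0) with pant 0 and (shirt 1, belt 0) with pant 1, but both matches consume the same belt, which is exactly the fallacy: the matching counts the belt twice, while only one real outfit exists.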
{ "domain": "cs.stackexchange", "id": 19630, "tags": "algorithms, graphs, network-flow, bipartite-matching" }
Hot Air Balloon and Buoyancy
Question: This is a conceptual question in a solution I am trying to understand. Problem statement: I have a balloon with a volume of V $m^3$. The outside air temp is $K$ kelvin and mass to lift is $m$ kg. I am to find the temperature inside the balloon to barely lift the given mass. The formula used is $F_{b}=ρgV$ Solution: I know that in order for the balloon to lift $F_b > F_g$ (subscript $b$ = buoyancy and subscript $g$ is gravity force) Using $F_{b}=ρgV$ I tried taking the differences of the two air densities and setting it equal to the mass I need to lift. As seen here. $(ρ_{air}-ρ_{balloon})Vg=mg$ Concern: Since $ρVg=F_{b}$ I understand this as there are THREE acting forces; One inside the balloon and one outside the balloon (which is in the NEGATIVE y-direction (drag?)) and mg. I find this strange since the balloon is not moving. Answer: The force balance There are indeed 3 contributions to the net force. You have gravity acting on the mass you want to lift so $F_{g,1}=mg$. However you are also lifting the air inside the balloon: $F_{g,2}=m_{balloon} g= \rho_{balloon}V g$. At the same time the displaced outside air pushes the balloon up with the buoyant force $F_{g,3}=m_{air} g = \rho_{air}V g$ (the weight of the displaced air). So: $(\rho_{air}-\rho_{balloon})V g=m g$ which yields $$\rho_{balloon}=\rho_{air}-\frac{m}{V}$$ You can interpret the term with the difference in densities in two ways, you can view it as 1 force which becomes 0 for equal densities or you can view it as 2 forces which balance when densities are equal. The latter is the standard convention, but I think your confusion stems from the former interpretation. The temperature inside your balloon Your question doesn't completely stop there, because now you want to know to what temperature you should go. The simplest assumption to start with is to assume that air behaves as an ideal gas which means: $$ P M = \rho R T $$ where $P$ is the pressure, $M$ the molecular mass, $\rho$ the density, $R$ the gas constant and $T$ the temperature.
The gas in the balloon is heated, but it can still equilibrate pressure with the outside air because of the opening. Obviously the gas constant and the molecular mass don't change so we can find the density in the balloon as $$\rho_{balloon}= \frac{P M}{R T_{balloon}} $$ For the surrounding air we can do something similar: $$\rho_{air}= \frac{P M}{R T_{air}} $$ So our equation becomes $\frac{P M}{R T_{balloon}} =\frac{P M}{R T_{air}}-\frac{m}{V}$ which can be rewritten for the temperature in the balloon to: $$T_{balloon} =\left(\frac{1}{T_{air}}-\frac{mR}{PVM}\right)^{-1}$$ This equation nicely shows that an increase in mass to lift ($m$) will result in a higher temperature for the air in the balloon, and, for example, also that at a higher pressure (nice weather) a lower temperature is necessary to lift the same mass than on a bad-weather day.
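A worked numeric example of the final formula (a sketch; every input value below is an assumption for illustration, roughly a typical hot-air balloon):

```python
R = 8.314         # J/(mol K), gas constant
M = 0.029         # kg/mol, molar mass of air (assumed)
P = 101325.0      # Pa, sea-level pressure (assumed)
V = 2800.0        # m^3, envelope volume (assumed)
T_air = 288.0     # K, outside air temperature, 15 C (assumed)
m = 500.0         # kg, mass to lift (assumed)

T_balloon = 1.0 / (1.0 / T_air - m * R / (P * V * M))

# Consistency check against the force balance derived above:
rho_air = P * M / (R * T_air)
rho_balloon = P * M / (R * T_balloon)
lift = (rho_air - rho_balloon) * V    # kg the balloon can just lift
```

With these numbers `T_balloon` comes out near 337 K (about 64 C), and `lift` reproduces `m`, as it must by construction.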
{ "domain": "physics.stackexchange", "id": 7152, "tags": "homework-and-exercises, thermodynamics, fluid-dynamics" }
Half wave plate and angular momentum
Question: Given: A half wave plate freely floating in space. Circularly polarized light, falling perpendicularly to it. The plate changes polarisation of the beam to the opposite one. Therefore it receives angular momentum and starts to rotate. Where does the energy come from? What would happen with a single photon passing through the plate? Answer: Summary: the energy change is negligible and if it is not, the energy difference comes from a frequency change of the photon. One must realize that unless the plate was already quickly rotating before the experiment, the energy stored in the rotation of the plate at the end is negligible relative to the energy of the photon for the same reason why the electron carries most of the kinetic energy in the hydrogen atom even though both electron and proton are orbiting around the shared center-of-mass. In the latter case, the energy of a particle goes like $p^2/2m$, so for a fixed value of $p^2$ - and indeed, the proton's and electron's $p$ only differ by a sign - the lighter particle carries much more kinetic energy. This is totally true for the rotational motion as well. The kinetic energy of rotation is $J^2/2I$ where $J$ is the angular momentum and $I$ is the moment of inertia. For a single photon, the angular momentum relative to the direction of the photon's motion, $J_1=-\hbar$, is changed to $J_2=+\hbar$. The overall change is $\Delta J=+2\hbar$ which becomes the angular momentum of the plate. When you square it, you clearly get a negligible number - that is divided by an $O(1)$ value of the moment of inertia. So the energy obtained in the form of the increased rotation, caused by a single photon's change of the polarization, is tiny. If you study where this small amount comes from, you will find out that the photon's frequency is decreased by a tiny fraction so that its reduced energy exactly compensates the increase of the kinetic energy of the plate.
But we can only see this change as nonzero if we allow the plate to have an arbitrary angular frequency before the experiment. It's not too difficult to explicitly check that this statement works: if $J$ jumps by $2\hbar$, the energy $J^2/2I$ jumps by $J\times \Delta J/I=2J\hbar/I$. Because $E=\hbar\omega_\gamma$ for photons, the energy conservation law requires that the frequency of the photon decreases by $\delta \omega = -2J/I$ where $J$ is the plate's angular momentum before it changed the photon's polarization. But indeed, $-2J/I=-2\omega$ where $\omega$ is the angular frequency of the plate before the experiment. That's exactly the frequency change that the photon experiences. Why? Because the periodicity of the photon is conserved if measured relatively to the rotating plate - so its angular frequency changes from $|-\omega_\gamma|$ to $|+\omega_\gamma|$. But relatively to the inertial system, it's changed from $|-\omega_\gamma-\omega|$ to $|+\omega_\gamma-\omega|$ i.e. it drops by $2\omega$. It's just like comparing sidereal days and solar days. What would happen with a single photon? I was working with a single-photon case from the beginning because it's a reasonable approach. A large electromagnetic wave may always be represented as a coherent state of many photons, so one just multiplies the results for a single photon. Of course, for a single photon, the "electromagnetic wave" must actually be represented as a wave function determining the probability amplitudes. At any rate, it is true that such wave plates may guarantee that every single photon that enters with a particular polarization leaves with another polarization. If you measure the "exact" polarizations of the final photon, you will get the right one with probability of 100 percent. If you measure a different one, you may get odds between 0 and 100 percent. Every photon will be ultimately detected at a single point; the wave function knows about the odds.
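The first-order bookkeeping in this argument can be checked with a quick numeric sketch (the plate parameters below are illustrative assumptions):

```python
hbar = 1.054571817e-34   # J s
I = 1.0e-6               # kg m^2, moment of inertia of the plate (assumed)
omega = 10.0             # rad/s, plate spin before the photon (assumed)
J = I * omega            # plate angular momentum before the experiment

dJ = 2 * hbar                 # photon angular momentum flips -hbar -> +hbar
dE_plate = J * dJ / I         # first-order rotational energy gain, J*dJ/I
remainder = dJ**2 / (2 * I)   # exact extra term beyond first order
dE_photon = -2 * hbar * omega # photon frequency drops by 2*omega
```

`dE_plate` and `-dE_photon` agree, confirming that the $2\omega$ frequency drop of the photon pays exactly for the plate's first-order spin-up; the neglected $(\Delta J)^2/2I$ remainder is some 29 orders of magnitude smaller for these values, which is why the answer can drop it.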
{ "domain": "physics.stackexchange", "id": 409, "tags": "optics, angular-momentum, energy-conservation, polarization" }
General formulation of "X transforms like an X"
Question: The phrase "a tensor is something that transforms like a tensor" has been discussed several times on this site. I'm comfortable with both the mathematical formalism and the physical applications. Even though the physical formalism can be made precise using the mathematical definition of a tensor using multilinearity, the situation is unsatisfactory for me. In particular, "a tensor is something that transforms like a tensor" very strongly implies the following: There is a big space of "something". This space is equipped with an action of coordinate changes. The linear space of tensors is a subspace of this, characterized by the action behaving a specific way. The mathematical definition directly constructs the space of tensors in an intrinsic way, without reference to the "something" space at all. This is a discrepancy I'd like to look into. So the question is: Is there a mathematical definition that follows the physical definition more closely? Another illustrative example: We say that "the difference of two connections is a tensor". What does that mean? Note that connections don't form a linear space, so this has to occur in some larger linear space, such that the affine subspace of connections is parallel to the vector subspace of tensors. Can we define this in a non-ad-hoc way? Some extensions of the question: can we do this without mentioning coordinate charts, and can this be extended to spinors? Answer: I don't think your implications are the right way to think about this kind of operational definition. Rather, we should consider this as the way physicists talk about the local data attached to various geometric objects over spacetime. In differential geometry, we have the general notion of a fiber bundle $\pi : B\to M$ over a manifold $M$. Almost all objects we usually talk about live "in" such bundles - they're sections of them or structures on them.
For such bundles, we have that $\pi^{-1}(x)\cong F$ for whatever the fiber $F$ is - a vector space, a space of tensors, etc. While globally $B$ carries generally more structure than just attaching a copy of $F$ to every point in $M$, locally there exist trivializations of these bundles: A cover of $M$ by open sets $U_i\subset M$ for which $\pi^{-1}(U_i) \cong U_i\times F$, i.e. if you only look at one of the $U_i$, the bundle is just attaching a copy of $F$ to every point in $U_i$ in the straightforward way. Those trivializations come with transition functions $t_{ij} : U_i\cap U_j \to \mathrm{GL}(F)$, where by $\mathrm{GL}$ I mean the appropriate set of invertible structure-preserving transformations of the fiber, e.g. the linear invertible transformations of a vector space when $F$ is a vector space. These functions carry the information about the global structure of $M$ and $B$ - they tell us how to glue together $B$ again from this local data: For any $x\in U_i\cap U_j$, we should consider $(x,f)\in U_i\times F$ and $(x,t_{ij}(x)f)\in U_j\times F$ to be the same thing. 
This procedure can be reversed (and is sometimes known as the "cocycle construction" of bundles): We can start with a bunch of $U_i$, attach our desired fiber $F$ to them, specify a bunch of $t_{ij}$, then define $B$ to be the disjoint union $\bigsqcup_i U_i\times F$ quotiented by the relation $$(x_i,f_i)\sim (x_j, f_j) \iff x_i = x_j \quad \land \quad f_i = t_{ij}(x_j)f_j.$$ Now I claim that the specification of the $t_{ij}$ is equivalent to what the physicists are doing when they define objects by their transformation behaviour: When we say that "a vector" $v^\mu$ transforms like $v^\mu \mapsto J^\mu_\nu v^\nu$ under "coordinate transformations", where $J$ is the Jacobian of the transformation, in this language what we mean is that we're building a vector bundle with fiber $\mathbb{R}^n$ - the tuples of numbers $v^\mu$ - by specifying that when $U_i$ and $U_j$ are two different coordinate charts, and $\phi_{ij} : U_i\cap U_j \to U_i \cap U_j$ is the coordinate transformation between them, then the transition function between $U_i\times \mathbb{R}^n$ and $U_j\times \mathbb{R}^n$ is given by the Jacobian of $\phi_{ij}$, $J(\phi_{ij}) : U_i\cap U_j \mapsto \mathrm{GL}(\mathbb{R}^n)$. This generalizes straightforwardly to tensors (building bundles with fibers $\mathbb{R}^n\otimes\dots\otimes \mathbb{R}^n\cong \mathbb{R}^{n^m}$), gauge fields/connections (fiber is the Lie algebra of the gauge group, the transition functions are the behaviour under local gauge transformations), etc. Statements like "the difference of two connections is a tensor" just means examining the way the difference of two connections is acted on by the transition functions - it turns out the "non-tensorial" part of the gauge transformation drops out of the difference, since it is independent of the specific value of the connections, and so this lives naturally in the bundle we defined with the tensorial transition functions.
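A concrete toy instance of this transition-function picture (my illustration, not part of the answer): on the punctured plane, the tangent-bundle transition function between the polar chart $(r,\theta)$ and the Cartesian chart $(x,y)$ is the Jacobian $J^\mu_\nu$ of the coordinate change, acting on component tuples.

```python
import math

def jacobian_polar_to_cartesian(r, theta):
    """d(x, y)/d(r, theta) at the given point: the transition function."""
    return [[math.cos(theta), -r * math.sin(theta)],
            [math.sin(theta),  r * math.cos(theta)]]

def transform(J, v):
    """v^mu -> J^mu_nu v^nu."""
    return [J[0][0] * v[0] + J[0][1] * v[1],
            J[1][0] * v[0] + J[1][1] * v[1]]

# Velocity of the curve r(t) = 1 + t, theta(t) = pi/3 + t at t = 0 has polar
# components (dr/dt, dtheta/dt) = (1, 1); the transition function produces
# its Cartesian components.
r, theta = 1.0, math.pi / 3
v_polar = [1.0, 1.0]
v_cartesian = transform(jacobian_polar_to_cartesian(r, theta), v_polar)
# Direct differentiation of x(t) = (1+t)cos(pi/3 + t), y(t) = (1+t)sin(pi/3 + t)
# at t = 0 gives (cos - sin, sin + cos) at pi/3 -- the same components.
```

The same tuple of numbers "transforms like a vector" precisely because gluing the two trivializations with this Jacobian is what makes it a well-defined element of the tangent bundle.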
{ "domain": "physics.stackexchange", "id": 99206, "tags": "general-relativity, differential-geometry, coordinate-systems, tensor-calculus, spinors" }
Why do we use the softmax instead of no activation function?
Question: Why do we use the softmax activation function on the last layer? Suppose $i$ is the index that has the highest value (in the case when we don't use softmax at all). If we use softmax and take the $i$th value, it would be the highest value because $e$ is an increasing function, so that's why I am asking this question. Taking argmax(vec) and argmax(softmax(vec)) would give us the same value. Answer: Short answer: Generally, you don't need to do softmax if you don't need probabilities. And using raw logits leads to more numerically stable code. Long answer: First of all, the inputs of the softmax layer are called logits. During evaluation, if you are only interested in the highest-probability class, then you can do argmax(vec) on the logits. If you want a probability distribution over classes, then you'll need to exponentiate and normalize to 1 - that's what softmax does. During training, you'd need to have a loss function to optimize. Your training data contains true classes, so you have your target probability distribution $p_i$, which is 1 at your true class and 0 at all other classes. You train the network to produce a probability distribution $q_i$ as an output. It should be as close to the target distribution $p_i$ as possible. The "distance" measure between two probability distributions is called cross-entropy: $$ H = - \sum p_i \log q_i $$ As you can see, you only need logs of the output probabilities - so the logits will suffice to compute the loss. For example, the keras standard CategoricalCrossentropy loss can be configured to compute it from logits (from_logits=True), and its documentation mentions that: Using from_logits=True is more numerically stable.
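Both points - argmax is preserved by softmax, and working from raw logits is more stable - can be sketched in plain Python (no keras; the function names are mine):

```python
import math

def softmax(logits):
    m = max(logits)                  # shift by the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def log_softmax(logits):
    """Log-probabilities straight from the logits, no explicit softmax."""
    m = max(logits)
    lse = m + math.log(sum(math.exp(z - m) for z in logits))
    return [z - lse for z in logits]

def argmax(xs):
    return max(range(len(xs)), key=lambda i: xs[i])

logits = [2.0, 1000.0, -3.0]    # a naive math.exp(1000.0) would overflow
probs = softmax(logits)
loss = -log_softmax(logits)[1]  # cross-entropy against true class 1
```

The argmax is identical on logits and probabilities, and the cross-entropy comes straight from the logits without ever forming exp(1000), which is the instability the keras docs warn about.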
{ "domain": "ai.stackexchange", "id": 2815, "tags": "neural-networks, activation-functions, softmax, multiclass-classification" }
Difference between CTMC, DTMC, and MDP
Question: I've been reading the Handbook of Model Checking recently; I'm especially interested in probabilistic model checking, so have been led to the PRISM model checker. For background, I am very familiar with TLA+ and use of its model checker for safety & liveness properties. My goal is to specify & model-check a simple probabilistic consensus protocol like snowflake (pdf). One thing which confuses me is the difference between Markov Decision Processes (MDPs), Discrete-Time Markov Chains (DTMCs), and Continuous-Time Markov Chains (CTMCs). I'd assumed probabilistic model-checking was just a way of assigning weights to nondeterministic steps; things like network messages being dropped, nodes crashing, random choice of nodes to send messages to, that sort of thing. Why are these different models necessary? What can you express in one but not the other? I see on this page the documentation says: In every state of the model, there is a set of commands (belonging to any of the modules) which are enabled, i.e. whose guards are satisfied in that state. The choice between which command is performed (i.e. the scheduling) depends on the model type. But aren't we model-checking it in some way where it doesn't matter which action is "performed", since both possible transitions are checked via some breadth-first search mechanism? Answer: The "probabilistic" element in probabilistic model checking is that the system being checked is probabilistic, not that we add probabilities to an existing deterministic or non-deterministic system. Thus, what you are checking is whether a probabilistic system satisfies some property. For example "is it true that with probability at least 0.5, the system will reach an error state?" Now, a probabilistic system is given by a set of states, and transitions between the states. However, this does not give a concrete definition of such systems. Specifically, there are two aspects to be considered: Are the transitions dependent on an action (e.g.
user input, or some environment interaction) Is time discrete or continuous? For property 1, if there is no user input, then the system just progresses with time, and is called a Markov Chain (assuming its behaviour is independent of time). Otherwise, it is called a Markov Decision Process (MDP), where in each step an action is "played", and that determines the probabilities of moving to the next steps. Now for property 2 - if time is discrete, then at every time point the system progresses. This gives rise to Discrete time Markov chains (DTMC, or often called just MC). If time is continuous, then (after modelling specifically what is meant by that), you get Continuous time Markov chains (CTMC). You could also speak of continuous time MDPs, although this model is slightly less common. I'm not sure if PRISM supports it. Different models are used for different settings. For example, CTMCs are common in modelling asynchronous protocols, or chemical reactions, whereas DTMCs are useful for synchronous probabilistic systems.
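A toy sketch of the DTMC/MDP distinction (plain Python, not PRISM's modelling language; the states, probabilities and names are all made up). A DTMC fixes one distribution per state; an MDP lets a state offer several actions, and model checking then quantifies over schedulers, e.g. the maximising one.

```python
dtmc = {
    "ok":    {"ok": 0.9, "retry": 0.1},
    "retry": {"ok": 0.5, "err": 0.5},
    "err":   {"err": 1.0},               # absorbing error state
}

def reach_prob(chain, start, target, steps):
    """Probability of being in `target` after `steps` steps of the DTMC."""
    dist = {start: 1.0}
    for _ in range(steps):
        nxt = {}
        for s, p in dist.items():
            for t, q in chain[s].items():
                nxt[t] = nxt.get(t, 0.0) + p * q
        dist = nxt
    return dist.get(target, 0.0)

# MDP variant: "retry" now offers two actions; a worst-case query asks for the
# maximum one-step error probability over the available actions (a scheduler).
mdp_retry_actions = {
    "giveup": {"err": 1.0},
    "again":  {"ok": 0.5, "err": 0.5},
}
worst_action = max(mdp_retry_actions,
                   key=lambda a: mdp_retry_actions[a].get("err", 0.0))
```

This is why the scheduling choice matters even though "both transitions are checked": for an MDP the answer to a query like "probability of error at most 0.5" depends on which scheduler resolves the nondeterminism, so the checker reports an optimum over all schedulers rather than a single number.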
{ "domain": "cstheory.stackexchange", "id": 4820, "tags": "model-checking, markov-chains, formal-methods, temporal-logic, linear-temporal-logic" }
If 1 photon hits 1 atom, will the departing angle be deterministic?
Question: Consider a single photon fired towards a single atom. After interacting with the atom, the photon heads away from the atom (assume no absorption). Is the angle at which the photon departs deterministic? Answer: No it is not. Any configuration of atom and photon that is kinematically consistent with the initial configuration (i.e. that respects energy-momentum conservation) is a possible state after the interaction. However not every final state is equally likely to occur and it is indeed possible to compute the probability distribution of the angle through QFT techniques, but this is not trivial.
{ "domain": "physics.stackexchange", "id": 66544, "tags": "quantum-mechanics, photons, determinism, randomness" }
Proving a specific language is regular
Question: In my computability class we were given a practice final to go over and I'm really struggling with one of the questions on it. Prove the following statement: If $L_1$ is a regular language, then so is $L_2 = \{ uv |$ $u$ is in $L_1$ or $v$ is in $L_1 \}$. You can't use the pumping lemma for regular languages (I think), so how would you go about this? I'm inclined to believe that it's false because if $u$ is in $L_1$, what if $v$ is non-regular? Then it would be impossible to write a regular expression for it. The question is out of 5 marks though and that doesn't seem like enough of an answer for it. Answer: You can use the pumping lemma to show that a language is not regular. Here, you're trying to prove that if $L_1$ is regular then $L_2$ is regular. The pumping lemma can't be used to prove this implication. It could be used to prove the contrapositive (i.e. you might be able to prove that if $L_2$ is not regular, then $L_1$ is not regular by the pumping lemma). However, even assuming this can work (I haven't checked), it would be a lot more complicated than necessary for this result. You write “what if $v$ is non-regular?” But this doesn't make sense: the concept of regularity applies to a language, not a word. For a given $u$, what language does $v$ have to belong to, in order for $uv$ to be in $L_2$? If that's not enough of a hint, try reasoning with regular expressions. If $L_1$ is regular, then there is a regular expression that characterizes it. How can you use this regular expression to characterize $L_2$?
{ "domain": "cs.stackexchange", "id": 160, "tags": "formal-languages, regular-languages" }
How do you see the stage of the second meiotic arrest in oogenesis in the given video?
Question: My old question raised this new question. After reading this page I can say now that metaphase is the stage in which the second meiotic arrest occurs within oogenesis: The oocyte is arrested again in the metaphase of the second meiotic division. It is difficult to visualize the thing in my head. I could not find a video specifically for oogenesis but probably the general model for meiosis holds true. The problem I am having is that I cannot see the arresting steps in this general meiosis video. How do you see the second meiotic arrest of oogenesis in the above video? I cannot even see clearly the first meiotic arrest. I only know that it is in prophase for the first and metaphase for the second. Answer: I seem to understand the thing now. The video is greatly simplified for animal cell meiosis I and II. In oogenesis, after every anaphase you get one cell with very little cytoplasm (a polar body) and another cell with much cytoplasm. In the video, the amount of cytoplasm is equal, so the picture is idealized. The video is better suited to explaining male gametogenesis (spermatogenesis), since there the amount of cytoplasm does not differ after each anaphase. The video is also going too fast to teach you oogenesis, since in reality the cells are arrested for a rather long time. The second meiotic arrest in oogenesis can last 12-50 years, starting after puberty. The ruptured secondary oocytes then spend far less time in the second meiotic arrest of oogenesis, since they start to develop into an ovum after the monthly release.
{ "domain": "biology.stackexchange", "id": 149, "tags": "meiosis, oogenesis" }
Formation of tetrazoles from ketones in Schmidt reaction
Question: Normally in the Schmidt reaction of ketones, amides are formed; however, for some reason here further reaction takes place and a tetrazole is formed. Here is my try for this reaction mechanism (if my mechanism attempt is wrong please point out the mistakes and the proper mechanism): Relating to the mechanism, I have the following doubts: Is this a special case of reaction or are tetrazoles formed with many different ketone reactions? If not, what makes this case special? What are the driving forces in each step? Is the aromaticity the main reason? Answer: It is true that normally in the Schmidt reaction of ketones, amides are formed as the major product (Ref.1). However, the products of the Schmidt reaction depend highly on the conditions used. The other byproducts include tetrazole and urea derivatives (Ref.2). The mechanism for tetrazole formation given by OP is acceptable but it needs harsh conditions such as the presence of $\ce{NaN3}$ and $\ce{POCl3}$ as the solvent at high temperature (e.g., Ref.3). Actually, during the Schmidt reaction of ketones, the formation of tetrazole can take place even before formation of the corresponding amide. According to the mechanism (Ref.4), there should be a nitrilium ion $(\ce{R1-C#N^+-R2})$ formation, which can be considered as an N-alkylated nitrile structure: We all know that a nitrile reacts with hydrazoic acid $(\ce{HN3})$ in the presence of sulfuric acid to give the corresponding tetrazole (Ref.5). Thus, it can be envisioned that the additional hydrazoic acid molecule may react with the nitrilium ion to produce a tetrazole compound as depicted in Ref.3: Is this a special case of reaction or are tetrazoles formed with many different ketone reactions? It is not a special case of tetrazole formation with cyclohexanone alone. As shown in the mechanism, it is possible to prepare a tetrazole from all kinds of ketones (Ref.2).
However, it is noteworthy that Hjelte and Agback have discussed that the formation of only the tetrazole from $\alpha$-tetralone, even with a 1:1 ketone to hydrazoic acid ratio, is due to the ring size (resistance of the more stable six-membered ring to rearranging into the less stable seven-membered ring). They isolated $45\%$ of the $\alpha$-tetralone unreacted (Ref.6). References: Karl Friedrich Schmidt, “Process of making derivatives of hypothetical imines including amines and their substitution products,” U.S. Patent 1,564,631, 1925. G. I. Koldobskii, V. A. Ostrovskii, and B. Z. Gidaspov, "Application of the Schmidt reaction for the preparation of tetrazoles (review)," Chemistry of Heterocyclic Compounds 1975, 11, 626–635 (DOI: https://doi.org/10.1007/BF00959947). Rajendran Sribalan, Andiappan Lavanya, Maruthan Kirubavathi, and Vediappen Padmini, "Selective synthesis of ureas and tetrazoles from amides controlled by experimental conditions using conventional and microwave irradiation," Journal of Saudi Chemical Society 2018, 22(2), 198-207 (DOI: https://doi.org/10.1016/j.jscs.2016.03.004). Peter A. S. Smith, “The Schmidt Reaction: Experimental Conditions and Mechanism,” J. Am. Chem. Soc. 1948, 70(1), 320–323 (DOI: https://doi.org/10.1021/ja01181a098). Robert M. Herbst and Charles F. Froberger, “Synthesis of Iminotetrazoline Derivatives as Trichomonacidal and Fungicidal Agents,” J. Org. Chem. 1957, 22(9), 1050–1053 (DOI: https://doi.org/10.1021/jo01360a013). Nils S. Hjelte and Tamara Agback, “Benzocycloalkanones in the Schmidt Reaction,” Acta Chem. Scand. 1964, 18(1), 191-194 (DOI: 10.3891/acta.chem.scand.18-0191)(PDF).
{ "domain": "chemistry.stackexchange", "id": 15873, "tags": "organic-chemistry, reaction-mechanism, carbonyl-compounds, carbocation, amides" }
Are there identity guidelines for ROS and its logos?
Question: The point cloud library has a nice set of guidelines for conveying a standard logo and design language for identifying PCL. Does ROS have an equivalent set of logos/design documents? See here and here Originally posted by cmansley on ROS Answers with karma: 198 on 2011-09-02 Post score: 4 Answer: Here: https://github.com/ros-infrastructure/artwork and Here: http://www.ros.org/press-kit/ Originally posted by jbohren with karma: 5809 on 2015-01-19 This answer was ACCEPTED on the original site Post score: 4
{ "domain": "robotics.stackexchange", "id": 6591, "tags": "ros" }
Isentropic processes
Question: I'm having trouble understanding why reversible adiabatic processes are isentropic. I understand that in a reversible adiabatic process there is no heat exchange and so $dQ = TdS = 0.$ However, if I have a thermally isolated ideal gas in a piston, and I reversibly compress the gas, even though there is no heat exchange (the process is reversible and adiabatic), I have increased the temperature of the gas! Doesn't that mean I've increased its entropy? It occurs to me that the decrease in volume probably exactly matches the increase in temperature in terms of entropy changes so that the total entropy change is $0.$ But still, if I try and do some calculations, I'm stuck. Say I change the volume from $V_1$ to $V_2$ by reversibly compressing the piston. Am I not changing the pressure and temperature simultaneously as I do that? How do I integrate the ideal gas law here? Answer: The short answer is that you're right - the change in entropy due to the decrease in volume cancels out the change in entropy due to the increase in temperature. But you're also right that this involves a simultaneous change in $p$ and $T$, and it would be nice to know by how much each of these changes. Below I've done the calculation. It was a bit more involved than I was expecting, and I hope I didn't make any mistakes! To do this calculation we first need to note that the ideal gas law, $$pV=nRT,\tag{i}$$ tells us how pressure, volume and temperature relate to one another, but if you want to do any real calculations, you also need to know how the internal energy $U$ behaves. This is given by $$ U =n\; c_V T,\tag{ii} $$ where $c_V$ is the molar heat capacity at constant volume. (Some people will define an ideal gas in such a way that $c_V$ is allowed to be a function of $U$, but here I'll assume it's constant. To a good approximation, $c_V=\frac{3}{2} R$ for a monatomic gas, or $\frac{5}{2} R$ for a diatomic one.)
Equation $(\mathrm{ii})$ can't be derived from Equation $(\mathrm{i})$, so both of them are needed in order to define the properties of an ideal gas. With this in mind, let's start with the fundamental equation of thermodynamics (for systems without chemical reactions): $$ dU = TdS - pdV. $$ Because we're considering an adiabatic process we know that $TdS = 0$, so $$ dU = -pdV = - \frac{nRT}{V} dV, $$ where the second equality is obtained by substituting the ideal gas law $(\mathrm{i})$. But we also know from the heat capacity equation $(\mathrm{ii})$ that $nT = U/c_V$. This gives us $$ dU = - \frac{U\;R}{c_VV} dV, $$ or $$ c_V\frac{1}{U}dU = - R\frac{1}{V} dV. $$ Now we can integrate both sides: $$ c_V \int_{U_1}^{U_2} \frac{1}{U}dU = -R\int_{V_1}^{V_2} \frac{1}{V}dV, $$ or $$ c_V\left( \ln U_2 - \ln U_1 \right) =R \left( \ln V_1 - \ln V_2 \right). $$ A quick note about interpretation is in order here. We're integrating both sides over different ranges ($U_1$ to $U_2$ and $V_1$ to $V_2$) and then setting them equal. This is because we want to know how much the internal energy will change if we reversibly change the volume by a certain amount, so we're looking for $U_2$ and $U_1$ as a function of $V_1$ and $V_2$. Anyway, now we can use some logarithm identities to get $$ c_V \ln \frac{U_2}{U_1} = R\ln \frac{V_1}{V_2} $$ or $$ \ln \left(\frac{U_2}{U_1}\right)^{c_V} = \ln \left(\frac{V_1}{V_2}\right)^R, $$ so $$ \left(\frac{U_2}{U_1}\right)^{c_V} = \left(\frac{V_1}{V_2}\right)^{R}, $$ or $$ V_1^{R} U_1^{c_V} = V_2^{R} U_2^{c_V}. $$ This means that, for a reversible adiabatic process, the quantity $V^R U^{c_V}$ must remain constant. 
Now, finally, we can substitute $U$ from Equation $(\mathrm{ii})$ to get $$ V^R(c_VnT)^{c_V} = \text{constant for an adiabatic process.} $$ There's a factor of $(c_Vn)^{c_V}$ that we can ignore by incorporating it into the constant on the right hand side, so $$ V^RT^{c_V} = \text{constant for an adiabatic process,} $$ or $$ V_1^R T_1^{c_V} = V_2^R T_2^{c_V}. $$ Given any values for $V_1$, $V_2$ and $T_1$, you can use this to work out $T_2$. By substituting into $(\mathrm{i})$ you can also work out the change in pressure.
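Since everything above is in closed form, the final relation is easy to check numerically. A minimal sketch (the starting temperature and volumes are invented for illustration; a monatomic gas is assumed):

```python
import math

# Reversible adiabatic change of an ideal gas:
#   V^R * T^{c_V} = const  =>  T2 = T1 * (V1 / V2)**(R / c_V)

R = 8.314          # J/(mol K)
c_V = 1.5 * R      # monatomic ideal gas (assumed for the example)

def adiabatic_T2(T1, V1, V2):
    """Final temperature after a reversible adiabatic volume change."""
    return T1 * (V1 / V2) ** (R / c_V)

# Halving the volume of gas that starts at 300 K:
T2 = adiabatic_T2(300.0, 2.0, 1.0)   # about 476 K

# The conserved quantity V^R * T^{c_V} is unchanged; compare via logs
# to sidestep the huge numbers the raw exponents would produce:
lhs = R * math.log(2.0) + c_V * math.log(300.0)
rhs = R * math.log(1.0) + c_V * math.log(T2)
```

Compressing to half the volume raises the temperature by a factor of $2^{R/c_V} = 2^{2/3} \approx 1.59$ for a monatomic gas.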
{ "domain": "physics.stackexchange", "id": 6207, "tags": "thermodynamics, entropy, adiabatic" }
Why does light bend when it enters a medium?
Question: I am asking this question after referring to @benjimin's answer at Why does light bend? I found the same answer at wiki. But I found a video on youtube by Doctor Don (https://www.youtube.com/watch?v=NLmpNM0sgYk&t=607s) claiming that this method is wrong, and here is why- According to the old explanation the waves add up and move at a different angle due to the slowed speed. It looks somewhat like this - (image from the vid) But there are also other wavefronts adding up together and going different ways, like- So with different wavefronts aligning in different directions, many waves would propagate in different directions, and this is not observed in reality, hence this explanation is proven wrong. Now, Doctor Don explains that we can consider the electric field components of the waves and deduce the change in them on entering a medium. Since the surface is shared by air and medium, two sets of Maxwell's equations are formed and equated- (one each for air and medium) This implies that $\frac{dB}{dt}$ is the same in both air and medium, but why is this so? Shouldn't the magnetic field also change due to wave interference with the magnetic field created by the oscillating electrons? Answer: Those equations are written for the $E$- and $B$-fields at the boundary. The $E$-field (its normal component) has a discontinuity at the boundary, but the $B$-field is continuous at the boundary, and this can be deduced by Ampere's law. We need to make some notes first. In the video, Don Lincoln is considering the case of $p$-polarized light coming to the air-glass interface. In $p$-polarized light, the $B$-field is parallel to the surface of the interface. One can use Maxwell's equations to show that the components of the $E$- and $B$-fields that are parallel to the surface must be the same in the limit $z\rightarrow 0+$ from above and in the limit $z\rightarrow 0-$ from below where $z=0$ is the level of the surface. 
Since in the case of $p$-polarized light we have that $\vec{B}(x, y, z, t)$ is parallel to the surface at all $(x, y, z, t)$, our proof will demonstrate that $\vec{B}_{\text{air}}(z\rightarrow 0+) = \vec{B}_{\text{glass}}(z\rightarrow 0-)$. To show our claim, we will use Ampere's law. At the air-glass interface, consider a rectangular loop $C$ of height $h$ and length $\ell$ half-way between the two media as depicted below. By Ampere's law, we have $$ \oint_{C} \vec{B}\cdot \text d\vec{l} = \iint_{S} \mu \left[ \vec{J} + \epsilon\frac{\partial\vec{E}}{\partial t} \right] \cdot \text d\vec{A} $$ where the right-hand side is an integral over the rectangle whose boundary is loop $C$. The absolute value of the area integral on the right-hand side is bounded above by $\text{|maximum of integrand|}\times \text{Area}$ where $\text{Area} = h\ell$. As we send $h\rightarrow 0$, we see that $\text{Area} = h\ell\rightarrow 0$, so the area integral in the above equation goes to zero. Hence in the limit of $h\rightarrow 0$, we have $$ \oint_{C} \vec{B}\cdot \text d\vec{l}\rightarrow 0. $$ Thus, $\vec{B}(z\rightarrow 0+)\cdot \text d\vec{l}_{1} + \vec{B}(z\rightarrow 0-)\cdot \text d\vec{l}_{2} = 0$ for any length element $\text d\vec{l}_{1}$ parallel to the surface. Since the loop goes in opposite directions above and below the surface, $\text d\vec{l}_{1} = -\text d\vec{l}_{2}$ and thus $\vec{B}(z\rightarrow 0+)\cdot \text d\vec{l}_{1} + \vec{B}(z\rightarrow 0-)\cdot (-\text d\vec{l}_{1}) = 0$. Since we can orient the length element $\text d\vec{l}_{1}$ any way we want as long as it is parallel to the interface surface and since $\vec{B}$ is parallel to the interface surface in $p$-polarized light, it follows that $\vec{B}(z\rightarrow 0+) - \vec{B}(z\rightarrow 0-) = 0$, and so $\vec{B}(z\rightarrow 0+) = \vec{B}(z\rightarrow 0-)$, as desired. Thus, $\partial\vec{B}/\partial t$ is the same as $z\rightarrow 0+$ and $z\rightarrow 0-$. 
The behavior of $\vec{B}$ does change as you pass from air to glass, but we've shown that it is the same at the boundary, meaning it is continuous as you pass through the boundary. This is exactly the same way the parallel component $\vec{E}_{\text{parallel}}$ of $\vec{E}$ is the same at the boundary and continuous across the boundary, as the video claims. In fact, a similar proof of this can be devised using Faraday's law. (Lastly, I should repeat that the $B$-field is parallel to the interface surface only in the case of $p$-polarized light. In the case of $s$-polarized light, things are a bit different. The same reasoning above applies, but it only applies to the component of $\vec{B}$ that is parallel to the interface surface.)
{ "domain": "physics.stackexchange", "id": 85579, "tags": "optics, electromagnetic-radiation, refraction, geometric-optics" }
Enthalpy vs. delta enthalpy
Question: I haven’t really found a clear answer. Enthalpy is measured as the sum of internal energy (kinetic and potential energy) and pressure times volume. However, I don’t really see the value of measuring enthalpy, so why is it even mentioned? Aren’t we more concerned with changes in enthalpy, like enthalpy of vaporization and enthalpy of fusion? In other words, what are the differences between enthalpy and delta enthalpy? Is the former just a formality, and we are really concerned about measuring the energy of a thermodynamic system undergoing change (i.e. chemical reactions)? Answer: Enthalpy vs ∆Enthalpy We know that enthalpy represents the energy of a system. So, now think about it: measuring the energy of a system? Consider all the changes and influences a system is under. This makes it really difficult (we could say impossible) to calculate the enthalpy of a system. But we do know one thing, that the enthalpy depends on two factors. A system will experience a change in enthalpy if its internal energy is changed and/or if work is done on/by it. So what we do is that we assume an initial state where we start our observation and a final state where our observation concludes. The changes in state cause particular changes in enthalpy, and this change in enthalpy of the system is what matters to us anyway. Why did we define it then if we don't use it? We just know that a particular quantity depends on two particular factors. It's just that we understand the fact that it is meaningless to calculate the absolute enthalpy of a system - because we do not know where to start from. Where will you start from? You would need a reference point where the enthalpy was zero initially, right? But imagine - doesn't that mean there won't be any atomic activity at all if we're starting from zero enthalpy? And if there wasn't atomic activity then that chemical species might not even be able to exist? It's complicated. 
But then we realise: all that matters to us is just the change in enthalpy during a thermodynamic process. And that is what thermodynamics is essentially all about - energy in motion. Thermodynamics is but a study of states (this was mentioned in NCERT Chemistry). So we study the change in enthalpy from one state to another. We define enthalpy, and note what it depends on, so that we can track its changes through those factors.
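To make the "only differences matter" point concrete, here is a hedged sketch computing $\Delta H$ for an ideal gas heated at constant pressure (the monatomic heat capacity, the amount of gas, and the temperatures are all invented for the example); no absolute $H$ ever appears, only the difference between two states:

```python
# Enthalpy change for an ideal gas at constant pressure.  From
# H = U + pV:  dH = dU + d(pV) = n*c_V*dT + n*R*dT = n*c_p*dT.

R = 8.314            # J/(mol K)
c_V = 1.5 * R        # monatomic gas (assumed for illustration)
c_p = c_V + R

def delta_H(n, T1, T2):
    """Enthalpy change (J) between two states at constant pressure."""
    return n * c_p * (T2 - T1)

# Heating 1 mol from 300 K to 400 K:
dH = delta_H(1.0, 300.0, 400.0)   # roughly 2.08 kJ
```

Note that only state endpoints enter: heating 300 K → 350 K and then 350 K → 400 K gives the same total $\Delta H$ as heating 300 K → 400 K directly, which is exactly what a state function guarantees.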
{ "domain": "chemistry.stackexchange", "id": 15411, "tags": "organic-chemistry, inorganic-chemistry, physical-chemistry, thermodynamics, experimental-chemistry" }
Accelerating Charge Radiates, so why can't we make a laser emitter out of it?
Question: I've been taking a class where we learnt about different types of lasers, and it reminded me of an old question of mine. There have been issues with making very short wavelength laser pulses with traditional technologies, such as molecules, crystals, diodes, etc. But since it's known that an accelerating charge radiates, why can't we make a laser emitter out of a charge in rotational motion, where we seem to have full control over the emitted power? It seems much better than thermal or traditional electrical excitation. Something like rotational motion and Larmor radiation (https://www.cv.nrao.edu/course/astr534/PDFnewfiles/LarmorRad.pdf); note that we only have the expression for the power output, but not one for the wavelength. So here's another question: what's the expression for the radiated/emitted photons? (Suppose a unit charge in circular motion of speed $v$ and radius $r$.) Answer: As @John Custer stated, the free electron laser is based on the acceleration of free electrons. Such tools can lase at very short wavelengths. Electrons are accelerated and then sent through a strong alternating magnetic field, a so-called wiggler. During their zigzag motion they emit coherent radiation. Very powerful lasers of very short wavelengths can be engineered in this way.
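For reference, the Larmor expression the question links to gives only the total radiated power, not the spectrum, so it can't by itself answer the wavelength part. Still, it is easy to evaluate for the circular-motion case in the question, using the centripetal acceleration $a = v^2/r$; a nonrelativistic SI-unit sketch (the speed and radius below are invented):

```python
import math

# Larmor power radiated by a nonrelativistic charge q in circular
# motion of speed v and radius r:
#   a = v**2 / r,   P = q**2 * a**2 / (6 * pi * eps0 * c**3)
# Total power only -- the emitted spectrum needs the full
# synchrotron-radiation treatment.

Q = 1.602176634e-19      # C, elementary charge
EPS0 = 8.8541878128e-12  # F/m, vacuum permittivity
C = 2.99792458e8         # m/s, speed of light

def larmor_power(v, r, q=Q):
    a = v ** 2 / r
    return q ** 2 * a ** 2 / (6 * math.pi * EPS0 * C ** 3)

# Example values (invented): v = 1e6 m/s on a 1 cm radius.
P = larmor_power(1.0e6, 0.01)
```

Since $P \propto a^2 = v^4/r^2$, doubling the speed at fixed radius multiplies the radiated power by 16, which is one reason circular accelerators at high energy lose so much power to radiation.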
{ "domain": "physics.stackexchange", "id": 56554, "tags": "quantum-mechanics, electromagnetism, laser, definition" }
Is there an elegant proof of the existence of Majorana spinors?
Question: Almost all standard sources on the existence of Majorana spinors (e.g. Appendix B.1 to Polchinski's "String Theory", Vol. 2) do so in a way I consider inherently ugly: A priori, we are dealing with an irreducible complex representation $(V,\rho)$ of the Clifford algebra of signature $(p,q)$, i.e. generalized Dirac spinors. That Majorana spinors exist means abstractly that there is a real form on $V$, i.e. a conjugate-linear map $\phi : V\to V$ with $\phi^2 = \mathrm{id}_V$ that commutes at least with the $\mathfrak{so}(p,q)$ action. Every single source I can find for Majorana spinors uses operations like the transpose, complex conjugation and Hermitian adjoint on the $\Gamma$-matrices to obtain matrices acting on the same space. This is abstractly wrong, the transpose acts on the dual, the complex conjugation on the conjugate, and the Hermitian adjoint needs an inner product we have no reason for choosing. Of course, since $V$ is finite-dimensional, one can pick a basis and define the uncanonical isomorphisms to its dual and its conjugate, but I find this inelegant, particularly since the standard derivations require us to make a particular such choice with respect to the signs the $\Gamma$-matrices have under e.g. ${}^\dagger$. Finally, the Majorana spinors are usually defined by some equation involving an unnatural and arbitrary-looking product of $\Gamma$-matrices, which varies from source to source according to different sign conventions and sign choices made in the course of the derivation. It's inelegant because the rest of the theory of spinors can be developed without making such uncanonical choices. Both the uniqueness of the dimension of the irreducible Dirac representations (there are two of them in odd dimensions) and the existence of the Weyl spinors in even dimensions can be derived purely from the abstract properties of the Clifford algebra, no choices made, no transpose, adjoint or conjugate occuring. 
The (admittedly slightly subjective question) is: Is there a way to show in which dimensions Majorana spinors exist that neither requires an uncanonical choice of basis nor arbitrary choices of signs? Some partial results: In even dimensions, the Dirac representation is necessarily self-conjugate since it is the only irreducible representation of the Clifford algebra, so all that is left to show is that a conjugate-linear $\mathfrak{so}$-equivariant map on it squares to $\mathrm{id}_V$ and not to $-\mathrm{id}_V$. However, I can't seem to exhibit any particular equivariant map on it that one could simply check for its square. In odd dimensions, one first needs to figure out whether the two inequivalent Dirac representations are conjugate to each other or self-conjugate. As further motivation that a clear proof using only canonical properties of the Clifford algebra itself is required, consider the confusing and contradictory claims in the literature: Polchinski, "String Theory", Vol. 2, p.434: $\mathrm{SO}(d-1,1)$ has Majoranas for $d = 0,1,2,3,4 \mod 8$, corresponding to a special case of $p-q = d-1-1 = 6,7,0,1,2 \mod 8$. Fecko, "Differential Geometry and Lie Groups for Physicists", pp. 651: $\mathrm{Cliff}(p,q)$ has Majoranas for $p-q = 0,2\mod 8$. This clearly conflicts with Polchinski's claims e.g. for $d=3$. Figueroa-O'Farrill, "Majorana spinors", pp. 18: We have Majoranas for $p-q = 0,6,7\mod 8$ and "symplectic Majoranas" for $p-q = 2,3,4\mod 8$. Note that these results conflict simply by the number of possible $p-q$ regardless of whether I've correctly taken care of the differing conventions of whether $p$ or $q$ denotes timelike dimensions. Answer: To answer the confusion between the three sources you list: Using the signature convention of Figueroa O'Farrill, we have Majorana pinor representations for $p - q \pmod 8 = 0,6,7$ and Majorana spinor representations for $p - q \pmod 8 = 1$. 
Pinor representations induce spinor representations (that will be reducible in even dimension) and so we get Majorana spinor representations for $p - q \pmod 8 = 0,1,6,7$. Although $\mathcal{Cl}(p,q)$ is not isomorphic to $\mathcal{Cl}(q,p)$, their even subalgebras are isomorphic and so can be embedded in either signature. This means that Majorana pinor representations in $\mathcal{Cl}(q,p)$ also induce spinor representations in the even subalgebra of $\mathcal{Cl}(p,q)$ and so we also get an induced Majorana spinor representation for $p - q \pmod 8 = 2$ (from $q - p \pmod 8 = 6$; this is often called the pseudo-Majorana representation). Fecko has his signature convention swapped compared to Figueroa O'Farrill, and so swapping back we see that his $0,2 \pmod 8$ gives us $0,6 \pmod 8$. One can also see from his table (22.1.8) that on the page you reference he was listing signatures with Clifford algebra isomorphisms to a single copy of the real matrix algebra, but his table also gives us $p - q \pmod 8 = 1$, converting signature convention to $p - q \pmod 8 = 7$ which is the isomorphism to two copies of the real matrix algebra and so also yields Majorana pinor representations. He doesn't talk about Majorana (or pseudo-Majorana) spinor representations here and so doesn't list $p - q \pmod 8 = 1,2$. As for Polchinski, he includes pseudo-Majorana representations (or is signature convention agnostic) and so lists all of $p - q \pmod 8 = 0,1,2,6,7$. To answer the question of in which dimensions Majorana spinors (including pseudo-Majorana) exist: For a signature $(p,q)$ they exist whenever any of $\mathcal{Cl}(p,q)$, $\mathcal{Cl}(q,p)$ or the even subalgebra of $\mathcal{Cl}(p,q)$ are isomorphic to either one or a direct sum of two copies of the real matrix algebra. This means $p - q \pmod 8 = 0,1,2,6,7$. If one discounts pseudo-Majorana spinors, then one removes $\mathcal{Cl}(q,p)$ from the previous statement and this means $p - q \pmod 8 = 0,1,6,7$. 
Of course, this does not talk about the naturally quaternionic symplectic and pseudo-symplectic Majorana representations. One can take the algebra isomorphisms of low-dimensional Clifford algebras ($\mathcal{Cl}(1,0) \cong \mathbb{C}$, $\mathcal{Cl}(0,1) \cong \mathbb{R} \oplus \mathbb{R}$ etc.) and use the isomorphisms between Clifford algebras of different signatures ($\mathcal{Cl}(p+1,q+1) \cong \mathcal{Cl}(p,q) \otimes \mathcal{Cl}(1,1)$ etc.) to bootstrap the equivalent matrix algebra isomorphisms of Clifford algebras (and similarly for their even subalgebras) of arbitrary signature and from there one can see when real forms exist.
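The final statement above condenses into a small lookup. Here is a sketch encoding just the mod-8 rule quoted in this answer (Figueroa-O'Farrill's signature convention; swap $p$ and $q$ if your convention counts timelike directions the other way):

```python
# Rule from the answer: Majorana spinors (including pseudo-Majorana)
# exist for p - q mod 8 in {0, 1, 2, 6, 7}; discounting pseudo-Majorana
# removes 2.  This ignores the symplectic (quaternionic) cases.

def majorana_type(p, q):
    d = (p - q) % 8
    if d in (0, 1, 6, 7):
        return "Majorana"
    if d == 2:
        return "pseudo-Majorana"
    return None  # only symplectic variants, if anything

# Example: 4d Minkowski space.  With the single timelike direction
# counted in p, (p, q) = (1, 3) gives p - q = 6 (mod 8):
assert majorana_type(1, 3) == "Majorana"
```

This reproduces the familiar fact that $d=4$ Minkowski space admits Majorana spinors, while e.g. $(p,q)=(4,0)$ falls outside the list and has at most the symplectic variants.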
{ "domain": "physics.stackexchange", "id": 43146, "tags": "representation-theory, lie-algebra, spinors, majorana-fermions, clifford-algebra" }
How to get model bounding box
Question: Hi all, I'm using gazebo for robotic experiments in combination with ROS. My problem is the following: given the name of a certain model in the scene (published on a topic) I need to write a ROS node that is capable of determining the bounding box of the model. I'm searching among the various services that gazebo exposes to ROS and I came across this. But I don't know how to extract the required information from this message. Please, can someone tell me how this can be done? Thanks. Originally posted by federico.nardi on Gazebo Answers with karma: 23 on 2017-08-30 Post score: 0 Answer: Not sure if this can be easily done through ROS without a Gazebo plugin. With a Gazebo plugin, you could write a model plugin and use the physics::Model::BoundingBox() function. Originally posted by chapulina with karma: 7504 on 2017-08-31 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 4168, "tags": "gazebo-model" }
How can one visualize electric potential and field intensity using analogies?
Question: I mean to say, for a hollow sphere, can its electric potential be zero at its centre, and can the electric field intensity be? And if they are zero, then what physical meaning does that give? Since these quantities can't be seen or touched, I don't exactly know what's going on. So please can you explain briefly, in simple language and with some useful equations. Note: analogies such as water bucket vs. capacitor etc. will be appreciated. Thanks! Answer: Electric potential (energy per charge) is analogous to water pressure. What makes a water particle move is not the pressure in itself, though. It is the pressure difference. A particle will only move if it is being pushed more from one side than from the other. If pushed equally from both sides, it doesn't move. Particles inside a pressure container don't move, but if the tap opens, then they all flow towards this place of lower pressure. Electric potential does the same for a charge. Charges move towards points of lowest energy. Having the same energy level or potential at every point (connecting both ends of a wire to the same battery pole, e.g.) makes no charge move. But having a potential difference (connecting one end to the positive pole, at high potential, and the other end to the negative pole, at low potential) makes them move towards the point at lower potential. About the hollow sphere, you can imagine a circular pipe, e.g. The pressure is the same all around this pipe, so no particles move around. Just like the charges do not move around on an equipotential surface. In the centre of this pipe-ring there is much lower pressure - but also no access. The water particles are pressed towards the wall but then stop because the wall is strong enough to balance out this pressure. On a conducting sphere, the same happens because the sphere is a conductor but the air inside it is not - the charges meet "a wall" when trying to escape from the conductor. 
The air simply prevents the charges from moving through it, pushing back with enough force to balance the electric force that presses the charges towards the conductor's edge. If the potential is suddenly very high, then the air might "break down". This is called the breakdown voltage and is what happens in lightning storms, when otherwise insulating air suddenly becomes conducting. It is equivalent to the pipe wall breaking from the pressure being too high.
{ "domain": "physics.stackexchange", "id": 41988, "tags": "electrostatics" }
Perturbative Quantum Mechanics
Question: I am, in full generality, confused about perturbation theory in quantum mechanics. My textbook and Wikipedia have the same general approach to explaining it: given some Hamiltonian $H=H^{(0)} + H^\prime$, we can break down each eigenfunction $\left\vert n \right\rangle$ into a power series in an invented constant $\lambda$ and the eigenenergies likewise: $\left\vert n \right\rangle = \sum\lambda^i\left\vert n^{(i)}\right\rangle$ $E_n = \sum \lambda^i E_n^{(i)}$ $\left(H^{(0)} + \lambda H^\prime\right) \left(\left\vert n^{(0)}\right\rangle + \lambda \left\vert n^{(1)}\right\rangle + \cdots \right) = \left(E^{(0)}+ \lambda E^{(1)} + \cdots\right) \left(\left\vert n^{(0)}\right\rangle + \lambda \left\vert n^{(1)}\right\rangle + \cdots \right)$ ... and then they take $\lambda\to1$. My question is - what's the logic here? Where did this come from? What purpose does $\lambda$ serve, given that the actual size of each contribution will be determined by the $E^{(i)}$'s and $\left\vert n^{(i)}\right\rangle$'s? Answer: Firstly, I refer you to Prof. Binney's textbook (see below) which covers perturbation theory in quantum mechanics in explicit detail. When doing perturbation theory, we perturb the Hamiltonian $H^{(0)}$ of a system which has been solved analytically, i.e. the eigenstates and eigenvalues are known. Specifically, $$H^{(0)}\to H^{(0)} + \lambda H'$$ where $H'$ is the perturbation, and $\lambda$ is a coupling constant. Why include such a constant? As Binney says, it provides us with a 'slider' which when gradually increased to unity increases the strength of the perturbation. When $\lambda = 0$, the system is unperturbed, and when $\lambda=1$ we 'fully perturb the system.' Introducing a coupling constant $\lambda$ also provides us with a manner to refer to a particular order of perturbation theory; $\mathcal{O}(\lambda)$ is first order, $\mathcal{O}(\lambda^2)$ is second order, etc. As we increase in powers of the coupling constant, we hope the corrections decrease. 
(The series may not even converge.) A caveat: the demand that a coupling satisfy $\lambda \ll1$ may not be sufficient or correct to ensure that the perturbation is small; this is only the case when the coupling is dimensionless. For example, if the coupling, in units where $c=\hbar=1$, had a mass (or equivalently energy) dimension of $+1$, then to ensure a weak coupling we would need to demand $\lambda/E \ll 1$, where $E$ has dimensions of energy. Such couplings are known as relevant, as at low energies they are effectively large, and at high energies effectively small. http://www-thphys.physics.ox.ac.uk/people/JamesBinney/QBhome.htm
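The role of $\lambda$ as an order-counting slider can be seen in a toy model. A sketch with an invented 2x2 Hamiltonian (not from any of the textbooks mentioned), where the exact ground energy is available in closed form and can be compared with the series truncated at second order:

```python
import math

# Toy model: H(lam) = H0 + lam * H1, with
#   H0 = diag(0, 1),   H1 = [[0, 1], [1, 0]]  (both invented).
# Collecting powers of lam for the lower state gives
#   E^(0) = 0,  E^(1) = <0|H1|0> = 0,
#   E^(2) = |<1|H1|0>|^2 / (E^(0)_0 - E^(0)_1) = 1 / (0 - 1) = -1,
# so the series predicts E(lam) ~ -lam**2.

def exact_ground_energy(lam):
    # Lower eigenvalue of [[0, lam], [lam, 1]], in closed form.
    return (1.0 - math.sqrt(1.0 + 4.0 * lam ** 2)) / 2.0

lam = 0.1
approx = 0.0 + lam * 0.0 + lam ** 2 * (-1.0)  # E0 + lam*E1 + lam^2*E2
exact = exact_ground_energy(lam)
# The two agree up to the neglected O(lam^4) terms.
```

Setting $\lambda = 1$ at the end recovers the "fully perturbed" system, exactly as the answer describes; the smaller $\lambda$ is, the faster the truncated series converges to the exact value.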
{ "domain": "physics.stackexchange", "id": 12705, "tags": "quantum-mechanics, perturbation-theory" }
What does an undefined formula in physics mean?
Question: I am trying to figure out how undefined formulas in mathematics relate to physics. Take the following formula for terminal velocity. $$V_\text{terminal} = \sqrt{\frac{mg}{c \rho A}} $$ Say we have an air density of 0, $ \rho = 0 $ (a vacuum). Logic tells me the particle would continue to accelerate and never reach terminal velocity, but in mathematics this formula would be undefined. Obviously this is one of many examples of what can happen in physics problems, but what does undefined actually mean in terms of physics? I hope I am explaining myself clearly. Answer: Yes, the particle would continue to accelerate and would never reach a terminal velocity. But that is not what this equation tells you. This equation tells you what the terminal velocity is, given the parameters of the function. When in a vacuum, there is no terminal velocity. It is not zero, it is not infinity. A terminal velocity literally does not exist and that is exactly what the equation tells you.
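The same point can be made numerically: as $\rho \to 0$ the formula's value grows without bound, and at $\rho = 0$ it simply has no answer to give. A sketch (the mass, drag coefficient, and area below are invented for illustration):

```python
import math

# v_t = sqrt(m * g / (c * rho * A)).  Undefined at rho = 0: there is
# no terminal velocity in vacuum, so we refuse to return a number.

g = 9.81  # m/s^2

def terminal_velocity(m, c, rho, A):
    if rho <= 0.0:
        raise ValueError("no terminal velocity in vacuum (rho = 0)")
    return math.sqrt(m * g / (c * rho * A))

# Thinner air means a larger terminal velocity, without limit:
v1 = terminal_velocity(80.0, 1.0, 1.2, 0.7)    # sea-level-ish density
v2 = terminal_velocity(80.0, 1.0, 0.012, 0.7)  # 100x thinner air
```

Since $v_t \propto 1/\sqrt{\rho}$, dividing the density by 100 multiplies the terminal velocity by 10; the "undefined" point at $\rho = 0$ is the limit this growth never reaches.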
{ "domain": "physics.stackexchange", "id": 26084, "tags": "newtonian-mechanics, newtonian-gravity, projectile, drag, singularities" }
Why isn't work done when carrying a backpack with constant velocity?
Question: Here's an excerpt from my textbook: In the following situations, state whether work is being done, and why or why not. A) A person carrying a backpack walking across a floor B) A person shoveling snow across a driveway at a constant speed The solutions are: A) No work is being done on the bag because the direction of motion is perpendicular to the direction of force due to gravity B) Work is being done on the shovel because the direction of motion and the direction of the applied force is the same I have no issue with situation B, but situation A bothers me. Although I agree that no work is being done by the forces of gravity and the force applied to hold it up, is it not the case that the person is applying a force forward to the bag as they walk, and therefore work is being done by that force? To make things a bit more uniform (since walking is a jerky motion), presuming that the person wearing the bag is riding a bike at a constant speed, that means that they have to apply a force that balances wind resistance. I might say that since there is no net force there is no work being done, but then situation B would have no work being done as well, since the shovel/snow are moving at a constant speed. What am I misreading here? Answer: In situation a), if the person is walking perfectly smoothly in a vacuum then no work is being done on the bag; if there is air resistance then both the person and the air do work on the bag, and if there is jiggling then gravity does work too. Situation b) is clearer: the snow is initially at rest and accelerated by the spade, so the force from the spade acts in the direction of motion and work is done. And yes, you need a clearer textbook! :P
{ "domain": "physics.stackexchange", "id": 50335, "tags": "homework-and-exercises, work" }
Work done in a $pV$ diagram
Question: From what I gather, work done in a $pV$ diagram is the area under the curve. I have this figure here: The solution says the quantity of work in A is greater than that in B. I am kinda confused, aren't the areas of $\Delta iAf$ and $\Delta iBf$ the same? Answer: Area under curve A: Area under curve B: Remember that "under" means all the way down to the horizontal axis. The work formula is an integral, $$W=\int p \mathrm dV,$$ and integrals work by summing up all the infinitely many infinitely thin columns that stand on the axis and reach up to the curve.
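Since the figure isn't reproduced here, the following sketch uses invented $(V, p)$ coordinates for the endpoints $i$ and $f$, but it shows the same effect: two paths between the same states enclose different areas down to the $V$-axis, and hence give different work.

```python
# W = integral of p dV, evaluated with the trapezoid rule along a
# piecewise-linear path given as a list of (V, p) points.

def work(path):
    """Trapezoid-rule integral of p dV along a list of (V, p) points."""
    W = 0.0
    for (V1, p1), (V2, p2) in zip(path, path[1:]):
        W += 0.5 * (p1 + p2) * (V2 - V1)
    return W

i, f = (1.0, 4.0), (3.0, 1.0)   # invented (V, p) endpoints
path_A = [i, (3.0, 4.0), f]     # expand at high p, then drop: big area
path_B = [i, (1.0, 1.0), f]     # drop p first, then expand: small area

W_A = work(path_A)   # 0.5*(4+4)*2 + 0 = 8.0
W_B = work(path_B)   # 0 + 0.5*(1+1)*2 = 2.0
```

Same endpoints, different work: the "triangles" between the curves may match, but the columns reaching down to the axis do not, which is exactly why the area under A exceeds the area under B.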
{ "domain": "physics.stackexchange", "id": 83310, "tags": "homework-and-exercises, thermodynamics, energy, temperature, work" }