Set rosparam for ~ handle
Question: Hi! I'm using this code: ros::NodeHandle ph("~"); ph.param("scale_angular", a_scale, a_scale); ph.param("scale_linear", l_scale, l_scale); How can I set scale_angular from the console via the rosparam utility? Thanks! Originally posted by noonv on ROS Answers with karma: 471 on 2013-07-05 Post score: 0 Answer: rosparam set NODE_NAME/scale_angular VALUE or, when starting your node, pass _scale_angular:=VALUE. Originally posted by dornhege with karma: 31395 on 2013-07-05 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by noonv on 2013-07-05: Thanks!!!
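Equivalently, the parameter can be baked into a launch file; a minimal sketch, assuming a node named `teleop` in a hypothetical package `my_teleop_pkg`:

```xml
<launch>
  <node pkg="my_teleop_pkg" type="teleop_node" name="teleop">
    <!-- params declared inside <node> land in the node's private (~) namespace,
         exactly where ros::NodeHandle("~") looks them up -->
    <param name="scale_angular" value="1.5"/>
    <param name="scale_linear" value="0.8"/>
  </node>
</launch>
```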
{ "domain": "robotics.stackexchange", "id": 14818, "tags": "rosparam" }
Read a CSV file and do natural language processing on the data
Question: I am studying techniques for data mining and data processing. I'm doing this with data I've collected and stored in a CSV file. The problem is that this file is very large, to the point of having an astonishing 40 thousand lines of text. Some of the algorithms in the processing part are fast and agile, but the orthographic correction of words is laborious. I am using the NLTK floresta corpus (from nltk.corpus import floresta). So when it comes time to do this step, I daresay it will not finish in a timely manner. Given this, I was wondering if someone could help me with a solution where I read one line of the file, do the whole process, save it to the database, and then read another line from the file. By reading line by line, and doing the process for each line, I think I can improve the performance of the algorithm. txtCorpus = [] dtype_dic= {'status_id': str, 'status_message' : str, 'status_published':str} for csvfile in pd.read_csv('data/MyCSV.csv',dtype=dtype_dic,encoding='utf-8',sep=',', header='infer',engine='c', chunksize=2): txtCorpus.append(csvfile) def status_processing(txtCorpus): myCorpus = preprocessing.PreProcessing() myCorpus.text = str(txtCorpus) print "Doing the Initial Process..." myCorpus.initial_processing() print "Done." print "----------------------------" print ("StartingLexical Diversity...") myCorpus.lexical_diversity() print "Done" print "----------------------------" print "Removing Stopwords..." myCorpus.stopwords() print "Done" print "----------------------------" print "Lemmatization..." myCorpus.lemmatization() print "Feito" print "----------------------------" print "Correcting the words..." myCorpus.spell_correct() print "Done" print "----------------------------" print "Untokenizing..." word_final = myCorpus.untokenizing() print "Feito" print "----------------------------" print "Saving in DB...." 
try: db.myDB.insert(word_final, continue_on_error=True) except pymongo.errors.DuplicateKeyError: pass print "Insertion in the BB Completed. End of the Pre-Processing Process " def main(): status_processing(txtCorpus) main() I believe that by visualizing the code, you can better understand what I explained above. I thought about doing a for where I read a line and passed it on def status_processing(txtCorpus): and so, I repeated the process until the end. But I could not reach a solution. preprocessing file: import nltk,re, htmlentitydefs from nltk.stem.snowball import SnowballStemmer from nltk.stem import WordNetLemmatizer from bs4 import BeautifulSoup import spellcorrect class Techniques(object): Lemmatizing = 1 Stopwords = 2 Stemming = 3 Spellcorrect = 4 def __init__(self, Type): self.value = Type def __str__(self): if self.value == Techniques.Lemmatizing: return 'Lemmatizing' if self.value == Techniques.Stopwords: return 'Stopwords' if self.value == Techniques.Stemming: return 'Stemming' if self.value == Techniques.Spellcorrect: return 'Spell Correct' def __eq__(self,y): return self.value==y.value class PreProcessing(): @property def text(self): return self.__text @text.setter def text(self, text): self.__text = text tokens = None def initial_processing(self): soup = BeautifulSoup(self.text,"html.parser") self.text = soup.get_text() #Todo Se quiser salvar os links mudar aqui self.text = re.sub(r'(http://|https://|www.)[^"\' ]+', " ", self.text) self.tokens = self.tokenizing(1, self.text) pass def lexical_diversity(self): word_count = len(self.text) vocab_size = len(set(self.text)) return vocab_size / word_count def tokenizing(self, type, text): if (type == 1): return nltk.tokenize.word_tokenize(text) elif (type == 2): stok = nltk.data.load('tokenizers/punkt/portuguese.pickle') #stok = nltk.PunktSentenceTokenizer(train) return stok.tokenize(text) def stopwords(self): stopwords = nltk.corpus.stopwords.words('portuguese') stopWords = set(stopwords) 
palavroesPortugues = ['foda','caralho', 'porra', 'puta', 'merda', 'cu', 'foder', 'viado', 'cacete'] stopWords.update(palavroesPortugues) filteredWords = [] for word in self.tokens: if word not in stopWords: filteredWords.append(word) self.tokens = filteredWords def stemming(self): snowball = SnowballStemmer('portuguese') stemmedWords = [] for word in self.tokens: stemmedWords.append(snowball.stem(word)) self.tokens = stemmedWords def lemmatization(self): lemmatizer = WordNetLemmatizer()#'portuguese' lemmatizedWords = [] for word in self.tokens: lemmatizedWords.append(lemmatizer.lemmatize(word, pos='v')) self.tokens = lemmatizedWords def part_of_speech_tagging(self): return 'Not implemented yet' def padronizacaoInternetes(self): return 'Not implementes yet' def untokenize(self, words): """ Untokenizing a text undoes the tokenizing operation, restoring punctuation and spaces to the places that people expect them to be. Ideally, `untokenize(tokenize(text))` should be identical to `text`, except for line breaks. """ text = ' '.join(words) step1 = text.replace("`` ", '"').replace(" ''", '"').replace('. . 
.', '...') step2 = step1.replace(" ( ", " (").replace(" ) ", ") ") step3 = re.sub(r' ([.,:;?!%]+)([ \'"`])', r"\1\2", step2) step4 = re.sub(r' ([.,:;?!%]+)$', r"\1", step3) step5 = step4.replace(" '", "'").replace(" n't", "n't").replace( "can not", "cannot") step6 = step5.replace(" ` ", " '") return step6.strip() def untokenizing(self): return ' '.join(self.tokens) #return self.untokenize(self.tokens) #return tokenize.untokenize(self.tokens) def spell_correct(self): correctedWords = [] spell = spellcorrect.SpellCorrect() for word in self.tokens: correctedWords.append(spell.correct(word)) self.tokens = correctedWords spellcorrect file: import re, collections from nltk.corpus import floresta class SpellCorrect: def words(self, text): return re.findall('[a-z]+', text.lower()) def train(features): model = collections.defaultdict(lambda: 1) for f in features: model[f] += 1 return model NWORDS = train(floresta.words()) #words(file('big.txt').read()) alphabet = 'abcdefghijklmnopqrstuvwxyz' def edits1(self, word): splits = [(word[:i], word[i:]) for i in range(len(word) + 1)] deletes = [a + b[1:] for a, b in splits if b] transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b)>1] replaces = [a + c + b[1:] for a, b in splits for c in self.alphabet if b] inserts = [a + c + b for a, b in splits for c in self.alphabet] return set(deletes + transposes + replaces + inserts) def known_edits2(self, word): return set(e2 for e1 in self.edits1(word) for e2 in self.edits1(e1) if e2 in self.NWORDS) def known(self, words): return set(w for w in words if w in self.NWORDS) def correct(self, word): candidates = self.known([word]) or self.known(self.edits1(word)) or self.known_edits2(word) or [word] return max(candidates, key=self.NWORDS.get) Answer: spellcorrect.py SpellCorrect should not be a class. Your two "working" methods (train and edits1) do not reference self at all, and the other ones only use self for its namespace. You should provide functions instead. 
As far as I can tell, the words method is not used anymore since you commented it out in the building of NWORDS. alphabet is better imported from string: from string import ascii_lowercase as alphabet. I don't understand the definition of model in train. Why give a score of 1 to missing features, and thus a score of 2 for features encountered once? Moreover, if the aim of train is to count how many times a given feature appears in features, you'd be better off using a collections.Counter. You will have a better memory footprint if you turn edits1 into a generator. Just yield (and yield from in Python 3) computed elements instead of storing them in a list. Turning edits1 into a generator will allow you to do the same with edits2 without requiring it to filter elements itself, leaving that job to known alone and avoiding the discrepancy between how you build your words. In edits1, you can iterate more easily over words and still get the index using enumerate. It can simplify some of your checks. import collections from string import ascii_lowercase as alphabet from nltk.corpus import floresta NWORDS = collections.Counter(floresta.words()) def edits1(word): for i, letter in enumerate(word): begin, end = word[:i], word[i+1:] yield begin + end # delete if end: yield begin + end[0] + letter + end[1:] # transpose else: for other in alphabet: yield begin + letter + other # insert at the end for other in alphabet: yield begin + other + end # replace yield begin + other + letter + end # insert before the current letter def edits2(word): for editted_once in edits1(word): for editted_twice in edits1(editted_once): yield editted_twice def known(words): return set(w for w in words if w in NWORDS) def correct(word): candidates = known([word]) or known(edits1(word)) or known(edits2(word)) or [word] return max(candidates, key=NWORDS.get) Main file Your top-level code should be under an if __name__ == '__main__': clause. So move your txtCorpus building and your call to main there. 
In fact, main would be of better interest if it built txtCorpus itself before calling status_processing. status_processing also does more than it advertises: it processes the statuses, but also saves the result to the DB. You should let the caller do whatever they want with the processed results. All these prints may be unneeded by someone else. Consider using the logging module instead. def status_processing(corpus): myCorpus = preprocessing.PreProcessing() myCorpus.text = str(corpus) print "Doing the Initial Process..." myCorpus.initial_processing() print "Done." print "----------------------------" print ("StartingLexical Diversity...") myCorpus.lexical_diversity() print "Done" print "----------------------------" print "Removing Stopwords..." myCorpus.stopwords() print "Done" print "----------------------------" print "Lemmatization..." myCorpus.lemmatization() print "Feito" print "----------------------------" print "Correcting the words..." myCorpus.spell_correct() print "Done" print "----------------------------" print "Untokenizing..." word_final = myCorpus.untokenizing() print "Feito" print "----------------------------" return word_final if __name__ == '__main__': dtype_dic = {'status_id': str, 'status_message': str, 'status_published': str} txt_corpus = list(pd.read_csv( 'data/MyCSV.csv', dtype=dtype_dic, encoding='utf-8', sep=',', header='infer', engine='c', chunksize=2)) word_final = status_processing(txt_corpus) print "Saving in DB...." try: db.myDB.insert(word_final, continue_on_error=True) except pymongo.errors.DuplicateKeyError: pass print "Insertion in the DB Completed. End of the Pre-Processing Process " preprocessing.py Techniques should be an enum. You can use flufl.enum if you need enums in Python 2. But since I don't see the class used anywhere in the code, you can get rid of it. Since it seems that the code is for Python 2, you should have PreProcessing inherit from object. 
The text property of PreProcessing does not add value over a self.text attribute initialized in the constructor. Especially since you need to set it for the other methods to work. pass is unnecessary for non-empty blocks. tokenizing offers a choice between two variants; a boolean parameter would be better suited here. And since you seem to use only one of them, you can give it a default value. I would merge __init__ and initial_processing since this method populates the self.tokens attribute with the initial set of tokens every other method works with. Using raise NotImplementedError instead of return 'Not implemented yet' is much more meaningful. Consider using list comprehensions or the list constructor instead of manually appending items to an empty list. import nltk import re from nltk.stem.snowball import SnowballStemmer from nltk.stem import WordNetLemmatizer from bs4 import BeautifulSoup import spellcorrect class PreProcessing(): def __init__(self, text): soup = BeautifulSoup(text, "html.parser") #TODO: change here if you want to keep the links self.text = re.sub(r'(http://|https://|www.)[^"\' ]+', " ", soup.get_text()) self.tokens = self.tokenizing() def lexical_diversity(self): word_count = len(self.text) vocab_size = len(set(self.text)) return vocab_size / word_count def tokenizing(self, use_default_tokenizer=True): if use_default_tokenizer: return nltk.tokenize.word_tokenize(self.text) stok = nltk.data.load('tokenizers/punkt/portuguese.pickle') return stok.tokenize(self.text) def stopwords(self): stopwords = set(nltk.corpus.stopwords.words('portuguese')) stopwords.update([ 'foda', 'caralho', 'porra', 'puta', 'merda', 'cu', 'foder', 'viado', 'cacete']) self.tokens = [word for word in self.tokens if word not in stopwords] def stemming(self): snowball = SnowballStemmer('portuguese') self.tokens = [snowball.stem(word) for word in self.tokens] def lemmatization(self): lemmatizer = WordNetLemmatizer() #'portuguese' self.tokens = [lemmatizer.lemmatize(word, pos='v') 
for word in self.tokens] def part_of_speech_tagging(self): raise NotImplementedError def padronizacaoInternetes(self): raise NotImplementedError def untokenize(self, words): """ Untokenizing a text undoes the tokenizing operation, restoring punctuation and spaces to the places that people expect them to be. Ideally, `untokenize(tokenize(text))` should be identical to `text`, except for line breaks. """ text = ' '.join(words) step1 = text.replace("`` ", '"').replace(" ''", '"').replace('. . .', '...') step2 = step1.replace(" ( ", " (").replace(" ) ", ") ") step3 = re.sub(r' ([.,:;?!%]+)([ \'"`])', r"\1\2", step2) step4 = re.sub(r' ([.,:;?!%]+)$', r"\1", step3) step5 = step4.replace(" '", "'").replace(" n't", "n't").replace( "can not", "cannot") step6 = step5.replace(" ` ", " '") return step6.strip() def untokenizing(self): return ' '.join(self.tokens) def spell_correct(self): self.tokens = [spellcorrect.correct(word) for word in self.tokens] More generic comments Consider reading (and following) PEP 8, the official Python style guide; especially as regards import declarations, whitespace around operators, commas… and variable names. Also consider using docstrings throughout your code; it will make it easier to understand.
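To see the generator-based corrector in action, here is a self-contained sketch along the lines suggested above. The tiny hand-made word-frequency table is an assumption standing in for floresta.words() (so the example runs without downloading NLTK corpora), and edits2 is omitted for brevity:

```python
import collections
from string import ascii_lowercase as alphabet

# Toy frequency table standing in for collections.Counter(floresta.words()).
NWORDS = collections.Counter("the cat sat on the mat the cat ran".split())

def edits1(word):
    # Lazily generate every candidate one edit away from `word`.
    for i in range(len(word) + 1):
        begin, end = word[:i], word[i:]
        if end:
            yield begin + end[1:]                    # delete
        if len(end) > 1:
            yield begin + end[1] + end[0] + end[2:]  # transpose
        for other in alphabet:
            if end:
                yield begin + other + end[1:]        # replace
            yield begin + other + end                # insert

def known(words):
    return set(w for w in words if w in NWORDS)

def correct(word):
    candidates = known([word]) or known(edits1(word)) or [word]
    return max(candidates, key=NWORDS.get)

print(correct("teh"))  # "the" — recovered by a transposition
print(correct("cta"))  # "cat"
```

Because edits1 is a generator, known consumes candidates one at a time instead of materializing the whole edit set in memory.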
{ "domain": "codereview.stackexchange", "id": 22895, "tags": "python, time-limit-exceeded, csv, natural-language-processing" }
What is the meaning of $V(D,G)$ in the GAN objective function?
Question: Here is the GAN objective function. $$\min _{G} \max _{D} V(D, G)=\mathbb{E}_{\boldsymbol{x} \sim p_{\text {data }}(\boldsymbol{x})}[\log D(\boldsymbol{x})]+\mathbb{E}_{\boldsymbol{z} \sim p_{\boldsymbol{z}}(\boldsymbol{z})}[\log (1-D(G(\boldsymbol{z})))]$$ What is the meaning of $V(D, G)$? How do we get these expectation parts? I was trying to understand it following this article: Understanding Generative Adversarial Networks (D.Seita), but, after many tries, I still can't understand how he got from $\sum_{n=1}^{N} \log D(x)$ to $\mathbb{E}(\log(D(x)))$. Answer: To understand this equation, first you need to understand the context in which it is first introduced. We have two neural networks (i.e. $D$ and $G$) that are playing a minimax game. This means that they have competing goals. Let's look at each one separately: Generator Before we start, you should note that throughout the whole paper the notion of the data-generating distribution is used; in short the authors will refer to the samples through their underlying distributions, i.e. if a sample $a$ is drawn from a distribution $p_a$, we'll denote this as $a \sim p_a$. Another way to look at this is that $a$ follows distribution $p_a$. The generator ($G$) is a neural network that produces samples from a distribution $p_g$. It is trained so that it can bring $p_g$ as close to $p_{data}$ as possible, so that samples from $p_g$ become indistinguishable from samples from $p_{data}$. The catch is that it never gets to actually see $p_{data}$. Its inputs are samples $z$ from a noise distribution $p_z$. Discriminator The discriminator ($D$) is a simple binary classifier that tries to identify which class a sample $x$ belongs to. There are two possible classes, which we'll refer to as the fake and the real. Their respective distributions are $p_{data}$ for the real samples and $p_g$ for the fake ones (note that $p_g$ is actually the distribution of the outputs of the generator, but we'll get back to this later). 
Since it is a simple binary classification task, the discriminator is trained on a binary cross-entropy error: $$ J^{(D)} = H(y, \hat y) = H(y, D(x)) $$ where $H$ is the cross-entropy and $x$ is sampled either from $p_{data}$ or from $p_g$, each with a probability of $50\%$. More formally: $$ x \sim \begin{cases} p_{data} \rightarrow & y = 1, & \text{with prob 0.5}\\ p_g \;\;\;\,\rightarrow & y = 0, & \text{otherwise} \end{cases} $$ We consider $y$ to be $1$ if $x$ is sampled from the real distribution and $0$ if it is sampled from the fake one. Finally, $D(x)$ represents the probability with which $D$ thinks that $x$ belongs to $p_{data}$. By writing out the cross-entropy formula we get: $$ H(y, D(x)) = - \frac{1}{N} \sum_{i=1}^{N}{ \left[ \; y_i \; log(D(x_i)) + (1 - y_i) \; log(1 - D(x_i)) \right]} $$ where $N$ is the size of the dataset. Since each class has $N/2$ samples, we can split this sum into two parts: $$ = - \left[ \frac{1}{N} \sum_{i=1}^{N/2}{ \; y_i \; log(D(x_i))} + \frac{1}{N} \sum_{i=N/2+1}^{N} \; (1 - y_i) \; log((1 - D(x_i))) \right] $$ The first of the two terms represents the samples from the $p_{data}$ distribution, while the second one the samples from the $p_g$ distribution. Since both classes are equally likely, we can convert the sums into expectations: $$ = - \left[ \frac{1}{2} \; \mathbb{E}_{x \sim p_{data}}[log \; D(x)] + \frac{1}{2} \; \mathbb{E}_{x \sim p_{g}}[log \; (1 - D(x))] \right] $$ At this point, we'll drop the factors of $\frac{1}{2}$ since they're constant and thus irrelevant when optimizing this equation. Now, remember that samples that were drawn from $p_g$ were actually outputs from the generator (obviously this affects only the second term). If we substitute $D(x), x \sim p_g$ with $D(G(z)), z \sim p_z$ we'll get: $$ L_D = - \left[\; \mathbb{E}_{x \sim p_{data}}[log \; D(x)] + \; \mathbb{E}_{z \sim p_{z}}[log \; (1 - D(G(z)))] \right] $$ This is the final form of the discriminator loss. 
Zero-sum game setting The discriminator's goal, through training, is to minimize its loss $L_D$. Equivalently, we can think of it as trying to maximize the opposite of the loss: $$ \max_D{[-J^{(D)}]} = \max_D \left[\; \mathbb{E}_{x \sim p_{data}}[log \; D(x)] + \; \mathbb{E}_{z \sim p_{z}}[log \; (1 - D(G(z)))] \right] $$ The generator however, wants to maximize the discriminator's uncertainty (i.e. $J^{(D)}$), or equivalently minimize $-J^{(D)}$. $$ J^{(G)} = - J^{(D)} $$ Because the two are tied, we can summarize the whole game through a value function $V(D, G) = -J^{(D)}$. At this point I like to think of it like we are seeing the whole game through the eyes of the generator. Knowing that $D$ tries to maximize the aforementioned quantity, the goal of $G$ is: $$ \min_G\max_D{V(D, G)} = \min_G\max_D \left[\; \mathbb{E}_{x \sim p_{data}}[log \; D(x)] + \; \mathbb{E}_{z \sim p_{z}}[log \; (1 - D(G(z)))] \right] $$ Disclaimer: This whole endeavor (on both my part and the authors' part) was to provide a mathematical formulation to training GANs. In practice there are many tricks that are invoked to effectively train a GAN, that are not depicted in the above equations.
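The step from sums to expectations can be checked numerically. Here is a hedged sketch in which the "discriminator" is a hypothetical fixed sigmoid (not a trained network) and toy Gaussians stand in for $p_{data}$ and $p_g$; it verifies that $V(D,G)$ is, up to the constant factor of $\frac{1}{2}$ discussed above, minus the binary cross-entropy of $D$ on a balanced real/fake dataset:

```python
import math
import random

random.seed(0)

def D(x):
    # Hypothetical fixed "discriminator": a sigmoid of the sample value.
    return 1.0 / (1.0 + math.exp(-x))

# Toy stand-ins for p_data and p_g (the generator's output distribution).
real = [random.gauss(2.0, 1.0) for _ in range(50000)]   # x ~ p_data, y = 1
fake = [random.gauss(-2.0, 1.0) for _ in range(50000)]  # x ~ p_g,    y = 0

# The two expectation terms of V(D, G), estimated as sample means:
term_real = sum(math.log(D(x)) for x in real) / len(real)
term_fake = sum(math.log(1.0 - D(x)) for x in fake) / len(fake)
V = term_real + term_fake

# Binary cross-entropy of D over the pooled, labelled dataset:
labelled = [(x, 1) for x in real] + [(x, 0) for x in fake]
bce = -sum(y * math.log(D(x)) + (1 - y) * math.log(1.0 - D(x))
           for x, y in labelled) / len(labelled)

# V equals -2 * H(y, D(x)): the same quantity up to the 1/2 factor.
print(V, -2.0 * bce)
```

The two printed numbers agree to floating-point precision, since the expressions are algebraically identical.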
{ "domain": "ai.stackexchange", "id": 1204, "tags": "machine-learning, generative-model, generative-adversarial-networks, notation" }
Python decorator for optional arguments decorator
Question: I want my Python decorators to have optional arguments and not be called when not necessary. The accepted answer in here doesn't accept named arguments, and I don't want to add boilerplate code inside decorators, so I came up with an alternative decorator: import inspect def decorator_defaults(**defined_defaults): def decorator(f): args_names = inspect.getargspec(f)[0] def wrapper(*new_args, **new_kwargs): defaults = dict(defined_defaults, **new_kwargs) if len(new_args) == 0: return f(**defaults) elif len(new_args) == 1 and callable(new_args[0]): return f(**defaults)(new_args[0]) else: too_many_args = False if len(new_args) > len(args_names): too_many_args = True else: for i in range(len(new_args)): arg = new_args[i] arg_name = args_names[i] defaults[arg_name] = arg if len(defaults) > len(args_names): too_many_args = True if not too_many_args: final_defaults = [] for name in args_names: final_defaults.append(defaults[name]) return f(*final_defaults) if too_many_args: raise TypeError("{0}() takes {1} argument(s) " "but {2} were given". 
format(f.__name__, len(args_names), len(defaults))) return wrapper return decorator Two sample decorators: from functools import wraps @decorator_defaults(start_val="-=[", end_val="]=-") def my_text_decorator(start_val, end_val): def decorator(f): @wraps(f) def wrapper(*args, **kwargs): return "".join([f.__name__, ' ', start_val, f(*args, **kwargs), end_val]) return wrapper return decorator @decorator_defaults(end_val="]=-") def my_text_decorator2(start_val, end_val): def decorator(f): @wraps(f) def wrapper(*args, **kwargs): return "".join([f.__name__, ' ', start_val, f(*args, **kwargs), end_val]) return wrapper return decorator And usage of sample text decorators: @my_text_decorator def func1a(value): return value @my_text_decorator() def func2a(value): return value @my_text_decorator2("-=[") def func2b(value): return value @my_text_decorator(end_val=" ...") def func3a(value): return value @my_text_decorator2("-=[", end_val=" ...") def func3b(value): return value @my_text_decorator("|> ", " <|") def func4a(value): return value @my_text_decorator2("|> ", " <|") def func4b(value): return value @my_text_decorator(end_val=" ...", start_val="|> ") def func5a(value): return value @my_text_decorator2("|> ", end_val=" ...") def func5b(value): return value print(func1a('My sample text')) # func1a -=[My sample text]=- print(func2a('My sample text')) # func2a -=[My sample text]=- print(func2b('My sample text')) # func2b -=[My sample text]=- print(func3a('My sample text')) # func3a -=[My sample text ... print(func3b('My sample text')) # func3b -=[My sample text ... print(func4a('My sample text')) # func4a |> My sample text <| print(func4b('My sample text')) # func4b |> My sample text <| print(func5a('My sample text')) # func5a |> My sample text ... print(func5b('My sample text')) # func5b |> My sample text ... decorator_defaults works, but I believe it could be written better. 
I'm not that experienced in Python, so I would like to hear some ideas/comments on how to improve it. Answer: The code looks rather complicated. Instead of trying to understand it, I'd just like to point to NickC's answer to the linked SO question. If I add **kwargs to his optional_arg_decorator like this... def optional_arg_decorator(fn): def wrapped_decorator(*args, **kwargs): if len(args) == 1 and len(kwargs) == 0 and callable(args[0]): return fn(args[0]) else: def real_decorator(decoratee): return fn(decoratee, *args, **kwargs) return real_decorator return wrapped_decorator ...and adapt your decorators like this, I'm getting the same output from the test cases. from functools import wraps @optional_arg_decorator def my_text_decorator(f, start_val="-=[", end_val="]=-"): @wraps(f) def wrapper(*args, **kwargs): return "".join([f.__name__, ' ', start_val, f(*args, **kwargs), end_val]) return wrapper @optional_arg_decorator def my_text_decorator2(f, start_val, end_val="]=-"): @wraps(f) def wrapper(*args, **kwargs): return "".join([f.__name__, ' ', start_val, f(*args, **kwargs), end_val]) return wrapper
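Putting the answer's pieces together, here is a self-contained check of the **kwargs-extended optional_arg_decorator (the decorated functions are just the question's test cases, abbreviated):

```python
from functools import wraps

# NickC's decorator, extended with **kwargs as shown in the answer.
def optional_arg_decorator(fn):
    def wrapped_decorator(*args, **kwargs):
        if len(args) == 1 and len(kwargs) == 0 and callable(args[0]):
            return fn(args[0])  # used bare: @tag
        def real_decorator(decoratee):
            return fn(decoratee, *args, **kwargs)
        return real_decorator
    return wrapped_decorator

@optional_arg_decorator
def tag(f, start_val="-=[", end_val="]=-"):
    @wraps(f)
    def wrapper(*args, **kwargs):
        return "".join([f.__name__, ' ', start_val, f(*args, **kwargs), end_val])
    return wrapper

@tag                       # bare decorator: defaults apply
def func1(value):
    return value

@tag(end_val=" ...")       # named argument
def func2(value):
    return value

@tag("|> ", " <|")         # positional arguments
def func3(value):
    return value

print(func1('My sample text'))  # func1 -=[My sample text]=-
print(func2('My sample text'))  # func2 -=[My sample text ...
print(func3('My sample text'))  # func3 |> My sample text <|
```

The bare-usage branch relies on the callable check, so this scheme cannot take a single callable as a decorator argument; for these string-configured decorators that limitation never bites.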
{ "domain": "codereview.stackexchange", "id": 11877, "tags": "python, python-3.x, meta-programming" }
Will a giant ball of protons form a black hole?
Question: Suppose you have enough energy and resources to put together (in a momentarily static configuration in which they are all at rest at the same time) as many protons as you want to form a "proton star". This ball will last a split second before the protons' repulsion disperses it again. But if made of enough protons, it will momentarily have enough mass to form a black hole (or am I wrong and there is no upper mass limit for this situation?). So the question is, will a black hole form or not? My doubt is this: it is always argued that a star collapses into a black hole because there is no other force in nature to counterbalance gravity; however, notice that in this case the repulsive force of the charged protons is always larger than the implosive force due to gravity. Answer: This question has already been answered by Randall Munroe of XKCD and Dr. Cindy Keeler of the Niels Bohr Institute. It forms a naked singularity, which is an infinitely dense object from which light can escape. Source: https://what-if.xkcd.com/140/ You need the Reissner–Nordström metric for this question, as opposed to the better-known Schwarzschild metric. Overall, you will get a black hole, but it would be a black hole without an event horizon (a naked singularity). Naked singularities are conjectured to be forbidden (the cosmic censorship hypothesis) in general relativity, which I assume is the framework we want to work within for this question.
{ "domain": "physics.stackexchange", "id": 31787, "tags": "general-relativity, black-holes, charge" }
How do central charges affect $R$-symmetry group in extended SUSY?
Question: When examining ${\cal N}=1$ SUSY one finds that the corresponding $R$-symmetry group is simply $U(1)$. On the other hand, when considering extended SUSY (i.e. ${\cal N}>1$) the largest possible group is $U({\cal N})$, but that depends on the central charges, which are defined by the anticommutator $$ \{Q_{\alpha}^I,Q_{\beta}^{J}\} = \varepsilon_{\alpha\beta} Z^{IJ}. $$ In fact, in the presence of non-vanishing central charges one can prove that the $R$-symmetry group reduces to $USp({\cal N})$, the compact version of the symplectic group $Sp({\cal N})$, $USp({\cal N}) \simeq U({\cal N}) \cap Sp({\cal N})$. Why is that? Answer: Well, now your $R$-transformations have to preserve $Z^{IJ}$. Because $\epsilon_{\alpha\beta}$ is antisymmetric, $Z^{IJ}$ should also be antisymmetric so that their product is symmetric under the exchange $(I,\alpha)\leftrightarrow (J,\beta)$, just like the anticommutator. Of course, if you have a sufficiently large number of supercharges $\mathcal{N}$, you may introduce a $Z^{IJ}$ that has only, for example, $Z^{12}=-Z^{21}$ non-vanishing, with all other components zero. In that case the $R$-symmetry group will be larger. But (for even $\mathcal{N}$) you may consider the situation where there is no combination of the generators $c_I Q^I_\alpha$ such that $c_I Z^{IJ}=0$. Then $Z^{IJ} dq_I\, dq_J$ is a skew-symmetric nondegenerate bilinear form, i.e. the symplectic form. Therefore the $R$-symmetries should belong to $Sp(\mathcal{N})$. But they also should belong to $U(\mathcal{N})$. Therefore the $R$-symmetry group should be the intersection of $U(\mathcal{N})$ and $Sp(\mathcal{N})$, i.e. $USp(\mathcal{N})$.
{ "domain": "physics.stackexchange", "id": 85599, "tags": "group-theory, supersymmetry, lie-algebra" }
Why does it seem that the potential difference dependence of capacitance and total energy stored in a parallel-plate capacitor are contradictory?
Question: Consider a parallel-plate capacitor. Charge is stored physically on electrodes ("plates") which are flat and parallel to one another. If one electrode has charge $+Q$ and the other electrode has charge $-Q$, and $V$ is the potential difference between the electrodes, then the capacitance $C$ is $$C = \frac{Q}{V}$$ (This definition of $C$ is given in, for example, Introduction to Electrodynamics by David J. Griffiths.) But, now, let's think about the energy stored in the electric field between the electrodes of this parallel-plate capacitor. As stated in Griffiths on page 105, "How much work $W$ does it take to charge the capacitor up to a final amount $Q$?" It turns out that $W$ is $$W = \frac{1}{2} CV^2$$ So: (i) the capacitor's capacitance $C$ goes like $\frac{1}{V}$; and (ii) the energy $W$ stored in the electric field goes like $V^2$. Are statements (i) and (ii) at odds with one another? I am sure that they cannot be. But conceptually I am having difficulty. We desire high capacitance -- we want to put as much charge on the electrodes as possible, because if we accomplish this, then I think that will increase the energy density of the system. But is what I just said true? If we manage to increase $Q$, then by $V = \frac{Q}{C}$, the potential difference $V$ between the plates will also increase. This, I think, is why capacitor electrodes are separated by a material (such as a polarizable dielectric material like a slab of plastic); otherwise $V$ will become too large and the breakdown voltage will be reached, generating a spark. But, now, the equation $W = \frac{1}{2} CV^2$ (where I think that $W$ can be conceptualized as the energy stored in the electric field between the electrodes) seems to say that as $V$ increases, so does the energy $W$, quadratically. So, my question is, do we want a capacitor to have a large potential difference $V$ or a small potential difference $V$? 
If $V$ is large, then $W$ is large (which we want), but $C$ is small (which we do not want). Am I somehow thinking of two different potential differences $V$ and confusing them? Answer: As a summary of what other answers have already stated, in essence: Capacitance is a function of the geometry of the capacitor (directly proportional to the overlap area of the plates and inversely proportional to their separation) and of the relative permittivity of the dielectric employed. Once these construction parameters have been fixed, the capacitance is uniquely defined, and so it is constant as the capacitor charges and discharges (assuming linear response, as is the norm in standard circuits). The relation between the voltage across the plates of the capacitor, the accumulated charge, and its capacitance is just the definition of the latter. That capacitance is a property of the system depending only on geometry and the dielectric's permittivity is a fact that can be deduced from this definition. So, once you have picked a capacitance (by fixing the parameters involved, as explained above), maximizing the energy stored in the electric field comes down to increasing the voltage between the plates as much as possible. Nevertheless, increasing the electric field's strength has a limit (the dielectric strength), which corresponds to the breakdown voltage of the dielectric; that is the maximum voltage safely and technically attainable. So: $W_{max}=\displaystyle\frac{1}{2}CV^{2}_{max}\ ;\\V_{max}=E_{max}\cdot d\ ;\\ C=\displaystyle\varepsilon_{0}\varepsilon_{r}\frac{A}{d}\Longrightarrow W_{max}=\displaystyle\frac{1}{2}\varepsilon_{0}\varepsilon_{r}E_{max}^{2}\cdot A\cdot d=\displaystyle\frac{1}{2}\varepsilon_{0}\varepsilon_{r}E_{max}^{2}\cdot Vol_{\ dielectric}$ Corollary: The amount of energy stored in a fully charged capacitor is only obliquely related to its capacitance. 
Nevertheless, even though the energy of the electric field is directly proportional to the volume of dielectric between the plates (the product of the plates' area and their separation), for a given amount of dielectric material the preferred geometry implies a large area and as little separation as possible, because that arrangement allows more compact designs. That's why larger capacitance leads to larger energy density in practical applications.
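A quick numeric sketch of the relations above, with all component values being illustrative assumptions (not from the question):

```python
# Hypothetical parallel-plate capacitor: A = 1 cm^2 plate area,
# d = 10 um separation, eps_r = 3, dielectric strength E_max = 200 MV/m.
EPS0 = 8.854e-12                 # vacuum permittivity, F/m
A, d, eps_r, E_max = 1e-4, 1e-5, 3.0, 2e8

C = eps_r * EPS0 * A / d         # capacitance fixed by geometry + dielectric
V_max = E_max * d                # breakdown-limited voltage
W_max = 0.5 * C * V_max**2       # maximum stored energy via (1/2) C V^2

# Same energy via the field-energy form (1/2) eps0 eps_r E^2 * volume:
W_density = 0.5 * eps_r * EPS0 * E_max**2 * (A * d)

print(C, V_max, W_max)  # ~2.66e-10 F, 2000.0 V, ~5.31e-4 J
```

The two energy expressions agree, illustrating the corollary: the stored energy is set by the dielectric volume and breakdown field, not directly by the capacitance.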
{ "domain": "physics.stackexchange", "id": 4180, "tags": "electrostatics, electric-fields, potential, capacitance, voltage" }
What type of galaxy is LEDA 14884?
Question: A new image of deep space was released yesterday, showing tons of galaxies. One in particular (named "LEDA 14884") looks very puzzling (like an ear), zoomed in below: The article says it's a ring-shaped galaxy. Is there a more precise characterisation in terms of well-known galaxy typologies (irregular perhaps)? Perhaps the angle is confusing its otherwise usual shape? Answer: This is almost certainly an example of a collisional ring galaxy. It's listed as such (under the alternate name AM 0417-391) in the catalog of Madore et al. 2009; here is a panel showing the galaxy from one of their figures: Image (probably from the Digitized Sky Survey) of AM 0417-391 (= LEDA 14884), taken from Figure 5 of Madore et al. (2009). "RN" = "ring nucleus"; "Cn" = possible "companion" galaxies, though some of these are in this case parts of the ring. These are thought to be the result of a smaller galaxy colliding more or less head-on with a larger disk galaxy. The smaller galaxy passes through (so these are sometimes called "drop through" collisions), but leaves behind a pronounced "ripple" in the gas, which becomes concentrated in an expanding ring and forms stars at a high rate; this shows up as a ring of gas and bright, young stars. Since most such collisions will be off-center (i.e., the smaller galaxy won't pass through the exact center of the bigger galaxy), the ring will be asymmetric, as is clearly the case for LEDA 14884. These are very rare systems; Madore et al. (2009) quote an estimated frequency of 0.01% of galaxies in the local universe being collisional ring galaxies.
{ "domain": "astronomy.stackexchange", "id": 5374, "tags": "galaxy, identify-this-object" }
Jiggling tf Tree After fusing Odometry with IMU
Question: I am using ROS NOETIC and Pioneer3DX as the mobile robot which publishes Odometry info into /RosAria/pose topic as nav_msgs/Odometry message. Phidgets Spatial IMU is being used which published the IMU data to /imu/data_raw and /imu/data (this uses imu_madgwick filter to get the orientation data) topics as sensor_msgs/imu. This is my launch file: <node pkg="robot_localization" type="ekf_localization_node" name="ekf_odom_node" output="screen" > <param name="frequency" value="10"/> <param name="sensor_timeout" value="0.1"/> <param name="publish_tf" value="true"/> <param name="two_d_mode" value="true"/> <remap from="odometry/filtered" to="odom/ekf/enc_imu"/> <!-- <param name="map_frame" value="map"/> --> <param name="odom_frame" value="odom"/> <param name="base_link_frame" value="base_link"/> <param name="world_frame" value="odom"/> <param name="transform_time_offset" value="0.0"/> <param name="odom0" value="/RosAria/pose"/> <param name="odom0_differential" value="true" /> <param name="odom0_relative" value="false" /> <param name="odom0_queue_size" value="10" /> <rosparam param="odom0_config">[false, false, false, false, false, false, true, true, false, false, false, true, false, false, false]</rosparam> <param name="imu0" value="/imu/data"/> <param name="imu0_differential" value="false" /> <param name="imu0_relative" value="true" /> <param name="imu0_queue_size" value="10" /> <param name="imu0_remove_gravitational_acceleration" value="true" /> <rosparam param="imu0_config">[false, false, false, false, false, true, false, false, false, false, false, true, true, false, false]</rosparam> <param name="print_diagnostics" value="true" /> <param name="debug" value="false" /> <param name="debug_out_file" value="debug_odom_ekf.txt" /> <rosparam param="process_noise_covariance">[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2.7491722135750453e-06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2.7491722135750453e-06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2.7491722135750453e-06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7.53975837175367e-06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7.53975837175367e-06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7.53975837175367e-06]</rosparam> My Odometry data doesn't have any Covariance and my IMU data has covariance for angular velocity and linear acceleration, which I have specified in the process noise covariance. When I launch robot_localization, tf tree starts jiggling, I assume it is due to the clash between my Mobile robot controller and the robot_localization node publishing the tf in base_link. Please let me know how can I fix this. Also, my filtered odometry topic seems to have covariance and it is constantly changing even when stationary (which I think is due to the IMU, the IMU frame is constantly quivering even when there is no movement). I would like to know the reason/fix for this. 
A single instance of echoed filtered odometry topic looks like this: header: seq: 109 stamp: secs: 1668740288 nsecs: 491907835 frame_id: "odom" child_frame_id: "base_link" pose: pose: position: x: -0.0027767606103315393 y: -0.001673958087734123 z: -6.744843784137596e-19 orientation: x: -2.346676768845103e-21 y: 3.727956144832735e-21 z: -3.8869887649939077e-05 w: 0.999999999244566 covariance: [2.5936842164529874e-09, -8.262592806627886e-10, -6.622168163121949e-29, -6.51261107995881e-29, -2.1762563577312287e-27, 2.8450281461501656e-09, -8.26259280662786e-10, 3.466544928206097e-09, -3.531843233858175e-29, 2.240228932317681e-27, 7.406142294336661e-29, -4.720515954616235e-09, -6.237768220090504e-29, -3.652695440579897e-29, 1.6966118537294256e-09, -2.2507145702543902e-11, 3.7374557772980976e-11, -4.854850589794123e-30, -6.187178545053845e-29, 2.234850450568973e-27, -2.250714570254387e-11, 1.9517088713200696e-09, 2.3360351043958477e-13, -2.9994289813041105e-27, -2.1745832537608525e-27, 7.123303943513571e-29, 3.7374557772980866e-11, 2.336035104395824e-13, 1.9514617135004515e-09, -1.749658505742235e-27, 2.8450281461501797e-09, -4.7205159546162296e-09, -5.483666760685148e-30, -3.647455422328822e-27, -2.1815083512975117e-27, 2.1702338813096678e-08] twist: twist: linear: x: -0.05010471062555465 y: -0.030309224849492545 z: -1.1370249616301676e-18 angular: x: 1.3814492723897156e-21 y: 6.076604847095885e-22 z: 0.00013337306265425098 covariance: [2.24411163849706e-09, 2.9660498804899114e-27, -7.266418360758023e-27, -3.3104205459984865e-32, 5.495001420972768e-32, 7.74031030451591e-50, 9.289639803192707e-28, 2.2441116384970603e-09, -8.745147397450586e-28, -5.4401329579754266e-33, 9.029772770157075e-33, 1.271448515629526e-50, -4.21542365055365e-27, -2.0919414991473807e-27, 1.6905828785801147e-09, -1.3851442826998742e-13, 2.300067240234748e-13, 3.2514696938862866e-31, -1.808953450357923e-32, -1.0703286354554449e-32, -1.3851442826998752e-13, 6.881328598779493e-08, 
1.3119186332455095e-16, -1.4224270414867518e-25, 3.00261465895899e-32, 1.776637994317112e-32, 2.3000672402347464e-13, 1.3119186343091536e-16, 6.881328584902495e-08, -8.276459357269306e-27, 4.1654287691771453e-50, 2.478040890632391e-50, 3.3591904764586926e-31, -2.6523856600289845e-25, -2.1058221482951057e-26, 1.3872113963674785e-07] Answer: I was facing the same problem, as both robot_localization and the diff_drive_controller were publishing the TFs. So I disabled the TFs coming from diff_drive_controller by adding this param: enable_odom_tf: false This solved my problem. Originally posted by Kunal Mod with karma: 26 on 2022-11-23 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by pavan_c on 2022-11-30: Thank You for the answer Kunal, it is working. I even got it working with RosAria by creating a duplicate topic with different names of frame_id's so that the TFs won't clash. Also, can you please let me know how to configure the process noise covariance matrix in the EKF node?
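For reference, a sketch of where that parameter typically lives in a diff_drive_controller YAML configuration (the controller name below is an assumption; match it to your own setup):

```yaml
# Assumed controller name -- use the name from your own controller config.
mobile_base_controller:
  type: "diff_drive_controller/DiffDriveController"
  # Let robot_localization's EKF be the sole publisher of odom -> base_link,
  # so the two nodes stop fighting over the same transform:
  enable_odom_tf: false
```

With this set, the controller still publishes the odometry message, but only the EKF broadcasts the transform, which removes the jitter in the tf tree.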
{ "domain": "robotics.stackexchange", "id": 38133, "tags": "navigation, ekf, pioneer-3dx, rosaria, robot-localization" }
3D vision with one camera / VSLAM with known position
Question: I want to map an environment with a single camera which moved through the environment with a known trajectory, so that I create a 3D map. Several SLAM systems can create a (2D or 3D) map and estimate the position of the camera. But in my case this position is already known. So I am looking for a ROS SLAM package that can be supplied with accurate position data, or a vision package that can be used to do stereo vision with movement parallax. Originally posted by davinci on ROS Answers with karma: 2573 on 2013-07-05 Post score: 1 Answer: Hey, For SLAM using a monocular camera you can try the ethzasl PTAM package. PTAM is a well-known monocular VSLAM algorithm. But as you might know, the map created by a monocular camera will be created at an arbitrary scale. For stereo vision you can use RGBD SLAM. Hope this helps.. Originally posted by ayush_dewan with karma: 1610 on 2013-07-05 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 14817, "tags": "navigation, mapping, camera, 3d-slam" }
ros msg float64 to float value
Question: I am trying to trim the digits after the decimal point of my data. When I use a float to store some data, it gives "-9.082400", but when I use this data as float64 for a ROS topic, it gives "-9.08240001381". How can I remove the "1381" part? Is it happening during conversion between float and float64? Any idea will help me, thx! Originally posted by benthebear93 on ROS Answers with karma: 17 on 2020-08-05 Post score: 0 Answer: @benthebear93, that is the natural behaviour of floating point numbers; it's a representation problem. Doubles, AKA float64, can technically store up to 16 significant digits, so you can try to use floats, AKA float32, which store only 7. You can round the number to the decimal digit you want with, assuming you are using C++: round(value * 10000) / 10000; That rounds the value to the 4th decimal digit. So the number -9.08240001381 will be -9.0824. But at the end of the day you will have decimals that you do not want, since that is how floating point representation works. If anyone knows any other trick to solve this problem I will be glad to discuss it here. Regards. Originally posted by Weasfas with karma: 1695 on 2020-08-05 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by benthebear93 on 2020-08-05: thx for the answer. i guess my question was more related with floating point rather than ROS.
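A quick Python sketch of the same idea (rounding to the 4th decimal digit), plus a formatting alternative that avoids touching the stored value at all:

```python
# The double cannot store -9.0824 exactly; rounding or formatting only
# controls which nearby representable value (or which printed digits) you get.
value = -9.08240001381

# Round to 4 decimal places:
rounded = round(value * 10000) / 10000
assert rounded == -9.0824          # compares equal to the 4-decimal literal

# Often it is simpler to format at print time instead of rounding the value:
assert "%.4f" % value == "-9.0824"
assert f"{value:.6f}" == "-9.082400"
```

Note the formatted strings reproduce both of the question's printouts from the same underlying double, which is why this usually isn't a conversion bug at all.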
{ "domain": "robotics.stackexchange", "id": 35373, "tags": "ros, ros-melodic" }
How to determine whether a newly discovered dinosaur is not a young one and not an entirely different species?
Question: Every once in a while, there is an official announce that a new species has been discovered. For example, paleontologists have recently discovered a dinosaur they named Nanuqsaurus hoglundi, which really resembles Tyrannosaurus Rex, but in a smaller version. It also lived during the same era. Is there something that indicates it's a cousin of the T-Rex, and not Mr. Rex's son? Other small dinosaurs include the Compsognathus, the Microraptor and others. Why were they classified as new species when they were discovered? Considering evolution over a few million years, my guess is that some earlier species may have been smaller and evolved into more complex animals, bigger bodies, horns, frills, etc. Answer: Diagnosing extinct species is even more difficult than extant taxa (see this question). Because systematists describing fossil species (usually) only have skeletons, they compare to other fossils. You are correct that diagnosing a species from only a skeleton can be tricky. What defines what you call a genus, a species, etc.? How morphologically dissimilar can two skeletons be before you call them different species? The running joke is that if you found fossils of a chihuahua skeleton and a great dane skeleton, you would certainly call them different species. But by modern biological species concepts, they are the same species. Then there are extant species that are essentially indistinguishable in terms of their skeletons, but do not interbreed, and so are probably different species. Sometimes there is consensus, and sometimes there is not. This disagreement can lead to a description of a new species by one person being reclassified ("revised" is the term used) as another species. There are lots of examples of this among dinosaurs. Most famously, Brontosaurus was revised as actually being Apatosaurus. More recently, and relevant to your question, Nanotyrannus is thought by some to be a juvenile Tyrannosaurus rex. 
So then how did Fiorillo and Tykowski, who described Nanuqsaurus hoglundi, decide that it was (1) not just a juvenile T. rex and (2) that it was a new species? Their rationale is laid out really well in their paper, primarily in the sections "Diagnosis" and "Description". You rightly picked out the two questions that paleontologists reading this paper would want to know. Nanuqsaurus is not a juvenile: The dorsomedial edge of the maxilla is marked by deep pockets separated by pronounced transverse ridges. These together formed a strong peg-in-socket articulation between the dorsal margin of the maxilla and the ventrolateral edge of the nasal. The same kind of deeply interlocking naso-maxillary contact is a well-documented character that is also present only in developmentally mature individuals of the derived tyrannosaurines Daspletosaurus torosus, Tarbosaurus bataar, and Tyrannosaurus rex 2, [6], [20], [21], [22]. The nasal-maxilla contact is either smoothly grooved or bears only weak scalloping in immature individuals and more basal tyrannosauroids 2, [6]. The presence of this feature in DMNH 21461 is evidence that the material also represents a developmentally mature individual. What they are saying here is that the maxilla (part of the front of the upper jaw) in Nanuqsaurus has a very characteristic relationship to the nasal bone, in which they form a peg-and-socket joint. This relationship is found in tyrannosaurine dinosaurs like Tyrannosaurus, Daspletosaurus, and Tarbosaurus. But more importantly, it is only found in adult tyrannosaurines. So they conclude that Nanuqsaurus is an adult. Nanuqsaurus is a distinct species from Tyrannosaurus and other tyrannosaurines In all three hypotheses of phylogeny, Nanuqsaurus hoglundi was found to be a derived tyrannosaurine, the sister taxon to the Tarbosaurus + Tyrannosaurus clade (Figure 6). This node was supported by a single unambiguous character, the presence of a dorsoventrally tall, paired sagittal crest on the frontal. 
The age of Nanuqsaurus hoglundi (70-69 Ma) is consistent with its place in the recovered hypothesis of tyrannosauroid phylogeny, positioned in time between the more basal Daspletosaurus torosus (Middle to Late Campanian) and the more derived Tyrannosaurus rex (latest Maastrichtian) among other North American tyrannosaurines. The authors ran a phylogenetic analysis of characters coded for Nanuqsaurus and about 20 other taxa. Nanuqsaurus has enough shared characteristics to place it solidly among the tyrannosaurines (near Tyrannosaurus, Daspletosaurus, and Tarbosaurus). But, importantly, it has a characteristic that none of these others have: "the presence of a dorsoventrally tall, paired sagittal crest on the frontal." Based on its being an adult and having a character that is not found anywhere else among tyrannosaurines, Fiorillo and Tykowski name a new species. I'm simplifying what they did, but that's the general idea. To their credit, they acknowledge that there is not a lot distinguishing the new species and discuss in much more detail their rationale and the implications of their results.
{ "domain": "biology.stackexchange", "id": 1933, "tags": "taxonomy, palaeontology" }
Why is the number of digits (bits) in the binary representation of a positive integer $n$ equal to the integral part of $1 + \log_2 n$?
Question: I've stumbled on this definition on Wikipedia, and I can't figure out why. I could probably start the demonstration by saying that, with $n$ bits, you can create $2^n$ possible different numbers, so $2^n=x$. If I rewrite the expression with logarithms, I find $\log_2x = n$, so the base-2-log of the total number of possible combinations of 0 and 1 is $x$. And I'm stuck here. I'm probably seeing this from the wrong angle. Also, I wonder what is that $+1$ for? Sorry if it's a dumb question, but the only way I can memorize a formula is to give a meaning to its numbers. Answer: If you have $n$ bits available you can represent the numbers $0$ to $2^n-1$ in binary. Therefore, if you want to represent a number $x$ in binary, you need a number of bits $n$ that is large enough so that $$ \begin{split} 2^n - 1 & \geq x\\ n & \geq \log_2(x+1)\,. \end{split} $$ Now there are two cases for $\log_2(x+1)$: It is an integer. Then $\log_2(x)$ won't be. It'll be slightly less. So $\lfloor\log_2(x)\rfloor + 1$ will yield the same value. (The exception where $x=1$ can be checked separately.) It isn't an integer. In this case $\lfloor\log_2(x)\rfloor + 1$ yields the next largest integer just as $\lfloor\log_2(x+1)\rfloor + 1$ would. In either case, because $n$ is always an integer we can write $$ n \geq \lfloor\log_2(x)\rfloor + 1\,. $$
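As a sanity check, the formula can be compared against Python's exact `int.bit_length()`, which computes the same quantity without floating point (for very large $x$ the float-based version can suffer rounding error near powers of two, so the exact method is preferable in code):

```python
import math

def bits_needed(x: int) -> int:
    """Number of binary digits of a positive integer x: floor(log2(x)) + 1."""
    return math.floor(math.log2(x)) + 1

# Cross-check against the exact bit_length(), including the powers of two
# where the floor/ceiling distinction in the derivation matters:
for x in range(1, 10000):
    assert bits_needed(x) == x.bit_length()
```

For example, `bits_needed(255)` is 8 while `bits_needed(256)` is 9, matching the two cases in the answer.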
{ "domain": "cs.stackexchange", "id": 11140, "tags": "combinatorics, arithmetic" }
Does charging time of a series of resistance and capacitor depend on the applied voltage?
Question: Because I found out that you can calculate the capacitance from t=rc (the time constant equals resistance times capacitance). But where I got confused was the charging source. Intuitively I would think that if you hooked a higher voltage up to it, then you'd get faster charging, but then you'd get a different rc, yet r and c are supposed to be constants. Put another way, hooking up a 1000 uF cap in series with a 100k ohm resistor to a 9V battery would give one charge time, and then if you put a couple more nine volt batteries in series that would give a shorter time, but you still have a 1000 uF cap in there... what the heck? Answer: The time constant of the circuit doesn't change. Charging 1000 uF through 100 kohms, the time constant is 100 s. This gives the time it will take for the capacitor voltage to reach 63% of its final value. However the final value will of course be 18 V when charging with an 18 V source, and 9 V when charging with a 9 V source. This means that the charging current, and the rate of change of the capacitor voltage will both be higher when charging with the higher-voltage source. Also, if you defined the "charging time" to be the time to reach some fixed intermediate voltage, rather than reaching the final equilibrium voltage, then it will be shorter when charging with the higher voltage source. For example, if you define the charging time to be the time it takes the capacitor voltage to reach 1 V, then it will take about 11.8 s when using a 9 V source or 5.7 s when using an 18 V source.
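The numbers quoted at the end of the answer can be reproduced directly from $v(t) = V_s(1 - e^{-t/\tau})$:

```python
import math

R = 100e3      # ohms
C = 1000e-6    # farads
tau = R * C    # time constant: 100 s, independent of the source voltage

def time_to_reach(v_target, v_source):
    """Time for the capacitor (starting at 0 V) to reach v_target volts."""
    # Invert v(t) = v_source * (1 - exp(-t/tau)):
    # t = tau * ln(v_source / (v_source - v_target))
    return tau * math.log(v_source / (v_source - v_target))

print(round(time_to_reach(1.0, 9.0), 1))    # about 11.8 s with a 9 V source
print(round(time_to_reach(1.0, 18.0), 1))   # about 5.7 s with an 18 V source
```

So the time constant stays at 100 s either way, but the time to any fixed intermediate voltage shrinks with the higher source voltage, exactly as described.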
{ "domain": "physics.stackexchange", "id": 37773, "tags": "capacitance" }
Comparison of examples of incomplete dominance and codominance
Question: When we consider examples of incomplete dominance we take that of the FOUR O'CLOCK plant - we say that the alleles for red and white flower colour are not completely dominant against each other. Hence in the hybrid progeny we say (as in standard textbooks) that pink colour appears due to "expression of a single gene for pigmented flower which produces only pink colour (quantitative inheritance)". In the codominance concept we take the example of SHORT-HORNED CATTLE, which again appears to be the same case: white and red cattle produce roan-coloured hybrid progeny, but due to "juxtaposition of small patches of red and white colour hair". The same principle can be applied to the four o'clock plant too. Suppose the pink colour of the flower is also due to juxtaposition of red and white coloured elements. Then where is the difference between the two concepts? Further, the thing which puzzled me more was that "initially the cattle example was considered to be incomplete dominance. Later it was CORRECTED to codominance." Also, incomplete dominance is also known as "mosaic dominance", but actually mosaic happens in codominance. Kindly clarify my doubts. Answer: Mosaic dominance I think your confusion comes from the statement incomplete dominance is also known as "mosaic dominance" I had never heard of mosaic dominance before and I could not find much on this concept. I found a few papers but they either referred to mosaicism (see below) or used the term only a few times without defining it. There is an entry for mosaic inheritance in the online medical dictionary though (here). The definition they give is inheritance in which the paternal influence is dominant in one group of cells and the maternal in another. It is a little vague to me as 1) the phrasing is a little unusual and 2) it does not specify the underlying mechanism. If anything, it feels like a synonym of codominance and not of incomplete dominance. 
Mosaicism is a thing in genetics but it is mainly unrelated to the concept of dominance, as it refers to genetic polymorphism among cells within an organism. Where did you read about mosaic dominance? Can you please give a direct quote? If anything, mosaic dominance sounds like a synonym of codominance and not a synonym of incomplete dominance. Codominance vs incomplete dominance Outside the above issue, I think you pretty much define these two terms through your examples. You might want to have a look at the post (although it is a post that did not get much attention) How is incomplete dominance different from codominance? In short, codominance implies that the specific phenotypic effect of each allele is visible in the heterozygote, while incomplete dominance implies that the heterozygote has a phenotype that is somewhat intermediate between the two homozygote phenotypes.
{ "domain": "biology.stackexchange", "id": 7478, "tags": "genetics" }
Momentum operator in QM
Question: I am reading a passage from the book "Decoherence and the Quantum to Classical Transition" which describes a scattering process in which a light environmental particle with initial wavefunction $|\chi_i\rangle$ bounces off a heavy particle in position eigenfunction $|x\rangle$ and transitions to a quantum state $|\chi_i(x)\rangle$, so that the overall wavefunction undergoes the evolution $|\chi_i\rangle|x\rangle\rightarrow|\chi_i(x)\rangle|x\rangle$. However, the book then threw the following line in: The state $|x\rangle$ can be thought of as the state $|x=0\rangle$ (corresponding to the scattering center being located at the origin) translated by the action of the momentum operator $\hat p$:$$|x\rangle=e^{-i\hat p\cdot x/\hbar}|x=0\rangle.$$ However, I don't have any intuitive understanding of why the operator $e^{-i\hat p\cdot x/\hbar}$ should map a position eigenstate $|0\rangle$ to a position eigenstate $|x\rangle$. Could anyone point me to what this theorem is called and where I could find a derivation of it? Answer: This is the representation of a translation operator: $$ e^{ip_x a/\hbar}=e^{a\partial_x} $$ Acting on a function (which has derivatives of all orders) we have $$ e^{ip_x a/\hbar}f(x)=e^{a\partial_x}f(x)= \sum_{n=0}^{+\infty}\frac{a^n\partial_x^n}{n!}f(x)= \sum_{n=0}^{+\infty}\frac{a^n}{n!}f^{(n)}(x)=f(x+a), $$ since the last sum is just the Taylor expansion of $f(x+a)$ around the point $x$. This is obviously not a coincidence, since the momentum operator is the generator of infinitesimally small translations, and its conservation follows from translational invariance (see Noether's theorem).
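For a polynomial the Taylor series terminates, so the identity $e^{a\partial_x}f(x) = f(x+a)$ can be verified exactly in a few lines; the helper function below is purely illustrative:

```python
import math

def translated(f_derivs, x, a):
    """Apply exp(a * d/dx) to f via its Taylor series, given a list of
    callables [f, f', f'', ...] evaluated pointwise."""
    return sum(a**n / math.factorial(n) * f_derivs[n](x)
               for n in range(len(f_derivs)))

# f(x) = x^3 has only finitely many nonzero derivatives, so the truncated
# series is the full (exact) series:
f_derivs = [lambda x: x**3, lambda x: 3*x**2, lambda x: 6*x, lambda x: 6.0]
x, a = 1.5, 0.7
assert math.isclose(translated(f_derivs, x, a), (x + a)**3)
```

The same expansion, summed to infinity, is what the answer writes for a general analytic $f$.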
{ "domain": "physics.stackexchange", "id": 93428, "tags": "quantum-mechanics, operators, hilbert-space, momentum" }
How to improve execution of a web service using System.DirectoryServices.AccountManagement that runs very slowly?
Question: This method runs in just under two minutes. I would like to optimize it to run in less than 15 seconds. Using a lambda filter on my list before iterating its elements and removing one conditional statement in the method shaved off 30 seconds. Any ideas to improve performance? c# .net 4.0 [SecurityCritical] [SecurityPermissionAttribute(SecurityAction.Demand)] private static void GetGroupMembership(List<ActiveDirectoryPrincipalProperties> userGroupProperties) { List<ActiveDirectoryPrincipalProperties> groupProperties = new List<ActiveDirectoryPrincipalProperties>(); foreach (ActiveDirectoryPrincipalProperties gProperties in userGroupProperties.FindAll(token => token.groupYesNo.Equals(true))) { PrincipalContext ctx = new PrincipalContext(ContextType.Domain, gProperties.groupDomain); try { GroupPrincipal group = GroupPrincipal.FindByIdentity(ctx, IdentityType.Name, gProperties.groupName); foreach (Principal member in group.GetMembers(true)) { ActiveDirectoryPrincipalProperties memberProperties = new ActiveDirectoryPrincipalProperties(); memberProperties.fullGroupName = gProperties.fullGroupName; memberProperties.groupDomain = gProperties.groupDomain; memberProperties.groupName = gProperties.groupName; memberProperties.groupType = gProperties.groupType; memberProperties.groupYesNo = false; memberProperties.memberDomain = member.Context.Name.ToString(); memberProperties.memberName = member.SamAccountName.ToString(); memberProperties.memberType = member.StructuralObjectClass.ToString(); memberProperties.sqlUserOnlyYesNo = false; groupProperties.Add(memberProperties); } group.Dispose(); } finally { ctx.Dispose(); } } userGroupProperties.AddRange(groupProperties); } Answer: If you are writing this in Visual Studio 2010, it would be worth profiling your code to find out exactly what method calls take the longest. 
Here is a link where you can watch a video from Channel9 about using the built in performance analyser tool in Visual Studio 2010 to perform CPU Sampling on your code: http://channel9.msdn.com/Blogs/wriju/CPU-Sampling-using-Visual-Studio-2010-Performance-Analyzer-Tool Once you've identified the methods that take the longest to execute, you can start to work out if you're making any redundant or excessive calls and remove them, or research each call to find ways of optimising each.
{ "domain": "codereview.stackexchange", "id": 513, "tags": "c#, .net" }
Is it possible to use OpenNI with ROS Jade?
Question: I recently installed the Jade version, so I want to use the OpenNI driver with the Microsoft Kinect, because this driver has great performance. Originally posted by jcardenasc93 on ROS Answers with karma: 70 on 2015-07-18 Post score: 0 Answer: Yes, it should be, but currently it seems you would have to compile it yourself, because it does not seem to be in the repository yet. Regards, Christian Originally posted by cyborg-x1 with karma: 1376 on 2015-07-21 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by jcardenasc93 on 2015-07-21: Hi Christian, thanks for your answer. I am not really sure how to build it from source. Is it just to clone it into catkin_ws/src and then source the packages with source ~/catkin_ws/devel/setup.bash? And my other question is: does that package really work on the Jade version? Comment by cyborg-x1 on 2015-07-22: Yes, that should be it. And of course you need all the dependencies for the packages, you should be able to install them with rosdep: http://answers.ros.org/question/75241/install-ros-dependencies-from-apt/
{ "domain": "robotics.stackexchange", "id": 22227, "tags": "kinect, openni, ros-jade" }
Water vs Milkshake being sucked through a straw
Question: Consider water in a glass being sucked through a straw. The water rises up in the straw because of a pressure gradient introduced by the sucking action. Now, change the liquid from water to something thicker like a thick milkshake (higher viscosity and higher density). It would need a greater amount of suction to raise the level of the slushy/milkshake in the straw to that achieved by using water. How should I explain this? Can this be explained with the Bond number (gravity/capillarity ratio) or should I explain this phenomenon based on the Capillary number (viscosity/capillarity ratio)? Can/Should the velocity of the liquid flow in the straw be calculated with the Hagen-Poiseuille equation? Answer: The classic Poiseuille flow approach is a fine approximate solution for situations that satisfy its assumptions. The effect of gravity can be accounted for well by including it in the pressure-drop term. It should work fine for water being sucked through a soda straw. Surface tension forces won't be large for water or milkshakes sucked through an ordinary soda straw (~5 mm dia.) Surface tension (and the two non-dimensional numbers you mention) would become useful in problems of fluid flow through porous media where liquid is flowing through capillaries. I think milkshakes behave differently than water because they are non-Newtonian. When I pull the straw out of the milkshake, a thick layer (2-3 mm) clings to the outside of the straw ... I can set a cherry on the top of my milkshake and it does not sink into the glass. That's not because it's floating (cherries are more dense than milkshakes), it's because the forces in the milkshake beneath the cherry do not exceed the critical shear-stress required to cause flow. Below this critical shear-stress, milkshakes behave as a solid. 
Note that the dimension of critical shear-stress (a material property of milkshakes) multiplied by the characteristic length of the zone of yielding flow beneath the cherry just happens to have the same dimension as surface tension. But that doesn't mean you're justified in assuming the Bond or Capillary numbers have physical meaning in this case. Dimensions may agree but the physics are different. The Poiseuille flow approach assumes the fluid behaves as a Newtonian fluid. That assumption is likely violated for a milkshake. So Poiseuille flow solutions might be a poor approximation for analyzing milkshakes.
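For the Newtonian (water) case, the Hagen-Poiseuille estimate is easy to sanity-check numerically. The straw dimensions and flow rate below are illustrative assumptions, not measurements:

```python
import math

# Rough Hagen-Poiseuille estimate for water through a soda straw.
mu = 1.0e-3      # dynamic viscosity of water, Pa*s
r = 2.5e-3       # straw inner radius, m (~5 mm diameter)
L = 0.20         # straw length, m
Q = 10e-6        # assumed flow rate, m^3/s (10 mL/s)

# Viscous pressure drop: dP = 8 * mu * L * Q / (pi * r^4)
dP_viscous = 8 * mu * L * Q / (math.pi * r**4)

# Hydrostatic head needed to lift the water the height of the straw:
rho, g = 1000.0, 9.81
dP_gravity = rho * g * L

print(dP_viscous)   # on the order of 1e2 Pa
print(dP_gravity)   # on the order of 2e3 Pa (gravity dominates for water)

# A milkshake-like viscosity (say 1000x water, ignoring the non-Newtonian
# yield stress discussed above) scales the viscous term linearly:
assert math.isclose(8 * (1000 * mu) * L * Q / (math.pi * r**4),
                    1000 * dP_viscous)
```

This shows why the Newtonian approximation is comfortable for water (gentle suction suffices), while a thousand-fold viscosity pushes the viscous term to dominate, even before the yield-stress behavior of a real milkshake enters.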
{ "domain": "physics.stackexchange", "id": 12363, "tags": "fluid-dynamics, density, viscosity" }
Correct Quadcopter Yaw control implementation
Question: I have built a quadcopter from scratch, including my own flight controller. I have implemented a sensor fusion algorithm (Madgwick algorithm), which returns me the current Yaw, Pitch and Roll angles. Then, a PID control algorithm based on these measured angles and desired angles (which are set by the control sticks on a transmitter) adjusts the speed of the motors. For pitch and roll controls the algorithm is obvious and works just fine (i.e., pitch stick is 100% forward - desired pitch is "30 degrees", and the current measured pitch is trying to catch up with that value using PID control). However, for Yaw I would like the yaw stick to rotate the quad in the corresponding direction as long as I am holding the "yaw stick" in a non-zero position, with a rotation speed proportional to the yaw stick angle. Since I have the current measured Yaw angle (relative to a "reference yaw", which is the yaw that the quad had when the throttle was zero, not relative to the "absolute magnetic North-origined yaw"), I am basically just adding "quadRotationSpeed = yawStickValue*delta_t" to the "desired yaw angle" in each iteration of the control loop, and then do the rest as regular PID control: based on desiredYaw and measuredYaw I calculate the motor torques. Now the questions: 1) Is this approach wrong? 2) When I turn the yaw stick, the quad seems to rotate with a speed proportional to the stick angle, but when I let go of the stick, it rapidly turns back half way and then keeps turning slowly back to the "desired angle" I set with the yaw stick. 3) After this yaw maneuver the quad seems to be much less stable during the flight, but gradually it stabilizes. My quad is about 800 grams and is quite large, about 60 cm in diagonal. Answer: So I reimplemented the whole Yaw control thing using PID on raw gyro data, ignoring the sensor fusion current Yaw angle, which turned out to work perfectly. 
Since the gyro returns the rate of change of the yaw angle, and I wanted to control the rate of change of the angle using the control sticks, doing so directly is the most natural approach.
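A minimal sketch of that rate-mode yaw loop (the class, gain values, and variable names here are hypothetical, not taken from the poster's flight controller): the setpoint is the stick-commanded yaw rate and the measurement is the raw gyro z-axis rate, so releasing the stick simply commands a zero rotation rate instead of chasing a stored heading.

```python
class RatePID:
    """Minimal PID acting on yaw *rate* (gyro z-axis) rather than yaw angle."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_rate, gyro_rate, dt):
        error = target_rate - gyro_rate          # rate error, not angle error
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Stick centered and quad not rotating -> zero correction, hence no snap-back
pid = RatePID(kp=0.8, ki=0.1, kd=0.01)
print(pid.update(target_rate=0.0, gyro_rate=0.0, dt=0.01))  # 0.0
```

Because the controller never tracks an absolute yaw angle, the "rapid turn back halfway" symptom from question 2) cannot occur: with the stick released, the only thing being regulated is a zero rate.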
{ "domain": "robotics.stackexchange", "id": 1481, "tags": "quadcopter, control, pid" }
PointCloud subscriber/publisher types
Question: Hi, When working with PointClouds, I usually subscribe to a sensor_msgs::PointCloud2 topic, and convert to a pcl::PointCloud inside my code. Then, once I'm done manipulating the cloud, I convert it back to a sensor_msgs::PointCloud2 to publish it, using these functions: pcl::fromROSMsg pcl::toROSMsg My first question is: what is the cost of using these functions? I am assuming there's a memcpy involved, which (when dealing with Kinect data) is probably not negligible. Second: is there a way to avoid this, by directly subscribing/publishing pcl::PointCloud types? I believe that there is a publisher/subscriber for pcl types, but I'm not sure whether it actually saves time, or does the same conversion/memcpy behind the scenes. Thanks, Ivan Originally posted by Ivan Dryanovski on ROS Answers with karma: 4954 on 2011-03-23 Post score: 1 Answer: The conversion to and from PCL datatypes is quite computationally expensive. I have observed something like 100 ms for a conversion of a Kinect frame on a 1 GHz Atom core. To avoid this you can use direct publication of pcl datatypes and nodelets. A very simple example of a nodelet which does this, in the pointcloud_to_laserscan package, is cloud_throttle.cpp. When publishing over the network the Kinect data was unusable. Using nodelets I was able to take processing of the Kinect from 10 Hz and 150% CPU to 30 Hz and 87% CPU on an Atom by publishing using the exact same data type. Here's a launch file which brings up the Kinect and processes the data inside the same process using nodelets. Originally posted by tfoote with karma: 58457 on 2011-03-24 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by brice rebsamen on 2013-12-19: what about the second part of the question? when publishing / subscribing directly with pcl PointCloud, does it first convert to sensor_msgs::PointCloud2 then serialize, then deserialize and convert to PCL? Or does it serialize/deserialize directly from / to PCL?
I am not using nodelets. Comment by tfoote on 2013-12-19: If it does require conversion it will directly convert to the wire format to or from the pcl datatype.
{ "domain": "robotics.stackexchange", "id": 5184, "tags": "ros, pcl, nodelet, messages, pcl-ros" }
How do I use canonical ordering to reduce symmetry in the SAT encoding of the pigeonhole problem?
Question: In the paper "Efficient CNF Encoding for Selecting 1 from N Objects", the authors introduce their "commander variable" technique for encoding the constraint, and then talk about the pigeonhole problem. Since my error may exist in lower-level understanding, let me declare what I think I know before posing the question: Let $m$ and $n$ be the number of pigeons and holes. The naive encoding uses a propositional variable $X_{i,j}$ that is true when the $i$-th pigeon is to be put in the $j$-th hole. The clause $ExactlyOne(X_{1,1}, X_{1,2}, ..., X_{1,n})$ enforces that pigeon 1 must occupy exactly one hole; identical clauses are added for the other pigeons. The clause $AtMostOne(X_{1,1}, X_{2,1}, ..., X_{m,1})$ enforces that no more than one pigeon occupies hole 1; identical clauses are added for the remaining holes. When there are more pigeons than holes (m > n), the problem is unsolvable (obvious to humans) but the SAT solver doesn't "see" this fact. When it can't find a way to place pigeons $1,2,3,..,m$ it will search an attempt with pigeons $2,1,3,...,m$. It doesn't understand that the order of the pigeons is irrelevant. The paper, among others, calls this symmetry. Instances where $m=n+1$ are used as a strenuous test of a SAT solver's ability to detect unsatisfiability. The paper proposes to break the symmetry by enforcing order on pigeons. Pigeon $i$ must be placed in a hole in front of the hole of pigeon $i+1$ (i.e., the pigeon in hole $j$ must have a smaller number than that of the pigeon in hole $j+1$). It then disappointingly says, "Due to space limitations, we do not explicitly describe in detail the canonical-ordering encoding, but the number of clauses generated is of order $O(n*log(n))$". So my question is: what did they do to get these results? I want to treat the variables $\{X_{1,1}, X_{2,1}, ..., X_{m,1}\}$ as a string of bits that, numerically, identifies the choice of which pigeon went into hole 1, and so on.
Follow this with $n-1$ comparators to enforce the paper's suggestion. My naive comparator construction, however, requires m clauses, one for each bit (of increasingly ugly size). Help! :) Answer: Let $m$ be the number of pigeons and $n$ be the number of holes. Let the propositional variables $B_{i,0}$ ... $B_{i,log(n)}$ encode the binary representation of $j-1$ if the $i$th pigeon is put into the $j$th hole. (Example, if pigeon 1 were placed in hole 10, $j - 1 = 9$, which is binary 1001. So $B_{1,3}$ = true, $B_{1,2}$ = false, $B_{1,1}$ = false and $B_{1,0}$ = true.) Enforce a particular ordering of the pigeons in the holes by requiring that the hole encoded by the $B_{i}$ variables is less than that of $B_{i+1}$. The encodings are compared as you would expect: $B_{i,log(n)}$ < $B_{i+1,log(n)}$ OR $B_{i,log(n)}$ = $B_{i+1,log(n)}$ AND $B_{i,log(n)-1}$ < $B_{i+1,log(n)-1}$ OR $B_{i,log(n)}$ = $B_{i+1,log(n)}$ AND $B_{i,log(n)-1}$ = $B_{i+1,log(n)-1}$ AND $B_{i,log(n)-2}$ < $B_{i+1,log(n)-2}$ OR ... ... following the pattern of allowing the most significant bits to be equivalent as long as the next bit to the right is less than that of the next pigeon. There will be $O(log(n))$ conjunctions per comparator and $O(m)$ comparators, giving the expected $O(m * log(n))$ additional clauses. The $B$ variable values must be implied by the $X_{i,j}$ values. Each $B_{i,*}$ bit is implied by any one of a particular set of the $X_{i,j}$ variables being set. Example: assuming $n = 16$, you would have: $ExactlyOne(X_{1,9}, X_{1,10}, X_{1,11}, X_{1,12}, X_{1,13}, X_{1,14}, X_{1,15}, X_{1,16}, \overline{B_{1,3}})$ which forces $B_{1,3}$ true if pigeon 1 is placed in any of holes 9-16. Otherwise $B_{1,3}$ is set false to satisfy the clause. These clauses set the remaining $B_{i}$ bits. 
$ExactlyOne(X_{1,5}, X_{1,6}, X_{1,7}, X_{1,8}, X_{1,13}, X_{1,14}, X_{1,15}, X_{1,16}, \overline{B_{1,2}})$ $ExactlyOne(X_{1,3}, X_{1,4}, X_{1,7}, X_{1,8}, X_{1,11}, X_{1,12}, X_{1,15}, X_{1,16}, \overline{B_{1,1}})$ $ExactlyOne(X_{1,2}, X_{1,4}, X_{1,6}, X_{1,8}, X_{1,10}, X_{1,12}, X_{1,14}, X_{1,16}, \overline{B_{1,0}})$ There will be $log(n)$ of these clauses for each pigeon. Since there are $m$ pigeons, $m * log(n)$ clauses are added.
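A sketch of how these channeling clauses can be generated programmatically (my own variable numbering; I also emit plain implication clauses rather than the $ExactlyOne$ form above, which is equivalent for this purpose given the existing $ExactlyOne$ constraints on the $X$ variables):

```python
import math

def channeling_clauses(n, pigeon, var):
    """Clauses tying hole variables X[pigeon][j] (j = 1..n) to the binary
    hole-index bits B[pigeon][b], as DIMACS-style signed-integer literals.

    `var` maps a symbolic name to a positive integer variable id.
    """
    nbits = max(1, math.ceil(math.log2(n)))
    clauses = []
    for b in range(nbits):
        holes = [j for j in range(1, n + 1) if (j - 1) >> b & 1]
        for j in holes:
            # X_{i,j} -> B_{i,b} whenever bit b of (j - 1) is set
            clauses.append([-var(('X', pigeon, j)), var(('B', pigeon, b))])
        # B_{i,b} -> (pigeon sits in some hole whose bit b is set)
        clauses.append([-var(('B', pigeon, b))] +
                       [var(('X', pigeon, j)) for j in holes])
    return clauses

ids = {}
def var(name):
    return ids.setdefault(name, len(ids) + 1)

for clause in channeling_clauses(4, pigeon=1, var=var):
    print(clause)
```

For $n$ holes this emits $O(n \log n)$ literals per pigeon, matching the clause budget quoted from the paper; the ordering comparators between consecutive pigeons' $B$ vectors are then added on top of these.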
{ "domain": "cstheory.stackexchange", "id": 2017, "tags": "cc.complexity-theory, sat, proof-complexity" }
Difference between centralized computing and distributed computing using Client-Server
Question: I already understand that Distributed Computing is the breakup of having multiple clients rely on a single source, and having each client utilize other clients for information. But my confusion comes from a type of distributed computing, where one type is the client-server model. From what I know, the client-server model has multiple clients using a network to access a server. How does this differ from centralized computing? Answer: As a contrast to "distributed", "centralized" means one computer. Sometimes (not so much nowadays) this may be a mainframe with multiple "dumb" terminals, in which all the work happens on the mainframe and the terminals just take in inputs and display outputs. If the terminals aren't "dumb", meaning they can do non-trivial computation of their own (and, in particular, maintain state), then this is essentially an example of a distributed, client-server system. Your description of "distributed" corresponds to a peer-to-peer system, which is but one form of a distributed system. Admittedly, there has been a level shift in the terms, and/or they are applied at a logical level. For example, Facebook would be considered a "centralized" system insofar as it's controlled by a single entity, as opposed to Diaspora, which is decentralized. I think it has become popular to use the term "distributed" when "decentralized" may be a better term. Of course, Facebook has many servers spread across the world, and so its infrastructure is very much distributed. Similarly, a single data center for Facebook may logically be a single "server" but internally is a distributed system as well, consisting of hundreds of physical servers. Wikipedia does a pretty good job of outlining the key characteristics of a distributed system from a theoretical perspective: Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components.
Though I would add to that list that the communication medium is unreliable. The focus of distributed computing, in both theory and practice, is dealing with various forms of failure. Wikipedia's use of the more generic term "component" suggests that this notion can be applied at different levels. For example, you can fruitfully think of the various sub-systems of a single computer (on the motherboard) as forming a distributed system if your focus is on timing and failure issues. Even single multi-core CPUs are becoming more and more like distributed systems, with the cores as the "components".
{ "domain": "cs.stackexchange", "id": 6674, "tags": "distributed-systems, computer-networks" }
Did the old world have relatives of plants which were brought after the discovery of Americas?
Question: When the Americas were discovered many plants were brought back to Europe. For example: potato, tomato, strawberries, chilies etc. At the same time many animals were brought to the Americas including: horses, cattle, sheep, and camels. It seems from the records that there were relatives of these animals in the Americas — for example llamas are in the same family (Camelidae) as camels. I wonder if there were relatives of these vegetables from the Americas already present in Europe or anywhere else in the old world. Answer: Three of your examples: potato, tomato, and chilies, are from the genus Solanum. Along with the rest of the Solanaceae family, they are most diverse in the Americas but some species are found worldwide. Species that would be found originally in the 'old world' from the Solanum genus and Solanaceae family include the black nightshade, bittersweet nightshade, and deadly nightshade. There is also at least one cultivated plant variously known as eggplant, aubergine, or brinjal (Solanum melongena) that is thought to have been domesticated in Asia. The strawberry genus, Fragaria, also has species world-wide. Of course, because all life on Earth is related, the answer to the general question "are there relatives of X found in some place" will always be "yes" as long as there is life there, it's only a matter of the extent of relation.
{ "domain": "biology.stackexchange", "id": 10662, "tags": "agriculture, biogeography" }
Lindblad equation derivation
Question: I'm reading A simple derivation of the Lindblad equation. It introduces a Hamiltonian for a system consisting of a principal system $S$, a heat bath $B$ and an interaction term: $\hat{H}=\hat{H}_S+\hat{H}_B+\alpha\hat{H}_{SB}$. It then switches to using operators $\hat{H}(t)$ and $\hat{\rho}(t)$ in the interaction picture. On page 3 it says "$\bar{H}_{SB}$ can always be defined in a manner in which the first term on the right hand side of Eq. (10) is zero." Here is Eq. (10): $\frac{d}{dt}\hat{\rho}_S(t)=-\frac{i}{\hbar}\alpha \text{Tr}_B\left\{[\hat{H}(t),\hat{\rho}(0)]\right\}+\ldots$ As $H_{SB}$ is a given, I'm not sure how we can choose anything to make that term zero. What does the statement mean? Answer: The statement is almost true, although one has to redefine both $H_S$ and $H_{SB}$ to make it work. You need to also make use of the standard assumption of factorising initial conditions, $$ \rho(0) = \rho_S(0)\otimes \rho_B,$$ where $\rho_B$ is the bath reference state, which must commute with the bath Hamiltonian, i.e. $[\rho_B,H_B]=0$. Often (but not always) the bath state is taken to be thermal $$ \rho_B = \frac{\mathrm{e}^{-\beta H_B}}{Z}.$$ Using the cyclicity of the partial trace, you can show that the offending term is equal to $$\mathrm{Tr}_B\left\lbrace[H(t),\rho_S(0)\otimes \rho_B]\right\rbrace = [\mathrm{Tr}_B\{H(t)\rho_B\},\rho_S(0)],$$ where the operator $\mathrm{Tr}_B\{H(t)\rho_B\}$ acts only in the Hilbert space of the system. In order to remove this term, define the shifted Hamiltonians $$H_{SB}' = H_{SB} - \mathrm{Tr}_B\{H_{SB}\rho_B\},$$ $$H_S'= H_{S} + \mathrm{Tr}_B\{H_{SB}\rho_B\},$$ so that $H = H_S' + H_B + H_{SB}'$ is unchanged.
Then, carrying out the derivation as before using the new Hamiltonians, you should be able to prove that $$\mathrm{Tr}_B\left\lbrace[H'(t),\rho_S(0)\otimes \rho_B]\right\rbrace = 0, $$ where $$H'(t) = \mathrm{e}^{\mathrm{i}(H_S'+H_B)t}H_{SB}'\mathrm{e}^{-\mathrm{i}(H_S'+H_B)t}.$$ In practice, one finds that common interaction Hamiltonians often already satisfy the desired property $\mathrm{Tr}_B\{H_{SB}\rho_B\}=0$ for states $\rho_B$ that are diagonal in the reservoir energy eigenbasis.
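A quick numerical sanity check of the cyclicity step (my own addition, not from the paper or the original answer): with random Hermitian matrices on a qubit system and a qubit bath, $\mathrm{Tr}_B\{[H,\rho_S\otimes\rho_B]\}$ equals $[\mathrm{Tr}_B\{H(\mathbb{1}\otimes\rho_B)\},\rho_S]$.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (a + a.conj().T) / 2

def partial_trace_B(m, dA, dB):
    """Trace out the second (bath) tensor factor of an operator on C^dA x C^dB."""
    return np.trace(m.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

dA = dB = 2
H = random_hermitian(dA * dB)     # stands in for the interaction-picture H(t)
rhoS = random_hermitian(dA)       # system state (normalization irrelevant here)
rhoB = random_hermitian(dB)       # bath reference state

rho0 = np.kron(rhoS, rhoB)
lhs = partial_trace_B(H @ rho0 - rho0 @ H, dA, dB)
M = partial_trace_B(H @ np.kron(np.eye(dA), rhoB), dA, dB)
rhs = M @ rhoS - rhoS @ M
print(np.allclose(lhs, rhs))  # True
```

Setting `M` to zero by shifting the Hamiltonians, as above, is then exactly what kills the first term of Eq. (10).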
{ "domain": "physics.stackexchange", "id": 32560, "tags": "quantum-mechanics, thermodynamics, open-quantum-systems" }
Killing vector argument gone awry?
Question: What has gone wrong with this argument?! The original question A space-time such that $$ds^2=-dt^2+t^2dx^2$$ has Killing vectors $(0,1),(-\exp(x),\frac{\exp(x)}{t}), (\exp(-x),\frac{\exp(-x)}{t})$. Given that $$\dot x^b\frac{\partial}{\partial x^b}(\dot x^a\xi_a)=0 \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,(*)$$ where $x^a=(t,x)$, $\dot x^a$ is the tangent vector to a geodesic and $\xi^a$ is a Killing vector, then $$a\exp(x)+b\exp(-x)+c\frac{1}{t}=0$$ where $a,b,c$ are constants. So I tried directly using $(*)$. What I got are $$t\dot t\dot x=0\\ \exp(x)(2\dot t\dot x+t\dot x^2)=0\\ \exp(-x)(2\dot t\dot x-t\dot x^2)=0$$ for each of the Killing vectors given. (Right so far?) Then multiplying the last 2 equations by $t$ and using the first equation, I get $$\exp(x)t^2\dot x^2=0\\ -\exp(-x)t^2\dot x^2=0$$ So any linear combination of the 2 LHS's must vanish, giving $$\alpha\exp(x)t^2\dot x^2+\beta\exp(-x)t^2\dot x^2=0$$ for arbitrary constants $\alpha,\beta$. Either I have gone wrong somewhere, or somehow we must have $$t^2\dot x^2=1+\frac{\gamma}{t}$$ Unfortunately, I can't see why it is so. Any insight? Thanks. Answer: Let us denote \begin{align} \xi_1 = (0,1), \qquad \xi_2 = (-e^x, e^x/t), \qquad \xi_3 = (e^{-x}, e^{-x}/t) \end{align} Each of these killing vectors leads to a conserved quantity \begin{align} c_1 &= \dot x_\mu\cdot (\xi_1)^\mu = \dot x t^2 \\ c_2 &= \dot x_\mu\cdot (\xi_2)^\mu = \dot t e^x +\dot x te^x \\ c_3 &= \dot x_\mu\cdot (\xi_3)^\mu = -\dot t e^{-x} + \dot x t e^{-x} \end{align} From the first conservation equation we obtain $$ \dot x t = \frac{c_1}{t} $$ Plugging this into the second two gives \begin{align} c_2e^{-x} &= \dot t+\frac{c_1}{t} \\ c_3e^{x} &= -\dot t+\frac{c_1}{t} \\ \end{align} and adding them together then gives the desired result.
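As a side check (mine, not part of the original exchange), one can verify with sympy that all three vectors really are Killing vectors of this metric, i.e. that the Lie derivative of the metric along each of them vanishes:

```python
import sympy as sp

t, x = sp.symbols('t x', positive=True)
coords = [t, x]
g = sp.Matrix([[-1, 0], [0, t**2]])   # metric of ds^2 = -dt^2 + t^2 dx^2

def lie_derivative_of_metric(xi):
    """(L_xi g)_{ab} = xi^c d_c g_{ab} + g_{cb} d_a xi^c + g_{ac} d_b xi^c"""
    L = sp.zeros(2, 2)
    for a in range(2):
        for b in range(2):
            L[a, b] = sum(xi[c] * sp.diff(g[a, b], coords[c]) for c in range(2))
            L[a, b] += sum(g[c, b] * sp.diff(xi[c], coords[a]) for c in range(2))
            L[a, b] += sum(g[a, c] * sp.diff(xi[c], coords[b]) for c in range(2))
    return L.applyfunc(sp.simplify)

killing_vectors = [
    sp.Matrix([0, 1]),
    sp.Matrix([-sp.exp(x), sp.exp(x) / t]),
    sp.Matrix([sp.exp(-x), sp.exp(-x) / t]),
]
for xi in killing_vectors:
    print(lie_derivative_of_metric(xi))  # the zero matrix for each
```

So the premise of the question is sound; the trouble lies in how equation (*) was expanded, as the answer's conserved-quantity route shows.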
{ "domain": "physics.stackexchange", "id": 7750, "tags": "homework-and-exercises, general-relativity, spacetime, differential-geometry, geodesics" }
QM explanation on entanglement for spin and the original EPR experiment
Question: I have read this: http://en.wikipedia.org/wiki/EPR_paradox#Mathematical_formulation and this: How does non-commutativity lead to uncertainty? But neither gives me a specific explanation of the math needed to prove these statements for spin in the original EPR paradox. And finally, I think I understand logically where it says: "It remains only to show that Sx and Sz cannot simultaneously possess definite values in quantum mechanics. One may show in a straightforward manner that no possible vector can be an eigenvector of both matrices. More generally, one may use the fact that the operators do not commute," But I have not found anything on the mathematical formulation of how these can be shown. My questions are about the mathematical formulation. Question: Can somebody please show me, in QM math, that "Sx and Sz cannot simultaneously possess definite values in quantum mechanics"? Can somebody please show me, in QM math, that "no possible vector can be an eigenvector of both (Sx and Sz) matrices"? And can somebody please show me, in math, how "one may use the fact that the operators do not commute"? Answer: A state in QM is given by a vector of a Hilbert space, in this case of finite dimension. Observables are represented by Hermitian operators on that Hilbert space. The possible values that can be obtained after a measurement by such an observable operator $\mathcal{O}$ are given by its eigenvalues. Now, for a state to have a definite value, we want a state (vector of the Hilbert space $|x\rangle$) that is associated with a definite eigenvalue. Those vectors are called eigenvectors and satisfy $\mathcal{O} |x\rangle = x |x\rangle$. This then becomes a problem in linear algebra. Your question really is about linear algebra.
Given $S_x$ and $S_z$ as per the article, one can show that they cannot share an eigenvector by supposing the converse and arriving at a contradiction. We have (omitting the constants): $S_x = \left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right)\quad$ and $\quad S_z = \left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right)$ Now suppose that $(x,y)$ is an eigenvector of $S_z$ and $S_x$. This implies that both $(x,-y)=\alpha (x,y)\quad$ and $\quad (y,x)=\beta (x,y)$ So we have that $x=\alpha x, \quad -y = \alpha y, \quad y = \beta x, \quad x=\beta y \implies x=0$ or $y=0$, which contradicts the last two equations unless $x=y=0$. In both cases we have the zero vector, which cannot be a physical state (not normalizable). In general, finding eigenvectors of a matrix is really basic linear algebra and I suggest that you read up on the topic: https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors, as there are much more efficient techniques than what I used in this simple case. What you basically want to do for a matrix $A$ is to calculate the determinant of $A-\lambda Id$ and find the zeroes in the variable $\lambda$ of the resulting polynomial equation. This works because in the diagonal form, subtracting $\lambda Id$ will leave one row with only zeroes, leading to a null determinant. Now once the eigenvalues are found, you use them to obtain linear equations on the components of the associated eigenvector. Finally, it is a result of linear algebra that non-commuting matrices cannot be simultaneously diagonalized, which means they do not share a common eigenbasis. This really means in this context that since $[S_x,S_z]=S_x S_z - S_z S_x \neq 0,$ there will be some states that do not have a definite value for both operators. As ACuriousMind noted, to go further requires the fact that this pair of operators do not commute on any subspace of the Hilbert space they act on.
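The same facts can be checked numerically (a quick illustration of my own, using numpy): the commutator is nonzero, and every eigenvector of $S_x$ has overlap magnitude $1/\sqrt{2}$ with every eigenvector of $S_z$, so no vector is an eigenvector of both.

```python
import numpy as np

Sx = np.array([[0, 1], [1, 0]])
Sz = np.array([[1, 0], [0, -1]])

# Non-commutativity: [Sx, Sz] is nonzero
comm = Sx @ Sz - Sz @ Sx
print(comm)  # the matrix [[0, -2], [2, 0]], not zero

# Columns of vx, vz are orthonormal eigenvectors of Sx and Sz respectively
_, vx = np.linalg.eigh(Sx)
_, vz = np.linalg.eigh(Sz)

# All pairwise overlap magnitudes equal 1/sqrt(2), never 1,
# so no eigenvector of one matrix is an eigenvector of the other
print(np.abs(vx.T @ vz))
```

An overlap magnitude of exactly 1 between two unit vectors would mean they coincide up to phase; the uniform $1/\sqrt{2}$ overlaps are the "maximally incompatible bases" familiar from spin-1/2.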
{ "domain": "physics.stackexchange", "id": 35889, "tags": "quantum-mechanics, quantum-entanglement, heisenberg-uncertainty-principle, epr-experiment" }
Minimize the sum of gaps
Question: I have a set of $n$ objects $\{1,2,\ldots,n\}$ where object $i$ has weight $w(i)$ and we have a capacity $W$. I would like to pick a subset $S=\{a_1,\ldots,a_m\}\subseteq \{1,2,\ldots,n\}$ of the objects in order to minimize $$\sum_{j=0}^m(a_{j+1}-a_{j}-1),$$ (assuming that $a_0=0$ and $a_{m+1}=n+1$) while respecting the capacity, i.e., $$\sum_{j=1}^mw(a_j)\leq W.$$ The objective is like the sum of gaps between chosen objects. Is this problem NP-hard or can we find a polynomial time algorithm? Answer: For every $k$ and $\sigma$, let $A(k,\sigma)$ be the minimum of $\sum_{j=1}^m w(a_j)$ over all subsets $a_1,\ldots,a_m$ of $\{1,\ldots,k\}$ containing $k$ such that $\sum_{j=0}^m (a_{j+1} - a_j - 1) = \sigma$; note that there are $O(n)$ choices for each of $k$ and $\sigma$. You can compute $A(k,\sigma)$ for all $k,\sigma$ using dynamic programming, and use it to solve your problem in polytime.
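A direct implementation of this DP (my own reading of the recurrence: $A(k,\sigma)$ is built by choosing the previously selected object $j < k$, which contributes a gap of $k-j-1$; the final gap out to position $n+1$ is added when reading off the answer, and the empty subset gives a gap sum of $n$):

```python
def min_gap_sum(w, W):
    """Minimize the sum of gaps subject to total weight <= W.

    w: list of weights for objects 1..n (w[i-1] is object i's weight).
    A[k][s] = minimum total weight of a subset of {1..k} that contains k
    and whose gap sum over 0, a_1, ..., a_m = k equals s.
    """
    n = len(w)
    INF = float('inf')
    A = [[INF] * (n + 1) for _ in range(n + 1)]
    for k in range(1, n + 1):
        A[k][k - 1] = w[k - 1]            # subset {k}: the gap before k is k-1
        for j in range(1, k):             # j is the previously chosen object
            gap = k - j - 1
            for s in range(gap, n + 1):
                cand = A[j][s - gap] + w[k - 1]
                if cand < A[k][s]:
                    A[k][s] = cand
    best = n                              # empty subset: gap sum is n
    for k in range(1, n + 1):
        for s in range(n + 1):
            if A[k][s] <= W:              # close with the gap (n + 1) - k - 1
                best = min(best, s + n - k)
    return best

print(min_gap_sum([1, 1, 1], W=2))  # 1: pick any two of the three unit weights
```

This runs in $O(n^3)$ time, which is polynomial as the answer claims; the table has the promised $O(n)$ choices for each of $k$ and $\sigma$.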
{ "domain": "cs.stackexchange", "id": 15850, "tags": "algorithms, np-complete, np-hard" }
How to make physical sense of negative temperatures
Question: In our statistical physics course we learned that entropy is $S=k\ln(\Omega)$. Then, we wrote down the multiplicity of a two-state paramagnet (which I omit for brevity). We plugged the result into $S$, only to find that $S(U\equiv \text{energy})$ has the shape of the upper half of a circle (see figure). Suppose the total energy of the system is $|U_{total}|=1$ and $S=1$ is the maximum entropy of the system. Defining temperature as $$\frac{1}{T}=\frac{\partial S}{\partial U},$$ we are essentially indicating the physical existence of negative and infinite temperatures. I have done much reading about this topic and I understand pretty well where all of this comes from. I understand that such a scenario is only possible when your system has a well-defined maximum energy it can attain, such that if you add any more energy you induce a population inversion, and you can make sense of negative temperatures from, say, analyzing the Boltzmann distribution. I also understand that negative temperatures are the hottest in the sense that any other object in thermal contact with the system will gain heat from this system with negative temperature (I make sense of it by thinking that the system has many dipoles/molecules in its higher energy levels and, regardless of anything, it will give energy to the environment to achieve ground-state status). My Question: What do negative and infinite temperature mean physically? NOT mathematically. How can you explain them and convince me of them using words? What is happening during the short time the system has negative or infinite temperature? An analogy would be nice! For example, in the case of ideal gases and Einstein solids, you can invoke the equipartition of energy and relate temperature to average kinetic energy. This gives temperature a physical sense and meaning. Does such an analogy exist for negative and infinite temperatures?
Answer: A positive temperature system always "wants" to absorb more energy in the sense that giving that system more energy gives it more entropy. However, if two systems of positive but unequal temperature come into contact, we know that energy flows from one to the other. That must mean that one system "wants" the energy more than the other, in the sense that one system's $dS/dE$ is bigger than the other. To be specific, suppose we have two positive temperature systems $A$ and $B$. If $T_A > T_B$, then we expect energy to flow from $A$ to $B$. This happens because the entropy decrease experienced by $A$ as it loses energy is less than the entropy increase experienced by $B$ as it gains energy. Both systems want energy, but $B$'s temperature being lower means that it wants the energy more than $A$ does, and the total entropy of the combined systems is raised if $A$ gives some energy to $B$. Now, if a system's temperature is negative, as in the right half of the plot, then that means that it would rather give up energy than absorb it. Meanwhile, the positive temperature system always wants to absorb more energy! Therefore, if you put a positive temperature system in contact with a negative temperature one, the energy will flow from the negative temperature system to the positive temperature one because that increases the entropy of both systems. As you can now see, a negative temperature system is in a sense hotter than any positive temperature one because the negative temperature system will always give energy to the positive temperature one. What do negative and infinite temperature mean physically? They mean exactly as described above: a negative temperature just means exactly that the system would rather give up energy instead of absorb it. An analogy would be nice! For example, in the case of ideal gases and Einstein solids, you can invoke the equipartition of energy and relate temperature to average kinetic energy. 
This gives temperature a physical sense and meaning. Does such an analogy exist for negative and infinite temperatures? Well, in order to have negative temperature, the system must be such that adding energy does not increase entropy. Such a case is pretty weird because, for example, the number of accessible translational motion states for a moving particle increases with increasing energy. In order to have negative temperature, we must have a system where those translational degrees of freedom are absent, and where the system still has some kind of degrees of freedom that can be excited, but only up to a bounded maximum energy. A magnetic system is the most common example, because there we have the spin degree of freedom, which has only two states.
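The sign flip of $\partial S/\partial U$ can be seen directly from the two-state multiplicity $\Omega = \binom{N}{n_\uparrow}$ (a small numerical illustration of my own; energy is measured in single-dipole flips, so the discrete slope below stands in for $\partial S/\partial U$ in units where $k=1$):

```python
import math

def entropy(N, n_up):
    """S/k = ln(multiplicity) of a two-state paramagnet with N dipoles,
    n_up of which sit in the higher-energy state."""
    return math.log(math.comb(N, n_up))

N = 100
# Energy grows with n_up; watch the sign of the discrete slope dS/dU
for n_up in (10, 49, 50, 51, 90):
    slope = entropy(N, n_up + 1) - entropy(N, n_up)
    print(n_up, round(slope, 3))   # positive, then ~0, then negative
```

Below half filling the slope is positive (ordinary $T>0$), near half filling it crosses zero ($T\to\pm\infty$), and above half filling it is negative: that is exactly the population-inverted, "hotter than any positive temperature" regime described above.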
{ "domain": "physics.stackexchange", "id": 43280, "tags": "thermodynamics, statistical-mechanics, temperature" }
Is any reference frame truly inertial?
Question: I'm aware of the potential duplicate question here. However, that question centres on the Newtonian argument of there being a force, and hence acceleration. My issue, instead, is with the expansion of the universe. An inertial reference frame takes one point to be that which is experiencing 'proper' time and so on, with this frame not accelerating. Even with the lack of acceleration due to external forces, I don't see how this is reconciled with the fact that space itself is expanding. The scale factor $a(t)$ gives the relative expansion of the universe, which surely means any reference frame itself is expanding and changing, preventing it from being truly inertial. I'd expect this to be completely negligible on most scales, but am curious anyway. Is this correct, or am I fundamentally misunderstanding something? Answer: "Inertial" loses its meaning in general relativity. The analysis is much more complex, and "inertial" systems are simply done away with. We have the concept of "locally inertial", but this just means that we can always choose our coordinates such that at a point the metric will be the Minkowski metric, with null first derivatives: this is a general property, and you don't use it to distinguish frames. What is interesting is to understand what happens to a moving particle that follows a geodesic in the FRW metric. For simplicity, let us take a $1+1$ dimensional space, of metric $$ g=-dt^2+a(t)^2dx^2. $$ This is a flat space similar to the FRW one. Let us suppose $a$ is never $0$, to avoid singularities. Suppose we have a massive particle that is moving on a geodesic, parametrized by $x^a(\tau)=(t(\tau),x(\tau))$. Furthermore, suppose an affine parametrization, such that $t'(\tau)^2-a(t)^2x'(\tau)^2=1$ for each value of $\tau$. Equations of motion for an affine parametrization can be found from the Lagrangian $$ L=\frac12(-t'(\tau)^2+a(t(\tau))^2x'(\tau)^2).
$$ To solve this problem, we can use the fact that $x$ is cyclic: the equations of motion will be $$ x'(\tau)=\frac{C}{a(t)^2};\\ t'(\tau)=\sqrt{1+a(t)^2 x'(\tau)^2}=\sqrt{1+\frac{C^2}{a(t(\tau))^2}}. $$ We can choose initial conditions such as $x^a(0)=(0,0)$ and $x'(0)=v t'(0)$, which means $$ t'(0)=\gamma, \quad x'(0)=v\gamma, $$ where $\gamma$ is the usual dilation factor, relative to $t$. We are now ready to solve the problem. You can try seeing what happens by using an explicit law for $a(t)$. Without specifying the form of $a$, we can note that, if $a$ is a monotonically increasing function of $t$, the object that we can interpret as velocity, $x'/t'$, effectively decreases with time. This teaches us an important lesson: a particle that begins moving in the FRW universe with some velocity will lose its velocity. As velocity is not commonly used in GR, it is better to talk in terms of energy: a free particle in FRW experiences energy loss due to the growth of the universe. This is the reason for the cosmological redshift, when you apply this method to light particles (that travel on null geodesics). So, to conclude: in this particular frame we can see the energy of the particle being dissipated, so we can infer that it will be slower, even though there is no force acting on it, according to GR. EDIT: as requested, I include an addendum to improve the answer. The core answer to your question is "There is no notion of inertial frame in GR". The rest of my answer provided an example of a free particle (from the GR point of view) that experiences a change of velocity. This was just to show that the definition "an inertial frame is a frame where free particles move on straight lines" really makes no sense: if by straight lines you mean segments, that's not true. If you mean geodesics, that's true in every frame. Simply put, "inertial frames" do not exist in GR.
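Picking the explicit law $a(t)=t$, a crude Euler integration of these equations (my own illustration, not part of the original answer) shows the peculiar velocity $a\,x'/t'$ decaying as the universe expands:

```python
import math

# Geodesic equations from above, specialized to a(t) = t:
#   x'(tau) = C / a(t)^2,   t'(tau) = sqrt(1 + C^2 / a(t)^2)
t = 1.0
w0 = 0.5                               # initial peculiar velocity a * dx/dt
C = t * w0 / math.sqrt(1.0 - w0 * w0)  # the conserved constant C = a^2 x'

dtau = 1e-4
samples = []
for step in range(200001):
    a = t                              # scale factor a(t) = t
    tdot = math.sqrt(1.0 + C * C / (a * a))
    xdot = C / (a * a)
    if step % 100000 == 0:
        samples.append(a * xdot / tdot)  # peculiar velocity at this moment
    t += tdot * dtau                   # crude Euler step in proper time

print(samples)  # strictly decreasing: the particle slows as a(t) grows
```

In fact the closed form here is $a\,x'/t' = C/\sqrt{a^2+C^2}$, which goes to zero as $a\to\infty$: the free particle asymptotically comes to rest with respect to the comoving coordinates.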
{ "domain": "physics.stackexchange", "id": 38893, "tags": "cosmology, spacetime, reference-frames, universe, inertial-frames" }
Hot and "warm" Jupiters expelling terrestrial planets?
Question: I've seen in documentaries that if a Jupiter-sized planet migrates close to its star, it would remove terrestrial planets along the way. This makes sense and I'm sure most models would predict this, but do we have evidence? Indeed, the Kepler survey (taking Kepler 1-447) found 34 planets > 6 R(Earth) less than 0.1 AU from the star, and the only one where another planet was found was Kepler-424, which also has a Jupiter-sized planet at 0.73 AU. Three "warm" Jupiters (0.1-1.7 AU) out of 47 have Earth-size planets discovered in the same system, and all are "mini-solar systems" with the smaller planets far interior (K68, 90 and 407). If you include "super-earths" (R = 1.25-2.0 R(Earth)), then there are 7 additional stars found with "warm Jupiters" in multiple systems. The super-earths are still all interior to the Jupiters (although K18, 89 and 118 have Jupiters at less than 0.2 AU). My question is: Is this finding proof that migrating Jupiters have thrown out terrestrial planets, or does their existence make the signals of "exterior" planets hard to discern? And what about smaller planets migrating in "behind" the Jupiters? Answer: It's not proof that they've ejected other inner planets, because there are plenty of other explanations for why we haven't observed companions. Steffen et al. (2012) analyzed Kepler data - likely some of the same examples you've looked at - and came up with several explanations besides the no-companions-because-of-planet-planet scattering hypothesis: Inner planets never formed. It's possible that mechanisms in systems with hot Jupiters prevent inner planets from forming; the authors are unclear as to what those mechanisms might be. The planets are too small to see during transits. This is the first explanation based on detection bias. Obviously, Kepler has its limits, and it's entirely possible that the systems have small bodies it simply can't detect. The planets are too low-mass.
It is possible that there may be planets with masses so low that transit-timing variations (TTVs) are too small for Kepler to see them. The planets have high mutual inclinations. In other words, there might be planets with orbits highly inclined with respect to the hot Jupiters' orbits; we only detect the ones at inclinations favorable to detection from our angle. The options the authors suggest rest mainly on experimental biases as opposed to theoretical possibilities. Something I was confused about when I read your question was the assumption that hot Jupiters migrate mainly due to planet-planet scattering. Levison et al. argue that warm Jupiters (i.e. giant planets with semi-major axes of about 1 AU) are more likely to have migrated via planet-planet scattering, because it provides a stopping mechanism via damping, assuming the planetesimal number density scales appropriately. This happens when the energy needed to move planetesimals from their orbits is greater than the energy lost from the change in the planet's orbit. Given that planetesimals cannot survive long in orbits less than about twice the stellar radius, warm Jupiters reach semi-major axes no lower than 0.03 to 0.1 AU (see Murray et al. (1998)). Hot Jupiters, on the other hand, may be driven largely by Type II gas disk migration, where giant planets create a gap in a protoplanetary disk that subsequently brings in material; this then brings the planet closer to the star, eventually leading to a hot Jupiter. Here's a visualization, from Planet Hunters:
{ "domain": "astronomy.stackexchange", "id": 1938, "tags": "kepler, hot-jupiter" }
ROS_INFO a String
Question: Hi there, this is a question with regards to the "Writing publisher subscriber" tutorial with C++. I've realized that for strings we have to add in .c_str(). What is this .c_str() thing? We add it in when ROS_INFO("%s", msg.data.c_str()); why can't we just do ROS_INFO("%s", msg.data); And I've also realized this is specific only to strings. Thanks in advance to whoever helps me out :) edit: I did my own personal reading and found a few resources pretty helpful in understanding this. Check out https://embeddedartistry.com/blog/2017/07/26/stdstring-vs-c-strings/ Enjoy :) Originally posted by sajid1122 on ROS Answers with karma: 19 on 2022-07-19 Post score: 0 Answer: By the C++ documentation: const char* c_str() const noexcept; Get C string equivalent Returns a pointer to an array that contains a null-terminated sequence of characters (i.e., a C-string) representing the current value of the string object. This array includes the same sequence of characters that make up the value of the string object plus an additional terminating null-character ('\0') at the end. Return Value A pointer to the c-string representation of the string object's value. c_str returns a const char* that points to a null-terminated string (i.e. a C-style string). It is useful when you want to pass the "contents" of a std::string to a function that expects to work with a C-style string. Originally posted by ljaniec with karma: 3064 on 2022-07-19 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by sajid1122 on 2022-07-19: Hi, thanks for your answer, very helpful. But I have a follow-up question. Why do we have to pass the string pointer and not the string as a whole? Because, like you mentioned, the return value is a pointer to the c-string. Why do we need to return a pointer? Why can't we just return the actual string itself? Anyway, your answer was very helpful.
Do answer my follow-up question when you get the chance; if not, that's alright too :) Comment by ljaniec on 2022-07-19: I based my answer on similar questions in the past, e.g. https://answers.ros.org/question/49042/cannot-ouput-string-service-via-ros_info/ A string type in a ROS message/service maps to a std::string in C++ (see the msg package for more details). std::strings are not printable with something like printf, unless you get access to the underlying const char* data, which is what printf's "%s" format string is expecting. You can get access to the C-string via the c_str() member on std::string. Comment by sajid1122 on 2022-07-19: Yup, thanks for sharing that link. I think that answers my question. Thanks a lot mate.
{ "domain": "robotics.stackexchange", "id": 37856, "tags": "ros, ros-tutorials" }
Complexity Theory - Why can't you use diagonalization to separate classes A and B when an oracle O exists under which A=B?
Question: In a recent lecture the professor stated that given two complexity classes A and B, and given the existence of an oracle O such that $$A^{\mathcal{O}}=B^{\mathcal{O}}$$ (as I understand it, meaning that with the oracle O a problem in A can be reduced to a problem in B), it can be shown that classes A and B cannot be separated using a diagonalization argument. Unfortunately he did not explain much further... Can someone give the outline/thought process of the general proof (informally)? I think I understand why diagonalization fails in specific time hierarchy proof constructions (due to the existence of a Cook reduction) but I cannot see how to generalize this in a satisfactory way. Thank you. Answer: In order to explain this, you need to understand what is meant by "diagonalization argument". In this context, we mean a proof that only treats Turing machines as black boxes, i.e. only uses the fact that we can encode Turing machines as strings and treat them as inputs to other machines. This gives rise to the possibility of simulation: a machine $M$ can simulate some machine $M'$ without paying too much in space/time. You can now observe that such proofs immediately generalize to the oracle model (encoding and simulation are possible), so if you could write such a proof for $A\neq B$, it would also work in the oracle model, showing $A^{\mathcal{O}}\neq B^{\mathcal{O}}$. For further reading you could go to "Computational Complexity: A Modern Approach", by Arora & Barak. They have a section on limits of diagonalization.
{ "domain": "cs.stackexchange", "id": 7410, "tags": "complexity-theory, oracle-machines" }
Script to get available IP automatically
Question: Recently I made a script to get an available IP automatically. I was wondering if anyone could give me tips on how to make this code look better, preferably as a class with OOP. I'm going to show the code to my boss, but I want it to look clean and nice before I do so, and hopefully learn a thing or two about writing better code. Code: import requests from orionsdk import SwisClient import getpass # User information npm_server = 'mw-solarwinds.yy.dd' username = 'jnk' password = getpass.getpass() server_navn = input('Skriv inn DNS navn: ') dns_ip = '10.96.17.4' # 10.96.17.5 = Felles dns_sone = 'yy.dd' verify = False if not verify: from requests.packages.urllib3.exceptions import InsecureRequestWarning requests.packages.urllib3.disable_warnings(InsecureRequestWarning) swis = SwisClient(npm_server, username, password) # Runs against IPAM with the user credentials subnets = { 'ka-windows': ['10.112.12.0', '24'], 'ka-linux': ['10.112.10.0', '24'], 'ka-exa-mgmt': ['10.112.26.0', '28'] } print("Tilgjengelige subnets: ") for i in subnets: print(i) print("--------------") found = False while not found: inp = input("Skriv in Subnet: ") if inp in subnets: ''' Find a free IP address in the subnet ''' sub_ip = subnets[inp][0] sub_cdir = subnets[inp][1] ipaddr = swis.invoke('IPAM.SubnetManagement', 'GetFirstAvailableIp', sub_ip, sub_cdir) ''' Set DNS > IP ''' dns = swis.invoke('IPAM.IPAddressManagement', 'AddDnsARecord', server_navn, ipaddr, dns_ip, dns_sone) print("IP: {} > DNS: {}".format(ipaddr, server_navn)) found = True else: print("Det er ikke et subnet, velg en fra listen.") Answer: Not sure if it's any better as a class...
#!/usr/bin/env python import getpass import requests from orionsdk import SwisClient from requests.packages.urllib3.exceptions import InsecureRequestWarning class Subnet_Explorer(dict): def __init__(self, npm_server, auth, dns, verify, **kwargs): super(Subnet_Explorer, self).__init__(**kwargs) self.update( npm_server = npm_server, auth = auth, dns = dns, server_navn = input('Skriv inn DNS navn: '), swis = SwisClient(npm_server, auth['username'], auth['password']) ) if verify == False: requests.packages.urllib3.disable_warnings(InsecureRequestWarning) def directed_exploration(self, subnets): """ Yields tuple of IP and DNS addresses from select subnets """ unexplored_subnets = list(subnets.keys())  # list() so remove() also works on Python 3 while True: print("Unexplored Subnets: {unexplored_subnets}".format( unexplored_subnets = unexplored_subnets)) inp = input("Skriv in Subnet: ") if not unexplored_subnets or inp not in unexplored_subnets: print("Det er ikke et subnet, velg en fra listen.") break unexplored_subnets.remove(inp) ipaddr = self['swis'].invoke('IPAM.SubnetManagement', 'GetFirstAvailableIp', subnets[inp][0], subnets[inp][1]) dns = self['swis'].invoke('IPAM.IPAddressManagement', 'AddDnsARecord', self['server_navn'], ipaddr, self['dns']['ip'], self['dns']['sone']) yield ipaddr, self['server_navn'] if __name__ == '__main__': """ Running as a script within this block, eg. someone ran; python script_name.py --args to get here, usually.
""" auth = { 'username': 'jnk', 'password': getpass.getpass(), } dns = { 'ip': '10.96.17.4', 'sone': 'yy.dd', } subnet_explorer = Subnet_Explorer( npm_server = 'mw-solarwinds.yy.dd', auth = auth, dns = dns, verify = False) exploration = subnet_explorer.directed_exploration( subnets = { 'ka-windows': ['10.112.12.0', '24'], 'ka-linux': ['10.112.10.0', '24'], 'ka-exa-mgmt': ['10.112.26.0', '28'] }) print("--------------") for ipaddr, server_navn in exploration: print("IP: {ipaddr} > DNS: {server_navn}".format( ipaddr = ipaddr, server_navn = server_navn)) print("--------------") ... though perhaps ya see some things ya like. Regardless ya may want to consider is adding some argparse stuff after the if __name__ == '__main__': line. And probably best to use anything from above modifications with care as I may have made things a bit more messy by turning towards classes. Is there a reason for setting dns = swis.invoke(...) when it's not being used for anything?
{ "domain": "codereview.stackexchange", "id": 34552, "tags": "python, networking, client, ip-address" }
Why can you not create a LED equivalent by illuminating a colored plastic casing?
Question: I would first like to apologize if this is a dumb question. I understand the physics of color sufficiently well. You have an incoming photon that intercepts an electron on the atom, the electron gets excited to some energy level for a few femtoseconds, and then goes back to the ground state, expelling another photon during the transition. The wavelength of the emitted photon is what you see. The point of this inquiry is to understand what is the precise physics that prevents us from using plastic as an illumination source. Answer: You have an incoming photon that intercepts an electron on the atom, the electron gets excited to some energy level for a few femtoseconds, and then goes back to the ground state, expelling another photon during the transition. This is only one way that objects have color. It doesn't cover, for example, fluorescent materials. In this scenario the emitted photon has very nearly the same frequency (and therefore color) as the original exciting photon. If the object is "red" then it means the object behaves this way when it intercepts red photons, but when it intercepts other colored photons it simply absorbs them and doesn't re-emit (the electron loses energy thermally and heats the material rather than producing a new photon). So if we have a red plastic, and we illuminate it with white light, we do get red light reflected (or transmitted). But it means we "wasted" all the other energy in the white light that was being carried by green, yellow, blue, etc., photons. So this process will be fairly inefficient. If we use an LED, we only produce (for example) red photons to begin with and don't waste energy producing yellow, green, and blue photons to be thrown away. the precise physics that prevents us from using plastic as an illumination source. Physics doesn't prevent it. Back before LEDs were cheap, it was fairly common to produce red light by putting a red-dyed glass or plastic layer in front of a white lamp.
It's even still done today for some applications. For example, you can still buy gels for stage lighting that have exactly this purpose. Using LEDs is just cheaper and more efficient for many applications, given today's technology. (There are other minor differences like it's hard to find dyes that are as narrow-band as LEDs, so the light from a filtered white lamp will typically have a wider spectrum than from an LED)
{ "domain": "physics.stackexchange", "id": 100532, "tags": "optics, solid-state-physics, semiconductor-physics, optical-materials" }
Java implementation of the caesar-cipher
Question: I've written a little program that encrypts text using the Caesar cipher. It also contains a little GUI, created using Swing. Here's the full code: import javax.swing.JOptionPane; import javax.swing.JScrollPane; import javax.swing.JTextArea; import java.awt.Dimension; public class caesar { public static void main(String[] args) { String field, text; field = JOptionPane.showInputDialog("Please enter text:"); field = field.replaceAll("[^a-zA-Z]+", ""); field = field.toUpperCase(); int shift; String shift_String = JOptionPane.showInputDialog("Please enter shift to the right:"); shift = Integer.parseInt(shift_String); String d = JOptionPane.showInputDialog("Encrypt (1) or decrypt (2):"); int decision = Integer.parseInt(d); String out; if(decision==1) { out = encrypt(field, shift); JTextArea msg = new JTextArea(out); msg.setLineWrap(true); msg.setWrapStyleWord(true); JScrollPane scrollPane = new JScrollPane(msg); scrollPane.setPreferredSize(new Dimension(300,300)); JOptionPane.showMessageDialog(null, scrollPane); } if(decision==2) { out = decrypt(field, shift); JTextArea msg = new JTextArea(out); msg.setLineWrap(true); msg.setWrapStyleWord(true); JScrollPane scrollPane = new JScrollPane(msg); scrollPane.setPreferredSize(new Dimension(300,300)); JOptionPane.showMessageDialog(null, scrollPane); } } //Encryption public static String encrypt(String text, int n) { int x = 0; int y = 0; String out = ""; //Empty string for result. while (x < text.length()) { if (text.charAt(x) > 64 && text.charAt(x) < 91) { if (text.charAt(x) + n > 90) { y = 26; } out = out + (char) (text.charAt(x) + n - y); } else { out = out + text.charAt(x); } x++; y = 0; } return out; } //Decryption public static String decrypt(String text, int n) { int x = 0; int y = 0; String out = ""; //Empty string for result.
while (x < text.length()) { if (text.charAt(x) > 64 && text.charAt(x) < 91) { if (text.charAt(x)-n < 65) { y = 26; } out = out + (char) (text.charAt(x) - n + y); } else { out = out + text.charAt(x); } x++; y = 0; } return out; } } My question now is: How can I improve this code? I mean, it does what it is supposed to do, but it's not really great code. Answer: In my opinion, the main problem with your code is the duplication; here is my advice. 1) Put the UI code outside of the conditions. The only issue there: if the choice is invalid, you can either show a default string, or throw an exception. if (decision == 1) { out = encrypt(field, shift); } else if (decision == 2) { out = decrypt(field, shift); } else { out = "Invalid choice!"; } JTextArea msg = new JTextArea(out); msg.setLineWrap(true); msg.setWrapStyleWord(true); JScrollPane scrollPane = new JScrollPane(msg); scrollPane.setPreferredSize(new Dimension(300, 300)); JOptionPane.showMessageDialog(null, scrollPane); Or if (decision == 1) { out = encrypt(field, shift); } else if (decision == 2) { out = decrypt(field, shift); } else { throw new IllegalStateException("Invalid choice!"); } JTextArea msg = new JTextArea(out); msg.setLineWrap(true); msg.setWrapStyleWord(true); JScrollPane scrollPane = new JScrollPane(msg); scrollPane.setPreferredSize(new Dimension(300, 300)); JOptionPane.showMessageDialog(null, scrollPane); 2) In encrypt & decrypt, to create the string containing the result, I suggest that you use java.lang.StringBuilder instead of concatenating the String; you will gain some performance. public static String decrypt(String text, int n) { int x = 0; int y = 0; StringBuilder out = new StringBuilder(); //Empty string for result.
while (x < text.length()) { if (text.charAt(x) > 64 && text.charAt(x) < 91) { if (text.charAt(x) - n < 65) { y = 26; } out.append((char) (text.charAt(x) - n + y)); } else { out.append(text.charAt(x)); } x++; y = 0; } return out.toString(); } (Note the (char) cast when appending: without it, append(int) would insert the numeric value rather than the character.) 3) In encrypt & decrypt, extract text.charAt(x) into a variable, to remove the duplicates. public static String decrypt(String text, int n) { int x = 0; int y = 0; StringBuilder out = new StringBuilder(); //Empty string for result. while (x < text.length()) { final char currentChar = text.charAt(x); if (currentChar > 64 && currentChar < 91) { if (currentChar - n < 65) { y = 26; } out.append((char) (currentChar - n + y)); } else { out.append(currentChar); } x++; y = 0; } return out.toString(); } 4) The encrypt and decrypt methods are pretty similar, so you can merge them if you want. //Encryption public static String encrypt(String text, int n) { return operation(text, n, true); } //Decryption public static String decrypt(String text, int n) { return operation(text, n, false); } public static String operation(String text, int n, boolean isEncryption) { int x = 0; int y = 0; StringBuilder out = new StringBuilder(); //Empty string for result. while (x < text.length()) { final char currentChar = text.charAt(x); if (currentChar > 64 && currentChar < 91) { if (isEncryption ? (currentChar + n > 90) : (currentChar - n < 65)) { y = 26; } out.append((char) (isEncryption ? (currentChar + n - y) : (currentChar - n + y))); } else { out.append(currentChar); } x++; y = 0; } return out.toString(); }
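For comparison, the wrap-around handled by the y = 26 adjustment above can also be expressed with modular arithmetic. A minimal sketch of the same A-Z shifting logic (written in Python for brevity, not the Java under review):

```python
# Same idea as the Java above: shift only A-Z, wrap with mod 26,
# and pass every other character through unchanged.
def caesar(text, n, encrypt=True):
    shift = n if encrypt else -n
    out = []
    for ch in text:
        if 'A' <= ch <= 'Z':  # same range as charAt(x) > 64 && charAt(x) < 91
            out.append(chr((ord(ch) - 65 + shift) % 26 + 65))
        else:
            out.append(ch)
    return ''.join(out)
```

Because % here always yields a non-negative result, one expression covers both encryption and decryption, with no separate wrap checks per direction.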
{ "domain": "codereview.stackexchange", "id": 36818, "tags": "java, caesar-cipher" }
What helpful solution does the Halting Problem give to computing?
Question: What problem does the halting problem solve in computing, whether theoretical or practical? It is very easy to debug code which loops forever: just signal the debugger to break if the program has been running for too long. What purpose / good is the halting problem? Why was Turing praised for it? Answer: The halting problem is an example of a problem that a computer cannot solve. On the face of it, it might seem that computers can do anything, given enough resources and time. However, Turing showed that this intuition is false. Later on this was made more tangible by showing that a computer cannot decide whether a given Diophantine equation has solutions in the positive integers. This is similar to Gödel's incompleteness theorem, which shows that the axiomatic method doesn't provide an answer to all possible questions. In fact, no finite list of axioms is enough to determine the truth of every statement regarding the natural numbers. Later on, a natural example emerged in the realm of set theory, namely the continuum hypothesis.
{ "domain": "cs.stackexchange", "id": 527, "tags": "computability, terminology, halting-problem" }
robot_localization delayed yaw response
Question: Our robot has wheel odometry and an IMU (gyro plus accelerometers) and we are migrating from robot_pose_ekf to robot_localization. We expect to add additional IMUs as well as GPS once this base configuration is working. I am sending test data to robot_localization as follows: vx = 0.1 imu.angular_velocity = 0.05 for the first 30 seconds, then -0.1 for 30 sec and then 0.2 accelerometers are all set to 0 All the other inputs are excluded using "false" entries in the sensor selection matrix. We do not specify an absolute yaw value because we are not directly measuring one. robot_localization diagnostics notices this and outputs a warning that Yaw is not being sent and that can result in unbounded errors. The output from robot_localization agrees with the input for x linear twist (0.1) and the z angular twist initially agrees with imu.angular_velocity, but when imu.angular_velocity changes (at 30 seconds), the z angular twist output from robot_localization persists as the original value. This remains the case at 60 seconds when the input changes again: the output is still around the original input value. When first setting up robot_localization, I did not change the covariance matrix and, in that case, the z angular twist output would stay near 0 for the first 5 seconds or so and then jump to equal the imu.angular_velocity. When I modified the covariance matrix, increasing the value for vYaw, then the output jumps much sooner to the input value. However, when the input value changes, the output does not. I would attach the robot_localization.launch file and a bag file, but ros.answers does not support those file types for upload, only image files. two_d_mode is set true. differential and relative modes are set false.
Here are the frame settings and sensor selection matrices: <!-- Defaults to "map" if unspecified --> <param name="map_frame" value="map"/> <!-- Defaults to "odom" if unspecified --> <param name="odom_frame" value="odometry/filtered"/> <!-- Defaults to "base_link" if unspecified --> <param name="base_link_frame" value="base_footprint"/> <!-- Defaults to the value of "odom_frame" if unspecified --> <param name="world_frame" value="odometry/filtered"/> <!-- we are going to only use vx because that is all the wheel encoders actually report --> <rosparam param="odom0_config">[false, false, false, false, false, false, true, false, false, false, false, false, false, false, false]</rosparam> <!-- we are going to only use vyaw, ax, ay, az because the robot does not roll significantly in the other axes (and two_d_mode is set to true) --> <rosparam param="imu0_config">[false, false, false, false, false, false, false, false, false, false, false, true, true, true, true]</rosparam> Originally posted by dan on ROS Answers with karma: 875 on 2015-08-10 Post score: 1 Original comments Comment by Tom Moore on 2015-08-11: Can you use Google Drive or DropBox to post a bag file? It would also be useful to post a sample IMU message. Thanks! Comment by dan on 2015-08-11: Here is a bag file and an example IMU message from the original issue, as well as the robot_localization.launch file: robot_localization_link Comment by dan on 2015-08-11: I found that if I set the vz covariance to a high value the output tracks better (see source file in dropbox). However, it does not make sense to continuously be setting that and also, I thought that one advantage of the UKF was that it tracks the covariance based on the statistics of the data. Comment by Tom Moore on 2015-08-11: The measurement covariances will always be critical. Usually, when convergence is slow, it means your initial estimate covariance for that variable is too low, or the measurement covariance for that variable is too high. 
I don't have DropBox access right now, so I'll check it out later today. Comment by dan on 2015-08-11: The original issue was that I was not setting an initial angular vel covariance (so they were all 0), as I thought UKF would build an estimate quickly. Setting a large initial covariance helped, but there is still more lag in the ang vel output than I would like. Is there a way to tune that? Comment by dan on 2015-08-11: replaced the bag file, the robot_localization.launch file and the source code file in the dropbox folder with updated ones that show the updated results where the robot_localization output of z angular velocity changes appropriately, but fairly slowly. Answer: Hi Dan, Here's an example of one of your IMU messages: header: seq: 241 stamp: secs: 1439336777 nsecs: 766463406 frame_id: Imu_link orientation: x: 0.0 y: 0.0 z: 0.0 w: 1.0 orientation_covariance: [1000000.0, 0.0, 0.0, 0.0, 1000000.0, 0.0, 0.0, 0.0, 1000000.0] angular_velocity: x: 0.0 y: 0.0 z: 0.05 angular_velocity_covariance: [1e-06, 0.0, 0.0, 0.0, 1e-06, 0.0, 0.0, 0.0, 1000.0] linear_acceleration: x: 0.0 y: 0.0 z: 0.0 linear_acceleration_covariance: [0.001, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.001] Your issue, at least in part, is that your yaw velocity covariance (last value in angular_velocity_covariance) is massive. You're giving the filter a yaw velocity with a variance of 1000. Apologies if you're aware of this, but just so we're on the same page, there are two covariance matrices that are driving the behavior you're seeing. The first is the initial_estimate_covariance in the launch file. It specifies the initial covariance matrix for the entire state estimate. In it, you have the yaw velocity variance set to 1e-3. The other matrix of consequence is the measurement covariance matrix, which is specified in the IMU message itself. 
In it, you have a huge variance for your yaw velocity, so when you take in your first measurement, the filter trusts its own error estimate for yaw velocity (with value 0 and variance 1e-3) much more than the value from your measurement (with value 0.05 and variance 1000). To summarize, to increase the speed of convergence, do these three things: Change the initial_estimate_covariance matrix so that the value at (12, 12) is something large, e.g., 100. Change your IMU message so that the angular_velocity_covariance is much smaller (e.g., 0.02). If your IMU has any kind of documentation, you can probably get a reasonable value from there. Ditch all of the Mahalanobis distance settings in the advanced section of the launch file. I plan to modify that the template launch file so that all the advanced parameters are commented out by default. Originally posted by Tom Moore with karma: 13689 on 2015-08-12 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by dan on 2015-08-12: Works nicely! I was confused about the relationships of the various covariance matrices. Should I be sending the IMU covariance values with every update even though they don't change? Also, how do I estimate those values? I am using the L3GD20H Comment by Tom Moore on 2015-08-12: Every IMU message is handled independently, so you must fill out that field in every message, even if the value is constant. A brief look through your document didn't provide any answers re: accuracy. You may have to poke around the web and look for error rates. When in doubt, over-estimate. Comment by dan on 2015-08-12: OK, thanks for the help. The datasheet says at a 50Hz output rate, the gyro has a "rate noise density" of 0.11 degrees per sec / sqrt(Hz). I will look further into what that spec means. Thanks for your help on this. I am posting another question about acceleration, Hope that is OK :/
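The comments above ask how to turn the datasheet's quoted "rate noise density of 0.11 degrees per sec / sqrt(Hz)" at a 50 Hz output rate into a usable covariance entry. A back-of-the-envelope sketch of that conversion (the 50 Hz bandwidth assumption and the resulting number are rough estimates of mine, not calibrated figures; when in doubt, over-estimate, as the answer suggests):

```python
import math

# Rough conversion: sigma = noise_density * sqrt(bandwidth), variance = sigma^2.
rate_noise_density_dps = 0.11   # deg/s per sqrt(Hz), from the L3GD20H datasheet
bandwidth_hz = 50.0             # assumed: the 50 Hz output rate quoted above

sigma_rad = math.radians(rate_noise_density_dps) * math.sqrt(bandwidth_hz)
yaw_rate_variance = sigma_rad ** 2   # roughly 1.8e-4 (rad/s)^2

# Row-major 3x3 matrix layout as in the IMU message shown above; only the
# yaw (z,z) term at index (2,2) carries the computed value here.
angular_velocity_covariance = [0.0] * 9
angular_velocity_covariance[8] = yaw_rate_variance
```

This would then be filled into every IMU message, since, as the answer notes, each message is handled independently.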
{ "domain": "robotics.stackexchange", "id": 22418, "tags": "ros, navigation, gyro, robot-localization, yaw" }
Are quantum computers at least buildable?
Question: Assuming a highly unrealistic scenario where you basically have unlimited money to burn. Is it possible to build a size-scalable quantum computer? My premise is that I often read that people have made a single-qubit transistor etc... but the system needs to be cooled with liquid helium to remain operable and so is not really practical. What if someone had the cash to buy cold liquid helium for arbitrarily large systems. Do we then have at least size-scalable systems that could in theory become large full-size quantum computers if someone paid for the coolant? Answer: The problem isn't coolant. MRIs use liquid helium, for instance, and they're clearly practical. The problems are things like coherence times (making qubits last a while without errors), performing gates quickly, bringing many qubits together so you can do large operations, etc. There are currently a number of candidates for what a scalable quantum computing architecture will look like--superconducting qubits, trapped ions, Rydberg systems, topological qubits, linear optical elements. Different architectures have different problems: ions have amazing coherence times, but the gates are not very fast. Superconducting qubits are potentially easily scalable (you can circuit-print them) but their coherence times are currently very bad. People with "highly unrealistic" money (Google, Microsoft, Lockheed, the US government) are currently funding research to make these architectures more scalable. We are not yet at the point where money itself is the only lacking material--even if you had enough money to buy all possible materials, there are still years of focused research efforts between the current state of the art and a useful quantum computer. EDIT: Comments and other answers have mentioned D-Wave. I'll just note that they are...controversial.
It's not widely accepted that this represents a legitimate quantum computer, or a type of quantum computer which provides any advantage relative to classical computation.
{ "domain": "physics.stackexchange", "id": 25702, "tags": "superconductivity, quantum-computer, cooling" }
Closing kinematic loop with trac_ik
Question: Hello List, Creating a simulation of a rower in a boat using ROS, Gazebo and trac_ik. There are three kinematic loops in the robot's description. This cannot be described in URDF, so they should be closed when generating the SDF from it. I use ik_fast to find the correct joint values for this. I created stubs on both sides of the still-broken arms, and ask trac_ik how to close them. The project can be found on my GitHub page. The code resides in the boot3_description directory. Please see this screenshot of the rower in rviz. The loop with the left arm is closed, but the right arm not yet. Also note the stub links that have to be placed on top of each other. I created a program ik_1.py that uses the joint state controller and trac_ik to calculate the correct values of the arm joints to close the loop. The new initial joint values are written in param.xacro to be used later. Now my question. The program works perfectly when closing the left arm, but does NOT yield the proper values for the right arm. trac_ik finds a solution, but when the new values are used the right arm is not connected. I can't find any difference in the two chains, apart, of course, from the different values to get the other arm. How can trac_ik find a solution that is clearly wrong? Hopefully someone can shed some light on this. UPDATE: I was confused, I use trac_ik, not ik_fast. Thanks in advance, Sietse Originally posted by Sietse on ROS Answers with karma: 168 on 2019-03-07 Post score: 0 Original comments Comment by gvdhoorn on 2019-03-07: You keep referring to ik_fast, but in your script I only see trac_ik. Comment by Sietse on 2019-03-07: Oops, you are right. Thanks, Updated. Answer: Answering my own question. There were some differences between the arms in orientation and in how the axes were set, which got me confused. That is now corrected, and using the ik_1.py program I can now close the kinematic loops.
The use case is that, during experiments, I need to change the model slightly: a larger rower, shorter oars, etc. After that the kinematic loops are broken and should be closed again. Sorry for the confusion. Originally posted by Sietse with karma: 168 on 2019-03-14 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 32608, "tags": "ros, ros-melodic, trac-ik, arm-kinematics" }
Decidability with an exponential number of solutions
Question: I am trying to understand this. If a problem has an exponential number of candidate solutions, such as 2^2^n, is it decidable? To my understanding, as long as a solution is verifiable, no matter how many candidate solutions there are, it's decidable. Thanks for clarifying. Answer: If a problem has a finite number of possible solutions, then you can enumerate all the potential solutions, and if one of them is correct, halt. Otherwise, halt and reject after checking the last solution. It does not matter how long this process takes, it may be exponential, or superexponential as in your example of $2^{2^n}$, the only thing that matters is that there is a way to enumerate all candidate solutions.
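The answer's enumerate-and-check argument can be made concrete. A toy sketch using subset-sum as a stand-in problem (my choice of example, not from the question): the decider tries all 2^n candidate subsets, so it is exponentially slow but always halts, which is all that decidability requires.

```python
from itertools import combinations

def subset_sum_decider(nums, target):
    """Toy decider: enumerate every one of the 2^n candidate subsets and
    verify each. Exponentially slow, but it halts on every input."""
    for r in range(len(nums) + 1):
        for candidate in combinations(nums, r):
            if sum(candidate) == target:
                return True   # accept: a verified solution was found
    return False              # reject: every candidate was checked
```

Running time plays no role in the decidability argument; only the guarantee of halting does.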
{ "domain": "cs.stackexchange", "id": 6641, "tags": "undecidability" }
Can electricity pass through a flat battery?
Question: Let’s say there are 4 batteries in series in a circuit connected to a bulb. One of the batteries is flat. Can the bulb still light up? I’m so confused now. Answer: A battery (flat or not) is still a conductor and the circuit will remain complete. It's just that when the battery runs out, it won't be able to provide any extra voltage to the circuit. The bulb will still glow but the brightness will change (depending on whether the batteries are in a series or parallel configuration).
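To put rough numbers on the answer (the 1.5 V cells, the 10 Ω bulb resistance, and the zero-volt model of a flat cell are all simplifying assumptions of mine): with one flat cell the series EMF drops from 6 V to 4.5 V, so the bulb dims but still lights.

```python
def bulb_power(n_good_cells, cell_emf=1.5, bulb_resistance=10.0):
    """Power dissipated in the bulb with n_good_cells working cells in series.
    A flat cell is modeled as a plain conductor contributing 0 V."""
    total_emf = n_good_cells * cell_emf
    return total_emf ** 2 / bulb_resistance  # P = V^2 / R

full = bulb_power(4)      # 6.0 V across 10 ohms: 3.6 W
one_flat = bulb_power(3)  # 4.5 V across 10 ohms: about 2 W, dimmer but lit
```

A real flat cell also adds some internal resistance, which would dim the bulb a little further; the sketch ignores that to keep the point simple.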
{ "domain": "physics.stackexchange", "id": 69184, "tags": "electric-circuits, electric-current, electrical-resistance, batteries, electrochemistry" }
Which spider is this? Is it dangerous?
Question: So today while clearing the trash out of my room I found this orange spider on an abandoned piece of foam. It is carrying a weird ball with it wherever it goes. What is this? And to which genus does this spider belong? For better contrast I placed it on paper. EDIT-1 Size: Roughly 1 cm in length Location: Uttar Pradesh, India. Coordinates- https://maps.app.goo.gl/svXoxaH4Z2lK2rFH2 Temperature: 34°C and humid. EDIT-2 Adding a few more pics Answer: It is clearly a Theridiid (cobweb) spider, but it doesn't look like the Parasteatoda tepidariorum I'm familiar with here in the US. The egg sac it's holding indicates it's a mature female, and the regular pattern on the abdomen is both distinctive and unlike the variable, blotchy pattern of tepidariorum. Unfortunately, not being familiar with Eurasian spiders likely to be found in your area, I can't be of much help as to what it might be, beyond limiting it to the Theridiids. Oh, yes - it's pretty clearly not one of the Widows (Latrodectus spp.), and since those are the only medically significant Theridiids we know, it's not dangerous.
{ "domain": "biology.stackexchange", "id": 8676, "tags": "species-identification, arachnology" }
Why do things stop accelerating?
Question: Before $t=0$: the object is not moving. At $t=0$: I push the object with sufficient force to overcome static friction. After $t=0$: the object accelerates; the resultant force is greater than $0$, since dynamic friction is less than the push. So why does the object stop accelerating after, say, $t=2$ (either starting to decelerate or reaching constant velocity)? $F_f = \mu N$, so friction does not depend on the velocity of the object; as the object accelerates, the push force and frictional force stay constant, so the resultant force does too. Answer: Forces are interactions between two objects. You push on an object, and the object pushes back with an equal but opposite force. When the interaction stops, the force disappears. No exception to this has ever been observed. That is why it is called a law. The object never stops accelerating as long as you keep pushing with a force greater than the friction force. If you decrease your pushing force to be equal to the friction force it will move at constant velocity. If your pushing force becomes less than the friction force, the object decelerates. These are simple statements about how Newton's First and Second Laws apply.
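The answer's claim can be checked with a short numerical sketch (the push, friction coefficient, and mass values are illustrative assumptions of mine): with a constant push larger than kinetic friction, velocity simply keeps growing.

```python
# Integrate v' = (push - mu_k*m*g)/m for a sliding object under constant push.
def simulate_velocity(push, mu_k, m=1.0, g=9.81, dt=0.01, steps=500):
    friction = mu_k * m * g          # kinetic friction, independent of v
    v, history = 0.0, []
    for _ in range(steps):
        a = (push - friction) / m    # constant net force => constant a
        v += a * dt
        history.append(v)
    return history

v = simulate_velocity(push=5.0, mu_k=0.3)   # 5 N push > ~2.94 N friction
print(v[-1] > v[len(v) // 2] > v[0] > 0)    # True: acceleration never stops
```

There is no plateau: the velocity increases linearly for as long as the push exceeds friction, exactly as the answer states.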
{ "domain": "physics.stackexchange", "id": 72774, "tags": "newtonian-mechanics, forces, acceleration, friction, velocity" }
Would two unmoving electric charges repel each other?
Question: I am reading that the electromagnetic force is only transmitted when 2 objects with charge move relative to each other. In that case, would 2 electrons that are not moving repel? Would 2 unmoving protons attract? How does static electricity work if motion is required? Answer: I am reading that the electromagnetic force is only transmitted when 2 objects with charge move relative to each other. Be careful with the things named electromagnetic. What we have are electric interactions, magnetic interactions and electromagnetic induction processes. Furthermore we have electromagnetic radiation. Electric interactions are those between charges. Electrons repel each other due to their electric fields. The same goes for protons, positrons and anti-protons. Any macroscopic electric interaction happens due to charge separation. Magnetic interactions are those between the magnetic dipoles of subatomic particles. The overall magnetic dipole of atoms and molecules may be neutral or have some value, depending on the position and alignment of the electrons, protons and neutrons of the atoms. Atoms may form domains, where the magnetic dipoles are aligned. Materials with aligned dipoles are permanent magnets. Electromagnetic interactions are those involving the three quantities F, v and B: a charge moving with v, influenced by a magnetic field B, gets deflected sideways with a force F (electric drive); moving (with force) a wire with its free electrons across a magnetic field makes the electrons start to flow with some velocity (electric generator); accelerating electrons in a coil induces a magnetic field. Since there is no other model, the interactions of electric and magnetic fields in the three cases mentioned above are mediated by virtual photons. At the PSE there are numerous questions and answers about virtual photons, which are needed for explanation but can never be considered real. One note about photons. They are indeed of electromagnetic nature. 
Emitted from subatomic particles with their electric and magnetic field components, they have both oscillating electric and magnetic field components, and this is where their name, electromagnetic radiation, comes from. In that case, would 2 electrons that are not moving repel? Would 2 unmoving protons attract? Exactly so in the case of the electrons. And two protons also repel each other. The electric field is an intrinsic property (existing independent of surrounding circumstances). Any two charged particles - moving or not - will attract or repel each other. How does static electricity work if motion is required? The charges have their fields around them, and these fields work a bit like springs, attracting opposite or repelling like charges.
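As a numeric illustration of this point (my own addition; the 1 nm separation is an arbitrary assumption), two completely stationary electrons already exert a Coulomb force on each other:

```python
# Coulomb's law for two static electrons: no relative motion required.
K = 8.9875517923e9          # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602176634e-19  # elementary charge, C

def coulomb_force(q1, q2, r):
    """Signed force magnitude; a positive charge product means repulsion."""
    return K * q1 * q2 / r**2

# two electrons 1 nm apart (separation chosen for illustration)
f = coulomb_force(-E_CHARGE, -E_CHARGE, 1e-9)
print(f"force: {f:.3e} N (positive product of charges -> repulsive)")
```

The force is purely a function of the charges and their separation; velocity appears nowhere in the electrostatic interaction.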
{ "domain": "physics.stackexchange", "id": 69704, "tags": "electromagnetism, electrostatics, electric-fields, charge, coulombs-law" }
Where does $E_8$ come from in M-Theory?
Question: Where does the $E_8$ symmetry come from in M-Theory? For example, when you compactify one of the dimensions on a line you get the $E_8\times E_8$ heterotic string theory. Or if you compactify 11D Supergravity leaving just 3 dimensions, the theory has $E_8$ symmetry. Is it a mystery? Like maybe M-Theory has some $E_{11}$ symmetry that nobody knows about? Or is there a simple explanation? (For example heterotic string theory has a "simple" explanation that 16 of the 26 dimensions of the left-handed bosonic modes are compactified on an $E_8\times E_8$ torus lattice.) Answer: Compactifying M-Theory on an interval (a line segment) is usually thought of as a $\mathbb{Z}_{2}$ quotient acting on 'ordinary' M-Theory on $\mathbb{R}^{1,9} \times S^1$. The $\mathbb{Z}_2$ action on $S^{1}$ is what produces an interval with two fixed points. On the two fixed points you have two 10-dimensional boundaries (which so far is just because the fixed points have something like an $\mathbb{R}^{1,9}$ fibered over them). The distance between the two boundaries equals half the circumference of the original $S^1$. But you can do more. The low-energy effective theory of M-Theory is 11-dimensional supergravity. Compactifying this as above leads to a bulk supergravity theory which far away from the boundaries "looks" pretty much like 11-dimensional supergravity but with a modification of the supersymmetry transformation laws and the Bianchi identity, due to the presence of the boundaries. On the (even-dimensional) boundaries, now one can have chiral fermions. So gauge and gravitational anomalies are possible (gauge anomalies occur in even spacetime dimensions, and gravitational anomalies arise in $d = 4k+2$ spacetime dimensions). In fact, requiring that the theory be (1) supersymmetric and (2) anomaly-free leads us to two possible gauge groups for the gauge bosons in the theory: $SO(32)$ or $E_8 \times E_8$.
The argument is roughly similar to the one made to decide what the permissible gauge groups are for $\mathcal{N}=1$ supergravity coupled to $\mathcal{N}=1$ super Yang-Mills theory. [I say roughly because the details are different: you have to study something called anomaly inflow which describes how the gauge and gravitational anomalies on the boundaries are canceled by contributions from the bulk, through a Green-Schwarz like coupling of bulk-boundary fields, and contributions from the Chern-Simons term which is already present in the action of 11-dimensional supergravity.] The reason to pick $E_8 \times E_8$ is that the anomalies must be canceled on both boundaries, and there's no way to distribute $SO(32)$ between two boundaries (it is a simple group with no factors). The above setup is called Horava-Witten theory [1,2]. It is the strongly coupled version of the $E_8 \times E_8$ heterotic string. Specifically, the distance between the two boundaries (Horava-Witten walls, "end of the world branes", "9-branes") is related to the heterotic coupling. It is also sometimes called heterotic M theory. Note: Usually, one is not so interested in the compactification to $R^{1,9} \times (S^{1}/\mathbb{Z}_2)$, but instead in $R^{1,3} \times CY_3 \times (S^{1}/\mathbb{Z}_2)$, which produces $\mathcal{N} = 1$ supersymmetry in the $(1+3)$ spacetime dimensions. Here $CY_3$ is a Calabi-Yau 3-fold. References: P. Horava and E. Witten, "Heterotic and Type I String Dynamics from Eleven Dimensions," Nucl. Phys. B460:506-524, 1996. [arXiv:hep-th/9510209] P. Horava and E. Witten, "Eleven Dimensional Supergravity on a Manifold with a Boundary," Nucl. Phys. B475:94-114, 1996. [arXiv:hep-th/9603142] B. Ovrut, "Lectures on Heterotic M-Theory," [arXiv:hep-th/020103] Sidenote: For $E_{11}$, you might want to see https://ncatlab.org/nlab/show/E11, and references therein. I don't know enough about the subject to comment in any meaningful way.
{ "domain": "physics.stackexchange", "id": 34087, "tags": "string-theory, symmetry, group-theory, lie-algebra, compactification" }
How to calculate the relativistic final velocity? What's the formula?
Question: Assume we have an object of 1 kg, at rest, and we invest 100 Joules of energy to accelerate it. The resultant velocity can be calculated by $$ v = \sqrt{\frac{2K_{e}}{m}} $$ so, $$ \sqrt{\frac{2 (100)}{1}} \simeq 14.14 m/s $$ But because of relativity, if we invest more and more energy we won't get the same rise in velocity, as the relativistic mass goes on increasing; the resultant velocity rises more and more slowly and will at best approach the speed of light on spending an infinite amount of energy. What is the formula or method to calculate the final relativistic velocity of an object of mass $m$ if I invest $K_{e}$ amount of energy into it? Answer: Your first formula is incorrect because it's using the Newtonian version of the kinetic energy. In special relativity it becomes: $$K_e=(\gamma-1)mc^2=\left(\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}-1\right)mc^2$$ After a bit of elbow grease, you get: $$v=c\sqrt{1-\frac{1}{\left(1+\frac{K_e}{mc^2}\right)^2}}$$ Notice the approximation $K_e/mc^2\ll 1$ restores the formula of Newtonian mechanics.
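The answer's closed form is easy to check numerically against the question's own 100 J on 1 kg example (a quick sketch of my own):

```python
# Compare the Newtonian and relativistic kinetic-energy -> velocity formulas.
import math

C = 299_792_458.0  # speed of light, m/s

def v_newton(ke, m):
    """Classical: v = sqrt(2*K_e/m)."""
    return math.sqrt(2 * ke / m)

def v_relativistic(ke, m):
    """The answer's closed form: v = c*sqrt(1 - 1/(1 + K_e/mc^2)^2)."""
    return C * math.sqrt(1 - 1 / (1 + ke / (m * C**2))**2)

# the question's example: 100 J on 1 kg -> both give ~14.14 m/s
print(v_newton(100.0, 1.0))        # 14.142135...
print(v_relativistic(100.0, 1.0))  # essentially the same, since K_e/mc^2 << 1

# an enormous energy: the relativistic result saturates below c
print(v_relativistic(1e20, 1.0) < C)  # True
```

At everyday energies $K_e/mc^2$ is tiny and the two formulas agree; at extreme energies the relativistic velocity approaches but never reaches $c$, as the question anticipated.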
{ "domain": "physics.stackexchange", "id": 88380, "tags": "special-relativity" }
How does Byte Pair Encoding work?
Question: I am using this to do some Byte Pair Encoding (BPE). My corpus looks like this. When I run the learn_bpe, I get a vocabulary that looks like this. e r r e o n o r t i ) ;</w> a c n t ' ,</w> er r a l r o h e m e When I try to combine it again to see if it worked with ubword-nmt apply-bpe -c data/jsvocab.txt < data/javascript.txt > tst.txt, the resulting file has a lot of strange @ characters. const p@@ re@@ F@@ or@@ m@@ at@@ t@@ e@@ d@@ B@@ l@@ o@@ c@@ k@@ N@@ a@@ me@@ s = { '@@ ap@@ i@@ -@@ p@@ ro@@ j@@ ect@@ s@@ '@@ : '@@ A@@ P@@ I P@@ ro@@ j@@ ect@@ s@@ ', '@@ b@@ a@@ s@@ i@@ c@@ -@@ c@@ ss@@ '@@ : '@@ B@@ a@@ s@@ i@@ c C@@ S@@ S@@ ', '@@ b@@ a@@ s@@ i@@ c@@ -@@ h@@ t@@ m@@ l@@ -@@ and@@ -@@ h@@ t@@ m@@ l@@ 5@@ '@@ : '@@ B@@ a@@ s@@ i@@ c H@@ T@@ M@@ L an@@ d H@@ T@@ M@@ L@@ 5@@ ', '@@ c@@ ss@@ -@@ f@@ le@@ x@@ b@@ o@@ x@@ '@@ : '@@ C@@ S@@ S F@@ le@@ x@@ b@@ o@@ x@@ ', '@@ c@@ ss@@ -@@ g@@ r@@ i@@ d@@ '@@ : '@@ C@@ S@@ S G@@ r@@ i@@ d@@ ', de@@ v@@ o@@ p@@ s@@ : '@@ D@@ e@@ v@@ O@@ p@@ s@@ ', e@@ s@@ 6@@ : '@@ E@@ S@@ 6@@ ', '@@ in@@ f@@ or@@ m@@ ation@@ -@@ se@@ c@@ ur@@ i@@ t@@ y@@ -@@ w@@ i@@ th@@ -@@ he@@ l@@ me@@ t@@ j@@ s@@ '@@ : '@@ I@@ n@@ f@@ or@@ m@@ a@@ ti@@ o@@ n S@@ ec@@ ur@@ i@@ t@@ y w@@ i@@ t@@ h H@@ e@@ l@@ me@@ t@@ J@@ S@@ ', j@@ q@@ u@@ er@@ y@@ : '@@ j@@ Q@@ u@@ er@@ y@@ ', '@@ j@@ s@@ on@@ -@@ ap@@ i@@ s@@ -@@ and@@ -@@ a@@ j@@ a@@ x@@ '@@ : '@@ J@@ S@@ O@@ N A@@ P@@ I@@ s an@@ d A@@ j@@ a@@ x@@ ', '@@ m@@ on@@ g@@ o@@ d@@ b@@ -@@ and@@ -@@ m@@ on@@ g@@ o@@ o@@ se@@ '@@ : 'M@@ on@@ g@@ o@@ D@@ B an@@ d M@@ on@@ g@@ o@@ o@@ se@@ ', '@@ t@@ he@@ -@@ d@@ o@@ m'@@ : '@@ T@@ h@@ e D@@ O@@ M@@ ', '@@ ap@@ i@@ s@@ -@@ and@@ -@@ m@@ i@@ c@@ ro@@ serv@@ i@@ c@@ e@@ s@@ '@@ : '@@ A@@ P@@ I@@ s an@@ d M@@ i@@ c@@ ro@@ serv@@ i@@ c@@ e@@ s@@ ', '@@ ap@@ i@@ s@@ -@@ and@@ -@@ m@@ i@@ c@@ ro@@ serv@@ i@@ c@@ e@@ s@@ -@@ p@@ ro@@ j@@ ect@@ s@@ '@@ : '@@ A@@ P@@ I@@ s an@@ d M@@ i@@ c@@ ro@@ serv@@ i@@ c@@ e@@ s P@@ ro@@ j@@ ect@@ s@@ ' }@@ ; And 
so on. I'm not sure what I'm missing, but it seems that it didn't fully reconstruct the text from the vocabulary? Answer: Your BPE vocabulary is quite small given how the strings you want to segment look. The biggest problem here is that BPE expects tokenized sentences with tokens separated by spaces. The BPE model is not aware of JS syntax and you feed it with JS code, so it is no wonder that it does not learn anything syntactically plausible. You can interpret the BPE file as a log of what subwords got merged, and you can see that the longest string you get is something like: m').valueOf H:mm').valueOf I would recommend some preprocessing, perhaps a JS lexer, so it respects the syntax, and perhaps a bigger vocabulary.
{ "domain": "datascience.stackexchange", "id": 5420, "tags": "nlp, encoding" }
Helical motion of a rigid body
Question: I want to show that a rigid body, with two components of its angular velocity vector and one component of its linear velocity vector, in the absence of external forces and torques, has helical trajectories. This is usually taken for granted in various papers I have read (see for example this p.4 section 4, or this p.206 first paragraph). I consider a moving rigid body whose centre of mass is located at a point $\boldsymbol{r}_{0}(t)$ in the laboratory frame of reference defined by the basis vectors $\boldsymbol{e}_{x},\boldsymbol{e}_{y},\boldsymbol{e}_{z}$. I attach a moving frame of reference $\left\{ \boldsymbol{r}_{0}(t);\boldsymbol{e}_{x}^{\prime}(t),\boldsymbol{e}_{y}^{\prime}(t),\boldsymbol{e}_{z}^{\prime}(t)\right\} $ to the body; this frame is centred at $\boldsymbol{r}_{0}(t)$ and its axes rotate with the frame of the body. The rotating triad $\left(\boldsymbol{e}_{x}^{\prime},\boldsymbol{e}_{y}^{\prime},\boldsymbol{e}_{z}^{\prime}\right)$ can be characterised by three Euler angles $\theta_{1},\theta_{2},\theta_{3}$ and the following transformation rules: $$ \mathbf{e}_{i}^{\prime}=\boldsymbol{L}\left(\theta_{1},\theta_{2},\theta_{3}\right)\cdot\mathbf{e}_{i} $$ I adopt the Tait-Bryan angle convention, with an $y-x^{\prime\prime}-z^{\prime}$ intrinsic definition:$\theta_{1}$ is the yaw (anticlockwise around $e_{y}$), $\theta_{2}$ the pitch (anticlockwise around $e_{x}^{\prime}$) and $\theta_{3}$ the bank (roll, anticlockwise around $e_{z}^{\prime}$). For brevity, I define the vector $\boldsymbol{\theta}=\left(\theta_{1},\theta_{2},\theta_{3}\right)$ of the three independent Euler angles. The matrix of the transformation is $$\boldsymbol{L}\left(\boldsymbol{\theta}\right)=\boldsymbol{R}_{Z}\left(\theta_{3}\right)\boldsymbol{R}_{X}\left(\theta_{1}\right)\boldsymbol{R}_{Y}\left(\theta_{2}\right)$$ where the $\bf R_i$ are the rotation matrices in $\mathbb{R}^3$. 
I assume that the body has linear velocity $\boldsymbol{U}(t)=U\boldsymbol{e}_{z}^{\prime}(t)$ oriented along its anterior-posterior axis $\boldsymbol{e}_{z}^{\prime}(t)$, and rotational velocity vector $\boldsymbol{\omega}(t)$. In the chosen representation, the components of angular velocity in the body's frame of reference are \begin{align*} \omega_{x} & =\dot{\theta}_{2}\cos\theta_{1}-\dot{\theta}_{3}\sin\theta_{1}\cos\theta_{2}\\ \omega_{y} & =\dot{\theta}_{1}+\dot{\theta}_{3}\sin\theta_{2}\\ \omega_{z} & =\dot{\theta}_{2}\sin\theta_{1}+\dot{\theta}_{3}\cos\theta_{1}\cos\theta_{2}. \end{align*} These can be found by noticing that $$\boldsymbol{L}^{T}\dot{\boldsymbol{L}}=\left(\begin{array}{ccc} 0 & -\omega_{z} & \omega_{y}\\ \omega_{z} & 0 & -\omega_{x}\\ -\omega_{y} & \omega_{x} & 0 \end{array}\right)$$ The trajectory of the body is therefore given by the curve traced by its centre of mass $\boldsymbol{r}_{0}$ as it moves through space. In the laboratory frame, this reads $$\boldsymbol{r}_{0}(t)=\intop_{0}^{t}\boldsymbol{U}\left(\tau\right)d\tau=U\intop_{0}^{t}\boldsymbol{e}_{z}^{\prime}\left(\tau\right)d\tau$$ Clearly, $\boldsymbol{e}_{z}^{\prime}$ rotates with the body, and therefore is a function of time through the angular velocity components $\omega_{i}$: \begin{align*} \boldsymbol{e}_{z}^{\prime}(t) & =\boldsymbol{L}(t)\cdot\boldsymbol{e}_{z}\\ & =\left(\cos\theta_{3}\sin\theta_{1}+\cos\theta_{1}\sin\theta_{2}\sin\theta_{3}\right)\boldsymbol{e}_{x}+\left(-\cos\theta_{1}\cos\theta_{3}\sin\theta_{2}+\sin\theta_{1}\sin\theta_{3}\right)\boldsymbol{e}_{y}+\left(\cos\theta_{1}\cos\theta_{2}\right)\boldsymbol{e}_{z} \end{align*} $$ \text{where }\ \theta_{i}=\intop_{0}^{t}\dot{\theta}_{i}d\tau. $$ The problem is complicated, because in order to integrate $\boldsymbol{e}_{z}^{\prime}$ I have to invert the relationships between $\omega_{i}$ and $\theta_{j}$. 
Notice that I am keeping all three components of the angular velocity here, to see at what point and to what extent having only two rotational degrees of freedom is necessary. However, even if I assume that the angular velocities are constant, $\theta_{i}=\dot{\theta}_{i}t$, I do not get expressions that contain $\omega_i$ explicitly. Is there another approach that makes use of $\boldsymbol{L}^{T}\dot{\boldsymbol{L}} = \boldsymbol{\omega}\times$? Answer: The problem is complicated, because in order to integrate $\boldsymbol{e}_{z}^{\prime}$ we have to invert the relationships between $\omega_{i}$ and $\theta_{j}$. Let us assume that the angular velocities are constant and take the small angle approximation: \begin{align*} \dot{\theta}_{i}=\text{const.}\ \ \Longrightarrow\ \ \theta_{i}(t)=\dot{\theta}_{i}t \end{align*} The small angle approximation implies that the angular velocity components in the body frame become: \begin{align*} \Omega_{x} & \approx\dot{\theta}_{2}\\ \Omega_{y} & \approx\dot{\theta}_{1}\\ \Omega_{z} & \approx\dot{\theta}_{3} \end{align*} With these assumptions we obtain: \begin{align*} \boldsymbol{e}_{z}^{\prime}(t) & =\cos\theta_{3}\sin\theta_{1}\boldsymbol{e}_{x}+\sin\theta_{1}\sin\theta_{3}\boldsymbol{e}_{y}+\cos\theta_{1}\boldsymbol{e}_{z}\\ & =\cos\left(\Omega_{z}t\right)\sin\left(\Omega_{y}t\right)\boldsymbol{e}_{x}+\sin\left(\Omega_{y}t\right)\sin\left(\Omega_{z}t\right)\boldsymbol{e}_{y}+\cos\left(\Omega_{y}t\right)\boldsymbol{e}_{z} \end{align*} which can be integrated in order to derive $\boldsymbol{r}_{0}(t)$: \begin{align*} \boldsymbol{r}_{0}(t) & =U\intop_{0}^{t}\boldsymbol{e}_{z}^{\prime}\left(\tau\right)d\tau=\frac{U}{\Omega_{y}^{2}-\Omega_{z}^{2}}\left[\Omega_{y}-\Omega_{y}\cos\left(\Omega_{y}t\right)\cos\left(\Omega_{z}t\right)-\Omega_{z}\sin\left(\Omega_{y}t\right)\sin\left(\Omega_{z}t\right)\right]\boldsymbol{e}_{x}\\ &
\hspace{8em}+\frac{U}{\Omega_{y}^{2}-\Omega_{z}^{2}}\left[-\Omega_{y}\cos\left(\Omega_{y}t\right)\sin\left(\Omega_{z}t\right)+\Omega_{z}\sin\left(\Omega_{y}t\right)\cos\left(\Omega_{z}t\right)\right]\boldsymbol{e}_{y}\\ & \hspace{8em}+\frac{U}{\Omega_{y}}\sin\left(\Omega_{y}t\right)\boldsymbol{e}_{z} \end{align*} This is in general a complex and very intriguing curve, but we will limit our study here to the case where $\Omega_{y}\ll\Omega_{z}$. A series expansion of $\boldsymbol{r}_{0}$ to first order in $\Omega_{y}$ yields, with $r=U\Omega_{y}/\Omega_{z}^{2}$, \begin{align*} \boldsymbol{r}_{0}(t) & \approx\frac{U\Omega_{y}}{\Omega_{z}^{2}}\left[\cos\left(\Omega_{z}t\right)+\Omega_{z}t\sin\left(\Omega_{z}t\right)-1\right]\boldsymbol{e}_{x}+\frac{U\Omega_{y}}{\Omega_{z}^{2}}\left[-\Omega_{z}t\cos\left(\Omega_{z}t\right)+\sin\left(\Omega_{z}t\right)\right]\boldsymbol{e}_{y}+Ut\,\boldsymbol{e}_{z}\\ & =\underset{\text{circular helix}}{\underbrace{\left(\begin{array}{c} r\cos\left(\Omega_{z}t\right)\\ r\sin\left(\Omega_{z}t\right)\\ Ut \end{array}\right)}}+\underset{\text{spiral}}{\underbrace{\left(\begin{array}{c} t\ \Omega_{z}r\sin\left(\Omega_{z}t\right)\\ -t\ \Omega_{z}r\cos\left(\Omega_{z}t\right)\\ 0 \end{array}\right)}}-\underset{\text{axis shift}}{\underbrace{r\left(\begin{array}{c} 1\\ 0\\ 0 \end{array}\right)}} \end{align*} The growth rate of the spiral radius is very small, $\dot{\rho}=U\Omega_{y}/\Omega_{z}\ll U$, due to the approximation $\Omega_{y}\ll\Omega_{z}$. Hence, the trajectory is a quasi-circular helix of radius $r$.
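As a sanity check (my own addition, with illustrative parameter values), one can integrate $U\,\boldsymbol{e}_{z}^{\prime}(t)$ numerically and compare with a closed form. Note that the prefactor used below is $U/(\Omega_{y}^{2}-\Omega_{z}^{2})$: that is the prefactor whose time derivative reproduces the integrand term by term.

```python
# Numerical cross-check of the closed-form trajectory r_0(t).
import math

U, Oy, Oz = 1.0, 0.05, 2.0    # assumed values with Omega_y << Omega_z

def e_z(t):
    """Integrand: components of e_z'(t)."""
    return (math.cos(Oz*t) * math.sin(Oy*t),
            math.sin(Oy*t) * math.sin(Oz*t),
            math.cos(Oy*t))

def r0_numeric(t, n=20000):
    """Midpoint-rule integration of U*e_z' from 0 to t."""
    dt = t / n
    acc = [0.0, 0.0, 0.0]
    for k in range(n):
        comp = e_z((k + 0.5) * dt)
        for i in range(3):
            acc[i] += U * comp[i] * dt
    return acc

def r0_closed(t):
    d = Oy**2 - Oz**2
    x = U/d * (Oy - Oy*math.cos(Oy*t)*math.cos(Oz*t) - Oz*math.sin(Oy*t)*math.sin(Oz*t))
    y = U/d * (-Oy*math.cos(Oy*t)*math.sin(Oz*t) + Oz*math.sin(Oy*t)*math.cos(Oz*t))
    z = U/Oy * math.sin(Oy*t)
    return [x, y, z]

num, exact = r0_numeric(3.0), r0_closed(3.0)
print(all(abs(a - b) < 1e-4 for a, b in zip(num, exact)))  # True
```

The numeric and closed-form values agree to well within the quadrature error, confirming the antiderivative.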
{ "domain": "physics.stackexchange", "id": 57422, "tags": "classical-mechanics, rigid-body-dynamics, solid-mechanics" }
Densities of different phases of steel (austenite, bainite)
Question: Steel seems to be a very complex material, not least because of the different phases and microstructures. At the moment, I'm especially interested in the bainite formation that happens when austenitic steel is cooled rapidly (but not rapidly enough to drive the martensite formation). My question: What is the mass density (or range of values, or functions describing the densities in relation to specific conditions) of these two microstructures? Answer: There is a lot more going on in this question than appears at first glance. Density of austenite is fairly straightforward: it is approximately the atom-weighted sum of the face-centered cubic densities of the substitutional constituents as the microstructure consists of a single phase. In other words, Fe, Mo, V, etc. The interstitial constituents, i.e. C, N, add mass as they do not replace Fe lattice sites but instead fit between them, increasing mass without increasing volume, therefore increasing density. However in austenite there is at most 2% by weight of carbon in solution at $1130\:\textrm{C}$ as shown in the phase diagram at the bottom. The density of pure iron is $7870\:\textrm{kg}/\textrm{m}^3$ (from Google), and the density increase is probably close to about 2% of the density of graphite, $2250\:\textrm{kg}/\textrm{m}^3$ (from Google), or about $45\:\textrm{kg}/\textrm{m}^3$, for a total of about $7915\:\textrm{kg}/\textrm{m}^3$ for the density of fully saturated austenite at room temperature. It isn't clear how you would produce such an unstable material, but we can at least estimate in theory. There are slight volume changes in the FCC lattice caused by the substitutional and interstitial atoms, but without atomistic models these changes are challenging to predict. They may be determined experimentally, of course. For bainite, there are two phases: ferrite and cementite. 
The ferrite phase consists of virtually all of the substitutional microconstituents in a body-centered cubic structure, so its density may be calculated by atom-weighted average. There is also a trivial quantity of carbon in solution at equilibrium, increasing density as before. The density of the microstructure is then the volume-weighted average of the densities of the two phases. Plain-carbon steel with no other alloying additions should have ferrite density very close to that of pure iron, or about $7870\:\textrm{kg}/\textrm{m}^3$ (from Google), and cementite has a theoretical density of approximately $7640\:\textrm{kg}/\textrm{m}^3$ (source). For steels the maximum amount of cementite is approximately 32% by volume, from a tie-line construction on the phase diagram. The maximum carbon concentration in a steel, by definition, is close to 2.14 weight percent, whereas in cementite it is 6.67 weight percent. So the volume-weighted density would be a minimum of approximately $7795\:\textrm{kg}/\textrm{m}^3$. These numbers may be adjusted for temperature using coefficients of linear expansion cubed for the volume change (assuming linearity). They may also be adjusted by considering the densities of substitutional alloy additions. Finally I would caution that the numbers are theoretical for austenite, as plain-carbon steel, pure, retained austenite should be practically impossible to produce in bulk. I'd also caution that there are more complex phenomena to consider such as phase changes, diffusion, as well as other thermodynamic and kinetic phenomena that govern how steels form microstructures and phases, and which will definitely have an effect on the densities.
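The two estimates in the answer can be spelled out as a few lines of arithmetic (my own restatement of the answer's numbers):

```python
# Density estimates for saturated austenite and a ferrite+cementite mixture.
rho_iron = 7870.0       # kg/m^3, pure iron (answer's value)
rho_cementite = 7640.0  # kg/m^3, theoretical (answer's value)
rho_graphite = 2250.0   # kg/m^3

# saturated austenite estimate: iron plus ~2% of graphite's density
rho_austenite = rho_iron + 0.02 * rho_graphite
print(round(rho_austenite))  # 7915

def mixture_density(vol_frac_cementite):
    """Volume-weighted average of ferrite (~pure iron) and cementite."""
    return (vol_frac_cementite * rho_cementite
            + (1 - vol_frac_cementite) * rho_iron)

# answer's bound: at most ~32 vol% cementite in a steel
print(round(mixture_density(0.32)))  # 7796, i.e. the answer's ~7795 minimum
```

The volume-weighted average reproduces the answer's quoted minimum of roughly $7795\:\textrm{kg}/\textrm{m}^3$ to within rounding.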
{ "domain": "engineering.stackexchange", "id": 349, "tags": "materials, steel, metallurgy" }
libviso2 cannot run
Question: Hi all! I'm trying to use libviso2 now. There is a README.txt:

1. Move to the libviso2 root directory
2. Type 'cmake .'
3. Type 'make'
4. Run './viso2 path/to/sequence/2010_03_09_drive_0019'

I've done the first 3 steps, but I really don't know how to do the last line... What is path/to/sequence? Is that a directory? Can somebody help me, please? Thank you. Originally posted by wendyZzzz on ROS Answers with karma: 1 on 2016-04-22 Post score: 0 Answer: This doesn't seem to be a ROS-related question. libviso2 can be evaluated in ROS with the viso2_ros package. I recommend referring to the link below: http://wiki.ros.org/viso2?distro=indigo Originally posted by LeeJaemin with karma: 26 on 2016-04-22 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 24431, "tags": "ros, libviso2" }
Fundamental principles for simple radiative heat transfer problems
Question: The picture on the left below illustrates what is intended to be a very simple one-dimensional textbook-style radiative heat transfer problem. It is meant to be a pedagogical tool for explaining the greenhouse effect. The idea is that the gray plate is transparent to all of the incoming radiation from the distant hot source but is otherwise well-approximated as a blackbody. The hot source is at such a distance that it delivers 240 W/m$^2$ to the top surface of the brown plate which is black, $\epsilon = 1$. The bottom surface of the brown plate is perfectly insulated such that no energy escapes through that side. The plates are intended to be parallel and closely spaced such that radiation out from the edges can be neglected, i.e. large in comparison to the spacing between them but still small in comparison to the distance from the source. All objects are surrounded by vacuum at zero Kelvin. Without the gray plate in place the problem solution for the temperature of the brown plate is 255 K since that is the temperature where it emits 240 W/m$^2$, and the heat in is balanced by the heat out. However, the argument put forth is, since the gray plate is never going to be at a higher temperature than the brown plate it does not transfer any heat to the brown plate and thus cannot raise the temperature of the brown plate. Is this correct? Answer: The first thing to note is that the properties of the gray plate are such that it is for all intents and purposes a blackbody with respect to its absorption from the brown plate and its emission to space, but it is completely transparent to the radiation from the hot source. For example for a temperature of 300K a blackbody emits $\sigma (300^4) = 459.3 W/m^2$, while the gray plate emits $\sigma (300^4) - 1.2 \times 10^{-14} W/m^2$ when it is at 300K, where the small number being subtracted is the amount of radiation a blackbody emits in the wavelengths from 0 to 1 $\mu m$. 
Next, in order to track all of the heat inflows and outflows to the plates, let’s make the lumped thermal capacity approximation for the plates and assume that they are each at a uniform temperature. Let’s also write everything as per unit area of the plates. The rate form for the first law of thermodynamics (under the assumption of local thermodynamic equilibrium) is, $$\frac{dU}{dt}=\dot Q + \dot W$$ where $U$ is the internal energy, $\dot Q$ is the net rate of heat inflow to the system $\dot Q=\dot Q_{in}-\dot Q_{out}$, and $\dot W$ is the rate that work is done by the surroundings on the system. For this problem it is clear that $\dot W=0$. The lumped thermal capacitance approximation then uses, $$\frac{dU}{dt} = C \frac{dT}{dt}$$ where $C$ is the total thermal capacity of the plate, which is just the mass per unit area of the plate times its specific heat. The heat influx to the brown plate per unit area is simply the heat influx from the hot distant source $\dot Q_{in} = Q_0=240 W/m^2$, and the heat outflux per unit area is given by the Stefan-Boltzmann equation for the radiative heat flux between two closely spaced parallel plates, $\dot{Q}_{out} = \sigma (T_{b}^{4}-T_{g}^{4})$ where $b$ is for the brown plate and $g$ is for the gray plate. This allows us to write a governing equation for the temperature evolution of the brown plate as, $$C_b \frac{dT_b}{dt} = 240 - \sigma (T_{b}^{4}-T_{g}^{4})$$ For the gray plate the heat influx is simply the heat outflux from the brown plate, $\dot{Q}_{in} = \sigma (T_{b}^{4}-T_{g}^{4})$, and the heat outflux is the radiation emitted to space, $\dot{Q}_{out} = \sigma T_{g}^{4}$. 
This allows us to write a governing equation for the temperature evolution of the gray plate as, $$C_g \frac{dT_g}{dt} = \sigma (T_{b}^{4}-T_{g}^{4})-\sigma T_{g}^{4} =\sigma (T_{b}^{4}-2 T_{g}^{4}) $$ These equations can be cast in non-dimensional form with the following definitions, $$T_0 =(Q_0/\sigma)^{1/4}, \bar{T} = \frac{T}{T_0},t_0 = \frac{C_b}{\sigma T_0^{3}}\text{, and } \bar{t} = \frac{t}{t_0}$$ The equations are then, $$\frac{d\bar{T_b}}{d\bar{t}} = 1 - \bar{T}_{b}^{4}+\bar{T}_{g}^{4}$$ and $$\frac{C_g}{C_b} \frac{d\bar{T_g}}{d\bar{t}} = \bar{T}_{b}^{4}-2 \bar{T}_{g}^{4} $$ This set of two ordinary differential equations for $\bar{T_b}$ and $\bar{T_g}$ has a single parameter $C_g/C_b$ that must be specified along with initial temperatures for each plate. Let's consider the case where the gray plate is gone and the brown plate has reached its steady state temperature of $\bar{T_b}=1$, and at time $\bar{t}=0$ the gray plate is introduced at a temperature of $\bar{T_g}=0$. Let's also take $C_g/C_b = 1$. For this case the time evolution of the temperatures of the brown and gray plates is shown below. Note that the steady state temperatures for the plates are $T_b=2^{1/4} T_0$ and $T_g = T_0$. So how is it that the brown plate increases in temperature if the gray plate is always colder and there is never any heat flux from the gray to the brown plate? The answer lies in the fact that the net heat outflux from the brown plate changes upon the introduction of the gray plate. Prior to the introduction of the gray plate the brown plate was emitting radiation to 0K space, which is the maximum amount of heat that it can lose per second via radiation. After the gray plate is introduced it warms due to the heat it receives from the brown plate and thus the brown plate is losing less heat per second than it was when it was emitting to space.
Since it is still receiving 240 $W/m^2$ from the source every second it must warm up until it is transferring 240 $W/m^2$ to the gray plate. In turn, the gray plate must emit 240 $W/m^2$ to space at steady state and will be at 255K to do so. The graph below illustrates this behavior showing the heat influx to the brown plate from the source and the heat outflux to the gray plate. The net is of course positive during the transient, which causes the temperature of the brown plate to increase until the new steady state is achieved. This is in spite of the fact that no heat is ever transferred from the colder gray plate to the hotter brown plate.
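The nondimensional ODE pair above is simple enough to integrate with forward Euler (a quick sketch of my own) and confirm the stated steady state $\bar{T}_b \to 2^{1/4}$, $\bar{T}_g \to 1$:

```python
# Forward-Euler integration of the answer's nondimensional two-plate system.
Tb, Tg = 1.0, 0.0      # initial condition used in the answer
c_ratio = 1.0          # C_g / C_b
dt = 1e-3
for _ in range(200_000):              # integrate to nondimensional t = 200
    dTb = (1 - Tb**4 + Tg**4) * dt
    dTg = (Tb**4 - 2 * Tg**4) / c_ratio * dt
    Tb, Tg = Tb + dTb, Tg + dTg

print(round(Tb, 4), round(Tg, 4))  # 1.1892 1.0  (2**0.25 = 1.1892...)
```

The brown plate ends up warmer than it was without the gray plate, even though all heat flows from hot to cold throughout the transient, which is exactly the point of the answer.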
{ "domain": "physics.stackexchange", "id": 100637, "tags": "thermodynamics, energy, thermal-radiation, thought-experiment" }
The hanging chain problem (catenary), numerically
Question: I am supposed to solve the hanging chain in a constant homogeneous gravity field: The chain of length $L_0$ is divided into $N$ parts which are homogeneous with length $l$ and mass $m$ and connected by ideal joints. I am supposed to study the curve with different $N$ and $l$. It's always true that $Nl=L_0=const.$ I've seen the analytical solution but I am supposed to do it numerically: I am supposed to get a set of inhomogeneous equations and then solve them numerically. But I don't know how to get the equations. According to the task I am supposed to get a set of normal equations, not differential equations. Can someone help me? Answer: The catenary shape equation is $$ y(x) = y_C + a \left( \cosh \left( \frac{x-x_C}{a} \right) -1 \right) $$ where $(x_C,y_C)$ is the coordinate of the lowest point and $a$ is the so-called catenary constant. At the supports the forces in the horizontal direction are $H$ and the unit weight of the cable is $w = \rho g A$, making the catenary constant $$a = \frac{H}{w} $$ Use the above to find the end-points of each segment. For a segment spanning between $x_1$ and $x_2$ in the horizontal direction with length $\ell$ and angle orientation $\theta$ the following is true $$ \begin{align} x_2 = x_1 + \ell \cos \theta \\ y_2 = y_1 + \ell \sin \theta \end{align} $$ Now for the balance of forces. The weight of each segment is $W = w \ell$ and the force equations $$ \begin{align} (T + \Delta T) \cos \left( \theta + \Delta \theta\right) - T \cos \left( \theta \right) & = 0 \\ (T + \Delta T) \sin \left( \theta + \Delta \theta\right) - T \sin \left( \theta \right) - w \ell & = 0 \end{align} $$ where $T$ and $\theta$ are the tension and angle on the left side of the segment and $T+\Delta T$ and $\theta + \Delta \theta$ the tension and angle on the right side.
This means that each segment recursively changes the tension and angle by $$ \begin{align} \Delta T & = \sqrt{T^2+w^2 \ell^2+2 T w \ell \sin \theta}-T \\ \Delta \theta & = \tan^{-1} \left( \frac{\cos \theta}{\sin\theta + \frac{T}{w \ell} } \right) \end{align} $$ You can check the results against the analytical form of the tension and slope angle $$ \begin{align} T &= H \cosh\left( \frac{x-x_C}{a} \right) \\ \tan \theta &= \sinh\left( \frac{x-x_C}{a} \right) \end{align} $$
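A minimal numerical sketch of this segment-by-segment recursion (starting values are illustrative, not from the original task): begin at the lowest point, where the tension is purely horizontal, and apply the $\Delta T$ and $\Delta \theta$ updates above. A good consistency check is that the horizontal component $T \cos \theta$ stays constant along the chain, since gravity only acts vertically:

```python
import math

def build_chain(H, w, ell, n):
    """March from the lowest point (theta = 0, tension = H) over n rigid
    segments of length ell and unit weight w, applying the recursion above."""
    T, theta = H, 0.0
    joints = [(T, theta)]
    for _ in range(n):
        dT = math.sqrt(T**2 + (w * ell)**2 + 2 * T * w * ell * math.sin(theta)) - T
        dtheta = math.atan(math.cos(theta) / (math.sin(theta) + T / (w * ell)))
        T += dT
        theta += dtheta
        joints.append((T, theta))
    return joints

# consistency check: the horizontal force component is the same at every joint
joints = build_chain(H=10.0, w=1.0, ell=0.5, n=20)
horizontals = [T * math.cos(theta) for T, theta in joints]
assert all(abs(h - 10.0) < 1e-9 for h in horizontals)
```

Since the joints are ideal, the same recursion works for any number of segments $N$; refining $N$ with $N\ell = L_0$ fixed lets you watch the polygon converge to the catenary.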
{ "domain": "physics.stackexchange", "id": 36931, "tags": "homework-and-exercises, newtonian-mechanics, computational-physics" }
Question about what a simultaneous measurement of entangled spins means
Question: I was working through a problem I found online and ran into something that is confusing me. We have a system of three spin-1/2 particles, in the state $$ |\psi\rangle = \frac{1}{\sqrt{2}}(|1/2,1/2,1/2\rangle - |-1/2,-1/2,-1/2\rangle). $$ Now, one can show that this is an eigenstate with eigenvalue $1$ of the operator $$ M = \sigma_x^1 \sigma_y^2 \sigma_y^3.$$ I.e., the $x$ Pauli matrix on the first spin, and the $y$ Pauli matrix on the other two spins. Now, the question later asks what the possible measurement outcomes are if an observer simultaneously measures $S_x$ of the first particle and $S_y$ of the other two particles. According to the answer, the possible outcomes are (+--), (-+-), (--+), (+++). Now, I get that the product of these parities is +1, in line with the fact that $|\psi \rangle$ is an eigenstate of $M$ with eigenvalue $1$. But, I guess I'm confused about what a simultaneous measurement actually means here. For example, if we performed the measurements sequentially, then I believe that the $S_x$ measurement would have to yield $-1$ since $|1/2\rangle - |-1/2\rangle$ is an eigenstate of $S_x$ with eigenvalue $-1$. So then what does a "simultaneous measurement" actually mean here? Now sure, if one measured the product operator $S_x^1 S_y^2 S_y^3$ then clearly the only possible outcome would be $(\hbar/2)^3$ but the problem seems to be asking what happens if you measure the individual spins at the same time, not just the single observable $M$. I'm not sure what is actually going on in that case. Answer: You can't factorise out the state of the first qubit as a pure eigenstate of $S_x$ as you seem to want to (the state is not a product state). To see the possible outcomes of the measurement of $S_x$ on the first particle you need to rewrite the parts of the state related to that particle in the $x$ basis. I'm going to use $|+\rangle$ and $|-\rangle$ to represent the eigenstates of $S_x$.
First note that: \begin{align} |1/2\rangle &= \frac{1}{\sqrt{2}}\left(|+\rangle + |-\rangle\right)\\ |-1/2\rangle &= \frac{1}{\sqrt{2}}\left(|+\rangle - |-\rangle\right)\\ \end{align} So we can write: \begin{align} |\psi\rangle &= \frac{1}{2}\left(\left(|+\rangle + |-\rangle\right)|1/2\rangle|1/2\rangle - \left(|+\rangle - |-\rangle\right)|-1/2\rangle|-1/2\rangle\right)\\ \end{align} Then combine factors: \begin{align} |\psi\rangle &= \frac{1}{2}\left(|+\rangle\left(|1/2\rangle|1/2\rangle - |-1/2\rangle|-1/2\rangle\right) + |-\rangle\left(|1/2\rangle|1/2\rangle + |-1/2\rangle|-1/2\rangle\right)\right)\\ \end{align} Your reasoning would have been correct if we could write this in the form $|\psi\rangle = |-\rangle|\text{some state for the other particles}\rangle$ but we can't do that. The overall aim in doing this algebra is to show that the order you do the measurements in doesn't change the outcomes you can get at all. We can freely say "simultaneous" without worrying about what it means because it doesn't matter what order we do the measurements in! You should definitely go through it and convince yourself of this. All you need to do is factorise the expression into the eigenstates of each operator in turn and use that to work out what the outcomes of each series of measurements should be. It might be less tedious to calculate everything with a two qubit system (and observable) though. Note that my algebra may be totally wrong (it usually has some errors) but the technique is correct. You factorise the system according to the eigenvectors of the observable you're going to measure first and use that to work out the state after each measurement outcome, then repeat with the new state and the remaining measurements.
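If it helps, the whole outcome structure can be checked numerically. The sketch below (a verification aid, not part of the original problem) builds the state, confirms the +1 eigenvalue of $\sigma_x^1 \sigma_y^2 \sigma_y^3$, and shows that exactly the four sign combinations with parity product +1 occur, each with probability 1/4:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

up = np.array([1, 0], dtype=complex)   # |+1/2>
dn = np.array([0, 1], dtype=complex)   # |-1/2>

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

psi = (kron3(up, up, up) - kron3(dn, dn, dn)) / np.sqrt(2)

# |psi> is a +1 eigenstate of M = sigma_x (x) sigma_y (x) sigma_y
M = kron3(sx, sy, sy)
assert np.allclose(M @ psi, psi)

def eigvec(op, sign):
    """Normalized eigenvector of op with eigenvalue sign (+1 or -1)."""
    vals, vecs = np.linalg.eigh(op)
    return vecs[:, np.argmin(np.abs(vals - sign))]

# joint measurement outcomes (s1, s2, s3) for (S_x, S_y, S_y)
for s1 in (1, -1):
    for s2 in (1, -1):
        for s3 in (1, -1):
            outcome = kron3(eigvec(sx, s1), eigvec(sy, s2), eigvec(sy, s3))
            p = abs(np.vdot(outcome, psi))**2
            if s1 * s2 * s3 == 1:
                assert abs(p - 0.25) < 1e-12   # (+++), (+--), (-+-), (--+)
            else:
                assert p < 1e-12               # forbidden outcomes
```

The loop never cares about measurement order, which mirrors the point in the answer: the three single-spin observables commute, so "simultaneous" and "sequential" give the same statistics.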
{ "domain": "physics.stackexchange", "id": 24619, "tags": "quantum-mechanics, homework-and-exercises, quantum-entanglement" }
X-ray diffraction intensity and Laue equations
Question: My textbook, Solid-State Physics, Fluidics, and Analytical Techniques in Micro- and Nanotechnology, by Madou, says the following in a section on X-Ray Intensity and Structure Factor $F(hkl)$: In Figure 2.28 we have plotted $y = \dfrac{\sin^2(Mx)}{\sin^2(x)}$. This function is virtually zero except at the points where $x = n\pi$ ($n$ is an integer including zero), where it rises to the maximum value of $M^2$. The width of the peaks and the prominence of the ripples are inversely proportional to $M$. Remember that there are three sums in Equation 2.38. For simplicity we only evaluated one sum to calculate the intensity in Equation 2.39. The total intensity equals: $$I \propto \dfrac{\sin^2 \left( \dfrac{1}{2} M \mathbf{a}_1 \cdot \Delta \mathbf{k} \right)}{ \sin^2 \left( \dfrac{1}{2} \mathbf{a}_1 \cdot \Delta \mathbf{k} \right)} \times \dfrac{\sin^2 \left( \dfrac{1}{2} N \mathbf{a}_2 \cdot \Delta \mathbf{k} \right)}{ \sin^2 \left( \dfrac{1}{2} \mathbf{a}_2 \cdot \Delta \mathbf{k} \right)} \times \dfrac{\sin^2 \left( \dfrac{1}{2} P \mathbf{a}_3 \cdot \Delta \mathbf{k} \right)}{ \sin^2 \left( \dfrac{1}{2} \mathbf{a}_3 \cdot \Delta \mathbf{k} \right)} \tag{2.40}$$ so that the diffracted intensity will equal zero unless all three quotients in Equation 2.40 take on their maximum values at the same time. This means that the three arguments of the sine terms in the denominators must be simultaneously equal to integer multiples of $2\pi$, or the peaks occur only when: $$\mathbf{a}_1 \cdot \Delta \mathbf{k} = 2 \pi e$$ $$\mathbf{a}_2 \cdot \Delta \mathbf{k} = 2 \pi f$$ $$\mathbf{a}_3 \cdot \Delta \mathbf{k} = 2 \pi g$$ These are, of course, the familiar Laue equations. 
I could be mistaken, but I see two possible errors here: Since we have that the function is virtually zero except at the points where $x = n \pi$, where $n$ is an integer, we use L'Hôpital's rule to get that $\dfrac{2M \cos(Mx)}{2\cos(x)} = \dfrac{M \cos(Mx)}{\cos(x)}$, which is a maximum of $M$ -- not $M^2$ -- for $x$. Assuming that we require that the arguments of the sine terms of the three denominators equal integer multiples of $2\pi$, we have that $$\dfrac{1}{2} \mathbf{a}_1 \cdot \Delta \mathbf{k} = 2\pi e \Rightarrow \mathbf{a}_1 \cdot \Delta \mathbf{k} = 4 \pi e$$ However, as the author indicates, the Laue equation is $\mathbf{a}_1 \cdot \Delta \mathbf{k} = 2 \pi e$. So should it not be the case that we require that the arguments of the sine terms of the three denominators equal integer multiples of $\pi$, so that we have that $$\dfrac{1}{2} \mathbf{a}_1 \cdot \Delta \mathbf{k} = \pi e \Rightarrow \mathbf{a}_1 \cdot \Delta \mathbf{k} = 2\pi e$$ I would greatly appreciate it if people would please take the time to review this. Answer: On applying L'Hôpital's rule, we get $y = \frac{2M\sin(Mx)\cos(Mx)}{2\sin(x)\cos(x)}$. Applying L'Hôpital's rule again to the factor $\frac{\sin(Mx)}{\sin(x)}$ gives $M$ in the limit, so $y = M^{2}$. This proves that $\frac{\sin^{2}(Mx)}{\sin^{2}(x)}$ has maxima at $x = n\pi$, where $n$ is any integer, not just the even integers. In $\frac{\sin^{2}\left(\frac{1}{2}M \mathbf{a}_{1} \cdot \Delta \mathbf{k}\right)}{\sin^{2}\left(\frac{1}{2} \mathbf{a}_{1} \cdot \Delta \mathbf{k}\right)}$ we have $x = \frac{1}{2} \mathbf{a}_{1} \cdot \Delta \mathbf{k}$, therefore $$\dfrac{1}{2} \mathbf{a}_1 \cdot \Delta \mathbf{k} = \pi e \Rightarrow \mathbf{a}_1 \cdot \Delta \mathbf{k} = 2 \pi e$$
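A quick numerical check (not from the textbook) confirms the corrected statements: the peaks of $\sin^2(Mx)/\sin^2(x)$ have height $M^2$, not $M$, and sit at every integer multiple of $\pi$:

```python
import numpy as np

M = 8
# avoid sampling x = n*pi exactly so the ratio never hits 0/0
x = np.linspace(1e-6, 3 * np.pi, 300001)
y = np.sin(M * x)**2 / np.sin(x)**2

# the global maximum has height M^2, not M
assert abs(y.max() - M**2) < 1e-3

# and it sits at an integer multiple of pi (odd multiples included)
n = x[np.argmax(y)] / np.pi
assert abs(n - round(n)) < 1e-3
```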
{ "domain": "physics.stackexchange", "id": 64353, "tags": "solid-state-physics, scattering, diffraction, x-ray-crystallography, braggs-law" }
but_velodyne with VLP16
Question: The but_velodyne wiki states that the package has been tested only with Velodyne HDL-32E. I am wondering if anyone has tried using the package with VLP16 or is aware of a reason why it would not work with VLP16? I am looking for ways to colorize point cloud generated by VLP16 and am starting to investigate if but_calibration_camera_velodyne can do that. Has anyone used this node to colorize point clouds? What were the results? Originally posted by chukcha2 on ROS Answers with karma: 89 on 2016-09-20 Post score: 0 Answer: I have not used but_velodyne, and don't know whether VLP-16 support has been added. That device returns data in a slightly different format, so it won't work automatically. The velodyne driver in ros-drivers does support the VLP-16, if you build it from source. Originally posted by joq with karma: 25443 on 2016-09-20 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 25803, "tags": "velodyne, pointcloud" }
Why is the C-Space topology for a 2R robot a torus?
Question: Currently doing a course on robotics - skip to 1:12 in the video below where Kevin Lynch describes the C-Space topology of a 2R robot to be a torus. Why did he rotate the circle for joint 1 to be perpendicular to that of joint 2? https://youtu.be/z29hYlagOYM?list=PLggLP4f-rq01z8VLqhDC94W2nWpWpZoMj&t=72 Also, shouldn't the C-Space for such a contraption be an annular disc? Notice the difference between fig1 and fig2 in the illustration I drew. I get that the ranges being [0,2π] wrap around to form a torus, but I am confused about how Kevin Lynch approached this problem; aren't both joints operating in the xy plane? Answer: The torus doesn't represent the actual motion of the joints in space; it is used to represent the C-space only. You need a C-space that can represent all the combinations of two different angles, i.e., all pairs ($\theta_1, \theta_2$), so that by picking a point in the C-space you can uniquely determine the location of both joints. If you use the annular ring and pick any point on it, you can find the angle of the first joint, but how do you represent the angle of the second joint? If you take a torus, however, the angle around the central circle represents the angle of the first joint and the angle around the peripheral ring of the torus represents the angle of the second joint, as I tried to show in the image. I tried to draw it so that it is easier to visualise (pretty bad actually, but hope it helps). So if you take a disc it wouldn't be possible to uniquely describe the location of $\theta_2$, and if you try you will get overlapping circles. Again, don't confuse the C-space with the actual motion of the joints: it is a space which is used to map each possible position of the 2 joints to a unique point on some region (here the torus).
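One way to see this concretely is the standard embedding of the torus in 3-D (illustrative radii R and r, not from the lecture): each pair of joint angles maps to a unique point, and adding 2π to either angle returns to the same point:

```python
import math

def torus_point(theta1, theta2, R=2.0, r=1.0):
    """Standard torus embedding (requires R > r): theta1 runs around the
    central circle (joint 1), theta2 around the tube (joint 2)."""
    x = (R + r * math.cos(theta2)) * math.cos(theta1)
    y = (R + r * math.cos(theta2)) * math.sin(theta1)
    z = r * math.sin(theta2)
    return (x, y, z)

def close(p, q, tol=1e-9):
    return all(abs(a - b) < tol for a, b in zip(p, q))

# swapping the two joint angles gives a *different* point: no collisions
assert not close(torus_point(0.0, math.pi), torus_point(math.pi, 0.0))

# angles wrap around: adding 2*pi to either joint gives the same point
assert close(torus_point(0.3, 0.7), torus_point(0.3 + 2 * math.pi, 0.7))
assert close(torus_point(0.3, 0.7), torus_point(0.3, 0.7 + 2 * math.pi))
```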
{ "domain": "robotics.stackexchange", "id": 39029, "tags": "motion-planning, path-planning" }
How can we know which alleles are together on a chromosome?
Question: This illustration says that if the two homozygotes pr+ pr+ vg+ vg+ and pr pr vg vg are crossed to produce a heterozygous offspring (pr+ pr vg+ vg), then: this cross gives us exactly what we need to observe recombination: a fly that's heterozygous for the purple and vestigial genes, in which we know clearly which alleles are together on a single chromosome. (My emphasis.) Source: Khan Academy - Genetic Linkage & Mapping I can't see the logic of this: surely if there were four chromosomes involved instead of two, wouldn't we get the same result via independent assortment? In this scenario, the red-eyed, long-winged fly would have pr+ on chromosome 1, pr+ on chromosome 2, vg+ on chromosome 3 and vg+ on chromosome 4. Conversely, the purple-eyed, vestigial-winged fly would have pr on chromosome 1, pr on chromosome 2, vg on chromosome 3 and vg on chromosome 4. As the mother and father are both homozygotes (the mother for the dominant alleles and the father for the recessive alleles) the offspring couldn't fail to inherit pr+ and vg+ from the mother and pr and vg from the father, and the offspring would be the same as in the illustration. I can't see how we can get any information as to whether genes are on the same chromosome or not from this crossing (unless we know they are on the same chromosome before we do the cross!). I'm sure I've got my own logic wrong - can anyone enlighten me? Answer: (moved from comments) We don't know from the pictured cross that the genes are linked. However, IF they are linked, we know from that cross which alleles are together on a single chromosome. Thus, the description should have something like ", if indeed they are linked." added to the end of "this cross gives us exactly what we need to observe recombination: a fly that's heterozygous for the purple and vestigial genes, in which we know clearly which alleles are together on a single chromosome." 
It's the further crosses of the F1 offspring that tell us if the genes are linked and if so, how closely. The original cross is necessary but not sufficient to determine this.
{ "domain": "biology.stackexchange", "id": 11403, "tags": "genetics, homework, recombination, genetic-linkage" }
Why would a gravitational wave that stretches along the $x$-axis necessarily compress along the $y$-axis?
Question: I read that a gravitational wave is a transverse wave, usually produced by inspiraling neutron stars or black holes, and a laser interferometer such as LIGO is commonly used to detect them; since the setup of the mirrors of the interferometer is at a 90° angle, the signal is read from the constructive and destructive patterns. Does it mean that, due to the limitation of our detector, we assume the gravitational wave to be a transverse wave? I think it depends on how the gravitational wave is generated, so inspiraling binary stellar masses would always produce a transverse wave. But is there any other way to produce a longitudinal wave pattern, such as a heavy mass rocking in situ, etc.? Answer: There is no way to produce a longitudinal gravitational wave. They are inherently transverse. To discover how gravitational waves behave, physicists mathematically study small perturbations to flat spacetime or some other fixed background spacetime. They do this by linearizing Einstein's field equations for General Relativity. The perturbations are found to propagate at the speed of light and to have only two modes, both of which are transverse to the direction of propagation. The fact that, when a gravitational wave passes by, one transverse direction of space is found to expand while the other direction shrinks (and then vice versa, in an oscillatory way) is related to the fact that the metric field of spacetime is a tensor field with two indices.
{ "domain": "physics.stackexchange", "id": 57269, "tags": "general-relativity, interference, gravitational-waves, polarization, interferometry" }
increase hokuyo URG-04LX-01 scan Frequency
Question: How do I increase the hokuyo URG-04LX-01 scan frequency? Currently it is 10Hz. I have added param name="Skip" value = "1"/ to my launch file, but as I increase the value the frequency drops. Originally posted by vinod9910 on ROS Answers with karma: 61 on 2014-06-12 Post score: 0 Answer: You can't increase it beyond 10Hz, that is the maximum scanning rate of the URG-04 according to the specs. By increasing the "skip" beyond 0, you are dropping data so your scanning frequency will be reduced (skip=1 => 5Hz). Originally posted by AHornung with karma: 5904 on 2014-06-12 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by vinod9910 on 2014-06-12: Thank You.
{ "domain": "robotics.stackexchange", "id": 18250, "tags": "lidar, scan, frequency, hokuyo, time" }
Explanation for "if all accelerated systems are equivalent, then Euclidean geometry cannot hold in all of them"
Question: I'm doing an EPQ (mini college research paper) on gravity, and I found a site that explained things in simple terms. I am having trouble understanding how Einstein came to his revelation that space-time was curved. Einstein also realised that the gravitational field equations were bound to be non-linear and the equivalence principle appeared to only hold locally. And Einstein said If all accelerated systems are equivalent, then Euclidean geometry cannot hold in all of them. Can anybody help? Answer: Here's a simple demonstration: Consider flat space (i.e. Minkowski), viewed in a rotating frame (in e.g. cylindrical coordinates one just replaces $\phi$ by $\phi'=\phi+\omega t$). One can calculate (without too much trouble) that, in these coordinates, a spatial line element can be expressed in terms of the canonical cylindrical coordinates as $$ d\ell^2=dr^2+dz^2+\frac{r^2d\phi^2}{1-\frac{\omega^2r^2}{c^2}}$$ Now, note that if we integrate $d\ell$ around the boundary of a unit disc in the $z=\text{constant}$ plane, we find the circumference $$C = \oint d\ell=\frac{2\pi}{\sqrt{1-\frac{\omega^2}{c^2}}}>2\pi\hspace{1cm}\iff \omega>0$$ The startling conclusion is that this observer will measure the circumference of a disc of radius $r$ to be $C>2\pi r$ for any $\omega>0$. Hence, Euclidean geometry does not hold universally, even in flat space, if we relax the assumption that 'inertial frames' are somehow privileged, i.e. if we take this calculation seriously. Realizing that there is a need to consider (relatively) accelerating frames as equivalent was one of the major breakthroughs that needed to be made in order to arrive at the theory of general relativity. Note that this example of the spinning disk was raised quite quickly after the advent of special relativity, and that it sparked quite a lively debate, influencing Einstein's thinking on relativity.
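Plugging numbers into the line element makes the excess circumference explicit (a small sketch with illustrative values, taking c = 1):

```python
import math

def circumference(r, omega, c=1.0):
    """Circumference of a circle of radius r (dr = dz = 0) measured in the
    rotating frame, from the spatial line element above."""
    return 2 * math.pi * r / math.sqrt(1 - (omega * r / c)**2)

# omega = 0 recovers Euclidean geometry ...
assert abs(circumference(1.0, 0.0) - 2 * math.pi) < 1e-12

# ... while any omega > 0 gives C > 2*pi*r, so the geometry is non-Euclidean
for omega in (0.1, 0.5, 0.9):
    assert circumference(1.0, omega) > 2 * math.pi
```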
{ "domain": "physics.stackexchange", "id": 17769, "tags": "general-relativity, gravity, reference-frames, acceleration, equivalence-principle" }
Spectral function and bound states in condensed matter
Question: In condensed matter physics (like in QFT) we can use Feynman's diagrams to compute the self-energy. From here we can obtain the spectral function as: $$ A_{\mathbf k} (\omega) = \frac{-\frac{1}{\pi}\mathrm{Im}\Sigma_{\mathbf k}(\omega)} {\left[\omega - (\epsilon_{\mathbf k }- \mu) - \mathrm{Re}\Sigma_{\mathrm{k}}(\omega)\right]^2 + \left[ \mathrm{Im}\Sigma_{\mathbf k}(\omega) \right]^2} $$ At low temperatures this spectral function typically has some satellites structures and a quasi-particle peak. From here we can read-off quasi-particle properties... However I want to know if (and how!) can we see if there are any bound states in the system? Actually, I do know that it can be done (in BCS theory for example), but I do not know how? Answer: Bound vs. extended states In condensed matter physics bound states are the states with the wave functions decaying towards infinity, as opposed to the extended states, as, e.g., plane waves. In this sense the definitive answer to the question can be given only by the exact diagonalization of the Hamiltonian and studying the behavior of the wave functions. Without that the question is necessarily vague, and the discussions of the bound states usually carry qualitative character. Gaps between the states Another way to define bound states is as isolated states, i.e., they are not a part of a continuous spectrum. In non-interacting case (or in the spectrum of a diagonalized Hamiltonian), these are immediately seen from the dependence of $\epsilon_k$ on $k$. In other words, they are separated by a gap from the adjacent bound states or the continuum. In this picture, including the interactions broadens the states, and whether they remain (quasi-)bound depends on how the broadening compares to the gap. When the broadening is very strong we say that the bound states wash out, becoming continuous. Gap opening Finally, there is a class of problems where we are interested in appearance of bound states. 
In some cases one can re-express the problem in terms of an effective Schrödinger equation (e.g., by summing the ladder diagrams - see Fetter & Walecka for a clear presentation). In this case one distinguishes bound and extended states by the properties of the solutions of this effective Schrödinger equation (i.e., decaying to infinity or not). Another case is gap opening, which can be studied by a number of techniques, e.g., the renormalization group. Clarification about the spectral function The textbooks on QFT in condensed matter physics teach us that the particle (and multiparticle) excitations appear as singularities in the Green's function. It is necessary to stress here that the Green's function is not the same thing as the spectral function (like the one given in the question). In fact, the spectral function is analytical, i.e., it does not have singularities. It is convenient to use mathematically, but it is less suitable for the question that interests us. In fact, although it gives a good intuition when discussing broadening of states or their delocalization, as I discussed above, it can be very misleading in terms of finding the true excitations, since a lot of $\omega$-dependence, and all the new physics due to the interaction, is hidden in the self-energy. If we work with the Green's function, then its singularities will be either isolated poles or branch cuts - corresponding respectively to isolated (likely bound) states and the continuous spectrum. Another point - the spectral function given in the question is that for a one-particle Green's function. If we are interested in two-particle bound states, one should work with a two-particle Green's function and the corresponding spectral representation.
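As a quick illustration (not part of the original question) of how a quasiparticle peak shows up in $A_{\mathbf k}(\omega)$, take the simplest possible self-energy, a constant $\Sigma = \Delta - i\Gamma$. The spectral function from the question then reduces to a Lorentzian centered at the shifted energy $\epsilon_{\mathbf k} - \mu + \Delta$, with total weight obeying the sum rule $\int A\,d\omega = 1$:

```python
import numpy as np

eps_k, mu = 0.5, 0.0          # bare dispersion and chemical potential
Delta, Gamma = 0.2, 0.05      # Re(Sigma) and -Im(Sigma), assumed constant

w = np.linspace(-10.0, 10.0, 400001)
A = (Gamma / np.pi) / ((w - (eps_k - mu) - Delta)**2 + Gamma**2)

# quasiparticle peak at the self-energy-shifted energy
peak = w[np.argmax(A)]
assert abs(peak - (eps_k - mu + Delta)) < 1e-3

# spectral weight sum rule: the integral of A over frequency is ~1
dw = w[1] - w[0]
assert abs(A.sum() * dw - 1.0) < 1e-2
```

With a frequency-dependent $\Sigma_{\mathbf k}(\omega)$ the same expression develops the satellite structures mentioned in the question, but any isolated, truly bound state would show up as a delta-function (pole) contribution rather than a broadened Lorentzian.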
{ "domain": "physics.stackexchange", "id": 75037, "tags": "condensed-matter, feynman-diagrams, greens-functions, quasiparticles" }
Is this bug a termite?
Question: Noticed this bug on my kitchen floor, and am wondering whether it is a termite: I have a video of it crawling too, but don't see how to upload video here. Edit: Regarding scale: it is small, perhaps 3 mm (~1/8 inch). Location is northern Arkansas, so is within termite territory. Answer: This is a beetle rather than a termite: in comparison to a termite, most beetles have sclerotized forewings (so-called elytra) and from above, you can recognize its head, pronotum and forewings. Termites, however, have soft bodies and most often are wingless. From above, you can additionally see their meso- and metanotum and their abdominal tergites (latter are covered by the wings in most beetles). Commonly, they are unpigmented and thus, pale. Winged termites have transparent wings that are longer than their bodies. Beetles are extremely diverse with almost 200 described beetle families with more than 300,000 species, so it's super hard to tell even the family let alone the species from a photo. Image source (in reading direction): Sanjay Acharya CC-BY-SA-4.0 | Teechippy CC-BY-SA-4.0 | Dulneth Wijewardana CC-BY-SA-4.0
{ "domain": "biology.stackexchange", "id": 12330, "tags": "species-identification, entomology" }
Do flies actually take off backwards?
Question: I've been told that flies take off backwards, but I haven't really been able to prove it to myself. The closest I've gotten was noticing that they fly into window glass back-first, with their heads pointing up. Is that how they take off? Answer: A 2008 study on fruit flies found that when faced with an immediate threat, flies would tend to launch themselves into the air in the opposite direction to the threat. So when confronted by a threat from directly ahead, the flies would jump backwards. Interestingly, "in response to stimuli approaching from the side, however, flies jumped at an angle that was approximately halfway between directly away and directly forward." In other words, the response was biased toward taking off in the forward direction. The study's authors note that "voluntary takeoffs elicited by either attractive odors or internal cues are almost always in the forward direction". To summarize, fruit flies will take off in a forward direction when possible, but can take off backwards or at an angle when faced with an immediate threat. Source: Card, Gwyneth; Dickinson, Michael H. (September 9, 2008). "Visually mediated motor planning in the escape response of Drosophila". Current Biology 18: 1300–1307. doi:10.1016/j.cub.2008.07.094.
{ "domain": "biology.stackexchange", "id": 4657, "tags": "entomology, flight" }
JavaScript naming and grouping
Question: I'm fairly new to JavaScript. I would like some feedback on the naming and grouping of the following code. Is there any change that will make it more readable?

Template.documentPage.events({
    'click .hint-text': function(e) {
        e.preventDefault()
        var currentHintId = $(e.target).attr('id')
        Session.set('currentHintId', currentHintId)
        $('.hint-popup').show()
    },
    'click .hint-submit': function(e) {
        e.preventDefault()
        var currentHintId = Session.get('currentHintId')
        var popupInput = $('.hint-popup input').val()
        $('#' + currentHintId).text(popupInput)
        Documents.update(this._id, {$set: {content: $('.content').html()}}, function() {
            console.log('Saved.')
        })
        $('.hint-popup').hide()
        $('.hint-popup input').val('Enter text')
    }
})

Answer: To start off, I can't see a single semicolon being used here. You should always close your statements with a semicolon. Also I can't quite understand what you've done here:

'click .hint-text': function(e) {
    e.preventDefault()
    var currentHintId = $(e.target).attr('id')
    Session.set('currentHintId', currentHintId)
    $('.hint-popup').show()
},

Is this a property in an object? If so why have you named it 'click .hint-text'? Give it a proper name such as togglePopup or something along those lines to make it more obvious what it does. Now it's just confusing. And for the other function you could name it submit(whatever you're submitting). If you included some additional markup it would help your question a bit.
{ "domain": "codereview.stackexchange", "id": 11903, "tags": "javascript, beginner, meteor" }
Is a strontium–fluorine battery the highest voltage battery using pure elements?
Question: Strontium has a very low standard electrode potential and fluorine has a very high one. \begin{align} \ce{F2 + 2e^- &<=> 2F^-} &\quad E^\circ &= \pu{+2.87 V} \tag{R1} \\ \ce{Sr &<=> Sr^+ + e^-} &\quad E^\circ &= \pu{-4.10 V} \tag{R2} \end{align} In theory, a strontium–fluorine battery would have a voltage of $\pu{6.97 V},$ although there are many practical reasons such as danger and rarity of materials for such batteries not to be made. Is a strontium–fluorine battery the theoretically highest voltage chemical battery using pure elements, or is it possible to obtain a higher one? Answer: The half-potential you've given for strontium is only for the first ionisation. The half-potential you'd actually get is $\pu{-2.899 V}$ for the stable dication to give a cell potential of $\pu{5.769 V}$. From the CRC Handbook [1], lithium has the lowest element to stable ion potential of $\pu{-3.0401 V},$ which is why it is common in batteries. Reference Lide, David R., ed. CRC Handbook of Chemistry and Physics, 87th ed. Boca Raton, FL: CRC Press. 2006. ISBN 0-8493-0487-3.
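The arithmetic is easy to verify; with the reduction potentials quoted from the CRC Handbook, a lithium–fluorine cell actually edges out strontium–fluorine once the stable Sr²⁺/Sr couple is used (a quick check, values as in the answer):

```python
E_F2 = 2.87        # F2/F-   reduction potential, V
E_Sr = -2.899      # Sr2+/Sr reduction potential, V (stable dication couple)
E_Li = -3.0401     # Li+/Li  reduction potential, V

cell_SrF = E_F2 - E_Sr   # cathode minus anode
cell_LiF = E_F2 - E_Li

assert abs(cell_SrF - 5.769) < 1e-9    # the 5.769 V quoted above
assert abs(cell_LiF - 5.9101) < 1e-9   # Li-F beats Sr-F on paper
assert cell_LiF > cell_SrF
```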
{ "domain": "chemistry.stackexchange", "id": 16555, "tags": "inorganic-chemistry, physical-chemistry, electrochemistry, reduction-potential" }
PHP comment system
Question: Basics I created a simple comment system. My goal was it to create a system that can easily be used on everyone's server without having to install a load of programs. I also tried to create it as privacy-friendly as possible (no email-address, no cookies). I also need to solve this problem without databases. Functionality Basic form to submit new comments Flag-functionality (with simple email send to the owner of the website) Answer functionality with indented answers Code simpleComments.php This script provides the main functionality: Spam-protection (with suggestions from here and here), sending, answering and flagging comments. I think that especially the function save() looks is a rather hacky solution. If you know a better alternative (without databases), I would be happy to hear it. //The password for the AES-Encryption (has to be length=16) $encryptionPassword = "****************"; //============================================================================================ //============================================================================================ // == // FROM HERE ON NO ADJUSTMENT NECESSARY == // == //============================================================================================ //============================================================================================ /** * Creates image * * This function creates a black image with the random exercise created by randText() on it. * Additionally the function adds some random lines to make it more difficult for bots to read * the text via OCR. 
The result (for example) looks like this: https://imgur.com/a/6imIE73 * * @author Philipp Wilhelm * * @since 1.0 * * @param string $rand Random exercise created by randText() * @param int $width Width of the image (default = 200) * @param int $height Height of the image (default = 50) * @param int $textColorRed R-RGB value for the textcolor (0-255) (default = 255) * @param int $textColorGreen G-RGB value for the textcolor (0-255) (default = 255) * @param int $textColorBlue B-RGB value for the textcolor (0-255) (default = 255) * @param int $linesColorRed R-RGB value for the random lines (0-255) (default = 192) * @param int $linesColorGreen G-RGB value for the random lines (0-255) (default = 192) * @param int $linesColorBlue B-RGB value for the random lines (0-255) (default = 192) * @param int $fontSize font size of the text on the image (1-5) (default = 5) * @param int $upperLeftCornerX x-coordinate of upper-left corner of the first char (default = 18) * @param int $upperLeftCornerY y-coordinate of the upper-left corner of the first char (default = 18) * @param int $angle angle the text will be rotated by (default = 10) * * @return string created image surrounded by <img> */ function randExer($rand, $width = 200, $height = 50, $textColorRed = 255, $textColorGreen = 255, $textColorBlue = 255, $linesColorRed = 192, $linesColorGreen = 192, $linesColorBlue = 192, $fontSize = 5, $upperLeftCornerX = 18, $upperLeftCornerY = 18, $angle = 10) { global $encryptionPassword; $random = openssl_decrypt($rand,"AES-128-ECB", $encryptionPassword); $random = substr($random, 0, -40); //Creates a black picture $img = imagecreatetruecolor($width, $height); //uses RGB-values to create a useable color $textColor = imagecolorallocate($img, $textColorRed, $textColorGreen, $textColorBlue); $linesColor = imagecolorallocate($img, $linesColorRed, $linesColorGreen, $linesColorBlue); //Adds text imagestring($img, $fontSize, $upperLeftCornerX, $upperLeftCornerY, $random . 
" = ?", $textColor); //Adds random lines to the images for($i = 0; $i < 5; $i++) { imagesetthickness($img, rand(1, 3)); $x1 = rand(0, $width / 2); $y1 = rand(0, $height / 2); $x2 = $x1 + rand(0, $width / 2); $y2 = $y1 + rand(0, $height / 2); imageline($img, $x1, $x2, $x2, $y2, $linesColor); } $rotate = imagerotate($img, $angle, 0); //Attribution: https://stackoverflow.com/a/22266437/13634030 ob_start(); imagejpeg($rotate); $contents = ob_get_contents(); ob_end_clean(); $imageData = base64_encode($contents); $src = "data:" . mime_content_type($contents) . ";base64," . $imageData; return "<img alt='' src='" . $src . "'/>"; }; /** * Returns time stamp * * This function returns the current time stamp, encrypted with AES, by using the standard function time(). * * @author Philipp Wilhelm * * @since 1.0 * * @return int time stamp */ function getTime() { global $encryptionPassword; return openssl_encrypt(time() . bin2hex(random_bytes(20)),"AES-128-ECB", $encryptionPassword); } /** * Creates random exercise * * This function creates a random simple math-problem, by choosing two random numbers between "zero" and "ten". * The result looks like this: "three + seven" * * @author Philipp Wilhelm * * @since 1.0 * * @return string random exercise */ function randText() { global $encryptionPassword; //Creating random (simple) math problem $arr = array("zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten"); $item1 = $arr[array_rand($arr)]; $item2 = $arr[array_rand($arr)]; $random = $item1 . " + " . $item2; $encrypted = openssl_encrypt($random . 
        bin2hex(random_bytes(20)), "AES-128-ECB", $encryptionPassword);
    return $encrypted;
}

/**
 * flags comment
 *
 * This function sends an email to the specified adress containing the id of the flagged comment
 *
 * @author Philipp Wilhelm
 *
 * @since 1.0
 *
 * @param string $to Email-adress the mail will be send to
 * @param string $url URL of the site the comment was flagged on
 */
function flag($to, $url) {
    //Which comment was flagged?
    $id = $_POST["comment"];
    //At what side was the comment flagged?
    $referer = $_SERVER["HTTP_REFERER"];
    $subject = "FLAG";
    $body = $id . " was flagged at " . $referer . ".";
    //Send the mail
    mail($to, $subject, $body);
    //Redirect to what page after flag?
    //(In this case to the same page)
    header("Location:" . $url);
    exit();
}

/**
 * redirects to the same page, but with the added parameter to specify to which
 * comment will be answered and jumps right to the comment-form
 *
 * @author Philipp Wilhelm
 *
 * @since 1.0
 *
 * @param string $url the url of the current page
 * @param string $buttonName URL of the site the comment was flagged on
 * @param string $urlName the "id-name"
 */
function answer($url, $buttonName, $urlName) {
    header("Location:" . $url . "?" . $urlName . "=" . $_POST["comment"] . "#" . $buttonName);
    exit();
}

/**
 * error message
 *
 * Redirects to the specified url to tell the user that something went wrong
 * e.g. entered wrong solution to math-exercise
 *
 * @author Philipp Wilhelm
 *
 * @since 1.0
 *
 * @param string $urlError The specified url
 */
function error($urlError) {
    header("Location:" . $urlError);
    die();
}

/**
 * Redirects to specified url when user enters words that are on the "blacklist"
 *
 * @author Philipp Wilhelm
 *
 * @since 1.0
 *
 * @param string $urlBadWords The specified url to which will be redirected
 */
function badWords($urlBadWords) {
    header("Location:" .
        $urlBadWords);
    die();
}

/**
 * Redirects to same url after comment is successfully submitted - comment will be visible
 * immediately
 *
 * @author Philipp Wilhelm
 *
 * @since 1.0
 *
 * @param string $url URL of the site
 */
function success($url) {
    header("Location:" . $url);
    die();
}

/**
 * checks if user enters any words that are on the "blacklist"
 *
 * @author Philipp Wilhelm
 *
 * @since 1.0
 *
 * @param string $text The user-entered text
 * @param string $blackList filename of the "blacklist"
 *
 * @return boolean true if user entered a word that is on the "blacklist"
 */
function isForbidden($text, $blackList) {
    //gets content of the blacklist-file
    $content = file_get_contents($blackList);
    $text = strtolower($text);
    //Creates an array with all the words from the blacklist
    $explode = explode(",", $content);
    foreach($explode as &$value) {
        //Pattern checks for whole words only ('hell' in 'hello' will not count)
        $pattern = sprintf("/\b(%s)\b/", $value);
        if(preg_match($pattern, $text) == 1) {
            return true;
        }
    }
    return false;
}

/**
 * saves a new comment or an answer to a comment
 *
 * @author Philipp Wilhelm
 *
 * @since 1.0
 *
 * @param string $url Email-adress the mail will be send to
 * @param string $urlError URL to the "error"-page
 * @param string $urlBadWords URL to redirect to, when user uses words on the "blacklist"
 * @param string $blacklist filename of the blacklist
 * @param string $fileName filename of the file the comments are stored in
 * @param string $nameInputTagName name of the input-field for the "name"
 * @param string $messageInputTagName name of the input-field for the "message"
 * @param string $exerciseInputTagName name of the input-field the math-problem is stored in
 * @param string $solutionInputTagName name of the input-field the user enters the solution in
 * @param string $answerInputTagName in this field the id of the comment the user answers to is saved
 * (if answering to a question)
 * @param string $timeInputTagName name of the input-field the timestamp is
 * stored in
 */
function save($url, $urlError, $urlBadWords, $blacklist, $fileName, $nameInputTagName, $messageInputTagName, $exerciseInputTagName, $solutionInputTagName, $answerInputTagName, $timeInputTagName) {
    global $encryptionPassword;
    $solution = filter_input(INPUT_POST, $solutionInputTagName, FILTER_VALIDATE_INT);
    $exerciseText = filter_input(INPUT_POST, $exerciseInputTagName);
    if ($solution === false || $exerciseText === false) {
        error($urlError);
    }
    $time = openssl_decrypt($_POST[$timeInputTagName], "AES-128-ECB", $encryptionPassword);
    if(!$time) {
        error($urlError);
    }
    $time = substr($time, 0, -40);
    $t = intval($time);
    if(time() - $t > 300) {
        error($urlError);
    }
    //Get simple math-problem (e.g. four + six)
    $str = openssl_decrypt($_POST[$exerciseInputTagName], "AES-128-ECB", $encryptionPassword);
    $str = substr($str, 0, -40);
    if (!$str) {
        error($urlError);
    }
    $arr = array("zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten");
    //gets array with written numbers
    $words = array_map("trim", explode("+", $str));
    //gets the numbers as ints
    $numbers = array_intersect($arr, $words);
    if (count($numbers) != 2) {
        error($urlError);
    }
    $sum = array_sum(array_keys($numbers));
    $urlPicture = "identicon.php/?size=24&hash=" . md5($_POST[$nameInputTagName]);
    //Did user enter right solution?
    if ($solution == $sum) {
        $name = $_POST[$nameInputTagName];
        $comment = htmlspecialchars($_POST[$messageInputTagName]);
        $content = file_get_contents($fileName);
        if(strcmp($content, "<p>No comments yet!</p>") == 0 || strcmp($content, "<p>No comments yet!</p>\n") == 0) {
            $content = "<p>Identicons created with <a href='https://github.com/timovn/identicon'>identicon.php</a> (licensed under <a href='http://www.gnu.org/licenses/gpl-3.0.en.html'>GPL-3.0</a>).</p>";
        }
        $id = bin2hex(random_bytes(20));
        $answerID = $_POST[$answerInputTagName];
        //Checks if user used any words from the blacklist
        if(isForbidden($comment, $blacklist)) {
            badWords($urlBadWords);
        }
        //Case the user writes a new comment (not an answer)
        if(strlen($answerID) < 40) {
            file_put_contents($fileName,
                //Needed styles
                "<style>" .
                ".commentBox {" .
                "display: block;" .
                "background: LightGray;" .
                "width: 90%;" .
                "border-radius: 10px;" .
                "padding: 10px;" .
                "margin-bottom: 5px;" .
                "} " .
                "input[name='flag'], input[name='answer'] {" .
                "border: none;" .
                "padding: 0;" .
                "margin: 0;" .
                "margin-top: 5px;" .
                "padding: 2px;" .
                "background: transparent;" .
                "}" .
                "</style>" .
                //get random avatar
                "<img class='icon' style='vertical-align:middle;' src='" . $urlPicture . "'/>" .
                //Displaying user name
                "<span><b> " . $name . "</b></span> says:<br>" .
                //Current UTC-time and -date
                "<span style='font-size: small'>" . gmdate("d-m-Y H:i") . " UTC</span><br>" .
                //The main comment
                "<div class='commentBox'>" . $comment . "<br>" .
                "</div>" .
                "<div style='width: 90%; font-size: small; float: left'>" .
                //Flag-button
                "<form style='margin: 0; padding: 0; float: left;' method='POST' action='simpleComments.php'>" .
                "<input style='display: none;' name='comment' type='text' value='" . $id . "'/>" .
                "<input style='color: red;' type='submit' name='flag' value='Flag'/>" .
                "</form>" .
                //Answer-button
                "<form id='answer' style='margin-left: 0; padding: 0; float: left;' method='POST' action='simpleComments.php'>" .
                "<input style='display: none;' name='comment' type='text' value='" . $id . "'/>" .
                "<input style='color: green;' type='submit' name='answer' value='Answer'/>" .
                "</form>" .
                "<!-- " . $id . " -->" .
                "</div>" .
                "<br><br>" . $content);
            success($url);
        }
        //Case that user writes an answer
        else {
            if(strpos($content, $answerID) !== false) {
                $explode = explode("<!-- " . $answerID . " -->", $content);
                file_put_contents($fileName, $explode[0] .
                    "</div>" .
                    "<br><br>" .
                    //Needed styles
                    "<style>" .
                    ".answerBox {" .
                    "display: block;" .
                    "background: LightGray;" .
                    "width: 90%;" .
                    "border-radius: 10px;" .
                    "padding: 10px;" .
                    "margin-bottom: 5px;" .
                    "} " .
                    "input[name='flag'] {" .
                    "border: none;" .
                    "padding: 0;" .
                    "margin: 0;" .
                    "margin-top: 5px;" .
                    "padding: 2px;" .
                    "background: transparent;" .
                    "}" .
                    "</style>" .
                    "<div style='margin-left: 50px'>" .
                    //get random avatar
                    "<img class='icon' style='vertical-align:middle;' src='" . $urlPicture . "'/>" .
                    //Displaying user name
                    "<span><b> " . $name . "</b></span> says:<br>" .
                    //Current UTC-time and -date
                    "<span style='font-size: small'>" . gmdate("d-m-Y H:i") . " UTC</span><br>" .
                    //The main comment
                    "<div class='answerBox'>" . $comment . "<br>" .
                    "</div>" .
                    //Flag-button
                    "<div style='width: 90%; font-size: small; float: left'>" .
                    "<form style='margin: 0; padding: 0; float: left;' method='POST' action='simpleComments.php'>" .
                    "<input style='display: none;' name='comment' type='text' value='" . $id . "'/>" .
                    "<input style='color: red;' type='submit' name='flag' value='Flag'/>" .
                    "</form><br><br>" .
                    "</div>" .
                    "<!-- " . $answerID . " -->" .
                    $explode[1]);
                success($url);
            }
        }
    }
    error($urlError);
}

//============================================================================================
//============================================================================================
//                                                                                          ==
//                        FROM HERE ON ADJUSTMENT ARE NECESSARY                             ==
//                                                                                          ==
//============================================================================================
//============================================================================================

/**
 * start point of the script
 *
 * @author Philipp Wilhelm
 *
 * @since 1.0
 */
function start() {
    //To what email-adress should the flag-notification be send?
    $to = "example@example.com";
    //What's the url you are using this system for? (exact link to e.g. the blog-post)
    $url = "https://example.com/post001.html";
    //Which page should be loaded when something goes wrong?
    $urlError = "https://example.com/messageError.html";
    //What page should be loaded when user submits words from your "blacklist"?
    $urlBadWords = "https://example.com/badWords.html";
    //In which file are the comments saved?
    $fileName = "testComments.php";
    //What's the filename of your "blacklist"?
    $blackList = "blacklist.txt";
    //Replace with the name-attribute of the respective input-field
    //No action needed here, if you didn't update form.php
    $nameInputTagName = "myName";
    $messageInputTagName = "myMessage";
    $exerciseInputTagName = "exerciseText";
    $solutionInputTagName = "solution";
    $answerInputTagName = "answerID";
    $timeInputTagName = "time";
    $buttonName = "postComment";
    $urlName = "id";

    if (isset($_POST["flag"])) {
        flag($to, $url);
    }
    if (isset($_POST["answer"])) {
        answer($url, $buttonName, $urlName);
    }
    if (isset($_POST[$buttonName])) {
        save($url, $urlError, $urlBadWords, $blackList, $fileName, $nameInputTagName, $messageInputTagName, $exerciseInputTagName, $solutionInputTagName, $answerInputTagName, $timeInputTagName);
    }
}

start();
?>

The code was checked with phpcodechecker.com and it didn't find any problems.
The other files are not really worth reviewing, so I will leave it here.

Links

For those who are nevertheless interested in the other files and a how-to, please see the repository for this project. There also is a live-demo for those of you who want to test it.

Question

Any suggestions are welcome. As mentioned before, I would be especially interested in a more elegant solution for the save()-function.

Answer:

Initial Feedback

I like the usage of docblocks above the functions. The save() function makes good use of returning early to limit indentation levels, except for the last check - when $solution does not match $sum then it can call error() right away. Overall that function is quite lengthy - it violates the Single Responsibility Principle. The functionality to write to the file could be moved to separate functions for each case (comment vs answer). The stylesheets can be moved out to a CSS file(s). Like I mentioned in this answer, CSRF tokens could replace the need for the image creation, encoding and decoding.

Suggestions

Global variables

As others have suggested, global variables have more negative aspects than positives. You could pass the encryption password to each function that needs it, but that would require updating the signature of each function that needs it. Another option is to create a named constant using define().

define('ENCRYPTION_PASSWORD', 'xyz');

This could be done in a separate file that is included via include() (or include_once()) or require() (or require_once()), which could be kept out of version control (e.g. a .env file).

Constants can also be created using the const keyword - outside of a class as of PHP 5.3.0.

const ENCRYPTION_PASSWORD = 'xyz';

As was already suggested, using a class with a namespace is a great idea. A class would allow the use of a class constant that would be namespaced to the class and have a specific visibility as of PHP 7.1.
Hopefully your code is running on PHP 7.2 or later, since those versions are officially supported.

Iteration by reference

The function isForbidden iterates over the contents of the file pointed to in $blacklist, assigning the value by reference:

foreach($explode as &$value) {

This seems unnecessary because $value is not modified within the loop. It might be best to avoid such a practice unless you are certain the array elements need to be modified.

Strict equality

You may have heard this already: it is a good habit to use strict comparison operators - i.e. === and !== when possible - e.g. for this comparison within save():

if (count($numbers) != 2) {

count() returns an int and 2 is an int, so !== can be used as there is no need for type conversion.

Hidden inputs

The HTML generated for the forms contains:

<input style='display: none;'

This could be simplified slightly using the hidden input type:

<input type="hidden"

While any input could be displayed by the user by modifying the page via browser console or other means, the hidden input was created for the purpose of hiding form values.
{ "domain": "codereview.stackexchange", "id": 39602, "tags": "beginner, php, html, iteration" }
How to interpret Hubble velocity ODE?
Question: Simple question. Hubble's law is $v = \dfrac{dr}{dt} = H_0r$. The claim is that, if all galaxies are moving apart from one another, then at earlier times they were closer, and at some finite time in the past, they converge (infinite density, big bang). But if we integrate Hubble's law, we find $r(t) \propto e^{H_0t}$. But here, $r\rightarrow 0$ only in the event that $t\rightarrow -\infty$. If we set $t = t_0 = H_0^{-1}$, then we just end up with $r(t_0) \propto e$, which does not seem consistent with standard Big Bang theory. This is actually an old argument used in Steady State theories, but it was presented and then never resolved in my cosmology text (Ryden). How do we reconcile this problem?

Answer: @caverac is right. Note that his a's are the scale factor of the cosmological solution, the Friedmann-Lemaître-Robertson-Walker (FLRW) solution. a is the quantity that depends on time, i.e., a = a(t), and it is this quantity that grows as the universe expands. D = a(t)r is the proper distance (say between two far enough galaxies), and is what is measured cosmologically to be expanding. The Hubble velocity is its derivative with time. r is just a radial coordinate, but it has a metric tensor factor that is a(t). The linearity of v (recession velocity) with D (proper distance) is good as long as H doesn't change much. That's true for us out to about a redshift of 1/2-1, and then it curves for higher redshifts. Hubble only saw the linear region. The wiki article referenced shows the graphic for v vs D for different cases. See Wikipedia at https://en.m.wikipedia.org/wiki/Hubble%27s_law for the Hubble law and the equations from the FLRW solution. And the only time you really have the exponential solution you found is when the universe is dark energy dominated. This is called a de Sitter universe. Dark energy has a constant H, as you can see from caverac's equation when the a's have grown enough that the first and second term are negligible.
The universe is heading towards being dark energy dominated, as you can see in that equation as a increases. In some billions of years, the expansion of the universe will be mainly exponential, with the Hubble time as the e-folding time. Meanwhile, the equations from caverac are the ones to use, with the (normal, not dark) radiation at this time pretty much irrelevant. At different times in the universe's history different factors were the main contributors: early on mainly radiation, then mainly matter, and now a combination of about 25-30% matter (observed and dark) and the rest dark energy.
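A compact way to see the resolution (standard FLRW results, added for completeness rather than taken from the answer above): $H$ is constant across space at any given time, but not constant in time. In a matter-dominated universe,

$$a(t) \propto t^{2/3}, \qquad H(t) = \frac{\dot a}{a} = \frac{2}{3t},$$

so integrating Hubble's law with the time-dependent $H$ gives

$$\frac{dr}{dt} = H(t)\,r \quad\Longrightarrow\quad r(t) \propto t^{2/3} \rightarrow 0 \ \text{ as } \ t \rightarrow 0,$$

i.e. separations do vanish at a finite time in the past. The exponential solution $r \propto e^{H_0 t}$ follows only if $H$ is held constant in time, which is the pure dark-energy (de Sitter) case.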
{ "domain": "physics.stackexchange", "id": 40188, "tags": "cosmology, spacetime, universe, space-expansion" }
If gold is a worse electrical conductor than silver and copper, why are gold plated contacts considered "better" by the market?
Question: I've always wondered if this is just a marketing thing, but all across the electronic supply spectrum I see "gold plated" listed as the best conductors for various contacts and connectors. However, silver is considered the most conductive element (6.2×10^7 S/m), followed by copper (5.9×10^7 S/m), with gold (4.5×10^7 S/m) being third. I get that any of those are better conductors than the most common (tin and steel), but I would think silver, being less expensive, more abundant, and a better conductor, would be preferred over gold. What am I missing here?

Answer: Gold is used for connectors not so much for its conductivity as for its chemical inertness. Even ostensibly corrosion-resistant materials like aluminium and stainless steel form a thin metal oxide layer on the surface; indeed, this is what gives them their corrosion resistance, and metal oxides have poor conductivity. So while gold plating is potentially useful for improving the electrical connection between any two contacts, it tends to be most useful in signal or data connections, where you are usually dealing with low voltages and maintaining a consistent resistance across connections is important. The downside of gold is that it is soft and so prone to wear, so you tend to see it most often in connectors which need to be physically small and aren't subject to a lot of wear, e.g. SIM cards, flash cards, graphics cards, phone batteries, etc. Where connections are changed frequently, e.g. in music, sound and lighting equipment, chrome plating or nickel plating is usually preferred as it is much harder, and you can mitigate any small increase in resistance by having a larger contact area.
{ "domain": "engineering.stackexchange", "id": 5459, "tags": "electrical-engineering" }
Could a dwarf galaxy host a star at the center instead of a SMBH?
Question: I don't think this would be possible, but I'm curious, and whatever query I googled, I only found posts about black holes indeed existing, what black holes are, that even the Milky Way has a black hole in its center, or that we can't know if dwarf galaxies have non-SMBHs at their center. So sorry if it is a stupid question, but I didn't want to go on assuming something without knowing: is it theoretically possible for a galaxy to have a star at its gravitational center of mass? Answer: A galaxy's center of gravity is not determined by the most massive object, but by all objects in the galaxy. Even supermassive black holes (SMBHs) do not dominate the gravitational field except very, very close to the center. By far, most of the stars in a galaxy couldn't care less about the SMBH. The region within which a BH dominates over the gravity of the stars (the "sphere of influence"$^\dagger$) is given by (e.g. Peebles 1972) $$ r = \frac{G M_\mathrm{BH}}{\sigma^2}, $$ where $G$ is the gravitational constant, $M_\mathrm{BH}$ is the mass of the black hole, and $\sigma$ is the velocity dispersion. In the Milky Way (MW), there's an SMBH (Sagittarius A*) of roughly $M_\mathrm{BH} \simeq 4\times10^6\,M_\odot$. In that region, the stellar velocity dispersion is roughly $50$–$100\,\mathrm{km}\,\mathrm{s}^{-1}$ (e.g. Genzel et al. 2010). Plugging in those values, you'll find that Sgr A* dominates the kinematics out to roughly 3 pc, or 10 lightyears, which is nothing compared to the MW's radius of $\sim10^5$ lightyears. If you take the most massive conceivable star (a hypothesized Pop III star of $M\sim10^3\,M_\odot$) in Willman 1, the smallest known dwarf galaxy (which has a stellar velocity dispersion of the order $5$–$10\,\mathrm{km}\,\mathrm{s}^{-1}$), you'll find that such a star will dominate the gravitational potential out to a distance of only $\lesssim0.1\,\mathrm{pc}$, again completely negligible compared to the galaxy's radius of $\sim25\,\mathrm{pc}$.
In other words, although it's possible for a dwarf galaxy to host a massive star, it will just be a star like all the others, and will in no way define the galaxy's gravitational center. $^\dagger$Not to be confused with the event horizon which is even smaller.
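As a quick numerical sanity check of the figures quoted above (my own back-of-the-envelope script; the dispersion of 75 km/s is just a representative value from the quoted 50-100 km/s range):

```python
# Sphere of influence r = G * M_BH / sigma^2 for Sgr A*
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30    # solar mass, kg
pc = 3.086e16       # parsec, m

M_bh = 4e6 * M_sun  # Sgr A* mass
sigma = 75e3        # velocity dispersion, m/s

r = G * M_bh / sigma**2
print(r / pc)       # roughly 3 pc, as quoted in the answer
```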
{ "domain": "astronomy.stackexchange", "id": 2528, "tags": "galaxy, galaxy-center" }
Array splitting in sub-arrays of consecutive elements
Question: I want to split an array in sub-arrays of consecutive elements. For example, for the array:

a = [1 2 3 6 7 9 10 15]

I want an output 1 2 3, 6 7, 9 10, 15. I think that the natural choice is to use a struct for this:

[v,x] = find(diff(a)>1) %find "jumps"
xx = [0 x length(a)]
for ii = 1:length(xx)-1
    cs{ii} = a(xx(ii)+1:xx(ii+1)); %output struct array
end

v =
   1   1   1

x =
   3   5   7

xx =
   0   3   5   7   8

The code works correctly but I was wondering if there are smarter ways to do this.

Answer: You should preallocate the cs cell array:

[v,x] = find(diff(a)>1); %find "jumps"
xx = [0 x length(a)];
cs = cell(1, length(xx)-1); % one cell per run of consecutive values
for ii = 1:length(xx)-1
    cs{ii} = a(xx(ii)+1:xx(ii+1));
end

Style comments:

Try to keep consistent formatting, either put spaces around all equal signs, or around none.
Terminate statements with a semicolon to prevent your function producing output to the command window.
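As an aside (my own sketch, not part of the original answer): if Python/NumPy is an option, the same split-at-the-jumps idea is a one-liner, shown here for comparison:

```python
import numpy as np

a = np.array([1, 2, 3, 6, 7, 9, 10, 15])
# np.diff(a) > 1 marks the "jumps"; split just after each jump position
groups = np.split(a, np.where(np.diff(a) > 1)[0] + 1)
print([g.tolist() for g in groups])  # [[1, 2, 3], [6, 7], [9, 10], [15]]
```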
{ "domain": "codereview.stackexchange", "id": 28694, "tags": "array, matlab" }
How to get the weights of a linear model by solving normal equation?
Question: In chapter 6.1 of the book Deep Learning, the author tries to learn the XOR function by using a linear model (on page 168).

Linear Model: $f(\mathbf{x};\mathbf{w},b)=\mathbf{x}^T\mathbf{w}+b$

MSE Loss: $J(\mathbf{w},b)= \frac{1}{4} \sum_{\mathbf{x}\in\mathbb{X}} \left(f^*(\mathbf{x})-f(\mathbf{x};\mathbf{w},b)\right)^2$, where $f^*(\mathbf{x})$ is the XOR function.

Normal equation: According to the same book on page 107, the weights can be obtained by solving the gradient of the loss function, which will result in a normal equation (5.12).

$\mathbf{w}=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$

My Attempts: Since it is an XOR function, we know that if the input is $\mathbf{X}=\begin{bmatrix} 0 & 0 \\ 0 & 1 \\ 1 & 0 \\ 1 & 1 \\ \end{bmatrix}$, then the corresponding output will be $\mathbf{y}=\begin{bmatrix} 0 \\ 1 \\ 1 \\ 0 \\ \end{bmatrix}$. So I just plug everything into the normal equation as shown above. However, the solution I get is $\mathbf{w}= \begin{bmatrix} \frac{1}{3} \\ \frac{1}{3} \\ \end{bmatrix}$. What am I doing wrong here? And also, how do I find the bias $b$?

Answer: Hey, I know this is a year old but I wanted to provide the answer in case you haven't got it. So page 124 says how we derive the normal equations; however, the following few paragraphs explain how to deal with a bias term. You've got our dataset right, however we need to add an extra 1 to take the place of our bias in the weight vector. I'll append the ones at the end.
$\mathbf{X} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \\ \end{bmatrix}$

You've got our $y$ vector correct, though; I'll write it down below for completeness' sake:

$\mathbf{y} = \begin{bmatrix} 0 \\ 1 \\ 1 \\ 0 \\ \end{bmatrix}$

Now we compute the solution to the normal equations, $w = (X^{T}X)^{-1}X^{T}y$, below:

$\mathbf{w} = \begin{bmatrix} 0 \\ 0 \\ \frac{1}{2} \\ \end{bmatrix}$

The element in the 3rd position of our vector is the bias term, hence $b = \frac{1}{2}$, and the first two elements of $\mathbf{w}$ form our weight vector, both equal to zero. Hopefully you find this useful!
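A quick numerical check of the result above (my own sketch; np.linalg.lstsq solves the same least-squares problem as the normal equations, and is numerically preferable to forming $(X^TX)^{-1}$ explicitly):

```python
import numpy as np

# XOR inputs with a column of ones appended for the bias term
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Least-squares solution of X w = y, equivalent here to w = (X^T X)^{-1} X^T y
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)  # [0.  0.  0.5] -> weights (0, 0) and bias b = 1/2
```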
{ "domain": "datascience.stackexchange", "id": 8069, "tags": "machine-learning, linear-regression, gradient-descent" }
Angular momentum partial components of a $k$-dependent pairing potential
Question: I am going over this review on pairing in unconventional superconductors: http://arxiv.org/abs/1305.4609v3 which on page 21 states that for a "regular" function $U(\theta)$, partial components $U_l$ of angular momentum $l$ scale as $\exp(-l)$ for large $l$. I tried to prove this statement but am not satisfied with my answer, and would greatly appreciate some insight. Below is what I did so far. I assume here that "regular" means infinitely differentiable. The partial component $U_l$ is defined as such: $ U_l = \int_{0}^{\pi} U(\theta) P_l(\cos \theta) \sin \theta d\theta $ such that $ U(\theta) = \sum_{l=0}^{\infty} U_l P_l(\cos \theta), $ where $P_l(\cos \theta)$ is the $l$-th order Legendre polynomial. Let us make use of Rodrigues' formula: $P_l(\cos \theta) = \frac{1}{2^l l!} \frac{d^l}{dx^l} [(x^2-1)^l] |_{x=\cos \theta}$. The highest-order term of this polynomial is $\frac{1}{2^l l!} \frac{(2l)!}{l!} \cos^l \theta$. So in the development of $U(\theta)$, the contribution to the term of order $\cos^l \theta$ coming from the $l$-th order Legendre polynomial is $ U_l \frac{1}{2^l l!} \frac{(2l)!}{l!} \cos^l \theta$. There are also other contributions to this order coming from the higher-order Legendre polynomials, but they will be proportional to some $U_k$ with $k>l$. As we want to prove that $U_l$ is exponentially small as $l$ gets large, we can neglect these contributions for now. Let us now try to find an equivalent of $ U_l \frac{1}{2^l l!} \frac{(2l)!}{l!}$ as $l$ goes to infinity. We can make use of Stirling's formula: $ l! \sim (\frac{l}{e})^l \sqrt{2 \pi l}$ which gives us $ U_l \frac{1}{2^l l!} \frac{(2l)!}{l!} \sim U_l 2^l \frac{1}{\sqrt{l \pi}}$. If we want $U(\theta)$ to be a regular function, we need its high-order components in $\cos^l \theta$ to get smaller and smaller as $l$ gets large, as $\cos^l \theta$ behaves in a singular manner when $l$ goes to infinity. Thus, we need to have $U_l \sim a^{-l}$, with $a>2$, as $l$ goes to infinity.
Why I am not happy with this answer:

I neglected higher-order components in the contribution to $\cos^l \theta$.
Maybe the $U_l$ could behave in a complicated oscillating way to make the $l$-th order term converge, without being exponentially small.

Does anyone have an alternate way of proving the fact that $U_l$ has to be exponentially small as $l$ gets large, or a way to complete the above proof? Thanks for your help.

Answer: Just focusing on the $\cos^l\theta$ term is probably not going to get you anywhere, since $\cos^l\theta$, being a completely analytical function, is by no means singular (and following your argument you get exponential growth of $U_l$, not decay). It is a fairly well-known fact that for analytical functions (this is what "regular" means; roughly speaking, the function is equal to its Taylor expansion. It is a stronger condition than being infinitely differentiable. And here, we require analyticity in a finite region on the complex plane), the expansion coefficients in Legendre polynomials (also in Chebyshev, etc.) decay exponentially. The proof is not that trivial; as you can see from the requirement of complex analyticity, it uses a contour integral. You can find the proof in many textbooks, for example:

Philip J. Davis, Interpolation and Approximation, Dover, 1975

The proof for Legendre polynomials is at page 313.
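The decay is easy to observe numerically. The sketch below (mine, not part of the answer) computes the Legendre coefficients of the analytic test function $f(x)=1/(2-x)$, i.e. $U(\theta)=1/(2-\cos\theta)$, by Gauss-Legendre quadrature; the coefficients fall off geometrically, with ratio tending to $1/(2+\sqrt{3})\approx 0.27$, matching the exponential-decay claim:

```python
import numpy as np
from numpy.polynomial import legendre

# Gauss-Legendre nodes/weights for integrals over x = cos(theta) in [-1, 1]
x, w = legendre.leggauss(200)
f = 1.0 / (2.0 - x)  # analytic in a neighbourhood of [-1, 1]

coeffs = []
for l in range(13):
    c = np.zeros(l + 1)
    c[l] = 1.0
    Pl = legendre.legval(x, c)  # P_l evaluated at the quadrature nodes
    # U_l = (2l+1)/2 * integral_{-1}^{1} f(x) P_l(x) dx
    coeffs.append((2 * l + 1) / 2.0 * np.sum(w * f * Pl))

print(coeffs)  # decays roughly like (2 + sqrt(3))^(-l)
```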
{ "domain": "physics.stackexchange", "id": 26995, "tags": "quantum-mechanics, condensed-matter, mathematical-physics" }
How is GetLinkById() supposed to work in Gazebo 1.3?
Question: Hi! After I upgraded to Gazebo 1.3 (from 1.2.5) I noticed the model->GetLink() function has changed. It no longer overloads with the (string) link_name or with the (unsigned int) id arguments. Now there are two functions, GetLink(string link_name) and GetLinkById(unsigned int _id). However, the latter, GetLinkById(), doesn't work. Is it a bug? Or am I not using it right? I used it for example with values 0 or 1 (0 being until now the first link of the model).

Code example:

this->model->GetLinkById(0)->GetName().c_str()

The error I get:

gzserver: /usr/include/boost/smart_ptr/shared_ptr.hpp:418: T* boost::shared_ptr<T>::operator->() const [with T = gazebo::physics::Link]: Assertion `px != 0' failed.

Thanks, Andrei

Originally posted by AndreiHaidu on Gazebo Answers with karma: 2108 on 2013-02-05
Post score: 0

Answer: The ID of a link is a unique integer that is assigned when the Link is instantiated. The ID is primarily used for internal purposes.

Originally posted by nkoenig with karma: 7676 on 2013-02-05
This answer was ACCEPTED on the original site
Post score: 0

Original comments

Comment by nkoenig on 2013-02-05: Here is a pull request to "fix" this: https://bitbucket.org/osrf/gazebo/pull-request/271/remove-doxygen-processing-of-some-internal/diff
{ "domain": "robotics.stackexchange", "id": 3007, "tags": "gazebo, gazebo-1.3" }
Showing frequency and amplitude after doing an FFT on a signal
Question: I have executed an FFT on a .wav file of a person saying "ahhh" which shows the frequency and amplitude of the audio file. When I run it on a .wav file it shows the frequency 0 Hz as having the highest amplitude. I'm not sure where the problem is within my code and how I can fix it. How does 0 Hz have the max amplitude? The reason I'm trying to fix this is that I want to find the frequency with the highest amplitude in the wave file, then run some calculation against it, but running calculations against 0 Hz doesn't make any sense to me, especially when I have to do multiplication with it. When I look at the array_sort variable it says the frequency is 0 Hz and 0.117341 as its amplitude.

Here's the wave file and .m file I'm using:

http://dl.dropbox.com/u/6576402/questions/fft/11262012_44100ahh.wav
http://dl.dropbox.com/u/6576402/questions/fft/test_rtfftphase_question.m

clear all, clc, clf, tic
dirpathtmp=strcat('/tmp/');
[vp_sig_orig, fs_rate, nbitsraw] = wavread(strcat(dirpathtmp,'/nov26/11262012_44100ahh.wav')); %must be mono
fs=fs_rate;
t_rebuilt=linspace(0,2*pi,fs); %creates same size time for new signal as orginal signal good for error checking
vp_sig_len=length(vp_sig_orig); %get sample rate from vp fs_rate needs to be an even number?
% Use next highest power of 2 greater than or equal to length(x) to calculate FFT.
nfft= 2^(nextpow2(length(vp_sig_orig)));
% Take fft, padding with zeros so that length(fftx) is equal to nfft
fftx = fft(vp_sig_orig,nfft);
sigfft= fft(vp_sig_orig);
sigifft=ifft(sigfft);
sigphase = unwrap(angle(sigfft')); %get phase of orginal signal
% Calculate the number of unique points
NumUniquePts = ceil((nfft+1)/2);
% FFT is symmetric, throw away second half
fftx = fftx(1:NumUniquePts);
% Take the magnitude of fft of x and scale the fft so that it is not a function of the length of x
mx = abs(fftx)/length(vp_sig_orig); %replaced for testing from stackexchange
if rem(nfft, 2) % odd nfft excludes Nyquist point
    mx(2:end) = mx(2:end)*2;
else
    mx(2:end-1) = mx(2:end-1)*2;
end
amp=mx;
ampinv=abs(amp-max(amp));
% This is an evenly spaced frequency vector with NumUniquePts points.
freq_vect = (0:NumUniquePts-1)*vp_sig_len/nfft;
freq=freq_vect';
%get phase of new signal
phase = unwrap(angle(fftx)); %get phase of orginal signal
array=[freq,amp];
array_sort=sortrows(array,-2); %sort by largest amplitude first

I'm using Octave 3.2.4 on Linux Ubuntu 64bit.

Answer: A strong amplitude response at 0 Hz simply means that you have a very strong DC offset. In other words, it just means that the mean of your signal is not 0. If this is the only problem you have, then all you really need to do is remove the mean of your signal. In other words:

vp_sig_orig = vp_sig_orig - mean(vp_sig_orig);
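To see the diagnosis concretely (a NumPy sketch of mine, not the questioner's Octave script): a tone riding on a DC offset puts the largest FFT magnitude in bin 0, and subtracting the mean moves the peak to the true frequency. With a one-second record, the bin index equals the frequency in Hz.

```python
import numpy as np

fs = 1000                                 # sample rate, Hz
t = np.arange(fs) / fs                    # one second of samples
sig = 1.0 + np.sin(2 * np.pi * 120 * t)   # 120 Hz tone plus a DC offset

mag = np.abs(np.fft.rfft(sig))
print(np.argmax(mag))                     # 0 -> the DC bin dominates

sig_nodc = sig - np.mean(sig)             # remove the DC offset
mag_nodc = np.abs(np.fft.rfft(sig_nodc))
print(np.argmax(mag_nodc))                # 120 -> the actual tone
```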
{ "domain": "dsp.stackexchange", "id": 576, "tags": "fft, matlab, signal-detection, signal-analysis" }
Why does this greedy algorithm fail to accurately determine whether a graph is a perfect matching?
Question: I came across this problem in Tim Roughgarden's course on Coursera: In this problem you are given as input a graph $T=(V,E)$ that is a tree (that is, $T$ is undirected, connected, and acyclic). A perfect matching of $T$ is a subset $F \subseteq E$ of edges such that every vertex $v \in V$ is the endpoint of exactly one edge of $F$. Equivalently, $F$ matches each vertex of $T$ with exactly one other vertex of $T$. For example, a path graph has a perfect matching if and only if it has an even number of vertices. Consider the following two algorithms that attempt to decide whether or not a given tree has a perfect matching. The degree of a vertex in a graph is the number of edges incident to it. (The two algorithms differ only in the choice of $v$ in line 5.)

Algorithm A:
While T has at least one vertex:
    If T has no edges:
        halt and output "T has no perfect matching."
    Else:
        Let v be a vertex of T with maximum degree.
        Choose an arbitrary edge e incident to v.
        Delete e and its two endpoints from T.
[end of while loop]
Halt and output "T has a perfect matching."

Algorithm B:
While T has at least one vertex:
    If T has no edges:
        halt and output "T has no perfect matching."
    Else:
        Let v be a vertex of T with minimum non-zero degree.
        Choose an arbitrary edge e incident to v.
        Delete e and its two endpoints from T.
[end of while loop]
Halt and output "T has a perfect matching."

Now, the answer key says: Algorithm $A$ can fail, for example, on a three-hop path. Correctness of algorithm $B$ can be proved by induction on the number of vertices in $T$. Note that the tree property is used to argue that there must be a vertex with degree $1$; if there is a perfect matching, it must include the edge incident to this vertex. However, I think I found a counter-example which shows that Algorithm B may not be correct. Am I missing something? Consider the following graph (say $T$): If we follow B: Step 1: Let $v=3$ (all vertices have the same degree) and $e=(2,3)$.
Remove $e$ along with vertices $3$ and $2$. Step 2: Let $v=1$ and $e=(0,1)$. Remove $e$ along with vertices $0$ and $1$. Step 3: No vertices are left. Hence we get the output: "T has a perfect matching." But clearly, our original graph was not a perfect matching as all the nodes were of degree $3$. Note: I assumed that ties are meant to be broken arbitrarily by the algorithm. Answer: "Consider the following two algorithms that attempt to decide whether or not a given tree has a perfect matching". Your graph is NOT a tree as it has a cycle $0, 1, 2, 0$. Furthermore, your graph does have a perfect matching. In fact, the edges $(2,3)$ and $(0,1)$ obtained by your step 1, 2 and 3 is a perfect matching. And hence, it is not true that "our original graph was not a perfect matching as all the nodes were of degree 3". Plenty of graphs whose nodes are all of degree 3 have a perfect matching.
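For concreteness, Algorithm B could be sketched in Python as follows (my own translation of the pseudocode above; vertices are assumed to be labelled 0..n-1, and ties in the minimum are broken arbitrarily):

```python
from collections import defaultdict

def has_perfect_matching(n, edges):
    """Algorithm B: repeatedly match a vertex of minimum non-zero degree."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(range(n))
    while alive:
        with_edges = [u for u in alive if adj[u]]
        if not with_edges:
            return False          # vertices remain but no edges: no matching
        v = min(with_edges, key=lambda u: len(adj[u]))  # min non-zero degree
        w = next(iter(adj[v]))    # an arbitrary edge incident to v
        for x in (v, w):          # delete the edge and both its endpoints
            for y in adj[x]:
                adj[y].discard(x)
            adj[x] = set()
            alive.discard(x)
    return True
```

On the inputs the question discusses, a 4-vertex path has a perfect matching, while a 3-vertex path and a 3-leaf star do not.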
{ "domain": "cs.stackexchange", "id": 12826, "tags": "algorithms, graphs, greedy-algorithms, matching" }
A good way to hash two numbers with known properties together
Question: I have two 64-bit numbers a and b which have a few properties: only 49 out of the 64 bits are used (15 bits are always 0), and the two numbers' binary representations are disjoint (a & b == 0). I am looking for a good way to hash the two numbers together into a deterministic 64-bit integer. Tabular hashing with randomly generated keys works wonderfully, but I was wondering if there was a faster non-iterative alternative. The target is to minimize the number of collisions between distinct pairs of a and b. Answer: I don't know of a good way to use your special properties. But I do know a very fast provably good hash. Choose three random 64-bit integers $x, y, z$ (which must be independent of your input $a, b$). Then the following function $h$: $$h(a, b) = xa + \lfloor ya / 2^{64}\rfloor + yb + \lfloor zb / 2^{64}\rfloor \mod 2^{64}$$ is a $2^{-63}$-ADU (almost delta universal) hash function, per Short-output universal hash functions and their use in fast and secure data authentication by Long Hoang Nguyen and Andrew William Roscoe. That is, if $(a, b) \neq (a', b')$ then for any $\delta$ (including $\delta = 0$, which implies collision resistance) $$\Pr[h(a, b) + \delta \equiv h(a', b') \mod 2^{64}] \leq 2^{-63}.$$ Note that computing $\lfloor pq / 2^{64}\rfloor$ only takes a single instruction on modern CPUs - it is the high bits of the 64x64-bit product. On Intel x86-64 this is always returned by the MUL instruction, which returns both parts; on ARM64 it is given by the MULHI instruction. So in total computing this hash takes 4 multiplication and 3 addition instructions.
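A direct transcription of this hash into Python might look like the following. The three keys here are arbitrary placeholder constants for illustration only; in practice x, y, z must be drawn uniformly at random, independently of the input.

```python
MASK64 = (1 << 64) - 1

def adu_hash(a, b, x, y, z):
    """h(a,b) = x*a + floor(y*a / 2^64) + y*b + floor(z*b / 2^64)  (mod 2^64)."""
    term = x * a + ((y * a) >> 64) + y * b + ((z * b) >> 64)
    return term & MASK64

# Example keys -- placeholders only; use a real 64-bit RNG in practice.
x, y, z = 0x9E3779B97F4A7C15, 0xC2B2AE3D27D4EB4F, 0x165667B19E3779F9
h = adu_hash(123456789, 987654321, x, y, z)
```

With tiny keys the arithmetic is easy to check by hand: for a = b = 1 and keys 2, 3, 5, the floor terms vanish and the hash is 2 + 3 = 5.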
{ "domain": "cs.stackexchange", "id": 21930, "tags": "hash, binary" }
Determine the density using a u shaped tube
Question: I did a practical to determine the density of water and oil. The practical succeeded, but some questions arose, so I hope you can help me solve them. I did this practical using a U-shaped tube, and the density was determined from the height of each liquid due to the pressure. When doing this practical, my textbook says we should add the high-density liquid first and then the lower-density liquid. Also, when we measure the heights of the liquids (due to pressure), if I want to change the heights to get various measurements (the measurements are taken to draw the chart), the textbook says we should add the lower-density liquid. Can you explain the reasons for these two situations? In the first situation, why add the high-density liquid first? In the second situation, why do we add the lower-density liquid to get various measurements? Why can't we add water to get various measurements? What is the problem? Thank you Answer: Speaking as a (retired) experimental scientist I would always do the experiment. So I would have tried adding the low density liquid first to see what happened. Even if the experiment fails done that way (which it will :-) you'll usually learn something in the process. Anyhow, the problem with adding the low density fluid first is that when you add the high density fluid it will fall through the low density fluid to the bottom of the U tube. So this won't give you the arrangement you require with oil on one side and water on the other. As for your second question, there's no reason why you can't add water to the water side as well as oil to the oil side. I suspect the instructions assume you have added all the water first, so the experiment consists just of adding the oil bit by bit and measuring the heights.
{ "domain": "physics.stackexchange", "id": 84029, "tags": "density" }
Electric Flux vs Magnetic Flux Units
Question: If electric flux is the number of electric field lines through a surface area and magnetic flux is the number of magnetic field lines through a surface area, why are the units for them different? Electric Flux: Magnetic Flux: Mathematically, I know that the units of the E field are different from those of the B field, so it makes sense that the units for the fluxes are different. I see the E field and B field, however, as two sides of the same coin, so I would think they carry the same units. Really just looking for others' thoughts on this matter. Answer: Magnetic flux is measured in $Wb$, but magnetic flux density (which is what's written in the image you posted) is measured in $Wb/m^2$, which is the tesla. To put it in clearer form, the electric flux and magnetic flux units can be written as $$Wb = \frac{kg \cdot m^2}{s^2 \cdot A} = V\cdot s = T\cdot m^2 \quad \text{(magnetic flux)}$$ $$\frac{kg \cdot m^3}{s^3 \cdot A} = V\cdot m \quad \text{(electric flux)}$$
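These identities can be checked mechanically by representing each unit as a tuple of exponents over the SI base units (kg, m, s, A) — a quick sanity check of my own, not part of the original answer:

```python
# Units as exponent vectors over the SI base units (kg, m, s, A);
# multiplying units means adding the exponent vectors.
def mul(u, v):
    return tuple(a + b for a, b in zip(u, v))

m  = (0, 1, 0, 0)
s  = (0, 0, 1, 0)

V  = (1, 2, -3, -1)   # volt  = kg*m^2 / (s^3 * A)
Wb = (1, 2, -2, -1)   # weber = kg*m^2 / (s^2 * A)
T  = (1, 0, -2, -1)   # tesla = Wb / m^2

# magnetic flux: Wb = V*s = T*m^2
assert Wb == mul(V, s) == mul(T, mul(m, m))
# electric flux: V*m = kg*m^3 / (s^3 * A)
assert mul(V, m) == (1, 3, -3, -1)
```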
{ "domain": "physics.stackexchange", "id": 76007, "tags": "gauss-law" }
Something-Treewidth Property
Question: Let $s$ be a graph parameter (e.g., diameter, domination number, etc.). A family $\mathcal{F}$ of graphs has the $s$-treewidth property if there is a function $f$ such that for any graph $G\in \mathcal{F}$, the treewidth of $G$ is at most $f(s)$. For instance, let $s = \mathit{diameter}$, and $\mathcal{F}$ be the family of planar graphs. Then it is known that any planar graph of diameter at most $s$ has treewidth at most $O(s)$. More generally, Eppstein showed that a family of graphs has the diameter-treewidth property if and only if it excludes some apex graph as a minor. Examples of such families are graphs of constant genus, etc. As another example, let $s = \mathit{domination{-}number}$. Fomin and Thilikos have proved an analogous result to Eppstein's by showing that a family of graphs has the domination-number-treewidth property if and only if $\mathcal{F}$ has local-treewidth. Note that this happens if and only if $\mathcal{F}$ has the diameter-treewidth property. Questions: 1. For which graph parameters $s$ is the $s$-treewidth property known to hold on planar graphs? 2. For which graph parameters $s$ is the $s$-treewidth property known to hold on graphs of bounded local-treewidth? 3. Are there any other families of graphs, not comparable to graphs of bounded local-treewidth, for which the $s$-treewidth property holds for some suitable parameter $s$? I have a feeling that these questions have some relation with the theory of bidimensionality. Within this theory, there are several important parameters. For instance, the sizes of feedback vertex set, vertex cover, minimum maximal matching, face cover, dominating set, edge dominating set, R-dominating set, connected dominating set, connected edge dominating set, connected R-dominating set, etc. Does any parameter $s$ encountered in bidimensionality theory have the $s$-treewidth property for some suitable family of graphs? Answer: For question $1$: any bidimensional parameter has this property on general graphs.
A parameter $s(G)$ is bidimensional if the value of $s(G) \geq s(H)$ for every minor $H$ of $G$, and if $s$ is "large" on grids. In applications to PTASes, subexponential algorithms and kernels on minor-free classes of graphs, "large" means that there exists a constant $c$ such that the value of $s$ on a $t$ times $t$ grid is at least $ct^2$. This is what you most likely will find if you do a Google search for "bidimensionality". However, for your question it is sufficient that $s$ grows to infinity on $t$ times $t$ grids as $t$ grows to infinity. This is because any graph with large enough treewidth will contain a large enough grid minor. So, to conclude, if $s$:
1. is closed under minors,
2. is arbitrarily large on $t$ times $t$ grids for large enough $t$,
then $s$ has the $s$-treewidth property. See the recent parameterized complexity book ( http://parameterized-algorithms.mimuw.edu.pl ) in the treewidth chapter for more info.
{ "domain": "cstheory.stackexchange", "id": 3794, "tags": "graph-theory, graph-algorithms, treewidth, graph-minor" }
Audio processing -- difference between block and sample-by-sample?
Question: If a system operates on a signal with a time-domain IIR or FIR, why would an acquisition system chunk the audio into powers of two? I can understand filling a buffer by a power of two for an FFT operation. Is there a difference? I am still confused by the difference between block processing and sample by sample processing. Some clarification would be greatly appreciated. Answer: Note that FIR filters are sometimes implemented using an FFT (overlap-add, overlap-save). In that case it makes sense to have power of 2 buffers lengths (depending on the FFT implementation). This is of course an example of block processing, where you have to wait for a whole block before the computation of the output signal can begin. The consequence is that you always have some latency. Sample by sample processing is possible with a time-domain implementation, where you get one output sample for each input sample. Note that a time-domain implementation is not necessarily sample-by-sample but could also be using block processing.
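As an illustration of the sample-by-sample case, a direct-form FIR filter can emit one output per input sample with no block latency (a minimal sketch, not tied to any particular system):

```python
class FIRFilter:
    """Direct-form FIR: one output sample per input sample, no block latency."""

    def __init__(self, coeffs):
        self.coeffs = list(coeffs)
        self.history = [0.0] * len(self.coeffs)  # most recent sample first

    def process_sample(self, x):
        # shift the new sample in; the oldest sample drops out
        self.history = [x] + self.history[:-1]
        return sum(c * h for c, h in zip(self.coeffs, self.history))

# A 2-tap moving average fed an impulse: the output is the impulse response.
fir = FIRFilter([0.5, 0.5])
out = [fir.process_sample(x) for x in [1.0, 0.0, 0.0]]
# out == [0.5, 0.5, 0.0]
```

An FFT-based overlap-add implementation of the same filter would instead collect a whole block of input before producing any output, which is where the latency mentioned above comes from.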
{ "domain": "dsp.stackexchange", "id": 2229, "tags": "audio" }
Does the curvature of spacetime occur in the fourth dimension in the form of a tesseract?
Question: I have heard contradictory ideas about spacetime and the fourth dimension. Some talk as though it is spatially tangent to all other dimensions and this is where the curvature of spacetime occurs. Is this true? Answer: Imagine you found a spherical ball with a surface area equal to $4 \pi r^2$, for some $r$, but the distance between the centre of the ball and its surface was not equal to $r$. In that case the 3-dimensional space in which the ball is sitting would be 'warped'; the rules of Euclidean geometry are not applying. Or suppose that you draw three lines between three points, each line being the shortest possible between that pair of points, but the angles of the triangle constructed this way did not add up to 180 degrees. Again, the rules of Euclidean geometry are not working. This is another way in which a region of space can be warped. The technical mathematical name for such 'warping' is 'curvature'. If a space has properties like these then we say the space is curved. The reason for the name is that one can get a lot of insight into this kind of curvature by looking at examples in fewer dimensions, and then there is a very nice analogy with the more ordinary use of the word. But when applied to three-dimensional space, and indeed to spacetime, the concept gets quite a lot more sophisticated. So, to answer your question, the presence of curvature does not necessarily require the presence of extra dimensions.
{ "domain": "physics.stackexchange", "id": 65592, "tags": "general-relativity, gravity, spacetime, curvature" }
A hard attempt to compare liquid and solid with Newtonian mechanics
Question: Suppose there is a wooden hemisphere on a horizontal plane, apex touching the plane. A small wooden cube is slowly placed on the base of the hemisphere anywhere other than the centre. As a result of extra torque provided by the weight of the cube, the hemisphere will lean on one side. Assume that there is enough friction to prevent cube from sliding. Now replace the wooden hemisphere with a bowl of similar shape filled with water (This time its edge is higher than before, as water should not run off. But the volume of water is the same as the volume of the hemisphere in the previous case). Now slowly place the small wooden cube on the water again as before. Will the bowl lean as before? So far, my idea was: No, because when we place the cube on the water, a same amount of water equal to the submerged volume will be displaced. (My guess is) That will keep the centre of mass fixed, so there will be no extra torque about the contact point. But the answers I got to these questions made me confused over my opinion: Can the fish topple the bowl and What is the rower actually doing? Pushing the water or pushing the lake?. ( Read them only if you are interested. I expect an answer for this question.) Is the cube still able to change the CoM of the system? Answer: You are right, the bowl doesn't lean. From Archimedes' principle we know that the mass of the displaced fluid is the same as the mass of the cube. This means that the weight of the cube is the same as the weight of some water that would occupy the submerged part of the cube. In other words, if you substitute the cube with a volume of water equal to the volume of the submerged part, the force is the same. But of course, if there is only water, the bowl is in equilibrium. Therefore, since the situation is dynamically equivalent, the bowl is in equilibrium also with the cube.
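The Archimedes argument can be checked numerically (the density and size figures below are made-up illustrative values, not from the question):

```python
rho_water = 1000.0   # kg/m^3
rho_wood = 600.0     # kg/m^3 (assumed: the cube floats, so less dense than water)
side = 0.05          # m, edge length of the cube

cube_mass = rho_wood * side**3
# Floating equilibrium: buoyant force balances weight, so the mass of the
# displaced water equals the mass of the cube.
submerged_volume = cube_mass / rho_water
displaced_mass = rho_water * submerged_volume

# The displaced water weighs exactly what the cube weighs, so the load
# distribution on the bowl is unchanged and no extra torque appears.
```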
{ "domain": "physics.stackexchange", "id": 81593, "tags": "newtonian-mechanics, forces, torque" }
Potassium hydroxide neutralization with acetic acid
Question: I added 9% acetic acid to a 60% potassium hydroxide solution to neutralize it and was hoping to see some fizzing, but didn't see any indication of them reacting. Why is that? Answer: As stated in Pritt's comment, the reaction you are performing is: $$\ce{CH3COOH + KOH -> CH3COO^−K+ + H2O}$$ Note that none of the products are gases, so there is no effervescence (fizzing). If you want a neutralization reaction that will effervesce, then you need to form a gas. A simple candidate for this would be sodium bicarbonate (baking soda), which would give the following reaction: $$\ce{CH3COOH + NaHCO3 -> CH3COO^−Na+ + H2O + CO2 ^}$$ With this we are forming $\ce{CO2}$ gas, which will cause the solution to effervesce. Note that this is likely to be stinky and messy: a good experiment to perform outdoors.
{ "domain": "chemistry.stackexchange", "id": 8227, "tags": "acid-base" }
regex string extension
Question: I wrote some string extensions:

public static class RegexStringExtensions
{
    public static string PatternReplace(
        this string seed, string pattern, Func<string, string> outPattern)
    {
        return Regex.Replace(seed, pattern,
            s => outPattern(s.Groups[1].Value));
    }

    public static string PatternReplace(
        this string seed, string pattern, Func<string, string, string> outPattern)
    {
        return Regex.Replace(seed, pattern,
            s => outPattern(s.Groups[1].Value, s.Groups[2].Value));
    }

    public static string PatternReplace(
        this string seed, string pattern, Func<string, string, string, string> outPattern)
    {
        return Regex.Replace(seed, pattern,
            s => outPattern(s.Groups[1].Value, s.Groups[2].Value, s.Groups[3].Value));
    }

    public static string PatternReplace(
        this string seed, string pattern, Func<string, string, string, string, string> outPattern)
    {
        return Regex.Replace(seed, pattern,
            s => outPattern(s.Groups[1].Value, s.Groups[2].Value, s.Groups[3].Value, s.Groups[4].Value));
    }
}

With those, I can handle a replacement for a regex pattern and work with the groups in a more C#-like way, for example (just an easy example):

string val = "abcdef".PatternReplace(@"(ab)(cd)(ef)", (s, t, u) => u + string.Concat(s.Reverse()));
// result is "efba"

I think it's not that nice to have so many overloads, since I could have like 10 groups or something. Any idea how I could make this a bit nicer? For now, it works with up to 4 parameters, but also needs 4 overloads. Answer: I'm not saying I like this, but you could reuse the longest overload for the shorter ones. Here I took the one that takes three parameters.
public static string PatternReplace(
    this string seed, string pattern, Func<string, string> outPattern)
{
    return seed.PatternReplace(pattern, (x, y, z) => outPattern(x));
}

public static string PatternReplace(
    this string seed, string pattern, Func<string, string, string> outPattern)
{
    return seed.PatternReplace(pattern, (x, y, z) => outPattern(x, y));
}

Here's another small experiment with C# 7 where you could use anonymous tuples with only one method. But actually you could return a tuple in your current solution too.

public static class RegexStringExtensions
{
    public static string PatternReplace(
        this string seed, string pattern, Func<(string, string, string), string> outPattern)
    {
        return Regex.Replace(
            seed,
            pattern,
            m => outPattern((m.Groups[1].Value, m.Groups[2].Value, m.Groups[3].Value)));
    }
}

Usage:

string val = "abcdef".PatternReplace(
    @"(ab)(cd)(ef)",
    t => t.Item1 + string.Concat(t.Item2.Reverse()));
{ "domain": "codereview.stackexchange", "id": 24231, "tags": "c#, regex, extension-methods" }
Vectors misconception correction:
Question: "If a vector pointing upward has a positive magnitude, a vector pointing down has a negative magnitude." Why is this false? Both vectors point in different directions; shouldn't they have different signs, and hence different magnitudes? "If vector A - B = 0, then the vectors A and B have equal magnitude and are directed in the same direction." Why is this true? If A points right, B has to point right to 'cancel out' to zero; this means a different direction. Answer: Regarding the first statement: "magnitude" is the "quantity" of something, and vectors with opposite signs can have the same "quantity", since magnitude does not depend on the direction of the vector. For example, if you and your friend are playing tug of war and neither of you is being displaced from your initial position, that means that the "quantity" of force you are both applying is the same, but due to the "opposite" directions the net force becomes zero, hence neither of you is moving. Now onto the second statement: if we take two vectors $\vec{a}, \vec{b}$ and it is given that $\vec{a} - \vec{b} = 0$, what this actually shows is that $\vec{a} + (-\vec{b}) = 0$, and here $-\vec{b}$ represents $\vec{b}$ but in the opposite direction. $\Rightarrow$ the addition of $\vec{a}$ and $-\vec{b}$ will make the resultant zero, and this is only possible if $\vec{a}$ and $\vec{b}$ are in the same direction. The example 3 in this image might clear your doubt.
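The tug-of-war point can be made concrete with a one-line numeric check (the 150 N figure is an arbitrary example value of my own):

```python
# Equal-magnitude pulls in opposite directions along one axis.
f_you = 150.0      # N, pulling in the +x direction (arbitrary example value)
f_friend = -150.0  # N, pulling in the -x direction

net_force = f_you + f_friend                   # zero: nobody moves
same_magnitude = abs(f_you) == abs(f_friend)   # magnitudes equal, both positive
```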
{ "domain": "physics.stackexchange", "id": 93925, "tags": "vectors" }
Real time PCR parameter CT
Question: When running a real-time PCR, the parameter CT, which means threshold cycle, is used. What does it really mean? According to Wikipedia, "The number of cycles at which the fluorescence exceeds the threshold is called the threshold cycle (Ct)". Could anyone put it in context? Answer: Real-time PCR uses a fluorescent dye which binds to double-stranded DNA and thus allows one to measure the growing amount of DNA after each cycle. The DNA added to the reaction also binds the fluorescent dye and thus makes up part of the fluorescent background. When the PCR reaction starts, it takes a while until enough DNA is synthesized to rise above the background and to be reliably distinguishable from the noise. I think this becomes clearer when you look at the image below (from the NIH): The Ct is the point where the signal can be distinguished from the background as it enters the exponential growth phase. The no-template control stays below this threshold.
{ "domain": "biology.stackexchange", "id": 1929, "tags": "pcr" }
Authentication program in Swing
Question: It is a simple program which allows you to input a username and password. If the username/password is equal to the String, it launches a JOptionPane that says "Correct". If it doesn't, it launches a JOptionPane that says "Incorrect".

import java.awt.event.*;
import javax.swing.*;

public class Main {
    //The Strings for the program
    static String username = "Username";
    static String password = "Password";
    static int lockout = 0;
    static int lockout1 = 0;

    //Main statement, that runs the program
    @SuppressWarnings("deprecation")
    public static void main(String[] args) throws ClassNotFoundException, InstantiationException, IllegalAccessException, UnsupportedLookAndFeelException {
        //Makes the program look like a regular Windows program, not a Java one
        UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
        //The JPanel and its contents
        JPanel p = new JPanel();
        JTextField first = new JTextField(8);
        first.setToolTipText("Enter Your Username");
        JLabel labelUser = new JLabel("Username:");
        JPasswordField second = new JPasswordField(8);
        //Tooltips when you hover over the item
        second.setToolTipText("Enter Your Password");
        JLabel labelPass = new JLabel("Password:");
        JRadioButton view = new JRadioButton();
        view.setToolTipText("To View Your Password");
        //Causes the JPassword field to output * as the text.
        second.setEchoChar('*');
        //Adds the different items to the JPanel.
        p.add(labelUser);
        p.add(first);
        p.add(labelPass);
        p.add(second);
        p.add(view);
        //Adds an action listener to the JRadioButton
        ActionListener viewActionListener = new ActionListener() {
            public void actionPerformed(ActionEvent actionEvent) {
                if (view.isSelected()) {
                    //Makes the Text in the JPasswordField visible.
                    second.setEchoChar((char) 0);
                } else {
                    //Sets it back to the default "**"
                    second.setEchoChar('*');
                }
            }
        };
        //Adds the action listener to the JRadioButton
        view.addActionListener(viewActionListener);
        //Specifies what buttons are on the JOptionPane
        String[] buttons = {"OK", "Cancel"};
        //Code for the JOptionPane
        int a = JOptionPane.showOptionDialog(null, p, "Authentication", JOptionPane.WARNING_MESSAGE, 1, null, buttons, buttons);
        //If the first button is clicked
        if (a == 0) {
            //Checks if there is any text in the TextFields.
            if (first.getText().equals("") || second.getText().equals("")) {
                //Adds 1 to the int lockout1 and if it reaches 4 it closes the app
                lockout1 += 1;
                if (lockout1 == 4) {
                    JOptionPane.showMessageDialog(null, "Please try again later", "Try Again Later", 0);
                    System.exit(0);
                }
                //Displays the message
                JOptionPane.showMessageDialog(null, "Please enter all of your credentials before continuing", "Try Again", 2);
                main(args);
            //If the username and the password fields are equal to the Strings, it outputs the message correct
            } else if (first.getText().equals(username) && second.getText().equals(password)) {
                JOptionPane.showMessageDialog(null, "Correct", "Correct", 1);
            } else {
                //If not it adds 1 to lockout, if lockout reaches 4 the program closes
                lockout += 1;
                if (lockout == 4) {
                    JOptionPane.showMessageDialog(null, "Please try again later", "Try Again Later", 0);
                    System.exit(0);
                }
                //Outputs the message incorrect
                JOptionPane.showMessageDialog(null, "Incorrect", "Incorrect", 0);
                //reopens the program
                main(args);
            }
        }
        //If you click the cancel button, it closes the application
        if (a == 1) {
            System.exit(0);
        }
    }
}

Answer: Here are some first (more or less grouped and complete) thoughts. I'm not an expert regarding AWT or Swing, so I can't really comment on that. Don't put everything in your main method. Instead try to extract some useful (and potentially reusable) methods.
One good starting point might be your duplicate code:

lockout1 += 1;
if (lockout1 == 4) {
    JOptionPane.showMessageDialog(null, "Please try again later", "Try Again Later", 0);
    System.exit(0);
}

Don't suppress all deprecation warnings. Instead, do not use deprecated classes or methods.
Make your fields private.
username and password seem to be constants. If so, conventions advise to make them private static final and write them in upper case:

private static final String USERNAME = "Username";
private static final String PASSWORD = "Password";

Avoid System.exit. Your application will close automatically as soon as the main method returns. Just open another dialog.
Format your source code. Some lines have different indentation and you are inconsistent with your spaces between if( or }else{.
Some of your comments don't add much value for the reader. I usually recommend not to comment what you are doing, but instead why you are doing something.
buttons should probably be a constant. Then again, there is an optionType for exactly your combination.
Not sure on that one, but don't view and second have to be final to be used in the ActionListener?
I think your parameters of showOptionDialog are incorrect. JOptionPane.WARNING_MESSAGE should actually be the optionType and 1 would be your messageType. Granted, this is a bit difficult, because both are int.
Use constants from JOptionPane instead of "magic" numbers (e.g. the 1 above or when checking the return value of showOptionDialog). The dialog can actually return -1 as well if the user does not click any of the buttons, cf. the javadoc:

/** Return value from class method if user closes window without selecting
 * anything, more than likely this should be treated as either a
 * <code>CANCEL_OPTION</code> or <code>NO_OPTION</code>. */
public static final int CLOSED_OPTION = -1;
{ "domain": "codereview.stackexchange", "id": 17325, "tags": "java, swing, authentication" }
Is Light intangible to other Light? And how does all the intersecting light exist in space?
Question: I was thinking of how light actually gets into my eyes, and thought about my light bulb shining rays onto every part of my bedroom wall and reflecting them towards me. But then I realized I could be at many different locations and still see all that light, so it must either be bouncing off in all directions, or the light that shined elsewhere has bounced around the room to end up reflecting off the wall in all other directions. So I can view the entire wall from bazillions of atom-offset positions in my room. But let's just concentrate on two atom positions directly on my index fingers. The entire wall is shining directly into those two atoms, so light is passing through other light that is heading in another direction. Just with this simple scenario of the wall shining towards two positions, I imagine two pyramids of light colliding with each other, and it is hard to imagine how that works — let alone the bazillions more intersecting, constantly flowing rays of light that reveal the slightest dust in the air on sunny days, from all places in your room, and how light can travel out into space, like starlight travelling towards Earth from light years away, concentrating into smaller and smaller regions. It really breaks my mind how light even works. So I guess my question is, pretty much: how does light travel? Answer: As the other answers state, both in the classical mathematical modeling of the behavior of light and in the quantum mechanical one, where classical light is composed of a superposition of photons, there is no interaction of light, to first order. The italics are to emphasize that in the underlying quantum mechanical frame there exists a photon-photon interaction/scattering in higher orders, with a very, very small probability at visible frequencies.
The diagram is a shorthand for an integral which allows one to calculate the probability of photons (two incoming squiggles) scattering off each other (two outgoing squiggles) at the energies of visible light. The four electromagnetic vertices make the contribution so small that it can be ignored for visible light frequencies. The diagrams go on to higher orders in a converging series expansion with diminishing contributions from higher orders, because each vertex gives a $(1/137)^{1/2}$ multiplicative contribution to the final value of the diagram, and the above diagram goes to the fourth power, so already the probability of scattering falls by $\sim 10^{-5}$. Higher orders for visible light means diagrams with more vertices and even smaller contributions. The electromagnetic spectrum has higher energy photons though, up to gamma rays, and the probability of photons scattering goes up with energy, as students of physics will find when they reach a quantum-mechanics-level course. There are even proposals for gamma-gamma colliders.
{ "domain": "physics.stackexchange", "id": 48321, "tags": "electromagnetism, visible-light, electromagnetic-radiation, superposition, linear-systems" }
Show that the halting problem is decidable for one-pass Turing machines
Question: $L=\{<\!M,x\!>\, \mid M's \text{ transition function can only move right and } M\text{ halts on } x \}$. I need to show that $L$ is recursive/decidable. I thought of first checking the encoding of $M$ and determining whether its transition function moves only right (can I do that?). If so, then try to simulate $M$ on $x$ for $|Q|+1$ steps; if it stops, then $<\!M,x\!>\, \in L$, otherwise it is not. Is this correct? Answer: I assume $Q$ is the state set of $M$. If so, this runtime bound does not make much sense; in particular, a Turing machine with three states can move to the end of the input and check whether $x$ is an even number; it is clearly not enough to wait three steps. The next question is whether there is a computable bound on the runtime of halting one-pass machines. Unfortunately, there is not: $M$ might be nondeterministic and halt after an arbitrary number of steps. So we determinise $M$ while simulating (that is computable) and look for a bound over all paths. Now we are getting somewhere: after having consumed the input, there are two cases. Either $M$ halts or it enters a loop. As it only moves right (into empty tape), such loops can be detected. Putting things together, we simulate a determinisation of $M$. We run every branch until $x$ is consumed¹ and then for another $|Q|\cdot|\Sigma_T|$ steps, $Q$ the set of states and $\Sigma_T$ the tape alphabet of $M$. At this point $M$ has either stopped or loops (in this branch), which we detect by a pair of current state and tape symbol occurring for the second time. We only have to check finitely many branches up to a computable bound, therefore $L$ is recursive. ¹ $M$ might loop here already if not moving the head is allowed, but that can be detected, too.
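The answer reduces everything to the deterministic case via determinisation; for that case, the decision procedure can be sketched directly. The encoding below is hypothetical and of my own choosing: delta maps (state, read symbol) to (next state, written symbol), and a missing entry means the machine halts. Since the head never moves left, after the input is consumed it only ever reads blanks, so a repeated state signals a loop:

```python
def one_pass_halts(delta, start, x, blank="_"):
    """Decide halting for a deterministic, right-moving TM on input x."""
    state = start
    for sym in x:                      # phase 1: consume the input cell by cell
        if (state, sym) not in delta:
            return True                # no transition defined: the machine halts
        state, _written = delta[(state, sym)]
    # Phase 2: the head only reads blanks from now on, and since it never
    # moves left, future behaviour depends only on the current state.
    # A repeated state while reading blanks is therefore a loop.
    seen = set()
    while (state, blank) in delta:
        if state in seen:
            return False               # loop detected: M never halts on x
        seen.add(state)
        state, _written = delta[(state, blank)]
    return True

# A machine that loops forever on blanks after reading any 'a's:
looper = {("q0", "a"): ("q0", "a"), ("q0", "_"): ("q0", "_")}
# A machine that halts right after consuming a single 'a':
halter = {("q0", "a"): ("q1", "a")}
```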
{ "domain": "cs.stackexchange", "id": 272, "tags": "formal-languages, computability, turing-machines, check-my-proof" }
Quadcopter program execution time optimization using Raspberry Pi by increasing i2c baudrate
Question: Is it possible to speed up the execution time of a C++ program on a Raspberry Pi solely by increasing the i2c baudrate and increasing the sampling frequency of the sensors? I have the issue of sudden jerkiness of my quadcopter and found the culprit, which is the frequency at which my loop executes: only about 14 Hz. The minimum requirement for a quadcopter is 100-200 Hz. It is similar to the issue he faces here Raspberry Pi quadcopter thrashes at high speeds He said that he was able to increase his sampling rate from 66 Hz to 200 Hz by increasing the i2c baudrate. I am confused about how that is done. In the wiring pi library, it says that we can set the baudrate using this command: gpio load i2c 1000 will set the baud rate to 1000Kbps – ie. 1,000,000 bps. (K here is times 1000) What I am curious about is how to set this baudrate to achieve my desired sampling rate. I plan on optimizing it further to achieve at least a 100 Hz sampling rate. As of now, the execution time of each loop in my quadcopter program is about 0.07 s, i.e. 14 Hz. It takes 0.01 s to 0.02 s to obtain data from the complementary filter. I have already adjusted the registers of my sensors to output readings at 190 Hz (Gyroscope L3GD20H), 200 Hz (Accelerometer LSM303) and 220 Hz (Magnetometer LSM303). Answer: It turns out, the default i2c baudrate becomes a bottleneck in reading the measurement data from the IMUs, so increasing it from 100kbps to 400kbps was able to boost my execution frequency from 150 Hz (after optimizing) to ~210 Hz (0.00419s)
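A back-of-the-envelope way to connect the bus clock to an achievable loop rate. The byte counts and protocol overhead below are assumed for illustration, not taken from the post; i2c spends roughly 9 clock cycles per byte (8 data bits plus an ACK):

```python
# Assumed figures (not from the post): an i2c byte costs ~9 clock cycles
# (8 data bits + ACK), and a burst read of n data bytes needs a few extra
# address/register bytes of protocol overhead.
def reads_per_second(bus_hz, data_bytes, overhead_bytes=3):
    bits_per_read = 9 * (data_bytes + overhead_bytes)
    return bus_hz / bits_per_read

# Three sensors, 6 data bytes each, read once per control-loop iteration.
for bus_hz in (100_000, 400_000, 1_000_000):
    loop_hz = reads_per_second(bus_hz, 6) / 3
    print(f"{bus_hz // 1000:>5} kbps -> ~{loop_hz:.0f} loop iterations/s (i2c-bound)")
```

Quadrupling the clock quadruples the i2c-bound ceiling, which is consistent in spirit with the jump the answer reports.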
{ "domain": "robotics.stackexchange", "id": 949, "tags": "quadcopter, pid, raspberry-pi, sensor-fusion, c++" }
What happens when a solution of sodium chlorite is acidified?
Question: I'm told that when sodium chlorite $\ce{(NaClO2)}$ is mixed with aqueous acid, chlorine dioxide ($\ce{ClO2}$) is produced and remains mostly dissolved in water. I would like to know how the strength of the acid (concentration of acid and/or the type of acid) affects this reaction. I would also like to know what happens (afterwards) to the $\ce{ClO2}$ over time. For example, if undisturbed, does the $\ce{ClO2}$ enter the air, convert into another compound, or something else? Answer: When acid is added to sodium chlorite in solution (note: $\ce{ClO2-}$ is the chlorite ion), chlorous acid ($\ce{HClO2}$) is transiently formed which then goes on to decompose. However, you need hydrochloric acid, not just any acid, because chloride ion ($\ce{Cl-}$) itself acts as a catalyst for the decomposition. What happens is that chlorous acid disproportionates; that is, some of it gets oxidized while the rest gets reduced. Specifically, out of five $\ce{Cl}$ atoms in chlorous acid (oxidation state +3), one will end up as chloride in the oxidation state $-1$, and four will end up as $\ce{ClO2}$ in the oxidation state +4: $$\ce{5HClO2 -> 4ClO2 + Cl- + H+ + 2H2O}$$ Pretty much any concentration will do. Calculating the proportions is easy if you know a bit of stoichiometry: you need 5 moles of chlorite for 4 moles of hydrochloric acid. The reaction proceeds to completion, and you get 4 moles of $\ce{ClO2}$. The $\ce{ClO2}$ remains in solution and is a good bleaching agent; see Wikipedia. It can undergo further disproportionation but if the solutions are kept in the dark, this process is slow.
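The stoichiometry in the answer is easy to turn into a quick calculation; the helpers below just encode the 5:4 mole ratios from the disproportionation equation:

```python
# Mole ratios from 5 HClO2 -> 4 ClO2 + Cl- + H+ + 2 H2O: every 5 mol of
# chlorite yields 4 mol of chlorine dioxide, and (per the answer) 4 mol of
# hydrochloric acid are needed per 5 mol of chlorite.
def clo2_yield(mol_chlorite):
    return mol_chlorite * 4 / 5

def hcl_needed(mol_chlorite):
    return mol_chlorite * 4 / 5

print(clo2_yield(5.0))    # 4.0 mol ClO2
print(hcl_needed(0.25))   # 0.2 mol HCl
```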
{ "domain": "chemistry.stackexchange", "id": 1542, "tags": "acid-base, redox, halides" }
Is the punctuation part of the alphabet?
Question: Given a language $L \subseteq \Sigma^*$ (it could be Italian, English, C++ or anything else), should we consider the punctuation (".", ";", "->") as part of the alphabet $\Sigma$ upon which the language is built, or not? Answer: Yes, for a couple of reasons. In natural language, punctuation changes the parsing and meaning of sentences. The classic example is that "The panda eats shoots and leaves" (describing its diet) has a different meaning than "The panda eats, shoots, and leaves" (describing its violence before it departs). A parser should, in theory, generate different parse trees for these two sentences. In formal languages, punctuation doesn't really exist as an entity separate from other symbols. So unless adding and removing punctuation from a word NEVER changes its membership in a language, you want to consider it, since any decision procedures will need to know about this. For formal languages, we can't ignore that what you call punctuation might sometimes change the properties of the language. For instance, it's easy to accept the set $\{ w.w^R \mid w \in \Sigma^* \}$ with a deterministic pushdown automaton. But remove the $.$ character as a delimiter, and we need non-determinism to guess where the end of the first string is, and where its reversal starts.
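The delimiter point can be illustrated in Python. One caveat: a Python string knows its own length, so even the delimiter-free check below can jump straight to the midpoint; a pushdown automaton reading left to right cannot, which is exactly where the non-determinism comes in:

```python
# With the '.' delimiter, membership in { w.w^R } is a single deterministic
# left-to-right check; without it, the language is exactly the even-length
# palindromes.
def member_with_delim(s):
    if s.count(".") != 1:
        return False
    w, _, rest = s.partition(".")
    return rest == w[::-1]

def member_no_delim(s):
    # A string knows its length, so we can jump to the midpoint;
    # a PDA scanning left to right must *guess* it (non-determinism).
    n = len(s)
    return n % 2 == 0 and s[: n // 2] == s[n // 2:][::-1]
```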
{ "domain": "cs.stackexchange", "id": 6903, "tags": "formal-languages, programming-languages" }
The equational theory of regular languages has no finite set of axioms for general alphabets
Question: According to Redko the equational theory of regular languages with operations $+, \cdot, *$ over a single letter has no finite set of axioms. Why does this imply that it has no finite set of axioms over an arbitrary alphabet? Suppose I have a finite set of axioms for the equational theory of an alphabet with more than one letter, how would this give a finite set of axioms for the single letter case? I mean in the single letter case we have additional equations which did not hold in the arbitrary case, for example $AB = BA$ for two languages $A,B$, which therefore could not be derived from the axioms, hence these axioms do not axiomatize the single letter case. Some clarification as asked for in the comments: Given a fixed alphabet $\Sigma$ and a set of variables $X = \{A,B,C,\ldots\}$, the terms are inductively defined: 1) Every $a \in \Sigma$ is a term, 2) Every variable $X$ is a term, 3) If $s,t$ are terms, then $s + t, s\cdot t, t^{\ast}$ are terms. The interpretation of the regular sets $\mathcal{Reg}(\Sigma)\subseteq \mathcal P(\Sigma^{\ast})$ is as usual. As written in the comments, a set of terms $\Gamma$ axiomatises $\mathcal{Reg}(\Sigma)$, the regular languages over a given alphabet $\Sigma$, if $$ \Gamma \vdash t_1 = t_2 \mbox{ iff } \mathcal{Reg}(\Sigma) \models t_1 = t_2. $$ Here $t_1, t_2$ are terms with alphabet letters from $\Sigma$, and variables are interpreted as universally quantified. The only rule of inference is substitution; that is what I mean by equational logic. Answer: For each $k > 0$, the equation $$ A^{\ast} = (A^k)^{\ast}(1 + A + A^2 + \ldots + A^{k-1}) $$ holds for regular languages over every alphabet, hence if for some alphabet we have a finite axiomatization, this equation is provable from it. Now as shown in A. Ginsburg, Algebraic Theory of Automata, the following infinite system of axioms is complete for unary regular languages.
\begin{align*} A + B & = B + A \\ A + (B + C) & = (A + B) + C \\ A + A & = A \\ A + \emptyset & = A \\ AB &= BA \\ (AB)C & = A(BC) \\ A1 & = A \\ A \emptyset & = \emptyset \\ (A+B)C & = AC + BC \\ 1^{\ast} & = 1, \emptyset^{\ast} = 1 \\ (AB^{\ast})^{\ast} & = 1 + AA^{\ast}B^{\ast} \\ (A + B)^{\ast} & = A^{\ast} B^{\ast} \\ A^{\ast} & = (A^k)^{\ast}(1 + A + A^2 + \ldots + A^{k-1}), \quad k > 0 \end{align*} So we would get a finite axiomatisation for unary languages if we took all of the above axioms except the last family, together with all axioms from the finite axiom system for some alphabet that prove $$ A^{\ast} = (A^k)^{\ast}(1 + A + A^2 + \ldots + A^{k-1}) $$ for each $k > 0$. Then all of the above equations would be derivable from this finite axiom system, and as the above system is complete, the finite axiom system would be complete, which is not possible. Hence there is no finite system of axioms for any alphabet.
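The key identity can be sanity-checked mechanically for unary languages, where a language is just a set of word lengths, concatenation is a sumset, and star is iterated concatenation. The truncation bound `N` below is an arbitrary choice for the finite check:

```python
# A unary language is represented by its set of word lengths, truncated at N;
# concatenation is a sumset, star is iterated concatenation from {0}.
N = 60

def cat(A, B):
    return {a + b for a in A for b in B if a + b <= N}

def power(A, k):
    P = {0}
    for _ in range(k):
        P = cat(P, A)
    return P

def star(A):
    result, frontier = {0}, {0}
    while frontier:
        frontier = cat(frontier, A) - result
        result |= frontier
    return result

A = {2, 3}                # the unary language {aa, aaa}
for k in range(1, 6):
    lhs = star(A)
    rhs = cat(star(power(A, k)),
              set().union(*(power(A, i) for i in range(k))))
    assert lhs == rhs     # A* = (A^k)* (1 + A + ... + A^(k-1)), up to N
```

This is only a finite sanity check up to length `N`, of course, not a proof of the identity.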
{ "domain": "cs.stackexchange", "id": 11365, "tags": "formal-languages, regular-languages, finite-automata, first-order-logic" }
Stellar life cycle flow chart with mass conditions and time scales
Question: I remember that in my nuclear astrophysics lecture a decade ago, our lecturer drew a large flow-chart-like diagram of stellar evolution as a function of the mass of the star (in solar units) on the blackboard. I found one from Wikipedia which is close to the one I remember, but less detailed. The content of the linked chart is summarized in my following sketch: Sadly, neither I nor my former classmates recall which textbook or publication the diagram came from; we suspect it was a composition of many sources. The diagram came with a brief explanation of each mass threshold, and it also outlined the governing equations in the respective stellar state. I also remember that there were rough approximations of the time scales of the respective states given. Maybe somebody can help me out with alternative search terms, references, or a textual answer briefly summarizing the conditions and the state-of-the-art error bars for the mass thresholds and timescales? Related The life course for a massive star from birth to death using the HR Diagram and its comments are helpful, but only slightly related What are the equations governing stellar evolution (Luminosity, Mass, Temperature, Radius) touches one aspect of what I am after A flowchart from Stephan et al. 2019 outlining the outcomes of binary stellar evolution in the galactic center comes with the right amount of detail, but is not for general stellar evolution. Edit from 2021-01-22 To make answering easier, I quickly laid out the existing diagram as BPMN, which is editable e.g. with this freely available editor. The source code is available via https://pastebin.com/mrafLWa0 - please feel free to reuse, expand and distribute my diagram, just make sure you comment here where and what for. Answer: My slapdash (and perhaps messy because I rendered it in PowerPoint) attempt at addressing stellar evolution. Open the flowchart in a new tab in fullscreen to see a better rendering of it.
{ "domain": "astronomy.stackexchange", "id": 5232, "tags": "star, black-hole, supernova, stellar-evolution, neutron-star" }
Raised index of partial derivative
Question: I am having a really hard time wrapping my head around component notation for tensor fields. For example, I do not know exactly what the following expression means $$\partial_\mu\partial^\nu \phi, \tag{$\#$}$$ where $\phi$ is a scalar field. On the one hand $\partial^\nu=g^{\lambda\nu}\partial_\lambda$ where $g_{\mu\nu}$ is the Minkowski metric, and hence we could write explicitly $$\partial_\mu\partial^\nu \phi=\sum_{\mu,\nu,\lambda}g^{\lambda\nu}\partial_\mu \partial_{\lambda}\phi=\sum_{\mu,\nu}\partial_{\mu}\partial_{\nu}\phi=\partial_\mu\partial_\nu\phi. \tag{$*$} $$ On the other hand, we may think of $\partial_\mu\partial^\nu=g(\partial_\mu,\partial^\nu)=\delta_\mu^\nu;$ so that $\partial_\mu\partial^\nu\phi=\phi?$ Maybe? I am actually not sure of what this would mean. I am really confused. Any help is appreciated. Edit: To give context of where this expression comes from: I was computing the Lagrangian $$\mathcal L=\frac{1}{2}(\partial_\mu\phi)(\partial^\mu \phi) $$ considering an infinitesimal spacetime translation $x^\mu\to x^\mu-\alpha a^\mu$. The scalar field thus transforms like $\phi(x)\to \phi(x)+\alpha(\partial_\mu\phi(x))a^\mu.$ Plugging this into the Lagrangian yields the term I am referring to. Edit 2: The change in placement of indices is actually my doubt. Let me try to elaborate. I do not have any background in using indices to talk about tensors. I am used to interpreting the expressions $\partial_\mu$ as the local vector fields defined in some chart (local coordinates). I think about vector fields $X$ as abstract sections of the tangent bundle, which restricted to local coordinates can be expressed as $X=X^\mu\partial_\mu$. In the context of QFT, as far as I understand, the symbol $\partial_\mu$ denotes $(\partial_t,\nabla)$ in the local coordinates $(t,x,y,z)$. So that $\partial_\mu\phi=(\partial_t \phi,\partial_x \phi,\partial_y\phi,\partial_z\phi)$.
This was supposed to be my justification on why I wrote the summation on $\mu$ and $\nu$ in $(*)$, but now I note that this only applies when $\mu$ or $\nu$ appear twice, indicating the scalar product; which leads me to the last remark. I think of $g_{\mu \nu}$ as the components of the matrix $$g=\begin{pmatrix} 1&0&0&0\\ 0&-1&0&0\\ 0&0&-1&0\\ 0&0&0&-1\\ \end{pmatrix}$$ which represents the pseudo-Riemannian metric, which by definition acts on tangent vectors, i.e. linear combinations of the $\partial_\mu$ applied to a point. This is where my doubt comes in: which is the right way to interpret the notation; in particular, what is the expression $(\#)$ in explicit coordinates? Answer: $\renewcommand{\lag}{\mathcal{L}}\renewcommand{\pd}{\partial}\renewcommand{\d}{\mathrm{d}}$$\pd^\mu$ is defined as $\pd^\mu := g^{\mu\nu}\pd_{\nu}$, where I use the convention that all repeated indices are summed and $g^{\mu\nu}$ are the components of the inverse metric tensor. Thus your Lagrangian can be rewritten as $$\lag=\tfrac12g^{\mu\nu}(\pd_\mu\phi)(\pd_\nu\phi)\tag{1}$$ and also your expression $(\#)$ is equal to $g^{\nu\sigma}\pd_\mu\pd_\sigma\phi$. To see where all this comes from, from a differential geometry point of view, this Lagrangian can be written in a coordinate-free form as the top-form $$\lag = \tfrac12 \d\phi\wedge\star\d\phi,\tag{2}$$ where $\d$ is the exterior derivative and $\star$ is the Hodge-star. It is an easy exercise to restrict to a local coordinate system, $\d x^\mu$, in which case $\d\phi$ becomes $\frac{\pd\phi}{\pd x^\mu}\d x^\mu\equiv\pd_\mu\phi\,\d x^\mu$. The Hodge star will contribute a factor of $g^{\mu\nu}$ and so (2) will fall back to (1). Moreover, you can think of $a^\mu\pd_\mu\phi(x)$ in a more formal setting as $\iota_a \d\phi$, where $\iota_a$ is the interior product along the vector field $a$ with components $a^\mu$.
So the transformation $\phi(x)\mapsto\phi(x)+\alpha a^\mu \pd_\mu\phi(x)$ is written as $$\phi(x)\mapsto \phi(x) + \alpha\,(\iota_a\d\phi)(x).$$ The relevant term in your expression ($\#$) comes from a term $\alpha \d\phi\wedge\star\d\iota_a \d\phi$ in the Lagrangian, basically it is just the $\alpha \star\d\iota_a \d\phi$ part. If we expand this in local coordinates $\{\d x^\sigma\}$, we get: $$ \alpha \star\d\iota_a \d\phi = \alpha a^\mu \pd_\sigma\pd_\mu \phi\;\star\d x^\sigma = \alpha a^\mu \pd_\sigma\pd_\mu \phi\ g^{\nu\sigma} \varepsilon_{\nu\lambda\kappa\rho}\d x^\lambda\wedge\d x^\kappa\wedge\d x^\rho,$$ where in the second equality I used the definition of the Hodge star acting on the basis differentials. Stripping off numbers, $\varepsilon$-symbols and the differentials, all we're left with is $$g^{\nu\sigma}\pd_\sigma\pd_\mu\phi,\tag{$\#'$}$$ which is exactly what you would have found (with your much shorter route) as $$\pd^\nu\pd_\mu\phi \tag{#}.$$ Thus, $(\#')=(\#)$. Of course the typical way to arrive there is to simply use the fact that for any object $\bullet_\mu$ with a downstairs leg we can lift it using the inverse metric, i.e. $\bullet^\mu := g^{\mu\nu}\bullet_\nu$. But since you had trouble understanding where does this stem from from a differential geometry perspective, I wanted to stick with the differential geometry picture all the way through, from the Lagrangian to the final result. Hope this helped and didn't confuse you more.
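Numerically, raising an index with the mostly-minus Minkowski metric just flips the sign of the spatial components; a minimal sketch, using made-up values for $\partial_\mu\phi$:

```python
# Mostly-minus Minkowski metric: raising an index flips the sign of the
# spatial components. The numbers in d_phi_down are made up.
g_inv = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]

def raise_index(v_down):
    return [sum(g_inv[mu][nu] * v_down[nu] for nu in range(4))
            for mu in range(4)]

d_phi_down = [0.5, 1.0, 2.0, 3.0]     # stand-in values of d_mu phi
d_phi_up = raise_index(d_phi_down)    # [0.5, -1.0, -2.0, -3.0]

# The kinetic term (1/2)(d_mu phi)(d^mu phi) is then a single contraction:
L = 0.5 * sum(a * b for a, b in zip(d_phi_down, d_phi_up))
print(d_phi_up, L)
```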
{ "domain": "physics.stackexchange", "id": 71822, "tags": "special-relativity, metric-tensor, tensor-calculus, notation, covariance" }
After how many hours water drunk is expelled from the body?
Question: If one drinks a liter of water at 7 am when (in what span) will that water be eliminated through uresis? Is that process influenced by any factors such as empty stomach, sleep or other? Answer: The time from drinking to urination depends mainly on your hydration status and the presence of food in the stomach. Scenario: You have drunk enough fluid in the previous days, so you are normally hydrated. In the morning, you get up from the bed, urinate, drink 1 liter of water in 5 minutes and eat nothing. Some water can come through the stomach and can be absorbed in the small intestine in about 5 minutes; the entire liter may need more than 2 hours (sweatscience.com). When some water is absorbed into the blood it can immediately trigger diuresis - the excretion of the urine through the kidneys into the bladder. It may take about 3 hours for the entire amount of water drunk to be excreted. So, the approximate time span (from start to end of water excretion) could be 5 - 180 minutes. But not likely the entire liter of water will be excreted, because, in the morning, you are in a slightly negative water balance, so some water will stay in your body. If you are dehydrated before starting drinking, much less urine will be excreted in the first few hours. If you drink after eating, the food in the stomach can delay water absorption and excretion by more than an hour. This also happens when you drink nutritional fluids, such as milk or juice. This is from my experience and understanding basic physiology. I'll try to find some references.
{ "domain": "biology.stackexchange", "id": 9719, "tags": "human-physiology" }
Reading a matrix and computing the determinant
Question: As one of my first C programs I want to read in a matrix and compute its determinant. I don't pose limits on the size of the matrix and this makes things more complicated. Version 0 #include <stdio.h> #include <stdlib.h> #include <string.h> #include <ctype.h> #define ROW_LENGTH 8 #define CHUNK 32 typedef struct { size_t size; double *elements; } Matrix; double determinant(Matrix *); Matrix *parse_input(); Matrix *create_matrix(size_t size); void free_matrix(Matrix *); double at(Matrix *, int, int); char *readline(); int main(void) { Matrix *M = parse_input(); printf("%f\n", determinant(M)); free_matrix(M); return 0; } double determinant(Matrix *M) { if (M->size == 1) { return M->elements[0]; } else if (M->size == 2) { return at(M, 0, 0) * at(M, 1, 1) - at(M, 0, 1) * at(M, 1, 0); } // Make the matrix triangular size_t i, j, t; double r = 1; for (j = 0; j < M->size; j++) { if (!at(M, j, j)) return 0; for (i = j + 1; i < M->size; i++) { double ratio = at(M, i, j) / at(M, j, j); for (t = 0; t < M->size; t++) { M->elements[i * M->size + t] -= ratio * at(M, j, t); } } } for (i = 0; i < M->size; i++) { r *= at(M, i, i); } return r; } Matrix *parse_input() { char *row = readline(); size_t t; size_t N = 0, P = 0; size_t i = 1, j = 0; double *first_row; if (!(first_row = malloc(ROW_LENGTH * sizeof first_row))) { puts("Could not allocate memory."); exit(EXIT_FAILURE); } char *number = strtok(row, " "); while (number) { if (N == ROW_LENGTH) { if (!(first_row = realloc(first_row, 2 * N * sizeof first_row))) { puts("Could not allocate memory."); free(first_row); exit(EXIT_FAILURE); } } first_row[N++] = atof(number); number = strtok(NULL, " "); } Matrix *M = create_matrix(N); for (t = 0; t < N; t++) { M->elements[t] = first_row[t]; } free(row); free(first_row); while (++P < N) { j = 0; row = readline(); char *number = strtok(row, " "); while (number) { M->elements[i * M->size + j++] = atof(number); number = strtok(NULL, " "); } i++; free(row); } return M; } Matrix 
*create_matrix(size_t size) { Matrix *M = malloc(sizeof(Matrix)); M->size = size; M->elements = calloc(size * size, sizeof(double)); return M; } void free_matrix(Matrix *matrix) { free(matrix->elements); free(matrix); } double at(Matrix *M, int i, int j) { return M->elements[i * M->size + j]; } char *readline() { char *input = NULL; char tmpbuf[CHUNK]; size_t inputlen = 0, tmplen = 0; do { fgets(tmpbuf, CHUNK, stdin); tmplen = strlen(tmpbuf); inputlen += tmplen; input = realloc(input, inputlen + 1); if (!input) { puts("Could not allocate memory."); exit(EXIT_FAILURE); } strcat(input, tmpbuf); } while (tmplen == CHUNK - 1 && tmpbuf[CHUNK - 2] != '\n'); return input; } Version 1 #include <stdio.h> #include <stdlib.h> #include <string.h> #include <ctype.h> #define ROW_LENGTH 8 #define CHUNK 32 typedef struct { size_t size; double *elements; } Matrix; double determinant(Matrix *); Matrix *parse_input(); Matrix *create_matrix(size_t); void free_matrix(Matrix *); double at(Matrix *, int, int); char *readline(); int main(void) { Matrix *M = parse_input(); printf("%f\n", determinant(M)); free_matrix(M); return 0; } double determinant(Matrix *M) { if (M->size == 1) { return M->elements[0]; } else if (M->size == 2) { return at(M, 0, 0) * at(M, 1, 1) - at(M, 0, 1) * at(M, 1, 0); } // Make the matrix triangular size_t i, j, t; double r = 1; for (j = 0; j < M->size; j++) { if (!at(M, j, j)) return 0; for (i = j + 1; i < M->size; i++) { double ratio = at(M, i, j) / at(M, j, j); for (t = 0; t < M->size; t++) { M->elements[i * M->size + t] -= ratio * at(M, j, t); } } } for (i = 0; i < M->size; i++) { r *= at(M, i, i); } return r; } Matrix *parse_input() { char *row = readline(); size_t t; size_t N = 0; size_t i = 1, j = 0; size_t row_length = ROW_LENGTH; double *first_row; if (!(first_row = malloc(ROW_LENGTH * sizeof *first_row))) { puts("Could not allocate memory."); exit(EXIT_FAILURE); } char *number = strtok(row, " "); while (number) { if (N == row_length) { row_length *= 2; if 
(!(first_row = realloc(first_row, row_length * sizeof *first_row))) { puts("Could not allocate memory."); free(first_row); exit(EXIT_FAILURE); } } first_row[N++] = atof(number); number = strtok(NULL, " "); } Matrix *M = create_matrix(N); for (t = 0; t < N; t++) { M->elements[t] = first_row[t]; } free(row); free(first_row); while (i < N) { j = 0; row = readline(); char *number = strtok(row, " "); while (number && j < N) { M->elements[i * M->size + j++] = atof(number); number = strtok(NULL, " "); } i++; free(row); } return M; } Matrix *create_matrix(size_t size) { Matrix *M; if (!(M = malloc(sizeof *M))) { puts("Could not allocate memory."); exit(EXIT_FAILURE); } M->size = size; M->elements = calloc(size * size, sizeof(double)); return M; } void free_matrix(Matrix *matrix) { free(matrix->elements); free(matrix); } double at(Matrix *M, int i, int j) { return M->elements[i * M->size + j]; } char *readline() { char *input = calloc(CHUNK, 1); char tmpbuf[CHUNK]; size_t inputlen = 0, tmplen = 0; do { fgets(tmpbuf, CHUNK, stdin); tmplen = strlen(tmpbuf); inputlen += tmplen; input = realloc(input, inputlen + 1); if (!input) { puts("Could not allocate memory."); exit(EXIT_FAILURE); } strcat(input, tmpbuf); } while (tmplen == CHUNK - 1 && tmpbuf[CHUNK - 2] != '\n'); return input; } Version 2 #include <stdio.h> #include <stdlib.h> #include <string.h> #include <ctype.h> #define _M(i, j) (M->elements[(i) * M->size + (j)]) #define ROW_LENGTH 8 #define CHUNK 32 typedef struct { size_t size; double *elements; } Matrix; double determinant(Matrix *); signed char find_pivot(Matrix *, int); Matrix *parse_input(); Matrix *create_matrix(size_t); void free_matrix(Matrix *); int readline(char **, size_t *, FILE *); int main(void) { Matrix *M = parse_input(); printf("%f\n", determinant(M)); free_matrix(M); return 0; } double determinant(Matrix *M) { if (M->size == 1) { return M->elements[0]; } else if (M->size == 2) { return _M(0, 0) * _M(1, 1) - _M(0, 1) * _M(1, 0); } // Make the matrix 
triangular size_t i, j, t; signed char sign = 1; double ratio, r = 1; for (j = 0; j < M->size; j++) { if (!_M(j, j)) { if (!find_pivot(M, j)) { return 0; } sign *= -1; } for (i = j + 1; i < M->size; i++) { ratio = _M(i, j) / _M(j, j); for (t = 0; t < M->size; t++) { _M(i, t) -= ratio * _M(j, t); } } } for (i = 0; i < M->size; i++) { r *= _M(i, i); } return sign * r; } signed char find_pivot(Matrix *M, int j) { size_t i; for (i = j + 1; i < M->size; i++) { if (_M(i, j)) { size_t t; double tmp; for (t = 0; t < M->size; t++) { tmp = _M(i, t); _M(i, t) = M->elements[j * M->size + t]; _M(j, t) = tmp; } return 1; } } return 0; } Matrix *parse_input() { char *row; size_t reading_size = CHUNK; if (!(row = malloc(reading_size))) { puts("Could not allocate memory."); exit(EXIT_FAILURE); } readline(&row, &reading_size, stdin); size_t t; size_t N = 0; size_t i = 1, j = 0; size_t row_length = ROW_LENGTH; double *first_row; if (!(first_row = malloc(ROW_LENGTH * sizeof *first_row))) { puts("Could not allocate memory."); exit(EXIT_FAILURE); } char *number = strtok(row, " "); while (number) { if (N == row_length) { row_length *= 2; if (!(first_row = realloc(first_row, row_length * sizeof *first_row))) { puts("Could not allocate memory."); free(row); free(first_row); exit(EXIT_FAILURE); } } first_row[N++] = atof(number); number = strtok(NULL, " "); } Matrix *M = create_matrix(N); for (t = 0; t < N; t++) { M->elements[t] = first_row[t]; } free(first_row); while (i < N) { j = 0; readline(&row, &reading_size, stdin); char *number = strtok(row, " "); while (number && j < N) { M->elements[i * M->size + j++] = atof(number); number = strtok(NULL, " "); } i++; } free(row); return M; } Matrix *create_matrix(size_t size) { Matrix *M; if (!(M = malloc(sizeof *M))) { puts("Could not allocate memory."); exit(EXIT_FAILURE); } M->size = size; if (!(M->elements = calloc(size * size, sizeof(double)))) { puts("Could not allocate memory."); exit(EXIT_FAILURE); } return M; } void free_matrix(Matrix 
*matrix) { free(matrix->elements); free(matrix); } int readline(char **input, size_t *size, FILE *file) { char *offset; char *p; size_t old_size; // Already at the end of file if (!fgets(*input, *size, file)) { return EOF; } // Check if input already contains a newline if (p = strchr(*input, '\n')) { *p = 0; return 0; } do { old_size = *size; *size *= 2; if (!(*input = realloc(*input, *size))) { puts("Could not allocate memory."); free(*input); exit(EXIT_FAILURE); } offset = &((*input)[old_size - 1]); } while (fgets(offset, old_size + 1, file) && offset[strlen(offset) - 1] != '\n'); return 0; } How it works The user inputs the first row of the matrix, the program counts the number of elements and determines how many rows remain (the matrix must be a square one). Then the determinant is computed by first reducing the matrix to triangular form and then computing the product of the elements on the main diagonal. Example session: ./det 4 23 4 2 -5 2 45 2 40 330.000000 I'm quite proud of how I managed to get it finally correct (after many problems with allocations and memory leaks). However, I was wondering how can I improve readline() and parse_input(). The latter looks quite a mess. Answer: Bugs I spotted a few bugs in your program. You are allocating the wrong size. if (!(first_row = malloc(ROW_LENGTH * sizeof first_row))) { Should be: if (!(first_row = malloc(ROW_LENGTH * sizeof *first_row))) { The same goes for the call to realloc later. Your program can't handle a row bigger than 16 elements. Right now, you only call realloc under this condition: if (N == ROW_LENGTH) { But that can only happen once. To allow for an infinite size, you need to keep track of the current allocation size and realloc every time N reaches the current allocation max. The first time you allocate input using realloc(NULL, size), you don't clear it before you use it. Remember that realloc(NULL, size) is equivalent to calling malloc. 
So input will be uninitialized but you call strcat on it right after. If the first row has N elements, you will create a NxN matrix. But when you read the second and subsequent rows, you don't limit the row length to N elements when you fill in the matrix. In particular, if the last row contains more than N elements, you will overflow your matrix. Other things You check the return value of malloc in most places, but not in create_matrix(). In readline, you are expanding the size of the reallocated buffer by a constant 32 bytes at a time. If the input is really huge, then this will take a long time, as this is an O(N^2) operation. This could be improved by doubling the size of the buffer each time. Also, you can simplify the function by getting rid of tmpbuf. Instead, call fgets directly on the unfilled part of the buffer (the part that realloc just expanded). Your at() function is an interesting way of accessing matrix elements. I think an even better way would be like this: #define _M(i,j) (M->elements[(i) * M->size + (j)]) This assumes you will always name your matrix M, but you could modify it to be more general if you need to. The good thing about this macro is that you can use it as an lvalue, which you couldn't with your function. For example: _M(i,j) -= ratio * _M(j,t); You have a variable P in parse_input. I don't know what it stands for but it looks like it has the same exact value as i, which is the current row number. I think you could eliminate P.
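For cross-checking the C program's output, a compact Python reference implementation of the same algorithm (triangularise with row swaps, multiply the diagonal) may be handy:

```python
# Reference implementation of the same algorithm: reduce to upper-triangular
# form with row swaps (tracking the sign flip per swap) and multiply the
# diagonal elements.
def determinant(m):
    n = len(m)
    m = [row[:] for row in m]          # work on a copy
    sign = 1.0
    for j in range(n):
        # find a pivot at or below row j
        pivot = next((i for i in range(j, n) if m[i][j] != 0), None)
        if pivot is None:
            return 0.0                 # a zero column: singular matrix
        if pivot != j:
            m[j], m[pivot] = m[pivot], m[j]
            sign = -sign
        for i in range(j + 1, n):
            ratio = m[i][j] / m[j][j]
            for t in range(j, n):
                m[i][t] -= ratio * m[j][t]
    result = sign
    for i in range(n):
        result *= m[i][i]
    return result

print(determinant([[1.0, 2.0], [3.0, 4.0]]))   # -2.0
```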
{ "domain": "codereview.stackexchange", "id": 12886, "tags": "beginner, c, memory-management, matrix" }
How are non-glucose sugars metabolized in the body?
Question: In my biology book's section on disaccharide metabolism and glycolysis, it states that sugars other than glucose must be acted upon to enter glycolysis. Let's take sucrose as an example. Sucrose is hydrolyzed in the small intestine by sucrase. The resulting fructose and glucose are absorbed and transported to the liver via the portal vein. My question concerns the fate of fructose. To undergo glycolysis, the book states that fructose is converted into either fructose-6-phosphate (F6P) or fructose-1-phosphate (F1P). Let's say it is converted to F1P. Aldolase splits this into dihydroxyacetone phosphate and D-glyceraldehyde. Triose kinase then converts D-glyceraldehyde to glyceraldehyde-3-phosphate, a glycolytic intermediate. Where is this occurring in the body? Are we still in the liver? I can't imagine that all the fructose we consume is undergoing glycolysis in the liver. To leave the liver as a sugar, it would have had to be converted to glucose, right? In classes I've taken, I've been told that sugars that enter the liver are pretty much all converted to glucose. Once they are converted to glucose, they can be distributed to the rest of the body, stored as glycogen, etc. If we are going straight from fructose to F1P to a glycolytic intermediate, we couldn't have left the liver. How is such a transformation even useful? Anyone care to shed some light on this? Answer: Where is this occurring in the body? Almost totally in the liver. To leave the liver as a sugar, it would have had to be converted to glucose, right? Correct, but it's not a direct conversion. Fructose is metabolized almost completely in the liver in humans, and is directed toward replenishment of liver glycogen and triglyceride synthesis... ...Increased concentrations of DHAP and glyceraldehyde-3-phosphate in the liver drive the gluconeogenic pathway toward glucose-6-phosphate, glucose-1-phosphate and glycogen formation.
It appears that fructose is a better substrate for glycogen synthesis than glucose and that glycogen replenishment takes precedence over triglyceride formation. Once liver glycogen is replenished, the intermediates of fructose metabolism are primarily directed toward triglyceride synthesis. So, fructose is almost entirely made into something else first, and then that something (glycogen or the glycerol from triglycerides) gets broken down into glucose or an intermediate. Fructose stays in the liver because fructokinase has a pretty low Km (0.5 mM) compared to glucokinase (12 mM) for fructose, so almost all of the fructose that enters the liver is phosphorylated into F1P, which cannot leave.
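Plugging the quoted Km values into the Michaelis-Menten saturation formula makes the point quantitative; the portal fructose concentration below is an illustrative guess, not a measured value:

```python
# Fractional saturation v/Vmax = [S] / (Km + [S]) with the Km values quoted
# above: fructokinase ~0.5 mM vs. glucokinase ~12 mM for fructose.
def saturation(s_mM, km_mM):
    return s_mM / (km_mM + s_mM)

portal_fructose = 1.0   # mM; an illustrative guess, not a measured value
print(f"fructokinase: {saturation(portal_fructose, 0.5):.0%}")   # ~67%
print(f"glucokinase:  {saturation(portal_fructose, 12.0):.0%}")  # ~8%
```

At any plausible concentration, fructokinase is far closer to saturation, which is why the liver captures nearly all incoming fructose.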
{ "domain": "biology.stackexchange", "id": 1399, "tags": "human-biology, biochemistry, metabolism, glucose" }
Which are the rays that form the fringes in wedge shaped film and Newton's rings?
Question: While reading interference by the division of amplitude, I came across this doubt. Different sources seem to hint towards different answers. First, in wedge-shaped film, in the book Optics by Ajoy Ghatak, p210, considering an extended source, the formation of fringes on the wedge is schematically shown by the following diagram. But, while reading Optics by Hecht, 5th edition, for the same condition, this is the diagram given (p421). In the above diagrams, the formation of the fringes at the top of the wedge, when seen through the naked eye, seems to happen for different reasons. In the first image, two different rays originating from the same point on the extended source seem to interfere at a point on the wedge and later pass through the eye. When the eye is focused at that point, the rays will recombine on the retina and hence appear bright or dark depending on the thickness of the film at the point on the wedge. In the second image, a single ray from the extended source alone seems to be responsible for the formation of the bright or dark fringe at that point on the wedge, if the eye is focused such that the two reflected rays from the incident ray recombine on the retina. Also, while looking for other sources, even this website seems to agree with the second image since they calculate the path difference between rays reflected from the same incident ray, indicating that the reflected rays will lead to the formation of the fringe when they recombine on the retina. The same doubt carries over to the Newton's rings case. Are the rings we see through the traveling microscope formed by the reflected rays of the same incident ray or by different incident rays which are very close to each other? Are the two cases in fact different depending upon where our eyes are focused? Thank you! Answer: This is a very good question which can have a relatively simple answer in some cases but is much more difficult to answer in other cases.
To simplify my analysis I have ignored the refraction of rays as they pass through an air/glass interface and any phase changes at these interfaces. When dealing with wedge-fringe localisation many textbooks have diagrams which look like those below. These diagrams illustrate that the real (left-hand diagram) and virtual (right-hand diagram) fringes are localised near the wedge, where the rays cross. As you have pointed out, only one incoming ray is shown and hence only one point of intersection where the waves overlap.

Another way of discussing wedge fringes produced by a point source is shown below. The point source produces two virtual images which act as two coherent sources, and where the waves from those two sources overlap there is interference. I have shown by shading only a limited region where there is interference. This shows that in this case the fringes are non-localised, just like the ones for Young's double slits. This means that they can be viewed wherever the waves from the two sources overlap. An important part of the fringe system is the zero order, where the path difference from the two virtual sources to some point $X$ is the same, $a'X = a''X$.

Now what happens when a second point source is used, as in the left-hand diagram below? There are now two overlapping interference patterns, produced by the virtual sources $a'\,a''$ and $b'\,b''$, which might mean that fringes are no longer visible. However, out of the chaos there is a region around $Y$ where the zero-order fringes of the two patterns overlap. If one then focusses on this region one would see fringes. These are the localised fringes near the wedge. Moving on, adding a third point source and then even more, which is equivalent to having an extended source, you will note in the right-hand diagram that the zero-order fringes lie in roughly the same area.
The visibility of the zero-order and adjacent-order fringes improves if one observes the fringes from positions normal to the wedge, and again the fringes are localised near the wedge, i.e. to see the fringes you must focus on a region near the apex of the wedge. An experimental arrangement is shown below, with the microscope, which has a very small depth of field, focussed on the wedge.
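Although the original answer is purely geometrical, the standard quantitative results behind these fringe patterns are easy to sketch numerically. For a thin air wedge of small angle $\alpha$ at normal incidence, successive dark fringes occur where $2t = m\lambda$ with $t = \alpha x$, giving a fringe spacing $\lambda/(2\alpha)$; for Newton's rings (in reflection), the $m$-th dark ring has radius $\sqrt{m\lambda R}$. The numbers below (sodium wavelength, wedge angle, lens radius) are purely illustrative assumptions, not values from the post.

```python
import math

# Illustrative parameters (assumed: air film, n = 1, normal incidence).
wavelength = 589e-9   # sodium light, in metres
wedge_angle = 1e-4    # wedge angle alpha, in radians (small-angle approximation)
lens_radius = 1.0     # radius of curvature R of the Newton's-rings lens, in metres

# Wedge fringes: dark fringes where 2*t = m*lambda and t = alpha*x,
# so adjacent dark fringes are separated by lambda / (2*alpha).
fringe_spacing = wavelength / (2 * wedge_angle)

def dark_ring_radius(m, lam=wavelength, R=lens_radius):
    """Radius of the m-th dark Newton's ring seen in reflection."""
    return math.sqrt(m * lam * R)

print(f"wedge fringe spacing: {fringe_spacing * 1e3:.3f} mm")
for m in (1, 4, 9):
    print(f"dark ring m={m}: {dark_ring_radius(m) * 1e3:.3f} mm")
```

Note that the ring radii grow as $\sqrt{m}$, so the rings crowd together away from the centre, which is exactly what one sees through the travelling microscope.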
{ "domain": "physics.stackexchange", "id": 68597, "tags": "optics, interference" }