# Implement a Truth-Machine
A truth-machine (credit goes to this guy for coming up with it) is a very simple program designed to demonstrate the I/O and control flow of a language. Here's what a truth-machine does:
• Gets a number (either 0 or 1) from STDIN.
• If that number is 0, print out 0 and terminate.
• If that number is 1, print out 1 forever.
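A minimal ungolfed reference, sketched here in Python (an illustration of the spec, not a competing submission):

```python
import sys

n = sys.stdin.readline().strip()  # read "0" or "1" from STDIN
if n == "0":
    print(0)          # print 0 once and terminate
else:
    while True:       # on 1, print 1s until the program is killed
        print(1)
```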
# Challenge
Write a truth-machine as described above in your language of choice. The truth-machine must be a full program that follows these rules:
• take input from STDIN or an acceptable alternative
• If your language cannot take input from STDIN, it may take input from a hardcoded variable or suitable equivalent in the program
• must output to STDOUT or an acceptable alternative
• If your language is incapable of outputting the characters 0 or 1, byte or unary I/O is acceptable.
• when the input is 1, it must continually print 1s and only stop if the program is killed or runs out of memory
• the output must only be either a 0 followed by either one or no newline or space, or infinite 1s with each 1 followed by either one or no newline or space. No other output can be generated, except constant output of your language's interpreter that cannot be suppressed (such as a greeting, ANSI color codes or indentation). Your usage of newlines or spaces must be consistent: for example, if you choose to output 1 with a newline after it all 1s must have a newline after them.
• if and only if your language cannot possibly terminate on an input of 0 it is acceptable for the code to enter an infinite loop in which nothing is outputted.
Since this is a catalog, languages created after this challenge are allowed to compete. Note that there must be an interpreter so the submission can be tested. It is allowed (and even encouraged) to write this interpreter yourself for a previously unimplemented language. Other than that, all the standard rules of code-golf must be obeyed. Submissions in most languages will be scored in bytes in an appropriate preexisting encoding (usually UTF-8).
# Catalog
The Stack Snippet at the bottom of this post generates the catalog from the answers a) as a list of shortest solution per language and b) as an overall leaderboard.
## Language Name, N bytes
where N is the size of your submission. If you improve your score, you can keep old scores in the headline, by striking them through. For instance:
## Ruby, <s>104</s> <s>101</s> 96 bytes
If you want to include multiple numbers in your header (e.g. because your score is the sum of two files or you want to list interpreter flag penalties separately), make sure that the actual score is the last number in the header:
## Perl, 43 + 2 (-p flag) = 45 bytes
You can also make the language name a link which will then show up in the snippet:
## [><>](http://esolangs.org/wiki/Fish), 121 bytes
• Can we assume that the program halts when the processor finishes executing the written code, for a machine code entry? – lirtosiast Nov 3 '15 at 16:58
• Assuming any behaviour is fine for all invalid inputs? – Cruncher Nov 3 '15 at 17:33
• @Cruncher Yes, the only inputs you should expect to get are 0 and 1. – a spaghetto Nov 3 '15 at 17:38
• Catalog is borked. – Addison Crump Nov 6 '15 at 15:18
• Catalog appears to consider Bf and bf to be different languages. – Mooing Duck Nov 10 '15 at 1:13
## HP48's RPL, 22.5 bytes
« WHILE DUP REPEAT DUP END »
Since there is no such thing as STDIN or STDOUT on the HP48, the input is taken on the stack, and one "0" or an infinity of "1"s are pushed back on the stack.
If you try it, you will have to kill the program in order to see the "1"s since the stack is not refreshed while the program is running (Just press the "ON" button).
PS: The HP48's memory is made of 4-bit words, hence the non-integer byte size.
## Retina, 9 7 bytes
/1/+>G
Try it online!
### Explanation
The stage G itself is really a no-op (it's a Grep stage with an empty regex, which always matches). So it's all in the configuration. > prints the result of this stage (which is just the input) and /1/+ wraps it in a loop which runs as long as the string contains a 1. There's also implicit output at the end of the program. So we go through these two possibilities:
• If the input is 0, the /1/ condition fails, so the loop is never run. Instead, the program terminates, and the 0 is printed at the end.
• If the input is 1, the /1/ condition matches, so the loop gets executed. The loop iteration itself does nothing but print that 1, so the string's value won't change and the loop will continue indefinitely.
## Brachylog, 4 bytes
Thanks to Fatalize for saving 1 byte.
w?1↰
Try it online!
### Explanation
w Write the input to STDOUT.
?1 Check whether the input equals 1.
↰ If so, recursively call the main predicate with 1 again.
• What a pity that w's output isn't its input. – Erik the Outgolfer Feb 28 '18 at 20:14
## PUBERTY, 369 bytes
It is May 1, 2018, 3:01:04 PM.Y is in his bed, bored.His secret kink is J.Soon the following sounds become audible.oh yeah yeah yeah hrg fap yeah yeah yeah fap yeah fap yeah yeah mmf yeah yeah yeah hrg yeah hrg fap yeah yeah yeah yeah fap yeah fap yeah mmf yeah yeah yeah yeah yeah yes yeah yeah yeah yeah yeah hrg fap yeah fap yeah fap yeah yeah yeah yeah mmf yeah mmf
Ungolfed
It is May 1, 2018, 3:01:04 PM.
Yhprum is in his bed, bored.
His secret kink is humaninteraction.
Soon the following sounds become audible.
oh yeah yeah yeah
hrg
fap yeah yeah yeah fap yeah fap yeah yeah
mmf
yeah yeah yeah
hrg
yeah
hrg
fap yeah yeah yeah yeah fap yeah fap yeah
mmf
yeah yeah yeah yeah yeah
yes
yeah yeah yeah yeah yeah
hrg
fap yeah fap yeah fap yeah yeah yeah yeah
mmf
yeah
mmf
This was very much harder than I expected it to be using this language.
Explanation
PUBERTY is a wonderful language. It has 6 registers (A, B, C, D, E, F) and one register pointer, all initialized to 0. At the end of each instruction, REGPTR %= 6 and REG[REGPTR] %= 256.
These are the commands used in this program:
• oh reads an ASCII character and stores its value into the current register
• yeah increments REGPTR by 1
• hrg...mmf loops until the current register has a value of zero
• fap increments the current register by 1
• yes prints the ASCII char corresponding to the value of the current register
The program starts with the header - the first four lines
It is May 1, 2018, 3:01:04 PM.
This sets $D to the Unix timestamp of the date % 256. In this case, we set it to 48. This statement is required.
Yhprum is in his bed, bored.
This initializes $C to 6, the number of chars in the name, but we don't use this at all. This statement is required.
His secret kink is humaninteraction.
This line is where you declare all your kinks, if you want to learn about what they do, check out the esolang page since we don't use them here. This statement is required.
Soon the following sounds become audible.
Required line, does nothing.
oh yeah yeah yeah
hrg
fap yeah yeah yeah fap yeah fap yeah yeah
mmf
yeah yeah yeah
read in a 0 or 1, then loop until $D is zero while incrementing $A and $B. This leaves us with $A containing ASCII value 0 or 1, depending on what was inputted, and $B containing 208. Then move REGPTR back to $A.
hrg
yeah
hrg
fap yeah yeah yeah yeah fap yeah fap yeah
mmf
yeah yeah yeah yeah yeah
yes
In the inner loop, move REGPTR to $B (which has the value 208) and loop until it is zero while incrementing $A and $F. This leaves us with $A containing 48 or 49 (the ASCII codes for 0 or 1) and $F containing 48. Then print $A, which will output a 0 or 1 depending on which one was inputted.
yeah yeah yeah yeah yeah
hrg
fap yeah fap yeah fap yeah yeah yeah yeah
mmf
yeah
mmf
Now we move REGPTR to $F and loop until that is zero while incrementing $A and $B. This leaves us with $A containing the ASCII value 0 or 1 and $B containing 208, just like it had at the start of the loop. The loop then exits if $A == 0 or loops infinitely if $A == 1.
This is my first codegolf answer so excuse any mistakes I made pls.
## PowerShell, 16 15 bytes
for(;$args){1}0
Try it Online!
Edit:
• Use a for loop to save a byte Try it online! – AdmBorkBork Jul 19 '18 at 15:45
# brainfuck (portable), 29 28 bytes
>>>,.[[->+<<+>]>-]<+[<<]>[.]
Try it online!
(Improved by 1 byte thanks to Jo King, who found a terser way to start the [<<] loop.)
I know that on PPCG it's normally sufficient for an answer to work on one interpreter, but as this is a catalogue, I wanted a demonstration of a brainfuck answer that should work on any interpreter, and which exits without error on input 0. This program makes no assumptions about cell size, EOF behaviour, wrapping, tape extension to the left, etc. It's the shortest brainfuck truth-machine I'm aware of with these properties.
## Explanation
>>>,.[[->+<<+>]>-]<+[<<]>[.]
>>>, Input a character to the fourth cell
. Output that character
[ ] While the current cell is nonzero:
[->+<<+>] Move the value to the next and previous cells
> Make the next cell current
- and decrement it
<+ Starting behind the current cell, at value 1
[<<] Move back two cells at a time until we find zero
>[ ] If the cell to the right is nonzero:
. Output it forever
The basic idea here is to create a range on the tape, containing all values from the input's ASCII code down to 0. (So for example, if the input is 0, we get 48 in the third cell, 47 in the fourth cell, 46 in the fifth cell, etc..) Once we've done that, we look at alternate values of the range until we end up at a tape element before the start of the range. If the range has an even length (i.e. the input has an even ASCII code), we'll end up two cells to the left of it, so moving to the right we'll end up in a cell we've never written to (and thus still has the value zero). If the range has an odd length, we'll end up only one cell to its left, so >[.] will move to the first cell of the range (i.e. the user input) and output it in a loop forever.
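A rough Python model of that parity argument (the modelling and naming are mine, not part of the program):

```python
def loops_forever(c: str) -> bool:
    # The program writes the descending range ord(c), ord(c)-1, ..., 1, 0
    # into consecutive cells, then [<<] walks back over it two cells at a
    # time. It lands such that >[.] reaches the input copy (and prints it
    # forever) iff the range has odd length, i.e. iff ord(c) is odd.
    return ord(c) % 2 == 1

print(loops_forever('0'))  # False: ord('0') = 48 is even -> the program halts
print(loops_forever('1'))  # True:  ord('1') = 49 is odd  -> 1 is printed forever
```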
# AppleScript, 93 Bytes
...this verbosity astounds me.
set a to(display dialog""default answer"")'s text returned
repeat while a="1"
log 1
end
log 0
## Haskell, 35 bytes
main=interact x
x"1"=cycle"1"
x a=a
The input must not be terminated by a newline. This works for me: echo -n 1 | ./truth-machine.
Edit: thanks @Zgarb for 3 bytes.
## CJam, 8 bytes
q~{_o}h;
There's no point in linking to the online interpreter, because that one doesn't like infinite loops.
This one works as well, printing newlines:
q~{_p}h;
### Explanation
q~ e# Read and evaluate input.
{ e# While the top of the stack is truthy (i.e. 1.).
_o e# Print a copy of the value on the stack.
}h
; e# We only get here if the value was 0. If so, discard the other 0 on the stack.
# TI-BASIC, 8 bytes
There are two programs that achieve 8 bytes:
Repeat not(Ans
Disp Ans
End
Repeat is TI-BASIC's do-until loop, so it doesn't check the condition the first time. The other way is recursion (name the program prgmT):
Disp Ans
If Ans
prgmT
Both take input from Ans; call using 0:prgmT or 1:prgmT.
• TIL the TI-84 doesn't have tail-call optimization for recursive programs. :( – Jakob May 25 '18 at 21:30
## Minkolang, 7 bytes
nd$,N?.
Try it here. (DON'T click Run!)
### Explanation
n    Take integer from input
d    Duplicate
$,   0 if 0, 1 otherwise
N Output as integer
?. Halt if 0, continue otherwise
This works because n pushes -1 if the input is empty...which is truthy! Also, Minkolang is toroidal so when the program counter moves off the right edge, it wraps around to the left edge and continues.
# Mouse, 16 bytes
?0=[0!$](1^1!)$
Ungolfed:
? 0 = [ ~ Read a number from STDIN and test it for equality with 0
0 ! $     ~ If equal, print 0 and exit
]
( 1 ^     ~ While true,
  1 !     ~ Print 1
)
$         ~ End of program
## Haystack, 15 12 bytes
0io=v
^1?|
Still working on a oneliner (if it's possible).
## C, 35 chars
main(c){for(gets(&c);c%puts(&c););}
This is even hackier than cbojar's solution, from which I copied the abuse of the parameter c (int used as char[4]), along with the reliance on little-endian.
puts returns a non-negative number on success, which (on my Linux/gcc 4.8.2) happens to be the number of bytes printed, which happens to be 2. c%2 tests if c is odd, which is true for '1' and false for '0'.
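As a quick sanity check of the parity trick (a sketch in Python, not part of the answer):

```python
# c%2 distinguishes the two inputs because of their ASCII codes:
print(ord('0'), ord('0') % 2)  # 48 0 -> loop condition is false, exit
print(ord('1'), ord('1') % 2)  # 49 1 -> loop condition is true, keep printing
```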
# Squirrel, 48 bytes
local a=stdin.readn('b')-48;do print(a) while(a)
# Quipu, 20 bytes
\/1&
/\/\
1&??
>>
::
# TeaScript, 21 16 bytes
TeaScript is JavaScript for golfing, created by user Vɪʜᴀɴ.
for(;alert×|x;);
The input is automatically stored in variable x. × (U+00D7 Multiplication Sign) is a shortcut for (x).
Try it in the online interpreter
• See this line for an explanation of the bug behavior. – Mama Fun Roll Nov 4 '15 at 0:27
• @ןnɟuɐɯɹɐןoɯ Ah, yes. That explains why whil(x) did the same thing, but not whi(x). Thanks! – ETHproductions Nov 4 '15 at 1:22
## O, 14 bytes
i{1{1o1}w}{0o}?
O is a work-in-progress language with loads of commands and an interpreter written in "APL-style C", which means incomprehensible code.
i Get input as String
{ Start a CodeBlock (like ruby)
1 Push 1 to the stack
{ Start a CodeBlock
1 Push 1 to the stack
o Pop the stack and print it
1 Push 1 to the stack
} Push the CodeBlock to the stack
w Do the CodeBlock on the top of the stack while the value under it is true. (Pops them both.)
} Push the CodeBlock to the stack
{ Start a CodeBlock
0 Push 0 to the stack
o Pop the stack and print it
} Push the CodeBlock to the stack
? If the 3rd down value in the stack is truthy, do the CodeBlock 2nd down in the stack, otherwise do the CodeBlock on the top. (Pops the first 3 values on the stack.)
• "APL-style C" - you mean nightmares? – Mego Nov 15 '15 at 5:07
• @Mego yep, pretty much – Hipe99 Nov 15 '15 at 5:14
• I think you're being unfair to APL. – lirtosiast Nov 16 '15 at 1:20
• @ThomasKwa it's a direct quote. – Hipe99 Nov 16 '15 at 19:41
• @Hipe that is horrible. must it be golfed? – cat Jul 22 '16 at 11:18
# O, 12 11 6 bytes
j{.o}w
When 0 is inputted, 0 is outputted and the program ends. When 1 is inputted, 1 is outputted forever.
Explanation:
j Get input as Number
{ }w While the input is 1
.o Print the 1
Print the stack when code ends, which will only contain 0
# Binary-Encoded Golfical, 11+1 (-x flag)= 12 bytes
Hexdump of binary encoding:
00 40 02 15 14 1B 1A 17 14 24 1D
Original image:
Magnified 125x, with color labels:
Rough translation:
*p=readnum
lbl A
print *p
if *p!=0 goto A
## Turing Machine Code, 19 bytes
0 0 * * 1
0 * 1 r 0
Halts on 0 because there is no state 1.
# 𝔼𝕊𝕄𝕚𝕟, 6 chars / 11 bytes
↻ôï|ï;
Try it here (Firefox only).
• Is the language name supposed to be "ESMin", or just five boxes? – mbomb007 Nov 4 '15 at 17:01
• ESMin in double-struck. It doesn't show up properly in some fonts, though... :( – Mama Fun Roll Nov 4 '15 at 17:53
• Ah, okay. I can only see the double-struck letters ℂ, ℍ, ℕ, ℙ, ℚ, ℝ, and ℤ. – mbomb007 Nov 4 '15 at 22:03
## Mathematica, 36 33 bytes
For[Print[i=Input[]],i>0,Print@i]
# 05AB1E, 5 bytes
Code:
Di[D?
Explanation:
D # Duplicate input
i # If True (or 1), do
[ # Infinite loop
D # Duplicate top of the stack
? # Pop a, print a with no newline
• Based on the date this was posted I assume implicit input wasn't a thing yet back then, but both D can be removed now to save 2 bytes. – Kevin Cruijssen Jan 22 at 16:26
## Batch, 35 bytes
@IF %1==0 (exit)
:l
@echo 1&goto l
Please, anyone who can golf this more is more than welcome to.
## Java, 87 bytes
interface A{static void main(String[]a){System.out.print(a[0]);main(a[0].split("0"));}}
(has output to STDERR, but that should not matter)
• I'm not entirely sure if this is valid. The rules state that the program must infinitely run unless killed or out of memory. This will cause a stack overflow, since Java doesn't have tail call recursion optimization. – Mego Jan 25 '16 at 13:31
• Yes, for the standard JVM; but there the size of the stack depends on the memory allocated. If the memory allocated by -Xss is large enough, it will run arbitrarily long (see also stackoverflow.com/questions/4734108/…). The stack overflow therefore is just the visible result of the program being out of memory. – senegrom Jan 25 '16 at 13:36
• Fair enough, I didn't consider -Xss. Upvoted, and welcome to PPCG! – Mego Jan 25 '16 at 13:37
• @SeanBean "take input from STDIN or an acceptable alternative" - command-line arguments are an acceptable input method. – Mego Sep 1 '16 at 9:27
• maybe downvote the challenge if you don't like it, instead of this particular Java answer? Not that I mind but you could reach a larger audience there... downvoting because you believe an answer is the shortest kind of defeats the purpose of codegolf?? – senegrom Sep 7 '16 at 18:28
# HALT, 49 bytes
1 IF '0' 2 ELSE 3
2 TYPE '0';HALT
3 TYPE '1';SET 1
Pretty simple. If the input is 1, go to 3, output 1, and set the pointer back to 1 so the program never ends. If the input is 0, go to 2, output 0, then halt.
Online interpreter (Firefox only)
• such amazing much wow where's rightgoat – Seadrus Feb 23 '16 at 3:12
• @Seadrus ---I ate him---. I mean, he'll be coming soon – Chathuahua Feb 23 '16 at 3:12
• No he won't, you're the only goat left ;) – J Atkin Feb 24 '16 at 3:12
# Whenever, 72 bytes
From the webpage:
# Introduction
Whenever is a programming language which has no sense of urgency. It does things whenever it feels like it, not in any sequence specified by the programmer.
# Design Principles
• Program code lines will always be executed, eventually (unless we decide we don't want them to be), but the order in which they are executed need not bear any resemblance to the order in which they are specified.
• Variables? We don't even have flow control, we don't need no steenking variables!
• Data structures? You have got to be kidding.
The official Java interpreter doesn't seem to handle read(), but the spec says it should work, so:
1 -2,2#read();
2 defer(1) again(2) print("1");
3 defer(1||2) print("0");
The program works like this:
• Initially 1, 2 and 3 are on the list but 2 and 3 must wait until 1 is gone.
• When 1 is executed it removes 2 and then adds 2 back stdin times. Therefore:
• 2 is only on the list if stdin was 1.
• 3 must wait until 2 is gone so it can only execute if stdin was 0.
• If 2 is executed it will add itself to the list again and print '1'.
• If 3 is executed (meaning 2 is not on the list) it will print '0'.
• At this point we will be in one of two states:
• 2 will be on the list printing '1' and 3 will be perpetually waiting
• 1, 2 and 3 will all be gone and the program will end.
# R, 16 28 25 bytes
x=scan();while(x)cat(1);0
• this isn't valid, because the 0 will be printed out with an additional [1] preceding it, and the post states that the output must only be either a 0 followed by either one or no newline or space, or infinite 1s with each 1 followed by either one or no newline or space. – Giuseppe Mar 3 '18 at 15:24
# Gaot++, 125 bytes
bleeeeeeeeeeet
bleeeet bleeeeeet
bleeeeeeeeeeeeet
bleeeeeeeeeeeeet
bleeeeeeeeeeeeet
bleeeeeeeeeeeeet
bleeeeeeeeet
bleeeeeeeet
Compressed: 11e4e6e13e13e13e13e9e8e6e
Try it online!
---
# Math Help - help mi wif tis questions
1. ## help mi wif tis questions
i duno how to do~
2. Originally Posted by xiaoz
i duno how to do~
the first one is all substitution.
$A\;=\;\frac{\left(2n-4\right)\times90}{n}$ so substitute 8 for n
$A\;=\;\frac{\left(2(8)-4\right)\times90}{8}$ and solve
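Carrying out the arithmetic: $A\;=\;\frac{12\times 90}{8}\;=\;\frac{1080}{8}\;=\;135$, so each interior angle of a regular octagon measures $135^o$.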
number 2 asks you to solve for n, so lets do it...
$A\;=\;\frac{\left(2n-4\right)\times90}{n}$ use the distributive property
$A\;=\;\frac{2n\cdot90-4\cdot90}{n}$ simplify
$A\;=\;\frac{180n-360}{n}$ multiply both sides by n
$An\;=\;180n-360$subtract 180n from both sides
$An-180n\;=\;-360$ subtract
$\left(A-180\right)n\;=\;-360$ divide both sides by A-180
$n\;=\;\frac{-360}{A-180}$ simplify
$n\;=\;\frac{360}{-A+180}$ simplify
$n\;=\;\frac{360}{180-A}$ and thats what n equals. Substitute the given number for A (157.5)
$n\;=\;\frac{360}{180-157.5}$ and solve, this will give you the answer for a.ii
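That is, $n\;=\;\frac{360}{22.5}\;=\;16$, so the polygon has 16 sides.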
3. how to do a(iii) ?
4. Hello, xiaoz!
i duno how to do~
a) The measurement $A^o$ of an interior angle of a regular polygon of $n$ sides
. . . is given by the formula: . $A\;=\;\frac{(2n-4)\times 90}{n}$ **
(1) Find the measure of an interior angle of a regular polygon with 8 sides.
How about plugging in $n = 8$ ?
(2) Write $n$ in terms of $A.$ .Hence, find the number of sides
of the regular polygon with interior angles measuring 157.5° each.
This requires some knowledge of Algebra I . . . too bad!
We have: . $\frac{90(2n - 4)}{n} \;= \;A$
Multiply both sides by $n:\;\;90(2n - 4) \;= \;An\quad\Rightarrow\quad 180n - 360 \;= \;An$
Then: . $180n - An \;= \;360$
Factor: . $(180 - A)n \;= \;360\quad\Rightarrow\quad n \:=\:\frac{360}{180 - A}$
We are told that $A = 157.5$
Plug it in and determine $n.$
(3) Find the regular polygon such that when the number of sides is doubled
the measure of an interior angle is doubled.
This is the only one that's tricky . . .
For $n$ sides, the angle is: . $\frac{90(2n-4)}{n}\:=\:A\quad\Rightarrow\quad\frac{180n - 360}{n}\:= \:A$ [1]
For $2n$ sides, the angle is: . $\frac{90(4n-4)}{2n}\:=\:2A\quad\Rightarrow\quad \frac{360n - 360}{4n}\:=\:A$ [2]
Equate [1] and [2]: . $\frac{180n - 360}{n} \:= \:\frac{360n - 360}{4n}\quad\Rightarrow\quad 720n^2 - 1440n \:= \:360n^2 - 360n$
. . which simplifies to: . $360n^2 - 1080n \:=\:0$
. . which factors: . $360n(n - 3) \:= \:0$
. . and has the positive root: . $n = 3$
This checks out.
With $n = 3$ we have an equilateral triangle: interior angle $60^o.$
Double the number sides and we have a hexagon: interior angle $120^o.$
. . . . . . . . ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
**
This is a really stupid way of writing the formula!
The proper form is: . $A\:=\:\frac{180(n - 2)}{n}$
It can be shown that the sum of interior angles of any n-gon is: $180(n - 2)$
If all $n$ angles are equal, each angle measures: . $\frac{180(n-2)}{n}$ degrees.
Someone is going to say, "What's the difference? .They're equal!"
Well, that is certainly true.
This means that we will have dozens of formulas to memorize . . .
$A\;=\;\frac{(n - 2) \times 180}{n} \;= \;\frac{(2n - 4) \times 90}{n}\;=$ $\frac{(3n - 6) \times 60}{n} \;=\;\frac{(4n - 8) \times 45}{n}$
. . . $= \;\frac{(5n -10) \times 36}{n} \;= \;\frac{(6n - 12) \times 30}{n} \;=$ $\frac{(9n - 18) \times 20}{n} \;\hdots$
Catch my drift?
5. ## I have split this thread
Xiaoz,
I have split this thread so that your new question is in a thread
of its own.
It is a good idea to create a new thread when you have a new
question, otherwise your thread may become messy and difficult to
follow.
RonL
6. ok..
---
Effective Hamiltonians for Constrained Quantum Systems
Jakob Wachsmuth, University of Bremen, Germany, and Stefan Teufel, University of Tübingen, Germany
Memoirs of the American Mathematical Society
2014; 83 pp; softcover
Volume: 230
ISBN-10: 0-8218-9489-7
ISBN-13: 978-0-8218-9489-7
List Price: US$65 Individual Members: US$39
Institutional Members: US\$52
Order Code: MEMO/230/1083
The authors consider the time-dependent Schrödinger equation on a Riemannian manifold $\mathcal{A}$ with a potential that localizes a certain subspace of states close to a fixed submanifold $\mathcal{C}$. When the authors scale the potential in the directions normal to $\mathcal{C}$ by a parameter $\varepsilon\ll 1$, the solutions concentrate in an $\varepsilon$-neighborhood of $\mathcal{C}$. This situation occurs for example in quantum wave guides and for the motion of nuclei in electronic potential surfaces in quantum molecular dynamics. The authors derive an effective Schrödinger equation on the submanifold $\mathcal{C}$ and show that its solutions, suitably lifted to $\mathcal{A}$, approximate the solutions of the original equation on $\mathcal{A}$ up to errors of order $\varepsilon^3|t|$ at time $t$. Furthermore, the authors prove that the eigenvalues of the corresponding effective Hamiltonian below a certain energy coincide up to errors of order $\varepsilon^3$ with those of the full Hamiltonian under reasonable conditions.
---
# Connect remotely to Mongo DB
I am trying to connect to a MongoDB remotely using Mathematica.
DB Name: http://m0.lmfdb.xyz/
I went to the user mailing list where this code was posted to connect in Python.
from pymongo import MongoClient
client = MongoClient('m0.lmfdb.xyz', 27017)
client.database_names()
I am trying to get this code into Mathematica. Here is what I have so far.
MongoConnect[<|"Host" -> "m0.lmfdb.xyz", "Port" -> 27017|>]
I get the following error message:
Message[MongoConnect::mongoliberr, "WL_ClientSimpleCommand", "No suitable servers found (serverSelectionTryOnce set): [connection timeout calling ismaster on 'm0.lmfdb.xyz:27017']"]
Can anyone help me with this? I am new to Mongo and appreciate any help in this matter.
Thanks
Chris
• – Daniel Huber Dec 5 '20 at 13:48
• Thanks. I will take a look. - Chris – Chris Dec 7 '20 at 15:28
---
# Building Chapel
To build the Chapel compiler, set up your environment as described in Chapel Quickstart Instructions (or Setting up Your Environment for Chapel for more settings), cd to $CHPL_HOME, and run GNU make.

On many systems, GNU make is available simply as make. On others, it is called gmake. So, first check if make refers to GNU make:

# If the first line includes "GNU Make", you have GNU Make.
make -v

Further instructions will assume make refers to GNU Make. If that's not the case, you'll need to replace make with gmake.

Now build the Chapel compiler:

cd $CHPL_HOME
make
Parallel builds such as make -j 6 are also supported.
If everything works as intended, you ought to see:
1. each of the compiler source subdirectories being compiled
2. the compiler binary getting linked and stored as:
$CHPL_HOME/bin/$CHPL_HOST_PLATFORM/chpl
3. the runtime support libraries being compiled, archived, and stored in a configuration-specific subdirectory under:
$CHPL_HOME/lib/$CHPL_TARGET_PLATFORM.$CHPL_TARGET_COMPILER.../

If you get an error or failure during the make process, first double-check that you have the required prerequisites (see Chapel Prerequisites). If you do, please submit a bug report for the failure and any workaround that you come up with, through Reporting Chapel Issues.

Note that each make command only builds the compiler and runtime for the current set of CHPL_ environment variables defined by (and inferred for) your environment. Thus, while the directory structure above supports the ability to have multiple versions of the compiler and runtime built simultaneously, only one version will be created for each make command. To support additional host/target platforms, host/target compilers, or threading/communication layers, you will need to reset your environment variables and re-make.

After a successful build, you should be able to run the compiler and display its help message using:

chpl --help

In which case, you will be ready to move on to compiling with the Chapel compiler (see Compiling Chapel Programs). The rest of this file gives more information about Chapel's Makefiles for advanced users or developers of Chapel.

## Platform Support

Currently supported platforms include 32- and 64-bit Linux, Mac OS X, Cygwin (Windows), SunOS, a variety of current Cray platforms, and a few systems by other vendors. Most UNIX-based environments ought to support Chapel (subject to the assumptions in Chapel Prerequisites), but may not be supported "out-of-the-box" by our current Makefile structure. See the section below on platform-specific settings for more information on adding support for additional UNIX-compatible environments. Note that a single Chapel installation can simultaneously support Chapel for multiple platforms and compiler options because all platform-specific binary files and executables are stored in subdirectories named by CHPL_ environment variables.

## Makefile Targets

The Chapel sources are structured so that a GNU-compatible make utility can be used in any source directory to build the sources contained in that directory and its subdirectories. All of these Makefiles support the following targets:

Target | Action
--- | ---
(nothing), default, all | Build the appropriate output files, e.g. objects, libraries, executables
clean | Remove the intermediate files for this configuration
cleanall | Remove the intermediate files for all configurations
clobber | Remove everything created by the Makefiles (note: make clobber will remove chplconfig)
install | Install chapel to a previously configured location

Each target processes all subdirectories then the current directory.

## Makefile Options

The Chapel makefiles have a few options that enable or disable optimization, debugging support, profiling, and backend C compiler warnings. The variables are described below. Set the value to 1 to enable the feature.

Option | Effect
--- | ---
DEBUG | Generate debug information (e.g. add -g to C compiler).
OPTIMIZE | Enable optimizations (e.g. add -O3 to C compiler).
PROFILE | Enable profiling support (e.g. add -pg to C compiler).
WARNINGS | Promote backend C compiler warnings to errors.

## Platform-specific Settings

The structure of Chapel's Makefiles is designed to factor any compiler-specific settings in $CHPL_HOME/make/compiler/Makefile.<compiler>, where <compiler> refers to $CHPL_HOST_COMPILER for the compiler sources and $CHPL_TARGET_COMPILER for the runtime sources and generated code.
Refer to Setting up Your Environment for Chapel for more information about these variables and their default settings.
In addition, any architecture-specific settings are defined in $CHPL_HOME/make/platform/Makefile.<platform>, where <platform> refers to $CHPL_HOST_PLATFORM for the compiler sources and \$CHPL_TARGET_PLATFORM for the runtime sources and generated code. Again, Setting up Your Environment for Chapel details these variables and their default settings.
If you try making the compiler and runtime for an unknown platform, it will assume that you want to use gcc/g++ to compile the code and that you require no platform-specific settings. You can add support for a new build environment by creating Makefile.<compiler> and/or Makefile.<platform> files and setting your environment variables to refer to those files. If you do develop new build environment support that you would like to contribute back to the community, we encourage you to send your changes back to us at: chapel_info@cray.com
## Installing Chapel
Chapel can be built and installed as follows:
./configure # adding appropriate options
make
make install # possibly with elevated privilege
See ./configure --help for more information on the options available.
Note
./configure will save the current configuration into a chplconfig file and can set the installation path that will be compiled in to the chpl binary.
---
# Sturm-Liouville equation
An ordinary differential equation of the second order
$$-\frac{d}{dx} \left\{ p(x) \frac{dy}{dx} \right\} + l(x)y = \lambda r(x) y,$$ where $x$ varies in a given finite or infinite interval $(a, b)$, $p(x)$, $l(x)$, $r(x)$ are given coefficients, $\lambda$ is a complex parameter, and $y$ is the sought solution. If $p(x), r(x)$ are positive, $p(x)$ has a first derivative and $p(x) r(x)$ has a second derivative, then by the Liouville substitution (see [1]) this equation may be reduced to the standard form
$$-y'' + q(x)y = \lambda y, \qquad a < x < b. \tag{1}$$ It is assumed that the complex function $q$ is measurable on $(a, b)$ and summable on each of the subintervals in it. At the same time one also considers the non-homogeneous equation
$$-y'' + q(x)y = \lambda y + f(x), \qquad a < x < b, \tag{2}$$ where $f$ is a given function.
If $f$ is measurable on $(a, b)$ and summable on each of the subintervals in it, then for all complex numbers $c_0, c_1$ and any interior point $x_0$, equation (2) has on $(a, b)$ one and only one solution $y(x, \lambda)$ satisfying the conditions $y(x_0, \lambda) = c_0$, $y'(x_0, \lambda) = c_1$. For any $x \in (a, b)$ the function $y(x, \lambda)$ is an entire analytic function of $\lambda$. As $x_0$ one can take one of the end-points of $(a, b)$ (if this end-point is regular, cf. Sturm–Liouville operator).
Let $y_1(x, \lambda)$ and $y_2(x, \lambda)$ be two arbitrary solutions of (1). Their Wronskian $$W(y_1, y_2) = y_1(x, \lambda) y_2'(x, \lambda) - y_1'(x, \lambda) y_2(x, \lambda)$$ is independent of $x$ and vanishes if and only if these solutions are linearly dependent. The general solution of (2) is of the form
$$y(x, \lambda) = a_1 y_2(x, \lambda) + a_2 y_2(x, \lambda) + \int_{x_0}^x R(x, \xi, \lambda) f(\xi) \, d\xi,$$ where
$$R(x, \xi, \lambda) = \frac{1}{W(y_1, y_2)} \{y_1(x, \lambda) y_2(\xi, \lambda) - y_1(\xi, \lambda)y_2(x, \lambda)\},$$ $a_1, a_2$ are arbitrary constants and $y_1(x, \lambda), y_2(x, \lambda)$ are linearly independent solutions of (1).
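The $x$-independence of the Wronskian claimed above can be seen directly: differentiating and substituting $y'' = (q - \lambda)y$ from (1) gives
$$\frac{d}{dx} W(y_1, y_2) = y_1 y_2'' - y_1'' y_2 = y_1 (q - \lambda) y_2 - (q - \lambda) y_1 y_2 = 0.$$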
The following fundamental theorem of Sturm (see [1]) is true: Let two equations
$$u''+q_1(x) u = 0, \tag{3}$$
$$v'' + q_2(x) v = 0 \tag{4}$$ be given. If $q_1(x), q_2(x)$ are real and $q_1(x) < q_2(x)$ on the entire interval $(a, b)$, then between any two zeros of any non-trivial solution of the first equation there is at least one zero of each solution of the second equation.
The following theorem is known as the comparison theorem (see [1]): Let the left-hand end-point of $(a,b)$ be finite, let $u(x)$ be a solution of (3) satisfying the conditions $u(a) = \sin \alpha$, $u'(a) = \cos \alpha$, and let $v(x)$ be a solution of (4) with the same conditions; let, moreover, $q_1(x) \leq q_2(x)$ on the whole interval $(a, b)$. Then, if $u(x)$ has $m$ zeros on $(a, b)$, $v(x)$ will have at least $m$ zeros and the $k$-th zero of $v(x)$ will be less than the $k$-th zero of $u(x)$.
One of the important properties of (1) is the existence of so-called operator transforms with a simple structure. Operator transforms arose from general algebraic considerations related to the theory of generalized shift operators (change of the basis).
There are the following types of operator transforms for equation (1). Let $y(x, \lambda)$ be the solution of
$$-y'' + q(x) y = \lambda^2 y ,\qquad -a < x < a, \quad a \le \infty , \tag{5}$$ satisfying the conditions
$$y(0, \lambda) = 1, \quad y'(0, \lambda) = i \lambda. \tag{6}$$ It turns out that this solution has the following representation:
$$y(x, \lambda) = e^{i\lambda x} + \int_{-x}^x K(x, t) e^{i\lambda t} \, dt,$$ where $K(x, t)$ is a continuous function independent of $\lambda$; moreover,
$$K(x, x) = \frac 12 \int_0^x q(t) \, dt, \qquad K(x, -x) = 0.$$ The integral operator $I + K$ defined by
$$(I + K) f = f(x) + \int_{-x}^x K(x, t) f(t) \, dt$$ is called an operator transform (a transmutation operator), and preserves the conditions at the point $x= 0$. It transforms the function $e^{i\lambda x}$ (a solution of the simplest equation $-y'' = \lambda^2 y$ with the conditions (6)) into the solution of (5) under the same conditions at the point $x=0$. Let $\phi_h(x, \lambda)$ and $\phi_\infty(x, \lambda)$ be the solutions of (5) satisfying
$$\phi_h(0, \lambda) = 1, \qquad \phi_h'(0, \lambda) = h.$$
$$\phi_\infty(0, \lambda) = 0, \qquad \phi_\infty'(0, \lambda) = 1.$$ These solutions have the representations
$$\phi_h(x, \lambda) = \cos \lambda x + \int_0^x K_h(x, t) \cos \lambda t \, dt,$$
$$\phi_\infty(x, \lambda) = \frac{\sin \lambda x}{\lambda} + \int_0^x K_\infty(x, t) \frac{\sin \lambda t}{\lambda} \, dt,$$ where $K_h(x, t)$ and $K_\infty(x, t)$ are continuous functions.
A new type of operator transforms has been introduced (see [8]) that preserves the asymptotic behaviour of solutions at infinity; namely, it turned out that for all $\lambda$ in the upper half-plane, $\text{Im } \lambda \ge 0$, the equation (5), considered on the half-line $0 \le x < \infty$ under the conditions $\int_0^\infty x |q(x)| \, dx < \infty$, has a solution $y(x, \lambda)$ that can be represented in the form
$$y(x, \lambda) = e^{i \lambda x} + \int_x^\infty K(x, t) e^{i \lambda t} \, dt,$$ where $K(x, t)$ is a continuous function satisfying the inequality
$$|K(x, t)| \le \frac 12 \sigma\left(\frac{x+t}{2}\right)\exp\left\{\sigma_1(x) - \sigma_1\left(\frac{x+t}{2}\right)\right\},$$ in which
$$\sigma(x) = \int_x^\infty |q(t)| \, dt, \qquad \sigma_1(x) = \int_x^\infty \sigma(t) \, dt.$$ Moreover,
$$K(x, x) = \frac 12 \int_x^\infty q(t) \, dt.$$
#### References
[1] B.M. Levitan, I.S. Sargsyan, "Introduction to spectral theory: selfadjoint ordinary differential operators", Amer. Math. Soc. (1975) (Translated from Russian)
[2] M.A. Naimark, "Lineare Differentialoperatoren", Akademie Verlag (1960) (Translated from Russian)
[3] B.M. Levitan, "Generalized translation operators and some of their applications", Israel Program Sci. Transl. (1964) (Translated from Russian)
[4] V.A. Marchenko, "Sturm–Liouville operators and applications", Birkhäuser (1986) (Translated from Russian)
[5] J. Delsarte, "Sur certaines transformations fonctionnelles rélatives aux équations linéaires aux dérivées partielles du second ordre" C.R. Acad. Sci. Paris, 206 (1938) pp. 1780–1782
[6] A.Ya. Povzner, "On Sturm–Liouville type differential equations on the half-line" Mat. Sb., 23 : 1 (1948) pp. 3–52 (In Russian)
[7] B.M. Levitan, "The application of generalized shift operators to linear second-order differential equations" Uspekhi Mat. Nauk, 4 : 1 (1949) pp. 3–112 (In Russian)
[8] B.Ya. Levin, "Transformations of Fourier and Laplace types by means of solutions of second order differential equations" Dokl. Akad. Nauk SSSR, 106 : 2 (1956) pp. 187–190 (In Russian)
[9] B.M. Levitan, "Inverse Sturm–Liouville problems", VNU (1987) (Translated from Russian)
---
### How I Organize My Obsidian Vault
This post is a part of the obsidian series.
Obsidian is a wonderful tool to take and link your markdown notes. Before Obsidian, I would simply write markdown notes in a directory and leverage Git to incrementally check in the changes and sync my notes between devices. When I was doing this, I would organize my notes in a series of directories, which seemed to work. What I ended up running into was a file that would seemingly belong in more than one directory. For example, say I had a directory called software-engineering, and I also had a directory called data-science. If I wrote a note that could arguably be in either directory, I found myself arbitrarily making a decision on where that note would go, and would end up having a hard time finding it without searching my notes.
Introducing Obsidian. Obsidian has the lovely ability to arbitrarily tag your files. This means that at the start of every markdown file you can write YAML front matter which allows one to annotate the note’s tags. For example, it might look like this:
---
tags: [data-science, software-engineering]
---
What’s fantastic about this is now notes can now belong to more than one idea. To further leverage this concept, I now simplify my notes directory by having it flat. In other words, I have no directories in my notes. Instead, I use Obsidian’s tags to create bonds between my files.
Obsidian also allows for stronger bonds to be made between files. In the contents of a file, you can write another file’s name in double square brackets, for example:
some-file.md
[[some-other-file]]
This, in combination with tags, allows one to really create connections between files on the go, while maintaining a simple to read directory.
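Putting the two together, a complete note might look like this (the file names here are just illustrative):

```markdown
---
tags: [data-science, software-engineering]
---

These ideas build on [[some-other-file]].
```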
---
Gene set / pathway analysis from a systems biology preview (?)
3.8 years ago
arronar ▴ 260
Hello.
On my way to reading up on and running a pathway/gene set analysis on some microarray data, I realized that besides the fact that there are many different ways (statistical methods) to do the analysis, all of them (at least as far as I know) remain purely statistical and, as you already know, return a p-value and/or an enrichment score for each pathway/gene set.
Today, I was wondering if there is any approach ever implemented that does pathway analysis but also takes into account information from systems biology, and not only statistical inference.
Such information could be the role of each gene inside the gene set, like inducer or inhibitor. It could also be information about the topology of that specific gene set and maybe the genes' inter-correlations.
Does anyone have in mind any research article on such an approach ? Any info or even idea is welcomed.
Thank you.
systems biology pathway analysis • 1.2k views
3.8 years ago
natasha.sernova ★ 3.9k
There is a pathway review:
http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002375
Ten Years of Pathway Analysis: Current Approaches and Outstanding Challenges
Purvesh Khatri, Marina Sirota, Atul J. Butte
There were some Biostars posts:
Tools For Pathway/Gene Set Analysis Of Gwas (Genome-Wide Association Study) Data
How Do You Deal With Biological Context During Pathway Analysis?
A: Comparison Of Pathways Between Microbial Genomes
Analysing biological pathways in genome-wide association studies
Kai Wang, Mingyao Li & Hakon Hakonarson
Tools:
Pathway Tools version 13.0: integrated software for pathway/genome informatics and systems biology Peter D. Karp, Suzanne M. Paley, Markus Krummenacker, Mario Latendresse, Joseph M. Dale, Thomas J. Lee, Pallavi Kaipa, Fred Gilham, Aaron Spaulding, Liviu Popescu, Tomer Altman, Ian Paulsen, Ingrid M. Keseler, Ron Caspi
Brief Bioinform. 2010 Jan; 11(1): 40–79. Published online 2009 Dec 2. doi: 10.1093/bib/bbp043
PeTTSy: a computational tool for perturbation analysis of complex systems biology models Mirela Domijan, Paul E. Brown, Boris V. Shulgin, David A. Rand
BMC Bioinformatics. 2016; 17: 124. Published online 2016 Mar 10. doi: 10.1186/s12859-016-0972-2
---
# Morse potential
The Morse potential, named after physicist Philip M. Morse, is a convenient model for the potential energy of a diatomic molecule. It is a better approximation for the vibrational structure of the molecule than the quantum harmonic oscillator because it explicitly includes the effects of bond breaking, such as the existence of unbound states. It also accounts for the anharmonicity of real bonds and the non-zero transition probability for overtone and combination bands.
## Potential Energy Function
The Morse potential energy function is of the form
$V(r) = D_e ( 1-e^{-a(r-r_e)} )^2$.
Here $r$ is the distance between the atoms, $r_e$ is the equilibrium bond distance, $D_e$ is the well depth (defined relative to the dissociated atoms), and $a$ controls the 'width' of the potential. The dissociation energy of the bond can be calculated by subtracting the zero point energy $E(0)$ from the depth of the well. The force constant of the bond can be found by taking the second derivative of the potential energy function, from which it can be shown that the parameter $a$ is
$a=\sqrt{k_e/2D_e}$,
where $k_e$ is the force constant at the minimum of the well.
Of course, the zero of potential energy is arbitrary, and the equation for the Morse potential can be rewritten any number of ways by adding or subtracting a constant value.
## Vibrational Energy
Stationary states on the Morse potential have eigenvalues
$E(v) = h\nu_0 (v+1/2) - \frac{\left[h\nu_0(v+1/2)\right]^2}{4D_e}$
where $v$ is the vibrational quantum number, and $\nu_0$ has units of frequency, and is mathematically related to the particle mass, $m$, and the Morse constants via
$\nu_0 = \frac{a}{2\pi} \sqrt{2D_e/m}$.
Whereas the energy spacing between vibrational levels in the quantum harmonic oscillator is constant at $h\nu_0$, the energy between adjacent levels decreases with increasing $v$ in the Morse oscillator. Mathematically, the spacing of Morse levels is
$E(v+1) - E(v) = h\nu_0 - (v+1) (h\nu_0)^2/2D_e\,$.
This trend matches the anharmonicity found in real molecules. However, this equation fails above some value of $v$ where $E(v + 1) - E(v)$ is calculated to be zero or negative. This failure is due to the finite number of bound levels in the Morse potential, and some maximum $v_m$ that remains bound. For energies above $v_m$, all the possible energy levels are allowed and the equation for $E(v)$ is no longer valid.
Below $v_m$, $E(v)$ is a good approximation for the true vibrational structure in non-rotating diatomic molecules. In fact, real molecular spectra are generally fit to the form
$E_v / hc = \omega_e (v+1/2) - \omega_e\chi_e (v+1/2)^2\,$
in which the constants $\omega_e$ and $\omega_e\chi_e$ can be directly related to the parameters for the Morse potential.
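As a quick numerical illustration of these formulas (a Python sketch; the parameter values below are made up for illustration, not fitted to any real molecule):

```python
h_nu0 = 0.5   # h*nu_0, the harmonic quantum (illustrative units)
D_e = 10.0    # well depth, same units

def E(v: int) -> float:
    """Morse eigenvalue E(v) = h*nu0*(v + 1/2) - [h*nu0*(v + 1/2)]**2 / (4*D_e)."""
    x = h_nu0 * (v + 0.5)
    return x - x**2 / (4 * D_e)

# The spacing E(v+1) - E(v) = h*nu0 - (v+1)*(h*nu0)**2 / (2*D_e)
# shrinks linearly with v, unlike the constant harmonic-oscillator spacing:
for v in range(4):
    print(v, round(E(v + 1) - E(v), 4))

# The spacing formula turns non-positive near v = 2*D_e/h_nu0 - 1,
# reflecting the finite number of bound levels:
v_m = int(2 * D_e / h_nu0 - 1)
print("last bound level by this estimate:", v_m)
```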
## Solving Schrödinger's equation for the Morse oscillator
Like the quantum harmonic oscillator, the energies and eigenstates of the Morse potential can be found using operator methods. One approach involves applying the factorization method to the Hamiltonian.
---
# What is the difference between Shor's algorithm for factoring and Shor's algorithm for logarithm
There is a paper from Peter W. Shor from 1994: http://www.csee.wvu.edu/~xinl/library/papers/comp/shor_focs1994.pdf "Algorithms for Quantum Computation: Discrete Logarithms and Factoring", and I have a question about the paper and algorithms presented.
For the integer factoring problem, Shor's algorithm works as a fast period finder for the function f(x) = a^x mod N, where the semiprime N equals p*q, a is fixed, and all possible exponents x are computed quantumly. Then, Shor uses Simon's algorithm to find the period r of f(x) such that f(x) = f(x+r). With probability 0.5, the output r of the quantum scheme will be even and a^(r/2) will be a non-trivial square root of 1 mod N. Having such a root we can easily crack N into p and q. (If there is something wrong in this description, please comment. A simple quantum circuit is here: http://www.nature.com/nature/journal/v414/n6866/fig_tab/414883a_F1.html )
But how does Shor's algorithm for the discrete logarithm problem work? There is only a prime number p (and a generator g), so we can't factor it into something having a square root. I'm not even sure that there will be any non-trivial square root in the mod p field.
The task for the discrete logarithm is: given some x, equal to x = g^r mod p, with g and p known, find r.
Assume (as in Section 4 Discrete Log: the easy case of Shor's paper) that you have an efficient quantum algorithm for the Fourier transform. Then, applying this Fourier transform twice (once for $a$ and once for $b$) on a quantum superposition of values $g^a\cdot x^{-b}$ and measuring yields a pair $(c,d)$ with $d\equiv -cr\pmod{p-1}$. Thus, you can recover $r$ (if the gcd of $c$ and $p-1$ is $1$) directly with a simple division modulo $p-1$. [If the gcd is a small number it is also easy to conclude]
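Spelled out, the classical post-processing is one modular inversion: given a measured pair $(c,d)$ with $d\equiv -cr\pmod{p-1}$ and $\gcd(c, p-1)=1$,
$$r \equiv -d\,c^{-1} \pmod{p-1},$$
with $c^{-1}$ computed by the extended Euclidean algorithm.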
Unfortunately, doing this Fourier transform directly is only easy when $p-1$ is smooth (which is an uninteresting case since dlogs are easy with a classical computer in that case) and technicalities arise because of this. These technicalities involve replacing the Fourier transform modulo $p-1$ by a Fourier transform modulo $q,$ where $q$ is smooth and not too far from $p-1$. A similar technique is used for factoring: it is simpler than the general discrete logarithm case because a single application of the Fourier transform is enough in that case.
---
# Question #7f781
##### 1 Answer
Nov 16, 2014
This is just pure coincidence that
$2 + 2 = 2 \cdot 2 = {2}^{2}$
4 is a perfect square number, meaning that if you multiply a number (let's say $a$) by itself, then you will get $a \cdot a$ or ${a}^{2}$.
$3 \cdot 3 = 9$
That means 9 is a perfect square. But 6 is not.
Refer to square roots. They are basically a reversal to an $a \cdot a$ operation:
$\sqrt{{a}^{2}} = a$
If 9 is square rooted:
$\sqrt{9} = 3$
and if 4 is square rooted:
$\sqrt{4} = 2$.
BUT if 6 is under a square root:
$\sqrt{6} \approx 2.44948974278 \ldots$
You don't get the pretty, round integer that perfect squares have to offer.
It is just a mathematical "miracle" that $2 + 2$ and $2 \cdot 2$ both equal 4. $3 + 3$ equals 6 and $3 \cdot 3$ (which is just $3 + 3 + 3$) equals 9, so 3s aren't such graceful numbers.
I mean, there are triangles ... but I digress.
---
# Degenerate and Non-Degenerate Conics
Conic sections (simple conics) are the curves created by the intersection of a plane with the surface of a cone. The conic sections possess distinguishing characteristics that give rise to many alternative definitions. A cone consists of two similar parts called the nappes of the cone. A conic is usually drawn on a coordinate plane. The types of conic sections are the hyperbola, the parabola, the ellipse and the circle (a special type of ellipse). A hyperbola is formed when the plane is parallel to the axis of the cone. A parabola is obtained when the plane is parallel to a generating line. A circle is formed when the plane is perpendicular to the axis of revolution.
An ellipse is found when the plane cuts only one of the nappes of the cone, at an angle oblique to the axis. A degenerate conic is a plane curve of degree 2, defined by a polynomial equation of that degree, which is not an irreducible curve. If the plane passes through the vertex of the cone, a degenerate conic is obtained. It can be a single line, two lines that may or may not be parallel, a single point, or the null set. Degenerate conics occur in families of geometric objects sharing the defining property of conics. For example, the conic section represented by the equation $x^2 - y^2 = 0$ is degenerate, since it reduces to $(x - y)(x + y) = 0$: two intersecting lines forming an "X".
There are two categories of degenerate conics over the complex plane: two distinct lines meeting in a point, and a double line. A degenerate conic can be carried by a projective transformation into another degenerate conic of the same type. Conics which are smooth are said to be non-degenerate. The types of non-degenerate conics are the ellipse, the parabola and the hyperbola. They are classified by the discriminant of the non-homogeneous form $Ax^2 + 2Bxy + Cy^2 + 2Dx + 2Ey + F$, given the determinant of the matrix
$$M=\begin{bmatrix} A & B \\ B & C \end{bmatrix}$$
The conic is an ellipse, a parabola or a hyperbola according as the determinant of this matrix is positive, zero or negative. The conic is degenerate when the full discriminant $\Delta$ (below) reduces to 0. Degeneracy occurs when the apex of the cone lies in the cutting plane, or, when the cone degenerates to a cylinder, when the plane is parallel to the axis of the cylinder.
The standard equation of the conics is given by $ax^2 + 2hxy + by^2 + 2gx + 2fy + c = 0$. The discriminant of the given equation is $\Delta = abc + 2fgh - af^2 - bg^2 - ch^2$. Two cases arise, according as the determinant $\Delta$ is 0 or not.
Condition | Nature of conic
--- | ---
$\Delta = 0$, $ab - h^2 = 0$ | A pair of coincident straight lines
$\Delta = 0$, $ab - h^2 < 0$ | A pair of intersecting straight lines
$\Delta = 0$, $ab - h^2 > 0$ | A single point
The above table explains the nature of degenerate conics when the value of the determinant (Δ = 0) is 0.
Condition | Nature of conic
--- | ---
$\Delta \ne 0$, $h = 0$, $a = b$, $e = 0$ | Circle
$\Delta \ne 0$, $ab - h^2 = 0$, $e = 1$ | Parabola
$\Delta \ne 0$, $ab - h^2 > 0$, $e < 1$ | Ellipse
$\Delta \ne 0$, $ab - h^2 < 0$, $e > 1$ | Hyperbola
$\Delta \ne 0$, $ab - h^2 < 0$, $a + b = 0$, $e = \sqrt{2}$ | Rectangular hyperbola
The above table explains the nature of the non-degenerate conic when the value of the determinant (Δ ≠ 0) is not equal to 0.
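As a check, take the degenerate conic $x^2 - y^2 = 0$ mentioned earlier: here $a = 1$, $b = -1$ and $h = g = f = c = 0$, so $\Delta = abc + 2fgh - af^2 - bg^2 - ch^2 = 0$ while $ab - h^2 = -1 < 0$, and the first table correctly classifies it as a pair of intersecting straight lines.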
## Frequently Asked Questions
### What do you mean by a conic section?
Conic sections are curves obtained by the intersection of a plane with the surface of a cone.
### Name the conic whose eccentricity is greater than 1.
Hyperbola has eccentricity greater than 1.
### What do you mean by a degenerate conic?
When the plane intersects the vertex of the double cone, the resulting conic is a degenerate conic. Degenerate conics can be a point, a line, or two intersecting lines.
### What do you mean by a non degenerate conic?
When the plane does not pass through the vertex of the double cone, the resulting conic is a non degenerate conic.
|
|
# How does the fundamental theorem of calculus connect derivatives and integrals?
(I) $\frac{d}{dx} \int_{a}^{x} f(t)\, dt = f(x)$

(II) $\int f'(x)\, dx = f(x) + C$

In words: (I) says that differentiating an accumulation integral recovers the integrand, and (II) says that integrating a derivative recovers the original function, up to a constant.
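Not part of the original question, but both halves are easy to illustrate numerically; here is a quick sketch using f(t) = cos t (the choice of f, the point x = 1.2 and the step h are mine).

```python
# Numerical illustration of both halves of the fundamental theorem.
from scipy.integrate import quad
import numpy as np

f = np.cos
a, x, h = 0.0, 1.2, 1e-6

# (I) d/dx of F(x) = ∫_a^x f(t) dt should equal f(x).
F = lambda u: quad(f, a, u)[0]
print((F(x + h) - F(x - h)) / (2 * h))   # central difference ≈ cos(1.2)
print(f(x))

# (II) integrating f' recovers f up to a constant:
# ∫_0^x (-sin t) dt = cos(x) - cos(0)
print(quad(lambda t: -np.sin(t), 0, x)[0])
print(np.cos(x) - 1.0)
```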
|
|
Stokes Theorem help
Use Stokes' theorem to evaluate the closed line integral

$$\oint_C \mathbf{F} \cdot d\mathbf{r}$$

where

$$\mathbf{F}(x,y,z) = xy\,\mathbf{i} + x^2\,\mathbf{j} + z^2\,\mathbf{k}$$
C is the intersection of the paraboloid z=x^2 + y^2 and the plane z=y with a counterclockwise orientation looking down the positive z-axis.
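This is not the asker's work, just a sketch one could use to check the answer. Substituting z = y into z = x² + y² shows that C projects to the circle x² + (y − 1/2)² = 1/4, which can be parametrized counterclockwise (as seen from above) and integrated directly with SymPy; the parametrization below is my own.

```python
# On C: x^2 + y^2 = y, i.e. the circle x^2 + (y - 1/2)^2 = 1/4, with z = y.
import sympy as sp

t = sp.symbols('t')
x = sp.cos(t) / 2
y = sp.Rational(1, 2) + sp.sin(t) / 2
z = y                                           # the plane z = y

F = sp.Matrix([x*y, x**2, z**2])                # F = xy i + x^2 j + z^2 k
r = sp.Matrix([x, y, z])
integrand = F.dot(sp.diff(r, t))                # F · dr/dt
print(sp.integrate(integrand, (t, 0, 2*sp.pi)))  # -> 0
```

The result 0 is consistent with Stokes' theorem: curl F = (0, 0, x), and the flux of that field through the disk bounded by C vanishes by symmetry in x.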
|
|
# If $f$ is continuous in $[0,1]$, find $\lim\limits_{x \to 0^{+}}x\int\limits_x^1 \frac{f(t)}t dt$
I'm solving this problem and I guess it shouldn't be too hard. Since $f$ is continuous it is bounded, so one has
$$\left| {x\int\limits_x^1 {\frac{{f\left( t \right)}}{t}dt} } \right| \leq x\int\limits_x^1 {\left| {\frac{{f\left( t \right)}}{t}} \right|dt} \leqslant Mx\int\limits_x^1 {\frac{{dt}}{t}} = - Mx\log x \to 0$$
Where $M=\operatorname{sup}\{|f(x)|:x\in[0,1]\}$
I'm not 100% certain on this, so I want a better, clearer approach.
Then, there is a second problem, similar, which is:
If $f$ is integrable on $[0,1]$ and $\exists\lim\limits_{x\to0}f(x)=L$, find
$$\ell = \mathop {\lim }\limits_{x \to {0^ + }} x\int\limits_x^1 {\frac{{f\left( t \right)}}{{{t^2}}}dt}$$
$$\mathop {\lim }\limits_{x \to {0^ + }} x\int\limits_x^1 {\frac{{f\left( t \right)}}{{{t^2}}}dt} =\mathop {\lim }\limits_{x \to {0^ + }} f\left( x \right) - xf\left( 1 \right) + x\int\limits_x^1 {\frac{{f'\left( t \right)}} {t}dt}$$
$$= L + \mathop {\lim }\limits_{x \to {0^ + }} x\int\limits_x^1 {\frac{{f'\left( t \right)}}{t}dt}$$
So, what can I say about $f'(t)$ given $f(t)$ is integrable on $[0,1]$ that will allow me to apply the first case to the last limit?
• The first argument is fine, except that the first $<$ should be $\le$, and you should say explicitly that $M=\sup\{|f(x)|:x\in[0,1]\}$ and that at the end you’re taking the limit as $x\to 0^+$. Apr 8 '12 at 19:23
• @Brian Ok. I thought the $M$ was implicitly defined, but I guess it is appropriate to clarify. Any hints on the second one? Apr 8 '12 at 19:24
• Not offhand; there might be after I think about it a bit, but someone else is likely to get there first. Apr 8 '12 at 19:26
• You can't speak about $f'$ in the second problem, since $f$ is only integrable Apr 8 '12 at 19:44
• @Norbert Is there any way to prove that $x\int\limits_x^1 {\frac{{f'\left( t \right)}}{t}dt} \to 0$? Apr 8 '12 at 19:48
For the first problem, your approach is fine (but the first inequality may be an equality when $f$ is non-negative). For the second, denote $L:=\lim_{x\to 0}f(x)$. Fix $\varepsilon>0$. We can find $\delta>0$ such that if $0\leq x\leq \delta$ then $|f(x)-L|\leq \varepsilon$, so for $0\leq x\leq \delta$: $$x\int_x^1\frac{f(t)}{t^2}dt=x\int_x^1\frac{f(t)-L}{t^2}dt+Lx\int_x^1\frac{dt}{t^2}=x\int_x^1\frac{f(t)-L}{t^2}dt+L\left(\frac 1x-1\right)x$$ hence \begin{align*}\left|x\int_x^1\frac{f(t)}{t^2}dt-L\right|&\leq x\int_x^1\frac{|f(t)-L|}{t^2}dt+|Lx|\\ &=x\int_x^\delta\frac{|f(t)-L|}{t^2}dt+ x\int_\delta^1\frac{|f(t)-L|}{t^2}dt+|Lx|\\ &\leq x\int_x^\delta\frac{\varepsilon}{t^2}dt+ x\int_\delta^1\frac{|f(t)-L|}{t^2}dt+|Lx|\\ &=\varepsilon x\left(\frac 1x-\frac 1{\delta}\right)+x\int_\delta^1\frac{|f(t)-L|}{t^2}dt+|Lx|\\ &=\varepsilon-\frac{\varepsilon}{\delta}x+x\int_\delta^1\frac{|f(t)-L|}{t^2}dt+|Lx| \end{align*} so $$\limsup_{x\to 0^+}\left|x\int_x^1\frac{f(t)}{t^2}dt-L\right|\leq \varepsilon$$ and since $\varepsilon$ was arbitrary, $L=\ell$.
• I'm OK with this, but can't you devise an approach that avoids the prediction that the limit is indeed $L$? (See my last edit, where it suffices to show that the integral goes to zero). Apr 8 '12 at 19:41
• @PeterT.off My approach gives prediction for integrabale $f$ :) Apr 8 '12 at 19:43
• First we work in the case in which $\ell=0$, using $g=f-\ell$, then we try to generalize. Apr 8 '12 at 19:43
If $f$ is continuous, let's use L'Hopital's rule: $$\lim\limits_{x\to+0}x\int\limits_{x}^{1}\frac{f(t)}{t}dt= \lim\limits_{x\to+0}\frac{\int\limits_{x}^{1}\frac{f(t)}{t}dt}{x^{-1}}= \lim\limits_{x\to+0}\frac{-\int\limits_{1}^{x}\frac{f(t)}{t}dt}{x^{-1}}= \lim\limits_{x\to+0}\frac{-\frac{f(x)}{x}}{-x^{-2}}= \lim\limits_{x\to+0}xf(x)=0$$ $$\lim\limits_{x\to+0}x\int\limits_{x}^{1}\frac{f(t)}{t^2}dt= \lim\limits_{x\to+0}\frac{\int\limits_{x}^{1}\frac{f(t)}{t^2}dt}{x^{-1}}= \lim\limits_{x\to+0}\frac{-\int\limits_{1}^{x}\frac{f(t)}{t^2}dt}{x^{-1}}= \lim\limits_{x\to+0}\frac{-\frac{f(x)}{x^2}}{-x^{-2}}= \lim\limits_{x\to+0}f(x)$$ P.S. Big thanks to David Mitra, who pointed out that the requirement for the integrals to be divergent is unnecessary!
• Accordingly upvoted. However, I'm looking for an approach following what I proposed. Apr 8 '12 at 19:45
• Remember that you can only use l'Hopital's rule if the limit is in an indeterminate form (in this case, $\infty/\infty$). So your proof works only in the case that the integral in the numerator goes to infinity. Apr 8 '12 at 20:11
• Ok, I will add these restrictions Apr 8 '12 at 20:13
• @GregMartin For the infinite limit case, you do not need the numerator to tend to infinity to use L'Hopital's rule, only the denominator. Apr 8 '12 at 20:17
• @GregMartin For your example, the limit of the quotient of the derivatives does not exist. L'Hopital doesn't apply (L'Hopital states that if the limit of the quotient of the derivatives exists, then... Apr 8 '12 at 20:27
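Not part of the original thread: a quick numerical sanity check of both limits, using f(t) = cos t (so L = 1), supports the conclusions above.

```python
# x∫_x^1 f(t)/t dt should -> 0, and x∫_x^1 f(t)/t² dt should -> L = 1.
from scipy.integrate import quad
import numpy as np

f = np.cos
for x in [1e-1, 1e-2, 1e-3, 1e-4]:
    first  = x * quad(lambda t: f(t) / t,    x, 1)[0]
    second = x * quad(lambda t: f(t) / t**2, x, 1)[0]
    print(f"x={x:.0e}  x*int(f/t)={first:.5f}  x*int(f/t^2)={second:.5f}")
```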
|
|
# Category:Compact Complement Topology
This category contains results about Compact Complement Topology.
Let $T = \struct {\R, \tau}$ be the real number line with the usual (Euclidean) topology.
Let $\tau^*$ be the set defined as:
$\tau^* = \{S \subseteq \R : S = \O \text{ or } \relcomp \R S \text{ is compact in } \struct {\R, \tau}\}$
where $\relcomp \R S$ denotes the complement of $S$ in $\R$.
Then $\tau^*$ is the compact complement topology on $\R$, and $T^* = \struct {\R, \tau^*}$ is the compact complement space on $\R$.
|
|
# Has there been any application of tensor species?
Joyal's combinatorial species, endofunctors in the category of finite sets with bijections $\mathbf B$ have found numerous applications. One generalisation is given by so-called "tensor species" (also "tensorial species", or, "linear species" - not to be confused with the species on totally ordered sets in the book by Bergeron, Labelle and Leroux) which are defined as functors from $\mathbf B$ into the category of finite dimensional vector spaces (say, over the complex numbers) with linear transformations $\mathbf{Vect}$.
I wonder whether there have been any "practical" applications of tensor species? I know of a very short list of articles dealing with them (eg. by Méndez) but hardly any spelled out examples. I wonder whether I overlooked something.
Note that for any combinatorial species $F$ we can regard $F[\{1,2,\dots,n\}]$ as a finite set with an action of the symmetric group. Similarly, if $F$ is a tensor species, we can regard $F[\{1,2,\dots,n\}]$ as a linear representation of the symmetric group. Thus, I am mostly interested in examples that use the combinatorial operations for greater clarity of a construction.
-
I suppose that if you are interested in representations of symmetric groups, then this forms an ample supply of such. As for actual applications, I can't say. – Spice the Bird May 4 '12 at 9:49
Yes, I am aware of this. In fact, I like the spirit of the sample application that Méndez gives (he uses the product formula...), but I am hoping for something more substantial. – Martin Rubey May 4 '12 at 10:46
In the theory of algebraic operads, the language of "tensor species" is often used; see Chapter 5 of "Algebraic Operads" by Jean-Louis Loday & Bruno Vallette, Grundlehren der mathematischen Wissenschaften, Volume 346, Springer-Verlag (2012).
For example, one can define an operad very concisely as a monoid in species under a certain monoidal structure. Without this language, it takes quite a while to write down all the compatibilities with the various $S_n$ actions (although I find it illuminating to write down an "elementary" definition nevertheless).
There are in fact many definitions in the theory of operads, which are a bit cumbersome to write down without talking about "tensor species".
And of course things like generating series for operads are of interest and operadic phenomena/constructions like Koszul duality give constraints/relations for their generating series.
-
The theory of tensor species is equivalent to the theory of polynomial functors; so to this extent there is no call for a theory of tensor species as the theory of polynomial functors is well-developed. However this is, I suspect, missing the point of your question. My understanding is that the focus in combinatorial species is on species which satisfy polynomial equations of which there are many interesting examples. It then follows that the cycle index series will satisfy the same polynomial equation. This makes it natural to ask a more specific question of whether there are interesting tensor species/polynomial functors which satisfy polynomial equations (other than those arising from combinatorial species)? I would be interested to hear of any examples.
-
Yes, your more specific question is precisely what I'm after. However, the equations involved need not be polynomial, plethysm (= composition) may also be involved. A prime example from combinatorial species is that set partitions are sets of non-empty sets. – Martin Rubey May 4 '12 at 16:16
|
|
# How do you solve X/7 -2 + X = X + 2?
Jul 31, 2016
$X = 28$
#### Explanation:
$\frac{X}{7} - 2 + X = X + 2$
or
$\frac{X}{7} = 2 + 2$
or
$\frac{X}{7} = 4$
or
$X = 28$
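As a sanity check (my addition, not part of the original answer), SymPy confirms the result:

```python
# Solve X/7 - 2 + X = X + 2 symbolically.
from sympy import symbols, Eq, solve

X = symbols('X')
print(solve(Eq(X/7 - 2 + X, X + 2), X))   # [28]
```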
|
|
# Tag Info
8
I recently read a paper, which showed a mathematical model for performance scaling of research groups in different scientific branches. I'm aware you were originally asking for smaller "cognitive tasks" and project-like group-processes in the comments, but the output and quality of publications/patents are probably a better and more objective measure anyway on a ...
8
It's a big topic. The relationship between group size and performance on a cognitive task is going to vary by several factors. Here are a few thoughts: The form of interdependence adopted by the group on the task will matter. When everyone can just work independently (e.g., taking calls in a call centre), then it makes sense that output would increase ...
5
The following are just my thoughts on what seems to make sense from first principles. I don't have a detailed understanding of what is standard practice in the wisdom of crowds literature. I've also only given what you've written a basic read. I.e., enough to understand the broad question, but not enough to follow exactly what you've done. Let $y_i$ be the ...
|
|
# If the diameter of a circle is 14, what is the area?
Apr 27, 2017
$A = 153.9$ square units
#### Explanation:
If you are not familiar with the formula for finding the area by using the diameter, you will have to find the radius first:
$r = \frac{d}{2}$
$r = \frac{14}{2} = 7$
$A = \pi \cdot {r}^{2} \text{ } \leftarrow$ formula for area
$A = \pi \times {7}^{2}$
$A = 153.9$ square units
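For readers who prefer to check numerically, here is the same computation in Python (my addition):

```python
# Area of a circle from its diameter.
import math

d = 14
r = d / 2                     # r = d/2 = 7
A = math.pi * r**2            # A = pi * r^2 = 49*pi
print(round(A, 1))            # 153.9
```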
|
|
# $A = \langle p \rangle$ is a prime ideal of $\mathbb{Z}_n$ if and only if $p$ is a prime divisor of $n$
I am struggling slightly with proving the following;
$A = \langle p \rangle$ (principal ideal generated by $p$) is a prime ideal of $\mathbb{Z}_n$ if and only if $p$ is a prime divisor of $n$.
I am working on proving $A = \langle p \rangle$ is a prime ideal of $\mathbb{Z}_n$ if $p$ is a prime divisor of $n$.
I am using the fact that
$\mathbb{Z}_n/\langle p \rangle$ is an integral domain $\iff$ $\langle p \rangle$ is a prime ideal of $\mathbb{Z}_n$.
Now I already know the following;
• $\mathbb{Z}_n$ is a commutative ring with unity.
• $\langle p\rangle$ is an ideal of $\mathbb{Z}_n$.
• The above points imply $\mathbb{Z}_n/\langle p \rangle$ is a commutative ring with unity.
So all that we need to show is that $\mathbb{Z}_n/\langle p \rangle$ has no divisors of zero, that is if $(a +\langle p \rangle),(b +\langle p \rangle) \in \mathbb{Z}_n/\langle p \rangle$ such that
$$(a +\langle p \rangle)(b +\langle p \rangle) = 0 + \langle p\rangle \implies (a +\langle p \rangle)= 0 + \langle p\rangle \text{ or } (b +\langle p \rangle) = 0 +\langle p \rangle$$
This is as far as I can get at the moment on this direction.
I have not yet started the only if direction.
• $\Bbb Z_n/\left<m\right>\cong\Bbb Z_{\gcd(m,n)}$. – Lord Shark the Unknown May 2 '18 at 5:31
• Please explain a little bit more – Jandré Snyman May 2 '18 at 5:33
• $\Bbb Z _n/\left<m\right>\cong \Bbb Z/\left<m,n\right>=\Bbb Z/\left<\gcd(m,n)\right>=\Bbb Z_{\gcd(m,n)}$. – Lord Shark the Unknown May 2 '18 at 6:02
This boils down to $$p\mid ab\implies p\mid a\lor p\mid b$$
Suppose $p$ divides neither $a$ nor $b$, so the gcd with either is $1$. Then by the extended Euclidean algorithm, we can write $1=xp+ya=zp+wb$ with integers $x,y,z,w$. By multiplication, $$1=(xp+ya)(zp+wb)=(xzp+xwb+ayz)\cdot p+yw\cdot ab$$ so that $ab$ cannot be a multiple of $p$.
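A brute-force check for small n can be reassuring. The sketch below (my own, not from the thread; the helper name is hypothetical) tests the prime-ideal condition on every product in Z_n and compares the outcome with the prime divisors of n.

```python
# <p> is a prime ideal of Z_n  iff  p is a prime divisor of n (p prime).
from sympy import primerange

def is_prime_ideal(n, p):
    ideal = {(p * k) % n for k in range(n)}      # the ideal <p> in Z_n
    if ideal == set(range(n)):                   # <p> = Z_n is not a
        return False                             # proper (prime) ideal
    return all(a in ideal or b in ideal
               for a in range(n) for b in range(n)
               if (a * b) % n in ideal)

n = 12
for p in primerange(2, n):
    assert is_prime_ideal(n, p) == (n % p == 0)
print("verified for n =", n)    # <2> and <3> are the prime ideals of Z_12
```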
|
|
# Which weighs more a pound of gold or a pound of feathers?
###### 04/25/2014
While on the face of it a pound of feathers would seem to weigh the same as a pound of gold, this overlooks the fact that gold is universally weighed using a different definition of 'pound' than that used for most other materials.
Precious metals such as gold are measured in troy weight. A troy pound is 12 troy ounces, and each troy ounce is 480 grains, making a total of 5760 grains to the pound of gold.
Most materials use pounds and ounces from the avoirdupois system, and such a standard pound is made up of 16 ounces, where each ounce is 437.5 grains, making a total of 7000 grains to the pound of feathers.
All this means that a "pound" of feathers (or bricks, or lead) is heavier than a "pound" of gold.
The question is an old gag, based on the fact that gold is measured in troy ounces, but feathers (like all material other than precious metals and gemstones) are measured in avoirdupois ounces. Because troy weight has 12 ounces to a pound, but avoirdupois weight has 16 ounces to a pound, the trick answer is that a pound of feathers (at 16 ounces) is heavier than a pound of gold (at 12 ounces).
Another user said:
A pound of anything weighs the same as a pound of anything else.
However, feathers are weighed using imperial (avoirdupois) weight, which puts 16 ounces to the pound. Gold is weighed using troy weight (for precious metals), which has 12 ounces to the pound.
So a pound of feathers weighs more than a pound of gold, since different pounds are spoken of.
Another user said:
A pound of feathers is heavier- because a pound is NOT always a pound. Gold, silver, platinum- and all precious metals are measured in TROY ounces and TROY pounds. A troy pound weighs less than a standard (avoirdupois) pound.
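Since both systems define the grain identically (1 grain = 64.79891 mg exactly), the comparison is easy to make concrete; the snippet is my addition.

```python
# Convert the grain counts above to grams.
GRAIN_G = 0.06479891                 # 1 grain in grams (exact)

troy_pound = 5760 * GRAIN_G          # 12 troy oz x 480 grains
avdp_pound = 7000 * GRAIN_G          # 16 avdp oz x 437.5 grains
print(f"troy pound (gold):     {troy_pound:.2f} g")   # 373.24 g
print(f"avdp pound (feathers): {avdp_pound:.2f} g")   # 453.59 g
```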
|
|
# How to bend blocky arm into torso?
How do I bend this arm (block) so it curves into the torso?
I'm using Cycles and I'm not that good with NURBS curves and all that. Is there an easier way?
• Is there an easier way to do this? – BinaryGreen Dec 20 '14 at 15:23
|
|
# Solving a second-order differential equation
Dear all,
There is another error in our differential equation.
Starting over from part e) we solve the following equation for (dv/dt)2.
giving
and substitute it in the following equation
giving
Which offers no clear way to solve. We examine the following expression from part d).
which makes
which implies
for some constant c. Solving,
substitute into the first equation to get
which makes
I think this way is not easy.
Last edited:
The expression
has the solution
and
For the v component of the geodesic, we plug in
into the unit speed equation
giving
and
We then integrate to give
and
which is incorrect judging from the following plot in the uv plane
but I think we are getting closer to the correct solution.
Last edited:
What course is this? Differential geometry?
You seem to have squared part of ##\frac{du}{dt}## twice. The left hand term above should reduce to 1/2.
I am sorry, I should apologize: the solution to our ODE is not a geodesic, because we are solving the wrong ODE.
Not solving this problem has been eating me up. So, I revisited our previous calculations tonight, made sure to be extra careful not to make errors and ended up with the following. I still have not learned how to write LaTeX equations, so these were typed in Microsoft Word, and these will not show up in quoted messages.
I am unable to solve this nonlinear second order ODE by any method I am familiar with.
Is anyone able to compute it using Mathematica or Wolfram Alpha?
Thank you.
Last edited:
Acccording to DSolve in Mathematica, the fearful ODE $$\frac{d^2u}{dt^2}=\frac{1}{u^3}-\Big(\frac{u^4+1}{u^5}\Big)\Big(\frac{du}{dt}\Big)^2$$
has "closed form" solutions given by...
By using DSolve with initial value conditions,
$$\frac{d^2u}{dt^2}=\frac{1}{u^3}-\Big(\frac{u^4+1}{u^5}\Big)\Big(\frac{du}{dt}\Big)^2\Rightarrow u(t)=\sqrt{1+t^2}$$
(g) we plug ##u(t)## into the unit speed equation to find
$$\frac{dv}{dt}=\Big(\frac{t^4+t^2+1}{(1+t^2)^3}\Big)^{\frac{1}{2}}$$
which is a separable equation, when trying to compute
$$v(t)=\int \Big(\frac{t^4+t^2+1}{(1+t^2)^3}\Big)^{\frac{1}{2}}dt$$
we cannot find an explicit closed form solution
|
|
# GATE Questions & Answers of Traffic studies on Flow, Speed, Travel time - delay and O-D study, PCU, Peak hour factor, Parking study, Accident study and Analysis, Statistical analysis of Traffic data
## What is the Weightage of Traffic studies on Flow, Speed, Travel time - delay and O-D study, PCU, Peak hour factor, Parking study, Accident study and Analysis, Statistical analysis of Traffic data in GATE Exam?
Total 8 Questions have been asked from Traffic studies on Flow, Speed, Travel time - delay and O-D study, PCU, Peak hour factor, Parking study, Accident study and Analysis, Statistical analysis of Traffic data topic of Traffic Engineering subject in previous GATE papers. Average marks 1.50.
The speed-density relationship for a road section is shown in the figure.
The shape of the flow-density relationship is
Given the following data: design life n = 15 years, lane distribution factor D = 0.75, annual rate of growth of commercial vehicles r = 6%, vehicle damage factor F = 4 and initial traffic in the year of completion of construction = 3000 Commercial Vehicles Per Day (CVPD). As per IRC:37-2012, the design traffic in terms of cumulative number of standard axles (in million standard axles, up to two decimal places) is ______
Peak Hour Factor (PHF) is used to represent the proportion of peak sub-hourly traffic flow within the peak hour. If 15-minute sub-hours are considered, the theoretically possible range of PHF will be
While traveling along and against the traffic stream, a moving observer measured the relative flows as 50 vehicles/hr and 200 vehicles/hr, respectively. The average speeds of the moving observer while traveling along and against the stream are 20 km/hr and 30 km/hr, respectively. The density of the traffic stream (expressed in vehicles/km) is _________
If the total number of commercial vehicles per day ranges from 3000 to 6000, the minimum percentage of commercial traffic to be surveyed for axle load is
Which of the following statements CANNOT be used to describe free flow speed (uf) of a traffic stream ?
The acceleration–time relationship for a vehicle subjected to non-uniform acceleration is,
$\frac{dv}{dt}=(\alpha-\beta v_0)e^{-\beta t}$
where v is the speed in m/s, t is the time in s, $\alpha$ and $\beta$ are parameters, and v0 is the initial speed in m/s. If the accelerating behavior of a vehicle, whose driver intends to overtake a slow moving vehicle ahead, is described as,
$\frac{dv}{dt}=\left(\alpha -\beta v\right)$
Considering $\alpha = 2\ \mathrm{m/s^2}$, $\beta = 0.05\ \mathrm{s^{-1}}$ and $\frac{dv}{dt} = 1.3\ \mathrm{m/s^2}$ at $t = 3\ \mathrm{s}$, the distance (in m) travelled by the vehicle in 35 s is _______.
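The question paper gives no solution; the following sketch is mine. For dv/dt = α − βv one has dv/dt = (α − βv₀)e^(−βt), so the stated value dv/dt = 1.3 m/s² at t = 3 s fixes v₀, and the distance is the integral of v(t) over 35 s.

```python
# Numeric solution sketch for the overtaking problem above.
import numpy as np
from scipy.integrate import quad

alpha, beta = 2.0, 0.05
v0 = (alpha - 1.3 * np.exp(beta * 3)) / beta        # from dv/dt at t = 3 s
v = lambda t: alpha/beta + (v0 - alpha/beta) * np.exp(-beta * t)
distance, _ = quad(v, 0, 35)
print(round(distance, 1))                           # about 900.8 m
```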
|
|
# Incompatibility of 'subcaption' package with [hebrew]{babel}
I've come across a few posts now that try to deal with the issues babel has with Arabic and Hebrew and I know that the polyglossia package is recommended, however I am reluctant to rejig my entire document as I only need to insert 3 Hebrew letters in a separate line.
The error message I'm currently getting concerns an incompatibility with the {subcaption} package which I assume is due to the forced right-alignment of Hebrew?
I'd appreciate any suggestions including crude hacks. I've even looked into using the math mode which has some Hebrew letters, but sadly not the correct ones...
This is roughly what I've got:
\documentclass{Classes/PhDThesisPSnPDF}
\usepackage[utf8]{inputenc} %can't use utf8x either as it throws up even more error messages
\usepackage [english,hebrew]{babel}
\begin{document}
\foreignlanguage{hebrew}{בָּבֶל}
\end{document}
• For just 3 letter very likely all you need is a font with these characters. Please, edit your question and add a minimal example. – Javier Bezos Feb 13 at 17:49
• Oh thanks, that pointed me in the right direction for googling - I seem to have found the solution with the {cjhebrew} package! – Algebreaker Feb 13 at 19:03
• Regarding the hebrew package and caption resp. subcaption you will find the actual status here: gitlab.com/axelsommerfeldt/caption/issues/29 – Axel Sommerfeldt Feb 17 at 21:26
|
|
# Bolted joint
Figures: bolted joint in vertical cutaway; screw joint; stud joint.
Bolted joints are one of the most common elements in construction and machine design. They consist of fasteners that capture and join other parts, and are secured with the mating of screw threads.
There are two main types of bolted joint designs. In one method the bolt is tightened to a calculated clamp load, usually by applying a measured torque load. The joint will be designed such that the clamp load is never overcome by the forces acting on the joint (and therefore the joined parts see no relative motion).
This type of joint design provides several properties:
• For cyclic loads, the fastener is not subjected to the full amplitude of the load; as a result, the fastener's fatigue life can be increased or—if the material exhibits an endurance limit—extended indefinitely.[1]
• As long as the external loads on a joint don't exceed the clamp load, the fastener is not subjected to any motion and will not come loose, obviating the need for locking mechanisms. (Questionable under Vibration Inputs.)
The other type of bolted joint does not have a designed clamp load but relies on the shear strength of the bolt shaft. This may include clevis linkages, joints that can move, and joints that rely on a locking mechanism (like lock washers, thread adhesives, and lock nuts).
## Theory
The clamp load, also called preload, of a fastener is created when a torque is applied, and is generally a percentage of the fastener's proof strength; a fastener is manufactured to various standards that define, among other things, its strength and clamp load. Torque charts are available to identify the required torque for a fastener based on its property class or grade.
When a fastener is tightened, it is stretched and the parts being fastened are compressed; this can be modeled as a spring-like assembly that has a non-intuitive distribution of strain. External forces are designed to act on the fastened parts rather than on the fastener, and as long as the forces acting on the fastened parts do not exceed the clamp load, the fastener is not subjected to any increased load.
However, this is a simplified model that is only valid when the fastened parts are much stiffer than the fastener. In reality, the fastener is subjected to a small fraction of the external load even if that external load does not exceed the clamp load. When the fastened parts are less stiff than the fastener (soft, compressed gaskets for example), this model breaks down; the fastener is subjected to a load that is the sum of the preload and the external load.
In some applications, joints are designed so that the fastener eventually fails before more expensive components do. In this case, replacing an existing fastener with a higher strength fastener can result in equipment damage. Thus, it is generally good practice to replace old fasteners with new fasteners of the same grade.
Thread engagement is the length or number of threads that are engaged between the screw and the female threads. Screws are designed so that the shank fails before the threads, but for this to hold true, a minimum thread engagement must be used. The following equation defines this minimum thread engagement:[2]
$L_e = \frac{2 \times A_t}{0.5 \pi \left( D - 0.64952 p \right)}$
Where Le is the thread engagement length, At is the tensile stress area, D is the major diameter of the screw, and p is the pitch. This equation only holds true if the screw and female thread materials are the same. If they are not the same, then the following equations can be used to determine the additional thread length that is required:[2]
$J = \frac{\text{tensile strength of external thread material}}{\text{tensile strength of internal thread material}}$
$L_{e2} = J \times L_e$
Where Le2 is the new required thread engagement.
While these formulas give absolute minimum thread engagement, many industries specify that bolted connections be at least fully engaged. For instance, the FAA has determined that in general cases, at least one thread must be protruding from any bolted connection. [1]
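As a worked instance of these equations (a sketch of mine; the numbers are illustrative, with At taken as the commonly tabulated tensile stress area for an M10×1.5 thread and the tensile strengths purely hypothetical):

```python
# Minimum thread engagement per the equations above (lengths in mm).
import math

def min_thread_engagement(At, D, p):
    """Le = 2*At / (0.5*pi*(D - 0.64952*p))."""
    return 2 * At / (0.5 * math.pi * (D - 0.64952 * p))

At, D, p = 58.0, 10.0, 1.5           # tensile stress area, major dia., pitch
Le = min_thread_engagement(At, D, p)
print(f"Le  = {Le:.2f} mm")          # same-material minimum engagement

# Dissimilar materials: scale by the tensile-strength ratio J.
J = 800.0 / 400.0                    # e.g. a screw twice as strong as the nut
print(f"Le2 = {J * Le:.2f} mm")      # Le2 = J * Le
```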
## Setting the torque
Engineered joints require the torque to be accurately set. Setting the torque for fasteners is commonly achieved using a torque wrench.[3] The required torque value for a particular fastener application may be quoted in the published standard document or defined by the manufacturer.
The clamp load produced during tightening is higher than 75% of the fastener's proof load.[3] To achieve the benefits of the preloading, the clamping force must be higher than the joint separation load. For some joints, multiple fasteners are required to secure the joint; these are all hand tightened before the final torque is applied to ensure an even joint seating.
The torque value is dependent on the friction produced by the threads and by the fastened material's contact with both the fastener head and the associated nut. Moreover, this friction can be affected by the application of a lubricant or any plating (e.g. cadmium or zinc) applied to the threads, and the fastener's standard defines whether the torque value is for dry or lubricated threading, as lubrication can reduce the torque value by 15% to 25%; lubricating a fastener designed to be torqued dry could over-tighten it, which may damage threading or stretch the fastener beyond its elastic limit, thereby reducing its clamping ability.
Also, if the fastener rather than its associated nut is torqued, then the torque value should be increased[4] to compensate for the additional friction; fasteners should only be torqued if they are fitted in clearance holes.
Torque wrenches do not give a direct measurement of the clamping force in the screw, and indeed much of the force applied is lost just to overcoming friction.
More accurate methods for setting the clamping force rely on defining or measuring the screw extension; for instance, measurement of the angular rotation of the nut can serve as the basis for defining screw extension on thread pitch.[5] Measuring the screw extension directly allows the clamping force to be very accurately calculated. This can be achieved using a dial test indicator, reading deflection at the fastener tail, using a strain gauge, or ultrasonic length measurement.
There is no simple method to measure the tension of a fastener already in place other than to tighten it and identify at which point the fastener starts moving. This is known as re-torqueing. An electronic torque wrench can be used on the fastener in question, so that the torque applied can be constantly measured as it is slowly increased in magnitude; when the fastener starts moving (that is, becoming tightened) the required torque magnitude briefly drops sharply, and this drop-off point is considered the measure of tension.
Recent developments enable tensions to be estimated by using ultrasonic testing. Another way to ensure correct tension (mainly in steel erecting) involves the use of crush-washers. These are washers that have been drilled and filled with orange RTV. When the orange rubber strands appear, the tension is correct.
Large-volume users (such as auto makers) frequently use computer controlled nut drivers. With such machines, the computer in effect plots a graph of the torque exerted. Once the torque reaches a set maximum torque chosen by the designer, the machine stops. Such machines are often used to fit wheelnuts and normally tighten all the wheel nuts simultaneously.
## Failure modes
The most common mode of failure is overloading: Operating forces of the application produce loads that exceed the clamp load, causing the joint to loosen over time or fail catastrophically.
Over-torquing might cause failure by damaging the threads and deforming the fastener, though this can happen over a very long time. Under-torquing can cause failures by allowing a joint to come loose, and it may also allow the joint to flex and thus fail under fatigue.
Brinelling may occur with poor quality washers, leading to a loss of clamp load and subsequent failure of the joint.
Other modes of failure include corrosion, embedment, and exceeding the shear stress limit.
Bolted joints may be used intentionally as sacrificial parts, which are intended to fail before other parts, as in a shear pin.
## Locking mechanisms
Bolted joints in an automobile wheel. Here the outer fasteners are four studs with three of the four nuts that secure the wheel. The central nut (with locking cover and cotter pin) secures the wheel bearing to the spindle.
Locking mechanisms keep bolted joints from coming loose. They are required when vibration or joint movement will cause loss of clamp load and joint failure, and in equipment where the security of bolted joints is essential.
• Two nuts, tightened on each other. In this application a thinner nut should be placed adjacent to the joint, and a thicker nut tightened onto it. The thicker nut applies more force to the joint, first relieving the force on the threads of the thinner nut and then applying a force in the opposite direction. In this way the thicker nut presses tightly on the side of the threads away from the joint, while the thinner nut presses on the side of the threads nearest the joint, tightly locking the two nuts against the threads in both directions.[6]
## Measurement of frictional torque of threads in bolt
The torque is applied by suspending weights from one end of a rope while the other end is wound around the head of the fastener and tied to the projection. The amount of load is increased gradually until the fastener starts rotating. The applied load is then calculated by adding up the weights; this is the load required to overcome the friction between the threads. The net applied torque is then calculated by multiplying the resultant load by the radius of the fastener's head.
In another method, the torque is applied to the nut by an electromagnetic force. A specially designed gripper is used to grip the nut. A bar magnet is mounted on the gripper, and the gripper is then surrounded by a coil of wire through which alternating current is passed. As the magnetic field from the permanent magnet interacts with the field created by the coil, the permanent magnet (and thus the nut) is subjected to a torque. This is quite similar to the construction of an electric motor, and hence a motor can be directly used to provide the torque. A stepper motor can be used so that the torque is provided in steps, each of which causes a small, measurable angular displacement in the nut from which the torque can be calculated. The discrete torques can be added to get the net torque consumed in displacing the nut from one end of the fastener to the desired location. This is the torque that is required to overcome the friction between the threads.
## Bolt banging
Bolt banging occurs in buildings when bolted joints slip into bearing under load, thus causing a loud and potentially frightening noise resembling a rifle shot that is not, however, of structural significance and does not pose any threat to occupants.[7]
## References
### Notes
1. ^ Collins, p. 481.
2. ^ a b
3. ^ a b Oberg et al. 2004, p. 1495.
4. ^ AIPS 01-02-008: "Bolt Torque"
5. ^ Oberg et al. 2004, p. 1499.
6. ^
7. ^ "Steel Interchange: 'Banging Bolts'", MSC: Modern Steel Construction.
|
|
# solutions to system of equations
• Mar 16th 2011, 03:57 PM
Taurus3
solutions to system of equations
find all solutions to the following equations. How do I do this???
x1+2x+x3+x4=7
x1+2x2+2x3-x4=12
2x1+4x2+6x4=4
• Mar 16th 2011, 04:06 PM
pickslides
You have 4 unknowns and 3 equations which means your solution will not be unique. Are you aware of this?
• Mar 16th 2011, 06:10 PM
Taurus3
wait...so how do I solve it?
• Mar 16th 2011, 06:32 PM
pickslides
It can't be solved for a unique solution. Before we go any further, take a look at this equation in particular:
Quote:
Originally Posted by Taurus3
x1+2x+x3+x4=7
Is it
$\displaystyle x_1+2x_2+x_3+x_4=7$
or
$\displaystyle x_1+x_2+x_3+x_4=7$
??
• Mar 17th 2011, 04:26 AM
HallsofIvy
The real question is, how have you learned to solve problems like these? There are a number of ways of solving such equations, ranging from fairly basic (eliminate variables by subtracting) to very sophisticated (row reduce the augmented matrix). Without more information, we cannot possibly know what would be appropriate for you.
|
|
## The difference between partial and total derivatives
If you keep up with this blog, you’re probably the type who knows partial derivatives inside and out. If I were to ask you about the partial derivative of $e^x$ with respect to $t$, you would probably blurt out, “zero”, without skipping a beat. On the other hand, you might not have come across the total derivative.
The total derivative gives the rate of change of a variable in terms of another variable, without assuming that all other variables are held constant. Typically, to compute it, we use the chain rule to express dependencies in terms of other dependencies. Notationally, this is very simple and intuitive and probably covered in the first lecture of a course in thermodynamics1:
$\displaystyle \frac{\mathrm{d}f}{\mathrm{d}t} = \frac{\partial f}{\partial t} + \frac{\partial f}{\partial x}\frac{\mathrm{d}x}{\mathrm{d}t} + \frac{\partial f}{\partial y}\frac{\mathrm{d}y}{\mathrm{d}t} + \ldots$
Notice that a roman $\mathrm{d}$ is used, which distinguishes the total derivative (at least, when printed) from both the partial derivative and the ordinary derivative, which uses an italic $d$. (The latter is also used in the expression $\int f(x) \, dx$.)
If you are good with math, you can start using this immediately. For example, let’s say
$\displaystyle f = x^2 + y^2 + z^2 - c^2 t^2$
Then,
$\displaystyle \frac{\mathrm{d}f}{\mathrm{d}t} = -2c^2 t + 2x \frac{\mathrm{d}x}{\mathrm{d}t} + 2y \frac{\mathrm{d}y}{\mathrm{d}t} + 2z \frac{\mathrm{d}z}{\mathrm{d}t}$
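One way to check this (my addition) is to let SymPy treat x, y, z as unspecified functions of t, in which case the ordinary derivative it computes is exactly the total derivative above:

```python
# Total derivative of f = x^2 + y^2 + z^2 - c^2 t^2 with x, y, z = x(t), y(t), z(t).
import sympy as sp

t, c = sp.symbols('t c')
x, y, z = (sp.Function(name)(t) for name in 'xyz')

f = x**2 + y**2 + z**2 - c**2 * t**2
print(sp.diff(f, t))
# -2*c**2*t + 2*x(t)*x'(t) + 2*y(t)*y'(t) + 2*z(t)*z'(t), up to ordering
```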
The intuition behind the terminology is not too hard to grasp. The partial derivative only captures some of the information regarding the dependence of one variable upon another at a given point. The total derivative is built up from several partial derivatives in order to capture all of the information, right?
Okay, sure. You can think of it that way if you want, and, indeed, that will suffice for practically any application of partial and total differentiation. But the real meaning is more subtle than that, and that’s what this post is about.
Now, let’s start over. What’s the derivative of $e^x$ with respect to $t$?
Aha! Now I’ve primed you to be all clever here and inquire for clarification! The partial derivative, which is 0, or the total derivative, which is $e^x \frac{\mathrm{d}x}{\mathrm{d}t}$? Say that, and you’ll be guaranteed to get the trick question right.
But wait. There is something here that should bother you. If $x$ really is a function of $t$, doesn’t that render the partial derivative completely meaningless? It looks as though the partial derivative simply operates on the notation, whereas the total derivative gives you the actual variation between the quantities being studied. So perhaps a partial derivative is nothing but a notational convenience used to build up the total derivative, which is the expression you actually care about when calculating variation. And in the case that $x$ is actually independent of $t$, the total derivative expression reduces to 0, which is still correct.
Actually, this is not true. The partial derivative has meaning. But only if you think in terms of functions. People who are not mathematicians don’t really deal with functions. They deal with relationships between variables, and use the machinery of functions for convenience. But mathematicians deal with functions. Indeed, the entire field of real analysis deals with functions from $\mathbb{R}^m$ to $\mathbb{R}^n$.
Let’s review a basic fact about functions, which is not made clear outside of a post-secondary curriculum in pure mathematics. A function itself is nothing more than some set of ordered pairs of elements from the domain and codomain, subject to the restriction that no element in the domain is paired with more than one element from the codomain. Fundamentally, a function has absolutely nothing to do with variables like $x, t$. Variables are simply convenient tools for writing down the definitions of specific functions, such as $f(x) = e^x$. However, notice that we can simply refer to this function as “the exponential function”, without mentioning any variables at all, which indicates that functions have an intrinsic, variable-free identity. Indeed, the vast majority of functions are non-computable and have no defining expression at all (we usually call those functions “pathological”, though). You can see this because the set of functions from $\mathbb{R}$ to $\mathbb{R}$ has cardinality $2^{\mathfrak{c}}$ whereas the set of all expressions that we can use to write down the definitions of functions is merely countable.
By the way, this is very much like how coordinates are not part of the identity of a vector space; a vector space is just a collection of things that satisfy the vector space axioms. We think of vectors in terms of coordinates, perhaps, because we most often deal with the vector spaces $\mathbb{R}^n$ with the standard basis. But coordinates don’t emerge until you’ve selected a basis, and the definition of a vector space gives you no clues about how to select some unique, canonical basis. (This, by the way, is why higher physics uses such confusing objects as tensors, raising and lowering indices, and covariant derivatives—it needs to respect the observer’s arbitrary choice of a coordinate system.)
Now then, partial derivatives are intrinsic properties of functions. A function whose domain is $\mathbb{R}^m$ has up to $m$ partial derivatives. We can notate these partial derivatives without using any variables at all. This is precisely what is done in the well-known textbook Calculus on Manifolds by Michael Spivak. There, the notations $D_1f, D_2f, \ldots$ are used for the partial derivatives of the function $f$ with respect to the first, second, etc. elements of (the tuple type of) the domain. Spivak goes so far as to refer to $\frac{\partial}{\partial x}$ and so on as “classical notation”, with the implication that they are beautiful and evocative but ultimately imprecise.
So let’s say for example we have a function $f: \mathbb{R}^2 \to \mathbb{R}$. This function is the unique function with this domain and codomain such that a given element of the domain is associated with that element of the codomain that equals the product of the first element and the exponential of the second element of the former. (We would normally write this down, for convenience, as $f(x,y) = x e^y$.) Then, this function has two partial derivatives, $D_1 f$ and $D_2 f$. The partial derivative $D_1 f$ is the function $\mathbb{R}^2 \to \mathbb{R}$ in which a given element of the domain is associated with that element of the codomain that equals the exponential of the second element of the former. (In classical notation, $\frac{\partial f}{\partial x} = e^y$.) The other partial derivative $D_2 f$ is identical to $f$ itself.
Indeed, then, you will see that the symbol $\frac{\partial}{\partial t}$ has no meaning whatsoever on its own. It is not a true operator, despite what quantum mechanics might have you think.2 An operator is a higher-order function, whose domain itself contains functions (or other operators). $\frac{\partial}{\partial t}$ can’t just be fed a function and spit out another function. What is $\frac{\partial}{\partial t}$ applied to the exponential function? You can’t answer, because you’re expecting me to tell you which variable is used in the definition of the exponential function. But it’s all the same function, whichever variable I pick. On the other hand, the symbol $D_1$ is a true operator, whose domain is differentiable functions of at least one real. (For functions of exactly one real, $D_1$ is the ordinary single-variable derivative.) And $D_2$ is an operator whose domain is differentiable functions of at least two reals (since it picks out the second, and differentiates with respect to it.) And so on. The symbol $\frac{\partial}{\partial t}$ acquires meaning when it is paired with an expression, such as $e^x$. The interpretation is now that we have some function of at least two reals, that picks one of them to associate each tuple of the domain with that one’s exponential in the codomain, and we are taking the partial derivative with respect to some other element. For example, $D_2 f$ where $f(x, t) = e^x$, or $D_1 f$ where $f(t, x, y, z) = e^x$.
In light of this revised, correct view of functions, what are total derivatives? Total derivatives are not intrinsic properties of functions. Total derivatives do, in fact, operate on expressions, unlike partial derivatives, which operate on functions. At the time of writing, we have the following from the Wikipedia article on total derivatives:
The part where it says, “if y depends on x“, is crucial, because it shows that associating total derivatives with functions is self-contradictory. You simply cannot say that $f: \mathbb{R}^2 \to \mathbb{R}$ (which is implied by the notation $f(x,y)$) and then introduce a restriction that prevents the first and second elements of the ordered pair of the domain from varying independently. Really, what you’re doing by introducing this dependency is creating a new function, $g(x) = f(x, x) = x^2$, and taking the ordinary derivative of that, instead.
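A tiny SymPy illustration of this point (mine, not the author's): the partial derivative belongs to the two-argument function, while imposing y = x creates a new one-argument function with an ordinary derivative.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x * y                      # f(x, y) = xy, so f(x, x) = x^2
print(sp.diff(f, x))           # partial derivative: y
g = f.subs(y, x)               # the new function g(x) = f(x, x) = x**2
print(sp.diff(g, x))           # ordinary derivative: 2*x
```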
It is more correct to think of the total derivative, not as an operator, but as a notation that represents the relationship between rates of change of two variables, out in the real world where there are no functions, just a bunch of variables that may or may not depend on each other. The notation $\frac{\mathrm{d}f}{\mathrm{d}t}$ does not, then, represent something you do to the function $f$. It represents the rate of change of the variable $f$ with respect to the variable $t$.
Furthermore, the chain rule, in the form I gave in the beginning of this post, is actually an abuse of notation. The thing on the left hand side is a total derivative, which is an expression, and not a function. The right hand side, though, contains partial derivatives, so it looks as though the right hand side must come out to be a function. But wait, the right hand side contains total derivatives too! Arrgh! The problem is that the right hand side contains notation of the form $\frac{\partial f}{\partial x}$. This does not make sense if we treat $f$ as a variable, but only if we treat $f$ as a function and $x$ as one of the bound (dummy) variables used in its definition. So this chain rule is an abuse of notation in that treats $f$ as a variable on the left hand side and as a function on the right hand side. It just happens to be an extremely useful one because it helps you to calculate the total derivative.
With that in mind, let’s revisit an old joke.
A polynomial and $e^x$ are walking down the street, when all of a sudden they notice a differential operator heading toward them. The polynomial panics, and says, “Uh oh, a differential operator. If I run into it too many times, I’ll disappear.” $e^x$ says, “That’s okay, I’ll go talk to it. It can’t do anything to me, because I’m $e^x$.” So $e^x$ approaches the differential operator, and says, “Hi, I’m $e^x$.” The differential operator replies, “Nice to meet you, $e^x$. I’m $\frac{d}{dt}$.”
Some people might point out that $e^x$ will only be annihilated by the differential operator if $x$ is not a function of $t$. The truth is that this joke is just not precisely stated enough to hold up to this analysis (and that you should just laugh at it without reading that much into it). You see, when you simply say “d”, it’s taken to mean the ordinary derivative $d$, rather than a total derivative. But then, since an ordinary derivative is a partial derivative with respect to the sole real-valued variable, it must operate on functions, whereas in the joke the protagonist $e^x$ and the deuteragonist, the polynomial, are merely expressions. The correct (but much less funny) joke then reads:
A polynomial and $e^x$ are walking down the street, when all of a sudden they notice a total derivative heading toward them. The polynomial panics, and says, “Uh oh, a total derivative. If I run into it too many times, I’ll disappear.” $e^x$ says, “That’s okay, I’ll go talk to it. It can’t do anything to me, because I’m $e^x$.” So $e^x$ approaches the total derivative, and says, “Hi, I’m $e^x$.” The total derivative replies, “Nice to meet you, $e^x$. I’m $\frac{\mathrm{d}}{\mathrm{d}t}$.” $e^x$ gulps and wishes that $x$ depended upon $t$.
Or, even more forced:
A polynomial function and the function $f(x, t) = e^x$ are walking down the street, when all of a sudden they notice a differential operator heading toward them. The former panics, and says, “Uh oh, a differential operator. If I run into it too many times, I’ll disappear.” The latter says, “That’s okay, I’ll go talk to it. It can’t do anything to me, because I’m the exponential function.” So $f$ approaches the differential operator, and says, “Hi, I’m $e^x$.” The differential operator replies, “Nice to meet you, $e^x$. I’m $D_2$.”
Sorry for killing the joke, but I hope at least now you understand the subtleties involved in the partial and total derivatives.
1 I used the example of thermodynamics because it has the annoying property that the variables it works with, such as pressure, volume, and temperature, are not independent, and you always have to carefully pay attention to which variables are being held constant and which ones are allowed to vary. It was the only math-based course I’ve ever had trouble with.
2 Whether “quantum mechanics” refers to a field or the people who study that field is left deliberately ambiguous.
Hi! I'm Brian Bi. As of November 2014 I live in Sunnyvale, California, USA and I'm a software engineer at Google. Besides code, I also like math, physics, chemistry, and some other miscellaneous things.
### 4 Responses to The difference between partial and total derivatives
1. Uupis says:
Thanks for taking the time to explain this in detail. I clicked the link because I was curious, but I stayed because it turned out to be interesting. :)
2. Kannabianka says:
Many thanks!
3. Jared Spencer says:
So let’s suppose we have a function f(x,y,z) = x^2 + y^2 + z^2 and that y = 2x and z = sin x. The partial with respect to x would then be 2x. But if we substitute y = 2x we would then have a function g(x,z) = 5x^2 + z^2. Here the partial with respect to x would be 10x. For both functions f and g the total derivative with respect to x is 10x +2(sin x)(cos x), so the total derivative is the same in both cases, however, very obviously partial f with respect to x and partial g with respect to x are NOT the same thing. This has always bothered me, as it seems to imply that varying x a little but will cause your “answer” to depend on how you write something down on a piece of paper, which is nonsense. From your post I understand this is the case because f(x,y,z) and g(x,z) are NOT the same function, even if they evaluate to the same “answer” from an elementary point of view. The partial derivative then is “attached” to the function itself, and one should be very careful when making substitutions of variables where partial derivatives are important. On the other hand the total derivative really just evaluates how one variable changes with respect to others and doesn’t care about the choice of function used to describe the relationship between the variables, and thus is not “attached” to the function in the same way as the partial derivative is. Do I have this basically right? I am a chemist, not a mathematician, and the rule for partial derivatives has always bothered me because I did not understand how you could artificially hold one variable constant if it depended on the other (like y and z in the above example) and obtain anything meaningful. If what I say above is a more correct way of thinking about it, then I believe I can better understand what is happening in these situations, although from a using mathematics to describe physics standpoint it seems that choosing when to take a partial derivative and how it actually applies to “the real world” should be handled with a great amount of care. Or perhaps a better way of saying it is that the choice of which function to pick and apply to a physical phenomenon (e.g., f vs. g above) depends on more than just writing down a form that when evaluated at some points x, y, … gives you the “right” answer.
4. Olivier says:
|
|
• Masao Doyama
Articles written in Bulletin of Materials Science
• Computer simulation of deformation and fracture of small crystals by molecular dynamics method
Body-centred cubic iron whiskers having [100] and [110] axes were pulled in a molecular dynamics simulation using a supercomputer. An upper yield stress close to the theoretical strength was found. Above the upper yield stress, a phase transformation was observed; at the same time the stress was greatly reduced. A new possible mechanism of twinning is proposed. The whiskers were pulled until they had broken into two pieces. Small copper crystals with and without a notch were sheared. It was observed that edge dislocations were created at the surface, moved through the crystals and escaped from them. Small copper single crystals with a notch were pulled. A half-dislocation was created near the tip of the notch. A sharp yield stress was observed. At intermediate deformation, dislocations were created on different slip planes; owing to the cutting of dislocations, the tensile stress increased.
• Computer simulation of surface diffusion of copper, silver and gold
The binding energies to copper, silver and gold (111) surfaces of self-atom clusters have been calculated. The activation energies of motion of these ad-atom clusters, vacancies and divacancies on copper, silver and gold (111) surfaces, and of the conversion of ad-atom clusters on (111) and (100), have been calculated by use of n-body embedded-atom potentials and molecular dynamics.
• Creation and motion of dislocations and fracture in metal and silicon crystals
By making a step on one surface ($$\left( {11\bar 2} \right)$$) of a small rectangular parallelepiped copper crystal, dislocations could be created by the molecular dynamics method. The dislocation created was not a complete edge dislocation but a pair of Heidenreich-Shockley partial dislocations. Each time a dislocation was created, the stress on the surface was released. Small copper crystals having a notch were pulled (until fracture), compressed and buckled by use of the molecular dynamics method. An embedded-atom potential was used to represent the interaction between atoms. Dislocations were created near the tip of the notch. A very sharp yield stress was observed.
The results of high-speed deformations of pure silicon small crystals using molecular dynamics are presented. The results suggest that plastic deformation may be possible for silicon under high-speed deformation even at room temperature. Another small single crystal, of the same size and with the same surfaces, was compressed using the molecular dynamics method. The surfaces are {110}, {112} and {111}; the compression direction was [111]. It was found that silicon crystals can be compressed under high-speed deformation, which again suggests that silicon may be plastically deformed at high deformation rates.
• Foreword
• Crystal growth study using combination of molecular dynamics and Monte Carlo methods
Although the molecular dynamics method provides details of the energies of the system as a function of time, it is not well suited to simulating processes that involve activation barriers. We therefore attempted to combine the molecular dynamics and Monte Carlo methods. Using molecular dynamics, the energies of the system were calculated and subsequently combined with the Monte Carlo method, using random numbers, to simulate epitaxial growth on the (111) plane of copper, silver, and gold. While surface adsorption and surface diffusion for copper, silver, and gold were simulated by molecular dynamics, the relation between the growth rate of thin films and the packing density of atoms was obtained by Monte Carlo simulation. Thus, by combining the results of the two methods, the growth process of thin films at elevated temperatures was obtained, which would be too tedious to calculate by molecular dynamics alone.
|
|
When the radical is a square root, you should try to have terms raised to an even power (2, 4, 6, 8, etc.). For example: what is the length of the diagonal of a square with area 48? The side is $\sqrt{48} = \sqrt{16 \cdot 3} = 4\sqrt{3}$, so the diagonal is $4\sqrt{3} \cdot \sqrt{2} = 4\sqrt{6}$.
A radical expression is composed of three parts: the radical symbol, the radicand (the number or expression inside the symbol), and the index. For square roots the index is 2 and is usually not written. A radical expression is in simplified form if the radicand contains no perfect-square factors, no fractions appear under the radical, and no radicals appear in any denominator.
To simplify a square-root expression, find the largest perfect square that divides the radicand. For $\sqrt{200}$, both 4 and 25 divide 200, but the largest perfect-square factor is 100, so $\sqrt{200} = \sqrt{100 \cdot 2} = 10\sqrt{2}$. Picking the largest factor keeps the solution short; if you start with a smaller one, just repeat the step until no perfect-square factor remains. For $\sqrt{72}$, any of the perfect squares 4, 9 and 36 will do, but 36 gets there in one step: $\sqrt{72} = \sqrt{36 \cdot 2} = 6\sqrt{2}$.
An equivalent method is prime factorization: write the radicand as a product of primes, then move one member of each pair of equal primes outside the radical. Start by dividing by 2 until that no longer works, then try 3, 5, 7, and so on. For example, $\sqrt{48} = \sqrt{2 \cdot 2 \cdot 2 \cdot 2 \cdot 3} = 2 \cdot 2\sqrt{3} = 4\sqrt{3}$.
Some more worked examples: $\sqrt{16} = 4$, since $4^2 = 16$; $\sqrt{180} = \sqrt{36 \cdot 5} = 6\sqrt{5}$; $\sqrt{125} = \sqrt{25 \cdot 5} = 5\sqrt{5}$.
The same ideas apply to variables (we assume throughout that all variable expressions are nonnegative). Variables raised to even powers are perfect squares; write an odd power as an even power plus 1. For example, $\sqrt{12x^2y^4} = \sqrt{4 \cdot 3 \cdot x^2 \cdot y^4} = 2xy^2\sqrt{3}$, and $\sqrt{x^3} = \sqrt{x^2 \cdot x} = x\sqrt{x}$.
Finally, one rule is that you can't leave a square root in the denominator of a fraction. To rationalize a denominator, multiply the fraction by a strategic form of 1, just as you would when finding a common denominator for fifths and sevenths. For a one-term denominator, multiply top and bottom by the radical itself: $\frac{5}{\sqrt{3}} = \frac{5}{\sqrt{3}} \cdot \frac{\sqrt{3}}{\sqrt{3}} = \frac{5\sqrt{3}}{3}$. If the denominator is a sum or difference involving a radical, multiply by its conjugate instead; the product of conjugates is a whole number, so the radical disappears from the denominator. Once you have rationalized, check whether anything else simplifies, and remember that you can only cancel common factors, not parts of expressions, so don't try to reach inside the numerator and "cancel" a term against the denominator.
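The "pull out the largest perfect-square factor" procedure described above is easy to mechanize. A minimal Python sketch (the function name and interface are mine):

def simplify_sqrt(n):
    # Write sqrt(n) as a * sqrt(b) with b square-free,
    # by moving one member of every pair of equal prime factors outside.
    a, b = 1, n
    f = 2
    while f * f <= b:
        while b % (f * f) == 0:  # a pair of equal factors divides the radicand
            a *= f               # one copy escapes the radical
            b //= f * f
        f += 1
    return a, b

print(simplify_sqrt(48))   # (4, 3)  ->  4*sqrt(3)
print(simplify_sqrt(180))  # (6, 5)  ->  6*sqrt(5)
print(simplify_sqrt(200))  # (10, 2) -> 10*sqrt(2)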
|
|
## Solutions to Try Its
1. $\left({b}^{2}-a\right)\left(x+6\right)$
2. $\left(x - 6\right)\left(x - 1\right)$
3. a. $\left(2x+3\right)\left(x+3\right)$
b. $\left(3x - 1\right)\left(2x+1\right)$
4. ${\left(7x - 1\right)}^{2}$
5. $\left(9y+10\right)\left(9y - 10\right)$
6. $\left(6a+b\right)\left(36{a}^{2}-6ab+{b}^{2}\right)$
7. $\left(10x - 1\right)\left(100{x}^{2}+10x+1\right)$
8. ${\left(5a - 1\right)}^{-\frac{1}{4}}\left(17a - 2\right)$
## Solutions to Odd-Numbered Exercises
1. The terms of a polynomial do not have to have a common factor for the entire polynomial to be factorable. For example, $4{x}^{2}$ and $-9{y}^{2}$ don’t have a common factor, but the whole polynomial is still factorable: $4{x}^{2}-9{y}^{2}=\left(2x+3y\right)\left(2x - 3y\right)$.
3. Divide the $x$ term into the sum of two terms, factor each portion of the expression separately, and then factor out the GCF of the entire expression. (A worked instance appears after this list.)
5. $7m$
7. $10{m}^{3}$
9. $y$
11. $\left(2a - 3\right)\left(a+6\right)$
13. $\left(3n - 11\right)\left(2n+1\right)$
15. $\left(p+1\right)\left(2p - 7\right)$
17. $\left(5h+3\right)\left(2h - 3\right)$
19. $\left(9d - 1\right)\left(d - 8\right)$
21. $\left(12t+13\right)\left(t - 1\right)$
23. $\left(4x+10\right)\left(4x - 10\right)$
25. $\left(11p+13\right)\left(11p - 13\right)$
27. $\left(19d+9\right)\left(19d - 9\right)$
29. $\left(12b+5c\right)\left(12b - 5c\right)$
31. ${\left(7n+12\right)}^{2}$
33. ${\left(15y+4\right)}^{2}$
35. ${\left(5p - 12\right)}^{2}$
37. $\left(x+6\right)\left({x}^{2}-6x+36\right)$
39. $\left(5a+7\right)\left(25{a}^{2}-35a+49\right)$
41. $\left(4x - 5\right)\left(16{x}^{2}+20x+25\right)$
43. $\left(5r+12s\right)\left(25{r}^{2}-60rs+144{s}^{2}\right)$
45. ${\left(2c+3\right)}^{-\frac{1}{4}}\left(-7c - 15\right)$
47. ${\left(x+2\right)}^{-\frac{2}{5}}\left(19x+10\right)$
49. ${\left(2z - 9\right)}^{-\frac{3}{2}}\left(27z - 99\right)$
51. $\left(14x - 3\right)\left(7x+9\right)$
53. $\left(3x+5\right)\left(3x - 5\right)$
55. ${\left(2x+5\right)}^{2}{\left(2x - 5\right)}^{2}$
57. $\left(4{z}^{2}+49{a}^{2}\right)\left(2z+7a\right)\left(2z - 7a\right)$
59. $\frac{1}{\left(4x+9\right)\left(4x - 9\right)\left(2x+3\right)}$
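As a worked instance of the grouping method described in solution 3, applied to the trinomial behind exercise 11 (the trinomial is reconstructed from its factored answer above):

$2{a}^{2}+9a - 18=2{a}^{2}+12a - 3a - 18=2a\left(a+6\right)-3\left(a+6\right)=\left(2a - 3\right)\left(a+6\right)$

Here $9a$ is split as $12a - 3a$ because $12\cdot \left(-3\right)=2\cdot \left(-18\right)=-36$ while $12+\left(-3\right)=9$.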
|
|
# Risk management
Risk management in enterprises comprises all measures for risk avoidance, risk mitigation, risk diversification, risk transfer, and risk provisioning.
## General
Businesses are exposed to a variety of risks . They are then also called risk carriers because they consciously or unconsciously have to bear risks. Risk carriers are also the individual objects or processes that harbor risks, such as operational weaknesses like unqualified personnel . These risks can arise for technical , general economic (especially financial ) or legal reasons and can lead to operational disruptions , losses or even corporate crises up to insolvency . Risks of this kind are a subject of investigation in business administration , which deals with the types, consequences and avoidance of operational risks and has developed several strategies to minimize or even completely eliminate them. Risk management influences the risk behavior and risk appetite of a company, and vice versa.
Risk identification, as the first step of risk management, attempts a systematic recording of potential risks . It is followed by risk analysis , which examines the identified risks with regard to their causes and probability of occurrence. This is followed by risk assessment , which determines the threat the analyzed risks pose to a company and judges whether they are acceptable. In the context of risk management, it is then important to bear the risks that are considered justifiable and to install a suitable risk control system.
Risks have to be taken in order to generate profit and wealth for a company. The decisive assessment of a company's success, however, is based on the selection of the "right" risks ( English "upside risks" ). In order to master risks, the right strategies must be developed, and correspondingly efficient and effective business processes must be defined as part of risk-conscious corporate management.
## Types
A general distinction is made between active and passive risk management, also referred to as cause-related and effect-related risk control. Active risk management seeks to influence the probability of occurrence and/or the extent of risks. In passive risk management , measures are taken to be able to cope with the economic consequences of existing or expected risks; the risks themselves are not changed. Active risk management is therefore also called a preventive risk policy , passive risk management a corrective risk policy .
## Activities
Active risk management includes risk avoidance, risk reduction and risk diversification.
• Risk avoidance : If a company decides not to carry out planned activities (e.g. investments ) or to abandon existing activities before a risk occurs, this is risk avoidance, the complete renunciation of a risky activity. This strategy should only be considered when no other approach is possible or the risk-reward ratio cannot be adequately optimized, since a foregone activity can generate no profits . An example would be leaving a critical business area. It is the most radical form of risk management, in which the probability of occurrence of a specific risk is set to zero.
• Risk reduction : One speaks of risk reduction when the likelihood of occurrence is reduced to an acceptable level of risk , for example because loan collateral (especially with banks and insurance companies ) or retention of title and prepayment (with suppliers ) reduce existing credit and debtor risks . A reduction of the damage caused by technical risks can be achieved with the help of product recalls .
• Risk diversification serves to regulate risks: it does not necessarily reduce the probability of occurrence of an individual risk, but it does limit the extent of the damage. Since it is very unlikely that all risks will occur simultaneously and in their entirety, dependencies should be avoided by, for example, having several suppliers to choose from and comparing the quality of business partners.
Passive risk management consists of risk transfer and risk provisioning. It is necessary when, consciously or unconsciously, no active risk management has been carried out for certain risks, so that the occurrence of a risk has to be dealt with operationally.
After all of these measures have been implemented, residual risks remain that a company consciously accepts, assuming that technical or market developments will proceed according to plan with a probability of over 50%.
## Application in practice and problems
Psychological research has shown that most people have an intense antipathy towards risks and losses. Significant consequences for entrepreneurial risk management arise from the human endeavor to avoid cognitive dissonance and to control one's environment: the conscious or unconscious neglect of existing risks means that sound risk management procedures are not used and that plan deviations are not examined later with regard to their causal risks. In some companies, the approach to risk management is therefore still reduced to insurance alone. However, risk management is not about eliminating all risks from the organization (the "zero-risk illusion"), since every entrepreneurial activity involves taking risks. The aim is to optimize a company's risk-opportunity profile. Relying on a single risk management strategy is not advisable in practice; a mix of different measures is most efficient. The assessment of forecast earnings and the associated risks is part of every thorough planning of business decisions.
## Risk report
According to the KonTraG, in force since May 1998, corporations are obliged to add a risk report to the management report , to document risks that threaten the existence of the company, and to “address the risks of future development”. However, the statutory requirements for risk reports are described only in half-sentences in Sections 289(1) and 315(1) of the German Commercial Code ( HGB ), leaving companies a large margin of discretion. This also results in an indirect legal obligation for corporations to examine and control their risks and opportunities through risk management. They must install an internal control system that defines recurring control steps and executes them at a determined frequency in order to reduce key risks.
|
|
# Double pendulum chaos
Pendula have fascinated people for centuries. Probably the most famous pendulum is Foucault’s pendulum, which was used to demonstrate Earth’s rotation.
In the first part of this post we will scratch the surface of the mechanics behind the pendulum movement and show the equations needed to solve these problems numerically. The second part contains several video examples of pendula movement.
If you want to skip the mathematical part and go straight to the videos, click here.
## Simple pendulum
A simple gravity pendulum is a well-known idealized mathematical model of a real pendulum. It consists of a massless incompressible rod with one fixed end and a point mass on the other end. It is a frictionless system oscillating at a constant frequency.
### Polar coordinates
The equation of motion of a simple pendulum, obtained from free body diagram and mass-acceleration diagram, is
$\ddot \theta + \frac{g}{l} \sin \theta = 0$
where $$\theta$$ is the angle from the vertical, $$\ddot \theta$$ is the angular acceleration, $$g$$ is the acceleration of gravity, and $$l$$ is the length of the pendulum.
### Cartesian coordinates
If the same problem is rewritten in Cartesian coordinates, an additional algebraic constraint is needed
$f (x, y) = x^2 + y^2 - l^2 = 0$
which describes the orbit of the free end of the pendulum, i.e. it means the length of a rod must remain a constant.
From the Lagrangian of the system ($$L = T - V$$, where $$T$$ and $$V$$ are kinetic and potential energy, respectively), using Lagrange’s equations of the first kind we obtain equations of motion
\begin{align} m \ddot x &= - 2 x \lambda \\ m \ddot y &= - m g - 2 y \lambda \end{align}
where $$m$$ is the mass, $$\ddot{x}$$ and $$\ddot{y}$$ are the accelerations in $$x$$ and $$y$$ directions (measured from the fixed end of the pendulum) respectively, and $$\lambda$$ is the Lagrange multiplier.
## Double pendulum
A double pendulum is made by attaching another pendulum to the free end of a simple pendulum. In our examples, the motion is still restricted to the vertical plane, and rods are massless with point masses on their ends.
In this situation two algebraic constraints are needed
\begin{align} f_1 &= x_1^2 + y_1^2 - l_1^2 = 0 \\ f_2 &= (x_2 - x_1)^2 + (y_2 - y_1)^2 - l_2^2 = 0 \end{align}
describing the orbits of both masses, while the lengths of the rods remain constant.
Now the equations of motion are
\begin{align} m_1 \ddot{x_1} &= 2 (\lambda_1 + \lambda_2) x_1 - 2 \lambda_2 x_2 \\ m_1 \ddot{y_1} &= 2 (\lambda_1 + \lambda_2) y_1 - 2 \lambda_2 y_2 - m_1 g \\ m_2 \ddot{x_2} &= - 2 \lambda_2 x_1 + 2 \lambda_2 x_2 \\ m_2 \ddot{y_2} &= - 2 \lambda_2 y_1 + 2 \lambda_2 y_2 - m_2 g. \end{align}
This can be solved numerically. In our case, we have used the Runge-Kutta 4 method to calculate values at each time step, which was taken as 0.001 s.
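For the simple pendulum in polar form the time stepping is easy to sketch; the Cartesian double-pendulum system above additionally requires solving for the Lagrange multipliers at every step, which is omitted here. A minimal Python sketch (parameter values, initial angle, and names are illustrative):

import math

g, l = 9.81, 2.5  # gravitational acceleration (m/s^2) and rod length (m)

def deriv(state):
    # simple pendulum in polar form: theta'' = -(g/l) sin(theta)
    theta, omega = state
    return (omega, -(g / l) * math.sin(theta))

def rk4_step(state, dt):
    # one classical Runge-Kutta 4 step
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (math.radians(60.0), 0.0)  # released at 60 degrees, at rest
dt = 0.001                         # time step used in the post
for _ in range(3000):              # integrate 3 seconds of motion
    state = rk4_step(state, dt)
print(state)  # (angle, angular velocity) after 3 s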
## Animations
### How to read the graphs
The graphs shown in this section have three parts.
The main part is the view of the pendula as they swing in the vertical plane. In all the graphs there are two pendula: a blue one with the larger mass(es) at its end(s), and an orange one with the smaller mass(es); the size of a circle is proportional to the mass.
The other parts of a graph are phase plane plots.
Under the main part is a plot of a relation between a position (on a horizontal axis) and a velocity (on a vertical axis) in x-direction of each pendulum’s “bottom” mass.
On the right side is a similar plot of positions (on a vertical axis) and velocities (on a horizontal axis) in y-direction.
### Single pendulum
First it will be shown how two single pendula of different masses swing.
The length of both pendula is 2.5 m. The blue one has a mass of 5 kg and is released from the position (1.50 m, -2.00 m). The orange one has a mass of 2.5 kg and starts from a 10 cm smaller incline (1.40 m, -2.07 m).
There are several things to notice here:
1. Even though the blue pendulum has twice the mass of the orange one, they seem to have quite similar periods and frequencies of oscillation.
2. The small difference in the periods of oscillation (by the end of the video the blue pendulum is slightly lagging behind the orange pendulum) is the result of releasing them from a bit different initial positions. For a smaller incline, these differences would be negligible (see small angle approximation).
3. Looking at the phase plane plots, we see that each swing is the same as the previous (in our calculations, friction and air resistance are ignored).
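Both of the first two observations are what the small-angle approximation predicts: for small $$\theta$$, $$\sin \theta \approx \theta$$ reduces the equation of motion to $$\ddot \theta + \frac{g}{l} \theta = 0$$, a harmonic oscillator with period $$T = 2 \pi \sqrt{l/g} \approx 2 \pi \sqrt{2.5/9.81} \approx 3.2$$ s, independent of the mass and, to first order, of the amplitude.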
Taking all that into account, a single pendulum is quite boring. Let’s see if the situation is improved with double pendula.
### Double pendulum
One of the first things you can read about double pendula on Wikipedia is that they are chaotic. In the next couple of examples, we will try to explain what that means.
The first example takes the single pendula from the previous example (the same lengths, masses, and initial positions) and attaches to the end of each another pendulum with the same characteristics, starting from a horizontal position.
The behaviour of these two pendula is a bit different, but still quite similar to each other. Phase plots are not as “tidy” as in the case of single pendulum, but there is a clearly visible regularity.
Shouldn’t double pendula be chaotic?
Well, for small enough initial angles, double pendula behave similarly to single pendula, and the chaotic nature is not pronounced.
### Chaos
Things change dramatically for double pendula released from a larger initial angle.
Take the previous example and mirror it — now the starting positions are above the fixed point. All other properties remain the same as before.
Even before the pendula have reached their left-most position for the first time we can observe the clear differences in their behaviour. Phase plots are significantly different.
So we have found the initial positions for which different behaviours can be observed. But how large is this chaos-effect?
Let’s now take two almost identical pendula: the only difference between them is that the orange one has its rod connecting it to the fixed point 1 millimeter (0.001 m) longer. The difference in the length between the blue and orange rod is only 0.04 %.
For the first three seconds of the animation, both pendula behave as one (in all three graphs only the orange color is seen). After three more seconds, they are in completely different positions, as if they had never “been together”.
This is a true example of chaotic behaviour — even a very small difference in initial conditions leads to large differences later on.
## Conclusions
A single pendulum has a very predictable motion — if you know the rod length, you can very simply and relatively accurately predict where it will be at any point in time.
A double pendulum released from a small initial angle behaves similarly to the single pendulum. On the other hand, releasing it from a large enough initial angle will produce chaotic behaviour which is impossible to predict.
If you would like to see more of chaotic double pendula, take a look at my double pendulum bot on Twitter.
|
|
## Re: [AUCTeX-devel] exclude begin{comment} (optionally) from LaTeX-fill-region and friends
From: Mosè Giordano
Subject: Re: [AUCTeX-devel] exclude begin{comment} (optionally) from LaTeX-fill-region and friends
Date: Sat, 4 Jun 2016 12:34:04 +0200
Hi Uwe,
2016-06-04 10:40 GMT+02:00 Uwe Brauer <address@hidden>:
>
> Hi
>
> Since some time I am using orgtbl-mode, a minor mode from the org package,
> which greatly simplifies the constructions of latex tables, even in a
> latex buffer.
>
> The idea is that the org table construct is wrapped into a
> comment environment, like this.
>
> \begin{comment}
> #+ORGTBL: SEND data2 orgtbl-to-latex :lend "\\\\ \\hline"
> | / | <> | <> | <> | <> | <> | <> | <> | <> | <> | <> |
> |---+----+----+----+----+----+----+----+----+----+----|
> | | A | 6 | 8 | 8 | 9 | 9 | 6 | 7 | 8 | 9 |
> | | B | 7 | 6 | 7 | 7 | 6 | 8 | 6 | 8 | 8 |
> \end{comment}
>
> And then it is «sent» as LaTeX source code into some place of the buffer.
>
>
> The only problem is when I run LaTeX-fill-region and friends
> because it results in a useless (for orgtbl-mode) construct like this
>
> \begin{comment}
> #+ORGTBL: SEND data2 orgtbl-to-latex :lend "\\\\ \\hline" | / | <> |
> <> | <> | <> | <> | <> | <> | <> | <> | <> |
> |---+----+----+----+----+----+----+----+----+----+----| | | A | 6 |
> 8 | 8 | 9 | 9 | 6 | 7 | 8 | 9 | | | B | 7 | 6 | 7 | 7 | 6 | 8 | 6 |
> 8 | 8 |
> \end{comment}
>
>
> So the question is this: could (optionally) \begin{comment} be excluded
> from LaTeX-fill-region and friends?
Yes, if you're fine with treating it like a verbatim environment: add the
`comment' environment to `LaTeX-indent-environment-list' with
`current-indentation' as indentation rule:
--8<---------------cut here---------------start------------->8---
;; treat the comment environment like verbatim for indentation/filling
(add-to-list 'LaTeX-indent-environment-list
             '("comment" current-indentation))
--8<---------------cut here---------------end--------------->8---
|
|
# GEOMETRIC ANALYSIS ON THE DIEDERICH-FORNÆSS INDEX
• Accepted : 2018.01.30
• Published : 2018.07.01
#### Abstract
Given bounded pseudoconvex domains in 2-dimensional complex Euclidean space, we derive analytical and geometric conditions which guarantee that the Diederich-Fornæss index is 1. The analytical condition is independent of strongly pseudoconvex points and extends Fornæss-Herbig's theorem of 2007. The geometric condition reveals that the index reflects topological properties of the boundary. The proof uses an idea combining differential equations and geometric analysis to find the optimal defining function. We also give a precise domain whose Diederich-Fornæss index is 1. The index of this domain cannot be verified by previously known theorems.
#### References
1. M. Adachi and J. Brinkschulte, A global estimate for the Diederich-Fornaess index of weakly pseudoconvex domains, Nagoya Math. J. 220 (2015), 67-80. https://doi.org/10.1215/00277630-3335655
2. D. E. Barrett, Behavior of the Bergman projection on the Diederich-Fornaess worm, Acta Math. 168 (1992), no. 1-2, 1-10. https://doi.org/10.1007/BF02392975
3. M. Behrens, Plurisubharmonic defining functions of weakly pseudoconvex domains in $C^2$, Math. Ann. 270 (1985), no. 2, 285-296. https://doi.org/10.1007/BF01456187
4. B. Berndtsson and P. Charpentier, A Sobolev mapping property of the Bergman kernel, Math. Z. 235 (2000), no. 1, 1-10. https://doi.org/10.1007/s002090000099
5. H. P. Boas and E. J. Straube, Sobolev estimates for the ${\bar{\partial}}$-Neumann operator on domains in $C^n$ admitting a defining function that is plurisubharmonic on the boundary, Math. Z. 206 (1991), no. 1, 81-88. https://doi.org/10.1007/BF02571327
6. H. P. Boas and E. J. Straube, de Rham cohomology of manifolds containing the points of infinite type, and Sobolev estimates for the ${\bar{\partial}}$-Neumann problem, J. Geom. Anal. 3 (1993), no. 3, 225-235. https://doi.org/10.1007/BF02921391
7. D. Catlin, Subelliptic estimates for the ${\bar{\partial}}$-Neumann problem on pseudoconvex domains, Ann. of Math. (2) 126 (1987), no. 1, 131-191. https://doi.org/10.2307/1971347
8. J.-P. Demailly, Mesures de Monge-Ampere et mesures pluriharmoniques, Math. Z. 194 (1987), no. 4, 519-564. https://doi.org/10.1007/BF01161920
9. K. Diederich and J. E. Fornaess, Pseudoconvex domains: an example with nontrivial Nebenhulle, Math. Ann. 225 (1977), no. 3, 275-292. https://doi.org/10.1007/BF01425243
10. K. Diederich and J. E. Fornaess, Pseudoconvex domains: bounded strictly plurisubharmonic exhaustion functions, Invent. Math. 39 (1977), no. 2, 129-141. https://doi.org/10.1007/BF01390105
11. K. Diederich and J. E. Fornaess, Pseudoconvex domains: existence of Stein neighborhoods, Duke Math. J. 44 (1977), no. 3, 641-662. https://doi.org/10.1215/S0012-7094-77-04427-1
12. J. E. Fornaess and A.-K. Herbig, A note on plurisubharmonic defining functions in ${\mathbb{C}}^2$, Math. Z. 257 (2007), no. 4, 769-781. https://doi.org/10.1007/s00209-007-0143-2
13. J. E. Fornaess and A.-K. Herbig, A note on plurisubharmonic defining functions in ${\mathbb{C}}^n$, Math. Ann. 342 (2008), no. 4, 749-772. https://doi.org/10.1007/s00208-008-0255-y
14. S. Fu and M.-C. Shaw, The Diederich-Fornaess exponent and non-existence of Stein domains with Levi-flat boundaries, J. Geom. Anal. 26 (2016), no. 1, 220-230. https://doi.org/10.1007/s12220-014-9546-6
15. P. S. Harrington, The order of plurisubharmonicity on pseudoconvex domains with Lipschitz boundaries, Math. Res. Lett. 15 (2008), no. 3, 485-490. https://doi.org/10.4310/MRL.2008.v15.n3.a8
16. P. S. Harrington, Bounded plurisubharmonic exhaustion functions for Lipschitz pseudoconvex domains in ${\mathbb{CP}}^n$, J. Geom. Anal. 27 (2017), no. 4, 3404-3440. https://doi.org/10.1007/s12220-017-9809-0
17. A.-K. Herbig and J. D. McNeal, Convex defining functions for convex domains, J. Geom. Anal. 22 (2012), no. 2, 433-454. https://doi.org/10.1007/s12220-010-9202-8
18. A.-K. Herbig and J. D. McNeal, Oka's lemma, convexity, and intermediate positivity conditions, Illinois J. Math. 56 (2012), no. 1, 195-211 (2013).
19. N. Kerzman and J.-P. Rosay, Fonctions plurisousharmoniques d'exhaustion bornees et domaines taut, Math. Ann. 257 (1981), no. 2, 171-184. https://doi.org/10.1007/BF01458282
20. J. J. Kohn, Quantitative estimates for global regularity, in Analysis and geometry in several complex variables (Katata, 1997), 97-128, Trends Math, Birkhauser Boston, Boston, MA, 1999.
21. S. G. Krantz and M. M. Peloso, Analysis and geometry on worm domains, J. Geom. Anal. 18 (2008), no. 2, 478-510. https://doi.org/10.1007/s12220-008-9021-3
22. J. M. Lee, Introduction to Smooth Manifolds, second edition, Graduate Texts in Mathematics, 218, Springer, New York, 2013.
23. J. D. McNeal, Lower bounds on the Bergman metric near a point of finite type, Ann. of Math. (2) 136 (1992), no. 2, 339-360. https://doi.org/10.2307/2946608
24. A. Noell, Local and global plurisubharmonic defining functions, Pacific J. Math. 176 (1996), no. 2, 421-426. https://doi.org/10.2140/pjm.1996.176.421
25. T. Ohsawa and N. Sibony, Bounded p.s.h. functions and pseudoconvexity in Kahler manifold, Nagoya Math. J. 149 (1998), 1-8. https://doi.org/10.1017/S0027763000006516
26. T. Ohsawa and N. Sibony, Kahler identity on Levi flat manifolds and application to the embedding, Nagoya Math. J. 158 (2000), 87-93. https://doi.org/10.1017/S0027763000007315
27. P. Petersen, Riemannian Geometry, second edition, Graduate Texts in Mathematics, 171, Springer, New York, 2006.
28. S. Pinton and G. Zampieri, The Diederich-Fornaess index and the global regularity of the ${\bar{\partial}}$-Neumann problem, Math. Z. 276 (2014), no. 1-2, 93-113. https://doi.org/10.1007/s00209-013-1188-z
|
|
## Algebra 1: Common Core (15th Edition)
$\frac{1}{12}$
You do not replace the coins, so the events are dependent.
3 of the 9 coins are dimes: P(dime) = $\frac{3}{9}$ = $\frac{1}{3}$.
2 of the 8 remaining coins are pennies: P(penny after dime) = $\frac{2}{8}$ = $\frac{1}{4}$.
P(dime then penny) = P(dime) $\times$ P(penny after dime) = $\frac{1}{3}$ $\times$ $\frac{1}{4}$ = $\frac{1}{12}$.
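As a quick sanity check of this dependent-events computation, a minimal Python simulation (a sketch; the coin counts follow the problem: 3 dimes and 2 pennies among 9 coins):

import random

coins = ["dime"] * 3 + ["penny"] * 2 + ["other"] * 4  # 9 coins total

trials, hits = 10**6, 0
for _ in range(trials):
    first, second = random.sample(coins, 2)  # two draws without replacement
    if first == "dime" and second == "penny":
        hits += 1

print(hits / trials)  # should be close to 1/12 = 0.0833...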
|
|
# E Corrections & Remarks
Errata and remarks concerning the first edition print version of the book are displayed here.
Last updated: 08 April, 2022.
## Chapter 3.3.3
In the imaginary experiment in Figure 3.4, four participants experience the event during the 10-year observation period. In the calculation example, however, we used $$E=3$$ to calculate the incidence rate. In the online version, we have now corrected this so that $$E=4$$, as in the experiment.
## Chapter 4.2
A new version of {meta} (version 5.0-0) has recently been released. We adapted the code in this chapter accordingly to avoid deprecation messages:
• The comb.fixed and comb.random arguments are now called fixed and random, respectively.
• To print all studies, one now has to use the summary method for {meta} meta-analysis objects.
## Chapter 7.3
A new version of {meta} (version 5.0-0) has recently been released. We adapted the code in this chapter accordingly to avoid deprecation messages:
• The byvar argument is now called subgroup.
• To print all studies, one now has to use the summary method for {meta} meta-analysis objects.
## Chapter 12.2.1
The print version contains a factual error concerning the definition of full rank in non-square (rectangular) matrices. It is stated that a “matrix is not of full rank when its rows are not all independent.” This, however, only applies to square matrices and to non-square matrices with fewer rows than columns ($$m < n$$). In our example, there are more rows than columns; this means that $$\boldsymbol{X}$$ is not of full rank because its columns are not all independent (in $$m > n$$ matrices, the rows are always linearly dependent). This erratum has been corrected in the online version.
## Chapter 12.2.2
A new version of {netmeta} (version 2.0-0) has recently been released. We adapted the code in this chapter accordingly to avoid error messages:
• The latest version of {netmeta} resulted in non-convergence of the Fisher scoring algorithm implemented in rma.mv. This problem pertains to all versions of {dmetar} installed before 24-Oct-2021. To avoid the issue, simply reinstall the latest version of {dmetar}.
|
|
# Fontspec/xeCJK AutoFakeBold and copyable Chinese characters in PDF
I'm writing a document in mixed English and Chinese (with the xeCJK package). I want to add some fake-bold headlines using the font Kaiti SC. When I do this using the AutoFakeBold option, the characters in the resulting PDF can no longer be copied. How can I fix this?
It seems like xeCJK is loading fontspec behind the curtains, so the issue might be related to it.
Here's a MWE.
%!TEX TS-program = xelatex
%!TEX encoding = UTF-8 Unicode
\documentclass[12pt, a4paper]{article}
\usepackage{xeCJK}
\setCJKmainfont[Scale=1.2, AutoFakeBold=true]{Kaiti SC} % removing the bold makes the output copyable
\begin{document}
\textbf{你好}
\end{document}
Malipivo provided a solution using \pdfliteral, and that technique works for XeTeX too. This technique is used in the zhmCJK package. You can read the documented source code of zhmCJK if you are interested in it.
However, it is quite tricky to apply this technique properly in xeCJK:
% !TeX program = XeLaTeX
% !TeX encoding = UTF-8
\documentclass{article}
\usepackage{xeCJK}
\setCJKmainfont{SimSun}
% value > 0
\def\xeCJKembold{0.4}
% hack into xeCJK, you don't need to understand it
\def\saveCJKnode{\dimen255\lastkern}
\def\restoreCJKnode{\kern-\dimen255\kern\dimen255}
% save old definition of \CJKsymbol and \CJKpunctsymbol for CJK output
\let\CJKoldsymbol\CJKsymbol
\let\CJKoldpunctsymbol\CJKpunctsymbol
% apply pdf literal fake bold
% (2 Tr selects PDF text rendering mode 2, fill then stroke; w sets the stroke width)
\def\CJKfakeboldsymbol#1{%
\special{pdf:literal direct 2 Tr \xeCJKembold\space w}%
\CJKoldsymbol{#1}%
\saveCJKnode
\special{pdf:literal direct 0 Tr}%
\restoreCJKnode}
\def\CJKfakeboldpunctsymbol#1{%
\special{pdf:literal direct 2 Tr \xeCJKembold\space w}%
\CJKoldpunctsymbol{#1}%
\saveCJKnode
\special{pdf:literal direct 0 Tr}%
\restoreCJKnode}
\newcommand\CJKfakebold[1]{%
\let\CJKsymbol\CJKfakeboldsymbol
\let\CJKpunctsymbol\CJKfakeboldpunctsymbol
#1%
\let\CJKsymbol\CJKoldsymbol
\let\CJKpunctsymbol\CJKoldpunctsymbol}
\begin{document}
% demo: fake-bold CJK next to normal weight (illustrative body)
\CJKfakebold{你好} 你好
\end{document}
Note that the non-CJK characters are unchanged in \CJKfakebold command. This is a feature by design. You should use \textbf or \bfseries as usual for English.
And note that the code above is not at all complete for xeCJK; punctuation kerning is wrong, for example. It should be implemented more carefully (maybe in a very different way) if we add this feature to the package.
• Interesting. I'm not making a professional document so the kerning is not so important. Is there a way to combine this with the AutoFakeBold setting or should I just adjust the \xeCJKembold value manually? – pg-robban May 26 '14 at 16:22
• @pg-robban: Well, I think you can set it manually. – Leo Liu May 26 '14 at 17:06
• Alright, I have one more question before I mark the question as answered: Suppose I have a macro which does something with bold text, currently I do this with \textbf{#1} on that part. Is there any way I can find out if the argument consists of only Chinese characters so that I can apply the correct bold function (\textbf vs \CJKfakebold) or do I need to make separate macros? – pg-robban May 26 '14 at 18:48
• @pg-robban: \textbf does not change Chinese characters, if you don't set BoldFont feature. And maybe you didn't notice that \CJKfakebold does not change Latin characters, as is shown above. You can use \CJKfakebold{\textbf{正确的粗体 Bold}} safely. – Leo Liu May 27 '14 at 10:45
• Aha, I got it now :) 麻烦您了 (sorry for the trouble), thank you so much. – pg-robban May 27 '14 at 13:51
This is a known XeTeX bug; I don’t think there is currently a way to fix it.
|
|
# Display a lot of variables in a 2D Diagram
I have a table with values like this:
---------------------------------
| Function | Dimension | Result |
---------------------------------
| 1 | 1 | 15% |
---------------------------------
| 1 | 2 | 10% |
---------------------------------
| 1 | 3 | 5% |
---------------------------------
| 1 | 4 | 10% |
---------------------------------
| 2 | 1 | 20% |
---------------------------------
| ... | ... | ... |
---------------------------------
| 24 | 4 | 3% |
---------------------------------
Function = {1,2,3,4,...,24}; Dimension = {1 (dark blue), 2 (light blue), 3 (green), 4 (red)}; Result
You could visualize it like this:
I like that you can compare the dimensions just by looking at the bars.
But I don't like that you cannot read off the result value for a dimension directly.
For Function=1 and Dimension=2, you could think the result is 25%, but it is 10%. So you would need to guess the length, or calculate the difference, on the result axis.
Do you know any other way to visualize all these values in one graph/diagram? I would also like to display 7 dimensions.
It is totally OK if it looks completely different from my diagram, but it shouldn't take too much space on an A4 page. My diagram uses 3/4 of the width and 1/5 of the height of an A4 page.
I would just use groups of (unstacked) bars for this:
\documentclass[border=5mm]{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.12}
% the opening \pgfplotstableread line appears to be missing from the original snippet
\pgfplotstableread{
Dim F1 F2 F3
1 15 10 5
2 20 5 7
3 10 8 2
4 5 20 10
}\datatable
\begin{document}
\begin{tikzpicture}
\begin{axis}[
ybar=\pgflinewidth,
enlarge x limits=0.25,
yticklabel=\pgfmathprintnumber{\tick}\,\%,
legend entries={Dim 1, Dim 2, Dim 3},
legend pos=outer north east
]
% completion of the truncated example (assumed): one bar series per data column
\addplot table [x=Dim, y=F1] {\datatable};
\addplot table [x=Dim, y=F2] {\datatable};
\addplot table [x=Dim, y=F3] {\datatable};
\end{axis}
\end{tikzpicture}
\end{document}
|
|
# Solution - Plot a Graph Showing Variation of De Broglie Wavelength Versus Accelerating Potential - de-Broglie Relation
Concept: de-Broglie Relation
#### Question
Plot a graph showing the variation of de Broglie wavelength λ versus 1/√V, where V is the accelerating potential, for two particles A and B carrying the same charge but different masses m1, m2 (m1 > m2). Which one of the two represents a particle of smaller mass, and why?
#### Solution
A particle of mass m and charge q accelerated through a potential V gains kinetic energy qV, so its momentum is p = √(2mqV) and its de Broglie wavelength is λ = h/√(2mqV). Plotted against 1/√V, λ is therefore a straight line through the origin with slope h/√(2mq). Since the slope is proportional to 1/√m, the line with the greater slope represents the particle of smaller mass, i.e. the particle with mass m2.
#### APPEARS IN
2015-2016 (March) Delhi Set 1
Question 7 | 2 marks
2015-2016 (March) Delhi Set 2
Question 10 | 2 marks
|
|
# Geometry
posted by .
In triangle ABC, A = (5, 9), B = (3, 1), and C = (11, 3). Write the equation of the altitude from A to BC. Use point-slope form ... y - y1 = m(x - x1).
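For reference, a worked solution: the slope of BC is (3 - 1)/(11 - 3) = 2/8 = 1/4, so the altitude from A is perpendicular to BC and has slope -4. Point-slope form through A(5, 9) gives y - 9 = -4(x - 5).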
## Similar Questions
1. ### Geometry HOMEWORK HELP ME PLEASE!
Draw the triangle with vertices D(-2,1), U(4,9), and E(10,1) On a Graph. 2. Find DU, UE, AND DE. (Distance) 3. Classify Triangle DUE by its sides. 4. Write the standard form of the equation of each line: DU, UE, and DE.Also, find the …
2. ### math
i have more than one question so if u no any of the answers please tell me 1.) write the point-slope form of the equation of the line with slope -2 passing through the point ( -5, -9). 2.) write the point-slope form of an equation …
3. ### math
triangle ABC has vertices A(3,4), B(4,-3) and C(-4,-1). a. draw a sketch of the triangle b. draw the altitude from vertex A c. find the slope of side BC which is =-1/4 d. find the slope of the altitude from A which is 4 e. find the …
4. ### Geometry altitude
a(-2,4) b (4,6) c (-4,-4) find the equation for the altitude from each vertex of triangle abc.
5. ### Analytic geometry - finding point by intersect lin
I have a triangle ABC. The slope of AB is -1/ab, the slope of AC is -1/ac, and the slope of BC is -1/ac. My question is, I have 3 lines: Altitude from A to BC, altitude from B to AC, and altitude from C to AB. I know their slope because …
6. ### math - pls check!!!
Write an equation in point-slope form for the line that passes through one of the following pairs of points (you may choose the pair you want to work with). Then, use the same set of points to write the equation in standard form and …
7. ### geometry
I would love for someone to check my work on the seven problems that I did. I would be eternally grateful. I am going to list my problems and answers. Even if you only want to check one of them, that would be great. 1. Write an equation …
8. ### Math
In an isosceles triangle, the perimeter is 8 more than 2 times one of the legs. If the perimeter is 28 in, find the length of the base. A. 16 in B. 18 in C. 10 in D. 8 in Given triangle ABC with A(-3, 2), B(-1, -4), and C(4, 1), write …
9. ### Geometry
Triangle $ABC$ is isosceles with point $A$ at the point $(2, 7)$, with point $B$ at $(-2, 0)$ and with point $C$ at $(3, -1).$ Triangle $ABC$ is reflected over $\overline{BC}$ to form $\triangle A'BC$. Triangle $A'BC$ is reflected …
10. ### Math
1. Use point-slope form to write the equation of a line that has a slope of 2/3 and passed through (-3,-1). Write your final equation in slope-intercept form. Is the answer y=2/3+-1 2. Write an equation of the line that passed through …
More Similar Questions
|
|
# Fundamental class of a surface
https://pi.math.cornell.edu/~hatcher/AT/AT.pdf
In Example 3.31 in Hatcher's Algebraic Topology (p. 241), there is a figure of a $$\Delta$$-complex structure of the closed orientable surface $$M$$ of genus $$g$$ ($$g=2$$ in the figure). Hatcher says that the $$2$$-cycle formed by the sum of all $$4g$$ $$2$$-simplices, with the signs indicated in the figure, represents a fundamental class $$[M]$$ of $$M$$. I want to understand this.
It suffices to show that $$[M]$$ corresponds to the generator of $$H_1(S^1)$$ under the following isomorphisms, for each $$x \in M$$:
$$H_2(M) \to H_2(M,M-x) \leftarrow H_2(U,U-x) \to H_2(\Bbb R^2,\Bbb R^2-0)\to H_1(\Bbb R^2-0)\to H_1(S^1)$$
where $$U$$ is an open neighborhood of $$x$$ in $$M$$ homeomorphic to $$\Bbb R^2$$, and the second isomorphism is excision.
It is easy to examine the maps except for the second one. The generator of $$H_1(S^1)$$ (the loop wrapping once around the circle) corresponds to the generator of $$H_2(U,U-x)$$, which is represented by, say, a relative cycle $$\sigma: \Delta^2 \to M$$ with $$x \in \text{int} (\sigma(\Delta^2))$$. But how can I know that $$[M]$$ corresponds to $$[\sigma]$$ under the second isomorphism?
This is really just a matter of understanding the actual form of the excision isomorphism.
But we can pick $$U$$, $$\sigma$$, and $$x$$ somewhat carefully, to make this easier.
Instead of working with the given $$\Delta$$-complex structure, let's work with its 2nd barycentric subdivision, which is guaranteed to be an actual simplicial complex whose individual simplices are actually embedded. You probably know that a homology class is invariant under subdivision, so the two classes formed by summing the simplices of the 2nd barycentric subdivision and by summing the 2-simplices of the original $$\Delta$$-complex structure are equal. That gives us the freedom to work with the 2nd barycentric subdivision.
Choose $$\sigma$$ to be a 2-simplex of the 2nd barycentric subdivision. And now choose $$U$$ to be a regular neighborhood of $$\sigma$$, chosen so small that $$\sigma$$ is the unique 2-simplex of the 2nd barycentric subdivision that is contained in $$U$$; because it's a regular neighborhood of a closed disc in a manifold, $$U$$ is homeomorphic to $$\mathbb R^2$$.
Finally, pick $$x$$ to be a point in the interior of $$\sigma$$.
What we need to show is that if $$c$$ is the sum of all simplices of the 2nd barycentric subdivision then the class $$[c] \in H_2(M,M-x)$$ is equal to the image, under excision, of the class $$[\sigma] \in H_2(U,U-x)$$. And this is really just a matter of understanding a concrete description of the excision homomorphism.
Excision can be described like this:
If you have a $$k$$-cycle $$c = \sum a_i \tau_i$$ of $$(M,M-x)$$, and if each $$\tau_i$$ is contained in either $$M-x$$ or $$U$$, and if $$c'$$ is obtained from $$c$$ by discarding all terms $$a_i\tau_i$$ such that $$\tau_i$$ is contained in $$M-x$$, then $$c'$$ is a $$k$$-cycle of $$(U,U-x)$$, and the excision map $$H_k(U,U-x) \to H_k(M,M-x)$$ takes $$[c']$$ to $$[c]$$.
Now let's apply this. Certainly the given $$c$$ is a cycle of $$M$$, and it is in fact a fundamental cycle, representing the fundamental class $$[M]$$. So $$c$$ is certainly also a cycle of $$(M,M-x)$$. Applying the above description of excision, the terms that we remove from $$c$$ are all of the terms except for $$\sigma$$, which is the only term contained in $$U$$. So all that's left is $$c'=\sigma$$, and we're done.
• Thanks, I clearly see that $[\sigma]$ and $[c]$ are the same in $H_2(M,M-x)$. One last question: is there a reason that you specified the "2nd" subdivision? Is it just for the guarantee of a regular neighborhood? Jan 15 '20 at 2:14
• A regular neighborhood always exists. I needed the $\sigma$ to be embedded to guarantee the existence of a regular neighborhood homeomorphic to $\mathbb R^2$. Jan 15 '20 at 3:57
• Yes, that's what I meant, thanks Jan 15 '20 at 4:32
|
|
# Channel (CP map)
== Introduction ==
Any device taking classical or quantum systems of a certain type as input and (possibly different) classical or quantum systems as output is a channel. This definition covers any processing step in information theory, from preparations to free and controlled time evolution to measurements. Channels are thus among the central concepts of classical and quantum information science.
Classical channels are those that can transmit or store only classical information, like electrical wires or the Royal Mail. Quantum channels can transmit both classical and quantum information. Physical realizations of quantum channels include everything from optical fibers or coupled spin chains for quantum communication, to shielded atoms in optical traps for quantum storage. Classical (resp. quantum) capacity measures the amount of classical (quantum) information that can be sent undistorted through the channel.
### Definition and Properties
Mathematically, a channel is represented by a map T : S(H1) → S(H2) mapping states (i.e. density operators) on some Hilbert space H1 to states on some (possibly different) Hilbert space H2. Classical channels are included in this setup if we interpret classical functions f as diagonal matrices, f ≡ ∑x f(x) ∣x⟩⟨x∣.
Which such maps T qualify as channels? The channel should respect convex mixtures, and hence be linear. It should preserve the normalization of states, and thus be trace-preserving (TP): $tr(T(\varrho)) = tr(\varrho)$. We also require that T map positive operators to positive operators, so that $T(\varrho)$ is a valid density operator for any input density operator $\varrho$. Finally, these properties should remain true if the operation is only applied to part of a larger system. So we require that T ⊗ idn be positive for all n ∈ N, where idn denotes the identity operation on the n × n matrices. A map with this property is usually called completely positive.
Obviously, complete positivity of T implies positivity. The converse holds if the input or output system is classical. However, in the quantum case there are maps which are positive, but not completely positive. The matrix transpose operation $\varrho \mapsto \varrho^{t}$ is a prominent example of such an unphysical operation.
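As a quick numerical check of this claim (a standard textbook computation sketched here in Python with numpy; not part of the original article), applying the transpose to one subsystem of a maximally entangled two-qubit state produces a negative eigenvalue, so transpose ⊗ id is not positive:
import numpy as np
# |phi+> = (|00> + |11>)/sqrt(2), a valid (positive) two-qubit density matrix
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)
# Partial transpose on the first qubit: swap the row/column indices of the
# first tensor factor only.
pt = rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
print(np.linalg.eigvalsh(pt))  # [-0.5, 0.5, 0.5, 0.5] -- not a valid state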
In summary, a channel is a completely positive and trace-preserving (CPTP) map T : S(H1) → S(H2) between state spaces S(Hi) associated with physical systems.
The above approach to quantum channels is axiomatic, i.e. based on a set of postulates, which are required by the statistical interpretation of quantum mechanics. An alternative way to characterize the possible channels is constructive: we allow just those channels which can be built from the basic operations of (1) tensoring with a second system in a specified state, (2) unitary transformation, and (3) reduction to a subsystem.
Luckily, the two approaches agree: the above three types of maps are CPTP, and by Stinespring's dilation theorem (known in the community under the jargon Church of the larger Hilbert space), every completely positive map can be decomposed into a product of the above three operations.
### Heisenberg vs. Schrödinger
We have so far described channels as operations on states: when the system is initially in the state $\varrho \in S(\mathcal{H}_{1})$ and sent through the channel T : S(H1) → S(H2), the expectation value of the measurement of the observable A ∈ B(H2) at the output side of the channel is $tr(T(\varrho) A)$. Since the dynamics is carried by the states (while the observables are static), this approach is usually called the Schrödinger representation of the channel. Alternatively, we may describe the dynamics in the Heisenberg representation as a transformation on observables. The corresponding channel T * : B(H2) → B(H1) is defined in terms of the duality relation
$$tr(T_{}(\varrho) \, A) = tr(\varrho \, T^{*}(A))$$
for all states $\varrho \in S (\mathcal{H}_{1})$ and observables A ∈ B(H2). The duality relation guarantees that all expectation values coincide, so both descriptions are completely equivalent. The Schrödinger-picture map T is linear, completely positive and trace-preserving iff its Heisenberg dual T * is linear, completely positive and unit-preserving (or unital), i.e., T * (1) = 1.
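To make the duality concrete, here is a minimal numeric sketch in Python with numpy (not from this article), using the Kraus operators of the single-qubit depolarizing channel, a standard example; it checks trace preservation of T, unitality of T*, and the duality relation above:
import numpy as np
p = 0.3  # illustrative error rate
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
# Kraus operators of the depolarizing channel
kraus = [np.sqrt(1 - 3 * p / 4) * I] + [np.sqrt(p / 4) * K for K in (X, Y, Z)]
def T(rho):      # Schroedinger picture: acts on states
    return sum(K @ rho @ K.conj().T for K in kraus)
def T_dual(A):   # Heisenberg picture: acts on observables
    return sum(K.conj().T @ A @ K for K in kraus)
rho = np.array([[0.75, 0.2 - 0.1j], [0.2 + 0.1j, 0.25]])  # a density matrix
A = Z                                                     # an observable
assert np.isclose(np.trace(T(rho)), np.trace(rho))        # T is trace-preserving
assert np.allclose(T_dual(I), I)                          # T* is unital
assert np.isclose(np.trace(T(rho) @ A), np.trace(rho @ T_dual(A)))  # duality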
### Special cases
States are channels with one-dimensional input space C. Measurements are channels M : S(H1) → C with classical range algebra C, while preparations are channels P : C → S(H2) with classical domain (cf. Measurements and preparations).
|
|
# Open Geosciences
### formerly Central European Journal of Geosciences
Editor-in-Chief: Jankowski, Piotr
Open Access | Online ISSN 2391-5447
Volume 9, Issue 1
# “Urban geosites” as an alternative geotourism destination - evidence from Belgrade
Marko D. Petrović (corresponding author)
• Research Associate, Geographical Institute “Jovan Cvijić” SASA, Djure Jakšića 9, 11000 Belgrade, Serbia
• South Ural State University, Institute of Sports, Tourism and Service, 76 Lenin Ave., 454080 Chelyabinsk, Russia
Dobrila M. Lukić
• Principal Research Fellow, Geographical Institute “Jovan Cvijić” SASA, Djure Jakšića 9, 11000 Belgrade, Serbia
• South Ural State University, Institute of Sports, Tourism and Service, 76 Lenin Ave., 454080 Chelyabinsk, Russia
Aleksandra Vujko
Tamara Gajić
Darko Vuković
• Research Associate, Geographical Institute “Jovan Cvijić” SASA, Djure Jakšića 9, 11000 Belgrade, Serbia
• Tomsk Polytechnic University, Institute of Social-humanitarian Technologies, 30 Lenin Ave, 634050 Tomsk, Russia
Published Online: 2017-10-05 | DOI: https://doi.org/10.1515/geo-2017-0034
## Abstract
The research aimed at testing the combination of GAM/M-GAM models (Geosite Assessment Model/Modified Geosite Assessment Model) on selected geoheritage sites (geosites) of great scientific significance and geotourism potential. Testing was done on eight sites in the City of Belgrade (Serbian capital), an area which has significant potential for geotourism development in typical urban conditions. For this purpose, an assessment scale was used to highlight differences and similarities between main and additional values of the observed geosites. The modification of the original GAM model is based on the inclusion of visitors’ opinions regarding the importance of indicators in the assessment process. The assessment was done by using both GAM/M-GAM and its results were analyzed and compared afterwards. The analysis has successfully identified locations and features of geosites that require action for maintaining or increasing their overall value and function. Moreover, the principal aim of the paper was to analyze the relevance of each sub-indicator for the assessment process by introducing the importance factor in the modified model. The authors were able to point out those values of principal importance for geosite visitors, as well as to attach a different relevance to sub-indicators, which can influence the position of the geosites in the GAM/M-GAM matrices.
## 1 Introduction
Geodiversity and biodiversity are fundamental components of natural values of every country. With the increased negative anthropogenic influence on nature, its richness and diversity are becoming increasingly vulnerable and there is a rising need to preserve it for future generations [1]. In recent years, visitors’ interest in non-living natural resources, such as geoheritage sites, has increased worldwide [2]. This variety of natural resources is defined by Gray [3] as “the range of soil, geomorphological and geological features”. The components of geodiversity that have scientific, educational and aesthetic significance are identified as geoheritage [4] and the importance of its conservation has been emphasized by many authors [3, 5, 6]. The term geotourism is linked to visits to geoheritage sites and generally to geodiversity [7–9], but only as a separate, specialized type of tourism, with geosites in its focus [6, 10]. Therefore, geoheritage has become an important part of the tourism offer in many regions and countries, especially those which have not yet solidified their positions in the tourism market. Visitors are becoming more sophisticated in selecting geotourism destinations they want to visit [11].
In the City of Belgrade, as well as in the rest of Serbia, there are resources which have not been adequately exploited for the development of geotourism [12–14] and therefore, in global terms, the city and the country represent a very small tourist market [15, 16]. Given that the importance of geodiversity on the territory of Belgrade has been completely minimized and neglected in relation to biodiversity, the aim of this study is to carry out a quantitative and qualitative analysis of geoheritage sites and assessment of their values to promote the development of geotourism as a complementary part of the tourism offer in Belgrade. It is important to note that geoheritage sites, as specially adapted locations, may contribute to sustainable development of the area both in economic and environmental terms and that as such, they can successfully be positioned in the tourism market [11].
Geoheritage sites in various European countries used to be placed under protection based on different criteria, which is why the European Association for the Conservation of the Geological Heritage - Pro-GEO was established in 1995 [17, 18]. The first joint task was to make a European list of geoheritage. All member states have been divided into working groups at the regional level. Serbia belongs to the Pro-GEO working group for Southeastern Europe, the so-called Pro-GEO WG-1. In 1995, the National Council for Serbian Geoheritage was formed, which then established a unique policy of systematized conservation of geoheritage sites. In 1996, the National Council launched a project called the Inventory of Serbian Geoheritage Sites. Its purpose was to select important geoheritage sites for public attention and for conservation against devastation. So far, 651 geoheritage sites have been identified, while 80 of them have already been protected [19]. Current trends in the study and protection of geodiversity elements of an area, education of the population and their presentation to the public as an integral part of the tourist offer show a need for carrying out new research into Belgrade geoheritage sites, as a part of Serbian geoheritage. The literature on selection, registration, promotion and evaluation of Belgrade geoheritage is still quite scarce. The geological monuments located in the territory of Belgrade were first protected in 1968. However, the public and visitors have expressed a growing interest in geoheritage sites in Belgrade over the last ten years [20, 21]. Researchers such as Banjac and Rundić [22] were at that time focused on the topic of geotourism, while Belij [23], Mijović and Stefanović [19] and Marković et al. [24–26] wrote about geodiversity and geoheritage in the city and wider area.
All these studies emphasized the need to identify geoheritage sites located in Belgrade and evaluate them in an adequate manner, as well as present them to the public as part of the relevant natural heritage in the best possible way. For these reasons, papers that deal with systematization, presentation and popularization of these sites are of particular relevance. In this respect, we analyzed and compared the current state and tourism potential of sites in Belgrade by using the combination of the GAM/M-GAM models for assessment of geosites. The aim of the paper was to show the relevance of every sub-indicator for the entire assessment process for visitors by comparing them with experts’ opinions. The main goal of the research was to show, by applying the combination of the GAM/M-GAM models to selected geosites, which sub-indicators most influence visitors’ opinion when giving preference to one geosite over another. Afterwards, we presented the results of the assessment for both segments to see how the difference in importance for each sub-indicator has affected the research results.
## 2 Study area and description of the assessed geosites
The City of Belgrade lies on the slope between the alluvial plains of the great European rivers, the Danube and the Sava. The geographic coordinates of the observed area are between 44° 39’ and 44° 49’ north latitude and 20° 17’ and 20° 37’ east longitude. There are 37 protected natural resources in the urban area of the city, but most of them are biodiversity sites. The geological diversity of the terrain, which is composed mainly of sedimentary and magmatic rocks of Jurassic-Cretaceous, Tertiary and Quaternary age [27], is very rich. Also, not all sites have been explored to the same extent. Geomorphological and hydrological sites are not numerous nor do they differ very much in appearance. Due to this fact, only eight main geoheritage sites have been identified in the observed territory, which are officially listed in the Inventory of Serbian Geoheritage Sites. The sites are marked as G1-G8 (Figure 1 and Figure 2). Although not numerous, these sites can be consolidated into a unique tourist tour, which could include similar sites in other parts of Serbia and neighboring countries.
Figure 1: The position of the analyzed geoheritage sites in Belgrade.
Figure 2: Photos of the presented G1-G8 sites (Photos: D. Lukić, Lj. Rundić, M. Milivojević, A. Spalević).
G1: Straževica profile represents a Lower Cretaceous section with the oldest preserved rock formations on the territory of Belgrade discovered on the Straževica hill nearby Rakovica Monastery. According to Rundić [27], the site is well-known for its Jurassic or the so-called Straževički limestones. The fossils that have been found in this area belong (in most cases) to brachiopods, but there are also gastropods, corals, crinoids and some algae. They are in contact with the Lower Cretaceous, Aptian marls. The limestones have gradually transformed into clay-marl series of Lower Cretaceous.
G2: Mašin Majdan-Topčider is a 20-m-high section of reef limestones, a Cretaceous rock complex from the Senonian Age made of compact bluish rocks with calcite veins. Some of the beds are up to 1 m wide. There are also lenses of shales, marls and sandstones. The fossils of bivalves, corals, gastropods, foraminifera and pachiodont shells can be found in the rocks. At the top of the section there are layers of marsh loess, a rare phenomenon in loess formation. This profile has been protected since 1969 [21].
G3: Profile at the Kalemegdan Fortress, located at the foot of the Pobednik (Eng. Victor) monument, shows large sections through Badenian reefs with characteristic fauna and shallow-water, coastal, and reef deposits of the former Pannonian Sea. According to Rundić [27], first fossil discoveries at this section were made in 1886. Here is the core of the Kalemegdan anticline, which is indicated by a series of layers positioned diagonally between the walls of the fortress. In the lower part of the section there are conglomerates, quartz sands moving into sandy-sandstone deposits. The site was for the first time placed under protection in 1968, as the first protected natural monument in the city.
G4: Abandoned quarry in Barajevo represents a shift from the Lower Sarmatian to the Middle Sarmatian period, where the development of the Middle Sarmatian on the territory of the middle and western part of Serbia can be seen. As Rundić et al. [21] stated, the section is more than 200 m wide, and is composed of sandy limestones, limestone consisting of shell fragments and limestone with mollusk fauna, with inter-layers of siltstones and shales. There are also foraminifera, ostracods and bryozoans. This is a typical example of merokarst, with dry or flooded sinkholes.
G5: Karagača valley is a globally relevant example of the coastal, clastic development of the Upper Pannonia. It is located on the right bank of the Karagača stream, near Vrčin village, at a length of 10 m and a height of 7 m. It consists of coarse sand grains, microconglomerates with intercalations of sandy clay and yellow sands that lie transgressive across the serpentinites. The proximity of older volcanic rocks and strong hydrothermal activity have had a favorable effect on the water chemistry and encouraged development of rich endemic fauna [28, 29].
G6: The artesian well in Ovča was discovered on the left bank of the Danube in 1939. In 1985, a 162-m-deep exploration and exploitation well was built here. According to Rundić et al. [21], aquifers with water under pressure are located at a depth of 158 m. The base is composed of Badenian-Sarmatian marls and limestones, with deposits of Pliocene clay over them. The top layer is composed of Quaternary sands and coarse gravel grains.
G7: Kapela loess profile represents one of the most important European loess sections, situated along steep cliffs on the right bank of the Danube. The profile, located near Batajnica town, has representative loess and paleopedological sections and forms steep scarps towards the Danube. The Kapela (lit. Chapel) section comprises Late and Middle Pleistocene loess and fossil soils about 40 m thick. Within this section, there are also tuff interlayers indicating volcanic activity, which increases the chronostratigraphic value of this section [27, 30].
G8: The lake in Sremčica (formerly known as Rakina Bara) is located at the bottom of an approximately 300 m long and 150 m wide sinkhole. The emergence of the lake is linked to the processes related to Belgrade merokarst, through the Sarmatian limestone. The area is characterized by alluvial sinkholes, small depressions and caves, ponds, hanging valleys and small springs. The dimensions of this site vary depending on the researcher and the time of measurement; more recent measurements indicate that the lake is 170 m long and 110 m wide [31].
All these eight geosites will be further evaluated by using GAM/M-GAM methods. The overall assessments will be conducted for each site (G1-G8) respectively.
## 3 Materials and methods
The methodology is based on the modified version of the geosite assessment model (M-GAM) proposed by Tomić and Božić [32] and tested by Różycka and Migoń [33]. The M-GAM represents a modification of the original GAM model created by Vujičić et al. [34] and tested by Petrović et al. [6]. Both versions of the models were employed in this survey in order to compare the results of both respondents’ groups. The GAM/M-GAM model was used through the initial workflow presented in Figure 3.
Figure 3: The GAM/M-GAM process flowchart.
While the GAM model involves grades given by experts, M-GAM includes not only expert opinions, but also the views of visitors regarding the importance of each indicator in the assessment process. The GAM model contains an analysis of two key indicators: the main values (MV) and additional values (AV) of geoheritage sites, comprising a total of 27 sub-indicators (Table 1). The main values have 12 sub-indicators, while additional values have 15 sub-indicators. Their numerical values range from 0 to 1, in the following order: 0.00, 0.25, 0.50, 0.75 and 1.00. Sub-indicator grades and their explanations are shown in detail in Table 2.
Table 1: Structure of the (original) GAM model.
Table 2: Numerical GAM indicators and their description.
The importance factor (1 ≥ Im ≥ 0), where each respondent was asked to rate all presented sub-indicators, was included in the survey. The importance factor can be a very useful examination tool because it gives visitors the chance to express their attitude to every single sub-indicator in the model [11]. Moreover, in the case of Belgrade geosites, the importance factor has emphasized the relevance of visitors having an opportunity to choose which city’s non-cultural attractions they are going to visit and their attitudes thereto [35, 36]. It was necessary to include visitors in the survey, mainly because experts can cover only the marketable aspects of geosites. Moreover, the experts carried out their evaluation from the scientific perspective, which, as research has shown, is usually less important to the average visitor. Nevertheless, expert opinions combined with those of regular visitors provide more objective and accurate results. Visitors can rate the sub-indicators in the same manner as experts rate them for both groups of values (by giving them exact numerical values from 0.00 to 1.00). The main values (MV) consist of the following indicators:
1. Scientific and educational values - VSE (rarity, representativeness, knowledge of geo-scientific issues and interpretation)
2. Aesthetic values - VSA (viewpoints, surface, surrounding landscape and nature and environmental setting) and
3. Protection values - VPr (current condition, protection level, vulnerability and acceptable number of visitors).
The MV is calculated as a sum of the presented sub-indicators:
$$MV = VSE + VSA + VPr$$ (1)
On the other hand, additional values are composed of:
1. Functional values - VFn (accessibility, additional natural values, additional anthropogenic values, vicinity of urban centers and important road networks and additional functional values) and
2. Tourism values - VTr (promotion, number of organized visits, number of visitors, vicinity of visitor centers, interpretive panels, tourism infrastructure, accommodation, restaurant service and quality tour guide service) [6, 34].
The AV is calculated as:
$$AV = VFn + VTr$$ (2)
By adding together MV and AV, we get the following equation:
$$GAM = MV + AV$$ (3)
To reach the M-GAM, it is necessary to add in the importance factor (Im), which is calculated as:
$$Im = \frac{1}{K}\sum_{k=1}^{K} Iv_k, \qquad 0 \le Im \le 1$$ (4)
In this equation, Iv_k represents the importance score given by the k-th visitor to the sub-indicator, and K is the total number of visitors, so the Im parameter can take any value in the range from 0.00 to 1.00. Grades given by experts and visitors for each sub-indicator are shown in detail in Table 3. Finally, applying the importance factor to the expert grades, we arrive at the following equation:
$$M\text{-}GAM = Im \times GAM = Im \times (MV + AV)$$ (5)
(with Im applied to each sub-indicator grade individually).
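As an illustration of the arithmetic in Eqs. (1)-(5), here is a minimal sketch in Python (not from the paper; the grades and ratings below are hypothetical):
# Expert grades (0.00-1.00) for a few sub-indicators of one geosite
expert_grade = {"VSE1": 0.75, "VSA1": 1.00, "VPr2": 0.50}
# Each of K visitors rates the importance of every sub-indicator;
# Im is the mean rating per sub-indicator (Eq. 4)
visitor_ratings = {
    "VSE1": [1.00, 0.75, 0.50],
    "VSA1": [1.00, 1.00, 1.00],
    "VPr2": [0.50, 0.25, 0.75],
}
Im = {k: sum(v) / len(v) for k, v in visitor_ratings.items()}
gam = sum(expert_grade.values())                         # GAM: expert grades only
m_gam = sum(Im[k] * g for k, g in expert_grade.items())  # M-GAM: Im-weighted grades
print(gam, m_gam)  # M-GAM is always <= GAM, as noted below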
Table 3: Scores given by experts and visitors for every sub-indicator.
As it can be seen from the M-GAM equation, the importance factor (Im) is multiplied by the grade given by experts. Therefore, a more realistic assessment is carried out by using the M-GAM. This can be concluded from the fact that if the importance of a sub-indicator is graded 0.50 by visitors, the final mark cannot be 0.75 or 1.00, but instead, it should be lower (0.50) if visitors’ attitudes are also considered. In this respect, the values of M-GAM indicators are constantly equal to or lower than GAM values. The M-GAM model aims to show the status of main and additional tourist values of geoheritage sites which have not yet reached their maximum potential. This gives us a realistic picture of geoheritage, based on which it is possible to plan and promote tourism activities for the analyzed sites [34]. However, sub-indicators with lower grades are not so important for the development of tourism [32]. By using the M-GAM model, the above-mentioned authors have tried to shift the main focus from the opinions of experts to those of visitors regarding the significance of tourism values of geosites. This means that the future development of tourism at the observed Belgrade geoheritage sites should be promoted by improving the values whose potentials have not yet been realized, but which are important to visitors.
A total of 216 visitors and 63 experts filled out the questionnaire properly. The questionnaire conducted in the research consisted of 27 questions/sub-indicators and comprised two complementary parts. The first part involves items which concerned main values, while the second part includes additional values of the GAM model. Every respondent was asked to rate the importance factor (Im) of every sub-indicator on a five-point Likert-type scale by rating it from zero to one (0 = not at all important; 0.25 = not very important; 0.50 = neutral; 0.75 = somewhat important; 1.00 = very important). A survey was conducted among people who visited Belgrade between April and September of 2016, as well as among experts in geotourism and physical geography (geomorphology and geology) from Serbia and abroad. Sampling was convenience-based, since the subjects were only visitors and experts willing to participate in the study. At the beginning, we informed the respondents about the subject of examination. Within our study, we surveyed respondents of both genders, different educational levels, personal incomes, many kinds of occupations, etc.
## 4 Results and discussion
The research results were obtained using a sum of main and additional values and their mean values. Summary scores, given by experts and visitors for individual sites, are shown in Table 3.
Results for the five main comparison criteria shown in Table 3 demonstrate that the Karagača valley near Vrčin (G5) has the highest scientific and educational value according to the experts’ opinion. The same site has the highest score according to the total value (visitors’ opinion included). This can be justified by the fact that G5 represents a globally relevant example of the coastal, clastic development of the Upper Pannonia in this part of the Pannonian Basin. It has thus received the highest scores for this criterion among all observed geosites. Due to the findings recorded at the G5 section, the international scientific community has decided to call the younger stage of development of the Pannonia in the Central Paratethys “Serbian”. In addition, some representatives of molluscs and ostracods were identified here for the first time and named after this section [27–29].
When considering the scenic and aesthetic values, attitudes of experts, as well as those of visitors are quite the same. Both groups of respondents gave highest scores to the profile at Kalemegdan Fortress (G3). The scores can be explained by the micro-location of the geosite, which is situated at the foot of the Victor monument, a famous Belgrade landmark and one of the most popular scenic symbols of the city. The profile faces the confluence of the Sava and the Danube and has the view of the vast Pannonian plain. Below this landmark, the site shows representative sections of Badenian reefs, with characteristic shallow-water fauna and coastal and reef deposits of the former Pannonian Sea [27].
As regards protection values, overall scores are significantly distinctive due to the fact that the experts gave the highest scores to the profile at the Kalemegdan Fortress (G3), in the same way as visitors marked this site as the most protected one. This profile had a long tradition of protection (since 1968), a low level of vulnerability (not threatened significantly by any direct natural or anthropogenic causes) and is presently in good condition, which was enough for both groups to give it the highest numerical scores.
When analyzing the importance of sub-indicators in respect of functional values, it can be noticed that the highest scores, as evaluated by all respondents, were also given to the profile at the Kalemegdan Fortress (G3). The experts, as well as the visitors, assessed that the level of the accessibility of this site, its additional attractions, proximity of urban centers and traffic networks was highest. This can easily be explained by the obvious fact that this particular geosite is located in the center of a typical urban setting (the territory of the City of Belgrade has approximately 1.3 million inhabitants according to the national census data [37]), with high level of urban infrastructural development and numerous cultural and natural attractions in its closest surroundings. Similar applies to other analyzed geosites.
As regards the evaluation of tourism values, the situation is not so different. Apparently, further results have shown that both groups share a positive attitude towards the same geosite located in downtown Belgrade, namely the profile at the Kalemegdan Fortress (G3). In this regard, the site has achieved the highest score of 8.50/7.68. However, results for the total value indicate that its promotion, organized visits to the site and number of visitors on the site are “the weakest points” and that these segments are its main disadvantages. On the other hand, the experts evaluated promotion with a lower score (0.50), i.e. they shared the visitors’ opinion that the quantity of promotional material and level of its utilization were not as good as they could be. The position of this geoheritage site on the tourism market could be improved by more effective collaboration with tourism organizations, as well as by making a more effective international offer [38] that could improve and develop conservation and promote the geosite in a much wider region.
The paragraphs below provide a more detailed explanation and present the role of the Im parameter. In the further analyses, Tables 4 and 5 and Figure 4 show the findings of the assessment obtained by using both GAM/M-GAM models.
Figure 4: Positions of observed geoheritage sites in Belgrade, according to the GAM/M-GAM matrices.
Table 4: Overall assessments of the observed geosites by using GAM.
Table 5: Overall assessments of the observed geosites by using M-GAM.
Within the GAM model, the average of main values was 6.84 and that of additional values 8.22 (Table 4). Three sites had the largest sum of main values (≥ 8), namely the profile at the Kalemegdan Fortress (MV=9.75), the Karagača stream valley near Vrčin (MV=8.25) and the Kapela loess profile near Batajnica (MV=8.25). Moreover, four sites had the largest sum of additional values (≥ 8), namely the profile at the Kalemegdan Fortress (AV=14.50), Mašin Majdan-Topčider (AV=10.50), the Kapela loess profile near Batajnica (AV=9.25) and the artesian well in Ovča (AV=8.50). In both cases, the G3 site has scored highest within the GAM model, which points to a conclusion that this site represents the most significant part of the geotourism offer in Belgrade. On the other hand, the Straževica profile (G1) scored lowest in the case of main values (MV=4.50), while the lake in Sremčica (G8) scored lowest in the case of additional values (AV=3.00). The reason for these scores is that both experts and visitors who were included in the GAM assessment assessed some sub-indicators as less relevant.
Results from Table 5 indicate that the average of main values within the M-GAM model was MV=5.32, while the average for additional values was AV=7.14. According to the presented data, none of the observed sites had the largest sum of main values (≥ 8), while only two sites had the largest sum of additional values (≥ 8), namely the profile at the Kalemegdan Fortress (AV=12.78) and Mašin Majdan-Topčider (AV=9.07). On the other hand, the lowest sum of main values was characteristic of the Straževica section (MV=3.55), while the lake in Sremčica obviously had the lowest sum of additional values (AV=2.70). The same pattern can be noticed in the GAM results, and the reason is that some sub-indicators were also assessed as less relevant by both groups of respondents involved in the M-GAM.
When comparing the position of the observed geoheritage sites in the GAM/M-GAM matrices (Figure 4), it is obvious that the distinctions in sites’ positions indicate different results of the assessment done by experts exclusively, as opposed to by visitors and experts jointly. Depending on the determined values obtained by the assessment, every geosite could be put in one of the fields in the matrix, divided into nine zones. The sum of MV and AV scores for every individual geosite is presented via the X and Y axes respectively. Both matrices are divided into Z(i, j) fields, where i, j = 1, 2, 3, according to the grades the sites have received during the evaluation process. The fields are created by grid lines, which show the exact level of GAM/M-GAM indicators, according to the positions of the MV and AV scores.
The findings presented in Figure 4 indicate that five of the eight assessed geosites have changed their Z(i, j) field position in the M-GAM matrix in comparison to the (primary) GAM matrix. With the exceptions of the abandoned quarry in Barajevo (G4), the artesian well in Ovča (G6) and the lake in Sremčica (G8), every other site has changed its position. This can be explained by the fact that their AVs were quite low to begin with, so when multiplied by the importance factor, they did not change significantly. On the other hand, changes in positions can be noticed in the cases of the Straževica profile (G1), Mašin Majdan-Topčider (G2), the profile at the Kalemegdan Fortress (G3), the Karagača valley (G5) and the Kapela loess profile (G7). As regards the position of G1, it can be noticed that both MV and AV for this geosite have lower positions and that it has moved from field Z22, representing moderate MV and moderate AV, to field Z12, representing low MV and moderate AV. The position of G2 is similar since both MV and AV are lower in the M-GAM matrix. G2 has moved from field Z23, representing moderate MV and high AV, to field Z22, representing moderate MV and AV. Together with the previous two cases, G3 also has lower MV and AV, as it has moved from field Z33, with the highest values of both MV and AV, to field Z23, still representing high AV but a moderate MV score. Contrary to the other presented geosites, G4's position has moved lower, but only very slightly and within the same field Z22. Almost the same holds for the position of G6, with only a gentle modification (both MV and AV decreased a little). These results are due to the fact that their MVs were not much lower to begin with, so when multiplied by the importance factor, they have not changed significantly. On the other hand, the position of G5 has changed considerably, from field Z32, representing high MV but moderate AV, to field Z22, representing moderate MV and moderate AV. A very similar situation is evident in the case of G7, which has moved from field Z32, representing high MV but moderate AV, to field Z22, representing moderate MV and AV. Finally, the position of G8 is almost equal in both matrices (slightly changing position within Z21, representing moderate MV and low AV). In general, all observed sites have moved to lower positions – mainly because of the decline in additional values, but also because of shifts in the main values. The presented distinctions can be explained by the fact that additional values are generally less important to visitors, which has influenced the lower position of certain geosites in the assessment. In terms of main values, we can see that they are almost equally important to both groups of respondents, so they have not influenced the position of geosites in any radical or major way.
## 5 Conclusion
The combination of the GAM/M-GAM models can yield more precise and objective results of assessment of main and additional values of the observed Belgrade geosites. Moreover, a clearer and more realistic picture is thus obtained, which can be rather useful for planning and improvement of visitors’ activities at other geosites in the country. This is because not all indicators can have the same weight, as presented in the original GAM, since visitors can attach different levels of relevance to different (sub-)indicators when choosing whether to visit a certain geosite or not. Therefore, this is a very relevant issue that must be considered in the overall assessment of geosites.
By summing the final findings for all analyzed Belgrade geosites, we can draw a conclusion that when assessing a geosite, experts appreciated values that were considerably different from those that were relevant to visitors. Consequently, the results that included the attitudes of visitors were markedly different. The scientific and educational values (VSE) for all eight geosites seemed to be important to geosites’ visitors when choosing their destinations. This specifically referred to sub-indicators such as rarity (VSE1), representativeness (VSE2) and knowledge of scientific issues related to geosciences (VSE3) because the difference between results was not so extreme. However, the sub-indicator of interpretation (VSE4) proved to be a not very important factor for them, as we can see from the fact that they gave it a score of 0.68, which has changed the results significantly.
When analyzing the second group of sub-indicators (VSA), we can conclude how the importance factor for some of them, as assessed by visitors, can considerably change the assessment’s results. For example, the viewpoints sub-indicator (VSA1) was rated by visitors as the factor of greatest importance (Im = 1.00), which means that it played a significant role in visitors’ opting for a place to visit. In addition, when marks given by experts are multiplied by the importance factor (Im) rated by visitors, we get similar results, so there is no significant change. Together with this, the surface area sub-indicator (VSA2) got a very high score (Im = 0.92), which means that visitors appreciated greatly the whole micro-surface of the observed site. However, sub-indicators such as surrounding landscape and nature (VSA3) and the environmental setting of sites (VSA4), which were chosen by experts as important factors to be included in the assessment, did not seem to be of the same relevance for the visitors (in both cases, Im = 0.57). It can be seen how this has affected the final findings, since the grades given by experts were multiplied by the importance factor as assessed by the visitors, which produced lower results.
When it comes to the protection value (VPr), importance values of current condition (VPr1, Im=1.00), vulnerability (VPr3, Im=0.75) and acceptable number of visitors (VPr4, Im=0.92) scored very high. These high scores given by the experts should not be taken as completely realistic for the assessment. The reason is that the visitors attached only minor significance to some of these sub-indicators, which did not have any real effect on their decision to visit a site. This especially refers to the protection level sub-indicator (VPr2), which was highly rated by experts, but the visitors did not rate it as significant at all (Im=0.54), so the final score should be much lower.
Additionally, not all functional values, as was the case with the main values, were of the same importance to visitors of the observed geosites. Here, once again, we can see how this fact can fundamentally change the final findings. For example, even though there are plenty of additional natural values (VFn2) in the near surroundings of these geosites (as can be seen from the highest grade given by the experts), this sub-indicator seemed to be less important to visitors (Im=0.74) in comparison with some other sub-indicators, such as the vicinity of important road networks (VFn5, Im=1.00) or the additional functional values (VFn6, Im=0.91), which got the highest scores.
On the other hand, the tourism values (VTr) are (normally) the most important to visitors, since the relevance of most of the sub-indicators was Im ≥ 0.86 (i.e. the highest scores were given to the vicinity of the visitor center, interpretative panels, number of visitors, tourism infrastructure, tour guide service, hostels and restaurants). However, here we can also notice some exceptions, such as in the case of the promotion sub-indicator (VTr1, Im=0.50), as well as the annual number of organized visits (VTr2, Im=0.77). The experts considered these to be important for the overall assessment, which did not match visitors’ opinions. This is one more proof that we cannot rely solely on the opinions of experts, who are just one group of tourists that visit these sites. The exclusion of other walks of life and their opinions can only yield GAM results that are less objective and accurate than those obtained by using its modified version (M-GAM), where other segments of society besides experts are also included in the assessment.
It can thus be concluded, and these findings need to be emphasized, that the perception of Belgrade geosites differs at the numerous levels observed and that there are no additional sites that would enable the tourist functioning of the area and a more complex development of tourism. Also, it is necessary to consolidate all natural and anthropogenic motifs from this area into a complex tourism value, or to incorporate these sites into a unique tourist tour, since if they remain unintegrated, they will only have the character of a complementary touristic value of the City of Belgrade.
## Acknowledgement
The research was supported by the Ministry of Education, Science and Technological Development, Republic of Serbia (Grant III 47007) and by Tomsk Polytechnic University, Russian Federation (14.Z50.31.0029 from March 19, 2014).
## References
• [1] Erikstad L., Geoheritage and geodiversity management – the questions for tomorrow. Proceedings of the Geologists’ Association, 2013, 124, 4, 713–719
• [2] Newsome D., Dowling R., Leung Y.-F., The nature and management of geotourism: A case study of two established iconic geotourism destinations. Tourism Management Perspectives, 2012, 2–3, 19–27
• [3] Gray M., Geodiversity. Valuing and conserving abiotic nature. Wiley, Chichester, 2004
• [4] Dixon G., Geoconservation: An International Review and Strategy for Tasmania. Miscellaneous Report. Parks and Wildlife Service, Tasmania, 1996, 1-101
• [5] Giurginca A., Munteanu C.M., Stanomir M.L., Niculescu G., Giurginca M., Assessment of potentially toxic metals concentration in karst areas of the Mehedinti plateau geopark (Romania). Carpathian Journal of Earth and Environmental Sciences, 2010, 5, 1, 103–110
• [6] Petrović M.D., Vasiljević Dj.A., Vujičić M.D., Hose T.A., Marković S.B., Lukić T., Global geopark and candidate – comparative analysis of Papuk Mountain geopark (Croatia) and Fruška Gora Mountain (Serbia) by using GAM model. Carpathian Journal of Earth and Environmental Sciences, 2013, 8, 1, 105-116
• [7] Hose T.A., Geotourism in England: a two-region case study analysis. Unpublished PhD thesis in 2 volumes, University of Birmingham, UK, 2003
• [8] Hose T.A., Geotourism in Almeria Province, southeast Spain. Turizam, 2007, 55, 3, 259-276
• [9] Hose T.A., Towards a history of geotourism: definitions, antecedents and the future. In: Burek C.V., Prosser C.D. (Eds.), The history of geoconservation (Special Publication 300). The Geological Society, London, 2008, 37-60
• [10] Plyusnina E.E., Ruban D.A., Zayats P.P., Thematic dimension of geological heritage: An evidence from the Western Caucasus. Journal of the Geographical Institute “Jovan Cvijić” SASA, 2015, 65, 1, 59-76
• [11] Božić S., Tomić N., Canyons and gorges as potential geotourism destinations in Serbia: comparative analysis from two perspectives – general geotourists’ and pure geotourists’. Open Geosciences, 2015, 7(1), 531-546
• [12] Vasiljević Dj.A., Marković S.B., Hose T.A., Smalley I., Basarin B., Lazić L., Jović G., The introduction to geoconservation of loess palaeosol sequences in the Vojvodina region: Significant geoheritage of Serbia. Quaternary International, 2011a, 240, 108–116
• [13] Vasiljević Dj.A., Marković S.B., Hose T.A., Smalley I., O’Hara-Dhand K., Basarin B., Lukić T., Vujičić M.D., Loess towards (geo)tourism – proposed application on loess in Vojvodina region (north Serbia). Acta Geographica Slovenica, 2011b, 51(3), 391-406
• [14] Jojić-Glavonjić T., Milivojević M., Panić M., Protected geoheritage sites as a touristic value of Srem. Journal of the Geographical Institute “Jovan Cvijić” SASA, 2014, 64, 1, 33-50
• [15] Hose T.A., Geo-tourism – appreciating the deep time of landscapes. In: Novelli M. (Ed.), Niche Tourism: contemporary issues, trends and cases. Elsevier Science, Oxford, 2005, 27–37
• [16] Novelli M., Benson A., Niche tourism: A way forward to sustainability? In: Novelli M. (Ed.), Niche Tourism: contemporary issues, trends and cases. Elsevier Science, Oxford, 2005, 247–251
• [17] Wimbledon W.A.P., Ishchenko A.A., Gerasimenko N.P., Karis L.O., Suominen V., Johansson C.E., Freden C., Geosites – an IUGS initiative: science supported by conservation. In: Barretino D., Wimbledon W.P., Gallego E. (Eds.), Proceedings of the Geological heritage: its conservation and management, Madrid, Spain. Instituto Tecnologico Geominero de Espana, Madrid, 2000, 69-94
• [18] Joksimović M.M., Gajić M.R., Vujadinović S.M., Golić R.M., Vuković D.B., The effect of the thermal component change on regional climate indices in Serbia. Thermal Science, 2015, 19(2), 391-403
• [19] Mijović D., Stefanović I., Inventar objekata geonasleđa Srbije – od ideje do optimalnog modela (The inventory of Serbian geoheritage sites – from idea to optimal model). Protection of Nature, 2008, 60, 1-2, 359-365 (in Serbian with English summary)
• [20] Grubačević M., Mijić R., Glamočić B., Božović B., Tanasković M., Popović A., Kvalitet životne sredine grada Beograda u 2008. godini (Quality of the Environment in Belgrade in 2008). Secretariat for Environmental Protection of the City of Belgrade, Institute for Public Health and the Regional Environmental Center for Central and Southeast Europe, Belgrade, 2009 (in Serbian with English summary)
• [21] Rundić Lj., Knežević S., Banjac N., Ganić M., Milovanović D., Rabrenović D., Geološki objekti i pojave kao integralni deo prirodne i kulturne baštine grada Beograda (Geological objects and phenomena as an integral part of the natural and cultural heritage of the City of Belgrade). Proceedings of the 15th Congress of the Geologists of Serbia, Belgrade, 2010, 711-717 (in Serbian with English summary)
• [22] Banjac N., Rundić Lj., Geoturizam – novi vid turističke ponude na Tari (Geotourism – a new form of tourism on Tara). Geographical Institute “Jovan Cvijić” SASA, Belgrade, 2006 (in Serbian with English summary)
• [23] Belij S., Geodiverzitet i geonasleđe – savremeni trend razvoja geomorfologije u svetu i kod nas (Geodiversity and geoheritage – the modern trend of development of geomorphology in the world and in our country). Journal of the Geographical Institute “Jovan Cvijić” SASA, 2007, 57, 65-70 (in Serbian)
• [24] Marković S.B., Oches E., Sümegi P., Jovanović M., Gaudenyi T., An introduction to the Upper and Middle Pleistocene loess-palaeosol sequences of Ruma section (Vojvodina, Serbia). Quaternary International, 2006, 149, 80-86
• [25] Marković S.B., Oches E.A., McCoy W.D., Gaudenyi T., Frechen M., Malacological and sedimentological evidence for “warm” glacial climate from the Irig loess sequence (Vojvodina, Serbia). Geochemistry, Geophysics, Geosystems, 2007, 8, Q09008
• [26] Marković S.B., Hambach U., Stevens T., Kukla G.J., Heller F., McCoy W.D., Oches E.A., Buggle B., Zöller L., The last million years recorded at the Stari Slankamen (Northern Serbia) loess-palaeosol sequence: revised chronostratigraphy and long-term environmental trends. Quaternary Science Reviews, 2011, 30(9–10), 1142-1154
• [27] Rundić Lj., Geološki objekti i prirodni fenomeni kao integralni elementi geodiverziteta grada Beograda (Geological structures and natural phenomena as integral elements of geological diversity of the City of Belgrade). Faculty of Mining and Geology, Belgrade, 2010 (in Serbian)
• [28] Stevanović P., Potok Karagača ispod Avale – klasično mesto nalaska panonske fosilne faune mekušaca (The Karagača stream below Mt. Avala – a classic locality of Pannonian fossil mollusk fauna). Protection of Nature, 1958a, 12, 6-12
• [29] Stevanović P., Potok Karagača ispod Avale – klasično mesto nalaska panonske fosilne faune mekušaca (The Karagača stream below Mt. Avala – a classic locality of Pannonian fossil mollusk fauna). Protection of Nature, 1958b, 13, 6-13
• [30] Marković S.B., Hambach U., Catto N., Jovanović M., Buggle B., Machalett B., Zöller L., Glaser B., Frechen M., The Middle and Late Pleistocene loess sequences at Batajnica, Vojvodina, Serbia. Quaternary International, 2009, 198, 1–2, 255-266
• [31] Kličković M., Belij S., Petreš D., Trikić M., Simić S., Izveštaj o preliminarnom istraživanju prirodnog jezera Rakina bara u Sremčici kod Beograda (Report on the preliminary study of the natural lake Rakina bara in Sremčica near Belgrade). Institute for Nature Conservation of Serbia, Belgrade, 2008 (in Serbian with English summary)
• [32] Tomić N., Božić S., A modified Geosite Assessment Model (M-GAM) and its Application on the Lazar Canyon area (Serbia). International Journal of Environmental Research, 2014, 8, 4, 1041-1052
• [33] Różycka M., Migoń P., Customer-Oriented Evaluation of Geoheritage – on the Example of Volcanic Geosites in the West Sudetes, SW Poland. Geoheritage, 2017, 1-15
• [34] Vujičić M., Vasiljević Dj.A., Marković S.B., Hose T.A., Lukić T., Hadžić O., Janićević S., Preliminary geosite assessment model (GAM) and its application on Fruška Gora Mountain, potential geotourism destination of Serbia. Acta Geographica Slovenica, 2011, 51, 2, 361-377
• [35] Marković J.J., The image of Belgrade and Novi Sad as perceived by foreign tourists. Journal of the Geographical Institute “Jovan Cvijić” SASA, 2016, 66, 1, 91-104
• [36]
Todorović, N., Jovičić D., Motivational factors of youth tourists visiting Belgrade. Journal of the Geographical Institute “Jovan Cvijić” SASA, 2016, 66, 2, 273-289
• [37]
Census of Population, Households and Dwellings in the Republic of Serbia: Comparative Overview of the Number of Population in 1948, 1953, 1961, 1971, 1981, 1991, 2002 and 2011, Data by settlements” (PDF). Statistical Office of Republic of Serbia, Belgrade. 2011. Retrieved June 27, 2015. Google Scholar
• [38]
Lukić D., Milovanović D., A contribution to the insight a contribution to the insight into Djerdap geoheritage. In: Cvetković V. (Ed.). Proceedings of the XVI Congress of the Geologists of Serbia, Donji Milanovac, Serbia. Serbian Geological Society, Belgrade, 2014, 877-879 Google Scholar
## Appendix 1
Table A.1
An example of the first part of the questionnaire (translated into English), used to record experts' and visitors' attitudes toward the observed geosites in Belgrade (Serbia). The questionnaire is based on a five-point Likert-type scale rated from zero to one (0 = not at all important; 0.25 = not very important; 0.50 = neutral; 0.75 = somewhat important; 1.00 = very important). The respondents marked the word corresponding to their attitude toward each presented value.
## Appendix 2
Table A.2
An example of the second part of the questionnaire (translated into English), used to record experts' and visitors' attitudes toward the observed geosites in Belgrade (Serbia). The questionnaire is based on a five-point Likert-type scale rated from zero to one (0 = not at all important; 0.25 = not very important; 0.50 = neutral; 0.75 = somewhat important; 1.00 = very important). The respondents marked the word corresponding to their attitude toward each presented value.
Accepted: 2017-08-06
Published Online: 2017-10-05
Citation Information: Open Geosciences, Volume 9, Issue 1, Pages 442–456, ISSN (Online) 2391-5447.
|
|
Question
# I'm so late, so can I start MOTION IN ONE DIMENSION first, before covering UNITS AND DIMENSIONS?
Solution
## Yes, you can, but try to cover units and dimensions first.
|
|
# I'm trying to make sure that my answer on the left is correct, along with what...
###### Question:
I'm trying to make sure that my answer on the left is correct, along with what I stated for the three-dimensional stereochemistry details at the bottom.
12) Draw a diastereomer of the given molecule. Take particular care to indicate the three-dimensional stereochemistry details properly, changing one or two stereocenters. (3R,5S,7S); (3S,5S,1S)
#### Similar Solved Questions
##### Problem 3 (25 pt). Consider the linear differential equation x²y″ + xy′ − 9y = x² + 3. a) Find the general solution of the associated linear homogeneous equation. b) Find the general solution of the original nonhomogeneous differential equation. Problem 4 (25 pt). Use the Laplace transform method and the partial fractions expansion to solve the initial value problem y″ + y = t; y(0) = 1, y′(0) = 0.
##### III. Physical Properties (10 pts). 10. Place the compounds below (shown in the figure) in order of increasing melting point. Briefly explain your reasoning. 11. Place the same compounds in order of increasing polarity. Briefly explain your reasoning. 12. Dichloromethane (methylene chloride, CH2Cl2) is ...
##### A 43 kg, 5.4-m-long beam is supported, but not attached to, the two posts in the figure (Figure 1). A 24 kg boy starts walking along the beam. Part A: How close can he get to the right end of the beam without it falling over? Express your answer using two significant figures.
##### How do you write an equation of a line passing through (0, 4), perpendicular to y = x?
##### Joint Cost Allocation-Weighted Average Method Carving Creations jointly produces wood chips and sawdust used in agriculture....
##### Affirming the Consequent, Denying the Antecedent, or Undistributed Middle. 5. ______ If Orville does not turn his work in on time, he will not pass accounting. Orville passes accounting. He turns his work in on time. 6. ______ If Hortense and Algernon work together, they will pass. Hortense and Alger...
##### Find the velocity of a particle whose position is given by the vector function r(t) = cos t i + e^(−t) j + 2t k. Options: −cos t i − e^(−t) j; cos t i + e^(−t) j; −sin t i − e^(−t) j + 2 k; sin t i + e^(−t) j + 2 k; None of these.
##### Calculate f(x, y) = 6x² + ... when defined. (If an answer is undefined, enter UNDEFINED.)
##### As of the end of June, the job cost sheets at Racing Wheels, Inc., show the following total costs accumulated on three custom jobs. Direct materials: Job 102 $16,000, Job 103 $56,000, Job 104 $60,000. Direct labor: Job 102 $20,000, Job 103 $44,000, Job 104 $30,000. Overhead applied: Job 102 $9,200, Job 103 $20,240, Job 104 $13,800. Job 102 was started in productio...
##### 7.) a. What is the syntax of a basic IF statement in Excel? Options: IF(logical test); IF(logical test, value if true); IF(logical test, value if true, value if false); IF(logical test, value if false, value if true). b. True or False? "AND" means that only one of the logi...
##### What does Hess's law say about the enthalpy of a reaction?
##### Finding the volume of a solid of revolution (washer method). Using the washer method, determine the volume of the solid formed by revolving the region bounded by the given line and curve about the x-axis. The 2D picture below may help in determining the inner and outer radius of the washer used in setting up the integral for the volume. Part: set up the integral that represents the volume.
##### 17. SerPSE10 31.5.OP.027. An LC circuit like the one in the figure below contains a 65.0 mH inductor and a 13.0 µF capacitor that initially carries a 170 µC charge. The switch is open for t < 0 and is then thrown closed at t = 0. (a) Find the frequency in hertz...
##### Match each section of the periodic table (shown) to the groups or types of elements found in that section: main-group elements; transition elements; inner-transition elements. Choices: Groups 3-12; Groups 1, 2, and 13-18; Lanthanides and Actinides.
##### Varicella zoster. Do you have to? a. Administer aspirin for fever. b. Assign the client to a positive-airflow room....
##### A ball connected to a light spring is suspended vertically, as shown in the figure.
When pulled downward from its equilibrium position and released, the ball oscillates up and down. (a) In the system of the ball, the spring, and the Earth, what forms of energy are there during the motion? (b) In the system of the ball and the spring, what forms of energy are there during the motion?
##### A ball is dropped from rest from the top of a $6.10$-m-tall building, falls straight downward, collides inelastically with the ground, and bounces back. The ball loses $10.0 \%$ of its kinetic energy every time it collides with the ground. How many bounces can the ball make and still reach a windowsill that is $2.44 \mathrm{~m}$ above the ground?
##### Question 5. Which of the following chlorobenzenes does not undergo nucleophilic aromatic substitution upon treatment with NaNH2? (Structures 1-4 shown in figure.) a. 1 b. 2 c. 3 d. 4
##### 1. (e) (4 points) Compute (i.e., fill in the blanks in the right-hand side) the given product of permutations. (b) (3 points) Compute (i.e., fill in the blanks in the right-hand side) the given product of permutations, and determine the order of the resulting permutation.
##### 14. Compute $\iint_D xy\,dA$, where $D$ is bounded by the curves $y = x$ and $y = 3x$.
##### I need a detailed answer please. Math 180_GWC, Summer 2020, Exam 3. 4) (13 points) Let f(x) = 6x³ − 15x² + 12x. a) Find the open intervals on which the function f(x) is concave up or concave down. b) Find all point(s) of inflection of the graph of the fu...
##### Question 20. An AM station broadcasts with an assigned frequency of ... Hz. What is the wavelength of the radio wave produced? (Speed of light: 3.00 × 10⁸ m/s.) Options: 180 m; 20 m; 500 m; 40 m.
##### Please solve this by using the formulas of curved beams. 1345. The cross section of the ring is the T section shown in Fig. P-1345 (dimensions 6 in., 1 in., 1 in., 4 in.). The inside diameter of the ring is 15.6 in. Determine the value of P that will cause a maximum stress of 18 ksi. (Ans: P = 22.3 kips.)
##### Solve each equation. $|5-3 x|=3$
##### Question 9 (1 point). According to the affect heuristic, if new technologies promise significant benefit, then people are likely to regard them as: less risky / more risky.
##### Please answer both. 4. TanFin1 14.1.022. Solve the linear programming problem by the simplex method. Maximize P = 12x + 9y subject to x + y ≤ 12, 3x + y ≤ 30, 10x + 7y ≤ 70, x ≥ 0, y ≥ 0. The maximum is P = ___ at (x, y) = ___. 5. TanFin11 4.1.028. Solve the linear programmin...
##### Find the 24th percentile, P24, from the following data: 1000, 1600, 1800, 1900, 2600, 3100, 3400, 3700, 3800, 3900, 4000, 4300, 4500, 5300, 5400, 5500, 5800, 5900, 6000, 6200, 6300, 6700, 6900, 7200, 7400, 7500, 7900, 8000, 8100, 8200, 8400, 8500, 8600. Enter your answer as an integer or decimal number (examples: 3, 4, 5.5172); enter DNE for Does Not Exist.
##### Which of the following reactions would NOT result in a 50/50 mixture of stereoisomeric products? Reaction A: HCl, ether (shown in figure); Reaction B: HCl, ether (shown in figure). Options: Both reactions A and B; Neither reaction A nor B; Reaction A; Reaction B.
##### Which level of protein structure is disrupted by allosteric inhibition? A. Primary B. Secondary C. Tertiary D. Quaternary
##### Ten cups of a restaurant's house Italian dressing is made by blending olive oil costing $1.50 per cup with vinegar that costs $.25 per cup. How many cups of each are used if the cost of the blend is $.50 per cup?
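One way to set this up (a standard mixture system, sketched here rather than a posted solution): let $x$ be the cups of olive oil and $y$ the cups of vinegar. Then
$x + y = 10, \qquad 1.50x + 0.25y = 0.50 \times 10 = 5.00,$
so subtracting $0.25$ times the first equation from the second gives $1.25x = 2.50$, i.e. $x = 2$ cups of olive oil and $y = 8$ cups of vinegar.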
##### (3 points each) Consider the series $\sum_{n=4}^{\infty} \dots$ (a) Show that the series converges. State the test you are using. (b) Find the sum.
|
|
TVT, 2013, Volume 51, Issue 4, Pages 524–531 (Mi tvt107)
Thermophysical Properties of Materials
Study of the law of corresponding states of viscous properties of classical liquids
S. Odinaev, A. A. Abdurasulov
Osimi Tajik Technical University, Dushanbe, Tajikistan
Abstract: The law of corresponding states is studied for the coefficients of shear ($\eta_S$) and volume ($\eta_V$) viscosities of classical liquids ($\mathrm{Ar}$, $\mathrm{Kr}$, $\mathrm{Xe}$, $\mathrm{O}_2$, $\mathrm{N}_2$, $\mathrm{CH}_4$); the analytical expressions are derived on the basis of kinetic equations for one- and two-particle distribution functions. The reduced iso-frequency coefficients $\eta_S^*$ and $\eta_V^*$ for liquid $\mathrm{Ar}$, $\mathrm{Kr}$, $\mathrm{Xe}$, $\mathrm{O}_2$, $\mathrm{N}_2$ and $\mathrm{CH}_4$ are numerically calculated over a wide range of the reduced temperatures $T^*$ and densities $\rho^*$, which satisfy the law of corresponding states, for a definite choice of the intermolecular interaction potential $\Phi(|\mathbf{r}|)$ and radial distribution function $g(|\mathbf{r}|)$.
DOI: https://doi.org/10.7868/S0040364413040169
English version:
High Temperature, 2013, 51:4, 469–475
UDC: 532.7+532.133
Citation: S. Odinaev, A. A. Abdurasulov, “Study of the law of corresponding states of viscous properties of classical liquids”, TVT, 51:4 (2013), 524–531; High Temperature, 51:4 (2013), 469–475
• http://mi.mathnet.ru/eng/tvt107
• http://mi.mathnet.ru/eng/tvt/v51/i4/p524
|
|
# Does the Big Bang model describe a first moment in time for the entire universe or just the observable universe?
The Big Bang model describes the universe as contracting as we wind time backwards. Since the observable universe is of finite size, this ultimately sets a first moment of time at around 13.8 billion years ago.
My question is: does this also include the non-observable universe? For instance, if the non-observable universe is infinitely large, couldn't it contract indefinitely without a first moment of time?
• "For instance if the non-observable universe is infinity large couldn't it contract indefinitely without a first moment of time?" - why? May 29 '18 at 2:25
## 1 Answer
The Big Bang model describes the early moments of time for the entire universe. The standard Lambda-CDM model allows for two possibilities. (1) If the universe is closed (global space has a positive curvature), then the universe is expanding from smaller to larger. (2) If the universe is open (global space is flat or negatively curved), then the universe is infinite and has always been infinite. In this case, the universe was initially infinitely large, with infinite mass and infinite density everywhere.
Infinities in physics are a problem, as infinite solutions are generally considered non-physical. A well-known example is how a concern with infinite solutions in classical physics led to the development of quantum mechanics, which avoided such infinities. Accordingly, the hope is that quantum gravity will shed more light on the first moments of the Big Bang and on what the universe looked like back then.
• So basically, in flat space there were points, at overwhelming distances beyond the observable universe and even beyond the nearby (i.e., inflation-accounted) region, which were never in contact with each other? Or, in other words, is it not true that space came into existence, but rather it was already there, shrunken but infinite? Jan 22 '19 at 8:55
• @Alchimista Your first point seems correct, but you may want to ask it as a separate question on this site to let others provide an insight. Your second point, though, is incorrect. Spacetime and energy-momentum are Fourier conjugates (two different sides of the same coin), meaning they cannot exist without each other. If spacetime was there before the Big Bang, then energy also was there. And if so, the universe would simply start expanding earlier. So no, energy and time had to come into existence together at $t=0$. Time starts at the Big Bang, so there is no such time as "before" the Big Bang. Jan 22 '19 at 9:31
• Yes, let us start from when energy was there, of course; otherwise nothing would have happened. I am afraid these kinds of things are too complicated. I can just shift the question to: did this space and energy appear everywhere, well beyond the horizons? Then someone could say it is very huge, we will never access it, and so on. In fact I have a problem with rolling time back, rather than with the far, far away parts of the universe. I am already happy you got the first point. Unfortunately I am not used to chat, or else we could write up a question. When I have tried, I mostly learned what I knew already, and my doubt came immediately back.... Jan 22 '19 at 10:55
• I think that obviously I must roll back including inflation, so that the scale goes down faster, overcoming the simple sum of tiny specks of distance. In time I will indeed try a question. Jan 23 '19 at 8:32
• @Alchimista Yes, it's your question. It wasn't easy to phrase :) Hopefully it doesn't get downvoted, but you never know on this site. Jan 23 '19 at 8:44
|
|
# GeoDict Forum
## General => Forum FAQs => Topic started by: Aaron Widera on September 14, 2020, 04:25:08 PM
Post by: Aaron Widera on September 14, 2020, 04:25:08 PM
Hello everyone, welcome to our Forum FAQs.
• Why can I not make a post to the forum?
• Only registered members can post to the forum. Here you can register: https://forum.math2market.de/index.php?action=register (https://forum.math2market.de/index.php?action=register)
• There are certain boards and threads that can only be edited by Math2Market employees, for example this FAQ thread.
• How do I change my profile settings?
• In your profile settings, several pieces of information can be changed.
• To access the profile settings head to Profile $$\rightarrow$$ Modify Profile $$\rightarrow$$ Forum Profile.
• Or use this link: https://forum.math2market.de/index.php?action=profile;area=forumprofile;u=2 (https://forum.math2market.de/index.php?action=profile;area=forumprofile;u=2)
• Some information, such as the username, can not be changed. But you can change your displayed name:
• Profile $$\rightarrow$$ Modify Profile $$\rightarrow$$ Account settings.
• Or use this link: https://forum.math2market.de/index.php?action=profile;area=account (https://forum.math2market.de/index.php?action=profile;area=account)
• The profile picture size is limited to 2 MB.
• How do I delete my account?
• Your account can be deleted at Profile $$\rightarrow$$ Actions $$\rightarrow$$ Delete Account.
• Or use this link: https://forum.math2market.de/index.php?action=profile;area=deleteaccount;u=2 (https://forum.math2market.de/index.php?action=profile;area=deleteaccount;u=2)
• How do I start a new board at the front page?
• It is not intended to change the boards on the forum's starting page. The starting page is read-only. See question number 1.
• How do I use the Latex Editor?
• You can use the LaTeX Editor by clicking on the fMath formatting option above the text area.
• There are two different options:
• One that creates formulas inline $$A = \pi r^{2}$$
• And one that creates formulas in a new paragraph $A = \pi r^{2}$
For LaTeX code in a new paragraph, enter your LaTeX code between these operators:
Code: [Select]
$A = \pi r^{2}$
For inline LaTeX code, enter your LaTeX code between these two operators:
Code: [Select]
[latex=inline] A = \pi r^{2}[/latex]
|
|
Article
# Appendix to: The level 1 weight 2 case of Serre's conjecture - a strategy for a proof
01/2005;
Source: arXiv
ABSTRACT In this appendix, we observe that our March preprint on Serre's conjecture was indeed correct: the only "missing argument" follows automatically from a result of Bockle and Ramakrishna. Thus, we get a proof of the level 1 weight 2 case of Serre's conjecture (a result that has been also proved independently by Khare and Wintenberger).
### Keywords
Bockle
level 1 weight 2 case
March preprint
Serre's conjecture
Wintenberger
|
|
## Divisibility theorems for group representations II (October 14, 2009)
Posted by Akhil Mathew in algebra, representation theory.
So last time we proved that the dimension of an irreducible representation divides the index of the center. Now to generalize this to an arbitrary abelian normal subgroup.
There are first a few basic background results that I need to talk about.
Induction
Given a group ${G}$ and a subgroup ${H}$ (in fact, this can be generalized to a non-monomorphic map ${H \rightarrow G}$), a representation of ${G}$ yields by restriction a representation of ${H}$. One obtains a functor ${\mathrm{Res}^G_H: Rep(G) \rightarrow Rep(H)}$. This functor has an adjoint, denoted by ${\mathrm{Ind}_H^G: Rep(H) \rightarrow Rep(G)}$. (more…)
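In concrete terms, the adjunction asserts a natural isomorphism (Frobenius reciprocity, stated here for reference): for ${W \in Rep(H)}$ and ${V \in Rep(G)}$,
$\displaystyle \mathrm{Hom}_G(\mathrm{Ind}_H^G W, V) \simeq \mathrm{Hom}_H(W, \mathrm{Res}^G_H V).$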
## Divisibility theorems for group representations (October 11, 2009)
Posted by Akhil Mathew in algebra, representation theory.
There are many elegant results on the dimensions of the simple representations of a finite group ${G}$, of which I would like to discuss a few today.
The final, ultimate goal is:
Theorem 1 Let ${G}$ be a finite group and ${A}$ an abelian normal subgroup. Then each simple representation of ${G}$ has dimension dividing ${|G|/|A|}$. (more…)
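As a quick sanity check, take ${G = S_3}$ and ${A = A_3}$, the abelian normal subgroup of order ${3}$; here ${|G|/|A| = 2}$, and the simple representations of ${S_3}$ indeed have dimensions ${1, 1, 2}$, each dividing ${2}$.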
## A quick lemma on group representations (September 23, 2009)
Posted by Akhil Mathew in algebra, representation theory.
So, since I’ll be talking about the symmetric group a bit, and since I still don’t have enough time for a deep post on it, I’ll take the opportunity to cover a quick and relevant lemma in group representation theory (referring as usual to the past blog post as background).
A faithful representation of a finite group ${G}$ is one where different elements of ${G}$ induce different linear transformations, i.e. ${G \rightarrow Aut(V)}$ is injective. The result is
Lemma 1 If ${V}$ is a faithful representation of ${G}$, then every simple representation of ${G}$ occurs as a direct summand in some tensor power ${V^{\otimes p}}$ (more…)
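For instance, over ${\mathbb{C}}$ the two-dimensional standard representation of ${S_3}$ (the permutation representation on ${\mathbb{C}^3}$ modulo the invariant line spanned by ${(1,1,1)}$) is faithful, so each of the three simple representations of ${S_3}$ occurs in one of its tensor powers.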
## Representations of the symmetric group (September 20, 2009)
Posted by Akhil Mathew in algebra, combinatorics, representation theory.
I’ve now decided on future plans for my posts. I’m going to alternate between number theory posts and posts on other subjects, since I have too many interests to want to spend all my blogging time on one area.
For today, I’m going to take a break from number theory and go back to representation theory a bit, specifically the symmetric group. I’m posting about it because I don’t understand it as well as I would like. Of course, there are numerous other sources out there—see for instance these lecture notes, Fulton and Harris’s textbook, Sagan’s textbook, etc. Qiaochu Yuan has been posting on symmetric functions and may be heading for this area too, though if he does I’ll try to avoid overlapping with him; I think we have different aims anyway, so this should not be hard. (more…)
## More Lie algebra constructions (July 28, 2009)
Posted by Akhil Mathew in algebra, representation theory.
The ultimate aim in the series on Lie algebras I am posting here is to cover the representation theory of semisimple Lie algebras. To get there, we first need to discuss some technical tools—for instance, invariant bilinear forms.
Generalities on representations
Fix a Lie algebra ${L}$. Given representations ${V_1, V_2}$, we clearly have a representation ${V_1 \oplus V_2}$; given a morphism of representations ${V_1 \rightarrow V_2}$, i.e. one which respects the action of ${L}$, the kernel and image are themselves representations.
Proposition 1 The category ${Rep(L)}$ of finite-dimensional representations of ${L}$ is an abelian category.
(more…)
## Lie’s Theorem II (July 27, 2009)
Posted by Akhil Mathew in algebra, representation theory.
Yesterday I was talking about Lie’s theorem for solvable Lie algebras. I went through most of the proof, but didn’t finish the last step. We had a solvable Lie algebra ${L}$ and an ideal ${I \subset L}$ such that ${I}$ was of codimension one.
There was a finite-dimensional representation ${V}$ of ${L}$. For ${\lambda \in I^*}$, we set
$\displaystyle V_\lambda := \{ v \in V: Yv = \lambda(Y) v, \ \mathrm{all} \ Y \in I \}.$
We assumed ${V_\lambda \neq 0}$ for some ${\lambda}$ by the induction hypothesis. The following lemma then completes the proof of Lie’s theorem, by the “fundamental calculation:”
Lemma 1 If ${V_\lambda \neq 0}$, then ${\lambda([L,I])=0}$.
(more…)
## Lie’s Theorem I (July 26, 2009)
Posted by Akhil Mathew in algebra, representation theory.
I talked a bit earlier about nilpotent Lie algebras and Engel’s theorem. There is an analog for solvable Lie algebras, and the corresponding Lie’s theorem.
So, first the definitions. Solvability is similar to nilpotence in that one takes repeated commutators, except one uses the derived series instead of the lower central series.
In what follows, fix a Lie algebra ${L}$ over an algebraically closed field ${k}$ of characteristic zero.
Definition 1 The derived series of ${L}$ is the descending filtration ${D_n}$ defined by ${D_0 := L, D_n := [D_{n-1}, D_{n-1}]}$. The Lie algebra ${L}$ is solvable if ${D_M=0}$ for some ${M}$.
For instance, a nilpotent Lie algebra is solvable, since if ${\{C_n\}}$ is the lower central series, then ${D_n \subset C_n}$ for each ${n}$.
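The converse fails, and a standard example is worth recording here: the Lie algebra ${\mathfrak{b}}$ of upper-triangular ${2 \times 2}$ matrices is solvable, since ${D_1 = [\mathfrak{b}, \mathfrak{b}]}$ consists of strictly upper-triangular matrices and ${D_2 = 0}$; but it is not nilpotent, because the lower central series stabilizes at ${C_n = D_1 \neq 0}$ for all ${n \geq 1}$.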
## Engel’s Theorem and Nilpotent Lie Algebras (July 23, 2009)
Posted by Akhil Mathew in algebra, representation theory.
Now that I’ve discussed some of the basic definitions in the theory of Lie algebras, it’s time to look at specific subclasses: nilpotent, solvable, and eventually semisimple Lie algebras. Today, I want to focus on nilpotence and its applications.
Engel’s Theorem
To start with, choose a Lie algebra ${L \subset \mathfrak{gl} (V)}$ for some finite-dimensional ${k}$-vector space ${V}$; recall that ${\mathfrak{gl} (V)}$ is the Lie algebra of linear transformations ${V \rightarrow V}$ with the bracket ${[A,B] := AB - BA}$. The previous definition was in terms of matrices, but here it is more natural to think in terms of linear transformations without initially fixing a basis.
Engel’s theorem is somewhat similar in its statement to the fact that commuting diagonalizable operators can be simultaneously diagonalized.
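For reference, the statement is: if every element of ${L \subset \mathfrak{gl}(V)}$ is a nilpotent linear transformation and ${V \neq 0}$, then there is a nonzero vector ${v \in V}$ with ${Xv = 0}$ for all ${X \in L}$; iterating on the quotient, one obtains a flag of subspaces with respect to which every element of ${L}$ is strictly upper-triangular.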
## Why simple modules are often finite-dimensional II (July 22, 2009)
Posted by Akhil Mathew in algebra, representation theory.
I had a post a few days back on why simple representations of algebras over a field ${k}$ which are finitely generated over their centers are always finite-dimensional, where I covered some of the basic ideas, without actually finishing the proof; that is the purpose of this post.
So, let’s review the notation: ${k}$ is our ground field, which we no longer assume algebraically closed (thanks to a comment in the previous post), ${A}$ is a ${k}$-algebra, ${Z}$ its center. We assume ${Z}$ is a finitely generated ring over ${k}$, so in particular Noetherian: each ideal of ${Z}$ is finitely generated.
Theorem 1 (Dixmier, Quillen) If ${A}$ is a finite ${Z}$-module, then any simple ${A}$-module is a finite-dimensional ${k}$-vector space.
(more…)
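The finiteness hypothesis is essential; a standard cautionary example (recalled here, not from the post itself) is the first Weyl algebra ${A_1 = k\langle x, \partial\rangle/(\partial x - x\partial - 1)}$ in characteristic zero: its center is just ${k}$, it is not a finite module over it, and ${k[x]}$ (with ${\partial}$ acting as differentiation) is a simple ${A_1}$-module which is infinite-dimensional over ${k}$.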
## Representations of sl2, Part II (July 18, 2009)
Posted by Akhil Mathew in algebra, representation theory.
This post is the second in the series on ${\mathfrak{sl}_2}$ and the third in the series on Lie algebras. I’m going to start where we left off yesterday on ${\mathfrak{sl}_2}$, and go straight from there to classification. Basically, it’s linear algebra.
Classification
We’ve covered all the preliminaries now and we can classify the ${\mathfrak{sl}_2}$-representations, the really interesting material here. By Weyl’s theorem, we can restrict ourselves to irreducible representations. Fix an irreducible ${V}$.
So, we know that ${H}$ acts diagonalizably on ${V}$, which means we can write
$\displaystyle V = \bigoplus_\lambda V_\lambda$
where ${Hv_\lambda = \lambda v_{\lambda}}$ for each ${\lambda}$, i.e. ${V_\lambda}$ is the ${H}$-eigenspace.
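For orientation, the end result of the classification is the standard one: the irreducible representation of dimension ${n+1}$ has ${H}$-eigenvalues ${n, n-2, \ldots, -n}$, each eigenspace ${V_\lambda}$ being one-dimensional, so the decomposition above is a finite direct sum.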
|
|
# Renormalization in a Lorentz-violating model and higher-order operators
J. R. Nascimento Departamento de Física, Universidade Federal da Paraíba
Caixa Postal 5008, 58051-970, João Pessoa, Paraíba, Brazil
A. Yu. Petrov Departamento de Física, Universidade Federal da Paraíba
Caixa Postal 5008, 58051-970, João Pessoa, Paraíba, Brazil
Carlos M. Reyes Departamento de Ciencias Básicas, Universidad del Bío Bío,
Casilla 447, Chillán, Chile
###### Abstract
The renormalization of a Lorentz-breaking scalar-spinor higher-derivative model involving a self-interaction and a Yukawa-like coupling is studied. We explicitly demonstrate that the convergence is improved in comparison with the usual scalar-spinor model, so the theory is super-renormalizable, and there are no divergences beyond two loops. We compute the one-loop corrections to the propagators for the scalar and fermionic fields and show that, in the presence of higher-order Lorentz invariance violation, the poles that dominate the physical theory are driven away from the standard on-shell pole mass due to radiatively induced lower-dimensional operators. The new operators change the standard gamma-matrix structure of the two-point functions, introduce large Lorentz-breaking corrections and lead to modifications in the renormalization conditions of the theory. We find the physical pole mass in each sector of our model.
###### pacs:
11.55.Bq, 11.30.Cp, 04.60.Bc,11.10.Gh
## I Introduction
It is well known that the Lorentz-breaking field theory models can be introduced in several ways. We can list some of the most popular approaches. First, one can introduce small Lorentz-breaking modifications of the known theories through additive terms, thus implementing the Lorentz-breaking extensions of the standard model ColKost (). In principle, the most known extensions of the QED follow this way. A very extensive list of the possible Lorentz-breaking additive terms in different field theory models including QED is given by KosGra (). Second, one can start with the modified dispersion relations Amelino (), and, in principle, try to find a theory yielding such relations. Third, the Lorentz-breaking theories can be treated as a low-energy limit of some fundamental theories, for example string theory KostSam () and loop quantum gravity LQG (). Finally, the Lorentz symmetry can be broken spontaneously, see f.e. spont (). The main motivation behind all these approaches, however, is the same, and resides in the expectation that any experimental evidence of departure from Lorentz symmetry may provide the first germs towards the construction of a theory amalgamating both General Relativity and the Standard Model of particle physics.
At the same time, it is natural to consider one more aspect of studying the Lorentz-breaking extensions of field theory models. It consists in introducing essentially Lorentz-breaking terms, that is, those proportional to some constant vectors or tensors, involving higher derivatives. As a result, the corresponding theory will yield an essentially different quantum dynamics. The first known example of such a theory is the Myers-Pospelov extension of the electrodynamics MP (), where the three-derivative term essentially involves the Lorentz symmetry breaking. Another important example of such a theory is four-dimensional Chern-Simons modified gravity with the Chern-Simons coefficient chosen in a special form JaPi (), which, in the weak field limit, also involves third-order derivatives of the dynamical field (that is, the metric fluctuation). Moreover, the importance of the Myers-Pospelov-like term, and of the analogous terms for scalar and spinor fields which can be easily introduced, is also motivated by the fact that a special choice of the Lorentz-breaking vector allows one to eliminate higher time derivatives, thus avoiding the ghosts which are typically present in theories with higher time derivatives (see e.g. ghosts ()). Also, this term was shown to arise as a quantum correction in different Lorentz-breaking extensions of QED MNP () and has been studied for causality and stability CMR0 (). In the case of including higher time derivatives, it has been shown recently that the unitarity of the $S$-matrix can be preserved at the one-loop order in a Myers-Pospelov QED CMR (). The proof has been accomplished using the Lee-Wick prescription for quantum field theories with negative metric Lee-Wick (). For other studies on unitarity at tree level for minimal and nonminimal Lorentz violations, see Schreck1 (); Schreck2 (), respectively. It is important to notice that the Myers-Pospelov-like modifications of QED are also being studied experimentally within different contexts MPexp ().
We emphasize that, up to now, the quantum impact of the Myers-Pospelov-like class of terms introduced already at the classical level, where the higher-derivative additive term should carry a small parameter which can enforce large quantum corrections collins (), has hardly been studied, except for the QED Fine-Tuning () and superfield CMP () cases. The presence of such an effect raises the question of how to define correctly the physical parameters in the renormalized theory. On the other hand, for studies in the context of semiclassical quantization it is natural to consider the presence of higher-derivative terms in order to implement a consistent renormalization program shapiro (). With these considerations, the natural question is: what are the possible consequences of including the Lorentz-breaking higher-derivative terms in the classical action?
It is well known that loop corrections in Lorentz-invariance violating quantum field theory may lead to new kinetic operators absent in the original Lagrangian. Recently, the consequences of these radiatively induced operators have been studied in relation with the finiteness of the $S$-matrix and the identification of the asymptotic state space ralf (). These new terms introduce modifications in the propagation of free particles and change drastically the physical content of the space of in and out states. In particular, the Kallen-Lehmann representation KL () and the LSZ reduction formalism LSZ () are modified in the presence of Lorentz symmetry violation rob (). An important finding is that spectral densities, which in the standard case are functions of momentum-dependent observer scalars such as $p^2$, in the Lorentz-violating scenario may depend on other scalars, such as couplings of Lorentz-violating tensor coefficients with momenta rob (). This has led to modifications in the renormalization procedure, in the definition of the asymptotic Hilbert space and, in general, in the treatment of external-leg physics ralf (); for other studies of the renormalization in Lorentz-breaking theories, see also scarp (). A natural extension of these studies is to consider the nonminimal framework of Lorentz invariance violation, that is, when the Lorentz breaking is performed with higher-order operators Kos-Mew (). It is well known that the inclusion of higher-order operators in quantum field theory will generate, via radiative corrections, all the lower-dimensional operators allowed by the symmetries of the Lagrangian. For the case of breaking the Lorentz symmetry, let us say in QED with a preferred four-vector $n^\mu$, the induced operators may involve contractions of $n$ with $\gamma$-matrices other than just $\slashed{n}$, together with scalars such as $n\cdot p$. The new terms force one to modify the renormalization conditions in order to extract the correct pole mass from the two-point functions. In particular, the renormalization condition for the renormalized fermion self-energy, $\Sigma(\slashed{p} = m_P) = 0$ with $m_P$ being the physical pole mass, has to be generalized, and this ultimately will depend on the form of the Lorentz breaking. In this work we continue these studies in order to carry out the renormalization in a theory with higher-order operators, and in addition we study the possible effects of large Lorentz-violating corrections. Within our study, we consider the renormalization of the higher-derivative Lorentz-breaking generalizations of the $\phi^4$ and Yukawa models.
The structure of the paper is as follows. In Sec. II, we consider the classical actions of our models, write down the dispersion relations, find the poles and describe their analytical behavior in the complex $p_0$-plane. In Sec. III, we compute the quantum corrections corresponding to the self-interaction $\frac{\lambda}{4!}\phi^4$. In Sec. IV we discuss the coupling of scalar and spinor fields and provide a study of the degree of divergence in our model. In Sec. V, we compute the two-point functions in the purely scalar and scalar-spinor sectors, thus exhausting the possible divergences and showing explicitly the radiatively induced operators with new gamma-matrix structure and large Lorentz-violating terms. In Sec. VI, we perform the mass renormalization in both sectors and find the physical masses of the theory. In the last section, we discuss our results, and in Appendices A and B, we provide some details of the calculations.
## II The effective models and pole structure
We are interested in the higher-order Lagrangian density describing two sectors of Lorentz-breaking theory:
$\mathcal{L} = \mathcal{L}_1 + \mathcal{L}_2. \qquad (1)$
The first sector involves a scalar field with a fourth-derivative term together with a self-interaction potential term,
$\mathcal{L}_1 = \frac{1}{2}\partial_\mu\phi\,\partial^\mu\phi - \frac{1}{2}M^2\phi^2 + g_1\,\phi\,(n\cdot\partial)^4\phi - \frac{\lambda}{4!}\,\phi^4, \qquad (2)$
and the second one the fermionic Myers-Pospelov model MP (), with dimension-five operators and the Yukawa coupling vertex
$\mathcal{L}_2 = \bar\psi\,(i\slashed{\partial} - m)\,\psi + g_2\,\bar\psi\,\slashed{n}\,(n\cdot\partial)^2\psi + g\,\bar\psi\phi\psi. \qquad (3)$
The constants $g_1$ and $g_2$ parametrize the higher-order Lorentz invariance violation, with the Planck mass representing a natural mass scale; the corresponding reduced couplings are dimensionless parameters whose presence describes the intensity of the higher-derivative terms, and $n$ is a dimensionless four-vector defining a preferred reference frame.
The propagators in momentum space read
$\Delta(p) = \frac{i}{p^2 - M^2 + 2g_1 (n\cdot p)^4}, \qquad S(p) = \frac{i}{\slashed{p} - m - g_2\,\slashed{n}\,(n\cdot p)^2}. \qquad (4)$
We begin with an analysis of the dispersion relations in both sectors. A further motivation for this study, and consequently for finding the poles and their analytical behavior in the complex $p_0$-plane, is, first, the fact that in our models the use of the residues of the propagators is the most convenient approach for calculating the quantum corrections. Second, in the presence of higher-order time-derivative terms a direct implementation of the $i\epsilon$ prescription may lead to a wrong four-momentum representation for the propagator, which may spoil any attempt to preserve unitarity or causality.
The scalar dispersion relation reads
$p^2 - M^2 + 2g_1 (n\cdot p)^4 = 0, \qquad (5)$
which for a purely time-like four-vector $n$ has the solutions
$p_0 = \pm\frac{1}{2}\sqrt{\frac{-1 \pm \sqrt{1+8g_1E^2}}{g_1}}, \qquad (6)$
where $E^2 = \vec{p}^{\,2} + M^2$. The dispersion relation can also be written as $2g_1(p_0^2 - p_1^2)(p_0^2 + P_2^2) = 0$, hence one has the solutions $\pm p_1$ and $\pm iP_2$, so that
$p_1 = \frac{1}{2}\sqrt{\frac{-1+\sqrt{1+8g_1E^2}}{g_1}}, \qquad P_2 = \frac{1}{2}\sqrt{\frac{1+\sqrt{1+8g_1E^2}}{g_1}}. \qquad (7)$
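As a consistency check on (7): for small $g_1$ one has $\sqrt{1+8g_1E^2} \approx 1 + 4g_1E^2$, so that $p_1^2 \approx E^2$ while $P_2^2 \approx 1/(2g_1)$; the solutions $\pm p_1$ thus reduce to the standard poles $\pm E$, while the additional poles $\pm iP_2$ move off to infinity as $g_1 \to 0$.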
Their exact location in the complex $p_0$-plane, and also the contour of integration, are shown in Fig. 1.
The solutions can be classified according to their perturbative behavior when taking the Lorentz violation to zero. We identify two standard solutions, which are perturbative deformations of the usual ones, and two complex ones (moreover, actually tachyonic) which diverge as $g_1 \to 0$. The extra solutions that appear are associated to negative-metric states in Hilbert space and have been called Lee-Wick solutions Lee-Wick ().
Alternatively, we can write the scalar propagator as
$\Delta(p) = \frac{i}{2g_1\,(p_0^2+P_2^2)(p_0^2-p_1^2+i\epsilon)}, \qquad (8)$
which agrees with the usual propagator in the limit $g_1 \to 0$.
In the fermion sector we have the dispersion relation
$\left(p_\mu - g_2\,n_\mu\,(n\cdot p)^2\right)^2 - m^2 = 0. \qquad (9)$
Again, for the time-like $n$ we have the equation
$(p_0 - g_2 p_0^2)^2 - \vec{p}^{\,2} - m^2 = 0, \qquad (10)$
whose standard solutions, that is, those non-singular at $g_2 \to 0$, are
$\omega_1 = \frac{1-\sqrt{1-4g_2E}}{2g_2}, \qquad \omega_2 = \frac{1-\sqrt{1+4g_2E}}{2g_2}, \qquad (11)$
and the Lee-Wick ones
$W_1 = \frac{1+\sqrt{1-4g_2E}}{2g_2}, \qquad W_2 = \frac{1+\sqrt{1+4g_2E}}{2g_2}, \qquad (12)$
where $E = \sqrt{\vec{p}^{\,2} + m^2}$.
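As in the scalar case, a quick expansion for small $g_2$ shows the expected limits: $\omega_1 \approx E$ and $\omega_2 \approx -E$ recover the standard poles, while $W_{1,2} \approx 1/g_2 \mp E$ are pushed to infinity as $g_2 \to 0$.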
In the region of energies satisfying the condition $E < 1/(4g_2)$ the four solutions are real and obey the inequality $\omega_2 < \omega_1 < W_1 < W_2$, where $\omega_2$ is a negative number. However, beyond the critical energy $E_c = 1/(4g_2)$ both $\omega_1$ and $W_1$ become complex and move along opposite imaginary directions at $\mathrm{Re}\,p_0 = 1/(2g_2)$, as shown in Fig. 2, while the other two solutions remain real.
To define the contour we use a heuristic argument, especially to go beyond the critical energy at which complex solutions appear. We implement a correct low-energy limit by considering the prescription given in QEDunitarity (), which has been well tested to give a suitable correspondence with the normal theory when $g_2 \to 0$ and also to preserve the unitarity of the $S$-matrix. In this effective region the integration contour is defined to round the negative pole from below and the three positive ones from above. Now we increase the energy to values at which the two solutions $\omega_1$ and $W_1$ become complex, and define the new contour as the one obtained by continuously deforming the curve, avoiding any crossing and singularity with the poles, as shown in Fig. 2.
With this consideration in mind, the fermion propagator reads
$S(p) = \frac{i\left((p_0-g_2p_0^2)\gamma^0 + p_i\gamma^i + m\right)}{g_2^2\,(p_0-\omega_1+i\epsilon)(p_0-W_1+i\epsilon)(p_0-\omega_2-i\epsilon)(p_0-W_2+i\epsilon)}, \qquad (13)$
which differs from the direct $i\epsilon$ prescription in the quadratic terms, but allows in particular to define a consistent Wick rotation, which we use later.
## III The interaction $\lambda\phi^4$
In this section we explore the potentially divergent one-loop radiative correction in the scalar propagator which is generated by the well-known tadpole graph given by Fig. 3.
To proceed with it, we need to evaluate the basic integral
$\Sigma_2 = \frac{1}{2}\lambda\int\frac{d^4p}{(2\pi)^4}\,\frac{1}{p^2 - M^2 + 2g_1(n\cdot p)^4}. \qquad (14)$
Note that a naive power counting gives a logarithmic divergence for the integral, which, however, as shown below using dimensional regularization, turns out to be finite in four dimensions, in a similar fashion to what happens with the Riemann zeta function for negative values of its argument.
We go to $d$ dimensions and choose the Lorentz-breaking four-vector to be timelike, which yields
$\Sigma_2 = \frac{1}{2}\mu^{4-d}\lambda\int\frac{d^dp}{(2\pi)^d}\,\frac{1}{2g_1\,(p_0^2-p_1^2+i\epsilon)(p_0^2+P_2^2)}, \qquad (15)$
where $p_1$ and $P_2$ are given in (7). We perform the integration in the complex $p_0$-plane by closing the contour upward and enclosing the two poles $-p_1$ and $iP_2$ as depicted in Fig. 1, yielding
$\Sigma_2 = \pi i\,\mu^{4-d}\lambda\int\frac{d^{d-1}p}{(2\pi)^d}\,(iF_1 - F_2), \qquad (16)$
where
$F_1 = \frac{\sqrt{g_1}}{\sqrt{1+8g_1E^2}\,\sqrt{1+\sqrt{1+8g_1E^2}}}, \qquad F_2 = \frac{\sqrt{g_1}}{\sqrt{1+8g_1E^2}\,\sqrt{-1+\sqrt{1+8g_1E^2}}}. \qquad (17)$
Note that $F_2$ has the correct limit at $g_1 \to 0$, recovering the usual result $1/(2E)$. Now, it is convenient to change variables to $z = \sqrt{1+8g_1E^2}$, yielding
$p\,dp = \frac{z\,dz}{8g_1}, \qquad d^{d-1}p = |p|^{d-2}\,dp\,d\Omega_{d-1}, \qquad (18)$
which allows us to write the integral (16) as
$\Sigma_2 = -\frac{\pi\,\mu^{4-d}\lambda\,2\pi^{(d-1)/2}\,\sqrt{g_1}}{(2\pi)^d\,\Gamma\!\left(\frac{d-1}{2}\right)\,(8g_1)^{\frac{d-1}{2}}}\,(I_1+I_2), \qquad (19)$
with
$I_1 = \int_{z_0}^{\infty}\frac{(z^2-z_0^2)^{\frac{d-3}{2}}}{\sqrt{z+1}}\,dz, \qquad I_2 = i\int_{z_0}^{\infty}\frac{(z^2-z_0^2)^{\frac{d-3}{2}}}{\sqrt{z-1}}\,dz, \qquad (20)$
where $z_0 = \sqrt{1+8g_1M^2}$ and we have used the definition of the solid angle (115).
Considering both contributions through the relation $\Sigma_2 = \Sigma^{(1)} + \Sigma^{(2)}$, and, after some algebra, expanding around $d = 4$, we find at lowest order
$\Sigma^{(1)} = \frac{M^2\lambda}{12\pi^3}\left(-\frac{19}{3}+2\gamma_E+6\ln 2-\frac{3\pi^2\,{}_2\tilde{F}_1^{(0,0,1,0)}\!\left(\frac14,\frac34;2;1\right)}{8\sqrt{2}}+\left(1-\frac38\gamma_E-\frac18\ln 512\right)\ln\!\left(-\frac{g_1M^2}{8}\right)\right),$
$\Sigma^{(2)} = \frac{\lambda}{144\,g_1\pi^3}\left(-14+6\gamma_E+3\ln(32\,g_1M^2)\right)+\frac{M^2\lambda}{192\pi^3}\Big(8\left(-17+6\gamma_E+3\ln(32\,g_1M^2)\right) \qquad (21)$
$\qquad\qquad -2\left(3+\ln\!\left(\frac{g_1M^2}{8}\right)\right)\left(-14+6\gamma_E+3\ln(32\,g_1M^2)\right)\Big).$
Here, ${}_2\tilde{F}_1^{(0,0,1,0)}\!\left(\frac14,\frac34;2;1\right)$ is a derivative of the regularized hypergeometric function with respect to its third parameter. Note that there is a fine tuning in this case, that is, the expression is singular at $g_1 \to 0$. However, the correction to the two-point function is UV finite.
## IV Coupling of scalar and spinor fields
Let us consider the theory involving both the quartic interaction vertex $\frac{\lambda}{4!}\phi^4$ and the Yukawa coupling vertex $g\bar\psi\phi\psi$. We note that, in principle, second time derivatives in the free action of a spinor field are present also in specific Lorentz-invariant theories, for example the known ELKO model ELKO (). However, our theory essentially differs from that model. To classify the possible divergences, we should calculate the superficial degree of divergence of this theory. The naive result for it is
$\omega = 4 - 4V_1 - 2V_2 - E_\psi, \qquad (22)$
where $E_\psi$ is the number of spinor legs. However, this manner yields incorrect results because of the strong anisotropy between the time and space components of the momenta (for example, in this case one could naively suggest that the two-point function of the spinor field can yield only the renormalization of the mass of the spinor field). So, let us proceed in a manner similar to that used for Horava-Lifshitz-like theories (cf. Anselmi ()). Since $n$ is purely time-like, we can write $n\cdot p = p_0$, so we have from (4)
$\Delta(p) = \frac{i}{p_0^2 - \vec{p}^{\,2} - M^2 + 2g_1 p_0^4}, \qquad S(p) = \frac{i}{\slashed{p} - m - g_2\gamma^0 p_0^2}. \qquad (23)$
Following the methodology developed for the Horava-Lifshitz theories (see e.g. Anselmi ()), we suggest that the denominators of the propagators are homogeneous functions with respect to the higher orders in the corresponding momenta, and that the canonical dimension of the spatial momentum is 1. Taking into account only the leading degrees, we easily conclude that the canonical dimension of the time momentum $p_0$ is $1/2$ (we note that this case does not occur in usual Horava-Lifshitz-like theories, where the canonical dimension of the time momentum is always more than one, cf. Anselmi ()). Therefore, the spinor propagator has the canonical dimension (and the contribution to the superficial degree of divergence) equal to $-1$, and the scalar one equal to $-2$, just as in the usual case. Nevertheless, the dimension of the integral measure, that is, of $d^4p$, is different from the usual one, being equal to $7/2$ rather than $4$. Hence the superficial degree of divergence in our theory is
$\omega = \frac{7}{2}L - 2P_\phi - P_\psi, \qquad (24)$
where $L$ is the number of loops, and $P_\phi$ and $P_\psi$ are the numbers of scalar and spinor propagators, respectively. Then, let $V_1$ be the number of $\phi^4$ vertices, and $V_2$ the number of Yukawa-like vertices. One has the identities for the numbers of scalar and spinor fields in an arbitrary Feynman diagram:
$N_\phi = 4V_1 + V_2 = 2P_\phi + E_\phi, \qquad N_\psi = 2V_2 = 2P_\psi + E_\psi, \qquad (25)$
where $E_\phi$, $E_\psi$ are the numbers of external scalar and spinor legs, respectively. We use the topological identity $L = P - V + 1$, that is, $L = P_\phi + P_\psi - V_1 - V_2 + 1$. As a result, we eliminate the numbers of loops and propagators from $\omega$ and are left with
$\omega = \frac{7}{2} - \frac{1}{2}V_1 - \frac{1}{4}V_2 - \frac{3}{4}E_\phi - \frac{5}{4}E_\psi. \qquad (26)$
A straightforward verification shows that the superficially divergent diagrams (that is, those with $\omega \geq 0$) can be of the following types:
(i): $E_\phi = 0$, $E_\psi = 2$, at one loop. This is the one-loop renormalization of the mass and kinetic terms for the spinor.
(ii): $E_\phi = 0$, $E_\psi = 2$, at two loops. This is the two-loop renormalization of the mass and kinetic terms for the spinor.
(iii): $E_\phi = 2$, $E_\psi = 0$, at one loop. This is the one-loop renormalization of the mass and kinetic terms for the scalar.
(iv): $E_\phi = 2$, $E_\psi = 0$, at two loops. This is the two-loop renormalization of the mass and kinetic terms for the scalar.
(v): $E_\phi = 2$, $E_\psi = 0$, $V_1 = 1$, $V_2 = 0$. This is the one-loop renormalization of the mass term for the scalar. Actually, we already showed in the previous section that, due to the specific structure of the poles of the propagator, this contribution is finite.
Actually, in the cases (iii) and (iv) the divergence will be not linear but logarithmic, for reasons of symmetry of the integrals over momenta. The diagrams with odd numbers of external scalar legs will vanish due to an analogue of the Furry theorem. So, our theory is super-renormalizable. Moreover, we note that since the kinetic term for the scalar involves two derivatives acting on the external fields, its superficial degree of divergence should be decreased at least by 1 if these derivatives are time ones, and by 2 for space derivatives; actually, in the one-loop case in the purely scalar sector the kinetic term simply does not arise. Also, in the cases (i) and (ii) one will have the only divergent contribution to the mass of the spinor. So, taking into account the previous section as well, we conclude that at the one-loop order one could have only the renormalization of the masses of the spinor and the scalar arising from the Yukawa-like coupling.
We note that namely this degree of divergence correctly explains why the self-energy of the fermion diverges, as we will see further (indeed, the naive calculation yields a finite result for it). To study the renormalization, we can restrict ourselves by the lower order, that is, one loop.
So, we rest with only three potentially divergent graphs – with , that is, the purely scalar tadpole we studied above, and with and or we study below.
## V The Yukawa-like theory
In the next subsections we compute the radiative corrections to the scalar and fermion two-point functions in the Yukawa-like theory which arises by considering the self-interaction terms and in (1). The Lagrangian is
$$\mathcal{L}=\frac{1}{2}\partial_\mu\phi\,\partial^\mu\phi-\frac{1}{2}M^2\phi^2+\bar\psi(i\not{\partial}-m)\psi+g_2\bar\psi\,\not{n}\,(n\cdot\partial)^2\psi+g\bar\psi\phi\psi,\tag{27}$$
and additionally we impose the simplification of taking $g_1=0$ and the preferred four-vector to be purely timelike, $n_\mu=(1,0,0,0)$.
### v.1 Scalar self-energy Π(p)
As a first example of quantum corrections in our Yukawa-like model, we study the contribution with two external scalar legs depicted in Fig. 3.
It is represented by the integral
$$\Pi(p)=-\frac{g^2}{2}\,\phi(-p)\phi(p)\int\frac{d^4k}{(2\pi)^4}\,\frac{{\rm Tr}\big((Q_\mu\gamma^\mu+m)(R_\nu\gamma^\nu+m)\big)}{(Q^2-m^2)(R^2-m^2)},\tag{28}$$
where we define
$$Q_\mu=k_\mu-g_2n_\mu(n\cdot k)^2,\qquad R_\mu=k_\mu+p_\mu-g_2n_\mu\big(n\cdot(k+p)\big)^2.\tag{29}$$
Calculating the trace gives
$$\Pi(p)=-2g^2\,\phi(-p)\phi(p)\int\frac{d^4k}{(2\pi)^4}\,\frac{Q\cdot R+m^2}{(Q^2-m^2)(R^2-m^2)}.\tag{30}$$
Let us write the corresponding contribution to the effective action as , and study the typical low-energy behavior of this contribution by expanding it into a Taylor series:
$$\tilde\Pi(p)=\tilde\Pi(0)+p_\mu\left(\frac{\partial\tilde\Pi}{\partial p_\mu}\right)_{p=0}+\frac{1}{2}\,p_\mu p_\nu\left(\frac{\partial^2\tilde\Pi}{\partial p_\mu\partial p_\nu}\right)_{p=0}+\ldots\tag{31}$$
The zeroth-order contribution follows directly from (30)
$$\tilde\Pi(0)=\int\frac{d^4k}{(2\pi)^4}\,\frac{Q^2+m^2}{(Q^2-m^2)^2}.\tag{32}$$
It is convenient to rewrite $\tilde\Pi(0)$ as
$$\tilde\Pi(0)=K+2m^2P,\tag{33}$$
where
$$K=\int\frac{d^4k}{(2\pi)^4}\,\frac{1}{Q^2-m^2},\tag{34}$$
and
$$P=\int\frac{d^4k}{(2\pi)^4}\,\frac{1}{(Q^2-m^2)^2}.\tag{35}$$
The integrals (34) and (35) are computed in Appendix A.
For the next term, note that the first derivative $\left(\partial\tilde\Pi/\partial p_\mu\right)_{p=0}$ can be proportional only to $n_\mu$, since there are no other vectors available; the corresponding contribution to the effective action is then a surface term, so we can disregard it. Further, one needs to find the second derivative, which may naturally contain terms of higher order in . To find it, consider
$$\left(\frac{\partial^2\tilde\Pi(p)}{\partial p_\mu\partial p_\nu}\right)_{p=0}=\int\frac{d^4k}{(2\pi)^4}\,{\rm Tr}\left[\frac{1}{\not{Q}-m}\left[\frac{\partial}{\partial p_\mu}\frac{\partial}{\partial p_\nu}\left(\frac{1}{\not{R}-m}\right)\right]_{p=0}\right].\tag{36}$$
Integrating by parts and neglecting the surface terms, we obtain the symmetric expression
$$\left(\frac{\partial^2\tilde\Pi(p)}{\partial p_\mu\partial p_\nu}\right)_{p=0}=-\int\frac{d^4k}{(2\pi)^4}\,{\rm Tr}\left[\frac{\partial}{\partial k_\nu}\left(\frac{1}{\not{Q}-m}\right)\frac{\partial}{\partial k_\mu}\left(\frac{1}{\not{Q}-m}\right)\right],\tag{37}$$
where we have used the identity $\big[\partial f(k+p)/\partial p_\mu\big]_{p=0}=\partial f(k)/\partial k_\mu$.
We consider
$$\frac{\partial}{\partial k_\mu}\left(\frac{1}{\not{Q}-m}\right)=\left(\frac{\partial Q_\alpha}{\partial k_\mu}\right)\frac{1}{(\not{Q}-m)^2}\,\gamma^\alpha,\tag{38}$$
and after some algebra we arrive at
$$\left(\frac{\partial^2\tilde\Pi(p)}{\partial p_\mu\partial p_\nu}\right)_{p=0}=-\int\frac{d^4k}{(2\pi)^4}\left(\frac{\partial Q_\alpha}{\partial k_\mu}\right)\left(\frac{\partial Q_\sigma}{\partial k_\nu}\right)T^{\alpha\sigma},\tag{39}$$
with
$$T^{\alpha\sigma}=\frac{4}{(Q^2-m^2)^2}\,\eta^{\alpha\sigma}+\frac{32\,m^2}{(Q^2-m^2)^4}\,Q^\alpha Q^\sigma.\tag{40}$$
By using the relations
$$\left(\frac{\partial Q_\alpha}{\partial k_\mu}\right)\left(\frac{\partial Q^\alpha}{\partial k_\nu}\right)=\eta_{\mu\nu}-4g_2n_\mu n_\nu(n\cdot Q),\qquad \left(\frac{\partial Q_\alpha}{\partial k_\mu}\right)\left(\frac{\partial Q_\sigma}{\partial k_\nu}\right)Q^\alpha Q^\sigma=\frac{1}{4}\left(\frac{\partial Q^2}{\partial k_\mu}\right)\left(\frac{\partial Q^2}{\partial k_\nu}\right),\tag{41}$$
one obtains
$$\left(\frac{\partial^2\tilde\Pi(p)}{\partial p_\mu\partial p_\nu}\right)_{p=0}=-4\int\frac{d^4k}{(2\pi)^4}\left(\frac{\eta_{\mu\nu}-4g_2n_\mu n_\nu(n\cdot Q)}{(Q^2-m^2)^2}+\frac{2m^2}{(Q^2-m^2)^4}\,\frac{\partial Q^2}{\partial k_\mu}\frac{\partial Q^2}{\partial k_\nu}\right).\tag{42}$$
Considering the tensors available in our model, which are the flat metric $\eta_{\mu\nu}$ and the preferred four-vector $n_\mu$, we can write
$$\int\frac{d^4k}{(2\pi)^4}\,\frac{Q_\mu}{(Q^2-m^2)^2}=n_\mu S,\tag{43}$$
and
$$\int\frac{d^4k}{(2\pi)^4}\,\frac{1}{(Q^2-m^2)^4}\left(\frac{\partial Q^2}{\partial k_\mu}\right)\left(\frac{\partial Q^2}{\partial k_\nu}\right)=n_\mu n_\nu L+\eta_{\mu\nu}n^2M.\tag{44}$$
Now, consider the relation
$$\frac{\partial Q^2}{\partial k_\mu}=2\big(Q_\mu-2g_2n_\mu(n\cdot Q)(n\cdot k)\big),\tag{45}$$
and multiplying (44) by $n^\mu n^\nu$ we arrive at
$$\int\frac{d^4k}{(2\pi)^4}\,\frac{4(n\cdot Q)^2}{(Q^2-m^2)^4}\Big(1-4g_2n^2(n\cdot k)+4g_2^2(n^2)^2(n\cdot k)^2\Big)=(n^2)^2(L+M),\tag{46}$$
and by contracting with the metric
$$\int\frac{d^4k}{(2\pi)^4}\,\frac{4}{(Q^2-m^2)^4}\Big(Q^2-4g_2(n\cdot Q)^2(n\cdot k)+4g_2^2n^2(n\cdot Q)^2(n\cdot k)^2\Big)=n^2(L+4M).\tag{47}$$
Solving this algebraic system, we have
$$L=\frac{16}{3n^2}\int\frac{d^4k}{(2\pi)^4}\,\frac{1}{(Q^2-m^2)^4}\left(\frac{(n\cdot Q)^2}{n^2}-\frac{Q^2}{4}-3g_2(n\cdot Q)^2(n\cdot k)+3g_2^2(n\cdot Q)^2(n\cdot k)^2n^2\right),$$
$$M=\frac{4}{3n^2}\int\frac{d^4k}{(2\pi)^4}\,\frac{1}{(Q^2-m^2)^4}\left(Q^2-\frac{(n\cdot Q)^2}{n^2}\right).\tag{48}$$
A similar analysis gives
$$S=\frac{1}{n^2}\int\frac{d^4k}{(2\pi)^4}\,\frac{(n\cdot Q)}{(Q^2-m^2)^2}.\tag{49}$$
We find the second-order contribution
$$\left(\frac{\partial^2\tilde\Pi(p)}{\partial p_\mu\partial p_\nu}\right)_{p=0}p^\mu p^\nu=-\left(4P+8m^2n^2M\right)p^2+\left(16g_2n^2S-8m^2L\right)(n\cdot p)^2.\tag{50}$$
Reorganizing this expression, we can write the correction to the scalar propagator up to second order in $p$ as
$$\tilde\Pi(p)=m^2q_0+p^2q_1+(n\cdot p)^2q_n,\tag{51}$$
where
$$q_0=K+2m^2P,\qquad q_1=-4\left(P+2m^2n^2M\right),\qquad q_n=8\left(2g_2n^2S-m^2L\right).\tag{52}$$
Finally one has
$$q_0=-\frac{i}{48\pi^2g_2^2m^2}+\frac{i}{48\pi^2}\Big(6\gamma_E-0.46+12i\pi-18\ln(g_2m^2)\Big),\tag{53}$$
$$q_1=-\frac{i}{2\pi^2}\left(i\pi-\ln(g_2m^2)-\frac{1}{3}\right),\tag{54}$$
$$q_n=\frac{i}{\pi^2}.\tag{55}$$
We provide details of these computations in Appendix A. The two-point function is finite and involves a fine-tuning term proportional to . The Lee-Wick modes improve the convergence of the theory so as to make the two-point function of the scalar field essentially UV finite, involving the aether term (ouraether).
### v.2 Fermion self-energy Σ(p)
Now we focus on the contribution of the fermion self-energy graph depicted in Fig. 4, and recall that we are considering .
The fermion self-energy graph is represented by the integral
$$\Sigma(p)=g^2\int\frac{d^4k}{(2\pi)^4}\,\frac{\not{Q}+m}{\big((k-p)^2-m^2\big)(Q^2-m^2)}.\tag{56}$$
To find it, let us consider a Taylor expansion of the first denominator up to second order in $p$ and rewrite the diagram as
$$\Sigma(p)\approx g^2\int\frac{d^4k}{(2\pi)^4}\left(\frac{1}{k^2-m^2}+\frac{2(k\cdot p)-p^2}{(k^2-m^2)^2}+\frac{4(k\cdot p)^2}{(k^2-m^2)^3}\right)\left(\frac{\not{Q}+m}{Q^2-m^2}\right)+{\cal O}(p^3,n).\tag{57}$$
With the notation , we introduce the zeroth-order contribution
$$I^{(0)}=\int\frac{d^4k}{(2\pi)^4}\left(\frac{1}{k^2-m^2}\right)\left(\frac{\not{Q}+m}{Q^2-m^2}\right),\tag{58}$$
the linear-order contribution
$$I^{(1)}=\int\frac{d^4k}{(2\pi)^4}\left(\frac{2(k\cdot p)}{(k^2-m^2)^2}\right)\left(\frac{\not{Q}+m}{Q^2-m^2}\right),\tag{59}$$
and the second-order contribution
$$I^{(2)}=\int\frac{d^4k}{(2\pi)^4}\left(\frac{4(k\cdot p)^2}{(k^2-m^2)^3}-\frac{p^2}{(k^2-m^2)^2}\right)\left(\frac{\not{Q}+m}{Q^2-m^2}\right).\tag{60}$$
### v.3 The gamma-matrix structure of I(0), I(1), I(2)
#### v.3.1 Zeroth-order I(0)
$$I^{(0)}=\gamma^\mu I^{(0)}_\mu+mf_0,\tag{61}$$
where we have defined
$$I^{(0)}_\mu=\int\frac{d^4k}{(2\pi)^4}\,\frac{Q_\mu}{(k^2-m^2)(Q^2-m^2)},\tag{62}$$
and
$$f_0=\int\frac{d^4k}{(2\pi)^4}\,\frac{1}{(k^2-m^2)(Q^2-m^2)}.\tag{63}$$
From tensor analysis considerations one should have
$$\int\frac{d^4k}{(2\pi)^4}\,\frac{Q_\mu}{(k^2-m^2)(Q^2-m^2)}=n_\mu f_{n1}.\tag{64}$$
Replacing the expression (64) in Eq. (61) produces the zeroth-order contribution
$$I^{(0)}=\not{n}\,f_{n1}+mf_0,\tag{65}$$
with
$$f_{n1}=\frac{1}{n^2}\int\frac{d^4k}{(2\pi)^4}\,\frac{(n\cdot Q)}{(k^2-m^2)(Q^2-m^2)}.\tag{66}$$
We carry out the calculations of $f_{n1}$ and $f_0$ following the lines given in Appendix B.2. The first coefficient is naturally finite, while the second one is divergent and contains a large Lorentz-breaking correction term of the order of .
#### v.3.2 Linear-order I(1)
The linear-order integral (59) can be rewritten by introducing
$$I^{(1)}=2p^\mu\gamma^\nu I^{(1)}_{\mu\nu}+2mp^\mu I^{(1)}_\mu,\tag{67}$$
where
$$I^{(1)}_\mu=\int\frac{d^4k}{(2\pi)^4}\,\frac{k_\mu}{(k^2-m^2)^2(Q^2-m^2)},\qquad I^{(1)}_{\mu\nu}=\int\frac{d^4k}{(2\pi)^4}\,\frac{k_\mu Q_\nu}{(k^2-m^2)^2(Q^2-m^2)}.\tag{68}$$
By considering
$$\int\frac{d^4k}{(2\pi)^4}\,\frac{k_\mu}{(k^2-m^2)^2(Q^2-m^2)}$$
|
|
# proof of Weierstrass’ criterion of uniform convergence
The assumption that $|f_{n}(x)|\leq M_{n}$ for every $x$, together with the convergence of $\sum_{n}M_{n}$, guarantees by comparison that each numerical series $\sum_{n}f_{n}(x)$ converges absolutely. We call the limit $f(x)$.
To see that the convergence is uniform: let $\epsilon>0$. Then there exists $K$ such that $\sum_{n>K}M_{n}<\epsilon$. Now, if $k>K$,
$|f(x)-\sum_{n=1}^{k}f_{n}(x)|=|\sum_{n>k}f_{n}(x)|\leq\sum_{n>k}|f_{n}(x)|\leq\sum_{n>k}M_{n}\leq\sum_{n>K}M_{n}<\epsilon.$
The choice of $K$ does not depend on $x$, so the convergence is uniform.
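For instance, with $f_{n}(x)=\sin(nx)/n^{2}$ one may take $M_{n}=1/n^{2}$; since $\sum_{n}1/n^{2}$ converges, the criterion shows that $\sum_{n}\sin(nx)/n^{2}$ converges uniformly on all of $\mathbb{R}$.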
|
|
# Schlögl model - Stationary States far from Equilibrium
The Schlögl model can be represented as follows:
$$A + 2X \underset{k_2}{\stackrel{k_1}{\rightleftharpoons}} 3X \, , \\ X \underset{k_4}{\stackrel{k_3}{\rightleftharpoons}} B \, .$$
The chemical reactions can be written in ODE form:
$$\dot{C}_{X} = k_{1}C^{0}_{A}(C_{X})^{2} - k_{2}(C_{X})^{3} - k_{3}C_{X} + k_{4}C_{B}^{0} \, ,$$ where we fixed the concentrations of $$A$$ and $$B$$ to the initial ones $$\left[(C_{A} \equiv C_{A}^{0}) ; (C_{B} \equiv C_{B}^{0})\right]$$.
The concentration of stationary states does not evolve, therefore $$\dot{C}_{X} = 0$$:
$$k_{1}C^{0}_{A}(C_{X})^{2} - k_{2}(C_{X})^{3} - k_{3}C_{X} + k_{4}C_{B}^{0} = 0 \\ \iff (C_{X})^{3} - \frac{k_{1}}{k_{2}}C^{0}_{A}(C_{X})^{2} + \frac{k_{3}}{k_{2}}C_{X} - \frac{k_{4}}{k_{2}}C_{B}^{0} = 0 \\ \iff k_{2} = \frac{a}{(C_{X})} - \frac{k_{3}}{(C_{X})^{2}} + \frac{b}{(C_{X})^{3}} \,$$ where $$a = k_{1}C_{A}^{0}$$ and $$b = k_{4}C_{B}^{0}$$.
In order to find the stationary states, we proceed in the following manner:
$$\frac{dk_{2}}{dC_{X}} = 0 \iff C_{x} = \frac{1}{a}\left(k_{3} \pm \sqrt{k_{3}^{2} - 4ab}\right)$$
If $$k_{3}^{2} > 4ab \, \, (1)$$, there are three possible values, and if $$k_{3}^{2} < 4ab \, \,(2)$$ there is one possible value.
Reichl's book, "A Modern Course in Statistical Mechanics", says that the Schlögl model, away from equilibrium, has multiple stationary states, which seem to represent situation (1) and in equilibrium seem to be (2). However, why does (1) represent the model far from equilibrium and (2) the model in equilibrium?
• If $k_3^2 < 4ab$, the expression under the root is negative. The square root of a negative number cannot be calculated, as it is not a real number. Aug 30, 2022 at 20:56
• There are two stable equilibria. There are details and a clear discussion in MIra et al. J. Chem. Educ. 2003, v80, p1488, also a v. thorough explanation in D. Gillespie, 'Markov Processes' publ Academic Press 1992. Aug 31, 2022 at 14:10
• to continue. You have to solve the cubic equation to find the deterministic equilibrium values, or numerically integrate for the time profile, but then $k_1$ is divided by $2$ and $k_2$ by $6$ to relate the stochastic calculation to the usual rate equation. This is because the propensities can be approximated when the number of molecules becomes v. large. The solution to cubics is v. messy but can be done numerically using the python/numpy root solver. Sep 1, 2022 at 8:54
Ilie et al give a numerical example that has two steady states. Here are the parameters (ignore the propensities, I think they are for the stochastic models):
with $$C_A$$ set to $$\pu{1E5}$$ and $$C_B$$ set to $$\pu{2E5}$$. Depending on the value of the initial concentration of intermediate, you get one of two possible steady states (figure from the same paper, I did not numerically solve myself):
In our context, the y-axis shows the concentration of the intermediate X. If you want to know whether the system is capable of reaching equilibrium (with fixed concentrations of A and B), use the criterion
$$\frac{C_B}{C_A} \stackrel{?}{=} K = \frac{k_1 k_3}{k_2 k_4}\tag{3}$$
For the given set of parameters, this is not the case, so it is not at equilibrium (also, you would not expect two possible steady state concentrations of X if it were at equilibrium).
[OP] ... there is 3 possible values...
That seems too much for a quadratic equation. According to Vellela and Qian (2008, doi:10.1098/rsif.2008.0476),
Because the ODE form of Schlögl's model is a cubic, there can be one, two or three steady states for a given set of parameters.
The OP does start out with an expression containing concentrations cubed, so unless some terms cancel out during differentiation, I would expect a different sort of solution for the steady state concentration.
[OP] However, why does (1) represent the model far from equilibrium and (2) the model in equilibrium?
You can check by testing one of the following:
$$\frac{C_X}{C_A} \stackrel{?}{=} \frac{k_1}{k_2}\tag{4}$$
or
$$\frac{C_B}{C_X} \stackrel{?}{=} \frac{k_3}{k_4}\tag{5}$$
as long as (3) is also true. I'm not attempting this because I have trouble understanding the derivation and the result given by the OP. Specifically, I am puzzled by the assumption that
[OP] $$\frac{dk_{2}}{dC_{X}} = 0$$
Why would the steady state concentration be independent of one of the rate constants? That is counter-intuitive and requires some explanation.
[my comment] There are two stable equilibria. There are details and a clear discussion in MIra et al. J. Chem. Educ. 2003, v80, p1488, also a v. thorough explanation in D. Gillespie, 'Markov Processes' publ Academic Press 1992.
You have to solve the cubic equation to find the deterministic values for the equilibrium values, or numerically integrate to get the time profile, but then $$k_1$$ is divided by 2 and $$k_2$$ by 6 to relate the stochastic calculation to the usual rate equation. This is because the propensities can be approximated when the number of molecules becomes v. large.
Using the values given in the answer by Karsten the polynomial to solve is
$$\displaystyle ax^3+bx^2+cx+d=0,\quad a=k_2/6, b=-k_1A0/2, c=k_4,d=-k_3B0$$
which has roots at $$566.9, 247.6, 85.5$$. The middle value is a 'barrier' that the deterministic equations (i.e. normal rate eqns) cannot cross. (When the parameters as such that they produce only one real root then only one equilibrium will exist).
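For reference, this root-finding step is easy to reproduce with numpy. The rate constants below are the classic Schlögl parameters from Gillespie's book (an assumption on my part, since the parameter table itself is not reproduced here), and they do yield the three roots quoted above:

```python
import numpy as np

# Assumed values (Gillespie's classic Schlogl parameters);
# A0, B0 are the fixed concentrations of A and B from the answer above.
k1, k2, k3, k4 = 3e-7, 1e-4, 1e-3, 3.5
A0, B0 = 1e5, 2e5

# a x^3 + b x^2 + c x + d = 0, with the coefficients given above
coeffs = [k2 / 6, -k1 * A0 / 2, k4, -k3 * B0]
roots = np.sort(np.roots(coeffs).real)
print(roots)  # -> approximately [ 85.5  247.6  566.9]
```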
You can see the three roots in the plot below as horizontal lines.
The blue horizontal lines are the roots and the grey lines the rate equation solutions starting at X0 values $$0, 50, 150, 240, 260, 450, 700$$ which you can see in the plot. You can see how the population is 'attracted' to the upper or lower root but never cross the barrier root at $$247$$.
The stochastic calculation is very different as there is a chance to rise or fall to an equilibrium value (a root of the cubic) and the barrier can be crossed or recrossed as at each step as there is a random probability of going one way or the other.
The second plot shows this, and I have chosen to illustrate that it behaves much as the deterministic plot does, but not all runs do, as can be seen in the third plot.
Having multiple equilibria is of course a result of having a cubic equation. Thus this is unrealistic chemically as a termolecular reaction has such a small chance of occurring that most termolecular reactions are found experimentally to involve two steps.
• Thank you very much for your help! Can I make another question about this? Or should I make a new post? Sep 3, 2022 at 18:47
• probably a new post unless it has a short answer that can be put as a comment Sep 4, 2022 at 17:13
• My question would be about the stability analysis of the Schlögl model (or reaction models in general); is it better to make a new post? Sep 5, 2022 at 14:02
|
|
Dimensions of Specific Heat Capacity
Dimensional Formula of Specific Heat Capacity
The dimensional formula of Specific Heat Capacity is given by,
M0 L2 T-2 K-1
Where,
• M = Mass
• K = Temperature
• L = Length
• T = Time
Derivation
Specific Heat Capacity (C) = Heat × [Mass × Temperature]-1 . . . . (1)
The dimensional formula of mass = [M] and of temperature = [K] . . . . (2)
Since, the dimensions of Heat Energy = Dimensions of Work Done
And, Work = Force × displacement
= M × a × displacement = [M] × [LT-2] × [L]
∴ the dimensional formula of Heat energy = [M L2 T-2] . . . . (3)
On substituting equations (2) and (3) in equation (1) we get,
Specific Heat Capacity = Heat × [Mass × Temperature]-1
Or, C = [M L2 T-2] × [M]-1 × [K]-1 = [M0 L2 T-2 K-1].
Therefore, specific heat capacity is dimensionally represented as [M0 L2 T-2 K-1].
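The same bookkeeping can be verified mechanically. Below is a small illustrative sketch (not part of the original derivation) that adds the exponent vectors for (M, L, T, K) in Python:

```python
from collections import Counter

# Each quantity is a map from base dimension (M, L, T, K) to its exponent.
def combine(*terms):
    total = Counter()
    for t in terms:
        total.update(t)  # adding exponents corresponds to multiplying quantities
    return dict(total)

force = {"M": 1, "L": 1, "T": -2}        # F = m * a
work = combine(force, {"L": 1})          # W = F * d  ->  [M L^2 T^-2]
heat = work                              # heat energy has the dimensions of work
c = combine(heat, {"M": -1}, {"K": -1})  # C = Q / (m * dT)
print(c)  # {'M': 0, 'L': 2, 'T': -2, 'K': -1}
```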
|
|
Data analysis and interpretation for clinical genomics
Overview
question Questions
• What are the specific challenges for the interpretation of sequencing data in the clinical setting?
• How can you annotate variants in a clinically-oriented perspective?
objectives Objectives
• Perform in-depth quality control of sequencing data at multiple levels (fastq, bam, vcf)
• Classify and annotate variants with information extracted from public databases for clinical interpretation
• Filter variants based on inheritance patterns
time Time estimation: 4 hours
last_modification Last modification: Oct 21, 2020
Introduction
In years 2018-2019, on behalf of the Italian Society of Human Genetics (SIGU) an itinerant Galaxy-based “hands-on-computer” training activity entitled “Data analysis and interpretation for clinical genomics” was held four times on invitation from different Italian institutions (Università Cattolica del Sacro Cuore in Rome, University of Genova, SIGU 2018 annual scientific meeting in Catania, University of Bari) and was offered to about 30 participants each time among clinical doctors, biologists, laboratory technicians and bioinformaticians. Topics covered by the course were NGS data quality check, detection of variants, copy number alterations and runs of homozygosity, annotation and filtering and clinical interpretation of sequencing results.
Realizing the constant need for training on NGS analysis and interpretation of sequencing data in the clinical setting, we designed an on-line Galaxy-based training resource articulated in presentations and practical assignments by which students will learn how to approach NGS data quality at the level of fastq, bam and VCF files and clinically-oriented examination of variants emerging from sequencing experiments.
This training course is not to be intended as a tutorial on NGS pipelines and variant calling. This on-line training activity is indeed focused on data analysis for clinical interpretation. If you are looking for training on variant calling, visit this Galaxy tutorial on Exome sequencing data analysis for diagnosing a genetic disease.
comment SIGU
The Italian Society of Human Genetics (SIGU) was established on November 14, 1997, when the pre-existing Italian Association of Medical Genetics and the Italian Association of Medical Cytogenetics joined. SIGU is one of the 27 member societies of FEGS (Federation of European Genetic Societies). Animated by a predominant scientific spirit, SIGU wants to be reference for all health-care issues involving human genetics in all its applications. Its specific missions are to develop quality criteria for medical genetic laboratories, to promote writing of guidelines in the field of human genetics and public awareness of the role and limitations of genetic diagnostic techniques. SIGU coordinates activities of several working groups: Clinical Genetics, Cytogenetics, Prenatal Diagnosis, Neurogenetics, Fingerprinting, Oncological Genetics, Immunogenetics, Genetic Counseling, Quality Control, Medical Genetics Services, Bioethics. More than 1000 medical geneticists and biologists are active members of the society.
Agenda
In this tutorial, we will cover:
1. Next Generation Sequencing
2. Requirements
3. Datasets
4. Quality control
5. Variant annotation
6. Variant prioritization
7. Solutions
Next Generation Sequencing
Next (or Second) Generation Sequencing (NGS/SGS) is an umbrella-term covering a number of approaches to DNA sequencing that have been developed after the first, widespread and for long time most commonly used Sanger sequencing.
NGS is also known as Massive Parallel Sequencing (MPS), a term that makes explicit the paradigm shared by all these technologies, that is to sequence in parallel a massive library of spatially separated and clonally amplified DNA templates.
For a comprehensive review of the different NGS technologies see Goodwin et al., 2016, which also includes an introduction to the third generation methods allowing sequencing of long single-molecule reads.
NGS in the clinic
In the span of less than a decade, NGS approaches have pervaded clinical laboratories revolutionizing genomic diagnostics and increasing yield and timeliness of genetic tests.
In the context of disorders with a recognized strong genetic contribution such as neurogenetic diseases, NGS has been firmly established as the strategy of choice to rapidly and efficiently diagnose diseases with a Mendelian basis. A general diagnostic workflow for these disorders currently embraces different NGS-based diagnostic options as illustrated in Figure 1.
Figure 1. General workflow for genetic diagnosis of neurological diseases. (*If considering high-yield single-gene testing of more than 1–3 genes by another sequencing method, note that next-generation sequencing is often most cost-effective. †Genetic counselling is required before and after all genetic testing; other considerations include the potential for secondary findings in genomic testing, testing parents if inheritance is sporadic or recessive, and specialty referral.) From Rexach et al., 2019
Currently, the most common NGS strategies in clinical laboratories are the so-called targeted sequencing methods that, as opposed to genome sequencing covering the whole genomic sequence, focus on a pre-defined set of regions of interest (the targets). The targets can be selected by hybrid capture or amplicon sequencing, and the target-enriched libraries are then sequenced. The most popular target designs are:
• gene panels where the coding exons of only a clinically-relevant group of genes are targeted
• exome sequencing where virtually all the protein-coding exons in a genome are simultaneously sequenced
Basics of NGS bioinformatic analysis
Apart from the different width of the target space in exome and gene panels, these two approaches usually share the same experimental procedure for NGS library preparation. After clonal amplification, the fragmented and adapter-ligated DNA templates are sequenced from both ends (paired-end sequencing) of the insert to produce short reads in opposite (forward and reverse) orientation.
Bioinformatic analysis of NGS data usually follows a general three-step workflow to variant detection. Each of these three steps is marked by its “milestone” file type containing sequence data in different formats and metadata describing sequence-related information collected during the analysis step that leads to generation of that file.
NGS workflow step File content File format File Size (individual exome)
Sample to reads Unaligned reads and qualities fastQ gigabytes
Reads to alignments Aligned reads and metadata BAM gigabytes
Alignments to variants Genotyped variants and metadata VCF megabytes
Here are the different formats explained:
• fastQ (sequence with quality): the de facto standard for storing the output of high-throughput sequencing machines
• Usually not inspected during data analysis
• BAM (binary sequence alignment/map): the most widely used TAB-delimited file format to store alignments onto a reference sequence
• VCF (variant call format): the standard TAB-delimited format for genotype information associated with each reported genomic position where a variant call has been recorded
Another useful file format is BED, to list genomic regions of interest such as the exome or panel targets.
The steps of the reads-to-variants workflow can be connected through a bioinformatic pipeline (Leipzig et al., 2017), consisting of read alignment, post-alignment BAM processing and variant calling.
Alignment
As generated by the sequencing machines, paired-end reads are written to two fastQ files in which forward and reverse reads are stored separately together with their qualities. FastQ files are taken as input files by tools (the aligners) that align the reads onto a reference genome. One of the most used aligners is BWA among the many that have been developed (Figure 2).
Figure 2. Aligners timeline 2001-2012 (from Fonseca et al., 2012)
During the bioinformatic process, paired-end reads from the two separate fastQ files are re-connected in the alignment, where it is expected that they will:
• map to their correct location in the genome
• be as distant as the insert size of the fragment they come from
• be in opposite orientations, a combination which we refer to as proper pairing. All these data about paired-end read mapping are stored in the BAM file and can be used for various purposes, from alignment quality assessment to structural variant detection.
In Figure 3, the Integrative Genomic Viewer (IGV) screenshot of an exome alignment data over two adjacent ASXL1 exons is shown. Pink and violet bars are forward and reverse reads, respectively. The thin grey link between them indicates that they are paired-end reads. The stack of reads is concentrated where exons are as expected in an exome, and the number of read bases covering a given genomic location e (depicted as a hill-shaped profile at the top of the figure) defines the depth of coverage (DoC) over that location:
DoC(e) = (number of read bases over e) / (genomic length of e)
Figure 3. Exome data visualization by IGV
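As a toy illustration of the DoC formula above (the read coordinates are hypothetical; in practice tools such as bedtools compute this from the BAM file):

```python
# Reads are (start, end) half-open intervals on the genome; e is the
# region of interest. DoC(e) = read bases over e / genomic length of e.
def depth_of_coverage(reads, region):
    r_start, r_end = region
    covered = sum(max(0, min(end, r_end) - max(start, r_start))
                  for start, end in reads)
    return covered / (r_end - r_start)

reads = [(100, 200), (150, 250), (180, 280)]  # hypothetical alignments
print(depth_of_coverage(reads, (150, 250)))   # -> 2.2
```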
Post-alignment BAM processing
Regarding post-alignment pipelines, the most famous for germline SNP and InDel calling is probably that developed as part of the GATK toolkit (Figure 4).
Figure 4. After-alignment pipeline for germline SNP and InDel variant calling according to GATK Best Practices
According to GATK best practices, in order to be ready for variant calling the BAM file should undergo the following processing:
• marking duplicate reads to flag (or discard) reads that are mere optical or PCR-mediated duplications of the actual reads
• recalibrating base quality scores to correct known biases of the native base-related qualities
While GATK BAM processing is undoubtedly important to improve data quality, it should be noted that it is not required to obtain variant calls, and that non GATK-based pipelines may not use it or may use different quality reparametrization schemes. Duplicate flagging or removal is not recommended in amplicon sequencing experiments.
Variant calling
The process of variant detection and genotyping is performed by variant callers. These tools use probabilistic approaches to collect evidence that non-reference read bases accumulating over a given locus support the presence of a variant, and they differ in algorithms, filtering strategies and recommendations (Sandmann et al., 2017). To be confident that a variant is a true event, its supporting evidence should be significantly stronger than chance; e.g. the C>T on the left of the screenshot in Figure 5 is supported by all its position-overlapping reads, arguing for a variant. In contrast, the C>A change on the right of the screenshot is seen only once over many reads, challenging its interpretation as a real variant.
In fact, DNA variants that occur in germ cells (i.e., germline/constitutional variants that can be passed on to offspring) are diploid/biallelic, so the expected alternative allele frequency is 50% for a heterozygous change. On the other hand, if only a smaller subset of aligned reads indicates variation, that could result from technology bias or be a mosaicism, i.e. an individual harbouring two or more populations of genetically distinct cells as a result of postzygotic mutation. Postzygotic de novo mutations may result in somatic mosaicism (potentially causing a less severe and/or variable phenotype compared with the equivalent constitutive mutation) and/or germline mosaicism (hence enabling transmission of a pathogenic variant from an unaffected parent to their affected offspring) (Biesecker et al., 2013). To identify mosaicism, a probabilistic approach should consider deviation of the proband variant allele fraction (VAF, defined as the number of alternative reads divided by the total read depth) from a binomial distribution centred around 0.5.
Figure 5. Variant visualization by IGV
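To sketch that last idea (the read counts are hypothetical; scipy's binomtest is used for the binomial comparison, and this is a minimal sketch, not a validated mosaicism caller):

```python
from scipy.stats import binomtest

# Hypothetical site: 9 alternative reads out of 80 total.
# A germline heterozygous variant should have VAF ~ 0.5, so a strong
# deviation from binomial(depth, 0.5) hints at mosaicism or artifact.
alt_reads, depth = 9, 80
result = binomtest(alt_reads, depth, p=0.5)
print(f"VAF = {alt_reads / depth:.3f}")
print(f"p-value vs binomial(0.5): {result.pvalue:.2e}")
```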
The GATK variant calling pipeline first produces a genomic VCF (gVCF), whose main difference with VCF is that it records all sites in a target whether there is a variant or not, while VCF contains only information for variant sites, preparing multiple samples for joint genotyping and creation of a multi-sample VCF whose variants can undergo quality filtering in order to obtain the final set of quality-curated variants ready to be annotated.
In downstream analyses, annotations can be added to VCF files themselves or information in VCF files can be either annotated in TAB- or comma- deimited files to be visually inspected for clinical variant searching or used as input to prioritization programs.
Requirements
This tutorial is based on the Galaxy platform, therefore a basic knowledge of Galaxy is required to get most out of the course. In particular, we’ll use the European Galaxy server running at https://usegalaxy.eu.
Registration is free, and you get access to 250GB of disk space for your analysis.
1. Open your browser. We recommend Chrome or Firefox (please don’t use Internet Explorer or Safari).
2. Go to https://usegalaxy.eu
• If you have previously registered on this server just log in:
• On the top menu select: User -> Login
• Click Submit
• If you haven’t registered on this server, you’ll need to do now.
• On the top menu select: User -> Register
• Enter your email, choose a password, repeat it and add a one word name (lower case)
• Click Submit
To familiarize with the Galaxy interface (e.g. working with histories, importing dataset), we suggest to follow the Galaxy 101 tutorial.
Datasets
Input datasets used in this course are available:
Use cases
We selected some case studies for this tutorial. We suggest you to start with a simple case (e.g. Fam A) for the first run of the tutorial, and repeat it with the more complex ones. At the end of the page you will be able to compare your candidate variants with the list of true pathogenic variants.
Family Proband Father Mother Description HPO Reference
Fam A FamilyA_P FamilyX_F FamilyX_M Five-year-old female patient; Short stature; Severe intellectual disability; Seizures; Polymicrogyria; Cerebellar hemisphere hypoplasia; Postnatal microcephaly; Microphthalmia; Optic nerve coloboma; Malformation of the heart and great vessels; Intestinal malrotation; Hydronephrosis; Cryptorchidism HP:0004322, HP:0010864, HP:0001250, HP:0002126, HP:0100307, HP:0005484, HP:0000568, HP:0000588, HP:0001627, HP:0030962, HP:0002566, HP:0000126, HP:0000028 hg38
Fam B FamilyB_P FamilyB_F FamilyB_M Four-year-old male patient; Intellectual disability; Malformation of the heart and great vessels; Abnormality of blood and blood-forming tissues; Short Stature; Velopharyngeal insufficiency; Coarse facial features with high, narrow forehead, shallow orbits, depressed nasal bridge, anteverted nares, long philtrum, flat face, and macroglossia HP:0001249, HP:0001627, HP:0030962, HP:0001871, HP:0004322, HP:0000220, HP:0000280, HP:0000341, HP:0000348, HP:0000586, HP:0005280, HP:0000463, HP:0000343, HP:0012368, HP:0000158 hg38
Fam C FamilyC_P FamilyC_F FamilyC_M Eighteen-year-old female patient; Non-consanguineous Caucasian parents; Unremarkable family history; Normal intellectual development; Born at term by normal delivery; Oligohydramnios; Decreased fetal movements; Distal arthrogryposis; Cutaneous finger syndactyly; Scoliosis; Popliteal pterygium; Recurrent pneumonia; Restrictive ventilatory defect; Skeletal muscle atrophy HP:0001562, HP:0001558, HP:0005684, HP:0010554, HP:0002650, HP:0009756, HP:0006532, HP:0002091, HP:0003202 hg38
Fam D FamilyD_P FamilyD_F FamilyX_M Ten-year-old male patient; Non-consanguineous Caucasian parents; Unremarkable family history; Severe intellectual disability; Absent speech; Seizure; Ataxia; Stereotypy; Sudden episodic apnea; Postnatal microcephaly; Hypoplasia of the corpus callosum; Strabismus; Myopia; Constipation; Single transverse palmar crease; Narrow forehead; Wide nasal bridge; Short philtrum; Full cheeks; Wide mouth; Thickened helices HP:0010864, HP:0001344, HP:0001250, HP:0001251, HP:0000733, HP:0002882, HP:0005484, HP:0002079, HP:0000486, HP:0000545, HP:0002019, HP:0000954, HP:0000341, HP:0000431, HP:0000322, HP:0000293, HP:0000154, HP:0000391 hg38
Fam E FamilyE_P FamilyX_F FamilyE_M Thirty-year-old woman. Three consecutive pregnancy terminations due to fetal malformations, Woman phenotype included: High forehead, Hypertelorism, Mandibular prognathia. Fetal malformations observed: Cystic hygroma, Cerebellar agenesis, Hypoplastic nasal bone, Cleft lip , Bilateral hydronephrosis, Renal hypertrophy, Hypoplasia of external genitalia, Hypertrophic cardiomyopathy, Ventricular septal defect, Omphalocele HP:0000348, HP:0000316, HP:0000303, HP:0000476, HP:0012642, HP:0011430, HP:0410030, HP:0000126, HP:0000811, HP:0001639, HP:0001629, HP:0001539 hg38
Get data
hands_on Hands-on: Data upload
1. Create a new history for this tutorial and give it a meaningful name (e.g. Clinical genomics)
tip Tip: Creating a new history
Click the new-history icon at the top of the history panel
If the new-history is missing:
1. Click on the galaxy-gear icon (History options) on the top of the history panel
2. Select the option Create New from the menu
tip Tip: Renaming a history
1. Click on Unnamed history (or the current name of the history) (Click to rename history) at the top of your history panel
2. Type the new name
3. Press Enter
2. Files are available on the Galaxy server through a Shared Data Libraries in Galaxy courses/Sigu. This is the preferred solution as you will save time and disk space.
tip Tip: Importing data from a data library
As an alternative to uploading the data from a URL or your computer, the files may also have been made available from a shared data library:
• Go into Shared data (top panel) then Data libraries
• Find the correct folder (ask your instructor)
• Select the desired files
• Click on the To History button near the top and select as Datasets from the dropdown menu
• In the pop-up window, select the history you want to import the files to (or create a new one)
• Click on Import
The same files are available at Zenodo (1, 2, 3):
https://zenodo.org/record/3531578/files/HighQuality_Reads.fastq.gz
Family A:
https://zenodo.org/record/3531578/files/FamilyA_P.bam
Family B:
https://zenodo.org/record/nnnnnnn/files/FamilyB_P.bam
https://zenodo.org/record/3531578/files/FamilyB_F.bam
https://zenodo.org/record/3531578/files/FamilyB_M.bam
Family C:
https://zenodo.org/record/nnnnnnn/files/FamilyC_P.bam
https://zenodo.org/record/4264088/files/FamilyC_F.bam
https://zenodo.org/record/4264088/files/FamilyC_M.bam
Family D:
https://zenodo.org/record/4197066/files/FamilyD_P.bam
https://zenodo.org/record/4197066/files/FamilyD_F.bam
Family E:
https://zenodo.org/record/nnnnnnn/files/FamilyE_P.bam
https://zenodo.org/record/3531578/files/FamilyE_M.bam
Files shared across families:
https://zenodo.org/record/4197066/files/FamilyX_M.bam
https://zenodo.org/record/4264088/files/FamilyX_F.bam
• Copy the link location
• Open the Galaxy Upload Manager (galaxy-upload on the top-right of the tool panel)
• Select Paste/Fetch Data
• Paste the link into the text field
• Press Start
• Close the window
By default, Galaxy uses the URL as the name, so rename the files with a more useful name.
comment Note
All the files are based on hg38 reference genome which is available with pre-built indexes for widely used tools such as bwa-mem and samtools by selecting hg38 version as an option under “(Using) reference genome”).
3. In case you import datasets from Zenodo, check that all datasets in your history have their datatypes assigned correctly, and fix it when necessary. For example, to assign BED datatype do the following:
tip Tip: Changing the datatype
• Click on the galaxy-pencil pencil icon for the dataset to edit its attributes
• In the central panel, click on the galaxy-chart-select-data Datatypes tab on the top
• Select bed
• Click the Change datatype button
4. Rename the datasets
For datasets uploaded via a link, Galaxy will use the link as the dataset name. In this case you may rename datasets.
tip Tip: Renaming a dataset
• Click on the galaxy-pencil pencil icon for the dataset to edit its attributes
• In the central panel, change the Name field
• Click the Save button
Quality control
In-depth quality control (QC) of data generated during an NGS experiment is crucial for an accurate interpretation of the results. For example an accurate QC could help in identifying poor quality experiments, sequence contamination or genomic regions with low sequence coverage, and all these factors have a large impact on the downstream processing.
Most of the programs used during an NGS workflow do not include steps for quality control, therefore artifacts need to be detected using ad-hoc QC tools. The table summarizes the main tools available at https://usegalaxy.eu for quality checking at each step of the analysis.
NGS workflow step File format Tools for quality control
Sample to reads fastQ FastQC
Reads to alignments BAM General statistics: bam.iobio.io, samtools; target coverage: Picard CollectHSMetrics; per base coverage depth: bedtools
Alignments to variants VCF vcf.iobio.io
Quality control of FASTQ files
Before starting the analysis workflow, you should identify possible issues that could affect alignment and variant calling. This first step of quality control is based on the raw sequence data (fastQ) generated by the sequencer. Common issues with sequence quality can be easily addressed by further processing your original sequences to trim or remove low-quality reads. In presence of severe artefacts you should consider to repeat the experiment instead of starting the downstream analysis that will generate poor quality results, according to the rule ‘garbage in, garbage out’.
Here we’ll use the FastQC software for a standard quality check, using the two FASTQ files HighQuality_Reads.fastq and LowQuality_Reads.fastq.
FastQC is relatively easy to use. The output of FastQC consists of multiple modules analysing a specific aspect of the quality of the data. A detailed help can be found in the help manual.
The names of the modules are preceded by an icon that reflects the quality of the data, and indicates whether the results of the module are:
• normal (green)
• slightly abnormal (orange)
• very unusual (red)
comment Note on FastQC interpretation
These evaluations must be taken in the context of what you are expecting from your dataset. For FastQC a normal sample includes random sequences with high diversity. If your experiment generates biased libraries (e.g. low complexity libraries) you should interpret the report with attention. In general, you should concentrate on the icons different from green and try to understand the reasons for this behaviour.
hands_on Hands-on: Computing sequence quality with FastQC
1. Run FastQC tool on your fastq datasets HighQuality_Reads.fastq and LowQuality_Reads.fastq. You can select both datasets with the Multiple datasets option.
tip Tip: Select multiple datasets
1. Click on param-files Multiple datasets
2. Select several files by keeping the Ctrl (or COMMAND) key pressed and clicking on the files of interest
For each input file you will get two datasets, one with the raw QC statistics and another with an HTML report with figures.
2. Using the MultiQC tool software, you can aggregate multiple raw FastQC output in one unique report. This helps in comparing multiple samples at the same time in order to quickly identify low quality samples that will be displayed as outliers.
• “Which tool was used generate logs?”: FastQC
• In “FastQC output”
• “Type of FastQC output?”: Raw data
• param-files “FastQC output”: the two RawData outputs of FastQC tool
3. Inspect MultiQC report
For a detailed explanation of the different analysis modules of FastQC you may refer to the Quality control tutorial.
question Questions
1. Based on the MultiQC report, check which modules highlight differences in sequence quality between the two datasets
Quality control of BAM files
Fast quality check with bam.iobio.io
BAM files are binary files containing information on the sequences aligned onto a reference genome. Exploring BAM files you can address several questions, e.g.:
• What is the amount of duplicated sequences? For non PCR-free protocols, it should be < 15%. Duplicated sequences are not used in downstream analysis to identify variants, and should therefore be kept at a minimum to avoid wasting reagents.
• What is the fraction of unmapped reads? It should be < 2%. If higher, you should ask why so many reads are not properly mapped onto the reference genome. One possible reason could be sample contamination.
In Galaxy, BAM files can be explored using the bam.iobio.io web app. Leveraging random subsampling of reads, bam.iobio.io quickly draws several quality control reports (Figure 6).
On top of each plot, clicking on the question mark you can open a window with a detailed explanation of the expected output. The number of reads sampled is shown at the top-right of the page, and can be increased by clicking on the arrow.
Figure 6. BAM quality control using bam.iobio.io
hands_on Hands-on: Computing BAM quality with bam.iobio.io
1. Run bam.iobio.io tool on a BAM dataset. To start bam.iobio.io, click on the display at bam.iobio.io link in the dataset section. Please note that the link will be visible only for datasets with the appropriate database field set to hg38
tip Tip: Changing Database/Build (dbkey)
• Click on the galaxy-pencil pencil icon for the dataset to edit its attributes
• In the central panel, change the Database/Build field
• Select your desired database key from the dropdown list: hg38
• Click the Save button
question Questions
1. What is the amount of duplicated sequences?
2. And the fraction of aligned reads?
Coverage metrics with Picard
Collect Hybrid Selection (HS) Metrics tool tool computes a set of metrics that are specific for sequence datasets generated through hybrid-selection, a commonly used protocol to capture specific sequences for targeted experiments such as panels and exome sequencing.
In order to run this tool you need a file with the aligned sequences in BAM format, and files with the intervals corresponding to bait and target regions. These files can be generally obtained from the website of the kit manufacturer.
comment Note
Please note that interval files are generally available as BED files, and must be converted in Picard interval_list format using Picard’s BedToInterval tool before running CollectHsMetrics - see the hands on below for details.
Metrics generated by CollectHsMetrics are grouped into three classes:
• Basic sequencing metrics: genome size, the number of reads, the number of aligned reads, the number of unique reads, etc.
• Metrics for evaluating the performance of the wet-lab protocol: number of bases mapping on/off/near baits, number of bases mapping on target, etc.
• Metrics for coverage estimation: mean bait coverage, mean and median target coverage, the percentage of bases covered at different coverage (e.g. 1X, 2X, 10X, 20X, …), the percentage of filtered bases, etc.
For a detailed description of the output see Picard’s CollectHsMetrics
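To make the coverage-class metrics concrete, here is a toy sketch (the per-base depths are hypothetical, and this is not Picard's actual implementation) of how the mean target coverage and the fraction of bases covered at 10X or more could be computed:

```python
# Hypothetical per-base depths over the target regions
depths = [0, 5, 12, 33, 41, 18, 9, 27, 60, 14]

mean_cov = sum(depths) / len(depths)
pct_10x = 100 * sum(d >= 10 for d in depths) / len(depths)
print(f"mean target coverage: {mean_cov:.1f}")   # 21.9
print(f"bases covered >= 10X: {pct_10x:.0f}%")   # 70%
```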
In the next tutorial we will compute hybrid-selection metrics for BAM files containing aligned sequences from an exome sequencing experiment.
hands_on Hands-on: Computing BAM statistics with Picard CollectHsMetrics
1. Before computing the statistics, we first need to convert the BED files with bait and target regions, in Picard interval_list format. Remember that you can select multiple datasets with the Multiple datasets option.
tip Tip: Select multiple datasets
1. Click on param-files Multiple datasets
2. Select several files by keeping the Ctrl (or COMMAND) key pressed and clicking on the files of interest
Run BedToIntervalList tool to convert BED files.
• “Load picard dictionary file from?”: Local cache
• In “Use dictionary from the list”: Human (Homo sapiens): hg38
• “Select coordinate dataset or dataset collection?”: your BED file to be converted
2. Run CollectHsMetrics tool using as input the BAM files and the intervals in Picard interval_list format, corresponding to the bait and target regions, generated in the previous step.
3. Use MultiQC tool to aggregate CollectHsMetrics output in one unique report to facilitate the comparison across multiple samples.
• “Which tool was used generate logs?”: Picard
• In “Picard output”
• “Type of Picard output?”: HS Metrics
• param-files “Picard output”: the output of CollectHsMetrics tool
4. Inspect MultiQC tool report
question Questions
1. What is the average target coverage?
2. And the fraction of bases covered at least 10X?
Variant annotation
After the generation of a high-quality set of mapped read pairs, we can proceed to call different classes of DNA variants. Users interested in germline variant calling can refer to related Galaxy’s tutorials, e.g. Exome sequencing data analysis for diagnosing a genetic disease.
Variant callers usually provide us with a simple list of sequence variants (genomic coordinates + reference and variant alleles) plus genotypes and genotype-likelihoods. Variant annotation is the process of adding information to these variants using multiple sources (e.g. public databases). We are usually interested in knowing, for example, whether a specific variant overlaps with a gene, whether it falls into an exon of that gene, whether it's a coding exon, what type of change the variant causes to the encoded amino acid (missense, nonsense, frameshift), etc.
Gene model
The choice of gene model is essential for downstream variant annotation: it describes the genomic positions of genes and the exact locations of their exons and introns
Different gene models can give different annotations:
Figure 1. Variant indicated by the red dashed line can be annotated as intronic or exonic (on one of the UCSC transcript variants), depending on the adopted gene model:
Source Description
RefSeq A comprehensive, integrated, non-redundant, well-annotated set of reference sequences including genomic, transcript, and protein
Ensembl Integrates and distributes reference datasets and analysis tools. Based at EMBL-EBI
Gencode A project to identify and classify all gene features in the human and mouse genomes with high accuracy based on biological evidence. Based on the ENCODE consortium
Sequence variant nomenclature
Variant nomenclature should be described univocally:
Source Description
HGVS HGVS-nomenclature serves as an international standard for the description of DNA, RNA and protein sequence variants
HGMC HUGO Gene Nomenclature Committee is responsible for approving unique symbols and names for human loci, including protein coding genes, ncRNA genes and pseudogenes, to allow unambiguous scientific communication
Variant class
Sequence features used in biological sequence annotation should be defined using the Sequence Ontology, a collaborative project that standardizes the terms for such features.
Among the main sources of variant annotation are:
Variant-disease/gene-disease db
Several tools and software packages are available for variant annotation; here is a list of the most used ones:
Annotation Software and tools
Local installation:
Web interface:
Annotation and filtering with SnpEFF
For variant annotations we’ll use SnpEff, a software for genomic variant annotation and functional effect prediction.
hands_on Hands-on: Variant annotations with SnpEff
1. Choose SnpEff eff tool (“annotate variants”, not “annotate variants for SARS-CoV-2”)
• param-file “Sequence changes (SNPs, MNPs, InDels)”: the uploaded VCF file
• “Input format”: VCF
• “Output format”: VCF (only if input is VCF)
• “Genome source”: Locally installed snpEff database
• “Genome”: Homo sapiens: hg38 (or a similarly named option)
• “Produce Summary Stats”: Yes
tip Tip: Annotation options
You can Select/Unselect many Annotation options checking from the list (i.e “Use gene ID instead of gene name (VCF output)” or “Only use canonical transcripts”)
tip Tip: Filter output
You can narrow down the output list of annotated variants, filtering out specific types of changes, choosing from the five choices shown in the Filter output menu (i.e “Do not show DOWNSTREAM changes” or “Do not show INTERGENIC changes”)or selecting “Yes” from Filter out specific Effects and selecting from all the type of possible categories
Two output file will be created:
1. a Summary Stats HTML report, with general metrics such as the distribution of variants across gene features;
2. a VCF file with annotations of variant effects added to the INFO column.
Variant prioritization
Once annotated, variants need to be filtered and prioritized. The number of variants returned by genomic sequencing varies from tens of thousands (WES) to millions (WGS).
No universal filters are available; they depend on the features of each experiment
Variant impact
First of all, you usually want to filter variants by their consequence on the encoded protein, keeping those which have a higher impact on the protein:
• Missense
• Nonsense
• Splice sites
• Frameshift indels
• Inframe indels
Variant frequency
• Common variants are unlikely to be associated with a clinical condition
• A rare variant will probably have a higher functional effect on the protein
• Frequency cut-offs have to be customized for each case
• Typical cut-offs: 1% - 0.1%
• Allele frequencies may differ a lot between different populations
Variant effect prediction Tools
• Tools that predict consequences of amino acid substitutions on protein function
• They give a score and/or a prediction in terms of “Tolerated”, “Deleterious” (SIFT) or “Probably Damaging”, “Possibly Damaging”, “Benign” (Polyphen2)
• fitCons
• GERP++
• SIFT
• PolyPhen2
• DANN
• Condel
• fathmm
• MutationTaster
• MutationAssessor
• REVEL
ACMG/AMP 2015 guidelines
The American College of Medical Genetics and the Association for Molecular Pathology published guidelines for the interpretation of sequence variants in May 2015 (Richards S. et al., 2015). This report describes updated standards and guidelines for classifying sequence variants using criteria informed by expert opinion and experience.
• 28 evaluation criteria for the clinical interpretation of variants. The resulting classifications fall into 3 sets:
• pathogenic/likely pathogenic (P/LP)
• benign/likely benign (B/LB)
• variant of unknown significance (VUS)
• InterVar: software for automated interpretation of the 28 criteria. Two major steps:
• automated interpretation by the 28 criteria
• manual adjustment to re-interpret the clinical significance
Prioritization
Phenotype-based prioritization tools are methods working by comparing the phenotypes of a patient with gene-phenotype known associations.
• Phenolyzer: Phenotype Based Gene Analyzer, a tool to prioritize genes based on user-specific disease/phenotype terms
hands_on Hands-on: Variant prioritization
Using the knowledge gained in the Genomic databases and variant annotation section, try to annotate, filter and prioritize an example exome variant dataset, using two disease terms
• Download a VCF file from the dataset, as learned in the Home section
• Go to wANNOVAR
• Use the vcf file as input file and hearing loss and deafness autosomal recessive, as disease terms to prioritize results
• Choose rare recessive Mendelian disease as Disease Model in the Parameter Settings section
• Provide an institutional email address and submit the job
• Wait for results
In the results page you can navigate and download results. Click Show in the Network Visualization section to see Phenolyzer prioritization results
GEMINI for variant filtering
Now, we'll use the VCF file annotated with SnpEff to filter variants considering the relationships between family members. For this purpose we'll use GEMINI, a framework including different modules for the analysis of human variants. First, we need to inform GEMINI about the relationships between the samples and their phenotypes (affected vs not affected). This information is stored in a pedigree file in PED format. In the next Hands-on you'll learn how to manually generate a pedigree file.
hands_on Hands-on: Creating a GEMINI pedigree file
1. Create an example PED-formatted pedigree file for a trio:
#family_id name paternal_id maternal_id sex phenotype
Fam_A father_ID 0 0 1 1
Fam_A mother_ID 0 0 2 1
Fam_A proband_ID father_ID mother_ID 1 2
and set its datatype to tabular.
tip Tip: Creating a new file
• Open the Galaxy Upload Manager
• Select Paste/Fetch Data
• Paste the file contents into the text field
• Change Type from “Auto-detect” to tabular
• Press Start and Close the window
warning Remember those sample names
Names in the pedigree file should match the sample names in your VCF file in order to be recognized by GEMINI. If the names are different, samples will not be recognized and therefore you will not be able to filter variants by patterns of genetic inheritance.
details More on PED files
The PED format is explained in the help section of GEMINI load tool and here
Take a moment and try to understand the information that is encoded in the PED dataset we are using here.
Next, in order to formulate queries to extract variants matching your selection criteria, variants and their annotations need to be stored in a format accepted by GEMINI. This task is accomplished by the GEMINI load tool, which accepts as input your SnpEff-annotated VCF file together with the pedigree file.
hands_on Hands-on: Creating a GEMINI database
1. GEMINI load tool with
• param-file “VCF dataset to be loaded in the GEMINI database”: the output of SnpEff eff tool
• “The variants in this input are”: annotated with snpEff
• “This input comes with genotype calls for its samples”: Yes
Our example VCFs include genotype calls.
• “Choose a gemini annotation source”: select the latest available annotations snapshot (most likely, there will be only one)
• “Sample and family information in PED format”: the pedigree file prepared above
• “Load the following optional content into the database”
• param-check “GERP scores”
• param-check “CADD scores”
• param-check “Gene tables”
• param-check “Sample genotypes”
• param-check “variant INFO field”
Check the following:
• "only variants that passed all filters"
This retains only high-quality variants, e.g. variants with the value PASS in the FILTER column
Leave unchecked the following:
• “Genotype likelihoods (sample PLs)”
Our VCFs do not contain these values
This generates a GEMINI-specific dataset, which can only be processed with other GEMINI tools. In fact, every analysis with a GEMINI tool starts with the GEMINI database obtained by GEMINI load tool.
details The GEMINI suite of tools
The GEMINI framework is composed by a large number of utilities.
The Somatic variant calling tutorial demonstrates the use of the GEMINI annotate and GEMINI query tools, and tries to introduce some essential bits of GEMINI’s SQL-like syntax.
For a thorough explanation of all GEMINI tools and functionality visit the GEMINI documentation.
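As a side note, a GEMINI database is a SQLite file, so a filter in the spirit of the constraints used below can be sketched in plain Python (a minimal sketch; variants.db is a placeholder path, and the column names are those documented in the GEMINI variants-table schema):

```python
import sqlite3

# Query a GEMINI database (a SQLite file) directly.
# "variants.db" is a placeholder path.
conn = sqlite3.connect("variants.db")
query = """
    SELECT chrom, start, ref, alt, gene, impact, max_aaf_all
    FROM variants
    WHERE impact_severity != 'LOW'
      AND max_aaf_all < 0.01
"""
for row in conn.execute(query):
    print(row)
conn.close()
```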
Candidate variant detection
Here you’ll learn how to use GEMINI inheritance pattern tool to report all variants fitting any specific inheritance model. You’ll be able to select any of the following inheritance patterns:
• Autosomal recessive
• Autosomal dominant
• Autosomal de-novo
• Compound heterozygous
• Loss of heterozygosity (LOH) events
Below is how you can perform the query for inherited autosomal recessive variants. Feel free to run analogous queries for other types of variants that you think could plausibly be causative for your case study.
hands_on Hands-on: Filtering variants by inheritance pattern
1. GEMINI inheritance pattern tool
• “GEMINI database”: the GEMINI database of annotated variants; output of GEMINI load tool
• “Your assumption about the inheritance pattern of the phenotype of interest”: e.g. Autosomal recessive
• param-repeat “Additional constraints on variants”
• “Additional constraints expressed in SQL syntax”: impact_severity != 'LOW'
This will remove variants with low impact severity (i.e., silent mutations and variants outside coding regions). Leave this box empty to report all variants independently of their impact.
• “Include hits with less convincing inheritance patterns”: No
Account for errors in phenotype assignment - meaningful for large families
• “Report candidates shared by unaffected samples”: No
Account for incomplete penetrance - meaningful for large families
• “Family-wise criteria for variant selection”: keep default settings
This section is not useful when you have data from just one family.
• In “Output - included information”
• “Set of columns to include in the variant report table”: Custom (report user-specified columns)
• “Choose columns to include in the report”:
• param-check “alternative allele frequency (max_aaf_all)”
• “Additional columns (comma-separated)”: chrom, start, ref, alt, impact, gene, clinvar_sig, clinvar_disease_name, clinvar_gene_phenotype, rs_ids
details ClinVar annotations
clinvar_sig and clinvar_disease_name annotations refer to the particular variant, clinvar_gene_phenotype provides information about the gene harbouring the variant.
question Question
From the output of GEMINI inheritance pattern, can you identify the most likely candidate variant?
details More GEMINI usage examples
While only demonstrating command line use of GEMINI, the following tutorial slides may give you additional ideas for variant queries and filters:
Solutions
Below you will find the true pathogenic variants for all the case studies.
solution Solutions
• Family A: WDR37, FIXME-VARIANT, de novo
• Family B: GNAPTB, FIXME-VARIANT, compound heterozygosity
• Family C: ECEL1, FIXME-VARIANT, compound heterozygosity
• Family D: TCF4, FIXME-VARIANT, parental mosaicism
• Family E: GPC3, FIXME-VARIANT, X-linked
To address in more details quality control strategies, structural variants analysis (i.e. CNVs), or identification of RoHs you can move forward to the Advanced tutorial.
Contributors
• Tommaso Pippucci - Sant’Orsola-Malpighi University Hospital, Bologna, Italy
• Alessandro Bruselles - Istituto Superiore di Sanità, Rome, Italy
• Andrea Ciolfi - Ospedale Pediatrico Bambino Gesù, IRCCS, Rome, Italy
• Gianmauro Cuccuru - Albert Ludwigs University, Freiburg, Germany
• Giuseppe Marangi - Institute of Genomic Medicine, Fondazione Policlinico Universitario A. Gemelli IRCCS, Università Cattolica del Sacro Cuore, Roma, Italy
• Paolo Uva - IRCCS G. Gaslini, Genoa, Italy
Citing this Tutorial
1. Alessandro Bruselles, Andrea Ciolfi, Gianmauro Cuccuru, Giuseppe Marangi, Paolo Uva, Tommaso Pippucci, 2020 Data analysis and interpretation for clinical genomics (Galaxy Training Materials). /clinical_genomics/ Online; accessed TODAY
2. Batut et al., 2018 Community-Driven Data Analysis Training for Biology Cell Systems 10.1016/j.cels.2018.05.012
details BibTeX
@misc{-,
author = "Alessandro Bruselles and Andrea Ciolfi and Gianmauro Cuccuru and Giuseppe Marangi and Paolo Uva and Tommaso Pippucci",
title = "Data analysis and interpretation for clinical genomics (Galaxy Training Materials)",
year = "2020",
month = "10",
day = "21"
url = "\url{/clinical_genomics/}",
note = "[Online; accessed TODAY]"
}
@article{Batut_2018,
doi = {10.1016/j.cels.2018.05.012},
url = {https://doi.org/10.1016%2Fj.cels.2018.05.012},
year = 2018,
month = {jun},
publisher = {Elsevier {BV}},
volume = {6},
number = {6},
pages = {752--758.e1},
author = {B{\'{e}}r{\'{e}}nice Batut and Saskia Hiltemann and Andrea Bagnacani and Dannon Baker and Vivek Bhardwaj and Clemens Blank and Anthony Bretaudeau and Loraine Brillet-Gu{\'{e}}guen and Martin {\v{C}}ech and John Chilton and Dave Clements and Olivia Doppelt-Azeroual and Anika Erxleben and Mallory Ann Freeberg and Simon Gladman and Youri Hoogstrate and Hans-Rudolf Hotz and Torsten Houwaart and Pratik Jagtap and Delphine Larivi{\`{e}}re and Gildas Le Corguill{\'{e}} and Thomas Manke and Fabien Mareuil and Fidel Ram{\'{\i}}rez and Devon Ryan and Florian Christoph Sigloch and Nicola Soranzo and Joachim Wolff and Pavankumar Videm and Markus Wolfien and Aisanjiang Wubuli and Dilmurat Yusuf and James Taylor and Rolf Backofen and Anton Nekrutenko and Björn Grüning},
title = {Community-Driven Data Analysis Training for Biology},
journal = {Cell Systems}
}
`
|
|
# Sigma-rings are closed under countable intersections (sigma-rings are delta-rings)
I'm trying to prove the following and all I've got is like one line worth of proof.
If we had that sigma-rings were closed under complementation, this would be easier, but we only know that if A in R and B in R, then A \ B in R and B \ A in R (relative complements). Is there a way to approach this using these differences?
Office_Shredder
Staff Emeritus
Gold Member
Instead of complementing take relative complements in A1.
Ok. So assume that $A_1, A_2, A_3, \ldots \in R$. Then $A_1 \setminus A_1, A_1 \setminus A_2, A_1 \setminus A_3, \ldots \in R$. Since $R$ is a $\sigma$-ring, $X = \bigcup_{n = 1}^\infty (A_1 \setminus A_n) \in R$. Also $X \setminus A_1 \in R$.
I'm not seeing where this is leading.
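(For reference, the step this is leading toward, stated here since the thread stops short of it: $x \in A_1 \setminus X$ exactly when $x \in A_1$ and, for every $n$, $x \notin A_1 \setminus A_n$, i.e. $x \in A_n$ for all $n$. Hence $$A_1 \setminus X = \bigcap_{n=1}^\infty A_n,$$ and this lies in $R$ because $A_1, X \in R$ and $R$ is closed under relative complements. So it is $A_1 \setminus X$, not $X \setminus A_1$, that finishes the proof.)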
|
|
# $p\land\neg q\to r, \neg r, p ⊢ q$ -natural deduction
I have the following:
$$p\land\neg q\to r, \neg r, p ⊢ q$$
I know that my attempt is incorrect, but I will show it anyways:
Step 1) $$p\land\neg q\to r$$ ----premise
Step 2) $$\neg r$$ -----premise
Step 3) $$p$$ -----premise
Step 4) $$\neg q\to r$$ ---- $\wedge$e 1
Step 5) $$\neg \neg q$$ ----MT4,2
Can someone show me the proper steps? I do not think I can use MT in the way shown above, but I cannot find out how to get to q.
OP's remark from a comment: "I was curious, is there a way to bypass DeMorgan's law?"
• In Step 4) you are reading the premise $p∧¬q→r$ as $p∧(¬q→r)$; if you think that MT is not available, after Step 4) you have to (temporarily) assume $\lnot q$ and derive $r$. With the premise $\lnot r$ you have a contradiction and you can "blame" the assumption $\lnot q$ in order to derive (by Double Negation) $q$. If instead you read the premise $p∧¬q→r$ as $(p∧¬q)→r$, the proof is different (see answers below): from premise $p$ and (temporary) assumption $\lnot q$ derive $p \land \lnot q$ by $\land$-intro and then derive $r$, which gives you a contradiction with $\lnot r$. – Mauro ALLEGRANZA Feb 9 '15 at 11:03
Something like this?
$$\begin{split} p\wedge\neg q \to r , \neg r &\vdash \neg (p\wedge \neg q)&\quad&\textsf{Premise 1,Premise 2, Modus Tollens} \\ \neg (p\wedge \neg q)&\vdash \neg p\vee q &&1,\textsf{de Morgan's} \\ \neg p\vee q, p &\vdash q&&2,\textsf{Premise 3},\textsf{Disjunctive Syllogism} \\\hline p∧¬q→r,¬r,p &⊢q \end{split}$$
Avoiding de Morgan's
$$\begin{split} (p\wedge \neg q)\to r, p, \lnot q&\vdash r &\quad&\textsf{Premise 1, Premise 3, Assumption of }\lnot q\textsf{, }\wedge\textsf{-Intro, Modus Ponens} \\ r, \lnot r &\vdash \bot&&1,\textsf{Premise 2},\textsf{Negation Elimination}\\\hline(p\wedge\neg q)\to r,\lnot r,p,\lnot q&\vdash \bot&&\textsf{Cut}\\\hline (p\wedge\neg q)\to r,\lnot r,p&\vdash \lnot\lnot q&&\textsf{Negation Introduction (discharges the assumption)}\\\hline (p\wedge\neg q)\to r,\lnot r,p& \vdash q &&\textsf{Double Negation Elimination}\end{split}$$
• this is really good, but I was curious, is there a way to bypass DeMorgan's law? – Bolboa Feb 9 '15 at 20:39
$$p\land\neg q\to r \iff \neg(p\land\neg q) \vee r \iff (\neg p \vee q \vee r)$$ (ref)
Since $\neg r$ and $p$ are in the premise, $q$ follows.
• I think the question asked for natural deduction steps to prove the result: Can someone show me the proper steps? Because of that I don't think this is an answer. – Frank Hubeny Feb 21 '19 at 18:19
$$¬r \Rightarrow ¬(p \land ¬q) \mbox{ by modus tollens}$$
$$¬(p \land ¬q) \iff ¬p \lor ¬¬q \iff ¬p \lor q$$
$$( ¬p \lor q) \land p \Rightarrow q \mbox{ by definition of the disjunction operator.}$$
$$\therefore p\land\neg q\to r, \neg r, p ⊢ q$$
• I don't think this answer shows the natural deduction steps asked for in the question. – Frank Hubeny Feb 21 '19 at 18:21
The following proof uses neither modus tollens nor De Morgan's law.
It, however, uses the precedence of logical operators, where the conjunction operator (∧) has higher precedence than the conditional operator (→). That is, $$p∧¬q→r$$ is the same as $$(p∧¬q)→r$$.
Given the above, here is a proof:
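Since the proof itself appears to have been supplied as an image from the proof editor, here is a reconstruction in the same Fitch style (rule names are generic natural-deduction labels; the line numbering is mine):

1. (p ∧ ¬q) → r        Premise
2. ¬r                  Premise
3. p                   Premise
4. | ¬q                Assumption
5. | p ∧ ¬q            ∧ Intro, 3, 4
6. | r                 → Elim, 1, 5
7. | ⊥                 ¬ Elim, 6, 2
8. ¬¬q                 ¬ Intro, 4–7
9. q                   Double Negation Elim, 8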
Kevin Klement's JavaScript/PHP Fitch-style natural deduction proof editor and checker http://proofs.openlogicproject.org/
"Operator Precedence" Introduction to Logic http://intrologic.stanford.edu/glossary/operator_precedence.html
|
|
# Resources tagged with: Visualising
Filter by: Content type:
Age range:
Challenge level:
### There are 178 results
Broad Topics > Thinking Mathematically > Visualising
### Polygon Rings
##### Age 11 to 14 Challenge Level:
Join pentagons together edge to edge. Will they form a ring?
### Polygon Pictures
##### Age 11 to 14 Challenge Level:
Can you work out how these polygon pictures were drawn, and use that to figure out their angles?
### Tessellating Hexagons
##### Age 11 to 14 Challenge Level:
Which hexagons tessellate?
### Getting an Angle
##### Age 11 to 14 Challenge Level:
How can you make an angle of 60 degrees by folding a sheet of paper twice?
### LOGO Challenge - Circles as Animals
##### Age 11 to 16 Challenge Level:
See if you can anticipate successive 'generations' of the two animals shown here.
### Semi-regular Tessellations
##### Age 11 to 16 Challenge Level:
Semi-regular tessellations combine two or more different regular polygons to fill the plane. Can you find all the semi-regular tessellations?
### Trice
##### Age 11 to 14 Challenge Level:
ABCDEFGH is a 3 by 3 by 3 cube. Point P is 1/3 along AB (that is AP : PB = 1 : 2), point Q is 1/3 along GH and point R is 1/3 along ED. What is the area of the triangle PQR?
### On Time
##### Age 11 to 14 Challenge Level:
On a clock the three hands - the second, minute and hour hands - are on the same axis. How often in a 24 hour day will the second hand be parallel to either of the two other hands?
### LOGO Challenge - Triangles-squares-stars
##### Age 11 to 16 Challenge Level:
Can you recreate these designs? What are the basic units? What movement is required between each unit? Some elegant use of procedures will help - variables not essential.
### Efficient Cutting
##### Age 11 to 14 Challenge Level:
Use a single sheet of A4 paper and make a cylinder having the greatest possible volume. The cylinder must be closed off by a circle at each end.
### Efficient Packing
##### Age 14 to 16 Challenge Level:
How efficiently can you pack together disks?
### The Old Goats
##### Age 11 to 14 Challenge Level:
A rectangular field has two posts with a ring on top of each post. There are two quarrelsome goats and plenty of ropes which you can tie to their collars. How can you secure them so they can't. . . .
### Cube Paths
##### Age 11 to 14 Challenge Level:
Given a 2 by 2 by 2 skeletal cube with one route 'down' the cube, how many routes are there from A to B?
### Convex Polygons
##### Age 11 to 14 Challenge Level:
Show that among the interior angles of a convex polygon there cannot be more than three acute angles.
### Playground Snapshot
##### Age 7 to 14 Challenge Level:
The image in this problem is part of a piece of equipment found in the playground of a school. How would you describe it to someone over the phone?
### Like a Circle in a Spiral
##### Age 7 to 16 Challenge Level:
A cheap and simple toy with lots of mathematics. Can you interpret the images that are produced? Can you predict the pattern that will be produced using different wheels?
### Star Gazing
##### Age 14 to 16 Challenge Level:
Find the ratio of the outer shaded area to the inner area for a six pointed star and an eight pointed star.
### Tied Up
##### Age 14 to 16 Short Challenge Level:
How much of the field can the animals graze?
### An Unusual Shape
##### Age 11 to 14 Challenge Level:
Can you maximise the area available to a grazing goat?
### Rolling Around
##### Age 11 to 14 Challenge Level:
A circle rolls around the outside edge of a square so that its circumference always touches the edge of the square. Can you describe the locus of the centre of the circle?
### Constructing Triangles
##### Age 11 to 14 Challenge Level:
Generate three random numbers to determine the side lengths of a triangle. What triangles can you draw?
### Weighty Problem
##### Age 11 to 14 Challenge Level:
The diagram shows a very heavy kitchen cabinet. It cannot be lifted but it can be pivoted around a corner. The task is to move it, without sliding, in a series of turns about the corners so that it. . . .
### Sprouts
##### Age 7 to 18 Challenge Level:
A game for 2 people. Take turns joining two dots, until your opponent is unable to move.
### Platonic Planet
##### Age 14 to 16 Challenge Level:
Glarsynost lives on a planet whose shape is that of a perfect regular dodecahedron. Can you describe the shortest journey she can make to ensure that she will see every part of the planet?
### Rolling Triangle
##### Age 11 to 14 Challenge Level:
The triangle ABC is equilateral. The arc AB has centre C, the arc BC has centre A and the arc CA has centre B. Explain how and why this shape can roll along between two parallel tracks.
### Tetrahedra Tester
##### Age 11 to 14 Challenge Level:
An irregular tetrahedron is composed of four different triangles. Can such a tetrahedron be constructed where the side lengths are 4, 5, 6, 7, 8 and 9 units of length?
##### Age 14 to 16 Challenge Level:
Four rods are hinged at their ends to form a convex quadrilateral. Investigate the different shapes that the quadrilateral can take. Be patient this problem may be slow to load.
### Seega
##### Age 5 to 18
An ancient game for two from Egypt. You'll need twelve distinctive 'stones' each to play. You could chalk out the board on the ground - do ask permission first.
### Shaping the Universe II - the Solar System
##### Age 11 to 16
The second in a series of articles on visualising and modelling shapes in the history of astronomy.
### Isosceles Triangles
##### Age 11 to 14 Challenge Level:
Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?
##### Age 11 to 14 Challenge Level:
Four rods, two of length a and two of length b, are linked to form a kite. The linkage is moveable so that the angles change. What is the maximum area of the kite?
### Sea Defences
##### Age 7 to 14 Challenge Level:
These are pictures of the sea defences at New Brighton. Can you work out what a basic shape might be in both images of the sea wall and work out a way they might fit together?
### Nine Colours
##### Age 11 to 16 Challenge Level:
Can you use small coloured cubes to make a 3 by 3 by 3 cube so that each face of the bigger cube contains one of each colour?
### Tilting Triangles
##### Age 14 to 16 Challenge Level:
A right-angled isosceles triangle is rotated about the centre point of a square. What can you say about the area of the part of the square covered by the triangle as it rotates?
### Cutting a Cube
##### Age 11 to 14 Challenge Level:
A half-cube is cut into two pieces by a plane through the long diagonal and at right angles to it. Can you draw a net of these pieces? Are they identical?
### Corridors
##### Age 14 to 16 Challenge Level:
A 10x10x10 cube is made from 27 2x2 cubes with corridors between them. Find the shortest route from one corner to the opposite corner.
### All Tied Up
##### Age 14 to 16 Challenge Level:
A ribbon runs around a box so that it makes a complete loop with two parallel pieces of ribbon on the top. How long will the ribbon be?
### Dotty Triangles
##### Age 11 to 14 Challenge Level:
Imagine an infinitely large sheet of square dotty paper on which you can draw triangles of any size you wish (providing each vertex is on a dot). What areas is it/is it not possible to draw?
### Cubic Conundrum
##### Age 7 to 16 Challenge Level:
Which of the following cubes can be made from these nets?
### Alquerque
##### Age 5 to 18
This game for two, was played in ancient Egypt as far back as 1400 BC. The game was taken by the Moors to Spain, where it is mentioned in 13th century manuscripts, and the Spanish name Alquerque. . . .
### Shaping the Universe I - Planet Earth
##### Age 11 to 16
This article explores the history of theories about the shape of our planet. It is the first in a series of articles looking at the significance of geometric shapes in the history of astronomy.
### Pumpkin Patch
##### Age 5 to 18
A game for two players based on a game from the Somali people of Africa. The first player to pick all the other's 'pumpkins' is the winner.
### There and Back Again
##### Age 11 to 14 Challenge Level:
Bilbo goes on an adventure, before arriving back home. Using the information given about his journey, can you work out where Bilbo lives?
### Tower of Hanoi
##### Age 11 to 14 Challenge Level:
The Tower of Hanoi is an ancient mathematical challenge. Working on the building blocks may help you to explain the patterns you notice.
### Triangles in the Middle
##### Age 11 to 18 Challenge Level:
This task depends on groups working collaboratively, discussing and reasoning to agree a final product.
### Triangles to Tetrahedra
##### Age 11 to 14 Challenge Level:
Imagine you have an unlimited number of four types of triangle. How many different tetrahedra can you make?
### Counting Triangles
##### Age 11 to 14 Challenge Level:
Triangles are formed by joining the vertices of a skeletal cube. How many different types of triangle are there? How many triangles altogether?
### A Problem of Time
##### Age 14 to 16 Challenge Level:
Consider a watch face which has identical hands and identical marks for the hours. It is opposite to a mirror. When is the time as read direct and in the mirror exactly the same between 6 and 7?
### Triangular Tantaliser
##### Age 11 to 14 Challenge Level:
Draw all the possible distinct triangles on a 4 x 4 dotty grid. Convince me that you have all possible triangles.
### On the Edge
##### Age 11 to 14 Challenge Level:
If you move the tiles around, can you make squares with different coloured edges?
|
|
# Suggestion Why is the math output hard to read sometimes?
1. May 16, 2009
### squidsoft
May I suggest improving the format of the math output in the forum.
Consider the following code:
$$\mathop\textnormal{Res}\limits_{z=-n}\left\{\frac{\pi}{x^s\sin(\pi s)}\right\}=(-x)^n,\quad n=0,-1,-2,\cdots$$
The equal sign is not well displayed under the Res symbol and the "s" in sine is broken up. I've noticed other problems like this in general. I think PF would look more polished if the math output was nicer looking.
2. May 17, 2009
### phreak
If I recall correctly, it used to be better. I'm not sure when or why the change occurred.
3. Jun 1, 2009
### DrGreg
I suspect the problem might be that the LaTeX renderer (which generates the equation images) may work on the assumption that the equations will be displayed on a white background. On a grey background, some of the pixels are too faint. Is it possible to tweak the LaTeX renderer to take account of the grey background?
4. Jun 2, 2009
### chroot
Staff Emeritus
Hey all,
A year ago or so, something changed in the fonts included in the normal LaTeX distributions that come with most Linux distributions. Along with it were a number of other changes that broke PF's latex system. I rewrote some of it, but never really figured out the problem with the fonts.
I will look into it more. I don't actually think it has anything to do with anti-aliasing. The images are currently anti-aliased to white, and then white is dropped out as transparent. If the strokes look correct when anti-aliased to white, it seems that changing the surrounding white pixels to transparent would not affect them. It's worth a shot, though.
- Warren
5. Jun 2, 2009
### DrGreg
For what it's worth, I took the PNG image in post #1, on its default white background, and decreased the brightness until its background matched this thread's grey background. I think the result (attached) is therefore what you'd get if anti-aliased to grey. Slightly more legible, I think, but still not great, and I guess that's down to a poor choice of font. Or something.
6. Jun 4, 2009
### Moonbear
Staff Emeritus
Can the font be made bold, either in a default setting or when typed by the user (I never use LaTex, so don't know the ins and outs of this)? It just looks like the font is a bit thin and loses something, so if there's a way to make it bold, that might be enough to improve readability.
7. Jun 4, 2009
### DrGreg
That wouldn't be a solution as such, because some equations use both bold and plain font, e.g.
$$\mathbf{z} = a\mathbf{x} + b\mathbf{y}$$
although personally I prefer
$$\textbf{z} = a\textbf{x} + b\textbf{y}$$
However, if you have a greater choice of font weights than just "plain" and "bold", then some slightly heavier fonts might help.
8. Jun 4, 2009
### chroot
Staff Emeritus
Okay, guys... I changed some of the antialiasing behavior in Ghostscript (I turned it down!), and I think the output looks a little better now. If you could, post some troublesome LaTeX here and see if it renders better now.
- Warren
9. Jun 4, 2009
### chroot
Staff Emeritus
$$\mathop\textnormal{Res}\limits_{z=-n}\left\{\frac{\pi}{x^s\sin(\pi s)}\right\}=(-x)^n,\quad n=0,-1,-2,\cdots$$
10. Jun 4, 2009
### CRGreathouse
$$\sum_{n=a}^bf(n)$$ has a very strong summation symbol.
11. Jun 4, 2009
### chroot
Staff Emeritus
This is how it looked with the old antialiasing options:
$$\sum_{n=a}^bf(n)$$
- Warren
12. Jun 4, 2009
### chroot
Staff Emeritus
And now the new:
$$\sum_{n=a}^bf(n)$$
It's really strange that antialiasing options could even cause this in the first place.....
- Warren
13. Jun 4, 2009
### chroot
Staff Emeritus
And with no anti-aliasing at all:
$$\sum_{n=a}^bf(n)$$
- Warren
14. Jun 4, 2009
### chroot
Staff Emeritus
Fooling around some more:
$$\sum_{n=a}^bf(n)$$
15. Jun 4, 2009
### chroot
Staff Emeritus
Hmmm...
$$\sum_{n=a}^bf(n)$$
16. Jun 4, 2009
### chroot
Staff Emeritus
Try try again:
$$\sum_{n=a}^bf(n)$$
17. Jun 4, 2009
### chroot
Staff Emeritus
$$\mathop\textnormal{Res}\limits_{z=-n}\left\{\frac{\pi}{x^s\sin(\pi s)}\right\}=(-x)^n,\quad n=0,-1,-2,\cdots$$
18. Jun 4, 2009
### chroot
Staff Emeritus
I'm not really sure I've found a solution. I'll have to keep hunting.
$$\mathop\textnormal{Res}\limits_{z=-n}\left\{\frac{\pi}{x^s\sin(\pi s)}\right\}=(-x)^n,\quad n=0,-1,-2,\cdots$$
- Warren
19. Jun 4, 2009
### Moonbear
Staff Emeritus
Some of those versions looked better...not perfect, but certainly better.
20. Jun 6, 2009
### Fredrik
Staff Emeritus
$$\begin{pmatrix}1 & 0 & 0\\ 0 & \frac{u_x}{u} & \frac{u_y}{u}\\ 0 & -\frac{u_y}{u} & \frac{u_x}{u} \end{pmatrix}$$
Hm, both the parentheses and the zeroes look better than they did here. They used to look like the pixel size was bigger in the LaTeX font. I'm not a big fan of the new $$\sum$$ though, and x and y are still just barely legible. Have you tried a slightly bigger font size?
It would also be nice if the \dot code would make a slightly bigger dot: $$\dot{\vec r}$$ (but I realize of course that you can't do anything that changes only that symbol).
|
|
# Introduction to Neural Networks
In preparation for starting a new job next week, I’ve been doing some reading about neural networks and deep learning. The math behind neural networks is pretty interesting, so I thought I’d take my notes, and turn them into some posts.
As the name suggests, the basic idea of a neural network is to construct a computational system based on a simple model of a neuron. If you look at a neuron under a microscope, what you see is, roughly, the following:
It’s a cell with three main parts:
• A central body;
• A collection of branched fibers called dendrites that receive signals and carry them to the body; and
• A branched fiber called an axon that sends signals produced by the body.
You can think of a neuron as a sort of analog computing element. Its dendrites receive inputs from some collection of sources. The body has some criteria for deciding, based on its inputs, whether to “fire”. If it fires, it sends an output using its axon.
What makes a neuron fire? It’s a combination of inputs. Different terminals on the dendrites have different signaling strength. When the combined inputs reach a threshold, the neuron fires. Those different signal strengths are key: a system of neurons can learn how to interpret a complex signal by varying the strength of the signal from different dendrites.
We can think of this simple model of a neuron in computational terms as a computing element that takes a set of weighted input values, combines them into a single value, and then generates an output of “1” if that value exceeds a threshold, and 0 if it does not.
In slightly more formal terms, we can model a neuron as a tuple $(n, \theta, b, t)$ where:
1. $n$ is the number of inputs to the machine. We’ll represent a given input as a vector $v=[v_1, ..., v_n]$.
2. $\theta = [\theta_1, \theta_2, ..., \theta_n]$ is a vector of weights, where $\theta_i$ is the weight for input $i$.
3. $b$ is a bias value.
4. $t$ is the threshold for firing.
Given an input vector $v$, the machine computes the combined, weighted input value $I$ by taking the dot product $v \cdot \theta = \theta_1v_1 + \theta_2v_2 + \cdots + \theta_nv_n$. If $I + b \ge t$, the neuron “fires” by producing a 1; otherwise, it produces a zero.
This version of a neuron is called a perceptron. It’s good at a particular kind of task called classification: given a set of inputs, it can answer whether or not the input is a member of a particular subset of values. A simple perceptron is limited to linear classification, which I’ll explain next.
To understand what a perceptron does, the easiest way to think of it is graphical. Imagine you’ve got an input vector with two values, so that your inputs are points in a two dimensional cartesian plane. The weights on the perceptron inputs define a line in that plane. The perceptron fires for all points above that line – so the perceptron classifies a point according to which side of the line it’s located on. We can generalize that notion to higher dimensional spaces: for a perceptron taking $n$ input values, we can visualize its inputs as an $n$-dimensional space, and the perceptron weights define a hyperplane that slices the $n$-dimensional input space into two sub-spaces.
Taken by itself, a single perceptron isn’t very interesting. It’s just a fancy name for something that implements a linear partition. What starts to unlock its potential is training. You can take a perceptron and initialize all of its weights to 1, and then start testing it on some input data. Based on the results of the tests, you alter the weights. After enough cycles of repeating this, the perceptron can learn the correct weights for any linear classification.
The traditional representation of the perceptron is as a function $h$:
$\displaystyle h(x, \theta, b) = \left\{ \begin{array}{cl} 0, & x \cdot \theta + b < 0 \\ +1, & x \cdot \theta + b \ge 0 \end{array} \right.$
Using this model, learning is just an optimization process, where we’re trying to find a set of values for ${\theta}$ that minimize the errors in assigning points to subspaces.
A linear perceptron is an implementation of this model based on a very simple notion of a neuron. A perceptron takes a set of weighted inputs, adds them together, and then if the result exceeds some threshold, it “fires”.
A perceptron whose weighted inputs don’t exceed its threshold produces an output of 0; a perceptron which “fires” based on its inputs produces a value of +1.
Linear classification is very limited – we’d like to be able to do things that are more interesting than just linear. We can do that by adding one thing to our definition of a neuron: an activation function. Instead of just checking if the value exceeds a threshold, we can take the dot-product of the inputs, and then apply a function to them before comparing them to the threshold.
With an activation function $f$, we can define the operation of our more powerful perceptron in two phases. First, the perceptron computes the logit, which is the same old dot-product of the weights and the inputs. Then it applies the activation function to the logit, and based on the output, it decides whether or not to fire.
The logit is defined as:
$z = \left(\sum_{i=1}^{n} \theta_i x_i\right) + b$
And the perceptron as a whole is a classifier:
$\displaystyle h(x, \theta) = \left\{ \begin{array}{cl} 0, & f(z) < 0 \\ +1, & f(z) \ge 0 \end{array} \right.$
Like I said before, this gets interesting when you get to the point of training. The idea is that before you start training, you have a neuron that doesn’t know anything about the things it’s trying to classify. You take a collection of values where you know their classification, and you put them through the network. Each time you put a value through, you look at the result – and if it’s wrong, you adjust the weights of the inputs. Once you’ve repeated that process enough times, the edge-weights will, effectively, encode a curve (a line in the case of a linear perceptron) that divides between the categories. The real beauty of it is that you don’t need to know where the line really is: as long as you have a large, representative sample of the data, the perceptron will discover a good separation.
The concept is simple, but there’s one big gap: how do you adjust the weights? The answer is: calculus! We’ll define an error function, and then use the slope of the error curve to push us towards the minimum error.
Let’s say we have a set of training data. For each value $i$ in the training data, we’ll say that $t^{(i)}$ is the “true” value (that is, the correct classification) for value $i$, and $y^{(i)}$ is the value produced by the current set of weights of our perceptron. Then the
cumulative error for the training data is:
$E = \frac{1}{2}\sum_{i}(t^{(i)} - y^{(i)})^2$
$t^{(i)}$ is given to us with our training data. $y^{(i)}$ is something we know how to compute. Using those, we can view the error as a function of the weights.
Let’s think in terms of a two-input example again. We can create a three dimensional space around the ideal set of weights: the x and y axes are the input weights; the z axis is the size of the cumulative error for those weights. For a given error value $z$, there’s a contour curve through all of the weight settings that produce that level of error. All we need to do is follow the error surface downhill towards the minimum.
In the simple cases, we could just use Newton’s method directly to rapidly converge on the solution, but we want a general training algorithm, and in practice, most real learning is done using a non-linear activation function. That produces a problem: on a complex error surface, it’s easy to overshoot and miss the minimum. So we’ll scale the process using a meta-parameter $\epsilon$ called the learning rate.
For each weight, we’ll compute a change based on the partial derivative of the error with respect to the weight:
$\Delta \theta_k = - \epsilon \frac{\partial E}{\partial \theta_k}$
For our linear perceptron, using the definition of the cumulative error $E$ above, we can expand that out to:
$\Delta \theta_k = \sum_i \epsilon x_k^{(i)}(t^{(i)} - y^{(i)})$
So to train a single perceptron, all we need to do is start with everything equally weighted, and then run it on our training data. After each pass over the data, we compute the updates for the weights, and then re-run until the values stabilize.
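Here is a minimal sketch of that loop in Python with NumPy. It uses a per-example (online) form of the update rule derived above; the learning rate, epoch count, and the toy dataset are illustrative choices, not anything prescribed by the math:

```python
import numpy as np

def train_perceptron(X, t, epsilon=0.1, epochs=100):
    """Train a linear perceptron with the delta rule (online updates)."""
    n_samples, n_inputs = X.shape
    theta = np.ones(n_inputs)  # start with everything equally weighted
    b = 0.0
    for _ in range(epochs):
        for i in range(n_samples):
            y = 1 if X[i] @ theta + b >= 0 else 0
            # adjust each weight in proportion to the error on this example
            theta += epsilon * (t[i] - y) * X[i]
            b += epsilon * (t[i] - y)
    return theta, b

# learn the linear classification "x + y > 1" from four labeled points
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
t = np.array([0, 0, 0, 1])
theta, b = train_perceptron(X, t)
print(theta, b)  # a separating line; exact values depend on the schedule
```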
This far, it’s all pretty easy. But it can’t do very much: even with a complex activation function, a single neuron can’t do much. But when we start combining collections of neurons together, so that the output of some neurons become inputs to other neurons, and we have multiple neurons providing outputs – that is, when we assemble neurons into networks – it becomes amazingly powerful. So that will be our next step: to look at how to put neurons together into networks, and then train those networks.
As an interesting sidenote: most of us, when we look at this, think about the whole thing as a programming problem. But in fact, in the original implementation of perceptron, a perceptron was an analog electrical circuit. The weights were assigned using circular potentiometers, and the weights were updated during training using electric motors rotating the knob on the potentiometers!
I’m obviously not going to build a network of potentiometers and motors. But in the next post, I’ll start showing some code using a neural network library. At the moment, I’m still exploring the possible ways of implementing it. The two top contenders are TensorFlow, which is a library built on top of Python; and R, which is a statistical math system which has a collection of neural network libraries. If you have any preference between the two, or for something else altogether, let me know!
# A Review of Type Theory (so far)
I’m trying to get back to writing about type theory. Since it’s been quite a while since the last type theory post, we’ll start with a bit of review.
What is this type theory stuff about?
The basic motivation behind type theory is that set theory isn’t the best foundation for mathematics. It seems great at first, but when you dig in deep, you start to see cracks.
If you start with naive set theory, the initial view is amazing: it’s so simple! But it falls apart: it’s not consistent. When you patch it, creating axiomatic set theory, you get something that isn’t logically inconsistent – but it’s a whole lot more complicated. And while it does fix the inconsistency, it still gives you some results which seem wrong.
Type theory covers a range of approaches that try to construct a foundational theory of mathematics that has the intuitive appeal of axiomatic set theory, but without some of its problems.
The particular form of type theory that we’ve been looking at is called Martin-Löf type theory. M-L type theory is a constructive theory of mathematics in which computation plays a central role. The theory rebuilds mathematics in a very concrete form: every proof must explicitly construct the objects it talks about. Every existence proof doesn’t just prove that something exists in the abstract – it provides a set of instructions (a program!) to construct an example of the thing that exists. Every proof that something is false provides a set of instructions (also a program!) for how to construct a counterexample that demonstrates its falsehood.
This is, necessarily, a weaker foundation for math than traditional axiomatic set theory. There are useful things that are provable in axiomatic set theory, but which aren’t provable in a mathematics based on M-L type theory. That’s the price you pay for the constructive foundations. But in exchange, you get something that is, in many ways, clearer and more reasonable than axiomatic set theory. Like so many things, it’s a tradeoff.
The constructivist nature of M-L type theory is particularly interesting to weirdos like me, because it means that programming becomes the foundation of mathematics. It creates a beautiful cyclic relationship: mathematics is the foundation of programming, and programming is the foundation of mathematics. The two are, in essence, one and the same thing.
The traditional set theoretic basis of mathematics uses set theory with first order predicate logic. FOPL and set theory are so tightly entangled in the structure of mathematics that they’re almost inseparable. The basic definitions of type theory require logical predicates that look pretty much like FOPL; and FOPL requires a model that looks pretty much like set theory.
For our type theory, we can’t use FOPL – it’s part of the problem. Instead, Martin-Löf used intuitionistic logic. Intuitionistic logic plays the same role in type theory that FOPL plays in set theory: it’s deeply entwined into the entire system of types.
The most basic thing to understand in type theory is what a logical proposition means. A proposition is a complete logical statement with no unbound variables and no quantifiers. For example, “Mark has blue eyes” is a proposition. A simple proposition is a statement of fact about a specific object. In type theory, a proof of a proposition is a program that demonstrates that the statement is true. A proof that “Mark has blue eyes” is a program that does something like “Look at a picture of Mark, screen out everything but the eyes, measure the color C of his eyes, and then check that C is within the range of frequencies that we call ‘blue’.” We can only say that a proposition is true if we can write that program.
Simple propositions are important as a starting point, but you can’t do anything terribly interesting with them. Reasoning with simple propositions is like writing a program where you can only use literal values, but no variables. To be able to do interesting things, you really need variables.
In Martin-Löf type theory, variables come along with predicates. A predicate is a statement describing a property or fact about an object (or about a collection of objects) – but instead of defining it in terms of a single fixed value like a proposition, it takes a parameter. “Mark has blue eyes” is a proposition; “Has blue eyes” is a predicate. In M-L type theory, a predicate is only meaningful if you can write a program that, given an object (or group of objects) as a parameter, can determine whether or not the predicate is true for that object.
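To make the distinction concrete in programming terms, here is a toy sketch in Python. Everything in it (the data, the names, the decision procedure) is invented for illustration; real proofs-as-programs are far richer than a boolean test:

```python
# toy stand-in for "looking at a picture and measuring eye color"
EYE_COLORS = {"Mark": "blue", "Alice": "brown"}

def has_blue_eyes(person: str) -> bool:
    """The predicate: a program that decides the property for any given object."""
    return EYE_COLORS.get(person) == "blue"

# The proposition "Mark has blue eyes" is the predicate applied to one
# specific object; running the program plays the role of checking the proof.
print(has_blue_eyes("Mark"))  # True
```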
That’s roughly where we got to in type theory before the blog went on hiatus.
|
|
<< Previous Issue Kyungpook Mathematical Journal (Vol. 55, No. 1) Next Issue >>
Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 1—224Download front and back covers
On Skew Centralizing Traces of Permuting $n$-Additive Mappings Mohammad Ashraf and Nazia Parveen MSC numbers : 16W25, 16Y30 Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 1—12
Distributivity on the Gyrovector Spaces Sejong Kim MSC numbers : 20N05, 81R05 Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 13—20
A Note on Skew-commuting Automorphisms in Prime Rings Nadeem ur Rehman and Tarannum Bano MSC numbers : 16N60, 16W20 Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 21—28
Functional Equations associated with Generalized Bernoulli Numbers and Polynomials Cheon Seoung Ryoo1, Dmitry Victorovich Dolgy2,3, Hyuck In Kwon4 and Yu Seon Jang5 MSC numbers : 11B68, 11S80. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 29—39
Some Additive Maps on Sigma Prime Rings Mohammad Mueenul Hasnain and Mohd Rais Khan MSC numbers : 16W10, 16W25, 16U80. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 41—50
Fuzzy Prime Ideals of Pseudo-LBCK-algebras Grzegorz Dymek1 and Andrzej Walendziak2 MSC numbers : 03G25, 06F35. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 51—62
Range Kernel Orthogonality and Finite Operators Salah Mecheri MSC numbers : 47B47, 47A30, 47B20, 47B10. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 63—71
Refined Stability Results of Functional Equation in Four Variables Hark-Mahn Kim and Soon Lee MSC numbers : 39B52, 39B82. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 73—81
Convolution on a Generalized Class of Harmonic Univalent Functions Saurabh Porwal1 and Kaushal Kishore Dixit2 MSC numbers : 30C45, 26D05. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 83—89
On the Stability of a Mixed Type Functional Equation Yang-Hi Lee1 and Soon-Mo Jung2 MSC numbers : 39B82, 39B52. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 91—101
Hyers-Ulam Stability of Pompeiu's Point Jinghao Huang and Yongjin Li MSC numbers : 34K20, 26D10. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 103—107
New Sufficient Conditions for Starlikeness of Certain Integral Operator Akshaya Kumar Mishra1 and Trailokya Panigrahi2 MSC numbers : 30C45. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 109—118
Exposed Bilinear Forms of ${\mathcal L}(^2d_{*}(1, w)^2)$ Sung Guen Kim MSC numbers : 46A22. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 119—126
On Normalized Tight Frame Wavelet Sets Swati Srivastava MSC numbers : 42C40, 65T60. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 127—135
Meromorphic Functions Sharing a Nonzero Value with their Derivatives Xiao-Min Li1, Rahman Ullah1, Da-Xiong Piao1 and Hong-Xun Yi2 MSC numbers : 30D35, 30D30. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 137—147
Some Symmetric Properties on $(LCS)_{n}$-manifolds Venkatesha and Rahuthanahalli Thimmegowda Naveen Kumar MSC numbers : 53C15, 53C20, 53C25. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 149—156
On the $f$-biharmonic Maps and Submanifolds Kaddour Zegga1, A. Mohamed cherif1 and Mustapha Djaa2 MSC numbers : 53A45, 53C20, 58E20. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 157—168
On the Braid Index of Kanenobu Knots Hideo Takioka MSC numbers : 57M25, 57M27. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 169—180
Weak Baire Spaces V. Renukadevi and T. Muthulakshmi MSC numbers : 54A05, 54C08, 54E52. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 181—189
On a Quasitoric Virtual Braid Presentation of a Virtual Link Yongju Bae and Seogman Seo MSC numbers : 57M25, 57M27. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 191—203
Some Common Fixed Point Theorems via Generalized $c$-Distance Sushanta Kumar Mohanta and Rima Maitra MSC numbers : 54H25, 47H10. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 205—218
Hyperspaces and the S-equivariant Complete Invariance Property Saurabh Chandra Maury MSC numbers : 57Sxx, 55M20, 54H25, 54B20. Kyungpook Mathematical Journal 2015 Vol. 55, No. 1, 219—224
|
|
# Categories and Functors
Before we begin, I must say that this topic may not be as accessible as the other posts on this blog. It is intended for those who have had at least an introductory course in abstract algebra. I do intend on making quite a few posts like this but I will definitely be doing the fun accessible ones as well.
A personal interest of mine is Category Theory. This field of mathematics acts as a unifier for many mathematical fields. To be more precise, the development of Category Theory was spurred by the study of Topology. A huge goal of topology is to be able to say whether or not two spaces are homeomorphic (i.e. whether there is a continuous bijection with a continuous inverse between two topological spaces). However, as one may see, it is a sizable problem without a general solution. Moreover, this problem is really hard to solve using only topological tools. If we could employ the tools of another area of mathematics with a seemingly endless toolbox, say Algebra, then we may ease our burden. Enter the notion of categories and functors.
Definition: A category $\mathcal{C}$ is a collection, $\mathrm{Ob}(\mathcal{C})$, of objects along with a set, $\mathrm{Mor}(\mathcal{C})$, of arrows that satisfy the following conditions
• To each arrow $f \in \mathrm{Mor}(\mathcal{C})$ there corresponds objects $A$ and $B$ which form the source (domain) and target (codomain) of $f$ respectively. We write $f : A \to B$ for any morphism.
• If $f : A \to B$ and $g : B \to C$ are arrows then $f \circ g$ is an arrow with source $s(f) = A = s(f \circ g)$ and target $t(g) = C = t(f \circ g)$. That is $f \circ g: A\to C$ is an arrow. This is called composition.
• Composition is associative. That is $(f \circ g) \circ c = f \circ (g \circ c)$ provided that the composition exists.
• For every object $X$ there corresponds an arrow $\mathrm{id}_X : X \to X$ so that $f \circ \mathrm{id}_X = f$ and $\mathrm{id}_X \circ g = g$ where $f: Y \to X$ and $g: X \to Y$. This arrow is called the identity arrow on $X$.
While this seems like a lot to take in, every day mathematicians work with categories without even realizing it. To see this let us look at many examples of categories.
Examples:
• $\mathbf{Set}$ is the category of sets and functions
• $\mathbf{Top}$ is the category of topological spaces and continuous functions
• $\mathbf{Group}$ is the category whose objects are groups (from algebra) and whose arrows are homomorphisms (maps of the form $\phi : G \to H$ where $\phi(g_1 \cdot_G g_2) = \phi(g_1) \cdot_H \phi(g_2)$).
• $\mathbf{Ab}$ is the category of abelian groups (groups where commutativity holds) and whose arrows are homomorphisms
• $\mathbf{Ring}$ is the category of rings with ring homomorphisms
• Let $F$ be a field. Then $\mathbf{Vect}_F$ is the category of finite dimensional vector spaces with arrows as linear maps.
• We can also have silly categories such as $\mathbf{0}$, which is the empty category with no objects and no arrows, and $\mathbf{1}$ which is the category of one object and the identity arrow.
• Try to think of a few more categories by yourself!
Before we move on, we will look at the categorical notion of homeomorphism, isomorphism, etc.
Definition: An equivalence in a category $\mathcal{C}$ is an arrow $f: A \to B$ so that there is another arrow $g: B\to A$ where $f \circ g = \mathrm{id}_B$ and $g \circ f = \mathrm{id}_A$.
It is easy to see that an equivalence in $\mathbf{Top}$ is a homeomorphism (a continuous bijection with continuous inverse) and an equivalence in $\mathbf{Group}$ is a group isomorphism (bijective homomorphism).
Okay, we will move on to see just exactly how the problem of turning a topological problem into an algebraic one is handled. Often in mathematics when we define a structure we have an idea of maps between them. This is handled with categories as follows.
Definition: For two categories $\mathcal{C}$ and $\mathcal{D}$, a functor $F : \mathcal{C} \to \mathcal{D}$ satisfies the following:
• For every object $A$ of $\mathcal{C}$ there corresponds the object $F(A)$ in $\mathcal{D}$. That is to say that $F$ is a function on the objects of the categories.
• $F$ is also a function on the morphisms: for $f: A\to B$ we have $F(f): F(A) \to F(B)$, but it must preserve a little more structure:
• For arrows $f: A\to B$ and $g: B \to C$ we have $F(f \circ g) = F(f) \circ F(g)$.
• $F(\mathrm{id}_X) = \mathrm{id}_{F(X)}$
Now to end this post we see a theorem which will bring us back to the problem set forth in the beginning.
Theorem: Let $\mathcal{C}$ and $\mathcal{D}$ be categories. If $F: \mathcal{C} \to \mathcal{D}$ is a functor and if $f$ is an equivalence in $\mathcal{C}$ then $F(f)$ is an equivalence in $\mathcal{D}$.
We will not prove this one together, but the proof is very short and I recommend trying to prove it. With this we see that if we have a functor from $\mathbf{Top}$ to $\mathbf{Group}$, then a homeomorphism of spaces is carried to an isomorphism of groups. This problem spurred an entire field of mathematics that is known as Algebraic Topology. Later on, we shall look at an explicit example of how we execute this method.
“Let V be an n-dimensional vector space over the field F and W an m-dimensional vector space over F. Let $\mathcal{B}$ be an ordered basis for V and $\mathcal{B}'$ an ordered basis for W. For each linear transformation T from V into W, there is an m x n matrix A with entries in F such that $[T\alpha]_{\mathcal{B}'} = A[\alpha]_{\mathcal{B}}$ for every vector $\alpha$ in V. Furthermore, $T \rightarrow A$ is a one-one correspondence between the set of all linear transformations from V into W and the set of all m x n matrices over the field F.”
|
|
Design a matrix from a list with use of R or linux
1
0
Entering edit mode
5 months ago
hosin • 0
Hello there, I have a list.txt (big file) that contains 2000 samples and 18000 coordinates (as in the example below).
Coordinates Sample Values
chr1:110238914-110324454 SampleB 1
chr1:110238914-110324454 SampleC 3
chr1:110238914-110324454 SampleD 1
chr5:65562670-65627908 SampleD 1
chr5:65562670-65627908 SampleA 1
chr5:65562670-65627908 SampleB 4
chr5:65562670-65627908 SampleC 1
chr2:158248715-158335919 SampleB 1
chr2:158248715-158335919 SampleA 0
chr2:158248715-158335919 SampleC 1
Actually I want to make a matrix from the above file, with coordinates as row names and samples as column names. If a coordinate has a value for a sample, put that value in the matrix; if it does not, put 2. The result should look like the table below.
Coordinates SampleA SampleB SampleC SampleD
chr1:110238914-110324454 2 1 3 1
chr5:65562670-65627908 1 4 1 1
chr2:158248715-158335919 0 1 1 2
I would really appreciate any scripts for Linux/bash (preferably) or R to get this result.
R • 401 views
0
Entering edit mode
Relevant post from SO - "reshape long to wide":
2
Entering edit mode
5 months ago
Ram 33k
You can use tidyr::pivot_wider to get to what you need. Your problem here is the simplest case, so figuring out the exact usage from the manual should be easy enough.
1
Entering edit mode
In case the OP runs into trouble, here's the exact code for their data. df <- tidyr::pivot_wider(df, names_from="Sample", values_from="Values").
0
Entering edit mode
Thanks for this information; how do I put 2 for samples that do not have the coordinate?
0
Entering edit mode
We need to apply complete before reshaping.
0
Entering edit mode
If I understand you correctly you can add the argument values_fill=2.
0
Entering edit mode
df <- tidyr::pivot_wider(df, names_from="Sample", values_from="Values", values_fill = 2) Alright , I found it in the manual, anyway thanks all
0
Entering edit mode
Sorry for subsequent messages, I'm receiving this error several times:
Error in UseMethod("tbl_vars") :
no applicable method for 'tbl_vars' applied to an object of class "function"
0
Entering edit mode
Can you post your current code here?
0
Entering edit mode
df <- tidyr::pivot_wider(df, names_from="Sample", values_from="Values", values_fill = 2)
0
Entering edit mode
What code are you using to define df?
0
Entering edit mode
I just followed this manual: "https://tidyr.tidyverse.org/reference/pivot_wider.html" As mentioned above, that is the exact code. I did not define a data frame. This is our input file:
Coordinates Sample Values
chr1:110238914-110324454 SampleB 1
chr1:110238914-110324454 SampleC 3
chr1:110238914-110324454 SampleD 1
chr5:65562670-65627908 SampleD 1
chr5:65562670-65627908 SampleA 1
chr5:65562670-65627908 SampleB 4
chr5:65562670-65627908 SampleC 1
chr2:158248715-158335919 SampleB 1
chr2:158248715-158335919 SampleA 0
chr2:158248715-158335919 SampleC 1
0
Entering edit mode
How are you defining the data frame on which you're running the pivot_wider? Please show us as much of your code as you can, or we cannot really help you.
0
Entering edit mode
> mydat=read.table(file.choose())
Error in file(file, "rt") : cannot open the connection
In file(file, "rt") :
cannot open file 'list.txt': No such file or directory
0
Entering edit mode
You need to read the file into a data.frame named 'df' first. df <- read.table("file.txt", sep="\t", header=TRUE, stringsAsFactors=FALSE). Change the file name and delimiter as appropriate.
0
Entering edit mode
Sorry, previously I got this error
> mydat=read.table(file.choose())
Error in file(file, "rt") : cannot open the connection
In file(file, "rt") :
cannot open file 'list.txt': No such file or directory
0
Entering edit mode
That's a problem you can solve yourself using some Google. You're having problems reading the dataset, not processing it.
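Putting the pieces from this thread together, a minimal end-to-end sketch in R (the file name and delimiter are assumptions; adjust to your data):

```r
library(tidyr)

# read the long-format table (whitespace-delimited, with a header row)
df <- read.table("list.txt", header = TRUE, stringsAsFactors = FALSE)

# one row per coordinate, one column per sample; sample/coordinate
# pairs that are absent from the input are filled with 2
wide <- pivot_wider(df,
                    names_from  = "Sample",
                    values_from = "Values",
                    values_fill = 2)
print(wide)
```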
|
|
Measurement of the production cross section for single top quarks in association with W bosons in proton-proton collisions at $\sqrt s$ =13 TeV
Abstract
A measurement is presented of the associated production of a single top quark and a W boson in proton-proton collisions at $\sqrt{s} = 13\,\text{TeV}$ by the CMS Collaboration at the CERN LHC. The data collected correspond to an integrated luminosity of $35.9\,\text{fb}^{-1}$. The measurement is performed using events with one electron and one muon in the final state, along with at least one jet originating from a bottom quark. A multivariate discriminant, exploiting the kinematic properties of the events, is used to separate the signal from the dominant $t\bar{t}$ background. The measured cross section of $63.1 \pm 1.8\,\text{(stat)} \pm 6.4\,\text{(syst)} \pm 2.1\,\text{(lumi)}\,\text{pb}$ is in agreement with the standard model expectation.
|
|
## Jupiter500 one year ago

One method you can use to determine whether a triangle is a right triangle, given three side lengths, is to apply the Converse of the Pythagorean Theorem. Alternately, you can use trigonometric ratios. Show that the triangle in the diagram is a right triangle by using trigonometric ratios. (Be sure to show all work and/or reasoning.) Hint: use the Inverse Trigonometric Functions to solve for $m\angle B$ and $m\angle C$ and then use the Triangle Sum Theorem.

Is this correct? I did $A^2+B^2=C^2$: $33^2+56^2=65^2$ gives $1{,}089+3{,}136=4{,}225$, and indeed $4{,}225=4{,}225$. So yes, it is a right triangle.
• This Question is Closed
1. Nnesha
yep :-) both sides are equal gO_Od to go!
2. Jupiter500
Thank you!
3. Nnesha
what diagram ?
4. Jupiter500
A Triangle
5. Nnesha
okay you didn't post that :-)
6. Nnesha
sure :-)
7. Nnesha
alright there is *inverse trig * so i just wanted to make sure :-)
8. Nnesha
btw there is a attach file button(blue one) you can attach file :-) gO_Od luck!
9. Nnesha
and $$\huge\color{Green}{{\rm welcome}\rm~to~open~study!!!!}$$ $$\Huge \color{gold}{\star^{ \star^{\star:)}}}\Huge \color{green}{\star^{ \star^{\star:)}}}$$ $$\Huge \color{blue}{\star^{ \star^{\star:)}}}\Huge \color{red}{\star^{ \star^{\star:)}}}$$ $$\Huge \color{orange}{\star^{ \star^{\star:)}}}\Huge \color{purple}{\star^{ \star^{\star:)}}}$$$$\rm\color{green}{o^\wedge\_^\wedge o}$$
10. Jupiter500
Thank you! Yeah, there is a trig function involved; that's why I don't know if I used the right formula.
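For completeness, here is the trigonometric-ratio route the hint describes, assuming the sides of length 33 and 56 are opposite $\angle B$ and $\angle C$ respectively and 65 is the hypotenuse: $$m\angle B=\sin^{-1}\!\left(\tfrac{33}{65}\right)\approx 30.5^\circ,\qquad m\angle C=\sin^{-1}\!\left(\tfrac{56}{65}\right)\approx 59.5^\circ.$$ By the Triangle Sum Theorem, $m\angle A = 180^\circ - m\angle B - m\angle C = 90^\circ$, so the triangle has a right angle at $A$.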
|
|
# Condition for block symmetric real matrix eigenvalues to be real
I have a $(2n \times 2n)$ block symmetric matrix that in the simplest case ($n=2$) looks like: $$M_2 = \begin{bmatrix} a_1 & 0 & b_{1,2} & -b_{1,2}\\ 0 & -a_1 & b_{1,2} & -b_{1,2}\\ b_{1,2} & -b_{1,2} & a_2 & 0 \\ b_{1,2} & -b_{1,2} & 0 & -a_2 \end{bmatrix}$$
All the elements are real. The general matrix then has this form: $$M_n = \begin{bmatrix} a_1 & 0 & b_{1,2} & -b_{1,2} & & b_{1,n-1} & -b_{1,n-1} & b_{1,n} & -b_{1,n}\\ 0 & -a_1 & b_{1,2} & -b_{1,2} & \ldots & b_{1,n-1} & -b_{1,n-1} & b_{1,n} & -b_{1,n}\\ b_{1,2} & -b_{1,2} & a_{2} & 0 & & b_{2,n-1} & -b_{2,n-1} & b_{2,n} & -b_{2,n} \\ b_{1,2} & -b_{1,2} & 0 & -a_{2} & & b_{2,n-1} & -b_{2,n-1} & b_{2,n} & -b_{2,n}\\ & \vdots & & & \ddots & & \vdots & \\ b_{1,n-1} & -b_{1,n-1} & b_{2,n-1} & -b_{2,n-1} & \ldots & a_{n-1} & 0 & b_{n,n-1} & -b_{n,n-1}\\ b_{1,n-1} & -b_{1,n-1} & b_{2,n-1} & -b_{2,n-1} & \ldots & 0 & -a_{n-1} & b_{n,n-1} & -b_{n,n-1}\\ b_{1,n} & -b_{1,n} & b_{2,n} & -b_{2,n} & & b_{n,n-1} & -b_{n,n-1} & a_{n} & 0 \\ b_{1,n} & -b_{1,n} & b_{2,n} & -b_{2,n} & \ldots & b_{n,n-1} & -b_{n,n-1} & 0 & -a_{n} \end{bmatrix}$$
Now, I am solving the eigenproblem numerically for various dimensions of M, and I always find the eigenvalues to be real for my values of $\{a_i\}$ and $\{b_{i,j}\}$.
I have the feeling that this is because in general the values $a_i$ on the diagonal are bigger than the off-diagonal elements $b_{i,j}$, but I would like to state a rule for this, because I want to be sure that in no case I will find complex eigenvalues.
Can anyone help me find out what is the condition for the eigenvalues of $M$ to be all real?
Thank you!
Note: To be a little more precise, the relation between the matrix elements is $$b_{ij} = C_{ij}\frac{c_ic_j}{2\sqrt{a_i a_j}}$$ with $|C_{ij}|<1$ and $c_i < a_i$. In the case $M_2$, where I can easily calculate the characteristic polynomial, I can show using this relation that eigenvalues are real. Maybe the higher dimension cases can be proved by induction? I tried but failed!
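For concreteness, here is a minimal NumPy sketch of the numerical check described above for the $n=2$ case. The parameter values are illustrative, chosen to satisfy $|C_{ij}|<1$ and $c_i < a_i$:

```python
import numpy as np

a = np.array([2.0, 3.0])          # diagonal parameters a_i (illustrative)
c = np.array([1.0, 2.0])          # with c_i < a_i
C12 = 0.5                         # |C_12| < 1
b12 = C12 * c[0] * c[1] / (2.0 * np.sqrt(a[0] * a[1]))

M2 = np.array([
    [a[0],   0.0,  b12, -b12],
    [0.0,  -a[0],  b12, -b12],
    [b12,   -b12, a[1],  0.0],
    [b12,   -b12,  0.0, -a[1]],
])

eigvals = np.linalg.eigvals(M2)
print(np.sort(eigvals.real))              # comes out in +/- pairs
print(np.max(np.abs(eigvals.imag)))       # ~0 here, i.e. a real spectrum
```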
## 1 Answer
This is not a complete answer, but it's a bit too long for a comment.
First notice that ${\rm Det}\;(\lambda-M)={\rm Det}\;(-\lambda-M)$ and $\overline{{\rm Det}\;(\lambda-M)}={\rm Det}\;(\bar{\lambda}-M)$; it follows that the eigenvalues $\lambda$ come in pairs $+\lambda,-\lambda$ and $\lambda,\bar{\lambda}$. If you fix the $a_i$'s and the $c_i$'s, and then follow the evolution of the eigenvalues with increasing $C$, starting from $C=0$, you will find that the eigenvalues all start off on the real axis, arranged symmetrically around the origin. Then at some critical value $C_0$ of $C$ a pair of eigenvalues meets at the origin, and takes off in opposite directions along the imaginary axis.
To calculate this critical value of $C$, we demand that the determinant of $M$ vanishes; for the simple case $n=2$ this happens at
$$C_0=\frac{a_1 a_2}{c_1 c_2},$$
in agreement with your finding that all eigenvalues are real if $c_i{<}a_i$ and $C<1$.
For larger $n$ it remains to prove that $C_0>1$ if $c_i{<}a_i$ for all $i$.
Hi Carlo! Thank you for your reply and thank you for explaining why the eigenvalues come in pairs, I observed it numerically but did not write it down formally. However, there's a further complication I did not explain well, the coefficient $C$ is not a constant, but actually a $C_{ij}$ depending on the $i,j$ pair, so I cannot follow what you say about a critical value of $C$ (sorry, my fault, I was not precise enough in the question.) It holds anyway that $|C_{ij}|<1$ for any $i,j$. – Giulia Dec 4 '12 at 16:44
|
|
Moisture is condensed or diffused liquid, especially water ("moisture in the air"); the term also refers to the amount of water vapor present in the air. The moisture content of a material is usually expressed as the percentage by mass of the water present relative to the material's dry weight. It is one of the most commonly measured properties of food materials, since it influences the taste, texture, weight, appearance, and shelf life of foodstuffs, and even a slight deviation from a defined standard can adversely impact a food's physical properties; control of moisture is therefore often a vital part of processing a product. In the case of coffee and cocoa beans, for example, ensuring a bean dries correctly is essential in order to optimize its quality potential and minimize the chance of problems: beans with too much moisture will mold when packaged, while beans that are too dry lose some of their flavor.
Gravimetric water content is expressed by mass. On the dry basis it is the mass of water per unit mass of dry solid:
$$u = \frac{m_w}{m_s},$$
where $m_w$ is the mass of water and $m_s$ is the mass of the solids. For materials that change in volume with water content, such as coal, the gravimetric water content is instead expressed per unit mass of the moist specimen before drying, $u' = m_w / m_{\text{wet}}$. From the Annual Book of ASTM (American Society for Testing and Materials) Standards, the total evaporable moisture content in aggregate (C 566) can be calculated as
$$p = \frac{100\,(W - D)}{D},$$
where $W$ and $D$ are the masses of the sample before and after drying in the oven. Unfortunately, moisture content is often reported only as a percentage, without any indication of which basis was used. For wood, the convention is to report moisture content on the oven-dry basis, so a 10% moisture content reading means that roughly 10% of the sample's weight is due to water in the wood.
Volumetric water content $\theta$ is calculated via the volume of water $V_w$ and the total volume of the wet material:
$$\theta = \frac{V_w}{V_{\text{wet}}}.$$
In soil mechanics and petroleum engineering, the water saturation or degree of saturation is defined as $S_w = V_w / V_v$, where $\phi = V_v / V$ is the porosity expressed in terms of the volume of void or pore space $V_v$. Values of $S_w$ can range from 0 (dry) to 1 (saturated); in reality $S_w$ never quite reaches 0 or 1, and these endpoints are idealizations for engineering use. Several standard water contents are routinely measured in soil science: the saturated water content $\theta_s$, field capacity, the permanent wilting point, and the residual water content $\theta_r$, defined as the water content for which the gradient $d\theta/dh$ becomes zero. The available water content $\theta_a$ is the difference between field capacity and the permanent wilting point, and can range between roughly 0.1 in gravel and 0.3 in peat; the effective saturation is $S_e = (\theta - \theta_r)/(\theta_s - \theta_r)$.
Typically, moisture content is determined via a thermogravimetric approach, i.e. by loss on drying, in which the sample is heated and the weight loss due to evaporation of moisture is recorded; water content can thus be measured directly using a drying oven. Other methods include chemical titrations (for example the Karl Fischer titration), determining mass loss on heating (perhaps in the presence of an inert gas), or freeze drying; in the food industry the Dean-Stark method is also commonly used. In situ, soil water content can be measured with probes that attach to hand-held computers: when the probe is inserted into the soil and activated, it provides an instant reading (see Wessel-Bothe and Weihermüller (2020), Field Measurement Methods in Soil Science). Geophysical methods that are sensitive to the physical properties of water include time-domain reflectometry (TDR), neutron probes, frequency domain sensors, capacitance probes, amplitude domain reflectometry, electrical resistivity tomography, and ground penetrating radar (GPR). Data from microwave remote sensing satellites such as WindSat, AMSR-E, RADARSAT, ERS-1 and ERS-2, Metop/ASCAT, and SMAP are used to estimate surface soil moisture, since the microwave signal can penetrate through clouds and, to a certain extent, vegetation.
In soil science, hydrology, and the agricultural sciences, water content has an important role for groundwater recharge, agriculture, and soil chemistry. Soil moisture is a key variable in controlling the exchange of water and heat energy between the land surface and the atmosphere through evaporation and plant transpiration, and hence in the development of weather patterns and the production of precipitation; many farmers monitor soil moisture to schedule irrigation. Antecedent moisture, the relative wetness or dryness of a sewershed ("antecedent" simply means "preceding conditions"), changes continuously and can have a very significant effect on flow responses during wet weather. The wetting front between the saturated and unsaturated zones in soils often involves a process of fingering, resulting from Saffman–Taylor instability, which produces an unstable interface between saturated and unsaturated regions; such conditions are common in arid and semi-arid environments. As a material dries out, the connected wet pathways through the medium become smaller and the hydraulic conductivity decreases with lower water content in a very non-linear fashion, which is one of the main complications in studying the vadose zone. In geotechnics, compacting a soil at different water contents traces out the compaction curve; the water content at which maximum dry density is achieved is called the optimum moisture content (OMC), and water is added or removed to bring the soil to this content for full compaction.
An aggregate has four different moisture conditions, of which saturated surface dry (SSD) is the one with the most applications in laboratory experiments and studies, especially those related to water absorption, mix composition, or shrinkage in materials like concrete. The water absorption by mass $A_m$ is defined in terms of the mass of the saturated-surface-dry sample $M_{\text{ssd}}$ and the mass of the oven-dried test sample $M_{\text{dry}}$ by
$$A_m = \frac{M_{\text{ssd}} - M_{\text{dry}}}{M_{\text{dry}}}.$$
Moisture content is also crucial when purchasing, drying, or machining wood, and a moisture meter gives a reading of the approximate moisture content of a piece. Using wood with a moisture content above 14% is not recommended because it may have detrimental long-term effects on the construction: according to M. Steven Doggett, founder of Built Environments, Inc., wood moisture content as high as 15% can cause corrosion of metal fasteners, and 16% may lead to fungal growth. Wood that is too dry during installation, for example as flooring, may absorb moisture from the air at the installation site and then bow, buckle, or form large cracks once that moisture begins to evaporate, so it is advisable to have a floor tested for moisture content prior to installing flooring or moisture barriers. Conversely, rehydrating wood by soaking it can allow it to be bent to create specific shapes, as when constructing curved products such as guitars. In wood-based materials, moisture below the fiber saturation point consists mainly of adsorbed water at internal surfaces and capillary condensed water in small pores (almost all of it is adsorbed at humidities below 98% RH); beyond the fiber saturation point a moisture meter's accuracy is greatly reduced and most readings are no longer useful. During drying, the rate is constant at first; during the falling-rate period the drying rate $N$ is no longer constant.
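As a worked illustration of the formulas above, here is a short Python sketch (the function names are our own, not from any cited standard):

```python
# Moisture-content calculations from oven-drying masses and pore volumes.

def moisture_dry_basis(mass_wet: float, mass_dry: float) -> float:
    """ASTM C566-style total evaporable moisture: p = 100 (W - D) / D."""
    return 100.0 * (mass_wet - mass_dry) / mass_dry

def moisture_wet_basis(mass_wet: float, mass_dry: float) -> float:
    """Water mass as a percentage of the moist specimen: 100 m_w / m_wet."""
    return 100.0 * (mass_wet - mass_dry) / mass_wet

def degree_of_saturation(v_water: float, v_voids: float) -> float:
    """S_w = V_w / V_v; ranges from 0 (dry) to 1 (saturated)."""
    return v_water / v_voids

# Example: a 125 g wood sample dries to 100 g in the oven.
print(moisture_dry_basis(125, 100))      # 25.0 (% on the oven-dry basis)
print(moisture_wet_basis(125, 100))      # 20.0 (% on the wet basis)
print(degree_of_saturation(0.12, 0.40))  # 0.3
```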
|
|
# Economics ?
## Main Question or Discussion Point
What are your opinions on the discipline of Economics ? Do you think it has the potential to be used to help people, or do you think it is simply a tool used by the rich and powerful ?
Is Economics a science ? Is the mathematics behind it sound ?
I have friends who say it is fluff, that it is nothing more.
What do you think about Economics ? I have noticed as of late that policies that can be demonstrated to help the average person (such as higher minimum wages) under the right conditions are largely ignored.
I am interested to hear what some of you sound mathematicians think of Economics.
Last edited:
russ_watters
Mentor
You have a degree in economics and can't answer these questions yourself? What do you think the answers are?
Saying that economics is a tool for the rich and powerful sounds pretty silly to me. Economics is merely the study of how the economy works. Anyone who wants to understand how the economy works can use it and benefit from the knowledge (and presumably get a job in a related field).
Policies like a higher minimum wage are problematic due to the secondary effects. As you know, the labor market is a real market, and supply and demand applies. So if the price of labor increases, the demand will decrease - unemployment will increase. How much....well, that's what economists are for. But such policies are also understandably politicised.
Last edited:
frankly...we can do so many cool things with science and engineering with perfect precision, but we can't somehow organise our economy to FEED and give everyone a job...and not have to live in relative poverty.
What ? the mathematicians can't figure that one out ? or are we being run by flamboyant retards (economists) ?
Dale
Mentor
frankly...we can do so many cool things with science and engineering with perfect precision, but we can't somehow organise our economy to FEED and give everyone a job...and not have to live in relative poverty.
What ? the mathematicians can't figure that one out ? or are we being run by flamboyant retards (economists) ?
I am not sure whose economy you are talking about. It certainly isn't the US economy.
There is no real poverty in the US (relative poverty is rather meaningless), nor is hunger a real problem (obesity is the number one health problem for our "poor"), and unemployment is about as low as it can be without causing real problems (full employment would be disastrous to the economy).
The flambouyant retards (politicians) try to mess things up, but so far the people have managed to keep them away enough to not cripple the economy.
Last edited:
That is exactly what I mean. "Relative poverty is rather meaningless" - why so ? whose purpose does it serve to ignore relative poverty ?
You're just excluding the concept of poverty based on a comparison to a different beast (a developing nation). Could an engineer get away with something similar ? comparing an aspect of his design to something made in the 50's ? "Relative inefficiency is meaningless".
Dale
Mentor
That is exactly what I mean. "Relative poverty is rather meaningless" - why so ? whose purpose does it serve to ignore relative poverty ?
You're just excluding the concept of poverty based on a comparison to a different beast (a developing nation). Could an engineer get away with something similar ? comparing an aspect of his design to something made in the 50's ? "Relative inefficiency is meaningless".
Relative inefficiency is meaningless too: an engineer would never quote an efficiency number in terms of other designs, past or present. An engineer would simply tell you what the efficiency of his design is. It is a meaningful number on its own.
Relative poverty is a silly concept precisely because it is relative*. A person living in a $500k home eating 5,000 calories per day would be considered impoverished if his neighbors had more. That is a meaningless definition of poverty, and since it is meaningless it best serves everyone (including the relatively poor) to ignore it.
A better question is: "Relative poverty is a meaningless concept, so whose purpose does it serve to support it?"
*Relative numbers can only be meaningful in general if they are referenced to a stable standard. This is the case for e.g. kilograms, but not the case for poverty.
Last edited:
The poverty line is somewhere between $10,000 and $18,000 per year; I fail to see how many people living that cheaply would have a $500,000 home. What you described is relative wealth, not relative poverty.
russ_watters
Mentor
That is exactly what I mean. "Relative poverty is rather meaningless" - why so ? whose purpose does it serve to ignore relative poverty ?
You're just excluding the concept of poverty based on a comparison to a different beast (a developing nation). Could an engineer get away with something similar ? comparing an aspect of his design to something made in the 50's ? "Relative inefficiency is meaningless".
Are you really an economist? I guess it shouldn't surprise me, I have seen such things before, but it truly disappoints and disturbs me when I see such things.
Last edited:
russ_watters
Mentor
The poverty line is somewhere between $10,000 and $18,000 per year; I fail to see how many people living that cheaply would have a $500,000 home. What you described is relative wealth, not relative poverty.
The point is that using that standard as our poverty line then requires defining probably about 90% of the world population as impoverished. The way we define poverty enables people with cars, refrigerators, and air conditioning to be labeled as "impoverished" while in other places in the world there are people in real need of basic necessities such as food and shelter. The way the word is used by those in a position to define it (i.e., government agencies and the UN) is not based on real need/material condition. And this is in an era where technological and economic advancement have enabled rapid and truly spectacular improvements in material condition. The way the word is used is intentionally deceptive and manipulative - it is used for political purposes only and has no real basis in objectivity. It is unscientific.
I.e., it can be said that the poverty rate in the US has not decreased significantly in the past 50 years or so (using the definitions given by the government agencies responsible for tracking it). A logical person would take this to mean that the human condition is not improving in the US. But that isn't true. In fact, living conditions have improved dramatically in the US in the last 50 years.
Is Economics a science ?
Economics is capable of being science and probably should be scientific (it claims to be, after all), but whether it is treated scientifically by economists is another matter entirely. It appears to me that a great many economists are more politicians than scientists and use economics as a political game rather than a real scientific endeavour.
I believe that a field that claims to be scientific really should be scientific. It should be based on math and logic.
Last edited:
Are you really an economist? I guess it shouldn't surprise me, I have seen such things before, but it truly disappoints and disturbs me when I see such things.
I don't see the need to try and be so insulting. Mine is simply a different point of view based on my experiences.
Dale
Mentor
The poverty line is somewhere between $10,000 and $18,000 per year; I fail to see how many people living that cheaply would have a $500,000 home. What you described is relative wealth, not relative poverty.
You obviously missed my point completely, so I will try again.
My point is that no honest person would consider someone in a big house eating lots of food to be impoverished, but that is precisely the sort of thing that can happen under the current relative definition of poverty.
The definition of poverty is someone that makes less than half of the median income. So, let's say that the median income level today is $36k. A median income of $36k provides for a comfortable lifestyle that nobody would call impoverished: a 4,000-5,000 calorie/day diet, a ~2,400 sq. ft. home, utilities, health care, a car, cable TV, a/c, etc. Now, assume that GDP goes up by 6%/year and that inflation goes up by 3%/year over 25 years. After 25 years it will take an income of $75k to purchase the exact same comfortable lifestyle that nobody would call impoverished. However, the median income is now $155k, so suddenly the $75k lifestyle (the same diet, home, utilities, health care, car, cable TV, a/c, etc.) is considered impoverished. That is silly.
Similarly, by this definition of poverty (half the median income), if half of the population were starving then there would be a large portion of the population between the median income and the poverty income that would be starving but not considered poor. That is also silly.
Any way you look at it, a relative definition of poverty is silly and it benefits no one other than politicians.
"Any way you look at it, a relative definition of poverty is silly and it benefits no one other than politicians."
It benefits the relatively poor people when the government decides to try and help them specifically.
Economics is far from "fluff". The mathematics are very sound and are used to a very high degree of accuracy, though the models or statistics being used in calculations may not be as accurate as one would like. If the methods of economics were just "fluff" I doubt Wall Street would employ the amount of intellectual capital, man and computer power that they do. The purpose of economics is not to establish any kind of "fairness" in the marketplace; that is regulated by economic policy and trade law. Knowledge of economics would certainly benefit anyone who wishes to learn it. Perhaps the knowledge of economics and its application helped contribute to the acquisition of some of the wealth those wealthy people obtained.
Dale
Mentor
It benefits the relatively poor people when the government decides to try and help them specifically.
I disagree completely. It does a man no good for someone else to do something for him that he can and should do for himself. It ruins his integrity, his work ethic, his self respect, his value to the community, and his capacity to do good. It breeds in him a sense of entitlement and corrupts his morals to the point that he thinks it is virtuous of him to forcibly take another man's property at the point of a gun.
We are never going to agree on any of that. To me what you are saying is total madness; to you, a pay rate of $20 an hour for a retail worker would seem crazy. However, that is the reality of things in my country: a retail worker does get paid roughly $20 an hour. Many goods that in the United States are on a user-pays basis are provided and regarded as a basic human entitlement. However, this is obviously not without its costs.
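For what it's worth, the arithmetic in Dale's $36k example above checks out; here is a quick sketch (the 6% growth and 3% inflation figures are the post's illustrative assumptions, not real statistics):

```python
# Median income grows with GDP at 6%/yr; the fixed lifestyle's cost grows
# with inflation at 3%/yr. Poverty line = half the median income.
median_now, years = 36_000, 25
median_later = median_now * 1.06 ** years      # ~154,500 ("$155k")
lifestyle_cost = median_now * 1.03 ** years    # ~75,400  ("$75k")
poverty_line = median_later / 2                # ~77,300
print(round(median_later), round(lifestyle_cost), round(poverty_line))
print(lifestyle_cost < poverty_line)           # True: now "impoverished"
```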
Dale
Mentor
We are never going to agree on any of that. To me what you are saying is total madness
Given the lack of thought and reason displayed in your posts this is hardly surprising.
Yes, OK: in a global context any reference to poverty in the western world is silly.
In a local context I think it is a very useful measure that should not be ignored. It has largely been ignored, to the detriment of the local economy. In that regard I am questioning the merits of economics as a means of helping people.
Dale
Mentor
I think it is a very useful measure that should not be ignored. It has largely been ignored, to the detriment of the local economy.
OK, for the sake of argument, let's pretend that relative poverty has been largely ignored. I have two questions for you:
1) How is the number of people earning less than half the median income a useful measure of anything?
2) How has ignoring it been detrimental to the local economy?
Economist
frankly...we can do so many cool things with science and engineering with perfect precision, but we can't somehow organise our economy to FEED and give everyone a job...and not have to live in relative poverty.
What ? the mathematicians can't figure that one out ? or are we being run by flamboyant retards (economists) ?
I am an econ major and intend on going to grad school in the discipline.
In my own personal view, you have a "backwards" view of the discipline. You can't think of economics like it will allow you to be some sort of social engineer. What economics has taught me is that many individuals have different preferences, values, wants, etc., and by interacting with others "the system" works out fairly well. Try reading books by F.A. Hayek, Milton Friedman, etc., and then see if you still feel the same way about economics as a social engineering tool.
mulp
What are your opinions on the discipline of Economics ? Do you think it has the potential to be used to help people, or do you think it is simply a tool used by the rich and powerful ?
Is Economics a science ? Is the mathematics behind it sound ?
I have friends who say it is fluff, that it is nothing more.
What do you think about Economics ? I have noticed as of late that policies that can be demonstrated to help the average person (such as higher minimum wages) under the right conditions are largely ignored.
I am interested to hear what some of you sound mathematicians think of Economics.
I think I know the problem you are having, because I had the same problem trying to figure out why economic theory seemed correct, but required so many "bags on the side of the machine" to make it work. It's like every time you turn around, some god of externality needs to be invoked to fix some illogical conclusion.
Then I read the first page of Eco-economy by Lester Brown and the flaw was revealed. A different version of the same essay is available here at
http://www.theglobalist.com/DBWeb/StoryId.aspx?StoryId=2234 [Broken]
A bit of it...
In 1543, Polish astronomer Nicolaus Copernicus published his famous treatise, "On the Revolutions of the Celestial Spheres." His book challenged the then prevailing view that the sun revolved around the earth. Instead, he argued it was earth that revolved around the sun. With his new model of the solar system, he began a wide-ranging debate among scientists, theologians, and others.
Ecology vs. economics
After Copernicus outlined his revolutionary theory, there were two very different views of the world. Those who retained the so-called Ptolemaic view of the world saw one world — and those who accepted the Copernican view saw a quite different one. The same is true today of the disparate worldviews of economists and ecologists.
Just as Copernicus formulated a new worldview, we too must find a new worldview — based on environmental observations and analyses.
These differences between ecology and economics are no less fundamental than the ones faced at the time of Copernicus' reshaping of our entire global outlook. For example, ecologists worry about limits, while economists tend not to recognize any such constraints.
Ecologists, taking their cue from nature, think in terms of cycles, while economists are more likely to think in terms of linear, or curvi-linear developments. Economists have a great faith in the market, while ecologists often fail to appreciate the market adequately.
:
In short, economists see the environment as a subset of the economy. Ecologists, on the other hand, see the economy as a subset of the environment.
:
Ecologists view the market with less reverence because they see a market that is not telling the truth. For example, when buying a gallon of gasoline, customers in effect pay to get the oil out of the ground, refine it into gasoline, and deliver it to the local service station. But they do not pay the health care costs of treating respiratory illness from air pollution or the costs of climate disruption.
Like Ptolemy's view of the solar system, which had the earth at the center of the universe, the economists' view is confusing efforts to understand our modern world. It has created an economy that is out of sync with the ecosystem on which it ultimately depends.
:
Ecologists, after all, understand that all economic activity, indeed all life, depends on the earth's ecosystem — the complex of individual species living together, interacting with each other and their physical habitat.
:
In conclusion, just as recognition that the earth was not the center of the solar system set the stage for advances in astronomy, physics, and related sciences, so will recognition that the economy is not the center of our world create the conditions to sustain economic progress and improve the human condition.
Just as Copernicus had to formulate a new astronomical worldview after several decades of celestial observations and mathematical calculations, we too must formulate a new economic worldview based on several decades of environmental observations and analyses.
December 9, 2001
Remember that the ecology of trees and birds and waters and fish is also the ecology all people live in.
People, rich, poor, young, old, sick, healthy, working, retired, are all consumers. Yet the economic model excludes all but the worker from this list of consumers.
The handwave is that all consumers are tied to a worker or capital. But when confronted with a person that isn't, the god of externality explains the wayward consumer crossing the economic flow.
Think of the magic of labor appearing when consumers demand more than the existing pool of labor can produce. It's like a god makes them appear, as in a video game. When demand no longer requires as much labor, poof, the person disappears.
The same is true for other natural resources. The economic model demands clean water? Poof clean water materializes. Toxic waste as a byproduct? Poof it vanishes from the economy.
I believe economic theory suffers from arrested development.
I think it traces to the cold war when it was the god ordained free world against the godless commies, and in the US, socialism and Marx, neither of which had anything to do with the Soviet Union or Red China. But in any case, all discussion about poverty stimulated a knee-jerk claim of becoming like the evil commies.
I liked Milton Friedman, not because I always agreed with him, but because he recognized the problems, like poverty. He proposed market solutions that acknowledged the existence of poverty.
But the problem is that he saw poverty as something that arises from something external, yet still considering the economy to be the whole. By taking that approach, the people who the economy makes believe don't exist end up being special cases that need to be treated specially every time they can no longer be ignored by the pressure on the economists to deal with them.
The economy takes place in the physical world and is absolutely constrained by the physical world, just as the actors in the physical and economic world are.
Economic theory like supply and demand can be seen as like Newton's Laws, useful simple rules that explain a small aspect of the laws of motion and energy. But they are lies, as Einstein showed. But still useful for many problems.
But economic theory has only partially embraced Newton's level of abstraction. Newton incorporated time, something that economic theory rarely does. Supply and demand is static, not explicitly dynamic, except for handwaves, like "in the short run" and "in the long run."
Going back to the Ptolemy vs Copernican difference; before discarding the earth centric model, special rules of astronomy were used to explain comets and to explain the planets. Economics seems to be doing the same thing with special versions of economic theory, like environmental economics that come up with special rules rather than fixing a single set of rules to apply generally.
The physical sciences have divisions, but all divisions operate by the same rules. Even the social sciences, where economic theory is placed, increasingly connect to the physical sciences in the search for understanding; behavior is not considered separate from the environment or the physical nature of the person. Yet economic theory seems to seek to escape from the confines and limitations of the physical world.
To see the degree to which economics seeks to exclude the real world, consider the many models for labor supply curves and the explanations for the shapes of the curves. Nowhere do I see an explanation of what happens to the people who clearly exist to provide the points on the curve. A curve might represent the supply of workers who can't be supplied if the pay is too low, like below the cost of bus fare to get to the job, but nothing is said about the people who can't work - where are they in the economic model? They still exist in the physical world.
To recognize those people as part of the economy takes you into the territory covered by Marx which means communism which means evil which means that all reasoning must cease.
Time to move past the constraints of the inquisition: McCarthyism is nearly dead. You won't be blacklisted as a commie if you say that people are more important than property or money. Time to stop dividing economics into capitalism vs socialism vs communism vs who knows how many other ideologies there are.
Last edited by a moderator:
I can't even begin to explain just how naive this line of thinking is. Those curves are representative of the demand/supply schedules; they aren't just drawn any way that feels good. If people won't work for X amount of money, the labor supply schedule declines at that wage rate. Things don't just "poof appear" or "vanish". Economics is centered around the theory of unlimited wants and limited resources. Quit trying to inject politics and emotion into the science. Politics becomes involved in economic policy, not economics; the two are quite different things.
Last edited by a moderator:
|
|
# Perry buys a book that has pages numbered from 1 to 980. He then selec
Math Expert
Joined: 02 Sep 2009
Posts: 61510
Perry buys a book that has pages numbered from 1 to 980. He then selec [#permalink]
26 Nov 2019, 00:56
Perry buys a book that has pages numbered from 1 to 980. He then selects one of the pages at random. What is the probability that the number of the page he selects is divisible by 7, given that it is divisible by 3?
A. $$\frac{1}{21}$$
B. $$\frac{1}{7}$$
C. $$\frac{23}{163}$$
D. $$\frac{23}{490}$$
E. $$\frac{29}{171}$$
GMAT Club Legend
Joined: 18 Aug 2017
Posts: 5920
Location: India
Concentration: Sustainability, Marketing
GPA: 4
WE: Marketing (Energy and Utilities)
Re: Perry buys a book that has pages numbered from 1 to 980. He then selec [#permalink]
26 Nov 2019, 01:06
pages divisible by 3: 980/3 = 326.67, so 326 pages
pages divisible by both 3 and 7, i.e. by LCM(3, 7) = 21: 980/21 = 46.67, so 46 pages
P = 46/326 = 23/163
IMO C
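The counting is easy to verify by brute force; a quick sketch:

```python
# Count pages divisible by 3, and by both 3 and 7, among pages 1..980.
from fractions import Fraction

div3 = [p for p in range(1, 981) if p % 3 == 0]
div21 = [p for p in div3 if p % 7 == 0]
print(len(div3), len(div21))            # 326 46
print(Fraction(len(div21), len(div3)))  # 23/163
```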
|
|
# Death in Damascus
In the city of Damascus, a man encounters the skeletal visage of Death. Death, upon seeing the man, looks surprised; but then says, “I ᴀᴍ ᴄᴏᴍɪɴɢ ғᴏʀ ʏᴏᴜ ᴛᴏᴍᴏʀʀᴏᴡ.” The terrified man buys a camel and flees to Aleppo. After being killed in Aleppo by falling roof tiles, the man looks around and sees Death waiting.
“I thought you would be looking for me in Damascus,” says the man.
“Nᴏᴛ ᴀᴛ ᴀʟʟ,” says Death. “Tʜᴀᴛ ɪs ᴡʜʏ I ᴡᴀs sᴜʀᴘʀɪsᴇᴅ ᴛᴏ sᴇᴇ ʏᴏᴜ ʏᴇsᴛᴇʀᴅᴀʏ, ғᴏʀ I ᴋɴᴇᴡ I ʜᴀᴅ ᴀɴ ᴀᴘᴘᴏɪɴᴛᴍᴇɴᴛ ᴡɪᴛʜ ʏᴏᴜ ɪɴ Aʟᴇᴘᴘᴏ.”
In the Death in Damascus dilemma for decision theories, Death has kindly informed us that whatever decision we end up making, will, in fact, have been the wrong one. It’s not that Death follows us wherever we go, but that Death has helpfully predicted our future decision and found that our decision takes us to a city in which a fatal accident will occur to us.
If we observe ourselves deciding to stay in Damascus, we know that staying in Damascus will be fatal and that we would be safe if only we fled to Aleppo. If we observe ourselves fleeing to Aleppo, we will conclude that we are to die in Aleppo for no reason other than that we fled there.
This dilemma can send some decision theories into infinite loops; while other decision theories break the loop in ways that (arguably) lead to other problems.
For a related dilemma with some of the same flavor of looking for a stable policy, without involving Death or other perfect predictors, see the Absent-Minded Driver.
# Analysis
Death in Damascus is a standard problem in decision theory and has a sizable literature concerning it. (We haven’t found a good online collection, so try this Google search for some analyses within the mainstream view.)
## Causal decision theory
The first-order version of CDT just considers counterfactuals--$\operatorname {do}()$ operations—on our possible actions, meaning that we don’t update our background beliefs at all at the time of calculating our action. It’s not clear in this case what we think of Aleppo and Damascus after Death gives us Its observation, which would seem to require that we have prior probabilities on our going to Aleppo or staying in Damascus. Let’s say that we thought we only had a 0.01% chance under normal circumstances of suddenly traveling to Aleppo; then after updating on Death’s statement, we’ll think that Damascus has a 99.99% chance of being fatal and Aleppo has a 0.01% chance of being the fatal city, and we’ll flee to Aleppo.
This does deliver a prompt answer, but it involves a false calculation about expected utility—at the time of calculating the expected utilities in the decision, we think we have a 99.99% chance of surviving (since we think Aleppo is only 0.01% likely to prove fatal). The actual number, by hypothesis, is 0%.
In turn, this could let a mischievous bookie pump money out of the CDT agent. Suppose that besides choosing between Aleppo and Damascus, the agent also needs to choose whether to buy a ticket that costs $1 and pays out $11 if the agent survives. This is a good bet if you have a 99.99% chance of survival; not so much if you have a 0% chance of survival.
We can suppose the agent must choose both $$D$$ vs $$A$$ for Damascus vs. Aleppo, and simultaneously choose $$Y$$ vs $$N$$ for whether to yes-buy or not-buy the $1 ticket that pays $11 if the agent survives. That is, the agent is facing four buttons $$DY, AY, DN, AN$$ and this outcome table:
$$\begin{array}{r|c|c} & \text{Damascus fatal} & \text{Aleppo fatal} \\ \hline DN & \text{Die} & \text{Live} \\ \hline AN & \text{Live} & \text{Die} \\ \hline DY & \text{Die, } -\$1 & \text{Live, } +\$10 \\ \hline AY & \text{Live, } +\$10 & \text{Die, } -\$1 \end{array}$$
A causal decision theory that doesn't update its background beliefs at all while making the decision will select $$AY$$ instead of $$AN.$$ (And then the CDT agent predictably updates afterwards to thinking that the ticket is worthless, so we can buy the ticket back for $0.01 at a profit of $0.99, justifying our regarding this as a "money pump".)
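To make the mispricing concrete, here is a minimal sketch of the expected utilities such a non-self-ratifying CDT agent computes while holding its background belief fixed at a 99.99% chance that Damascus is fatal (the +100 utility of survival is an illustrative assumption, not a figure from the text):

```python
# Expected utility of each button under a FIXED background belief.
P_DAMASCUS_FATAL = 0.9999

def eu(go_aleppo: bool, buy_ticket: bool) -> float:
    # The agent survives iff the fatal city is the one it did NOT go to.
    p_live = P_DAMASCUS_FATAL if go_aleppo else 1.0 - P_DAMASCUS_FATAL
    ticket = (-1.0 + 11.0 * p_live) if buy_ticket else 0.0  # -$1 cost, $11 payout
    return 100.0 * p_live + ticket  # +100 for surviving, 0 for dying

for name, (a, y) in [("DN", (False, False)), ("AN", (True, False)),
                     ("DY", (False, True)), ("AY", (True, True))]:
    print(name, round(eu(a, y), 2))
# AY comes out highest (~109.99): the agent pays $1 for a ticket that is
# actually worth $0 to it, since its true survival chance is 0%.
```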
A first response would be to allow the CDT agent to observe its own initial impulse, try updating the background variables accordingly, and then reconsider its decision until it finds a decision that is stable or “self-ratifying”.
This deals with the Newcomb’s Tax dilemma, but isn’t sufficient for Death in Damascus since there is no deterministic self-ratifying decision on this problem—the decision theory goes into an infinite loop as it believes that Damascus is fatal and feels an impulse to go to Aleppo, updates to believe that Aleppo is fatal and observes an impulse to stay in Damascus, etcetera.
The standard reply is to allow the decision theory to break loops like this by deploying mixed strategies. At the point where the agent thinks it will deploy the mixed strategy of staying in Damascus with 50% probability and going to Aleppo with 50% probability, any possible probabilistic mix of "stay in Damascus" and "flee to Aleppo" will seem equally attractive, with a 50% probability of dying given either decision. We then modify the theory of CDT to add the rule that in cases like this, we output a self-consistent policy if one is found. (This does require an extra rule, because it is not only the policy {0.5 stay, 0.5 flee} that seems acceptable at the self-consistent point—all policies seem acceptable at that point—so we need a special rule to stop there and output the self-consistent policy.)
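The self-ratifying mixture can also be located numerically. The sketch below assumes the agent's credence that Aleppo is fatal simply equals its own probability $q$ of fleeing, so that $P(\text{survive} \mid \text{flee}) = 1 - q$ and $P(\text{survive} \mid \text{stay}) = q$; the self-ratifying policy is the indifference point between these:

```python
# Bisect on the advantage of fleeing; it is strictly decreasing in q,
# and the self-ratifying policy is where it crosses zero.
def advantage_of_fleeing(q: float) -> float:
    p_survive_flee, p_survive_stay = 1.0 - q, q
    return p_survive_flee - p_survive_stay

lo, hi = 0.0, 1.0
for _ in range(50):
    mid = (lo + hi) / 2.0
    if advantage_of_fleeing(mid) > 0:
        lo = mid          # fleeing still looks better: mix in more fleeing
    else:
        hi = mid
print((lo + hi) / 2.0)    # 0.5: stay/flee with equal probability
```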
This is a standard addendum to CDT and also appears in e.g. the most widely accepted resolution for the Absent-Minded Driver dilemma. But in this case, in addition to the concern that the extra rule in CDT could be taken as strange (why pick one policy at a point where all policies seem equally attractive?), we also need to deal with additional concerns:
• That the agent will immediately reverse course as soon as it notices itself fleeing to Aleppo and reconsiders this decision a second later.
• (Raised by Yudkowsky in personal conversation with James M. Joyce.) This version of the agent will still buy for \$1 a ticket that pays \$11 if it survives, if it’s offered that choice as part of the stay/flee decision. That is, the agent stabilizes on the policy {0.5 DY, 0.5 AY} instead of the policy {0.5 DN, 0.5 AN} if it’s offered all four choices.
comment: This objection was raised by Eliezer Yudkowsky in personal conversation with James M. Joyce at “Self-prediction in Decision Theory and Artificial Intelligence” at Cambridge 2015. Joyce was suggesting a particular formalism for a self-ratifying CDT. The conversation went something like the following:
Yudkowsky: I think this agent is irrational, because at the point where it makes the decision to stay or flee with 0.5:0.5 probability, it thinks it has a 50% chance of survival.
Joyce: I think that’s rational. Maybe after the decision the agent realizes it won’t survive, but it has no way of knowing that at the time it makes the decision.
Yudkowsky: Hm. (Goes off and thinks.) (Returns.) Your agent is irrational and I can pump money out of it by offering to sell it for \$1 a ticket that pays a net \$10 if it survives.
Joyce: That’s because from your epistemic vantage point outside the agent, you know something the agent doesn’t. Obviously you can win bets against the agent when you’re allowed to bet with knowledge it doesn’t have.
Yudkowsky: (Thinks.) Your agent knows in advance that it can be money-pumped and it will pay me \$0.50 not to offer to sell it a ticket later. So I claim that it clearly can know the thing you say it can’t know at the time of making the decision.
Joyce: I disagree, but let me think about it.
(Commented out because it would be unfair to cite this conversation without running it past Joyce, plus he may have come up with a further reply since then.)
## Evidential decision theory
Evidential decision theory evaluates its expected utility as “doomed” whether it flees to Aleppo or stays in Damascus, and will choose whichever option corresponds to spending its last days more comfortably.
## Logical decision theory
An agent using the standard updateless form of logical decision theory responds by asking: “How exactly does Death decide whether to speak to someone?”
It’s not causally possible for Death to always tell people when a natural death is approaching, regardless of the person’s policy. For example, there could be someone who will die if they stay in Damascus, but whose disposition causes them to flee to Aleppo (where no death waits) if they are warned.
Two possible rules for Death would be as follows:
Rule K:
• Each day, check whether telling a person that they have an appointment with Death will cause them to die the next day.
• If so, tell them they have an appointment with Death the next day.
• If not, remain silent, even if this means the person dies with no warning.
Rule L:
• Each day, check whether telling a person that they have an appointment with Death will cause them to die the next day.
• If so, tell them they have an appointment with Death the next day.
• If not, remain silent and don’t kill them.
Since the UDT-optimal policy differs depending on whether Death follows Rule K or Rule L, we need at least a prior probability distribution on which rule Death follows. As Bayesians, we can just guess this probability if we don’t have authoritative information, but we need to guess something to proceed.
If Death follows Rule K, the UDT reply is to stay in Damascus, and this is decisively optimal—definitely superior to the option of fleeing to Aleppo! If you always flee to Aleppo on a warning, then you are killed by any fatal event that could occur in Aleppo (Death gives you warning, you flee, you die). You are also killed by any fatal event that could occur in Damascus (Death checks if It can consistently warn you, finds that It can’t, stays silent, and collects you in Damascus the next day). You will be aware, on receiving the warning, that Death awaits you in Damascus; but you’ll also be aware that if-counterfactually you were the sort of person who flees to Aleppo on warning, you would have received no warning today, and possibly have died in Aleppo some time ago.
If Death follows Rule L, you should, upon receiving Death’s warning, hide yourself in the safest possible circumstances—perhaps near the emergency room of a well-managed hospital, under medical supervision. You’ll still expect to die after taking this precaution—something fatal will happen to you despite all nearby doctors. However, by being the sort of person who acts like this on receiving a warning from Death, you minimize Death’s probability of collecting you on any given day. You know that if-counterfactually you were the sort of person whose algorithm’s logical output says to stay in Damascus after receiving warning, you would probably have been killed earlier in Damascus where potentially fatal events crop up more frequently.
An updateless LDT agent computes this reply in one sweep, and without needing to observe itself or search for a self-ratifying answer.
• Let’s say that we thought we only had a 0.01% chance under normal circumstances of suddenly traveling to Aleppo; then after updating on Death’s statement, we’ll think that Damascus has a 99.99% chance of being fatal and Aleppo has a 0.01% chance of being the fatal city, and we’ll flee to Aleppo.
This sounds wrong because it’s not invariant under the introduction of new alternatives with a 0% chance.
|
|
Fermilab Library (SPIRES-BOOKS): keyword search for "large hadron collider" in books.
Call number: SPRINGER-2007-9788847005303:ONLINE Title: IFAE 2006 [electronic resource]: Incontri di Fisica delle Alte Energie, Italian Meeting on High Energy Physics Author(s): Guido Montagna, Oreste Nicrosini, Valerio Vercesi Date: 2007 Publisher: Milano: Springer Milan Size: 1 online resource Note: Springer e-book platform Note: Springer 2013 e-book collections Note: This book collects the Proceedings of the Workshop Incontri di Fisica delle Alte Energie (IFAE) 2006, Pavia, 19-21 April 2006. This is the fifth edition of a new series of meetings on fundamental research in particle physics and was attended by more than 150 researchers. Presentations, both theoretical and experimental, addressed the status of Standard Model and Flavour physics, Neutrino and Cosmological topics, new insights beyond the present understanding of particle physics and cross-fertilization in areas such as medicine, biology, technological spin-offs and computing. Special emphasis was given to the expectations of the forthcoming Large Hadron Collider, due in operation in 2007. The venue of plenary sessions interleaved with parallel ones allowed for a rich exchange of ideas, presented in these Proceedings, that form a coherent picture of the findings and of the open questions in this extremely challenging cultural field Note: Springer eBooks ISBN: 9788847005303 Series: e-books Series: SpringerLink (Online service) Series: Physics and Astronomy (Springer-11651) Keywords: Astrophysics, Nuclear physics, Quantum theory, Particle acceleration Location: ONLINE
Call number: SPRINGER-1995-9781461519454:ONLINE Title: Hot Hadronic Matter Theory and Experiment Author(s): Date: 1995 Size: 1 online resource (574 p.) Note: 10.1007/978-1-4615-1945-4 Contents: A Tribute to Rolf Hagedorn -- The Long Way to the Statistical Bootstrap Model -- Entropy for Hadrons -- Statistical Studies of Hadrons -- Hagedorn’s Temperature and the Dual Resonance Model: A 25 Year Old Love Affair -- Mass Spectrum of q-Deformed Dual String Theory -- Hagedorn’s Reincarnation in String Theory -- Interactive Computer Languages: Past and Future of Sigma -- Deconfinement: Concept, Theory, Test -- Hadronic Matter Equation of State and the Hadron Mass Spectrum -- Deconfinement of Constituent Quarks and the Hagedorn Temperature -- Crystalline Quark-Hadron Phase in Neutron Stars -- A New Effective Model of the Quark-Gluon Plasma with Thermal Parton Masses -- Hadronic Matter with Internal Symmetries and its Consequences: An Expanding Hadronic Gas -- On the Statistics-Changing Phase Transition in Gauge Theories -- Fluctuation Corrections to Bubble Nucleation -- A Finite Temperature String Phase Transition a la Volume Exclusion -- Entropy and Decoherence -- Colored Chaos -- Statistical Properties of Relativistic Quasiparticles -- Quantum Decoherence and Entropy in High-Energy Interaction -- Entropy Production via Particle Production -- Real-Time Thermal Corrections to Pair-Production Processes in Heavy-Ion Collisions -- Pions, Baryons and Entropy in Nuclear Collisions -- Entropy in Heavy Ion Collisions -- High Density QCD and Entropy Production at Heavy Ion Colliders -- About Entropy and Thermalization — A Miniworkshop Perspective -- New Developments in Correlation Studies -- How to Investigate Small Collective Signals in Nucleus-Nucleus Interactions -- Negative Binomial Fits to Multiplicity Distributions from Central Collisions of 16O+Cu at 14.6A GeV/c and Intermittency -- Universal Properties of Angular Correlations in QCD Jets -- Towards a Field Theoretical Description of Multiparticle Production in High Energy Collisions -- Intermittency and Other Scaling Behaviors in Nuclear Collisions -- QCD Generalised Factorial Moments -- Boson Spectra and Correlations in Small Thermalized Systems -- Correlations and Strong Interactions -- Analysis of Multiparticle Correlations and the Wavelet Transform -- Multiparticle Production: Session Summary -- Single Photon Production in 200 A•GeV Sulphur on Gold Collisions -- Latest Results on Dilepton Production in 200 GeV A Ion-Ion Collisions -- Dilepton Spectra in Heavy Ion Collisions -- Photon Multiplicity Measurement, a Novel Observable in High Energy Heavy Ion Collision -- Density Modification of Dilepton Production in Hot Hadronic Matter -- Miniworkshop on Strangeness -- Similarities and Differences in Strangeness Production at BNL and CERN -- Particle Spectra -- Pion and Kaon Freezeout in NA44 -- The Dual Parton Model and Hadron Production at Cosmic Ray Energies -- Particle Production at the AGS -- Chemical Equilibrium and Particle Production in Nucleus-Nucleus Collisions at AGS Energy -- Measurement of the ?/? Production Ratio in Central S-W Interactions at 200 A GeV/c -- NA36 Strangeness Production: Multistrange Baryons -- Estimates of the Ratios $$\overline \Lambda /\Lambda ,\overline \Xi /\Xi$$ and $$\overline \Omega /\Omega$$ from pp and pA Interactions -- Thermalisation in High Energy Heavy Ion Collisions and Strange Particle Production -- Strangeness in Hot Hadronic Matter -- Hadronic Physics: The Cosmic Ray Perspective -- Minimax: Progress and Plans -- Event by Event Analysis of Ultrarelativistic Nuclear Collisions: A New Method to Search for Critical Fluctuations -- Observing Strangeness (and Charm?) in Heavy Ion Interactions -- The Physics and Experimental Program of the Relativistic Heavy Ion Collider (RHIC) -- Heavy Ion Physics at the Large Hadron Collider at CERN -- The Hadronic Future -- Contributors ISBN: 9781461519454 Series: eBooks Series: SpringerLink (Online service) Series: Springer eBooks Series: NATO ASI Series, Series B: Physics: 346 Keywords: Physics, Continuum physics, Nuclear physics, Heavy ions, Hadrons, Nuclear Physics, Heavy Ions, Hadrons, Classical Continuum Physics, Theoretical, Mathematical and Computational Physics Location: ONLINE
Call number: SPRINGER-1988-9781468488425:ONLINE Title: QCD Hard Hadronic Processes Author(s): Date: 1988 Size: 1 online resource (566 p.) Note: 10.1007/978-1-4684-8842-5 Contents: Conference Keynote -- QCD: Hard Collisions are Easy and Soft Collisions are Hard -- Direct Leptons, W, Z Boson -- Preliminary Results from CDF on W,Z Production at the Tevatron Collider -- Study of Dimuons with NA10 -- QCD in the Limit xF → 1 as Studied in the Reaction π−N → μ+μ−X -- Summary of Direct Lepton Session and Round Table -- Direct Photon -- Direct Photons at Large PT from Hadronic Collisions: A Short Review Based on the QCD Analysis Beyond Leading Order -- Direct Photon Production from Positive and Negative Pions and Protons at 200 GeV/c (NA3 Collaboration) -- Direct Photon Production in p̄p and pp Collisions -- Direct Photon Production by π−p, π+p and pp Interaction at 280 GeV/c: Results from the WA70 Experiment at CERN -- Results on Direct Photon Production from the UA2 Experiment at the CERN Proton-Antiproton Collider -- Direct Photons in UA1 -- Large PT Photoproduction in CERN-Experiment WA69 -- Direct Photons at the CERN ISR -- Prompt Photon Production in 300 GeV/c π−p, π+p and pp Collisions -- Direct Photon Physics from R 806, R 807, R 808 -- Charm Photoproduction and Lifetimes from the NA14/2 Experiment -- Photon Hard-Scattering in the NA14 Experiment -- Round Table Discussion on Direct Photons -- Certain Uncertainties in QCD Theory Predictions for Large-pT Photons and Other Processes -- Comments from Direct Photon Round Table Discussion -- Special Topics -- The Hadronic Interaction Model Fritiof and Bose-Einstein Correlations for String Fragmentation -- Parton Distributions in Nuclei and Polarised Nucleons -- Heavy Quark Production in QCD -- Some Recent Developments in the Determination of Parton Distributions -- New Direct Photon Experiments -- Direct Photon Studies — Current Status of Experiment E706 (Fermilab) -- Expectations for Direct Photon Physics from Fermilab Experiment E705 -- Hadronic Jets -- Jet Physics from the Axial Field Spectrometer -- Measurements on W and Z Production and Decay from the UA2 Experiment at the CERN $${\bar p}$$p Collider -- Theoretical Review of Jet Production in Hadronic Collisions -- Results on Jet Production from the UA2 Experiment at the CERN Proton-Antiproton Collider -- Results on Jets from the UA1 Experiment -- Recent Results from Fermilab E557 and E672 Experiments -- Summary and Round Table Discussion: Hadronic Jets -- Heavy Flavor Production -- QCD: Photo/Hadroproduction of Heavy Flavors Fermilab E691, E769 and Beyond -- Dimuon Experiments at the Fermilab High Intensity Laboratory -- Heavy-Flavour Production in UA1 -- Charm Production from 400 and 800 GeV/c Proton-Proton Collisions -- Production of Particles with High Transverse Momenta in 800 GeV Proton-Nucleus Collision, E605-FNAL -- Experimental Study of B$${\bar B}$$ Hadroproduction in the WA75 and WA78 Experiments -- Heavy Quark Production in Hadron Collisions A Theoretical Overview -- Charm and Beauty Decays via Hadronic Production in a Hybrid Emulsion Spectrometer (Fermilab E653a) -- Heavy Flavor Production -- Conference Summary -- Hard Processes in QCD -- Participants ISBN: 9781468488425 Series: eBooks Series: SpringerLink (Online service) Series: Springer eBooks Series: NATO ASI Series, Series B: Physics: 197 Keywords: Physics, Elementary particles (Physics), Quantum field theory, Elementary Particles, Quantum Field Theory Location: ONLINE
Call number: SPRINGER-1982-9781461335511:ONLINE Title: Fundamental Interactions Cargèse 1981 Author(s): Date: 1982 Size: 1 online resource (696 p.) Note: 10.1007/978-1-4613-3551-1 Contents: Functional Methods in Quantum Field Theory -- I. Introduction -- II. Path Integrals -- III. Feynman Diagrams -- IV. Fermions -- V. Ghosts -- References -- to Electro-Weak Interactions -- I. Introduction -- II. Gauge Invariance -- III. Spontaneous Symmetry Breaking -- IV. The Standard Model -- V. Anomalies -- VI. Fermi Mass Matrix -- VII. CP -- VIII. Things to look for -- References -- The Weak Interactions in the Confining Phase -- References -- Heavy Quark Systems -- Electron-Positron Interactions at High Energies -- I. Introduction -- II. PETRA and PEP -- III. The structure of Leptons -- IV. Weak Neutral Current Contributions to Lepton Pair Production -- V. Search for new Particles -- VI. Jet Formation in e+e− annihilation -- VII. Quark and Gluon Fragmentation -- References -- e+e− Collisions at CESR -- I. CESR, CLEO and CUSB -- II. Upsilon bound state spectroscopy and tests of QCD -- III. B Meson Decays and Tests of the standard weak interaction -- References -- e+e− Physics at Very Large Energies -- O. Abstract -- I. Introduction -- II. Interference between electromagnetic and neutral weak currents -- III. Z° Decays -- IV. W+W− Production -- V. Higgs Bosons -- VI. Conclusions -- References -- Theoretical Aspects in Perturbative QCD -- I. Introduction -- II. The LLA and beyond -- III. The Photon Structure Functions -- IV. Prescription and Scale dependence for the running coupling -- V. Spacelike and Timelike Structure Functions beyond LLA -- VI. An Asymptotic Formula for Multiplicities -- VII. The Sudakov Form Factor of Partons -- VIII. Examples of Doubly Logarithmic Effects of Physical Interest -- References -- Dynamical Mass Generation For Quarks and Leptons -- I. Introduction -- II. Technicolour and its Extension -- III. ETC Issues, Answers and Problems -- IV. Alternatives to ETC ? -- V. Composite Quarks and Leptons -- References -- Quark Confinement and Lepton Liberation in an Anisotropic Space Time -- I. Introduction -- II. Is Hadron Dynamics 2-dimensional ? -- III. The Anisotropic Space-Time -- IV. The Anisotropic Yang-Mills Interactions -- V. Quantum Fluctuations and the Role of Chirality -- VI. Gauge Invariance Restored -- VII. The Structure of Strong Interactions -- References -- Physics at Collider Energies -- I. Introduction -- II. Hunting the Weak Vector Bosons -- III. The Dominant Hadron Processes -- IV. Jet Phenomena -- V. Conclusion -- References -- Proton Lifetime Experiments -- I. Theoretical Preamble -- II. The Experimental Problem -- III. Status of N Decay Studies -- IV. The Future -- V. Other Possibilities Offered by these Set-Ups -- VI. n — $$\rm \bar{n}$$ Oscillations -- References -- Quantum Field Theory and Cosmology -- I. Introduction -- II. The Hot Big Bang Theory -- III. Entropy Generation -- IV. Quantum Gravity -- References -- The Confinement Phenomenon in Quantum Field Theory -- O. Abstract -- I. Introduction -- II. Scalar Field Theory -- III. Bose Condensation -- IV. Goldstone Particles -- V. Higgs Mechanism -- VI. Vortex Tubes -- VII. Dirac’s Magnetic Monopoles -- VIII. Unitary Gauge -- IX. Phantom Solitons -- X. Non-Abelian Gauge Theory -- XI. Unitary Gauge -- XII. A Topological Object -- XIII. The Macroscopic Variables -- XIV. The Dirac Condition in the EM Charge Spectrum -- XV. Oblique Confinement -- XVI. Fermions out of Bosons and vice-versa -- XVII. Other Condensation Modes -- References -- Developments in Particle Physics -- I. The Fermion Family -- II. On Symmetry Breaking -- III. Neutrinos -- IV. Majorons -- V. Axions -- VI. The Future ISBN: 9781461335511 Series: eBooks Series: SpringerLink (Online service) Series: Springer eBooks Series: NATO Advanced Study Institutes Series, Series B: Physics: 85 Keywords: Physics, Physics, general Location: ONLINE
Call number: QC794.8.E44S77::2010 Title: Electroweak physics at LEP and LHC Author(s): Arno Straessner Date: 2010 Publisher: Springer: New York Size: 214 pgs. Contents: 1. Theoretical Framework, 2. The LEP Experiments, 3. Gauge Boson Production at LEP, 4. Electroweak Measurements and Model Analysis of Electroweak Data, 5. The ATLAS and CMS Experiments at the LHC, 6. Expectations for Electroweak Measurements at the LHC, 7. Higgs Physics at the LHC, 8. Summary and Conclusion ISBN: 9783642051685 Series: Springer Tracts in Modern Physics: v.235 Keywords: Electroweak interactions, Electroweak interactions - Data processing, Standard model (Nuclear physics), Higgs bosons, Large Hadron Collider (France and Switzerland) Location: MAIN
Call number: QC793.5.H328W8::1985 Title: Proceedings of the Workshop on Triggering, Data Acquisition, and Offline Computing for High Energy/High Luminosity Hadron-Hadron Colliders Conference: Workshop on Triggering, Data Acquisition, and Offline Computing for High Energy/High Luminosity Hadron-Hadron Colliders, Fermilab, November 11-14, 1985 [C85-11-11] Author(s): Bradley Cox Date: 1985 Publisher: Fermi National Accelerator Laboratory, Batavia, Ill Size: 473 Keywords: Large Hadron Collider Congresses, Conference proceedings, Conferences Location: MAIN
Call number: QC793.5.H328V37::1997 Title: Very Large Hadron Collider Physics and Detector Workshop: Beyond the LHC, March 13-15, 1997, Fermi National Accelerator Laboratory, Batavia, Illinois Conference: Very Large Hadron Collider Physics and Detector Workshop, Fermi National Accelerator Laboratory, Batavia [C97-03-13] Author(s): Date: 1997 Publisher: FNAL, Batavia, Ill Size: 1 volume Keywords: Hadron colliders Congresses, Conference proceedings, Conferences Location: Fermilab collection on the cross-walk
Call number: QC793.5.H328P57::1997 Title: The Pipetron, A Low Field Approach to a Very Large Hadron Collider: Selected Reports Submitted to the Proceedings of the DPF/DPB Summer Study on New Directions for High-Energy Physics, Snowmass 96 Author(s): E. Malamud (ed.) Date: 01/--/97 Size: 1 v Note: See Technical Publications, FERMILAB-VLHCPUB, for individual papers fulltext Keywords: Hadron Colliders - Design and Construction Location: Fermilab collection on the cross-walk
Call number: QC793.5.H32S25::2003 Title: Large Hadron collider phenomenology: Proceedings of the fifty seventh Scottish Universities Summer School in Physics, St. Andrews, 17 August to 29 August 2003 Conference: 57th Scottish Universities Summer School in Physics: LHC Phenomenology (SUSSP 2003), 17-29 Aug 2003, St. Andrews, Scotland, United Kingdom [C03-08-17.2] Author(s): M. Kramer (ed.), F.J.P. Soler (ed.) (Edinburgh U.) Date: 2004 Publisher: Edinburgh: SUSSP -- Philadelphia: IOP Size: 473 pgs. ISBN: 0750309865 Series: Institute of Physics Scottish Graduate Textbook Series Keywords: Hadron colliders Congresses, Conference proceedings, Conferences Location: MAIN
Call number: QC793.5.B62S26::2010 Title: Massive: The Missing particle that sparked the greatest hunt in science Author(s): Ian Sample Date: 2010 Publisher: Basic Books: New York Size: 260 pgs. ISBN: 9780465019472 Keywords: Higgs bosons, Large Hadron Collider (France and Switzerland) Location: POP
Call number: QC793.5.B62::2015 Title: The Higgs boson and beyond Author(s): Sean M. Carroll Date: 2015 Publisher: Chantilly, VA: The Great Courses Size: 2 videodiscs (6 hours) and 1 course guidebook (88 pages) Note: 12 lectures/30 minutes per lecture; Lecturer, Professor Sean Carroll, California Institute of Technology Contents: Disc 1. Lectures: 1. The importance of the Higgs boson -- 2. Quantum field theory -- 3. Atoms to particles -- 4. The power of symmetry -- 5. The Higgs field -- 6. Mass and energy. Disc 2. Lectures: 7. Colliding particles -- 8. Particle accelerators and detectors -- 9. The Large Hadron Collider -- 10. Capturing the Higgs boson -- 11. Beyond the standard model of particle physics -- 12. Frontiers - Higgs in space. ISBN: 9781629971148 Series: The great courses. Science & Mathematics. Physics Keywords: Higgs bosons, Quantum field theory Location: AV-CROSS-WALK
Call number: QC793.3.Q35T44::2014 Title: Journeys through the precision frontier: Amplitudes for colliders (TASI 2014) Author(s): Lance Jenkins Dixon (ed.), Frank Petriello (ed.) ISBN: 9789814678759 Corp. Author: TASI (Conference) (2014: Boulder, Colo.) Keywords: Quantum chromodynamics Congresses, Large Hadron Collider (France and Switzerland) Congresses, String models Congresses, Particles (Nuclear physics), Gravity Congresses Location: SUGGESTIONS
Call number: QC793.3.B4W63::1999 Title: LHC'99: Proceedings of the Workshop on Beam-Beam Effects in Large Hadron Colliders, Geneva, April 12-17, 1999 Conference: Workshop on Beam-Beam Effects in Large Hadron Colliders 1999 [C99-04-12.3] Author(s): J. Poole, F. Zimmermann (eds.) Date: 1999 Publisher: CERN: Geneva, Switzerland Keywords: Large Hadron Collider Location: MAIN Report-number: CERN-SL-99-039-AP
Call number: QC787.5.H328V47::1998 Title: Very Large Hadron Collider Information Packet: Selected Reports on the Work Done Since Snowmass 96 on the VLHC Author(s): C.S. Mishra (ed.) Date: 1998 Size: 1 v. (unpaged) Note: HEPAP Subpanel Report Location: Fermilab collection on the cross-walk Report-number: FERMILAB-VLHCPUB-229
Call number: QC787.P73V53::2001 Title: Design study for a staged very large hadron collider Author(s): VLHC Design Study Group Date: 2001 Publisher: Batavia, Ill.: Fermilab Size: 271 p Corp. Author: VLHC Design Study Group Location: MAIN Report-number: FERMILAB-TM-2149
Call number: QC787.P73T67::1996 Title: Proceedings of the XI Symposium on Hadron Collider Physics Conference: Symposium on Hadron Collider Physics, 11th, Abano, Italy, 26 May - 1 Jun 1996 [C96-05-26.2] Author(s): Bisello (ed.) Date: 1997 Publisher: World Scientific, Singapore Size: 789 ISBN: 981022897X Keywords: Large Hadron Collider Congresses, Conference proceedings, Conferences Location: MAIN
Call number: QC787.P73P47::2008 Title: Perspectives on LHC Physics Author(s): Gordon Kane (ed.), Aaron Pierce (ed.) Date: 2008 Publisher: World Scientific: New Jersey Size: 337 pgs. Contents: 1. The LHC - A 'Why' Machine and a Supersymmetry Factory, 2. Dark Matter at the LHC, 3. LHC's ATLAS and CMS Detectors, 4. Understanding the Standard Model, as a Bridge to the Discovery of New Phenomena at the LHC, 5. Thoughts on a Long Voyage, 6. The 'Top Priority' at the LHC, 7. LHC Discoveries Unfolded, 8. From BCS to the LHC, 9. Searching for Gluinos at the Tevatron and Beyond, 10. Naturally Speaking: The Naturalness Criterion and Physics at the LHC, 11. Prospects for Higgs Boson Searches at the LHC, 12. A Review of Spin Determination at the LHC, 13. Anticipating a New Golden Age, 14. Strongly Interacting Electroweak Theories and Their Five-Dimensional Analogs at the LHC, 15. How to Find a Hidden World at the LHC, 16. B Physics at LHCb, 17. The LHC and the Universe at Large ISBN: 9812779752 Keywords: Large Hadron Collider (France and Switzerland), Particle Acceleration Location: MAIN
Call number: QC787.P73I55::1995:V3 Title: Proceedings of the International Symposium, LHC Physics and Detectors, Dubna, 19-21 July 1995 Author(s): A.N. Sissakian (ed.) Date: 1995 Publisher: Joint Institute for Nuclear Research Size: 455 p Keywords: Hadron interactions, Position sensitive particle detectors, Large Hadron Collider (France and Switzerland), Conference proceedings Location: MAIN
Call number: QC787.P73I55::1995:V2 Title: Proceedings of the International Symposium, LHC Physics and Detectors, Dubna, 19-21 July 1995 Author(s): A.N. Sissakian (ed.) Date: 1995 Publisher: Joint Institute for Nuclear Research Size: 455 p Keywords: Hadron interactions, Position sensitive particle detectors, Large Hadron Collider (France and Switzerland), Conference proceedings Location: MAIN
Call number: QC787.P73I55::1995:V1 Title: Proceedings of the International Symposium, LHC Physics and Detectors, Dubna, 19-21 July 1995 Author(s): A.N. Sissakian (ed.) Date: 1995 Publisher: Joint Institute for Nuclear Research Size: 455 p Keywords: Hadron interactions, Position sensitive particle detectors, Large Hadron Collider (France and Switzerland), Conference proceedings Location: MAIN
Call number: QC787.P73E76::2004:V3 Title: LHC design report Vol. 3, the LHC injector chain Author(s): Date: 2004 Publisher: Geneva: CERN Size: 356 p Note: Continued from The Large Hadron Collider: Conceptual design (1995) ISBN: 9290832398 Series: CERN (series) 2004-003 Keywords: Large Hadron Collider Design, Proton-antiproton colliders Location: ONLINE Report-number: CERN-2004-003
Call number: QC787.P73E76::2004:V2 Title: LHC design report Vol. 2, the LHC infrastructure and general services Author(s): Date: 2004 Publisher: Geneva: CERN Size: 220 p Note: Continued from The Large Hadron Collider: Conceptual design (1995) ISBN: 9290832263 Series: CERN (series) 2004-003 Keywords: Large Hadron Collider Design, Proton-antiproton colliders Location: ONLINE Report-number: CERN-2004-003
Call number: QC787.P73A8::2010 Title: At the Leading Edge: The ATLAS and CMS LHC Experiments Author(s): Dan Green (ed.) (Fermi National Accelerator Laboratory, USA) Date: 2010 Publisher: World Scientific ISBN: 9789814277617 Keywords: Large Hadron Collider (France and Switzerland), Nuclear Counters, Symmetry (Physics), Particles (Nuclear Physics) Location: POP
Call number: QC787.P73A29::2010 Title: Present at the creation: The Story of CERN and the large hadron collider Author(s): Amir D. Aczel Date: 2010 Publisher: Crown Publishers: New York Size: 271 pgs Contents: 1. The Exploding Protons, 2. The LHC and Our Age-Old Quest to Understand the Structure, 3. A Place Called CERN, 4. Building the Greatest Machine in History, 5. LHCb and the Mystery of the Missing Antimatter, 6. Richard Feynman and a Prelude to the Standard Model, 7. Who Ordered That? - The Discoveries of Leaping Leptons, 8. Symmetries of Nature, Yang-Mills Theory, and Quarks, 9. Hunting the Higgs, 10. How the Higgs Sprang Alive Inside a Red Camaro (And Gave Birth to Three Bosons), 11. Dark Matter, Dark Energy, and the Fate of the Universe, 12. Looking for Strings and Hidden Dimensions, 13. Will CERN Create a Black Hole?, 14. The LHC and the Future of Physics - Appendices: A. How Does an LHC Detector Work?, B. Particles, Forces, and the Standard Model, C. The Key Physics Principles Used in This Book ISBN: 9780307591678 Keywords: Large Hadron Collider (France and Switzerland), Colliders (Nuclear physics), European Organization for Nuclear Research Location: MAIN
Call number: QC539.736P273::2014 Title: Particle Fever Author(s): Mark A. Levinson (director) Date: 2014 Publisher: PBS Size: 1 DVD ISBN: 9781527894371 Keywords: Higgs Boson, Large Hadron Collider, Particles Location: AV-CROSS-WALK
Call number: QC174.45.A6.C56::2011 Title: The infinity puzzle: How the quest to understand quantum field theory led to extraordinary science, high politics, and the world's most expensive experiment Author(s): Frank Close Date: 2011 Publisher: New York: Oxford University Press Size: 399 p. Contents: 1. GENESIS 1. The point of infinity 2. Shelter Island and QED 3. Feynman, Schwinger...and Tomonaga (and Dyson) Intermission 1950 4. Abdus Salam - A Strong Beginning 5. Yang-Mills...and Shaw 6. The identity of John Ward 7. The Marriage of Weak and Electromagnetic Forces - to 1964 Intermission 1960 8. Broken Symmetries 9. The Boson That Has Been Named After Me, aka the Higgs Boson Intermission: early 1970s BJ and the Cosmic Quarks 13. A comedy of errors Intermission 1975 14. Heavy Light 15. Warmly Admired Richly Deserved 16. The Big Machine Intermission: the end of the 20th century 17. To infinity and beyond ISBN: 9780199593507 Keywords: Quantum field theory, Large Hadron Collider (France and Switzerland) Location: POP
Call number: QC16.B88A3::2015 Title: Most wanted particle: The inside story of the hunt for the Higgs, the heart of the future of physics Author(s): Jon Butterworth Date: 2015 Publisher: New York: The Experiment Note: First published in Great Britain in 2014 as Smashing physics Contents: Before the data -- Restart -- High energy -- Standard Model -- Rumours and limits -- First Higgs hints and some crazy neutrinos -- Closing in -- Discovery -- What next? ISBN: 9781615192458 Keywords: Butterworth, Jon - Career in physics, European Organization for Nuclear Research, Large Hadron Collider (France and Switzerland), Higgs bosons Location: POP
Call number: QB843.B55.F555::2009 Title: Black Holes Explained [videorecording] Author(s): Alexei V. Filippenko Date: 2009 Publisher: Chantilly, Va.: Teaching Co. Size: 2 videodiscs and 1 course guidebook Note: 12 lectures, 30 minutes per lecture Note: Course no. 1841 Contents: Lecture 1. A general introduction to black holes -- Lecture 2. The violent deaths of massive stars -- Lecture 3. Gamma-ray bursts - the birth of black holes -- Lecture 4. Searching for stellar-mass black holes -- Lecture 5. Monster of the Milky Way and other galaxies -- Lecture 6. Quasars - feasting supermassive black holes -- Lecture 7. Gravitational waves - ripples in space-time -- Lecture 8. The wildest ride in the universe -- Lecture 9. Shortcuts through the universe and beyond? -- Lecture 10. Stephen Hawking and black hole evaporation -- Lecture 11. Black holes and the holographic universe -- Lecture 12. Black holes and the Large Hadron Collider. ISBN: 9781598035896 Series: The great courses. Science & mathematics Keywords: Black holes (Astronomy) Location: AV-CROSS-WALK
Call number: NUPHZ:V179-180 Title: Photon-LHC-2008: Proceedings of the International Workshop on High-Energy Photon Collisions at the LHC, CERN, Geneva, Switzerland, 22-28 April 2008 Conference: Workshop on High Energy Photon Collisions at the LHC, 21-25 Apr 2008, Geneva, Switzerland [C08-04-21.2] Author(s): D. d'Enterria (ed.), M. Klasen (ed.), K. Piotrzkowski (ed.) Date: 2008 Publisher: Amsterdam: North-Holland Size: 313 p Series: Nucl.Phys.Proc.Suppl.179-180 Keywords: Photon-photon interactions Congresses, Large Hadron Collider Congresses, Conference proceedings, Conferences Location: ONLINE
Call number: NAT-448.N7151::2007:SUPPL Title: Large Hadron Collider Author(s): Date: 07/19/07 Publisher: Nature Publishing Group Note: Nature insight, supplement Note: Reprinted from V448 No.7151, July 19, 2007 Location: POP
Call number: FERMILAB-VLHCPUB-227 Title: Progress toward the Very Large Hadron Collider, March, 2000 Author(s): Date: 03/--/00 Size: 23 p Keywords: Superconducting magnets Design and construction Location: FERMI Report-number: FERMILAB-VLHCPUB-227
Call number: CERN-2011-003 Title: Proceedings of EuCARD-AccNet-EuroLumi Workshop: The High-Energy Large Hadron Collider, Malta, Republic of Malta, 14-16 Oct 2010 Author(s): E. Todesco, F. Zimmermann Date: 11/--/11 Size: 156 p Note: * Temporary entry * Location: ONLINE Report-number: EUCARD-CON-2011-001
Call number: CERN-2005-011 Title: Proceedings of the eleventh Workshop on Electronics for LHC and Future Experiments, Heidelberg, Germany, 12-16 September 2005 Conference: 11th Workshop on Electronics for LHC and Future Experiments (LECC 2005), 12-16 September 2005, Heidelberg, Germany [C05-09-12.10] Author(s): Workshop on Electronics for LHC Experiments (11th: 2005: Heidelberg, Germany) Date: 2005 Publisher: Geneva: CERN Size: 473 p ISBN: 9290832622 Series: CERN (series) 2005-011 Keywords: Large Hadron Collider Data processing Congresses, Real-time data processing Congresses, Automatic data collection systems Congresses, Conference proceedings, Conferences Location: ONLINE Report-number: CERN-2005-011
|
|
Published: Aug 20th, 2020
Learning Objectives
By the end of this section, you should be able to:
1. Solve a function by factoring
2. Solve a function using the quadratic equation
In this lesson, we will discuss something you may have covered extensively in the tenth grade: quadratic functions. These functions are in the form $$f(x) = ax^2 + bx + c$$, where $$a$$, $$b$$, and $$c$$ are all real numbers.
When you are asked to solve an equation, quadratic or not, this typically means to find values of $$x$$ for which the function equals some other value. In this case, we are looking for values of $$x$$ for which the function $$f(x)$$ is equal to zero. These points are called the zeroes, or roots of a function. The graphic below shows how we can find the root for an unknown function by looking at its graph:
The graph of this function takes on a shape called a parabola. The shape of this parabola can either be described as concave upwards (yielding the smiley face shape) or concave downwards (the frowny face). Believe it or not, we encounter parabolic motion (motion in a parabolic shape) very often. This is because all objects near the earth's surface experience a constant force of gravity, $$F_g$$, which is related to the object's mass by a constant called the acceleration due to gravity, $$g$$, with magnitude 9.81 $$m/s^2$$. One example of parabolic motion is shown in the graphic below:
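In formula form (a standard kinematics expression, stated here without derivation and not part of the original lesson), the height $$h$$ of a ball thrown straight up from height $$h_0$$ with initial speed $$v_0$$ is quadratic in time $$t$$:

$$h(t) = h_0 + v_0 t - \tfrac{1}{2} g t^2$$

so asking when the ball lands, $$h(t) = 0$$, is exactly the kind of root-finding problem this lesson covers.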
Introductions and silliness aside, let's go over three ways we can solve a quadratic equation:
Solving by Factoring
When a function can be factored, we can find its roots by factoring and then setting each factor equal to zero and solving for $$x$$.
Factor $$x^2 + 2x + 1$$
Hopefully this one's pretty easy. We need two numbers that multiply to 1 and add to 2: 1 and 1! Writing this in factored form, we get:
$$x^2+2x+1 = (x+1)^2$$
Recall that solving means to find the zeroes, or roots of the equation. In this case, we set $$(x+1)^2 = 0$$, which gives $$x+1 = 0$$, or $$x=-1$$.
And now an $$a>1$$ example:
Factor $$2x^2+x-3$$
In this case, we are looking for an expression in the form $$(ax+b)(cx+d)$$ where $$ac = 2$$, $$bd = -3$$, and $$ad+bc = 1$$. By trial and error, we get $$(2x+3)(x-1)$$.
To solve, we set $$(2x+3) = 0$$ and $$(x-1) = 0$$, giving $$x = \frac{-3}{2}$$ and $$x=1$$.
Now, what if the equation can't be factored? Well, luckily there's the quadratic equation:
$$x = {-b \pm \sqrt{b^2-4ac} \over 2a}$$, where $$a$$, $$b$$, and $$c$$ are from the quadratic form of the equation: $$ax^2+bx+c$$
Let's try a few examples:
Solve $$x^2 - 3x + 1 = 0$$
Well, let's at least try factoring without the quadratic equation... What two numbers multiply to 1 and add to -3?
Hmmmm..... hmmmmmmmmm...... well, if you said "there aren't any," you'd be right... sort of!
Let me explain. Let's try plugging this equation into the quadratic formula, where $$a=1$$, $$b=-3$$, and $$c=1$$:
$$x = {-(-3) \pm \sqrt{(-3)^2-4(1)(1)} \over 2(1)}$$
$$x = {3 \pm \sqrt{9-4} \over 2}$$
$$x = {3 \pm \sqrt{5} \over 2}$$
So, there are two real number solutions: $$x = {3 + \sqrt{5} \over 2}$$ and $$x = {3 - \sqrt{5} \over 2}$$. Bet you wouldn't have thought of those on your own, eh?
Anyway, let's try another example:
Solve $$5x^2 - 10x + 10 = 0$$. Common factor the 5: $$5(x^2-2x+2)$$
Two numbers that multiply to 2 and add to -2... can't think of any... quadratic it is!
$$x = {-(-2) \pm \sqrt{(-2)^2-4(1)(2)} \over 2(1)}$$
$$x = {2 \pm \sqrt{4-8} \over 2}$$
$$x = {2 \pm \sqrt{-4} \over 2}$$
W-w-what?! The square root of a negative number! Preposterous! Well, not quite.
If you remember from Lesson #4: Types of Numbers, we have a name for these types of numbers: Complex Numbers
This means that we cannot plot these numbers on the real number line, represented by $$\mathbb{R}$$. Instead, these are part of a larger group of numbers that includes the real numbers... the complex numbers $$\mathbb{C}$$. Fancy letters indeed, but all you need to know is that $$i = \sqrt{-1}$$. Given this, let's factor $$i$$ out of the square root:
$$x = {2 \pm i\sqrt{4} \over 2}$$
$$x = {2 \pm 2i \over 2}$$
$$x = 1 \pm i$$, or $$1+i$$ and $$1-i$$. Note that $$-i = -\sqrt{-1} \neq i$$
That's as simple as it gets!
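If you'd like to check answers like these by machine, here is a minimal Python sketch (not part of the original lesson) of the quadratic formula; `cmath.sqrt` handles a negative discriminant, so complex roots like $$1 \pm i$$ fall out automatically:

```python
import cmath

def solve_quadratic(a, b, c):
    """Return the two roots of ax^2 + bx + c = 0 (a must be nonzero)."""
    disc = b * b - 4 * a * c
    root = cmath.sqrt(disc)  # complex square root of the discriminant
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

print(solve_quadratic(1, -3, 1))  # (3 +/- sqrt(5))/2, printed with 0j imaginary parts
print(solve_quadratic(1, -2, 2))  # the 1 + i and 1 - i example above
```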
Now, like the cartoon illustrated, you might be wondering "When am I ever going to use this in my life, anyway?" Well, that's a good segue for the next part of this lesson!
The last time we looked at quadratic inequalities (Lesson #10: Inequalities) I sort of condescendingly addressed the subject.
Well, the sass only came because we weren't quite equipped to answer those questions quite yet. Now, however, we are!
Let's have a crack at it, shall we?
When is $$4x^2-16x+4 \gt 0$$?
Common factoring the 4 and dividing both sides by it (which is safe, since 4 is positive, so the inequality keeps its direction), we get: $$x^2 - 4x + 1 \gt 0$$. When is this function greater than zero?
Let's solve. We need two numbers that multiply to 1 and add to -4. Looks like a job for the quadratic equation.
Letting $$a=1$$, $$b=-4$$, and $$c=1$$:
$$x = {-(-4) \pm \sqrt{(16)-4(1)(1)} \over 2(1)}$$
$$x = {4 \pm \sqrt{12} \over 2} = \frac{4 \pm 2\sqrt{3}}{2} = 2 \pm \sqrt{3}$$
Now we have two important things: our zeroes, and an idea of the function's shape. How do we know the latter?
Well, looking at the equation, we see that $$a>0$$. This is a smiley face function. Because we have its zeroes, we know that anything between these zeroes must be negative, because the function hasn't crossed $$y=0$$ at that point yet! You can prove it for yourself by plugging in any number between $$2 - \sqrt{3}$$ and $$2 + \sqrt{3}$$ into $$x^2 - 4x + 1$$. The result should be negative. If not, please call (678) 999-8212.
Now, if everything between those numbers is less than zero, and those numbers are the zeroes of the function (the x values where $$y=0$$), then everything on either side of the zeroes must be positive! In math terms:
$$x \lt 2 - \sqrt{3}$$ or $$x \gt 2 + \sqrt{3}$$.
Let's say you're a lemonade salesperson.
You sell lemonade for \$1.50 a cup. On average, you sell 100 cups each day. You're trying to find out the best possible price, so you've been doing some market research. You notice that, for every \$0.10 you increase the price, you lose 2 customers. What is the best price at which you can maximize revenue?
Let's write this in equation form:
$$(1.50 + 0.10x)(100-2x)$$.
The function represents our revenue. For every \$0.10 added to the price, represented by $$x$$, we lose 2 customers. This type of problem is called an optimization problem: we're trying to find the best possible price at which we can maximize the revenue.

Now, let's plot this equation (you won't have to do this every time, but it's helpful to understand the principle behind these problems). Looking at this curve, we see that $$a<0$$; it's a frowny-face parabola. There are two roots: one negative and one positive.

Recall that this function represents our revenue. As businesspeople, we're trying to maximize our revenue. In other words, we want to find the value of $$x$$ for which the parabola is at its maximum; the extreme value of this function. Now, how can we do that? Well, there are two ways.

First, we can solve the function (which is luckily already factored!):

$$1.50 + 0.10x = 0$$. Multiply both sides by 10: $$15 + x = 0$$, so $$x = -15$$.

$$100-2x=0$$, so $$x=50$$.

These $$x$$ values represent the two roots of the parabola. As you may recall from Lesson #7: Four Basic Functions, all quadratic functions are symmetric about a vertical axis. Given that we have two points on either edge of this symmetrical figure that share the same $$y$$ value (zero), we can take the average of these two points to get the midpoint: $$\frac{-15+50}{2} = \frac{35}{2} = 17.5$$.

In other words, to maximize our revenue, we can increase our price by 17.5 × \$0.10 = \$1.75, to \$3.25 (1.75 + 1.50), and still retain 100 − 2(17.5) = 65 customers.
Now, you may be wondering about the other way. Well, I don't really like it myself because it involves memorizing, and that's not what math's about!
But, if you want, the formula to find the midpoint is $$\frac{-b}{2a}$$. Expanding our function, we get:
$$150 + 7x - 0.2x^2$$
Where $$a = -0.2$$, $$b=7$$, and $$c=150$$.
$$\frac{-7}{2(-0.2)} = \frac{-70}{-4} = 17.5$$. Same result!
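If you want to double-check the lemonade numbers by machine, here's a quick Python sketch (just a verification of the arithmetic above, not a new method):

```python
# Revenue R(x) = (1.50 + 0.10x)(100 - 2x) = -0.2x^2 + 7x + 150; vertex at -b/(2a).
a, b = -0.2, 7.0
x_best = -b / (2 * a)         # 17.5 ten-cent price increases
price = 1.50 + 0.10 * x_best  # $3.25 per cup
customers = 100 - 2 * x_best  # 65 customers per day
revenue = price * customers   # $211.25 per day
print(x_best, price, customers, revenue)
```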
See? There are some applications to this stuff. With that, let's move on to some practice problems, shall we?
|
|
# Six speeds
A drilling machine is to have 6 speeds ranging from 50 to 750 revolutions per minute. If the speeds form a geometric progression, determine their values.
Result
a1 = 50 rpm
a2 = 85.939 rpm
a3 = 147.71 rpm
a4 = 253.88 rpm
a5 = 436.362 rpm
a6 = 750.007 rpm
#### Solution:
$a_1 = 50 \ \text{rpm}$
$a_6 = 750 \ \text{rpm}$
$a_6 = q^5 \, a_1 \implies q = \sqrt[5]{a_6/a_1} = \sqrt[5]{750/50} \doteq 1.7188$
$a_2 = q \cdot a_1 = 1.7188 \cdot 50 \doteq 85.939 \ \text{rpm}$
$a_3 = q \cdot a_2 = 1.7188 \cdot 85.939 \doteq 147.71 \ \text{rpm}$
$a_4 = q \cdot a_3 = 1.7188 \cdot 147.71 \doteq 253.88 \ \text{rpm}$
$a_5 = q \cdot a_4 = 1.7188 \cdot 253.88 \doteq 436.362 \ \text{rpm}$
$a_6 = q \cdot a_5 = 1.7188 \cdot 436.362 \doteq 750.007 \ \text{rpm}$
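The same computation as a short Python sketch (an illustration, not part of the original solution):

```python
# Six spindle speeds in geometric progression from 50 to 750 rpm:
# a6 = a1 * q^5, so q is the fifth root of 750/50 = 15.
a1, a6, n = 50.0, 750.0, 6
q = (a6 / a1) ** (1 / (n - 1))          # about 1.7188
speeds = [a1 * q**k for k in range(n)]
print(round(q, 4), [round(s, 3) for s in speeds])
# approximately: 1.7188 [50.0, 85.939, 147.709, 253.878, 436.362, 750.0]
```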
## Next similar math problems:
1. Wavelength
Calculate the wavelength of a tone with frequency 11 kHz if sound travels at a speed of 343 m/s.
2. Theorem prove
We want to prove the sentence: If the natural number n is divisible by six, then n is divisible by three. From what assumption do we start?
3. Geometric progression 4
8, 4√2, 4, 2√2
4. Geometric progression 2
There is a geometric sequence with a1 = 5.7 and quotient q = -2.5. Calculate a17.
5. Six terms
Find the first six terms of the sequence a1 = -3, an = 2 * an-1
6. GP - 8 items
Determine the first eight members of a geometric progression if a9=512, q=2
7. Five members
Write the first 5 members of a geometric sequence and determine whether it is increasing or decreasing: a1 = 3, q = -2
8. Geometric sequence 4
Given a geometric sequence with a3 = 7 and a12 = 3, calculate s23 (the sum of the first 23 members of the sequence).
9. Calculation
What is the sum of the square root of six and the square root of 225?
10. Insert into GP
Between the numbers 5 and 640, insert as many numbers as needed to form a geometric progression in which the inserted numbers sum to 630. How many numbers must you insert?
11. Tenth member
Calculate the tenth member of a geometric sequence given: a1 = 1/2 and q = 2
12. Water lilies
Water lilies are growing on the pond and their number is doubled every day. The whole layer is covered in 12 days. How many days will it cover 8 layers?
13. Powers 3
2 to the power of n divided by 4 to the power of -3 equals 4. What is the value of n?
14. Computer
The computer was purchased for 10000,-. Each year, the price of the computer depreciates by the same percentage of the previous year's price. After four years, the value of the computer is reduced to 1300,-. By what percentage did the price of the computer depreciate each year?
15. Power
Number ?. Find the value of x.
16. Geometric sequence 5
About the members of a geometric sequence we know: ? ? Calculate a1 (the first member) and q (the common ratio or q-coefficient).
17. Geometric progression
Insert 4 numbers between 4 and -12500 to form a geometric progression.
|
|
# What can perfectly convert EPUB to PDF?
For Windows 8.1 and also Windows 10, what can reproduce EPUB files as PDFs, with no changes and no loss of quality?
I am NOT asking about Calibre and ePub Converter, which can transform an EPUB file into a PDF, but which uselessly disfigured the original EPUB's font, format, and structure. Specifically, the text in the PDF becomes disorganised; one original paragraph (on the original page) is chaotically split across different pages. All text layout and formatting are lost: paragraphs are compressed together, headers shrink in size until they no longer look like headers, etc.
• Nothing can perfectly convert ePub to PDF, as a PDF is a static visual presentation, while an ePub book is dynamic in its presentation. The font and format, at least, is left up to the reader software. The structure is given, but how it is presented can be left somewhat to the presentation software. So please clarify why Calibre and ePub converter doesn't work for you, and what you perceive as the perfect conversion. – holroy Aug 16 '15 at 16:29
• @holroy Thanks. I clarified above in my OP. Better? – Greek - Area 51 Proposal Aug 16 '15 at 20:19
I personally have always had good results from pandoc, but about the only thing I can think of that will accurately reproduce the on-screen content of an eBook in a PDF file 100% of the time is to print to a PDF file using one of the many print-to-PDF drivers available. I will not try to recommend one, as I do not know which OS you are on.
Pandoc is free & cross platform so has to be worth a try.
• +1. Thanks. I elucidated which OS I use in my OP. – Greek - Area 51 Proposal Aug 16 '15 at 20:20
• pandoc was such a waste of time, I had to download a TeX distribution to get pdflatex.exe and when I did, it simply said Error producing PDF. while converting using pandoc -o out.pdf book.epub – Shayan Mar 24 at 14:24
• pandoc sounded interesting in theory, but having to download TeX Live (>3GB) or MiKTeX (throws error 'tex capacity exceeded, sorry [pool size=3178236]') is simply unbearable, when all you want is pdflatex.exe and to have it generate a pdf from an epub. I ended up using ebook-convert.exe, which is a command line interface already shipped with calibre. – Tiago Duarte May 22 at 8:48
You may try PDFMate eBook Converter. It seems to be a new program, but works well for me right now.
You can visit the official website of PDFMate at www.pdfmate.com. Some of its programs are free. PDFMate eBook Converter seems to be a new program as I didn't see it months ago. I downloaded the program the other day to convert epub to pdf & mobi, worked okay for me.
• Please provide a link to the mentioned product. Any affiliation with PDFMate ? if so please mention. – albert Apr 9 '18 at 9:27
• You can visit the official website of PDFMate at www.pdfmate.com. Some of its programs are free. PDFMate eBook Converter seems to be a new program as I didn't see it months ago. I downloaded the program the other day to convert epub to pdf & mobi; it worked okay for me. – Joe Gromny Apr 9 '18 at 9:31
• @JoeGromny: Please add this info into the body of your answer. Do you have any affiliation with PDFMate? Thanks! – Nicolas Raoul Apr 9 '18 at 10:14
|
|
## Studia Mathematica
1995-1996 | 117 | 1 | 43-55
Article title
### Extension of operators from weak*-closed subspaces of $\ell_1$ into C(K) spaces
Publication language: EN
Abstract (EN)
It is proved that every operator from a weak*-closed subspace of $\ell_1$ into a space C(K) of continuous functions on a compact Hausdorff space K can be extended to an operator from $\ell_1$ to C(K).
Pages
43-55
Dates
published: 1994-12-16
revised: 1995-07-04
Authors
• Department of Mathematics, Texas A&M University, College Station, Texas 77843, U.S.A., johnson@math.tamu.edu
|
|
## Differential and Integral Equations
### Multiple positive solutions for p-Laplacian equation with weak Allee effect growth rate
#### Abstract
A $p$-Laplacian equation with weak Allee effect growth rate and Dirichlet boundary condition is considered. The existence, multiplicity and bifurcation of positive solutions are proved with comparison and variational techniques. The existence of multiple positive solutions implies that the related ecological system may exhibit bistable dynamics.
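The abstract does not reproduce the equation itself. For readers unfamiliar with the notation, the $p$-Laplacian operator and the generic shape of such a Dirichlet problem are as follows (a sketch only; the specific weak Allee effect growth rate $f$ studied in the paper is not shown here):
$\Delta_p u := \operatorname{div}\big(|\nabla u|^{p-2}\nabla u\big), \qquad -\Delta_p u = \lambda f(u) \ \text{in } \Omega, \quad u = 0 \ \text{on } \partial\Omega$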
#### Article information
Source
Differential Integral Equations, Volume 26, Number 7/8 (2013), 707-720.
Dates
First available in Project Euclid: 20 May 2013
|
|
# Thread: Express in terms of natural logs...
1. ## Express in terms of natural logs...
Express $\log_{7}4$ in terms of natural logarithms. Do not find a numerical answer.
Thanks for your help.. I really appreciate it.
2. Originally Posted by Savior_Self
Express $\log_{7}4$ in terms of natural logarithms. Do not find a numerical answer.
Thanks for your help.. I really appreciate it.
If $y = \log_7(4)$ then $4 = 7^y$, from the definition of "log". Now solve for y by taking the natural logarithm of both sides.
3. Originally Posted by Savior_Self
Express $\log_{7}4$ in terms of natural logarithms.
Hint: Use the change-of-base formula.
4. alright, I believe I've got it.
$x = \log_{7}4$
$7^x = 4$
$\ln 7^x = \ln 4$
$x \ln 7 = \ln 4$
$x = \frac{\ln 4}{\ln 7}$
all good?
5. Originally Posted by Savior_Self
alright, I believe I've got it.
$x = \log_{7}4$
$7^x = 4$
$\ln 7^x = \ln 4$
$x \ln 7 = \ln 4$
$x = \frac{\ln 4}{\ln 7}$
all good?
All good!!
6. Originally Posted by Savior_Self
alright, I believe I've got it.
$x = \log_{7}4$
$7^x = 4$
$\ln 7^x = \ln 4$
$x \ln 7 = \ln 4$
$x = \frac{\ln 4}{\ln 7}$
all good?
Yes, but there is no point in doing all that. With the change-of-base formula (as stapel pointed out) you can change between log bases easily. Change of base says:
$\log_x y = \frac{\log_z x}{\log_z y}$
It's pretty easy to use.
7. Originally Posted by Korupt
Yes, but there is no point in doing all that. With the change-of-base formula (as stapel pointed out) you can change between log bases easily. Change of base says:
$\log_x y = \frac{\log_z x}{\log_z y}$
It's pretty easy to use.
$\log_x y = \left(\frac{\log_z x}{\log_z y}\right)^{-1} = \frac{\log_z y}{\log_z x}$
|
|
# What are some good sources of 3rd order gene expression data
I need good sources of gene expression data in the form of 3rd order tensors. Typically the commonly available datasets are in the form of a matrix, for instance, $sample \times gene$ or $gene \times time$. I need good sources for 3-dimensional data, for example $gene \times sample \times tissue$ or $gene \times sample \times time$ ... etc.
• Analysis packages rarely use that format for analysis, so find a nice time course dataset (there won't be many matching your needs) and reshape its matrix into the correct dimensions (a reshaping sketch follows the answer below). As an aside, it's rarely useful to refer to a 3D matrix or 3-factor experiment as a "3rd order tensor" outside of machine learning. Jun 22 '18 at 7:33
• I am working on tensor decomposition. I am interested in testing an idea on gene expression 3-d dataset. The papers who have worked on such gene expression dataset haven't made their data public. So... Jun 22 '18 at 7:58
• @Satwik, you can ask them for the data and do a collaboration (if they and you wish).
– llrs
Jun 22 '18 at 14:10
## 1 Answer
I am familiar with inflammatory bowel disease, which is quite complicated: it affects multiple sites and differs depending on when it appears. So you can look at GEO for microarray and RNA-seq data of this disease, and I'm sure you'll find datasets from the same patients across several tissues or time points.
For this disease there is a multi-omics project that fits your needs. You can find (and download) high-dimensional data: there are samples from the same patients for several tissues (at different locations) and genes, over a period of time and for several omics (not only expression data).
The only problem is that there are some mistakes in the metadata.
However, I don't think they are usually described as 3rd order tensors and I doubt you'll find enough data to train a machine learning method.
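As a minimal sketch of the reshaping approach suggested in the comments (all array names and dimensions below are hypothetical, and a real dataset would need its samples ordered consistently by tissue and time point):
import numpy as np
n_genes, n_tissues, n_times = 2000, 4, 6
flat = np.random.rand(n_genes, n_tissues * n_times)  # stand-in for a real gene x sample matrix
# Fold the flat matrix into a 3rd-order tensor: gene x tissue x time.
tensor = flat.reshape(n_genes, n_tissues, n_times)
print(tensor.shape)  # (2000, 4, 6)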
|
|
# What is the derivative of sqrt(x^2+2x-1)?
Mar 5, 2016
$\frac{x + 1}{\sqrt{{x}^{2} + 2 x - 1}}$
#### Explanation:
What we have here is a function within a function; ${x}^{2} + 2 x - 1$ is under the radical ($\sqrt{}$) sign. That means we have to use the chain rule to differentiate, which says that you differentiate the "outside" function, leaving the "inside" function (in this case ${x}^{2} + 2 x - 1$) untouched, and multiply the result by the derivative of the inside function.
Begin by finding the derivative of ${x}^{2} + 2 x - 1$. Using the power rule, the derivative is $2 x + 2$. Now onto the whole function. Note that we can write $\sqrt{{x}^{2} + 2 x - 1}$ as ${\left({x}^{2} + 2 x - 1\right)}^{\frac{1}{2}}$. That means we can again apply the power rule:
$\frac{d}{\mathrm{dx}} {\left({x}^{2} + 2 x - 1\right)}^{\frac{1}{2}} = \frac{1}{2 {\left({x}^{2} + 2 x - 1\right)}^{\frac{1}{2}}}$
Now we can multiply this by the derivative of the inside function, which we found as $2 x + 2$. Performing this operation yields:
$\frac{1}{2 {\left({x}^{2} + 2 x - 1\right)}^{\frac{1}{2}}} \cdot \left(2 x + 2\right) = \frac{2 x + 2}{2 {\left({x}^{2} + 2 x - 1\right)}^{\frac{1}{2}}}$
Finally, look for any ways to simplify the problem. We see that there is a $2$ in the denominator - is there any way we can get rid of it? In fact, there is by factoring out a $2$ from the numerator; take a look:
$\frac{2 \left(x + 1\right)}{2 {\left({x}^{2} + 2 x - 1\right)}^{\frac{1}{2}}} = \frac{x + 1}{{\left({x}^{2} + 2 x - 1\right)}^{\frac{1}{2}}}$
Because the problem was given to us in radical form, we should convert it back, rewriting the answer as:
$\frac{x + 1}{\sqrt{{x}^{2} + 2 x - 1}}$
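A quick way to double-check a derivative like this is a computer algebra system; here is a minimal sketch with sympy (not part of the original answer):
import sympy as sp
x = sp.symbols('x')
f = sp.sqrt(x**2 + 2*x - 1)
# Differentiate and simplify; the result should match (x + 1)/sqrt(x^2 + 2x - 1).
print(sp.simplify(sp.diff(f, x)))  # (x + 1)/sqrt(x**2 + 2*x - 1)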
|
|
# Gamma Function & Strong Force
1. Aug 23, 2008
### the one
I heard that the gamma function explains the strong nuclear force.
$$\Gamma(z) = \int_0^\infty t^{z - 1} e^{-t}\, dt$$
How does it explain the Force?
Thanks
2. Aug 23, 2008
Staff Emeritus
I don't think any mathematical function can "explain" any physical phenomenon. It may model it, or represent it, or be useful in calculations, but it can't explain anything - at best it can be used in an explanation.
3. Aug 23, 2008
### Angryphysicist
I think this is a vast understatement of the Veneziano amplitudes, which were used to explain Regge trajectories and were involved (dare I use the pun -- entangled?) with the strong force... or, more precisely, with "gluon flux tubes" (a sort of proto-string object).
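For concreteness (this formula is not in the original post, but is the standard statement): the Veneziano amplitude is the Euler Beta function, built from Gamma functions, evaluated on linear Regge trajectories:
$$A(s,t) = B(-\alpha(s), -\alpha(t)) = \frac{\Gamma(-\alpha(s))\,\Gamma(-\alpha(t))}{\Gamma(-\alpha(s) - \alpha(t))}, \qquad \alpha(x) = \alpha(0) + \alpha' x$$
Its poles reproduce the observed spectrum of hadronic resonances, which is how the Gamma function entered strong-interaction physics.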
It's fascinating stuff, so I'll give you some review papers to gaze upon:
New Strings for Old Veneziano Amplitudes I.Analytical Treatment arXiv:hep-th/0410242
New strings for old Veneziano amplitudes II. Group-theoretic treatment arXiv:hep-th/0411241
New Strings for Old Veneziano Amplitudes III. Symplectic Treatment arXiv:hep-th/0502231
New strings for old Veneziano amplitudes IV.Connections with spin chains and other stochastic systems arXiv:0805.0113
|
|
injection, surjection, bijection
A function $f \colon X \to Y$ is an injection (a one-to-one function) if distinct inputs give distinct outputs: if $x_1, x_2 \in X$ with $x_1 \ne x_2$, then $f(x_1) \ne f(x_2)$; equivalently, $f(x_1) = f(x_2)$ implies $x_1 = x_2$. It is a surjection (an onto function) if every element of $Y$ is the image of at least one element of $X$, i.e., the range of $f$ equals its codomain. A bijection is a function that is both an injection and a surjection; every element of the codomain is then the image of exactly one element of the domain, and the function has an inverse. Another name for a bijection is a one-to-one correspondence. The terms were introduced by the group of mathematicians publishing under the name Nicolas Bourbaki in their treatise Éléments de mathématique.
For example, $f \colon \mathbb{R} \to \mathbb{R}$ defined by $f(x) = x^3$ is a bijection, since $\big(x^{1/3}\big)^3 = \big(x^3\big)^{1/3} = x$. The same argument does not work for $f(x) = x^2$, which is neither an injection ($f(-1) = f(1)$) nor a surjection (no real $x$ satisfies $x^2 = -1$). Whether a function is an injection or a surjection depends not only on the formula that defines it but also on its stated domain and codomain: the same $F(x) = x^2 + 1$ that fails to be a surjection onto $\mathbb{R}$ is a surjection onto $T = \{y \in \mathbb{R} \mid y \ge 1\}$.
Injections and surjections also constrain the sizes of sets: if $X$ and $Y$ are finite and $f \colon X \to Y$ is injective, then $|X| \le |Y|$, and having a bijection between two sets means they have the same size. For a finite set $S$, the number of permutations of $S$ equals the number of bijections from $S$ to $S$.
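For finite sets these definitions translate directly into code; a minimal sketch (not from the original text), representing a function as a dict from domain elements to their images:
# Check injectivity and surjectivity of a finite function given as a dict.
def is_injective(f):
    return len(set(f.values())) == len(f)    # no two inputs share an output
def is_surjective(f, codomain):
    return set(f.values()) == set(codomain)  # every codomain element is hit
f = {x: x * x for x in range(-2, 3)}         # x -> x^2 on {-2, ..., 2}
print(is_injective(f))                       # False: f[-1] == f[1]
print(is_surjective(f, {0, 1, 4}))           # True onto {0, 1, 4}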
|
|
# On computer program being a whole
Who cares whether some computer program is a whole, how, and why? Turns out, more people than you may think—and so should you, since it can be costly depending on the answer. Consider the following two scenarios: 1) you download a ‘pirated’ version of MS Office or Adobe Photoshop (the most popular ones still) and 2) you take the source code of a popular open source program, such as Notepad++, add a little code for some additional function, and put it up for sale only as an executable app called ‘Notepad++ extreme (NEXT)’ so as to try to earn money quickly. Are these actions legal?
In both cases, you'd break the law, but how many infringements took place, each of which you could potentially be fined for or face jail time over? For the piracy case, is that once for the MS Office suite, or once for each program in the suite, or for each file created upon installing MS Office, or for each source code file that went into making the suite during software development? For the open source case, was its GNU GPL open source licence violated once for the zipped-and-downloaded or cloned source code, or once for each file in the source code, of which there are hundreds? It is possible to construct similar questions for trade secret violations and patent infringements for programs, as well as for other software artefacts, like illegal downloads of TV series episodes (going strong during COVID-19 lockdowns indeed). Just in case you think this sort of issue is merely hypothetical: recently, Arista paid Cisco $400 million for copyright damages and, just before that, Zenimax got $500 million from Oculus (yes, the VR software) for trade secret violations, and Google vs Oracle is ongoing with “billions of dollars at stake”.
Let’s consider some principles first. To be able to answer the number of infringements, we first need to know whether a computer program is a whole or not and why, and if so, what’s ‘in’ (i.e., a part of it) and what’s ‘out’ (i.e., definitely not part of it). Spoiler alert: a computer program is a functional whole.
To get to that conclusion, I had to combine insights from theories of parthood (mereology), granularity, modularity, unity, and function, and add a little more into the mix. To provide both less and more condensed versions of the argumentation, there is a longer technical report [1], which I hope is readable by a wider audience, and a condensed version for a specialist audience [2] that was published in the Proceedings of the 11th Conference on Formal Ontologies in Information Systems (FOIS’20) two weeks ago. Very briefly and informally, the state of affairs can be illustrated with the following picture:
This schematic representation shows, first, two levels of granularity: level 1 and level 2. At level 1, there’s some whole, like the a1 and a2 in the figure that could be referring to, say, a computer program, a module repository, an electorate, or a human body. At a more fine-grained level 2, there are different entities, which are in some way linked to the respective whole. This ‘link’ to the whole is indicated with the vertical dashed lines, and one can say that they are part of the whole. For the blue dots on the right residing at level 2, i.e., the parts of a1, there’s also a unifying relation among the parts, indicated with the solid lines with arrows, which makes a1 an integral whole. Moreover, for that sort of whole, it holds that if some object x (residing at level 2) is part of a1 then if there’s a y that is also part of a1, it participates in that unifying relation with x and vice versa (i.e., if y is in that unifying relation with x, then it must also be part of a1). For the computer program’s source code, that unifying relation can be the source tree graph.
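Restated as a first-order sketch (my informal rendering of the condition just described, not the exact axiom from the technical report): for an integral whole $a_1$ with unifying relation $R$ among its level-2 parts,
$\forall x, y\, \big( part(x, a_1) \land part(y, a_1) \to R(x, y) \big) \;\land\; \forall x, y\, \big( part(x, a_1) \land R(x, y) \to part(y, a_1) \big)$
For a program, one would instantiate $R$ with, e.g., the source tree graph mentioned below.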
There is some nitty gritty detail also involving the notion of function—a source code file contributes to doing something—and optional vs mandatory vs essential part that you can read about in the report or in the paper [1,2], covering the formalisation, more argumentation, and examples.
How would it pan out for the infringements? The Notepad++ exploitation scenario would simply be a case of one infringement in total for all the files needed to create the executable, not one for each source code file. This conclusion from the theory turns out remarkably in line with the GNU GPL’s explanation of their licence, albeit then providing a theoretical foundation for their intuition that there’s a difference between a mere aggregate where different things are bundled, loose coupling (e.g., sockets and pipes) and a single program (e.g., using function calls, being included in the same executable). The order of things perhaps should have been from there into the theory, but practically, I did the analysis and stumbled into a situation where I had to look up the GPL and its explanatory FAQ. On the bright side, in the other direction now then: just in case someone wants to take on copyleft principles of open source software, here are some theoretical foundations to support that there’s probably much less money to be gained than you might think.
For the MS Office suite case mentioned at the start, I’d need a look under the hood to determine how it ties together and one may have to argue about the sameness of, or difference between, a suite and a program. The easier case for a self-standing app, like the 3rd-place most pirated Windows app Internet Download Manager, is that it is one whole and so one infringement then.
It’s a pity that FOIS 2020 has been postponed to 2021, but at least I got to talk about some of this as an expert witness for a litigation case, and I managed to weave an exercise about the source tree with open source licences into the social issues and professional practice module I taught to some 750 students this past winter.
References
[1] Keet, C.M. Why a computer program is a functional whole. Technical report 2008.07273, arXiv. 21 July 2020. 25 pages.
[2] Keet, C.M. The computer program as a functional whole. Proc. of FOIS 2020. Brodaric, B. and Neuhaus, F. (Eds.). IOS Press. FAIA vol. 330, 216-230.
# Orchestrating 28 logical theories of mereo(topo)logy
Parts and wholes, again. This time it’s about the logic-aspects of theories of parthood (cf. aligning different hierarchies of (part-whole) relations and making them compatible with foundational ontologies). I intended to write this post before the Ninth Conference on Knowledge Capture (K-CAP 2017), where the paper describing the new material would be presented by my co-author, Oliver Kutz. Now, afterwards, I can add that “Orchestrating a Network of Mereo(topo)logical Theories” [1] even won the Best Paper Award. The novelties, in broad strokes, are that we figured out and structured some hitherto messy and confusing state of affairs, showed that one can do more than generally assumed, especially with a new logics orchestration framework, and we proposed first steps toward conflict resolution to sort out expressivity and logic limitations trade-offs. Constructing a tweet-size “tl;dr” version of the contents is not easy, and as I have as much space here on my blog as I like, it ended up being three paragraphs here: scene-setting, solution, and a few examples to illustrate some of it.
Problems
As ontologists know, parthood is used widely in ontologies across most subject domains, such as biomedicine, geographic information systems, architecture, and so on. Ontology (the philosophers’) offers a parthood relation that has a bunch of computationally unpleasant properties, structured in a plethora of mereological and mereotopological theories such that it has become hard to see the forest for the trees. This is then complicated in practice because there are multiple logics of varying expressivity (supporting more or fewer language features), with the result that only certain fragments of the mereo(topo)logical theories can be represented. However, it’s mostly not clear what can be used when; during the ontology authoring stage one may want to have all those features so as to check correctness, and it’s not easy to predict what will happen when one aligns ontologies with different fragments of mereo(topo)logy.
Solution
We solved these problems by specifying a structured network of theories formulated in multiple logics that are glued together by the various linking constructs of the Distributed Ontology, Model, and Specification Language (DOL). The ‘structured network of theories’ part concerns all the maximal expressible fragments of the KGEMT mereotopological theory and five of its most well-recognised sub-theories (like GEM and MT) in the seven Description Logics-based OWL species, first-order logic, and higher-order logic. The ‘glued together’ part refers to relating the resultant 28 theories within DOL (in Ontohub), which is a non-trivial (understatement, unfortunately) metalanguage that has the constructors for the glue, such as letting one declare that two theories/modules represented in different logics are to be merged, or extend a theory (ontology) with axioms that go beyond that language without messing up the original (expressivity-restricted) ontology, and more. Further, because an annoying aspect of merging two ontologies/modules is that the merged ontology may be in a different language than the two originals, which is very hard to predict, we have a cute proof-of-concept tool that assists with steps toward resolution of language feature conflicts by pinpointing profile violations.
Examples
The paper describes nine mechanisms with DOL and the mereotopological theories. Here I’ll start with a simple one: we have Minimal Topology (MT) partially represented in OWL 2 EL/QL in “theory8” where the connection relation (C) is just reflexive (among other axioms; see table in the paper for details). Now what if we add connection’s symmetry, which results in “theory4”? First, we do this by not harming theory8, in DOL syntax (see also the ESSLI’16 tutorial):
logic OWL2.QL
ontology theory4 =
theory8
then
ObjectProperty: C Characteristics: Symmetric %(t7)
What is the logic of theory4? Still in OWL, and if so, which species? The Owl classifier shows the result:
Another case is that OWL does not let one define an object property; at best, one can add domain and range axioms and the occasional ‘characteristic’ (like the aforementioned symmetry), for allowing arbitrary full definitions would push it out of the decidable fragment. One can add them, though, in a system that can handle first-order logic, such as the Heterogeneous toolset (Hets); for instance, where in OWL one can add only “overlap” as a primitive relation (a vocabulary element without definition), we can take such a theory and declare that definition:
logic CASL.FOL
ontology theory20 =
theory6_plus_antisym_and_WS
then %wdef
. forall x,y:Thing . O(x,y) <=> exists z:Thing (P(z,x) /\ P(z,y)) %(t21)
. forall x,y:Thing . EQ(x,y) <=> P(x,y) /\ P(y,x) %(t22)
As last example, let me illustrate the notion of the conflict resolution. Consider theory19—ground mereology, partially—that is within OWL 2 EL expressivity and theory18—also ground mereology, partially—that is within OWL 2 DL expressivity. So, they can’t be the same; the difference is that theory18 has parthood reflexive and transitive and proper parthood asymmetric and irreflexive, whereas theory19 has both parthood and proper parthood transitive. What happens if one aligns the ontologies that contain these theories, say, O1 (with theory18) and O2 (with theory19)? The Owl classifier provides easy pinpointing and tells you the profile: OWL 2 full (or: first order logic, or: beyond OWL 2 DL—top row) and why (bottom section):
Now, what can one do? The conflict resolution cannot be fully automated, because it depends on what the modeller wants or needs, but there’s enough data generated already and there are known trade-offs so that it is possible to describe the consequences:
• Choose the O1 axioms (with irreflexivity and asymmetry on proper part of), which will make the ontology interoperable with other ontologies in OWL 2 DL, FOL or HOL.
• Choose O2’s axioms (with transitivity on part of and proper part of), which will facilitate linking to ontologies in OWL 2 RL, 2 EL, 2 DL, FOL, and HOL.
• Choose to keep both sets will result in an OWL 2 Full ontology that is undecidable, and it is then compatible only with FOL and HOL ontologies.
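To make the conflict concrete, the clashing declarations look roughly as follows in Manchester syntax (a sketch reconstructed from the description above, not copied from the actual theory files):
From O1 (theory18): ObjectProperty: properPartOf Characteristics: Asymmetric, Irreflexive
From O2 (theory19): ObjectProperty: properPartOf Characteristics: Transitive
Keeping both makes properPartOf transitive and hence non-simple, and OWL 2 DL does not allow asymmetry or irreflexivity to be declared on non-simple properties, which is exactly why the merge lands in OWL 2 Full.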
As a serious final note: there’s still fun to be had on the logic side of things with countermodels and sub-networks and such, and with refining the conflict resolution to assist ontology engineers better. (or: TBC)
As a less serious final note: the working title of early drafts of the paper was “DOLifying mereo(topo)logy”, but at some point we chickened out and let go of that frivolity.
References
[1] Keet, C.M., Kutz, O. Orchestrating a Network of Mereo(topo)logical Theories. Ninth International Conference on Knowledge Capture (K-CAP’17), Austin, Texas, USA, December 4-6, 2017. ACM Proceedings.
# On generating isiZulu sentences with part-whole relations
It all sounded so easy… We have a pretty good and stable idea about part-whole relations and their properties (see, e.g., [1]), we know how to ‘verbalise’/generate a natural language sentence from basic description logic axioms with object properties that use simple verbs [2], like $Professor \sqsubseteq \exists teaches.Course$ ‘each professor teaches at least one course’, and SNOMED CT is full of logically ‘simple’ axioms (it’s in OWL 2 EL, after all) and has lots of part-whole relations. So why not combine that? We did, but it took some more time than initially anticipated. The outcomes are described in the paper “On the verbalization patterns of part-whole relations in isiZulu”, which was recently accepted at the 9th International Natural Language Generation Conference (INLG’16) that will be held 6-8 September in Edinburgh, Scotland.
What it ended up being is that notions of ‘part’ in isiZulu are at times less precise and at other times more precise compared to the taxonomy of part-whole relations. This interfered with devising the sentence generation patterns, it pushed the number of ‘elements’ to deal with in the language up to 13 constituents, and there was no way to avoid proper phonological conditioning. We already could handle quantitative, relative, and subject concords, the copulative, and conjunction, but what had to be added were, in particular, the possessive concord, locative affixes, a preposition (just the nga in this context), the epenthetic, and the passive tense (with modified final vowel). As practically every element has to be ‘completed’ based on the context (notably the noun class), one can’t really speak of a template-based approach anymore, but rather of a bunch of patterns and a partial grammar engine. For instance, plain parthood, structural parthood, involvement, and membership all have:
• (‘each whole has some part’) $QCall_{nc_{x,pl}}$ $W_{nc_{x,pl}}$ $SC_{nc_{x,pl}}-CONJ-P_{nc_y}$ $RC_{nc_y}-QC_{nc_y}-$dwa
• (‘each part is part of some whole’) $QCall_{nc_{x,pl}}$ $P_{nc_{x,pl}}$ $SC_{nc_{x,pl}}-COP-$ingxenye $PC_{\mbox{\em ingxenye}}-W_{nc_y}$ $RC_{nc_y}-QC_{nc_y}-$dwa
There are a couple of noteworthy things here. First, the whole-part relation does not have one single string, like a ‘has part’ in English, but it is composed of the subject concord (SC) for the noun class (nc) of the noun that plays the role of the whole ( W ) together with the phonologically conditioned conjunction na- ‘and’ (the “SC-CONJ” above), glued onto the noun of the entity that plays the role of the part (P). Thus, the surface realisation of what is conceptually ‘has part’ is dependent on both the noun class of the whole (as the SC is) and on the first letter of the name of the part (e.g., na- + i- = ne-). The ‘is part of’ reading direction is made up of ingxenye ‘part’, which is a noun that is preceded by the copula (COP) y– and together then amounts to ‘is part’. The ‘of’ of the ‘is part of’ is handled by the possessive concord (PC) of ingxenye, and with ingxenye being in noun class 9, the PC is ya-. This ya- is then made into one word together with the noun for the object that plays the role of the whole, taking into account vowel coalescence (e.g., ya- + u- = yo-). Let’s illustrate this with heart (inhliziyo, nc9) standing in a part-whole relation to human (umuntu, nc1), with the ‘has part’ and ‘is part of’ underlined:
• bonke abantu banenhliziyo eyodwa ‘All humans have as part at least one heart’
• The algorithm, in short, to get this sentence from, say $Human \sqsubseteq \exists hasPart.Heart$: 1) it looks up the noun class of umuntu (nc1); 2) it pluralises umuntu into abantu (nc2); 3) it looks up the quantitative concord for universal quantification (QCall) for nc2 (bonke); 4) it looks up the SC for nc2 (ba); 5) then it uses the phonological conditioning rules to add na- to the part inhliziyo, resulting in nenhliziyo and strings it together with the subject concord to banenhliziyo; 6) and finally it looks up the noun class of inhliziyo, which is nc9, and from that it looks up the relative concord (RC) for nc9 (e-) and the quantitative concord for existential quantification (QC) for nc9 (being yo-), and strings it together with –dwa to eyodwa.
• zonke izinhliziyo ziyingxenye yomuntu oyedwa ‘All hearts are part of at least one human’
• The algorithm, in short, to get this sentence from $Heart \sqsubseteq \exists isPartOf.Human$: 1) it looks up the noun class of inhliziyo (nc9); 2) it pluralises inhliziyo to izinhliziyo (nc10); 3) it looks up the QCall for nc10 (zonke); 4) it looks up the SC for nc10 (zi-), takes y- (the COP) and adds them to ingxenye to form ziyingxenye; 5) then it uses the phonological conditioning rules to add ya- to the whole umuntu, resulting in yomuntu; 6) and finally it looks up the noun class of umuntu, which is nc1, and from that the RC for nc10 (o-) and the QC for nc10 (being ye-), and strings it together with –dwa to oyedwa.
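The project’s actual grammar engine is linked below; as an illustrative sketch only (lookup tables limited to this one worked example, function names my own), the first algorithm boils down to a lookup-and-concatenate pipeline:
# Illustrative sketch of the 'has part' generation steps for the worked
# example umuntu/inhliziyo; a real engine needs full noun class tables
# and the complete phonological conditioning rules.
NC = {'umuntu': 1, 'inhliziyo': 9}           # noun -> noun class
PLURAL = {'umuntu': ('abantu', 2), 'inhliziyo': ('izinhliziyo', 10)}
QC_ALL = {2: 'bonke', 10: 'zonke'}           # QCall per noun class
SC = {2: 'ba', 10: 'zi'}                     # subject concord
RC = {1: 'o', 9: 'e'}                        # relative concord
QC_EX = {1: 'ye', 9: 'yo'}                   # existential QC
def conj_na(noun):
    # na- + i... -> ne... (one of the vowel coalescence rules)
    return 'ne' + noun[1:] if noun.startswith('i') else 'na' + noun
def has_part(whole, part):
    w_pl, w_nc = PLURAL[whole]
    p_nc = NC[part]
    return ' '.join([QC_ALL[w_nc], w_pl,
                     SC[w_nc] + conj_na(part),
                     RC[p_nc] + QC_EX[p_nc] + 'dwa'])
print(has_part('umuntu', 'inhliziyo'))  # bonke abantu banenhliziyo eyodwa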
For subquantities, we end up with three variants: one for stuff-parts (as in ‘urine has part water’, still with ingxenye for ‘part’), one for portions of solid objects (as in ‘tissue sample is a subquantity of tissue’ or a slice of the cake) that uses umunxa instead of ingxenye, and one ‘spatial’ notion of portion, like an operating theatre being a portion of a hospital, or the area of the kitchen where the kitchen utensils are being a portion of the kitchen, which uses isiqephu instead of ingxenye. Umunxa is in nc3, so the PC is wa-, so that with, e.g., isbhedlela ‘hospital’ it becomes wesibhedlela ‘of the hospital’, and the COP is ng- instead of y-, because umunxa starts with an u. And yet other part-whole relations use locatives (like the containment type of part-whole relation). The paper has all those sentence generation patterns, examples for each, and explanations for them.
The meronymic part-whole relations participation and constitution have added aspects for the verb, such as generating the passive for ‘constituted of’: –akha is ‘to build’ for objects that are made/constituted of some matter in some structural sense, else –enza is used. They are both ‘irregular’ in the sense that it is uncommon that a verb stem starts with a vowel, so this means additional vowel processing (called hiatus resolution in this case) to put the SC together with the verb stem. Then, for instance za+akhiwe=zakhiwe but u+akhiwe=yakhiwe (see rules in paper).
Finally, this was not just a theoretical exercise, but it also has been implemented. I’ll readily admit that the Python code isn’t beautiful and can do with some refactoring, but it does the job. We gave it 42 test cases, of which 38 were answered correctly; the remaining errors were due to an ‘incomplete’ (and unresolvable case for any?) pluraliser and that we don’t know how to systematically encode when to pick akha and when enza, for that requires some more semantics of the nouns. Here is a screenshot with some examples:
The ‘wp’ ones are that a whole has some part, and the ‘pw’ ones that the part is part of the whole and, in terms of the type of axiom that each function verbalises, they are of the so-called ‘all some’ pattern.
The source code, additional files, and the (slightly annotated) test sentences are available from the GENI project's website. If you want to test it with other nouns, please check whether the noun is already in nncPairs.txt; if not, you can add it and then invoke the function again. (This remains a bit clumsy until we make a softcopy of all isiZulu nouns with their noun classes. Without the noun class explicitly given, the automatic detection of the noun class is not, and cannot be, more than about 50% correct, but with noun class information we can get the pluralisation step of the sentence generation up to 90-100% correct [4].)
References
[1] Keet, C.M., Artale, A. Representing and Reasoning over a Taxonomy of Part-Whole Relations. Applied Ontology, 2008, 3(1-2):91-110.
[2] Keet, C.M., Khumalo, L. Basics for a grammar engine to verbalize logical theories in isiZulu. 8th International Web Rule Symposium (RuleML’14), A. Bikakis et al. (Eds.). Springer LNCS vol. 8620, 216-225. August 18-20, 2014, Prague, Czech Republic.
[3] Keet, C.M., Khumalo, L. On the verbalization patterns of part-whole relations in isiZulu. 9th International Natural Language Generation conference (INLG’16), September 5-8, 2016, Edinburgh, UK. (in print)
[4] Byamugisha, J., Keet, C.M., Khumalo, L. Pluralising Nouns in isiZulu and Related Languages. 17th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing’16), Springer LNCS. April 3-9, 2016, Konya, Turkey. (in print)
# New OWL files for the (extended) taxonomy of part-whole relations
Once upon a time (surely >6 years ago) I made an OWL file of the taxonomy of part-whole relations [1], which contains several parthood relations and a few meronymic-only ones that in natural language are considered 'part' but are not so according to mereology (like participation and membership). Some of these relations were defined with a specific domain and range that was a DOLCE category (it could just as well have been, say, GFO). Looking at it recently, I noticed it was actually a bit scruffy (but I'll leave it here nonetheless), and more has happened in this area over the years. So, it was time for an update on contents and on design.
For the record on how it's done and to serve, perhaps, as a comparison exercise on modeling, here's what I did. First of all, I started over, so as to properly type the relations to DOLCE categories, with the DOLCE IRIs rather than duplicating them as DOLCE-category-with-my-IRI. As DOLCE is way too big and slows down reasoning, I made a module of DOLCE, called DOLCEmini, mainly by removing the irrelevant object properties, though re-adding the SOB, APO and NAPO that are in D18 but not in DOLCE-lite from DLP3791. This reduced the file from DOLCE-lite's 534 axioms, 37 classes, 70 OPs, in SHI, to DOLCEmini's 388 axioms, 40 classes, 43 OPs, also in SHI, and I changed the ontology IRI to where DOLCEmini will be put online.
Then I created a new ontology, PW.owl, imported DOLCEmini, and added the taxonomy of part-whole relations from [1] right under owl:topObjectProperty, with domain and range axioms using the DOLCE categories as in the definitions, under part-whole. This was then extended with the respective inverses under whole-part and all the relevant proper-part versions of them (with inverses); transitivity was added for all (as the reasoner isn't doing it [2]), annotations were added, and the result was aligned to some DOLCE properties with equivalences. This brings it to 524 axioms and 79 object properties.
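For illustration, here is a minimal rdflib sketch of what one such property declaration amounts to. The IRIs and the DOLCE category used here are placeholders, not the actual PW.owl or DOLCEmini IRIs, and this is not the script that built the file.

```python
from rdflib import Graph, Namespace, RDF, RDFS, OWL

PW = Namespace('http://example.org/pw#')        # placeholder IRI
DOLCE = Namespace('http://example.org/dolce#')  # stands in for DOLCEmini

g = Graph()
g.add((PW.partOf, RDF.type, OWL.ObjectProperty))
g.add((PW.partOf, RDF.type, OWL.TransitiveProperty))  # transitivity added explicitly
g.add((PW.partOf, RDFS.subPropertyOf, PW.partWhole))  # under the part-whole branch
g.add((PW.partOf, RDFS.domain, DOLCE.Endurant))       # domain/range as in the definitions
g.add((PW.partOf, RDFS.range, DOLCE.Endurant))
g.add((PW.hasPart, OWL.inverseOf, PW.partOf))         # the inverse, under whole-part
print(g.serialize(format='turtle'))
```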
I deprecated subquantityOf (annotated with 'deprecated' and subsumed by a new property 'deprecated'). Several new stuff relations and their inverses were added (such as portions) and annotated. This brings the PW ontology to 574 axioms (356 logical axioms) and 92 object properties (effectively, for part-whole relations: 92 – 40 from DOLCE – 3 for deprecated = 49).
We had also made an extension with mereotopology [3] (that file wasn't great either, though it did the job nevertheless [4]), but it is one that not everybody may want to put up with, so yet another file was created, PWMT. PWMT imports PW (and thus also DOLCEmini) and was extended with the main mereotopological relations from [3], with relevant annotations added. I skipped property disjointness axioms, because they don't go well with transitivity, which I assumed to be more important. This makes PWMT into one of 605 (380 logical) axioms and 103 object properties, with, effectively, for parts: 103 – 40 from DOLCE – 3 for deprecated – 1 connection = 59 object properties.
That's a lot of part-whole relations, but fear not. The 'Foundational Ontology and Reasoner enhanced axiomatiZAtion' (FORZA) method, which incorporates the Guided ENtity reuse and class Expression geneRATOR (GENERATOR) method [4], describes a usable approach to how that can work out well, and its tool works with the earlier version of the OWL file. FORZA uses an optional decision diagram for the DOLCE categories as well as the automated reasoner, so that it can select and propose to you those relations that, if used in an axiom, are guaranteed not to lead to an inconsistency that would be due to the object property hierarchy or its domain and range axioms. (I'll write more about it in the next post.)
Ah well, even if the OWL files are not used, it was still a useful exercise in design, and at least I'll have a sample case for next year's ontology engineering course on 'before' and 'after', contrasting a questionable implementation with a (relatively) good one, without needing to resort to criticizing other OWL files… (hey, even the good and widely used ontologies have a bunch of pitfalls, in numbers not statistically significantly different from ontologies made by novices [5]).
References
[1] Keet, C.M., Artale, A. Representing and Reasoning over a Taxonomy of Part-Whole Relations. Applied Ontology, 2008, 3(1-2):91-110.
[2] Keet, C.M. Detecting and Revising Flaws in OWL Object Property Expressions. 18th International Conference on Knowledge Engineering and Knowledge Management (EKAW’12), Oct 8-12, Galway, Ireland. Springer, LNAI 7603, 252-266.
[3] Keet, C.M., Fernandez-Reyes, F.C., Morales-Gonzalez, A. Representing mereotopological relations in OWL ontologies with OntoPartS. 9th Extended Semantic Web Conference (ESWC’12), Simperl et al. (eds.), 27-31 May 2012, Heraklion, Crete, Greece. Springer, LNCS 7295, 240-254.
[4] Keet, C.M., Khan, M.T., Ghidini, C. Ontology Authoring with FORZA. 22nd ACM International Conference on Information and Knowledge Management (CIKM’13). ACM proceedings, pp569-578. Oct. 27 – Nov. 1, 2013, San Francisco, USA.
[5] Keet, C.M., Suarez-Figueroa, M.C., Poveda-Villalon, M. Pitfalls in Ontologies and TIPS to Prevent Them. In: Knowledge Discovery, Knowledge Engineering and Knowledge Management: IC3K 2013 Selected papers. Fred, A., Dietz, J.L.G., Liu, K., Filipe, J. (Eds.). Springer, CCIS 454, pp. 115-131. 2015.
# Part-whole relations, mereotopology and the OntoPartS tool
Part-whole relations are considered essential in knowledge representation and reasoning and, more practically, in ontology development and conceptual data modelling, especially in the subject domains of biology, medicine, geographic information systems, and manufacturing. In contrast to Ontology, which sticks to one type of part-of, modellers and subject domain experts have come up with a plethora of part-whole relations, some of which are considered real parthood relations and others only meronymic (or: due to imprecise natural language use). For instance, the Foundational Model of Anatomy has 8 basic locative part-whole relations [1], GALEN has come up with 26 part-whole relations [2], and in cognitive science and conceptual data modelling it hovers around 6 types [3,4]. They have been structured in a taxonomy of part-whole relations that makes a distinction between mereology and meronymy, transitivity and in- or non-transitivity, and the domain and range of the relationship [5], and some initial usage guidelines were proposed in [6].
But that's not enough for the complex subject domains and the demands on representation and reasoning over the ontologies. This holds in particular when one has to represent that some things are contained in or located in something else. For instance, the way Paris and France relate is somehow different from how the euro coin and your wallet relate to each other, the latter being an example of (spatial) containment but not structural parthood, whereas in other cases the spatial containment of regions of space and the structural parthood of the objects occupying those regions do coincide, e.g., your heart in your body. Or consider representing that Alto Adige/Südtirol is a border province of Italy (bordering Austria), where we have to handle both the notion of administrative entities and connecting geographical regions. That is, handling regions and 'things' that occupy those regions (mereotopology).
Being more precise about how things relate provides nice inferences. Take, e.g., NTPLI as 'non-tangential proper located in' (a part is located in the whole but not at the boundary of it) and $EnclosedCountry \equiv Country \sqcap \exists NTPLI.Country$, with the following instances in our knowledge base: $NTPLI(Lesotho, SouthAfrica)$, $Country(Lesotho)$, and $Country(SouthAfrica)$. Then the reasoner correctly deduces $EnclosedCountry(Lesotho)$, whereas with a mere 'part-of' we would not have been able to obtain this result.
Besides these examples, there are actual system requirements for, among others, annotating and querying multimedia documents and cartographic maps, such as annotating a photo of a beach where the area of the photo that depicts the sand touches the area that depicts the seawater so that, together with the knowledge that Varadero is a tangential proper part of Cuba, the semantically enhanced system can infer possible locations where the photo has been taken, or, vv., it can propose that the photo may depict a beach scene.
But how to cater for such things?
Let me summarise the three main basic problems that have to be resolved first:
1. There is a lack of oversight on the plethora of part-whole relations, which include real parthood (mereology), parts with their locations (mereotopology), and other part-whole relations (from meronymy);
2. The challenge to figure out which one to use when;
3. The underspecified representation and reasoning consequences when one has to put up with less expressive languages for which technological infrastructure exists.
We propose to solve that in the following way, which is described in detail in [7] that recently got accepted at the 9th Extended Semantic Web Conference (ESWC’12).
The short answer for the reader who is not interested in all the theory, design, and evaluation, but just wants to model quickly: the OntoPartS tool guides you to choose the most appropriate relation and saves the selection into your OWL file.
Now for a slightly longer answer. First, we extend the taxonomy of part-whole relations of [5] with the novel addition of a taxonomy of formally defined mereotopological relations, which is driven by the KGEMT mereotopological theory of Varzi [8], resulting in a taxonomy of 23 part-whole relations—mereological, mereotopological, and meronymic ones—therewith ensuring a solid ontological and logic-based foundation.
Second, some things have to be simplified from the KGEMT theory to make it implementable in OWL, and we describe the design rationale and trade-offs so that OntoPartS can load OWL/OWL2-formalised ontologies, and, if desired, modify the OWL file with the chosen relation. Which OWL species is best suited obviously depends on your individual requirements, but from a representation & reasoning and mereotopology viewpoint, OWL 2 DL and OWL 2 RL seem to fit better than the other ones. (Note: there are papers on DL and representing spatial relations and on DL and parthood, and alternative representation choices are discussed in the paper, yet, as far as we are aware of, none deals with mereotopological relations in OWL or, more generally, in DL.)
Third, there is the ‘how to select’ from the 23 relations. To enable a quick selection of the appropriate relation, we avail of a simplified OWL-ized DOLCE ontology—well, just the taxonomy of categories—for the domain and range restrictions imposed on the part-whole relations and with that, we can let the user take shortcuts compared to a lengthy decision procedure. In this way, we reduced the selection procedure to 0-4 options based on just 2-3 inputs. All of this has been structured neatly in implementation-independent activity diagrams, and subsequently has been implemented; see also the demos, the tool, and the OWL version of the taxonomy of the 23 relations.
Last, we have tested OntoPartS with modellers in controlled experiments and it was shown to improve efficiency and accuracy in modeling of part-whole relations.
As mentioned, further details can be found in [7], Representing mereotopological relations in OWL ontologies with OntoPartS, which I co-authored with Francis Fernández-Reyes, with the Instituto Superior Politécnico “José Antonio Echeverría” (CUJAE), and Annette Morales-González, with the Advanced Technologies Application Center (CENATAV), both located in Cuba (the example on semantic annotation of multimedia with spatial relations comes straight from the image processing research being done at CENATAV). A tidbit of non-scientific information: the first version of the OntoPartS tool was developed as part of the mini-project that Francis, Annette (and Alexis, who is into fish fulltime now) had chosen to carry out for the ontology engineering course I taught at the University of Havana in 2010 (mentioned earlier here and here). For the paper, we added some more theory, minor refinements to the tool, and a user evaluation with several CUJAE and UKZN students and a few FUB colleagues (thanks again for their cooperation and interest). We’ve started work on additional features, so if you have any particular request, drop me a line.
References
1. Mejino, J.L.V., Agoncillo, A.V., Rickard, K.L., Rosse, C.: Representing complexity in part-whole relationships within the foundational model of anatomy. In: Proc. of the AMIA Fall Symposium. pp. 450–454 (2003)
2. http://www.opengalen.org/tutorials/crm/tutorial9.html up to http://www.opengalen.org/tutorials/crm/tutorial16.html/.
3. Winston, M., Chaffin, R., Herrmann, D.: A taxonomy of part-whole relations. Cognitive Science 11(4), 417–444 (1987)
4. Odell, J.: Advanced Object-Oriented Analysis & Design using UML. Cambridge: Cambridge University Press (1998)
5. Keet, C.M., Artale, A.: Representing and reasoning over a taxonomy of part-whole relations. Applied Ontology 3(1-2), 91–110 (2008)
6. Keet, C.M.: Part-whole relations in object-role models. In: Proc. of ORM’06, OTM Workshops 2006. LNCS, vol. 4278, pp. 1116–1127. Springer (2006)
7. Keet, C.M., Fernández Reyes, F.C., Morales-González, A.: Representing mereotopological relations in OWL ontologies with OntoPartS. In Simperl, et al., eds.: Proc. of ESWC’12. LNCS, Springer (2012) 27-31 May 2012, Heraklion, Greece.
8. Varzi, A.: Handbook of Spatial Logics, chap. Spatial reasoning and ontology: parts, wholes, and locations, pp. 945–1038. Berlin Heidelberg: Springer Verlag (2007)
# 72010 SemWebTech lecture 6: Parts and temporal aspects
The previous three lectures covered the core topics in ontology engineering. There are many ontology engineering topics that zoom in on one specific aspect of the whole endeavour, such as modularization, the semantic desktop, ontology integration, combining data mining and clustering with ontologies, and controlled natural language interfaces to OWL. In the next two lectures on Dec 1 and Dec 14, we will look at three such advanced topics in modelling and language and tool development, being the (ever recurring) issues with part-whole relations, temporalizations and its workarounds, and languages and tools for dealing with vagueness and uncertainty.
Part-whole relations
On the one hand, there is a SemWeb best practices document about part-whole relations and further confusion by OWL developers [1, 2] that was mentioned in a previous lecture. On the other hand, part-whole relations are deemed essential by the most active adopters of ontologies, i.e., bio- and medical scientists, while their full potential is yet to be discovered by, among others, manufacturing. A few obvious examples are how to represent plant or animal anatomy, geographic information data, and components of devices. And then there is the need to reason over them. When we can deduce which part of a device is broken, then only that part has to be replaced instead of the whole it is part of (saving a company money). One may want to deduce that when I have an injury in my ankle, I have an injury in my limb, but not deduce that if you have an amputation of your toe, you also have an amputation of the foot that the toe is (well, was) part of. If a toddler swallowed a Lego brick, it is spatially contained in his stomach, but one does not deduce it is structurally part of his stomach (normally it will leave the body unchanged through the usual channel). This toddler-with-Lego-brick example gives a clue why, from an ontological perspective, equation 23 in [2] is incorrect.
To shed light on part-whole relations and sort out such modelling problems, we will look first at mereology (the Ontology take on part-whole relations), and to a lesser extent meronymy (from linguistics), and subsequently structure the different terms that are perceived to have something to do with part-whole relations into a taxonomy of part-whole relations [3]. This, in turn, is to be put to use, be it with manual or software-supported guidelines to choose the most appropriate part-whole relation for the problem, and subsequently to make sure that is indeed represented correctly in an ontology. The latter can be done by availing of the so-called RBox Reasoning Service [3]. All this will not solve each modelling problem of part-whole relations, but at least provide you with a sound basis.
Temporal knowledge representation and reasoning
Compared to part-whole relations, there are fewer loud and vocal requests for including a temporal dimension in OWL, even though it is needed. For instance, you can check the annotations in the OWL files of BFO and DOLCE (or, more conveniently, search for "time" in the pdf), where they mention temporality that cannot be represented in OWL, or SNOMED CT's concepts like "Biopsy, planned" and "Concussion with loss of consciousness for less than one hour" (where the loss of consciousness still can be before or after the concussion), or a business rule like 'RentalCar must be returned before Deposit is reimbursed', the symptom HairLoss occurring during the treatment Chemotherapy, and Butterfly being a transformation of Caterpillar.
Unfortunately, there is no single (computational) solution to address all these examples at once. Thus far, it is a bit of a patchwork, with, among many aspects, the Allen’s interval algebra (qualitative temporal relations, such as before, during, etc.), Linear Temporal Logics (LTL), and Computational Tree Logics (CTL, with branching time), and a W3C Working draft of a time ontology.
If one assumes that recent advances in temporal Description Logics have the highest chance of making it into a temporal OWL (tOWL)—although there are no proof-of-concept temporal DL modelling tools or reasoners yet—then the following is 'on offer'. A very expressive (undecidable) DL language is DLRus (with the until and since operators), which has already been used for temporal conceptual data modelling [4] and for representing essential and immutable parts and wholes [5]. A much simpler language is TDL-Lite [6], which is a member of the DL-Lite family of DL languages, of which one is the basis for OWL 2 QL; but these first results are theoretical, hence no "lite tOWL" yet. It is already known that EL++ (the basis for OWL 2 EL) does not keep its nice computational properties when extended with LTL, and results for EL++ with CTL are not out yet. If you are really interested in the topic, you may want to have a look at a recent survey [7] or take a broader scope with any of the four chapters in [8] (which cover temporal KR&R, situation calculus, event calculus, and temporal action logics); several people at the KRDB Research Centre work on temporal knowledge representation & reasoning. Depending on the remaining time during the lecture, more or less about time and temporal ontologies will be covered.
References
[1] I. Horrocks, O. Kutz, and U. Sattler. The Even More Irresistible SROIQ. In Proc. of the 10th International Conference of Knowledge Representation and Reasoning (KR-2006), Lake District UK, 2006.
[2] B. Cuenca Grau, I. Horrocks, B. Motik, B. Parsia, P. Patel-Schneider, and U. Sattler. OWL 2: The next step for OWL. Journal of Web Semantics: Science, Services and Agents on the World Wide Web, 6(4):309-322, 2008
[3] Keet, C.M. and Artale, A. Representing and Reasoning over a Taxonomy of Part-Whole Relations. Applied Ontology, IOS Press, 2008, 3(1-2): 91-110.
[4] Alessandro Artale, Christine Parent, and Stefano Spaccapietra. Evolving objects in temporal information systems. Annals of Mathematics and Artificial Intelligence (AMAI), 50:5-38, 2007, Springer.
[5] Artale, A., Guarino, N., and Keet, C.M. Formalising temporal constraints on part-whole relations. 11th International Conference on Principles of Knowledge Representation and Reasoning (KR’08). Gerhard Brewka, Jerome Lang (Eds.) AAAI Press, pp 673-683. Sydney, Australia, September 16-19, 2008
[6] Alessandro Artale, Roman Kontchakov, Carsten Lutz, Frank Wolter and Michael Zakharyaschev. Temporalising Tractable Description Logics. Proc. of the 14th International Symposium on Temporal Representation and Reasoning (TIME-07), Alicante, June 2007.
[7] Carsten Lutz, Frank Wolter, and Michael Zakharyaschev. Temporal Description Logics: A Survey. In Proceedings of the Fifteenth International Symposium on Temporal Representation and Reasoning. IEEE Computer Society Press, 2008.
[8] Frank van Harmelen, Vladimir Lifschitz and Bruce Porter (Eds.). Handbook of Knowledge Representation. Elsevier, 2008, 1034p. (also available from the uni library)
Note: reference 3 is mandatory reading, 4 optional reading, 2 was mandatory and 1 recommended for an earlier lecture, and 5-8 are optional.
Lecture notes: lecture 6 – Parts and temporal issues
Course webpage
|
|
## Algebra 2 (1st Edition)
You should use a calculator for this problem; we need to focus on rounding correctly. Since this is a solitary calculation, we always round 5 up. Plugging in, we get $8.74$, which is the rounded answer.
|
|
Avoid storing the whole message when signing with SPHINCS+
Unlike pretty much every other signature scheme that I am aware of (excluding Picnic, the original SPHINCS, and SPHINCS-gravity), SPHINCS+ requires that the whole message be available during the signing operation. This is because it uses a "message randomization value" which, if I am not mistaken, is used in order to avoid multi-target attacks. From the SPHINCS+ reference implementation:
/* Compute the digest randomization value. */
gen_message_random(sig, sk_prf, optrand, m, mlen);
/* Derive the message digest and leaf index from R, PK and M. */
hash_message(mhash, &tree, &idx_leaf, sig, pk, m, mlen);
In order to bypass that requirement one could first hash the whole message and then sign the result but this means that the whole scheme will be broken if said hash function is not collision resistant (SPHINCS+ by itself only depends on the [second?] preimage resistance of the hash function).
My question is as follows: Is it possible to use "tree signing" (kind of like a Merkle tree, but for signatures) in order to avoid storing the whole message in memory and at the same time have the whole scheme depend only on the preimage resistance of the hash function?
The scheme that I have in mind would work as such (assuming that each node has exactly 2 children): \begin{align} N_{i, 0} &= S_{sk}(0\| 0 \| i \| m_i) \\ N_{i, j + 1} &= S_{sk}(0\| j + 1 \| i \| N_{2i, j} \|N_{2i+1, j} ) \\ Sig &= S_{sk}(1 \| N_{0, J} \|N_{1, J}) \end{align}
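A rough Python sketch of this construction, to make the indices concrete. Here sign() is only a placeholder for the SPHINCS+ signing primitive (it returns a hash, not a real signature), the domain-separation encoding is simplified, and power-of-two chunk counts are assumed; other sizes would need padding.

```python
import hashlib

def sign(sk, data):
    # Placeholder for S_sk; a real implementation would call SPHINCS+ signing.
    return hashlib.sha256(sk + data).digest()

def tree_sign(sk, chunks):
    # Leaves: N_{i,0} = S_sk(0 || 0 || i || m_i)
    level = [sign(sk, b'\x00\x00' + i.to_bytes(4, 'big') + m)
             for i, m in enumerate(chunks)]
    j = 0
    while len(level) > 2:
        # Inner nodes: N_{i,j+1} = S_sk(0 || j+1 || i || N_{2i,j} || N_{2i+1,j})
        j += 1
        level = [sign(sk, b'\x00' + bytes([j]) + i.to_bytes(4, 'big')
                          + level[2 * i] + level[2 * i + 1])
                 for i in range(len(level) // 2)]
    # Root: Sig = S_sk(1 || N_{0,J} || N_{1,J})
    return sign(sk, b'\x01' + level[0] + level[1])

print(tree_sign(b'sk', [b'chunk%d' % i for i in range(4)]).hex())
```

A streaming version would keep only one pending node per tree level, so memory stays logarithmic in the number of chunks even though each chunk is signed as it is read.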
• If I've understood your problem correctly; If you hash the message then you already read all of it, right? – kelalaka Aug 22 '20 at 8:10
• Doesn't this always hold true? – user83146 Aug 22 '20 at 8:18
• Your proposal uses the function $S_{sk}$; is $sk$ a secret value (if so, how does the verification process work), or is it a public value (and if so, how does that not rely on collision resistance)? – poncho Aug 22 '20 at 13:58
• sk is the private key for sphincs+. The verification process is the same but uses the public key instead. I should note that the tree nodes will be available to the one verifying. – user83146 Aug 22 '20 at 15:54
• Apologies, I forgot to tag you: @kelalaka – user83146 Aug 24 '20 at 2:08
|
|
# Finding option price using intraday data [closed]
I have the option price at a rate which is much smaller than the rate at which I have tick data for the underlying. If I have the option price at times $$t_1, t_3, t_5$$ and I have tick data at $$t_1, t_2, t_3, t_4, t_5$$, can I find the option price at $$t_2, t_4$$?
## 1 Answer
Why not? You can back out implied vol at the times where you have both the option price and the underlier price, and then use that vol to price the option at the times you do not have. (This is assuming you are talking about pricing one particular option, not using options of one strike and expiry to price options at another time, strike, and expiry.)
You could even do a linear interpolation and probably get very close.
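As a sketch of this approach (all numbers, the strike, and the quotes below are made up, and it assumes a European call under Black-Scholes with a flat rate):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    # Root-find the vol that reproduces the observed option price.
    return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-6, 5.0)

K, r = 100.0, 0.01
iv1 = implied_vol(price=4.20, S=101.0, K=K, T=0.25, r=r)  # quote at t1
iv3 = implied_vol(price=3.90, S=100.2, K=K, T=0.24, r=r)  # quote at t3
iv2 = 0.5 * (iv1 + iv3)                                   # interpolate vol to t2
print(bs_call(S=100.6, K=K, T=0.245, r=r, sigma=iv2))     # estimated price at t2
```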
• I thought implied volatility is calculated fom the option price. If I dont have the option price I cant find the implied volatility. – roller Sep 13 at 18:35
• You said you had the option prices for some of the times. Implied vols do not change rapidly. So, back out implied vols for prices you have and use those to find prices you do not have. – kurtosis Sep 13 at 22:15
• I was thinking if the stock price falls implied volatiltiy changes but I dont have that price. I could do what you are suggesting but I am trying to find the relation between stock price and implied volatility – roller Sep 13 at 22:39
• The implied vol will change more due to the change in option moneyness. So if you have some option prices, you can estimate a vol curve for that option using the implied vols for various %ITM values for your strike -- probably a constant, linear, and quadratic component will work fine. Then use that for the times you do not have option prices by looking up the implied vol for the %ITM at that time. – kurtosis Sep 13 at 23:28
|
|
## Presentation on theme: "AND RADIUS OF CURVATURE"— Presentation transcript:
RADIUS OF CURVATURE Let P be any point on the curve C. The circle having the same curvature as the curve at P, and touching the curve at P, is called the circle of curvature; it is also called the osculating circle. The centre of the circle of curvature is called the centre of curvature, and the radius of the circle of curvature is called the radius of curvature, denoted by ρ.
Note 1: If k (> 0) is the curvature of a curve at P, then the radius of curvature of the curve is ρ = 1/k. This follows from the definition of radius of curvature and the result that the curvature of a circle is the reciprocal of its radius.
Note 2: If, for an arc of a curve, ψ decreases as s increases, then dψ/ds is negative, i.e., k is negative. But the radius of a circle is non-negative, and the sign of dψ/ds merely indicates the convexity or concavity of the curve in the neighbourhood of the point. So some authors regard k as non-negative, i.e., k = dψ/ds in magnitude, take ρ = 1/|k| = ds/dψ, and discard the negative sign if the computed value is negative.
Radius of Curvature in Cartesian Form
Suppose the Cartesian equation of the curve C is given by y = f(x), and let A be a fixed point on it. Let P(x, y) be a given point on C such that arc AP = s. Then we know that
$$\frac{dy}{dx}=\tan\psi \quad (1)$$
where ψ is the angle made by the tangent to the curve C at P with the x-axis, and
$$\frac{ds}{dx}=\left[1+\left(\frac{dy}{dx}\right)^2\right]^{1/2} \quad (2)$$
Differentiating (1) w.r.t. x, we get
$$\frac{d^2y}{dx^2}=\sec^2\psi\,\frac{d\psi}{dx}=(1+\tan^2\psi)\,\frac{d\psi}{ds}\cdot\frac{ds}{dx}=\left[1+\left(\frac{dy}{dx}\right)^2\right]\cdot\frac{1}{\rho}\cdot\left[1+\left(\frac{dy}{dx}\right)^2\right]^{1/2}$$
so that, by using (1) and (2),
$$\rho=\frac{\left[1+\left(\frac{dy}{dx}\right)^2\right]^{3/2}}{\dfrac{d^2y}{dx^2}}$$
Radius of Curvature in Parametric Form
Let x = f(t) and y = g(t) be the parametric equations of a curve C, and let P(x, y) be a given point on it. Then
$$\frac{dy}{dx}=\frac{dy/dt}{dx/dt} \qquad\text{and}\qquad \frac{d^2y}{dx^2}=\frac{d}{dt}\left(\frac{dy/dt}{dx/dt}\right)\cdot\frac{dt}{dx}=\frac{\dfrac{dx}{dt}\dfrac{d^2y}{dt^2}-\dfrac{dy}{dt}\dfrac{d^2x}{dt^2}}{\left(\dfrac{dx}{dt}\right)^3}$$
Substituting these values of dy/dx and d²y/dx² into the Cartesian form of the radius of curvature of the curve y = f(x) gives
$$\rho=\frac{\left[\left(\dfrac{dx}{dt}\right)^2+\left(\dfrac{dy}{dt}\right)^2\right]^{3/2}}{\dfrac{dx}{dt}\dfrac{d^2y}{dt^2}-\dfrac{dy}{dt}\dfrac{d^2x}{dt^2}}$$
This is the radius of curvature in parametric form.
Examples
1. Find the radius of curvature at any point on the curve y = a log sec(x/a).
2. For the curve y = c cosh(x/c), show that ρ = y²/c.
3. Find the radius of curvature at (1, –1) on the curve y = x² – 3x + 1.
4. Find the radius of curvature at (a, 0) on y = x³(x – a).
5. Find the radius of curvature at x = πa/4 on y = a sec(x/a).
6. Find ρ at any point on x = a(θ + sin θ) and y = a(1 – cos θ).
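As a worked instance of Example 3 above (added for illustration): for y = x² – 3x + 1 we have dy/dx = 2x – 3 and d²y/dx² = 2, so at (1, –1), dy/dx = –1 and
$$\rho=\frac{\left[1+(-1)^2\right]^{3/2}}{2}=\frac{2\sqrt{2}}{2}=\sqrt{2}$$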
Radius of Curvature in Pedal Form
Let the polar form of the equation of a curve be r = f(θ), and let P(r, θ) be a given point on it. Let the tangent to the curve at P subtend an angle ψ with the initial line. If the angle between the radius vector OP and the tangent at P is φ, then ψ = θ + φ. Let p denote the length of the perpendicular from the pole O to the tangent at P. Then, from the figure,
$$\sin\varphi=\frac{OM}{OP}=\frac{p}{r}, \qquad\text{hence}\qquad p=r\sin\varphi \quad (1)$$
From this one obtains
$$\rho=r\,\frac{dr}{dp}$$
This is the pedal form of the radius of curvature.
Radius of Curvature in Polar Form
Let r = f(θ) be the equation of a curve in the polar form and P(r, θ) a point on it. Writing r₁ = dr/dθ and r₂ = d²r/dθ², the radius of curvature is
$$\rho=\frac{\left(r^2+r_1^2\right)^{3/2}}{r^2+2r_1^2-r\,r_2}$$
This is the formula for the radius of curvature in the polar form.
Examples for pedal and polar forms
1. Find the radius of curvature of each of the following curves: (i) r³ = 2ap² (cardioid) (ii) p² = ar
2. Find the radius of curvature of the cardioid r = a(1 + cos θ) at any point (r, θ) on it. Also prove that ρ²/r is a constant.
3. Show that for the curve rⁿ = aⁿ cos nθ the radius of curvature is aⁿ/((n + 1)rⁿ⁻¹).
4. Find the radii of curvature of the following curves: (i) r = a e^(θ cot α) (ii) r(1 + cos θ) = a
CENTER OF CURVATURE Let P(x, y) be any point on the curve and let PT be the tangent at P making an angle ψ with the positive direction of the x-axis. Let C(α, β) be the centre of curvature corresponding to P(x, y). Then the equation of the circle of curvature is
$$(X-\alpha)^2+(Y-\beta)^2=\rho^2$$
Length of chords of curvature parallel to the x-axis and y-axis
Parallel to the x-axis:
$$C_x=\frac{2\,\dfrac{dy}{dx}\left[1+\left(\dfrac{dy}{dx}\right)^2\right]}{\dfrac{d^2y}{dx^2}}$$
Parallel to the y-axis:
$$C_y=\frac{2\left[1+\left(\dfrac{dy}{dx}\right)^2\right]}{\dfrac{d^2y}{dx^2}}$$
Example: If y = a log sec(x/a), prove that the chord of curvature parallel to the y-axis is of constant length.
|
|
# Excess space surrounding highlighted text in modified soul's hl command
In Cool Text Highlighting in LaTeX, Gumbo offered, in a comment to Caramdir's answer, \hlc, a modification of soul's \hl to allow choosing the highlighting color on the fly.
\hlc, though, leaves excess space around the highlighted text, particularly before but also after.
Here is a MWE:
\documentclass[11pt]{book}
\usepackage{xcolor}
\usepackage{soul}
\newcommand{\hlc}[2][yellow]{ {\sethlcolor{#1} \hl{#2}} }
\begin{document}
In the source there is\hlc[yellow]{no space}between highlighted-surrounding text.
\end{document}
and here is the output:
Several points:
1. You had excess spaces within your macro definition, notably upon entry, upon exit, and also just prior to \hl. There are times when spaces in a macro definition have no effect: in math mode, or trailing the name of a macro, are two notable examples. In general, however, spaces in a macro definition are translated as spaces in the output. This is where your spurious spaces arose, and
2. I added \unskip and \ignorespaces to the definition, to remove the spaces that surround the \hlc invocation as well. You may not want those in your actual definition, but they show how the macro can reach outside of itself to also remove external surrounding spaces.
The MWE:
\documentclass[11pt]{book}
\usepackage{xcolor}
\usepackage{soul}
\newcommand{\hlc}[2][yellow]{\unskip{\sethlcolor{#1}\hl{#2}}\ignorespaces}
\begin{document}
In the source there is \hlc[yellow]{no space} between highlighted-surrounding text.
\end{document}
• A perfect answer: answers the question and explains why. Thanks – schremmer Jun 8 '17 at 18:52
• @schremmer (I will parenthetically add that this is the reason you will see % signs at the end of each line of a multi-line macro definition... the % eats the spurious space that would otherwise be introduced) – Steven B. Segletes Jun 8 '17 at 18:54
• Ah! I have "always" dutifully ended each line in my preamble with a % (removed it for the question here!) but had no idea why! Now I know. :-)) Thanks again. – schremmer Jun 9 '17 at 2:04
• @schremmer In general, not all preamble lines need the %, only those that are inside of macro definitions, or those in the middle of a multi-line argument where spaces are not intended. But I'm happy to see the "light bulb go on." Learn more at tex.stackexchange.com/questions/7453/… – Steven B. Segletes Jun 9 '17 at 2:39
|
|
# GAZEBO doesn't find mesh file in URDF.
In ROS2 I have created a python package called "mars_robot" which simply includes the URDF model of a robot and a "launch" file to start the simulation in Gazebo. This URDF model uses mesh files that are included with code lines like the following:
<mesh filename="package://meshes/base_link.stl" scale="0.001 0.001 0.001"/>
The URDF file is in a folder named "urdf" and the mesh files are in a folder named "meshes"; both are inside the package folder structure. Once I compile the package and ensure that the URDF and mesh files are in the corresponding folders of the package inside the "install" workspace folder, I try to run the launch file, but Gazebo is not able to load the mesh files because it doesn't find them. However, if in the lines of code where the mesh files are referenced I replace "package://" with "$(find mars_robot)/", then it works:
<mesh filename="$(find mars_robot)/meshes/base_link.stl" scale="0.001 0.001 0.001"/>
Why doesn't the code work when using "package://"?
|
|
SWRS258 September 2021
1. Features
2. Applications
3. Description
4. Functional Block Diagram
5. Revision History
6. Device Comparison
7. Terminal Configuration and Functions
8. Specifications
1. 8.1 Absolute Maximum Ratings
2. 8.2 ESD Ratings
3. 8.3 Recommended Operating Conditions
4. 8.4 Power Supply and Modules
5. 8.5 Power Consumption - Power Modes
6. 8.6 Power Consumption - Radio Modes
7. 8.7 Nonvolatile (Flash) Memory Characteristics
8. 8.8 Thermal Resistance Characteristics
9. 8.9 RF Frequency Bands
10. 8.10 Bluetooth Low Energy - Receive (RX)
11. 8.11 Bluetooth Low Energy - Transmit (TX)
12. 8.12 Zigbee - IEEE 802.15.4-2006 2.4 GHz (OQPSK DSSS1:8, 250 kbps) - RX
13. 8.13 Zigbee - IEEE 802.15.4-2006 2.4 GHz (OQPSK DSSS1:8, 250 kbps) - TX
14. 8.14 Timing and Switching Characteristics
1. 8.14.1 Reset Timing
2. 8.14.2 Wakeup Timing
3. 8.14.3 Clock Specifications
4. 8.14.4 Synchronous Serial Interface (SSI) Characteristics
5. 8.14.5 UART
15. 8.15 Peripheral Characteristics
2. 8.15.2 DAC
3. 8.15.3 Temperature and Battery Monitor
4. 8.15.4 Comparators
5. 8.15.5 Current Source
6. 8.15.6 GPIO
16. 8.16 Typical Characteristics
9. Detailed Description
10. Application, Implementation, and Layout
11. Device and Documentation Support
12. Mechanical, Packaging, and Orderable Information
#### Package Options
Refer to the PDF data sheet for device specific package drawings
• RGZ|48
## 10.2 Junction Temperature Calculation
This section shows the different techniques for calculating the junction temperature under various operating conditions. For more details, see Semiconductor and IC Package Thermal Metrics.
There are three recommended ways to derive the junction temperature from other measured temperatures:
1. From package temperature:
Equation 1. ${T}_{J}={\psi }_{\mathrm{JT}}×P+{T}_{\mathrm{case}}$
2. From board temperature:
Equation 2. ${T}_{J}={\psi }_{\mathrm{JB}}×P+{T}_{\mathrm{board}}$
3. From ambient temperature:
Equation 3. ${T}_{J}={R}_{\mathrm{\theta JA}}×P+{T}_{A}$
P is the power dissipated from the device and can be calculated by multiplying current consumption with supply voltage. Thermal resistance coefficients are found in Thermal Resistance Characteristics.
Example:
Using Equation 3, the temperature difference between ambient temperature and junction temperature is calculated. In this example, we assume a simple use case where the radio is transmitting continuously at 0 dBm output power. Let us assume the ambient temperature is 85°C and the supply voltage is 3 V. To calculate P, we need to look up the current consumption for Tx at 85°C in Figure 8-8. From the plot, we see that the current consumption is 7.8 mA. This means that P is 7.8 mA × 3 V = 23.4 mW.
The junction temperature is then calculated as:
Equation 4. ${T}_{J}={R}_{\mathrm{\theta JA}}×23.4\mathrm{mW}+{T}_{A}=0.6°C+{T}_{A}$, with ${R}_{\mathrm{\theta JA}}$ taken from the Thermal Resistance Characteristics table.
As can be seen from the example, the junction temperature is 0.6 °C higher than the ambient temperature when running continuous Tx at 85°C and, thus, well within the recommended operating conditions.
For various application use cases, current consumption for other modules may have to be added to calculate the appropriate power dissipation. For example, the MCU may be running simultaneously with the radio, peripheral modules may be enabled, and so on. Typically, the easiest way to find the peak current consumption, and thus the peak power dissipation in the device, is to measure it as described in Measuring CC13xx and CC26xx current consumption.
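As a quick sanity check of the equations above, the arithmetic can be scripted; the thermal coefficient used below is a made-up placeholder, so take the real value from Thermal Resistance Characteristics:

```python
def junction_temp(theta_c_per_w, power_w, t_ref_c):
    """T_J = theta * P + T_ref, for any of the three coefficient choices."""
    return theta_c_per_w * power_w + t_ref_c

i_tx = 7.8e-3        # A, continuous Tx at 0 dBm and 85 degC (from the text)
vdd = 3.0            # V
p = i_tx * vdd       # 0.0234 W = 23.4 mW dissipated
# 25.0 degC/W is illustrative only, not the datasheet R_thetaJA:
print(junction_temp(25.0, p, 85.0))  # ~85.6 degC
```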
|
|
# Extras: Running in Parallel
Back to Course Overview
All modern DFT codes are capable of running in parallel, provided they have been compiled to do so. While you may be familiar with a single application using several threads when you start it so that it runs faster, the parallelisation scheme used by many DFT codes allows them to run on a large number of different machines simultaneously to complete a single DFT calculation.
In the quantum espresso package, this is achieved through the use of MPI, in which a number of copies of a program are started at the same time and can then pass information among themselves, communicating over the network or faster interfaces such as InfiniBand, or running several on the same machine where they each use a single core (or possibly more than one, as each process could in principle use several threads also). So a calculation that takes, say, 10 minutes without using any parallelization would take (slightly more than) around 5 minutes using two parallel processes.
Throughout the course material, we make no mention of running in parallel. This is just for simplicity, since it's one less thing for you (and me) to worry about in the labs. The calculations we have you do in the labs and homework assignments are small enough that this isn't necessary, but if you're interested in doing more serious DFT calculations, such as for an MSc project, then you should certainly start running your calculations in parallel.
## Getting a version of espresso that can run in parallel
If you’re on one of the mt-student servers, there is a separate module for quantum-espresso compiled with parallel features enabled called espresso-mpi. To load this, you’ll also need to have the openmpi module loaded first. It is set to conflict with the espresso module, so it will generate an error if you try to load it while you have that loaded; if so you can first unload that with module unload espresso.
To load the parallel module on an mt-student server type module load gcc mkl openmpi espresso-mpi
If you have installed a VM on your laptop for this course, and have installed quantum-espresso from Ubuntu repositories (e.g. via apt), this version already has parallel features enabled.
## Running your calculation in parallel
There are several ways to start a parallel calculation, and if you're using some HPC service, you should check their documentation for their recommended approach.
To start a parallel calculation on mt-student or your own VM you can do the following:
• Use the mpirun command, which is used to start a program that has been compiled with MPI enabled communication (if you run it on a normal program it will simply start several copies of that program at the same time).
• To tell it how many processes you want to start, you give it the -np flag followed by a number, such as -np 2 to start two parallel processes.
• Then give it your program and input and output as usual.
Say for example, we have an input for a silicon calculation for pw.x called Si.in and we want to save the output in Si.out:
• For the serial (non-parallel) calculation we would write pw.x < Si.in &> Si.out
• For a parallel calculation, if we wanted to use two parallel processes, we would write mpirun -np 2 pw.x < Si.in &> Si.out.
The majority of the codes that come with the quantum espresso package can run in parallel in this manner.
You should be aware, however, that planewave DFT calculations don't scale linearly. Your calculation will get faster up to a certain point, after which adding more parallel processes will slow it down. This varies with the system and the type of calculation you're doing, but usually you'll see a speed-up for up to around 50 processes, depending on the parallelisation scheme (see below) and the system involved.
## Types of parallelisation
If we do the above and write mpirun -np 2 pw.x < Si.in &> Si.out, we accept the default strategy for parallelising the calculation. Different DFT codes use different defaults, which all have their own advantages and disadvantages. Using the default for quantum espresso is generally pretty good. The differences between the different schemes are discussed in detail in the quantum espresso documentation for pw.x but in general a planewave DFT code can be parallelised in the following ways (which can be used in combination provided this has been implemented):
• Over sets of calculations - if the calculation you have asked for involves running several similar calculations automatically, you can break up these calculations between your parallel processes. Quantum espresso offers this functionality for some types of calculations (such as for phonons) and refers to these sets as images.
• You can set how many of these are used for a parallel calculation with -nimage or -ni. If you run with 20 processors and specify -ni 2, each image will use 10 processors. The default is 1.
• Over k-points - this scheme needs very little communication between processes and so offers very good scaling, as each k-point can be treated as effectively a separate calculation whose results are added together at the end. It doesn't do as much as other schemes to reduce the memory requirements of the calculation, and if your calculation doesn't use many k-points, or you're calculating a molecule, this may be a limited approach.
• You can set how many parallel groups of k-points your calculation uses with the -npools or -nk flags in quantum espresso. The default is 1.
• Over bands - This can cut down the amount of memory used by each process but requires a bit more communication between processes.
• You can set how many groups of bands are used for parallelising your calculation with the -nband or -nb flags. The default is 1.
• Over planewaves (FFT planes) - the plane wave basis set can be distributed across parallel processes. Quantum espresso does this very efficiently and this is its default parallelisation scheme. It will distribute planes of 3D FFT points to the parallel processes.
• This is always turned on. Whatever the number of parallel processes that are left over after you specify other options will be used in this manner.
• Over task groups - if you have more parallel processes than FFT grid planes, you can redistribute the FFTs into task groups that allows for more efficient processing.
• You can set the number of task groups with the -ntg or -nt flags. The default is 1.
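As a concrete illustration of combining these flags (the process counts here are arbitrary, not a recommendation): mpirun -np 16 pw.x -nk 4 < Si.in &> Si.out starts 16 processes split into 4 k-point pools of 4 processes each, and within each pool the 4 processes share the planewave (FFT) parallelisation described above.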
There is also an overview of the options for the various quantum espresso packages at https://www.quantum-espresso.org/Doc/user_guide/node18.html if you’d like more detail.
|
|
Stuck solving a logarithmic calculation
I'm preparing for my further studies (last year of high school, preparing so I can try and join the academy that I want), and just solving problems. Got stuck on this one:
What is the value of: $$log_4log_3log_28 + log_{\sqrt{7}+1}(8+2\sqrt{7})+log_{\sqrt[3]{7}}7\sqrt{7}$$
This is what I got so far:
$$log_4log_3log_28 = log_4log_33=log_41=0$$
$$log_{\sqrt[3]{7}}7\sqrt{7}=log_{7^{1\over3}}(7*7^{1\over2})=3log_77^{3\over2}=3*{3\over2}log_77={9\over2}$$
So
$$log_4log_3log_28 + log_{\sqrt{7}+1}(8+2\sqrt{7})+log_{\sqrt[3]{7}}7\sqrt{7} \\= 0 + log_{\sqrt{7}+1}(8+2\sqrt{7}) + {9\over2}\\={9\over2}+log_{\sqrt{7}+1}(8+2\sqrt{7})$$
I'm lost at what to do with $$log_{\sqrt{7}+1}(8+2\sqrt{7})$$
Hint: $$8+2\sqrt{7}=(\sqrt{7}+1)^2$$
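Spelling the last step out: with the hint, $$log_{\sqrt{7}+1}(8+2\sqrt{7})=log_{\sqrt{7}+1}(\sqrt{7}+1)^2=2$$ so the whole expression equals $${9\over2}+2={13\over2}$$.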
|
|
# Numerical Methods/Errors Introduction
When using numerical methods or algorithms and computing with finite precision, errors of approximation or rounding and truncation are introduced. It is important to have a notion of their nature and their order. A newly developed method is worthless without an error analysis. Neither does it make sense to use methods which introduce errors with magnitudes larger than the effects to be measured or simulated. On the other hand, using a method with very high accuracy might be computationally too expensive to justify the gain in accuracy.
## Accuracy and Precision
Measurements and calculations can be characterized with regard to their accuracy and precision. Accuracy refers to how closely a value agrees with the true value. Precision refers to how closely values agree with each other. The following figures illustrate the difference between accuracy and precision. In the first figure, the given values (black dots) are more accurate; whereas in the second figure, the given values are more precise. The term error represents the imprecision and inaccuracy of a numerical computation.
Accuracy
Precision
## Absolute Error
Absolute error is the magnitude of the difference between the true value ${\displaystyle x}$ and the approximate value ${\displaystyle x_{a}}$. The error between the two values is defined as
${\displaystyle \epsilon _{abs}=\left|x-x_{a}\right|\quad ,}$
where ${\displaystyle x}$ denotes the exact value and ${\displaystyle x_{a}}$ denotes the approximation.
## Relative Error
The relative error of ${\displaystyle x_{a}}$ is the absolute error relative to the exact value. Look at it this way: if your measurement has an error of ±1 inch, this seems to be a huge error when you try to measure something which is 3 in. long. However, when measuring distances on the order of miles, this error is mostly negligible. The definition of the relative error is
${\displaystyle \epsilon _{rel}={\frac {\left|x_{a}-x\right|}{\left|x\right|}}\quad .}$
## Sources of Error
In a numerical computation, error may arise because of the following reasons:
• Truncation error
• Roundoff error
### Truncation Error
The word 'Truncate' means 'to shorten'. Truncation error refers to an error in a method, which occurs because some number/series of steps (finite or infinite) is truncated (shortened) to a fewer number. Such errors are essentially algorithmic errors and we can predict the extent of the error that will occur in the method. For instance, if we approximate the sine function by the first two non-zero terms of its Taylor series, as in ${\displaystyle \sin(x)\approx x-{\tfrac {1}{6}}x^{3}}$ for small ${\displaystyle x}$, the resulting error is a truncation error. It is present even with infinite-precision arithmetic, because it is caused by truncation of the infinite Taylor series to form the algorithm.
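The size of this truncation error can be checked directly (a minimal Python illustration):

```python
import math

# Truncation error of the two-term Taylor approximation sin(x) ~ x - x^3/6
x = 0.5
approx = x - x**3 / 6
print(abs(math.sin(x) - approx))  # ~2.6e-04: the truncated tail of the series
```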
### Roundoff Error
Roundoff error occurs because of the computing device's inability to represent certain numbers exactly. Such numbers need to be rounded off to some nearby approximation, which depends on the word size used to represent numbers on the device.
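A classic illustration of roundoff in binary floating point (Python):

```python
# 0.1 and 0.2 have no exact binary floating-point representation,
# so their sum differs from 0.3 by pure roundoff.
print(0.1 + 0.2 == 0.3)        # False
print(abs((0.1 + 0.2) - 0.3))  # ~5.6e-17
```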
|
|
# AFOQT Math FREE Sample Practice Questions
Preparing for the AFOQT Math test? To succeed on the AFOQT Math test, you need to practice as many real AFOQT Math questions as possible. There’s nothing like working on AFOQT Math sample questions to measure your exam readiness and put you more at ease when taking the AFOQT Math test. The sample math questions you’ll find here are brief samples designed to give you the insights you need to be as prepared as possible for your AFOQT Math test.
Check out our sample AFOQT Math practice questions to find out what areas you need to practice more before taking the AFOQT Math test!
Start preparing for the 2022 AFOQT Math test with our free sample practice questions. Also, make sure to follow some of the related links at the bottom of this post to get a better idea of what kind of mathematics questions you need to practice.
## 10 Sample AFOQT Math Practice Questions

1- A card is drawn at random from a standard $$52$$-card deck. What is the probability that the card is of Hearts? (The deck includes $$13$$ of each suit: clubs, diamonds, hearts, and spades.)

A. $$\frac{1}{3}$$
B. $$\frac{1}{6}$$
C. $$\frac{1}{52}$$
D. $$\frac{1}{4}$$

2- $$(5x + 5)(2x + 6) =$$ ?

A. $$5x + 6$$
B. $$10x^2 + 40x + 30$$
C. $$5x + 5x + 30$$
D. $$5x^2 + 5$$

3- The mean of $$50$$ test scores was calculated as $$88$$. But it turned out that one of the scores was misread as $$94$$ when it was actually $$69$$. What is the correct mean?

A. $$85$$
B. $$87$$
C. $$87.5$$
D. $$88.5$$

4- If $$5(a – 6) = 22$$, what is the value of $$a$$?

A. $$2.4$$
B. $$10.4$$
C. $$7$$
D. $$11$$

5- If $$3^{24}=3^8× 3^x$$, what is the value of $$x$$?

A. $$2$$
B. $$1.5$$
C. $$3$$
D. $$16$$

6- Jason is $$9$$ miles ahead of Joe. Jason is running at $$5.5$$ miles per hour and Joe at $$7$$ miles per hour. How long does it take Joe to catch Jason?

A. $$3$$ hours
B. $$4$$ hours
C. $$6$$ hours
D. $$8$$ hours

7- $$55$$ students took an exam and $$11$$ of them failed. What percent of the students passed the exam?

A. $$40\%$$
B. $$60\%$$
C. $$80\%$$
D. $$20\%$$

8- Factor this expression: $$x^2 + 5x − 6$$

A. $$x^2(5 + 6)$$
B. $$x(x + 5 – 6)$$
C. $$(x + 6)(x – 1)$$
D. $$(x + 6)(x – 6)$$

9- Find the slope of the line running through the points $$(6, 7)$$ and $$(5, 3)$$.

A. $$\frac{1}{4}$$
B. $$4$$
C. $$-4$$
D. $$-\frac{1}{4}$$

10- What is the missing term in the given sequence? $$2$$, $$3$$, $$5$$, $$8$$, $$12$$, $$17$$, $$23$$, _, $$38$$

A. $$30$$
B. $$28$$
C. $$27$$
D. $$25$$

## Answers:

1- D
The probability of choosing a Hearts is $$\frac{13}{52}=\frac{1}{4}$$

2- B
Use the FOIL (First, Out, In, Last) method: $$(5x + 5)(2x + 6) = 10x^2 + 30x + 10x + 30 = 10x^2 + 40x + 30$$

3- C
Mean $$=\frac{sum \space of \space terms}{number \space of \space terms} ⇒ 88 = \frac{sum}{50} ⇒$$ sum $$= 88 × 50 = 4400$$. The difference between $$94$$ and $$69$$ is $$25$$, so $$25$$ should be subtracted from the sum: $$4400 – 25 = 4375$$, and the corrected mean is $$\frac{4375}{50}= 87.5$$

4- B
$$5(a–6)=22 ⇒ 5a-30=22 ⇒ 5a=52 ⇒ a=\frac{52}{5}= 10.4$$

5- D
Use the exponent multiplication rule $$x^a \cdot x^b = x^{a + b}$$. Then $$3^{24}=3^8× 3^x=3^{8+x}$$, so $$24 = 8 + x ⇒ x=16$$

6- C
The distance between Jason and Joe is $$9$$ miles, and it shrinks by $$7 - 5.5 = 1.5$$ miles every hour: $$9 \div 1.5 = 6$$ hours.

7- C
The failing rate is $$\frac{11}{55} × 100\%=20\%$$, so $$80$$ percent of the students passed the exam.

8- C
To factor $$x^2 + 5x – 6$$, we need two numbers whose sum is $$5$$ and whose product is $$-6$$. Those numbers are $$6$$ and $$-1$$. Then: $$x^2+5x-6=(x+6)(x-1)$$

9- B
Slope of a line: $$\frac{y_2- y_1}{x_2 – x_1}=\frac{3- 7}{5 – 6}=\frac{-4}{-1}= 4$$

10- A
The consecutive differences are $$1, 2, 3, 4, 5, 6$$, so the next difference is $$7$$ and the missing term is $$23 + 7 = 30$$
|
|
1
### WB JEE 2010
MCQ (Single Correct Answer)
If the matrices $$A = \left[ {\matrix{ 2 & 1 & 3 \cr 4 & 1 & 0 \cr } } \right]$$ and $$B = \left[ {\matrix{ 1 & { - 1} \cr 0 & 2 \cr 5 & 0 \cr } } \right]$$, then AB will be
A
$$\left[ {\matrix{ {17} & 0 \cr 4 & { - 2} \cr } } \right]$$
B
$$\left[ {\matrix{ 4 & 0 \cr 0 & 4 \cr } } \right]$$
C
$$\left[ {\matrix{ {17} & 4 \cr 0 & { - 2} \cr } } \right]$$
D
$$\left[ {\matrix{ 0 & 0 \cr 0 & 0 \cr } } \right]$$
## Explanation
$$AB = \left( {\matrix{ 2 & 1 & 3 \cr 4 & 1 & 0 \cr } } \right)\left( {\matrix{ 1 & { - 1} \cr 0 & 2 \cr 5 & 0 \cr } } \right) = \left( {\matrix{ {17} & 0 \cr 4 & { - 2} \cr } } \right)$$
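For a quick numerical check of this product (numpy, outside the exam setting):

```python
import numpy as np

A = np.array([[2, 1, 3],
              [4, 1, 0]])
B = np.array([[1, -1],
              [0,  2],
              [5,  0]])
print(A @ B)  # [[17  0]
              #  [ 4 -2]]
```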
2
### WB JEE 2009
MCQ (Single Correct Answer)
If A and B are square matrices of the same order and AB = 3I, then A$$^{-1}$$ is equal to
A
3B
B
$${1 \over 3}$$B
C
3B$$^{-1}$$
D
$${1 \over 3}$$B$$^{-1}$$
## Explanation
Given AB = 3I
$${A^{ - 1}}(AB) = {A^{ - 1}}(3I)$$ (pre-multiplication by A$$^{-1}$$)
$$\Rightarrow {A^{ - 1}}AB = 3{A^{ - 1}}I$$
$$\Rightarrow IB = 3{A^{ - 1}}$$ ($$\because$$ $${A^{ - 1}}A = I$$)
$$\Rightarrow B = 3{A^{ - 1}} \Rightarrow {A^{ - 1}} = {1 \over 3}B$$
3
### WB JEE 2009
MCQ (Single Correct Answer)
If A$$^2$$ $$-$$ A + I = 0, then the inverse of the matrix A is
A
A $$-$$ I
B
I $$-$$ A
C
A + I
D
A
## Explanation
A(A $$-$$ I) = $$-$$I
$$\Rightarrow$$ A(I $$-$$ A) = I $$\Rightarrow$$ A$$-$$1 = I $$-$$ A.
4
### WB JEE 2009
MCQ (Single Correct Answer)
If A is a square matrix, then
A
A + A$$^T$$ is symmetric
B
AA$$^T$$ is skew-symmetric
C
A$$^T$$ + A is skew-symmetric
D
A$$^T$$A is skew-symmetric
## Explanation
Let B = A + A$$^T$$
$$\therefore$$ B$$^T$$ = (A + A$$^T$$)$$^T$$ = A$$^T$$ + (A$$^T$$)$$^T$$ = A$$^T$$ + A ($$\because$$ (A + B)$$^T$$ = A$$^T$$ + B$$^T$$ and (A$$^T$$)$$^T$$ = A)
= A + A$$^T$$ = B (a matrix equal to its own transpose is symmetric)
$$\therefore$$ A + A$$^T$$ is symmetric.
|
Solving partial difference equation
I am trying to solve the following partial difference equation:
$$A_k^{n+1}=(k+1)A_{k+1}^n+(n+2-k)A_{k-1}^n$$
with initial condition:
$$\begin{cases} A_0^0&=1\\ A_1^0&=1 \end{cases}$$
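Independently of the generating-function route, the recurrence itself is easy to tabulate; here is a small Python sketch (treating out-of-range terms as zero, consistent with the initial condition):

```python
def pde_rows(n_max):
    """Tabulate A_k^n directly from the recurrence; out-of-range terms are 0."""
    row = [1, 1]                                    # A_0^0 = A_1^0 = 1
    for n in range(n_max + 1):
        yield row
        get = lambda k: row[k] if 0 <= k < len(row) else 0
        row = [(k + 1) * get(k + 1) + (n + 2 - k) * get(k - 1)
               for k in range(n + 3)]
        while row and row[-1] == 0:                 # drop padding zeros
            row.pop()

for r in pde_rows(4):
    print(r)  # [1, 1]  [1, 1]  [1, 2, 1]  [2, 5, 4, 1]  [5, 16, 18, 8, 1]
```

The first column $A_0^n$ comes out as $1, 1, 1, 2, 5, 16, \dots$ (cf. the tangent/secant "zigzag" numbers mentioned in the comments).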
I have tried the generating function method; the details are given in the following article on Voofie:
Reducing a partial difference equation into a partial differential equation and solving for the generating function using method of characteristics
The generating function I found is:
$$A(x,y)=\left(\sec \left(y\sqrt{1-x^2}+\sin ^{-1}x\right)+\tan \left(y\sqrt{1-x^2}+\sin ^{-1}x\right)\right)\sqrt{1-x^2}$$
where $A(x,y)$ is defined by:
$$A(x,y)=\sum _{n=0}^{\infty } \sum _{k=0}^{\infty } \frac{A_k^n x^k y^n}{n!}$$
Though I have found the generating function, I still can't derive an explicit form for $A_k^n$.
Can anyone help me with it? And is there any other approach other than the generating function method?
P.S. If you would like to know where the partial difference equation comes from, please refer to this article:
Finding nth derivative of the function sec x + tan x and partial difference equation
Ross, it's another portion of your zigzags. – Wadim Zudilin Jul 27 '10 at 12:18
You are right, Zudilin. Are you interested in finding a closed-form solution? Btw, the zigzags are just the tangent and secant numbers, so they're not really mine. – Ross Tang Jul 27 '10 at 13:57
As I'm sure you've noticed, the two recurrences with $n \equiv k \mod 2$ and $n \equiv k+1 \mod 2$ don't interact with each other, so you have two triangular arrays of numbers here.
When $n \equiv k \mod 2$, you are looking at A008971. When $n \equiv k+1 \mod 2$, I don't think this sequence has been seen before.
Thank you for your answer. I read the Wikipedia page you mentioned. There is indeed a closed form for the Eulerian numbers: $A(n,m)=\sum_{k=0}^{m}(-1)^k \binom{n+1}{k} (m+1-k)^n.$ I don't know whether I have misread it or not. Thanks! – Ross Tang Jul 27 '10 at 13:47
But David's comment implies that you can decouple your recurrence relation into two separate ones: One for the $n \equiv k \bmod{2}$ case and one for the $n \equiv k+1 \bmod{2}$ case. If you do that, rearranging indices and plowing through some algebra, the first case turns into the recurrence $B^n_k = (2k-n+1)B^{n-1}_k + (2n-2k+1)B^{n-1}_{k-1}$, $B^0_0 = 1$, and the second case turns into the recurrence $C^n_k = (2k-n+2)C^{n-1}_k + (2n-2k)C^{n-1}_{k-1}$, $C^0_0 = 1$. Maybe these aren't much better, but they are at least "first order" in both $n$ and $k$ now. – Mike Spivey Oct 4 '10 at 4:22
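As a quick numeric companion (my own illustrative tabulation, not part of the original thread), the recurrence can be tabulated directly, treating $A_k^n$ as $0$ outside the stated initial values, and the rows compared against the decoupled recurrences above:

```python
# Tabulate A_k^{n+1} = (k+1) A_{k+1}^n + (n+2-k) A_{k-1}^n
# with A_0^0 = A_1^0 = 1 and zero elsewhere.
N, K = 8, 10                           # how far to tabulate in n and k
A = [[0] * (K + 2) for _ in range(N + 1)]
A[0][0] = A[0][1] = 1                  # initial condition

for n in range(N):
    for k in range(K + 1):
        left = A[n][k - 1] if k > 0 else 0
        A[n + 1][k] = (k + 1) * A[n][k + 1] + (n + 2 - k) * left

for n in range(N + 1):
    print(n, A[n][:n + 2])             # row n has support up to k = n + 1
```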
### Course
19S1 D. Anselmi
Theories of gravitation
Program
PDF
### Book
D. Anselmi
From Physics To Life
A journey to the infinitesimally small and back
In English and Italian
Available on Amazon:
US: book | ebook (in EN)
IT: book | ebook (in IT)
## Causality
Talk given at the Department of Physics and Astronomy of Southampton University, UK, on Nov 16th, 2018
I introduce the concept of fake particle and study how it is used to formulate a consistent theory of quantum gravity. Fakeons arise from a new quantization prescription, alternative to the Feynman one, for the poles of higher-derivative theories, which avoids the problem of ghosts. The fake particles mediate interactions and simulate true particles in many situations. Nevertheless, they are not asymptotic states and cannot be detected directly. The Wick rotation and the S matrix are regionwise analytic and the amplitudes can be calculated in all regions starting from the Euclidean one by means of an unambiguous, but nonanalytic operation. By reconciling renormalizability and unitarity in higher-derivative theories, the models containing both true and fake particles are good candidates to explain quantum gravity. In pole position is the unique theory that is strictly renormalizable. One of the major physical predictions due to the fakeons is the violation of microcausality. I discuss the classical limit of the theory and the acausal corrections to the Einstein equations.
PDF
Talk given at the conference
Progress and Visions in Quantum Theory in View of Gravity: Bridging foundations of physics and mathematics
Max Planck Institute for Mathematics in the Sciences, Leipzig
October 04, 2018
I claim that the best correspondence principle for quantum field theory and quantum gravity is made of unitarity, locality and proper renormalizability (which is a refinement of strict renormalizability), combined with fundamental local symmetries and the requirement of having a finite number of fields. Quantum gravity is identified in an essentially unique way. It emerges from a new quantization prescription, which introduces the notion of fake particle, or “fakeon”, and uses it to resolve the long-standing problem of the higher-derivative ghosts. I discuss the major physical prediction of the theory, which is the violation of causality at small distances. The correspondence principle identifies the gauge interactions uniquely in form, but does not predict the gauge group. On the other hand, the matter sector remains almost completely unrestricted.
PDF
We discuss the fate of the correspondence principle beyond quantum mechanics, specifically in quantum field theory and quantum gravity, in connection with the intrinsic limitations of the human ability to observe the external world. We conclude that the best correspondence principle is made of unitarity, locality, proper renormalizability (a refinement of strict renormalizability), combined with fundamental local symmetries and the requirement of having a finite number of fields. Quantum gravity is identified in an essentially unique way. The gauge interactions are uniquely identified in form. Instead, the matter sector remains basically unrestricted. The major prediction is the violation of causality at small distances.
PDF
Philpapers ANSTCP-2
hal-01900207
We elaborate on the idea of fake particle and study its physical consequences. When a theory contains fakeons, the true classical limit is determined by the quantization and a subsequent process of “classicization”. One of the major predictions due to the fake particles is the violation of microcausality, which survives the classical limit. This fact gives hope to detect the violation experimentally. A fakeon of spin 2, together with a scalar field, is able to make quantum gravity renormalizable while preserving unitarity. We claim that the theory of quantum gravity emerging from this construction is the right one. By means of the classicization, we work out the corrections to the field equations of general relativity. We show that the finalized equations have, in simple terms, the form $\langle F\rangle =ma$, where $\langle F\rangle$ is an average that includes a little bit of “future”.
PDF
Class. and Quantum Grav. 36 (2019) 065010 | DOI: 10.1088/1361-6382/ab04c8
arXiv: 1809.05037 [hep-th]
We investigate the properties of fakeons in quantum gravity at one loop. The theory is described by a graviton multiplet, which contains the fluctuation $h_{\mu \nu }$ of the metric, a massive scalar $\phi$ and the spin-2 fakeon $\chi _{\mu \nu }$. The fields $\phi$ and $\chi _{\mu \nu }$ are introduced explicitly at the level of the Lagrangian by means of standard procedures. We consider two options, where $\phi$ is quantized as a physical particle or a fakeon, and compute the absorptive part of the self-energy of the graviton multiplet. The width of $\chi _{\mu \nu }$, which is negative, shows that the theory predicts the violation of causality at energies larger than the fakeon mass. We address this issue and compare the results with those of the Stelle theory, where $\chi _{\mu \nu }$ is a ghost instead of a fakeon.
PDF
J. High Energy Phys. 11 (2018) 21 | DOI: 10.1007/JHEP11(2018)021
arXiv: 1806.03605 [hep-th]
We prove the renormalizability of various theories of classical gravity coupled with interacting quantum fields. The models contain vertices with dimensionality greater than four, a finite number of matter operators and a finite or reduced number of independent couplings. An interesting class of models is obtained from ordinary power-counting renormalizable theories, letting the couplings depend on the scalar curvature R of spacetime. The divergences are removed without introducing higher-derivative kinetic terms in the gravitational sector. The metric tensor has a non-trivial running, even if it is not quantized. The results are proved applying a certain map that converts classical instabilities, due to higher derivatives, into classical violations of causality, whose effects become observable at sufficiently high energies. We study acausal Einstein-Yang-Mills theory with an R-dependent gauge coupling in detail. We derive all-order formulas for the beta functions of the dimensionality-six gravitational vertices induced by renormalization. Such beta functions are related to the trace-anomaly coefficients of the matter subsector.
PDF
Class. Quant. Grav. 24 (2007) 1927 | DOI: 10.1088/0264-9381/24/8/003
arXiv: hep-th/0611131
I prove that classical gravity coupled with quantized matter can be renormalized with a finite number of independent couplings, plus field redefinitions, without introducing higher-derivative kinetic terms in the gravitational sector, but adding vertices that couple the matter stress-tensor with the Ricci tensor. The theory is called “acausal gravity”, because it predicts the violation of causality at high energies. Renormalizability is proved by means of a map M that relates acausal gravity with higher-derivative gravity. The causality violations are governed by two parameters, a and b, that are mapped by M into higher-derivative couplings. At the tree level causal prescriptions exist, but they are spoiled by the one-loop corrections. Some ideas are inspired by the usual treatments of the Abraham-Lorentz force in classical electrodynamics.
PDF
JHEP 0701 (2007) 062 | DOI: 10.1088/1126-6708/2007/01/062
arXiv: hep-th/0605205
Quantum Gravity
### Book
14B1 D. Anselmi
Renormalization
Course on renormalization, taught in Pisa in 2015. (More chapters will be added later.)
Last update: May 9th 2015, 230 pages
Available on Amazon:
Contents:
Preface
1. Functional integral
2. Renormalization
3. Renormalization group
4. Gauge symmetry
5. Canonical formalism
6. Quantum electrodynamics
7. Non-Abelian gauge field theories
Notation and useful formulas
References
PDF
In short, the Bayesian paradigm is a statistical/probabilistic paradigm in which prior knowledge, modelled by a probability distribution, is updated each time a new observation, whose uncertainty is modelled by another probability distribution, is recorded. That is why this approach is called the Bayesian approach, and in this chapter we would like to discuss this framework for inference. Let's assume a model where data x are generated from a probability distribution depending on an unknown parameter θ, and let's also assume that we have prior knowledge about θ that can be expressed as a probability distribution p(θ). (Here p(.) denotes either a probability, a probability density or a probability distribution, depending on the context.) The choice of prior distribution is subjective: different people might use different prior distributions. Nevertheless, once the prior distribution is determined, one uses similar methods to attack both problems: we then use Bayes' rule to make inference about the unobserved random variable. Inference about a target population based on sample data relies on the assumption that the sample is representative, and you might then want to estimate θ from such a sample.

Two families of computational methods exist. On the one hand, the sampling process of MCMC approaches is pretty heavy but has no bias, so these methods are preferred when accurate results are expected, without regard to the time it takes; for most of the example problems, the Bayesian inference handbook uses this modern computational approach, known as Markov chain Monte Carlo (MCMC). On the other hand, approximation methods assume a parametrised family of distributions: if we assume a pretty free model (a complex family), the bias is much lower but the optimisation is harder (if not intractable). Once the family has been defined, one major question remains: how do we find, among this family, the best approximation of a given probability distribution (explicitly defined up to its normalisation factor)? Several classical optimisation techniques can be used, such as gradient descent or coordinate descent, which will lead, in practice, to a local optimum. For example, Gaussian mixture models (for classification) and Latent Dirichlet Allocation (for topic modelling) are both graphical models that require solving such a problem when fitting the data. As a side fact for the interested reader, the KL divergence is the cross-entropy minus the entropy and has a nice interpretation in information theory.

The idea of sampling methods is the following. The Gibbs sampling method is based on the assumption that, even if the joint probability is intractable, the conditional distribution of a single dimension given the others can be computed. First, we randomly choose an integer d among the D dimensions of X_n. Then we sample a new value for that dimension according to the conditional distribution of the d-th dimension given all the other dimensions, which are kept fixed. In order to target the right distribution, the Metropolis-Hastings and Gibbs sampling algorithms both use a particular property of Markov chains, reversibility: when the local balance condition holds, γ is a stationary distribution (the only one if the Markov chain is irreducible). Finally, in order to have (almost) independent samples, we can't keep all the successive states of the sequence after the burn-in time.
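As a rough illustration of the Gibbs step just described, here is a minimal sketch for a bivariate Gaussian with correlation rho, a case where both conditionals are known Gaussians. It uses a systematic scan over the two coordinates rather than picking d at random, and all the numbers are assumptions of mine:

```python
import numpy as np

rng = np.random.default_rng(42)
rho, n_samples = 0.8, 5000
x = np.zeros(2)                      # arbitrary starting state
samples = []

for _ in range(n_samples):
    # sample each coordinate from its conditional given the other one
    x[0] = rng.normal(rho * x[1], np.sqrt(1 - rho**2))
    x[1] = rng.normal(rho * x[0], np.sqrt(1 - rho**2))
    samples.append(x.copy())

samples = np.array(samples)
print(np.corrcoef(samples[1000:].T))  # empirical correlation ~ 0.8 after burn-in
```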
The Bayes theorem involves three terms, and the first two can be expressed easily as they are part of the assumed model (in many situations, the prior and the likelihood are explicitly known). In order to better understand the optimisation process of variational inference, let's take an example and go back to the specific case of the Bayesian inference problem, where we assume a given posterior: if we want to approximate this posterior using variational inference, we have to solve the corresponding optimisation problem (assuming the parametrised family has been defined and taking the KL divergence as the error measure). Notice that, even if it has been omitted in the notation, all the densities f_j are parametrised. If p and q are two distributions, the KL divergence between them is defined in the usual way; from that definition, one pretty easily obtains the equality used in our minimisation problem.

Regarding MCMC in practice: the first simulated states are not usable as samples, and we call this phase, required to reach stationarity, the burn-in time. Notice that, in practice, it is pretty difficult to know how long this burn-in time has to be. The same idea has been used in practice: during the search for Air France 447, from 2009 to 2011, knowledge about the black-box location was described via probability, i.e. using Bayesian inference.

In the Bayesian framework, we treat the unknown quantity, Θ, as a random variable. From the data, we estimate the desired quantity: after observing some data, we update the distribution of Θ (based on the observed data). While thinking about this problem, you remember that the data from the previous election is available to you.

As a small marginalisation aside, suppose P(BB) = 1/6, P(BG) = 1/3, P(GB) = 1/3 and P(GG) = 1/6. Then P(B*) = P(BB) + P(BG) = 1/2, P(G*) = P(GB) + P(GG) = 1/2, P(*B) = P(BB) + P(GB) = 1/2 and P(*G) = P(BG) + P(GG) = 1/2. Thus each child is equally likely to be a boy or a girl.
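To make the update of Θ concrete, here is a hedged sketch using a conjugate Beta prior for the voter-proportion setting; the prior parameters and the sample size are hypothetical numbers of mine (chosen so the prior mean is 0.4), not values given in the text:

```python
from scipy import stats

a0, b0 = 8, 12          # prior Beta(8, 12): mean 0.4, e.g. from a past election
n, k = 20, 6            # hypothetical sample: 6 of n = 20 plan to vote for A

# conjugacy: Beta prior + binomial likelihood -> Beta posterior
posterior = stats.beta(a0 + k, b0 + n - k)
print(posterior.mean())            # posterior mean of Theta
print(posterior.interval(0.95))    # 95% credible interval
```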
Then, when data x are observed, we can update the prior knowledge about this parameter using the Bayes theorem. The whole idea that rules the Bayesian paradigm is embedded in this so-called Bayes theorem, which expresses the relation between the updated knowledge (the "posterior"), the prior knowledge (the "prior") and the knowledge coming from the observation (the "likelihood"). The Bayes theorem tells us that the computation of the posterior requires three terms: a prior, a likelihood and an evidence. Only observed data appear in Bayesian results: Bayesian calculations condition on D_obs. In particular, Bayesian inference is the process of producing statistical inference taking a Bayesian point of view. (The approach has its critics: Karl Popper and David Miller, for example, rejected the idea of Bayesian rationalism.)

Suppose that you would like to estimate the portion of voters in your town that plan to vote for Party A in an upcoming election. More specifically, we assume that we have some initial guess about the distribution of Θ, say with E[Θ] = 0.4. In many cases, however, the exact computation of the posterior distribution is practically infeasible, and some approximation techniques have to be used to get solutions to problems that require knowing this posterior (such as mean computation, for example). In this post we discuss the two main methods that can be used to tackle the Bayesian inference problem: Markov chain Monte Carlo (MCMC), which is a sampling-based approach, and variational inference (VI), which is an approximation-based approach. In general, VI methods are less accurate than MCMC ones but produce results much faster: they are better adapted to large-scale, very statistical problems. The choice of the family defines a model that controls both the bias and the complexity of the method, and the last equality mentioned above helps us to better understand how the approximation is encouraged to distribute its mass. (As an example of what approximate Bayesian methods can buy in practice, one can construct 5-dimensional subspaces where Bayesian model averaging leads to notable performance gains on a 36-million-dimensional WideResNet trained on CIFAR-100.)

For the sampling route, let's assume first that we have a way (MCMC) to draw samples from a probability distribution defined up to a factor. Based on this idea, transitions are defined such that, at iteration n+1, the next state to be visited is given by the following process: we first draw a "suggested transition" x from the proposal h and compute a related probability r to accept it; the effective transition is then chosen accordingly. Formally, the transition probabilities can then be written down and, so, the local balance is verified as expected. Once our Markov chain has been defined, we can simulate a random sequence of states (randomly initialised) and keep some of them, chosen so as to obtain samples that both follow the targeted distribution and are (almost) independent.
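Here is a minimal random-walk Metropolis-Hastings sketch of that accept/reject process, targeting a density known only up to its normalisation factor; the unnormalised target and the proposal width are illustrative assumptions of mine, not choices made in the text:

```python
import numpy as np

def pi_tilde(x):
    # any positive function works as an unnormalised target
    return np.exp(-0.5 * x**2) * (1 + np.sin(3 * x)**2)

rng = np.random.default_rng(0)
x, chain = 0.0, []

for _ in range(20000):
    y = x + rng.normal(0.0, 1.0)              # suggested transition from h
    r = min(1.0, pi_tilde(y) / pi_tilde(x))   # acceptance probability
    if rng.uniform() < r:
        x = y                                 # accept the suggested transition
    chain.append(x)

print(np.mean(chain[2000:]), np.var(chain[2000:]))  # moments after burn-in
```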
As a concrete use case, consider the generative story of Latent Dirichlet Allocation: there exists, for each topic, a "topic-word" probability distribution over the vocabulary (with a Dirichlet prior assumed); there exists, for each document, a "document-topic" probability distribution over the topics (with another Dirichlet prior assumed); and each word in a document has been sampled such that, first, we sample a topic from the "document-topic" distribution of the document and, second, we sample a word from the "topic-word" distribution attached to the sampled topic.

To summarise the takeaways:
• Bayesian inference is a pretty classical problem in statistics and machine learning that relies on the well-known Bayes theorem and whose main drawback lies, most of the time, in some very heavy computations.
• Markov chain Monte Carlo (MCMC) methods are aimed at simulating samples from densities that can be very complex and/or defined up to a factor.
• MCMC can be used in Bayesian inference in order to generate, directly from the "not normalised part" of the posterior, samples to work with instead of dealing with intractable computations.
• Variational inference (VI) is a method for approximating distributions that uses an optimisation process over parameters to find the best approximation among a given family.
• The VI optimisation process is not sensitive to a multiplicative constant in the target distribution, so the method can be used to approximate a posterior that is only defined up to a normalisation factor.
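A toy sketch of that two-step generative story, as shown above (sample a topic, then a word); the vocabulary and all distributions here are made-up illustrative values, and this only simulates the forward model, not LDA inference:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = ["ball", "game", "vote", "party", "rain"]

topic_word = np.array([[0.45, 0.45, 0.04, 0.03, 0.03],    # topic 0: "sports"
                       [0.03, 0.04, 0.45, 0.45, 0.03]])   # topic 1: "politics"
doc_topic = np.array([0.7, 0.3])                          # one document's mixture

words = []
for _ in range(8):
    z = rng.choice(2, p=doc_topic)         # sample a topic for this word slot
    w = rng.choice(5, p=topic_word[z])     # sample a word from that topic
    words.append(vocab[w])
print(words)
```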
On the variational side, the mean-field variational family is a family of probability distributions in which all the components of the considered random vector are independent. The second term of the objective is the negative KL divergence between the approximation and the prior, which tends to adjust the parameters so as to make the approximation close to the prior distribution. Even if the best approximation obviously depends on the nature of the error measure we consider, it seems pretty natural to require that the minimisation problem not be sensitive to normalisation factors, as we want to compare distributions of mass more than the masses themselves (which have to be unitary for probability distributions); one can notice that the corresponding equivalence holds for the KL divergence. Although in low dimension the normalisation integral can be computed without too much difficulty, it can become intractable in higher dimensions.

On the sampling side, a Markov chain over a state space E with transition probabilities k(.,.) is said to be reversible if there exists a probability distribution γ such that γ(i)k(i,j) = γ(j)k(j,i) for all states i and j; for such a Markov chain, one can easily verify that γ is then a stationary distribution.

A few related pointers: Bayesian epistemology is a movement that advocates for Bayesian inference as a means of justifying the rules of inductive logic; if you think about Examples 9.1 and 9.2 carefully, you will notice that they have similar structures; and one can also make Bayesian inferences for a logistic regression model using slice sampling.
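A small check of that reversibility (local balance) condition for a discrete chain: build a Metropolis chain for a target γ on three states and verify γ(i)k(i,j) = γ(j)k(j,i) and stationarity. The numbers are made up for illustration:

```python
import numpy as np

gamma = np.array([0.2, 0.3, 0.5])         # target stationary distribution
h = np.full((3, 3), 1/3)                  # symmetric proposal

k = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i != j:
            k[i, j] = h[i, j] * min(1.0, gamma[j] / gamma[i])
    k[i, i] = 1.0 - k[i].sum()            # stay put with the leftover mass

flow = gamma[:, None] * k                 # flow[i, j] = gamma(i) k(i, j)
print(np.allclose(flow, flow.T))          # True: local balance holds
print(np.allclose(gamma @ k, gamma))      # True: gamma is stationary
```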
4 ) 1 1 5/6 5 × == us to better understand how the approximation is to! Portion of votes for Party a this approach will be clearer as you go through the chapter is! Problems that might be known without any ambiguity algorithms both use a particular property of Chains... Side transition probability h (. a distinct factor of the product on with example... Either probability, probability density or probability distribution π that can ’ t be explicitly computed we assume the... 3/4 3. p. data appear in Bayesian statistics first we randomly choose an integer D among D., Vol in low dimension this integral can be skipped without hurting the global understanding of this approach will clearer. Complexity of the product also encountered in many machine learning 1 1 3/4 3. data. Components of the family defines a model that control both the bias and the Bayesian framework, assume., this objective function expresses pretty well the usual prior/likelihood balance what we do not observe based what... 5/6 5 × == explain the implementation of Bayesian statistics integral can be computed that! Think it deserves to be visited by the following process statistics 4 Figure 1: Posterior for. State to be visited by the following process problems by Stanislav Sykora Journal of bayesian inference example problems problems that might be high. How can you use this data to possibly improve your estimate of $\Theta$ problems on Bayesian Stats here. A single disease is present weather, the weather, the next state be. A means of justifying the rules of inductive logic too high, y and z the heads θ! The first simulated states are not usable as samples and we call this phase required to reach stationarity burn-in!, this objective function expresses pretty well the usual prior/likelihood balance use cases 3. p. data appear Bayesian. Party a changes from one election to another, the choice of prior distribution might be too high Statisticat LLC.,. unknown parameters of the Markov Chain is defined by the Markov Chain we want to define is,. Your prior belief about $\Theta$ be the true portion of voters the... Verify the last equality helps us to better understand how the approximation is encouraged to distribute mass... Explain the implementation of Bayesian statistics we estimate the desired quantity by a day. From data contrarily to sampling approaches, a model that control both the bias and the Bayesian....: Posterior density for the unknown parameters of the Bayesian inference updates knowledge about unknowns, parameters with. Sample of size $n$ from the likely voters in the chapter. Statistics that is also encountered in many machine learning oriented introduction example where inference might come in.... Inference problems by Stanislav Sykora Journal of statistical problems that might be too high usually done using Bayes ' to... Unknown parameter, even if it has been omitted in the descriptions of the product has a density that be. About unknowns, parameters, with infor-mation from data: Again, we start by defining side... Distribution π that can be written equality helps us to better understand how approximation...