## Existence of weak solutions for elliptic Dirichlet problem.

Proposition (Existence of weak solution for elliptic Dirichlet problem 1). Let $U$ be a bounded open subset of $\mathbb{R}^n$ with a $C^1$-boundary and let $L$ satisfy the definition ( Elliptic differential operator ). There exists a number $\gamma\geq 0$, dependent only on $L$ and $U$, such that for any $\mu\geq\gamma$ and any function $f\in L^2(U)$ there exists a weak solution $u\in H^1_0(U)$ of the problem $$Lu+\mu u=f\ \text{in}\ U,\quad u=0\ \text{on}\ \partial U.$$

Proof. The statement follows from the proposition ( Lax-Milgram theorem ) applied to the problem above. The bilinear form is $$B_\mu[u,v]=B[u,v]+\mu(u,v)_{L^2(U)},\quad u,v\in H^1_0(U).$$ We use the proposition ( Energy estimates for the bilinear form B ), $$\beta\|u\|^2_{H^1_0(U)}\leq B[u,u]+\gamma\|u\|^2_{L^2(U)},$$ and estimate $$B_\mu[u,u]\geq\beta\|u\|^2_{H^1_0(U)}+(\mu-\gamma)\|u\|^2_{L^2(U)}.$$ Hence, for $\mu\geq\gamma$ the form $B_\mu$ satisfies the conditions of the proposition ( Lax-Milgram theorem ) in $H^1_0(U)$. The mapping $v\mapsto(f,v)_{L^2(U)}$ is a bounded linear functional in $H^1_0(U)$.

Definition. We introduce the following "adjoint" operator $L^*$ and a bilinear form $B^*$: $$B^*[v,u]=B[u,v],\quad u,v\in H^1_0(U).$$

Definition. The function $v\in H^1_0(U)$ is a weak solution of the problem $$L^*v=f\ \text{in}\ U,\quad v=0\ \text{on}\ \partial U$$ if $$B^*[v,u]=(f,u)_{L^2(U)}\quad\text{for all}\ u\in H^1_0(U).$$

Proposition (Existence of weak solution for elliptic Dirichlet problem 2). Let $U$ be a bounded open subset of $\mathbb{R}^n$ with a $C^1$-boundary and let $L$ satisfy the definition ( Elliptic differential operator ).

1. Either (Elliptic alternative 1) for each $f\in L^2(U)$ there exists a unique weak solution of the problem $Lu=f$ in $U$, $u=0$ on $\partial U$, or (Elliptic alternative 2) there exists a weak solution $u\not\equiv 0$ of the homogeneous problem $Lu=0$ in $U$, $u=0$ on $\partial U$.
2. If the assertion ( Elliptic alternative 2 ) holds then the subspace $N\subset H^1_0(U)$ of weak solutions of the homogeneous problem is finite-dimensional and has the same dimension as the subspace $N^*\subset H^1_0(U)$ of weak solutions of the homogeneous adjoint problem $L^*v=0$ in $U$, $v=0$ on $\partial U$.
3. The problem ( Elliptic Dirichlet problem ) $Lu=f$ in $U$, $u=0$ on $\partial U$ has a weak solution if and only if $$(f,v)_{L^2(U)}=0\quad\text{for all}\ v\in N^*.$$

Proof. According to the proposition ( Existence of weak solution for elliptic Dirichlet problem 1 ), there exists a mapping $L^{-1}_\gamma:L^2(U)\to H^1_0(U)$, $g\mapsto u$, where the $u$ is the weak solution of the problem $$Lu+\gamma u=g\ \text{in}\ U,\quad u=0\ \text{on}\ \partial U.$$ Hence, a function $u$ is a weak solution of the problem $Lu=f$, $u|_{\partial U}=0$ if $$Lu+\gamma u=\gamma u+f$$ or $$u=L^{-1}_\gamma(\gamma u+f).$$ Setting $Ku=\gamma L^{-1}_\gamma u$ and $h=L^{-1}_\gamma f$, the functions $u$ and $h$ are connected by $u-Ku=h$ iff $u$ is a weak solution. According to the proof of the proposition ( Existence of weak solution for elliptic Dirichlet problem 1 ), the form $B_\gamma$ satisfies the conditions of the proposition ( Lax-Milgram theorem ): $$\beta\|u\|^2_{H^1_0(U)}\leq B_\gamma[u,u]$$ for some constant $\beta>0$. We estimate $$\beta\|Kg\|^2_{H^1_0(U)}\leq B_\gamma[Kg,Kg]=\gamma(g,Kg)_{L^2(U)}\leq\gamma\|g\|_{L^2(U)}\|Kg\|_{H^1_0(U)},$$ so that $\|Kg\|_{H^1_0(U)}\leq\frac{\gamma}{\beta}\|g\|_{L^2(U)}$. According to the proposition ( Rellich-Kondrachov compactness theorem ) the last inequality implies that the operator $K:L^2(U)\to L^2(U)$ is compact. The rest follows from the proposition ( Fredholm alternative ).

Proposition (Existence of weak solution for elliptic Dirichlet problem 3).

1. There exists an at most countable set $\Sigma\subset\mathbb{R}$ such that the boundary value problem $$Lu=\lambda u+f\ \text{in}\ U,\quad u=0\ \text{on}\ \partial U$$ has a unique weak solution for each $f\in L^2(U)$ iff $\lambda\notin\Sigma$.
2. If $\Sigma$ is infinite then $\Sigma=\{\lambda_k\}_{k=1}^{\infty}$, where the values $\lambda_k$ are nondecreasing and $\lambda_k\to+\infty$.

Proof. We have established during the proof of the previous proposition that the operator $K=\gamma L^{-1}_\gamma$ is compact in $L^2(U)$ whenever it is defined. The rest follows from the proposition ( Spectrum of compact operator ).

Proposition (Elliptic boundedness of inverse). Let $u$ be a weak solution of $$Lu=\lambda u+f\ \text{in}\ U,\quad u=0\ \text{on}\ \partial U$$ for $\lambda\notin\Sigma$ and $f\in L^2(U)$; then $$\|u\|_{L^2(U)}\leq C\|f\|_{L^2(U)}$$ for a constant $C$ depending only on $\lambda$, $L$ and $U$. The constant $C$ blows up if $\lambda$ approaches $\Sigma$.

Proof. The statement is a simple corollary of the proof of the proposition ( Existence of weak solution for elliptic Dirichlet problem 2 ).
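The proofs above invoke the Fredholm alternative for the compact operator $K$. For reference, the standard statement being used (paraphrased here; this text states it as a separate proposition) is:

$$\text{either }(I-K)u=h\text{ has a unique solution for every }h\in L^2(U),\text{ or }\ker(I-K)\neq\{0\};$$

in the second case $\dim\ker(I-K)=\dim\ker(I-K^{*})<\infty$, and $(I-K)u=h$ is solvable exactly when $(h,v)_{L^2(U)}=0$ for every $v\in\ker(I-K^{*})$. Applied to $u-Ku=h$ with $K=\gamma L^{-1}_{\gamma}$ and $h=L^{-1}_{\gamma}f$, this yields assertions 1 to 3 of the proposition.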
Vernier Software & Technology # Titration Curves: An Application of the Logistic Function ## Introduction Think about how cold germs spread through a school. One person comes to class with a cold and infects other students. At first, the disease spreads slowly, but as more students catch the cold and spread it to other classmates, the disease spreads more rapidly. The rate of infection slows down again when most students are infected and there is almost no one left at school to infect. The maximum number of students who can contract the disease is the total number of students in the school. A logistic function is often used to model this type of situation. The logistic function is built from an exponential function, but the ratio and offset in its formula give it its characteristic behavior. The formula for a logistic function is: $y = \frac{A} {{1 + {B^{x - C}}}} + D$ In this activity, you will add base to an acid and use a logistic function to model the data and locate the equivalence point. ## Objectives • Record pH versus base volume data for an acid-base titration. • Manually model the titration curve using a logistic function. • Describe the role of each parameter in the logistic function. ## Sensors and Equipment This experiment features the following Vernier sensors and equipment. You may also need an interface and software for data collection.
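Although the handout has you model the curve manually, a small code sketch may help make the formula concrete. This is an illustration, not part of the Vernier activity, and the parameter values below are hypothetical:

```js
// Logistic model from the handout: y = A / (1 + B^(x - C)) + D.
// For a pH-vs-volume titration curve, roughly: D is the starting pH,
// A is the height of the pH jump, C is the equivalence-point volume,
// and B (here 0 < B < 1) controls how steep the jump is.
const logistic = (x, A, B, C, D) => A / (1 + B ** (x - C)) + D

// Sample the model over 0-50 mL of added base.
for (let v = 0; v <= 50; v += 5) {
  console.log(`${v} mL -> pH ${logistic(v, 10, 0.2, 25, 3).toFixed(2)}`)
}
```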
Creating a Server

Now that we have a data manager (s:dataman), the next step is to create a server to share our data with the world, which we will build using a library called Express. Before we start writing code, though, we need to understand how computers talk to each other.

HTTP

Almost everything on the web communicates via HTTP, which stands for HyperText Transfer Protocol. The core of HTTP is a request/response cycle that specifies the kinds of requests applications can make of servers, how they exchange data, and so on. f:server-cycle shows this cycle in action for a page that includes one image.

1. The client (a browser or some other program) makes a connection to a server.
2. It then sends a blob of text specifying what it's asking for.
3. The server replies with a blob of text containing the HTML of the page.
4. The connection is closed.
5. The client parses the text and realizes it needs an image.
6. It sends another blob of text to the server asking for that image.
7. The server replies with a blob of text and the contents of the image file.
8. The connection is closed.

This cycle might be repeated many times to display a single web page, since a separate request has to be made for every image, every CSS or JavaScript file, and so on. In practice, a lot of behind-the-scenes engineering is done to keep connections alive as long as they're needed, and to cache items that are likely to be re-used.

An HTTP request is just a block of text with two important parts:

• The method is almost always either GET (to get data) or POST (to submit data).
• The URL is typically a path to a file, but as we'll see below, it's completely up to the server to interpret it.

The request can also contain headers, which are key-value pairs with more information about what the client wants. Some examples include:

• "Accept: text/html" to specify that the client wants HTML
• "Accept-Language: fr, en" to specify that the client prefers French, but will accept English
• "If-Modified-Since: 16-May-2018" to tell the server that the client is only interested in recent data

(Unlike the keys in a dictionary, a header key may appear any number of times, which allows a request to do things like specify that it's willing to accept several types of content.)

The body of the request is any extra data associated with it, such as files that are being uploaded. If a body is present, the request must contain the Content-Length header so that the server knows how much data to read (f:server-request).

The headers and body in an HTTP response have the same form, and mean the same thing. Crucially, the response also includes a status code to indicate what happened: 200 for OK, 404 for "page not found", and so on. Some of the more common ones are shown in t:server-codes.

| Code | Name | Meaning |
| --- | --- | --- |
| 100 | Continue | The client should continue sending data |
| 200 | OK | The request has succeeded |
| 204 | No Content | The server completed the request, but there is no data |
| 301 | Moved Permanently | The resource has moved to a new permanent location |
| 307 | Temporary Redirect | The resource is temporarily at a different location |
| 400 | Bad Request | The request is badly formatted |
| 401 | Unauthorized | The request requires authentication |
| 404 | Not Found | The requested resource could not be found |
| 408 | Timeout | The server gave up waiting for the client |
| 418 | I'm a Teapot | An April Fools' joke |
| 500 | Internal Server Error | A server error occurred while handling the request |
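To make the request/response cycle concrete, here is what the text of a small exchange might look like. This is a hand-written illustration; the exact set of headers a real browser and server would produce will differ:

```
GET /index.html HTTP/1.1
Accept: text/html
Accept-Language: fr, en
```

and a possible response, whose first line carries the status code:

```
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 44

<html><body><h1>Asteroids</h1></body></html>
```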
One final thing we need to understand is the structure and interpretation of URLs. This one:

http://example.org:1234/some/path?value=deferred&limit=200

has five parts:

• The protocol http, which specifies what rules are going to be used to exchange data.
• The hostname example.org, which tells the client where to find the server. If we are running a server on our own computer for testing, we can use the name localhost to connect to it. (Computers rely on a service called DNS to find the machines associated with human-readable hostnames, but its operation is out of scope for this tutorial.)
• The port 1234, which tells the client where to call the service it wants. (If a host is like an office building, a port is like a phone number in that building. The fact that we think of phone numbers as having physical locations says something about our age…)
• The path /some/path, which tells the server what the client wants.
• The query parameters value=deferred and limit=200. These come after a question mark and are separated by ampersands, and are used to provide extra information.

It used to be common for paths to identify actual files on the server, but the server can interpret the path however it wants. In particular, when we are writing a data service, the segments of the path can identify what data we are asking for. Alternatively, it's common to think of the path as identifying a function on the server that we want to call, and to think of the query parameters as the arguments to that function. We'll return to these ideas after we've seen how a simple server works.

Hello, Express

A Node-based library called Express handles most of the details of HTTP for us. When we build a server using Express, we provide callback functions that take three parameters:

• the original request,
• the response we're building up, and
• what to do next (which we'll ignore for now).

We also provide a pattern with each function that specifies what URLs it is to match. Here is a simple example:

```js
const express = require('express')

const PORT = 3418

// Main server object.
const app = express()

// Return a static page.
app.get('/', (req, res, next) => {
  res.status(200).send('<html><body><h1>Asteroids</h1></body></html>')
})

app.listen(PORT, () => {
  console.log('listening...')
})
```

The first line of code loads the Express library. The next defines the port we will listen on, and then the third creates the object that will do most of the work. Further down, the call to app.get tells that object to handle any GET request for '/' by sending a reply whose status is 200 (OK) and whose body is an HTML page containing only an h1 heading. There is no actual HTML file on disk, and in fact no way for the browser to know if there was one or not: the server can send whatever it wants in response to whatever requests it wants to handle.

Note that app.get doesn't actually get anything right away. Instead, it registers a callback with Express that says, "When you see this URL, call this function to handle it." As we'll see below, we can register as many path/callback pairs as we want to handle different things.

Finally, the last line of this script tells our application to listen on the specified port, while the callback tells it to print a message as it starts running. When we run this, we see:

```
$ node static-page.js
listening...
```

Our little server is now waiting for something to ask it for something. If we go to our browser and request http://localhost:3418/, we get a page with a large title Asteroids on it. Our server has worked, and we can now stop it by typing Ctrl-C in the shell.
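Returning for a moment to the anatomy of URLs: Node's built-in URL class (a global in recent versions of Node) will decompose a URL for us, which is a handy way to check our reading of each part. A quick sketch using the example URL from earlier:

```js
const url = new URL('http://example.org:1234/some/path?value=deferred&limit=200')

console.log(url.protocol)                   // 'http:'
console.log(url.hostname)                   // 'example.org'
console.log(url.port)                       // '1234'
console.log(url.pathname)                   // '/some/path'
console.log(url.searchParams.get('value'))  // 'deferred'
console.log(url.searchParams.get('limit'))  // '200'
```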
Handling Multiple Paths

Let's extend our server to do different things when given different paths, and to handle the case where the request path is not known:

```js
const express = require('express')

const PORT = 3418

// Main server object.
const app = express()

// Root page.
app.get('/', (req, res, next) => {
  res.status(200).send('<html><body><h1>Home</h1></body></html>')
})

// Alternative page.
app.get('/asteroids', (req, res, next) => {
  res.status(200).send('<html><body><h1>Asteroids</h1></body></html>')
})

// Nothing else worked.
app.use((req, res, next) => {
  res
    .status(404)
    .send(`<html><body><p>ERROR: ${req.url} not found</p></body></html>`)
})

app.listen(PORT, () => {
  console.log('listening...')
})
```

The first few lines are the same as before. We then specify handlers for the paths / and /asteroids, each of which sends a different chunk of HTML. The call to app.use specifies a default handler: if none of the app.get handlers above it took care of the request, this callback function will send a "page not found" status code and a page containing an error message. Some sites skip the status code and only report errors in the page text for people to read, but this is sinful: making the code explicit makes it a lot easier to write programs to scrape data.

As before, we can run our server from the command line and then go to various URLs to test it. http://localhost:3418/ produces a page with the title "Home", http://localhost:3418/asteroids produces one with the title "Asteroids", and http://localhost:3418/test produces an error page.

Serving Files from Disk

It's common to generate HTML in memory when building data services, but it's also common for the server to return files. To do this, we will provide our server with the path to the directory it's allowed to read pages from, and then run it with node server-name.js path/to/directory. We have to tell the server where it's allowed to read files from because we definitely do not want it to be able to send everything on our computer to whoever asks for it. (For example, a request for the /etc/passwd password file on a Linux server should probably be refused.)

Here's our updated server:

```js
const express = require('express')
const path = require('path')
const fs = require('fs')

const PORT = 3418
const root = process.argv[2]

// Main server object.
const app = express()

// Handle all requests.
app.use((req, res, next) => {
  const actual = path.join(root, req.url)
  const data = fs.readFileSync(actual, 'utf-8')
  res.status(200).send(data)
})

app.listen(PORT, () => {
  console.log('listening...')
})
```

The steps in handling a request are:

1. The URL requested by the client is given to us in req.url.
2. We use path.join to combine that with the path to the root directory, which we got from a command-line argument when the server was run.
3. We try to read that file using readFileSync, which blocks the server until the file is read. We will see later how to do this I/O asynchronously so that our server is more responsive.
4. Once the file has been read, we return it with a status code of 200.

If a sub-directory called web-dir holds a file called title.html, and we run the server as:

```
$ node serve-pages.js ./web-dir
```

we can then ask for http://localhost:3418/title.html and get the content of web-dir/title.html. Notice that the directory ./web-dir doesn't appear in the URL: our server interprets all paths as if the directory we've given it is the root of the filesystem.
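One thing our minimal file server does not yet do is enforce the restriction we just described: a URL containing .. segments could climb out of the root directory. A possible guard, sketched here as an addition rather than as part of the original tutorial, is to resolve the joined path and refuse anything that lands outside the root:

```js
const rootAbs = path.resolve(root)

// Handle all requests, refusing paths that escape the root directory
// (e.g. GET /../../etc/passwd).
app.use((req, res, next) => {
  const actual = path.resolve(path.join(rootAbs, req.url))
  if (actual !== rootAbs && !actual.startsWith(rootAbs + path.sep)) {
    res.status(403).send('<html><body><p>ERROR: forbidden</p></body></html>')
    return
  }
  const data = fs.readFileSync(actual, 'utf-8')
  res.status(200).send(data)
})
```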
If we ask for a nonexistent page like http://localhost:3418/missing.html we get this:

```
Error: ENOENT: no such file or directory, open 'web-dir/missing.html'
    at Object.openSync (fs.js:434:3)
    at Object.readFileSync (fs.js:339:35)
    ... etc. ...
```

We will see in the exercises how to add proper error handling to our server.

Favorites and Icons

If we use a browser to request a page such as title.html, the browser may actually make two requests: one for the page, and one for a file called favicon.ico. Browsers do this automatically, then display that file in tabs, bookmark lists, and so on. Despite its .ico suffix, the file is usually a small PNG-formatted image, and by default browsers look for it in the root directory of the website.

Content Types

So far we have only served HTML, but the server can send any type of data, including images and other binary files. For example, let's serve some JSON data:

```js
// ...as before...

app.use((req, res, next) => {
  const actual = path.join(root, req.url)
  if (actual.endsWith('.json')) {
    const data = fs.readFileSync(actual, 'utf-8')
    const json = JSON.parse(data)
    res.setHeader('Content-Type', 'application/json')
    res.status(200).send(json)
  } else {
    const data = fs.readFileSync(actual, 'utf-8')
    res.status(200).send(data)
  }
})
```

What's different here is that when the requested path ends with .json, we explicitly set the Content-Type header to application/json to tell the client how to interpret the bytes we're sending back. If we run this server with web-dir as the directory to serve from and ask for http://localhost:3418/data.json, a modern browser will provide a folding display of the data rather than displaying the raw text.

Exercises

Report Missing Files
Modify the version of the server that returns files from disk to report a 404 error if a file cannot be found. What should it return if the file exists but cannot be read (e.g., if the server does not have permission)?

Serving Images
Modify the version of the server that returns files from disk so that if the file it is asked for has a name ending in .png or .jpg, it is returned with the right Content-Type header.

Delayed Replies
Our file server uses fs.readFileSync to read files, which means that it stops each time a file is requested rather than handling other queries while waiting for the file to be read. Modify the callback given to app.use so that it uses fs.readFile with a callback instead.

Using Query Parameters
URLs can contain query parameters in the form:

http://site.edu?first=123&second=beta

Read the online documentation for Express to find out how to access them in a server, and then write a server to do simple arithmetic: the URL http://localhost:3654/add?left=1&right=2 should return 3, while the URL http://localhost:3654/subtract?left=1&right=2 should return -1.

Key Points

• An HTTP request or response consists of a plain-text header and an optional body.
• HTTP is a stateless protocol.
• Express provides a simple path-based JavaScript server.
• Write callback functions to handle requests matching specified paths.
• Provide a default handler for unrecognized requests.
• Use Content-Type to specify the type of data being returned.
# Preamble

"Training Question Answering Models From Synthetic Data" is an NLP paper from Nvidia that I found very interesting. Question-and-answer (QA) data is expensive to obtain. If we can use the data we have to generate more data, that will be a huge time saver and create a lot of new possibilities. This paper shows some promising results in this direction.

Some caveats:

1. We need big models to be able to get decent results. (The paper reported question generation models with parameter counts from 117M to 8.3B. See the ablation study in the following sections.)
2. Generated QA data is still not at the same level as the real data. (At least 3x+ more synthetic data is needed to reach the same level of accuracy.)

There is a lot of content in this paper, and it can be a bit overwhelming. I wrote down the parts of the paper that I think are most relevant in this post, and hopefully it can be helpful to you as well.

# Method

## Components

There are three or four stages in the data generation process. Each stage requires a separate model:

• Stage 0 - [Optional] Context generation: The SQuAD 1.1 training data were used to train the following three stages (Figure 2 below). But when testing/generating, we can choose to use real Wikipedia data or use a model to generate Wikipedia-like data.
• Stage 1 - Answer Generation ($\hat{a} \sim p(a|c)$): A BERT-style model to do answer extraction from the given context. The start and the end of the token span are jointly sampled.
• Stage 2 - Question Generation ($\hat{q} \sim p(q|\hat{a},c)$): A fine-tuned GPT-2 model to generate a question from the context and the answer.
• Stage 3 - Roundtrip Filtration ($\hat{a} \stackrel{?}{=} a^{\ast} \sim p(a|c,\hat{q})$): A trained extractive QA model to get the answer from the context and the generated question. If the predicted answer matches the generated answer, we keep this triplet (context, answer, and question). Otherwise, the triplet is discarded.

The last step seems to be very strict. Any deviation from the generated answer will not be tolerated. However, given that the EM (exact match) score of the model trained on SQuAD 1.1 alone is already 87.7%, it's reasonable to expect the answers predicted by the filtration model to be quite accurate. The paper also proposes an over-generation technique (generate two questions for each answer and context pair) to compensate for the valid triplets being discarded.

## More Details

### Context Generation

Besides using Wikipedia documents as contexts, this paper also generates completely synthetic contexts using an 8.3B GPT-2 model:

> This model was first trained with the Megatron-LM codebase for 400k iterations before being fine-tuned on only Wikipedia documents for 2k iterations. This allows us to generate high-quality text from a distribution similar to Wikipedia by using top-p (p = 0.96) nucleus sampling.

### Answer Generation

This paper trains the answer generation model to match the exact answer in the training data. This naturally ignores the other possible answers from the context, but seems to be a more generalizable way to do it. The joint modeling of the starts and the ends of the answer span, which is reported to perform better, creates more candidates in the denominator in the calculation of the likelihood. (I'm not very sure about the complexity and performance impact of this joint approach.)

### Question Generation

This paper uses token type ids to identify the components in the triplets. The answer span in the context is also marked by the answer token id. Special tokens are also added to the start and the end of the questions.
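To make stage 3 concrete before looking at the triplet counts, here is a tiny sketch of the exact-match roundtrip filter. This is an illustration in code; qaModel and its predict method are hypothetical stand-ins for the trained extractive QA model, not an API from the paper:

```js
// Keep a generated (context, answer, question) triplet only when the
// QA model's predicted answer exactly matches the generated answer.
const roundtripFilter = (triplets, qaModel) =>
  triplets.filter(({ context, answer, question }) =>
    qaModel.predict(context, question) === answer)
```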
### Number of Triplets Generated

As explained in the previous section, the paper uses an over-generation technique to compensate for the model precision problem. Two questions are generated for each answer and context pair (a.k.a. answer candidate). Answer candidates for a context are generated by top-k sampling within a nucleus of p = 0.9 (that is, we take the samples with the highest likelihoods until we either get k samples or the cumulative probability of the samples taken reaches 0.9).

In the ablation study (which will be covered in the following sections), the models in stages 1 to 3 are trained with half of the SQuAD 1.1 training data, and the other half is used to generate synthetic data. The performance of the QA model trained on the synthetic data is used to evaluate the quality of that data. From the table above (Table 4), we can see that the smaller model on average generated 2.75 valid triplets per context, and the larger model generated 4.36 triplets. Those synthetic datasets are already bigger than the SQuAD 1.1 training set.

# Experiments

## Model Scale

Table 4 (in the previous section) shows that larger models in stages 1 to 3 create better data for the downstream model, but it is not clear whether it was the quality of the data or the quantity of the data that helped.

Table 5 shows that the quality of questions generated does increase as the model scales up.

To test the quality of the generated answers, the paper used the 1.2B question generator (see Table 5) to generate questions without filtration from the generated answers, fine-tuned a QA model, and tested against the dev set. Table 6 shows that a bigger model increases the quality of generated answers, but only marginally. (I am not sure how they obtained the benchmark for BERT-Large, though. I think BERT-Large expects a context and question pair to generate an answer, but here we want to generate answers from only the context. Maybe they took the pre-trained BERT-Large model and fine-tuned it like the other two.)

In Table 7 we can see that filtration does improve the performance of the downstream models (compare to Table 5). When using real answers to generate questions, less than 50% of the triplets generated by the 345M model were rejected, while about 55% of those from the 1.2B model were rejected. Note that all the models in this set under-performed the model trained with only human-generated data (the SQuAD training set).

The additional triplets from using generated answers are quite helpful: the 1.2B model finally surpassed the baseline model (human-generated data), but it used 3x+ more data.

To sum up, the ablation study shows that scaling up the model improved the quality of the generated data, but the increase in the quantity of the data also played a part.

## Fully Synthetic Data

In this part of the paper, they trained the models for stages 1 to 3 using the full SQuAD 1.1 training set, and used the deduplicated Wikipedia documents as contexts to generate answers and questions. They also fine-tuned an 8.3B GPT-2 model on Wikipedia documents to generate synthetic contexts.

Table 2 shows that synthetic contexts can be as good as the real ones. Also, further fine-tuning on the real SQuAD 1.1 data can further improve the performance, which might imply that there is still something missing in the fully or partially synthetic triplets. However, using 200x+ more data to get less than 1% more accuracy seems wasteful.
We want to know how much synthetic data we need to reach the baseline accuracy. The next section answers this question.

## The Quantity of Data Required to Beat the Baseline

(The "data labeled" here seems to mean the size of the corpus used to generate the triplets, not the size of the generated triplets.)

Figure 3 shows that we need at least 50 MB of labeled text to reach the baseline accuracy (without fine-tuning on the real data), and 100 MB to surpass it. That's 2.5x+ and 7x+ more than the real text used by the baseline. Considering that multiple triplets are generated from one context, the number of triplets required is estimated (by me) to be around 20x and 40x more.

The silver lining is that only 10 MB of text needs to be labeled if we fine-tune the model with the real SQuAD data to surpass the baseline. That roughly translates to 3 to 4 times more triplets used than the baseline. So real plus synthetic data is probably the way to go for now.

# Wrapping Up

There are quite a few more details I did not cover in this post. Please refer to the source paper if you want to know more.

All in all, very interesting results in this paper. Unfortunately, the amount of compute needed to synthesize the data and the amount of synthetic data needed to reach good results are still staggering. But on the other hand, it essentially trades compute for the human labor required by the annotation process, and that might not be a bad deal.
# Feedback CL LaTeX question

Can somebody help? I'm trying to get side 1 to be 4 times the square root of 3, and the notation is not working.

If you put the expression in the Desmos calculator, then copy it and paste it into your code, you can sometimes get the correct notation. This is what I got for 4 times the square root of 3: `4\sqrt{3}`
# Gray (unit)

The gray (symbol: Gy) is the SI unit of absorbed dose: the radiation dose absorbed by matter due to ionizing radiation (for example, X-rays). It supersedes the older unit, the rad, whose use is now "strongly discouraged". ## Definition One gray is the absorption of one joule of energy, in the form of ionizing radiation, by one kilogram of matter. $1 \ \mathrm{Gy} = 1\ \frac{\mathrm{J}}{\mathrm{kg}} = 1\ \mathrm{m}^2\cdot\mathrm{s}^{-2}$ For X-rays and gamma rays, the gray has the same dimensions as the sievert (Sv), and an absorbed dose of one gray corresponds to an equivalent dose of one sievert. To avoid any risk of confusion between the absorbed dose (by matter) and the equivalent dose (by biological tissues), one must use the corresponding special units, namely the gray instead of the joule per kilogram for absorbed dose and the sievert instead of the joule per kilogram for the dose equivalent. The unit name gray is spelled the same in both the singular and the plural. This SI unit is named after Louis Harold Gray. As with every SI unit whose name is derived from the proper name of a person, the first letter of its symbol is uppercase (Gy). When an SI unit is spelled out in English, it should always begin with a lowercase letter (gray), except where any word would be capitalized, such as at the beginning of a sentence or in capitalized material such as a title. Note that "degree Celsius" conforms to this rule because the "d" is lowercase. Based on The International System of Units, section 5.2. ## Origin The gray was defined in 1975 in honour of Louis Harold Gray (1905–1965), who in 1940 had used a similar concept: "that amount of neutron radiation which produces an increment of energy in unit volume of tissue equal to the increment of energy produced in unit volume of water by one röntgen of radiation." ## Explanation The gray measures the deposited energy of radiation. The biological effects vary by the type and energy of the radiation and the organism and tissues involved; the sievert attempts to account for these variations. A whole-body exposure to 5 or more gray of high-energy radiation at one time usually leads to death within 14 days (see Radiation poisoning for details). This dosage represents 375 joules for a 75 kg adult (equivalent to the chemical energy in 20 mg of sugar). Since a gray is such a large dose, medical use of radiation is typically measured in milligray (mGy). The average radiation dose from an abdominal X-ray is 1.4 mGy, that from an abdominal CT scan is 8.0 mGy, that from a pelvic CT scan is 25 mGy, and that from a selective spiral CT scan of the abdomen and the pelvis is 30 mGy.[1] ## Conversions One gray is equivalent to 100 rad, so 3 kGy are equal to 300 krad. The röntgen is defined as the radiation exposure equal to the quantity of ionizing radiation that will produce one esu of electricity in one cubic centimetre of dry air at 0 °C and a standard atmosphere, and is conventionally taken to be worth 0.258 mC/kg (using a conventional air density of about 1.293 kg/m³). Using an air ionisation energy of about 33.7 J/C, we have 1 Gy ≈ 115 R.
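A quick worked check of that last conversion, using the figures above:

$$1\,\mathrm{R} = 2.58\times10^{-4}\,\tfrac{\mathrm{C}}{\mathrm{kg}}\times 33.7\,\tfrac{\mathrm{J}}{\mathrm{C}}\approx 8.7\times10^{-3}\,\mathrm{Gy},\qquad \frac{1\,\mathrm{Gy}}{8.7\times10^{-3}\,\mathrm{Gy/R}}\approx 115\,\mathrm{R}.$$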
Submultiples (left) and multiples (right) of the gray:

| Value | Symbol | Name | Value | Symbol | Name |
| --- | --- | --- | --- | --- | --- |
| 10⁻¹ Gy | dGy | decigray | 10¹ Gy | daGy | decagray |
| 10⁻² Gy | **cGy** | **centigray** | 10² Gy | hGy | hectogray |
| 10⁻³ Gy | **mGy** | **milligray** | 10³ Gy | **kGy** | **kilogray** |
| 10⁻⁶ Gy | **µGy** | **microgray** | 10⁶ Gy | **MGy** | **megagray** |
| 10⁻⁹ Gy | **nGy** | **nanogray** | 10⁹ Gy | GGy | gigagray |
| 10⁻¹² Gy | pGy | picogray | 10¹² Gy | TGy | teragray |
| 10⁻¹⁵ Gy | fGy | femtogray | 10¹⁵ Gy | PGy | petagray |
| 10⁻¹⁸ Gy | aGy | attogray | 10¹⁸ Gy | EGy | exagray |
| 10⁻²¹ Gy | zGy | zeptogray | 10²¹ Gy | ZGy | zettagray |
| 10⁻²⁴ Gy | yGy | yoctogray | 10²⁴ Gy | YGy | yottagray |

Common multiples are shown in bold face.
# Error trying to build a histogram using R with the RStudio application

Using ggplot2 I am attempting to create a histogram. I have a column that is full of the continents. I need to add up all the continents, which I attempted to do with the aggregate function.

```r
data <- aggregate(country$continent, country["continent"], sum)
hist(data)
```

```
Error in Summary.factor(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, :
  'sum' not meaningful for factors
```

• I do not understand what you want to achieve. You currently group by continent and sum continent names for each country. You cannot do "USA" + "Russia" = "Love" – keiv.fly Dec 5 '18 at 21:04
• OK, I have a data frame called country which has a column called continent. I wish to group by and count the continent column. So, for example, USA = 5, Russia = 2, etc. After that is done, I am going to make a histogram. – Chris Kehl Dec 5 '18 at 21:20

## 1 Answer

The following code:

```r
data.frame(table(country$continent))
```

will create a data frame where one column is the continent and the other is the number of countries in the continent.

• Working on the histogram: hist(data$Freq) – Chris Kehl Dec 5 '18 at 23:22
• Not looking as expected. – Chris Kehl Dec 5 '18 at 23:24
• barplot(table(country$continent)) – keiv.fly Dec 5 '18 at 23:29
• Closing in on this problem, thanks, it did look 100% better. – Chris Kehl Dec 5 '18 at 23:40
• A histogram is not a barplot. data['percen'] creates another column in the same data frame. Calling hist(data) means you want to create a frequency histogram from the data frame, and a data frame is not one column but many. So what you want is probably barplot(data$Freq/sum(data$Freq)) or barplot(data$percen) – keiv.fly Dec 6 '18 at 14:39
# Testing for a Linear Correlation

Testing for a Linear Correlation. In Exercises 13–28, construct a scatterplot, and find the value of the linear correlation coefficient r. Also find the P-value or the critical values of r from Table A-6. Use a significance level of $\alpha = 0.05$. Determine whether there is sufficient evidence to support a claim of a linear correlation between the two variables. (Save your work because the same data sets will be used in Section 10-2 exercises.)

Lemons and Car Crashes. Listed below are annual data for various years. The data are weights (metric tons) of lemons imported from Mexico and U.S. car crash fatality rates per 100,000 population [based on data from "The Trouble with QSAR (or How I Learned to Stop Worrying and Embrace Fallacy)," by Stephen Johnson, Journal of Chemical Information and Modeling, Vol. 48, No. 1]. Is there sufficient evidence to conclude that there is a linear correlation between weights of lemon imports from Mexico and U.S. car fatality rates? Do the results suggest that imported lemons cause car fatalities?

$$\begin{matrix} \text{Lemon Imports} & 230 & 265 & 358 & 480 & 530\\ \text{Crash Fatality Rate} & 15.9 & 15.7 & 15.4 & 15.3 & 14.9\\ \end{matrix}$$

Step 1

Note: we are using MINITAB software to perform the calculations. The data are the weights of lemon imports from Mexico and the corresponding U.S. car fatality rates. The level of significance is $\alpha = 0.05$.

Procedure to obtain the scatterplot using the MINITAB software: Choose Graph > Scatterplot. Choose Simple and then click OK. Under Y variables, enter the column CRASH FATALITY RATE. Under X variables, enter the column LEMON IMPORTS. Click OK.

Step 2

The hypotheses are given below:

Null hypothesis: $H_0: \rho = 0$. That is, there is no linear correlation between the weights of lemon imports from Mexico and U.S. car fatality rates.

Alternative hypothesis: $H_1: \rho \neq 0$. That is, there is a linear correlation between the weights of lemon imports from Mexico and U.S. car fatality rates.

Correlation coefficient r: Step-by-step procedure to obtain the correlation coefficient using the MINITAB software: Select Stat > Basic Statistics > Correlation. In Variables, select LEMON IMPORTS and CRASH FATALITY RATE from the box on the left. Click OK.

Output using the MINITAB software is given below:

Correlations: LEMON IMPORTS, CRASH FATALITY RATE
Pearson correlation of LEMON IMPORTS and CRASH FATALITY RATE $= -0.959$
P-value $= 0.010$

Step 3

Thus, the Pearson correlation between weights of lemon imports from Mexico and U.S. car fatality rates is $-0.959$ and the P-value is 0.010.
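As a consistency check on MINITAB's P-value (a separate hand calculation, not part of the printed solution): with $n = 5$ pairs of data, the test statistic is

$$t=\frac{r\sqrt{n-2}}{\sqrt{1-r^{2}}}=\frac{-0.959\sqrt{3}}{\sqrt{1-(-0.959)^{2}}}\approx -5.86,$$

and for $n-2=3$ degrees of freedom this gives a two-tailed P-value of about 0.010, matching the software output.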
Critical value: From the table "Critical Values of the Pearson Correlation Coefficient r," the critical values for $n = 5$ pairs of data (that is, $n - 2 = 3$ degrees of freedom) at the $\alpha = 0.05$ level of significance are $\pm 0.878$.

In the scatterplot, the horizontal axis represents the weights of lemon imports from Mexico and the vertical axis represents the U.S. car fatality rates. From the plot, it is observed that there is a linear association between the two variables because the data points show a distinct pattern.

The P-value is 0.010 and the level of significance is 0.05. Since the P-value is less than the level of significance, the null hypothesis is rejected. That is, there is a linear correlation between the weights of lemon imports from Mexico and U.S. car fatality rates.

The critical values are $\pm 0.878$, and the correlation coefficient $-0.959$ lies beyond the lower critical value. Thus, there is sufficient evidence to support the claim that there is a linear correlation between the weights of lemon imports from Mexico and U.S. car fatality rates. However, this does not suggest that imported lemons cause car fatalities: a correlation alone does not establish a cause-and-effect relationship.
# Arrangement of guests in dinner party

12 guests at a dinner party are to be seated along a circular table. Supposing that the master and mistress of the house have fixed seats opposite one another, and that there are two specified guests who must always be placed next to one another, the number of ways in which the company can be placed is:

A) $20\cdot10!$ B) $22\cdot10!$ C) $44\cdot10!$ D) NONE

How I tried: as the master and mistress are to be seated opposite each other, there are 2 possible ways (interchanging their positions) of arranging them; the two guests to be seated next to each other also have two possible ways (interchanging their positions). Now let these two guests be treated as one guest, and then arrange the $12-2-2+1=9$ guests in the remaining positions in $9!$ ways. Therefore the total number of arrangements $=2\cdot2\cdot9!=36\cdot9!$. But the answer is A) $20\cdot10!$. Where am I wrong, and how do I solve the problem?

Seat the master. It does not matter where, since two seating arrangements that differ by a rotation are considered to be equivalent. The mistress must sit opposite the master. Since there are $12$ guests, there are six seats between the master and mistress on each side. If we proceed clockwise around the table from the master, the block of two seats for the two guests who must sit together must begin either in one of the first five seats before we reach the mistress or in one of the first five seats after we pass the mistress. Choose one of those $10$ places for the block of two seats to begin. We have two ways to arrange the specified guests within the block of two seats. The remaining ten guests can be arranged in $10!$ ways in the remaining ten seats as we proceed clockwise around the table from the master. Hence, there are $2 \cdot 10 \cdot 10! = 20\cdot10!$ admissible seating arrangements, which is option A.
#### Approximation Algorithms for Stochastic k-TSP

##### Alina Ene, Viswanath Nagarajan, Rishi Saket

We consider the stochastic $k$-TSP problem, in which the rewards at vertices are random and the objective is to minimize the expected length of a tour that collects total reward $k$. We present an adaptive $O(\log k)$-approximation algorithm and a non-adaptive $O(\log^2k)$-approximation algorithm. We also show that the adaptivity gap of this problem is between $e$ and $O(\log^2k)$.
## anonymous 5 years ago Use the cross-product method to find out whether the pair of fractions is equivalent. 2/9, 4/16

1. anonymous
2 × 16 = 32 and 9 × 4 = 36. Since 32 ≠ 36, the fractions are not equivalent.

2. anonymous
how did you get that?

3. anonymous
To compare $\frac{2}{9}$ and $\frac{4}{16}$, you multiply across the diagonals: the 2 matches with the 16 and the 9 matches with the 4. The fractions are equivalent only if the two products are equal.

4. anonymous
okay thanks so much, i like it a lot better when people can help explain things to me. what about this one? divide using long division: 24,324 divided by 8

5. anonymous
24,324 ÷ 8 = 3,040 remainder 4, i.e. 3,040.5 (a worked sketch follows below)

6. anonymous
it's hard to explain long division without talking through it, honestly =/ there's a nice wikipedia though. http://en.wikipedia.org/wiki/Long_division
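Since the thread never actually walks through the long division, here is a small sketch (ours, not from the thread) that mirrors the paper digit-by-digit method in Python and checks the result against `divmod`:

```python
# Digit-by-digit long division of 24324 by 8, mirroring the paper method.
def long_division(dividend: int, divisor: int):
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)     # bring down the next digit
        quotient_digits.append(str(remainder // divisor))
        remainder %= divisor
    return int(''.join(quotient_digits)), remainder

q, r = long_division(24324, 8)
print(q, r)                        # 3040 4  ->  24324 = 8 * 3040 + 4
assert (q, r) == divmod(24324, 8)  # cross-check with the built-in
```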
International Journal of Mathematics (IF 0.604), Pub Date: 2020-07-25, DOI: 10.1142/s0129167x20500688

Hausdorff and packing dimensions of the Mandelbrot measure

Najmeddine Attia

We develop, in the context of the boundary of a supercritical Galton-Watson tree, a uniform version of a large deviation estimate on homogeneous trees, in order to estimate almost surely and simultaneously the Hausdorff and packing dimensions of the Mandelbrot measure over a suitable set $\mathcal{J}$. As an application, we compute, almost surely and simultaneously, the Hausdorff and packing dimensions of a fractal set related to the covering number on the Galton-Watson tree.
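For reference (standard definitions, not taken from the paper, and conventions vary slightly between references), the Hausdorff and packing dimensions of a measure $\mu$ are governed by its local dimensions: writing $B(x,r)$ for the ball of radius $r$ centered at $x$,

$$\dim_H(\mu) = \operatorname*{ess\,inf}_{x \sim \mu}\, \liminf_{r \to 0^{+}} \frac{\log \mu\bigl(B(x,r)\bigr)}{\log r}, \qquad \dim_P(\mu) = \operatorname*{ess\,inf}_{x \sim \mu}\, \limsup_{r \to 0^{+}} \frac{\log \mu\bigl(B(x,r)\bigr)}{\log r}.$$

Estimating both dimensions "almost surely and simultaneously" thus amounts to controlling the liminf and limsup of the same ratio uniformly over the relevant set of boundary points.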
Convergence Implications via Dual Flow Method

#### T. Amaba, D. Taguchi, G. Yuki

2019, v.25, Issue 3

ABSTRACT

Given a one-dimensional stochastic differential equation, one can associate to this equation a stochastic flow on $[0,+\infty)$ which has an absorbing barrier at zero, and one can then define its dual stochastic flow. In [AW], Akahori and Watanabe showed that its one-point motion solves a corresponding stochastic differential equation of Skorokhod type. In this paper, we consider a discrete-time stochastic flow which approximates the original stochastic flow. We show that, under some assumptions, the one-point motions of its dual flow also approximate the corresponding reflecting diffusion. We prove and use relations between a stochastic flow and its dual in order to obtain weak and strong approximation results for stochastic differential equations of Skorokhod type.

Keywords: dual stochastic flow, Siegmund's duality, absorbing diffusion, reflecting diffusion, Euler-Maruyama approximation
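As background for the keyword list (this is the standard definition, not a statement from the paper, and sign and boundary conventions differ between references): two Markov processes $X$ and $Y$ on $[0,+\infty)$ are Siegmund duals if, for all $x, y \ge 0$ and $t \ge 0$,

$$\mathbb{P}\bigl(X_t \ge y \mid X_0 = x\bigr) = \mathbb{P}\bigl(Y_t \le x \mid Y_0 = y\bigr).$$

Under this pairing, absorption at $0$ for one process corresponds to reflection at $0$ for its dual, which is why approximating an absorbing flow yields approximation results for the reflecting diffusion.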
## Amplitude

$c = \lambda \nu$

Mishta Stanislaus 1H

### Amplitude

I understand that wavelength and frequency have an inverse relationship due to the given equation, but does amplitude relate to these in any way?

Naveed Zaman 1C

### Re: Amplitude

I believe there might be a formula relating amplitude to intensity, but I think it applies when we consider electromagnetic radiation as a wave, not as photons. It's therefore probably not something we have to know for class purposes (I think it also comes up mainly for blackbody radiation, which is more of a physics concept than a chemistry concept). The amplitude doesn't necessarily have to change with an increase or decrease in wavelength or frequency, so I don't think it's discussed much. Hope this helps!

Salman Azfar 1K

### Re: Amplitude

It's essentially what the reply before me said. Amplitude has its purposes, but I've looked through the entire reading and I haven't seen any mention of it in equations unless I missed something. At the very least, it doesn't impact our current calculations with wavelength and frequency with respect to the speed of light, so it's something that may come in later but is not very significant as of now.

Jason Muljadi 2C

### Re: Amplitude

Amplitude is related to the energy of the wave. If the wave has a low energy, then it will have a small amplitude. If the energy of a wave is high, then it will have a large amplitude.

Courtney Cheney 3E

### Re: Amplitude

If you're told one wave of light has higher energy than another, do you assume that the amplitude is higher, the frequency is higher, both, or can it not be determined?

Gurvardaan Bal 1L

### Re: Amplitude

Courtney Cheney 3E wrote:
> If you're told one wave of light has higher energy than another, do you assume that the amplitude is higher, the frequency is higher, both, or can it not be determined?

If you are told this, you can assume that the amplitude is larger, but you cannot assume that the frequency is also higher. Frequency and amplitude are not related to each other, at least generally. In certain cases you might find a relationship, but based on what I found, there doesn't seem to be a general correlation between amplitude and frequency.

Troy Tavangar 1I

### Re: Amplitude

Amplitude has no relationship with frequency or wavelength.
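To pin down the two energy relations the thread is circling (standard physics, added for reference rather than quoted from any post): a photon's energy depends only on frequency, while a classical wave's intensity grows with the square of its amplitude, independent of frequency:

$$E_{\text{photon}} = h\nu = \frac{hc}{\lambda}, \qquad I_{\text{classical}} \propto A^{2}.$$

This is why the posters talk past each other: "the energy of light" means different things in the photon picture and in the wave picture.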
## docutils-users — General discussions, questions, help requests. You can subscribe to this list here.

[Docutils-users] Released 0.14
From: engelbert gruber - 2017-08-03 10:51:20

RELEASE-NOTES: nothing changed from rc2, except some release documentation and clarification.

* docutils/docs/ref/docutils.dtd:
  - Enable validation of Docutils XML documents against the DTD.
* docutils/parsers/rst/:
  - Added functionality: escaped whitespace in URI contexts.
  - Consistent handling of all whitespace characters in inline markup recognition. (May break documents that relied on some whitespace characters (NBSP, ...) *not* to be recognized as whitespace.)
* docutils/utils/smartquotes.py:
  - Update quote definitions for et, fi, fr, ro, sv, tr, uk.
  - Add quote definitions for hr, hsb, hu, lv, sh, sl, sr.
  - Differentiate apostrophe from closing single quote (if possible).
  - Add command line interface for stand-alone use (requires 2.7).
* docutils/writers/_html_base:
  - Provide default title in metadata.
  - The MathJax CDN shut down on April 30, 2017. For security reasons, we don't use a third-party public installation as default but warn if math-output is set to MathJax without specifying a URL. See math-output_ for details.
* docutils/writers/html4css1:
  - Respect automatic table column sizing.
* docutils/writers/latex2e/__init__.py:
  - Handle class arguments for block-level elements by wrapping them in a "DUclass" environment. This replaces the special handling for "epigraph" and "topic" elements.
* docutils/writers/odf_odt:
  - Language option sets the ODF document's default language.
  - Image width, scale, ... set image size in generated ODF.
* tools/:
  - New front-end rst2html4.py.

cheers

Re: [Docutils-users] How stable is reStructuredText format?
From: David Goodger - 2017-08-01 13:47:51

On Tue, Aug 1, 2017 at 2:31 AM, crocket wrote:
> Will there be breaking changes in reStructuredText in the foreseeable
> future?

No. Ongoing backward compatibility is of paramount importance in Docutils and the reStructuredText format definition. Any changes made are additions (new constructs which would have failed in the past) or corrections (fixing bugs). Documents from years ago still process properly, using the latest Docutils code. These days changes/additions are infrequent, incremental, and discussed thoroughly on the Docutils-develop mailing list first.

David Goodger

Re: [Docutils-users] How stable is reStructuredText format?
From: Ben Finney - 2017-08-01 07:51:28

crocket writes:
> Will there be breaking changes in reStructuredText in the foreseeable
> future?

The reStructuredText *format* was fixed many years ago, and I'm not aware of any proposed newer versions. In fact, I am not aware of any releases of "reStructuredText" in the past ten years. So I'm not sure what changes you're asking about.

--
Ben Finney

Re: [Docutils-users] How stable is reStructuredText format?
From: crocket - 2017-08-01 07:32:06

Will there be breaking changes in reStructuredText in the foreseeable future?

On Mon, Jul 31, 2017 at 10:27 PM, Roberto Alsina wrote:
> On Mon, Jul 31, 2017 at 5:31 AM crocket wrote:
>> If I wrote a document in reStructuredText, how long can I expect it to
>> last?
>
> It depends.
>
> Will there be a stable, maintained toolchain to process your text in 100
> years? Probably not.
>
> Will there be a stable, maintained tool to view the HTML or PDF output of
> your text in 100 years? Probably yes, because there are enough things that
> need to live a long time in those formats.
>
> Will there be a stable, maintained tool to open text files in 100 years?
> Most likely.
>
> If you change those "100" to lower numbers, each one becomes more likely.

Re: [Docutils-users] Pull dictionary of docinfo fields from a document?
From: Tony Narlock - 2017-07-31 21:35:25

On July 31, 2017 at 3:03:49 AM, Guenter Milde via Docutils-users (docutils-users@...) wrote:

On 2017-07-30, Tony Narlock wrote:
> My intention is to use DocInfo as a way to scrape meta information off RST
> files to build an index of them.
> There is a DocInfo transformer at docutils.transforms.frontmatter.DocInfo.
> Two things:
> 1. I don't want DocInfo fields to show on HTMLWriter

Either hide them with CSS or strip with the setting:

--strip-elements-with-class=<class>
    Remove all elements with classes="<class>" from the document tree.
    Warning: potentially dangerous; use with caution. (Multiple-use option.)

Thanks, I gave both those a shot in my initial run. After a bit more digging, I was able to do this by overriding visit_docinfo in the HTMLWriter:

    def visit_docinfo(self, node):
        raise nodes.SkipNode

> 2. I want to pull a python dictionary of key->value fields from DocInfo.
...
> My understanding is DocInfo handles the fields in biblio_nodes, but also
> can handle arbitrary field names.
> (http://docutils.sourceforge.net/docs/ref/doctree.html#docinfo). Is that
> true?

Yes. Check with, e.g., rst2pseudoxml; this gives a nice representation of the doctree. Then you can, e.g., create a function to convert the docinfo sub-tree into the dict. The transforms will give some hints on how to walk around the doctree and collect information.

That helped.
In my circumstance, I was able to find some permissively licensed code that did a .traverse(nodes.docinfo) to pluck out a dict of the information. Here is the snippet: https://github.com/adieu/mezzanine-cli/blob/c6feeaf/mezzanine_cli/parser.py#L17 License: https://github.com/adieu/mezzanine-cli/blob/c6feeaf/setup.py#L10 Günter ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org! http://sdm.link/slashdot _______________________________________________ Docutils-users mailing list Docutils-users@... https://lists.sourceforge.net/lists/listinfo/docutils-users Please use "Reply All" to reply to the list. [Docutils-users] Return reStructuredText source content after title, subtitle, and docinfo From: Tony Narlock - 2017-07-31 21:30:59 Attachments: Message as HTML I want to be able to store raw reStructuredText “body content”, including section names after title/subtitle, in a database. At present, I am using publish_doctree to get title, subtitle, and docinfo meta data successfully. So, assuming: ========== Main title ========== Subtitle ======== :Date: 2017-04-04 I want everything from this segment and below Including sections “””””””””””””””””” like this I just want to get this: I want everything from this segment and below Including sections “””””””””””””””””” like this I want the raw reStructuredText to be preserved. Re: [Docutils-users] How stable is reStructuredText format? From: Roberto Alsina - 2017-07-31 13:27:18 Attachments: Message as HTML On Mon, Jul 31, 2017 at 5:31 AM crocket wrote: > If I wrote a document in reStructuredText, how long can I expect it to > last? > > It depends. Will there be a stable, maintained toolchain to process your text in 100 years? Probably not. Will there be a stable, maintained tool to view the HTML or PDF output of your text in 100 years? Probably yes, because there are enough things that need to live a long time in those formats. Will there be a stable, maintained tool to open text files in 100 years? Most likely. If you change those "100" to lower numbers, each one becomes more likely. Re: [Docutils-users] How stable is reStructuredText format? From: Matěj Cepl - 2017-07-31 12:20:24 On 2017-07-31, 08:31 GMT, crocket wrote: > If I wrote a document in reStructuredText, how long can I expect it to last? Longer than the latest version of Microsoft Word and probably longer than with one of various versions of Markdown. Do you still have those Lotus Ami Pro documents? Just next to the Word 2.0 ones? Right. Best, Matěj -- http://matej.ceplovi.cz/blog/, Jabber: mceplceplovi.cz GPG Finger: 3C76 A027 CA45 AD70 98B5 BC1D 7920 5802 880B C9D8 As a rule of thumb, the more qualifiers there are before the name of a country, the more corrupt the rulers. A country called The Socialist People's Democratic Republic of X is probably the last place in the world you'd want to live. -- Paul Graham discussing (not only) Nigerian spam (http://www.paulgraham.com/spam.html) Re: [Docutils-users] How stable is reStructuredText format? From: crocket - 2017-07-31 08:45:57 Attachments: Message as HTML I forgot to send a reply to all recipients. This email is sent to all recipients. I am looking for a lightweight markup language suitable for writing diary and taking notes. Once I write a diary entry, I want to keep it for many decades without modification. 
If I had to manually modify them after decades, then I would be tempted to treat them as .txt files and not bother to compile them. On Mon, Jul 31, 2017 at 5:35 PM, engelbert gruber < engelbert.gruber@...> wrote: > https://www.openhub.net/p/docutils > > i would say ... rock class > > On 31 July 2017 at 10:31, crocket wrote: > >> If I wrote a document in reStructuredText, how long can I expect it to >> last? >> >> ------------------------------------------------------------ >> ------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >> _______________________________________________ >> Docutils-users mailing list >> Docutils-users@... >> https://lists.sourceforge.net/lists/listinfo/docutils-users >> >> Please use "Reply All" to reply to the list. >> >> > Re: [Docutils-users] How stable is reStructuredText format? From: engelbert gruber - 2017-07-31 08:35:28 Attachments: Message as HTML https://www.openhub.net/p/docutils i would say ... rock class On 31 July 2017 at 10:31, crocket wrote: > If I wrote a document in reStructuredText, how long can I expect it to > last? > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > Docutils-users mailing list > Docutils-users@... > https://lists.sourceforge.net/lists/listinfo/docutils-users > > Please use "Reply All" to reply to the list. > > [Docutils-users] How stable is reStructuredText format? From: crocket - 2017-07-31 08:31:34 Attachments: Message as HTML If I wrote a document in reStructuredText, how long can I expect it to last? Re: [Docutils-users] Pull dictionary of docinfo fields from a document? From: Guenter Milde - 2017-07-31 08:03:30 On 2017-07-30, Tony Narlock wrote: > My intention is to use DocInfo as a way to scape meta information off RST > files to build an index of them. > There is a DocInfo transformer at docinfo.transforms.frontmatter.DocInfo. > Two things: > 1. I don’t want DocInfo fields to show on HTMLWriter Either hide them with CSS or strip with the setting: --strip-elements-with-class= Remove all elements with classes="" from the document tree. Warning: potentially dangerous; use with caution. (Multiple-use option.) > 2. I want to pull a python dictionary of key->value fields from DocInfo. ... > My understanding is DocInfo handles that fields in biblio_nodes, but also > can handle arbitrary field names. ( > http://docutils.sourceforge.net/docs/ref/doctree.html#docinfo). Is that > true? Yes. Check with, e.g. rst2pseudoxml, this gives a nice representation of the doctree. Then you can, e.g. create a function to convert the docinfo sub-tree into the dict. The transforms will give some hints on how to wald around the doctree and collect information. Günter Re: [Docutils-users] publish_parts and table of contents? From: Tony Narlock - 2017-07-30 18:10:10 Attachments: Message as HTML On July 26, 2017 at 2:03:02 AM, Guenter Milde via Docutils-users ( docutils-users@...) wrote: On 2017-07-25, Tony Narlock wrote: > What if the user doesn’t want to publish CSS from the tree? You can select the used CSS stylsheet(s) as well as toggle between inclusion and referencing them via Docutils settings. For programmatic use, "settings_override" is your friend. 
See "config.html" and the documentation the publish_* functions for descriptions and, e.g., the functional tests for usage examples. > Even if using > publish_doctree, I prefer the equivalent to publish_parts fragment and > html_body, usually. > Here is what I did: ... > Is the equivalent of this attainable a simpler way? Is something like this > worth considering as a patch? I don't know. You may file an enhancement ticket. Considering that. Also the addition of "contents" as a part for the HTML writer seems like a valid enhancement request. (But mind that we don't have many ressources to really work on it.) The ToC is being used on a production website right now, I plan on making a post about how the site uses docutils. I’m trying to get myself into gear on docutils internals so I can help now and then. Günter ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org! http://sdm.link/slashdot _______________________________________________ Docutils-users mailing list Docutils-users@... https://lists.sourceforge.net/lists/listinfo/docutils-users Please use "Reply All" to reply to the list. [Docutils-users] Pull dictionary of docinfo fields from a document? From: Tony Narlock - 2017-07-30 17:47:39 Attachments: Message as HTML My intention is to use DocInfo as a way to scape meta information off RST files to build an index of them. There is a DocInfo transformer at docinfo.transforms.frontmatter.DocInfo. Two things: 1. I don’t want DocInfo fields to show on HTMLWriter 2. I want to pull a python dictionary of key->value fields from DocInfo. e.g. ============== Document title ============== :date: 2017-05-01 :custom_field: value rest of the file Being able to extract: { "date": datetime.date(2017,5,1), "custom_field": "value" } My understanding is DocInfo handles that fields in biblio_nodes, but also can handle arbitrary field names. ( http://docutils.sourceforge.net/docs/ref/doctree.html#docinfo). Is that true? Re: [Docutils-users] publish_parts and table of contents? From: Guenter Milde - 2017-07-26 07:02:43 On 2017-07-25, Tony Narlock wrote: > What if the user doesn’t want to publish CSS from the tree? You can select the used CSS stylsheet(s) as well as toggle between inclusion and referencing them via Docutils settings. For programmatic use, "settings_override" is your friend. See "config.html" and the documentation the publish_* functions for descriptions and, e.g., the functional tests for usage examples. > Even if using > publish_doctree, I prefer the equivalent to publish_parts fragment and > html_body, usually. > Here is what I did: ... > Is the equivalent of this attainable a simpler way? Is something like this > worth considering as a patch? I don't know. You may file an enhancement ticket. Also the addition of "contents" as a part for the HTML writer seems like a valid enhancement request. (But mind that we don't have many ressources to really work on it.) Günter Re: [Docutils-users] publish_parts and table of contents? From: Tony Narlock - 2017-07-25 18:31:01 Attachments: Message as HTML Thank you. This has been educational and shown me things that I didn’t find obvious. What if the user doesn’t want to publish CSS from the tree? Even if using publish_doctree, I prefer the equivalent to publish_parts fragment and html_body, usually. 
Here is what I did: def publish_parts_from_doctree(document, destination_path=None, writer=None, writer_name='pseudoxml', settings=None, settings_spec=None, settings_overrides=None, config_section=None, enable_exit_status=False): reader = docutils.readers.doctree.Reader(parser_name='null') pub = Publisher(reader, None, writer, source=io.DocTreeInput(document), destination_class=io.StringOutput, settings=settings) if not writer and writer_name: pub.set_writer(writer_name) pub.process_programmatic_settings( settings_spec, settings_overrides, config_section) pub.set_destination(None, destination_path) pub.publish(enable_exit_status=enable_exit_status) return pub.writer.parts Is the equivalent of this attainable a simpler way? Is something like this worth considering as a patch? On July 24, 2017 at 9:08:59 AM, Guenter Milde via Docutils-users ( docutils-users@...) wrote: On 2017-07-22, Tony Narlock wrote: > I can confirm getting the node information. > The issue I have is getting the HTML from toc_list. > The only other problem I have is: ... > AttributeError: 'bullet_list' object has no attribute > 'note_transform_message' You need to pass a complete doctree to publish from doctree but transforms.Contents.build_contents() returns only a partial doctree. The example below should get you started. Günter #! /usr/bin/env python # -*- coding: utf-8 -*- # # Proof of concept for a front end generating just a toc from an rst source. import sys import docutils from docutils.core import Publisher, publish_doctree, publish_from_doctree from docutils.transforms.parts import Contents from docutils import nodes # from docutils.writers.html5_polyglot import Writer # Test source as string sample = """ Sample Title ============ first section ------------- some text second section -------------- more text this is subsection 2.1 ********************** """ # Parse sample to a doctree (for later parsing with build_contents()) sample_tree = publish_doctree(source=sample) # The sample can also be written to supported output formats from the doctree: output = publish_from_doctree(sample_tree, writer_name="pseudoxml") # output = publish_from_doctree(sample_tree, writer_name="latex") # output = publish_from_doctree(sample_tree, writer_name="html5") #print output # Create a new document tree with just the table of contents # ========================================================== # document tree template: toc_tree = nodes.document('', '', source='toc-generator') toc_tree += nodes.title('', 'Table of Contents') # Re-use the Contents transform to generate the toc by travelling over the # doctree of the complete document. # Set up a Contents instance: # The Contents transform requires a "pending" startnode and generation options # startnode pending = nodes.pending(Contents, rawsource='') contents_transform = docutils.transforms.parts.Contents(sample_tree, pending) contents_transform.backlinks = False # run the contents builder and append the result to the template: toc_topic = nodes.topic(classes=['contents']) toc_topic += contents_transform.build_contents(sample_tree) toc_tree += toc_topic # test # print toc_tree output = publish_from_doctree(toc_tree, writer_name="pseudoxml") output = publish_from_doctree(toc_tree, writer_name="html5") print output ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org! 
http://sdm.link/slashdot _______________________________________________ Docutils-users mailing list Docutils-users@... https://lists.sourceforge.net/lists/listinfo/docutils-users Please use "Reply All" to reply to the list. Re: [Docutils-users] publish_parts and table of contents? From: Guenter Milde - 2017-07-24 14:08:36 On 2017-07-22, Tony Narlock wrote: > I can confirm getting the node information. > The issue I have is getting the HTML from toc_list. > The only other problem I have is: ... > AttributeError: 'bullet_list' object has no attribute > 'note_transform_message' You need to pass a complete doctree to publish from doctree but transforms.Contents.build_contents() returns only a partial doctree. The example below should get you started. Günter #! /usr/bin/env python # -*- coding: utf-8 -*- # # Proof of concept for a front end generating just a toc from an rst source. import sys import docutils from docutils.core import Publisher, publish_doctree, publish_from_doctree from docutils.transforms.parts import Contents from docutils import nodes # from docutils.writers.html5_polyglot import Writer # Test source as string sample = """ Sample Title ============ first section ------------- some text second section -------------- more text this is subsection 2.1 ********************** """ # Parse sample to a doctree (for later parsing with build_contents()) sample_tree = publish_doctree(source=sample) # The sample can also be written to supported output formats from the doctree: output = publish_from_doctree(sample_tree, writer_name="pseudoxml") # output = publish_from_doctree(sample_tree, writer_name="latex") # output = publish_from_doctree(sample_tree, writer_name="html5") #print output # Create a new document tree with just the table of contents # ========================================================== # document tree template: toc_tree = nodes.document('', '', source='toc-generator') toc_tree += nodes.title('', 'Table of Contents') # Re-use the Contents transform to generate the toc by travelling over the # doctree of the complete document. # Set up a Contents instance: # The Contents transform requires a "pending" startnode and generation options # startnode pending = nodes.pending(Contents, rawsource='') contents_transform = docutils.transforms.parts.Contents(sample_tree, pending) contents_transform.backlinks = False # run the contents builder and append the result to the template: toc_topic = nodes.topic(classes=['contents']) toc_topic += contents_transform.build_contents(sample_tree) toc_tree += toc_topic # test # print toc_tree output = publish_from_doctree(toc_tree, writer_name="pseudoxml") output = publish_from_doctree(toc_tree, writer_name="html5") print output Re: [Docutils-users] publish_parts and table of contents? From: Tony Narlock - 2017-07-22 23:13:26 Attachments: Message as HTML I can confirm getting the node information. The issue I have is getting the HTML from toc_list. 
The only other problem I have is: from docutils.writers.html5_polyglot import Writer core.publish_from_doctree(toc_list, writer=Writer()) Traceback (most recent call last): File “./try2.py", line 48, in core.publish_from_doctree(toc_list, writer=Writer()) File ".venv/lib/python3.6/site-packages/docutils/core.py", line 521, in publish_from_doctree return pub.publish(enable_exit_status=enable_exit_status) File ".venv/lib/python3.6/site-packages/docutils/core.py", line 218, in publish self.apply_transforms() File ".venv/lib/python3.6/site-packages/docutils/core.py", line 199, in apply_transforms self.document.transformer.apply_transforms() File “.venv/lib/python3.6/site-packages/docutils/transforms/__init__.py", line 162, in apply_transforms self.document.note_transform_message) AttributeError: 'bullet_list' object has no attribute 'note_transform_message' On July 21, 2017 at 10:40:53 PM, David Goodger (goodger@...) wrote: On Fri, Jul 21, 2017 at 7:41 PM, Tony Narlock wrote: > Thanks for your help on this. > > This is way trickier than it looks, with all due respect. Just because you're trying to hack Docutils without a sufficiently deep understanding of the internals. > Clocked in almost > two days on this so far. Hopefully this exercise has improved your understanding! > Just trying to get the table of contents separate from html_body. Seriously > considering adding ..contents:: to the source, building HTML and ripping out > the ToC via LXML. > > Love reStructuredText and docutils (been having quite a few internal > successes lately), but this particular task feels like going against the > grain. Have you read the documentation? There's no one place for what you want, it's spread out. See: * http://docutils.sourceforge.net/docs/ref/transforms.html * http://docutils.sourceforge.net/docs/peps/pep-0258.html#transformer * http://docutils.sourceforge.net/docs/dev/hacking.html Also, see the code. There's lots of inline documentation in docstrings and comments. Ultimately, you need to understand the flow of data in Docutils, how all the components interrelate. No, no, no, don't tug on that. You never know what it might be attached to. — Buckaroo Banzai (during brain surgery) I think the attached code will get you most of the way to where you want to go. DG > On July 21, 2017 at 3:39:58 PM, Guenter Milde via Docutils-users > (docutils-users@...) wrote: > > On 2017-07-21, Tony Narlock wrote: > >> So here is where I am: >> https://gist.github.com/tony/1a03b7668c9e33672f4465dd63c6076b > > No time to look. > >> On July 20, 2017 at 11:54:07 AM, Guenter Milde via Docutils-users ( >> On 2017-07-20, Tony Narlock wrote: >>> On July 19, 2017 at 5:27:15 PM, Guenter Milde via Docutils-users ( > >>> ... > > >> > I suppose rather than messing with "parts", you can use the publish_* >> > functions in a wrapper script: >> >> > Don't use .. contents.. in the source. >> >> > 1. Parse the rst source with publish_doctree() >> >> > Returns a doctree object. >> >> >> > 2. Export doctree to HTML with publish_from_doctree() > > Does this work? > > Yes, this just gives CSS + HTML for way more than I need. Am I supposed to > see anything special in the HTML or are you just checking that > publish_doctree+publish_from_doctree works (it does). > > Way more than html_body (all I need, aside from ToC). And I’m not sure what > I can do with this content? > > > > >> > 3. Run the toc-generating transform on the doctree. >> > Returns a "toc doctree". > >> Where would it be? 
> > In docutils/transforms/parts.py > > > >> Am I applying the transform correctly in the paste? > >>> 4. Export the "toc doctree" with publish_from_doctree(). > >> Assuming I’m running the transform correctly, I see no difference in the >> output. > > So I suppose you don't apply it correctly. > > The idea is to collect generate a TOC by travelling over the doctree in > the same manner as it is done by the "Contents" transform. > > Therefore, it should be possible to use > docutils.transforms.parts.Contents.build_contents() and pass it the > startnode of the doctree returned by "publish_parts". > >>> This is just an idea, not tested and detailled. > > Günter Re: [Docutils-users] publish_parts and table of contents? From: Matěj Cepl - 2017-07-22 21:00:51 On 2017-07-22, 03:40 GMT, David Goodger wrote: > On Fri, Jul 21, 2017 at 7:41 PM, Tony Narlock wrote: >> Thanks for your help on this. >> >> This is way trickier than it looks, with all due respect. > > Just because you're trying to hack Docutils without a sufficiently > deep understanding of the internals. Yeah, but that’s the problem for most people who would like to hack on docutils. I am following this thread with some level of dread, because these are exactly operations I will probably need if I am thinking about writing that rst2epub. And frankly this thread does not increase my faith in my own ability to write such script (if I had the time to do so, that is). I will certainly study your attached example. Best, Matěj -- http://matej.ceplovi.cz/blog/, Jabber: mceplceplovi.cz GPG Finger: 3C76 A027 CA45 AD70 98B5 BC1D 7920 5802 880B C9D8 He uses statistics as a drunken man uses lamp-posts... for support, rather than illumination. -- Andrew Lang Re: [Docutils-users] publish_parts and table of contents? From: David Goodger - 2017-07-22 03:41:00 Attachments: toc.py On Fri, Jul 21, 2017 at 7:41 PM, Tony Narlock wrote: > Thanks for your help on this. > > This is way trickier than it looks, with all due respect. Just because you're trying to hack Docutils without a sufficiently deep understanding of the internals. > Clocked in almost > two days on this so far. Hopefully this exercise has improved your understanding! > Just trying to get the table of contents separate from html_body. Seriously > considering adding ..contents:: to the source, building HTML and ripping out > the ToC via LXML. > > Love reStructuredText and docutils (been having quite a few internal > successes lately), but this particular task feels like going against the > grain. Have you read the documentation? There's no one place for what you want, it's spread out. See: * http://docutils.sourceforge.net/docs/ref/transforms.html * http://docutils.sourceforge.net/docs/peps/pep-0258.html#transformer * http://docutils.sourceforge.net/docs/dev/hacking.html Also, see the code. There's lots of inline documentation in docstrings and comments. Ultimately, you need to understand the flow of data in Docutils, how all the components interrelate. No, no, no, don't tug on that. You never know what it might be attached to. — Buckaroo Banzai (during brain surgery) I think the attached code will get you most of the way to where you want to go. DG > On July 21, 2017 at 3:39:58 PM, Guenter Milde via Docutils-users > (docutils-users@...) wrote: > > On 2017-07-21, Tony Narlock wrote: > >> So here is where I am: >> https://gist.github.com/tony/1a03b7668c9e33672f4465dd63c6076b > > No time to look. 
> >> On July 20, 2017 at 11:54:07 AM, Guenter Milde via Docutils-users ( >> On 2017-07-20, Tony Narlock wrote: >>> On July 19, 2017 at 5:27:15 PM, Guenter Milde via Docutils-users ( > >>> ... > > >> > I suppose rather than messing with "parts", you can use the publish_* >> > functions in a wrapper script: >> >> > Don't use .. contents.. in the source. >> >> > 1. Parse the rst source with publish_doctree() >> >> > Returns a doctree object. >> >> >> > 2. Export doctree to HTML with publish_from_doctree() > > Does this work? > > Yes, this just gives CSS + HTML for way more than I need. Am I supposed to > see anything special in the HTML or are you just checking that > publish_doctree+publish_from_doctree works (it does). > > Way more than html_body (all I need, aside from ToC). And I’m not sure what > I can do with this content? > > > > >> > 3. Run the toc-generating transform on the doctree. >> > Returns a "toc doctree". > >> Where would it be? > > In docutils/transforms/parts.py > > > >> Am I applying the transform correctly in the paste? > >>> 4. Export the "toc doctree" with publish_from_doctree(). > >> Assuming I’m running the transform correctly, I see no difference in the >> output. > > So I suppose you don't apply it correctly. > > The idea is to collect generate a TOC by travelling over the doctree in > the same manner as it is done by the "Contents" transform. > > Therefore, it should be possible to use > docutils.transforms.parts.Contents.build_contents() and pass it the > startnode of the doctree returned by "publish_parts". > >>> This is just an idea, not tested and detailled. > > Günter Re: [Docutils-users] publish_parts and table of contents? From: Tony Narlock - 2017-07-22 00:41:29 Attachments: Message as HTML Thanks for your help on this. This is *way* trickier than it looks, with all due respect. Clocked in almost two days on this so far. Just trying to get the table of contents separate from html_body. Seriously considering adding ..contents:: to the source, building HTML and ripping out the ToC via LXML. Love reStructuredText and docutils (been having quite a few internal successes lately), but this particular task feels like going against the grain. On July 21, 2017 at 3:39:58 PM, Guenter Milde via Docutils-users ( docutils-users@...) wrote: On 2017-07-21, Tony Narlock wrote: > So here is where I am: > https://gist.github.com/tony/1a03b7668c9e33672f4465dd63c6076b No time to look. > On July 20, 2017 at 11:54:07 AM, Guenter Milde via Docutils-users ( > On 2017-07-20, Tony Narlock wrote: >> On July 19, 2017 at 5:27:15 PM, Guenter Milde via Docutils-users ( >> ... > > I suppose rather than messing with "parts", you can use the publish_* > > functions in a wrapper script: > > > Don't use .. contents.. in the source. > > > 1. Parse the rst source with publish_doctree() > > > Returns a doctree object. > > > > 2. Export doctree to HTML with publish_from_doctree() Does this work? Yes, this just gives CSS + HTML for way more than I need. Am I supposed to see anything special in the HTML or are you just checking that publish_doctree+publish_from_doctree works (it does). Way more than html_body (all I need, aside from ToC). And I’m not sure what I can do with this content? > > 3. Run the toc-generating transform on the doctree. > > Returns a "toc doctree". > Where would it be? In docutils/transforms/parts.py > Am I applying the transform correctly in the paste? >> 4. Export the "toc doctree" with publish_from_doctree(). 
> Assuming I’m running the transform correctly, I see no difference in the > output. So I suppose you don't apply it correctly. The idea is to collect generate a TOC by travelling over the doctree in the same manner as it is done by the "Contents" transform. Therefore, it should be possible to use docutils.transforms.parts.Contents.build_contents() and pass it the startnode of the doctree returned by "publish_parts". >> This is just an idea, not tested and detailled. Günter ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org! http://sdm.link/slashdot _______________________________________________ Docutils-users mailing list Docutils-users@... https://lists.sourceforge.net/lists/listinfo/docutils-users Please use "Reply All" to reply to the list. Re: [Docutils-users] publish_parts and table of contents? From: Guenter Milde - 2017-07-21 20:39:39 On 2017-07-21, Tony Narlock wrote: > So here is where I am: > https://gist.github.com/tony/1a03b7668c9e33672f4465dd63c6076b No time to look. > On July 20, 2017 at 11:54:07 AM, Guenter Milde via Docutils-users ( > On 2017-07-20, Tony Narlock wrote: >> On July 19, 2017 at 5:27:15 PM, Guenter Milde via Docutils-users ( >> ... > > I suppose rather than messing with "parts", you can use the publish_* > > functions in a wrapper script: > > > Don't use .. contents.. in the source. > > > 1. Parse the rst source with publish_doctree() > > > Returns a doctree object. > > > > 2. Export doctree to HTML with publish_from_doctree() Does this work? > > 3. Run the toc-generating transform on the doctree. > > Returns a "toc doctree". > Where would it be? In docutils/transforms/parts.py > Am I applying the transform correctly in the paste? >> 4. Export the "toc doctree" with publish_from_doctree(). > Assuming I’m running the transform correctly, I see no difference in the > output. So I suppose you don't apply it correctly. The idea is to collect generate a TOC by travelling over the doctree in the same manner as it is done by the "Contents" transform. Therefore, it should be possible to use docutils.transforms.parts.Contents.build_contents() and pass it the startnode of the doctree returned by "publish_parts". >> This is just an idea, not tested and detailled. Günter Re: [Docutils-users] publish_parts and table of contents? From: Tony Narlock - 2017-07-21 20:26:33 Attachments: Message as HTML Here’s where it’s at now (after looking at footer and PEP code): https://gist.github.com/tony/9c0d5eaa081b5ff611b7ca9e86a83046 Output: Contents So stuff is showing in TOC. But the pending contents information doesn’t seem to be rendering. On July 21, 2017 at 11:07:44 AM, Tony Narlock (tony@...) wrote: So here is where I am: https://gist.github.com/tony/1a03b7668c9e33672f4465dd63c6076b On July 20, 2017 at 11:54:07 AM, Guenter Milde via Docutils-users ( docutils-users@...) wrote: On 2017-07-20, Tony Narlock wrote: > On July 19, 2017 at 5:27:15 PM, Guenter Milde via Docutils-users ( > docutils-users@...) wrote: > On 2017-07-19, Tony Narlock wrote: > ... >> The transform generates the TOC by travelling the document tree after >> parsing is complete. ... >> https://gist.github.com/tony/c4fc5661fcd4b7de71c65dd8a52c9ea4 > Which contains the description: >> 1. Currently, table of contents is only outputted through directive. > ... >> 3. I want it to be available in "toc" *without* using the directive in the >> source. 
> For this, you would need to run the "Contents" transform also if the > document does not contain the "contents" directive. >> 2. I do not to position table of contents in the RST. (therefore, I >> specifically do not want it in html_body) I suppose rather than messing with "parts", you can use the publish_* functions in a wrapper script: Don't use .. contents.. in the source. 1. Parse the rst source with publish_doctree() Returns a doctree object. 2. Export doctree to HTML with publish_from_doctree() 3. Run the toc-generating transform on the doctree. Returns a "toc doctree". Where would it be? Am I applying the transform correctly in the paste? 4. Export the "toc doctree" with publish_from_doctree(). Assuming I’m running the transform correctly, I see no difference in the output. This is just an idea, not tested and detailled. > It would be indispensable to get a code example or demonstration. This is left as an exercise to the reader. This has been educational and is helping me understand internals better. I prefer vanilla docutils whenever possible. Any more ideas? Günter ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org! http://sdm.link/slashdot _______________________________________________ Docutils-users mailing list Docutils-users@... https://lists.sourceforge.net/lists/listinfo/docutils-users Please use "Reply All" to reply to the list. Re: [Docutils-users] publish_parts and table of contents? From: Tony Narlock - 2017-07-21 16:07:53 Attachments: Message as HTML So here is where I am: https://gist.github.com/tony/1a03b7668c9e33672f4465dd63c6076b On July 20, 2017 at 11:54:07 AM, Guenter Milde via Docutils-users ( docutils-users@...) wrote: On 2017-07-20, Tony Narlock wrote: > On July 19, 2017 at 5:27:15 PM, Guenter Milde via Docutils-users ( > docutils-users@...) wrote: > On 2017-07-19, Tony Narlock wrote: > ... >> The transform generates the TOC by travelling the document tree after >> parsing is complete. ... >> https://gist.github.com/tony/c4fc5661fcd4b7de71c65dd8a52c9ea4 > Which contains the description: >> 1. Currently, table of contents is only outputted through directive. > ... >> 3. I want it to be available in "toc" *without* using the directive in the >> source. > For this, you would need to run the "Contents" transform also if the > document does not contain the "contents" directive. >> 2. I do not to position table of contents in the RST. (therefore, I >> specifically do not want it in html_body) I suppose rather than messing with "parts", you can use the publish_* functions in a wrapper script: Don't use .. contents.. in the source. 1. Parse the rst source with publish_doctree() Returns a doctree object. 2. Export doctree to HTML with publish_from_doctree() 3. Run the toc-generating transform on the doctree. Returns a "toc doctree". Where would it be? Am I applying the transform correctly in the paste? 4. Export the "toc doctree" with publish_from_doctree(). Assuming I’m running the transform correctly, I see no difference in the output. This is just an idea, not tested and detailled. > It would be indispensable to get a code example or demonstration. This is left as an exercise to the reader. This has been educational and is helping me understand internals better. I prefer vanilla docutils whenever possible. Any more ideas? 
Günter

Re: [Docutils-users] publish_parts and table of contents?
From: Guenter Milde - 2017-07-20 16:53:48

On 2017-07-20, Tony Narlock wrote:
> On July 19, 2017 at 5:27:15 PM, Guenter Milde via Docutils-users
> (docutils-users@...) wrote:
> On 2017-07-19, Tony Narlock wrote:
> ...
>> The transform generates the TOC by travelling the document tree after
>> parsing is complete.
...
>> https://gist.github.com/tony/c4fc5661fcd4b7de71c65dd8a52c9ea4

Which contains the description:

>> 1. Currently, table of contents is only outputted through directive.
...
>> 3. I want it to be available in "toc" *without* using the directive in the
>> source.

For this, you would need to run the "Contents" transform also if the document does not contain the "contents" directive.

>> 2. I do not want to position the table of contents in the RST. (Therefore, I
>> specifically do not want it in html_body.)

I suppose rather than messing with "parts", you can use the publish_* functions in a wrapper script:

Don't use .. contents:: in the source.

1. Parse the rst source with publish_doctree(). Returns a doctree object.
2. Export the doctree to HTML with publish_from_doctree().
3. Run the toc-generating transform on the doctree. Returns a "toc doctree".
4. Export the "toc doctree" with publish_from_doctree().

This is just an idea, not tested and detailed.

> It would be indispensable to get a code example or demonstration.

This is left as an exercise to the reader.

Günter
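The docinfo question in the archive above ("Pull dictionary of docinfo fields from a document?") was answered only with a link, not inline code. Here is a condensed sketch of that approach (our code, following the linked recipe; the calls are standard docutils APIs, though `traverse` is named `findall` in newer releases):

```python
# Parse an RST source, then walk the docinfo node to build a plain dict
# of its fields. SOURCE is our own example input.
from docutils import nodes
from docutils.core import publish_doctree

SOURCE = """\
==============
Document title
==============

:date: 2017-05-01
:custom_field: value

rest of the file
"""

def docinfo_as_dict(rst_text):
    doctree = publish_doctree(rst_text)
    info = {}
    for docinfo in doctree.traverse(nodes.docinfo):
        for element in docinfo.children:
            if isinstance(element, nodes.field):   # arbitrary field names
                name, body = element.children
                info[name.astext()] = body.astext()
            else:                                   # bibliographic fields (date, author, ...)
                info[element.tagname] = element.astext()
    return info

print(docinfo_as_dict(SOURCE))  # {'date': '2017-05-01', 'custom_field': 'value'}
```

Note the DocInfo transform only promotes a field list that immediately follows the document title, which is why the thread insists on checking the doctree with rst2pseudoxml first.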
# fitclinear

Fit linear classification model to high-dimensional data

## Syntax

`Mdl = fitclinear(X,Y)`
`Mdl = fitclinear(X,Y,Name,Value)`
`[Mdl,FitInfo] = fitclinear(___)`
`[Mdl,FitInfo,HyperparameterOptimizationResults] = fitclinear(___)`

## Description

`fitclinear` trains linear classification models for two-class (binary) learning with high-dimensional, full or sparse predictor data. Available linear classification models include regularized support vector machines (SVM) and logistic regression models. `fitclinear` minimizes the objective function using techniques that reduce computing time (e.g., stochastic gradient descent).

For reduced computation time on a high-dimensional data set that includes many predictor variables, train a linear classification model by using `fitclinear`. For low- through medium-dimensional predictor data sets, see Alternatives for Lower-Dimensional Data. To train a linear classification model for multiclass learning by combining SVM or logistic regression binary classifiers using error-correcting output codes, see `fitcecoc`.

`Mdl = fitclinear(X,Y)` returns a trained linear classification model object that contains the results of fitting a binary support vector machine to the predictors `X` and class labels `Y`.

`Mdl = fitclinear(X,Y,Name,Value)` returns a trained linear classification model with additional options specified by one or more `Name,Value` pair arguments. For example, you can specify that the columns of the predictor matrix correspond to observations, implement logistic regression, or cross-validate. It is good practice to cross-validate using the `KFold` `Name,Value` pair argument; the cross-validation results indicate how well the model generalizes.

`[Mdl,FitInfo] = fitclinear(___)` also returns optimization details using any of the previous syntaxes. You cannot request `FitInfo` for cross-validated models.

`[Mdl,FitInfo,HyperparameterOptimizationResults] = fitclinear(___)` also returns hyperparameter optimization details when you pass an `OptimizeHyperparameters` name-value pair.

## Examples

Train a binary, linear classification model using support vector machines, dual SGD, and ridge regularization.

`load nlpdata`

`X` is a sparse matrix of predictor data, and `Y` is a categorical vector of class labels. There are more than two classes in the data. Identify the labels that correspond to the Statistics and Machine Learning Toolbox™ documentation web pages.

`Ystats = Y == 'stats';`

Train a binary, linear classification model that can identify whether the word counts in a documentation web page are from the Statistics and Machine Learning Toolbox™ documentation. Train the model using the entire data set. Determine how well the optimization algorithm fit the model to the data by extracting a fit summary.
```
rng(1); % For reproducibility
[Mdl,FitInfo] = fitclinear(X,Ystats)
```

```
Mdl =
  ClassificationLinear
      ResponseName: 'Y'
        ClassNames: [0 1]
    ScoreTransform: 'none'
              Beta: [34023x1 double]
              Bias: -1.0059
            Lambda: 3.1674e-05
           Learner: 'svm'

  Properties, Methods
```

```
FitInfo = struct with fields:
                    Lambda: 3.1674e-05
                 Objective: 5.3783e-04
                 PassLimit: 10
                 NumPasses: 10
                BatchLimit: []
             NumIterations: 238561
              GradientNorm: NaN
         GradientTolerance: 0
      RelativeChangeInBeta: 0.0562
             BetaTolerance: 1.0000e-04
             DeltaGradient: 1.4582
    DeltaGradientTolerance: 1
           TerminationCode: 0
         TerminationStatus: {'Iteration limit exceeded.'}
                     Alpha: [31572x1 double]
                   History: []
                   FitTime: 0.1603
                    Solver: {'dual'}
```

`Mdl` is a `ClassificationLinear` model. You can pass `Mdl` and the training or new data to `loss` to inspect the in-sample classification error. Or, you can pass `Mdl` and new predictor data to `predict` to predict class labels for new observations.

`FitInfo` is a structure array containing, among other things, the termination status (`TerminationStatus`) and how long the solver took to fit the model to the data (`FitTime`). It is good practice to use `FitInfo` to determine whether optimization-termination measurements are satisfactory. Because the training time is small, you can try to retrain the model but increase the number of passes through the data. This can improve measures like `DeltaGradient`.

To determine a good lasso-penalty strength for a linear classification model that uses a logistic regression learner, implement 5-fold cross-validation.

`load nlpdata`

`X` is a sparse matrix of predictor data, and `Y` is a categorical vector of class labels. There are more than two classes in the data. The models should identify whether the word counts in a web page are from the Statistics and Machine Learning Toolbox™ documentation. So, identify the labels that correspond to the Statistics and Machine Learning Toolbox™ documentation web pages.

`Ystats = Y == 'stats';`

Create a set of 11 logarithmically spaced regularization strengths from $10^{-6}$ through $10^{-0.5}$.

`Lambda = logspace(-6,-0.5,11);`

Cross-validate the models. To increase execution speed, transpose the predictor data and specify that the observations are in columns. Estimate the coefficients using SpaRSA. Lower the tolerance on the gradient of the objective function to `1e-8`.

```
X = X';
rng(10); % For reproducibility
CVMdl = fitclinear(X,Ystats,'ObservationsIn','columns','KFold',5,...
    'Learner','logistic','Solver','sparsa','Regularization','lasso',...
    'Lambda',Lambda,'GradientTolerance',1e-8)
```

```
CVMdl =
  classreg.learning.partition.ClassificationPartitionedLinear
    CrossValidatedModel: 'Linear'
           ResponseName: 'Y'
        NumObservations: 31572
                  KFold: 5
              Partition: [1x1 cvpartition]
             ClassNames: [0 1]
         ScoreTransform: 'none'

  Properties, Methods
```

`numCLModels = numel(CVMdl.Trained)`

```
numCLModels = 5
```

`CVMdl` is a `ClassificationPartitionedLinear` model. Because `fitclinear` implements 5-fold cross-validation, `CVMdl` contains 5 `ClassificationLinear` models that the software trains on each fold.

Display the first trained linear classification model.
`Mdl1 = CVMdl.Trained{1}`

```
Mdl1 =
  ClassificationLinear
      ResponseName: 'Y'
        ClassNames: [0 1]
    ScoreTransform: 'logit'
              Beta: [34023x11 double]
              Bias: [-13.2904 -13.2904 -13.2904 -13.2904 -9.9357 -7.0782 -5.4335 -4.5473 -3.4223 -3.1649 -2.9795]
            Lambda: [1.0000e-06 3.5481e-06 1.2589e-05 4.4668e-05 1.5849e-04 5.6234e-04 0.0020 0.0071 0.0251 0.0891 0.3162]
           Learner: 'logistic'

  Properties, Methods
```

`Mdl1` is a `ClassificationLinear` model object. `fitclinear` constructed `Mdl1` by training on the first four folds. Because `Lambda` is a sequence of regularization strengths, you can think of `Mdl1` as 11 models, one for each regularization strength in `Lambda`.

Estimate the cross-validated classification error.

`ce = kfoldLoss(CVMdl);`

Because there are 11 regularization strengths, `ce` is a 1-by-11 vector of classification error rates.

Higher values of `Lambda` lead to predictor variable sparsity, which is a good quality of a classifier. For each regularization strength, train a linear classification model using the entire data set and the same options as when you cross-validated the models. Determine the number of nonzero coefficients per model.

```
Mdl = fitclinear(X,Ystats,'ObservationsIn','columns',...
    'Learner','logistic','Solver','sparsa','Regularization','lasso',...
    'Lambda',Lambda,'GradientTolerance',1e-8);
numNZCoeff = sum(Mdl.Beta~=0);
```

In the same figure, plot the cross-validated classification error rates and the frequency of nonzero coefficients for each regularization strength. Plot all variables on the log scale.

```
figure;
[h,hL1,hL2] = plotyy(log10(Lambda),log10(ce),...
    log10(Lambda),log10(numNZCoeff));
hL1.Marker = 'o';
hL2.Marker = 'o';
ylabel(h(1),'log_{10} classification error')
ylabel(h(2),'log_{10} nonzero-coefficient frequency')
xlabel('log_{10} Lambda')
title('Test-Sample Statistics')
hold off
```

Choose the index of the regularization strength that balances predictor variable sparsity and low classification error. In this case, a value between $10^{-4}$ and $10^{-1}$ should suffice.

`idxFinal = 7;`

Select the model from `Mdl` with the chosen regularization strength.

`MdlFinal = selectModels(Mdl,idxFinal);`

`MdlFinal` is a `ClassificationLinear` model containing one regularization strength. To estimate labels for new observations, pass `MdlFinal` and the new data to `predict`.

This example shows how to minimize the cross-validation error in a linear classifier using `fitclinear`. The example uses the NLP data set.

`load nlpdata`

`X` is a sparse matrix of predictor data, and `Y` is a categorical vector of class labels. There are more than two classes in the data. The models should identify whether the word counts in a web page are from the Statistics and Machine Learning Toolbox™ documentation. Identify the relevant labels.

```
X = X';
Ystats = Y == 'stats';
```

Optimize the classification using the `'auto'` parameters. For reproducibility, set the random seed and use the `'expected-improvement-plus'` acquisition function.

```
rng default
Mdl = fitclinear(X,Ystats,'ObservationsIn','columns','Solver','sparsa',...
    'OptimizeHyperparameters','auto','HyperparameterOptimizationOptions',...
    struct('AcquisitionFunctionName','expected-improvement-plus'))
```

```
|=====================================================================================================|
| Iter | Eval   | Objective  | Objective | BestSoFar  | BestSoFar  | Lambda     | Learner  |
|      | result |            | runtime   | (observed) | (estim.)   |            |          |
|=====================================================================================================|
| | | |=====================================================================================================| | 1 | Best | 0.041619 | 4.0257 | 0.041619 | 0.041619 | 0.077903 | logistic | ``` ```| 2 | Best | 0.00072849 | 3.8245 | 0.00072849 | 0.0028767 | 2.1405e-09 | logistic | ``` ```| 3 | Accept | 0.049221 | 4.6777 | 0.00072849 | 0.00075737 | 0.72101 | svm | ``` ```| 4 | Accept | 0.00079184 | 4.299 | 0.00072849 | 0.00074989 | 3.4734e-07 | svm | ``` ```| 5 | Accept | 0.00082351 | 3.9102 | 0.00072849 | 0.00072924 | 1.1738e-08 | logistic | ``` ```| 6 | Accept | 0.00085519 | 4.2132 | 0.00072849 | 0.00072746 | 2.4529e-09 | svm | ``` ```| 7 | Accept | 0.00079184 | 4.1282 | 0.00072849 | 0.00072518 | 3.1854e-08 | svm | ``` ```| 8 | Accept | 0.00088686 | 4.4564 | 0.00072849 | 0.00072236 | 3.1717e-10 | svm | ``` ```| 9 | Accept | 0.00076017 | 3.7918 | 0.00072849 | 0.00068304 | 3.1837e-10 | logistic | ``` ```| 10 | Accept | 0.00079184 | 4.4733 | 0.00072849 | 0.00072853 | 1.1258e-07 | svm | ``` ```| 11 | Accept | 0.00076017 | 3.9953 | 0.00072849 | 0.00072144 | 2.1214e-09 | logistic | ``` ```| 12 | Accept | 0.00079184 | 6.7875 | 0.00072849 | 0.00075984 | 2.2819e-07 | logistic | ``` ```| 13 | Accept | 0.00072849 | 4.2131 | 0.00072849 | 0.00075648 | 6.6161e-08 | logistic | ``` ```| 14 | Best | 0.00069682 | 4.3785 | 0.00069682 | 0.00069781 | 7.4324e-08 | logistic | ``` ```| 15 | Best | 0.00066515 | 4.3717 | 0.00066515 | 0.00068861 | 7.6994e-08 | logistic | ``` ```| 16 | Accept | 0.00076017 | 3.9068 | 0.00066515 | 0.00068881 | 7.0687e-10 | logistic | ``` ```| 17 | Accept | 0.00066515 | 4.5966 | 0.00066515 | 0.0006838 | 7.7159e-08 | logistic | ``` ```| 18 | Accept | 0.0012353 | 4.6723 | 0.00066515 | 0.00068521 | 0.00083275 | svm | ``` ```| 19 | Accept | 0.00076017 | 4.2389 | 0.00066515 | 0.00068508 | 5.0781e-05 | svm | ``` ```| 20 | Accept | 0.00085519 | 3.2483 | 0.00066515 | 0.00068527 | 0.00022104 | svm | ``` ```|=====================================================================================================| | Iter | Eval | Objective | Objective | BestSoFar | BestSoFar | Lambda | Learner | | | result | | runtime | (observed) | (estim.) | | | |=====================================================================================================| | 21 | Accept | 0.00082351 | 6.7163 | 0.00066515 | 0.00068569 | 4.5396e-06 | svm | ``` ```| 22 | Accept | 0.0010769 | 15.946 | 0.00066515 | 0.00070107 | 5.1931e-06 | logistic | ``` ```| 23 | Accept | 0.00095021 | 17.989 | 0.00066515 | 0.00069594 | 1.3051e-06 | logistic | ``` ```| 24 | Accept | 0.00085519 | 5.5831 | 0.00066515 | 0.00069625 | 1.6481e-05 | svm | ``` ```| 25 | Accept | 0.00085519 | 4.5366 | 0.00066515 | 0.00069643 | 1.157e-06 | svm | ``` ```| 26 | Accept | 0.00079184 | 3.6968 | 0.00066515 | 0.00069667 | 1.0016e-08 | svm | ``` ```| 27 | Accept | 0.00072849 | 3.9655 | 0.00066515 | 0.00069848 | 4.2234e-08 | logistic | ``` ```| 28 | Accept | 0.049221 | 0.49611 | 0.00066515 | 0.00069842 | 3.1608 | logistic | ``` ```| 29 | Accept | 0.00085519 | 4.3911 | 0.00066515 | 0.00069855 | 8.5626e-10 | svm | ``` ```| 30 | Accept | 0.00076017 | 3.8952 | 0.00066515 | 0.00069837 | 3.1946e-10 | logistic | ``` ```__________________________________________________________ Optimization completed. MaxObjectiveEvaluations of 30 reached. Total function evaluations: 30 Total elapsed time: 173.5287 seconds. 
Total objective function evaluation time: 153.4246 Best observed feasible point: Lambda Learner __________ ________ 7.6994e-08 logistic Observed objective function value = 0.00066515 Estimated objective function value = 0.00069859 Function evaluation time = 4.3717 Best estimated feasible point (according to models): Lambda Learner __________ ________ 7.4324e-08 logistic Estimated objective function value = 0.00069837 Estimated function evaluation time = 4.3951 ``` ```Mdl = ClassificationLinear ResponseName: 'Y' ClassNames: [0 1] ScoreTransform: 'logit' Beta: [34023×1 double] Bias: -10.1723 Lambda: 7.4324e-08 Learner: 'logistic' Properties, Methods ``` ## Input Arguments collapse all Predictor data, specified as an n-by-p full or sparse matrix. The length of `Y` and the number of observations in `X` must be equal. ### Note If you orient your predictor matrix so that observations correspond to columns and specify `'ObservationsIn','columns'`, then you might experience a significant reduction in optimization-execution time. Data Types: `single` | `double` Class labels to which the classification model is trained, specified as a categorical, character, or string array, logical or numeric vector, or cell array of character vectors. • `fitclinear` only supports binary classification. Either `Y` must contain exactly two distinct classes, or you must specify two classes for training using the `'ClassNames'` name-value pair argument. For multiclass learning, see `fitcecoc`. • If `Y` is a character array, then each element must correspond to one row of the array. • The length of `Y` and the number of observations in `X` must be equal. • It is good practice to specify the class order using the `ClassNames` name-value pair argument. Data Types: `char` | `string` | `cell` | `categorical` | `logical` | `single` | `double` ### Note `fitclinear` removes missing observations, that is, observations with any of these characteristics: • `NaN`, empty character vector (`''`), empty string (`""`), `<missing>`, and `<undefined>` elements in the response (`Y` or `ValidationData``{2}`) • At least one `NaN` value in a predictor observation (row in `X` or `ValidationData{1}`) • `NaN` value or `0` weight (`Weights` or `ValidationData{3}`) For memory-usage economy, it is best practice to remove observations containing missing values from your training data manually before training. ### Name-Value Pair Arguments Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`. Example: `'ObservationsIn','columns','Learner','logistic','CrossVal','on'` specifies that the columns of the predictor matrix corresponds to observations, to implement logistic regression, to implement 10-fold cross-validation. ### Note You cannot use any cross-validation name-value pair argument along with the `'OptimizeHyperparameters'` name-value pair argument. You can modify the cross-validation for `'OptimizeHyperparameters'` only by using the `'HyperparameterOptimizationOptions'` name-value pair argument. #### Linear Classification Options collapse all Regularization term strength, specified as the comma-separated pair consisting of `'Lambda'` and `'auto'`, a nonnegative scalar, or a vector of nonnegative values. • For `'auto'`, `Lambda` = 1/n. 
  • If you specify a cross-validation name-value pair argument (for example, `CrossVal`), then n is the number of in-fold observations.
  • Otherwise, n is the training sample size.

• For a vector of nonnegative values, the software sequentially optimizes the objective function for each distinct value in `Lambda` in ascending order.
  • If `Solver` is `'sgd'` or `'asgd'` and `Regularization` is `'lasso'`, then the software does not use the previous coefficient estimates as a warm start for the next optimization iteration. Otherwise, the software uses warm starts.
  • If `Regularization` is `'lasso'`, then any coefficient estimate of 0 retains its value when the software optimizes using subsequent values in `Lambda`.

  The software returns coefficient estimates for all optimization iterations.

Example: `'Lambda',10.^(-(10:-2:2))`
Data Types: `char` | `string` | `double` | `single`

Linear classification model type, specified as the comma-separated pair consisting of `'Learner'` and `'svm'` or `'logistic'`.

In this table, $f(x) = x\beta + b$, where

• β is a vector of p coefficients.
• x is an observation from p predictor variables.
• b is the scalar bias.

| Value | Algorithm | Response Range | Loss Function |
|---|---|---|---|
| `'svm'` | Support vector machine | y ∊ {–1,1}; 1 for the positive class and –1 otherwise | Hinge: $\ell[y,f(x)] = \max[0, 1 - yf(x)]$ |
| `'logistic'` | Logistic regression | Same as `'svm'` | Deviance (logistic): $\ell[y,f(x)] = \log\{1 + \exp[-yf(x)]\}$ |

Example: `'Learner','logistic'`

Predictor data observation dimension, specified as the comma-separated pair consisting of `'ObservationsIn'` and `'columns'` or `'rows'`.

### Note

If you orient your predictor matrix so that observations correspond to columns and specify `'ObservationsIn','columns'`, then you might experience a significant reduction in optimization-execution time.

Complexity penalty type, specified as the comma-separated pair consisting of `'Regularization'` and `'lasso'` or `'ridge'`.

The software composes the objective function for minimization from the sum of the average loss function (see `Learner`) and the regularization term in this table.

| Value | Description |
|---|---|
| `'lasso'` | Lasso (L1) penalty: $\lambda \sum_{j=1}^{p} \lvert\beta_j\rvert$ |
| `'ridge'` | Ridge (L2) penalty: $\frac{\lambda}{2} \sum_{j=1}^{p} \beta_j^2$ |

To specify the regularization term strength, which is λ in the expressions, use `Lambda`. The software excludes the bias term (β0) from the regularization penalty.

If `Solver` is `'sparsa'`, then the default value of `Regularization` is `'lasso'`. Otherwise, the default is `'ridge'`.

### Tip

• For predictor variable selection, specify `'lasso'`. For more on variable selection, see Introduction to Feature Selection.
• For optimization accuracy, specify `'ridge'`.

Example: `'Regularization','lasso'`

Objective function minimization technique, specified as the comma-separated pair consisting of `'Solver'` and a character vector or string scalar, a string array, or a cell array of character vectors with values from this table.

| Value | Description | Restrictions |
|---|---|---|
| `'sgd'` | Stochastic gradient descent (SGD) [4][2] | |
| `'asgd'` | Average stochastic gradient descent (ASGD) [7] | |
| `'dual'` | Dual SGD for SVM [1][6] | `Regularization` must be `'ridge'` and `Learner` must be `'svm'`. |
| `'bfgs'` | Broyden-Fletcher-Goldfarb-Shanno quasi-Newton algorithm (BFGS) [3] | Inefficient if `X` is very high-dimensional. |
| `'lbfgs'` | Limited-memory BFGS (LBFGS) [3] | `Regularization` must be `'ridge'`. |
| `'sparsa'` | Sparse Reconstruction by Separable Approximation (SpaRSA) [5] | `Regularization` must be `'lasso'`. |

If you specify:

• A ridge penalty (see `Regularization`) and `X` contains 100 or fewer predictor variables, then the default solver is `'bfgs'`.
• An SVM model (see `Learner`), a ridge penalty, and `X` contains more than 100 predictor variables, then the default solver is `'dual'`.
• A lasso penalty and `X` contains 100 or fewer predictor variables, then the default solver is `'sparsa'`.

Otherwise, the default solver is `'sgd'`.

If you specify a string array or cell array of solver names, then the software uses all solvers in the specified order for each `Lambda`.

For more details on which solver to choose, see Tips.

Example: `'Solver',{'sgd','lbfgs'}`

Initial linear coefficient estimates (β), specified as the comma-separated pair consisting of `'Beta'` and a p-dimensional numeric vector or a p-by-L numeric matrix. p is the number of predictor variables in `X` and L is the number of regularization-strength values (for more details, see `Lambda`).

• If you specify a p-dimensional vector, then the software optimizes the objective function L times using this process.
  1. The software optimizes using `Beta` as the initial value and the minimum value of `Lambda` as the regularization strength.
  2. The software optimizes again using the resulting estimate from the previous optimization as a warm start, and the next smallest value in `Lambda` as the regularization strength.
  3. The software implements step 2 until it exhausts all values in `Lambda`.
• If you specify a p-by-L matrix, then the software optimizes the objective function L times. At iteration `j`, the software uses `Beta(:,j)` as the initial value and, after it sorts `Lambda` in ascending order, uses `Lambda(j)` as the regularization strength.

If you set `'Solver','dual'`, then the software ignores `Beta`.

Data Types: `single` | `double`
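As an illustration of this warm-start sequence, here is a minimal sketch; `X` and `Y` are assumed training data with observations in columns, and the all-zeros initial vector is only an example:

```
% Sketch: lasso path over several regularization strengths, warm-started
% from an explicit initial coefficient vector (zeros here).
p = size(X,1);                       % number of predictor variables
Lambda = 10.^(-(10:-2:2));
Mdl = fitclinear(X,Y,'ObservationsIn','columns','Solver','sparsa',...
    'Regularization','lasso','Lambda',Lambda,'Beta',zeros(p,1));
```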
Initial intercept estimate (b), specified as the comma-separated pair consisting of `'Bias'` and a numeric scalar or an L-dimensional numeric vector. L is the number of regularization-strength values (for more details, see `Lambda`).

• If you specify a scalar, then the software optimizes the objective function L times using this process.
  1. The software optimizes using `Bias` as the initial value and the minimum value of `Lambda` as the regularization strength.
  2. The software uses the resulting estimate as a warm start to the next optimization iteration, and uses the next smallest value in `Lambda` as the regularization strength.
  3. The software implements step 2 until it exhausts all values in `Lambda`.
• If you specify an L-dimensional vector, then the software optimizes the objective function L times. At iteration `j`, the software uses `Bias(j)` as the initial value and, after it sorts `Lambda` in ascending order, uses `Lambda(j)` as the regularization strength.
• By default:
  • If `Learner` is `'logistic'`, then let gj be 1 if `Y(j)` is the positive class, and –1 otherwise. `Bias` is the weighted average of the g for training or, for cross-validation, in-fold observations.
  • If `Learner` is `'svm'`, then `Bias` is 0.

Data Types: `single` | `double`

Linear model intercept inclusion flag, specified as the comma-separated pair consisting of `'FitBias'` and `true` or `false`.

| Value | Description |
|---|---|
| `true` | The software includes the bias term b in the linear model, and then estimates it. |
| `false` | The software sets b = 0 during estimation. |

Example: `'FitBias',false`
Data Types: `logical`

Flag to fit the linear model intercept after optimization, specified as the comma-separated pair consisting of `'PostFitBias'` and `true` or `false`.

| Value | Description |
|---|---|
| `false` | The software estimates the bias term b and the coefficients β during optimization. |
| `true` | To estimate b, the software: 1. Estimates β and b using the model. 2. Estimates classification scores. 3. Refits b by placing the threshold on the classification scores that attains maximum accuracy. |

If you specify `true`, then `FitBias` must be `true`.

Example: `'PostFitBias',true`
Data Types: `logical`

Verbosity level, specified as the comma-separated pair consisting of `'Verbose'` and a nonnegative integer. `Verbose` controls the amount of diagnostic information `fitclinear` displays at the command line.

| Value | Description |
|---|---|
| `0` | `fitclinear` does not display diagnostic information. |
| `1` | `fitclinear` periodically displays and stores the value of the objective function, gradient magnitude, and other diagnostic information. `FitInfo.History` contains the diagnostic information. |
| Any other positive integer | `fitclinear` displays and stores diagnostic information at each optimization iteration. `FitInfo.History` contains the diagnostic information. |

Example: `'Verbose',1`
Data Types: `double` | `single`

#### SGD and ASGD Solver Options

Mini-batch size, specified as the comma-separated pair consisting of `'BatchSize'` and a positive integer. At each iteration, the software estimates the subgradient using `BatchSize` observations from the training data.

• If `X` is a numeric matrix, then the default value is `10`.
• If `X` is a sparse matrix, then the default value is `max([10,ceil(sqrt(ff))])`, where `ff = numel(X)/nnz(X)` (the fullness factor of `X`).

Example: `'BatchSize',100`
Data Types: `single` | `double`

Learning rate, specified as the comma-separated pair consisting of `'LearnRate'` and a positive scalar. `LearnRate` specifies how many steps to take per iteration. At each iteration, the gradient specifies the direction and magnitude of each step.

• If `Regularization` is `'ridge'`, then `LearnRate` specifies the initial learning rate γ0. The software determines the learning rate for iteration t, γt, using

$\gamma_t = \frac{\gamma_0}{(1 + \lambda\gamma_0 t)^c}.$

• If `Regularization` is `'lasso'`, then, for all iterations, `LearnRate` is constant.

By default, `LearnRate` is `1/sqrt(1+max((sum(X.^2,obsDim))))`, where `obsDim` is `1` if the observations compose the columns of the predictor data `X`, and `2` otherwise.

Example: `'LearnRate',0.01`
Data Types: `single` | `double`

Flag to decrease the learning rate when the software detects divergence (that is, over-stepping the minimum), specified as the comma-separated pair consisting of `'OptimizeLearnRate'` and `true` or `false`.

If `OptimizeLearnRate` is `true`, then:

1. For the first few optimization iterations, the software starts optimization using `LearnRate` as the learning rate.
2. If the value of the objective function increases, then the software restarts and uses half of the current value of the learning rate.
3. The software iterates step 2 until the objective function decreases.

Example: `'OptimizeLearnRate',true`
Data Types: `logical`

Number of mini-batches between lasso truncation runs, specified as the comma-separated pair consisting of `'TruncationPeriod'` and a positive integer.

After a truncation run, the software applies a soft threshold to the linear coefficients. That is, after processing k = `TruncationPeriod` mini-batches, the software truncates the estimated coefficient j using

$\hat{\beta}_j^{\ast} = \begin{cases} \hat{\beta}_j - u_t & \text{if } \hat{\beta}_j > u_t, \\ 0 & \text{if } \lvert\hat{\beta}_j\rvert \le u_t, \\ \hat{\beta}_j + u_t & \text{if } \hat{\beta}_j < -u_t. \end{cases}$

• For SGD, $\hat{\beta}_j$ is the estimate of coefficient j after processing k mini-batches. $u_t = k\gamma_t\lambda$, where γt is the learning rate at iteration t and λ is the value of `Lambda`.
• For ASGD, $\hat{\beta}_j$ is the averaged estimate of coefficient j after processing k mini-batches, and $u_t = k\lambda$.

If `Regularization` is `'ridge'`, then the software ignores `TruncationPeriod`.

Example: `'TruncationPeriod',100`
Data Types: `single` | `double`
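A minimal sketch tying these SGD options together (`X` and `Y` are assumed training data; the particular values are illustrative only):

```
% Sketch: SGD with a lasso penalty; the coefficients are soft-thresholded
% after every 50 mini-batches of 50 observations each.
Mdl = fitclinear(X,Y,'Solver','sgd','Regularization','lasso',...
    'Lambda',1e-4,'BatchSize',50,'TruncationPeriod',50);
```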
#### Other Classification Options

Names of classes to use for training, specified as the comma-separated pair consisting of `'ClassNames'` and a categorical, character, or string array, a logical or numeric vector, or a cell array of character vectors. `ClassNames` must have the same data type as `Y`.

If `ClassNames` is a character array, then each element must correspond to one row of the array.

Use `'ClassNames'` to:

• Order the classes during training.
• Specify the order of any input or output argument dimension that corresponds to the class order. For example, use `'ClassNames'` to specify the order of the dimensions of `Cost` or the column order of classification scores returned by `predict`.
• Select a subset of classes for training. For example, suppose that the set of all distinct class names in `Y` is `{'a','b','c'}`. To train the model using observations from classes `'a'` and `'c'` only, specify `'ClassNames',{'a','c'}`.

The default value for `ClassNames` is the set of all distinct class names in `Y`.

Example: `'ClassNames',{'b','g'}`
Data Types: `categorical` | `char` | `string` | `logical` | `single` | `double` | `cell`

Misclassification cost, specified as the comma-separated pair consisting of `'Cost'` and a square matrix or structure.

• If you specify the square matrix `cost` (`'Cost',cost`), then `cost(i,j)` is the cost of classifying a point into class `j` if its true class is `i`. That is, the rows correspond to the true class, and the columns correspond to the predicted class. To specify the class order for the corresponding rows and columns of `cost`, use the `ClassNames` name-value pair argument.
• If you specify the structure `S` (`'Cost',S`), then it must have two fields:
  • `S.ClassNames`, which contains the class names as a variable of the same data type as `Y`
  • `S.ClassificationCosts`, which contains the cost matrix with rows and columns ordered as in `S.ClassNames`

The default value for `Cost` is `ones(K) - eye(K)`, where `K` is the number of distinct classes.

`fitclinear` uses `Cost` to adjust the prior class probabilities specified in `Prior`. Then, `fitclinear` uses the adjusted prior probabilities for training and resets the cost matrix to its default.

Example: `'Cost',[0 2; 1 0]`
Data Types: `single` | `double` | `struct`

Prior probabilities for each class, specified as the comma-separated pair consisting of `'Prior'` and `'empirical'`, `'uniform'`, a numeric vector, or a structure array.

This table summarizes the available options for setting prior probabilities.

| Value | Description |
|---|---|
| `'empirical'` | The class prior probabilities are the class relative frequencies in `Y`. |
| `'uniform'` | All class prior probabilities are equal to 1/`K`, where `K` is the number of classes. |
| numeric vector | Each element is a class prior probability. Order the elements according to their order in `Y`. If you specify the order using the `'ClassNames'` name-value pair argument, then order the elements accordingly. |
| structure array | A structure `S` with two fields: `S.ClassNames` contains the class names as a variable of the same type as `Y`, and `S.ClassProbs` contains a vector of corresponding prior probabilities. |

`fitclinear` normalizes the prior probabilities in `Prior` to sum to 1.

Example: `'Prior',struct('ClassNames',{{'setosa','versicolor'}},'ClassProbs',1:2)`
Data Types: `char` | `string` | `double` | `single` | `struct`

Score transformation, specified as the comma-separated pair consisting of `'ScoreTransform'` and a character vector, string scalar, or function handle.

This table summarizes the available character vectors and string scalars.

| Value | Description |
|---|---|
| `'doublelogit'` | 1/(1 + e^–2x) |
| `'invlogit'` | log(x / (1 – x)) |
| `'ismax'` | Sets the score for the class with the largest score to 1, and sets the scores for all other classes to 0 |
| `'logit'` | 1/(1 + e^–x) |
| `'none'` or `'identity'` | x (no transformation) |
| `'sign'` | –1 for x < 0; 0 for x = 0; 1 for x > 0 |
| `'symmetric'` | 2x – 1 |
| `'symmetricismax'` | Sets the score for the class with the largest score to 1, and sets the scores for all other classes to –1 |
| `'symmetriclogit'` | 2/(1 + e^–x) – 1 |

For a MATLAB® function or a function you define, use its function handle for the score transform. The function handle must accept a matrix (the original scores) and return a matrix of the same size (the transformed scores).

Example: `'ScoreTransform','logit'`
Data Types: `char` | `string` | `function_handle`
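For the function-handle form just described, a minimal sketch (`X` and `Y` are assumed training data; the handle shown reproduces the built-in `'logit'` transform purely as an illustration):

```
% Sketch: custom score transform supplied as a function handle.
% The handle must map a score matrix to a matrix of the same size.
Mdl = fitclinear(X,Y,'ScoreTransform',@(s)1./(1 + exp(-s)));
```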
Observation weights, specified as the comma-separated pair consisting of `'Weights'` and a numeric vector of positive values. `fitclinear` weighs the observations in `X` with the corresponding value in `Weights`. The size of `Weights` must equal the number of observations in `X`.

`fitclinear` normalizes `Weights` to sum up to the value of the prior probability in the respective class.

By default, `Weights` is `ones(n,1)`, where `n` is the number of observations in `X`.

Data Types: `double` | `single`

#### Cross-Validation Options

Cross-validation flag, specified as the comma-separated pair consisting of `'Crossval'` and `'on'` or `'off'`.

If you specify `'on'`, then the software implements 10-fold cross-validation.

To override this cross-validation setting, use one of these name-value pair arguments: `CVPartition`, `Holdout`, or `KFold`. To create a cross-validated model, you can use one cross-validation name-value pair argument at a time only.

Example: `'Crossval','on'`

Cross-validation partition, specified as the comma-separated pair consisting of `'CVPartition'` and a `cvpartition` partition object as created by `cvpartition`. The partition object specifies the type of cross-validation, and also the indexing for training and validation sets.

To create a cross-validated model, you can use one of these four options only: `'CrossVal'`, `'CVPartition'`, `'Holdout'`, or `'KFold'`.

Fraction of data used for holdout validation, specified as the comma-separated pair consisting of `'Holdout'` and a scalar value in the range (0,1). If you specify `'Holdout',p`, then the software:

1. Randomly reserves `p*100`% of the data as validation data, and trains the model using the rest of the data.
2. Stores the compact, trained model in the `Trained` property of the cross-validated model.

To create a cross-validated model, you can use one of these four options only: `'CrossVal'`, `'CVPartition'`, `'Holdout'`, or `'KFold'`.

Example: `'Holdout',0.1`
Data Types: `double` | `single`

Number of folds to use in a cross-validated classifier, specified as the comma-separated pair consisting of `'KFold'` and a positive integer value greater than 1. If you specify, for example, `'KFold',k`, then the software:

1. Randomly partitions the data into k sets.
2. For each set, reserves the set as validation data, and trains the model using the other k – 1 sets.
3. Stores the `k` compact, trained models in the cells of a `k`-by-1 cell vector in the `Trained` property of the cross-validated model.

To create a cross-validated model, you can use one of these four options only: `'CrossVal'`, `'CVPartition'`, `'Holdout'`, or `'KFold'`.

Example: `'KFold',8`
Data Types: `single` | `double`
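A minimal holdout sketch (`X` and `Y` are assumed training data); the compact model trained on the nonreserved data lands in `Trained{1}`:

```
% Sketch: reserve 20% of the observations for holdout validation.
CVMdl = fitclinear(X,Y,'Holdout',0.2);
CompactMdl = CVMdl.Trained{1};   % model fit to the other 80% of the data
```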
#### SGD and ASGD Convergence Controls

Maximal number of batches to process, specified as the comma-separated pair consisting of `'BatchLimit'` and a positive integer. When the software processes `BatchLimit` batches, it terminates optimization.

• By default:
  • The software passes through the data `PassLimit` times.
  • If you specify multiple solvers, and use (A)SGD to get an initial approximation for the next solver, then the default value is `ceil(1e6/BatchSize)`. `BatchSize` is the value of the `'BatchSize'` name-value pair argument.
• If you specify `'BatchLimit'` and `'PassLimit'`, then the software chooses the argument that results in processing the fewest observations.
• If you specify `'BatchLimit'` but not `'PassLimit'`, then the software processes enough batches to complete up to one entire pass through the data.

Example: `'BatchLimit',100`
Data Types: `single` | `double`

Relative tolerance on the linear coefficients and the bias term (intercept), specified as the comma-separated pair consisting of `'BetaTolerance'` and a nonnegative scalar.

Let $B_t = [\beta_t' \; b_t]$, that is, the vector of the coefficients and the bias term at optimization iteration t. If $\left\| \frac{B_t - B_{t-1}}{B_t} \right\|_2 < \text{BetaTolerance}$, then optimization terminates.

If the software converges for the last solver specified in `Solver`, then optimization terminates. Otherwise, the software uses the next solver specified in `Solver`.

Example: `'BetaTolerance',1e-6`
Data Types: `single` | `double`

Number of batches to process before the next convergence check, specified as the comma-separated pair consisting of `'NumCheckConvergence'` and a positive integer.

To specify the batch size, see `BatchSize`.

The software checks for convergence about 10 times per pass through the entire data set by default.

Example: `'NumCheckConvergence',100`
Data Types: `single` | `double`

Maximal number of passes through the data, specified as the comma-separated pair consisting of `'PassLimit'` and a positive integer.

`fitclinear` processes all observations when it completes one pass through the data.

When `fitclinear` passes through the data `PassLimit` times, it terminates optimization.

If you specify `'BatchLimit'` and `'PassLimit'`, then `fitclinear` chooses the argument that results in processing the fewest observations.

Example: `'PassLimit',5`
Data Types: `single` | `double`

Data for optimization convergence detection, specified as the comma-separated pair consisting of `'ValidationData'` and a cell array.

During optimization, the software periodically estimates the loss of `ValidationData`. If the validation-data loss increases, then the software terminates optimization. For more details, see Algorithms. To optimize hyperparameters using cross-validation, see cross-validation options such as `CrossVal`.

• `ValidationData{1}` must contain an m-by-p or p-by-m full or sparse matrix of predictor data that has the same orientation as `X`. The predictor variables in the training data `X` and `ValidationData{1}` must correspond. The number of observations in both sets can vary.
• `ValidationData{2}` and `Y` must be the same data type. The set of all distinct labels of `ValidationData{2}` must be a subset of all distinct labels of `Y`.
• Optionally, `ValidationData{3}` can contain an m-dimensional numeric vector of observation weights. The software normalizes the weights with the validation data so that they sum to 1.

If you specify `ValidationData`, then, to display validation loss at the command line, specify a value larger than 0 for `Verbose`.

If the software converges for the last solver specified in `Solver`, then optimization terminates. Otherwise, the software uses the next solver specified in `Solver`.

By default, the software does not detect convergence by monitoring validation-data loss.
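A minimal sketch of the cell-array format, using a hypothetical 90/10 split of assumed data `X` (observations in columns) and labels `Y`:

```
% Sketch: hold out 10% of the data purely for validation-based early
% stopping; the training call sees only the remaining 90%.
n = size(X,2);
idx = randperm(n);
nv = round(0.1*n);
v = idx(1:nv);  t = idx(nv+1:end);
Mdl = fitclinear(X(:,t),Y(t),'ObservationsIn','columns','Solver','sgd',...
    'ValidationData',{X(:,v),Y(v)},'Verbose',1);
```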
#### Dual SGD Convergence Controls

Relative tolerance on the linear coefficients and the bias term (intercept), specified as the comma-separated pair consisting of `'BetaTolerance'` and a nonnegative scalar.

Let $B_t = [\beta_t' \; b_t]$, that is, the vector of the coefficients and the bias term at optimization iteration t. If $\left\| \frac{B_t - B_{t-1}}{B_t} \right\|_2 < \text{BetaTolerance}$, then optimization terminates.

If you also specify `DeltaGradientTolerance`, then optimization terminates when the software satisfies either stopping criterion.

If the software converges for the last solver specified in `Solver`, then optimization terminates. Otherwise, the software uses the next solver specified in `Solver`.

Example: `'BetaTolerance',1e-6`
Data Types: `single` | `double`

Gradient-difference tolerance between upper and lower pool Karush-Kuhn-Tucker (KKT) complementarity conditions violators, specified as the comma-separated pair consisting of `'DeltaGradientTolerance'` and a nonnegative scalar.

• If the magnitude of the KKT violators is less than `DeltaGradientTolerance`, then the software terminates optimization.
• If the software converges for the last solver specified in `Solver`, then optimization terminates. Otherwise, the software uses the next solver specified in `Solver`.

Example: `'DeltaGradientTolerance',1e-2`
Data Types: `double` | `single`

Number of passes through the entire data set to process before the next convergence check, specified as the comma-separated pair consisting of `'NumCheckConvergence'` and a positive integer.

Example: `'NumCheckConvergence',100`
Data Types: `single` | `double`

Maximal number of passes through the data, specified as the comma-separated pair consisting of `'PassLimit'` and a positive integer.

When the software completes one pass through the data, it has processed all observations.

When the software passes through the data `PassLimit` times, it terminates optimization.

Example: `'PassLimit',5`
Data Types: `single` | `double`

Data for optimization convergence detection, specified as the comma-separated pair consisting of `'ValidationData'` and a cell array.

During optimization, the software periodically estimates the loss of `ValidationData`. If the validation-data loss increases, then the software terminates optimization. For more details, see Algorithms. To optimize hyperparameters using cross-validation, see cross-validation options such as `CrossVal`.

• `ValidationData{1}` must contain an m-by-p or p-by-m full or sparse matrix of predictor data that has the same orientation as `X`. The predictor variables in the training data `X` and `ValidationData{1}` must correspond. The number of observations in both sets can vary.
• `ValidationData{2}` and `Y` must be the same data type. The set of all distinct labels of `ValidationData{2}` must be a subset of all distinct labels of `Y`.
• Optionally, `ValidationData{3}` can contain an m-dimensional numeric vector of observation weights. The software normalizes the weights with the validation data so that they sum to 1.

If you specify `ValidationData`, then, to display validation loss at the command line, specify a value larger than 0 for `Verbose`.

If the software converges for the last solver specified in `Solver`, then optimization terminates. Otherwise, the software uses the next solver specified in `Solver`.

By default, the software does not detect convergence by monitoring validation-data loss.
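A minimal sketch of the dual solver with its KKT-based stopping control (`X` and `Y` are assumed training data; recall that `'dual'` requires a ridge penalty and an SVM learner):

```
% Sketch: ridge-penalized SVM fit with dual SGD and a looser KKT tolerance.
Mdl = fitclinear(X,Y,'ObservationsIn','columns','Learner','svm',...
    'Regularization','ridge','Solver','dual','DeltaGradientTolerance',1e-2);
```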
#### BFGS, LBFGS, and SpaRSA Convergence Controls

Relative tolerance on the linear coefficients and the bias term (intercept), specified as the comma-separated pair consisting of `'BetaTolerance'` and a nonnegative scalar.

Let $B_t = [\beta_t' \; b_t]$, that is, the vector of the coefficients and the bias term at optimization iteration t. If $\left\| \frac{B_t - B_{t-1}}{B_t} \right\|_2 < \text{BetaTolerance}$, then optimization terminates.

If you also specify `GradientTolerance`, then optimization terminates when the software satisfies either stopping criterion.

If the software converges for the last solver specified in `Solver`, then optimization terminates. Otherwise, the software uses the next solver specified in `Solver`.

Example: `'BetaTolerance',1e-6`
Data Types: `single` | `double`

Absolute gradient tolerance, specified as the comma-separated pair consisting of `'GradientTolerance'` and a nonnegative scalar.

Let $\nabla\mathcal{L}_t$ be the gradient vector of the objective function with respect to the coefficients and bias term at optimization iteration t. If $\|\nabla\mathcal{L}_t\|_\infty = \max|\nabla\mathcal{L}_t| < \text{GradientTolerance}$, then optimization terminates.

If you also specify `BetaTolerance`, then optimization terminates when the software satisfies either stopping criterion.

If the software converges for the last solver specified in `Solver`, then optimization terminates. Otherwise, the software uses the next solver specified in `Solver`.

Example: `'GradientTolerance',1e-5`
Data Types: `single` | `double`

Size of history buffer for Hessian approximation, specified as the comma-separated pair consisting of `'HessianHistorySize'` and a positive integer. That is, at each iteration, the software composes the Hessian using statistics from the latest `HessianHistorySize` iterations.

The software does not support `'HessianHistorySize'` for SpaRSA.

Example: `'HessianHistorySize',10`
Data Types: `single` | `double`

Maximal number of optimization iterations, specified as the comma-separated pair consisting of `'IterationLimit'` and a positive integer. `IterationLimit` applies to these values of `Solver`: `'bfgs'`, `'lbfgs'`, and `'sparsa'`.

Example: `'IterationLimit',500`
Data Types: `single` | `double`

Data for optimization convergence detection, specified as the comma-separated pair consisting of `'ValidationData'` and a cell array.

During optimization, the software periodically estimates the loss of `ValidationData`. If the validation-data loss increases, then the software terminates optimization. For more details, see Algorithms. To optimize hyperparameters using cross-validation, see cross-validation options such as `CrossVal`.

• `ValidationData{1}` must contain an m-by-p or p-by-m full or sparse matrix of predictor data that has the same orientation as `X`. The predictor variables in the training data `X` and `ValidationData{1}` must correspond. The number of observations in both sets can vary.
• `ValidationData{2}` and `Y` must be the same data type. The set of all distinct labels of `ValidationData{2}` must be a subset of all distinct labels of `Y`.
• Optionally, `ValidationData{3}` can contain an m-dimensional numeric vector of observation weights. The software normalizes the weights with the validation data so that they sum to 1.

If you specify `ValidationData`, then, to display validation loss at the command line, specify a value larger than 0 for `Verbose`.

If the software converges for the last solver specified in `Solver`, then optimization terminates. Otherwise, the software uses the next solver specified in `Solver`.

By default, the software does not detect convergence by monitoring validation-data loss.

#### Hyperparameter Optimization

Parameters to optimize, specified as the comma-separated pair consisting of `'OptimizeHyperparameters'` and one of the following:

• `'none'` — Do not optimize.
• `'auto'` — Use `{'Lambda','Learner'}`.
• `'all'` — Optimize all eligible parameters.
• String array or cell array of eligible parameter names.
• Vector of `optimizableVariable` objects, typically the output of `hyperparameters`.

The optimization attempts to minimize the cross-validation loss (error) for `fitclinear` by varying the parameters. For information about cross-validation loss (albeit in a different context), see Classification Loss. To control the cross-validation type and other aspects of the optimization, use the `HyperparameterOptimizationOptions` name-value pair.

### Note

`'OptimizeHyperparameters'` values override any values you set using other name-value pair arguments. For example, setting `'OptimizeHyperparameters'` to `'auto'` causes the `'auto'` values to apply.

The eligible parameters for `fitclinear` are:

• `Lambda` — `fitclinear` searches among positive values, by default log-scaled in the range `[1e-5/NumObservations,1e5/NumObservations]`.
• `Learner` — `fitclinear` searches among `'svm'` and `'logistic'`.
• `Regularization` — `fitclinear` searches among `'ridge'` and `'lasso'`.
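For example, a minimal sketch that restricts the search to two of these eligible parameters (it assumes `X` and `Ystats` as in the examples above):

```
% Sketch: optimize only Lambda and Regularization; Learner keeps its
% default value.
rng default % For reproducibility
Mdl = fitclinear(X,Ystats,'ObservationsIn','columns',...
    'OptimizeHyperparameters',{'Lambda','Regularization'});
```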
Set nondefault parameters by passing a vector of `optimizableVariable` objects that have nondefault values. For example,

```
load fisheriris
params = hyperparameters('fitclinear',meas,species);
params(1).Range = [1e-4,1e6];
```

Pass `params` as the value of `OptimizeHyperparameters`.

By default, iterative display appears at the command line, and plots appear according to the number of hyperparameters in the optimization. For the optimization and plots, the objective function is log(1 + cross-validation loss) for regression and the misclassification rate for classification. To control the iterative display, set the `Verbose` field of the `'HyperparameterOptimizationOptions'` name-value pair argument. To control the plots, set the `ShowPlots` field of the `'HyperparameterOptimizationOptions'` name-value pair argument.

For an example, see Optimize Linear Classifier.

Example: `'OptimizeHyperparameters','auto'`

Options for optimization, specified as the comma-separated pair consisting of `'HyperparameterOptimizationOptions'` and a structure. This argument modifies the effect of the `OptimizeHyperparameters` name-value pair argument. All fields in the structure are optional.

| Field Name | Values | Default |
|---|---|---|
| `Optimizer` | `'bayesopt'` — Use Bayesian optimization. Internally, this setting calls `bayesopt`. `'gridsearch'` — Use grid search with `NumGridDivisions` values per dimension. `'randomsearch'` — Search at random among `MaxObjectiveEvaluations` points. `'gridsearch'` searches in a random order, using uniform sampling without replacement from the grid. After optimization, you can get a table in grid order by using the command `sortrows(Mdl.HyperparameterOptimizationResults)`. | `'bayesopt'` |
| `AcquisitionFunctionName` | `'expected-improvement-per-second-plus'`, `'expected-improvement'`, `'expected-improvement-plus'`, `'expected-improvement-per-second'`, `'lower-confidence-bound'`, or `'probability-of-improvement'`. Acquisition functions whose names include `per-second` do not yield reproducible results because the optimization depends on the runtime of the objective function. Acquisition functions whose names include `plus` modify their behavior when they are overexploiting an area. For more details, see Acquisition Function Types. | `'expected-improvement-per-second-plus'` |
| `MaxObjectiveEvaluations` | Maximum number of objective function evaluations. | `30` for `'bayesopt'` or `'randomsearch'`, and the entire grid for `'gridsearch'` |
| `MaxTime` | Time limit, specified as a positive real. The time limit is in seconds, as measured by `tic` and `toc`. Run time can exceed `MaxTime` because `MaxTime` does not interrupt function evaluations. | `Inf` |
| `NumGridDivisions` | For `'gridsearch'`, the number of values in each dimension. The value can be a vector of positive integers giving the number of values for each dimension, or a scalar that applies to all dimensions. This field is ignored for categorical variables. | `10` |
| `ShowPlots` | Logical value indicating whether to show plots. If `true`, this field plots the best objective function value against the iteration number. If there are one or two optimization parameters, and if `Optimizer` is `'bayesopt'`, then `ShowPlots` also plots a model of the objective function against the parameters. | `true` |
| `SaveIntermediateResults` | Logical value indicating whether to save results when `Optimizer` is `'bayesopt'`. If `true`, this field overwrites a workspace variable named `'BayesoptResults'` at each iteration. The variable is a `BayesianOptimization` object. | `false` |
| `Verbose` | Display to the command line. `0` — No iterative display. `1` — Iterative display. `2` — Iterative display with extra information. For details, see the `bayesopt` `Verbose` name-value pair argument. | `1` |
| `UseParallel` | Logical value indicating whether to run Bayesian optimization in parallel, which requires Parallel Computing Toolbox™. Due to the nonreproducibility of parallel timing, parallel Bayesian optimization does not necessarily yield reproducible results. For details, see Parallel Bayesian Optimization. | `false` |
| `Repartition` | Logical value indicating whether to repartition the cross-validation at every iteration. If `false`, the optimizer uses a single partition for the optimization. `true` usually gives the most robust results because this setting takes partitioning noise into account. However, for good results, `true` requires at least twice as many function evaluations. | `false` |

Use no more than one of the following three field names.

| Field Name | Values | Default |
|---|---|---|
| `CVPartition` | A `cvpartition` object, as created by `cvpartition`. | `'Kfold',5` if you do not specify any cross-validation field |
| `Holdout` | A scalar in the range `(0,1)` representing the holdout fraction. | |
| `Kfold` | An integer greater than 1. | |

Example: `'HyperparameterOptimizationOptions',struct('MaxObjectiveEvaluations',60)`
Data Types: `struct`

## Output Arguments

Trained linear classification model, returned as a `ClassificationLinear` model object or `ClassificationPartitionedLinear` cross-validated model object.

If you set any of the name-value pair arguments `KFold`, `Holdout`, `CrossVal`, or `CVPartition`, then `Mdl` is a `ClassificationPartitionedLinear` cross-validated model object. Otherwise, `Mdl` is a `ClassificationLinear` model object.

To reference properties of `Mdl`, use dot notation. For example, enter `Mdl.Beta` in the Command Window to display the vector or matrix of estimated coefficients.

### Note

Unlike other classification models, and for economical memory usage, `ClassificationLinear` and `ClassificationPartitionedLinear` model objects do not store the training data or training process details (for example, convergence history).

Optimization details, returned as a structure array.

Fields specify final values or name-value pair argument specifications, for example, `Objective` is the value of the objective function when optimization terminates. Rows of multidimensional fields correspond to values of `Lambda` and columns correspond to values of `Solver`.

This table describes some notable fields.

| Field | Description |
|---|---|
| `TerminationStatus` | Reason for optimization termination; corresponds to a value in `TerminationCode` |
| `FitTime` | Elapsed, wall-clock time in seconds |
| `History` | A structure array of optimization information for each iteration. The field `Solver` stores solver types using integer coding: 1 = SGD, 2 = ASGD, 3 = Dual SGD for SVM, 4 = LBFGS, 5 = BFGS, 6 = SpaRSA |

To access fields, use dot notation. For example, to access the vector of objective function values for each iteration, enter `FitInfo.History.Objective`.

It is good practice to examine `FitInfo` to assess whether convergence is satisfactory.
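A short sketch of that diagnostic workflow (`X` and `Y` are assumed training data; `'Verbose',1` makes `fitclinear` populate `FitInfo.History`):

```
% Sketch: fit, then examine convergence diagnostics via dot notation.
[Mdl,FitInfo] = fitclinear(X,Y,'Verbose',1);
FitInfo.TerminationStatus                 % reason optimization stopped
objHistory = FitInfo.History.Objective;   % objective value per iteration
```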
Cross-validation optimization of hyperparameters, returned as a `BayesianOptimization` object or a table of hyperparameters and associated values. The output is nonempty when the value of `'OptimizeHyperparameters'` is not `'none'`.

The output value depends on the `Optimizer` field value of the `'HyperparameterOptimizationOptions'` name-value pair argument:

| Value of `Optimizer` Field | Value of `HyperparameterOptimizationResults` |
|---|---|
| `'bayesopt'` (default) | Object of class `BayesianOptimization` |
| `'gridsearch'` or `'randomsearch'` | Table of hyperparameters used, observed objective function values (cross-validation loss), and rank of observations from lowest (best) to highest (worst) |

### Warm Start

A warm start is initial estimates of the beta coefficients and bias term supplied to an optimization routine for quicker convergence.

### Alternatives for Lower-Dimensional Data

High-dimensional linear classification and regression models minimize objective functions relatively quickly, but at the cost of some accuracy, a numeric-only restriction on the predictor variables, and the requirement that the model be linear in its parameters. If your predictor data set is low- through medium-dimensional, or contains heterogeneous variables, then you should use the appropriate classification or regression fitting function. To help you decide which fitting function is appropriate for your low-dimensional data set, use this table.

| Model to Fit | Function | Notable Algorithmic Differences |
|---|---|---|
| SVM | `fitcsvm` | Computes the Gram matrix of the predictor variables, which is convenient for nonlinear kernel transformations. Solves the dual problem using SMO, ISDA, or L1 minimization via quadratic programming using `quadprog`. |
| Linear regression | `fitlm`, `lasso` | `lasso` implements cyclic coordinate descent. |
| Logistic regression | `fitglm`, `lassoglm` | `fitglm` implements iteratively reweighted least squares. `lassoglm` implements cyclic coordinate descent. |

## Tips

• It is a best practice to orient your predictor matrix so that observations correspond to columns and to specify `'ObservationsIn','columns'`. As a result, you can experience a significant reduction in optimization-execution time.
• For better optimization accuracy if `X` is high-dimensional and `Regularization` is `'ridge'`, set any of these combinations for `Solver` (for a concrete call, see the sketch after these tips):
  • `'sgd'`
  • `'asgd'`
  • `'dual'` if `Learner` is `'svm'`
  • `{'sgd','lbfgs'}`
  • `{'asgd','lbfgs'}`
  • `{'dual','lbfgs'}` if `Learner` is `'svm'`

  Other combinations can result in poor optimization accuracy.
• For better optimization accuracy if `X` is moderate- through low-dimensional and `Regularization` is `'ridge'`, set `Solver` to `'bfgs'`.
• If `Regularization` is `'lasso'`, set any of these combinations for `Solver`:
  • `'sgd'`
  • `'asgd'`
  • `'sparsa'`
  • `{'sgd','sparsa'}`
  • `{'asgd','sparsa'}`
• When choosing between SGD and ASGD, consider that:
  • SGD takes less time per iteration, but requires more iterations to converge.
  • ASGD requires fewer iterations to converge, but takes more time per iteration.
• If `X` has few observations, but many predictor variables, then:
  • Specify `'PostFitBias',true`.
  • For SGD or ASGD solvers, set `PassLimit` to a positive integer that is greater than 1, for example, 5 or 10. This setting often results in better accuracy.
• For SGD and ASGD solvers, `BatchSize` affects the rate of convergence.
  • If `BatchSize` is too small, then `fitclinear` achieves the minimum in many iterations, but computes the gradient per iteration quickly.
  • If `BatchSize` is too large, then `fitclinear` achieves the minimum in fewer iterations, but computes the gradient per iteration slowly.
• Large learning rates (see `LearnRate`) speed up convergence to the minimum, but can lead to divergence (that is, over-stepping the minimum). Small learning rates ensure convergence to the minimum, but can lead to slow termination.
• When using lasso penalties, experiment with various values of `TruncationPeriod`. For example, set `TruncationPeriod` to `1`, `10`, and then `100`.
• For efficiency, `fitclinear` does not standardize predictor data. To standardize `X`, enter

```
X = bsxfun(@rdivide,bsxfun(@minus,X,mean(X,2)),std(X,0,2));
```

  The code requires that you orient the predictors and observations as the rows and columns of `X`, respectively. Also, for memory-usage economy, the code replaces the original predictor data with the standardized data.
• After training a model, you can generate C/C++ code that predicts labels for new data. Generating C/C++ code requires MATLAB Coder™. For details, see Introduction to Code Generation.
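The sketch promised in the tips above, showing one of the recommended ridge combinations (`X` and `Y` are assumed training data):

```
% Sketch: an SGD pass for a fast initial approximation, refined by LBFGS.
Mdl = fitclinear(X,Y,'ObservationsIn','columns',...
    'Regularization','ridge','Solver',{'sgd','lbfgs'});
```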
## Algorithms

• If you specify `ValidationData`, then, during objective-function optimization:
  • `fitclinear` estimates the validation loss of `ValidationData` periodically using the current model, and tracks the minimal estimate.
  • When `fitclinear` estimates a validation loss, it compares the estimate to the minimal estimate.
  • When subsequent validation loss estimates exceed the minimal estimate five times, `fitclinear` terminates optimization.
• If you specify `ValidationData` and implement a cross-validation routine (`CrossVal`, `CVPartition`, `Holdout`, or `KFold`), then:
  1. `fitclinear` randomly partitions `X` and `Y` according to the cross-validation routine that you choose.
  2. `fitclinear` trains the model using the training-data partition. During objective-function optimization, `fitclinear` uses `ValidationData` as another possible way to terminate optimization (for details, see the previous bullet).
  3. Once `fitclinear` satisfies a stopping criterion, it constructs a trained model based on the optimized linear coefficients and intercept.
     1. If you implement k-fold cross-validation, and `fitclinear` has not exhausted all training-set folds, then `fitclinear` returns to step 2 to train using the next training-set fold.
     2. Otherwise, `fitclinear` terminates training, and then returns the cross-validated model.
  4. You can determine the quality of the cross-validated model. For example:
     • To determine the validation loss using the holdout or out-of-fold data from step 1, pass the cross-validated model to `kfoldLoss`.
     • To predict observations on the holdout or out-of-fold data from step 1, pass the cross-validated model to `kfoldPredict`.

## References

[1] Hsieh, C. J., K. W. Chang, C. J. Lin, S. S. Keerthi, and S. Sundararajan. “A Dual Coordinate Descent Method for Large-Scale Linear SVM.” Proceedings of the 25th International Conference on Machine Learning, ICML ’08, 2008, pp. 408–415.

[2] Langford, J., L. Li, and T. Zhang. “Sparse Online Learning Via Truncated Gradient.” J. Mach. Learn. Res., Vol. 10, 2009, pp. 777–801.

[3] Nocedal, J. and S. J. Wright. Numerical Optimization, 2nd ed., New York: Springer, 2006.

[4] Shalev-Shwartz, S., Y. Singer, and N. Srebro. “Pegasos: Primal Estimated Sub-Gradient Solver for SVM.” Proceedings of the 24th International Conference on Machine Learning, ICML ’07, 2007, pp. 807–814.

[5] Wright, S. J., R. D. Nowak, and M. A. T. Figueiredo. “Sparse Reconstruction by Separable Approximation.” Trans. Sig. Proc., Vol. 57, No. 7, 2009, pp. 2479–2493.

[6] Xiao, Lin. “Dual Averaging Methods for Regularized Stochastic Learning and Online Optimization.” J. Mach. Learn. Res., Vol. 11, 2010, pp. 2543–2596.

[7] Xu, Wei. “Towards Optimal One Pass Large Scale Learning with Averaged Stochastic Gradient Descent.” CoRR, abs/1107.2490, 2011.
# Fonts used on this site

What fonts are used on this site, for text and mathematics? It looks like Computer Modern, but it seems less "thin" and "spindly" than usual, which I like. Maybe someone fattened up the shapes, or the hinting is very good, or something?

There was another question asking the same thing, and the answer was "Georgia", but I don't know if that refers to the text font, the math font, or both.
# Chemistry doubt

A $$0.5\text{ g}$$ sample of a sulphite salt was dissolved in $$200\text{ ml}$$, and $$20\text{ ml}$$ of this solution required $$10\text{ ml}$$ of $$0.02\text{ M}$$ acidified permanganate solution. Find the percentage by mass of sulphite in the ore.

Note by Shivam Mishra, 1 year, 10 months ago

My solution: Let the number of moles of sulphite ions be $$M$$. On oxidation the sulphite ion changes to sulphate, so the oxidation number changes from $$4+$$ to $$6+$$; hence the valence factor is $$2$$. The product of moles and valence factor gives the equivalents. The valence factor of permanganate in acidified solution is $$5$$. Equate the equivalents:

$M \times 2 \times \frac{20}{200} = \frac{10}{1000} \times 0.02 \times 5$

$M = \frac{5}{1000}$

As the molecular weight of the sulphite ion is $$80$$, the percentage by mass is:

$\frac{0.005 \times 80}{0.5} \times 100 = 80\%$

- 1 year, 9 months ago

Is the information complete? - 1 year, 10 months ago

Yes - 1 year, 10 months ago

80% - 1 year, 8 months ago

Thanks @Utkarsh Dwivedi - 1 year, 9 months ago

Tell me, is this question based on molarity? - 1 year, 10 months ago

Yes, but it can also be solved using milliequivalents. - 1 year, 10 months ago
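As one commenter notes, the problem can also be solved using milliequivalents; as an added cross-check (not part of the original thread), the same answer in one line, with all numbers taken from the problem statement:

$\underbrace{\frac{10}{1000} \times 0.02 \times 5}_{\text{eq of } \mathrm{MnO_4^-}} \times \frac{200}{20} \times \frac{80}{2} = 0.4\text{ g}, \qquad \frac{0.4}{0.5} \times 100\% = 80\%$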
Search Results: 1 - 10 of 100 matches

Physics, 2005, DOI: 10.1016/j.physleta.2004.03.058. Abstract: The entanglement of pair cat states in the phase damping channel is studied by employing the relative entropy of entanglement. It is shown that the pair cat states can always be distillable in the phase damping channel. Furthermore, we analyze the fidelity of teleportation for the pair cat states by using joint measurements of the photon-number sum and phase difference.

Physics, 2000. Abstract: We first consider teleportation of entangled states shared between Claire and Alice to Bob1 and Bob2 when Alice and the two Bobs share a single copy of a GHZ-class state and where *all* the four parties are at distant locations. We then generalize this situation to the case of teleportation of entangled states shared between Claire1, Claire2, ..., Claire(N-1) and Alice to Bob1, Bob2, ..., BobN when Alice and the N Bobs share a single copy of a Cat-like state and where again *all* the 2N parties are at distant locations.

Physics, 2005, DOI: 10.1103/PhysRevA.72.052313. Abstract: We present a protocol for the teleportation of the quantum state of a pulse of light onto the collective spin state of an atomic ensemble. The entangled state of light and atoms employed as a resource in this protocol is created by probing the collective atomic spin, Larmor precessing in an external magnetic field, off resonantly with a coherent pulse of light. We take here for the first time full account of the effects of Larmor precession and show that it gives rise to a qualitatively new type of multimode entangled state of light and atoms. The protocol is shown to be robust against the dominating sources of noise and can be implemented with an atomic ensemble at room temperature interacting with free space light. We also provide a scheme to perform the readout of the Larmor precessing spin state enabling the verification of successful teleportation as well as the creation of spin squeezing.

Physics, 1999, DOI: 10.1103/PhysRevLett.84.3482. Abstract: We show that *one* single-mode squeezed state distributed among $N$ parties using linear optics suffices to produce a truly $N$-partite entangled state for any nonzero squeezing and arbitrarily many parties. From this $N$-partite entangled state, via quadrature measurements of $N-2$ modes, bipartite entanglement between any two of the $N$ parties can be 'distilled', which enables quantum teleportation with an experimentally determinable fidelity better than could be achieved in any classical scheme.

Physics, 2005. Abstract: In this chapter we review the characterization of entanglement in Gaussian states of continuous variable systems. For two-mode Gaussian states, we discuss how their bipartite entanglement can be accurately quantified in terms of the global and local amounts of mixedness, and efficiently estimated by direct measurements of the associated purities. For multimode Gaussian states endowed with local symmetry with respect to a given bipartition, we show how the multimode block entanglement can be completely and reversibly localized onto a single pair of modes by local, unitary operations. We then analyze the distribution of entanglement among multiple parties in multimode Gaussian states.
We introduce the continuous-variable tangle to quantify entanglement sharing in Gaussian states and we prove that it satisfies the Coffman-Kundu-Wootters monogamy inequality. Nevertheless, we show that pure, symmetric three-mode Gaussian states, at variance with their discrete-variable counterparts, allow a promiscuous sharing of quantum correlations, exhibiting both maximum tripartite residual entanglement and maximum couplewise entanglement between any pair of modes. Finally, we investigate the connection between multipartite entanglement and the optimal fidelity in a continuous-variable quantum teleportation network. We show how the fidelity can be maximized in terms of the best preparation of the shared entangled resources and, vice versa, that this optimal fidelity provides a clear-cut operational interpretation of several measures of bipartite and multipartite entanglement, including the entanglement of formation, the localizable entanglement, and the continuous-variable tangle.

Physics, 2002, DOI: 10.1103/PhysRevA.66.052318
Abstract: The scheme for entanglement teleportation is proposed to incorporate multipartite entanglement of four qubits as a quantum channel. Based on the invariance of entanglement teleportation under arbitrary two-qubit unitary transformation, we derive relations of separabilities for joint measurements at a sending station and for unitary operations at a receiving station. From the relations of separabilities it is found that an inseparable quantum channel always leads to a total teleportation of entanglement with an inseparable joint measurement and/or a nonlocal unitary operation.

Physics, 2004, DOI: 10.1103/PhysRevLett.95.150503
Abstract: We devise the optimal form of Gaussian resource states enabling continuous variable teleportation with maximal fidelity. We show that a nonclassical optimal fidelity of $N$-user teleportation networks is necessary and sufficient for $N$-party entangled Gaussian resources, yielding an estimator of multipartite entanglement. This entanglement of teleportation is equivalent to entanglement of formation in the two-user protocol, and to localizable entanglement in the multi-user one. The continuous-variable tangle, quantifying entanglement sharing in three-mode Gaussian states, is operationally linked to the optimal fidelity of a tripartite teleportation network.

Brent R. Yates, Physics, 2011
Abstract: Even Einstein has to be wrong sometimes. However, when Einstein was wrong he created a 70 year debate about the strange behavior of quantum mechanics. His debate helped prove topics such as the indeterminacy of particle states, quantum entanglement, and a rather clever use of quantum entanglement known as quantum teleportation.

Ye Yeo, Physics, 2003
Abstract: We investigate the teleportation of an entangled two-qubit state using three-qubit GHZ and W channels. The effects of white noise on the average teleportation fidelity and amount of entanglement transmitted are also studied.

Physics, 2007
Abstract: Quantum entanglement, like other resources, is now considered to be a resource which can be produced, concentrated if required, transported and consumed. After its inception [1] in 1993, various schemes of quantum state teleportation have been proposed using different types of channels. Not restricting to qubit-based systems, qutrit states and channels have also been of considerable interest.
In the present paper we investigate the teleportation of an unknown single qutrit state, as well as a two-qutrit state, through a three-qutrit quantum channel, along with the required operations to recover the state. This is further generalized to the case of teleportation of an n-qutrit system.
{}
# 10 g of water at 70 °C is mixed with 5 g of water at 30 °C. Find the temperature of the mixture in equilibrium. Specific heat of water is 1 cal/(g·°C).

Given: masses $m_1 = 10\ \text{g}$, $m_2 = 5\ \text{g}$; temperatures $T_1 = 70\,^\circ\text{C}$, $T_2 = 30\,^\circ\text{C}$.

Let $T$ be the temperature of the mixture. From the principle of calorimetry, the common temperature of the mixture at equilibrium is given by

$$T = \frac{m_1 c_w T_1 + m_2 c_w T_2}{c_w (m_1 + m_2)} = \frac{m_1 T_1 + m_2 T_2}{m_1 + m_2} = \frac{10 \times 70 + 5 \times 30}{10 + 5} = \frac{700 + 150}{15} = \frac{850}{15} = 56.67\,^\circ\text{C}$$
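A one-line check of the weighted-average formula above (a minimal sketch; the function name is my own):

```python
def mixture_temp(m1, t1, m2, t2):
    """Equilibrium temperature when two samples of the same liquid are mixed.

    The specific heat is the same on both sides, so it cancels and the result
    is just the mass-weighted average of the two temperatures.
    """
    return (m1 * t1 + m2 * t2) / (m1 + m2)

print(mixture_temp(10, 70, 5, 30))  # -> 56.666...
```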
{}
WIKISKY.ORG

# 21 Her

### Related articles

Evolution of magnetic fields in stars across the upper main sequence: I. Catalogue of magnetic field measurements with FORS 1 at the VLT
To properly understand the physics of Ap and Bp stars it is particularly important to identify the origin of their magnetic fields. For that, an accurate knowledge of the evolutionary state of stars that have a measured magnetic field is an important diagnostic. Previous results based on a small and possibly biased sample suggest that the distribution of magnetic stars with mass below 3 M_⊙ in the H-R diagram differs from that of normal stars in the same mass range (Hubrig et al. 2000). In contrast, higher mass magnetic Bp stars may well occupy the whole main-sequence width (Hubrig, Schöller & North 2005b). In order to rediscuss the evolutionary state of upper main sequence magnetic stars, we define a larger and bias-free sample of Ap and Bp stars with accurate Hipparcos parallaxes and reliably determined longitudinal magnetic fields. We used FORS 1 at the VLT in its spectropolarimetric mode to measure the magnetic field in chemically peculiar stars where it was unknown or poorly known as yet. In this first paper we present our results of the mean longitudinal magnetic field measurements in 136 stars. Our sample consists of 105 Ap and Bp stars, two PGa stars, 17 HgMn stars, three normal stars, and nine SPB stars. A magnetic field was for the first time detected in 57 Ap and Bp stars, in four HgMn stars, one PGa star, one normal B-type star and four SPB stars.

Evolutionary state of magnetic chemically peculiar stars
Context: The photospheres of about 5-10% of the upper main sequence stars exhibit remarkable chemical anomalies. Many of these chemically peculiar (CP) stars have a global magnetic field, the origin of which is still a matter of debate. Aims: We present a comprehensive statistical investigation of the evolution of magnetic CP stars, aimed at providing constraints to the theories that deal with the origin of the magnetic field in these stars. Methods: We have collected from the literature data for 150 magnetic CP stars with accurate Hipparcos parallaxes. We have retrieved from the ESO archive 142 FORS 1 observations of circularly polarized spectra for 100 stars. From these spectra we have measured the mean longitudinal magnetic field, and discovered 48 new magnetic CP stars (five of which belong to the rare class of rapidly oscillating Ap stars). We have determined effective temperature and luminosity, then mass and position in the H-R diagram, for a final sample of 194 magnetic CP stars. Results: We found that magnetic stars with M > 3 M_⊙ are homogeneously distributed along the main sequence. Instead, there are statistical indications that lower mass stars (especially those with M ≤ 2 M_⊙) tend to concentrate in the centre of the main sequence band. We show that this inhomogeneous age distribution cannot be attributed to the effects of random errors and small number statistics. Our data also suggest that the surface magnetic flux of CP stars increases with stellar age and mass, and correlates with the rotation period. For stars with M > 3 M_⊙, rotation periods decrease with age in a way consistent with the conservation of angular momentum, while for less massive magnetic CP stars an angular momentum loss cannot be ruled out.
Conclusions: The mechanism that originates and sustains the magnetic field in the upper main sequence stars may be different in CP stars of different mass.

Multiplicity among chemically peculiar stars. II. Cool magnetic Ap stars
We present new orbits for sixteen Ap spectroscopic binaries, four of which might in fact be Am stars, and give their orbital elements. Four of them are SB2 systems: HD 5550, HD 22128, HD 56495 and HD 98088. The twelve other stars are: HD 9996, HD 12288, HD 40711, HD 54908, HD 65339, HD 73709, HD 105680, HD 138426, HD 184471, HD 188854, HD 200405 and HD 216533. Rough estimates of the individual masses of the components of HD 65339 (53 Cam) are given, combining our radial velocities with the results of speckle interferometry and with Hipparcos parallaxes. Considering the mass functions of 74 spectroscopic binaries from this work and from the literature, we conclude that the distribution of the mass ratio is the same for cool Ap stars and for normal G dwarfs. Therefore, the only differences between binaries with normal stars and those hosting an Ap star lie in the period distribution: except for the case of HD 200405, all orbital periods are longer than (or equal to) 3 days. A consequence of this peculiar distribution is a deficit of null eccentricities. There is no indication that the secondary has a special nature, like e.g. a white dwarf. Based on observations collected at the Observatoire de Haute-Provence (CNRS), France. Tables 1 to 3 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/394/151. Appendix B is only available in electronic form at http://www.edpsciences.org

Rotational velocities of A-type stars in the northern hemisphere. II. Measurement of v sin i
This work is the second part of the set of measurements of v sin i for A-type stars, begun by Royer et al. (2002a). Spectra of 249 B8 to F2-type stars brighter than V=7 have been collected at Observatoire de Haute-Provence (OHP). Fourier transforms of several line profiles in the range 4200-4600 Å are used to derive v sin i from the frequency of the first zero. Statistical analysis of the sample indicates that measurement error mainly depends on v sin i, and this relative error of the rotational velocity is found to be about 5% on average. The systematic shift with respect to standard values from Slettebak et al. (1975), previously found in the first paper, is here confirmed. Comparisons with data from the literature agree with our findings: v sin i values from Slettebak et al. are underestimated and the relation between both scales follows the linear law v sin i_new = 1.03 v sin i_old + 7.7. Finally, these data are combined with those from the previous paper (Royer et al. 2002a), together with the catalogue of Abt & Morrell (1995). The resulting sample includes some 2150 stars with homogenized rotational velocities. Based on observations made at Observatoire de Haute-Provence (CNRS), France. Tables \ref{results} and \ref{merging} are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.125.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/393/897

The spectroscopic binaries 21 Her and gamma Gem
In the framework of a search campaign for short-term oscillations of early-type stars we analysed recently obtained spectroscopic and photometric observations of the early A-type spectroscopic binaries 21 Her and gamma Gem.
From the radial velocities of 21 Her we derived an improved orbital period and a distinctly smaller eccentricity in comparison with the values known up to now. Moreover, fairly convincing evidence exists for an increase of the orbital period with time. In addition to the orbital motion we find further periods in the orbital residuals. The longest period of 57.7 d is most likely due to a third body which has the mass of a brown dwarf, whereas the period of 1.48 d could be related to half the rotational period of the star. For the spectral types we deduced A1 III for the primary and M for the secondary. Two further periods of 0.21 d and 0.22 d give a hint of the existence of short-term pulsations in 21 Her. Their period difference is of the order of the expected rotational period, so one possible explanation could be rotational splitting of nonradial pulsation modes. Because of the very strong aliasing of the data this finding has to be confirmed by observations having a more suitable time sampling, however. The analysis of photometric series and the Hipparcos photometry gives no certain evidence for periodic light variations. For gamma Gem, besides the orbital RV variation, no variations with amplitudes larger than about 100 m s^-1 could be detected. The orbital elements of gamma Gem are only slightly changed compared to the previously known orbital solution by including our new radial velocities, but their accuracy is improved. For some chemical elements we determined their abundances: NLTE values of C, O, and Na as well as LTE values of Mg, Sc, Fe, Cr, and Ti. We find the abundances to be rather close to the solar values; only carbon shows a little underabundance. The research is based on spectroscopic observations made with the 2 m telescope at the Thüringer Landessternwarte Tautenburg, Germany, and photometric observations with the 0.6 m telescope of the National Astronomical Observatory Rozhen, Bulgaria.

The Spectroscopic Binaries 21 Her and Gamma Geminorum
In this work we analysed recently obtained spectroscopic observations of the early A-type spectroscopic binary gamma Gem. The research is based on spectroscopic observations made with the 2 m telescope at the Thüringer Landessternwarte Tautenburg, Germany. On the basis of 30 TLS spectrograms we derived abundances of C, O, Na (NLTE) and Mg, Si, Sc, Ti, Cr and Fe (LTE). We found that the abundances of oxygen, titanium, chromium, and iron are rather close to the solar values, while the other studied elements show either small enhancement (sodium) or moderate-to-small deficiency (carbon, magnesium, silicon, scandium). Generally, our results are in agreement with the recent determination of elemental abundances of gamma Gem performed by Adelman & Philip (1996). The abundances derived for gamma Gem are typical for "normal" A stars. Moderate carbon deficiency in A stars is explained by radiative diffusion, which leads to an elemental segregation in the stable atmosphere. Observationally this was confirmed in the classic work of Roby & Lambert (Roby S. W., Lambert D. L., 1990, ApJS 73, 67) for a sample of A stars. Although oxygen depletion is also predicted by the diffusion theory, we found its abundance to be normal in the gamma Gem atmosphere.

The proper motions of fundamental stars. I. 1535 stars from the Basic FK5
A direct combination of the positions given in the HIPPARCOS catalogue with astrometric ground-based catalogues having epochs later than 1939 allows us to obtain new proper motions for the 1535 stars of the Basic FK5.
The results are presented as the catalogue Proper Motions of Fundamental Stars (PMFS), Part I. The median precision of the proper motions is 0.5 mas/year for mu_alpha cos delta and 0.7 mas/year for mu_delta. The non-linear motions of the photocentres of a few hundred astrometric binaries are separated into their linear and elliptic motions. Since the PMFS proper motions do not include the information given by the proper motions from other catalogues (HIPPARCOS, FK5, FK6, etc.), this catalogue can be used as an independent source of the proper motions of the fundamental stars. The catalogue (Table 3) is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/365/222

Do the physical properties of Ap binaries depend on their orbital elements?
We reveal sufficient evidence that the physical characteristics of Ap stars are related to binarity. The Ap star peculiarity [represented by the Δ(V1-G) value and magnetic field strength] diminishes with eccentricity, and it may also increase with orbital period (P_orb). This pattern, however, does not hold for large orbital periods. A striking gap that occurs in the orbital period distribution of Ap binaries at 160-600 d might well mark a discontinuity in the above-mentioned behaviour. There is also an interesting indication that the Ap star eccentricities are relatively lower than those of corresponding B9-A2 normal binaries for P_orb > 10 d. All this gives serious support to the pioneering idea of Abt & Snowden concerning a possible interplay between the magnetism of Ap stars and their binarity. Nevertheless, we argue instead in favour of another mechanism, namely that it is binarity that affects magnetism and not the opposite, and suggest the presence of a new magnetohydrodynamical mechanism induced by the stellar companion and stretching to surprisingly large P_orb.

The ROSAT all-sky survey catalogue of optically bright main-sequence stars and subgiant stars
We present X-ray data for all main-sequence and subgiant stars of spectral types A, F, G, and K and luminosity classes IV and V listed in the Bright Star Catalogue that have been detected as X-ray sources in the ROSAT all-sky survey; several stars without luminosity class are also included. The catalogue contains 980 entries, yielding an average detection rate of 32 percent. In addition to count rates, source detection parameters, hardness ratios, and X-ray fluxes, we also list X-ray luminosities derived from Hipparcos parallaxes. The catalogue is also available in electronic form via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

The Tokyo PMC catalog 90-93: Catalog of positions of 6649 stars observed in 1990 through 1993 with Tokyo photoelectric meridian circle
The sixth annual catalog of the Tokyo Photoelectric Meridian Circle (PMC) is presented for 6649 stars which were observed at least two times in January 1990 through March 1993. The mean positions of the stars observed are given in the catalog at the corresponding mean epochs of observations of individual stars. The coordinates of the catalog are based on the FK5 system, and referred to the equinox and equator of J2000.0. The mean local deviations of the observed positions from the FK5 catalog positions are constructed for the basic FK5 stars to compare with those of the Tokyo PMC Catalog 89 and preliminary Hipparcos results of H30.
Systematic Errors in the FK5 Catalog as Derived from CCD Observations in the Extragalactic Reference Frame
Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1997AJ....114..850S&db_key=AST

Speckle observations of visual and spectroscopic binaries. VI.
Not Available

The Relation between Rotational Velocities and Spectral Peculiarities among A-Type Stars
Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1995ApJS...99..135A&db_key=AST

Vitesses radiales. Catalogue WEB: Wilson Evans Batten. Subtitle: Radial velocities: The Wilson-Evans-Batten catalogue.
We give a common version of the two catalogues of Mean Radial Velocities by Wilson (1963) and Evans (1978), to which we have added the catalogue of spectroscopic binary systems (Batten et al. 1989). For each star, when possible, we give: 1) an acronym to enter SIMBAD (Set of Identifications, Measurements and Bibliography for Astronomical Data) of the CDS (Centre de Données Astronomiques de Strasbourg); 2) the number HIC of the HIPPARCOS catalogue (Turon 1992); 3) the CCDM number (Catalogue des Composantes des étoiles Doubles et Multiples) by Dommanget & Nys (1994). For the cluster stars, a precise study has been done on the identificator numbers. Numerous remarks point out the problems we have had to deal with.

Radio continuum emission from stars: a catalogue update.
An updated version of my catalogue of radio stars is presented. Some statistics and availability are discussed.

A new list of effective temperatures of chemically peculiar stars. II.
Not Available

CCD observations linking the radio and optical reference frames
Observations made with the U.S. Naval Observatory 20 cm transit telescope are presented for 104 FK5 and 13 radio stars that are directly tied into the J2000 extragalactic reference system. A comparison of the star positions presented in this paper with the FK5 catalog finds possible warps in the FK5 reference system with amplitudes approximately 0.1 arcsec, and rotations for linking the optical and radio reference systems with values omega_x = -20 ± 17 (s.e.), omega_y = 28 ± 16 (s.e.), and omega_z = 11 ± 13 (s.e.) mas. When the data of this paper are combined with other studies, these rotations become omega_x = 11 ± 13 (s.e.), omega_y = 40 ± 13 (s.e.), and omega_z = 17 ± 9 (s.e.) mas, indicating the omega_y rotation might be real. Among the radio stars, there are four stars (KQ Pup, 54 Cam, SZ Psc, and HD 244085) with significant optical-radio offsets that exceed 0.15 arcsec in magnitude. Moreover, many other radio stars probably have appreciable offsets, as determined from a statistical investigation. Optical-radio offsets, which are typically accurate to sigma approximately ± 42 (s.e.) mas, are also presented for 48 extragalactic objects observed with the transit telescope. Among these objects, 21% have significant offsets. Radio galaxies are much more likely to have large offsets than QSOs and BL Lac objects, making many of them poor candidates for radio reference objects.

The second Quito astrolabe catalogue
The paper contains 515 individual corrections Δα and 235 corrections Δδ to FK5 and FK5Supp. stars, and 50 corrections to their proper motions, computed from observations made with the classical Danjon astrolabe OPL-13 at the Quito Astronomical Observatory of Ecuador National Polytechnical School during a period from 1964 to 1983. These corrections cover the declination zone from -30° to +30°.
Mean probable errors of catalogue positions are 0.047" in α cos δ and 0.054" in δ. The systematic trends of the catalogue, Δα_α cos δ, Δα_δ cos δ, Δδ_α, Δδ_δ, are presented for the observed zone.

Corrections to the right ascension to be applied to the apparent places of 1217 stars given in "The Chinese Astronomical Almanach" for the years 1984 to 1992.
Not Available

Speckle observations of visual and spectroscopic binaries. IV
This is the fourth paper of this series, giving results of speckle observations for 22 visual and 161 spectroscopic binaries. The observation was carried out by using the 212 cm telescope of San Pedro Martir Observatory in Mexico on 7 nights from July 20 to July 26, 1991. We obtained fringes in power spectra of 19 visual and 11 spectroscopic binaries (6 newly resolved ones) with angular separation larger than 0.06 arcsec. We introduced a new ICCD TV camera in this observation, and were able to achieve the diffraction-limit resolution of the 212 cm telescope.

Speckle observations of visual and spectroscopic binaries. III
This is the third paper of this series, giving results of speckle observations carried out for seven visual and 119 spectroscopic binaries on seven nights from May 20 to May 27, 1989, and for 30 visual and 272 spectroscopic binaries on 12 nights from June 11 to June 15, and from August 28 to September 3, 1990, using the 212-cm telescope at San Pedro Martir Observatory in Mexico. Fringes in the power spectrum of 31 visual and spectroscopic binaries with angular separation larger than 0.21 arcsec are obtained. In addition to two spectroscopic binaries, HD 41116 and HD 206901, named in the second paper of this series, six spectroscopic binaries are found, each of which has a third component star surrounding the two stars of a spectroscopic binary having periodic variation of radial velocity.

Physical data of the fundamental stars.
Not Available

ICCD speckle observations of binary stars. IV - Measurements during 1986-1988 from the Kitt Peak 4 m telescope
One thousand five hundred and fifty measurements of 1006 binary star systems, observed mostly during 1986 through mid-1988 by means of speckle interferometry with the KPNO 4-m telescope, are presented. Twenty-one systems are directly resolved for the first time, including new components to the cool supergiant Alpha Her A and the Pleiades shell star Pleione. A continuing survey of The Bright Star Catalogue yielded eight new binaries from 293 bright stars observed. Corrections to speckle measures from the GSU/CHARA ICCD speckle camera previously published are presented and discussed.

Behaviour of the O I triplet at 7773 Å. II - Ap stars
The behavior of the O I triplet at 7773 Å in a sample of 74 Ap stars is analyzed and compared with the results derived for a set of 50 normal stars. These abundance determinations are made in the NLTE frame by introducing a correction to the LTE model atmosphere. Among the Ap stars, the oxygen abundance varies greatly from one group to another and shows a clear separation between the different classes of peculiarities. An underabundance of up to a factor 400 is found for the (Sr-Cr-Eu) stars.

A catalogue of right ascensions and declinations of FK4 stars
The position parameters of 578 stars from the fundamental catalog FK4 are determined on the basis of 3-4 h meridian-circle observations obtained by the differential method at Belgrade Astronomical Observatory during 1981-1987. The observation method and data-reduction procedures are explained, and the results are compiled in extensive tables.
The average mean-square errors per observation are found to be ε(α) cos δ = ± 0.022 s and ε(δ) = ± 0.32 arcsec.

Statistical Investigation of Chemically Peculiar Stars - Part Five - Spectroscopic Binary Stars
Not Available

Statistical Investigation of Chemically Peculiar Stars - Part One - the Stars with Known Periods
Not Available

The local system of early type stars - Spatial extent and kinematics
Published uvby and H-beta photometric data and proper motions are compiled and analyzed to characterize the structure and kinematics of the bright early-type O-A0 stars in the solar vicinity, with a focus on the Gould belt. The selection and calibration techniques are explained, and the data are presented in extensive tables and graphs and discussed in detail. The Gould belt stars of age less than 20 Myr are shown to give a belt inclination of 19 deg to the Galactic plane and a node-line orientation in the direction of Galactic rotation, while the symmetrical distribution about the Galactic plane and kinematic properties (pure circular differential rotation) of the belt stars over 60 Myr old resemble those of fainter nonbelt stars of all ages. The unresolved discrepancy between the expansion observed in the youngest nearby stars and the predictions of simple models of expansion from a point is attributed to the inhomogeneous distribution of interstellar matter.

Frequency of Bp-Ap stars among spectroscopic binaries
Improving previous studies with more numerous published values, a reexamination has been conducted concerning the binary frequency for Bp-Ap stars, pointing out differences with normal stars for Si, Si-Cr, and Si-Sr stars as well as He-weak stars, but not for the Hg-Mn and the coolest Ap stars. The period and the eccentricity distributions for Bp-Ap stars have been analyzed and compared to normal stars of various spectral types. Remarkably, this analysis reveals a great deficiency among low eccentricity systems for all the peculiar stars, except the Hg-Mn ones. Also discussed is the synchronism for the systems for which the photometric period is known. Finally, the value of the parameter Δ(V1-G) of the Geneva photometry, which is a measurement of the 5200 Å depression, is compared for different binary systems.
{}
The ratio of the energy of a photon...

Question # The ratio of the energy of a photon of 2000 Å wavelength radiation to that of 4000 Å radiation is (A) 1/4 (B) 4 (C) 1/2 (D) 2

NEET/Medical Exams Chemistry Solution

$$\frac{\varepsilon_{1}}{\varepsilon_{2}}=\frac{hc/2000}{hc/4000}=\frac{4000}{2000}=2$$

So the answer is (D) 2.
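A quick check of the $E = hc/\lambda$ relation (a minimal sketch; the constant values and function name are my own, and any consistent units give the same ratio):

```python
h = 6.626e-34   # Planck constant, J*s
c = 3.0e8       # speed of light, m/s

def photon_energy(wavelength_m):
    """Photon energy E = h*c/lambda, in joules."""
    return h * c / wavelength_m

ratio = photon_energy(2000e-10) / photon_energy(4000e-10)  # 1 Angstrom = 1e-10 m
print(ratio)  # -> 2.0
```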
{}
# American Institute of Mathematical Sciences

2016, 6(2): 153-160. doi: 10.3934/naco.2016005

## Solving Malfatti's high dimensional problem by global optimization

1 Institute for Systems Dynamics and Control Theory, SB of RAS, 664033 Irkutsk, Russian Federation

Received October 2015. Revised April 2016. Published June 2016.

We generalize Malfatti's problem, which dates back 200 years, as a global optimization problem in a high dimensional space. The problem has been formulated as a convex maximization problem over a nonconvex set. The global optimality condition by Strekalovsky [11] has been applied to this problem. For solving Malfatti's problem numerically, we propose the algorithm in [3], which converges globally. Some computational results are provided.

Citation: Rentsen Enkhbat, M. V. Barkova, A. S. Strekalovsky. Solving Malfatti's high dimensional problem by global optimization. Numerical Algebra, Control and Optimization, 2016, 6 (2) : 153-160. doi: 10.3934/naco.2016005

##### References:

[1] M. Andreatta, A. Bezdek and J. P. Boroński, The problem of Malfatti: Two centuries of debate, The Mathematical Intelligencer, 33 (2011), 72-76. doi: 10.1007/s00283-010-9154-7.
[2] R. Enkhbat, Global optimization approach to Malfatti's problem, accepted for JOGO, to appear in 2016.
[3] R. Enkhbat, An algorithm for maximizing a convex function over a simple set, Journal of Global Optimization, 8 (1996), 379-391. doi: 10.1007/BF02403999.
[4] H. Gabai and E. Liban, On Goldberg's inequality associated with the Malfatti problem, Math. Mag., 41 (1967), 251-252.
[5] M. Goldberg, On the original Malfatti problem, Math. Mag., 40 (1967), 241-247.
[6] G. A. Los, Malfatti's Optimization Problem [in Russian], Dep. Ukr. NIINTI, July 5, 1988.
[7] H. Lob and H. W. Richmond, On the solutions of the Malfatti problem for a triangle, Proc. London Math. Soc., 2 (1930), 287-301. doi: 10.1112/plms/s2-30.1.287.
[8] C. Malfatti, Memoria sopra un problema stereotomico, Memoria di Matematica e di Fisica della Società Italiana delle Scienze, 10 (1803), 235-244.
[9] V. N. Nefedov, Finding the global maximum of a function of several variables on a set given by inequality constraints, Journal of Numerical Mathematics and Mathematical Physics, 27 (1987), 35-51.
[10] T. Saaty, Integer Optimization Methods and Related Extremal Problems [Russian translation], Nauka, Moscow, 1973.
[11] A. S. Strekalovsky, On the global extrema problem, Soviet Math. Doklad, 292 (1987), 1062-1066.
[12] H. Tverberg, A generalization of Radon's theorem, Journal of the London Mathematical Society, 41 (1966), 123-128.
[13] V. A. Zalgaller, An inequality for acute triangles, Ukr. Geom. Sb., 34 (1991), 10-25.
[14] V. A. Zalgaller and G. A. Los, The solution of Malfatti's problem, Journal of Mathematical Sciences, 72 (1994), 3163-3177. doi: 10.1007/BF01249514.
{}
# Where would the energy from a hydroelectric dam be diverted from? A hydroelectric dam converts mechanical energy (gravity acting against water) into electrical energy. What form would this energy take if the dam did not exist? Would it be thermal energy (from friction as the water flows)? Would it be purely mechanical (the dam reducing the water's momentum)? A solar panel, for example, converts radiant energy (from the sun) into electrical energy — diverting it from the ground that it occludes (likely becoming thermal energy). But I'm having difficulty figuring out the equivalent for a hydroelectric dam.
{}
# Give an example of a subset of $\mathbb{R}^2$ which is path connected but not locally connected. [closed]

Give an example of a subset of $\mathbb{R}^2$ which is path connected, but no point has a local base of connected sets.

• Do you mean that no point has a local base of connected neighborhoods? – Stefan Hamcke Sep 20 '13 at 16:37

$$I\times\{0\} \cup \left(\{0\}\cup\left\{\frac1n\ \middle|\ n\in\Bbb N\right\}\right)\times I$$

If you want a space where no point has a local base of connected sets, consider the following. Let $U_{n,q}$ denote the line segment joining $(n+q,n)$ to $(n+q,n+1)$, and let $R_{n,q}$ denote the segment between $(n+1,n+q)$ and $(n+2,n+q)$. Define $Z$ as the union $$Z=\bigcup \{R_{n,q}, U_{n,q}\mid n\in\Bbb Z,\ q\in\Bbb Q\cap I\}$$ This space is path-connected, but not locally connected at any point.

• Does anyone know how to enlarge the delimiter " $|$ " in the set notation? – Stefan Hamcke Sep 20 '13 at 16:03
• \middle|. It's worth noting that this is the so-called comb space. – Ayman Hourieh Sep 20 '13 at 16:11
• You're welcome! – Ayman Hourieh Sep 20 '13 at 16:13
• @Mathemagician1234 My example is not homeomorphic to the comb space (for example the comb space is compact). You can also take a cone on the rationals, or similar things. – ronno Sep 20 '13 at 16:19
• excuse me, the statement was wrong! – Jarbas Dantas Silva Sep 20 '13 at 16:27

$$\{(x,y) \in I \times I \mid x \in \Bbb{Q} \text{ or } y = 0\}$$

• excuse me, the statement was wrong! – Jarbas Dantas Silva Sep 20 '13 at 16:28

$$\bigcup_{a \in ([0,1]\cap\Bbb Q)\times\{0\}} \{\, a(1-t)+p_1 t \mid t \in (0,1) \,\} \;\cup\; \bigcup_{b \in \{0\}\times([0,1]\cap\Bbb Q)} \{\, b(1-t)+p_2 t \mid t \in (0,1) \,\}, \quad p_1=(0,1),\ p_2=(0,0) \in \Bbb{R}^2$$
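For intuition, the first answer's comb-like space is easy to draw; here is a small matplotlib sketch (the cutoff of 50 teeth is an arbitrary choice, since the full space has infinitely many):

```python
import matplotlib.pyplot as plt

# Base of the comb: the segment I x {0}.
plt.plot([0, 1], [0, 0], color="black")

# Teeth: the segments {0} x I and {1/n} x I for n = 1..50.
plt.plot([0, 0], [0, 1], color="black")
for n in range(1, 51):
    plt.plot([1 / n, 1 / n], [0, 1], color="black", linewidth=0.5)

# Near any point of {0} x (0, 1], every small neighbourhood meets infinitely
# many separate teeth, so no small connected neighbourhoods exist there.
plt.title("Comb-like space: path connected, not locally connected")
plt.show()
```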
{}
# Text at bottom of last page

I am working on a template in LO Writer and I'd like to have text (a signature line) at the bottom of the very last page. Above the text is a table which is automatically filled with often long texts that can cause page breaks. The solution should therefore work regardless of the number of pages. The last page might be anywhere from the first page to the 100+th page. The last page will likely still have content (the table) other than the signature line.

I cannot use endnotes (those appear at the top of the page) or a footer (that appears on every page, and I can't use a different page style for the last page because it might also be the first page; the document is automatically generated) or anchor the text to a page (that puts it on just one page, and you have to specify which one), and frankly, I'm at a loss. Any ideas?

Sort by » oldest newest most voted

You may insert a frame:

• anchored to paragraph, with properties :: type :: position :: vertical bottom to page text area

For "power users", you can manage a protection of the "last line" like this:

• Insert a section as the very last object in your text, in such a way that there is no paragraph behind it. That way nobody is able to write below the section.
• Insert the frame as mentioned above.
• Protect the section - done.

Unfortunately, this does not work for me; it stays on the first page. ( 2018-05-23 18:35:16 +0200 )

The trick is to anchor the frame to a dedicated paragraph. See my answer update. ( 2018-05-23 22:33:02 +0200 )

I tried it again but it did not work. Then I tried anchoring it to character rather than to paragraph, and it worked! Everything else in your answer was very helpful and accurate, but I'm afraid you must have confused it there; now all is well. I'd appreciate it if you could correct that in your answer so I can mark it as correct. I also noticed that this solution also works with a Textbox. Thank you very much for your help! I couldn't have done it without you. ( 2018-05-24 08:22:40 +0200 )

Anchoring as character instead of to paragraph sometimes gives better results. It depends on circumstances, but it may prove "dangerous" when you edit the paragraph: erasing the anchor location will delete the frame (which does not happen with "to paragraph"). Once again, choosing the right anchor is a matter of experimenting, due to the effective local formatting and context. ( 2018-05-24 13:03:26 +0200 )

Wrt improving my answer: 1) do you suggest I mention the character anchor? 2) where is my confusion? Initially I thought only of automating signature insertion. Should I keep only the alignment issue? ( 2018-05-24 13:05:51 +0200 )

Three directions:

• If you really design a template in the LO sense, i.e. a file with extension .ott, just put your "signature" paragraph as the last element in the template. When you "instantiate" the template, begin to type above your "signature" or, if you already have more fixed text in your template (TOC or index placeholder, tables, copyright, …), where the logical start of your document is. If you don't need the "signature" in the document, delete it (it won't affect the template).
• Define an "AutoText" for your "signature" and trigger its expansion when you need it.
• Use the replacement capabilities of Tools>AutoCorrect>AutoCorrect Options, Replace tab: define a new pattern in Replace and your "signature" in With. After that, when you type the pattern, it is automatically replaced by your "signature".
Note you will need to type the final Return to end the signature paragraph.

EDIT to cope with the alignment specification:

In your template, add an empty paragraph in the very last position. Adjust the spacing properties and font size so that it is as unobtrusive as possible. With the cursor inside, Insert>Frame, anchored To paragraph. Position properties are: Vertical Bottom to Page Text Area, Horizontal as you see fit (Center is quite common for a "signature"). Width in Size should be large enough for your purpose. Do not forget to remove the default border. Type your signature inside the frame.

The frame is automatically aligned at the bottom of the last page. Tune the properties of the last empty paragraph to avoid undesired effects such as the "signature" being shifted to a new page although there is enough room for it on the preceding page (this is an effect of paragraph spacing).

If this answer helped you, please accept it by clicking the check mark ✔ to the left and, karma permitting, upvote it. If this resolves your problem, close the question; that will help other people with the same question.

Please take into account that I need the signature line at the bottom of the last page, not just on the last page. Even if there are only two or three words on the page (other than the ones with the signature line), the signature line has to be at the very bottom. Also, once I have created the template, I will not be instantiating the template manually; the process will be automated. ( 2018-05-23 18:27:54 +0200 )

@wowza42 I edited my first solution, adding a protected section as the very last part of the text. How to fix the section as the last part:

• Type some letters in the last paragraph, mark the entire last paragraph, then go to menu Insert → Section.

If you fail:

• Delete the last paragraph of text by going to the last paragraph of the section, then hit CTRL+SHIFT+DEL.

The way to break that: hit ALT+ENTER in the last paragraph of the section; then you create a paragraph below, even if the section is protected. ( 2018-05-24 14:06:27 +0200 )

@wowza42 wrote: "Also, once I have created the template, I will not be instantiating the template manually, the process will be automated." I have no clue what to do then and what automation you are going to start... ( 2018-05-24 14:10:29 +0200 )
{}
# Interesting right? Algebra Level 4

How many 0's are there between the decimal point and the first non-zero digit of $$\left(\dfrac{1}{2}\right)^{1000}$$?

Details and Assumptions

• You may use the fact that $$\log_{10} 5 \approx 0.69897$$, correct up to 5 decimal places.
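Since $(1/2)^{1000} = 5^{1000}/10^{1000}$, the digit count of $5^{1000}$ settles the question exactly; here is a short integer-arithmetic check in Python (no floating point involved):

```python
# (1/2)^1000 = 5^1000 / 10^1000, i.e. the integer 5^1000 shifted 1000
# decimal places to the right of the decimal point.
digits_of_5_pow = len(str(5 ** 1000))     # floor(1000*log10(5)) + 1 = 699
leading_zeros = 1000 - digits_of_5_pow    # zeros before the first non-zero digit
print(digits_of_5_pow, leading_zeros)     # -> 699 301
```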
{}
[Java applet: pendulumSystem, 500 by 350 pixels]

You are welcome to check out [url=http://www.phy.ntnu.edu.tw/ntnujava/index.php?topic=1116.0]Force analysis of a pendulum[/url]

How to change parameters?

1. Set the initial position: click and drag the left mouse button. The horizontal position of the pendulum will follow the mouse. Animation starts when you release the mouse button.
2. Set the length: drag the pointer (while holding down the left button) from the support point (red dot) to a position that sets the length you want. Animation starts when you release the mouse button.
3. Change gravity g: click near the tip of the red arrow and drag the mouse up or down.
4. Change the mass of the bob: click near the bottom of the black stick and drag the mouse up or down.

Information displayed:

1. red dots: kinetic energy K = m v*v/2 of the bob
2. blue dots: potential energy U = m g h of the bob. Try to find out the relation between kinetic energy and potential energy!
3. black dots (in pairs) represent the period T of the pendulum. Move the mouse to a dot to display information for that dot in the text field.

The blue arrow is gravity, the two green arrows are the components of gravity, and the red arrow is the velocity of the bob. Try to compare the velocity and the tangential component of the gravitational force!

The calculation is in real time (using the Runge-Kutta 4th order method). The period T is calculated when the velocity changes direction. You can produce a period-versus-angle (T - X) curve on the screen: just start at different positions and wait for a few seconds.

Theoretically, the angular frequency of a pendulum is $\omega=\sqrt{g/L}$. Purpose of this applet:

1. The period of the pendulum depends mostly on the length of the pendulum and the gravity (which is normally a constant).
2. The period of the pendulum is independent of the mass.
3. The variation of the period due to the initial angle is very small.

The equation of motion for a pendulum is $\frac{d^2\theta}{dt^2}=-\frac{g}{L}\, \sin\theta$. When the angle is small, $\theta \ll 1$, we have $\sin\theta\approx \theta$, so the above equation becomes $\frac{d^2\theta}{dt^2}\approx-\frac{g}{L}\, \theta$, which implies it is approximately a simple harmonic motion with period $T=2\pi \sqrt{\frac{L}{g}}$. What is the error introduced in the above approximation?
From Taylor's expansion, $\sin\theta=\theta-\frac{\theta^3}{3!}+\frac{\theta^5}{5!}-\frac{\theta^7}{7!}+\frac{\theta^9}{9!}-\frac{\theta^{11}}{11!}+...$

For the first order approximation, the error is $\frac{\theta^3}{3!}=\frac{\theta^3}{6}$, so the relative error (error in percentage) is $\frac{\theta^3/6}{\theta}=\frac{\theta^2}{6}$.

If the angle is 5 degrees, then $\theta=5\pi/180\approx 5/60=1/12$ (taking $\pi\approx 3$), so the relative error is $\frac{\theta^2}{6}=\frac{1}{12^2 \times 6}=\frac{1}{864}\approx 0.00116$.

For angle = 5 degrees, the relative error is less than 0.116%.
For angle = 10 degrees, the relative error is less than 0.463%.
For angle = 20 degrees, the relative error is less than 1.85%.

So the period of the pendulum is almost independent of the initial angle (the error is relatively small unless the angle is much larger than 20 degrees, which gives more than 2% error).

So sad.. still can't find the source code. :( Do you have any other way?
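The applet itself is not available here, but the period computation it describes is easy to reproduce. Below is a minimal Python sketch of the same idea: RK4 integration of θ'' = −(g/L) sin θ, with the period detected where the angle changes sign (the parameter values and function name are my own choices, not taken from the applet):

```python
import math

def pendulum_period(theta0, g=9.8, L=1.0, dt=1e-4):
    """Integrate theta'' = -(g/L) sin(theta) with RK4, starting from rest at
    angle theta0 (radians); return 4x the time to first reach theta = 0."""
    def deriv(theta, omega):
        return omega, -(g / L) * math.sin(theta)

    theta, omega, t = theta0, 0.0, 0.0
    while theta > 0.0:                    # a quarter period ends when theta crosses 0
        k1t, k1o = deriv(theta, omega)
        k2t, k2o = deriv(theta + 0.5*dt*k1t, omega + 0.5*dt*k1o)
        k3t, k3o = deriv(theta + 0.5*dt*k2t, omega + 0.5*dt*k2o)
        k4t, k4o = deriv(theta + dt*k3t, omega + dt*k3o)
        theta += dt/6 * (k1t + 2*k2t + 2*k3t + k4t)
        omega += dt/6 * (k1o + 2*k2o + 2*k3o + k4o)
        t += dt
    return 4 * t

T0 = 2 * math.pi * math.sqrt(1.0 / 9.8)   # small-angle period 2*pi*sqrt(L/g)
for deg in (5, 10, 20):
    T = pendulum_period(math.radians(deg))
    print(deg, "deg:", round((T / T0 - 1) * 100, 3), "% longer than the small-angle period")
```

Running this shows the same qualitative conclusion as the text: the deviation of the true period from $2\pi\sqrt{L/g}$ stays well under 1% even at 20 degrees.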
{}
## Loren on the Art of MATLAB
Turn ideas into MATLAB

Note: Loren on the Art of MATLAB has been retired and will not be updated.

# A Glimpse into Floating-Point Accuracy

There are frequent posts on the MATLAB newsgroup as well as lots of questions posed to Technical Support about the floating point accuracy of MATLAB. Many people think they have found a bug when some seemingly simple arithmetic doesn't give the intuitive answer. The issue that arises has to do with the finite number of bits that a computational device uses to store and operate on values.

### Calculator Demo

Have you ever impatiently waited for an elevator and continued pushing the UP button in the hopes that the elevator will arrive faster? If so, you may also have impatiently pushed buttons repetitively on a calculator (assuming you know what a calculator is!). It's not just MATLAB that computes values in a way that seems to defy our expectations sometimes.

clear
format long

Here's what I tried. I put the number 7 into my calculator. I think it calculates in single precision, but I'm not sure. It shows 7 digits after the decimal point when I take the square root, and doing it several times, I get

sqrts7inSingle = single([7 2.6457513 1.6265765 1.275373])

sqrts7inSingle =
   7.0000000   2.6457512   1.6265765   1.2753730

Next, I take the final number and square it 3 times, yielding

squaringSqrt7inSingle = single([1.275373 1.6265762 2.6457501 6.99999935])

squaringSqrt7inSingle =
   1.2753730   1.6265762   2.6457500   6.9999995

Compare the difference between the original and final values.

sqrts7inSingle(1)-squaringSqrt7inSingle(end)

ans =
   4.7683716e-007

This number turns out to be very interesting in this context. Compare it to the distance between a single precision value of 7 and its next closest value, also known as the floating-point relative accuracy.

eps(single(7))

ans =
   4.7683716e-007

We get the same value! So, we can demonstrate the effects of finite storage size for values on a handheld calculator. Let's try the equivalent in MATLAB now.

### Accuracy in MATLAB

Let's do a similar experiment in MATLAB using single precision so we have a similar situation to the one when I used the calculator. In this case, I am going to keep taking the square root until the value reaches 1.

num = single(7);
count = 0;
tol = eps('single');
dnum = sqrt(num);
while abs(dnum-1)>=tol
    count = count+1;
    dnum = sqrt(dnum);
end
count

count =
    23

So, it took 23 iterations before we reached the number 1. But now we're in a funny situation. There is no way to square exactly 1 and ever reach our original value of 7.

### Using Double Precision

Let's repeat the same experiment from the calculator in MATLAB using double precision.

num = 7;
sqrtd = sqrt(num);
sqrtd(2) = sqrt(sqrtd(1));
sqrtd(3) = sqrt(sqrtd(2))
prods(1) = sqrtd(end);
prods(2) = prods(1)^2;
prods(3) = prods(2)^2
finalNum = prods(3)^2

sqrtd =
   2.64575131106459   1.62657656169779   1.27537310685845

prods =
   1.27537310685845   1.62657656169779   2.64575131106459

finalNum =
   7.00000000000001

Find the difference between finalNum and num and compare to the proper relative floating-point accuracy:

diffNum = finalNum-num
acc = eps(num)*6
diffNum-acc

diffNum =
   5.329070518200751e-015

acc =
   5.329070518200751e-015

ans =
     0

Why did I use 6 when I calculated acc? Because I performed 6 floating point operations to get my final answer: 3 sqrt applications, followed by squaring the answers 3 times.
### Typical MATLAB Pitfall

The most common MATLAB pitfall I run across is when users check equality of values generated from the : operator. The code starts innocently enough. I've actually turned the logic around here a bit, so we will print out values in the loop where the loop counter may not be exactly what the user expects.

format short
for ind = 0:.1:1;
    if ind ~= fix(10*ind)/10
        disp(ind - fix(10*ind)/10)
    end
end

  5.5511e-017

  1.1102e-016

  1.1102e-016

And then the question is, why do any of the values print out? It's because computers can't represent all numbers exactly given a fixed storage size such as double precision.

### Conclusions

This was a very simplified explanation about floating point, but one, I hope, that is easily understood. Let me know.

### References

Here are some pointers to more resources.

Published with MATLAB® 7.2
{}
# Thread: Size of the intersection set

1. ## Size of the intersection set

Hello guys, I am thinking about a problem involving intersections of sets. If I have, for example, two sets A and B, and |A| is the number of elements in A and |B| the number of elements in B, then |A ∩ B| is the number of elements in the intersection A ∩ B. Now, if I add a third set C, with |C| elements, the number of elements in A ∩ B ∩ C will naturally be smaller than (or equal to) that in A ∩ B. And if I add more sets, the number of elements in the intersection will keep decreasing.

Are there any researches / theorems about the rate of decrease? I mean, if I take sets with elements that are chosen at random, and I keep adding them, how strongly will the number of elements in the intersection converge to zero? Any insights / references will be most appreciated...

2. ## Re: Size of the intersection set

Well, it depends on what elements $C$ and $A \cap B$ have in common. You can't show that the intersection converges to zero, because each additional set might as well be equal to the intersection of A and B. If A and B are fixed and are subsets of a universal set U, and if C, D, ... are subsets of fixed size, that's an entirely different question. Try small cases first.

3. ## Re: Size of the intersection set

What I am trying to prove is slightly different. Let's say I have a set U which is a subset of the integer set Z. U contains around 10,000 elements. Now I take a subset of U, let's call it A, and it has (just for illustration) 3000 elements. In the next step I take another subset B, and it has 2700 elements. I choose the elements of the subsets at random (!). Now I look at the intersection of A and B; it will contain (for example) 300 elements. In the next step I take another random subset, C, and look at the intersection of A, B and C. My intuition says that if I keep doing that, eventually the generalized intersection will converge to the empty set. In the real-world problem from which I took this challenge, I have seen the size of the generalized intersection decrease quickly, to 25 elements for only 3 sets. I do not have the 4th set just yet, but I am trying to find a mathematical explanation for my intuition.

4. ## Re: Size of the intersection set

Ohh I see... one way to think of it: if an element n appears in the intersection of sets A, B, C, it must be contained in each of A, B, C. What is the probability of that happening? I haven't thought about this for much time, but it looks like you'll probably get some sort of probability distribution based on the number and size of your subsets, and the size of the universal set U. Yes, intuitively, the intersection should "converge" to the empty set.
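For uniformly random subsets the decay is easy to quantify: a fixed element of U lies in a random subset of size a_i with probability a_i/|U|, independently across subsets, so the expected intersection size after k subsets is |U| times the product of the a_i/|U| ratios, which shrinks geometrically. A quick simulation (the subset sizes below are arbitrary choices loosely matching the numbers in the thread):

```python
import random

U = range(10_000)
sizes = [3000, 2700, 2500, 2000, 1800]   # sizes of the random subsets A, B, C, ...

random.seed(1)
common = set(U)
expected = len(U)
for k, s in enumerate(sizes, start=1):
    common &= set(random.sample(U, s))   # intersect with a fresh random subset
    expected *= s / len(U)               # |U| * prod(a_i / |U|)
    print(f"after {k} subsets: observed {len(common)}, expected {expected:.1f}")
```

Both columns fall off geometrically, which matches the intuition in the thread that the generalized intersection heads toward the empty set.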
{}
# Homework Help: Dropping a package with initial velocity

1. Jan 27, 2015

### Calpalned

1. The problem statement, all variables and given/known data
A helicopter is ascending vertically with a speed of 5.30 m/s. At a height of 100 m above the Earth, a package is dropped from a window. How much time does it take for the package to reach the ground? [Hint: $v_0$ for the package equals the speed of the helicopter.]

2. Relevant equations
1) $v_f^2 = v_0^2 + 2a\Delta x$
2) $v_f = v_0 + at$
3) $x_f = x_0 + v_0 t + \frac{1}{2}at^2$

3. The attempt at a solution
Using the third equation with $\Delta x = x_f - x_0 = -100$, $v_0 = 5.3$ m/s and acceleration $a = -9.81$, and the quadratic formula, I get the right answer of t = 5.09 seconds. However, I don't get why this other, seemingly logical, method does not work. Using equation one, I substitute $v_0$ with 5.3, $a$ with -9.8 and $\Delta x$ with -100. This leads to $v_f^2 = 5.3^2 + 2(-9.8)(-100)$, so $v_f = -44.598$ m/s. Plugging this value into the second equation I get $-44.58 = 5.3 - 9.8t$, so t equals 4.09 seconds. What did I do wrong?

2. Jan 27, 2015

### Nathanael

$\frac{44.58+5.3}{9.8}=5.09$ seconds

3. Jan 27, 2015

### Calpalned

Thank you so much! I see: I subtracted incorrectly.
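A quick numeric cross-check of both routes (a sketch only; it just follows the thread's numbers, g = 9.8 m/s², v0 = 5.3 m/s, Δx = −100 m):

```python
import math

v0, a, dx = 5.3, -9.8, -100.0

# Route 1: solve dx = v0*t + 0.5*a*t^2 for t with the quadratic formula,
# i.e. 0.5*a*t^2 + v0*t - dx = 0; take the positive root.
disc = v0**2 - 4 * (0.5 * a) * (-dx)
t = (-v0 - math.sqrt(disc)) / (2 * 0.5 * a)
print(t)                                       # ~5.09 s

# Route 2: v_f from v_f^2 = v0^2 + 2*a*dx (negative root: moving downward),
# then t from v_f = v0 + a*t.
vf = -math.sqrt(v0**2 + 2 * a * dx)
print((vf - v0) / a)                           # also ~5.09 s, as Nathanael notes
```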
{}
# Mathematical Methods in the Physical Sciences by Mary L. Boas

## For those who have used this book

1. ### Strongly Recommend
65.5%
2. ### Lightly Recommend
27.6%
3. ### Lightly don't Recommend
3.4%
4. ### Strongly don't Recommend
3.4%

1. Jan 19, 2013

### Greg Bernhardt

Code (Text):
[LIST]
[*] Infinite Series, Power Series
 [LIST]
 [*] The Geometric Series
 [*] Definitions and notation
 [*] Applications of Series
 [*] Convergent and divergent series
 [*] Testing Series for Convergence; The Preliminary Test
 [*] Tests for convergence of series of positive terms; absolute convergence
 [*] Alternating series
 [*] Conditionally convergent series
 [*] Power series; interval of convergence
 [*] Expanding functions in power series
 [*] Techniques for obtaining power series expansions
 [*] Questions of convergence and accuracy in computations
 [*] Some uses of series
 [*] Miscellaneous problems
 [/LIST]
[*] Complex Numbers
 [LIST]
 [*] Introduction
 [*] Real and imaginary parts of a complex number
 [*] The complex plane
 [*] Terminology and notation
 [*] Complex algebra
 [*] Complex infinite series
 [*] Complex power series; circle of convergence
 [*] Elementary functions of complex numbers
 [*] Euler's formula
 [*] Powers and roots of complex numbers
 [*] The exponential and trigonometric functions
 [*] Hyperbolic functions
 [*] Logarithms
 [*] Complex roots and powers
 [*] Inverse trigonometric and hyperbolic functions
 [*] Some applications
 [*] Miscellaneous problems
 [/LIST]
[*] Linear Equations; Vectors, Matrices and Determinants
 [LIST]
 [*] Introduction
 [*] Sets of linear equations, row reduction
 [*] Determinants; Cramer's rule
 [*] Vectors
 [*] Lines and planes
 [*] Matrix operations
 [*] Linear combinations, linear functions, linear operators
 [*] General theory of sets of linear equations
 [*] Special matrices
 [*] Miscellaneous problems
 [/LIST]
[*] Partial Differentiation
 [LIST]
 [*] Introduction and notation
 [*] Power series in two variables
 [*] Total differentials
 [*] Approximate calculations using differentials
 [*] Chain rule or differentiating a function of a function
 [*] Implicit differentiation
 [*] More chain rule
 [*] Application of partial differentiation to maximum and minimum problems
 [*] Maximum and minimum problems with constraints; Lagrange multipliers
 [*] Endpoint or boundary point problems
 [*] Change of variables
 [*] Differentiation of integrals; Leibniz' rule
 [*] Miscellaneous problems
 [/LIST]
[*] Multiple Integrals; Applications of Integration
 [LIST]
 [*] Introduction
 [*] Double and Triple Integrals
 [*] Applications of Integration; Single and Multiple Integrals
 [*] Change of Variables in Integrals; Jacobians
 [*] Surface Integrals
 [*] Miscellaneous Problems
 [/LIST]
[*] Vector Analysis
 [LIST]
 [*] Introduction
 [*] Applications of vector multiplication
 [*] Triple products
 [*] Differentiation of vectors
 [*] Fields
 [*] Some other expressions involving $\nabla$
 [*] Line integrals
 [*] Green's theorem in the plane
 [*] The divergence and the divergence theorem
 [*] The curl and Stokes' theorem
 [*] Miscellaneous problems
 [/LIST]
[*] Fourier Series
 [LIST]
 [*] Introduction
 [*] Simple harmonic motion and wave motion; periodic functions
 [*] Applications of Fourier series
 [*] Average value of a function
 [*] Fourier coefficients
 [*] Dirichlet conditions
 [*] Complex form of Fourier series
 [*] Other intervals
 [*] Even and odd functions
 [*] An application to sound
 [*] Parseval's theorem
 [*] Miscellaneous problems
 [/LIST]
[*] Ordinary Differential Equations
 [LIST]
 [*] Introduction
 [*] Separable equations
 [*] Linear first-order equations
 [*] Other methods for first order equations
 [*] Second-order linear
 equations with constant coefficients and zero right-hand side
 [*] Second-order linear equations with constant coefficients and right-hand side not zero
 [*] Other second-order equations
 [*] Miscellaneous problems
 [/LIST]
[*] Calculus of Variations
 [LIST]
 [*] Introduction
 [*] The Euler equation
 [*] Using the Euler equation
 [*] The brachistochrone problem; cycloids
 [*] Several dependent variables; Lagrange's equations
 [*] Isoperimetric problems
 [*] Variational notation
 [*] Miscellaneous problems
 [/LIST]
[*] Coordinate Transformations; Tensor Analysis
 [LIST]
 [*] Introduction
 [*] Linear transformations
 [*] Orthogonal transformations
 [*] Eigenvalues and eigenvectors; diagonalizing matrices
 [*] Applications of diagonalization
 [*] Curvilinear coordinates
 [*] Scale factors and basis vectors for orthogonal systems
 [*] General curvilinear coordinates
 [*] Vector operators in orthogonal curvilinear coordinates
 [*] Tensor analysis - introduction
 [*] Cartesian tensors
 [*] General coordinate systems
 [*] Vector operations in tensor notation
 [*] Miscellaneous problems
 [/LIST]
[*] Gamma, Beta, and Error Functions; Asymptotic Series; Stirling's Formula; Elliptic Integrals and Functions
 [LIST]
 [*] Introduction
 [*] The factorial function
 [*] Definition of the gamma function; recursion relation
 [*] The gamma function of negative numbers
 [*] Some important formulas involving gamma functions
 [*] Beta functions
 [*] The relation between the beta and gamma functions
 [*] The simple pendulum
 [*] The error function
 [*] Asymptotic series
 [*] Stirling's formula
 [*] Elliptic integrals and functions
 [*] Miscellaneous problems
 [/LIST]
[*] Series Solutions of Differential Equations; Legendre Polynomials; Bessel Functions; Sets of Orthogonal Functions
 [LIST]
 [*] Introduction
 [*] Legendre's equation
 [*] Leibniz' rule for differentiating products
 [*] Rodrigues' formula
 [*] Generating function for Legendre polynomials
 [*] Complete sets of orthogonal functions
 [*] Orthogonality of the Legendre polynomials
 [*] Normalization of the Legendre polynomials
 [*] Legendre series
 [*] The associated Legendre functions
 [*] Generalized power series or the method of Frobenius
 [*] Bessel's equation
 [*] The second solution of Bessel's equation
 [*] Tables, graphs, and zeros of Bessel functions
 [*] Recursion relations
 [*] A general differential equation having Bessel functions as solutions
 [*] Other kinds of Bessel functions
 [*] The lengthening pendulum
 [*] Orthogonality of Bessel functions
 [*] Approximate formulas for Bessel functions
 [*] Hermite functions; Laguerre functions; ladder operators
 [*] Miscellaneous problems
 [/LIST]
[*] Partial Differential Equations
 [LIST]
 [*] Introduction
 [*] Laplace's equation; steady-state temperature in a rectangular plate
 [*] The diffusion or heat flow equation; heat flow in a bar or slab
 [*] The wave equation; the vibrating string
 [*] Steady-state temperature in a cylinder
 [*] Vibration of a circular membrane
 [*] Steady-state temperature in a sphere
 [*] Poisson's equation
 [*] Miscellaneous problems
 [/LIST]
[*] Functions of a complex variable
 [LIST]
 [*] Introduction
 [*] Analytic functions
 [*] Contour integrals
 [*] Laurent series
 [*] The residue theorem
 [*] Methods of finding residues
 [*] Evaluation of definite integrals by use of the residue theorem
 [*] The point at infinity; residues at infinity
 [*] Mapping
 [*] Some applications of conformal mapping
 [*] Miscellaneous problems
 [/LIST]
[*] Integral Transforms
 [LIST]
 [*] Introduction
 [*] The Laplace transform
 [*] Solutions of differential equations by Laplace
 transforms
 [*] Fourier transforms
 [*] Convolution; Parseval's theorem
 [*] Inverse Laplace transform (Bromwich integral)
 [*] The Dirac delta function
 [*] Green functions
 [*] Integral transform solutions of partial differential equations
 [*] Miscellaneous problems
 [/LIST]
[*] Probability
 [LIST]
 [*] Introduction; definition of probability
 [*] Sample space
 [*] Probability theorems
 [*] Methods of counting
 [*] Random variables
 [*] Continuous distributions
 [*] Binomial distribution
 [*] The normal or Gaussian distribution
 [*] The Poisson distribution
 [*] Applications to experimental measurements
 [*] Miscellaneous problems
 [/LIST]
[*] References
[*] Bibliography
[*] Index
[/LIST]

Last edited: May 6, 2017

2. Jan 23, 2013

### ZapperZ Staff Emeritus

3. Apr 22, 2013

I already have a hard copy of this book and am looking for a pdf version. Is there a place where I could purchase the soft version?

4. Apr 23, 2013

### sandy.bridge

I am getting this book and working through it this summer. As of tomorrow, I am finished with all the math required for my degree, so this will be for fun.

5. Apr 29, 2013

### Greg Bernhardt

Congrats! Let us know what you think when you get the book.

6. Apr 29, 2013

### George Jones Staff Emeritus

Years ago, I used Boas as the text for a Mathematical Methods course that I taught on vector analysis, differential equations, and special functions. After the course was over, a post-doc told me that he lurked outside the classroom door while I taught, and that he thought that I must have spent a lot of time preparing my lectures. I told him that Boas spent a lot of time preparing, and that I just followed her lead.

7. May 18, 2013

I just finished my first year of EE. I have 3 months free and plan to go over this book. How much should I expect to cover in 3 months? Should I set a time limit (e.g. 1 week) per chapter or take as much time as needed to reasonably understand the material?

8. Jun 8, 2013

### sandy.bridge

I, personally, wouldn't recommend setting "deadlines" for chapter completion. This is your time off, and I would exploit this fact by learning at your own pace. Furthermore, I would venture to say that the chapter difficulties are not uniform.

9. Oct 6, 2013

### johnqwertyful

The one complaint about the book is that the publisher messed up the printing pretty badly in places. Difficult to read, sometimes unreadable. Ink blots, cut-off pages, etc. All the boxed formulas/theorems are very dark and hard to read.

10. Oct 6, 2013

### ZapperZ Staff Emeritus

Are you sure you bought the legit version of the text? It sounds like one of those cheap, pirated, illegal copies. I have the 2nd edition, and I've seen and browsed through the 3rd edition. I have never seen anything resembling what you mentioned.

Zz.

11. Oct 6, 2013

### AlephZero

Even professional book printers screw up sometimes and a few pages get trashed. If you bought it from a reputable bookseller, they should exchange it for a properly printed copy free of charge. But as ZZ said, it's very unusual for a "whole book" to be badly printed or bound without somebody noticing there was a problem.

12. Oct 6, 2013

### johnqwertyful

I bought a physical copy off Amazon. A few other people complained about the same thing.

13. Oct 21, 2013

### ajayguhan

Which is better, Kreyszig or Boas?

14. Mar 22, 2014

Would Calc 1 be enough for this book? Or do I need to wait till I teach myself Calc 2 & 3?

15.
Mar 22, 2014

### wakefield

You'll need to know various techniques of integration and it would help to be familiar with sequences and series already.

16. Apr 26, 2016

### RaulTheUCSCSlug

I believe that, with the use of Green's theorem and some other integration techniques, you will need to look at techniques from Calc 3 and to have at least gone through Calc 1 and Calc 2. I know you posted this a long time ago, but perhaps a member reading this will have the same question. Cheers!
{}
struct.cable.state.compression

Syntax

i := struct.cable.state.compression(p)

Get the compression yield state of the cable element. The return value, an integer in the set {0, 1, 2}, denotes never yielded, now yielding, or yielded in the past, respectively.

Returns: i - an integer in the set {0, 1, 2}, denoting never yielded, now yielding, or yielded in the past, respectively

Arguments: p - a pointer to a cable element
{}
## Solve Ques 20

Q.20. A chemical compound having a strong smell of chlorine is used to disinfect water.
(a) Identify the compound.
(b) Write the chemical equation of its preparation.
(c) Write its uses.
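For reference — assuming the standard textbook intent of this question — the compound is bleaching powder (calcium oxychloride, CaOCl2), prepared by passing chlorine gas over dry slaked lime:

Ca(OH)2 + Cl2 → CaOCl2 + H2O

Besides disinfecting drinking water, it is used for bleaching cotton and linen in the textile industry and as an oxidising agent in the chemical industry.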
{}
# Spherical perceptron as a storage memory with limited errors

It has been known for a long time that the classical spherical perceptrons can be used as storage memories. The seminal work of Gardner [Gar88] started an analytical study of perceptrons' storage abilities. Many of Gardner's predictions obtained through statistical mechanics tools have been rigorously justified. Among the most important ones are of course the storage capacities. The first rigorous confirmations were obtained in [SchTir02, SchTir03] for the storage capacity of the so-called positive spherical perceptron. These were later reestablished in [TalBook] and a bit more recently in [StojnicGardGen13]. In this paper we consider a variant of the spherical perceptron that operates as a storage memory but allows for a certain fraction of errors. In Gardner's original work the statistical mechanics predictions in this direction were presented as well. Here, through a mathematically rigorous analysis, we confirm that Gardner's predictions in this direction are in fact provable upper bounds on the true values of the storage capacity. Moreover, we then present a mechanism that can be used to lower these bounds. Numerical results that we present indicate that Gardner's storage capacity predictions may, in a fairly wide range of parameters, be not that far away from the true values.

## 1 Introduction

In this paper we will study a special type of the classical spherical perceptron problem. Of course, spherical perceptrons are a well studied class of problems with applications in various fields, ranging from neural networks and statistical physics/mechanics to high-dimensional geometry and biology. While spherical perceptron-like problems had been known for a long time (for various mathematical versions see, e.g., [19, 11, 18, 35, 34, 9, 16, 6, 33]), it is probably the work of Gardner [12] that brought them into the research spotlight. One would be inclined to believe that the main reason for that was Gardner's ability to quantify many of the features of the spherical perceptrons that were not so easy to handle through the standard mathematical tools typically used in earlier works. Namely, in [12], Gardner introduced a fairly neat type of analysis based on a statistical mechanics approach typically called the replica theory. As a result she was able to quantify almost all of the spherical perceptron's typical features of interest.
While some of the results she obtained were known (for example, the storage capacity with zero thresholds, see, e.g., [19, 11, 18, 35, 34, 9, 16, 6, 33]), many other ones were not (the storage capacity with non-zero thresholds, the typical volume of interaction strengths for which the memory functions properly, the storage capacities of memories with errors, and so on). Moreover, many of the results that she obtained remained as mathematical conjectures (either in the form of those related to quantities which are believed to be the exact predictions or in the form of those related to quantities which may be solid approximations). In recent years some of those that had been believed to be exact have indeed been rigorously proved (see, e.g., [20, 21, 32, 22]) whereas many of those that are believed to be solid approximations have been shown to be at the very least rigorous bounds (see, e.g., [22, 28]).

In this paper we will also look at one of the features of the spherical perceptron. The quantity that we will be interested in in particular is fairly closely related to the well-known storage capacity. Namely, we will indeed attempt to evaluate the storage capacity of the spherical perceptron; however, instead of insisting that all the patterns should be memorized correctly, we will allow for a certain fraction of errors. In other words, we will allow that a certain fraction of patterns can in fact be memorized incorrectly. Throughout the paper, we will often refer to the capacity of such a memory as the storage capacity with errors. Of course, this problem was already studied in [12] and a nice set of observations related to it has already been made there. Here, we will through a mathematically rigorous analysis attempt to confirm many of them. Before going into the details of our approach we will recall the basic definitions related to the spherical perceptron that are needed for its analysis. Also, to make the presentation easier to follow, we find it useful to briefly sketch how the rest of the paper is organized. In Section 2 we will, as mentioned above, introduce a more formal mathematical description of how a perceptron operates. In Section 3 we will present several results that are known for the classical spherical perceptron. In Section 4 we will discuss the storage capacity when errors are allowed. We will recall the known results and later on in Section 5 present a powerful mechanism that can be used to prove that many of the known results are actually rigorous bounds on the quantities of interest. In Section 6 we will then present a further refinement of the mechanism from Section 5 that can be used to potentially lower the values of the storage capacity obtained in Section 5. Finally, in Section 7 we will discuss the obtained results and present several concluding remarks.

## 2 Mathematical setup of a perceptron

To make this part of the presentation easier to follow we will try to introduce all important features of the spherical perceptron that we will need here by closely following what was done in [12] (and, for that matter, in our recent work [22, 28]). So, as in [12], we start with the following dynamics:

$$H_{ik}^{(t+1)}=\textrm{sign}\left(\sum_{j=1,j\neq k}^{n}H_{ij}^{(t)}X_{jk}-T_{ik}\right). \qquad (1)$$

Following [12], for any fixed $k$ we will call each $H_{ik}$ the Ising spin, i.e. $H_{ik}\in\{-1,1\}$. Continuing to follow [12], we will call $X_{jk}$ the interaction strength for the bond from site $j$ to site $k$. To be in complete agreement with [12], we in (1) also introduced the quantities $T_{ik}$; $T_{ik}$ is typically called the threshold for site $k$ in pattern $i$.
However, to make the presentation easier to follow, we will typically assume that $T_{ik}=0$. Without going into further details we will mention, though, that all the results that we will present below can easily be modified so that they include scenarios where $T_{ik}\neq 0$. Now, the dynamics presented in (1) works by moving from $H^{(t)}$ to $H^{(t+1)}$ and so on (of course one assumes an initial configuration, say $H^{(0)}$). Moreover, the above dynamics will have a fixed point if, say, there are strengths $X_{jk}$, $1\leq j,k\leq n$, such that

$$H_{ik}\,\textrm{sign}\left(\sum_{j=1,j\neq k}^{n}H_{ij}X_{jk}-T_{ik}\right)=1$$
$$\Leftrightarrow\quad H_{ik}\left(\sum_{j=1,j\neq k}^{n}H_{ij}X_{jk}-T_{ik}\right)>0,\quad 1\leq i\leq m,\ 1\leq k\leq n. \qquad (2)$$

Of course, the above is a well known property of a very general class of dynamics. In other words, unless one specifies the interaction strengths, the generality of the problem essentially makes it easy. After considering the general scenario introduced above, [12] then proceeded and specialized it to a particular case which amounts to including spherical restrictions on $X$. A more mathematical description of such restrictions considered in [12] essentially boils down to the following constraints:

$$\sum_{j=1}^{n}X_{ji}^{2}=1,\quad 1\leq i\leq n. \qquad (3)$$

The fundamental question that one typically considers then is the so-called storage capacity of the above dynamics, or alternatively of a neural network that it would represent (of course this is exactly one of the questions considered in [12]). Namely, one then asks how many patterns $m$ (the $i$-th pattern being $H_{i,1:n}$) one can store so that there is an assurance that they are stored in a stable way. Moreover, since having patterns being fixed points of the above introduced dynamics is not enough to ensure having a finite basin of attraction, one often may impose a bit stronger threshold condition

$$H_{ik}\,\textrm{sign}\left(\sum_{j=1,j\neq k}^{n}H_{ij}X_{jk}-T_{ik}\right)=1$$
$$\Leftrightarrow\quad H_{ik}\left(\sum_{j=1,j\neq k}^{n}H_{ij}X_{jk}-T_{ik}\right)>\kappa,\quad 1\leq i\leq m,\ 1\leq k\leq n, \qquad (4)$$

where $\kappa$ is typically a positive number. We will refer to a perceptron governed by the above dynamics and coupled with the spherical restrictions and a positive threshold $\kappa$ as the positive spherical perceptron. Alternatively, when $\kappa$ is negative we will refer to it as the negative spherical perceptron (such a perceptron may be more of an interest from a purely mathematical point of view rather than as a neural network concept; nevertheless we will view it as an interesting mathematical problem; consequently, we will on occasion, in addition to the results that we will present for the standard positive perceptron, present quite a few results related to the negative case as well). Also, we should mention that beyond the above mentioned negative case many other variants of the model that we study here are possible from a purely mathematical perspective. Moreover, many of them have found applications in various other fields as well. For example, a nice set of references that contains a collection of results related to various aspects of different neural network models and their bio- and many other applications is [2, 1, 4, 5, 3, 23, 8].
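To make the setup concrete, the sketch below simulates one sweep of the dynamics (1) and the stability check (4) on a small random instance. It is an illustration only: the sizes, the seed, the Gaussian choice of $X$, and the use of a single pattern vector are assumptions for the demo, not part of the model above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
kappa = 0.0

X = rng.standard_normal((n, n))          # interaction strengths (arbitrary here)
X /= np.linalg.norm(X, axis=0)           # spherical normalization, eq. (3)
H = rng.choice([-1.0, 1.0], size=n)      # one pattern of +/-1 spins, T = 0

# One sweep of the dynamics in eq. (1); the diagonal term is removed to
# implement the j != k restriction.
field = H @ X - H * np.diag(X)           # sum over j != k of H_j * X_{jk}
H_new = np.sign(field)

# Stability margin of eq. (4): the pattern is a fixed point with margin kappa
# if every component of H * field exceeds kappa.
print(np.all(H * field > kappa))
```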
## 3 Standard spherical perceptron – known results

As mentioned above, our main interest in this paper will be a particular type of the spherical perceptron, namely the one that functions as a memory with a limited fraction of errors. However, before proceeding with the problem that we will study here in great detail, we find it useful to first recall several results known for the standard spherical perceptron, i.e. the one that functions as a storage memory without errors. That way it will be easier to properly position the results we intend to present here within the scope of what is already known.

### 3.1 Statistical mechanics

We of course start with recalling what was presented in [12]. In [12] a replica type of approach was designed and based on it a characterization of the storage capacity was presented. Before showing what exactly such a characterization looks like, we will first formally define it. Namely, throughout the paper we will assume the so-called linear regime, i.e. we will consider the scenario where the length $n$ and the number of different patterns $m$ are large but proportional to each other. Moreover, we will denote the proportionality ratio by $\alpha$ (where obviously $\alpha$ is a constant independent of $n$) and will set

$$m=\alpha n. \qquad (5)$$

Now, assuming that $H_{ij}$, $1\leq i\leq m$, $1\leq j\leq n$, are i.i.d. symmetric Bernoulli random variables, [12], using the replica approach, gave the following estimate for the critical $\alpha$ so that (4) holds with overwhelming probability (under overwhelming probability we will in this paper assume a probability that is no more than a number exponentially decaying in $n$ away from $1$):

$$\alpha_c(\kappa)=\left(\frac{1}{\sqrt{2\pi}}\int_{-\kappa}^{\infty}(z+\kappa)^{2}e^{-\frac{z^{2}}{2}}dz\right)^{-1}. \qquad (6)$$

Based on the above characterization one then has that $\alpha_c(\kappa)$ achieves its maximum over positive $\kappa$'s as $\kappa\rightarrow 0$. One in fact then easily has

$$\lim_{\kappa\rightarrow 0}\alpha_c(\kappa)=2. \qquad (7)$$

Also, to be completely exact, in [12] it was predicted that the storage capacity relation from (6) holds for the range $\kappa\geq 0$.

### 3.2 Rigorous results – positive spherical perceptron ($\kappa\geq 0$)

The result given in (7) is of course well known and has been rigorously established either as a pure mathematical fact or even in the context of neural networks and pattern recognition [19, 11, 18, 35, 34, 9, 16, 6, 33]. In more recent work [20, 21, 32] the authors also considered the storage capacity of the spherical perceptron and established that (6) also holds when $\kappa\geq 0$. In our own work [22] we revisited the storage capacity problems and presented an alternative mathematical approach that was also powerful enough to reestablish the storage capacity prediction given in (6). We below formalize the results obtained in [20, 21, 32, 22].

###### Theorem 1.

[20, 21, 32, 22] Let $H$ be an $m\times n$ matrix with i.i.d. Bernoulli components. Let $n$ be large and let $m=\alpha n$, where $\alpha>0$ is a constant independent of $n$. Let $\alpha_c(\kappa)$ be as in (6) and let $\kappa\geq 0$ be a scalar constant independent of $n$. If $\alpha>\alpha_c(\kappa)$ then with overwhelming probability there will be no $X$ that satisfies (3) and makes (4) feasible. On the other hand, if $\alpha<\alpha_c(\kappa)$ then with overwhelming probability there will be an $X$ that satisfies (3) and makes (4) feasible.

###### Proof.

Presented in various forms in [20, 21, 32, 22]. ∎

As mentioned earlier, the results given in the above theorem essentially settle the storage capacity of the positive spherical perceptron, or the Gardner problem. However, there are a couple of facts that should be pointed out (emphasized):

1) The results presented above relate to the positive spherical perceptron. It is not clear at all if they would automatically translate to the case of the negative spherical perceptron. As we hinted earlier, the case of the negative spherical perceptron ($\kappa<0$) may be more of interest from a purely mathematical point of view than it is from, say, the neural networks point of view. Nevertheless, such a mathematical problem may turn out to be a bit harder than the one corresponding to the standard positive case. In fact, in [32], Talagrand conjectured (conjecture 8.4.4) that the above mentioned $\alpha_c(\kappa)$ remains an upper bound on the storage capacity even when $\kappa<0$, i.e. even in the case of the negative spherical perceptron.
However, he does seem to leave it as an open problem what the exact value of the storage capacity in the negative case should be. In our own work [22] we confirmed this conjecture of Talagrand and showed that even in the negative case $\alpha_c(\kappa)$ from (6) is indeed an upper bound on the storage capacity.

2) It is rather clear, but we do mention, that the overwhelming probability statement in the above theorem is taken with respect to the randomness of $H$. To analyze the feasibility of (9) we in [22] relied on a mechanism we recently developed for studying various optimization problems in [29]. Such a mechanism works for various types of randomness. However, the easiest way to present it was assuming that the underlying randomness is standard normal. So to fit the feasibility of (9) into the framework of [29] we in [22] formally assumed that the elements of matrix $H$ are i.i.d. standard normals. In that regard, what was proved in [22] is a bit different from what was stated in the above theorem. However, as mentioned in [22] (and in more detail in [29, 26]), all our results from [22] continue to hold for a very large set of types of randomness, and certainly for the Bernoulli one assumed in Theorem 1.

3) We will continue to call the critical value of $\alpha$ so that (4) is feasible the storage capacity even when $\kappa<0$, even though it may be linguistically a bit incorrect, given the neural network interpretation of finite basins of attraction mentioned above.

### 3.3 Rigorous results – negative spherical perceptron ($\kappa<0$)

In our recent work [28] we went a step further and considered the negative version of the standard spherical perceptron. While the results that we will present later on in Sections 5 and 6 will be valid for any $\kappa$, our main concern will be from a neural network point of view and as such will be related to the positive case, i.e. to the $\kappa\geq 0$ scenario. In that regard the results that we review in this subsection may seem not as important as those from the previous subsections. However, once we present the main results in Sections 5 and 6 it will be clear that there is an interesting conceptual similarity that is deeply rooted in a combinatorial similarity between what we will present in this subsection (and what was essentially proved in [22, 28]) and the results that we will present in Sections 5 and 6. As mentioned above under point 3), we in [28] called the corresponding limiting $\alpha$ in the $\kappa<0$ case the storage capacity of the negative spherical perceptron.

Before presenting the storage capacity results that we obtained in [22, 28], we will find it useful to slightly redefine the original feasibility problem considered above. This will of course be of great use in the exposition that will follow as well. We first recall that in [28] we studied the so-called uncorrelated case of the spherical perceptron (more on the equally important correlated case can be found in, e.g., [22, 12]). This is the same scenario that we will study here (so the simplifications that we made in [28], and that we are about to present below, will be in place later on as well). In the uncorrelated case, one views all patterns $H_{i,1:n}$, $1\leq i\leq m$, as uncorrelated (as expected, $H_{i,1:n}$ stands for the vector $[H_{i1},H_{i2},\dots,H_{in}]$). Now, the following becomes the corresponding version of the question of interest mentioned above: assuming that $H$ is an $m\times n$ matrix with i.i.d. Bernoulli entries and that $\|x\|_2=1$, how large can $\alpha=\frac{m}{n}$ be so that the following system of linear inequalities is satisfied with overwhelming probability:

$$Hx\geq\kappa. \qquad (8)$$
This of course is the same as asking how large $\alpha$ can be so that the following optimization problem is feasible with overwhelming probability:

$$Hx\geq\kappa$$
$$\|x\|_2=1. \qquad (9)$$

To see that (8) and (9) indeed match the above described fixed point condition, it is enough to observe that due to statistical symmetry one can assume $H_{ik}=1$ (the signs can be absorbed into the rows of $H$). Also, the constraints essentially decouple over the columns of $X$ (so one can then think of $x$ in (8) and (9) as one of the columns of $X$). Moreover, the dimension of $x$ in (8) and (9) should be changed to $n-1$; however, since we will consider a large-$n$ scenario, to make writing easier we keep the dimension as $n$. Also, as mentioned under point 2) above, we will, without loss of generality, treat $H$ in (9) as if it has i.i.d. standard normal components. Moreover, in [22] we also recognized that (9) can be rewritten as the following optimization problem:

$$\xi_n=\min_{x}\max_{\lambda\geq 0}\ \kappa\lambda^T\mathbf{1}-\lambda^THx$$
$$\text{subject to}\quad \|\lambda\|_2=1,\ \|x\|_2=1, \qquad (10)$$

where $\mathbf{1}$ is an $m$-dimensional column vector of all $1$'s. Clearly, if $\xi_n\leq 0$ then (9) is feasible. On the other hand, if $\xi_n>0$ then (9) is not feasible. That basically means that if we can probabilistically characterize the sign of $\xi_n$ then we could have a way of determining $\alpha$ such that $\xi_n>0$. That is exactly what we have done in [22], on an ultimate level for $\kappa\geq 0$ and on, say, an upper-bounding level for $\kappa<0$. Of course, we do mention again that, as far as point 2) goes, we in [28] (and will in this paper as well) without loss of generality again made the same type of assumption that we had made in [22] related to the statistics of $H$. In other words, as far as the presentation below is concerned, we will continue to assume that the elements of matrix $H$ are i.i.d. standard normals (as mentioned above, such an assumption changes nothing in the validity of the results that we will present; also, more on this topic can be found in, e.g., [24, 25, 29] where we discussed it a bit further). Relying on the strategy developed in [29, 27] and on a set of results from [14, 15], we in [22] proved the following theorem that essentially extends Theorem 1 to the $\kappa<0$ case and thereby resolves Conjecture 8.4.4 from [32] in the positive:

###### Theorem 2.

[22] Let $H$ be an $m\times n$ matrix with i.i.d. standard normal components. Let $n$ be large and let $m=\alpha n$, where $\alpha>0$ is a constant independent of $n$. Let $\xi_n$ be as in (10) and let $\kappa$ be a scalar constant independent of $n$. Let all $\epsilon$'s be arbitrarily small constants independent of $n$. Further, let $g_i$ be a standard normal random variable and set

$$f_{gar}(\kappa)=\frac{1}{\sqrt{2\pi}}\int_{-\kappa}^{\infty}(g_i+\kappa)^{2}e^{-\frac{g_i^{2}}{2}}dg_i. \qquad (11)$$

Let $\xi_n^{(l)}$ and $\xi_n^{(u)}$ be scalars such that

$$(1-\epsilon_1^{(m)})\sqrt{\alpha f_{gar}(\kappa)}-(1+\epsilon_1^{(n)})-\epsilon_5^{(g)} > \frac{\xi_n^{(l)}}{\sqrt{n}}$$
$$(1+\epsilon_1^{(m)})\sqrt{\alpha f_{gar}(\kappa)}-(1-\epsilon_1^{(n)})+\epsilon_5^{(g)} < \frac{\xi_n^{(u)}}{\sqrt{n}}. \qquad (12)$$

If $\kappa\geq 0$ then

$$\lim_{n\rightarrow\infty}P(\xi_n^{(l)}\leq\xi_n\leq\xi_n^{(u)})=\lim_{n\rightarrow\infty}P\left(\xi_n^{(l)}\leq\min_{\|x\|_2=1}\max_{\|\lambda\|_2=1,\lambda_i\geq 0}(\kappa\lambda^T\mathbf{1}-\lambda^THx)\leq\xi_n^{(u)}\right)\geq 1. \qquad (13)$$

Moreover, if $\kappa<0$ then

$$\lim_{n\rightarrow\infty}P(\xi_n\geq\xi_n^{(l)})=\lim_{n\rightarrow\infty}P\left(\min_{\|x\|_2=1}\max_{\|\lambda\|_2=1,\lambda_i\geq 0}(\kappa\lambda^T\mathbf{1}-\lambda^THx)\geq\xi_n^{(l)}\right)\geq 1. \qquad (14)$$

###### Proof.

Presented in [22]. ∎

In more informal language (essentially ignoring all technicalities and $\epsilon$'s), one has that as long as

$$\alpha>\frac{1}{f_{gar}(\kappa)}, \qquad (15)$$

the problem in (9) will be infeasible with overwhelming probability. On the other hand, one has that when $\kappa\geq 0$, as long as

$$\alpha<\frac{1}{f_{gar}(\kappa)}, \qquad (16)$$

the problem in (9) will be feasible with overwhelming probability. This of course settles the $\kappa\geq 0$ case completely and essentially establishes the storage capacity as $\frac{1}{f_{gar}(\kappa)}$, which of course matches the prediction given in the introductory analysis presented in [12] and was rigorously confirmed by the results of [20, 21, 32]. On the other hand, when $\kappa<0$ it only shows that the storage capacity with overwhelming probability is not higher than the quantity given in [12].
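The informal characterization (15)-(16) is easy to evaluate numerically. The sketch below (illustrative only, assuming scipy is available) computes $\frac{1}{f_{gar}(\kappa)}$ from (11) by direct quadrature and recovers the classical value $2$ from (7):

```python
import numpy as np
from scipy import integrate

def f_gar(kappa):
    """Evaluate f_gar(kappa) of eq. (11) by direct quadrature."""
    val, _ = integrate.quad(
        lambda g: (g + kappa) ** 2 * np.exp(-g ** 2 / 2) / np.sqrt(2 * np.pi),
        -kappa, np.inf)
    return val

# Storage capacity prediction 1/f_gar(kappa), i.e. alpha_c(kappa) of eq. (6).
for kappa in (0.0, 0.5, -0.5):
    print(kappa, 1.0 / f_gar(kappa))
# kappa = 0 gives 2.0; for kappa < 0 the printed value is only an upper bound
# on the true storage capacity, per Theorem 2.
```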
As mentioned above, this confirms Talagrand's conjecture 8.4.4 from [32]. However, it does not settle problem (question) 8.4.2 from [32]. The results obtained based on the above theorem, as well as those obtained based on Theorem 1, are presented in Figure 1. When $\kappa\geq 0$ the curve indicates the exact breaking point between the "overwhelming" feasibility and infeasibility of (9). On the other hand, when $\kappa<0$ the curve is only an upper bound on the storage capacity, i.e. for any value of the pair $(\alpha,\kappa)$ that is above the curve given in Figure 1, (9) is infeasible with overwhelming probability.

Since the $\kappa<0$ case did not appear to be settled based on the above presented results, we then in [28] attempted to lower the upper bounds given in Theorem 2. We created a fairly powerful mechanism that produced the following theorem as a way of characterizing the storage capacity of the negative spherical perceptron.

###### Theorem 3.

Let $H$ be an $m\times n$ matrix with i.i.d. standard normal components. Let $n$ be large and let $m=\alpha n$, where $\alpha>0$ is a constant independent of $n$. Let $\kappa$ be a scalar constant independent of $n$. Let all $\epsilon$'s be arbitrarily small constants independent of $n$. Set

$$\widehat{\gamma}^{(s)}=\frac{2c_3^{(s)}+\sqrt{4(c_3^{(s)})^{2}+16}}{8}, \qquad (17)$$

and

$$I_{sph}(c_3^{(s)})=\widehat{\gamma}^{(s)}-\frac{1}{2c_3^{(s)}}\log\left(1-\frac{c_3^{(s)}}{2\widehat{\gamma}^{(s)}}\right). \qquad (18)$$

Set

$$p=1+\frac{c_3^{(s)}}{2\gamma_{per}^{(s)}},\quad q=\frac{c_3^{(s)}\kappa}{2\gamma_{per}^{(s)}},\quad r=\frac{c_3^{(s)}\kappa^{2}}{4\gamma_{per}^{(s)}},\quad s=-\kappa\sqrt{p}+\frac{q}{\sqrt{p}},\quad C=\frac{\exp\left(\frac{q^{2}}{2p}-r\right)}{\sqrt{p}}, \qquad (19)$$

and

$$I_{per}^{(1)}(c_3^{(s)},\gamma_{per}^{(s)},\kappa)=\frac{1}{2}\textrm{erfc}\left(\frac{\kappa}{\sqrt{2}}\right)+\frac{C}{2}\,\textrm{erfc}\left(\frac{s}{\sqrt{2}}\right). \qquad (20)$$

Further, set

$$I_{per}(c_3^{(s)},\alpha,\kappa)=\max_{\gamma_{per}^{(s)}\geq 0}\left(\gamma_{per}^{(s)}+\frac{\alpha}{c_3^{(s)}}\log\left(I_{per}^{(1)}(c_3^{(s)},\gamma_{per}^{(s)},\kappa)\right)\right). \qquad (21)$$

If $\alpha$ is such that

$$\min_{c_3^{(s)}\geq 0}\left(-\frac{c_3^{(s)}}{2}+I_{sph}(c_3^{(s)})+I_{per}(c_3^{(s)},\alpha,\kappa)\right)<0, \qquad (22)$$

then (9) is infeasible with overwhelming probability.

###### Proof.

Presented in [28]. ∎

The results one can obtain for the storage capacity based on the above theorem are presented in Figure 2 (as mentioned in [28], due to the numerical optimizations involved, the results presented in Figure 2 should be taken only as an illustration; also, as discussed in [28], taking $c_3^{(s)}\rightarrow 0$ in Theorem 3 produces the results of Theorem 2). Even as such, they indicate that a visible improvement in the values of the storage capacity may be possible, though in a range of values of $\alpha$ substantially larger than $2$ (i.e. in a range of $\kappa$'s somewhat smaller than zero). While at this point this observation may look unrelated to the problem that we will consider in the following section, one should keep it in mind (essentially, a conceptually similar conclusion will be made later on when we study the capacities with limited errors).

## 4 Spherical perceptron with errors

What we described in the previous section is a typical setup of a standard spherical perceptron. To be a bit more precise, it is a setup one can use to, in a way, quantify the storage capacity of the standard spherical perceptron. In this section we will slightly change this standard notion of how the spherical perceptron operates. In fact, what we will change will actually be what counts as an acceptable way for the spherical perceptron to operate. Of course, such a change is not our invention. While it had been known for a long time, it is the work of Gardner [12] that popularized its analytical study. Before we present the known analytical predictions, we will briefly sketch the main idea behind the spherical perceptrons that will be allowed to function as memories with errors. We will rely on many simplifications of the original perceptron setup from Section 2 that were introduced in [22, 28] and presented in Section 3.
To that end we start by recalling that, for all practical purposes needed here (and those we needed in [22, 28]), the storage capacity of the standard spherical perceptron can be considered through the feasibility problem given in (9), which we restate below:

$$Hx\geq\kappa$$
$$\|x\|_2=1. \qquad (23)$$

We of course recall as well that, as argued in [22, 28] (and as mentioned in the previous section), one can assume that the elements of $H$ are i.i.d. standard normals and that the dimension of $x$ is $n$, where as earlier we keep the linear regime, i.e. continue to assume that $m=\alpha n$, where $\alpha$ is a constant independent of $n$. Now, if all inequalities in (23) are satisfied, the established dynamics will be stable and all patterns can be successfully stored. On the other hand, if one relaxes such a constraint so that only a fraction of them (say, larger than $(1-f_{wb})$) is satisfied, then only such a fraction of patterns can be successfully stored (of course one views storage at each site $k$; however, due to symmetry as discussed earlier, one can simply switch to the consideration of (23)). This is of course similar to saying that if a fraction (say, smaller than $f_{wb}$) of the inequalities may not hold, then such a fraction of patterns can be incorrectly stored. One can then reformulate (23) so that it provides a mathematical description of such a scenario. The resulting feasibility problem one can then consider becomes

$$d_i(H_{i,:}x-\kappa)\geq 0,\quad 1\leq i\leq m$$
$$\sum_{i=1}^{m}d_i=(1-f_{wb})m$$
$$d_i\in\{0,1\},\quad 1\leq i\leq m$$
$$\|x\|_2=1. \qquad (24)$$

Using the replica approach Gardner developed for a problem similar to this one in [12], Gardner and Derrida in [13] proceeded and characterized the feasibility of (24). Namely, they gave a prediction for the value of the critical storage capacity, as a function of $\kappa$ and $f_{wb}$, so that (24) is feasible (as mentioned earlier, in what follows we may often refer to this critical value as the storage capacity of the spherical perceptron with limited errors). The prediction given in [13] essentially boils down to the following two equations: first one determines $x$ as the solution of

$$f_{wb}=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\kappa-x}e^{-\frac{z^{2}}{2}}dz. \qquad (25)$$

Then one determines a prediction for the storage capacity as

$$\alpha_{c,wb}^{(gar)}(\kappa,x)=\left(\frac{1}{\sqrt{2\pi}}\int_{\kappa-x}^{\kappa}(z-\kappa)^{2}e^{-\frac{z^{2}}{2}}dz\right)^{-1}. \qquad (26)$$

Now, assuming the standard setup (where no errors are allowed), one has $f_{wb}=0$, which from (25) implies $x\rightarrow\infty$. One then from (26) has

$$\alpha_{c,wb}^{(gar)}(\kappa,\infty)\rightarrow\left(\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\kappa}(z-\kappa)^{2}e^{-\frac{z^{2}}{2}}dz\right)^{-1}=\left(\frac{1}{\sqrt{2\pi}}\int_{-\kappa}^{\infty}(z+\kappa)^{2}e^{-\frac{z^{2}}{2}}dz\right)^{-1}=\frac{1}{f_{gar}(\kappa)}=\alpha_c(\kappa). \qquad (27)$$

In other words, if no errors are allowed, (25) and (26) give the same result for the storage capacity as does (6). Now, looking back at what was presented in Figure 1, one should note that when $\kappa\geq 0$ (the case primarily of interest here) the curve denotes the exact values of the storage capacity. On the other hand, one has from the same plot that if a pair $(\alpha,\kappa)$ is above the curve, the memory is not stable, i.e. with overwhelming probability one cannot find a spherical $x$ such that (9) is feasible. However, if one attempts to be a bit more precise with respect to this instability, one may find it useful to introduce a number of allowed wrong patterns (bits). This is in essence what (25) and (26) do. They basically attempt to characterize the number of incorrectly stored patterns when $\kappa\geq 0$ and a pair $(\alpha,\kappa)$ is above the curve given in Figure 1 (in fact one can use them to give a prediction for the number of incorrectly stored patterns even when $\kappa<0$). Alternatively, as framed above, one can think of all of this as a way of finding the storage capacity if a fraction of errors (incorrectly stored patterns), say $f_{wb}$, is allowed.
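Equations (25) and (26) are straightforward to evaluate numerically. The following sketch is illustrative only, and it relies on the form of (26) displayed above; it solves (25) in closed form via the inverse Gaussian cdf and then computes the capacity prediction:

```python
import numpy as np
from scipy import integrate, stats

def alpha_gardner_derrida(kappa, f_wb):
    """Gardner-Derrida capacity prediction with an allowed error fraction f_wb.
    Step 1: solve (25), f_wb = Phi(kappa - x), for x via the inverse normal cdf.
    Step 2: evaluate the integral in (26) and invert it."""
    x = kappa - stats.norm.ppf(f_wb)
    val, _ = integrate.quad(
        lambda z: (z - kappa) ** 2 * np.exp(-z ** 2 / 2) / np.sqrt(2 * np.pi),
        kappa - x, kappa)
    return 1.0 / val

print(alpha_gardner_derrida(0.0, 0.01))   # small error fraction: modestly above 2
print(alpha_gardner_derrida(0.0, 0.10))   # more errors allowed: capacity grows markedly
```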
This is of course exactly the problem that we will be attacking below, and based on the above it is exactly what (25) and (26) characterize. Before proceeding further we should provide a few comments on the potential accuracy of the above predictions. As is now well known, if $\kappa\geq 0$ and $f_{wb}=0$ then the above prediction boils down to the standard storage capacity of the positive spherical perceptron, which based on [20, 21] (and later on [32, 22]) is known to be correct. On the other hand, as discussed in [28] (and briefly in the previous section), the corresponding prediction is only a rigorous upper bound on the storage capacity of the negative spherical perceptron. In fact, many of the conclusions already made in [12, 13] indicated this kind of behavior. Namely, a stability analysis of the replica approach done in [13] indicated that some of the predictions (essentially in a certain range of the $(\alpha,\kappa)$ plane) related to the storage capacities when errors are allowed may not be accurate. In [7] the replica stability range given in [13] was corrected a bit and, as a consequence, [7] actually established that the replica analysis of [13] may in fact produce incorrect results in the entire regime above the curve given in Figure 1. Still, even if the results given in (25) and (26) are incorrect, they may be fairly good approximate predictions for the storage capacity (or, alternatively, the fraction of incorrectly stored patterns), or they may even be, say, rigorous bounds on the true values (as were the predictions of [12] related to the negative spherical perceptron). Below we will show that the above given predictions (namely, those given in (25) and (26)) are in fact rigorous upper bounds on the storage capacity of the spherical perceptron when a fraction of incorrectly stored patterns is allowed.

## 5 Upper bounds on the storage capacity of the spherical perceptrons with limited errors

As we have mentioned at the end of the previous section, in this section we will create a set of results that will essentially establish the predictions obtained in [13] (and given in (25) and (26)) as rigorous upper bounds on the storage capacity of the spherical perceptron with limited errors. We start by writing an analogue to (10) for the feasibility problem of interest here, namely the one given in (24):

$$\xi_{wb}=\min_{x,d}\max_{\lambda\geq 0}\ \kappa\lambda^T\textrm{diag}(d)\mathbf{1}-\lambda^T\textrm{diag}(d)Hx$$
$$\text{subject to}\quad \|\lambda\|_2=1$$
$$\sum_{i=1}^{m}d_i=(1-f_{wb})m$$
$$d_i\in\{0,1\},\quad 1\leq i\leq m$$
$$\|x\|_2=1. \qquad (28)$$

Although it is probably obvious, we mention that $\textrm{diag}(d)$ is an $m\times m$ matrix with the elements of vector $d$ on its main diagonal and zeros elsewhere. Clearly, following the logic we presented in the previous sections, the sign of $\xi_{wb}$ determines the feasibility of (24). In particular, if $\xi_{wb}>0$ then (24) is infeasible. Given the random structure of the problem (we recall that $H$ is random), one can then pose the following probabilistic feasibility question: how small can $\alpha$ be so that $\xi_{wb}$ in (28) is positive and (24) is infeasible with overwhelming probability? In what follows we will attempt to provide an answer to such a question.

### 5.1 Probabilistic analysis

In this section we will present a probabilistic analysis of the above optimization problem given in (28). In a nutshell, we will provide a relation between $\alpha$, $f_{wb}$, and $\kappa$ so that $\xi_{wb}>0$ with overwhelming probability over $H$. Based on the above discussion, this will then be enough to conclude that the problem in (24) is infeasible with overwhelming probability when $\alpha$, $f_{wb}$, and $\kappa$ satisfy such a relation.
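Before the formal analysis, it may help to note what $d$ does in (24) and (28): it merely selects which $(1-f_{wb})m$ patterns must be stored, and for any fixed $x$ the best selection keeps the rows with the largest margins $H_{i,:}x$. A small sketch of that feasibility check for a given candidate $x$ (illustrative only; the instance and candidate are arbitrary):

```python
import numpy as np

def satisfiable(H, x, kappa, f_wb):
    """Feasibility of (24) for a fixed x: it suffices that at least
    (1 - f_wb)*m of the margins H[i, :] @ x reach kappa, since the optimal
    d simply keeps the (1 - f_wb)*m largest margins."""
    m = H.shape[0]
    margins = H @ x
    keep = int(np.ceil((1 - f_wb) * m))
    return np.sort(margins)[-keep:].min() >= kappa

rng = np.random.default_rng(2)
m, n = 300, 100
H = rng.standard_normal((m, n))
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                  # spherical constraint ||x||_2 = 1
print(satisfiable(H, x, kappa=0.0, f_wb=0.05))
```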
The analysis that we will present below will, to a degree, rely on a strategy we developed in [29, 27] and utilized in [22] when studying the storage capacity of the standard spherical perceptron. We start by recalling a set of probabilistic results from [14, 15] that were used as an integral part of the strategy developed in [29, 27, 22].

###### Theorem 4.

([15, 14]) Let $X_{ij}$ and $Y_{ij}$, $1\leq i\leq n$, $1\leq j\leq m$, be two centered Gaussian processes which satisfy the following inequalities for all choices of indices:

1. $E(X_{ij}^{2})=E(Y_{ij}^{2})$
2. $E(X_{ij}X_{ik})\geq E(Y_{ij}Y_{ik})$
3. $E(X_{ij}X_{lk})\leq E(Y_{ij}Y_{lk}),\ i\neq l$.

Then

$$P\left(\bigcap_{i}\bigcup_{j}(X_{ij}\geq\lambda_{ij})\right)\leq P\left(\bigcap_{i}\bigcup_{j}(Y_{ij}\geq\lambda_{ij})\right).$$

The following simpler version of the above theorem relates to the expected values.

###### Theorem 5.

([14, 15]) Let $X_{ij}$ and $Y_{ij}$, $1\leq i\leq n$, $1\leq j\leq m$, be two centered Gaussian processes which satisfy the inequalities of Theorem 4 for all choices of indices. Then

$$E\left(\min_{i}\max_{j}X_{ij}\right)\leq E\left(\min_{i}\max_{j}Y_{ij}\right).$$

Now, since all random quantities of interest below will concentrate around their mean values, it would be enough to study only their averages. However, since it will not make the writing of what we intend to present in the remaining parts of this section substantially more complicated, we will present a complete probabilistic treatment and will leave the studying of the expected values for the presentation that we will give in the following section, where such a consideration will substantially simplify the exposition. We will make use of Theorem 4 through the following lemma (the lemma is an easy consequence of Theorem 4 and in fact is fairly similar to Lemma 3.1 in [15]; see also [24, 22] for similar considerations).

###### Lemma 1.

Let $H$ be an $m\times n$ matrix with i.i.d. standard normal components. Let $g$ and $h$ be $m\times 1$ and $n\times 1$ vectors, respectively, with i.i.d. standard normal components. Also, let $g$ be a standard normal random variable and let $\zeta_{\lambda,d}$ be a function of $\lambda$ and $d$. Then

$$P\left(\min_{\|x\|_2=1,\mathbf{1}^Td=(1-f_{wb})m,d_i\in\{0,1\}}\ \max_{\|\lambda\|_2=1,\lambda_i\geq 0}\left(-\lambda^T\textrm{diag}(d)Hx+g-\zeta_{\lambda,d}\right)\geq 0\right)$$
$$\geq P\left(\min_{\|x\|_2=1,\mathbf{1}^Td=(1-f_{wb})m,d_i\in\{0,1\}}\ \max_{\|\lambda\|_2=1,\lambda_i\geq 0}\left(g^T\textrm{diag}(d)\lambda+h^Tx-\zeta_{\lambda,d}\right)\geq 0\right). \qquad (29)$$

###### Proof.

The proof is basically similar to the proof of Lemma 3.1 in [15], as well as to the proof of Lemma 7 in [24]. However, one has to be a bit careful about the structure of the sets of allowed values for $x$, $d$, and $\lambda$. For completeness we will sketch the core of the argument. The remaining parts follow easily as in Lemma 3.1 in [15] (or as in the proof of Lemma 7 in [24]). Namely, one starts by defining processes $Y_{ij}$ and $X_{ij}$ in the following way:

$$Y_{ij}=(\lambda^{(j)})^T\textrm{diag}(d^{(i)})Hx^{(i)}+g\qquad X_{ij}=g^T\textrm{diag}(d^{(i)})\lambda^{(j)}+h^Tx^{(i)}. \qquad (30)$$

Then clearly

$$EY_{ij}^{2}=EX_{ij}^{2}=(\lambda^{(j)})^T\textrm{diag}(d^{(i)})\textrm{diag}(d^{(i)})\lambda^{(j)}+1. \qquad (31)$$

One then further has

$$EY_{ij}Y_{ik}=(\lambda^{(j)})^T\textrm{diag}(d^{(i)})\textrm{diag}(d^{(i)})\lambda^{(k)}(x^{(i)})^Tx^{(i)}+1$$
$$EX_{ij}X_{ik}=(\lambda^{(j)})^T\textrm{diag}(d^{(i)})\textrm{diag}(d^{(i)})\lambda^{(k)}+(x^{(i)})^Tx^{(i)}, \qquad (32)$$

and clearly

$$EX_{ij}X_{ik}=EY_{ij}Y_{ik}. \qquad (33)$$

Moreover,

$$EY_{ij}Y_{lk}=(\lambda^{(j)})^T\textrm{diag}(d^{(i)})\textrm{diag}(d^{(l)})\lambda^{(k)}(x^{(l)})^Tx^{(i)}+1$$
$$EX_{ij}X_{lk}=(\lambda^{(j)})^T\textrm{diag}(d^{(i)})\textrm{diag}(d^{(l)})\lambda^{(k)}+(x^{(l)})^Tx^{(i)}. \qquad (34)$$

And after a small algebraic transformation,

$$EY_{ij}Y_{lk}-EX_{ij}X_{lk}=\left(1-(\lambda^{(j)})^T\textrm{diag}(d^{(i)})\textrm{diag}(d^{(l)})\lambda^{(k)}\right)-(x^{(l)})^Tx^{(i)}\left(1-(\lambda^{(j)})^T\textrm{diag}(d^{(i)})\textrm{diag}(d^{(l)})\lambda^{(k)}\right)$$
$$=\left(1-(x^{(l)})^Tx^{(i)}\right)\left(1-(\lambda^{(j)})^T\textrm{diag}(d^{(i)})\textrm{diag}(d^{(l)})\lambda^{(k)}\right)\geq 0. \qquad (35)$$

Combining (31), (33), and (35) and using the results of Theorem 4, one then easily obtains (29). ∎

Let $\zeta_{\lambda,d}=-\kappa\lambda^T\textrm{diag}(d)\mathbf{1}+\epsilon_5^{(g)}\sqrt{n}+\xi_{wb}^{(l)}$, with $\epsilon_5^{(g)}>0$ being an arbitrarily small constant independent of $n$. We will first look at the right-hand side of the inequality in (29). The following is then the probability of interest:

$$P\left(\min_{\|x\|_2=1,\mathbf{1}^Td=(1-f_{wb})m,d_i\in\{0,1\}}\ \max_{\|\lambda\|_2=1,\lambda_i\geq 0}\left(g^T\textrm{diag}(d)\lambda+h^Tx+\kappa\lambda^T\textrm{diag}(d)\mathbf{1}-\epsilon_5^{(g)}\sqrt{n}\right)\geq\xi_{wb}^{(l)}\right). \qquad (36)$$
After solving the minimization over $x$, one obtains

$$P\left(\min_{\|x\|_2=1,\mathbf{1}^Td=(1-f_{wb})m,d_i\in\{0,1\}}\ \max_{\|\lambda\|_2=1,\lambda_i\geq 0}\left(g^T\textrm{diag}(d)\lambda+h^Tx+\kappa\lambda^T\textrm{diag}(d)\mathbf{1}-\epsilon_5^{(g)}\sqrt{n}\right)\geq\xi_{wb}^{(l)}\right)=P\left(f_{err}^{(r)}(\kappa)-\|h\|_2-\epsilon_5^{(g)}\sqrt{n}\geq\xi_{wb}^{(l)}\right), \qquad (37)$$

where

$$f_{err}^{(r)}(\kappa)=\min_{\mathbf{1}^Td=(1-f_{wb})m,d_i\in\{0,1\}}\ \max_{\|\lambda\|_2=1,\lambda_i\geq 0}\left(g^T\textrm{diag}(d)\lambda+\kappa\lambda^T\textrm{diag}(d)\mathbf{1}\right). \qquad (38)$$

Since $h$ is a vector of i.i.d. standard normal variables, it is rather trivial that

$$P\left(\|h\|_2<(1+\epsilon_1^{(n)})\sqrt{n}\right)\geq 1-e^{-\epsilon_2^{(n)}n}, \qquad (39)$$

where $\epsilon_1^{(n)}>0$ is an arbitrarily small constant and $\epsilon_2^{(n)}$ is a constant dependent on $\epsilon_1^{(n)}$ but independent of $n$. Along the same lines, due to the linearity of the objective function in the definition of $f_{err}^{(r)}(\kappa)$ and the fact that $g$ is a vector of i.i.d. standard normals, one has

$$P\left(f_{err}^{(r)}(\kappa)>(1-\epsilon_1^{(m)})f_{err}(\kappa)\sqrt{n}\right)\geq 1-e^{-\epsilon_2^{(m)}m}, \qquad (40)$$

where

$$f_{err}(\kappa)=\lim_{n\rightarrow\infty}\frac{Ef_{err}^{(r)}(\kappa)}{\sqrt{n}}=\lim_{n\rightarrow\infty}\frac{E\left(\min_{\mathbf{1}^Td=(1-f_{wb})m,d_i\in\{0,1\}}\max_{\|\lambda\|_2=1,\lambda_i\geq 0}\left(g^T\textrm{diag}(d)\lambda+\kappa\lambda^T\textrm{diag}(d)\mathbf{1}\right)\right)}{\sqrt{n}}, \qquad (41)$$

and $\epsilon_1^{(m)}$ is an arbitrarily small constant and, analogously as above, $\epsilon_2^{(m)}$ is a constant dependent on $\epsilon_1^{(m)}$ and $f_{err}(\kappa)$ but independent of $n$. Then a combination of (37), (39), and (40) gives

$$P\left(\min_{\|x\|_2=1,\mathbf{1}^Td=(1-f_{wb})m,d_i\in\{0,1\}}\ \max_{\|\lambda\|_2=1,\lambda_i\geq 0}\left(g^T\textrm{diag}(d)\lambda+h^Tx+\kappa\lambda^T\textrm{diag}(d)\mathbf{1}-\epsilon_5^{(g)}\sqrt{n}\right)\geq\xi_{wb}^{(l)}\right)$$
$$\geq\left(1-e^{-\epsilon_2^{(m)}m}\right)\left(1-e^{-\epsilon_2^{(n)}n}\right)P\left((1-\epsilon_1^{(m)})f_{err}(\kappa)\sqrt{n}-(1+\epsilon_1^{(n)})\sqrt{n}-\epsilon_5^{(g)}\sqrt{n}\geq\xi_{wb}^{(l)}\right). \qquad (42)$$

If

$$(1-\epsilon_1^{(m)})f_{err}(\kappa)\sqrt{n}-(1+\epsilon_1^{(n)})\sqrt{n}-\epsilon_5^{(g)}\sqrt{n}>\xi_{wb}^{(l)}$$
$$\Leftrightarrow\quad (1-\epsilon_1^{(m)})f_{err}(\kappa)-(1+\epsilon_1^{(n)})-\epsilon_5^{(g)}>\frac{\xi_{wb}^{(l)}}{\sqrt{n}}, \qquad (43)$$

one then has from (42)

$$\lim_{n\rightarrow\infty}P\left(\min_{\|x\|_2=1,\mathbf{1}^Td=(1-f_{wb})m,d_i\in\{0,1\}}\ \max_{\|\lambda\|_2=1,\lambda_i\geq 0}\left(g^T\textrm{diag}(d)\lambda+h^Tx+\kappa\lambda^T\textrm{diag}(d)\mathbf{1}-\epsilon_5^{(g)}\sqrt{n}\right)\geq\xi_{wb}^{(l)}\right)\geq 1. \qquad (44)$$

To make the result in (44) operational, one needs an estimate for $f_{err}(\kappa)$. In the following subsection we will present a way that can be used to estimate $f_{err}(\kappa)$. Before doing so, we will briefly take a look at the left-hand side of the inequality in (29). The following is then the probability of interest:

$$P\left(\min_{\|x\|_2=1,\mathbf{1}^Td=(1-f_{wb})m,d_i\in\{0,1\}}\ \max_{\|\lambda\|_2=1,\lambda_i\geq 0}\left(\kappa\lambda^T\textrm{diag}(d)\mathbf{1}-\lambda^T\textrm{diag}(d)Hx+g-\epsilon_5^{(g)}\sqrt{n}-\xi_{wb}^{(l)}\right)\geq 0\right). \qquad (45)$$

Since $P(g\geq\epsilon_5^{(g)}\sqrt{n})\leq e^{-\epsilon_6^{(g)}n}$ (where $\epsilon_6^{(g)}$ is, as all other $\epsilon$'s in this paper are, independent of $n$), from (45) we have

$$P\left(\min_{\|x\|_2=1,\mathbf{1}^Td=(1-f_{wb})m,d_i\in\{0,1\}}\ \max_{\|\lambda\|_2=1,\lambda_i\geq 0}\left(\kappa\lambda^T\textrm{diag}(d)\mathbf{1}-\lambda^T\textrm{diag}(d)Hx+g-\epsilon_5^{(g)}\sqrt{n}-\xi_{wb}^{(l)}\right)\geq 0\right)$$
$$\leq P\left(\min_{\|x\|_2=1,\mathbf{1}^Td=(1-f_{wb})m,d_i\in\{0,1\}}\ \max_{\|\lambda\|_2=1,\lambda_i\geq 0}\left(\kappa\lambda^T\textrm{diag}(d)\mathbf{1}-\lambda^T\textrm{diag}(d)Hx-\xi_{wb}^{(l)}\right)\geq 0\right)+e^{-\epsilon_6^{(g)}n}. \qquad (46)$$

When $n$ is large, from (46) we then have

$$\lim_{n\rightarrow\infty}P\left(\min_{\|x\|_2=1,\mathbf{1}^Td=(1-f_{wb})m,d_i\in\{0,1\}}\ \max_{\|\lambda\|_2=1,\lambda_i\geq 0}\left(\kappa\lambda^T\textrm{diag}(d)\mathbf{1}-\lambda^T\textrm{diag}(d)Hx+g-\epsilon_5^{(g)}\sqrt{n}-\xi_{wb}^{(l)}\right)\geq 0\right)$$
$$\leq\lim_{n\rightarrow\infty}P\left(\min_{\|x\|_2=1,\mathbf{1}^Td=(1-f_{wb})m,d_i\in\{0,1\}}\ \max_{\|\lambda\|_2=1,\lambda_i\geq 0}\left(\kappa\lambda^T\textrm{diag}(d)\mathbf{1}-\lambda^T\textrm{diag}(d)Hx\right)\geq\xi_{wb}^{(l)}\right). \qquad (47)$$

Assuming that (43) holds, a combination of (29), (44), and (47) then gives

$$\lim_{n\rightarrow\infty}P\left(\min_{\|x\|_2=1,\mathbf{1}^Td=(1-f_{wb})m,d_i\in\{0,1\}}\ \max_{\|\lambda\|_2=1,\lambda_i\geq 0}\left(\kappa\lambda^T\textrm{diag}(d)\mathbf{1}-\lambda^T\textrm{diag}(d)Hx\right)\geq\xi_{wb}^{(l)}\right)$$
$$\geq\lim_{n\rightarrow\infty}P\left(\min_{\|x\|_2=1,\mathbf{1}^Td=(1-f_{wb})m,d_i\in\{0,1\}}\ \max_{\|\lambda\|_2=1,\lambda_i\geq 0}\left(g^T\textrm{diag}(d)\lambda+h^Tx+\kappa\lambda^T\textrm{diag}(d)\mathbf{1}-\epsilon_5^{(g)}\sqrt{n}\right)\geq\xi_{wb}^{(l)}\right)\geq 1.$$
{}
# Who broke the window? [closed]

There is a broken window in the house. I asked who broke the window:

Gregory: I was not!
April: I didn't do it.
August: April was.
June: August says the truth.

Two of them told the truth, and two of them lied.

Gregory: It was June.
April: August didn't do it.
August: April didn't do it!
June: Gregory was lying.

Who broke the window? Please explain the logic.

• Welcome to PSE! Take the tour if you haven't already or have a look at the help section if you have any questions. Hope you enjoy your time here! Aug 10 '20 at 7:36
• This doesn't seem completely solvable with the information given. Is there any more info you haven't added? E.g. was the window broken by exactly one person or more than one, must that person be among the four speakers, and what do we know about the truth/lie values of the second set of statements? Aug 10 '20 at 7:46
• It'd be good to get a grammar edit by OP just to clarify what they mean with Gregory and August's first statements. The word 'was' doesn't fit the sentence structure - I assume Gregory is saying 'I didn't do it' and August is saying 'April did it', but it'd be good to confirm that. Aug 10 '20 at 7:50
• I think it also needs clarification whether a liar can still tell the truth. Otherwise August is a contradiction, since of the two statements he made, exactly one must be true. – Cain Aug 10 '20 at 16:21

Gregory: I was not!
April: I didn't do it.
August: April was.
June: August says the truth.

Two of them told the truth, and two of them lied.

If June or August told the truth, then so did the other, so April broke the window and Gregory is lying, so it was broken by April and Gregory, and maybe also an outsider. That's one possibility. Otherwise, April and Gregory told the truth, because two did. Then it was broken by August and/or June and/or an outsider.

So, in sum, it was broken by April and Gregory together, or by August and/or June. And maybe an outsider helped, too. Or indeed maybe an outsider did it alone. The second batch of statements tells us nothing, since we don't know how many people are telling the truth that time.

# Development

I will try to use a mix between formal notation and natural language for convenience. This is a long development of rational logic. If you want to skip to the judging part, jump to the bottom.

Let:
Gr = Gregory; Ap = April; Au = August; Ju = June;
A = first set of answers; B = second set of answers;
~ = negation (turns the premise false); T = true value; F = false value.

Premise 1: There is a broken window in the house.

## Question A: Who broke the window?

GrA: ~Gr (Gregory was not)
ApA: ~Ap (April was not)
AuA: Ap (April was)
JuA: AuA -> Ap (AuA is true, therefore April was)

We can't conclude anything from this, but if we take the majority opinion it was April who broke the window, and therefore April is lying. However, we were presented with a new premise:

## New premise

Premise 2: Two answers to question A are true and two are false.

The truth table yields six possibilities:

A    GrA  ApA  AuA  JuA
--------------------------
1    T    T    F    F
2    T    F    T    F
3    T    F    F    T
4    F    T    T    F
5    F    T    F    T
6    F    F    T    T

We can't have AuA and ~AuA, therefore the cases where AuA and JuA have different truth values are contradictions. Let's try to draw it more formally:

A1: ~Gr + ~Ap + ~(Ap) + ~(AuA) -> ~(Ap) => ~Gr + ~Ap (Gregory and April did not break the window.)
A2: ~Gr + ~(~Ap) + Ap + ~(AuA) -> ~(Ap) => (Contradiction - We can't have AuA and ~AuA.)
A3: ~Gr + ~(~Ap) + ~(Ap) + AuA -> Ap => (Contradiction - We can't have AuA and ~AuA.)
A4: ~(~Gr) + ~Ap + Ap + ~(AuA) -> ~(Ap) => (Contradiction - We can't have AuA and ~AuA.)
A5: ~(~Gr) + ~Ap + ~(Ap) + AuA -> Ap => (Contradiction - We can't have AuA and ~AuA.)
A6: ~(~Gr) + ~(~Ap) + Ap + AuA -> Ap => (Gregory and April did break the window.)

Both A1 and A6 are valid solutions, although we still don't know whether August and June may have participated in breaking the window, or, if we accept A1, it may be the case that none of the four broke the window. There is no premise stating that one or more of the four broke the window. Instead of choosing an answer, we go ahead and ask the four again:

### Question B: Who broke the window?

GrB: Ju (June was)
ApB: ~Au (August was not)
AuB: ~Ap (April was not)
JuB: ~GrA -> ~(~Gr) (GrA is false, therefore Gregory is)

Taking the previous truth table, for Ax where x may be 1 or 6:

A1: GrA = ~Gr {T}, ApA = ~Ap {T}, AuA = ~(Ap) {F}, JuA = ~(AuA) -> ~(Ap) {F},
    plus GrB = Ju, ApB = ~Au, AuB = ~Ap, JuB = ~GrA -> ~(~Gr)
    => ~Gr, ~Ap, Ju, ~Au, Gr -> Contradiction (We can't have GrA and ~GrA.)

A6: GrA = ~(~Gr) {F}, ApA = ~(~Ap) {F}, AuA = Ap {T}, JuA = AuA -> Ap {T},
    plus GrB = Ju, ApB = ~Au, AuB = ~Ap, JuB = ~GrA -> ~(~Gr)
    => Gr, Ap, Ju, ~Au, ~Ap, Gr -> Contradiction (AuA and AuB are contradictory.)

Assuming all the answers to question B are true, neither A1 nor A6 is valid, because there's a contradiction. So let's find another Ax where all the B answers don't reach a contradiction:

A2: ~Gr {T}, ~(~Ap) {F}, Ap {T}, ~(AuA) -> ~(Ap) {F}, plus all of B
    => Contradiction (We can't have AuA and ~AuA, or GrA and ~GrA.)
A3: ~Gr {T}, ~(~Ap) {F}, ~(Ap) {F}, AuA -> Ap {T}, plus all of B
    => Contradiction (We can't have GrA and ~GrA.)
A4: ~(~Gr) {F}, ~Ap {T}, Ap {T}, ~(AuA) -> ~(Ap) {F}, plus all of B
    => Contradiction (We can't have AuA and ~AuA.)
A5: ~(~Gr) {F}, ~Ap {T}, ~(Ap) {F}, AuA -> Ap {T}, plus all of B
    => Contradiction (We can't have AuA and ~AuA.)

Every possibility is a contradiction, therefore we can't assume all answers to question B are true. Someone must be lying in answer to question B. So let's make a truth table for question B:

## Truth table for question B

B    GrB  ApB  AuB  JuB
--------------------------
1    T    T    T    T
2    T    T    T    F
3    T    T    F    T
4    T    F    T    T
5    F    T    T    T
6    T    T    F    F
7    T    F    T    F
8    T    F    F    T
9    F    T    T    F
10   F    T    F    T
11   F    F    T    T
12   T    F    F    F
13   F    T    F    F
14   F    F    T    F
15   F    F    F    T
16   F    F    F    F

If we try to match all of these with the question A truth table, we get a bigger truth table.
Ruling out the contradictions where we can't have AuA and ~AuA (AuA and JuA can't have different boolean values) or GrA and ~GrA (GrA and JuB can't have the same boolean value), and also ruling out the case where all answers to question B are true (B1), which we already know is invalid, we are left with the following set:

A+B      GrA  ApA  AuA  JuA  GrB  ApB  AuB  JuB
------------------------------------------------------
A1+B2    T    T    F    F    T    T    T    F
A1+B6    T    T    F    F    T    T    F    F
A1+B7    T    T    F    F    T    F    T    F
A1+B9    T    T    F    F    F    T    T    F
A1+B12   T    T    F    F    T    F    F    F
A1+B13   T    T    F    F    F    T    F    F
A1+B14   T    T    F    F    F    F    T    F
A1+B16   T    T    F    F    F    F    F    F
A6+B3    F    F    T    T    T    T    F    T
A6+B4    F    F    T    T    T    F    T    T
A6+B5    F    F    T    T    F    T    T    T
A6+B8    F    F    T    T    T    F    F    T
A6+B10   F    F    T    T    F    T    F    T
A6+B11   F    F    T    T    F    F    T    T
A6+B15   F    F    T    T    F    F    F    T

This table allows us to see that, after all contradictions are ruled out, we have essentially narrowed things down to the combination of the GrB, ApB and AuB boolean values in two sets of circumstances with their own premises. Let's call them C and D:

## Possibilities

### Circumstance C (A1)

Gregory did not break the window. (GrA {T} and JuB {F})
April did not break the window. (ApA {T}, AuA {F}, JuA {F})

### Circumstance D (A6)

Gregory did break the window. (GrA {F} and JuB {T})
April did break the window. (ApA {F}, AuA {T}, JuA {T})

We are now exposed to another contradiction from the set we made earlier. If AuB is true while ApA is false, or vice versa, then the conclusion is invalid. So let's eliminate those cases:

A+B      GrA  ApA  AuA  JuA  GrB  ApB  AuB  JuB
------------------------------------------------------
A1+B2    T    T    F    F    T    T    T    F
A1+B7    T    T    F    F    T    F    T    F
A1+B9    T    T    F    F    F    T    T    F
A1+B14   T    T    F    F    F    F    T    F
A6+B3    F    F    T    T    T    T    F    T
A6+B8    F    F    T    T    T    F    F    T
A6+B10   F    F    T    T    F    T    F    T
A6+B15   F    F    T    T    F    F    F    T

This is the formal(ish) version of the arguments after the truth values are applied, simplified to omit the information we already take for granted:

A1+B2:  Ju {T},  ~Au {T}    => Ju, ~Au -> Ju
A1+B7:  Ju {T},  ~(~Au) {F} => Ju, Au -> Ju + Au
A1+B9:  ~(Ju) {F}, ~Au {T}  => ~Ju, ~Au -> None
A1+B14: ~(Ju) {F}, ~(~Au) {F} => ~Ju, Au -> Au
A6+B3:  Ju {T},  ~Au {T}    => Ju, ~Au -> (Gr + Ap) + Ju
A6+B8:  Ju {T},  ~(~Au) {F} => Ju, Au -> (Gr + Ap) + Ju + Au
A6+B10: ~(Ju) {F}, ~Au {T}  => ~Ju, ~Au -> (Gr + Ap)
A6+B15: ~(Ju) {F}, ~(~Au) {F} => ~Ju, Au -> (Gr + Ap) + Au

# Final conclusion

So at the end of it all it's really a truth table over the answers of Gregory and April to the second question (Gregory: "It was June" and April: "August didn't do it"), which gives eight possibilities without any contradiction, depending on which combination of these two sentences we consider true or false.

There is one possibility where all four broke the window (Gregory, April, August, June). There are two cases where three of the four broke the window, in which Gregory and April are always present and they may have broken the window either with August or with June. There are two cases where two of the four broke the window (Gregory and April, or August and June). There are two cases where only one person broke the window (June or August). And there is one possibility where none of the four broke the window.

This covers the logic part of the answer and should be a valid answer given the information available.

# Assumptions

Now, if we assume two premises not given by the OP:

Premises 3 and 4: There is one and only one person who broke the window, and that person is in the set {Gregory, April, August, June}.

Then we'll have to narrow down even further and take other measures, not related to logic alone, to choose a culprit between June and August.
In this case, I would judge this way. The first thing is that we are left only with what I called "Circumstance C" before, which means the answers to the first question must have these truth values:

Gregory: I was not! (True)
April: I didn't do it. (True)
August: April was. (False)
June: August says the truth. (False)

Now we have two choices: if June lied in the second answer, then June broke the window; if everyone but August lied in the second answer, then August broke the window. Here are the sentences again:

• Gregory: It was June
• April: August didn't do it
• August: April didn't do it!
• June: Gregory was lying.

Choice one: If June is the culprit

Gregory said it was not Gregory, and now says it was June. April said it was not April, and now says it was not August. August lied about it being April, but now admits it wasn't April. June lied that August was telling the truth about it being April, and now lies that Gregory was lying about not being Gregory. The one who broke the window is June.

Choice two: If August is the culprit

Gregory said it was not Gregory, but is now lying that it was June. April said it was not April, but is now lying that it was not August. August lied about it being April, but now admits it wasn't April. June lied that August was telling the truth about it being April, and now lies that Gregory was lying about not being Gregory. The one who broke the window is August.

## My judgement

In both cases June lied twice. Even if June didn't lie about June directly, this is suspicious behaviour. The first June lie was supporting a lie from August, which is suspicious behaviour too. And the second June lie calls a liar the very person who accused June. So I would isolate June and confront June with those facts until June either tells the truth or gives new information. If I couldn't come to a conclusion and had to choose someone, it would be June.

• I marvel at the effort you put into this answer. Awesome! Aug 11 '20 at 23:00

From the first set of statements:

Gregory: I was not!
April: I didn't do it.
August: April was.
June: August says the truth.

We know that two are true and two are lies. Now August's and June's statements are either both true or both lies, since June's statement is about August's statement.

• If they're both true, then April is the culprit, so April's statement is false while Gregory's statement is true. That gives three true statements, a contradiction.
• If they're both lies, then Gregory and April are both telling the truth, therefore both innocent.

From the second set of statements:

Gregory: It was June
April: August didn't do it
August: April didn't do it!
June: Gregory was lying.

Assuming that each person is consistent, either telling the truth both times or lying both times, then Gregory and April are again telling the truth, and August and June are lying. But that means April and June are both the culprits!

• If August is lying, we can see that August says "April didn't do it!" (while April is telling the truth), so the answer is June. Aug 10 '20 at 8:12
• @user36514 Instead of giving away the answer in the comments, please consider clarifying the question as requested, so that we can arrive at the correct answer ourselves. Aug 10 '20 at 8:34

The first liar is August: we can see he contradicts himself across the two rounds. The second liar is June: she affirms that August is telling the truth even though he's clearly not.
So the ones telling the truth are Gregory and April. The one to blame is June, since April only affirms that it wasn't her and wasn't August, while Gregory (who's telling the truth) blames June.

First statements:

Gregory: I was not!
April: I didn't do it.
August: April was.
June: August says the truth.

Given that two of them told the truth and two of them lied, and each statement is either true or false, there are two choices.

Choice 1 - Considering June's statement true: As June tells the truth, August tells the truth too (that gives us the two truth-tellers: June and August, so Gregory and April must be lying). According to August, April is the culprit, which fits, since April lied. But Gregory was lying as well, so he would have to be a culprit too. As only one culprit is involved, the assumption that June told the truth is false.

Choice 2 - Considering June's statement a lie: As June lied, by June's words August lied too (that gives us the two liars: June and August), so Gregory and April told the truth. Hence Gregory and April are not culprits.

Second statements:

Gregory: It was June
April: August didn't do it
August: April didn't do it!
June: Gregory was lying.

As Gregory and April didn't do it, the culprit must be either August or June. Asking again turned August's statement into the truth, and April says August didn't do it. So it turns out that June is the culprit, not only because June is the only option left, but also because June defends himself by claiming Gregory is lying, whereas Gregory has no reason to lie since he didn't do it.

### June

There is information missing here for the second set of answers, but we can make assumptions that do lead to a solution. However, we are also missing whether there can be more than one culprit.

We know that August provides conflicting answers, so the missing information cannot be that a liar always lies: August MUST be lying once and telling the truth once. As such, the only way the missing information can lead to a conclusion is if the missing info is "again, two lied and two spoke the truth". It might be "this time, X lied and 4-X spoke the truth", but that would be too important a detail to leave out. Leaving it out by accident suggests that the second set of answers shares a characteristic with the first, and we already know it cannot be liars-always-lie. There is one other option: "this time, those who lied before spoke the truth, and vice versa".

Now, the first set of answers: August and June overlap, either both speak the truth or both lie. This leads to two possible scenarios.

Scenario A: August and June are speaking the truth. Then April is a culprit. This means Gregory also lied, meaning April and Gregory are both culprits.

Scenario B: August and June both lie. This means Gregory and April are telling the truth, so Gregory and April are both innocent.

Then we get to the second set of answers. We assume that 2 lie and 2 tell the truth, but we cannot tell who based on the first set. Gregory and June clash, so one of them lies and one of them tells the truth. Next, either April or August has to be a liar, meaning one of them is a culprit and the other is innocent.

Under scenario B, we know April was innocent. This means August is telling the truth and April is lying, which means August is a culprit. IF we only have one culprit, that means Gregory is lying now and only August did it. However, if multiple culprits are allowed, then Gregory could be lying or telling the truth, so we don't know whether June ALSO did it.
Unless, of course, "1 lie, 1 truth, per person" applies; then we know Gregory lies this time around and June is innocent.

Under scenario A, we know April and Gregory are both culprits. This means August lies this time around, April is telling the truth, and August is innocent. Again we cannot resolve June vs Gregory, so we don't know whether we have 2 or 3 culprits. If, however, "1 lie, 1 truth, per person" applies, then this time Gregory is telling the truth and June is lying, so we have 3 culprits.

So, based on "1 culprit mandatory, or multiple culprits possible" and "2 lies + 2 truths the second time" or "who lies and who tells the truth flipped in the second set of answers", we may or may not get an answer with 1, 2 or 3 culprits. So we cannot tell the culprit for sure, even with assumptions about the missing information.

But likely the intent was: 1 culprit, and again 2 lies + 2 truths the second time, so August is the culprit.
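For readers who want to check the round-one case analysis mechanically, here is a small brute-force sketch (hypothetical code, not from any of the answers above; it assumes each statement's truth is fixed by who actually broke the window, and that exactly two statements are true):

    public class WindowPuzzleRoundOne {
        public static void main(String[] args) {
            // Only Gregory's and April's guilt appears in the four
            // round-one statements, so four cases cover everything.
            for (int mask = 0; mask < 4; mask++) {
                boolean gr = (mask & 1) != 0; // Gregory broke the window
                boolean ap = (mask & 2) != 0; // April broke the window
                boolean g1 = !gr;             // Gregory: "I was not!"
                boolean a1 = !ap;             // April:   "I didn't do it."
                boolean au1 = ap;             // August:  "April was."
                boolean j1 = au1;             // June:    "August says the truth."
                int truths = (g1 ? 1 : 0) + (a1 ? 1 : 0) + (au1 ? 1 : 0) + (j1 ? 1 : 0);
                if (truths == 2) { // premise: exactly two told the truth
                    System.out.println("consistent: Gregory=" + gr + ", April=" + ap);
                }
            }
        }
    }

Running it prints exactly the two consistent cases found above: Gregory and April both innocent (A1), or both guilty (A6), with August's and June's guilt unconstrained by round one.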
# 6.5. Index Configuration

GeoMesa exposes a variety of configuration options that can be used to customize and optimize a given installation.

## 6.5.1. Setting Schema Options

Static properties of a SimpleFeatureType must be set when calling createSchema, and can't be changed afterwards. Most properties are controlled through user-data values, either on the SimpleFeatureType or on a particular attribute.

Setting the user data can be done in multiple ways. If you are using a string to indicate your SimpleFeatureType (e.g. through the command line tools, or when using SimpleFeatureTypes.createType), you can append the type-level options to the end of the string, like so:

    // append the user-data values to the end of the string, separated by a semi-colon
    String spec = "name:String,dtg:Date,*geom:Point:srid=4326;option.one='foo',option.two='bar'";
    SimpleFeatureType sft = SimpleFeatureTypes.createType("mySft", spec);

If you have an existing simple feature type, or you are not using SimpleFeatureTypes.createType, you may set the values directly in the feature type:

    // set the hint directly
    SimpleFeatureType sft = ...
    sft.getUserData().put("option.one", "foo");

If you are using TypeSafe configuration files to define your simple feature type, you may include a 'user-data' key:

    geomesa {
      sfts {
        "mySft" = {
          attributes = [
            { name = name, type = String }
            { name = dtg, type = Date }
            { name = geom, type = Point, srid = 4326 }
          ]
          user-data = {
            option.one = "foo"
          }
        }
      }
    }

## 6.5.2. Setting Attribute Options

In addition to schema-level user data, each attribute also has user data associated with it. Just like the schema options, attribute user data can be set in multiple ways.

If you are using a string to indicate your SimpleFeatureType (e.g. through the command line tools, or when using SimpleFeatureTypes.createType), you can append the attribute options after the attribute type, separated with a colon:

    // append the user-data after the attribute type, separated by a colon
    String spec = "name:String:index=true,dtg:Date,*geom:Point:srid=4326";
    SimpleFeatureType sft = SimpleFeatureTypes.createType("mySft", spec);

If you have an existing simple feature type, or you are not using SimpleFeatureTypes.createType, you may set the user data directly in the attribute descriptor:

    // set the hint directly
    SimpleFeatureType sft = ...
    sft.getDescriptor("name").getUserData().put("index", "true");

If you are using TypeSafe configuration files to define your simple feature type, you may add user data keys to the attribute elements:

    geomesa {
      sfts {
        "mySft" = {
          attributes = [
            { name = name, type = String, index = true }
            { name = dtg, type = Date }
            { name = geom, type = Point, srid = 4326 }
          ]
        }
      }
    }

## 6.5.3. Setting the Indexed Date Attribute

For schemas that contain a date attribute, GeoMesa will use the attribute as part of the primary Z3/XZ3 index. If a schema contains more than one date attribute, you may specify which attribute to use through the user-data key geomesa.index.dtg. If you would prefer to not index any date, you may disable it through the key geomesa.ignore.dtg. If nothing is specified, the first declared date attribute will be used.

    // specify the attribute 'myDate' as the indexed date
    sft1.getUserData().put("geomesa.index.dtg", "myDate");

    // disable indexing by date
    sft2.getUserData().put("geomesa.ignore.dtg", true);

## 6.5.4. Customizing Index Creation

To speed up ingestion, or because you are only using certain query patterns, you may disable some indices.
The indices are created when calling createSchema. If nothing is specified, the Z2/Z3 (or XZ2/XZ3, depending on geometry type) indices and record indices will all be created, as well as any attribute indices you have defined.

Warning: Certain queries may be much slower if you disable an index.

To enable only certain indices, you may set a user data value in your simple feature type. The user data key is geomesa.indices.enabled, and it should contain a comma-delimited list containing a subset of index identifiers, as specified in Index Overview. See Setting Schema Options for details on setting user data.

If you are using the GeoMesa SchemaBuilder, you may instead call the indexes methods:

    import org.locationtech.geomesa.utils.geotools.SchemaBuilder

    val sft = SchemaBuilder.builder()
        .addString("name")
        .addDate("dtg")
        .addPoint("geom", default = true)
        .userData
        .indices(List("id", "z3", "attr"))
        .build("mySft")

## 6.5.5. Configuring Feature ID Encoding

While feature IDs can be any string, a common use case is to use UUIDs. A UUID is a globally unique, specially formatted string composed of hex characters in the format {8}-{4}-{4}-{4}-{12}, for example 28a12c18-e5ae-4c04-ae7b-bf7cdbfaf234. A UUID can also be considered as a 128 bit number, which can be serialized in a smaller size.

You can indicate that feature IDs are UUIDs using the user data key geomesa.fid.uuid. If set before calling createSchema, then feature IDs will be serialized as 16 byte numbers instead of 36 byte strings, saving some overhead:

    sft.getUserData().put("geomesa.fid.uuid", "true");
    datastore.createSchema(sft);

If the schema is already created, you may still retroactively indicate that feature IDs are UUIDs, but you must also indicate that they are not serialized that way using geomesa.fid.uuid-encoded. This may still provide some benefit when exporting data in certain formats (e.g. Arrow):

    SimpleFeatureType existing = datastore.getSchema("existing");
    existing.getUserData().put("geomesa.fid.uuid", "true");
    existing.getUserData().put("geomesa.fid.uuid-encoded", "false");
    datastore.updateSchema("existing", existing);

Warning: Ensure that you use valid UUIDs if you indicate that you are using them. Otherwise you will experience exceptions writing and/or reading data.

## 6.5.6. Configuring Geometry Serialization

By default, geometries are serialized using a modified version of the well-known binary (WKB) format. Alternatively, geometries may be serialized using the tiny well-known binary (TWKB) format. TWKB will be smaller on disk, but does not allow full double floating point precision. For point geometries, TWKB will take 4-12 bytes (depending on the precision specified), compared to 18 bytes for WKB. For line strings, polygons, or other geometries with multiple coordinates, the space savings will be greater due to TWKB's delta encoding scheme.

For any geometry type attribute, TWKB serialization can be enabled by setting the floating point precision through the precision user-data key. Precision indicates the number of decimal places that will be stored, and must be between -7 and 7, inclusive. A negative precision can be used to indicate rounding of whole numbers to the left of the decimal place. For reference, 6 digits of latitude/longitude precision can store a resolution of approximately 10cm.

For geometries with more than two dimensions, the precision of the Z and M dimensions may be specified separately. Generally these dimensions do not need to be stored with the same resolution as X/Y.
By default, Z will be stored with precision 1, and M with precision 0. To change this, specify the additional precisions after the X/Y precision, separated with commas. For example, 6,1,0 would set the X/Y precision to 6, the Z precision to 1 and the M precision to 0. Z and M precisions must be between 0 and 7, inclusive.

TWKB serialization can be set when creating a new schema, but can also be enabled at any time through the updateSchema method. If modifying an existing schema, any data already written will not be updated.

    SimpleFeatureType sft = ...
    sft.getDescriptor("geom").getUserData().put("precision", "4");

See Setting Attribute Options for details on how to set attribute options.

## 6.5.7. Configuring Column Groups

For back-ends that support it (currently HBase and Accumulo), subsets of attributes may be replicated into separate column groups. When possible, only the reduced column groups will be scanned for a query, which avoids having to read unused data from disk. For schemas with a large number of attributes, this can speed up some queries, at the cost of writing more data to disk.

Column groups are specified per attribute, using attribute-level user data. An attribute may belong to multiple column groups, in which case it will be replicated multiple times. All attributes will belong to the default column group without having to specify anything. See Setting Attribute Options for details on how to set attribute options.

Column groups are specified using the user data key column-groups, with the value being a comma-delimited list of groups for that attribute. It is recommended to keep column group names short (ideally a single character), in order to minimize disk usage. If a column group conflicts with one of the default groups used by GeoMesa, it will throw an exception when creating the schema. Currently, the reserved groups are d for HBase and F, A, I, and B for Accumulo.

    SimpleFeatureType sft = ...
    sft.getDescriptor("name").getUserData().put("column-groups", "a,b");

    import org.locationtech.geomesa.utils.geotools.SimpleFeatureTypes
    // for java, use org.locationtech.geomesa.utils.interop.SimpleFeatureTypes

    val spec = "name:String:column-groups=a,dtg:Date:column-groups='a,b',*geom:Point:srid=4326:column-groups='a,b'"
    SimpleFeatureTypes.createType("mySft", spec)

    geomesa {
      sfts {
        "mySft" = {
          attributes = [
            { name = "name", type = "String", column-groups = "a" }
            { name = "dtg", type = "Date", column-groups = "a,b" }
            { name = "geom", type = "Point", column-groups = "a,b", srid = 4326 }
          ]
        }
      }
    }

    import org.locationtech.geomesa.utils.geotools.SchemaBuilder

    val sft = SchemaBuilder.builder()
        .addString("name").withColumnGroups("a")
        .addDate("dtg").withColumnGroups("a", "b")
        .addPoint("geom", default = true).withColumnGroups("a", "b")
        .build("mySft")

## 6.5.8. Configuring Z-Index Shards

GeoMesa allows configuration of the number of shards (or splits) into which the Z2/Z3/XZ2/XZ3 indices are divided. This parameter may be changed individually for each SimpleFeatureType. If nothing is specified, GeoMesa will default to 4 shards. The number of shards must be between 1 and 127.

Shards allow us to pre-split tables, which provides some initial parallelism for reads and writes. As more data is written, tables will generally split based on size, thus obviating the need for explicit shards. For small data sets, shards are more important as the tables might never split due to size.
Setting the number of shards too high can reduce performance, as it requires more calculations to be performed per query.

The number of shards is set when calling createSchema. It may be specified through the simple feature type user data using the key geomesa.z.splits. See Setting Schema Options for details on setting user data.

    sft.getUserData().put("geomesa.z.splits", "4");

## 6.5.9. Configuring Z-Index Time Interval

GeoMesa uses a z-curve index for time-based queries. By default, time is split into week-long chunks and indexed per week. If your queries are typically much larger or smaller than one week, you may wish to partition at a different interval. GeoMesa provides four intervals - day, week, month or year. As the interval gets larger, fewer partitions must be examined for a query, but the precision of each interval will go down.

If you typically query months of data at a time, then indexing per month may provide better performance. Alternatively, if you typically query minutes of data at a time, indexing per day may be faster. The default per-week partitioning tends to provide a good balance for most scenarios. Note that the optimal partitioning depends on query patterns, not the distribution of data.

The time interval is set when calling createSchema. It may be specified through the simple feature type user data using the key geomesa.z3.interval. See Setting Schema Options for details on setting user data.

    sft.getUserData().put("geomesa.z3.interval", "month");

## 6.5.10. Configuring XZ-Index Precision

GeoMesa uses an extended z-curve index for storing geometries with extents. The index can be customized by specifying the resolution level used to store geometries. By default, the resolution level is 12. If you have very large geometries, you may want to lower this value. Conversely, if you have very small geometries, you may want to raise it.

The resolution level for an index is set when calling createSchema. It may be specified through the simple feature type user data using the key geomesa.xz.precision. See Setting Schema Options for details on setting user data.

    sft.getUserData().put("geomesa.xz.precision", 12);

For more information on resolution level (g), see "XZ-Ordering: A Space-Filling Curve for Objects with Spatial Extension" by Böhm, Klump and Kriegel.

## 6.5.11. Configuring Attribute Index Shards

GeoMesa allows configuration of the number of shards (or splits) into which the attribute indices are divided. This parameter may be changed individually for each SimpleFeatureType. If nothing is specified, GeoMesa will default to 4 shards. The number of shards must be between 1 and 127. See Configuring Z-Index Shards for more background on shards.

The number of shards is set when calling createSchema. It may be specified through the simple feature type user data using the key geomesa.attr.splits. See Setting Schema Options for details on setting user data.

    sft.getUserData().put("geomesa.attr.splits", "4");

## 6.5.12. Configuring Attribute Cardinality

GeoMesa allows attributes to be marked as either high or low cardinality. If set, this hint will be used in query planning. For more information, see Cardinality Hints.

To set the cardinality of an attribute, use the key cardinality on the attribute, with a value of high or low.

    SimpleFeatureType sft = ...
    sft.getDescriptor("name").getUserData().put("index", "true");
    sft.getDescriptor("name").getUserData().put("cardinality", "high");

    import org.locationtech.geomesa.utils.geotools.SchemaBuilder
    import org.locationtech.geomesa.utils.stats.Cardinality

    val sft = SchemaBuilder.builder()
        .addString("name").withIndex(Cardinality.HIGH)
        .addDate("dtg")
        .addPoint("geom", default = true)
        .build("mySft")

## 6.5.13. Configuring Partitioned Indices

To help with large data sets, GeoMesa can partition each index into separate tables, based on the attributes of each feature. Having multiple tables for a single index can make it simpler to manage a cluster, for example by making it trivial to delete old data. This functionality is currently supported in HBase, Accumulo and Cassandra.

Partitioning must be specified through user data when creating a simple feature type, before calling createSchema. To indicate a partitioning scheme, use the key geomesa.table.partition. Currently the only valid value is time, to indicate time-based partitioning:

    sft.getUserData().put("geomesa.table.partition", "time");

    import org.locationtech.geomesa.utils.geotools.SchemaBuilder

    val sft = SchemaBuilder.builder()
        .addString("name")
        .addDate("dtg")
        .addPoint("geom", default = true)
        .userData
        .partitioned()
        .build("mySft")

Note that to enable partitioning the schema must contain a default date field.

When partitioning is enabled, each index will consist of multiple physical tables. The tables are partitioned based on the Z-interval (see Configuring Z-Index Time Interval). Tables are created dynamically when needed.

Partitioned tables can still be pre-split, as described in Configuring Index Splits. For Z3 splits, the min/max date configurations are automatically determined by the partition, and do not need to be specified.

When a query must scan multiple tables, by default the tables will be scanned sequentially. To instead scan the tables in parallel, set the system property geomesa.partition.scan.parallel=true. Note that when enabled, queries that span many partitions may place a large load on the system.

The GeoMesa command line tools provide functions for managing partitions; see manage-partitions for details.

## 6.5.14. Configuring Index Splits

When planning to ingest large amounts of data, if the distribution is known up front, it can be useful to pre-split tables before writing. This provides parallelism across a cluster from the start, and doesn't depend on implementation triggers (which typically split tables based on size). Splits are managed through implementations of the org.locationtech.geomesa.index.conf.TableSplitter interface.

### 6.5.14.1. Specifying a Table Splitter

A table splitter may be specified through user data when creating a simple feature type, before calling createSchema. To indicate the table splitter class, use the key table.splitter.class:

    sft.getUserData().put("table.splitter.class", "org.example.CustomSplitter");

To indicate any options for the given table splitter, use the key table.splitter.options:

    sft.getUserData().put("table.splitter.options", "foo,bar,baz");

See Setting Schema Options for details on setting user data.

### 6.5.14.2. The Default Table Splitter

Generally, table.splitter.class can be omitted. If so, GeoMesa will use a default implementation that allows for a flexible configuration using table.splitter.options. If no options are specified, then all tables will generally create 4 splits (based on the number of shards).
The default ID index splits assume that feature IDs are randomly distributed UUIDs.

For the default splitter, table.splitter.options should consist of comma-separated entries, in the form key1:value1,key2:value2. Entries related to a given index should start with the index identifier, e.g. one of id, z3, z2 or attr (xz3 and xz2 indices use z3 and z2, respectively).

    Index Type   Option                     Description
    -----------------------------------------------------------------------------
    Z3/XZ3       z3.min                     The minimum date for the data
                 z3.max                     The maximum date for the data
                 z3.bits                    The number of leading bits to split on
    Z2/XZ2       z2.bits                    The number of leading bits to split on
    ID           id.pattern                 Split pattern
    Attribute    attr.<attribute>.pattern   Split pattern for a given attribute

#### 6.5.14.2.1. Z3/XZ3 Splits

Dates are used to split based on the Z3 time prefix (typically weeks). They are specified in the form yyyy-MM-dd. If the minimum date is specified, but the maximum date is not, it will default to the current date. After the dates, the Z value can be split based on a number of bits (note that due to the index format, bits can not be specified without dates). For example, specifying two bits would create splits 00, 01, 10 and 11. The total number of splits created will be <number of z shards> * <number of time periods> * 2 ^ <number of bits>.

#### 6.5.14.2.2. Z2/XZ2 Splits

If any options are given, the number of bits must be specified. For example, specifying two bits would create splits 00, 01, 10 and 11. The total number of splits created will be <number of z shards> * 2 ^ <number of bits>.

#### 6.5.14.2.3. ID and Attribute Splits

Splits are defined by patterns. For an ID index, the pattern(s) are applied to the single feature ID. For an attribute index, each attribute that is indexed can be configured separately, by specifying the attribute name as part of the option. For example, given the schema name:String:index=true,*geom:Point:srid=4326, the name attribute splits can be configured with attr.name.pattern.

Patterns consist of one or more single characters or ranges enclosed in square brackets. Valid characters can be any of the numbers 0 to 9, or any letter a to z, in upper or lower case. Ranges are two characters separated by a dash. Each set of brackets corresponds to a single character, allowing for nested splits.

For example, the pattern [0-9] would create 10 splits, based on the numbers 0 through 9. The pattern [0-9][0-9] would create 100 splits. The pattern [0-9a-f] would create 16 splits based on lower-case hex characters. The pattern [0-9A-F] would do the same with upper-case characters.

For data hot-spots, multiple patterns can be specified by adding additional options with a 2, 3, etc. appended to the key. For example, if most of the name values start with the letters f and t, splits could be specified as attr.name.pattern:[a-z],attr.name.pattern2:[f][a-z],attr.name.pattern3:[t][a-z]

For number-type attributes, only numbers are considered valid characters. Due to lexicoding, normal number prefixing will not work correctly. E.g., if the data has numbers in the range 8000-9000, specifying [8-9][0-9] will not split the data properly. Instead, trailing zeros should be added to reach the appropriate length, e.g. [8-9][0-9][0][0].
#### 6.5.14.2.4. Full Example

    import org.locationtech.geomesa.utils.interop.SimpleFeatureTypes;

    String spec = "name:String:index=true,age:Int:index=true,dtg:Date,*geom:Point:srid=4326";
    SimpleFeatureType sft = SimpleFeatureTypes.createType("foo", spec);
    sft.getUserData().put("table.splitter.options",
        "id.pattern:[0-9a-f],attr.name.pattern:[a-z],z3.min:2018-01-01,z3.max:2018-01-31,z3.bits:2,z2.bits:4");

## 6.5.15. Configuring Cached Statistics

GeoMesa allows for collecting summary statistics for attributes during ingest, which are then stored and available for instant querying. Hints are set on individual attributes using the key keep-stats, as described in Setting Attribute Options.

Note: Cached statistics are currently only implemented for the Accumulo data store.

Stats are always collected for the default geometry, default date and any indexed attributes. See Cost-Based Strategy for more details. In addition, any other attribute can be flagged for stats. This will cause the following stats to be collected for those attributes:

• Min/max (bounds)
• Top-k

Only attributes of the following types can be flagged for stats: String, Integer, Long, Float, Double, Date and Geometry.

For example:

    // set the hint directly
    SimpleFeatureType sft = ...
    sft.getDescriptor("name").getUserData().put("keep-stats", "true");

See Analytic Commands and Accessing Stats through the GeoMesa API for information on reading cached stats.

## 6.5.16. Mixed Geometry Types

A common pitfall is to unnecessarily specify a generic geometry type when creating a schema. Because GeoMesa relies on the geometry type for indexing decisions, this can negatively impact performance. If the default geometry type is Geometry (i.e. supporting both point and non-point features), you must explicitly enable "mixed" indexing mode. All other geometry types (Point, LineString, Polygon, etc) are not affected.

Mixed geometries must be declared when calling createSchema. It may be specified through the simple feature type user data using the key geomesa.mixed.geometries. See Setting Schema Options for details on setting user data.

    sft.getUserData().put("geomesa.mixed.geometries", "true");
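As a closing sketch, several of the options above combined into one schema (the feature type name, attribute names and option values here are hypothetical; datastore is assumed to be an existing GeoMesa data store, as in the snippets above):

    // combine an indexed attribute, monthly Z3 buckets, extra pre-split
    // shards, and time-based table partitioning in a single schema
    String spec = "name:String:index=true,dtg:Date,*geom:Point:srid=4326";
    SimpleFeatureType sft = SimpleFeatureTypes.createType("sensors", spec);
    sft.getUserData().put("geomesa.z3.interval", "month");
    sft.getUserData().put("geomesa.z.splits", "8");
    sft.getUserData().put("geomesa.table.partition", "time");
    datastore.createSchema(sft);

All of the keys used here are the user-data keys documented in this section; as noted above, they must be set before createSchema is called.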
# Physics 30 electricity problem

1. Oct 13, 2005

### Chipper

OK, so I'm on the last 2 questions of my electricity and electric fields assignment and I'm totally stumped. Here's the first one:

12. Object A with a mass of 1.4 × 10⁻³ kg has a charge of 3.8 × 10⁻¹⁰ C. This object is at rest a distance of 48 cm from a fixed object B of charge 4.2 × 10⁻⁵ C. The two objects will repel each other. With what speed will object A be moving when it is 96 cm from object B? (4 marks)

So far I figured that first I have to find the total energy at rest at 48 cm, and ET = EP because EK = 0 since it's at rest. However, I can't think of a formula for EP, because for EP = mgh I don't have any height, and for EP = (1/2)kx² I don't have k or x. Then I know I find the EP at 96 cm and solve for v, but I can't figure out how to get there.

The next question is sort of the same:

13. Two charged objects A and B have masses of 1.3 × 10⁻² kg and 2.6 × 10⁻² kg respectively. Their charges are -1.7 × 10⁻⁴ C and -3.8 × 10⁻⁴ C respectively. They are released from rest when they are 3.6 m apart. What will their speeds be when they are a "large" distance apart? (6 marks)

For this one I'm totally confused because I don't know how to start. I mean, I wrote down the variables, but I'm not sure how to approach this question.

2. Oct 15, 2005

### andrevdh

Try this: the work done by the electrostatic force as the mass is moved from $$48\ cm\rightarrow\ 96\ cm$$ is equal to the change in its kinetic energy.

3. Oct 15, 2005

### andrevdh

In the second case all their potential energy gets converted into their kinetic energy.

4. Apr 10, 2011

### ashish80ism

Try using the potential at the two points, then finding the work done and hence the kinetic energy, then find the velocity.
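For later readers, a sketch of the energy-balance approach the replies point to for problem 12 (my numbers, using $k \approx 8.99\times 10^{9}\ \mathrm{N\,m^2/C^2}$; treat them as an unchecked illustration, not an answer key):

$$W = k\,q_A q_B\left(\frac{1}{r_1}-\frac{1}{r_2}\right) = \frac{1}{2}mv^2$$

$$W = (8.99\times 10^{9})(3.8\times 10^{-10})(4.2\times 10^{-5})\left(\frac{1}{0.48}-\frac{1}{0.96}\right) \approx 1.49\times 10^{-4}\ \mathrm{J}$$

$$v = \sqrt{\frac{2W}{m}} = \sqrt{\frac{2(1.49\times 10^{-4})}{1.4\times 10^{-3}}} \approx 0.46\ \mathrm{m/s}$$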
# Homogenous_AirBlood_Exch

Periodic airway transport and alveolus-capillary gas exchange.

Model number: 0156

Run Model: Help running a JSim model.

## Description

A homogeneous model for respiratory gas exchange is presented. It contains an alveolar sac (compartment) connected to a 1D airway (cylinder). The alveolar sac exchanges species with a capillary (cylinder) through a single layer. The model can predict time-dependent airway concentration profiles in the presence of bidirectional air flow over a single or multiple breath time span. The source of a species can be set either in air or in blood.

## Equations

$\large \frac{{\partial C_{c}}}{\partial t}\,=\, -\, \frac{F_c}{V_c}\,\frac{\partial C_c}{\partial x} \, + \, \frac{D_c}{L^2}\,\frac{{\partial}^2 C_c}{\partial x^2} \, - \, \frac{PS_{c\,if}}{V_c} \, \left ( C_c - C_{if} \right ) \, - \, \frac{G_c}{V_c} \, C_c$

$\large \frac{{\partial C_{if}}}{\partial t}\,=\, \frac{D_{if}}{L^2}\,\frac{{\partial}^2 C_{if}}{\partial x^2} \, + \, \frac{PS_{c\,if}}{V_{if}} \, \left ( C_c - C_{if} \right ) \, - \, \frac{q_{ifx}}{V_{if}} \, - \, \frac{G_{if}}{V_{if}} \, C_{if}$

$\large \frac{{\partial C_{a}}}{\partial t}\,=\, -\, \frac{F_a}{V_a}\,\frac{\partial C_a}{\partial y} \, + \, \frac{D_a}{{L_a}^2}\,\frac{{\partial}^2 C_a}{\partial y^2} \, - \, \frac{G_a}{V_a} \, C_a$

$\large \frac{d V_{al}}{d t}\,=\, F_a \, + \, \frac{q_{if}}{C_{tot}}$

$\large {V_{al}} \, \frac{d C_{al}}{d t} \, + \, C_{al} \, \frac{d V_{al}}{d t} \,=\, q_{if} \, + \, F_a \, \left( \left(1-H\left(F_a\right)\right)\,C_{al} \, + \, H\left(F_a\right)\,C_{a}\left(t,1\right)\right)$

$\large q_{ifx} \, = \, PS_{if\,al} \, \left(C_{if}-C_{al}\right)$

$\large q_{if} \, = \, \int_0^1{q_{ifx}dx}$

where
$\large C$ is the species concentration,
$\large V$ is the volume,
$\large F$ is the volume flow rate,
$\large D$ is the diffusion coefficient,
$\large L$ is the length,
$\large PS$ is the permeability-surface product,
$\large G$ is the reaction rate,
$\large q_{ifx}$ is the species flux per unit length of the capillary-alveoli interface,
$\large q_{if}$ is the integrated species flux through the capillary-alveoli interface,
$\large H$ is the Heaviside function,
index $\large c$ refers to the capillary, index $\large if$ refers to the capillary-alveoli interface, index $\large al$ refers to the alveolus, and index $\large a$ refers to the airway.

The equations for this model may also be viewed by running the JSim model applet and clicking on the Source tab at the bottom left of JSim's Run Time graphical user interface. The equations are written in JSim's Mathematical Modeling Language (MML). See the Introduction to MML and the MML Reference Manual. Additional documentation for MML can be found by using the search option at the Physiome home page.

## References

Hlastala MP. A Model of Fluctuating Alveolar Gas Exchange During the Respiratory Cycle. Respir. Physiol. 15: 214-232, 1972.

## Key Terms

gas exchange, alveolar exchange, pulmonary circulation, bronchial circulation, pulmonary capillaries, airway, alveolar sac, PDE

## Model History

Get Model history in CVS.
Posted by: bej

## Acknowledgements

Please cite www.physiome.org in any publication for which this software is used and send an email with the citation and, if possible, a PDF file of the paper to: staff@physiome.org. Or send a copy to:

The National Simulation Resource, Director J. B. Bassingthwaighte, Department of Bioengineering, University of Washington, Seattle WA 98195-5061.
# Apéryodical: Scratch and sniff ζ plot

Christian's put together this fun applet for exploring the Zeta function – you can move your pointer around to reveal the value of $\zeta$ at each point in the complex plane. The hue (colour) revealed is the argument of the value, and the lightness (bright to dark) represents the magnitude. There's a blog post over at Gandhi Viswanathan's Blog explaining how it works.

The resulting plot has contour lines showing how the function behaves. Explore!
# Inductor in A.C. Circuit

I am reading about inductors connected in A.C. circuits. I understand mathematically that the current lags behind the voltage. But what is the physical explanation for this?

My understanding: as the emf of the alternating source increases, an opposing emf of equal magnitude is induced in the inductor due to self-induction. But if this is the case, how can current flow? One emf tries to push electrons one way and the other emf tries to push electrons the other way?

I came across a similar question here: If induced voltage (back-emf) is equal and opposite to applied voltage, what drives the current? (but there were so many answers that I don't know what is right).

I hope that the answer to this question will help me figure out other questions, like how the current increases when the emf decreases.

• The magnetic field in the inductor wants to maintain its steady state condition (see Lenz's Law for a closely related subject). Because of this, as the polarity in the AC circuit changes, the inductor "fights" this, and it takes time for the instantaneous emf of the circuit to change the magnetic field in the inductor. Due to this, the current in the inductor always lags the emf across the inductor. – David White Aug 19 '18 at 1:10
• @GokulakrishnanShankar I have added an answer to the question in the link that you have quoted which may help answer your question? physics.stackexchange.com/a/423571/104696 – Farcher Aug 19 '18 at 16:08

The answer to your question lies in the fact that you are dealing with two different electric fields which are competing with one another, and that the non-conservative electric field produced by the inductor owes its existence to a changing magnetic flux produced by a changing current in the circuit. One electric field, produced by the voltage source, is trying to change the current, and the other electric field, produced by the inductor, is trying to stop the change in current; but the change in current has to happen, because if it didn't, the electric field produced by the inductor would cease to exist.

The definition of self-inductance $L$ is $L=\frac{\Phi}{I}$, where $\Phi$ is the magnetic flux and $I$ is the current. Differentiating the defining equation with respect to time and then rearranging gives $\frac{d\Phi}{dt} = L\frac{dI}{dt} \Rightarrow \mathcal E_{\rm L} = - L\frac{dI}{dt}$ after applying Faraday's law, where $\mathcal E_{\rm L}$ is the induced emf, which is going to try and prevent any change in the current.

Consider a series circuit which consists of an alternating voltage source and an ideal inductor. The alternating voltage source is trying to change the current in the circuit by varying the electric field in the circuit. The inductor is trying to oppose any change to the current, and hence to the magnetic flux, by producing a non-conservative electric field in opposition to the field produced by the voltage source. The strength of the non-conservative field which opposes the electric field trying to change the current in the circuit is determined by the rate of change of current in the circuit.

Suppose the current and the supply voltage are in phase with one another, as in graph 1. Just after time A the electric field due to the supply voltage is increasing, which leads to an increase in the current in the circuit. The inductor needs to generate an electric field which tries to negate the small electric field produced by the voltage source. However, at this time the rate of change of current is a maximum.
At time B the electric field produced by the voltage source is large, and to negate its effect the inductor must produce a large electric field in the opposite direction; yet at this time the rate of change of current is near zero.

Suppose instead that the current leads the supply voltage by $90^\circ$, as in graph 2. Just after time A the electric field due to the supply voltage is increasing, which leads to an increase in the current in the circuit. The inductor needs to generate an electric field which tries to negate the small electric field produced by the voltage source. The good news is that at this time the rate of change of current is very small, but the electric field produced by the inductor would be in the same direction as the electric field produced by the voltage source. At time B there is a large rate of change of current, so the inductor would produce a large electric field to negate the electric field produced by the voltage source, but again that field is in the wrong direction.

You could go on like this until you get to graph 3, where the supply voltage leads the current by $90^\circ$, and you will find that at all times the magnitude of the electric field produced by the inductor mirrors that produced by the voltage source but is opposite in direction.

In terms of energy, the graphs of supply voltage, current and power produced by the voltage source look like this if the phase difference is $90^\circ$. The darker green areas represent energy flowing from the voltage source to the inductor, and the lighter green areas represent energy flowing from the inductor to the voltage source. A corresponding power graph for the inductor would be the mirror image of the one for the voltage source.

Overall, the important thing to realise is that even if two emfs in a circuit look as though they negate each other, there can still be a transfer of energy between the two sources of emf.

• But in the last graph (the coloured one), it looks like the current leads the voltage.... Am I getting it wrong? – Gokulakrishnan Shankar Aug 28 '18 at 15:10
• @GokulakrishnanShankar You are correct, and I have switched two labels in the graph to show the voltage leading the current. – Farcher Aug 28 '18 at 15:25
• Thanks a lot! I understand now... Just one last question: why "non-conservative" electric field? – Gokulakrishnan Shankar Aug 28 '18 at 15:55
• @GokulakrishnanShankar If the work done between two points is not independent of the path taken, then the field is non-conservative. Have a look at this video. m.youtube.com/watch?v=eqjl-qRy71w – Farcher Aug 28 '18 at 17:34

As the emf of the alternating source increases, an opposing emf of equal magnitude is induced in the inductor due to self induction. But if this the case, how can current flow?

But the induced emf isn't there unless the current through the inductor is changing. From the Wikipedia article Inductance:

In electromagnetism and electronics, inductance is the property of an electrical conductor by which a change in electric current through it induces an electromotive force (voltage) in the conductor.

The first sentence in your quote above is essentially correct, but you must also understand that the opposing emf, due to self-induction, implies that the current is changing. For an ideal inductor with a non-zero voltage across it, the current through can be finite only if the voltage across and the induced emf are equal in magnitude.
Since the emf is zero when the current through is constant, it follows that when there is a voltage across the inductor, there is a changing current through it.

• 1) So in order for the induced emf to become equal to the voltage of the source, there must be a changing current in the inductor, right? – Gokulakrishnan Shankar Aug 19 '18 at 3:17
• @GokulakrishnanShankar, (1) in order for there to be any (induced) emf at all, the current must be changing, and (2) the current must be changing at just the right rate such that the induced emf has the same magnitude as the voltage of the source. – Alfred Centauri Aug 19 '18 at 12:49

The mathematical explanation is $$V = L \frac{{\rm d}I}{{\rm d}t}$$ which is just the mathematical definition of the term inductor. If you turn this around, $$\frac{{\rm d}I}{{\rm d}t} = \frac{V}{L}$$ This tells you that, for example, if $I$ is negative and you start applying a positive voltage, the current will only then start to trend positive. And it won't actually reach a positive value until some finite time after you've applied the positive voltage. Then once the current goes positive, the same thing happens when you apply a negative voltage --- you have to wait before you'll get a negative current.

Imagine that the inductor is a mass, the voltage is a force acting on that mass, and the current is the velocity of the mass. We start with the mass (inductor) at rest, with velocity (current) equal to zero. You apply a force (voltage) to the mass (inductor). At the instant you apply that force (voltage), the velocity (current) of the mass (inductor) is zero. And as it then accelerates, the velocity (current) of the mass (inductor) still lags behind the force (voltage).

Your question implies that the voltage across an inductor is equal to the difference between the applied voltage (emf) and the opposing voltage (back emf) and, since the back emf is equal to the applied voltage, the resulting voltage on the inductor should be zero. In reality, if you apply some voltage to an inductor and actually measure it with a voltmeter, you'll see that the voltage is not zero, but is, in fact, equal to the applied voltage, which causes the current in the inductor to grow according to the familiar equation: $V_{appl}= L \frac {di} {dt}$. So, although the concept of the "back emf" is very useful, it should not be treated as a real voltage, since that would lead to the incorrect conclusion that the net voltage across an inductor is always zero. It is not.

A possible mechanical analogy is Newton's second law, $F_{appl}=ma=m \frac {dv} {dt}$. We could call $ma$ a "back force" and treat it as a real force acting on a body in response to the applied force $F_{appl}$, in which case we could, mistakenly, conclude that the net force acting on a body is zero and, therefore, the body should not accelerate.

• You are right that emf is not voltage, but unfortunately your analogy is misleading. It suggests that induced EMF is not a measure of additional force, but only a manifestation (acceleration) of the real forces measured by voltage (impressed force). But in fact in EM theory both voltage forces and induced electromotive forces are real and contribute to the total force. In a low-ohmic-resistance coil they largely cancel out, though, and the tiny difference is what makes the current accelerate/decelerate. – Ján Lalinský Mar 9 '19 at 23:44
• @JánLalinský "In low ohmic resistance coil they largely cancel out though and the tiny difference is what makes the current accelerate/decelerate."
So, if the resistance were zero, the difference between the applied voltage and EMF would be zero and the current would not "accelerate" at all? – V.F. Mar 10 '19 at 1:04
• No, because the emf in an ideal inductor is given by $-LdI/dt$, so if the emf is non-zero, the current will change in time. The lower the ohmic resistance of a real coil, the lower the difference between emf and voltage needed for a given $dI/dt$. – Ján Lalinský Mar 10 '19 at 2:38
• @JánLalinský You've said: "...the tiny difference is what makes the current accelerate...". According to this, if the difference between the applied voltage and EMF (L di/dt) is zero, the current won't accelerate. I am saying: the L di/dt term (which is the difference between the applied voltage and IR) is what makes the current accelerate. The mechanical analogy of this difference is the difference between the applied force and the friction, and this difference would be responsible for the mechanical acceleration, ma. – V.F. Mar 10 '19 at 12:48
• No, you are misapplying my statement above (which was meant for a real coil) to the idealized case of zero ohmic resistance. Of course, non-zero emf alone implies non-zero current acceleration, regardless of what the difference between EMF and voltage is. But this does not mean that forces quantified by the term $LdI/dt$ alone make the current accelerate. It is the vector sum of both electrostatic forces (quantified by voltage) and induced electric forces (quantified by EMF) that makes the current accelerate. In the ideal case of zero resistance, the difference needed is simply zero. – Ján Lalinský Mar 10 '19 at 19:00
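As a compact summary of the phase relationship the answers above describe, for an ideal inductor driven by a sinusoidal source (steady state, integration constant taken as zero):

$$V = V_0\sin(\omega t), \qquad V = L\frac{dI}{dt}\ \Rightarrow\ I = -\frac{V_0}{\omega L}\cos(\omega t) = \frac{V_0}{\omega L}\sin\!\left(\omega t-\frac{\pi}{2}\right),$$

so the current lags the applied voltage by a quarter cycle ($90^\circ$).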
# Linear transformation and its properties

Let $M$ be a $(p-1)$-dimensional subspace of $\mathbb{R}^p$.

venenolundicoofw

First, the statement is false if $t=0$, because in this case $t$ is the class of an element $t'$ of $M$, and $M=\ker B$ would give $Bt'=0\neq 1$.

Now, if $t\neq 0$, then $t'\notin M$ and the line $L=\langle t'\rangle$ satisfies $M\oplus L=\mathbb{R}^p$. Thus, if $\pi$ is the projection on $L$ along $M$ (i.e., if $x=x_M+x_L$, then $\pi(x)=x_L$) and if $\psi:L\to\mathbb{R}$ is defined by $\psi(\lambda t')=\lambda$ for all $\lambda\in\mathbb{R}$, we see that $B=\psi\circ\pi$ satisfies $Bt'=1$.

Edit:

1. $B$ is linear: indeed, if $\pi(x)=\lambda t'$ and $\pi(y)=\mu t'$, then $B(ax+y)=\psi(\pi(ax+y))=\psi((a\lambda+\mu)t')=a\lambda+\mu$ and $aB(x)+B(y)=a\psi(\pi(x))+\psi(\pi(y))=a\psi(\lambda t')+\psi(\mu t')=a\lambda+\mu$, by the definitions of $\pi$ and $\psi$.

2. $M\subseteq\ker B$ by construction. Now, if $\ker B$ were strictly larger, it would be a subspace of $\mathbb{R}^p$ strictly containing the $(p-1)$-dimensional subspace $M$, hence $\ker B$ would be all of $\mathbb{R}^p$. This is false because $Bt'=1$. Thus $\ker B=M$.
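A concrete instance may help (the choice of $p$, $M$ and $t'$ here is mine, for illustration only):

$$p=2,\quad M=\operatorname{span}\{e_1\},\quad t'=e_2,\quad \pi(x_1,x_2)=(0,x_2),\quad \psi(\lambda e_2)=\lambda,$$

so that $B(x_1,x_2)=x_2$ is linear, $\ker B=M$ (the $x_1$-axis), and $Bt'=B(0,1)=1$.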
# Help with an Integration problem

1. Dec 4, 2005

### asdf1

$$\int \sin(ax)\cos(ax)\,dx = \frac{1}{2}\int \sin(2ax)\,dx = -\frac{1}{4a}\cos(2ax) = -\frac{1}{4a}\left(1-2\sin^2(ax)\right)$$

but the correct answer should be $\frac{1}{2a}\sin^2(ax)$. Does anybody know what went wrong?

2. Dec 4, 2005

### siddharth

Why should the "correct answer" be $\frac{1}{2a}\sin^2(ax)$? How do you know the "correct answer" is actually right?

3. Dec 4, 2005

### benorin

It's okay. Sure, you're right, and the back of the book (or Maple or whatever) is also right, assuming you meant $\frac{1}{2a}\sin^2(ax)$:

$$\int \sin(ax)\cos(ax)dx = \frac{1}{2}\int \sin(2ax)dx = -\frac{1}{4a} \cos(2ax) +C ,$$

where C is the constant of integration. However, one may instead apply the substitution

$$u=\sin(ax) \Rightarrow du=a\cos(ax)dx$$

to the given integral like this:

$$\int \sin(ax)\cos(ax)dx =\frac{1}{a} \int u\, du = \frac{1}{2a} u^2 +C = \frac{1}{2a} \sin^{2}(ax) +C$$

But how could that be? Because

$$-\frac{1}{4a} \cos(2ax) =C+\frac{1}{2a} \sin^{2}(ax)$$

follows from the half-angle identity you used [quote: (-1/4a)cos2ax = (-1/4a)(1-2(sinax)^2)] for the proper value of C.

4. Dec 4, 2005

### asdf1

thank you very much!!! :)
Showing that identity and g are not homotopic (without Homology)

Question: Are the identity mapping on $S^1$ and the reflection about the $x$-axis homotopic?

This is a question to which I already know the answer. The objective is to find better answers and suggestions to improve mine. Here are some definitions to set the context of the problem.

Notation: let us denote $[0,1]$ by $I$.

Definition: Let $X$ and $Y$ be topological spaces and let $A$ be a subspace of $X$. Let $f,g:X\to Y$ be continuous functions. We say that $f$ is homotopic to $g$ relative to $A$ (denoted $f\simeq_A g$) if there is a continuous function $H:X\times I\to Y$ such that the functions of the form $H_t:X\to Y$, $H_t(x)=H(x,t)$ for $t\in I$, satisfy the following:

1. $H_t(x)=g(x)=f(x)$ for all $t\in I$ and all $x\in A$.
2. $H_0=f$ and $H_1=g$.

The map $H$ is called a homotopy from $f$ to $g$ relative to $A$. When $f\simeq_\emptyset g$ we say that $f$ and $g$ are homotopic, and denote $f\simeq g$. Notice that $\simeq_A$ is an equivalence relation between continuous functions from $X$ to $Y$.

Definition: Two loops $\alpha,\beta:I\to X$ at $x\in X$ ($\alpha$ and $\beta$ continuous such that $\alpha(0)=\alpha(1)=\beta(0)=\beta(1)=x$) have the same homotopy class (denoted $[\alpha]=[\beta]$) iff $\alpha\simeq_{\{0,1\}} \beta$.

Problem: Consider the function $g:S^1\to S^1$, $g(x,y)=(x,-y)$, and $f=id_{S^1}$ the identity map on $S^1$. Prove, without using Homology Theory, that $g$ and $id_{S^1}$ are not homotopic.

• Do you find 'they induce different maps on the fundamental group' similarly unsatisfying? – user98602 Mar 27 '15 at 16:30

We will prove that $f$ and $g$ are not homotopic by contradiction.

Let $\alpha:I\to S^1$ be the loop at $p=(1,0)\in S^1$ defined by $\alpha(s)=(\cos(2\pi s),\sin(2\pi s))$. Then the loop $\alpha^{-1}:I\to S^1$, $\alpha^{-1}(s)=\alpha(1-s)$, is such that $[\alpha]^{-1}=[\alpha^{-1}]$. Notice that $[\alpha]$ generates $\pi_1(S^1,p)$.

Suppose that $f\simeq g$. We will show that this implies that $\alpha\simeq_{\{0,1\}}\alpha^{-1}$, or equivalently that $[\alpha]=[\alpha]^{-1}$, which is going to lead us to a contradiction.

Remark: notice that if $f\simeq g$, then $\alpha=f\circ\alpha\simeq g\circ\alpha=\alpha^{-1}$. But the relation $\alpha\simeq\alpha^{-1}$ is not enough to prove $\alpha\simeq_{\{0,1\}}\alpha^{-1}$. Actually, it can be proved that $\alpha\simeq_{\{0\}}\alpha^n$ for all $n\in\mathbb{Z}$, whether or not $f$ and $g$ are homotopic. For example, the function $T:I\times I\to S^1$, $T(s,t)=(\cos(2\pi st),\sin(2\pi st))$ is a homotopy relative to $\{0\}$ from $C_p$ to $\alpha$, where $C_p:I\to S^1$, $C_p(s)=p$ for all $s\in I$.

Let $H:S^1\times I\to S^1$ be a homotopy from $f$ to $g$. Then the continuous mapping $G:I\times I\to S^1$, $G(s,t)=H(\alpha(s),t)$, is such that $G_0=\alpha$, $G_1=\alpha^{-1}$ and $G_t(0)=G_t(1)$ for all $t\in I$. Notice that $G$ is not necessarily a homotopy from $\alpha$ to $\alpha^{-1}$ relative to $\{0,1\}$, because we could have $t_1\neq t_2$ such that $G(i,t_1)\neq G(i,t_2)$ for $i\in \{0,1\}$.

Let $\beta:I\to S^1$ be the loop at $p$ defined by $\beta(t)=G(0,t)$. Since $G$ is continuous, so is $\beta$. Now consider $S^1$ as a subspace of $\mathbb{C}$. Define the function $F:I\times I\to S^1$ as $F(s,t)=G(s,t)\cdot e^{-i\cdot \arg(\beta(t))}$, where $\cdot$ is the product of complex numbers and $\arg:S^1\to [0,2\pi]$ is the argument function.
Geometric interpretation of $F$: for $t\in I$ fixed, the function $z\mapsto z\cdot e^{-i\cdot \arg(\beta(t))}$ is a rotation by the angle $-\arg(\beta(t))$. This function rotates the image of the loop $s \mapsto G_t(s)$ in such a way that the base point of the resulting loop is $p$. In fact, the base point of the loop $s \mapsto G_t(s)$ is $G_t(0)$, and the base point of the resulting loop $s \mapsto G_t(s)\cdot e^{-i\cdot \arg(\beta(t))}$ is

$$F(0,t)=G_t(0)\cdot e^{-i\cdot \arg(\beta(t))}=\beta(t)\cdot e^{-i\cdot \arg(\beta(t))}=e^{i\cdot \arg(\beta(t))}\cdot e^{-i\cdot \arg(\beta(t))}=p, \mbox{ for all }t\in I.$$

It is not difficult to check that the function $t\mapsto e^{-i\cdot \arg(\beta(t))}$ is continuous, and so is $F$.

Properties of $F$:

$$F(s,0)=G(s,0)\cdot e^{-i\cdot \arg(\beta(0))}=G(s,0)=\alpha(s)$$

$$F(s,1)=G(s,1)\cdot e^{-i\cdot \arg(\beta(1))}=G(s,1)=\alpha^{-1}(s)$$

$$F(0,t)=p,\ \text{as we proved above.}$$

$$F(1,t)=G_t(1)\cdot e^{-i\cdot \arg(\beta(t))}=\beta(t)\cdot e^{-i\cdot \arg(\beta(t))}=e^{i\cdot \arg(\beta(t))}\cdot e^{-i\cdot \arg(\beta(t))}=p$$

We have proved that $F$ is a homotopy from $\alpha$ to $\alpha^{-1}$ relative to $\{0,1\}$. Therefore $[\alpha]=[\alpha]^{-1}$, and then $[\alpha]^2=[1]$. Hence $[\alpha]$ generates a finite group, which contradicts the fact that $\pi_1(S^1,p)\cong\mathbb{Z}$. Therefore $f$ and $g$ are not homotopic.
# How do you write one-fourth x less than the sum of 7 and 2x?

Nov 9, 2016

$\textcolor{green}{7 + 2 x - \frac{1}{4} x}$

#### Explanation:

one-fourth $x$ less than the sum of $7$ and $2 x$

$\rightarrow$ one-fourth $x$ less than $7 + 2 x$

$\rightarrow \frac{1}{4} x$ less than $7 + 2 x$

$\rightarrow \left(7 + 2 x\right) - \frac{1}{4} x$
Mixed

# Properties

• Robin condition
• a linear blend of fixed value and gradient conditions
• blending specified using a value fraction
• not usually applied directly, but used in derived types, e.g. the inletOutlet condition
• explicit and implicit contributions

Face values are evaluated according to:

$\phi_f = w\, \phi_{ref} + \left(1-w\right)\left(\phi_c + \Delta\, \nabla\phi_{ref} \right)$

where
$\phi_f$ = face value
$\phi_c$ = cell value
$\phi_{ref}$ = reference value
$\nabla\phi_{ref}$ = reference gradient
$\Delta$ = face-to-cell distance
$w$ = value fraction

# Usage

    <patchName>
    {
        type            mixed;
        refValue        <field value>;
        refGradient     <field value>;
        valueFraction   <field value>;
    }

Here refGradient supplies the reference gradient $\nabla\phi_{ref}$ in the expression above.

# Further information

Source code:
{}
# The Linearity Property of the Integral of Nonnegative Measurable Functions

Theorem 1 (Linearity of the Integral of Nonnegative Measurable Functions): Let $(X, \mathcal A, \mu)$ be a complete measure space and let $f$ and $g$ be nonnegative measurable functions defined on a measurable set $E$. Then for all $\alpha, \beta \in \mathbb{R}$ with $\alpha, \beta \geq 0$ we have that $\displaystyle{\int_E (\alpha f(x) + \beta g(x)) \: d \mu = \alpha \int_E f(x) \: d \mu + \beta \int_E g(x) \: d \mu}$.

Proof sketch: let $(\varphi_n)$ and $(\psi_n)$ be increasing sequences of nonnegative simple functions converging pointwise to $f$ and $g$ respectively. Then $(\alpha \varphi_n + \beta \psi_n)$ is an increasing sequence of nonnegative simple functions converging pointwise to $\alpha f + \beta g$, so by the Monotone Convergence Theorem and the linearity of the integral for simple functions:

(1)
\begin{align} \quad \lim_{n \to \infty} \int_E [\alpha \varphi_n(x) + \beta \psi_n(x)] \: d \mu &= \int_E [\alpha f(x) + \beta g(x)] \: d \mu \\ \quad \lim_{n \to \infty} \int_E \alpha \varphi_n(x) \: d \mu + \lim_{n \to \infty} \int_E \beta \psi_n(x) \: d \mu &= \int_E [\alpha f(x) + \beta g(x)] \: d \mu \\ \quad \alpha \lim_{n \to \infty} \int_E \varphi_n(x) \: d \mu + \beta \lim_{n \to \infty} \int_E \psi_n(x) \: d \mu &= \int_E [\alpha f(x) + \beta g(x)] \: d \mu \\ \quad \alpha \int_E f(x) \: d \mu + \beta \int_E g(x) \: d \mu &= \int_E [\alpha f(x) + \beta g(x)] \: d \mu \quad \blacksquare \end{align}
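Since the proof leans on approximation by simple functions, a quick numerical analogue (a sketch, not a proof) is the linearity of a finite Riemann-type sum, the discrete counterpart of the simple-function identity:

```python
import numpy as np

# Numerical illustration: linearity of the integral for two nonnegative
# functions on E = [0, 1], checked with a simple Riemann sum.
x = np.linspace(0.0, 1.0, 100_001)
dx = x[1] - x[0]
integral = lambda h: float(np.sum(h) * dx)

f = np.sqrt(x)        # nonnegative measurable
g = x ** 2            # nonnegative measurable
alpha, beta = 2.0, 3.0

lhs = integral(alpha * f + beta * g)
rhs = alpha * integral(f) + beta * integral(g)
assert np.isclose(lhs, rhs)   # the identity holds exactly for the finite sum
```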
{}
Question

The first four Ionization Energies (IE) of an element are given. Which of the following elements is it most likely to be? IE_1 = 0.8; IE_2 = 1.7; IE_3 = 14.8; IE_4 = 21

a) He b) Be c) O d) Mg

Solution

In the given element, the Ionization Energy abruptly increases after removing two electrons, hence we need to find an element which achieves stability after the removal of 2 electrons.

He: Since Helium has only two electrons, the given data cannot be for helium.

Be: The electronic configuration of Be is 1s^2 2s^2. Therefore, after removing two electrons we are left with a fully filled s-orbital. Since this 1s orbital is very close to the nucleus, it takes far more energy to remove these electrons. Thus, the given data can be for Beryllium.

O: The electronic configuration of O is 1s^2 2s^2 2p^4. Therefore, after removing two electrons, we are left with 2 electrons in the p-orbital, which is not an especially stable configuration, hence we should not expect any jumps in ionization energy here.

Mg: The electronic configuration of Mg is 1s^2 2s^2 2p^6 3s^2. Therefore, after removing 2 electrons, we are left with a fully filled 2p orbital, so Mg also shows a jump at IE_3. However, these 2p electrons are much farther from the nucleus than the 1s electrons of Be^2+, so the jump for Mg is considerably smaller; an IE_3 as high as 14.8 is not consistent with Mg.

Therefore, the given element, most likely, is Beryllium.
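A quick way to spot where the jump occurs (a throwaway sketch; units as quoted in the problem) is to look at successive ratios:

```python
ies = [0.8, 1.7, 14.8, 21.0]
ratios = [b / a for a, b in zip(ies, ies[1:])]
print(ratios)   # approx [2.1, 8.7, 1.4] -> the outlier sits between IE_2 and
                # IE_3, pointing at an element that is stable as a +2 ion
```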
{}
# The area of a circle is 16π cm². What is the circle's circumference?

###### Question: The area of a circle is 16π cm². What is the circle's circumference?
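The archived page carries no answer; as a worked equation from the standard area and circumference formulas:

$$A = \pi r^2 = 16\pi \ \text{cm}^2 \;\Rightarrow\; r = 4\ \text{cm}, \qquad C = 2\pi r = 8\pi\ \text{cm} \approx 25.1\ \text{cm}.$$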
{}
# Prove that this function is bounded

This is an exercise from Problems from the Book by Andreescu and Dospinescu. When it was posted on AoPS a year ago I spent many hours trying to solve it, but to no avail, so I am hoping someone here can enlighten me.

Problem: Prove that the function $f : [0, 1) \to \mathbb{R}$ defined by $\displaystyle f(x) = \log_2 (1 - x) + x + x^2 + x^4 + x^8 + \cdots$ is bounded.

A first observation is that $f$ satisfies the functional equation $f(x^2) = f(x) + \log_2 (1 + x) - x$. I experimented with this functional equation for a while, but could not quite make it work.

---

Starting from (the natural logarithm of) $(1-x)^{-1} = (1+x)(1+x^2)(1+x^4) \cdots$, it becomes clearer where the $\log(2)$ factor comes from. One has to show that $\sum_k \left( x^{2^k} - C\log(1 + x^{2^k}) \right)$ is bounded. The sum of the first $n$ terms approaches $n - Cn\log(2)$ as $x \to 1^-$, so we need $C = 1/\log(2)$ if there is to be boundedness.

---

OK, a second method is needed (but it actually finishes the problem). It is nice and simple enough that it is probably what the authors intended as the "Book" solution. Let $f(x) = x \log(2) - \log(1+x)$. We want to show that $S(x) = f(x) + f(x^2) + f(x^4) + \cdots$ is bounded. Because $f(0)=f(1)=0$ and $f$ is differentiable, we can find a constant $A$ such that $|f(x)| \leq Ax(1-x) = Ax - Ax^2$. The sum of this bound over the powers $x^{2^k}$ telescopes. Notice that the role of $\log(2)$ was to ensure that $f(1)=0$.
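A numerical sanity check (a sketch; it does not replace the proof) that the two blow-ups cancel near $x=1$:

```python
import numpy as np

# Evaluate f(x) = log2(1 - x) + sum_{k>=0} x**(2**k) close to x = 1,
# where both pieces diverge individually.

def f(x, kmax=60):
    powers = 2 ** np.arange(kmax)          # 1, 2, 4, 8, ...
    return np.log2(1.0 - x) + np.sum(x ** powers)

for x in [0.9, 0.99, 0.999, 0.999999]:
    print(x, f(x))
# The printed values stay in a narrow band (around -0.3), consistent with
# f being bounded on [0, 1).
```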
{}
# Line bundles on special abelian surfaces

Given two smooth elliptic curves $C_1$ and $C_2$ over $\mathbb{C}$, assume they are not isogenous. I'm interested in the structure of $Pic(A)$ and $Pic^{0}(A)$ for $A:=C_1 \times C_2$. Reading Birkenhake/Lange - Complex Abelian Varieties, I think this has to do with correspondences of curves. Since an elliptic curve is its own Jacobian and the two curves are not isogenous, we have $Hom(C_1,C_2)=0$. So the space of correspondences $Corr(C_1,C_2)$ is trivial, i.e. every line bundle $L$ on $A$ is of the form $L=q^{*}M\otimes p^{*}N$, where $q$ and $p$ are the projections on the factors and $M$ and $N$ are line bundles on the factors. This implies $Pic(A)=Pic(C_1)\times Pic(C_2)$.

Does this imply $Pic^{0}(A)=Pic^{0}(C_1)\times Pic^{0}(C_2)$? That is, is the Picard variety of $A$ isomorphic to $A$ in this case?

- Yes, any product $A$ of elliptic curves is principally polarizable (e.g. by the product polarization), hence isomorphic to its Picard variety. – Pete L. Clark Oct 7 '10 at 18:01

Yes: in fact $Pic^0(C_1\times C_2)=Pic^0(C_1)\times Pic^0(C_2)$ for any pair of curves. The fact that $C_1$ and $C_2$ are not isogenous in your case only affects the Neron-Severi group $Pic/Pic^0$ of $C_1\times C_2$, exactly for the reasons you describe.

Interesting, can you give an explanation or a reference why this is true for all curves? I see that we have $Pic^{0}(C_1)\times Pic^{0}(C_2) \subset Pic^{0}(C_1\times C_2)$, via the pullbacks coming from the projections. But I don't see the other inclusion. – TonyS Oct 7 '10 at 17:33
{}
# CMake Settings Reference

Open 3D Engine uses custom CMake configuration values in order to detect settings like valid deployment platforms, active projects, and the locations of downloaded packages. This document is a reference for the user-available CMake settings used by O3DE. Settings specific to a Gem are covered in Gem reference. For general CMake options, see the cmake-variables documentation.

Keep in mind that every time you change a configuration value, you need to regenerate the project files so that the changes are picked up and apply to your next build.

Caution: CMake options like CMAKE_CXX_STANDARD are set during configuration automatically by O3DE. Changing these values can break your ability to compile O3DE code, so edit them with care.

## Cache values

### Required settings

These options are the user-supplied settings that are required to configure O3DE builds. Make sure that these values are set before running your first configure, and only change them later if necessary.

• LY_3RDPARTY_PATH - The filesystem path to your package directory. Changing this value requires reconfiguration, and will prompt another install of packages. See packages for more information. Type: PATH

### Build configuration

• LY_UNITY_BUILD - Controls the generation of unity build files. Unity builds speed up build times by taking multiple .cpp files and merging them together into a single compilation unit. Note: Make sure that this option is turned ON if you experience slow build times for your projects, the O3DE engine, or O3DE tools. The impact is most dramatic for systems with lots of available RAM but fewer available cores or low disk throughput. Type: BOOL Default: ON

• LY_MONOLITHIC_GAME - Controls project library linking. When this value is set to ON, it provides a compiler hint to use static libraries where possible. Some libraries, such as PhysX, are only available as shared libraries and can't be statically linked. Some platforms may disable static linking entirely. Type: BOOL Default: OFF

### Asset configuration

These options control the types of assets that are built, and where projects load assets from at runtime.

• LY_ASSET_DEPLOY_TYPE - The default type of assets to be built by Asset Processor. Valid platforms are: • pc - Windows PC • linux - Linux • mac - MacOS • ios - iOS and iPad OS • android - Android Type: STRING Default: The asset type for the current host platform.

• LY_ASSET_DEPLOY_MODE - Controls how projects load assets at runtime. Allowed values are: • LOOSE - Load assets on demand from the asset cache, after sources are processed by the Asset Processor. This setting is appropriate for development. • PAK - Only load assets from .pak asset bundles created by the Asset Bundler. Which directories to load asset bundles from is controlled with the LY_OVERRIDE_PAK_FOLDER_ROOT setting. • VFS - Load data from the virtual filesystem server (VFS). Type: STRING Default: LOOSE

• LY_ARCHIVE_FILE_SEARCH_MODE - Defines the default file search mode to locate non-Pak files within the Archive System: • 0 = Search file system first, before searching within mounted .pak files. • 1 = Search mounted .pak files first, before searching file system. • 2 = Search only mounted .pak files. Type: STRING Default: 0 (debug/profile configurations), 2 (release configuration)

### Package system settings

• LY_PACKAGE_DEBUG - Produces verbose information about the package download and installation process when ON.
Set this cache value before filing any bug report against the O3DE package system to make sure that all the necessary information is there to resolve the issue. Type: BOOL Default: OFF • LY_PACKAGE_DOWNLOAD_CACHE_LOCATION - The download cache for packages pulled from a remote server. This cache is never emptied if LY_PACKAGE_KEEP_AFTER_DOWNLOADING is ON. Type: PATH Default: ${LY_3RDPARTY_PATH}/downloaded_packages • LY_PACKAGE_KEEP_AFTER_DOWNLOADING - Whether or not to keep downloaded packages in the cache, even after installation. Type: BOOL Default: ON • LY_PACKAGE_SERVER_URLS - The URLs for servers to pull packages from, as a semi-colon (;) separated list. These can be http, https, file, or s3 URLs. These values are prepended to any LY_PACKAGE_SERVER_URLS environment variable. Type: STRING Default: (Empty string) • LY_PACKAGE_UNPACK_LOCATION - The location where downloaded packages are unpacked to, before being relocated to the package folder. Type: PATH Default: ${LY_3RDPARTY_PATH}/packages • LY_PACKAGE_VALIDATE_PACKAGE - Validate packages against checksums in the requesting CMake file, redownloading the package from sources as necessary. Type: BOOL Default: OFF • LY_PACKAGE_VALIDATE_CONTENTS - Check each file against the hashes contained in the SHA256SUMS of the package. When this value is OFF, the checksums are validated only on the first package download. Turning this setting on allows for checking for local modifications to the package, but will slow down configuration. Type: BOOL Default: OFF • LY_PACKAGE_DOWNLOAD_RETRY_COUNT - The number of times to attempt retrieval from a package source if an error occurs in the transfer. Type: Integer Default: 3 ### Build/Debugging Tools • LY_BUILD_WITH_ADDRESS_SANITIZER - Enables Address Sanitizer (ASan). Note: Currently only supported for Windows and “Visual Studio” generators. Documentation can be found here Type: BOOL Default: OFF ## CMake functions reference Intellisense support for Visual Studio and Visual Studio Code is available in the O3DE CMake files, located in the cmake directory of O3DE source.
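As a usage sketch, cache values like those above are set with -D flags at configure time; the source/build paths and generator below are hypothetical, and only the LY_* variable names come from this reference:

```
# Hypothetical configure invocation; adjust the paths and generator
# for your platform and setup.
cmake -B build/windows -S . -G "Visual Studio 16 2019" \
    -DLY_3RDPARTY_PATH=C:/o3de-packages \
    -DLY_UNITY_BUILD=ON \
    -DLY_ASSET_DEPLOY_MODE=LOOSE \
    -DLY_PACKAGE_DEBUG=OFF

# Re-run this configure step after changing any LY_* cache value so the
# regenerated project files pick up the change.
```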
{}
# Find fifth term.

Algebra Level 1

Find the next term in the following sequence of positive integers: $$2,6,12,20, ?$$
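No solution is attached in this copy; one standard reading (an assumption, since the intended rule is not stated) is $a_n = n(n+1)$, so that

$$2 = 1\cdot 2,\quad 6 = 2\cdot 3,\quad 12 = 3\cdot 4,\quad 20 = 4\cdot 5,\quad a_5 = 5\cdot 6 = 30.$$

Equivalently, the successive differences $4, 6, 8$ suggest a next difference of $10$, giving $20 + 10 = 30$.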
{}
# Borel Sets sigma algebra

Need to connect Borel sets and sigma algebras. Also, need to show that the Lebesgue measurable sets contain all of the Borel sets. I know that the Borel sets form the smallest $\sigma$-algebra containing the open sets, but I need to further my understanding and was having trouble with this.

• Could you be more specific? On $R^n$ with finite n>0, the set of Lebesgue sets can be defined as the sigma-algebra generated by the union of A and B, where B is the Borel sets, and x belongs to A iff x is a subset of a measure-zero Borel set. It follows that y is a Lebesgue set iff $p\subset y\subset q$ where p is F-sigma, q is G-delta, and p,q have equal measure. Also it follows that a subset of a Lebesgue-null set is Lebesgue-null. – DanielWainfleet Dec 8 '15 at 5:20

Let $X \neq \varnothing$; let $\mathscr{X} \subset 2^{X}$. Then one can show that the set $$\sigma (\mathscr{X}) := \bigcap \{ \mathscr{Y} \subset 2^{X} \mid \mathscr{Y} \supset \mathscr{X} \ \text{is a}\ \sigma\text{-algebra} \}$$ is a $\sigma$-algebra; call $\sigma(\mathscr{X})$ the $\sigma$-algebra generated by $\mathscr{X}$. Let $(X, \mathscr{X})$ be a topological space; let $E \subset X$. Then $E$ is called a Borel set iff $E \in \sigma (\mathscr{X})$.
{}
# Reynolds Transport Theorem

1. Jun 22, 2006

### Cyrus

I have come across the Reynolds Transport Theorem in my study of Fluid Dynamics, and it's a very powerful tool.

$$\frac{DB_{sys}}{Dt} = \frac{\partial}{\partial t} \int_{cv} \rho b \, dV + \int_{cs} \rho b \, \vec{V} \cdot \hat{n} \, dA$$

Where B is any extensive property of the system, and b is the corresponding intensive property (B per unit mass). The term on the left is the material derivative of the system, the first term on the right is the rate of change of the property B in the control volume, and the second term on the right is the net flux of B through the control surface.

This seems like something that might be useful in many other areas. Usually the same equations are found in nearly all areas of science. Does this have any applicability in, say, E&M? Looks like it should. The surface integral term looks like Gauss' law, though I am not sure what the other terms would possibly represent.

Last edited: Jun 22, 2006

2. Jun 25, 2006

### Clausius2

For instance, take the first Maxwell equation in absence of charges:

$$\nabla \cdot \overline{E}=0$$

and the equation of continuity for an incompressible flow:

$$\nabla\cdot\overline{v}=0$$

Both fields are solenoidal, so

$$\oint_S \overline{E}\cdot \overline{dS}=\oint_S \overline{v}\cdot \overline{dS}=0$$

The first integral is the conservation of flux of electric field over any closed boundary, whereas the second integral is the conservation of velocity flux (mass flux) over any closed boundary.

The Reynolds transport theorem gives you the definition of a material derivative. In this case, a parcel which travels with the flow velocity cannot have a mass variation, so the right-hand side of the RTT has to cancel, even though the material derivative is a derivative viewed from the laboratory frame. The RTT is essentially a change of frame of reference when computing derivatives.

In my opinion, it does not make sense to do the same, at least in EM. Why? Well, the Navier-Stokes equations have transport terms (the convective terms: $$u\cdot\nabla u$$) whereas the Maxwell equations don't allow transport of electric quantities by the background. That means that in the case of a fluid you can have different variations depending on whether you are travelling with the flow velocity (thus cancelling the convective transport terms) or with another, different velocity (enhancing the convective transport).

For instance, here is an example of the importance of the RTT which has no counterpart in EM. Imagine a turbulent shear flow. I have a mean shear profile in the y direction (vertical) in a water tunnel and the flow is turbulent. The Turbulent Kinetic Energy equation (derived from the RANS equations) says:

$$\frac{\partial K}{\partial t}+<U>\frac{\partial K}{\partial x}=P-\epsilon$$

where P is the production, $$\epsilon$$ is the turbulent dissipation rate and $$<U>$$ is the mean velocity (the shear). Imagine I change the frame of reference, such that I analyze the turbulent flow from a frame moving with the mean velocity. Thus, I cancel the convective term and there is no transport in my new frame but only a local variation:

$$\frac{ dK}{dt}=P-\epsilon$$

Moreover, this equation can be integrated using a $$K-\epsilon$$ model or some experimental data, and one realises that the turbulent kinetic energy increases exponentially with time in this reference frame because of the shear flow, which feeds the production term.
Hence, one can greatly simplify the analysis of a flow field because the physics of fluid mechanics is transported with the flow velocity in the general case, whereas this does not happen with the Maxwell equations in general (except for EM waves).
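The budget expressed by the RTT is easy to verify numerically in one dimension. The sketch below (illustrative, not from the thread) checks that for pure advection the unsteady control-volume term and the control-surface flux cancel, matching $DB_{sys}/Dt = 0$:

```python
import numpy as np

# RTT in 1D for pure advection with rho = 1:  b(x, t) = f(x - u*t), constant u.
#   DB_sys/Dt = d/dt \int_cv b dx + (u*b)|_{x1} - (u*b)|_{x0}
# For pure advection the material integral B_sys is constant, so the two
# right-hand-side terms must cancel.

u, x0, x1 = 2.0, 0.0, 1.0
f = lambda y: np.exp(-((y - 0.3) ** 2) / 0.02)   # advected profile
b = lambda x, t: f(x - u * t)

x = np.linspace(x0, x1, 20001)
dx = x[1] - x[0]
B_cv = lambda t: np.sum(b(x, t)) * dx            # B stored in the control volume

dt = 1e-6
unsteady = (B_cv(dt) - B_cv(-dt)) / (2 * dt)     # d/dt of the CV integral
flux = u * b(x1, 0.0) - u * b(x0, 0.0)           # net outflow through the CS

print(unsteady + flux)                           # ~ 0 = DB_sys/Dt
```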
{}
# GLM: verifying a choice of distribution and link function

I have a generalized linear model that adopts a Gaussian distribution and log link function. After fitting the model, I check the residuals: QQ plot, residuals vs predicted values, histogram of residuals (acknowledging that due caution is needed). Everything looks good. This seems to suggest (to me) that the choice of a Gaussian distribution was quite reasonable. Or, at least, that the residuals are consistent with the distribution I used in my model.

Q1: Would it be going too far to state that it validates my choice of distribution?

I chose a log link function because my response variable is always positive, but I'd like some sort of confirmation that it was a good choice.

Q2: Are there any tests, like checking the residuals for the choice of distribution, that can support my choice of link function? (Choosing a link function seems a bit arbitrary to me, as the only guidelines I can find are quite vague and hand-wavey, presumably for good reason.)

---

Within that framework, the canonical link for a Gaussian model would be the identity link. In this case you rejected that possibility, presumably for theoretical reasons. I suspect your thinking was that $Y$ cannot take negative values (note that 'does not happen to' is not the same thing). If so, the log is a reasonable choice a-priori, but it doesn't just prevent $Y$ from becoming negative, it also imposes a specific shape on the curvilinear relationship. A standard plot of residuals vs. fitted values (perhaps with a loess fit overlaid) will help you identify if the intrinsic curvature in your data is a reasonable match for the specific curvature imposed by the log link. As I mentioned, you can also try whatever other transformation meets your theoretical criteria and compare the two fits directly.
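For concreteness, a minimal sketch of the workflow being discussed, using simulated data and statsmodels (the class name of the log link varies across statsmodels versions; everything here is illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 2, n)
mu = np.exp(0.5 + 0.8 * x)                  # true mean: log link, Gaussian noise
y = mu + rng.normal(scale=0.5, size=n)      # response stays positive here

X = sm.add_constant(x)
# Note: in older statsmodels the log link class is links.log rather than Log.
model = sm.GLM(y, X, family=sm.families.Gaussian(link=sm.families.links.Log()))
res = model.fit()

fitted = res.fittedvalues
resid = res.resid_response
print(res.params)                           # should be near [0.5, 0.8]
# Plot resid vs. fitted (e.g. with matplotlib) and look for leftover
# curvature: curvature suggests the log link's shape does not match the data.
```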
{}
Lemma 15.47.6. Let $R$ be a Noetherian ring. The following are equivalent

1. $R$ is J-2,

2. every finite type $R$-algebra which is a domain is J-0,

3. every finite $R$-algebra is J-1,

4. for every prime $\mathfrak p$ and every finite purely inseparable extension $\kappa (\mathfrak p) \subset L$ there exists a finite $R$-algebra $R'$ which is a domain, which is J-0, and whose field of fractions is $L$.

Proof. It is clear that we have the implications (1) $\Rightarrow$ (2) and (2) $\Rightarrow$ (4). Recall that a domain which is J-1 is J-0. Hence we also have the implications (1) $\Rightarrow$ (3) and (3) $\Rightarrow$ (4).

Let $R \to S$ be a finite type ring map and let's try to show $S$ is J-1. By Lemma 15.47.3 it suffices to prove that $S/\mathfrak q$ is J-0 for every prime $\mathfrak q$ of $S$. In this way we see (2) $\Rightarrow$ (1).

Assume (4). We will show that (2) holds which will finish the proof. Let $R \to S$ be a finite type ring map with $S$ a domain. Let $\mathfrak p = \mathop{\mathrm{Ker}}(R \to S)$. Let $K$ be the fraction field of $S$. There exists a diagram of fields

$\xymatrix{ K \ar[r] & K' \\ \kappa (\mathfrak p) \ar[u] \ar[r] & L \ar[u] }$

where the horizontal arrows are finite purely inseparable field extensions and where $K'/L$ is separable, see Algebra, Lemma 10.42.4. Choose $R' \subset L$ as in (4) and let $S'$ be the image of the map $S \otimes _ R R' \to K'$. Then $S'$ is a domain whose fraction field is $K'$, hence $S'$ is J-0 by Lemma 15.47.5 and our choice of $R'$. Then we apply Lemma 15.47.4 to see that $S$ is J-0 as desired. $\square$
{}
# Coronavirus disease 2019

Coronavirus disease 2019 (COVID-19)

Other names: Coronavirus; Corona; COVID; 2019-nCoV acute respiratory disease; Novel coronavirus pneumonia[1][2]; Severe pneumonia with novel pathogens[3]
Specialty: Infectious disease
Symptoms: Fever, cough, fatigue, shortness of breath, loss of smell; sometimes no symptoms at all[5][6][7]
Complications: Pneumonia, viral sepsis, acute respiratory distress syndrome, kidney failure, cytokine release syndrome
Usual onset: 2–14 days (typically 5) from infection
Causes: Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)
Risk factors: Travel, viral exposure
Diagnostic method: rRT-PCR testing, CT scan
Prevention: Hand washing, face coverings, quarantine, social distancing[8]
Treatment: Symptomatic and supportive
Frequency: 6,226,409[9] confirmed cases
Deaths: 373,883 (6.0% of confirmed cases)[9]

Coronavirus disease 2019 (COVID-19) is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).[10] It was first identified in December 2019 in Wuhan, China, and has since spread globally, resulting in an ongoing pandemic.[11][12] The first case may be traced back to 17 November 2019.[13] As of 1 June 2020, more than 6.22 million cases have been reported across 188 countries and territories, resulting in more than 373,000 deaths. More than 2.67 million people have recovered.[9]

Common symptoms include fever, cough, fatigue, shortness of breath, and loss of smell and taste.[6][7][14] While the majority of cases result in mild symptoms, some progress to acute respiratory distress syndrome (ARDS) likely precipitated by a cytokine storm,[15] multi-organ failure, septic shock, and blood clots.[16][17][18] The time from exposure to onset of symptoms is typically around five days but may range from two to fourteen days.[6][19]

The virus is primarily spread between people during close contact,[a] most often via small droplets produced by coughing,[b] sneezing, and talking.[7][20][22] The droplets usually fall to the ground or onto surfaces rather than travelling through air over long distances.[7] Less commonly, people may become infected by touching a contaminated surface and then touching their face.[7][20] It is most contagious during the first three days after the onset of symptoms, although spread is possible before symptoms appear, and from people who do not show symptoms.[7][20]

The standard method of diagnosis is by real-time reverse transcription polymerase chain reaction (rRT-PCR) from a nasopharyngeal swab.[23] Chest CT imaging may also be helpful for diagnosis in individuals where there is a high suspicion of infection based on symptoms and risk factors; however, guidelines do not recommend using CT imaging for routine screening.[24][25]

Recommended measures to prevent infection include frequent hand washing, maintaining physical distance from others (especially from those with symptoms), quarantine (especially for those with symptoms), covering coughs, and keeping unwashed hands away from the face.[8][26][27] The use of cloth face coverings such as a scarf or a bandana is recommended in public settings to minimise the risk of transmissions, with some authorities requiring their use.[28][29] Medical grade facemasks such as N95 masks should only be used by healthcare workers, first responders and those who care for infected individuals.[30][31]

According to the World Health Organization (WHO), there are no vaccines nor specific antiviral treatments for COVID-19.[7] On
1 May 2020, the United States gave emergency use authorization to the antiviral remdesivir for people hospitalized with severe COVID‑19.[32] Management involves the treatment of symptoms, supportive care, isolation, and experimental measures.[33] The World Health Organization (WHO) declared the COVID‑19 outbreak a public health emergency of international concern (PHEIC)[34][35] on 30 January 2020 and a pandemic on 11 March 2020.[12] Local transmission of the disease has occurred in most countries across all six WHO regions.[36]

### Signs and symptoms

Symptoms of COVID-19[5]

| Symptom | Range |
| --- | --- |
| Fever | 83–99% |
| Cough | 59–82% |
| Loss of appetite | 40–84% |
| Fatigue | 44–70% |
| Shortness of breath | 31–40% |
| Coughing up sputum | 28–33% |
| Muscle aches and pains | 11–35% |

Fever is the most common symptom, but it is highly variable in severity and presentation, with some older, immunocompromised, or critically ill people not having fever at all.[5][37][38] In one study, only 44% of people had fever when they presented to the hospital, while 89% went on to develop fever at some point during their hospitalization.[5][39] A lack of fever therefore does not mean someone is disease-free. Other common symptoms include cough, loss of appetite, fatigue, shortness of breath, sputum production, and muscle and joint pains.[1][5][6][40] Symptoms such as nausea, vomiting, and diarrhoea have been observed in varying percentages.[41][42][43] Less common symptoms include sneezing, runny nose, sore throat, and skin lesions.[44] Some cases in China initially presented with only chest tightness and palpitations.[45] A decreased sense of smell or disturbances in taste may occur.[46][47] Loss of smell was a presenting symptom in 30% of confirmed cases in South Korea.[14][48]

As is common with infections, there is a delay between the moment a person is first infected and the time he or she develops symptoms. This is called the incubation period.
The average incubation period for COVID‑19 is five to six days but commonly ranges from one to 14 days,[7][49] with approximately 10% of cases exceeding that time.[50][51][52] A minority of cases do not develop noticeable symptoms at any point in time.[53][54] These asymptomatic carriers tend not to get tested, and their role in transmission is not yet fully known.[55][56] However, preliminary evidence suggests they may contribute to the spread of the disease.[57]

#### Complications

Complications may include pneumonia, acute respiratory distress syndrome (ARDS), multi-organ failure, septic shock, and death.[11][16][58][59] Cardiovascular complications may include heart failure, arrhythmias, heart inflammation, and blood clots.[60] Approximately 20–30% of people who present with COVID‑19 have elevated liver enzymes reflecting liver injury.[61][62] Neurologic manifestations include seizure, stroke, encephalitis, and Guillain–Barré syndrome (which includes loss of motor functions).[63] Following the infection, children may develop paediatric multisystem inflammatory syndrome, which has symptoms similar to Kawasaki disease and can be fatal.[64][65]

### Cause

#### Transmission

[Figure: respiratory droplets produced when a man sneezes, visualised using Tyndall scattering. Video: a discussion of the basic reproduction number and case fatality rate in the context of the pandemic (10:19 min).]

COVID-19 spreads primarily when people are in close contact and one person inhales small droplets produced by an infected person (symptomatic or not) coughing, sneezing, or talking.[22] WHO recommends 1 metre (3 ft) of social distance;[7] the U.S. CDC recommends 2 metres (6 ft).[20] People can transmit the virus without showing symptoms, but it is unclear how often this happens.[7][20][22] One estimate of the number of those infected who are asymptomatic is 40%.[66] People are most infectious when they show symptoms (even mild or non-specific symptoms), but may be infectious for up to two days before symptoms appear (pre-symptomatic transmission).[22] They remain infectious an estimated seven to twelve days in moderate cases and an average of two weeks in severe cases.[22]

When the contaminated droplets fall to floors or surfaces they can, though less commonly, remain infectious if people touch contaminated surfaces and then their eyes, nose or mouth with unwashed hands.[7] On surfaces the amount of active virus decreases over time until it can no longer cause infection,[22] and surfaces are thought not to be the main way the virus spreads.[20] It is unknown what amount of virus on surfaces is required to cause infection via this method, but it can be detected for up to four hours on copper, up to one day on cardboard, and up to three days on plastic (polypropylene) and stainless steel (AISI 304).[22][67][68] Surfaces are easily decontaminated with household disinfectants which kill the virus outside the human body or on the hands.[7] Disinfectants or bleach are not a treatment for COVID‑19, and cause health problems when not used properly, such as when used inside the human body.[69]

Sputum and saliva carry large amounts of virus.[7][20][22][70] Although COVID‑19 is not a sexually transmitted infection, kissing, intimate contact, and faecal-oral routes are suspected to transmit the virus.[71][72] Some medical procedures are aerosol-generating[73] and result in the virus being transmitted more easily than normal.[7][22]

COVID‑19 is a new disease, and many of the details of its spread are still under
investigation.[7][20][22] It spreads easily between people—easier than influenza but not as easily as measles.[20] Estimates of the number of people infected by one person with COVID-19 (the R0) have varied widely. The WHO's initial estimates of the R0 were 1.4–2.5 (average 1.95); however, a more recent review found the basic R0 (without control measures) to be higher at 3.28 and the median R0 to be 2.79.[74]

#### Virology

[Figure: illustration of a SARSr-CoV virion.]

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a novel severe acute respiratory syndrome coronavirus, first isolated from three people with pneumonia connected to the cluster of acute respiratory illness cases in Wuhan.[75] All features of the novel SARS-CoV-2 virus occur in related coronaviruses in nature.[76] Outside the human body, the virus is killed by household soap, which bursts its protective bubble.[24] SARS-CoV-2 is closely related to the original SARS-CoV.[77] It is thought to have an animal (zoonotic) origin. Genetic analysis has revealed that the coronavirus genetically clusters with the genus Betacoronavirus, in subgenus Sarbecovirus (lineage B) together with two bat-derived strains. It is 96% identical at the whole genome level to other bat coronavirus samples (BatCov RaTG13).[44] In February 2020, Chinese researchers found that there is only one amino acid difference in the binding domain of the S protein between the coronaviruses from pangolins and those from humans; however, whole-genome comparison to date found that at most 92% of genetic material was shared between pangolin coronavirus and SARS-CoV-2, which is insufficient to prove pangolins to be the intermediate host.[78]

### Pathophysiology

The lungs are the organs most affected by COVID‑19 because the virus accesses host cells via the enzyme angiotensin-converting enzyme 2 (ACE2), which is most abundant in type II alveolar cells of the lungs.[79] The virus uses a special surface glycoprotein called a "spike" (peplomer) to connect to ACE2 and enter the host cell.[80] The density of ACE2 in each tissue correlates with the severity of the disease in that tissue and some have suggested decreasing ACE2 activity might be protective,[81][82][unreliable medical source?] though another view is that increasing ACE2 using angiotensin II receptor blocker medications could be protective.[83] As the alveolar disease progresses, respiratory failure might develop and death may follow.[82][unreliable medical source?]

SARS-CoV-2 may also cause respiratory failure through affecting the brainstem as other coronaviruses have been found to invade the central nervous system (CNS). While virus has been detected in cerebrospinal fluid of autopsies, the exact mechanism by which it invades the CNS remains unclear and may first involve invasion of peripheral nerves given the low levels of ACE2 in the brain.[84][85][unreliable medical source?] The virus also affects gastrointestinal organs as ACE2 is abundantly expressed in the glandular cells of gastric, duodenal and rectal epithelium[86] as well as endothelial cells and enterocytes of the small intestine.[87][unreliable medical source?]

The virus can cause acute myocardial injury and chronic damage to the cardiovascular system.[88] An acute cardiac injury was found in 12% of infected people admitted to the hospital in Wuhan, China,[42] and is more frequent in severe disease.[89][unreliable medical source?]
Rates of cardiovascular symptoms are high, owing to the systemic inflammatory response and immune system disorders during disease progression, but acute myocardial injuries may also be related to ACE2 receptors in the heart.[88] ACE2 receptors are highly expressed in the heart and are involved in heart function.[88][90] A high incidence of thrombosis (31%) and venous thromboembolism (25%) have been found in ICU patients with COVID‑19 infections and may be related to poor prognosis.[91][unreliable medical source?][92][unreliable medical source?] Blood vessel dysfunction and clot formation (as suggested by high D-dimer levels) are thought to play a significant role in mortality; incidences of clots leading to pulmonary embolisms and ischaemic events within the brain have been noted as complications leading to death in patients infected with SARS-CoV-2. Infection appears to set off a chain of vasoconstrictive responses within the body; constriction of blood vessels within the pulmonary circulation has also been posited as a mechanism in which oxygenation decreases alongside the presentation of viral pneumonia.[93][better source needed]

Another common cause of death is complications related to the kidneys.[93][better source needed] Early reports show that up to 30% of hospitalized patients in both China and New York have experienced some injury to their kidneys, including some persons with no previous kidney problems.[94] Autopsies of people who died of COVID‑19 have found diffuse alveolar damage (DAD), and lymphocyte-containing inflammatory infiltrates within the lung.[95][unreliable medical source?]

#### Immunopathology

Although SARS-CoV-2 has a tropism for ACE2-expressing epithelial cells of the respiratory tract, patients with severe COVID‑19 have symptoms of systemic hyperinflammation. Clinical laboratory findings of elevated IL-2, IL-7, IL-6, granulocyte-macrophage colony-stimulating factor (GM-CSF), interferon-γ inducible protein 10 (IP-10), monocyte chemoattractant protein 1 (MCP-1), macrophage inflammatory protein 1-α (MIP-1α), and tumour necrosis factor-α (TNF-α) indicative of cytokine release syndrome (CRS) suggest an underlying immunopathology.[42]

Additionally, people with COVID‑19 and acute respiratory distress syndrome (ARDS) have classical serum biomarkers of CRS, including elevated C-reactive protein (CRP), lactate dehydrogenase (LDH), D-dimer, and ferritin.[96] Systemic inflammation results in vasodilation, allowing inflammatory lymphocytic and monocytic infiltration of the lung and the heart. In particular, pathogenic GM-CSF-secreting T-cells were shown to correlate with the recruitment of inflammatory IL-6-secreting monocytes and severe lung pathology in COVID‑19 patients.[citation needed] Lymphocytic infiltrates have also been reported at autopsy.[95][unreliable medical source?]
### Diagnosis

[Figures: demonstration of a nasopharyngeal swab for COVID-19 testing; CDC rRT-PCR test kit for COVID-19.[97]]

The WHO has published several testing protocols for the disease.[98] The standard method of testing is real-time reverse transcription polymerase chain reaction (rRT-PCR).[99] The test is typically done on respiratory samples obtained by a nasopharyngeal swab; however, a nasal swab or sputum sample may also be used.[23][100] Results are generally available within a few hours to two days.[101][102] Blood tests can be used, but these require two blood samples taken two weeks apart, and the results have little immediate value.[103] Chinese scientists were able to isolate a strain of the coronavirus and publish the genetic sequence so laboratories across the world could independently develop polymerase chain reaction (PCR) tests to detect infection by the virus.[11][104][105] As of 4 April 2020, antibody tests (which may detect active infections and whether a person had been infected in the past) were in development, but not yet widely used.[106][107][108] The Chinese experience with testing has shown the accuracy is only 60 to 70%.[109] The FDA in the United States approved the first point-of-care test on 21 March 2020 for use at the end of that month.[110]

Diagnostic guidelines released by Zhongnan Hospital of Wuhan University suggested methods for detecting infections based upon clinical features and epidemiological risk. These involved identifying people who had at least two of the following symptoms in addition to a history of travel to Wuhan or contact with other infected people: fever, imaging features of pneumonia, normal or reduced white blood cell count, or reduced lymphocyte count.[111]

A study asked hospitalised COVID‑19 patients to cough into a sterile container, thus producing a saliva sample, and detected the virus in eleven of twelve patients using RT-PCR. This technique has the potential of being quicker than a swab and involving less risk to health care workers (collection at home or in the car).[70]

Along with laboratory testing, chest CT scans may be helpful to diagnose COVID‑19 in individuals with a high clinical suspicion of infection but are not recommended for routine screening.[24][25] Bilateral multilobar ground-glass opacities with a peripheral, asymmetric, and posterior distribution are common in early infection.[24] Subpleural dominance, crazy paving (lobular septal thickening with variable alveolar filling), and consolidation may appear as the disease progresses.[24][112]

In late 2019, the WHO assigned emergency ICD-10 disease codes U07.1 for deaths from lab-confirmed SARS-CoV-2 infection and U07.2 for deaths from clinically or epidemiologically diagnosed COVID‑19 without lab-confirmed SARS-CoV-2 infection.[113]

[Figures: typical CT imaging findings; CT imaging of the rapid progression stage.]

#### Pathology

Few data are available about microscopic lesions and the pathophysiology of COVID‑19.[114][115] The main pathological findings at autopsy are:[citation needed]

### Prevention

Progressively stronger mitigation efforts to reduce the number of active cases at any given time—"flattening the curve"—allows healthcare services to better manage the same volume of patients.[119][120][121] Likewise, progressively greater increases in healthcare capacity—called raising the line—such as by increasing bed count, personnel, and equipment, helps to meet increased demand.
[122] Mitigation attempts that are inadequate in strictness or duration—such as premature relaxation of distancing rules or stay-at-home orders—can allow a resurgence after the initial surge and mitigation.[120][123]

Preventive measures to reduce the chances of infection include staying at home, avoiding crowded places, keeping distance from others, washing hands with soap and water often and for at least 20 seconds, practising good respiratory hygiene, and avoiding touching the eyes, nose, or mouth with unwashed hands.[124][125][126] The U.S. Centers for Disease Control and Prevention (CDC) recommends covering the mouth and nose with a tissue when coughing or sneezing and recommends using the inside of the elbow if no tissue is available.[124] Proper hand hygiene after any cough or sneeze is encouraged.[124] The CDC has recommended cloth face coverings in public settings where other social distancing measures are difficult to maintain, in part to limit transmission by asymptomatic individuals.[127] The U.S. National Institutes of Health guidelines do not recommend any medication for prevention of COVID‑19, before or after exposure to the SARS-CoV-2 virus, outside the setting of a clinical trial.[128]

Social distancing strategies aim to reduce contact of infected persons with large groups by closing schools and workplaces, restricting travel, and cancelling large public gatherings.[129] Distancing guidelines also include that people stay at least 6 feet (1.8 m) apart.[130] There is no medication known to be effective at preventing COVID‑19.[131] After the implementation of social distancing and stay-at-home orders, many regions have been able to sustain an effective transmission rate ("Rt") of less than one, meaning the disease is in remission in those areas.[132] In a simple model, $\ln(R_t)$ (which is approximately $R_t - 1$ when $R_t$ is close to one) needs on average over time to be kept at or below zero to avoid exponential growth.[citation needed]

As a COVID-19 vaccine is not expected until 2021 at the earliest,[133] a key part of managing COVID‑19 is trying to decrease and delay the epidemic peak, known as "flattening the curve".[120] This is done by slowing the infection rate to decrease the risk of health services being overwhelmed, allowing for better treatment of current cases, and delaying additional cases until effective treatments or a vaccine become available.[120][123]

According to the WHO, the use of masks is recommended only if a person is coughing or sneezing or when one is taking care of someone with a suspected infection.[134] For the European Centre for Disease Prevention and Control (ECDC) face masks "... could be considered especially when visiting busy closed spaces ..." but "... only as a complementary measure ..."[135] The U.S.
CDC recommends masks in public places where 6-foot social distancing is difficult to maintain, primarily in case you yourself are asymptomatic and to prevent unknowingly spreading the infection.[136] Several countries have recommended that healthy individuals wear face masks or cloth face coverings (like scarves or bandanas) at least in certain public settings, including China,[137] Hong Kong,[138] Spain,[139] Italy (Lombardy region),[140] Russia,[141] and the United States.[127]

Those diagnosed with COVID‑19 or who believe they may be infected are advised by the CDC to stay home except to get medical care, call ahead before visiting a healthcare provider, wear a face mask before entering the healthcare provider's office and when in any room or vehicle with another person, cover coughs and sneezes with a tissue, regularly wash hands with soap and water and avoid sharing personal household items.[30][142]

The CDC also recommends that individuals wash hands often with soap and water for at least 20 seconds, especially after going to the toilet or when hands are visibly dirty, before eating and after blowing one's nose, coughing or sneezing. It further recommends using an alcohol-based hand sanitiser with at least 60% alcohol, but only when soap and water are not readily available.[124] For areas where commercial hand sanitisers are not readily available, the WHO provides two formulations for local production. In these formulations, the antimicrobial activity arises from ethanol or isopropanol. Hydrogen peroxide is used to help eliminate bacterial spores in the alcohol; it is "not an active substance for hand antisepsis". Glycerol is added as a humectant.[143]

### Management

People are managed with supportive care, which may include fluid therapy, oxygen support, and supporting other affected vital organs.[144][145][146] The CDC recommends those who suspect they carry the virus wear a simple face mask.[30] Extracorporeal membrane oxygenation (ECMO) has been used to address the issue of respiratory failure, but its benefits are still under consideration.[39][147] Personal hygiene and a healthy lifestyle and diet have been recommended to improve immunity.[148] Supportive treatments may be useful in those with mild symptoms at the early stage of infection.[149] The WHO, the Chinese National Health Commission, and the United States' National Institutes of Health have published recommendations for taking care of people who are hospitalised with COVID‑19.[128][150][151] Intensivists and pulmonologists in the U.S. have compiled treatment recommendations from various agencies into a free resource, the IBCC.[152][153]

### Prognosis

[Figures: the severity of diagnosed COVID-19 cases in China;[154] case fatality rates by age group in China (as of 11 February 2020),[155] South Korea (as of 20 May 2020),[156] Spain (as of 18 May 2020),[157] and Italy (as of 14 May 2020);[158] case fatality rate in China depending on other health problems (data through 11 February 2020);[155] the number of deaths vs total cases by country and approximate case fatality rate.[159]]

The severity of COVID‑19 varies. The disease may take a mild course with few or no symptoms, resembling other common upper respiratory diseases such as the common cold. Mild cases typically recover within two weeks, while those with severe or critical diseases may take three to six weeks to recover.
Among those who have died, the time from symptom onset to death has ranged from two to eight weeks.[44] Children make up a small proportion of reported cases, with about 1% of cases being under 10 years and 4% aged 10–19 years.[22] They are likely to have milder symptoms and a lower chance of severe disease than adults; in those younger than 50 years the risk of death is less than 0.5%, while in those older than 70 it is more than 8%.[160][161][162] Pregnant women may be at higher risk for severe infection with COVID‑19 based on data from other similar viruses, like Severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS), but data for COVID‑19 is lacking.[163][164] In China, children acquired infections mainly through close contact with their parents or other family members who lived in Wuhan or had traveled there.[160] Some studies have found that the neutrophil to lymphocyte ratio (NLR) may be helpful in early screening for severe illness.[165] Most of those who die of COVID‑19 have pre-existing (underlying) conditions, including hypertension, diabetes mellitus, and cardiovascular disease.[166] The Istituto Superiore di Sanità reported that out of 8.8% of deaths where medical charts were available, 97% of people had at least one comorbidity with the average person having 2.7 diseases.[167] According to the same report, the median time between the onset of symptoms and death was ten days, with five being spent hospitalised. However, people transferred to an ICU had a median time of seven days between hospitalisation and death.[167] In a study of early cases, the median time from exhibiting initial symptoms to death was 14 days, with a full range of six to 41 days.[168] In a study by the National Health Commission (NHC) of China, men had a death rate of 2.8% while women had a death rate of 1.7%.[169] Histopathological examinations of post-mortem lung samples show diffuse alveolar damage with cellular fibromyxoid exudates in both lungs. Viral cytopathic changes were observed in the pneumocytes. The lung picture resembled acute respiratory distress syndrome (ARDS).[44] In 11.8% of the deaths reported by the National Health Commission of China, heart damage was noted by elevated levels of troponin or cardiac arrest.[45] According to March data from the United States, 89% of those hospitalised had preexisting conditions.[170] The availability of medical resources and the socioeconomics of a region may also affect mortality.[171] Concerns have been raised about long-term sequelae of the disease. The Hong Kong Hospital Authority found a drop of 20% to 30% in lung capacity in some people who recovered from the disease, and lung scans suggested organ damage.[172] This may also lead to post-intensive care syndrome following recovery.[173] ### History The virus is thought to be natural and has an animal origin,[76] through spillover infection.[174] The actual origin is unknown, but the first known cases of infection happened in China. 
By December 2019, the spread of infection was almost entirely driven by human-to-human transmission.[155][175] A study of the first 41 cases of confirmed COVID‑19, published in January 2020 in The Lancet, revealed the earliest date of onset of symptoms as 1 December 2019.[176][177][178] Official publications from the WHO reported the earliest onset of symptoms as 8 December 2019.[179] Human-to-human transmission was confirmed by the WHO and Chinese authorities by 20 January 2020.[180][181]

### Epidemiology

Several measures are commonly used to quantify mortality.[182] These numbers vary by region and over time and are influenced by the volume of testing, healthcare system quality, treatment options, time since the initial outbreak, and population characteristics such as age, sex, and overall health.[183]

The death-to-case ratio reflects the number of deaths divided by the number of diagnosed cases within a given time interval. Based on Johns Hopkins University statistics, the global death-to-case ratio is 6.0% (373,883/6,226,409) as of 1 June 2020.[9] The number varies by region.[184] Other measures include the case fatality rate (CFR), which reflects the percent of diagnosed individuals who die from a disease, and the infection fatality rate (IFR), which reflects the percent of infected individuals (diagnosed and undiagnosed) who die from a disease. These statistics are not time-bound and follow a specific population from infection through case resolution. Many academics have attempted to calculate these numbers for specific populations.[185]

Outbreaks have occurred in prisons due to crowding and an inability to enforce adequate social distancing.[186][187] In the United States, the prisoner population is aging and many of them are at high risk for poor outcomes from COVID‑19 due to high rates of coexisting heart and lung disease, and poor access to high-quality healthcare.[186]

[Charts: total confirmed cases over time; total deaths over time; total confirmed cases of COVID‑19 per million people;[188] total confirmed deaths due to COVID‑19 per million people.[189]]

#### Infection fatality rate

Our World in Data states that, as of 25 March 2020, the infection fatality rate (IFR) cannot be accurately calculated.[190] In February, one research group estimated the IFR at 0.94%, with a confidence interval between 0.37 and 2.9 percent.[191] The University of Oxford Centre for Evidence-Based Medicine (CEBM) estimated a global CFR of 0.8 to 9.6 percent (last revised 30 April) and IFR of 0.10 to 0.41 percent (last revised 2 May).[192] According to CEBM, random antibody testing in Germany suggested an IFR of 0.37% (0.12% to 0.87%) there, but there have been concerns about false positives.[192][193][194][195][unreliable medical source?] Firm lower limits of infection fatality rates have been established in a number of locations. As of 7 May, in New York City, with a population of 8.4 million, 14,162 have died from COVID-19 (0.17% of the population).[196] In Bergamo province, 0.57% of the population has died.[197][198][unreliable medical source?]
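To make the distinction between these measures concrete, a small sketch; the first two numbers are the Johns Hopkins figures quoted above, while the cohort numbers are hypothetical:

```python
# Illustrative only: distinguishing death-to-case ratio, CFR, and IFR.
deaths = 373_883             # Johns Hopkins figures quoted above
confirmed = 6_226_409
death_to_case = deaths / confirmed           # deaths / diagnosed cases so far
print(f"death-to-case ratio: {death_to_case:.1%}")   # ~6.0%

# CFR and IFR need resolved cohorts; the numbers below are hypothetical:
resolved_diagnosed = 3_000_000               # diagnosed cases with known outcome
deaths_in_cohort = 150_000
cfr = deaths_in_cohort / resolved_diagnosed  # share of the diagnosed who die
total_infected = 30_000_000                  # includes undiagnosed infections
ifr = deaths_in_cohort / total_infected      # share of all infected who die
print(f"CFR: {cfr:.1%}, IFR: {ifr:.1%}")     # CFR: 5.0%, IFR: 0.5%
```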
To get a better view on the number of people infected, initial antibody testing has been carried out, but there are no valid scientific reports based on any of them as of yet.[199][200] On 1 May antibody testing in New York suggested an IFR of 0.86%.[201]

#### Sex differences

The impact of the pandemic and its mortality rate are different for men and women.[202] Mortality is higher in men in studies conducted in China and Italy.[1][203][204] The higher risk for men appears in their 50s, and begins to taper off only at 90.[204] In China, the death rate was 2.8 percent for men and 1.7 percent for women.[204] The exact reasons for this sex-difference are not known, but genetic and behavioural factors could be a reason.[202] Sex-based immunological differences, a lower prevalence of smoking in women, and men developing co-morbid conditions such as hypertension at a younger age than women could have contributed to the higher mortality in men.[204] In Europe, of those infected with COVID‑19, 57% were men; of those infected with COVID‑19 who also died, 72% were men.[205] As of April 2020, the U.S. government is not tracking sex-related data of COVID‑19 infections.[206] Research has shown that viral illnesses like Ebola, HIV, influenza, and SARS affect men and women differently.[206] A higher percentage of health workers, particularly nurses, are women, and they have a higher chance of being exposed to the virus.[207] School closures, lockdowns, and reduced access to healthcare following the COVID-19 pandemic may differentially affect the genders and possibly exaggerate existing gender disparity.[202][208]

#### Ethnic differences

In the U.S., a greater proportion of deaths due to COVID-19 have occurred among African Americans.[209] Structural factors that prevent African Americans from practicing social distancing include their concentration in crowded substandard housing and in "essential" occupations such as public transit and health care. Greater prevalence of lacking health insurance and care and of underlying conditions such as diabetes, hypertension and heart disease also increase their risk of death.[210] Similar issues affect Native American and Latino communities.[209] According to a U.S health policy non-profit, 34% of American Indian and Alaska Native People (AIAN) non-elderly adults are at risk of serious illness compared to 21% of white non-elderly adults.[211] The source attributes it to disproportionately high rates of many health conditions that may put them at higher risk as well as living conditions like lack of access to clean water.[212] Leaders have called for efforts to research and address the disparities.[213]

### Society and culture

#### Name

During the initial outbreak in Wuhan, China, the virus and disease were commonly referred to as "coronavirus" and "Wuhan coronavirus",[214][215][216] with the disease sometimes called "Wuhan pneumonia".[217][218] In the past, many diseases have been named after geographical locations, such as the Spanish flu,[219] Middle East Respiratory Syndrome, and Zika virus.[220] In January 2020, the World Health Organisation recommended 2019-nCoV[221] and 2019-nCoV acute respiratory disease[222] as interim names for the virus and disease per 2015 guidance and international guidelines against using geographical locations (e.g.
Wuhan, China), animal species or groups of people in disease and virus names to prevent social stigma.[223][224][225] The official names COVID‑19 and SARS-CoV-2 were issued by the WHO on 11 February 2020.[226] WHO chief Tedros Adhanom Ghebreyesus explained: CO for corona, VI for virus, D for disease and 19 for when the outbreak was first identified (31 December 2019).[227] The WHO additionally uses "the COVID‑19 virus" and "the virus responsible for COVID‑19" in public communications.[226] Both the disease and virus are commonly referred to as "coronavirus" in the media and public discourse.

#### Misinformation

After the initial outbreak of COVID‑19, conspiracy theorists spread misinformation and disinformation regarding the origin, scale, prevention, treatment, and other aspects of the disease, which rapidly spread online.[228][229][230]

#### Decreased emergency room use

In Austria, 39% fewer persons sought help for cardiac symptoms in the month of March. A study estimated that there were 110 incidents of preventable cardiac death, as compared to 86 confirmed deaths from coronavirus as of 29 March.[231] A preliminary study in the U.S. found a 38% under-utilisation of cardiac care units as compared to normal.[232] The head of cardiology at the University of Arizona has stated, "My worry is some of these people are dying at home because they're too scared to go to the hospital."[233] There is also concern that persons with symptoms of stroke and appendicitis are delaying seeking help.[233][234]

### Other animals

Humans appear to be capable of spreading the virus to some other animals. A domestic cat in Liège, Belgium, tested positive after it started showing symptoms (diarrhoea, vomiting, shortness of breath) a week after its owner, who was also positive.[235] Tigers and lions at the Bronx Zoo in New York, United States, tested positive for the virus and showed symptoms of COVID‑19, including a dry cough and loss of appetite.[236] Minks at two farms in the Netherlands also tested positive for COVID-19.[237] A study of domesticated animals inoculated with the virus found that cats and ferrets appear to be "highly susceptible" to the disease, while dogs appear to be less susceptible, with lower levels of viral replication. The study failed to find evidence of viral replication in pigs, ducks, and chickens.[238] In March 2020, researchers from the University of Hong Kong showed that Syrian hamsters could be a model organism for COVID-19 research.[239]

### Research

No medication or vaccine is approved to treat the disease.[240] International research on vaccines and medicines for COVID‑19 is underway by government organisations, academic groups, and industry researchers.[241][242] In March, the World Health Organisation initiated the "Solidarity Trial" to assess the treatment effects of four existing antiviral compounds with the most promise of efficacy.[243] The World Health Organization suspended hydroxychloroquine from its global drug trials for COVID-19 treatments on 26 May 2020 due to safety concerns. The trial had previously enrolled 3,500 patients from 17 countries.[244] France, Italy and Belgium also banned the use of hydroxychloroquine as a COVID-19 treatment.[245] There has been a great deal of COVID-19 research, involving accelerated research processes and publishing shortcuts to meet the global demand.
To minimise the harm from misinformation, medical professionals and the public are advised to expect rapid changes to available information, and to be attentive to retractions and other updates.[246]

#### Vaccine

There is no available vaccine, but various agencies are actively developing vaccine candidates. Previous work on SARS-CoV is being used because both SARS-CoV and SARS-CoV-2 use the ACE2 receptor to enter human cells.[247] Three vaccination strategies are being investigated. First, researchers aim to build a whole virus vaccine. The use of such a virus, be it inactivated or attenuated, aims to elicit a prompt immune response of the human body to a new infection with COVID‑19. A second strategy, subunit vaccines, aims to create a vaccine that sensitises the immune system to certain subunits of the virus. In the case of SARS-CoV-2, such research focuses on the S-spike protein that the virus uses to bind to the ACE2 receptor. A third strategy is that of the nucleic acid vaccines (DNA or RNA vaccines, a novel technique for creating a vaccine). Experimental vaccines from any of these strategies would have to be tested for safety and efficacy.[248] On 16 March 2020, the first clinical trial of a vaccine started with four volunteers in Seattle, Washington, United States. The vaccine contains a harmless genetic code copied from the virus that causes the disease.[249] Antibody-dependent enhancement has been suggested as a potential challenge for vaccine development for SARS-COV-2, but this is controversial.[250]

#### Medications

At least 29 phase II–IV efficacy trials in COVID‑19 were concluded in March 2020 or scheduled to provide results in April from hospitals in China.[251][252] There are more than 300 active clinical trials underway as of April 2020.[131] Seven trials were evaluating already approved treatments, including four studies on hydroxychloroquine or chloroquine.[252] Repurposed antiviral drugs make up most of the Chinese research, with nine phase III trials on remdesivir across several countries due to report by the end of April.[251][252] Other candidates in trials include vasodilators, corticosteroids, immune therapies, lipoic acid, bevacizumab, and recombinant angiotensin-converting enzyme 2.[252]

The COVID‑19 Clinical Research Coalition has goals to 1) facilitate rapid reviews of clinical trial proposals by ethics committees and national regulatory agencies, 2) fast-track approvals for the candidate therapeutic compounds, 3) ensure standardised and rapid analysis of emerging efficacy and safety data and 4) facilitate sharing of clinical trial outcomes before publication.[253][254]

Several existing medications are being evaluated for the treatment of COVID‑19,[240] including remdesivir, chloroquine, hydroxychloroquine, lopinavir/ritonavir, and lopinavir/ritonavir combined with interferon beta.[243][255] There is tentative evidence of efficacy for remdesivir, as of March 2020.[256][257] Clinical improvement was observed in patients treated with compassionate-use remdesivir.[258] Remdesivir inhibits SARS-CoV-2 in vitro.[259] Phase III clinical trials are underway in the U.S., China, and Italy.[240][251][260] In 2020, a trial found that lopinavir/ritonavir was ineffective in the treatment of severe illness.[261] Nitazoxanide has been recommended for further in vivo study after demonstrating low-concentration inhibition of SARS-CoV-2.[259] There are mixed results as of 3 April 2020 as to the effectiveness of hydroxychloroquine as a treatment for COVID‑19, with some studies
showing little or no improvement.[262][263] One study has shown an association between hydroxychloroquine or chloroquine use and higher death rates, along with other side effects.[264][265] The studies of chloroquine and hydroxychloroquine, with or without azithromycin, have major limitations that have prevented the medical community from embracing these therapies without further study.[131] Oseltamivir does not inhibit SARS-CoV-2 in vitro and has no known role in COVID‑19 treatment.[131]

#### Cytokine storm

A cytokine storm can be a complication in the later stages of severe COVID‑19. There is preliminary evidence that hydroxychloroquine may be useful in controlling cytokine storms in late-phase severe forms of the disease.[266] Tocilizumab has been included in treatment guidelines by China's National Health Commission after a small study was completed.[267][268] It is undergoing a phase 2 non-randomised trial at the national level in Italy after showing positive results in people with severe disease.[269][270] Combined with a serum ferritin blood test to identify a cytokine storm (also called cytokine storm syndrome, not to be confused with cytokine release syndrome), it is meant to counter such developments, which are thought to be the cause of death in some affected people.[271][272][273] The interleukin-6 receptor antagonist was approved by the FDA to undergo a phase III clinical trial assessing its effectiveness against COVID‑19, based on retrospective case studies of its use in 2017 for the treatment of steroid-refractory cytokine release syndrome induced by a different cause, CAR T cell therapy.[274] To date, there is no randomised, controlled evidence that tocilizumab is an efficacious treatment for CRS. Prophylactic tocilizumab has been shown to increase serum IL-6 levels by saturating the IL-6R, driving IL-6 across the blood-brain barrier and exacerbating neurotoxicity, while having no effect on the incidence of CRS.[275]

Lenzilumab, an anti-GM-CSF monoclonal antibody, is protective in murine models of CAR T cell-induced CRS and neurotoxicity, and is a viable therapeutic option due to the observed increase of pathogenic GM-CSF-secreting T cells in hospitalised patients with COVID‑19.[276] The Feinstein Institute of Northwell Health announced in March a study of "a human antibody that may prevent the activity" of IL-6.[277]

#### Passive antibodies

Transferring purified and concentrated antibodies produced by the immune systems of those who have recovered from COVID‑19 to people who need them is being investigated as a non-vaccine method of passive immunisation.[278] This strategy was tried for SARS with inconclusive results.[278] Viral neutralisation is the anticipated mechanism of action by which passive antibody therapy can mediate defence against SARS-CoV-2.[279] The spike protein of SARS-CoV-2 is the primary target for neutralising antibodies.[279] Other mechanisms, however, such as antibody-dependent cellular cytotoxicity and/or phagocytosis, may be possible.[278] Other forms of passive antibody therapy, for example using manufactured monoclonal antibodies, are in development.[278] Production of convalescent serum, which consists of the liquid portion of the blood from recovered patients and contains antibodies specific to this virus, could be increased for quicker deployment.[280]
# How do you calculate the decimal expansion of an irrational number?

Just curious, how do you calculate an irrational number? Take $\pi$ for example. Computers have calculated $\pi$ to the millionth digit and beyond. What formula/method do they use to figure this out? How does it compare to other irrational numbers such as $\varphi$ or $e$?

Minor aside: your question is not "how do you calculate an irrational number", but "how do you calculate the decimal expansion of an irrational number". – Hurkyl May 20 '12 at 12:20

This is a really good question, because it's simple to ask but has no simple answer. It depends a lot on which number you have in mind. For an interesting counterpoint, consider Euler's constant $\gamma \approx 0.57721\ldots$. Methods are known for calculating $\gamma$ with great precision, but it is not known whether it is irrational or not! – MJD May 20 '12 at 13:12

Relevant this and this. – user2468 May 20 '12 at 15:42

At the other extreme, there are irrational numbers that are not computable at all, such as Chaitin's constant. See en.wikipedia.org/wiki/Chaitin%27s_constant – Robert Israel May 20 '12 at 19:36

@Sean All these formulas have been proven equivalent, otherwise they wouldn't be called "formulas for $\pi$". In general, showing that two formulas are equivalent is very hard, and requires a great deal of mathematics – Alex Becker Jul 6 '12 at 2:50

## 5 Answers

# $\pi$

For computing $\pi$, many rapidly convergent methods are known. Historically, popular methods include estimating $\arctan$ with its Taylor series expansion and calculating $\pi/4$ using a Machin-like formula. A basic one would be $$\frac{\pi}{4} = 4 \arctan\frac{1}{5} - \arctan\frac{1}{239}$$ The reason these formulas are used over estimating $\arctan 1 = \frac{\pi}{4}$ directly is that the series for $\arctan x$ converges faster for $x \approx 0$. Thus, small values of $x$ are better for estimating $\pi/4$, even if one is required to compute $\arctan$ more times. A good example of this is Hwang Chien-Lih's formula: $$\begin{align} \frac{\pi}{4} =& 183\arctan\frac{1}{239} + 32\arctan\frac{1}{1023} - 68\arctan\frac{1}{5832} + 12\arctan\frac{1}{113021}\\ & - 100\arctan\frac{1}{6826318} - 12\arctan\frac{1}{33366019650} + 12\arctan\frac{1}{43599522992503626068}\\ \end{align}$$ Though $\arctan$ needs to be computed 7 times to a desired accuracy, computing this formula interestingly requires less computational effort than computing $\arctan 1$ to the same accuracy.

Iterative algorithms, such as Borwein's algorithm or the Gauss–Legendre algorithm, can converge to $\pi$ extremely fast (the Gauss–Legendre algorithm finds 45 million correct digits in 25 iterations), but require much computational effort. Because of this, the linear convergence of Ramanujan's algorithm or the Chudnovsky algorithm is often preferred (these methods are mentioned in other posts here as well). These methods produce 6–8 digits and 14 digits respectively per term added.

It is interesting to mention that the Bailey–Borwein–Plouffe formula can calculate the $n^{th}$ binary digit of $\pi$ without needing to know the $(n-1)^{th}$ digit (these algorithms are known as "spigot algorithms"). Bellard's formula is similar but 43% faster.

The first few terms from the Chudnovsky algorithm are (note the accuracy increases by about 14 decimal places per term):

    n    Approx. sum    Approx. error (pi - sum)
    0    3.141592653     5.90 x 10^-14
    1    3.141592653    -3.07 x 10^-28
    2    3.141592653     1.72 x 10^-42
    3    3.141592653     1.00 x 10^-56

See these two questions as well.
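To make the Machin-formula approach concrete, here is a minimal Python sketch (my own addition, not from any of the answers; the function names and the use of the `decimal` module are choices of this illustration). It sums the Taylor series of $\arctan(1/n)$ until the terms drop below the target precision:

```python
from decimal import Decimal, getcontext

def arctan_inv(n, digits):
    """Taylor series for arctan(1/n), truncated once terms are negligible."""
    getcontext().prec = digits + 10          # a few guard digits
    x = Decimal(1) / n
    x2, total, term = x * x, x, x
    k, sign = 3, -1
    while True:
        term *= x2
        delta = term / k
        if delta < Decimal(10) ** -(digits + 5):
            break
        total += sign * delta
        k += 2
        sign = -sign
    return total

def machin_pi(digits=50):
    # pi/4 = 4*arctan(1/5) - arctan(1/239)
    return 4 * (4 * arctan_inv(5, digits) - arctan_inv(239, digits))

print(machin_pi(50))   # 3.1415926535897932384626433832795028841971693993751...
```

Real implementations go further with binary splitting and FFT-based big-integer multiplication, which is where the Chudnovsky method gains its speed, but the structure is the same.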
# $e$

The most popular method for computing $e$ is its Taylor series expansion, because it requires little computational effort and converges very quickly (and continues to speed up): $$e=\sum_{n=0}^\infty \frac{1}{n!}$$ The first sums created in this series are as follows:

    n    Approx. sum    Approx. error (e - sum)
    0    1              1.718281828...
    1    2              0.718281828
    2    2.5            0.218281828
    3    2.666666666    0.051615161
    ...
    10   2.718281801    2.73 x 10^-8
    ...
    20   2.718281828    2.05 x 10^-20

One should also note that the limit definition of $e$ and the series may be used in conjunction. The canonical limit for $e$ is $$e=\lim_{n \to \infty}\left(1+\frac{1}{n}\right)^n$$ Noting that this is the first two terms of the Taylor series expansion of $\exp(\frac{1}{n})$, raised to the exponent $n$ for $n$ large, it is clear that $\exp(\frac{1}{n})$ can be computed to a higher accuracy in fewer terms than $e^1$ in the series, because the first two terms alone give a better and better estimate as $n \to \infty$. This means that if we add another few terms of the expansion of $\exp(\frac{1}{n})$, we can find the $n^{th}$ root of $e$ to high accuracy (higher than with the limit or the plain series), and then we just multiply the answer with itself $n$ times (easy, if $n$ is an integer). As a formula, we have, if $m$ and $a$ are large: $$e \approx \left(\sum_{n=0}^m \frac{1}{n!a^n}\right)^a$$ If we use the series to find the $100^{th}$ root (i.e. using the above formula, $a=100$) of $e$, this is what results (note the fast rate of convergence):

    n    Approx. sum    Approx. sum^100    Approx. error (e - sum^100)
    0    1              1                  1.718281828...
    1    1.01           2.704813829        0.013467999
    2    1.01005        2.718236862        0.000044965
    3    1.010050166    2.718281716        1.12 x 10^-7
    ...
    10   1.010050167    2.7182818284       6.74 x 10^-28
    ...
    20   1.010050167    2.7182818284       4.08 x 10^-51

# $\varphi$

The golden ratio is $$\varphi=\frac{\sqrt{5}+1}{2}$$ so once $\sqrt{5}$ is computed to a sufficient accuracy, so can $\varphi$ be. To estimate $\sqrt{5}$, many methods can be used, perhaps most simply the Babylonian method. Newton's root-finding method may also be used to find $\varphi$, because it is a root of $$0=x^2-x-1$$ If $\xi$ is a root of $f(x)$, Newton's method finds $\xi$ as the limit $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}, \qquad \xi=\lim_{n \to \infty}x_n$$ We thus assign $f(x)=x^2-x-1$ and $f'(x)=2x-1$. Then $$x_{n+1}=x_n-\frac{x_n^2-x_n-1}{2x_n-1}=\frac{x_n^2+1}{2x_n-1}$$ If $x_0=1$, the first few iterations yield:

    n    Value of x_n    Approx. error (phi - x_n)
    1    2               -0.381966011
    2    1.666666666     -0.048632677
    3    1.619047619     -0.001013630
    4    1.618034447     -4.59 x 10^-7
    ...
    7    1.618033988     -7.05 x 10^-54

The quadratic convergence of this method is very clear in this example.

# $\gamma$

Unfortunately, no quadratically convergent methods are known to compute $\gamma$. As mentioned above, some methods are discussed here: What is the fastest/most efficient algorithm for estimating Euler's Constant $\gamma$? The algorithm from here is $$ \gamma= 1-\log k \sum_{r=1}^{12k+1} \frac{ (-1)^{r-1} k^{r+1}}{(r-1)!(r+1)} + \sum_{r=1}^{12k+1} \frac{ (-1)^{r-1} k^{r+1} }{(r-1)! (r+1)^2}+\mbox{O}(2^{-k}) $$ and this method gives the following approximation:

    k    Approx. sum           Approx. error (sum - gamma)
    1    0.7965995992978246    0.21938393439629178
    5    0.5892082678451087    0.011992602943575847
    10   0.5773243590712589    1.086941697260313 x 10^-4
    15   0.5772165124955206    8.47593987773898 x 10^-7

This answer has even faster convergence.
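Before moving on, here is a quick numerical check (my own addition, not part of the answer) of the Newton iteration for $\varphi$ above; exact rational arithmetic makes the doubling of correct digits visible:

```python
from fractions import Fraction
from decimal import Decimal, getcontext

getcontext().prec = 60
phi = (1 + Decimal(5).sqrt()) / 2

x = Fraction(1)                      # x_0 = 1
for n in range(1, 8):
    x = (x * x + 1) / (2 * x - 1)    # x_{n+1} = (x_n^2 + 1)/(2 x_n - 1), computed exactly
    err = phi - Decimal(x.numerator) / Decimal(x.denominator)
    print(n, err)                    # the error roughly squares at every step
```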
Some other methods are also reviewed here: http://www.ams.org/journals/mcom/1980-34-149/S0025-5718-1980-0551307-4/S0025-5718-1980-0551307-4.pdf

# $\zeta(3)$

A method for estimating $\zeta(3)$ is the Amdeberhan-Zeilberger formula ($O(n \log n^3)$): $$\zeta(3)=\frac{1}{64}\sum_{k=0}^{\infty}\frac{(-1)^k(205k^2+250k+77)(k!)^{10}}{((2k+1)!)^5}$$

# $G$ (Catalan's constant)

Fee, in his article, presents a method for computing Catalan's constant based on a formula of Ramanujan: $$G=\sum_{k=0}^\infty \frac{2^{k-1}}{(2k+1)\binom{2k}{k}}\sum_{j=0}^k \frac1{2j+1}$$ Another rapidly-converging series from Ramanujan has also been used for computing Catalan's constant: $$G=\frac{\pi}{8}\log(2+\sqrt 3)+\frac38\sum_{n=0}^\infty \frac{(n!)^2}{(2n)!(2n+1)^2}$$

# $\log 2$

The Taylor series for $\log$ has disappointingly poor convergence, and for that reason alternate methods are needed to efficiently compute $\log 2$. Common ways to compute $\log 2$ include "Machin-like formulae" using the $\operatorname{arcoth}$ function, similar to the ones used to compute $\pi$ with the $\arctan$ function mentioned above: $$\log 2=144\operatorname {arcoth}(251)+54\operatorname {arcoth}(449)-38\operatorname {arcoth}(4801)+62\operatorname {arcoth}(8749)$$

# $A$ (Glaisher-Kinkelin constant)

One usual method for computing the Glaisher-Kinkelin constant rests on the identity $$A=\exp\left(\frac1{12}(\gamma+\log(2\pi))-\frac{\zeta'(2)}{2\pi ^2}\right)$$ where $\zeta'(s)$ is the derivative of the Riemann zeta function. Now, $$\zeta'(2)=2\sum_{k=1}^\infty \frac{(-1)^k \log(2k)}{k^2}$$ and any number of convergence acceleration methods can be applied to sum this alternating series. Two of the more popular choices are the Euler transformation and the CRVZ algorithm.

Another interesting website that has many fast algorithms for common constants is here.

"These methods produce 6-8 digits and 14 digits respectively." That would be per iteration, no? – Gerry Myerson May 21 '12 at 0:35

@GerryMyerson Every time you add another term. I will edit. – Argon May 21 '12 at 14:41

$\gamma$ is the golden/black sheep of the herd. – Pedro Tamaroff May 21 '12 at 22:50

In lieu of writing a different answer, I decided to add to this CW answer. I hope you don't mind. – J. M. Jul 7 '12 at 3:52

@J.M. No problem! I edited the style a tad, if that's okay. – Argon Jul 7 '12 at 16:11

Different irrationals yield to different techniques. $\phi=(1+\sqrt5)/2$ just involves calculating $\sqrt5$, which can be done easily by Newton's method from introductory calculus. The infinite series $$e=1+1+1/2+1/6+1/24+\cdots$$ where the denominators are the factorials, can be used to calculate $e$. For $\pi$, this article on the Gauss-Legendre algorithm will give you some ideas.

Perhaps it looks better if you name the link instead of showing the link address directly in your answer. Like, "For pi, this Wikipedia article [...]". – Gigili May 20 '12 at 9:48

Gerry Myerson's answer above is correct in saying that different irrational numbers lead to different techniques. In essence, though, all those techniques boil down to one idea: Find some sort of method (formula, infinite series, algorithm, etc.) that, when used, will yield a decimal expansion that will converge to the value of the irrational (or rational, for that matter!). Naturally, certain techniques are more useful in certain circumstances (e.g., in computing, techniques that converge very quickly, but also result in as few processor instructions as possible, are preferred).
As an aside, my personal favorite formula for $\pi$ was given by Ramanujan: $$ \frac{1}{\pi} = \frac{\sqrt{8}}{9801} \sum_{n=0}^{\infty}\frac{(4n)!}{(n!)^4}\frac{1103+26390n}{396^{4n}} $$ This formula converges really, really quickly. The MathWorld article notes that it provides, on average, 6 to 8 decimal places per term.

An example not yet given.

# $\zeta(3)=1.20205690315959428539973816151144999076498629234049\ldots$ (here)

The number $$\zeta (3)=\sum_{n=1}^\infty \frac{1}{n^3} \tag{1}$$ is called Apéry's constant, because its irrationality was first proved by Roger Apéry. The following series, which converges to $\zeta (3)$ faster than $(1)$, can be used to compute it: $$\zeta (3)=\frac{5}{2}\sum_{n=1}^{\infty }\frac{\left( -1\right) ^{n-1}}{n^{3}\binom{2n}{n}}.\tag{2}$$ For the same purpose we can use the continued fraction expansion for $\zeta (3)$, which is $$\zeta \left( 3\right) =\dfrac{6}{5-\dfrac{1}{117-\dfrac{64}{535-...-\dfrac{n^{6}}{34n^{3}+51n^{2}+27n+5}}}}.\tag{3}$$ Another possibility is to use the following limit $$\begin{equation*} \zeta (3)=\lim_{n\rightarrow \infty }\frac{a_{n}}{b_{n}}, \end{equation*}\tag{4}$$ where $$\begin{equation*} a_{n}=\sum_{k=0}^{n}\binom{n}{k}^{2}\binom{n+k}{k}^{2}c_{n,k}, \end{equation*}\tag{5}$$ $$\begin{equation*} b_{n}=\sum_{k=0}^{n}\binom{n}{k}^{2}\binom{n+k}{k}^{2}, \end{equation*}\tag{6}$$ and $$\begin{equation*} c_{n,k}=\sum_{m=1}^{n}\frac{1}{m^{3}}+\sum_{m=1}^{k}\frac{\left( -1\right) ^{m-1}}{2m^{3}\binom{n}{m}\binom{n+m}{m}},\quad k\leq n. \end{equation*}\tag{7}$$

References

- Apéry, Roger (1979), "Irrationalité de $\zeta(2)$ et $\zeta(3)$", Astérisque 61: 11–13.
- van der Poorten, Alfred (1979), "A proof that Euler missed...", The Mathematical Intelligencer 1 (4): 195–203.

For $\pi$ there is a nice formula given by John Machin: $$ \frac{\pi}{4} = 4\arctan\frac{1}{5} - \arctan\frac{1}{239}\,. $$ The power series for $\arctan \alpha$ is given by $$\arctan\alpha = \frac{\alpha}{1} - \frac{\alpha^3}{3}+\frac{\alpha^5}{5} - \frac{\alpha^7}{7} + \ldots\,. $$ Also you could use (generalized) continued fractions: $$ \pi = \dfrac{4}{1+\cfrac{1^2}{3+\cfrac{2^2}{5+\cfrac{3^2}{7+\cdots}}}}$$ There are many other methods to compute $\pi$, including algorithms able to find any digit of $\pi$'s hexadecimal expansion independently of the others. As I remember, Wikipedia has a lot on methods to compute $\pi$. Moreover, as $\pi$ is a number intrinsic to mathematics, it shows up in many unexpected places, e.g. in a card game called Mafia; for details see this paper.

As for $e$, there are also power series and continued fractions, but there exist more sophisticated algorithms that can compute $e$ much faster. And for $\phi$, there is a simple recurrence relation based on Newton's method, e.g. $\phi_{n+1} = \frac{\phi_n^2+1}{2\phi_n-1}$. It is worth mentioning that the continued fraction for the golden ratio contains only ones, i.e. $[1;1,1,1,\ldots]$, and the successive approximations are ratios of consecutive Fibonacci numbers $\frac{F_{n+1}}{F_n}$.

To conclude, the majority of the example methods here were of one of two forms: computing better and better ratios (with each fraction calculated exactly), or working with approximations the whole time while setting up a process that eventually converges to the desired number. In fact this distinction is not sharp, but the methods used in the two approaches are usually different. Useful tools: power series, continued fractions, and root-finding.
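To round this off, series $(2)$ for Apéry's constant above is easy to check numerically; this short sketch (my own addition, not from the answers) uses exact rational arithmetic, with `math.comb` supplying the central binomial coefficient:

```python
from fractions import Fraction
from math import comb

s = Fraction(0)
for n in range(1, 16):
    s += Fraction((-1) ** (n - 1), n**3 * comb(2 * n, n))
print(float(Fraction(5, 2) * s))   # 1.2020569031595... (~12 correct digits from 15 terms)
```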
# monteCarloDiffusivityCalculation

monteCarloDiffusivityCalculation(defect_lists, transition_path_lists, start_temperature=None, end_temperature=None, delta_temperature=None, charge=None, number_of_samples=None, number_of_jumps=None, discard_faulty=True, random_seed=None)

Perform Atomistic Kinetic Monte Carlo simulations on a list of defects and transition paths at different temperatures, with the objective of extracting a diffusivity Arrhenius plot.

Parameters:

- defect_lists (sequence of InterstitialList | SplitInterstitialList | VacancyList) – A sequence of defect lists.
- transition_path_lists (sequence of TransitionPathList) – A sequence of transition path lists.
- start_temperature (PhysicalQuantity of type Kelvin) – Starting temperature. Default: 873.15 Kelvin
- end_temperature (PhysicalQuantity of type Kelvin) – Ending temperature. Should be different from the starting temperature. Default: 1173.15 Kelvin
- delta_temperature (PhysicalQuantity of type Kelvin) – Interval by which to increase start_temperature until end_temperature is reached. Default: 50 Kelvin
- charge (int) – Charge state to be simulated. Must be between -4 and +4 inclusive. Default: 0
- number_of_samples (int) – Number of Monte Carlo samples to average. A bigger number of samples provides more accuracy (better statistics) at the cost of more CPU time. Default: 50
- number_of_jumps (int) – Number of random walks (migration jumps) per sample. A bigger number of jumps provides better accuracy at the cost of more CPU time, although it is usually better to increase the number of samples than the number of jumps. Default: 500
- discard_faulty (bool) – Whether the faulty NEB calculations are to be discarded. Default: True
- random_seed (int) – Initial seed used for the pseudo-random number generator. Change it to do statistical analysis. Default: 1

Returns: A tuple containing the prefactor and activation energy. Tuple of PhysicalQuantity of type frequency, PhysicalQuantity of type energy.
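A hypothetical usage sketch follows (my own, not from this manual page). It assumes that `vacancies` and `paths` were produced earlier in the workflow by the framework's defect-generation and NEB tools, and that `Kelvin` is the unit object this API uses for physical quantities; only the parameters documented above are passed.

```python
# Hedged usage sketch; `vacancies`, `paths` and `Kelvin` are assumed to come
# from the surrounding framework, as noted in the lead-in above.
prefactor, activation_energy = monteCarloDiffusivityCalculation(
    defect_lists=[vacancies],            # e.g. a VacancyList built earlier
    transition_path_lists=[paths],       # the matching TransitionPathList
    start_temperature=873.15 * Kelvin,
    end_temperature=1173.15 * Kelvin,
    delta_temperature=50.0 * Kelvin,
    charge=0,
    number_of_samples=50,                # more samples -> better statistics
    number_of_jumps=500,
    discard_faulty=True,
    random_seed=1,                       # vary this for statistical analysis
)
print(prefactor, activation_energy)      # Arrhenius fit: D = D0 * exp(-Ea / kT)
```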
# The Harmonious Mathematics of Music

According to legend, the Ancient Greek Pythagoras was once walking on the streets of Samos, when the sounds of blacksmiths' hammering suddenly gave him an epiphany. Pythagoras rushed into the shop and, as he mathematically analyzed the shapes of the blacksmiths' hammers, he laid the foundations of music that today's Rihanna, Shakira and others are building upon.

Wait a minute… Are you talking about the guy from the Pythagorean theorem?

I am. And despite what you've learned at school, this Pythagorean theorem is absolutely not Pythagoras' greatest idea — most notably because he probably wasn't even the one who came up with the Pythagorean theorem. At least, that's what I think: Pythagoras' greatest idea was the mathematization of music, whose harmonious structures would later be the playground of the greatest musicians like Ludwig van Beethoven (we'll get there!), or like the brilliant mathemusician Vi Hart:

I've been a bit reluctant about writing this article at first because I'm very, very bad at music. I have never taken any music class, I play no instrument and I sing terribly out of tune — seriously, do not ask me to sing! But I do enjoy listening to music… especially since I've found out about the fundamental mathematical structures of music! I guess it's an amazing example of how mathematics has allowed me to greatly appreciate an art I never paid enough attention to!

## The Perfection of Octaves

As the story goes, when Pythagoras started to play with hammers, he noticed that two of them were particularly harmonious with respect to one another. He measured the weights of these hammers and found out something absolutely startling.

What did he find out?

The heavier of the two hammers was exactly twice the weight of the lighter one! Exactly twice.

Exactly twice? How come?

I know! What were the odds? Pythagoras had focused on these hammers solely because of musical aesthetic considerations. And yet, out of this personal taste of musical harmony, emerged a perfect ratio of 2. Okay, actually, this beautiful story is probably apocryphal… But is there a Pythagoras story that's not apocryphal?

That is very surprising… and intriguing!

I know! In fact, it was so neat that Pythagoras went on claiming that whole numbers ruled the world… which apparently led his "philosophy school" to drown Hippasus of Metapontum, because Hippasus was claiming to have found some non-perfect-ratio number in the supposedly perfect realm of geometry! Find out more with my article on numbers and constructibility.

So, is there an explanation of this ratio of 2?

To understand this ratio, Pythagoras turned his attention to the simplest musical instrument: A string. Could it be that the ratio of 2 applied to the string also allows for some musical harmony? Is there harmony between the vibration of the string and, say, the vibration of half of the string?

Is there?

There is. Today, this essential harmony is called an octave. In fact, octaves are so important in music that we gave the same name to two notes separated by an octave. You know? $A$, $B$, $C$, $D$, $E$, $F$, $G$… Two $C's$ are separated by exactly an octave. If they are notes played by a string, one corresponds to a string exactly twice the length of the other. Okay, it was a small lie. Actually, what matters isn't the length of the string, but the frequency at which it vibrates. The greater this frequency, the higher-pitched the note.
It turns out that for most (musical) strings, it's a good approximation to say that a string half the length will vibrate twice as fast.

But is there a reason why two notes separated by an octave sound harmonious?

There is! Let's take the longer string. Let's say it plays a $C$. It will be a low-pitch $C$. Now, if you pinch the string in the middle, the string will vibrate in a sort of perfectly symmetric way. So far so good. But if you don't pinch it in the middle, it will vibrate in a much more complicated way. A very nastily complicated way. But there's one thing we do know about the vibration of the string…

What is it?

End points of the string are fixed.

And that's very important. Why?

The only symmetric vibrations that leave end points fixed are vibrations that perfectly divide the string into portions of equal length. These are called harmonics of different modes. And amazingly, we can prove mathematically that any asymmetric vibration of the long string is the sum of the vibrations of the harmonics. This decomposition is one of the most remarkable facts of mathematics! It is called the Fourier decomposition, and it turned out to be an essential component of many theories, from the ones that explain how our ears can hear and enjoy music, to how matter is distributed throughout our universe, and to the fundamental physics of elementary particles!

Now, as you can imagine, unless you pinch the string at its extremity, the higher harmonics will not be loud enough to be heard. In fact, we'll mainly hear the harmonics of modes 1 and 2.

I get it! Mode 2 is precisely the higher octave?

Exactly! And that's the reason why two $C's$ sound so harmonious: The low-pitch $C$ also "plays" the higher-pitch one. It sort of contains it, and thus, when we move from one to another, our ears detect some harmonious continuum. In fact, in our ears, the different notes are heard by tiny hairs. Each hair is sensitive to a specific pitch. By moving from one $C$ to another, some of the hairs will carry on their motion, and this is what explains the harmony that our brain then feels.

## Let's Create the Notes

Evidently, if music was only made of octaves, it'd be a bit boring. It'd be pleasing, but a bit monotonous. To go further we need to create new notes. And we'll do that solely based on requirements of harmony.

What do you mean by "creating new notes"?

Let's consider one string of reference, and let's say it plays the fundamental note called $0$. We've seen that by halving the length of the string, we could define other notes in perfect harmony with the fundamental $0$. And as we've said, these other notes that differ from $0$ by octaves are called $0's$ as well. More formally, taking higher and lower octaves defines all the notes $0's$ of music. But music isn't made only of $0's$. We need more notes. How can we create them through a coherent mathematical process?

What about dividing the string in thirds?

Bingo! As we've seen earlier, when $0$ is played, we hear all its harmonics. Moreover, the harmonics that we will hear the most will be those of smaller modes. Now, as we've seen, modes 1 and 2 are $0's$. Next is mode 3. This mode 3 harmonic is not any octave higher, so it is not a $0$. It has to be a new kind of note. Let's call it the note $1$. Crucially, the note $1$ is contained within $0$. When $0$ is played, we hear $1$. And that's why the two notes are harmonious. Musicians say that $1$ is the perfect fifth of $0$.
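For readers who like to experiment, here is a small numerical sketch of the Fourier decomposition just described (my own illustration, not the article's). For a string of unit length plucked into a triangle shape at position $0 < p < 1$, the standard Fourier computation gives a mode-$n$ amplitude proportional to $\sin(n\pi p)/n^2$, so plucking at the middle excites only the odd modes, and the low modes dominate everywhere:

```python
import math

def pluck_amplitudes(p, modes=6):
    """Relative Fourier-sine amplitudes of a unit string plucked at position p."""
    return [round(math.sin(n * math.pi * p) / n**2, 4) for n in range(1, modes + 1)]

print(pluck_amplitudes(0.5))   # even modes vanish: the octave (mode 2) is silent
print(pluck_amplitudes(0.2))   # plucked off-centre: modes 1 and 2 dominate
```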
You’ve been saying that notes separated by octaves or perfect fifths sound harmonious… But why should I believe you? Let’s listen to this video. The author of the video first plays a series of octaves, and then a series of perfect fifths. Okay, that sounded good indeed. But why are perfect fifths called perfect fifths? I’ll get to that later (I personally would rather call it a perfect third…). Now that we have one new note $1$, we can also create all notes $1’s$! How? They’re octaves higher and lower of the $1$ we’ve created! Exactly! Cool! But then, can’t we create perfect fifths of $1’s$? It’s an awesome observation you’ve just made. In fact, that’s how we can create all the notes of music: We just make up the perfect fifth of the last note we’ve created, and its octaves as well! Thereby, we construct $2$ as the perfect fifth of $1$, $3$ as the perfect fifth of $2$, and so on. As you’ve noticed, I’ve stopped drawing the strings the notes correspond to. That’s because the strings of high-pitch notes are much much smaller than those of low-pitch ones. And the reason for that is that all these multiplicative processes necessarily range on huge scales. Such scales cannot be represented all at once on a figure. That’s why, instead, we use so-called logarithmic scales that allow to contract scales by replacing multiplications by additions. That’s why all $C’s$ are equally spaced on a piano keyboard. What separates them is described in terms of additive variations, rather than multiplicative ones. Doesn’t this creation process stop? In principle, it shouldn’t stop. We should be creating infinitely many notes! Are you crazy? How can we compose music if there are infinitely many notes? That’s a good point. ## The Impossibility Theorem of Music We should have a finite number of notes. But that can only happen if some note $n$ we’ve created is the “same” as the note $0$. More precisely, $n$ must be is some octaves higher than $0$. Indeed, if $n$ is the same as $0$, then any $n+k$ is the same as $k$. Hence, all the notes are actually between $0$ and $n-1$, and thus there are only $n$ notes. I’ll let you prove the reciprocal as an exercise, namely, if there are a finite number notes, then there must be some note $n$ that is the same as note $0$. Unfortunately, it’s not too hard to prove mathematically that this can’t be. This is what could be called the impossibility theorem of music: Perfect harmony in music requires an infinite number of musical notes. Why is that? Any note we’ve created is obtained by repeatedly taking thirds. In fact, the note $n$ corresponds to a string of length $1/3^n$ of the original string. Moreover, its $k^{th}$ lower octave is obtained doubling $k$ times the length of the string. Thus, the $k^{th}$ lower octave of note $n$ corresponds to a string length that is $2^k/3^n$ of the original string. I see! So, for this note to be the same as the original note, we must have $2^k/3^n = 1$, right? Exactly! In other words, we must have $2^k = 3^n$. But unfortunately, this can’t be. This equation has no solution. How do you know? The left term $2^k$ is even but $3^n$ is odd. An even number can’t be odd… Good point… So, what? Are we screwed? Before telling you what we can do, let me state the impossibility theorem of music. More precisely, any note is physically defined by a frequency of vibration. Suppose we have one frequency of reference. Then other notes can be defined by how much greater (in ratio) they are. 
In other words, we can regard the set of notes as the set $\mathbb R_+^*$ of strictly positive real numbers. We then have the "separated-by-octaves" equivalence relation between string lengths, defined by $x \sim y$ if there exists $k \in \mathbb Z$ such that $x = 2^k y$. Let's call $\mathsf{Notes}$ the quotient space $\mathbb R_+^* / \sim$. Then, we have a function $\mathsf{PerfectFifth} : \mathsf{Notes} \rightarrow \mathsf{Notes}$, defined by $\mathsf{PerfectFifth}(x) = x/3$. Then, any subset $\mathsf{MusicalNotes} \subset \mathsf{Notes}$ stable under $\mathsf{PerfectFifth}$ (i.e. such that $\mathsf{PerfectFifth} (\mathsf{MusicalNotes}) \subset \mathsf{MusicalNotes}$) is infinite.

## The Fundamental Theorem of Music

Here's some positive breaking news, though. Despite the impossibility theorem, musicians still manage to play pretty good music.

So how did they find harmony in musical notes?

Do you know how many notes there are in an octave?

Well, there's $A$, $B$, $C$, $D$, $E$, $F$ and $G$. So, seven!

Are you sure? Aren't you forgetting some?

Oh yeah! There's also $A\#$, $C\#$, $D\#$, $F\#$ and $G\#$. That's a total of twelve notes!

It is, isn't it? This means that something has happened when the note $12$ was created. This note $12$ corresponds to dividing the string length by $3^{12}$. This is $3^{12} = 531441$. Yet, we also have $2^{19} = 524288$. The ratio of these equals $2^{19}/3^{12} \approx 0.987$.

That's almost 1!

I see! So note $12$ is almost a $0$!

Exactly. In fact, I'd call the approximate equality $2^{19} \approx 3^{12}$ the fundamental theorem of music. Stated differently, it says that the note $12$ is nearly the same as $0$. And all of music has been built upon this fundamental approximation, whose defect is known as the Pythagorean comma.

We could have done better! Indeed, we could have used the approximation $2^{84}/3^{53} \approx 0.998$ to define 53 notes. It's even possible to do better, and the reason for that lies in the irrationality of $\ln(3)/\ln(2)$. More precisely, using the logarithm, the set $\mathsf{Notes}$ with multiplication can be shown to be isomorphic to the Lie group of the circle $\mathbb R / \mathbb Z$, and the perfect-fifth operator then corresponds to the translation by $\ln(3)/\ln(2)$ in the circle group. Ergodic theory then tells us that, because $\ln(3)/\ln(2)$ is irrational, this translation is ergodic and every orbit is dense in the circle. In particular, by repeating the perfect-fifth operator, we can get arbitrarily close to the original point (this can also be proved using Poincaré's recurrence theorem). But somehow, musicians seem to consider that 53 notes would already be slightly too many and that $0.987$ is already close enough to $1$… They're even worse than physicists!

Now that we have 12 notes, we should order them not only according to harmony, as we have done, but according to how they fit in a single octave. I spare you the computations, but we can see that the note of slightly higher pitch than $0$ is $7$, then $2$, $9$, $4$, $11$, $6$, $1$, $8$, $3$, $10$, $5$. And then we're back at $0$. Because of the Pythagorean comma, the note following $5$ should be $12$, which is slightly under the higher $0$. To avoid the abrupt, slightly distinguishable gap between $12$ and $0$, piano tuners usually spread the Pythagorean comma within every perfect fifth. More precisely, the ratio between successive notes, which theoretically should be $2^8/3^5$, is defined as $\sqrt[12]{2}$. Fortunately, this slight difference between $2^8/3^5$ and $\sqrt[12]{2}$ is not audible to the untrained ear!
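These approximations are easy to rediscover numerically. Here is a tiny sketch (my own addition, not from the article) that scans how close $n$ perfect fifths come to a whole number of octaves, printing each record-setting $n$:

```python
import math

log2_3 = math.log2(3)   # one "perfect fifth" step, measured in octaves
best = 1.0
for n in range(1, 60):
    gap = abs(n * log2_3 - round(n * log2_3))   # distance to a pure octave
    if gap < best:
        best = gap
        print(n, round(gap, 5))
# Prints n = 1, 2, 5, 12, 41, 53, ...: both the 12-note system and the
# 53-note refinement mentioned above show up as records.
```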
Finally, the fact that the ratio between any two successive notes is the same means that there is a sort of homogeneity in the notes. We can shift all songs by a ratio $\sqrt[12]{2}$, and all (multiplicative) distances between notes, which are all that untrained ears hear, would be the same.

That sequence $0$, $7$, $2$, $9$, $4$, $11$, $6$, $1$, $8$, $3$, $10$, $5$… It looks like a random list of numbers!

Look carefully. It's not a random list at all. There's a neat underlying structure. From a note to the next one, either you add 7 or you subtract 5. More precisely, you always add 7, but you subtract 12 when the number you obtain exceeds 12: The sequence is an arithmetic progression with common difference 7, in the set of numbers modulo 12. In other words, it's the sequence $7k \bmod 12$. Since 7 and 12 are coprime, this sequence runs through all 12 notes. Now, this gives us two different ways of ordering the notes of music. We can either order them by pitch proximity in an octave, or we can order them by harmony of perfect fifths:

In fact, musical notes cannot be ordered. Indeed, you cannot say that $C\#$ is "higher" than $C$, because the low $C\#$ is certainly not higher than the high $C$. You can say that it is the "next" note. But as we move on to "next" notes, like $D$, $D\#$, and so on, we eventually come back to $C\#$. So it's not much of an order relation. Mathematically, we thus do not talk about order, but about topology instead. And because by "moving on" we eventually come back to the original point, the natural topology of musical notes is the topology of a circle. But note that there are actually two topologies. One is in terms of "next higher pitch", and the other is in terms of "perfect fifth". Finally, I can explain why the perfect fifth is called that way. As you can see in the figure, the note $n+1$ always comes 5 notes after the note $n$. That's why we say that $n+1$ is the perfect fifth of $n$.

## The Second Theorem of Music

One thing that troubled me early on when I heard about music theories was the idea of chords. Some triplets of notes sound good, while others sound terrible. I always thought that this was because of our education: Good music always aligns good-sounding triplets, and I thought that repeatedly hearing them led us to like them. But I've since stumbled upon a more fundamental explanation which I find fascinating.

Wait… What's a nice chord?

The most famous of all chords is the major triad. An example of a major triad is the $C$ major chord. The $C$ major chord plays $C$, $E$ and $G$. This major chord is music's most celebrated chord. And with reason. It does sound extremely harmonious, as opposed to other random choices of three notes.

So, why is it harmonious?

Hehe… Remember how a vibrating string can be decomposed into modes?

Yes I remember. So, say mode 1 is $C$. Then, mode 2 is… the $C$ an octave higher!

Yes!

Mode 3 is… the perfect fifth of $C$, which is $G$, right?

Very good. What about mode 4?

Another $C$, two octaves higher!

Excellent! And now… what about mode 5?

I don't know… Wait, I see what you're getting at… It's $E$, isn't it?

Humm… Is it? Isn't it? Well, let's compute it! Recall that $E$ is what we constructed as the note $4$. So, the length of the string it corresponds to is $1/3^4$ of $C$. Let's now take it 4 octaves lower. It corresponds to a length of $2^4/3^4$ of that of $C$. Well, $2^4/3^4 \approx 0.198 \approx 0.2 = 1/5$. Do you know what that means?

I do! It means that $E$ is the mode 5 of the string!

Exactly!
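(A two-line numerical check of this computation — my own addition, not the article's:)

```python
print(2**4 / 3**4)          # 0.1975...: note 4, taken 4 octaves down, sits almost on mode 5
print(3**4 / 2**6, 5 / 4)   # E/C frequency ratio 81/64 = 1.265625, vs the pure third 1.25
```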
In other words, when we play $C$, we also hear its modes 2, 3, 4 and 5, which correspond to $C$, $G$, $C$ and $E$. By playing $C$, $G$ and $E$, we are amplifying the notes that we were already hearing! That's why the major chord is so harmonious!

Wow! This is brilliant!

I know! In fact, the intricate relationship between $C$ and $E$ has been given a name. $E$ is called the major third of $C$. This means that there are more intricate harmony relations between the musical notes, through this major-third relation. It's a tricky one. The minor chord consists of replacing the major third by another note called the minor third. The minor third of the note $0$ is the note $3$. But this minor third does not correspond to any mode. It is a bit less harmonious. But disharmony, or dissonance, may lie at the core of the compositions of the greatest musicians. At least, this is what Natalya St. Clair argues was the case for Beethoven, in the following awesome TedEd video:

## Let's Conclude

There's a cliché out there according to which the mathematical study of artistic phenomena leads to some deterioration of artistic beauty. I strongly feel that this is completely false. In fact, my personal experience has repeatedly been exactly the opposite. It was rather when I learned about the underlying science or mathematics of arts that I started to appreciate artistic masterpieces the way they deserve to be appreciated. This has been totally the case with my appreciation of music. Maths only adds to the musical experience. And, for me, it adds a lot!

Really?

I'm telling you! But if you don't trust me, please listen to the scientist who probably conveyed this idea the best, the great Richard Feynman:

Similarly, I have time and time again been terribly excited by artistic masterpieces whose mathematical structures I could understand. This was especially the case when I stumbled upon a piece of work by François Morellet, when I discovered the fractal nature of Jackson Pollock's paintings or Escher's brilliant use of hyperbolic geometry. Seriously, learn the maths, and you will see the whole world through much more exciting perspectives!

## One comment to "The Harmonious Mathematics of Music"

1. Inteano says:

Well. I'm totally clueless when it comes to music, and if anything, you just proved that to me… mathematically. I was always apprehensive of the structure of music, since I couldn't for the life of me understand whether the notes or scales were arbitrarily selected or whether they corresponded to some fixed underlying truths or givens. Though I believe I have a "feeling" for music and can appreciate many types of music just by listening to it, almost always recognizing a "good" piece from the first few notes, I always had the impression that "perfectly" harmonious music was extremely rare in any genre, and so I was wondering why that could be. I felt that in every piece of music there was something amidst the harmonies that was disharmonic, so much so that it created within me a sort of aesthetic disappointment. It was in search of a possible answer to this that I stumbled upon your website. You have done a very good job of simplifying as much as is probably possible, yet it still all sounds like gibberish to me. Maybe after reading your article I'm now at the point where a (musically) primitive person has just realized that the earth isn't flat and so there's no such thing as a perfectly flat stretch of land.
I intuitively suspect that the only thing that could lure me out of my musical cave would be to listen to a piece of music that is considered by musicians to be as close as possible to a "perfectly harmonious" piece of music, with no dissonant musical acrobatics, so that I could start the structural comparisons from there. Thanks for the effort, anyway! It's not your fault some of us are musically or mathematically retarded….
# Note on the Kung-Traub Conjecture for Traub-type Two-point Iterative Methods for Quadratic Equations

Kalyanasundaram Madhu

Abstract: In this work, we develop two-point iterative methods with three function evaluations that reach more than fourth-order convergence. Furthermore, we show that with the same number of function evaluations we can develop higher-order two-point methods of order $r+3$, where $r$ is a positive integer, if we know the asymptotic error constant of the previous method. The Kung-Traub conjecture states that the maximum order reached by a method with 3 function evaluations is four, even for quadratic functions. Hence, the proposed methods show that the Kung-Traub conjecture fails theoretically for quadratic functions.

Keywords: Quadratic equation, two-point iterative methods, Kung-Traub's conjecture, Efficiency Index.
# Question

In the previous problem, suppose the scale of the project can be doubled in one year, in the sense that twice as many units can be produced and sold. Naturally, expansion would be desirable only if the project were a success. This implies that if the project is a success, projected sales after expansion will be 26,000. Again assuming that success and failure are equally likely, what is the NPV of the project? Note that abandonment is still an option if the project is a failure. What is the value of the option to expand?
# DIRECT SEARCH ALGORITHMS OVER RIEMANNIAN MANIFOLDS

We generalize the Nelder-Mead simplex and LTMADS algorithms, and the frame-based methods for function minimization, to Riemannian manifolds. Examples are given for functions defined on the special orthogonal Lie group $\mathcal{SO}(n)$ and the Grassmann manifold $\mathcal{G}(n,k)$. Our main examples apply the generalized LTMADS algorithm to equality-constrained optimization problems and to the Whitney embedding problem for dimensionality reduction of data. A convergence analysis of the frame-based method is also given.
# What is the polar equation of a horizontal line?

The polar equation of $y=k$ is $r=k\csc\theta$. This can be obtained by using $y=r\sin\theta$, one of the conversion equations that relate the Cartesian and polar systems. By substitution, we have $r\sin\theta=k$, and consequently $r=k/\sin\theta$, which is equivalent to $r=k\csc\theta$.

For a specific example, you may want to see the YouTube video, offered by Mathispower4u, at
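For those who like to double-check symbolically, here is a quick sketch using SymPy (my own addition, not part of the original answer):

```python
import sympy as sp

r, theta, k = sp.symbols('r theta k', positive=True)
# Substitute y = r*sin(theta) into y = k and solve for r:
print(sp.solve(sp.Eq(r * sp.sin(theta), k), r))   # [k/sin(theta)], i.e. r = k*csc(theta)
```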
# Math Help - Real Analysis - Sets and Proofs

1. ## Real Analysis - Sets and Proofs

I have 2 problems that are annoying the heck out of me...

#1. If A is contained in B, prove that (C\B) is contained in (C\A). Either prove the converse is true or give a counterexample.

Don't even know where to start. It is only the first week of the course, so I just need pretty basic help.

#2. Under what conditions does A\(A\B)=B?

Again, how do I start this?

2. I point you towards a website I've been contributing to for a few weeks now... here's the page containing some proofs that might help: Subset Equivalences - ProofWiki.org. Explore and enjoy.

3. Oi, miss Green!

Originally Posted by kathrynmath
I have 2 problems that are annoying the heck out of me... #1. If A is contained in B, prove that (C\B) is contained in (C\A). Either prove the converse is true or give a counterexample. Don't even know where to start. It is only the first week of the course, so I just need pretty basic help.

To prove a set P is contained in another one, Q, prove: if x is in P, then x is in Q. If you want to know whether it's true or false, draw a diagram. Just to give you the correct direction!

Note, $A \backslash B=A \cap \overline{B}$

4. Originally Posted by Moo
Oi, miss Green!
To prove a set P is contained in another one, Q, prove: if x is in P, then x is in Q.

Does this involve assuming P=Q?

5. Originally Posted by Moo
Oi, miss Green!
To prove a set P is contained in another one, Q, prove: if x is in P, then x is in Q. If you want to know whether it's true or false, draw a diagram. Just to give you the correct direction!
Note, $A \backslash B=A \cap \overline{B}$

So, I figured out #1 now. For #2, is it something with A equaling B?

6. Not quite.

$A - (A - B) = A \cap B$ (sorry, I use $-$ for set difference)

The above may need some thought, it's not trivial. But that means we need $A \cap B = B$, which is one of the classic subset equivalences: $A \cap B = B \iff B \subseteq A$

That's the direction you want to go in, but proving them needs thought.

7. Hmmm, this question is still frustrating me.

8. Is B a null set? Because say A={1, 2, 3, 4, 5} and B={1, 2, 3}. We get: A\(A\B) = A\{4,5} = {1, 2, 3}. Therefore, we have B.

If B is a null set, and A is the same, we get: A\(A\B) = A\{1, 2, 3, 4, 5} = {}. Therefore, we have B.

Also, what about A=B? Say A = {1, 2, 3, 4, 5} and B={1, 2, 3, 4, 5}. Then A\(A\B) = A\{} = {1, 2, 3, 4, 5}. Therefore, we get B.

So, these cases appear to work. These are my thoughts so far.

9. Sorry, I thought I just told you: B has to be a subset of A. Note that if $A=B$ then it is also true that $B \subseteq A$, so yes, the fact that $A=B$ does make your equation work, it's just not the full answer.

10. So, B has to be contained in A? For example A={1,2,3,4,5}, B={1,2,3}:
A\(A\B) = A\{4,5} = {1,2,3,4,5}\{4,5} = {1,2,3} = B.
That works....

11. Did you actually check out the link I gave you a few postings back in the thread?

12. Originally Posted by kathrynmath
I have 2 problems that are annoying the heck out of me... #1. If A is contained in B, prove that (C\B) is contained in (C\A). Either prove the converse is true or give a counterexample. Don't even know where to start. It is only the first week of the course, so I just need pretty basic help. #2. Under what conditions does A\(A\B)=B? Again, how do I start this?

I just looked at #1. Clearly, we have the following identities:
1. if $A\subset B$, then $B^{c}\subset A^{c}$, where $~^{c}$ stands for the complement of the sets.
2. if $A\subset B$, then $A\cap C\subset B\cap C$
3.
$A\backslash B=A\cap B^{c}$

Hence, from the properties above, we get that $A\subset B$ implies $B^{c}\subset A^{c}$. Intersecting both sides with $C$, we get $C\cap B^{c}\subset C\cap A^{c}$, which is equivalent to $C\backslash B\subset C\backslash A$.

I hope you may easily find #2 by using 3. twice and thinking about the resulting situation.

Note. The easiest way to work on such problems may be using Venn diagrams.
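To see the condition for #2 concretely, here is a tiny check with Python's built-in sets (my own addition, not from the thread):

```python
A = {1, 2, 3, 4, 5}
for B in ({1, 2, 3}, {3, 4, 9}):
    lhs = A - (A - B)
    print(B, lhs, lhs == B, B <= A)
# A - (A - B) always equals A & B, so it equals B exactly when B is a subset of A.
```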
Continuous dependence of ODE solutions on parameters [closed]

Let $f:V\rightarrow \mathbb{R}^n$ be locally Lipschitz ($V$ is a subset of $\mathbb{R}\times\mathbb{R}^m\times \mathbb{R}^n$). Suppose we have a function $x:[t_0,\beta[\times W\rightarrow \mathbb{R}^n$ differentiable in the first argument ($W$ is an open subset of $\mathbb{R}$, $\beta$ is finite) such that for every $(t,\overrightarrow{\alpha})\in [t_0,\beta[\times W$ we have: $$(t,\overrightarrow{\alpha},x(t,\overrightarrow{\alpha}))\in V$$ $$x_1(t,\overrightarrow{\alpha})=f(t,\overrightarrow{\alpha},x(t,\overrightarrow{\alpha}))$$ Here $x_1(t,\overrightarrow{\alpha})$ denotes the partial derivative with respect to the first argument. It is also given that the function $g:W\rightarrow \mathbb{R}^n$ given by $g(\overrightarrow{\alpha})=x(t_0,\overrightarrow{\alpha})$ is locally Lipschitz.

Question: Does it follow that the function $x:[t_0,\beta[\times W\rightarrow\mathbb{R}^n$ is continuous?

I can only prove the conclusion if the hypotheses are strengthened to $f,g$ Lipschitz instead of merely locally Lipschitz. I would still like to know the answer in the locally Lipschitz case. Thank you a lot.

closed as off-topic by Loïc Teyssier, Mikhail Katz, Pedro Lauridsen Ribeiro, Pace Nielsen, coudy Feb 11 '18 at 20:47

This question appears to be off-topic. The users who voted to close gave this specific reason:

• "MathOverflow is for mathematicians to ask each other questions about their research. See Math.StackExchange to ask general questions in mathematics." – Loïc Teyssier, Mikhail Katz, Pedro Lauridsen Ribeiro, Pace Nielsen, coudy

If this question can be reworded to fit the rules in the help center, please edit the question.

• Why not use the "local" property to prove the existence of local solutions and then glue them together using uniqueness on the overlaps of a suitable cover. – user64472 Feb 7 '18 at 13:14

• Crossposted to MSE: math.stackexchange.com/questions/2458822/… – Dap Feb 9 '18 at 12:50
The predominantly horizontal transport of an atmospheric property by the wind, e.g. moisture or heat advection. Mathematically it is the operator (in $z$ and $p$ vertical coordinates, respectively)

$\mathbf{v} \cdot \nabla = u \frac{\partial}{\partial x} + v \frac{\partial}{\partial y} + w \frac{\partial}{\partial z} = u \frac{\partial}{\partial x} + v \frac{\partial}{\partial y} + \omega \frac{\partial}{\partial p}$.
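As a concrete illustration (my own sketch, not from the article), the advection of a scalar field by a horizontal wind can be discretised on a regular grid with finite differences; the grid spacing and fields below are made up:

```python
import numpy as np

dx = dy = 1000.0                        # grid spacing in metres (assumed)
T = np.random.rand(50, 50)              # a scalar field, e.g. temperature
u = np.full((50, 50), 10.0)             # eastward wind component, m/s
v = np.full((50, 50), -5.0)             # northward wind component, m/s

dTdx = np.gradient(T, dx, axis=1)       # dT/dx
dTdy = np.gradient(T, dy, axis=0)       # dT/dy
advection = u * dTdx + v * dTdy         # (v . grad)T, the horizontal part of the operator
```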
# $\int_3^5{\frac{x^2}{1+x^2}dx}$ by differentiation under the integral

I'm trying an easy problem to get my bearings using the method here. The integral is $$\int_3^5{\frac{x^2}{1+x^2}dx}$$ I would like to proceed, if possible, by defining: $$F(y) = \int_3^5{\frac{\sin{(y\cdot x})}{1+x^2}dx}$$ Then obtaining $$-F''(y) = \int_3^5{\frac{x^2\sin{(y\cdot x})}{1+x^2}dx}$$ Adding gives $$F(y) - F''(y)= \int_3^5{\frac{(1+x^2)\sin{(y\cdot x})}{1+x^2}dx}$$ or $$F(y) - F''(y) - \frac{\cos{3y}-\cos{5y}}{y} = 0.$$ This is where I'm stuck. I don't know how to solve the differential equation. Any help would be greatly appreciated. I'm assuming that this can be done.

If you use cos instead of sin, then $-F''(y)$ will be just what you're looking for when $y=0$. Then, for $(\sin 3y - \sin 5y)/y$, you'd find the limit as $y\to0$. – Michael Hardy Sep 6 '11 at 0:15

@Michael Hardy: F(y) would still be part of the result, though, wouldn't it? I'm trying not to integrate the denominator at all if possible. My goal is to avoid integrating the fractions. – Matt Groff Sep 6 '11 at 0:36

@Michael Hardy: $F(0)$ must be involved as well. – anon Sep 6 '11 at 0:37

This is really roundabout, but doable. Using the original version with $\sin$, the result will have the form of $F$ in terms of an integral, but you need $F'(0)$ for your final answer so it disappears. One way is to solve $u-u''=\cos(kt)/t$ by introducing $v=u'$ and then writing the second-order DE as a first-order system ($\vec{q}=(u,v)^T$) $$q'=\begin{pmatrix}0&1\\1&0\end{pmatrix}q-{0\choose\cos(kt)/t}$$ which can be solved using an integrating factor with the matrix exponential. – anon Sep 6 '11 at 0:43

As anon mentions, I should say that calculating limits should be easy for what I anticipate - hence I can get F(0), F''(0), and so on... Thanks to anon, by the way. – Matt Groff Sep 6 '11 at 0:45

$y''-y+ \frac{\cos3x-\cos5x}{x}=0$. This is a nonhomogeneous second-order ordinary differential equation of the form $y''+p(x)y'+q(x)y-g(x)=0$, which can be solved if the general solution to the homogeneous version is known, in which case variation of parameters can be used to find the particular solution: $$y_p=-y_1(x)\int\frac{y_2(x)g(x)}{W(x)}\, dx + y_2(x)\int\frac{y_1(x)g(x)}{W(x)}\, dx$$ where $y_p$ is the particular solution, $y_1(x)$ and $y_2(x)$ are the homogeneous solutions of the equation $y''+p(x)y'+q(x)y=0$, and $W(x)$ is the Wronskian of these two functions.
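Incidentally, the target value can be sanity-checked directly (my own addition, not part of the thread): since the integrand is $1-\frac{1}{1+x^2}$, the integral equals $2+\arctan 3-\arctan 5$.

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(x**2 / (1 + x**2), (x, 3, 5)))   # -atan(5) + atan(3) + 2
print(float(2 + sp.atan(3) - sp.atan(5)))           # ~1.8757
```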
# fix filter/corotate command

## Syntax

fix ID group-ID filter/corotate keyword value ...

• ID, group-ID are documented in fix command
• one or more constraint/value pairs are appended
• constraint = b or a or t or m
b values = one or more bond types
a values = one or more angle types
t values = one or more atom types
m value = one or more mass values

## Examples

timestep 8
run_style respa 3 2 8 bond 1 pair 2 kspace 3
fix cor all filter/corotate m 1.0
fix cor all filter/corotate b 4 19 a 3 5 2

## Description

This fix implements a corotational filter for a mollified impulse method. In biomolecular simulations, it allows the use of larger timesteps for long-range electrostatic interactions. For details, see (Fath).

When using run_style respa for a biomolecular simulation with high-frequency covalent bonds, the outer timestep is restricted to below ~4 fs due to resonance problems. This fix filters the outer stage of the respa hierarchy and thus a larger (outer) timestep can be used. Since in large biomolecular simulations the computation of the long-range electrostatic contributions poses a major bottleneck, this can significantly accelerate the simulation.

The filter computes a cluster decomposition of the molecular structure following the criteria indicated by the options a, b, t and m. This process is similar to the approach in fix shake; however, the clusters are not kept constrained. Instead, the position is slightly modified only for the computation of long-range forces. A good cluster decomposition builds clusters that contain the fastest covalent bonds within them. If the clusters are chosen suitably, the run_style respa is stable for outer timesteps of at least 8 fs.

Restart, fix_modify, output, run start/stop, minimize info:

No information about these fixes is written to binary restart files. None of the fix_modify options are relevant to these fixes. No global or per-atom quantities are stored by these fixes for access by various output commands. No parameter of these fixes can be used with the start/stop keywords of the run command. These fixes are not invoked during energy minimization.

## Restrictions

This fix is part of the USER-MISC package. It is only enabled if LAMMPS was built with that package. See the Build package doc page for more info.

Currently, it does not support molecule templates.
# Prime Land

Description

Everybody in the Prime Land is using a prime base number system. In this system, each positive integer x is represented as follows: Let {pi}, i=0,1,2,..., denote the increasing sequence of all prime numbers. We know that x > 1 can be represented in only one way in the form of a product of powers of prime factors. This implies that there is an integer kx and uniquely determined integers ekx, ekx-1, ..., e1, e0 (ekx > 0), such that

x = pkx^ekx · pkx-1^ekx-1 · ... · p1^e1 · p0^e0

The sequence (ekx, ekx-1, ..., e1, e0) is considered to be the representation of x in the prime base number system.

It is really true that all numerical calculations in the prime base number system can seem to us a little bit unusual, or even hard. In fact, the children in the Prime Land learn to add and to subtract numbers for several years. On the other hand, multiplication and division are very simple. Recently, somebody has returned from a holiday in the Computer Land, where small smart things called computers have been used. It has turned out that they could be used to make addition and subtraction in the prime base number system much easier. It has been decided to make an experiment and let a computer do the operation "minus one". Help the people in the Prime Land and write a corresponding program.

For practical reasons we will write here the prime base representation as a sequence of such pi and ei from the prime base representation above for which ei > 0. We will keep decreasing order with regard to pi.

Input

The input consists of lines (at least one) each of which, except the last, contains the prime base representation of just one positive integer greater than 2 and less than or equal to 32767. All numbers in the line are separated by one space. The last line contains the number 0.

Output

The output contains one line for each but the last line of the input. If x is a positive integer contained in a line of the input, the line in the output will contain x - 1 in prime base representation. All numbers in the line are separated by one space. There is no line in the output corresponding to the last "null" line of the input.

Sample Input
17 1
5 1 2 1
509 1 59 1
0

Sample Output
2 4
3 2
13 1 11 1 7 1 5 1 3 1 2 1

To commemorate my first accepted problem:

#include <cstdio>

typedef long long ll;
const int N = 1000;
int prime[N], p[N], tot = 0;   // prime[i] = 1 if i is prime; p[] collects the primes

struct num { int d, z; };      // d = prime base, z = exponent
num n[N], r[N];

// Linear sieve: mark the primes below N and store them in p[].
void init() {
    for (int i = 2; i < N; i++) prime[i] = 1;
    for (int i = 2; i < N; i++) {
        if (prime[i]) p[tot++] = i;
        for (int j = 0; j < tot && i * p[j] < N; j++) {
            prime[i * p[j]] = 0;
            if (i % p[j] == 0) break;
        }
    }
}

// Factor s and print its prime base representation in decreasing order of primes.
void solve(ll s) {
    int index = 0;
    for (int i = 0; i < tot; i++) {
        if (s % p[i] == 0) {
            int sum = 0;
            while (s % p[i] == 0) { sum++; s /= p[i]; }
            r[index].d = p[i];
            r[index++].z = sum;
        }
    }
    if (s > 1) {               // any leftover factor beyond the sieve bound is prime
        r[index].d = (int)s;
        r[index++].z = 1;
    }
    for (int i = index - 1; i > 0; i--) printf("%d %d ", r[i].d, r[i].z);
    printf("%d %d\n", r[0].d, r[0].z);
}

int main() {
    int x, y, k;
    init();
    while (scanf("%d", &x) && x) {     // a lone 0 terminates the input
        k = 0;
        ll sum = 1;
        n[k].d = x;
        scanf("%d", &y);
        n[k++].z = y;
        char c;
        while (1) {                    // read (prime, exponent) pairs until end of line
            scanf("%c", &c);
            if (c == '\n') break;
            scanf("%d %d", &x, &y);
            n[k].d = x;
            n[k++].z = y;
        }
        for (int i = 0; i < k; i++)    // reconstruct the integer from its factorization
            for (int j = 1; j <= n[i].z; j++) sum *= n[i].d;
        solve(sum - 1);                // output x - 1 in prime base representation
    }
    return 0;
}
# Market Basket Analysis with Association Rule Learning

Last Updated on September 15, 2021

The promise of Data Mining was that algorithms would crunch data and find interesting patterns that you could exploit in your business. The exemplar of this promise is market basket analysis (Wikipedia calls it affinity analysis). Given a pile of transactional records, discover interesting purchasing patterns that could be exploited in the store, such as offers and product layout.

In this post you will work through a market basket analysis tutorial using association rule learning in Weka. If you follow along the step-by-step instructions, you will run a market basket analysis on point of sale data in under 5 minutes.

Kick-start your project with my new book Machine Learning Mastery With Weka, including step-by-step tutorials and clear screenshots for all examples.

Let's get started.

Photo by HealthGauge, some rights reserved.

## Association Rule Learning

I once did some consulting work for a start-up looking into customer behavior in a SaaS app. We were interested in patterns of behavior that indicated churn or conversion from free to paid accounts.

I spent weeks poring over the data, looking at correlations and plots. I came up with a bunch of rules that indicated outcomes and presented ideas for possible interventions to influence those outcomes. I came up with rules like: "User creates x widgets in y days and logs in n times, then they will convert". I ascribed numbers to the rules such as support (the number of records that match the rule out of all records) and lift (the % increase in predictive accuracy in using the rule to predict a conversion).

It was only after I delivered and presented the report that I realized what a colossal mistake I had made. I had performed Association Rule Learning by hand, when there are off-the-shelf algorithms that could have done the work for me.

I'm sharing this story so that it sticks in your mind. If you are sifting large datasets for interesting patterns, association rule learning is a suite of methods you should be using.

## 1. Start the Weka Explorer

In previous tutorials, we have looked at running a classifier, designing and running an experiment, algorithm tuning and ensemble methods. If you need help downloading and installing Weka, please refer to these previous posts.

Weka GUI Chooser

Start the Weka Explorer.

## 2. Load the Supermarket Dataset

Weka comes with a number of real datasets in the "data" directory of the Weka installation. This is very handy because you can explore and experiment on these well known problems and learn about the various methods in Weka at your disposal.

Load the Supermarket dataset (data/supermarket.arff). This is a dataset of point of sale information. The data is nominal and each instance represents a customer transaction at a supermarket, the products purchased and the departments involved. There is not much information about this dataset online, although you can see this comment ("question of using supermarket.arff for academic research") from the person who collected the data.

Supermarket dataset loaded in the Weka Explorer

The data contains 4,627 instances and 217 attributes. The data is denormalized. Each attribute is binary and either has a value ("t" for true) or no value ("?" for missing).
There is a nominal class attribute called "total" that indicates whether the transaction was less than \$100 (low) or greater than \$100 (high). We are not interested in creating a predictive model for total. Instead, we are interested in what items were purchased together. We are interested in finding useful patterns in this data that may or may not be related to the predicted attribute.

## 3. Discover Association Rules

Click the "Associate" tab in the Weka Explorer. The "Apriori" algorithm will already be selected. This is the most well known association rule learning method because it may have been the first (Agrawal and Srikant in 1994) and it is very efficient.

In principle the algorithm is quite simple. It builds up attribute-value (item) sets that maximize the number of instances that can be explained (coverage of the dataset). The search through item space is very similar to the problem faced with attribute selection and subset search.

Click the "Start" button to run Apriori on the dataset.

## 4. Analyze Results

The real work for association rule learning is in the interpretation of results.

Results for the Apriori Association Rule Learning in Weka

From looking at the "Associator output" window, you can see that the algorithm presented 10 rules learned from the supermarket dataset. The algorithm is configured to stop at 10 rules by default; you can click on the algorithm name and configure it to find and report more rules if you like by changing the "numRules" value.

The rules discovered were:

1. biscuits=t frozen foods=t fruit=t total=high 788 ==> bread and cake=t 723 conf:(0.92)
2. baking needs=t biscuits=t fruit=t total=high 760 ==> bread and cake=t 696 conf:(0.92)
3. baking needs=t frozen foods=t fruit=t total=high 770 ==> bread and cake=t 705 conf:(0.92)
4. biscuits=t fruit=t vegetables=t total=high 815 ==> bread and cake=t 746 conf:(0.92)
5. party snack foods=t fruit=t total=high 854 ==> bread and cake=t 779 conf:(0.91)
6. biscuits=t frozen foods=t vegetables=t total=high 797 ==> bread and cake=t 725 conf:(0.91)
7. baking needs=t biscuits=t vegetables=t total=high 772 ==> bread and cake=t 701 conf:(0.91)
8. biscuits=t fruit=t total=high 954 ==> bread and cake=t 866 conf:(0.91)
9. frozen foods=t fruit=t vegetables=t total=high 834 ==> bread and cake=t 757 conf:(0.91)
10. frozen foods=t fruit=t total=high 969 ==> bread and cake=t 877 conf:(0.91)

Very cool, right! You can see rules are presented in antecedent ==> consequent format. The number associated with the antecedent is its absolute coverage in the dataset (in this case a number out of a possible total of 4,627). The number next to the consequent is the absolute number of instances that match both the antecedent and the consequent. The number in brackets on the end is the confidence of the rule (the number of instances matching both the antecedent and the consequent divided by the number matching the antecedent). You can see that a cutoff of 91% was used in selecting rules, mentioned in the "Associator output" window and reflected in the fact that no rule has a confidence less than 0.91.

I don't want to go through all 10 rules, it would be too onerous. Here are a few observations:

• We can see that all presented rules have a consequent of "bread and cake".
• All presented rules indicate a high total transaction amount.
• "biscuits" and "frozen foods" appear in many of the presented rules.

You have to be very careful about interpreting association rules. They are associations (think correlations), not necessarily causally related.
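As an aside, you can run the same analysis outside the Explorer GUI. Below is a minimal sketch using Weka's Java API; the class name is a placeholder of mine, and it assumes weka.jar is on your classpath and that the program runs from your Weka installation directory so the dataset path resolves — adjust both for your setup:

    import weka.associations.Apriori;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class SupermarketApriori {
        public static void main(String[] args) throws Exception {
            // Load the point-of-sale transactions from the ARFF file
            // (path assumes you run from the Weka install directory).
            DataSource source = new DataSource("data/supermarket.arff");
            Instances data = source.getDataSet();

            // Configure Apriori to report more than the default 10 rules.
            Apriori apriori = new Apriori();
            apriori.setNumRules(20);

            // Mine the association rules and print them.
            apriori.buildAssociations(data);
            System.out.println(apriori);
        }
    }

Printing the trained Apriori object should produce essentially the same rule listing you see in the "Associator output" window.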
Another caution: short antecedents are likely to be more robust than long antecedents, which are more likely to be fragile.

Photo by goosmurf, some rights reserved.

If we are interested in total, for example, we might want to convince people that buy biscuits, frozen foods and fruit to buy bread and cake so that they result in a high total transaction amount (Rule #1). This may sound plausible, but is flawed reasoning. The product combination does not cause a high total, it is only associated with a high total. Those 723 transactions may have a vast assortment of random items in addition to those in the rule.

What might be interesting to test is to model the path through the store required to collect the associated items and see whether changes to that path (shorter, longer, displayed offers, etc.) have an effect on transaction size or basket size.

## Summary

In this post you discovered the power of automatically learning association rules from large datasets. You learned that it is a much more efficient approach to use an algorithm like Apriori rather than deducing rules by hand.

You performed your first market basket analysis in Weka and learned that the real work is in the analysis of results. You discovered the careful attention to detail required when interpreting rules and that association (correlation) is not the same as causation.

## Discover Machine Learning Without The Code!

#### Develop Your Own Models in Minutes

...with just a few clicks

Discover how in my new Ebook: Machine Learning Mastery With Weka

Covers self-study tutorials and end-to-end projects like:

### 51 Responses to Market Basket Analysis with Association Rule Learning

1. Deepak Babu March 18, 2014 at 5:50 pm # Nice article. I had written about using association rule mining using R in two parts, first part explaining concepts and the second part explaining implementation using R with visualizations. • jasonb March 22, 2014 at 9:27 am # Great post Deepak, thanks for sharing. • ia October 18, 2014 at 9:13 pm # thanks i have understand alot of concepts due to this example.But i m using weka 3.7.1 can u provide me an exmple with a small dataset in weka like weather.arff,.cpu.arff etc. i will be thankful 2. ia October 18, 2014 at 9:07 pm # Thanks alot sir for your cooperation But there is a problem that supermarket dataset have large size. I was try to perform the example but supermarket dataset was unavailable. Sir if you kindly provide example for other dataset like weather.arrf i will be thankful. 3. ibrar November 9, 2014 at 12:50 am # i m working on Arabic text, can your good self provide me the example in arabic dataset and then converted into CSV and ARFF format. I will be thankful 4. Faizan January 19, 2015 at 2:01 am # Dude, you really made my day. THANKS ALOT. I had another query, how to increase the size of heap? I have tried to edit the runWeka.bat file and runWeka.ini . I tried the xmx and maxheap=2014mb and everything else. But the problem is, I can’t save the edited file. • Jason Brownlee January 19, 2015 at 8:19 am # You can, but you may better off working with a representative sample of your source data. 5. Neeta March 26, 2015 at 7:52 pm # Is this weka tool is useful for find frequent and infrequent itemset and after that to find association rule? 6. Álvaro López López March 28, 2015 at 12:12 am # Hello Jason. Your article is great to introduce Association Rules with Weka’s Supermarket example.
I would like to point that it is mistaken at this point about explaining the rules meaning: “The number in brackets on the end is the support for the rule (number of antecedent divided by the number of matching consequents). ” The figure is the value for Confidence metric, which also implies how the rules are ordered in the output. Best regards, Álvaro 7. maryam May 8, 2015 at 7:41 pm # i want to use association rule mining to consider count of purchase for each user. For example user A listened item a 3 times. Thanks ‘‘Click (CDi) and Length of Reading Time (High))___Click (CDj)’’ placement (CDj)’’ 8. YADAF September 14, 2017 at 3:09 am # Hie. Your article is great to introduce Association Rules with Weka’s Supermarket example. Best rules found: 1. biscuits=t frozen foods=t fruit=t total=high 788 ==> bread and cake=t 723 lift:(1.27) lev:(0.03) [155] conv:(3.35) 2. baking needs=t biscuits=t fruit=t total=high 760 ==> bread and cake=t 696 lift:(1.27) lev:(0.03) [149] conv:(3.28) 3. baking needs=t frozen foods=t fruit=t total=high 770 ==> bread and cake=t 705 lift:(1.27) lev:(0.03) [150] conv:(3.27) 4. biscuits=t fruit=t vegetables=t total=high 815 ==> bread and cake=t 746 lift:(1.27) lev:(0.03) [159] conv:(3.26) 5. party snack foods=t fruit=t total=high 854 ==> bread and cake=t 779 lift:(1.27) lev:(0.04) [164] conv:(3.15) 6. biscuits=t frozen foods=t vegetables=t total=high 797 ==> bread and cake=t 725 lift:(1.26) lev:(0.03) [151] conv:(3.06) 7. baking needs=t biscuits=t vegetables=t total=high 772 ==> bread and cake=t 701 lift:(1.26) lev:(0.03) [145] conv:(3.01) 8. biscuits=t fruit=t total=high 954 ==> bread and cake=t 866 lift:(1.26) lev:(0.04) [179] conv:(3) 9. frozen foods=t fruit=t vegetables=t total=high 834 ==> bread and cake=t 757 lift:(1.26) lev:(0.03) [156] conv:(3) 10. frozen foods=t fruit=t total=high 969 ==> bread and cake=t 877 lift:(1.26) lev:(0.04) [179] conv:(2.92) 9. Essam Mosallam October 18, 2017 at 8:42 am # i want to use association rule mining to consider count of purchase for each user. For example user A listened item a 3 times. Thanks and i want to analyze the result for True only not false for items, Thanks. 10. Sid October 31, 2017 at 12:14 am # What If it has Date ? If we need to find the patterns on when purchased and association between them • Jason Brownlee October 31, 2017 at 5:34 am # Yes, often the date can provide a lot of useful information. 11. Otto November 23, 2017 at 4:41 pm # Hello Jason! Thank you for such a cool website! I do have a question. What if my dataset is in binary (0,1)? I have a grocery store dataset where each column is an item and the 0 or 1 indicates whether it was bought or not. The row is the receipt, or the list of items bought by an individual. I wanted to find out what items where bought together by using the weka association but the top ten rules generated are always 0 (No). I want to find out the rules that only have 1 because it shows me which items where bought. As it is now the No rules generated in results are telling me what people are not buying. How do I get around this? I hope I am making sense. • Otto November 23, 2017 at 4:42 pm # examples 1. Vanilla Eclair=No Vanilla Meringue=No Chocolate Croissant=No Almond Bear Claw=No 63188 ==> Almond Tart=No 60738 lift:(1) lev:(0) [206] conv:(1.08) 2. Vanilla Eclair=No Vanilla Meringue=No Chocolate Croissant=No Blueberry Danish=No 63064 ==> Almond Tart=No 60618 lift:(1) lev:(0) [205] conv:(1.08) 3. 
Chocolate Eclair=No Vanilla Eclair=No Vanilla Meringue=No Chocolate Croissant=No 63181 ==> Almond Tart=No 60730 lift:(1) lev:(0) [205] conv:(1.08) 4. Vanilla Eclair=No Apricot Tart=No Vanilla Meringue=No Almond Bear Claw=No 63254 ==> Almond Tart=No 60797 lift:(1) lev:(0) [202] conv:(1.08) 5. Vanilla Eclair=No Apricot Tart=No Vanilla Meringue=No Blueberry Danish=No 63118 ==> Almond Tart=No 60666 lift:(1) lev:(0) [201] conv:(1.08) 6. Chocolate Eclair=No Vanilla Eclair=No Apricot Tart=No Vanilla Meringue=No 63234 ==> Almond Tart=No 60777 lift:(1) lev:(0) [201] conv:(1.08) 7. Vanilla Eclair=No Apricot Tart=No Vanilla Meringue=No Chocolate Croissant=No 63180 ==> Almond Tart=No 60725 lift:(1) lev:(0) [201] conv:(1.08) • Jason Brownlee November 24, 2017 at 9:34 am # It is also possible that there simply are not interesting patterns find, keep that in mind. • Jason Brownlee November 24, 2017 at 9:34 am # It’s a good question. Perhaps the data is not rich enough? Perhaps you can collect more features to describe each outcome? 12. Tom February 22, 2018 at 9:44 pm # Hi jason..how can i use those rules if i want to create personalized push notification for my customers? I already have a dataset which includes the customer’s past purchases. What i am finding difficult is how to send each of them in one click the personalized advert. • Jason Brownlee February 23, 2018 at 11:57 am # It really depends on your application and data. 13. Jesús Martínez March 13, 2018 at 12:41 am # Very good. Thanks for sharing this analysis. 14. Vinod March 14, 2018 at 11:24 pm # Hi, is it mandatory to have the basket size greater than 1, i mean we need to consider only those invoices were we have more than 1 product purchased to get product association ? • Jason Brownlee March 15, 2018 at 6:31 am # Yes, unless you are looking across purchases for a customer. 15. Vinod March 15, 2018 at 4:44 pm # Thanks Jason, yes across purchases for a customer can through some good insights as well, will have a look at it. 16. Vinod April 4, 2018 at 10:55 pm # Hi, i am done with processing the market basket report and its now time to get some insights out of it and recommend. My question is having looked at Confidence and Lift parameter, which one is the better metric to look at ? secondly i can see some Antecedents have decent support and lift > 1 but the confidence matrix is low with around 15%. I know that recommendations are very subjective and majorly upto how business looks at each parameters. But need some directional views on how do we go about in recommending things back to client. • Jason Brownlee April 5, 2018 at 6:01 am # Perhaps find out the goals of the stakeholders and use that to motivate the interpretation. Or take different stances in the report and then interpret results. E.g. if x is important then the method found that …, otherwise if y is important, the method found that… I hope that helps. 17. steamwash April 15, 2019 at 3:12 pm # Nice Blog! Thanks for sharing this useful post. 18. Carlos Trujillo Almeida April 21, 2019 at 7:11 am # Hi Jason For the same example and without changing any parameters, what is the support of the second rule obtained, and how is it obtained? 19. Karimi August 29, 2019 at 7:44 pm # Hi Jason, how do i select the attributes using association rule to be input for a classifier like MLP? • Jason Brownlee August 30, 2019 at 6:17 am # I’m not sure it is an appropriate approach. 20. 
stellamaris August 29, 2019 at 11:00 pm # Thanks Jason, is it possible to select best attributes from the generated rules? 21. Karimi August 29, 2019 at 11:31 pm # Hi Jason, i used wbc dataset that has 9 attributes and class attributes to create association rule model the rules generated are having only 6 attributes and a class attribute. Are the 3 attributes not appearing irrelevant? 22. Esther Mead February 4, 2020 at 5:33 am # jason, thank you so much for all of your great tutorials! my question is, can the apriori algorithm be implemented in python for predicting housing prices (say, for the common boston housing prices dataset)? 23. Misha March 26, 2020 at 2:14 am # Very interesting article Jason! I think one thing to add is that there is a lot of algorithms that essentially get the same rules given a set of parameters and they differ in computational speed. So depending how wide your dataset is you may want to consider using ECLAT or F-P Growth. I have recently posted an article with some theory and Python implementation on my blog, so feel free to take a look and let me know what you think: https://pyshark.com/market-basket-analysis-using-association-rule-mining-in-python/ 24. Sundeep March 29, 2020 at 12:52 pm # Hi Jason Thank you for a great article. I’m working on a school assignment using weka with supermarket dataset. I’m confused about one question and hope you can help ASAP. Question: Let’s assume that 80% of values are missing in this dataset. Then what is the number of items which average customer’s basket contains (hint: this means that 20% of attributes have the value “t” in an instance on average)? Thanks 25. Ash May 17, 2020 at 3:27 pm # HI , This is great. Thank you for this. Can you please tell me how to find the highest rank association rule in this? • Jason Brownlee May 18, 2020 at 6:08 am # Sorry, I don’t have more tutorials on association rules. Perhaps in the future. 26. Mark November 4, 2020 at 9:09 pm # “The number in brackets on the end is the support for the rule” “Support” are the numbers close to ‘promise’ part and ‘consequent’ part of the rule. Next article about apriori where author does not know how to explain results. 27. Ammar Azlan August 24, 2021 at 11:56 am # Hi Jason, I love your article. I just want to point out some typo. I hope it helps. “I’m sharing this story so that it sticks in your mind. If you are sifting large datasets for interesting patterns, association rule learning is a suite of methods *should* should be using.” • Adrian Tam August 24, 2021 at 11:58 am # Good catch! Thank you. 28. Deep Learner September 15, 2021 at 1:29 am # Hi Jason, Thanks for the post always excellent. We love your work! Just a note, I found the link you have provided that is meant to give some info about the data is dead, however, I looked and found another discussion where there are some details about the data set right here:
# I. Introduction

In this study, we test whether demographic characteristics, socio-economic factors and public policy parameters are significant determinants of new COVID-19 cases. Our empirical framework follows a multivariate negative binomial regression model, covering the 10 countries having the highest number of confirmed cases of the virus per million people. Several studies have shown that pre-existing health conditions and non-communicable diseases (NCDs) increase both the incidence of the virus and the related mortality rate (Banik et al., 2020; Chan et al., 2020; Mathur & Rangamani, 2020). Singh and Misra (2020) show, in a meta-analysis of the pooled studies of China, the USA, and Italy, that the severity of COVID-19 is associated with other non-communicable diseases. Although COVID-19 affects all age groups, people above the age of 65 are more vulnerable to the infection than the younger population. However, obesity and smoking habits among the younger population make them equally vulnerable to COVID-19 (WHO, 2020a). Regions with increased smoking habits are found to have increased COVID-19 cases even among the younger population (Yu, 2020). Apart from the NCDs and age-related factors, government pharmaceutical and non-pharmaceutical policies also affect the incidence of COVID-19 (Hale et al., 2020; OECD, 2020). Allel et al. (2020) find that a delay in government lockdown responses significantly affected the incidence rate ratios (IRR) of COVID-19 cases. The health care capacity of a country is another important determinant of pandemic preparedness (Chaudhry et al., 2020; Khan et al., 2020; Kraef et al., 2020; Mbunge, 2020). Health inequalities due to inadequate health capacity directly affect vulnerable people (Bambra et al., 2020). Inspired by the above findings, this study helps to understand the country-specific factors and government responses in the countries with the highest number of confirmed cases. The study period is divided into 2 parts: the first part covers March to May 2020 (complete/partial lockdown in most countries) and the second part covers June to September 2020 (lockdown relaxed in most countries). The study is carried out for the top 10 countries which have the greatest number of active COVID-19 cases per million people, as of 30th September 2020. We examine the country-specific effects of social, demographic, and health-related risk factors, along with government measures to contain the spread of COVID-19, on the number of new COVID-19 cases, for the 2 different time periods distinguished by country lockdowns. Our research focuses on the determinants of the incidence of COVID-19 and is inspired by the variation in the incidence of the virus, both within and across countries. The COVID-19 virus had spread to most parts of the world by March 2020. The World Health Organisation (WHO) declared Europe the new epicentre of the virus on 13th March 2020, and more than 1 million people were affected worldwide by 4th April 2020 (WHO, COVID-19 Dashboard, World Health Organization, 2020b). Persons with pre-existing health conditions were deemed to be the most vulnerable and have had the highest mortality rate due to COVID-19. This study provides an understanding of the gaps in government policies in the countries with the highest virus cases, by dividing the pandemic period into pre- and post-lockdown phases.
Moreover, by emphasising the role of NCDs, age, population density, and the human development index, we consider the demographic and socio-economic factors along with the government policies. This is one of our contributions to the literature, particularly from a policy point of view, given that policy design is dictated by understanding the determinants of the virus over time. To date, there is no study that covers all these aspects in the manner we do. We find that demographic factors and government policies influence the incidence of new cases while socio-economic factors have a limited role. In the next section, data and methodology are discussed, followed by empirical results and analysis in section III and, finally, conclusion and policy implications in section IV.

# II. Methodology and Data

## A. Methodology

The association between the country-level factors and the number of new COVID-19 cases can be studied by fitting multivariate negative binomial regression models for both the pre- and post-lockdown sub-samples. This helps us understand the most relevant factors in explaining the variation in the number of new cases. A negative binomial regression is used when the dependent variable is a count variable with non-negative integers. Negative binomial regression is a generalised Poisson regression that relaxes the 'variance equal to the mean' assumption made by the Poisson model. In a negative binomial regression, the mean of the dependent variable is determined by the exposure time, t, and a set of regressor variables, and each regressor variable has the same length of observation time. The slope coefficients can be estimated with consistency provided that the conditional mean is specified correctly, and the standard errors obtained are robust to possible misspecification of the distribution. This is like linear regression, except that consistent estimates and robust inferences can be obtained even when the normality assumption does not hold (Cameron & Trivedi, 2013). Due to the large variation in the number of confirmed new COVID-19 cases per thousand people (CC), within countries and regions, the distribution of CC is over-dispersed (OECD, 2020). In this case, the Poisson distribution is inadequate to fit the model as it has only one parameter, $\mu$, and also requires the variance-mean equality assumption. Negative binomial regression is appropriate for over-dispersed count data, that is, when the conditional variance exceeds the conditional mean. The traditional negative binomial regression model is given by $$\ln{(\mu_{CC})}_{it} = \beta_{0} + \beta_{1}x_{1,it} + \beta_{2}x_{2,it} + \ldots + \beta_{p}x_{p,it} + \alpha_{i} + \varepsilon_{it}\hspace{8mm}(1)$$ where $\mu_{CC}$ is the conditional mean of the dependent variable $CC$; the predictor variables $x_{1,it},\ x_{2,it},\ \ldots,\ x_{p,it}$ are given; $\alpha_{i}$ is the individual-specific fixed effect term; $\varepsilon_{it}$ is the residual term; and the population regression coefficients, $\beta_{0},\ \beta_{1},\ \beta_{2},\ \ldots,\ \beta_{p}$, are to be estimated.
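A clarifying note added here (this states the common NB2 parameterisation, which the paper does not spell out explicitly): the negative binomial model accommodates over-dispersion through a dispersion parameter $\alpha > 0$ (distinct from the fixed effect $\alpha_{i}$ in Equation (1)), so that $$\operatorname{Var}(CC \mid x) = \mu_{CC} + \alpha\,\mu_{CC}^{2},$$ with the Poisson model recovered in the limit $\alpha \to 0$. The "Ln(alpha)" row reported in Table 1 below is presumably the log of this dispersion parameter.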
The regression coefficients of a negative binomial regression are interpreted as the difference between the logarithms of two consecutive expected counts, evaluated at $x_{0}$ and $x_{0}+1$, and written as: $$\beta = \log(\mu_{x_{0}+1}) - \log(\mu_{x_{0}})\hspace{30mm}(2)$$ where $\beta$ is the regression coefficient, and $\mu_{x_{0}+1}$ and $\mu_{x_{0}}$ are the expected counts of the dependent variable, evaluated at $x_{0}+1$ and $x_{0}$, respectively; $x_{0}+1$ denotes an incremental change in the predictor variable, $x$. Alternatively, Equation (2) can be written as: $$\log(\mu_{x_{0}+1}) - \log(\mu_{x_{0}}) = \log\left(\mu_{x_{0}+1}/\mu_{x_{0}}\right)\hspace{20mm}(3)$$ This representation of the parameter estimates in Equation (3) gives the IRR, which is interpreted as the rate of change in the dependent variable for an incremental change in the predictor variable; equivalently, the IRR for a one-unit change in $x$ is $e^{\beta}$. The impact of an incremental change in the explanatory variable on the rate of occurrence of the dependent variable can be estimated by calculating the IRR for that variable as given in Equation (3). The sign of the coefficient shows whether the dependent variable increases or decreases with incremental changes in the predictor variable and the IRR shows the magnitude of the rate of change. We report both the estimated coefficients and the IRR values in order to infer the effect of country-specific factors on the incidence of daily new COVID-19 cases.

## B. Data

In this study, the Government Stringency Index (GSI) and the number of tests done per 1000 people (TEST) proxy for government policy; the number of beds per 10,000 people (BEDS) and the number of physicians per 10,000 people (PHY) are used as proxies for health capacity; population density (PD), median age of the population (AGE) and the proportion of people with pre-existing health conditions (NCDs) are demographic factors; and GDP per capita (GDP) and the Human Development Index (HDI) represent socioeconomic factors. These are the explanatory variables in this study. The dependent variable is the number of confirmed new COVID-19 cases (CC). Daily data for the variables are compiled from the Johns Hopkins University database and the Oxford University Online Repositories for the period March 15, 2020 to September 30, 2020. The countries analysed are India, the USA, Brazil, Argentina, France, Colombia, Russia, Israel, the UK, and Peru.

# III. Empirical Findings

The results of the coefficient estimates and the IRR values for the negative binomial regression, for both time periods, are given in Table 1 below.
Table 1: Results of negative binomial regression for two different time periods

Panel A: Coefficient estimates

| Independent Variables | Model 1 (March 15 to May 31, 2020): Coefficients (SE) | Model 2 (June 1 to September 30, 2020): Coefficients (SE) |
|---|---|---|
| GSI | -0.024** (0.38) | -0.086 (0.35) |
| TEST | -0.012* (0.11) | -0.124* (0.11) |
| BEDS | -0.032* (0.07) | -0.134* (0.04) |
| PHY | -0.085* (0.05) | -0.116* (0.06) |
| NCD | 2.847* (0.07) | 2.126** (0.04) |
| PD | 0.125** (0.16) | 0.126** (0.05) |
| AGE | 3.132** (0.15) | 3.076** (0.11) |
| HDI | -0.005*** (0.78) | -0.045 (0.30) |
| GDP | -0.001*** (0.14) | -0.008 (0.14) |
| Constant | 8.764*** (0.99) | 42.654*** (4.59) |
| Pseudo R2 | 7.3% | 6.7% |
| Ln(alpha), SE | 0.742** (0.08) | 1.167*** (0.15) |

Panel B: IRR estimates

| Independent Variables | Model 1 (March 15 to May 31): IRR (SE) | Model 2 (June 1 to September 30): IRR (SE) |
|---|---|---|
| GSI | 0.673** (0.39) | 0.867 (0.36) |
| TEST | 0.184* (0.10) | 0.168* (0.11) |
| BEDS | 0.782* (0.08) | 0.917* (0.05) |
| PHY | 0.175* (0.05) | 0.178* (0.07) |
| NCD | 1.887* (0.08) | 1.632** (0.05) |
| PD | 1.145** (0.15) | 1.750** (0.05) |
| AGE | 7.896** (0.16) | 1.076** (0.10) |
| HDI | 0.009*** (0.79) | 0.054 (0.30) |
| GDP | 0.001*** (0.13) | 0.080 (0.15) |
| Constant | 9.876*** (0.98) | 41.654*** (4.60) |
| Pseudo R2 | 3.8% | 2.7% |
| Ln(alpha), SE | 0.756** (0.08) | 1.170*** (0.15) |

This table shows the coefficient estimates and the IRR values for the negative binomial regression. The sign of the coefficient shows the direction of change and the value of the IRR shows the magnitude of the rate of change in the dependent variable for incremental changes in the independent variables. The dependent variable is the number of confirmed new COVID-19 cases, CC. IRR is the incidence rate ratio. All values in brackets are the standard errors (SE). Finally, *p < 0.1; **p < .05; ***p < .01 (two-tailed tests) show significance at the 10%, 5% and 1% levels, respectively.

Table 1 shows that the IRR estimates are statistically significant for all the variables in Model 1. The results show that with a unit increase in GSI, TEST, BEDS and PHY, keeping other variables in the model constant, CC would be expected to decrease by a factor of 0.673 (GSI), 0.184 (TEST), 0.782 (BEDS) and 0.175 (PHY). Negative coefficient estimates for GSI, TEST, BEDS and PHY (Table 1) show that an increase in each of these variables leads to a fall in the rate of new COVID-19 cases; however, the magnitude of the decrease in the dependent variable, CC, for an incremental rise in the predictor variable is given by the IRR values. Similarly, for the demographic factors, an incremental increase in NCD, PD and AGE increases the dependent variable CC by factors of 1.887, 1.145 and 7.896, respectively. The IRR estimates show that the median age of the population (AGE) is the largest contributor to the incidence of COVID-19. The IRR estimates for the socio-economic indicators, GDP and HDI, are significant at the 10% level, but the values are small. In Model 2, when lockdown restrictions were partially or completely relaxed, GSI, HDI and GDP became insignificant. This may be due to increased workforce participation in this period, especially among those working in the informal sector. All other variables remain statistically significant.

# IV. Conclusion

This study aims to understand the determinants of COVID-19 confirmed cases using a range of government response and socio-economic variables. The study focusses on the 10 most affected countries. The findings of this study emphasise the role of demographic characteristics of the population as well as government stringency and testing policies as important factors in reducing the incidence of COVID-19.
Consistent with previous studies, we also find support for dynamic government lockdown policies, with periodic lockdowns effective in controlling new cases. The study suggests that the best course of action for countries with high COVID-19 cases will be to continue implementing periodic lockdowns, while increasing the amount of COVID-19 testing. The study also suggests that these countries should strengthen their healthcare capacity, proxied in this study by the number of physicians and the number of beds, in order to meet the requirements of the vulnerable sections of the population.
# Simple add to col X if col Y is that?

Hi, sorry if this has been asked a million times earlier, but I don't have a clue what kind of tags or keywords I should search for. I need a function which removes 1 from a certain row after a certain value is given. For example, C3 is the input. If the value 532 is given, then the function searches column 1 for a row which has the same value as the input. Then it removes 1 from the quantity, which is in column 2 of the same row. I'm sure I can avoid negative-number problems and such if I just get some directions on how to start with this one. Thanks, Krisu

Hello krisuvirt, to compute the new quantity you could write: =INDIRECT(CONCATENATE("B"; MATCH(C3;A1:A99999;0)); 1) - 1 — I think it needs a macro to enter this new value into the cell in Column B. ( 2017-05-09 20:26:42 +0200 )

Searches (MATCH(), VLOOKUP(), ...) have to get the value to search for and a range to search in. In the case of VLOOKUP() this range is extended (at least) as far as needed to also contain the column where the associated value to return is located. Read the help about the related functions. If you want to search 100 rows starting with row 2, the following formula should do: =VLOOKUP(C3;$A$2:$B$101;2;0)-1 If the value to return is not contained in the same row as the match, you can use a combination of MATCH() with INDEX() or OFFSET().

Ok, thanks folks. I'll try with these. - Krisu
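For completeness, an illustrative (untested) INDEX/MATCH variant along the lines the answer suggests — the cell ranges here are assumptions matching the 100-row example above — would be: =INDEX($B$2:$B$101; MATCH(C3; $A$2:$A$101; 0)) - 1. This returns the decremented quantity; as the comment notes, actually writing the result back into column B would still need a macro.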
Re-insert equation by label

I read over the response given here: Is it possible to re-insert a LaTeX equation by label? which says such a thing is possible only with a macro. I would like to do this. I have a bajillion equations, some labelled and some not. Of those that are labelled, some I would like to display again, say, at the end of the document as a quick summary. Some equations I want to show twice at different places identically. The problem is I can't decide on notation, I keep changing small things, and it is a bit of a pain to remember which equations I want to display again and whether I've updated them or not. Would there be an obvious/better way to do this?

I have to admit I haven't ever bothered with macros for LaTeX before today, so apologies if the following is horrible, but (to the best of my understanding) it would be something like

\newcommand{\acommand}[2]{%
  #1
  \expandafter\newcommand\csname #2\endcsname{#1}%
}

\acommand{someequation}{equationassociated} \\
\equationassociated

So then \acommand{someequation}{labelassociated} will display someequation and define the macro \labelassociated. But then, what if I want to display someequation along with the label entered as the second argument?

• Could you please provide a minimal working example (MWE) that shows your use case? – Werner Dec 31 '12 at 8:23
• Yes, what if? What is the question here? If you create a macro, e.g. \newcommand*{\reuseeq}[1]{\csname #1\endcsname}, and use it with the associated label, e.g. \reuseeq{labelassociated}, you'd get someequation again. Related: How do I show the equation formula again instead of its number of ref? – Qrrbrbirlbel Dec 31 '12 at 8:32
• For a lot of equations, especially if they must be reused, for me the best approach is external files, each with one raw equation, included in the master simply with \input{path\filename} or a macro that does this plus surrounds it with the desired environment, adds appropriate labels based on the filename and/or path, etc.
Does the notation “$\frac{1}{n}\left(\frac{2\cdot3\cdot\,\cdots\,\cdot n}{n\cdot n\cdot\,\cdots\,\cdot n}\right)$” make sense for $n<5$? In my textbook, in order to solve for the limit of the sequence $$\lim_{x \to \infty} \frac{n!}{n^n} \tag{1}$$ the book rewrites the sequence as $$a_n=\frac{1}{n}\left(\frac{2\cdot3\cdot\,\cdots\,\cdot n}{n\cdot n\cdot\,\cdots\,\cdot n}\right)\tag{2}$$ But how can these two expressions be equal? If you plug the numbers $$1$$ through $$4$$ into the second equation, it doesn't make sense. For example, for $$n=4$$, $$\left(\frac{2\cdot3\cdot\,\cdots\,\cdot 4}{4\cdot 4\cdot\,\cdots\,\cdot 4}\right) \tag{3}$$ makes no sense: $$n$$ must be $$\geq5$$ because in the numerator (of the parenthesized part) $$2\cdot3\cdot\,\cdots\,\cdot n \tag{4}$$ there are two multiplication dots separated by an ellipsis, implying that there is at least one value between $$3$$ and $$n$$. Thus, the second equation can only be used for $$n\geq5$$. My textbook says that $$\left(\frac{2\cdot3\cdot\,\cdots\,\cdot n}{n\cdot n\cdot\,\cdots\,\cdot n}\right) \tag{5}$$ can equal $$1$$ for $$n=1$$, but technically you have to be able to plug in $$1$$ for $$n$$ to say that. • The conversion you write from $a_n=\frac {n!}{n^n}$ to $a_n=\frac{1}{n}\left(\frac{1\cdot2\cdot3\cdot...\cdot n}{n\cdot n\cdot n\cdot...\cdot n}\right)$ is off by a factor $\frac 1n$. I don't know what the leading $\frac 1n$ is doing in the second. – Ross Millikan Oct 19 '19 at 2:06 • @RossMillikan My bad I mistyped. – user532874 Oct 19 '19 at 2:10 • In a now-deleted answer, @fordjones makes the important point that the ellipsis notation represents a pattern and shouldn't be taken entirely literally. "$1\cdot 2\cdot 3\cdot\,\cdots\,\cdot n$" is a common way to express $n!$, even for $n=1$ through $n=4$, despite the notation suggesting that there are terms between "$3$" and "$n$". Without resorting to the somewhat scary-looking sigma notation, the ellipsis notation is simply the most-convenient way to express "the product of the integers from $1$ to $n$" in an arithmetical form. Mathematics, like any language, has its colloquialisms. – Blue Oct 19 '19 at 2:21 • You could click the edit button at the bottom of the post and correct the error. – Ross Millikan Oct 19 '19 at 2:27 • @RossMillikan I did – user532874 Oct 19 '19 at 2:28 You are correct in that, strictly, what has been written makes no sense. That being said, these kinds of abuses of notation are common throughout mathematics. A more rigorous expression would be $$\frac{n!}{n^n}=\frac{\prod_{i=1}^n i}{\prod_{i=1}^n n}$$ • For $a_n=\frac{1}{n}\left(\frac{2\cdot3\cdot...\cdot n}{n\cdot n\cdot...\cdot n}\right)$ I'm still not sure what the domain is. It's a sequence so $n$ must be a positive integer but which positive integers are allowed, strictly speaking? – user532874 Oct 19 '19 at 2:07 • Yes, $n$ should be a positive integer. This is another common abuse, some letters tend to mostly be used to denote objects of a certain kind. – Reveillark Oct 19 '19 at 2:20 Are you sure about $$\lim_{x \to 2} \frac{n!}{n^n} ?$$ $$a_n=\frac{1}{n}\left(\frac{1\cdot2\cdot3\cdot...\cdot n}{n\cdot n\cdot n\cdot...\cdot n}\right) = \frac{n!}{n^n}$$ if the number of $$n$$ in the denominator of $$(\frac{1\cdot2\cdot3\cdot...\cdot n}{n\cdot n\cdot n\cdot...\cdot n})$$ is $$n-1$$, which does not seem so. Otherwise it is not correct. • I made a mistake typing.
– user532874 Oct 19 '19 at 2:14 It seems that in the book they are using that $$a_n=\frac{1}{n}\left(\frac{2\cdot3\cdot\,\cdots\,\cdot n}{n\cdot n\cdot\,\cdots\,\cdot n}\right)=\frac1n\frac{n!}{n^{n-1}}=\frac{n!}{n^n}$$ and indeed for $$n=1$$ we have $$\frac{n!}{n^{n-1}}=\frac{1}{1^0}=1$$.
# Free Fall and air resistance

Hey guys, when an object is free falling, is it still under the effects of air resistance? If it was not under the effects (an environment without air resistance), is free fall possible? (I am looking for an argument that it is under the effects of air resistance, but all replies will be taken into consideration.)

## Answers and Replies

rock.freak667
Homework Helper
When a body is falling through the air there are three forces acting on it: Weight (W), Friction (F) and upthrust (U). The resultant force is given by $F_R=W-U-F$; once the body reaches terminal velocity the resultant force is zero. So the air resistance is still there, but the weight is then numerically equal to the sum of the air resistance and the upthrust.
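To make the force balance concrete, here is an added note using a standard quadratic drag model (an editorial addition, not part of the original thread): if the drag force grows with speed as $F = \tfrac{1}{2}\rho C_d A v^2$, the body stops accelerating when $F_R = 0$, which gives the terminal speed $$W - U - \tfrac{1}{2}\rho C_d A v_t^2 = 0 \quad\Rightarrow\quad v_t = \sqrt{\frac{2(W-U)}{\rho C_d A}}.$$ In an environment without air there is no drag $F$ (and no upthrust $U$), so the only force is gravity and the object is in true free fall, accelerating at $g$ the whole way down.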
# CHEMISTRY: The Molecular Nature of Matter and Change 2016 ## Educators Problem 1 What type of central-atom orbital hybridization corresponds to each electron-group arrangement: (a) trigonal planar; (b) octahedral; (c) linear; (d) tetrahedral; (e) trigonal bipyramidal? Check back soon! Problem 2 What is the orbital hybridization of a central atom that has one lone pair and bonds to: (a) two other atoms; (b) three other atoms; (c) four other atoms; (d) five other atoms? Check back soon! Problem 3 How do carbon and silicon differ with regard to the types of orbitals available for hybridization? Explain Check back soon! Problem 4 How many hybrid orbitals form when four atomic orbitals of a central atom mix? Explain. Check back soon! Problem 5 Give the number and type of hybrid orbital that forms when each set of atomic orbitals mixes: (a) two d, one s, and three p (b) three p and one s Check back soon! Problem 6 Give the number and type of hybrid orbital that forms when each set of atomic orbitals mixes: (a) one p and one s (b) three p, one d, and one s Check back soon! Problem 7 What is the hybridization of nitrogen in each of the following: (a) $\mathrm{NO} ;(\mathrm{b}) \mathrm{NO}_{2} ;(\mathrm{c}) \mathrm{NO}_{2}^{-} ?$ Check back soon! Problem 8 What is the hybridization of carbon in each of the following: (a) $\mathrm{CO}_{3}^{2-} ;$ (b) $\mathrm{C}_{2} \mathrm{O}_{4}^{2-} ;$ (c) $\mathrm{NCO}^{-2}$ Check back soon! Problem 9 What is the hybridization of chlorine in each of the following: (a) $\mathrm{ClO}_{2} ;(\mathrm{b}) \mathrm{ClO}_{3}^{-} ;(\mathrm{c}) \mathrm{ClO}_{4}^{-?}$ Check back soon! Problem 10 What is the hybridization of bromine in each of the following: (a) $\mathrm{BrF}_{3} ;(\mathrm{b}) \mathrm{BrO}_{2}^{-} ;(\mathrm{c}) \mathrm{BrF}_{5} ?$ Check back soon! Problem 11 Which types of atomic orbitals of the central atom mix to form hybrid orbitals in (a) $\mathrm{SiClH}_{3} ;$ (b) $\mathrm{CS}_{2} ;(\mathrm{c}) \mathrm{SCl}_{3} \mathrm{F} ;(\mathrm{d}) \mathrm{NF}_{3} ?$ Check back soon! Problem 12 Which types of atomic orbitals of the central atom mix to form hybrid orbitals in (a) $\mathrm{Cl}_{2} \mathrm{O} ;(\mathrm{b}) \mathrm{BrCl}_{3} ;(\mathrm{c}) \mathrm{PF}_{5} ;(\mathrm{d}) \mathrm{SO}_{3}^{2-} ?$ Check back soon! Problem 13 Phosphine $\left(\mathrm{PH}_{3}\right)$ reacts with borane $\left(\mathrm{BH}_{3}\right)$ as follows: $$\mathrm{PH}_{3}+\mathrm{BH}_{3} \longrightarrow \mathrm{H}_{3} \mathrm{P}-\mathrm{BH}_{3}$$ a) Which of the illustrations below depicts the change, if any, in the orbital hybridization of P during this reaction? (b) Which depicts the change, if any, in the orbital hybridization of B? Check back soon! Problem 14 The illustrations below depict differences in orbital hybridization of some tellurium (Te) fluorides. (a) Which depicts the difference, if any, between $\mathrm{TeF}_{6}(\text {lef} t)$ and $\mathrm{TeF}_{5}-(\text { right }) ?$ (b) Which depicts the difference, if any, between TeF $_{4}(\text {left})$ and $\operatorname{Te} \mathrm{F}_{6}(\text {right}) ?$ Check back soon! Problem 15 Use partial orbital diagrams to show how the atomic orbitals of the central atom lead to hybrid orbitals in (a) $\mathrm{GeCl}_{4}$ (b) $\mathrm{BCl}_{3}$ (c) $\mathrm{CH}_{3}^{+}$ Check back soon! Problem 16 16 Use partial orbital diagrams to show how the atomic orbitals of the central atom lead to hybrid orbitals in (a) $\mathrm{BF}_{4}^{-}$ (b) $\mathrm{PO}_{4}^{3-}$ (c) $\mathrm{SO}_{3}$ Check back soon! 
Problem 17 Use partial orbital diagrams to show how the atomic orbitals of the central atom lead to hybrid orbitals in (a) $\operatorname{SeCl}_{2}$ (b) $\mathrm{H}_{3} \mathrm{O}^{+}$ (c) $\mathrm{IF}_{4}^{-}$ Check back soon! Problem 18 Use partial orbital diagrams to show how the atomic orbitals of the central atom lead to hybrid orbitals in (a) $\mathrm{AsCl}_{3}$ (b) $\operatorname{Sn} \mathrm{Cl}_{2}$ $(\mathrm{c}) \mathrm{PF}_{6}-$ Check back soon! Problem 19 Methyl isocyanate, $\mathrm{CH}_{3}-\mathrm{N}=\mathrm{C}=\ddot{\mathrm{O}}$ is an intermediate in the manufacture of many pesticides. In 1984 a leak from a manufacturing plant resulted in the death of more than 2000 people in Bhopal, India. What are the hybridizations of the N atom and the two C atoms in methyl isocyanate? Sketch the molecular shape. Check back soon! Problem 20 Are these statements true or false? Correct any false ones. a) Two $\sigma$ bonds comprise a double bond. b) A triple bond consists of one $\pi$ bond and two $\sigma$ bonds. c) Bonds formed from atomic $s$ orbitals are always $\sigma$ bonds. d) A $\pi$ bond restricts rotation about the $\sigma$ -bond axis. e) A $\pi$ bond consists of two pairs of electrons. f) End-to-end overlap results in a bond with electron density above and below the bond axis. Check back soon! Problem 21 Describe the hybrid orbitals used by the central atom and the type(s) of bonds formed in (a) $\mathrm{NO}_{3}^{-} ;$ (b) $\mathrm{CS}_{2} ;(\mathrm{c}) \mathrm{CH}_{2} \mathrm{O}$ Check back soon! Problem 22 Describe the hybrid orbitals used by the central atom and the type(s) of bonds formed in (a) $\mathrm{O}_{3} ;(\mathrm{b}) \mathrm{I}_{3}^{-} ;(\mathrm{c}) \mathrm{COCl}_{2}(\mathrm{C} \text { is } central).$ Check back soon! Problem 23 Describe the hybrid orbitals used by the central atom(s) and the type(s) of bonds formed in (a) FNO; $(b) C_{2} F_{4}$ (c) $(\mathrm{CN})_{2}$ Check back soon! Problem 24 Describe the hybrid orbitals used by the central atom(s) and the type(s) of bonds formed in (a) $\operatorname{BrF}_{3} ;$ (b) $\mathrm{CH}_{3} \mathrm{C} \equiv \mathrm{CH}$ (c) $\mathrm{SO}_{2}$ Check back soon! Problem 25 2 -Butene $\left(\mathrm{CH}_{3} \mathrm{CH}=\mathrm{CHCH}_{3}\right)$ is a starting material in the manufacture of lubricating oils and many other compounds. Draw two different structures for 2-butene, indicating the $\sigma$ and $\pi$ bonds in each. Check back soon! Problem 26 Two p orbitals from one atom and two p orbitals from another atom are combined to form molecular orbitals for the joined atoms. How many MOs will result from this combination? Explain. Check back soon! Problem 27 Certain atomic orbitals on two atoms were combined to form the following MOs. Name the atomic orbitals used and the MOs formed, and explain which MO has higher energy: Check back soon! Problem 28 How do the bonding and antibonding MOs formed from a given pair of AOs compare to each other with respect to (a) energy; (b) presence of nodes; (c) internuclear electron density? Check back soon! Problem 29 Antibonding MOs always have at least one node. Can a bonding MO have a node? If so, draw an example. Check back soon! Problem 30 How many electrons does it take to fill (a) a $\sigma$ bonding $\mathrm{MO}$ (b) a $\pi$ antibonding $\mathrm{MO} ;(\mathrm{c})$ the MOs formed from combination of the 1$s$ orbitals of two atoms? Check back soon! 
Problem 31 How many electrons does it take to fill (a) the MOs formed from combination of the 2p orbitals of two atoms; (b) a $\sigma_{2 p}^{+1} \mathrm{MO}$ (c) the MOs formed from combination of the 2s orbitals of two atoms? Check back soon! Problem 32 The molecular orbitals depicted below are derived from 2p atomic orbitals in $\mathrm{F}_{2}^{+},$ (a) Give the orbital designations. (b) Which is occupied by at least one electron in $\mathrm{F}_{2}^{+},?$ (c) Which is occupied by only one electron in $\mathrm{F}_{2}^{+},?$ Check back soon! Problem 33 The molecular orbitals depicted below are derived from n = 2 atomic orbitals. (a) Give the orbital designations. (b) Which is highest in energy? (c) Lowest in energy? (d) Rank the MOs in order of increasing energy for $\mathrm{B}_{2}$ Check back soon! Problem 34 Use an MO diagram and the bond order you obtain from it to answer: (a) Is $\mathrm{Be}_{2}^{+}$ stable? (b) Is $\mathrm{Be}_{2}^{+}$ diamagnetic? (c) What is the outer (valence) electron configuration of $\mathrm{Be}_{2}+2$ Check back soon! Problem 35 Use an MO diagram and the bond order you obtain from it to answer (a) Is $\mathrm{O}_{2}^{-}$ stable? (b) Is $\mathrm{O}_{2}$ - paramagnetic? (c) What is the outer (valence) electron configuration of $\mathrm{O}_{2}^{-2}$ Check back soon! Problem 36 Use MO diagrams to place $\mathrm{C}_{2}^{-}, \mathrm{C}_{2},$ and $\mathrm{C}_{2}+$ in order of (a) increasing bond energy; (b) increasing bond length. Check back soon! Problem 37 Use MO diagrams to place $\mathrm{B}_{2}^{+}, \mathrm{B}_{2},$ and $\mathrm{B}_{2}^{-}$ in order of (a) decreasing bond energy; (b) decreasing bond length. Check back soon! Problem 38 Predict the shape, state the hybridization of the central atom, and give the ideal bond angle(s) and any expected deviations for: (a) ${BrO}_{3}-$ (b) ${AsCl}_{4}^{-}$ (c) ${SeO}_{4}^{2-}$ (d) ${BiF}_{5}^{2-}$ (e) ${SbF}_{4}^{+}$ (f) ${AlF}_{6}^{3-}$ (g) ${IF}_{4}^{+}$ Check back soon! Problem 39 Butadiene (right) is a colorless gas used to make synthetic rubber and many other compounds. (a) How many $\sigma$ bonds and $\pi$ bonds does the molecule have? (b) Are cis-trans arrangements about the double bonds possible? Explain. Check back soon! Problem 40 Epinephrine (or adrenaline; below) is a naturally occurring hormone that is also manufactured commercially for use as a heart stimulant, a nasal decongestant, and a glaucoma treatment. (a) What is the hybridization of each $C, O,$ and $N$ atom? (b) How many $\sigma$ bonds does the molecule have? (c) How many $\pi$ electrons are delocalized in the ring? Check back soon! Problem 41 Use partial orbital diagrams to show how the atomic orbitals of the central atom lead to the hybrid orbitals in: (a) $\mathrm{IF}_{2}^{-}$ (b) $\mathrm{ICl}_{3}$ (c) $\mathrm{XeOF}_{4}$ (d) $\mathrm{BHF}_{2}$ Check back soon! Problem 42 Isoniazid (below) is an antibacterial agent that is very useful against many common strains of tuberculosis. (a) How many $\sigma$ bonds are in the molecule? (b) What is the hybridization of each $\mathrm{C}$ and $\mathrm{N}$ atom? Check back soon! Problem 43 Hydrazine, $\mathrm{N}_{2} \mathrm{H}_{4}$ and carbon disulfide, $\mathrm{CS}_{2}$ form a cyclic molecule (below). (a) Draw Lewis structures for $\mathrm{N}_{2} \mathrm{H}_{4}$ and $\mathrm{CS}_{2}$ (b) How do electron-group arrangement, molecular shape, and hybridization of N change when $\mathrm{N}_{2} \mathrm{H}_{4}$ reacts to form the product? 
(c) How do electron-group arrangement, molecular shape, and hybridization of C change when $\mathrm{CS}_{2}$ reacts to form the product?

Problem 44: In each of the following equations, what hybridization change, if any, occurs for the underlined atom? (a) $\underline{\mathrm{BF}}_{3}+\mathrm{NaF} \longrightarrow \mathrm{Na}^{+}\mathrm{BF}_{4}^{-}$; (b) $\mathrm{PCl}_{3}+\mathrm{Cl}_{2} \longrightarrow \mathrm{PCl}_{5}$; (c) $\mathrm{HC} \equiv \mathrm{CH}+\mathrm{H}_{2} \longrightarrow \mathrm{H}_{2}\mathrm{C}=\mathrm{CH}_{2}$; (d) $\underline{\mathrm{SiF}}_{4}+2\,\mathrm{F}^{-} \longrightarrow \mathrm{SiF}_{6}^{2-}$; (e) $\underline{\mathrm{SO}}_{2}+\tfrac{1}{2}\,\mathrm{O}_{2} \longrightarrow \mathrm{SO}_{3}$.

Problem 45: The ionosphere lies about 100 km above Earth's surface. This layer consists mostly of $\mathrm{NO}$, $\mathrm{O}_{2}$, and $\mathrm{N}_{2}$, and photoionization creates $\mathrm{NO}^{+}$, $\mathrm{O}_{2}^{+}$, and $\mathrm{N}_{2}^{+}$. (a) Use MO theory to compare the bond orders of the molecules and ions. (b) Does the magnetic behavior of each species change when its ion forms?

Problem 46: Glyphosate (structure not shown) is a common herbicide that is relatively harmless to animals but deadly to most plants. Describe the shape around and the hybridization of the P, N, and three numbered C atoms.

Problem 47: Tryptophan is one of the amino acids found in proteins. (a) What is the hybridization of each of the numbered C, N, and O atoms? (b) How many $\sigma$ bonds are present in tryptophan? (c) Predict the bond angles at points a, b, and c.

Problem 48: Some species with two oxygen atoms only are the oxygen molecule, $\mathrm{O}_{2}$; the peroxide ion, $\mathrm{O}_{2}^{2-}$; the superoxide ion, $\mathrm{O}_{2}^{-}$; and the dioxygenyl ion, $\mathrm{O}_{2}^{+}$. Draw an MO diagram for each, rank them in order of increasing bond length, and find the number of unpaired electrons in each.

Problem 49: Molecular nitrogen, carbon monoxide, and cyanide ion are isoelectronic. (a) Draw an MO diagram for each. (b) CO and $\mathrm{CN}^{-}$ are toxic. What property may explain why $\mathrm{N}_{2}$ isn't?

Problem 50: There is concern in health-related government agencies that the American diet contains too much meat, and numerous recommendations have been made urging people to consume more fruit and vegetables. One of the richest sources of vegetable protein is soy, available in many forms. Among these is soybean curd, or tofu, which is a staple of many Asian diets. Chemists have isolated an anticancer agent called genistein from tofu, which may explain the much lower incidence of cancer among people in the Far East. A valid Lewis structure for genistein is shown (structure not shown). (a) Is the hybridization of each C in the right-hand ring the same? Explain. (b) Is the hybridization of the O atom in the center ring the same as that of the O atoms in the OH groups? Explain. (c) How many carbon-oxygen $\sigma$ bonds are there? How many carbon-oxygen $\pi$ bonds? (d) Do all the lone pairs on oxygens occupy the same type of hybrid orbital? Explain.

Problem 51: An organic chemist synthesizes the molecule shown (structure not shown). (a) Which of the orientations of hybrid orbitals shown below are present in the molecule? (b) Are there any present that are not shown below? If so, what are they? (c) How many of each type of hybrid orbital are present?
Problem 52: Simple proteins consist of amino acids linked together in a long chain; a small portion of such a chain is shown (structure not shown). Experiment shows that rotation about the C-N bond (indicated by the arrow) is somewhat restricted. Explain with resonance structures, and show the types of bonding involved.

Problem 53: Sulfur forms oxides, oxoanions, and halides. What is the hybridization of the central S in $\mathrm{SO}_{2}$, $\mathrm{SO}_{3}$, $\mathrm{SO}_{3}^{2-}$, $\mathrm{SCl}_{4}$, $\mathrm{SCl}_{6}$, and $\mathrm{S}_{2}\mathrm{Cl}_{2}$ (atom sequence $\mathrm{Cl}-\mathrm{S}-\mathrm{S}-\mathrm{Cl}$)?

Problem 54: The compound 2,6-dimethylpyrazine (structure not shown) gives chocolate its odor and is used in flavorings. (a) Which atomic orbitals mix to form the hybrid orbitals of N? (b) In what type of hybrid orbital do the lone pairs of N reside? (c) Is C in $\mathrm{CH}_{3}$ hybridized the same as any C in the ring? Explain.

Problem 55: Use an MO diagram to find the bond order and predict whether $\mathrm{H}_{2}^{-}$ exists.

Problem 56: Acetylsalicylic acid (aspirin), the most widely used medicine in the world, has the Lewis structure shown (structure not shown). (a) What is the hybridization of each C and each O atom? (b) How many localized $\pi$ bonds are present? (c) How many C atoms have a trigonal planar shape around them? A tetrahedral shape?

Problem 57: Linoleic acid is an essential fatty acid found in many vegetable oils, such as soy, peanut, and cottonseed. A key structural feature of the molecule is the cis orientation around its two double bonds, where $R_{1}$ and $R_{2}$ represent two different groups that form the rest of the molecule. (a) How many different compounds are possible, changing only the cis-trans arrangements around these two double bonds? (b) How many are possible for a similar compound with three double bonds?
{}
## Algebra 2 (1st Edition)

$c=49$; $(x+7)^{2}$

$x^{2}+14x+c\qquad$ ...find half the coefficient of $x$: $\displaystyle \frac{14}{2}=7$ $\qquad$ ...square the result: $7^{2}=49\qquad$ ...replace $c$ with $49$ in the original expression.

The trinomial $x^{2}+14x+c$ is a perfect square when $c=49.$ Then $x^{2}+14x+49=(x+7)(x+7)=(x+7)^{2}$.
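As a quick numeric sanity check (a minimal Python sketch using only the numbers from the problem above), one can verify both the value of $c$ and the factorization:

```python
# Verify that c = (14/2)^2 makes x^2 + 14x + c a perfect square trinomial.
b = 14
c = (b // 2) ** 2  # half the coefficient of x, squared
assert c == 49

# Check (x + 7)^2 == x^2 + 14x + 49 at several sample points.
for x in range(-10, 11):
    assert (x + 7) ** 2 == x ** 2 + b * x + c
print("x^2 + 14x + 49 factors as (x + 7)^2")
```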
{}
8. Water is pouring into a cuboidal reservoir at the rate of 60 litres per minute. If the volume of the reservoir is 108 $$m^3$$, find the number of hours it will take to fill the reservoir.

Given, volume of the reservoir = 108 $$m^3$$ = 108000 L [∵ 1 $$m^3$$ = 1000 L]

Time taken to fill the reservoir $$=\frac{\text{Volume of reservoir}}{\text{Rate of flow of water}}=\frac{108000}{60}=1800\text{ minutes}=\frac{1800}{60}=30\text{ hours}$$
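The same computation as a minimal Python sketch (the unit conversion and flow rate come straight from the problem statement):

```python
# Time to fill: convert the volume to litres, then divide by the inflow rate.
volume_m3 = 108
volume_l = volume_m3 * 1000          # 1 m^3 = 1000 L, so 108000 L
rate_l_per_min = 60

minutes = volume_l / rate_l_per_min  # 1800 minutes
hours = minutes / 60                 # 30 hours
print(f"{minutes:.0f} minutes = {hours:.0f} hours")
```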
{}
Question d3749 Feb 2, 2015

The density of a gas depends on its mass and its volume (of course), which means that it depends on pressure and temperature. When you are at STP, 1 mole of any ideal gas occupies exactly $\text{22.4 L}$. Since no other information is given, you can safely assume that you are dealing with 1 mole of sulfur trioxide ($S{O}_{3}$). Now, if you have 1 mole, we've established that you have a volume of $\text{22.4 L}$ at STP. All you need now is its mass - for this use $S{O}_{3}$'s molar mass - $\text{80.1 g/mol}$. This tells you that 1 mole of sulfur trioxide weighs $\text{80.1 g}$. Therefore, the density of sulfur trioxide at STP is

$$\rho = \frac{m}{V} = \frac{80.1\ \text{g}}{22.4\ \text{L}} = 3.58\ \text{g/L}$$

This is a rather lengthy tutorial, but very good; it does not cover sulfur trioxide per se: http://www.eiu.edu/eiuchem/genchem/tutorial6.pdf
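The same calculation as a minimal Python sketch (molar mass and molar volume as given in the answer above):

```python
# Density of SO3 at STP: rho = molar mass / molar volume of an ideal gas.
molar_mass = 80.1    # g/mol for SO3 (S: 32.1 + 3 x O: 16.0)
molar_volume = 22.4  # L/mol at STP

density = molar_mass / molar_volume
print(f"rho = {density:.2f} g/L")  # ~3.58 g/L
```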
{}
# The Lap-Counting Function for Linear Mod One and Tent Maps

## Dr. Leopold Flatto (Bell Laboratories)

The study of interval maps is a basic topic in dynamical systems, these maps arising in diverse settings such as population genetics and number theory. A central problem in the subject is to decide when two such maps are topologically conjugate. An important invariant is the lap-counting function $L(z)=\sum L_n z^n$, where $L_n$ is the number of monotonic pieces of the $n$th iterate of the map. This function was introduced by Milnor and Thurston, who used it to show that interval maps are semi-conjugate to piecewise linear maps with slopes $\pm s$, where $s$ is the topological entropy. The linear mod one and tent maps are the simplest examples of such maps. For these we obtain a complete description of $L(z)$ and show how its properties are related to the topological, dynamical, and ergodic properties of the maps. Finally, linear mod one maps are related to the dynamics of the Lorenz attractor, discovered by Lorenz in his study of weather prediction.
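As a rough numerical illustration of the lap-counting idea (not part of the talk abstract; the choice of the full tent map and the sampling resolution are my own), the following Python sketch counts the monotonic pieces of the $n$th iterate of the slope-2 tent map, for which $L_n = 2^n$:

```python
# Count the laps (monotonic pieces) of the n-th iterate of the full tent map
# T(x) = 1 - |2x - 1|, whose n-th iterate has 2^n laps.
def tent(x):
    return 1.0 - abs(2.0 * x - 1.0)

def laps(n, samples=20000):
    xs = [i / samples for i in range(samples + 1)]
    ys = xs
    for _ in range(n):
        ys = [tent(y) for y in ys]
    # A new lap starts wherever the finite difference changes sign.
    count, prev_sign = 1, 0
    for a, b in zip(ys, ys[1:]):
        sign = (a < b) - (a > b)
        if sign != 0 and prev_sign != 0 and sign != prev_sign:
            count += 1
        if sign != 0:
            prev_sign = sign
    return count

for n in range(1, 6):
    print(n, laps(n))  # expect 2, 4, 8, 16, 32
```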
{}
# A $$kg$$ disk initially at rest in the Earth reference frame is free to move parallel to a horizontal bar through a hole in the disk’s centre. The disk is struck face-on by a $$kg$$ paintball traveling at $$m/s$$.

## Part 1

According to an observer in the Earth reference frame, what is the change in the system’s kinetic energy after the ball hits the disk? Take the system to be the paintball and the disk and assume that after the collision all of the paint from the paintball sticks to the disk. Please enter in a numeric value in .

## Part 2

Before the paintball hits the disk, what is the velocity of the system’s zero-momentum reference frame relative to the Earth reference frame? Please enter in a numeric value in .

## Part 3

What would an observer in the zero-momentum reference frame measure for the system’s change in kinetic energy?
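The numeric values above were lost, so here is a minimal Python sketch with placeholder values (the masses and speed are assumptions, not the problem's data) showing how each part would be computed for a perfectly inelastic collision:

```python
# Hypothetical placeholder values, chosen only to illustrate the formulas.
m_disk = 0.40   # kg (assumed)
m_ball = 0.010  # kg (assumed)
v_ball = 90.0   # m/s (assumed)

# Part 2: zero-momentum frame velocity relative to Earth (total p / total m).
v_zm = m_ball * v_ball / (m_ball + m_disk)

# Part 1: change in kinetic energy in the Earth frame (ball sticks to disk,
# so the combined object moves at v_zm afterward).
ke_before = 0.5 * m_ball * v_ball ** 2
ke_after = 0.5 * (m_ball + m_disk) * v_zm ** 2
delta_ke = ke_after - ke_before

# Part 3: the same energy is dissipated in the zero-momentum frame, since
# the KE change of an isolated system is the same in all inertial frames.
print(f"v_zm = {v_zm:.2f} m/s, delta KE = {delta_ke:.2f} J")
```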
{}
ISSN: 2288-4637 (Print) ISSN: 2288-4645 (Online) The Journal of Asian Finance, Economics and Business Vol.7 No.8 pp.297-308 DOI: https://doi.org/10.13106/jafeb.2020.vol7.no8.297

# Corporate Social Responsibility and the Pricing of Seasoned Equity Offerings: Does Executive Firm-Related Wealth Matter?

Hong Chuong PHAM1, Duc Anh NGO2, Ha Thanh LE3, Thiet Thanh NGUYEN4

1 First Author. Associate Professor, University Management Board, National Economics University, Vietnam. Email: chuongph@neu.edu.vn
2 Associate Professor, Department of Accounting, Finance and MIS, School of Business, Norfolk State University, U.S.A. Email: adngo@nsu.edu
3 Corresponding Author. Associate Professor, Faculty of Environment, Climate Change and Urban Studies, National Economics University, Vietnam [Postal Address: 207 Giai Phong Road, Dong Tam Ward, Hai Ba Trung District, Hanoi, 113068, Vietnam] Email: thanhlh@neu.edu.vn
4 Assistant Professor, Accounting, Economics, and Finance Department, George Dean Johnson Jr. College of Business and Economics, University of South Carolina Upstate, U.S.A. Email: tnguyen2@uscupstate.edu

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://Creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

June 03, 2020 June 28, 2020 July 09, 2020

## Abstract

This study examines the roles of corporate social responsibility (CSR) and executive compensation structure in the pricing of seasoned equity offerings (SEOs), with special focus on the role of CSR in reducing the level of information asymmetry between managers and future shareholders of issuing firms through SEOs. This study also investigates the interaction between executive compensation structure and CSR on the discounting of SEOs. We use a sample of 2,102 seasoned equity offerings of U.S. firms with CSR scores from 1995 to 2015 in our OLS fixed effect regression analysis. The results show that issuing firms with high CSR are more likely to experience a lower degree of the SEO discount. The results also document a positive association between CSR and a high proportion of equity-based compensation of issuing firms’ executives. The findings of this paper confirm that CSR attenuates the impact of information asymmetry and the pre-SEO price uncertainty on the pricing of the offers and hence the SEO discount. Furthermore, CSR reinforces the impact of executive firm-related wealth on the discounting of seasoned equity offerings. It appears that firm-related wealth motivates managers to actively engage in reducing information asymmetry before SEOs, thereby decreasing the SEO discount.

JEL Classification Code: G32, G34, G15

## 1. Introduction

Corporate social responsibility (CSR) can bring different types of benefits to a firm’s stakeholders in several ways, including better brand recognition, an increased customer base and brand loyalty, easier access to the capital markets, and positive market reactions to its major economic event announcements. For example, Lins et al. (2017) find evidence that increased spending on CSR improves the trust between CSR firms and investors, especially during a period when the overall trust in corporations and markets is low. They show that firms with high CSR activities have stock returns four to seven percent higher than firms with low CSR activities. Zahari et al.
(2020) examine the relationship between CSR practices and brand equity among the top 100 brands in Malaysia and find that CSR practices significantly improve financial-based brand equity. Edmans (2011) analyses the relationship between employee satisfaction and long-run stock returns and finds that the stock performance of high CSR firms outperforms that of their industry benchmarks. The author also finds that investing in CSR helps firms improve their investment returns. Similarly, Yang and Kim (2018) conduct a survey of South Korean corporate employees to evaluate the effects of CSR activities on employee work performance. The authors find CSR not only enhances work performance but also promotes firms’ authentic leadership. Over the last two decades, an increasing number of firms engage in CSR activities not only to meet new regulation requirements but also to reduce the pressure of investor activism requiring firms to consider the interests of broader stakeholders such as communities, employees, and environmental groups. However, managers may tend to take advantage of a positive market response to firms’ CSR activities to opportunistically increase their own wealth rather than their shareholders' wealth, especially during periods surrounding major economic event announcements. According to the pecking order theory of capital structure, managers follow a hierarchical order of financing sources. Managers prefer retained earnings to debt, and debt to new equity issues, when financing their new projects. For example, Myers and Majluf (1984) show in their model that managers only issue new equity when they believe their stocks are overvalued, to increase the current shareholders' wealth at the expense of new shareholders. Outside investors, who do not possess the same level of inside information as the managers do, yet understand the motivation of managers, react negatively to firms' announcements of seasoned equity offerings. Corwin (2003) provides evidence that new equity offers are underpriced by 2.2 percent in a sample of firms issuing seasoned equity offerings during the 1980s and 1990s. That means, on average, the offer price of new equity is about 2.2 percent less than the closing price on the offer day. In a similar study on the discounting and clustering in seasoned equity offerings, Mola and Loughran (2004) examine a sample of 4,814 SEOs from 1986-1999 and find that, on average, the new offers are priced at a discount of more than 3.0 percent. The discounting of SEOs reflects in part the rounding practices for new offers as well as the underwriters' rent extraction ability. If CSR can increase firms' reputation in financial markets, CSR can also mitigate the negative market reactions to firms' announcements of seasoned equity offerings. Recently, Feng et al. (2018) examine whether CSR can add value to capital market participants through seasoned equity offerings and find that CSR issuing firms experience a lesser degree of negative reactions to SEO announcements, partly because CSR helps reduce information asymmetry between managers and outside investors. Similarly, Yoon and Lee (2019) examine how CSR is related to the degree of asymmetric information in the Korean financial market. They find that firms with high CSR scores are associated with a lower level of information asymmetry. However, the authors find no evidence of such a relation in the sample of chaebol-affiliated firms.
Following this line of logic, in this study we investigate whether managers’ firm-related wealth from their compensation packages plays a significant role in the pricing of new offers, as well as in increasing CSR activities before seasoned equity offerings, thereby reducing the level of the SEO discount. We focus on examining the relationship between managers’ firm-related wealth and the SEO discount for two reasons. First, according to agency theory, managers always consider maximizing their compensation, or their firm-related wealth, in any corporate decision. Consequently, managers are more likely to opportunistically increase spending on CSR activities before any corporate event that may negatively impact both their short-term compensation, such as salaries, bonuses, or vested shares, and their long-term firm-related wealth, such as the value of stock options or unvested shares. Second, as Carlson et al. (2010) find that firm risk changes dynamically around seasoned equity offerings, typically increasing before SEOs and decreasing gradually thereafter, it is likely that managers who hold a large proportion of pay-for-performance compensation, such as restricted stocks or stock options, in their compensation packages are conscious of their wealth. Therefore, managers not only try to reduce information asymmetry by increasing spending on CSR activities before SEOs, but also carefully work with underwriters to obtain better offer prices for their new offers, thereby maximizing their firm-related wealth through their firms' SEO episodes. To the best of our current knowledge, this is the first study to examine the combined effects of CSR and executive compensation structure on the pricing of seasoned equity offerings. We shed new light on how the structure of executive compensation, coupled with the effects of managers' opportunistic decisions regarding CSR activities, impacts the pricing of seasoned equity offerings by investigating the level of the SEO discount. We find that outside markets respond positively to new offers of issuing firms with high CSR activities. In addition, issuing firms whose top executives hold a larger proportion of firm-related wealth in their compensation packages experience a lesser degree of SEO discount. Specifically, the results show that managers’ firm-related wealth is associated with a higher level of CSR activities and a lower level of the SEO discount. Our findings suggest that managers may mitigate the effects of negative market responses to SEO announcements by engaging in CSR activities before the SEO episodes, as well as by carefully timing and pricing their new offers to maximize their wealth. The paper proceeds as follows. The next section presents a brief literature review and hypothesis development. Section 3 describes the sample and methodology. Section 4 reports and discusses the results. Section 5 concludes the paper.

## 2. Literature Review and Hypotheses

Agency theory provides a framework that explains and resolves conflicts between an agent and a principal that can occur when the goals or desires of the two sides differ. To mitigate agency problems, the principal may design a contract that aligns the agent's interests with the principal's interests.
In terms of compensation structure, the principal usually offers a compensation package that ties the agent's compensation to firm performance based on performance measures such as profitability or stock price, since it is difficult or expensive for the principal to verify or observe the actual efforts of the agent. However, no compensation contract is perfect, and therefore managers can always find a way to maximize their wealth, regardless of whether it comes at the expense of shareholders. In a seminal paper of agency theory, Jensen and Meckling (1976) provide a detailed analysis of why a manager in a public firm chooses a set of activities for the firm such that the total value of the firm is less than it would be if the manager were the sole owner of the firm. Our research is at the intersection of the agency problem and external financing literatures. In a new share issue, current shareholders’ wealth can be affected negatively or positively by the new issue, depending on whether the offer price of the new shares is greater or less than the current trading price. Datta et al. (2005) use a sample of 444 U.S. SEOs during the period 1995-1999 and find that there is an adverse stock price effect for issuing firms with high executive equity-based compensation. Their findings also suggest that one of the important determinants of the shareholder wealth effects associated with new equity offerings is the structure of executive compensation. In a similar study, Brazel and Webb (2006) investigate the effect of executive compensation on the short-term market reactions following SEO announcements and find that investors are more likely to react negatively to the SEO announcements of firms whose managers are paid with a high proportion of performance-based incentives. More recently, Brisker et al. (2014) examine executive compensation structure and the motivations for seasoned equity offerings and find that managers who are awarded high equity-based compensation are more likely to time their seasoned equity offerings to periods when their firms' stocks are overvalued, to maximize their compensation. We extend their findings by examining the interaction effect of executive compensation structure and CSR activities on the pricing of seasoned equity offerings. If markets react positively to new seasoned equity offerings issued by high CSR firms, the post-issue market performance of high CSR firms should outperform that of low CSR firms. In addition, the managers of issuing firms with high firm-related wealth are more likely to price their new offers at a smaller discount compared with those of issuing firms with smaller firm-related wealth. Taken together, our first hypothesis is as follows:

H1A: The post-SEO stock performance of issuing firms with high CSR scores is better than that of issuing firms with low CSR scores.

H1B: The post-SEO stock performance of issuing firms with managers having a relatively high proportion of equity-based compensation is higher than that of issuing firms with a relatively low proportion of equity-based compensation.

The previous studies (i.e., Jensen & Meckling, 1976; Myers & Majluf, 1984; Mola & Loughran, 2004) provide a theoretical foundation and an empirical strategy for our research. According to Myers and Majluf (1984), firms do not have optimal debt-equity ratios. Instead, they follow the pecking order when raising external capital for their financing needs to minimize their financing costs.
Because of information asymmetry between managers and outside investors, to reduce financing costs managers only turn to the equity market if they have exhausted all retained earnings and their debt financing has become prohibitively expensive. In other words, even when other sources of financing such as retained earnings or debt are available, managers only issue new equity when they believe their stock is overvalued, to maximize current shareholders' wealth at the expense of future shareholders. In case managers have to turn to the equity market for their financing needs, they can mitigate negative market reactions to their new offerings by opportunistically timing their offerings to obtain a better price for their new shares. Mola and Loughran (2004) find evidence that investment banks can extract rents from issuing firms by discounting new equity offers to maximize their profit. Their study shows that issuing firms allow a discount on their new issues because they consider analyst coverage very important to the success of new equity offerings. Few investment banks or underwriters with highly regarded research analysts provide underwriting services in the market, and therefore they have a strong influence on setting prices for new offers of issuing firms. Also, SEO discounts reflect the cost of uncertainty about firm value, the marketing costs of new issues, and the cost of acquiring the information that raises the offer price (Altinkiliç & Hansen, 2003; Mola & Loughran, 2004). We expect that managers with high equity compensation are concerned about the discount level of their new offerings because their compensation value is tied to firm performance and stock prices, and this concern motivates them to actively engage in the pricing of seasoned equity offerings as well as to opportunistically increase their level of CSR activities before SEO announcements, to mitigate the impact of negative market reactions on their firm-related wealth after SEOs. We, therefore, offer our second hypothesis as follows:

H2A: The SEO discount of issuing firms with high CSR is lower than that of issuing firms with low CSR.

H2B: The SEO discount of issuing firms with managers having a relatively high proportion of equity-based compensation (firm-related wealth) is lower than that of issuing firms with managers having a relatively low proportion of equity-based compensation (firm-related wealth).

## 3. Data and Research Methods

### 3.1. Data Sources

In this study, we use data from several sources for the empirical analysis. We begin with a sample of all U.S. firms conducting SEOs from 1991 to 2015 from the SDC Platinum New Issues Database. We obtain only U.S. common equity offers after excluding private placements, rights offers, and unit investment trust offers. We choose the sample period from 1991 to 2015 because we can match the SEO sample with CSR information from the MSCI ESG KLD Stats dataset. The MSCI ESG KLD Stats dataset initially provided corporate social responsibility ratings of the 650 firms that comprised the S&P 500 and the FTSE KLD 400 Social Index in 1991. The dataset expanded its coverage to the 1,000 and the 3,000 largest publicly traded U.S. companies in 2001 and 2003, respectively. The KLD dataset covers approximately 80 indicators in corporate social responsibility areas including community, corporate governance, diversity, employee relations, environment, human rights, and product.
This dataset also provides information on involvement in controversial business issues or "sin businesses" such as alcohol, gambling, firearms, military, nuclear power, and tobacco. We extract accounting data and stock prices from the Compustat and CRSP datasets to construct our dependent and control variables for the regression analysis. The sample selection process is presented in Table 2 Panel A. First, we obtain a sample of 3,070 SEOs of non-financial and non-utility U.S. firms from 1991 to 2015. We then match the SEO sample to the KLD database to obtain CSR information for the SEO sample. The matching process reduces our SEO sample with CSR information to 2,492 observations. We obtain underwriters' ranking information for the SEO sample from Prof. Ritter's underwriter database. We obtain financial information and stock prices from the COMPUSTAT and CRSP databases to compute our independent variables. We drop all observations with insufficient data to calculate the stock price volatility, firm beta, market-to-book ratio, IPO underpricing, and underwriters' ranks. The sample for the regression analysis of the SEO discount (Model 1) in Table 5 has 2,102 observations. We then collect executive compensation data from the Standard and Poor's ExecuComp database. We also use the ExecuComp database to compute the annual aggregated firm-related wealth of all top executives in a given firm for the entire sample. Firm-related wealth is the sum of the value of the stock and option portfolios held by an executive. We also use executive compensation information to construct two variables of interest, namely DELTA and VEGA, which proxy for the sensitivity of executive pay to the firm's stock price and to the firm's stock return volatility, respectively. Our final sample for the analysis of the effects of executive firm-related wealth on post-SEO stock performance has 978 observations.

### 3.2. Regression Models and Variable Construction

To test the effects of corporate social responsibility and executive compensation structure on the post-SEO stock performance, our main dependent variable is CAR, the cumulative abnormal stock return over the five trading days after the SEO announcement date (CAR[0:+5]). We use the market model to estimate the CAR after the announcement date for each SEO during the sample period. Our main independent variables of interest are the total corporate social responsibility score (TCSR) and the aggregate executive firm-related wealth (INSIDEWEALTH). We follow previous studies (e.g., Chatterji et al., 2009; Kim et al., 2012) to construct two versions of our corporate social responsibility variable. The first version (TCSR1) is the total number of strength indicators minus the total number of concern indicators, aggregated across five social rating categories (community, diversity, employee relations, environment, and product) for each firm in a given year. In a similar vein, we construct the second version (TCSR2) by aggregating the net CSR scores, taking the total number of strengths minus the total number of concerns in each category but excluding the corporate governance category, to disentangle the effects of CSR and corporate governance. We follow Sundaram and Yermack (2007) to calculate executive firm-related wealth (INSIDEWEALTH). Executive firm-related wealth equals the market value of equity owned by all of a firm's top executives reported in the ExecuComp database plus the value of options held. The value of option portfolios is estimated based on the Black-Scholes option valuation model.
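To make the TCSR construction concrete, here is a minimal pandas sketch; the column names and values are hypothetical stand-ins, not the actual MSCI ESG KLD field names:

```python
import pandas as pd

# Hypothetical KLD-style records: per-category strength/concern counts.
kld = pd.DataFrame({
    "firm": ["A", "B"],
    "com_str": [2, 0], "com_con": [0, 1],   # community
    "div_str": [1, 2], "div_con": [1, 0],   # diversity
    "emp_str": [3, 1], "emp_con": [0, 2],   # employee relations
    "env_str": [1, 0], "env_con": [2, 0],   # environment
    "pro_str": [0, 1], "pro_con": [1, 0],   # product
})

# TCSR1: strengths minus concerns, summed across the five social categories.
categories = ["com", "div", "emp", "env", "pro"]
kld["TCSR1"] = sum(kld[f"{c}_str"] - kld[f"{c}_con"] for c in categories)
print(kld[["firm", "TCSR1"]])
```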
In our robustness tests, we follow Core and Guay (2002) to construct two additional variables that proxy for the sensitivity of executive compensation to stock prices (DELTA) and the sensitivity of compensation to stock return volatility (VEGA). DELTA is the change in the aggregate value of options held by all of the firm's top executives for a one percent change in the stock price, and VEGA is the change in the aggregate value of options held by all of the firm's top executives associated with a one percent change in the annualized standard deviation of stock returns. We use the Black and Scholes (1973) formula for valuing European call options, adjusted for dividends, to estimate DELTA and VEGA for each executive annually:

Option value $= S e^{-DT} N(Z) - X e^{-rT} N(Z - \delta\sqrt{T})$, where $Z = \dfrac{\ln(S/X) + T(r - D + \delta^{2}/2)}{\delta\sqrt{T}}$

Where

N: Cumulative probability function for the normal distribution
S: Price of the underlying stock
X: Strike price of the option
δ: Expected stock return volatility over the life of the option
r: Natural logarithm of the risk-free rate
T: Time to maturity of the option, in years
D: Natural logarithm of the expected dividend yield over the life of the option

The values of DELTA and VEGA are calculated as follows:

DELTA = [∆(option value)/∆(price)] * (price/100) $= e^{-DT} N(Z) \cdot (S/100)$

VEGA = [∆(option value)/∆(stock volatility)] * 0.01 $= e^{-DT} N^{\prime}(Z)\, S \sqrt{T} \cdot 0.01$

where N′ is the normal density function.

Following the literature on the underpricing and discounting of SEOs (e.g., Altinkiliç & Hansen, 2003; Corwin, 2003; Kim & Park, 2005; Mola & Loughran, 2004), we use the following control variables in our regressions. BETA is the relative risk of a firm's equity compared to the market as a whole, computed by regressing monthly returns on the value-weighted market returns over a rolling 24-month window ending one month before the offer date. MB is the firm's market-to-book ratio, computed as the value of total assets minus the value of book equity plus the market value of equity, divided by total assets. VOLATILITY is the firm's stock return volatility, the standard deviation of stock returns over the 30 trading days ending 10 days before the offer date. PRECAR is the pre-SEO share price run-up, the cumulative market-adjusted return over the period from the filing date to the day before the offer date. SEOPROCEEDS is the relative size of the SEO proceeds, defined as the value of SEO proceeds divided by the firm's market value of equity. TICK is a dummy variable that takes the value of 1 if the decimal portion of the closing price on the day before the offer is less than 0.25, and zero otherwise. LNPRICE is the natural logarithm of the closing stock price one day before the offer date. URANK is the underwriter's reputation. We obtain the ranking list of underwriters for our sample firms from Prof. Ritter's underwriter ranking database (https://site.warrington.ufl.edu/ritter/ipo-data/). Table 1 provides variable definitions and more details on the construction of each variable. We use this estimation model to test the effect of CSR and firm-related wealth, as well as executive compensation structure, on the post-SEO stock return performance. Prior studies (Altinkiliç & Hansen, 2003; Corwin, 2003; Kim & Park, 2005) show that outside investors incorporate a discount in the price of new offers to reflect the cost of uncertainty about firm value. The higher the level of information asymmetry, the higher the degree of the SEO discount, because it becomes more difficult for both the underwriter and future shareholders to value the issuer.
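A minimal Python sketch of the DELTA and VEGA computation described above (the closed-form sensitivities follow Core and Guay (2002); the example inputs are hypothetical):

```python
from math import exp, log, sqrt, pi, erf

def norm_cdf(x):
    """Standard normal CDF, N(x)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x):
    """Standard normal density, N'(x)."""
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def delta_vega(S, X, sigma, r, T, d):
    """Per-option pay-performance sensitivities in the spirit of Core and
    Guay (2002). S: stock price; X: strike; sigma: return volatility;
    r: ln(1 + risk-free rate); T: years to maturity; d: ln(1 + dividend yield).
    """
    Z = (log(S / X) + T * (r - d + sigma ** 2 / 2.0)) / (sigma * sqrt(T))
    # DELTA: change in option value for a 1% change in the stock price.
    delta = exp(-d * T) * norm_cdf(Z) * (S / 100.0)
    # VEGA: change in option value for a 0.01 change in return volatility.
    vega = exp(-d * T) * norm_pdf(Z) * S * sqrt(T) * 0.01
    return delta, vega

# Hypothetical option grant: $40 stock, $35 strike, 30% vol, 5 years.
print(delta_vega(S=40.0, X=35.0, sigma=0.30, r=0.05, T=5.0, d=0.02))
```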
To test our second hypothesis, we construct a dependent variable (SEODISCOUNT) that measures the level of the SEO discount. SEODISCOUNT equals the closing price on the day before the offer minus the offer price, divided by the closing price on the day before the offer. We then establish regression models using SEODISCOUNT as the dependent variable and two main independent variables of interest proxying for CSR and executive firm-related wealth: TCSR and INSIDEWEALTH. Based on the previous SEO discounting literature, we also control for firm risk (BETA), the market-to-book ratio (MB), stock return volatility (VOLATILITY), relative offer size (SEOPROCEEDS), pre-SEO price run-up (PRECAR), firm size (FIRMSIZE), IPO underpricing (IPOUNDERPRICING), underwriters' reputation (URANK), and a dummy variable for firms listed on the Nasdaq stock exchange (NASDAQ) in model (3). We add two other independent variables of interest, namely DELTA and VEGA, to examine whether the executive compensation structure impacts the SEO discount in model (4).

## 4. Empirical Results

### 4.1. Descriptive Statistics

Table 2 presents the sample selection and descriptive statistics for our sample. The mean (median) SEO discount for the sample is 3.59% (2.47%). The average market-to-book ratio and firm beta of the sample firms are 3.21 and 1.23, respectively. The mean of the TCSR1 score is positive (0.669), which is consistent with prior studies. The mean (median) of FIRMSIZE is $13.87 million ($13.71 million) and the mean (median) of SEOPROCEEDS is 14.13 percent (12.69 percent). Table 3 presents a Pearson correlation matrix among the variables. With few exceptions, such as the correlation between TCSR1 and the firm beta and the correlation between TCSR and MB, most pairwise correlations are statistically significant at the 5 or 1 percent levels. The correlation coefficient between TCSR1 and INSIDEWEALTH is positive and statistically significant, indicating that the two variables interact in corporate social responsibility considerations, consistent with our hypothesis. While INSIDEWEALTH, VEGA, and DELTA are positively correlated with the total CSR score, the SEO discount is negatively correlated with it. These correlations imply that high CSR firms tend to have high levels of executive firm-related wealth and a higher proportion of equity-based compensation. Table 3 also suggests that multicollinearity is not a major concern in our regression analysis.

### 4.2. Regression Results

#### 4.2.1. The Effects of CSR and Firm-Related Wealth on the Post-SEO Stock Performance

We first examine whether corporate social responsibility and executive firm-related wealth are associated with the post-SEO stock performance. Table 4 Panel A presents the results of regressing the firms’ post-SEO stock performance on the corporate social responsibility score (TCSR), executive firm-related wealth (INSIDEWEALTH), and other control variables for the sample of SEOs in the period 1991-2015. We include calendar year and industry dummy variables as controls for calendar year and industry fixed effects. Consistent with our hypothesis, corporate social responsibility (TCSR) and managers’ firm-related wealth (INSIDEWEALTH) are positively associated with the post-SEO stock performance. Specifically, the coefficients on TCSR1 and INSIDEWEALTH in column 1 are 0.0029 (p-value < 0.01) and 0.0042 (p-value < 0.01), both statistically significant at the one percent level.
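The SEODISCOUNT definition above translates directly into code (a minimal Python sketch; the prices in the example are hypothetical):

```python
def seo_discount(close_before, offer_price):
    """SEO discount: prior-day close minus offer price, over prior-day close."""
    return (close_before - offer_price) / close_before

# A hypothetical $25.00 prior-day close and $24.10 offer price give 3.6%,
# close to the 3.59% sample mean reported above.
print(f"{seo_discount(25.00, 24.10):.2%}")
```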
Similarly, in column 2 the coefficients on the second proxy for CSR (TCSR2) and on INSIDEWEALTH are also positive and statistically significant. As expected, the coefficients on the other control variables are consistent with the literature on the determinants of post-SEO stock performance. For example, the coefficient on VOLATILITY is -0.6992 (p-value < 0.01), indicating that the greater the price volatility, the lower the post-SEO stock return performance. In Table 4 Panel B, we present the results for our equation (2) regressions. Instead of using INSIDEWEALTH along with TCSR as the main explanatory variables of interest, we use two measures of pay-performance sensitivity, namely DELTA and VEGA, in our OLS regressions. The results show that while the coefficients on DELTA (0.0028, p-value < 0.05 in model 1 and 0.027, p-value < 0.01 in model 2) are positive and significant, the coefficients on VEGA are negative but not statistically significant. The results provide evidence that managers with a high proportion of equity in their compensation packages are associated with high CSR activities. This suggests that managers with a high level of firm-related wealth and a high equity-based proportion in their compensation packages are more likely to engage in CSR activities. As a result, coupled with the positive effects of CSR on firm reputation and financial performance, managers can maximize their wealth through the SEO episodes. Overall, the results from Table 4 support our hypotheses H1A and H1B, respectively. We find that managers with a higher proportion of equity-based compensation are associated with a high level of CSR activities. We also find some evidence that managers are more likely to engage in CSR activities, especially in the SEO sample firms. CSR may mitigate the negative market response to new equity issues, thereby reducing the negative effect of stock dilution on the managers' firm-related wealth.

#### 4.2.2. The Effects of CSR and Firm-Related Wealth on the SEO Discount

Table 5 reports the results from our regression equation (3), which examines whether CSR has a mitigating effect on the SEO discount. As shown in the literature, CSR has positive effects on firm reputation and helps reduce investors' perception of firm performance uncertainty during the issuing period. We expect that high CSR firms are more likely to experience lower levels of the SEO discount, because the SEO discount reflects the cost of uncertainty about firm value as well as the underwriters' costs of acquiring information. Table 5 Panel A provides results from a univariate test. We sorted the sample SEO firms into quintiles based on their CSR scores (TCSR1 and TCSR2). The top quintile is classified as the High CSR group and the bottom quintile as the Low CSR group. Using both the TCSR1 and TCSR2 measures, we find that new issues of high CSR firms, on average, have a discount of 3.33 (3.25) percent, while those of low CSR firms have a discount of 3.78 (3.66) percent, which is 45 (41) basis points higher. Table 5 Panel B provides the results of our regression of SEODISCOUNT on TCSR and other control variables. Our hypothesis H2A suggests that issuing firms with high CSR should experience less SEO discount. Consequently, we test whether, in the presence of CSR activities, underwriters are more willing to set a higher offer price for their client firms, thereby lowering the SEO discount. The results reported in Table 5 Panel B support our predictions.
The coefficients on TCSR1 and TCSR2 are both negative and statistically significant (-0.0072 and -0.0028; p-value < 0.05 for TCSR1 and TCSR2, respectively). The results suggest that CSR allows underwriters to set higher offer prices for new issues, thereby protecting shareholders' wealth from the negative impact of the SEO discount on the firm's new equity offerings. To further examine the effects of managers' firm-related wealth, the sensitivity of executive wealth, and the interaction between managers' firm-related wealth and CSR on the pricing of SEOs, we add an interaction term TCSR*INSIDEWEALTH (columns 1 and 2) along with TCSR and INSIDEWEALTH, and VEGA (columns 3 and 4) in our regressions. Table 6 presents the results of our regression estimation of equation (4). The coefficients on TCSR and INSIDEWEALTH are both negative and significant, indicating that TCSR and INSIDEWEALTH are negatively associated with the SEO discount. This implies that underwriters perceive CSR as a value-added activity of the firm, thereby rewarding issuing firms with lower SEO discounts. Table 6 columns 3 and 4 also reveal the association between pay-performance sensitivity and the SEO discount. While the coefficients on DELTA are negative and significant (-0.0071 and -0.0074; p-value < 0.01 for TCSR1 and TCSR2, respectively), the coefficients on VEGA are positive and significant (0.0036 and 0.0035; p-value < 0.01 for TCSR1 and TCSR2, respectively). This is consistent with our hypothesis that the executives' portfolio sensitivity to changes in stock prices (DELTA) has a negative effect on the SEO discount. The positive effect of VEGA on the SEO discount suggests that underwriters, in anticipation of the risk-taking behavior of managers with large vegas, are more likely to set lower offer prices because of the higher level of price uncertainty of the issuing firms. Taken together, these findings show that CSR attenuates the negative effects of information asymmetry and the price uncertainty of issuing firms, thereby reducing the SEO discounts on their new equity offerings. The results confirm our H2A and H2B, with consistently negative coefficients on TCSR and INSIDEWEALTH and a positive coefficient on VEGA across all model specifications. The coefficients on the other control variables, such as PRECAR, URANK, SEOPROCEEDS, MB, TICK, and LNPRICE, are all statistically significant and their signs are in line with previous studies.

## 5. Conclusion

In this study, we analyze the roles of CSR and executive compensation structure in the pricing of seasoned equity offerings. Specifically, we examine the role of CSR in reducing the level of information asymmetry between managers and future shareholders of issuing firms through seasoned equity offerings. More importantly, we investigate the interaction between executive compensation structure and CSR on the SEO discount. Consistent with prior studies, we confirm that CSR attenuates the impact of information asymmetry and price uncertainty on the pricing of the offers and hence the SEO discount. We show that CSR reinforces the impact of managers' firm-related wealth on the pricing of seasoned equity offerings. We find a positive association between CSR and issuing firms whose managers hold a high proportion of equity-based compensation. In summary, CSR activities and executive compensation structure have significant effects on the SEO discount.
Underwriters reward high CSR issuing firms with a lower degree of SEO discount through the underwriting process because CSR helps reduce the price uncertainty of issuing firms as well as the costs of acquiring information, thus reducing the information asymmetry between managers and outside investors. In addition, the executive compensation structure also has a pronounced effect on the issuing firms’ post-SEO performance and the SEO discount. In short, issuing firms whose managers have a high level of pay-performance sensitivity, especially a high proportion of equity-based compensation, experience a lower degree of the SEO discount. Overall, our study extends the literature in several ways. First, this is the first study examining the effect of CSR on the pricing of seasoned equity offerings. Second, we provide new evidence that pay-performance sensitivity affects the pricing of new offers. Our study also sheds light on the underwriters' assessment of the role of CSR in reducing information asymmetry and the pre-SEO price uncertainty of issuing firms.

## References

1. Altinkiliç, O., & Hansen, R. S. (2003). Discounting and underpricing in seasoned equity offers. Journal of Financial Economics, 69(2), 285–323. https://doi.org/10.1016/S0304-405X(03)00114-4
2. Black, F., & Scholes, M. (1973). The pricing of options and corporate liabilities. Journal of Political Economy, 81(3), 637–654. https://doi.org/10.1086/260062
3. Brazel, J. F., & Webb, E. (2006). CEO compensation and the seasoned equity offering decision. Managerial and Decision Economics, 27, 363–378. https://doi.org/10.1002/mde.1268
4. Brisker, E. R., Autore, D. M., Colak, G., & Peterson, D. R. (2014). Executive compensation structure and the motivations for seasoned equity offerings. Journal of Banking and Finance, 40, 330–345. https://doi.org/10.1016/j.jbankfin.2013.12.003
5. Carlson, M., Fisher, A., & Giammarino, R. (2010). SEO risk dynamics. Review of Financial Studies, 23, 4026–4077. https://doi.org/10.1093/rfs/hhq083
6. Chatterji, A. K., Levine, D. I., & Toffel, M. W. (2009). How well do social ratings actually measure corporate social responsibility? Journal of Economics and Management Strategy, 18(1), 125–169. https://doi.org/10.1111/j.1530-9134.2009.00210.x
7. Core, J., & Guay, W. (2002). Estimating the value of employee stock option portfolios and their sensitivities to price and volatility. Journal of Accounting Research, 40(3), 613–630. https://doi.org/10.1111/1475-679X.00064
8. Corwin, S. A. (2003). The determinants of underpricing for seasoned equity offers. Journal of Finance, 58(5), 2249–2279. https://doi.org/10.1111/1540-6261.00604
9. Datta, S., Iskandar-Datta, M., & Raman, K. (2005). Executive compensation structure and corporate equity financing decisions. Journal of Business, 78(5), 1859–1890. https://doi.org/10.1086/431445
10. Edmans, A. (2011). Does the stock market fully value intangibles? Employee satisfaction and equity prices. Journal of Financial Economics, 101(3), 621–640. https://doi.org/10.1016/j.jfineco.2011.03.021
11. Feng, Z. Y., Chen, C. R., & Tseng, Y. J. (2018). Do capital markets value corporate social responsibility? Evidence from seasoned equity offerings. Journal of Banking and Finance, 94(C), 54–74. https://doi.org/10.1016/j.jbankfin.2018.06.015
12. Jensen, M. C., & Meckling, W. H. (1976). Theory of the firm: Managerial behavior, agency costs and ownership structure. Journal of Financial Economics, 3(4), 305–360. https://doi.org/10.1016/0304-405X(76)90026-X
13. Kim, Y., & Park, M. S. (2005).
Pricing of seasoned equity offers and earnings management. Journal of Financial and Quantitative Analysis, 40(2), 435–463. https://doi.org/10.1017/s0022109000002374
14. Kim, Y., Park, M. S., & Wier, B. (2012). Is earnings quality associated with corporate social responsibility? Accounting Review, 87(3), 761–796. https://doi.org/10.2308/accr-10209
15. Lins, K. V., Servaes, H., & Tamayo, A. (2017). Social capital, trust, and firm performance: The value of corporate social responsibility during the financial crisis. Journal of Finance, 72(4), 1785–1824. https://doi.org/10.1111/jofi.12505
16. Mola, S., & Loughran, T. (2004). Discounting and clustering in seasoned equity offering prices. Journal of Financial and Quantitative Analysis, 39(1), 1–23. https://doi.org/10.1017/S0022109000003860
17. Myers, S. C., & Majluf, N. S. (1984). Corporate financing and investment decisions when firms have information that investors do not have. Journal of Financial Economics, 13(2), 187–221. https://doi.org/10.1016/0304-405X(84)90023-0
18. Sundaram, R. K., & Yermack, D. L. (2007). Pay me later: Inside debt and its role in managerial compensation. Journal of Finance, 62(4), 1551–1588. https://doi.org/10.1111/j.1540-6261.2007.01251.x
19. Yang, H. C., & Kim, Y. E. (2018). The effects of corporate social responsibility on job performance: Moderating effects of authentic leadership and meaningfulness of work. Journal of Asian Finance, Economics and Business, 5(3), 121–132. https://doi.org/10.13106/jafeb.2018.vol5.no3.121
20. Yoon, B., & Lee, J. H. (2019). Corporate social responsibility and information asymmetry in the Korean market: Implications of chaebol affiliates. Journal of Asian Finance, Economics and Business, 6(1), 21–31. https://doi.org/10.13106/jafeb.2019.vol6.no1.21
21. Zahari, A. R., Esa, E., Rajadurai, J., Azizan, N. A., & Tamyez, P. F. M. (2020). The effect of corporate social responsibility practices on brand equity: An examination of Malaysia’s top 100 brands. Journal of Asian Finance, Economics and Business, 7(2), 271–280. https://doi.org/10.13106/jafeb.2020.vol7.no2.271
{}
# Naming

Every object in PsyNeuLink has a name attribute, a string used to refer to it in printouts and displays. The name of an object can be specified in the name argument of its constructor. An object's name can be reassigned, but this should be done with caution, as other objects may depend on its name.

## Default Names

If the name of an object is not specified in its constructor, a default name is assigned. Some classes of objects use class-specific conventions for default names (see individual classes for specifics). Otherwise, the default name is handled by the Registry, which assigns a default name based on the name of the class, with a hyphenated integer suffix (<object class name>-n), beginning with '0', that is incremented for each additional object of that type requiring a default name. For example, the first TransferMechanism to be constructed without specifying its name will be assigned the name 'TransferMechanism-0', the next 'TransferMechanism-1', and so on.

## Duplicate Names

If the name of an object specified in its constructor is the same as the name of an existing object of that type, its name is appended with a hyphenated integer suffix (<object name>-n) that is incremented for each additional duplicated name, beginning with '1'. The object with the original name (implicitly instance '0') is left intact. There is one exception to this rule, for the naming of States. States of the same type that belong to different Mechanisms can have the same name (for example, TransferMechanism-0 and TransferMechanism-1 can both have an InputState named INPUT_STATE-0, the default name for the first InputState); however, if a State is assigned a name that is the same as that of another State of the same type belonging to the same Mechanism, it is treated as a duplicate, and its name is suffixed as described above.
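A minimal sketch of both behaviors (assuming a fresh Python session, since the Registry counters are global; the expected output follows from the rules stated above):

```python
import psyneulink as pnl

# Default names: class name plus an incrementing hyphenated suffix from '0'.
t0 = pnl.TransferMechanism()
t1 = pnl.TransferMechanism()
print(t0.name, t1.name)  # TransferMechanism-0 TransferMechanism-1

# Duplicate user-specified names get a hyphenated suffix starting at '1';
# the object with the original name is left intact.
a = pnl.TransferMechanism(name='my_mech')
b = pnl.TransferMechanism(name='my_mech')
print(a.name, b.name)    # my_mech my_mech-1
```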
{}
# Rooty & Complex

Algebra Level 3

If $$z = -\frac{\sqrt{2}}{2} + \frac{\sqrt{2}}{2}i$$, what is the value of $$z^{-68}?$$
{}
Enter the pressure range (psi), the voltage reading (volts), the upper voltage limit (volts), and the voltage lower limit (volts) into the calculator to determine the Pressure From Voltage.

## Pressure From Voltage Formula

The following formula is used to calculate the Pressure From Voltage.

P = PR * (Vr - VL) / (Vu - VL)

• Where P is the Pressure From Voltage (psi)
• PR is the pressure range (psi)
• Vr is the voltage reading (volts)
• VL is the voltage lower limit (volts)
• Vu is the voltage upper limit (volts)

## How to Calculate Pressure From Voltage?

The following example problems outline how to calculate the Pressure From Voltage.

Example Problem #1

1. First, determine the pressure range (psi). In this example, the pressure range (psi) is given as 100.
2. Next, determine the voltage reading (volts). For this problem, the voltage reading (volts) is given as 32.
3. Next, determine the voltage lower limit (volts). In this case, the voltage lower limit (volts) is found to be 15.
4. Next, determine the upper voltage limit (volts). For this problem, this is 30.
5. Finally, calculate the Pressure From Voltage using the formula above: P = PR * (Vr - VL) / (Vu - VL). Inserting the values from above and solving yields: P = 100 * (32 - 15) / (30 - 15) = 113.33 (psi)

Example Problem #2

Using the same method as above, determine the variables required by the equation. For this example problem, these are provided as: pressure range (psi) = 500
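The formula translates directly into a small function (a minimal Python sketch using the values from Example Problem #1):

```python
def pressure_from_voltage(pr, vr, vl, vu):
    """P = PR * (Vr - VL) / (Vu - VL); voltages in volts, PR in psi."""
    return pr * (vr - vl) / (vu - vl)

# Example Problem #1 from above:
print(pressure_from_voltage(pr=100, vr=32, vl=15, vu=30))  # 113.33... psi
```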
{}
## TfL Orders 10 Hydrogen Buses from ISE

##### 13 November 2007

Transport for London (TfL) has signed a £9.65 million (US$20 million) contract with ISE for five hydrogen fuel cell buses and five hydrogen internal combustion engine buses. The Mayor of London, Ken Livingstone, announced that the ten new hydrogen-powered buses will join London’s bus fleet by 2010. In February 2006, the Mayor announced the London Hydrogen Transport program, which aims to introduce 70 new hydrogen vehicles into London, 10 of which are to be buses. (Earlier post.) The contract with ISE is one of the world’s first commercial contracts for hydrogen buses. The vehicles will be operated by First on behalf of TfL.

Hydrogen is a fuel of the future as it improves air quality and does not produce the harmful emissions which are causing catastrophic climate change. London is now the first city in Europe to commit to a hydrogen bus fleet of this size, which will match traditional diesel buses in terms of performance. This represents a huge step forward from the previous hydrogen trials in the capital and is an important step towards my target of having five per cent of all public sector fleet vehicles powered by hydrogen by 2015. —Ken Livingstone

The contract covers not only the initial cost of the vehicles themselves but also specialist maintenance and replacement parts over a five-year period after delivery. The Department for Business Enterprise & Regulatory Reform has provided a grant of £2.6 million (US$5.4 million) towards TfL’s hydrogen bus program. ISE will work with a number of sub-contractors, including The Wright Group, a bus manufacturer based in Northern Ireland, and Ballard. The new hydrogen buses incorporate hybrid technology to allow them to match their diesel counterparts in terms of range and operating hours. The well-to-wheel CO2 emissions for both types of bus will be calculated after delivery, when the volume of hydrogen required to power the buses in operation has been confirmed. Reductions in carbon dioxide emissions compared to a diesel bus are expected to be 50% for the fuel cell buses and 20% for the internal combustion engine buses. The procurement process to secure a hydrogen refuelling supplier is underway and TfL expects to have chosen a supplier early in 2008.

[Figure: Components of the HHICE cradle.]

The current ISE Hydrogen Hybrid Internal Combustion Engine (HHICE) bus powertrain combines the Ford Triton 6.8-liter V10 hydrogen engine with a Siemens 145 kW motor/generator to power dual 170 kW drive motors with rated torque of 440 Nm (324 lb-ft) each and peak torque of 900 Nm (664 lb-ft). Fuel is 58 kg of hydrogen stored at 5,000 psi. The ISE HHICE currently uses either a 200 kW Cobasys NiMH battery pack or a 200 kW ultracapacitor bank from Maxwell. ISE is working with Ballard as part of a consortium to provide BC Transit with up to 20 fuel cell buses in a fleet that will roll onto British Columbia roads by the end of 2009. (Earlier post.) ISE has also worked with UTC Power fuel cell systems on several fuel-cell hybrid bus projects in the past. (Earlier post.) The London fuel cell buses will be the first to incorporate a 75 kW version of the new HD6 module in a fuel cell hybrid transit bus. This lower-cost, fuel-efficient module is offered with Ballard’s 5-year or 12,000-hour warranty and is tailored for inner-city transit operation, as will be the case in London.

Comments:

So that's $2,000,000 per bus average cost.
Considering the fact that hydrogen ICEs are not much more expensive than regular ones, I'm betting the per-unit FCV cost is astronomical. Clean-diesel, CNG and hybrid buses go for $300,000 - $600,000 or so, if I recall. The difference -- $1,400,000 per bus -- will buy you 200,000 gallons of diesel fuel at English prices ($7/gal with all the taxes, which I'll assume TfL actually pays). At something like 4 mpg (hybrids are capable of at least this), that's 800,000 miles of service, or probably the entire life of the bus. So basically, you could get an entire bus plus lifetime fuel for the cost of just a bus. I know this is supposed to stimulate hydrogen research and production, so the per-unit costs go down, but can anyone say "wasteful?"

Not to mention the pollution created to make the hydrogen! Way too wasteful of funds!

The $20M for 10 buses is a small amount considering the experimental nature of the project. The experience to be gained will be invaluable in advancing the state of the art in pollution-free transportation. The cost has to be higher than that of conventional diesel buses because of the considerable development cost in design and engineering, and because the hydrogen-fuel components are practically hand-made in very small numbers, without the benefit of mass production. In time, the cost of this initial investment will be recouped on subsequent production of H2-capable buses.

Richard posted: "Not to mention the pollution created to make the hydrogen! Way too wasteful of funds!" Richard, the article stated that "reductions in carbon dioxide emissions compared to a diesel bus are expected to be 50% for the fuel cell buses and 20% for the internal combustion engine buses." That is, even if the H2 is made from fossil-fuel sources, these buses will still be less polluting than current technology.

I strongly doubt these CO2 reduction numbers if the primary energy source is a coal-fired plant; guess they assume that nuclear power plants have no emissions at all (yet again...)

Roger: If there was any reasonable chance that this technology could be made mainstream and cost-effective at some point in the medium or even distant future, you would not be seeing a patchwork of random government actors throwing bits of money at it here and there with no obvious plan. (Moreover, having moved to England temporarily, I've immediately become intensely suspicious of anything Ken Livingstone does, since he is largely irrational.) Instead, you'd see major motor manufacturers shopping around for serious financing, getting plans in place for large-scale production lines, and selling the first run of units at the final projected price -- even if at a loss -- to establish a credible market. That's more or less what Toyota did with the Prius, and what GM did with the Allison bus hybrid system. Even if you needed a fleet of five buses running on standard routes in order to collect operational data, my point is that you'd see such trials sponsored by a serious manufacturer, not a lunatic local governor, if FCVs were seriously in the offing. At minimum, the manufacturer would sell the test buses to the local authority, but the price would be in the realm of reason.

NBK: Don't let reality intrude. Red Ken's careful and thoughtful use of taxpayer-provided funds is so typical of government hubris as run by our betters. He is too wise for us mere mortals, and he assumes that the taxpayers' pockets are deep and inexhaustible enough to make any generous gesture. Especially one that gets him a headline.
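For what it's worth, the first commenter's back-of-envelope numbers check out (a minimal Python sketch; the $600,000 conventional-bus price and $7/gal diesel price are the commenter's own assumptions):

```python
# Back-of-envelope check of the cost comparison in the comments above.
contract_usd = 20_000_000
buses = 10
per_bus = contract_usd / buses            # $2,000,000 per bus

conventional_bus = 600_000                # upper end of range quoted above
difference = per_bus - conventional_bus   # $1,400,000
diesel_price = 7.0                        # $/gal, assumed UK price
gallons = difference / diesel_price       # 200,000 gallons
miles = gallons * 4                       # at 4 mpg: 800,000 miles
print(per_bus, gallons, miles)
```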
# Fundamentally, what is a perfect language model?

Suppose that we want to generate a sentence made of words according to language $$L$$:

$$W_1 W_2 \ldots W_n$$

Question: What is the perfect language model? I ask about perfect because I want to know the concept fundamentally at its fullest extent. I am not interested in knowing heuristics or shortcuts that reduce the complexity of its implementation.

# 1. My thoughts so far

## 1.1. Sequential

One possible way to think about it is moving from left to right. So first, we try to find the value of $$W_1$$. To do so, we choose the specific word $$w$$ from the space of words $$\mathcal{W}$$ that's used by the language $$L$$. Basically:

$$w_1 = \underset{w \in \mathcal{W}}{\text{arg max }} \Pr(W_1 = w)$$

Then, we move forward to find the value of the next word $$W_2$$ as follows:

$$w_2 = \underset{w \in \mathcal{W}}{\text{arg max }} \Pr(W_2 = w | W_1 = w_1)$$

Likewise for $$W_3, \ldots, W_n$$:

$$w_3 = \underset{w \in \mathcal{W}}{\text{arg max }} \Pr(W_3 = w | W_1 = w_1, W_2=w_2)$$

$$\vdots$$

$$w_n = \underset{w \in \mathcal{W}}{\text{arg max }} \Pr(W_n = w | W_1 = w_1, W_2=w_2, \ldots, W_{n-1}=w_{n-1})$$

But is this really perfect? I personally doubt it. While language is usually read and written in a given direction (e.g. left to right), it is not always done so; in many cases language is read or written in a funny order. E.g. even when I wrote this question, I jumped back and forth, then went to edit it (as I'm doing now). So I clearly didn't write it from left to right! Similarly, you, the reader: you won't really read it in a single pass from left to right, will you? You will probably read it in some funny order and go back and forth for a while until you reach an understanding. So I personally really doubt that the sequential formalism is perfect.

## 1.2. Joint

Here we find all the $$n$$ words jointly. Of course this is ridiculously expensive computationally (if implemented), but our goal here is only to know what the problem is at its fullest. Basically, we get the $$n$$ words as follows:

$$(w_1, w_2, \ldots, w_n) = \underset{(w_1,w_2,\ldots,w_n) \in \mathcal{W}^n}{\text{arg max }} \Pr(W_1 = w_1, W_2=w_2, \ldots, W_n=w_n)$$

This is a perfect representation of a language model in my opinion, because its answer is guaranteed to be correct. But there is this annoying aspect, which is that its candidate space is needlessly large! E.g. this formalism is basically saying that the following is a candidate word sequence:

$$(., Hello, world, !)$$

even though we know that in (say) English a sentence cannot start with a dot ".".

## 1.3. Joint but slightly smarter

This is very similar to 1.2. Joint, except that it drops the single bag of all words $$\mathcal{W}$$ and instead introduces several bags $$\mathcal{W}_1, \mathcal{W}_2, \ldots, \mathcal{W}_n$$, which work as follows:

• $$\mathcal{W}_1$$ is a bag that contains the words that can appear as 1st words.
• $$\mathcal{W}_2$$ is a bag that contains the words that can appear as 2nd words.
• $$\vdots$$
• $$\mathcal{W}_n$$ is a bag that contains the words that can appear as $$n$$th words.

This way, we will avoid the stupid candidates that 1.2. Joint allows, by following this:
$$(w_1, w_2, \ldots, w_n) = \underset{w_1 \in \mathcal{W}_1,\, w_2 \in \mathcal{W}_2,\, \ldots,\, w_n \in \mathcal{W}_n}{\text{arg max }} \Pr(W_1 = w_1, W_2=w_2, \ldots, W_n=w_n)$$

This is also guaranteed to be a perfect representation of a language model, yet its candidate space is smaller than the one in 1.2. Joint.

## 1.4. Joint but fully smart

Here is where I'm stuck!

Question rephrased (in case it helps): Is there any formalism that gives the perfect correctness of 1.2. and 1.3., while also being fully smart in that its candidate space is the smallest?

• Typical theory about languages uses the concept of a grammar, which results in a parse tree, representing the structure. Working left-to-right does not work for many languages. A grammar contains rules such as "a noun-phrase can consist of an article followed by a noun, or an adjective phrase followed by a noun, or ..." and so on. For example, here's a starting point: en.wikipedia.org/wiki/Probabilistic_context-free_grammar Sep 13, 2020 at 1:56
• @RobbyGoetschalckx. Thanks, that is very helpful. I looked up parse trees (thanks to you), and I think your rules example is for a constituency-based parse tree? I also found dependency-based trees, though those are not very clear to me. I guess the constituency one stores syntax (word order), while the dependency one is more about the semantics and less about the syntax (the tree doesn't encode word order)? Sep 13, 2020 at 2:21
• What kind of language do you have in mind: programming languages (like LISP, Ocaml, ...) or human languages (like French, English)? For human languages, do you assume some correct plain text representation (e.g. an UTF-8 encoded stream of bytes), or is it formatted (e.g. HTML5) with spelling mistakes, or is it sound? See also decoder-project.eu (I am on the photo) Sep 13, 2020 at 10:11
• @BasileStarynkevitch Does it matter which language? So far I'd say English, UTF-8, typo-free, but I don't see why it should matter in defining language models correctly. Any example of how language selection will affect the definition of the fundamental language model? Sep 13, 2020 at 15:35
• I am not so sure that Scheme R5RS, modern Chinese ideograms, written Latin (by Cicero...), and Egyptian hieroglyphs share some "common model" - more than what Noam Chomsky wrote about. Be sure to start a PhD thesis if you believe there is one. Consider using RefPerSys. Feel free to email me basile@starynkevitch.net for more, but do mention the URL of your question in your email ... Sep 13, 2020 at 15:43

So, a language model measures the probability of a given sentence in a language $$L$$. The sentences can have any length, and the sum of the probabilities of all the sentences in the language $$L$$ is 1. It's very difficult to compute, thus people use simplifications, e.g. assuming that if words are located far enough from each other, the occurrence of the current word doesn't depend on words that occurred far away in the past.

Each sentence is a sequence $$w_1, \dots, w_n$$ and a language model computes the probability of the sequence $$p(w_1, \dots, w_n)$$ (it's not a joint distribution yet). It can be decomposed into a joint distribution with some special tokens added: $$p(BOS, w_1, \dots, w_n, EOS)$$. BOS is the beginning-of-sentence token and EOS is the end-of-sentence token.
Then this joint distribution can be decomposed using the chain rule:

$$p(BOS, w_1, \dots, w_n, EOS) = p(BOS) \Big[ \prod\limits_{i=1}^n p(w_i | BOS, w_1, \dots, w_{i-1}) \Big] p(EOS | BOS, w_1, \dots, w_n)$$

There are 2 types of probabilities here that are usually modelled differently: the prior probability $$p(BOS)$$, which is always equal to 1 because you always have BOS as the first token of the augmented sequence, and the conditional probabilities, which can be computed as follows:

$$p(w_i | BOS, w_1, \dots, w_{i-1}) = \frac{c(BOS, w_1, \dots, w_{i-1}, w_i)}{\sum_{w \in \mathcal{W}} c(BOS, w_1, \dots, w_{i-1}, w)}$$

where $$c$$ is a counting function that measures how many times a given sequence occurred in the dataset you specified to train your model. You can notice it's a maximum likelihood estimate of the unknown conditional probabilities. Obviously, if you're using a certain dataset you compute a model of that dataset, not of a language, but that's the way to approximate the true probabilities of sentences in a language.

The EOS token is needed to make a difference between the probability of a sequence that has not finished yet and one that has, because if you take the counts from above and forget to add EOS at the end of every sentence in your dataset, you'll get probabilities that don't sum to 1 (which is bad).

• Thanks a lot! Easy to understand despite me being a beginner. I have 2 questions if your time allows: (1) Isn't that actually similar to 1.1. sequential, except for adding BOS and EOS? I guess I was wrong for assuming that 1.2. Joint offers any advantage, since 1.1. is just the result of applying the chain rule. Am I right? (2) Is there any proof of some kind that this is the truth? I can intuitively see it being true, but is there any proof that it cannot be any simpler? Sep 13, 2020 at 23:52
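To make the counting estimate above concrete, here is a minimal sketch in Python. It is illustrative only: the toy corpus and all names are mine, and the answer's full-history conditionals are truncated to a one-word history (a bigram model) so the counts stay small.

```python
from collections import defaultdict

BOS, EOS = "<s>", "</s>"

def train_bigram_lm(corpus):
    """Estimate p(w_i | w_{i-1}) by maximum likelihood counts.

    corpus: list of sentences, each a list of word strings.
    Returns a dict mapping (prev_word, word) -> probability.
    """
    pair_counts = defaultdict(int)
    prev_counts = defaultdict(int)
    for sentence in corpus:
        tokens = [BOS] + sentence + [EOS]  # augment with BOS/EOS
        for prev, word in zip(tokens, tokens[1:]):
            pair_counts[(prev, word)] += 1
            prev_counts[prev] += 1
    return {pair: n / prev_counts[pair[0]] for pair, n in pair_counts.items()}

def sentence_prob(model, sentence):
    """p(sentence) = product of conditionals, including the final EOS step."""
    tokens = [BOS] + sentence + [EOS]
    p = 1.0
    for prev, word in zip(tokens, tokens[1:]):
        p *= model.get((prev, word), 0.0)  # unseen pair -> probability 0
    return p

corpus = [["hello", "world"], ["hello", "there"]]
model = train_bigram_lm(corpus)
print(sentence_prob(model, ["hello", "world"]))  # 0.5 for this toy corpus
```

Because every sentence is terminated with EOS, the probabilities of all finite sentences generated by the model sum to 1, which is exactly the point the answer makes about forgetting the EOS counts.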
# Find the limit without using Maclaurin series

Find the limit

$$\lim_{x\to 0} \frac{1-\cos{(1-\cos{(1-\cos x)})}}{x^8}$$

without using Maclaurin series.

My attempt was to use L'Hôpital's rule, but that's just too much, and the chances of not making a mistake through repetitive differentiation are very low.

Hint: Use $1-\cos t = 2\sin^2 \frac{t}{2}$ and the chain rule for limits. (Ask if you need a further hint.)
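A possible way to finish from the hint (this continuation is mine, not the original answerer's): applying $1-\cos t = 2\sin^2\frac{t}{2}$ at each level together with $\lim_{u\to 0}\frac{\sin u}{u}=1$ shows that each $1-\cos(\cdot)$ behaves like half the square of its argument as $x \to 0$:

$$1-\cos x \sim \frac{x^2}{2},\qquad 1-\cos\Big(\frac{x^2}{2}\Big) \sim \frac{1}{2}\Big(\frac{x^2}{2}\Big)^2 = \frac{x^4}{8},\qquad 1-\cos\Big(\frac{x^4}{8}\Big) \sim \frac{1}{2}\Big(\frac{x^4}{8}\Big)^2 = \frac{x^8}{128},$$

so the limit equals $\frac{1}{128}$.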
# Find the domain and the range of the real function f defined by f(x) = |x – 1|.

Since |x – 1| is defined for every real number x, the domain of f is R. Since |x – 1| ≥ 0 for all x, with the value 0 attained at x = 1 and every non-negative value attained for some x, the range of f is the set of all non-negative real numbers, [0, ∞).
# Galois extension and discriminant

I am trying to solve some questions about Galois theory.

Let $$f(x)$$ be an irreducible polynomial over $$\mathbb{Q}$$ and $$f(x)=(x-\alpha_1)\cdots(x-\alpha_n)$$ where $$\alpha_i \in \mathbb{C}$$. The splitting field is $$K=\mathbb{Q}(\alpha_1,\ldots,\alpha_n)$$. Define the discriminant as follows:

$$D=\prod_{i<j}(\alpha_i-\alpha_j)^2$$

The first question is: Show that $$K/\mathbb{Q}$$ is a Galois extension and $$\operatorname{Gal}(K/\mathbb{Q})$$ is a subgroup of $$S_n$$.

I don't understand the relation between the discriminant and this question. I showed it directly: since the extension is normal and separable, it must be Galois, and there are $$n$$ roots, so the Galois group is a subgroup of $$S_n$$. Is this true?

And I cannot show the other questions, which are:

Show that $$D\in\mathbb{Q}$$.

If $$\sqrt{D}\in \mathbb{Q}$$ then $$\operatorname{Gal}(K/\mathbb{Q})$$ is a subgroup of $$A_n$$. (I did something about this one: I assumed $$\sqrt{D}\in \mathbb{Q}$$. Then $$K=\mathbb{Q}(\alpha_1,\ldots,\alpha_n,\sqrt{D})$$ since $$\sqrt{D}\in \mathbb{Q}$$, so the Galois group is a subgroup of $$S_n$$. But how can I show that it is a subgroup of the alternating group?)

• I don't think the first part is meant to have anything to do with the discriminant. – Hoot Nov 1, 2015 at 16:09

To show that $D\in \mathbb{Q}$, show that it is fixed by all elements of the Galois group: each $s$ in the group permutes the roots $\alpha_i$, so it only permutes the factors of $D$, hence $s(D)=D$; since the fixed field of the Galois group is $\mathbb{Q}$, it follows that $D\in\mathbb{Q}$. Now $\sqrt{D}=\prod_{i<j}(\alpha_i-\alpha_j)$; let $s$ be in the Galois group, then $s(\sqrt{D})=\operatorname{sign}(s)\sqrt{D}$, thus $s(\sqrt{D})=\sqrt{D}$ iff $\operatorname{sign}(s)=1$. So if $\sqrt{D}\in\mathbb{Q}$, every $s$ fixes $\sqrt{D}$, hence every $s$ is an even permutation and the Galois group is a subgroup of $A_n$.
## Equilibrium Constant from Electrode Potentials

1. The problem statement, all variables and given/known data

Evaluate the equilibrium constant for the formation of the triiodide ion,

I2 + I- ------> I3-

at 298 K, if E°(I2|I-) = 0.6197 V and E°(I3-|I-) = 0.5355 V.

I don't understand one thing - at equilibrium, ΔG = 0 so that E°cell = 0 (from the equation ΔG = -nFE), but E°cell is not zero for the above reaction. Where is my mistake?

Quote by Abdul Quadeer: "I don't understand one thing - at equilibrium, ΔG = 0 so that E°cell = 0 (from the equation ΔG = -nFE), but E°cell is not zero for the above reaction. Where is my mistake?"

ΔG° = -nFE° and ΔG = -nFE are two different equations; you are combining them. At equilibrium, ΔG = 0 doesn't mean that E° = 0 but that E = 0, i.e. the cell potential at that moment is zero. E° is a constant quantity for any reaction, and if it were zero the reaction couldn't proceed. For the calculation of the equilibrium constant, just use this result in the Nernst equation, i.e. at equilibrium:

E = E° - (0.059 × log Q)/n = 0   [Q = equilibrium constant]

Thank you very much.
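A possible worked calculation following that recipe (my arithmetic, assuming the two half-reactions are I2 + 2e- → 2I- and I3- + 2e- → 3I-, so that n = 2):

$$E^\circ_{cell} = 0.6197\ \text{V} - 0.5355\ \text{V} = 0.0842\ \text{V}$$

$$\log K = \frac{n E^\circ_{cell}}{0.059} = \frac{2 \times 0.0842}{0.059} \approx 2.85 \quad\Rightarrow\quad K \approx 7\times 10^{2}$$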
# Performance study of global weight window generator based on particle density uniformity

He, P.; Wu, B.; Hao, L.; Sun, G.; Li, B.; Fischer, Ulrich

##### Abstract:

Variance reduction techniques are necessary for Monte Carlo calculations in which a detailed calculation result for a large and complex model is required. The global variance reduction (GVR) method named the global weight window generator (GWWG) was proposed by the FDS team. In this paper, two typical calculation examples, the ISPRA-Fe benchmark in SINBAD (Shielding Integral Benchmark Archive Database) and the TF Coils (Toroidal Field coils) of the European HCPB DEMO (Helium Cooled Pebble Bed demonstration fusion plant), are used to study the performance of the GWWG method. The calculation results show that the GWWG method has a significant effect in accelerating the Monte Carlo calculation. Especially when globally converged calculation results are needed, the acceleration factor (FOMG) can reach $10^{5}$ or more. This proves that the GWWG method is an effective tool for deep-penetration simulations using the Monte Carlo method.

Affiliated institution at KIT: Institut für Neutronenphysik und Reaktortechnik (INR)
Publication type: Conference proceedings contribution
Publication year: 2020
Language: English
Identifier: ISBN: 978-1-71382-724-5, KITopen-ID: 1000135211
HGF program: 31.03.08 (POF III, LK 01) Neutronics
Published in: PHYSOR2020 – International Conference on Physics of Reactors: Transition to a Scalable Nuclear Future
Event: International Conference on Physics of Reactors (PHYSOR 2020), Online, 29.03.2020 – 02.04.2020
Publisher: EDP Sciences
Pages: Art. no. 18005
Series: EPJ Web of Conferences; 247
Keywords: SuperMC / Global variance reduction / Monte Carlo / Particle transport
Indexed in: Dimensions, Scopus
# Ee's SLideshow Viewer and Second Life 2.0

So, Linden Lab caught everyone by surprise and made the Beta 2.0 viewer the main release for new residents, despite there being numerous bugs that still need to be fixed. Pertinent to Ee's SLideshow Viewer are the new shared media facilities, or more precisely changes to parcel media. Upgrades to SLideshow Viewer will soon incorporate prim media, allowing interaction with websites loaded in the viewer, but a concern is that until you can be certain that all residents are using viewers that support it, you have to ask yourself whether presentations you are setting up are accessible by those who want to continue using the 1.23 viewer. I suspect that before long, and without much warning, Linden Lab will make the upgrade to 2.0 mandatory, getting rid of this worry.

In the meantime, however, there are some annoyances that everyone needs to be aware of when using parcel media, which is the SLideshow Viewer's current mechanism for displaying HTML and streaming media. I don't intend to release any upgrades to deal with these yet since most will hopefully be fixed quite quickly (see JIRA links later):

• Viewer 2.0 does not recognise files with a .mp4 extension. Silly, I know, but for some reason a movie with a .mp4 extension will not be recognised by 2.0 as a movie... even if you force the parcel settings to treat it as such. WORKAROUND: change the filename on the remote server so it has a .mpeg extension instead, and everything will now work correctly in both 2.0 and 1.23.

• Displaying only images from the web through parcel media (i.e. not images embedded in HTML, but directly linking to an image file) will not scale to fit (even when texture settings are set to do so). I.e. LL have broken scaling of images, and not fixed it yet. Note, viewer 1.23 works fine; this is only 2.0. WORKAROUND: both of these require you to have access to the server where the images are stored. The first is to embed each image in its own HTML:

<html>
<body scroll=no>
<img src="picture.jpg" width=100% height=98%>
</body>
</html>

Here the "scroll=no" part is to try and stop a scroll bar appearing when viewed in 2.0 (another annoyance). For better results you can also change the texture of the screen so repeat is set to 0.98 and then adjust the horizontal scroll to -0.05 (play around for the best result). The second method requires a server running PHP, but means you can leave all your images as they are. Save the following code as "a.php" (the shorter the name the better, as it leaves more room for the rest of the url when changing texture names in the contents of SLideshow Viewer) and place it in the same directory as the images... then use e.g. "http://domain.com/a.php?image=image.jpg" rather than "http://domain.com/image.jpg".

<html>
<body scroll=no>
<?php
// Take the image file name (or a full url) from the "image" query parameter
$image = $_GET['image'];
// Build an img tag that stretches the picture to fill the page
$show = '<img width=100% height=98% src="' . $image . '">';
echo $show;
?>
</body>
</html>

Of course, this also works if instead of image.jpg you put a full url. I have a running php script that does precisely this on this server; if anyone would like to use it rather than have their own version, please send me an IM in world (Ee Maculate) and I'd be happy to pass on the url.

• Media in 2.0 loads a lot slower than in 1.23. This means that if you have slides changing from one url to another, there will be a delay in the image being displayed. Shared media seems better designed for a prim to have one url associated with it that you use as a web browser, rather than swapping between HTML, images, movies and textures.
Hopefully this will improve with future updates. WORKAROUND: just be aware if giving a presentation from 1.23 that other avatars using 2.0 will not be updating images as quickly as you.
# Getting the Right Vector from the Forward Vector

I'm currently working on a small Camera (ArcBall) and I am finally starting to understand how it will work. I will first create a basic View Matrix using a LookAt function. Then, I will send the Camera Position and the Camera Target to a function which will:

1: Calculate the Vector of (Camera Position - Camera Target)
2: Rotate this vector by X Pitch
3: Rotate the same vector by Y Yaw

This will determine the new Camera position. Now I need to find the Up Vector to be able to use LookAt(), so I need to do the cross product of the forward and the right vector to find the Up Vector.

4: I will do (Camera Target - Camera Position) and normalize it, so this will be my new Forward vector
5: ??? I need to find the right vector.

How can I find the Right vector if I do not have the Up vector? If you see flaws with my ArcBall camera, tell me; I am new to OpenGL and I'm trying to learn.

• Have you completed High School mathematics? I ask because the manner in which any answer is presented, concisely or verbosely, mathematically sophisticated or naïve, depends on your background. – Pieter Geerkens May 19 '16 at 3:10
• Not really; well, High School yes, but where I lived we never learned Matrices and Vectors. Sorry! – Gabriel Roy May 19 '16 at 3:20
• @GabrielRoy It will really be difficult to use OpenGL for 3D games without a good understanding of Linear Algebra. I strongly recommend learning everything you can about Vectors, Matrices, Transformations, and if you are so daring, Quaternions. – user5665 May 19 '16 at 5:17

For example, if I am looking forward along the z axis forwardVector = vec3(0,0,1), then I could have up be along the y axis upVector = vec3(0,1,0) and right therefore be along the x axis rightVector = vec3(1,0,0), or I could have up be along the -y axis upVector = vec3(0,-1,0) and therefore right would be along the -x axis rightVector = vec3(-1,0,0). Or I could even have 'up' be along the x axis upVector = vec3(1,0,0), which would mean that right is along the y axis rightVector = vec3(0,1,0). And so forth. No one "up" or "right" vector can be considered "correct" if all you have is a forward vector, without imposing some extra constraints on the system.
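The usual way LookAt-style helpers resolve this ambiguity is to impose the extra constraint the answer mentions: a fixed "world up" direction. Here is a minimal sketch (mine, not from the thread); the choice of world up and the cross-product order, which fixes the handedness, are both assumptions you are free to change.

```python
import numpy as np

def camera_basis(position, target, world_up=np.array([0.0, 1.0, 0.0])):
    """Build an orthonormal camera basis from position/target plus a chosen world up.

    Returns (forward, right, up). Conventions assumed here: forward points
    from the camera towards the target, and right = world_up x forward
    (one possible handedness); swap the cross-product order for the other.
    """
    forward = target - position
    forward = forward / np.linalg.norm(forward)

    right = np.cross(world_up, forward)  # degenerate if forward is parallel to world_up
    right = right / np.linalg.norm(right)

    up = np.cross(forward, right)  # unit length already: forward and right are orthonormal
    return forward, right, up

f, r, u = camera_basis(np.array([0.0, 0.0, -5.0]), np.array([0.0, 0.0, 0.0]))
print(f, r, u)  # forward (0,0,1), right (1,0,0), up (0,1,0) under these conventions
```

If forward ever becomes parallel to the chosen world up (looking straight up or down), the cross product degenerates and you must fall back to another reference axis.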
# Section 5.7 Calculus 2 Comparison Tests

## 5.7 Comparison Tests

### 5.7.1 Direct Comparison Test

• Following is a list of sequence formulas ordered from larger to smaller (for sufficiently large $$n$$).
  • $$n^n$$
  • $$n!$$
  • $$b^n$$ where $$b>1$$ (such as $$2^n,e^n,10^n$$…)
  • $$n^p$$ where $$p>0$$ (such as $$\sqrt{n},n,n^4$$…)
  • $$\log_b n$$ where $$b>1$$ (such as $$\log_{10}(n),\ln(n),\log_2(n)$$…)
  • any positive constant
• Example Show that $$\frac{m^3+7}{4^m+5}\leq 2(\frac{1}{2})^m$$ for sufficiently large values of $$m$$.
• Suppose $$\sum_{n=N}^\infty a_n$$ is a series with non-negative terms.
  • If there exists a convergent series $$\sum_{n=M}^\infty b_n$$ with non-negative terms where $$a_n\leq b_n$$ for sufficiently large $$n$$, then $$\sum_{n=N}^\infty a_n$$ converges as well.
  • If there exists a divergent series $$\sum_{n=M}^\infty b_n$$ with non-negative terms where $$a_n\geq b_n$$ for sufficiently large $$n$$, then $$\sum_{n=N}^\infty a_n$$ diverges as well.
• Example Show that $$\sum_{m=0}^\infty \frac{m^3+7}{4^m+5}$$ converges by comparing with the series $$\sum_{m=0}^\infty 2(\frac{1}{2})^m$$.
• Example Does $$\sum_{n=1}^\infty\frac{2}{n^{1/3}+5}$$ converge or diverge? (A worked sketch of this one follows the review exercises below.)
• Example Does $$\sum_{k=3}^\infty\frac{e^k}{e^{2k}-1}$$ converge or diverge?
• Example Does $$\sum_{m=2}^\infty(m\ln m)^{-1/2}$$ converge or diverge?

### 5.7.2 Limit Comparison Test

• Suppose $$\sum_{n=N}^\infty a_n$$ is a series with non-negative terms. If there exists a series $$\sum_{n=M}^\infty b_n$$ with non-negative terms where $$0<\lim_{n\to\infty}\frac{a_n}{b_n}<\infty$$, then either both series converge or both series diverge.
• Example Does $$\sum_{n=1}^\infty\frac{2}{n^{1/3}+5}$$ converge or diverge?
• Example Does $$\sum_{i=0}^\infty\frac{i^2+3i+7}{3i^4+2i^2+5}$$ converge or diverge?
• Example Does $$\sum_{n=42}^\infty\frac{2^n+5^n}{3^n+4^n}$$ converge or diverge?

### Review Exercises

1. Does $$\sum_{n=0}^\infty\sqrt{\frac{n}{n^4+7}}$$ converge or diverge?
2. Does $$\sum_{n=3}^\infty\frac{4}{n^{0.8}-1}$$ converge or diverge?
3. Does $$\sum_{j=2}^\infty\frac{e^j}{e^{2j}+1}$$ converge or diverge?
4. Does $$\sum_{k=10}^\infty\frac{\sin^2(k)}{k^3}$$ converge or diverge?
5. Does $$\sum_{m=4}^\infty\frac{1}{\ln m}$$ converge or diverge?
6. Does $$\sum_{n=4}^\infty\frac{5}{2n+3}$$ converge or diverge?
7. Does $$\sum_{m=1}^\infty\frac{1}{1+2+\dots+(m-1)+m}$$ converge or diverge? (Hint: show that $$\frac{1}{1+2+\dots+(m-1)+m} = \frac{2}{(1+m)+(2+m-1)+\dots+(m-1+2)+(m+1)}$$.)
8. Does $$\sum_{m=0}^\infty\frac{2m}{(m^2+1)^2}$$ converge or diverge?
9. Does $$\sum_{n=1}^\infty\sqrt{\frac{n+1}{n^2+3}}$$ converge or diverge?

Solutions

## Textbook References

• University Calculus: Early Transcendentals (3rd Ed)
  • 9.4
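As a worked sketch of the direct comparison example $$\sum_{n=1}^\infty\frac{2}{n^{1/3}+5}$$ flagged above (this solution is mine, not part of the original notes): for $$n\geq 1$$ we have $$5\leq 5n^{1/3}$$, so

$$n^{1/3}+5 \leq 6n^{1/3} \quad\Longrightarrow\quad \frac{2}{n^{1/3}+5} \geq \frac{2}{6n^{1/3}} = \frac{1}{3}\cdot\frac{1}{n^{1/3}},$$

and since $$\sum \frac{1}{n^{1/3}}$$ is a divergent $$p$$-series ($$p=\frac{1}{3}\leq 1$$), the series diverges by direct comparison.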
kde/slack-desc/filelight:

``````# HOW TO EDIT THIS FILE:
# The "handy ruler" below makes it easier to edit a package description. Line
# up the first '|' above the ':' following the base package name, and the '|'
# on the right side marks the last column you can put a character in. You must
# make exactly 11 lines for the formatting to be correct. It's also
# customary to leave one space after the ':'.

          |-----handy-ruler------------------------------------------------------|
filelight: filelight (file system monitor)
filelight:
filelight: Filelight allows you to quickly understand exactly where your
filelight: diskspace is being used by graphically representing your file system.
filelight:
filelight: filelight's home page is: http://utils.kde.org/projects/filelight
filelight:
filelight:
filelight:
filelight:
filelight:
``````
# Creating a Bar Chart in R

In R, we can create a bar plot using the barplot() function. A Bar Plot or Bar Graph is primarily used to compare values. It presents grouped data using rectangular bars whose lengths are proportional to the values that they represent.

Let's take our Product Sales data, where we have the Revenue and Gross Margin for each order along with various attributes such as ProductLine, RetailerType, OrderMethod etc. We can use this data to create a bar chart which plots total or average sales as bars on the y-axis, with one of these factors that we are interested in, such as ProductLine, on the x-axis.

The first step is to group the Total Sales by ProductLine using the tapply() function as shown below:

> sales_by_productline <- tapply(sales$Revenue, sales$ProductLine, sum)

This gives us the sales to be plotted as bars. We can now plot the chart using the barplot() function.

> barplot(sales_by_productline)

The above function call will create the bar chart. We can enhance the chart by adding a title and axis labels.

> barplot(sales_by_productline, main="Sales by ProductLine", xlab="ProductLine", ylab="$")

The resulting bar plot is displayed below:

### Adding Colors to Bars

We can add different colors to the bars in the bar plot by adding the col argument. The col argument is a vector of colors. R has some inbuilt functions to generate vectors of colors: for example, the gray() function generates a vector of grays. Similarly, the rainbow() function generates a vector of rainbow colors. Alternatively, you can explicitly supply a vector containing color codes. In the following example, we extend our bar plot by painting the bars with rainbow colors:

> barplot(sales_by_productline, col=rainbow(5), main="Sales by ProductLine", xlab="ProductLine", ylab="$")

The graph will now look as follows:

To supply explicit colors, you would pass the col argument as col = c("Red", "Green", "Blue", ...). Alternatively, you can also specify the hex codes for the colors.

The base graphics library provides only basic features for plotting charts. For example, in the above bar chart, we could have plotted mean revenue values instead of totals. That would have worked fine. However, suppose we also wanted to plot max and min sales for each product line as markers above and below the mean bars. That would be very complicated to achieve here. However, the same thing could easily be done with other plotting libraries such as gplots. We will learn about the gplots library in a separate course.
1. zaphod: [question posted as an attached graph of a sinusoid to be modelled as y = a sin(bx) + c]

2. zaphod: [attachment]

3. satellite73: $$a$$ is the amplitude. Since this starts at 3 and goes up to 9, then comes back down to 3 and then to -3, the amplitude is 6.

4. satellite73: That is, the range is of length 12, from -3 to 9, so the amplitude is half of that, therefore $$a=6$$.

5. zaphod: Is there any other method to solve it, with equations?

6. satellite73: From your eyes you see that the period is $$\pi$$. The period of $$\sin(bx)$$ is $$\frac{2\pi}{b}$$, so set $$\frac{2\pi}{b}=\pi$$ and solve for $$b$$.

7. satellite73: No, there are no equations here; you have to visualize, since you are given a picture.

8. satellite73: Well, there is an equation to find $$b$$. It is $\frac{2\pi}{b}=\pi$, but you only know that the period is $$\pi$$ from looking at the graph.

9. zaphod: And I know c; now can I substitute it in the main equation and find a?

10. satellite73: That is the entire point of this exercise: not to use equations, but to visualize the period and amplitude from the picture.

11. satellite73: We know $$c=3$$ because this is the graph of sine lifted up 3 units.

12. zaphod: Can you explain why the period of sin(bx) is 2pi/b?

13. satellite73: It is always the case that the period of $$\sin(bx)$$ is $$\frac{2\pi}{b}$$. We can think of it this way: since $$\sin$$ is periodic with period $$2\pi$$, it does everything on the interval $$[0,2\pi)$$. Now if $$bx=0$$ that means $$x=0$$, and if $$bx=2\pi$$ that means $$x=\frac{2\pi}{b}$$, so that gives you the period.

14. allamiro: Sub x = pi, then x = 2pi. When x = pi, y = 3; when x = 2pi, y = 9. Then solve the equations.

15. zaphod: @satellite73 The period is for one complete wave, right? How come it's 2pi? It has to be pi.

16. allamiro: When x = 0, y = 0; then solve the equations for a, b and c.

17. zaphod: Can you show the working @allamiro?

18. allamiro: Disregard the 2pi thing; just x = pi and x = 0.

19. allamiro: What do you mean, show the work? x = pi, y = 3: 3 = a sin(b*pi) + c. x = 0, y = 0: 0 = a sin(b*0) + c.

20. zaphod: Yes?

21. allamiro: Sorry, again: y = 3 when x = 0. I didn't focus on the graph.

22. allamiro: So c = 3.

23. zaphod: OK, how do you find b now?

24. allamiro: Yes, so now let's say 9 = a sin(bx) + 3. The highest value is when sin(bx) = 1, so bx = pi/2; from there a = 6.

25. zaphod: b?

26. allamiro: y = 3 when x = pi/2: 3 = 6 sin(2b*pi) + 3, so 6 sin(2b*pi) = 0, sin(2b*pi) = sin(pi), b = 1/2.

27. allamiro: y = 3 when x = 2pi *correction

28. allamiro: :)
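Pulling the thread together (my summary, assuming the graph the posters describe: maximum 9, minimum -3, midline 3, period $$\pi$$):

$$a=\frac{9-(-3)}{2}=6,\qquad \frac{2\pi}{b}=\pi\ \Rightarrow\ b=2,\qquad c=\frac{9+(-3)}{2}=3,$$

so $$y=6\sin(2x)+3$$ matches the described graph. Note that this uses the period, as satellite73 suggested; a single point, as in post 26, cannot determine $$b$$ uniquely.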