http://nbcgib.uesc.br/lec/software/editores/tinn-r/en
Sunday, 21 September 2014

## Tinn-R

#### 2. Description

Tinn-R is a generic ASCII/UNICODE editor/word processor for the Windows operating system, well integrated with R, with characteristics of a Graphical User Interface (GUI) and an Integrated Development Environment (IDE). It is a project registered under the General Public License (GPL), that is, it is open source software.

#### 3. Purpose

The purpose of the Tinn-R project is to facilitate the learning and use of the full potential of the R environment for statistical computing. For novice users it greatly accelerates the learning of R. For experienced users it provides advanced editing features (R, Noweb, LaTeX, Txt2tags), processing, format conversion (Noweb, LaTeX, Txt2tags, Pandoc) and compilation of LaTeX documents, among other formats. The productivity of all work involving text files (scripts, documentation, etc.) can be considerably increased by the efficient use of Tinn-R's resources. In a nutshell, it is a tool case for editing and word processing: easy to use for a novice, and very flexible and versatile for experienced/advanced users. Some R users may prefer other editors/GUIs which are more powerful and have more features, such as Vim + Vim-R-plugin and Emacs + ESS, the two most widely used; for both, however, learning is much more difficult.

#### 4. Key features

- R
  - Recognizes Rgui.exe and Rterm.exe
  - Supports RNOWEB (knitr and Sweave)
  - Object explorer (graphical user interface - GUI - for R, with selection and filter options)
  - Several options for sending instructions (file, selection, blocks, lines) and control of the R interpreter
- Edit
  - Advanced colouring of several languages' syntax
  - Supports macros
  - Completion (based on an XML database, customizable and expandable)
- Processing
  - Basic support for LaTeX
  - Format conversion (Pandoc, Txt2tags and Deplate)
  - Spell checking for multiple languages
  - Search and substitution on files and folders
- Content management
  - Interface for project management

#### 5. A little history

The project began in mid-2003, six months after the current project coordinator (CPC) started working with the R environment. In August 2003 he decided to adopt R as the main tool in the teaching of statistics (his main activity) and also in statistical data analysis (his second main activity). The initial objectives of the project with respect to R were:

- Developing a simple and flexible editor/GUI under the Windows operating system
- Facilitating the work of the CPC regarding data analysis using R

After the CPC had tested almost all GUIs then available, as well as other popular editors offering resources to interact with R, he realized that he did not adapt well to either the GUIs or the editors tested. Furthermore, he worried about the difficulties related to teaching (installation, configuration and usage) in statistical computing laboratories. Among these tools, Emacs + ESS was the best known, recommended and used by experienced users; however, its configuration and usage are difficult for the novice user (the main audience of teaching on statistical computing with R) or the casual user. Additionally, the interface was not pleasant for users accustomed to the rich Windows graphical interfaces. Moreover, some projects were still fledgling, while others had problems of continuity.
The CPC imagined that an editor could be customized by adding the features that the GUI needed. Since he had long been an Object Pascal programmer, it seemed interesting to start from an open source editor written in this language and adapt it to his needs. After searching the internet and running preliminary tests, six editors developed under the Delphi IDE, then from Borland (now Embarcadero), were selected. The second stage consisted of testing performance and stability. Finally, two projects were selected:

- Tinn, in English, discontinued in 2005
- Notes, in Portuguese, also discontinued in 2005

Both had the basic features needed. Of the two, Tinn (Tinn Is Not Notepad) showed greater structural simplicity, better performance and greater stability, which led to its final selection. Although the basic features of a generic and simple editor had already been implemented by its developers, there was still much to be done in relation to the editor and the future GUI. The small group of Tinn developers was informed (although this is not a requirement of software under the General Public License - GPL) of the CPC's intention to have new features implemented. They worked together on the source code of the Tinn editor for about five to six months until realizing that, given the new requirements, it would not be possible to keep it generic, according to the original design of the Tinn project. Then, in November 2003, a new project started: Tinn-R. By December 2003 the basic features allowing communication with the R environment had already been implemented, and the program was being used by the CPC for his analyses. It would also be used in the classroom of a graduate statistics course at UESC/PPGPV (then still in preparation, and scheduled for March 2004). In January 2004 a copy of the software was forwarded to the coordinator of GUI projects of CRAN (The Comprehensive R Archive Network), Dr. Philippe Grosjean.
The project received high praise and a number of suggestions; most were implemented in the short term, while others, due to their complexity, took longer. At a later stage, Tinn-R was made available to R users on the SciViews-R home page, maintained by Philippe. The project name (Tinn-R) was one of the suggestions made by Philippe. Version 0.0.8.8 R1.04 (Jan/2004) was the first one to be released. The authors were then José Cláudio Faria and Mark de Groot; Mark was one of the remaining members of the original Tinn project team, and Philippe was then a collaborator. This was the first published version of the project. Subsequently, given his effective collaboration in defining the characteristics of the project and his development of the R functions that allow better integration between the two programs (Tinn-R and R), Philippe was invited to become a co-author of the project. Mark de Groot, an excellent Object Pascal programmer but with no affinity for statistics, began to move away from the project, becoming a sporadic contributor, and since 2006 he has no longer contributed to the project. Registered under the General Public License (GPL), the project gained many supporters, and countless suggestions began to be sent by new users. The project's success is attributed to Philippe's experience in GUI development for R and to his suggestions (always requesting more resources than the CPC was willing to implement), as well as to the users' suggestions, which effectively determined the direction of its development. The project began to be used as a simple yet efficient editor/GUI in educational and research institutions related to statistics and R. Over the years we have sought, within the time available for this activity, to answer the demand and feedback from users in the best possible way, which may be its great advantage: a program designed by users for users. In late 2006, Enio G. Jelihovschi joined the project, becoming responsible for its documentation.
In 2008, the CPC's post-doctoral project (at ESALQ/USP, under the supervision of Prof. Clarice G. B. Demétrio, with a scholarship from CNPq), titled TINN-R - GUI/EDITOR FOR R OPEN SOURCE ENVIRONMENT FOR STATISTICAL COMPUTING, had two main objectives:

- Improvement and consolidation of the program under the Windows operating system
- Use independent of the operating system (multiplatform)

The first objective was met in full. As for the second, studies of the main alternatives (using the multiplatform Lazarus, and migration to the .Net platform under MONO) were carried out. After contacting the teams developing those tools and environments, and after preliminary testing, we reached the conclusion that, in both cases, it would be an overwhelming task with unreliable final results. Embarcadero, after acquiring the compilers from Borland, has made serious efforts to enable compilation of Object Pascal/Delphi code on platforms beyond Windows. Thus, the possibility of porting the Tinn-R project to Linux and Mac is envisaged in the medium to long term.

#### 6. Authors

- José Cláudio Faria - Brazil/UESC/DCET (Coordinator, development and programming in Object Pascal)
- Philippe Grosjean - Belgium/UMH/EcoNum (R programming, experience in developing GUIs for R, excellent ideas and suggestions, guidance, and project documentation in the English language). In fact, much of the project's success is due to good suggestions - over the years - from Phil: Thank you! (Some were very difficult to implement.)
- Enio Galinkin Jelihovschi - Brazil/UESC/DCET (Documentation in the English language)

#### 8. User list (discussion group)

- You can report bugs, ask questions, make suggestions and discuss ideas about the Tinn-R editor, such as how to accomplish a specific task, how to change a behavior, or why a specific feature is missing.
- SourceForge (old)

#### 9. What is new?

##### 9.1. 3.0.3.6 (Feb/28/2014)

- The conversion options using Deplate or Txt2tags depend on the file extension. For Deplate, the recognized extensions are .dp, .dpt, .dplt, .deplate and .txt. For Txt2tags, they are .t2, .t2t, .txt2tags and .txt.

##### 9.2. 3.0.3.5 (Feb/10/2014)

- The highlighter settings window was deeply reworked.
- The print preview window was reworked.

##### 9.3. 3.0.3.4 (Feb/08/2014)

- Bug(s) fixed:
  - R highlighter: expressions like the one below. Thanks to Arnold for pointing it out. gsub("/", "\\\\", text)
  - Comment/Uncomment: the automatic detection of the language and chunks (regions) for multiple (or complex) languages (such as R noweb, R doc, complex HTML, complex PHP, etc.).
- The Comment, Uncomment first and Uncomment all procedures were improved.

##### 9.4. 3.0.3.3 (Feb/05/2014)

- Parts of the source code were enhanced.
- The control over the previous focus related to various options (Options, Help, View, etc.) has been improved.
- The IPC (Inter Process Communication) between Rterm and Tinn-R was re-optimised. It is now approximately 4x faster than in the prior version. \o/\o/\o/
- The User guide has been revised and improved.

##### 9.5. 3.0.3.2 (Jan/30/2014)

- Bug(s) fixed:
  - Advanced options of the editor: Options/Application/Editor/Advanced/Want tabs. Now, when tabbing with an active selection, <TAB> and <SHIFT><TAB> really act as block indent/unindent. This works only inside the most important instances of the SynEdit class: Editor and Rterm/Log. Within Rterm/IO, <TAB> has another function: completion.
- Some options of the interface Options/Application/Editor/Advanced are now more understandable. Thanks to Berry Boessenkool for the suggestions!
- The menu Help has a new option: What is new?

##### 9.6. 3.0.3.1 (Jan/29/2014)

- Bug(s) fixed:
  - The installer of version 3.0.3.0: the file data.zip was corrupted. The installer of version 3.0.3.0 has been deleted from all servers and we advise users not to redistribute this version.
Thanks to Mark A. for pointing it out!

- The STOP button is working again for Rgui.exe (but is not yet active for Rterm.exe).

##### 9.7. 3.0.3.0 (Jan/28/2014)

- Bug(s) fixed:
  - Pop-up menus of Tools/Database/Completion, Tools/R/Explorer and Tools/R/Card.
- Parts of the source code were enhanced.
- The IPC (Inter Process Communication) between Rterm and Tinn-R was optimised. It is now approximately 8x faster and also more accurate. This really was an old dream!
- The User guide has been revised.
- The menu Tools/Processing/Compilation (LaTeX) has a new option: Make index (makeindex). The default shortcut is CTRL + ALT + I.
- Rterm support for the debug function and the debug package was slightly enhanced. The necessary instruction (below) is automatically sent to the R interpreter as of this version: options(debug.catfile = 'stdout')

#### 11. eBook

- Updated version (eISBN: 978-85-7455-342-9):
- Outdated version:

#### 13. Feedback from users

I am a very happy and satisfied user of Tinn-R. (Raphael Seitz - Technical University Berlin - Germany - Author of the nice picture above)

I work with R since years and tried many editors. Many of them has good features as well, even ones that are not in TinnR up til now (code folding) but this one is the best of all, very handy, easy to use even for beginners, usable from USB just with 2 corrections in preferences. Excelent work. Many thanks to José Cláudio Faria and Philippe Grosjean. (Udo Junghans)

Tinn-R has greatly simplified and accelerated my development of R script since I began using it about one year ago. Tinn-R is an impressive open source tool. Calling it a GUI code editor is a bit of an understatement. In several ways, when used in conjunction with R, you have a highly capable environment that begins to approach the features and functionality of an Integrated Development Environment (IDE). Also impressive is the level of commitment and support this tool receives.
(Dan Hunt) An exceptionally powerful tool for leveraging R's strength's. It is clear from use over the last year that the development team is serious and capable - that makes this a robust addition to one's toolkit. (Boramark) Excellent useful program. Works excellently with R - much better for me that using the native windows Rgui. Allow you have multiple script files open simultaneously. Code is nicely formatted. (Tom) A brilliant way of making analysis using R pleasurable. Fast response and excellent integration. (Brian K. Boonstra) For me this is essential if you're going to use the free program R software. (JJ) Very good project, thanks a ton for giving out. (Elijah Snider) Very very good. Using for a long time. (Mervyn Sousa) AWESOME software and free... EXCELLENT. (Jerald Petersen) Thanks a lot everywhere! (Roderick Crockett) Works great. Thanks to the developers of this app. (Anna) This works great. Thanks guys! (Lydia Harpe) Fast and simple. (Max Shawn) I like this editor so much :) (Clay Greenham) Great tool. Like it. (Derek Finn) The best program that I've ever used. (Adolphus Keefe)
http://rosa.unipr.it/FSDA/FSRB.html
# FSRB

FSRB gives an automatic outlier detection procedure in Bayesian linear regression.

## Syntax

- out=FSRB(y,X)
- out=FSRB(y,X,Name,Value)

## Description

out = FSRB(y, X) runs FSRB with all default options.

out = FSRB(y, X, Name, Value) runs FSRB with optional arguments.

## Examples

### FSRB with all default options.

Common to the first examples: load the House Price dataset.

```matlab
load hprice.txt;
n=size(hprice,1);
y=hprice(:,1);
X=hprice(:,2:5);
out=FSRB(y,X);
```

### FSRB with optional arguments.

```matlab
load hprice.txt;
n=size(hprice,1);
y=hprice(:,1);
X=hprice(:,2:5);
% set \beta components
beta0=0*ones(5,1);
beta0(2,1)=10;
beta0(3,1)=5000;
beta0(4,1)=10000;
beta0(5,1)=10000;
% \tau
s02=1/4.0e-8;
tau0=1/s02;
% R prior settings
R=2.4*eye(5);
R(2,2)=6e-7;
R(3,3)=.15;
R(4,4)=.6;
R(5,5)=.6;
R=inv(R);
% define a Bayes structure with previous data
n0=5;
bayes=struct;
bayes.R=R;
bayes.n0=n0;
bayes.beta0=beta0;
bayes.tau0=tau0;
intercept=1;
% function call
out=FSRB(y,X,'bayes',bayes,'msg',0,'plots',1,'init',round(n/2),'intercept',intercept)
```

Output:

```
out = struct with fields:
    ListOut: [1×18 double]
   outliers: [1×18 double]
        mdr: [273×2 double]
         Un: [273×11 double]
       nout: [2×5 double]
       beta: [5×1 double]
      scale: 1.4951e+04
      class: 'FSRB'
```

### Example on Fishery dataset (analysis with intercept).

nsamp is the number of subsamples to use in the frequentist analysis of the first year, in order to find the initial subset using LMS. close all nsamp=3000; % threshold to be used to increase subset of good units threshold=300; bonflev=0.99; % Bonferroni confidence level to be used for first year bonflevB=0.99; % Bonferroni confidence level to be used for subsequent years y02=Fishery2002(:,3); X02=Fishery2002(:,2); n02=length(y02); seq02=1:n02; % frequentist Forward Search, 1st year [out02]=FSR(y02,X02,'nsamp',nsamp,'plots',1,'msg',0,'init',round(n02*3/4),'bonflev',bonflev); % In what follows % g stands for good units % i stands for intermediate units (i.e.
units whose raw residual is smaller % than threshold) % o stands for outliers % gi stands for good +intermediate units % u02g = good units % n02g = number of good units u02g=setdiff(seq02,out02.ListOut); n02g=length(u02g); X02g=[ones(length(u02g),1) X02(u02g,:)]; y02g=y02(u02g); % b02g = regression coefficients just using g units b02g=X02g\y02g; % res02 = squared raw residuals for all units using b02g res02=(y02-[ones(length(X02),1) X02]*b02g).^2; res02o=res02(out02.ListOut); % sel= boolean vector which is true for the intermediate units % (units whose squared residual is below the threshold) sel=res02o<threshold^2; % u02i = vector containing intermediate units (that is outliers whose % residual is smaller than threshold) u02i=out02.ListOut(sel); % u02o = vector containing outliers whose residual is out of the threshold u02o=out02.ListOut(~sel); % u02gi = g + i units if ~isempty(u02i) u02gi=[u02g u02i]; else u02gi=u02g; end % n02gi = number of good + intermediate units n02gi=length(u02gi); % plotting section hold('off') % good units, plotted as (+) plot(X02(u02g)',y02(u02g)','Marker','+','LineStyle','none','Color','b') hold('on') % intermediate units plotted as (X) plot(X02(u02i)',y02(u02i)','Marker','X','MarkerSize',9,'LineWidth',2,'LineStyle','none','Color','m') % outliers, plotted as (O) plot(X02(u02o)',y02(u02o)','Marker','o','LineStyle','none','Color','r') xlabel('Quantity'); ylabel('Price'); title('Frequentist - 2002'); % S202gi = estimated of sigma^2 using g+i units S202gi=sum(res02(u02gi))/(n02gi-2); % X02gi = X matrix referred to good + intermediate units X02gi=[ones(n02gi,1) X02(u02gi,:)]; % y02gi = y vector referred to good + intermediate units y02gi=y02(u02gi); % bayes = structure which contains prior information to be used in year % 2003 bayes=struct; bayes.beta0=b02g; % beta prior is beta based on g units tau0=1/S202gi; % tau0 is based on g + i units bayes.tau0=tau0; R=X02g'*X02g; % R is based on g units bayes.n0=n02gi; % n0 is based on g + i units 
bayes.R=R; % 2003 y03=Fishery2003(:,3); X03=Fishery2003(:,2); n03=length(y03); seq03=1:n03; % Bayesian Forward Search, 2nd year out03=FSRB(y03,X03,'bayes',bayes,'msg',0,'plots',1,'init',round(n03/2),'bonflev',bonflevB); u03g=setdiff(seq03,out03.ListOut); n03g=length(u03g); % compute beta coefficient for year 2003 just using good units X03g=[ones(n03g,1) X03(u03g,:)]; y03g=y03(u03g); b03g=X03g\y03g; res03=(y03-[ones(length(X03),1) X03]*b03g).^2; res03o=res03(out03.ListOut); sel=res03o<threshold^2; % u03i = units to add to the good units subset (intermediate units) u03i=out03.ListOut(sel); % u03o = outliers out of the threshold u03o=out03.ListOut(~sel); if ~isempty(u03i) u03gi=[u03g u03i]; else u03gi=u03g; end n03gi=length(u03gi); X03gi=[ones(n03gi,1) X03(u03gi,:)]; y03gi=y03(u03gi,:); % plotting section hold('off') % good units, plotted as (+) plot(X03(u03g)',y03(u03g)','Marker','+','LineStyle','none','Color','b') hold('on') % units below the threshold, plotted as (X) plot(X03(u03i)',y03(u03i)','Marker','X','MarkerSize',9,'LineWidth',2,'LineStyle','none','Color','m') % outliers, plotted as (O) plot(X03(u03o)',y03(u03o)','Marker','o','LineStyle','none','Color','r') set(gca,'FontSize',14) xlabel('QUANTITY in tons') ylabel('VALUE in 1000 euro') title('2003'); % Definition of bayes structure (based on 2002 and 2003) bayes=struct; X02gX03g=[X02g; X03g]; y02gy03g=[y02g; y03g]; n02gn03g=n02g+n03g; % b0203g prior estimate of beta for year 2004 is computed using good units % for years 2002 and 2003 b0203g=X02gX03g\y02gy03g; bayes.beta0=b0203g; % R is just referred to good units for years 2002 and 2003 R=X02gX03g'*X02gX03g; bayes.R=R; % n0 is referred to g + i units in 2002 and 2003 bayes.n0=n02gi+n03gi; X02giX03gi=[X02gi; X03gi]; y02giy03gi=[y02gi; y03gi]; % n02gin03gi = number of g+i units in 2002 and 2003 n02gin03gi=n02gi+n03gi; % res = residuals for g+i units using b0203g res=y02giy03gi-X02giX03gi*b0203g; S203gi=sum(res.^2)/(n02gin03gi-2); % estimate of tau is based on g 
+ i units tau0=1/S203gi; bayes.tau0=tau0; y04=Fishery2004(:,3); X04=Fishery2004(:,2); n04=length(y04); seq04=1:n04; % Bayesian Forward Search, 3rd year out04=FSRB(y04,X04,'bayes',bayes,'msg',0,'plots',1,'init',round(n04/2),'bonflev',bonflevB); u04g=setdiff(seq04,out04.ListOut); n04g=length(u04g); X04g=[ones(n04g,1) X04(u04g,:)]; y04g=y04(u04g); % b04g = beta based on good units for year 2004 b04g=X04g\y04g; res04=(y04-[ones(length(X04),1) X04]*b04g).^2; % res04o squared residuals for the tentative outliers res04o=res04(out04.ListOut); % we keep statistical units below the threshold sel=res04o<threshold^2; % u04i = units to add to the good units subset (intermediate units) u04i=out04.ListOut(sel); % u04o = units outliers out of the threshold u04o=out04.ListOut(~sel); if ~isempty(u04i) u04gi=[u04g u04i]; else u04gi=u04g; end n04gi=length(u04gi); % plotting section hold('off') % good units, plotted as (+) plot(X04(u04g)',y04(u04g)','Marker','+','LineStyle','none','Color','b') hold('on') % units below the treshold, plotted as (X) plot(X04(u04i)',y04(u04i)','Marker','X','MarkerSize',9,'LineWidth',2,'LineStyle','none','Color','m') % outliers, plotted as (O) plot(X04(u04o)',y04(u04o)','Marker','o','LineStyle','none','Color','r') set(gca,'FontSize',14) xlabel('QUANTITY in tons') ylabel('VALUE in 1000 euro') % frequentist Forward Search, 3rd year out04=FSR(y04,X04,'nsamp',nsamp,'plots',1,'msg',0,'init',round(n04/2),'bonflev',bonflev); xlabel('QUANTITY in tons') ylabel('VALUE in 1000 euro') title('Frequentist - 2004'); ### Example on Fishery dataset (analysis without the intercept). close all % nsamp is the number of subsamples to use in the frequentist analysis of first % year, in order to find initial subset using LMS. 
nsamp=3000; % threshold to be used to increase susbet of good units threshold=300; bonflev=0.99; % Bonferroni confidence level y02=Fishery2002(:,3); X02=Fishery2002(:,2); n02=length(y02); seq02=1:n02; % frequentist Forward Search, 1st year (regression without intercept) [out02]=FSR(y02,X02,'intercept',0,'nsamp',nsamp,'plots',1,'msg',0,'init',round(n02*3/4),'bonflev',bonflev); % In what follows % g stands for good units % i stand for intermediate units (i.e. units whose raw residual is smaller % than threshold) % o stands for outliers % gi stands for good +intermediate units % u02g = good units % n02g = number of good units u02g=setdiff(seq02,out02.ListOut); X02g=X02(u02g,:); y02g=y02(u02g); % b02g = regression coefficients just using g units % Note that b02g is a scalar because the intercept has not been added b02g=X02g\y02g; % res02 = squared raw residuals for all units using b02g res02=(y02-X02*b02g).^2; res02o=res02(out02.ListOut); % sel= boolean vector which is true for the intermediate units % (units whose squared residual is below the threshold) sel=res02o<threshold^2; % u02i = vector containing intermediate units (that is outliers whose % residual is smaller than threshold) u02i=out02.ListOut(sel); % u02gi = g + i units if ~isempty(u02i) u02gi=[u02g u02i]; else u02gi=u02g; end % n02gi = number of good + intermediate units n02gi=length(u02gi); % S202gi = estimated of sigma^2 using g+i units S202gi=sum(res02(u02gi))/(n02gi-1); % bayes = structure which contains prior information to be used in year % 2003 bayes=struct; bayes.beta0=b02g; % beta prior is beta based on g units tau0=1/S202gi; % tau0 is based on g + i units bayes.tau0=tau0; R=X02g'*X02g; % R is based on g units bayes.n0=n02gi; % n0 is based on g + i units bayes.R=R; % 2003 y03=Fishery2003(:,3); X03=Fishery2003(:,2); n03=length(y03); % Run Bayesian Forward Search for the 2nd year using the prior based on % the first year. 
out03=FSRB(y03,X03,'bayes',bayes,'msg',0,'plots',1,'init',round(n03/2),'bonflev',bonflev,'intercept',0); ### Outlier detection for Bank-Profit data. XX=load('BankProfit.txt'); X=XX(:,1:end-1); y=XX(:,end); beta0=zeros(10,1); beta0(1,1)=-0.5; beta0(2,1)=9.1; % Number of products (NUMPRO) beta0(3,1)=0.001; % direct revenues (DIRREV) beta0(4,1)=0.0002; % indirect revenues (INDREV) beta0(5,1)=0.002; % savings accounts SAVACC beta0(6,1)=0.12; % number of operations NUMOPE beta0(7,1)=0.0004; % total amount of operations TOTOPE beta0(8,1)=-0.0004; % Bancomat POS beta0(9,1)=1.3; % Number of cards NUMCAR beta0(10,1)=0.00004; % Amount in cards TOTCAR % \tau s02=10000; tau0=1/s02; % number of obs in which prior was based n0=1500; bayes=struct; bayes.R=R; bayes.n0=n0; bayes.beta0=beta0; bayes.tau0=tau0; intercept=1; n=length(y); out=FSRB(y,X,'bayes',bayes,'msg',1,'plots',1,... 'init',round(n/2),'xlim',[1700 1905],'ylim',[2 4]); %% Plot the outliers with a different symbol using a 3x3 layout selout=out.ListOut; selin=setdiff(1:n,selout); close all % just in case user has additional function subtightplot % http://www.mathworks.com/matlabcentral/fileexchange/39664-subtightplot make_it_tight = false; if make_it_tight == true && exist('subtightplot','file') ==2 subplot = @(m,n,p) subtightplot (m, n, p, [0.05 0.025], [0.1 0.01], [0.1 0.01]); else clear subplot; end % sel = panels in which yticks do not have to be removed sel=[1 4 7]; miny=min(y); maxy=max(y); for j=1:9 subplot(3,3,j) hold('on') plot(X(selin,j),y(selin),'+') plot(X(selout,j),y(selout),'ro') ylim([miny maxy]) xlim([min(X(:,j)) max(X(:,j))]) if isempty(intersect(j,sel)) set(gca,'YTickLabel','') end % Add on the plot the variable name text(0.75,0.1,['x' num2str(j)],'Units','normalized','FontSize',16) end Observed curve of r_min is at least 10 times greater than 99.99% envelope -------------------------------------------------- ------------------------- Signal detection loop Tentative signal in central part of the 
search: step m=1830 because rmin(1830,1903)>99.999% ------------------- Signal validation exceedance of upper envelopes Validated signal ------------------------------- Start resuperimposing envelopes from step m=1829 Superimposition stopped because r_{min}(1851,1859)>99.9% envelope Subsample of 1858 units is homogeneous ---------------------------- Final output Number of units declared as outliers=45 Summary of the exceedances 1 99 999 9999 99999 838 77 76 73 73

## Input Arguments

### y — Response variable. Vector.

Response variable, specified as a vector of length n1, where n1 is the number of observations. Each entry in y is the response for the corresponding row of X. Missing values (NaN's) and infinite values (Inf's) are allowed, since observations (rows) with missing or infinite values will automatically be excluded from the computations.

Data Types: single | double

### X — Predictor variables. Matrix.

Matrix of explanatory variables (also called 'regressors') of dimension n1 x (p-1), where p denotes the number of explanatory variables including the intercept. Rows of X represent observations, and columns represent variables.
By default, there is a constant term in the model, unless you explicitly remove it using the input option intercept, so do not include a column of 1s in X. Missing values (NaN's) and infinite values (Inf's) are allowed, since observations (rows) with missing or infinite values will automatically be excluded from the computations.

Remark: note that here we use the symbol n1 instead of the traditional symbol n because we want to better separate the sample information coming from the n1 values from the prior information coming from n0 previous experiments.

Data Types: single | double

### Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'intercept',1

### intercept — Indicator for constant term. Scalar.

If 1, a model with a constant term will be fitted (default); otherwise no constant term will be included.

Example: 'intercept',1

Data Types: double

### bayes — Prior information. Structure.

It contains the following fields:

- beta0: p x 1 vector containing the prior mean of \beta.
- R: p x p positive definite matrix which can be interpreted as X0'X0, where X0 is an n0 x p matrix coming from previous experiments (assuming that the intercept is included in the model). The prior distribution of $\tau_0$ is a gamma distribution with parameters $a_0$ and $b_0$, that is $p(\tau_0) \propto \tau^{a_0-1} \exp (-b_0 \tau)$, $E(\tau_0) = a_0/b_0$.
- tau0: scalar. Prior estimate of $\tau=1/\sigma^2 = a_0/b_0$.
- n0: scalar.
Sometimes it helps to think of the prior information as coming from n0 previous experiments. Therefore we assume that matrix X0 (which defines R) was made up of n0 observations.

Remark: if the structure bayes is not supplied, the following default values are used:

- beta0 = zeros(p,1): vector of zeros.
- R = eye(p): identity matrix.
- tau0 = 1/1e+6: a very large value for the prior variance, that is, a very small value for tau0.
- n0 = 1: just one prior observation.

$\beta$ is assumed to have a normal distribution with mean $\beta_0$ and (conditional on $\tau_0$) covariance $(1/\tau_0) (X_0'X_0)^{-1}$:

$\beta \sim N( \beta_0, (1/\tau_0) (X_0'X_0)^{-1} )$

Example: bayes=struct; bayes.R=R; bayes.n0=n0; bayes.beta0=beta0; bayes.tau0=tau0;

Data Types: double

### plots — Plot on the screen. Scalar.

If plots=1 (default), the plot of minimum deletion residual with envelopes based on n observations and the scatterplot matrix with the outliers highlighted are produced. If plots=2, the user can also monitor the intermediate plots based on envelope superimposition. Otherwise no plot is produced.

Example: 'plots',1

Data Types: double

### init — Search initialization. Scalar.

Scalar which specifies the initial subset size to start monitoring exceedances of the minimum deletion residual. If init is not specified, it is set equal to p+1 if the sample size is smaller than 40, and to min(3*p+1, floor(0.5*(n+p+1))) otherwise.

Example: 'init',100 starts monitoring from step m=100

Data Types: double

### nocheck — Check input arguments. Scalar.

If nocheck is equal to 1, no check is performed on matrix y and matrix X. Note that y and X are left unchanged; in other words, the additional column of ones for the intercept is not added. The default is nocheck=0.

Example: 'nocheck',1

Data Types: double

### bivarfit — Superimpose bivariate least squares lines. Character.

This option adds one or more least squares lines, based on SIMPLE REGRESSION of y on Xi, to the plots of y|Xi. bivarfit = '' is the default: no line is fitted.
bivarfit = '1' fits a single ols line to all points of each bivariate plot in the scatter matrix y|X.

bivarfit = '2' fits two ols lines: one to all points and another to the group of genuine observations. The group of potential outliers is not fitted.

bivarfit = '0' fits one ols line to each group. This is useful for the purpose of fitting mixtures of regression lines.

bivarfit = 'i1' or 'i2' or 'i3' etc. fits an ols line to a specific group, the one with index 'i' equal to 1, 2, 3 etc. Again, useful in the case of mixtures.

Example: 'bivarfit',2

Data Types: char

### multivarfit — Superimpose multivariate least squares lines. character.

This option adds one or more least squares lines, based on MULTIVARIATE REGRESSION of y on X, to the plots of y|Xi.

multivarfit = '' is the default: no line is fitted.

multivarfit = '1' fits a single ols line to all points of each bivariate plot in the scatter matrix y|X. The line added to the scatter plot y|Xi is avconst + Ci*Xi, where Ci is the coefficient of Xi in the multivariate regression and avconst is the effect of all the other explanatory variables different from Xi evaluated at their centroid (that is, $\overline{y}'C$).

multivarfit = '2' is equal to multivarfit = '1', but this time we also add the line based on the group of unselected observations (i.e. the normal units).

Example: 'multivarfit','1'

Data Types: char

### labeladd — Add outlier labels in plot. character.

If this option is '1', we label the outliers with the unit row index in matrices X and y. The default value is labeladd='', i.e. no label is added.

Example: 'labeladd','1'

Data Types: char

### nameX — Add variable labels in plot. cell array of strings.

Cell array of strings of length p containing the labels of the variables of the regression dataset. If it is empty (default), the sequence X1, ..., Xp is created automatically.

Example: 'nameX',{'NameVar1','NameVar2'}

Data Types: cell

### namey — Add response label. character.
Character containing the label of the response.

Example: 'namey','NameOfResponse'

Data Types: char

### ylim — Control y scale in plot. vector.

Vector with two elements controlling the minimum and maximum on the y axis. Default value is '' (automatic scale).

Example: 'ylim',[0,10] sets the minimum value to 0 and the maximum to 10 on the y axis

Data Types: double

### xlim — Control x scale in plot. vector.

Vector with two elements controlling the minimum and maximum on the x axis. Default value is '' (automatic scale).

Example: 'xlim',[0,10] sets the minimum value to 0 and the maximum to 10 on the x axis

Data Types: double

### bonflev — Signal to use to identify outliers. scalar.

Option to be used if the distribution of the data is strongly non-normal and, thus, the general signal detection rule based on consecutive exceedances cannot be used. In this case bonflev can be:

- a scalar smaller than 1, which specifies the confidence level for a signal and a stopping rule based on the comparison of the minimum MD with a Bonferroni bound. For example, if bonflev=0.99 the procedure stops when the trajectory exceeds for the first time the 99% Bonferroni bound;
- a scalar greater than 1. In this case the procedure stops when the residual trajectory exceeds for the first time this value.

Default value is '', which means relying on the general rules based on consecutive exceedances.

Example: 'bonflev',0.99

Data Types: double

### msg — Level of output to display. scalar.

Scalar which controls whether to display messages on the screen. If msg==1 (default), messages about the step in which the signal took place are displayed on the screen; otherwise no message is displayed.

Example: 'msg',1

Data Types: double

## Output Arguments

### out — description. Structure.

Structure which contains the following fields:

- ListOut: k x 1 vector containing the list of the units declared as outliers, or NaN if the sample is homogeneous.
This field will be deleted in future releases because it will be replaced by out.outliers.

- outliers: k x 1 vector containing the list of the units declared as outliers, or NaN if the sample is homogeneous.
- beta: p-by-1 vector containing the posterior mean of $\beta$ (regression coefficients), $\beta = (c*R + X'X)^{-1} (c*R*\beta_0 + X'y)$ in step $n-k$.
- scale: scalar. This is the reciprocal of the square root of the posterior estimate of $\tau$ in step $n-k$.
- mdr: (n-init) x 2 matrix: 1st col = fwd search index; 2nd col = value of the Bayesian minimum deletion residual in each step of the fwd search.
- Un: (n-init) x 11 matrix which contains the unit(s) included in the subset at each step of the fwd search. REMARK: in every step the new subset is compared with the old subset. Un contains the unit(s) present in the new subset but not in the old one. Un(1,2), for example, contains the unit included in step init+1. Un(end,2) contains the units included in the final step of the search.
- nout: 2 x 5 matrix containing the number of times mdr went out of particular quantiles. The first row contains the quantiles 1 99 99.9 99.99 99.999; the second row contains the frequency distribution.
- constr: this output is produced only if the search found at a certain step a singular matrix X. In this case the search runs in a constrained mode, that is, including the units which produced a singular matrix in the last n-constr steps. out.constr is a vector which contains the list of units which produced a singular X matrix.
- class: 'FSRB'.

## References

Chaloner K. and Brant R. (1988), A Bayesian Approach to Outlier Detection and Residual Analysis, Biometrika, Vol. 75, pp. 651-659.

Riani M., Corbellini A. and Atkinson A.C. (2017), Very Robust Bayesian Regression for Fraud Detection, submitted.

Atkinson A.C., Corbellini A. and Riani M. (2017), Robust Bayesian Regression with the Forward Search: Theory and Data Analysis, Test, DOI 10.1007/s11749-017-0542-6.
https://www.freemathhelp.com/forum/threads/latex-delimiters-i-cannot-get-the-old-tex-and-tex-pair-to-work.114605/
# Latex Delimiters: I cannot get the old tex and /tex pair to work.

#### JeffM
##### Elite Member

With the new software, what are the delimiters for LaTeX? I cannot get the old tex and /tex pair to work. Also, where can you change font size?

#### MarkFL
##### Super Moderator
Staff member

For inline math you can use \ ( \ ) (without the spaces), and for displaystyle math (indented and on its own line) you can use \ [ \ ]. If you click the calculator icon in the toolbar, the [MATH][/MATH] tags will be generated for you, which is inline but gives you the larger displaystyle math. To change your font size use the "T" button to the left of the flag (spoiler) icon.

Thank you

#### mmm4444bot
##### Super Moderator
Staff member

… I cannot get the old tex and /tex pair to work. Also, where can you change font size?

The tags do work. Can you send me the problematic text, by private conversation? Font size may be changed by using the toolbar icon (eighth icon from the left). $$\displaystyle \;$$

#### JeffM
##### Elite Member

mmm, I was trying to answer a question. When it did not work, I abandoned my answer, so I cannot send you what was creating a problem. What I can tell you is that the answer used displaystyle.

#### pka
##### Elite Member

[ tex]\left( {r\cos \left( {\theta + \frac{{2k\pi }}{4}} \right),r\sin \left( {\theta + \frac{{2k\pi }}{4}} \right)} \right),~k=1,~2,~3[/tex]

$$\displaystyle \left( {r\cos \left( {\theta + \frac{{2k\pi }}{4}} \right),r\sin \left( {\theta + \frac{{2k\pi }}{4}} \right)} \right),~k=1,~2,~3$$

Without the space in [ tex] the code works for me.

#### JeffM
##### Elite Member

$$\displaystyle \sum_{k=m}^n f(k) \equiv f(m) + f(m+1) + \ ... \ + f(n-1) + f(n).$$

This is what I am getting from \sum_{k=m}^n f(k) \equiv f(m) + f(m+1) + \ ... \ + f(n-1) + f(n).

#### MarkFL
##### Super Moderator
Staff member

Yes, the :(n): text to replace for the (n) smiley needs to be changed. Only an admin can do that.
#### MarkFL
##### Super Moderator
Staff member

$$\displaystyle \sum_{k=m}^n f(k) \equiv f(m) + f(m+1) + \ ... \ + f(n-1) + f( n).$$

For now, use ( n) instead (note the space in front of the n).

#### mmm4444bot
##### Super Moderator
Staff member

[ tex]\left( {r\cos \left( {\theta + \frac{{2k\pi }}{4}} \right),r\sin \left( {\theta + \frac{{2k\pi }}{4}} \right)} \right),~k=1,~2,~3[/tex]

Without the space in [ tex] the code works for me.

Here's a tip, pka. We don't need to play tricks on the system anymore (like inserting spaces) in order to display tags in a post. Enclosing tags within [plain] and [/plain] tags will prevent the system from processing the enclosed code. See the bottom of this post for more info. Cheers $$\displaystyle \;$$

#### JeffM
##### Elite Member

So far I like the new software. I never expected it to be bug free, but there really have been few bugs so far. Good work all.

#### mmm4444bot
##### Super Moderator
Staff member

… For now, use ( n) instead (note the space in front of the n).

Thanks, Mark! That's way easier than my ol' vBulletin LaTeX fix (to suppress keyword autolinking, for instance). I would have coded something like $$f\text{(}n)$$ to force $$\displaystyle f\text{(}n)$$.

#### mmm4444bot
##### Super Moderator
Staff member

… there really have been few bugs so far …

True, although there are a number of issues being discussed on the (private) staff board, heh … $$\displaystyle \;$$

#### MarkFL
##### Super Moderator
Staff member

So far I like the new software. I never expected it to be bug free, but there really have been few bugs so far. Good work all.

Yes, XF 2.1 is a robust forum software, coded by some of the folks who brought us vB 3.x way back when. I think its major strengths are responsive design and a lot more AJAX being used to make reloading pages less necessary. Over time we'll get all the reported issues sorted out.
https://math.stackexchange.com/questions/1084873/job-scheduling-to-minimise-squared-completion-times-using-mixed-0-1-quadratic-pr
# Job scheduling to minimise squared completion times using mixed 0-1 quadratic program

I have come across an Optimization question as follows:

There are $n$ jobs that have to be processed on a machine. The machine can process only one job at a time. The time taken to process job $i$ on the machine is $t_i$. For a given sequence of jobs on the machine, the completion time of job $i$ is the time at which the machine finishes processing job $i$. The goal is to find the sequence of jobs which minimises the sum of the squared completion times.

I have attempted to solve it as follows but am not sure if it's correct. In particular I'm unsure about the constraints for the binary variables.

Variables: Let $x_{ij}=1$, for $i \in \{1,\dots,n\}$, $j \in \{1,\dots,n\}$, $j \neq i$, if job $i$ is completed after job $j$, and 0 otherwise. Let $y_i$, $i \in \{1,\dots,n\}$, be the completion time for job $i$.

Min. $\sum_{i=1}^n y_i^2$

s.t. $y_i - \sum_{j \neq i} t_j x_{ij} = t_i \quad \forall i \in \{1,\dots,n\}$

$\quad\; x_{ij} + x_{jk} - 2x_{ik} \leq 1 \quad \forall i \neq j \neq k$

$\quad\; \sum_{i \neq j} x_{ij} = \sum_{m=1}^{n-1} m$

$\quad\; x_{ij} \in \{0,1\} \quad \forall i \neq j$

$\quad\; y_i \in \mathbb{R}^+$
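Independently of the 0-1 formulation, the objective itself can be checked by brute force for small instances; the sketch below (my own illustration, with hypothetical helper names) enumerates every sequence and returns the one minimising the sum of squared completion times.

```python
import itertools


def total_sq_completion(times, order):
    """Sum of squared completion times when jobs run in the given order."""
    elapsed, total = 0, 0
    for i in order:
        elapsed += times[i]       # completion time of job i
        total += elapsed * elapsed
    return total


def best_sequence(times):
    """Brute-force the optimal sequence (only viable for small n)."""
    return min(itertools.permutations(range(len(times))),
               key=lambda order: total_sq_completion(times, order))


times = [3, 1, 2]
order = best_sequence(times)
print(order, total_sq_completion(times, order))
```

For `times = [3, 1, 2]` the optimum processes the jobs in increasing processing time (completion times 1, 3, 6, objective 1 + 9 + 36 = 46), which is a useful sanity check for any exact formulation.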
https://minireference.com/blog/2014/08/
### Math makes you cry? Try SymPy!

This summer I wrote a short SymPy tutorial that illustrates how a computer algebra system can help you understand math and physics. Using SymPy you can solve all kinds of math problems, painlessly. Check it: Sympy tutorial (PDF, 12 pages). Print this out, and try the examples using live.sympy.org. The topics covered are: high school math, calculus, mechanics, and linear algebra. SymPy makes all math and physics calculations easy to handle, and it can even make them fun! Learn the commands and you'll do well on all your homework problems. Best of all, SymPy is free and open source software, so your learning and your calculations won't cost you a dime!
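To give a flavour of the commands the tutorial covers, here is a minimal session (runs as a script or pasted into live.sympy.org):

```python
from sympy import symbols, solve, diff, integrate, sin

x = symbols('x')

# Solve a quadratic equation symbolically
print(solve(x**2 - 5*x + 6, x))   # [2, 3]

# Differentiate a product and integrate a polynomial
print(diff(x * sin(x), x))
print(integrate(2*x, x))          # x**2
```

The same pattern (declare symbols, then call `solve`, `diff`, `integrate`) carries over to the calculus and mechanics examples in the tutorial.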
https://www.aimsciences.org/article/doi/10.3934/cpaa.2004.3.319
# American Institute of Mathematical Sciences

June 2004, 3(2): 319-328. doi: 10.3934/cpaa.2004.3.319

## The global solution of an initial boundary value problem for the damped Boussinesq equation

1 Department of Applied Mathematics, Southwest Jiaotong University, 610066, Chengdu
2 Department of Mathematics and Statistics, Curtin University of Technology, Perth, WA6845, Australia
3 Department of Applied Mathematics, Southwest Jiaotong University, Chengdu, China

Received January 2003; Revised December 2003; Published March 2004

This paper deals with an initial-boundary value problem for the damped Boussinesq equation

$u_{t t} - a u_{t t x x} - 2 b u_{t x x} = - c u_{x x x x} + u_{x x} + \beta(u^2)_{x x},$

where $t > 0$ and $a$, $b$, $c$ and $\beta$ are constants. For the case $a \geq 1$ and $a + c > b^2$, corresponding to an infinite number of damped oscillations, we derive the global solution of the equation in the form of a Fourier series. The coefficients of the series are related to a small parameter present in the initial conditions and are expressed as uniformly convergent series of the parameter. We also prove that the long-time asymptotics of the solution in question decays exponentially in time.

Citation: Shaoyong Lai, Yong Hong Wu, Xu Yang. The global solution of an initial boundary value problem for the damped Boussinesq equation. Communications on Pure and Applied Analysis, 2004, 3 (2) : 319-328. doi: 10.3934/cpaa.2004.3.319
https://origin.geeksforgeeks.org/covariance-and-correlation-in-r-programming/?ref=lbp
# Covariance and Correlation in R Programming

Last Updated : 14 Jan, 2022

Covariance and Correlation are terms used in statistics to measure relationships between two random variables. Both of these terms measure the linear dependency between a pair of random variables or bivariate data. In this article, we are going to discuss the cov(), cor() and cov2cor() functions in R, which use the covariance and correlation methods of statistics and probability theory.

## Covariance in R Programming Language

In R, covariance can be measured using the cov() function. Covariance is a statistical term that measures the direction of the linear relationship between two data vectors. Mathematically,

$cov(x, y) = \frac{\sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y})}{N - 1}$

where,

- x represents the x data vector
- y represents the y data vector
- $\bar{x}$ represents the mean of the x data vector
- $\bar{y}$ represents the mean of the y data vector
- N represents the total observations

### Covariance Syntax in R

Syntax: cov(x, y, method)

where,

- x and y represent the data vectors
- method defines the type of method to be used to compute the covariance. Default is "pearson".

Example:

```r
# Data vectors
x <- c(1, 3, 5, 10)
y <- c(2, 4, 6, 20)

# Print covariance using different methods
print(cov(x, y))
print(cov(x, y, method = "pearson"))
print(cov(x, y, method = "kendall"))
print(cov(x, y, method = "spearman"))
```

Output:

```
[1] 30.66667
[1] 30.66667
[1] 12
[1] 1.666667
```

## Correlation in R Programming Language

The cor() function in R measures the correlation coefficient. Correlation is a statistical term that uses the covariance method to measure how strongly two vectors are related.
Mathematically,

$cor(x, y) = \frac{\sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{N} (x_i - \bar{x})^2} \sqrt{\sum_{i=1}^{N} (y_i - \bar{y})^2}}$

where,

- x represents the x data vector
- y represents the y data vector
- $\bar{x}$ represents the mean of the x data vector
- $\bar{y}$ represents the mean of the y data vector

### Correlation in R

Syntax: cor(x, y, method)

where,

- x and y represent the data vectors
- method defines the type of method to be used to compute the correlation. Default is "pearson".

Example:

```r
# Data vectors
x <- c(1, 3, 5, 10)
y <- c(2, 4, 6, 20)

# Print correlation using different methods
print(cor(x, y))
print(cor(x, y, method = "pearson"))
print(cor(x, y, method = "kendall"))
print(cor(x, y, method = "spearman"))
```

Output:

```
[1] 0.9724702
[1] 0.9724702
[1] 1
[1] 1
```

## Conversion of Covariance to Correlation in R

The cov2cor() function in R converts a covariance matrix into the corresponding correlation matrix.

Syntax: cov2cor(X)

where,

- X represents the covariance square matrix

Example:

```r
# Data vectors
x <- rnorm(2)
y <- rnorm(2)

# Binding into square matrix
mat <- cbind(x, y)

# Defining X as the covariance matrix
X <- cov(mat)

# Print covariance matrix
print(X)

# Print correlation matrix of data vector
print(cor(mat))

# Using function cov2cor()
# to convert covariance matrix to correlation matrix
print(cov2cor(X))
```

Output:

```
           x          y
x  0.0742700 -0.1268199
y -0.1268199  0.2165516
   x  y
x  1 -1
y -1  1
   x  y
x  1 -1
y -1  1
```
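As a cross-check of the value R's cov(x, y) reported above, the same number can be reproduced by hand from the sample-covariance formula; the short sketch below uses Python purely for illustration.

```python
# Recompute cov(x, y) for the vectors used above, dividing by N - 1
# as R's cov() does for the sample covariance.
x = [1, 3, 5, 10]
y = [2, 4, 6, 20]
n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n
cov_xy = sum((xi - mean_x) * (yi - mean_y)
             for xi, yi in zip(x, y)) / (n - 1)
print(round(cov_xy, 5))  # 30.66667, matching R's cov(x, y)
```

The agreement (92/3 = 30.66667) also confirms that the N - 1 denominator is the one R uses by default.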
https://physicsinmotion.ca/fluid-mechanics/fluid-dynamics
Physics in Motion

# Fluid Dynamics

There are many aspects of fluid dynamics that can be understood by simply applying the notions of particle mechanics such as energy, forces, and work. For incompressible fluids in particular, there is a simple relation between the pressure on the fluid, its height, and its velocity. Let us consider a small amount of mass entering a tube on one end and exiting on the other. The two ends are not necessarily at the same height, nor do they necessarily have the same area. The work done on the fluid by the external pressure as it enters is

$$W_1 = A_1 p_1 dx_1$$

where $$A_1$$ is the area of the entrance, $$p_1$$ is the inward pressure, and $$dx_1$$ is the distance the fluid was pushed through by the pressure. Similarly, the work done by the external pressure upon exit is

$$W_2 = - A_2 p_2 dx_2$$

where the minus sign accounts for the opposite direction of the velocity (outwards) as compared with the pressure (inwards). The total work done on the fluid is then

$$W_{ext} = W_1 + W_2 = A_1 p_1 dx_1 - A_2 p_2 dx_2$$

$$W_{ext} = (p_1 - p_2) V$$

Conservation of the volume flow rate implies that over the same time the volume flowing inwards is equal to the volume flowing outwards, and thus $$A_1 dx_1 = A_2 dx_2 = V$$. As this work is done, there are changes in the fluid's kinetic and potential energies.
These changes are

$$\Delta K = \frac{1}{2} m v^2_2 - \frac{1}{2} m v_1^2,$$

$$\Delta U = m g h_2 - m g h_1$$

The relation between the external work done and the changes in kinetic and potential energy informs us that

$$W_{ext} = \Delta K + \Delta U \rightarrow (p_1 - p_2) V = \frac{1}{2} m (v^2_2 - v^2_1) + mg (h_2 - h_1)$$

Dividing through by the volume (so that $$m/V = \rho_l$$, the density of the liquid) and rearranging, we arrive at a relation among the pressure, velocity, and height upon entrance and exit,

$$p_1 + \frac{1}{2} \rho_l v^2_1 + \rho_l g h_1 = p_2 + \frac{1}{2} \rho_l v^2_2 + \rho_l g h_2$$

In other words, this quantity is unchanged throughout the motion,

$$p + \frac{1}{2} \rho_l v^2 + \rho_l g h = constant$$

This relation is known as Bernoulli's equation.
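As a worked example of Bernoulli's equation (my own illustration, not part of the lesson): for an open tank draining through a small hole a depth h below the free surface, $$p_1 = p_2$$ (both atmospheric) and $$v_1 \approx 0$$, so the equation reduces to Torricelli's law, $$v_2 = \sqrt{2gh}$$.

```python
import math


def efflux_speed(h, g=9.81):
    """Exit speed from Bernoulli's equation with p1 = p2 and v1 ~ 0:
    (1/2) * rho * v2**2 = rho * g * h  =>  v2 = sqrt(2 * g * h)."""
    return math.sqrt(2.0 * g * h)


# Water draining through a hole 5 m below the free surface
print(efflux_speed(5.0))  # about 9.9 m/s
```

Note that the density cancels, so the result is the same for any incompressible fluid, consistent with Bernoulli's equation above.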
https://www.the-cryosphere.net/13/297/2019/
The Cryosphere, 13, 297–307, 2019
https://doi.org/10.5194/tc-13-297-2019

Brief communication | 30 Jan 2019

# Brief communication: Analysis of organic matter in surface snow by PTR-MS – implications for dry deposition dynamics in the Alps

Dušan Materić1, Elke Ludewig2, Kangming Xu1, Thomas Röckmann1, and Rupert Holzinger1

• 1 Institute for Marine and Atmospheric Research Utrecht, Utrecht University, Princetonplein 5, 3584CC Utrecht, the Netherlands
• 2 ZAMG – Zentralanstalt für Meteorologie und Geodynamik, Sonnblick Observatory, 5020 Salzburg, Freisaalweg 16, Austria

Correspondence: Dušan Materić (dusan.materic@gmail.com)

Abstract

The exchange of organic matter (OM) between the atmosphere and snow is poorly understood due to the complex nature of OM and the convoluted processes of deposition, re-volatilisation, and chemical and biological processing. OM that is finally retained in glaciers potentially holds a valuable historical record of past atmospheric conditions; however, our understanding of the processes involved is insufficient to translate the measurements into an interpretation of the past atmosphere. This study examines the dynamic processes of post-precipitation OM change at the alpine snow surface with the goal of interpreting the processes involved in surface snow OM.

1 Introduction

Organic matter (OM) in the cryosphere originates from different sources (e.g. oxidation products of anthropogenic, biogenic, and biomass burning volatile organic compounds – VOCs), is transported from short and long distances, and is deposited via dry or wet deposition (Antony et al., 2014).
From the moment of emission, OM undergoes atmospheric chemical processing, which profoundly alters its composition, resulting in numerous chemical species that are finally deposited on the snow/ice surface (Legrand et al., 2013; Müller-Tautges et al., 2016). Fingerprints of OM stored in the snow and ice therefore potentially hold a rich historical record of atmospheric chemistry processes and transport pathways in the atmosphere (Fu et al., 2016; Giorio et al., 2018; Grannas et al., 2006; Pokhrel et al., 2016). The vast diversity of OM found in snow and ice samples is impossible to characterise by any single method. The most widely used methods in snow/ice OM research to date are based on gas chromatography (GC) and liquid chromatography–mass spectrometry (LC-MS) (Giorio et al., 2018; Gröllert et al., 1997). Novel high-resolution mass-spectrometry-based analytical methods, such as Fourier-transform ion cyclotron resonance mass spectrometry (FT-ICR-MS), Orbitrap mass spectrometry, and thermal desorption–proton transfer reaction–mass spectrometry (TD-PTR-MS), have recently been developed and can be used to characterise OM in the cryosphere with high mass resolution (Hawkes et al., 2016; Kujawinski et al., 2002; Marsh et al., 2013; Materić et al., 2017). Therefore, numerous new proxies are now potentially available to interpret the rich composition of OM in the cryosphere. Reconstructing past atmospheric conditions from measurements of OM in the cryosphere is analytically challenging because of (1) low concentrations of target organics in the sample and (2) chemical changes that (might) happen after OM deposition. A recently developed method using TD-PTR-MS has partly solved the first issue, enabling the detection of low-molecular-weight OM ranging from 28 to 500 amu (Materić et al., 2017). However, chemical changes (e.g.
photochemical, biological) and re-emission from the snow/ice surface still remain challenging to quantify, especially given the diversity of OM species, both high- and low-molecular-weight. The low-molecular-weight fraction represents an important part of OM in the cryosphere; this group includes VOCs and semi-volatile organic compounds (sVOCs) that deposit directly from the gas phase or as part of secondary organic aerosols (SOAs). Low-molecular-weight OM has been extensively studied in an atmospheric context by real-time or off-line PTR-MS techniques (Gkatzelis et al., 2018; Holzinger et al., 2010b, 2013; Materić et al., 2015; Timkovsky et al., 2015), but much less so in the context of deposited (e.g. dissolved) OM in the cryosphere. In this work, we applied a novel TD-PTR-MS method to measure concentrations of OM present in alpine snow. The first application of this new technique is the investigation of the snow–atmosphere interaction of OM during a dry weather period.

2 Material and methods

2.1 Sampling site

The snow samples were taken at an altitude of 3106 m at Hoher Sonnblick, Austria, close to the Sonnblick Observatory research station. The sampling site was next to the southern precipitation-measuring platform, about 50 m south-east of the observatory. The location was carefully chosen to be least affected by potential contamination from the observatory. Based on meteorological records kept since 1886, the average temperature at the site is about 1.1 °C in summer and −12.2 °C in winter. The sampling period spanned 20 March to 1 April 2017. During this period the Sonnblick Observatory experienced an average day length of 12.5 h, an average temperature of −4 °C, 78 % relative humidity, an average wind speed of 7.3 m s−1, and a pressure of 696 hPa.
There was no significant precipitation during this period, but the days were mostly foggy in the morning, with the exception of 27 and 28 March 2017, which were nearly clear-sky days, followed by less cloudy days until 1 April 2017. The measured air temperatures (2 m above the surface) at the site were below zero the whole time, with the exception of three brief instances when the temperature was recorded at 0.1 °C for 10 min (Fig. 1a). However, hourly temperature averages for these events were also < 0 °C. Using a positive degree-day (PDD) model to assess possible melting during those single 10 min periods, we calculate a meltwater depth of 1.4–5.5 µm (snow melt factor of 2–8 mm °C−1 day−1; Singh et al., 2000). Thus, we conclude that no significant melting and runoff occurred during the entire sampling period.

Figure 1. Meteorological data measured at the Sonnblick station during the sampling period: (a) temperature, (b) global radiation, (c) wind direction, (d) relative humidity. Vertical lines indicate the sampling times. Note the change in wind direction in the period preceding the sampling on 29 March 2017.

More information on the meteorological conditions can be found in Figs. 1 and A1.

2.2 Sampling

Snow samples were taken every third day from the surface snow (< 2 cm), scooping the snow directly into clean 50 mL polypropylene vials. We also took field blanks (ultrapure water) to ensure that our blanks were exposed to the same impurities as the snow samples. The samples were stored in a freezer at −20 °C until the end of the sampling campaign and then shipped on dry ice to the analysis lab, where they were kept frozen until the analysis.

2.3 Analysis

Prior to the analysis, the samples (and blanks) were melted at room temperature and filtered through a 0.2 µm PTFE filter.
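The meltwater estimate above can be reproduced with a short positive degree-day calculation; the temperature (0.1 °C), duration (10 min), and melt factors (2–8 mm °C−1 day−1) are the ones quoted in the text.

```python
# Positive degree-day (PDD) estimate of meltwater depth for one brief
# above-zero episode, following the approach cited in the text
# (Singh et al., 2000).

def pdd_melt_depth_um(temp_c, duration_min, melt_factor_mm_per_degc_day):
    """Meltwater depth in micrometres for a single above-zero episode."""
    pdd = temp_c * duration_min / (24 * 60)        # degree-days accumulated
    melt_mm = melt_factor_mm_per_degc_day * pdd    # meltwater in millimetres
    return melt_mm * 1000.0                        # convert to micrometres

low = pdd_melt_depth_um(0.1, 10, 2)    # lower melt factor
high = pdd_melt_depth_um(0.1, 10, 8)   # upper melt factor
print(round(low, 1), round(high, 1))   # ~1.4 and ~5.6 um (the paper quotes 1.4-5.5)
```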
We loaded 1 mL of each sample into clean 10 mL glass vials that had been prebaked at 250 °C overnight. The samples, together with the field blanks, were dehydrated using a low-pressure evaporation–sublimation system and analysed by TD-PTR-MS (PTR-TOF 8000, IONICON Analytik), following the method described before (Materić et al., 2017). The samples (triplicates) were run randomly, and the blanks (four replicates) were run in between, covering the entire period of the experiment. PTR conditions included a drift-tube pressure of 295 Pa, a drift-tube temperature of 120 °C, and a drift-tube voltage of 603 V, yielding an E/N of 122 Td. The thermal desorption procedure was optimised for snow-sample analysis and has the following temperature sequence: (1) 1.5 min incubation at 35 °C, (2) ramp to 250 °C at a rate of 40 °C min−1, (3) 5 min at 250 °C, and (4) cooling down to < 35 °C. The method is fast (< 15 min per run), sensitive (e.g. limit of detection (LoD) < 0.17 ng mL−1 for pinonic acid, LoD < 0.26 ng mL−1 for levoglucosan), requires a small sample size (< 2 mL of water), and provides reasonably high-mass-resolution data (> 4500, full width at half maximum, FWHM). For the data analysis, we used the custom-made software package PTRwid for peak integration and identification and R scripts for statistical analyses (linear regression, fitting, etc.) (Holzinger, 2015). We used 3σ of the field blanks to estimate the LoD, so only ions above this value were taken into account for the scientific interpretation (Armbruster and Pry, 2008). We evaluated the impurities in the field blanks by comparing them with the system blanks (clean vials) and found that the average impurity level of a field blank was reasonably low (7.0 ng mL−1), of which most (60 %) originated from the ion m/z 81.035 (C5H4OH+).
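The 3σ detection-limit filter described above can be sketched as follows. The ion labels and blank/sample concentrations are illustrative, not values from the study, and the exact comparison (blank-subtracted signal against 3σ of the field blanks) is our reading of the procedure.

```python
# Sketch of the LoD filter: per ion, LoD = 3 sigma of the field blanks
# (Armbruster and Pry, 2008); keep only ions whose blank-subtracted
# signal exceeds the LoD. All numbers below are made up for illustration.
from statistics import mean, stdev

blanks = {"m/z 81.035": [6.2, 7.4, 6.8, 7.6]}       # ng/mL, four field blanks
sample = {"m/z 81.035": 9.0, "m/z 115.070": 12.5}   # ng/mL, one snow sample

def above_lod(ion, value):
    b = blanks.get(ion, [0.0, 0.0, 0.0, 0.0])       # ions absent from blanks
    lod = 3 * stdev(b) if len(set(b)) > 1 else 0.0  # 3 sigma of the blanks
    return (value - mean(b)) > lod                   # blank-subtracted signal

kept = {ion: v for ion, v in sample.items() if above_lod(ion, v)}
```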
The impurities here might originate from the polypropylene vials we used; however, the levels are much lower compared to the methods used for measuring total and dissolved OM (Giorio et al., 2018). The impurities were taken into account by means of field blank subtraction and LoD filtering (Materić et al., 2017). From the mass spectra, identified peaks were integrated over 8 min, starting when the temperature in the TD system reached 50 °C. Extracted peaks were quantified by PTRwid, and the concentration was expressed in nanograms per millilitre of sample. We calculated the molar concentration of C, H, O, and N for each sample, from which atomic ratios (O∕C, H∕C, N∕C), the mean carbon number (nC), and the mean carbon oxidation state (OSC) were calculated as described earlier (Holzinger et al., 2013; Materić et al., 2017). For the elemental composition calculation, we excluded ions with m/z < 100, as these are dominated by thermal dissociation products of non-volatile high-molecular-weight compounds. Taking these fragments of bigger molecules into account would substantially alter the elemental composition and atomic ratios.

Figure 2. Total concentration of organic ions and cumulative metrics of atomic ratio distribution. (a) Total concentration in nanograms per millilitre; the line represents the fit from the simple deposition model explained in the text (Eq. 1); (b) H∕C ratio; (c) O∕C ratio; (d) N∕C ratio; (e) oxidation state of carbon; (f) mean number of carbon atoms. The error bars represent the standard deviation of three replicates.

3 Results and discussion

3.1 Total ion concentration and simple mass balance model

During our sampling period, the total concentration of organics generally increases over the time that the snow was exposed to the atmosphere (Fig. 2a). The concentration of organics in the snow surface reflects a dynamic balance between two opposing processes that work independently: deposition as source and loss.
If we consider just dry deposition (it was a period without precipitation), the retained (actual) concentration of the organics in the snow can be described as

dm/dt = D − L,  (1)

where m is the concentration of organics remaining in the snow, D is the total dry deposition rate, and L is the overall loss rate due to re-volatilisation, photochemical reactions, biological processes, etc. As our samples generally show an increase in the ion concentrations (Figs. 2 and 3), the loss rate by re-volatilisation, photochemical reaction, and biological decay is lower than the total deposition rate (D > L). A negative mass balance, i.e. D < L, can happen, for example, in periods of extensive photochemical reactions together with snow exposure to an air mass with a low concentration of OM.

Figure 3. Box plots of concentration for ions representing four distinctive groups (the thick line of a box represents the median; the upper and lower lines represent the maximum and minimum values): (a) ion m/z 115.070 – pinonic acid, (b) ion m/z 85.029 – levoglucosan, (c) ion m/z 99.008, and (d) ion m/z 159.065. The lines illustrate the change in concentration over time that is typical for each group. The first sample was taken just after the precipitation (snow symbol), followed by a non-precipitation period for the rest of the experiment (other weather symbols).

Our total concentration data (as well as many individual ion groups; see below) indicate a relaxation towards a source–sink equilibrium. Mathematically, the simplest model that has these characteristics is a system with a quasi-constant deposition rate D (i.e. changes in deposition are much slower than changes in the loss rate) and a first-order loss rate (L = −km in Eq. 1), which can be integrated to yield

m = m0 e−kt + (D/k)(1 − e−kt),  (2)

where m0 is the initial concentration of m, k is the first-order loss rate coefficient, and t is time. In our experiment, we measured m with a time step t of 3 days and consider m0 to be our measurement of the fresh snow at the beginning of the analysis period. Equation (2) can then be fit to the data, and the best fit for the total concentration of semi-volatile organic traces (R2 > 0.9899 and rRMSE < 3.5 %) was found for k = 0.31 day−1 and D = 206 ng mL−1 day−1. When the fit is applied to the mass of carbon in the detected organics, the best-fit values for the two parameters are k = 0.30 day−1 and D = 114 ng C mL−1 day−1, respectively. Considering reported average organic aerosol (OA) concentrations, we assume the winter air concentration (C) to be at most 2 µg C m−3 (Guillaume et al., 2008; Holzinger et al., 2010a; Strader et al., 1999). Further taking an average sampling depth of 2 cm and a snow density of 250 mg mL−1, we calculated a deposition velocity of 0.33 cm s−1 according to Eq. (3):

v = D / (C × A),  (3)

where D is the measured deposition rate, C is the concentration (2 µg C m−3), and A is the area that was typically sampled (combining sampling depth and snow density relates 1 mL of the sample to an area of 2 cm2). Assuming a slightly higher (3 µg C m−3) or lower (1 µg C m−3) OA concentration in air and a sampling depth varying between 1.5 and 2 cm, we calculated an uncertainty of a factor of 2 in either direction.
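The deposition-velocity estimate from Eq. (3) can be checked with a short unit conversion; the inputs (D = 114 ng C mL−1 day−1 from the carbon-based fit, C = 2 µg C m−3, and the sampling geometry relating 1 mL of melted sample to 2 cm2 of snow surface) are the values given in the text.

```python
# Unit-conversion check of Eq. (3): v = D / (C * A).
D = 114e-9 / 86400.0   # deposition rate: g C per mL of sample per second
A = 2.0                # cm^2 of snow surface sampled per mL of sample
C = 2e-6 / 1e6         # air concentration: g C per cm^3 (2 ug C m^-3)

v = D / (C * A)        # deposition velocity in cm s^-1
print(round(v, 2))     # ~0.33 cm s^-1, as stated in the text
```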
Thus, a deposition velocity for OAs of 0.17–0.66 cm s−1 would be required to be consistent with the observations. However, the deposition velocities for OA were previously estimated to be 0.034 ± 0.014 and 0.021 ± 0.005 cm s−1 for particles in the 0.15–0.3 and 0.5–1.0 µm size ranges (Duan et al., 1988; Gallagher et al., 2002). The required deposition velocities are approximately an order of magnitude higher than the previously reported estimates, even if we use the upper limit of the expected OA concentration (2 µg C m−3). Therefore, we conclude that the dominating contribution to OM in the snow is from gas-phase sVOCs. As direct measurements of bulk sVOCs do not exist, we estimated the average loads of sVOCs in the air passing the sampling location that would be required to explain the observations. Using the deposition rate calculated from our measurements (Eq. 2), the concentration-weighted average molecular mass of the measured compounds, and a deposition velocity of 1 cm s−1 (assuming that sVOC deposition velocities are similar to that of formic acid) (Nguyen et al., 2015), we calculated an average gas-phase sVOC burden of 883 ng m−3 of air, which is equivalent to 247 ppt. Assuming slightly higher or lower deposition velocities (± 0.2 cm s−1) yields errors of +221 and −148 ng m−3, or +62 and −41 ppt. Our calculated value of the average sVOC concentration agrees with previous estimates of 600 ng m−3 (Zhao et al., 2014). Thus, our data suggest that the dynamic processes of dissolved organic matter (DOM) on the surface snow are dominated by deposition and re-volatilisation of gas-phase sVOCs. This has important implications for our understanding of snow surface processes. Our analysis suggests that air masses with different sVOC composition can leave different OM fingerprints in the snow (discussed in the sections below). The D/k ratio quantifies the equilibrium point (asymptote) for the model described in Eq.
(2). This represents the point at which equilibrium is established between deposition and losses. The derived time constant for loss of about 3 days implies that 90 % equilibrium is established for the total ion concentration in only 6 days. This value, however, represents an average equilibration time for the total measured DOM, and it is reasonable to assume that this equilibration timescale differs among compounds. In particular, it is expected to be established much faster for the gas-phase sVOCs compared to SOA. Similar mass balance calculations will be carried out in the following section for individual ion groups.

3.2 Grouping of ions with similar time evolution

In the data analysis of the TD-PTR-MS spectra, we found 270 organic ions above the detection limit present in the samples. Compounds that have the same origin (similar sources or atmospheric chemistry processes) should feature a similar time evolution, provided their lifetime is not so short that such a common time evolution is lost. Based on the pattern of concentration change over time (using a linear regression model), we identified four groups of ions with a similar time evolution (Fig. 3, Table A1). To groups 1, 2, 3, and 4 we assigned 25, 33, 9, and 21 ions, respectively (88 ions in total, i.e. 33 %); 175 ions did not fall into any of these groups. Ions which we did not assign to any group either showed a different time evolution or had concentrations close to the detection limit, causing poor correlation. On average, the total concentration levels of the ions within the four groups were 30, 56, 16, and 57 ng mL−1, and 315 ng mL−1 for ions which did not fall into any of the described groups. Specific information can be found in Materić (2019). These levels of OM retrieved by PTR-MS agree with previous measurements at the site, although different methods were used (Gröllert et al., 1997). In the first two groups (Fig.
3a and b), among the numerous ions we identified masses that we tentatively attribute to pinonic acid (m/z 115.07 fragment) and levoglucosan (e.g. m/z 85.03 and 97.03 fragments) (Salvador et al., 2016). Pinonic acid is an oxidation product of monoterpenes, and its main source is expected to be emissions from the surrounding alpine conifer forests; thus, group 1 ions indicate air masses that were originally rich in biogenic VOCs, which have been processed during transport. Levoglucosan is a clear indicator of biomass burning, and the most likely source during this period is domestic wood combustion. Therefore, we associate group 2 ions with anthropogenic wood combustion sources and their products of complex atmospheric processing. The compounds that fall in group 3 show, after an initial increase in concentration on 23 March 2017, a decreasing trend (Fig. 3c; see also Table A1). The change in the concentration of the compounds constituting this group may point to a one-time significant pollution event which happened between 20 and 23 March. The total concentration of ions in this group was measured to be 34 ng mL−1 (8.2 % of the total organics) on 23 March 2017. This deposition event could have come from a single source; however, higher-time-resolution measurements are needed to further characterise the potential source. As the total concentration of ions in this group drops within 6 days to below 20 ng mL−1 (3.1 % of the total organics), this group is also an example of how contaminated snow equilibrates with the cleaner atmosphere on timescales similar to those we derived from the simple box model. As for the total concentration, most of the ions and ion groups show an increase in concentration throughout the sampling period. Group 4 (Fig. 3d, Table A1) represents the compound group for which the concentration seems to increase steadily towards an equilibrium. This indicates that the simple mass balance model may be applicable, i.e.
the assumption of a (close to) constant deposition and a first-order loss rate. Therefore, we also applied the simple mass balance model (see Sect. 3.1) to the individual ions in group 4 to investigate whether individual organic compounds have different k values. This is expected due to their different chemical and physical properties (such as volatility, susceptibility to photolysis, etc.) as well as their different suitability as substrates for potential biodegradation. For the sum of organic ions in group 4 (Fig. 3d, Table A1), k = 0.20 day−1. Generally, the lower k value of this group compared to the total sVOC could be related to the fact that most of the ions here are heavier (thus less volatile). However, within this group the k values of individual ions were found to be independent of the molecular weight and also independent of the composition, i.e. O∕C, H∕C, OSC, and nC (R2 < 0.12). As the volatility of sVOCs is expected to depend on molecular weight and functional groups (longer sVOCs are in general less volatile, unless additional functional groups are involved), this suggests that volatility might not be the only factor in the loss processes of this group. A deviation from the general concentration trend in individual ions (from the expected growth, Sect. 3.1) was observed on 29 March 2017, particularly in groups 1 and 2, represented by pinonic acid and levoglucosan (Fig. 3a and b). The elevated levoglucosan and lower pinonic acid levels observed on 29 March are temporally related to a change in wind direction. On 29 March, the air masses originated from the north-east rather than the north-west direction seen for the other samples (Fig. 1c), so this event is attributed to the meteorological situation and possibly a more pronounced source of biomass burning following the transport regime at the time.
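The trend-based grouping described in this section can be sketched as follows; the R2 threshold, the reference growth pattern, and the ion time series below are illustrative, not values from the study.

```python
# Sketch of the ion grouping of Sect. 3.2: ions whose concentration time
# series correlate strongly (high R^2 in a linear regression against a
# reference pattern) are assigned to the same group. All series are made up.

def r_squared(x, y):
    """Coefficient of determination of a simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

reference = [5, 18, 25, 29, 31]             # steady growth towards equilibrium
candidates = {
    "m/z 159.065": [2, 8, 11, 13, 14],       # similar growth -> grouped
    "m/z 99.008":  [3, 34, 20, 12, 7],       # pulse then decay -> not grouped
}

group = [ion for ion, series in candidates.items()
         if r_squared(reference, series) > 0.98]
```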
The presence of such distinctive patterns of concentration change over time, the ion grouping, and their relation to the meteorological data indicate that meteorology and the deposition of sVOCs after fresh precipitation strongly affect the organic composition in snow, which calls into question the most straightforward approach of interpreting OM signals in terms of OA in the air.

3.3 Elemental composition

We further investigated the processing of OM in snow during the study period by calculating cumulative metrics of the OM composition from the PTR-MS data, namely the elemental ratios O∕C and H∕C, nC, and OSC of the organic carbon, in order to characterise the processes behind the observed changes. The fresh snow sample (20 March 2017) has the lowest total concentration of all measured organics, a low OSC, the lowest O∕C and N∕C values, and high H∕C and nC values (Fig. 2), which all indicate “fresh” OM in the air (Kroll et al., 2011) that was captured in the snow. An interesting signature in the metrics is observed on 29 March 2017, when the prevailing air flow regime was interrupted (wind direction change, Fig. 1c). This sample showed the highest value of nC, the lowest OSC, an elevated H∕C ratio, and a low O∕C ratio (Fig. 2). This all indicates photochemically younger (fresher) emissions of VOCs and semi-volatiles, originating from air masses rich in biomass burning aerosols (Fig. 3b), which is in agreement with previous results linking low OSC and high nC to biomass burning aerosols (Kroll et al., 2011). However, on 29 March we also observed a lower average total OM concentration in the sample compared to the previous period, which clearly indicates a net loss of OM. Potential processes that could explain such a loss of OM involve photolysis-induced re-volatilisation, OM runoff (e.g. snow melting), or oxidation.
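The cumulative composition metrics used in this section can be sketched as follows. The approximation OSC ≈ 2·O∕C − H∕C follows Kroll et al. (2011), cited in the text; the example ion formulas and molar amounts are hypothetical.

```python
# Sketch of the cumulative composition metrics (O/C, H/C, N/C, mean carbon
# number nC, and mean carbon oxidation state OSc) computed from ion formulas
# weighted by molar amounts. The two ions and their amounts are hypothetical.

ions = [
    # (nC, nH, nO, nN, molar amount) -- hypothetical values
    (10, 16, 3, 0, 1.0),   # a pinonic-acid-like formula, C10H16O3
    (6, 10, 5, 0, 2.0),    # a levoglucosan-like formula, C6H10O5
]

C = sum(nc * a for nc, nh, no, nn, a in ions)    # total moles of carbon
H = sum(nh * a for nc, nh, no, nn, a in ions)    # total moles of hydrogen
O = sum(no * a for nc, nh, no, nn, a in ions)    # total moles of oxygen
N = sum(nn * a for nc, nh, no, nn, a in ions)    # total moles of nitrogen

o_c, h_c, n_c = O / C, H / C, N / C              # atomic ratios
mean_nC = C / sum(a for *_, a in ions)           # mean carbon number
OSc = 2 * o_c - h_c                              # mean carbon oxidation state
```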
Photolysis-induced volatilisation should be higher for this sample, as the previous days (27 and 28 March) had the highest global radiation values (33 % higher than the average for the sampling period) and the longest sunshine duration (> 12 h) (Figs. 1b and A1). Conversely, no significant temperature increase was measured that would support increased melting and OM runoff. Loss by oxidation (referring to “dark” oxidation that is uncoupled from photooxidation) is also unlikely as a main process, since the O∕C ratio did not increase for 29 March (Fig. 3c). Thus, the most likely cause of the lower total OM concentration observed on 29 March is re-volatilisation, possibly enhanced by photolysis, which would indicate that the air contained a lower burden of sVOCs. In addition, new OM material with different characteristics was deposited before that sample was collected. Combining all metrics (Fig. 2) and the meteorological data available (Fig. 1), we can conclude that the air passing the site prior to 29 March 2017 was cleaner and photochemically younger and contained higher-molecular-weight compounds that might have originated from anthropogenic emissions such as biomass burning (high levels of levoglucosan, Fig. 3).

4 Conclusion

In this work, we analysed the concentrations of low-molecular-weight organic matter (20–500 amu) in alpine snow samples during a 12-day no-precipitation period, 20 March–1 April 2017. We identified four distinctive groups of ions with a similar concentration trend over that time (R2 > 0.9), suggesting common sources, chemistry processes, or transport pathways. The largest two groups of ions came from (a) surrounding forests (e.g. pinonic acid – associated with monoterpene oxidation) and (b) residential fires (levoglucosan – a common biomass burning marker). The snow sample taken on 29 March showed a change in the general concentration trend, consistent with a shift in wind direction, indicating a different air mass origin.
This is also in agreement with a change in the atomic ratio metrics (O∕C, H∕C, OSC, and nC), which also indicated that re-volatilisation is the most important pathway of OM loss here, suggesting that the advected air was cleaner during this period. Dry deposition can be approximated by a mass balance model with a roughly constant deposition rate of D = 206 ng mL−1 day−1 and a first-order loss rate constant of k = 0.31 day−1. The calculated deposition velocities were inconsistent with the idea that OAs contribute the bulk of the deposited OM; instead we suggest a dominant contribution of gas-phase sVOCs over OA to the total bulk organic matter. This all indicates that, at least for this site and location, snow–atmosphere DOM exchange processes are mostly driven by gas-phase sVOCs, for which equilibration with air is fast. This has implications for the reconstruction of recent atmospheric conditions by analysis of organics in the snow.

Data availability. Data are available in Materić (2019).

Appendix A

Figure A1. Light conditions and precipitation during the sampling period. (a) Global radiation (W m−2) integrated for each day; (b) total daily sunshine duration in hours; (c) precipitation for the sampling period.

Table A1. Groups of ions as identified using a linear regression model. Note that different thresholds and cutoffs of R2 values are used to assign different ions to the groups (cutoffs: R2 > 0.98, 0.98, 0.995, and 0.70). The ions used for the group identification are highlighted in bold.

Author contributions. DM and RH designed the experiments and DM carried them out. EL provided the samples and meteorological data. DM prepared the paper with contributions from all co-authors.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements.
This work is supported by the Netherlands Earth System Science Centre (NESSC) research network and by the Dutch NWO Earth and Life Science (ALW), project 824.14.002. We thank the operators at the Sonnblick Observatory for taking the samples.

Edited by: Martin Schneebeli
Reviewed by: two anonymous referees

References

Antony, R., Grannas, A. M., Willoughby, A. S., Sleighter, R. L., Thamban, M., and Hatcher, P. G.: Origin and sources of dissolved organic matter in snow on the East Antarctic ice sheet, Environ. Sci. Technol., 48, 6151–6159, https://doi.org/10.1021/es405246a, 2014. Armbruster, D. A. and Pry, T.: Limit of Blank, Limit of Detection and Limit of Quantitation, Clin. Biochem. Rev., 29, S49–S52, 2008. Duan, B., Fairall, C. W., and Thomson, D. W.: Eddy Correlation Measurements of the Dry Deposition of Particles in Wintertime, J. Appl. Meteorol., 27, 642–652, https://doi.org/10.1175/1520-0450(1988)027<0642:ECMOTD>2.0.CO;2, 1988. Fu, P., Kawamura, K., Seki, O., Izawa, Y., Shiraiwa, T., and Ashworth, K.: Historical Trends of Biogenic SOA Tracers in an Ice Core from Kamchatka Peninsula, Environ. Sci. Technol. Lett., 3, 351–358, https://doi.org/10.1021/acs.estlett.6b00275, 2016. Gallagher, M. W., Nemitz, E., Dorsey, J. R., Fowler, D., Sutton, M. A., Flynn, M., and Duyzer, J.: Measurements and parameterizations of small aerosol deposition velocities to grassland, arable crops, and forest: Influence of surface roughness length on deposition, J. Geophys. Res.-Atmos., 107, AAC 8-1–AAC 8-10, https://doi.org/10.1029/2001JD000817, 2002. Giorio, C., Kehrwald, N., Barbante, C., Kalberer, M., King, A. C. F., Thomas, E. R., Wolff, E. W., and Zennaro, P.: Prospects for reconstructing paleoenvironmental conditions from organic compounds in polar snow and ice, Quaternary Sci. Rev., 183, 1–22, https://doi.org/10.1016/j.quascirev.2018.01.007, 2018. Gkatzelis, G. I., Tillmann, R., Hohaus, T., Müller, M., Eichler, P., Xu, K.-M., Schlag, P., Schmitt, S.
https://www.physicsforums.com/threads/rotation-of-a-ring-pivoted-at-several-points.907025/
Rotation of a ring pivoted at several points Tags: 1. Mar 9, 2017 davidpascu 1. The problem statement, all variables and given/known data A rigid ring is fixed with three bearings evenly spaced around its circumference, which allow it to rotate but not to displace in the radial direction. An external force is applied at a fixed point P of the ring which moves with the ring. The bearings have friction. These parameters from the problem are known: ring mass, radius and moment of inertia, external force (value F and angle phi) and friction coefficient. Obtain the expression for the angular acceleration as a function of these parameters and the angular position of the point P. 2. Relevant equations For the friction forces: F_fr = μN 3. The attempt at a solution I have tried projecting all the forces on the horizontal and vertical axes, and using Newton's equations at the center of mass: ΣF = 0 and ΣM = Iα, but I end up with more unknown variables than equations. 2. Mar 9, 2017 BvU Namely? Please post what you have so far; perhaps we can find another relationship between the variables you should have listed in full.... 3. Mar 9, 2017 davidpascu First, projecting the force in the tangential and radial directions, with the angle phi: FTAN = F cos Φ and FRAD = F sin Φ Then, projecting these two again onto the vertical and horizontal axes: FX = -FTAN cos θ - FRAD sin θ FY = -FTAN sin θ + FRAD cos θ where the vertical axis Y is taken as the origin of the angle θ. Considering that all normal reactions point towards the ring center, I apply Newton's laws at the center of mass. The normal reactions and friction forces are numbered 1, 2, 3 in counterclockwise order starting from the one at the top. In X: N2 cos 30 - N3 cos 30 + FFRICT1 - FFRICT2 sin 30 - FFRICT3 sin 30 + FX = 0 In Y: -N1 + N2 sin 30 + N3 sin 30 + FFRICT2 cos 30 - FFRICT3 cos 30 - mg + FY = 0 Moment about the CM: R x (FTAN - FFRICT1 - FFRICT2 - FFRICT3) = Iα And FFRICT = μN for all 3 bearings. 
Since I need the angular acceleration as α = f(m, I, μ, F, Φ, θ), I first need to get the expressions for the normal forces (and with them the friction forces). And with this set of equations alone I am not able to do it. Last edited: Mar 9, 2017 4. Mar 9, 2017 BvU Apart from $I = mR^2$ I don't have much to contribute, I'm afraid. Perhaps an extra assumption is required? Because I see that increasing all three normal forces by the same number of newtons does not change the problem description, but it does change the outcome. (I.e. how tightly is the ring clamped in?) A sensible assumption to reduce the number of degrees of freedom would then be to put $F_{N,1} = 0$ -- but then: why isn't it in the problem statement? Last edited: Mar 9, 2017 5. Mar 9, 2017 haruspex Quite so. It would be reasonable to look for the solution which minimises their magnitude. Since there are three, it is not obvious what that means. I suggest finding the maximum possible acceleration. 6. Mar 9, 2017 BvU I completely forgot: Hello David, I'm glad Haru joined us: I felt lonely wanting to help but unable to do so. Goes to show that even at PF we don't know everything. Can you make headway assuming $F_{N,1} = 0$? 7. Mar 9, 2017 haruspex Not sure that would correspond to max acceleration. Min Σ|FN| would. @davidpascu, I forgot to ask... The description does not say whether the plane is horizontal or vertical. The diagram shows mg. Is that in the original or did you add it? 8. Mar 10, 2017 davidpascu Hi all, The ring is in the vertical plane. Actually this is not a fixed problem, but just one case of a more general model I need to do. The issue is this: I have a ring "pivoted" at several points with bearings so that it can rotate around its axis but not translate. The number of bearings is not fixed. The ring's rotation is driven by a force such as the one in the example. 
And I want to create a simple dynamics model that allows me to get the loads on the bearings. The problem I put in the heading is just the particular case of this model with 3 bearings. But I guessed it was easier to post one particular case than the global problem, and that if I managed to solve this one I could extend the solution to the other cases. 9. Mar 10, 2017 BvU Ah, that explains the problem statement. Is it wise, then, to use $F_{\rm friction} = \mu N$? (We think it makes the problem underdetermined.) 10. Mar 10, 2017 davidpascu I am not really sure, it is just an assumption... I guess maybe it is not correct, and that is why it does not lead to a solution. Then I would ask for ideas on how to model the problem. I guess the heart of the problem is how to model the bearings correctly. My first feeling was to put the reaction forces in the radial direction, because the bearings in essence work as sliders (prismatic joint, see picture below), but that leads to inconsistencies also, because if so, how can the reactions counteract the force at every moment to prevent the ring from moving if, for example, there is only one bearing, or there are two of them situated radially opposite? PD: I guess maybe with this new formulation of the problem it may no longer belong to this section of the forum. I don't know if it can be moved to the correct one. 11. Mar 10, 2017 Nidum (1) If the three bearings are just in nominal contact with the ring, then only two of them are carrying load at any one time? A solution may be possible using a piecewise method where forces are analysed in separate zones as different bearing combinations come into play. Or (2) ignore the fact that there are actually three separate bearings and just use a rule relating combined bearing friction drag torque to radial load. The calculation of friction drag torque for the case of radial load on a dry plain or segmented sleeve bearing/shaft combination is quite simple and may be useful. 
Last edited: Mar 10, 2017 12. Mar 10, 2017 Nidum What are you trying to do overall with this work? Bearing technology is quite well understood out in the real world. If you have any particular area of interest, let us know. You may get some useful information back. Last edited: Mar 10, 2017 13. Mar 10, 2017 davidpascu I am involved in the conceptual design of some mechanisms. For my part I need to design this mechanism for rotating a mobile part. The project is still at an early stage, so the only things I know for sure are the ring's dimensions and the external force. I have to come up with an idea for a guiding system for the ring's rotation (something such as a set of bearings or some kind of rail-wheel system), and for that I want to first know what forces and momenta I can expect on them. So that is why I wanted to make a simple model that can allow me to get some quick estimations without having to go to FEM modelling or similar. I will try taking a look at what I can find on bearing forces and friction. Thanks! 14. Mar 10, 2017 Nidum How big is the ring, and what sort of speeds and loads are involved? 15. Mar 13, 2017 davidpascu The ring has around a 2-meter diameter. It rotates at low speed (below 100 rpm) and the load applied to it is variable, but it's always around 200–300 N. 16. Mar 13, 2017 Nidum The problem with big thin rings is that they distort under their own weight, so any support system really has to be multi-contact or ideally continuous. Ball races are manufactured in large-diameter, small-cross-section configurations for some types of medical scanners. That may provide a ready-made solution. 17. Mar 13, 2017 Dr.D Regarding the wording of the original question, the word "pivot" suggests a pin joint, a fixed point (a revolute joint, to be precise). Everything discussed seems to indicate that you intend either a rolling or sliding support (a prismatic joint). The word pivot can easily misdirect the discussion.
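A minimal numeric sketch of the system as the thread leaves it, taking BvU's extra assumption $F_{N,1} = 0$ from posts 4–6 to close the equations of post 3. All numeric values below are illustrative placeholders, not data from the thread:

```python
import math

# Illustrative parameters: mass (kg), radius (m), moment of inertia (kg*m^2),
# friction coefficient, gravity (m/s^2), applied force (N) and angles (rad).
m, R, I_cm, mu, g = 10.0, 1.0, 5.0, 0.1, 9.81
F, phi, theta = 250.0, math.radians(30), math.radians(40)

# Tangential/radial projections, then the X/Y components from post 3.
F_tan = F * math.cos(phi)
F_rad = F * math.sin(phi)
Fx = -F_tan * math.cos(theta) - F_rad * math.sin(theta)
Fy = -F_tan * math.sin(theta) + F_rad * math.cos(theta)

c30, s30 = math.cos(math.radians(30)), math.sin(math.radians(30))

# With N1 = 0 and F_fr_i = mu*N_i, the X and Y balances reduce to two
# linear equations in N2 and N3:
#   (c30 - mu*s30)*N2 + (-c30 - mu*s30)*N3 = -Fx
#   (s30 + mu*c30)*N2 + (s30 - mu*c30)*N3  = m*g - Fy
a11, a12, b1 = c30 - mu * s30, -c30 - mu * s30, -Fx
a21, a22, b2 = s30 + mu * c30, s30 - mu * c30, m * g - Fy
det = a11 * a22 - a12 * a21          # Cramer's rule for the 2x2 system
N2 = (b1 * a22 - a12 * b2) / det
N3 = (a11 * b2 - b1 * a21) / det

# Moment balance about the centre: R*(F_tan - mu*(N2 + N3)) = I*alpha
alpha = R * (F_tan - mu * (N2 + N3)) / I_cm
```

For a general angular position one would re-evaluate this for each θ, and check that the resulting N2, N3 stay non-negative (a negative value would signal that a different pair of bearings carries the load, as Nidum's piecewise suggestion anticipates).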
https://by.tc/aperiodicity-of-rule-30/
# Proof summary for aperiodicity of Rule 30 #### This is Rule 30. The center column is highlighted, as we are interested in determining whether or not it's possible for it to ever degenerate into a cyclic pattern. Today, I have a pretty good outline for how to show that it cannot. ### Rule 30 as boolean algebra We can always substitute variables in place of the hard 1s and 0s you usually deal with in cellular automata. #### 1 variable Here's one of the simplest replacements we could do. Here, we merely replaced the single cell that starts all this, the $1$ at the top, with $a$. That $a$ is propagated through the system exactly as the $1$ would be. Note that it would also be legitimate (if boring) to let $a=0$, in which case we're left with an empty universe. #### 2 variables If we add another variable, things begin to get interesting: If we let $a=0,b=1$, then $b$ becomes the center column exactly as before. If we set $a=1$, however, it becomes the de facto center column*, regardless of the value of $b$. For a while, I thought there might be a trick along those lines to show that neither one could therefore be cyclic, but I wasn't able to find anything, other than a conclusion that the appearances of $a$ and $b$ must be aperiodic even if the resulting pattern is not. Apart from that, the situation is also getting slightly messier. We now have several values we see coming up repeatedly, listed in the table to the right. This is our first look at something that will become critical later: that new expressions are generally built by pasting two different sets together. * We don't mean the literal center column here, but rather that it will contain the same pattern found in the typical Rule 30 configuration with a single $1$ cell in the center column seeding the structure. #### K.I.S.S. Incidentally, without simplification, an algebraic approach would be immediately intractable. 
If all we do is apply $f(a,b,c):=a+b+c+bc$, we can see the problem after only a couple of steps, in the same 2-variable configuration used above: To limit the combinatorial explosion, there are two simple optimizations we can make since we're dealing with boolean values. First, all exponents can be dropped, since $0^m=0$ and $1^m=1$ for any $m$. Second, we drop any terms with even coefficients, and strip the odd coefficients, which takes our expressions most of the way toward arithmetic $\pmod{2}$, as they're meant to be. Alas, even after all that, we'll still face doubly exponential growth. #### 3 variables When we add a third variable, things move into full gear. Although there are clearly a lot of new expressions here, not all possible expressions are represented. For $k$ variables there are up to $2^k$ possible additive terms, since each term either includes or excludes each of the $k$ variables, so the number of raw possible expressions grows doubly exponentially as $2^{2^k}$. That growth rate has not been the case so far, but it will be from here on out, presumably a result of reaching the three operands that Rule 30 expects. As a result, the number of distinct algebraic expressions quickly becomes unmanageable even after our simplifications. Fortunately, we won't need to manage it. There's an invariant property that readily scales with this, and that's what we'll be using here. ### Continuing the pattern There are a few expressions that do appear that we'll be ignoring, but this is because they're either part of the cycling diagonals off to the side, which doesn't help us, or because they're among the first few entries in the column, which don't actually repeat. (Interestingly, any expression in the form $a+b+c+bc$ falls into this category, and only appears once.) Given that, let's focus only on expressions that appear on row $n\geq 3$, in the center column. 
For all $k \geq 3$, if you process sufficiently many rows to see every legal expression at least once, you'll find that within those $2^{3\cdot 2^{k-3}}$ distinct expressions, if you chop up the polynomials and take a raw count of all monomial terms, every possible term appears, and each one appears in precisely one-half of all expressions. The only exceptions to this are the null expression $0$, which may occur as infrequently as $1$ in $2^{2^k}$ cells (but this needs further research), and any triplets like $bcd, cde, def, \ldots$, consisting of exactly three distinct consecutive in-range variables. There are only $k-2$ of these, and they appear in only a quarter of the overall list. But back to the main story here. Yes, they all share very equitably, and in fact, it's so even that if you remove every expression containing any given monomial or subexpression, the count of all remaining monomials is exactly halved, suggesting an extremely organized distribution. Take the 4-variable case as an example, which will have $64$ distinct expressions. This means that $32$ of those will have $a$ as a monomial, and $16$ of the other $32$ will have $b$ as a monomial, and $8$ of the remaining $16$ will have $c$ as a monomial, and finally $4$ of those last $8$ will have $d$ as a monomial (with no $a$, $b$, or $c$). That leaves $4$ expressions to be accounted for; of those, $0$ uses one slot, and the other three will have only multivariable monomials. Any of those final four expressions is indicative of a vertical run of $0$s of width at least $4$ and height at least $2$. This principle can be extended to any number of variables to show arbitrarily large contiguous spans of $0$, disallowing any periodicity. #### Ratcheting up complexity The tables we're building are additive in the sense that, upon adding a new variable to the seed row, no existing addends will ever be removed as a result. This property gives us a welcome measure of stability. 
Instead, those cells that do change values can do so only by taking on additional additive terms, all of which must be divisible by the new variable. Given that, let's look at the actual driving force behind the complexity generation: the multiplicative operation. In fairness, the real source of the complexity is the interweaving of multiplication and addition (as is true in literally all things), but combinatorially speaking, the multiplication is the killer. The additions are able to mix and match terms to some extent, and thus are half of that double exponential, but the multiplications are able to glue together what effectively become new proxy variables which must be independently tracked downstream, amplifying the capacity of the additions to swirl together novel expressions. In particular, every new seed variable appears to permeate the system sufficiently such that all $2^k$ possible terms appear. That said, it's worth repeating that although all terms appear, not all possible expressions do. As mentioned, the general $a+b+c+bc$ form is a one-and-done, and also worth special attention is the lack of solitary single-variable expressions: $a$ is the only one that you'll see, discounting degenerate cycles on the diagonals. For any variable past $a$, you will never see it on its own after being introduced; in every expression where it appears as a term, it's always part of a larger whole, e.g. you'll see $c+ac+bc$ at the very least. This is a general pattern that seems to hold: $d$ never appears on its own but requires at least $d+bd+cd$, and so forth. Close examination of the genesis of these terms in the tables below makes some sense of this behavior. Although these tables are obviously becoming unwieldy, here are the beginnings of the 4- and 5-variable progressions. 
Without serious optimization, identifying the distinct expressions for the 5-variable case is right at the limit of what my computer can reasonably handle using 64 GB of memory and a day of computing time. I suspect that no amount of optimization would allow brute-force verification of the 6-variable case without special hardware. So while I am basing my theory on an extremely limited data set, the specific nature of the data strongly suggests to me that the patterns identified hold at any level; to prove it, though, I still need to work out a more compelling theory of how those exact properties hold. ### Mechanism of operation Consider the progression when starting with $\{a,b,c\}$. You get $8$ distinct expressions from that; in fact, we'll pop up that table now. Now consider the progression when starting with $\{b,c,d\}$. By this, I mean with $b$, $c$, and $d$ in their usual columns, and padded by $0$ as always, so yes: this is isomorphic to the $\{a,b,c\}$ progression. See, I'm attempting to lay conceptual groundwork for picturing them coming together from their own individual starting points, so we can think through what happens when we combine them as with $\{a,b,c,d\}$. It should be obvious that $\{b,c,d\}$ on its own will also generate eight unique expressions, essentially the same ones given here. To determine them exactly, it generally suffices to simply shift all the variables forward or backward. Doing this, we find that all of the expressions for $\{b,c,d\}$ will be different from those for $\{a,b,c\}$ (excepting $0$). Conceivably, the merging process between these two sets will somehow come to settle on $8^2=64$ distinct expressions, highly suggestive of pairing, but it's unclear how to get more precise than that. And thus I guess we've gotten to where I'm stuck. I'm probably better off sticking with the inductive-like approach of considering what happens when you add one more seed variable. 
The completely regular, even splitting of terms between all expressions, as well as the way the number of expressions itself simply keeps squaring, both scream that there is a very orderly process behind this part of things, but I can't put my finger on it. And I wish I could, because a proof immediately follows if I can show this distribution holds, or even a decent tangential result.
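As a concrete companion to the simplifications described earlier (exponents dropped since $x^m = x$, terms with even coefficients cancelled), here is a minimal sketch of the symbolic iteration. The representation, each cell as a set of monomials and each monomial as a set of variable names, is my own illustration, not the author's actual code:

```python
# Symbolic Rule 30 over GF(2): a cell is a frozenset of monomials, and a
# monomial is a frozenset of variable names (exponents dropped since x^m = x).

def add(p, q):
    return p ^ q  # symmetric difference: terms with even coefficients cancel

def mul(p, q):
    out = set()
    for m1 in p:
        for m2 in q:
            out ^= {m1 | m2}  # union of variable sets drops exponents
    return frozenset(out)

def rule30(a, b, c):
    # f(a, b, c) = a + b + c + b*c  (mod 2)
    return add(add(a, b), add(c, mul(b, c)))

ZERO = frozenset()  # the empty polynomial, i.e. the constant 0

def step(row):
    padded = [ZERO, ZERO] + row + [ZERO, ZERO]
    return [rule30(padded[i], padded[i + 1], padded[i + 2])
            for i in range(len(padded) - 2)]

# Seed with the single symbolic variable a; it propagates exactly as 1 would.
A = frozenset({frozenset({'a'})})
row = [A]
for _ in range(2):
    row = step(row)
# After two steps the row is a, a, 0, 0, a -- matching Rule 30's 1, 1, 0, 0, 1.
```

Substituting $a=1$ amounts to taking the parity of the number of monomials in each cell, which reproduces the ordinary 0/1 Rule 30 evolution; seeding with several variables instead of one gives the multi-variable tables discussed above.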
http://jmre.ijournals.cn/en/ch/reader/view_abstract.aspx?file_no=20220306&flag=1
Bounded Weak Solutions to a Class of Parabolic Equations with Gradient Term and $L^r{(0,T;L^q(\Omega))}$ Sources Received: April 16, 2021. Revised: June 21, 2021. Keywords: parabolic equations; lower-order gradient term; $L^\infty$ estimate; bounded solutions. Fund Project: Supported by the National Natural Science Foundation of China (Grant No. 11901131) and the University-Level Research Fund Project in Guizhou University of Finance and Economics (Grant No. 2019XYB08). Author: Zhongqing LI, School of Mathematics and Statistics, Guizhou University of Finance and Economics, Guizhou 550025, P. R. China. We consider a class of nonlinear parabolic equations whose prototype is $$\begin{cases}u_t-\Delta u=\overrightarrow{b}(x,t)\cdot\nabla u+\gamma|\nabla u|^2-\text{div}{\overrightarrow{F}(x,t)}+f(x,t), &(x,t)\in \Omega_T,\\ u(x,t)=0,&(x,t)\in\Gamma_T,\\ u(x,0)=u_0(x), &x\in\Omega, \end{cases}$$ where the functions $|\overrightarrow{b}(x,t)|^2,|\overrightarrow{F}(x,t)|^2,f(x,t)$ lie in the space $L^r{(0,T;L^q(\Omega))}$ and $\gamma$ is a positive constant. The purpose of this paper is to prove, under suitable integrability assumptions in $L^r{(0,T;L^q(\Omega))}$ on the source terms and the coefficient of the gradient term, an a priori $L^\infty$ estimate and the existence of bounded solutions. The methods consist of constructing a family of perturbed problems by regularization, Stampacchia's iteration technique carried out with an appropriate nonlinear test function, and a compactness argument for the limit process.
https://msp.org/jomms/2011/6-5/p01.xhtml
#### Vol. 6, No. 5, 2011 Study of multiply-layered cylinders made of functionally graded materials using the transfer matrix method ### Y. Z. Chen Vol. 6 (2011), No. 5, 641–657 ##### Abstract This paper provides a general solution for a multiply-layered cylinder made of functionally graded materials. The Young's modulus is assumed to be an arbitrary function of $r$, and the Poisson's ratio takes a constant value. The first step is to study the single-layer case ($a \le r \le b$). A transfer matrix is defined, relating the values of radial stress and displacement at the initial point ($r=a$) to those at the end point ($r=b$). The matrix is evaluated on the basis of two fundamental solutions, which are computed numerically. The final solution is obtained by using many transfer matrices for layers, continuation conditions between layers, and boundary conditions at the inner and outer boundaries. Several numerical examples are provided. ##### Keywords composites, layered structure, nonlinear behavior, transfer matrix method, strength ##### Milestones Received: 4 March 2010 Revised: 11 May 2010 Accepted: 12 May 2010 Published: 9 September 2011 ##### Authors Y. Z. Chen Division of Engineering Mechanics Jiangsu University Xue Fu Road 301 Jiangsu, 212013 China
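The bookkeeping behind the method can be sketched generically: one 2×2 matrix per layer, chained so that the state vector (radial stress, radial displacement) at the outer boundary follows from the state at the inner boundary. The layer matrices below are placeholders; in the paper they come from the two numerically evaluated fundamental solutions:

```python
# Generic transfer-matrix chaining: state_outer = M_n ... M_2 M_1 @ state_inner.

def mat_mul(A, B):
    # 2x2 matrix product A @ B
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def chain(layer_matrices):
    total = [[1.0, 0.0], [0.0, 1.0]]  # identity
    for M in layer_matrices:
        total = mat_mul(M, total)     # later layers multiply on the left
    return total

def apply(M, state):
    # map (radial stress, displacement) at r=a to the values at r=b
    return [M[0][0] * state[0] + M[0][1] * state[1],
            M[1][0] * state[0] + M[1][1] * state[1]]

# Placeholder per-layer matrices, purely illustrative:
M1 = [[1.0, 2.0], [3.0, 4.0]]
M2 = [[0.0, 1.0], [1.0, 0.0]]
total = chain([M1, M2])
```

With the chained matrix in hand, the inner and outer boundary conditions give two scalar equations fixing the two unknown components of the inner state, which is how the final solution of the paper closes the system.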
https://mathhelpforum.com/threads/basic-integration-but-two-different-answers.146698/
# Basic integration but two different answers? #### powerhouseteam Greetings all 1/(x/2 + 1) dx I get the answer 2 ln (x/2 + 1) my friend did it this way 1/(x/2 + 1) * (2/2) dx 2/(x+2) dx 2 ln (x+2) What is going on?? #### mr fantastic MHF Hall of Fame What's going on is that both your answers are wrong, for two reasons (a missing absolute value and a missing constant of integration, as the note below shows). One of the reasons explains why you can get two apparently different answers. C and K are arbitrary constants of integration. Your teacher will have told you many many times not to forget to include it. Note: 2 ln |x/2 + 1| + C = 2 ln |1/2 (x + 2)| + C = 2 ln (1/2) + 2 ln|x+2| + C = 2 ln|x + 2| + K, where K = C + 2 ln(1/2). #### chiph588@ MHF Hall of Honor $$\displaystyle 2\ln|\tfrac x2+1|+C = 2\ln|\tfrac x2+1|+2\ln(2)+C_1 = 2\ln|(\tfrac x2+1)\cdot2|+C_1$$ where the last equality comes from the law $$\displaystyle \ln(x)+\ln(y)=\ln(xy)$$ Thus we see $$\displaystyle 2\ln|\tfrac x2+1|+C = 2\ln|x+2|+C_1$$
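A quick numeric check of the point both replies make (my own addition, not from the thread): the two antiderivatives differ by the constant $2\ln(1/2)$, so they have the same derivative, namely the original integrand:

```python
import math

# The two candidate antiderivatives, on x > -2 where both logs are defined
f = lambda x: 2 * math.log(x / 2 + 1)   # the OP's answer
g = lambda x: 2 * math.log(x + 2)       # the friend's answer

for x in (0.0, 1.5, 10.0):
    # they differ by the same constant 2*ln(1/2) everywhere ...
    assert abs((f(x) - g(x)) - 2 * math.log(0.5)) < 1e-12
    # ... so both differentiate to 1/(x/2 + 1) (central-difference check)
    h = 1e-6
    deriv = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(deriv - 1 / (x / 2 + 1)) < 1e-6
```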
http://math.stackexchange.com/questions/12755/on-cardinality-of-a-set
# on cardinality of a set Set $S$ is a collection of disjoint sets, each having the cardinality of $\mathbb{R}$. The cardinality of the set $S$ is also that of $\mathbb{R}$. $F$ is the union of all the elements of $S$. Is the cardinality of the set $F$ equal to that of $\mathbb{R}$? - Yes. The cardinality of $F$ is $\sum\limits_{x\in S}|x| = \sum\limits_{x\in S}\mathfrak{c} = \mathfrak{c}\mathfrak{c}=\mathfrak{c}$. In cardinal arithmetic, if $\kappa$ and $\lambda$ are nonzero cardinals, and at least one is infinite, you have $$\kappa+\lambda = \kappa\lambda = \max\{\kappa,\lambda\}.$$ - (Minor technicality: $\kappa\lambda=0$ if $\kappa=0$ or $\lambda=0$.) –  Jonas Meyer Dec 2 '10 at 17:09 @Jonas Meyer: Quite right! Thank you. –  Arturo Magidin Dec 2 '10 at 18:35 You are asking whether $\mathbb{R} \times \mathbb{R}$ has the same cardinality as $\mathbb{R}$. The answer is yes, thanks to the fact that $|\mathbb{R}| = 2^{\aleph_0}$ and $2^{\aleph_0} \times 2^{\aleph_0} = 2^{\aleph_0 + \aleph_0} = 2^{\aleph_0}$. More explicitly, $\mathbb{R}$ has the same cardinality as the set $S = \{ 0, 1 \}^{\mathbb{N}}$ of binary sequences, and there is an obvious bijection $S \times S \to S$ given by "interweaving" sequences: that is, sending $$(a_1, a_2, a_3, ...) \times (b_1, b_2, b_3, ...) \to (a_1, b_1, a_2, b_2, a_3, b_3, ...).$$ In general, the statement that $A \times A$ has the same cardinality as $A$, for $A$ infinite, is true for all alephs. I think I have been told that the claim that this holds for all infinite sets is equivalent to AC. - Yes: the statement that $A\times A$ is bijectable with $A$ for all infinite $A$ requires AC. –  Arturo Magidin Dec 2 '10 at 15:50 @Arturo Magidin, @Qiaochu Yuan : thank you for the answers and explanation. 
–  Rajesh D Dec 2 '10 at 16:57
- This is a theorem of Alfred Tarski; when he went to publish it, from one end he heard "AC is obviously false, so the theorem is useless" and from the other end he got "This is obviously trivial, and it is pointless to publish that". The proof itself is quite elegant nonetheless. –  Asaf Karagila Dec 2 '10 at 17:09
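The "interweaving" bijection of the second answer can be sketched on finite prefixes of binary sequences (plain Python; the infinite case acts coordinate-wise in exactly the same way):

```python
# Sketch: the interweaving map S x S -> S and its inverse, on finite
# prefixes of binary sequences.

def interleave(a, b):
    """Map the pair (a, b) to (a1, b1, a2, b2, ...)."""
    out = []
    for x, y in zip(a, b):
        out.extend([x, y])
    return out

def split(c):
    """Inverse map: recover (a, b) from an interleaved sequence."""
    return c[0::2], c[1::2]

a = [1, 0, 1, 1]
b = [0, 0, 1, 0]
c = interleave(a, b)
assert c == [1, 0, 0, 0, 1, 1, 1, 0]
assert split(c) == (a, b)   # round trip: the map is a bijection
```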
https://socratic.org/questions/how-do-you-find-the-distance-travelled-from-t-0-to-t-2pi-by-an-object-whose-moti-1
# How do you find the distance travelled from t=0 to t=2pi by an object whose motion is x=cos^2t, y=sin^2t?

Sep 22, 2017

An interesting thing about this example is that there is a non-calculus-based way of doing it as well, to get the same answer for the distance (arc length), equal to $4\sqrt{2}$.

#### Explanation:

Since $x = \cos^2(t)$ and $y = \sin^2(t)$, it follows that $x + y = \cos^2(t) + \sin^2(t) = 1$ for all values of $t$. Therefore, the motion is always on the straight line with $xy$-equation $x + y = 1$, which is equivalent to $y = -x + 1$ (a straight line with a slope of $-1$ and a $y$-intercept of $1$).

Also, since $\cos^2(t) \geq 0$ and $\sin^2(t) \geq 0$ for all $t$, this motion is always in the 1st quadrant of the plane, where $x \geq 0$ and $y \geq 0$.

Now think about how, for $0 \leq t \leq 2\pi$, the values of $\cos^2(t)$ oscillate from 1 to 0 to 1 to 0 and back to 1 again, while the values of $\sin^2(t)$ oscillate from 0 to 1 to 0 to 1 and back to 0 again. In other words, the motion traverses the line segment from $(1, 0)$ to $(0, 1)$ four times.

By the Pythagorean Theorem (draw an appropriate right triangle), this line segment has length $\sqrt{1^2 + 1^2} = \sqrt{2}$. This leads us to conclude that the total distance traveled (arc length) is $4\sqrt{2}$.
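The calculus-based route gives the same number: the speed is $\sqrt{x'(t)^2+y'(t)^2}=\sqrt{2}\,|\sin 2t|$, and integrating it over $[0,2\pi]$ can be checked numerically (a plain-Python midpoint-rule sketch, no external libraries):

```python
import math

# Numerical check of the arc length of x = cos^2 t, y = sin^2 t on [0, 2*pi].
# The speed sqrt(x'^2 + y'^2) is integrated with the composite midpoint rule.

def speed(t):
    dx = -2.0 * math.cos(t) * math.sin(t)   # d/dt cos^2 t = -sin 2t
    dy =  2.0 * math.sin(t) * math.cos(t)   # d/dt sin^2 t =  sin 2t
    return math.hypot(dx, dy)

n = 200000
h = 2.0 * math.pi / n
length = sum(speed((k + 0.5) * h) for k in range(n)) * h
assert abs(length - 4.0 * math.sqrt(2)) < 1e-6   # agrees with 4*sqrt(2)
```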
http://stat545.com/automation02_windows.html
2015-11-17

NOTE: This year we made R packages before we used Make. The hope is, therefore, that the Make that ships with Rtools is all we need. So hopefully we can ignore this?

back to All the automation things

### Install make on Microsoft Windows

We are still working out the best way to install make on Windows. Our current best recommendation is to install msysGit, which includes make as well as git and bash.

Download and install msysGit. The two software packages msysGit and Git for Windows are related. Both install git and bash, but only msysGit installs make. The programs installed by msysGit are found by default in C:\msysGit\bin. Here is the complete list of programs included with msysGit.

For this activity, RStudio needs to be able to find the programs make, the shell bash, other utilities like rm and cp, and Rscript in your PATH environment variable.

Here is an alternative for installing make alone:

• Go to the Make for Windows web site and download the Setup program
• Install the file you just downloaded and copy to your clipboard the directory in which it is being installed
• FYI: The default directory is C:\Program Files (x86)\GnuWin32\
• You now have make installed, but you need to tell Windows where to find the program. This is called updating your PATH. You will want to update the PATH to include the bin directory of the newly installed program.

### Update your PATH

If you installed Make for Windows (as opposed to the make that comes with msysGit), you still need to update your PATH. These are the steps on Windows 7 (we don't have such a write-up yet for Windows 8 – feel free to send one!):

• Click on the Windows logo
• Right click on Computer
• Select Properties
• Select Advanced System Settings
• Select Environment variables
• Select the line that has the PATH variable. You may have to scroll down to find it
• Select Edit
• Go to the end of the line and add a semicolon ;, followed by the path where the program was installed, followed by \bin.
• Typical example of what one might add: ;C:\Program Files (x86)\GnuWin32\bin
• Click Okay and close all the windows that you opened
• Quit RStudio and open it again
• You should now be able to use make from RStudio and the command line

### Issues we are still clarifying

See issue 58 for what seems to be the most comprehensive statement of the Windows situation. What are the tricky bits?

• Getting the same Makefile to "work" via RStudio's Build buttons/menus and in the shell. And, for that matter, which shell? Git Bash or ???
• Ensuring make, Rscript, pandoc, rm, etc. can be found = updating PATH.
• Getting make to use the correct shell.
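Whether the PATH update worked can be checked from any language that consults PATH the way a shell does; a small cross-check sketch (Python here, not part of the original page):

```python
import shutil

# shutil.which searches the PATH environment variable the same way the
# shell does, so it reports whether the tools this activity needs
# (make, bash, Rscript) are currently findable.
for tool in ("make", "bash", "Rscript"):
    print(tool, "->", shutil.which(tool) or "NOT FOUND (PATH needs updating)")
```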
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-for-college-students-7th-edition/chapter-4-section-4-5-linear-programming-exercise-set-page-305/35
## Intermediate Algebra for College Students (7th Edition)

The answer is $54 x^{7}y^{15}$.

The given expression is $(2x^4y^3)(3xy^4)^3$. Apply the outer exponent to each factor inside the second pair of parentheses: $(2x^4y^3)(3^3x^3y^{12})$. Remove the parentheses and add the exponents of like bases: $2\cdot3^3\cdot x^{4+3}y^{3+12}$. Simplify: $54 x^{7}y^{15}$.
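The simplification can be spot-checked numerically: the original product and the simplified form must agree at arbitrary sample points (a plain-Python sketch):

```python
# Spot-check: (2x^4 y^3)(3x y^4)^3 == 54 x^7 y^15 at several points.

def original(x, y):
    return (2 * x**4 * y**3) * (3 * x * y**4)**3

def simplified(x, y):
    return 54 * x**7 * y**15

for x, y in [(1, 1), (2, 3), (0.5, 2), (-1, 4)]:
    assert abs(original(x, y) - simplified(x, y)) <= 1e-9 * abs(simplified(x, y))
```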
https://stacks.math.columbia.edu/tag/003N
Definition 4.28.5. Let $\mathcal{A}$ be a category and let $\mathcal{C}$ be a $2$-category.

1. A functor from an ordinary category into a $2$-category will ignore the $2$-morphisms unless mentioned otherwise. In other words, it will be a "usual" functor into the category formed out of the 2-category by forgetting all the 2-morphisms.

2. A weak functor, or a pseudo functor, $\varphi$ from $\mathcal{A}$ into the 2-category $\mathcal{C}$ is given by the following data:

(a) a map $\varphi : \mathop{\mathrm{Ob}}\nolimits (\mathcal{A}) \to \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$,

(b) for every pair $x, y\in \mathop{\mathrm{Ob}}\nolimits (\mathcal{A})$, and every morphism $f : x \to y$ a $1$-morphism $\varphi (f) : \varphi (x) \to \varphi (y)$,

(c) for every $x\in \mathop{\mathrm{Ob}}\nolimits (\mathcal{A})$ a $2$-morphism $\alpha _ x : \text{id}_{\varphi (x)} \to \varphi (\text{id}_ x)$, and

(d) for every pair of composable morphisms $f : x \to y$, $g : y \to z$ of $\mathcal{A}$ a $2$-morphism $\alpha _{g, f} : \varphi (g \circ f) \to \varphi (g) \circ \varphi (f)$.

These data are subject to the following conditions:

(a) the $2$-morphisms $\alpha _ x$ and $\alpha _{g, f}$ are all isomorphisms,

(b) for any morphism $f : x \to y$ in $\mathcal{A}$ we have $\alpha _{\text{id}_ y, f} = \alpha _ y \star \text{id}_{\varphi (f)}$: $\xymatrix{ \varphi (x) \rrtwocell ^{\varphi (f)}_{\varphi (f)}{\ \ \ \ \text{id}_{\varphi (f)}} & & \varphi (y) \rrtwocell ^{\text{id}_{\varphi (y)}}_{\varphi (\text{id}_ y)}{\alpha _ y} & & \varphi (y) } = \xymatrix{ \varphi (x) \rrtwocell ^{\varphi (f)}_{\varphi (\text{id}_ y) \circ \varphi (f)}{\ \ \ \ \alpha _{\text{id}_ y, f}} & & \varphi (y) }$

(c) for any morphism $f : x \to y$ in $\mathcal{A}$ we have $\alpha _{f, \text{id}_ x} = \text{id}_{\varphi (f)} \star \alpha _ x$,

(d)
for any triple of composable morphisms $f : w \to x$, $g : x \to y$, and $h : y \to z$ of $\mathcal{A}$ we have $(\text{id}_{\varphi (h)} \star \alpha _{g, f}) \circ \alpha _{h, g \circ f} = (\alpha _{h, g} \star \text{id}_{\varphi (f)}) \circ \alpha _{h \circ g, f}$; in other words the following diagram with objects $1$-morphisms and arrows $2$-morphisms commutes $\xymatrix{ \varphi (h \circ g \circ f) \ar[d]_{\alpha _{h, g \circ f}} \ar[rr]_{\alpha _{h \circ g, f}} & & \varphi (h \circ g) \circ \varphi (f) \ar[d]^{\alpha _{h, g} \star \text{id}_{\varphi (f)}} \\ \varphi (h) \circ \varphi (g \circ f) \ar[rr]^{\text{id}_{\varphi (h)} \star \alpha _{g, f}} & & \varphi (h) \circ \varphi (g) \circ \varphi (f) }$

Comment #2042 by Matthew Emerton on

In the displayed diagram for condition (b), the domain of $\alpha_y$ is labelled as $\mathrm{id}_y$, rather than as $\mathrm{id}_{\varphi(y)}.$
https://bebac.at/articles/ABE-RSABE-ABEL-A-Comparison.htm
Examples in this article were generated with R 4.2.0 by the packages PowerTOST [1] and TeachingDemos [2]. More examples are given in the respective vignettes [3, 4]. See also the README on GitHub for an overview and the online manual [5] for details. For background about sample size estimations in replicate designs see the respective articles (ABE, RSABE, and ABEL). See also the articles about power and sensitivity analyses.

| Abbreviation | Meaning |
| --- | --- |
| $\alpha$ | Nominal level of the test, probability of Type I Error (patient's risk) |
| (A)BE | (Average) Bioequivalence |
| ABEL | Average Bioequivalence with Expanding Limits |
| AUC | Area Under the Curve |
| $\beta$ | Probability of Type II Error (producer's risk), where $\beta=1-\pi$ |
| CDE | Center for Drug Evaluation (China) |
| CI | Confidence Interval |
| CL | Confidence Limit |
| Cmax | Maximum concentration |
| $CV_\textrm{w}$ | (Pooled) within-subject Coefficient of Variation |
| $CV_\textrm{wR},\;CV_\textrm{wT}$ | Observed within-subject Coefficient of Variation of the Reference and Test product |
| $\Delta$ | Clinically relevant difference |
| EMA | European Medicines Agency |
| FDA | (U.S.) Food and Drug Administration |
| $H_0$ | Null hypothesis (inequivalence) |
| $H_1$ | Alternative hypothesis (equivalence) |
| HVD(P) | Highly Variable Drug (Product) |
| $k$ | Regulatory constant in ABEL (0.760) |
| $\mu_\textrm{T}/\mu_\textrm{R}$ | True T/R-ratio |
| $n$ | Sample size |
| $\pi$ | (Prospective) power, where $\pi=1-\beta$ |
| PE | Point Estimate |
| R | Reference product |
| RSABE | Reference-Scaled Average Bioequivalence |
| SABE | Scaled Average Bioequivalence |
| $s_\textrm{bR}^2,\;s_\textrm{bT}^2$ | Observed between-subject variance of the Reference and Test product |
| $s_\textrm{wR},\;s_\textrm{wT}$ | Observed within-subject standard deviation of the Reference and Test product |
| $s_\textrm{wR}^2,\;s_\textrm{wT}^2$ | Observed within-subject variance of the Reference and Test product |
| $\sigma_\textrm{wR}$ | True within-subject standard deviation of the Reference product |
| T | Test product |
| $\theta_0$ | True T/R-ratio |
| $\theta_1,\;\theta_2$ | Fixed lower and upper limits of the acceptance range (generally 80.00 – 125.00%) |
| $\theta_\textrm{s}$ | Regulatory constant in RSABE (0.8925742…) |
| $\theta_{\textrm{s}_1},\;\theta_{\textrm{s}_2}$ | Scaled lower and upper limits of the acceptance range |
| TIE | Type I Error (patient's risk) |
| TIIE | Type II Error (producer's risk: 1 – power) |
| uc | Upper cap of expansion in ABEL |
| 2×2×2 | 2-treatment 2-sequence 2-period crossover design (TR\|RT) |
| 2×2×3 | 2-treatment 2-sequence 3-period full replicate designs (TRT\|RTR and TTR\|RRT) |
| 2×2×4 | 2-treatment 2-sequence 4-period full replicate designs (TRTR\|RTRT, TRRT\|RTTR, and TTRR\|RRTT) |
| 2×3×3 | 2-treatment 3-sequence 3-period partial replicate design (TRR\|RTR\|RRT) |
| 2×4×2 | 2-treatment 4-sequence 2-period full replicate design (TR\|RT\|TT\|RR) |
| 2×4×4 | 2-treatment 4-sequence 4-period full replicate designs (TRTR\|RTRT\|TRRT\|RTTR and TRRT\|RTTR\|TTRR\|RRTT) |

# Introduction

What are the differences between Average Bioequivalence (ABE), Reference-Scaled Average
Bioequivalence (RSABE), and Average Bioequivalence with Expanding Limits (ABEL) in terms of power and sample sizes? For details about inferential statistics and hypotheses in equivalence see another article.

Definitions:

• A Highly Variable Drug (HVD) shows a within-subject Coefficient of Variation (CVwR) > 30% if administered as a solution in a replicate design. The high variability is an intrinsic property of the drug (absorption/permeation, clearance).
• A Highly Variable Drug Product (HVDP) shows a CVwR > 30% in a replicate design [6].

The concept of Scaled Average Bioequivalence (SABE) for HVD(P)s is based on the following considerations:

• HVD(P)s are safe and efficacious despite their high variability because:
  • They have a wide therapeutic index (i.e., a flat dose-response curve). Consequently, even substantial changes in concentrations have only a limited impact on the effect. If they had a narrow therapeutic index, adverse effects (due to high concentrations) and lacking effects (due to low concentrations) would have been observed in Phase II (or in Phase III at the latest) and therefore, the originator's product would not have been approved in the first place.
  • Once approved, the product has a documented safety/efficacy record in Phase IV and in clinical practice – despite its high variability. If problems were evident, the product would have been taken off the market.
• Given that, the conventional 'clinically relevant difference' Δ of 20% in ABE (leading to the fixed limits of 80.00 – 125.00%) is overly conservative and therefore requires large sample sizes.
• Thus, a more relaxed Δ > 20% was proposed.
A natural approach is to scale (expand/widen) the limits based on the within-subject variability of the reference product σwR [7]. The conventional confidence interval inclusion approach in ABE is based on

$\begin{matrix}\tag{1} \theta_1=1-\Delta,\;\theta_2=\left(1-\Delta\right)^{-1}\\ H_0:\;\frac{\mu_\textrm{T}}{\mu_\textrm{R}}\notin\left\{\theta_1,\,\theta_2\right\}\;vs\;H_1:\;\theta_1<\frac{\mu_\textrm{T}}{\mu_\textrm{R}}<\theta_2, \end{matrix}$

where $\Delta$ is the clinically relevant difference, $\theta_1$ and $\theta_2$ are the fixed lower and upper limits of the acceptance range, $H_0$ is the null hypothesis of inequivalence, and $H_1$ is the alternative hypothesis of equivalence. $\mu_\textrm{T}$ and $\mu_\textrm{R}$ are the geometric least squares means of $\textrm{T}$ and $\textrm{R}$, respectively. $(1)$ is modified in Scaled Average Bioequivalence (SABE) to

$H_0:\;\frac{\mu_\textrm{T}}{\mu_\textrm{R}}\Big{/}\sigma_\textrm{wR}\notin\left\{\theta_{\textrm{s}_1},\,\theta_{\textrm{s}_2}\right\}\;vs\;H_1:\;\theta_{\textrm{s}_1}<\frac{\mu_\textrm{T}}{\mu_\textrm{R}}\Big{/}\sigma_\textrm{wR}<\theta_{\textrm{s}_2},\tag{2}$

where $\sigma_\textrm{wR}$ is the within-subject standard deviation of the reference. The scaled limits $\left\{\theta_{\textrm{s}_1},\,\theta_{\textrm{s}_2}\right\}$ of the acceptance range depend on conditions given by the agency. RSABE is recommended by the FDA and China's CDE. ABEL is another variant of SABE and recommended in all other jurisdictions.

In order to apply the methods [8], the following conditions have to be fulfilled:

• The study has to be performed in a replicate design, i.e., at least the reference product has to be administered twice. (Realization: observations, in a sample, of a random variable of the population.)
• The realized within-subject variability of the reference has to be high (in RSABE swR ≥ 0.294 [9] and in ABEL CVwR > 30%).
• ABEL only:
  • A clinical justification must be given that the expanded limits will not impact safety/efficacy.
  • There is an 'upper cap' of scaling (uc = 50%, except for Health Canada, where uc ≈ 57.382% [10]), i.e., the expansion is limited to 69.84 – 143.19% or 67.7 – 150.0%, respectively.
  • Except for applications in Brazil and Chile, it has to be demonstrated that the high variability of the reference is not caused by 'outliers'.

In all methods a point-estimate constraint is imposed. Even if a study would pass the scaled limits, the PE has to lie within 80.00 – 125.00% in order to pass. It should be noted that larger deviations between geometric mean ratios arise as a natural, direct consequence of the higher variability. Since extreme values are common for HVD(P)s, assessment of 'outliers' is not required by the FDA and China's CDE for RSABE, as well as by Brazil's ANVISA and Chile's ANAMED for ABEL.

The PE-constraint – together with the upper cap of expansion in jurisdictions applying ABEL – leads to truncated distributions. Hence, the test assuming the normal distribution of $\log_{e}$-transformed data is not correct in the strict sense.

# Preliminaries

A basic knowledge of R is required. To run the scripts at least version 1.4.8 (2019-08-29) of PowerTOST is required and at least 1.5.3 (2021-01-18) suggested. Any version of R would likely do, though the current release of PowerTOST was only tested with version 4.1.3 (2022-03-10) and later. All scripts were run on a Xeon E3-1245v3 @ 3.40GHz (1/4 cores) 16GB RAM with R 4.2.0 on Windows 7 build 7601, Service Pack 1, Universal C Runtime 10.0.10240.16390.

```r
library(PowerTOST)     # attach the packages
library(TeachingDemos) # to run the examples
```

# Sample size

The idea behind reference-scaling is to avoid extreme sample sizes required for ABE and preserve power independent from the CV. Let's explore some examples.
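The ABEL expansion rule quoted above (scaling only when CVwR > 30%, regulatory constant k = 0.760, upper cap at CVwR = 50%) can be cross-checked outside of R; a minimal plain-Python sketch (the function name `abel_limits` is ad hoc, not from PowerTOST):

```python
import math

# Sketch of the ABEL rule for the EMA's conditions: scaled limits
# exp(+/- k * s_wR), applied only when CVwR > 30% and capped at CVwR = 50%
# (i.e. at 69.84 - 143.19%).

def abel_limits(cv_wr, k=0.760, switch=0.30, cap=0.50):
    if cv_wr <= switch:                       # no scaling: fixed ABE limits
        return 0.80, 1.25
    cv = min(cv_wr, cap)                      # upper cap of expansion
    s_wr = math.sqrt(math.log(cv**2 + 1.0))   # CV -> within-subject SD
    upper = math.exp(k * s_wr)
    return 1.0 / upper, upper

assert abel_limits(0.25) == (0.80, 1.25)      # below the switching CV
lo, hi = abel_limits(0.45)
assert 0.72 < lo < 0.73 and 1.38 < hi < 1.40
lo, hi = abel_limits(0.60)                    # capped at CVwR = 50%
assert abs(hi - 1.43191) < 1e-4
```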
I assumed a $CV$ of 0.45, a T/R-ratio ($\theta_0$) of 0.90, and targeted ≥ 80% power in some common replicate designs. Note that sample sizes are integers and follow a staircase function because in software packages balanced sequences are returned.

```r
CV      <- 0.45
theta0  <- 0.90
target  <- 0.80
designs <- c("2x2x4", "2x2x3", "2x3x3")
method  <- c("ABE", "ABEL", "RSABE")
res     <- data.frame(design = rep(designs, each = length(method)),
                      method = method, n = NA)
for (i in 1:nrow(res)) {
  if (res$method[i] == "ABE") {
    res[i, 3] <- sampleN.TOST(CV = CV, theta0 = theta0,
                              design = res$design[i],
                              targetpower = target,
                              print = FALSE)[["Sample size"]]
  }
  if (res$method[i] == "ABEL") {
    res[i, 3] <- sampleN.scABEL(CV = CV, theta0 = theta0,
                                design = res$design[i],
                                targetpower = target, print = FALSE,
                                details = FALSE)[["Sample size"]]
  }
  if (res$method[i] == "RSABE") {
    res[i, 3] <- sampleN.RSABE(CV = CV, theta0 = theta0,
                               design = res$design[i],
                               targetpower = target, print = FALSE,
                               details = FALSE)[["Sample size"]]
  }
}
print(res, row.names = FALSE)
# design method   n
#  2x2x4    ABE  84
#  2x2x4   ABEL  28
#  2x2x4  RSABE  24
#  2x2x3    ABE 124
#  2x2x3   ABEL  42
#  2x2x3  RSABE  36
#  2x3x3    ABE 126
#  2x3x3   ABEL  39
#  2x3x3  RSABE  33
```

```r
CV      <- 0.45
theta0  <- seq(0.95, 0.85, -0.001)
methods <- c("ABE", "ABEL", "RSABE")
clr     <- c("red", "magenta", "blue")
ylab    <- paste0("sample size (CV = ", 100 * CV, "%)")
dev.new(width = 4.5, height = 4.5, record = TRUE)
op <- par(no.readonly = TRUE)
par(lend = 2, ljoin = 1, mar = c(4, 3.3, 0.1, 0.2), cex.axis = 0.9)
for (design in c("2x2x4", "2x2x3", "2x3x3")) { # one plot per design
  res <- data.frame(theta0 = theta0,
                    method = rep(methods, each = length(theta0)), n = NA)
  for (i in 1:nrow(res)) {
    if (res$method[i] == "ABE")
      res$n[i] <- sampleN.TOST(CV = CV, theta0 = res$theta0[i],
                               design = design,
                               print = FALSE)[["Sample size"]]
    if (res$method[i] == "ABEL")
      res$n[i] <- sampleN.scABEL(CV = CV, theta0 = res$theta0[i],
                                 design = design, print = FALSE,
                                 details = FALSE)[["Sample size"]]
    if (res$method[i] == "RSABE")
      res$n[i] <- sampleN.RSABE(CV = CV, theta0 = res$theta0[i],
                                design = design, print = FALSE,
                                details = FALSE)[["Sample size"]]
  }
  plot(theta0, res$n[res$method == "ABE"], type = "n", axes = FALSE,
       ylim = c(12, max(res$n)), xlab = expression(theta[0]),
       log = "xy", ylab = "")
  abline(v = seq(0.85, 0.95, 0.025), lty = 3, col = "lightgrey")
  abline(v = 0.90, lty = 2)
  abline(h = axTicks(2, log = TRUE), lty = 3, col = "lightgrey")
  axis(1, at = seq(0.85, 0.95, 0.025))
  axis(2, las = 1)
  mtext(ylab, 2, line = 2.4)
  legend("bottomleft", legend = methods, inset = 0.02, lwd = 2,
         cex = 0.9, col = clr, box.lty = 0, bg = "white",
         title = "\u226580% power")
  for (j in seq_along(methods))
    lines(theta0, res$n[res$method == methods[j]], type = "S",
          lwd = 2, col = clr[j])
  box()
}
par(op)
```

It's obvious that we need substantially smaller sample sizes in the methods for reference-scaling than we would require for ABE.
The sample size functions of the scaling methods are also not that steep, which means that even if our assumptions about the T/R-ratio were wrong, power (and hence, the sample size) would be affected to a lesser degree. Nevertheless, one should not be overly optimistic about the T/R-ratio. For HVD(P)s a T/R-ratio 'better' than 0.90 should be avoided [11]. NB, that's the reason why in sampleN.scABEL() and sampleN.RSABE() the default is theta0 = 0.90. If scaling is not acceptable (e.g., AUC for the EMA), I strongly recommend to specify theta0 = 0.90 in sampleN.TOST() because its default is 0.95.

Note that RSABE is more permissive than ABEL due to its regulatory constant (~0.8926 instead of 0.760) and unlimited scaling (no upper cap). Hence, sample sizes for RSABE are always smaller than the ones for ABEL.

Notes on the plots: since power depends on the number of treatments, the 3-period designs require roughly 50% more subjects than the 4-period full replicate designs. The partial replicate design gives sample sizes similar to the 3-period full replicate design because both have the same degrees of freedom; however, the step size is wider (three sequences instead of two).

Interlude 1

Before we estimate a sample size, we have to be clear about the planned evaluation. The EMA and most other agencies require an ANOVA (i.e., all effects fixed), whereas Health Canada, the FDA, and China's CDE require a mixed-effects model. Let's explore the replicate designs implemented in PowerTOST. Note that only ABE is implemented for Balaam's design (TR|RT|TT|RR).

```r
print(known.designs()[7:11, c(2:4, 9)], row.names = FALSE) # relevant columns
# design    df  df2                          name
#  2x2x3 2*n-3  n-2    2x2x3 replicate crossover
#  2x2x4 3*n-4  n-2    2x2x4 replicate crossover
#  2x4x4 3*n-4  n-4    2x4x4 replicate crossover
#  2x3x3 2*n-3  n-3    partial replicate (2x3x3)
#  2x4x2   n-2  n-2            Balaam's (2x4x2)
```

The column df gives the degrees of freedom of an ANOVA and the column df2 the ones of a mixed-effects model.
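The df and df2 expressions in the table are simple functions of the number of subjects n; a quick sketch (Python used here only to make the formulas concrete; the dictionary layout is ad hoc, and the checked values come from the degrees-of-freedom output shown in the Interlude):

```python
# Degrees of freedom per design as (ANOVA, mixed-effects) functions of n,
# transcribed from the known.designs() table above.
df = {
    "2x2x3": (lambda n: 2 * n - 3, lambda n: n - 2),
    "2x2x4": (lambda n: 3 * n - 4, lambda n: n - 2),
    "2x4x4": (lambda n: 3 * n - 4, lambda n: n - 4),
    "2x3x3": (lambda n: 2 * n - 3, lambda n: n - 3),
}

anova, mixed = df["2x2x4"]
assert anova(28) == 80    # matches the ABE.fix row for CV 0.25, n = 28
assert mixed(28) == 26    # matches the RSABE.mix row for CV 0.25, n = 28
assert anova(40) == 116   # matches the ABE.fix row for CV 0.30, n = 40
```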
Which model is intended for evaluation is controlled by the argument robust, which is FALSE (for an ANOVA) by default. If set to TRUE, the estimation will be performed for a mixed-effects model. A simple example (T/R-ratio 0.90, CV 0.25 – 0.50, 2×2×4 design targeted at power 0.80):

```r
theta0 <- 0.90
CV     <- seq(0.25, 0.5, 0.05)
design <- "2x2x4"
target <- 0.80
reg1   <- reg_const(regulator = "EMA")
reg1$name       <- "USER"
reg1$est_method <- "ISC" # keep conditions but change from "ANOVA"
reg2   <- reg_const("USER", r_const = log(1.25) / 0.25,
                    CVswitch = 0.3, CVcap = Inf) # ANOVA
#####################################################
# Note: These are internal (not exported) functions #
# Use them only if you know what you are doing!     #
#####################################################
d.no <- PowerTOST:::.design.no(design)
ades <- PowerTOST:::.design.props(d.no) # design properties for .design.df()
df1  <- PowerTOST:::.design.df(ades, robust = FALSE)
df2  <- PowerTOST:::.design.df(ades, robust = TRUE)
ns <- nu <- data.frame(CV = CV, ABE.fix = NA_integer_, ABE.mix = NA_integer_,
                       ABEL.fix = NA_integer_, ABEL.mix = NA_integer_,
                       RSABE.fix = NA_integer_, RSABE.mix = NA_integer_)
for (i in seq_along(CV)) {
  ns$ABE.fix[i]   <- sampleN.TOST(CV = CV[i], theta0 = theta0,
                                  targetpower = target, design = design,
                                  print = FALSE,
                                  details = FALSE)[["Sample size"]]
  n               <- ns$ABE.fix[i]
  nu$ABE.fix[i]   <- eval(df1)
  ns$ABE.mix[i]   <- sampleN.TOST(CV = CV[i], theta0 = theta0,
                                  targetpower = target, design = design,
                                  robust = TRUE, print = FALSE,
                                  details = FALSE)[["Sample size"]]
  n               <- ns$ABE.mix[i]
  nu$ABE.mix[i]   <- eval(df2)
  ns$ABEL.fix[i]  <- sampleN.scABEL(CV = CV[i], theta0 = theta0,
                                    targetpower = target, design = design,
                                    print = FALSE,
                                    details = FALSE)[["Sample size"]]
  n               <- ns$ABEL.fix[i]
  nu$ABEL.fix[i]  <- eval(df1)
  ns$ABEL.mix[i]  <- sampleN.scABEL(CV = CV[i], theta0 = theta0,
                                    targetpower = target, design = design,
                                    regulator = reg1, print = FALSE,
                                    details = FALSE)[["Sample size"]]
  n               <- ns$ABEL.mix[i]
  nu$ABEL.mix[i]  <- eval(df2)
  ns$RSABE.fix[i] <- sampleN.scABEL(CV = CV[i], theta0 = theta0,
                                    targetpower = target, design = design,
                                    regulator = reg2, print = FALSE,
                                    details = FALSE)[["Sample size"]]
  n               <- ns$RSABE.fix[i]
  nu$RSABE.fix[i] <- eval(df1)
  ns$RSABE.mix[i] <- sampleN.RSABE(CV = CV[i], theta0 = theta0,
                                   targetpower = target, design = design,
                                   print = FALSE,
                                   details = FALSE)[["Sample size"]]
  n               <- ns$RSABE.mix[i]
  nu$RSABE.mix[i] <- eval(df2)
}
cat("Sample sizes\n")
print(ns, row.names = FALSE)
# Sample sizes
#   CV ABE.fix ABE.mix ABEL.fix ABEL.mix RSABE.fix RSABE.mix
# 0.25      28      30       28       30        28        28
# 0.30      40      40       34       36        28        32
# 0.35      52      54       34       36        24        28
# 0.40      68      68       30       32        22        24
# 0.45      84      84       28       30        20        24
# 0.50     100     102       28       30        20        22
```

For the mixed-effects models sample sizes are in general slightly larger.

```r
cat("Degrees of freedom\n")
print(nu, row.names = FALSE)
# Degrees of freedom
#   CV ABE.fix ABE.mix ABEL.fix ABEL.mix RSABE.fix RSABE.mix
# 0.25      80      28       80       28        80        26
# 0.30     116      38       98       34        80        30
# 0.35     152      52       98       34        68        26
# 0.40     200      66       86       30        62        22
# 0.45     248      82       80       28        56        22
# 0.50     296     100       80       28        56        20
```

In the mixed-effects models we have fewer degrees of freedom (more effects are estimated). Note that for the FDA's RSABE a mixed-effects model always has to be employed; the results for an ANOVA are given only for comparison.

# Power

Let's change the point of view. As above, I assumed $CV=0.45$, $\theta_0=0.90$, and targeted ≥ 80% power. This time I explored how a $CV$ different from my assumption affects power with the estimated sample size. Additionally I assessed 'pure' SABE, i.e., without an upper cap of scaling and without the PE-constraint, for the EMA's conditions (switching $CV_\textrm{wR}=30\%$, regulatory constant $k=0.760$).
```r
CV      <- 0.45
theta0  <- 0.90
target  <- 0.80
designs <- c("2x2x4", "2x2x3", "2x3x3")
method  <- c("ABE", "ABEL", "RSABE", "SABE")
# Pure SABE (only for comparison)
# No upper cap of scaling, no PE constraint
pure <- reg_const("USER", r_const = 0.760,
                  CVswitch = 0.30, CVcap = Inf)
pure$pe_constr <- FALSE
res <- data.frame(design = rep(designs, each = length(method)),
                  method = method, n = NA, power = NA,
                  CV0.40 = NA, CV0.50 = NA)
for (i in 1:nrow(res)) {
  if (res$method[i] == "ABE") {
    res[i, 3:4] <- sampleN.TOST(CV = CV, theta0 = theta0,
                                design = res$design[i],
                                targetpower = target, print = FALSE)[7:8]
    res[i, 5]   <- power.TOST(CV = 0.4, theta0 = theta0, n = res[i, 3],
                              design = res$design[i])
    res[i, 6]   <- power.TOST(CV = 0.5, theta0 = theta0, n = res[i, 3],
                              design = res$design[i])
  }
  if (res$method[i] == "ABEL") {
    res[i, 3:4] <- sampleN.scABEL(CV = CV, theta0 = theta0,
                                  design = res$design[i],
                                  targetpower = target, print = FALSE,
                                  details = FALSE)[8:9]
    res[i, 5]   <- power.scABEL(CV = 0.4, theta0 = theta0, n = res[i, 3],
                                design = res$design[i])
    res[i, 6]   <- power.scABEL(CV = 0.5, theta0 = theta0, n = res[i, 3],
                                design = res$design[i])
  }
  if (res$method[i] == "RSABE") {
    res[i, 3:4] <- sampleN.RSABE(CV = CV, theta0 = theta0,
                                 design = res$design[i],
                                 targetpower = target, print = FALSE,
                                 details = FALSE)[8:9]
    res[i, 5]   <- power.RSABE(CV = 0.4, theta0 = theta0, n = res[i, 3],
                               design = res$design[i])
    res[i, 6]   <- power.RSABE(CV = 0.5, theta0 = theta0, n = res[i, 3],
                               design = res$design[i])
  }
  if (res$method[i] == "SABE") {
    res[i, 3:4] <- sampleN.scABEL(CV = CV, theta0 = theta0,
                                  design = res$design[i],
                                  targetpower = target, regulator = pure,
                                  print = FALSE, details = FALSE)[8:9]
    res[i, 5]   <- power.scABEL(CV = 0.4, theta0 = theta0, n = res[i, 3],
                                design = res$design[i], regulator = pure)
    res[i, 6]   <- power.scABEL(CV = 0.5, theta0 = theta0, n = res[i, 3],
                                design = res$design[i], regulator = pure)
  }
}
res[, 4:6] <- signif(res[, 4:6], 5)
print(res, row.names = FALSE)
# design method   n   power  CV0.40  CV0.50
#  2x2x4    ABE  84 0.80569 0.87483 0.73700
#  2x2x4   ABEL  28 0.81116 0.78286 0.81428
#  2x2x4  RSABE  24 0.82450 0.80516 0.83001
#  2x2x4   SABE  28 0.81884 0.78415 0.84388
#  2x2x3    ABE 124 0.80012 0.87017 0.73102
#  2x2x3   ABEL  42 0.80017 0.77676 0.80347
#  2x2x3  RSABE  36 0.81147 0.79195 0.81888
#  2x2x3   SABE  42 0.80961 0.77868 0.83463
#  2x3x3    ABE 126 0.80570 0.87484 0.73701
#  2x3x3   ABEL  39 0.80588 0.77587 0.80763
#  2x3x3  RSABE  33 0.82802 0.80845 0.83171
#  2x3x3   SABE  39 0.81386 0.77650 0.84100
```

```r
# Cave: very long runtime
CV.fix  <- 0.45
CV      <- seq(0.35, 0.55, length.out = 201)
theta0  <- 0.90
methods <- c("ABE", "ABEL", "RSABE", "SABE")
clr     <- c("red", "magenta", "blue", "#00800080")
# Pure SABE (only for comparison)
# No upper cap of scaling, no PE constraint
pure <- reg_const("USER", r_const = 0.760,
                  CVswitch = 0.30, CVcap = Inf)
pure$pe_constr <- FALSE
#################
design <- "2x2x4"
res1 <- data.frame(CV = CV, method = rep(methods, each = length(CV)),
                   power = NA)
n.ABE   <- sampleN.TOST(CV = CV.fix, theta0 = theta0, design = design,
                        print = FALSE)[["Sample size"]]
n.RSABE <- sampleN.RSABE(CV = CV.fix, theta0 = theta0, design = design,
                         print = FALSE, details = FALSE)[["Sample size"]]
n.ABEL  <- sampleN.scABEL(CV = CV.fix, theta0 = theta0, design = design,
                          print = FALSE, details = FALSE)[["Sample size"]]
n.SABE  <- sampleN.scABEL(CV = CV.fix, theta0 = theta0, design = design,
                          print = FALSE, regulator = pure,
                          details = FALSE)[["Sample size"]]
for (i in 1:nrow(res1)) {
  if (res1$method[i] == "ABE") {
    res1$power[i] <- power.TOST(CV = res1$CV[i], theta0 = theta0,
                                n = n.ABE, design = design)
  }
  if (res1$method[i] == "ABEL") {
    res1$power[i] <- power.scABEL(CV = res1$CV[i], theta0 = theta0,
                                  n = n.ABEL, design = design, nsims = 1e6)
  }
  if (res1$method[i] == "RSABE") {
    res1$power[i] <- power.RSABE(CV = res1$CV[i], theta0 = theta0,
                                 n = n.RSABE, design = design, nsims = 1e6)
  }
  if (res1$method[i] == "SABE") {
    res1$power[i] <- power.scABEL(CV = res1$CV[i], theta0 = theta0,
                                  n = n.ABEL, design = design,
                                  regulator = pure, nsims = 1e6)
  }
}
dev.new(width = 4.5, height = 4.5, record = TRUE)
op <- par(no.readonly = TRUE) # save defaults ('op' is restored at the end)
par(mar = c(4, 3.3, 0.1, 0.1), cex.axis = 0.9)
plot(CV, res1$power[res1$method == "ABE"], type = "n", axes = FALSE,
     ylim = c(0.65, 1), xlab = "CV", ylab = "")
abline(v = seq(0.35, 0.55, 0.05), lty = 3, col = "lightgrey")
abline(v = 0.45, lty = 2)
abline(h = axTicks(2, log = FALSE), lty = 3, col = "lightgrey")
axis(1, at = seq(0.35, 0.55, 0.05))
axis(2, las = 1)
mtext("power", 2, line = 2.6)
legend("topright", legend = methods, inset = 0.02, lwd = 2, cex = 0.9,
       col = clr, box.lty = 0, bg = "white", title = "n for CV = 45%")
lines(CV, res1$power[res1$method == "ABE"],   lwd = 2, col = clr[1])
lines(CV, res1$power[res1$method == "ABEL"],  lwd = 2, col = clr[2])
lines(CV, res1$power[res1$method == "RSABE"], lwd = 2, col = clr[3])
lines(CV, res1$power[res1$method == "SABE"],  lwd = 2, col = clr[4])
box()
#################
design <- "2x2x3"
res2 <- data.frame(CV = CV, method = rep(methods, each = length(CV)),
                   power = NA)
n.ABE   <- sampleN.TOST(CV = CV.fix, theta0 = theta0, design = design,
                        print = FALSE)[["Sample size"]]
n.RSABE <- sampleN.RSABE(CV = CV.fix, theta0 = theta0, design = design,
                         print = FALSE, details = FALSE)[["Sample size"]]
n.ABEL  <- sampleN.scABEL(CV = CV.fix, theta0 = theta0, design = design,
                          print = FALSE, details = FALSE)[["Sample size"]]
n.SABE  <- sampleN.scABEL(CV = CV.fix, theta0 = theta0, design = design,
                          print = FALSE, regulator = pure,
                          details = FALSE)[["Sample size"]]
for (i in 1:nrow(res2)) {
  if (res2$method[i] == "ABE") {
    res2$power[i] <- power.TOST(CV = res2$CV[i], theta0 = theta0,
                                n = n.ABE, design = design)
  }
  if (res2$method[i] == "ABEL") {
    res2$power[i] <- power.scABEL(CV = res2$CV[i], theta0 = theta0,
                                  n = n.ABEL, design = design, nsims = 1e6)
  }
  if (res2$method[i] == "RSABE") {
    res2$power[i] <- power.RSABE(CV = res2$CV[i], theta0 = theta0,
                                 n = n.RSABE, design = design, nsims = 1e6)
  }
  if (res2$method[i] == "SABE") {
    res2$power[i] <- power.scABEL(CV = res2$CV[i], theta0 = theta0,
                                  n = n.ABEL, design = design,
                                  regulator = pure, nsims = 1e6)
  }
}
plot(CV, res2$power[res2$method == "ABE"], type = "n", axes = FALSE,
     ylim = c(0.65, 1), xlab = "CV", ylab = "")
abline(v = seq(0.35, 0.55, 0.05), lty = 3, col = "lightgrey")
abline(v = 0.45, lty = 2)
abline(h = axTicks(2, log = FALSE), lty = 3, col = "lightgrey")
axis(1, at = seq(0.35, 0.55, 0.05))
axis(2, las = 1)
mtext("power", 2, line = 2.6)
legend("topright", legend = methods, inset = 0.02, lwd = 2, cex = 0.9,
       col = clr, box.lty = 0, bg = "white", title = "n for CV = 45%")
lines(CV, res2$power[res2$method == "ABE"],   lwd = 2, col = clr[1])
lines(CV, res2$power[res2$method == "ABEL"],  lwd = 2, col = clr[2])
lines(CV, res2$power[res2$method == "RSABE"], lwd = 2, col = clr[3])
lines(CV, res2$power[res2$method == "SABE"],  lwd = 2, col = clr[4])
box()
#################
design <- "2x3x3"
res3 <- data.frame(CV = CV, method = rep(methods, each = length(CV)),
                   power = NA)
n.ABE   <- sampleN.TOST(CV = CV.fix, theta0 = theta0, design = design,
                        print = FALSE)[["Sample size"]]
n.RSABE <- sampleN.RSABE(CV = CV.fix, theta0 = theta0, design = design,
                         print = FALSE, details = FALSE)[["Sample size"]]
n.ABEL  <- sampleN.scABEL(CV = CV.fix, theta0 = theta0, design = design,
                          print = FALSE, details = FALSE)[["Sample size"]]
n.SABE  <- sampleN.scABEL(CV = CV.fix, theta0 = theta0, design = design,
                          print = FALSE, regulator = pure,
                          details = FALSE)[["Sample size"]]
for (i in 1:nrow(res3)) {
  if (res3$method[i] == "ABE") {
    res3$power[i] <- power.TOST(CV = res3$CV[i], theta0 = theta0,
                                n = n.ABE, design = design)
  }
  if (res3$method[i] == "ABEL") {
    res3$power[i] <- power.scABEL(CV = res3$CV[i], theta0 = theta0,
                                  n = n.ABEL, design = design, nsims = 1e6)
  }
  if (res3$method[i] == "RSABE") {
    res3$power[i] <- power.RSABE(CV = res3$CV[i], theta0 = theta0,
                                 n = n.RSABE, design = design, nsims = 1e6)
  }
  if (res3$method[i] == "SABE") {
    res3$power[i] <- power.scABEL(CV = res3$CV[i], theta0 = theta0,
                                  n = n.ABEL, design = design,
                                  regulator = pure, nsims = 1e6)
  }
}
plot(CV, res3$power[res3$method == "ABE"], type = "n", axes = FALSE,
     ylim = c(0.65, 1), xlab = "CV", ylab = "")
abline(v = seq(0.35, 0.55, 0.05), lty = 3, col = "lightgrey")
abline(v = 0.45, lty = 2)
abline(h = axTicks(2, log = FALSE), lty = 3, col = "lightgrey")
axis(1, at = seq(0.35, 0.55, 0.05))
axis(2, las = 1)
mtext("power", 2, line = 2.6)
legend("topright", legend = methods, inset = 0.02, lwd = 2, cex = 0.9,
       col = clr, box.lty = 0, bg = "white", title = "n for CV = 45%")
lines(CV, res3$power[res3$method == "ABE"],   lwd = 2, col = clr[1])
lines(CV, res3$power[res3$method == "ABEL"],  lwd = 2, col = clr[2])
lines(CV, res3$power[res3$method == "RSABE"], lwd = 2, col = clr[3])
lines(CV, res3$power[res3$method == "SABE"],  lwd = 2, col = clr[4])
box()
par(op)
```

As expected, the power of ABE is extremely dependent on the CV. Not surprisingly, because the acceptance limits are fixed at 80.00 – 125.00%. As stated above, ideally reference-scaling should preserve power independently of the CV. If that were the case, power would be a line parallel to the x-axis. However, the methods implemented by authorities are decision schemes (outlined in the articles about RSABE and ABEL), where certain conditions have to be observed. Therefore, beyond a maximum around 50%, power starts to decrease because the PE-constraint becomes increasingly important and – for ABEL – the upper cap of scaling sets in. On the other hand, 'pure' SABE shows the unconstrained behavior of ABEL.

Let's go deeper into the matter. As above, but with a wider range of CV values (0.3 – 1). Here we see a clear difference between RSABE and ABEL. Although in both the PE-constraint has to be observed, in the former no upper cap of scaling is imposed and hence, power is affected to a minor degree. On the contrary, due to the upper cap of scaling in the latter, it behaves similarly to ABE with fixed limits of 69.84 – 143.19%.
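The cap behaviour can also be seen directly from the expanded limits. A minimal sketch (assuming `PowerTOST` is loaded; `scABEL()` is the exported function returning the lower and upper ABEL limits for a given $$\small{CV_\textrm{wR}}$$):

```r
library(PowerTOST)
# Expanded ABEL limits for the EMA's conditions: fixed at 80.00-125.00%
# up to CVwR = 30%, scaled with CVwR in between, and capped at CVwR = 50%
# (at and beyond the cap the limits stay at 69.84-143.19%)
CVs <- c(0.25, 0.30, 0.40, 0.50, 0.60)
lim <- t(sapply(CVs, scABEL, regulator = "EMA"))
res <- data.frame(CVwR  = CVs,
                  lower = 100 * lim[, "lower"],
                  upper = 100 * lim[, "upper"])
print(res, row.names = FALSE)
```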
Consequently, if the CV turns out to be substantially larger than assumed, power in ABEL may be compromised. Note also the huge gap between ABEL and 'pure' SABE. Whilst the PE-constraint is statistically not justified, it was introduced in all jurisdictions 'for political reasons'.

1. There is no scientific basis or rationale for the point estimate recommendations
2. There is no belief that addition of the point estimate criteria will improve the safety of approved generic drugs
3. The point estimate recommendations are only "political" to give greater assurance to clinicians and patients who are not familiar with (don't understand) the statistics of highly variable drugs

# Statistical issues

If in Reference-Scaled Average Bioequivalence the realized $$\small{s_\textrm{wR}<0.294}$$, the study has to be assessed for Average Bioequivalence.

*Over-specification*: more parameters than can be uniquely estimated.

Alas, the recommended mixed-effects model13 14 is over-specified for partial (aka semi-replicate) designs – since T is not repeated – and therefore, the software's optimizer may fail to converge.15 Note that there are no problems in Average Bioequivalence with Expanding Limits because a simple ANOVA (all effects fixed and assuming $$\small{s_\textrm{wT}^2\equiv s_\textrm{wR}^2}$$) has to be used.16

Say, the $$\small{\log_{e}}$$-transformed AUC data are given in the data set pk. Then the SAS code recommended by the FDA13 14 17 is:

```sas
PROC MIXED data = pk;
  CLASSES SEQ SUBJ PER TRT;
  MODEL LAUC = SEQ PER TRT/ DDFM = SATTERTH;
  RANDOM TRT/TYPE = FA0(2) SUB = SUBJ G;
  REPEATED/GRP = TRT SUB = SUBJ;
  ESTIMATE 'T vs. R' TRT 1 -1/CL ALPHA = 0.1;
  ods output Estimates = unsc1;
  title1 'unscaled BE 90% CI - guidance version';
  title2 'AUC';
run;
data unsc1;
  set unsc1;
  unscabe_lower = exp(lower);
  unscabe_upper = exp(upper);
run;
```

FA0(2) denotes a 'No Diagonal Factor Analytic' covariance structure with $$\small{q=2}$$ [sic] factors, i.e., $$\small{\frac{q}{2}(2t-q+1)+t=(2t-2+1)+t}$$ parameters, where the $$\small{i,j}$$ element is $$\small{\sum_{k=1}^{\textrm{min}(i,j,q=2)}\lambda_{ik}\lambda_{jk}}$$.18 The model has five variance components ($$\small{s_\textrm{wR}^2}$$, $$\small{s_\textrm{wT}^2}$$, $$\small{s_\textrm{bR}^2}$$, $$\small{s_\textrm{bT}^2}$$, and $$\small{cov(\textrm{bR},\textrm{bT})}$$), where the last three are combined to give the 'subject-by-formulation interaction' variance component as $$\small{s_\textrm{bR}^2+s_\textrm{bT}^2-2\,cov(\textrm{bR},\textrm{bT})}$$. That's perfectly fine for all full replicate designs, i.e., TR|RT|TT|RR,19 TRT|RTR, TRR|RTT, TRTR|RTRT, TRRT|RTTR, TTRR|RRTT, TRTR|RTRT|TRRT|RTTR, and TTRRT|RTTR|TTRR|RRTT, where all components can be uniquely estimated. However, in the partial replicate designs, i.e., TRR|RTR|RRT13 14 and TRR|RTR,20 21 only R is repeated and consequently, just $$\small{s_\textrm{wR}^2}$$, $$\small{s_\textrm{bR}^2}$$, and the total variance of T ($$\small{s_\textrm{T}^2=s_\textrm{wT}^2+s_\textrm{bT}^2}$$) can be estimated.

In the partial replicate designs the optimizer tries hard to come up with the solution we requested. In the 'best' case one gets the correct $$\small{s_\textrm{wR}^2}$$ and – rightly –

```
NOTE: Convergence criteria met but final hessian is not positive definite.
```

Of course, $$\small{s_\textrm{wT}^2}$$ is nonsense (and differs between software packages…). In the worst case the optimizer shows us the finger.

```
WARNING: Did not converge.
WARNING: Output 'Estimates' was not created.
```

Terrible consequence: study performed, no result, the innocent statistician – falsely – blamed.
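To make the FA0(2) parameterization concrete, here is a base-R sketch (the λ values are arbitrary, purely for illustration, and the row/column ordering T-before-R is an assumption):

```r
# FA0(2) for t = 2 treatments: G = Lambda %*% t(Lambda), with Lambda
# lower triangular (no separate diagonal term, hence 'FA0'),
# giving 3 G-side parameters plus t = 2 within-subject variances
# from the REPEATED statement = 5 variance components in total
lambda <- c(l11 = 0.40, l21 = 0.30, l22 = 0.25) # arbitrary illustration
Lambda <- matrix(c(lambda["l11"], 0,
                   lambda["l21"], lambda["l22"]),
                 nrow = 2, byrow = TRUE)
G <- Lambda %*% t(Lambda)
# G[1, 1] = s2.bT, G[2, 2] = s2.bR, G[1, 2] = cov(bT, bR);
# the subject-by-formulation interaction variance combines them:
s2.D <- G[1, 1] + G[2, 2] - 2 * G[1, 2]
print(G)
print(s2.D)
```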
With an R script the PE can be obtained but not the required 90% confidence interval.15 There are workarounds. In most cases FA0(1) converges, and generally CSH (heterogeneous compound-symmetry) or simply CS (compound-symmetry) do. Since these structures are not stated in the guidance(s), one risks a 'Refuse-to-Receive'22 in the application. It must be mentioned that – in extremely rare cases – nothing helps! Try to invoice the .

I simulated 10,000 partial replicate studies23 with $$\small{\theta_0=1}$$, $$\small{s_\textrm{wT}^2=s_\textrm{wR}^2=0.086}$$ $$\small{(CV_\textrm{w}\approx 29.97\%)}$$, $$\small{s_\textrm{bT}^2=s_\textrm{bR}^2=0.172}$$ $$\small{(CV_\textrm{b}\approx 43.32\%)}$$, $$\small{\rho=1}$$, i.e., homoscedasticity and no subject-by-formulation interaction. With $$\small{n=24}$$ subjects there is 82.38% power to demonstrate ABE. I evaluated the studies in Phoenix/WinNonlin,24 with singularity tolerance and convergence criterion 10^-12 (instead of 10^-10) and an iteration limit of 250 (instead of 50), and got:

$\small{\begin{array}{lrrc}\hline \text{Convergence} & \texttt{FA0(2)} & \texttt{FA0(1)} & \texttt{CS}\\ \hline \text{Achieved} & 30.14\% & 99.97\% & 100\% \\ \text{Modif. Hessian} & 69.83\% & - & - \\ >\text{Iteration limit} & 0.03\% & 0.03\% & - \\\hline \end{array} \hphantom{a} \begin{array}{lrrc}\hline \text{Warnings} & \texttt{FA0(2)} & \texttt{FA0(1)} & \texttt{CS}\\ \hline \text{Neg. variance component} & 9.01\% & 3.82\% & 11.15\% \\ \text{Modified Hessian} & 68.75\% & - & - \\ \text{Both} & 2.22\% & 0.06\% & - \\\hline \end{array}}$

As long as we achieve convergence, it doesn't matter. Perhaps as long as the data set is balanced and/or does not contain 'outliers', all is good. I compared the results obtained with FA0(1) and CS to the guidances' FA0(2). 80.95% of the simulated studies passed ABE.

$\small{\begin{array}{lcrcrrrr} \hline & s_\textrm{wR}^2 & \textrm{RE (%)} & s_\textrm{bR}^2 & \textrm{RE (%)} & \text{PE}\;\;\; & \textrm{90% CL}_\textrm{lower} & \textrm{90% CL}_\textrm{upper}\\ \hline \text{Min.} & 0.020100 & -76.6\% & 0.00005 & -100.0\% & 75.59 & 65.96 & 84.88\\ \text{Q I} & 0.067613 & -21.4\% & 0.12461 & -27.6\% & 95.14 & 83.84 & 107.59\\ \text{Med.} & 0.083282 & -3.2\% & 0.16537 & -3.9\% & 99.95 & 88.29 & 113.15\\ \text{Q III} & 0.102000 & +18.6\% & 0.21408 & +24.5\% & 105.00 & 92.92 & 119.15\\ \text{Max.} & 0.199367 & +131.8\% & 0.51370 & +198.7\% & 135.92 & 123.82 & 149.75\\\hline \end{array}}$

Up to the 4th decimal (rounded to percent, i.e., 6–7 significant digits) the CIs were identical in all cases. Only when I looked at the 5th decimal did ~1/500 differ for both covariance structures (the CI was wider than with FA0(2) and hence, more conservative). Since all guidelines require rounding to the 2nd decimal, that's not relevant anyhow.

One example where the optimizer was in deep trouble with FA0(2), FA0(1), and CSH (all with singularity tolerance and convergence criterion 10^-15):

$\small{\begin{array}{rccccc} \hline \text{Iter}_\textrm{max} & s_\textrm{wR}^2 & \textrm{-2REML LL} & \textrm{AIC} & df & \textrm{90% CI}\\ \hline 50 & 0.084393 & 39.368 & 61.368 & 22.10798 & \text{82.212 -- 111.084}\\ 250 & 0.085094 & 39.345 & 61.345 & 22.18344 & \text{82.223 -- 111.070}\\ \text{1,250} & 0.085271 & 39.339 & 61.339 & 22.20523 & \text{82.225 -- 111.066}\\ \text{6,250} & 0.085309 & 39.338 & 61.338 & 22.20991 & \text{82.226 -- 111.066}\\ \text{31,250} & 0.085317 & 39.338 & 61.338 & 22.21032 & \text{82.226 -- 111.066}\\\hline \end{array}}$

A warning was thrown in all cases:

```
Failed to converge in allocated number of iterations. Output is suspect.
Negative final variance component. Consider omitting this VC structure.
```

Note that $$\small{s_\textrm{wR}^2}$$ increases with the number of iterations, which is over-compensated by the increasing degrees of freedom, and hence the CI narrows. Note also that SAS forces negative variance components to zero – which is questionable as well. However, with CS (compound-symmetry) convergence was achieved after just four (‼) iterations without any warnings: $$\small{s_\textrm{wR}^2=0.089804}$$, $$\small{df=22.32592}$$, $$\small{\textrm{90% CI}=\text{82.258 -- 111.023}}$$. Welcome to the hell of mixed-effects modeling. Now I understand why Health Canada requires that the optimizer's constraints are stated in the SAP.25

If you wonder whether a 3-period full replicate design is acceptable for agencies:

- According to the EMA it is indeed.26
- Already in 2001 the FDA recommended the 2-sequence, 4-period full replicate design TRTR|RTRT but also stated:17 »Other replicated crossover designs are possible. For example, a three-period design TRT|RTR could be used. […] the two-sequence, three-period design TRR|RTT is thought to be optimal among three-period replicated crossover designs.« The guidance has been in force for 21 years and the lousy partial replicate is not mentioned at all…
- It is unclear whether the problematic 3-sequence partial replicate design mentioned more recently13 14 is mandatory or just given as an example. Does the overarching guidance about statistics in bioequivalence17 (which is final) overrule later ones, which are only drafts? If in doubt, initiate a 'Controlled Correspondence'27 beforehand. Good luck!28

# Study costs

Power (and hence, the sample size) depends on the number of treatments – the smaller sample size in replicate designs is compensated by more administrations. For ABE the costs of a replicate design are similar to those of the common 2×2×2 crossover design.
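This can be checked quickly with `sampleN.TOST()` (a sketch assuming `PowerTOST`; the CV and T/R-ratio are the ones assumed earlier):

```r
library(PowerTOST)
# Number of treatments (periods x subjects) needed for ABE with the same
# assumptions in the conventional crossover and the replicate designs
CV     <- 0.45
theta0 <- 0.90
des    <- c("2x2x2", "2x2x4", "2x2x3")
per    <- c(2, 4, 3) # periods of each design
n      <- sapply(des, function(d)
            sampleN.TOST(CV = CV, theta0 = theta0, design = d,
                         print = FALSE)[["Sample size"]])
# the 4-period design needs about n/2 and the 3-period about 3n/4
# subjects; the total number of treatments is roughly comparable
print(data.frame(design = des, n = n, treatments = per * n),
      row.names = FALSE)
```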
If the sample size of a 2×2×2 design is $$\small{n}$$, then the sample size of a 4-period replicate design is $$\small{^1/_2\,n}$$ and of a 3-period replicate design $$\small{^3/_4\,n}$$. Nevertheless, smaller sample sizes come at a price. We have the same number of samples to analyze, and study costs are driven to a good part by bioanalytics.29 We save costs due to fewer pre-/post-study exams but have to pay a higher subject remuneration (more hospitalizations and blood samples). If applicable (depending on the drug): increased costs for in-study safety and/or PD measurements. Furthermore, one must be aware that more periods / washout phases increase the chance of dropouts.

Let's compare the study costs (approximated by the number of treatments) of 3-period replicate designs to 4-period replicate designs planned for ABEL and RSABE. I assumed a T/R-ratio of 0.90, a CV-range of 30 – 65%, and targeted ≥ 80% power.

```r
CV      <- seq(0.30, 0.65, 0.05)
theta0  <- 0.90
target  <- 0.80
designs <- c("2x2x4", "2x2x3", "2x3x3")
res1 <- data.frame(design = designs, CV = rep(CV, each = length(designs)),
                   n = NA_integer_, n.trt = NA_integer_, costs = "100% * ")
for (i in 1:nrow(res1)) {
  res1$n[i] <- sampleN.scABEL(CV = res1$CV[i], theta0 = theta0,
                              targetpower = target,
                              design = res1$design[i], print = FALSE,
                              details = FALSE)[["Sample size"]]
  n.per        <- as.integer(substr(res1$design[i], 5, 5))
  res1$n.trt[i] <- n.per * res1$n[i]
}
ref1 <- res1[res1$design == "2x2x4", c(1:2, 4)]
for (i in 1:nrow(res1)) {
  if (!res1$design[i] %in% ref1$design) {
    res1$costs[i] <- sprintf("%.0f%% ", 100 * res1$n.trt[i] /
                             ref1$n.trt[ref1$CV == res1$CV[i]])
  }
}
names(res1)[4:5] <- c("treatments", "rel. costs")
cat("ABEL (EMA and others)\n")
print(res1, row.names = FALSE)
# ABEL (EMA and others)
# design   CV  n treatments rel. costs
#  2x2x4 0.30 34        136     100% *
#  2x2x3 0.30 50        150       110%
#  2x3x3 0.30 54        162       119%
#  2x2x4 0.35 34        136     100% *
#  2x2x3 0.35 50        150       110%
#  2x3x3 0.35 48        144       106%
#  2x2x4 0.40 30        120     100% *
#  2x2x3 0.40 46        138       115%
#  2x3x3 0.40 42        126       105%
#  2x2x4 0.45 28        112     100% *
#  2x2x3 0.45 42        126       112%
#  2x3x3 0.45 39        117       104%
#  2x2x4 0.50 28        112     100% *
#  2x2x3 0.50 42        126       112%
#  2x3x3 0.50 39        117       104%
#  2x2x4 0.55 30        120     100% *
#  2x2x3 0.55 44        132       110%
#  2x3x3 0.55 42        126       105%
#  2x2x4 0.60 32        128     100% *
#  2x2x3 0.60 48        144       112%
#  2x3x3 0.60 48        144       112%
#  2x2x4 0.65 36        144     100% *
#  2x2x3 0.65 54        162       112%
#  2x3x3 0.65 54        162       112%
```

In any case, 3-period designs are more costly than 4-period full replicate designs. However, in the latter dropouts are more likely and the sample size has to be adjusted accordingly. Given that, the difference diminishes. Since there are no convergence issues in ABEL, the partial replicate design can be used – agencies are only interested in $$\small{CV_\textrm{wR}}$$. However, I prefer one of the 3-period full replicate designs due to the additional information about $$\small{CV_\textrm{wT}}$$.

```r
CV      <- seq(0.30, 0.65, 0.05)
theta0  <- 0.90
target  <- 0.80
designs <- c("2x2x4", "2x2x3", "2x3x3")
res2 <- data.frame(design = designs, CV = rep(CV, each = length(designs)),
                   n = NA_integer_, n.trt = NA_integer_, costs = "100% * ")
for (i in 1:nrow(res2)) {
  res2$n[i] <- sampleN.RSABE(CV = res2$CV[i], theta0 = theta0,
                             targetpower = target,
                             design = res2$design[i], print = FALSE,
                             details = FALSE)[["Sample size"]]
  n.per        <- as.integer(substr(res2$design[i], 5, 5))
  res2$n.trt[i] <- n.per * res2$n[i]
}
ref2 <- res2[res2$design == "2x2x4", c(1:2, 4)]
for (i in 1:nrow(res2)) {
  if (!res2$design[i] %in% ref2$design) {
    res2$costs[i] <- sprintf("%.0f%% ", 100 * res2$n.trt[i] /
                             ref2$n.trt[ref2$CV == res2$CV[i]])
  }
}
names(res2)[4:5] <- c("treatments", "rel. costs")
cat("RSABE (U.S. FDA and China CDE)\n")
print(res2, row.names = FALSE)
# RSABE (U.S. FDA and China CDE)
# design   CV  n treatments rel. costs
#  2x2x4 0.30 32        128     100% *
#  2x2x3 0.30 46        138       108%
#  2x3x3 0.30 45        135       105%
#  2x2x4 0.35 28        112     100% *
#  2x2x3 0.35 42        126       112%
#  2x3x3 0.35 39        117       104%
#  2x2x4 0.40 24         96     100% *
#  2x2x3 0.40 38        114       119%
#  2x3x3 0.40 33         99       103%
#  2x2x4 0.45 24         96     100% *
#  2x2x3 0.45 36        108       112%
#  2x3x3 0.45 33         99       103%
#  2x2x4 0.50 22         88     100% *
#  2x2x3 0.50 34        102       116%
#  2x3x3 0.50 30         90       102%
#  2x2x4 0.55 22         88     100% *
#  2x2x3 0.55 34        102       116%
#  2x3x3 0.55 30         90       102%
#  2x2x4 0.60 24         96     100% *
#  2x2x3 0.60 36        108       112%
#  2x3x3 0.60 33         99       103%
#  2x2x4 0.65 24         96     100% *
#  2x2x3 0.65 36        108       112%
#  2x3x3 0.65 33         99       103%
```

In almost all cases, 3-period designs are more costly than 4-period full replicate designs. However, in the latter dropouts are more likely and the sample size has to be adjusted accordingly. Given that, the difference diminishes. Due to the convergence issues in ABE (mandatory if the realized $$\small{s_\textrm{wR}<0.294}$$), I strongly recommend avoiding the partial replicate design and opting for one of the 3-period full replicate designs instead.

# Pros and Cons

From a statistical perspective, replicate designs are preferable to the 2×2×2 crossover design. If we observe discordant30 outliers in the latter, we cannot distinguish between lack of compliance (the subject didn't take the drug), a product failure, and a subject-by-formulation interaction (the subject belongs to a subpopulation). A member of the EMA's PKWP once told me that he would like to see all studies performed in a replicate design – regardless of whether the drug / drug product is highly variable or not. One of the rare cases where we were of the same opinion.31

We always design studies for the worst-case combination, i.e., based on the PK metric requiring the largest sample size. In jurisdictions accepting reference-scaling only for Cmax (e.g., by ABEL), the sample size is driven by AUC.
```r
metrics <- c("Cmax", "AUCt", "AUCinf")
alpha   <- 0.05
CV      <- c(0.45, 0.34, 0.36)
theta0  <- rep(0.90, 3)
theta1  <- 0.80
theta2  <- 1 / theta1
target  <- 0.80
design  <- "2x2x4"
plan <- data.frame(metric = metrics, method = c("ABEL", "ABE", "ABE"),
                   CV = CV, theta0 = theta0,
                   L = 100 * theta1, U = 100 * theta2,
                   n = NA, power = NA)
for (i in 1:nrow(plan)) {
  if (plan$method[i] == "ABEL") {
    plan[i, 5:6] <- round(100 * scABEL(CV = CV[i]), 2)
    plan[i, 7:8] <- signif(
                      sampleN.scABEL(alpha = alpha, CV = CV[i],
                                     theta0 = theta0[i], theta1 = theta1,
                                     theta2 = theta2, targetpower = target,
                                     design = design, details = FALSE,
                                     print = FALSE)[8:9], 4)
  } else {
    plan[i, 7:8] <- signif(
                      sampleN.TOST(alpha = alpha, CV = CV[i],
                                   theta0 = theta0[i], theta1 = theta1,
                                   theta2 = theta2, targetpower = target,
                                   design = design, print = FALSE)[7:8], 4)
  }
}
txt <- paste0("Sample size based on ",
              plan$metric[plan$n == max(plan$n)], ".\n")
print(plan, row.names = FALSE)
cat(txt)
# metric method   CV theta0     L      U  n  power
#   Cmax   ABEL 0.45    0.9 72.15 138.59 28 0.8112
#   AUCt    ABE 0.34    0.9 80.00 125.00 50 0.8055
# AUCinf    ABE 0.36    0.9 80.00 125.00 56 0.8077
# Sample size based on AUCinf.
```

If the study is performed with 56 subjects and all assumed values are realized, post hoc power will be 0.9666 for Cmax. I have seen deficiency letters by regulatory assessors asking for a »justification of too high power for Cmax«.

As shown in the article about ABEL, we get an incentive in the sample size if $$\small{CV_\textrm{wT}<CV_\textrm{wR}}$$. However, this does not help if reference-scaling is not acceptable (say, for the AUC in most jurisdictions), because the conventional model for ABE assumes homoscedasticity ($$\small{CV_\textrm{wT}\equiv CV_\textrm{wR}}$$).

```r
theta0 <- 0.90
design <- "2x2x4"
CVw    <- 0.36 # AUC - no reference-scaling
# variance-ratio 0.80: T lower than R
CV <- signif(CVp2CV(CV = CVw, ratio = 0.80), 5)
# 'switch off' all scaling conditions of ABEL
reg <- reg_const("USER", r_const = 0.76,
                 CVswitch = Inf, CVcap = Inf)
reg$pe_constr <- FALSE
res <- data.frame(variance = c("homoscedastic", "heteroscedastic"),
                  CVwT = c(CVw, CV[1]), CVwR = c(CVw, CV[2]),
                  CVw = rep(CVw, 2), n = NA)
res$n[1] <- sampleN.TOST(CV = CVw, theta0 = theta0, design = design,
                         print = FALSE)[["Sample size"]]
res$n[2] <- sampleN.scABEL(CV = CV, theta0 = theta0, design = design,
                           regulator = reg, details = FALSE,
                           print = FALSE)[["Sample size"]]
print(res, row.names = FALSE)
#        variance    CVwT    CVwR  CVw  n
#   homoscedastic 0.36000 0.36000 0.36 56
# heteroscedastic 0.33824 0.38079 0.36 56
```

Although we know that the test has a lower within-subject $$\small{CV}$$ than the reference, this information is ignored and the (pooled) within-subject $$\small{CV_\textrm{w}}$$ is used.

## Pros

- Statistically sound. Estimation of CVwR (and in full replicate designs additionally of CVwT) is possible. The additional information is a welcome side effect.
- Mandatory for ABEL and RSABE. Smaller sample sizes than for ABE.
- 'Outliers' can be better assessed than in the 2×2×2 crossover design.
  - In the 2×2×2 crossover this is rather difficult (exclusion of subjects based on statistical and/or PK grounds alone is not acceptable).
  - For ABEL, assessment of 'outliers' (of the reference treatment only) is part of the recommended procedure.32

## Cons

- A larger sample-size adjustment for the anticipated dropout-rate is required than in a 2×2×2 crossover design, due to three or four periods instead of two.33
- Contrary to ABE – where the limits are based on Δ according to (1) – in practice the scaled limits of SABE (2) are calculated based on the realized swR.
- Without access to the study report, Δ cannot be re-calculated.
This is an unsatisfactory situation for physicians, pharmacists, and patients alike. • The elephant in the room: Potential inflation of the Type I Error (patient’s risk) in RSABE (if CVwR < 30%) and in ABEL (if ~25% < CVwR < ~42%). This issue is covered in another article. # Uncertain CVwR An intriguing statement of the EMA’s Pharmacokinetics Working Party. Suitability of a 3-period replicate design scheme for the demonstration of within-subject variability for Cmax The question raised asks if it is possible to use a design where subjects are randomised to receive treatments in the order of TRT or RTR. This design is not considered optimal […]. However, it would provide an estimate of the within subject variability for both test and reference products. As this estimate is only based on half of the subjects in the study the uncertainty associated with it is higher than if a RRT/RTR/TRR design is used and therefore there is a greater chance of incorrectly concluding a reference product is highly variable if such a design is used. The CHMP bioequivalence guideline requires that at least 12 patients are needed to provide data for a bioequivalence study to be considered valid, and to estimate all the key parameters. Therefore, if a 3-period replicate design, where treatments are given in the order TRT or RTR, is to be used to justify widening of a confidence interval for Cmax then it is considered that at least 12 patients would need to provide data from the RTR arm. This implies a study with at least 24 patients in total would be required if equal number of subjects are allocated to the 2 treatment sequences. 
EMA (2015)34

I fail to find a statement in the guideline35 that $$\small{CV_\textrm{wR}}$$ is a ‘key parameter’ – only that »The number of evaluable subjects in a bioequivalence study should not be less than 12.« However, in sufficiently powered studies such a situation is extremely unlikely (dropout-rate ≥ 42%).36 Let’s explore the uncertainty of $$\small{CV_\textrm{wR}=30\%}$$ based on its 95% confidence interval in two scenarios:

1. No dropouts. In the partial replicate design all subjects provide data for the estimation of CVwR. In the 3-period full replicate design (TRT|RTR) only half of the subjects provide this information.
2. Extreme dropout-rates. Only twelve subjects remain in the R-replicated sequence(s).

```r
# CI of the CV for sample sizes of replicate designs
# (theta0 0.90, target power 0.80)
CV   <- 0.30
des  <- c("2x3x3", # 3-sequence 3-period (partial) replicate design
          "2x2x3", # 2-sequence 3-period full replicate design
          "2x2x4") # 2-sequence 4-period full replicate design
type <- c("partial", rep("full", 2))
seqs <- c("TRR|RTR|RTR", "TRT|RTR    ", "TRTR|RTRT  ")
res  <- data.frame(scenario = c(rep(1, 3), rep(2, 3)),
                   design = rep(des, 2), type = rep(type, 2),
                   sequences = rep(seqs, 2),
                   n  = c(rep(NA, 3), rep(0, 3)),
                   RR = c(rep(NA, 3), rep(0, 3)),
                   df = NA, lower = NA, upper = NA, width = NA)
for (i in 1:nrow(res)) {
  if (is.na(res$n[i])) {
    res$n[i] <- sampleN.scABEL(CV = CV, design = res$design[i],
                               details = FALSE,
                               print = FALSE)[["Sample size"]]
    if (res$design[i] == "2x2x3") {
      res$RR[i] <- res$n[i] / 2
    } else {
      res$RR[i] <- res$n[i]
    }
  }
  if (i > 3) {
    if (res$design[i] == "2x3x3") {
      res$n[i]  <- res$n[i-3] - 12
      res$RR[i] <- 12 # only 12 eligible subjects in sequence RTR
    } else {
      res$n[i]  <- 12          # min. sample size
      res$RR[i] <- res$n[i]    # CVwR can be estimated
    }
  }
  res$df[i]   <- res$RR[i] - 2
  res[i, 8:9] <- CVCL(CV = CV, df = res$df[i],
                      side = "2-sided", alpha = 0.05)
  res[i, 10]  <- res[i, 9] - res[i, 8]
}
res[, 8]  <- sprintf("%.1f%%", 100 * res[, 8])
res[, 9]  <- sprintf("%.1f%%", 100 * res[, 9])
res[, 10] <- sprintf("%.1f%%", 100 * res[, 10])
names(res)[1] <- "sc."
# Rows 1-3: Sample sizes for target power
# Rows 4-6: Only 12 eligible subjects to estimate CVwR
print(res, row.names = FALSE)
# sc. design    type   sequences  n RR df lower upper width
#   1  2x3x3 partial TRR|RTR|RTR 54 54 52 25.0% 37.6% 12.5%
#   1  2x2x3    full TRT|RTR     50 25 23 23.1% 43.0% 19.9%
#   1  2x2x4    full TRTR|RTRT   34 34 32 23.9% 40.3% 16.4%
#   2  2x3x3 partial TRR|RTR|RTR 42 12 10 20.7% 55.1% 34.4%
#   2  2x2x3    full TRT|RTR     12 12 10 20.7% 55.1% 34.4%
#   2  2x2x4    full TRTR|RTRT   12 12 10 20.7% 55.1% 34.4%
```

Granted, the CI of the $$\small{CV_\textrm{wR}}$$ in the partial replicate design is narrower than in a three-period full replicate design. Is that really relevant, esp. since only twelve eligible subjects in the RTR-sequence are acceptable to provide a ‘valid’ estimate? Obviously the EMA’s PKWP is aware of the uncertainty of the realized $$\small{CV_\textrm{wR}}$$, which may lead to a misclassification (the study is assessed by ABEL although the drug / drug product is not highly variable) and hence, a potentially inflated Type I Error (TIE, patient’s risk). The partial replicate has – given studies with the same power – the largest degrees of freedom and hence, leads to the lowest TIE.37 However, it does not magically disappear. Such a misclassification may also affect the Type II Error (producer’s risk). If the realized $$\small{CV_\textrm{wR}}$$ is lower than assumed in sample size estimation, less expansion can be applied and the study will be underpowered. Of course, that’s not a regulatory concern.
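As a side note, the chance of such a misclassification can be quantified directly from the same χ²-relationship that underlies the confidence intervals above. A minimal sketch in base R – the true CVwR of 28% is a hypothetical value (not an HVD), and the degrees of freedom are picked to mirror the tables above:

```r
# Probability of wrongly classifying a drug as highly variable, i.e.,
# observing CVwR > 30% although the true CVwR is below 30%.
# Since s2wR ~ sigma2wR * chisq(nu) / nu on the log scale,
# P(CVwR.obs > cutoff) = P(chisq(nu) > nu * log(cutoff^2 + 1) / log(CV.true^2 + 1))
misclass <- function(CV.true, nu, cutoff = 0.30) {
  1 - pchisq(nu * log(cutoff^2 + 1) / log(CV.true^2 + 1), df = nu)
}
CV.true <- 0.28           # hypothetical true CVwR (below the 30% cutoff)
nu      <- c(10, 32, 52)  # dfs: 12 subjects in RTR, 2x2x4, 2x3x3 (as above)
round(misclass(CV.true, nu), 3)
```

With only ten degrees of freedom the risk of a false ‘highly variable’ label is sizable (roughly one third); it shrinks with increasing degrees of freedom, in line with the PKWP’s concern.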
```r
design  <- "2x2x4"
theta0  <- 0.90                  # assumed T/R-ratio
CV.ass  <- 0.35                  # assumed CV
CV.real <- c(CV.ass, 0.30, 0.40) # realized CV
# sample size based on assumed T/R-ratio and CV, targeted at >= 80% power
res <- data.frame(CV.ass = CV.ass,
                  n = sampleN.scABEL(CV = CV.ass, design = design,
                                     theta0 = theta0, details = FALSE,
                                     print = FALSE)[["Sample size"]],
                  CV.real = CV.real, L = NA_real_, U = NA_real_,
                  TIE = NA_real_, TIIE = NA_real_)
for (i in 1:nrow(res)) {
  res$L[i]    <- scABEL(CV = res$CV.real[i])[["lower"]]
  res$U[i]    <- scABEL(CV = res$CV.real[i])[["upper"]]
  res$TIE[i]  <- power.scABEL(CV = res$CV.real[i], design = design,
                              theta0 = res$U[i], n = res$n[i])
  res$TIIE[i] <- 1 - power.scABEL(CV = res$CV.real[i], design = design,
                                  theta0 = theta0, n = res$n[i])
}
res$CV.ass  <- sprintf("%.0f%%", 100 * res$CV.ass)
res$CV.real <- sprintf("%.0f%%", 100 * res$CV.real)
res$L       <- sprintf("%.2f%%", 100 * res$L)
res$U       <- sprintf("%.2f%%", 100 * res$U)
res$TIE     <- sprintf("%.5f", res$TIE)
res$TIIE    <- sprintf("%.4f", res$TIIE)
names(res)[c(1, 3)] <- c("assumed", "realized")
print(res, row.names = FALSE)
# assumed  n realized      L       U     TIE   TIIE
#     35% 34      35% 77.23% 129.48% 0.06557 0.1882
#     35% 34      30% 80.00% 125.00% 0.08163 0.1972
#     35% 34      40% 74.62% 134.02% 0.05846 0.1535
```

I recommend the article about power analysis (sections ABEL and RSABE). Of note, if there are no / few dropouts, the estimated $$\small{CV_\textrm{wR}}$$ in 4-period full replicate designs carries a larger uncertainty due to its lower sample size and therefore, fewer degrees of freedom. If the PKWP is concerned about an ‘uncertain’ estimate, why is this design given as an example?16 38 Many studies are performed in this design and are accepted by agencies. Since for RSABE generally smaller sample sizes are required than for ABEL, the estimated $$\small{CV_\textrm{wR}}$$ is more uncertain in the former.
```r
# Cave: very long runtime
theta0 <- 0.90
target <- 0.80
CV     <- seq(0.3, 0.5, 0.00025)
x      <- seq(0.3, 0.5, 0.05)
des    <- c("2x3x3", # 3-sequence 3-period (partial) replicate design
            "2x2x3", # 2-sequence 3-period full replicate design
            "2x2x4") # 2-sequence 4-period full replicate design
RSABE <- ABEL <- data.frame(design = rep(des, each = length(CV)),
                            n = NA, RR = NA, df = NA, CV = CV,
                            lower = NA, upper = NA)
for (i in 1:nrow(ABEL)) {
  RSABE$n[i] <- sampleN.RSABE(CV = RSABE$CV[i], theta0 = theta0,
                              targetpower = target,
                              design = RSABE$design[i], details = FALSE,
                              print = FALSE)[["Sample size"]]
  if (RSABE$design[i] == "2x2x3") {
    RSABE$RR[i] <- RSABE$n[i] / 2
  } else {
    RSABE$RR[i] <- RSABE$n[i]
  }
  RSABE$df[i]   <- RSABE$RR[i] - 2
  RSABE[i, 6:7] <- CVCL(CV = RSABE$CV[i], df = RSABE$df[i],
                        side = "2-sided", alpha = 0.05)
  ABEL$n[i] <- sampleN.scABEL(CV = ABEL$CV[i], theta0 = theta0,
                              targetpower = target,
                              design = ABEL$design[i], details = FALSE,
                              print = FALSE)[["Sample size"]]
  if (ABEL$design[i] == "2x2x3") {
    ABEL$RR[i] <- ABEL$n[i] / 2
  } else {
    ABEL$RR[i] <- ABEL$n[i]
  }
  ABEL$df[i]   <- ABEL$RR[i] - 2
  ABEL[i, 6:7] <- CVCL(CV = ABEL$CV[i], df = ABEL$df[i],
                       side = "2-sided", alpha = 0.05)
}
ylim <- range(c(RSABE[6:7], ABEL[6:7]))
col  <- c("blue", "red", "magenta")
leg  <- c("2×3×3 (partial)", "2×2×3 (full)", "2×2×4 (full)")
dev.new(width = 4.5, height = 4.5, record = TRUE)
op <- par(mar = c(4, 4.1, 0.2, 0.1), cex.axis = 0.9)
plot(CV, rep(0.3, length(CV)), type = "n", ylim = ylim, log = "xy",
     xlab = expression(italic(CV)[wR]),
     ylab = expression(italic(CV)[wR]*"  (95% confidence interval)"),
     axes = FALSE)
grid()
abline(h = 0.3, col = "lightgrey", lty = 3)
axis(1, at = x)
axis(2, las = 1)
axis(2, at = c(0.3, 0.5), las = 1)
lines(CV, CV, col = "darkgrey")
legend("topleft", bg = "white", box.lty = 0, title = "replicate designs",
       legend = leg, col = col, lwd = 2, seg.len = 2.5,
       cex = 0.9, y.intersp = 1.25)
box()
for (i in seq_along(des)) {
  lines(CV, RSABE$lower[RSABE$design == des[i]], col = col[i], lwd = 2)
  lines(CV, RSABE$upper[RSABE$design == des[i]], col = col[i], lwd = 2)
  y <- RSABE$upper[signif(RSABE$CV, 4) %in% x & RSABE$design == des[i]]
  n <- RSABE$n[signif(RSABE$CV, 4) %in% x & RSABE$design == des[i]]
  # sample sizes at CV = x
  shadowtext(x, y, labels = n, bg = "white", col = "black", cex = 0.75)
}
plot(CV, rep(0.3, length(CV)), type = "n", ylim = ylim, log = "xy",
     xlab = expression(italic(CV)[wR]),
     ylab = expression(italic(CV)[wR]*"  (95% confidence interval)"),
     axes = FALSE)
grid()
abline(h = 0.3, col = "lightgrey", lty = 3)
axis(1, at = x)
axis(2, las = 1)
axis(2, at = c(0.3, 0.5), las = 1)
lines(CV, CV, col = "darkgrey")
legend("topleft", bg = "white", box.lty = 0, title = "replicate designs",
       legend = leg, col = col, lwd = 2, seg.len = 2.5,
       cex = 0.9, y.intersp = 1.25)
box()
for (i in seq_along(des)) {
  lines(CV, ABEL$lower[ABEL$design == des[i]], col = col[i], lwd = 2)
  lines(CV, ABEL$upper[ABEL$design == des[i]], col = col[i], lwd = 2)
  y <- ABEL$upper[signif(ABEL$CV, 4) %in% x & ABEL$design == des[i]]
  n <- ABEL$n[signif(ABEL$CV, 4) %in% x & ABEL$design == des[i]]
  # sample sizes at CV = x
  shadowtext(x, y, labels = n, bg = "white", col = "black", cex = 0.75)
}
par(op)
cat("RSABE\n"); print(RSABE[signif(RSABE$CV, 4) %in% x, ], row.names = FALSE)
cat("ABEL\n");  print(ABEL[signif(ABEL$CV, 4) %in% x, ], row.names = FALSE)
```

That’s interesting. Say, we assumed $$\small{CV_\textrm{wR}=37\%}$$ and a T/R-ratio of 0.90, targeted at ≥ 80% power in a 4-period full replicate design intended for ABEL. We performed the study with 32 subjects. The 95% CI of the $$\small{CV_\textrm{wR}}$$ is 29.2% (no expansion, assessment for ABE) to 50.8% (already above the upper cap of 50%). Disturbing, isn’t it?
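These limits can be cross-checked without PowerTOST, from first principles via base R’s `qchisq()` – a sketch, assuming (as above) ν = n − 2 = 30 degrees of freedom for the estimate of CVwR:

```r
# Reproducing the 95% CI of CVwR = 37% with 32 subjects (nu = 30)
# directly from the chi-squared distribution of the error variance
# (the same result CVCL() returns).
CV <- 0.37
nu <- 32 - 2
s2 <- log(CV^2 + 1)                    # error variance on the log scale
L  <- nu * s2 / qchisq(0.975, df = nu) # lower limit of sigma2wR
U  <- nu * s2 / qchisq(0.025, df = nu) # upper limit of sigma2wR
CL <- sqrt(exp(c(L, U)) - 1)           # back-transformed to the CV scale
sprintf("%.1f%%", 100 * CL)            # "29.2%" "50.8%"
```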
Interlude 2

If you wonder why the confidence intervals are asymmetric ($$\small{CL_\textrm{upper}-CV_\textrm{wR}>CV_\textrm{wR}-CL_\textrm{lower}}$$): The $$\small{100\,(1-\alpha)}$$ confidence interval of the $$\small{CV_\textrm{wR}}$$ is obtained via the $$\small{\chi^2}$$-distribution of its error variance $$\small{s_\textrm{wR}^2}$$ with $$\small{n-2}$$ degrees of freedom. $\begin{matrix}\tag{3} s_\textrm{wR}^2=\log_{e}(CV_\textrm{wR}^2+1)\\ L=\frac{(n-2)\,s_\textrm{wR}^2}{\chi_{\alpha/2,\,n-2}^{2}}\leq \sigma_\textrm{wR}^2\leq\frac{(n-2)\,s_\textrm{wR}^2}{\chi_{1-\alpha/2,\,n-2}^{2}}=U\\ \left\{CL_\textrm{lower},\;CL_\textrm{upper}\right\}=\left\{\sqrt{\exp(L)-1},\sqrt{\exp(U)-1}\right\} \end{matrix}$ The $$\small{\chi^2}$$-distribution is skewed to the right. Since the width of the confidence interval for a given $$\small{CV_\textrm{wR}}$$ depends on the degrees of freedom, this implies a more precise estimate in larger studies, which will be required for relatively low variabilities (least scaling). In the example above the width of the CI in the partial replicate design for RSABE is 0.139 (n 45) at $$\small{CV_\textrm{wR}=0.30}$$ and 0.322 (n 30) at $$\small{CV_\textrm{wR}=0.50}$$. For ABEL the widths are 0.125 (n 54) and 0.273 (n 39).

# Postscript

Regularly I’m asked whether it is possible to use an adaptive Two-Stage Design (TSD) for ABEL or RSABE. Whereas for ABE it is possible in principle (no method for replicate designs is published so far – only for 2×2×2 crossovers39), for SABE the answer is no. Contrary to ABE, where power and the Type I Error can be calculated by analytical methods, in SABE we have to rely on simulations. We would have to find a suitable adjusted $$\small{\alpha}$$ and demonstrate beforehand that the patient’s risk will be controlled. For the implemented regulatory frameworks the sample size estimation requires $$\small{10^{5}}$$ simulations to obtain a stable result (see here and there).
Since the convergence of the empiric Type I Error is poor, we need $$\small{10^{6}}$$ simulations. Combining that with a reasonably narrow grid of possible $$\small{n_1}$$ / $$\small{CV_\textrm{wR}}$$-combinations,40 we end up with $$\small{10^{13}-10^{14}}$$ simulations. I don’t see how that can be done in the near future, unless one has access to a massively parallel supercomputer. I made a quick estimate for my fast workstation: ~60 years running 24/7… As outlined above, SABE is rather insensitive to the CV. Hence, the main advantage of TSDs over fixed sample designs in ABE (re-estimating the sample size based on the CV observed in the first stage) is simply not relevant. Fully adaptive methods for the 2×2×2 crossover also allow adjusting for the PE observed in the first stage; here that is not possible. If you are concerned about the T/R-ratio, perform a (reasonably large!)41 pilot study and – even if the T/R-ratio looks promising – plan for a ‘worse’ one since it is not stable between studies.

# Post postscript

Let’s recap the basic mass balance equation of PK: $\small{F\cdot D=V\cdot k\,\cdot\int_{0}^{\infty}C(t)\,dt=CL\cdot AUC_{0-\infty}}\tag{4}$ We assess Bioequivalence by comparative Bioavailability, i.e., $\small{\frac{F_{\textrm{T}}}{F_{\textrm{R}}}\approx \frac{AUC_{\textrm{T}}}{AUC_{\textrm{R}}}}\tag{5}$ That’s only part of the story because – based on $$\small{(4)}$$ – actually $\small{AUC_{\textrm{T}}=\frac{F_\textrm{T}\cdot D_\textrm{T}}{CL}\;\land\;AUC_{\textrm{R}}=\frac{F_\textrm{R}\cdot D_\textrm{R}}{CL}}\tag{6}$ Since an adjustment for measured potency is generally not acceptable, we have to assume that the true contents equal the declared ones and further $\small{D_\textrm{T}\equiv D_\textrm{R}}\tag{7}$ This allows us to eliminate the doses from $$\small{(6)}$$; however, we still have to assume no inter-occasion variability of clearances ($$\small{CL=\textrm{const}}$$) in order to arrive at $$\small{(5)}$$.
Great, but is that true‽ If we have to deal with a HVD, the high variability is an intrinsic property of the drug itself (not the formulation). In BE we are interested in detecting potential differences of formulations, right? Since we ignored the – possibly unequal – clearances, all unexplained variability goes straight into the residual error, resulting in a large within-subject variance and hence, a wide confidence interval. In other words, the formulation is punished for a crime that clearance committed. Can we do anything against it – apart from reference-scaling? We know that $\small{k=CL\big{/}V}\tag{8}$ In crossover designs the volume of distribution of healthy subjects likely shows limited inter-occasion variability. Therefore, we can drop the volume of distribution and approximate the effect of $$\small{CL}$$ by $$\small{k}$$. This leads to $\small{\frac{F_{\textrm{T}}}{F_{\textrm{R}}}\sim \frac{AUC_{\textrm{T}}\cdot k_{\textrm{T}}}{AUC_{\textrm{R}}\cdot k_{\textrm{R}}}}\tag{9}$ A variant of $$\small{(9)}$$ – using $$\small{t_{1/2}}$$ instead of $$\small{k}$$ – was explored already in the dark ages.42 43

Although there was wide variation in milligram dosage, body weight, and estimated halflife in these studies, the average area/dose × halflife ratios are amazingly similar.

John G. Wagner (1967)44

[…] the assumption of constant clearance in the individual between the occasions of receiving the standard and the test dose is suspect for theophylline. […] If there is evidence that the clearance but not the volume of distribution varies in the individual, the AUC × k can be used to gain a more precise index of bioavailability than obtainable from AUC alone.

Upton et al. (1980)45

Confirmed (esp.
for $$\small{AUC_{0-\infty}}$$) in the data set Theoph,46 which is part of the base R installation: $\small{\begin{array}{lc} \textrm{PK metric} & CV_\textrm{geom}\,\%\\\hline AUC_{0-\textrm{tlast}} & {\color{Red} {22.53\%}}\\ AUC_{0-\textrm{tlast}} \times k & {\color{Blue} {21.81\%}}\\ AUC_{0-\infty} & {\color{Red} {28.39\%}}\\ AUC_{0-\infty} \times k & {\color{Blue} {20.36\%}}\\\hline \end{array}}$ Later work47 (by an author of the FDA…) was ignored as well. Hey, for 20+ years! A recent paper demonstrated its usefulness in extensive simulations.

Abstract

Aim: To quantify the utility of a terminal-phase adjusted area under the concentration curve method in increasing the probability of a correct and conclusive outcome of a bioequivalence (BE) trial for highly variable drugs when clearance (CL) varies more than the volume of distribution (V).

Methods: Data from a large population of subjects were generated with variability in CL and V, and used to simulate a two-period, two-sequence crossover BE trial. The 90% confidence interval for formulation comparison was determined following BE assessment using the area under the concentration curve (AUC) ratio test, and the proposed terminal-phase adjusted AUC ratio test. An outcome of bioequivalent, non-bioequivalent or inconclusive was then assigned according to predefined BE limits.

Results: When CL is more variable than V, the proposed approach would enhance the probability of correctly assigning bioequivalent or non-bioequivalent and reduce the risk of an inconclusive trial. For a hypothetical drug with between-subject variability of 35% for CL and 10% for V, when the true test-reference ratio of bioavailability is 1.15, a cross-over study of n=14 subjects analyzed by the proposed method would have 80% or 20% probability of claiming bioequivalent or non-bioequivalent, compared to 22%, 46% or 32% probability of claiming bioequivalent, non-bioequivalent or inconclusive using the standard AUC ratio test.
Conclusions: The terminal-phase adjusted AUC ratio test represents a simple and readily applicable approach to enhance the BE assessment of drug products when CL varies more than V.

Lucas et al. (2022)48

I ❤️ the idea. When Abdallah’s paper47 was published, I tried the approach retrospectively in a couple of my studies. It worked mostly, and if not, it was a HVDP, where the variability is caused by the formulation (e.g., gastric-resistant diclofenac). That’s basic PK, and $$\small{CL=\textrm{const}}$$ is a rather strong assumption, which might be outright false. Then the entire current concept of BE testing is built on sand: studies are substantially larger than necessary, exposing innocent subjects to nasty drugs.

It is recommended that area correction be attempted in bioequivalence studies of drugs where high intrasubject variability in clearance is known or suspected. […] The value of this approach in regulatory decision making remains to be determined.

Abdallah (1998)47

Performance of the AUC·k ratio test […] indicate that the regulators should consider the method for its potential utility in assessing HVDs and lessening unnecessary drug exposure in BE trials.

Lucas et al. (2022)48

For HVDs (not HVDPs) we could probably counteract the high variability, avoid the potentially inflated Type I Error in SABE (which is covered in another article), use conventional ABE with fixed limits, and all will be good. Maybe agencies should revise their guidelines. Hope dies last.

Helmut Schütz 2022

R and PowerTOST GPL 3.0, TeachingDemos Artistic 2.0, pandoc GPL 2.0. 1st version April 22, 2021. Rendered May 19, 2022 00:13 CEST by rmarkdown via pandoc in 0.66 seconds.

Footnotes and References

1. Labes D, Schütz H, Lang B. PowerTOST: Power and Sample Size for (Bio)Equivalence Studies. Package version 1.5.4. 2022-02-21. CRAN.↩︎ 2. Snow G. TeachingDemos: Demonstrations for Teaching and Learning. Package version 2.12. 2020-04-01. CRAN.↩︎ 3. Schütz H. Average Bioequivalence.
2021-01-18. CRAN.↩︎ 4. Schütz H. Reference-Scaled Average Bioequivalence. 2022-02-19. CRAN.↩︎ 5. Labes D, Schütz H, Lang B. Package ‘PowerTOST’. February 21, 2022. CRAN.↩︎ 6. Some gastric resistant formulations of diclofenac are HVDPs, practically all topical formulations are HVDPs, whereas diclofenac itself is not a HVD ($$\small{CV_\textrm{w}}$$ of a solution ~8%).↩︎ 7. Note that the model of SABE is based on the true $$\small{\sigma_\textrm{wR}}$$, whereas in practice the observed $$\small{s_\textrm{wR}}$$ is used.↩︎ 8. Note that the intention to apply one of the SABE-methods must be stated in the protocol. It is neither acceptable to switch post hoc from ABE to SABE nor between methods (say, from ABEL to the more permissive RSABE).↩︎ 9. $$\small{CV_\textrm{wR}=100\sqrt{\exp(0.294^2)-1}=30.04689\ldots\%}$$↩︎ 10. Health Canada, TPD. Notice: Policy on Bioequivalence Standards for Highly Variable Drug Products. File number 16-104293-140. Ottawa. April 18, 2016. Online.↩︎ 11. Tóthfalusi L, Endrényi L. Sample Sizes for Designing Bioequivalence Studies for Highly Variable Drugs. J Pharm Pharmaceut Sci. 2012; 15(1): 73–84. doi:10.18433/J3Z88F. Open Access.↩︎ 12. Benet L. Why Highly Variable Drugs are Safer. Presentation at the FDA Advisory Committee for Pharmaceutical Science. Rockville. 06 October, 2006. Internet Archive.↩︎ 13. FDA, OGD. Draft Guidance on Progesterone. Rockville. Recommended April 2010, Revised February 2011. Download.↩︎ 14. FDA, CDER. Draft Guidance. Bioequivalence Studies With Pharmacokinetic Endpoints for Drugs Submitted Under an ANDA. Rockville. August 2021. Download.↩︎ 15. Fuglsang A. Mitigation of the convergence issues associated with semi-replicated bioequivalence data. Pharm Stat. 2021; 20(6): 1232–4. doi:10.1002/pst.2142.↩︎ 16. EMA. EMA/582648/2016. Annex I. Online.↩︎ 17. FDA, CDER. Guidance for Industry. Statistical Approaches to Establishing Bioequivalence. Rockville. January 2001. Download.↩︎ 18. SAS Institute Inc.
SAS® 9.4 and SAS® Viya® 3.3 Programming Documentation. The MIXED Procedure. February 13, 2019. Online.↩︎ 19. Balaam LN. A Two-Period Design with t² Experimental Units. Biometrics. 1968; 24(1): 61–73. doi:10.2307/2528460.↩︎ 20. Chow SC, Shao J, Wang H. Individual bioequivalence testing under 2×3 designs. Stat Med. 2002; 21(5): 629–48. doi:10.1002/sim.1056.↩︎ 21. Aka the ‘extra-reference’ design. It should be avoided because it is biased in the presence of period effects (T is not administered in the 3rd period).↩︎ 24. Certara USA, Inc., Princeton, NJ. Phoenix® WinNonlin® version 8.1. 2018.↩︎ 25. Health Canada, TPD. Guidance Document: Conduct and Analysis of Comparative Bioavailability Studies. Section 2.7.4.2 Model Fitting. Cat:H13‐9/6‐2018E. Ottawa. 2018/06/08. Online.↩︎ 26. EMA, CHMP. Questions & Answers: positions on specific questions addressed to the Pharmacokinetics Working Party (PKWP). EMA/618604/2008 Rev. 13. London. 19 November 2015. Online.↩︎ 27. FDA, CDER. Guidance for Industry. Controlled Correspondence Related to Generic Drug Development. Silver Spring. December 2020. Download.↩︎ 28. A member of the BEBA-Forum faced two partial replicate studies where the software failed to converge (SAS and Phoenix WinNonlin). Luckily, these were just pilot studies. He sent letters to the FDA asking for a clarification. He never received an answer.↩︎ 29. In case of a ‘poor’ bioanalytical method requiring a large sample volume: Since the total blood sampling volume is generally limited with the one of a blood donation, one may opt for a 3-period full replicate or has to measure HCT prior to administration in higher periods and – for safety reasons – exclude subjects if their HCT is too high.↩︎ 30. The T/R-ratio in a particular subject differs from other subjects showing a ‘normal’ response. A concordant outlier will show deviant responses for both T and R. That’s not relevant in crossover designs.↩︎ 31.
If the study was planned for ABE and fails due to lacking power ($$\small{CV}$$ higher than assumed and $$\small{CV_\textrm{wR}>30\%}$$), and reference-scaling would be acceptable (no safety/efficacy issues with the expanded limits), one has already estimates of $$\small{CV_\textrm{wR}}$$ and $$\small{CV_\textrm{wT}}$$ and is able to design the next study properly.↩︎ 32. The $$\small{CV_\textrm{wR}}$$ has to be recalculated after exclusion of the outlier(s), leading to less expansion of the limits. Nevertheless, the outlying subject(s) has/have to be kept in the data set for calculating the 90% confidence interval. However, that contradicts the principle »The data from all treated subjects should be treated equally« stated in the guideline.↩︎ 33. Also two or three washout phases instead of one. Once we faced a case when during a washout a volunteer was bitten by a dog. Since he had to visit a hospital to get his wound sutured, according to the protocol it was rated as a – not drug-related – SAE and we had to exclude him from the study. Shit happens.↩︎ 34. EMA. Questions & Answers: positions on specific questions addressed to the Pharmacokinetics Working Party (PKWP). EMA/618604/2008. London. June 2015 (Rev. 12 and later). Online.↩︎ 35. EMA, CHMP. Guideline on the Investigation of Bioequivalence. CPMP/EWP/QWP/1401/98 Rev. 1/ Corr **. London. 20 January 2010. Online.↩︎ 36. Schütz H. The almighty oracle has spoken! BEBA Forum. RSABE /ABEL. 2015-07-23↩︎ 37. Labes D, Schütz H. Inflation of Type I Error in the Evaluation of Scaled Average Bioequivalence, and a Method for its Control. Pharm Res. 2016; 33(11): 2805–14. doi:10.1007/s11095-016-2006-1.↩︎ 38. EMA. EMA/582648/2016. Annex II. London. 21 September 2016. Online.↩︎ 39. Maurer W, Jones B, Chen Y. Controlling the type 1 error rate in two-stage sequential designs when testing for average bioequivalence. Stat Med. 2018; 37(10): 1587–1607. doi:10.1002/sim.7614.↩︎ 40.
Step sizes of $$\small{n_1}$$ 2 in full replicate designs and 3 in the partial replicate design; step size of $$\small{CV_\textrm{wR}}$$ 2%.↩︎ 41. I know one large generic player’s rule for pilot studies of HVD(P)s: The minimum sample size is 24 in a four-period full replicate design. I have seen pilot studies with 80 subjects.↩︎ 42. Before the portmanteau word ‘bioavailability’ (of ‘biological’ and ‘availability’) was coined by Lindenbaum et al. in 1971. Try to search for earlier papers with the keyword ‘bioavailability’. You will be surprised.↩︎ 43. Lindenbaum J, Mellow MH, Blackstone MO, Butler VP. Variation in Biologic Availability of Digoxin from Four Preparations. N Engl J Med. 1971; 285: 1344–7. doi:10.1056/nejm19711209285240.↩︎ 44. Wagner JG. Method of Estimating Relative Absorption of a Drug in a Series of Clinical Studies in Which Blood Levels Are Measured After Single and/or Multiple Doses. J Pharm Sci. 1967; 56(5): 652–3. doi:10.1002/jps.2600560527.↩︎ 45. Upton RA, Sansom L, Guentert TW, Powell JR, Thiercellin J-F, Shah VP, Coates PE, Riegelman S. Evaluation of the Absorption from 15 Commercial Theophylline Products Indicating Deficiencies in Currently Applied Bioavailability Criteria. J Pharmacokin Biopharm. 1980; 8(3): 229–42. doi:10.1007/BF01059644.↩︎ 46. Study D, Treatment II of Upton et al.: Slophyllin aqueous syrup (Dooner Laboratories), 80 mg theophylline per 15-mL dose, 60 mL ≈ 320 mg administered, twelve subjects, sampling: 0, 0.25, 0.5, 1, 2, 3.5, 5, 7, 9, 12, 24 hours. Linear-up / log-down trapezoidal rule, extrapolation based on $$\small{\widehat{C}_\textrm{last}}$$ [sic].↩︎ 47. Abdallah HY. An area correction method to reduce intrasubject variability in bioequivalence studies. J Pharm Pharmaceut Sci. 1998; 1(2): 60–5. Open Access.↩︎ 48. Lucas AJ, Ogungbenro K, Yang S, Aarons L, Chen C.
Evaluation of area under the concentration curve adjusted by the terminal-phase as a metric to reduce the impact of variability in bioequivalence testing. Br J Clin Pharmacol. 2022; 88(2): 619–27. doi:10.1111/bcp.14986.↩︎
https://mathematica.stackexchange.com/questions/95677/running-a-script-that-needs-a-package
# Running a script that needs a package

I am running a resource-intensive notebook, so I am trying to break it down into a .m file so that I can run it quicker and without using the Mathematica interface. The problem is that my notebook needs another package (installed in the usual ~/.Mathematica/Applications/OtherPackage) and I cannot make it load from inside my .m. The notebook normally calls the other package as

<< OtherPackage

But if I add this to the beginning of the script and run

$ math -script script.m

the output goes bananas. On the other hand, if I start $ math and then run << OtherPackage it loads correctly, and if I later perform << script.m it behaves normally and runs as expected. My question: is there a way to make my script run in a single go from the command line without having to go through the text-based interface?

• Greetings! Make the most of Mma.SE and take the tour now. Help us to help you, write an excellent question. Edit if improvable, show due diligence, give brief context, include minimum working examples of code and data in formatted form. As you receive give back, vote and answer questions, keep the site useful, be kind, correct mistakes and share what you have learned. – rhermans Sep 28 '15 at 14:30

• Can you be more specific than "goes bananas"? Do you mean it doesn't load the package and you get errors from unevaluated expressions? Try using Get["OtherPackage"] instead of <<. – ZachB Oct 1 '15 at 1:37

• I used Get["OtherPackage"] but the same thing happens. It's hard to explain what happens: there is an initial output confirming "OtherPackage" was loaded (it seems so), but then it processes the rest of the script faultily and with general stops, complaining about non-invertible matrices and arrays with unequal length, etc., which I would interpret as "OtherPackage" not being successfully loaded… – romanovzky Oct 1 '15 at 9:37
https://superuser.com/questions/646559/virtualbox-and-ssds-trim-command-support
# VirtualBox and SSD's TRIM command support

I am aware of the huge number of posts on the internet saying that this would not work and why, and I really spent days looking for solutions months ago, but yesterday I found some tips on how to "enable TRIM command support" for guest machines. I've tried it and "it looks" like it's working. What I would like to know is where the catch is, or whether this is really working as it should. My exact command attaching the disk file:

VBoxManage storageattach "GuestOsMachineName" --storagectl "SATA" --port 1 --device 0 --nonrotational on --discard on --medium "C:\path\to\file.vdi" --type hdd

which generated this entry in the machine's *.vbox file:

<AttachedDevice nonrotational="true" discard="true" type="HardDisk" port="1" device="0"> </AttachedDevice>

To be sure I would not lose any data, this drive was the second one attached to the machine. I made simple tests like copying some file to the drive, leaving it, restarting the machine, shutting down the machine, checking if it's there after booting back, and looking at the disk file usage in the host OS. Results are:

• a disk file attached without the options --nonrotational and --discard keeps its (dynamic) size even after deleting files in the guest OS
• a disk file attached with both options mentioned above releases the space after the data was deleted

Now here are my questions:

• what exactly does the --discard option do? It's not described in the VirtualBox manual (http://www.virtualbox.org/manual/ch08.html#vboxmanage-storageattach)
• is it really passing TRIM down to the host OS, or does it just look like it?

• Virtual TRIM used on a virtual machine interfaced to a virtual disk for virtual feelgood... – Fiasco Labs Sep 17 '13 at 19:12

• Ramhound: So what is the "Solid-state drive" checkbox option in the storage submenu for? Besides, if there is an option like --discard mentioned in the manual, then it should be detailed.
I completely don't get your point saying that "there is a reason why it's not described". If so, why is it in the manual at all? – Krzysztof Szynter Sep 19 '13 at 6:32

• To answer @Ramhound, my blog is one of the posts the OP listed. I'm not sure what his reason was, but for me, I had a Virtual Machine where I needed to physically shrink the filesize of the dynamically allocated disk. It was a disk that had held data that was deleted and I was trying to shrink it back down to a smaller size -- passing the TRIM command enabled that to happen... shrinking my virtual disk from 12G to 7G. To the OP, I hope my post helped you. I got here by seeing incoming traffic on my blog. – Jayson Rowe Jan 7 '14 at 3:03

• Just a warning for anybody interested in the topic. The trimming implementation in the VirtualBox disk image emulator is extremely buggy and will likely crash your VM. There's a 2-year-old bug opened for it. It's possible to enable it but don't waste time trying it. – Dominik SMogor May 10 '19 at 9:15

• I think this is the bug Dominik is referring to: virtualbox.org/ticket/16450 – bobpaul Jul 9 '19 at 16:01

The --discard option specifies that the VDI image will be shrunk in response to the TRIM command from the guest OS. The following requirements must be met:

• the disk format must be VDI
• the cleared area must be at least 1 MB (size)
• [probably] the cleared area must cover one or more 1 MB blocks (alignment)

Obviously the guest OS must be configured to issue the TRIM command; typically that means the guest OS is made to think the disk is an SSD. Ext4 supports the -o discard mount flag; OS X probably requires additional settings, as by default only Apple-supplied SSDs are issued this command. Windows ought to automatically detect and support SSDs, at least in versions 7 and 8; I am not clear whether detection happens at install or run time. The Linux exFAT driver (courtesy of Samsung) supports the discard command.
It is not clear whether Microsoft's implementation of exFAT supports the same, even though the file system was designed for flash to begin with. Alternatively, there are ad hoc methods to issue TRIM, e.g. the Linux fstrim command, part of the util-linux package. Earlier solutions required the user to zero out unused areas, e.g. using zerofree, and compact the disk explicitly (I'm assuming that's only possible when the VM is offline).

• Also, using some sort of de-duplication thing on btrfs (particularly one that punches out holes for 0 regions) and btrfs balance really helps with creating as many trimmable regions as possible. – Omnifarious Jan 11 '17 at 23:41

Since this is the top result on Google, let me clarify the other answers a bit, even though this is an old post. It is in fact possible to get TRIM working, in the sense that unused virtual blocks on the guest filesystem can have the corresponding physical blocks of flash marked as unused for better utilization of the flash. The pieces are even already present in the other answers and comments.

First, the host must be set up so that free space is TRIM'ed. You can either mount the filesystem with -o discard, or you can run fstrim on the filesystem regularly through cron. I prefer the latter, as the first option can lead to the system locking up when deleting many files at one time.

The disk format used must be VDI with dynamic size, as qarma writes. Make sure that nonrotational="true" discard="true" are set in the .vbox file as described in the OP. Then enable TRIM in the guest OS as normal. In Linux, I again recommend a cron job running fstrim. This is probably even more important here, since the cost of doing TRIM on the virtual disk image is much higher than on a physical SSD, because data is moved around in order to make the image smaller.

Now, since the disk image is regularly compacted, it will only take up the actual space used, plus some 1 MB block-size overhead, as qarma writes. This again means that the free space will be TRIM'ed on the host SSD.
• Actually I killed some of my VMs using the trim command. – davidbaumann Mar 16 '18 at 21:05
• @davidbaumann How did that happen? – davidtgq May 8 '18 at 16:08
• Actually, after enabling it, it started to trim about 20GB. Exactly at this moment, the laptop crashed (I had some problems with my GPU in those days). forums.virtualbox.org/viewtopic.php?f=2&t=75308 – davidbaumann May 8 '18 at 19:34
• Command to do this to the existing first disk (assuming SATA): VBoxManage storageattach $VM --storagectl "SATA Controller" --port 0 --device 0 --nonrotational on --discard on – RobM Jul 9 '19 at 16:10
• The VirtualBox virtual disk is "regularly compacted"? Are you saying it will dynamically move stuff around and shrink the file when there is free space? – xpusostomos May 19 at 23:55
https://www.palmsens.com/knowledgebase-article/the-cottrell-experiment-and-diffusion-limitation-double-layer/
# The Cottrell Experiment and Diffusion Limitation 3/3 – Electrochemical Double Layer

This chapter is the final chapter of the series 'The Cottrell experiment and diffusion limitation'. In this chapter the electrochemical double layer and its features are discussed.

## Electrochemical double layer

As soon as an electrode surface is charged, due to a potentiostat or its Nernst potential, an electric field is created. Charged particles move in this field. Ions with the opposite charge to the electrode's accumulate directly at the electrode, forming a layer of ions. Assuming that the electrode is positively charged, anions will accumulate. Attracted by these anions, cations form another, looser layer on top of the first one. The first layer is known as the outer Helmholtz plane; the charges accumulating in the metal surface are the inner Helmholtz plane (see Figure 2.1). The layer of ions and the electrode act like a capacitor, and this has an impact on most electrochemical techniques. This is only a rudimentary description of the electrochemical double layer; there are more sophisticated models, but for many electrochemical experiments the simple model will suffice.

Figure 2.1 | Scheme of the electrochemical double layer

## Migration

Up to now it was assumed that the electrochemically active species Red is transported by diffusion or convection only, but there is one more mode of mass transport: migration. Migration is mass transport due to an electric field. If Red is negatively charged, the positive potential of the electrode will attract the Red ions. Why are complete models and observations done without taking migration into consideration? Usually an electrochemical measurement is done in a well-conducting solution. Since the current flows from the working to the counter electrode through the solution, a high solution resistance will make the measured current smaller and increase the Ohmic drop (i.e.
the difference between the potential applied between the reference electrode and the working electrode and the potential that the working electrode actually experiences; see also Ohmic drop). To reduce the resistance of a solution, an electrochemically inert supporting electrolyte is added. Often the buffer itself in a pH-buffered solution is sufficient, or a salt with a high solubility, for example KCl, NaCl, Na2SO4 or NaNO3, is added. If the supporting electrolyte has a high concentration compared to the investigated species, in this example Red, the electric field will be compensated by the ions of the supporting electrolyte and almost only these will migrate. A rule of thumb is that the supporting electrolyte should have a hundred times higher concentration. Since this effect is suppressed so easily, migration is often negligible.

Another effect is the capacitive charging current, or capacitive current for short. As mentioned, the electrochemical double layer acts as a capacitor. Capacitors store charge. A simple capacitor is the plate capacitor. It comprises two conducting parallel plates that are not in contact with each other. If a power source is connected to the plates, a current flows that decays exponentially until it is insignificant. A current flows because one plate is charged negatively and the other positively. The separation of charges means a current flows. At some point the plates cannot store more charge and the current stops flowing. The current decays over time according to Equation 2.1:

I(t) = I0 · e^(−t/(RC)), with I0 = EC/R    (Equation 2.1)

Here EC is the charging potential or voltage, I0 is the starting current, R is the resistance of the circuit around the capacitor, t the time and C the capacity of the capacitor. The capacity is a property of the capacitor and is defined as the charge Q that can be stored per applied potential E, or as an equation:

C = Q / E    (Equation 2.2)

Usually U is used for voltage, but since these equations need to be transferred to electrochemical experiments, it is useful to start with the potential E instead of the voltage U.
These two are not synonymous, but in this context it is fine to exchange them.

## Properties of the electrochemical double layer

If it is assumed that the electrochemical double layer behaves exactly like a plate capacitor, the two equations 2.1 and 2.2 show three important facts:

1. The capacitive current decays exponentially with the time t. The higher the resistance R and the capacity C are, the slower it decays. The product of resistance R and capacity C is often called the time constant τ.
2. The charge Q that can be stored is proportional to the applied potential. Every time the charge Q that can be stored changes, a current I flows until the charge Q is adjusted. The charge Q that can be stored changes if the potential E is changing. This is expressed in the equation:

I = C · dE/dt    (Equation 2.3)

3. Equation 2.2 shows implicitly, and Equation 2.3 explicitly, that the higher the capacity C is, the more capacitive current will flow if the potential changes.

Usually electrochemists are interested in the Faraday current, that is, the current caused by an electrochemical reaction; the capacitive current, caused by physics, is an unwanted side effect (see also Capacitive current). What does this mean for measurements? If the potential of the electrode is changed, for example during a potential step, a current will flow that has no chemical but only a physical meaning. This current decays exponentially with t, while the Faraday current decays with 1/√t (the Cottrell equation). This means that the capacitive current decays much faster than the Faraday current (see Figure 2.2). The higher the capacity C, the higher the capacitive current. The capacity C of a plate capacitor can be calculated with Equation 2.4:

C = ε0 · εr · A / d    (Equation 2.4)

where ε0 is the electric field constant, εr is the relative permittivity of the medium between the plates, d is the distance between the two plates and A is the surface area of the plates. In conclusion, most factors influencing the capacity cannot be altered in an electrochemical experiment.
The constant ε0 cannot be changed. The distance d and the relative permittivity εr can only be changed by changing the solution, because d is defined by the distance between the inner and outer Helmholtz planes (see Figure 2.1). The area A is influenced by the surface roughness: the rougher a surface, the higher its area. If a reusable electrode is used, proper polishing that leads to a smooth surface can reduce the capacitive current drastically.

Figure 2.2 | Scheme of the capacitive and Faraday current through time

## The electrochemical double layer

The electrochemical double layer acts as a capacitor, and every change in the potential of the electrode will induce a capacitive charging current that is caused by physics, not by a chemical reaction. This current decays exponentially.
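As a quick numerical illustration of the summary above, the sketch below compares the exponential capacitive decay of Equation 2.1 with a Cottrell-type 1/√t decay. All component values (EC, R, C and the Faraday prefactor k) are made-up example numbers, not values from the text:

```python
import numpy as np

# Example values (assumed for illustration): 1 V step, 100 ohm, 100 uF
E_C, R, C = 1.0, 100.0, 1e-4
tau = R * C                      # time constant R*C = 10 ms

def i_capacitive(t):
    # Equation 2.1: I(t) = (E_C / R) * exp(-t / (R*C))
    return (E_C / R) * np.exp(-t / tau)

def i_faraday(t, k=1e-3):
    # Cottrell-type decay: proportional to 1/sqrt(t) (prefactor k assumed)
    return k / np.sqrt(t)

# Over the same interval the exponential falls far faster than 1/sqrt(t),
# which is why the capacitive current dies away first after a potential step.
decay_cap = i_capacitive(0.1) / i_capacitive(0.001)
decay_far = i_faraday(0.1) / i_faraday(0.001)
```

Comparing the two ratios over the interval from 1 ms to 100 ms shows the capacitive current shrinking by several orders of magnitude more than the Faraday current, matching Figure 2.2.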
https://brilliant.org/problems/double-repeated-functions/
# Double repeated functions

Algebra Level 2

Let $$f(x) = 2x-6$$. Find the value of $$x$$ for which $$f^2(x) = 2$$.

Note: $$f^2(x) = f(f(x))$$, NOT $$(f(x))^2$$.
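A quick check of the composition (worked out here; the solution is not part of the original problem statement): f(f(x)) = 2(2x − 6) − 6 = 4x − 18, so f²(x) = 2 gives x = 5.

```python
def f(x):
    # the given function
    return 2 * x - 6

# f(f(x)) = 4x - 18, so solve 4x - 18 = 2
x = (2 + 18) / 4
```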
http://ds-projects.chrispyles.io/notebooks/insurance/insurance
# Insurance Data

This Jupyter Notebook takes an insurance data set from Kaggle and looks into the relationships between the different parameters given. First, we check for a correlation (a linear relationship) between BMI and insurance charges, and then we use an A/B test to see whether being a smoker influences what you're charged by insurance companies.

import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# plt.style.use('ggplot')

# the load cell was lost in extraction; the CSV filename is assumed
insurance = pd.read_csv('insurance.csv')
insurance.head()

   age     sex     bmi  children smoker     region      charges
0   19  female  27.900         0    yes  southwest  16884.92400
1   18    male  33.770         1     no  southeast   1725.55230
2   28    male  33.000         3     no  southeast   4449.46200
3   33    male  22.705         0     no  northwest  21984.47061
4   32    male  28.880         0     no  northwest   3866.85520

## Data Exploration

The best place to start with any data-oriented project is to figure out how the data look. To this end, we take a look at the distributions of the different data in the insurance DataFrame. The figure created below has a histogram or bar chart for each column to show the counts of the data contained therein.

plt.figure(figsize=[20, 20])

plt.subplot(331)
sns.boxplot(y='age', data=insurance)
plt.title('Boxplot of Ages')
plt.ylabel('Age')

ins_by_sex = insurance.groupby('sex').count()
plt.subplot(332)
sns.barplot(ins_by_sex.index, ins_by_sex['age'])
plt.title('Counts of Sexes')
plt.xlabel('Sex')
plt.ylabel('Count')

plt.subplot(333)
sns.distplot(insurance['bmi'], bins=np.arange(15.5, 53.5, 1))
plt.title('Histogram of BMIs, bin width = 1')
plt.xlabel('BMI')
plt.ylabel('Count')

plt.subplot(334)
sns.distplot(insurance['children'], bins=np.arange(-.5, 6.5, 1), kde=False)
plt.title('Histogram of No. of Children')
plt.xlabel('No. of Children')
plt.ylabel('Count')
plt.xlim([-.5, 5.5])

ins_by_smoker = insurance.groupby('smoker').count()
plt.subplot(335)
sns.barplot(ins_by_smoker.index, ins_by_smoker['age'])
plt.title('Counts of Smokers and Non-Smokers')
plt.xlabel('Smoker?')
plt.ylabel('Count')

ins_by_region = insurance.groupby('region').count()
plt.subplot(336)
sns.barplot(ins_by_region.index, ins_by_region['age'])
plt.title('Counts of Regions')
plt.xlabel('Region')
plt.ylabel('Count')

plt.subplot(338)
sns.distplot(insurance['charges'])
plt.title('Histogram of Charges, bin width = $2,000')
plt.xlabel('Charges ($)')
plt.ylabel('Count')

plt.suptitle('Data Exploration', y=.92, fontsize=24);

## Is there a correlation between BMI and insurance charges?

Correlation is calculated by taking two arrays of data, putting them in standard units, multiplying the coordinates elementwise, and then finding the mean (all of this is defined in the correlation function below). The value of $r$, hereafter referred to as the correlation, ranges from -1 to 1; a value near 1 indicates a positive linear relationship (i.e. a line with a positive slope), near -1 indicates a negative linear relationship, and near 0 indicates little/no linear relationship. Accompanying the calculation of $r$ is a scatter plot and a joint density plot of the data, with BMI on the $x$-axis and insurance charges on the $y$-axis.

# note: the extracted cell read (x - np.std(x))/np.mean(x);
# standard units are (x - mean)/std, so that is corrected here
standard_units = lambda x: (x - np.mean(x)) / np.std(x)
correlation = lambda x, y: np.mean(standard_units(x) * standard_units(y))
slope = lambda x, y: correlation(x, y) * np.std(y) / np.std(x)
intercept = lambda x, y: np.mean(y) - slope(x, y) * np.mean(x)

plt.figure(figsize=[12, 5])
plt.suptitle(r'Insurance Charges vs. BMI, $r$ = ' + str(np.round(correlation(insurance['bmi'], insurance['charges']), 3)))
plt.subplot(121)
sns.scatterplot('bmi', 'charges', data=insurance)
plt.subplot(122)
sns.kdeplot(insurance['bmi'], insurance['charges']);

Based on the fact that the value of $r$ was around 0.1, there might be a small correlation between BMI and insurance charges, but the relationship is not as strong as it could be among other data points. There doesn't seem to be such a huge correlation between BMI and insurance charges, but it kind of looks like there might be something (very) loosely positive there. In order to check, we see if there is perhaps a more linear relationship between BMI and the insurance charges on a log scale:

insurance['log10_charges'] = np.log10(insurance['charges'])
sns.jointplot('bmi', 'log10_charges', data=insurance, kind='kde')
plt.suptitle(r'Log10 of Insurance Charges vs. BMI, $r$ = ' + str(np.round(correlation(insurance['bmi'], np.log10(insurance['charges'])), 3)))

### Conclusion

As it turns out, although the correlation is .7, there really doesn't look to be much there. So while there may be some linear relationship between BMI and insurance charges (or its log), it's not too apparent.

## Does being a smoker affect what you're charged by the insurance company?

Null Hypothesis: Being a smoker does not affect your charges; any differences in the observed values are due to random chance.

Alternative Hypothesis: Being a smoker does affect what you are charged in insurance premiums.

This question, from a data science perspective, is asking whether or not the charges for the groups smoker and non-smoker come from the same underlying distribution. To find this out, we use an A/B test, which involves shuffling up the data in question in a sample without replacement, computing a test statistic, and finding the p-value.
By convention, if the p-value is less than .05 (meaning fewer than 5% of the simulated data sets point in the same direction as the original data set), we lean in the direction of the alternative hypothesis. In this A/B test, the test statistic will be the absolute difference between the mean charges for smokers and non-smokers.

Before doing the actual permutation test, we will run through a single permutation to demonstrate the process that will eventually be done thousands of times to obtain a p-value. The first part of the permutation test is to shuffle up the charges column of the insurance DataFrame and create a new column with the shuffled-up charges. It is from this permuting of the sample that permutation tests get their name.

smoker_and_charges = insurance[['smoker', 'charges']]
charges_shuffled = list(smoker_and_charges.sample(frac=1)['charges'])
shuffled_charges_df = smoker_and_charges.assign(shuffled_charges=charges_shuffled)

  smoker      charges  shuffled_charges
0    yes  16884.92400         6600.3610
1     no   1725.55230        12231.6136
2     no   4449.46200         3484.3310
3     no  21984.47061         8442.6670
4     no   3866.85520        42983.4585

The next part of the test is to compute the value of the test statistic, which is the absolute difference between the mean charges for smokers and non-smokers. (A high value of this statistic points in the direction of the alternative hypothesis.) To this end, we define the function ts, which takes a DataFrame and a column name as its arguments and returns the absolute difference of the mean value of col_name after df has been grouped by the column smoker.

def ts(df, col_name):
    df_grouped = df.groupby('smoker').mean()
    return abs(df_grouped[col_name].iloc[0] - df_grouped[col_name].iloc[1])

# computing the test statistic on the table shuffled_charges_df
test_stat_1 = ts(shuffled_charges_df, 'shuffled_charges')
test_stat_1

668.2917138336725

Finally, we are ready for the permutation test.
The function perm_test below takes a DataFrame and the number of replications, reps, as its arguments. For each replication, it permutes df as we did above and computes the value of the test statistic, collecting them in the list stats. After collecting these values, it computes the test statistic for the original data and returns a p-value by taking the proportion of test statistics that are greater than or equal to the observed value.

def perm_test(df, reps):
    stats = []
    for _ in np.arange(reps):
        charges_shuffled = list(df.sample(frac=1)['charges'])
        df = df.assign(shuffled_charges=charges_shuffled)
        stat = ts(df, 'shuffled_charges')
        stats += [stat]
    observed_ts = ts(df, 'charges')
    # stats is a plain list, so convert to an array before the comparison
    return np.count_nonzero(np.array(stats) >= observed_ts) / len(stats)

# run the permutation test with 100,000 repetitions
perm_test(smoker_and_charges, 100000)

0.0

### Conclusion

Because the p-value is 0, we know that none of the shuffled sets were as far or farther in the direction of the alternative hypothesis than the original data set; this means that in all likelihood, the observed differences are not due to random chance. Thus, we lean in the direction of the alternative hypothesis: that being a smoker affects what you're charged by insurance companies. Conventional wisdom, I know, but it is still nice to have it supported empirically.
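The same shuffle-and-compare logic can be sketched in a self-contained form on synthetic data (the group sizes, effect size and seed below are made up for illustration); with an obvious group difference, the permutation p-value comes out near zero:

```python
import numpy as np

rng = np.random.default_rng(0)

def perm_pvalue(values, is_smoker, reps=2000):
    # observed test statistic: absolute difference of group means
    observed = abs(values[is_smoker].mean() - values[~is_smoker].mean())
    count = 0
    for _ in range(reps):
        shuffled = rng.permutation(is_smoker)   # shuffle the labels
        stat = abs(values[shuffled].mean() - values[~shuffled].mean())
        if stat >= observed:
            count += 1
    return count / reps

# synthetic "charges": the second group is shifted well away from the first
charges = np.concatenate([rng.normal(10.0, 1.0, 50), rng.normal(20.0, 1.0, 50)])
is_smoker = np.array([False] * 50 + [True] * 50)
p_value = perm_pvalue(charges, is_smoker)   # essentially 0 for this data
```

Shuffling the labels rather than the values is equivalent to shuffling the charges column above: either way, the pairing between group membership and charge is broken.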
https://support.bioconductor.org/p/98442/
Volcanoplot with limma - RAW P-values or Adj.P-Values

2
0
Entering edit mode
tcalvo ▴ 70
@tcalvo-12466
Last seen 5 months ago
Brazil

I have noticed that limma's volcanoplot() function uses uncorrected p-values from the MArrayLM object. My question is: why? I've seen an old post where G. Smyth mentioned that the FDR-corrected p-values lose some info in comparison to the raw ones. Could someone elucidate this, please? Another reason pointed out by the author was that the same adj.p-value may correspond to different p-values.

Thanks!
Thyago

volcanoplot limma fdr
• 5.3k views

5
Entering edit mode
@gordon-smyth
Last seen 2 hours ago
WEHI, Melbourne, Australia

I'm not sure what I can tell you that I didn't already say in my earlier answer to a similar question: Volcano plot labeling troubles

You've already repeated in your question the reason why it is preferable to use the p-value as the y-axis rather than the FDR. (Actually I like the B-statistic even better, but that's another story.) The p-values are the basic values from which the FDR is computed, and it is typically better to plot basic data rather than derived quantities. Why does that not convince you? Why would you want to force points with different p-values together on the y-axis? Or are you asking for more explanation of why different p-values can lead to the same FDR? I think that has been answered separately.

Note that there is always a p-value cutoff that corresponds to any FDR cutoff, so you can easily indicate an FDR cutoff on the plot even if the y-axis is the p-value. So using FDR as the y-axis has no advantage that I can think of.

0
Entering edit mode

Thanks for your answer. I'm not questioning your decision in making it the way you did, though. I only asked because I wanted to know exactly why, since a lot of people often ask me this. Anyway, thank you again.

Regards,

3
Entering edit mode
@wolfgang-huber-3550
Last seen 9 days ago
EMBL European Molecular Biology Laborat…

There's another reason to support Gordon's view.
There is a fundamental difference between p-values and FDR: p-values are per-hypothesis (i.e., per-gene) properties, whereas FDR is an average across all rejected hypotheses. I.e., if you have a set of hypotheses (genes) rejected at a certain FDR $\alpha$, then the local fdr for some of these is less than $\alpha$, and for some, more than $\alpha$. The only thing you know is that the FDR overall is $\alpha$.

In general, there is no 1:1 relation between p-value and FDR. In the special case of the Benjamini-Hochberg method, such a 1:1 relation can be constructed (what's called the 'adjusted p-value'), but this assumes that the Benjamini-Hochberg method is used, with no modifications such as filtering, weighting, etc. This assumption has seemed so natural that often it has not even been questioned (hence the popularity of the 'adjusted p-value' terminology), but in fact it is not natural if there is heterogeneity between the tests, e.g., if we know that some tests have more power than others, or some have a higher prior probability of being null than others.

For these reasons, the p-value, and not the adjusted p-value, is the preferable quantity to use in a volcano plot.
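To make the many-to-one point from this thread concrete, here is a hedged sketch of the Benjamini-Hochberg adjustment (the toy p-values are invented). It mirrors the step-up calculation behind R's p.adjust(..., method = "BH") and shows three distinct raw p-values collapsing onto a single adjusted value, which is exactly the information loss a volcano plot on the adjusted scale would suffer:

```python
import numpy as np

def bh_adjust(p):
    # Benjamini-Hochberg adjusted p-values: p_(i) * n / i, made monotone
    p = np.asarray(p, dtype=float)
    n = len(p)
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity: running minimum from the largest p-value down
    adj = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.minimum(adj, 1.0)
    return out

p = np.array([0.01, 0.02, 0.03, 0.50])
adj = bh_adjust(p)
# the three smallest p-values, all different, share the adjusted value 0.04,
# so plotting adjusted values would force these points onto one y-coordinate
```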
https://zbmath.org/?q=an:07163265
zbMATH — the first resource for mathematics

Gaussian field on the symmetric group: prediction and learning. (English) Zbl 1443.60035

Summary: In the framework of the supervised learning of a real function defined on an abstract space $$\mathcal{X}$$, Gaussian processes are widely used. The Euclidean case for $$\mathcal{X}$$ is well known and has been widely studied. In this paper, we explore the less classical case where $$\mathcal{X}$$ is the non-commutative finite group of permutations (namely the so-called symmetric group $$S_N$$). We provide an application to Gaussian process based optimization of Latin Hypercube Designs. We also extend our results to the case of partial rankings.

MSC:
60G15 Gaussian processes
62M20 Inference from stochastic processes and prediction

Software: EGO
2021-06-15 16:42:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4965115785598755, "perplexity": 5233.495471090414}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621450.29/warc/CC-MAIN-20210615145601-20210615175601-00101.warc.gz"}
https://seekerwjk.wordpress.com/tag/it-education/
### Archive Posts Tagged ‘IT Education’

## Distributed Learning: A new model

The Geomblog

Communication is now the key to modelling distributed/multicore computations. Jim Demmel has been writing papers and giving talks on this theme for a while now, and as processors get faster, and the cloud becomes a standard computing platform, communication between nodes is turning out to be the major bottleneck. So suppose you want to learn in this setting? Suppose you have data sitting on different nodes (you have a data center, or a heterogeneous sensor network, and so on) and you’d like to learn something on the union of the data sets. You can’t afford to ship everything to a single server for processing: the data might be too large to store, and the time to ship might be prohibitive. So can you learn over the (implicit) union of all the data, with as little discussion among nodes as possible? This was the topic of my Shonan talk, as well as two papers that I’ve been working on with my student Avishek Saha, in collaboration with Jeff Phillips and Hal Daume. The first one will be presented at AISTATS this week, and the second was just posted to the arxiv. We started out with the simplest of learning problems: classification. Suppose you have data sitting on two nodes (A and B), and you wish to learn a hypothesis over the union of A and B. What you’d like is a way for the nodes to communicate as little as possible with each other while still generating a hypothesis close to the optimal solution. It’s not hard to see that you could compute an $\epsilon$-sample on A, and ship it over to B. By the usual properties of an $\epsilon$-sample, you guarantee that any classifier on B’s data combined with the sample will also classify A correctly to within some $\epsilon$-error. It’s also not too hard to show a lower bound that matches this upper bound. The amount of communication is nearly linear in $1/\epsilon$. But can you do better?
In fact yes, if you let the nodes talk to each other, rather than only allowing one-way communication. One way of gaining intuition for this is that $A$ can generate classifiers, and send them over to $B$, and $B$ can tell $A$ to turn the classifier left or right. Effectively, $B$ acts as an oracle for binary search. The hard part is showing that this is actually a decimation (in that a constant fraction of points are eliminated from consideration as support points in each step), and once we do that, we can show an exponential improvement over one-way communication. There’s a trivial way to extend this to more than 2 players, with a $k^2$ blow up in communication for $k$ players. This binary search intuition only works for points in 2D, because then the search space of classifiers is on the circle, which lends itself naturally to a binary search. In higher dimensions, we have to use what is essentially a generalization of binary search – the multiplicative weight update method. I’ll have more to say about this in a later post, but you can think of the MWU as a “confused zombie” binary search, in that you only sort of know “which way to go” when doing the search, and even then points that you dismissed earlier might rise from the dead. It takes a little more work to bring the overhead for k-players down to a factor k. This comes by selecting one node as a coordinator, and implementing one of the distributed continuous sampling techniques to pass data to the coordinator. You can read the paper for more details on the method. One thing to note is that the MWU can be “imported” from other methods that use it, which means that we get distributed algorithms for many optimization problems for free. This is great because a number of ML problems essentially reduce to some kind of optimization. 
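The one-way baseline described above can be made concrete with a toy sketch. Everything here is illustrative and invented for this setting: 1-D points, threshold classifiers, and a plain random sample standing in for a true $\epsilon$-sample; the function names (`best_threshold`, `one_way_protocol`) are mine, not from the papers.

```python
import random

def best_threshold(points):
    """Pick the 1-D threshold t minimizing errors of the rule `x >= t -> label 1`."""
    xs = sorted({x for x, _ in points})
    candidates = [xs[0] - 1.0] + xs  # finite candidate set: below all points, and at each point
    def errors(t):
        return sum((x >= t) != bool(y) for x, y in points)
    return min(candidates, key=errors)

def one_way_protocol(data_a, data_b, eps, seed=0):
    """Node A ships a sample of size ~1/eps to node B; B trains on its own data + the sample.

    A plain random sample stands in here for a true eps-sample.
    """
    rng = random.Random(seed)
    k = min(len(data_a), max(1, int(round(1 / eps))))
    sample = rng.sample(data_a, k)
    return best_threshold(sample + data_b)
```

On separable data, shipping roughly $1/\epsilon$ points from A is enough for B to output a threshold that classifies the full (implicit) union well, which is exactly the near-linear-in-$1/\epsilon$ one-way bound the post refers to.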
A second design template is multipass streaming: it’s fairly easy to see that any multipass sublinear streaming algorithm can be placed in the k-player distributed setting, and so if you want a distributed algorithm, design a multipass streaming algorithm first. One weakness of our algorithms was that we didn’t work in the “agnostic” case, where the optimal solution itself might not be a perfect classifier (or where the data isn’t separable, to view it differently). This can be fixed: in an arxiv upload made simultaneously with ours, Blum, Balcan, Fine and Mansour solve this problem very neatly, in addition to proving a number of PAC-learning results in this model. It’s nice to see different groups exploring this view of distributed learning. It shows that the model itself has legs. There are a number of problems that remain to be explored, and I’m hoping we can crack some of them. In all of this, the key is to get from a ‘near linear in error’ bound to a ‘logarithmic in error’ bound by replacing sampling with active sampling (or binary search).

## Online Education Venture Lures Cash Infusion and Deals With 5 Top Universities

ACM TechNews

SAN FRANCISCO — An interactive online learning system created by two Stanford computer scientists plans to announce Wednesday that it has secured $16 million in venture capital and partnerships with five major universities. The scientists, Andrew Ng and Daphne Koller, taught free Web-based courses through Stanford last year that reached more than 100,000 students. Now they have formed a company, Coursera, as a Web portal to distribute a broad array of interactive courses in the humanities, social sciences, physical sciences and engineering.
Besides Stanford and the University of California, Berkeley, where the venture has already been offering courses, the university partners include the University of Michigan, the University of Pennsylvania and Princeton. Although computer-assisted learning was pioneered at Stanford during the 1960s, and for-profit online schools like the University of Phoenix have been around for several decades, a new wave of interest in online education is taking shape. “When we offer a professor the opportunity to reach 100,000 students, they find it remarkably appealing,” Dr. Koller said. Last fall a course in artificial intelligence taught by Sebastian Thrun, then at Stanford, and Google’s director of research, Peter Norvig, attracted more than 160,000 students from 190 countries. The free course touched off an intense debate behind the scenes at Stanford, where annual tuition is $40,050. Ultimately, the 22,000 students who finished the course received “certificates of completion” rather than Stanford credit. And Dr. Thrun, who also directs Google’s X research lab, left his tenured position at Stanford and founded a private online school, Udacity. Coursera (pronounced COR-sayr-uh), based in Mountain View, Calif., intends to announce that it has received financial backing from two of Silicon Valley’s premier venture capital firms, Kleiner Perkins Caufield & Byers and New Enterprise Associates. The founders said they were not ready to announce a strategy for profitability, but noted that the investment gave them time to develop new ways to generate revenue. One of their main backers, the venture capitalist John Doerr, a Kleiner investment partner, said via e-mail that he saw a clear business model: “Yes. Even with free courses. From a community of millions of learners some should ‘opt in’ for valuable, premium services.
Those revenues should fund investment in tools, technology and royalties to faculty and universities.” Both founders said they were motivated by the potential of Internet technologies to reach hundreds of thousands of students rather than hundreds. “We decided the best way to change education was to use the technology we have developed during the past three years,” said Dr. Ng, who is an expert in machine learning. Previously he said he had been involved with Stanford’s effort to put academic lectures online for viewing. But he noted that there was evidence that the newer interactive systems provided much more effective learning experiences. He and Dr. Koller dismissed the idea that companies would “disintermediate” universities by spotting the brightest talents among students and hiring them directly. Coursera and Udacity are not alone in the rush to offer mostly free online educational alternatives. Start-up companies like Minerva and Udemy, and, separately, the Massachusetts Institute of Technology, have recently announced similar platforms. In December, M.I.T. said it was forming MITx under the leadership of L. Rafael Reif, the university’s provost, and the computer scientist Anant Agarwal. The program began offering its first course, on circuits and electronics, in March. As at Stanford, students receive a certificate of completion but not university credit. Unlike previous video lectures, which offered a “static” learning model, the Coursera system breaks lectures into segments as short as 10 minutes and offers quick online quizzes as part of each segment. Where essays are required, especially in the humanities and social sciences, the system relies on the students themselves to grade their fellow students’ work, in effect turning them into teaching assistants. Dr. Koller said that this would actually improve the learning experience. The Coursera system also offers an online feature that allows students to get support from a global student community. Dr. 
Ng said an early test of the system found that questions were typically answered within 22 minutes. He acknowledged that there was still no technological fix for cheating, and said the courses relied on an honor system. Dr. Koller said the educational approach was similar to that of the “flipped classroom,” pioneered by the Khan Academy, a creation of the educator Salman Khan. Students watch lectures at home and then work on problem-solving or “homework” in the classroom, either one-on-one with the teacher or in small groups. Dr. Ng said he had already vastly extended his reach by using the Internet as a teaching platform. He cited one student who had been in danger of losing his job at a large telecommunications firm; after he took the online course, he improved so much he was given responsibility for a significant development project. And a programmer at the Fukushima nuclear power plant in Japan was able to immediately apply machine-learning algorithms to the crisis that followed the earthquake and tsunami last year. A version of this article appeared in print on April 18, 2012, on page B4 of the New York edition with the headline: Online Education Venture Lures Cash Infusion and Deals With 5 Top Universities.
http://mathoverflow.net/feeds/question/18636
Number of invertible {0,1} real matrices? (MathOverflow)

**Question** (Tony Huynh): This question is inspired from [an earlier question](http://mathoverflow.net/questions/18547/number-of-unique-determinants-for-an-nxn-0-1-matrix), where it was asked what possible determinants an $n \times n$ matrix with entries in {0,1} can have over $\mathbb{R}$.

My question is: how many such matrices have non-zero determinant?

If we instead view the matrix as over $\mathbb{F}_2$ instead of $\mathbb{R}$, then the answer is

$(2^n-1)(2^n-2)(2^n-2^2) \cdots (2^n-2^{n-1}).$

This formula generalizes to all finite fields $\mathbb{F}_q$, which leads us to the more general question of how many $n \times n$ matrices with entries in $\{0, \dots, q-1\}$ have non-zero determinant over $\mathbb{R}$?

**Answer** (Michael Lugo): See [Sloane, A046747](http://www.research.att.com/~njas/sequences/A046747) for the number of singular (0,1)-matrices. It doesn't seem like there's an exact formula, but it's conjectured that the probability that a random (0,1)-matrix is singular is asymptotic to $n^2/2^n$.

Over $\mathbb{F}_2$ the probability that a random matrix is nonsingular, as $n \to \infty$, approaches the product $(1/2)(3/4)(7/8)\cdots \approx 0.2887880951$, so the probability that a random large matrix is singular is only around 71 percent. I should note that a matrix is singular over $\mathbb{F}_2$ if its real determinant is even, so this tells us that determinants of 0-1 matrices are more likely to be even than odd.

**Answer** (Kevin P. Costello): As Michael noted, the probability that a random $(0,1)$ matrix is singular is conjectured to be $(1+o(1))\, n^2 2^{-n}$. This corresponds to the natural lower bound coming from the observation that if a matrix has two equal rows or columns it is automatically singular.

The best bound currently known for this problem is $(\frac{1}{\sqrt{2}} + o(1))^n$, and is due to [Bourgain, Vu, and Wood](http://arxiv.org/abs/0905.0461). Corollary 3.3 in their paper also gives a bound of $(\frac{1}{\sqrt{q}}+o(1))^n$ in the case where entries are uniformly chosen from $\{0, 1, \dots, q-1\}$ (here the conjectured bound would be around $n^2 q^{-n}$).

Even showing that the determinant is almost surely non-zero is not easy (this was first proven by Komlós in 1967; the reference is given in Michael's Sloane link).

**Answer** (Tony Huynh): Lurking around MO, I found a question which is related to the second part of my question. Namely, Greg Martin and Erick B. Wong prove that if the entries of an $n \times n$ matrix are chosen uniformly at random from the set $\{-k, -k+1, \dots, -1, 0, 1, \dots, k-1, k\}$, then the probability that the resulting matrix is singular is $\ll k^{-2+\epsilon}$.

See [this MO question](http://mathoverflow.net/questions/90591/singular-matrices-with-integer-entries) (from which the above paragraph is plagiarized) and also [here](http://www.math.ubc.ca/~gerg/papers/downloads/AAIMHNIE.pdf) for the link to the Martin–Wong paper.
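For small $n$, the count of invertible {0,1} matrices is easy to verify by brute force. The following standalone sketch (function names are mine) enumerates all $2^{n^2}$ matrices and tests the integer determinant via the Leibniz permutation-sum formula:

```python
from itertools import permutations, product

def det(m):
    """Integer determinant via the Leibniz (permutation-sum) formula."""
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        # parity of the permutation = number of inversions
        inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        term = (-1) ** inv
        for i in range(n):
            term *= m[i][perm[i]]
        total += term
    return total

def count_invertible(n):
    """Count n x n {0,1} matrices with non-zero determinant over the reals."""
    count = 0
    for bits in product((0, 1), repeat=n * n):
        m = [bits[i * n:(i + 1) * n] for i in range(n)]
        if det(m) != 0:
            count += 1
    return count
```

For $n = 1, 2, 3$ this gives 1, 6, and 174 invertible matrices, i.e. $2^{n^2}$ minus the singular counts 1, 10, 338 tabulated in the A046747 entry cited above. The enumeration is only practical for tiny $n$, which is exactly why the asymptotic bounds in the answers are of interest.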
https://eprint.iacr.org/2019/262
Revisiting Post-Quantum Fiat-Shamir

Qipeng Liu and Mark Zhandry

Abstract: The Fiat-Shamir transformation is a useful approach to building non-interactive arguments (of knowledge) in the random oracle model. Unfortunately, existing proof techniques are incapable of proving the security of Fiat-Shamir in the quantum setting. The problem stems from (1) the difficulty of quantum rewinding, and (2) the inability of current techniques to adaptively program random oracles in the quantum setting. In this work, we show how to overcome the limitations above in many settings. In particular, we give mild conditions under which Fiat-Shamir is secure in the quantum setting. As an application, we show that existing lattice signatures based on Fiat-Shamir are secure without any modifications.

Category: Foundations
Publication info: Preprint. Minor revision.
Keywords: quantum, Fiat-Shamir, signature, SIS
Contact author(s): qipengl @ cs princeton edu
Short URL: https://ia.cr/2019/262
License: CC BY

BibTeX:

@misc{cryptoeprint:2019/262,
  author = {Qipeng Liu and Mark Zhandry},
  title = {Revisiting Post-Quantum Fiat-Shamir},
  howpublished = {Cryptology ePrint Archive, Paper 2019/262},
  year = {2019},
  note = {\url{https://eprint.iacr.org/2019/262}},
  url = {https://eprint.iacr.org/2019/262}
}
https://1lab.dev/Algebra.Ring.html
```agda
open import Algebra.Semigroup
open import Algebra.Group.Ab
open import Algebra.Prelude
open import Algebra.Monoid
open import Algebra.Group

module Algebra.Ring where
```

# Rings🔗

The ring is one of the basic objects of study in algebra, which abstracts the best bits of the common algebraic structures: The integers $\bb{Z}$, the rationals $\bb{Q}$, the reals $\bb{R}$, and the complex numbers $\bb{C}$ are all rings, as are the collections of polynomials with coefficients in any of those. Less familiar examples of rings include square matrices (with values in a ring) and the integral cohomology ring of a topological space: that these are so far from being “number-like” indicates the incredible generality of rings.

A ring is an abelian group $R$ (which we call the additive group of $R$), together with the data of a monoid on $R$ (the multiplicative monoid), where the multiplication of the monoid distributes over the addition. We’ll see why this compatibility condition is required afterwards. Check out what it means for a triple $(1, *, +)$ to be a ring structure on a type:

```agda
record is-ring {ℓ} {R : Type ℓ} (1R : R) (_*_ _+_ : R → R → R) : Type ℓ where
  no-eta-equality
  field
    *-monoid   : is-monoid 1R _*_
    +-group    : is-group _+_
    +-commutes : ∀ {x y} → x + y ≡ y + x
    *-distribl : ∀ {x y z} → x * (y + z) ≡ (x * y) + (x * z)
    *-distribr : ∀ {x y z} → (y + z) * x ≡ (y * x) + (z * x)
```

There is a natural notion of ring homomorphism, which we get by smashing together that of a monoid homomorphism (for the multiplicative part) and of group homomorphism; every map of rings has an underlying map of groups which preserves the addition operation, and it must also preserve the multiplication. This encodes the view of a ring as an “abelian group with a monoid structure”.
```agda
record is-ring-hom {ℓ} (A B : Ring ℓ) (f : A .fst → B .fst) : Type ℓ where
  private
    module A = Ring-on (A .snd)
    module B = Ring-on (B .snd)
  field
    pres-id : f A.1R ≡ B.1R
    pres-+  : ∀ x y → f (x A.+ y) ≡ f x B.+ f y
    pres-*  : ∀ x y → f (x A.* y) ≡ f x B.* f y
```

It follows, by standard equational nonsense, that rings and ring homomorphisms form a precategory — for instance, we have $f(g(1_R)) = f(1_S) = 1_T$.

```agda
Rings : ∀ ℓ → Precategory (lsuc ℓ) ℓ
```

## In components🔗

We give a more elementary description of rings, which is suitable for constructing values of the record type Ring above. This re-expresses the data included in the definition of a ring with the least amount of redundancy possible, in the most direct terms possible: A ring is a set, equipped with two binary operations $*$ and $+$, such that $*$ distributes over $+$ on either side; $+$ is an abelian group; and $*$ is a monoid.

```agda
record make-ring {ℓ} (R : Type ℓ) : Type ℓ where
  no-eta-equality
  field
    ring-is-set : is-set R

    -- R is an abelian group:
    0R      : R
    _+_     : R → R → R
    -_      : R → R
    +-idl   : ∀ {x} → 0R + x ≡ x
    +-invr  : ∀ {x} → x + (- x) ≡ 0R
    +-assoc : ∀ {x y z} → (x + y) + z ≡ x + (y + z)
    +-comm  : ∀ {x y} → x + y ≡ y + x

    -- R is a monoid:
    1R      : R
    _*_     : R → R → R
    *-idl   : ∀ {x} → 1R * x ≡ x
    *-idr   : ∀ {x} → x * 1R ≡ x
    *-assoc : ∀ {x y z} → (x * y) * z ≡ x * (y * z)

    -- Multiplication is bilinear:
    *-distribl : ∀ {x y z} → x * (y + z) ≡ (x * y) + (x * z)
    *-distribr : ∀ {x y z} → (y + z) * x ≡ (y * x) + (z * x)
```

This data is missing (by design, actually!) one condition which we would expect: $0 \ne 1$. We exploit this to give our first example of a ring, the zero ring, which has carrier set the unit — the type with one object. Despite the name, the zero ring is not the zero object in the category of rings: it is the terminal object. In the category of rings, the initial object is the ring $\bb{Z}$, which is very far (infinitely far!) from having a single element.
It’s called the “zero ring” because it has one element $x$, which must be the additive identity, hence we call it $0$. But it’s also the multiplicative identity, so we might also call the ring $\{*\}$ the One Ring, which would be objectively cooler.

```agda
Zero-ring : Ring lzero
Zero-ring = from-make-ring {R = ⊤} $ record
  { ring-is-set = λ _ _ _ _ _ _ → tt
  ; 0R = tt
  ; _+_ = λ _ _ → tt
  ; -_ = λ _ → tt
  ; +-idl = λ _ → tt
  ; +-invr = λ _ → tt
  ; +-assoc = λ _ → tt
  ; +-comm = λ _ → tt
  ; 1R = tt
  ; _*_ = λ _ _ → tt
  ; *-idl = λ _ → tt
  ; *-idr = λ _ → tt
  ; *-assoc = λ _ → tt
  ; *-distribl = λ _ → tt
  ; *-distribr = λ _ → tt
  }
```
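As an aside on why the distributivity condition in the definition of a ring matters (a standard one-line consequence, added here for illustration): it forces the additive unit to annihilate every element, so the additive and multiplicative structures genuinely constrain each other:

```latex
\[
  0 \cdot x \;=\; (0 + 0) \cdot x \;=\; 0 \cdot x + 0 \cdot x
  \qquad\Longrightarrow\qquad
  0 \cdot x = 0,
\]
```

where the last step cancels $0 \cdot x$ on both sides, which is possible precisely because $(R, +)$ is a group.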
https://www.gradesaver.com/textbooks/science/physics/CLONE-afaf42be-9820-4186-8d76-e738423175bc/chapter-3-section-3-6-uniform-circular-motion-example-page-47/3-7
## Essential University Physics: Volume 1 (4th Edition) Clone

$1.54 \ \mathrm{h}$

Using the equation for the period in uniform circular motion, we find: $T = \sqrt{\frac{4\pi^2r}{a}}$ $T = \sqrt{\frac{4\pi^2(6.77\times10^6\ \mathrm{m})}{8.73\ \mathrm{m/s^2}}} = 5533\ \mathrm{s} = 1.54 \ \mathrm{h}$
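The arithmetic can be checked in a few lines (a quick sketch, not part of the published solution), using the relation $a = 4\pi^2 r/T^2$ for uniform circular motion:

```python
from math import pi, sqrt

# Uniform circular motion: a = 4*pi^2*r / T^2  =>  T = sqrt(4*pi^2*r / a)
r = 6.77e6   # orbital radius in metres (value from the worked example)
a = 8.73     # centripetal acceleration in m/s^2

T = sqrt(4 * pi**2 * r / a)   # period in seconds
hours = T / 3600.0
```

This evaluates to roughly 5533 s, or about 1.54 h.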
https://brickisland.net/DDGSpring2021/2021/02/22/reading-3-exterior-algebra-and-k-forms-due-3-2/
# Reading 3: Exterior Algebra and k-Forms (due 3/2) Your next reading assignment will help you review the concepts we’ve been discussing in class: describing “little volumes” or $k$-vectors using the wedge product and the Hodge star, and measuring these volumes using “dual” volumes called $k$-forms. These objects will ultimately let us integrate quantities over curved domains, which will also be our main tool for turning smooth equations from geometry and physics into discrete equations that we can actually solve on a computer. The reading is Chapter 4, “A Quick and Dirty Introduction to Exterior Calculus”, up through section 4.5.1 (pages 45–65). It will be due Tuesday, March 2 at 10am Eastern time. See the assignments page for handin instructions. Your next homework will give you some hands-on practice with differential forms; just take this time to get familiar with the basic concepts.
https://www.iac.es/en/science-and-technology/publications/discovery-faint-double-peak-ha-emission-halo-low-redshift-galaxies
# Discovery of Faint Double-peak Hα Emission in the Halo of Low Redshift Galaxies

Sánchez Almeida, J.; Calhau, J.; Muñoz-Tuñón, C.; González-Morán, A. L.; Rodríguez-Espinosa, J. M.

Bibliographical reference: The Astrophysical Journal. Advertised on: 8 2022

Description: Aimed at the detection of cosmological gas being accreted onto galaxies in the local universe, we examined the Hα emission in the halo of 164 galaxies in the field of view of the Multi-Unit Spectroscopic Explorer Wide survey (MUSE-Wide) with observable Hα (redshift < 0.42). An exhaustive screening of the corresponding Hα images led us to select 118 reliable Hα-emitting gas clouds. The signals are faint, with a surface brightness of ${10}^{-17.3\pm 0.3}\,\mathrm{erg}\,{{\rm{s}}}^{-1}\,{\mathrm{cm}}^{-2}\,{\mathrm{arcsec}}^{-2}$. Through statistical tests and other arguments, we ruled out that they are created by instrumental artifacts, telluric line residuals, or high-redshift interlopers. Around 38% of the time, the Hα line profile shows a double peak with the drop in intensity at the rest frame of the central galaxy, and with a typical peak-to-peak separation of the order of ±200 km s⁻¹. Most line-emission clumps are spatially unresolved. The mass of emitting gas is estimated to be between 1 and 10⁻³ times the stellar mass of the central galaxy. The signals are not isotropically distributed; their azimuth tends to be aligned with the major axis of the corresponding galaxy. The distances to the central galaxies are not random either. The counts drop at a distance > 50 galaxy radii, which roughly corresponds to the virial radius of the central galaxy. We explore several physical scenarios to explain this Hα emission, among which accretion disks around rogue intermediate-mass black holes fit the observations best.
https://stackoverflow.com/questions/6802577/rotation-of-3d-vector
# Rotation of 3D vector?

I have two vectors as Python lists and an angle. E.g.:

```python
v = [3, 5, 0]
axis = [4, 4, 1]
theta = 1.2  # radians
```

What is the best/easiest way to get the resulting vector when rotating the v vector around the axis? The rotation should appear to be counterclockwise for an observer to whom the axis vector is pointing. This is called the right-hand rule.

• I find it very surprising that there is no functionality for this in SciPy (or a similar easily accessible package); vector rotation isn't that exotic. Jul 23, 2011 at 19:51
• Now there is: scipy.spatial.transform.Rotation.from_rotvec – user Jul 10, 2019 at 16:12

Using the Euler–Rodrigues formula:

```python
import numpy as np
import math

def rotation_matrix(axis, theta):
    """
    Return the rotation matrix associated with counterclockwise rotation
    about the given axis by theta radians.
    """
    axis = np.asarray(axis)
    axis = axis / math.sqrt(np.dot(axis, axis))
    a = math.cos(theta / 2.0)
    b, c, d = -axis * math.sin(theta / 2.0)
    aa, bb, cc, dd = a * a, b * b, c * c, d * d
    bc, ad, ac, ab, bd, cd = b * c, a * d, a * c, a * b, b * d, c * d
    return np.array([[aa + bb - cc - dd, 2 * (bc + ad), 2 * (bd - ac)],
                     [2 * (bc - ad), aa + cc - bb - dd, 2 * (cd + ab)],
                     [2 * (bd + ac), 2 * (cd - ab), aa + dd - bb - cc]])

v = [3, 5, 0]
axis = [4, 4, 1]
theta = 1.2

print(np.dot(rotation_matrix(axis, theta), v))
# [ 2.74911638  4.77180932  1.91629719]
```

• @bougui: Using np.linalg.norm instead of np.sqrt(np.dot(...)) seemed like a nice improvement to me, but timeit tests showed np.sqrt(np.dot(...)) was 2.5x faster than np.linalg.norm, at least on my machine, so I'm sticking with np.sqrt(np.dot(...)). Oct 9, 2012 at 12:51
• sqrt from the Python math module is even faster on scalars. scipy.linalg.norm may be faster than np.linalg.norm; I've submitted a patch to NumPy that changes linalg.norm to use dot, but it hasn't been merged yet. Dec 29, 2013 at 13:54
• I suppose math.sqrt will always be faster than np.sqrt when operating on scalars, since np.sqrt's overall performance would be slowed if it had to check its input for scalars. Dec 29, 2013 at 14:28
• This is very neat; would you be so kind as to add the equivalent for 2D? I know that for rotating with respect to the OX axis we can just compute the new coordinates as (x*np.cos(theta) - y*np.sin(theta), x*np.sin(theta) + y*np.cos(theta)), but how should this be modified when the axis of rotation is no longer OX? Thanks for any tips. – user6039682 May 18, 2016 at 14:33
• Shouldn't axis be x, y or z? What's that vector? Jul 2, 2019 at 15:24

A one-liner, with numpy/scipy functions. We use the following: let a be the unit vector along axis, i.e. a = axis/norm(axis), and let A = I × a be the skew-symmetric matrix associated to a, i.e. the cross product of the identity matrix with a. Then M = exp(θA) is the rotation matrix.

```python
from numpy import cross, eye, dot
from scipy.linalg import expm, norm

def M(axis, theta):
    return expm(cross(eye(3), axis / norm(axis) * theta))

v, axis, theta = [3, 5, 0], [4, 4, 1], 1.2
M0 = M(axis, theta)

print(dot(M0, v))
# [ 2.74911638  4.77180932  1.91629719]
```

expm (code here) computes the Taylor series of the exponential, $\sum_{k=0}^{20} \frac{1}{k!} (\theta A)^k$, so it is computationally expensive, but readable and robust. It can be a good way if you have few rotations to do but a lot of vectors.

• What is the reference for the quote "let a be... then M = exp(θA) is the rotation matrix"? Jul 30, 2018 at 21:53
• Thanks. This Wikipedia page (en.wikipedia.org/wiki/…) is also useful. Last question: could you explain how cross(eye(3), axis/norm(axis)*theta) gets you the "cross-product matrix"? Aug 5, 2018 at 23:13

I just wanted to mention that if speed is required, wrapping unutbu's code in scipy's weave.inline and passing an already existing matrix as a parameter yields a 20-fold decrease in the running time.
The code (in rotation_matrix_test.py):

```python
import numpy as np
from math import cos, sin, sqrt
from scipy import weave  # note: scipy.weave is Python 2 only and was removed from modern SciPy

def rotation_matrix_weave(axis, theta, mat=None):
    if mat is None:
        mat = np.eye(3, 3)
    support = "#include <math.h>"
    code = """
        double x = sqrt(axis[0] * axis[0] + axis[1] * axis[1] + axis[2] * axis[2]);
        double a = cos(theta / 2.0);
        double b = -(axis[0] / x) * sin(theta / 2.0);
        double c = -(axis[1] / x) * sin(theta / 2.0);
        double d = -(axis[2] / x) * sin(theta / 2.0);

        mat[0] = a*a + b*b - c*c - d*d;
        mat[1] = 2 * (b*c - a*d);
        mat[2] = 2 * (b*d + a*c);

        mat[3*1 + 0] = 2*(b*c + a*d);
        mat[3*1 + 1] = a*a + c*c - b*b - d*d;
        mat[3*1 + 2] = 2*(c*d - a*b);

        mat[3*2 + 0] = 2*(b*d - a*c);
        mat[3*2 + 1] = 2*(c*d + a*b);
        mat[3*2 + 2] = a*a + d*d - b*b - c*c;
    """
    weave.inline(code, ['axis', 'theta', 'mat'], support_code=support, libraries=['m'])
    return mat

def rotation_matrix_numpy(axis, theta):
    axis = axis / sqrt(np.dot(axis, axis))
    a = cos(theta / 2.)
    b, c, d = -axis * sin(theta / 2.)
    return np.array([[a*a + b*b - c*c - d*d, 2*(b*c - a*d), 2*(b*d + a*c)],
                     [2*(b*c + a*d), a*a + c*c - b*b - d*d, 2*(c*d - a*b)],
                     [2*(b*d - a*c), 2*(c*d + a*b), a*a + d*d - b*b - c*c]])
```

The timing:

```
>>> import timeit
>>>
>>> setup = """
... import numpy as np
... import numpy.random as nr
...
... from rotation_matrix_test import rotation_matrix_weave
... from rotation_matrix_test import rotation_matrix_numpy
...
... mat1 = np.eye(3,3)
... theta = nr.random()
... axis = nr.random(3)
... """
>>>
>>> timeit.repeat("rotation_matrix_weave(axis, theta, mat1)", setup=setup, number=100000)
[0.36641597747802734, 0.34883809089660645, 0.3459300994873047]
>>> timeit.repeat("rotation_matrix_numpy(axis, theta)", setup=setup, number=100000)
[7.180983066558838, 7.172032117843628, 7.180462837219238]
```

Here is an elegant method using quaternions that is blazingly fast; I can calculate 10 million rotations per second with appropriately vectorised numpy arrays.
It relies on the quaternion extension to numpy found here.

Quaternion Theory: A quaternion is a number with one real and three imaginary dimensions, usually written as q = w + xi + yj + zk, where 'i', 'j', 'k' are imaginary dimensions. Just as a unit complex number 'c' can represent all 2D rotations by c = exp(i*theta), a unit quaternion 'q' can represent all 3D rotations by q = exp(p), where 'p' is a pure imaginary quaternion set by your axis and angle.

We start by converting your axis and angle to a quaternion whose imaginary dimensions are given by your axis of rotation, and whose magnitude is given by half the angle of rotation in radians. The 4-element vectors (w, x, y, z) are constructed as follows:

```python
import numpy as np
import quaternion as quat

v = [3, 5, 0]
axis = [4, 4, 1]
theta = 1.2  # radians

vector = np.array([0.] + v)
rot_axis = np.array([0.] + axis)
axis_angle = (theta * 0.5) * rot_axis / np.linalg.norm(rot_axis)
```

First, a numpy array of 4 elements is constructed with the real component w=0 for both the vector to be rotated, vector, and the rotation axis, rot_axis. The axis-angle representation is then constructed by normalizing and then multiplying by half the desired angle, theta. See here for why half the angle is required.

Now create the quaternions vec and qlog using the library, and get the unit rotation quaternion q by taking the exponential.

```python
vec = quat.quaternion(*vector)
qlog = quat.quaternion(*axis_angle)
q = np.exp(qlog)
```

Finally, the rotation of the vector is calculated by the following operation.

```python
v_prime = q * vec * np.conjugate(q)

print(v_prime)
# quaternion(0.0, 2.7491163, 4.7718093, 1.9162971)
```

Now just discard the real element and you have your rotated vector!

```python
v_prime_vec = v_prime.imag
# [2.74911638 4.77180932 1.91629719] as a numpy array
```

Note that this method is particularly efficient if you have to rotate a vector through many sequential rotations, as the quaternion product can just be calculated as q = q1 * q2 * q3 * q4 * ... * qn, and then the vector is only rotated by 'q' at the very end using v' = q * v * conj(q).

This method gives you a seamless transformation between axis-angle <---> 3D rotation operator simply by the exp and log functions (yes, log(q) just returns the axis-angle representation!). For further clarification of how quaternion multiplication etc. works, see here.

• Surprisingly, np.conjugate(q) seems to take longer than np.exp(qlog), even though it seems equivalent to just quat.quaternion(q.real, *(-q.imag)). Feb 16, 2019 at 14:05
• I understand this is an old thread, but I have a question about the implementation of this method here if anyone is able to take a look: stackoverflow.com/questions/64988678/… – Lucy Nov 24, 2020 at 14:41

Take a look at http://vpython.org/contents/docs/visual/VisualIntro.html. It provides a vector class which has a method A.rotate(theta,B). It also provides a helper function rotate(A,theta,B) if you don't want to call the method on A. http://vpython.org/contents/docs/visual/vector.html

I made a fairly complete library of 3D mathematics for Python{2,3}. It still does not use Cython, but relies heavily on the efficiency of numpy. You can find it here with pip:

```
python[3] -m pip install math3d
```

Or have a look at my gitweb http://git.automatics.dyndns.dk/?p=pymath3d.git and now also on github: https://github.com/mortlind/pymath3d. Once installed, in Python you may create the orientation object which can rotate vectors, or be part of transform objects. E.g. the following code snippet composes an orientation that represents a rotation of 1 rad around the axis [1,2,3], applies it to the vector [4,5,6], and prints the result:

```python
import math3d as m3d

r = m3d.Orientation.new_axis_angle([1, 2, 3], 1)
v = m3d.Vector(4, 5, 6)
print(r * v)
```

The output would be

```
<Vector: (2.53727, 6.15234, 5.71935)>
```

This is more efficient, by a factor of approximately four, as far as I can time it, than the one-liner using scipy posted by B. M. above.
However, it requires installation of my math3d package.

• I know this is very weird but I can't find a different way of contacting you. Would it be possible to use the math3d library to create 2D projections of 3D functions over an arbitrary axis more easily? For example, imagine projecting a normal distribution on the xy plane from the z axis. Now imagine moving by the polar angle theta away from the z axis (as in spherical coordinate notation) and projecting the normal dist on a plane that is also now rotated by theta in reference to xy? It's like orthogonal projection + integration. I can open a new question for this if you want. Jan 23, 2017 at 18:10
• Hi, ljetbo, I think this sounds difficult, or just not very easy with math3d. The function would imply, I guess, an analytical function, whereas math3d works better with point sets. Further, you seem to be talking about a scalar field over the plane (R(2)), whereas math3d deals with the special Euclidean group (SE+(3)). It may be possible to do what you wish, but I have no immediate idea about how to mix in an analytical function with math3d. Jan 24, 2017 at 20:10

Use scipy's Rotation.from_rotvec(). The argument is the rotation vector (a unit vector) multiplied by the rotation angle in rads.

```python
from scipy.spatial.transform import Rotation
from numpy.linalg import norm

v = [3, 5, 0]
axis = [4, 4, 1]
theta = 1.2

axis = axis / norm(axis)  # normalize the rotation vector first
rot = Rotation.from_rotvec(theta * axis)

new_v = rot.apply(v)
print(new_v)
# [2.74911638 4.77180932 1.91629719]
```

There are several more ways to use Rotation based on what data you have about the rotation.

Off-topic note: one-line code is not necessarily better code, as implied by some users.

• @smoothumut glad to be of help, friend. – user Jul 7, 2021 at 17:10

It can also be solved using quaternion theory:

```python
import numpy as np

def angle_axis_quat(theta, axis):
    """
    Given an angle and an axis, return a quaternion.
    """
    axis = np.array(axis) / np.linalg.norm(axis)
    return np.append([np.cos(theta / 2)], np.sin(theta / 2) * axis)

def mult_quat(q1, q2):
    """
    Quaternion multiplication.
    """
    q3 = np.copy(q1)
    q3[0] = q1[0]*q2[0] - q1[1]*q2[1] - q1[2]*q2[2] - q1[3]*q2[3]
    q3[1] = q1[0]*q2[1] + q1[1]*q2[0] + q1[2]*q2[3] - q1[3]*q2[2]
    q3[2] = q1[0]*q2[2] - q1[1]*q2[3] + q1[2]*q2[0] + q1[3]*q2[1]
    q3[3] = q1[0]*q2[3] + q1[1]*q2[2] - q1[2]*q2[1] + q1[3]*q2[0]
    return q3

def rotate_quat(quat, vect):
    """
    Rotate a vector with the rotation defined by a quaternion.
    """
    # Transform vect into a quaternion
    vect = np.append([0], vect)
    # Normalize it
    norm_vect = np.linalg.norm(vect)
    vect = vect / norm_vect
    # Compute the conjugate of quat
    quat_ = np.append(quat[0], -quat[1:])
    # The result is given by: quat * vect * quat_
    res = mult_quat(quat, mult_quat(vect, quat_)) * norm_vect
    return res[1:]

v = [3, 5, 0]
axis = [4, 4, 1]
theta = 1.2

print(rotate_quat(angle_axis_quat(theta, axis), v))
# [2.74911638 4.77180932 1.91629719]
```

Disclaimer: I am the author of this package. While special classes for rotations can be convenient, in some cases one needs rotation matrices (e.g. for working with other libraries like the affine_transform functions in scipy). To avoid everyone implementing their own little matrix-generating functions, there exists a tiny pure-Python package which does nothing more than provide convenient rotation-matrix-generating functions. The package is on github (mgen) and can be installed via pip:

```
pip install mgen
```

Example usage copied from the readme:

```python
import numpy as np
np.set_printoptions(suppress=True)

from mgen import rotation_around_axis
from mgen import rotation_from_angles
from mgen import rotation_around_x

matrix = rotation_from_angles([np.pi/2, 0, 0], 'XYX')
matrix.dot([0, 1, 0])
# array([0., 0., 1.])

matrix = rotation_around_axis([1, 0, 0], np.pi/2)
matrix.dot([0, 1, 0])
# array([0., 0., 1.])

matrix = rotation_around_x(np.pi/2)
matrix.dot([0, 1, 0])
# array([0., 0., 1.])
```

Note that the matrices are just regular numpy arrays, so no new data structures are introduced when using this package.

Using pyquaternion is extremely simple; to install it (while still in python), run in your console:

```python
import pip; pip.main(['install', 'pyquaternion'])
```

Once installed:

```python
from pyquaternion import Quaternion

v = [3, 5, 0]
axis = [4, 4, 1]
theta = 1.2  # radians

rotated_v = Quaternion(axis=axis, angle=theta).rotate(v)
```

I needed to rotate a 3D model around one of the three axes {x, y, z} in which that model was embedded, and this was the top result for a search of how to do this in numpy. I used the following simple function:

```python
import numpy as np

def rotate(X, theta, axis='x'):
    '''Rotate multidimensional array X theta degrees around axis axis'''
    c, s = np.cos(theta), np.sin(theta)
    if axis == 'x':
        return np.dot(X, np.array([
            [1., 0, 0],
            [0,  c, -s],
            [0,  s, c]
        ]))
    elif axis == 'y':
        return np.dot(X, np.array([
            [c, 0, -s],
            [0, 1, 0],
            [s, 0, c]
        ]))
    elif axis == 'z':
        return np.dot(X, np.array([
            [c, -s, 0],
            [s, c, 0],
            [0, 0, 1.]
        ]))
```
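All of the answers in this thread implement the same axis-angle rotation. As a quick cross-check (a sketch using only numpy, so none of the optional packages are required), the Euler-Rodrigues matrix from the accepted answer and a hand-rolled quaternion sandwich product can be compared on the question's example:

```python
import numpy as np

def rodrigues_matrix(axis, theta):
    """Euler-Rodrigues rotation matrix, as in the accepted answer."""
    axis = np.asarray(axis, float)
    axis = axis / np.sqrt(axis @ axis)
    a = np.cos(theta / 2.0)
    b, c, d = -axis * np.sin(theta / 2.0)
    return np.array([
        [a*a + b*b - c*c - d*d, 2*(b*c + a*d),         2*(b*d - a*c)],
        [2*(b*c - a*d),         a*a + c*c - b*b - d*d, 2*(c*d + a*b)],
        [2*(b*d + a*c),         2*(c*d - a*b),         a*a + d*d - b*b - c*c],
    ])

def quat_mult(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_rotate(v, axis, theta):
    """Rotate v via the sandwich product q * (0, v) * conj(q)."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    q = np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * axis))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    p = np.concatenate(([0.0], v))
    return quat_mult(quat_mult(q, p), q_conj)[1:]

v, axis, theta = [3, 5, 0], [4, 4, 1], 1.2

r1 = rodrigues_matrix(axis, theta) @ v
r2 = quat_rotate(v, axis, theta)
print(r1)                   # [2.74911638 4.77180932 1.91629719]
print(np.allclose(r1, r2))  # True
```

Both routes reproduce the [2.74911638, 4.77180932, 1.91629719] result quoted by several answers above, which is a useful sanity check before trusting either implementation in a larger program.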
https://www.maplesoft.com/support/help/Maple/view.aspx?path=dsolve%2Fseries
dsolve - Maple Programming Help

dsolve — find series solutions to ODE problems

Calling Sequence

```
dsolve(ODE, y(x), 'series')
dsolve(ODE, y(x), 'series', x=pt)
dsolve({ODE, ICs}, y(x), 'series')
dsolve({sysODE, ICs}, {funcs}, 'series')
dsolve(ODE, y(x), 'type=series')
dsolve(ODE, y(x), 'type=series', x=pt)
dsolve({ODE, ICs}, y(x), 'type=series')
dsolve({sysODE, ICs}, {funcs}, 'type=series')
```

Parameters

ODE - ordinary differential equation
y(x) - dependent variable (indeterminate function)
ICs - initial conditions for y(x) and/or its derivatives
sysODE - system of ODEs
{funcs} - set with indeterminate functions
pt - expansion point for series
'type=series' - to request a series solution

Description

• The dsolve command uses several methods when trying to find a series solution to an ODE or a system of ODEs. When initial conditions or an expansion point are given, the series is calculated at the given point; otherwise, the series is calculated at the origin.
• The first method used is a Newton iteration based on a paper of Keith Geddes. See the References section in this help page.
• The second method involves a direct substitution to generate a system of equations, which may be solvable (by solve) to give a series.
• The third method is the method of Frobenius for nth-order linear DEs. See the References section in this help page.
• If the aforementioned methods fail, the function invokes LinearFunctionalSystems[SeriesSolution].
Examples

```
> ode := diff(y(t),t,t) + diff(y(t),t)^2 = 0;

  ode := diff(y(t),t,t) + diff(y(t),t)^2 = 0                          (1)
```

When the initial conditions are not given, the answer is expressed in terms of the indeterminate function and its derivatives evaluated at the origin.

```
> ans := dsolve({ode}, y(t), type='series');

  ans := y(t) = y(0) + D(y)(0)*t - 1/2*D(y)(0)^2*t^2 + 1/3*D(y)(0)^3*t^3
         - 1/4*D(y)(0)^4*t^4 + 1/5*D(y)(0)^5*t^5 + O(t^6)             (2)
```

If initial conditions are given, the series is calculated at the given point:

```
> ans := dsolve({ode, y(a) = Y_a, D(y)(a) = DY_a}, y(t), type='series');

  ans := y(t) = Y_a + DY_a*(t-a) - 1/2*DY_a^2*(t-a)^2 + 1/3*DY_a^3*(t-a)^3
         - 1/4*DY_a^4*(t-a)^4 + 1/5*DY_a^5*(t-a)^5 + O((t-a)^6)       (3)
```

Alternatively, an expansion point can be provided, which is most useful when initial conditions cannot be given:

```
> ans := dsolve((1-t^2)*diff(y(t),t,t) - 2*t*y(t) - y(t), y(t), 'series', t=1);

  ans := y(t) = _C1*(t-1)*(1 - 3/4*(t-1) + 7/48*(t-1)^2 + 1/128*(t-1)^3
         - 157/15360*(t-1)^4 + 3371/921600*(t-1)^5 + O((t-1)^6))
         + _C2*(ln(t-1)*(-3/2*(t-1) + 9/8*(t-1)^2 - 7/32*(t-1)^3
         - 3/256*(t-1)^4 + 157/10240*(t-1)^5 + O((t-1)^6))
         + (1 - 29/16*(t-1)^2 + 21/32*(t-1)^3 - 131/3072*(t-1)^4
         - 2219/102400*(t-1)^5 + O((t-1)^6)))                         (4)
```

The order of the series expansion (default = 6) can be changed using the environment variable Order. For example,

```
> Order := 3;

  Order := 3                                                          (5)
```

An example with a system of ODEs:

```
> sys := {diff(x(t),t) = y(t), diff(y(t),t) = -x(t)};

  sys := {diff(x(t),t) = y(t), diff(y(t),t) = -x(t)}                  (6)

> ans := dsolve(sys union {x(0) = A, y(0) = B}, {x(t), y(t)}, type='series');

  ans := {x(t) = A + B*t - 1/2*A*t^2 + O(t^3),
          y(t) = B - A*t - 1/2*B*t^2 + O(t^3)}                        (7)
```

An example solved by LinearFunctionalSystems[SeriesSolution]:

```
> sys := [diff(y1(x),x) - y1(x) + x*y2(x) = x^3, x*diff(y2(x),x) - 2*y2(x)];

  sys := [diff(y1(x),x) - y1(x) + x*y2(x) = x^3,
          x*diff(y2(x),x) - 2*y2(x)]                                  (8)

> vars := [y1(x), y2(x)];

  vars := [y1(x), y2(x)]                                              (9)

> dsolve({op(sys)} union {y1(0) = 13}, vars, 'series');

  {y1(x) = 13 + 13*x + 13/2*x^2 + O(x^3),
   y2(x) = 1/2*(D@@2)(y2)(0)*x^2 + O(x^3)}                            (10)
```

References

Forsyth, A.R. Theory of Differential Equations. Cambridge: University Press, 1906. pp. 78-90.
Geddes, Keith. "Convergence Behaviour of the Newton Iteration for First Order Differential Equations." Proceedings of EUROSAM '79. pp. 189-199.
Ince, E.L. Ordinary Differential Equations. Dover Publications, 1956. pp. 398-406.
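The "direct substitution" method listed in the Description above can be sketched outside Maple as well. A minimal hand-rolled version (plain Python, not Maple's implementation) for example (2): writing $u = y'$, the ODE $y'' + (y')^2 = 0$ becomes $u' = -u^2$, and matching powers of $t$ gives a coefficient recurrence.

```python
from fractions import Fraction

def series_coeffs(c, n):
    """Truncated series solution of y'' + (y')^2 = 0 with y(0) = 0 and
    y'(0) = c.  Let u = y' = sum a_k t^k; then u' = -u^2 gives
    (k+1) a_{k+1} = -sum_{i+j=k} a_i a_j, and integrating term by term
    recovers y_k = a_{k-1} / k."""
    a = [Fraction(c)]
    for k in range(n - 1):
        conv = sum(a[i] * a[k - i] for i in range(k + 1))  # coeff of t^k in u^2
        a.append(-conv / (k + 1))
    # Integrate term by term; the constant term is y(0) = 0 here.
    return [Fraction(0)] + [a[k] / (k + 1) for k in range(n)]

# With y'(0) = 1 this reproduces Maple's output (2) up to the O(t^6) term:
# y(t) = y(0) + t - 1/2 t^2 + 1/3 t^3 - 1/4 t^4 + 1/5 t^5 + O(t^6),
# i.e. y(t) = y(0) + ln(1 + t).
print(series_coeffs(1, 5))
```

The coefficients match the alternating series of $\ln(1+t)$, which is the closed-form solution of this particular ODE, so the recurrence can be checked against it.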
https://www.physicsforums.com/threads/rearranging-equation-with-more-than-variable-instance.882382/
# Rearranging equation with more than one variable instance

1. Aug 18, 2016

### Jehannum

This question is no doubt absurdly simple to many here, but the answer will help me immensely. Say that:

c = a ^ 4

Then it's simple to rearrange for a:

a = c ^ 0.25

But what if there is more than one term, for example:

c = a ^ 4 + a ^ 3

Is it possible to rearrange this to the form a = something?

2. Aug 18, 2016

### BvU

No it is not. At least not in general. There may be special cases, but they are rare. Note that in your first example, $a = -c^{0.25}$ also satisfies the equation!

3. Aug 18, 2016

### Staff: Mentor

Up to quartic polynomials, it is possible, but only with messy formulas. Beyond that, there is no proper solution in general.

4. Aug 18, 2016

### Jehannum

Thank you for your answers. Now, may I generalise my question so that it touches on engineering and physics? If a relationship between physical quantities exists and can be expressed as a function:

a = f1(x)

and we wish to obtain x in terms of a, i.e.:

x = f2(a)

what do we do if f1(x) cannot be rearranged algebraically to return x, as the answers above have indicated? Is there a general method used by engineers and physicists to obtain a serviceable approximation of f2 so that x can be determined in terms of a? It seems to me that this must be a common problem, since it is one that I have come across in a particular application I am trying to develop.

5. Aug 18, 2016

### BvU

It happens all the time and is called equation solving... If you reveal a little more about your specific case, perhaps we can say something about usable methods? (For example: every control problem involves a form of inverting a transfer function.)

6. Aug 18, 2016

### Jehannum

Yes, I can certainly give details of the problem I'm working on. It's to do with pressure loss in low-pressure gas pipes. The equation I have to work with is:

Q = K1 * [p * d^5 / (s * L * f)]^0.5

where Q = flow rate, K1 = a constant, p = pressure loss, d = pipe diameter, s = relative density, L = pipe length and f = friction factor.

I want to rearrange this equation for d. This would of course be simple were it not for the fact that:

f = K2 * (1 + K3 / d)

where K2 and K3 are constants.

So, you see, d is actually in the numerator and denominator of the function for Q. This means it is beyond my mathematical ability to rearrange into the form d = function(Q). As far as I know, it may not be possible to do so anyway.

I have constructed an iterative algorithm which returns d, but it is not suitable for human computation. It works by trying successively larger values for d until the flow rate is sufficient with an acceptable pressure loss. I would like to know if there is a way I can present the relationship in a human-computable (preferably non-iterative) form.

7. Aug 18, 2016

### Staff: Mentor

There is no solution in closed form. No finite number of calculations will give you an exact result, so iterations are the best you can do to find an approximate solution. The best approach will depend on typical values of K3/d (only this ratio matters). Is it small? Is it large? Is it close to 1 (worst case)? Newton's method can be useful.

8. Aug 18, 2016

### pasmith

Setting $x = d/K_3$ and squaring both sides yields $$\frac{Q^2 sLK_2}{K_1^2 pK_3^5} = \frac{x^6}{1 + x}.$$ Thus you need to solve $$y = \frac{x^6}{1 + x}$$ for $x$. That in principle must be done numerically, but you need only tabulate the value of $x$ for sufficient values of $y$, and $x$ could then be found for other values of $y$ by interpolation, and then $d = K_3 x$. Alternatively you can graph $\frac{x^6}{1 + x}$ against $x$, and for a given value of $y = \frac{Q^2 sLK_2}{K_1^2 pK_3^5}$ on the vertical axis one can move across to the curve and then down to the horizontal axis to find $x = d/K_3$.

9. Aug 18, 2016

### BvU

Note that this Spitzglass equation is one of many design equations which have limitations. A high degree of accuracy in doing the calculations may not be justified. I agree with mfb that doing a few Newton steps would be a very sensible approach, but if your trial method works fast enough, then why bother?

10. Aug 19, 2016

### Jehannum

Thank you, guys. BvU, you are probably right about the degree of accuracy. In any case, commercially available steel pipework comes in well-separated discrete sizes, not a continuum, so the ideal diameter d 'snaps' to the next available size up anyway. The information in this thread has been useful as I am trying to increase my knowledge of applied maths in general.

11. Aug 19, 2016

### Jehannum

Just one more thing... in general, how does one determine whether there is a solution in closed form?

12. Aug 20, 2016

### Staff: Mentor

This can be a complicated problem. Proving that quintic equations don't have a general solution was an open problem until 1824, while solutions for lower powers were known in the 16th century already. There are two typical cases indicating that no solution in closed form exists:
- polynomials with a degree of at least 5
- everything like $x e^x$ or $x \log(x)$
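The Newton iteration suggested in the thread and pasmith's reduction combine into a short routine. The sketch below is a hypothetical implementation (the function and constant names are made up for illustration): it solves $y = x^6/(1+x)$ for $x > 0$ by Newton's method and then recovers $d = K_3 x$.

```python
def solve_x(y, tol=1e-10, max_iter=100):
    """Solve y = x**6 / (1 + x) for x > 0 by Newton's method.
    The left side is increasing for x > 0, and x**6 = y*(1 + x) > y
    means the root exceeds y**(1/6), so we start just above that."""
    x = y ** (1.0 / 6.0) + 1.0
    for _ in range(max_iter):
        f = x**6 / (1 + x) - y
        fprime = x**5 * (6 + 5 * x) / (1 + x) ** 2  # d/dx [x^6 / (1+x)]
        step = f / fprime
        x -= step
        if abs(step) < tol * max(x, 1.0):
            return x
    raise RuntimeError("Newton iteration did not converge")

def pipe_diameter(Q, p, s, L, K1, K2, K3):
    """Invert the Spitzglass-style relation for the diameter d = K3 * x."""
    y = Q**2 * s * L * K2 / (K1**2 * p * K3**5)
    return K3 * solve_x(y)

# Self-check with a known root: x = 2 gives y = 2**6 / 3.
print(solve_x(2**6 / 3))  # ~2.0
```

In practice only a handful of Newton steps are needed, and, as noted in the thread, the result would be rounded up to the next commercially available pipe size anyway.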
https://physics.stackexchange.com/questions/423632/why-conservation-of-momentum
Why conservation of momentum? [duplicate]

A bullet of mass 20 g travelling horizontally with a speed of 500 m/s passes through a wooden block of mass 10 kg, initially at rest. The bullet emerges with a speed of 100 m/s and the block slides 20 cm before coming to rest. Find the friction coefficient between the block and the surface.

My teacher solved this question by conserving momentum between the bullet and the block. But how can he do that when there is an external force (friction) acting on the system? I think the impulse-momentum theorem is used in such scenarios, but how can I apply it in this problem?

marked as duplicate by AccidentalFourierTransform, Bruce Lee, stafusa, John Rennie, Community♦ Aug 20 '18 at 14:47

• Yes it was very much related to what I wanted to ask. – Nalin Yadav Aug 20 '18 at 4:31

The teacher is assuming that the bullet passes through instantaneously. In other words, the bullet moves so quickly that there is no time for friction to act. Hence, the momentum that the bullet loses is entirely transferred to the block, and none is transferred to the ground via friction.

An appropriate follow-up question would be: is this a reasonable assumption to make? Let's take friction into account and try to estimate how fast the block would actually be moving when the bullet exits. We know the bullet is traveling at $\frac{500\text{ m/s}+100\text{ m/s}}{2}=300\text{ m/s}$ on average through the block. If the block is wood and cubic in shape, then the block is only ~0.3 m wide.* Hence, the bullet would pass through the block in a time of:

$$t=\frac{d}{v}=\frac{0.3\text{ m}}{300\text{ m/s}}=0.001\text{ s}$$

Your teacher calculated that the block reaches a speed of 0.8 m/s as a result of the collision with the bullet. That's a high estimate, because it ignores friction with the ground, which slows the block.
But let's go ahead and assume that, while the bullet is in the block, the block's average speed is $\frac{0+0.8\text{ m/s}}{2}=0.4\text{ m/s}$. At this speed, and for a time of 0.001 s, the block would only travel a distance of

$$d=vt=(0.4\text{ m/s})(0.001\text{ s})=0.0004\text{ m}$$

while in contact with the bullet. If the coefficient of friction is ~0.16, and if we use $g=10\text{ m/s}^2$, then the block's final speed when the bullet exits would be:

$$\Delta KE=Fd\cos(180°)=-(0.16)(100\text{ N})(0.0004\text{ m})=-0.0064\text{ J}$$

$$\Delta KE=\frac{1}{2}mv_f^2-\frac{1}{2}mv_i^2$$

$$-0.0064\text{ J}=\frac{1}{2}(10\text{ kg})(v_f^2-(0.8\text{ m/s})^2)$$

$$v_f\approx0.799\text{ m/s}$$

There are plenty of things wrong with this calculation (for example, I'm assuming the block immediately reaches a max speed of 0.8 m/s upon collision with the bullet, which isn't true). But as an order-of-magnitude estimate, it's reasonable enough to show that friction doesn't have much impact on the block while the bullet is inside.

*The density of wood is ~500 kg/m³, so $l=V^{1/3}=(\frac{m}{\rho})^{1/3}=(\frac{10\text{ kg}}{500\text{ kg/m}^3})^{1/3}\approx0.3\text{ m}$.

• But at the end it is similar to using conservation of momentum. Moreover, it is an essential characteristic of impulse that it exists for a small interval of time. So does that mean we can apply momentum conservation in all cases involving impulse? – Nalin Yadav Aug 20 '18 at 1:38

• @NalinYadav, jdphysics has still applied conservation of momentum here; what he has not done is assumed that the impulse is instant, as your teacher did. Remember all such calculations involve making simplifying assumptions (for example, we are not calculating the forces between the atoms in the wood). It's just a question of how many simplifying assumptions we want to make. jdphysics has demonstrated that your teacher's assumption that the impulse is instantaneous introduces an error of only about 1 part in 1000.
– Ben Aug 20 '18 at 12:14

• @Ben has said it better than I would've. I'll add that, in general, impulse doesn't have to be applied over a short interval of time. For example, the block slows down due to friction, and this takes a relatively longer period of time. During that time, the block receives an impulse due to the ground's friction force. We could calculate that impulse using $m\Delta v$. In general, whenever an object's velocity changes, it receives an impulse. As another example, a car speeding up on a road feels an impulse, and this is true even if the car speeds up gradually over a long interval of time. – jdphysics Aug 20 '18 at 20:12

You are right: you should treat the momentum transfer as an impulse. Assume that the bullet takes no time at all to get through the block, and that the momentum it loses is transferred instantaneously to the block. Then you need to solve the rest of the problem.

The way I would approach the rest of the problem is to use the block's instantaneous momentum and mass to calculate how much kinetic energy was transferred to the block, and then calculate the coefficient of friction. Force × distance = work, which equals the energy the block got from the bullet; and the force in this case is the coefficient of friction times the *weight* of the block. (Make sure you don't mistake the mass for the weight!)

What if the friction force was not external? If you consider the system {Bullet + Block + Ground} then you have three phases:

• Phase 1: The bullet has kinetic energy; block and ground are at rest.
• Phase 2: The bullet has lost some kinetic energy, which has been transferred to the block as kinetic energy.
• Phase 3: The block has lost energy. It has been transferred as heat to the ground.

Knowing the initial energy, and the distance it took to dissipate it, you can calculate the coefficient.
• Some of the kinetic energy of the bullet has been converted into an unknown amount of heat in the bullet and the block, as well as into the energy needed to tear the wood fibres apart. The bullet hitting the wood gives off some sound energy. Kinetic energy is not conserved in Phase 2... – DJohnM Aug 20 '18 at 5:27

• I think it is an implicit assumption of the exercise that these interactions are ignored. – Maxime Aug 20 '18 at 7:33

Whenever two or more particles in an isolated system interact, the total momentum of the system remains constant. The momentum of the system is conserved, but the momentum of an individual particle is not necessarily conserved. The total momentum of an isolated system equals its initial momentum. So here, in your set-up, block + bullet is the system. One knows for certain that part of the energy of the bullet must have been lost, so energy conservation cannot be applied; limit yourself to momentum conservation.

"I think the impulse-momentum theorem is used in such scenarios but how can I apply it in this problem?"

From Newton's second law, dp/dt = F, or dp = F dt. By integration we can find the change in momentum over some time interval; the integral $I = \int F\,dt$ is called the impulse of the force F acting on the object over that interval. The impulse imparted to a particle by a force is equal to the change in the momentum of the particle (the impulse-momentum theorem). Your momentum conservation is related to the impulse given by the block to the bullet and the impulse given by the bullet to the block; the two are equal in magnitude and opposite in direction.

Probably your teacher assumed that the motion of the bullet and block could be separated into two phases:

1. The bullet "collides" with the block instantaneously. It passes through the block and transfers most of its momentum to the block in such a short time $\Delta t$ that the block does not have time to move.
The impulse $F\Delta t$ due to the friction force between the block and the horizontal surface is therefore negligible. Unlike the normal contact stress between rigid bodies, which can reach around $1\ \mathrm{GPa}$ for wood, the friction force $F$ has a small upper limit. In this phase, conservation of momentum is applied to find the initial velocity of the block.

2. The block slides along the rough horizontal surface, doing work against friction and gradually losing its initial kinetic energy. This takes a finite time. In this phase the momentum of the block is eliminated by friction, and conservation of energy (or the work-energy theorem) is applied to find the work done against friction.

The purpose of splitting the problem into two separate phases is that it makes the analysis much easier. A different principle applies in each phase, and the result of Phase 1 provides the initial conditions for Phase 2. Any complicated interaction between the two phases is ignored.

The key assumption is that it makes no (significant) difference if the bullet takes a finite time to pass through the block and is still transferring momentum to the block while the block is moving. This assumption is justified if the work done against the external force (friction) is the same whether Phase 1 is instantaneous or overlaps with Phase 2.

Provided that the bullet emerges from the block before the block stops, and that its loss of momentum is known, the total amount of momentum transferred from the bullet to the block and then to the Earth via friction is the same in both cases. Kinetic friction is usually assumed to be independent of the relative velocity between the surfaces, so the manner in which the speed of the block varies is irrelevant. However, friction does depend on the total mass of the block, so the work done against friction over a fixed distance is greater the longer the bullet remains inside the block.
This effect will be negligible if the mass of the bullet is much smaller than that of the block. The time taken for the bullet to pass through the block, and the manner in which the resistance force on it varies, are irrelevant, because the total amount of momentum transferred from the bullet to the block is fixed by the problem. Another situation in which an external force acts for a finite time during a collision is when the colliding objects are falling in a gravitational field.
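The two-phase recipe in the answers above can be checked numerically with the values from the question (a sketch of the standard calculation, not anyone's actual worked solution; $g = 10\ \mathrm{m/s^2}$ as used in the thread):

```python
# Phase 1: instantaneous momentum transfer from bullet to block.
m_bullet = 0.020            # kg
v_in, v_out = 500.0, 100.0  # m/s, bullet speed entering / leaving the block
M_block = 10.0              # kg
V_block = m_bullet * (v_in - v_out) / M_block   # block speed just after impact

# Phase 2: work-energy theorem, mu * M * g * d = (1/2) * M * V**2.
g = 10.0   # m/s^2, the value used in the thread
d = 0.20   # m, sliding distance
mu = V_block ** 2 / (2 * g * d)

print(V_block, round(mu, 3))   # prints 0.8 0.16
```

This reproduces the teacher's 0.8 m/s block speed and the friction coefficient of about 0.16 quoted in jdphysics's answer.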
https://www.zbmath.org/authors/?q=ai%3Aoshima.yoichi
# zbMATH — the first resource for mathematics

## Oshima, Yoichi

Author ID: oshima.yoichi
Published as: Oshima, Yoichi; Oshima, Y.; Ōshima, Y.; Ôshima, Yôichi; Oshima, Yōichi; Ōshima, Yōichi
Documents Indexed: 36 Publications since 1969, including 3 Books
Reviewing Activity: 118 Reviews

#### Co-Authors

25 single-authored; 5 Fukushima, Masatoshi; 3 Takeda, Masayoshi; 1 Kim, Daehong; 1 Kondo, Ryoji; 1 Sturm, Karl-Theodor; 1 Uemura, Toshihiro; 1 Yamada, Toshio

#### Serials

5 Osaka Journal of Mathematics; 5 Potential Analysis; 3 Forum Mathematicum; 3 De Gruyter Studies in Mathematics; 2 Journal of the Mathematical Society of Japan; 1 Kumamoto Journal of Science. (Mathematics); 1 Proceedings of the Japan Academy. Series A; 1 SIAM Journal on Control and Optimization; 1 Tohoku Mathematical Journal. Second Series; 1 Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete; 1 Journal of Theoretical Probability; 1 Infinite Dimensional Analysis, Quantum Probability and Related Topics; 1 RIMS Kokyuroku

#### Fields

35 Probability theory and stochastic processes (60-XX); 17 Potential theory (31-XX); 1 Statistics (62-XX); 1 Statistical mechanics, structure of matter (82-XX)

#### Citations contained in zbMATH Open

23 Publications have been cited 1,015 times in 904 Documents. Cited by year:

Dirichlet forms and symmetric Markov processes. Zbl 0838.31001 Fukushima, Masatoshi; Oshima, Yoichi; Takeda, Masayoshi 1994

Dirichlet forms and symmetric Markov processes. 2nd revised and extended ed. Zbl 1227.31001 Fukushima, Masatoshi; Oshima, Yoichi; Takeda, Masayoshi 2011

Semi-Dirichlet forms and Markov processes. Zbl 1286.60002 Oshima, Yoichi 2013

Time-dependent Dirichlet forms and related stochastic calculus. Zbl 1078.60060 Oshima, Yoichi 2004

On a construction of Markov processes associated with time dependent Dirichlet spaces. Zbl 0759.60081 Oshima, Yoichi 1992

On conservativeness and recurrence criteria for Markov processes.
Zbl 1081.60545 Ōshima, Yōichi 1992 Some properties of Markov processes associated with time dependent Dirichlet forms. Zbl 0759.60082 Oshima, Yoichi 1992 On the skew product of symmetric diffusion processes. Zbl 0656.60083 Fukushima, Masatoshi; Oshima, Yoichi 1989 On some representations of continuous additive functionals locally of zero energy. Zbl 0542.60078 Ôshima, Yôichi; Yamada, Toshio 1984 Some singular diffusion processes and their associated stochastic differential equations. Zbl 0465.60069 Oshima, Yoichi 1982 On a transformation of symmetric Markov process and recurrence property. Zbl 0634.60064 Oshima, Yoichi; Takeda, Masayoshi 1987 On time change of symmetric Markov processes. Zbl 0722.60077 Oshima, Yoichi 1988 Recurrent Dirichlet forms and Markov property of associated Gaussian fields. Zbl 1409.60117 Fukushima, Masatoshi; Oshima, Yoichi 2018 Potential of recurent symmetric Markov processes and its associated Dirichlet spaces. Zbl 0488.60084 Oshima, Yoichi 1982 On stochastic differential equations characterizing some singular diffusion processes. Zbl 0493.60081 Oshima, Yoichi 1981 On absolute continuity of two symmetric diffusion processes. Zbl 0619.60070 Oshima, Y. 1987 On a construction of diffusion processes on moving domains. Zbl 1039.60073 Oshima, Yoichi 2004 On an optimal stopping problem of time inhomogeneous diffusion processes. Zbl 1128.60030 Oshima, Yoichi 2007 On a construction of recurrent potential kernel by mean of time change and killing. Zbl 0342.60053 Oshima, Y. 1977 On the equilibrium measure of recurrent Markov processes. Zbl 0392.60053 Oshima, Yoichi 1978 On the recurrence of some time inhomogeneous Markov processes. Zbl 0895.60080 Oshima, Yoichi 1998 On the exceptionality of some semipolar sets of time inhomogeneous Markov processes. Zbl 1013.60057 Oshima, Yoichi 2002 On the conservativeness of some Markov processes. 
Zbl 1364.60097 Oshima, Yoichi; Uemura, Toshihiro 2017 Recurrent Dirichlet forms and Markov property of associated Gaussian fields. Zbl 1409.60117 Fukushima, Masatoshi; Oshima, Yoichi 2018 On the conservativeness of some Markov processes. Zbl 1364.60097 Oshima, Yoichi; Uemura, Toshihiro 2017 Semi-Dirichlet forms and Markov processes. Zbl 1286.60002 Oshima, Yoichi 2013 Dirichlet forms and symmetric Markov processes. 2nd revised and extended ed. Zbl 1227.31001 Fukushima, Masatoshi; Oshima, Yoichi; Takeda, Masayoshi 2011 On an optimal stopping problem of time inhomogeneous diffusion processes. Zbl 1128.60030 Oshima, Yoichi 2007 Time-dependent Dirichlet forms and related stochastic calculus. Zbl 1078.60060 Oshima, Yoichi 2004 On a construction of diffusion processes on moving domains. Zbl 1039.60073 Oshima, Yoichi 2004 On the exceptionality of some semipolar sets of time inhomogeneous Markov processes. Zbl 1013.60057 Oshima, Yoichi 2002 On the recurrence of some time inhomogeneous Markov processes. Zbl 0895.60080 Oshima, Yoichi 1998 Dirichlet forms and symmetric Markov processes. Zbl 0838.31001 Fukushima, Masatoshi; Oshima, Yoichi; Takeda, Masayoshi 1994 On a construction of Markov processes associated with time dependent Dirichlet spaces. Zbl 0759.60081 Oshima, Yoichi 1992 On conservativeness and recurrence criteria for Markov processes. Zbl 1081.60545 Ōshima, Yōichi 1992 Some properties of Markov processes associated with time dependent Dirichlet forms. Zbl 0759.60082 Oshima, Yoichi 1992 On the skew product of symmetric diffusion processes. Zbl 0656.60083 Fukushima, Masatoshi; Oshima, Yoichi 1989 On time change of symmetric Markov processes. Zbl 0722.60077 Oshima, Yoichi 1988 On a transformation of symmetric Markov process and recurrence property. Zbl 0634.60064 Oshima, Yoichi; Takeda, Masayoshi 1987 On absolute continuity of two symmetric diffusion processes. Zbl 0619.60070 Oshima, Y. 
1987 On some representations of continuous additive functionals locally of zero energy. Zbl 0542.60078 Ôshima, Yôichi; Yamada, Toshio 1984 Some singular diffusion processes and their associated stochastic differential equations. Zbl 0465.60069 Oshima, Yoichi 1982 Potential of recurent symmetric Markov processes and its associated Dirichlet spaces. Zbl 0488.60084 Oshima, Yoichi 1982 On stochastic differential equations characterizing some singular diffusion processes. Zbl 0493.60081 Oshima, Yoichi 1981 On the equilibrium measure of recurrent Markov processes. Zbl 0392.60053 Oshima, Yoichi 1978 On a construction of recurrent potential kernel by mean of time change and killing. Zbl 0342.60053 Oshima, Y. 1977 all top 5 #### Cited by 672 Authors 56 Chen, Zhen-Qing 25 Röckner, Michael 24 Fukushima, Masatoshi 24 Klimsiak, Tomasz 24 Takeda, Masayoshi 21 Kim, Panki 20 Grigor’yan, Alexander Asaturovich 19 Kumagai, Takashi 18 Ying, Jiangang 17 Song, Renming 17 Zhang, Tusheng S. 16 Kuwae, Kazuhiro 16 Wang, Jian 15 Saloff-Coste, Laurent 14 Albeverio, Sergio A. 14 Uemura, Toshihiro 13 Hu, Jiaxin 13 Kaneko, Hiroshi 13 Shiozawa, Yuichi 13 Sun, Wei 12 Hinz, Michael 12 Keller, Matthias 12 Teplyaev, Alexander 12 Trutnau, Gerald 11 Fitzsimmons, Patrick J. 11 Hino, Masanori 11 Lenz, Daniel H. 11 Li, Liping 10 Bendikov, Alexander D. 10 Ma, Zhi-Ming 10 Robinson, Derek W. 10 Rozkosz, Andrzej 9 Barlow, Martin T. 9 Bass, Richard F. 
9 Cipriani, Fabio 9 Kim, Daehong 8 Beznea, Lucian 8 Burdzy, Krzysztof 8 Huang, Xueping 8 Lancia, Maria Rosaria 8 Lau, Kasing 8 Masamune, Jun 8 Oshima, Yoichi 8 Schilling, René Leander 7 Ambrosio, Luigi 7 Coulhon, Thierry 7 Grothaus, Martin 7 Osada, Hirofumi 7 Schmidt, Marcel 7 Shanmugalingam, Nageswari 7 Sturm, Karl-Theodor 7 Vondraček, Zoran 7 Warma, Mahamadi 6 Atsuji, Atsushi 6 Baudoin, Fabrice 6 Ben Amor, Ali 6 Bogdan, Krzysztof 6 Bouleau, Nicolas 6 Jiang, Renjin 6 Kajino, Naotaka 6 Khoshnevisan, Davar 6 Kigami, Jun 6 Kuwada, Kazumasa 6 Linetsky, Vadim 6 Mathieu, Pierre 6 Rogers, Luke G. 6 Tsuchida, Kaneharu 6 Zambotti, Lorenzo 5 Chen, Xin 5 He, Ping 5 Jacob, Niels 5 Jørgensen, Palle E. T. 5 Kassmann, Moritz 5 Knopova, Victoria Pavlovna 5 Lierl, Janna 5 Miclo, Laurent 5 Murugan, Mathav Kishore 5 Pearse, Erin Peter James 5 Shin, Jiyong 5 Sun, Wenjie 5 Vernole, Paola Gioia 5 Zhu, RongChan 4 Alonso-Ruiz, Patricia 4 Bernicot, Frédéric 4 Capitanelli, Raffaela 4 Chen, Chuanzhong 4 Croydon, David A. 4 Denis, Laurent 4 D’Ovidio, Mirko 4 Freiberg, Uta Renata 4 Gal, Ciprian Gheorghe Sorin 4 Gesztesy, Fritz 4 Grillo, Gabriele 4 Güneysu, Batu 4 Kasue, Atsushi 4 Kerkyacharian, Gérard 4 Kondrat’yev, Yuriĭ Grygorovych 4 Koskela, Pekka 4 Lejay, Antoine 4 Li, Lingfei ...and 572 more Authors all top 5 #### Cited in 162 Serials 92 Journal of Functional Analysis 73 Potential Analysis 48 The Annals of Probability 39 Transactions of the American Mathematical Society 39 Stochastic Processes and their Applications 35 Probability Theory and Related Fields 28 Journal of Theoretical Probability 23 Journal of Mathematical Analysis and Applications 20 Proceedings of the American Mathematical Society 20 Journal of Evolution Equations 17 Annales de l’Institut Henri Poincaré. Probabilités et Statistiques 15 Tohoku Mathematical Journal. Second Series 15 Forum Mathematicum 14 Journal de Mathématiques Pures et Appliquées. Neuvième Série 14 Acta Mathematica Sinica. 
English Series 13 Advances in Mathematics 13 Mathematische Nachrichten 12 Journal of Differential Equations 12 Journal of the Mathematical Society of Japan 12 Mathematische Zeitschrift 12 Electronic Journal of Probability 9 Calculus of Variations and Partial Differential Equations 8 Communications on Pure and Applied Mathematics 8 Osaka Journal of Mathematics 8 Science China. Mathematics 7 Mathematische Annalen 7 Acta Applicandae Mathematicae 7 Bernoulli 6 Communications in Mathematical Physics 6 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie IV 6 Illinois Journal of Mathematics 6 Publications of the Research Institute for Mathematical Sciences, Kyoto University 6 Discrete and Continuous Dynamical Systems 6 Infinite Dimensional Analysis, Quantum Probability and Related Topics 5 Zeitschrift für Analysis und ihre Anwendungen 5 Statistics & Probability Letters 5 The Annals of Applied Probability 5 Journal of the European Mathematical Society (JEMS) 5 Complex Analysis and Operator Theory 5 $$p$$-Adic Numbers, Ultrametric Analysis, and Applications 5 Journal of Fractal Geometry 4 Applicable Analysis 4 Archive for Rational Mechanics and Analysis 4 Journal d’Analyse Mathématique 4 Journal of Statistical Physics 4 Annali di Matematica Pura ed Applicata. Serie Quarta 4 Stochastic Analysis and Applications 4 The Journal of Geometric Analysis 4 Annales de la Faculté des Sciences de Toulouse. Mathématiques. Série VI 4 NoDEA. Nonlinear Differential Equations and Applications 3 Journal of Mathematical Physics 3 Studia Mathematica 3 Applied Mathematics and Optimization 3 Bulletin of the London Mathematical Society 3 Acta Mathematicae Applicatae Sinica. English Series 3 SIAM Journal on Mathematical Analysis 3 Bulletin of the American Mathematical Society. New Series 3 Mathematical Finance 3 European Series in Applied and Industrial Mathematics (ESAIM): Probability and Statistics 3 ALEA. 
Latin American Journal of Probability and Mathematical Statistics 2 Israel Journal of Mathematics 2 Jahresbericht der Deutschen Mathematiker-Vereinigung (DMV) 2 Nonlinearity 2 Annales de l’Institut Fourier 2 The Annals of Statistics 2 Archiv der Mathematik 2 Duke Mathematical Journal 2 Inventiones Mathematicae 2 Journal für die Reine und Angewandte Mathematik 2 Manuscripta Mathematica 2 Memoirs of the American Mathematical Society 2 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 2 Proceedings of the Japan Academy. Series A 2 SIAM Journal on Control and Optimization 2 Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 2 Physica D 2 Constructive Approximation 2 Revista Matemática Iberoamericana 2 Science in China. Series A 2 Sugaku Expositions 2 Journal of Mathematical Sciences (New York) 2 Annales Mathématiques Blaise Pascal 2 Bulletin des Sciences Mathématiques 2 The Journal of Fourier Analysis and Applications 2 Finance and Stochastics 2 Mathematical Physics, Analysis and Geometry 2 Positivity 2 Fractional Calculus & Applied Analysis 2 Annals of Mathematics. Second Series 2 Discrete and Continuous Dynamical Systems. Series B 2 Comptes Rendus. Mathématique. 
Académie des Sciences, Paris 2 Stochastics and Dynamics 2 Communications on Pure and Applied Analysis 2 International Journal of Stochastic Analysis 2 Journal of Spectral Theory 2 Analysis and Geometry in Metric Spaces 1 Advances in Applied Probability 1 Indian Journal of Pure & Applied Mathematics 1 Letters in Mathematical Physics 1 Reports on Mathematical Physics ...and 62 more Serials all top 5 #### Cited in 43 Fields 655 Probability theory and stochastic processes (60-XX) 318 Potential theory (31-XX) 268 Partial differential equations (35-XX) 186 Operator theory (47-XX) 90 Measure and integration (28-XX) 77 Functional analysis (46-XX) 75 Global analysis, analysis on manifolds (58-XX) 41 Statistical mechanics, structure of matter (82-XX) 40 Differential geometry (53-XX) 25 Combinatorics (05-XX) 22 Quantum theory (81-XX) 20 Functions of a complex variable (30-XX) 18 Real functions (26-XX) 18 Dynamical systems and ergodic theory (37-XX) 17 Calculus of variations and optimal control; optimization (49-XX) 12 Harmonic analysis on Euclidean spaces (42-XX) 12 Abstract harmonic analysis (43-XX) 11 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 10 Number theory (11-XX) 9 Ordinary differential equations (34-XX) 9 Statistics (62-XX) 8 Fluid mechanics (76-XX) 7 Integral equations (45-XX) 7 General topology (54-XX) 6 Linear and multilinear algebra; matrix theory (15-XX) 6 Several complex variables and analytic spaces (32-XX) 6 Numerical analysis (65-XX) 5 Topological groups, Lie groups (22-XX) 5 Mechanics of deformable solids (74-XX) 5 Systems theory; control (93-XX) 4 Difference and functional equations (39-XX) 4 Information and communication theory, circuits (94-XX) 3 General and overarching topics; collections (00-XX) 3 Geometry (51-XX) 3 Convex and discrete geometry (52-XX) 3 Operations research, mathematical programming (90-XX) 2 Biology and other natural sciences (92-XX) 1 History and biography (01-XX) 1 Integral transforms, 
operational calculus (44-XX) 1 Manifolds and cell complexes (57-XX) 1 Optics, electromagnetic theory (78-XX) 1 Classical thermodynamics, heat transfer (80-XX) 1 Relativity and gravitational theory (83-XX)
http://math.iisc.ac.in/seminars/2020/2020-03-06-matthew-honnor.html
#### Algebra & Combinatorics Seminar

##### Venue: LH-1, Mathematics Department

In the 1980s Tate stated the Brumer–Stark conjecture which, for a totally real field $F$ with prime ideal $\mathfrak{p}$, predicts the existence of a $\mathfrak{p}$-unit called the Gross–Stark unit. This unit has $\mathfrak{P}$-order equal to the value of a partial zeta function at 0, for a prime $\mathfrak{P}$ above $\mathfrak{p}$. In 2008 and 2018, Dasgupta and Dasgupta–Spieß conjectured formulas for this unit. During this talk I shall explain Tate's conjecture and then the ideas behind the constructions of these formulas. I will finish by explaining the results I have obtained from comparing these formulas.

Contact: +91 (80) 2293 2711, +91 (80) 2293 2265; E-mail: chair.math[at]iisc[dot]ac[dot]in

Last updated: 06 Mar 2020
https://solvedlib.com/check-b-exercise-12-6-algo-trading-securities,17548
##### Candy Company Worksheet, For the Year Ended June 30, 2020

(Partially shown worksheet with paired Db./Cr. columns and posting references for: Adjusting Entries, Adjusted Trial Balance, Income Statement, and Balance Sheet. The account titles begin: Cash, Accounts Receivable, Allowance for Doubtful Accounts, Supplies, Prepaid Insurance, Prepaid Rent, Land, Buildings, ...)

##### Problem 2 (10 + 10 pts)

(a) Show that for any function $f$ and any vector field $\mathbf{F}$ we have $\nabla\cdot(e^{f}\mathbf{F}) = e^{f}\,(\nabla f \cdot \mathbf{F} + \nabla\cdot\mathbf{F})$. (You may use without proof any identities that previously appeared in lectures, recitation sheets, problem sets, or review sheets.)

(b) Find a positive function $g(x)$ (that is, a function of $x$ only) such that $\nabla\cdot(g(x)\,\mathbf{F}) = 0$, where F(x, y, z) = (2, 1y, y2). (Hint: look for $g$ in the form $e^{f(x)}$. Don't forget to write the formula for the final answer $g(x)$!)

##### QUESTION 11

(Figure: a lever with its rotation axis marked; distances 1.2 m and 0.8 m from the axis; 0.41 rad; forces of 325 N and 450 N; ẑ = out of page.)

Two people push straight down on either end of a lever with forces and distances as shown in the figure. What is the net torque on the lever?

30.0 Nm in the +Z direction
30.0 Nm in the -Z direction
27.5 Nm in the -Z direction
12.0 Nm in the -Z direction
27.5 Nm in the +Z direction
12.0 Nm in the +Z direction

##### Question 1 (5 points)

Al(OH)3 + 3HCl → AlCl3 + 3H2O. When 450 mg of Al(OH)3 reacts completely with 0.2 L of HCl, what is the molarity of the HCl solution?

0.85 M
85.00 M
0.085 M
8.50 M

##### Analyze lim f(x) as x → ∞ and as x → −∞, and then identify any horizontal asymptotes. Find the vertical asymptotes. For each vertical asymptote x = a, analyze the limits of f(x) as x → a⁻ and as x → a⁺.

70. f(x) = (x² − 4x + 3)/(x − 1)
71. f(x) = (2x³ + 10x² + 12x)/(x³ + 2x²)
72. f(x) = √(16x⁴ + 64x²)/(2x² − 4)
73. f(x) = (3x⁴ + 3x³ − 36x²)/(x⁴ − 25x² + 144)
74. f(x) = x²(4x² − √(16x⁴ …

##### You will need to know the following basic facts about equivalence relations (this should be familiar for those who've done MATH121). An equivalence relation on a set X is a set R ⊆ X × X of ordered pairs from X satisfying the following three conditions:

R1) Reflexive: (x, x) ∈ R for all x ∈ X;
R2) Symmetric: if (x, y) ∈ R then (y, x) ∈ R;
R3) Transitive: if (x, y) ∈ R and (y, z) ∈ R then (x, z) ∈ R.

The equivalence class of an element x ∈ X is the set [x] = {y ∈ X : (x, y) ∈ R}.
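As a check on the Al(OH)3 + 3HCl question above, the stoichiometry works out in a few lines (standard atomic weights; the computed value is closest to the 0.085 M choice):

```python
# Al(OH)3 + 3 HCl -> AlCl3 + 3 H2O
M_AlOH3 = 26.98 + 3 * (16.00 + 1.008)   # g/mol, approx. 78.0
mol_AlOH3 = 0.450 / M_AlOH3             # 450 mg of Al(OH)3
mol_HCl = 3 * mol_AlOH3                 # 1:3 mole ratio
molarity = mol_HCl / 0.200              # HCl dissolved in 0.2 L

print(round(molarity, 3))   # prints 0.087, closest listed choice is 0.085 M
```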
According to the last radio transmission from the 40,000-ton luxury liner, the Hedonist, it was going due west a... ##### N 1 FoMLal 1 4 Roaction 1' 1 innl a mone 1 Rates 0 and" 1 1 Hemperature ) 1MacBook Air N 1 FoMLal 1 4 Roaction 1' 1 innl a mone 1 Rates 0 and" 1 1 Hemperature ) 1 MacBook Air... ##### You are the accountant for DigitMart, Inc., a publicly-traded digital marketing firm. After calculating the financial... You are the accountant for DigitMart, Inc., a publicly-traded digital marketing firm. After calculating the financial ratios for DigitMart, you would compare DigitMart’s ratios to all of the following except: trends competitor’s ratios industry averages benchmarks... ##### How long would it take to reach the far to edge of observable universe if going at 5 miles per second? How long would it take to reach the far to edge of observable universe if going at 5 miles per second?... ##### A population has mean ? = 24 and standard deviation ? = 6. Round the answers... A population has mean ? = 24 and standard deviation ? = 6. Round the answers to two decimal places as needed. Part 1 out of 3 Find the z-score for a population value of 4. The z-score for a population value of 4 is... ##### +" 0 2 12JJpm astructionsFind the rndom cundy = Ipictes = and[ tke You" reach cindics Question Iemon orange; and = cherty; contains = Abae probability echt Ibvot One 2775 Nex 3636} 4818 22182 12 4Opm 40867 checked at - Sve Last new datz Previous +" 0 2 12JJpm astructions Find the rndom cundy = Ipictes = and[ tke You" reach cindics Question Iemon orange; and = cherty; contains = Abae probability echt Ibvot One 2775 Nex 3636} 4818 22182 12 4Opm 40867 checked at - Sve Last new datz Previous... ##### QuesTion 21The lable F below shows Ihe Ihen Ihie moving coreorard eanringo ol an enterpnse for 5 years Referring lo avcrage ol Ihe Ihird quarler 0f Ihe frst year Is? 
(Table F) and using perod Iengih 01 Table F Yeat (lutter Rovrnur Tou 810 654 60 206y Fain 469 0519,8535,8 679 8 774,8 QuesTion 21 The lable F below shows Ihe Ihen Ihie moving coreorard eanringo ol an enterpnse for 5 years Referring lo avcrage ol Ihe Ihird quarler 0f Ihe frst year Is? (Table F) and using perod Iengih 01 Table F Yeat (lutter Rovrnur Tou 810 654 60 206y Fain 469 0 519,8 535,8 679 8 774,8... ##### Molecular comparisons place nematodes and arthropods in clade Ecdysozoa. What characteristic do they share that is the basis for the name Ecdysozoa?a. a complete digestive tractb. body segmentationc. molting of an exoskeletond. bilateral symmetry Molecular comparisons place nematodes and arthropods in clade Ecdysozoa. What characteristic do they share that is the basis for the name Ecdysozoa? a. a complete digestive tract b. body segmentation c. molting of an exoskeleton d. bilateral symmetry... ##### A) 21.50B) 16.770 26.80D) 28.3138) Monica, chef at a S-star restaurant makes eight different desserts. She wants t0 see the customers prefer any specific dessert to another. She keeps a record of desserts ordered over the course of several weeks and the results are summarized in the table below. Dessert Frequency Chocolate Mousse 25 Baked Alaska 12 Orange Cheesecake 23 Caramel Flan 32 Banana Brulee 40 Mississippi Mud Pie 20 Ricotta Cannoli 38 French Walnut Torte 34Find the critical value at a = A) 21.50 B) 16.77 0 26.80 D) 28.31 38) Monica, chef at a S-star restaurant makes eight different desserts. She wants t0 see the customers prefer any specific dessert to another. She keeps a record of desserts ordered over the course of several weeks and the results are summarized in the table below.... ##### Bradley's Appliance Store had the following transactions during the month of May 2019 May 5 8... Bradley's Appliance Store had the following transactions during the month of May 2019 May 5 8 Sold a refrigerator to Mary Wilson that had a retail price of $650. 
Sales tax of$32.50 was also charged. Sold a freezer to Sam Lee. Issued sales slip for $900 plus sales tax of$45 Gave Mary Wilson an ... ##### This compound is the product of: OH (CH3CH2)2 COCH2CH2CH3 A) B) C) D) the oxidation of... This compound is the product of: OH (CH3CH2)2 COCH2CH2CH3 A) B) C) D) the oxidation of a primary alcohol the reaction of a carboxylic acid and an alcohol the reaction of a carboxylic acid and ammonia the reaction of a ketone and an alcohol Arrange these compounds in order of decreasing melting point... ##### What is true about the following function f(x)--x^3-2x^2+x ?3 pointsHas an intercept at (0,0)Has at most 3 x-interceptsHas at most 2 X-interceptsIncreases then decreases as X goes from -inf to infDecreases then increases as X goes from -inf to infDecreases then does some weird shit then continues decreases as X goes from -inf to infIncreases then does some weird shit then continues decreasing as X goes from -inf to inf What is true about the following function f(x)--x^3-2x^2+x ? 3 points Has an intercept at (0,0) Has at most 3 x-intercepts Has at most 2 X-intercepts Increases then decreases as X goes from -inf to inf Decreases then increases as X goes from -inf to inf Decreases then does some weird shit then conti... ##### When Dr. Bruce Banner becomes angry and undergoes a transformation into the Incredible Hulk, his blood... When Dr. Bruce Banner becomes angry and undergoes a transformation into the Incredible Hulk, his blood plasma becomes saturated with Na-24 atoms. These radioactive sodium atoms decay into stable Mg-24 by beta decay, along with the release of two high-energy gamma rays, to which Dr. Banner is immune.... ##### (b) Suppose a differential equation is in the form of yf(x,y)dx + xg(x,y)dy = 0. 
If the differential equation can be written as M(x,y)dx + N(x,y)dy = 0, and if xM(x,y) - yN(x,Y) 0, then the integrating factor for the differential equation is in the form of p(x,y) Consider the following differential equation. xM-YN(x3y2 + x)dy + (xzy3 y)dx = 0,x,y > 0uans TacIcBy using part 1(a) or otherwise, show that the general solution for the above differential equation is y2 Cx2e-yz where C is a consta (b) Suppose a differential equation is in the form of yf(x,y)dx + xg(x,y)dy = 0. If the differential equation can be written as M(x,y)dx + N(x,y)dy = 0, and if xM(x,y) - yN(x,Y) 0, then the integrating factor for the differential equation is in the form of p(x,y) Consider the following differentia... ##### (a) An object thrown vertically Muth speed reaches height at Ume Khere: T10] Vo t 9t2 Write and test function that computes the time required t0 reach specified height h; for given value of Vo: The function input should be h; Ve and & Test your function for the case where h=100 m, Vo-50 m/s and &-9,81 ms" by writing MAAB codc which calls Ine function (b) Given: Mio1Find the results 0i UImes B using the array product, ( A divided by using Jtrdy riEht division divided by using array l (a) An object thrown vertically Muth speed reaches height at Ume Khere: T10] Vo t 9t2 Write and test function that computes the time required t0 reach specified height h; for given value of Vo: The function input should be h; Ve and & Test your function for the case where h=100 m, Vo-50 m/s and ... ##### Chapter 24 Nutritional Care and Support 10 et? 31. A patient has taken an H, blocker... Chapter 24 Nutritional Care and Support 10 et? 31. A patient has taken an H, blocker for her ulcer disease for several years. She is careful to maintain a nutrition- ally balanced diet, but a blood test shows that she is deficient in at least one major nutrient. H, blockers are known to sometimes in... ##### During the most recent craze of fitness called CrossFit, orthopedic 7. 
surgeons have been busy due... During the most recent craze of fitness called CrossFit, orthopedic 7. surgeons have been busy due to the immense amount of jumping involved in the workouts When the patella tendon is completely torn due to repeated jumping on hard surfaces, it is no longer attached to the kneecap which causes the k...
https://socratic.org/questions/how-do-you-multiply-a-4a-3
# How do you multiply a(4a+3)?

Distribute the $a$ into the parentheses: multiplying $a$ by each term inside gives $4 a \times a + 3 \times a = 4 {a}^{2} + 3 a$.
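A quick way to confirm the expansion: two polynomials of degree at most 2 that agree at three distinct inputs must be identical, so checking three values of $a$ proves the identity (a minimal Python sketch, not part of the original answer):

```python
def lhs(a):
    return a * (4 * a + 3)       # the original product a(4a + 3)

def rhs(a):
    return 4 * a**2 + 3 * a      # the claimed expansion

# Two polynomials of degree <= 2 that agree at three distinct points
# are identical, so these three checks prove the identity.
for a in (0, 1, 2):
    assert lhs(a) == rhs(a)
```

The same trick works for any claimed polynomial expansion: test at one more point than the degree.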
http://pballew.blogspot.co.uk/2011_01_01_archive.html
## Monday, 31 January 2011

### Jost Burgi and Logarithms

Just found an article on the net (part of the LOCOMAT project) by Denis Roegel about Jost Burgi's “Progress Tabulen” of 1620. This is the publication that many people use to justify Burgi as an independent inventor of logarithms. In fact, he had made his method known to Tycho Brahe and others prior to the actual publication of Napier's Canon of Logarithms. Roegel suggests that, in fact, what Burgi has done is: There is no doubt that Bürgi came very close to the notion of logarithm, but if we call logarithms a correspondence between two infinite continua representing real numbers, such that multiplication becomes addition, then Bürgi obviously only partly attained it, whereas Napier undoubtedly hit it. We can of course understand why this happened. It is in particular clear that Napier’s kinematic approach based on the theory of proportions gave him naturally a firm base in two infinite continua. We can also observe that Napier went beyond his needs. He needed only to define logarithms of sines, but his definition goes beyond. Bürgi, on the other hand, basically only needed to define the multiplication of numbers between 1 and 10 (or $10^8$ and $10^9$) and he therefore introduced a restricted way to do these computations, which at the same time prevented him from defining the general logarithm function which he didn’t really need. A fair comparison between Napier and Bürgi requires a clear definition of the notion of logarithms, but it also makes it necessary to distinguish the abstract notion of logarithm from the knowledge of logarithmic computation. It is by confusing these two notions that many authors were led to attribute the discovery of logarithms to Bürgi or some of his forerunners, when they have in fact only produced what are, admittedly, tables of logarithms.

The date on the article is Jan 11, 2011, only a few days before Jan 31, the date on which Burgi died in 1632.
For those who haven't heard of Burgi, he is considered one of the great mechanics of the period, and created beautiful armillaries and mechanized globes that are still works of both engineering and artistic beauty. He communicated with Brahe and Kepler, and was urged to publish his "Tabulen" by both. From Wikipedia: The most significant artifacts designed and built by Burgi surviving in museums are:

* Several mechanized celestial globes (now in Paris, Zürich (Schweizerisches Landesmuseum), Stuttgart (Württembergisches Landesmuseum) and Kassel (Orangerie, 2x, 1580-1595))
* Several clocks in Kassel, Dresden (Mathematisch-Physikalischer Salon) and Vienna (Kunsthistorisches Museum)
* Sextants made for Kepler (at the National Technical Museum in Prague)
* The Mond-Anomalien-Uhr (a mechanical model of the irregularities of the motion of the Moon around the Earth)

Roegel doesn't deny the creative greatness of Burgi, but suggests it is greatness of a second order, or as he says in his paper: That being said, our purpose was in no way to diminish Bürgi’s contributions. Instead, we are a great admirer of his technical and mathematical contributions. But we have felt that justice had not been given to the Progress Tabulen and that wishful thinking by Swiss landsmen had somewhat distorted Bürgi’s work, and that it had again to be put straight. Although we do not consider that Bürgi discovered or invented logarithms, we think it is still appropriate to quote the words of Cajori in 1915: “The facts as they are known to-day assign to Napier the glory of a star of the first magnitude as the inventor of logarithms who first gave them to the world, and to Bürgi the glory of a star of lesser magnitude, but shining with an independent light.” [from Florian Cajori, "Algebra in Napier’s day and alleged prior inventions of logarithms," in Knott, pages 93-109.]
## Sunday, 30 January 2011

### Math Trivia for the Super Bowl

Here is one to bring out and impress your friends during the Super Bowl commercials..... Ole Romer is mostly remembered (by those who remember at all) for being the guy who first measured the speed of light. He cleverly used Cassini's observation that the moons of Jupiter seemed to have an irregular pattern and decided to measure the eclipse of Io as the earth approached and receded from Jupiter. The difference led him to the first calculation of the speed of light; "light seems to take about ten to eleven minutes [to cross] a distance equal to the half-diameter of the terrestrial orbit". He was a man of many interests, and among other things was responsible for the first street lights in Copenhagen and the idea of a Meridian Circle. And while working on the design of gears, he came upon the idea of a shape called the astroid (star-like). Because it can be formed by a point moving on a circle rolling around another circle it is also called a hypocycloid. Steelers fans will recognize it from the helmet decal. If you are old enough, you may remember drawing these with the Spirograph set you got for Christmas. The formula for the astroid is $x^{2/3} + y^{2/3} = a^{2/3}$. The actual length of the curve is $6a$. In a unit circle, this makes the length of the astroid exactly the same as the perimeter of an inscribed hexagon. The area inside the astroid is 3/8 of the circle's area. The astroid is interesting because if you draw any tangent to the curve and extend it so it cuts both the x and y axes, the length of the tangent segments will all be the same. And perhaps as it should be, there is an asteroid in space, named for Ole Romer who first came up with the astroid shape. And the difference in spelling??? That's just how it's done. Who can explain spelling? It's not like it's math or something that makes sense.
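The three claims above (length $6a$, area $3/8$ of the circle's, and equal tangent segments) are easy to check numerically with the standard parametrization $x = a\cos^3 t$, $y = a\sin^3 t$; the sketch below is my own illustration, not from the original post:

```python
import numpy as np

a = 1.0
t = np.linspace(0.0, 2.0 * np.pi, 400_001)
x = a * np.cos(t) ** 3           # standard astroid parametrization
y = a * np.sin(t) ** 3

# total arc length, approximated by summing chord lengths (should be close to 6a)
length = np.hypot(np.diff(x), np.diff(y)).sum()

# enclosed area via the shoelace form of Green's theorem
# (should be close to 3*pi*a**2/8, i.e. 3/8 of the circle's area)
area = 0.5 * abs(np.sum(x[:-1] * np.diff(y) - y[:-1] * np.diff(x)))

def tangent_segment_length(tc):
    """Length of the tangent at parameter tc between its two axis intercepts."""
    px, py = a * np.cos(tc) ** 3, a * np.sin(tc) ** 3
    slope = -np.tan(tc)          # dy/dx for the astroid
    x_int = px - py / slope      # where the tangent meets y = 0
    y_int = py - slope * px      # where it meets x = 0
    return np.hypot(x_int, y_int)

# every tangent segment cut off by the axes has length exactly a
for tc in (0.3, 1.0, 2.5):
    assert abs(tangent_segment_length(tc) - a) < 1e-9
```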
## Saturday, 29 January 2011

### The Mathematics of Karl Marx

This Monday, Jan 31, is the birth date of Sofya Yanovskaya, who was Professor of Mathematics at Moscow State University. She received the prized Order of Lenin in 1951 and in 1959 she became the first chairperson of the newly created department of mathematical logic at Moscow State University. Wikipedia describes her as, "a mathematician and historian, specializing in the history of mathematics, mathematical logic, and philosophy of mathematics. She is best known for her efforts of restoring mathematical logic research in the USSR and publishing and editing mathematical works of Karl Marx." [emphasis added]. Go on, admit it, you didn't know Marx did math did you... yeah, me neither. But it seems his math impacted on the study of math in China much later, and even has an effect on math study there today. Here is the impact as described by Joseph W. Dauben

Title: Marx, Mao and Mathematics: The Politics of Infinitesimals

The “Mathematical Manuscripts” of Karl Marx were first published (in part) in Russian in 1933, along with an analysis by S. A. Yanovskaya. Friedrich Engels was the first to call attention to the existence of these manuscripts in the preface to his Anti-Dühring [1885]. A more definitive edition of the “Manuscripts” was eventually published, under the direction of Yanovskaya, in 1968, and subsequently numerous translations have also appeared. Marx was interested in mathematics primarily because of its relation to his ideas on political economy, but he also saw the idea of variable magnitude as directly related to dialectical processes in nature.
He regarded questions about the foundations of the differential calculus as “a touchstone of the application of the method of materialist dialectics to mathematics.” Nearly a century later, Chinese mathematicians explicitly linked Marxist ideology and the foundations of mathematics through a new program interpreting calculus in terms of nonstandard analysis. During the Cultural Revolution (1966-1976), mathematics was suspect for being too abstract, aloof from the concerns of the common man and the struggle to meet the basic needs of daily life in a still largely agrarian society. But during the Cultural Revolution, when Chinese mathematicians discovered the mathematical manuscripts of Karl Marx, these seemed to offer fresh grounds for justifying abstract mathematics, especially concern for foundations and critical evaluation of the calculus. At least one study group in the Department of Mathematics at Chekiang Teachers College issued its own account of “The Brilliant Victory of Dialectics - Notes on Studying Marx's ‘Mathematical Manuscripts’.” Inspired by nonstandard analysis, introduced by Abraham Robinson only a few years previously, some Chinese mathematicians adapted the model Marx had laid down a century earlier in analyzing the calculus, and especially the nature of infinitesimals in mathematics, from a Marxist perspective. But they did so with new technical tools available thanks to Robinson but unknown to Marx when he began to study the calculus in the 1860s. As a result, considerable interest in nonstandard analysis has developed subsequently in China, and almost immediately after the Cultural Revolution was officially over in 1976, the first all-China conference on nonstandard analysis was held in Xinxiang, Henan Province, in 1978.

## Friday, 28 January 2011

### A Kilogram is a Kilogram is....Oops, Wait...

Just read an interesting article about attempts to standardize the kilogram in terms of universal constants... that seems not to work..
"Since 1889, shortly after SI units were adopted, the kilogram has been defined as the mass of a cylinder made of platinum and iridium that is locked in a vault at the BIPM." OK, seems safe enough, but; "the kilogram's mass relative to several identical copies seems to be decreasing ever so slightly. The shift is troubling because there is no way to tell whether the copies are getting heavier, or the original is getting lighter." The longstanding plan has been to replace the venerable cylinder with a kilogram defined in terms of a fundamental constant of nature. "Fundamental constants are unchanging, [yeah? that's what you said about the mass of a cylinder made of platinum and iridium.] and a definition based on them would make the kilogram as fixed as the laws of the Universe." [which as you are about to discover, may be as flexible as we want them to be] AHA, but WHICH law... it seems there is trouble in "paradisio scientifica". Two methods are being used: a watt balance, which will give a kilogram as a function of Planck's constant, and the other is based on the number of atoms in a sphere of silicon, which would give the kg mass in terms of Avogadro’s number. BUT... "In recent years, each method has taken measurements accurate to around 30 parts per billion (in relative uncertainty); within reach of the most accurate measurements of the platinum–iridium cylinder. But each experiment's best measurements diverge from each other by around 175 parts per billion, a quantity far larger than metrologists have been prepared to accept." Ok, that difference won't be noticeable in your next bag of jasmine rice, but this week at the Royal Society in London the former head of the mass division at the International Bureau of Weights and Measures in France suggested that we compromise on the two values, then back-calculate to reset Planck's constant and Avogadro’s number... Surprisingly, not everyone liked that idea... "Let's see, what is Planck's constant on a Tuesday?"
## Wednesday, 26 January 2011

### Understanding Mathematics

Had a talk with a parent the other day that reminded me of this..thought it was worth a repeat run: I was sitting with a small group of math teachers at a meeting and I asked about their methods of "Testing for Understanding." It seems that for many (most?) the answer was a combination of "They can do the homework." or "They can pass the tests." Am I just looking for too much? Anyway, while doing some serious research (playing around on Google search) I came across a page called Understanding Mathematics, a study guide by Peter Alfeld. He writes, "You understand a piece of mathematics if you can do all of the following:

* Explain mathematical concepts and facts in terms of simpler concepts and facts.
* Easily make logical connections between different facts and concepts.
* Recognize the connection when you encounter something new (inside or outside of mathematics) that's close to the mathematics you understand.
* Identify the principles in the given piece of mathematics that make everything work. (i.e., you can see past the clutter.)

By contrast, understanding mathematics does not mean to memorize Recipes, Formulas, Definitions, or Theorems."

He then goes on to give a few examples (powers, logarithms, quadratics) which he uses to amplify his explanation. I think the student who can explain why $8^{-4/3} = 1/16$ without too much armwaving probably has a pretty deep understanding of the exponentiation process. I'm not as sure about his quadratic solutions model. He talks about the quadratic formula and then states, "Forget about the formula, it's an example of clutter. (To this day I cannot remember it.) But since you understand these matters you know how the formula was derived." I'm not sure I believe him. Ok, I know that sounds kind of harsh, but if you derive the quadratic formula a few times it sort of sits there looking at you all the way through, doesn't it... I see it and know what it is I'm working toward.
I agree that one ought to be able to derive the quadratic formula by completing the square, or at least solve quadratics without the formula, but then I remember a quote I have from Euler in my notes on "Twenty Ways to Solve a Quadratic Equation": Even a great mathematician like Euler, after deriving the formula, suggests “it will be proper to commit it to memory”, from An Introduction to the Elements of Algebra. Euler understood math, and when crunch time comes, you gotta' really sell me to make me disagree with him. I don't have a wide range of clear delineators of understanding, but a kid who looks at a quadratic with a positive leading coefficient and a negative constant term and can't tell me how many real solutions it has does NOT understand solving quadratics, no matter what he has memorized. And if I write some ugly equation on the board the kid has never seen before and ask, "What happens to the graph of this if I replace all the x's with x-2?" and he says it moves to the right (or left, I'm easy), I think they understand something big. If I tell him that (2,3) and (3,5) are on the graph of f(x) and ask him what $f^{-1}(3)$ is and they have no clue, I think they have missed something big. Here is a recent example: we were talking about H1N1 and working with the logistic curve, and I was explaining that the maximum rate of growth occurred at half the maximum value. I suggested that from this point they should be able to see that $ab^{-t} = 1$ and that would lead them to see that $b^t = a$. But for many (most) of them sorting out that the exponential part = 1 was too many steps at once... in fact for a large number, it just did not make sense that $1 + ab^{-t} = 100/50$ (seeing past the clutter?). Whether you think of this as a division property or the means and extremes property of proportions, it seems to me that if you don't see this "clumping" (I think that was Polya's term) you can't be very flexible in math patterns. Ok, so send me your gut tests...
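The logistic claim checks out numerically. Assuming a curve of the form $f(t) = 100/(1 + ab^{-t})$ (maximum value 100), with illustrative parameters $a = 50$, $b = 1.4$ of my own choosing, the growth rate peaks exactly where $ab^{-t} = 1$, i.e. where $f(t) = 50$:

```python
import numpy as np

a, b = 50.0, 1.4                    # assumed illustrative parameters
t = np.linspace(0.0, 30.0, 300_001)
f = 100.0 / (1.0 + a * b ** (-t))   # logistic curve with maximum value 100

growth = np.gradient(f, t)          # numerical df/dt
i = np.argmax(growth)               # sample where growth is fastest

t_star = np.log(a) / np.log(b)      # solves a * b**(-t) == 1, i.e. b**t == a
assert abs(t[i] - t_star) < 1e-2    # fastest growth occurs at t_star...
assert abs(f[i] - 50.0) < 0.1       # ...where the curve sits at half its maximum
```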
If they can do THIS, they mostly "get it", or, conversely, if they can't do THIS, then they definitely don't "get it".

## Tuesday, 25 January 2011

### The First Fractal, The Second Curve

If we assume that the length of a circle's circumference is known (well, do you KNOW pi?), then it was known before 100 BC. So what was the next curve whose length was figured out, and when.. Ok, do that movie thing where you have the pages ripping off the calendar...keep waiting...more pages... lots more... and then in the least expected place. After centuries of trying to figure out the length of the elliptic circumference, or the length of a parabola... and failing... mathematicians had about decided that there just were not many curves that could be rectified (find the length). Then, in 1645, Evangelista Torricelli (he's the guy who invented the barometer) was playing with a logarithmic spiral. Now the logarithmic spiral has several nice properties. First, it is equiangular; that is, if you draw a tangent to the spiral at any point, the angle between that tangent and a line drawn to the origin will always be the same angle. If you write the formula in polar form as $r = ae^{b\theta}$, then $b$ is the cotangent of the angle. As $b$ gets close to zero, the angle approaches 90 degrees, and the spiral gets more circular. In fact when $b = 0$, the equation produces a circle. The log spiral also has the property that if you drew a ray from the origin in any direction, the intersections with the spiral would occur at distances from the origin that are in a geometric ratio; if it crosses at distances of one unit and two units, the next cross would be at four units, and then at eight, etc. But it was this property that made it such an unlikely candidate to be the 2nd rectified curve, because it also decreases toward the origin in proportional steps.
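Both properties fall straight out of the polar form. Here is a minimal numeric sketch, assuming the spiral $r = ae^{b\theta}$ with arbitrary illustrative values of $a$ and $b$:

```python
import math

a, b = 1.0, 0.15                     # illustrative parameters; any a > 0, b > 0 works

def r(theta):
    return a * math.exp(b * theta)   # assumed polar form of the log spiral

# A fixed ray from the origin meets successive turns at theta0 + 2*pi*k;
# the crossing distances form a geometric progression with ratio exp(2*pi*b).
theta0 = 0.7
radii = [r(theta0 + 2 * math.pi * k) for k in range(5)]
ratios = [radii[k + 1] / radii[k] for k in range(4)]
assert all(abs(q - math.exp(2 * math.pi * b)) < 1e-12 for q in ratios)

# Equiangular property: tan(psi) = r / (dr/dtheta) = 1/b, the same at every point.
for theta in (0.0, 1.0, 5.0):
    dr = b * r(theta)                # dr/dtheta for this spiral
    assert abs(r(theta) / dr - 1.0 / b) < 1e-9
```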
So working the same curve I mentioned above back toward the origin it will cross the x-axis at 1/2, and then 1/4, and then...."Holy infinite sequence Batman, this goes on forever." But in spite of the fact that there are an infinite number of spirals around the origin, the distance from any point on the curve to the origin is finite.... read twice, that's right... finite.. and Torricelli not only figured that out, he put a number on it (and the children all say "Ooohhhh.") He did it by recognizing that, although the word would not be invented for over 300 years to come, the log spiral is a fractal. Any part of the curve is self-similar to any other. From this, Torricelli was able to determine that the infinite number of loops around the origin would simply add up to the distance from the tangent to the y-axis. It wasn't quite calculus...but we were sneaking up on it. The log spiral so enchanted Jacques Bernoulli that he commissioned it to be put on his grave. [unfortunately it wasn't drawn correctly]. His Latin inscription said, "eadem mutata resurgo" which translates to something like "though changed, I will arise the same." I got an email after I posted this reminding me that Descartes was the discoverer of the equiangular spiral. It seems it comes up when studying dynamics. I have also been guided to a neat approach to drawing the curve at Wolfram's Mathworld page: "The logarithmic spiral can be constructed from equally spaced rays by starting at a point along one ray, and drawing the perpendicular to a neighboring ray. As the number of rays approaches infinity, the sequence of segments approaches the smooth logarithmic spiral (Hilton et al. 1997, pp. 2-3)." Mathworld also has a nice page to illustrate that the log spiral will show up in mutual pursuit problems.

-------------------------

Beautiful story about Torricelli and the barometer, probably false, but still cute: When he was using water-filled barometers the tubes would have to be over thirty feet high.
Torricelli had one stick through the roof of his house and he put a red floating ball in the water to make it more visible. Unfortunately, his superstitious neighbors could see this, and became somewhat upset when the appearance and disappearance of the floating "devil" seemed to cause the weather to change... so they stormed his house and burnt it down. That was supposedly ONE of the reasons he switched to a mercury barometer.

## Monday, 24 January 2011

### Notes on the History of Graph Paper

Just re-ordered graph paper for next year for my department. We don't use nearly as much these days as five or ten years ago... calculators have made them much less common in schools. It reminded me that I hadn't actually put anything here about my notes on the history of graph paper, so for those who are interested... Graph paper, a math class staple, was developed between 1890 and 1910. During this period the number of high school students in the U.S. quadrupled, and by 1920, according to E. L. Thorndike, one of every three teenagers in America “enters High School”, compared to one in ten in 1890. The population of “high school age” people had also grown so that the total number of people entering HS was six times as great as only three decades before. Research mathematicians and educators took an active interest in improving high school education. E. H. Moore, a distinguished mathematician at the University of Chicago, served on mathematics education panels and wrote at length on the advantages of teaching students to graph curves using paper with “squared lines.” When the University of Chicago opened in 1892 E. H. Moore was the acting head of the mathematics department. “Moore was born in Marietta, Ohio, in 1862, and graduated from Woodward High School in Cincinnati” (from Milestones in (Ohio) Mathematics, by David E. Kullman). Moore was President of the American Mathematical Society in 1902.
The Fourth Yearbook of the NCTM, Significant Changes and Trends in the Teaching of Mathematics Throughout the World Since 1910, published in 1929, has on page 159, “The graph, of great and growing importance, began to receive the attention of mathematics teachers during the first decade of the present century (20th)”. Later on page 160 they continue, “The graph appeared somewhat prior to 1908, and although used to excess for a time, has held its position about as long and as successfully as any proposed reform. Owing to the prominence of the statistical graph, and the increased interest in educational statistics, graphic work is assured a permanent place in our courses in mathematics.” [emphasis added] Hall and Stevens' “A School Arithmetic”, printed in 1919, has a chapter on graphing on “squared paper”. John Bibby has written (August 2012) to advise me that John Perry, who was at the time President of the Institution of Electrical Engineers, has a section on "Use of Squared Paper" in an article in Nature in 1900 (The teaching of mathematics, Nature, Aug 1900, pp. 317-320). They wanted $19 to see the article, so I take John at his word. I did find another similar endorsement of "squared paper" by Perry in "England's Neglect of Science", published in 1900 also. On page 18, after several lamentations about trained engineers who had no ability/understanding of the mathematics applying to their field, he writes: "I tell you, gentlemen, that there is only one remedy for this sort of thing. Just as the antiquated method of studying arithmetic has been given up, so the antiquated method of studying other parts of mathematics must be given up. The practical engineer needs to use squared paper." The actual first commercially published “coordinate paper” is usually attributed to Dr. Buxton of England in 1795 (if you know more about this man, let me know). The earliest record I know of the use of coordinate paper in published research was in 1800.
Luke Howard (who is remembered for creating the names of clouds.. cumulus, nimbus, and such) included a graph of barometric variations. [On a periodical variation of the barometer, apparently due to the influence of the sun and moon on the atmosphere. Philosophical Magazine, 7: 355-363.]

[The above was gathered from a number of authoritative sources including a Smithsonian site, but on a recent visit to Monticello, the home of my longtime favorite American President, Thomas Jefferson, I discovered it was in error. I found a use by Jefferson of the paper for architectural drawings earlier than any of these dates. Here is the information from the Monticello web site.]

Prior to 1784, when Jefferson arrived in France, most if not all of his drawings were made in ink. In Paris, Jefferson began to use pencil for drawing, and adopted the use of coordinate, or graph, paper. He treasured the coordinate paper that he brought back to the United States with him and used it sparingly over the course of many years. He gave a few sheets to his good friend David Rittenhouse, the astronomer and inventor: "I send for your acceptance some sheets of drawing-paper, which being laid off in squares representing feet or what you please, saves the necessity of using the rule and dividers in all rectangular draughts and those whose angles have their sines and cosines in the proportion of any integral numbers. Using a black lead pencil the lines are very visible, and easily effaced with Indian rubber to be used for any other draught." {Jefferson to David Rittenhouse, March 19, 1791} A few precious sheets of the paper survive today.

The increased use of graphs and graph paper around the turn of the century is supported by a preface to the “New Edition” of Algebra for Beginners by Hall and Knight. The book, which was reprinted yearly between the original edition and 1904, had no graphs appearing anywhere.
When the “New Edition” appeared in 1906 it had an appendix on “Easy Graphs”, and the cover had been changed to include the subhead, “Including Easy Graphs”. The preface includes a strong statement that “the squared paper should be of good quality and accurately ruled to inches and tenths of an inch. Experience shews that anything on a smaller scale (such as ‘millimeter’ paper) is practically worthless in the hands of beginners.” He finishes with the admonition that, “The growing fashion of introducing graphs into all kinds of elementary work, where they are not wanted, and where they serve no purpose – either in illustration of guiding principles or in curtailing calculation – cannot be too strongly deprecated. (H. S. Hall, 1906)” The appendix continued to be the only place where graphs appeared as late as the 1928 edition.

The term “graph paper” seems not to have caught on quickly. I have a Hall (the same H. S. Hall as before) and Stevens, A School Arithmetic, printed in 1919, that has a chapter on graphing on “squared paper”. Even later is a 1937 D. C. Heath text, Analytic Geometry by W. A. Wilson and J. A. Tracey, that uses the phrase “coordinate paper” (page 223, topic 153). Even in 1919, Practical Mathematics for Home Study by Claude Irwin Palmer introduced a section on “Area Found by the Use of Squared Paper” and then defined “paper accurately ruled into small squares” (pg 183). It may be that the term squared paper hung on much longer in England than in the US. I have a 1961 copy of Public School Arithmetic (“Thirty-sixth impression,” first published in 1910) by Baker and Bourne published in London that still uses the term “squared paper” but uses graphs extensively.

Of course "graph paper" could not have preceded the term "graph" for a curve of a function relationship, and many teachers and students might be surprised to know that it was not until 1886 that George Chrystal wrote in his Algebra I, "This curve we may call the graph of the function."
The actual first known use of the term "graph" for a mathematical object predates this event by only eight years, and occurred in a discrete math topic. J. J. Sylvester published a note in February 1878 using 'graph' to denote a set of points connected by lines to represent chemical connections. In that note, "Chemistry and Algebra", Sylvester wrote: "Every invariant and covariant thus becomes expressible by a graph precisely identical with a Kekulean diagram or chemicograph". A more or less famous Kekule structure is the benzene shown at right. This short note in Nature was more a notice of the more complete paper he had written for the American Journal of Mathematics, Pure and Applied, which appeared the same month, "On an application of the new atomic theory to the graphical representation of the invariants and covariants of binary quantics, — with three appendices." The term "graph" first appears in this paper on page 65. The images he uses appear below.

## Sunday, 23 January 2011

### Math Myths

Just read this on a blog by Dr. Michael Taylor. Thought it ought to be printed out and placed on the wall of every math classroom (especially on Parents' night... "Well, I'm not surprised he/she is doing poorly... I could never do math.")

Five of the most common math myths are:
1. The Genius Myth (that good mathematicians are born with special math talent and enormous left brains);
2. The Good Memory Myth (that good mathematicians have a phenomenal memory for formulas);
3. The Using-Tools-Is-Cheating Myth (that good mathematicians don’t use fingers, toes and calculators);
4. The Gender Myth (that good mathematicians are all men despite the abundance of female bio-statisticians);
5. The Who-Needs-It-Anyway Myth (that math is useful only to mathematicians).

But the biggest myth of them all is the “I-Can't-Do-Math Myth”. I recently taught multivariate calculus to a class of non-mathematicians and social scientists. It wasn’t just them asking the question “why are we here?”
But an open mind is a powerful adversary. They soon dusted off this myth in a matter of months. Yes, for some, math is like sorcery. We all have our superstitions to overcome. The good news is that we can. Arthur C. Clarke once said that “any sufficiently advanced technology is indistinguishable from magic”. If that technology is born of math, then however miraculous or foreboding it appears, we will learn to embrace it. Mathematics – to learn. Let’s face our fears, dispel the myths and advance. There are legends to be made.

POSTSCRIPT..... Steven Colyer suggested two more that could be added to the "Math Myths":
6) Math is boring.
7) Mathematics is finished.

So what else should be on the list.. (and do NOT suggest (x+y)² = x² + y²)

### Tangible Geometry

Had a comment to a post the other day that turned out to be from a high school senior. I checked out her web site where she does incredibly cool geometric stuff... check it out... This child obviously does not need homework to get an education. This one is a stellated icosahedron, and apparently there are 59 different versions of it. [You can download a Wolfram Demonstration of them if you have their free player] This is the kind of stuff Kepler played with when he wasn't measuring the planets. He discovered two of the three stellations of the dodecahedron. If you are curious, I will tell you that the tetrahedron and cube have no stellations (figure out why they don't), and the octahedron only has the beautiful stella octangula.

### We've Come a Long Way, Baby

Steven Colyer had a blog the other day about calculators of his youth. Like me, I think, he grew up with slide rules, and handheld calculators were amazing.... and large... and very expensive. He reminded me of a magazine article I had seen, but now can't find, but in searching for it I came across some interesting reminders of how far we have come..
This first is from Popular Science, June 1971, ...just in time for back-to-school buying.. keep in mind that the conversion would be about $5 in 2011 for every $1 in 1970. Then there is this picture from the same magazine only four years later.. (Feb 1975). The $29 four-function calculators mostly did not have a decimal point. From the same issue I found this quote reminding us that early calculators were even more damaging to student learning because they didn't use the standard algebraic order of operations: "And it is undeniable that the practice among the £30 to £70 scientific calculators has been to adopt algebraic logic universally — possibly because many potential customers for these machines have graduated from basic four-function ..."

It took a while, but eventually there was a graphing calculator. This ad is from the New Scientist in September 1987. The prices were as low as 40 British pounds (about $60 then, I think).

## Saturday, 22 January 2011

### Which Platonic Solid is Most-Spherical?

If you inscribe a regular polygon into a given circle, the larger the number of sides, the larger the area of the polygon. I guess I always thought that the same would apply to Platonic solids inscribed in a sphere..... It doesn't. I noticed this as I was looking through "The Penny Cyclopædia of the Society for the Diffusion of Useful Knowledge" (1841). As I browsed the book, I came across the table below:

The table gives features of the Platonic solids when inscribed in a one-unit sphere. At first I thought they must have made a mistake, but not so. The dodecahedron fills almost 10% more of a sphere (about 66%) than the icosahedron (about 60%). So the dodecahedron is closer to the sphere than the others. Interestingly, if you look at the radii of the inscribed spheres, it is clear that solids which are duals are tangent to the same internal sphere.
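If you want to check those percentages without the Penny Cyclopædia, they follow from the standard unit-edge volume and circumradius formulas for the two solids. Here is a minimal Python sketch (the variable names and layout are mine):

```python
import math

# Volume and circumradius of each solid with edge length 1,
# from the standard formulas for the Platonic solids.
solids = {
    # name: (unit-edge volume, unit-edge circumradius)
    "dodecahedron": ((15 + 7 * math.sqrt(5)) / 4,
                     (math.sqrt(3) / 4) * (1 + math.sqrt(5))),
    "icosahedron": ((5 / 12) * (3 + math.sqrt(5)),
                    (1 / 4) * math.sqrt(10 + 2 * math.sqrt(5))),
}

sphere = 4 * math.pi / 3  # volume of the unit sphere

# Rescale each solid so its circumradius is 1 (inscribed in the unit
# sphere); volume scales with the cube of the edge length.
fraction = {name: v * (1 / r) ** 3 / sphere
            for name, (v, r) in solids.items()}
# fraction["dodecahedron"] is about 0.665, fraction["icosahedron"] about 0.605
```

So the inscribed dodecahedron fills about 66.5% of the sphere and the icosahedron only about 60.5%, matching the table.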
But look at the table of volumes when the solids are circumscribed about a sphere tangent to each face: when you put the Platonic solids around a sphere, the smallest one, and thus the closest to the sphere, is the icosahedron. This leads to the paradox that when the Platonic solids are circumscribed about a sphere, the icosahedron is closest to the sphere in volume (thus most spherical?), but when they are inscribed in a sphere, the dodecahedron is closest to the volume of the sphere (and thus most spherical?)... hmmm

Here is a table of the same values when the surface area (superficies) is one square unit. Notice that for a given surface area, the icosahedron has the largest volume, so it is the most efficient "packaging" of the solids (thus more spherical?). I guess that makes it 2-1 for the icosahedron, so I wasn't completely wrong all along.

POSTSCRIPT::: Allen Knutson's comments on the likely cause of this reversal of "closeness" to the sphere:

I think it's about points of contact. On the inside, the dodecahedron touches the sphere at the most points (20), and on the outside, the icosahedron touches the sphere at the most points (again 20). Indeed: my recipe would suggest that inside, the 8-vertex cube is bigger than the 6-vertex octahedron, and outside, the 8-face octahedron is smaller than the 6-face cube. Both are borne out by your tables.

Thank you, Allen

### To Learn, Take a Test

Heard about this report on several blogs, Gas station without pumps and a Quantum Blog in particular. Here is a cut from the Quantum Blog:

The article, To Really Learn, Quit Studying and Take a Test Already, reports on new research findings reported in Science that students who take a test asking them to actively recall information retain more than those who simply “study” or make concept maps. But what is awesome about this study is that they didn’t just measure how the students performed using the various study strategies; they also measured how the students thought they performed.
Here is the money quote from the NYT: These other methods [rereading notes and concept mapping] not only are popular, the researchers reported; they also seem to give students the illusion that they know material better than they do. In the experiments, the students were asked to predict how much they would remember a week after using one of the methods to learn the material. Those who took the test after reading the passage predicted they would remember less than the other students predicted — but the results were just the opposite.

I have worked for the last few years to learn to sit quietly in staff development days when they present each new "best practice". Now I can sit quietly and SMILE... Ain't research grand. The actual article is here in Science, but they charge mega-bucks to download... go to the library instead. SO... my newest new year's resolution.... give a test in every class every week..

## Friday, 21 January 2011

### A Neat Solution to the Towers of Hanoi Problem

Just browsing through Wikipedia, and they show a solution to the Towers of Hanoi puzzle that I had never seen, using a ruler as a solution key. If you have been off planet for the last 130 years and don't know the Towers problem, you can play online here. You might try that first, and set the number of discs to 6 so that it matches the solution shown below. And for those who know the game but just want to see how a ruler is used, here is the graphic.

For any move, just move the disc whose size compares to the marks on the ruler. For instance, the first five marks on a ruler marked in 32nds would be 1/32, 1/16, 3/32, 1/8, 5/32.... The denominators tell you which disc to move. The largest denominator (smallest scale) goes with the smallest disc, etc. If you then apply two fundamentals of any solution (always move the smallest disc from rod A to B to C and back to A in a cycle, and never put a bigger disc on a smaller one), then you have a solution... That's easier than Gray codes, isn't it?
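For anyone who wants to experiment, the ruler rule is short to program. Here is a Python sketch (the function name and peg labels are my own): on move k, the disc to move is given by the position of the lowest set bit of k, which is exactly the ruler sequence; the smallest disc cycles through the pegs in a fixed direction, and every other disc has only one legal move.

```python
def hanoi_ruler(n):
    """Solve the Towers of Hanoi for n discs using the ruler sequence."""
    pegs = {"A": list(range(n, 1 - 1, -1)), "B": [], "C": []}
    # The smallest disc cycles through the pegs in one fixed direction;
    # which direction depends on the parity of n.
    cycle = "ABC" if n % 2 == 0 else "ACB"
    moves = []
    for k in range(1, 2 ** n):
        disc = (k & -k).bit_length()   # ruler sequence: 1,2,1,3,1,2,1,4,...
        src = next(p for p in "ABC" if pegs[p] and pegs[p][-1] == disc)
        if disc == 1:
            dst = cycle[(cycle.index(src) + 1) % 3]
        else:
            # any disc other than the smallest has exactly one legal move
            dst = next(p for p in "ABC"
                       if p != src and (not pegs[p] or pegs[p][-1] > disc))
        pegs[dst].append(pegs[src].pop())
        moves.append((disc, src, dst))
    return moves, pegs

moves, pegs = hanoi_ruler(6)
# 2^6 - 1 = 63 moves, and the whole tower ends up on peg C
```

The first eight discs moved are 1, 2, 1, 3, 1, 2, 1, 4, matching the denominators of the ruler marks above.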
Why have I never encountered this before? The connection was made in 1956 by Donald W. Crowe, in relation to traversing the vertices of a cube in n dimensions [D. W. Crowe, The n-dimensional cube and the tower of Hanoi, Amer. Math. Monthly, 63 (1956), 29-30]. Those interested in a little history of the puzzle can find a few brief notes here.

POSTSCRIPT:::: For another really insightful solution (maybe the best of them all) see the comment by Jeffo.... Thanks, guy, why don't I see ideas like that?

## Thursday, 20 January 2011

### Microsoft Mathematics is FREE!

Am I the last person on the block to come across Microsoft Mathematics? I downloaded it (it is now free, it seems) and tried it out this week. Here is a brief description from the "About" notes:

Microsoft Mathematics provides a set of mathematical tools that help students get school work done quickly and easily. With Microsoft Mathematics, students can learn to solve equations step-by-step, while gaining a better understanding of fundamental concepts in pre-algebra, algebra, trigonometry, physics, chemistry, and calculus. Microsoft Mathematics includes a full-featured graphing calculator that’s designed to work just like a handheld calculator. Additional math tools help you evaluate triangles, convert from one system of units to another, and solve systems of equations.

It seems to have several nice features that kids at the 9-12 level would enjoy. I noticed right off that some of my kids studying trig would love the fact that it has an inverse function for the secant, cosecant, and cotangent that works in degrees, radians, and the almost extinct gradians (more commonly called grads). If you don't know what grads are, read here. It also has several computer algebra skills: indefinite integrals, expansion of powers of an expression (for students, it will raise (x+y)^5 and expand it), and of course factoring.
It works in real and complex numbers, and graphs in 2D and 3D with input in Cartesian or polar form (and you can enter implicit relations). The matrix input allows up to 15x15, and the linear algebra keys include dot (inner) and cross products and will row-reduce a matrix (yes, it does inverses, but you don't HAVE to use it). It will solve equations or do integrals and then show and explain the steps. I expect that many students will abuse this as they do other technologies available to them, but the student who really wants to learn math can use this as a valuable resource. There is also an add-in for Word.

### A Little Math Music, Maestro

I recently wrote about the historical connection between math and poetry, and now we have to think about the connection between music and math. Anyway, thanks to Dave Richeson, who had this and several other "math songs" on his Division by Zero blog recently. My students may all be too young to remember Queen.. but enjoy anyway.

And of course, a tired old joke I tell my students each year as we get to integration:

Two math professors are sitting in a pub. "Isn't it disgusting", the first one complains, "how little the general public knows about mathematics?" "Well", his colleague replies, "you're perhaps a bit too pessimistic." "I don't think so", the first one replies. "And anyhow, I have to go to the washroom now." He goes off, and the other professor decides to use this opportunity to play a prank on his colleague. He makes a sign to the pretty blonde waitress to come over. "When my friend comes back, I'll wave you over to our table, and I'll ask you a question. I would like you to answer: x to the third over three. Can you do that?" "Sure." The girl giggles and repeats several times: "x to the third over three, x to the third over three, x to the third over three..." When the first professor comes back from the washroom, his colleague says: "I still think you're way too pessimistic.
I'm sure the waitress knows a lot more about mathematics than you imagine." He makes her come over and asks her: "Can you tell us what the integral of x squared is?" She replies: "x to the third over three." The other professor's mouth drops wide open, and his colleague grins smugly when the waitress adds: "...plus C."

## Wednesday, 19 January 2011

### More About Distribution of Births

In a comment on my blog about the Distribution of Birth Dates, Mary O'Keeffe sent a link to an article that addressed several questions about the distribution of births, including how tax incentives had changed births around the end of the year. Here is an abstract from a paper published in the Journal of Political Economy which attempted to measure the size of the tax incentive effect on behavior:

Because the tax savings of having a child are realized only if the birth takes place before midnight, January 1, the incentives for the "marginal" birth are substantial. Using a sample of children from the National Longitudinal Survey of Youth, we find that the probability that a child is born in the last week of December, rather than the first week of January, is positively correlated with tax benefits. We estimate that increasing the tax benefit of having a child by $500 raises the probability of having the child in the last week of December by 26.9 percent.

The article had several nice graphs and tables. Here are a couple of graphs that I found pertinent: This chart shows the births in the last week before New Year's (black) and the following first week in January for consecutive years. You can click on the chart to enlarge. It also had a graph which supported my belief that weekend births had become far less likely with the practice of inducing labor. Here is that one:

### "Complex" Physics?

I love the new one from XKCD... Here is only the teaser frame.. go there.
## Tuesday, 18 January 2011

### Distribution of Birth Dates

I recently wrote about several variations of the Birthday problem, here and here. Steven Colyer from the "Multiplication by Infinity" blog pointed out in a comment that in reality the distribution of birthdays is not uniform. There are months and days that are more likely than others. I actually have a graph that I show my Stats kids each year that illustrates this. In this age of births in hospitals, the number of births on holidays and weekends is reduced. My only file on this is from 1979, and I suspect the difference is even more exaggerated in more recent years. In this graph the day of the year (1-365) is on the x-axis and the number of babies born is on y. You can see the Sat/Sun values are about 80% of the weekdays. You can also see a rise in births in the late summer, about July to September. Ok, if you want this one, you can find it at the Chance database, which you can cut and paste to almost any statistical software. If anyone has a more current birth-by-date data chart for a single year, I would love to have it.

Roy Murphy (murphy@panix.com) retrieved birthdates from 480,040 insurance policy applications made from 1981 through 1994 to a life insurance company. Here is the distribution by month compared to the expected number:

Also, for those who are sure to ask... A scientific study conducted in 1994 found that "scientific analysis of data does not support the belief that the number of births increases as the full moon approaches, therefore it is a myth not reality." See: "Labor ward workload waxes and wanes with the lunar cycle, myth or reality?"
source: PubMed, National Library of Medicine, hosted by nih.gov

## Monday, 17 January 2011

### Tortured by Math - Dangerous Knowledge

I was exposed to a nice collection of YouTube videos that cover the BBC documentary Dangerous Knowledge from 2007 on the Math Frolic blog by Shecky. "In this one-off documentary, David Malone looks at four brilliant mathematicians - Georg Cantor, Ludwig Boltzmann, Kurt Gödel and Alan Turing ..." Here is the first of ten. You can find the entire set.

### Sum of Cubes, Square of Sum.. Potato/Po'ta toe?

Ok, here is an interesting trick; take any number and make a list of its factors (including one and itself). Now count the number of divisors each factor has (include one and the number itself) and put them in another list. Now cube each of the numbers in that new list, and also square the sum of the list. MAGIC??? Let me illustrate. If I pick 12, the factors are {1, 2, 3, 4, 6 and 12}. The number of factors of those numbers are {1, 2, 2, 3, 4, 6}. The magic part... 1³ + 2³ + 2³ + 3³ + 4³ + 6³ = 324... and (1 + 2 + 2 + 3 + 4 + 6)² = 18² = 324...

This is related to the well-known relation that for any string of consecutive integers {1, 2, 3, ... n} the sum of the cubes is equal to the square of the sum, $\sum_{k=1}^{n}k^3 = (\sum_{k=1}^{n}k)^2$ (students should prove this by induction). I came across this recently at a blog called Alasdair's Musing. He gives credit to Joseph Liouville. He has a nice proof of the relation using Cartesian cross products of sets. Yes, children, you want to know what that means.

## Saturday, 15 January 2011

### Unbelievably Prime-emirP ylbaveilebnU

Where do they find the time? I came across this amazing piece of trivia on a web page called "Futility Closet" by Greg Ross. 3139971973786634711391448651577269485891759419122938744591877656925789747974914319422889611373939731 is prime, whether it’s spelled forward or backward.
Further, if it’s cut into 10 pieces of 10 digits each and stacked into a 10x10 square, each row, column, and diagonal is itself a reversible prime. Discovered by Jens Kruse Andersen. Now I'm wondering, is it possible to find a four-digit prime that might be similarly divided into a 2x2 square with the same row, column and diagonal primeness? Ahh, go on, you know you want to.... but share if you find one. OK, so 13 and 31 are reversible primes, and 1331 is ....crap.. ok, your turn.

### Math and Poetry

My beautiful wife Jeannie is a poet, so I like the idea that the connections between math and poetry go back a long way. It might surprise you to know that:
1) The first use of binary numbers
2) The first recorded illustration of what we now call Pascal's Arithmetic Triangle
3) The Fibonacci sequence
all appeared first in a book on Sanskrit poetry written over two thousand years ago, Pingala's "Chandahsutra". In Sanskrit poetry, and it seems in some modern languages, lines are described by their pattern of long and short syllables. Very little is known about the author's life, and in fact it seems that anything that is stated in one ancient text is contradicted by another. What we know about the work comes from a commentary on the text created by a 10th-century mathematician named Halayudha.

Describing short and long syllables seems a natural stimulus for binary notation. Instead of zeros and ones, Pingala used (or we suspect that he used) symbols for syllables which were Guru (heavy, given two beats) or Laghu (light, given one beat). To illustrate that there could only be eight line patterns with three syllables he lists them: LLL, LLH, LHL, HLL, LHH, HLH, HHL, HHH. I think there is still some doubt about whether Pingala actually made a diagram similar to the arithmetic triangle (which they called meruprastāra) or if that was added by Halayudha.
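Both of the counts that come out of this syllable game are easy to check by brute force. Here is a short Python sketch (the function names are mine): tallying all lines of n syllables by their number of light syllables reproduces the rows of the meruprastāra (Pascal's triangle), while tallying lines by their total duration in beats produces the Fibonacci numbers.

```python
from itertools import product

def patterns(n):
    """All lines of n syllables, each Laghu 'L' (1 beat) or Guru 'H' (2 beats)."""
    return ["".join(p) for p in product("LH", repeat=n)]

def triangle_row(n):
    """Count n-syllable lines by number of light syllables: a Pascal row."""
    row = [0] * (n + 1)
    for p in patterns(n):
        row[p.count("L")] += 1
    return row

def lines_of_duration(d):
    """Count lines whose total duration is d beats: a Fibonacci number."""
    return sum(1 for n in range(d + 1) for p in patterns(n)
               if p.count("L") + 2 * p.count("H") == d)
```

For three syllables, `triangle_row(3)` gives 1, 3, 3, 1 (eight lines in all, as Pingala listed), and the duration counts for 1, 2, 3, 4, 5 beats come out 1, 2, 3, 5, 8.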
In any event, it seems quite easy to see that if for n = 1, 2, 3, etc. syllables we count the number of short syllables, we get

1 1 (short, or long)
1 2 1 (short short; short long, long short; long long)
1 3 3 1

and the triangle is born. But if you decided to count the number of lines by length (remember heavy syllables last twice as long as light ones)... There can only be one line of length one, a single light accent. There are two lines of length two, a double light or a single heavy accent. For length three you can get three different types: HL, LH, or LLL. And by now you expect that somehow there will be five patterns for length four. Sure enough: LLLL, LLH, LHL, HLL, HH.... and we have the Fibonacci sequence. It is easy to see that the number of patterns of length n would be all the patterns of length n-2 followed by a heavy accent, plus all the patterns of length n-1 followed by a light accent. This is just the recursive definition of a Fibonacci sequence.

And in honor of the woman I love, and the millennia-old association between the things we love, here is a poem she wrote years ago in Japan. It was inspired by a friend, Idell Tong, who was in charge of something planned around the school. When Jeannie asked her how it was going, she replied, "Well, you know my philosophy: if you have a spare minute, worry." That became this:

TIME WELL SPENT

The Walls of your world are crumbling.
All the plans you made are disintegrating,
Got a minute? FRET!

When the boss's deadline is looming,
You know your rat is losing the race,
And your best is just not good enough.
Now's the time to PACE!

You're stuck in rush hour traffic,
Urging the taxi driver to hurry,
He gives you the glance of annoyance.
Just sit back, scowl, and WORRY!

The BIG event is scheduled,
Here you are with nothing to do,
All the presentations are perfect.

## Friday, 14 January 2011

### "You Ain't Seen Nothing Yet", But You Might Soon

Apologies to Bachman Turner Overdrive...
In my recent blog about the solution to a variation of the birthday problem, I promised to expand on the idea that if you haven't witnessed an occurrence of an event in a large (large enough) number of multiple trials, then we can say with confidence (95%, statistically) that the probability of success will be no greater than 3/n. This is often called the "statistical rule of three."

If a binomial event has a probability of p, then the probability that after n trials there are no successes will be $(1-p)^n$; you have to have a non-success on every trial. And if you want that probability to be less than .05, then we have an exponential inequality to solve, $(1-p)^n\leq .05$. We take the natural log of both sides to make the problem simpler to solve, $n\ln(1-p)\leq \ln(.05)$. Now if you have been wondering about where the three gets into it, this is it... the ln of .05 is very nearly -3. This gives us $n\ln(1-p)\leq -3$. Now we have to do a little calculus approximation. For very small values of p, ln(1-p) is very close to -p, which means our inequality is almost $n(-p)\leq -3$, and dividing we arrive at the boundary p = 3/n. So if we give 30 people an injection and none of them experience a side effect, the rule of three suggests that we can be pretty (95%) sure that the true probability of the side effect is no greater than 1/10.

It seems to me that this approach is looking at the problem a little wrong. Let me try to explain. Let's take a very simple problem. Suppose we have a machine that spits out two ping-pong balls every time you press a button. The machine has five settings inside that control the probability that the ping-pong balls are white (otherwise they are some other color). The probabilities are equally spaced: 0, 1/4, 1/2, 3/4, and 1. We want to know how frequently we might get no white balls when we push the button. We make a table of the possible settings, and the probability that both balls are non-white.
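That table takes only a few lines of Python to reproduce exactly, using fractions rather than decimals (the variable names are mine):

```python
from fractions import Fraction

# The five equally likely machine settings for P(white)
settings = [Fraction(k, 4) for k in range(5)]    # 0, 1/4, 1/2, 3/4, 1

# Probability that BOTH balls are non-white at each setting
no_white = [(1 - p) ** 2 for p in settings]      # 1, 9/16, 1/4, 1/16, 0

# Average over the settings, each of which has probability 1/5
total = sum(no_white) * Fraction(1, 5)
# total == Fraction(3, 8)
```

The average, 3/8, is the overall chance of seeing two non-white balls.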
Since the machine settings are random, each of them has a probability of 1/5. To find the total probability of getting no whites, we multiply 1/5 times each probability and add them up: (1/5)(1) + (1/5)(9/16) + (1/5)(1/4) + (1/5)(1/16) + (1/5)(0) = 3/8. We would expect that, on average, 3/8 of the times we run the machine, we get two non-white balls.

But there is a different question that can be asked: if someone got two non-white balls, what is the probability that the machine was set at some particular value (say p = 1/2)? This is the "conditional probability" that seems to make students crazy. But in this situation it looks pretty clear. Of 80 people who ran the machine, on average 30 of them got two non-white balls. If we look just at these 30 people, we would see that only 1 of them had occurred when the machine was set to p = 3/4, 4 when it was set to p = 1/2, and the other 25 occurred when the machine was set to p = 1/4 or p = 0. Notice that we can say two very similar sounding things that are quite different.

1) The probability you get two non-white balls IF the machine probability of white is set to p = 1/2 or greater is less than 1/4. (I think this is the approach the rule of three uses.)
2) The probability the machine is set to p(white) = 1/2 or higher IF you get two non-white balls is 5/30, or 1/6. (I think this is the way the confidence interval should be calculated.)

Now if we extend the number of options of the machine probability of white to any real number in the interval 0 to 1, then we can answer the same questions using integration to find the area under the curve (1-p)² to give us the probability of getting two non-white balls. It turns out to be 1/3. Now we can answer the same two questions as above:

1) The probability you get two non-white balls IF the machine probability of white is set to p = 1/2 or greater is less than 1/4.
(This is exactly the same as before; we are just asking what the function value is when the input is 1/2.)

2) The probability the machine is set to p(white) = 1/2 or higher IF you get two non-white balls is the ratio of the shaded area to the right of x = 1/2 to the total shaded area. It turns out that in this case (two failures) we get about .042/(1/3), or about 12.5%.

So what if we got three or four or more failures? The graph starts to look more and more skewed as the probability of failure clumps to the left of the curve. Here is the graph of (1-p)⁵ in red, and (1-p)¹⁰ in blue. Our rule of three says that if we got ten consecutive failures, we can be pretty sure that the probability of success is no greater than 3/10. It turns out that for p = 3/10, the probability of getting ten failures in a row is about 2.8%, and even lower for any probability greater than 3/10. This suggests that even for relatively small values of n, the limit of 3/n is a conservative approach to bounding the probability of an occurrence. But is it the best estimate of the probability given that we have ten failures? Given that we have had ten failures, the probability that the success probability was 3/10 or higher is the ratio of the area under (1-p)¹⁰ between 3/10 and 1 (about .00179) over the area between 0 and 1 (that's 1/11, or .090909...). The overall ratio is a little less than 2%. The conclusion for me? The rule of three is a very over-cautious estimate, but well worth the effort when you consider the ease of computation.

### Who Created the Birthday Problem, and Even One More Version

Steven Colyer, who blogs at Multiplication by Infinity, sent me a nice comment on my last blog that included a time line of big moments in the development of the birthday problem. It was, I believe, part of a larger work that he blogged here on conjoining the time lines from the book "50 Things You Really Should Know About Mathematics."
The last part of his time line on the birthday problem said, "1939 - Richard von Mises proposes the birthday problem." You can search almost anywhere and find that confirmed... but being the contrary guy I am, I will disagree. I realize that in disagreeing with Crilly I am disagreeing with an established world-class math historian (his biography of Arthur Cayley is a classic work)..... and yet I press on. I think it may be that A) the birthday problem as we know it was not first given by von Mises, and B) the typical version may have appeared over twelve years before von Mises's publication..... (but von Mises may have published first).

For support I call upon that great historian of mathematical recreations, David Singmaster. In his "Chronology of Recreational Mathematics" he has:

1927 Davenport invents Birthday Problem.
...
1939 von Mises first studies Birthday Problems, but not the usual version.
1939 Ball-Coxeter: Mathematical Recreations and Essays, 11th ed. - first publication of Davenport's version of the Birthday Problem

In another note he gives source information:

Richard von Mises. Ueber Aufteilungs- und Besetzungs-Wahrscheinlichkeiten. Rev. Fac. Sci. Univ. Istanbul (NS) 4 (1938-39) 145-163. = Selected Papers of Richard von Mises; Amer. Math. Soc., 1964, vol. 2, pp. 313-334. Says the question arose when a group of 60 persons found three had the same birthday. He obtains the expected number of repetitions as a function of the number of people. He finds the expected number of pairs with the same birthday is about 1 when the group has 29 people, while the expected number of triples with the same birthday is about 1 when there are 103 people. He doesn't solve the usual problem, contrary to Feller's 1957 citation of this paper.

and another:

Ball. MRE, 11th ed., 1939, p. 45. Says problem is due to H. Davenport. Says "more than 23" and this is repeated in the 12th and 13th editions.
Regarding Davenport, he has:

George Tyson was a retired mathematics teacher when he enrolled in the MSc course in mathematical education at South Bank in about 1980 and I taught him. He once remarked that he had known Davenport and Mordell, so I asked him about these people and mentioned the attribution of the Birthday Problem to Davenport. He told me that he had been shown it by Davenport. I later asked him to write this down.

George Tyson. Letter of 27 Sep 1983 to me. "This was communicated to me personally by Davenport about 1927, when he was an undergraduate at Manchester. He did not claim originality, but I assumed it. Knowing the man, I should think otherwise he would have mentioned his source, .... Almost certainly he communicated it to Coxeter, with whom he became friendly a few years later, in the same way." He then says the result is in Davenport's The Higher Arithmetic of 1952. When I talked with Tyson about this, he said Davenport seemed pleased with the result, in such a way that Tyson felt sure it was Davenport's own idea. However, I could not find it in The Higher Arithmetic and asked Tyson about this, but I have no record of his response.

Anne Davenport. Letter of 23 Feb 1984 to me in response to my writing her about Tyson's report. "I once asked my husband about this. The impression that both my son and I had was that my husband did not claim to have been the 'discoverer' of it because he could not believe that it had not been stated earlier. But that he had never seen it formulated."

I have discussed this with Coxeter (who edited the 1939 edition of Ball in which the problem was first published) and C. A. Rogers (who was a student of Davenport's and wrote his obituary for the Royal Society), and neither of them believes that Davenport invented the problem. I don't seem to have any correspondence with Coxeter or Rogers with their opinions, and I think I had them verbally.
So my spin on all this is that Harold Davenport probably came up with the version "How many people are needed for the probability of a match to be greater than 1/2?", but did not publish it anywhere. This is not uncommon in recreational problems. Consider the Collatz problem, which seemed to circulate around and across college campuses for years with multiple names. In or around 1939 von Mises was at a party and came up with a slightly different version: "How many pairs of birthday matches would you expect for a collection of n people?" This is the inverse of the common form of the birthday problem today, which fixes the probability (1/2) and asks how many people are needed for a match to be that likely.

I also found an interesting variation of the problem that should be of interest to Steven Coyler. The book he quoted in the comment post is authored by Tony Crilly from Manchester here in the UK. As I was checking some notes in Dr. Singmaster's sources, I came across this citation:

Tony Crilly & Shekhar Nandy. The birthday problem for boys and girls. MG 71 (No. 455) (Mar 1987) 19-22. In a group of 16 boys and 16 girls, there is a probability greater than ½ of a boy and a girl having the same birthday, and 16 is the minimal number.

Folks who like probability might try to derive that result. I am trying to contact Professor Crilly to ask his opinion. I will give his response if I get one.

A few years after I wrote this, I came across yet another version of the birthday problem I had never considered: how many people are needed so that the probability is 50% that everyone shares a birthday with at least one other person? This strong birthday problem has applications to the interesting problem of look-alikes, which is of interest to criminologists and sociologists. The answer, it seems, is 3,064. Amazingly, with 2000 people in the room the probability is only 1/10000, but by the time you get to 4000 the probability is .9334.
In even a pretty small village, there is a pretty good chance that someone else shares your birthdate.

## Thursday, 13 January 2011

### Back of the Envelope Answers to a Hard Problem

In a comment, "td" asked if I was going to answer the last question I asked in my blog on a variation of the birthday problem. The question asked the probability that a random selection of 1000 people would include a person with every birth date for all 365 calendar days of a non-leap year. I had not done the problem, but figured from its nature that it was of a size to be difficult to calculate, but theoretically not too difficult. I thought of three ways to try and attack it.

I was pretty sure it could be done exactly with Markov matrices (more later) if software was available to raise a square matrix of order 365 to the 1000th power. I will illustrate how this might be done with a much smaller problem. I thought it could probably be approximated really well by using a combination of simulation and statistics... I will do that also with a smaller problem. And I thought you could convince any reasonable reader with a bit of basic logic that it was a very, very low probability... I'll do that now.

I reasoned that the probability of finding all birthdays would be small if the probability of missing any particular day was non-negligible, even if relatively small, so I set out to find the probability that any given date (they will all be the same) did not appear in the list of birthdays of the sample. This part of the problem is actually binomial. The birth dates of the random sample are independent of each other, and with a large population to draw from, the probabilities of getting any date on any person selected will be essentially the same. For a particular date, we can say that the probability that a person selected would NOT have that date of birth is 364/365. So the probability of 1000 people being selected who all did NOT have that date is (364/365)^1000.
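This probability can be evaluated directly, along with a Poisson approximation for the number of hits on a given date (a short Python sketch, not part of the original post):

```python
import math

# P(a specific date is missed by all 1000 people), exact binomial computation.
p_miss = (364 / 365) ** 1000          # ~0.0643

# Poisson approximation: hits on a given date ~ Poisson(1000/365),
# so P(zero hits) = exp(-1000/365).
p_miss_poisson = math.exp(-1000 / 365)  # ~0.0646
```

Both values land near 6%, in line with the calculator figures quoted in the post.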
My TI-84 calculator barely blinked to compute .06434 for the probability. Since the probability of each success is really small, 1/365, we could also use the Poisson with a mean of 1000/365 and find the probability of zero successes. That comes out to .0645 on my calculator. Once we know the probability of any date not showing up is about .06, we might assume that for 365 dates, even with their lack of independence, the probability of getting all dates in a sample of 1000 would be pretty low.

To see how low, I thought I would try to simulate a few trials and see how often I was successful. I wrote a short program for the TI-84, but it took a long time to run (probably a bad program more than a bad device), so I switched over and used a spreadsheet to generate 1000 random integers between 1 and 365. Then I had it count how many times each of the numbers appeared in the list. Finally I just multiplied these counts all together. If the product came out zero, then there was at least one birthday missing. It took a little longer to lay out, but it would run a test in a few seconds. Then I got a little more clever and just copied the whole thing across for 100 columns. Now I could run 100 trials at a time... (this time the computer blinked). I ran that five times, for a total of 500 simulations, and never got a full set of birthdays. That made me wonder if it was working correctly, so I extended the list to 2000 random integers. Now I started to get some hits. It was working after all.

One of the beautiful things about statistics is that you can bound the probability of something that has never happened, provided it has had enough opportunities to happen. In this case we were 0 for 500. Statistically that doesn't mean the probability is zero. But there is a clever rule of thumb that says if it has not happened in n trials, the probability of it happening on any one trial is less than 3/n, with about 95% confidence. (How we do that is a topic for another blog.)
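The spreadsheet experiment is easy to reproduce in code. Below is a Monte Carlo sketch (Python, standard library only; my own reconstruction, not the original spreadsheet) that counts how often a random group's birthdays cover all 365 days:

```python
import random

def full_coverage_rate(n_people, trials=500, n_days=365, seed=1):
    """Fraction of simulated groups whose birthdays cover all n_days dates."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # A set of distinct birthdays drawn uniformly for n_people.
        days = {rng.randrange(n_days) for _ in range(n_people)}
        if len(days) == n_days:
            hits += 1
    return hits / trials
```

With 1000 people, 500 trials typically produce zero hits, just as the spreadsheet did; with 2000 people, hits start to appear. Zero hits in 500 trials doesn't mean zero probability, but by the rule of three it bounds the per-trial probability below roughly 3/500 at about 95% confidence, exactly the argument above.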
So I'm figuring that the probability of a full set of birth dates in a group of 1000 is about 3/500 or less.

I wanted to do an exact calculation using Markov matrices, but none of the software I had access to would handle raising a really big matrix to such a high power. Still, it is a beautiful way to handle (smaller) problems like this, so I thought I would illustrate the process for students with a mini version of the problem. Suppose instead we had picked seven students from a large high school and wanted to know the probability that we would get one from each of the freshman, sophomore, junior and senior classes: the birthday problem for 1000 people and 365 days scaled down to 7 and 4.

We will treat the process as if people were selected one at a time. After the first person is selected, since they will obviously have some birth date, we will say we are in state 1. As each new person is added to the mix, we will move to the next higher state if they have a different birthday to everyone already present, and remain in the previous state if they match one of the other dates. We can illustrate this with a "state diagram", a graph of the probability of moving from one state to another as each person is introduced into the mix. In state one, all the people up to that point have had a single birth date. The probability the next person has a different birthday and moves us to state two is 3/4, and the probability that they have the same birthday as everyone else so far is 1/4. The red lines show the probability of moving from one state to the next, while the blue lines show the probability of staying where they are.

To find the probability we are in any state after n selections, we must add the probability we just moved there from a lower state to the probability we were already in that state after n-1 selections. To accomplish this multiplication process we embed the probabilities in a transition matrix.
Each row-and-column entry gives the probability that you move from the row state to the column state. For example, row 2, column 3 gives the probability that you move from state two to state three on any selection. The zeros indicate impossible transitions, like moving from state one to state four on the next person. The one in row 4, column 4 says that once you have all four birthdays, you stay there. This is called an absorbing state.

Now the pretty part. This matrix tells us the probability of being in any state after two people (because we started after the first person put us in state one). The probability we are still in state one is 1/4, and the probability we are in state two is 3/4. Now to find the probability after a third person is selected, we simply multiply our present matrix state by this matrix... we square our transition matrix. Now we have the probability of being in each state in the first row. Notice that this is after three people... two have been added to the original person who put us in state one.

You can check these manually. To still be in state one, we must have had both the 2nd and 3rd person have the same birth date as the initial person, a probability of (1/4)(1/4) = 1/16. To be in state three, the second must have differed from the first and the third must have differed from both the others. The probabilities in order are 3/4 and 1/2, so there is a 3/8 probability of this outcome after three people. The probability of state two is whatever is left to make the total one.

So after seven people, six beyond our initial person, we want to raise the matrix to the sixth power. The top row gives us the probability of being in any of the states after seven people. For seven people, the probability we have found all four birthdays is about 52%.

Which means I can solve the problem with 365 birthdays and 1000 people exactly; I just need someone to loan me the money to rent a big Cray computer for a few seconds.... "anyone, anyone......Bueller?"
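The whole mini-computation fits in a few lines. The sketch below (Python with exact fractions; my own code, not from the post) builds the 4-state transition matrix, raises it to the 6th power, and reads off the top row:

```python
from fractions import Fraction

K = 4  # number of classes; state i = i distinct classes seen so far

# T[i][j] = P(move from state i+1 to state j+1 on the next selection)
T = [[Fraction(0)] * K for _ in range(K)]
for i in range(1, K + 1):
    T[i - 1][i - 1] = Fraction(i, K)        # repeat a class already seen
    if i < K:
        T[i - 1][i] = Fraction(K - i, K)    # see a brand-new class

def mat_mul(A, B):
    """Multiply two square matrices of Fractions."""
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

M = T
for _ in range(5):          # T^6: six selections after the first person
    M = mat_mul(M, T)

top_row = M[0]              # we start in state 1 after the first person
p_all_four = top_row[3]     # probability all four classes seen after 7 people
```

With exact arithmetic the answer comes out as 525/1024 ≈ 0.5127, matching the "about 52%" above.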
*** FOOTNOTE: I got a comment from Jon Ingram that says (in much nicer words) that my inability to find the 1000th power of a 365x365 matrix was more a matter of my ignorance than of restrictions in readily available computing power. Math teachers should be aware of the power that is out there to do this kind of stuff, much of it either very modestly priced or free. Here is Jon's comment, with my appreciation... I'm old, but I'm not done learning...

You don't need a Cray! It took my computer (using Python with the Numpy extension) about 10 seconds to calculate the 1000-person, 365-days matrix. It looks like you shouldn't be surprised not to find all the birthdays, as the probability of getting to that state is around 1.7e-12. The most likely outcome is 342 distinct birthdays (probability around 9.4%).
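In fact, because the transition matrix has only a diagonal and a superdiagonal, no matrix power is needed at all: pushing the probability vector forward one person at a time costs only O(365) per step. Here is a sketch (Python, standard library only; my own reconstruction of Jon's calculation, so treat the tail value as approximate):

```python
N_DAYS, N_PEOPLE = 365, 1000

# dist[j] = P(j + 1 distinct birthdays seen so far).
# The first person always puts us in state 1.
dist = [0.0] * N_DAYS
dist[0] = 1.0

for _ in range(N_PEOPLE - 1):
    new = [0.0] * N_DAYS
    for j in range(N_DAYS):
        # Next person repeats one of the j+1 dates already seen...
        new[j] += dist[j] * (j + 1) / N_DAYS
        # ...or contributes a brand-new date.
        if j + 1 < N_DAYS:
            new[j + 1] += dist[j] * (N_DAYS - j - 1) / N_DAYS
    dist = new

p_all_covered = dist[N_DAYS - 1]                          # full coverage: tiny
mode_distinct = max(range(N_DAYS), key=lambda j: dist[j]) + 1  # most likely count
```

Running this reproduces Jon's headline figures: the most likely outcome is 342 distinct birthdays, with peak probability near 9.4%, and the full-coverage probability is on the order of 1e-12.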
http://hal.inria.fr/view_by_stamp.php?label=INSMI&langue=en&action_todo=view&id=hal-00578445&version=2
hal-00578445, version 2

## Study of a $3D$ Ginzburg-Landau functional with a discontinuous pinning term

Mickaël Dos Santos (1)

Nonlinear Analysis: Theory, Methods and Applications 75, 17 (2012) 6275-6296

Abstract: In a convex domain $\Omega\subset\mathbb{R}^3$, we consider the minimization of a $3D$ Ginzburg-Landau type energy $E_\varepsilon(u)=\frac{1}{2}\int_\Omega|\nabla u|^2+\frac{1}{2\varepsilon^2}(a^2-|u|^2)^2$ with a discontinuous pinning term $a$ among $H^1(\Omega,\mathbb{C})$-maps subject to a Dirichlet boundary condition $g\in H^{1/2}(\partial\Omega,\mathbb{S}^1)$. The pinning term $a:\mathbb{R}^3\to\mathbb{R}^*_+$ takes a constant value $b\in(0,1)$ in $\omega$, an inner strictly convex subdomain of $\Omega$, and $1$ outside $\omega$. We prove energy estimates with various error terms depending on assumptions on $\Omega$, $\omega$ and $g$. In some special cases, we identify the vorticity defects via the concentration of the energy. Under hypotheses on the singularities of $g$ (the singularities are polarized and quantified by their degrees, which are $\pm 1$), vorticity defects are geodesics (computed w.r.t. a geodesic metric $d_{a^2}$ depending only on $a$) joining two paired singularities of $g$, $p_i$ and $n_{\sigma(i)}$, where $\sigma$ is a minimal connection (computed w.r.t. the metric $d_{a^2}$) of the singularities of $g$ and $p_1,\dots,p_k$ are the positive (resp. $n_1,\dots,n_k$ the negative) singularities.

(1) Laboratoire d'Analyse et de Mathématiques Appliquées (LAMA), Université Paris-Est Marne-la-Vallée (UPEMLV), Université Paris-Est Créteil Val-de-Marne (UPEC), CNRS UMR8050, Fédération de Recherche Bézout

Submitted on: Monday, 5 March 2012. Updated on: Tuesday, 28 August 2012.
https://hal.inria.fr/inria-00073214
# A Few Remarks on SKInT

Abstract: SKIn and SKInT are two first-order languages that have been proposed recently by Healfdene Goguen and the author. While SKIn encodes lambda-calculus reduction faithfully, standardizes and is confluent even on open terms, it normalizes only weakly in the simply-typed case. On the other hand, SKInT normalizes strongly in the simply-typed case, standardizes and is confluent on open terms, and also encodes lambda-calculus reduction faithfully, although in a less direct way. This report has two goals. First, we show that the natural simple type system for SKInT, seen as a natural deduction system, is not exactly a proof system for intuitionistic logic, but for a very close fragment of the modal logic S4, in which intuitionistic logic is easily coded. This explains why the SKIn and SKInT typing rules are different, and why SKInT encodes lambda-calculus in a less direct way than SKIn. Second, we show that SKInT, like $\lambda\upsilon$ and a few other calculi of explicit substitutions, preserves strong normalization. In fact, SKInT also preserves weak normalization and solvability. We show this as a corollary of a stronger result, analogous to a well-known result in the lambda-calculus: the solvable SKInT-terms are exactly those that are typable in the system $S\omega$ of conjunctive types (inspired from Émilie Sayag), the weakly normalizing SKInT-terms (with or without $\eta$) are exactly those that have a definite positive $S\omega$-typing, and the strongly normalizing SKInT-terms (with or without $\eta$) are exactly those that are typable in the system $S$ of conjunctive types, which does not have the universal type $\omega$.

Document type: Research Report RR-3475, INRIA.
1998.

Submitted on: Wednesday, 24 May 2006. Last modified: Tuesday, 17 April 2018.

### Identifiers

HAL Id: inria-00073214, version 1

### Citation

Jean Goubault-Larrecq. A Few Remarks on SKInT. Research Report RR-3475, INRIA, 1998. ⟨inria-00073214⟩
https://www.mathdoubts.com/subtracting-unlike-algebraic-terms/
# Subtraction of Unlike Algebraic Terms

A mathematical operation of subtracting one unlike algebraic term from another is called the subtraction of unlike algebraic terms.

## Introduction

Two unlike algebraic terms are connected by a minus sign in algebra to find the difference between them. Mathematically, it is not possible to subtract one algebraic term from another in the case of unlike algebraic terms, due to their different literal coefficients. So, the subtraction of unlike algebraic terms is simply expressed as an expression by displaying a minus sign between them.

$6xy$ and $4x^2$ are two unlike algebraic terms. Take the case where the term $6xy$ is subtracted from $4x^2$.

#### First step

Write the term $4x^2$ and then $6xy$ in a row, and display a minus sign between them.

$4x^2-6xy$

1. The literal coefficient of the term $4x^2$ is $x^2$
2. The literal coefficient of the term $6xy$ is $xy$

The literal coefficients of the two terms are different. Therefore, they cannot be subtracted the way like algebraic terms are. So, the subtraction of unlike algebraic terms is simply expressed as an expression.

### Examples

Observe the following examples to study the subtraction of unlike algebraic terms.

$(1)\,\,\,\,\,\,$ $a-2b$

$(2)\,\,\,\,\,\,$ $5cd^2-3c^2d$

$(3)\,\,\,\,\,\,$ $3e^3-ef^3$

$(4)\,\,\,\,\,\,$ $2gh^4i-2g^4hi$

$(5)\,\,\,\,\,\,$ $5j-7k$
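The rule "combine terms only when their literal coefficients match" is mechanical enough to sketch in code. Below is a small illustration (Python; the `(coefficient, literal)` term representation is my own hypothetical encoding, not from this page):

```python
def subtract_terms(t1, t2):
    """Each term is (coefficient, literal), with the literal part written in a
    canonical form such as 'x^2' or 'xy'. Like terms combine into one term;
    unlike terms stay apart as a two-term expression."""
    c1, l1 = t1
    c2, l2 = t2
    if l1 == l2:                  # like terms: subtract the coefficients
        return [(c1 - c2, l1)]
    return [t1, (-c2, l2)]        # unlike terms: keep both, negating the second

# 4x^2 - 6xy cannot be combined: the result is the expression itself.
diff_unlike = subtract_terms((4, "x^2"), (6, "xy"))
# 5x - 3x are like terms, so they do combine to 2x.
diff_like = subtract_terms((5, "x"), (3, "x"))
```

The unlike case returns both terms unchanged apart from the sign, mirroring how $4x^2-6xy$ is simply left as written.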
https://stats.stackexchange.com/questions/479325/is-it-the-case-that-increasing-degrees-of-freedom-always-makes-every-tail-of-a-t
# Is it the case that increasing degrees of freedom always makes every tail of a t-distribution smaller?

As per the title. Say I have a random variable $X$ with a Student's t distribution centered at 0. Can I affirm that $P(X>a)$ decreases when I increase the degrees of freedom of $X$? Looking at the image on the Wikipedia page makes me think this is the case (https://en.wikipedia.org/wiki/Student's_t-distribution), but I am not sure. Also, I have been told that assuming fewer degrees of freedom is "conservative", which also points in that direction. If this is indeed the case, a proof would also be appreciated.

• The answer is yes when $a$ is positive. I suspect a fairly short proof might be afforded by representing the Student t as a variance mixture of Gaussians. – whuber Jul 27 '20 at 22:15

• I have a vague recollection that it's possible to show that for given $x$ in the tail, the t density $K(\nu)\left(1 + \frac{x^2}{\nu}\right)^{-(\nu+1)/2} = K(\nu)\left(1+\frac{x^2}{\nu}\right)^{-1/2}\left(1+\frac{x^2}{\nu}\right)^{-\nu/2}$, where the last factor converges to $e^{-.5x^2}$, is decreasing in $\nu$. – Jul 27 '20 at 23:33

As degrees of freedom $\nu$ increase, the tails of Student's t distribution contain less probability, with the normal distribution being the limiting case.

• As $\nu = n-1$ increases, the 0.975 quantile $q$ decreases to the normal value 1.96. For example, the t confidence interval $\bar X \pm qS/\sqrt{n}$ gets closer to the z confidence interval $\bar X \pm 1.96\sigma/\sqrt{n}$ for known population standard deviation $\sigma$.

• For the standard normal distribution, the probability $p = P(-1.96 < Z < 1.96) = 0.95$. As $\nu$ increases, $p = P(-1.96 < T < 1.96)$ increases to the normal value. Many elementary textbooks say that, for $\nu = 30$, the t distribution is sufficiently close to normal for some practical purposes.
But $\mathsf{T}(\nu=30)$ is hardly the same as $\mathsf{Norm}(0,1)$. Here are graphs of $q$ and $p$ for $\nu = 1, 2, \dots, 200$. The R code used to make the plots is shown below.

df = 1:200
q = qt(.975, df)
pu = pt(1.96, df);  pl = pt(-1.96, df);  p = pu - pl
par(mfrow=c(1,2))
plot(df, q, type="l", ylim=c(1.96,2.25), xaxs="i", main="Quantile 0.975")
abline(h = 1.96, col="blue")
plot(df, p, type="l", ylim=c(.925,.95), xaxs="i", main="P(-1.96 < T < 1.96)")
abline(h = diff(pnorm(c(-1.96,1.96))), col="blue")
par(mfrow=c(1,1))

• I think the $n=30$ criterion comes from 1) testing at the 5% level and 2) the need to limit the size of the tables at the end of the book ... Jan 31 '21 at 6:08
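For readers who want to verify the monotone-tail claim without R, here is a rough numerical check (Python, standard library only; Simpson's rule applied to the t density, not code from the thread):

```python
import math

def t_pdf(x, nu):
    """Density of Student's t with nu degrees of freedom (lgamma avoids overflow)."""
    log_c = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
             - 0.5 * math.log(nu * math.pi))
    return math.exp(log_c) * (1 + x * x / nu) ** (-(nu + 1) / 2)

def t_tail(a, nu, upper=100.0, n=20000):
    """P(T > a) via composite Simpson's rule; the mass beyond `upper`
    is negligible for nu >= 5, so we truncate there."""
    h = (upper - a) / n
    s = t_pdf(a, nu) + t_pdf(upper, nu)
    for i in range(1, n):
        s += t_pdf(a + i * h, nu) * (4 if i % 2 else 2)
    return s * h / 3

# P(T > 1.96) shrinks toward the normal value 0.025 as nu grows.
tails = [t_tail(1.96, nu) for nu in (5, 10, 30, 100)]
```

The computed tails decrease strictly in $\nu$ and approach 0.025, consistent with the answer's plots.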
https://stats.stackexchange.com/questions/248947/test-logistic-regression-model-using-residual-deviance-and-degrees-of-freedom?noredirect=1
# Test logistic regression model using residual deviance and degrees of freedom

I was reading this page on Princeton.edu. They are performing a logistic regression (with R). At some point they calculate the probability of getting a residual deviance higher than the one they got on a $\chi^2$ distribution with degrees of freedom equal to the degrees of freedom of the model. Copying and pasting from their website...

> glm( cbind(using,notUsing) ~ age + hiEduc + noMore, family=binomial)

Call:  glm(formula = cbind(using, notUsing) ~ age + hiEduc + noMore, family = binomial)

Coefficients:
(Intercept)     age25-29     age30-39     age40-49       hiEduc       noMore
    -1.9662       0.3894       0.9086       1.1892       0.3250       0.8330

Degrees of Freedom: 15 Total (i.e. Null);  10 Residual
Null Deviance:      165.8
Residual Deviance:  29.92        AIC: 113.4

The residual deviance of 29.92 on 10 d.f. is highly significant:

> 1-pchisq(29.92,10)
[1] 0.0008828339

so we need a better model.

Why does it make sense to compute 1-pchisq(29.92,10), and why does a low probability indicate that something is going wrong with their model?

They are using the deviance test shown below:

$$D(y) = -2\ell(\hat\beta;y) + 2\ell(\hat\theta^{(s)};y)$$

Here $\hat\beta$ represents the fitted model of interest and $\hat\theta^{(s)}$ represents the saturated model. The log-likelihood for the saturated model is (more often than not) $0$, hence you are left with the residual deviance of the model they fitted ($29.92$). This deviance test is approximately chi-squared with $n-p$ degrees of freedom ($n$ being the number of observations and $p$ the number of parameters fitted). You have $n=16$ and $p=6$, so the test will be approximately $\chi^2_{10}$. The null of the test is that your fitted model fits the data well and there is no misfit: you haven't missed any sources of variation. In the above test you reject the null and, as a result, you have missed something in the model you fitted.
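For even degrees of freedom the chi-squared upper tail has a closed form, $P(\chi^2_{2k} > x) = e^{-x/2}\sum_{j=0}^{k-1}(x/2)^j/j!$, so 1-pchisq(29.92, 10) can be checked without R (a Python sketch, not part of the original answer):

```python
import math

def chisq_sf(x, df):
    """Upper-tail probability P(chi-squared_df > x), closed form for even df."""
    assert df % 2 == 0, "closed form applies to even df only"
    lam = x / 2
    term, total = 1.0, 1.0
    for j in range(1, df // 2):   # accumulate (x/2)^j / j! for j = 1..k-1
        term *= lam / j
        total += term
    return math.exp(-lam) * total

p_value = chisq_sf(29.92, 10)     # ~0.00088: reject the fitted model vs. saturated
```

This reproduces R's 0.0008828339 to several digits, confirming the significant lack of fit.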
The reason for using this test is that the saturated model will fit the data perfectly, so if you are in the case where you are not rejecting the null between your fitted model and the saturated model, it indicates you haven't missed big sources of data variation in your model.

Your question, as stated, has been answered by @francium87d. Comparing the residual deviance against the appropriate chi-squared distribution constitutes testing the fitted model against the saturated model and shows, in this case, a significant lack of fit. Still, it might help to look more thoroughly at the data and the model to understand better what it means that the model has a lack of fit:

d = read.table(text="
   age education wantsMore notUsing using
   <25       low       yes       53     6
   <25       low        no       10     4
   <25      high       yes      212    52
   <25      high        no       50    10
 25-29       low       yes       60    14
 25-29       low        no       19    10
 25-29      high       yes      155    54
 25-29      high        no       65    27
 30-39       low       yes      112    33
 30-39       low        no       77    80
 30-39      high       yes      118    46
 30-39      high        no       68    78
 40-49       low       yes       35     6
 40-49       low        no       46    48
 40-49      high       yes        8     8
 40-49      high        no       12    31", header=TRUE, stringsAsFactors=FALSE)
d = d[order(d[,3],d[,2]), c(3,2,1,5,4)]
library(binom)
d$proportion = with(d, using/(using+notUsing))
d$sum        = with(d, using+notUsing)
bCI = binom.confint(x=d$using, n=d$sum, methods="exact")

m = glm(cbind(using,notUsing)~age+education+wantsMore, d, family=binomial)
preds = predict(m, newdata=d[,1:3], type="response")

windows()
par(mar=c(5, 8, 4, 2))
bp = barplot(d$proportion, horiz=T, xlim=c(0,1), xlab="proportion",
             main="Birth control usage")
box()
axis(side=2, at=bp, labels=paste(d[,1], d[,2], d[,3]), las=1)
arrows(y0=bp, x0=bCI[,5], x1=bCI[,6], code=3, angle=90, length=.05)
points(x=preds, y=bp, pch=15, col="red")

The figure plots the observed proportion of women in each set of categories that are using birth control, along with the exact 95% confidence interval. The model's predicted proportions are overlaid in red.
We can see that two predicted proportions are outside of the 95% CIs, and another five are at or very near the limits of their respective CIs. That's seven out of sixteen ($44\%$) that are off target. So the model's predictions don't match the observed data very well. How could the model fit better? Perhaps there are interactions amongst the variables that are relevant. Let's add all the two-way interactions and assess the fit:

    m2 = glm(cbind(using,notUsing)~(age+education+wantsMore)^2, d, family=binomial)
    summary(m2)
    # ...
    #     Null deviance: 165.7724  on 15  degrees of freedom
    # Residual deviance:   2.4415  on  3  degrees of freedom
    # AIC: 99.949
    #
    # Number of Fisher Scoring iterations: 4

    1-pchisq(2.4415, df=3)
    # [1] 0.4859562

    drop1(m2, test="LRT")
    # Single term deletions
    #
    # Model:
    # cbind(using, notUsing) ~ (age + education + wantsMore)^2
    #                     Df Deviance     AIC     LRT Pr(>Chi)
    # <none>                   2.4415  99.949
    # age:education        3  10.8240 102.332  8.3826  0.03873 *
    # age:wantsMore        3  13.7639 105.272 11.3224  0.01010 *
    # education:wantsMore  1   5.7983 101.306  3.3568  0.06693 .

The p-value for the lack-of-fit test for this model is now $0.486$. But do we really need all those extra interaction terms? The drop1() command shows the results of the nested model tests without them. The interaction between education and wantsMore is not quite significant, but I would be fine with it in the model anyway. So let's see how the predictions from this model compare to the data:

These aren't perfect, but we shouldn't assume that the observed proportions are a perfect reflection of the true data generating process. These look to me like they are bouncing around the appropriate amount (more correctly, that the data are bouncing around the predictions, I suppose).

I do not believe that the residual deviance statistic has a $\chi^2$ distribution. I think it is a degenerate distribution, because asymptotic theory does not apply when the degrees of freedom increase at the same speed as the sample size.
At any rate I doubt that the test has sufficient power, and encourage directed tests such as tests of linearity using regression splines and tests of interaction.

• I think because in this case all the predictors are categorical, the number of degrees of freedom of the saturated model wouldn't increase with sample size, so the asymptotic approach makes sense. The sample size is still rather small though. – Scortchi - Reinstate Monica Dec 1 '16 at 15:19
• Not sure that's it. The d.f. of the model parameters is fixed, but the d.f. of the residual "$\chi^2$" is $n$ minus that. – Frank Harrell Dec 1 '16 at 16:54
• In this case the data consist of 1607 individuals in a contingency table & the test is comparing a 6-parameter model with the 16-parameter saturated model (rather than a 1607-parameter model). – Scortchi - Reinstate Monica Dec 1 '16 at 17:14
• Then it should not be labeled as a residual $\chi^2$. – Frank Harrell Dec 1 '16 at 19:26
• I agree this terminology's unfortunate: glm gives a different "residual deviance" when the data are grouped up from when they aren't - & a different "null deviance" for that matter. – Scortchi - Reinstate Monica Dec 2 '16 at 10:44
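The drop1() likelihood-ratio comparisons above can be reproduced from the reported deviances alone; a Python sketch (standard library only), taking the deviance and d.f. values quoted from the R output:

```python
# Reproduces the drop1(m2, test="LRT") comparisons from the reported
# deviances alone: dropping a term raises the residual deviance, and the
# increase is referred to a chi-squared with df = parameters removed.
import math

def chi2_sf(x: float, df: int) -> float:
    """Chi-squared survival function for small positive integer df."""
    if df % 2 == 0:                          # even df: Erlang closed form
        m, half = df // 2, x / 2.0
        return math.exp(-half) * sum(half**k / math.factorial(k)
                                     for k in range(m))
    # odd df: step the recurrence sf(k+2) = sf(k) + extra term, from df = 1
    sf = math.erfc(math.sqrt(x / 2.0))       # survival function at df = 1
    k = 1
    while k < df:
        sf += (x / 2.0) ** (k / 2.0) * math.exp(-x / 2.0) / math.gamma(k / 2.0 + 1.0)
        k += 2
    return sf

full_deviance = 2.4415                       # interaction model, 3 residual d.f.
dropped_terms = {                            # deviance and df from drop1 output
    "age:education":       (10.8240, 3),
    "age:wantsMore":       (13.7639, 3),
    "education:wantsMore": ( 5.7983, 1),
}
for term, (deviance, df) in dropped_terms.items():
    lrt = deviance - full_deviance
    # p-values match the Pr(>Chi) column: ~0.0387, ~0.0101, ~0.0669
    print(f"{term:22s} LRT={lrt:7.4f}  p={chi2_sf(lrt, df):.5f}")
```

The same function also reproduces the lack-of-fit p-values for the two models (0.00088 on 10 d.f., 0.486 on 3 d.f.).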
https://www.bunniestudios.com/blog/?page_id=1932&replytocom=1192202
## On Influenza A (H1N1)

I read a fantastic article in Nature magazine (vol. 459, pp. 931–939, 18 June 2009) that summarizes not only the current state of novel H1N1 (aka Swine Flu) understanding, but also compares H1N1 against other flu strains. In particular, it discusses in depth how the pathogenic components — i.e., the stuff that kills you — compare against each other. The Influenza virus is quite fascinating. Allow me to ramble on…

### Comparison to Computer Viruses

How many bits does it take to kill a human? The H1N1 virus has been comprehensively disassembled (sequenced) and logged into the NCBI Influenza Virus Resource database. For example, an instance of influenza known as A/Italy/49/2009(H1N1), isolated from the nose of a 26-year old female homo sapiens returning from the USA to Italy (I love the specificity of these database records), has its entire sequence posted at the NCBI website. It’s amazing — here are the first 120 bits of the sequence:

    atgaaggcaa tactagtagt tctgctatat acatttgcaa ccgcaaatgc agacacatta

Remember, each symbol represents 2 bits of information. This is alternatively represented as an amino acid sequence, through a translation lookup table, as the following peptides:

    MKAILVVLLYTFATANADTL

In this case, each symbol represents an amino acid, which is the equivalent of 6 bits (3 DNA bases, i.e. one codon, per amino acid). M is methionine, K is lysine, A is alanine, etc. (you can find the translation table here). For those not familiar with molecular biology, DNA is information-equivalent to RNA on a 1 to 1 mapping; DNA is like a program stored on disk, and RNA is like a program loaded into RAM. Upon loading DNA, a transcription occurs where “T” bases are replaced with “U” bases. Remember, each base pair specifies one of four possible symbols (A [T/U] G C), so a single base pair corresponds to 2 bits of information. Proteins are the output of running an RNA program. Proteins are synthesized according to the instructions in RNA on a 3 to 1 mapping.
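The DNA → RNA → protein pipeline just described can be sketched in a few lines of Python. The codon table below is deliberately partial: it holds only the 15 distinct codons that appear in the quoted 60-base fragment (the full genetic code has 64 entries).

```python
# Sketch of transcription (T -> U) followed by codon-by-codon translation.
# Partial lookup table: only the codons occurring in the quoted fragment.
CODON_TO_AMINO_ACID = {
    "AUG": "M", "AAG": "K", "GCA": "A", "AUA": "I", "CUA": "L",
    "GUA": "V", "GUU": "V", "CUG": "L", "UAU": "Y", "ACA": "T",
    "UUU": "F", "ACC": "T", "AAU": "N", "GAC": "D", "UUA": "L",
}

def translate(dna: str) -> str:
    """Transcribe DNA to RNA (T -> U), then translate codons (3 bases each)."""
    rna = dna.replace(" ", "").upper().replace("T", "U")
    return "".join(CODON_TO_AMINO_ACID[rna[i:i + 3]]
                   for i in range(0, len(rna), 3))

fragment = "atgaaggcaa tactagtagt tctgctatat acatttgcaa ccgcaaatgc agacacatta"
print(translate(fragment))   # MKAILVVLLYTFATANADTL
```

Running it on the 60-base fragment reproduces the 20-peptide sequence quoted above, including the leading M (the AUG start codon).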
You can think of proteins a bit like pixels in a frame buffer. A complete protein is like an image on the screen; each amino acid on a protein is like a pixel; each pixel has a depth of 6 bits (3 to 1 mapping of a medium that stores 2 bits per base pair); and each pixel has to go through a color palette (the codon translation table) to transform the raw data into a final rendered color. Unlike a computer frame buffer, different biological proteins vary in amino acid count (pixel count). To ground this in a specific example, six bits stored as “ATG” on your hard drive (DNA) is loaded into RAM (RNA) as “AUG” (remember the T->U transcription). When the RNA program in RAM is executed, “AUG” is translated to a pixel (amino acid) of color “M”, or methionine (which is incidentally the biological “start” codon, the first instruction in every valid RNA program). As a short-hand, since DNA and RNA are 1:1 equivalent, bioinformaticists represent gene sequences in DNA format, even if the biological mechanism is in RNA format (as is the case for Influenza–more on the significance of that later!). OK, back to the main point of this post. The particular RNA subroutine mentioned above codes for the HA gene which produces the Hemagglutinin protein: in particular, an H1 variety. This is the “H1” in the H1N1 designation. If you thought of organisms as computers with IP addresses, each functional group of cells in the organism would be listening to the environment through its own active port. So, as port 25 maps specifically to SMTP services on a computer, port H1 maps specifically to the windpipe region on a human. Interestingly, the same port H1 maps to the intestinal tract on a bird. Thus, the same H1N1 virus will attack the respiratory system of a human, and the gut of a bird. In contrast, H5 — the variety found in H5N1, or the deadly “avian flu” — specifies the port for your inner lungs. 
As a result, H5N1 is much more deadly because it attacks your inner lung tissue, causing severe pneumonia. H1N1 is not as deadly because it is attacking a much more benign port that just causes you to blow your nose a lot and cough up loogies, instead of ceasing to breathe. Researchers are still discovering more about the H5 port; the Nature article indicates that perhaps certain human mutants have lungs that do not listen on the H5 port. So, those of us with the mutation that causes lungs to ignore the H5 port would have a better chance of surviving an Avian flu infection, whereas those of us that open port H5 on the lungs have no chance to survive make your time / all your base pairs are belong to H5N1.

So how many bits are in this instance of H1N1? The raw number of bits, by my count, is 26,022; the actual number of coding bits is approximately 25,054 — I say approximately because the virus does the equivalent of self-modifying code to create two proteins out of a single gene in some places (pretty interesting stuff actually), so it’s hard to say what counts as code and what counts as incidental non-executing NOP sleds that are required for self-modifying code. So it takes about 25 kilobits — 3.2 kbytes — of data to code for a virus that has a non-trivial chance of killing a human. This is more efficient than a computer virus, such as MyDoom, which rings in at around 22 kbytes. It’s humbling that I could be killed by 3.2 kbytes of genetic data. Then again, with 850 Mbytes of data in my genome, there’s bound to be an exploit or two.

### Hacking Swine Flu

One interesting consequence of reading this Nature article, and having access to the virus sequence, is that I now know how to modify the virus sequence to probably make it more deadly. Here’s how: The Nature article notes, for example, that variants of the PB2 Influenza gene with Glutamic acid at position 627 in the sequence have low pathogenicity (not very deadly).
However, PB2 variants with Lysine at the same position is more deadly. Well, let’s see the sequence of PB2 for H1N1. Going back to our NCBI database: 601 QQMRDVLGTFDTVQIIKLLP 621 FAAAPPEQSRMQFSSLTVNV 641 RGSGLRILVRGNSPVFNYNK As you can see from the above annotation, position 627 has “E” in it, which is the code for Glutamic acid. Thankfully, it’s the less-deadly version; perhaps this is why not as many people have died from contracting H1N1 as the press releases might have scared you into thinking. Let’s reverse this back to the DNA code: 621 F A A A P P E Q S R 1861 tttgctgctg ctccaccaga acagagtagg As you can see, we have “GAA” coding for “E” (Glutamic acid). To modify this genome to be more deadly, we simply need to replace “GAA” with one of the codes for Lysine (“K”), which is either of “AAA” or “AAG”. Thus, the more deadly variant of H1N1 would have its coding sequence read like this: 621 F A A A P P K Q S R 1861 tttgctgctg ctccaccaaa acagagtagg ^ changed There. A single base-pair change, flipping two bits, is perhaps all you need to turn the current less-deadly H1N1 swine flu virus into a more deadly variant. Theoretically, I could apply a long series of well-known biological procedures to synthesize this and actually implement this deadly variant; as a first step, I can go to any number of DNA synthesis websites (such as the cutely-named “Mr. Gene”) and order the modified sequence to get my deadly little project going for a little over \$1,000. Note that Mr. Gene implements a screening procedure against DNA sequences that could be used to implement biohazardous products. I don’t know if they specifically screen against HA variants such as this modified H1 gene. Even if they do, there are well-known protocols for site-directed mutagenesis that can possibly be used to modify a single base of RNA from material extracted from normal H1N1. [Just noticed this citation from the Nature article: Neumann, G. 
et al Generation of influenza A viruses entirely from cloned cDNA. Proc. Natl Acad. Sci. USA 96, 9345-9350 (1999). This paper tells you how to DIY an Influenza A. Good read.]. OK, before we get our hackles up about this little hack, let’s give Influenza some credit: after all, it packs a deadly punch in 3.2kbytes and despite our best efforts we can’t eradicate it. Could Influenza figure this out on its own? In fact, the Influenza virus is evolved to allow for these adaptations. Normally, when DNA is copied, an error-checking protein runs over the copied genome to verify that no mistakes were made. This keeps the error rate quite low. But remember, Influenza uses an RNA architecture. It therefore needs a different mechanism from DNA for copying. It turns out that Influenza packs inside its virus capsule a protein complex (RNA-dependent RNA polymerase) that is customized for its style of RNA copying. Significantly, it omits the error checking protein. The result is that there is about one error made in copying every 10,000 base pairs. How long is the Influenza genome? About 13,000 base pairs. Thus, on average, every copy of an Influenza virus has one random mutation in it. Some of these mutations make no difference; others render the virus harmless; and quite possibly, some render the virus much more dangerous. Since viruses are replicated and distributed in astronomical quantities, the chance that this little hack could end up occurring naturally is in fact quite high. This is part of the reason, I think, why the health officials are so worried about H1N1: we have no resistance to it, and even though it’s not quite so deadly today, it’s probably just a couple mutations away from being a much bigger health problem. 
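The copying-error arithmetic above can be sanity-checked with a tiny back-of-envelope model. The Poisson assumption (independent errors, one per 10,000 bases on average) is mine, not the article's:

```python
# Back-of-envelope check of the copying-error argument: an error rate of
# ~1 per 10,000 bases over a ~13,000-base genome gives a mean of 1.3
# mutations per copy. Modelling the count as Poisson gives the chance
# that a given copy is NOT an exact clone of its parent.
import math

error_rate = 1 / 10_000          # copying errors per base
genome_length = 13_000           # approximate Influenza genome, in bases

mean_mutations = error_rate * genome_length          # 1.3 per copy
p_exact_copy = math.exp(-mean_mutations)             # Poisson P(k = 0)
p_mutated = 1 - p_exact_copy

print(f"mean mutations per copy: {mean_mutations:.1f}")   # 1.3
print(f"P(at least one mutation): {p_mutated:.2f}")       # 0.73
```

So under this toy model roughly three out of four copies carry at least one mutation, consistent with the "every copy has about one random mutation" statement above.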
In fact, if anything, perhaps I should be trying to catch the strain of H1N1 going around today because its pathogenicity is currently in-line with normal flu variants — as of this article’s writing, the CDC has recorded 87 deaths out of 21,449 confirmed cases, or a 0.4% mortality rate (to contrast, “normal” flu is <0.1%, while the dreaded Spanish flu of 1918 was around 2.5%; H5N1, or avian flu, is over 50%(!), but thankfully it has trouble spreading between humans). By getting H1N1 today, I would get the added bonus of developing a natural immunity to H1N1, so after it mutates and comes back again I stand a better chance of fighting it. What doesn’t kill you makes you stronger!…or on second thought maybe I’ll just wait until they develop a vaccine for it. There is one other important subtlety to the RNA architecture of the influenza virus, aside from the well-adjusted mutation rate that it guarantees. The subtlety is that the genetic information is stored inside the virus as 8 separate snippets of RNA, instead of as a single unbroken strand (as it is in many other viruses and in living cells). Why is this important? Consider what happens when a host is infected by two types of Influenza at the same time. If the genes were stored as a single piece of DNA, there would be little opportunity for the genes between the two types to shuffle. However, because Influenza stores its genes as 8 separate snippets, the snippets mix freely inside the infected cell, and are randomly shuffled into virus packets as they emerge. Thus, if you are unlucky enough to get two types of flus at once, the result is a potentially novel strain of flu, as RNA strands are copied, mixed and picked out of the metaphorical hat and then packed into virus particles. 
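The 8-segment packaging described above makes the reassortment combinatorics easy to count. A sketch (my simplification: it ignores packaging biases, which are real, and counts the parental combinations too):

```python
# Counting segment assortments from the 8-snippet Influenza genome:
# a cell co-infected by n distinct strains can in principle package
# any of n**8 combinations of the 8 RNA segments.
def reassortment_combinations(n_strains: int, n_segments: int = 8) -> int:
    return n_strains ** n_segments

print(reassortment_combinations(2))   # 256: dual infection
print(reassortment_combinations(3))   # 6561: triple infection
```

Hence even a dual infection offers hundreds of candidate hybrids for selection to act on, which is why co-infection is such an effective shuffling mechanism.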
This process is elegant in that the same mechanism allows for mixing of an arbitrary number of strains in a single host: if you can infect a cell with three or four types of influenza at once, the result is an even wilder variation of flu particles. This is part of the reason why the novel H1N1 is called a “triple-reassortant” virus: through either a series of dual-infections, or perhaps a single calamitous infection of multiple flu varieties, the novel H1N1 acquired a mix of RNA snippets that has bestowed upon it high transmission rates along with no innate human immunity to the virus, i.e., the perfect storm for a pandemic. I haven’t been tracking the latest efforts on the part of computer virus writers, but if there was a computer analogy to this RNA-shuffling model, it would be a virus that distributes itself in the form of unlinked object code files plus a small helper program that, upon infection in a host, would first re-link its files in a random order before copying and redistributing itself. In addition to doing this, it would search for similar viruses that may already be infecting that computer, and it would on occasion link in object code with matching function templates from the other viruses. This re-arrangement and novel re-linking of the code itself would work to foil certain classes of anti-virus software that searches for virus signatures based on fixed code patterns. It would also cause a proliferation of a diverse set of viruses in the wild, with less predictable properties. Thus, the Influenza virus is remarkable in its method for achieving a multi-level adaptation mechanism, consisting of both a slowly evolving point mutation mechanism, as well as a mechanism for drastically altering the virus’ properties in a single generation through gene-level mixing with other viruses (it’s not quite like sex but probably just as good, if not better). 
It’s also remarkable that these two important properties of the virus arise as a consequence of using RNA instead of DNA as the genetic storage medium. Well, that’s it for me tonight — and if you made it this far through the post, I appreciate your attention; I do tend to ramble in my “Ponderings” posts. There’s actually a lot more fascinating stuff about Influenza A inside the aforementioned Nature article. If you want to know more, I highly recommend the read.
https://www.practically.com/studymaterial/ncert-solutions/docs/class-6th-science/science/motion-and-measurement-of-distances/
# Motion And Measurement Of Distances

NCERT TEXT BOOK EXERCISES

Q1. Give two examples each of modes of transport used on land, water, and air.

Ans. Two examples of modes of transport used on land are buses and wheel carts. Two examples of modes of transport used on water are ships and boats. Two examples of modes of transport used in air are aeroplanes and helicopters.

Q2. Fill in the blanks :

(i) One meter is ______________ cm.
(ii) Five kilometers is ______________ m.
(iii) Motion of a child on a swing is ______________.
(iv) Motion of the needle of a sewing machine is ______________.
(v) Motion of wheel of a bicycle is ______________.

Ans.

(i) One meter is 100 cm.
(ii) Five kilometers is 5000 m.
1 km = 1000 m, so 5 km = 1000 × 5 = 5000 m. Hence, the answer is 5000 m.
(iii) Motion of a child on a swing is periodic.
Periodic motion : The motion of a swing repeats itself at a certain time interval. Therefore, it has periodic motion. Hence, a child on a swing is said to have periodic motion.
(iv) Motion of the needle of a sewing machine is periodic.
Periodic motion : The needle of a sewing machine moves up and down repeatedly with a certain time interval. Hence, it is an example of periodic motion.
(v) Motion of the wheel of a bicycle is circular.
Circular motion : The central part of the wheel of a bicycle is attached to a fixed point. The wheel rotates about this fixed point as the bicycle moves. Hence, the wheel has circular motion.

Q3. Why can a pace or a footstep not be used as a standard unit of length ?

Ans. The size of the foot varies from person to person. If the footsteps of two persons are used to measure the same length, the two measured distances may not be equal. Thus, a footstep is not a constant quantity. Hence, it cannot be used as a standard unit of length.

Q4. Arrange the following lengths in their increasing magnitude : 1 meter, 1 centimeter, 1 kilometer, 1 millimeter.

Ans.
1 cm = 10 mm
1 m = 100 cm = 1000 mm
Again, 1 km = 1000 m = 100000 cm = 1000000 mm.
Hence, 1 mm is smaller than 1 cm, 1 cm is smaller than 1 m, and 1 m is smaller than 1 km, i.e.,
1 millimeter < 1 centimeter < 1 meter < 1 kilometer

Q5. The height of a person is 1.65 m. Express this in cm and mm.

Ans. Height of the person = 1.65 m
1 m = 100 cm, so 1.65 m = 100 × 1.65 = 165 cm. Hence, the height of the person is 165 cm.
Again, 1 m = 100 cm = 1000 mm, therefore 1.65 m = 1.65 × 1000 = 1650 mm. Hence, the height of the person is 1650 mm.

Q6. The distance between Radha’s home and her school is 3250 m. Express this distance in km.

Ans. The distance between Radha’s home and her school is 3250 m.
1 km = 1000 m, i.e., 1000 m = 1 km
$3250\ \mathrm{m} = \frac{3250}{1000}\ \mathrm{km} = 3.25\ \mathrm{km}$

Q7. While measuring the length of a knitting needle, the reading of the scale at one end is 3 cm and at the other end is 33.1 cm. What is the length of the needle ?

Ans. The reading of the scale at one end is 3 cm and at the other end is 33.1 cm. Therefore, the length of the knitting needle is given by subtracting the two readings, i.e., (33.1 − 3.0) cm = 30.1 cm.

Q8. Write the similarities and the differences between the motion of a bicycle and a ceiling fan that has been switched on.

Ans. Similarities between the motion of a bicycle and a ceiling fan :
(i) The blades of a fan and the wheels of a bicycle are fixed at a point.
(ii) Both have circular motion about their respective fixed points.
Differences between the motion of a bicycle and a ceiling fan :
(i) A bicycle has linear motion, whereas the blades of a ceiling fan do not have linear motion.
(ii) The blades of a fan have only circular (periodic) motion, whereas a bicycle also moves forward in rectilinear motion.

Q9. Why can you not use an elastic measuring tape to measure distance ? What would be some of the problems you would meet in telling someone about a distance you measured with an elastic tape ?

Ans. An elastic measuring tape is stretchable.
It cannot be used to measure distances because the length of the tape may change on stretching. As a result, the measured length would not be correct. If you measure the length of an object twice using an elastic tape, you may get a different value of the same length each time. This is because elastic tapes are stretchable.

Q10. Give two examples of periodic motion.

Ans. Examples of periodic motion :
(i) Motion of a pendulum : The bob of a pendulum repeats its motion after a fixed time interval. This motion is called periodic motion.
(ii) Motion of a boy sitting on a swing : The motion of a swing repeats itself after a fixed time interval. Hence, a boy sitting on a swing has periodic motion.
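The unit conversions worked out in Q5–Q7 above can be double-checked with a short script (a sketch; the function names are mine):

```python
# Checking the unit conversions used in Q5-Q7.
# Factors: 1 m = 100 cm = 1000 mm, and 1 km = 1000 m.

def m_to_cm(metres: float) -> float:
    return metres * 100

def m_to_mm(metres: float) -> float:
    return metres * 1000

def m_to_km(metres: float) -> float:
    return metres / 1000

print(round(m_to_cm(1.65), 2))   # 165.0  (Q5: height in cm)
print(round(m_to_mm(1.65), 2))   # 1650.0 (Q5: height in mm)
print(m_to_km(3250))             # 3.25   (Q6: distance in km)
print(round(33.1 - 3.0, 1))      # 30.1   (Q7: needle length in cm)
```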
https://proxieslive.com/tag/nabla/
## If $\nabla \cdot \vec{F} = 0$ show that $\vec{F}=\nabla \times \int_{0}^{1} \vec{F}(tx,ty,tz)\times(tx,ty,tz)dt$.

Suppose $$\vec{F}$$ is a vector field on $$\mathbb{R}^3$$ and $$\nabla \cdot \vec{F} = 0$$. Prove that: $$\vec{F}=\nabla \times \int_{0}^{1} \vec{F}(tx,ty,tz)\times(tx,ty,tz)dt$$. Tried doing it, but can’t seem to get it exactly right. Can anyone give a solution with working? Naturally, just doing the 1st component should suffice.

## Norm of $||(x-x^*,y-y^*,z-z^*) ||\leq ||\nabla J(x,y,z)||$

I want to prove that $$||(x-x^*,y-y^*,z-z^*) ||\leq ||\nabla J(x,y,z)||,$$ where $$J(x,y,z)=\exp(\frac{x+y+z}{2016})+\frac{x^2+2y^2+3z^2}{2}$$ and $$(x^*,y^*,z^*)$$ is the minimum of $$J$$. Also, I have $$||(x-x^*,y-y^*,z-z^*) ||^2\leq U^t H U$$ with $$H$$ the Hessian matrix of $$J$$ and $$U=(x-x^*,y-y^*,z-z^*).$$ Could you give me some clue to complete the proof? Thanks!

## Does $\nabla g=\omega(\cdot) g$ imply $\nabla$ is metric w.r.t. a conformal rescaling of $g$?

This is a cross-post. Let $$E$$ be a smooth vector bundle over a manifold $$M$$, where $$\text{rank}(E) > 1,\dim M > 1$$. Suppose that $$E$$ is equipped with a metric $$g$$ and an affine connection $$\nabla$$, such that $$\nabla_X g=\omega (X) g$$ for every $$X \in \Gamma(TM)$$. (Here $$\omega$$ is a one-form). Must $$\omega$$ be closed? Clearly, $$\nabla$$ is metric-compatible ($$\nabla g=0$$) iff $$\omega=0$$. Moreover, $$\omega=d\phi$$ is exact if and only if $$\nabla s=0$$ where $$s=e^{-\phi}g$$, i.e. $$\nabla$$ is metric w.r.t. a positive conformal rescaling of $$g$$. So, an alternative formulation of the question is the following: Suppose that $$\nabla g=\omega (\cdot) g$$ for some $$\omega \in \Omega^1(M)$$. Must $$\nabla$$ be metric w.r.t. a local conformal rescaling of $$g$$? Differentiating $$\nabla g=\omega (\cdot) g$$, we get $$R(X,Y)g=d\omega(X,Y)g$$, so if $$\nabla$$ is flat then $$\omega$$ is closed.
I required $$\text{rank}(E) > 1$$, since if the rank is $$1$$, $$\nabla g$$ can always be written as $$\omega (\cdot) g$$ for a suitable $$\omega$$, so the assumption always holds, but I think that $$d\omega=0$$ does not always hold. Maybe this can be used to construct a counterexample of higher rank by taking a direct sum of line bundles.

## How to treat an equation of the form $-\Delta u=G\cdot \nabla I(u)+f(u) ?$

There are plenty of variational techniques (direct methods of calculus of variations, mountain pass type theorems, Lusternik-Schnirelmann theory) to prove the existence of solutions of a semilinear elliptic equation of the form $$-\Delta u=f(u)$$ in $$H^1_0(\Omega)$$, under suitable hypotheses on $$f:\mathbb{R}\to\mathbb{R}$$, thanks to the fact that we can see weak solutions of this problem as the stationary points of the functional: $$I:H^1_0(\Omega)\to\mathbb{R}, u\mapsto\frac{1}{2}\|u\|_{H^1_0}^2-\int_\Omega\int_0^{u(x)}f(s)\operatorname{d}s\operatorname{d}x.$$ If $$G:\Omega\rightarrow\mathbb{R}^n$$, how can we treat the equation: $$-\Delta u=G\cdot\nabla I(u)+f(u),$$ or even, if $$g:\mathbb{R}^n\to\mathbb{R}$$, the equation: $$-\Delta u=g\left(\nabla I(u)\right)+f(u)?$$ If $$n=1$$, I saw in the $$G$$-case that we can transform the equation into another semilinear elliptic equation that does not have the dissipative term $$u'$$, with the same trick used in Sturm-Liouville theory, and so we can bring this problem back into the realm of the previous variational problem. However, what about the $$g$$-case if $$n=1$$? What about the $$G$$-case if $$n\ge2$$? Can we say anything about the $$g$$-case if $$n\ge 2$$?
https://www.physicsforums.com/threads/klein-gordon-ret-greens-function-on-closed-form.739883/
# Klein-Gordon retarded Green's function in closed form

1. Feb 23, 2014

### center o bass

In this note (http://sgovindarajan.wdfiles.com/local--files/serc2009/greenfunction.pdf) the Klein-Gordon retarded Green's function is derived in the form $$G_{ret}(x − x′) = \theta(t − t') \int \frac{d^3 \vec k}{(2\pi)^3 \omega_k} \sin \omega_k (t − t′) e^{i \vec{k}\cdot (\vec x - \vec x')}$$ where $\omega_k = \sqrt{\vec{k}^2 + m^2}$. The author then gives an exercise to carry out the rest of the integration and express the Green's function in closed form. But I do not see how to carry the integration out due to the dependence of $\omega_k$ on $\vec k^2$. Does anyone have any suggestions on how this might be solved, or alternatively know where I can find a full derivation?

Last edited: Feb 23, 2014

2. Feb 23, 2014

### dextercioby

You need to convert to spherical coordinates in momentum space. Then you can do it. It ends up with Bessel functions.
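Following that hint, the angular integration can be carried out explicitly; a sketch, writing $r = |\vec x - \vec x'|$, $\tau = t - t'$, and aligning the $k_z$-axis of momentum space with $\vec r$:

```latex
% Angular part of the k-integral, with u = cos(theta) so that
% \vec k \cdot \vec r = k r u; the azimuthal integral gives 2*pi.
\int \frac{d^3\vec k}{(2\pi)^3\,\omega_k}\,\sin(\omega_k \tau)\,
     e^{i\vec k\cdot\vec r}
  = \frac{1}{(2\pi)^2}\int_0^\infty \frac{k^2\,dk}{\omega_k}\,
    \sin(\omega_k \tau)\int_{-1}^{1} e^{ikru}\,du
  = \frac{1}{2\pi^2 r}\int_0^\infty \frac{k\,dk}{\omega_k}\,
    \sin(kr)\,\sin(\omega_k \tau).
```

As a check, in the massless limit $\omega_k = k$ the identity $2\sin(kr)\sin(k\tau) = \cos k(\tau-r) - \cos k(\tau+r)$ together with $\int_0^\infty \cos(ka)\,dk = \pi\,\delta(a)$ collapses this (for $\tau > 0$) to $\delta(\tau - r)/(4\pi r)$, the familiar retarded wave-equation result; for $m > 0$ the remaining $k$-integral produces the Bessel-function terms mentioned in the reply.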
https://www.jobilize.com/physics-ap/section/conceptual-questions-guts-the-unification-of-forces-by-openstax?qcr=www.quizover.com
# 33.6 GUTs: the unification of forces (Page 5/12)

## Summary

• Attempts to show unification of the four forces are called Grand Unified Theories (GUTs) and have been partially successful, with connections proven between EM and weak forces in electroweak theory.
• The strong force is carried by eight proposed particles called gluons, which are intimately connected to a quantum number called color—their governing theory is thus called quantum chromodynamics (QCD). Taken together, QCD and the electroweak theory are widely accepted as the Standard Model of particle physics.
• Unification of the strong force is expected at such high energies that it cannot be directly tested, but it may have observable consequences in the as-yet unobserved decay of the proton and topics to be discussed in the next chapter. Although unification of forces is generally anticipated, much remains to be done to prove its validity.

## Conceptual questions

If a GUT is proven, and the four forces are unified, it will still be correct to say that the orbit of the moon is determined by the gravitational force. Explain why.

If the Higgs boson is discovered and found to have mass, will it be considered the ultimate carrier of the weak force? Explain your response.

Gluons and the photon are massless. Does this imply that the $W^+$, $W^-$, and $Z^0$ are the ultimate carriers of the weak force?

## Problems & Exercises

Integrated Concepts

The intensity of cosmic ray radiation decreases rapidly with increasing energy, but there are occasionally extremely energetic cosmic rays that create a shower of radiation from all the particles they create by striking a nucleus in the atmosphere, as seen in the figure given below. Suppose a cosmic ray particle having an energy of $10^{10}\ \text{GeV}$ converts its energy into particles with masses averaging $200\ \text{MeV}/c^2$.
(a) How many particles are created? (b) If the particles rain down on a $1.00\text{-km}^2$ area, how many particles are there per square meter?

(a) $5\times 10^{10}$ (b) $5\times 10^{4}\ \text{particles/m}^2$

Integrated Concepts

Assuming conservation of momentum, what is the energy of each $\gamma$ ray produced in the decay of a neutral pion at rest, in the reaction $\pi^0 \to \gamma + \gamma$?

Integrated Concepts

What is the wavelength of a 50-GeV electron, which is produced at SLAC? This provides an idea of the limit to the detail it can probe.

$2.5\times 10^{-17}\ \text{m}$

Integrated Concepts

(a) Calculate the relativistic quantity $\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}$ for 1.00-TeV protons produced at Fermilab. (b) If such a proton created a $\pi^+$ having the same speed, how long would its life be in the laboratory? (c) How far could it travel in this time?

Integrated Concepts

The primary decay mode for the negative pion is $\pi^- \to \mu^- + \bar{\nu}_\mu$. (a) What is the energy release in MeV in this decay? (b) Using conservation of momentum, how much energy does each of the decay products receive, given the $\pi^-$ is at rest when it decays? You may assume the muon antineutrino is massless and has momentum $p = E/c$, just like a photon.

(a) 33.9 MeV (b) Muon antineutrino 29.8 MeV, muon 4.1 MeV (kinetic energy)

Integrated Concepts

Plans for an accelerator that produces a secondary beam of K-mesons to scatter from nuclei, for the purpose of studying the strong force, call for them to have a kinetic energy of 500 MeV. (a) What would the relativistic quantity $\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}$ be for these particles? (b) How long would their average lifetime be in the laboratory? (c) How far could they travel in this time?
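As a quick sanity check (my own addition, not part of the textbook), several of the worked answers above can be reproduced in a few lines of Python; the pion and muon masses are standard PDG values in MeV/c²:

```python
# Cosmic ray shower: 10^10 GeV shared among particles averaging 200 MeV/c^2.
E_ray_MeV = 1e10 * 1e3                 # 10^10 GeV expressed in MeV
m_avg_MeV = 200.0                      # average particle rest energy
n_particles = E_ray_MeV / m_avg_MeV    # (a) number of particles created
per_m2 = n_particles / 1e6             # (b) spread over 1.00 km^2 = 10^6 m^2

# 50-GeV electron wavelength, ultrarelativistic limit: lambda = hc / E.
hc_MeV_m = 1239.84e-15                 # h*c in MeV*m
lam = hc_MeV_m / 50e3                  # wavelength in meters

# pi- -> mu- + antineutrino, pion at rest: equal and opposite momenta.
m_pi, m_mu = 139.57, 105.66            # rest energies, MeV
p = (m_pi**2 - m_mu**2) / (2 * m_pi)   # momentum of each product, MeV/c
E_nu = p                               # massless antineutrino: E = pc
KE_mu = m_pi - m_mu - p                # muon kinetic energy, MeV
```

The numbers land on the quoted answers: $5\times10^{10}$ particles, $5\times10^{4}$ per m², about $2.5\times10^{-17}$ m, and 29.8 MeV / 4.1 MeV for the decay products.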
Integrated Concepts

Suppose you are designing a proton decay experiment and you can detect 50 percent of the proton decays in a tank of water. (a) How many kilograms of water would you need to see one decay per month, assuming a lifetime of $10^{31}\ \text{y}$? (b) How many cubic meters of water is this? (c) If the actual lifetime is $10^{33}\ \text{y}$, how long would you have to wait on average to see a single proton decay?

(a) $7.2\times 10^{5}\ \text{kg}$ (b) $7.2\times 10^{2}\ \text{m}^3$ (c) 100 months

Integrated Concepts

In supernovas, neutrinos are produced in huge amounts. They were detected from the 1987A supernova in the Magellanic Cloud, which is about 120,000 light years away from the Earth (relatively close to our Milky Way galaxy). If neutrinos have a mass, they cannot travel at the speed of light, but if their mass is small, they can get close. (a) Suppose a neutrino with a $7\ \text{eV}/c^2$ mass has a kinetic energy of 700 keV. Find the relativistic quantity $\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}$ for it. (b) If the neutrino leaves the 1987A supernova at the same time as a photon and both travel to Earth, how much sooner does the photon arrive? This is not a large time difference, given that it is impossible to know which neutrino left with which photon and the poor efficiency of the neutrino detectors. Thus, the fact that neutrinos were observed within hours of the brightening of the supernova only places an upper limit on the neutrino's mass. (Hint: You may need to use a series expansion to find $v$ for the neutrino, since its $\gamma$ is so large.)

Consider an ultrahigh-energy cosmic ray entering the Earth's atmosphere (some have energies approaching a joule).
Construct a problem in which you calculate the energy of the particle based on the number of particles in an observed cosmic ray shower. Among the things to consider are the average mass of the shower particles, the average number per square meter, and the extent (number of square meters covered) of the shower. Express the energy in eV and joules.

Consider a detector needed to observe the proposed, but extremely rare, decay of an electron. Construct a problem in which you calculate the amount of matter needed in the detector to be able to observe the decay, assuming that it has a signature that is clearly identifiable. Among the things to consider are the estimated half life (long for rare events), and the number of decays per unit time that you wish to observe, as well as the number of electrons in the detector substance.
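Looking back at the proton-decay tank problem above, its three answers can also be checked numerically (my own sketch; like the quoted answer it counts all ten protons per water molecule, with a molar mass of 18 g/mol):

```python
# Proton-decay experiment estimate.
N_A = 6.022e23                        # Avogadro's number, 1/mol
tau_years = 1e31                      # assumed proton lifetime, years
efficiency = 0.5                      # half of all decays are detected
detected_per_year = 12.0              # one detected decay per month

# Want efficiency * N / tau = 12 per year  ->  N = 12 * tau / efficiency.
N_protons = detected_per_year * tau_years / efficiency

protons_per_kg = 10 * N_A / 0.018     # 10 protons per H2O, 18 g per mole
mass_kg = N_protons / protons_per_kg  # (a) mass of water needed
volume_m3 = mass_kg / 1000.0          # (b) water density 1000 kg/m^3
wait_months = (1e33 / 1e31) * 1.0     # (c) 100x longer lifetime -> 100 months
```

The results round to the quoted $7.2\times10^{5}$ kg, $7.2\times10^{2}$ m³, and 100 months.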
https://zbmath.org/authors/?q=ai%3Aspivak.michael
## Spivak, Michael

Author ID: spivak.michael
Published as: Spivak, Michael
External Links: MGP · Wikidata · GND · IdRef
Documents Indexed: 23 Publications since 1965, including 20 Books; 3 Further Contributions

### Co-Authors

23 single-authored
2 Milnor, John Willard
1 Albis González, Víctor Samuel
1 Brieva-Bustillo, E.

### Serials

1 Topology
1 Annals of Mathematics Studies
1 Seminar on Mathematical Sciences
1 Texts and Readings in Mathematics

### Fields

14 Differential geometry (53-XX)
7 Global analysis, analysis on manifolds (58-XX)
5 Algebraic topology (55-XX)
4 Manifolds and cell complexes (57-XX)
3 Topological groups, Lie groups (22-XX)
3 Real functions (26-XX)
3 Mechanics of particles and systems (70-XX)
2 Field theory and polynomials (12-XX)
2 Partial differential equations (35-XX)
2 Calculus of variations and optimal control; optimization (49-XX)
1 General and overarching topics; collections (00-XX)
1 History and biography (01-XX)
1 Geometry (51-XX)

### Citations contained in zbMATH Open

23 Publications have been cited 1,708 times in 1,244 Documents. Cited by Year:

Morse theory. Based on lecture notes by M. Spivak and R. Wells. Zbl 0108.10401 Milnor, John W. 1963
Calculus on manifolds. A modern approach to classical theorems of advanced calculus. Zbl 0141.05403 Spivak, Michael 1965
A comprehensive introduction to differential geometry. Vol. I. 2nd ed. Zbl 0439.53001 Spivak, Michael 1979
A comprehensive introduction to differential geometry. Vol. 1-5. 3rd ed. with corrections. Zbl 1213.53001 Spivak, Michael 1999
A comprehensive introduction to differential geometry. Vol. III. 2nd ed. Zbl 0439.53003 Spivak, Michael 1979
A comprehensive introduction to differential geometry. Vol. 1. Zbl 0202.52001 Spivak, Michael 1970
A comprehensive introduction to differential geometry. Vol. II. 2nd ed. Zbl 0439.53002 Spivak, Michael 1979
A comprehensive introduction to differential geometry. Vol. 3.
Zbl 0306.53001 Spivak, Michael 1975 Spaces satisfying Poincaré duality. Zbl 0185.50904 Spivak, Michael 1967 A comprehensive introduction to differential geometry. Vol. 4. Zbl 0306.53002 Spivak, Michael 1975 A comprehensive introduction to differential geometry. Vol. V. 2nd ed. Zbl 0439.53005 Spivak, Michael 1979 A comprehensive introduction to differential geometry. Vol. 5. Zbl 0306.53003 Spivak, Michael 1975 A comprehensive introduction to differential geometry. Vol. 2. Zbl 0202.52201 Spivak, Michael 1970 A comprehensive introduction to differential geometry. Vol. IV. 2nd ed. Zbl 0439.53004 Spivak, Michael 1979 Calculus. Zbl 0159.34302 Spivak, Michael 1967 Calculus on manifolds. Zbl 0173.05104 Spivak, Michael 1968 Calculus on manifolds. A modern approach to classical theorems of advanced calculus. (Analiza na rozmaitosciach. Nowoczesne podejscie do klasycznych twierdzen zaawansowanej analizy.). Zbl 0381.58003 Spivak, Michael 1977 Calculus. 4th ed. Zbl 1272.26002 Spivak, Michael 2008 Morse theory. Based on lecture notes by M. Spivak and R. Wells. Reprint of the 1963 original published by Princeton University Press. Zbl 1281.58005 Milnor, John W. 2013 Physics for mathematicians. Mechanics I. Zbl 1281.70001 Spivak, Michael 2010 Calculus. 2nd ed. Zbl 0458.26001 Spivak, Michael 1980 Calculus. Corrected 3rd ed. Zbl 1117.26002 Spivak, Michael 2006 Some left-over problems from classical differential geometry. Zbl 0306.53004 Spivak, Michael 1975 Morse theory. Based on lecture notes by M. Spivak and R. Wells. Reprint of the 1963 original published by Princeton University Press. Zbl 1281.58005 Milnor, John W. 2013 Physics for mathematicians. Mechanics I. Zbl 1281.70001 Spivak, Michael 2010 Calculus. 4th ed. Zbl 1272.26002 Spivak, Michael 2008 Calculus. Corrected 3rd ed. Zbl 1117.26002 Spivak, Michael 2006 A comprehensive introduction to differential geometry. Vol. 1-5. 3rd ed. with corrections. Zbl 1213.53001 Spivak, Michael 1999 Calculus. 2nd ed. 
Zbl 0458.26001 Spivak, Michael 1980 A comprehensive introduction to differential geometry. Vol. I. 2nd ed. Zbl 0439.53001 Spivak, Michael 1979 A comprehensive introduction to differential geometry. Vol. III. 2nd ed. Zbl 0439.53003 Spivak, Michael 1979 A comprehensive introduction to differential geometry. Vol. II. 2nd ed. Zbl 0439.53002 Spivak, Michael 1979 A comprehensive introduction to differential geometry. Vol. V. 2nd ed. Zbl 0439.53005 Spivak, Michael 1979 A comprehensive introduction to differential geometry. Vol. IV. 2nd ed. Zbl 0439.53004 Spivak, Michael 1979 Calculus on manifolds. A modern approach to classical theorems of advanced calculus. (Analiza na rozmaitosciach. Nowoczesne podejscie do klasycznych twierdzen zaawansowanej analizy.). Zbl 0381.58003 Spivak, Michael 1977 A comprehensive introduction to differential geometry. Vol. 3. Zbl 0306.53001 Spivak, Michael 1975 A comprehensive introduction to differential geometry. Vol. 4. Zbl 0306.53002 Spivak, Michael 1975 A comprehensive introduction to differential geometry. Vol. 5. Zbl 0306.53003 Spivak, Michael 1975 Some left-over problems from classical differential geometry. Zbl 0306.53004 Spivak, Michael 1975 A comprehensive introduction to differential geometry. Vol. 1. Zbl 0202.52001 Spivak, Michael 1970 A comprehensive introduction to differential geometry. Vol. 2. Zbl 0202.52201 Spivak, Michael 1970 Calculus on manifolds. Zbl 0173.05104 Spivak, Michael 1968 Spaces satisfying Poincaré duality. Zbl 0185.50904 Spivak, Michael 1967 Calculus. Zbl 0159.34302 Spivak, Michael 1967 Calculus on manifolds. A modern approach to classical theorems of advanced calculus. Zbl 0141.05403 Spivak, Michael 1965 Morse theory. Based on lecture notes by M. Spivak and R. Wells. Zbl 0108.10401 Milnor, John W. 1963 all top 5 ### Cited by 1,751 Authors 14 Garcia, Ronaldo A. 11 Sotomayor, Jorge 9 Dajczer, Marcos 7 Klein, John Robert 6 Bürgisser, Peter 6 Fiori, Simone 5 Atzberger, Paul J. 
5 Bank, Bernd 5 Düldül, Mustafa 5 Giusti, Marc 5 Heintz, Joos 5 Larsen, Jens Christian 5 Lewicka, Marta 5 Mira, Pablo 5 Pinkall, Ulrich 5 Romani, Giuliano 5 Schlenker, Jean-Marc 5 Vlachos, Theodoros 4 Amat, Sergio P. 4 Bernig, Andreas 4 Busquier, Sonia 4 Ciarlet, Philippe Gaston 4 de Azevedo Tribuzy, Renato 4 Erişir, Tülay 4 Eschenburg, Jost-Hinrich 4 Gálvez, Jose Antonio 4 Gross, B. J. 4 Guadalupe, Irwen Válle 4 Jenssen, Helge Kristian 4 Kogan, Irina A. 4 Krejčiřík, David 4 Kuruoğlu, Nuri 4 Li, Jiayu 4 López Camino, Rafael 4 Mardare, Cristinel 4 Markvorsen, Steen 4 Mora, Maria Giovanna 4 Ortega, Romeo S. 4 Palmas, Oscar 4 Plaza, Sergio 4 Rheinboldt, Werner C. 4 Sabitov, Idzhad Khakovich 4 Yao, Pengfei 4 Zheng, Hong 4 Zwiebach, Barton 3 Abdel-All, Nassar Hassan 3 Abedi, Reza 3 Aléssio, Osmar 3 Amelunxen, Dennis 3 Anderson, Ian M. 3 Anderson, Michael T. 3 Angenent, Sigurd Bernardus 3 Argyros, Ioannis Konstantinos 3 Audoly, Basile 3 Baillif, Mathieu 3 Bates, Larry M. 3 Boon, Wietse M. 3 Bratishchev, Aleksandr Vasil’evich 3 Broer, Henk W. 3 Capobianco, Giuseppe 3 Cazals, Frédéric 3 Chen, Gui-Qiang G. 3 Choe, Jaigyoung 3 Cruttwell, G. S. H. 3 Dillen, Franki 3 Duits, Remco 3 Düldül, Bahar Uyar 3 Eugster, Simon R. 3 Foote, Robert L. 3 Garay, Oscar Jesus 3 Ghomi, Mohammad 3 Guillot, Adolfo 3 Gungor, Mehmet Ali 3 Haber, Robert Bruce 3 Hamann, Bernd 3 Han, Xiaoli 3 Hilout, Saïd 3 Howard, Ralph E. 3 Hurtado, Ana 3 Iliev, Bozhidar Zakhariev 3 Ivanova-Karatopraklieva, Ivanka I. 3 Koenderink, Jan J. 3 Labourie, François 3 Law, Peter R. 3 Lembo, Marzio 3 Leung, Pui-Fai 3 Li, Haizhong 3 Lopes, Débora 3 Malchiodi, Andrea 3 Miller, Scott T. 
3 Morvan, Jean-Marie 3 Nicolò, Francesco 3 Nomizu, Katsumi 3 Olver, Peter John 3 Palmer, Vicente 3 Pardo, Luis Miguel 3 Patrangenaru, Victor 3 Peralta-Salas, Daniel 3 Pomet, Jean-Baptiste 3 Portmann, Fabian ...and 1,651 more Authors all top 5 ### Cited in 372 Serials 35 Transactions of the American Mathematical Society 26 Journal of Mathematical Analysis and Applications 25 Journal of Geometry and Physics 22 Proceedings of the American Mathematical Society 20 Mathematische Annalen 20 Differential Geometry and its Applications 18 Mathematische Zeitschrift 18 The Journal of Geometric Analysis 16 Computer Aided Geometric Design 15 Communications in Mathematical Physics 15 Annals of Global Analysis and Geometry 14 Journal of Differential Equations 14 Results in Mathematics 13 Archive for Rational Mechanics and Analysis 13 Journal of Computational Physics 13 Applied Mathematics and Computation 13 Duke Mathematical Journal 11 General Relativity and Gravitation 11 Rocky Mountain Journal of Mathematics 11 Topology and its Applications 10 International Journal of Theoretical Physics 10 Journal of Mathematical Physics 10 Geometriae Dedicata 10 Journal of Computational and Applied Mathematics 10 Manuscripta Mathematica 10 International Journal of Geometric Methods in Modern Physics 9 Advances in Mathematics 9 The Annals of Statistics 9 Automatica 9 Journal of Geometry 9 Journal of Mathematical Sciences (New York) 9 Comptes Rendus. Mathématique. Académie des Sciences, Paris 8 Computer Methods in Applied Mechanics and Engineering 8 Annali di Matematica Pura ed Applicata. Serie Quarta 8 Journal of Elasticity 8 Journal of Mathematical Imaging and Vision 8 Journal of Nonlinear Science 8 Calculus of Variations and Partial Differential Equations 7 Bulletin of the Australian Mathematical Society 7 Nuclear Physics. 
B 7 Czechoslovak Mathematical Journal 7 Inventiones Mathematicae 7 Kodai Mathematical Journal 7 Systems & Control Letters 7 Communications in Partial Differential Equations 7 Linear Algebra and its Applications 7 Bulletin of the American Mathematical Society. New Series 6 Biological Cybernetics 6 Communications on Pure and Applied Mathematics 6 Annales de l’Institut Fourier 6 Archiv der Mathematik 6 Journal of Mathematical Economics 6 Journal of Optimization Theory and Applications 6 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 6 Numerische Mathematik 6 Rendiconti del Seminario Matematico della Università di Padova 6 Tôhoku Mathematical Journal. Second Series 6 Physica D 6 Stochastic Processes and their Applications 6 International Journal of Modern Physics D 6 Geometry & Topology 6 Bulletin of the Brazilian Mathematical Society. New Series 5 International Journal of Control 5 Physics Letters. A 5 Journal of Functional Analysis 5 Journal für die Reine und Angewandte Mathematik 5 International Journal of Mathematics 5 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations 5 Acta Mathematica Sinica. English Series 5 Qualitative Theory of Dynamical Systems 4 The Mathematical Intelligencer 4 Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 4 Mathematical Systems Theory 4 Memoirs of the American Mathematical Society 4 Michigan Mathematical Journal 4 Monatshefte für Mathematik 4 Rendiconti del Circolo Matemàtico di Palermo. Serie II 4 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 4 Applied Mathematical Modelling 4 Historia Mathematica 4 Journal de Mathématiques Pures et Appliquées. Neuvième Série 4 SIAM Journal on Mathematical Analysis 4 Boletim da Sociedade Brasileira de Matemática. Nova Série 4 Annales de la Faculté des Sciences de Toulouse. Mathématiques. 
Série VI 4 Advances in Applied Clifford Algebras 4 Journal of Dynamical and Control Systems 4 Analysis and Applications (Singapore) 4 Mediterranean Journal of Mathematics 4 Nonlinear Analysis. Theory, Methods & Applications 3 Acta Mechanica 3 American Mathematical Monthly 3 Israel Journal of Mathematics 3 Mathematical Biosciences 3 Physics Letters. B 3 Reports on Mathematical Physics 3 ZAMP. Zeitschrift für angewandte Mathematik und Physik 3 Arkiv för Matematik 3 Reviews in Mathematical Physics 3 Annals of the Institute of Statistical Mathematics 3 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série ...and 272 more Serials all top 5 ### Cited in 60 Fields 542 Differential geometry (53-XX) 164 Global analysis, analysis on manifolds (58-XX) 144 Partial differential equations (35-XX) 102 Manifolds and cell complexes (57-XX) 97 Numerical analysis (65-XX) 75 Dynamical systems and ergodic theory (37-XX) 69 Mechanics of deformable solids (74-XX) 64 Calculus of variations and optimal control; optimization (49-XX) 62 Computer science (68-XX) 62 Relativity and gravitational theory (83-XX) 52 Quantum theory (81-XX) 51 Mechanics of particles and systems (70-XX) 46 Statistics (62-XX) 45 Systems theory; control (93-XX) 42 Ordinary differential equations (34-XX) 40 Fluid mechanics (76-XX) 39 Algebraic topology (55-XX) 38 Several complex variables and analytic spaces (32-XX) 35 Probability theory and stochastic processes (60-XX) 34 Functions of a complex variable (30-XX) 32 Convex and discrete geometry (52-XX) 32 Operations research, mathematical programming (90-XX) 30 Functional analysis (46-XX) 29 Biology and other natural sciences (92-XX) 28 Topological groups, Lie groups (22-XX) 27 Algebraic geometry (14-XX) 27 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 22 Real functions (26-XX) 21 Geometry (51-XX) 19 Nonassociative rings and algebras (17-XX) 18 Operator theory (47-XX) 18 Optics, electromagnetic theory (78-XX) 15 
Approximations and expansions (41-XX) 14 History and biography (01-XX) 14 Information and communication theory, circuits (94-XX) 13 Linear and multilinear algebra; matrix theory (15-XX) 12 Statistical mechanics, structure of matter (82-XX) 11 General and overarching topics; collections (00-XX) 10 Measure and integration (28-XX) 10 General topology (54-XX) 9 Combinatorics (05-XX) 9 Number theory (11-XX) 8 Potential theory (31-XX) 7 Group theory and generalizations (20-XX) 7 Harmonic analysis on Euclidean spaces (42-XX) 7 Classical thermodynamics, heat transfer (80-XX) 6 Commutative algebra (13-XX) 5 Associative rings and algebras (16-XX) 5 Category theory; homological algebra (18-XX) 4 Mathematical logic and foundations (03-XX) 4 Integral equations (45-XX) 3 Difference and functional equations (39-XX) 3 Integral transforms, operational calculus (44-XX) 3 Geophysics (86-XX) 2 $$K$$-theory (19-XX) 2 Special functions (33-XX) 2 Astronomy and astrophysics (85-XX) 1 Order, lattices, ordered algebraic structures (06-XX) 1 Field theory and polynomials (12-XX) 1 Sequences, series, summability (40-XX) ### Wikidata Timeline The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
http://unanimated.github.io/ts/ts-basics.htm
Typesetting Basics

In order to typeset a sign, you need to time it first. As this needs to be precise, you don't do it on the audio track like with regular lines. Signs need to be frame timed, because if your sign is one frame off, you belong in Hadena. So you rough time the sign first, whether on the audio or by inputting the timecode you got from the translator/typist/editor. (Or, if you don't live in the past, you use this script.) Then you go frame by frame with arrow keys till you find the first frame where the sign appears [the relevant line must be selected in the script]. Then you click the first of the blue icons here: (No. Use fucking hotkeys.) That sets the start time. Use arrow keys to check if you did it right. Then navigate to the last frame the sign is visible on and click the second one. Again, check that it's right. The first one sets the time at the start of the visible frame, the second one at the end, so if you use both at the same frame, the sign will be visible on that frame; in other words, the duration of the sign will not be zero. This can be used for typesetting signs frame by frame by hand. Many signs start/end at a keyframe, so you can use the audio track for those; for others, you'll need this method.

When you've timed your signs, you can begin typesetting. You already know how to create styles, so you'll make one. If you need to override anything, you'll use a few tags like \fs \bord \shad etc. You should know all the basic tags and what they do from here. Note 2018: I've never really used \fs once I learned to TS properly. Recalculator + scaling = size changed in 2 seconds. Border is probably what you'll be changing the most often, maybe the shadow and font size, so you should remember \bord \shad \fs (nope) at least. You should remember the hotkeys to the Cycles script, that is. You may also switch between regular and bold quite a bit, but bold/italics have buttons above the typing area so no need to type those. Let's begin.
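For reference (my own illustration, not from the guide): once timed, a sign is just an ordinary Dialogue event in the .ass file whose Start and End sit exactly on the sign's first and last visible frames. The style name, times, and tags below are made-up placeholders:

```
[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
Dialogue: 0,0:01:23.45,0:01:27.55,SignStyle,,0,0,0,,{\pos(640,120)\blur0.6}EPISODE 5
```

Times in .ass are stored with centisecond precision, so they won't fall exactly on frame boundaries; the renderer shows the line on every frame whose timestamp lies inside the interval, which is why you verify with the arrow keys rather than trusting the numbers.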
Here's a simple typeset... This is something that will appear in every episode, so you want to set as much as possible in the style. The only tags I add here are \fad and \blur. The font, colours and border are set in the style. Don't make the mistake of using the default (or any extreme) values! For example, the default has a shadow, but you definitely don't want a shadow here, so make sure you set it to 0. Also don't just assume it's black and white. If you do that, you're going Hadena-style. Get the actual colour from the Japanese sign with the eyedropper tool. Aside from the colours and the border, what will make a difference between a good and bad typeset here is the font choice. So don't just take some basic serif or sans serif font, or something like Comic Sans, but find something that will actually match and look good. Speaking of which, you need to know your fonts, and you need to have them in the first place. Also, that sign has no layers and looks like shit anyway.

Now about that fade... Use arrow keys to go frame by frame and find the place where the fade of the JP sign ends.

^ Check the numbers here. They refer to the currently visible frame, in relation to the start/end time of the line you have selected. So this frame is 898 ms after the start of your sign and 4102 ms before the end of it. If this is where the fade-in ends, you need about a 900 ms fade-in. No need to be too precise; one frame is about 40 ms, so 10 ms more or less won't make a difference. You will then have \fad(900,0). If there's a lead-out too, you do the same but use the second number. 2015 Note: Apply fade makes it much easier.

Another example of a title. This one uses \blur10 (for the border). Be aware that using this much blur may cause lag, so only use it on simple static signs. 2015 Note: That was 2011. \blur10 shouldn't be a problem anymore. 2018 Note: That was 2015. \blur10 definitely isn't a problem. Notice the positioning, too.
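Written out, a fade measured that way ends up as a \fad tag in the line; a hypothetical example (times, coordinates and blur are placeholders):

```
Dialogue: 0,0:05:12.30,0:05:17.30,Title,,0,0,0,,{\pos(640,160)\blur0.6\fad(900,0)}Episode Title
```

\fad takes the fade-in and fade-out durations in milliseconds, so a sign that also faded out over its last 600 ms would use \fad(900,600).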
The top of the typesets is aligned with the top of the Japanese sign. Don't just throw the signs somewhere randomly; try to align them with something. Also, for episode titles and such, try to keep the same positioning in following episodes.

A few notes on using blur. It works differently depending on the borders and shadows you're using.

This is text with no border, no shadow, no blur... \bord0\shad0\blur0:
No border, no shadow, with blur... \bord0\shad0\blur2:
Border, no shadow, blur... \bord4\shad0\blur2: You see it will only blur the outline, not the body.
Border, shadow, no blur... \bord4\shad4\blur0:
Border, shadow, blur... \bord4\shad4\blur2: This blurs both the border and shadow, but not the body.
No border, shadow, blur... \bord0\shad4\blur2: This, however, will blur both the shadow AND the body.

So you see it's the border that determines whether the body will be blurred or not. You can't separate the border from shadow for blurring. If you use both, both will be blurred. To bypass that, you'd have to use 2 layers. More on that later. Same if you want to blur the outline AND the body, you need layers. It would look like this: "Test" is regular mode, "Blur" is 2 layers with the body blurred. Again, more on that in the "Layers" section.

Note: This works the same with 'blur edges' - \be. Experiment to find out the difference between one and the other (shows more with higher values).

Two things related to blur: Blur is the most essential tag for typesetting. Signs without blur look like shit, so never forget to use it. What I do before I start is to use a script to add blur to all signs. Check the scripts section. Note 2018: Actually, that doesn't really matter. If you're doing things right, adding a default blur should require pressing 1 (one) key. That way you start with blur already present on all signs. 0.5-0.6 will work most of the time. You'll change it to higher when needed.
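A sketch of the two-layer construction mentioned above (my own example; all values illustrative, and higher Layer numbers render on top). The bottom layer carries the border, so its \blur softens the outline; its primary colour is set to the border colour so its sharp body edges disappear into that outline; the borderless top layer gets a small \blur of its own so the body is soft as well:

```
Dialogue: 0,0:03:00.00,0:03:05.00,Sign,,0,0,0,,{\pos(320,200)\bord4\shad0\1c&H000000&\3c&H000000&\blur2}Meal
Dialogue: 1,0:03:00.00,0:03:05.00,Sign,,0,0,0,,{\pos(320,200)\bord0\shad0\blur0.6}Meal
```

If I read the "Meal" example right, this is exactly why the "maid" mode fails: there the bottom layer's primary colour matched the top layer's, so its sharp edges showed through the blurred top body.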
NOTE 1: Do not use \blur.5 instead of \blur0.5.
NOTE 2: \blur0.3 does nothing visible. \blur0.4 blurs VERY LITTLE and is only applicable for really sharp video. Don't swarm the script with \blur0.3 when you can see it's not doing anything. ALWAYS check signs at 100% zoom. Use your damn eyes.

Here you can see 3 modes of using blur:

1. The "Mer" part. Completely wrong, because it's only 1 layer and the body is not blurred.
2. The "maid" part. Also wrong. It's two layers, but the primary colour of the bottom layer is the same as the top layer.
3. The "Meal" part. This is correct. You can see it looks like the Japanese sign.

If you can't see the difference, you're blind (or your monitor is terrible) and probably shouldn't typeset. When I separate the layers, it looks like this:

This is one of the most important things to learn about typesetting, so make sure you get this right. The middle part doesn't work because the blurred edges of the top layer create partial transparency, so you can partly see the sharp edges of the font body of the bottom layer. Same thing happens when you use \1a&HFF& on the bottom layer, so DON'T do that either. The "Layers" section of this guide explains it in detail.

Sort the script by time! (At least if you live in the early 2010's.) [menu -> Subtitle -> Sort All Lines -> Start Time] If you don't do this, vsfilter will screw up blur pretty much whenever there are two or more lines visible on the screen at the same time. Top is sorted by time; bottom is not sorted by time. You can see the border on the default style is screwed up. This and worse things happen when you don't sort the script by time. Of course you don't need that when working, and it's more convenient to have different sorting while working, but always sort the final script you're putting in a release. 2015 Note: This is not an issue anymore.

Back to the basics... Here's another title. Very simple but beginners will often fail. This one has a shadow but does not have a border.
Beginners will often use border because there's something black around there and who would bother to look carefully? Border is first so... bam! Well, nope. If your style has border, use \bord0 to kill it. Align your typeset properly. It would be dumb if it was clearly closer to one side. Don't put it under the sign here because it might overlap with main dialogue. Other things to pay attention to: the shadow in this case is not transparent at all; get the shadow distance close enough to the original; try to match the thickness of the letters; and for god's sake don't use a sans serif font like Arial for this. Alternatives that work: These two are fine. The thickness matches, they have some pointy ends like the original, horizontal lines a bit thinner than the vertical ones... everything all right. These two are not too bad, but not as good as the previous examples. They're a bit too roundish, lacking any pointy/thin parts. Alternatives that don't work: Sans Serif doesn't fit here. Square ends don't match at all. Looks dull and inelegant. This is too thick/wide. While handwriting is often useful for typesetting anime, because of the calligraphic nature of kanji, here the kanji is actually pretty simple and orderly. The handwriting looks too disorganized. (If you use a Japanese font to add those Japanese "quotation marks", you're a fucking moron.) Next episode title. Pretty simple - get the sizes right, choose a reasonable alignment, get the border colour right, and use blur. 2015 Note: The inner part is actually lacking blur. (I sucked in 2012.) See section on Layers. You can see it's pretty easy to match the original, so I don't want to see things like this: Using thin outline without blur = nope. Using thick sans serif font = nope. Vertically it's not aligned with anything. That's a fail on a sign that takes a minute to do right. Here's something more interesting. In case it wasn't clear, the smaller circles with To Ra Do Ra are typeset. 
So what you need is letters and circles. The easiest way to make circles is to use a font with symbols, like wingdings. Find out which letter is a circle and use that. 2015 Note: Please no. Use vector drawings. Masquerade makes circles and other shapes really easy to use. Then find a font that has round edges and isn't too thick. Mine was actually too thin but I solved it by adding some outline in the same colour as the primary - white. It was also narrow so I used something like \fscx120. All of this can be set in the style, so no tags needed. To get the letters exactly in the middle of the circles, I used \an5 - align to centre. That way you can use the same \pos coordinates for both the circle and letters and you know it's right in the centre. Now you just need to find the right place to put the circles. Make sure the vertical coordinate is the same for all of them, and that the spaces between them are always the same. The only thing left is to get the right colour for each circle. Tools for colours are above the typing area. Use the eyedropper tool to get the exact ones you need. Speaking of which - always match the colours exactly, not just approximately. Simple typesets for some names. Handwriting font, match the colour, no border, no shadow, use blur. Easy. This close up is different than what they had in the first screenshot. It's thicker and darker so you can use bold, or outline in the same colour... however... If you're gonna use some outline to make the font look thicker, make sure you can actually afford it without making the font look unreadable. This is already pushing it, though still not too bad: Without the border for comparison: This is pretty bad: Letters like 'e' or 's' become hard to read, especially if you stretch the font in one direction. Please avoid stretching fonts more than about 10% in one direction unless you have an extremely good reason. In this case the letters even merge with one another, so try to find a better font instead. 
White font, thick dark red border, no reason to fail on this. Clearly here you need some handwriting/cartoonish font, and not some Arial/Times New Roman thing. [Actually this fails with blur, but hey, it was a long time ago.] Sometimes you have a bit more to typeset than one line. Here you need a simple sans serif font. I used this one not because it was the best but because I was already using it in the episode and it was good enough. 2015 Note: It should be at least bold/thicker, and the colour is wrong. It should also have a "glow", but back then that would have lagged. Aside from the "49 New Messages" in white, this is all done in one line. Dialogue: 0,0:08:08.63,0:08:08.67,mail,Caption,0000,0000,0000,,{\blur0.8\c&HBD8B5F&\pos(126,186)}Subject Thanks!\N\N\N\NSubject This is Chihaya\N\N\N\NSubject It's getting warmer\N\N\N\NSubject It's starting to rain\N\N\N\NSubject Rain was leaking into our clubroom \N\N\N\NSubject This is our clubroom. \N\N\N\NSubject I got in trouble with Dr. Harada \N\N\N\NSubject How do I cut down on faults? \N\N\N\NSubject This is Chihaya \N\N\N\N Subject Karuta players are... \N\N\N\NSubject About hakama \N\N\N\NSubject Guess what happened today \N\N\N\NSubject Notice for the \N\N\N\NTokyo regional tournament You can see there are 4 line breaks between the text lines (\N\N\N\N) so that I don't have to make 6+ separate script lines to typeset. Choose font size that will make the lines fit in between the Japanese lines. When you have the font size, make spaces between the Subject and the rest of each line. You could typeset each line separately, but... the whole thing was scrolling up in a non-linear fashion. That also means you can't use \move. So I did this frame by frame, always changing just the \pos tag (you may notice the whole line has more text than you see on the screen - this text scrolls up in the following frames). It was about 20 frames so I had 20 lines in the script. 
If you typeset each line of text separately, you'd have more than 10 times as many lines in the script. A 20-frame sign is usually pretty pointless to typeset, but the way I did this wasn't really difficult and didn't take much time so I did it anyway. [Note: The colour should be darker, and the font should be thicker.] This was not bad a few years ago but is pretty bad now. If you can't match the colours precisely, you suck. The Japanese is not black and white, so use the eyedropper tool to get it right. You can easily make this so natural that it won't even look like it was typeset. You could also use \fscx110 or so for a better match. And it's missing blur. So I gave somebody the task of typesetting this... and this was his first attempt. Positioning is ok. 1 point there. Colours are fine as well. Another point. Alignment of the text is... well, pretty default. More on that in the next chapter. I don't know why the red sign is serif and the rest is sans serif when the JP signs are all the same font. Also the red looks like crap on the light grey background. All that would be passable for a beginner if it wasn't for one obvious problem - no blur. Just adding blur would make it look much better even with the other problems. Here for reference is my own typesetting. You can see the blur makes it blend in beautifully, though the slant helps a lot as well, and the font is much better than the Arial-ish thing above. 2015 Note: It needs "glow". As a sidenote, see the hand moving "over" the Guard Ships sign? That can be done with the \clip tag. More on that later. A simple typeset: 1. Matching font with roughly matching thickness of letters. [as the English is usually longer, you may need a lot more letters than the jp, so you can't always match the thickness.] 2. Matching colours. 3. Matching border size. 4. Two layers for blur. That should cover the basics. Just a few more notes. Sometimes instead of blur you can use \be - blur edges. 
With value 1 they're pretty much the same but with higher values you'll see the difference. Other tags you can use to override the style are \fscx, \fscy, \fsp... again, you should know all these from the link mentioned at the top. I didn't explain \pos because it's so basic that if you can't figure it out on your own, you're hopeless.

\an can be useful for signs with a line break - \N. Type something short, then \N, then something long, like "This is \N a meaningless test sentence." Use \pos to place it somewhere on the screen. Then add \an9 or \an1 to see how the text changes alignment while using \pos.

One last note about changing margins. Let's use this screenshot: Numbers 5, 6 and 7 are left/right/vertical margin. Change those numbers to change the margin. The values don't add up, they override the defaults. It's only meaningful when you're NOT using the \pos tag, mostly for default dialogue. You can use this if you need to move the subs to avoid overlapping with something else. For example changing right margin to 500 will move them to the left, changing vertical to 100 will move them up etc.
https://math.stackexchange.com/questions/4039427/if-g-1-g-2-g-3-are-abelian-groups-and-0-to-g-1-to-g-2-to-g-3-to-0-is-e
# If $G_1, G_2, G_3$ are abelian groups and $0 \to G_1 \to G_2 \to G_3 \to 0$ is exact, then $G_2 \simeq G_1 \oplus G_3$

If $$G_1, G_2, G_3$$ are abelian groups and $$0 \to G_1 \xrightarrow{\varphi_1} G_2 \xrightarrow{\varphi_2} G_3 \to 0$$ is exact, then $$G_2 \simeq \ker(\varphi_2) \oplus \text{im}(\varphi_2) \simeq G_1 \oplus G_3$$

The above is the conclusion of Example 12.2 in Nonlinear Analysis and Semilinear Elliptic Problems, by Ambrosetti and Malchiodi. I worked it out as follows:

By exactness, $${0} = \ker(\varphi_1)$$, so $$G_1 \simeq \text{im}(\varphi_1) = \ker(\varphi_2) \lhd G_2$$, so it makes sense to consider $$G_2/G_1$$. On the other hand, again by exactness, $$\text{im}(\varphi_2) = G_3$$. By the First Isomorphism Theorem, $$G_2/ \ker(\varphi_2) \simeq G_2/G_1 \simeq G_3.$$

What one would like to do now is to "multiply both sides by $$G_1$$ and cancel out in the left-hand side". My question is, how to do it in a rigorous way? I tried to write $$G_2/G_1$$ explicitly, but it was a dead end. I also tried to follow the hint by Najib Idrissi in this question, but failed.

• This doesn't work: $0\to\mathbb{Z}\xrightarrow{2}\mathbb{Z}\to\mathbb{Z}/2\to 0$. Feb 25 '21 at 13:07
• As stated, it is false. Counterexample: $0 \to n\mathbf Z\to\mathbf Z\to\mathbf Z/n\mathbf Z\to 0$ is exact, but your assertion would imply the ideal $n\mathbf Z$ is generated by an idempotent, which is impossible as $\mathbf Z$ is an integral domain. I guess you're forgetting a hypothesis. Feb 25 '21 at 13:08
• The claim is false. See split exact sequence. – user239203 Feb 25 '21 at 13:08
• The usual counterexample with finite groups is $0\to\mathbb Z/2\mathbb Z\to \mathbb Z/4\mathbb Z\to\mathbb Z/2\mathbb Z\to 0$, where the second map is reduction mod $2$ and the first map sends $0, 1\mapsto 0, 2$. Feb 25 '21 at 13:54
• The authors of that book are just wrong, there is no way around this. I checked the book, that's verbatim what they wrote, and it's simply false.
Feb 25 '21 at 14:28 $$0 \to \mathbb{Z} \xrightarrow{\varphi} \mathbb{Z} \xrightarrow{\pi} \mathbb{Z}/2\mathbb{Z} \to 0$$ where $$\varphi(x)=2x$$ and $$\pi(x)=x+2\mathbb{Z}$$ is the quotient map. The sequence is exact but clearly $$\mathbb{Z}$$ is not a direct sum $$\mathbb{Z}\oplus(\mathbb{Z}/2\mathbb{Z})$$. For that to hold we need the $$0 \to G_1 \xrightarrow{\varphi_1} G_2 \xrightarrow{\varphi_2} G_3 \to 0$$ sequence to be a split exact sequence (see also: splitting lemma). This is for example true whenever $$G_3$$ is free abelian, i.e. $$G_3\simeq \bigoplus\mathbb{Z}$$.
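For reference, here is the direction of the splitting lemma the answer relies on, stated in the notation of the question (standard material, not taken from the thread itself):

$$\text{If there is a homomorphism } s\colon G_3 \to G_2 \text{ with } \varphi_2 \circ s = \mathrm{id}_{G_3}, \text{ then } (g_1, g_3) \mapsto \varphi_1(g_1) + s(g_3) \text{ defines an isomorphism } G_1 \oplus G_3 \xrightarrow{\ \sim\ } G_2.$$

When $$G_3$$ is free abelian, such a section $$s$$ exists: choose any $$\varphi_2$$-preimage for each basis element and extend by linearity. This is exactly why the sufficient condition at the end of the answer works.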
https://projecteuler.net/problem=649
## Low-Prime Chessboard Nim Published on Saturday, 29th December 2018, 01:00 pm; Solved by 219; Difficulty rating: 30% ### Problem 649 Alice and Bob are taking turns playing a game consisting of $c$ different coins on a chessboard of size $n$ by $n$. The game may start with any arrangement of $c$ coins in squares on the board. It is possible at any time for more than one coin to occupy the same square on the board at the same time. The coins are distinguishable, so swapping two coins gives a different arrangement if (and only if) they are on different squares. On a given turn, the player must choose a coin and move it either left or up $2$, $3$, $5$, or $7$ spaces in a single direction. The only restriction is that the coin cannot move off the edge of the board. The game ends when a player is unable to make a valid move, thereby granting the other player the victory. Assuming that Alice goes first and that both players are playing optimally, let $M(n, c)$ be the number of possible starting arrangements for which Alice can ensure her victory, given a board of size $n$ by $n$ with $c$ distinct coins. For example, $M(3, 1) = 4$, $M(3, 2) = 40$, and $M(9, 3) = 450304$. What are the last $9$ digits of $M(10\,000\,019, 100)$?
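The stated small cases can be checked with a Sprague–Grundy sketch (not part of the problem statement, and function names are mine): a move changes only one coin, and its row and column coordinates move independently, so a coin at $(x, y)$ has Grundy value $g(x) \oplus g(y)$, where $g$ is the Grundy sequence of the 1-D subtraction game with moves $\{2,3,5,7\}$; Alice wins exactly when the XOR over all coins is nonzero.

```python
from collections import Counter

MOVES = (2, 3, 5, 7)

def row_grundy(n):
    # Grundy numbers of the 1-D subtraction game: from position p you
    # may move to p - k for k in MOVES, never off the board.
    g = []
    for p in range(n):
        reachable = {g[p - k] for k in MOVES if p - k >= 0}
        m = 0
        while m in reachable:
            m += 1
        g.append(m)
    return g

def M(n, c):
    g = row_grundy(n)
    # A coin at (x, y) is a sum of two independent subtraction games,
    # so its Grundy value is g[x] XOR g[y].
    square_values = Counter(g[x] ^ g[y] for x in range(n) for y in range(n))
    # Count ordered c-coin arrangements whose overall XOR is 0 (Bob wins);
    # coins are distinguishable and may share squares.
    dp = Counter({0: 1})
    for _ in range(c):
        nxt = Counter()
        for acc, ways in dp.items():
            for val, cnt in square_values.items():
                nxt[acc ^ val] += ways * cnt
        dp = nxt
    return (n * n) ** c - dp[0]

print(M(3, 1), M(3, 2), M(9, 3))  # 4 40 450304
```

This reproduces the three example values; the full problem still needs the closed-form structure of the Grundy sequence, since $n = 10\,000\,019$ is far too large for the direct tabulation above.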
https://search.r-project.org/CRAN/refmans/ciu/html/ciu.new.html
ciu.new {ciu} R Documentation

## Create CIU object

### Description

Sets up a CIU object with the given parameters. CIU objects have "public" and "private" methods. A CIU object is actually a list whose elements are the public functions (methods).

### Usage

ciu.new( bb, formula = NULL, data = NULL, in.min.max.limits = NULL, abs.min.max = NULL, input.names = NULL, output.names = NULL, predict.function = NULL, vocabulary = NULL )

### Arguments

bb Model/"black-box" object. At least all caret models, the lda model from MASS, and the lm model are supported. Otherwise, the prediction function to be used can be given as the value of the predict.function parameter. A more powerful way is to inherit from the FunctionApproximator class and implement an "eval" method.

formula Formula that describes input versus output values. Only to be used together with the data parameter.

data The training data used for training the model. If this parameter is provided, a formula MUST be given also. ciu.new attempts to infer the other parameters from data and formula, i.e. in.min.max.limits, abs.min.max, input.names and output.names. If those parameters are provided, then they override the inferred ones.

in.min.max.limits matrix with one row per input and two columns, where the first column indicates the minimal value and the second column the maximal value for that input.

abs.min.max data.frame or matrix of min-max values of outputs, one row per output, two columns (min, max).

input.names labels of inputs.

output.names labels of outputs.

predict.function can be supplied if a model that is not supported by ciu should be used. As an example, this is the function for lda: o.predict.function <- function(model, inputs) { pred <- predict(model,inputs) return(pred$posterior) }

vocabulary list of labels/concepts to be used when producing explanations and what combination of inputs they correspond to.
Example of two intermediate concepts and a higher-level one that combines them: list(intermediate.concept1=c(1,2,3), intermediate.concept2=c(4,5), higher.level.concept=c(1,2,3,4,5))

### Details

CIU is implemented in an object-oriented manner, where a CIU object is a list whose methods are made visible as elements of the list. The general way for using CIU objects is to first get a CIU object by calling ciu.new as e.g. ciu <- ciu.new(...), then call ciu.res <- ciu$<method>(...). The methods that can be used in <method> are:

• explain
• meta.explain, see ciu.meta.explain (but omit first parameter ciu).
• barplot.ciu
• ggplot.col.ciu
• pie.ciu
• plot.ciu
• plot.ciu.3D
• textual, see ciu.textual (but omit first parameter ciu).

"Usage" section is here in "Details" section because Roxygen etc. don't support documentation of functions within functions.

### Value

Object of class CIU (ciu object).

### Author(s)

Kary Främling

### See Also

Create ciu object from this CIU object.

### References

Främling, K. Contextual Importance and Utility in R: the 'ciu' Package. In: Proceedings of 1st Workshop on Explainable Agency in Artificial Intelligence, at 35th AAAI Conference on Artificial Intelligence. Virtual, Online. February 8-9, 2021. pp. 110-114. https://www.researchgate.net/publication/349521362_Contextual_Importance_and_Utility_in_R_the_%27ciu%27_Package.

Främling, K. Explainable AI without Interpretable Model. 2020, https://arxiv.org/abs/2009.13996.

Främling, K. Decision Theory Meets Explainable AI. 2020, <doi.org/10.1007/978-3-030-51924-7_4>.

Främling, K. Modélisation et apprentissage des préférences par réseaux de neurones pour l'aide à la décision multicritère. 1996, https://tel.archives-ouvertes.fr/tel-00825854/document (title translation in English: Learning and Explaining Preferences with Neural Networks for Multiple Criteria Decision Making)

### Examples

# Explaining the classification of an Iris instance with lda model.
# We use a versicolor (instance 100).
library(MASS) test.ind <- 100 iris_test <- iris[test.ind, 1:4] iris_train <- iris[-test.ind, 1:4] iris_lab <- iris[[5]][-test.ind] model <- lda(iris_train, iris_lab) # Create CIU object ciu <- ciu.new(model, Species~., iris) # This can be used with explain method for getting CIU values # of one or several inputs. Here we get CIU for all three outputs # with input feature "Petal.Length" that happens to be the most important. ciu$explain(iris_test, 1) # It is, however, more convenient to use one of the graphical visualisations. # Here's one using ggplot. ciu$ggplot.col.ciu(iris_test) # LDA creates very sharp class limits, which can also be seen in the CIU # explanation. We can study what the underlying model looks like using # plot.ciu and plot.ciu.3D methods. Here is a 3D plot for all three classes # as a function of Petal Length&Width. Iris #100 (shown as the red dot) # is on the ridge of the "versicolor" class, which is quite narrow for # Petal Length&Width. ciu$plot.ciu.3D(iris_test,c(3,4),1,main=levels(iris$Species)[1],) ciu$plot.ciu.3D(iris_test,c(3,4),2,main=levels(iris$Species)[2]) ciu$plot.ciu.3D(iris_test,c(3,4),3,main=levels(iris$Species)[3]) # Same thing with a regression task, the Boston Housing data set. Instance # #370 has the highest valuation (50k$). Model is gbm, which performs # decently here. Plotting with "standard" bar plot this time. # Use something like "par(mai=c(0.8,1.2,0.4,0.2))" for seeing Y-axis labels. library(caret) gbm <- train(medv ~ ., Boston, method="gbm", trControl=trainControl(method="cv", number=10)) ciu <- ciu.new(gbm, medv~., Boston) ciu$barplot.ciu(Boston[370,1:13]) # Same but sort by CI. ciu$barplot.ciu(Boston[370,1:13], sort = "CI") # The two other possible plots ciu$ggplot.col(Boston[370,1:13]) ciu$pie.ciu(Boston[370,1:13]) # Method "plot" for studying the black-box behavior and CIU one input at a time. ciu$plot.ciu(Boston[370,1:13],13) [Package ciu version 0.5.0 Index]
https://math.stackexchange.com/questions/1295475/a-question-on-metric-spaces-which-does-not-have-lebesgue-covering-property
# A question on metric spaces which do not have the Lebesgue covering property

Let $X$ be a metric space and $\{U_{\alpha}\}$ an open cover of $X$ which has no Lebesgue number. So for every $r>0$, there is an open ball of radius $r$ which is not contained in any open set of the open cover; in particular, for every $n>1, \exists x_n \in X$ such that $B(x_n , 1/n)$ is not contained in any of the open sets of the open cover. Now how do I show that $\{x_n\}$ has no cluster point, i.e. no convergent subsequence? Please help. Thanks in advance.

(I am planning a proof that if every real valued continuous function on a metric space is uniformly continuous, then the metric space has the Lebesgue covering property, so subsequently any continuous function from $X$ to any metric space is uniformly continuous, answering this question of mine: If every real valued continuous function on $X$ is uniformly continuous, then is every continuous function to any metric space uniformly continuous?)

Assume for a contradiction that $x \in X$ is a cluster point of the $x_n$. Since the $U_{\alpha}$ cover $X$, there is some $\alpha$ such that $x \in U_{\alpha}$, and since $U_{\alpha}$ is open, for some $\epsilon > 0$, $B(x, \epsilon) \subseteq U_{\alpha}$. Now choose $N$ such that $1/N < \epsilon/2$. As $x$ is a cluster point of the $x_n$, $0 < d(x, x_n) < 1/n$ for infinitely many $n$, implying that for some $n > N$, we have $0 < d(x, x_n) < 1/n < 1/N < \epsilon/2$. But then (using the triangle inequality for the first inclusion), we have $B(x_n, 1/n) \subseteq B(x, \epsilon) \subseteq U_{\alpha}$, contradicting the choice of $x_n$. Hence $\{x_n\}$ has no cluster point.
https://thirdspacelearning.com/gcse-maths/number/percentage-multipliers/
# Percentage Multipliers

Here we will learn about using a percentage multiplier, including how to find the single multiplier from a percentage and use the single multiplier to answer percentage questions.

There are also percentage multiplier worksheets based on Edexcel, AQA and OCR exam questions, along with further guidance on where to go next if you're still stuck.

## What is a percentage multiplier?

A percentage multiplier is a number which is used to calculate a percentage of an amount or used to increase or decrease an amount by a percentage.

E.g. In order to find 12% of a number we can multiply the number by a multiplier:

$12\% = \frac{12}{100} = 0.12$

So 0.12 is the multiplier.

These questions will often involve interest rates in financial situations such as simple interest or compound interest. It is sometimes referred to as the multiplier method.

## How to find a decimal multiplier from a percentage

In order to write a decimal multiplier from a percentage:

1. Write down the percentage
2. Convert this percentage to a decimal by dividing by 100 – this is the multiplier
3. Multiply the original amount by the multiplier

## Percentage multiplier examples

### Example 1: finding the decimal multiplier

What is the decimal multiplier for 58 \%?

1. Write down the percentage required

$58 \%$

2. Convert the percentage to a decimal by dividing by 100

$58\% = \frac{58}{100} = 0.58$

The decimal multiplier is 0.58.

### Example 2: finding the decimal multiplier

What is the decimal multiplier for 26%?

$26 \%$

$26\% = \frac{26}{100} = 0.26$

The decimal multiplier is 0.26.

### Example 3: finding the decimal multiplier

What is the decimal multiplier for 4.5%?

$4.5 \%$

$4.5\% = \frac{4.5}{100} = 0.045$

The decimal multiplier is 0.045.
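The conversion in the examples above is a single division; as a quick check, here is a one-line sketch in Python (the function name is mine, not part of the lesson):

```python
def decimal_multiplier(percent):
    # Divide the percentage by 100 to get the decimal multiplier.
    return percent / 100

print(decimal_multiplier(58))   # 0.58
print(decimal_multiplier(4.5))  # 0.045
```

This matches Examples 1–3: 58% → 0.58, 26% → 0.26, 4.5% → 0.045.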
## How to use a percentage multiplier to calculate the percentage of an amount In order to use a percentage multiplier to calculate the percentage of an amount: 1. Write down what percentage you need 2. Convert this percentage to a decimal by dividing by 100; this is the decimal multiplier 3.  Multiply the original amount in the question by the decimal multiplier ### Example 4: finding the percentage of an amount Work out 34% of £700 $34 \%$ $34\% = \frac{34}{100} = 0.34$ What is the original amount? 700 What is the decimal multiplier? 0.34 $700\times0.34=238$ The answer is £238. ### Example 5: finding the percentage of an amount Work out 8% of £650 $8 \%$ $8\% = \frac{8}{100} = 0.08$ What is the original amount? 650 What is the decimal multiplier? 0.08 $650\times0.08=52$ The answer is £52. ### Example 6: finding the percentage of an amount Work out 15.6% of 200 kg $15.6 \%$ $15.6\% = \frac{15.6}{100} = 0.156$ What is the original amount? 200 What is the decimal multiplier? 0.156 $200\times0.156=31.2$ The answer is 31.2 kg. ### Example 7: calculating a percentage increase Increase £320 by 25% $100\%+25\%=125\%$ $125\% = \frac{125}{100} = 1.25$ What is the original amount? 320 What is the decimal multiplier? 1.25 $320\times1.25=400$ The answer is £400. ### Example 8: calculating a percentage decrease Decrease £600 by 7% $100\%-7\%=93\%$ $93\% = \frac{93}{100} = 0.93$ What is the original amount? 600 What is the decimal multiplier? 0.93 $600\times0.93=558$ The answer is £558. ### Common misconceptions • Decimal multipliers greater than 1 If you need the decimal multiplier of 135% it would be 1.35, so multipliers can in fact be greater than 1. $135\% = \frac{135}{100} = 1.35$ • Take care to remember pence when you are working with money An answer of £43.7 should be written as £43.70.  You need two decimal places for the pence part of the answer. Percentage multipliers is part of our series of lessons to support revision on percentages. 
You may find it helpful to start with the main percentages lesson for a summary of what to expect, or use the step by step guides below for further detail on individual topics. Other lessons in this series include: ### Practice percentage multiplier questions 1. What is the decimal multiplier of 61\% ? 0.61 6.10 610 0.061 61\% = \frac{61}{100} = 0.61 So 0.61 is the multiplier. 2. What is the decimal multiplier of 5\% ? 0.5 5.00 500 0.05 5\% = \frac{5}{100} = 0.05 So 0.05 is the multiplier. 3. What is the decimal multiplier of 18.3\% ? 0.183 1.83 18.3 0.0183 18.3\% = \frac{18.3}{100} = 0.183 So 0.183 is the multiplier. 4. Work out 29\% of £400 \pounds 371 \pounds 116 \pounds 29 \pounds 429 29\% = \frac{29}{100} = 0.29 So 0.29 is the multiplier. 0.29 \times 400 = 116 5. Work out 9\% of 450 km? 45.0 km 40.5 km 441 km 50.0 km 9\% = \frac{9}{100} = 0.09 So 0.09 is the multiplier. 0.09 \times 450 = 40.5 6. Work out 14.8\% of 560 kg? 8.288 kg 412 kg 148 kg 82.88 kg 14.8\% = \frac{14.8}{100} = 0.148 So 0.148 is the multiplier. 0.148 \times 560 = 82.88 ### Percentage multiplier GCSE questions 1.  Write 61\% as a decimal (1 mark) 61\% = \frac{61}{100} = 0.61 0.61 (1) 2.  Work out 38\% of 600 kg (2 marks) 0.38 × 600 (1) = 228 kg (1) 3.  Fiona is booking a holiday. The holiday costs £700 . She pays a 15\% deposit. Work out how much she has left to pay. (2 marks) 100\% \hspace{1mm}- 15\% = 85\%=0.85 0.85 × 700 (1) \pounds 595 (1) ## Learning checklist You have now learned how to: • Find the decimal multiplier of a percentage • Interpret percentages and percentage changes as a decimal • Use the decimal multiplier to work out the percentage of an amount • Interpret percentages multiplicatively ## Still stuck? Prepare your KS4 students for maths GCSEs success with Third Space Learning. Weekly online one to one GCSE maths revision lessons delivered by expert maths tutors. Find out more about our GCSE maths revision programme.
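The increase/decrease method from Examples 7 and 8 can also be sketched in Python for checking answers (the function name and sign convention are mine): an increase of p% uses the multiplier (100 + p)/100, a decrease uses (100 − p)/100.

```python
def percentage_change(amount, percent_change):
    # Positive percent_change increases the amount, negative decreases it;
    # the multiplier is (100 + change) / 100.
    multiplier = (100 + percent_change) / 100
    return amount * multiplier

print(percentage_change(320, 25))  # 400.0  (increase £320 by 25%)
print(percentage_change(600, -7))  # 558.0  (decrease £600 by 7%)
```

The same call checks GCSE question 3: after a 15% deposit, Fiona has percentage_change(700, -15) = £595 left to pay.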
https://industrialstrengthscience.blogspot.com/2012/11/pr-back-squat.html
## Tuesday, November 20, 2012 ### PR: Back Squat Today I set a new PR for 3 repetitions of back squat.  This also happened to be a PR for 2 reps and even 1 rep.  The new benchmarks were set using a low-bar or perhaps hybrid low/high-bar technique.  Even so, I think some of that improvement was due to greater strength.  Anyway, the new PRs are as follows: • 1 rep:  250 lb. • 2 reps:  250 lb. • 3 reps:  250 lb. (I'm listing them all individually to make them easier to refer back to.)
https://community.atlassian.com/t5/Jira-questions/Error-trying-to-set-up-secure-IMAP/qaq-p/374594
# Error trying to set up secure IMAP

I am using JIRA 6.0.3 and am trying to set it up so we can create issues with email, so I am setting up the Incoming Mail. I am working in a Microsoft environment (Windows 2008 R2, Exchange 2010 SP3, RU1).

We followed the directions from https://confluence.atlassian.com/display/JIRA/Configuring+JIRA+to+Receive+Email+from+a+POP+or+IMAP+Mail+Server and also https://confluence.atlassian.com/display/JIRA/Connecting+to+SSL+services

I have tried various keystores, including one at C:\Program Files\Java\jdk1.7.0_07\jre\lib\security\cacerts. I keep getting this error:

> Unfortunately no connection was possible. Review the errors below and rectify: SunCertPathBuilderException: unable to find valid certification path to requested target

Help...

---

Hi Charles,

Can you confirm that the certificate has been correctly imported into the keystore with the command below?

    keytool -v -list -keystore Z:\path\to\cacerts

Have you also applied it to the JIRA Startup Options? You'll need to add the following arguments:

    -Djavax.net.ssl.trustStore=Z:\path\to\cacerts
    -Djavax.net.ssl.trustStorePassword="changeit"

Best regards,
Lucas Timm

---

Lucas,

Thank you for your response. I ran the command you suggested and think I saw the cert I added (I am NOT an IT person or a software person). I still get the error.

How do I test to see which keystore JIRA is going to check? I think that is the issue: JIRA is not finding the keystore where I put the cert, and I don't see how to figure out which keystore it is using...

Charles

---

Hi Charles,

In JIRA, browse to Administration -> System -> Troubleshooting and Support -> System Info. Then check in JVM Input Arguments whether you have anything like the parameters I sent you above. If you don't have anything, we have found the root cause of the error. Just add the same parameters I sent you to the JIRA Startup Options and restart JIRA. It will work.
:)

Best regards,
Lucas Timm

---

Lucas,

Thanks for the quick feedback. Hopefully you are a patient person, since I am still trying to make this work. :)

I found the JVM Input Arguments:

    -Dcatalina.base=C:\Program Files\Atlassian\JIRA
    -Dcatalina.home=C:\Program Files\Atlassian\JIRA
    -Djava.endorsed.dirs=C:\Program Files\Atlassian\JIRA\endorsed
    -Djava.awt.headless=true
    -Datlassian.standalone=JIRA
    -Dmail.mime.decodeparameters=true
    -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true
    -Djava.io.tmpdir=C:\Program Files\Atlassian\JIRA\temp
    -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
    -Djava.util.logging.config.file=C:\Program Files\Atlassian\JIRA\conf\logging.properties
    -XX:MaxPermSize=384m -Xms256m -Xmx768m

None of this looks like:

    -Djavax.net.ssl.trustStore=Z:\path\to\cacerts
    -Djavax.net.ssl.trustStorePassword="changeit"

So I assume I need to add it, but how, and where? I did not see anything in the web UI to add startup parameters, so does that mean I need to change the actual start of JIRA? We are using a Windows service to run JIRA, so I am not sure how to interface with that process to alter the startup parameters...

---

The following document guides you on how to change JIRA's startup options: https://confluence.atlassian.com/display/JIRA/Setting+Properties+and+Options+on+Startup
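For a standard Windows install, the linked Atlassian document points at `bin\setenv.bat`. The fragment below is a sketch only: the install path and keystore path are illustrative, and for a Windows-service installation the options may instead need to be added through the Tomcat service configuration rather than `setenv.bat`.

```bat
rem <JIRA install dir>\bin\setenv.bat -- illustrative paths, not verified on this install
rem Append the two trust-store flags to the existing JVM_SUPPORT_RECOMMENDED_ARGS line:
set JVM_SUPPORT_RECOMMENDED_ARGS=%JVM_SUPPORT_RECOMMENDED_ARGS% -Djavax.net.ssl.trustStore="C:\Program Files\Java\jdk1.7.0_07\jre\lib\security\cacerts" -Djavax.net.ssl.trustStorePassword=changeit
```

After restarting JIRA, the two `-Djavax.net.ssl.*` arguments should appear in Administration -> System Info under JVM Input Arguments; if they do not, they were not picked up.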
https://www.nature.com/articles/s41529-019-0078-1
# Localized corrosion of low-carbon steel at the nanoscale

## Abstract

Mitigating corrosion remains a daunting challenge due to localized, nanoscale corrosion events that are poorly understood but are known to cause unpredictable variations in material longevity. Here, the most recent advances in liquid-cell transmission electron microscopy were employed to capture the advent of localized aqueous corrosion in carbon steel at the nanoscale and in real time. Localized corrosion initiated at a triple junction formed by a solitary cementite grain and two ferrite grains and then continued at the electrochemically-active boundary between these two phases. With this analysis, we identified facetted pitting at the phase boundary, uniform corrosion rates from the steel surface, and data that suggest that a re-initiating galvanic corrosion mechanism is possible in this environment. These observations represent an important step toward atomically defining nanoscale corrosion mechanisms, enabling the informed development of next-generation inhibition technologies and the improvement of corrosion predictive models.
## Introduction

The ubiquity of steel in infrastructure makes aqueous steel corrosion a global concern, with negative repercussions impacting most industrial sectors.1 In the oil and gas industry, pipeline failures and the resulting disasters are avoided primarily via strategic replacement programs, which rely on predictive models to develop appropriate replacement timelines.2 These predictive models have been continually updated since their initial development in the early 1970s;3 however, even the models employed today often rely on semi-empirical correction factors derived from field observations or bulk-scale experiments to fill gaps in basic scientific understanding.4 As a result, these predictive models can fail in practice, with repercussions of failure ranging from mild to catastrophic.5

Despite longstanding research efforts, steel corrosion continues to present challenges, due in large part to its intrinsic complexity. Myriad self-compounding environmental factors can completely alter the mechanisms by which corrosion progresses, even when only one factor is changed. This complexity presents a formidable problem for the scientific community to approach, resulting in gaps in the mechanistic understanding of how corrosion progresses in certain environments. Recent studies have suggested that nanoscale processes at heterogeneous sites in materials are a likely culprit for deviation between predictive models and observed corrosion rates from the field.6 A time-dependent understanding of accelerated nanoscale processes in steel corrosion under aqueous flow will help to narrow this knowledge gap.

Two known classes of concurrent mechanisms govern mild carbon steel corrosion in wet environments—uniform and localized corrosion. Uniform corrosion processes in low-carbon steel are well accounted for on the bulk scale, where localized corrosion events are generally incorporated into empirical corrosion rates.
Uniform aqueous corrosion involves non-specific material loss via solvation of oxidized iron at the surface of the steel.7 On the other hand, localized corrosion results when discrete regions of the metal surface are selectively attacked by the corrosive medium, resulting in accelerated corrosion rates in specific regions relative to the bulk surface. With the tendency to penetrate the steel surface in a relatively short period of time, localized corrosion is difficult to predict or detect, making it the cause of most corrosion-related material failures.8 The two mechanisms work in concert; as localized corrosion exposes a greater surface area to the corrosive medium, the damage caused by uniform corrosion is also exacerbated by the increased exposed surface area. Pitting in steel initiates at or near inclusions within the microstructure, such as MnS and iron carbides.9,10 However, it is not known why some inclusions, even of the same composition, are more electrochemically active than others.11 There are three theories explaining the seemingly stochastic mechanism behind inclusion-induced pitting: (1) iron matrix surface orientation,12 (2) galvanic mediation,11 or (3) a disordered and strained iron matrix.13,14 Iron carbide inclusions, specifically cementite, are of particular interest in the study of carbon steel corrosion. These cementite inclusions are a common microstructural feature of mild steels, and cementite is cathodic to ferrite,15 leading to preferential etching of ferrite when it is in contact with both cementite and with an electrolytic medium (e.g., H2O). A better understanding of the mechanisms by which corrosion initiates and progresses at these types of interfaces in steel will be key to mitigating corrosion-related losses. 
Real-time experimental observations of nanoscale solid-liquid interfacial processes are limited by the complexity of replicating a flowing corrosive environment within an instrument capable of characterizing corrosion initiation events that are rare, stochastic, and occur at the nanoscale.16,17 Previous studies targeting localized corrosion mechanisms have generally been conducted by examining the surfaces of steels ex-situ, using bulk-scale electrochemical testing in combination with surface characterization methods (e.g., scanning electron microscopy10 or scanning probe microscopy.18,19) Unfortunately, these techniques are too low in temporal and/or spatial resolution to conclusively identify the associated nanoscale mechanisms. Since its first demonstration in 2003,20 liquid-cell scanning/transmission electron microscopy (LC-S/TEM) has undergone drastic improvements, now allowing for real-time structural and compositional characterization of materials within a flowing liquid environment.16,21,22,23 This technique has been used previously to study corrosion in model materials, such as nanoparticles and thin metal films.24,25 The use of in-situ LC-S/TEM to study more complex, less ideal “real world” alloys has only recently begun.26 Here, we present the application of in-situ LC-STEM to study localized corrosion in 1018 carbon steel during exposure to a flowing, aqueous solution. This study was designed to elucidate the mechanism by which localized corrosion of steel at cementite inclusions occurred by relating the known microstructure to localized electrochemical processes resolved by in-situ observation. Advanced structural and analytical electron microscopy techniques were combined to obtain time-resolved, site-specific nanoscale imaging of localized corrosion processes occurring simultaneously at the many different solid-liquid interfaces present in a “real world” sample of pipeline steel. 
In-depth structural and compositional analysis provided the deeper understanding necessary for studying the stochastic initiation events associated with localized corrosion. A new experimental methodology and workflow (Fig. 1) was developed that paired in-situ, real-time data with extensive initial (preliminary) and final (postmortem) structural and compositional data sets in order to identify the critical nanoscale features in the steel microstructure that were most susceptible to localized corrosion. An electron-transparent sample was prepared from the bulk exterior of a 1018 steel industrial pipe using traditional focused ion beam (FIB) milling and lift-out procedures27 (see Methods). In preliminary examination (ex-situ), full characterization of the microstructure was performed using bright-field (BF) and high angle annular dark-field (HAADF) STEM to image the initial microstructure, including grain boundary locations. Grain orientation mapping with precession electron diffraction (PED)28 was used to identify the microstructure, including distinguishing cementite (orthorhombic crystal structure) from ferrite (body-centered-cubic crystal structure) and quantifying relative misorientation angles. In addition, several localized initial thickness measurements were collected using electron energy loss spectroscopy (EELS)22 for later quantification of the total corrosion-induced thickness change in the Z-plane via comparison to postmortem data. Elemental composition examination was performed for select regions of interest around the known cementite grains using X-ray energy dispersive spectroscopy (XEDS, see SI Fig. S5). Collectively, these data were used to elucidate the complex microstructural relationships that governed where and how accelerated, localized corrosion initiated and progressed. This level of detail was necessary to simultaneously study the many possible initiation sites in the sample. 
For the in-situ LC-STEM corrosion experiment, the sample was placed between two electron-transparent silicon nitride windows, forming a well-controlled liquid cell. The cell was pre-loaded with a droplet of deionized water to ensure complete wetting of the sample surface and was then placed into a liquid flow holder. The holder was then loaded into a transmission electron microscope, and microfluidic pumping (2 µL/min) of an aqueous electrolyte (pH 6.1) containing CO2 (aq., 6 μM), O2 (aq., 281 μM), and Na2SO4 (aq., 2.78 μM) was initiated. This solution composition was chosen to simulate an aqueous, baseline corrosive environment employing atmospheric levels of aqueous gases and a minimal amount of buffering salt to maintain pH. Solution flow was maintained with no solution recirculation to ensure that reactants were not depleted. Exposure times given herein indicate the total time the sample was exposed to aqueous electrolyte; fluid flow initiated after 10 min of static liquid exposure. STEM imaging was performed to record the corrosion processes in real time. Images were acquired sparingly over a 93 min period using low electron-fluence conditions, and the beam was blanked between exposures to minimize the production of—and interference by—radiolysis products in the aqueous environment.29 The overall electron dose was kept well below the threshold at which other groups have observed interference by radiolysis products (see SI for more details).30 The corrosion experiment was concluded after 1025 min exposure to aqueous electrolyte. Extensive postmortem characterization was conducted on the corroded steel sample to ascertain (1) the final thickness using EELS and energy-filtered transmission electron microscopy (EFTEM), (2) the final microstructure using S/TEM and nanobeam electron diffraction (NBED), (3) the final composition using XEDS and EELS, and (4) the final oxidation state of scale products using EELS. 
The initial, transient, and final microstructures were compared using image averaging and image overlays to identify active localized corrosion sites and to determine the relative corrosion dissolution rates of the ferrite and cementite.

## Results

### Localized initiation

One region near the center of the steel specimen displayed clear indications of localized accelerated corrosion. Crystallographic data (Fig. 2a) and BF-STEM micrographs of this region before (Fig. 2b) and during (Fig. 2c–f) exposure to aqueous electrolyte are provided. The first evidence of localized corrosion was visible after 40 min of exposure to liquid electrolyte (Fig. 2c), indicated by rapid changes in intensity in the in-situ STEM micrographs. Cross-referencing these brighter areas with the preliminary structural data (Fig. 2a, b) revealed that the initiation event occurred at the triple junction formed by an isolated cementite grain inclusion (Fe3C) and two abutting ferrite grains (α), identified as αA and αB (Triple Junction 1, TJ1). Crystallographic analysis indicated zone axis orientations (direction out of plane) of [$$\bar 20\bar 5$$] and [619] for grains αA and αB, respectively, for a disorientation angle of 11 degrees across the ferrite grain boundary (see SI for full table of ferrite Euler angles). The cementite grain had a zone axis orientation of [$$\bar 29\bar 5$$] (orthorhombic space group Pbnm, where a<b<c).

### Localized progression

Following initiation at TJ1, localized corrosion progressed over the electrochemically active phase boundary formed by the interface between the cementite grain and its neighboring ferrite grains. Over the course of the 5 min that elapsed between Fig. 2c and d, evidence of continued localized corrosion was apparent in the BF-STEM micrographs (Fig. 2d), as the material corroded both at the triple junction formed by Fe3C/αB/αC (TJ2) and at the triple junction formed by Fe3C/αE/αF (TJ3).
Grain αC was oriented along the [618] zone axis, with 5 degrees disorientation relative to αB. Grains αE and αF had a similarly low relative disorientation angle of 4 degrees, with zone axis orientations of [207] and [$$\bar 10\bar 4$$], respectively. The accelerated mechanism laterally claimed up to 35 nm of ferrite (at TJ1) from grains αA/αB, after which the accelerated material loss arrested (Fig. 2e).

### Localized arrest

No further progression of localized corrosion was observed between 47 and 102 min, when in-situ observation concluded; the corroded area in Fig. 2e and f was measured to be the same within the observation error of the experiment (± 2 nm²). To facilitate investigation of the microstructure after localized corrosion arrested, image averaging was used to increase the signal-to-noise ratio of the in-situ image data. While initial examination of single in-situ STEM micrographs (Fig. 2c–f) suggested that the Fe3C corroded away during this period, analysis of averaged BF (Fig. 3a) and HAADF STEM (Fig. 3b) micrographs from 47 to 53 min revealed that corrosion at the phase boundary only occurred in ferrite. Diffraction contrast changes in the cementite grain and removal of ferrite from the inclined phase boundary in relation to the observation direction account for the intensity variations co-localized with the cementite grain position. The arrest of localized corrosion at 47 min occurred either by the loss of physical contact between cementite and ferrite, or by passivation of the area by deposited corrosion product (vide infra).

### Final microstructure

Postmortem TEM micrographs (Fig. 4a, b) revealed a crevice in the region of the solitary cementite inclusion. The crevice had grown outward along the grain boundary at the top of the inclusion (as shown) between grains αC and αD and at the bottom of the inclusion between grains αG and αH (Fig. 2a).
The material occupying the gap within the crevice was characterized as corrosion product that overlapped the cementite inclusion in its original position. Rapid galvanic corrosion at this phase boundary likely induced a local spike in the concentration of solvated iron ions, leading to preferential deposition of corrosion product in the area. Product deposition increased with proximity to the cementite grain. Linear profiles extracted from the EFTEM intensity map (Fig. 4c) show that the corrosion product deposition on either side of the crevice was roughly evenly distributed, indicating that product deposition was independent of fluid flow. Contrast in the BF and HAADF postmortem S/TEM micrographs also revealed the presence of bridging structures – determined via elemental and crystallographic analysis to be amorphous iron oxide – connecting the two walls of the crevice (Fig. 4a, inset), which may have implications for corrosion product deposition/dissolution mechanisms. Further understanding of the corrosion product deposition mechanisms could be achieved by implementing new instrumentation and methods that are complementary to in-situ TEM and that enable tracking of the iron oxide evolution in-situ.22,31 Overlaying the initial grain boundary outline onto the postmortem image (Fig. 4a, b) revealed that the cementite grain had moved relative to its initial position (confirmed via TEM and spot diffraction analysis), rotating from its initial 25 degrees to a final 20 degrees relative to the fluid flow direction (Fig. 4c, d). This movement indicated that corrosion at the phase boundary advanced sufficiently during the unobserved portion of the experiment to liberate the cementite grain from its original position in the ferrite matrix while leaving it confined within the crevice. A dip in intensity was visible to the left of the displaced cementite inclusion in both the HR-S/TEM micrographs (Fig. 4b) and the relative thickness map (Fig. 4c). 
This intensity change is indicative of re-initiation of galvanic corrosion at the interface between the two phases, which occurred when the cementite was reconnected to the ferrite prior to the ferrite being fully converted to corrosion product.

### Uniform corrosion

Running concurrent to the localized, accelerated corrosion discussed above, much slower, uniform corrosion was also observed over all the ferrite grains present in the sample. Fig. 5 shows overview S/TEM images of the entire steel coupon in vacuum before (Fig. 5a) and after (Fig. 5b) exposure to the liquid electrolyte. Initial EELS relative thickness measurements and EFTEM relative thickness maps were used to determine the average overall iron depletion to be between 70 and 85% over the course of the experiment (Fig. 5c). Electron diffraction and EELS analysis determined that all ferrite was converted to iron oxide (see SI) by the end of the experiment. EELS relative thickness measurements in the pearlite grain present in the bottom left of the sample (Fig. 5b-inset) illustrate the preferential corrosion of ferrite and the relative resistance of cementite to corrosion. The starting thickness of all grains in this region was 85 nm: the corrosion-resistant cementite grains underwent no change in thickness, whereas the corroded ferrite regions returned a final thickness of 55 nm (indexed as Fe2O3).

## Discussion

Corrosion product formation and material loss during uniform corrosion are generally nonlinear in rate; significantly higher corrosion rates are often observed over the first several hours of uniform corrosion, followed by a steady-state rate once the iron surface has been converted to corrosion product.32 Qualitatively, we determined that the steel sample, which may have been exposed to the solution on both sides of the sample, was not completely converted to corrosion product within 102 min of exposure. This was evident in the in-situ image acquired at 102 min (Fig.
2f), where observable diffraction contrast within the ferrite grains (due to crystallographic orientation differences) indicated the presence of crystalline ferrite. However, these diffraction contrast differences were not present at the conclusion of aqueous exposure; postmortem EELS and XEDS revealed that the ferrite was fully converted to amorphous corrosion product following the 1025 min of exposure (Figs 4a, b and 5b). At this point, the ferrite regions of the sample were observed to have lost ca. 30 nm of thickness, retaining 55 nm in the form of iron oxide. Given the starting thickness of 85 nm and the quantitative ferrite dissolution timeline, the corrosion material loss rate due only to uniform corrosion was calculated to be between 0.015 and 0.16 mm per year, with a depth of penetration between 0.044 and 0.44 mm per year. These values are expected to be an upper bound, as they were calculated assuming possible contact of both sample surfaces with the solution. Previous experimental workflows have struggled to separate the contributions of uniform and localized processes to the overall corrosion rate; LC-TEM provides a finite sample volume with transmission imaging that is able to indicate changes in thickness and material composition. While discrepancies may arise due to differences in the steel surface roughness and density of surface phase boundaries between bulk and nanoscale samples, the ability to decouple the contribution of localized and uniform processes could be a major boon to the development of more robust corrosion prediction models capable of accurately modeling the individual mechanisms responsible for material degradation. 
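As a rough cross-check of the uniform-corrosion figure above, the lower-bound rate can be recovered from the reported thickness loss and exposure time. This unit-conversion sketch is ours, not from the paper; the upper bound depends on further assumptions (e.g., single- versus double-sided exposure) that it does not capture.

```python
def corrosion_rate_mm_per_year(loss_nm, exposure_min):
    """Convert a measured thickness loss (nm) over an exposure time (min) to mm/year."""
    loss_mm = loss_nm * 1e-6            # 1 nm = 1e-6 mm
    years = exposure_min / (365 * 24 * 60)  # minutes in a (non-leap) year
    return loss_mm / years

# ca. 30 nm of ferrite lost over the 1025 min of aqueous exposure
rate = corrosion_rate_mm_per_year(30, 1025)
print(f"{rate:.3f} mm/year")  # consistent with the reported lower bound of 0.015
```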
It has been previously demonstrated that the final morphology of low-carbon steel after attack by aqueous solution exhibited surface structures described as “soft line form” features: widely spaced, small-aspect-ratio ridges.33 The results of this work indicate that these soft line microstructures may result from two concurrent processes: (1) release of cementite grains following complete dissolution of the phase boundary between cementite and ferrite, leaving exposed troughs, and (2) increased corrosion product deposition in regions where this type of localized corrosion occurs. The combination of these two events would generate lined crevices with softened features in the regions where cementite grains were released. This realization was achieved due to real-time, in-situ transmission-mode imaging of the sample at the nanoscale, which captured finite surface-level processes that were previously undetected. Characterization data and key observations suggest that a re-initiating mechanism may be involved in the deleterious interaction between the cementite inclusion and the neighboring, receded ferrite grains. Detailed image analysis contrasting the last of the in-situ images with the postmortem images of the localized corrosion region identified relocation of the cementite grain (bright region indicated in Fig. 4b) as well as an intensity decrease along the left side of the final position of the cementite grain (visible as a slightly darker region to the left of the bright cementite in Fig. 4b and in the line profiles in Fig. 4c). We propose a schematic of this re-initiation process in Fig. 6. Localized corrosion occurred first at the TJs and phase boundaries (vide supra), leading to faceted pitting, and the spike in local iron ion concentration led to preferential accumulation of corrosion product in the area (Fig. 6b). 
During the unmonitored portion of aqueous exposure, the corrosion-resistant cementite grain was unaffected, but the ferrite grains etched in the Z direction (Fig. 6c), leading to gaps in the corrosion product layer that were highlighted by the TEM micrographs and relative thickness maps (Fig. 4c, d). The formation of these gaps both allowed the fluid to regain access to the cavity and left the cementite inclusion jutting out from the surface. The final position of the cementite grain suggests that the fluid flow pushed the cementite grain, which was oriented at 25° to the flow direction, into αE/αF, reinitiating accelerated galvanic corrosion (Fig. 6d) and leading to the dip in intensity seen in the relative thickness map to the left of the final inclusion position (Fig. 4c) as the reconnected cementite and ferrite grains once again experienced localized corrosion at their phase boundary. This process would have continued until either the interface was no longer electrochemically active (ferrite conversion to corrosion product) and/or the cementite grain lost mobility (Fig. 6e), ultimately leading to the same result: a rotated, confined cementite inclusion with a distinct intensity dip to its left in the thickness map. The labile nature of the cementite grain—following the localized-corrosion-mediated dissolution of its surrounding ferrite matrix at the phase boundaries—has far-reaching implications. On the bulk scale, these types of inclusions and phase boundaries are highly prevalent in the steel microstructure. Cavities formed around them via the mechanism observed here would not only allow infiltration of the corrosive medium into the inner structure of the steel, but the dislodged cementite grains could re-initiate galvanic corrosion in any ferrite regions of the steel surface to which they were displaced. 
Additionally, turbulent fluid flow combined with a cementite grain trapped within a ferrite cavity could lead to a larger opening within the steel subsurface by the same deleterious mechanism. The localized corrosion at the TJs along the phase boundary of the solitary cementite inclusion (e.g., Fig. 2c, arrow) showed faceted pitting of the corroded ferrite. The ferrite pit facets were identified by comparing the faceted edge of the etched region (Fig. 2f) with possible faceted pit geometries, their 3D intersection with the ferrite surface, and the ways these would appear in a 2D S/TEM micrograph projection. Each of the pits along the phase boundary of the solitary inclusion was found to have {110} faceting. In comparison, no pitting was observed at the cementite inclusions in the pearlite grain, which did not exhibit any obvious TJs. This suggests that structural defects, such as TJs, are important factors in the initiation of localized corrosion. Structural defects may indicate increased matrix strain and/or increased accommodation of dopants that can increase the activity at these sites. TJ1 was likely not the most strained or defected site along the phase boundary. If structural defects were the only contributor to the onset of localized corrosion, then the junction formed by the intersection of four grains (Fe3C, αC/αD/αE) might be expected to corrode first. Instead, the initiation event occurred at TJ1, located at approximately mid-length of the Fe3C inclusion and having the highest contact area to the cathodic inclusion of any grain-boundary-based defect. Furthermore, TJ2 and TJ3 were located about the same distance along the length of the cathodic inclusion and pitted at the same time, despite the TJ2 ferrite surfaces being more favorably oriented for pitting than TJ3.34 This may indicate that some interplay between galvanic and strain-based mechanisms is responsible for the pitting behavior observed here. 
We observed the advent of localized steel corrosion in-situ and identified the nanoscale features implicated in the initiation and progression of an accelerated corrosion mechanism. Localized corrosion initiated at a site of interfacial heterogeneity: a triple junction formed by the intersection of two ferrite grains and an isolated cementite inclusion. The initiation event involved two ferrite grains with a relatively long surface contact length with the cementite inclusion and occurred at the most structurally defected site along the phase boundary shared by these three grains. Following initiation, accelerated corrosion progressed along the electrochemically active interface formed by the ferrite-cementite phase boundary, claiming adjoining portions of abutting ferrite grains, leaving the cementite composition unaffected, and trapping the dislodged cementite grain within the resulting cavity. On the bulk scale, such a cavity could allow infiltration of the corrosive medium into the inner structure of the steel and could feasibly serve as the initiation point of a crack. The isolated cementite grain was not the only cementite grain present in the sample; however, it was the only cementite grain that exhibited clear signs of localized, accelerated corrosion, and it was the only cementite inclusion that displayed clear triple junctions along the phase boundary. Comparison of the relative activity of the pitted locations along the solitary inclusion suggests that a combined galvanic and strain-based mechanism may be responsible for the localized corrosion observed in this work. The experimental workflow and array of electron microscopy techniques established herein allowed for the observation of stochastic, nanoscale events using non-ideal, field-relevant materials and conditions. 
TEM is particularly suited to this type of investigation, as it provides elemental composition, crystallographic orientation, and nanoscale observation of dynamic events and transient states that could pass unobserved in techniques with lower temporal and spatial resolution. Additionally, other techniques that are primarily sensitive to topography (e.g., scanning electron microscopy or atomic force microscopy) would be limited in this endeavor, as corrosion product deposition concurrent with accelerated corrosion would generate a surface in which any percolation pathways would be invisible to such surface-sensitive techniques. The insight provided by the initial and postmortem analyses, which was necessary to interpret the in-situ LC-STEM data, emphasizes the importance of full-spectrum characterization to support a mechanistic interpretation. Future investigations will use an approach that specifically targets the active electrochemical interfaces in steel whose involvement in localized corrosion was implicated in this work. New hardware and techniques are being implemented to obtain real-time compositional information from materials within the LC-STEM concurrent with in-situ observation.23,31 Nanoscale corrosion pathways, like the one identified in this work, could have a profound impact on the degradation behavior of bulk materials. This work suggests that the types of electrochemically active interfaces that typify and pervade the steel microstructure may participate in accelerated corrosion mechanisms that, extrapolated to the bulk scale, would create percolation networks exposing the interior body of the steel to corrosive conditions much sooner than predicted by uniform corrosion models. These types of scientific insights will prove vital for the development of more robust corrosion prediction models as well as next-generation corrosion prevention and mitigation strategies. ## Methods ### Low-carbon steel sample A small coupon (ca. 
12.7 × 12.7 × 3.175 mm) of cold-finish mild steel, SAE/AISI 1018, produced by the manufacturer S. Izaguirre, S.A., was used for characterization and corrosion testing. The bulk chemical composition provided by the manufacturer is (wt%): 0.16 C, 0.71 Mn, 0.169 Si, 0.016 P, 0.021 S, 0.116 Cr, 0.107 Ni, 0.017 Mo, 0.345 Cu, 0.017 Sn, 0.008 Al, 0.002 V, 0.0088 N, 0.001 Ti, 0.002 Pb, and balance Fe. The coupon was coated with a thin layer of grease (Starrett M1 Oil) to preserve the surface state during storage. ### Focused ion-beam TEM sample preparation The steel coupon was affixed to a scanning electron microscope (SEM) stub after removal of the grease layer using isopropyl alcohol. An FEI Nova NanoLab™ 600 DualBeam™ focused ion-beam (FIB)/SEM was used to deposit a ca. 100 nm thick protective layer of Pt/C (ca. 83/17 at%35) with a 15 kV electron beam over a ca. 2 × 20 µm area of the coupon surface, followed by a ca. 1 µm thick protective layer of Pt/C with a 30 kV Ga ion beam, prior to 30 kV Ga ion milling of the foil. An Omniprobe micromanipulator was then used for in-situ lift-out of the lamella to an Omniprobe Cu transmission electron microscope (TEM) half-grid. The sample was thinned to electron transparency and then received a final 10 kV Ga ion thinning step to reduce the Ga-ion-damaged surface layer. The sample was characterized prior to corrosion testing in this configuration. For in-situ TEM corrosion testing, the steel lamella was removed from the Omniprobe half-grid, again using standard FIB lift-out techniques with 30 kV Ga ion beam cuts, and transferred to the edge of a 50-nm thick silicon nitride (Si3N4) window of a TEM liquid-cell chip (Hummingbird Scientific), such that the lamella rested partially on the thick Si at the window edge and partially overhung the window. 
### Characterization of initial structure and composition The initial 1018 steel lamella microstructure was characterized using precession electron diffraction (PED) phase and orientation mapping. The PED data were acquired using a NanoMEGAS DigiSTAR™ precession unit installed on a 200 kV JEOL 2100 TEM with a 7 nm probe size over a 3.3 × 3.8 µm area with 10 nm steps and indexed using ASTAR™ (NanoMEGAS) software. Sample composition was characterized using X-ray energy dispersive spectroscopy (XEDS). The XEDS maps were acquired on a 200 kV FEI Titan ChemiSTEM equipped with 4 windowless silicon drift detectors and were evaluated using multivariate statistical analysis with Automated eXpert Spectral Image Analysis (AXSIA) software.36 Relative sample thickness was measured in a grid of discrete points using the electron energy loss spectroscopy log-ratio technique23 on a 300 kV Tecnai F30 G2 TEM equipped with a Tridiem 863 Gatan Image Filter. ### Corrosion solution The liquid for the in-situ TEM corrosion experiment was an aqueous solution (pH 6.1) containing atmospheric levels of dissolved gases (6 μM CO2, 287.5 μM O2) and sodium sulfate (Sigma Aldrich, 2.78 μM), prepared using ultrapure (18.2 MΩ·cm) water. Using a Hamilton syringe equipped with a stop-cock, 5 mL of the test solution was withdrawn and loaded onto the syringe pump for microfluidic pumping (at 2 µL/min) throughout the experiment. The CO2 concentration was determined via titration with NaOH (0.1 mM) using phenolphthalein solution as an indicator. The O2 concentration was determined using a dissolved oxygen meter (VWR). The equilibrated water was prepared by allowing ultrapure (18.2 MΩ·cm) water to equilibrate under atmospheric conditions overnight and then adding the appropriate amount of sodium sulfate stock solution to reach 2.78 µM. Sodium hydroxide solution (Sigma Aldrich, 0.1 mM) was prepared from ultrapure (18.2 MΩ·cm) water. 
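The CO2 titration above reduces to 1:1 stoichiometry at the phenolphthalein endpoint (CO2 + NaOH → NaHCO3), so the dissolved-CO2 concentration follows from the titrant concentration and the two volumes. A minimal sketch of that arithmetic; the volumes below are hypothetical illustrations, not values from the experiment:

```python
def co2_concentration_uM(c_naoh_uM, v_naoh_mL, v_sample_mL):
    """Dissolved CO2 from a 1:1 NaOH titration to the phenolphthalein
    endpoint (CO2 + NaOH -> NaHCO3): moles of titrant = moles of CO2,
    so c_CO2 = c_NaOH * V_NaOH / V_sample."""
    return c_naoh_uM * v_naoh_mL / v_sample_mL

# Hypothetical example: 0.1 mM (100 uM) NaOH titrant, 3.0 mL delivered
# into a 50 mL sample aliquot
print(co2_concentration_uM(100.0, 3.0, 50.0))  # -> 6.0 uM, the reported value
```

Any CO2 concentration consistent with the reported 6 μM can be reproduced by scaling the (assumed) titrant and sample volumes accordingly.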
### In-situ liquid-cell TEM experiment The liquid flow TEM holder (Hummingbird Scientific) assembly is composed of two liquid-cell chips, including a base and lid, that, when compressed and sealed within the holder, create an environmental cell with continuous liquid flow through a channel maintained with thin spacers. Each chip has a thin Si3N4 window in the flow channel that permits electron transmission with minimal loss of spatial resolution while maintaining separation of the liquid from the vacuum of the TEM. A 250 nm SU-8 photoresist spacer, patterned onto the lid chip (as purchased), was used to provide the minimal fluid thickness within the cell before affixing the steel sample over the window. The base chip was plasma cleaned for 3 min just prior to in-situ TEM corrosion testing to produce a hydrophilic surface and to promote wetting and flow through the thin channel. However, the lid chip with the steel lamella was not plasma cleaned to avoid oxidation of the sample. The base and lid chips were loaded into the holder such that the electron beam would pass through the first window and sample before encountering the liquid to enable the highest STEM imaging resolution. The base chip was pre-loaded with a 6 µL droplet of equilibrated deionized water to ensure complete wetting of the sample surface. Ten minutes elapsed between liquid loading and the acquisition of the first STEM image under solution flow. The sample was imaged in STEM mode at 300 kV using a beam current of 13 pA, and images were acquired with a 2 µs dwell time. The sample was imaged over a 93 min period with images acquired about every minute at various magnifications of 2048 × 2048 pixels: 25 images at 4.06 nm/pixel, 8 images at 2.03 nm/pixel, 9 images at 1.44 nm/pixel, and 25 images at 1.02 nm/pixel were acquired with 0.1, 0.4, 0.8, and 1.6 e⁻/Å² fluence at each image magnification, respectively. 
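The per-image fluence values quoted above follow directly from the probe current, dwell time, and pixel area. A sketch of that arithmetic, assuming each image is a single-pass scan with no beam blanking:

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C

def fluence_e_per_A2(current_A, dwell_s, pixel_nm):
    """Electron fluence (e-/A^2) delivered by one single-pass STEM image:
    electrons landed per pixel divided by the pixel area in square angstroms."""
    electrons_per_pixel = current_A * dwell_s / E_CHARGE
    pixel_area_A2 = (pixel_nm * 10.0) ** 2  # 1 nm = 10 angstrom
    return electrons_per_pixel / pixel_area_A2

# 13 pA beam, 2 us dwell, at the four pixel sizes used in the experiment
for px in (4.06, 2.03, 1.44, 1.02):
    print(f"{px:>5} nm/pixel -> {fluence_e_per_A2(13e-12, 2e-6, px):.2f} e-/A^2")
```

This recovers the reported ~0.1, 0.4, 0.8, and 1.6 e⁻/Å² values, and summing image count times per-image fluence over the four magnifications (25, 8, 9, and 25 images) gives roughly 52 e⁻/Å², consistent with the accumulated fluence reported for the inclusion region.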
As the radiation dose to the specimen is dependent on the area scanned by the STEM probe, the magnification was kept low for most of the imaging, minimizing the accumulated electron fluence in the region of the solitary cementite inclusion to 51.5 e⁻/Å². This was done to prevent the generation of radiation species that could alter the sample during observation. It should be noted that at the lowest magnification, the whole of the sample contained within the Si3N4 window was imaged; therefore, reactions between the radiation species in the corrosion solution and on the sample surface would be evenly distributed over the entire sample. No preferential dosing would be expected, except at higher magnification, where the fly-back in the scanning probe rests on the left side of the scanned image region. The experiment continued unobserved overnight with solution flow maintained at 2 µL/min. The solution was kept sealed, and a continuous flow of fresh solution was used throughout the experiment to ensure that reactant depletion would not affect the solution composition. The experiment was halted after 1025 min of total exposure to liquid. The sample was immediately removed from the liquid-cell holder, underwent a cursory removal of liquid via wicking with filter paper, and then was left to dry within a nitrogen dry box, where it was stored between postmortem analyses. ### Characterization of final structure and composition An initial survey of the corroded steel sample was performed on the Tecnai F30 TEM operated at 300 kV to image the microstructure and measure the sample thickness. The corroded sample, still affixed to the liquid-cell chip, was imaged using BF/DF STEM, and the relative sample thickness, expressed in units of mean free path, was measured in a grid of discrete points based on the Fourier log-ratio EELS technique. 
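The log-ratio measurement mentioned above rests on the relation t/λ = ln(I_t/I_0), where I_t is the total spectral intensity and I_0 the zero-loss intensity. A minimal sketch on a synthetic spectrum; the zero-loss integration window is an assumption for illustration, not the value used in the study:

```python
import numpy as np

def relative_thickness(intensity, energy_eV, zlp_halfwidth_eV=2.0):
    """t/lambda from the log-ratio rule t/lambda = ln(I_total / I_zero_loss).
    Assumes a uniform energy grid, so plain sums stand in for integrals."""
    total = intensity.sum()
    zero_loss = intensity[np.abs(energy_eV) <= zlp_halfwidth_eV].sum()
    return np.log(total / zero_loss)

# Synthetic spectrum: unit-area zero-loss peak plus a plasmon with half its
# area, so the expected answer is ln(1.5) ~= 0.405
e = np.linspace(-5.0, 50.0, 5501)
def gauss(mu, sig):
    return np.exp(-0.5 * ((e - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
spectrum = gauss(0.0, 0.5) + 0.5 * gauss(20.0, 3.0)
print(relative_thickness(spectrum, e))
```

In practice the zero-loss window (or a fitted zero-loss model) and plural-scattering corrections matter for accuracy; the sketch only shows the core arithmetic behind the relative thickness maps.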
More extensive sample thickness mapping was performed on the corroded sample using the same Fourier log-ratio technique with energy-filtered TEM using a Tridiem 863 Gatan Image Filter. Relative thickness maps (units of thickness divided by the material-dependent 300 kV electron inelastic mean free path length) were acquired and calculated using DigitalMicrograph (Gatan) and further visualized with MATLAB. Analytical transmission electron microscopy and core-loss energy loss spectroscopy were performed on a 200 kV FEI Titan ChemiSTEM equipped with a Quantum Gatan Image Filter. High-resolution EELS chemical imaging was performed in STEM mode at 200 kV with an emission current of 185 µA, 3900 V extraction, 32 mm camera length, and 50.5 mrad collection angle based on a 5-mm filter opening aperture. Energy windows extending from the oxygen-K to iron-L edge were collected using a dispersion of 0.25 eV/pixel that was bias, gain, and drift corrected. A total of 70 individual spectra, each acquired for 10 s, were collected along a line profile. Using the L3,2 white-line-ratio technique over the iron-L edge profiles, subtracted valence profiles were matched to experimental and published data for a variety of iron oxide scale products, including: α-Fe2O3, γ-Fe2O3, α-FeO(OH), Fe3O4, FeCO3, and FeO. Non-linear least-squares peak fitting, combined with dual-window and 2:1 stepped background subtraction with lifetime broadening of the L3,2 peaks, was used to extract quantitative integrals for each peak and to calculate the profiles. Agreement between the experimental and published data fell within two standard deviations, reflecting the analytical certainty associated with this core-loss EELS profiling technique. ## Data availability The data that support the findings of this study are available from the corresponding author upon request. ## References 1. Hou, B. et al. The cost of corrosion in China. npj Materials Degradation 1, 4 (2017). 2. 
Kermani, M. & Morshed, A. Carbon dioxide corrosion in oil and gas production - a compendium. Corrosion 59, 659–683 (2003). 3. De Waard, C. & Milliams, D. Carbonic acid corrosion of steel. Corrosion 31, 177–181 (1975). 4. Nešić, S. Key issues related to modelling of internal corrosion of oil and gas pipelines - a review. Corros. Sci. 49, 4308–4338 (2007). 5. Significant Pipeline Incidents By Cause: Significant Incident Cause Breakdown: 20 Year Average (1997–2016). In Pipeline Incident 20 Year Trends. Updated: Wednesday, December 6, 2017 edn. (Pipeline and Hazardous Material Safety Administration, 2017). 6. Bhandari, J., Khan, F., Abbassi, R., Garaniya, V. & Ojeda, R. Modelling of pitting corrosion in marine and offshore steel structures - a technical review. J. Loss Prev. Process Ind. 37, 39–62 (2015). 7. Revie, R. W. & Uhlig, H. H. Uhlig's Corrosion Handbook. (John Wiley & Sons, Hoboken, 2011). 8. Han, J., Yang, Y., Brown, B. & Nesic, S. Electrochemical investigation of localized CO2 corrosion on mild steel. Corrosion 2007 Conference and Expo. Paper No. 07323 (2007). 9. Eklund, G. S. Initiation of pitting at sulfide inclusions in stainless steel. J. Electrochem. Soc. 121, 467–473 (1974). 10. Sun, J., Zhang, G., Liu, W. & Lu, M. The formation mechanism of corrosion scale and electrochemical characteristic of low alloy steel in carbon dioxide-saturated solution. Corros. Sci. 57, 131–138 (2012). 11. Wranglen, G. Pitting and sulphide inclusions in steel. Corros. Sci. 14, 331–349 (1974). 12. Lillard, S. Relationships Between Pitting Corrosion and Crystallographic Orientation, An Historical Perspective. In Corrosion Science: A Retrospective and Current Status in Honor of Robert P. Frankenthal, 334–343 (2002). 13. Avci, R. et al. Mechanism of MnS-mediated pit initiation and propagation in carbon steel in an anaerobic sulfidogenic media. Corros. Sci. 76, 267–274 (2013). 14. Bertali, G., Scenini, F. & Burke, M. 
The effect of residual stress on the preferential intergranular oxidation of alloy 600. Corros. Sci. 111, 494–507 (2016). 15. Staicopolus, D. The role of cementite in the acidic corrosion of steel. J. Electrochem. Soc. 110, 1121–1124 (1963). 16. Evans, J. E., Jungjohann, K. L., Browning, N. D. & Arslan, I. Controlled growth of nanoparticles from solution with in situ liquid transmission electron microscopy. Nano Lett. 11, 2809–2813 (2011). 17. Carter, C. & Williams, D. Transmission Electron Microscopy - Diffraction, Imaging, and Spectrometry (Springer International Publishing Switzerland, Berlin, 2016). 18. Ulyanov, P. et al. Microscopy of carbon steels: Combined AFM and EBSD study. Appl. Surf. Sci. 267, 216–218 (2013). 19. Dwivedi, D., Lepková, K. & Becker, T. Carbon steel corrosion: a review of key surface properties and characterization methods. RSC Advances 7, 4580–4610 (2017). 20. Williamson, M., Tromp, R., Vereecken, P., Hull, R. & Ross, F. Dynamic microscopy of nanoscale cluster growth at the solid–liquid interface. Nat. Mater. 2, 532–536 (2003). 21. de Jonge, N., Poirier-Demers, N., Demers, H., Peckys, D. B. & Drouin, D. Nanometer-resolution electron microscopy through micrometers-thick water layers. Ultramicroscopy 110, 1114–1119 (2010). 22. Jungjohann, K. L., Evans, J. E., Aguiar, J. A., Arslan, I. & Browning, N. D. Atomic-scale imaging and spectroscopy for in situ liquid scanning transmission electron microscopy. Microsc. Microanal. 18, 621–627 (2012). 23. Cho, H. et al. The use of graphene and its derivatives for liquid-phase transmission electron microscopy of radiation-sensitive specimens. Nano Lett. 17, 414–420 (2016). 24. Gross, D., Kacher, J., Key, J., Hattar, K. & Robertson, I. M. In situ TEM observations of corrosion in nanocrystalline Fe thin films. Processing, Properties, and Design of Advanced Ceramics and Composites II 261, 329 (2017). 25. Chee, S. W. et al. 
Studying localized corrosion using liquid cell transmission electron microscopy. Chem. Commun. 51, 168–171 (2015). 26. Schilling, S., Janssen, A., Zhong, X., Zaluzec, N. & Burke, M. Liquid In Situ Analytical Electron Microscopy: Examining SCC Precursor Events for Type 304 Stainless Steel in H2O. Microsc. Microanal. 21, 1291–1292 (2015). 27. Mayer, J., Giannuzzi, L. A., Kamino, T. & Michael, J. TEM sample preparation and FIB-induced damage. MRS Bull. 32, 400–407 (2007). 28. Rauch, E. F. et al. Automated nanocrystal orientation and phase mapping in the transmission electron microscope on the basis of precession electron diffraction. Zeitschrift für Kristallographie 225, 103–109 (2010). 29. Schneider, N. M. et al. Electron–water interactions and implications for liquid cell electron microscopy. J. Phys. Chem. C 118, 22373–22382 (2014). 30. Woehl, T. & Abellan, P. Defining the radiation chemistry during liquid cell electron microscopy to enable visualization of nanomaterial growth and degradation dynamics. J. Microsc. 265, 135–147 (2017). 31. Hart, J. L. et al. Direct detection electron energy-loss spectroscopy: a method to push the limits of resolution and sensitivity. Sci. Rep. 7, 8243 (2017). 32. Zhang, Y., Pang, X., Qu, S., Li, X. & Gao, K. Discussion of the CO2 corrosion mechanism between low partial pressure and supercritical condition. Corros. Sci. 59, 186–197 (2012). 33. Clover, D., Kinsella, B., Pejcic, B. & De Marco, R. The influence of microstructure on the corrosion rate of various carbon steels. J. Appl. Electrochem. 35, 139–149 (2005). 34. Kruger, J. Influence of crystallographic orientation on the pitting of iron in distilled water. J. Electrochem. Soc. 106, 736–736 (1959). 35. Botman, A., Mulders, J. & Hagen, C. Creating pure nanostructures from electron-beam-induced deposition using purification techniques: a technology perspective. Nanotechnology 20, 372001 (2009). 36. Kotula, P. G. & Keenan, M. R. 
Application of multivariate statistical analysis to STEM X-ray spectral images: Interfacial analysis in microelectronics. Microsc. Microanal. 12, 538–544 (2006). ## Acknowledgements This work was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. Department of Energy (DOE) Office of Science by Los Alamos National Laboratory (Contract DE-AC52-06NA25396) and Sandia National Laboratories (Contract DE-NA-0003525). ## Author information ### Contributions K.L.J., S.C.H., M.L.O., T.J.K. and C.C. designed the experimental methods. W.M.M. prepared the sample. C.C., J.A.A., P.G.K., D.C.B., K.H. and K.L.J. acquired experimental data. Data were interpreted by C.C., S.C.H., R.O.G., J.A.A., I.M.T., K.L.J. and M.L.O. The manuscript was prepared by S.C.H., C.C., R.O.G., T.S.P., K.L.J. and M.L.O. ### Corresponding author Correspondence to Michele L. Ostraat. ## Ethics declarations ### Competing interests The authors declare no competing interests. ## Additional information Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. 
If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. ## About this article ### Cite this article Hayden, S.C., Chisholm, C., Grudt, R.O. et al. Localized corrosion of low-carbon steel at the nanoscale. npj Mater Degrad 3, 17 (2019). https://doi.org/10.1038/s41529-019-0078-1 ## Further reading • Mand, J. & Enning, D. Oil field microorganisms cause highly localized corrosion on chemically inhibited carbon steel. Microbial Biotechnology (2021). • Rieders, N., Nandasiri, M., Mogk, D. & Avci, R. New Insights into Sulfide Inclusions in 1018 Carbon Steels. Metals (2021). • Wang, Z. & Lee, C. Knife-edge interferogram analysis for corrosive wear propagation at sharp edges. Applied Optics (2021). • Aikawa, A., Kioka, A., Nakagawa, M. & Anzai, S. Nanobubbles as corrosion inhibitor in acidic geothermal fluid. Geothermics (2021). • Unocic, R. R., Jungjohann, K. L., Mehdi, B. L., Browning, N. D. & Wang, C. In situ electrochemical scanning/transmission electron microscopy of electrode–electrolyte interfaces. MRS Bulletin (2020).
https://www.physicsforums.com/threads/l-edge-of-absorption.314642/
# L-edge of absorption What is the L-edge of absorption? For example, Si has its L2,3 edge at 99.8 eV. The second atomic shell is L, but what does 2,3 mean? Thanks Bob S The K-edge and L-edge of an atom refer to the minimum energy (or maximum wavelength) photon (UV or x-ray) that can remove a (photo)electron from the K or L shell (usually meaning ionize), such that other bound electrons cascade down to fill the vacancies. The photon energies correspond to "edges" in a plot of photon attenuation vs. photon energy, where there is a sudden large increase in the attenuation coefficient. Figure 1 in this reference shows a K-shell photoelectron ejection, with the possible atomic electron cascades. The subscripts in L2,3 denote the spin–orbit-split 2p1/2 (L2) and 2p3/2 (L3) subshells; in light elements such as Si these lie so close in energy that the two edges merge into a single L2,3 feature.
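The quoted 99.8 eV edge energy maps directly onto a photon wavelength through E = hc/λ, with hc ≈ 1239.84 eV·nm. A quick sketch of that conversion:

```python
HC_EV_NM = 1239.84198  # Planck constant times speed of light, in eV*nm

def edge_wavelength_nm(edge_eV):
    """Longest photon wavelength that can still ionize the shell:
    lambda = hc / E, so lower edge energies mean longer wavelengths."""
    return HC_EV_NM / edge_eV

print(edge_wavelength_nm(99.8))  # Si L2,3 edge: ~12.4 nm, extreme ultraviolet
```

Photons with wavelengths longer than this (energies below the edge) cannot eject the L-shell electron, which is why the attenuation coefficient drops abruptly below the edge energy.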
http://www.itziararetxaga.net/2020/01/modelling-the-strongest-silicate-emission-features-of-local-type-1-agn/
Modelling the strongest silicate emission features of local type 1 AGN Accepted for publication in Astrophysical Journal (arXiv: 2001.00844) Abstract We measure the 10 and 18 μm silicate features in a sample of 67 local (z < 0.1) type 1 active galactic nuclei (AGN) with available Spitzer spectra dominated by non-stellar processes. We find that the 10 μm silicate feature peaks at 10.3^{+0.7}_{-0.9} μm with a strength (Si_p = ln[f_p(spectrum)/f_p(continuum)]) of 0.11^{+0.15}_{-0.36}, while the 18 μm one peaks at 17.3^{+0.4}_{-0.7} μm with a strength of 0.14^{+0.06}_{-0.06}. We select from this sample the sources with the strongest 10 μm silicate strength (Si_{10μm} > 0.28, 10 objects). We carry out a detailed modeling of the IRS/Spitzer spectra by comparing several models that assume different geometries and dust composition: a smooth torus model, two clumpy torus models, a two-phase-medium torus model, and a disk+outflow clumpy model. We find that the silicate features are well modeled by the clumpy model of Nenkova et al. 2008, and among all models those including outflows and complex dust composition are the best (Hoenig et al. 2017). We note that even in AGN-dominated galaxies it is usually necessary to add stellar contributions to reproduce the emission at the shortest wavelengths.
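The strength definition in the abstract, Si_p = ln[f_p(spectrum)/f_p(continuum)], is straightforward to apply once the peak flux and the interpolated continuum flux are in hand. A sketch with made-up flux values, not numbers from the paper:

```python
import math

def silicate_strength(f_peak_spectrum, f_peak_continuum):
    """Si_p = ln(f_p(spectrum) / f_p(continuum)); positive values mean the
    feature is seen in emission, negative values in absorption."""
    return math.log(f_peak_spectrum / f_peak_continuum)

# Hypothetical fluxes: a spectrum about 12% above its interpolated continuum
# at the feature peak gives a strength comparable to the quoted median of 0.11
print(silicate_strength(1.12, 1.00))
```

Because the definition is logarithmic, a strength of 0.11 corresponds to the spectrum sitting about exp(0.11) − 1 ≈ 12% above the continuum at the peak.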
https://learnaboutstructures.com/Common-Load-Types-for-Beams-and-Frames
Resources for Structural Engineers and Engineering Students >>When you're done reading this section, check your understanding with the interactive quiz at the bottom of the page. How can we deal with these types of uniform or other distributed loads when performing equilibrium calculations? The way to do this is to consider an equivalent total load, or effective force, caused by the distributed load, which acts at the centroid of the distribution. The location of this centroid is different depending on the type of load distribution, as shown on the right side of Figure 4.1. For a uniform load, the effective force is equal to the total load, given by the load per unit length multiplied by the total length (or $wL$). This is also equal to the area under the distributed load diagram, in this case a rectangle. For the uniform load, the centroid is at the center of the distribution ($L/2$). This is the location where you would put the effective force in order to use it in equilibrium calculations. For the triangular load, the effective force is again the total load, which is equal to the area under the distribution, in this case $wL/2$ ($\frac{1}{2}bh$), and this acts at the centroid of the triangle, which is located at one-third of the length from the high side. For the trapezoidally distributed load, the case is slightly more complex, as shown in Figure 4.1. The point load and point moment do not have any equivalent total load, since they already act at a single point. Effective forces are only used for calculating the effects of distributed loads with equilibrium calculations. Do not replace distributed loadings with effective forces when determining internal shear and moment diagrams; the rest of the analysis will be incorrect if you do.
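The uniform and triangular results above are both special cases of a load that varies linearly from intensity $w_1$ to $w_2$ over length $L$, which also covers the trapezoidal case. A small sketch of the general formulas (any consistent force/length units):

```python
def equivalent_load(w1, w2, L):
    """Resultant of a linearly varying distributed load (intensity w1 at one
    end, w2 at the other, over length L) and its centroid measured from the
    w1 end. R is the trapezoid's area; x_bar is the standard trapezoid
    centroid, which reduces to L/2 for a uniform load and 2L/3 for a
    triangle growing toward w2 (i.e., L/3 from the high side)."""
    R = 0.5 * (w1 + w2) * L
    x_bar = L * (w1 + 2.0 * w2) / (3.0 * (w1 + w2))
    return R, x_bar

print(equivalent_load(5.0, 5.0, 4.0))  # uniform: (20.0, 2.0), i.e. wL at L/2
print(equivalent_load(0.0, 6.0, 3.0))  # triangular: (9.0, 2.0), i.e. wL/2 at L/3 from the high side
```

As the text cautions, these resultants are only for writing equilibrium equations; the distributed load itself must be kept when constructing shear and moment diagrams.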
https://code.tutsplus.com/courses/javascript-fundamentals/lessons/conditionals
# 4.1 Conditionals

What happens when we need to do one thing if a condition is true, or something else if it isn't? We use an if statement, that's what, and in this lesson we'll learn all about if statements and the ternary operator.

Syntax covered:

• if
• else
• () ? :
https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Attend_and_Predict:_Understanding_Gene_Regulation_by_Selective_Attention_on_Chromatin&diff=prev&oldid=38728
# Background

Gene regulation is the process of controlling which genes in a cell's DNA are turned 'on' (expressed) or 'off' (not expressed). Through this process, functional products such as proteins are created. Even though all the cells of a multicellular organism (e.g., humans) contain the same DNA, different types of cells in that organism may express very different sets of genes. As a result, each cell type has distinct functionality; in other words, how a cell operates depends upon the genes expressed in that cell. Many factors, including 'chromatin modification marks', influence which genes are abundant in that cell. The function of chromatin is to efficiently wrap DNA around histones into a condensed volume that fits into the nucleus of a cell, and to protect the DNA structure and sequence during cell division and replication. Different chemical modifications to the histones of the chromatin, known as histone marks, change the spatial arrangement of the condensed DNA structure, which in turn affects the expression of genes in the histone mark's neighboring region. Histone marks can promote (or obstruct) a gene being turned on by making the gene region accessible (or restricted). This section of the DNA, where histone marks can potentially have an impact, is known as the DNA flanking region or 'gene region', and is considered to cover 10k base pairs centered at the transcription start site (TSS) (i.e., 5k base pairs in each direction). Unlike genetic mutations, histone modifications are reversible [1]. Therefore, understanding the influence of histone marks in determining gene regulation can assist in developing drugs for genetic diseases.

# Introduction

The revolution in genomic technologies now enables us to profile genome-wide chromatin mark signals. Therefore, biologists can now measure gene expressions and chromatin signals of the 'gene region' for different cell types covering the whole human genome.
The Roadmap Epigenome Project (REMC, publicly available) [2] recently released 2,804 genome-wide datasets of 100 separate "normal" (not diseased) human cells/tissues, among which 166 datasets are gene expression reads and the rest are signal reads of various histone marks. The goal is to understand which histone marks are the most important and how they interact in gene regulation for each cell type. Signal reads for histone marks are high-dimensional and spatially structured. The influence of a histone modification mark can be anywhere in the gene region (covering 10k bp centered at the TSS). It is important to understand how the impact of the mark on gene expression varies over the gene region; in other words, how histone signals over the gene region impact the gene expression. There are different types of histone marks in human chromatin that can influence gene regulation. Researchers have found five standard histone proteins. These five histone proteins can be altered in different combinations with different chemical modifications, resulting in a large number of distinct histone modification marks. Different histone modification marks can act as a module, interacting with each other to influence the gene expression. This paper proposes an attention-based deep learning model to find how these chromatin factors/histone modification marks contribute to the gene expression of a particular cell. AttentiveChrome [3] utilizes a hierarchy of multiple LSTMs to discover interactions between the signals of each histone mark and to learn dependencies among the marks in expressing a gene. The authors included two levels of soft attention mechanism: (1) to attend to the most relevant signals of a histone mark, and (2) to attend to the important marks and their interactions.

## Main Contributions

The contributions of this work can be summarized as follows:

• More accurate predictions than the state-of-the-art baselines.
This is measured using datasets from REMC on 56 different cell types.

• Better interpretation than the state-of-the-art methods for visualizing deep learning models. They compute the correlation of the attention scores of the model with the mark signals from REMC.
• Like previous applications of attention models that indirectly hint at the parts of the input the model deemed important, AttentiveChrome too can explain its decisions by hinting at "what" and "where" it has focused.
• This is the first time that an attention-based deep learning approach has been applied to a problem in molecular biology.

# Previous Works

Machine learning algorithms for classifying gene expression from histone modification signals have been surveyed in [15]. These algorithms vary from linear regression, support vector machines, and random forests to rule-based learning and CNNs. To accommodate the spatially structured, high-dimensional input data (histone modification signals), these studies applied different feature selection strategies. The authors' preceding study, DeepChrome [4], incorporated a best-position selection strategy, where the positions that are highly correlated with the gene expression are considered the best positions. This model can learn the relationships between the histone marks. The CNN-based DeepChrome model outperforms all previous works. However, these approaches either (1) failed to model the spatial dependencies among the marks, or (2) required additional feature analysis. Only AttentiveChrome is reported to satisfy all eight desirable metrics of a model.

# AttentiveChrome: Model Formulation

The authors proposed an end-to-end architecture with the ability to simultaneously attend and predict.
This method incorporates recurrent neural networks (RNNs) composed of LSTM units to model the sequential spatial dependencies of the gene regions and to predict the gene expression level. The embedding vector $h_t$, the output of an LSTM module, encodes the learned representation of the feature dependencies from time step 0 to $t$. For this task, each bin position of the gene region is considered a time step. The proposed AttentiveChrome framework contains the following five modules:

• A bin-level LSTM encoder encoding the bin positions of the gene region (one for each HM mark)
• A bin-level $\alpha$-attention across all bin positions (one for each HM mark)
• An HM-level LSTM encoder (one encoder encoding all HM marks)
• An HM-level $\beta$-attention among all HM marks (one)
• The final classification module

Figure 1 (Supplementary Figure 2) presents an overview of the proposed AttentiveChrome framework.

## Input and Output

Each dataset contains the gene expression labels and the histone signal reads for one specific cell type. The authors evaluated AttentiveChrome on 56 different cell types. For each mark, we have a feature/input vector containing the signal reads surrounding the gene's TSS position (gene region) for that histone mark. The label of this input vector denotes the gene expression of the specific gene. This study considers binary labeling, where $+1$ denotes that the gene is expressed (on) and $-1$ denotes that the gene is not expressed (off). Each histone mark has one feature vector for each gene. The authors integrate the feature inputs and outputs of their previous work DeepChrome [4] into this research. The input feature is represented by a matrix $\textbf{X}$ of size $M \times T$, where $M$ is the number of HM marks considered in the input, and $T$ is the number of bin positions taken into account to represent the gene region.
The $j^{th}$ row of the matrix $\textbf{X}$, $x^j$, represents sequentially structured signals from the $j^{th}$ HM mark, where $j\in \{1, \cdots, M\}$. Therefore, $x^j_t$ in the matrix $\textbf{X}$ represents the value of the $t^{th}$ bin belonging to the $j^{th}$ HM mark, where $t\in \{1, \cdots, T\}$. If the training set contains $N_{tr}$ labeled pairs, the $n^{th}$ pair is specified as $(X^n, y^n)$, where $X^n$ is a matrix of size $M \times T$ and $y^n \in \{ -1, +1 \}$ is the binary label, with $n \in \{ 1, \cdots, N_{tr} \}$. Figure 2 exhibits the input features and the output of AttentiveChrome for a particular gene (one sample).

## Bin-Level Encoder (one LSTM for each HM)

The sequentially ordered elements (each element is a bin position) of the gene region of the $n^{th}$ gene are represented by the $j^{th}$ row vector $x^j$. The authors considered each bin position as a time step for the LSTM. This study incorporates a bidirectional LSTM to model the overall dependencies among a total of $T$ bin positions in the gene region. The bidirectional LSTM contains two LSTMs:

• A forward LSTM, $\overrightarrow{LSTM_j}$, to model $x^j$ from $x^j_1$ to $x^j_T$, which outputs the embedding vector $\overrightarrow{h^j_t}$ of size $d$ for each bin $t$
• A reverse LSTM, $\overleftarrow{LSTM_j}$, to model $x^j$ from $x^j_T$ to $x^j_1$, which outputs the embedding vector $\overleftarrow{h^j_t}$ of size $d$ for each bin $t$

The final output of this layer, the embedding vector at the $t^{th}$ bin for the $j^{th}$ HM, $h^j_t$, of size $d$, is obtained by concatenating the two vectors from both directions. Therefore, $h^j_t = [ \overrightarrow{h^j_t}, \overleftarrow{h^j_t}]$.

## Bin-Level $\alpha$-Attention

Each bin contributes differently to the encoding of the entire $j^{th}$ mark. To highlight the most important bins for prediction, a soft attention weight vector $\alpha^j$ of size $T$ is learned for each $j$.
To calculate the soft weight $\alpha^j_t$ for each $t$, the embedding vectors $\{h^j_1, \cdots, h^j_T \}$ of all the bins are utilized, using the following equation:

$\alpha^j_t = \frac{\exp(\textbf{W}_b h^j_t)}{\sum_{i=1}^T{\exp(\textbf{W}_b h^j_i)}}$

The parameter $\textbf{W}_b$ is learned jointly during training. The $j^{th}$ HM mark can then be represented by $m^j = \sum_{t=1}^T{\alpha^j_t \times h^j_t}$. Here, $h^j_t$ is the embedding vector and $\alpha^j_t$ is the importance weight of the $t^{th}$ bin in the representation of the $j^{th}$ HM mark. Intuitively, $\textbf{W}_b$ learns the cell type.

## HM-Level Encoder (one LSTM)

Studies have observed that HMs work cooperatively to provoke or subdue gene expression [5]. The HM-level encoder utilizes one bidirectional LSTM to capture this relationship between the HMs. To formulate the sequential dependency, a random ordering is imagined, as the authors did not find any influence of a specific ordering of the HMs. The representation $m^j$ of the $j^{th}$ HM, $HM_j$, calculated from the bin-level attention layer, is the input to this step. This set-based encoder outputs an embedding vector $s^j$ of size $d'$, which is the encoding for the $j^{th}$ HM:

$s^j = [ \overrightarrow{LSTM_s}(m^j), \overleftarrow{LSTM_s}(m^j) ]$

The dependencies between the $j^{th}$ HM and the other HM marks are encoded in $s^j$, whereas $m^j$ from the previous step encodes the bin dependencies of the $j^{th}$ HM.

## HM-Level $\beta$-Attention

This second soft attention level finds the important HM marks for classifying a gene's expression by learning an importance weight $\beta^j$ for each $HM_j$, where $j \in \{ 1, \cdots, M \}$:

$\beta^j = \frac{\exp(\textbf{W}_s s^j)}{\sum_{i=1}^M{\exp(\textbf{W}_s s^i)}}$

The HM-level context parameter $\textbf{W}_s$ is trained jointly in the process. Intuitively, $\textbf{W}_s$ learns how significant the HMs are for a cell type.
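Both attention levels follow the same soft-attention pattern: score each embedding with a learned context vector, normalize the scores with a softmax, and return the weighted sum. A minimal pure-Python sketch of this pooling step (the context vector `w` is a stand-in for the learned $\textbf{W}_b$ or $\textbf{W}_s$; no training is shown):

```python
import math

def soft_attention_pool(H, w):
    """Soft-attention pooling: score each embedding h in H against a
    context vector w, softmax the scores, and return the weighted sum.
    H: list of embeddings (lists of floats); w: context vector."""
    scores = [sum(wi * hi for wi, hi in zip(w, h)) for h in H]
    m = max(scores)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]           # attention weights, sum to 1
    pooled = [sum(a * h[k] for a, h in zip(alphas, H))
              for k in range(len(w))]
    return pooled, alphas

# Example: three bin embeddings of size 2; the third scores highest
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
w = [1.0, 1.0]
pooled, alphas = soft_attention_pool(H, w)
```

In AttentiveChrome this pooling runs once per HM mark over its bin embeddings ($\alpha$ level), and once over the HM encodings ($\beta$ level).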
Finally, the entire gene region is encoded in a hidden representation $\textbf{v}$, using the weighted sum of the embeddings of all HM marks:

$\textbf{v} = \sum_{j=1}^{M}{\beta^j \times s^j}$

## End-to-end training

The embedding vector $\textbf{v}$ is fed to a simple classification module, $f(\textbf{v}) = \text{softmax}(\textbf{W}_c\textbf{v}+b_c)$, where $\textbf{W}_c$ and $b_c$ are learnable parameters. The output is the probability of the gene expression being high (expressed) or low (suppressed). The whole model, including the attention modules, is differentiable, so backpropagation can perform end-to-end learning trivially. The negative log-likelihood loss function is minimized during learning.

# Related Works/Studies

In the last few years, deep learning models have obtained unprecedented success in diverse research fields. Though not as rapidly as in other fields, deep learning based algorithms are gaining popularity among bioinformaticians.

## Attention-based Deep Models

The idea of the attention technique in deep learning is adapted from the human visual perception system: humans tend to focus on some parts more than others while perceiving a scene. This mechanism, augmented with deep neural networks, has achieved excellent outcomes on several research topics. Various types of attention models, e.g., soft [6], location-aware [7], or hard [8, 9] attention, have been proposed in the literature. In the soft attention model, a soft weight vector is calculated over the feature vectors; the magnitude of each weight correlates with the degree of importance of the feature for the prediction.

## Visualization and Apprehension of Deep Models

Prior studies mostly focused on interpreting convolutional neural networks (CNNs) for image classification through deconvolution [10], saliency map [11, 12], and class optimization [12] based visualization techniques. Some recent research works [13, 14] tried to understand recurrent neural networks (RNNs) for text-based problems.
By looking into the features the model attends to, we can interpret the output of a deep model.

## Conclusion

The paper has introduced an attention-based approach called "AttentiveChrome" that deals with both understanding and prediction, with several advantages over previous architectures, including higher accuracy than state-of-the-art baselines and clearer interpretation than saliency maps and class optimization. Finally, according to the authors, this is the first implementation of deep attention to understand gene regulation.

# References

[1] Andrew J Bannister and Tony Kouzarides. Regulation of chromatin by histone modifications. Cell research, 21(3):381–395, 2011.

[2] Anshul Kundaje, Wouter Meuleman, Jason Ernst, Misha Bilenky, Angela Yen, Alireza Heravi-Moussavi, Pouya Kheradpour, Zhizhuo Zhang, Jianrong Wang, Michael J Ziller, et al. Integrative analysis of 111 reference human epigenomes. Nature, 518(7539):317–330, 2015.

[3] Singh, Ritambhara, et al. "Attend and Predict: Understanding Gene Regulation by Selective Attention on Chromatin." Advances in Neural Information Processing Systems. 2017.

[4] Ritambhara Singh, Jack Lanchantin, Gabriel Robins, and Yanjun Qi. Deepchrome: deep-learning for predicting gene expression from histone modifications. Bioinformatics, 32(17):i639–i648, 2016.

[5] Joanna Boros, Nausica Arnoult, Vincent Stroobant, Jean-François Collet, and Anabelle Decottignies. Polycomb repressive complex 2 and h3k27me3 cooperate with h3k9 methylation to maintain heterochromatin protein 1α at chromatin. Molecular and cellular biology, 34(19):3662–3674, 2014.

[6] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

[7] Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R.
Garnett, editors, Advances in Neural Information Processing Systems 28, pages 577–585. Curran Associates, Inc., 2015.

[8] Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1412–1421, Lisbon, Portugal, September 2015. Association for Computational Linguistics.

[9] Huijuan Xu and Kate Saenko. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In ECCV, 2016.

[10] Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer Vision–ECCV 2014, pages 818–833. Springer, 2014.

[11] David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert Müller. How to explain individual classification decisions. Volume 11, pages 1803–1831, 2010.

[12] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. 2013.

[13] Andrej Karpathy, Justin Johnson, and Fei-Fei Li. Visualizing and understanding recurrent networks. 2015.

[14] Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. Visualizing and understanding neural models in nlp. 2015.

[15] Xianjun Dong and Zhiping Weng. The correlation between histone modifications and gene expression. Epigenomics, 5(2):113–116, 2013.
https://chemistry.stackexchange.com/questions/87158/propane-water-molar-ratio
# Propane-water molar ratio [closed] Propane gas, $$\ce{C3H8}$$, burns in oxygen to produce carbon dioxide and water vapour as follows: $$\ce{C3H8 (g) + 5O2 (g) = 3 CO2(g) + 4 H2O (g) + \pu{2200 kJ}}$$ If $$\pu{1.5 mol}$$ of propane is consumed in this reaction, how many moles of $$\ce{H2O}$$ are produced? I'm completely stuck on this question; I have no idea how to start. Any help would be greatly appreciated. • Let's start with the balanced reaction: how many moles of water does 1 mol of propane produce? You have $1~\ce{C3H8}$ on the left (reactants), and $4~\ce{H2O}$ on the right (products). Abstract away from the energy; it's irrelevant here. – andselisk Dec 10 '17 at 10:27 • Wouldn't that be 4 moles of water produced? – Grimestock Dec 10 '17 at 10:30 • Yep, 4 it is. Now if there is 1.5 mol of propane (i.e. 1.5 times more), how many moles of water would it be? Remember that coefficients reflect molar ratios. – andselisk Dec 10 '17 at 10:32 • 6 moles of water produced? – Grimestock Dec 10 '17 at 10:33 • Wow. That was a lot simpler than I thought. Thank you! – Grimestock Dec 10 '17 at 10:36 $$\frac{n(\ce{H2O})}{n(\ce{C3H8})} = \frac{4}{1}$$ $$\frac{4}{1} \cdot 1.5 = 6$$ moles of water being produced
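The mole-ratio arithmetic in the answer generalizes: multiply the known amount by the ratio of balanced-equation coefficients. A small illustrative sketch (the function name is my own):

```python
def moles_product(moles_reactant, coeff_reactant, coeff_product):
    """Convert moles of one species to moles of another using the
    coefficient ratio from the balanced chemical equation."""
    return moles_reactant * coeff_product / coeff_reactant

# C3H8 + 5 O2 -> 3 CO2 + 4 H2O
water = moles_product(1.5, 1, 4)   # 1.5 mol propane -> 6.0 mol H2O
co2   = moles_product(1.5, 1, 3)   # ... and 4.5 mol CO2
```

The same one-liner answers any "how many moles of X are produced from Y" question once the equation is balanced.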
https://salsa.debian.org/edd/r-cran-mgcv/blame/2b1bc6473ed502201eb0cef883f9e1c71908afc9/man/uniquecombs.Rd
\name{uniquecombs}
\alias{uniquecombs}
%- Also NEED an \alias for EACH other topic documented here.
\title{find the unique rows in a matrix }
\description{
This routine returns a matrix or data frame containing all the unique rows of the
matrix or data frame supplied as its argument. That is, all the duplicate rows are
stripped out. Note that the ordering of the rows on exit need not be the same
as on entry. It also returns an index attribute for relating the result back
to the original matrix.
}
\usage{
uniquecombs(x,ordered=FALSE)
}
%- maybe also usage for other objects documented here.
\arguments{
\item{x}{ is an \R matrix (numeric), or data frame. }
\item{ordered}{ set to \code{TRUE} to have the rows of the returned object in the same order regardless of input ordering.}
}
\details{
Models with more parameters than unique combinations of covariates are not
identifiable. This routine provides a means of evaluating the number of unique
combinations of covariates in a model.

When \code{x} has only one column then the routine uses \code{\link{unique}}
and \code{\link{match}} to get the index. When there are multiple columns then
it uses \code{\link{paste0}} to produce labels for each row, which should be
unique if the row is unique. Then \code{unique} and \code{match} can be used
as in the single column case.

Obviously the pasting is inefficient, but still quicker for large n than the
C based code that used to be called by this routine, which had O(nlog(n)) cost.
In principle a hash table based solution in C would be only O(n) and much
quicker in the multicolumn case.

\code{\link{unique}} and \code{\link{duplicated}}, can be used
in place of this, if the full index is not needed. Relative performance is variable.

If \code{x} is not a matrix or data frame on entry then an attempt is made to
coerce it to a data frame.
}
\value{
A matrix or data frame consisting of the unique rows of \code{x} (in arbitrary order).

The matrix or data frame has an \code{"index"} attribute. \code{index[i]} gives the row of the returned
matrix that contains row i of the original matrix.
}

\seealso{\code{\link{unique}}, \code{\link{duplicated}}, \code{\link{match}}.}

\author{ Simon N. Wood \email{simon.wood@r-project.org} with thanks to Jonathan Rougier}

\examples{
require(mgcv)
## matrix example...
X <- matrix(c(1,2,3,1,2,3,4,5,6,1,3,2,4,5,6,1,1,1),6,3,byrow=TRUE)
print(X)
Xu <- uniquecombs(X);Xu
ind <- attr(Xu,"index")
## find the value for row 3 of the original from Xu
Xu[ind[3],];X[3,]

## same with fixed output ordering
Xu <- uniquecombs(X,TRUE);Xu
ind <- attr(Xu,"index")
## find the value for row 3 of the original from Xu
Xu[ind[3],];X[3,]

## data frame example...
df <- data.frame(f=factor(c("er",3,"b","er",3,3,1,2,"b")),
                 x=c(.5,1,1.4,.5,1,.6,4,3,1.7))
uniquecombs(df)
}
\keyword{models}
\keyword{regression}%-- one or more ..
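The label-based strategy described in the \details section (turn each row into a unique label, then match rows back to the unique set) translates directly outside R. A rough Python sketch of the idea, not the mgcv implementation:

```python
def uniquecombs(rows):
    """Return the unique rows of a list of row-sequences, plus an index
    list mapping each original row to its row in the unique set
    (analogous to the "index" attribute in the R routine)."""
    seen = {}       # row key -> position in the unique list
    unique = []
    index = []
    for row in rows:
        key = tuple(row)        # analogous to pasting the row into a label
        if key not in seen:
            seen[key] = len(unique)
            unique.append(key)
        index.append(seen[key])
    return unique, index

# Same matrix as the R example, as a list of rows
X = [(1, 2, 3), (1, 2, 3), (4, 5, 6), (1, 3, 2), (4, 5, 6), (1, 1, 1)]
uniq, idx = uniquecombs(X)
# uniq[idx[i]] recovers row i of the original, as with Xu[ind[3],]
```

Using a hash map gives the O(n) behavior the \details section says a C hash-table solution would achieve.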
http://math.stackexchange.com/questions/365669/qualifying-exam-question-on-manifolds
# Qualifying Exam Question on Manifolds

I am practicing qualifying exam problems and I am having trouble with the following question. Any help is greatly appreciated.

Let $P$ be a polygon with an even number of sides. Suppose that the sides are identified in pairs in any way whatsoever. Prove that the quotient space is a manifold.

Comments:

• Do you mean polygon as opposed to polynomial? – Michael Albanese Apr 18 '13 at 18:07
• Since you have been posting so many questions, it may be a good idea for you to post your ideas/attempts. – user27126 Apr 18 '13 at 18:10
• "Any way whatsoever" unfortunately needs a qualifier, or else there are counter-examples. Presumably the identifications on the boundary are 1-1 and linear, and no side is allowed to be identified to two or more other sides, etc. – Ryan Budney Apr 18 '13 at 18:52
• Maybe. I may be able to do that problem. But the one posted is word for word. – Ethan Hawver Apr 18 '13 at 18:54
• Dear Ethan, I think you should understand the "any way whatsoever" phrase to refer to the way the edges are grouped into pairs (and also about the choice of orientation). The actual gluing should just be the obvious one where the two edges are glued (according to some chosen orientation) so that their vertices match up. (The way you glue the edges of a square to make a torus, or a Klein bottle.) Regards, – Matt E Aug 9 '13 at 12:21

Answer: A manifold is, in particular, a locally Euclidean space, meaning that every point has a neighborhood homeomorphic to a disk in $\mathbb{R}^n$. You are working with a surface ($n = 2$).
https://robotics.stackexchange.com/tags/screw-theory/hot
# Tag Info

## Hot answers tagged screw-theory

6
A coordinate transformation of a point P from Frame 1 to Frame 0 is given by:
$$\mathbf{p}^0=\mathbf{o}^0_1+\mathbf{R}^0_1\mathbf{p}^1.$$
Differentiating with respect to time gives:
$$\dot{\mathbf{p}}^0=\dot{\mathbf{o}}^0_1+\mathbf{R}^0_1\dot{\mathbf{p}}^1+\dot{\mathbf{R}}^0_1\mathbf{p}^1.$$
Considering that $\dot{\mathbf{p}}^1=0$ as $\mathbf{p}^1$ is ...

5
The geometric Jacobian provides all the information you need for singularity or manipulability analysis. Linearly dependent columns correspond to joints with parallel axes. More information about Jacobians for under-actuated manipulators (as is your case) can be found in my book "Robotics, Vision & Control", section 8.4.1. For information about ...

5
There are a lot of definitional problems and inconsistencies in this area. Geometric Jacobian: I'm not sure this has a precise and agreed-upon meaning. But across the more classical robotics books (Siciliano et al., Spong et al., Corke) it relates joint velocities to end-effector velocity (translational and rotational) expressed in either the world or end-effector frame.

4
About why screw axes: According to Kevin Lynch in his video on Twists, "just like the time-derivative of a rotation matrix is not equivalent to the angular velocity, the time-derivative of a transformation matrix is not equivalent to the rigid-body velocity" (linear and angular). Also he mentions that, instead, "any rigid-body velocity is ...

4
The author expects a background that includes a course in physics or mechanics where this equation is taught. When that is the case, this equation gives you the instantaneous velocity of a particle (point) moving on a circular path. The $\times$ in $\dot{p} = \omega \times p$ is the cross product. (This may already be obvious to you; it's hard to tell from the ...

4
The Jacobian in that equation is from the joint velocity to the "spatial velocity" of the end effector.
The spatial velocity of an object is a somewhat unintuitive concept: it is the velocity of a frame rigidly attached to the end effector but currently coincident with the origin frame. It may help to think of the rigid body as extending to cover the whole ...

4 Actually, the 6x1 vector is sometimes better referred to as the coordinates of the twist. The twist itself is a 4x4 matrix, an element of $se(3)$, found by \begin{align} A &= \begin{bmatrix} \widehat{\omega} & v \\ 0 & 0 \end{bmatrix} \\ v &\triangleq -\omega \times q \end{align} where $\omega$ is the unit vector pointing along the axis of ...

4 I will try to make it as simple as possible. Imagine you have a SCREW: when you WRENCH it, it TWISTS forward or backward. From your wiki link: "The components of the screw define the Plücker coordinates of a line in space and the magnitudes of the vector along the line and moment about this line." It means that any system can be described as those ...

3 The body velocity $V_{b}$ is the velocity of the frame with respect to the world, as seen from the frame's perspective. Its rotational component $\omega_{b}$ contains the rotation rates around the world-fixed axes instantaneously pointing in the frame's forward, lateral, and dorsal directions (local $x$, $y$, and $z$), and its translational component $v_{b}$ ...

3 Lynch and Park's Modern Robotics book uses the product of exponentials formula and screw axes to describe manipulators, and they have a well-documented library available in Python, MATLAB, and Mathematica. Plus there is a community-released C++ port using CMake/Eigen. The book is available on this site (for free): http://modernrobotics.org/ The original library is ...

3 I will try not to skip too many steps. Assume a global coordinate frame at the base, with the arm fully extended along the Y-axis of the base frame. Since SCARA has four joints, we will create four 6D spatial vectors (screws) $\xi_{i}$ with respect to the global coordinate frame.
Keep in mind that all spatial vectors are described with respect to the ...

3 Hint: First, write the transformation matrix as $$T = \begin{bmatrix} R &p\\0_{1\times3} &1 \end{bmatrix}.$$ Now we use the relations $\omega_a = R\omega_b$ and $q_a = Rq_b + p$. Then since $v_a = -\omega_a \times q_a$, we can derive $v_a$ in terms of all the known quantities. In fact, two twists representing the same screw motion described in ...

3 Actuators / forces. Do I get this right: you have a theoretical model of a rigid multibody system and would like to perform rigid-body dynamics computations. You have implemented the model and now would like to compute how the model behaves when driven by an actuator. However, what is an actuator for you? Is it simply a force acting at that joint? Is it a DC ...

2 If you haven't come across the Rigid Body Dynamics Library (RBDL) you might want to look at how they implement it, and/or contact the author Martin Felis.

2 Worked example: with generalized coordinates $\vec{q} = [q_{1}\hspace{1em}q_{2}]^{T}$, the Jacobian is $$\vec{J} = \frac{\partial \vec{r}_{OA}(\vec{q})}{\partial\vec{q}} = \begin{bmatrix} \frac{\partial \vec{r}_{1}}{\partial\vec{q}_{1}} & \cdots & \frac{\partial \vec{r}_{1}}{\partial\vec{q}_{n}} \\ \vdots & & \ddots \end{bmatrix} ...$$

2 First note that $p(0)$ travels along an arc of the circle of radius $r = \Vert p \Vert \sin(\phi)$ centered at a point on the axis of $\omega$; and the velocity $\dot{p}$ is perpendicular to the arc of the circle; and (from the definition) $\omega = \dot{\theta} u$, where $u$ is a unit vector perpendicular to the plane of rotation. Now we try to relate ...

2 You want to use the product of exponentials to calculate the transformation of $\zeta_1$ and $\zeta_2$ for $\theta_1$ and $\theta_2$. To be more clear, using your notation of $g_{12}$: $$g_{12} = e^{\hat{\zeta_1} \theta_1} \cdot e^{\hat{\zeta_2} \theta_2}$$ where $\hat{\zeta_i}$ is the skew-symmetric matrix representation of ...
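The product-of-exponentials formula $g_{12} = e^{\hat{\zeta_1}\theta_1} \cdot e^{\hat{\zeta_2}\theta_2}$ in the last answer can be sketched numerically. The example below is my own (the `hat` helper, the planar 2R geometry, and all joint values are assumptions, not from the thread); it uses `scipy.linalg.expm` for the matrix exponential and the revolute-joint rule $v = -\omega \times q$ from the earlier answers:

```python
import numpy as np
from scipy.linalg import expm  # dense matrix exponential

def hat(zeta):
    """4x4 se(3) matrix of twist coordinates [v, w] (Murray/Li/Sastry ordering)."""
    v, w = zeta[:3], zeta[3:]
    A = np.zeros((4, 4))
    A[:3, :3] = np.array([[0, -w[2], w[1]],
                          [w[2], 0, -w[0]],
                          [-w[1], w[0], 0]])   # skew-symmetric w-hat
    A[:3, 3] = v
    return A

# Hypothetical planar 2R arm: joint 1 through the origin, joint 2 through
# (1, 0, 0), both revolute about z.  For a revolute joint, v = -w x q.
w = np.array([0.0, 0.0, 1.0])
zeta1 = np.concatenate([-np.cross(w, [0.0, 0.0, 0.0]), w])
zeta2 = np.concatenate([-np.cross(w, [1.0, 0.0, 0.0]), w])

theta1, theta2 = np.pi / 2, -np.pi / 2
g12 = expm(hat(zeta1) * theta1) @ expm(hat(zeta2) * theta2)
```

With $\theta_1 = -\theta_2$ the rotations about the two parallel z axes cancel, so the rotation block of `g12` comes out as the identity and only a net translation remains.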
2 Adding to Peter Corke's answer, there's also a Coursera course by Kevin Lynch which uses the Modern Robotics book as a reference and explains how to derive the screw-based Jacobian. The Jacobian can be either with respect to the "space frame" (the frame attached to the base of the manipulator) or the "body frame" (the frame attached to the end-effector). Here's a ...

2 You still haven't posted the (full) code that gives the results you've presented; when I run your snippet I don't get the results you posted. Instead, I get: [xdfb] = FDfb(sphere, xfb, [], [], []) xdfb = 0 -2.5000 0 0 0 -1.0000 -1.0000 0 0 0 0 5.0000 -14.8066 Here xdfb is ...

2 You're computing the spatial Jacobian, which relates joint velocities to spatial velocities at the origin. You instead want to compute the body Jacobian, which relates joint velocities to end-effector velocities expressed in the end-effector frame. So in your 2D RR example what you want to do is to compute the body Jacobian and then pre-multiply the matrix by $...

2 There are in fact two types of Jacobians, a geometric Jacobian and an analytical Jacobian. The intro to chapter 3 in the book Robotics: Modelling, Planning and Control by Bruno Siciliano, Lorenzo Sciavicco, Luigi Villani, and Giuseppe Oriolo says it well: ...differential kinematics is the relationship between the joint velocities and the corresponding end-...

2 Due to the way that frames are defined in the Modern Robotics book (and in this type of vector-field mechanics in general, such as Featherstone's), both the spatial frame and the body frame are defined as stationary inertial frames. This requires a bit of a different conceptual understanding than the more traditional moving frames that have been ...

1 See https://www.cis.upenn.edu/~cjtaylor/PUBLICATIONS/pdfs/TaylorTR94b.pdf. You can absolutely use "flat" Euclidean-space-based optimizers while also optimizing on the manifold, but I agree the default scipy solvers don't give you an easy way to do that.
Perhaps you can use pymanopt? See https://www.pymanopt.org/. Although I wouldn't be scared of ...

1 In these equations from Modern Robotics (by Park and Lynch), the fixed inertial frame $\{b\}$ is both the reference frame used to define all of the coordinate vectors and has its origin located at the centre of mass (COM) of the body. The body's angular velocity relative to $\{b\}$ is $\omega_b$, as you say, and is a property of the body which is dependent ...

1 1) There are many ways to express velocities. All of them are mathematical constructs to describe the same motion. They can have some minor advantages/disadvantages depending on the application. The one main disadvantage of an Euler-angle-based approach is the gimbal lock problem. An advantage of Euler-angle-based approaches is that they can be more ...

1 If I'm understanding you correctly, you're attempting to put the position of the joint in for the translational velocity component of the screw axis. What you actually want to put in there is the velocity of a point rigidly attached to the rotating part of the joint, but currently located at the origin of the base frame, $$v = \begin{bmatrix} \omega_{x} \\ \dots \end{bmatrix}$$ ...

1 maybe need some transformation from centers of mass to the joint frame? Isn't that what $A_i$ is? I don't have the book with me, but from your excerpt: "Let $A_i$ be the screw axis of joint $i$ defined in $\{i\}$", where it says at the top, frames $\{1\}$ to $\{N\}$ [are attached] to the centers of mass of links $\{1\}$ to $\{N\}$

1 Try looking for terms like robot calibration, robot kinematic calibration or kinematic calibration. Have a look at Chapter 6 of [1]. [1] Springer Handbook of Robotics, Eds: B. Siciliano, O. Khatib, 2016

1 According to the notation in Corke's book, it looks like that top-right 3x3 block that you're looking for is the translation from end effector frame E to base frame 0 represented in so(3). It looks like the trouble here comes from viewing the velocity in different frames.
Lynch's spatial twist (defined on page 99) differs from Corke's spatial velocity (...

1 Summarizing from Murray, Li, and Sastry (chapters 3 and 5), there are 3 related things:

- Twist: An element of se(3) (which is a bit like the derivative of an element of SE(3), which is the set of translations + rotations)
- Screw: A translation + rotation (i.e. an element of SE(3))
- Wrench: Generalized force (combination of linear force and torque)
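The relation $\dot{p} = \omega \times p$ quoted in one of the answers above is easy to check numerically. A minimal sketch (the angular velocity and the point are arbitrary values I picked, not from the thread) compares the cross product against a central finite difference of the rotation:

```python
import numpy as np

omega = np.array([0.0, 0.0, 2.0])   # angular velocity about z (rad/s)
p0 = np.array([1.0, 0.5, 0.3])      # point fixed in the rotating body

def rotate(p, t):
    """Position at time t of a point rotating about z at rate |omega|."""
    th = np.linalg.norm(omega) * t
    Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
    return Rz @ p

h = 1e-6
numeric = (rotate(p0, h) - rotate(p0, -h)) / (2 * h)  # finite-difference velocity
analytic = np.cross(omega, p0)                         # omega x p
```

The two velocity vectors agree to within the truncation error of the finite difference, which is the point the quoted answer makes about circular motion.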
https://itectec.com/matlab/matlab-make-a-variable/
# MATLAB: Make a variable

I have to make a variable that contains the format specifier %s as many times as the value of the variable x. This means if x = 15 then I need to make B = [%s %s %s %s %s %s %s %s %s %s %s %s %s %s %s] while also keeping the spaces between them, so I can paste it into textscan. Is there a way to produce B? I could make B as 2 columns, one for % and one for s, but when I transpose it the result is completely wrong: %%%%%%%%%%%%%% ssssssssssssss Any ideas? Thank you very much.

B = repmat('%s ', 1, x);
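The accepted repmat one-liner works because repmat repeats the whole three-character string '%s ' x times. The same idea in Python, in case it helps to see it outside MATLAB (a sketch; the function name is mine):

```python
def make_format(x):
    """Build '%s %s ... %s ' with x format specifiers, like repmat('%s ', 1, x)."""
    return '%s ' * x

B = make_format(15)   # 15 specifiers, space-separated, for the x = 15 case
```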
https://www.gamedev.net/forums/topic/228596-manipulating-stringsc/
#### Archived

This topic is now archived and is closed to further replies.

# Manipulating strings (C++)

This topic is 5223 days old, which is more than the 365-day threshold we allow for new replies. Please post a new topic.

## Recommended Posts

---

quote: From the MSDN

nFileOffset: Specifies the zero-based offset, in TCHARs, from the beginning of the path to the file name in the string pointed to by lpstrFile. For the ANSI version, this is the number of bytes; for the Unicode version, this is the number of characters. For example, if lpstrFile points to the following string, "c:\dir1\dir2\file.ext", this member contains the value 13 to indicate the offset of the "file.ext" string. If the user selects more than one file, nFileOffset is the offset to the first file name.

nFileExtension: Specifies the zero-based offset, in TCHARs, from the beginning of the path to the file name extension in the string pointed to by lpstrFile. For the ANSI version, this is the number of bytes; for the Unicode version, this is the number of characters. For example, if lpstrFile points to the following string, "c:\dir1\dir2\file.ext", this member contains the value 18. If the user did not type an extension and lpstrDefExt is NULL, this member specifies an offset to the terminating NULL character. If the user typed "." as the last character in the file name, this member specifies zero.

"I forgot I had the Scroll Lock key until a few weeks ago when some asshole program used it. It even used it right" - Conner McCloud

---

Thanks, though using nFileOffset doesn't seem to be helping with manipulating the characters of the filename string much. Loading multiple files is working, but that isn't what I'm trying to do.
E.g.: if strFilename is tex1.bmp, I can shorten it to just tex by doing this:

ofn.lpstrFileTitle = strFilename;
ofn.nMaxFileTitle = sizeof(strFilename -1);

so after the function for loading the 2nd file has been called, the string "tex" is displayed in my message box. Though with longer filenames (used for the texture), e.g. Texture1.bmp, I end up with something like "Texture1." as the string??? All I want to do is, when I choose the file from the file requester dialogue box, say Texture1.bmp, to then manipulate the filename... to get just the string "Texture1" and display it in a dialogue box. That will be everything I need to do what it is I'm doing. If anyone can help with this, I'll be very grateful. Thanks.

---

Given the string "C:\path\to\file.ext", the substring "file" will begin at ofn.lpstrFile[ofn.nFileOffset] and end at ofn.lpstrFile[ofn.nFileExtension - 1]. What problems are you having with nFileOffset?

---

> What problems are you having with nFileOffset?

Just can't end up with the correct string in my message box. What I end up passing to MessageBox() is always something like Tex, Te, Text... or the full Texture.bmp, not just the filename minus the .ext and the path. If anyone can show the correct way to do it (in some code), it will be a great help. Cheers.

---

Did you try the code I posted above?

---

Invader X, cheers for taking the time to reply to these lame problems I'm having, I do appreciate it man.
The thing is, no matter what I try with the file name (I mean the filename from the file dialogue box) and nFileOffset, it still just passes the same (or similar) strings, but not what I want (just taking off the extension and the path). I really don't know what else to try (losing patience with it). If you can make sense of my above code (posted in frustration), or are willing to help me further, I'll post the code that I'm using. I'd really appreciate it, because it's doing my head in, something that would seem so simple. Cheers.

---

Have you tried something like this:

// after GetOpenFileName()
char real_filename[MAX_PATH] = {0};
strncpy(real_filename, filename + ofn.nFileOffset, ofn.nFileOffset - ofn.nFileExtension - 1);

That should copy only the file's name, without the path and without the extension, into the variable real_filename.

---

Ok, this is what is happening in my code. I'm actually loading a map (not a texture), and straight after the map has loaded, a data file needs to be loaded (with the same name as the map, minus the path + extension).

after GetOpenFileName()...

MessageBox(hwnd, filename, NULL, MB_OK | MB_ICONWARNING);

This yields the string, so I know that the full filename is correct at this part in the code.

next...

char real_filename[MAX_PATH] = {0};
strncpy(real_filename, filename + ofn.nFileOffset, ofn.nFileOffset - ofn.nFileExtension - 1);

MessageBox(hwnd, real_filename, NULL, MB_OK | MB_ICONWARNING);

the app now just crashes... any further help will be appreciated. Cheers.

---

Err... sorry, I had the formula backwards.
It should be this:

strncpy(real_filename, filename + ofn.nFileOffset, ofn.nFileExtension - 1 - ofn.nFileOffset);

If the above doesn't work, can you post the values of ofn.nFileOffset and ofn.nFileExtension for the path string in your last post?
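The corrected length argument, nFileExtension - 1 - nFileOffset, can be sanity-checked against the MSDN example path quoted earlier, where nFileOffset is 13 and nFileExtension is 18. A Python sketch of the same arithmetic (the variable names are mine, and the offsets are recomputed from the string rather than taken from a real OPENFILENAME struct):

```python
path = r"c:\dir1\dir2\file.ext"

# Mimic ofn.nFileOffset and ofn.nFileExtension for this path.
n_file_offset = path.rfind("\\") + 1     # offset of "file.ext" -> 13
n_file_extension = path.rfind(".") + 1   # offset of "ext"      -> 18

# strncpy(real_filename, filename + nFileOffset, nFileExtension - 1 - nFileOffset)
length = n_file_extension - 1 - n_file_offset
real_filename = path[n_file_offset : n_file_offset + length]   # "file"
```

The earlier (backwards) formula produced a negative length, which is why the strncpy call crashed: strncpy's size parameter is unsigned, so a negative value becomes a huge count.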
https://www.gradesaver.com/textbooks/math/algebra/introductory-algebra-for-college-students-7th-edition/chapter-8-section-8-2-multiplying-and-dividing-radicals-exercise-set-page-585/89
## Introductory Algebra for College Students (7th Edition) $5x\sqrt{2x}$ Using $\sqrt{\dfrac{x}{y}}=\dfrac{\sqrt{x}}{\sqrt{y}}$, the quotient rule of radicals, the given expression, $\dfrac{\sqrt{150x^4}}{\sqrt{3x}} ,$ simplifies to \begin{array}{l}\require{cancel} \sqrt{\dfrac{150x^4}{3x}} \\\\= \sqrt{\dfrac{50x^3\cdot\cancel{3x}}{\cancel{3x}}} \\\\= \sqrt{50x^3} \\\\= \sqrt{25x^2\cdot2x} \\\\= \sqrt{(5x)^2\cdot2x} \\\\= 5x\sqrt{2x} .\end{array} Note that the variables are assumed to represent positive real numbers.
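The result can be double-checked symbolically; a quick sketch with SymPy (my own check, not part of the textbook answer) under the stated assumption that the variable is positive:

```python
import sympy as sp

x = sp.symbols('x', positive=True)          # positivity assumption from the note
expr = sp.sqrt(150 * x**4) / sp.sqrt(3 * x)
simplified = sp.simplify(expr)              # equivalent to 5*x*sqrt(2*x)
```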
http://blog.symbolab.com/2015_11_01_archive.html
## Tuesday, November 10, 2015

### Advanced Math Solutions – Laplace Calculator, Laplace Transform

In previous posts, we talked about the four types of ODE - linear first order, separable, Bernoulli, and exact. In today's post, we will learn about Laplace transforms: how to compute Laplace transforms and inverse Laplace transforms. I'll admit I was afraid of Laplace transforms before I learned them. The new symbols and the messiness of the problem really intimidated me. However, once I learned what Laplace transforms are and how to do these types of problems, I began to realize that they aren't so scary. With the assistance of a table and some formulas, anyone can do Laplace transforms.

What is a Laplace transform? Laplace transforms can be used to solve differential equations. They turn differential equations into algebraic problems.

Definition: Suppose $f(t)$ is a piecewise continuous function, a function made up of a finite number of continuous pieces. The Laplace transform of $f(t)$ is denoted $L\left\{f(t)\right\}$ and defined as: $$L\left\{f(t)\right\}=\int_{0}^{\infty}e^{-st}f(t)dt$$

If you see $L\left\{f(t)\right\}=\int_{-\infty}^{\infty}e^{-st}f(t)dt$, then you can assume that for $t\lt0,\quad f(t)=0$, and then you can use the original definition of the Laplace transform.

Now, we will get into how to compute Laplace transforms: Laplace transforms can be computed using a table and the linearity property, "Given $f(t)$ and $g(t)$, then $L\left\{af(t)+bg(t)\right\}=aF(s)+bG(s)$." The statement means that after you've taken the transform of the individual functions, you can add back any constants and add or subtract the results. Look at the table and see what functions you can transform. Algebraic manipulation may be required. The table will be your savior when it comes to these problems. This problem is very simple. It requires looking at the different functions, finding the corresponding transforms in the table, and then adding any constants and adding and subtracting the results together.
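In place of a printed table, a computer algebra system reproduces the same entries and the linearity property. A small SymPy sketch (the example functions $e^{2t}$ and $3t$ are my own picks, not the ones from the post):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Table entry: L{e^{2t}} = 1/(s - 2)
F_exp = sp.laplace_transform(sp.exp(2 * t), t, s, noconds=True)

# Linearity: L{3t + e^{2t}} = 3/s^2 + 1/(s - 2)
F_lin = sp.laplace_transform(3 * t + sp.exp(2 * t), t, s, noconds=True)
```

Passing noconds=True drops the convergence conditions that SymPy would otherwise return alongside the transform.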
Let's get into inverse Laplace transforms!

What is an inverse Laplace transform? An inverse Laplace transform is when we are given a transform, $F(s)$, and asked what function(s) we had originally.

Definition: $f(t)=L^{-1}\left\{F(s)\right\}$

How to compute inverse Laplace transforms: Just like Laplace transforms have a linearity property, so do inverse Laplace transforms: "Given the two Laplace transforms $F(s)$ and $G(s)$, then $L^{-1}\left\{aF(s)+bG(s)\right\}=aL^{-1}\left\{F(s)\right\}+bL^{-1}\left\{G(s)\right\}$." When trying to compute inverse Laplace transforms, it is important to first look at the denominator and then try to identify the transform based on that. If you can't figure it out just based on looking at the denominator, look at the numerator. Sometimes you may have to manipulate the numerator to get it into the correct form needed. By looking at the table, we can see that the denominator is almost the same as the denominator of the transform for $\sqrt{t}$. With some algebraic manipulation of the numerator, we are able to figure out the inverse Laplace transform.

See, Laplace transforms aren't that hard after all. They can get a little messy and can take a long time to solve at first, but with more practice, the better you'll get. Make sure you have that table handy!

Until next time,

Leah

## Tuesday, November 3, 2015

### Advanced Math Solutions – Ordinary Differential Equations Calculator, Exact Differential Equations

In the previous posts, we have covered three types of ordinary differential equations (ODEs). We have now reached the last type of ODE. In this post, we will talk about exact differential equations.

What is an exact differential equation? There must be a 0 on the right side of the equation, and $M(x,y)dx$ and $N(x,y)dy$ must be separated by a +.

Steps to solve exact differential equations:

1. Verify that $\frac{\partial M(x,y)}{\partial y}=\frac{\partial N(x,y)}{\partial x}$
   - Find $M(x,y)$ and $N(x,y)$
2.
Integrate $\int M(x,y)dx$ or $\int N(x,y)dy$
   - This will help us find $\Psi(x,y)$
3. Replace $c$ with $\eta(x)$ if you integrated $N(x,y)$ with respect to $y$, or $\eta(y)$ if you integrated $M(x,y)$ with respect to $x$
4. Compute $\eta(x)$ or $\eta(y)$
5. Solve to get the implicit or explicit solution, depending on which is preferred
   - Don't forget to substitute $\eta(x)$ or $\eta(y)$

Exact differential equations can be tricky. We will solve the first example step by step to help you better understand how to solve exact differential equations.

1. Verify that $\frac{\partial M(x,y)}{\partial y}=\frac{\partial N(x,y)}{\partial x}$, with $M(x,y)=2xy-9x^2$ and $N(x,y)=2y+x^2+1$
2. Integrate $\int N(x,y)dy$
3. Replace $c$ with $\eta(x)$
4. Compute $\eta(x)$: we took the derivative with respect to $x$ of $\Psi(x,y)$, which is equal to $y+x^2 y+y^2+\eta(x)$. We then compared the derivative to $M(x,y)$, the equation we didn't integrate. We then integrated both sides to solve for $\eta(x)$.
5. Find the implicit equation
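The worked example can be verified end to end with SymPy, which implements the same exactness test and solution procedure (this check is mine, not part of the original post):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# The example ODE: (2xy - 9x^2) + (2y + x^2 + 1) y' = 0
M = 2 * x * y(x) - 9 * x**2
N = 2 * y(x) + x**2 + 1
ode = sp.Eq(M + N * y(x).diff(x), 0)

solutions = sp.dsolve(ode)                 # SymPy recognizes it as exact
solutions = solutions if isinstance(solutions, list) else [solutions]
```

checkodesol can then confirm that each returned solution actually satisfies the ODE, which is a handy habit when working these by hand as well.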
http://gmatclub.com/forum/if-40-people-get-the-chance-to-pick-a-card-from-a-canister-97015.html
# If 40 people get the chance to pick a card from a canister

Intern, Joined: 10 Jul 2010
If 40 people get the chance to pick a card from a canister [#permalink] 10 Jul 2010, 05:08

If 40 people get the chance to pick a card from a canister that contains 5 free passes to an amusement park mixed in with 35 blank cards, what is the probability that the 40th person who picks will win?

Manager, Joined: 03 May 2010
Re: probability of picking last [#permalink] 10 Jul 2010, 07:17

35 - lose, 5 - win. Pick 5 people to win => 40C5 = total number of outcomes. Favorable outcome is: first pick the 40th person, then pick any other 4 => 1*40C4. So, probability = 40C5 / 40C4 = 40!*36!*4! / (35!*5!*40!)
= 36/(35*5) = 36/175

Math Expert, Joined: 02 Sep 2009
Re: probability of picking last [#permalink] 10 Jul 2010, 07:54

mpevans wrote: if 40 people get the chance to pick a card from a canister that contains 5 free passes to an amusement park mixed in with 35 blank cards, what is the probability that the 40th person who picks will win?

I guess we have the situation where 40 people stand in a row and pick the cards one after another, checking them in the end. We are asked: what is the probability that the 40th person wins the pass? If so, then the probability of picking the pass will be the same for all 40 people - $$\frac{5}{40}$$ (the initial probability of picking the pass, $$\frac{5}{40}$$, will be the same for any person in the row).

AbhayPrasanna wrote: 35 - lose, 5 - win ... So, probability = 40C5 / 40C4 = 40!*36!*4! / (35!*5!*40!) = 36/(35*5) = 36/175

This way is also valid and must give the same result. The problem is that you calculated the favorable outcomes incorrectly: when you pick the 40th person to win, you have only 39 people left to pick 4 from, so the # of favorable outcomes is $$1*C^4_{39}$$. Also $$probability=\frac{\# \ of \ favorable \ outcomes}{total \ \# \ of \ outcomes}$$ and you wrote it vice versa. So $$P=\frac{1*C^4_{39}}{C^5_{40}}=\frac{36*37*38*39}{4!}*\frac{5!}{36*37*38*39*40}=\frac{5}{40}$$. Hope it helps.

Manager, Joined: 03 Jun 2010
Re: probability of picking last [#permalink] 10 Jul 2010, 21:13

I made the same: 4C39/5C40, where 5C40 is the total # of outcomes and 4C39 means that the 4 remaining winning tickets were taken out by the other 39 persons.
Manager, Joined: 11 Jul 2010
Re: probability of picking last [#permalink] 11 Jul 2010, 05:03

The number of passes here is 40 (35 + 5) and the number of people is also 40. How will this problem change if there are 10 passes available and 45 blank cards mixed in, and there are 40 people? Will the probability of the 40th person picking a pass be 10/55 = 2/11? Can someone explain the favorable outcomes/total outcomes set-up using the combinations formula? Thanks.

Math Expert, Joined: 02 Sep 2009
Re: probability of picking last [#permalink] 11 Jul 2010, 06:41

gmat1011 wrote: How will this problem change if there are 10 passes available and 45 blank cards mixed in and there are 40 people? Will the probability of the 40th person picking a pass be 10/55 = 2/11?

Yes, if there are 10 passes and 45 blank cards and only 40 people are to pick the cards, the probability that the 40th person will pick a pass will still be 10/55. Consider another example: a deck of 52 cards. If we put them in a line, what is the probability that the 40th card will be an ace? As there are 4 aces, the probability that any particular card in the line is an ace is 4/52. Hope it helps.

Manager, Joined: 26 Mar 2010
Re: probability of picking last [#permalink] 12 Jul 2010, 10:41

Bunuel wrote: mpevans wrote: if 40 people get the chance to pick a card from a canister that contains 5 free passes to an amusement park mixed in with 35 blank cards what is the probability that the 40th person who picks will win? I guess we have the situation when 40 people standing in a row and picking the cards one after another and check them in the end.
We are asked what is the probability that the 40th person wins the pass? If so, then the probability of picking the pass will be the same for all 40 people - $$\frac{5}{40}$$. [...]

Hi... thanks for the explanation, Bunuel, but I do not understand how the probability of selecting a free pass can be 5/40 for everyone in the case where we assume the people are picking the cards and keeping them. Won't it keep reducing, e.g. to 4/39 after the first successful pick of a free card? Please explain; it seems I am missing some logic somewhere.

Math Expert, Joined: 02 Sep 2009
Re: probability of picking last [#permalink] 12 Jul 2010, 10:54

utin wrote: I do not understand how the probability of selecting a free pass can be 5/40 for everyone... it seems I am missing some logic somewhere.

Consider this: put 40 cards on the table and 40 people against them.
What is the probability that the card which is against the 40th person is the winning one? It is 5/40, the same probability as for the first, second, ..., or any person. When we pick the cards from a canister without knowing the results until the end, it's basically the same scenario.

Intern
Re: If 40 people get the chance to pick a card from a canister [#permalink] 21 Dec 2012, 22:35

Consider 40 places in which to arrange the 40 cards, with 35 blanks and 5 passes: 40!/(35!*5!) arrangements in total. The favorable outcome is when the 40th place contains a pass, so we have 39 places in which to arrange 35 blanks and 4 passes: 39!/(4!*35!) arrangements.

The probability is (39!*35!*5!)/(40!*35!*4!) = 1/8.

Intern
Re: If 40 people get the chance to pick a card from a canister [#permalink] 22 Dec 2012, 05:43

geneticsgene wrote:
Consider 40 places in which to arrange the 40 cards ... = 1/8

Hello! I am unfamiliar with ! in math. What does it mean?

Math Expert
Re: If 40 people get the chance to pick a card from a canister [#permalink] 22 Dec 2012, 05:52
Expert's post

Hiho wrote:
Hello! I am unfamiliar with ! in math. What does it mean?

The factorial of a non-negative integer $$n$$, denoted by $$n!$$, is the product of all positive integers less than or equal to $$n$$. For example: $$4!=1*2*3*4=24$$.
For more, check here: everything-about-factorials-on-the-gmat-85592.html

Hope it helps.

Intern
Re: If 40 people get the chance to pick a card from a canister [#permalink] 22 Dec 2012, 06:22

Bunuel wrote:
The factorial of a non-negative integer $$n$$, denoted by $$n!$$, is the product of all positive integers less than or equal to $$n$$. For example: $$4!=1*2*3*4=24$$.

Yes, it does. Thanks. I understand the concept, but not the use of it in this particular case. Is it tested on the GMAT, or is it just additional help on some questions for those who are familiar with it?

Intern
Re: probability of picking last [#permalink] 22 Dec 2012, 10:38

Hi, we have 40 cards, with 5 valid passes and the rest junk, and we have 40 people. The probability of the 1st person picking junk is 35/40, and then he doesn't replace the card, right? He takes it with him, so now we are left with 39 cards. The probability of the 2nd person taking a junk card is 34/39, right? So won't it be 35/40 x 34/39 x 33/38 x ...... 1/5? What am I missing here, please?

Bunuel wrote:
mpevans wrote:
If 40 people get the chance to pick a card from a canister that contains 5 free passes to an amusement park mixed in with 35 blank cards, what is the probability that the 40th person who picks will win?

I guess we have the situation where 40 people stand in a row, pick the cards one after another, and check them at the end. We are asked: what is the probability that the 40th person wins the pass?
If so, the probability of picking the pass will be the same for all 40 people, $$\frac{5}{40}$$ (the initial probability of picking the pass is the same for any person in the row).

Hope it helps.

Math Expert
Re: If 40 people get the chance to pick a card from a canister [#permalink] 23 Dec 2012, 05:18
Expert's post

Hiho wrote:
Yes, it does. Thanks. I understand the concept, but not the use of it in this particular case. Is it tested on the GMAT, or is it just additional help on some questions for those who are familiar with it?

It is tested. Check here: math-combinatorics-87345.html and here: math-probability-87244.html

Math Expert
Re: probability of picking last [#permalink] 23 Dec 2012, 05:22
Expert's post

SpotlessMind wrote:
Hi, we have 40 cards, with 5 valid passes and the rest junk, and we have 40 people. The probability of the 1st person picking junk is 35/40, and then he doesn't replace the card; he takes it with him, so now we are left with 39 cards. The probability of the 2nd person taking a junk card is 34/39, right?
So won't it be 35/40 x 34/39 x 33/38 x ...... 1/5? What am I missing here, please?

You are finding the probability that the first 34 people will not win and the 35th person wins, which is clearly not what we were asked to get.
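Not part of the original thread, but both counting arguments above are easy to check mechanically in Python (`math.comb` needs Python 3.8+), along with a Monte Carlo run showing the 40th picker's chance staying near 5/40 = 0.125 even though earlier picks remove cards:

```python
import random
from fractions import Fraction
from math import comb

# Exact check of the counting argument: C(39,4) favorable over C(40,5) total.
assert Fraction(comb(39, 4), comb(40, 5)) == Fraction(1, 8)  # = 5/40

# Monte Carlo check: shuffle 5 passes among 40 cards, look at the 40th draw.
rng = random.Random(0)
trials = 100_000
wins = 0
for _ in range(trials):
    cards = [1] * 5 + [0] * 35
    rng.shuffle(cards)
    wins += cards[39]

print(wins / trials)  # close to 0.125
```

The shuffle makes the "everyone picks and keeps a card" process explicit: the 40th card in a random arrangement is a pass exactly 1/8 of the time.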
http://peeterjoot.com/2020/12/14/
[If mathjax doesn’t display properly for you, click here for a PDF of this post]

This post logically follows both of the following:

The PDF linked above contains all the content from this post plus (1.) above [to be edited later into a more logical sequence.]

## More examples.

Here are a few additional examples of reciprocal frame calculations.

## Problem: Unidirectional arbitrary functional dependence.

Let
\label{eqn:reciprocal:2540}
x = a f(u),
where $$a$$ is a constant vector and $$f(u)$$ is some arbitrary differentiable function with a non-zero derivative in the region of interest. Here we have just a single tangent space direction (a line in spacetime) with tangent vector
\label{eqn:reciprocal:2400}
\Bx_u = a \PD{u}{f} = a f_u,
so we see that the tangent space vectors are just rescaled values of the direction vector $$a$$. This is a simple enough parameterization that we can compute the reciprocal frame vector explicitly using the gradient. We expect that $$\Bx^u = 1/\Bx_u$$, and find
\label{eqn:reciprocal:2420}
\inv{a} \cdot x = f(u),
but for constant $$a$$, we know that $$\grad \lr{ a \cdot x } = a$$, so taking gradients of both sides we find
\label{eqn:reciprocal:2440}
\inv{a} = f_u \grad u,
so the reciprocal vector is
\label{eqn:reciprocal:2460}
\Bx^u = \grad u = \inv{a f_u},
as expected.

## Problem: Linear two variable parameterization.

Let $$x = a u + b v$$, where $$x \wedge a \wedge b = 0$$ represents a spacetime plane (also the tangent space.) Find the curvilinear coordinates and their reciprocals.

The frame vectors are easy to compute, as they are just
\label{eqn:reciprocal:1960}
\begin{aligned}
\Bx_u &= \PD{u}{x} = a \\
\Bx_v &= \PD{v}{x} = b.
\end{aligned}

This is an example of a parametric equation that we can easily invert, as we have
\label{eqn:reciprocal:1980}
\begin{aligned}
x \wedge a &= -v \lr{ a \wedge b } \\
x \wedge b &= u \lr{ a \wedge b },
\end{aligned}
so
\label{eqn:reciprocal:2000}
\begin{aligned}
u
&= \inv{ a \wedge b } \cdot \lr{ x \wedge b } \\
&= \inv{ \lr{a \wedge b}^2 } \lr{ a \wedge b } \cdot \lr{ x \wedge b } \\
&= \frac{ \lr{b \cdot x} \lr{ a \cdot b } - \lr{a \cdot x} \lr{ b \cdot b } }{ \lr{a \wedge b}^2 }
\end{aligned}
\label{eqn:reciprocal:2020}
\begin{aligned}
v
&= -\inv{ a \wedge b } \cdot \lr{ x \wedge a } \\
&= -\inv{ \lr{a \wedge b}^2 } \lr{ a \wedge b } \cdot \lr{ x \wedge a } \\
&= -\frac{ \lr{b \cdot x} \lr{ a \cdot a } - \lr{a \cdot x} \lr{ a \cdot b } }{ \lr{a \wedge b}^2 }
\end{aligned}

Recall that $$\grad \lr{ a \cdot x} = a$$, if $$a$$ is a constant, so our gradients are just
\label{eqn:reciprocal:2040}
\begin{aligned}
\grad u
&= \frac{ b \lr{ a \cdot b } - a \lr{ b \cdot b } }{ \lr{a \wedge b}^2 } \\
&= b \cdot \inv{ a \wedge b },
\end{aligned}
and
\label{eqn:reciprocal:2060}
\begin{aligned}
\grad v
&= -\frac{ b \lr{ a \cdot a } - a \lr{ a \cdot b } }{ \lr{a \wedge b}^2 } \\
&= -a \cdot \inv{ a \wedge b }.
\end{aligned}

Expressed in terms of the frame vectors, this is just
\label{eqn:reciprocal:2080}
\begin{aligned}
\Bx^u &= \Bx_v \cdot \inv{ \Bx_u \wedge \Bx_v } \\
\Bx^v &= -\Bx_u \cdot \inv{ \Bx_u \wedge \Bx_v },
\end{aligned}
so we were able to show, for this special two parameter linear case, that the explicit evaluation of the gradients has the exact structure that we intuited the reciprocals must have, provided they are constrained to the spacetime plane $$a \wedge b$$. It is interesting to observe how this structure falls out of the linear system solution so directly. Also note that these reciprocals are not defined at the origin of the $$(u,v)$$ parameter space.

## Problem: Quadratic two variable parameterization.
Now consider a variation of the previous problem, with $$x = a u^2 + b v^2$$. Find the curvilinear coordinates and their reciprocals.

\label{eqn:reciprocal:2100}
\begin{aligned}
\Bx_u &= \PD{u}{x} = 2 u a \\
\Bx_v &= \PD{v}{x} = 2 v b.
\end{aligned}

Our tangent space is still the $$a \wedge b$$ plane (as is the surface itself), but the spacing of the cells starts getting wider in proportion to $$u, v$$. Utilizing the work from the previous problem, we have
\label{eqn:reciprocal:2120}
\begin{aligned}
\grad u^2 &= b \cdot \inv{ a \wedge b } \\
\grad v^2 &= -a \cdot \inv{ a \wedge b }.
\end{aligned}

A bit of rearrangement can show that this is equivalent to the reciprocal frame identities. This is a second demonstration that the gradient and the algebraic formulations for the reciprocals match, at least for these special cases of linear non-coupled parameterizations.

## Problem: Reciprocal frame for generalized cylindrical parameterization.

Let the vector parameterization be $$x(\rho,\theta) = \rho e^{-i\theta/2} x(\rho_0, \theta_0) e^{i \theta/2}$$, where $$i^2 = \pm 1$$ is a unit bivector ($$+1$$ for a boost, and $$-1$$ for a rotation), and where $$\theta, \rho$$ are scalars. Find the tangent space vectors and their reciprocals.

fig. 1. “Cylindrical” boost parameterization.

Note that this is a cylindrical parameterization for the rotation case, and traces out hyperbolic regions for the boost case. The boost case is illustrated in fig. 1, where hyperbolas in the light cone are found for boosts of $$\gamma_0$$ with various values of $$\rho$$, and the spacelike hyperbolas are boosts of $$\gamma_1$$, again for various values of $$\rho$$.

The tangent space vectors are
\label{eqn:reciprocal:2480}
\Bx_\rho = \frac{x}{\rho},
and
\label{eqn:reciprocal:2500}
\begin{aligned}
\Bx_\theta
&= -\frac{i}{2} x + x \frac{i}{2} \\
&= x \cdot i.
\end{aligned}

Recall that $$x \cdot i$$ lies perpendicular to $$x$$ (in the plane $$i$$), as illustrated in fig. 2.
This means that $$\Bx_\rho$$ and $$\Bx_\theta$$ are orthogonal, so we can find the reciprocal vectors by just inverting them \label{eqn:reciprocal:2520} \begin{aligned} \Bx^\rho &= \frac{\rho}{x} \\ \Bx^\theta &= \frac{1}{x \cdot i}. \end{aligned} fig. 2. Projection and rejection geometry. ## Parameterization of a general linear transformation. Given $$N$$ parameters $$u^0, u^1, \cdots u^{N-1}$$, a general linear transformation from the parameter space to the vector space has the form \label{eqn:reciprocal:2160} x = {a^\alpha}_\beta \gamma_\alpha u^\beta, where $$\beta \in [0, \cdots, N-1]$$ and $$\alpha \in [0,3]$$. For such a general transformation, observe that the curvilinear basis vectors are \label{eqn:reciprocal:2180} \begin{aligned} \Bx_\mu &= \PD{u^\mu}{x} \\ &= \PD{u^\mu}{} {a^\alpha}_\beta \gamma_\alpha u^\beta \\ &= {a^\alpha}_\mu \gamma_\alpha. \end{aligned} We find an interpretation of $${a^\alpha}_\mu$$ by dotting $$\Bx_\mu$$ with the reciprocal frame vectors of the standard basis \label{eqn:reciprocal:2200} \begin{aligned} \Bx_\mu \cdot \gamma^\nu &= {a^\alpha}_\mu \lr{ \gamma_\alpha \cdot \gamma^\nu } \\ &= {a^\nu}_\mu, \end{aligned} so \label{eqn:reciprocal:2220} x = \Bx_\mu u^\mu. We are able to reinterpret \ref{eqn:reciprocal:2160} as a contraction of the tangent space vectors with the parameters, scaling and summing these direction vectors to characterize all the points in the tangent plane. ## Theorem 1.1: Projecting onto the tangent space. Let $$T$$ represent the tangent space. The projection of a vector onto the tangent space has the form \label{eqn:reciprocal:2560} \textrm{Proj}_{\textrm{T}} y = \lr{ y \cdot \Bx^\mu } \Bx_\mu = \lr{ y \cdot \Bx_\mu } \Bx^\mu. ### Start proof: Let’s designate $$a$$ as the portion of the vector $$y$$ that lies outside of the tangent space \label{eqn:reciprocal:2260} y = y^\mu \Bx_\mu + a. If we knew the coordinates $$y^\mu$$, we would have a recipe for the projection. 
Algebraically, requiring that $$a$$ lies outside of the tangent space is equivalent to stating $$a \cdot \Bx_\mu = a \cdot \Bx^\mu = 0$$. We use that fact, and then take dot products
\label{eqn:reciprocal:2280}
\begin{aligned}
y \cdot \Bx^\nu
&= \lr{ y^\mu \Bx_\mu + a } \cdot \Bx^\nu \\
&= y^\nu,
\end{aligned}
so
\label{eqn:reciprocal:2300}
y = \lr{ y \cdot \Bx^\mu } \Bx_\mu + a.
Similarly, the tangent space projection can be expressed as a linear combination of reciprocal basis elements
\label{eqn:reciprocal:2320}
y = y_\mu \Bx^\mu + a.
Dotting with $$\Bx_\mu$$, we have
\label{eqn:reciprocal:2340}
\begin{aligned}
y \cdot \Bx_\mu
&= \lr{ y_\alpha \Bx^\alpha + a } \cdot \Bx_\mu \\
&= y_\mu,
\end{aligned}
so
\label{eqn:reciprocal:2360}
y = \lr{ y \cdot \Bx_\mu } \Bx^\mu + a.
We find the two stated ways of computing the projection.

Observe that, for the special case that all of $$\setlr{ \Bx_\mu }$$ are orthogonal, the equivalence of these two projection methods follows directly, since
\label{eqn:reciprocal:2380}
\begin{aligned}
\lr{ y \cdot \Bx^\mu } \Bx_\mu
&= \lr{ y \cdot \inv{\Bx_\mu} } \inv{\Bx^\mu} \\
&= \lr{ y \cdot \frac{\Bx_\mu}{\lr{\Bx_\mu}^2 } } \frac{\Bx^\mu}{\lr{\Bx^\mu}^2} \\
&= \lr{ y \cdot \Bx_\mu } \Bx^\mu.
\end{aligned}
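As a numeric aside (my addition, not from the post): with a Euclidean metric, the reciprocal frame and the projection of theorem 1.1 can be checked with plain linear algebra, since the reciprocals are recovered by inverting the Gram matrix $$g_{\mu\nu} = \Bx_\mu \cdot \Bx_\nu$$. The vectors below are hypothetical values chosen only for illustration; in the spacetime setting of the post, the metric would enter the dot products.

```python
import numpy as np

# Frame vectors x_u, x_v spanning a plane in R^3 (Euclidean metric assumed).
x_u = np.array([1.0, 2.0, 0.0])
x_v = np.array([0.5, -1.0, 1.0])
X = np.vstack([x_u, x_v])            # rows are the frame vectors x_mu

G = X @ X.T                          # Gram matrix g_{mu nu} = x_mu . x_nu
R = np.linalg.inv(G) @ X             # rows are the reciprocal vectors x^mu

# Defining property of the reciprocal frame: x^mu . x_nu = delta^mu_nu
assert np.allclose(R @ X.T, np.eye(2))

# Theorem 1.1: both forms of the projection agree, and the residual
# y - Proj y is orthogonal to the tangent plane.
y = np.array([3.0, 1.0, 2.0])
proj1 = (y @ R.T) @ X                # (y . x^mu) x_mu
proj2 = (y @ X.T) @ R                # (y . x_mu) x^mu
assert np.allclose(proj1, proj2)
assert np.allclose(X @ (y - proj1), 0.0)
```

Both expressions reduce to the same projector $$X^\T G^{-1} X$$, which is why the two stated forms of the projection must agree.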
http://www.leancrew.com/all-this/2011/11/the-siren-song-of-vim/
# The siren song of Vim The internets are conspiring against me. First, I saw this 20th anniversary appreciation of Vim at Ars Technica. Then came this short post promising me that switching to Vim would be quick, painless, and even fun. Finally, today I read Brent Simmons’ post about navigating in various Mac text editors—he compares them all to Vim, the gold standard. Worse than Brent’s post itself is his link to Bram Moolenaar’s “Seven habits of effective text editing,” a beguiling essay that first lured me to the treacherous coast of Vim a decade ago. I know better than to try Vim again. I wrote a blog post about how I know better than to try Vim again. I tend to rewrite sentences as I’m typing, stopping in the middle to go back and reset the tense or flip clauses around. It’s rare that I can type an entire paragraph without stopping to edit it. The need to keep shifting in vi from command to insert mode makes it a very clumsy editor for this type of writer. I’ve tried to change my writing habits. I’ve tried to turn my brain’s internal editor off and just let the words come out, trusting myself to fix them later on. It would make writing much easier, regardless of the text editor I use. But I can’t do it. One of the few advantages of reaching middle age is coming to recognize your strengths and weaknesses and knowing what you can change and what you can’t. This is something I can’t. So why am I launching MacVim? Why am I looking at its cartoonish toolbar and thinking I’m sure there’s a setting to hide that instead of quitting in horror? Why am I looking at its tiny default font and lack of line numbers and thinking of .gvimrc instead of :wq? Why am I thinking about how wizardly I’d be in Vim by now if only I’d stuck with it ten years ago. You know why as well as I do. 
It’s the same reason we look for a better calendar app (I’m back to using the iPhone’s built-in calendar), a better pen (still using the Jetstream), a better to-do list manager (I’m back with TaskPaper). It’s that nagging sense that there’s a better, faster, smarter way to do things. And usually there is. But you have to know your limits. Experience tells me that the scripts and TextExpander snippets I create usually do save me time or, at the very least, frustration. Experience also tells me that despite Vim’s wonderful navigation and editing commands—and make no mistake, they are wonderful—modal editing isn’t for me. [This post written in my comfortable old friend TextMate.]

## 5 Responses to “The siren song of Vim”

1. Marc Wilson says: What happened to Agenda? I started using it on your recommendation and have been very happy with it.

2. When I first started using iCloud, Agenda didn’t sync (or didn’t sync fast enough) so I returned to the built-in calendar. Agenda is still installed on my phone, but I haven’t felt compelled to switch back to it.

3. I also was recently lured by the siren songs of Vim, which I used for 6 years a while back. As it was only the editing features that I wanted (compared to the extensibility of the editor), I found that I was better served with evil: Vim editing features under emacs. And I need emacs: I haven’t found another editor that can launch and keep interacting with an external process.

4. Charles Turner says: The thing that really killed Vim for me was its handling of word-wrapped paragraphs of text. It’d be great if I was only writing code, but the pages and pages of my dissertation—just couldn’t bear the screen refresh.

5. I tend to rewrite sentences as I’m typing, stopping in the middle to go back and reset the tense or flip clauses around. It’s rare that I can type an entire paragraph without stopping to edit it.
The need to keep shifting in vi from command to insert mode makes it a very clumsy editor for this type of writer. I’m sure you already know this, but just in case it’s a surprise to others — the GUI versions of Vim (including MacVim) allow you to use the arrows, Home, End, etc., keys without ever leaving Insert mode. Sure, when I’m coding I jump among modes with abandon, but when writing prose, I stay in Insert mode (performing exactly the kind of edits you describe) for long stretches without any trouble. I’m happy to provide additional details if you need them.
http://onsnetwork.org/blog/tag/excel/
# A few outliers remaining

Integrating the last bit of qPCR data into the master datasheet. This includes 5 runs post 8/15.

Likely error above: the second EF1 was an 18s test.

CARM: Looks nice, corrected…

Elong factor: Correction is fine, but the mechanical reps have some issues. As noticed in the last batch of analysis, these need to be checked at the raw data level.

28s: Thoughts on normalizing gene… Noting that one EF1 rep was thrown out for mechanical (see above). Here is a crude look at EF1, actin, and 28s respectively…

# Finishing out with the mechanical

Currently there is a pretty robust spreadsheet, and over the past few days Jake has cranked through some reps to see how the oysters that were mechanically stressed hold up. Below is how these data are integrated. Currently the 8-10 samples (yellow) have been skipped, but we might have a look.

First up is having a look at the new HSP 70 reps. The mechanical data still needs some better resolution. Hopefully the 8-10 samples might shed some light.

Next up is two more reps of PGEEP4. Looks good, and given the doubling of reps we could easily drop ‘outlier’ runs and still have triplicates, tight triplicates.

GRB2… now good to go, with the first pair of reps dead on.

BMP2…. could use some help from the other mechanical stress runs.

TLR…. seemed like a relatively easy fix (besides no detection) in that it just needed to be corrected for machine. And the correction indicates that expression was so low it was only detectable by the Opticon.

The 8-15 runs had minimal control and temp samples, with mechanical run in dups. This needs a little caressing before integrating into the data. This should be in two columns, with empty cells where no samples were run, in this order.
H_C_1 H_C_2 H_C_3 H_C_4 H_C_5 H_C_6 H_C_7 H_C_8
N_C_1 N_C_2 N_C_3 N_C_4 N_C_5 N_C_6 N_C_7 N_C_8
S_C_1 S_C_2 S_C_3 S_C_4 S_C_5 S_C_6 S_C_7 S_C_8
H_T_1 H_T_2 H_T_3 H_T_4 H_T_5 H_T_6 H_T_7 H_T_8
N_T_1 N_T_2 N_T_3 N_T_4 N_T_5 N_T_6 N_T_7 N_T_8
S_T_1 S_T_2 S_T_3 S_T_4 S_T_5 S_T_6 S_T_7 S_T_8
H_M_1 H_M_2 H_M_3 H_M_4 H_M_5 H_M_6 H_M_7 H_M_8
N_M_1 N_M_2 N_M_3 N_M_4 N_M_5 N_M_6 N_M_7 N_M_8
S_M_1 S_M_2 S_M_3 S_M_4 S_M_5 S_M_6 S_M_7 S_M_8

8-15 run update

Actin: Mechanical looks decent after correcting. However, taken together, the difference in crude expression levels is bothersome.

Carm

H2AV: Assuming the correction is correct, there are still big differences in mechanical here; could be real.

PGRP: No correction required, as these were run on the CFX; the downside is that some reps are not detected that would have been picked up with the Opticon. Do not see a big shift in expression of the mechanically stressed.

CRAF: Easy correction, but skeptical of some very, very low Cts.

# Dirty and crude with Oly qPCR

Taking the most decent Ct values, I did the simple and crude calculation, normalizing with EF1 and looking at fold over minimum. Seems the EF1 data is skewing. Will take a look with actin and also compare delta Ct, using this as a sound reference.
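For what it's worth, the "simple and crude" calculation described above (normalize to EF1, then fold over minimum, i.e. the usual 2^-ΔCt model) can be sketched in Python. The Ct values here are made-up placeholders, not numbers from these runs:

```python
# Hypothetical Ct values for one target gene and the EF1 reference.
ct_target = {"H_C_1": 24.1, "N_C_1": 26.3, "S_C_1": 23.8}
ct_ef1    = {"H_C_1": 18.2, "N_C_1": 18.5, "S_C_1": 18.0}

# Normalize: relative expression = 2 ** -(Ct_target - Ct_reference)
rel = {s: 2.0 ** -(ct_target[s] - ct_ef1[s]) for s in ct_target}

# Fold over minimum: scale so the lowest-expressing sample sits at 1.0
lowest = min(rel.values())
fold = {s: rel[s] / lowest for s in rel}
print(fold)  # N_C_1 has the largest delta Ct here, so its fold is 1.0
```

Swapping the reference dictionary for actin (or raw delta Ct comparisons) is then a one-line change, which is handy when checking whether EF1 itself is skewing the normalization.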
https://codereview.stackexchange.com/questions/250575/counting-characters-from-an-html-file-with-python
# Counting Characters from an HTML File with Python

I just completed level 2 of The Python Challenge on pythonchallenge.com and I am in the process of learning Python, so please bear with me and any silly mistakes I may have made. I am looking for some feedback about what I could have done better in my code. Two areas specifically:

* How could I have more easily identified the comment section of the HTML file? I used a beat-around-the-bush method that kind of found the end of the comment (or the beginning, technically, but it is counting from the end) and gave me some extra characters that I was able to recognize and anticipate (the extra `-->` and `-`). What condition would have better found this comment so I could put it in a new string to be counted?

This is what I wrote:

```python
from collections import Counter
import requests

page = requests.get('http://www.pythonchallenge.com/pc/def/ocr.html')
pagetext = ""
pagetext = (page.text)

# find out what number we are going back to
i = 1
x = 4
testchar = ""
testcharstring = ""
while x == 4:
    testcharstring = pagetext[-i:]
    testchar = testcharstring[0]
    if testchar == "-":
        testcharstring = pagetext[-(i+1)]
        testchar = testcharstring[0]
        if testchar == "-":
            testcharstring = pagetext[-(i+2)]
            testchar = testcharstring[0]
            if testchar == "!":
                testcharstring = pagetext[-(i+3)]
                testchar = testcharstring[0]
                if testchar == "<":
                    x = 3
                else:
                    i += 1
                    x = 4
            else:
                i += 1
                x = 4
        else:
            i += 1
    else:
        i += 1

print(i)
newstring = pagetext[-i:]
charcount = Counter(newstring)
print(charcount)
```

And this is the source HTML:

```html
<html>
<title>ocr</title>
<body>
<center><img src="ocr.jpg">
<br><font color="#c03000">
recognize the characters. maybe they are in the book,
<br>but MAYBE they are in the page source.</center>
<br>
<br>
<br>
<font size="-1" color="gold">
General tips:
<li>Use the hints. They are helpful, most of the times.</li>
<li>Investigate the data given to you.</li>
<li>Avoid looking for spoilers.</li>
<br>
Forums: <a href="http://www.pythonchallenge.com/forums"/>Python Challenge Forums</a>,
<br>
IRC: irc.freenode.net #pythonchallenge
<br><br>
To see the solutions to the previous level, replace pc with pcc, i.e. go to:
http://www.pythonchallenge.com/pcc/def/ocr.html
</body>
</html>

<!-- find rare characters in the mess below: -->

<!-- Followed by thousands of characters and the comment concludes with '-->'
```

---

I don't have enough reputation to comment, so I must say this in an answer. It looks clunky to use

```python
while x == 4:
```

and then do `x = 3` whenever you want to break out of the loop. It looks better to do

```python
while True:
```

and, when you want to break out of the loop, do

```python
break
```

Cheers!

• Welcome to CodeReview, you will fit right in ;) – konijn Oct 13 at 14:54
• Thx man. I appreciate it. – fartgeek Oct 13 at 15:26

---

# Redundant Code

```python
pagetext = ""
pagetext = (page.text)
```

The first line assigns an empty string to `pagetext`. The second line ignores the contents already in `pagetext` and assigns a different value to the variable. Why bother with the first statement? It simply makes the code longer, slower, and harder to understand. Why bother with the `(...)` around `page.text`? They also are not serving any purpose.

# Variable Names

Variables like `i` are a double-edged sword. You're using it as a loop index, and then you're using it to reference a found location after the loop terminates. But `i` by itself doesn't have much meaning. `posn` might be clearer. `last_comment_posn` would be much clearer, though very verbose.

PEP-8 recommends using underscores to separate words in variable names: i.e., use `char_count`, not `charcount`, etc.

# Searching for a string of characters

Python strings have built-in methods for searching for a substring in a larger string. For instance, `str.find` could rapidly find the first occurrence of `<!--` in the page text:

```python
i = pagetext.find("<!--")
```

But you're not looking for the first one; you're looking for the last one. Python again has you covered, with the reverse find method `str.rfind`:

```python
i = pagetext.rfind("<!--")
```

But this still finds the index of the start of the last occurrence. You want the characters after the comment marker, so we need to skip forward 4 additional characters:

```python
if i >= 0:
    newstring = pagetext[i+4:]
```

# Improved code

```python
import requests
from collections import Counter

page = requests.get('http://www.pythonchallenge.com/pc/def/ocr.html')
page.raise_for_status()   # Crash if the request didn't succeed

page_text = page.text

posn = page_text.rfind("<!--")
print(posn)

if posn >= 0:
    comment_text = page_text[posn+4:]   # Fix! This is to end of string, not end of comment!
    char_count = Counter(comment_text)
    print(char_count)
```
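The `# Fix!` comment above flags the one remaining gap: `rfind` locates the opening marker, but the slice runs to the end of the string rather than stopping at the closing `-->`. One way to close that gap — a sketch only, and the helper name `extract_last_comment` is ours, not part of the answer above — is to pair `str.rfind` for the opening marker with a forward `str.find` for the closing marker:

```python
from collections import Counter

def extract_last_comment(page_text):
    """Return the body of the last HTML comment, or '' if none is found.

    Hypothetical helper: pairs str.rfind (to locate the last '<!--')
    with str.find (to stop at the matching '-->') instead of slicing
    to the end of the string.
    """
    start = page_text.rfind("<!--")
    if start < 0:
        return ""
    start += len("<!--")                  # skip past the opening marker
    end = page_text.find("-->", start)    # first closing marker after it
    return page_text[start:end] if end >= 0 else page_text[start:]

# Usage on a small stand-in for the real page:
sample = "<html></html>\n<!-- first -->\n<!--\naabbbc\n-->"
comment = extract_last_comment(sample)
print(Counter(comment.strip()))   # counts only the last comment's body
```

This also degrades gracefully: if the closing marker is missing, it falls back to slicing to the end of the string, which is exactly what the improved code above does.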
http://northumberlandnurseries.co.uk/0a1t3/d70906-what-is-wavelength-in-physics
In physics, the wavelength is the spatial period of a periodic wave — the distance over which the wave's shape repeats. It is the distance between consecutive corresponding points of the same phase on the wave, such as two adjacent crests, troughs, or zero crossings, and is a characteristic of both traveling waves and standing waves, as well as other spatial wave patterns. "Corresponding points" refers to two points or particles in the same phase, that is, points that have completed identical fractions of their periodic motion. Wavelength is commonly designated by the Greek letter lambda (λ). The crest is the highest point of a wave and the trough the lowest. In transverse waves (waves with points oscillating at right angles to the direction of their advance), wavelength is measured from crest to crest or from trough to trough; in longitudinal waves (waves with points vibrating in the same direction as their advance), it is measured from compression to compression or from rarefaction to rarefaction. The SI unit of wavelength is the metre, but in physics the angstrom and the nanometre are also widely used.

Examples of waves are sound waves, light, water waves, and periodic electrical signals in a conductor. A sound wave is a variation in air pressure, while in light and other electromagnetic radiation the strength of the electric and magnetic fields varies; water waves are variations in the height of a body of water. The range of wavelengths or frequencies for wave phenomena is called a spectrum. The name originated with the visible light spectrum but can now be applied to the entire electromagnetic spectrum as well as to a sound or vibration spectrum.

Wavelength is inversely proportional to frequency: λ = v/f, where v is the phase speed (magnitude of the phase velocity) of the wave and f is its frequency. Waves with higher frequencies have shorter wavelengths; lower frequencies have longer wavelengths. For electromagnetic radiation in free space, the phase speed is the speed of light, about 3×10⁸ m/s, so the wavelength of a 100 MHz radio wave (as used in an MRI unit) is about 3×10⁸ m/s divided by 10⁸ Hz = 3 metres. For sound waves in air, the speed of sound is 343 m/s at room temperature and atmospheric pressure, so the wavelengths of sound frequencies audible to the human ear (20 Hz–20 kHz) lie between approximately 17 m and 17 mm. Visible light has wavelengths in the range of roughly 400 to 700 nm (4.00×10⁻⁷ to 7.00×10⁻⁷ m), between infrared, which has longer wavelengths, and ultraviolet, which has shorter wavelengths. In physics, the term light sometimes refers to electromagnetic radiation of any wavelength, whether visible or not; in this sense, gamma rays, X-rays, microwaves, and radio waves are also light.

The concept of wavelength is most often applied to sinusoidal, or nearly sinusoidal, waves, because in a linear system the sinusoid is the unique shape that propagates with no shape change — just a phase change and potentially an amplitude change. Waves are also commonly described in terms of the wavenumber k (2π times the reciprocal of wavelength) and the angular frequency ω (2π times the frequency). The phase (kx − ωt) is often generalized to (k·r − ωt) by replacing the wavenumber k with a wave vector that specifies the direction and wavenumber of a plane wave in 3-space; the first form, using reciprocal wavelength in the phase, does not generalize as easily to a wave in an arbitrary direction. A related quantity is the reduced wavelength ƛ = λ/2π — the "regular" wavelength reduced by a factor of 2π — usually encountered in quantum mechanics together with the reduced Planck constant ħ and the angular frequency ω or angular wavenumber k.

The speed of a wave depends upon the medium in which it propagates. In particular, the speed of light in a medium is less than in vacuum, which means that the same frequency corresponds to a shorter wavelength in the medium than in vacuum. When a wave enters a different medium at an angle, the change in speed causes refraction, a change in direction; for electromagnetic waves the angle of propagation is governed by Snell's law. How the speed of light within a medium varies with wavelength is described by the medium's dispersion relation, and because of dispersion the change in direction upon entering a medium changes with wavelength. This is responsible for the familiar separation of light into component colors by a prism: different wavelengths propagate at different speeds inside the prism and refract at different angles. In deep water, longer waves travel faster than shorter ones, a phenomenon also known as dispersion.

Waves that are sinusoidal in time but propagate through a medium whose properties vary with position (an inhomogeneous medium) may propagate at a velocity that varies with position, and as a result may not be sinusoidal in space. Such waves are sometimes regarded as having a local wavelength even though they are not sinusoidal. In an ocean wave approaching shore, for example, the incoming wave undulates with a varying local wavelength that depends in part on the depth of the sea floor: as the wave slows down, the wavelength gets shorter and the amplitude increases. The analysis of such a wave can be based upon comparison of the local wavelength with the local water depth; if the water depth is less than one-twentieth of the wavelength, the waves are known as long gravity waves. If a traveling wave has a fixed shape that repeats in space or in time, it is a periodic wave, and in linear media any wave pattern can be described in terms of the independent propagation of sinusoidal components. In nonlinear media, waves of unchanging nonsinusoidal shape can also occur: large-amplitude ocean waves in shallow water, with sharper crests and flatter troughs than a sinusoid, are typical of cnoidal waves, so named because they are described by the Jacobi elliptic function of m-th order, usually denoted cn(x; m).

Waves in crystalline solids are not continuous, because they are composed of vibrations of discrete particles arranged in a regular lattice. This produces aliasing: the same vibration can be considered to have a variety of different wavelengths. It is conventional to choose the longest wavelength that fits the phenomenon, and the range of wavelengths sufficient to describe all possible waves in a crystalline medium corresponds to wave vectors confined to the Brillouin zone. This indeterminacy in wavelength in solids is important in the analysis of wave phenomena such as energy bands and lattice vibrations.

A standing wave is an undulatory motion that stays in one place. A sinusoidal standing wave includes stationary points of no motion, called nodes, and the wavelength is twice the distance between nodes. The stationary wave can be viewed as the sum of two traveling sinusoidal waves of oppositely directed velocities, and wavelength, period, and wave velocity are related just as for a traveling wave. For standing waves in a box, boundary conditions determine the allowed wavelengths: for an electromagnetic wave with ideal metal walls, the wave must have nodes at the walls because the walls cannot support a tangential electric field.

When sinusoidal waveforms add, they may reinforce each other (constructive interference) or cancel each other (destructive interference), depending upon their relative phase. A simple example is Young's double-slit experiment, where light passed through two slits shines on a screen. If the screen is far enough from the slits that the paths are nearly parallel, the path difference is d sin θ, and the condition for constructive interference is d sin θ = mλ, where m is an integer. Thus, if the wavelength of the light is known, the slit separation d can be determined from the interference fringes, and vice versa; a diffraction grating with q slits and grating constant g behaves analogously. Interference redistributes the light, so the energy contained in it is not altered — just where it shows up. A single slit of non-zero width also spreads the light passing through it into a broader image on the screen (diffraction), with each point in the aperture acting as a source of one contribution to the beam (Huygens' wavelets). In the Fraunhofer (far-field) pattern, within a small-angle approximation, the intensity follows a squared sinc function whose zeros lie at spacings proportional to the wavelength, where L is the slit width and R the distance of the pattern on the screen from the slit; Fresnel (near-field) diffraction describes the pattern at close separations. Diffraction is the fundamental limitation on the resolving power of optical instruments such as telescopes (including radiotelescopes) and microscopes: by the Rayleigh criterion, the resolvable spatial size of objects is proportional to the wavelength used and depends on the numerical aperture, and for a circular aperture the diffraction-limited image spot is the Airy disk. Because diffraction patterns scale in proportion to wavelength, shorter wavelengths give higher resolution — very high sound frequencies are used by bats, for instance, so that they can resolve targets smaller than 17 mm.

Localized wave packets — "bursts" of wave action in which each packet travels as a unit — find application in many fields of physics. A wave packet has an envelope that describes the overall amplitude of the wave; within the envelope, the distance between adjacent peaks or troughs is sometimes called a local wavelength, and in general the envelope moves at a speed different from the constituent waves. Using Fourier analysis, wave packets can be decomposed into infinite sums (or integrals) of sinusoidal waves of different wavenumbers or wavelengths. Louis de Broglie postulated that all particles with momentum p have a wavelength λ = h/p, where h is Planck's constant; the electrons in a CRT display, for example, have a de Broglie wavelength of about 10⁻¹³ m. To prevent the wave function for such a particle being spread over all space, de Broglie proposed using wave packets to represent particles that are localized in space.

The term subwavelength describes an object having one or more dimensions smaller than the length of the wave with which it interacts. A subwavelength-diameter optical fibre is an optical fibre whose diameter is less than the wavelength of light propagating through it; subwavelength apertures are holes smaller than the wavelength of the light passing through them, a phenomenon studied in subwavelength optics and waveguides; and a subwavelength particle is a particle smaller than the wavelength of light with which it interacts (see Rayleigh scattering).
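The relation λ = v/f is easy to check numerically against the examples above. A minimal sketch in Python (the function name is ours, purely for illustration):

```python
def wavelength(speed, frequency):
    """Wavelength from λ = v / f, with speed in m/s and frequency in Hz."""
    return speed / frequency

# A 100 MHz radio wave in free space: 3e8 / 1e8 = 3 metres.
print(wavelength(3.0e8, 100e6))   # 3.0

# A 20 kHz sound wave in air at room temperature (343 m/s): about 17 mm.
print(wavelength(343.0, 20e3))    # ~0.01715 m
```

Both results match the figures quoted in the text: 3 m for the radio wave and roughly 17 mm for the upper limit of audible sound.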
2021-05-16 04:58:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7359485626220703, "perplexity": 550.1205000049251}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989690.55/warc/CC-MAIN-20210516044552-20210516074552-00457.warc.gz"}
http://ieeexplore.ieee.org/ieee_pilot/articles/08tro05/tro-andreasson-2004642/article.html
# A Minimalistic Approach to Appearance-Based Visual SLAM

## Abstract

This paper presents a vision-based approach to simultaneous localization and mapping (SLAM) in indoor/outdoor environments with minimalistic sensing and computational requirements. The approach is based on a graph representation of robot poses, using a relaxation algorithm to obtain a globally consistent map. Each link corresponds to a relative measurement of the spatial relation between the two nodes it connects. The links describe the likelihood distribution of the relative pose as a Gaussian distribution. To estimate the covariance matrix for links obtained from an omnidirectional vision sensor, a novel method is introduced based on the relative similarity of neighboring images. This new method does not require the determination of distances to image features using, for example, multiple-view geometry. Combined indoor and outdoor experiments demonstrate that the approach can handle different environments (without modification of the parameters), can cope with violations of the “flat floor assumption” to some degree, and scales well with increasing size of the environment, producing topologically correct and geometrically accurate maps at low computational cost. Further experiments demonstrate that the approach is also suitable for combining multiple overlapping maps, e.g., for solving the multirobot SLAM problem with unknown initial poses.

SECTION I

## Introduction

THIS PAPER presents a new vision-based approach to the problem of simultaneous localization and mapping (SLAM). Especially compared with SLAM approaches using a 2-D laser scanner, the rich information that a vision-based approach provides about a substantial part of the environment allows for dealing with high levels of occlusion [1] and enables solutions that do not rely strictly on a flat floor assumption. Cameras can also offer a longer range and are therefore advantageous in environments that contain large open spaces.
The proposed method is called “mini-SLAM” since it is minimalistic in several ways. On the hardware side, it relies solely on odometry and an omnidirectional camera as the external source of information. This allows for less expensive systems compared to methods that use 2-D or 3-D laser scanners. Note that the robot used for the experiments was also equipped with a 2-D laser scanner. This laser scanner, however, was not used in the SLAM algorithm but only to visualize the consistency of the created maps. Apart from the frugal hardware requirements, the method is also minimalistic in its computational demands. Map estimation is performed online by a linear-time SLAM algorithm on an efficient graph representation. The main difference to other vision-based SLAM approaches is that there is no estimate of the positions of a set of landmarks involved, enabling the algorithm to scale up better with the size of the environment. Instead, a measure of image similarity is used to estimate the relative pose between corresponding images (“visual relations”) and the uncertainty of this estimate. Given these “visual relations” and relative pose estimates between consecutive images obtained from the odometry of the robot (“odometry relations”), the multilevel relaxation algorithm [2] is then used to determine the maximum likelihood estimate of all image poses. The relations are expressed as a relative pose estimate and the corresponding covariance. A key insight is that the estimate of the relative pose in the “visual relations” does not need to be very accurate as long as the corresponding covariance is modeled appropriately. This is because the relative pose is used only as an initial estimate that the multilevel relaxation algorithm can adjust according to the covariance of the relation. 
Therefore, even with fairly imprecise initial estimates of the relative poses, it is possible to build geometrically accurate maps using the geometric information in the covariance of the relative pose estimates. Mini-SLAM was found to produce consistent maps in various environments, including, for example, a dataset of an environment containing indoor and outdoor passages (path length of 1.4 km) and an indoor dataset covering five floor levels of a department building. Further to our previously published work [3], we extended the mini-SLAM approach to the multirobot SLAM problem, demonstrating its ability to combine multiple overlapping maps with unknown initial poses. We also provide an evaluation of the robustness of the suggested approach with respect to poor odometry or a less reliable measure of visual similarity.

### A. Related Work

Using a camera as the external source of information in SLAM has received increasing attention during the past years. Many approaches extract landmarks using local features in the images and track the positions of these landmarks. As the feature descriptor, Lowe's scale-invariant feature transform (SIFT) [4] has been used widely [5], [6]. An initial estimate of the relative pose change is often obtained from odometry [6], [7], [8], or, where multiple cameras are available as in [9] and [10], multiple-view geometry can be applied to obtain depth estimates of the extracted features. To update and maintain visual landmarks, extended Kalman filters (EKFs) [7], [11] and Rao-Blackwellized particle filters (RBPFs) [6], [9] have been used. In the visual SLAM method proposed in [11], particle filters were utilized to obtain the depth of landmarks, while the landmark positions were updated with an EKF. Initial landmark positions had to be provided by the user. A similar approach described in [8] applies the converse methodology: the landmark positions were estimated with a Kalman filter, and a particle filter was used to estimate the path.
Due to their suitability for addressing the correspondence problem, vision-based systems have been applied as an addition to laser-scanning-based SLAM approaches for detecting loop closure. The principle has been applied to SLAM systems based on a 2-D laser scanner [12] and a 3-D laser scanner [13]. In the approach proposed in this paper, the SLAM optimization problem is solved at the graph level with the multilevel relaxation (MLR) method of Frese and Duckett [2]. This method could be replaced by alternative graph-based SLAM methods, for example, the online method proposed by Grisetti et al. [14] based on the stochastic gradient descent method proposed by Olson et al. [15]. The rest of this paper is structured as follows. Section II describes the proposed SLAM approach. Then, the experimental setup is detailed, and the results are presented in Section III. The paper ends with conclusions and suggestions for future work (Section IV).

SECTION II

## Mini-SLAM

### A. Multilevel Relaxation

The SLAM optimization problem is solved at the graph level with the MLR method of Frese and Duckett [2]. A map is represented as a set of nodes connected in a graph structure. An example is shown in Fig. 1. Each node corresponds to the robot pose at a particular time, and each link to a relative measurement of the spatial relation between the two nodes it connects. A node is created for each omni-image, and the terms node and frame are used interchangeably in this paper. The MLR algorithm can be briefly explained as follows. The input is a set R of m = |R| relations on n planar frames (i.e., a 2-D representation is used). Each relation r ∊ R describes the likelihood distribution of the pose of frame a relative to frame b. Relations are modeled as a Gaussian distribution with mean μr and covariance Cr. The output of the MLR algorithm is the maximum likelihood estimation vector for the poses of all the frames.
Thus, a globally consistent set of Cartesian coordinates is obtained for the nodes of the graph based on local (relative) and inconsistent (noisy) measurements, by maximizing the total likelihood of all measurements.

Fig. 1. Graph representation used. The figure shows frames (nodes) and relations (edges) of both the odometry relations ro and the visual relations rv. Visual relations are indicated with dotted lines. Each frame a contains a reference to a set of features Fa extracted from an omnidirectional image Ia, an odometry pose xoa, a covariance estimate of the odometry pose Cxoa, the estimated pose x̂a, and an estimate of its covariance Cx̂a. Fig. 2 shows images corresponding to the region represented by the graph in this figure.

### B. Odometry Relations

The mini-SLAM approach is based on two principles. First, odometry is sufficiently accurate if the distance traveled is short. Second, by using visual matching, correspondence between robot poses can be detected reliably even though the search region around the current pose estimate is large. Accordingly, two different types of relations are created in the MLR graph: relations based on odometry ro and relations based on visual similarity rv. Odometry relations ro are created between successive frames. The relative pose μro is obtained directly from the odometry readings, and the covariance Cro is estimated using the motion model suggested in [16] as

$$C_{r_o} = \left[\matrix{d^2\delta^2_{X_d} + t^2\delta^2_{X_t} & 0 & 0 \cr 0 & d^2\delta^2_{Y_d} + t^2\delta^2_{Y_t} & 0 \cr 0 & 0 & d^2\delta^2_{\theta_d} + t^2\delta^2_{\theta_t}} \right]\eqno{\hbox{(1)}}$$

where d and t are the total distance traveled and total angle rotated between two successive frames. The δX parameters relate to the forward motion, the δY parameters to the side motion, and the δθ parameters to the rotation of the robot. The six δ-parameters adjust the influence of the distance d and rotation t in the calculation of the covariance matrix.
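As a concrete sketch, the diagonal motion-model covariance of (1) can be computed as below. The δ-parameter values are hypothetical placeholders (the paper only states that the six parameters were tuned by hand), so treat them as an assumption, not the authors' values.

```python
import numpy as np

# Hypothetical delta-parameters of the motion model in Eq. (1);
# the actual values used in the paper are not published here.
DELTAS = dict(xd=0.05, xt=0.01, yd=0.05, yt=0.01, td=0.02, tt=0.1)

def odometry_covariance(d, t, p=DELTAS):
    """Diagonal covariance C_ro for an odometry relation, given the
    distance travelled d and angle rotated t between successive frames."""
    return np.diag([
        d**2 * p['xd']**2 + t**2 * p['xt']**2,   # forward motion (x)
        d**2 * p['yd']**2 + t**2 * p['yt']**2,   # side motion (y)
        d**2 * p['td']**2 + t**2 * p['tt']**2,   # rotation (theta)
    ])

# Example: 1 m travelled with a 0.1 rad rotation between two frames.
C = odometry_covariance(d=1.0, t=0.1)
```

Both d and t contribute to every diagonal term, so uncertainty grows with distance even on a straight path, and turning inflates all three variances.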
They were tuned manually once and then kept constant throughout the experiments.

### C. Visual Similarity Relations

#### Similarity Measure

Given two images Ia and Ib, features are first extracted using the SIFT algorithm [4]. This results in two sets of features Fa and Fb for frames a and b. Each feature F = ⟨[x, y], H⟩ comprises the pixel position [x, y] and a histogram H containing the SIFT descriptor. The similarity measure Sa,b is based on the number of features that match between Fa and Fb. The feature matching algorithm calculates the Euclidean distance between each feature in image Ia and all the features in image Ib. A potential match is found if the smallest distance is less than 60% of the second smallest distance. This criterion was found empirically and was also used in [17]. It guarantees that interest point matches are substantially better than all other possible matches. We also do not allow features to be matched against more than one other feature. If a feature has more than one candidate match, the match that has the lowest Euclidean distance among the candidates is selected. Examples of matched features are shown in Fig. 2.

Fig. 2. Examples of (top) loop closure detection outdoors and (bottom) indoors. In the outdoor example, the distance to the extracted features is larger than in the indoor example. (Left) Feature matches at the peak of the similarity value, (top) S678,758 = 0.728 and (bottom) S7,360 = 0.322. (Middle) Feature matches two steps (equivalent to ∼3-m distance) away, (top) S680,758 = 0.286 and (bottom) S9,360 = 0.076. The pose standard deviation σxrv = σyrv was estimated as (top) 2.06 m and (bottom) 1.09 m, respectively, and the mean dμ as (top) 0.199 m and (bottom) −0.534 m. (Right) Evolution of the similarity measure S against the distance travelled (obtained from odometry) together with the fitted Gaussian.

The matching step results in a set of feature pairs Pa,b with a total number Ma,b of matched pairs.
Since the number of features varies heavily depending on the image content, the number of matches is normalized to Sa,b ∊ [0, 1] as

$$S_{a,b} = {M_{a,b}\over {1/2}(n_{F_a} + n_{F_b})}\eqno{\hbox{(2)}}$$

where nFa and nFb are the number of features in Fa and Fb, respectively. A high similarity measure indicates a perceptually similar position.

#### Estimation of the Relative Rotation and Variance

The relative rotation between two panoramic images Ia and Ib can be estimated directly from the horizontal displacement of the matched feature pairs Pa,b. If the flat floor assumption is violated, this will be only an approximation. Here, the relative rotations θp for all matched pairs p ∊ Pa,b are sorted into a 10-bin histogram, and the relative rotation estimate μθrv is determined as the maximum of a parabola fitted to the largest bin and its left and right neighbors (see Fig. 3).

Fig. 3. Relative orientation histogram from two omnidirectional images taken 2 m apart. The dotted line marks the relative orientation estimate μθrv.

To evaluate the accuracy of the relative rotation estimates θp, we collected panoramic images in an indoor laboratory environment and computed the relative orientation with respect to a reference image I0. Panoramic images were recorded at a translational distance of 0.5, 1.0, and 2.0 m from the reference image I0. The ground truth rotation was obtained by manually measuring the displacement of corresponding pixels in areas along the displacement of the camera. The results in Table I demonstrate the good accuracy obtained. Even at a displacement of 2 m, the mean error is only 7.15°.
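As an illustrative sketch (not the authors' implementation), the 60% ratio-test matching with one-to-one enforcement and the normalized similarity measure of (2) can be written as follows; the 2-D descriptors in the example stand in for real 128-D SIFT descriptors.

```python
import numpy as np

def match_and_similarity(Fa, Fb, ratio=0.6):
    """Return S_ab of Eq. (2): descriptors are rows of Fa and Fb.
    Requires at least two features in Fb for the ratio test."""
    candidates = {}  # index in Fb -> (distance, index in Fa)
    for i, f in enumerate(Fa):
        dists = np.linalg.norm(Fb - f, axis=1)
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        if best < ratio * second:        # match substantially better than all others
            j = order[0]
            # one-to-one matching: keep the closer of competing matches
            if j not in candidates or best < candidates[j][0]:
                candidates[j] = (best, i)
    M = len(candidates)                  # M_ab, number of matched pairs
    return M / (0.5 * (len(Fa) + len(Fb)))  # Eq. (2)

# Toy example: two features of Fa find close counterparts in Fb.
Fa = np.array([[0.0, 0.0], [10.0, 10.0]])
Fb = np.array([[0.0, 0.1], [10.0, 10.2], [50.0, 50.0]])
S = match_and_similarity(Fa, Fb)
```

The normalization by the mean feature count keeps S comparable between feature-rich and feature-poor scenes, which is what makes a single threshold tvs workable across environments.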
TABLE I Errors of Relative Rotation θ Estimate in Radians

The rotation variance σθrv² is estimated by the sum of squared differences between the estimate of the relative rotation μθrv and the relative rotations of the matched pairs Pa,b

$$\sigma_{\theta^{r_v}}^2 = {1\over M_{a,b}-1}\sum_{p\in P_{a,b}}(\mu_{\theta}^{r_v} - \theta_{p})^2.\eqno{\hbox{(3)}}$$

To increase the robustness toward outliers, a 10% Winsorized mean is applied. For the evaluated data, this had only a minor effect on the results compared to using an untruncated mean.

#### Estimation of Relative Position and Covariance

The mini-SLAM approach does not attempt to determine the position of the detected features. Therefore, the relative position between two frames a and b cannot be determined very accurately. Instead, we use only the image similarity of the surrounding images to estimate [μxrv, μyrv], as described shortly. It would be possible to estimate the relative position using multiple-view geometry, but this would introduce additional complexity that we want to avoid. Instead, geometric information is obtained from an estimate of the covariance of the relative position between a current frame b and a previously recorded frame a. This covariance estimate is computed using only the similarity measures S of frame b with a and the neighboring frames of a. The number of matched features between successive frames will vary depending on the physical distance to the features (see Figs. 2 and 4). Consider, for example, a robot located in an empty car park where the physical distance to the features is large, and therefore, the appearance of the environment does not change quickly if the robot is moved a certain distance. If, on the other hand, the robot is located in a narrow corridor where the physical distance to the extracted features is small, the number of feature matches in successive frames tends to be smaller if the robot is moved the same distance.

Fig. 4.
(Left) Physical distance to the features will influence the number of features that can be identified from different poses of the robot. The filled squares represent features that could be matched in all three robot poses, while the unfilled squares represent features for which correspondences could not be found from all poses. The left wall in the figure is closer to the robot. Thus, due to the faster change in appearance, the number of features of the left wall that can be matched over successive images tends to be less compared to the number of matched features of the right wall. (Right) Outdoor robot used in this paper, equipped with a Canon EOS 350D camera and a panoramic lens from 0-360.com, which were used to collect the data, a DGPS unit to determine ground truth positions, and a SICK LMS 200 scanner used for visualization and for obtaining ground truth.

The covariance of the robot pose estimate [x, y]

$$C_{r_v} = \left[\matrix{\sigma_{x^{r_v}}^2 & \sigma_{x^{r_v}}\sigma_{y^{r_v}} \cr \sigma_{x^{r_v}}\sigma_{y^{r_v}} & \sigma_{y^{r_v}}^2}\right]\eqno{\hbox{(4)}}$$

is computed based on how the similarity measure varies over the set N(a), which contains frame a and its neighboring frames. The analyzed sequence of similarity measures is indicated in the zoomed-in visualization of a similarity matrix shown in Fig. 5. In order to avoid issues estimating the covariance orthogonal to the path of the robot if the robot was driven along a straight path, the covariance matrix is simplified by setting σxrv² = σyrv² and σxrvσyrv = 0. The remaining covariance parameter is estimated by fitting a 1-D Gaussian to the similarity measures SN(a),b and the distance traveled as obtained from odometry (see Fig. 6). Two parameters are determined from the nonlinear least-squares fitting process: the mean (dμ) and the variance (σ[x,y]rv²).
The initial estimate of the relative position [μxrv, μyrv] of a visual relation is calculated as

$$\eqalignno{\mu_x^{r_v} & = {\rm cos}(\mu_{\theta}^{r_v})d_\mu & \hbox{(5)}\cr \mu_y^{r_v} & = {\rm sin}(\mu_{\theta}^{r_v})d_\mu & \hbox{(6)}}$$

where dμ is the calculated mean of the fitted Gaussian and μθrv the estimated relative orientation (Section II-C2).

Fig. 5. (Left) Full similarity matrix for the laboratory dataset. Brighter entries indicate a higher similarity measure S. (Right) Zoomed-in image. The left area (enclosed in a blue frame) corresponds to a sequence of similarity measures that gives a larger position covariance than the right sequence (red frame).

Fig. 6. Gaussian fitted to the distance traveled d (as obtained from odometry) and the similarity measures between frame b and the frames of the neighborhood N(a) = {a−2, a−1, a, a+1, a+2}. From the similarity measures, both a relative pose estimate μrv and a covariance estimate Crv are calculated between node a and node b. The orientation and orientation variance are not visualized in this figure.

In the experimental evaluation, the Gaussian was estimated using five consecutive frames. To evaluate whether the evolution of the similarity measure in the vicinity of a visual relation can be reasonably approximated by a Gaussian, the mean error between the five similarity measures and the fitted Gaussian was calculated for the outdoor/indoor dataset (the dataset is described in Section III-A). The results in Table II indicate that the Gaussian represents the evolution of the similarity in a reasonable way. Please note that frame b is recorded at a later time than frame a, meaning that the covariance estimate Crva,b can be calculated directly without any time lag.
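The 1-D Gaussian fit that yields dμ and the position variance can be sketched as below. Fitting a parabola to log S is our simplifying assumption standing in for the paper's nonlinear least-squares fit; it requires strictly positive similarity values.

```python
import numpy as np

def fit_gaussian(d, S):
    """Fit S ≈ A·exp(-(d - d_mu)^2 / (2·sigma2)) by fitting a parabola
    to log S: log S = c2·d^2 + c1·d + c0, so sigma2 = -1/(2·c2) and
    d_mu = c1·sigma2. d: distances travelled, S: similarity measures."""
    c2, c1, c0 = np.polyfit(d, np.log(S), 2)
    sigma2 = -1.0 / (2.0 * c2)   # variance from the quadratic term
    d_mu = c1 * sigma2           # mean from the linear term
    return d_mu, sigma2

# Synthetic check with five frames, as in N(a) = {a-2, ..., a+2}:
# samples drawn from an exact Gaussian are recovered exactly.
d = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
S = 0.7 * np.exp(-(d - 0.5) ** 2 / (2 * 1.2 ** 2))
d_mu, sigma2 = fit_gaussian(d, S)
```

The recovered dμ then feeds (5)–(6) as the relative position magnitude, and sigma2 fills the diagonal of the simplified covariance of (4).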
TABLE II Statistics of the Error ∊ Between the Gaussian Fit and the Similarity Measures Sa−2,b, …, Sa+2,b for Each Node for Which the Fit Was Performed in the Outdoor/Indoor Dataset

#### Selecting Frames to Match

In order to speed up the algorithm and make it more robust to perceptual aliasing (the problem that different regions have similar appearance), only those frames are selected for matching that are likely to be located close to each other. Consider the current frame b and a previously recorded frame a. If the similarity measure were calculated between b and all previously added frames, the number of frames to be compared would increase linearly (see Fig. 7). Instead, frames are only compared if the current frame b is within a search area around the pose estimate of frame a. The size of this search area is computed from the estimated pose covariance.

Fig. 7. Number of similarity calculations performed at each frame in the outdoor/indoor dataset. The first frames were compared around frame 240, since up to then none of the previous frames were within the search area around the current pose estimate defined by the estimated pose covariance. The diagonal line indicates the linear increase for the case that the frames to match are not preselected.

From the MLR algorithm (see Section II-A), we obtain the maximum likelihood pose estimate x̂b for frame b. There is, however, no estimate of the corresponding covariance that could be used to distinguish whether frame a is likely to be close enough to frame b so that it can be considered a candidate for a match, i.e., a frame for which the similarity measure Sa,b should be calculated. So far, we have defined two types of covariances: the odometry covariance Cro and the visual relation covariance Crv.
To obtain an overall estimate of the relative covariance between frames a and b, we first consider the covariances of the odometry relations ro between a and b, and compute the relative covariance Cxoa,b as

$$C_{x^o_{a, b}} = \sum_{j\in(a, b-1)}{\bf R}_j C_{r_{o_j}}{\bf R}_j^T .\eqno{\hbox{(7)}}$$

Rj is a rotation matrix, which is defined as

$${\bf R}_j = \left(\matrix{{\rm cos}(\hat{x}_{j+1}^{\theta} - \hat{x}_j^{\theta}) & -{\rm sin}(\hat{x}_{j+1}^{\theta} - \hat{x}_j^{\theta}) & 0 \cr {\rm sin}(\hat{x}_{j+1}^{\theta} - \hat{x}_j^{\theta}) & {\rm cos}(\hat{x}_{j+1}^{\theta} - \hat{x}_j^{\theta}) & 0 \cr 0 & 0 & 1} \right)\eqno{\hbox{(8)}}$$

where x̂jθ is the orientation estimated for frame j. As long as no visual relation rv has been added, either between a and b or between any of the frames between a and b, the relative covariance can be determined directly from the odometry covariances Cxoa and Cxob, as described earlier. However, when a visual relation rva,b between a and b is added, the covariance of the estimate decreases. Using the covariance intersection method [18], the covariance for frame b is therefore updated as

$$C_{\hat{x}_b} = C_{\hat{x}_b} \oplus (C_{\hat{x}_a} + C_{r_v^{a, b}})\eqno{\hbox{(9)}}$$

where ⊕ is the covariance intersection operator. The covariance intersection method weighs the influence of both covariances CA and CB as

$$C_A \oplus C_B = [\omega C_A^{-1} + (1-\omega) C_B^{-1}]^{-1} . \eqno{\hbox{(10)}}$$

The parameter ω ∊ [0, 1] is chosen so that the determinant of the resulting covariance is minimized [19]. The new covariance estimate is also used to update the frames between a and b by adding the odometry covariances Cxob,j in opposite order (i.e., simulating that the robot is moving backwards from frame b to a).
The new covariance estimate for frame j ∊ (a, b) is calculated as

$$C_{\hat{x}_j} = C_{\hat{x}_j} \oplus (C_{\hat{x}_b} + C_{x^o_{b, j}}).\eqno{\hbox{(11)}}$$

#### Visual Relation Filtering

To avoid adding visual relations with low similarity, visual similarity relations rva,b between frame a and frame b are added only if the similarity measure exceeds a threshold tvs: Sa,b > tvs. In addition, similarity relations are added only if the similarity value Sa,b has its peak at frame a [compared to the neighboring frames N(a)]. There is no limitation on the number of visual relations that can be added for each frame.

### D. Fusing Multiple Datasets

Fusion of multiple datasets recorded at different times is related to the problem of multirobot mapping, where each of the datasets is collected concurrently with a different robot. The motivation for multirobot mapping is not only to reduce the time required to explore an environment but also to merge the different sensor readings in order to obtain a more accurate map. The problem addressed here is equivalent to “multirobot SLAM with unknown initial poses” [20], because the relative poses between the datasets are not given. The exploration problem is not considered in this paper. Only a minor modification of the standard method described earlier is necessary to address the problem of fusing multiple datasets. The absence of relative pose estimates between the datasets is compensated for by not limiting the search region for which the similarity measures S are computed. This is implemented by adding datasets incrementally and setting the relative pose between consecutively added datasets initially to (0, 0, 0) with an infinite pose covariance. Such odometry relations between datasets appear as long diagonal lines in Fig. 16, representing the transitions from lab to studarea and from studarea to lab-studarea.
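A minimal sketch of the covariance intersection operator of (10): the coarse grid search over ω is our simplification of the determinant minimization described in [19], not the optimizer used in the paper.

```python
import numpy as np

def covariance_intersection(Ca, Cb):
    """C_A ⊕ C_B of Eq. (10): fuse two covariances without knowing
    their cross-correlation, choosing omega to minimise det(result)."""
    best, best_det = None, np.inf
    for w in np.linspace(0.01, 0.99, 99):   # crude stand-in optimiser
        C = np.linalg.inv(w * np.linalg.inv(Ca)
                          + (1 - w) * np.linalg.inv(Cb))
        det = np.linalg.det(C)
        if det < best_det:
            best, best_det = C, det
    return best

# Symmetric example: the optimum is omega = 0.5, giving diag(1.6, 1.6).
Ca, Cb = np.diag([1.0, 4.0]), np.diag([4.0, 1.0])
C = covariance_intersection(Ca, Cb)
```

Unlike a Kalman update, the result never claims more certainty than is justified when the two estimates may share correlated errors, which is why it is safe for re-propagating covariances along the graph in (9) and (11).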
SECTION III

## Experimental Results

In this section, we present results from five different datasets with varying properties. An overview of all datasets is presented in Table III. All datasets were collected with our mobile robot Tjorven (see Fig. 4). The platform uses “skid steering,” which is prone to bad odometry. Different wheel types (indoor/outdoor) were used in the different datasets. The robot's odometry was calibrated (for each wheel type) by first driving forward 5 m to obtain a distance-per-encoder-tick value, and second by completing one full revolution to determine the number of differential encoder ticks per angular rotation. Finally, the drift parameter was adjusted so that the robot would drive forward in a straight line, i.e., to compensate for the slightly different sizes of the wheel pairs.

TABLE III For Each Dataset: Number of Nodes, Visual Relations #rv, Performed Similarity Calculations #S, Average Number of Extracted Visual Features μF Per Node With Variance σF, Evaluation Run Time T (Excluding the Similarity Computation)

The omnidirectional images were first converted to panoramic images with a resolution of 1000 × 289. When extracting SIFT features, the initial doubling of the images was not performed, i.e., SIFT features from the first octave were ignored, simply to lower the number of extracted features. The results are presented both visually, with maps obtained by superimposing laser range data using the poses estimated with mini-SLAM, and quantitatively, by the MSE from ground truth data. Since the corresponding pose pairs ⟨x̂i, xGTi⟩ between the estimated pose x̂i and the corresponding ground truth pose xGTi are known, the optimal rigid transformation between pose estimates and ground truth data can be determined directly. We applied the method suggested by Arun et al. [21]. To investigate the influence of the threshold tvs, described in Section II-C5, the MSE was calculated for all datasets for which ground truth data were available.
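The rigid alignment of Arun et al. [21], used above to compare pose estimates against ground truth, can be sketched via an SVD of the point cross-covariance. This is the standard formulation of the method; the variable names are ours.

```python
import numpy as np

def rigid_align(X, Y):
    """Find rotation R and translation t minimising
    sum_i ||R x_i + t - y_i||^2 for corresponding 2-D points
    X, Y of shape (N, 2), following Arun et al. [21]."""
    cx, cy = X.mean(axis=0), Y.mean(axis=0)
    H = (X - cx).T @ (Y - cy)            # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in degenerate configurations.
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = cy - R @ cx
    return R, t

# Noise-free example: a known rotation and translation are recovered.
theta = 0.5
R0 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])
Y = X @ R0.T + np.array([1.0, 2.0])
R, t = rigid_align(X, Y)
```

With the correspondences known, the aligned MSE against ground truth follows as `np.mean(np.sum((X @ R.T + t - Y) ** 2, axis=1))`.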
The result in Fig. 8 shows that the value of the threshold tvs can be selected so that it is nearly optimal for all datasets and that there is a region in which minor changes of tvs do not strongly influence the accuracy of the map. Throughout the remainder of this section, a constant threshold tvs = 0.2 is used.

Fig. 8. Influence of the threshold parameter tvs on the relative MSE.

In order to give a better idea of the function of the mini-SLAM algorithm, the number of visual relations per node depending on the threshold tvs is shown in Fig. 9. The overview of all datasets presented in Table III also contains the number of similarity calculations performed and the evaluation run time on a Pentium 4 (2 GHz) processor with 512 MB of RAM. This time does not include the time required for the similarity computation. Each similarity calculation (including relative rotation and variance estimation) took 0.30 s using a dataset with an average of 522.3 features per node and a standard deviation of 21.4. However, note that the implementation used for feature matching in this paper was not optimized for computational efficiency.

Fig. 9. Amount of visual relations added to the graph depending on the threshold tvs.

### A. Outdoor/Indoor Dataset

A large set of 945 omnidirectional images was collected over a total distance of 1.4 km with height differences of up to 3 m. The robot was driven manually, and the data were collected in both indoor and outdoor areas over a period of two days (due to the limited capacity of the camera battery).

#### Comparison to Ground Truth Obtained From DGPS

To evaluate the accuracy of the created map, the robot position was measured with differential GPS (DGPS) while collecting the omnidirectional images. Thus, for every SLAM pose estimate x̂i, there is a corresponding DGPS position xDGPSi. DGPS gives a smaller position error than GPS. However, since only the signal noise is corrected, the problem with multipath reflections still remains.
DGPS is also available only if the radio link between the robot and the stationary GPS is functional. Thus, only a subset of pose pairs ⟨x̂i, xDGPSi⟩, i = 1, …, N, can be used for ground truth evaluation. DGPS measurements were considered only when at least five satellites were visible and the radio link to the stationary GPS was functional. The valid DGPS readings are indicated as light dots in Fig. 10. The total number of pairs used to calculate the MSE for the whole map was 377, compared to the total number of frames of 945. To measure the difference between the poses x̂ estimated with mini-SLAM and the DGPS positions xDGPS (using UTM WGS84, which provides a metric coordinate system), the two datasets have to be aligned. Since the correspondence of the filtered pose pairs ⟨x̂i, xDGPSi⟩ is known, an optimal rigid alignment can be determined directly with the method by Arun et al. [21], as described earlier.

Fig. 10. DGPS data xDGPS with aligned SLAM estimates displayed on an aerial image of the area. The darker squares show the mini-SLAM pose estimates, and the lighter squares show the DGPS poses for which the number of satellites was considered acceptable. The deviation seen at the bottom (the car park) is mainly caused by the fact that the car park is elevated compared with the rest of the environment.

Fig. 11. Evolution of the MSE between the ground truth position obtained from DGPS readings xDGPS and the mini-SLAM estimate of the robot pose as frames are added to the map. Drops in the MSE indicate that the consistency of the map has been increased. The final MSE of the raw odometry was 377.5 m².

The MSE between xDGPS and x̂ for the dataset shown in Fig. 10 is 4.89 m². To see how it evolves over time when creating the map, the MSE was calculated from the new estimates after each new frame was added. The result is shown in Fig. 11 and compared to the MSE obtained using only odometry to estimate the robot's position. Note that the MSE was evaluated for each frame added.
Therefore, when DGPS data are not available, the MSE of the odometry poses xo will stay constant for these frames. This can be seen, for example, for frames 250–440 in Fig. 11. For the same frames, the MSE of the SLAM estimate is not constant, since new estimates are computed for each frame added and loop closing also occurs indoors or, generally, when no DGPS is available. The first visual relation rv was added around frame 260. Until then, the error of the mini-SLAM estimate and the odometry MSE were the same.

### B. Multiple Floor Levels

This dataset was collected inside a department building at Örebro University. It includes all five floor levels and connections between the floor levels by three elevators. The data contain loops in 2-D coordinates and also loops involving different floor levels. This dataset consists of 419 panoramic images and covers a path with a length of 618 m. The geometrical layout differs for the different floors (see Fig. 13). No information about the floor level is used as an input to the system; hence, the robot pose is still described using (x, y, θ).

#### Visualized Results

There are no ground truth data available for this dataset. However, it is possible to get a visual impression of the accuracy of the results from Fig. 12. The figure shows occupancy grid maps obtained from laser scanner readings and raw odometry poses (left) or the mini-SLAM pose estimates (right), respectively. All floors are drawn on top of each other without any alignment. To further illustrate the mini-SLAM results, an occupancy map was also created separately for each floor from the laser scanner readings and mini-SLAM pose estimates (see Fig. 13). Here, each pose was assigned to the corresponding floor level manually.

Fig. 12. Occupancy grid map of all five floors drawn on top of each other. (Left) Gridmap created using pose information from raw odometry. (Right) Using the estimated robot poses from mini-SLAM.

Fig. 13.
Occupancy maps for floor levels 1–5, computed using laser scanner data at each estimated pose. The assignment of initial poses to floor levels was done manually and is used only to visualize these maps. This experiment mainly illustrates the robustness of data association that is achieved using omnidirectional vision data. The similarity matrix and a similarity access matrix for the “multiple floor levels” dataset are shown in Fig. 14. Fig. 14. (Left) Pose similarity matrix for the “multiple floor levels” dataset. (Right) Similarity access matrix showing which similarity measures were used in the mini-SLAM computation. Brighter pixels were used more often. ### C. Partly Overlapping Data This dataset consists of three separate indoor sets: laboratory (lab), student area (studarea), and a combination of both (labstudarea) (see Fig. 15). Similar to the dataset described in Section III-B, omnidirectional images, 2-D laser range data, and odometry were recorded. Ground truth poses xGT were determined using the laser scanner and odometry together with the MLR approach as in [2]. Fig. 15. Submaps for the partly overlapping data. (Left) lab. (Middle) studarea. (Right) labstudarea, overlapping both lab and studarea. #### Visualized Results Fig. 16 shows the final graph (left), a plot of laser scanner readings merged using poses from odometry (middle), and poses obtained with mini-SLAM (right). Fig. 17 shows the similarity matrix and the similarity access matrix for the labstudarea dataset. #### Comparison to Ground Truth Obtained From Laser-Based SLAM As described in Section II-D, fusion of multiple maps is motivated both by its need in multirobot mapping and by the increased accuracy of the resulting maps. Instead of simply adding the different maps onto each other, the fused maps also use additional information from the overlapping parts to improve the accuracy of the submaps. 
This is illustrated in Table IV, which shows the MSE (again obtained by determining the rigid alignment between the estimated poses and xGT) before and after the fusion was performed. While the datasets lab and studarea show a negligible change in accuracy, labstudarea clearly demonstrates a large improvement. #### Robustness Evaluation The suggested method relies on incremental pose estimates (odometry) and a visual similarity measure S. The robustness of the method is evaluated by corrupting these two inputs and evaluating the performance. For this evaluation, the studarea dataset is used, and the tests were repeated ten times. In the first test, the similarity measures S were corrupted by adding a random value drawn from a Gaussian distribution N(0,σ) with varying standard deviation σ (see Table V). The amount of added noise has to be compared to the range [0,1] in which the similarity measure S lies [see (2)]. Fig. 16. (Left) Part of the final MLR graph containing the three different datasets. (Middle) Laser-range scanning-based map using the raw odometry. (Right) Laser-range scanning-based map using the mini-SLAM poses. The robustness evaluation with respect to the similarity measure S shows that the system can handle additional noise to some extent, but incorrect visual relations will affect the accuracy of the final map. This illustrates that the proposed method, like many others, would have difficulties in perceptually similar locations when the uncertainty of the pose estimates is high. In the second test, the odometry values were corrupted by adding additional noise to the incremental distance d and the orientation θ.
The corrupted incremental distance d′ is calculated as

$$d' = d + 0.1\,d\,\mathcal{N}(0,\sigma) + 0.2\,\theta\,\mathcal{N}(0,\sigma) \eqno{(12)}$$

and the corrupted orientation θ′ as

$$\theta' = \theta + 0.2\,d\,\mathcal{N}(0,\sigma) + \theta\,\mathcal{N}(0,\sigma). \eqno{(13)}$$

Since the odometry pose estimates are computed incrementally, the whole later trajectory is affected when noise is added at a particular time step. The results of the robustness evaluation with the corrupted odometry are shown in Fig. 18 together with the MSE of the corrupted odometry. These results show that the system is robust to substantial odometry errors. A failure case is shown in Fig. 19. Fig. 17. (Left) Pose similarity matrix for the labstudarea dataset. (Right) Similarity access matrix showing which similarity measures are used in the proposed method. Brighter pixels were used more often. TABLE IV MSE Results Before and After Merging of the Datasets and Using Odometry Only. TABLE V MSE Results (Mean and Stddev) After Adding a Random Variable Drawn From N(0,σ) to Each Similarity Measure S_{a,b}. Fig. 18. MSE results (mean and stddev) for x (odometry) and the estimated poses after corrupting the odometry by adding random values drawn from N(0,σ). The plot also shows the MSE when the odometry covariance is increased with the added noise. Fig. 19. Failure case where the corrupted odometry error became too large, resulting in a corrupted map. (Left) SLAM map. (Right) Raw odometry. ## Conclusion and Future Work Mini-SLAM combines the principle of using similarity of panoramic images to close loops at the topological level with a graph relaxation method to obtain a metrically accurate map representation, and with a novel method to determine the covariance for visual relations based on the visual similarity of neighboring poses.
The proposed method uses visual similarity to compensate for the lack of range information about local image features, avoiding computationally expensive and less general methods such as tracking of individual image features. Experimentally, the method scales well to the investigated environments. The experimental results are presented by visual means (as occupancy maps rendered from laser scans and poses determined by the mini-SLAM algorithm) and by comparison with ground truth (obtained from DGPS outdoors or laser-based SLAM indoors). The results demonstrate that the mini-SLAM method is able to produce topologically correct and geometrically accurate maps at low computational cost. A simple extension of the method was used to fuse multiple datasets so as to obtain improved accuracy. The method has also been used without any modifications to successfully map a building consisting of five floor levels. Mini-SLAM generates a 2-D map based on 2-D input from odometry. It is worth noting that the “outdoor/indoor” dataset includes variations of up to 3 m in height. This indicates that the mini-SLAM can cope with violations of the flat floor assumption to a certain extent. We expect a graceful degradation in map accuracy as the roughness of the terrain increases. The representation should still be useful for self-localization using 2-D odometry and image similarity, e.g. , using the global localization method in [1], which, in addition, could be used to improve the robustness toward perceptual aliasing when fusing multiple datasets. In extreme cases, of course, it is possible that the method would create inconsistent maps, and a 3-D representation should be considered. The bottleneck of the current implementation in terms of computation time is the calculation of image similarity, which involves the comparison of many local features. The suggested approach, however, is not limited to the particular measure of image similarity used in this paper. 
There are many possibilities to increase the computation speed, either by using alternative similarity measures that are faster to compute while still being distinctive enough, or by optimizing the implementation, for example, by executing image comparisons on a graphics processing unit (GPU) [22]. Further plans for future work include an investigation of the possibility of using a standard camera instead of an omnidirectional camera, and the incorporation of vision-based odometry to realize a completely vision-based system.

## Footnotes

Manuscript received December 14, 2007; revised July 12, 2008. First published September 26, 2008. This paper was recommended for publication by Associate Editor J. Leonard and Editor L. Parker upon evaluation of the reviewers' comments. H. Andreasson and A. J. Lilienthal are with the Applied Autonomous Sensor System (AASS) Research Center, Örebro University, Örebro SE-701 82, Sweden (e-mail: henrik.andreasson@tech.oru.se; achim.lilienthal@tech.oru.se). T. Duckett is with the Department of Computer Science, University of Lincoln, Lincoln LN6 7TS, U.K. (e-mail: tduckett@lincoln.ac.uk). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

## References

1. H. Andreasson, A. Treptow, and T. Duckett, "Localization for mobile robots using panoramic vision, local features and particle filter," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2005, pp. 3348–3353.
2. U. Frese, P. Larsson, and T. Duckett, "A multilevel relaxation algorithm for simultaneous localisation and mapping," IEEE Trans. Robot., vol. 21, no. 2, pp. 196–207, Apr. 2005.
3. H. Andreasson, T. Duckett, and A. Lilienthal, "Mini-SLAM: Minimalistic visual SLAM in large-scale environments based on a new interpretation of image similarity," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Rome, Italy, 2007, pp. 4096–4101.
4. D. Lowe, "Object recognition from local scale-invariant features," in Proc. Int. Conf. Comput. Vision (ICCV), Corfu, Greece, 1999, pp. 1150–1157.
5. S. Se, D. Lowe, and J. Little, "Mobile robot localization and mapping with uncertainty using scale-invariant visual landmarks," Int. J. Robot. Res., vol. 21, no. 8, pp. 735–758, 2002.
6. T. Barfoot, "Online visual motion estimation using FastSLAM with SIFT features," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), 2005, pp. 579–585.
7. P. Jensfelt, D. Kragic, J. Folkesson, and M. Björkman, "A framework for vision based bearing only 3D SLAM," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2006, pp. 1944–1950.
8. N. Karlsson, E. D. Bernardo, J. Ostrowski, L. Goncalves, P. Pirjanian, and M. E. Munich, "The vSLAM algorithm for robust localization and mapping," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2005, pp. 24–29.
9. P. Elinas, R. Sim, and J. Little, "σSLAM: Stereo vision SLAM using the Rao-Blackwellised particle filter and a novel mixture proposal distribution," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2006, pp. 1564–1570.
10. J. Sez and F. Escolano, "6DOF entropy minimization SLAM," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Orlando, FL, 2006, pp. 1548–1555.
11. A. Davison, "Real-time simultaneous localisation and mapping with a single camera," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2003, pp. 1403–1410.
12. K. L. Ho and P. Newman, "Loop closure detection in SLAM by combining visual and spatial appearance," Robot. Auton. Syst., vol. 54, no. 9, pp. 740–749, Sep. 2006.
13. P. M. Newman, D. M. Cole, and K. L. Ho, "Outdoor SLAM using visual appearance and laser ranging," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2006, pp. 1180–1187.
14. G. Grisetti, D. Lordi Rizzini, C. Stachniss, E. Olson, and W. Burgard, "Online constraint network optimization for efficient maximum likelihood map learning," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2008, pp. 1880–1885.
15. E. Olson, J. Leonard, and S. Teller, "Fast iterative optimization of pose graphs with poor initial estimates," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2006, pp. 2262–2269.
16. A. I. Eliazar and R. Parr, "Learning probabilistic motion models for mobile robots," in Proc. 21st Int. Conf. Mach. Learning (ICML), Banff, AB, Canada, 2004, pp. 32–39.
17. J. Gonzalez-Barbosa and S. Lacroix, "Rover localization in natural environments by indexing panoramic images," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2002, pp. 1365–1370.
18. J. Uhlmann, "Dynamic map building and localization for autonomous vehicles," Ph.D. dissertation, Univ. Oxford, Oxford, U.K., 1995.
19. S. J. Julier and J. K. Uhlmann, "Using covariance intersection for SLAM," Robot. Auton. Syst., vol. 55, no. 1, pp. 3–20, 2007.
20. A. Howard, "Multi-robot simultaneous localization and mapping using particle filters," presented at the Robot.: Sci. Syst. Conf., Cambridge, MA, Jun. 2005.
21. K. S. Arun, T. S. Huang, and S. D. Blostein, "Least-squares fitting of two 3-D point sets," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-9, no. 5, pp. 698–700, Sep. 1987.
22. S. N. Sinha, J.-M. Frahm, M. Pollefeys, and Y. Genc, "GPU-based video feature tracking and matching," in Proc. Workshop Edge Comput. Using New Commodity Archit. (EDGE), Chapel Hill, NC, 2006.

This paper appears in: IEEE Transactions on Robotics, October 2008, pp. 991–1001, ISSN 1552-3098, DOI: 10.1109/TRO.2008.2004642. Date of current version: 31 Oct. 2008.
https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-1-functions-and-limits-1-5-the-limit-of-a-function-1-5-exercises-page-60/14
## Calculus 8th Edition $f(x)=\dfrac{x^2+x}{\sqrt {x^3+x^2}}$ First, graph the function. (a) As x goes to 0 from the left-hand side, y approaches -1. Therefore, $\lim\limits_{x \to 0^-}f(x)=-1$ (b) As x goes to 0 from the right-hand side, y approaches 1. Therefore, $\lim\limits_{x \to 0^+}f(x)=1$ (c) $\lim\limits_{x \to 0}f(x)$ does not exist because $\lim\limits_{x \to 0^-}f(x)\ne\lim\limits_{x \to 0^+}f(x)$
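The two one-sided limits can also be confirmed numerically (a quick sanity check, not part of the textbook solution): near 0, the radicand x³ + x² = x²(x + 1) is positive on both sides, so f simplifies to sign(x)·√(x + 1).

```python
import math

def f(x):
    # the function from the exercise
    return (x**2 + x) / math.sqrt(x**3 + x**2)

# approach 0 from the left and from the right
for h in (1e-3, 1e-6, 1e-9):
    print(f(-h), f(h))  # values tend to -1 and 1, respectively
```

Since the two one-sided values disagree, the two-sided limit at 0 cannot exist.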
https://mathalino.com/reviewer/engineering-mechanics/problem-535-friction-wedges
Problem 535 A wedge is used to split logs. If φ is the angle of friction between the wedge and the log, determine the maximum angle a of the wedge so that it will remain embedded in the log. Solution 535
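The solution body did not survive extraction, so here is a hedged sketch of the standard equilibrium argument (mine, not the site's original working). Each face of the wedge makes an angle a/2 with the axis. The normal reaction N on each face has an axial component N sin(a/2) pushing the wedge out, while the limiting friction N tan φ along each face has an axial component N tan φ cos(a/2) holding it in. The wedge stays embedded while 2N sin(a/2) ≤ 2N tan φ cos(a/2), i.e. tan(a/2) ≤ tan φ, giving a maximum angle a = 2φ. A quick numerical check:

```python
import math

def extraction_tendency(a, phi, N=1.0):
    """Net axial force pushing the wedge back out of the log.
    a: full wedge angle, phi: friction angle (both in radians).
    Positive -> the wedge slips out; non-positive -> it stays embedded."""
    push = 2 * N * math.sin(a / 2)                  # axial part of the normal reactions
    hold = 2 * N * math.tan(phi) * math.cos(a / 2)  # axial part of limiting friction
    return push - hold

phi = math.radians(15)
print(extraction_tendency(math.radians(29.9), phi) < 0)  # True: a < 2*phi, stays embedded
print(extraction_tendency(math.radians(30.1), phi) > 0)  # True: a > 2*phi, slips out
```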
https://math.stackexchange.com/questions/3125319/prove-that-deta2019-b2019-and-deta2019-b2019-are-divis
# Prove that $\det(A^{2019} +B^{2019} )$ and $\det(A^{2019} -B^{2019} )$ are divisible by $4$ Let $$A, B \in M_2(\mathbb{Z})$$ so that $$\det A=\det B=\frac{1}{4} \det(A^2+B^2)=1$$ Prove that $$\det(A^{2019} +B^{2019} )$$ and $$\det(A^{2019} -B^{2019} )$$ are divisible by $$4$$. The only observation I have made is that $$\det(A^{2019} +B^{2019}) +\det(A^{2019} - B^{2019} )=4$$ so suffice it to prove that one of them is divisible by $$4$$. EDIT : This was featured on the regional stage of the maths olympiad in Romania on Saturday, so it is an actual problem, not something I came up with. It is relevant to me because I am preparing for the next stage. EDIT : $$AB=BA$$ indeed, I forgot to include it when I posted the problem and I am sorry for this. Since there are people in the comments eager to see the official paper, here it is : https://imgur.com/a/z8v3VZp As you can see, this contest took place on the 24th of February(I know we have a pretty strange date format, but this is what 24.02. 2019 means) and it wasn't an online competition, so I really was honest when I said I was not cheating. • What's the source of this problem, please? – Gerry Myerson Feb 24 at 22:57 • How did you make that only observation? – Berci Feb 25 at 1:36 • @Gerry Myerson contest from my country – MathEnthusiast Feb 25 at 5:32 • Definitely not. It was featured on the maths olympiad on Saturday, I would never post ongoing problems – MathEnthusiast Feb 25 at 11:44 • I added a screenshot of the official paper @Jyrki Lahtonen – MathEnthusiast Mar 11 at 18:04 ## 3 Answers The problem statement in general is false if $$A$$ and $$B$$ do not commute. E.g. 
consider $$A=\pmatrix{1&-1\\ 2&-1}, \ B=\pmatrix{0&-1\\ 1&0}.$$ Then $$A^2=B^2=-I$$, so that $$\det(A)=\det(B)=1\ \text{ and } \ \frac14\det\left(A^2+B^2\right)=\frac14\det(-2I)=1.$$ Yet, as $$A^{2019}=(A^2)^{1009}A=(-I)^{1009}A=-A$$ and the analogous holds for $$B$$, we have \begin{aligned} \det\left(A^{2019}+B^{2019}\right) =\det(-A-B)=\det(A+B) =\det\pmatrix{1&-2\\ 3&-1} =5, \end{aligned} which is not divisible by $$4$$. One can easily verify that $$AB\ne BA$$ in this example. Nonetheless, the problem statement is true if $$AB=BA$$. This can be proved easily by considering the eigenvalues of $$A,\ B$$ and $$A^2+B^2$$. The other answer demonstrates that the result is false unless $$AB=BA$$; here's a proof in that case. Note that $$A^2+B^2=\dfrac12\left((A+B)^2+(A-B)^2\right)$$. Therefore, $$4=\det(A^2+B^2)=\frac14\det\left((A+B)^2+(A-B)^2\right),$$ i.e. $$\det\left((A+B)^2+(A-B)^2\right)=16.$$ But because $$\det(X+Y)=2\det X+2\det Y-\det(X-Y)$$ for any two $$2\times 2$$ matrices $$X,Y$$, we have $$\det\left((A+B)^2+(A-B)^2\right)=2\det(A+B)^2+2\det(A-B)^2-\det(4AB),$$ so by the condition, $$16 = \det(A+B)^2+\det(A-B)^2.$$ Now, $$A+B$$ and $$A-B$$ are both integer matrices, so their determinants squared are nonnegative perfect squares. The only two nonnegative perfect squares that sum to $$16$$ are $$0,16$$, therefore one of $$\det(A+B),\det(A-B)$$ is $$0$$ and the other is $$\pm 4$$. We are done because of the factorisations $$A^{2019}+B^{2019}=(A+B)(A^{2018}-A^{2017}B+\dots+B^{2018}),$$ $$A^{2019}-B^{2019}=(A-B)(A^{2018}+A^{2017}B+\dots+B^{2018}).$$ • Why does $(A+B)^2-(A-B)^2=4AB$? It is $2(AB+BA)$. And the final identities don't hold generally, unless you assume $AB=BA$. – egreg Mar 10 at 16:49 • @egreg You're right indeed, I'd implicitly assumed they commute without realising it. I have edited my answer to reflect that. – YiFan Mar 10 at 21:58 The OP has clarified that $$AB=BA$$.
Then the problem can be solved easily by applying the identity $$\det(X+Y)+\det(X-Y)=2\left(\det(X)+\det(Y)\right)\tag{1}$$ for $$2\times2$$ matrices twice. First, by putting $$(X,Y)=(A^2,B^2)$$ into $$(1)$$, we get $$4+\det\left[(A+B)(A-B)\right]=4.$$ Therefore at least one of $$A+B$$ or $$A-B$$ is singular. Since they are factors of $$A^{2019}+B^{2019}$$ and $$A^{2019}-B^{2019}$$ respectively, one of $$\det(A^{2019}+B^{2019})$$ and $$\det(A^{2019}-B^{2019})$$ is $$0$$. Second, by putting $$(X,Y)=(A^{2019},B^{2019})$$ into $$(1)$$, we get $$\det(A^{2019}+B^{2019})+\det(A^{2019}-B^{2019})=4.$$ Hence the other determinant is $$4$$.
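Both the non-commuting counterexample and the 2×2 determinant identity the answers rely on are easy to machine-check. The snippet below (illustrative, not part of the original answers) verifies them with plain integer arithmetic:

```python
# 2x2 integer matrices as tuples ((a, b), (c, d)); enough to check the claims.

def mmul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def mpow(X, n):
    # binary exponentiation; entries stay small here because A^2 = B^2 = -I
    R = ((1, 0), (0, 1))
    while n:
        if n & 1:
            R = mmul(R, X)
        X = mmul(X, X)
        n >>= 1
    return R

def madd(X, Y, s=1):
    return tuple(tuple(X[i][j] + s * Y[i][j] for j in range(2)) for i in range(2))

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

# The counterexample from the first answer (A and B do not commute):
A = ((1, -1), (2, -1))
B = ((0, -1), (1, 0))
assert det(A) == det(B) == 1
assert det(madd(mpow(A, 2), mpow(B, 2))) == 4      # (1/4) det(A^2 + B^2) = 1

X, Y = mpow(A, 2019), mpow(B, 2019)
# The 2x2 identity det(X+Y) + det(X-Y) = 2(det X + det Y) still holds...
assert det(madd(X, Y)) + det(madd(X, Y, -1)) == 2 * (det(X) + det(Y))
# ...but det(A^2019 + B^2019) = 5, not divisible by 4:
print(det(madd(X, Y)))  # 5
```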
http://gmat.kmf.com/question/9c26yk.html
# 考满分网 (kmf.com)

While on a straight road, Car X and Car Y are traveling at different constant rates. If Car X is now 1 mile ahead of Car Y, how many minutes from now will Car X be 2 miles ahead of Car Y? (1) Car X is traveling at 50 miles per hour and Car Y is traveling at 40 miles per hour. (2) Three minutes ago Car X was $\frac1 2$ mile ahead of Car Y. • A Statement (1) ALONE is sufficient, but statement (2) alone is not sufficient. • B Statement (2) ALONE is sufficient, but statement (1) alone is not sufficient. • C BOTH statements TOGETHER are sufficient, but NEITHER statement ALONE is sufficient. • D EACH statement ALONE is sufficient. • E Statements (1) and (2) TOGETHER are NOT sufficient. • Subject: Math (Data Sufficiency) • Source: OG12-102, OG15-106 • Accuracy rate: 71%
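Each statement alone can be checked with exact arithmetic (my working, not the site's solution): both yield 6 minutes, so each statement alone is sufficient, answer (D).

```python
from fractions import Fraction

# Statement (1): speeds given, so the relative speed is 50 - 40 = 10 mph.
rel_speed = Fraction(50 - 40)                 # miles per hour
minutes_1 = Fraction(2 - 1) / rel_speed * 60  # time to gain one more mile
print(minutes_1)  # 6

# Statement (2): X gained 1 - 1/2 = 1/2 mile over the last 3 minutes,
# so the constant closing rate is 1/6 mile per minute.
closing_rate = (1 - Fraction(1, 2)) / 3       # miles per minute
minutes_2 = (2 - 1) / closing_rate
print(minutes_2)  # 6
```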
http://activebstars.iag.usp.br/bstars/index.php/be-star-newsletter/volume-41/abstracts-41/50-abstract-digest-3
Abstract Digest #3 Due to the deadline for proceedings contributions to IAUS 307, held in Geneva, and the large number of preprints from that conference, those contributions will be published in an extra edition in a few days. Today's abstracts: • "The environment of the fast rotating star Achernar. III. Photospheric parameters revealed by the VLTI" by A. Domiciano de Souza et al. • "Beyond the diffraction limit of optical/IR interferometers. II. Stellar parameters of rotating stars from differential phases" by M. Hadjara et al. • "Study of the sub-AU disk of the Herbig B[e] star HD 85567 with near-infrared interferometry" by J. Vural et al. • "HD314884: A Slowly Pulsating B star in a Close Binary" by Christopher B. Johnson et al. • "Time-dependent modeling of extended thin decretion disks of critically rotating stars" by Petr Kurfürst, Achim Feldmeier, Jirí Krticka • "New objects with the B[e] phenomenon in the Large Magellanic Cloud" by H. Levato, A.S. Miroshnichenko, C. Saffe • "Asteroseismology revealing trapped modes in KIC 10553698A" by R. H. Ostensen et al. • "Finding eta Car Analogs in Nearby Galaxies Using Spitzer: II. Identification of An Emerging Class of Extragalactic Self-Obscured Stars" by Rubab Khan et al. • "eta Carinae Baby Homunculus Uncovered by ALMA" by Zulema Abraham, Diego Falceta-Gonçalves, Pedro P. B. Beaklin The environment of the fast rotating star Achernar. III. Photospheric parameters revealed by the VLTI A. Domiciano de Souza et al. Context. Rotation significantly impacts the structure and life of stars. In phases of high rotation velocity (close to critical), the photospheric structure can be highly modified, presenting in particular geometrical deformation (rotational flattening) and latitudinal-dependent flux (gravity darkening). The fastest known rotators among the nondegenerate stars close to the main sequence, Be stars, are key targets for studying the effects of fast rotation on stellar photospheres. Aims.
Abstract Digest #3 Due to the deadline for proceeding contributions to IAUS 307, held in Geneva, and the large number of preprints from that conference, those contribtuions will be published in an extra edition in a few days. Today's abstracts: • "The environment of the fast rotating star Achernar. III. Photospheric parameters revealed by the VLTI" by A. Domiciano de Souza et al. • "Beyond the diffraction limit of optical/IR interferometers. II. Stellar parameters of rotating stars from differential phases" by M. Hadjara et al. • "Study of the sub-AU disk of the Herbig B[e] star HD 85567 with near-infrared interferometry" by J. Vural at al. • "HD314884: A Slowly Pulsating B star in a Close Binary" by Christopher B. Johnson et al. • "Time-dependent modeling of extended thin decretion disks of critically rotating stars" by Petr Kurfürst, Achim Feldmeier, Jirí Krticka • "New objects with the B[e] phenomenon in the Large Magellanic Cloud" by H. Levato, A.S. Miroshnichenko, C. Saffe • "Asteroseismology revealing trapped modes in KIC 10553698A" by R. H. Ostensen et al. • "Finding eta Car Analogs in Nearby Galaxies Using Spitzer: II. Identification of An Emerging Class of Extragalactic Self-Obscured Stars" by Rubab Khan et al. • "eta Carinae Baby Homunculus Uncovered by ALMA" by Zulema Abraham, Diego Falceta-Gonçalves, Pedro P. B. Beaklin The environment of the fast rotating star Achernar. III. Photospheric parameters revealed by the VLTI A. Domiciano de Souza et al. Context. Rotation significantly impacts on the structure and life of stars. In phases of high rotation velocity (close to critical), the photospheric structure can be highly modified, and present in particular geometrical deformation (rotation flattening) and latitudinal- dependent flux (gravity darkening). The fastest known rotators among the nondegenerate stars close to the main sequence, Be stars, are key targets for studying the effects of fast rotation on stellar photospheres. Aims. 
We seek to determine the purely photospheric parameters of Achernar based on observations recorded during an emission-free phase (normal B phase). Methods. Several recent works proved that optical/IR long-baseline interferometry is the only technique able to sufficiently spatially resolve and measure photospheric parameters of fast rotating stars. We thus analyzed ESO-VLTI (PIONIER and AMBER) interferometric observations of Achernar to measure its photospheric parameters by fitting our physical model CHARRON using a Markov chain Monte Carlo method. This analysis was also complemented by spectroscopic, polarimetric, and photometric observations to investigate the status of the circumstellar environment of Achernar during the VLTI observations and to cross-check our model-fitting results. Results. Based on VLTI observations that partially resolve Achernar, we simultaneously measured five photospheric parameters of a Be star for the first time: equatorial radius (equatorial angular diameter), equatorial rotation velocity, polar inclination, position angle of the rotation axis projected on the sky, and the gravity darkening beta coefficient (effective temperature distribution). The close circumstellar environment of Achernar was also investigated based on contemporaneous polarimetry, spectroscopy, and interferometry, including image reconstruction. This analysis did not reveal any important circumstellar contribution, so that Achernar was essentially in a normal B phase at least from mid-2009 to end-2012, and the model parameters derived in this work provide a fair description of its photosphere. Finally, because Achernar is the flattest interferometrically resolved fast rotator to-date, the measured beta and flattening, combined with values from previous works, provide a crucial test for a recently proposed gravity darkening model. 
This model offers a promising explanation for the fact that the measured beta parameter decreases with flattening and shows significantly lower values than the classical prediction of von Zeipel. Available at: A&A in press Beyond the diffraction limit of optical/IR interferometers. II. Stellar parameters of rotating stars from differential phases M. Hadjara et al. Context. As previously demonstrated on Achernar, one can derive the angular radius, rotational velocity, axis tilt, and orientation of a fast-rotating star from the differential phases obtained by spectrally resolved long-baseline interferometry using Earth-rotation synthesis. Aims. We applied this method to a small sample of stars of different spectral types and classes, in order to generalize the technique to other rotating stars across the H-R diagram and determine their fundamental parameters. Methods. We used differential phase data from the AMBER/VLTI instrument obtained prior to the refurbishing of its spectrometer in 2010. With the exception of Fomalhaut, which was observed in the medium-resolution mode of AMBER (R approx 1500), our three other targets, Achernar, Altair, and delta Aquilae, offered high-resolution (R approx 12000) spectro-interferometric data around the Br-gamma absorption line in the K band. These data were used to constrain the input parameters of an analytical, yet realistic, model to interpret the observations, with a systematic approach to the error budget analysis in order to conclude robustly on the physics of our four targets. We applied the super resolution provided by the differential phases phi_diff to measure the size (equatorial radius Req and equatorial angular diameter), the equatorial rotation velocity (Veq), the inclination angle (i), and the rotation axis position angle (PArot) of four fast-rotating stars: Achernar, Altair, delta Aquilae, and Fomalhaut. The stellar parameters of the targets were constrained using SCIROCCO, a semi-analytical algorithm dedicated to fast rotators. Results.
The derived parameters for each star were Req = 11.2 ± 0.5 Rsun, Veq sin i = 290 ± 17 km/s, PArot = 35.4 deg ± 1.4 deg for Achernar; Req = 2.0 ± 0.2 Rsun, Veq sin i = 226 ± 34 km/s, PArot = -65.5 deg ± 5.5 deg for Altair; Req = 2.2 ± 0.3 Rsun, Veq sin i = 74 ± 35 km/s, PArot = -101.2 deg ± 14 deg for delta Aquilae; and Req = 1.8 ± 0.2 Rsun, Veq sin i = 93 ± 16 km/s, PArot = 65.6 deg ± 5 deg for Fomalhaut. They were found to be compatible with previously published values from differential phase and visibility measurements, while we were able to determine, for the first time, the inclination angle i of Fomalhaut (i = 90 deg ± 9 deg) and delta Aquilae (i = 81 deg ± 13 deg), and the rotation-axis position angle PArot of delta Aquilae. Conclusions. Beyond the theoretical diffraction limit of an interferometer (the ratio of the wavelength to the baseline), spatial super resolution is well suited to systematically estimating the angular diameters of rotating stars and their fundamental parameters with a few sets of baselines and Earth-rotation synthesis, provided the spectral resolution is high enough. Available at: A&A in press Study of the sub-AU disk of the Herbig B[e] star HD 85567 with near-infrared interferometry J. Vural et al. Available at: A&A in press HD314884: A Slowly Pulsating B star in a Close Binary Christopher B. Johnson et al. We present the results of a spectroscopic and photometric analysis of HD314884, a slowly pulsating B star (SPB) in a binary system with detected soft X-ray emission. We spectrally classify the B star as a B5V-B6V star with T$_{eff}$ = 15,490 $\pm$ 310 K, log $g$ = 3.75 $\pm$ 0.25 dex, and a photometric period of P$_{0}$ = 0.889521(12) days. A spectroscopic period search reveals an orbital period for the system of P$_{orb}$ = 1.3654(11) days.
The discrepancy between the two periods and the identification of a second and third distinct frequency in the photometric Fourier transform at P$_1$ = 3.1347(56) and P$_2$ = 1.517(28) days provide evidence that HD314884 is a slowly pulsating B star (SPB) with at least three oscillation frequencies. These frequencies appear to originate from higher-order, non-linear tidal pulsations. Using the dynamical parameters obtained from the radial velocity curve, we find the most probable companion mass to be M$_1$ $\sim$ 0.8 M$_{\odot}$, assuming a typical mass for the B star and the most probable inclination. We conclude that the X-ray source companion to HD314884 is most likely a coronally active G-type star or a white dwarf (WD), with no apparent emission lines in the optical spectrum. The probability distribution of the companion star's mass spans 0.6-2.3 M$_{\odot}$ at 99$\%$ confidence, which allows the possibility of a neutron star companion. The X-ray source is unlikely to be a black hole unless it has a very low mass or the binary has a low inclination. Available at: arXiv:1407.7938 Time-dependent modeling of extended thin decretion disks of critically rotating stars Petr Kurfürst, Achim Feldmeier, Jirí Krticka During their evolution massive stars can reach the phase of critical rotation, when a further increase in rotational speed is no longer possible. Direct centrifugal ejection from a critically or near-critically rotating surface forms a gaseous equatorial decretion disk. Anomalous viscosity provides the efficient mechanism for transporting the angular momentum outwards. The outer part of the disk can extend up to a very large distance from the parent star. We study the evolution of density, radial and azimuthal velocity, and angular momentum loss rate of equatorial decretion disks out to very distant regions. We investigate how the physical characteristics of the disk depend on the distribution of temperature and viscosity.
We calculated stationary models using the Newton-Raphson method. For time-dependent hydrodynamic modeling we developed a numerical code based on an explicit finite difference scheme on an Eulerian grid, including full Navier-Stokes shear viscosity. The sonic point distance and the maximum angular momentum loss rate strongly depend on the temperature profile and are almost independent of viscosity. The rotational velocity at large radii rapidly drops according to the temperature and viscosity distribution. The total amount of disk mass and the disk angular momentum increase with decreasing temperature and viscosity. The time-dependent one-dimensional models basically confirm the results obtained in the stationary models as well as the assumptions of the analytical approximations. Including the full Navier-Stokes viscosity, we systematically avoid the rotational velocity sign change at large radii. The unphysical drop of the rotational velocity and angular momentum loss at large radii (present in some models) can be avoided in models with decreasing temperature and viscosity. Available at: arXiv:1407.7952 New objects with the B[e] phenomenon in the Large Magellanic Cloud H. Levato, A.S. Miroshnichenko, C. Saffe Aims. The study is aimed at discovering new objects with the B[e] phenomenon in the Large Magellanic Cloud. Methods. We report medium-resolution optical spectroscopic observations of two newly found (ARDB 54 and NOMAD 0181-0125572) and two previously known (Hen S–59 and Hen S–137) supergiants with the B[e] phenomenon in the Large Magellanic Cloud. The observations were obtained with the GMOS spectrograph at the southern Gemini telescope. Results. The optical spectra and fundamental parameters of ARDB 54 and NOMAD 0181-0125572 are presented for the first time. We found that the Balmer line profiles of Hen S–59 and Hen S–137 were different from those observed in their spectra nearly 20 years ago.
We suggest a higher effective temperature and luminosity for both objects. With the new fundamental parameters, the lowest luminosity for known supergiants with the B[e] phenomenon in the Magellanic Clouds is higher than previously thought (log L/Lsun ~ 4.5 instead of 4.0). The object Hen S–59 may be a binary system based on its UV excess, variable B–V color index and radial velocity of emission lines, and periodically variable I-band brightness. Available at: A&A in press Asteroseismology revealing trapped modes in KIC 10553698A R. H. Ostensen et al. The subdwarf-B pulsator KIC 10553698A is one of 16 such objects observed with one-minute sampling for most of the duration of the Kepler Mission. Like most of these stars, it displays a rich g-mode pulsation spectrum with several clear multiplets that maintain regular frequency splitting. We identify these pulsation modes as components of rotationally split multiplets in a star rotating with a period of ~41 d. From 162 clearly significant periodicities, we are able to identify 156 as likely components of l = 1 or l = 2 multiplets. For the first time we are able to detect l = 1 modes that interpose in the asymptotic period sequences and that provide a clear indication of mode trapping in a stratified envelope, as predicted by theoretical models. A clear signal is also present in the Kepler photometry at 3.387 d. Spectroscopic observations reveal a radial-velocity amplitude of 64.8 km/s. We find that the radial-velocity variations and the photometric signal have phase and amplitude that are perfectly consistent with a Doppler-beaming effect and conclude that the unseen companion, KIC 10553698B, must be a white dwarf, most likely with a mass close to 0.6 Msun. Available at: A&A in press Finding eta Car Analogs in Nearby Galaxies Using Spitzer: II. Identification of An Emerging Class of Extragalactic Self-Obscured Stars Rubab Khan et al.
Understanding the late-stage evolution of the most massive stars such as $\eta$ Carinae is challenging because no true analogs of $\eta$ Car have been clearly identified in the Milky Way or other galaxies. In Khan et al. (2013), we utilized Spitzer IRAC images of $7$ nearby ($\lesssim4$ Mpc) galaxies to search for such analogs, and found $34$ candidates with flat or red mid-IR spectral energy distributions. Here, in Paper II, we present our characterization of these candidates using multi-wavelength data from the optical through the far-IR. Our search detected no true analogs of $\eta$ Car, which implies an eruption rate that is a fraction $0.01\lesssim F \lesssim 0.19$ of the ccSN rate. This is roughly consistent with each $M_{ZAMS} \gtrsim 70M_\odot$ star undergoing $1$ or $2$ outbursts in its lifetime. However, we do identify a significant population of $18$ lower luminosity $\left(\log(L/L_\odot)\simeq5.5-6.0\right)$ dusty stars. Stars enter this phase at a rate that is a fraction $0.09 \lesssim F \lesssim 0.55$ of the ccSN rate, and this is consistent with all $25 < M_{ZAMS} < 60M_\odot$ stars undergoing an obscured phase lasting at most a few thousand years once or twice. These phases constitute a negligible fraction of the post-main-sequence lifetimes of massive stars, which implies that these events are likely to be associated with special periods in the evolution of the stars. The mass of the obscuring material is of order $\sim M_\odot$, and we simply do not find enough heavily obscured stars for these phases to represent more than a modest fraction ($\sim 10\%$, not $\sim 50\%$) of the total mass lost by these stars. In the long term, the sources that we identified will be prime candidates for detailed physical analysis with JWST. Available at: arXiv:1407.7530 eta Carinae Baby Homunculus Uncovered by ALMA Zulema Abraham, Diego Falceta-Gonçalves, Pedro P. B.
Beaklin We report observations of eta Carinae obtained with ALMA in the continuum at 100, 230, 280, and 660 GHz in 2012 November, with a resolution that varied from 2.''88 to 0.''45 for the lowest and highest frequencies, respectively. The source is not resolved, even at the highest frequency; its spectrum is characteristic of thermal bremsstrahlung from a compact source, but different from the spectrum of an optically thin wind. The recombination lines H42alpha, He42alpha, H40alpha, He40alpha, H50beta, H28alpha, He28alpha, H21alpha, and He21alpha were also detected, and their intensities reveal non-local thermodynamic equilibrium effects. We found that the line profiles could only be fit by an expanding shell of dense and ionized gas, which produces a slow shock in the surroundings of eta Carinae. Combined with fits to the continuum, we were able to constrain the shell size, radius, density, temperature, and velocity. The detection of the He recombination lines is compatible with the high-temperature gas and requires a high-energy ionizing photon flux, which must be provided by the companion star. The mass-loss rate and wind velocity necessary to explain the formation of the shell are compatible with a luminous blue variable eruption. The position, velocity, and physical parameters of the shell coincide with those of the Weigelt blobs. The dynamics found for the expanding shell correspond to matter ejected by eta Carinae in 1941 in an event similar to that which formed the Little Homunculus; for that reason, we called the new ejecta the "Baby Homunculus." Available at: ApJ 791, 95
http://rbaron.net/blog/2014/01/02/Simulating-dynamic-systems-in-javascript.html
#### Simulating dynamic systems in javascript On 02 Jan 2014 I have played with the idea of implementing an experimental, toy-sized, micro subset of Simulink in javascript for a while. Out of that idea blocksim came to be. It is in no way a candidate for serious numerical simulation, but a simple toy for playing around. It uses Euler's method for integration, with a configurable step and duration. The user interface was built with Raphaël.js (unrelated to me!) and nv/D3 for plotting. The idea of this post is to model and simulate a super simple system in both MATLAB and blocksim. I can't stress this enough: they're not comparable at all, and the point of this post is simply to provide an introduction to blocksim. Let us discuss the following mass-spring-damper system: The spring with constant $k$ is responsible for a force that is proportional to the displacement $x$ of the mass $m$, and the damper with constant $c$ is responsible for a force that is proportional to its speed, $\dot{x}$. Drawing a free body diagram of the mass $m$, we get: To derive the equation of motion of the center of mass of the block $m$, we may apply Newton's second law: $$m\ddot{x} = -c\dot{x} - kx$$ That's it. Once we solve this differential equation for $x(t)$, we will find how the mass $m$ behaves in time, given the coefficients $k$ and $c$, as well as its initial position $x(0)$ and speed $\dot{x}(0)$. At this point, you might be thinking "Hey! That's a homogeneous second-order differential equation with constant coefficients. There are analytical solutions for those! Why do we need to simulate it numerically?" And you are right: we don't need to. But there are also reasons to do so: • We can compare the simulation against the analytical solution for benchmarking; • Based on experience, we have an idea of what the solution should look like. If you pull a spring in real life, it will want to get back to its resting position, either through an oscillatory movement or a "direct" one.
With our model in hand, we may proceed to write it in a way our software understands. Simulink works in a graphical way using blocks, so our differential equation must be written using those. One very important block is the integrator, which takes a variable as input and outputs its integral. Since our differential equation is a second-order one, we may start by using two integrators, since we are interested in finding $x(t)$ given a model with $\ddot{x}(t)$. The finished model looks as follows (with $m=1$ left out for simplicity): Using blocksim, we arrive at something similar: You can toy around with the model here. For simulating, I chose the following parameters: • $m = 1$ • $k = 1$ • $c = 0.7$ • $x(0) = 2$ • $\dot{x}(0) = 0$ • Simulation length: $20$ seconds • Integration step: $0.01$ second • Method: Euler's Intuitively, we are interested in finding out what will happen with the mass $m$ if I pull it 2 meters to the right and let it go. Running the simulation, we get the following behavior of $x(t)$ using Simulink: And using blocksim: As we might have guessed, this is what happens when we pull the block $m$ 2 meters to the right and let it go: starting at $x(0)=2$, it oscillates around $x = 0$ and the amplitude of the oscillation decreases with time. Hey! That's actually pretty cool, isn't it? Using Euler's method, a few blocks and a questionable javascript implementation, we were able to get an idea of the system's behavior. I find it particularly cool to tweak the simulation parameters and see if my intuition matches the simulation output. For example, what would happen if we lowered the damping coefficient $c$? Or if we gave the mass some initial speed $\dot{x}(0) \neq 0$? Or if we increased the mass $m$? What is the influence of the integration step? As said, for such a simple system, there are analytical solutions that are capable of exactly describing $x(t)$.
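For readers who would rather see the numbers than the blocks, the same simulation can be sketched in a few lines of plain JavaScript. This is an illustrative stand-alone sketch of Euler's method with the parameters above; `simulate` is a made-up helper for this post, not part of blocksim's actual code:

```javascript
// Forward Euler for the mass-spring-damper: m*x'' = -c*x' - k*x.
// Returns the sampled positions x(0), x(dt), ..., x(duration).
function simulate(m, k, c, x0, v0, dt, duration) {
  let x = x0, v = v0;
  const xs = [x];
  const steps = Math.round(duration / dt);
  for (let i = 0; i < steps; i++) {
    const a = (-c * v - k * x) / m; // acceleration from Newton's second law
    x += dt * v;                    // x_{n+1} = x_n + dt * v_n
    v += dt * a;                    // v_{n+1} = v_n + dt * a_n
    xs.push(x);
  }
  return xs;
}

// The parameters used in the post: pull the mass 2 m to the right, let go.
const xs = simulate(1, 1, 0.7, 2, 0, 0.01, 20);
```

With these values the trajectory starts at 2, overshoots past zero, and decays toward the rest position, matching the plots above.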
Numerical simulations are a great tool for simulating more complex systems, as well as nonlinear ones. For instance, if we want to account for dry friction, saturation, etc., analytical tools aren't very useful. Simulations are also useful for designing and testing controllers (maybe in a next post?). Again, blocksim is not a serious simulation tool and should definitely not be treated as one. In other words, please don't use it to design your homemade nuclear missile. I hate when that happens.
https://lists.gnu.org/archive/html/axiom-developer/2005-11/msg00269.html
axiom-developer [Top][All Lists] Re: [Axiom-developer] [Bug?] "error in library; negative log" From: Bob McElrath Subject: Re: [Axiom-developer] [Bug?] "error in library; negative log" Date: Fri, 11 Nov 2005 22:13:51 -0800 User-agent: Mutt/1.5.11 (-1)^\infty is an indeterminate expression, like 0/0. Mathematica returns Indeterminate, and maple returns "-1 .. 1" since the limit is bounded, but indeterminate on the interval [-1,1]. Infinity is neither even nor odd. > Attached is a session log showing an error that I receive while > attempting to produce a sequence from an expression in Axiom. Maxima > seems to have no trouble with the similar expression, and computing the > value of the expression by hand, as you can see, seems to work fine > also. > > Another problem I have is that taking the limit of an expression > containing (-1)^n always returns "failed", where my TI-89 Titanium > calculator will give a finite limit. For example: > > limit( 2 + (-2/%pi)^n, n=%plusInfinity ) ===> "failed" > > ... but the TI-89t returns 2. > > The TI-89t says that the limit of (-1)^n as n approaches infinity is -1, > implying that it believes that infinity is an odd number. That kind of > makes sense to me, since if you divide infinity in half, you still have > infinity, and you keep adding 1 to get to infinity, making it odd. If > infinity is even then the answer should be 1, and if we can't know if > infinity is even or odd, then the answer is uncertain or undefined. > > On the other hand, the TI-89t says that lim ( (-1)^n * (n + 1)/n ) is > undefined. But it already told me that lim (-1)^n = -1, and that lim (n > + 1)/n = 1. If the limit of a product is the product of the limits of > the factors, then lim ( (-1)^n * (n + 1)/n ) should be -1, right? > > So, who's right? -- Cheers, Bob McElrath [Univ. of California at Davis, Department of Physics] "In science, 'fact' can only mean 'confirmed to such a degree that it would be perverse to withhold provisional assent.' 
I suppose that apples might start to rise tomorrow, but the possibility does not merit equal time in physics classrooms." -- Stephen Jay Gould (1941 - 2002) signature.asc Description: Digital signature
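A quick numerical sketch of the convergent example discussed above (illustrative only, not output from any of the CAS systems mentioned): since |-2/pi| is about 0.64, strictly less than 1, the power term shrinks geometrically and the sequence tends to 2.

```javascript
// a_n = 2 + (-2/pi)^n. Because |-2/pi| < 1, the power term decays
// geometrically and a_n -> 2, which is the answer the TI-89 reports.
function a(n) {
  return 2 + Math.pow(-2 / Math.PI, n);
}
// (-1)^n, by contrast, alternates between -1 and 1 forever: its limit
// does not exist, so the "product of the limits" rule cannot be applied.
```

The partial values approach 2 from alternating sides (even n overshoots, odd n undershoots), which is exactly the bounded-but-alternating behavior at issue in the thread.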
http://www.physicsforums.com/showthread.php?t=604084
# Confused about simple max{,} notation by dtessela Tags: compton, max, notation, rayleigh, scattering P: 2 I'm working on a project that required me to go through the literature to find some information on Compton and Rayleigh scattering. I came across a key expression, part of which read: max{ f(x,Z), g(x,Z) } if Z > 10 and f(x,Z) < 2 where f(x,Z) and g(x,Z) are known functions. The problem is I don't understand the max{f(x,Z),g(x,Z)} notation. I have done some poking around on the interwebs but nothing really helpful has come up. Thanks for future help! D Mentor P: 21,286 Quote by dtessela [...] It's the larger value of f and g, where Z > 10 and f(x, Z) < 2. P: 443 Here are some examples for you: max{10, 3} = 10 max{-1, -100} = -1 if x = 30*3 and y = 40! and z = 40^2 then max{x, y, z} = y if f(x) = 2x + 10 and g(x) = x^3 then when x = 1 max{f(x), g(x)} = f(x) Math Emeritus Sci Advisor Thanks PF Gold P: 39,552 Confused about simple max{,} notation Notice that max{a, b} applies to numbers a and b. max{f(x), g(x)} is actually a function, h(x), that, for each value of x, gives the larger of the two numbers f(x) and g(x) for that particular x. P: 2 Quote by HallsofIvy [...] Thanks, that helped clear it up!
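The pointwise reading described in the replies is easy to sketch in code (illustrative only, using the example functions from the thread rather than the actual scattering expressions):

```javascript
// max{f(x), g(x)} defines a new function h(x): for each x, take the
// larger of the two values f(x) and g(x).
const f = x => 2 * x + 10;
const g = x => x * x * x;
const h = x => Math.max(f(x), g(x));
```

For x = 1, h(1) = f(1) = 12 since f wins there; for larger x the cubic g takes over.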
http://mathhelpforum.com/discrete-math/140652-master-theorem.html
Math Help - master theorem 1. master theorem $ T(\left\lfloor n \right\rfloor ) = 9T(\left\lfloor {\frac{n}{3}} \right\rfloor ) + \frac{{n^2 }}{{\log n}}$ thanks 2. Originally Posted by liberty $ T(\left\lfloor n \right\rfloor ) = 9T(\left\lfloor {\frac{n}{3}} \right\rfloor ) + \frac{{n^2 }}{{\log n}}$ thanks Is this supposed to be a question? 3. Originally Posted by liberty $ T(\left\lfloor n \right\rfloor ) = 9T(\left\lfloor {\frac{n}{3}} \right\rfloor ) + \frac{{n^2 }}{{\log n}}$ thanks
https://www.physicsforums.com/threads/up-down-quarks.816400/
Up & down quarks Is there a simple explanation for why up & down quarks have different charge? vanhees71 Gold Member 2019 Award When "quarks" were invented by Gell-Mann and Zweig, they were simply assigned charges (2/3 of the elementary charge for the up quark, -1/3 for the down quark) such that the correct charge pattern for the hadrons resulted. See http://en.wikipedia.org/wiki/Eightfold_Way_(physics) for the SU(3) constituent-quark model. Nowadays this charge pattern, together with the 3 colors of the color SU(3) underlying QCD, makes the Standard Model consistent by avoiding a chiral anomaly of the weak gauge group $\mathrm{SU}(2)_{\text{wiso}} \times \mathrm{U}(1)_{\text{Y}}.$
https://zbmath.org/?q=an:0791.35113
# zbMATH — the first resource for mathematics On the large order asymptotics of general states in semiclassical quantum mechanics. (English) Zbl 0791.35113 Summary: We consider the limit $$\hslash\to 0$$ of the solution $$\Phi(t,x,\hslash)$$ of the Schrödinger equation: $i\hslash {{\partial\Phi(t,x,\hslash)} \over {\partial t}}=- {{\hslash^ 2} \over {2m}} {{d^ 2 \Phi(t,x,\hslash)} \over {dx^ 2}} +V(x) \Phi(t,x,\hslash).$ We prove that, for any integer $$l\geq 2$$ and any initial condition $$\Phi(0,x,\hslash)$$ that belongs to the Schwartz class, a solution $$\Phi^*(t,x,\hslash)$$ of the semiclassical equation approximates $$\Phi(t,x,\hslash)$$ such that $\|\Phi^*(t,\cdot,\hslash)- \Phi(t,\cdot,\hslash)\|_{L^ 2} \leq C\hslash^{1/2} \qquad (\hslash\to 0).$ ##### MSC: 35Q40 PDEs in connection with quantum mechanics 81Q20 Semiclassical techniques, including WKB and Maslov methods applied to problems in quantum theory ##### Keywords: Schrödinger equation; semiclassical equation Full Text: ##### References: [1] G.A. Hagedorn, Semiclassical Quantum Mechanics, I: The h → 0 Limit for Coherent States, Commun. Math. Phys., T. 71, 1980, pp. 77-93. MR 556903 [2] G.A. Hagedorn, Semiclassical Quantum Mechanics, III: The Large Order Asymptotics and More General States, Ann. Phys., T. 135, 1981, pp. 58-70. MR 630204 [3] G.A. Hagedorn, Semiclassical Quantum Mechanics, IV: The Large Order Asymptotics and more General States in more than One Dimension, Ann. Inst. H. Poincaré, T. 42, 1985, pp. 363-374. MR 801234 | Zbl 0900.81053 [4] S. Robinson, The Semiclassical Limit of Quantum Dynamics, I: Time Evolution, J. Math. Phys., T. 29, 1988, pp. 412-419. MR 927028 | Zbl 0647.46060 [5] S. Robinson, The Semiclassical Limit of Quantum Dynamics, II: Scattering Theory, Ann. Inst. H. Poincaré, T. 48, 1988, pp.
281-296. MR 969167 | Zbl 0666.35071
http://math.stackexchange.com/tags/random-variables/new
# Tag Info 2 If $(Y_n)_{n\geqslant 1}$ is a sequence of real-valued random variables which converges in distribution to $Y$ and $g\colon\mathbf R\to\mathbf R$ is a continuous function, then $(g(Y_n))_{n\geqslant 1}$ converges to $g(Y)$ in distribution. 1 Hint: In full generality (if $U$ and $V$ are integrable), $$E(\color{red}{5}U+\color{blue}{3}V)=\color{red}{5}\cdot E(U)+\color{blue}{3}\cdot E(V).$$ If $U$ and $V$ are independent (and square integrable), $$\mathrm{var}(\color{red}{5}U+\color{blue}{3}V)=\color{red}{5}^2\cdot \mathrm{var}(U)+\color{blue}{3}^2\cdot \mathrm{var}(V).$$ 0 Each $Y_1, Y_2, \ldots$ follows a logarithmic distribution: $$\Pr(Y=k) = -\frac{(1-p)^k}{k\log(p)}, \quad \text{where } k\geqslant1, 0<p<1.$$ If $N \sim \mathsf{Poi}(\mu)$ independently of $Y_1, Y_2, \ldots$, then we find that $X$ has a negative binomial distribution with parameter $r = -\mu/\log(p)$ and success probability $p$. Hence, the parameter ... 1 1 $X,Y$ are independent if and only if $$P\left\{ X\in A\wedge Y\in B\right\} =P\left\{ X\in A\right\} P\left\{ Y\in B\right\}$$ is true for measurable sets $A,B$. If this is the case and $f,g:\mathbb{R}\rightarrow\mathbb{R}$ are measurable functions then: $$P\left\{ f\left(X\right)\in A\wedge g\left(Y\right)\in B\right\} =P\left\{ X\in ... 0 Evidently so. See this link: Are functions of independent variables also independent?, which addresses independence. It's clear they are identically distributed, isn't it? 0 Yes. In fact one can show that $X$ and $Y$ are independent if and only if $f(X)$ and $g(Y)$ are independent for every pair of measurable functions $f, g$. I don't know how you introduced independence, but $X$ and $Y$ independent means that the $\sigma$-algebras generated by them are independent; it is easy to see that the $\sigma$-algebra generated by $f(X)$ is ... 0 No, it is not correct. Please note that $A = (-\infty,x]$ is a subset of $\mathbb{R}$ whereas $\mathbb{P}$ is a (probability) measure on the probability space.
This means that $\mathbb{P}(A)$ is not even well-defined. For the first one, note that $$\mathbb{E} \big( 1_{(-\infty,x]}(V_i) \big) = \int 1_{(-\infty,x]}(V_i(\omega)) \, d\mathbb{P}(\omega) = ... 0 Prescribe $g:\mathbb{R}^{2}\rightarrow\left\{ 0,1\right\}$ by $\left(x,y\right)\mapsto1$ if $x\in\left(0,1\right)\wedge y\in\left(0,\sqrt{x}\right)$ and $(x,y)\mapsto 0$ otherwise. Then the PDFs of $X$ and $Y$ are, respectively: $$f_{X}\left(x\right)=\int g\left(x,y\right)8x^{2}ydy$$ and: $$f_{Y}\left(y\right)=\int g\left(x,y\right)8x^{2}ydx$$ For a fixed ... 1 Draw a picture. The joint density function "lives" over the part $D$ of the unit square that is below the half-parabola $y=\sqrt{x}$. To find the density function of $y$, we have to "integrate out" $x$. The function $8x^2y$ is the joint density only on $D$, so we have to confine attention to $D$. Note that at the beginning of $D$ we have $y=\sqrt{x}$, ... 2 In general, no. A standard counterexample is to take your probability space to be $\Omega = [0,1]$ with Lebesgue measure, and consider the complex-valued random variables $X_n(\omega) = e^{2 \pi i n \omega}$, which are all bounded in absolute value by 1. They are orthonormal in $L^2(\Omega)$, so by Bessel's inequality, they converge weakly to 0 in ... 2 For $y>0$: $$P\left\{ X=n\mid\lambda=y\right\} =e^{-y}\frac{y^{n}}{n!}$$ and the PDF of $\lambda$ is $f_{\lambda}\left(y\right)=e^{-y}$. This leads to: $$P\left\{ X=n\right\} =\int P\left\{ X=n\mid\lambda=y\right\} ... 1 It's hard to tell from a small sample like this. I suggest using the same method in Excel to generate about 1000 points. Then plot the results on a histogram to see what general shape the distribution takes. If the histogram is relatively flat, you're probably getting the uniform distribution. If you see peaks or slopes, you aren't.
**0** Let $T$ denote the time between two successive $X_2$ arrivals, then $T$ is exponential with parameter $\lambda_2$ and, conditionally on $T=t$, the distribution of $N$ is Poisson with parameter $\lambda_1t$. Thus, $E(s^N\mid T)=\mathrm e^{-\lambda_1T(1-s)}$ and $E(s^N)=E(\mathrm e^{-rT})$ with $r=\lambda_1(1-s)$. Recall that, for every $u\gt-\lambda_2$, ...

**1** We use the following two facts: for a (deterministic) sequence of real numbers $(a_n)_{n\geqslant 1}$, we have $\limsup_na_n=\limsup_na_{n+k}$ for any integer $k$; if $(Y_n)_{n\geqslant 1}$ is a sequence of random variables, then $\limsup_n Y_n$ is $\sigma(Y_n,n\geqslant 1)$ measurable.

**2** The proof of the convergence in distribution of $(X_n,Y)$ to $(X,Y)$ is done here (the use of characteristic functions could make it shorter). Assume that $X=h(Y)$ for some Borel function $h$. Using the assumption with $g:=f\circ h$, a bounded Borel-measurable function, we obtain that for each continuous and bounded function $f$, $$\lim_{n\to \ldots$$

**4** Yes: as soon as at least one of the random variables $X$ and $Y$ has a density and if $X$ and $Y$ are independent, the sum $X+Y$ has a density. To understand the magic, assume that $(X,Y)$ is independent, that $Y$ has density $f$ and that $X$ is purely discrete with $P(X=x_n)=p_n$ for every $n$; then $X+Y$ has density $g$ with $$g(x)=\sum_np_n\,f(x-x_n).$$ ...

**2** For a fixed $t$, the series defining $X(t)$ has only one term which can be different from $0$ (for the potential index $n$ for which $nT\lt t\leqslant nT+T/2$). Therefore, we can switch the sum and the expectation and we find that $\mathbb E[X(t)]=0$ for each $t$ because $p(t-nT)$ is not random and $\mathbb E[A_n]=0$. For the covariance, by the argument ...

**0** No. Assume $X$ is a binary (Bernoulli) random variable having values $-1,1$ with probabilities $(1/2,1/2)$. Then $Y+X$ will not have a density.

**-2** From the graph, $Y$ cannot be less than $-b$ nor greater than $b$.
$$\Pr(Y<-b)=0,\qquad \Pr(Y>b)=0$$ However, at the points of inflection, we know $Y$ has a probability mass: $$\Pr(Y=-b)=\Pr(X\leq -a), \qquad \Pr(Y=b)=\Pr(X\geq a)$$ In the interval between, there is a linear relation between $X$ and $Y$, so $$\Pr(Y\le y)=\Pr(X\le ay/b), \quad \forall y\in (-b, b)$$ ...

**0** If $b\leq y$ then $g\left(x\right)\leq y$ is true for every $x$, so that: $$F_{Y}\left(y\right)=P\left\{ g\left(X\right)\leq y\right\} =1$$ If $-b\leq y<b$ then $g\left(x\right)\leq y\iff x\leq\frac{a}{b}y$, so that: $$F_{Y}\left(y\right)=P\left\{ g\left(X\right)\leq y\right\} =P\left\{ X\leq\frac{a}{b}y\right\} =F_{X}\left(\frac{a}{b}y\right)$$ If ...

**0** First, the text below requires that you know the definitions of a real random variable (or vector), measurable function, Borel $\sigma$-algebra and cumulative distribution function. Let $X$ be an $n$-dimensional random vector with the CDF $F$ (this is a fundamental assumption, which begs the question whether the appropriate probability space and the random ...

**0** The answer to your question is a matter of interpretation. If you formalize the sample spaces abstractly enough, then the sample spaces for $X_1$ and $X_2$ can be the same. After all, both weight and height are continuous quantities, and so can be modeled as real numbers. If the real numbers aren't a general enough setting, you can go into more abstract ...

**1** By the strong law of large numbers for i.i.d. integrable sequences, $$\frac1N\sum_{i=1}^N\log X_i\to\nu\quad\text{almost surely},$$ where $\nu=E(\log X_1)$. This is equivalent to the assertion that $$\lim_{N \to \infty} \left( \prod_{i = 1}^{N} X_{i} \right)^{1/N}=\mathrm e^\nu\quad\text{almost surely},$$ hence the expected value of the LHS exists and is ...

**0** This is my crack at the question; please, if anybody sees any flaws, point them out. I am assuming that the $X_{i}$'s are independent.
From this it is important to know that $$\left(\prod_{i=1}^{N}X_{i}\right)^{\frac{1}{N}}=e^{\frac{1}{N}\sum_{i=1}^{N}\ln(X_{i})}$$ Well, notice that $\frac{1}{N}\sum_{i=1}^{N}\ln(X_{i})$ is just the average of the log of the ...

**1** The function $\omega \mapsto(X_1, X_2, \cdots X_n)$ is a measurable function from $\Omega$ to $\mathbb{R}^n$. It is, more strongly, also $\sigma(X_1, X_2, \cdots X_n)$-measurable. Since $f$ is measurable, the composition $f(X_1, X_2, \cdots X_n)$ is $\sigma(X_1, X_2, \cdots X_n)$-measurable.

**0** If your set consisted of one value only, then the probability to guess it correctly would be $p=\frac{1}{2^{8 \cdot 32}}$. Since your records are all different, the probability is $1{,}000{,}000 \cdot p$.

**0** $X_1,\ldots,X_n$ are the $n$ i.i.d. random variables. Let $X_{(1)}<\cdots< X_{(n)}$ be the order statistics, i.e. the same random variables sorted. (By continuity of the c.d.f., we need not write "$\le$".) Then $X_i=X_{(j)}$. Given $i$, what is the distribution of $j$? Let $Y_1,\ldots,Y_n$ be $X_{\sigma(1)},\ldots,X_{\sigma(n)}$, where ...

**5** If $U_i$ is independent of $X_i$ then $E(Y_i\mid X_i)=E(X_i\mid X_i)+E(U_i\mid X_i)=X_i+E(U_i)$, hence $E(Y_i\mid X_i)=X_i$. In particular, $E(Y_i\mid X_i=1)=1$ and $E(Y_i\mid X_i=0)=0$. (Also, I calculated $P(Y > \frac{1}{2}) = 0.6728p + 0.1932$ from an earlier part of the question.) Actually, if $\sigma^2=\frac13$ then $P(Y > \ldots$

**3** Let $Y$ be a uniform random variable on $(0,1)$ and, conditionally on $Y$, let $X$ be a centered normal random variable with variance equal to $Y$. Then $E(X\mid Y) = E(X) = 0$, but $X$ is not independent of $Y$ since $E(X^2 \mid Y) = Y \neq E(X^2) = E(Y) = \frac{1}{2}$ implies $X \in L^1$.

**1** Your result is correct.
It can be done more efficiently by: $$P\left\{ z<X<Y\right\} =\int_{z}^{\infty}f_{X}\left(x\right)P\left\{ z<X<Y\mid X=x\right\} dx=\lambda\int_{z}^{\infty}e^{-\lambda x}e^{-\mu x}dx=\frac{\lambda}{\lambda+\mu}e^{-\left(\lambda+\mu\right)z}$$

**1** Consider independent Poisson processes $N_t$, $M_t$ with rates $\lambda$, $\mu$, such that $X$ is the time until the first occurrence of $N_t$ and $Y$ is the time until the first occurrence of $M_t$. Then $N_t + M_t$ is a Poisson process with rate $\lambda + \mu$, and $Z$ is the time until its first occurrence. One way to realize this is to start with a ...

**0** By Lévy's continuity theorem it's enough to prove that $\left(\varphi_{1/X_1}(t/n)\right)^n \rightarrow e^{-a|t|}$, with $\varphi_X(t):=E^X\left[e^{itX}\right]$. It is a well-known fact that if $(c_n)_{n\in \mathbb{N}}$ is a complex sequence with limit $c$ then $\lim_{n\rightarrow +\infty}\left(1+c_n/n\right)^n=e^c$. In this case ...

**1** If $y=0$, then $Y_y=0$ or $Y_y=1$, depending on the convention. If $y\gt0$, then: $$P(Y_y\gt0)=pP_1(T_y\lt T_0)$$ $$P(Y_y\gt n+1\mid Y_y\gt n)=P_y(T_y\lt T_0)=pP_{y+1}(T_y\lt\infty)+(1-p)P_{y-1}(T_y\lt T_0)$$ Similar recursions hold when $y\lt0$. If one is able to compute the various probabilities $P_a(T_b\lt T_c)$ involved (and one should be), these ...

**6** For $i=1$ to $50$, define random variable $X_i$ by $X_i=1$ if box $i$ has no balls, and $X_i=0$ otherwise. Then $X=X_1+\cdots +X_{50}$, so we want $E(X_1+\cdots+X_{50})$. By the linearity of expectation this is $E(X_1)+\cdots+E(X_{50})$. But $\Pr(X_i=1)$ is the probability all the balls miss box $i$. This is $\left(\frac{49}{50}\right)^{100}$, and is ...

**1** $\Pr(X=i)=\Pr(\text{all 100 balls go into } (50-i) \text{ boxes})={50\choose{50-i}}\cdot\frac{(50-i)^{50+i}}{50^{100}}$. The denominator is the total number of ways 100 balls can be put in 50 boxes: each ball can be put in any of the 50 boxes, i.e. in 50 ways, so 100 balls can be put in $50^{100}$ ways. The numerator is the product of the number of ways to choose the $(50-i)$ ...
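The linearity-of-expectation answer above gives $E(X)=50\,(49/50)^{100}\approx 6.63$ empty boxes, and this is easy to sanity-check numerically. A quick Monte Carlo sketch (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
n_balls, n_boxes, n_trials = 100, 50, 20000

# Throw 100 balls into 50 boxes uniformly at random; count empty boxes per trial.
throws = rng.integers(0, n_boxes, size=(n_trials, n_balls))
empty = np.array([n_boxes - len(np.unique(row)) for row in throws])

analytic = n_boxes * (1 - 1 / n_boxes) ** n_balls  # 50 * (49/50)^100 ≈ 6.63
print(empty.mean(), analytic)
```

The simulated mean lands within a few hundredths of the analytic value, which is expected since the standard error of the mean over 20,000 trials is tiny.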
**1** Define $\xi_i$ to be a family of independent identically distributed Bernoulli variables with success probability $p$. Then $E[\xi_i]=p$ for all $i$. Note that $$Y=\sum_{i=1}^X\xi_i$$ Wald's identity states that if $N$ is a positive integer-valued random variable with finite expectation and the $\eta_i$ are independent identically distributed random ...

**1** Formally, whenever you have a parametrized family of distributions $D(\theta)$ for the parameter set $\theta\in \Theta$ and some random variable $X$ which takes values in $\Theta$ and has the distribution $q$, if you define a new random variable as $Y \sim D(X)$, then the distribution of $Y$ is given by $$\mathsf P(Y\in A) = \int_\Theta \ldots$$

**1** $E[Y] = E[E[Y\mid X]]$. The conditional expectation is $pX$. Then $E[pX] = pE[X] = \mu p$.

**3** A simple example is $X$ uniformly distributed on the usual Cantor set, in other words, $$X=\sum_{n\geqslant1}\frac{Y_n}{3^n},$$ for some i.i.d. sequence $(Y_n)$ with uniform distribution on $\{0,2\}$. Other examples are based on binary expansions, say, $$X=\sum_{n\geqslant1}\frac{Z_n}{2^n},$$ for some i.i.d. sequence $(Z_n)$ with any nondegenerate ...

**0** The iff is not true: $$f(x)=\begin{cases} 2 & \text{if } 0\le x \le \frac {1}{4}\\ \frac {2}{3} & \text{if } \frac {1}{4}< x \le 1\\ 0 & \text{otherwise}\end{cases}$$ Here $f\not\in C^0$, yet with $F_X(x)=\int_{-\infty}^x f(t)\, dt$, $X$ is a continuous r.v.

Top 50 recent answers are included
https://tex.stackexchange.com/questions/578742/after-upgrade-of-texlive-i-cannot-integrate-sanskrit-fonts-in-tufte-book-style-a
# After upgrade of texlive I cannot integrate Sanskrit fonts in Tufte book style anymore [closed]

After upgrading from a very old version to TeX Live 2019/Debian I cannot render my "tufte style book" with Sanskrit fonts anymore. In the old version I was able to render with "Quick Build" in my TexMaker engine. Now I have to render Sanskrit with XeLaTeX, otherwise I get a "fontspec Error". But with XeLaTeX I cannot render "tufte style". Any suggestion how to solve this? Is it possible to use Sanskrit fonts without fontspec?

This is a part of the code I use:

```latex
\documentclass{tufte-book} % Use the tufte-book class which in turn uses the tufte-common class
\hypersetup{colorlinks} % Comment this line if you don't wish to have colored links
\usepackage{microtype}
\usepackage{booktabs}
\usepackage{graphicx}
\usepackage{fancyvrb}
\usepackage{polyglossia}
\setmainlanguage{english}
\setotherlanguage{sanskrit}
\newfontfamily\devanagarifont{Sanskrit 2003}
```

• Maybe tex.stackexchange.com/questions/202142/… could help here (see the newest answer from September 2020 at the end)? – Marijn 2 days ago
• I tried your code (earlier I just looked up the related question) and it works for me with XeLaTeX (after adding \begin{document} abc \textsanskrit{उदु ज्योतिरमृतं विश्वजन्यं विश्वानरः सविता देवो अश्रेत्} \end{document} to make a complete document). I also use TeX Live 2019/Debian. What error do you get exactly? – Marijn 2 days ago
• Thank you. Yes, in this short preamble it works. Then the error is somewhere else. I will anyway change the whole project. – Denis 2 days ago
• OK, I have voted to close the question as "needs more detail", which means in this case "the error is somewhere else" :) – Marijn 2 days ago
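Marijn's second comment can be expanded into a minimal test file; a sketch of a complete document built from the posted preamble (assuming the "Sanskrit 2003" font is installed on the system), to be compiled with XeLaTeX:

```latex
\documentclass{tufte-book}
\usepackage{polyglossia}
\setmainlanguage{english}
\setotherlanguage{sanskrit}
% Font name taken from the question; any installed Devanagari font works here.
\newfontfamily\devanagarifont{Sanskrit 2003}
\begin{document}
abc \textsanskrit{उदु ज्योतिरमृतं विश्वजन्यं विश्वानरः सविता देवो अश्रेत्}
\end{document}
```

If this minimal file compiles, the fontspec error comes from elsewhere in the project, which matches the conclusion reached in the comments.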
https://riccardo-cantini.netlify.app/post/bert_text_classification/
# Play with BERT! Text classification using Huggingface and Tensorflow

How to fine-tune a BERT classifier for detecting the sentiment of a movie review and the toxicity of a comment.

In what follows, I'll show how to fine-tune a BERT classifier using the Huggingface Transformers library and Keras+Tensorflow. Two different classification problems are addressed:

• IMDB sentiment analysis: detect the sentiment of a movie review, classifying it according to its polarity, i.e. negative or positive.
• Toxic comment classification: determine the toxicity of a Wikipedia comment, predicting a probability for each type of toxicity, i.e. toxic, severe toxic, obscene, threat, insult and identity hate.

## What is BERT?

Bidirectional Encoder Representations from Transformers (BERT) is a Natural Language Processing model proposed by Google Research in 2018. It is based on a multi-layer bidirectional Transformer, pre-trained on two unsupervised tasks using a large cross-domain corpus:

• Masked Language Modeling (MLM): 15% of the words in each sequence are replaced with a [MASK] token. The model then attempts to predict the masked words, based on the context provided by the non-masked ones.
• Next Sentence Prediction (NSP): the model receives pairs of sentences as input and learns to predict if the second sentence is the subsequent sentence in the original document.

BERT is deeply bidirectional, which means that it can learn the context of a word based on all the information contained in the input sequence, jointly considering previous and subsequent tokens. In fact, the use of the MLM objective enables the representation to fuse the left and right contexts, allowing the pre-training of a deep bidirectional language representation model.
This is a key difference compared to previous language representation models like OpenAI GPT, which uses a unidirectional (left-to-right) language model, or ELMo, which uses a shallow concatenation of independently trained left-to-right and right-to-left language models. BERT outperformed many task-specific architectures, advancing the state of the art in a wide range of Natural Language Processing tasks, such as textual entailment, text classification and question answering. For further details, you might want to read the original BERT paper.

## Fine-tuning

Let's now move on to how to fine-tune the BERT model in order to deal with our classification tasks. Text classification can be quite a challenging task, but we can easily achieve amazing results by exploiting the effectiveness of transfer learning from pre-trained language representation models.

The first use case is related to the classification of movie reviews according to the expressed sentiment, which can be positive or negative. The data come from the IMDB dataset, which contains 50,000 movie reviews equally divided by polarity.

The second case study is about building a model capable of detecting different types of toxicity like threats, obscenity, insults, and identity-based hate. The dataset used comprises a large number of comments from Wikipedia. Toxicity detection models are useful for helping online discussions become more productive and respectful.

In the following, I show my Keras code for creating the models.
```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from transformers import TFBertModel

# MAX_SEQ_LEN and BERT_NAME are constants defined elsewhere in the post.
def create_model(n_out):
    # The three BERT inputs described below: token ids, attention mask, segment ids.
    input_ids = layers.Input(shape=(MAX_SEQ_LEN,), dtype=tf.int32, name='input_ids')
    input_mask = layers.Input(shape=(MAX_SEQ_LEN,), dtype=tf.int32, name='attention_mask')
    input_type = layers.Input(shape=(MAX_SEQ_LEN,), dtype=tf.int32, name='token_type_ids')
    inputs = [input_ids, input_mask, input_type]
    bert = TFBertModel.from_pretrained(BERT_NAME)
    bert_outputs = bert(inputs)
    last_hidden_states = bert_outputs.last_hidden_state
    avg = layers.GlobalAveragePooling1D()(last_hidden_states)
    output = layers.Dense(n_out, activation="sigmoid")(avg)
    model = keras.Model(inputs=inputs, outputs=output)
    model.summary()
    return model
```

The only difference between the two models is the number of neurons in the output layer, i.e. the number of independent classes, determined by the n_out parameter.

For the first case study, n_out is equal to $1$, as we are dealing with a binary classification task that involves the calculation of a single sentiment score. This is the probability that the review is positive: a value close to $1$ indicates a very positive sentence, a value near $0$ a very negative sentence, and a value close to $0.5$ relates to an uncertain situation, or rather a neutral review.

For the second case study, n_out is equal to $6$, as we are dealing with a multi-label classification with six possible types of toxicity. This means that the model treats each toxicity type as a separate class, computing an independent probability for each one of them through a Bernoulli trial.

As we can see, the BERT model expects three inputs:

• Input ids: the BERT input sequence unambiguously represents both single texts and text pairs. Sentences are encoded using the WordPiece tokenizer, which recursively splits the input tokens until a word in the BERT vocabulary is detected, or the token is reduced to a single char. As first token, BERT uses the CLS special token, whose embedded representation can be used for classification purposes.
Moreover, at the end of each sentence, a SEP token is used, which is exploited for differentiating between the two input sentences in the case of text pairs.

• Input mask: allows the model to cleanly differentiate between the content and the padding. The mask has the same shape as the input ids, and contains 1 anywhere the input ids are not padding.

• Input types: contains 0 or 1 indicating which sentence the token is a part of. For a single-sentence input, it is a vector of zeros.

The Huggingface model returns two outputs which can be exploited for downstream tasks:

• pooler_output: the output of the BERT pooler, corresponding to the embedded representation of the CLS token further processed by a linear layer and a tanh activation. It can be used as an aggregate representation of the whole sentence.

• last_hidden_state: 768-dimensional embeddings for each token in the given sentence.

The use of the first output (coming from the pooler) is usually not a good idea, as stated in the Huggingface Transformers documentation:

> This output is usually not a good summary of the semantic content of the input, you're often better with averaging or pooling the sequence of hidden-states for the whole input sequence.

For this reason I preferred to use Global Average Pooling on the sequence of all hidden states, in order to get a concise representation of the whole sentence. Another thing that usually works is to directly take the embedded representation of the CLS token, before it is fed to the BERT pooler.
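The shape of these three inputs can be illustrated with a toy encoder. This is a hand-rolled sketch over a made-up vocabulary, not the real WordPiece tokenizer (in practice the arrays come from the Huggingface tokenizer); it only shows how content, padding and segment ids line up:

```python
# Toy illustration of BERT's three input arrays (made-up vocabulary, not WordPiece).
TOY_VOCAB = {"[PAD]": 0, "[CLS]": 101, "[SEP]": 102,
             "the": 5, "movie": 6, "was": 7, "great": 8}

def toy_encode(tokens, max_len=10):
    # CLS + tokens + SEP, truncated and then padded up to max_len.
    ids = [TOY_VOCAB["[CLS]"]] + [TOY_VOCAB[t] for t in tokens] + [TOY_VOCAB["[SEP]"]]
    ids = ids[:max_len]
    pad = max_len - len(ids)
    input_ids = ids + [TOY_VOCAB["[PAD]"]] * pad
    attention_mask = [1] * len(ids) + [0] * pad  # 1 on content, 0 on padding
    token_type_ids = [0] * max_len               # single sentence: all zeros
    return input_ids, attention_mask, token_type_ids

ids, mask, types = toy_encode(["the", "movie", "was", "great"])
print(ids)    # [101, 5, 6, 7, 8, 102, 0, 0, 0, 0]
print(mask)   # [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
print(types)  # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```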
After the creation of the model, we can fine-tune it as follows:

```python
import tensorflow_addons as tfa
from tensorflow import keras
from tensorflow.keras.callbacks import ModelCheckpoint

def fine_tune(model, X_train, x_val, y_train, y_val):
    max_epochs = 4
    batch_size = 32
    loss = keras.losses.BinaryCrossentropy()
    # RAdam optimizer (discussed below); RectifiedAdam from tensorflow-addons
    # is assumed here as the concrete implementation.
    opt = tfa.optimizers.RectifiedAdam(learning_rate=3e-5)
    best_weights_file = "weights.h5"
    m_ckpt = ModelCheckpoint(best_weights_file, monitor='val_auc', mode='max', verbose=2,
                             save_weights_only=True, save_best_only=True)
    model.compile(loss=loss,
                  optimizer=opt,
                  metrics=[keras.metrics.AUC(multi_label=True, curve="ROC"),
                           keras.metrics.BinaryAccuracy()])
    model.fit(
        X_train, y_train,
        validation_data=(x_val, y_val),
        epochs=max_epochs,
        batch_size=batch_size,
        callbacks=[m_ckpt],
        verbose=2
    )
```

I trained the model for $4$ epochs using a very low learning rate ($3 \times 10^{-5}$). This last aspect is crucial, as we only want to readapt the pre-trained features to work with our downstream task, so large weight updates are not desirable at this stage. Furthermore, we are training a very large model with a relatively small amount of data, and a low learning rate is a good choice for minimizing the risk of overfitting.

I used a binary cross-entropy loss, as the prediction of each of the n_out output classes is modeled as a single Bernoulli trial, estimating the probability through a sigmoid activation. Moreover, I chose the Rectified version of Adam (RAdam) as the optimizer for the training process. RAdam is a variant of the Adam optimizer which rectifies the variance and generalization issues apparent in other adaptive learning rate optimizers. The main idea behind this variant is to apply a warm-up with a low initial learning rate, also turning off the momentum term for the first few input training batches. Lastly, I used the area under the Receiver Operating Characteristic curve (ROC AUC) and binary accuracy as the main metrics for validation and testing.
## Training epochs

In the following, the results of the 4 training epochs of both models are shown:

*Sentiment analysis, IMDB movie reviews*

## Results

I evaluated the trained models using $1024$ test samples, achieving the following results:

| | Test BCE loss | Binary accuracy | ROC AUC |
| --- | --- | --- | --- |
| Sentiment classification | 0.26 | 0.88 | 0.95 |
| Toxicity classification | 0.05 | 0.98 | 0.94 |

As we can see, the easy use of a fine-tuned BERT classifier led us to achieve very promising results, confirming the effectiveness of transfer learning from language representation models pre-trained on a large cross-domain corpus. To better analyze the performance of the trained classifiers, ROC curves for both models are provided:

We can clearly see the high confidence of both models, especially for what concerns the toxicity classifier, which achieved a micro-average ROC AUC of $0.98$. Just to make it more fun, I wrote some sentences to further test the performance of both models.

*Sentiment analysis, IMDB movie reviews*
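Scoring such handwritten sentences comes down to reading the sigmoid outputs as described earlier: one polarity score for the binary model, six independent probabilities for the multi-label model. A small sketch of that interpretation (plain numpy; the scores are made up for illustration, not actual model outputs):

```python
import numpy as np

# Binary sentiment model: a single sigmoid score per review.
sentiment_scores = np.array([0.97, 0.08, 0.51])
labels = ["positive" if s > 0.5 else "negative" for s in sentiment_scores]
print(labels)  # ['positive', 'negative', 'positive'] -- 0.51 is barely positive, i.e. near-neutral

# Multi-label toxicity model: six independent Bernoulli probabilities per comment.
toxicity_types = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
comment_probs = np.array([0.91, 0.12, 0.77, 0.03, 0.64, 0.05])
predicted = [t for t, p in zip(toxicity_types, comment_probs) if p > 0.5]
print(predicted)  # ['toxic', 'obscene', 'insult']
```

The 0.5 threshold is applied per class in the multi-label case, so a comment can carry several toxicity labels at once, or none.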
https://www.mersenneforum.org/showpost.php?s=b82fc82d88e943e1474679642fd53d5c&p=467787&postcount=9
Thread: Is this solvable in general.

2017-09-14, 18:02, post #9, jwaltos (Apr 2012)

By relaxing the condition in Matiyasevich's theorem for integer-only solutions, Le Chatelier's principle could be invoked (by analogy), where poly-time solutions can be made explicit. And yes, like a blind squirrel searching for nuts, optimism does help, but having a nose for certain things prevents that squirrel from starving.
https://bookstore.ams.org/view?ProductCode=SURV/69
Surgery on Compact Manifolds: Second Edition

C. T. C. Wall, University of Liverpool, Liverpool, England
Edited by: A. A. Ranicki, University of Edinburgh, Edinburgh, Scotland

Available formats:
• Hardcover, ISBN: 978-0-8218-0942-6, Product Code: SURV/69. List Price: $77.00; MAA Member Price: $69.30; AMS Member Price: $61.60
• Electronic, ISBN: 978-1-4704-1296-8, Product Code: SURV/69.E. List Price: $72.00; MAA Member Price: $64.80; AMS Member Price: $57.60
• Bundle (print and electronic formats; purchasing as a bundle enables you to save on the electronic version). List Price: $115.50; MAA Member Price: $103.95; AMS Member Price: $92.40

• Book Details

Mathematical Surveys and Monographs, Volume: 69; 1999; 302 pp
MSC: Primary 57; Secondary 18; 19; 11

The publication of this book in 1970 marked the culmination of a particularly exciting period in the history of the topology of manifolds.
The world of high-dimensional manifolds had been opened up to the classification methods of algebraic topology by Thom's work in 1952 on transversality and cobordism, the signature theorem of Hirzebruch in 1954, and by the discovery of exotic spheres by Milnor in 1956. In the 1960s, there had been an explosive growth of interest in the surgery method of understanding the homotopy types of manifolds (initially in the differentiable category), including results such as the $h$-cobordism theory of Smale (1960), the classification of exotic spheres by Kervaire and Milnor (1962), Browder's converse to the Hirzebruch signature theorem for the existence of a manifold in a simply connected homotopy type (1962), the $s$-cobordism theorem of Barden, Mazur, and Stallings (1964), Novikov's proof of the topological invariance of the rational Pontrjagin classes of differentiable manifolds (1965), the fibering theorems of Browder and Levine (1966) and Farrell (1967), Sullivan's exact sequence for the set of manifold structures within a simply connected homotopy type (1966), Casson and Sullivan's disproof of the Hauptvermutung for piecewise linear manifolds (1967), Wall's classification of homotopy tori (1969), and Kirby and Siebenmann's classification theory of topological manifolds (1970).

The original edition of the book fulfilled five purposes by providing:

• a coherent framework for relating the homotopy theory of manifolds to the algebraic theory of quadratic forms, unifying many of the previous results;
• a surgery obstruction theory for manifolds with arbitrary fundamental group, including the exact sequence for the set of manifold structures within a homotopy type, and many computations;
• the extension of surgery theory from the differentiable and piecewise linear categories to the topological category;
• a survey of most of the activity in surgery up to 1970;
• a setting for the subsequent development and applications of the surgery classification of manifolds.
This new edition of this classic book is supplemented by notes on subsequent developments. References have been updated and numerous commentaries have been added. The volume remains the single most important book on surgery theory.

Readership: graduate students and research mathematicians working in the algebraic and geometric topology of manifolds.

• Chapters
• Part 0. Preliminaries
• Part 1. The main theorem
• Part 2. Patterns of application
• Part 3. Calculations and applications
• Part 4. Postscript

• Requests
Review Copy – for reviewers who would like to review an AMS book
Permission – for use of book, eBook, or Journal content
Accessibility – to request an alternate format of an AMS title