https://hal.inria.fr/inria-00433745
# Strategic Computation and Deduction

PAREO - Formal islands: foundations and applications
INRIA Lorraine, LORIA - Laboratoire Lorrain de Recherche en Informatique et ses Applications

Abstract: We introduce the notion of abstract strategies for abstract reduction systems. Adequate properties of termination, confluence and normalization under strategy can then be defined. Thanks to this abstract concept, we draw a parallel between strategies for computation and strategies for deduction. We define deduction rules as rewrite rules, a deduction step as a rewriting step, and a proof construction step as a narrowing step in an adequate abstract reduction system. Computation, deduction and proof search are thus captured in the uniform foundational concept of abstract reduction system, in which abstract strategies have a clear formalisation.

Document type: Book chapter

Contributor: Helene Kirchner. Submitted on: Friday, November 20, 2009 - 09:40:39. Last modified: Thursday, May 10, 2018 - 02:06:53. Document(s) archived on: Tuesday, October 16, 2012 - 14:30:25.

### File

strategic-3K.pdf - files produced by the author(s)

### Identifiers

• HAL Id: inria-00433745, version 1

### Citation

Claude Kirchner, Florent Kirchner, Helene Kirchner. Strategic Computation and Deduction. In Christoph Benzmüller, Chad E. Brown, Jörg Siekmann and Richard Statman (eds.), Reasoning in Simple Type Theory. Festschrift in Honour of Peter B. Andrews on His 70th Birthday, vol. 17, College Publications, pp. 339-364, 2008, Studies in Logic and the Foundations of Mathematics, ISBN 978-1-904987-70-3. 〈inria-00433745〉 https://hal.inria.fr/inria-00433745
https://de.mathworks.com/help/phased/ref/perturbations.html
# perturbations

Perturbations defined on array

## Syntax

``perts = perturbations(array)``
``perts = perturbations(array,prop)``
``perts = perturbations(array,prop,'None')``
``perts = perturbations(array,prop,'Normal',mean,sigma)``
``perts = perturbations(array,prop,'Uniform',minval,maxval)``
``perts = perturbations(array,prop,'RandomFail',failprob)``

## Description

`perts = perturbations(array)` returns a table of all allowed perturbations `perts` defined for the `array`. This table lists all properties that can be perturbed, the probability type of the applied perturbation, and the parameters of that probability type.

`perts = perturbations(array,prop)` lets you view the current perturbation defined for the array property `prop`.

`perts = perturbations(array,prop,'None')` specifies that the property `prop` is not perturbed.

`perts = perturbations(array,prop,'Normal',mean,sigma)` specifies that the perturbation is drawn from a normal probability distribution with mean `mean` and standard deviation `sigma`. To use this syntax, set `prop` to `'ElementPosition'`, `'TaperMagnitude'`, or `'TaperPhase'`.

`perts = perturbations(array,prop,'Uniform',minval,maxval)` specifies that the perturbation is drawn from a uniform probability distribution over the interval [`minval`,`maxval`]. To use this syntax, set `prop` to `'ElementPosition'`, `'TaperMagnitude'`, or `'TaperPhase'`.

`perts = perturbations(array,prop,'RandomFail',failprob)` specifies that the perturbation is a mask indicating whether each element is functioning, based on the element failure probability `failprob`. To use this syntax, set `prop` to `'ElementFailure'`.

## Examples

Create an 8-by-3 uniform rectangular array (URA). The array operates at 300 MHz and its elements are spaced one-half wavelength apart.
```
freq = 300.0e6;
lambda = physconst('Lightspeed')/freq;
d = lambda/2;
array = phased.URA(8,3,ElementSpacing=[d,d]);
```

Initially, there are no perturbations to the array.

`perts = perturbations(array)`

```
perts=4×3 table
         Property           Type            Value
    ___________________    ________    __________________

    {'ElementPosition'}    {'None'}    {[NaN]}    {[NaN]}
    {'TaperMagnitude' }    {'None'}    {[NaN]}    {[NaN]}
    {'TaperPhase'     }    {'None'}    {[NaN]}    {[NaN]}
    {'ElementFailure' }    {'None'}    {[NaN]}    {[NaN]}
```

Randomly perturb the element positions according to a normal distribution. Use a position standard deviation of one-sixteenth of a wavelength.

`perts = perturbations(array,'ElementPosition','Normal',0,lambda/16)`

```
perts=4×3 table
         Property            Type              Value
    ___________________    __________    _____________________

    {'ElementPosition'}    {'Normal'}    {[  0]}    {[0.0625]}
    {'TaperMagnitude' }    {'None'  }    {[NaN]}    {[   NaN]}
    {'TaperPhase'     }    {'None'  }    {[NaN]}    {[   NaN]}
    {'ElementFailure' }    {'None'  }    {[NaN]}    {[   NaN]}
```

Then perturb the magnitude of the element weights according to a normal distribution with a mean of 0.1 and a standard deviation of 0.02.

`perts = perturbations(array,'TaperMagnitude','Normal',0.1,0.02)`

```
perts=4×3 table
         Property            Type               Value
    ___________________    __________    ________________________

    {'ElementPosition'}    {'Normal'}    {[     0]}    {[0.0625]}
    {'TaperMagnitude' }    {'Normal'}    {[0.1000]}    {[0.0200]}
    {'TaperPhase'     }    {'None'  }    {[   NaN]}    {[   NaN]}
    {'ElementFailure' }    {'None'  }    {[   NaN]}    {[   NaN]}
```

Perturb the phase of the element weights according to a uniform distribution between $-40$ and $40$ degrees.

`perts = perturbations(array,'TaperPhase','Uniform',-40,40)`

```
perts=4×3 table
         Property             Type               Value
    ___________________    ___________    ________________________

    {'ElementPosition'}    {'Normal' }    {[     0]}    {[0.0625]}
    {'TaperMagnitude' }    {'Normal' }    {[0.1000]}    {[0.0200]}
    {'TaperPhase'     }    {'Uniform'}    {[   -40]}    {[    40]}
    {'ElementFailure' }    {'None'   }    {[   NaN]}    {[   NaN]}
```

Set a 20% failure rate for the elements.
`perts = perturbations(array,'ElementFailure','RandomFail',0.2)`

```
perts=4×3 table
         Property              Type                 Value
    ___________________    ______________    ________________________

    {'ElementPosition'}    {'Normal'    }    {[     0]}    {[0.0625]}
    {'TaperMagnitude' }    {'Normal'    }    {[0.1000]}    {[0.0200]}
    {'TaperPhase'     }    {'Uniform'   }    {[   -40]}    {[    40]}
    {'ElementFailure' }    {'RandomFail'}    {[0.2000]}    {[   NaN]}
```

## Input Arguments

### array

Phased array, specified as a Phased Array System Toolbox System object.

### prop

Perturbed property of the array, specified as `'ElementPosition'`, `'TaperMagnitude'`, `'TaperPhase'`, or `'ElementFailure'`.

Example: `'TaperPhase'`

Data Types: `string`

### mean

Mean value of the normal distribution, specified as a scalar. Units depend on the property `prop`:

`'ElementPosition'` meters
`'TaperMagnitude'` dimensionless
`'TaperPhase'` radians

Example: `12`

#### Dependencies

To enable this argument, set the perturbed array property `prop` to `'ElementPosition'`, `'TaperMagnitude'`, or `'TaperPhase'` and the perturbation type to `'Normal'`.

Data Types: `double`

### sigma

Standard deviation of the normal distribution, specified as a positive scalar. Units depend on the property `prop`:

`'ElementPosition'` meters
`'TaperMagnitude'` dimensionless
`'TaperPhase'` radians

Example: `1.0`

#### Dependencies

To enable this argument, set the perturbed array property `prop` to `'ElementPosition'`, `'TaperMagnitude'`, or `'TaperPhase'` and the perturbation type to `'Normal'`.

Data Types: `double`

### minval

Minimum value of the range of the uniform probability distribution, specified as a scalar. When applied to the `'TaperPhase'` property, the difference between `minval` and `maxval` should be less than or equal to 2π. Units depend on the property `prop`:

`'ElementPosition'` meters
`'TaperMagnitude'` dimensionless
`'TaperPhase'` radians

Example: `0`

#### Dependencies

To enable this argument, set the perturbed array property `prop` to `'ElementPosition'`, `'TaperMagnitude'`, or `'TaperPhase'` and the perturbation type to `'Uniform'`.

Data Types: `double`

### maxval

Maximum value of the range of the uniform probability distribution, specified as a scalar. When applied to the `'TaperPhase'` property, the difference between `minval` and `maxval` should be less than or equal to 2π. Units depend on the property `prop`:

`'ElementPosition'` meters
`'TaperMagnitude'` dimensionless
`'TaperPhase'` radians

Example: `1`

#### Dependencies

To enable this argument, set the perturbed array property `prop` to `'ElementPosition'`, `'TaperMagnitude'`, or `'TaperPhase'` and the perturbation type to `'Uniform'`.

Data Types: `double`

### failprob

Probability of failure, specified as a scalar in the interval [0, 1). Zero means that the elements never fail; any larger value gives each element that probability of failing.

Example: `0.01`

#### Dependencies

To enable this argument, set the array property `prop` to `'ElementFailure'` and the perturbation type to `'RandomFail'`.

Data Types: `double`

## Output Arguments

### perts

List of possible perturbations, returned as a MATLAB `table`. See Perturbed properties and perturbation types for a list of perturbation properties and types.

Data Types: `table`

### Perturbed properties and perturbation types

You can perturb the array by selecting one of the properties to be perturbed and then applying the type of perturbation. Each type of perturbation has specific arguments.

| Property | Perturbation Type | Arguments |
| --- | --- | --- |
| `'ElementPosition'` | `'None'`, `'Normal'`, `'Uniform'` | -; `mean`, `sigma`; `minval`, `maxval` |
| `'TaperMagnitude'` | `'None'`, `'Normal'`, `'Uniform'` | -; `mean`, `sigma`; `minval`, `maxval` |
| `'TaperPhase'` | `'None'`, `'Normal'`, `'Uniform'` | -; `mean`, `sigma`; `minval`, `maxval` |
| `'ElementFailure'` | `'None'`, `'RandomFail'` | -; `failprob` |

## Version History

Introduced in R2022a
https://brilliant.org/problems/pythagoras-theorem/
# Pythagoras theorem

In mathematics, the Pythagorean theorem—or Pythagoras' theorem—is a relation in Euclidean geometry among the three sides of a right triangle. Solve this triangle and comment your solution below.
https://web2.0calc.com/questions/find-a-polar-equation-of-the-conic-in-terms-of-r
# Find a polar equation of the conic in terms of r with its focus at the pole.

Find a polar equation of the conic in terms of $r$ with its focus at the pole. Conic: hyperbola. Vertices: $(5,\pi/2)$, $(2,\pi/2)$.

What I did is found the center to be $(7/2,\pi/2)$, so $c=7/2$, $a=3/2$, $e=\frac{7/2}{3/2}=\frac{7}{3}$. Then, for the horizontal directrix above the pole, $r=\frac{(7/3)p}{1+(7/3)\sin\theta}$. Setting $\frac{7p}{3+7\sin(\pi/2)}=1$ gives $p=10/7$, so $r=\frac{10}{3+7\sin\theta}$.

Can someone help with where I went wrong?

May 26, 2021

First of all, find the equation in the $xy$ plane, then substitute the $r\sin\theta$ and $r\cos\theta$ equations.
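A worked check (my reconstruction, not part of the original thread): the free constant should be fixed by requiring the nearer vertex $(2,\pi/2)$ to satisfy the equation, i.e. $r=2$, not $r=1$, at $\theta=\pi/2$:

```latex
e = \frac{c}{a} = \frac{7/2}{3/2} = \frac{7}{3},
\qquad
r = \frac{ep}{1 + e\sin\theta} = \frac{(7/3)p}{1 + (7/3)\sin\theta}.
% Impose r = 2 at \theta = \pi/2 (the nearer vertex):
2 = \frac{(7/3)p}{1 + 7/3} = \frac{7p}{10}
\;\Longrightarrow\;
ep = \frac{20}{3},
\qquad
r = \frac{20/3}{1 + (7/3)\sin\theta} = \frac{20}{3 + 7\sin\theta}.
```

As a consistency check, $\theta=\pi/2$ gives $r = 20/10 = 2$, and the far vertex appears as $(-5,\,3\pi/2)$ since $20/(3-7) = -5$.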
http://eprint.las.ac.cn/user/search.htm?field=author&value=Kobakhidze,%20Archil
chinaXiv:201605.01152 [pdf]

Subjects: Physics >> The Physics of Elementary Particles and Fields

We investigate a strategy to search for light, nearly degenerate higgsinos within the natural MSSM at the LHC. We demonstrate that the higgsino mass range μ ≈ 100-160 GeV, which is preferred by naturalness, can be probed at 3σ significance through the monojet search at the 14 TeV HL-LHC with 3000 fb⁻¹ of luminosity. The proposed method can also probe a certain region of the parameter space for a lightest neutralino with high higgsino purity that cannot be reached by planned direct detection experiments such as XENON-1T (2017).
https://crypto.stackexchange.com/questions/72691/classification-of-bfv-and-ckks-scheme
# Classification of BFV and CKKS schemes?

What is the classification of the BFV and CKKS schemes: are they somewhat homomorphic or fully homomorphic?

Brakerski/Fan-Vercauteren (BFV) [Brakerski12, FV12, BEHZ16, HPS18] https://eprint.iacr.org/2012/144.pdf

Cheon-Kim-Kim-Song (CKKS) [CKKS17] https://eprint.iacr.org/2016/421.pdf

• Which schemes are those? Could you please provide links to the exact versions you are asking about? Aug 19 '19 at 16:46

Both schemes are presented as leveled homomorphic encryption schemes, which means that for each $$L$$, there is at least one set of parameters ($$\lambda$$, $$q$$, etc.) that allows us to homomorphically evaluate circuits of multiplicative depth up to $$L$$ (and the reciprocal also holds, i.e. for each set of parameters, there is a limit $$L$$ on the multiplicative depth of the circuits that can be evaluated).
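The "depth budget" idea above can be illustrated with a toy model. This is only a sketch: there is no real BFV/CKKS math here, every name below is made up, and the numeric "budget" merely stands in for the ciphertext noise headroom that the actual parameters determine.

```python
# Toy noise-budget model of a *leveled* homomorphic encryption scheme.
# Illustration only: not a real HE scheme, just the accounting that makes
# parameters bound the multiplicative depth L.

class ToyCiphertext:
    def __init__(self, value, budget):
        self.value = value
        self.budget = budget  # remaining "noise budget" in arbitrary units

def params_for_depth(L, mul_cost=20, base=10):
    # "For each L there is a set of parameters": pick an initial budget
    # big enough for L multiplications plus some slack.
    return base + L * mul_cost

def encrypt(value, initial_budget):
    return ToyCiphertext(value, initial_budget)

def add(a, b):
    # Additions are cheap: small, constant budget cost.
    return ToyCiphertext(a.value + b.value, min(a.budget, b.budget) - 1)

def multiply(a, b, mul_cost=20):
    # Multiplications dominate the budget consumption.
    return ToyCiphertext(a.value * b.value, min(a.budget, b.budget) - mul_cost)

def decrypt(ct):
    if ct.budget <= 0:
        raise ValueError("noise budget exhausted: decryption would fail")
    return ct.value

# Parameters chosen for depth L = 3 support x^8 (three squarings)...
ct = encrypt(2, params_for_depth(3))
for _ in range(3):
    ct = multiply(ct, ct)
print(decrypt(ct))  # prints 256

# ...but a fourth multiplication exceeds the depth these parameters allow.
ct = multiply(ct, ct)
try:
    decrypt(ct)
except ValueError as e:
    print(e)
```

The converse direction of the definition is the failing branch: once the parameters are fixed, the budget caps the multiplicative depth. Bootstrapping, which the basic schemes in both papers do not perform, is the standard way a leveled scheme is upgraded to a fully homomorphic one.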
http://superuser.com/questions/163600/why-are-my-google-searches-redirected/163613
# Why are my Google searches redirected?

This machine was infected with various malware. I have scanned the system with Malwarebytes. It found and removed some 600 or so infected files. Now the machine seems to be running well with only one exception. Some Google search results are being redirected to some shady search engines. If I were to copy the URL from the Google Search results and paste it in the address bar it would go to the correct site, but if I click the link, I will be redirected somewhere else.

Here is my log file from HijackThis:

Logfile of Trend Micro HijackThis v2.0.2
Scan saved at 11:55:16 AM, on 7/14/2010
Platform: Windows XP SP3 (WinNT 5.01.2600)
MSIE: Internet Explorer v8.00 (8.00.6001.18702)
Boot mode: Normal

Running processes:
C:\WINDOWS\System32\smss.exe
C:\WINDOWS\system32\winlogon.exe
C:\WINDOWS\system32\services.exe
C:\WINDOWS\system32\lsass.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\System32\svchost.exe
C:\WINDOWS\system32\spoolsv.exe
C:\Program Files\Bonjour\mDNSResponder.exe
C:\WINDOWS\system32\nvsvc32.exe
C:\WINDOWS\system32\PnkBstrA.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\Explorer.EXE
C:\WINDOWS\system32\ctfmon.exe
C:\Program Files\Mozilla Firefox\firefox.exe
C:\Program Files\Mozilla Firefox\plugin-container.exe
C:\Program Files\Trend Micro\HijackThis\HijackThis.exe

R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896
R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Start Page = http://go.microsoft.com/fwlink/?LinkId=69157
R0 - HKLM\Software\Microsoft\Internet Explorer\Search,SearchAssistant = http://www.gateway.com/g/sidepanel.html?Ch=Retail&Br=EM&Loc=ENG_US&Sys=DTP&M=T3418
R1 - HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings,ProxyOverride = *.local
R3 - URLSearchHook: (no name) - {00A6FAF6-072E-44cf-8957-5838F569A31D} - (no file)
O2 - BHO: Adobe PDF Reader Link Helper - {06849E9F-C8D7-4D59-B87D-784B7D6BE0B3} - C:\Program Files\Adobe\Acrobat 7.0\ActiveX\AcroIEHelper.dll
O2 - BHO: AskBar BHO - {201f27d4-3704-41d6-89c1-aa35e39143ed} - C:\Program Files\AskBarDis\bar\bin\askBar.dll
O2 - BHO: McAfee AntiPhishing Filter - {41D68ED8-4CFF-4115-88A6-6EBB8AF19000} - c:\program files\mcafee\spamkiller\mcapfbho.dll (file missing)
O2 - BHO: ALOT Toolbar - {5AA2BA46-9913-4dc7-9620-69AB0FA17AE7} - C:\Program Files\alot\bin\alot.dll (file missing)
O2 - BHO: SSVHelper Class - {761497BB-D6F0-462C-B6EB-D4DAF1D92D43} - C:\Program Files\Java\jre1.5.0_09\bin\ssv.dll
O2 - BHO: Google Toolbar Helper - {AA58ED58-01DD-4d91-8333-CF10577473F7} - c:\program files\google\googletoolbar3.dll
O2 - BHO: CBrowserHelperObject Object - {CA6319C0-31B7-401E-A518-A07C3DB8F777} - c:\windows\system32\BAE.dll
O3 - Toolbar: McAfee VirusScan - {BA52B914-B692-46c4-B683-905236F6F655} - c:\progra~1\mcafee.com\vso\mcvsshl.dll
O3 - Toolbar: Easy-WebPrint - {327C2873-E90D-4c37-AA9D-10AC9BABA46C} - C:\Program Files\Canon\Easy-WebPrint\Toolband.dll
O3 - Toolbar: ALOT Toolbar - {5AA2BA46-9913-4dc7-9620-69AB0FA17AE7} - C:\Program Files\alot\bin\alot.dll (file missing)
O3 - Toolbar: Ask Toolbar - {3041d03e-fd4b-44e0-b742-2d9b88305f98} - C:\Program Files\AskBarDis\bar\bin\askBar.dll
O4 - HKLM\..\Run: [NvCplDaemon] RUNDLL32.EXE C:\WINDOWS\system32\NvCpl.dll,NvStartup
O4 - HKLM\..\RunOnce: [OOBEDDDemise] cmd /x /c erase C:\WINDOWS\system32\oobe\msoobe.exe
O4 - HKCU\..\Run: [ctfmon.exe] C:\WINDOWS\system32\ctfmon.exe
O4 - HKUS\S-1-5-18\..\Run: [Power2GoExpress] NA (User 'SYSTEM')
O4 - HKUS\.DEFAULT\..\Run: [Power2GoExpress] NA (User 'Default user')
O8 - Extra context menu item: &Search - http://edits.mywebsearch.com/toolbaredits/menusearch.jhtml?p=ZJxdm172YYUS
O8 - Extra context menu item: E&xport to Microsoft Excel - res://C:\PROGRA~1\MICROS~2\OFFICE11\EXCEL.EXE/3000
O8 - Extra context menu item: Easy-WebPrint Add To Print List - res://C:\Program Files\Canon\Easy-WebPrint\Resource.dll/RC_AddToList.html
O8 - Extra context menu item: Easy-WebPrint High Speed Print - res://C:\Program Files\Canon\Easy-WebPrint\Resource.dll/RC_HSPrint.html
O8 - Extra context menu item: Easy-WebPrint Preview - res://C:\Program Files\Canon\Easy-WebPrint\Resource.dll/RC_Preview.html
O8 - Extra context menu item: Easy-WebPrint Print - res://C:\Program Files\Canon\Easy-WebPrint\Resource.dll/RC_Print.html
O9 - Extra button: (no name) - {08B0E5C0-4FCB-11CF-AAA5-00401C608501} - C:\Program Files\Java\jre1.5.0_09\bin\ssv.dll
O9 - Extra 'Tools' menuitem: Sun Java Console - {08B0E5C0-4FCB-11CF-AAA5-00401C608501} - C:\Program Files\Java\jre1.5.0_09\bin\ssv.dll
O9 - Extra button: (no name) - {39FD89BF-D3F1-45b6-BB56-3582CCF489E1} - c:\program files\mcafee\spamkiller\mcapfbho.dll (file missing)
O9 - Extra 'Tools' menuitem: McAfee AntiPhishing Filter - {39FD89BF-D3F1-45b6-BB56-3582CCF489E1} - c:\program files\mcafee\spamkiller\mcapfbho.dll (file missing)
O9 - Extra button: Research - {92780B25-18CC-41C8-B9BE-3C9C571A8263} - C:\PROGRA~1\MICROS~2\OFFICE11\REFIEBAR.DLL
O9 - Extra button: Real.com - {CD67F990-D8E9-11d2-98FE-00C0F0318AFE} - C:\WINDOWS\system32\Shdocvw.dll
O9 - Extra button: (no name) - {e2e2dd38-d088-4134-82b7-f2ba38496583} - C:\WINDOWS\Network Diagnostic\xpnetdiag.exe
O9 - Extra 'Tools' menuitem: @xpsp3res.dll,-20001 - {e2e2dd38-d088-4134-82b7-f2ba38496583} - C:\WINDOWS\Network Diagnostic\xpnetdiag.exe
O9 - Extra button: Messenger - {FB5F1910-F110-11d2-BB9E-00C04F795683} - C:\Program Files\Messenger\msmsgs.exe
O9 - Extra 'Tools' menuitem: Windows Messenger - {FB5F1910-F110-11d2-BB9E-00C04F795683} - C:\Program Files\Messenger\msmsgs.exe
O14 - IERESET.INF: START_PAGE_URL=http://www.aol.com
O16 - DPF: {6E704581-CCAE-46D2-9C64-20D724B3624E} (UnagiAx Class) - http://radaol-prod-web-rr.streamops.aol.com/mediaplugin/3.0.84.2/win32/unagi3.0.84.2.cab
O23 - Service: Bonjour Service - Apple Inc.
- C:\Program Files\Bonjour\mDNSResponder.exe
O23 - Service: NVIDIA Display Driver Service (NVSvc) - NVIDIA Corporation - C:\WINDOWS\system32\nvsvc32.exe
O23 - Service: PnkBstrA - Unknown owner - C:\WINDOWS\system32\PnkBstrA.exe

- This is probably just your proxy settings in IE; they will stay there even after removing the virus. – Not Kyle stop stalking me Jan 17 '11 at 20:46
- I had this for a long time, and I finally fixed it. It's not a virus, it has to do with DNS servers. You just have to start using a public DNS like OpenDNS or Google DNS. – ixtmixilix Jan 31 '11 at 12:59

Disable these in safe mode:

R3 - URLSearchHook: (no name) - {00A6FAF6-072E-44cf-8957-5838F569A31D} - (no file)
O2 - BHO: AskBar BHO - {201f27d4-3704-41d6-89c1-aa35e39143ed} - C:\Program Files\AskBarDis\bar\bin\askBar.dll
O2 - BHO: ALOT Toolbar - {5AA2BA46-9913-4dc7-9620-69AB0FA17AE7} - C:\Program Files\alot\bin\alot.dll (file missing)
O3 - Toolbar: ALOT Toolbar - {5AA2BA46-9913-4dc7-9620-69AB0FA17AE7} - C:\Program Files\alot\bin\alot.dll (file missing)
O3 - Toolbar: Ask Toolbar - {3041d03e-fd4b-44e0-b742-2d9b88305f98} - C:\Program Files\AskBarDis\bar\bin\askBar.dll
O4 - HKLM\..\RunOnce: [OOBEDDDemise] cmd /x /c erase C:\WINDOWS\system32\oobe\msoobe.exe
O8 - Extra context menu item: &Search - http://edits.mywebsearch.com/toolbaredits/menusearch.jhtml?p=ZJxdm172YYUS

And use CWShredder, as you might be infected with CoolWebSearch.

Reboot and post a new HijackThis log; also do the following so I can take a more detailed look:

1. Download AutoRuns, run it and accept the EULA, and let it scan.
2. When the scanning is done, go to File and save it as a .arn file.
3. Upload this file to a file hosting website like RapidShare/MegaUpload/...

For virus scanning purposes, check Mouche's post... I'm more the 'manual cleaning' type. I hate resource-consuming virus scanners that take their time... :-D

Regarding CT's comment...
In Safe Mode, run AutoRuns as an administrator and untick the following items. Look in the first column for the name before the `;` symbol and in the last column for the path after it:

0; File not found: About:Home
Defense Center extension; File not found: C:\PROGRA~1\DEFENS~1\defext.dll

Please note that Defense Center is a fake rogue antivirus which should be removed!

When you have removed these four entries, reboot and tell me if you still have the problem. It is wise to update and scan with your virus scanner to remove any files left.

- How do I disable them as stated in your first statement? Use 'Fix' within HijackThis? – CT. Jul 14 '10 at 16:49
- Select them and then indeed click 'Fix'. – Tom Wijsman Jul 14 '10 at 16:54
- dl.dropbox.com/u/694102/hijackthislog.txt dl.dropbox.com/u/694102/AutoRunsLog.arn Appreciate your assistance. – CT. Jul 14 '10 at 17:16
- HijackThis seems clean; be sure to install an anti-virus and firewall to stay protected. Regarding AutoRuns, check my updated post in a moment... I see some things. – Tom Wijsman Jul 14 '10 at 17:54

There is no other explanation: your machine is still infected. It is unlikely, after such a massive infection as you describe, that any AV product will completely clean up everything. There is no other safe solution than to restart from a clean slate. My advice is to save your data, reformat the hard disk, and reinstall Windows (or restore the computer to the factory image, as the case may be).

- Massive infections can be cured just by removing all possible ways to start the malware and then removing any infected files during Safe Mode; if you want to take it to the next level, you still have AutoRuns for checking the low-level stuff and other performance analysis tools like XPerf, XBootMgr, Process Monitor, ... Reinstalling your Windows is a solution that tends to work, but it's not a solution to spend a lot of time reinstalling everything every time you are infected.
– Tom Wijsman Jul 14 '10 at 16:34
- @TomWij: Getting infected shouldn't happen very frequently... And to succeed he would need such a high level of knowledge that it excludes him asking his question in the first place. – harrymc Jul 14 '10 at 16:43
- @TomWij: You have the knowledge to answer questions on superuser.com and to avoid infection. Not everybody does. – harrymc Jul 14 '10 at 16:49
- Yes, indeed, but for some people it does happen, and for those people it isn't helpful to keep reinstalling their system; he has 600 infections, so he tends to be such a guy... Reinstalling is always an easy fix, but you had better learn to prevent viruses and how to get rid of them. – Tom Wijsman Jul 14 '10 at 16:55
- I took "Getting infected shouldn't happen very frequently" a bit personally, but it wasn't about me; that's why I typed the first part of my answer after the second, and adjusted my answer to the right context of your response to avoid confusion... – Tom Wijsman Jul 14 '10 at 16:55

I agree that it would be best to do a clean install after a backup, but you have some options. First, reboot into safe mode with networking. Next, run ESET Online Scanner, an excellent, free scanner that is a scan/remove-only version of ESET NOD32. I've had great success with this product. This will scan for viruses along with other malware in its definitions. Next, run Spybot Search & Destroy to clean up remaining spyware and malware. Finally, reboot into normal mode and look through Add/Remove Programs to get rid of anything that looks suspect. Typically, they will have already been removed, and you're just getting them out of the list ("Adhelper", for example). I've had great success with this method. I usually do some cleanup using Disk Cleanup or CCleaner afterwards. I've never had anything escape me through this process, but I don't deal with viruses or malware very frequently.

-

It's really simple to cure this. Just start using Google Public DNS.
Change your primary DNS to 8.8.8.8 and secondary DNS to 8.8.4.4 ... - -1 Using different DNS servers will not fix an infection. – goblinbox Jan 17 '11 at 21:01 @goblinbox I had this for a long time, and it's not the result of malware. – ixtmixilix Jan 31 '11 at 12:58 Understood, but the original question states specifically that there had been an infection. (Many believe that changing DNS servers has some kind of magic result, but such people typically don't know what DNS is or how it works: DNS means domain name service, and it resolves names to numbers. Period. Changing DNS servers does not heal infections, and one's local DNS servers will practically always be faster and have more relevant information than remote DNS servers. Finally, correlation does not imply causation.) – goblinbox Feb 7 '11 at 11:17
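For reference, on the Windows XP machine described in the question, that change can also be made from a command prompt. The interface name below is an assumption; list yours with `netsh interface show interface`:

```
netsh interface ip set dns name="Local Area Connection" static 8.8.8.8
netsh interface ip add dns name="Local Area Connection" 8.8.4.4 index=2
```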
https://stackoverflow.com/questions/750172/how-do-i-change-the-author-and-committer-name-email-for-multiple-commits/61368365
# How do I change the author and committer name/email for multiple commits? How do I change the author for a range of commits? • Question: does using git filter-branch preserve the SHA1's for previous tags, versions and objects? Or will changing the author name force change the associated SHA1's as well? Aug 3, 2010 at 14:13 • Hashes will change yes Oct 14, 2010 at 15:16 • Tangentially, I created a small script which finally fixed the root cause for me. gist.github.com/tripleee/16767aa4137706fd896c May 30, 2014 at 8:51 • @impinball The age of the question is hardly relevant. Creating a new duplicate question is out of the question. I suppose I could create a question which begs this particular answer but I'm not altogether convinced it would get all that much visibility. It's not like there is a shortage of Git questions here... Glad I could help, anyway. Sep 1, 2014 at 14:50 • The github script that @TimurBernikovich mentioned is great and works for me. But that github url has changed: docs.github.com/en/enterprise/2.17/user/github/using-git/… Oct 13, 2020 at 4:14 NOTE: This answer changes SHA1s, so take care when using it on a branch that has already been pushed. If you only want to fix the spelling of a name or update an old email, Git lets you do this without rewriting history using .mailmap. See my other answer. ### Using Rebase First, if you haven't already done so, you will likely want to fix your name in git-config: git config --global user.name "New Author Name" This is optional, but it will also make sure to reset the committer name, too, assuming that's what you need. To rewrite metadata for a range of commits using a rebase, do git rebase -r <some commit before all of your bad commits> \ --exec 'git commit --amend --no-edit --reset-author' --exec will run the git commit step after each commit is rewritten (as if you ran git commit && git rebase --continue repeatedly). 
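As a sanity check, the rebase recipe above can be exercised end-to-end in a throwaway repository. Everything below is a made-up placeholder (identities, file names, the `HEAD~2` range), and it assumes a reasonably recent Git, since `-r`/`--rebase-merges` appeared in 2.18:

```shell
# Scratch repo: three commits under a "wrong" identity.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q .
git config user.name "Old Name"
git config user.email "old@example.com"
for i in 1 2 3; do echo "$i" >> f; git add f; git commit -qm "commit $i"; done

# Fix the configuration first, as described above...
git config user.name "New Author Name"
git config user.email "new@example.com"

# ...then rewrite everything after HEAD~2 (the last two commits).
git rebase -r HEAD~2 --exec 'git commit --amend --no-edit --reset-author'

# The last two commits now carry the new author AND committer;
# the first commit keeps the old identity.
git log --format='%an <%ae> / %cn <%ce> : %s'
```

Because `--reset-author` takes both identities from `user.name`/`user.email`, this resets the committer as well as the author, which is what distinguishes it from the `--author` variant.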
If you also want to change your first commit (also called the 'root' commit), you will have to add --root to the rebase call. This will change both the committer and the author to your user.name/user.email configuration. If you did not want to change that config, you can use --author "New Author Name <email@address.example>" instead of --reset-author. Note that doing so will not update the committer -- just the author. ### Single Commit If you just want to change the most recent commit, a rebase is not necessary. Just amend the commit: git commit --amend --no-edit --reset-author ### Entire project history git rebase -r --root --exec "git commit --amend --no-edit --reset-author" ### For older Git clients (pre-July 2020) -r,--rebase-merges may not exist for you. As a replacement, you can use -p. Note that -p has serious issues and is now deprecated. • Great for the odd commit though - useful if you're pairing and forget to change the author Sep 25, 2009 at 11:14 • +1 for mentioning the usecase for the typical one-mistake fix: git commit --amend --author=username Mar 15, 2010 at 20:03 • This is perfect, my most common usecase is that I sit down at another computer and forget to set up author and thus usually have < 5 commits or so to fix. Aug 21, 2010 at 11:34 • git commit --amend --reset-author also works once user.name and user.email are configured correctly. – pts Jul 4, 2014 at 16:56 • Rewrite author info on all commits after <commit> using user.name and user.email from ~/.gitconfig: run git rebase -i <commit> --exec 'git commit --amend --reset-author --no-edit', save, quit. No need to edit! – ntc2 Mar 6, 2015 at 23:47 This answer uses git-filter-branch, for which the docs now give this warning: git filter-branch has a plethora of pitfalls that can produce non-obvious manglings of the intended history rewrite (and can leave you with little time to investigate such problems since it has such abysmal performance). 
These safety and performance issues cannot be backward compatibly fixed and as such, its use is not recommended. Please use an alternative history filtering tool such as git filter-repo. If you still need to use git filter-branch, please carefully read SAFETY (and PERFORMANCE) to learn about the land mines of filter-branch, and then vigilantly avoid as many of the hazards listed there as reasonably possible.

Changing the author (or committer) requires rewriting all of the history. If you're okay with that and think it's worth it, then you should check out git filter-branch. The manual page includes several examples to get you started. Also note that you can use environment variables to change the name of the author, committer, dates, etc. -- see the "Environment Variables" section of the git manual page.

Specifically, you can fix all the wrong author names and emails for all branches and tags with this command (source: GitHub help):

    #!/bin/sh

    git filter-branch --env-filter '
    OLD_EMAIL="your-old-email@example.com"
    CORRECT_NAME="Your Correct Name"
    CORRECT_EMAIL="your-correct-email@example.com"
    if [ "$GIT_COMMITTER_EMAIL" = "$OLD_EMAIL" ]
    then
        export GIT_COMMITTER_NAME="$CORRECT_NAME"
        export GIT_COMMITTER_EMAIL="$CORRECT_EMAIL"
    fi
    if [ "$GIT_AUTHOR_EMAIL" = "$OLD_EMAIL" ]
    then
        export GIT_AUTHOR_NAME="$CORRECT_NAME"
        export GIT_AUTHOR_EMAIL="$CORRECT_EMAIL"
    fi
    ' --tag-name-filter cat -- --branches --tags

To use the alternative history-filtering tool git filter-repo, first install it and construct a git-mailmap according to the format of gitmailmap:

    Proper Name <proper@email.xx> Commit Name <commit@email.xx>

And then run filter-repo with the created mailmap:

    git filter-repo --mailmap git-mailmap

• After executing the script you may remove the backup branch by executing "git update-ref -d refs/original/refs/heads/master". – D.R. Aug 14, 2013 at 16:47
• @rodowi, it duplicates all my commits.
Jun 17, 2014 at 17:43
• @RafaelBarros the author info (just like anything else in the history) is part of the commit's sha key. Any change to the history is a rewrite leading to new ids for all commits. So don't rewrite on a shared repo, or make sure all users are aware of it ... Jun 11, 2015 at 13:52
• Solved using git push --force --tags origin HEAD:master Nov 13, 2016 at 11:50
• IMPORTANT!!! Before executing the script, set your user.name and user.email git config parameters properly! And after executing the script you'll have some duplicate backup history called "original"! Delete it via git update-ref -d refs/original/refs/heads/master, then check that the .git/refs/original folder structure is empty and remove it with rm -rf .git/refs/original. Lastly, you can verify the new rewritten log via: git log --pretty=format:"[%h] %cd - Committer: %cn (%ce), Author: %an (%ae)" ! One more thing: .git/logs has some log files that still have your old name! – user964843 Feb 3, 2017 at 22:23

One liner, but be careful if you have a multi-user repository - this will change all commits to have the same (new) author and committer.

    git filter-branch -f --env-filter "GIT_AUTHOR_NAME='Newname'; GIT_AUTHOR_EMAIL='new@email'; GIT_COMMITTER_NAME='Newname'; GIT_COMMITTER_EMAIL='new@email';" HEAD

With linebreaks in the string (which is possible in bash):

    git filter-branch -f --env-filter "
    GIT_AUTHOR_NAME='Newname'
    GIT_AUTHOR_EMAIL='new@email'
    GIT_COMMITTER_NAME='Newname'
    GIT_COMMITTER_EMAIL='new@email'
    " HEAD

• Why does it rewrite all commits if you specify HEAD in the end of the command? Jun 18, 2015 at 3:26
• This does not work for my bitbucket repository, any idea? I do a git push --force --tags origin 'refs/heads/*' after the advised command Oct 5, 2016 at 21:46
• The push command for this is: $ git push --force --tags origin 'refs/heads/master' Jun 8, 2018 at 22:07
• Neat; this keeps the old timestamps too.
Mar 6, 2020 at 21:19
• @HARSHNILESHPATHAK Note that for recently created repositories the branch master has been renamed main, so the command becomes $ git push --force --tags origin 'refs/heads/main' Jan 20, 2021 at 22:30

You can also do:

    git filter-branch --commit-filter '
    if [ "$GIT_COMMITTER_NAME" = "<Old Name>" ];
    then
        GIT_COMMITTER_NAME="<New Name>";
        GIT_AUTHOR_NAME="<New Name>";
        GIT_COMMITTER_EMAIL="<New Email>";
        GIT_AUTHOR_EMAIL="<New Email>";
        git commit-tree "$@";
    else
        git commit-tree "$@";
    fi' HEAD

Note, if you are using this command in the Windows command prompt, then you need to use " instead of ':

    git filter-branch --commit-filter "
    if [ "$GIT_COMMITTER_NAME" = "<Old Name>" ];
    then
        GIT_COMMITTER_NAME="<New Name>";
        GIT_AUTHOR_NAME="<New Name>";
        GIT_COMMITTER_EMAIL="<New Email>";
        GIT_AUTHOR_EMAIL="<New Email>";
        git commit-tree "$@";
    else
        git commit-tree "$@";
    fi" HEAD

• Isn't using env-filter the easier solution? Not sure why this is getting more votes, then. Dec 9, 2011 at 9:21
• The link is broken. How do we push these changes to another repository? Feb 18, 2012 at 23:21
• env-filter will change all the commits. This solution allows a conditional. Apr 11, 2012 at 15:29
• "A previous backup already exists in refs/original/ Force overwriting the backup with -f" -- sorry, but where does the -f flag go when executing this script twice? Actually that is in Brian's answer; sorry about the disturbance, just adding -f after the filter-branch is the solution. – hhh May 4, 2012 at 22:11
• @user208769 env-filter also allows a conditional; look at my answer :-) Apr 11, 2013 at 8:52

It happens when you do not have a $HOME/.gitconfig initialized. You may fix this as:

    git config --global user.name "your name"
    git config --global user.email you@domain.example
    git commit --amend --reset-author

Tested with Git version 1.7.5.4. Note that this fixes only the last commit.

• That works really well on the last commit. Nice and simple.
Doesn't have to be a global change, using --local works too – Ben May 30, 2012 at 23:24
• This one was the big winner for me! The git commit --amend --reset-author --no-edit command is especially useful if you created commits with the wrong author information, then set the correct author after the fact via git config. Saved my a$$ just now when I had to update my email. Feb 8, 2019 at 19:36
• The answers might be overkill. First check whether this satisfies your usecase - stackoverflow.com/a/67363253/8293309 May 3, 2021 at 3:48

In the case where just the top few commits have bad authors, you can do this all inside git rebase -i using the exec command and the --amend commit, as follows:

    git rebase -i HEAD~6 # as required

which presents you with the editable list of commits:

    pick abcd Someone else's commit
    pick defg my bad commit 1
    pick 1234 my bad commit 2

Then add exec ... --author="..." lines after all lines with bad authors:

    pick abcd Someone else's commit
    pick defg my bad commit 1
    exec git commit --amend --author="New Author Name <email@address.example>" -C HEAD
    pick 1234 my bad commit 2
    exec git commit --amend --author="New Author Name <email@address.example>" -C HEAD

Save and exit the editor (to run). This solution may be longer to type than some others, but it's highly controllable - I know exactly which commits it hits. Thanks to @asmeurer for the inspiration.

• Definitely awesome. Can you shorten it by setting user.name and user.email in the repo's local config, so that each line is only exec git commit --amend --reset-author -C HEAD? Nov 30, 2012 at 11:07
• The canonical answer, to use filter-branch, just deleted refs/heads/master for me. So +1 to your controllable, editable solution. Thanks! – jmtd Jun 17, 2014 at 20:30
• In place of git rebase -i HEAD^^^^^^ you can also write git rebase -i HEAD~6 Jun 9, 2015 at 13:09
• Please note that this changes the timestamp of the commits.
See stackoverflow.com/a/11179245/1353267 for reverting to the correct timestamps Oct 10, 2017 at 11:50
• For anyone else struggling with the same problem as me: if you are trying to include the initial commit and you get fatal: Needed a single revision, try git rebase -i --root instead Nov 23, 2020 at 18:44

For a single commit:

    git commit --amend --author="Author Name <email@address.example>"

(extracted from asmeurer's answer)

• but that's only if it's the most recent commit Jan 17, 2012 at 23:24
• According to git help commit, git commit --amend changes the commit at the "tip of the current branch" (which is HEAD). This is normally the most recent commit, but you can make it any commit you want by first checking out that commit with git checkout <branch-name> or git checkout <commit-SHA>. Apr 25, 2012 at 19:33
• But if you do that, all of the commits that already have that commit as a parent will be pointing to the wrong commit. Better to use filter-branch at that point. Jul 11, 2012 at 21:02
• @JohnGietzen: You can rebase the commits back onto the one that's changed to fix that. However, if you're doing >1 commit, then as mentioned, filter-branch is probably going to be a lot easier. Oct 24, 2013 at 20:35
• Note that this changes only the commit author and not the committer Jun 18, 2015 at 3:39

GitHub originally had a nice solution (broken link), which was the following shell script:

    #!/bin/sh

    git filter-branch --env-filter '
    an="$GIT_AUTHOR_NAME"
    am="$GIT_AUTHOR_EMAIL"
    cn="$GIT_COMMITTER_NAME"
    cm="$GIT_COMMITTER_EMAIL"

    if [ "$GIT_COMMITTER_EMAIL" = "your@email.to.match.example" ]
    then
        cn="Your New Committer Name"
        cm="Your New Committer Email"
    fi
    if [ "$GIT_AUTHOR_EMAIL" = "your@email.to.match.example" ]
    then
        an="Your New Author Name"
        am="Your New Author Email"
    fi

    export GIT_AUTHOR_NAME="$an"
    export GIT_AUTHOR_EMAIL="$am"
    export GIT_COMMITTER_NAME="$cn"
    export GIT_COMMITTER_EMAIL="$cm"
    '

• Worked perfectly.
Just had to git reset --hard HEAD^ a couple of times on the other local repositories to get them to an earlier version, git pull-ed the amended version, and here I am without any lines containing unknown <stupid-windows-user@.StupidWindowsDomain.local> (got to love git's defaulting). Jan 8, 2011 at 17:34 • I cannot push after this. Do I have to use "-f"? Jul 30, 2012 at 7:01 • I did git push -f. Also, local repos have to be recloned after this. Jul 30, 2012 at 7:23 • If you need to run the shell script on a specific branch you can change the last line into: "' master..your-branch-name" (assuming you branched of master). May 29, 2013 at 18:23 • Click on the link <nice solution> as the script has been updated – gxpr Apr 10, 2018 at 11:23 A single command to change the author for the last N commits: git rebase -i HEAD~N -x "git commit --amend --author 'Author Name <author.name@mail.example>' --no-edit" NOTES • replace HEAD~N with the reference until where you want to rewrite your commits. This can be a hash, HEAD~4, a branch name, ... • the --no-edit flag makes sure the git commit --amend doesn't ask an extra confirmation • when you use git rebase -i, you can manually select the commits where to change the author, the file you edit will look like this: pick 897fe9e simplify code a little exec git commit --amend --author 'Author Name <author.name@mail.example>' --no-edit pick abb60f9 add new feature exec git commit --amend --author 'Author Name <author.name@mail.example>' --no-edit pick dc18f70 bugfix exec git commit --amend --author 'Author Name <author.name@mail.example>' --no-edit You can then still modify some lines to see where you want to change the author. This gives you a nice middle ground between automation and control: you see the steps that will run, and once you save everything will be applied at once. 
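The one-liner above can be tried in a scratch repository. The names and emails below are placeholders; GIT_SEQUENCE_EDITOR=true accepts the generated todo list unmodified, so the interactive rebase runs unattended:

```shell
# Scratch repo: three commits under the "wrong" author.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q .
git config user.name "Old Name"
git config user.email "old@example.com"
for i in 1 2 3; do echo "$i" >> f; git add f; git commit -qm "commit $i"; done

# Rewrite the author of the last 2 commits only.
GIT_SEQUENCE_EDITOR=true git rebase -i HEAD~2 \
  -x "git commit --amend --author 'Author Name <author.name@mail.example>' --no-edit"

# The oldest commit keeps its author; the committer stays "Old Name"
# everywhere, because --author does not touch the committer identity.
git log --format='%an <%ae> / %cn <%ce> : %s'
```

This also illustrates the difference from --reset-author: with --author, only the author field changes, while the committer still comes from user.name/user.email.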
Note that if you already fixed the author information with git config user.name <your_name> and git config user.email <your_email>, you can also use this command: git rebase -i HEAD~N -x "git commit --amend --reset-author --no-edit" • I used HEAD~8 and it shows way more than the last 8 commits. Jan 17, 2020 at 0:05 • @BryanBryce if there are merge commits involved, things get complicated :) Jan 17, 2020 at 8:04 • You use --root instead of HEAD~N to edit the entire history (including initial commit), and use --reset-author to take the current committer instead of --author ... Feb 25, 2021 at 12:26 • My use case was that I had to change all past commits in some private repositories because my pushes were under a different username with no email attached. The first bit allowed me to change the author and email for the first N commits but it did not preserve the commit timestamps, those got updated along with it. I solved this by using this script. It is nice and clean and allows me to change the entire commit history to a single username and email while preserving the commit timestamps. Jun 7, 2021 at 12:18 • @PedroHenrique: you need to replace HEAD~4 with the reference until where you want to rewrite your commits... I'll try to make this a little clearer in my answer. As I mentioned before: beware for merge commits where you will get into complicated stuff Sep 8, 2021 at 6:37 As docgnome mentioned, rewriting history is dangerous and will break other people's repositories. 
But if you really want to do that, and you are in a bash environment (no problem on Linux; on Windows, you can use git bash, which is provided with the installation of git), use git filter-branch:

    git filter-branch --env-filter '
        if [ $GIT_AUTHOR_EMAIL = bad@email ];
        then GIT_AUTHOR_EMAIL=correct@email;
        fi;
    export GIT_AUTHOR_EMAIL'

To speed things up, you can specify a range of revisions you want to rewrite:

    git filter-branch --env-filter '
        if [ $GIT_AUTHOR_EMAIL = bad@email ];
        then GIT_AUTHOR_EMAIL=correct@email;
        fi;
    export GIT_AUTHOR_EMAIL' HEAD~20..HEAD

• Do note that this will leave any tags pointing at the old commits. --tag-name-filter cat is the "make it work" option. Mar 27, 2014 at 16:46
• @romkyns any idea on how to change tags as well? Jun 18, 2015 at 3:30
• @NickVolynkin Yes, you specify --tag-name-filter cat. This really should have been the default behaviour. Jun 18, 2015 at 21:36
• The answers might be overkill. First check whether this satisfies your usecase - stackoverflow.com/a/67363253/8293309 May 3, 2021 at 3:49

You can use this as an alias so you can do:

    git change-commits GIT_AUTHOR_NAME "old name" "new name"

or for the last 10 commits:

    git change-commits GIT_AUTHOR_EMAIL "old@email.com" "new@email.com" HEAD~10..HEAD

Add to ~/.gitconfig:

    [alias]
        change-commits = "!f() { VAR=$1; OLD=$2; NEW=$3; shift 3; git filter-branch --env-filter \"if [[ \\\"$`echo $VAR`\\\" = '$OLD' ]]; then export $VAR='$NEW'; fi\" $@; }; f "

Hope it is useful.

• "git: 'change-commits' is not a git command. See 'git --help'." Apr 3, 2017 at 2:05
• After this command & syncing with master, all commits in the history are duplicated! Even those of other users :( Feb 26, 2019 at 13:00
• @Vladimir that is expected, please read up on changing history in git Feb 28, 2019 at 10:26
• For me it seems to run in /bin/sh, so I had to replace the bash-specific test [[ ]] with the sh-compatible test [ ] (single brackets). Besides that it works very well, thanks!
Jun 5, 2020 at 10:22
• @Native_Mobile_Arch_Dev You need this: git config --global alias.change-commits '!'"f() { VAR=\$1; OLD=\$2; NEW=\$3; shift 3; git filter-branch --env-filter \"if [[ \\\"\$`echo \$VAR`\\\" = '\$OLD' ]]; then export \$VAR='\$NEW'; fi\" \$@; }; f" Mar 29, 2021 at 7:45

When taking over an unmerged commit from another author, there is an easy way to handle this:

    git commit --amend --reset-author

• For a single commit, and if you want to put your username, this is the easiest way. Apr 6, 2016 at 17:08
• You can add --no-edit to make this even easier, as generally most people will want to update only the email address and not the commit message Jun 13, 2016 at 18:15
• Can you guys please share the git command just to update the last commit's email/username with the new one – Adil Aug 3, 2016 at 11:52
• Did you try this? That should be a side effect of this; if not, stackoverflow.com/a/2717477/654245 looks like a good path. Aug 4, 2016 at 3:08

I should point out that if the only problem is that the author/email is different from your usual, this is not a problem. The correct fix is to create a file called .mailmap at the top level of the repository, with lines like

    Name you want <email you want> Name you don't want <email you don't want>

And from then on, commands like git shortlog will consider those two names to be the same (unless you specifically tell them not to). See https://schacon.github.io/git/git-shortlog.html for more information.

This has the advantage over all the other solutions here that you don't have to rewrite history, which can cause problems if you have an upstream, and is always a good way to accidentally lose data.

Of course, if you committed something as yourself and it should really be someone else, and you don't mind rewriting history at this point, changing the commit author is probably a good idea for attribution purposes (in which case I direct you to my other answer here).

• Actually this is a very interesting answer.
In my case I made some commits from home and it may be confusing an extra author so this is all I needed. Sep 8, 2020 at 10:31 • Also, notice this does not works for web side on Gitea. Sep 8, 2020 at 10:39 • @iuliu.net I'm not sure. This question stackoverflow.com/questions/53629125/… seems to suggest it does, but I haven't confirmed it. Certainly if they don't then they ought to, because it's a standard part of git. Jan 11, 2021 at 9:20 This is a more elaborated version of @Brian's version: To change the author and committer, you can do this (with linebreaks in the string which is possible in bash): git filter-branch --env-filter ' if [ "$GIT_COMMITTER_NAME" = "<Old name>" ]; then GIT_COMMITTER_NAME="<New name>"; GIT_COMMITTER_EMAIL="<New email>"; GIT_AUTHOR_NAME="<New name>"; GIT_AUTHOR_EMAIL="<New email>"; fi' -- --all You might get one of these errors: 1. The temporary directory exists already 2. Refs starting with refs/original exists already (this means another filter-branch has been run previously on the repository and the then original branch reference is backed up at refs/original) If you want to force the run in spite of these errors, add the --force flag: git filter-branch --force --env-filter ' if [ "$GIT_COMMITTER_NAME" = "<Old name>" ]; then GIT_COMMITTER_NAME="<New name>"; GIT_COMMITTER_EMAIL="<New email>"; GIT_AUTHOR_NAME="<New name>"; GIT_AUTHOR_EMAIL="<New email>"; fi' -- --all A little explanation of the -- --all option might be needed: It makes the filter-branch work on all revisions on all refs (which includes all branches). This means, for example, that tags are also rewritten and is visible on the rewritten branches. A common "mistake" is to use HEAD instead, which means filtering all revisions on just the current branch. And then no tags (or other refs) would exist in the rewritten branch. • Kudos for supplying a procedure that changes commits on all refs/branches. 
May 17, 2015 at 22:05 A safer alternative to git's filter-branch is filter-repo tool as suggested by git docs here. git filter-repo --commit-callback ' old_email = b"your-old-email@example.com" correct_name = b"Your Correct Name" correct_email = b"your-correct-email@example.com" if commit.committer_email == old_email : commit.committer_name = correct_name commit.committer_email = correct_email if commit.author_email == old_email : commit.author_name = correct_name commit.author_email = correct_email ' The above command mirrors the logic used in this script but uses filter-repo instead of filter-branch. The code body after commit-callback option is basically python code used for processing commits. You can write your own logic in python here. See more about commit object and its attributes here. Since filter-repo tool is not bundled with git you need to install it separately. If you have a python env >= 3.5, you can use pip to install it. pip3 install git-filter-repo Note: It is strongly recommended to try filter-repo tool on a fresh clone. Also remotes are removed once the operation is done. Read more on why remotes are removed here. Also read the limitations of this tool under INTERNALS section. • This seems to be the new kid on the block and I cherish this answer like gold. remember the fields have to be binary and then remove the == lines, and You can unconditionally change everything before pushing. Did I say I like this answer? It should be the accepted one. Feb 27, 2021 at 9:08 • Thank you for sharing the links in the Note section. Aug 31, 2022 at 9:28 • To get this to work on Windows, I had to escape all the double quotes: old_email = b\"your-old-email@example.com\" Sep 1, 2022 at 6:46 1. run git rebase -i <sha1 or ref of starting point> 2. mark all commits that you want to change with edit (or e) 3. 
loop the following two commands until you have processed all the commits: git commit --amend --reuse-message=HEAD --author="New Author <new@author.email>" ; git rebase --continue This will keep all the other commit information (including the dates). The --reuse-message=HEAD option prevents the message editor from launching. • This doesn't update the committer. If you want to update the author and committer while keeping the dates, you may be interested in my answer Aug 11, 2022 at 1:44 I use the following to rewrite the author for an entire repository, including tags and all branches: git filter-branch --tag-name-filter cat --env-filter " export GIT_AUTHOR_NAME='New name'; export GIT_AUTHOR_EMAIL='New email' " -- --all Then, as described in the MAN page of filter-branch, remove all original refs backed up by filter-branch (this is destructive, backup first): git for-each-ref --format="%(refname)" refs/original/ | \ xargs -n 1 git update-ref -d • It's very important to use --tag-name-filter cat. Otherwise your tags will remain on the original chain of commits. The other answers fail to mention this. Mar 30, 2014 at 17:22 I adapted this solution which works by ingesting a simple author-conv-file (format is the same as one for git-cvsimport). It works by changing all users as defined in the author-conv-file across all branches. We used this in conjunction with cvs2git to migrate our repository from cvs to git. i.e. 
Sample author-conv-file:

    john=John Doe <john.doe@hotmail.com>
    jill=Jill Doe <jill.doe@hotmail.com>

The script:

    #!/bin/bash

    export authors_file=author-conv-file

    git filter-branch -f --env-filter '
    get_name () {
        grep "^$1=" "$authors_file" | sed "s/^.*=\(.*\) <.*>$/\1/"
    }
    get_email () {
        grep "^$1=" "$authors_file" | sed "s/^.*=.* <\(.*\)>$/\1/"
    }
    GIT_AUTHOR_NAME=$(get_name $GIT_COMMITTER_NAME) &&
    GIT_AUTHOR_EMAIL=$(get_email $GIT_COMMITTER_NAME) &&
    GIT_COMMITTER_NAME=$GIT_AUTHOR_NAME &&
    GIT_COMMITTER_EMAIL=$GIT_AUTHOR_EMAIL &&
    export GIT_AUTHOR_NAME GIT_AUTHOR_EMAIL &&
    export GIT_COMMITTER_NAME GIT_COMMITTER_EMAIL
    ' -- --all

• Thanks, I wonder why this is not core git (or git-svn) functionality. This can be done with a flag for git svn clone, but not in git filter-branch... Feb 15, 2012 at 13:36

I found the presented versions way too aggressive, especially if you commit patches from other developers; this would essentially steal their code. The version below works on all branches and changes the author and committer separately to prevent that. Kudos to leif81 for the all option.

    #!/bin/bash

    git filter-branch --env-filter '
    if [ "$GIT_AUTHOR_NAME" = "<old author>" ];
    then
        GIT_AUTHOR_NAME="<new author>";
        GIT_AUTHOR_EMAIL="<youmail@somehost.ext>";
    fi
    if [ "$GIT_COMMITTER_NAME" = "<old committer>" ];
    then
        GIT_COMMITTER_NAME="<new committer>";
        GIT_COMMITTER_EMAIL="<youmail@somehost.ext>";
    fi
    ' -- --all

1. Change the commit author name & email by amend, then replace the old commit with the new one:

    $ git checkout <commit-hash>                              # check out the commit we need to modify
    $ git commit --amend --author "name <author@email.com>"   # change the author name and email
    $ git replace <old-commit-hash> <new-commit-hash>         # replace the old commit with the new one
    $ git filter-branch -- --all                              # rewrite all future commits based on the replacement
    $ git replace -d <old-commit-hash>                        # remove the replacement for cleanliness
    $ git push -f origin HEAD                                 # force push

2.
Another way, rebasing:

    $ git rebase -i <good-commit-hash>    # go back to the last good commit
    # The editor opens; replace 'pick' with 'edit' before each commit whose author you want to change
    $ git commit --amend --author="author name <author@email.com>"   # change the author name & email
    # Save changes and exit the editor
    $ git rebase --continue               # finish the rebase

• Very nice answer. I like that the changes are wrapped up, from the very update to even cleaning up the git commits May 7, 2017 at 20:41

The fastest, easiest way to do this is to use the --exec argument of git rebase:

    git rebase -i -p --exec 'git commit --amend --reset-author --no-edit'

This will create a todo list that looks like this:

    pick ef11092 Blah blah blah
    exec git commit --amend --reset-author --no-edit
    pick 52d6391 Blah bloh bloo
    exec git commit --amend --reset-author --no-edit
    pick 30ebbfe Blah bluh bleh
    exec git commit --amend --reset-author --no-edit
    ...

and this will all run automatically, which helps when you have hundreds of commits.

• You can replace -p with --root to change all commits in the history (the -p option is deprecated). And note that this only works after you have corrected the username and email via git config user.name <yourname> and git config user.email <youremail>. Mar 6, 2021 at 10:15
• I have a repository that I've been working on with another contributor. I want to change all my commits' credentials. Is your suggestion safe to use in this case, to avoid any modification on the other contributor's commits? Oct 26, 2022 at 21:12

If you are the only user of this repository, you can rewrite history using either git filter-branch (as svick wrote), or git fast-export/git fast-import plus a filter script (as described in the article referenced in docgnome's answer), or interactive rebase. But any of these will change revisions from the first changed commit onwards; this means trouble for anybody who based their changes on your branch pre-rewrite.
RECOVERY

If other developers didn't base their work on the pre-rewrite version, the simplest solution is to re-clone (clone again). Alternatively they can try git pull --rebase, which would fast-forward if there weren't any changes in their repository, or rebase their branch on top of the re-written commits (we want to avoid a merge, as it would keep pre-rewrite commits forever). All of this assumes that they have no uncommitted work; use git stash to stash away changes otherwise.

If other developers use feature branches, and/or git pull --rebase doesn't work e.g. because upstream is not set up, they have to rebase their work on top of post-rewrite commits. For example, just after fetching new changes (git fetch), for a master branch based on / forked from origin/master, one needs to run

    $ git rebase --onto origin/master origin/master@{1} master

Here origin/master@{1} is the pre-rewrite state (before the fetch); see gitrevisions.

An alternate solution would be to use the refs/replace/ mechanism, available in Git since version 1.6.5. In this solution you provide replacements for commits that have the wrong email; then anybody who fetches 'replace' refs (something like a fetch = +refs/replace/*:refs/replace/* refspec in the appropriate place in their .git/config) gets the replacements transparently, while those who do not fetch those refs see the old commits. The procedure goes something like this:

1. Find all commits with the wrong email, for example using

    $ git log --author=user@wrong.email --all

2. For each wrong commit, create a replacement commit, and add it to the object database

    $ git cat-file -p <ID of wrong commit> | sed -e 's/user@wrong\.email/user@example.com/g' > tmp.txt
    $ git hash-object -t commit -w tmp.txt
    <ID of corrected commit>

3. Now that you have the corrected commit in the object database, you have to tell git to automatically and transparently replace the wrong commit with the corrected one, using the git replace command:

    $ git replace <ID of wrong commit> <ID of corrected commit>

4.
Finally, list all replacements to check whether this procedure succeeded

    $ git replace -l

and check if the replacements take place

    $ git log --author=user@wrong.email --all

You can of course automate this procedure... well, all except using git replace, which doesn't (yet) have a batch mode, so you would have to use a shell loop for that, or do the replacing "by hand". NOT TESTED! YMMV.

Note that you might encounter some rough corners when using the refs/replace/ mechanism: it is new, and not yet very well tested.

Note that git stores two different e-mail addresses: one for the committer (the person who committed the change) and another for the author (the person who wrote the change). The committer information isn't displayed in most places, but you can see it with git log -1 --format=%cn,%ce (or use show instead of log to specify a particular commit). While changing the author of your last commit is as simple as git commit --amend --author "Author Name <email@example.com>", there is no one-liner or argument to do the same to the committer information. The solution is to (temporarily, or not) change your user information, then amend the commit, which will update the committer to your current information:

    git config user.email my_other_email@example.com
    git commit --amend

• Note that the old value is still in a few places in path\to\repo\.git. I'm not sure yet what you'd need to do to expunge it totally. Amends unfortunately (?) don't seem to erase. Oct 8, 2014 at 15:02

To reset ALL commits (including the first commit) to the current user and the current timestamp:

    git rebase --root --exec "git commit --amend --no-edit --date 'now' --reset-author"

• this will work only for the current branch. Dec 14, 2021 at 11:42

If the commits you want to fix are the latest ones, and just a couple of them, you can use a combination of git reset and git stash to go back and commit them again after configuring the right name and email.
The sequence will be something like this (for 2 wrong commits, no pending changes):

    git config user.name <good name>
    git config user.email <good email>
    git reset HEAD~1
    git stash
    git reset HEAD~1
    git commit -a
    git stash pop
    git commit -a

Each git reset HEAD~1 undoes one commit while keeping its changes in the working tree; the stash holds the newer commit's changes while the older commit is re-created with the corrected identity.

If you are using Eclipse with EGit, there is a quite easy solution. Assumption: you have commits in a local branch 'local_master_user_x' which cannot be pushed to a remote branch 'master' because of the invalid user.

1. Checkout the remote branch 'master'
2. Select the projects/folders/files for which 'local_master_user_x' contains changes
3. Right-click - Replace with - Branch - 'local_master_user_x'
4. Commit these changes again, this time as the correct user and into the local branch 'master'
5. Push to remote 'master'

Using interactive rebase, you can place an amend command after each commit you want to alter. For instance:

    pick a07cb86 Project tile template with full details and styling
    x git commit --amend --reset-author -CHEAD

(x is the todo-list shorthand for exec.)

• The problem with this is that other commit metadata (e.g. date and time) is also amended. I just found that out the hard way ;-). Jul 7, 2013 at 20:31

We experienced an issue today where a UTF-8 character in an author name was causing trouble on the build server, so we had to rewrite the history to correct it.
The steps taken were:

Step 2: Run the following bash script:

    #!/bin/sh

    REPO_URL=ssh://path/to/your.git
    REPO_DIR=rewrite.tmp

    # Clone the repository
    git clone ${REPO_URL} ${REPO_DIR}

    # Change to the cloned repository
    cd ${REPO_DIR}

    # Checkout all the remote branches as local tracking branches
    git branch --list -r 'origin/*' | cut -c10- | xargs -n1 git checkout

    # Rewrite the history; use a system that will preserve the eol (or lack of it)
    # in commit messages - preferably Linux, not OS X
    git filter-branch --env-filter '
    OLD_EMAIL="me@something.com"
    CORRECT_NAME="New Me"
    if [ "$GIT_COMMITTER_EMAIL" = "$OLD_EMAIL" ]
    then
        export GIT_COMMITTER_NAME="$CORRECT_NAME"
    fi
    if [ "$GIT_AUTHOR_EMAIL" = "$OLD_EMAIL" ]
    then
        export GIT_AUTHOR_NAME="$CORRECT_NAME"
    fi
    ' --tag-name-filter cat -- --branches --tags

    # Force push the rewritten branches + tags to the remote
    git push -f

    # Remove all knowledge that we did something
    rm -rf ${REPO_DIR}

    # Tell your colleagues to git pull --rebase on all their local remote tracking branches

Quick overview: clone the repository into a temporary directory, check out all the remote branches, run the script which rewrites the history, force-push the new state, and tell all your colleagues to do a rebase pull to get the changes.

We had trouble running this on OS X because it somehow messed up line endings in commit messages, so we had to re-run it on a Linux machine afterwards.

Your problem is really common. See "Using Mailmap to Fix Authors List in Git".

For the sake of simplicity, I have created a script to ease the process: git-changemail. After putting that script on your path, you can issue commands like:

• Change author matchings on the current branch

      $ git changemail -a old@email.com -n newname -m new@email.com

• Change author and committer matchings on <branch> and <branch2>.
Pass -f to filter-branch to allow rewriting backups:

      $ git changemail -b old@email.com -n newname -m new@email.com -- -f <branch> <branch2>

• Show existing users on the repo

      $ git changemail --show-both

By the way, after making your changes, clean the backup left by filter-branch with: git-backup-clean

• when i run your command, it says "fatal: cannot exec 'git-changemail': Permission denied" Sep 2, 2015 at 8:23
• @Govind You need to set the execute permission for the script: chmod +x git-changemail Mar 19, 2021 at 0:45

I want to add my example too: a bash function taking the parameters below. This works on Linux Mint 17.3:

    # $1 => email to change, $2 => new name, $3 => new email
    function git_change_user_config_for_commit {
        # defaults
        WRONG_EMAIL=${1:-"you_wrong_mail@hello.world"}
        NEW_NAME=${2:-"your name"}
        NEW_EMAIL=${3:-"new_mail@hello.world"}

        git filter-branch -f --env-filter "
            if [ \"\$GIT_COMMITTER_EMAIL\" = '$WRONG_EMAIL' ]; then
                export GIT_COMMITTER_NAME='$NEW_NAME'
                export GIT_COMMITTER_EMAIL='$NEW_EMAIL'
            fi
            if [ \"\$GIT_AUTHOR_EMAIL\" = '$WRONG_EMAIL' ]; then
                export GIT_AUTHOR_NAME='$NEW_NAME'
                export GIT_AUTHOR_EMAIL='$NEW_EMAIL'
            fi
        " --tag-name-filter cat -- --branches --tags;
    }
https://link.springer.com/chapter/10.1007/978-3-642-24466-7_2
An EM Algorithm for the Student-t Cluster-Weighted Modeling

• Salvatore Ingrassia (Dipartimento di Impresa, Culture e Società, Università di Catania)
• Simona C. Minotti (Dipartimento di Statistica, Università di Milano-Bicocca)
• Giuseppe Incarbone (Dipartimento di Impresa, Culture e Società, Università di Catania)

Conference paper. Part of the Studies in Classification, Data Analysis, and Knowledge Organization book series (STUDIES CLASS).

Abstract: Cluster-Weighted Modeling is a flexible statistical framework for modeling local relationships in heterogeneous populations on the basis of weighted combinations of local models. Besides the traditional approach based on Gaussian assumptions, here we consider Cluster-Weighted Modeling based on Student-t distributions. In this paper we present an EM algorithm for parameter estimation in Cluster-Weighted models according to the maximum likelihood approach.

Keywords: local model, conditional density, finite mixture, Gaussian case, Gaussian assumption
https://stats.stackexchange.com/questions/326446/gaussian-process-regression-for-large-datasets/327978
# Gaussian process regression for large datasets

I've been learning about Gaussian process regression from online videos and lecture notes. My understanding is that if we have a dataset with $n$ points, then we assume the data is sampled from an $n$-dimensional multivariate Gaussian. So my question is: in the case where $n$ is in the tens of millions, does Gaussian process regression still work? Will the kernel matrix not be huge, rendering the process completely inefficient? If so, are there techniques in place to deal with this, like sampling from the dataset repeatedly? What are some good methods for dealing with such cases?

• Why do you want to use a Gaussian process and not something that is designed for dealing with large data? – Tim Feb 2 '18 at 17:08

Usually, what you can do is train Gaussian processes on subsamples of your dataset (bagging). Bagging is implemented in scikit-learn and can be used easily; see for example the documentation. Calling $n$ the number of observations, $n_{bags}$ the number of bags you use and $n_{p}$ the number of points per bag, this allows changing the training time from $O(n^3)$ to $O(n_{bags}n_{p}^3)$. Therefore, with small bags but using all the data, you can achieve a much lower training time. Unfortunately, this often reduces the performance of the model.

Apart from bagging techniques, there is some active research on making Gaussian process regression scalable. The article Kernel Interpolation for Scalable Structured Gaussian Processes (KISS-GP) proposes to reduce the training time to $O(n)$ and comes with MATLAB code.

There is a wide range of approaches to scale GPs to large datasets, for example:

Low-rank approaches: these endeavor to create a low-rank approximation of the covariance matrix. Most famous perhaps is the Nyström method, which projects the data onto a subset of points. Building on that, FITC and PITC were developed, which use pseudo-points rather than observed points.
These are included in, for example, the GPy Python library. Other approaches include random Fourier features.

H-matrices: these use a hierarchical structuring of the covariance matrix and apply low-rank approximations to each structure's submatrix. This is less commonly implemented in popular libraries.

Kronecker methods: these use Kronecker products of covariance matrices in order to speed up the computational bottleneck.

Bayesian committee machines: this involves splitting your data into subsets and modeling each one with a GP, then combining the predictions using the optimal Bayesian combination of the outputs. This is quite easy to implement yourself and is fast, but it kind of breaks your kernel, if you care about that - Mark Deisenroth's paper should be easy enough to follow here.
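The bagging idea from the first answer can be sketched with plain NumPy (an illustrative toy, not the scikit-learn route the answer mentions; gp_predict and bagged_gp_predict are made-up names, and the kernel hyperparameters are fixed rather than learned):

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    sq_dist = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * sq_dist / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2, length_scale=1.0):
    """Posterior mean of a zero-mean GP: the solve is O(n^3) in len(X_train)."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_star = rbf_kernel(X_test, X_train, length_scale)
    return K_star @ np.linalg.solve(K, y_train)

def bagged_gp_predict(X, y, X_test, n_bags=8, n_per_bag=50, seed=0):
    """Average GP predictions over random subsamples: O(n_bags * n_per_bag^3)."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_bags):
        idx = rng.choice(len(X), size=min(n_per_bag, len(X)), replace=False)
        preds.append(gp_predict(X[idx], y[idx], X_test))
    return np.mean(preds, axis=0)
```

Each bag costs $O(n_p^3)$ instead of a single $O(n^3)$ solve over the full dataset, at the price of some accuracy — the trade-off described above.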
http://www.ni.com/documentation/en/labview-comms/2.0/mnode/repeating-commands-in-the-mathscript-console-command-window/
# Repeating Commands in the MathScript Console Command Window

You can repeat the last entered commands in the MathScript Console command window by using the arrow-up key. Pressing the arrow-up key re-enters the last command, pressing it again re-enters the second-to-last command, and so on. Complete the following steps to enter several commands and then re-enter them with the arrow-up key.

1. Select View»More»Tools Launcher to open the Tools Launcher.
2. Click MathScript Console to open the MathScript Console.
3. Enter A = sin(0.5); in the MathScript Console command window, which is on the right side of the console.
4. Press Enter. The workspace, which is on the left side of the console, displays the name, type, size, and calculated value of A.
5. Enter B = cos(0.5);.
6. Press Enter. The workspace displays the name, type, size, and calculated value of B.
7. Enter C = tan(0.5);.
8. Press Enter. The workspace displays the name, type, size, and calculated value of C.
9. Use the arrow-up key to re-enter these commands.

Note: You can also modify the values within the function before pressing Enter to calculate the value.
https://search.datacite.org/works/10.4230/lipics.stacs.2009.1839
### The Price of Anarchy in Cooperative Network Creation Games

We analyze the structure of equilibria and the price of anarchy in the family of network creation games considered extensively in the past few years, which attempt to unify the network design and network routing problems by modeling both creation and usage costs. In general, the games are played on a host graph, where each node is a selfish independent agent (player) and each edge has a fixed link creation cost $\alpha$. Together the agents create...
https://joshuagoings.com/2013/12/04/research-code-dump/
# Research code dump!

I've finally gotten around to cleaning up my home-brewed research code (at least a little bit), and I've hosted it here on my GitHub. I'm calling the project pyqchem, and it contains Hartree-Fock, MP2, coupled cluster and a smattering of excited-state methods. I'd love it if you check it out!

You'll notice that I still don't have a way of reading in the molecular geometry and basis set, so I am still reliant on parsing G09 output for the atomic orbital integrals. Oh well, it's a future project :) I'd also love to learn how to make it object-oriented, as well as optimize the lower-level routines in C/C++. It is so far from optimized right now it is ridiculous. I think my EOM-CCSD code scales as N^8 instead of N^6, just because I haven't factored the equations right. But it works, and it has helped me learn so much about electronic structure methods. If you get the chance to look at it, I would love input for the future. I'm still developing as a programmer and I could use all the help I can get!

I hosted a lot of premade molecules (and their AO integrals), so all you have to do is execute pyqchem.py followed by the folder of interest. For example, running pyqchem.py on the folder h2_3-21G uses the precomputed atomic-orbital basis integrals it contains. The type of calculation is changed by editing the pyqchem.py script itself. Say I wanted to perform an MP2 calculation on H2 in a 3-21G basis: I open pyqchem.py, edit the method at the top, and run the script again. And you'll see the pretty output dump to your terminal :) That's all there is to it! Enjoy!
http://www.physicsforums.com/showthread.php?p=3739887
# Obama's Candidacy

by Pythagorean
Tags: candidacy, obama

PF Gold P: 1,951
If Romney does not win the primary, I think it's safe to say Obama is definitely winning a second term. None of the other candidates are really viable. Especially Newt Gingrich, who has more black marks on his record than a smudged printer test sheet.

PF Gold P: 7,120
Quote by Char. Limit: Wow, didn't take long for this thread to get derailed by anti-Obama fanatics, did it?
The thread started as a single thoughtless statement expressing that Obama is better than the others. How exactly was this derailed? I'm sorry, I suppose everyone should just nod politely and agree, lest we be called anti-Obama fanatics.

PF Gold P: 4,287
Quote by Pengwuino: The thread started as a single thoughtless statement expressing that Obama is better than the others. How exactly was this derailed? I'm sorry, I suppose everyone should just nod politely and agree, lest we be called anti-Obama fanatics.
Well.. you did politely agree with our first post : )

PF Gold P: 7,120
Quote by Gokul43201: So I'd be surprised if any significant fraction of the population has seen an increase yet (though that may change in the next few years). I think you'd have to be a chain-smoking (see: tobacco tax increase) paper mill to have seen more tax raises than cuts.
They've had hikes in health care premiums starting right after Obama passed his health care plan. Insurers aren't idiots. My father also runs a small seasonal tax preparation business and has seen his costs go up. Hell, I think the profit from the business barely covers their normal tax bill. The only good President in my opinion will be the one who gets rid of all the BS in the tax code. I did a client's return the other night (I work for him as well on the side) and this lady had $15k income, paid $1.5k in SS/taxes, and since she had 2 kids, received an $8000 refund.
My father does mainly lower income and middle class folks tax return and he says in all his years, the basic trend really is that lower and lower-middle class people do not pay ANY taxes. Most of them receive so much that the feds practically repay any state sales tax the people may have paid so "any" tax literally means ANY tax. The problem with this country is that a vast majority of people pay so little taxes that they have no idea what it costs to run the country. This is why I dislike the pro-taxes types and the people who buy votes by running with pro-taxes agendas. If 30% of everyones income was taken away before you could even see it, I think people would start being a little more wary of having so many taxes. Quote by Pythagorean Well.. you did politely agree with our first post : ) It was a back-handed agreement. It's like saying that the UN has the most experience being the UN. I can't believe the thread wasn't shut down immediately. PF Gold P: 4,287 Quote by Pengwuino The problem with this country is that a vast majority of people pay so little taxes that they have no idea what it costs to run the country. This is why I dislike the pro-taxes types and the people who buy votes by running with pro-taxes agendas. If 30% of everyones income was taken away before you could even see it, I think people would start being a little more wary of having so many taxes. In general, higher taxed places are actually happier. Of course, the higher tax has to actually go towards people's happiness. But based on so-called "for the greater goods" reply in this thread... the money is actually going towards people's happiness. Which is why Obama is going to win : ) As an example, Denmark has a 41.4 HPI, The US has 28.8 HPI, just looking at taxes and happiness index. But you can also read a more thorough review: http://www.marketwatch.com/story/the...-heavily-taxed It was a back-handed agreement. It's like saying that the UN has the most experience being the UN. 
I can't believe the thread wasn't shut down immediately.

It's more like saying let's not uproot the UN and replace it with an administration that has a completely different value system. The time it takes to change everything and all the conflicting policies during transition would be much more costly to members of the UN. And why? The UN is doing its job!

The UN is an excellent candidate for remaining the UN!!!

Emeritus Sci Advisor PF Gold P: 11,155
Quote by Pengwuino: They've had hikes in health care premiums starting right after Obama passed his health care plan. Insurers aren't idiots.
They also had hikes right before, and the year before, and the year before that ... going back many, many years, and at about thrice the inflation rate, on average. What might be useful is a comparison of the increases after with the rate of increase before ACA was passed. I haven't seen any data that's recent enough for that.

Quote: My father also runs a small seasonal tax preparation business and has seen his costs go up. Hell, I think the profit from the business barely covers their normal tax bill.
But this is not to say that he's seen a net increase in taxes, is it?

Quote: The only good President in my opinion will be the one who gets rid of all the BS in the tax code.
Might not be any President that can pull it off. For one thing, you'd need a supermajority in Congress that wants the same thing.

Quote: I did a client's return the other night and this lady had $15k income, paid $1.5k in SS/taxes, and since she had 2 kids, received an $8000 refund. My father does mainly lower-income and middle-class folks' tax returns and he says in all his years, the basic trend really is that lower and lower-middle class people do not pay ANY taxes. Most of them receive so much that the feds practically repay any state sales tax the people may have paid, so "any" tax literally means ANY tax.
I believe this, though it's quite the opposite in my case.
I pay a much higher tax rate than say, Romney ... on a pathetic postdoc salary. If 30% of everyones income was taken away before you could even see it, I think people would start being a little more wary of having so many taxes. I agree. PF Gold P: 4,287 well, look at that... the slope is smaller during Obama! It looks like there's a lot of fallacy in people's selective claims about rising costs. Pengwuino, perhaps you should have your parents create a PF account rather than us relying on your hearsay. P: 1,414 So far, and this is just tentative, and just my opinion, I don't think that Obama represents any sort of significant positive change. That is, assuming Romney gets the GOP nomination, then I don't think it matters who gets elected to the presidency. For example, Obama recently temporarily stopped the TransCanada oil pipeline to Texas. A good thing imo, because I think that what's needed is more American refineries, not a pipeline to Texas for eventual export so that the oil companies can maximize their profits. But it remains to be seen what the eventual outcome will be. I'm betting that, eventually, Obama will go along with it (and of course Romney is pro-pipeline all the way), and then we'll see the usual discussions about how he was forced to do it because of unreasonable Republican intransigence or whatever. I also don't think that Obama is going to spearhead the enactment of sufficient regulatory measures wrt, say, the financial industry. Or that he's going to lead the way to significant changes in the tax code ... etc. In short, flip a coin, it will be business as usual either way. PF Gold P: 4,287 Quote by ThomasT In short, flip a coin, it will be business as usual either way. So then by that measure do you agree that a change in administration would just be an unnecessary hassle? P: 148 Quote by Char. Limit If Romney does not win the primary, I think it's safe to say Obama is definitely winning a second term. 
None of the other candidates are really viable. Especially Newt gingrich, who has more black marks on his record than a smudged printer test sheet. I think that you have been watching too many Romney ads. Many of Newt's "black marks" are false and many are unusable in a general election campaign. I would be happy to get into specifics but that would probably be considered "thread hijacking". McCain was too much of a gentleman to use personal attacks. Newt will use them in retaliation. Newt doesn't have to cringe whenever the health care topic comes up, Romney does. Newt is not the "poster boy" for the OWS people; Romney is a perfect boogey man for the planned "class warfare" campaign. Present polls not withstanding, I think Newt will be a more formidable candidate than Romney. The only prediction I have is that this race will be extremely close. Anyone who thinks this will be a blowout for either side is engaging in wishful thinking. Skippy P: 1,414 Quote by Pythagorean So then by that measure do you agree that a change in administration would just be an unnecessary hassle? My opinion is that all elected public officials should be allowed one term (say, 6 years) and that's it. Wrt your question, I don't think it will matter whether Obama or Romney is elected. So, yeah, if that's the choice, then why bother voting? Or, as the mainstream ads extoll, "it doesn't matter who you vote for, as long as you vote". Well, if it doesn't matter who you vote for, then why does it matter if you vote at all? On the other hand, if Gingrich gets nominated, then I'll probably vote for Obama. PF Gold P: 4,287 I'm actually impressed with what I've seen of Romney's science stances, so far. I mostly just don't think his stage presence is going to appeal to most the voting US, and of course (to reiterate my OP) a change in administration is a waste of time if the candidates have the same end effect. 
P: 2,179
Quote by ThomasT: Well, if it doesn't matter who you vote for, then why does it matter if you vote at all?
If I were a politician and I could do a favor for some district, I might pick one that had voted for me in order to reward it, or I might pick one that had voted against me in order to seduce it, but I would never pick a district that doesn't vote.

PF Gold P: 4,287
Quote by Jimmy Snyder: If I were a politician and I could do a favor for some district, I might pick one that had voted for me in order to reward it, or I might pick one that had voted against me in order to seduce it, but I would never pick a district that doesn't vote.
Good point; that's an important factor. But it doesn't mean much to a district with little/no population. We don't get much political foreplay whether we vote or not because the numbers just aren't enough to warrant appealing to us.

PF Gold P: 7,120
Quote by Gokul43201: But this is not to say that he's seen a net increase in taxes, is it?
Yes, it is. They are on fixed incomes and haven't had any real changes in their exemptions or anything.

Quote: Might not be any President that can pull it off. For one thing, you'd need a supermajority in Congress that wants the same thing.
Which is a whole 'nother thread, unfortunately.

Quote: I believe this though it's quite the opposite in my case. I pay a much higher tax rate than, say, Romney ... on a pathetic postdoc salary.
If you're talking about the 15% rate, that's been debunked before. Have some kids, they do wonders on your tax bill. It surprises me that my city is not rich with tax dollars considering the way people pop out babies around here.

PF Gold P: 7,120
Quote by Pythagorean: It's more like saying let's not uproot the UN and replace it with an administration that has a completely different value system. The time it takes to change everything and all the conflicting policies during transition would be much more costly to members of the UN. And why? The UN is doing its job!
The UN is an excellent candidate for remaining the UN!!! So in 2004, you would have agreed not to vote out Bush because why should we replace him with an administration that has a completely different value system? Remember, one persons "he's doing half decent" is another persons "he's destroying this country". That would imply we should just get rid of term limits because "why go through the hassle". P: 1,414 Quote by Pythagorean I'm actually impressed with what I've seen of Romney's science stances, so far. I mostly just don't think his stage presence is going to appeal to most the voting US, and of course (to reiterate my OP) a change in administration is a waste of time if the candidates have the same end effect. I think that Obama's stage presence and rhetorical ability exceeds any of his possible opponents. But of course we have no way of knowing if a, say, Romney presidency would be substantially different than an Obama presidency. The problem I have with Obama, and why he's been something of a disappointment to me, is that I don't think he's used the power of the presidency, his bully pulpit, to anywhere near its maximum effect -- assuming that he actually wants the sort of sweeping changes, to the betterment of America, that his rhetoric seems to indicate that he wants. His rhetoric is sort of inspiring, but his actions have been, more or less, in line with the status quo ... imho. PF Gold P: 4,287 Quote by Pengwuino That would imply we should just get rid of term limits because "why go through the hassle". You're being rather selective in your reading comprehension. "Why go through the hassle" is a conditional. It only applies if the forseeable outcome is the same for both candidates. So this kind of argument is only a distraction from the real argument (whether another candidate could do a better job, whether the forseeable outcome is not in favor of Obama). What makes you want to avoid that argument? 
Are you just throwing everything at the wall and seeing what sticks? For example, why avoid responding to the statistics that show a slower growth of premiums during Obama's stay? You selectively complained about the function of the data, ignoring the derivative that countered your complaint. Instead, you chose to raise a straw man. If you want to have a productive discussion, tell me who you think would have a better foreseeable outcome and why, instead of using deconstruction tactics.
https://byjus.com/neet/log-phase/
# Log Phase

The growth of microbes such as bacteria, yeasts or protozoa in batch culture can be divided into four stages:

• Lag phase
• Log phase
• Stationary phase
• Death phase

## Log Phase Overview

Log phase definition: the log phase is a growth period of a population of cells in a culture medium during which the number of cells increases exponentially. On the growth curve, this phase appears as a straight line segment when the logarithm of cell number is plotted against time.

The log phase, also referred to as the exponential or logarithmic phase, is one of the phases observed in bacterial growth. Its striking feature is cell doubling through binary fission: the number of new bacteria appearing per unit time is proportional to the current population. For any species of bacteria, the generation time under specific growth conditions (pH, temperature, nutrition, etc.) is genetically determined; this generation time sets the intrinsic growth rate. If growth is not restricted, the doubling time remains constant, so the number of cells doubles with every consecutive time period and the relationship between cell number and time is exponential. Consequently, plotting the natural log of cell number against time yields a straight line. The specific growth rate of the organism, a quantification of the number of divisions per cell per unit time, is obtained from the slope of this line. The growth curve is usually plotted on a semi-logarithmic graph, which gives it the appearance of a linear relationship. The actual rate of growth, however, depends on the growth conditions, which affect both the frequency of cell-division events and the probability that daughter cells survive.
Under favourable, controlled conditions, bacteria such as cyanobacteria can double their population several times a day. This type of growth cannot be sustained indefinitely, however: the environment soon becomes depleted of nutrients and loaded with wastes. Because cells in the log phase exhibit a constant growth rate and steady, consistent metabolic activity, log-phase cells are preferred for industrial applications and research purposes. This is also the phase in which bacteria are most susceptible to antibiotics and disinfectants that affect cell-wall synthesis, DNA and protein.
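The doubling arithmetic described above can be sketched in a few lines of Python; the 20-minute generation time used here is a hypothetical illustration value, not one taken from the article:

```python
import math

# Sketch of unrestricted log-phase growth by binary fission.
# The 20-minute generation time is a hypothetical example value.
def cells(n0, t_minutes, generation_time=20.0):
    """Cell count after t_minutes of constant doubling."""
    return n0 * 2 ** (t_minutes / generation_time)

def specific_growth_rate(generation_time=20.0):
    """Specific growth rate: the slope of ln(N) against time,
    which equals ln(2) divided by the generation time."""
    return math.log(2) / generation_time

print(cells(100, 60))  # three doublings: 100 -> 800.0
```

On a semi-logarithmic plot, ln of the cell count against time is the straight line whose slope `specific_growth_rate` returns, matching the description of the growth curve above.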
https://math.stackexchange.com/questions/114373/whats-the-difference-between-stochastic-and-random
# What's the difference between stochastic and random?

What's the difference between stochastic and random?

• There is none. – Did Feb 28 '12 at 9:10
• I don't like the term "random" because it's vague and people misconstrue it as "evenly distributed", but I know of no technical difference. – Alex Becker Feb 28 '12 at 9:13
• I agree with @AlexBecker. I would only add that random has many connotations (like entropy), not at all equivalent, and is a more generic term usable outside mathematics. Stochastic means nondeterministic or unpredictable. Random generally means unrecognizable, not adhering to a pattern. A random variable is also called a stochastic variable. Do random numbers exist? We speak of pseudorandom numbers. – bgins Feb 28 '12 at 9:57
• ... but I think that there is no crucial difference in the meaning, only a difference in terminology used by different groups of scientists. I can say that also in Russia the equivalent of 'random' is used mostly in old-style literature, and 'stochastic' in the modern one. – Ilya Feb 28 '12 at 11:39
• @bgins Any example of occurrences of "stochastic variable", WP excepted? – Did Aug 29 '12 at 15:28

A variable is random. A process is stochastic. Apart from this difference, the two words are synonyms.

• As if random was a subproduct of stochastic? – Billy Rubina Feb 28 '12 at 16:58
• The term "stochastic variable" does occur sometimes. But more often "random" is used. – Michael Hardy Feb 28 '12 at 19:45
• I have seen the terminology "random process" used. – fourierwho Jan 29 '18 at 15:03
• I truly love the simple and intuitive way to explain the difference. Thank you. – CharlesC Oct 10 at 21:08

There is an anecdote about the notion of stochastic processes. They say that when Khinchin wrote his seminal paper "Correlation theory for stationary stochastic processes", this did not go well with Soviet authorities. The reason is that the notion of random process used by Khinchin contradicted dialectical materialism.
In diamat, all processes in nature are characterized by deterministic development, transformation, etc., so the phrase "random process" itself sounded paradoxical. Therefore, Khinchin had to change the name. After some search, he came up with the term stochastic, from στοχαστικὴ τέχνη, the Greek title of Ars conjectandi. Being popularized later by Feller and Doob, this became a standard notion in the English and German literature. Funnily enough, in Russian literature the term "stochastic processes" did not live for long. The 1956 Russian translation of Doob's monograph by this name was already entitled Вероятностные процессы (probabilistic processes), and now the standard name is случайный процесс (random process).

• Very interesting! Is there a reference for the story? (it's OK if it's in Russian) – Leo Jan 6 '17 at 13:47
• @Leo, unfortunately, I can't recall the origin of the story. For a long time I had been thinking that it is from Shiryaev's book, Probability, but at the moment of writing this post I wasn't able to find it there, so didn't give a source. – zhoraster Jan 6 '17 at 14:56
• @Leo, by the way, there's a similar story (but with a happy ending) about the notion of independence, which contradicted diamat much more seriously. This one is somewhere in Suhov, Kelbert, Probability and Statistics by Example. – zhoraster Jan 6 '17 at 15:15

Neither word by itself has a commonly accepted formal definition in mathematics, so one cannot really ask about "the difference" between them. They are used in phrases such as "random variable," "random walk," "stochastic process," "stochastically complete," etc., which have accepted definitions of their own. In all cases both words tend to refer to an element of chance or unpredictability. But they are generally not interchangeable; if you talk about a "stochastic walk" people will be confused.

Random process and stochastic process are completely interchangeable (at least in many books on the subject).
Although once upon a time "stochastic" (process) generally meant things that are randomly changing over time (and not space). See relevant citations: https://en.wikipedia.org/wiki/Stochastic_process#Terminology

In English the word "stochastic" is technical and most English speakers wouldn't know it, whereas, from my experience, many German speakers are more familiar with the word "Stochastik", which they use in school when studying probability. The word "stochastic" ultimately comes from Greek, but it first gained its current sense, meaning "random", in German starting in 1917, when Ladislaus Bortkiewicz used it. Bortkiewicz had drawn inspiration from the book on probability by Jakob (or Jacques) Bernoulli, Ars Conjectandi. In the book, published in 1713, Bernoulli used the phrase "Ars Conjectandi sive Stochastice", meaning the art of conjecturing. After being used in German, the word "stochastic" was later adopted into English by Joseph Doob in the 1930s, who cited a paper on stochastic processes written in German by Aleksandr Khinchin. https://en.wikipedia.org/wiki/Stochastic_process#Etymology

The use of the term "random process" pre-dates that of "stochastic process" by four or so decades. Although in English the word "random" does come from French, I strongly doubt it ever meant random in French. In fact, it originally was a noun in English meaning something like "great speed". It's related to the French word "randonnée" (meaning hike or trek), which is still used today. To describe a random variable, French uses the word "aléatoire", stemming from the Latin word for dice (which features in the famous quote "Alea iacta est." by Julius Caesar). The English equivalent "aleatory" is not commonly used (at least in my random circles).

Stochastic comes from Ancient Greek whereas random is an old French word. (Fun fact: random has totally disappeared in modern French and was replaced by aléatoire, which comes from...
Latin.) Otherwise there is no difference between them in the realm of probability theory.

The term stochastic in hydrology refers to a process which happens periodically and apparently independently, but in which a kind of dependency exists. For example, if the flow of a river in the last (say) 2 weeks has been low, it will probably be low in the next weeks too. So the flow of a river is not a completely random variable, but stochastic.

• Likewise, I've noticed that network theory tends to refer to traffic as being stochastic. Such traffic, from the point of view of a router, would be considered random, but of course each packet was deterministically produced. – einnocent Sep 4 '14 at 18:15

The terms "stochastic variable" and "random variable" both occur in the literature and are synonymous. The latter is seen more often. Similarly "stochastic process" and "random process", but the former is seen more often. Some mathematicians seem to use "random" when they mean uniformly distributed, but probabilists and statisticians don't. I suspect those who do that haven't thought about it much.

• Any example of occurrences of "stochastic variable", WP excepted? – Did Aug 29 '12 at 15:28
• @did : I don't have any at hand, but I've seen it in print. – Michael Hardy Aug 29 '12 at 16:28
• @did : google.com/… – Michael Hardy Aug 29 '12 at 16:30
• scholar.google.com/… – Michael Hardy Aug 29 '12 at 16:31
• Thanks for the links. After skimming very partly through them, what strikes me is that the majority is related to applications of mathematics (electrical engineering, management sciences, econometrics, physics, artificial intelligence, automatics, water resources research, others) rather than to mathematics and/or probability theory per se. My guess is that the frequency of "stochastic variable" would vanish, or nearly so, if the corpus were restricted to these fields. – Did Aug 29 '12 at 17:18

A random process is unpredictable, such as the movement of the tip of a feather in wind.
If we assume that the movement of a roller coaster is deterministic, then a stochastic process would be the movement of the tip of a feather attached to a moving roller coaster. That is to say, stochastic processes have components that are both deterministic AND random; e.g. martingales.

From a remote sensing point of view, we usually refer to a bounded but unpredictable process as stochastic. If the process were unbounded and unpredictable I would tend to use random, but this case doesn't occur very much in my world! :)

In the Chinese literature, there is no difference between those two terms at all. Both "stochastic" and "random" are 随机 in Chinese. Thus, I would argue that the use of "stochastic" and "random" does not differ in mathematics, but only in language conventions.

I will quote from Robert Gray Gallager's MIT OpenCourseWare notes for "Discrete Stochastic Processes": "Stochastic and random are synonyms, but random has become more popular for rv's (random variable) and stochastic for stochastic processes. The reason for the author's choice is that the common-sense intuition associated with randomness appears more important than mathematical precision in reasoning about rv's, whereas for stochastic processes, common-sense intuition causes confusion much more frequently than with rv's. The less familiar word stochastic warns the reader to be more careful." (Chapter 1: Introduction and review of probability, page 15, fn 15).

I believe there is a difference between random and stochastic. Random has no precipitating or a priori cause, i.e. it is acausal. A random action stands alone, not within any system. Stochastic is random, but within a probabilistic system. In other words, an act of God is random, but a hurricane hitting the east coast of the US is a stochastic event. Any individual hurricane may be random, but it also exists as a mathematical probability within a system of many hurricanes that hit or do not hit the east coast every year.
The latter is therefore stochastic. A coin flip has an interesting difference from a hurricane, since each individual flip of a coin is already stochastically limited to the 50% statistical probability of two possible results.

So. There are random ODEs and random PDEs that are not synonymous with SODEs and SPDEs, although wide classes of RODEs can be converted to SODEs (usually replacing random normal forcings with Ornstein-Uhlenbeck dynamics) through the Imkeller-Schmalfuss correspondence. I don't know that much about the theory because my work is in simulating/numerically solving these. But they're a natural formulation for systems such as earthquakes, tumor growth, etc.

A nice wide-ranging textbook aimed at scientists (spends a third of the book introducing rigorous probability before entering the subject) is Neckel, Tobias; Rupp, Florian. Random Differential Equations in Scientific Computing. Walter de Gruyter, 2013. A textbook of numerics, a little narrower in scope, is Han, Xiaoying, and Peter E. Kloeden. Random Ordinary Differential Equations and Their Numerical Solution (2017). Kloeden is well known for his textbook on numerical SDEs, of course. If you plug these into Google Scholar you can browse more recent papers that cite them for hours.

EDIT: I found this slideset from a talk by Neckel [PDF link]. It defines RODEs and explains the intuition for the Imkeller-Schmalfuss correspondence.

I would make a distinction: for example, in a queueing system the arrival times (or interarrival times) might be modelled by a Poisson process, which would be time independent and would not be bound by initial conditions. This would be an example of a random process which outputs random variables. Service time in the queue would be dependent on the previous state(s) of the system and possibly initial conditions. This would be an example of a stochastic process which also outputs random variables.
I am not sure that the term 'stochastic variable' has any real meaning, except possibly to indicate how the variable was produced.

• As this is a four-year-old Question with an Accepted Answer (and others), Readers would benefit from your supporting any new material in your Answer with some references to the literature, etc. – hardmath Jul 23 '16 at 1:31

There is absolutely a difference between a stochastic process and randomness. For example, if I take one step, then let's suppose my friend takes two steps. Now my friend's steps are not random; they depend on my steps. That means my friend's steps follow a process, driven by the number of steps I take. But it is random because my friend doesn't know how many steps I will take. So the steps I take are a random walk.

• // , Would you please make this example more clear, perhaps by listing its assumptions first, e.g. "Assuming I take a random walk, and let us assume I have a friend, who..." – Nathan Basanese Jan 29 '17 at 2:19
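The "friend takes two steps" example from the last answer can be made concrete with a short Python sketch (the function names here are mine, not from the thread): a walk whose increments are independent, and a companion process that is deterministic given the walk yet unpredictable to anyone who cannot observe it.

```python
import random

def random_walk(steps, seed=0):
    """A symmetric random walk: each +/-1 increment is independent."""
    rng = random.Random(seed)
    position, path = 0, []
    for _ in range(steps):
        position += rng.choice([-1, 1])
        path.append(position)
    return path

def friend_steps(walk):
    """The 'friend' process: exactly twice the walker's position.
    Deterministic given the walk, but random to an observer who
    cannot see how many steps the walker takes."""
    return [2 * x for x in walk]

walk = random_walk(10)
print(friend_steps(walk))
```

The point of the sketch is the asymmetry: `random_walk` is random in the usual sense, while `friend_steps` is a function of it, i.e. a process whose randomness is entirely inherited.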
http://tex.stackexchange.com/questions/131957/math-mode-inception-creating-a-list-environment-for-individually-referenced-equ
# Math-mode inception: creating a list environment for individually referenced equations

I'm trying to create a list-like environment properties that could be used as such:

```latex
\begin{properties}
  \item[Lemonade Rule] \forall \text{lemons} \exists \text{lemonade}
  \item a^2 + b^2 = c^2
  \item[identity] \forall
\end{properties}
```

The idea is to redefine \item local to the properties environment such that it first ends a math environment if necessary and then starts a new one, optionally typesetting a description. (properties vs. properties* controls reference numbers, as usual.)

Unfortunately, I'm screwing up somewhere in keeping track of my modes. I receive the following error:

```
ERROR: LaTeX Error: Bad math environment delimiter.

--- TeX said ---
See the LaTeX manual or LaTeX Companion for explanation.
Type  H <return>  for immediate help.
 ...
l.90 \item[inverse] \label{def:group:inverse}

--- HELP ---
TeX has found either a math-mode-starting command such as \( or \[
when it is already in math mode, or else a math-mode-ending command
such as \) or \] while in LR or paragraph mode. The problem is caused
by either unmatched math mode delimiters or unbalanced braces.
```

Where am I messing things up? Here is the code: (This is new code based on the information egreg provided; it is pared down a bit in functionality from the original, but the error remains. The old code, an apparent model of bad style, is forever available in the edit log.)
```latex
\documentclass{article}
\usepackage{amsmath,xparse,expl3}

\ExplSyntaxOn
\NewDocumentEnvironment { definition } { m }
  {
    % typeset the title of the definition
    \par\noindent\hangindent\parindent
    \textbf{#1}\quad
  }
  {
    % end environment 'definition'
  }

% makes sure math content is centered
\cs_new:Nn \property_do_fill: { \hskip \textwidth minus \textwidth }

% enters math mode and starts a property
% does not typeset a math comment
\cs_new:Nn \property_do_begin:
  { \equation \quad\bullet \property_do_fill: }

% enters math mode and starts a property
% does typeset a math comment
\cs_new:Nn \property_do_begin:n
  { \equation \quad\bullet\enspace\text{\slshape #1} \property_do_fill: }

% ends this property and exits math mode
\cs_new:Nn \property_do_end:
  { \property_do_fill: \endequation }

% declare a new item-like environment 'properties', where each item
% looks like
%   * [mathcomment] a^2 + b^2 = c^2   (1.3)
\NewDocumentEnvironment { properties } { }
  {
    % Delimits properties.
    % Starts off by beginning a property, and then redefines itself
    % to end a previous property before starting a new one.
    \DeclareDocumentCommand \item { o }
      {
        \IfValueTF { ##1 }
          { \property_do_begin:n { ##1 } }
          { \property_do_begin: }
        % redefine \item now to take care of *ending* the last one
        \DeclareDocumentCommand \item { o }
          {
            % end the last property
            \property_do_end:
            % and start a new one
            \IfValueTF { ####1 }
              { \property_do_begin:n { ####1 } }
              { \property_do_begin: }
          }
      }
  }
  {
    % end environment 'properties'
    % make sure to end the last property
    \property_do_end:
  }
\ExplSyntaxOff

\begin{document}
\begin{definition}{group}
A group is a set $G$ together with a binary operation $*$ on $G$ that
satisfies the following axioms:
\begin{properties}
  \item[associativity] \label{def:group:assoc}
    \forall{a,b,c \in G}{(a * b) * c = a * (b * c)}
  \item[identity] \label{def:group:identity}
    \exists e \in G : \forall{a \in G}{a * e = a = e * a}
  \item[inverse] \label{def:group:inverse}
    \forall{x \in G}{\exists b \in G : a * b = b * a = e}
\end{properties}
\end{definition}
\end{document}
```

- \equation* is two tokens, not one; and you can't abbreviate \begin{equation*} into \equation*. You can't test \IfBooleanTF{#1} in a \cs_new:Nn instruction; that's reserved to \NewDocumentCommand (and the alike commands). While you can do \NewDocumentEnvironment{foo}{s}, the relative \IfBooleanTF will be true when you call \begin{foo}*, not \begin{foo*} (which is undefined). – egreg Sep 6 '13 at 21:09

- @egreg — so general misunderstandings all around? I'll see if I can fix it up myself given this information. – Sean Allred Sep 6 '13 at 21:23

## 1 Answer

I'm afraid your code is wrong in so many respects that almost nothing can be salvaged.

## First error

```latex
\cs_new:Nn \property_do_begin:n
  {
    \IfBooleanTF { #1 } \equation* \equation
    \quad\bullet
    \property_do_fill:
  }
```

The \IfBooleanTF function makes sense only in the body of \NewDocumentCommand (or alike functions), because it's this function that sets up things so that a * will set the internal boolean to true or false. In \cs_new:Nn the result is basically unpredictable.
See the third error for other problems; actually this might work in some cases, because the argument is passed from an "interface" command; but it's considered "bad style" anyway.

## Second error

Even if \IfBooleanTF{#1} were successful (which it may not be), when you use it in

```latex
\IfBooleanTF { #1 } \endequation* \endequation
```

you'd get \endequation if a * follows and a * otherwise; in both cases the second \endequation would be executed. This is because \equation* is two tokens and you can't abbreviate \begin{equation*} with \equation*. The same for \endequation*, of course.

## Third error

With

```latex
\NewDocumentEnvironment{foo}{<arg specs>}{<start>}{<end>}
```

you're basically doing

```latex
\NewDocumentCommand{\foo}{<arg specs>}{<start>}
\NewDocumentCommand{\endfoo}{}{<end>}
```

and the arguments passed to \foo after \begin{foo} will be passed also to \endfoo when called by \end{foo}. Thus \NewDocumentEnvironment{foo}{s} would define only the foo environment, not the foo* environment. When you call \begin{foo}, your \IfBooleanTF{#1} will evaluate to false. And \begin{foo*} will raise an error. You could call \begin{foo}*, however, and \IfBooleanTF{#1} would evaluate to true.

- Most of this was stemming from my blatant oversight of \begin{env*} vs. \begin{env}*; I've made (what I thought to be) the necessary modifications to my code and simplified it significantly—but to no effect (the same error). Should I edit my question to reflect the new code, or should I ask a new one? – Sean Allred Sep 7 '13 at 3:58
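As a follow-up sketch (not part of the original exchange): since \equation* is two tokens, one way to start and end a starred amsmath environment from inside a macro is to build the full environment name with \csname. The helper names below are hypothetical, and this assumes the equation* environment tolerates being entered this way, which it is generally reported to do:

```latex
\documentclass{article}
\usepackage{amsmath}
% Hypothetical helper macros, not from the thread: expand the csname
% "equation*" explicitly instead of writing the two tokens \equation*.
\newcommand{\startprop}{\csname equation*\endcsname}
\newcommand{\stopprop}{\csname endequation*\endcsname}
\begin{document}
\startprop
  a^2 + b^2 = c^2
\stopprop
\end{document}
```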
https://www.arxiv-vanity.com/papers/1212.4802/
# Saving the Coherent State Path Integral

Yariv Yanay, Laboratory of Atomic and Solid State Physics, Cornell University, Ithaca NY 14850
Erich J. Mueller, Laboratory of Atomic and Solid State Physics, Cornell University, Ithaca NY 14850

March 29, 2022

###### Abstract

By returning to the underlying discrete time formalism, we relate spurious results in coherent state semiclassical path integral calculations to the high frequency structure of their propagators. We show how to modify the standard expressions for thermodynamic quantities to yield correct results. These expressions are relevant to a broad range of physical problems, from the thermodynamics of Bose lattice gases to the dynamics of spin systems.

###### pacs:

03.65.Db, 03.65.Sq, 05.30.Jp

Path integrals convert the difficult problem of diagonalizing a Hamiltonian into the potentially simpler one of summing over a set of all possible paths, weighted by the classical action Kleinert (2009); Klauder (2003). They are particularly powerful for making semiclassical approximations, where only a few classical paths dominate. Often the natural variables for describing the path are conjugate. For example, one would like to describe a spin system in terms of paths on the Bloch sphere, even though the different components of spin do not commute Klauder and Skagerstam (1985). Coherent states are often used in such cases, and can yield useful results Langer (1968); Solari (1987); Zhang et al. (1990); Kochetov and Yarunin (1997); Zhang (1999); Altland and Simons (2010); Kochetov (1995); Stone et al. (2000); Pletyukhov et al. (2002); Kleinert et al. (2013). Here, we analyze the structure of such path integrals, demonstrating a practical scheme for eliminating anomalies which were first confronted in the 1980s Funahashi et al. (1995a); Enz and Schilling (1986); Funahashi et al. (1995b); Belinicher et al. (1999); Baranger and de Aguiar (2001); Shibata and Takagi (2001); Garg et al. (2003); Viscondi and de Aguiar (2011).
The issues we address were most clearly described by Wilson and Galitski (2011), who used two simple examples to illustrate the anomalies. The particular problems described in their paper arise in the continuous-time formulation of the path integral, and we seek to correct them by returning to the discrete-time formalism. To do so, we must restrict ourselves to the semiclassical path integral, expanding the action in quadratic quantum fluctuations around a classical path. Braun and Garg (2007a, b) calculated the exact propagator for the discrete semiclassical path integral for the particular case of the harmonic oscillator coherent state. We perform a closely related expansion which allows for the use of a more general basis. We also present our results as a correction to the commonly used continuous-time result, providing systematic corrections to previous calculations.

One example considered in Wilson and Galitski (2011) is a path integral calculation of the partition function of the single site Bose Hubbard model, in which $\hat n$ represents the number of bosons, $U$ parameterizes their interaction and $\mu$ is the chemical potential. This is a sufficiently simple problem that one can calculate the exact partition function $Z$ and, from it, the exact free energy $F$. In particular, at zero temperature, the mean occupation number calculated from the continuous-time path integral is a continuous function of $\mu$, while the exact result derived from $F$ is the integer closest to it. We derive an algorithm for correcting the path integral result for the free energy $F$:

$$F = F_{\rm CPI} - i\,\frac{1}{4\Delta t}\int_0^{\pi} \mathrm{d}\chi\, e^{i\chi}\, \log\left.\frac{\det G^{-1}_{\omega}}{\det \bar G^{-1}_{\omega}}\right|_{\omega = \pi e^{i\chi}/\Delta t} \qquad (1)$$

Here $F_{\rm CPI}$ is the free energy obtained from the continuous-time path integral (CPI), while the matrices $G^{-1}_\omega$ and $\bar G^{-1}_\omega$ are composed of perturbation field propagators in frequency space for a discrete-time and a CPI calculation, respectively. We precisely define all these terms below as we derive Eq. (1) and discuss techniques for calculating the correction terms.
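The "sufficiently simple" exact sum mentioned above can be illustrated numerically. The sketch below assumes the standard single-site Bose-Hubbard form $\hat H = \frac{U}{2}\hat n(\hat n - 1) - \mu \hat n$, which the excerpt does not spell out; it shows the exact low-temperature occupation locking to an integer plateau, in contrast to a smoothly varying continuum estimate.

```python
import math

# Exact single-site partition sum, assuming the standard Hamiltonian
# H = (U/2) n (n - 1) - mu n (an assumption; the excerpt omits the form).
def boltzmann_weights(U, mu, beta, nmax=200):
    """Boltzmann weight of each occupation number n = 0..nmax-1."""
    return [math.exp(-beta * (0.5 * U * n * (n - 1) - mu * n)) for n in range(nmax)]

def mean_occupation(U, mu, beta, nmax=200):
    """Exact thermal expectation value of n from the partition sum."""
    w = boltzmann_weights(U, mu, beta, nmax)
    return sum(n * wn for n, wn in enumerate(w)) / sum(w)

# At low temperature (large beta) the occupation sits on an integer plateau.
print(round(mean_occupation(U=1.0, mu=2.3, beta=50.0), 6))
```

Sweeping `mu` at large `beta` produces the staircase of integer plateaus that the continuous-time path integral misses.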
As emphasized by Wilson and Galitski, our corrections are not related to ambiguities of operator ordering or geometric phases. Rather, they arise from the over-completeness of coherent states. The formulation of the partition function as a path integral in imaginary time involves the expansion

$$Z = \mathrm{Tr}\, e^{-\beta \hat H} = \sum_{\vec\Psi_0} \langle \vec\Psi_0 | e^{-\beta \hat H} | \vec\Psi_0 \rangle = \sum_{\vec\Psi_1,\ldots,\vec\Psi_{N_t}} \prod_{t=1}^{N_t} \langle \vec\Psi_{t-1} | e^{-\hat H \Delta t} | \vec\Psi_t \rangle. \qquad (2)$$

Here $\beta$ is the inverse temperature and $\{|\vec\Psi\rangle\}$ is any complete basis of states, characterized by a set of parameters $\vec\Psi$, e.g. the coherent-state basis of the Bose-Hubbard model. The sum $\sum_{\vec\Psi} |\vec\Psi\rangle\langle\vec\Psi|$ is the identity operator, copies of which we insert between the factors of $e^{-\beta\hat H} = \big(e^{-\hat H\Delta t}\big)^{N_t}$. We are now summing over all $N_t$-point paths in $\vec\Psi$-space, with $\Delta t = \beta/N_t$. In the limit of small $\Delta t$ one can approximate the single-step matrix elements to first order in $\Delta t$ and thus write the partition function in the form of a discrete time path integral, where the Lagrangian is

$$\mathcal{L}_t = -\frac{1}{\Delta t}\log\langle\vec\Psi_{t-1}|\vec\Psi_t\rangle + \frac{\langle\vec\Psi_{t-1}|\hat H|\vec\Psi_t\rangle}{\langle\vec\Psi_{t-1}|\vec\Psi_t\rangle}. \qquad (3)$$

When the basis is orthogonal, the first term in this expansion can be taken to be arbitrarily small, and one can take $\Delta t \to 0$ to convert the problem into the traditional CPI form Altland and Simons (2010). This approximation breaks down when expanding in an overcomplete basis, if the overlap between consecutive time steps remains finite for states that differ to a non-infinitesimal degree. As was previously noted Belinicher et al. (1999), even in the face of this problem, the discrete time formulation in Eq. (3) remains valid. Our task is to develop techniques for calculations using the discrete time path integrals, and to relate them to the more familiar continuous case. In particular we wish to find a correction of the form of Eq. (1). To do so we follow standard procedure Smirnov (2010) and characterize the states in terms of a saddle point solution $\vec\Psi^{\rm cl}$, satisfying the classical equations of motion, and a fluctuation $\vec\psi_t$, writing $\vec\Psi_t = \vec\Psi^{\rm cl} + \vec\psi_t$. We then expand the action to quadratic order in the fluctuations, where the classical energy and the fluctuation matrices are independent of time. This saddle point approximation becomes exact as the number of local degrees of freedom becomes large.
For example, in the Bose-Hubbard model it is the leading correction in a $1/N$ expansion, where $N$ is the average number of particles per site. Similarly, in a spin system, the total spin $S$ plays the role of $N$. In terms of the Fourier components $\vec\psi_\omega$, the partition function reads

$$Z = \int{\cal D}\psi\,\exp\Big(-\beta F_0 - \tfrac{1}{2}\sum_{\omega=\omega_n}\vec\psi^{\,\dagger}_\omega\cdot G^{-1}(\omega)\cdot\vec\psi_\omega\Big), \qquad (4)$$

where the summation is over the frequencies $\omega_n = 2\pi n/\beta$ with $-\frac{N_t-1}{2}\le n\le\frac{N_t-1}{2}$, yielding the free energy

$$F = F_0 + \frac{1}{\beta}\sum_{n=-\frac{N_t-1}{2}}^{\frac{N_t-1}{2}}\frac{1}{2}\log\frac{\det G^{-1}(\omega_n)}{2\pi}. \qquad (5)$$

This compares with the free energy given by the continuous-time formalism, in which $\bar G^{-1}$ appears in place of $G^{-1}$. The difference in energies is then

$$F - F_{\rm CPI} = \frac{1}{\beta}\sum_{n=-\frac{N_t-1}{2}}^{\frac{N_t-1}{2}}\frac{1}{2}\log\frac{\det G^{-1}(\omega_n)}{\det\bar G^{-1}(\omega_n)}. \qquad (6)$$

We can replace this sum with a contour integral, using the identity

$$\frac{1}{2\pi}\oint_\gamma\frac{{\rm d}\omega\, f(\omega)}{e^{i\beta\omega}-1} = \frac{1}{\beta}\sum_{\omega=\omega_n}f(\omega) + i\sum_{\omega_f}{\rm Res}\left[\frac{f(\omega)}{e^{i\beta\omega}-1},\,\omega_f\right]. \qquad (7)$$

Here the last sum is over the poles $\omega_f$ of $f(\omega)$ inside the contour $\gamma$, and $\gamma$ is the complex circle defined by $|\omega| = \pi/\Delta t$. In the present case the last term of Eq. (7) vanishes: the function $f(\omega) = \frac{1}{2}\log[\det G^{-1}(\omega)/\det\bar G^{-1}(\omega)]$ is analytic inside $\gamma$, and the set of singularities is empty. For $|\omega| > \pi/\Delta t$, the matrices $G^{-1}$ and $\bar G^{-1}$ are no longer simply related, and $f(\omega)$ has branch cut singularities outside of $\gamma$.

Once the residue term is eliminated, we are left with the contour integral. This integral involves fluctuations of frequency $|\omega| \sim \pi/\Delta t$, corresponding to the time scale separating consecutive time steps. When the basis is orthogonal these fluctuations are vanishingly small, but for an overcomplete basis they are finite, and the contour integral does not vanish. Straightforward algebra then reduces Eqs. (6) and (7) to the expression in Eq. (1).

A clear example of this calculation is provided by the single-site Bose-Hubbard Hamiltonian. Using the coherent state basis and the field $\psi$, the components of the quadratic Lagrangian, Eq. (8), give

$$\det G^{-1}(\omega) = 2\big(1-\cos(\omega\Delta t)\big)\big(1-\mu\Delta t\big). \qquad (9)$$

This compares with the CPI result for $\det\bar G^{-1}(\omega)$, and indeed the ratio of the two is finite everywhere on $\gamma$. By performing the contour integral one finds the difference between the free energies up to an irrelevant constant.
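The identity in Eq. (7) is easy to sanity-check numerically for an entire $f(\omega)$, where the residue term drops out; a minimal sketch with illustrative values of $\beta$ and $N_t$ (kept small so $e^{i\beta\omega}$ stays within floating-point range):

```python
import numpy as np

# Numerical check of the Matsubara-sum identity, Eq. (7), for an entire
# f(omega), so the residue sum on the right-hand side is empty:
#   (1/2pi) \oint_gamma d(omega) f(omega)/(e^{i beta omega} - 1)
#     = (1/beta) * sum_n f(omega_n),
# with omega_n = 2*pi*n/beta and gamma the circle |omega| = pi/dt.
beta, Nt = 1.0, 9
dt = beta / Nt
f = lambda w: np.cos(w * dt) ** 2        # any entire function will do

# Right-hand side: the Nt Matsubara frequencies enclosed by gamma.
n = np.arange(-(Nt - 1) // 2, (Nt - 1) // 2 + 1)
rhs = np.sum(f(2 * np.pi * n / beta)) / beta

# Left-hand side: periodic trapezoid rule on omega = (pi/dt) e^{i chi},
# which converges spectrally for this smooth periodic integrand.
M = 4096
chi = 2 * np.pi * np.arange(M) / M
w = (np.pi / dt) * np.exp(1j * chi)
lhs = np.mean(f(w) / (np.exp(1j * beta * w) - 1) * (1j * w))

print(abs(lhs - rhs))                    # close to machine precision
```

The contour passes exactly halfway between the outermost enclosed pole and the first excluded one, so the integrand stays bounded on $\gamma$.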
The power of this approach is more readily apparent in the multisite Bose-Hubbard model (Yanay and Mueller, 2012). Consider a $D$-dimensional cubic lattice of $N_s$ sites with lattice constant $a$. There momentum is a good quantum number and one can consider $\det G^{-1}(\omega,\vec k)$. The structure takes on the simple form

$$\frac{\det G^{-1}(\omega,\vec k)}{\det\bar G^{-1}(\omega,\vec k)} = \frac{2\big(1-\cos(\omega\Delta t)\big)\big(1+\varepsilon_k\Delta t\big)}{\Delta t^2\,\omega^2}, \qquad (10)$$

where $\varepsilon_k$ is the single-particle lattice dispersion measured relative to the chemical potential. By performing the contour integral one finds simply

$$F - F_{\rm CPI} = \tfrac{1}{2}\,(\mu - 2J D)\,N_s \qquad (11)$$

plus a constant. This is the same dependence as in the single-site problem.

For completeness' sake, we present the second system explored by Wilson and Galitski (2011). We examine the Hamiltonian for a spin-$S$ system. The difference in free energies between the exactly-calculated and the CPI results can be evaluated at zero temperature. Using the semiclassical formalism presented here, one finds

$$\frac{\det G^{-1}(\omega)}{\det\bar G^{-1}(\omega)} = \frac{2\big(1-\cos(\omega\Delta t)\big)\big(1-(S-\tfrac{1}{2})\Delta t\big)}{\Delta t^2\,\omega^2}, \qquad (12)$$

leading to a correction proportional to $S-\tfrac{1}{2}$. Our finite time-step correction accounts for most of the discrepancy, while the remaining term arises from the semiclassical approximation.

## Acknowledgements

This paper is based upon work supported by the National Science Foundation under Grant No. PHY-1068165.
http://www.sciforums.com/threads/would-we-notice-if-dm-was-at-the-center-of-the-earth.85774/
# Would We Notice If DM Was At The Center Of The Earth?

Discussion in 'Astronomy, Exobiology, & Cosmology' started by common_sense_seeker, Sep 24, 2008.

Messages: 2,623

If dark matter existed at the center of the Earth, Moon and stars I propose that we wouldn't notice. All it would mean is that the calculations of their masses are all underestimates. It would be a potential solution for the Missing Mass Problem, would it not?

3. ### Steve100 (Valued Senior Member)

Messages: 2,346

No, because then all of our observations which work with the masses we currently have would be made false.

Messages: 10,296

Of course not! As Steve just said, the numbers we have work VERY well - thank you for nothing.

7. ### StrangerInAStrangeLand (SubQuantum Mechanic, Valued Senior Member)

Messages: 15,396

Better to propose something & be shot down than never shot down at all.

8. ### MetaKron (Registered Senior Member)

Messages: 5,502

It is true that we wouldn't notice dark matter inside the Earth unless it did something truly unique. It is untrue that our estimates of its mass would then be underestimates because our methods for estimating the mass of the planet already include all gravitating mass that is there. There may be no difference between gravity from dark matter and gravity from regular matter, which is why they theorize dark matter in the first place.

9. ### DwayneD.L.Rabon (Registered Senior Member)

Messages: 999

Well, My hunch about dark matter is that its just all the heavy atoms that science has not accounted for, other solar systems have atoms that we do not have in our solar system, especially so for solar systems in the galactic arms etc... these unaccounted for heavy atoms are dark matter.

DwayneD.L.Rabon

Messages: 10,296

Just more sheer nonsense. Please show us proof of these "heavy atoms", Rabon. And incidentally, you aren't aware that we've done analysis of distant stars by spectroscope, are you?:bugeye:
### Steve100 (Valued Senior Member)

Messages: 2,346

What's the point of this idea if it doesn't account for the missing mass in the universe then?

12. ### Steve100 (Valued Senior Member)

Messages: 2,346

The next reply will be a waste of time...

Messages: 2,623

Thanks for some lateral thinking, it makes a change to the standard responses. I believe there is a difference between the gravity from DM and that from regular matter. My simulation model shows that DM gravity is highly directional. In the case of the Sun, DM gravity is much higher in the ecliptic plane compared to the direction of it's spin axis for example. This explains the longevity of the disc shapes of the majority of the galaxies that are observed in my opinion.

16. ### kaneda (Actual Cynic, Registered Senior Member)

Messages: 1,334

It is possible that gravity is not as uniform in all ways as we assume.
One hundred one-solar-mass stars may exert much more gravitational force than one one-hundred-solar-mass star. If gravity had a directional component, I would say it pulled in an unknown direction which is why all large objects in space spin around.

Messages: 2,623

The spin of matter is due to its origin in my opinion. Matter was born spinning. That's what energy is.

18. ### MetaKron (Registered Senior Member)

Messages: 5,502

They're talking about matter that we can't seem to measure except by its gravitational influence. I looked it up on Wikipedia and they say that the dark matter component is hypothesized to be larger than the baryonic matter component. Maybe the actual problem is that no one understands quantum mechanics.

Messages: 2,623

That's exactly right. The accepted Cavendish experimental results are simply wrong. On a scaled down version, the Earth would be about the size of your eye and the Moon would be the size of a pea held at arm's length. If these were made of everyday magnets, there still wouldn't be enough attraction to sustain an orbit. Even the Wikipedia entry states that the repeats of the experiment have given highly varying results.

20. ### StrangerInAStrangeLand (SubQuantum Mechanic, Valued Senior Member)

Messages: 15,396

Would we notice DM if it doesn't exist?

Messages: 2,623

It's necessary to explain the nature of galaxies. Incidentally, not only have I deduced that DM exists at the center of the Earth, Moon and stars, but also at the center of atoms. The nucleus consists of dark matter in my theory. Cavendish's assumption of Newton's law has led to an inaccurate calculation of the Earth's average density in my opinion. I'm starting on the maths proof of my ideas from today.

22. ### Saxion (Banned)

Messages: 264

Again, this is absolute nonsense. Most dark matter is suspected to have negative mass... and if it was in the center of the earth, it would blow the earth apart.

Messages: 2,346

Hoorah!
https://www.buttenschoen.ca/portfolio/2020-01-01-theo-work/
# Development of mathematical tools for (non-local) reaction-advection-diffusion equations

Published: Under construction.

My cell repolarization / signalling model. A: Left: Cell collision experiment (Desai et al. 2013). Right: My mechanochemical model of a 1D cell has two modules: a polarization model, and a visco-elastic "spring" connecting the cell's edges. The polarization model describes intra-cellular proteins resulting in forces on the cell's edges. B: Cell collisions in my model. C: Intra-cellular pattern formation determines cell behaviour. D: My modular ODE-iPDE analysis toolbox (based on NumPy). Symbolic linearization allows adaptation to other spatial systems. E: Bifurcation diagram of a polarization model created by my toolbox. The "S-branch" consists of constant solutions, while the others are polarized states (see C).

A: Typical solutions of the Armstrong adhesion model for varying $\alpha$. The initial condition is in blue (dashed), the steady state solution is in black (solid). The remaining curves are intermediate times. A bifurcation occurs between $\alpha = 1$ and $\alpha = 10$. Numerical solution via a finite volume scheme (with ROWMAP integrator). B: Bifurcation diagram of the linear Armstrong model via continuation and spectral collocation using my toolbox. The insets show typical solutions. C: My global bifurcation result, classifying solutions along branches, written as a "meta-theorem".
http://math.stackexchange.com/questions/107692/prove-that-a-finite-union-of-closed-sets-is-also-closed/107711
# Prove that a finite union of closed sets is also closed Let $X$ be a metric space. If $F_i \subset X$ is closed for $1 \leq i \leq n$, prove that $\bigcup_{i=1}^n F_i$ is also closed. I'm looking for a direct proof of this theorem. (I already know a proof which first shows that a finite intersection of open sets is also open, and then applies De Morgan's law and the theorem "the complement of an open set is closed.") Note that the theorem is not necessarily true for an infinite collection of closed $\{F_\alpha\}$. Here are the definitions I'm using: Let $X$ be a metric space with distance function $d(p, q)$. For any $p \in X$, the neighborhood $N_r(p)$ is the set $\{x \in X \,|\, d(p, x) < r\}$. Any $p \in X$ is a limit point of $E$ if $\forall r > 0$, $N_r(p) \cap E \neq \{p\}$ and $\neq \emptyset$. Any subset $E$ of $X$ is closed if it contains all of its limit points. - +1 for giving the definitions you're using. I would try proving the contrapositive: suppose that the union fails to be closed, so it doesn't contain one of its limit points, and try to show that this limit point is a limit point of at least one of the $F_i$. –  Qiaochu Yuan Feb 10 '12 at 1:41 @jamaicanworm The key point is that we need the finiteness to take a minimum of radii of balls. –  user38268 Feb 10 '12 at 2:05 Let $F$ and $G$ be two closed sets and let $x$ be a limit point of $F\cup G$. Now, if $x$ is a limit point of $F$ or $G$ it is clearly contained in $F\cup G$. So suppose that $x$ is not a limit point of $F$ and $G$ both. So there are radii $\alpha$ and $\beta$ such that $N_\alpha(x)$ and $N_\beta(x)$ don't intersect with $F$ and $G$ respectively except possibly for $x$. But then if $r=min (\alpha,\beta)$ then $N_r(x)$ doesn't intersect with $F\cup G$ except possibly for $x$, which contradicts $x$ being a limit point. This contradiction establishes the result. The proof can be extended easily to finitely many closed sets. 
Trying to extend it to infinitely many is not possible, as then the "min" would be replaced by "inf", which is not necessarily positive. - Thanks! This is exactly the kind of precise answer I was looking for. –  jamaicanworm Feb 11 '12 at 20:55

It is sufficient to prove this for a pair of closed sets $F_1$ and $F_2$. Suppose $F_1 \cup F_2$ is not closed, even though $F_1$ and $F_2$ are closed. This means that some limit point $p$ of $F_1 \cup F_2$ is missing. So there is a sequence $\{ p_i\} \subset F_1 \cup F_2$ converging to $p$. By the pigeonhole principle, at least one of $F_1$ or $F_2$, say $F_1$, contains infinitely many points of $\{p_i\}$, hence contains a subsequence of $\{p_i\}$. But this subsequence must converge to the same limit, so $p \in F_1$, because $F_1$ is closed. Thus, $p \in F_1 \subset F_1 \cup F_2$.

Alternatively, if you do not wish to use sequences, then something like this should work. Again, it is sufficient to prove it for a pair of closed sets $F_1$ and $F_2$. Suppose $F_1 \cup F_2$ is not closed. That means that there is some point $p \notin F_1 \cup F_2$ every neighbourhood of which contains infinitely many points of $F_1 \cup F_2$. By the pigeonhole principle again, every such neighbourhood contains infinitely many points of at least one of $F_1$ or $F_2$, say $F_1$. Then $p$ must be a limit point of $F_1$; so $p \in F_1 \subset F_1 \cup F_2$. - Thanks--but please see my comment on @Michael's answer. –  jamaicanworm Feb 10 '12 at 1:58 Made the correction. –  Rick Feb 10 '12 at 2:03 How do we know that the metric space contains infinitely many points? –  Shahab Feb 10 '12 at 2:40 @Shahab: We don't. All we need for this proof is to show that $F_1\cup F_2$ contains all of its limit points. If the set is finite, it will have no limit points, which makes the condition vacuously true.
–  Michael Greinecker Feb 10 '12 at 15:45 One problem: All we can say is that every neighborhood of $p \notin F_1 \cup F_2$ contains infinitely many points of $F_1 \cup F_2$. BUT one neighborhood might contain infinitely many points of $F_1$, while another might contain infinitely many points of $F_2$, so we cannot say the limit point property of $p$ holds for every neighborhood of any particular $F_i$... –  jamaicanworm Feb 10 '12 at 16:03 Here is one method, that I think is very direct: Check first that a set contains all limit points if and only if every converging sequence in the set has a limit in the set. Now take a convergent sequence in the finite union. Since the union is finite, one of the sets in the union must contain infinitely many terms of the sequence and therefore a subsequence. A subsequence of a convergent sequence is converging and converges to the same point. So there is a converging subsequence lying whole in one of the sets of the finite union and this set contains the limit since it is closed. So the limit lies in the finite union, and we are done. Edit: Here is a sequence free version. Suppose $F_1$ and $F_2$ are closed. Let $x$ be a limit point of $F_1\cup F2$. We are done if we can show that $x$ is a limit point of $F_1$ or $F_2$. If $x$ is not a limit point of $F_1$, then there is an $\epsilon>0$ such that the $\epsilon$-Ball around $x$ contains no element of $F_1$. Hence it contains a point from $F_2$ and by the definition of a limit point, for every positive $\epsilon'<\epsilon$, the $\epsilon'$-Ball contains an element of $F_2$. Hence, $x$ is a limit point of $F_2$. - Sorry for not specifying this in my original post, but I would prefer to not use sequences in the proof. (I'm trying to teach this topic before sequences.) I can, however, use the theorem that says every neighborhood of a limit point $p$ of $E$ contains an infinite number of points in $E$... 
–  jamaicanworm Feb 10 '12 at 1:54 @MichaelGreinecker: In your second proof, don't you need to show that $x$ is actually contained in $F_1 \cup F_2$? Not just a limit point of $F_2$? –  user66360 Nov 24 '13 at 23:21 @kbball Yes, but if $x$ is a limit point of $F_2$ and $F_2$ is closed, then $x\in F_2$ and a fortiori $x\in F_1\cup F_2$. –  Michael Greinecker Nov 25 '13 at 6:43
http://www.how-to-multiply-fractions.com/how-to-multiply-fractions-with-the-same-denominator.html
# How to Multiply Fractions with the Same Denominator

Introduction: We are already familiar with the operations of addition and subtraction of fractions. In this section let us discuss the multiplication of fractions with the same denominator.

## Multiplication of fractions with the same Numerator and same Denominator:

We know that a fraction is a part of a whole. Let us study the following diagram.

In the above diagrams, the first one shows the fraction $\frac{1}{3}$. The second diagram shows $\frac{1}{3}$ of the shaded region of the first, which is

$\frac{1}{3}$ of $\frac{1}{3}$ = $\frac{1}{3}\times \frac{1}{3}$ = $\frac{1}{9}$

The third diagram shows $\frac{1}{3}$ of the previous shaded region, which is

$\frac{1}{3}$ of $\frac{1}{9}$ = $\frac{1}{3}\times \frac{1}{9}$ = $\frac{1}{27}$

Hence when we multiply fractions with the same numerator and same denominator, the resulting fraction is the same part taken of each part.

For example: $\frac{2}{5}$ of $\frac{2}{5}$ is the same as $\frac{2}{5}\times\frac{2}{5}$ = $\frac{2\times 2}{5\times 5}$ = $\frac{4}{25}$

Hence when we find the same part of a part, we group the numerators and the denominators separately and multiply them separately.

(i.e.) $\frac{2}{5}\times \frac{2}{5}\times \frac{2}{5}$ = $\frac{2\times 2\times 2}{5\times 5\times 5}$ = $\frac{8}{125}$

## Exponent form of the product of fractions with the same numerator and same denominator:

We are aware that the product 2 x 2 x 2 x 2 x 2 can be expressed in exponent form as $2^{5}$. We use the same procedure for fractions as well.

$\frac{2}{5}\times \frac{2}{5}\times \frac{2}{5}$ = $\left ( \frac{2}{5} \right )^{3}$ = $\frac{2^{3}}{5^{3}}$ = $\frac{8}{125}$

## Multiplication of fractions with same denominator but different numerator:

In the above figure, the first picture shows the fraction $\frac{2}{5}$, whereas the second picture shows $\frac{3}{5}$ of the $\frac{2}{5}$ shaded in the first picture.
Hence overall, among 25 small boxes, 6 boxes are shaded, which is shown in the third picture. Hence $\frac{3}{5}$ of $\frac{2}{5}$ is the same as $\frac{6}{25}$.

For example: $\frac{3}{7}\times \frac{2}{7}\times \frac{4}{7}$ = $\frac{3\times 2\times 4}{7\times 7\times 7}$ = $\frac{24}{343}$

Hence we observe that when we find the product of proper fractions with the same denominator, we get a smaller portion of the whole. Arithmetically, when we multiply fractions with the same denominator, we follow these steps.

Step 1: Group the numerators and multiply them.

Step 2: Group the denominators and multiply them.

Step 3: The resulting fraction from steps 1 and 2 is the product of the fractions with the same denominators.

Step 4: If the numerator and denominator of the fraction from step 3 have common factors, divide both by their Highest Common Factor to obtain the simplest form, and convert an improper result into a mixed fraction; otherwise, the fraction obtained in step 3 is the final answer.

Note: When we multiply proper fractions with the same denominators, the resulting product will be smaller than each of the factors. Hence it will definitely be a proper fraction.
#### Example 1:

$\frac{4}{11}\times \frac{15}{11}\times \frac{20}{11}$

Solution: We have

$\frac{4}{11}\times \frac{15}{11}\times \frac{20}{11}$ = $\frac{4\times 15\times 20}{11\times 11\times 11}$ [grouping the numerators and the denominators separately]

= $\frac{1200}{1331}$ [multiplying the numbers in the numerator and denominator as grouped in the previous step]

#### Example 2:

$\frac{2}{9}\times \frac{4}{9}\times \frac{6}{9}$

Solution: We have

$\frac{2}{9}\times \frac{4}{9}\times \frac{6}{9}$ = $\frac{2\times 4\times 6}{9\times 9\times 9}$ [grouping the numerators and the denominators separately]

= $\frac{48}{729}$ [multiplying the numbers in the numerator and denominator as grouped in the previous step]

= $\frac{48\div 3}{729\div 3}$ [dividing the numerator and the denominator by the common factor 3]

= $\frac{16}{243}$, the final answer in the simplest form.

## Practice Questions:

1. $\frac{1}{6}\times \frac{1}{6}\times \frac{1}{6}$
2. $\frac{3}{10}\times \frac{7}{10}\times \frac{27}{10}$
3. $\frac{32}{12}\times\frac{24}{12}$
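The worked examples (and the practice questions) can be checked with Python's built-in `fractions` module, which multiplies numerators and denominators and reduces the result automatically:

```python
from fractions import Fraction

# Checking the worked examples: Fraction multiplies numerators and
# denominators separately and always stores the reduced form.
print(Fraction(2, 5) * Fraction(2, 5))                        # 4/25
print(Fraction(4, 11) * Fraction(15, 11) * Fraction(20, 11))  # 1200/1331
print(Fraction(2, 9) * Fraction(4, 9) * Fraction(6, 9))       # 16/243
```

Note the last product is printed as 16/243 rather than 48/729: the division by the common factor 3 in step 4 happens automatically.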
https://www.storyofmathematics.com/construct-a-triangle/
Construct a Triangle – Explanation and Examples

It is possible to construct equilateral triangles, isosceles triangles, scalene triangles, acute triangles, obtuse triangles, and right triangles using only a compass and straightedge. We have already discussed how to make an equilateral triangle, and we will use this skill to help us make some of the other types of triangles. Problems involving the construction of triangles include constructing triangles with a given vertex, a given segment, or three given segments. Construction methods can also help us to classify triangles.

In this topic, we will go over:

• How to Construct a Triangle
• How to Construct a Right Triangle
• How to Construct a Congruent Triangle
• How to Construct a Scalene Triangle

How to Construct a Triangle

Any time we draw a figure enclosed by three straight sides, we construct a triangle. We can classify these figures by the relationship between the lengths of their sides and by the angles the sides form. Equilateral triangles have three sides of equal length, isosceles triangles have exactly two sides of equal length, and scalene triangles have no sides of equal length. Right triangles have a right angle, obtuse triangles have one angle greater than a right angle, and acute triangles have all three angles less than a right angle. Note that no triangle can have more than one right angle or obtuse angle.

How to Construct a Right Triangle

If we have a right triangle, this means that two of the legs of the triangle are perpendicular. Therefore, to construct a right triangle, we have to construct a line perpendicular to another line. We can then connect any two points on these lines with a straightedge to get a right triangle. In the figure shown, the lines AD and BC are perpendicular. Therefore, DAB is a right angle. This means that the triangles DAB, CAB, and EAB are all right triangles.
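The classifications above can be automated for concrete side lengths; a small Python sketch (the side-comparison and law-of-cosines tests are standard arithmetic checks, not part of the compass-and-straightedge constructions):

```python
import math

# Classify a triangle from its side lengths, following the definitions
# above: compare sides for equilateral/isosceles/scalene, and compare
# the square of the longest side with the sum of squares of the other
# two (law of cosines) for acute/right/obtuse.
def classify_sides(a, b, c):
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

def classify_angles(a, b, c):
    a, b, c = sorted((a, b, c))          # make c the longest side
    if not (a + b > c):
        raise ValueError("not a triangle")
    lhs, rhs = c * c, a * a + b * b
    if math.isclose(lhs, rhs):
        return "right"
    return "obtuse" if lhs > rhs else "acute"

print(classify_sides(23, 25, 27))   # scalene (cf. practice question 2)
print(classify_angles(3, 4, 5))     # right
print(classify_angles(2, 2, 3))     # obtuse
print(classify_angles(4, 5, 6))     # acute
```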
How to Construct a Congruent Triangle

What if we are given a triangle and want to construct another triangle congruent to it? This is a bit more complicated because it requires us to use several constructions that we have done before.

We must first construct an infinite line DE using any two points in the plane. Then, we construct a line segment equal in length to AC with endpoint D. We'll call this segment DF. After that, we cut off a segment of DE equal in length to AC by constructing a circle with center D and radius DF. We will call the intersection of this circle and DE, G. DG will also be equal in length to AC because it is equal in length to DF, which is equal in length to AC. We then similarly construct a segment equal in length to CB on the segment GE, which we will call GH. Finally, we construct a segment HI equal in length to AB.

Next, we create a new circle with center G and radius DG. Label the intersection of this circle and the one with center H and radius HI as J. Then, connect GJ and HJ. JHG is congruent to ACB.

Proof of Congruent Triangles

How do we know that the triangles ABC and JHG are congruent? We know that DG=AC, GH=CB, and HI=BA. Because GJ and GD are radii of the same circle, GJ=GD, so GJ=AC by the transitive property. Likewise, because HI and HJ are radii of the same circle, HI=HJ, and so HJ=BA. Therefore, the point A lines up with the point J, B lines up with the point H, and C lines up with the point G.
The triangle AED will have segments equal in length to the original chosen lengths as long as the original lengths satisfied the triangle inequality. Otherwise, the circle with center C and radius CD will not intersect the circle with center B and radius AB.

Triangle Inequality

Suppose we are given three line segments of varying length, AB, CD, and EF. The triangle inequality states that we can construct a triangle with sides equal to the lengths of AB, CD, and EF if and only if:

• AB+CD>EF
• AB+EF>CD
• CD+EF>AB

All three of these conditions must be satisfied; otherwise, we cannot construct such a triangle. Stated differently, we can construct a triangle from three segments if and only if the sum of the lengths of any two segments is greater than the third segment's length. This requirement is known as the triangle inequality.

Examples

This section will go over common examples involving the construction of triangles and their step-by-step solutions.

Example 1

Construct a right isosceles triangle.

Example 1 Solution

We will first construct a right triangle. Then, we will cut off a segment of the longer side equal to the shorter side's length.

Begin with a segment, AB. Then, construct the equilateral triangle ABC. Next, construct the angle bisector for ACB. Label the intersection of the bisector and AB as D. CDB will be a right triangle.

Now, we have to cut off a segment of DC, the greater of the two legs that form the right angle, equal to DB's length, the lesser. To do this, construct a circle centered at D with radius DB. Label the intersection of DC and this circle as E. Since DE is also a radius of the circle, it is equal in length to DB. Therefore, if we construct the segment BE, DBE is an isosceles right triangle.

Example 2

Construct an obtuse isosceles triangle.

Example 2 Solution

Let's return to the construction from example 1, which shows a right isosceles triangle. We need to construct a new segment, DF, so that FDB is greater than EDB. We also want DF to be equal in length to DB.
Then we can select any point on the circumference of the circle between A and E to be the point F. Then, we connect BF and DF to create the triangle.

In this case, the angle FDB is composed of the two smaller angles FDC and CDB. Since CDB is a right angle, FDC+CDB must be greater than a right angle. Thus, FDB is obtuse, and the triangle FDB is also obtuse.

Example 3

Determine whether the following triangle is acute, right, or obtuse.

Example 3 Solution

First, we extend the lines AB, BC, and CA to infinite lines. Next, we construct a line perpendicular to one of the sides at each vertex. That is, CF is perpendicular to CA, AD is perpendicular to CA, and BE is perpendicular to AB.

When we consider this figure, we can see how each of the original triangle's angles compares to the constructed right angles. The angle CAB lies inside the angle CAD, which is a right angle. Therefore, we know CAB is acute. Likewise, ABC lies inside ABE. Thus, we know ABC is also acute.

Finally, the angle ACB is composed of the angles ACF and FCB. Since ACF is a right angle, ACF+FCB must be bigger than a right angle. Therefore, we know that ACB is obtuse. Therefore, the whole triangle is obtuse.

Example 4

Show that the segments given do not satisfy the triangle inequality.

Example 4 Solution

We can cut segments of EF equal in length to AB and CD. Let EG be equal to AB and GH equal to CD. When we do this, we see that H is inside the segment EF. That is, EH is less than EF. Therefore, EF>EH=EG+GH=AB+CD. Thus, the segments AB, CD, and EF do not satisfy the triangle inequality. From this, we know that it is impossible to construct a triangle that has sides of length AB, CD, and EF.

Example 5

Construct a triangle congruent to the given triangle so that the vertex corresponding to A is at the point D.

Example 5 Solution

First, we create an infinite line DE, where E is any point in the plane. Then, we need to construct segments equal in length to AB and AC on DE so that each segment has an endpoint at D.
We’ll call the segment equal to AB, DI, and the segment equal to AC, DG. We do this to ensure that the point D will line up with the point A when we construct the congruent triangle. Then, we construct a segment equal to BC with an endpoint at I; we’ll call the other endpoint K.

Now, we create two circles. One will have center I and radius IK. The other will have center D and radius DG. Label one of the intersections of these circles L. Now, we can construct the segments DL and IL. The triangle DIL will be congruent to the original triangle ABC.

Practice Questions

1. True or False: We can classify triangles based on their side lengths or angle measures.
2. True or False: A triangle has the following side lengths: $23$ cm, $25$ cm, and $27$ cm. This means that it is an isosceles triangle.
3. True or False: A triangle has the following angle measures: $40^{\circ}$, $30^{\circ}$, and $110^{\circ}$. This means that it is an obtuse triangle.
4. Which of the following best describes the triangle shown below?
5. Which of the following best describes the triangle shown below?

Open Problems

1. Show that the triangle $ABC$ is scalene.
2. Classify the triangle $ABC$ as scalene, equilateral, or isosceles.
3. Show that the segments satisfy the triangle inequality and then construct a triangle from them.
4. Construct a triangle congruent to the isosceles triangle $ABC$.
5. Construct a triangle congruent to $ABC$ that shares a segment with the original triangle.

Open Problem Solutions

1. The triangle is scalene. Otherwise, a circle with center $C$ and radius $CA$ would have $B$ on its circumference, and/or a circle with center $A$ and radius $AB$ would have $C$ on its circumference, and/or a circle with center $B$ and radius $BC$ would have $A$ on its circumference.
2. This triangle is isosceles because the circle with center $C$ and radius $CA$ has $B$ on its circumference. Therefore, $AC=BC$.
3. These sides do satisfy the triangle inequality because $EG=CD$ and $GH=AB$.
Together, $EH=EG+GH=CD+AB$ is longer than $EF$, so $CD+AB>EF$, as required.
4.
5.

Images/mathematical drawings are created with GeoGebra.
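The side-length tests used throughout this article can be collected into a short sketch (my own illustration, not part of the original): check the triangle inequality first, then classify by counting distinct side lengths.

```python
# Sketch: check the triangle inequality, then classify a triangle by its sides.
def satisfies_triangle_inequality(a, b, c):
    # The sum of the lengths of any two sides must exceed the third.
    return a + b > c and a + c > b and b + c > a

def classify_by_sides(a, b, c):
    if not satisfies_triangle_inequality(a, b, c):
        raise ValueError("no triangle has these side lengths")
    distinct = len({a, b, c})
    return {1: "equilateral", 2: "isosceles", 3: "scalene"}[distinct]

print(classify_by_sides(23, 25, 27))           # scalene (so practice question 2 is False)
print(satisfies_triangle_inequality(1, 2, 5))  # False: 1 + 2 < 5
```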
http://tug.org/pipermail/texhax/2010-January/014019.html
# [texhax] Aligning two elements in pdfTeX

Firestone, Elaine R. (GSFC-279.0)[SCIENCE SYSTEMS AND APPLICATIONS INC] elaine.r.firestone at nasa.gov
Fri Jan 8 04:19:21 CET 2010

Hi,

I'm using pdfTeX and am trying to align some text with a logo for the cover of a document so that I'd get something like:

    _________
    |       |
    |       |   The Square Group
    | logo  |   Sedona, Arizona
    |       |
    --------

I've tried various combinations of \line, hboxes, and vboxes to no avail. The text always comes out at the bottom right of the logo as in:

    _________
    |       |
    |       |
    | logo  |
    |       |
    --------
                The Square Group
                Sedona, Arizona

What am I doing wrong? This is the latest code I have now:

    {\line{\vbox{\hbox{\pdfximage width2in{square.pdf}\pdfrefximage\pdflastximage}}}\hskip2.7in
    {\vbox{\hbox{The Square Group}
    \vskip3pt\hbox{Sedona, Arizona}}}
    }

Can anyone help me here? I've tried Knuth and Bechtolshein, and I just don't see what it should be.

Thanks.
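For what it is worth, one common fix (a sketch of my own, not an answer from this thread): \line aligns its contents on their baselines, so a tall image box leaves neighbouring text at its bottom edge. Wrapping both columns in \vcenter boxes, which plain TeX permits only in math mode, centers them on the math axis:

```tex
% Sketch of a possible fix (not from the thread): \vcenter boxes are
% centered on the math axis, so the image and the text block line up
% vertically instead of sitting on a common baseline.
\line{%
  $\vcenter{\hbox{\pdfximage width2in{square.pdf}%
    \pdfrefximage\pdflastximage}}$%
  \hskip2.7in
  $\vcenter{\vbox{\hbox{The Square Group}%
    \vskip3pt\hbox{Sedona, Arizona}}}$%
  \hfil
}
```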
https://www.buecher.de/shop/kernphysik/solar-neutrons-and-related-phenomena-ebook-pdf/dorman-lev/products_products/detail/prod_id/37411178/
• Format: PDF
• Devices: PC
• No copy protection
• Size: 15.58 MB

Product description

Short Historical Overview: In the 1940s, two phenomena in the field of cosmic rays (CR) forced scientists to think that the Sun is a powerful source of high-energy particles. One of these was discovered because of the daily solar variation of CR, in which the maximum number of CR is observed near noon (pointing to the existence of a continuous flux of CR from the direction of the Sun); this became the experimental basis of the theory that CRs originate from the Sun (or, for that matter, from within the solar system) (Alfvén 1954). The second phenomenon was discovered when large fluxes of high-energy particles were detected from several solar flares, or solar CR. These are the so-called ground level events (GLE), and were first observed by ionization chambers shielded by 10 cm Pb (and detected mainly from the secondary muon-component CR that they caused) during the events of the 28th of February 1942, the 7th of March 1942, the 25th of July 1946, and the 19th of November 1949. The biggest such event was detected on the 23rd of February 1956 (see the detailed description in Chapters X and XI of Dorman, M1957).
The first phenomenon was investigated in detail in Dorman (M1957), by first correcting experimental data on muon temperature effects and then by using coupling functions to determine the change in particle energy caused by the solar-diurnal CR variation.

For legal reasons, this download can only be delivered with a billing address in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.

Product details
• Publisher: Springer-Verlag GmbH
• Pages: 873
• Publication date: 15.07.2010
• Language: English
• ISBN-13: 9789048137374
• Article no.: 37411178

Contents

Preface.- Acknowledgements.- Frequently used Abbreviations and Notations.- Chapter 1. Charged, Accelerated Particle Interactions in the Solar Atmosphere, and the Generation of Secondary Energetic Particles and Radiations: Pioneer Results.- Chapter 2. The Events of August 1972 and the Discovery of Solar Gamma-Radiation.- Chapter 3. The Events of the June 1980 and June 1982, and the Discovery of Solar Neutrons.- Chapter 4. Space Probe Observations of Solar Neutron Events.- Chapter 5. Solar Neutron Propagation in the Earth's Atmosphere, and the Sensitivity of Neutron Monitors and other Ground Based Detectors to Solar Neutrons.- Chapter 6. Statistical Investigations of Solar Neutron Events on the Basis of Ground Observations.- Chapter 7. Observations of Solar Neutron Events by Neutron Monitors, Solar Neutron Telescopes and Muon Detectors, and their Interpretation.- Chapter 8. The Solar Neutron Decay Phenomenon.- Chapter 9. Gamma Rays from Solar Energetic Particle Interactions with the Sun's Atmosphere.- Chapter 10. Positron Generation in the Nuclear Interactions of Flare Energetic Particles in the Solar Atmosphere.- Chapter 11.
The Development of Models and Simulations for Solar Neutron and Gamma Ray Events.- Appendix.- Conclusions and Problems.- General Conclusion.- Main Conclusions for Different Chapters.- Actual Problems for Solving in near Future.- References.- References for Monographs and Books.- Object Index.- Author Index.

Reviews

From the reviews: "This book represents an exhausting monograph on solar neutrons written by an expert who worked about these particles from the Sun since 1965. ... The table of contents is a very detailed one, it covers 24 printed pages. ... Many observations are given in tables and figures, the corresponding theoretical developments are given in many formulas, and subject index and reference list are carefully developed." (Hans-Jürgen Schmidt, Zentralblatt MATH, Vol. 1196, 2010)
https://www.physicsforums.com/threads/simplifying-b2-of-newtons-divided-difference-interpolation.276153/
# Simplifying 'b2' of Newton's divided difference interpolation

1. Dec 1, 2008

### bsodmike

Hi all,

I am going through some of my notes and quite a few books; they all skip over the point I have marked with 3 red dots in the attached PDF (http://www.bsodmike.com/stuff/interpolation.pdf, link broken).

$$\begin{split} b_2&=\dfrac{f(x_2)-b_0-b_1(x_2-x_0)}{(x_2-x_0)(x_2-x_1)}=\dfrac{f(x_2)-f(x_0)-\dfrac{f(x_1)-f(x_0)}{x_1-x_0}(x_2-x_0)}{(x_2-x_0)(x_2-x_1)}={\color{red}\cdots}=\\ &=\dfrac{\dfrac{f(x_2)-f(x_1)}{x_2-x_1}-\dfrac{f(x_1)-f(x_0)}{x_1-x_0}}{(x_2-x_0)} \end{split}$$

As marked above in red as three dots, what algebraic manipulations are needed to arrive at the solution? The farthest I can get is

$$\begin{split} b_2&=\dfrac{f(x_2)-f(x_0)-\dfrac{f(x_1)-f(x_0)}{x_1-x_0}(x_2-x_0)}{(x_2-x_0)(x_2-x_1)}\\ &=\dfrac{f(x_2)-f(x_0)-\left[\left(\dfrac{f(x_1)}{x_1-x_0}+\dfrac{f(x_0)}{x_0-x_1}\right)(x_2-x_0)\right]}{(x_2-x_0)(x_2-x_1)} \end{split}$$

I would most appreciate your comments on solving this. You can either send me a PM or an email to mike@bsodmike.com.

Last edited by a moderator: Apr 24, 2017 at 8:48 AM

2. Dec 1, 2008

### bsodmike

I believe I managed to figure it out. Take the equation for $b_1$, rewritten in terms of $f(x_0)$,

$$f(x_0)=f(x_1)-b_1(x_1-x_0)=f(x_1)-\dfrac{f(x_1)-f(x_0)}{x_1-x_0}(x_1-x_0)$$

where $b_1$ has been substituted inside.
Substitute the entire $f(x_0)$ in:

$$\begin{split} b_2&=\dfrac{f(x_2)-f(x_0)-\dfrac{f(x_1)-f(x_0)}{x_1-x_0}(x_2-x_0)}{(x_2-x_0)(x_2-x_1)}\\ &=\dfrac{f(x_2)-\left(f(x_1)-\dfrac{f(x_1)-f(x_0)}{x_1-x_0}(x_1-x_0)\right)-\dfrac{f(x_1)-f(x_0)}{x_1-x_0}(x_2-x_0)}{(x_2-x_0)(x_2-x_1)}\\ &=\dfrac{\dfrac{f(x_2)-f(x_1)}{x_2-x_1}-\left(\dfrac{f(x_1)-f(x_0)}{(x_1-x_0)(x_2-x_1)}((x_0-x_1)+(x_2-x_0))\right)}{(x_2-x_0)}\\ &=\dfrac{\dfrac{f(x_2)-f(x_1)}{x_2-x_1}-\dfrac{f(x_1)-f(x_0)}{x_1-x_0}}{(x_2-x_0)} \end{split}$$

\o/
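A quick numerical cross-check (my own addition, not from the thread): both expressions for $b_2$ are the second divided difference, so they should agree at any sample points.

```python
# Check numerically that the direct formula for b2 and the
# divided-difference form give the same value.
def b2_direct(f, x0, x1, x2):
    b0 = f(x0)
    b1 = (f(x1) - f(x0)) / (x1 - x0)
    return (f(x2) - b0 - b1 * (x2 - x0)) / ((x2 - x0) * (x2 - x1))

def b2_divided_diff(f, x0, x1, x2):
    d01 = (f(x1) - f(x0)) / (x1 - x0)   # first divided difference f[x0, x1]
    d12 = (f(x2) - f(x1)) / (x2 - x1)   # first divided difference f[x1, x2]
    return (d12 - d01) / (x2 - x0)      # second divided difference f[x0, x1, x2]

f = lambda x: x**3 - 2 * x + 1
print(abs(b2_direct(f, 0.0, 1.5, 4.0) - b2_divided_diff(f, 0.0, 1.5, 4.0)) < 1e-12)  # True
```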
http://nrich.maths.org/6987/index?nomenu=1
Take a look at the image below. How do you think it was created? Did you notice any symmetry in the image? Here is a diagram which shows how we created the image. We started with a triangle (shaded) and then used the coordinate grid to help us to rotate it through multiples of $90^{\circ}$ around the point $(0,0)$. Create some images of your own by rotating a shape through multiples of $90^{\circ}$. You might like to start with a triangle as we did, or you might want to use other shapes. How can you use a coordinate grid to help you to rotate each vertex around $(0,0)$? What is the relationship between the coordinates of the vertices as they rotate through multiples of $90^{\circ}$? Here are some more ideas to explore: Can you use an isometric grid to rotate a shape through multiples of $60^{\circ}$? Try creating some images based on other rotations, such as $30^{\circ}$ or $72^{\circ}$ or... (you will need to use a protractor for these). What is the rotational symmetry of your final image if you rotate through multiples of $80^{\circ}$ or $135^{\circ}$? Can you explain why?
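To see the coordinate relationship concretely, here is a small sketch (my own illustration, not part of the original task): a $90^{\circ}$ anticlockwise rotation about $(0,0)$ sends $(x, y)$ to $(-y, x)$.

```python
# Sketch: rotating a shape's vertices through multiples of 90 degrees
# about (0, 0). One 90-degree anticlockwise turn sends (x, y) to (-y, x).
def rotate90(point):
    x, y = point
    return (-y, x)

def rotations(shape, times=4):
    """Return the shape rotated through 0, 90, 180, 270 degrees (times=4)."""
    images = [shape]
    for _ in range(times - 1):
        shape = [rotate90(p) for p in shape]
        images.append(shape)
    return images

triangle = [(1, 0), (3, 1), (2, 2)]
for image in rotations(triangle):
    print(image)
```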
https://de.mathworks.com/help/simulink/slref/van-der-pol-oscillator.html
Van der Pol Oscillator

This example shows how to model the second-order Van der Pol (VDP) differential equation in Simulink®. In dynamics, the VDP oscillator is non-conservative and has nonlinear damping. At high amplitudes, the oscillator dissipates energy. At low amplitudes, the oscillator generates energy. The oscillator is given by this second-order differential equation:

d²x/dt² − μ(1 − x²) dx/dt + x = 0

where:

• x is position as a function of time.
• μ (Mu) is the damping coefficient.

The VDP oscillator is used in the physical and biological sciences, including electric circuits.

open_system('vdp');

Simulate with Mu = 1

When Mu = 1, the VDP oscillator has nonlinear damping.

set_param('vdp/Mu','Gain','1')
sim('vdp');
open_system('vdp/Scope');

Simulate with Mu = 0

When Mu = 0, the VDP oscillator has no damping. Energy is conserved in this simple harmonic oscillator. The equation becomes:

d²x/dt² + x = 0

set_param('vdp/Mu','Gain','0')
sim('vdp');
open_system('vdp/Scope');
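To reproduce the behaviour outside Simulink, here is a rough equivalent (my own sketch, not the vdp model) that integrates the same equation with a fixed-step RK4 scheme:

```python
# Sketch (not the Simulink model): integrate the Van der Pol equation
#   x'' - mu*(1 - x^2)*x' + x = 0
# as a first-order system (x, v) with a fixed-step RK4 scheme.
def vdp_rhs(state, mu):
    x, v = state
    return (v, mu * (1 - x * x) * v - x)

def rk4_step(state, mu, h):
    def add(s, k, c):
        return (s[0] + c * k[0], s[1] + c * k[1])
    k1 = vdp_rhs(state, mu)
    k2 = vdp_rhs(add(state, k1, h / 2), mu)
    k3 = vdp_rhs(add(state, k2, h / 2), mu)
    k4 = vdp_rhs(add(state, k3, h), mu)
    return (state[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def simulate(mu, t_end=20.0, h=0.01, state=(2.0, 0.0)):
    for _ in range(int(t_end / h)):
        state = rk4_step(state, mu, h)
    return state

print(simulate(mu=0.0))  # undamped case: the state stays on the circle x^2 + v^2 = 4
```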
https://questions.examside.com/past-years/jee/question/if-force-f-length-l-and-time-t-are-taken-as-the-funda-jee-main-physics-units-and-measurements-md6a4yr2mfttslk5
1

### JEE Main 2021 (Online) 27th August Evening Shift

If force (F), length (L) and time (T) are taken as the fundamental quantities, then what will be the dimension of density?

A. $[FL^{-4}T^{2}]$
B. $[FL^{-3}T^{2}]$
C. $[FL^{-5}T^{2}]$
D. $[FL^{-3}T^{3}]$

## Explanation

Write density as $[F^{a}L^{b}T^{c}]$. Since $[F] = [MLT^{-2}]$,

$$[M^{1}L^{-3}] = [M^{a}L^{a+b}T^{-2a+c}]$$

Matching exponents: $a = 1$; $a + b = -3$, so $b = -4$; $-2a + c = 0$, so $c = 2a = 2$.

So, density = $[F^{1}L^{-4}T^{2}]$.

2

### JEE Main 2021 (Online) 27th August Evening Shift

Match List-I with List-II.

(a) $R_H$ (Rydberg constant); (i) $kg\,m^{-1}s^{-1}$
(b) $h$ (Planck's constant); (ii) $kg\,m^{2}s^{-1}$
(c) $\mu_B$ (magnetic field energy density); (iii) $m^{-1}$
(d) $\eta$ (coefficient of viscosity); (iv) $kg\,m^{-1}s^{-2}$

Choose the most appropriate answer from the options given below:

A. (a)-(ii), (b)-(iii), (c)-(iv), (d)-(i)
B. (a)-(iii), (b)-(ii), (c)-(iv), (d)-(i)
C. (a)-(iv), (b)-(ii), (c)-(i), (d)-(iii)
D. (a)-(iii), (b)-(ii), (c)-(i), (d)-(iv)

## Explanation

SI unit of Rydberg constant = $m^{-1}$
SI unit of Planck's constant = $kg\,m^{2}s^{-1}$
SI unit of magnetic field energy density = $kg\,m^{-1}s^{-2}$
SI unit of coefficient of viscosity = $kg\,m^{-1}s^{-1}$

3

### JEE Main 2021 (Online) 27th August Morning Shift

If E and H represent the intensity of the electric field and the magnetising field respectively, then the unit of E/H will be:

A. ohm
B. mho
C. joule
D. newton

## Explanation

The unit of $E/H$ is $\frac{\text{volt/metre}}{\text{ampere/metre}} = \frac{\text{volt}}{\text{ampere}} = \text{ohm}$.

4

### JEE Main 2021 (Online) 27th August Morning Shift

Which of the following is not a dimensionless quantity?
A. Relative magnetic permeability ($\mu_r$)
B. Power factor
C. Permeability of free space ($\mu_0$)
D. Quality factor

## Explanation

$[\mu_r] = 1$, since $\mu_r = \frac{\mu}{\mu_0}$ is a ratio of two permeabilities.

[Power factor $\cos\phi$] $= 1$.

$\mu_0 = \frac{B_0}{H}$ (unit $N\,A^{-2}$): not dimensionless; $[\mu_0] = [MLT^{-2}A^{-2}]$.

Quality factor $Q = \frac{\text{energy stored}}{\text{energy dissipated per cycle}}$, so $Q$ is unitless and dimensionless.
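The exponent matching in the first question can be checked mechanically. A minimal sketch (the helper names are mine, not from the source): treat a dimension as an exponent dict over the base units M, L, T, and substitute $M = FL^{-1}T^{2}$, which follows from $[F] = MLT^{-2}$.

```python
# Dimensions as exponent dicts over the base units M, L, T.
def dim_mul(a, b):
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

def dim_pow(a, n):
    return {k: v * n for k, v in a.items()}

density_MLT = {"M": 1, "L": -3}      # [density] = M L^-3
# From [F] = M L T^-2 it follows that M = F L^-1 T^2.
M_in_FLT = {"F": 1, "L": -1, "T": 2}

density_FLT = dim_mul(dim_pow(M_in_FLT, density_MLT["M"]),
                      {"L": density_MLT["L"]})
print(density_FLT)  # {'F': 1, 'L': -4, 'T': 2}
```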
http://is.tuebingen.mpg.de/publications/park18-ehd-vibrations
#### Reducing 3D Vibrations to 1D in Real Time

In this demonstration, you will hold two pen-shaped modules: an in-pen and an out-pen. The in-pen is instrumented with a high-bandwidth three-axis accelerometer, and the out-pen contains a one-axis voice coil actuator. Use the in-pen to interact with different surfaces; the measured 3D accelerations are continually converted into 1D vibrations and rendered with the out-pen for you to feel. You can test conversion methods that range from simply selecting a single axis to applying a discrete Fourier transform or principal component analysis for realistic and brisk real-time conversion.

Author(s): Gunhyuk Park and Katherine J. Kuchenbecker
Year: 2018
Month: June
Department(s): Haptic Intelligence
Bibtex Type: Miscellaneous (misc)
Address: Pisa, Italy
Note: Hands-on demonstration presented at EuroHaptics

BibTex

@misc{Park18-EHD-Vibrations, title = {Reducing 3D Vibrations to 1D in Real Time}, author = {Park, Gunhyuk and Kuchenbecker, Katherine J.}, address = {Pisa, Italy}, month = jun, year = {2018}, note = {Hands-on demonstration presented at EuroHaptics}, month_numeric = {6} }
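As a rough illustration of the PCA option mentioned above (my own sketch, not the authors' implementation), each 3-axis acceleration sample can be projected onto the first principal component of a window of samples:

```python
# Sketch: reduce 3-axis acceleration samples to 1-D by projecting onto
# the first principal component (power iteration on the 3x3 covariance).
def covariance(samples):
    n = len(samples)
    mean = [sum(s[i] for s in samples) / n for i in range(3)]
    c = [[0.0] * 3 for _ in range(3)]
    for s in samples:
        d = [s[i] - mean[i] for i in range(3)]
        for i in range(3):
            for j in range(3):
                c[i][j] += d[i] * d[j] / n
    return c

def principal_axis(cov, iters=100):
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def reduce_to_1d(samples):
    axis = principal_axis(covariance(samples))
    return [sum(s[i] * axis[i] for i in range(3)) for s in samples]
```

A streaming version would recompute the axis over a sliding window; here the whole buffer is used for simplicity.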
https://www.lessonplanet.com/teachers/wave-behavior-science-9th-higher-ed
# Wave Behavior

In this wave worksheet, students draw cycles of waves determined by amplitude and frequency. Students label the angle of incidence and angle of refraction on a diagram. This worksheet has 10 problems to solve.
https://learnzillion.com/lesson_plans/5711-convert-fractions-and-mixed-numbers-to-decimals
# Convert fractions and mixed numbers to decimals

teaches Common Core State Standards CCSS.Math.Content.7.NS.A.2d http://corestandards.org/Math/Content/7/NS/A/2/d

In this lesson you will learn how to find the decimal equivalent of any fraction or mixed number by using the standard algorithm for decimal division.
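The standard algorithm the lesson refers to can be sketched in code (my illustration; the function name is mine): after splitting off the integer part, repeatedly multiply the remainder by 10, tracking remainders to detect a repeating block.

```python
# Sketch: convert a fraction (or mixed number) to its decimal expansion
# by long division, bracketing any repeating block of digits.
def fraction_to_decimal(numerator, denominator, whole=0):
    sign = "-" if (numerator * denominator < 0 or whole < 0) else ""
    numerator, denominator, whole = abs(numerator), abs(denominator), abs(whole)
    numerator += whole * denominator        # fold in a mixed number's whole part
    integer, remainder = divmod(numerator, denominator)
    digits, seen = [], {}
    while remainder and remainder not in seen:
        seen[remainder] = len(digits)       # position where this remainder appeared
        digit, remainder = divmod(remainder * 10, denominator)
        digits.append(str(digit))
    if not digits:
        return f"{sign}{integer}"
    if remainder:                           # a remainder recurred: repeating block
        start = seen[remainder]
        return f"{sign}{integer}." + "".join(digits[:start]) + "(" + "".join(digits[start:]) + ")"
    return f"{sign}{integer}." + "".join(digits)

print(fraction_to_decimal(3, 4))           # 0.75
print(fraction_to_decimal(1, 3))           # 0.(3)
print(fraction_to_decimal(1, 4, whole=2))  # 2.25
```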
http://greenemath.com/Algebra1/46/AddingRationalExpressionsLesson.html
Lesson Objectives
• Demonstrate an understanding of how to find the LCD for a group of rational expressions
• Learn how to add rational expressions
• Learn how to subtract rational expressions

## How to Add & Subtract Rational Expressions

When we add or subtract rational expressions, we follow the same rules we learned with fractions. When there is a common denominator present, we can perform the given operation with the numerators and place the result over the common denominator. When there is not a common denominator, we first find the LCD for the rational expressions. This is the LCM of the denominators. Once this is done, we will transform each rational expression into an equivalent rational expression where the LCD is its denominator. We can then perform operations with the numerators and place the result over the common denominator. In the end, we want to leave a simplified answer. This means we will factor our numerator and denominator and cancel any common factors. It is common to leave a rational expression in factored form to show that no other factors can be canceled. Let's look at a few examples.

Example 1: Perform each indicated operation $$\frac{3}{x + 6} - \frac{3x}{3x + 24}$$

Step 1) Find the LCD

LCD = 3(x + 6)(x + 8)

Step 2) Transform each rational expression into an equivalent rational expression with the LCD as its denominator $$\frac{3}{x + 6} \cdot \frac{3(x+8)}{3(x+8)} = \frac{9(x+8)}{3(x + 8)(x+6)}$$ $$\frac{3x}{3(x + 8)} \cdot \frac{(x + 6)}{(x + 6)} = \frac{3x(x + 6)}{3(x + 8)(x + 6)}$$

We will leave our denominators in factored form, but simplify the numerators.
This will be necessary to combine like terms: $$\frac{9(x+8)}{3(x + 8)(x+6)} = \frac{9x + 72}{3(x + 8)(x + 6)}$$ $$\frac{3x(x + 6)}{3(x + 8)(x + 6)} = \frac{3x^2 + 18x}{3(x + 8)(x + 6)}$$

Step 3) Perform the given operation with the numerators. We have subtraction here, so we must be very careful of our signs: $$\frac{9x + 72}{3(x + 8)(x + 6)} - \frac{3x^2 + 18x}{3(x + 8)(x + 6)}$$

We will change our subtraction to addition of the opposite: $$\frac{9x + 72}{3(x + 8)(x + 6)} + \frac{-3x^2 - 18x}{3(x + 8)(x + 6)}$$

Now we can combine like terms between numerators: $$\frac{9x + 72}{3(x + 8)(x + 6)} + \frac{-3x^2 - 18x}{3(x + 8)(x + 6)} = \frac{-3x^2 - 9x + 72}{3(x + 8)(x + 6)}$$

Step 4) Look to see if we can simplify.
Factor the numerator and cancel any common factors: $$\frac{-3x^2 - 9x + 72}{3(x + 8)(x + 6)} = \frac{-3(x^2 + 3x - 24)}{3(x + 8)(x + 6)}$$ Cancel the common factor of 3: $$\require{cancel}\frac{-1\cancel{3}(x^2 + 3x - 24)}{\cancel{3}(x + 8)(x + 6)} = -\frac{x^2 + 3x - 24}{(x + 8)(x + 6)}$$

Example 2: Perform each indicated operation $$\frac{x - 8}{x - 2} + \frac{x - 3}{x + 6}$$

Step 1) Find the LCD

LCD = (x - 2)(x + 6)

Step 2) Transform each rational expression into an equivalent rational expression with the LCD as its denominator $$\frac{(x - 8)}{(x - 2)} \cdot \frac{(x + 6)}{(x + 6)} = \frac{(x - 8)(x + 6)}{(x - 2)(x + 6)}$$ $$\frac{(x-3)}{(x + 6)} \cdot \frac{(x-2)}{(x-2)} = \frac{(x-3)(x-2)}{(x+6)(x-2)}$$ We will simplify the numerators, but leave our denominators in factored form: $$\frac{(x - 8)(x + 6)}{(x - 2)(x + 6)} = \frac{x^2 - 2x - 48}{(x - 2)(x + 6)}$$ $$\frac{(x - 3)(x - 2)}{(x - 2)(x + 6)} = \frac{x^2 - 5x + 6}{(x - 2)(x + 6)}$$ Step 3) Perform the given operation with the numerators: $$\frac{x^2 - 2x - 48}{(x - 2)(x + 6)} + \frac{x^2 - 5x + 6}{(x - 2)(x + 6)} = \frac{2x^2 - 7x - 42}{(x - 2)(x + 6)}$$ Step 4) Look to see if we can simplify.
Factor the numerator and cancel any common factors: $$\frac{2x^2 - 7x - 42}{(x - 2)(x + 6)}$$ The numerator is a prime polynomial, so we can't simplify our rational expression any further.
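As a quick spot-check of both worked examples (not part of the original lesson): if the algebra is right, the original expression and the simplified answer must agree at any x that doesn't zero a denominator. A minimal sketch in Python:

```python
# Spot-check the two worked examples at a few sample points.
# If the simplification is correct, original and simplified forms
# must match wherever the denominators are nonzero.

def example1_original(x):
    return 3 / (x + 6) - 3 * x / (3 * x + 24)

def example1_simplified(x):
    return -(x**2 + 3 * x - 24) / ((x + 8) * (x + 6))

def example2_original(x):
    return (x - 8) / (x - 2) + (x - 3) / (x + 6)

def example2_simplified(x):
    return (2 * x**2 - 7 * x - 42) / ((x - 2) * (x + 6))

for x in (1.0, 3.5, 10.0, -1.0):
    assert abs(example1_original(x) - example1_simplified(x)) < 1e-9
    assert abs(example2_original(x) - example2_simplified(x)) < 1e-9
```

This doesn't prove the identity, but it catches sign and arithmetic slips immediately.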
http://math.gatech.edu/node/15334
## Dynamical Mordell-Lang problems

Series: Algebra Seminar
Thursday, April 23, 2009 - 3:00pm, 1 hour (actually 50 minutes)
Location: Skiles 255
Univ. of Rochester

Let S be a group or semigroup acting on a variety V, let x be a point on V, and let W be a subvariety of V. What can be said about the structure of the intersection of the S-orbit of x with W? Does it have the structure of a union of cosets of subgroups of S? The Mordell-Lang theorem of Laurent, Faltings, and Vojta shows that this is the case for certain groups of translations (the Mordell conjecture is a consequence of this). On the other hand, Pell's equation shows that it is not true for additive translations of the Cartesian plane. We will see that this question relates to issues in complex dynamics, simple questions from linear algebra, and techniques from the study of linear recurrence sequences.
http://www.bot-thoughts.com/2011/06/generate-clock-signal-with-avr-atmega.html?showComment=1313072283035
## Friday, June 17, 2011 ### Generate a Clock Signal with AVR ATmega AVR-generated clock signal Several times lately my MCU needed to generate a clock signal to interface with a device, such as a camera, or a serial interface on an analog to digital converter (ADC). Rather than bit-banging the clock signal, let the MCU's timer hardware do the work, freeing up cycles for real code. Here's how to generate a simple, 50% duty cycle pulse train, aka clock signal, using an AVR MCU. For this experiment I used an ATmega328P but any of the AVR chips that support a 16-bit Timer1 should do. I wanted a 500kHz clock signal. To generate it, the MCU must toggle an output pin, PB1 aka OC1A at 1MHz, or 1/16 the MCU's 16MHz clock, for a total period of 2us (500kHz). Timer1 provides a mode called Clear Timer on Compare Match (CTC). The timer register, TCNT1 counts from 0 up to the value in the Output Compare Register, in this case OCR1A (to go along with the OC1A output). When TCNT1 == OCR1A, the MCU resets TCNT1 and starts counting again. Ok, let's get started with the code. Though we probably don't need to, why not initialize the counter: TCNT1=0; To run the timer at 1MHz, we need to divide the MCU clock by 16. Using a prescaler value of 1, we simply set the output compare register to 15, since the timer counts from 0-15 or 16 ticks. So we come up with: OCR1A = 15; Note that we could also have used a prescaler value of 8 and an OCR1A of 1. Now it's time to set some mode bits. The AVR has three Timer/Counter Control Registers for timer 1: TCCR1A, TCCR1B and, you guessed it, TCCR1C.  In TCCR1A, two bits control the behavior of OC1A when the timer matches OCR1A. Those two bits are COM1A1 and COM1A0. When set to 01, OC1A is toggled when there's a compare match. TCCR1A |= (1<<COM1A0); To tell Timer1 to operate in CTC mode, we set bits WGM13, WGM12, WGM11, WGM10 across two control registers. 
Actually for CTC mode, WGM12=1 and the rest are 0 (initial value on powerup) TCCR1B |= (1<<WGM12); To configure Timer1 prescaling, set bits CS12, CS11 and CS10 in TCCR1B. For no prescaling, use 001, respectively. That is, set CS10=1 TCCR1B |= (1<<CS10); The only thing left to do is to enable the OC1A (PB1) pin for output DDRB |= _BV(1); Putting it all together, here's the code that generates the clock signal in the picture above. TCNT1=0; OCR1A = 15; TCCR1A |= (1<<COM1A0); TCCR1B |= (1<<CS10) | (1<<WGM12); DDRB |= _BV(1); Finally, if you want to sync your code to the rising and falling clock edge, check the TIFR1 register for the OCF1A flag: if ((TIFR1 & _BV(1)) == _BV(1)) { // do something here } Or, you can sync your code to a high or low clock value by reading PB1 from the PINB register.  The MCU can also be set up to call an interrupt service routine whenever there's a match. 1. Hi this is really one of the things I need. thanks for the post. However, I'm having some problems with the results, the output pin is PB1, right? but when I connect it to the oscilloscope, I see frequency of only 100 kHz... thanks,,, I really need some help here.... I need at least 500 kHz for my project... thank you 2. @Anonymous: I just measured mine and it's running ~470kHz on a 16MHz ATmega328P. That's off by about 5% but that's much better than a 500% error :) We'll figure this out. :) No offense intended but are you counting divisions correctly? Most scopes show tiny hash marks that equal 0.2 division. The big grid marks count as 1 division. I set my Hitachi V-1050F's time/div to 1us and the ~500kHz signal shows up with a 2us period = 500kHz. (If you accidentally counted each hash as a division you'd get a 10us period = 100kHz which would explain your off-by-500% result) I have some other ideas if this isn't it. 1. Your timer starts at 0, not 1. When you set OCR1A = 16, you're counting from 0-16, not 1-16. 2. Four years late, but remember that the counter starts at zero, not 1. 
When you set OCR1A = 16, you're counting from 0-16 not 1-15. 3. Argh, that's embarrassing on my part. :( I fixed the article and included the proper equation for figuring out frequency 3. yes, I'm quite sure that I measured it properly using my oscilloscope. anyway, I'm using Arduino duemilanove microcontroller(also ATMega 328P). and the pin mapping is PB1 -> digital pin9. so i directly connect the oscilloscope to the pin 9 of arduino?... where could the problem be?... I really need some help,, thanks..... 4. Connect oscope probe to Digital Pin 9, and connect oscope ground to ground pin. Freq is off by a factor of 5. So maybe clock scaling got messed up or a timer scaling issue is to blame. Can you send me your source and I will test it on my board just to eliminate that as a possibility. Use the Contact Me link at the top/right of the page. 5. hi! i'm a beginner in using microcontroller..i have questions and hope you can gives some pointers or tips.. can atmega generate 2 clock signals?? it's because i need 2 clock signals that will be connected to a CCD sensors (charge-couped device) to activate the CCD,these 2 clocks are needed..for the ROG clocks (read-out gate) and CLK..is it possible?? 6. hi! First of all I want to thank you for the time and effort you spend to provide this solution. I spend nearly two hours to find out, why the solution you provided did not work with my setup (Arudino UNO with 1.0 dev. environment). I had to actually reset the TCCR1A and TCCR1B registers, because they were initalized with 1 resp. 3??? (No modification done to either board or dev. environment), also I had to move the OCR1A init after the TCCR1A / TCCR1B init, otherwise it still had 0 and the timer didn't start ... (Complete code see below ...) 
#define CLK 9

void setup() {
  // Set Clock to Output
  pinMode(CLK, OUTPUT);

  TCNT1 = 0;

  // Toggle OC1A on Compare Match
  TCCR1A = 0x00;
  bitSet(TCCR1A, COM1A0);

  // Clear Timer on Compare Match
  TCCR1B = 0x00;
  bitSet(TCCR1B, WGM12);

  // Set frequency (1MHz)
  OCR1A = 8;

  // No prescaling
  bitSet(TCCR1B, CS10);
}
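The relationship used throughout the post — output frequency = F_CPU / (2 · prescaler · (1 + OCR1A)) for toggle-on-compare CTC mode — can be sanity-checked with a small script (a sketch for verification, not part of the original post):

```python
# CTC toggle mode: OC1A flips once per compare match, so one full
# square-wave period takes 2 * (1 + OCR1A) timer ticks.
def ctc_toggle_freq(f_cpu, prescaler, ocr1a):
    return f_cpu / (2 * prescaler * (1 + ocr1a))

def ocr1a_for(f_cpu, prescaler, target_hz):
    # Solve target = f_cpu / (2 * N * (1 + OCR1A)) for OCR1A.
    return round(f_cpu / (2 * prescaler * target_hz)) - 1

assert ctc_toggle_freq(16_000_000, 1, 15) == 500_000   # the post's setup
assert ctc_toggle_freq(16_000_000, 8, 1) == 500_000    # the noted alternative
assert ocr1a_for(16_000_000, 1, 500_000) == 15
```

This also shows why off-by-one errors matter: OCR1A = 16 would give 16 MHz / (2 · 17) ≈ 470 kHz, the roughly 5% error discussed in the comments.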
http://clay6.com/qa/2124/find-x-if-
# Find x, if $\begin{bmatrix} 5 & 3x \\ 2y & z \end{bmatrix}$ = $\begin{bmatrix} 5 & 4 \\ 12 & 6 \end{bmatrix}^T$

Toolbox:
• If A is an m×n matrix, then the matrix obtained by interchanging the rows and columns of A is called the transpose of A.
• If two matrices of the same order are equal, their corresponding elements are equal, i.e., if $A_{ij}=B_{ij}$, then any element $a_{ij}$ in matrix A is equal to the corresponding element $b_{ij}$ in matrix B.
• We can then match the corresponding elements and solve the resulting equations to find the values of the unknown variables.

Step 1: The transpose of the matrix is obtained by interchanging its rows and columns: ${\begin{bmatrix}5 & 4\\12 & 6\end{bmatrix}}^T=\begin{bmatrix}5 & 12\\4 & 6\end{bmatrix}$

Thus $\begin{bmatrix}5 & 3x\\2y & z\end{bmatrix}=\begin{bmatrix}5 & 12\\4 & 6\end{bmatrix}$

The two matrices are equal, hence their corresponding elements must be equal.
$\Rightarrow$ 3x = 12, so x = 4; 2y = 4, so $y=\frac{4}{2}=2$; and z = 6.
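The same element-by-element matching can be scripted (a sketch; plain nested lists stand in for matrices):

```python
def transpose(m):
    # Interchange rows and columns.
    return [list(row) for row in zip(*m)]

rhs = transpose([[5, 4], [12, 6]])
assert rhs == [[5, 12], [4, 6]]

# Match [[5, 3x], [2y, z]] against rhs entry by entry.
x = rhs[0][1] / 3
y = rhs[1][0] / 2
z = rhs[1][1]
assert (x, y, z) == (4.0, 2.0, 6)
```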
http://mathhelpforum.com/discrete-math/53878-functions.html
# Math Help - Functions

1. ## Functions

Find a function $f: D \to E$ such that $f[f^{-1}[E]] \neq E$, where $f[E]$ is the image of f at E and $f^{-1}[E]$ is the pre-image of f at E.

2. Suppose that $D = \left\{ {1,2,3} \right\}\,\& \,E = \left\{ {a,b,c} \right\}$. Define a function $\begin{gathered} f = \left\{ {\left( {1,a} \right),\left( {2,c} \right),\left( {3,a} \right)} \right\} \hfill \\ f^{ - 1} (E) = D \hfill \\ f\left( {f^{ - 1} (E)} \right) = f\left( D \right) = \left\{ {a,c} \right\} \ne E \hfill \\ \end{gathered}$

3. Originally Posted by Aryth
Find a function $f: D \to E$ such that $f[f^{-1}[E]] \neq E$, where $f[E]$ is the image of f at E and $f^{-1}[E]$ is the pre-image of f at E.
The theorem covering the above is: for every function $f: D \to E$, if f is not onto then $f[f^{-1}(E)] \neq E$. By the contrapositive law, this theorem is equivalent to: for every function $f: D \to E$, if $f[f^{-1}(E)] = E$ then f is onto. So any function which is not onto will satisfy $f[f^{-1}(E)] \neq E$. Now to prove the above. To prove that f is onto if $f[f^{-1}(E)] = E$, we must prove that for every $y \in E$ there exists an $x \in D(f)$ such that $y = f(x)$, where D(f) is the domain of f. In symbols: $\forall y\,[\, y \in E \rightarrow \exists x\,( x \in D(f) \;\&\; y = f(x))\,]$. Let $y \in E$. Since $f[f^{-1}(E)] = E$, we have $y \in f[f^{-1}(E)]$. But $y \in f[f^{-1}(E)] \iff y = f(x)$ for some $x \in f^{-1}(E)$, and $x \in f^{-1}(E) \iff x \in D(f) \;\&\; f(x) \in E$. Hence $y \in f[f^{-1}(E)]$ exactly when $y = f(x)$ for some $x \in D(f)$ with $f(x) \in E$. Thus there is an $x \in D(f)$ such that $y = f(x)$, so f is onto.
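The counterexample in reply 2 is small enough to verify mechanically (a sketch using Python sets and a dict for the function):

```python
# f: D -> E from reply 2, represented as a dict.
D = {1, 2, 3}
E = {"a", "b", "c"}
f = {1: "a", 2: "c", 3: "a"}

def preimage(f, S):
    # f^{-1}[S]: all domain points mapped into S.
    return {x for x in f if f[x] in S}

def image(f, S):
    # f[S]: all values f takes on S.
    return {f[x] for x in S}

assert preimage(f, E) == D
assert image(f, preimage(f, E)) == {"a", "c"}   # f misses "b"
assert image(f, preimage(f, E)) != E
```

Since f is not onto (nothing maps to "b"), the image of the preimage of E falls short of E, exactly as the thread argues.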
https://byjus.com/standard-error-calculator/
Standard Error Calculator

Standard Error Formula: $SE_{\bar{x}} = \dfrac{s}{\sqrt{n}}$, where s is the sample standard deviation and n is the number of inputs.

Standard Error Calculator is a free online tool that displays the standard error of the given set of data. BYJU'S online standard error calculator tool makes the calculation faster, and it displays the standard error in a fraction of seconds.

How to Use the Standard Error Calculator?

The procedure to use the standard error calculator is as follows:
Step 1: Enter the numbers separated by a comma in the respective input field
Step 2: Now click the button "Calculate" to get the result
Step 3: Finally, the standard error for the given set of data will be displayed in the output field

What is Meant by Standard Error?

In Statistics, the standard error is defined as the statistical measure used to describe the accuracy with which a sample mean estimates the true population mean. It is the standard deviation of the sampling distribution of the mean: it tells how close the mean of the sample data is likely to be to the mean of the true population. A larger standard error means that sample means are spread out more around the true population mean, i.e., the estimate is less precise.
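The calculation behind the three steps above amounts to the following (a sketch using Python's standard library; `statistics.stdev` is the sample standard deviation, with n − 1 in the denominator):

```python
import math
from statistics import stdev

def standard_error(data):
    # SE of the mean = sample standard deviation / sqrt(sample size).
    return stdev(data) / math.sqrt(len(data))

data = [2, 4, 4, 4, 5, 5, 7, 9]
se = standard_error(data)
assert abs(se - stdev(data) / math.sqrt(8)) < 1e-12
```

For this sample the result is about 0.756; averaging more observations shrinks the standard error by the square-root of the sample size.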
https://www.degruyter.com/document/doi/10.1515/bejeap-2013-0178/html
Toshiki Kodera # Discriminatory Pricing and Spatial Competition in Two-Sided Media Markets De Gruyter | Published online: January 15, 2015 # Abstract This study describes a spatial model of price discrimination in two-sided media markets. Given that media platforms offer a uniform price for consumers and either a uniform or discriminatory price for advertisers, we compare a platform’s profit and welfare under these two different pricing schemes. In contrast to the well-known result that price discrimination based on a consumer’s location leads to lower profits, if consumers have a strong aversion to advertising, we show that a platform’s profit is better off under price discrimination. In addition, if consumers rather dislike advertising, we show that price discrimination is detrimental to both a platform’s profit and the consumer’s welfare. ## 1 Introduction Media platforms may be able to charge different agents different prices. The Economist, one of the most well-known weekly news magazines, price discriminates based on regions, sizes and colors in advertising rates. [1] For example, for the same sizes and colors, advertising rates in North America are more than twice that in the Asia Pacific region. Some Japanese newspaper publishers charge higher advertising fees for public firms than for private firms. Gil and Riera-Crichton (2011) investigated the Spanish TV market, in which TV stations price discriminate their advertisers and viewers. Moreover, publishing companies can sell advertising columns at different prices depending on the various characteristics of advertisers (such as industry and location). [2] The purpose of this paper is to consider whether this price discrimination in media markets is beneficial for media platforms, consumers and advertisers. Oligopolistic price discrimination on the Hotelling model has previously been studied by Thisse and Vives (1988), Bester and Petrakis (1996) and Liu and Serfes (2004). 
[3] These analyses of price discrimination without indirect network effects suggest that a firm’s profit from price discrimination is below that from uniform pricing. [4] When a firm can price discriminate to account for a given rival’s prices, the firm extracts a surplus from local consumers. However, the firm sets low prices for distant consumers. Conversely, when a rival firm uses the same price schedule, price discrimination exacerbates competition; therefore, consumers benefit when firms engage in price discrimination. We examine how these well-known results change in media markets. To investigate a spatial model of price discrimination in two-sided media markets, we extend the perfect spatial price discrimination model developed by Thisse and Vives (1988). [5] Two platforms are located at each end of a line, and two groups of agents, advertisers and consumers, are uniformly located along the line. There are asymmetric network effects between these groups. We consider a sequential game in which platforms first set prices for consumers, and then for advertisers. The reason that consumers choose first is the accuracy of marketing. [6] Recent developments in information technology improve the accuracy of marketing. Therefore, publishers can strictly predict the number of copies or consumer market share. Media platforms compete for advertisers using these consumer data. In the first stage, platforms offer uniform prices for consumers deciding to join a platform. Platforms cannot price discriminate among consumers, as consumers can resell magazines. [7] In the second stage, both platforms simultaneously offer prices for advertisers. If price discrimination is feasible, both platforms price discriminate for each advertiser. Otherwise, platforms offer uniform prices for advertisers. As advertisers select a platform in the final stage, similar to one-sided markets, price discrimination reduces the platform’s profit from advertisers. 
However, platforms adjust uniform prices for consumers in response to network effects. As price discrimination intensifies price competition in the final stage, advertisers can easily shift to a platform with a large consumer market share. Therefore, network effects are more influential in determining a platform’s profit from consumers under price discrimination than under uniform pricing. As negative network effects relax price competition, if consumers strongly dislike advertisements, a platform’s profit under price discrimination is higher than that under uniform pricing. We also investigate consumer surplus. If negative network effects are moderate, platforms cannot compensate for price competition for advertisers by extracting a surplus from consumers. Consequently, despite the finding that price discrimination leads to reduced platform profit, consumers are worse off when platforms employ price discrimination. [8] Our paper relates to the literature on two-sided markets using the Hotelling framework. [9] The seminal work of Armstrong (2006a) investigates the role of elasticities of demand in two-sided markets. Anderson and Coate (2005) analyze competition between media platforms in the presence of asymmetric network effects. However, these studies are limited to uniform pricing for each group of agents. The recent paper by Liu and Serfes (2013) analyzes a Hotelling model of price discrimination in two-sided markets with positive network effects. They demonstrated that price discrimination and strong network effects increase a platform’s profit under the condition that the price schedule is limited to a non-negative price. While their model is related to ours, we demonstrate that platform profits increase in the presence of negative network effects under certain conditions. [10] The remainder of this paper is organized as follows: Section 2 presents the model. Section 3 analyzes the equilibrium outcomes. We check the robustness of our model in Section 4. 
Section 5 concludes the paper.

## 2 Model

There are two homogenous platforms (1 and 2). Each platform is located at one end of a unit line. There are two groups of agents: advertisers (sellers, S) and consumers (buyers, B). Each agent is uniformly distributed along the unit line and selects only one platform. [11] To join a platform, each agent incurs transportation costs per unit of distance $t^k > 0$, $k = S, B$. The number of agents who join platform i is $n_i^k$. When an agent joins a platform, the agent obtains network benefits $b^k n_i^j$, $j \neq k$, affected by the number of agents from the other side who join the same platform. To focus on media markets, we assume that $b^S > 0$ and $b^B < 0$. [12] The greater the number of consumers that select platform i, the greater the number of advertisers that place advertisements on the same platform. However, we assume that consumers dislike advertisements. The utility of group k's agent, who is located at distance $x^k \in [0, 1]$ from platform 1, is $u_1^k = \alpha^k + b^k n_1^j - p_1^k - t^k x^k$, where $p_1^k$ is the price to join platform 1. The agent who joins platform i has sufficiently large benefits $\alpha^k$ to cover the market. $\alpha^B$ is an intrinsic benefit for a consumer from watching a TV program or reading a magazine. An intrinsic benefit for an advertiser, $\alpha^S$, arises from advertising that increases the advertiser's reputation. [13] Platform i's profit is

[1] $\pi_i = (p_i^B - c) n_i^B + p_i^S n_i^S$.

The platform faces marginal costs c for each consumer. To simplify the analysis, we assume that the marginal cost for advertisers is zero. We assume the following inequality to satisfy the second-order condition under uniform pricing and price discrimination: $2 t^S t^B > (b^S)^2$. [14] The game is as follows: First, the two platforms simultaneously set a uniform price for consumers. Consumers decide whether to join platform 1 or 2. In the second stage, we consider two price regimes.
Under the first, both platforms set uniform pricing for advertisers, and under the second, both platforms price discriminate among advertisers.

### 2.1 Uniform price

We consider that each platform charges uniform prices to both groups. In the second stage, each platform offers a uniform price for advertisers. The number of advertisers who join each platform is given by

[2] $n_i^S = \dfrac{t^S + b^S (n_i^B - n_j^B) - (p_i^S - p_j^S)}{2 t^S}$.

Using the platform's profit function and the number of advertisers, we derive the price for advertisers. Proposition 1 summarizes prices and profits $\pi_i^{UP}$ when each platform sets uniform prices for both groups.

PROPOSITION 1 When each platform charges uniform prices for both groups, equilibrium prices and profits are determined as follows:

[3] $p_i^{UP,S} = t^S$,
[4] $p_i^{UP,B} = c + t^B - \dfrac{b^S (2 t^S + b^B)}{3 t^S}$,
[5] $\pi_i^{UP} = \dfrac{t^S + t^B}{2} - \dfrac{b^S (2 t^S + b^B)}{6 t^S}$.

Proof. See Appendix. ■

### 2.2 Price discrimination

We assume that in the second stage, the two platforms can price discriminate among advertisers. Then, platforms can observe the location of each advertiser. Each platform controls the territory near its own location, i.e., platform 1's territory is $[0, 1/2]$ and platform 2's territory is $[1/2, 1]$. When an advertiser is located at $x^S$, platform 1 sets a price $p_1^S$ such that the advertiser prefers platform 1 to platform 2 given platform 2's price: $p_1^S + t^S x^S - b^S n_1^B \leq p_2^S + t^S (1 - x^S) - b^S n_2^B$. Similarly, platform 2 sets a price $p_2^S$. If the sum of transportation costs and network benefits leaves the advertiser located at $\hat{x}^S$ indifferent, both platforms charge that advertiser their marginal cost, i.e., a price equal to zero, because neither platform has a cost advantage over its rival there. [15] Then, platform 1 (resp. 2) has a cost advantage for advertisers who are located in an area smaller (resp. larger) than $\hat{x}^S$.
Prices for advertisers are given by

$p_1^S = t^S (1 - 2\hat{x}^S) + b^S (n_1^B - n_2^B)$, $p_2^S = 0$, if $\hat{x}^S \leq \dfrac{t^S + b^S (n_1^B - n_2^B)}{2 t^S}$,
$p_1^S = 0$, $p_2^S = t^S (2\hat{x}^S - 1) + b^S (n_2^B - n_1^B)$, if $\hat{x}^S \geq \dfrac{t^S + b^S (n_1^B - n_2^B)}{2 t^S}$.

When platforms price discriminate, each platform's profit $\pi_i^{PPD}$ is given by

[6] $\pi_1^{PPD} = \displaystyle\int_0^{(t^S + b^S (n_1^B - n_2^B))/2 t^S} \left[ t^S (1 - 2\hat{x}^S) + b^S (n_1^B - n_2^B) \right] d\hat{x}^S + (p_1^B - c)\, n_1^B$,
[7] $\pi_2^{PPD} = \displaystyle\int_{(t^S + b^S (n_1^B - n_2^B))/2 t^S}^{1} \left[ t^S (2\hat{x}^S - 1) + b^S (n_2^B - n_1^B) \right] d\hat{x}^S + (p_2^B - c)\, n_2^B$.

Proposition 2 summarizes prices and profits when each platform price discriminates for advertisers.

PROPOSITION 2 When each platform price discriminates in the second stage, equilibrium prices and profits are determined as follows:

[8] $p_1^{PPD,S} = t^S (1 - 2 x^S)$, $p_2^{PPD,S} = 0$, if $x^S \leq \frac{1}{2}$,
[9] $p_1^{PPD,S} = 0$, $p_2^{PPD,S} = t^S (2 x^S - 1)$, if $x^S \geq \frac{1}{2}$,
[10] $p_i^{PPD,B} = c + t^B - b^S - \dfrac{b^S b^B}{t^S}$,
[11] $\pi_i^{PPD} = \dfrac{t^S + 2 t^B - 2 b^S}{4} - \dfrac{b^S b^B}{2 t^S}$.

Proof. See Appendix. ■

Similar to the case of uniform pricing, network effects do not affect prices for advertisers. The equilibrium prices for advertisers are identical to those in Thisse and Vives (1988). Prices for consumers include terms stemming from network effects. These terms under price discrimination are qualitatively similar to those under uniform pricing.

## 3 Comparison

We study the symmetric equilibrium prices with and without price discrimination to compare profits. First, we compare prices for advertisers in the second stage. As each platform sets symmetric prices in the first stage, each platform obtains one half of the consumers. There is no difference in an advertiser's network benefits under the two pricing schemes. Therefore, prices for advertisers are identical to those in Thisse and Vives (1988) on one-sided markets.
In the second stage, price discrimination is profitable for a platform given its rival's prices. The rival, however, acts in the same way, so price competition for advertisers is more intense when platforms can price discriminate: in the second stage, uniform pricing is more profitable than price discrimination.

Next, we consider prices for consumers in the first stage. Suppose the platforms offer a symmetric price to consumers and platform 1 slightly reduces $p_1^B$. Some consumers switch from platform 2 to platform 1, and the additional consumers attract additional advertisers through the positive network effect, i.e., a direct effect. Let $x^S(n_1^B, n_2^B)$ denote the marginal advertiser, taking the optimal advertiser prices into account. The marginal change in advertiser market share is $\partial x^S / \partial n_1^B = b_S / 6t_S$ under uniform pricing, while it is $b_S / 2t_S$ under price discrimination. Platform 1 therefore attracts more advertisers under price discrimination than under uniform pricing, which gives platforms a greater incentive to reduce consumer prices under price discrimination.

However, this marginal change in market share also generates a feedback effect through the negative network effect. Let $x^B(n_1^S, n_2^S)$ denote the marginal consumer, who anticipates the advertiser market shares. The feedback effect on market share is $(\partial x^B / \partial n_1^S)(\partial n_1^S / \partial n_1^B) = b_S b_B / 12 t_S t_B$ under uniform pricing, while it is $b_S b_B / 4 t_S t_B$ under price discrimination. Through this channel, platform 1 loses more advertisers under price discrimination than under uniform pricing, so platforms have less incentive to reduce consumer prices. To see how these two effects jointly affect consumer prices under price discrimination, we compare the marginal profits:
$$\left. \frac{\partial \pi_1}{\partial p_1^B} \right|_{PPD} - \left. \frac{\partial \pi_1}{\partial p_1^B} \right|_{UP} = -\frac{b_S(t_S + 2b_B)}{6(t_S t_B - b_S b_B)}.$$
When $b_B < -t_S/2$, platforms have a greater incentive to soften price competition for consumers under price discrimination. Accordingly, when $b_B < -t_S/2$,
$$p^{PPD,B} - p^{UP,B} = -\frac{b_S(t_S + 2b_B)}{3t_S} > 0,$$
so the condition $b_B < -t_S/2$ implies that a platform's profit from consumers increases under price discrimination. If consumers dislike advertising strongly enough and/or advertisers' transportation cost is low, platforms can extract a surplus from consumers under price discrimination that is large enough to cover the loss from fiercer price competition for advertisers. As a result, a platform's profit is greater under price discrimination. Summarizing this analysis:

PROPOSITION 3. Price discrimination is more profitable than a uniform price if and only if
$$b_B < -\frac{t_S(3t_S + 2b_S)}{4b_S}.$$

Proof. See Appendix. ■

When a consumer joins a platform, the consumer anticipates that the platform will attempt to attract more advertisers in the second stage. If consumers strongly dislike advertising, the consumer has less incentive to join the platform in the first stage, because the feedback effect is larger than the direct effect. Consumer demand is then price inelastic, so the platform sets higher prices for consumers. This effect is present under both price discrimination and uniform pricing. In addition, when platforms can price discriminate, a platform (say, platform 1) can attract more advertisers whenever it wins consumers from platform 2, whereas platform 2 cannot cut prices for advertisers located near the middle of the line because it already prices at marginal cost. Price discrimination therefore reinforces the first effect, and consumer prices are higher under price discrimination than under uniform pricing. In the Hotelling model of one-sided markets, by contrast, price discrimination intensifies price competition (see Thisse and Vives 1988).
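The profit ranking in Proposition 3 can be checked numerically. The sketch below uses the closed-form profits of Propositions 1 and 2, $\pi^{UP} = (t_S+t_B)/2 - b_S(2t_S+b_B)/6t_S$ and $\pi^{PPD} = (t_S+2t_B-2b_S)/4 - b_S b_B/2t_S$; the parameter values are illustrative assumptions.

```python
# Numeric check of Proposition 3: price discrimination beats uniform pricing
# exactly when b_B (negative: consumers dislike ads) lies below the threshold
# b_B* = -t_S (3 t_S + 2 b_S) / (4 b_S).
t_S, t_B, b_S = 1.0, 1.0, 0.5  # illustrative parameters

def profit_up(b_B):
    return (t_S + t_B) / 2 - b_S * (2 * t_S + b_B) / (6 * t_S)

def profit_ppd(b_B):
    return (t_S + 2 * t_B - 2 * b_S) / 4 - b_S * b_B / (2 * t_S)

threshold = -t_S * (3 * t_S + 2 * b_S) / (4 * b_S)  # = -2.0 for these parameters

print(profit_ppd(threshold - 1) > profit_up(threshold - 1))  # True: strong ad aversion
print(profit_ppd(threshold + 1) < profit_up(threshold + 1))  # True: mild ad aversion
```

Below the threshold, the extra surplus extracted from (ad-averse, hence price-inelastic) consumers outweighs the fiercer advertiser-side competition.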
In our model of two-sided markets, by contrast, if the negative network effect is sufficiently large, a platform's profit under price discrimination is higher than that under uniform pricing. [16] Our result extends the findings of Liu and Serfes (2013), who analyze spatial price discrimination in two-sided markets with positive network effects and non-negative prices. In their model, if positive network effects are strong and prices are required to be non-negative, the profit of a platform is greater under price discrimination. In contrast to their result, we obtain ours with asymmetric network effects and without price regulation.

Next, we consider welfare. Because every agent joins platform 1 or 2, total welfare — the sum of platform profits and the two agent groups' surplus — is identical under the two pricing schemes. Under the condition of Proposition 3, therefore, the sum of both agent groups' surplus is lower under price discrimination than under uniform pricing. To determine how each group is affected, we compare each group's surplus under price discrimination with that under uniform pricing. When each platform charges uniform prices to both groups, advertiser and consumer surplus are as follows:

[12] $$CS^{UP,S} = 2 \int_0^{1/2} \left( \alpha_S + \frac{b_S}{2} - t_S - t_S x_S \right) dx_S = \alpha_S + \frac{b_S}{2} - \frac{5t_S}{4},$$

[13] $$CS^{UP,B} = 2 \int_0^{1/2} \left( \alpha_B + \frac{b_B}{2} - c - t_B + \frac{b_S(2t_S + b_B)}{3t_S} - t_B x_B \right) dx_B = \alpha_B - c + \frac{b_B}{2} + \frac{b_S(2t_S + b_B)}{3t_S} - \frac{5t_B}{4}.$$

Similarly, when both platforms price discriminate among advertisers, advertiser and consumer surplus are as follows:

[14] $$CS^{PPD,S} = \alpha_S + \frac{b_S}{2} - \frac{3t_S}{4},$$

[15] $$CS^{PPD,B} = \alpha_B - c + \frac{b_B}{2} + \frac{b_S(t_S + b_B)}{t_S} - \frac{5t_B}{4}.$$

For the case in which platform profit is lower under price discrimination than under uniform pricing, we compare each group's surplus under the two schemes and obtain Proposition 4.
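The closed-form surplus expressions can be verified by direct integration. The sketch below numerically integrates advertiser utility under the discriminatory price schedule and compares the result with the closed form $\alpha_S + b_S/2 - 3t_S/4$ of eq. [14]; the parameter values are illustrative assumptions.

```python
# Midpoint-rule check of CS^{PPD,S} = alpha_S + b_S/2 - 3 t_S/4 (eq. [14]).
# Advertisers on [0, 1/2] pay p(x) = t_S (1 - 2x) and enjoy network benefit
# b_S * (1/2); utility is alpha_S + b_S/2 - t_S * x - p(x). By symmetry the
# other half of the line contributes the same surplus, hence the factor 2.
alpha_S, b_S, t_S = 2.0, 0.5, 1.0  # illustrative parameters

N = 1000
dx = 0.5 / N
cs = 2 * sum(
    (alpha_S + b_S / 2 - t_S * x - t_S * (1 - 2 * x)) * dx
    for x in (dx * (k + 0.5) for k in range(N))
)

closed_form = alpha_S + b_S / 2 - 3 * t_S / 4
print(abs(cs - closed_form) < 1e-9)  # True
```

Because the integrand is linear in $x$, the midpoint rule is exact here up to floating-point error, so the agreement is tight rather than approximate.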
PROPOSITION 4. When platform profit under price discrimination is lower than that under uniform pricing:

- (i) advertiser surplus under price discrimination is always larger than that under uniform pricing;
- (ii) consumer surplus under price discrimination is smaller than that under uniform pricing if and only if
$$-\frac{t_S(3t_S + 2b_S)}{4b_S} < b_B < -\frac{t_S}{2}.$$

Proof. See Appendix. ■

Preceding studies of one-sided markets, such as Thisse and Vives (1988), find that price discrimination reduces a firm's profits and improves consumer surplus. In contrast, our model shows that price discrimination can be detrimental not only to a platform's profit but also to consumer surplus. [17] If negative network effects are moderate, platforms cannot compensate for the fiercer price competition for advertisers by extracting surplus from consumers. Consequently, even though a platform's profit under price discrimination is lower than that under uniform pricing, consumers are also worse off when platforms employ price discrimination.

This result suggests that policy action may be needed in two-sided media markets. In our model, competition authorities may adopt not only consumer surplus but also a platform's profit as a policy objective. Platforms set a uniform price for consumers, which resembles resale price maintenance (RPM). [18] RPM is used to protect publishing culture, so a platform's profit becomes one of the policy objectives. Under the condition of Proposition 4, price discrimination is detrimental to both a platform's profit and consumer surplus, and competition authorities should ban price discrimination among advertisers. Implementing such a policy may be difficult, however, because advertisers negotiate their prices behind closed doors. By contrast, competition authorities can observe a platform's profit and prices for consumers: when they observe low profits and high consumer prices, they can infer a large advertiser surplus.
Competition authorities should then examine the prices for advertisers in order to protect a platform's profit and consumer surplus. Our results also suggest that competition authorities need not always ban price discrimination among advertisers. Under the condition of Proposition 3, a platform's profit under price discrimination is higher than that under uniform pricing, but price discrimination among advertisers is harmful to consumers under that condition. If competition authorities prefer to protect platforms, they will permit price discrimination among advertisers. If $b_B > -t_S/2$, price discrimination among advertisers is harmful to platforms but beneficial to consumers, so authorities that weigh consumer surplus more heavily than a platform's profit will permit it under this condition.

In contrast to our result, Liu and Serfes (2013) demonstrate that when price discrimination reduces a platform's profit, both groups' welfare improves. In their model, the two agent groups join the platforms simultaneously, so the two network effects affect prices and profits symmetrically. In our model, consumers choose platforms before the platforms offer prices to advertisers, so the two network effects affect prices and profits differently.

## 4 Discussion

In this section, we examine four important assumptions of our basic model to check its robustness. [19] First, network effects are asymmetric; in Section 4.1, we consider a model with positive network effects on both sides. Second, the agent groups choose a platform sequentially; we extend the model to simultaneous participation in Section 4.2, where platforms can potentially set negative prices by internalizing network effects. Third, platforms price discriminate only among advertisers; we allow price discrimination on both sides in Section 4.3. Finally, we discuss multihoming in Section 4.4.
Examining these assumptions, we find that the simultaneous game, discrimination on both sides, and multihoming do not intrinsically affect our main result; the assumption of asymmetric network effects is what drives it. Moreover, relaxing all of our assumptions except multihoming makes our model comparable to that of Liu and Serfes (2013). [20] Negative network effects are thus what differentiate our model from theirs. In addition, they restrict the price schedule to non-negative prices, whereas our model intrinsically permits negative prices.

### 4.1 Positive network effects on both sides

In this subsection, we investigate how the direction of the network effects affects our main result by allowing positive network effects for both groups, i.e., $b_B > 0$. In this case the condition of Proposition 3 cannot hold, so a platform's profit is lower under price discrimination than under uniform pricing. Similarly, because $t_S + 2b_B$ is positive, consumer surplus under price discrimination is larger than that under uniform pricing. Asymmetric network effects therefore play an essential role in our model.

### 4.2 Simultaneous game

We extend our model to a simultaneous game in which platforms set prices for consumers and advertisers at the same time, and we show that the results are qualitatively identical to those of the basic model. First, suppose that platforms set uniform prices on both sides. This game is identical to that in Armstrong (2006a), so the equilibrium prices are $p_i^{UP} = c_i + t_i - b_j$: the standard Hotelling price adjusted by the platform's external benefit from attracting an additional agent.

Now suppose that, in the simultaneous game, platforms price discriminate among advertisers and set uniform prices for consumers. We first derive the advertiser prices, following Liu and Serfes (2013).
Given platform 2's advertiser prices and the consumer market shares, platform 1 charges each advertiser the highest price at which the advertiser prefers platform 1 to platform 2:
$$p_1^S = t_S(1 - 2x_S) + b_S(n_1^B - n_2^B) + p_2^S, \quad \text{if } x_S \le \frac{t_S + b_S(n_1^B - n_2^B)}{2t_S}.$$
We next investigate platform 2's advertiser prices. For advertisers who prefer platform 1, platform 2's price equals platform 2's external benefit from attracting an additional advertiser. [21] When platform 2 attracts an additional advertiser, it loses a further $b_B/t_B$ consumers and hence the external benefit $p^B b_B / t_B$ from consumers. This change in consumer market share, $b_B/t_B$, in turn feeds back to the advertiser side, so platform 2 loses $2 b_S b_B (1 - x_S)/t_B$ in additional revenue from infra-marginal advertisers. Since these prices must also satisfy the connectedness property, platform 2's advertiser prices are

[22] $$p_2^S = \frac{b_B \left( p_2^B + 2b_S(1 - x_S) \right)}{t_B}, \quad \text{if } x_S \le \frac{t_S + b_S(n_1^B - n_2^B)}{2t_S}.$$

If platforms set these advertiser prices, however, they have an incentive to deviate, and the deviation triggers price competition because the platforms can price discriminate among advertisers. The equilibrium prices and platform profits under price discrimination are therefore
$$p_1^S = t_S(1 - 2x_S) + b_B - \frac{2(b_S b_B)^2}{t_B(t_S t_B - b_S b_B)}, \quad p_2^S = b_B - \frac{2(b_S b_B)^2}{t_B(t_S t_B - b_S b_B)}, \quad \text{if } x_S \le \frac{1}{2},$$
$$p_1^S = b_B - \frac{2(b_S b_B)^2}{t_B(t_S t_B - b_S b_B)}, \quad p_2^S = t_S(2x_S - 1) + b_B - \frac{2(b_S b_B)^2}{t_B(t_S t_B - b_S b_B)}, \quad \text{if } x_S \ge \frac{1}{2},$$
$$p_1^B = p_2^B = t_B - b_S - \frac{2(b_S)^2 b_B}{t_S t_B - b_S b_B},$$
$$\pi_i^{PD} = \frac{t_B - b_S - b_B}{2} + \frac{t_S}{4} + \frac{(b_S)^2 b_B (b_B - t_B)}{t_B(t_S t_B - b_S b_B)}.$$
We compare profits under the two pricing schemes:
$$\pi^{UP} - \pi^{PD} = \frac{t_S t_B(t_S t_B - b_S b_B) - 4(b_S)^2 b_B(b_B - t_B)}{4 t_B(t_S t_B - b_S b_B)}.$$
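The simultaneous-game profit comparison can be checked for internal consistency: with $\pi^{UP} = (t_S + t_B - b_S - b_B)/2$ and $\pi^{PD}$ as above, their difference should equal the single closed-form ratio. The sketch below verifies this identity for illustrative parameter values (with $b_B < 0$, as in the base model); the parameter values are assumptions.

```python
# Consistency check for the simultaneous game: pi_UP - pi_PD should equal
# [t_S t_B D - 4 b_S^2 b_B (b_B - t_B)] / (4 t_B D), with D = t_S t_B - b_S b_B.
t_S, t_B, b_S, b_B = 1.0, 1.2, 0.4, -0.8  # illustrative parameters (b_B < 0)
D = t_S * t_B - b_S * b_B

pi_up = (t_S + t_B - b_S - b_B) / 2
pi_pd = (t_B - b_S - b_B) / 2 + t_S / 4 + b_S**2 * b_B * (b_B - t_B) / (t_B * D)

diff = (t_S * t_B * D - 4 * b_S**2 * b_B * (b_B - t_B)) / (4 * t_B * D)
print(abs((pi_up - pi_pd) - diff) < 1e-12)  # True
```

The identity holds term by term: the $(t_B - b_S - b_B)/2$ pieces cancel, leaving $t_S/4$ minus the feedback term, which is exactly the stated ratio over the common denominator $4 t_B D$.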
If the condition
$$b_B < -\frac{t_B \left\{ t_S - 4b_S + \sqrt{(t_S - 4b_S)^2 + 16(t_S)^2} \right\}}{8 b_S}$$
holds, profits under uniform pricing are lower than under price discrimination. Hence, when our model is extended to the simultaneous game, the main result does not intrinsically change. We obtain this result without assuming that prices for agents are non-negative; the non-negativity assumption is what differentiates our model from that of Liu and Serfes (2013).

### 4.3 Discrimination on both sides

Third, we explore the case in which platforms can price discriminate on both sides. In this extension, platforms attract the two sides sequentially, as in our basic model. Since platforms set advertiser prices in the final stage, the discriminatory prices for advertisers are identical to those in the basic model, i.e., eqs [8] and [9]. We consider the discriminatory prices for consumers. Given the market share of advertisers, the platforms' prices for consumers near platform 1 are

[23] $$p_1^B = t_B(1 - 2x_B) - \frac{b_S b_B (1 - 2x_B)}{t_S} - b_S, \quad p_2^B = -b_S, \quad \text{if } x_B \le \frac{1}{2}.$$

Similarly, we can derive the discriminatory prices for consumers near platform 2. The platform profit under price discrimination is therefore
$$\pi^{PD} = \frac{t_S + t_B - 2b_S}{4} - \frac{b_S b_B}{4t_S}.$$
When platforms set uniform prices on both sides, their profit is eq. [5]. We compare platform profits under uniform pricing on both sides and under price discrimination:
$$\pi^{UP} - \pi^{PD} = \frac{3t_S(t_S + t_B) + 2t_S b_S + b_S b_B}{12t_S}.$$
In this extension, if the condition $b_B < -\left( 3t_S(t_S + t_B) + 2t_S b_S \right)/b_S$ holds, profits are greater under price discrimination than under uniform pricing. Hence, when a platform can price discriminate on both consumers and advertisers, the main result does not intrinsically change. Consumer surplus in this extension is
$$CS^{PPDboth,B} = \alpha_B + \frac{b_B}{2} + \frac{b_S(t_S + b_B)}{t_S} - \frac{3t_B}{4}.$$
We compare this consumer surplus to eq.
[15]:
$$CS^{PPDboth,B} - CS^{PPD,B} = \frac{t_B}{2} - \frac{b_S b_B}{2t_S} > 0.$$
Consumer surplus under price discrimination on both sides is larger than consumer surplus under price discrimination on only the advertiser side. Moreover, platform profits are lower under price discrimination on both sides than under advertiser-side discrimination alone, since advertiser prices are identical under sequential participation.

### 4.4 Multihoming

In this subsection, we extend our model to allow advertisers to multihome. As in the basic model, we find that price discrimination raises platform profit when consumers strongly dislike advertisements. First, consider advertiser prices when advertisers can multihome. Let $\theta \in (0, \alpha_S)$ be the additional reservation utility from shifting from singlehoming to multihoming. In the basic model we assume that $\alpha_S$ is large enough to cover the advertiser side, so in this case every advertiser can multihome, and platforms' price competition for advertisers is relaxed. Because multihoming relaxes price competition for advertisers, price discrimination raises platform profits relative to the singlehoming case.

Next, consider the case in which some advertisers singlehome and others multihome. The equilibrium advertiser prices are
$$p_1^S = t_S(1 - 2x_S), \quad p_2^S = 0, \quad \text{for } x_S \le \frac{2t_S - 2\theta - b_S}{2t_S},$$
$$p_1^S = \theta + \frac{b_S}{2} - t_S x_S, \quad p_2^S = \theta + \frac{b_S}{2} - t_S(1 - x_S), \quad \text{for } x_S \in \left[ \frac{2t_S - 2\theta - b_S}{2t_S},\; \frac{2\theta + b_S}{2t_S} \right],$$
$$p_1^S = 0, \quad p_2^S = t_S(2x_S - 1), \quad \text{for } x_S \ge \frac{2\theta + b_S}{2t_S}.$$
Let $x_{10}^S = (2t_S - 2\theta - b_S)/2t_S$ and differentiate $x_{10}^S$ with respect to $\theta$:
$$\frac{\partial x_{10}^S}{\partial \theta} = -\frac{1}{t_S} < 0.$$
The multihoming threshold for platform 1 decreases in $\theta$: when the additional reservation utility increases, platforms attract a larger number of multihoming advertisers.
We compare platform profits from advertisers under price discrimination with those under uniform pricing. When advertisers can multihome under price discrimination, a platform can extract the entire surplus of the advertisers who join it. Under uniform pricing, by contrast, each platform sets a monopoly price for advertisers, so advertisers retain some surplus. Platform profit from advertisers therefore increases under price discrimination, which differs from the result of our basic model.

Next, we consider consumer prices in the first stage, adapting the intuition from Section 3. When platform 1 gains an additional consumer, it gains $b_S/2t_S$ additional advertisers under uniform pricing and $b_S/t_S$ under price discrimination. The positive network effect thus gives price-discriminating platforms a greater incentive to reduce consumer prices, while the negative network effect gives them a weaker one. Comparing consumer prices under the two schemes,
$$p^{UP,B} - p^{PD,B} = \frac{b_S(4t_S + 2b_B - 2\theta - b_S)}{4t_S}.$$
When $b_B < (2\theta + b_S - 4t_S)/2$, the consumer price under price discrimination is higher than that under uniform pricing, so platform profit under price discrimination increases whenever this condition holds.

Finally, we consider agents' surplus when advertisers can multihome. In the singlehoming case, advertiser surplus is greater under price discrimination because discrimination produces fierce competition; in the multihoming case, advertiser surplus is lower under price discrimination. Consumer surplus, however, is qualitatively unchanged whether or not advertisers multihome: when negative network effects are sufficiently weak, consumer surplus is lower under price discrimination.

## 5 Conclusion

We have investigated a spatial model of price discrimination in two-sided media markets.
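As a consistency check on the multihoming price schedule above, the sketch below confirms that the singlehoming price $t_S(1-2x)$ and the multihoming price $\theta + b_S/2 - t_S x$ coincide at the threshold $x_{10}^S = (2t_S - 2\theta - b_S)/2t_S$; the parameter values are illustrative assumptions.

```python
# Continuity check of platform 1's discriminatory advertiser prices at the
# boundary between the singlehoming and multihoming regions.
t_S, theta, b_S = 1.0, 0.2, 0.3  # illustrative parameters

x10 = (2 * t_S - 2 * theta - b_S) / (2 * t_S)

p_single = t_S * (1 - 2 * x10)          # price in the singlehoming region
p_multi = theta + b_S / 2 - t_S * x10   # price in the multihoming region

print(abs(p_single - p_multi) < 1e-12)  # True: the schedule is continuous
```

Continuity at the threshold matters economically: a jump in the schedule would let an advertiser just inside one region profitably mimic an advertiser just outside it, breaking the candidate equilibrium.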
The result is that if negative network effects are sufficiently large, a platform's profit under price discrimination is higher than that under uniform pricing. We also demonstrate that platforms and consumers are both worse off under price discrimination when negative network effects are moderate. This welfare result indicates that policymakers may need to reconsider policies in two-sided markets. We take a preliminary step by studying a model in which platforms price discriminate on the locations of agents; in media markets, platforms can also price discriminate on the size or quantity of advertising. Extending our model to second-degree price discrimination is left for future research.

## Appendix A

### Proof of Proposition 1

We consider prices for advertisers in the second stage. Using eqs [1] and [2], the first-order condition is

[16] $$\frac{\partial \pi_i^{UP}}{\partial p_i^S} = \frac{t_S + p_j^S - 2p_i^S + b_S(n_i^B - n_j^B)}{2t_S} = 0.$$

Solving these equations, we obtain the advertiser prices and market shares:

[17] $$p_i^S = t_S + \frac{b_S(n_i^B - n_j^B)}{3}, \quad n_i^S = \frac{1}{2} + \frac{b_S(n_i^B - n_j^B)}{6t_S}.$$

Next, we consider prices for consumers in the first stage. Using eq. [17], the number of consumers who join each platform satisfies

[18] $$n_i^B = \frac{3t_S(p_j^B - p_i^B) - b_S b_B n_j^B + 3t_S t_B}{6t_S t_B - b_S b_B},$$

and hence

[19] $$n_i^B = \frac{3t_S(p_j^B - p_i^B) + 3t_S t_B - b_S b_B}{2(3t_S t_B - b_S b_B)}.$$

Substituting eq. [19] into the advertiser prices and market shares, the platform's profit is
$$\pi_i^{UP} = \left[ t_S + \frac{t_S b_S(p_j^B - p_i^B)}{3t_S t_B - b_S b_B} \right] \left[ \frac{1}{2} + \frac{b_S(p_j^B - p_i^B)}{2(3t_S t_B - b_S b_B)} \right] + (p_i^B - c)\, \frac{3t_S(p_j^B - p_i^B) + 3t_S t_B - b_S b_B}{2(3t_S t_B - b_S b_B)}.$$
The first-order condition is
$$\frac{3t_S(p_j^B - 2p_i^B) + 3t_S t_B - b_S b_B - 2t_S b_S + 3t_S c}{2(3t_S t_B - b_S b_B)} - \frac{t_S(b_S)^2(p_j^B - p_i^B)}{(3t_S t_B - b_S b_B)^2} = 0.$$
We also derive the second-order condition:
$$\frac{\partial^2 \pi_i^{UP}}{\partial (p_i^B)^2} = \frac{t_S \left\{ (b_S)^2 - 9t_S t_B + 3b_S b_B \right\}}{(3t_S t_B - b_S b_B)^2} < 0.$$
Solving the first-order conditions, the equilibrium price for consumers is
$$p_i^B = c + t_B - \frac{b_S(2t_S + b_B)}{3t_S}.$$
Using this consumer price, we obtain the equilibrium advertiser price and platform profit:
$$p_i^S = t_S, \quad \pi_i^{UP} = \frac{t_S + t_B}{2} - \frac{b_S(2t_S + b_B)}{6t_S}.$$

### Proof of Proposition 2

We consider prices for consumers in the first stage. Since platforms charge uniform prices to consumers, the number of consumers is
$$n_i^B = \frac{t_B + b_B(n_i^S - n_j^S) - (p_i^B - p_j^B)}{2t_B}.$$
Using the number of advertisers, the numbers of agents who join platform $i$ simplify to

[20] $$n_i^B = \frac{t_S(p_j^B - p_i^B) + t_S t_B - b_S b_B}{2(t_S t_B - b_S b_B)},$$

[21] $$n_i^S = \frac{b_S(p_j^B - p_i^B) + t_S t_B - b_S b_B}{2(t_S t_B - b_S b_B)}.$$

Substituting eqs [20] and [21] into eqs [6] and [7], we derive the first-order and second-order conditions:
$$\frac{1}{2} - \frac{t_S b_S \left( t_S t_B - b_S b_B - b_S(p_i^B - p_j^B) \right)}{2(t_S t_B - b_S b_B)^2} + \frac{t_S(p_j^B - 2p_i^B) + t_S c}{2(t_S t_B - b_S b_B)} = 0,$$
$$\frac{t_S \left( (b_S)^2 + 2b_S b_B - 2t_S t_B \right)}{2(t_S t_B - b_S b_B)^2} < 0.$$
Solving the first-order conditions, we obtain the equilibrium consumer price and platform profit:
$$p_i^B = c + t_B - b_S - \frac{b_S b_B}{t_S}, \quad \pi_i^{PPD} = \frac{t_S + 2t_B - 2b_S}{4} - \frac{b_S b_B}{2t_S}.$$
The number of consumers who join platform $i$ is $1/2$. Therefore, the equilibrium advertiser prices are
$$p_1^S = t_S(1 - 2x_S), \quad p_2^S = 0, \quad x_S \le \frac{1}{2}; \qquad p_1^S = 0, \quad p_2^S = t_S(2x_S - 1), \quad x_S \ge \frac{1}{2}.$$

### Proof of Proposition 3

We compare platform profits:

[22] $$\pi_i^{UP} - \pi_i^{PPD} = \frac{3(t_S)^2 + 2t_S b_S + 4b_S b_B}{12t_S}.$$

Therefore, $\pi_i^{UP} < \pi_i^{PPD}$ if $b_B < -t_S(3t_S + 2b_S)/4b_S$.

### Proof of Proposition 4

We compare consumer surplus using eqs [13] and [15]:

[23] $$CS^{UP,B} - CS^{PPD,B} = -\frac{b_S(t_S + 2b_B)}{3t_S}.$$

Therefore, $CS^{UP,B} > CS^{PPD,B}$ if $b_B < -t_S/2$.
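The first-order condition in the proof of Proposition 1 can be verified by substituting the candidate price back in. The sketch below evaluates the symmetric FOC (in the form coded in the comments, which is an assumption based on our reading of the derivation) at $p_i^B = p_j^B = c + t_B - b_S(2t_S + b_B)/3t_S$ for illustrative parameters and confirms it vanishes.

```python
# Verify the stage-1 FOC of Proposition 1 at the symmetric candidate price.
# FOC: [3 t_S (p_j - 2 p_i) + 3 t_S t_B - b_S b_B - 2 t_S b_S + 3 t_S c] / (2 D3)
#      - t_S b_S^2 (p_j - p_i) / D3^2 = 0, with D3 = 3 t_S t_B - b_S b_B.
t_S, t_B, b_S, b_B, c = 1.0, 1.0, 0.5, -1.0, 0.2  # illustrative parameters
D3 = 3 * t_S * t_B - b_S * b_B

p = c + t_B - b_S * (2 * t_S + b_B) / (3 * t_S)  # candidate equilibrium price

def foc(p_i, p_j):
    first = (3 * t_S * (p_j - 2 * p_i) + 3 * t_S * t_B - b_S * b_B
             - 2 * t_S * b_S + 3 * t_S * c) / (2 * D3)
    second = t_S * b_S**2 * (p_j - p_i) / D3**2
    return first - second

print(abs(foc(p, p)) < 1e-12)  # True: the candidate price satisfies the FOC
```

At a symmetric point the second term drops out, so the check reduces to the numerator $-3t_S p + 3t_S t_B - b_S b_B - 2t_S b_S + 3t_S c$ being zero, which pins down the consumer price in Proposition 1.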
## Appendix B

In this supplemental section, we provide a detailed analysis of Section 4. Since the case of positive network effects on both sides is easy to verify, we discuss the three remaining cases: the simultaneous game, discrimination on both sides, and multihoming.

### Simultaneous game

We extend our model to a simultaneous game and compare platform profits under uniform and discriminatory pricing. [24] In this extension, platforms simultaneously set prices for both groups. The main difference between the simultaneous and sequential games lies in the prices offered to advertisers: in the simultaneous game, platforms must account for the feedback loop that the network effects create in the advertiser price. First, the uniform-pricing case in the simultaneous game is the same as that of Armstrong (2006a), so equilibrium prices and profits are

[24] $$p_i^{UP} = t_i - b_j,$$

[25] $$\pi^{UP} = \frac{t_S + t_B - b_S - b_B}{2}.$$

Next, we consider the case in which platforms simultaneously set discriminatory prices for advertisers and uniform prices for consumers. [25] The equilibrium prices are
$$p_1^S = t_S(1 - 2x_S) + b_B - \frac{2(b_S b_B)^2}{t_B(t_S t_B - b_S b_B)}, \quad p_2^S = b_B - \frac{2(b_S b_B)^2}{t_B(t_S t_B - b_S b_B)}, \quad \text{if } x_S \le \frac{1}{2},$$
$$p_1^S = b_B - \frac{2(b_S b_B)^2}{t_B(t_S t_B - b_S b_B)}, \quad p_2^S = t_S(2x_S - 1) + b_B - \frac{2(b_S b_B)^2}{t_B(t_S t_B - b_S b_B)}, \quad \text{if } x_S > \frac{1}{2},$$
$$p_1^B = p_2^B = t_B - b_S - \frac{2(b_S)^2 b_B}{t_S t_B - b_S b_B}.$$
We describe how these equilibrium prices are obtained. First, consider the advertiser prices. Given the consumer market shares, when the advertiser at location $\hat{x}_S$ is indifferent between platforms 1 and 2, advertisers with $x_S \le \hat{x}_S = (t_S + b_S(n_1^B - n_2^B))/2t_S$ prefer platform 1 to platform 2. Platform 1 then sets $p_1^S$ given platform 2's prices as follows:

[26] $$p_1^S = t_S(1 - 2x_S) + b_S(n_1^B - n_2^B) + p_2^S, \quad \text{for } x_S \le \frac{t_S + b_S(n_1^B - n_2^B)}{2t_S}.$$
For x S ( t S + b S ( n 1 B n 2 B ) ) / 2 t S , platform 2’s prices for advertisers are equivalent to platform 2’s benefit from obtaining an additional advertiser. [27] p 1 S = t S ( 1 2 x S ) + b S ( n 1 B n 2 B ) b B ( p 2 B + 2 b S ( 1 x S ) ) t B , p 2 S = b B ( p 2 B + 2 b S ( 1 x S ) ) t B , i f x S x ˜ S . [28] p 1 S = b B ( p 1 B + 2 b S x S ) t B , p 2 S = t S ( 2 x S 1 ) + b S ( n 2 B n 1 B ) b B ( p 1 B + 2 b S x S ) t B , i f x S x ˜ S , x ˜ S = t S + b S ( n 1 B n 2 B ) 2 t S . Here, we investigate whether these prices offered to advertisers achieve equilibrium following Liu and Serfes (2013). In the symmetric equilibrium, each platform has own territory, e.g., platform 1 obtains all advertisers on [ 0 , 1 / 2 ] . [27] Platform 1 must provide for advertisers who are connected because we exclude the deviation that the platform obtains disjoint advertisers. Given prices and market shares on the consumer side, we differentiate platform 1’s prices for advertisers (eqs [ 27] and [ 28]) with respect to x S : p 1 S x S = 2 t S + 2 b S b B t B < 0 i f x S x ˜ S , p 1 S x S = 2 b S b B t B > 0 i f x S x ˜ S . As b B is negative, p 1 S decreases with x S over the interval [ 0 , x ˜ S ] and increases with x S over the interval [ x ˜ S , 1 ] . Then, if platform 1 raises prices for advertisers near to x ˜ S and lowers advertiser prices near the right end of the line without changing the market share on the advertiser side, it is possible for platform 1 to increase its profits. When platform 1 pursues this deviation, we consider price competition for advertisers. Here, suppose that both platforms set the symmetric equilibrium price for consumers, p 1 B = p 2 B = p ˜ B , and have the same market shares on the consumer side, i.e., n 1 B = n 2 B = 1 / 2 . p 1 S ( x ˜ S ) indicates that platform 1 sets the price for an advertiser located at x S = x ˜ S . We have p 1 S ( x ˜ S ) from eq. [27]: [29] p 1 S ( x ˜ S ) = b B ( p ˜ B + b S ) t B . 
Moreover, p 1 S ( 1 ) indicates that platform 1 sets a prices for an advertiser located at x S = 1 , [30] p 1 S ( 1 ) = b B ( p ˜ B + 2 b S ) t B . Because p 1 S ( x ˜ S ) < p 1 S ( 1 ) , platform 1 has an incentive to follow the above-mentioned deviation. Then, platform 2, which predicts the deviation, reduces the price for the advertiser located at x S = 1 to recover the market share on x S > x ˜ S . This behavior continues until the price offered to the advertiser is equal to p 1 S ( x ˜ S ) . The same thing holds for advertisers located on [ x ˜ S , 1 ] . Therefore, platform 1 sets p 1 S ( x ˜ S ) for advertisers located on [ x ˜ S , 1 ] . Similarly, platform 2 sets p 2 S ( x ˜ S ) = b B ( p ˜ B + b S ) / t B for advertisers located on [ 0 , x ˜ S ] . Therefore, prices for advertisers are p 1 S = t S ( 1 2 x S ) + b S ( n 1 B n 2 B ) b B ( p 2 B + b S ) t B , [31] p 2 S = b B ( p 2 B + b S ) t B , i f x S x ˜ S . p 1 S = b B ( p 1 B + b S ) t B , [32] p 2 S = t S ( 2 x S 1 ) + b S ( n 2 B n 1 B ) b B ( p 1 B + b S ) t B , i f x S x ˜ S . Next, we prove that platform 1 has no incentive to pursue other deviations: The first deviation is that platform 1 decreases its market share on the advertiser side by changing the price for advertisers located on x S x ˜ S ; the second deviation is that platform 1 contracts with more advertisers on x S x ˜ S . [28] Suppose that platform 1’s profit is π 1 s i m when the platforms set p ˜ B for consumers. We demonstrate that platform 1 has no incentive to engage in the first deviation. Suppose that the deviation price, p 1 d S , is the minimum deviation price for losing the advertisers. For x S x ˜ S , the deviation price p 1 d S decreases with x S ; p 1 d S / x S = 2 t S < 0 . If platform 1 pursues the deviation, platform 1 initially loses an advertiser near to one half. Therefore, platform 1 serves advertisers who are connected on [ 0 , x ˜ S ] . 
Suppose that platform 1 loses an extremely small number δ > 0 of advertisers due to the deviation and has market share x _ S = 1 / 2 δ on the advertiser side. Then, the market share for consumers changes by b B δ / t B . Platform 1’s deviation profit π 1 s i m d is π 1 s i m d = 0 x _ S t S ( 1 2 x S ) 2 b S b B δ t B b B ( p ˜ B + b S ) t B d x S + p ˜ B 1 2 b B δ t B . Therefore, platform 1’s additional profit is π 1 s i m d π 1 s i m = ( t S t B 2 b S b B ) δ 2 t B < 0. It is not profitable for platform 1 to decrease its advertiser market share. We also demonstrate that platform 1 has no incentive to pursue the second deviation. Suppose that the deviation price, p 1 d S , is the minimum deviation price that allows it to obtain advertisers. Then, platform 1 contracts with an additional δ advertisers on x S 1 / 2 , and platform 1’s advertiser market share is x ˉ s = 1 / 2 + δ . Then, the consumer market share changes by b B δ / t B . Platform 1’s deviation profit π 1 s i m d is π 1 s i m d = 0 1 / 2 t S ( 1 2 x S ) + 2 b S b B δ t B b B ( p ˜ B + b S ) t B d x S 1 / 2 x s b B ( p B + b S ) t B d x S + p ˜ B ( 1 2 + b B δ t B ) . Therefore, platform 1’s additional profit is π 1 s i m d π 1 s i m = 0. It is not profitable for platform 1 to increase its market share of advertisers. On the advertiser side, platforms have no incentive for deviations in which they attract additional or fewer advertisers. We investigate consumer prices. From eqs [31] and [32], platform 1’s profit is ([33] π 1 P D = 0 ( t S + b S ( n 1 B n 2 B ) ) / 2 t S t S ( 1 2 x S ) + b S ( n 1 B n 2 B ) b B ( p 2 B + b S ) t B d x S + p 1 B n 1 B . Because platforms set uniform consumer prices, the market shares of platform i are equal to eqs [ 20] and [ 21]. 
We can derive the first-order conditions for consumer prices: [34] π i P D p i B = t S + t S b S ( p j B p i B ) t S t B b S b B b B ( p j B b S ) t B b S 2 ( t S t B b S b B ) + t S t B b S b B + t S ( p j B 2 p i B ) 2 ( t S t B b S b B ) = 0. Solving the equations, the equilibrium prices and profit are p 1 S = t S ( 1 2 x S ) b B + 2 ( b S b B ) 2 t B ( t S t B b S b B ) , p 2 S = b B + 2 ( b S b B ) 2 t B ( t S t B b S b B ) , i f x S 1 2 , p 1 S = b B + 2 ( b S b B ) 2 t B ( t S t B b S b B ) , p 2 S = t S ( 2 x S 1 ) b B + 2 ( b S b B ) 2 t B ( t S t B b S b B ) , i f x S 1 2 , p 1 B = p 2 B = t B b S 2 ( b S ) 2 b B t S t B b S b B , [35] π i P D = t B b S b B 2 + t S 4 + ( b S ) 2 b B ( b B t B ) t B ( t S t B b S b B ) . Note that 2 ( t S t B b S b B ) ( b S ) 2 > 0 is necessary to satisfy the second-order condition. Finally, we compare profits under two different pricing schemes using eqs [25] and [35]. π U P π i P D = t S t B ( t S t B b S b B ) 4 ( b S ) 2 b B ( b B t B ) 4 t B ( t S t B b S b B ) . As the second-order condition holds, profits under uniform pricing are lower than under price discrimination if and only if b B < ( t B { t S b S + ( t S 2 b S ) 2 + 16 ( t S ) 2 } ) / ( 8 b S ) . Therefore, if negative network effects are sufficiently large, this condition holds. When our model is extended to the simultaneous game, the main result does not change intrinsically. ### Discrimination on both sides In this subsection, we extend our basic model to price discrimination on both sides. Platforms can price discriminate not only among advertisers but also among consumers. Analogously to the basic model, platforms sequentially set prices for both groups, i.e., consumers are charged in the first stage and advertisers are in the next. Suppose that n ˆ i B is platform i’s market share on the consumer side determined in the first stage. 
Price discrimination among advertisers in this extension is the same as in our basic model:
$$p_1^S = t_S(1 - 2\hat{x}_S) + b_S(\hat{n}_1^B - \hat{n}_2^B), \qquad p_2^S = 0, \qquad \text{if } \hat{x}_S \leq \frac{t_S + b_S(\hat{n}_1^B - \hat{n}_2^B)}{2 t_S},$$
$$p_1^S = 0, \qquad p_2^S = t_S(2\hat{x}_S - 1) + b_S(\hat{n}_2^B - \hat{n}_1^B), \qquad \text{if } \hat{x}_S \geq \frac{t_S + b_S(\hat{n}_1^B - \hat{n}_2^B)}{2 t_S}.$$
Then, platform 1's market share of advertisers is $\hat{n}_1^S = \hat{x}_S$ (resp. $\hat{n}_2^S = 1 - \hat{x}_S$).

We investigate discriminatory prices for consumers. Suppose that a consumer at location $\hat{x}_B$ is indifferent between joining platform 1 or 2. Given platform 2's consumer prices, platform 1 charges prices for consumers at $x_B \leq \hat{x}_B$ as follows:
$$p_1^B = t_B(1 - 2x_B) + b_B(\hat{n}_1^S - \hat{n}_2^S) + p_2^B = t_B(1 - 2x_B) + \frac{b_S b_B(\hat{n}_1^B - \hat{n}_2^B)}{t_S} + p_2^B \quad \text{for } x_B \leq \hat{x}_B = \frac{t_S t_B + b_S b_B(\hat{n}_1^B - \hat{n}_2^B)}{2 t_S t_B}.$$
Hence,
$$p_1^B = t_B(1 - 2x_B) + \frac{b_S b_B(\hat{n}_1^B - \hat{n}_2^B)}{t_S} - 2 b_S \hat{n}_2^S + \frac{2 b_S b_B(1 - x_B)}{t_S}, \qquad p_2^B = -2 b_S \hat{n}_2^S + \frac{2 b_S b_B(1 - x_B)}{t_S}, \qquad \text{if } x_B \leq \frac{t_S t_B + b_S b_B(\hat{n}_1^B - \hat{n}_2^B)}{2 t_S t_B},$$
$$p_1^B = -2 b_S \hat{n}_1^S + \frac{2 b_S b_B x_B}{t_S}, \qquad p_2^B = t_B(2x_B - 1) + \frac{b_S b_B(\hat{n}_2^B - \hat{n}_1^B)}{t_S} - 2 b_S \hat{n}_1^S + \frac{2 b_S b_B x_B}{t_S}, \qquad \text{if } x_B \geq \frac{t_S t_B + b_S b_B(\hat{n}_1^B - \hat{n}_2^B)}{2 t_S t_B}.$$
Suppose that $x_B = \hat{n}_1^B$ and $1 - x_B = \hat{n}_2^B$. The prices for consumers can then be rewritten as follows:
$$[36]\qquad p_1^B = t_B(1 - 2x_B) - b_S + \frac{b_S\left( b_S(2x_B - 1) + b_B(4x_B - 3) \right)}{t_S}, \qquad p_2^B = -b_S + \frac{b_S\left( b_S(2x_B - 1) - 2 b_B(1 - x_B) \right)}{t_S}, \qquad \text{if } x_B \leq \frac{1}{2},$$
$$[37]\qquad p_1^B = -b_S + \frac{b_S\left( b_S(2x_B - 1) + 2 b_B x_B \right)}{t_S}, \qquad p_2^B = t_B(2x_B - 1) - b_S - \frac{b_S\left( b_S(2x_B - 1) + b_B(4x_B - 1) \right)}{t_S}, \qquad \text{if } x_B \geq \frac{1}{2}.$$
Similar to the simultaneous game, we explore how platforms set prices for the consumers they serve. We differentiate platform 1's prices for consumers with respect to $x_B$:
$$\frac{\partial p_1^B}{\partial x_B} = -2 t_B + \frac{2 b_S(b_S + 2 b_B)}{t_S} \quad \text{if } x_B \leq \frac{1}{2}, \qquad \frac{\partial p_1^B}{\partial x_B} = \frac{2 b_S(b_S + b_B)}{t_S} \quad \text{if } x_B \geq \frac{1}{2}.$$
We focus on the case in which consumers strongly dislike advertisements, i.e., $b_B < b_S$. Therefore, $p_1^B$ decreases with $x_B$ over the interval $[0, 1/2]$ and increases with $x_B$ over the interval $[1/2, 1]$. If platform 1 raises prices for consumers close to $1/2$ and lowers consumer prices near the right end of the line without changing its market share on the advertiser side, it is possible for platform 1 to increase its profits.

When platforms pursue the above-mentioned deviation, we consider the effects on price competition for consumers. Using eq. [37], platform 1 sets the following price for a consumer located at $x_B = 1/2$:
$$p_1^B(1/2) = -b_S + \frac{b_S b_B}{t_S}.$$
Similarly, platform 1 sets the following price for a consumer located at $x_B = 1$:
$$p_1^B(1) = -b_S + \frac{b_S(b_S + 2 b_B)}{t_S}.$$
As $p_1^B(1/2) < p_1^B(1)$, platform 1 pursues the above-mentioned deviation in which it decreases its market share on $x_B < 1/2$ by increasing the price for a consumer close to $1/2$ and increases its market share on $x_B > 1/2$ by lowering the price for a consumer located at $x_B = 1$. Platform 2 predicts this deviation and reduces the price for a consumer located at $x_B = 1$ to recover its market share on $x_B > 1/2$. This behavior continues until the price for a consumer located at $x_B = 1$ is equal to $p_1^B(1/2)$. The same holds for consumers in the interval $[1/2, 1]$. Moreover, we can adapt this discussion for consumers in $[0, 1/2]$. Therefore, the prices for consumers are
$$[38]\qquad p_1^B = t_B(1 - 2x_B) - b_S + \frac{2 b_S b_B(x_B - 1)}{t_S}, \qquad p_2^B = -b_S + \frac{b_S b_B}{t_S}, \qquad \text{if } x_B \leq \frac{1}{2},$$
$$[39]\qquad p_1^B = -b_S + \frac{b_S b_B}{t_S}, \qquad p_2^B = t_B(2x_B - 1) - b_S + \frac{2 b_S b_B x_B}{t_S}, \qquad \text{if } x_B \geq \frac{1}{2}.$$
Platform 1's profit under price discrimination is
$$\pi_1^{PD} = \int_0^{1/2} \left[ t_B(1 - 2x_B) - b_S + \frac{2 b_S b_B(x_B - 1)}{t_S} \right] dx_B + \int_0^{1/2} t_S(1 - 2x_S)\, dx_S$$
$$[40]\qquad = \frac{t_S + t_B}{4} - \frac{b_S}{2} - \frac{3 b_S b_B}{4 t_S}.$$
When platforms set these prices, they may have an incentive to pursue other deviations: the first deviation is that platform 1 contracts with more consumers on $x_B > 1/2$; the second deviation is that platform 1 decreases its market share on the consumer side by changing the price for consumers located on $x_B < 1/2$.

We consider the first case. Suppose that platform 1 sets a minimum deviation price $p_1^{d,B}$ for consumers on $x_B > 1/2$. Then, platform 1 acquires an extremely small number $\delta$ of consumers on $x_B > 1/2$, and its market share of consumers is $\bar{x}_B = 1/2 + \delta$. Moreover, platform 1 obtains an additional $b_S \delta / t_S$ advertisers through the deviation. Platform 1's deviation profit is
$$\pi_1^{PD,d} = \int_0^{1/2} \left[ t_B(1 - 2x_B) - b_S + \frac{2 b_S b_B(x_B - 1)}{t_S} \right] dx_B + \left( -b_S + \frac{b_S b_B}{t_S} \right)\delta + \int_0^{1/2 + b_S \delta / t_S} \left[ t_S(1 - 2x_S) + 2 b_S \delta \right] dx_S.$$
We compare the deviation profit with eq. [40]:
$$[41]\qquad \pi_1^{PD,d} - \pi_1^{PD} = \frac{b_S \delta(b_S \delta + b_B)}{t_S} > 0.$$
As $\delta$ is sufficiently small, $\pi_1^{PD,d} > \pi_1^{PD}$. Therefore, when platform 1 pursues the deviation and acquires an additional consumer on $x_B > 1/2$, platform 1 obtains an additional profit, $b_S b_B / t_S$. Moreover, platform 2 predicts this deviation and reduces its price for the consumer to recover its market share. This behavior continues until platform 1's price for consumers on $x_B > 1/2$ is equal to $p_1^B = -b_S$. We can adapt the discussion for consumers on $[0, 1/2]$. The prices for consumers and the platform profits are
$$[42]\qquad p_1^B = t_B(1 - 2x_B) - b_S + \frac{b_S b_B(2x_B - 1)}{t_S}, \qquad p_2^B = -b_S, \qquad \text{if } x_B \leq \frac{1}{2},$$
$$[43]\qquad p_1^B = -b_S, \qquad p_2^B = t_B(2x_B - 1) - b_S + \frac{b_S b_B(1 - 2x_B)}{t_S}, \qquad \text{if } x_B \geq \frac{1}{2},$$
$$[44]\qquad \pi_1^{PD} = \frac{t_S + t_B}{4} - \frac{b_S}{2} - \frac{b_S b_B}{4 t_S}.$$
Finally, we prove that platforms have no incentive to pursue the deviation in which platform 1 decreases its market share on the consumer side by changing the price for consumers located on $x_B < 1/2$.
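Equation [44] can be checked by direct integration. The sketch below (my notation and my reconstruction of the price schedule [42], not code from the paper) integrates the consumer price over platform 1's half of the line and adds the advertiser revenue:

```python
def pi_PD(tS, tB, bS, bB, n=20000):
    # consumer revenue: integral over [0, 1/2] of tB*(1-2x) - bS + bS*bB*(2x-1)/tS
    # advertiser revenue: integral over [0, 1/2] of tS*(1-2x)
    h = 0.5 / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h  # midpoint rule (exact for linear integrands)
        total += (tB*(1 - 2*x) - bS + bS*bB*(2*x - 1)/tS) * h  # consumer side
        total += tS*(1 - 2*x) * h                               # advertiser side
    return total

def pi_PD_closed(tS, tB, bS, bB):
    # the closed form stated in [44]
    return (tS + tB)/4 - bS/2 - bS*bB/(4*tS)
```

The two agree to floating-point accuracy for any positive parameter values, which supports the stated closed form.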
Suppose that platform 1 sets a minimum deviation price $p_1^{d,B}$ for consumers on $x_B < 1/2$. Then, platform 1 loses an extremely small number $\delta$ of consumers on $x_B < 1/2$, and its consumer market share is $\underline{x}_B = 1/2 - \delta$. Moreover, platform 1 loses an additional $b_S \delta / t_S$ advertisers through the deviation. Platform 1's deviation profit is
$$\pi_1^{PD,d} = \int_0^{1/2 - \delta} \left[ t_B(1 - 2x_B) - b_S + \frac{b_S b_B(2x_B - 1)}{t_S} \right] dx_B + \int_0^{1/2 - b_S \delta / t_S} \left[ t_S(1 - 2x_S) - 2 b_S \delta \right] dx_S.$$
We compare the deviation profit with eq. [44]:
$$[45]\qquad \pi_1^{PD,d} - \pi_1^{PD} = -\frac{\delta^2\left( t_S t_B - b_S b_B - (b_S)^2 \right)}{t_S} < 0.$$
As $b_B < b_S$, $\pi_1^{PD,d} < \pi_1^{PD}$. Therefore, the platforms have no incentive to pursue the above-mentioned deviation.

From our basic model, platform profits under uniform pricing are $(t_S + t_B)/2 - b_S(2 t_S + b_B)/(6 t_S)$. We compare profits under uniform pricing and under price discrimination:
$$\pi_i^{UP} - \pi_i^{PD} = \frac{t_S + t_B}{4} + \frac{b_S}{6} + \frac{b_S b_B}{12 t_S}.$$
In this extension of our model, if the condition $b_B < t_S\left( 3(t_S + t_B) + 2 b_S \right) / b_S$ holds, the profit under price discrimination is larger than that under uniform pricing. Therefore, when platforms can price discriminate on both sides, the main result does not change intrinsically.

### Multihoming

We investigate the extension of our basic model in which advertisers have the ability to multihome. We explore the case in which certain advertisers singlehome and other advertisers multihome. Similar to the result obtained from our benchmark model, we find that platform profits under price discrimination are larger than under uniform pricing when negative network effects are sufficiently large.
The utility of an advertiser located at $x_S \in [0, 1]$ is given by
$$u_i^S = \begin{cases} \alpha_S + b_S n_1^B - p_1^S - t_S x_S, & \text{if the advertiser joins platform 1}, \\ \alpha_S + b_S n_2^B - p_2^S - t_S(1 - x_S), & \text{if the advertiser joins platform 2}, \\ \alpha_S + \theta + b_S - p_1^S - p_2^S - t_S, & \text{if the advertiser joins both platforms}. \end{cases}$$
Suppose that $\theta \in (0, \alpha_S)$ is the additional reservation utility for an advertiser who is multihoming. In this case, the unit line of the advertiser side is divided into three intervals. Advertisers only join platform 1 near the left end. Near the middle, advertisers join both platforms. Advertisers only join platform 2 near the right end. To determine these intervals, we consider two marginal advertisers located at $x_{10}^S$ and $x_{02}^S$ ($0 < x_{10}^S < x_{02}^S < 1$). The advertiser at $x_{10}^S$ (resp. $x_{02}^S$) is indifferent between visiting platform 1 (resp. 2) and visiting both platforms. Similar to the case of singlehoming, we assume that $2 t_S t_B > (b_S)^2$ to satisfy the second-order condition under uniform pricing and price discrimination.

### Uniform price

We study the case in which platforms offer uniform prices to both groups. The locations of the marginal advertisers are given by
$$x_{10}^S = \frac{t_S + p_2^S - \theta - b_S(1 - n_1^B)}{t_S}, \qquad x_{02}^S = \frac{\theta + b_S(1 - n_2^B) - p_1^S}{t_S}.$$
The number of advertisers joining platform 1 (resp. 2) is $n_1^S = x_{02}^S$ (resp. $n_2^S = 1 - x_{10}^S$). Substituting these equations into the platform profit function and maximizing profits with respect to $p_i^S$, as consumers are singlehoming, we derive prices for advertisers and the market share as follows:
$$p_i^S = \frac{\theta + b_S n_i^B}{2}, \qquad n_i^S = \frac{\theta + b_S n_i^B}{2 t_S}.$$
Using these equations, the first-order conditions in the first stage are as follows:
$$\frac{\partial \pi_i^{UP}}{\partial p_i^B} = \frac{1}{2} - \frac{t_S(2 p_i^B - p_j^B - c)}{2 t_S t_B - b_S b_B} - \frac{b_S\left[ (2 t_S t_B - b_S b_B)(2\theta + b_S) - 2 t_S b_S(p_i^B - p_j^B) \right]}{4(2 t_S t_B - b_S b_B)^2} = 0.$$
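The marginal-advertiser locations above follow from equating the relevant utilities. A small check of those indifference conditions (my reconstruction of the utility expressions, with hypothetical parameter values):

```python
def u_join1(x, aS, bS, tS, n1B, p1S):
    # utility of singlehoming on platform 1 at location x
    return aS + bS*n1B - p1S - tS*x

def u_join2(x, aS, bS, tS, n2B, p2S):
    # utility of singlehoming on platform 2 at location x
    return aS + bS*n2B - p2S - tS*(1 - x)

def u_both(aS, theta, bS, tS, p1S, p2S):
    # a multihomer reaches all consumers (n1B + n2B = 1), pays both
    # prices, and incurs total transport cost tS*x + tS*(1-x) = tS
    return aS + theta + bS - p1S - p2S - tS

# marginal locations as stated in the text
def x10(tS, p2S, theta, bS, n1B):
    return (tS + p2S - theta - bS*(1 - n1B)) / tS

def x02(tS, p1S, theta, bS, n2B):
    return (theta + bS*(1 - n2B) - p1S) / tS
```

At $x_{10}^S$ the advertiser is indifferent between platform 1 alone and multihoming; at $x_{02}^S$, between platform 2 alone and multihoming.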
We obtain the equilibrium prices, the number of agents, and platform profits as follows:
$$p_i^S = \frac{2\theta + b_S}{4}, \qquad p_i^B = c + t_B - \frac{b_S(2\theta + b_S + 2 b_B)}{4 t_S}, \qquad n_i^S = \frac{2\theta + b_S}{4 t_S}, \qquad n_i^B = \frac{1}{2},$$
$$[46]\qquad \pi_i^{UP} = \frac{t_B}{2} + \frac{(2\theta + b_S)(2\theta - b_S) - 4 b_S b_B}{16 t_S}.$$

### Price discrimination

We consider the case in which platforms can price discriminate among advertisers. We identify the location $x_{10}^S$ (resp. $x_{02}^S$) at which an advertiser is indifferent between visiting platform 1 (resp. 2) and visiting both platforms, when platform 2 (resp. 1) sets a zero price. The locations of the marginal advertisers are given by
$$x_{10}^S = \frac{t_S - \theta - b_S(1 - n_1^B)}{t_S}, \qquad x_{02}^S = \frac{\theta + b_S(1 - n_2^B)}{t_S}.$$
The number of advertisers joining platform 1 (resp. 2) is again $n_1^S = x_{02}^S$ (resp. $n_2^S = 1 - x_{10}^S$). The advertisers located at $x_S \in [x_{10}^S, x_{02}^S]$ are indifferent between joining only one platform and both platforms. Platform 1's profit is given by
$$\pi_1^{PD} = \int_0^{(t_S - \theta - b_S(1 - n_1^B))/t_S} \left[ t_S(1 - 2x_S) + b_S(n_1^B - n_2^B) \right] dx_S + \int_{(t_S - \theta - b_S(1 - n_1^B))/t_S}^{(\theta + b_S(1 - n_2^B))/t_S} \left[ \theta + b_S(1 - n_2^B) - t_S x_S \right] dx_S + (p_1^B - c) n_1^B.$$
Similarly, we have platform 2's profit. Solving these equations, the first-order conditions in the first stage are
$$\frac{\partial \pi_i^{PD}}{\partial p_i^B} = \frac{1}{2} - \frac{t_S(2 p_i^B - p_j^B - c)}{2(t_S t_B - b_S b_B)} - \frac{t_S b_S\left[ (t_S t_B - b_S b_B) - b_S(p_i^B - p_j^B) \right]}{2(t_S t_B - b_S b_B)^2} = 0.$$
We obtain the equilibrium prices, the number of agents, and platform profits as follows:
$$p_1^S = t_S(1 - 2x_S), \quad p_2^S = 0, \quad \text{for } x_S \leq \frac{2 t_S - 2\theta - b_S}{2 t_S},$$
$$p_1^S = \theta + \frac{b_S}{2} - t_S x_S, \quad p_2^S = \theta + \frac{b_S}{2} - t_S(1 - x_S), \quad \text{for } x_S \in \left[ \frac{2 t_S - 2\theta - b_S}{2 t_S}, \frac{2\theta + b_S}{2 t_S} \right],$$
$$p_1^S = 0, \quad p_2^S = t_S(2 x_S - 1), \quad \text{for } x_S \geq \frac{2\theta + b_S}{2 t_S},$$
$$p_i^B = c + t_B - b_S - \frac{b_S b_B}{t_S}, \qquad n_i^B = \frac{1}{2},$$
$$[47]\qquad \pi_i^{PD} = \frac{t_S + t_B}{2} - b_S - \theta + \frac{(2\theta + b_S)^2 - 2 b_S b_B}{4 t_S}.$$
Here, we investigate the extent to which $\theta$ affects market share and profits:
$$\frac{\partial x_{10}^S}{\partial \theta} = -\frac{1}{t_S} < 0, \qquad \frac{\partial \pi_i^{PD}}{\partial \theta} = \frac{2\theta + b_S - t_S}{t_S} > 0.$$
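The comparative statics in $\theta$ can be verified with a finite difference against the profit expression in [47] (the closed form used below is my reading of the garbled source; a sketch only):

```python
def pi_PD_multi(tS, tB, bS, bB, theta):
    # reconstructed eq. [47]
    return (tS + tB)/2 - bS - theta + ((2*theta + bS)**2 - 2*bS*bB) / (4*tS)

def dpi_dtheta(tS, tB, bS, bB, theta, h=1e-6):
    # central finite difference in theta
    return (pi_PD_multi(tS, tB, bS, bB, theta + h)
            - pi_PD_multi(tS, tB, bS, bB, theta - h)) / (2*h)
```

The numerical derivative matches $(2\theta + b_S - t_S)/t_S$, which is positive exactly when $t_S < 2\theta + b_S$, i.e., when a multihoming region exists.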
As we assume that $x_{10}^S < x_{02}^S$, we have $t_S < 2\theta + b_S$. Profits under price discrimination increase with $\theta$.

### Comparison

Comparing profits under uniform pricing and price discrimination, we find that platform profits are larger under price discrimination than under uniform pricing if and only if
$$b_B < \frac{8 t_S(t_S - 2\theta - 2 b_S) + (2\theta + b_S)(6\theta + 5 b_S)}{4 b_S}.$$
We have difficulty directly interpreting this condition, so we divide platform profits under each pricing scheme into profits from advertisers and profits from consumers. First, we compare profits from advertisers under uniform pricing with those under price discrimination:
$$\pi^{UP,S} - \pi^{PD,S} = -\frac{3(2\theta + b_S)^2 - 8 t_S(2\theta + b_S) + 8(t_S)^2}{16 t_S} < 0.$$
When advertisers multihome, each platform behaves like a monopolist. If platforms can price discriminate, platforms extract all surplus (excluding transportation costs) from each advertiser. Therefore, platform profits from advertisers increase under price discrimination. Next, we compare profits from consumers under the two pricing schemes:
$$\pi^{UP,B} - \pi^{PD,B} = \frac{b_S(4 t_S - 2\theta - b_S + 2 b_B)}{8 t_S}.$$
If $b_B < (2\theta + b_S - 4 t_S)/2$, profits from consumers are larger under price discrimination than under uniform pricing.

# Acknowledgments

I would like to thank an editor and two anonymous referees, Hiroshi Aiura, Makoto Hanazono, Toshihiro Matsumura, and various seminar audiences for helpful comments. Financial support from AICA Kogyo is gratefully acknowledged.
Published Online: 2015-1-15 Published in Print: 2015-4-1
https://math.au.dk/aktuelt/aktiviteter/event/item/5841/
# Circuit Splits

Tommy R. Jensen

Seminar, Wednesday 27 June 2018, 13:15–14:15, in Koll. G (1532-214)

Abstract: Let $G$ be a graph and let $C$ be a circuit (a 2-regular connected subgraph) in $G$. Then a split of $C$ is an unordered pair $X$, $Y$ of circuits with the properties:

1. $E(C)$ is the symmetric difference of $E(X)$ with $E(Y)$, and
2. every path in $G$ from $V(X)$ to $V(Y)$ intersects $V(X) \cap V(Y)$.

We conjecture that if $G$ is a cubic 3-connected graph, then there exists a split of $C$. This conjecture is motivated by the Cycle Double Cover (CDC) conjecture of Szekeres and Seymour, and by the stronger Fixed Cycle Double Cover (FCDC) conjecture of Goddyn. If the Circuit Split (CS) conjecture holds, then the FCDC conjecture follows, and CDC with it. This talk mentions some observations on the CS conjecture and a further strengthening, the Fixed Path Circuit Split (FPCS) conjecture. An interesting feature of the FPCS conjecture is that it may be viewed as a combinatorial (and non-planar) version of the Jordan Curve Theorem; thus if true, it may allow additional applications.

T.R. Jensen, Splits of Circuits, Discrete Mathematics 310, 3026–3029, 2010.

Contact person: Anders Nedergaard Jensen
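A concrete instance of the definition is easy to verify by hand or in code. In $K_4$ (which is cubic and 3-connected), the triangle $C$ on vertices $\{1,2,3\}$ splits into the 4-cycle $X = 1\,2\,4\,3$ and the triangle $Y$ on $\{2,3,4\}$: their edge sets have symmetric difference $E(C)$, and since $V(Y) \subseteq V(X) \cap V(Y)$ the path condition holds trivially. A small Python check (my own illustration, not from the talk):

```python
def is_circuit(edges):
    """A circuit is a connected 2-regular subgraph."""
    deg = {}
    for e in edges:
        for v in e:
            deg[v] = deg.get(v, 0) + 1
    if any(d != 2 for d in deg.values()):
        return False
    # connectivity: walk from an arbitrary vertex
    verts = set(deg)
    seen, stack = set(), [next(iter(verts))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        for e in edges:
            if v in e:
                stack.extend(e - {v})
    return seen == verts

E = lambda *pairs: {frozenset(p) for p in pairs}
C = E((1, 2), (2, 3), (3, 1))          # circuit to be split
X = E((1, 2), (2, 4), (4, 3), (3, 1))  # 4-cycle 1-2-4-3
Y = E((2, 3), (3, 4), (4, 2))          # triangle on {2,3,4}
```

Here `X ^ Y` (set symmetric difference) recovers exactly the edge set of `C`, so `X, Y` is a split of `C`.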
https://www.physicsforums.com/threads/proving-inf-st-inf-s-inf-t.514679/
# Proving inf(ST) = inf(S)*inf(T)

1. Jul 17, 2011

### faceblah

1. The problem statement, all variables and given/known data

Sets S and T are sets of positive real numbers. Define C = {st | s \in S and t \in T}. Prove inf(C) = inf(S)*inf(T).

3. The attempt at a solution

So I'm trying to prove inf(C) <= inf(S)*inf(T) and inf(C) >= inf(S)*inf(T). I'll use e as epsilon; epsilon is positive.

By definition there is an s in S such that: s < inf S + e. There is a t in T such that: t < inf T + e. Additionally, inf(C) <= st for all s in S and t in T by definition.

st < (inf S + e)(inf T + e) = (inf S)(inf T) + (inf S)*e + (inf T)*e + e^2

inf C <= st <= (inf S)(inf T) < (inf S)(inf T) + (inf S)*e + (inf T)*e + e^2

(NOTE: I'm not sure about the middle inequality: "st <= (inf S)(inf T)")

2. Jul 17, 2011

### micromass

No, the middle inequality is indeed not correct. Try it like this:

$$inf(C)\leq st<inf(S)inf(T)+inf(S)e+inf(T)e+e^2$$

Because e is arbitrary, we can let e go to 0, thus

$$inf(C)<inf(S)inf(T)$$

For the other inequality, take $st<inf(C)+e$ and do something with it.

3. Jul 17, 2011

### Hurkyl

Staff Emeritus

You're right to be unsure. You have

• $\inf C \leq st$
• $st < (\inf S)(\inf T) + (\inf S) \epsilon + (\inf T) \epsilon + \epsilon^2$

So you just put the two together:

$\inf C \leq st < (\inf S)(\inf T) + (\inf S) \epsilon + (\inf T) \epsilon + \epsilon^2$

well, what you really care about is just transitivity:

$\inf C < (\inf S)(\inf T) + (\inf S) \epsilon + (\inf T) \epsilon + \epsilon^2$

And then invoke what you can about the fact that this is true for every positive real number $\epsilon$.

(aside: micromass forgot that < turns into $\leq$ when doing limits)

4. Jul 17, 2011

### micromass

Oh my, I'm still not fully awake

5. Jul 17, 2011

### faceblah

For the other direction I'm guessing it's: There is an st in C such that st < inf C + e. By definition, inf S <= s for all s and inf T <= t for all t. So (inf S)(inf T) <= st.
So we have (inf C + e) > st >= (inf S)(inf T). So (inf C + e) > (inf S)(inf T). (I need this to be a >= though.) I'm not quite sure what you mean by "invoking what you know about e". Does this mean that the ">" becomes a ">="?
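For finite sets of positive reals the claim can be checked directly, since inf = min and the minimum of pairwise products of positive numbers is attained at the two minima; the ε-argument in the thread is what extends this to arbitrary sets bounded below. A quick numeric illustration:

```python
import random

def inf_product_check(S, T):
    # C = {s*t : s in S, t in T}; for finite sets inf = min
    C = [s * t for s in S for t in T]
    return min(C) == min(S) * min(T)

random.seed(0)
S = [random.uniform(0.1, 10) for _ in range(50)]
T = [random.uniform(0.1, 10) for _ in range(50)]
```

Positivity matters: with negative elements, e.g. S = T = {-1, 2}, min(C) = -2 while min(S)*min(T) = 1, so the identity fails.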
http://trilinos.sandia.gov/packages/docs/r10.12/packages/intrepid/doc/html/cell_tools_page.html
Intrepid Cell tools

## Cell topologies

The range of admissible cell shapes in Intrepid is restricted to d-dimensional polytopes, d=1,2,3. A polytope is defined by a set of its vertices and a base topology BT that defines how these vertices are connected into k-dimensional, k < d facets (k-subcells) of that polytope. The base topology of any polytope can be extended by augmenting the set of its vertices by an additional set of points. The extended topology ET is defined by specifying the connectivity of this set relative to the subcells specified by the base topology BT. The vertices and the extra points are collectively referred to as nodes. Thus, a polytope with extended topology ET is defined by a set of nodes, comprising the vertices plus the additional points, and a connectivity rule for these nodes. Intrepid requires any cell to have a valid base topology. The nodes of the cell should always be ordered by listing its vertices first. To manage cell topologies Intrepid uses the Shards package http://trilinos.sandia.gov/packages/shards . Shards provides definitions for a standard set of base and extended cell topologies plus tools to construct custom, user-defined cell topologies, such as arbitrary polyhedral cells. For further details see the Shards documentation.

## Reference cells

For some cell topologies there exist simple, e.g., polynomial, mappings that allow one to obtain any cell having that topology as an image of a single "standard" cell. We refer to such standard cells as reference cells. Just like in the general case, a reference cell with a base topology BT is defined by a set of vertices, and a reference cell with extended topology ET is defined by a set of nodes that include the original vertices and some additional points.
The actual vertex and node coordinates for the reference cells can be chosen arbitrarily; however, once selected they should not be changed because in many cases, e.g., in finite element reconstructions, all calculations are done on a reference cell and then transformed to physical cells by an appropriate pullback (see Section Pullbacks). In Intrepid base and extended reference cell topologies are defined using the following selections of vertex and node coordinates: |=======================|==============================================================================| | Topology family | reference cell vertices/additional nodes defining extended topology | |=======================|==============================================================================| | Line<2> | | | Beam<2> | {(-1,0,0),(1,0,0)} | | ShellLine<2> | | |-----------------------|------------------------------------------------------------------------------| | Line<3> | | | Beam<3> | {(0,0,0)} | | ShellLine<3> | | |=======================|==============================================================================| | Triangle<3> | {(0,0,0),(1,0,0),(0,1,0)} | | ShellTriangle<3> | | |-----------------------|------------------------------------------------------------------------------| | Triangle<4> | {(1/3,1/3,0)} | |.......................|..............................................................................| | Triangle<6> | {(1/2,0,0),(1/2,1/2,0),(0,1/2,0)} | | ShellTriangle<6> | | |=======================|==============================================================================| | Quadrilateral<4> | {(-1,-1,0),(1,-1,0), (1,1,0),(-1,1,0)} | |-----------------------|------------------------------------------------------------------------------| |.......................|..............................................................................| |=======================|==============================================================================| | Tetrahedron<4> | 
{(0,0,0),(1,0,0),(0,1,0),(0,0,1)} | |-----------------------|------------------------------------------------------------------------------| | Tetrahedron<8> | {(1/2,0,0),(1/2,1/2,0),(0,1/2,0),(1/3,1/3,1/3)} | | Tetrahedron<10> | {(1/2,0,0),(1/2,1/2,0),(0,1/2,0),(0,0,1/2),(1/2,0,1/2),(0,1/2,1/2)} | |=======================|==============================================================================| | Pyramid<5> | {(-1,-1,0),(1,-1,0),(1,1,0),(-1,1,0),(0,0,1)} | |-----------------------|------------------------------------------------------------------------------| | Pyramid<13> | {(0,-1,0),(1,0,0),(0,1,0),(-1,0,0), 1/2((-1,-1,1),(1,-1,1),(1,1,1),(-1,1,1))}| | Pyramid<14> | all of the above and (0,0,0) | |=======================|==============================================================================| | Wedge<6> | {(0,0,-1),(1,0,-1),(0,1,-1),(0,0,1),(1,0,1),(0,1,1)} | |-----------------------|------------------------------------------------------------------------------| | Wedge<15> | {(1/2,0,-1),(1/2,1/2,-1),(0,1/2,-1), (0,0,0),(1,0,0),(0,1,0), | | | (1/2,0, 1),(1/2,1/2, 1),(0,1/2, 1) | |.......................|..............................................................................| | Wedge<18> | All of the above plus {(1/2,0,0),(1/2,1/2,0),(0,1/2,0)} | |=======================|==============================================================================| | Hexahedron<8> | {(-1,-1,-1),(1,-1,-1),(1,1,-1),(-1,1,-1),(-1,-1,1),(1,-1,1),(1,1,1),(-1,1,1)}| |-----------------------|------------------------------------------------------------------------------| | Hexahedron<20> | {(0,-1,-1),(1,0,-1),(0,1,-1),(-1,0,-1), (0,-1,0),(1,0,0),(0,1,0),(-1,0,0), | | | (0,-1, 1),(1,0, 1),(0,1, 1),(-1,0, 1) } | |.......................|..............................................................................| | Hexahedron<27> | All of the above plus center point and face midpoints: | | | {(0,0,0), (0,0,-1),(0,0,1), (-1,0,0),(1,0,0), (0,-1,0),(0,1,0)} | 
|=======================|==============================================================================|

Finite element reconstruction methods based on pullbacks (see Section Pullbacks) are restricted to the above cell topologies.

### Reference-to-physical cell mapping

The mapping that takes a given reference cell to a physical cell with the same topology is defined using a nodal Lagrangian basis corresponding to the nodes of the reference cell. In other words, the mapping is constructed using basis functions that are dual to the nodes of the reference cell. Implementation details are as follows. Assume a reference cell with topology T and a given set of nodes, and let the Lagrangian basis be dual to these nodes, i.e., the m-th basis function equals 1 at the m-th node and 0 at all other nodes. A physical cell with the same topology T is then defined as the image of the reference cell under the mapping F determined by the set of physical nodes that defines the cell. The number of physical nodes is required to match the number of reference nodes in the specified cell topology T. The i-th coordinate function of the reference-to-physical mapping is the linear combination of the basis functions weighted by the i-th spatial coordinates of the physical nodes. For simplicity, unless there's a chance for confusion, the cell symbol will be omitted from the designations of physical points and reference-to-physical maps.

Summary

Warning: Intrepid::CellTools does not check for non-degeneracy of the physical cell obtained from a given set of physical nodes. As a result, F is not guaranteed to be a diffeomorphism, i.e., it may not have a continuously differentiable inverse. In this case some Intrepid::CellTools methods, such as Intrepid::CellTools::setJacobianInv, and Intrepid::CellTools::mapToReferenceFrame will fail.
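For the lowest-order triangle the construction above is short enough to write out: the three affine basis functions are dual to the reference vertices ($\varphi_m(v_n) = \delta_{mn}$), and the reference-to-physical map is $F(x) = \sum_m \varphi_m(x)\, x_m$, with $x_m$ the physical nodes. A standalone sketch (plain Python, not Intrepid code):

```python
# P1 Lagrangian basis on the reference triangle {(0,0), (1,0), (0,1)}
BASIS = [
    lambda x, y: 1.0 - x - y,  # dual to vertex (0,0)
    lambda x, y: x,            # dual to vertex (1,0)
    lambda x, y: y,            # dual to vertex (0,1)
]
REF_NODES = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
# hypothetical physical triangle for illustration
PHYS_NODES = [(1.0, 1.0), (3.0, 2.0), (2.0, 4.0)]

def map_to_physical(ref_pt, phys_nodes):
    """F(ref_pt) = sum_m phi_m(ref_pt) * phys_node_m."""
    x, y = ref_pt
    w = [phi(x, y) for phi in BASIS]
    return tuple(sum(w[m] * phys_nodes[m][i] for m in range(3)) for i in range(2))
```

Duality guarantees that reference nodes map exactly onto the corresponding physical nodes, which is the defining property of the nodal construction.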
### Jacobian of the reference-to-physical cell mapping

Intrepid follows the convention that the rows of the Jacobian are the transposed gradients of the coordinate functions of the mapping, i.e., the (i,j) entry is the partial derivative of the i-th coordinate function with respect to the j-th reference coordinate. In light of the definition of F in Section Reference-to-physical cell mapping, the Jacobian entries follow by differentiating the nodal basis functions.

Summary

### Parametrization of physical 1- and 2-subcells

A parametrization of a given physical k-subcell, k=1,2, is a map from a k-dimensional parametrization domain R to that subcell. Parametrization domains play a role similar to that of reference cells in the sense that they allow computation of line and surface integrals on 1- and 2-subcells (edges and faces) to be reduced to computation of integrals on R.

Parametrization maps are supported for 1- and 2-subcells (edges and faces) that belong to physical cells having reference cells. The reason is that these maps are defined by the composition of the parametrization maps for reference edges and faces with the mapping F defined in Section Reference-to-physical cell mapping. As a result, parametrization of a given physical k-subcell requires selection of a parent cell that contains the subcell.

Remarks: Because a given k-subcell may belong to more than one physical cell, its parent cell is not unique. For a single k-subcell the choice of a parent cell is not important; however, when dealing with subcell worksets parent cells must all have the same topology (see Subcell worksets for details about subcell worksets).

Implementation of subcell parametrization is as follows. Assume that the k-subcell has a parent cell with an associated reference cell, and that i is the local ordinal of the subcell relative to the reference cell. To this physical subcell corresponds a reference subcell having the same local ordinal.
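Under the row convention above, $J_{ij} = \partial F_i / \partial x_j$; for an affine triangle map the Jacobian is the constant matrix whose columns are the edge vectors $x_1 - x_0$ and $x_2 - x_0$. A quick finite-difference check against a P1 map (again a standalone sketch, with hypothetical node coordinates):

```python
def p1_map(xy, nodes):
    # affine map of the reference triangle {(0,0),(1,0),(0,1)}
    x, y = xy
    w = (1.0 - x - y, x, y)
    return tuple(sum(w[m]*nodes[m][i] for m in range(3)) for i in range(2))

def jacobian_fd(xy, nodes, h=1e-6):
    """J[i][j] = dF_i/dx_j by central differences (rows = transposed gradients)."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        up, dn = list(xy), list(xy)
        up[j] += h
        dn[j] -= h
        Fu, Fd = p1_map(up, nodes), p1_map(dn, nodes)
        for i in range(2):
            J[i][j] = (Fu[i] - Fd[i]) / (2*h)
    return J
```

For nodes listed counterclockwise the determinant is positive, i.e., the map is non-degenerate; this is exactly the property that the warning above says Intrepid::CellTools does not check for you.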
Parametrization of the reference k-subcell $\hat{\sigma}_i$ is a map from the k-dimensional parametrization domain R to that subcell:

$$\hat{\Phi}_i : R \to \hat{\sigma}_i.$$

Parametrization of $\sigma$ is then defined as

$$\Phi = F \circ \hat{\Phi}_i,$$

where F is the reference-to-physical mapping between the parent cell and its reference cell.

A 1-subcell (edge) always has Line<N> topology, and so the parametrization domain for edges is the standard 1-cube:

$$R = [-1,1].$$

On the other hand, faces of reference cells can have Triangle<N> and/or Quadrilateral<N> topologies. Thus, the parametrization domain for a 2-subcell depends on its topology and is either the standard 2-simplex or the standard 2-cube:

$$R = \{(u,v) : 0 \le u,\, 0 \le v,\, u+v \le 1\} \quad\text{or}\quad R = [-1,1]\times[-1,1].$$

### Subcell worksets

A subcell workset comprises 1- or 2-subcells and associated parent cells that satisfy the following conditions:

• all subcells have the same cell topology;
• all parent cells have the same cell topology;
• the parent cell topology has a reference cell;
• relative to that reference cell, all subcells in the workset have the same local ordinal.

Therefore, a subcell workset is defined by

1. collecting a set of 1- or 2-subcells having the same topology
2. selecting a parent cell for every subcell in such a way that
   1. all parent cells have the same cell topology
   2. all subcells in the workset have the same local ordinal relative to the parent cell topology

Obviously, a subcell can have multiple parent cells. For example, in a mesh consisting of Triangle<3> cells, every edge is shared by 2 triangle cells. To define an edge workset we can use either one of the two triangles sharing the edge. Suppose now that the mesh comprises Triangle<3> and Quadrilateral<4> cells and we want to define an edge workset. Say the first few edges in our workset happen to be shared by 2 triangles, and so we select one of them as the parent cell. Now suppose the next edge is shared by a triangle and a quadrilateral. Because all parent cells in the workset must have the same cell topology, we cannot use the quadrilateral as a parent cell, and so we choose the triangle.
Finally, suppose that one of the candidate edges for our workset is shared by 2 quadrilaterals. Because of the requirement that all parent cells have the same topology, we will have to reject this edge: it does not possess a potential parent cell with the same topology as the rest of the edges in our workset.

A subcell workset is denoted by $\{\sigma_{c,i}\}$, where

• c is the parent cell ordinal;
• i is the local subcell ordinal (relative to the topology of the parent cell) shared by all subcells in the workset.
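The parent-cell selection rules above can be sketched as a small filter (the mesh data, cell ids, and topology strings below are hypothetical):

```python
# Build an edge workset: keep only edges that have a candidate parent cell
# with the required topology AND the required local edge ordinal.
def build_edge_workset(edge_parents, topology, local_ordinal):
    workset = []
    for edge, candidates in edge_parents.items():
        for cell, cell_topology, ordinal in candidates:
            if cell_topology == topology and ordinal == local_ordinal:
                workset.append((edge, cell))  # (subcell, chosen parent cell)
                break                         # one parent per edge is enough
    return workset

# Hypothetical mesh: each edge lists (parent cell, parent topology, local ordinal).
edge_parents = {
    "e1": [("c1", "Triangle<3>", 0), ("c2", "Triangle<3>", 2)],
    "e2": [("c2", "Triangle<3>", 0), ("c3", "Quadrilateral<4>", 1)],
    "e3": [("c3", "Quadrilateral<4>", 3), ("c4", "Quadrilateral<4>", 0)],
}

# e3 is rejected: it has no Triangle<3> parent with local ordinal 0.
print(build_edge_workset(edge_parents, "Triangle<3>", 0))  # [('e1', 'c1'), ('e2', 'c2')]
```

Real meshes would of course drive this from connectivity data, but the workset invariants (same subcell topology, same parent topology, same local ordinal) are exactly the two checks in the inner loop.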
https://www.originlab.com/doc/Origin-Help/ANOVA-CRD
# 17.4.1 One, Two, and Three Way ANOVA

See also the related video: Analysis of Variance (ANOVA)

## Introduction

The factorial ANOVA models consider a completely randomized design for an experiment. Origin supports the following factorial ANOVA models:

| Design | Details |
|--------|---------|
| One-way | Compares three or more levels within one factor. |
| Two-way | Compares the effect of multiple levels of two factors; used to analyze the main effects of, and interactions between, two factors. |
| Three-way (Pro Only) | Tests for interaction effects between three independent variables on a continuous dependent variable (i.e., whether a three-way interaction exists). |

In addition to the analysis of variance, Origin also supports various methods for means comparison and actual and hypothetical power analysis.

## Assumptions

The ANOVA model has the following assumptions:

• Independence: The sample cases should be independent of each other. Otherwise you will need to use another ANOVA model, such as the repeated measures ANOVA.
• Normality: Data values of each combination of the groups should come from a normal distribution. We can use a normality test to verify this. Note, however, that violations of normality are usually not "fatal": even if the data fail a normality test, you may still continue the ANOVA analysis if you have a large sample size.
• Homogeneity: The variance between the groups should be equal. You can use the Homogeneity Tests (Levene's Test) to verify it. If the assumption is not satisfied, there are several options to consider, including elimination of outliers or data transformation. However, ANOVA is robust to the violation of this assumption, and you may continue the study if the group sizes are equal.

## Processing Procedure

### Preparing Analysis Data

• Continuous data: Data of the dependent variable should be continuous.
• Independent random sample (no outliers): The sample cases should be independent of one another, i.e., no repeated measures or matched pairs data.
In addition, the ANOVA model is sensitive to the inclusion of outliers. To detect outliers, we can use box plots or outlier tests (Grubbs' Test and Dixon's Q-Test) and exclude them from the data.

### Verifying Assumptions

The normality test and the Homogeneity Tests (Levene's Test) can be used to verify the assumptions. Please see Assumptions for more information.

### Selecting Mean Comparison Methods

Multiple comparison procedures are commonly used in an ANOVA after obtaining a significant omnibus test result. The significant ANOVA result suggests that the global null hypothesis, H0, is rejected; the H0 hypothesis states that the means are the same across the groups being compared. We can then use multiple comparisons to determine which means differ. Origin provides eight different methods for means comparison, including Tukey, Bonferroni, Dunn-Sidak, Fisher LSD, Scheffe, Holm-Bonferroni, and Holm-Sidak.

The Tukey method controls the overall Type I error. When Tukey is used, the overall confidence level is $1-\alpha$ with equal sample sizes; that is, the risk of a Type I error is exactly $\alpha$, while for unequal sample sizes the risk of a Type I error is less than $\alpha$.

The Bonferroni method controls the overall Type I error and is more conservative than Tukey. The method is commonly used for all pairwise comparisons tests.

Fisher's LSD test does not control the overall Type I error. Therefore, it should only be used after a significant overall F-test and for a small number of comparisons.

When the number of comparisons is small, Scheffé is very conservative (even more so than Bonferroni). Scheffé is more powerful in cases of complex multiple comparisons, so it is used for complex multiple comparisons.

The Holm-Bonferroni method is more powerful than the Dunnett test method, especially when the number of comparisons is large, and it is less conservative and more powerful than the Bonferroni method.
Hence you have more chances to reject null hypotheses with the Holm-Bonferroni method. The Holm-Sidak method is more powerful than the Holm test; however, it cannot be used to compute a set of confidence intervals.

### Power Analysis

The power analysis procedure calculates the actual power for the sample data, which tells you the percent chance of detecting a difference. It also helps you calculate the hypothetical power if additional sample sizes are specified.

## Handling Missing Values

Missing values in the data range are excluded from the analysis. From Origin 2015, missing values in the grouping range and the corresponding data values are also excluded from the analysis; in previous versions, missing values in the grouping range were treated as a group.
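As a minimal illustration of the one-way model (pure Python with made-up data; Origin computes the same F statistic internally before applying the means-comparison methods above):

```python
# One-way ANOVA F statistic: ratio of between-group to within-group mean squares.
def one_way_anova_F(groups):
    N = sum(len(g) for g in groups)          # total number of observations
    k = len(groups)                          # number of factor levels
    grand_mean = sum(sum(g) for g in groups) / N
    # Between-group sum of squares (df = k - 1)
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (df = N - k)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (N - k))

# Three hypothetical treatment groups (levels of one factor).
groups = [[1, 2, 3], [2, 3, 4], [5, 6, 7]]
print(one_way_anova_F(groups))  # 13.0 -> compare against an F(2, 6) table for the p-value
```

A large F relative to the F(k-1, N-k) distribution is what triggers the "significant omnibus test" that the multiple-comparison procedures then follow up on.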
https://www.gerad.ca/en/papers/G-92-26
# A Strong Heuristic Algorithm for Maximum Likelihood Estimation of the 3-Parameter Weibull Distribution

## E. Gourdin, Pierre Hansen, and Brigitte Jaumard

In a previous paper, a global optimization algorithm, called MLEW, was provided for finding Maximum Likelihood Estimators for the three-parameter Weibull distribution. A conjecture of Rockette, Antle and Klimko (1974) states that the log-likelihood function of the three-parameter Weibull distribution never has more than two stationary points. Assuming this conjecture to be true, we propose in this paper an improved version of algorithm MLEW, called MLEWh. This last algorithm is heuristic, due to the assumption made, but no sample has been found for which it failed to find a globally optimal solution. Moreover, an extensive empirical comparison is made between algorithms MLEW and MLEWh.

19 pages. This cahier was revised in February 1993.
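The objective being maximized can be sketched as follows (a pure-Python log-likelihood for the 3-parameter Weibull, not the MLEW/MLEWh algorithms themselves; the sample data are synthetic):

```python
import math

# Log-likelihood of a 3-parameter Weibull(shape k, scale lam, location theta).
# Density: (k/lam) * ((x-theta)/lam)**(k-1) * exp(-((x-theta)/lam)**k) for x > theta.
def weibull3_loglik(k, lam, theta, data):
    if any(x <= theta for x in data):
        return float("-inf")          # the location must lie below every sample point
    ll = 0.0
    for x in data:
        z = (x - theta) / lam
        ll += math.log(k / lam) + (k - 1) * math.log(z) - z ** k
    return ll

# Synthetic sample drawn (by inverse CDF) from Weibull(k=2, lam=1, theta=5).
us = [0.1, 0.3, 0.5, 0.7, 0.9]
data = [5 + (-math.log(1 - u)) ** 0.5 for u in us]

# The true parameters score better than two deliberately wrong guesses.
good = weibull3_loglik(2.0, 1.0, 5.0, data)
print(good > weibull3_loglik(2.0, 1.0, 0.0, data))  # True (wrong location)
print(good > weibull3_loglik(1.0, 1.0, 5.0, data))  # True (wrong shape)
```

The hard part, which MLEW/MLEWh address, is that this surface can have multiple stationary points in (k, lam, theta), so local search alone does not certify a global maximizer.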
https://math.stackexchange.com/questions/3148348/what-are-the-possible-solutions-of-xy-1-over-x1-over-y4-2-sqrt-2x1
# What are the possible solutions of $x+y+ {1\over x}+{1\over y}+4=2 (\sqrt {2x+1}+\sqrt {2y+1})$?

I encountered a question in an exam in which we had:

Find all possible solutions of the equation $$x+y+ {1\over x}+{1\over y}+4=2 (\sqrt {2x+1}+\sqrt {2y+1})$$ where $$x$$ and $$y$$ are real numbers.

I tried squaring both sides to eliminate the square roots, but the number of terms became too large, making the problem very difficult to handle. I cannot see an easier approach or a way to handle the terms efficiently. Would someone please help me solve this question?

It's $$\sum_{cyc}\left(x+\frac{1}{x}+2-2\sqrt{2x+1}\right)=0$$ or $$\sum_{cyc}\frac{x^2-2x\sqrt{2x+1}+2x+1}{x}=0$$ or $$\sum_{cyc}\frac{(x-\sqrt{2x+1})^2}{x}=0,$$ which for $$xy<0$$ gives infinitely many solutions. But for $$xy>0$$ we obtain: $$x=\sqrt{2x+1}$$ and $$y=\sqrt{2y+1},$$ which gives $$x=y=1+\sqrt2.$$

• Hmm, this is the first time I have seen the cyclic sum notation – Jan Tojnar Mar 15 at 1:00
• How can we show that the individual summands cannot be non-zero ($x = -y$)? – Jan Tojnar Mar 15 at 1:12
• Thank you Michael! – Shashwat1337 Mar 15 at 7:40
• You are welcome! – Michael Rozenberg Mar 15 at 7:41
• @Jan Tojnar If $y=-x$ we obtain $1=\sqrt{1-4x^2}$ or $x=0,$ which is impossible. – Michael Rozenberg Mar 15 at 7:51

This solution works only for positive $$x,y$$. However, they cannot both be negative, since then the LHS is at most $$0$$. So $$x+y+ {1\over x}+{1\over y}+4 = x+y+{2x+1\over x}+{2y+1\over y}$$ By AM-GM we have $$x+{2x+1\over x}\geq 2\sqrt{x{2x+1\over x}} = 2\sqrt{2x+1}$$ and the same for $$y$$, so we have $$x+y+ {1\over x}+{1\over y}+4 \geq 2\sqrt{{2x+1}}+2\sqrt{{2y+1}}$$ Since equality is achieved when $$x={2x+1\over x}$$ (and the same for $$y$$), we have $$x=y=1+\sqrt{2}$$

• AM-GM requires the terms to be positive, so your solution doesn't account for the case when terms are negative.
– Anurag A Mar 14 at 19:04
• @Maria Mazur Yes, it appears to be wrong, as AM-GM only applies to positive real numbers. But still, thanks for providing me with an alternative solution for positive numbers. – Shashwat1337 Mar 15 at 7:46
• @Maria Mazur After your fix I deleted my previous comment; now your statement is true. – Michael Rozenberg Mar 15 at 13:23
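The claimed solution can be checked numerically (a quick sanity check, not part of the original answers):

```python
import math

def lhs(x, y):
    return x + y + 1 / x + 1 / y + 4

def rhs(x, y):
    return 2 * (math.sqrt(2 * x + 1) + math.sqrt(2 * y + 1))

s = 1 + math.sqrt(2)   # claimed solution x = y = 1 + sqrt(2)

# Both sides equal 4(1 + sqrt(2)); note 2s + 1 = (1 + sqrt(2))**2, so the roots are exact.
print(abs(lhs(s, s) - rhs(s, s)) < 1e-12)   # True

# AM-GM gives LHS >= RHS for positive x, y, with equality only at the point above.
print(lhs(1.0, 1.0) > rhs(1.0, 1.0))        # True
```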
https://www.physicsforums.com/threads/calculating-impact-force-of-a-block-accelerated-by-a-spring.595685/
# Calculating impact force of a block accelerated by a spring

1. Apr 11, 2012

### superman22x

I'm working on a small design project and need some help remembering which equations to use here. Basically, we are creating a hammering device. A compression spring will be loaded, and a block on its end is accelerated into another block. I need it to hit with a certain impact force, so I am designing the spring to match this. I know the force of the spring: F = k(dx). And the total energy equation: TE = 0.5*k*(dx)^2. But I'm not really sure where to go from there, since the spring will be accelerating the mass as it uncompresses.

2. Apr 11, 2012

### haruspex

With hammering, pile-driving etc., you're not expecting energy to be conserved. Neither are you much concerned with forces, as such. What you want is impulse (momentum). It's usually a reasonable approximation to assume negligible recoil. The block, mass M1, strikes the target, mass M2, at speed U. They then continue together at speed V (at first). By conservation of momentum, M1.U = (M1+M2).V. The two masses together then have to overcome some resistive force as they travel on. The distance they travel now depends on their combined kinetic energy, (M1+M2).V^2/2 = (M1.U)^2/(2(M1+M2)). I assume you want to optimise the distance the block travels before impact for the maximum travel after. The further it goes towards the point at which the spring is fully relaxed, the harder it will strike. On the other hand, if it goes past the point where the spring is completely relaxed, the spring will start to act the other way and reduce the distance the target is driven. (Conversely, to the extent that impact is earlier, the spring will assist the travel after impact.) If you can't figure out the calculus I can help, but I suspect the best answer is to have a slack tie in series with the spring. This would allow the spring to expand fully before impact but not inhibit the subsequent travel.
Its sole function would be to allow the block to be drawn back into loading position.

3. Apr 12, 2012

### superman22x

We aren't really looking for the maximum impact; we just need to hit with at least 100 N. And we are assuming block 2 is stationary, even when impacted.

4. Apr 12, 2012

### haruspex

A Newton is a unit of force. Impact (momentum) is usually expressed in kg m/s or, equivalently, N.s (Newton-seconds). To ask that it strike with a given force is meaningless. If you want the impact to be, say, 100 kg m/s and the mass is 1 kg then you need the spring to accelerate it to 100 m/s, giving it a kinetic energy of 5000 J.

5. Apr 13, 2012

### sookw

The project may look simple, but actually it is quite impossible to determine the impact force. Even if you have two bodies that collide in space, thus neglecting all friction and drag forces, it is still difficult to predict the impact force. I guess you may have to experiment with your project and fine-tune it to get the required impact force, and I suspect that the impact force may still vary with each test.

6. Apr 13, 2012

### haruspex

As I said, impact and force are two different things. You can't determine the force, but you can determine the impact (change in momentum). That's fortunate, because in a hammer it's the impact that matters.

7. Apr 13, 2012

### Staff: Mentor

Who decided on that constraint? The point people are trying to convey is that it is a poor constraint.

8. Apr 13, 2012

### haruspex

Quite so. Perhaps I can make it even clearer. The force will follow some function over a short period of time, rising from 0 to a peak and falling away again. For the purpose of hammering a nail or driving a pile you hardly care about the shape of the function: what matters is the momentum change (impulse), the area under the curve, $\int F\,dt$. This is simply the mass of the hammer multiplied by its velocity on impact. The shape will depend on what is struck.
Hit an egg and the function will be very tall and narrow; hit a rolled-up blanket and it will be much broader and lower. Hence it is meaningless to ask what force the hammer will deliver; it will deliver a range of forces over the duration of the impact, the details depending on the materials of the hammer and the target. It does become interesting if what you care about is whether the target will survive the impact. In that case you'd like to know the peak force. The shape of the curve may also be of some interest in unusual cases for hammer and nail. The hammer blow achieves nothing until the force rises above the resistance of the nail's substrate. So, strictly speaking, it's the integral starting at the point in time where this threshold force is reached; the little bit of lead-in to that point is wasted. Normally this is negligible, but it explains why a light tap might get you nowhere. Wherever you got the 100N requirement from, go back and ask for a requirement that means something.

9. Apr 14, 2012

### superman22x

It's a constraint used to build buildings. It's called the Michigan Soil Test. We are building a mechanism to perform it, rather than a person hitting a can of soil on a board as is currently done. Our project says the force of impact should be between 40-140 N, so we aimed at 100 N.

10. Apr 14, 2012

### haruspex

The Michigan Soil Test specifies that, or is this an interpretation by the person who designed the project? Can't find any more details online. Do you have a link for this?
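haruspex's bookkeeping (stored energy, then impact speed, then momentum, then post-impact motion) can be sketched numerically with illustrative made-up values:

```python
import math

# Spring-driven hammer: energy -> impact speed -> momentum -> post-impact motion.
k, dx = 1000.0, 0.1      # spring constant (N/m) and compression (m), made-up values
m1, m2 = 1.0, 4.0        # hammer mass and target mass (kg), made-up values

E = 0.5 * k * dx ** 2            # stored energy: 5 J
u = math.sqrt(2 * E / m1)        # impact speed from (1/2) m1 u^2 = E
p = m1 * u                       # momentum at impact (kg m/s) -- the meaningful "impact"
v = p / (m1 + m2)                # common speed after impact (momentum conserved)
ke_after = 0.5 * (m1 + m2) * v ** 2   # energy left to overcome the soil's resistance

print(round(p, 3), round(ke_after, 3))   # 3.162 1.0 -> most of the 5 J is lost in impact
```

The gap between E = 5 J and ke_after = 1 J is the thread's central point: the collision conserves momentum, not energy, which is why specifying an "impact force" in Newtons is ill-posed.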
http://math.stackexchange.com/questions/101527/transversal-of-a-family-of-sets
Transversal of a family of sets

What I want to prove is:

Let $U=(A_1,\ldots,A_n)$ be a family of sets and let $P\subseteq A_1\cup \cdots \cup A_n$. Then $U$ has a transversal which includes the set $P$ if and only if (i) $U$ has a transversal and (ii) $|P\setminus (\cup_{i\in I}A_i)|\le n-|I|$ for any $I\subseteq \{1,\ldots,n\}$.

A transversal of $U$ is a set $X$ with $|X|=n$ which can have its elements arranged in a certain order, $X=\{a_1,\ldots,a_n\}$ say, so that $a_1\in A_1,\ldots,a_n\in A_n$; in other words, the $n$ distinct elements of $X$ 'represent' the sets.

I think the $\Rightarrow$ direction is quite easy to prove, but for the converse I don't know how to start the proof. Does anyone have an idea how to prove this statement?

This is an extension of the classical Hall marriage theorem. It should be in Leonid Mirsky's Transversal theory: an account of some aspects of combinatorial mathematics (a whole book on the subject!) – Mariano Suárez-Alvarez Jan 23 '12 at 4:13
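For small families the definition can be checked by brute force (an illustrative sketch for building intuition, not a proof technique):

```python
from itertools import permutations

# A transversal of (A_1, ..., A_n) is a system of n DISTINCT representatives
# with a_i in A_i; try every ordered choice of n distinct elements of the union.
def has_transversal(family):
    n = len(family)
    universe = set().union(*family)
    return any(
        all(choice[i] in family[i] for i in range(n))
        for choice in permutations(universe, n)
    )

print(has_transversal([{1, 2}, {2, 3}, {1, 3}]))  # True, e.g. X = {1, 2, 3}
print(has_transversal([{1}, {1}, {1, 2}]))        # False: A_1 and A_2 both need 1
```

The second example is exactly a violation of Hall's condition: the two sets {1} jointly contain only one element. Condition (ii) of the statement plays the analogous counting role for the prescribed set P.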
https://qiskit.org/documentation/apidoc/transpiler_plugins.html
# Synthesis Plugins (qiskit.transpiler.passes.synthesis.plugin)

This module defines the plugin interfaces for the synthesis transpiler passes in Qiskit. These provide a hook point for external python packages to implement their own synthesis techniques and have them seamlessly exposed as opt-in options to users when they run transpile(). The plugin interfaces are built using setuptools entry points, which enable packages external to qiskit to advertise that they include a synthesis plugin.

## Writing Plugins

### Unitary Synthesis Plugins

To write a unitary synthesis plugin there are 2 main steps. The first step is to create a subclass of the abstract plugin class: UnitarySynthesisPlugin. The plugin class defines the interface and contract for unitary synthesis plugins. The primary method is run(), which takes in a single positional argument, a unitary matrix as a numpy array, and is expected to return a DAGCircuit object representing the synthesized circuit from that unitary matrix. Then, to inform the Qiskit transpiler about what information is necessary for the pass, there are several required property methods that need to be implemented, such as supports_basis_gates and supports_coupling_map, depending on whether the plugin supports and/or requires that input to perform synthesis. For the full details refer to the UnitarySynthesisPlugin documentation for all the required fields.
An example plugin class would look something like:

```python
from qiskit.transpiler.passes.synthesis import plugin
from qiskit_plugin_pkg.synthesis import generate_dag_circuit_from_matrix


class SpecialUnitarySynthesis(plugin.UnitarySynthesisPlugin):
    @property
    def supports_basis_gates(self):
        return True

    @property
    def supports_coupling_map(self):
        return False

    @property
    def supports_natural_direction(self):
        return False

    @property
    def supports_pulse_optimize(self):
        return False

    @property
    def supports_gate_lengths(self):
        return False

    @property
    def supports_gate_errors(self):
        return False

    @property
    def min_qubits(self):
        return None

    @property
    def max_qubits(self):
        return None

    @property
    def supported_bases(self):
        return None

    def run(self, unitary, **options):
        basis_gates = options['basis_gates']
        dag_circuit = generate_dag_circuit_from_matrix(unitary, basis_gates)
        return dag_circuit
```

If for some reason the available inputs to the run() method are insufficient please open an issue and we can discuss expanding the plugin interface with new opt-in inputs that can be added in a backwards compatible manner for future releases. Do note though that this plugin interface is considered stable and guaranteed to not change in a breaking manner. If changes are needed (for example to expand the available optional input options) it will be done in a way that will not require changes from existing plugins.

Note: All methods prefixed with supports_ are reserved on a UnitarySynthesisPlugin derived class as part of the interface. You should not define any custom supports_* methods on a subclass that are not defined in the abstract class.

The second step is to expose the UnitarySynthesisPlugin as a setuptools entry point in the package metadata. This is done by adding an entry_points entry to the setuptools.setup call in the setup.py for the plugin package, with the necessary entry points under the qiskit.unitary_synthesis namespace.
For example:

```python
entry_points = {
    'qiskit.unitary_synthesis': [
        'special = qiskit_plugin_pkg.module.plugin:SpecialUnitarySynthesis',
    ]
},
```

(Note that the entry point `name = path` is a single string, not a Python expression.)

There isn't a limit to the number of plugins a single package can include as long as each plugin has a unique name. So a single package can expose multiple plugins if necessary. The name default is used by Qiskit itself and can't be used in a plugin.

#### Unitary Synthesis Plugin Configuration

For some unitary synthesis plugins that expose multiple options and tunables, the plugin interface has an option for users to provide a free-form configuration dictionary. This will be passed through to the run() method as the config kwarg. If your plugin has these configuration options you should clearly document how a user should specify them and how they're used, as it's a free-form field.

## Using Plugins

To use a plugin all you need to do is install the package that includes a synthesis plugin. Qiskit will then automatically discover the installed plugins and expose them as valid options for the appropriate transpile() kwargs and pass constructors. If there are any installed plugins which can't be loaded/imported, this will be logged via Python logging. To get the list of installed unitary synthesis plugins you can use the qiskit.transpiler.passes.synthesis.plugin.unitary_synthesis_plugin_names() function.

## Plugin API

### Unitary Synthesis Plugins

- Abstract unitary synthesis plugin class: UnitarySynthesisPlugin
- Unitary synthesis plugin manager class
- unitary_synthesis_plugin_names(): return a list of installed unitary synthesis plugin names
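The discovery mechanism itself is plain setuptools entry points, so it can be inspected with only the standard library. A version-tolerant sketch (the group name is from this page; whether any plugins are found depends on what is installed in your environment):

```python
from importlib.metadata import entry_points

# List entry points advertised under the qiskit.unitary_synthesis group.
# The return shape of entry_points() changed across Python versions, so handle both.
eps = entry_points()
if hasattr(eps, "select"):                       # Python 3.10+ selectable interface
    found = eps.select(group="qiskit.unitary_synthesis")
else:                                            # older Pythons: a dict of group -> list
    found = eps.get("qiskit.unitary_synthesis", [])

names = sorted(ep.name for ep in found)
print(names)   # e.g. [] if no plugin packages are installed
```

Each discovered entry point can then be loaded with `ep.load()`, which is essentially what the plugin manager does on the user's behalf.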
https://www.semanticscholar.org/paper/Robustness-Certificates-Against-Adversarial-for-Singla-Feizi/5c31ff945ef663b491eedd06fc2b2232adc1d2e2
• Corpus ID: 59599987

# Robustness Certificates Against Adversarial Examples for ReLU Networks

@article{Singla2019RobustnessCA,
title={Robustness Certificates Against Adversarial Examples for ReLU Networks},
author={Sahil Singla and Soheil Feizi},
journal={ArXiv},
year={2019},
volume={abs/1902.01235}
}

• Published 1 February 2019
• Computer Science
• ArXiv

While neural networks have achieved high performance in different learning tasks, their accuracy drops significantly in the presence of small adversarial perturbations to inputs. Defenses based on regularization and adversarial training are often followed by new attacks to defeat them. In this paper, we propose attack-agnostic robustness certificates for a multi-label classification problem using a deep ReLU network. Although computing the exact distance of a given input sample to the…

## Figures and Tables from this paper

## 17 Citations

• The center smoothing procedure can produce models with the guarantee that the change in the output, as measured by the distance metric, remains small for any norm-bounded adversarial perturbation of the input.
• Computer Science ArXiv • 2022 This work presents provable robustness guarantees on the accuracy of a model under bounded Wasserstein shifts of the data distribution, and shows provable lower bounds on the performance of models trained on so-called "unlearnable" datasets that have been poisoned to interfere with model training.
• Computer Science, Mathematics NeurIPS • 2019 GeoCert, a novel method for computing exact pointwise robustness of deep neural networks for all convex $\ell_p$ norms, which shows that piecewise linear neural networks partition the input space into a polyhedral complex.
• Computer Science AISTATS • 2020 This work takes a holistic look at adversarial examples for non-parametric classifiers, including nearest neighbors, decision trees, and random forests, and derives an optimally robust classifier, which is analogous to the Bayes Optimal.
• Computer Science, Mathematics • 2021 The robustness certificates guarantee that the change in the output of the smoothed model, as measured by the distance metric, remains small for any norm-bounded perturbation of the input.
• Computer Science 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) • 2022 This work presents a technique that utilizes properties of random projections to characterize the behavior of clean and adversarial examples across a diverse set of subspaces, and demonstrates that this technique outperforms competing detection strategies while remaining truly agnostic to the attack strategy.
• Computer Science ArXiv • 2020 This work presents a technique that makes use of special properties of random projections, whereby it can characterize the behavior of clean and adversarial examples across a diverse set of subspaces, and outperforms competing state-of-the-art (SOTA) attack strategies while remaining truly agnostic to the attack method itself.
• Computer Science, Mathematics ICML • 2020 It is shown that extending the smoothing technique to defend against other attack models can be challenging, especially in the high-dimensional regime, and it is established that Gaussian smoothing provides the best possible results, up to a constant factor, when $p \geq 2$.
• Computer Science NeurIPS • 2020 It is demonstrated that extra information about the base classifier at the input point can help improve certified guarantees for the smoothed classifier.
• Computer Science NeurIPS • 2020 The proposed method is based on Lagrange dualization and convex envelopes, which result in tight approximation bounds that are efficiently computable by dynamic programming and allow an increased number of graphs to be certified as robust.
## References (showing 1–10 of 44)

- (ICLR, 2018) This work proposes a method based on a semidefinite relaxation that outputs a certificate that, for a given network and test input, no attack can force the error to exceed a certain value, providing an adaptive regularizer that encourages robustness against all attacks.
- (ICML, 2018) A method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations; it is shown that the dual of this linear program can itself be represented as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss.
- (NeurIPS, 2018) This paper introduces CROWN, a general framework to certify robustness of neural networks with general activation functions for given input data points; it facilitates the search for a tighter certified lower bound by adaptively selecting appropriate surrogates for each neuron activation.
- (ICLR, 2019) Verification of piecewise-linear neural networks as a mixed-integer program that is able to certify more samples than the state of the art and find more adversarial examples than a strong first-order attack for every network.
- This work shows how a simple bounding technique, interval bound propagation (IBP), can be exploited to train large provably robust neural networks that beat the state of the art in verified accuracy, and allows the largest model to be verified beyond vacuous bounds on a downscaled version of ImageNet.
- This paper provides a theoretical justification for converting robustness analysis into a local Lipschitz constant estimation problem, and proposes to use Extreme Value Theory for efficient evaluation, which yields a novel robustness metric called CLEVER, short for Cross Lipschitz Extreme Value for nEtwork Robustness.
- (ICLR, 2018) This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
- It is shown that, in fact, there is no polynomial-time algorithm that can approximately find the minimum adversarial distortion of a ReLU network with a $0.99\ln n$ approximation ratio unless $\mathsf{NP} = \mathsf{P}$, where $n$ is the number of neurons in the network.
- (ICLR, 2018) This work provides a training procedure that augments model parameter updates with worst-case perturbations of training data, and efficiently certifies robustness for the population loss by considering a Lagrangian penalty formulation of perturbing the underlying data distribution in a Wasserstein ball.
- (AAAI, 2018) The authors' elastic-net attacks on DNNs (EAD) feature $L_1$-oriented adversarial examples and include the state-of-the-art $L_2$ attack as a special case, suggesting novel insights on leveraging $L_1$ distortion in adversarial machine learning and security implications of DNNs.
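To make the "robustness certificate" idea concrete: the simplest attack-agnostic certificate (not the paper's tighter ReLU-specific bound, but the standard Lipschitz-margin bound that several of the references above estimate or train against) says that if every logit is $L$-Lipschitz in the input under the $\ell_2$ norm, no perturbation smaller than the scaled prediction margin can flip the predicted class. A minimal sketch, with the logits and Lipschitz bound chosen for illustration:

```python
import math

def certified_l2_radius(logits, lipschitz_bound):
    """Standard Lipschitz-margin certificate: if each logit is L-Lipschitz
    in the input (l2 norm), any perturbation with
    ||delta||_2 < (top - runner_up) / (sqrt(2) * L)
    cannot change the argmax prediction."""
    s = sorted(logits, reverse=True)
    margin = s[0] - s[1]                     # gap between top two logits
    return margin / (math.sqrt(2.0) * lipschitz_bound)

# Hypothetical logits for one input; L = 2 is an assumed global bound.
radius = certified_l2_radius([3.0, 1.0, 0.5], lipschitz_bound=2.0)
print(radius)  # 2.0 / (2 * sqrt(2)) ≈ 0.7071
```

A looser Lipschitz bound shrinks the certified radius, which is exactly why the certified-defense papers above work so hard to tighten the per-example bound.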
http://aga2012.wikidot.com/dual-faces
Dual Faces

Let $F$ be a face of a convex cone $C$. Define the dual face $F^\Delta \subseteq C^\ast$ by $F^\Delta = \{ l \in C^\ast \mid l(x) = 0, \forall x \in F\}$.

(a) $F^\Delta$ is an exposed face of $C^\ast$.

(b) Part (a) implies that any face $F$ of $C$ is contained in an exposed face.

SOLUTION

(a) Fix $x$ in the relative interior of $F$. We will show that $F^\Delta = \operatorname{face}_x(C^\ast) = \{l \in C^\ast \mid l(x) \leq y(x), \forall y \in C^\ast\} = \{l \in C^\ast \mid l(x) = 0\}$. (The last equality holds because $0 \in C^\ast$ and $l(x) \geq 0$ for every $l \in C^\ast$.) From the definitions it is clear that $F^\Delta \subset \operatorname{face}_x(C^\ast)$. Conversely, suppose that $l \in \operatorname{face}_x(C^\ast)$ and $y \in F$. Since $x$ belongs to the relative interior of $F$, there exists $\epsilon > 0$ such that $x - \epsilon y \in F$. Because $x - \epsilon y \in F \subset C$ and $l \in C^\ast$, we have $l(x - \epsilon y) \geq 0$, and therefore $l(y) = \frac{1}{\epsilon}l(\epsilon y)\leq \frac{1}{\epsilon}l(x)=0$. On the other hand, $l(y) \geq 0$. We have shown that $l(y) = 0$ for every $y \in F$; in other words, $l \in F^\Delta$. Hence $F^\Delta$ is the exposed face $\operatorname{face}_x(C^\ast)$, as claimed.

(b) It is true in general that $F \subset (F^\Delta)^\Delta$. By part (a) applied to the cone $C^\ast$ and its face $F^\Delta$, the set $(F^\Delta)^\Delta$ is an exposed face of $C^{\ast\ast} = C$, so $F$ is contained in an exposed face. -MH
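The identity $F^\Delta = \operatorname{face}_x(C^\ast)$ in part (a) can be checked on a small finite sample. The example below (our choice, not part of the exercise) takes $C = \mathbb{R}^2_+$, which is self-dual, and the face $F = \{(t,0) : t \ge 0\}$; both the annihilator definition of $F^\Delta$ and the exposed face at $x = (1,0)$ pick out the same ray:

```python
# Finite sanity check of part (a) for C = R^2_+ (self-dual) and
# F = {(t, 0) : t >= 0}; x = (1, 0) lies in the relative interior of F.

grid = [(a, b) for a in range(4) for b in range(4)]   # sampled points of C* = R^2_+
F_sample = [(t, 0) for t in range(4)]                 # sampled points of F
x = (1, 0)

def dot(l, v):
    return l[0] * v[0] + l[1] * v[1]

# F^Delta: functionals in C* that vanish on all of F
dual_face = {l for l in grid if all(dot(l, v) == 0 for v in F_sample)}

# face_x(C*): minimizers of l -> l(x) over the sampled C* (the minimum is 0)
min_val = min(dot(l, x) for l in grid)
exposed = {l for l in grid if dot(l, x) == min_val}

assert dual_face == exposed == {(0, b) for b in range(4)}
print("F^Delta = face_x(C*) on the sample:", sorted(dual_face))
```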
https://www.maplesoft.com/support/help/maple/view.aspx?path=VectorCalculus/Flux
VectorCalculus - Maple Programming Help

VectorCalculus[Flux] - compute the flux of a vector field through a surface in R^3 or a curve in R^2

Calling Sequence

Flux(f, dom, inert)

Parameters

f - vector field or Vector-valued procedure; specify the vector field to be integrated
dom - unevaluated function call; specify the surface or curve over which to integrate
inert - (optional) name; specify that the integral representation is to be returned

Description

• The Flux(f, dom) command computes the flux of the vector field f through the surface or curve specified by dom.

• Surfaces and curves are represented by unevaluated function calls. The possible surfaces are Box(r1, r2, r3, dir), Sphere(cen, rad, dir), and Surface(v, param). The possible curves are Arc(obj, start, finish), Circle(cen, rad, dir), Ellipse(eqn, varx, vary, dir), Line(p1, p2), LineSegments(p1, p2, ..., pk), and Path(v, rng, c).

Box(r1, r2, r3, dir)

Each ri must have type algebraic..algebraic (a range). These represent the sides of the box, and the integral is taken over each face of the box. If the optional fourth argument dir is specified, it specifies the direction of the normal vector. It must be the word inward or outward. The default is outward.

Sphere(cen, rad, dir)

The first parameter of Sphere, cen, must have type 'Vector'(3, algebraic) and rad must have type algebraic. These represent the center and radius of the sphere, respectively. If a coordinate system attribute is specified on cen, the center is interpreted in that coordinate system. If the optional third argument, dir, is specified, it specifies the direction of the normal vector. It must be the word inward or outward. The default is outward.

Surface(v, param)

This construct can be used to define a general two-parameter surface. The first argument v must be a free Vector or a position Vector; it represents the surface through which the flux will be calculated.
The second argument, param, provides information about the parameters that occur in v. It must be of the form [x1, x2] = region, where the names x1 and x2 are the two parameter names and region specifies the bounds on those two parameter names. The region argument must either be (1) any valid two-dimensional region structure that VectorCalculus[int] accepts or (2) a sequence of two equations of the form x1 = range1, x2 = range2, where range1 and range2 are explicit ranges that bound x1 and x2, respectively.

Finally, an optional last argument, c, can be given, and must be of the form coords=sys or coordinates=sys. This is the coordinate system in which v is interpreted. Note: if this argument is supplied, any existing coordinate attribute on v is overwritten (and therefore ignored). The normal vector is the cross-product of the partial derivatives of v.

Arc(obj, start, finish)

The first parameter of Arc, obj, is a Circle or Ellipse structure. The Arc structure defines a segment of the circle or ellipse with endpoints specified by the start and finish angles. To define precisely how the endpoints are determined from the given start and finish angles, it suffices to discuss only circles and ellipses centered at the origin. For a circle or ellipse centered elsewhere, the start and finish endpoints are determined as if the circle or ellipse were first translated to the origin.

For a Circle centered at the origin, angle is measured counterclockwise from the positive x-axis. Therefore, the angle $\frac{3\pi}{2}$ specifies the negative y-axis.

To define how angle is measured for an Ellipse centered at the origin, we first define the right semimajor axis of the ellipse to be the semimajor axis in the right half-plane (the first and fourth quadrants of the plane). If the major axis of the ellipse is coincident with the y-axis, then its right semimajor axis is defined to be the one on the negative y-axis.
Thus, for an ellipse centered at the origin with its major axis sitting on the line y = x, its right semimajor axis is the one inside the first quadrant. For an Ellipse centered at the origin, angle is measured counterclockwise from its right semimajor axis. Therefore, in the example ellipse given in the previous paragraph, the angle $\frac{\pi}{4}$ specifies the positive y-axis.

Once the terminal arm of the angle is determined, that angle specifies the point where the terminal arm (viewed as a ray) intersects the circle or ellipse. The arc is always traversed in its entirety from start to finish; therefore, it is possible to traverse the circle or ellipse for more than one full revolution or to traverse in the opposite (clockwise) direction by specifying appropriate start and finish angles.

Circle(cen, rad, dir)

The parameter cen is the center of the circle and must have type 'Vector'(algebraic), and rad is the radius of the circle and must have type algebraic. If a coordinate system attribute is specified on cen, it is interpreted in that coordinate system. If the optional third argument dir is specified, it specifies the direction of the normal vector. It must be the word inward or outward. The default is outward.

Ellipse(cen, a, b, phi, dir)

The parameter cen is the center of the ellipse and must have type 'Vector'(algebraic). If a coordinate system attribute is specified on cen, it is interpreted in that coordinate system. The parameters a and b are the lengths of the semimajor and semiminor axes, respectively. The resulting ellipse is constructed via the following process: start with an ellipse centered at the origin having the specified axes lengths, with its major axis initially on the x-axis; it is rotated through an angle of phi in the counterclockwise direction and translated to cen. If the optional fifth argument dir is specified, it specifies the direction of the normal vector. It must be the word inward or outward. The default is outward.
Ellipse(eqn, varx, vary, dir)

The parameter eqn is either a Cartesian equation specifying the ellipse or an algebraic expression such that the equation eqn = 0 specifies the ellipse. A Cartesian equation for a general conic section is of the form $Ax^2 + Bxy + Cy^2 + Ex + Fy + G = 0$, and specifies the locus of all points $\langle x, y \rangle$ that satisfy the equation; this locus is a non-degenerate, real ellipse if and only if three conditions hold:

$\Delta := \det\begin{pmatrix} 2A & B & E \\ B & 2C & F \\ E & F & 2G \end{pmatrix} \neq 0, \qquad C\Delta < 0, \qquad B^2 - 4AC < 0.$

The two variable names that appear in eqn can be specified via varx and vary. The variable specified by varx represents the x-axis, and vary the y-axis. Both varx and vary can be omitted, but only if f is in Cartesian coordinates and its coordinate names are the same as the variables that appear in the equation. If the optional last argument dir is specified, it specifies the direction of the normal vector. It must be the word inward or outward. The default is outward.

Line(p1, p2)

The parameters p1 and p2 must be of type 'Vector'(algebraic), and they represent the endpoints of the directed line segment from p1 to p2. If coordinate system attributes are specified on the points, they are interpreted in their respective coordinate systems. The normal is taken Pi/2 to the right of the direction of the directed line segment.

LineSegments(p1, p2, ..., pk)

Similar to Line(p1, p2) above, the pi's represent the endpoints of $k-1$ line segments. The path of integration is the collection of line segments directed from p1 to p2, p2 to p3, ..., p(k-1) to pk. If any coordinate system attributes are specified on these points, they are interpreted in their respective coordinate systems.

Path(v, rng, c)

The first parameter, v, is a Vector representing the components of the path, and the second parameter, rng, must have type {range, name=range}. If no parameter name is specified in rng, it is inferred from v.
If the optional third argument c is specified, it must be an equation of the form coords=sys or coordinates=sys. This is the coordinate system in which v is interpreted. Note: if this argument is supplied, any existing coordinate attribute on v is overwritten (and therefore ignored). The normal is taken Pi/2 to the right of the tangent vector that points in the direction of increasing parameter.

• The Flux(f, dom, inert) command returns the integral form of the flux of f over dom.

• For some surfaces or curves, the Student[VectorCalculus][Flux] command offers a way to visualize the surface or curve, normal vectors and vector field.

Examples

> with(VectorCalculus):
> Flux(VectorField(<x, y, z>, cartesian[x, y, z]), Surface(<r, s, t>, s = 0 .. Pi, t = 0 .. 2*Pi, coords = spherical));
        4*Pi*r^3                                         (1)
> Flux(VectorField(<y, -x, 0>, cartesian[x, y, z]), Surface(<s, t, s^2 + t^2>, [s, t] = Rectangle(0 .. 1, 2 .. 3)));
        0                                                (2)
> Flux(VectorField(<x, y, z>, cartesian[x, y, z]), Sphere(<0, 0, 0>, r));
        4*Pi*r^3                                         (3)
> Flux(VectorField(<x, y, z>, cartesian[x, y, z]), Sphere(<0, 0, 0>, r), 'inert');
        ?                                                (4)
> Flux(VectorField(<x, y, z>, cartesian[x, y, z]), Sphere(<0, 0, 0>, r, 'inward'));
        -4*Pi*r^3                                        (5)
> Flux(VectorField(<y, -x, 0>, cartesian[x, y, z]), Box(1 .. 2, 3 .. 4, 5 .. 6));
        0                                                (6)
> Flux(VectorField(<x, y>, cartesian[x, y]), Circle(<0, 0>, r, 'inward'));
        -2*Pi*r^2                                        (7)
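Example (3) above — the flux of the radial field F(x, y, z) = (x, y, z) through a sphere of radius r — can be cross-checked numerically outside Maple. On the sphere, F·n = r (outward normal) and dA = r² sin(s) ds dt, so the exact answer is 4πr³; the sketch below (plain Python, names are ours) recovers it with a midpoint Riemann sum:

```python
import math

def sphere_flux_radial(r, n_s=200, n_t=200):
    """Numerically integrate the flux of F(x, y, z) = (x, y, z) through
    the sphere of radius r with outward normal, using a midpoint sum in
    the spherical parameters s in [0, pi], t in [0, 2*pi].
    On the sphere F . n = r, and dA = r^2 sin(s) ds dt, so the exact
    value is 4*pi*r^3 -- Maple's result (3) above."""
    ds = math.pi / n_s
    dt = 2.0 * math.pi / n_t
    total = 0.0
    for i in range(n_s):
        s = (i + 0.5) * ds
        # integrand r * r^2 * sin(s) is independent of t, so multiply by n_t
        total += n_t * (r ** 3 * math.sin(s)) * ds * dt
    return total

r = 2.0
approx = sphere_flux_radial(r)
exact = 4.0 * math.pi * r ** 3
print(approx, exact)  # agree to several significant figures
```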
https://www.maa.org/press/maa-reviews/essentials-of-modern-algebra-0
# Essentials of Modern Algebra

###### Cheryl Chute Miller

Publisher: Mercury Learning & Information
Publication Date: 2019
Number of Pages: 339
Format: Hardcover
Edition: 2
ISBN: 9781683922353
Category: Textbook

[Reviewed by Fernando Q. Gouvêa, on 01/23/2019]

See Mark Hunacek’s review of the first edition. The author’s preface to this second edition indicates two changes:

• Chapters 1–3 have been reorganized to make the material less “tightly packed” than before. The group of units in $\mathbb{Z}/n\mathbb{Z}$ is introduced earlier in order to provide more examples of groups, and homomorphisms are postponed to chapter two.

• Twelve biographical profiles of mathematicians have been added, one at the end of each chapter. Rather than sticking to the usual suspects, the author says she “decided to include information about some who are not as commonly heard about,” focusing on mathematicians “who had to overcome struggles due to race, gender, religion, age, or sometimes even health to persevere.”

The weird definition of $a\pmod{n}$ used in the first edition is retained even though chapter 0 includes a discussion of equivalence relations. As Hunacek notes in his review, this definition should lead to writing things like $5\!\pmod{4}=1$ rather than $5\equiv 1\pmod{4}$.

The twelve biographical essays are short accounts in the style of a CV: birth, education, degrees, academic positions, death, honors. Most give no information about the subject’s mathematical work. There are a few minor errors. Given the choice to focus on overcoming struggles, there is often a discussion of when someone’s work was “accepted” or “recognized,” but these vague terms are not usually clarified. For example, it is not clear to me what this means: “Sadly, only in 2001 did the mathematics community officially recognized Haynes as the first African American woman to earn a PhD in mathematics.” (p.
242, biography of Euphemia Lofton Haynes)

As Hunacek’s review says, this is a usable but not exceptional textbook. The exercises at the end of chapters are mostly easy, but the projects enhance them in significant ways. The inclusion of Galois theory (restricted to characteristic zero or finite base fields) is a very good feature.

Fernando Q. Gouvêa is Carter Professor of Mathematics at Colby College. He has taught abstract algebra more times than he cares to count.

0. Preliminaries
1. Groups
2. Subgroups and Homomorphisms
3. Quotient Groups
4. Rings
5. Quotient Rings
6. Domains
7. Polynomial Rings
8. Factorization of Polynomials
9. Extension Fields
10. Galois Theory
11. Solvability

Hints for Selected Exercises
Bibliography
Index
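Hunacek's notational point about the book's definition of "$a \pmod{n}$" is easy to see concretely: treating mod as a *function* yields equalities like $5 \pmod 4 = 1$, while the usual congruence is a *relation* that holds between many pairs of numbers. A small illustration (our own, not from the book):

```python
# mod as a function vs. congruence as a relation

def mod(a, n):
    """mod as a function: the canonical representative in {0, ..., n-1}."""
    return a % n

def congruent(a, b, n):
    """the equivalence relation: a ≡ b (mod n) iff n divides a - b."""
    return (a - b) % n == 0

print(mod(5, 4))           # 1     -> the function-style "5 (mod 4) = 1"
print(congruent(5, 1, 4))  # True  -> the relation-style "5 ≡ 1 (mod 4)"
print(congruent(5, 9, 4))  # True  -> the relation does not single out one value
```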
https://forum.allaboutcircuits.com/threads/semiconductor-physics-p-n-junction.4413/
# Semiconductor Physics P-N junction

#### DJ Veni
Joined Jan 9, 2007
7

Hi all, I need to find a way to simulate the behavior of the carrier concentration in the depletion region of a p-n junction silicon diode, given the doping concentration, under "non-depletion-region-approximation" conditions using MATLAB, then plot the energy band diagrams, and then apply the result to a BJT. First, I'm confused as to which road to take: a uniformly doped P-N junction where the Nd and Na dopants remain constant (FIG 1), or one where they vary linearly while approaching the junction, at which Na (acceptor concentration/cm^3) = Nd (donor concentration/cm^3) (FIG 2), i.e. a P type formed on an N type substrate. HELP!!

#### Dave
Joined Nov 17, 2003
6,970

If you could post up a diagram of what you are trying to say it would be easier to comment. What I would say is: even if you create an "abrupt" P-N junction, there will be some level of diffusion across the junction. If you consider one side of the junction being initially doped with a particular particle concentration, then the effect of diffusion will result in a concentration gradient -dn/dx in the direction of the particle movement (i.e. the diffusion direction across the P-N junction) which closely approximates a straight line.

As for your energy bands, from what I recall the changes in the energy band characteristics will only occur over the depletion region width. I do have some work on this so should try and dig it out.

Dave

#### DJ Veni
Joined Jan 9, 2007
7

I have uploaded the majority and minority carrier concentrations of the p and n regions and their concentration gradients. I have to avoid the depletion region approximation, so I'm assuming the charge density in Poisson's equation keeps its full form, ρ = q(Nd - Na + p - n), from which I can obtain a plot of the electric field intensity, then the electrostatic potential, and with that the energy band diagram. The question is: how can I do it in MATLAB?
Then, eventually, using the drift and diffusion current equations I can make the changes to what I got, i.e. at equilibrium the drift and diffusion currents cancel:

q*Mp*p*E = q*Dp*(dp/dx), where Mp = mobility of holes,

and similarly in the case of electrons. I get the theory bit, but I don't know how I can implement it using MATLAB.

#### Attachments
• 74.6 KB Views: 120

#### Dave
Joined Nov 17, 2003
6,970

Sorry for being a touch late getting back to you. OK, so do you have a mathematical representation of the electric field strength as a function of the distance? Firstly, you need to define your distance vector (x) at a specified interval:

Rich (BB code):
x = [0:1e-6:30e-3];

Will generate a distance vector from 0 to 30mm (this may be too big so you can alter your distance) at an interval of 1um (this can also be altered), giving you 30001 sampling points.

Then you need to express your electric field equations as a function of the distance vector. As an example, if e = 2*sin(f*x), where f is an arbitrary variable equal to 1000, then:

Rich (BB code):
e = 2*(sin(1000*x));

Will generate the solution to e over the sample range, hence e will have 30001 sampling points. To view this expression with the distance vector on the x-axis and the function e on the y-axis, type:

Rich (BB code):
plot(x,e)

If you provide a copy of your mathematical theory I can give you more pointers about the Matlab programming. Since this is moving more onto Matlab programming, I will move this thread to the Programmer's Corner.
Dave

#### DJ Veni
Joined Jan 9, 2007
7

n = number of electrons/cm^3
p = number of holes/cm^3
Under equilibrium conditions in intrinsic material, n = p = ni = 10^10 /cm^3 for Si (silicon).

p = ni*exp((Ei - Ef)/(k*T)), where
Ei = intrinsic energy level
Ef = Fermi level
k = Boltzmann constant
T = absolute temperature

Under equilibrium: n*p = ni^2.

We have the charge neutrality relationship: p - n + Nd - Na = 0.

The diffusion currents are given as
Jp/diff = -q*Dp*grad(p)
Jn/diff = q*Dn*grad(n)

The total net carrier currents in a semiconductor arise as the combined result of the drift and diffusion currents. Summing the respective n and p segments we get
Jp = Jp/diff + Jp/drift
Jn = Jn/diff + Jn/drift
J = Jn + Jp

Remember: Ef remains constant even if the doping concentration varies. Under equilibrium conditions the total current remains zero! That is, the drift and diffusion currents will each vanish only if E = 0 and grad(n) = grad(p) = 0 (see the equations for the drift and diffusion currents).

Now, each type of carrier action (drift, diffusion) gives rise to a change in the carrier concentrations with time, thus
dn/dt = dn/dt (drift) + dn/dt (diff)
dp/dt = dp/dt (drift) + dp/dt (diff)

With the above conditions I am hoping that with MATLAB I can see plots of the carrier concentration varying over some given time steps, BUT I don't know how to use differential equations in MATLAB. Also, I am assuming the minority carrier diffusion equations will play some part in this.

Now: my first step will be to get the n and p concentrations (both minority and majority carriers) without the influence of time, BEFORE I use all that previous theory to bring in the time factor and make my changes.
For my first step I need to use Poisson's equation.

Charge density: ρ = q(p - n + Nd - Na).

This is usually simplified to one dimension using the depletion approximation, which is what I have to change: from "one dimension WITH the depletion approximation" to "one dimension WITHOUT the depletion approximation", so the n and p terms remain even in my electric field expression.

So I need to find n and p in MATLAB (the carrier concentrations, without the time factor) directly, and then use Poisson's equation to get plots for the electric field and the voltage; and since the energy band curve is the inverse of the voltage curve, the band diagram follows. Using MATLAB I have to simulate the above theory and obtain the relevant plots for my carrier concentration.

#### Dave
Joined Nov 17, 2003
6,970

Dave

#### Dave
Joined Nov 17, 2003
6,970

Apologies for the lateness of my reply, I've been a little snowed in. First thing to do is create an M-file to write your script. Open Matlab, go to File > New > M-File. Save it with a suitable name. Declare all variables, i.e. Na, Nd, k, T, q. Your syntax should be of the form:

Rich (BB code):
% Lines preceded with one of these => is a comment
% Set ni equal to 1x10^10
ni = 1e+10;

Remember to put the ; at the end of the line to suppress Matlab returning the result to the Matlab command line. Calculate the simple equations using your above variables:

Rich (BB code):
p = ni*(exp((Ei - Ef)/(k*T)));

As for differential equations, you state that you want "to get the n and p concentrations, both minority and majority, without the influence of time" - I cannot help you with this because I don't have a grasp of what you are trying to achieve here. The way I see it, you need to express n in terms of a time variable t. Define t over some length:

Rich (BB code):
% Define t from 0 to 60 seconds at intervals of 0.5 seconds
t = [0:0.5:60];

You will then need to ascertain what n is at time 0, 0.5, 1, 1.5 etc.
Though from your description I cannot ascertain what that is. Ultimately you will have a vector, n, which is 1x121 points long - your time representation of n as a vector (Matlab's native format). To calculate dn/dt, use the following:

Rich (BB code):
% Differentiates the vector n, i.e. dn/dt
ndiff = diff(n);

The same situation will apply for the variable p and dp/dt. Can you draw up the code to initialise the simulation as above? We can look at the other stuff later.

Dave

#### DJ Veni
Joined Jan 9, 2007
7

Knowing the Na and Nd concentrations, we are able to find the carrier concentrations (minority from majority) by the n*p = ni^2 relationship. And "to get the n and p concentrations, both minority and majority, without the influence of time" means that the first thing I wish to do in MATLAB is to plot the concentration, the electric field and the band diagram on the y-axis, with the x-axis denoting the length of the pn junction. So time is not required in this first step.

#### DJ Veni
Joined Jan 9, 2007
7

Attachment 1:
N type semiconductor depletion region
p type semiconductor depletion region

Rich (BB code):
% Project Description
% Equilibrium Energy Band Diagram
% (Si, 300K, nondegenerately doped step junction)

% Constants
q = 1.6e-19;
KS = 11.8;
ni = 1.0e10;
EG = 1.12;
T = 300;
k = 8.617e-5;
eO = 8.854e-14;
xleft = -5e-4;
xright = -xleft;
NA = input('Enter p-side doping(cm^-3),NA=');
ND = input('Enter n-side doping(cm^-3),ND=');

% Computations
Vbi = k*T*log((NA*ND)/ni^2);                 % Built-in voltage
xN = sqrt(2*KS*eO/q*NA*Vbi/(ND*(NA+ND)))     % N-region depletion width
xP = sqrt(2*KS*eO/q*ND*Vbi/(NA*(NA+ND)))     % P-region depletion width
x = linspace(xleft,xright,200);
Vx1 = (Vbi-q*ND.*(xN-x).^2/(2*KS*eO).*(x<=xN)).*(x>=0);  % Voltage in the n-side depletion region
Vx2 = (.5*q*NA.*(xP+x).^2/(KS*eO).*(x>=-xP)).*(x<0);     % Voltage in the p-side depletion region
Vx = Vx1+Vx2;
VMAX = 3;                                    % Maximum plot voltage
EF = Vx(1)+VMAX/2-.026*reallog(NA/ni);       % Fermi level

% Plot Diagram
close
plot(x,-Vx+EG/2+VMAX/2);
axis([xleft xright 0 VMAX]);
axis('off');
hold on
plot(x,-Vx-EG/2+VMAX/2);
plot(x,-Vx+VMAX/2,'w');
plot([xleft xright],[EF EF],'w');
plot([0 0],[0.15 VMAX-0.5],'w--');
%plot(xP,EF,'XP');
%plot(-xN,EF,'XN');
text(xleft*1.08,(-Vx(1)+EG/2+VMAX/2-.05),'Ec');    % Label for conduction band (function of Vx and doping) at x < -xP
text(xright*1.02,(-Vx(200)+EG/2+VMAX/2-.05),'Ec'); % Label for conduction band at x > xN
text(xleft*1.08,(-Vx(1)-EG/2+VMAX/2-.05),'Ev');    % Label for valence band at x < -xP
text(xright*1.02,(-Vx(200)-EG/2+VMAX/2-.05),'Ev'); % Label for valence band at x > xN
text(xleft*1.08,(-Vx(1)+VMAX/2-.05),'Ei');         % Label for intrinsic energy level
text(xright*1.02,EF-.05,'EF');
set(gca,'DefaultTextUnits','Normalized')
text(.18,0,'p-side');
text(.47,0,'x=0');
text(.75,0,'n-side');
set(gca,'defaultTextUnits','Data')
hold off

Put in Na as 4e15 and Nd as 3e14. I need to apply the depletion region approximation here first, and then do the rest of the stuff using differential equations.

#### Dave
Joined Nov 17, 2003
6,970

Hi, Further apologies for the lateness of my reply. The built-in differencing function (diff) in Matlab works by calculating the difference between adjacent elements in an array, which is derived with respect to a set of datum values, most commonly but not always time. So the key is to represent your differential function in terms of vectors - then it doesn't matter if you want to find a first, second, or fourteenth order derivative, the diff function will work.
If you have multiple differentials then you can simply use the diff function standalone and collect the solution in a new array, e.g.:

d^2y/dt^2 + dx/dt = z

can be computed by:

Rich (BB code):
% Where x, y and z are all vectors derived with respect to t.
% Note: diff(y,2) is one element shorter than diff(x), so trim
% diff(x) to a matching length before adding.
z = diff(y,2) + diff(x(1:end-1));

Given I am not familiar with the "depletion region approximation", I cannot comment specifically because I don't know how you intend to deduce the arrays upon which you are working. Sorry at this stage I cannot be more exact in my explanation (this is probably more to do with how long it has been since I did semiconductor physics!)

Dave

#### DJ Veni

Joined Jan 9, 2007
7

Thanks. Well, you just opened a door so that I can move forward. I'll try it out, get back in a few days, and let you know the results.

Dj Veni

#### Dave

Joined Nov 17, 2003
6,970

You could also have a look at the gradient function; however, this too requires the input argument to be in vector format. I have personally never used the gradient function, but from looking at its syntax in the Matlab help files, you may find it flexible enough to write your own diff function - certainly an option if you cannot get your data in a suitable format. I have also looked over at the Matlab file exchange to see if there are any other implementations of the diff function you can use, with little success.

Dave
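For readers working outside Matlab, a minimal Python/NumPy sketch of the same forward-difference recipe (np.diff mirrors Matlab's diff). The carrier profile n below is a hypothetical stand-in, and note one detail the thread glosses over: diff alone gives adjacent differences, so you must divide by the time step to get an actual dn/dt.

```python
import numpy as np

t = np.linspace(0.0, 12.0, 121)   # 121 time points, like the 1x121 vector above (span chosen arbitrarily)
n = np.exp(-t)                    # hypothetical carrier-concentration decay

# diff(n) alone gives adjacent differences; divide by diff(t) for dn/dt
dn_dt = np.diff(n) / np.diff(t)

# Second derivative: diff(n, 2) is one element shorter again
d2n_dt2 = np.diff(n, 2) / np.diff(t)[:-1] ** 2

print(dn_dt.shape, d2n_dt2.shape)   # (120,) and (119,)
```

The shrinking array lengths are the same trimming issue that arises when combining diff(y,2) and diff(x) above.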
http://physics.stackexchange.com/users/7896/leongz?tab=activity&sort=all&page=2
# leongz

reputation 2928, member for 2 years, 6 months, profile views 269

# 570 Actions

- Apr 22: comment on "Intro to Solid State Physics": You may also wish to check out lecture notes by other professors. For example, I have found the following helpful: www-thphys.physics.ox.ac.uk/people/SteveSimon/condmat2012/…
- Apr 12: asked "Diagonalization of Hamiltonian"
- Mar 11: comment on "Difference between RPA and generalized RPA": @Adam I am interested in using RPA to compute correlation functions in general. Which diagrams should be summed? How is it different in generalized RPA?
- Mar 11: comment on "Bogoliubov transformation with a slight twist": I would add that the diagonalization is always possible because the matrix is Hermitian.
- Feb 28: awarded Yearling
- Feb 27: asked "Difference between RPA and generalized RPA"
- Jan 23: awarded Popular Question
- Dec 9: reviewed (No Action Needed) "How long does it take to scan a typical scanning electron microscope image?"
- Nov 29: comment on "Is there a way for an astronaut to rotate?"
- Nov 28: revised "Azimuthal quantum number, l, and magnetic quantum number, m, are from angular momentum?" (improved formatting)
- Nov 28: suggested edit on "Azimuthal quantum number, l, and magnetic quantum number, m, are from angular momentum?"
- Nov 25: answered "Does changing the electric / magnetic field cause self-reinforcing induction of the other?"
- Nov 21: revised "Some conceptual questions in BEC" (retagged)
- Nov 21: suggested edit on "Some conceptual questions in BEC"
- Nov 20: comment on "Eigenvectors of the angular momentum operator $S_x$": An eigenvector $(a,b)$ represents a state that has a probability of $|a|^2$ of being spin up, and a probability of $|b|^2$ of being spin down. For the probabilities to sum to one, the eigenvector must be normalized.
- Nov 20: comment on "Interpreting a Hamiltonian in terms of 'hopping' operators": What are $a$, $b$, and $\tau$?
- Nov 7: revised "Is $\langle\psi_1|p\psi_1\rangle$ necessarily 0 for eigenstates?" (fix grammar)
- Nov 7: suggested edit on "Is $\langle\psi_1|p\psi_1\rangle$ necessarily 0 for eigenstates?"
- Oct 18: awarded Popular Question
- Oct 9: revised "A naive question on the $U(1)$ gauge transformation of electromagnetic field?" (fixed latex)
http://mathhelpforum.com/differential-equations/148936-homogeneous-helmholtz-equation-variable-coefficient.html
# Math Help - Homogeneous Helmholtz Equation with Variable Coefficient

1. ## Homogeneous Helmholtz Equation with Variable Coefficient

Hello,

How does one go about solving a two-dimensional (or higher-dimensional) homogeneous Helmholtz equation with a variable coefficient, i.e.

$\Delta u(x,y) + f(x,y)\,u(x,y) = 0$

where in the standard Helmholtz equation f(x,y) = k^2 (a constant), knowing some boundary conditions? I am at a loss as to what method to even use, having tried separation of variables, Green's functions, and the method of characteristics. Any hints? Can this equation even be solved?

Thank You

2. Is $f(x,y)$ arbitrary or does it have a specific form?

3. Originally Posted by Danny
Is $f(x,y)$ arbitrary or does it have a specific form?

Nope, it's arbitrary. That's a good sign... right?

4. It is possible that f(x,y) could be a number of step functions. Would that improve the situation?
https://www.physicsforums.com/threads/strength-of-electric-field.289605/
# Strength of Electric Field

1. Feb 3, 2009

### glennpagano44

1. The problem statement, all variables and given/known data

Halliday and Resnick, edition 7, chapter 22, number 29. In Fig. 22-45, positive charge q = 7.81 pC is spread uniformly along a thin nonconducting rod of length L = 0.145 m. The y distance between the point and the rod is R = 0.06 m. What is the magnitude and direction of the electric field produced at point P? I will attempt to describe the figure: there is a thin nonconducting rod in the x dimension with length L, and a point P directly above the center of the rod in the y dimension (R = 0.06 m).

2. Relevant equations

$$E = \frac{1}{4\pi\epsilon_0}\,\frac{q}{r^{2}}$$

3. The attempt at a solution

$$\int dE = \frac{1}{4\pi\epsilon_0}\int \frac{\lambda R\,dx}{(R^{2}+x^{2})^{3/2}}$$

I then took lambda and R out of the integral since they are constants. After taking the integral I got:

$$\frac{\lambda}{4\pi\epsilon_0}\cdot\frac{x}{R\sqrt{R^{2}+x^{2}}}$$

When I plug in the variables I do not get the correct answer: I get 7.48 N/C, but the correct answer is 12.4 N/C. I plug in L for x and R for R, and for $$\lambda$$ I use q/L.

2. Feb 4, 2009

### tiny-tim

Hi glennpagano44!

(have a lambda: λ and an epsilon: ε and a pi: π and a square-root: √ )

No … how can you still have an x when you've just eliminated x by integrating over it? And where did that extra R come from?

3. Feb 4, 2009

### glennpagano44

I went back to class today and I figured it out, thanks a lot. I just integrated it wrong; I had to do some trig substitutions (that is where the extra R came from).

Similar Discussions: Strength of Electric Field
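As a numerical sanity check (not part of the thread), the standard closed-form field of a finite uniformly charged rod at perpendicular distance R from its center, E = q / (4πε₀ · R·√(R² + (L/2)²)), can be compared with a direct midpoint-rule integration of the attempted integrand over the correct limits ±L/2. Both reproduce the quoted 12.4 N/C with the thread's values.

```python
import math

q = 7.81e-12      # charge (C)
L = 0.145         # rod length (m)
R = 0.06          # perpendicular distance from rod center (m)
k = 1.0 / (4.0 * math.pi * 8.854e-12)   # Coulomb constant, 1/(4*pi*eps0)

# Closed form: result of integrating lam*R dx / (R^2 + x^2)^(3/2) from -L/2 to +L/2
E_closed = k * q / (R * math.sqrt(R**2 + (L / 2) ** 2))

# Midpoint-rule integration of the same integrand
lam = q / L       # linear charge density
n = 100_000
dx = L / n
E_num = sum(
    k * lam * R * dx / (R**2 + (-L / 2 + (i + 0.5) * dx) ** 2) ** 1.5
    for i in range(n)
)

print(round(E_closed, 2), round(E_num, 2))   # both about 12.43 N/C
```

This confirms the fix glennpagano44 found: the error was in the integration and limits, not the setup of the integrand.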
https://www.studysmarter.us/textbooks/physics/physics-for-scientists-and-engineers-a-strategic-approach-with-modern-physics-4th/electric-charges-and-forces/q-43-what-is-the-force-on-the-charge-in-figure-p-give-your-a/
Q. 43 Expert-verified
Found in: Page 625

### Physics for Scientists and Engineers: A Strategic Approach with Modern Physics

Book edition: 4th
Author(s): Randall D. Knight
Pages: 1240 pages
ISBN: 9780133942651

# What is the force on the charge in FIGURE P? Give your answer as a magnitude and an angle measured cw or ccw (specify which) from the -axis.

The value of the force is … and the angle is … .

See the step by step solution
http://math.stackexchange.com/questions/44048/if-a-language-is-a-np-hard-is-also-its-complement-np-hard
If a language is NP-hard, is its complement also NP-hard?

I was asked the following question: L is a language for which $|L \cap \{0,1\}^n| = 1$ for every $n$. In other words, the number of words of length n is exactly one. I need to prove that if L is NP-hard, then its complement is also NP-hard.

The GENERAL question: When I solved it, I noted that I have a verification algorithm A for L and a witness y. I suggested running A on y and answering the opposite; this would be the verification algorithm for the complement language. But is this always true, for all languages? What did I miss? How does this language differ?

Thanks a lot.

- Now for the general question. In general this is an open question; for NP-complete languages it is exactly the question of whether $NP=coNP$. If $P=NP$ then indeed $NP=coNP$, and hence, proving that $NP\ne coNP$ is "at least as hard as proving $P\ne NP$", so we don't expect an answer soon (the belief is that $NP\ne coNP$).
https://math.stackexchange.com/questions/3236661/how-can-an-ordered-pair-be-a-set-assuming-there-is-no-order-in-a-set
# How can an ordered pair be a set, assuming there is no order in a set?

The following propositions, I think, are generally considered as true (though they may not all have the same level of rigor).

(1) In a set there is no order (due to the extensionality axiom): $$\{a,b\} = \{b,a\}$$

(2) In an ordered pair there is an order: $$(a, b)$$ is not equal to $$(b,a)$$.

(3) An ordered pair is a set: $$(a,b) = \{ \{a\} , \{a,b\} \}$$.

Does the problem lie in proposition (2): should one say, instead of "in an ordered pair there is an order", that "an ordered pair is an order"? How can these propositions be formulated rigorously so as to make them compatible?

You formally define $$(a,b)$$ to be equal to $$\{\{a\}, \{a,b\}\}$$. That's the only formal definition. The rest is our interpretation. We notice that using the definition above, $$(a,b)$$ is not equal to $$(b,a)$$ if $$a\neq b$$ (*), and therefore, in the newly introduced symbol $$(.,.)$$, order matters. That's why we call this newly introduced symbol an "ordered pair". In other words, (2) is not a proposition, it is a consequence of (3).

(*) Note that the only thing you need to prove $$(a,b)\neq (b,a)$$ is that $$a\neq b$$. This is because, if $$a\neq b$$, then the set $$\{a\}$$ is an element of $$(a,b)$$ (because $$(a,b)=\{\{a\}, \{a,b\}\}$$), but it is not an element of $$(b,a)$$ (because $$\{a\}\neq \{b\}$$ and $$\{a\}\neq\{b,a\}$$, and there are no other elements in $$(b,a)$$).

An order is a relation between elements, not the elements themselves. Propositions 1 and 2 are fine as they stand. (2) is a definition on "reader level", and defines a piece of notation we would like to make use of and how to think about its use intuitively. (3) is a formal, "lower-level" definition of that same notation that makes sure we haven't really made any new set theory in the process, but rather that we are still (under the hood) using only the regular (unordered) sets we already had. There's no incompatibility to be fixed.
In fact $$(a,b)=\{ \{a\} , \{a,b\} \} =\{ \{a,b\} ,\{a\} \}=\{ \{a\} , \{b,a\} \} =\{ \{b,a\} ,\{a\} \}.$$
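As an illustration going beyond the original answers, Kuratowski's definition can be modeled directly with Python frozensets, making both facts above checkable: the inner sets are unordered (all four spellings denote the same set), yet the pair itself is ordered whenever a ≠ b.

```python
def kuratowski_pair(a, b):
    """Model the ordered pair (a, b) as the set {{a}, {a, b}} using hashable frozensets."""
    return frozenset({frozenset({a}), frozenset({a, b})})

# Sets are unordered: writing the elements in any order denotes the same set
p = kuratowski_pair(1, 2)
assert p == frozenset({frozenset({2, 1}), frozenset({1})})

# ...yet the pair itself is ordered: (1, 2) != (2, 1)
assert kuratowski_pair(1, 2) != kuratowski_pair(2, 1)

# Degenerate case: (a, a) collapses to {{a}}, which is still a valid pair
assert kuratowski_pair(3, 3) == frozenset({frozenset({3})})
```

The frozensets are essential: an ordinary Python set is unhashable and cannot be an element of another set, which mirrors the set-theoretic requirement that elements of sets be sets themselves.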
http://events.berkeley.edu/?event_ID=108231&date=2017-03-23&tab=all_events
## Student Applied Math Talk: Adaptive compression for eigenvalue problems Seminar | March 23 | 5-6 p.m. | 736 Evans Hall Michael Lindsey, UC Berkeley Department of Mathematics Large, dense Hermitian eigenvalue problems play a central role in computational quantum chemistry. Often the matrices for these problems are of the form $A+B$, where multiplication by $A$ is cheap, multiplication by $B$ (and hence also by $A+B$) is costly, and $\Vert A \Vert_2 \gg \Vert B \Vert_2$. Although $B$ is less significant than $A$, it is still too significant to be neglected, and the evaluation of matrix-vector products $Bv$ typically constitutes the vast majority of the computational cost of standard iterative approaches. While $B$ itself cannot necessarily be approximated well by a low-rank matrix, it can be compressed adaptively with respect to a low-dimensional subspace that is iteratively updated until convergence to the desired eigenspace is achieved. We will discuss the properties of the aforementioned ‘adaptive compression’ operation, as well as the convergence of the associated adaptive compression method for solving eigenvalue problems, which has been adopted in community electronic structure software packages such as Quantum ESPRESSO. In particular, we will explain how to prove local convergence with an asymptotic rate, as well as global convergence that holds generically in a strong sense. The proof proceeds by studying the adaptive compression method as a dynamical system and ultimately takes some surprising turns through rather diverse fields of math. events@math.berkeley.edu
https://lexique.netmath.ca/en/truncated-pyramid/
# Truncated Pyramid

A polyhedron obtained by cutting a pyramid with a plane that is not parallel to its base and that intersects all of its generatrices.

• Of the two resulting polyhedra, the one that does not contain the apex of the pyramid is called a truncated pyramid.
• If the plane that cuts the pyramid is parallel to the base, the truncated pyramid is called a frustum of a pyramid.
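For the parallel-cut (frustum) case mentioned in the last bullet, the standard formula V = h·(A₁ + A₂ + √(A₁·A₂))/3 gives the volume from the two parallel face areas and the height. This formula and the square-pyramid numbers below are illustrative additions, not part of the glossary entry.

```python
import math

def frustum_volume(a1, a2, h):
    """Volume of a pyramid frustum with parallel face areas a1, a2 and height h."""
    return h * (a1 + a2 + math.sqrt(a1 * a2)) / 3.0

# Example: square pyramid with base side 4, truncated to top side 2, frustum height 3.
v = frustum_volume(4 * 4, 2 * 2, 3)

# Cross-check by subtracting the cut-off top pyramid (height 3, base area 4) from the
# full pyramid (height 6, base area 16): 16*6/3 - 4*3/3 = 32 - 4 = 28
assert v == 28.0
```

Note this volume formula assumes the cutting plane is parallel to the base; for an oblique cut (the general truncated pyramid defined above) no such closed form in the two face areas exists.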
http://www.mpdigest.com/2016/04/25/circuit-designers-notebook-effective-capacitance-vs-frequency/
# Circuit Designer's Notebook: Effective Capacitance vs Frequency

by Richard Fiore, Director, RF Applications Engineering - American Technical Ceramics Corp.

It is generally assumed that the capacitance value selected from a vendor's catalog is constant over frequency. This is essentially true for applications with applied frequencies that are well below the capacitor's self-resonant frequency. However, as the operating frequency approaches the capacitor's self-resonant frequency, the capacitance value will appear to increase, resulting in an effective capacitance (CE) that is larger than the nominal capacitance. This article will address the details of effective capacitance as a function of the application operating frequency.

In order to illustrate this phenomenon, a simplified lumped element model of a capacitor connected to a frequency source operating in a network will be considered, as depicted in Figure 1. This model has been selected because the effective capacitance is largely a function of the net reactance developed between the capacitor and its parasitic series inductance (LS). The equivalent series resistance "ESR" shown in this illustration does not have a significant effect on the effective capacitance.

Effective Capacitance: The nominal capacitance value (C0) is established by a measurement performed at 1 MHz. In typical RF applications, the applied frequency is generally much higher than the 1 MHz measurement frequency; hence at these frequencies, the inductive reactance (XL) associated with the parasitic series inductance (LS) becomes significantly large as compared to the capacitive reactance (XC). Figure 2 illustrates that there is a disproportionate increase in XL as compared to XC with increasing frequency. This results in an effective capacitance that is greater than the nominal capacitance.
Finally, at the capacitor's series resonant frequency, the two reactances are equal and opposite, yielding a net reactance of zero; the expression for CE becomes undefined at this frequency. As illustrated in Figure 1, the physical capacitor can be represented as C0 in series with LS. The impedance of the series combination of C0 and LS can then be set equal to that of CE, which may be referred to as an "ideal equivalent" capacitor. This yields the following equations:

j(ω LS − 1/(ω C0)) = −j 1/(ω CE)

ω² LS − 1/C0 = −1/CE

The relationship between the operating frequency F0 and the effective capacitance CE can then be stated as:

CE = C0 / (1 − ω² LS C0)

CE = C0 / [1 − (2π F0)² LS C0]

Where:
CE = effective capacitance at the application frequency F0
C0 = nominal capacitance at 1 MHz
LS = parasitic inductance (H)
F0 = operating frequency (Hz)

From this relationship, it can be seen that as the applied frequency increases, the denominator becomes smaller, thereby yielding a larger effective capacitance. At the capacitor's series resonant frequency, the denominator goes to zero and the expression becomes undefined. The relationship of CE vs frequency is a hyperbolic function, as illustrated in Figure 3.

Example: Consider an ATC 100A series 100 pF capacitor. Calculate the effective capacitance (CE) at 10 MHz, 100 MHz, 500 MHz, 900 MHz, and 950 MHz.

Solution: Calculate by using the relationship CE = C0/[1 − (2π F0)² LS C0]. Refer to Table 1.

Application Considerations: Impedance matching and minimum-drift applications such as filters and oscillators require special attention regarding CE. For applied frequencies below the capacitor's self-resonant frequency, the net impedance will be capacitive (−j), whereas for applied frequencies above resonance, the net impedance will be inductive (+j). Operating above series resonance correspondingly places the impedance of the capacitor on the inductive side of the Smith chart (+j).
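The CE values of Table 1 are not reproduced here. As a sketch, the formula above can be evaluated in Python for a 100 pF capacitor using a hypothetical parasitic inductance of LS = 1 nH (an assumed round number for illustration, not ATC's datasheet figure for the 100A series):

```python
import math

def effective_capacitance(c0, ls, f0):
    """CE = C0 / (1 - (2*pi*F0)^2 * LS * C0), valid below series resonance."""
    return c0 / (1.0 - (2.0 * math.pi * f0) ** 2 * ls * c0)

c0 = 100e-12   # 100 pF nominal capacitance
ls = 1e-9      # 1 nH parasitic inductance (hypothetical value)

# Series resonant frequency, where the denominator goes to zero
f_srf = 1.0 / (2.0 * math.pi * math.sqrt(ls * c0))
print(f_srf / 1e6)   # about 503 MHz for these assumed values

for f in (10e6, 100e6, 400e6):   # frequencies below the assumed SRF
    print(f / 1e6, effective_capacitance(c0, ls, f) * 1e12)
```

With these assumptions, CE barely departs from 100 pF at 10 MHz but more than doubles by 400 MHz as the operating point approaches resonance, showing the hyperbolic rise of Figure 3.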
When designing for these applications, both CE and the sign of the net impedance at the operating frequency must be carefully considered. In contrast, the majority of coupling, bypass and DC blocking applications are usually not sensitive to the sign of the impedance, which can be capacitive or inductive, as long as its magnitude is low at the applied frequency. The effective capacitance will be very large and the net impedance very low when operating close to resonance. At resonance, the net impedance will be equal to the magnitude of the ESR and the capacitance will be undefined.
http://www.science.gov/topicpages/v/vacuum+plasma+spraying.html
Note: This page contains sample records for the topic vacuum plasma spraying from Science.gov. While these samples are representative of the content of Science.gov, they are not comprehensive nor are they the most current set. We encourage you to perform a real-time search of Science.gov to obtain the most current and comprehensive results. Last update: November 12, 2013.

1. PubMed

Hydroxyapatite coatings on titanium substrates were produced using two thermal spray techniques: vacuum plasma spraying and detonation gun spraying. X-ray diffraction was used to compare crystallinity and residual stresses in the coatings. Porosity was measured using optical microscopy in conjunction with an image analysis system. Scanning electron microscopy and surface roughness measurements were used to characterise the surface morphologies of the coatings. The vacuum plasma sprayed coatings were found to have a lower residual stress, a higher crystallinity and a higher level of porosity than the detonation gun coatings. It is concluded that consideration needs to be given to the significance of such variations within the clinical context. PMID:10048403

Gledhill, H C; Turner, I G; Doyle, C (1999-02-01)

2. National Technical Information Service (NTIS)

The Thermal Spray Laboratory at NASA's Marshall Space Flight Center has developed and demonstrated a fabrication technique using Vacuum Plasma Spray (VPS) to form structural components from a tungsten/rhenium alloy. The components were assembled into an a...

F. R. Zimmerman; D. A. Hissam; H. P. Gerrish; W. M. Davis (1999-01-01)

3. Silicon coatings were fabricated by vacuum plasma spraying technology. The morphology, composition, and microstructure of the coatings were investigated by FESEM, XRD, WDX, and TEM. The physical, mechanical, and thermal properties of the coatings were characterized. The results showed that vacuum plasma sprayed silicon coatings were compact and consisted of well-molten silicon splats.
The oxidation introduced by the spraying process

Yaran Niu; Xuanyong Liu; Xuebin Zheng; Heng Ji; Chuanxian Ding (2009-01-01)

4. The plasma energy input rate of a dc Ar + H2 plasma jet has been measured experimentally under a series of vacuum plasma spraying (VPS) processing conditions. The plasma energy input rate increased approximately linearly with increasing plasma current and Ar flow rate, increased approximately parabolically with increasing H2 flow rate, but did not vary measurably with changes in VPS

Y. Y. Zhao; P. S. Grant; B. Cantor (2000-01-01)

5. Vacuum plasma spray forming is being used in the near-net fabrication of aerospace components at the Marshall Space Flight Center (MSFC), Alabama. For example, vacuum-plasma-sprayed (VPS) NARloy-Z (a copper-based alloy with high thermal conductivity) is used to form the combustion chamber liner of liquid rocket engines. VPS NARloy-Z possesses properties comparable with the wrought alloy at temperatures ranging from −259

P. S. Chen; J. H. Sanders; Y. K. Liaw; F. Zimmermann (1995-01-01)

6. Vacuum plasma spray (VPS) forming of tungsten-based metal matrix nanocomposites (MMCs) has been shown to be a cost-effective and time-saving method for the formation of bulk monolithic nanostructured thermo-mechanical components. Spray drying of powder feedstock appears to have a significant effect on the improved mechanical properties of the bulk nanocomposite. The reported elastic modulus of the nanocomposite nearly doubles

K. E. Rea; V. Viswanathan; A. Kruize; J. Th. M. De Hosson; S. ODell; T. McKechnie; S. Rajagopalan; R. Vaidyanathan; S. Seal (2008-01-01)

7. The vacuum plasma spray (VPS) deposition of metal, ceramic, and cermet coatings has been investigated using designed statistical experiments.
Processing conditions that were considered likely to have a significant influence on the melting characteristics of the precursor powders and hence deposition efficiency were incorporated into full and fractional factorial experimental designs. The processing of an alumina powder was very sensitive

R. Kingswell; K. T. Scott; L. L. Wassell (1993-01-01)

8. The vacuum plasma spray (VPS) deposition of metal, ceramic, and cermet coatings has been investigated using designed statistical experiments. Processing conditions that were considered likely to have a significant influence on the melting characteristics of the precursor powders and hence deposition efficiency were incorporated into full and fractional factorial experimental designs. The processing of an alumina powder was very sensitive to variations in the deposition conditions, particularly the injection velocity of the powder into the plasma flame, the plasma gas composition, and the power supplied to the gun. Using a combination of full and fractional factorial experimental designs, it was possible to rapidly identify the important spraying variables and adjust these to produce a deposition efficiency approaching 80 percent. The deposition of a nickel-base alloy metal powder was less sensitive to processing conditions. Generally, however, a high degree of particle melting was achieved for a wide range of spray conditions. Preliminary experiments performed using a tungsten carbide/cobalt cermet powder indicated that spray efficiency was not sensitive to deposition conditions. However, microstructural analysis revealed considerable variations in the degree of tungsten carbide dissolution. The structure and properties of the optimized coatings produced in the factorial experiments are also discussed.

Kingswell, R.; Scott, K. T.; Wassell, L. L.
1993-06-01

9. The thermoelectric properties of magnesium silicide samples prepared by vacuum plasma spray (VPS) are compared with those made by the conventional hot-press method using the same feedstock powder. Thermal conductivity, electrical conductivity, Seebeck coefficient, and figure of merit are characterized from room temperature to 700 K. X-ray diffraction and scanning electron microscopy of the samples are obtained to assess how phase and microstructure influence the thermoelectric properties. Carrier concentration and Hall mobility are obtained from Hall effect measurements, which provide further insight into the electrical conductivity and Seebeck coefficient mechanisms. Low-temperature electrical conductivity measurements suggest a 3D variable-range hopping effect in the samples. VPS samples achieved a maximum ZT = 0.16 at 700 K, which is around 30% of the hot-press sample's ZT = 0.55 at 700 K using the same raw powder. The results suggest that thermal spray is a potential deposition technique for thermoelectric materials. Fu, Gaosheng; Zuo, Lei; Longtin, Jon; Nie, Chao; Gambino, Richard 2013-10-01

10. The purpose of this study is to improve ceramic coatings having a high, stable electrostatic adsorption force. The coating is intended for Johnsen-Rahbek-type electrostatic chucks used to hold silicon wafers inside vacuum chambers during processes such as etch, CVD and PVD in semiconductor manufacturing. Previously, the authors developed a dielectric ceramic coating for electrostatic chucks using atmospheric plasma spraying (APS). This ceramic coating was not suitable because of its unstable electrostatic adsorption force. In a subsequent study, a vacuum-plasma-sprayed (VPS) Al2O3-7.5 mass% TiO2 coating was investigated. As a result, it was found that the VPS coating has stable electrical resistivity and adsorption force. The dielectric constant of the VPS Al2O3-TiO2 coating was sufficient for application to electrostatic chucks.
On the other hand, results on the residual adsorption force and its duration after power-off suggested that the residual adsorption characteristic was not adequate. Takeuchi, Jun-Ichi; Yamasaki, Ryo; Tani, Kazumi; Takahashi, Yasuo

11. Arc velocity and erosion rate measurements were performed on nanostructured pure Cu cathodes in 10^-5 Torr vacuum (1.3324 mPa), in an external magnetic field of 0.04 T. Five different kinds of nanostructured cathodes were produced by spraying pure Cu powders of three different sizes on Cu coupons by atmospheric pressure plasma spraying and high velocity oxygen fuel spraying techniques. The erosion rates of these electrodes were obtained by measuring the weight loss of the electrode after igniting as many as 135 arc pulses, each of which was 500 s long at an arc current of 125 A. The arc erosion values measured on three kinds of nanostructured coatings were 50% lower than those of conventional pure massive Cu cathodes. Microscopic analyses of the arc traces on these nanostructured coatings show that the craters formed on these coatings were smaller than those formed on conventional Cu (<1 μm in diameter compared with 7-12 μm in diameter on conventional Cu). It was concluded that nanostructured cathodes had lower erosion rates than conventional pure Cu cathodes. Rao, Lakshminarayana; Munz, Richard J.; Meunier, Jean-Luc 2007-07-01

12. The subject of this study is overlay coatings of MCrAlY alloy sprayed by a vacuum plasma spray (VPS) process for protection against high-temperature corrosion and oxidation of gas turbine components. Reaction diffusion behaviors at the interface between the MCrAlY coatings and the substrate, which have an important effect on coating degradation, have not always been clarified. Y. Itoh; M. Tamura 1999-01-01

13. In the aerospace field, as well as in the stationary gas turbine field, thermally sprayed coatings are used to improve the surface properties of nickel superalloy materials.
Coatings are commonly used as bond coats and antioxidation materials (mainly MCrAlY alloys) and as thermal barrier coatings (mainly yttria partially stabilized zirconia). The purpose of the current study was to assess the properties of thermally sprayed CoNiCrAlY bond coats, comparing the performance of three different techniques: vacuum plasma spray (VPS), high velocity oxygen fuel (HVOF), and axial plasma spray (AxPS). The quality of the deposited films has been assessed and compared in terms of microstructural (porosity, oxide concentration, presence of unmelted particles) and mechanical (hardness) characteristics. The surface composition and morphology of the coatings were also determined. Specific efficiency tests were performed for the three examined technologies. The highest-quality coatings are obtained by VPS, but HVOF- and AxPS-sprayed films also have properties that can make them attractive for some applications. Scrivani, A.; Bardi, U.; Carrafiello, L.; Lavacchi, A.; Niccolai, F.; Rizzi, G. 2003-12-01

14. TiNi shape memory alloy has been used in many application fields due to its excellent shape memory effect (SME) and superelasticity (SE). However, it is difficult and costly to machine TiNi alloy into complex shapes due to its low ductility. To address this problem, one approach is near-net-shape processing by vacuum plasma spraying (VPS). In this study, the transformation behavior, mechanical properties and microstructure of TiNi alloy processed by the VPS method are studied. The as-sprayed and homogenized TiNi alloy exhibited compositional variations within the sample, though both samples exhibited a single TiNi phase with low transformation temperatures, below 170 K. Aging the homogenized sample at 773 K for 18 ks led to an increase in the transformation temperature, resulting in good transformation behavior.
Specifically, DSC measurement revealed clear transformation peaks due to martensite, austenite and R-phase transitions. Compression testing of a sample aged at 773 K for 18 ks exhibited a good SME below Mf and superelasticity (SE) above Af. The recoverable strains due to SME and SE were more than 2.4% and 5.0%, respectively. TEM studies confirmed that a Ti3Ni4 precipitate was formed by aging at 773 K for 18 ks. Nakayama, Hiroyuki; Taya, Minoru; Smith, Ronald W.; Nelson, Travis; Yu, Michael; Rosenzweig, Edwin 2006-04-01

15. In this study, molybdenum disilicide (MoSi2) coatings were fabricated by vacuum plasma spraying technology. Their morphology, composition, and microstructure characteristics were intensively investigated. The oxidation behavior of the MoSi2 coatings was also explored. The results show that the MoSi2 coatings are compact, with porosity less than 5%. Their microstructure exhibits a typical lamellar character and is mainly composed of tetragonal and hexagonal MoSi2 phases. A small amount of tetragonal Mo5Si3 phase is randomly distributed in the MoSi2 matrix. A rapid weight gain is found between 300 and 800 °C. The MoSi2 coatings exhibit excellent oxidation-resistant properties at temperatures between 1300 and 1500 °C, which results from the continuous dense glassy SiO2 film formed on their surface. A thick layer composed of Mo5Si3 is found to be present under the SiO2 film for the MoSi2 coatings treated at 1700 °C, suggesting that continuous oxidation took place. Niu, Yaran; Fei, Xiaoai; Wang, Hongyan; Zheng, Xuebin; Ding, Chuanxian 2013-03-01

16. A B4C coating was fabricated by vacuum plasma spraying, and the tribological properties of the coating against WC-Co alloy were evaluated by sliding wear tests. An Al2O3 coating, one of the most commonly used wear-resistant coatings, was employed as a comparison in the tribological evaluation.
The results show that the B4C coating is composed of a large amount of nanostructured particles along with some amorphous phases. Both the friction coefficient and the wear rate of the B4C coating are much lower than those of the Al2O3 coating, and the tribological evaluation reveals a decreasing trend in both friction coefficient and wear rate of the B4C coating with increasing normal load, which is explained in terms of the formation of a protective transfer layer on its worn surface. Tribofilm wear is found to be the dominant wear mechanism in the B4C/WC-Co alloy friction pair. Zhu, Huiying; Niu, Yaran; Lin, Chucheng; Huang, Liping; Ji, Heng; Zheng, Xuebin 2012-12-01

17. High-density W coatings on reduced activation ferritic/martensitic steel (RAF/M) have been produced by the vacuum plasma spraying (VPS) technique, and heat flux experiments have been carried out on them to evaluate their potential as plasma-facing armor in a fusion device. In addition, quantitative analyses of the temperature profile and thermal stress have been carried out using finite element analysis (FEA) to evaluate the thermal properties. No cracks or exfoliation formed during steady-state and cyclic heat-loading experiments at a surface temperature of 700 °C. In addition, the stress distribution and maximum stress at the interface between VPS-W and RAF/M were obtained by FEA. On the other hand, exfoliation occurred within the VPS-W coating near the VPS-W/RAF/M interface at a surface temperature of 1300 °C under cyclic heat loading. Tokunaga, K.; Hotta, T.; Araki, K.; Miyamoto, Y.; Fujiwara, T.; Hasegawa, M.; Nakamura, K.; Ezato, K.; Suzuki, S.; Enoeda, M.; Akiba, M.; Nagasaka, T.; Kasada, R.; Kimura, A. 2013-07-01

18. Thick, hard-magnetic Nd-Fe-B films (~1 mm) were deposited on different substrates (Cu, steel) by a low-pressure plasma-spraying process.
The properties of the applied Nd-Fe-B powders (e.g., grain size, composition) and the conditions of the spraying process were optimized with respect to the mechanical and magnetic properties of the films. Film thicknesses up to 1.2 mm were achieved with good adhesive properties (bond strength > 40 MPa). Cracking at the interface or within the films during the deposition process could be suppressed by adjusting the temperature profile of the substrate and controlling the deposition rate. Depending on the maximum temperature of the substrate and the thickness of the Nd-Fe-B films, either amorphous or microcrystalline structures were obtained. In general, the magnetic properties were improved by a post-deposition annealing treatment. Coercivities HcJ up to 16 kA/cm and isotropic remanences of about 0.6 T were achieved. Rieger, G.; Wecker, J.; Rodewald, W.; Sattler, W.; Bach, Fr.-W.; Duda, T.; Unterberg, W. 2000-05-01

19. [PubMed] Tantalum, as a potential metallic implant biomaterial, is attracting more and more attention because of its excellent corrosion resistance and biocompatibility. However, its significantly high elastic modulus and large mechanical incompatibility with bone tissue make it unsuitable for load-bearing implants. In this study, porous tantalum coatings were first successfully fabricated on titanium substrates by vacuum plasma spraying (VPS), which would exploit the excellent biocompatibility of tantalum while reducing its effective elastic modulus toward that of bone tissue. We evaluated the cytocompatibility and osteogenic activity of the porous tantalum coatings using human bone marrow stromal cells (hBMSCs), and their ability to repair rabbit femur bone defects. The morphology and actin cytoskeletons of hBMSCs were observed via electron microscopy and confocal microscopy, and the cell viability, proliferation and osteogenic differentiation potential of hBMSCs were examined quantitatively by PrestoBlue assay, Ki67 immunofluorescence assay, real-time PCR and ALP staining.
For in vivo evaluation, the repaired femurs were assessed by histomorphology and double fluorescence labeling 3 months after the operation. Porous tantalum coating surfaces promoted hBMSC adhesion, proliferation and osteogenic activity, and showed better osseointegration and a faster new bone formation rate than the titanium coating control. Our observations suggest that the porous tantalum coatings have good biocompatibility, enhance osteoinductivity in vitro and promote new bone formation in vivo. The porous tantalum coatings prepared by VPS are a promising strategy for bone regeneration. PMID:23776648 Tang, Ze; Xie, Youtao; Yang, Fei; Huang, Yan; Wang, Chuandong; Dai, Kerong; Zheng, Xuebin; Zhang, Xiaoling 2013-06-11

20. Adhesion strength is one of the critical properties of a plasma-sprayed coating. In this study, plasma-sprayed Al2O3-13 wt.% TiO2/NiCrAl coatings were annealed at 300-900 °C for 6 h in vacuum. The tensile bond strength and porosity of the coatings were investigated. The microstructure and the fracture surfaces were studied using optical microscopy, scanning electron microscopy, and X-ray diffraction. It was found that the tensile bond strength of the coatings increased with annealing temperature up to 500 °C, reaching a maximum value of 41.2 MPa, and then decreased as the annealing temperature increased further. All coatings presented a brittle fracture, and the fracture occurred inside the ceramic coatings, except for the coating annealed at 500 °C, which had a mixed brittle-ductile fracture that occurred at the interface between the bond coating and the substrate. Jingjing, Zhang; Zehua, Wang; Pinghua, Lin; Hongbin, Yuan; Zehua, Zhou; Shaoqun, Jiang 2012-09-01

21. Two coating technologies, magnetron sputtering and vacuum plasma spraying, have been investigated for their capability in producing functionally graded tungsten/EUROFER97 layers.
In a first step, non-graded layers with different mixing ratios were deposited on tungsten substrates and characterized by nanoindentation, macroindentation, X-ray diffraction, and transmission, Auger and scanning electron microscopy. The thermal stability of the sprayed layers against heat treatments at 800-1100 °C for 60 min was further analyzed. In a second step, the produced functionally graded layers deposited on tungsten substrates were joined to EUROFER97 bulk material by diffusion bonding. The bonds and the graded joints were microscopically characterized and exposed to thermal cycles between 20 °C and 650 °C. Results from this study show that both coating technologies are well suited for the synthesis of functionally graded tungsten/EUROFER97 coatings. This is important in providing insights for the future development of joints with functionally graded interlayers. Weber, T.; Stüber, M.; Ulrich, S.; Vaßen, R.; Basuki, W. W.; Lohmiller, J.; Sittel, W.; Aktaa, J. 2013-05-01

22. [National Technical Information Service (NTIS)] Plasma spray systems are used to deposit high temperature materials on substrates to form coatings. Thermal analysis of these systems will assist in determining spray parameters for different materials. Infrared videothermography was used to measure tempe... M. D. Kelly; L. D. Abney 1985-01-01

23. Plasma spray technology is being evaluated as a means to address important fabrication and maintenance problems associated with plasma-interactive components in magnetic fusion devices (e.g., limiters, divertors, and some first wall surfaces). Low-oxygen vacuum plasma sprayed copper has been tested as a ductile, high-thermal-conductivity interlayer to limit thermal stress and prevent cracking when brazing pyrolytic graphite (PG) tiles... M. F. Smith; C. D. Croessmann; F. M. Hosking; R. D. Watson; J. A. Koski 1991-01-01

24. Nano-titania coatings were deposited via vacuum plasma spraying.
The microstructure and chemical state of the coatings were investigated with SEM, TEM, XRD and XPS. The results showed that the vacuum plasma sprayed titanium oxide coatings possessed a porous structure with small pores and agglomerated nanosized grains. The ac electrical data were measured in the frequency range 1 ≤ f ≤ ... Yingchun Zhu; Chuanxian Ding 2000-01-01

25. Figure 1: Schematic diagram of the plasma spraying. Particles were caught by a special instrument at a distance of 3 m from the nozzle. Details of the experimental procedure are given elsewhere (3). Since it is generally believed (see, for instance, (4)) that oxygen contained in the water-stabilized system must cause strongly oxidizing properties of the flame, a detailed X-ray phase... J. Ilavsky; J. Forman; P. Chraska 1992-01-01

26. Thermal barrier coating (TBC) systems are widely used in gas turbines on thermally highly loaded parts such as blades, vanes or the combustion chamber to improve the performance of the engines. The standard plasma-sprayed systems consist of a vacuum plasma-sprayed (VPS) MCrAlY (M = Ni or Co) and an atmospherically plasma-sprayed (APS) ceramic top layer made of yttria partially stabilized zirconia... R. Vaßen; J.-E. Döring; M. Dietrich; H. Lehmann; D. Stöver

27. [SciTech Connect] Understanding the fundamental metallurgy of vacuum plasma spray formed materials is the key to enhancing and developing full material properties. Investigations have shown that the microstructure of plasma sprayed materials must evolve from a powder splat morphology to a recrystallized grain structure to assure high strength and ductility. A fully, or near fully, dense material that exhibits a powder splat morphology will perform as a brittle material compared to a recrystallized grain structure for the same amount of porosity. The metallurgy and material properties of nickel, iron, and copper base alloys will be presented and correlated to microstructure. McKechnie, T. N.; Liaw, Y. K.; Zimmerman, F. R.; Poorman, R. M.
1992-01-01

28. Thermal barrier coating (TBC) specimens have been prepared by plasma spraying. A vacuum plasma spray (VPS) MCrAlY bond coat and an atmospheric plasma spray (APS) zirconia top coat were deposited onto a nickel superalloy substrate. The stiffness of detached top coats was measured by cantilever bending and also by nanoindentation procedures. Measurements were made on specimens in the as-sprayed state and... J. A. Thompson; T. W. Clyne 2001-01-01

29. Thermal spray processing is used to confer specific in-service properties to components via the production of a coating between 50 μm (minimum value) and a few millimeters thick. Thermal spray represents a global market of about 4.8 billion euros (i.e., about US$5 billion) in 2004, 30% of which is European-based. 50% of this activity is devoted to plasma spray processing, with about... Pierre Fauchais; Ghislain Montavon; Michel Vardelle; Julie Cedelle 2006-01-01

30. [National Technical Information Service (NTIS)] Cermets of tantalum and alumina were fabricated by plasma spraying, with the amount of alumina varied from 0 to 65 percent (by volume). Each of four compositions was then measured for tensile strength, elastic modulus, and coefficient of thermal expansion... C. M. Kramer 1977-01-01

31. [SciTech Connect] ITER first wall beryllium mockups, which were fabricated by vacuum plasma spraying the beryllium armor, have survived 3000 thermal fatigue cycles at 1 MW/m2 without damage during high heat flux testing at the Plasma Materials Test Facility at Sandia National Laboratory in New Mexico. The thermal and mechanical properties of the plasma sprayed beryllium armor have been characterized.
Results are reported on the chemical composition of the beryllium armor in the as-deposited condition; the thermal conductivity and thermal expansion both through the thickness and normal to the through-thickness direction; the four-point bend flexure strength and edge-notch fracture toughness of the beryllium armor; the bond strength between the beryllium armor and the underlying heat sink material; and ultrasonic C-scans of the Be/heat sink interface. Castro, Richard G.; Vaidya, Rajendra U.; Hollis, Kendall J. 1997-12-31

32. [SciTech Connect] Plasma-sprayed hydroxyapatite (HA) coatings are used on metallic implants to enhance the bonding between the implant and bone in the human body. The coating process was carried out at a different spraying power for each spraying condition. The coatings form from the rapid solidification of molten and partly molten particles that impact the surface of the substrate at high velocity and high temperature. The study concentrated on spraying powers between 23 and 31 kW. The effect of the different powers on the coating microstructure was investigated using scanning electron microscopy (SEM), and the phase composition was evaluated using X-ray diffraction (XRD) analysis. The coating surface morphology showed a distribution of molten and partially melted particles and some micro-cracks. The produced coatings were found to be porous, as observed from the cross-sectional morphology. The XRD results indicated the presence of the crystalline phase of HA, and each of the patterns was similar to that of the initial powder. Regardless of the spraying power, all the coatings had similar XRD patterns. Mohd, S. M.; Abd, M. Z.; Abd, A. N. [Advanced Material Centre (AMREC), SIRIM Bhd, Lot 34, Jalan Hi-Tech 2/4, Kulim Hi-Tech Park, 09000 Kulim (Malaysia)] 2010-03-11

33. [NASA Astrophysics Data System (ADS)] Plasma-sprayed hydroxyapatite (HA) coatings are used on metallic implants to enhance the bonding between the implant and bone in the human body.
The coating process was carried out at a different spraying power for each spraying condition. The coatings form from the rapid solidification of molten and partly molten particles that impact the surface of the substrate at high velocity and high temperature. The study concentrated on spraying powers between 23 and 31 kW. The effect of the different powers on the coating microstructure was investigated using scanning electron microscopy (SEM), and the phase composition was evaluated using X-ray diffraction (XRD) analysis. The coating surface morphology showed a distribution of molten and partially melted particles and some micro-cracks. The produced coatings were found to be porous, as observed from the cross-sectional morphology. The XRD results indicated the presence of the crystalline phase of HA, and each of the patterns was similar to that of the initial powder. Regardless of the spraying power, all the coatings had similar XRD patterns. Mohd, S. M.; Abd, M. Z.; Abd, A. N. 2010-03-01

34. [Microsoft Academic Search] In order to investigate the thermal response of tungsten coatings on carbon and copper substrates produced by vacuum plasma spray (VPS) or inert gas plasma spray (IPS), annealing and cyclic heat load experiments on these coatings were conducted. It is indicated that the multi-layered tungsten and rhenium interface of VPS-W/CFC failed to act as a diffusion barrier at elevated temperature and... X. Liu; L. Yang; S. Tamura; K. Tokunaga; N. Yoshida; N. Noda; Z. Xu 2004-01-01

35. [Microsoft Academic Search] Vacuum and surface technology have significantly contributed to the rapid progress in microelectronics, data storage, displays, photonics, aerospace, automotive, architectural glass and other industries. One of the key elements in the impressive development of vacuum and surface technology is the increased use of plasma processes.
Plasma can be used as a tool for heating, evaporation, sputtering, etching and ionization, as... Horst Heidsieck 1999-01-01

36. [DOEpatents] A means for monitoring the material portion in the flame of a plasma spray gun during spraying operations is provided. A collimated detector, sensitive to certain wavelengths of light emission, is used to locate the centroid of the material with each pass of the gun. The response from the detector is then relayed to the gun controller to be used to automatically realign the gun. Abbatiello, Leonard A. (Oak Ridge, TN); Neal, Richard E. (Heiskell, TN) 1978-01-01

37. [SciTech Connect] This detailed report summarizes 8 contributions from a thermal spray conference that was held in late 1991 at Brookhaven National Laboratory (Upton, Long Island, NY, USA). The subject of "Plasma Spray Processing" is presented under the subject headings of Plasma-particle interactions, Deposit formation dynamics, Thermal properties of thermal barrier coatings, Mechanical properties of coatings, Feedstock materials, Porosity: an integrated approach, Manufacture of intermetallic coatings, and Synchrotron X-ray microtomographic methods for thermal spray materials. Each section is intended to present a concise statement of a specific practical and/or scientific problem, then describe current work that is being performed to investigate this area, and finally to suggest areas of research that may be fertile for future activity. Berndt, C.C.; Brindley, W.; Goland, A.N.; Herman, H.; Houck, D.L.; Jones, K.; Miller, R.A.; Neiser, R.; Riggs, W.; Sampath, S.; Smith, M.; Spanne, P. [State Univ. of New York, Stony Brook, NY (United States), Thermal Spray Lab.] 1991-12-31

38. [SciTech Connect] This detailed report summarizes 8 contributions from a thermal spray conference that was held in late 1991 at Brookhaven National Laboratory (Upton, Long Island, NY, USA).
The subject of "Plasma Spray Processing" is presented under the subject headings of Plasma-particle interactions, Deposit formation dynamics, Thermal properties of thermal barrier coatings, Mechanical properties of coatings, Feedstock materials, Porosity: an integrated approach, Manufacture of intermetallic coatings, and Synchrotron X-ray microtomographic methods for thermal spray materials. Each section is intended to present a concise statement of a specific practical and/or scientific problem, then describe current work that is being performed to investigate this area, and finally to suggest areas of research that may be fertile for future activity. Berndt, C.C.; Brindley, W.; Goland, A.N.; Herman, H.; Houck, D.L.; Jones, K.; Miller, R.A.; Neiser, R.; Riggs, W.; Sampath, S.; Smith, M.; Spanne, P. (State Univ. of New York, Stony Brook, NY (United States), Thermal Spray Lab.) 1991-01-01

39. [NASA Astrophysics Data System (ADS)] Plasma Window as a Fast Vacuum Valve. A. Hershcovitch, E. Johnson, Brookhaven National Laboratory; J. Noonan, E. Rotela, S. Sharma, A. Khounsary, Argonne National Laboratory. Fast-igniting plasma windows are being considered for use as emergency valves in case of a vacuum breach in a beamline. Plasmas can be ignited faster than mechanical valves can close, without causing damage to beamlines (unlike the presently used millisecond spring-loaded shutters). And plasma windows have a proven capability to separate vacuum from atmosphere. In all existing vacuum valves, motion of solid objects is required. Consequently, the fastest valves or shutters are limited to a closing time of 7x10^-3 s or longer (and a much longer opening time). But intense discharges can be established within a few nanoseconds (10^-9 s). Establishment of the plasma vacuum separator is determined by the motion of fast charged particles (as compared to solid objects in existing valves). Recently, an already established plasma window withstood vacuum breach tests.
These experiments and fast-igniting plasma configurations will be discussed. Hershcovitch, Ady; Johnson, Erik; Noonan, John; Rotela, Elbio; Sharma, Sushil; Khounsary, Ali 1999-11-01

40. [NASA Astrophysics Data System (ADS)] Ni-Cr single splats were plasma-sprayed at room temperature onto aluminum and stainless steel substrates, which were modified by thermal and hydrothermal treatments to control the oxide surface chemistry. The proportions of the different splat types were found to vary as a function of substrate pretreatment, especially when the pretreatment involved heating. It was observed that surface roughness did not correlate with changes in splat morphology. Substrate surfaces were characterized by X-ray photoelectron spectroscopy using in situ heating in vacuum to determine the effect of thermal pretreatment on substrate surface chemistry. It was found that the surface layers were composed primarily of oxyhydroxides. When the substrates were heated to 350 °C, water vapor was released by the dehydration of the oxyhydroxide. Preheating the substrate can remove this water prior to spraying: preheated substrates showed improved physical contact between splat and substrate, which enhanced the formation of disk splats and increased their number. Tran, A. T. T.; Hyland, M. M.; Qiu, T.; Withy, B.; James, B. J. 2008-12-01

41. [Microsoft Academic Search] The benefits and limitations of process diagnostics are investigated for the suspension plasma spraying of yttria-stabilized zirconia thermal barrier coatings.
The methods applied were enthalpy probe measurements, optical emission spectroscopy, and in-flight particle diagnostics. It was shown that the plasma characteristics are not affected negatively by the injection of the ethanol-based suspension, since the combustion of species resulting from ethanol decomposition... Georg Mauer; Alexandre Guignard; Robert Vaßen; Detlev Stöver 2010-01-01

42. [Microsoft Academic Search] Suspension plasma spraying (SPS) offers the manufacture of unique microstructures which are not possible with conventional powdery feedstock. Due to the considerably smaller size of the droplets, and their further fragmentation in the plasma jet, the attainable microstructural features such as splat and pore sizes can be downsized to the nanometer range. Our present understanding of the deposition... Robert Vaßen; Holger Kaßner; Georg Mauer; Detlev Stöver 2010-01-01

43. [Microsoft Academic Search] Suspension plasma spraying (SPS) consists in injecting a non-Newtonian liquid into a d.c. plasma jet, where it is fragmented and then vaporized. The sub-micrometric or nanometric particles contained in the suspension are then accelerated, heated, and partially or totally melted before flattening onto the substrate to form the coating. Such coatings are finely structured and present better thermo-mechanical properties than conventionally... P. Fauchais; G. Montavon; A. Denoirjean; V. Rat; J.-F. Coudert; H. Ageorges; A. Bacciochini; E. Brousse; G. Darut; N. Caron; K. Wittmann-Teneze 2008-01-01

44. [NASA Astrophysics Data System (ADS)] The vacuum-plasma-spraying technique presented in this article is suited to producing aluminum-matrix composites, reinforced with fine ceramic particles, that have a low coefficient of thermal expansion, resulting in a uniform particle dispersion and a bulk porosity of less than 1.5% in the as-sprayed condition.
Plastic deformation of the plates followed by annealing resulted in significant increases in ultimate tensile strength, hardness, and elongation. Smagorinski, M.; Tsantrizos, P.; Grenier, S.; Entezarian, M.; Ajersch, F. 1996-06-01

45. [SciTech Connect] Small-angle neutron scattering (SANS) was used to study the porosity of plasma sprayed deposits of alumina in as-sprayed and heat-treated conditions. SANS results were compared with mercury intrusion porosimetry (MIP) and water immersion techniques. Multiple small-angle neutron scattering yields a volume-weighted effective pore radius (R_eff) for pores with sizes between 0.08 and 10 μm, the pore volume in this size region, and, from the Porod region, the surface area of pores of all sizes. Ilavsky, J.; Herman, H.; Berndt, C.C. [State Univ. of New York, Stony Brook, NY (United States)]; Goland, A.N. [Brookhaven National Lab., Upton, NY (United States)]; Long, G.G.; Krueger, S.; Allen, A.J. [National Inst. of Standards and Technology, Gaithersburg, MD (United States)] 1994-03-01

46. [Microsoft Academic Search] The paper examines and compares the properties of Al2O3 coatings sprayed using two methods: arc plasma spraying (APS) of micron powders (average particle size 45 μm) and suspension plasma spraying (SPS) (average particle size 2.9 μm). A system for feeding the suspension into the plasma spray is developed and fabricated. It is established that SPS coatings contain finer structural components... V. E. Oliker; A. E. Terentev; L. K. Shvedova; I. S. Martsenyuk 2009-01-01

47. [Microsoft Academic Search] Numerous techniques have been developed to synthesize ceramic powders with improved physical and chemical characteristics. This paper describes a new process called suspension plasma spraying (SPS), based on the use of radio frequency (RF) plasma technology. The objective of SPS is to prepare dense and spherical powders from a suspension of fine (<10 μm) or even ultrafine (<100 nm) powders. Etienne Bouyer; F.
Gitzhofer; M. I. Boulos 1997-01-01 48 NASA Astrophysics Data System (ADS) This paper presents an investigation of the influence of the spray angle on thermally sprayed coatings. Spray beads were manufactured with different spray angles between 90 and 20 by means of atmospheric plasma spraying (APS) on heat-treated mild steel (1.0503). WC-12Co and Cr3C2-10(Ni20Cr) powders were employed as feedstock materials. Every spray bead was characterized by a Gaussian fit. This opens the opportunity to analyze the influence of the spray angle on coating properties. Furthermore, metallographic studies of the surface roughness, porosity, hardness, and morphology were carried out and the deposition efficiency as well as the tensile strength was measured. The thermally sprayed coatings show a clear dependence on the spray angle. A decrease in spray angle changes the thickness, width, and form of the spray beads. The coatings become rougher and their quality decreases. Tillmann, Wolfgang; Vogli, Evelina; Krebs, Benjamin 2008-12-01 49 Microsoft Academic Search The arc plasma spraying (APS) technology has demonstrated the capability of being economical in the fabrication of non-reciprocal ferrite phase shifter elements. It has been possible to arc plasma spray a C-band element in less than 10 minutes using a commercial spray dried lithium ferrite powder. This paper discusses the material properties (coercive force, remanence, and microwave loss) of the Richard W. Babbitt 1975-01-01 50 NASA Astrophysics Data System (ADS) Direct current Suspension Plasma Spraying (SPS) allows depositing finely structured coatings. This article presents an analysis of the influence of plasma instabilities on the yttria-stabilized suspension drops fragmentation. 
A particular attention is paid to the treatment of suspension jet or drops according to the importance of voltage fluctuations (linked to those of the arc root) and depending on the different spray parameters such as the plasma forming gas mixture composition and mass flow rate and the suspension momentum. By observing the suspension drops injection with a fast shutter camera and a laser flash sheet triggered by a defined transient voltage level of the plasma torch, the influence of plasma fluctuations on jet or drops fragmentation is studied through the deviation and dispersion trajectories of droplets within the plasma jet. Etchart-Salas, R.; Rat, V.; Coudert, J. F.; Fauchais, P.; Caron, N.; Wittman, K.; Alexandre, S. 2007-12-01 51 NASA Astrophysics Data System (ADS) Infrared radiation coatings were prepared by plasma spray on the copper sheet. The structure and emissivity were examined by x-ray diffraction and infrared radiant instrument, respectively. The results show that an appropriate addition of TiO2 (5-15 wt.%) to NiO and Cr2O3 leads to high emissivity of coating with (Cr0.88Ti0.12)2O3 and NiCr2O4 phase. However, more (20-30 wt.%) will frustrate the formation of NiCr2O4 and ultimately decrease the emissivity. Moreover, the coating prepared by plasma spray endures a long working time without emissivity decrease. Cheng, Xudong; Duan, Wei; Chen, Wu; Ye, Weiping; Mao, Fang; Ye, Fei; Zhang, Qi 2009-09-01 52 Microsoft Academic Search The formation of a TiN-Ti composite coating by thermal spraying of titanium powder with laser processing of the subsequent\\u000a coating in a low-pressure N2 atmosphere was examined. A low-pressure plasma spray system was used in combination with a CO2 laser. First, the coating was plasma sprayed onto a mild steel substrate using a N2 plasma jet and titanium powder in A. Ohmori; S. Hirano; K. 
Kamacta 1993-01-01 53 NASA Astrophysics Data System (ADS) Titanium carbide-reinforced metallic coatings, produced by plasma spraying, can be used for sliding wear resistant applications. The sliding wear properties of such coatings are governed to a large extent by the strength, structure and stability of the bond interface between the carbide and the metallic phases. In the present investigation, the microstructure and sliding wear properties of plasma sprayed metal-bonded TiC coatings containing up to 90 v/o carbide have been studied. It was shown that alloying of the metallic phase improved carbide retention in TiC cermets due to better interface bonding, and increased wear resistance and lowered sliding coefficient of friction. TiC-based coatings were produced from both physically blended and synthesized feed powders. It was observed that the precursor TiC-based powder morphology and structure greatly affected the plasma sprayed coating microstructures and the resultant physical and mechanical characteristics. Physical blending of powders induced segregation during spraying, leading to somewhat lower deposit efficiencies and coating uniformity, while synthesized and alloyed titanium carbide/metal composite powders reduced problems of segregation and reactions associated with plasma spraying of physically blended powders where the TiC was in direct contact with the plasma jet. To understand oxidation effects of the environment, Ti and TiC-based coatings were produced under low pressure (VPS), air plasma (APS) and shrouded plasma sprayed conditions. APS Ti and TiC-based powders with reactive matrices suffered severe oxidation decomposition during flight, leading to poor deposition efficiencies and oxidized microstructures. High particle temperatures and cold air plasma spraying. Coating oxidation due to reactions of the particles with the surrounding air during spraying reduced coating hardness and wear resistance. 
TiC-with Ti or Ti-alloy matrix coatings with the highest hardness, density and wear resistance was achieved by spraying under vacuum plasma spray conditions. VPS coating microstructures of synthesized 40, 60 and 80 v/o TiC in Ti10Ni10Cr5Al and 80 v/o TiC in Fe30Cr alloy matrices exhibited fine and uniform distributions of spheroidal carbides. High volume fraction carbides were also obtained with no segregation effects. It was also shown that coatings produced from mechanically blended powders of 50, 70 and 90 vol. % TiC and commercially pure (C.P.) Ti, using low pressure plasma spray process (VPS), had densities >98% and were well bonded to steel, aluminum alloy or titanium alloy substrates. Reductions in jet oxygen contents by the use of an inert gas shroud enabled Ti and TiC-based coatings to be produced which were cleaner and denser than air plasma sprayed and comparable to vacuum plasma sprayed coatings. Direct oxygen concentration measurements in shrouded plasma jets made using an enthalpy probe and a gas analyzer also showed significant reductions in the entrainment of atmospheric oxygen. VPS and shrouded plasma spraying minimized carbide-matrix interface oxidation and improved coating wear resistance. The sliding wear resistance of synthesized coatings was very high and comparable with standard HVOF sprayed WC/Co and Crsb3Csb2/NiCr coatings. Shrouded plasma spray deposits of Crsb3Csb2/NiCr also performed much better than similar air plasma sprayed coatings, as result of reduced oxidation. Mohanty, Mahesh 54 Microsoft Academic Search Thermal sprayed tungsten carbide (WC)cobalt (Co) coatings have been extensively employed as abrasion\\/wear protective layers. However, carbon loss (decarburization) of WCCo powders during thermal spraying reduces the efficiency of the coatings against abrasive wear. 
Post-spray treatment with spark plasma sintering (SPS) technique was conducted on plasma-sprayed WCCo coatings in the present study with the aim to compensate the lost carbon H. Li; K. A. Khor; L. G. Yu; P. Cheang 2005-01-01 55 SciTech Connect We consider the vacuum energy of the electromagnetic field interacting with a spherical plasma shell together with a model for the classical motion of the shell. We calculate the heat kernel coefficients, especially that for the TM mode, and carry out the renormalization by redefining the parameters of the classical model. It turns out that this is possible and results in a model which, in the limit of the plasma shell becoming an ideal conductor, reproduces the vacuum energy found by Boyer in 1968. Bordag, M. [Institute for Theoretical Physics, Leipzig University, Vor dem Hospitaltore 1, D-04103 Leipzig (Germany); Khusnutdinov, N. [Department of Physics, Kazan State University, Kremlevskaya 18, Kazan 420008 (Russian Federation) and Department of Physics, Tatar State University of Humanity and Education, Tatarstan 2, Kazan 420021 (Russian Federation) 2008-04-15 56 Microsoft Academic Search Due to the large volume fraction of the internal interfaces, finely structured coatings (nano- or submicronsized) should exhibit better properties than the ones structured at a microscale. Suspension plasma spraying (SPS) appears as a technology permitting to manufacture such coatings and consisting in injecting within a plasma jet a liquid suspension of solid particles. Compared to plasma spraying of micron-sized J.-F. Coudert; V. Rat; H. Ageorges; A. Denoirjean; P. Fauchais; G. Montavon; N. Caron; S. 
Alexandre 2007-01-01 57 Microsoft Academic Search This paper describes formation of titanium dioxide coatings designed for photocatalytic applications, obtained by suspension\\u000a plasma spraying (SPS), an alternative of the atmospheric plasma spraying (APS) technique in which the material feedstock is\\u000a a suspension of the material to be sprayed. Two different TiO2 powders were dispersed in distilled water and ethanol and injected in Ar-H2 or Ar-H2-He plasma under Filofteia-Laura Toma; Ghislaine Bertrand; Didier Klein; Christian Coddet; Cathy Meunier 2006-01-01 58 Microsoft Academic Search A multi-functional micro-arc plasma spraying system was developed according to aerodynamics and plasma spray theory. The soft switch IGBT (Insulated Gate Bipolar Transistor) invert technique, micro-computer control technique, convergent-divergent nozzle structure and axial powder feeding techniques have been adopted in the design of the micro-arc plasma spraying system. It is not only characterized by a small volume, a light weight, Liuying Wang; Hangong Wang; Shaochun Hua; Xiaoping Cao 2007-01-01 59 Microsoft Academic Search Direct current Suspension Plasma Spraying (SPS) allows depositing finely structured coatings. This article presents an analysis\\u000a of the influence of plasma instabilities on the yttria-stabilized suspension drops fragmentation. A particular attention is\\u000a paid to the treatment of suspension jet or drops according to the importance of voltage fluctuations (linked to those of the\\u000a arc root) and depending on the different R. Etchart-Salas; V. Rat; J. F. Coudert; P. Fauchais; N. Caron; K. Wittman; S. Alexandre 2007-01-01 60 Microsoft Academic Search The aim of this comparative study was to elucidate the characterization of spherical radio frequency (RF) plasma sprayed hydroxyapatite (HA) powder consolidated by spark plasma sintering (SPS) and conventional sintering methods. 
SPS processing took place under low vacuum of 4.5Pa at the temperature of 9001200C for 3min with a heating rate of 100C\\/min. The conventional processing was conducted at the J. L. Xu; K. A. Khor; R. Kumar 2007-01-01 61 SciTech Connect The isothermal oxidation behavior of thermal barrier coating (TBC) specimens consisting of single-crystal superalloy substrates, vacuum plasma-sprayed Ni-22Cr-10Al-1Y bond coatings and air plasma-sprayed 7.5 wt.% yttria stabilized zirconia top coatings was evaluated by thermogravimetric analysis at 1150{degrees}C for up to 200 hours. Coating durability was assessed by furnace cycling at 1150{degrees}C. Coatings and reaction products were identified by x-ray diffraction, field-emission scanning electron microscopy and energy dispersive spectroscopy. Haynes, J.A. [Univ. of Alabama, Birmingham, AL (United States). Dept. of Materials and Mechanical Engineering; Ferber, M.K.; Porter, W.D. [Oak Ridge National Lab., TN (United States) 1996-04-01 62 Microsoft Academic Search The axial force carried by the expanding plasma plume from 50-250-A copper and aluminum vacuum arcs was measured using a pendulum whose axle was equipped with a rotary optical encoder. It was found that the force was a linear function of current. The electrode geometry was varied to find the maximum force. At maximum, the average forces per unit current Harry S. Marks; Isak I. Beilis; Raymond L. Boxman 2009-01-01 63 PubMed Using Air Plasma Spraying (APS) and Vacuum Plasma Spraying (VPS) techniques, hydroxylapatite (HA) and mixtures of HA and titanium (Ti) were deposited on a Ti6A14V alloy (and on an AISI 316L steel) subjected to different surface treatments. The deposits were investigated for their crystallinity, thickness, and adhesion properties. Higher adhesion values were obtained with VPS rather than with APS. By utilising VPS, the deposition conditions were selected in order to achieve crystallinity values between 70 and 90%. 
The adhesion results depend on the crystallinity (increasing with its decrease), on the thickness (decreasing slightly with its increase) and especially on the surface finish of the metallic substrate. A porous Ti precoat was more effective than either chemical etching in HCl or sandblasting; sandblasting being the least effective. In particular, the double deposits consisting of a porous Ti precoat and a successive layer of HA proved to be most interesting for their higher adhesion properties and for their capability of providing primary stability due to the presence of the HA and secondary stability, in the case of its reabsorption, due to the porous metal. PMID:8193564 Brossa, F; Cigada, A; Chiesa, R; Paracchini, L; Consonni, C 1993-01-01 64 SciTech Connect In order to generate a better ion beam, a triple-cathode vacuum arc plasma source has been developed. Three plasma generators in the vacuum arc plasma source are equally located on a circle. Each generator initiated by means of a high-voltage breakdown between the cathode and the anode could be operated separately or simultaneously. The arc plasma expands from the cathode spot region in vacuum. In order to study the behaviors of expanding plasma plume generated in the vacuum arc plasma source, a Langmuir probe array is employed to measure the saturated ion current of the vacuum arc plasma source. The time-dependence profiles of the saturated current density of the triple vacuum arc plasma source operated separately and simultaneously are given. Furthermore, the plasma characteristic of this vacuum arc plasma source is also presented in the paper. Xiang, W.; Li, M.; Chen, L. [Institute of Electric Engineering, China Academy of Engineering Physics, P.O. Box 919-518, Mianyang 621900 (China) 2012-02-15 65 PubMed In order to generate a better ion beam, a triple-cathode vacuum arc plasma source has been developed. Three plasma generators in the vacuum arc plasma source are equally located on a circle. 
Each generator initiated by means of a high-voltage breakdown between the cathode and the anode could be operated separately or simultaneously. The arc plasma expands from the cathode spot region in vacuum. In order to study the behaviors of expanding plasma plume generated in the vacuum arc plasma source, a Langmuir probe array is employed to measure the saturated ion current of the vacuum arc plasma source. The time-dependence profiles of the saturated current density of the triple vacuum arc plasma source operated separately and simultaneously are given. Furthermore, the plasma characteristic of this vacuum arc plasma source is also presented in the paper. PMID:22380209 Xiang, W; Li, M; Chen, L 2012-02-01 66 Microsoft Academic Search This paper describes the formation process of nanostructured alumina coatings and the injection system obtained by suspension\\u000a plasma spraying (SPS), an alternative to the atmospheric plasma spraying technique in which the material feedstock is a suspension\\u000a of the nanopowder to be sprayed. The nanoscale alumina powders (d?20nm) were dispersed in distilled water or ethanol and injected by a peristaltic pump Changjun Qiu; Yong Chen 2009-01-01 67 Microsoft Academic Search The influence of five spraying parameters on the thermal shockresistance of plasma sprayed tungsten coatings was evaluated with a pulsed electron beam gun. The pulse duration was 0.2 s and the absorbed power density 60 MW\\/m2. Two series of samples were tested. Both were plasma sprayed in controlled inert atmosphere, one at atmospheric pressure (AP) and the other at low M. Urquiaga Valdes; R. G. Saint-Jacques; J.-F. Ct; C. Moreau 1997-01-01 68 SciTech Connect The intermetallic compound, molybdenum disilicide (MoSi{sub 2}) is being considered for high temperature structural applications because of its high melting point and superior oxidation resistance at elevated temperatures. 
The lack of high temperature strength, creep resistance and low temperature ductility has hindered its progress for structural applications. Plasma spraying of coatings and structural components of MoSi{sub 2}-based composites offers an exciting processing alternative to conventional powder processing methods due to superior flexibility and the ability to tailor properties. Laminate, discontinuous and in situ reinforced composites have been produced with secondary reinforcements of Ta, Al{sub 2}O{sub 3}, SiC, Si{sub 3}N{sub 4} and Mo{sub 5}Si{sub 3}. Laminate composites, in particular, have been shown to improve the damage tolerance of MoSi{sub 2} during high temperature melting operations. A review of research which as been performed at Los Alamos National Laboratory on plasma spraying of MoSi{sub 2}-based composites to improve low temperature fracture toughness, thermal shock resistance, high temperature strength and creep resistance will be discussed. Castro, R.G.; Hollis, K.J.; Kung, H.H.; Bartlett, A.H. 1998-05-25 69 Microsoft Academic Search This experimental study describes the Rolling Contact Fatigue (RCF) performance and the failure mechanisms of plasma sprayed tungsten carbide cobalt (WC-15%Co) coatings. The advancements of plasma spray coatings due to higher velocity and temperature of the impacting lamella call for investigations into new applications. One possible application is the rolling element bearing. A modified four ball machine which models the R. Ahmed; M. Hadfield 1998-01-01 70 SciTech Connect The Westinghouse Electric Corporation, in conjunction with the Thermal Spray Laboratory of the State University of New York, Stony Brook, investigated the fabrication of a gas-tight interconnect layer on a tubular solid oxide fuel cell with plasma arc spray deposition. The principal objective was to determine the process variables for the plasma spray deposition of an interconnect with adequate electrical conductivity and other desired properties. 
Plasma arc spray deposition is a process where the coating material in powder form is heated to or above its melting temperature, while being accelerated by a carrier gas stream through a high power electric arc. The molten powder particles are directed at the substrate, and on impact, form a coating consisting of many layers of overlapping, thin, lenticular particles or splats. The variables investigated were gun power, spray distance, powder feed rate, plasma gas flow rates, number of gun passes, powder size distribution, injection angle of powder into the plasma plume, vacuum or atmospheric plasma spraying, and substrate heating. Typically, coatings produced by both systems showed bands of lanthanum rich material and cracking with the coating. Preheating the substrate reduced but did not eliminate internal coating cracking. A uniformly thick, dense, adherent interconnect of the desired chemistry was finally achieved with sufficient gas- tightness to allow fabrication of cells and samples for measurement of physical and electrical properties. A cell was tested successfully at 1000{degree}C for over 1,000 hours demonstrating the mechanical, electrical, and chemical stability of a plasma-arc sprayed interconnect layer. Ray, E.R.; Spengler, C.J.; Herman, H. 1991-07-01 71 SciTech Connect The Westinghouse Electric Corporation, in conjunction with the Thermal Spray Laboratory of the State University of New York, Stony Brook, investigated the fabrication of a gas-tight interconnect layer on a tubular solid oxide fuel cell with plasma arc spray deposition. The principal objective was to determine the process variables for the plasma spray deposition of an interconnect with adequate electrical conductivity and other desired properties. Plasma arc spray deposition is a process where the coating material in powder form is heated to or above its melting temperature, while being accelerated by a carrier gas stream through a high power electric arc. 
The molten powder particles are directed at the substrate, and on impact, form a coating consisting of many layers of overlapping, thin, lenticular particles or splats. The variables investigated were gun power, spray distance, powder feed rate, plasma gas flow rates, number of gun passes, powder size distribution, injection angle of powder into the plasma plume, vacuum or atmospheric plasma spraying, and substrate heating. Typically, coatings produced by both systems showed bands of lanthanum rich material and cracking with the coating. Preheating the substrate reduced but did not eliminate internal coating cracking. A uniformly thick, dense, adherent interconnect of the desired chemistry was finally achieved with sufficient gas- tightness to allow fabrication of cells and samples for measurement of physical and electrical properties. A cell was tested successfully at 1000{degree}C for over 1,000 hours demonstrating the mechanical, electrical, and chemical stability of a plasma-arc sprayed interconnect layer. Ray, E.R.; Spengler, C.J.; Herman, H. 1991-07-01 72 Microsoft Academic Search The spray-drying process of ceramics which are candidate materials for thermal barrier coatings (TBCs), i.e. 3YSZ+0, 2, 4, 6 wt.% Al2O3, is discussed in this paper. The two most important properties of spray-dried powders to determine the coating quality are density and particle size. Polyethyleneimine (PEI) acts as both an organic binder and a dispersant giving low viscosity in the X. Q Cao; R Vassen; S Schwartz; W Jungen; F Tietz; D Stever 2000-01-01 73 Microsoft Academic Search Plasma spray coating techniques allow unique control of electrolyte microstructures and properties as well as facilitating deposition on complex surfaces. This can enable significantly improved solid oxide fuel cells (SOFCs), including non-planar designs. SOFCs are promising because they directly convert the oxidization of fuel into electrical energy. 
However, electrolytes deposited using conventional plasma spray are porous and often greater than Elliot Slamovich; James Fleetwood; James F. McCloskey; Aaron Christopher Hall; Rodney Wayne Trice 2010-01-01 74 Microsoft Academic Search Homogenous dispersion of carbon nanotubes (CNTs) in micron sized aluminum silicon alloy powders was achieved by spray drying. Excellent flowability of the powders allowed fabrication of thick composite coatings and hollow cylinders (5mm thick) containing 5wt.% and 10wt.% CNT by plasma spraying. Two phase microstructure with matrix having good distribution of CNT and CNT rich clusters was observed. Microstructural evolution Srinivasa R. Bakshi; Virendra Singh; Sudipta Seal; Arvind Agarwal 2009-01-01 75 Microsoft Academic Search A Vacuum Arc Thruster (VAT) is a thruster that uses the plasma created in a vacuum arc, an electrical discharge in a vacuum that creates high velocity and highly ionized plasmas, as the propellant without additional acceleration. A VAT would be a small and inexpensive low thrust ion thruster, ideal for small satellites and formation flying spacecraft. The purpose of Michael James Sekerak 2005-01-01 76 National Technical Information Service (NTIS) Plasma spray deposition of carbide/metal hardcoatings is difficult because complex chemical transformations can occur while spraying, especially in the presence of oxygen. A commercial plasma spray torch has been modified to simultaneously inject carbide ... W. J. Lenling M. F. Smith J. A. Henfling 1990-01-01 77 Microsoft Academic Search The penetration phenomena of liquid Mn into porous ZrO2-8 wt.% Y2O3 coating, plasma sprayed on JIS SS400 steel substrate was studied by heating at 1573 K in a vacuum atmosphere, and the possibility of improving the mechanical properties of the coating by heat treatment with liquid Mn was examined. It was found that liquid Mn rapidly penetrated the coating and A. Ohmori; Z. Zhou; K. 
Inoue 1994-01-01 78 Microsoft Academic Search Different posttreatment methods, such as heat treatment, mechanical processing, sealing, etc., are known to be capable to\\u000a improve microstructure and exploitation properties of thermal spray coatings. In this work, a plasma electrolytic oxidation\\u000a of aluminum coatings obtained by arc spraying on aluminum and carbon steel substrates is carried out. Microstructure and properties\\u000a of oxidized layers formed on sprayed coating as Vasyl Pokhmurskii; Hrygorij Nykyforchyn; Mykhajlo Student; Mykhajlo Klapkiv; Hanna Pokhmurska; Bernhard Wielage; Thomas Grund; Andreas Wank 2007-01-01 79 Microsoft Academic Search Suspension plasma sprayed titanium oxide coatings were analyzed using transmission electron microscope (TEM) and using Raman spectroscopy. The suspensions used to spray were formulated using fine rutile pigment, water, alcohol or their mixtures, and a small quantity of dispersant. TEM study realized using a face-to-face preparation technique enabled to visualize a lamellar shape of grains and their columnar growth. The Harry Podlesak; Lech Pawlowski; Jacky Laureyns; Roman Jaworski; Thomas Lampke 2008-01-01 80 Microsoft Academic Search Al2O3ZrO2 composite coatings were deposited by the suspension plasma spray process using molecularly mixed amorphous powders. X-ray diffraction (XRD) analysis shows that the as-sprayed coating is composed of ?-Al2O3 and tetragonal ZrO2 phases with grain sizes of 26nm and 18nm, respectively. The as-sprayed coating has 93% density with a hardness of 9.9GPa. Heat treatment of the as-sprayed coating reveals that Dianying Chen; Eric H. Jordan; Maurice Gell 2009-01-01 81 SciTech Connect Plasma spray coating techniques allow unique control of electrolyte microstructures and properties as well as facilitating deposition on complex surfaces. This can enable significantly improved solid oxide fuel cells (SOFCs), including non-planar designs. 
SOFCs are promising because they directly convert the oxidization of fuel into electrical energy. However, electrolytes deposited using conventional plasma spray are porous and often greater than 50 microns thick. One solution to form dense, thin electrolytes of ideal composition for SOFCs is to combine suspension plasma spray (SPS) with very low pressure plasma spray (VLPPS). Increased compositional control is achieved due to dissolved dopant compounds in the suspension that are incorporated into the coating during plasma spraying. Thus, it is possible to change the chemistry of the feed stock during deposition. In the work reported, suspensions of sub-micron diameter 8 mol.% Y2O3-ZrO2 (YSZ) powders were sprayed on NiO-YSZ anodes at Sandia National Laboratories (SNL) Thermal Spray Research Laboratory (TSRL). These coatings were compared to the same suspensions doped with scandium nitrate at 3 to 8 mol%. The pressure in the chamber was 2.4 torr and the plasma was formed from a combination of argon and hydrogen gases. The resultant electrolytes were well adhered to the anode substrates and were approximately 10 microns thick. The microstructure of the resultant electrolytes will be reported as well as the electrolyte performance as part of a SOFC system via potentiodynamic testing and impedance spectroscopy. Slamovich, Elliot (Purdue University, West Lafayette, IN); Fleetwood, James (Purdue University, West Lafayette, IN); McCloskey, James F.; Hall, Aaron Christopher; Trice, Rodney Wayne (Purdue University, West Lafayette, IN) 2010-07-01 82 Microsoft Academic Search NiTi intermetallic compounds not only have shape memory effects but also high erosion resistance. Therefore, applying this material as a coating is an effective method for preventing erosion. In this study, a mixture of Ti and Ni powders was subjected to a mechanical alloying process. 
Then, the mechanical and structural properties of the coating fabricated by vacuum plasma spraying and [...]
Hitoshi Hiraga; Takashi Inoue; Shigeharu Kamado; Yo Kojima; Akira Matsunawa; Hirofumi Shimura (2001-01-01)

83. [NASA Astrophysics Data System (ADS)]
High-strength titanium alloy and titanium aluminide foils are required for fabricating composite structures and honeycombs for advanced aircraft engines and airframes. Titanium aluminide alloys possess limited workability, which results in significant yield loss when these materials are produced by the conventional ingot metallurgy route. This article describes the use of induction plasma spray technology to fabricate foil preforms of a titanium alloy and a titanium aluminide. These plasma-sprayed preforms were converted into 100% dense wrought titanium aluminide foil by a roll-consolidation process. The microstructure and mechanical properties of titanium aluminide foil produced from plasma-sprayed preforms were virtually identical to those of conventional ingot metallurgy foil. The plasma-spray plus roll-consolidation route may lead to the production of titanium aluminide foil as continuous coil, which would improve process efficiency and yield high-quality titanium aluminide foil at low cost.
Jha, Sunil C.; Forster, James A. (1993-07-01)

84. [Microsoft Academic Search]
In this paper, two plasma spraying technologies, solution plasma spraying (SolPS) and suspension plasma spraying (SPS), were used to produce nano-structured solid oxide fuel cell (SOFC) electrolytes. Both plasma spraying processes were optimized in order to achieve thin, gas-tight electrolytes. The comparison of the two plasma spraying processes is based on electrolyte phase, microstructure, morphology, as well as on [...]
Lu Jia; François Gitzhofer (2010-01-01)

85. [Microsoft Academic Search]
A deposit of carbon nanoparticles based on an onion-like structure was fabricated from detonation nanodiamond powders by a novel plasma spraying process, electromagnetically accelerated plasma spraying (EMAPS). EMAPS was able to transform nanodiamonds into onion-like structured carbon within 300 μs through a thermal graphitization process in which the temperature of the particles would be in the range of 2700-4500 K.
Anna Valeryevna Gubarevich; Junya Kitamura; Shu Usuba; Hiroyuki Yokoi; Yozo Kakudate; Osamu Odawara (2003-01-01)

86. [Microsoft Academic Search]
Fine, home-synthesized hydroxyapatite powder was formulated with water and alcohol to obtain a suspension used to plasma spray coatings onto a titanium substrate. The deposition process was optimized using statistical design of 2^n experiments with two variables: spray distance and electric power input to the plasma. X-ray diffraction (XRD) was used to determine quantitatively the phase composition of the obtained deposits.
Harry Podlesak; Lech Pawlowski; Romain d'Haese; Jacky Laureyns; Thomas Lampke; Severine Bellayer (2010-01-01)

87. [Microsoft Academic Search]
The interest in manufacturing, on large surfaces, thick (i.e., 10 to 20 μm average thickness) finely structured or nano-structured layers has been growing for about 10 years. This explains the interest in suspension plasma spraying (SPS) and solution precursor plasma spraying (SPPS), both of which allow the manufacture of finely structured layers with thicknesses varying from a few micrometers up to a few hundred micrometers.
P. Fauchais; V. Rat; J.-F. Coudert; R. Etchart-Salas; G. Montavon (2008-01-01)

88. [Microsoft Academic Search]
Nanostructured WC-12%Co coatings were deposited by suspension plasma spraying of submicron feedstock powders, using an internal injection plasma torch. The liquid carrier used in this approach allows for controlled injection of much finer particles than in conventional thermal spraying, leading to thin coatings with a fine surface finish. A polyethylene-imine (PEI) dispersant was used to stabilize the colloidal suspension. [...]
J. Oberste Berghaus; B. Marple; C. Moreau (2006-01-01)

89. [Microsoft Academic Search]
Among the processes evaluated to produce some parts of, or the whole, solid oxide fuel cell, suspension plasma spraying (SPS) is of prime interest. Aqueous suspensions of yttria partially stabilized zirconia, atomized into a spray by an internal-mixing co-axial twin-fluid atomizer, were injected into a DC plasma jet. The dispersion and stability of the suspensions were enhanced by adjusting the amount of [...]
Régine Rampon; Claudine Filiatre; Ghislaine Bertrand (2008-01-01)

90. [SciTech Connect]
The dispersive effects of vacuum polarization on the propagation of a strong circularly polarized electromagnetic wave through a cold collisional plasma are studied analytically. It is found that, due to the singular dielectric features of the plasma, the vacuum effects on wave propagation in a plasma are qualitatively different from, and much larger than, those in pure vacuum in the regime where the frequency of the propagating wave approaches the plasma frequency. A possible experimental setup to detect these effects in plasma is described.
Di Piazza, A.; Hatsagortsyan, K. Z.; Keitel, C. H. (Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, D-69117 Heidelberg, Germany) (2007-03-15)

91. [Microsoft Academic Search]
TiO2 coatings were prepared by plasma spraying using a spray-dried powder as feedstock material. A systematic study has been performed to determine how the titania slurry formulation (e.g., dispersant level, pH, binder addition) affects the granule characteristics. Aqueous slurries consisting of 50 wt.% TiO2 particles, 0-1.2 wt.% ammonium polyacrylate as a dispersant, and up to 15 wt.% styrene-ester acrylic [...]
N. Berger-Keller; G. Bertrand; C. Filiatre; C. Meunier; C. Coddet (2003-01-01)

92. [Microsoft Academic Search]
Suspension plasma spray (SPS) is a promising technique for nano-structured coatings and nano-powder synthesis, in which nano-particles are injected into the plasma jet with the help of liquid precursors. Most of the suspensions used in plasma spraying show non-Newtonian behavior (viscoelastic or thixotropic). After injection into the plasma, the suspension is first atomized by the plasma jet before the droplets [...]
Lijuan Qian; Jianzhong Lin; Hongbing Xiong (2011-01-01)

93. [Microsoft Academic Search]
A new system of electromagnetically accelerated plasma spraying (EMAPS), consisting of a pulsed high-current arc-plasma gun and a large-flow-rate pulsed powder injector, has been developed to synthesize a hard and dense coating of boron carbide (B4C) with high adhesion. The plasma gun, with a co-axial cylindrical electrode configuration, generates electromagnetically accelerated arc plasma with a typical velocity [...]
J. Kitamura; S. Usuba; Y. Kakudate; H. Yokoi; K. Yamamoto; A. Tanaka; S. Fujiwara (2003-01-01)

94. [Microsoft Academic Search]
The phenomena leading to surface flashover across solid insulators in vacuum, and the subsequent spread of the trigger plasma thus formed to bridge the main gap in a triggered vacuum switch, are investigated experimentally. The results show that the breakdown proceeds in two stages. In the first stage, a plasma is formed by electrons releasing and ionizing absorbed gases.
A. J. Green; C. Christopoulos (1979-01-01)

95. [Microsoft Academic Search]
Control of the microstructure of TiO2 coatings through preparation methods significantly influences the coating performance. In this study, a vacuum cold-spray process, as a new coating technology, is used to deposit nanocrystalline TiO2 coatings on conducting glass and stainless steel substrates. TiO2 deposits were formed using two types of nanocrystalline TiO2 powders with mean particle diameters of 200 and 25 [...]
S.-Q. Fan; G.-J. Yang; C.-J. Li; G.-J. Liu; C.-X. Li; L.-Z. Zhang (2006-01-01)

96. [Microsoft Academic Search]
In this article, the applications, potential advantages, and challenges of thermal plasma spray (PS) processing for nanopowder production and cell fabrication of solid oxide fuel cells (SOFCs) are reviewed. PS processing creates sufficiently high temperatures to melt all materials fed into the plasma. The heated material can either be quenched into oxide powders or deposited as coatings. This technique has [...]
Rob Hui; Zhenwei Wang; Olivera Kesler; Lars Rose; Jasna Jankovic; Sing Yick; Radenka Maric; Dave Ghosh (2007-01-01)

97. [NASA Astrophysics Data System (ADS)]
This article focuses on the development of the anode layer for solid oxide fuel cells by plasma spraying. The composite (cermet) anode, developed by thermal spraying, consisted of nickel and yttria-stabilized zirconia (YSZ). The effect of different plasma-spraying technologies on the microstructure characteristics and the electrochemical behavior of the anode layer was investigated. Coatings were fabricated by spraying nickel-coated graphite or nickel oxide with YSZ using a Triplex II plasma torch under atmospheric conditions, as well as a standard F4 torch under atmospheric or soft-vacuum conditions. The investigations were directed at obtaining an open microporous structure, higher electrical conductivity, and catalytic activity of the anode deposits. Porosity was investigated by measuring the gas permeability. Scanning electron microscopy and X-ray diffraction were applied to examine the morphology, microstructure, and composition of the layers. Electrical conductivity measurements were carried out to determine the ohmic losses within the anode layer. The most promising layers were analyzed by measuring the electrochemical behavior to obtain information about catalytic activity and performance.
Weckmann, H.; Syed, A.; Ilhan, Z.; Arnold, J. (2006-12-01)

98. [NASA Astrophysics Data System (ADS)]
The conditions under which tungsten is vaporized and ionized during plasma spraying of WC/Co powders are investigated spectroscopically. Overheating of the powder results in less cobalt and decarburization of WC in the sprayed coating. The plasma is dominated by ionized tungsten, and the resulting coating has a substantial amount of tungsten metal.
Detering, B. A.; Knibloe, J. R.; Eddy, T. L.

99. [Microsoft Academic Search]
This paper proposes an original route for modeling the time-dependent behavior of a plasma jet issued from a DC plasma-spraying torch operating with various kinds of gas mixtures. The hydrodynamic interactions between this jet and a liquid jet for suspension plasma-spraying, or a classical particle injection for the deposition of coatings, are studied. In a first step, the classical plasma [...]
Erick Meillot; S. Vincent; C. Caruyer; J. P. Caltagirone; D. Damiani (2009-01-01)

100. [Microsoft Academic Search]
Undoped TiO2 (anatase) powder was deposited by induction plasma spraying (IPS), varying the plasma power, the carrier gas (argon) flow rate and the powder feed rate according to statistical design of experiments (SDE) methodology. In addition, anatase powders doped with varying amounts of V, Nb or Ta oxides were deposited by suspension plasma spraying (SPS) using reactive induction plasma spray [...]
I. Burlacov; J. Jirkovský; M. Müller; R. B. Heimann (2006-01-01)

101. [SciTech Connect]
An oxidation model for molybdenum particles during the plasma spray deposition process is presented.
Based on a well-verified model for plasma chemistry and the heating and phase change of particles in a plasma plume, this model accounts for oxidant diffusion around the surface of particles or splats, oxidation on the surface, as well as oxygen diffusion in molten molybdenum. Calculations are performed for a single molybdenum particle sprayed under Metco-9MB spraying conditions. The oxidation features of particles during flight are compared with those during deposition. The result shows the dominance of oxidation of a molybdenum particle during flight, as well as during deposition when the substrate temperature is high (above 400 °C).
Fincke, James Russell; Wan, Y. P.; Jiang, X. Y.; Sampath, S.; Prasad, V.; Herman, H. (2001-06-01)

102. [SciTech Connect]
Two methods for the use of lunar materials for the construction of shelters on the Moon are proposed: explosive consolidation of the soil into structural components, and plasma spraying of the soil to join components. The plasma-sprayed coating would also provide protection from the intense radiation. In this work, a mare simulant was plasma-sprayed onto a stainless steel substrate. Deposition of a 0.020 inch coating using power inputs of 23, 25, 27 and 29 kW was compared. Hardness of the coatings increased with each increase of power to the system, while porosity at the interface decreased. All coatings exhibited good adhesion. Simultaneously, an explosively consolidated sample was similarly characterized to afford a comparison of the structural features associated with each mode of proposed use.
Powell, S.J.; Inal, O.T. (New Mexico Tech, Socorro, NM, United States); Smith, M.F. (Sandia National Labs., Albuquerque, NM, United States) (1997-06-01)

103. [NASA Astrophysics Data System (ADS)]
Tungsten-based coatings have potential application in the plasma-facing components of future nuclear fusion reactors. By combining refractory tungsten with highly thermally conducting copper, or with steel as a construction material, functionally graded coatings can be easily obtained by plasma spraying, and may result in the development of a material with favorable properties. During plasma spraying of these materials in the open atmosphere, oxidation is an important issue which could have adverse effects on their properties. Among the means to control it is the application of inert gas shrouding, which forms the subject of this study and represents a lower-cost alternative to vacuum or low-pressure plasma spraying, potentially applicable also to the spraying of large surfaces or spacious components. It is a continuation of recent studies focused on the effects of various parameters of the hybrid water-argon torch on the in-flight behavior of copper and tungsten powders and the resultant coatings. In the current study, argon shrouding with various configurations of the shroud was applied. The effects of torch parameters, such as power and argon flow rate, and of powder morphology were also investigated. Their influence on particle in-flight behavior as well as on the structure, composition and properties of the coatings was quantified. With the help of auxiliary calculations, the mass changes of the powder particles, associated with oxidation and evaporation, were assessed.
Matějíček, J.; Kavka, T.; Bertolissi, G.; Ctibor, P.; Vilémová, M.; Mušálek, R.; Nevrlá, B. (2013-06-01)

104. [Microsoft Academic Search]
Nanostructured materials offer significant improvements in engineering properties because their grain sizes are smaller than those of conventionally processed materials by a factor of almost 2 orders of magnitude (Ref 1). Since the mid-1990s, research has been conducted using thermal spray technology for the deposition of finely structured or nanostructured coatings (Ref 2, 3). To produce finely structured coatings by [...]
Pierre Fauchais (2008-01-01)

105. [Microsoft Academic Search]
Al2O3-ZrO2 coatings were deposited by suspension plasma spray (SPS) of a molecularly mixed amorphous powder and by conventional air plasma spray (APS) of Al2O3-ZrO2 crystalline powder. The amorphous powder was produced by heat treatment of molecularly mixed chemical solution precursors below their crystallization temperatures. Phase composition and microstructure of the as-synthesized and heat-treated SPS and APS coatings were characterized by XRD and [...]
Dianying Chen; Eric H. Jordan; Maurice Gell (2009-01-01)

106. [Microsoft Academic Search]
Triggering systems for vacuum arc plasma sources and ion sources have been developed that make use of a gaseous trigger discharge in a strong magnetic field. Two kinds of trigger discharge configurations have been explored: a Penning discharge and a magnetron discharge. The approach works reliably for low gas pressure in the vacuum arc environment and for long periods of [...]
A. G. Nikolaev; G. Y. Yushkov; E. M. Oks; I. G. Brown; R. A. MacGill; M. R. Dickinson (1996-01-01)

107. [NASA Astrophysics Data System (ADS)]
An analytical model is developed to describe the plasma deposition process, from which the average solidified thickness and the coating and substrate temperatures are obtained. During the deposition process, the solidification rate varies periodically, due to the impingement of liquid splats, and the amount of liquid in the coating layer increases. Periodic variation of the solidification rate causes temperature fluctuations in the coating and substrate. The nature of the interfacial structure of plasma-sprayed NiCrBSi MA powder is compared with the result predicted using the model, which indicates that the liquid deposited at the coating surface during deposition causes discontinuous boundaries within the coating. The spraying rate and the solidification rate reverse periodically during the spraying process.
Lee, Joo-Dong; Ra, Hyung-Yong; Hong, Kyung-Tae; Hur, Sung-Kang (1992-03-01)

108. [NASA Astrophysics Data System (ADS)]
This article presents our present knowledge of plasma spraying of suspensions, sols, and solutions in order to achieve finely or nano-structured coatings. First, it describes the different plasma torches used, the way the liquid jet is injected, and the different measurement techniques. Then, drop or jet fragmentation is discussed, with particular attention to the influence of arc root fluctuations for direct current plasma jets. The heat treatment of drops and droplets is described successively for suspensions, sols, and solutions, both in direct current and radio-frequency plasmas, with a special emphasis on the heat treatment, during spraying, of the beads and passes deposited. The resulting coating morphologies are commented on, and finally examples of applications are presented: solid oxide fuel cells, thermal barrier coatings, photocatalytic titania, hydroxyapatite, WC-Co, complex oxides or metastable phases, and functional materials coatings.
Fauchais, P.; Etchart-Salas, R.; Rat, V.; Coudert, J. F.; Caron, N.; Wittmann-Ténèze, K. (2008-03-01)

109. [SciTech Connect]
Physical and mechanical properties were determined for plasma-sprayed MgO- or Y2O3-stabilized ZrO2 thermal barrier coatings. Properties were determined for the ceramic coating both in the freestanding condition and as bonded to a metal substrate. The properties of the NiCrAlY bond coating were also investigated.
Siemers, P.A.; Mehan, R.L. (1983-09-01)

110. [National Technical Information Service (NTIS)]
The power level and the type of arc gas used during plasma spraying of a two-layer thermal barrier system (TBS) were found to affect the life of the system. Life at 1095 °C in a cyclic furnace test was improved by about 140 percent by increasing the power [...]
S. Stecura (1981-01-01)

111. [NASA Astrophysics Data System (ADS)]
The addition by vacuum infiltration of small quantities of a polymer has been found to significantly increase the ability of a plasma-sprayed coating to dissipate vibratory energy at temperatures in the glassy-rubbery transition range of the polymer. As vitreous enamels and glasses also undergo a glassy transition, but at much higher temperatures, the addition of a small amount of glass to a ceramic has the potential of providing high damping at such temperatures. Mixtures of yttria-stabilized zirconia (YSZ) and a glass frit were plasma sprayed on specimens with bond coats. Measures of system response (resonant frequencies and loss factors) were extracted from the frequency responses to excitations of cantilever beam specimens over a range of excitation amplitudes. Comparisons of values determined before and after coating were used to determine the damping properties of the coatings alone as functions of strain, at temperatures of special interest. Emphasis was given to identifying the lowest level of glass giving significantly more damping than that of the plasma-sprayed ceramic alone. Coatings with weight fractions of 5, 2, 1, [...], and 0% glass were tested. The inclusion of glass at all weight fractions considered was found to yield significant increases in both the stiffness and dissipation of the coatings.
Torvik, P. J.; Henderson, J. P. (2012-07-01)

112. [Microsoft Academic Search]
Boron carbide (B4C) coating formation is investigated using electromagnetically accelerated plasma spraying, which can generate a dense, high-velocity plasma jet of 1 MPa and 2.0-2.5 km/s by applying a pulsed high-current arc discharge to accelerate and heat powders. Highly crystalline B4C coatings with roughened coating-substrate interfaces were formed on mirror-polished stainless steel (SUS304) substrates without a binder material.
J. Kitamura; S. Usuba; Y. Kakudate; H. Yokoi; K. Yamamoto; A. Tanaka; S. Fujiwara (2003-01-01)

113. [Microsoft Academic Search]
This article presents our present knowledge of plasma spraying of suspension, sol, and solution in order to achieve finely or nano-structured coatings. First, it describes the different plasma torches used, the way the liquid jet is injected, and the different measurement techniques. Then, drop or jet fragmentation is discussed, with particular attention to the influence of arc root fluctuations for direct [...]
P. Fauchais; R. Etchart-Salas; V. Rat; J. F. Coudert; N. Caron; K. Wittmann-Ténèze (2008-01-01)

114. [SciTech Connect]
I propose a new method for laser acceleration of relativistic electrons using the leaky modes of a hollow dielectric waveguide. The hollow core of the waveguide can be either in vacuum or filled with uniform gases or plasmas. In the case of vacuum and gases, the TM01 mode is used for direct acceleration. In the case of plasmas, the EH11 mode is used to drive a longitudinal plasma wave for acceleration. Structure damage due to the high-power laser can be avoided by choosing a core radius sufficiently larger than the laser wavelength. The effect of nonuniform plasma density on waveguide performance is also analyzed.
Xie, Ming (1998-07-01)

115. [NASA Astrophysics Data System (ADS)]
Plasma spray technology has been widely applied in industry. Unfortunately, the sprayed coating quality is not always perfect and predictable. Plasma jet instability is one of the major causes of inconsistent coating quality. This research has focused on investigating the causes of plasma jet instability, especially the arc instability in a spray torch. With combinations of electrical, optical and acoustic measurements, this research is designed to provide a complete picture of this arc instability. The approach has been to determine the effects of the instability on the in-flight particle properties and the coating quality. The arc instability has been characterized by the arc voltage waveform.
High-speed video imaging has been used to capture the arc's dynamical behavior. A simple analytical model has been developed to quantitatively estimate the arc column diameter. The velocity of the plasma jet has been measured based on the propagation of arc fluctuations. The deterioration of the anode has been found to have a strong influence on the arc instability. These influences have been quantitatively described in terms of the boundary layer thickness and the arc operation mode. Fuzzy logic models have been used to diagnose the anode condition on-line and provide control strategies for constant particle heating. Effects of anode erosion on the jet turbulence have also been observed with a heated helium gas that simulates the jet. The heating and cooling processes of a substrate exposed to a plasma jet have been measured, and the influence of the substrate temperature on the coating porosity has been investigated. The results of this research contribute to the understanding of the details of the plasma spray process and help lay a solid foundation for process optimization and for the development of feedback control yielding consistent coating quality.
Duan, Zheng (2000-11-01)

116. [SciTech Connect]
A new method to measure the plasma potential in atmospheric dielectric barrier discharge (DBD) plasmas is developed for a new spraying DBD plasma source, which is sustained by electric fields generated by flowing plasmas at the outer region of the electrodes; a conventional electric probe cannot be applied because of arcing. The new technique measures the spatially averaged plasma potential by using a capacitive coupling method together with a calculation of the collisional sheath thickness.
Choi, Yong-Sup; Chung, Kyu-Sun; Jung, Yong Ho; You, Hyun-Jong; Lee, Myoung-Jae (Electric Probe Applications Laboratory (ePAL), Hanyang University, Seoul 133-791, Republic of Korea) (2005-01-01)

117. [SciTech Connect]
The authors propose a new method for laser acceleration of relativistic electrons using the leaky modes of a hollow dielectric waveguide. The hollow core of the waveguide can be either in vacuum or filled with uniform gases or plasmas. In the case of vacuum and gases, the TM01 mode is used for direct acceleration. In the case of plasmas, the EH11 mode is used to drive a longitudinal plasma wave for acceleration. Structure damage by the high-power laser is avoided by choosing a core radius much larger than the laser wavelength.
Xie, Ming (1998-06-01)

118. [Microsoft Academic Search]
Atmospheric plasma spraying has emerged as a cost-effective alternative to traditional sintering processes for solid oxide fuel cell (SOFC) manufacturing. However, the use of plasma spraying for SOFCs presents unique challenges, mainly due to the high porosity required for the electrodes and the fully dense coatings required for the electrolytes. By using optimized spray conditions combined with appropriate feedstocks, SOFC electrolytes [...]
Z. Tang; A. Burgess; O. Kesler

119. [Microsoft Academic Search]
Fine hydroxyapatite (HA) powder, synthesized using calcium nitrate and diammonium nitrate, was formulated with water and alcohol to obtain a suspension used to plasma spray coatings onto titanium substrates. The deposition process was optimized using statistical design of 2^n experiments with two variables: spray distance and electric power input to the arc plasma. The sprayed coatings were soaked in simulated body [...]
Romain d'Haese; Lech Pawlowski; Muriel Bigan; Roman Jaworski; Marc Martel (2010-01-01)

120. [PubMed]
The rheological behavior of suspensions containing vacuum freeze-dried and spray-dried starch nanoparticles was investigated to explore the effect of these two drying methods in producing starch nanoparticles, which were synthesized using high-pressure homogenization and a mini-emulsion cross-linking technique. Suspensions containing 10% (w/w) spray-dried and vacuum freeze-dried nanoparticles were prepared. Continuous shear viscosity tests, temperature sweep tests, frequency sweep tests and creep-recovery tests were carried out. The suspensions containing vacuum freeze-dried nanoparticles showed higher apparent viscosity within the shear rate range (0.1-100 s^-1) and temperature range (25-90 °C). The suspensions containing vacuum freeze-dried nanoparticles were found to have more shear-thinning and less thixotropic behavior compared to those containing spray-dried nanoparticles. In addition, the suspensions containing vacuum freeze-dried particles had a stronger elastic structure. However, the suspensions containing spray-dried nanoparticles had more stiffness and a greater tendency to recover from deformation. PMID: 22944440
Shi, Ai-min; Li, Dong; Wang, Li-jun; Adhikari, Benu (2012-07-31)

121. [SciTech Connect]
Coating porosity is an important parameter to optimize for plasma-sprayed ceramics intended for service in molten metal environments. With too much porosity, the coatings may be infiltrated by the molten metal, causing corrosive attack of the substrate or destruction of the coating upon solidification of the metal. With too little porosity, the coating may fail due to its inability to absorb thermal strains. This study describes the testing and analysis of tungsten rods coated with aluminum oxide, yttria-stabilized zirconia, yttrium oxide, and erbium oxide deposited by atmospheric plasma spraying.
The samples were immersed in molten aluminum and analyzed after immersion. One of the ceramic materials used, yttrium oxide, was heat treated at 1000 C and 2000 C and analyzed by X-ray diffractography and mercury intrusion porosimetry. Slight changes in crysl nl structure and significant changes in porosity were observed after heat treatments. Hollis, K. J. (Kendall J.); Peters, M. I. (Maria I.); Bartram, B. D. (Brian D.) 2002-01-01 122 NASA Astrophysics Data System (ADS) In solution precursor plasma spray chemical precursor solutions are injected into a standard plasma torch and the final material is formed and deposited in a single step. This process has several attractive features, including the ability to rapidly explore new compositions and to form amorphous and metastable phases from molecularly mixed precursors. Challenges include: (a) moderate deposition rates due to the need to evaporate the precursor solvent, (b) dealing on a case by case basis with precursor characteristics that influence the spray process (viscosity, endothermic and exothermic reactions, the sequence of physical states through which the precursor passes before attaining the final state, etc.). Desirable precursor properties were identified by comparing an effective precursor for yttria-stabilized zirconia with four less effective candidate precursors for MgO:Y2O3. The critical parameters identified were a lack of major endothermic events during precursor decomposition and highly dense resultant particles. Muoto, Chigozie K.; Jordan, Eric H.; Gell, Maurice; Aindow, Mark 2011-06-01 123 NASA Astrophysics Data System (ADS) The effect of grinding on the surface layer properties of ceria and yttria partially stabilized zirconia plasma-sprayed coatings (CePSZ, YPSZ, respectively) has been studied by X-ray diffraction methods. For this purpose, the modified model of line broadening analysis has been derived. 
The model considers elastic anisotropic properties along with more random paracrystal imperfections, both affecting X-ray line broadening. Grinding-induced microstructural changes were also studied using an estimation from the quantitative Orientation Distribution Function (ODF) texture. It was concluded, based on this work, that CePSZ ceramic is less mechanically stable compared to YPSZ. Consequently, more beneficial mechanical properties of a ground surface layer can be expected for CePSZ plasma-sprayed coatings. Zeman, J.; Cepera, M.; Musil, J.; Filipensky, J. 1993-12-01 124 SciTech Connect Thermally activated batteries use electrodes that are typically fabricated by cold pressing of powder. In the LiSi/FeS2 system, natural (mineral) pyrite is used for the cathode. In an effort to increase the energy density and specific energy of these batteries, flame and plasma spraying to form thin films of pyrite cathodes were evaluated. The films were deposited on a 304 stainless steel substrate (current collector) and were characterized by scanning electron microscopy and x-ray dlfllaction. The films were electrochemically tested in single cells at 5000C and the petiormance compared to that of standard cells made with cold-pressed powders. The best results were obtained with material deposited by de-arc plasma spraying with a proprietq additive to suppress thermal decomposion of the pyrite. Guidotti, R.A.; Reinhardt, F.W. 1998-10-30 125 NASA Astrophysics Data System (ADS) Response surface methodology was used to describe empirical relationships among three principal independent variables that control the plasma spraying process. The torch-substrate distance, the amount of hydrogen in the primary gas (argon), and the powder feed rate were studied. A number of dependent variables (responses) were determined, including the deposited layer roughness, density, hardness, chemical composition, and erosion rate. 
The technique facilitates mapping of the responses within a limited experimental region without much prior knowledge of the process mechanisms. The maps allow process optimization and selection of operating conditions to achieve the desired specifications of the plasma sprayed coating. To illustrate the approach, a simple system of WC-12%Co was deposited on a mild steel substrate. The resulting response surfaces were used to define optimum, or robust, deposition parameters. Troczynski, T.; Plamondon, M. 1992-12-01 126 NASA Astrophysics Data System (ADS) Three kinds of plasma sprayed coatings on nuclear valves of FRAMATOME, which are the cobalt-based Stellite, the nickel-based Eutroloy, and the iron-based Cenium, were remelted with a 5 kW CO2 laser. The aim is to build-up a fine homogeneous metallurgical structure onto the hardface, with a uniform thickness and free of cracks in order to improve the wear and galling properties of the coatings. It was concluded from the experimental results that for plasma sprayed Stellite coating, satisfactory results can be obtained by carefully selecting the process parameters, preheating of the substrate is not needed; and for the Eutroloy coating, preheating of the substrate is necessary to get rid of cracking during laser remelting. Laser remelting is not an adequate process for Cerium coating because it is very difficult to avoid cracks on the remelted layer. Li, Yanxiang; Steen, William M.; Sharkey, Sarah J. 1993-05-01 127 National Technical Information Service (NTIS) Plasma spraying allows making ceramic massive pieces. Nevertheless, a rough-sprayed piece is porous and has poor mechanical properties until it is processed by thermal treatment. It then acquires high thermomechanical properties, including high wearing pr... E. Rigal T. Priem E. 
Vray 1995-01-01 128 Microsoft Academic Search A series of plasma-sprayed coatings has been given a preliminary evaluation to assess the potential of this class of materials in fusion reactor applications. TiC, TiB, Be and VBe coatings on copper and stainless steel were tested for coating adherence, ion erosion resistance and susceptability to arc erosion. The coatings, in general, display a good resistance to thermal shock failure. A. W. Mullendore; D. M. Mattox; J. B. Whitley; D. J. Sharp 1979-01-01 129 Microsoft Academic Search The effect of grinding on the surface layer properties of ceria and yttria partially stabilized zirconia plasma-sprayed coatings\\u000a (CePSZ, YPSZ, respectively) has been studied by X-ray diffraction methods. For this purpose, the modified model of line broadening\\u000a analysis has been derived. The model considers elastic anisotropic properties along with more random paracrystal imperfections,\\u000a both affecting X-ray line broadening. Grinding-induced microstructural J. Zeman; M. Cepera; J. Musil; J. Filipensky 1993-01-01 130 Microsoft Academic Search The low pressure plasma spray (LPPS) process has been utilized in the development and fabrication of metal\\/metal, metal\\/carbide, and metal\\/oxide composite structures; including particulate dispersion and both continuous and discontinuous laminates. This report describes the LPPS process and the development of copper\\/tungsten microlaminate structures utilizing this processing method. Microstructures and mechanical properties of the Cu\\/W composites are compared to conventionally R. G. Castro; P. W. Stanek 1988-01-01 131 Microsoft Academic Search The plasma spraying process of hydroxyapatite (HA) suspension was optimized in order to obtain possibly dense and well adherent coatings onto aluminum and titanium alloy substrates. 
The process variables included the suspension liquid (water, ethanol or their mixture), the pneumatic pressure applied to inject the suspension (0.2 to 0.8 bar), the type of injection (nozzle or atomizer), and the geometry of suspension injection to
Stefan Kozerski; Lech Pawlowski; Roman Jaworski; Francine Roudet; Fabrice Petit 2010-01-01

132. Microsoft Academic Search
Solution precursor plasma spray (SPPS) synthesis is a simple, single-step, and rapid technique for synthesizing nano-ceramic materials from solution precursors. This innovative method uses molecularly mixed precursors as liquids, avoiding a separate processing method for the preparation of powders and enabling the synthesis of a wide range of metal oxide powders and coatings. Also, this technique is considered to be
E. Brinley; K. S. Babu; S. Seal 2007-01-01

133. NASA Astrophysics Data System (ADS)
Fine, home-synthesized hydroxyapatite powder was formulated with water and alcohol to obtain a suspension used to plasma spray coatings onto a titanium substrate. The deposition process was optimized using a statistical design of 2^n experiments with two variables: spray distance and electric power input to the plasma. X-ray diffraction (XRD) was used to determine quantitatively the phase composition of the obtained deposits. Raman microscopy and electron probe microanalysis (EPMA) enabled localization of the phases at different positions of the coating cross sections. A transmission electron microscopy (TEM) study associated with energy-dispersive X-ray spectroscopy (EDS) enabled visualization and analysis of a two-zone microstructure. One zone contained crystals of hydroxyapatite, tetracalcium phosphate, and a phase rich in calcium oxide. This zone included lamellas, usually observed in thermally sprayed coatings.
The other zone contained fine hydroxyapatite grains that correspond to nanometric and submicrometric solids from the suspension that were agglomerated and sintered in the cold regions of the plasma jet and on the substrate.
Podlesak, Harry; Pawlowski, Lech; D'Haese, Romain; Laureyns, Jacky; Lampke, Thomas; Bellayer, Severine 2010-03-01

134. Microsoft Academic Search
Objective. A preliminary report from this study showed that hydroxyapatite-coated (HA) titanium plasma-sprayed (TPS) cylinder implants had fewer failures than TPS cylinder implants before prosthetic loading. The purpose of this article is to report the long-term success associated with the two systems. In addition, local and systemic factors that may influence the success or failure of the implants were analyzed.
John D. Jones; John Lupori; Joseph E. Van Sickels; Wayne Gardner 1999-01-01

135. PubMed
Biomedical coatings generally have to satisfy specific requirements such as a high degree of crystallinity (for positive biological responses), good coating adhesion and optimal porosity. These are necessary to enhance biocompatibility, accelerate post-operative healing and improve fixation. Thermal spray processes have been frequently used to deposit functionally active biomedical coatings, such as hydroxyapatite (HA), onto prosthetic implants. The benefits of HA materials in coated implants have been widely acknowledged, but the occurrence of several poor performances has generated concerns over the consistency and reliability of thermally sprayed HA coatings. Recent investigations using HA coatings have shown that process-related variability has a significant influence on coating characteristics such as phase composition, structure and chemical composition, and on performance measures such as bioresorption, degradation and bone apposition. Variation in process parameters such as powder morphology can induce microstructural and mechanical inconsistencies that have an effect on the service performance of the coating.
In order to reach an acceptable level of reliability, it may be necessary to control existing variability in commercially available HA feedstock. In addition, certain opposing factors severely constrain the means to achieve the necessary coating conditions via thermal spraying alone, thereby creating the need to introduce other innovative or secondary treatment stages to attain the desired results. This paper highlights some of the problems associated with plasma spray coating of HA and suggests that tailoring the powder feedstock morphology and properties through suitable conditioning processes can aid the deposition efficiency and produce an acceptable coating structure. PMID:8991486
Cheang, P; Khor, K A 1996-03-01

136. NASA Astrophysics Data System (ADS)
The vacuum kinetic spray (VKS) method is a relatively advanced technology by which thin and dense ceramic coatings can be fabricated via the high-speed impact of submicron-sized particles at room temperature. However, the actual bonding mechanism associated with the VKS process has not yet been elucidated. In this study, AlN powders were pretreated through ball-milling and heat-treatment processes in order to investigate the effects of microstructural changes on the deposition behavior. It was found that ball-milled and heat-treated powder with polycrystals formed by partially aligned dislocations showed considerably higher deposition rates when compared to only ball-milled powder with tangled dislocations. Therefore, in the VKS process, the deposition behavior is shown to be affected not only by the particle size and defect density, but also by the microstructure of the feedstock powder.
Park, Hyungkwon; Heo, Jeeae; Cao, Fei; Kwon, Juhyuk; Kang, Kicheol; Bae, Gyuyeol; Lee, Changhee 2013-04-01

137. NASA Astrophysics Data System (ADS)
The dissertation presents research results from integrated studies of the process, structure and magnetic properties of plasma-sprayed ferrite-metal composites.
These magnetic composites are considered as core materials for miniaturized high-frequency planar inductors, thick-film magnetoresistive sensors and, potentially, magnetostrictive sensors. Offering the advantages of low substrate temperature during processing, high-throughput production capability, cost efficiency and minimal interfacial reaction between the ferrite and metal phases, plasma spraying can be considered a promising route for fabricating magnetic composites with industrial applications. A multitude of experimental techniques, including phase analysis, microstructure observation, magnetic and electrical property measurements and numerical modeling, have been applied in this investigation. A number of fundamental attributes in terms of process-microstructure-property relationships have been investigated by a systematic processing approach through the framework of process maps. Such studies can provide insight into process control and optimization. Three types of magnetic composites have been fabricated using plasma spraying. Rocksalt-structured monoxides form in the as-sprayed coatings due to deoxidation of the ferrite as well as oxidation of the metallic particles. Random cation distribution and the presence of monoxides and microstructural defects degrade the magnetic and electrical properties of the composites. Low-temperature air annealing can improve these properties by forming insulating trivalent oxides (hematite) and ordering the cations. Functional properties such as magnetoresistance and magnetostriction of comparable value to bulk materials can be obtained after annealing. A salient finding is the transition from giant magnetoresistance (GMR) to anisotropic magnetoresistance (AMR) at the percolation threshold, which has been reported for the first time in magnetic composites. Thermally sprayed composite coatings are found to have a much smaller percolation threshold due to the anisotropy of splats.
Using the effective medium approximation, the relationship between the percolation threshold and the aspect ratio has been derived. The experimental results are in good agreement with the simulation results. This is the first time that the percolation phenomenon in thermally sprayed composites has been studied quantitatively and compared with theory.
Liang, Shanshan

138. Microsoft Academic Search
Plasma spraying is a commonly used technique to apply thin calcium phosphate ceramic coatings. Special consideration is given to retaining the original structure of CPC particles. However, changes are possible. Thus this study focused on plasma-spraying-induced changes in the material characteristics of commercial coatings and their influence on in vitro dissolution. All analysed coatings were found to undergo significant plasma
S. R. Radin; P. Ducheyne 1992-01-01

139. Microsoft Academic Search
In the context of nanometre-sized structured materials and the perspectives of their technological applications, plasma spray technology is developing to master the coating microstructure at a nanometre scale. This paper is an attempt to describe (i) the latest advances in the control of the conventional plasma spray process that requires the monitoring of both the plasma jet fluctuation level
P. Fauchais; A. Vardelle 2011-01-01

140. Microsoft Academic Search
The control of phase transformations in plasma-sprayed hydroxyapatite (HA) coatings is critical to the clinical performance of the material. This paper reports the use of high-temperature X-ray diffraction (HT-XRD) to study, in situ, the phase transformations occurring in plasma-sprayed HA coatings. The coatings were prepared using different spray power levels (net plasma power of 12 and 15 kW) and
S. W. K. Kweh; K. A. Khor; P. Cheang 2002-01-01

141. NASA Astrophysics Data System (ADS)
The residual stresses in a ceramic sheet material used for turbine blade tip gas path seals were estimated.
These stresses result from the plasma spraying process, which leaves the surface of the sheet in tension. To determine the properties of plasma-sprayed ZrO2-Y2O3 sheet material, its load-deflection characteristics were measured. Estimates of the mechanical properties for sheet materials were found to differ from those reported for plasma-sprayed bulk materials.
Hendricks, R. C.; McDonald, G.; Mullen, R. L. 1983-02-01

142. Microsoft Academic Search
Plasma spray deposition of carbide/metal hardcoatings is difficult because complex chemical transformations can occur while spraying, especially in the presence of oxygen. A commercial plasma spray torch has been modified to simultaneously inject carbide powder and a metal alloy powder at two different locations in the plasma stream. Composite hardcoatings of tungsten carbide/cobalt with a nickel-base alloy matrix have been
W. J. Lenling; M. F. Smith; J. A. Henfling 1990-01-01

143. Microsoft Academic Search
The techniques of plasma spraying are suitable for deposition of metals, ceramics or composites. Atmospheric plasma spraying of metals is accompanied by their oxidation. The oxidation of nickel during its spraying gives rise to NiO. During the flight of molten nickel particles in the plasma plume, the first stage of the oxidation reaction takes place. To determine the amount of
K. Voleník; P. Ctibor; J. Dubský; P. Chráska; J. Horník 2004-01-01

144. Microsoft Academic Search
Atmospheric plasma spray is a fast and economical process for depositing yttria-stabilized zirconia (YSZ) electrolyte for solid oxide fuel cells. YSZ powders have been used to prepare plasma-sprayed thin ceramic films on the metallic substrate employing plasma spray technology at atmospheric pressure. Alumina doping was employed to improve the structural characteristics and electrical properties of YSZ.
The effect of
Amin Mirahmadi; Mohammad Pourmalek 2010-01-01

145. NASA Astrophysics Data System (ADS)
Forming and cutting tools are subjected to intense wear. Usually, they either undergo superficial heat treatments or are covered with various materials with high mechanical properties. In recent years, thermal spraying has been used increasingly in engineering because of the large range of materials that can be used for the coatings. Titanium nitride is a ceramic material with high hardness which is used to cover cutting tools, increasing their lifetime. The paper presents the results obtained after deposition of titanium nitride layers by reactive plasma spraying (RPS). Titanium powder was used as the deposition material and a titanium alloy (Ti6Al4V) as the substrate. Macroscopic and microscopic (scanning electron microscopy) images of the deposited layers and the X-ray diffraction patterns of the coatings are presented. A demonstration program with deposited layers of thickness between 68.5 and 81.4 μm has been achieved and is presented.
Roşu, Radu Alexandru; Şerban, Viorel-Aurel; Bucur, Alexandra Ioana; Popescu, Mihaela; Uţu, Dragoş 2011-01-01

146. SciTech Connect
Molten metal environments pose a special demand on materials due to high-temperature corrosion effects and thermal-expansion-mismatch-induced stress effects. A solution that has been successfully employed is the use of a base material for mechanical strength and a coating material for chemical compatibility with the molten metal. The work described here used such an approach, coating tungsten rods with aluminum oxide, yttria-stabilized zirconia, yttrium oxide, and erbium oxide deposited by atmospheric plasma spraying. The ceramic materials were deposited under varying conditions to produce different structures. Measurement of particle characteristics was performed to correlate with material properties.
The coatings were tested in a thermal cycling environment to simulate the metal melting cycle expected in service. Results of the testing indicate the effect of material composition and spray conditions on the thermal cycle crack resistance of the coatings.
Hollis, K. J. (Kendall J.); Peters, M. I. (Maria I.); Bartram, B. D. (Brian D.) 2002-01-01

147. NASA Astrophysics Data System (ADS)
Plasma-sprayed WC-Co coatings are used extensively in a variety of wear-resistant applications. The quality of these sprayed coatings depends greatly on the temperature and velocity of the powder particles impacting the substrate. Because it is both expensive and difficult to experimentally determine these particle parameters, the present study deals with a theoretical investigation of particle heatup and acceleration during plasma spraying of WC-Co based on a recently developed model. The effect of WC-Co particle size on the evolution of particle temperature and velocity is examined through calculations performed under typical spraying conditions. The implications of the powder particles assuming an off-axis trajectory during their traverse through the plasma flame are also discussed.
Joshi, S. V. 1993-06-01

148. Microsoft Academic Search
Summary form only given. Recently, new concepts in the field of microwave radiation generation have led to the possibility of major advances on the frontier of microwave vacuum devices. These concepts include the emerging technology of DC to AC radiation converters, or DARC sources, ionization fronts for frequency upshifting, and conversion of extremely large plasma wakes into a Cherenkov radiation
J. R. Hoffman; P. Muggli; M. A. Gundersen; W. B. Mori; C. Joshi; T. Katsouleas 1999-01-01

149. Microsoft Academic Search
As presently designed, the Burning Plasma Experiment vacuum vessel will be segmentally fabricated and assembled by bolted joints in the field.
Due to geometry constraints, most of the bolted joints have significant eccentricity, which causes the joint behavior to be sensitive to joint clamping forces. Experience indicates that, as a result of this eccentricity, the joint will tend to open
P. K. Hsueh; M. Z. Khan; J. Swanson; T. Feng; S. Dinkevich; J. Warren 1991-01-01

150. PubMed
The corrosion behavior of titanium with vacuum plasma sprayed titanium coatings and with anodized surfaces, both with and without polymeric bone cement, was evaluated. Electrochemical extraction tests were carried out with subsequent analysis of the electrolyte by ICP-MS in order to verify our hypothesis of the ionic permeability of the polymer cement. The complexity of the situation resides in the existence of two interfaces: electrolyte-polymer and polymer-metal. The surface preparation (treatment of the surface) plays an important role in the corrosion resistance of titanium. The electrochemical magnitudes that were examined reveal that the plasma spray surfaces have the lowest corrosion resistance. The cement, in spite of having reduced electrical conductivity in comparison to the metal, is an ionic transporter, and is therefore capable of participating in the corrosion process. In the present study, we in fact observed crevice corrosion at the metal-cement interface. In the case of plasma spray surfaces, a process of diffusion of titanium particles into the electrolyte could accompany the crevice corrosion. In this study, we have shown that there is a corrosion process at the surface of the titanium through the cement, which results on the one hand in the formation of titanium cations and on the other hand in the growth of a passive layer on the titanium. In conclusion, we identified two principal factors that influence the corrosion process: [1] the type of surface treatment of the titanium, and [2] the ionic conductivity of the cement.
There is indeed ionic transport through the cement, as evidenced by the presence of titanium in the electrolyte solution (ICP-MS analysis) and chloride at the surface of the titanium sample (EDX analysis). We show that the polymer cement is an ionic conductor and participates in the corrosion of the embedded titanium. We cannot deduce from our results, however, whether the polymer itself possesses corrosive properties. Long-term experiments will be necessary to study the degradation behavior of the polymer cement. PMID:12895575
Reclaru, L; Lerf, R; Eschler, P-Y; Blatter, A; Meyer, J-M 2003-08-01

151. Microsoft Academic Search
The focus of this study is amorphous phase formation in alumina-yttria stabilized zirconia composite coatings during thermal spray deposition. The investigated processes include conventional and suspension plasma spraying. The focus of this paper is on suspension spraying, while making a comparison of the two processes. Through the study of the in-flight collected particles and coatings produced from the
F. Tarasi; M. Medraj; A. Dolatabadi; J. Oberste-Berghaus; C. Moreau 2011-01-01

152. National Technical Information Service (NTIS)
A laser velocimeter has been used to measure spray particle velocities in a low-pressure plasma spray system at chamber pressures ranging from 6.7 to 80 kPa (50 to 600 Torr). For Al2O3 spray powder with a mean diameter of 44 μm, peak particle...
M. F. Smith R. C. Dykhuizen 1987-01-01

153. PubMed
Free-standing structures of a hypereutectic aluminum-23 wt% silicon nanocomposite with multiwalled carbon nanotube (MWCNT) reinforcement have been successfully fabricated by two different thermal spraying techniques, viz. Plasma Spray Forming (PSF) and High Velocity Oxy-Fuel (HVOF) Spray Forming. A comparative microstructural and mechanical property evaluation of the two thermally spray formed nanocomposites has been carried out.
The presence of nanosized grains in the Al-Si alloy matrix and physically intact, undamaged carbon nanotubes was observed in both nanocomposites. Excellent interfacial bonding between the Al alloy matrix and the MWCNTs was observed. The elastic modulus and hardness of the HVOF-sprayed nanocomposite are found to be higher than those of the PSF-sprayed composites. PMID:17450788
Laha, T; Liu, Y; Agarwal, A 2007-02-01

154. SciTech Connect
A measurement technique for simultaneously obtaining the size, velocity, temperature, and relative number density of particles entrained in high-temperature flow fields is described. In determining the particle temperature from a two-color pyrometry technique, assumptions about the relative spectral emissivity of the particle are required. For situations in which the particle surface undergoes chemical reactions, the assumption of grey-body behavior is shown to introduce large temperature measurement uncertainties. Results from isolated, laser-heated, single-particle measurements and in-flight data from the plasma spraying of WC-Co are presented. 10 refs., 5 figs.
Fincke, J.R.; Swank, W.D. (EG and G Idaho, Inc., Idaho Falls, ID (USA)); Bolsaitis, P.P.; Elliott, J.F. (Massachusetts Inst. of Tech., Cambridge, MA (USA)) 1990-01-01

155. NASA Astrophysics Data System (ADS)
Titanium carbonitride (TiCN), a new high-hardness and wear-resistant material, has been applied widely in many fields. In this study, a TiCN coating was first fabricated using reactive plasma spraying (RPS) technology in a reaction chamber filled with nitrogen and acetylene (N2 and C2H2). The microstructure and phase composition of the coatings were analyzed by SEM and XRD. Further chemical information on the surface was obtained by XPS. The Vickers microhardness of the TiCN coating is 1659.11 HV100g, and the cross-section of the coating shows a conspicuous indentation size effect.
Zhu, Lin; He, Jining; Yan, Dianran; Dong, Yanchun; Xiao, Lisong

156. Microsoft Academic Search
Atmospheric plasma spraying (APS) is a most versatile thermal spray method for depositing alumina (Al2O3) coatings, and detonation gun (D-gun) spraying is an alternative thermal spray technology for depositing such coatings with extremely good wear characteristics. The present study is aimed at comparing the characteristics of Al2O3 coatings deposited using the above techniques by using Taguchi experimental design. Alumina coating
P. Saravanan; V. Selvarajan; M. P. Srivastava; D. S. Rao; S. V. Joshi; G. Sundararajan 2000-01-01

157. NASA Astrophysics Data System (ADS)
Thermal spray technology is an alternative material fabrication technique to the traditional solidification and powder processing methods for producing thick coatings and bulk free-forms. Extensive research has enabled the extension of this technique to a wider range of material classes, including polymers, bioceramics and functionally gradient materials. A key area of application of thermal spraying is the formation of thermal barrier coatings for turbine components used in power generation and propulsion. Continuing research intends to improve the quality of coatings produced by this technique to compete with other technologies like physical vapor deposition, making use of advantages such as the higher throughput that thermal spraying affords. Understanding the adhesion of plasma-sprayed coatings is essential to improving the service life of coated components. Research has progressively focused on the nature of the unique building blocks of plasma-sprayed coatings, called splats. The current research intends to characterize the microadhesion at the splat-substrate interface using nondestructive methods based on the analysis of images obtained using the scanning electron microscope (SEM).
A model system of yttria-stabilized zirconia, a traditional thermal barrier material, on a steel substrate is chosen for the study. Two techniques are developed: one based on the analysis of through-thickness crack distribution and fragmentation of thin brittle films on ductile substrates, and one based on the analysis of interface cracking. A novel imaging technique is used to determine the extent of interface cracking from the contrast observed in SEM images. Based on the understanding of ceramic splat formation on metal substrates, a shear lag theory of tensile residual stress generation is used to explain the fragmentation observed in splats. An earlier analysis of cracking in brittle films due to uniaxial stress is extended to the present case of equibiaxial thermal residual stress. Three geometric features are identified to analyze the observed fragment geometries and are correlated with local interfacial adhesion in splats. The measurements are extracted from secondary and back-scattered electron images using image segmentation software. Measurement of cracked interfacial areas was accomplished using charging contrast in the secondary and specimen current images of splats. Based on these measurements it was found that microadhesion decreases within splats from the center to the periphery. This variation in adhesion was attributed to the temperature and pressure distribution at the splat-substrate interface during formation.
Rangarajan, Srinivasan 2000-10-01

158. NASA Astrophysics Data System (ADS)
The thermal behavior of a 0.5 mm thick tungsten coating with a multilayer interface of tungsten (W) and rhenium (Re), coated on a CFC (CX-2002U) substrate by the vacuum plasma spraying (VPS) technique, was examined by annealing with an electron beam thermal load facility between 1200 °C and 2000 °C. Changes in the microstructure were observed and the chemical composition was analyzed by EDS after annealing. It was observed that remarkable recrystallization of VPS-W occurred above 1400 °C.
The structure of the W and Re multilayers became obscure due to the mutual diffusion of W, Re and C above 1600 °C and finally disappeared after annealing at 2000 °C for one hour. Very hard tungsten carbides are formed at the interface above 1600 °C, and they broadened with increasing annealing temperature and time.
Liu, X.; Tamura, S.; Tokunaga, K.; Yoshida, N.; Noda, N. 2003-06-01

159. Microsoft Academic Search
In this study, substrates of 100 mm × 25 mm × 2 mm SUS 420 stainless steel coupons were first sprayed with a Ni-22Cr-10Al-1Y bond coat and then with a 19.5 wt.% yttria-stabilized zirconia (YSZ) top coat using an air-plasma spray system. After that, the plasma-sprayed yttria-stabilized zirconia thermal barrier coatings (TBCs) were glazed using a pulsed CO2 laser. The subsequent effects of laser glazing on
Pi-Chuen Tsai; Jiing-Herng Lee; Chi-Lung Chang 2007-01-01

160. Microsoft Academic Search
Heat-resistant coatings prepared by two different spraying methods, atmospheric pressure plasma spraying (APS) and high-pressure plasma spraying (HPPS), were tested using tungsten carbide indenters of different diameters, for the purpose of proposing the best-suited method of indentation testing. It was found that with the APS method, the indentation load-depth curve gave an indentation depth and a residual depth smaller
Yoshio Akimune; Kazuo Matsuo; Satoshi Sodeoka; Tatsuo Sugiyama; Satoshi Shimizu 2004-01-01

161. Microsoft Academic Search
Suspension plasma spray (SPS) is a novel process for producing nano-structured coatings with metastable phases using significantly smaller particles compared to conventional thermal spraying. Considering the complexity of the system, there is an extensive need to better understand the relationship between plasma spray conditions and the resulting coating microstructure and defects. In this study, an alumina/8 wt.% yttria-stabilized zirconia was deposited
F. Tarasi; M. Medraj; A. Dolatabadi; J. Oberste-Berghaus; C.
Moreau 2008-01-01

162. Microsoft Academic Search
Conventional thermal spray processes such as atmospheric plasma spraying (APS) have to use easily flowable powders with sizes up to 100 μm. This leads to certain limitations in the achievable microstructural features. Suspension plasma spraying (SPS) is a new, promising processing method which employs suspensions of sub-micrometer particles as feedstock. Therefore much finer grain and pore sizes as well as dense
Holger Kassner; Roberto Siegert; Dag Hathiramani; Robert Vassen; Detlev Stoever 2008-01-01

163. Microsoft Academic Search
The study aimed at optimizing the suspension plasma spraying of TiO2 coatings obtained using different suspensions of fine rutile particles in water solution onto aluminum substrates. The spraying experiments were designed using a 2^3 full factorial plan. The plan enabled finding the effects of three principal parameters, i.e. electric power input to the plasma, spray distance, and suspension feed
Roman Jaworski; Lech Pawlowski; Francine Roudet; Stefan Kozerski; Agnès Le Maguer 2008-01-01

164. Microsoft Academic Search
In this study, the liquid precursor plasma spraying process was used to manufacture P2O5-Na2O-CaO-SiO2 bioactive glass-ceramic coatings (BGCCs), where sol and suspension were used as feedstocks for plasma spraying. The effect of precursor and spray parameters on the formation and crystallinity of the BGCCs was systematically studied. The results indicated that coatings with higher crystallinity were obtained using the sol precursor,
Yanfeng Xiao; Lei Song; Xiaoguang Liu; Yi Huang; Tao Huang; Jiyong Chen; Yao Wu; Fang Wu 2011-01-01

165. Microsoft Academic Search
Mullite is promising as a protective coating for silicon-based ceramics in aggressive high-temperature environments. Conventionally plasma-sprayed mullite on SiC tends to crack and debond on thermal cycling.
It is shown that this behavior is due to the presence of amorphous mullite in the conventionally sprayed mullite. Heating the SiC substrate during the plasma spraying eliminated the amorphous phase and produced coatings with
Kang N. Lee; Robert A. Miller; Nathan S. Jacobson 1995-01-01

166. Microsoft Academic Search
The growing interest in SOFCs leads research towards new materials but also towards processes which could be more effective and less expensive for producing fuel cells. Plasma spraying is one of these interesting processes due to its ability to manufacture the whole cell with the same process. The present study uses the suspension plasma spraying (SPS) process, which consists in injecting
R. Rampon; O. Marchand; C. Filiatre; G. Bertrand 2008-01-01

167. Microsoft Academic Search
Suspension plasma spraying (SPS) is a modification of plasma spray processes that uses small feedstock powders suspended in a liquid to rapidly produce fully sintered, thin ceramic coatings with good microstructural control and no need for post-deposition heat treatments. This technique has been proposed as a potential next-generation manufacturing method to fabricate metal-supported solid oxide fuel cells (SOFC).
D. Waldbillig; O. Kesler 2011-01-01

168. Microsoft Academic Search
Suspension plasma spray is a promising technique that uses fine particles dispersed in a liquid as the feedstock material instead of the dry powder used in conventional plasma spraying, and has been implemented here to produce layers with appropriate morphologies and microstructures for SOFC applications. This study uses a pressurized gas delivery system to feed the slurry through a homemade two-fluid atomizing nozzle
O. Marchand; P. Bertrand; J. Mougin; C. Comminges; M.-P. Planche; G. Bertrand 2010-01-01

169. Microsoft Academic Search
Estimation of microfractures in the ceramic coating layer during the plasma spraying process is critical for its reliability.
The acoustic emission (AE) method enables in-process monitoring of such microfractures. A laser AE method was adopted to realize monitoring of the plasma spraying process by non-contact detection of AE with a laser interferometer. Also, a high-performance method for noise reduction of the laser AE waveform was
Kaita Ito; Manabu Enoki; Makoto Watanabe; Seiji Kuroda 2008-01-01

170. NASA Astrophysics Data System (ADS)
Unlike atmospheric plasma spraying (APS), very low pressure plasma spraying (VLPPS) can only weakly heat the feed materials at the plasma-free region exit of the nozzle. Most current VLPPS methods have adopted a high-power plasma gun, which operates at high arc currents of up to 2500 A to remedy the lower heating ability, causing a series of problems for both the plasma torch and the associated facility. Based on the Knudsen number and the pressure distribution inside the nozzle in a low-pressure environment, a plasma torch was designed with a separated anode and nozzle, and with the powder fed to the plasma jets inside the nozzle intake. In this study, the pressures in the plasma gas intake, in the nozzle intake and outside the plasma torch were measured using an enthalpy probe. In practice, SUS 316 stainless steel coatings were prepared at plasma currents of 500-600 A, an arc voltage of 50 V and a chamber pressure of 1000 Pa; the results indicated that coatings with an equiaxed microstructure could be deposited under proper conditions.
Gao, Yang; Yang, De Ming; Gao, Jianyi 2012-06-01

171. Microsoft Academic Search
The development of coating formation processes involving electric arcs depends on process stability and the capacity to ensure constant reproducibility of coating properties. This is particularly important when considering suspension plasma spraying or solution precursor plasma spraying. Submicron particles closely follow plasma instabilities and receive nonhomogeneous plasma treatment.
Recently, it has been shown that arc voltage fluctuations in direct-current V. Rat; J. F. Coudert 2011-01-01 172 SciTech Connect Material modifications by ion implantation, dry etching, and micro-fabrication are widely used technologies, all of which are performed in vacuum, since ion beams at the energies used in these applications are completely attenuated by foils or by long differentially pumped sections, which are currently used to interface between vacuum and atmosphere. A novel plasma window, which utilizes a short arc as a vacuum-atmosphere interface, has been developed. This window provides for sufficient vacuum-atmosphere separation, as well as for ion beam propagation through it, thus facilitating non-vacuum ion material modification. Hershcovitch, A. 1997-09-07 173 NASA Astrophysics Data System (ADS) Though calcium phosphate (CaP) coated implants are commercially available, their acceptance is still not widespread due to challenges related to weak interfacial bonding between metal and ceramic, and the low crystallinity of hydroxyapatite (HA). The objectives of this research are to improve the interfacial strength, crystallinity, phase purity and bioactivity of CaP coated metallic implants for orthopaedic applications. The rationale is that forming a diffuse and gradient metal-ceramic interface will improve the interfacial strength. Moreover, reducing the CaP particles' exposure to high temperature during coating preparation can lead to improvement in both the crystallinity and phase purity of CaP. In this study, laser engineered net shaping (LENS(TM)) was used to coat Ti metal with CaP. LENS(TM) processing enabled generation of a Ti+TCP (tricalcium phosphate) composite coating with a diffused interface, which also increased the coating hardness to 1049±112 Hv compared to a substrate hardness of 200±15 Hv. In vitro bone cell-material interaction studies confirmed the bioactivity of the TCP coatings.
Antimicrobial properties of the TCP coatings were improved by silver (Ag) electrodeposition. Along with LENS(TM), radio frequency induction plasma spray, equipped with a supersonic plasma nozzle, was used to prepare HA coatings on Ti with improved crystallinity and phase purity. The coating was made of multigrain HA particles of 200 nm in size, which consisted of 15-20 nm HA grains. In vitro bone cell-material interaction and in vivo rat model studies confirmed the HA coatings to be bioactive. Furthermore, incorporation of Sr2+ improved the bone cell interaction of the HA coatings. A combination of LENS(TM) and plasma spray was used to fabricate compositionally graded HA coatings on Ti where the microstructure varied from pure HA at the surface to the pure Ti substrate, with a diffused Ti+TCP composite region in between. The plasma spray system was used to synthesize spherical HA nano powder from HA sol, where the production rate was 20 g/h, which is only 16% of the total powder produced. The effects of Sr2+ and Mg2+ doping on bone cell-CaP interaction were further studied with osteoclast cells. Mg2+ doping was found to be an effective way of controlling osteoclast differentiation. Roy, Mangal 174 SciTech Connect Plasma sprayable grade zirconia powders doped with various mol% of yttria (0, 2, 3, 4, 6, 8 and 12 mol%) were synthesized by a chemical co-precipitation route. The co-precipitation conditions were adjusted such that the powders possessed good flowability in the as-calcined condition, thus avoiding an agglomeration step such as spray drying. Identical plasma spray parameters were used for plasma spraying all the powders on stainless steel plates. The powders and plasma sprayed coatings were characterized by X-ray diffractometry, scanning electron microscopy and Raman spectroscopy.
Zirconia powders are susceptible to phase transformations when subjected to very high temperatures during plasma spraying; since XRD is insensitive to the presence of some non-crystalline phases, Raman spectroscopy was used as an important complementary tool. The microstructure of the plasma sprayed coatings showed a bimodal distribution containing fully melted and unmelted zones. The microhardness and wear resistance of the plasma sprayed coatings were determined. Among the plasma sprayed coatings, the 3 mol% yttria stabilized zirconia coating containing pure tetragonal zirconia showed the highest wear resistance. - Research Highlights: Preparation of plasma sprayable YSZ powders without any agglomeration process, and plasma spraying; phase transformation studies of plasma sprayed YSZ coatings by XRD and Raman spectroscopy; the microstructure of the plasma sprayed coatings exhibited a bimodal distribution; the plasma sprayed 3 mol% YSZ coating exhibited the highest wear resistance; the higher wear resistance is due to the higher fracture toughness of the tetragonal 3 mol% YSZ phase. Aruna, S.T., E-mail: aruna_reddy@nal.res.in; Balaji, N.; Rajam, K.S. 2011-07-15 175 PubMed Plasma spraying (PS) is the most frequently used coating technique for implants; however, in other industries a cheaper, more efficient process, high-velocity oxy-fuel thermal spraying (HVOF), is in use. This process provides higher purity, denser, more adherent coatings than plasma spraying. The primary objective of this work was to determine if the use of HVOF could improve the mechanical properties of calcium phosphate coatings. Previous studies have shown that HVOF calcium phosphate coatings are more crystalline than plasma sprayed coatings. In addition, because the coatings are exposed to more complex loading profiles in vivo than standard ASTM tensile tests provide, a secondary objective of this study was to determine the applicability of four-point bend testing for these coatings.
Coatings produced by HVOF and PS were analyzed by profilometry, diffuse reflectance Fourier transform infrared spectroscopy, X-ray diffraction, four-point bend, and ASTM C633 tensile testing. HVOF coatings were found to have lower amorphous calcium phosphate content, higher roughness values, and lower ASTM C633 bond strengths than PS coatings; however, both coatings had similar crystal unit cell sizes, phases present (including hydroxyapatite, beta-tricalcium phosphate, and tetracalcium phosphate), and four-point bend bond strengths. Thus, the chemical, structural, and mechanical results of this study, in general, indicate that calcium phosphate coatings produced by HVOF are equivalent to those produced by plasma spraying. PMID:10556851 Haman, J D; Chittur, K K; Crawmer, D E; Lucas, L C 1999-01-01 176 SciTech Connect Multiply charged ions are present in vacuum arc plasmas. The ions are produced at cathode spots, and their charge state distributions (CSDs) depend on the cathode material but only little on the arc current or other parameters, as long as the current is relatively low and the anode is not actively involved in the plasma production. Experimental data on ion CSDs are available in the literature for 50 different cathode materials. The CSDs can be calculated based on the assumption that thermodynamic equilibrium is valid in the vicinity of the cathode spot, and that the equilibrium CSDs freeze at a certain distance from the cathode spot (transition to a non-equilibrium plasma). Plasma temperatures and densities at the freezing points have been calculated, and, based on the existence of characteristic groups of elements in the Periodic Table, predictions of CSDs can be made for metallic elements which have not yet been used as cathode materials. Anders, A. [Lawrence Berkeley Lab., CA (United States)]; Schulke, T.
[Fraunhofer-Einrichtung für Werkstoffphysik und Schichttechnologie (IWS), Dresden (Germany)] 1996-04-01 177 Microsoft Academic Search The tribological behaviour of Al2O3 coatings on AISI 316 stainless steel, obtained by the process of controlled atmosphere plasma spraying (CAPS), is studied in this work. Atmospheric plasma spraying (APS) and high pressure plasma spraying (HPPS) were applied in order to produce these coatings. The APS coatings exhibited lower microhardness values compared to those of the HPPS coatings. Regarding the Ch. I. Sarafoglou; D. I. Pantelis; S. Beauvais; M. Jeandin 2007-01-01 178 NASA Astrophysics Data System (ADS) The penetration of liquid Mn into a porous ZrO2-8 wt.% Y2O3 coating, plasma sprayed on a JIS SS400 steel substrate, was studied by heating at 1573 K in a vacuum atmosphere, and the possibility of improving the mechanical properties of the coating by heat treatment with liquid Mn was examined. It was found that liquid Mn rapidly penetrated the coating and formed an interface between the coating and the substrate. Densification of the coating occurred when ZrO2 particles were sintered with the liquid Mn that penetrated the porous ZrO2 coating. It was revealed that the dense coating was free of porosities and that its hardness increased greatly after heat treatment with Mn, compared with the as-sprayed ZrO2 coating. Moreover, the modulus of elasticity and the fracture toughness of the coating reached the same levels as those of sintered partially stabilized ZrO2 (Y2O3). Ohmori, A.; Zhou, Z.; Inoue, K. 1994-11-01 179 Microsoft Academic Search The effects of spray angle on the morphology of thermally sprayed particles impinging on polished substrates have been studied by implementing several statistical tools (i.e., Gaussian analysis, Weibull distribution and the t-test).
Nickel-based alloy (Astroloy) particles were vacuum plasma-sprayed onto copper plates at a normal (i.e., 90°) and several off-normal spray angles (i.e., 75°, 60°, 45° and 30°). Different G. Montavon; S. Sampath; C. C. Berndt; H. Herman; C. Coddet 1997-01-01 180 SciTech Connect A new numerical model is described for simulating thermal plasmas containing entrained particles, with emphasis on plasma spraying applications. The plasma is represented as a continuum multicomponent chemically reacting ideal gas, while the particles are tracked as discrete Lagrangian entities coupled to the plasma. Computational results are presented from a transient simulation of alumina spraying in a turbulent argon-helium plasma jet in an air environment, including torch geometry, substrate, and multiple species with chemical reactions. Particle-plasma interactions including turbulent dispersion have been modeled in a fully self-consistent manner. Interactions between the plasma and the torch and substrate walls are modeled using wall functions. (15 refs.) Chang, C.H. 1992-01-01
182 NASA Astrophysics Data System (ADS) The effect of various small-particle plasma spray powder injection parameters on the in situ particle position, velocity, and temperature is measured for yttria-stabilized zirconia and yttrium-aluminum-garnet powder. Using full-factorial experiments and multiple regression analysis, carrier gas flow, injector angle, and powder feeder disc speed were found to significantly affect the particle properties. Temperature and velocity were inversely related; on average, the cooler particles traveled faster. These properties also correlated to the particle position in the flame, where particles above the centerline of the flame traveled faster. The trends are discussed on the basis of residence time in the flame, as well as in terms of particle size segregation effects. Coating density and splat geometry reflect the temperature and velocity differences between the runs. Slower, hotter particles possessed more intrasplat and intersplat porosity and less splat-substrate contact area, leading to lower overall coating density. Su, Y. J.; Faber, K. T.; Bernecki, T. F. 2002-03-01 183 SciTech Connect Plasmas can be used to provide a vacuum-atmosphere interface or separation between vacuum regions as an alternative to differential pumping. A vacuum-atmosphere interface utilizing a cascade arc discharge was successfully demonstrated, and a 175 keV electron beam was successfully propagated from vacuum through such a plasma interface and out into atmospheric pressure. This plasma device also functions as an effective plasma lens. Such a device can be adopted for use in electron beam melting. Hershcovitch, A. 1995-12-31 184 NASA Astrophysics Data System (ADS) Detailed understanding of the microphysics associated with dynamic penetration of low-beta plasmas across magnetic fields has important implications for a number of applications including magnetic fusion energy (MFE), high-power transmission lines, and charged-particle-beam diodes.
Analytic models providing linear growth rates and characteristic wavelengths and frequencies for unstable modes at the interface between plasma and vacuum regions are presented and compared with detailed particle-in-cell simulations. The simulations treat both collisionless and collisional plasma regimes in a variety of configurations. Results of this combined theoretical analysis are compared with measurements from several experiments including magnetized electron-beam diodes and high-power, magnetically-insulated transmission lines. Potential applications of this modeling to MFE are discussed. Rose, D. V.; Welch, D. R.; Genoni, T. C. 2007-11-01 185 Microsoft Academic Search Suspension plasma spraying (SPS) is able to process sub-micrometric-sized feedstock particles and permits the deposition of layers thinner (from 5 to 50 µm) than those resulting from conventional atmospheric plasma spraying (APS). SPS consists in mechanically injecting within the plasma flow a liquid suspension of particles of average diameter varying between 0.02 and 1 µm. Upon penetration within the DC O. Tingaud; A. Grimaud; A. Denoirjean; G. Montavon; V. Rat; J. F. Coudert; P. Fauchais; T. Chartier 2008-01-01 186 Microsoft Academic Search Perovskite-type LaMnO3 powders and coatings have been prepared by a novel technique: reactive suspension plasma spraying (SPS) using an inductively coupled plasma of approximately 40 kW plate power and an oxygen plasma sheath gas. Suitable precursor mixtures were found on the basis of solid state reactions, solubility, and the phases obtained during the spray process. Best results were achieved by G. Schiller; M. Müller; F. Gitzhofer 1999-01-01 187 NASA Astrophysics Data System (ADS) An atmospheric pressure non-equilibrium plasma (APNEP) has been developed in the UK by EA Technology Ltd and is currently being investigated in collaboration with the University of Surrey.
The main focus is the use of atmospheric pressure plasmas to modify the surfaces of commercially important polymers including polyolefins, poly(ethylene terephthalate) and poly(methyl methacrylate). These surface modifications include surface cleaning and degreasing, oxidation, reduction, grafting, cross-linking (carbonization), etching and deposition. When trying to achieve targeted surface engineering, it is vital to gain an understanding of the mechanisms that cause these effects, for example, surface functionalization, adhesion promotion or multi-layer deposition. Hence comparisons with vacuum plasma treated surfaces have also been sought, with a view to using the extensive vacuum plasma literature to gain further insight. In this paper, we will introduce the APNEP and compare the key characteristics of the plasma with those of traditional vacuum plasma systems before highlighting some of the surface modifications that can be achieved by using atmospheric plasma. Data from the analysis of treated polymers (by spectroscopy, microscopy and surface energy studies) and from direct measurements of the plasma and afterglow will be presented. Finally, our current understanding of the processes involved will be given, particularly those that are important in downstream surface treatments which take place remote from the plasma source. Shenton, M. J.; Stevens, G. C. 2001-09-01 188 Microsoft Academic Search Aluminum (Al) matrix composites reinforced by SiC particulates (SiCp/Al) were fabricated by atmospheric plasma spraying. The composite powder, as a feedstock for plasma spraying, was prepared by ball milling of pure Al powders with 55 vol.% SiC particles. The feedstock was deposited as a freestanding bulk composite sheet on a graphite substrate by atmospheric plasma spraying.
As-sprayed composites had Kwangjun Euh; Suk Bong Kang 2005-01-01 189 Microsoft Academic Search Thermal spray coatings composed of a variety of carbide sizes and cobalt contents were sprayed with a high energy plasma spray system. The size of the carbides used fell into three rough groupings: micrometer-scale carbides (1-2 µm), submicrometer (700-300 nm), and nanoscale (≈100 nm). The feedstock powders were evaluated in terms of their size distribution, external morphology, internal morphology, Graham Alfred Tewksbury 2002-01-01 190 Microsoft Academic Search This paper deals with an experimental investigation of the process of atmospheric plasma spraying of high performance ceramics such as Al2O3, Al2O3-TiO2, and PSZ on a steel substrate. The ceramic coatings were deposited under different spray conditions and optimal spray parameters were evaluated based on the quality of the coating, judged in terms of bond strength and porosity. An expert S. Gowri; G. Uma Shankar; K. Narayanasamy; R. Krishnamurthy 1997-01-01 191 Microsoft Academic Search Quenching stress arises within a thermally sprayed splat as its thermal contraction after solidification is constrained by the underlying solid. The dependence of the quenching stress in plasma-sprayed deposits of Ni-20Cr alloy and alumina on the substrate temperature during spraying was discussed in conjunction with the change in the nature of the interlamellar contact between splats. It was found by mercury S. Kuroda; T. Dendo; S. Kitahara 1995-01-01 192 NASA Astrophysics Data System (ADS) High-Z materials such as tungsten are currently the best potential candidates for plasma-facing components (PFCs) in future fusion devices. However, the threat of melting under uncontrolled conditions and the associated material redistribution and loss can place strict limits on the lifetime of PFCs and plasma operation conditions.
In particular, material losses in the form of fine sprayed droplets can provide a very intense source of impurities in the plasma core. In this paper, the plasma response to radiation losses from impurity particles produced by droplet evaporation is modelled for the conditions found in the tokamak TEXTOR. The interplay between tungsten spraying and plasma behaviour, resulting in the reduction of power transferred to the limiter and diminution of droplet production, is taken into account. Calculations predict, in agreement with experimental observations, that this evolution results in a new steady state with significantly reduced central temperature and a peaked impurity radiation profile. The efficiency of melt conversion into droplets, estimated by comparing experimental and computed plasma temperatures, is in reasonable agreement with the predictions from models for droplet generation. Tokar, M. Z.; Coenen, J. W.; Philipps, V.; Ueda, Y.; TEXTOR Team 2012-01-01 193 SciTech Connect The energy available during vacuum breakdown between copper electrodes at high vacuum was limited using resistors in series with the vacuum gap and arresting diodes. Surviving features observed with SEM in postmortem samples were tentatively correlated with electrical signals captured during breakdown using a Rogowski coil and a high-voltage probe. The visual and electrical evidence is consistent with the qualitative model of vacuum breakdown by unipolar arc formation by Schwirzke [1, 2]. The evidence paints a picture of two plasmas of different composition and scale being created during vacuum breakdown: an initial plasma, made of degassed material from the metal surface, ignites a plasma made up of the electrode material. Castano-Giraldo, C. [University of Illinois, Urbana-Champaign]; Aghazarian, Maro [ORNL]; Caughman, John B. [ORNL]; Ruzic, D. N.
[University of Illinois, Urbana-Champaign] 2012-01-01 194 Microsoft Academic Search Suspension Plasma Spraying (SPS) is a relatively new deposition process which enables the spraying of micron and submicron particles. It offers the possibility of forming finely structured coatings with intermediate thicknesses of a few tens of microns. In order to gain a better understanding of SPS, the two parts of this paper are devoted to the description of the phenomena involved J. Fazilleau; C. Delbos; V. Rat; J. F. Coudert; P. Fauchais; B. Pateyron 2006-01-01 195 Microsoft Academic Search The effect of surface adsorbates on splat formation during thermal spraying is examined by controlling substrate adsorption. Splats are formed on a polished flat stainless steel substrate surface by plasma spraying. The adsorption state of the substrate is controlled with different organic substances of different boiling points and different preheating temperatures. The droplet materials used are aluminum, nickel, and Al2O3. Chang-Jiu Li; Jing-Long Li 2004-01-01 196 Microsoft Academic Search Vacuum arc plasmas are produced at micrometer-size, nonstationary cathode spots. Ion charge state distributions (CSDs) are experimentally known for 50 elements, but the theoretical understanding is unsatisfactory. In this paper, CSDs of vacuum arc plasmas are calculated under the assumption that the spot plasma experiences an instantaneous transition from equilibrium to nonequilibrium while expanding. Observable charge state distributions are the André Anders 1997-01-01 197 NASA Astrophysics Data System (ADS) Energy efficiency in gas turbine engines is linked to the high temperature capabilities of materials used in the hot section of the engine. To facilitate a significant increase in engine efficiency, tough structural ceramics have been developed that can handle the thermo-mechanical stresses that gas turbine components experience.
Unfortunately, the high-temperature, high-pressure, and high-velocity combustion gases in a gas turbine contain water vapor and/or hydrogen, which have been shown to volatilize the protective silica layer on silicon-based ceramics. This degradation leads to significant surface recession in ceramic gas turbine components. In order to maintain their structural integrity, an environmental barrier coating (EBC) could be used to protect ceramics from the harsh gas turbine environment. Due to its coefficient of thermal expansion and phase stability at elevated temperatures, tantalum oxide (Ta2O5) was examined as the base material for an air plasma-sprayed EBC on Si3N4 ceramics. As-sprayed pure Ta2O5 was comprised of both low-temperature beta-Ta2O5 and high-temperature alpha-Ta2O5 that was quenched into the structure. Residual stress measurements via X-ray diffraction determined the as-sprayed coating to be in tension, and extensive vertical macrocracks were observed in the coating. Heat treatments of the pure coating led to conversion of alpha-Ta2O5 to beta-Ta2O5, conversion of tensile stresses to compressive, localized buckling of the coating, and significant grain growth which caused microcracking in the coating. The pure coating was found to be an inadequate EBC. Al2O3 was investigated as a solid solution alloying addition designed to enhance the stability of beta-Ta2O5, and reduce grain growth by slowing grain boundary diffusion. La2O3 was investigated as an alloying addition designed to form second phase particles which would reduce grain growth through pinning. Al2O3 was successful at both stabilizing beta-Ta2O5 and reducing grain growth, though AlTaO4 was found to form in the coatings. La2O3 additions led to the formation of LaTa7O19, which also contributed to grain growth reduction. Residual stresses in the alloyed coatings were generally found to be tensile.
Microcracks were not observed in coatings that were alloyed with both Al2O3 and La2O3, with the most promising alloy being Ta2O5 + 1.5 wt.% Al2O3 + 1.5 wt.% La2O3. Weyant, Christopher M. 198 NASA Astrophysics Data System (ADS) TiN coatings on Al2O3 substrates were fabricated by the vacuum cold spray (VCS) process using ultrafine starting ceramic powders of 20 nm in size at room temperature (RT). Microstructure analysis of the samples was carried out by scanning electron microscopy, transmission electron microscopy, and x-ray diffraction. Sheet resistance of the VCS TiN coatings was measured with a four-point probe. The effects of microstructure on the electrical properties of the coatings were investigated. It was found that the sheet resistance and electrical resistivity of the TiN coatings were significantly associated with the spray distance, nozzle traversal speed, and deposition chamber pressure. A minimum sheet resistance of 127 Ω was achieved. The microstructural changes can be correlated to the electrical resistivity of TiN coatings. Wang, Y.-Y.; Liu, Y.; Yang, G.-J.; Feng, J.-J.; Kusumoto, K. 2010-12-01 199 NASA Astrophysics Data System (ADS) A Vacuum Arc Thruster (VAT) is a thruster that uses the plasma created in a vacuum arc, an electrical discharge in a vacuum that creates high velocity and highly ionized plasmas, as the propellant without additional acceleration. A VAT would be a small and inexpensive low thrust ion thruster, ideal for small satellites and formation flying spacecraft. The purpose of this thesis was to quantitatively and qualitatively examine the VAT plasma plume to determine operating characteristics and limitations. A VAT with a titanium cathode was operated in two regimes: (A) single 100 µs pulse, discharge current JD=510A, and (B) multiple 1500 µs pulses at f=40.8 Hz, JD=14A. The cathode was a 3.18 mm diameter Ti rod, surrounded by a 0.80 mm thick alumina insulator, set in a molybdenum anode.
Three configurations were tested: Cfg1 (Regime A, cathode recessed 3.00 mm from anode), Cfg2 (Regime A, cathode and anode flush), Cfg3 (Regime B, cathode recessed 3.00 mm). A semi-empirical model was derived for VAT performance based on the MHD equation of motion using data for ion velocity, ion charge state distribution, ion current fraction (F), and ion current density distribution (ICDD). Additional performance parameters were a2, the peak ion current density angular offset from the cathode normal, and a3, the width of the ion current distribution. Measurements were taken at 162 points on a plane in the plasma plume using a custom Faraday probe, and the ICDD empirical form was determined to be a Gaussian. The discharge voltage (VD) and F were Cfg1: VD=25.5V, F=0.025-0.035; Cfg2: VD=40.7V, F=0.08-0.10; Cfg3: VD=14.9V, F=0.006-0.021. For Cfg1, a2 started 15° off-axis, while a2 ≈ 0 for Cfg2 and Cfg3. In Cfg1, a3=0.7-0.6, and in Cfg2 a3=1.0-1.1, so the recessed cathode focused the plasma more. However, F is more important for VAT performance, because upper and lower bounds for thrust, specific impulse, thrust-to-power, and efficiency were calculated and Cfg2 had the highest performance. High-speed videos captured cathode spot motion, showing that the cathode spot had preferred attachment points at the cathode edge. Photographs show uneven cathode erosion at the edge, a deposited layer of cathode material on the anode and insulator, and large macroparticles. Sekerak, Michael James 200 SciTech Connect Over the past five years, four international parties, which include the European Communities, Japan, the Russian Federation and the United States, have been collaborating on the design and development of the International Thermonuclear Experimental Reactor (ITER), the next generation magnetic fusion energy device.
During the ITER Engineering Design Activity (EDA), beryllium plasma spray technology was investigated by Los Alamos National Laboratory as a method for fabricating and repairing the beryllium first wall surface of the ITER tokamak. Significant progress has been made in developing beryllium plasma spraying technology for this application. Information will be presented on the research performed to improve the thermal properties of plasma sprayed beryllium coatings, and on a method that was developed for cleaning and preparing the surface of beryllium prior to depositing plasma sprayed beryllium coatings. Results of high heat flux testing of the beryllium coatings using electron-beam-simulated ITER conditions will also be presented. Castro, R.G.; Elliott, K.E.; Hollis, K.J. [Los Alamos National Lab., NM (United States). Material Science and Technology Div.]; Bartlett, A.H. [Norsam Technologies Inc., Los Alamos, NM (United States)]; Watson, R.D. [Sandia National Lab., Albuquerque, NM (United States). Fusion Technology Dept.] 1999-02-01 201 National Technical Information Service (NTIS) The sintering and creep of plasma-sprayed ceramic thermal barrier coatings under high temperature conditions are complex phenomena. Changes in thermomechanical and thermophysical properties and in the stress response of these coating systems as a result o... D. Zhu R. A. Miller 1998-01-01 202 National Technical Information Service (NTIS) Multilayer bearing structures were fabricated by arc plasma spraying of selected metallic alloy powders onto mild and hardened steel test blocks, which were then coated with a dry film lubricant. Five different alloys were investigated, and the quality and a... B. Roessler M. C. Narasiman 1982-01-01 203 National Technical Information Service (NTIS) The partially stabilized zirconia powders used to plasma spray thermal barrier coatings typically exhibit broad particle-size distributions.
There are conflicting reports in the literature about the extent of injection-induced particle-sizing effects in a... R. A. Neiser T. J. Roemer 1996-01-01 204 NASA Astrophysics Data System (ADS) Thermal cycling and melt reaction studies of ceramic coatings plasma-sprayed on Nb substrates were carried out to evaluate the performance of barrier coatings for metallic fuel casting applications. Thermal cycling tests of the ceramic plasma-sprayed coatings to 1450 °C showed that the HfN, TiC, ZrC, and Y2O3 coatings had good cycling characteristics, with few interconnected cracks even after 20 cycles. Interaction studies by 1550 °C melt dipping tests of the plasma-sprayed coatings also indicated that HfN and Y2O3 do not form a significant reaction layer between the U-20 wt.% Zr melt and the coating layer. The plasma-sprayed Y2O3 coating exhibited the most promising characteristics among the HfN, TiC, ZrC, and Y2O3 coatings. Kim, Ki Hwan; Lee, Chong Tak; Lee, Chan Bock; Fielding, R. S.; Kennedy, J. R. 2013-10-01 205 Microsoft Academic Search TiN coatings on Al2O3 substrates were fabricated by the vacuum cold spray (VCS) process using ultrafine starting ceramic powders of 20 nm in size at room temperature (RT). Microstructure analysis of the samples was carried out by scanning electron microscopy, transmission electron microscopy, and x-ray diffraction. Sheet resistance of the VCS TiN coatings was measured with a four-point probe. The effects of Y.-Y. Wang; Y. Liu; G.-J. Yang; J.-J. Feng; K. Kusumoto 2010-01-01 206 Microsoft Academic Search Suspension plasma spraying was used to achieve a dense and thin (≈30 µm) yttria stabilized zirconia (YSZ) coating for the electrolyte of solid oxide fuel cells (SOFCs). A suspension of YSZ powder (d50 ≈ 1 µm) was mechanically injected into direct current (dc) plasma jets.
The plasma jet acted as an atomizer and the suspension drops (d ≈ 200 µm) were sheared, long before Pierre Fauchais; Vincent Rat; Cédric Delbos; Jean-François Coudert; Thierry Chartier; Luc Bianchi 2005-01-01
207 NASA Astrophysics Data System (ADS) Ni-based electrode coatings with enhanced surface areas, for hydrogen production, were developed using atmospheric plasma spray (APS) and suspension plasma spray (SPS) processes. The results revealed a larger electrochemically active surface area for the coatings produced by SPS compared to those produced by the APS process. SEM micrographs showed that the surface microstructure of the sample with the largest surface area was composed of a large number of small cauliflower-like aggregates with an average diameter of 10 µm. Aghasibeig, M.; Mousavi, M.; Ben Ettouill, F.; Moreau, C.; Wuthrich, R.; Dolatabadi, A. 2013-10-01
208 Microsoft Academic Search The elastic properties of plasma sprayed deposits have been evaluated using a laser-excited surface acoustic wave (SAW) technique and an inversion processing analysis. The SAWs, including Lamb and Rayleigh waves, were generated in plasma sprayed NiCoCrAlY and ZrO2, respectively, and their group velocity dispersions were used to determine the elastic properties (i.e., Young's modulus, Poisson's ratio and density) of the deposits. X. Q. Ma; Y. Mizutani; M. Takemoto 2001-01-01
209 Microsoft Academic Search The solution precursor plasma spray (SPPS) process has been used to deposit ZrO2-7 wt.% Y2O3 thermal barrier coatings (TBCs) that contain alternate layers of low and high porosities (layered-SPPS). The thermal conductivity of the layered-SPPS coating is found to be lower than those of both a SPPS coating with distributed porosity and an air-plasma-sprayed coating of the same composition, in the Amol D. Jadhav; Nitin P. Padture; Eric H. Jordan; Maurice Gell; Pilar Miranzo; Edwin R.
Fuller 2006-01-01
210 Microsoft Academic Search Suspension plasma spray (SPS) is a thermal spray method in which nanoparticles are injected into the plasma jet with the help of suspension droplets to achieve thin and finely structured nanocoatings. The nanoparticles experience three in-flight stages: injection within the suspension droplets, discharge of the nanoparticle agglomerates after the evaporation of the suspension solvent, and tracking of the nanoparticle or Hong-Bing Xiong; Jian-Zhong Lin 2009-01-01
211 Microsoft Academic Search The objective of this study is to compare the tribological properties of alumina coatings with two different structural scales, a micrometer-sized one manufactured by atmospheric plasma spraying and a sub-micrometer-sized one manufactured by suspension plasma spraying. Coating architectures were analyzed and their friction coefficients in dry sliding mode measured. Sub-micrometer-sized structured coatings present a lower friction coefficient than micrometer ones, G. Darut; H. Ageorges; A. Denoirjean; G. Montavon; P. Fauchais 2008-01-01
212 Microsoft Academic Search The suspension plasma spray (SPS) process was used to produce coatings from yttria-stabilized zirconia (YSZ) powders with median diameters of 15 µm and 80 nm. The powder-ethanol suspensions made with 15-µm diameter YSZ particles formed coatings with microstructures typical of the air plasma spray (APS) process, while suspensions made with 80-nm diameter YSZ powder yielded a coarse columnar microstructure not observed in Kent Vanevery; Matthew J. M. Krane; Rodney W.
Trice; Hsin Wang; Wallace D. Porter; Matthew Besser; Daniel Sordelet; Jan Ilavsky; Jonathan Almer 2011-01-01
213 Microsoft Academic Search Suspension plasma spraying (SPS) is an alternative to conventional atmospheric plasma spraying (APS) aiming at manufacturing thinner layers (i.e., 10-100 µm) due to the specific size of the feedstock particles, from a few tens of nanometers to a few micrometers. The stacking of lamellae and particles, which present a diameter ranging from 0.1 to 2.0 µm and an average thickness from 20 Olivier Tingaud; Ghislain Montavon; Alain Denoirjean; Jean-François Coudert; Vincent Rat; Pierre Fauchais 2010-01-01
214 Microsoft Academic Search Stable water- and water/ethanol suspensions of TiO2 were plasma sprayed on stainless steel substrates. The suspensions were injected using two different systems: external, using an atomizing injector, and internal, performed with a continuous-stream injector inside the plasma torch anode. In order to find the optimal spray parameters, seven experimental runs were performed and the resulting deposits were mainly characterized by Stefan Kozerski; Filofteia-Laura Toma; Lech Pawlowski; Beate Leupolt; Leszek Latka; Lutz-Michael Berger 2010-01-01
215 Microsoft Academic Search Inductively coupled radio frequency plasma spraying was used to prepare ultrafine powders of Sm2O3, Dy2O3, and Lu2O3. These three materials were studied because they are effective dopants in multi-layer ceramic capacitors (MLCC) to improve lifetime. The as-sprayed powders consist of both micron-sized mono-dispersed spherical particles and nano-sized particles in various shapes. In addition to the spheroidization effect, plasma treatment leads X. L. Sun; A. I. Y. Tok; R. Huebner; F. Y. C. Boey 2007-01-01
216 Microsoft Academic Search A nanostructured zirconia top coat was deposited by air plasma spray and a NiCoCrAlTaY bond coat was deposited on a Ni substrate by low pressure plasma spray.
Nanostructured and conventional thermal barrier coatings were heat-treated at temperatures varying from 1050 to 1250 °C for 2-20 h. The results show that obvious grain growth was found in both nanostructured and conventional thermal barrier Xian-liang JIANG; Chun-bo LIU; Min LIU; Hui-zhao ZHU 2010-01-01
217 Microsoft Academic Search Feedstock powder characteristics (size distribution, morphology, shape, specific mass, and injection rate) are considered to be one of the key factors in controlling plasma-sprayed coatings' microstructure and properties. The influence of feedstock powder characteristics in controlling the reaction and coating microstructure in the reactive plasma spraying (RPS) process is still unclear. This study investigated the influence of feedstock particle size in Mohammed Shahien; Motohiro Yamada; Toshiaki Yasui; Masahiro Fukumoto 2011-01-01
218 NASA Astrophysics Data System (ADS) Prespray annealing of commercially available hydroxyapatite (HAp) plasma-spray powder at 1300 °C for 1 h in air leads to substantial densification without noticeable thermal decomposition. The resulting HAp coatings, low-pressure plasma sprayed onto Ti-6Al-4V substrates, show a dense microstructure, improved adhesion strength, and higher resorption resistance when treated for 7 days in simulated body fluid (Hank's balanced salt solution). Heimann, R. B.; Vu, T. A. 1997-06-01
219 NASA Astrophysics Data System (ADS) Feedstock powder characteristics (size distribution, morphology, shape, specific mass, and injection rate) are considered to be one of the key factors in controlling plasma-sprayed coatings' microstructure and properties. The influence of feedstock powder characteristics in controlling the reaction and coating microstructure in the reactive plasma spraying (RPS) process is still unclear. This study investigated the influence of feedstock particle size in RPS of aluminum nitride (AlN) coatings, through plasma nitriding of aluminum (Al) feedstock powders.
It was possible to fabricate AlN-based coatings through plasma nitriding of all kinds of Al powders in the atmospheric plasma spray (APS) process. The nitriding ratio was improved with decreasing particle size of the feedstock powder, owing to the improved nitriding reaction during flight. However, decreasing the particle size of the feedstock powder suppressed the coating thickness, due to the loss of powder during injection, the excessive vaporization of fine Al particles, and the complete nitriding of some fine Al particles during flight. The feedstock particle size directly affects the nitriding, melting, flowability, and vaporization behaviors of the Al powders during spraying. It was concluded that using powders of smaller particle size is useful for improving the nitriding ratio but not suitable for fabricating thick AlN coatings in the reactive plasma spray process. To fabricate thick AlN coatings through RPS, enhancing the nitriding reaction of Al powders with large particle size during spraying is required. Shahien, Mohammed; Yamada, Motohiro; Yasui, Toshiaki; Fukumoto, Masahiro 2011-03-01
220 NASA Astrophysics Data System (ADS) A sprayed carbon nanotube (CNT)-modified working electrode was successfully integrated into an electrochemical three-electrode system based on a glass substrate. The integrated biosensing system was fabricated through a series of photolithographic patterning and plasma etching processes. A CNT-dispersed solution was sprayed on the three-electrode system, and the CNT-modified surface was treated with O2 plasma to pattern, clean, and activate the CNT layer. The optimized plasma treatment conditions were verified in terms of plasma power and treatment time by scanning electron microscopy (SEM), cyclic voltammetry (CV), and X-ray photoelectron spectroscopy (XPS).
Jin, Joon-Hyung; Kim, Joon Hyub; Lee, Jun-Yong; Lee, Cheol Jin; Min, Nam Ki 2012-01-01
221 PubMed Hydroxyapatite (HA) coatings plasma sprayed without and with bond coats (titania, zirconia) onto titanium alloy (Ti6Al4V) substrates under both atmospheric and low pressure plasma spray conditions were investigated in terms of their microstructure and their resorption resistance during immersion in simulated body fluid (Hank's balanced salt solution). The microstructures of test samples were characterized using SEM on as-sprayed and leached surfaces and on the corresponding cross sections. Selected coating systems were studied by 2-dimensional secondary ion mass spectroscopy imaging to obtain information on plasma-spray-induced diffusional processes at the coating interfaces, as well as the spatial distribution of minor and trace elements. Coatings consisting of thin (10-15 µm) titania/zirconia (eutectic ratio) and titania bond coats, combined with a 150- to 180-µm thick HA top coat, yielded peel strengths in excess of 32 N/m, as well as sufficient resorption resistance. PMID:9855203 Heimann, R B; Kurzweg, H; Ivey, D G; Wayman, M L 1998-01-01
222 NASA Astrophysics Data System (ADS) WC-Co base wear-resistant coatings deposited by plasma spraying are widely used to enhance component longevity in a variety of wear environments. During spraying of WC-Co, ideally only the cobalt phase should melt and act as a binder for the WC particles. Although it is undesirable to fully melt WC because it can cause decarburization, complete melting of the cobalt phase and its satisfactory flattening on impacting the substrate is necessary to minimize porosity and achieve good substrate/coating adhesion. In this article, the influence of the primary plasma spray variables on the melting characteristics of WC-Co powders is investigated with respect to the microstructure of these coatings.
This experimental work complements an analytical study on plasma spraying of WC-Co, and thus, observations are presented to support the predictions of the modeling effort. Joshi, S. V.; Srivastava, M. P. 1993-06-01
223 PubMed The integrity and thermal decomposition of calcium apatite are influenced by the underlying titanium during plasma-spraying deposition, especially at the apatite/titanium interface. The destruction of apatite at the interface is governed by substrate temperature, titanium catalysis, and its reaction with titanium dioxide produced from oxidation of titanium in the plasma gas. The apatite in the outer layer of coatings is affected mainly by the substrate temperature and can keep its integrity with a suitable plasma-spraying procedure that minimizes the increase of substrate temperature. The heat treatment of the coatings in vacuum results in the decomposition of apatite to alpha-tricalcium phosphate (alpha-TCP) and tetracalcium phosphate monoxide (TCPM), with the intensity increasing toward the interface, which roughens the surface of the coatings. In the air-heat treatment, oxidation of titanium produces a thickened, dense rutile layer at the interface which prevents titanium atoms from diffusing into the coatings and inhibits the titanium-catalyzed decomposition of apatite. The apatite adjacent to the rutile layer reacts moderately with rutile to produce calcium titanate (CaTiO3), alpha- and beta-TCP, while the apatite in the outer layer, separated from the rutile layer, maintains its integrity without decomposition even in a prolonged air-heat treatment. The retention of apatite integrity leads to a decreased surface roughness of the coating. PMID:8788100 Weng, J; Liu, X; Zhang, X; de Groot, K 1996-01-01
224 NASA Astrophysics Data System (ADS) This study aimed to numerically and experimentally investigate lump formation during atmospheric plasma spraying with powder injection downstream of the plasma gun exit.
A first set of investigations was focused on the location and orientation of the powder port injector. It turned out to be impossible to keep the coating quality while avoiding lumps by simply moving the powder injector. A new geometry of the powder port ring holder was designed and optimized to prevent nozzle clogging and lump formation using a gas screen. This solution was successfully tested for applications with Ni-5 wt.% Al and ZrO2-7 wt.% Y2O3 powders used in production. The possible secondary effect of plasma jet shrouding by the gas screen, and its consequence on powder particles prior to impact, was also studied. Choquet, Isabelle; Björklund, Stefan; Johansson, Jimmy; Wigren, Jan 2007-12-01
225 NASA Astrophysics Data System (ADS) In plasma spraying, powder particles are transported to the plasma jet with the help of a carrier gas. The influence of this gas was investigated by means of an enthalpy probe system with a mass spectrometer for measuring plasma temperature, velocity and plasma gas composition, and visualized by means of Schlieren optics. The enthalpy probe system does not allow measurements of plasma flow containing solid particles. Therefore, to establish changes of the jet characteristics, the carrier gas was supplied through different ports without addition of any powders. Nitrogen and helium were used as carrier gases. They were supplied into the jet with flow rates from 5 to 20 slpm, either directly into the plasma beam through a hole in the nozzle or with the help of an external injector positioned at a distance of several millimeters from the exit nozzle. Injection of the carrier gas led to high jet perturbations. Values of the centerline temperature and the velocity were reduced. The higher the carrier gas flow rate, the stronger the changes of the jet properties. Nitrogen carrier gas perturbed the plasma jet flow more than helium. Kavka, T.; Maslani, A.; Arnold, J.; Henne, R.
2004-03-01
226 NASA Astrophysics Data System (ADS) Among advanced ceramics, aluminum nitride (AlN) has attracted much attention in the field of electrical and structural applications due to its outstanding properties. However, it is difficult to fabricate AlN coatings by conventional thermal spray processes directly, due to the thermal decomposition of the feedstock AlN powder during spraying without a stable melting phase (which is required for deposition in thermal spray). Reactive plasma spraying (RPS) has been considered a promising technology for in-situ formation of AlN thermally sprayed coatings. In this study, the possibility of fabricating an AlN coating by reactive plasma nitriding of alumina (Al2O3) powder using N2/H2 plasma was investigated. It was possible to fabricate a cubic-AlN (c-AlN) based coating; the fabricated coating consists of c-AlN, γ-Al2O3, Al5O6N and α-Al2O3. It was difficult to understand the nitriding process from the fabricated coatings alone. Therefore, the Al2O3 powders were sprayed and collected in water. The microstructure observation of the collected powder and its cross section indicates that the reaction started from the surface. Thus, the sprayed particles were melted and reacted in the high-temperature reactive plasma and formed aluminum oxynitride, which has a cubic structure and easily nitrides to c-AlN. During the coating process the particles collide, flatten, and rapidly solidify on the substrate surface. The rapid solidification on the substrate surface, due to the high quenching rate of the plasma flame, prevents AlN crystal growth into the hexagonal phase. Therefore, it was possible to fabricate c-AlN/Al2O3 based coatings through a reactive plasma nitriding reaction of Al2O3 powder in thermal spray. Shahien, Mohammed; Yamada, Motohiro; Yasui, Toshiaki; Fukumoto, Masahiro
227 NASA Astrophysics Data System (ADS) Plasma spraying of metals in air is usually accompanied by evaporation and oxidation of the sprayed material.
Optimization of the spraying process must ensure that the particles are fully molten during their short residence time in the plasma jet and prior to hitting the substrate, but not overheated, to minimize evaporation losses. In atmospheric plasma spraying (APS), it is also clearly desirable to be able to control the extent of oxide formation. The objective of this work was to develop an overall mathematical model of the oxidation and volatilization phenomena involved in the plasma spraying of metallic particles in an air atmosphere. Four models were developed to simulate the following aspects of the atmospheric plasma spraying (APS) process: (a) the particle trajectories and the velocity and temperature profiles in an Ar-H2 plasma jet, (b) the heat and mass transfer between particles and plasma jet, (c) the interaction between the evaporation and oxidation phenomena, and (d) the oxidation of liquid metal droplets. The resulting overall model was generated by adapting the computational fluid dynamics code FIDAP and was validated by experimental measurements carried out at the collaborating plasma laboratory of the University of Limoges. The thesis also examined the environmental implications of the oxidation and volatilization phenomena in the plasma spraying of metals. The modeling results showed that the combination of the standard k-ε model of turbulence and the Boussinesq eddy-viscosity model provided a more accurate prediction of plasma gas behavior. The estimated NOx generation levels from APS were lower than the U.S.E.P.A. emission standard. Either enhanced evaporation or oxidation can occur on the surface of the metal particles, and the relative extent is determined by the process parameters. Comparatively, the particle size has the greatest impact on both evaporation and oxidation. The extent of particle oxidation depends principally on gas-liquid oxidation.
The convection due to the recirculating flow of liquid within the metal droplet (Hill's vortex) dominates the mass transfer of oxygen after the metal particles become molten. This study showed that the behavior of evaporation and oxidation of metal particles in atmospheric plasma spraying can be predicted, and that the process parameters can be optimized to reduce the evaporation and/or oxidation of metal particles in industrial applications of plasma spraying. Zhang, Hanwei
228 PubMed The expansion of a semi-infinite plasma slab into vacuum is analyzed with a hydrodynamic model implying a steplike electron energy distribution function. Analytic expressions for the maximum ion energy and the related ion distribution function are derived and compared with one-dimensional numerical simulations. The choice of the specific non-Maxwellian initial electron energy distribution automatically ensures the conservation of the total energy of the system. The estimated ion energies may differ by an order of magnitude from the values obtained with an adiabatic expansion model supposing a Maxwellian electron distribution. Furthermore, good agreement with data from experiments using laser pulses of ultrashort durations τL Kiefer, Thomas; Schlegel, Theodor; Kaluza, Malte C 2013-04-22
229 Microsoft Academic Search A DC non-transferred mode plasma spray torch was fabricated for plasma spheroidization. The effect of powder-carrier gas and powder loading on the temperature of the plasma jet generated by the torch has been studied. The experiment was done at different input power levels; the temperature of the jet was within 5000-7000 K when argon was used as the plasma gas and powder-carrier gas. G. Shanmugavelayutham; V. Selvarajan; P. V. A. Padmanabhan; K. P. Sreekumar; N. K.
Joshi 2007-01-01
230 SciTech Connect TiC coatings of 300-µm thickness produced by plasma spraying have been tested for their outgassing properties from room temperature to 1000 °C, to assess their suitability as high-heat/particle flux protective surfaces in tokamaks. The coatings were found to contain the following approximate amounts of gas (in mol %): H2, 0.9; H2O, 0.7; CO, 0.05; Ar, 0.01; CO2, 0.03, and lesser amounts of N2 and hydrocarbons. The experimental uncertainties are estimated to be ±25%. Whereas argon outgasses totally at room temperature and water vapor mostly below 100 °C, the other gases require much higher temperatures. This material, baked only at ≤100 °C, would be acceptable with respect to the base vacuum of a tokamak; however, high-temperature baking is recommended from the viewpoint of recycling and impurity control. Terreault, B.; Boucher, C.; Andrew, P.L.; Haasz, A.A.; Brunet, C.; Dallaire, S. 1987-11-01
231 Microsoft Academic Search Thermal plasma spraying of agglomerated nanostructured ceramic particles has been studied using computational fluid dynamics. The plasma jet is modeled as a mixture of Ar-H2 plasmas issuing into a quiescent atmosphere. The particles, modeled as micron-sized spheres, are introduced into the jet outside the plasma gun exit with radial injection. The existence of a simple target in front of the I. Ahmed; T. L. Bergman 2000-01-01
232 NASA Astrophysics Data System (ADS) In this paper, two plasma spraying technologies, solution plasma spraying (SolPS) and suspension plasma spraying (SPS), were used to produce nano-structured solid oxide fuel cell (SOFC) electrolytes. Both plasma spraying processes were optimized in order to achieve thin, gas-tight electrolytes. The comparison of the two plasma spraying processes is based on electrolyte phase, microstructure, and morphology, as well as on plasma deposition rate.
The results show that nano-structured thin electrolytes (~5 µm thick) have been successfully SPS-deposited on porous anodes with a high deposition rate. Compared to the electrolytes produced by SolPS, the SPS-deposited electrolyte layer is much denser. During the SPS process, fine droplets of 0.5-1 µm in diameter impact the surface of the coating and penetrate into the pores of the anode. As the stresses are reduced on the resulting 0.5-2 µm splats, there is no apparent microcrack network on the splats, resulting in highly gas-tight coatings. It is demonstrated that the SPS process is beneficial for improving the performance of the films to be used as SOFC electrolytes. Jia, Lu; Gitzhofer, François 2010-03-01
233 NASA Astrophysics Data System (ADS) Hydroxyapatite (HA) is a bioactive material because its chemical structure is close to that of natural bone. Its bioactive properties make it an attractive material in biomedical applications. The gas tunnel type plasma spraying (GTPS) technique was employed in the present study to deposit HA coatings on SUS 304 stainless steel substrates. GTPS is composed of two plasma sources: a gun which produces the internal low-power plasma (1.3-8 kW) and a vortex which produces the main plasma at a high power level (10-40 kW). Controlling the spraying parameters plays the key role in spraying highly crystalline HA coatings on metallic implants. In this study, the arc gun current was changed while the vortex arc current was kept constant at 450 A during the spraying of the HA coatings. The objective of this study is to investigate the influence of gun current on the microstructure, phase crystallinity and hardness properties of HA coatings. The surface morphology and microstructure of the as-sprayed coatings were examined by scanning electron microscope. The phase structure of the HA coatings was investigated by X-ray diffraction analysis. HA coatings sprayed at high gun current (100 A) are dense and have high hardness.
The crystallinity of the HA coatings decreased with increasing gun current. On the other hand, the hardness was slightly decreased and the coatings suffered from some porosity at gun currents of 0, 30 and 50 A. Morks, M. F.; Kobayashi, A. 2007-06-01
234 NASA Astrophysics Data System (ADS) Electrically insulating films of Al2O3 were deposited using thermal spray technology, followed by the sputter deposition of a strain gauge pattern using shadow masking techniques. For the first time, a thin film strain gauge of chromium was successfully fabricated on thermally sprayed Al2O3 insulation. X-ray diffraction (XRD), scanning electron microscopy (SEM) and profilometer characterization techniques were used to examine the structure and surface morphologies of the Al2O3 coatings. A gauge factor of around 2 was found for the chromium film, as well as hysteresis and creep for loads exceeding 1700 microstrain. The results are discussed. Djugum, R.; Jolic, K. I. 2006-02-01
235 Microsoft Academic Search The present work deals with the preparation of stable alumina+silica suspensions with high solid loading for the production of spray-dried composite powders. These composite powders are to be used for reactive plasma spraying, whereby the formation of mullite and the coating on a ceramic substrate are achieved in a single-step process. Electrostatic stabilisation of alumina and silica suspensions has A. Schrijnemakers; S. André; G. Lumay; N. Vandewalle; F. Boschini; R. Cloots; B. Vertruyen 2009-01-01
236 National Technical Information Service (NTIS) Work relating to the optimization of a tungsten carbide coating to meet a particular industrial specification is presented. Having established the working point spraying conditions, a series of repeat tests is carried out in order to demonstrate the repeat... C. E.
Grinnell 1988-01-01
237 NASA Astrophysics Data System (ADS) In the context of nanometre-sized structured materials and the perspectives of their technological applications, plasma spray technology is developing to master the coating microstructure at a nanometre scale level. This paper is an attempt to describe (i) the latest advances in the control of the conventional plasma spray process, which requires the monitoring of both the plasma jet fluctuation level and particle processing, and (ii) the innovative plasma spray processes that have recently emerged. The latter can be ranked in two classes: the processes that use a liquid feedstock, where coatings are essentially formed by the impact of molten particles and droplets; and the processes that generally use a powder feedstock, where coatings are generated by the condensation of a vapour with possible inclusion of nanometre-sized particles. Their potential applications are briefly presented and it is concluded that they should develop into viable technologies in the near future. Fauchais, P.; Vardelle, A. 2011-05-01
238 Microsoft Academic Search Metal plasma formed by a vacuum arc plasma source can be passed through a toroidal-section magnetic duct for the filtering of macroparticles from the plasma stream. In order to maximize the plasma transport efficiency of the filter, the duct wall should be biased, typically to a positive voltage of about 10-20 V. In some cases it is not convenient to T. Zhang; B. Y. Tang; Q. C. Chen; Z. M. Zeng; P. K. Chu; M. M. M. Bilek; I. G. Brown 1999-01-01
239 Microsoft Academic Search Metal plasma formed by a vacuum arc plasma source can be passed through a toroidal-section magnetic duct for the filtering of macroparticles from the plasma stream. In order to maximize the plasma transport efficiency of the filter, the duct wall should be biased, typically to a positive voltage of about 10-20 V. In some cases it is not convenient to T. Zhang; B. Y. Tang; Q. C. Chen; Z. M. Zeng; P. K. Chu; M.
M. M. Bilek; I. G. Brown 1999-01-01
240 Microsoft Academic Search Thermal barrier coatings (TBC) fabricated by plasma spray can exhibit a wide range of microstructures due to differences in feedstock powders and spraying conditions. Since different microstructures naturally result in different thermal and mechanical properties and service life as thermal barrier coatings, it is of great importance to understand the relationship among the feedstock characteristics, spray conditions and the coating Seiji Kuroda; Hideyuki Murakami; Makoto Watanabe; Kaita Itoh; Kentaro Shinoda; Xiancheng Zhang 2010-01-01
241 Microsoft Academic Search Suspension direct current plasma spraying allows achieving finely structured coatings whose thickness is between a few tens and a few hundreds of micrometres. Drops (200-300 µm in diameter) or liquid jets are mechanically injected in the plasma jet. With radial injection they are rapidly (a few µs) fragmented into droplets (a few µm in diameter). The latter are vaporized (in a few P. Fauchais; R. Etchart-Salas; C. Delbos; M. Tognonvi; V. Rat; J. F. Coudert; T. Chartier 2007-01-01
242 Microsoft Academic Search This paper reports the preparation of a highly crystalline nano-hydroxyapatite (HA) coating on commercially pure titanium (Cp-Ti) using inductively coupled radio frequency (RF) plasma spray, and their in vitro and in vivo biological response. HA coatings were prepared on Ti using normal and supersonic plasma nozzles at different plate powers and working distances. X-ray diffraction (XRD) and Fourier transformed infrared Mangal Roy; Amit Bandyopadhyay; Susmita Bose 2011-01-01
243 Microsoft Academic Search An atmospheric plasma spraying process is investigated in this work using an experimental approach and mathematical modelling. Emphasis was put on the gas-shrouded nozzles, their design, and the protection they give the plasma jet against mixing with the surrounding air.
The first part of the thesis is dedicated to the analysis of the enthalpy probe method, as a Miodrag M. Jankovic 1997-01-01
244 NASA Astrophysics Data System (ADS) Titanium carbide-based coatings have been considered for use in sliding wear resistance applications. Carbides embedded in a metal matrix would improve wear properties, providing a noncontinuous ceramic surface. TiC-Fe coatings obtained by plasma spraying of spray-dried TiC-Fe composite powders containing large and angular TiC particles are not expected to be as resistant as those containing TiC particles formed upon spraying. Coatings containing 60 vol% TiC dispersed in a steel matrix, deposited by plasma spraying reactive micropellets, sintered reactive micropellets, and spray-dried TiC-Fe composite powders, are compared. The sliding wear resistance of these coatings against steel was measured following the test procedure recommended by the Versailles Advanced Materials and Standards (VAMAS) program, and the inherent surface porosity was evaluated by image analysis. Results show that, after a 1-km sliding distance, TiC-Fe coatings obtained after spraying sintered reactive powders exhibit scarring three times less deep than coatings sprayed using spray-dried TiC-Fe composite powders. For all coatings considered, porosity is detrimental to wear performance, because it generally lowers the coating strength and provides cavities that favor the adhesion of metal. However, porosity can have a beneficial effect by entrapping debris, thus reducing friction. The good wear behavior of TiC-Fe coatings manufactured by plasma spraying of sintered reactive powders is related to their low coefficient of friction against steel. This is due to the microstructure of these coatings, which consists of 0.3 to 1 µm rounded TiC particles embedded in a steel matrix. Dallaire, S.; Cliche, G.
1993-03-01 245 NASA Astrophysics Data System (ADS) Growing demands on the quality of thermally sprayed coatings require reliable methods to monitor and optimize the spraying processes. Thus, the importance of diagnostic methods is increasing. A critical requirement of diagnostic methods in thermal spray is the accurate measurement of temperatures. This refers to the hot working gases as well as to the in-flight temperature of the particles. This article gives a review of plasma and particle temperature measurements in thermal spray. The enthalpy probe, optical emission spectroscopy, and computer tomography are introduced for plasma measurements. To determine the in-flight particle temperatures mainly multicolor pyrometry is applied and is hence described in detail. The theoretical background, operation principles and setups are given for each technique. Special interest is attached to calibration methods, application limits, and sources of errors. Furthermore, examples of fields of application are given in the form of results of current research work. Mauer, Georg; Vaßen, Robert; Stöver, Detlev 2010-11-01 247 NASA Astrophysics Data System (ADS) Suspension plasma spray (SPS) is a novel process for producing nano-structured coatings with metastable phases using significantly smaller particles as compared to conventional thermal spraying. Considering the complexity of the system there is an extensive need to better understand the relationship between plasma spray conditions and resulting coating microstructure and defects. In this study, an alumina/8 wt.% yttria-stabilized zirconia was deposited by an axial injection SPS process. The effects of principal deposition parameters on the microstructural features are evaluated using the Taguchi design of experiment. The microstructural features include microcracks, porosities, and deposition rate. To better understand the role of the spray parameters, in-flight particle characteristics, i.e., temperature and velocity, were also measured. The role of the porosity in this multicomponent structure is studied as well. The results indicate that thermal diffusivity of the coatings, an important property for potential thermal barrier applications, is barely affected by the changes in porosity content. Tarasi, F.; Medraj, M.; Dolatabadi, A.; Oberste-Berghaus, J.; Moreau, C. 2008-12-01 248 SciTech Connect Cathodic vacuum arc plasmas are known to contain multiply charged ions. 20 years after "Pressure ionization: its role in metal vapour vacuum arc plasmas and ion sources" appeared in vol.
1 of Plasma Sources Science and Technology, it is a great opportunity to re-visit the issue of pressure ionization, a non-ideal plasma effect, and put it in perspective to the many other factors that influence observable charge state distributions, such as the role of the cathode material, the path in the density-temperature phase diagram, the noise in vacuum arc plasma as described by a fractal model approach, the effects of external magnetic fields and charge exchange collisions with neutrals. A much more complex image of the vacuum arc plasma emerges putting decades of experimentation and modeling in perspective. Anders, André 2011-12-18 250 Microsoft Academic Search We plasma-sprayed nickel coatings on stainless steel and cobalt alloy coupons heated to temperatures ranging from room temperature to 650 °C. Temperatures, velocities, and sizes of spray particles were recorded while in-flight and held constant during experiments. We measured coating adhesion strength and porosity, photographed coating microstructure, and determined thickness and composition of surface oxide layers on heated substrates. Coating V. Pershin; M. Lufitha; S. Chandra; J. Mostaghimi 2003-01-01 251 Microsoft Academic Search Atmospheric plasma spraying (APS) is attractive for manufacturing solid oxide fuel cells (SOFCs) because it allows functional layers to be built rapidly with controlled microstructures.
The technique allows SOFCs that operate at low temperatures (500-700 °C) to be fabricated by spraying directly onto robust and inexpensive metallic supports. However, standard cathode materials used in commercial SOFCs exhibit high polarization resistances at J. Harris; O. Kesler 2010-01-01 252 Microsoft Academic Search The heat treatment effect on the characteristics and tensile strength of plasma-sprayed alumina, yttria-stabilized zirconia (YSZ), and mixtures of alumina and YSZ coatings on titanium was investigated. The as-sprayed structures of alumina and YSZ coatings consist of α and γ alumina phases, and cubic and tetragonal zirconia phases, respectively. The tensile strength of the coatings containing a large amount of K. Kishitake; H. Era; S. Baba 1995-01-01 253 Microsoft Academic Search Direct measurement of the stress-strain behavior of stand-alone plasma-sprayed 7 wt.% Y2O3-ZrO2 (YSZ) coatings was made at room temperature. YSZ coatings were evaluated in the as-sprayed condition, and after heat-treatments for 10, 50, and 100 h at 1200 °C using both monotonic and two different cyclic uniaxial compression loading profiles. Heat-treatments were used to change the primarily mechanically interlocked system of lamella to Christopher Petorak; Rodney W. Trice 2011-01-01 254 Microsoft Academic Search The present study uses plasma spray technology as a production process for the fabrication of free-standing, near-net-shaped NiAl components. Attention is especially focused on the in situ synthesis of NiAl. A new internal, dual powder injector blade has been designed to improve the gun performance as well as the spray efficiency of the feedstock powder. The specific A. Geibel; L. Froyen; L. Delaey; K. U.
Leuven 1996-01-01 255 Microsoft Academic Search An important issue for atmospheric plasma sprayed metal coatings is the oxidation involved during processing that significantly affects its phase composition and microstructure and thus the overall coating properties. In this study, suspension thermal spraying was used to manufacture cast iron coatings with high amounts of graphite carbon as solid-lubricant, because graphite structure is considerably diminished in molten droplets of C. Tekmen; K. Iwata; Y. Tsunekawa; M. Okumiya 2010-01-01 256 Microsoft Academic Search A secondary suspension injection system was designed, manufactured and tested, with the aim of depositing composite coatings formed by a conventional air plasma sprayed matrix embedding heat-sensitive phases sprayed and protected in a liquid suspension flow. The system is composed of two main sections: a pressurized vessel, equipped with regulation and recirculation sub-systems, and an adjustable nozzle holder. Preliminary experimental activities were F. Cipri; F. Marra; G. Pulci; J. Tirillò; C. Bartuli; T. Valente 2009-01-01 257 Microsoft Academic Search Nanostructured yttria-stabilized zirconia (YSZ) thermal barrier coatings (TBCs) were produced by atmospheric plasma spraying. The microstructure of the sprayed coating was characterized by transmission electron microscope (TEM) and scanning electron microscope (SEM). The nano-coating had a higher porosity of ~25% than the conventional coating, which is mainly attributed to the large amount of intersplat gaps in the nano-coating. The thermal Jing Wu; Hong-Bo Guo; Le Zhou; Lu Wang; Sheng-Kai Gong 2010-01-01 258 Microsoft Academic Search Corrosion behaviour of coatings sprayed with water-atomized (WA) cast iron powder was investigated by surface analyses and electrochemical methods, such as potentiodynamic polarization test and electrochemical impedance spectroscopy (EIS) in deaerated 0.5 M H2SO4 solution.
WA cast iron powders of Fe-3.75C-3.60Si-3.93Al (wt.%) were deposited onto an aluminum alloy (AA383 alloy) substrate by atmospheric DC plasma spraying. Four types of samples W. J. Kim; S. H. Ahn; H. G. Kim; J. G. Kim; Ismail Ozdemir; Y. Tsunekawa 2005-01-01 259 SciTech Connect Thin-film electrodes of a plasma-sprayed Li-Si alloy were evaluated for use as anodes in high-temperature thermally activated (thermal) batteries. These anodes were prepared using 44% Li/56% Si (w/w) material as feed material in a special plasma-spray apparatus under helium or hydrogen, to protect this air- and moisture-sensitive material during deposition. Anodes were tested in single cells using conventional pressed-powder separators and lithiated pyrite cathodes at temperatures of 400 to 550 °C at several different current densities. A limited number of 5-cell battery tests were also conducted. The data for the plasma-sprayed anodes were compared to those for conventional pressed-powder anodes. The performance of the plasma-sprayed anodes was inferior to that of conventional pressed-powder anodes, in that the cell emfs were lower (due to the lack of formation of the desired alloy phases) and the small porosity of these materials severely limited their rate capability. Consequently, plasma-sprayed Li-Si anodes would not be practical for use in thermal batteries. Guidotti, Ronald A.; Reinhardt, Frederick W.; Scharrer, Gregory L. 1999-09-08 260 Microsoft Academic Search A glow-to-arc transition of rarefied plasma that controls the ultimate performance of a vacuum interrupter depends on the state of remaining plasma in the contact gap after current zero. In this study, the electron temperature and the ion density of residual plasma of a magnetically stabilized high-current vacuum arc were measured by the electrostatic (Langmuir) probe method.
As a result Kazuyoshi Arai; Shinji Takahashi; Osami Morimiya; Yosimitu Niwa 2003-01-01 261 Microsoft Academic Search In this study, TiO2 coatings were deposited by suspension plasma spraying (SPS) from a commercial TiO2 nanoparticle suspension on two different substrates: a standard stainless steel and a Pyrex glass. Coatings were sprayed on both substrates with an F4-MB monocathode torch; a Triplex Pro tricathode torch was also used to spray coatings just on the stainless steel substrates. Spraying distance E. Bannier; G. Darut; E. Sánchez; A. Denoirjean; M. C. Bordes; M. D. Salvador; E. Rayón; H. Ageorges 2011-01-01 262 Microsoft Academic Search Titanium dioxide coatings were deposited by utilizing an atmospheric plasma-spraying system. The agglomerated P25/20 nano-powder and different spraying parameters (e.g., argon flow rate and spray distance) were used to determine their influences on the microstructure, crystalline structure, photo-absorption, and photo-catalytic performance of the coatings. The microstructure and phases of as-sprayed TiO2 coatings were characterized by scanning electron microscope (SEM) and X-ray Maryamossadat Bozorgtabar; Mehdi Salehi; Mohammadreza Rahimipour; Mohammadreza Jafarpour 2010-01-01 263 NASA Astrophysics Data System (ADS) To obtain a coating of high quality, a new type of plasma torch was designed and constructed to increase the stability of the plasma arc and reduce the air entrainment into the plasma jet. The torch, called a bi-anode torch, generates an elongated arc with comparatively high arc voltage and low arc fluctuation. Spraying experiments were carried out to compare the quality of coatings deposited by a conventional torch and a bi-anode torch. Alumina coatings and tungsten carbide coatings were prepared to appraise the heating of the sprayed particles in the plasma jets and the entrainment of the surrounding air into the plasma jets, respectively.
The results show that anode arc root fluctuation has only a small effect on the melting rate of alumina particles. On the other hand, reduced air entrainment into the plasma jet of the bi-anode torch will drastically reduce the decarbonization of tungsten carbide coatings. An, Lian-Tong; Gao, Yang; Sun, Chengqi 2011-06-01 264 SciTech Connect Laser pulse compression in plasma-vacuum systems is investigated in the weakly relativistic regime. First, within one-dimensional hydrodynamic models, the basic features of propagation in plasmas, like width and amplitude changes, are demonstrated. The numerical findings can be interpreted, in part, by a simplified model based on the variation of action method. Since transverse effects like filamentation do play a significant role, the numerical evaluations are then generalized to two-dimensional situations. An approximate analytical criterion for the dominating transverse wave number during laser propagation in plasmas is presented. Finite plasma-vacuum systems show in addition to the filamentation instability the so-called plasma lens effect. The latter is first demonstrated for a single plasma layer. It is then discussed how (i) longitudinal and transversal self-compression in plasmas, (ii) focusing by a plasma layer, and (iii) cleaning of unstable modes compete with each other in layered plasma-vacuum systems. Depending on the available parameters, optimized plasma-vacuum systems are proposed for pulse compression. Such systems can be used as a substitute for hollow fibers which are in use to shorten a pulse. Pulse lengths of one or two cycles can be reached by optimized plasma-vacuum systems, while attaining ultrarelativistic intensities in the focal spot behind the system of layers. Karle, Ch.; Spatschek, K. H.
[Institut fuer Theoretische Physik, Heinrich-Heine-Universitaet Duesseldorf, D-40225 Duesseldorf (Germany)] 2008-12-15 265 NASA Astrophysics Data System (ADS) The new mode of vacuum arc, the Hot Refractory Anode Vacuum Arc, was studied experimentally using a Langmuir probe, two types of thermal probes, and film collection substrates. The plasma density, electron temperature, plasma energy flux, cathode erosion, mass deposition rate on a substrate, and macroparticle contamination in the deposited films were measured. The arc initially operated as a usual vacuum arc sustained by cathode spots, i.e., the vapor and plasma source was located at the cathode spot. At a later stage the anode heated up and metal vapor originating at the cathode was re-evaporated from the nonconsumable hot graphite anode. Initially, plasma density was about (3-4)×10^20 m^-3 but it increased with time, reaching about 2×10^21 m^-3 after 60 s in a 340 A arc. The electron temperature initially was about 1.6 eV and decreased with time to a steady-state value of about 1.1 eV after 20 s. The radial plasma energy flux generated by 175 and 340 A arcs was about 1 and 2 MW/m2, respectively, at 1.6 cm from the electrode axis. The deposition rate on substrates placed 110-120 mm from the electrode axis reached about 2 µm/min. The density of macroparticles found on substrates exposed during the first 60 s of arcing was ~10^3 macroparticles per mm2; however, this density was reduced to about 1 macroparticle per mm2 on substrates exposed to only the second 30 s period. Beilis, I. I.; Keidar, M.; Boxman, R. L.; Goldsmith, S. 2000-07-01 266 NASA Astrophysics Data System (ADS) Yttrium oxide (Y2O3) coatings have been prepared by axial suspension plasma spraying with fine powders. It is clarified that the coatings have high hardness, low porosity, high erosion resistance against CF4-containing plasma and retention of a smooth eroded surface.
This suggests that the axial suspension plasma spraying of Y2O3 is applicable to fabricating equipment for electronic devices, such as dry etching. Surface morphologies of the slurry coatings with splats are similar to conventional plasma-sprayed Y2O3 coatings, identified from microstructural analysis. Dense coating structures with no lamellar boundaries have been seen, which is apparently different from the conventional coatings. It has also been found that the crystal structure of the suspension coatings is mainly composed of metastable monoclinic phase, whereas the powders and the conventional plasma spray coatings have stable cubic phase. Mechanism of coating formation by plasma spraying with fine powder slurries is discussed based on the results. Kitamura, Junya; Tang, Zhaolin; Mizuno, Hiroaki; Sato, Kazuto; Burgess, Alan 2011-01-01 267 NASA Astrophysics Data System (ADS) Various developmental research works on the metallic glass have been conducted in order to broaden its application field. Thermal spraying method is one of the potential techniques to enhance the excellent properties such as high toughness and corrosion resistance of the metallic glass material. The gas tunnel type plasma spraying is useful to obtain high quality ceramic coatings such as Al2O3 and ZrO2 coatings. In this study, the Ni-based metallic glass coatings were produced by the gas tunnel type plasma spraying under various experimental conditions, and their microstructure and mechanical properties were investigated. At the plasma current of 200-300 A, the Ni-based metallic glass coatings of more than 200 µm in thickness were formed densely with Vickers hardness of about Hv = 600. Kobayashi, Akira; Kuroda, Toshio; Kimura, Hisamichi; Inoue, Akihisa 2010-10-01 269 NASA Astrophysics Data System (ADS) Thermal spraying with liquid-based feedstocks demonstrated a potential to produce coatings with new and enhanced characteristics. A liquid delivery system prototype was developed and tested in this study. The feeder is based on the 5MPE platform and uses a pressure setup to optimally inject and atomize liquid feedstock into a plasma plume. A novel self-cleaning apparatus is incorporated into the system to greatly reduce problems associated with clogging and agglomeration of liquid suspensions. This approach also allows the liquid feedstock line to the gun to remain charged for quick on-off operation. Experiments on aqueous and ethanol-based suspensions of titania, alumina, and YSZ were performed through this liquid delivery system using a 9MB plasma gun. Coatings with ultrafine splat microstructures were obtained by plasma spraying of those suspensions. Phase composition and microstructure of the as-sprayed coatings were investigated. Cotler, Elliot M.; Chen, Dianying; Molz, Ronald J.
2011-06-01 270 NASA Astrophysics Data System (ADS) The effect of coatings, which are formed with laser cladding and plasma spray welding on 1Cr18Ni9Ti base metal, on wear resistance is studied. A 5-kW transverse flowing CO2 laser is used for cladding Co base alloy powder pre-placed on the substrate. Compared with the plasma spray coatings, the spoiled rate of products with laser clad layers was lower and the rate of finished products was higher. Their microstructure is extremely fine. They have close texture and small size grain. Their dilution resulting from the compositions of the base metal and thermal effect on base metal are less. The hardness, toughness, and strength of the laser cladding layers are higher. Wear tests show that the laser layers have higher properties of anti-friction, anti-scour and high-temperature sliding strike. The wear resistance of the laser clad layers is about twice that of the plasma spray welding layer. Wang, Xinlin; Shi, Shihong; Zheng, Qiguang 2004-03-01 271 PubMed Hydroxyapatite (HA) coating was carried out by plasma spraying on bulk Ti substrates and porous Ti substrates having a Young's modulus similar to that of human bone. The microstructures and bond strengths of HA coatings were investigated in this study. The HA coatings with thickness of 200-250 µm were free from cracks at interfaces between the coating and Ti substrates. XRD analysis revealed that the HA powder used for plasma spraying had a highly crystallized apatite structure, while the HA coating contained several phases other than HA. The bond strength between the HA coating and the Ti substrates evaluated by the standard bonding test (ASTM C633-01) was strongly affected by the failure behavior of the HA coating. A mechanism to explain the failure is discussed in terms of surface roughness of the plasma-sprayed HA coatings on the bulk and porous Ti substrates.
PMID:15965595 Oh, Ik-Hyun; Nomura, N; Chiba, A; Murayama, Y; Masahashi, N; Lee, Byong-Taek; Hanada, S 2005-07-01 272 PubMed Highly crystalline feedstock hydroxyapatite (HA) particles with irregular shapes were spheroidized by plasma spraying them onto the surface of ice blocks or into water. The spherical Ca-P particles thus produced contained various amounts of the amorphous phase which were controlled by the stand-off distance between the spray nozzle and the surface of ice blocks or water. The smooth surface morphology without cracks of spherical Ca-P particles indicated that there were very low thermal stresses in these particles. Plasma-sprayed Ca-P particles were highly bioactive due to their amorphous component and hence quickly induced the formation of bone-like apatite on their surfaces after they were immersed in an acellular simulated body fluid at 36.5 °C. Bone-like apatite nucleated on the dissolved surface (due to the amorphous phase) of individual Ca-P particles and grew to coalesce between neighboring Ca-P particles, thus forming an integrated apatite plate. Bioactive and biodegradable composite scaffolds were produced by incorporating plasma-sprayed Ca-P particles into a degradable polymer. In vitro experiments showed that plasma-sprayed Ca-P particles enhanced the formation of bone-like apatite on the pore surface of Ca-P/PLLA composite scaffolds. PMID:12059011 Weng, Jie; Wang, Min; Chen, Jiyong 2002-07-01 273 Microsoft Academic Search Al2O3 ceramic coatings plasma sprayed on the surface of metals greatly change the corrosion behaviour of metals in strong acid solutions and effectively enhance their corrosion resistance. In this paper, the corrosion behaviour of a Q235 steel with plasma sprayed Al2O3 coatings in a boiling 5% HCl solution is investigated.
The corrosion rate of the Al2O3 coating sprayed on Yan Dianran; He Jining; Wu Jianjun; Qiu Wanqi; Ma Jing 1997-01-01 274 Microsoft Academic Search In this paper, the NiCr-Cr3C2 coating was prepared by laser-hybrid plasma spraying (LHPS) technology. The NSS (neutral salt spray) test results showed that the LHPS NiCr-Cr3C2 coating had good corrosion-resistance performance compared with the base material and the APS (air plasma spraying) coating. A SEM (scanning electron microscope) was used to analyze the corrosion morphology of the samples. The LHPS coating overcame Shu-qing Li; Qi-lian Li; Shui-li Gong; Chun Wang 2011-01-01 275 NASA Astrophysics Data System (ADS) In this study, surfaces of copper plates were coated with a thick alumina layer by plasma spray coating to fabricate a composite with a dielectric performance that made them suitable as substrates in electronic devices with high thermal dissipation. The performance of alumina dielectric layers fabricated by plasma spray coating and by a traditional screen-printing process was compared. Effects of the spraying parameters and size of alumina particles on the microstructure, thickness, and the surface roughness of the coated layer were explored. In addition, the thermal resistance perpendicular to the interface of copper and alumina and the breakdown voltage across the alumina layer of the composite were also investigated. Experimental results indicated that alumina particles with 5-22 µm in diameter tended to form a thicker layer with a poorer surface roughness than that of the particles with 22-45 µm in diameter. The thermal resistance increased with the surface roughness of the alumina layer, and the breakdown voltage was affected by the ambient moisture, the microstructure and the thickness of the layer.
The optimal parameters for plasma spray coating were an alumina powder of particle size between 22 and 45 µm, a plasma power of 40 kW, a spraying velocity of 750 m/s, an argon flow rate of 45 L/min, a spraying distance of 140 mm, and a spraying angle of 90°. It can be concluded that an alumina layer thickness of 20 µm provided a low surface roughness, low thermal resistance, and highly reliable breakdown voltage (38 V/µm). Lin, Kuan Hong; Xu, Zi Hao; Lin, Shun Tian 2011-03-01 276 Microsoft Academic Search The characteristic properties of microscale capillary plasma electrode structures were experimentally investigated and compared to the dielectric barrier discharge (DBD) structure. The vacuum ultraviolet (VUV) emission from the capillary plasma electrode discharges (CPEDs) was more intense and more efficient than the one from the DBD. Based on VUV emission characteristics, it is confirmed that the CPED-based plasma display could be Soo-Ho Park; Tae-Seung Cho; Kurt H. Becker; Erich E. Kunhardt 2009-01-01 277 NASA Astrophysics Data System (ADS) Nickel and chromium coatings were produced using plasma spraying and laser remelting on the copper sheet. The corrosion test was carried out in an acidic atmosphere, and the corrosive behaviors of both coatings and original copper samples were investigated by using an impedance comparison method. Experimental results show that nickel and chromium coatings display better corrosion resistance properties relative to the original pure copper sample. The corrosion rate of chromium coating is less than that of nickel coating, and corrosion resistances of laser remelted nickel and chromium samples are better than those of plasma sprayed samples. The corrosion deposit film of copper is loose compared with nickel and chromium. Liang, Gongying; Wong, T. T.; An, Geng; MacAlpine, J. M. K.
2006-01-01 278 SciTech Connect The cycles-to-failure vs cycle duration data for three different thermal barrier coating systems, which consist of atmospheric pressure plasma-sprayed ZrO2-8 percent Y2O3 over similarly deposited or low pressure plasma sprayed Ni-base alloys, are presently analyzed by means of the Miller (1980) oxidation-based life model. Specimens were tested at 1100 °C for heating cycle lengths of 1, 6, and 20 h, yielding results supporting the model's value. 9 references. Miller, R.A.; Argarwal, P.; Duderstadt, E.C. 1984-07-01 279 Microsoft Academic Search Water-atomized cast iron powder of Fe-2.17at.%C-9.93at.%Si-3.75at.%Al was deposited onto an aluminum alloy substrate by atmospheric direct current plasma spraying to improve its tribological properties. Preannealing of the cast iron powder allows the precipitation of considerable amounts of graphite structure in the powder. However, significant reduction in graphitized carbon in cast iron coatings is inevitable after plasma spraying in air Y. Tsunekawa; I. Ozdemir; M. Okumiya 2006-01-01 280 Microsoft Academic Search High-temperature wear characteristics between plasma spray coated piston rings and cylinder liners were investigated to find the optimum combination of coating materials using the disc-on-plate reciprocating wear test in dry conditions. The disc and plate represented the piston ring and the cylinder liner, respectively. Coating materials studied were Cr2O3-NiCr, Cr2O3-NiCr-Mo, and Cr3C2-NiCr-Mo. Plasma spray conditions for the coating materials were Jong-Hyun Hwang; Myoung-Seoup Han; Dae-Young Kim; Joong-Geun Youn 2006-01-01 281 Microsoft Academic Search Several alumina and alumina-zirconia composite coatings were manufactured by suspension plasma spraying (SPS), implementing different operating conditions in order to achieve dense and cohesive structures.
Temperatures and velocities of the in-flight particles were measured with a commercial diagnostic system (Accuraspray) at the spray distance as a function of the plasma operating parameters. Temperatures around 2000 °C and velocities as high O. Tingaud; P. Bertrand; G. Bertrand 2010-01-01 282 Microsoft Academic Search Suspension plasma spraying (SPS) is a promising modification of traditional plasma spraying techniques that uses small (≤2 µm) particles suspended in a liquid to fabricate coatings with fine microstructures and controlled porosity rapidly and without the need for post-deposition heat treatments. These qualities make SPS an interesting new technique to manufacture solid oxide fuel cell (SOFC) active layers. However, in order D. Waldbillig; O. Kesler 2009-01-01 283 SciTech Connect The low fracture toughness of MoSi2 at ambient temperature has prompted investigations into new processing methods in order to impart some degree of fracture toughness into this inherently brittle material. In the following investigation, low pressure plasma spraying was employed as a fabricating technique to produce spray-formed deposits of MoSi2 and ductile reinforced MoSi2 composites containing approximately 10 and 20 volume percent of a discontinuous tantalum lamellae reinforcement. Fracture toughness (K1C) measurements of MoSi2 and the MoSi2/Ta composites were done using a chevron notched 4-point bend fracture toughness test in both the as-sprayed condition and after hot isostatic pressing at 1200 °C/206 MPa for 1 hour. Results from the ductile reinforced MoSi2/Ta composites have shown fracture toughness increases on the order of 200% over the as-sprayed MoSi2. In addition, a marked anisotropy in fracture toughness was observed in the spray-formed deposits due to the layered splat structure produced by the low pressure plasma spray process. Castro, R.G.; Rollett, A.D.; Stanek, P.W.
[Los Alamos National Lab., NM (United States); Smith, R.W. [Drexel Univ., Philadelphia, PA (United States). Dept. of Materials Engineering 1992-02-01 285 NASA Astrophysics Data System (ADS) The microstructural inhomogeneity in the plasma-sprayed hydroxyapatite (HA) coatings was characterized by using electron probe microanalyser (EPMA). A simple and artful method was developed to detect the interface characteristics. All the samples for observation were ground and polished along the direction parallel to the coating surfaces.
The BSE images directly and clearly showed the inhomogeneity in the as-sprayed coatings, with the amorphous regions appearing bright gray and the crystalline regions dark gray. X-ray diffraction (XRD) patterns indicated that after immersion in deionized water for 20 days, bone-like apatite and β-Ca2P2O7 precipitated on the polished surfaces of the as-sprayed HA coatings. Post-heat treatment could eliminate the microstructural inhomogeneity in the coatings; only β-Ca2P2O7 precipitated on the surfaces of the heat-treated HA coatings. The immersed samples were re-polished until the substrate was just exposed, in order to investigate the effect of immersion on the interface. It was shown that the immersion decreased the cohesive strength of the as-sprayed coatings. There were more and broader cracks in the splats in contact with the substrate, and the amorphous phase content increased toward the coating-substrate interface. Post-heat treatment was shown to reduce peeling of the coating during the re-polishing operation. It was proposed that the distributions of amorphous phase and cracks in as-sprayed coatings are detrimental to coating properties and should be modified through improved plasma spraying processing. Lu, Yu-Peng; Xiao, Gui-Yong; Li, Shi-Tong; Sun, Rui-Xue; Li, Mu-Sen 2006-01-01 286 NASA Astrophysics Data System (ADS) Thermal spray coatings composed of a variety of carbide sizes and cobalt contents were sprayed with a high-energy plasma spray system. The sizes of the carbides used fell into three rough groupings: micrometer-scale carbides (1-2 μm), submicrometer (300-700 nm), and nanoscale (≤100 nm). The feedstock powders were evaluated in terms of their size distribution, external morphology, internal morphology, and initial carbide size. Two different fixtures were used in spraying to evaluate the effect of cooling rate on the wear resistance of the coatings.
The microstructures of the sprayed coatings were examined using optical metallography, SEM, FESEM, TEM, XRD and chemical analysis. The coatings were evaluated in low-stress abrasive wear by the ASTM G-65 Dry Sand Rubber Wheel test. Furthermore, the porosity and hardness of the coatings were evaluated. The cobalt content was found to be the predominant influence on the wear rate of the coatings. The decrease in carbide size was not found to affect the wear rate of the coatings. Coatings sprayed on the 'hot' fixture were found to have slightly improved wear resistance compared to coatings sprayed on the 'cold' fixture. The wear rates of the coatings were found to be a function of the WC/Co volume ratio. Tewksbury, Graham Alfred 287 National Technical Information Service (NTIS) Very limited research has been carried out on the cavitation-erosion (CE) resistance of thermal sprayed protective coatings. In the work that has been carried out to date, there appears to be a relation between the nature of the particle-particle cohesive... M. F. Smith H. Bhat H. Herman 1984-01-01 288 SciTech Connect One of the most promising engineering solutions to the problem of spraying powder materials is the proposed method of plasma spraying by a laminar plasma jet. Laminar plasma flow is characterized by a small jet divergence angle; the powder particles penetrate the jet and are accelerated mainly in the axial direction. The molten powder particles are transported almost to the surface of the treated workpiece inside the laminar plasma flow, in an atmosphere of the plasma-forming gas, with acceleration over the entire transfer distance. This leads to an increase in particle velocity, a decrease in their oxidability, and increases in the powder deposition efficiency, density, and adhesion strength to the surface to be coated. Khutsishvili, M.; Kikvadze, L. [Plasma Spray Laboratory, Georgian Technical University, M.
Kostava street 77, Tbilisi 0175 (Georgia) 2008-03-19 289 NASA Astrophysics Data System (ADS) Simulation studies on the thermal behaviour of yttrium oxide particles in a thermal plasma jet were carried out with the objective of controlling and optimizing the plasma spray process. The 'in-flight' behaviour of yttrium oxide particles in the plasma jet was studied by solving the heat transfer and momentum transfer equations using the velocity and temperature distribution in the plasma jet obtained from a two-dimensional model. In particular, the effects of particle size, thermal power of the torch and torch operating parameters such as gas flow rates were considered to calculate the heat transfer and momentum transfer to the particle. Results of the simulation studies agree quite well with the experimental results on the variation of deposition efficiency with power and particle size. The complete description of the model with the results obtained for the typical operating parameters of our plasma spray torch is presented in the paper. Thiyagarajan, T. K.; Sreekumar, K. P.; Selvan, V.; Ramachandran, K.; Ananthapadmanabhan, P. V. 2010-02-01 291 Microsoft Academic Search Vacuum arc or cathodic arc metal plasma sources are attractive and convenient for depositing high-quality thin metal films and metallurgical coatings. It is a common practice to use a curved magnetic filter duct to eliminate macroparticle contamination and to bias the duct wall with a positive voltage to enhance the throughput of the metal plasma. The metal plasma usually consists Dixon Tat-Kun Kwok; Paul K. Chu; M. M. M. Bilek; Ian G. Brown; Alexey Vizir 2000-01-01 292 PubMed Central Implant-related infection is one of the key concerns in total hip joint arthroplasties. In order to reduce bacterial adhesion, silver (Ag)/silver oxide (Ag2O) doping was used in plasma-sprayed hydroxyapatite (HA) coatings on titanium substrates. HA powder was doped with 2.0, 4.0 and 6.0 wt% Ag, heat treated at 800 °C and used for plasma spray coating using a 30 kW plasma spray system equipped with a supersonic nozzle. Application of the supersonic plasma nozzle significantly reduced phase decomposition and amorphous phase formation in the HA coatings, as evident from X-ray diffraction (XRD) and Fourier transform infrared spectroscopic (FTIR) analysis. An adhesive bond strength of more than 15 MPa ensured the mechanical integrity of the coatings. Resistance against bacterial adhesion of the coatings was determined by challenging them against Pseudomonas aeruginosa (PAO1). Live/dead staining of the adherent bacteria on the coating surfaces indicated a significant reduction in bacterial adhesion due to the presence of Ag. In vitro cell-material interactions and alkaline phosphatase (ALP) protein expression were evaluated by culturing human fetal osteoblast cells (hFOB). The present results suggest that plasma-sprayed HA coatings doped with an optimum amount of Ag can have excellent antimicrobial properties without altering the mechanical properties of the Ag-doped HA coatings.
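Several entries above (e.g., the yttrium oxide in-flight simulation study) compute particle temperature and velocity by solving coupled heat- and momentum-transfer equations along the plasma jet. The following is a minimal lumped-capacitance sketch of that calculation; the uniform plasma conditions, particle properties, and injection state are all illustrative assumptions, not values from the cited studies.

```python
# Minimal in-flight particle model for plasma spraying: lumped-capacitance
# heating plus drag acceleration along the torch axis. All plasma and
# material properties below are illustrative assumptions.
import math

# assumed plasma conditions (uniform; real jets decay with axial distance)
T_gas = 10000.0      # plasma temperature, K
v_gas = 800.0        # plasma axial velocity, m/s
rho_g = 0.02         # gas density, kg/m^3
mu_g = 2.5e-4        # gas viscosity, Pa.s
k_g = 1.7            # gas thermal conductivity, W/(m.K)

# assumed particle properties (e.g., a 30 um oxide particle)
d_p = 30e-6          # diameter, m
rho_p = 5000.0       # density, kg/m^3
cp_p = 600.0         # specific heat, J/(kg.K)
m_p = rho_p * math.pi * d_p**3 / 6.0
A_p = math.pi * d_p**2          # surface area
A_c = math.pi * d_p**2 / 4.0    # cross section

def step(T_p, v_p, dt):
    """Advance particle temperature and velocity by one explicit Euler step."""
    v_rel = v_gas - v_p
    Re = rho_g * abs(v_rel) * d_p / mu_g
    # drag coefficient: Stokes law with a standard Reynolds-number correction
    Cd = 24.0 / max(Re, 1e-9) * (1.0 + 0.15 * Re**0.687)
    a = 0.5 * rho_g * Cd * A_c * v_rel * abs(v_rel) / m_p
    # Ranz-Marshall correlation for convective heating (Pr ~ 0.6 assumed)
    Nu = 2.0 + 0.6 * math.sqrt(Re) * 0.85
    h = Nu * k_g / d_p
    dT = h * A_p * (T_gas - T_p) / (m_p * cp_p)
    return T_p + dT * dt, v_p + a * dt

T_p, v_p = 300.0, 20.0   # injection conditions
for _ in range(2000):    # 2 ms of flight at 1 us steps
    T_p, v_p = step(T_p, v_p, 1e-6)
print(f"T_p = {T_p:.0f} K, v_p = {v_p:.0f} m/s")
```

A real model would let the gas temperature and velocity decay with axial distance (e.g., taken from a two-dimensional jet model) and would track melting and evaporation once the particle reaches its melting point.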
Roy, Mangal; Fielding, Gary A.; Beyenal, Haluk; Bandyopadhyay, Amit; Bose, Susmita 2012-01-01 293 Microsoft Academic Search An inherent feature of the vacuum arc discharge is that small droplets of micrometer size (macroparticles) are produced along with the plasma in the cathode spots. Droplet contamination of the substrate can occur when implanting metal ions using a vacuum arc ion source. The contamination can be significant for some cathode materials such as lead and other low melting point Simone Anders; Andre Anders; Ian G. Brown; Robert A. MacGill; Michael R. Dickinson 1994-01-01 294 Microsoft Academic Search We describe experiments demonstrating the formation of a high-current electron beam from a vacuum arc plasma. A preexisting vacuum arc ion source was used, with the extraction voltage reversed in polarity so as to form an electron beam rather than an ion beam; no other changes were required. The beam formed was of energy up to 33 keV, beam Efim M. Oks; Ian G. Brown 1998-01-01 295 Microsoft Academic Search A plasma source for a vacuum arc thruster was developed. Electrical energy stored in two resonators is delivered to a vacuum gap, and metal ions are emitted from the electrodes. The ions acquire momentum, and the thruster obtains thrust from the reaction of the ions. A high-frequency (HF) current was used to generate energetic ions in the spark phase, and a low S. Shibata; T. Yanagidaira; K. Tsuruta 2006-01-01 296 PubMed Thermally sprayed hydroxyapatite coatings suffer from poor mechanical properties: low tensile strength, wear resistance, hardness, toughness and fatigue resistance. The mechanical properties of hydroxyapatite coatings can be enhanced by incorporating a secondary bioinert reinforcement material. In this study an attempt has been made to improve the mechanical properties of plasma-sprayed hydroxyapatite by reinforcing it with 10, 20 and 30% Al2O3. The plasma-sprayed coatings have been characterized using FE-SEM/EDAX, XRD, AFM and FTIR spectroscopy.
Corrosion studies have been performed in simulated body fluid, and abrasive wear studies have been performed on flat specimens on a disk wear tester. Microhardness, tensile strength and wear resistance were found to increase with increasing Al2O3 content. All types of coatings show superior resistance against corrosion in simulated body fluid. PMID:23623104 Mittal, Manoj; Nath, S K; Prakash, Satya 2013-03-14 297 NASA Astrophysics Data System (ADS) Apatite-type lanthanum silicate (ATLS) electrolyte coatings for use in intermediate-temperature solid oxide fuel cells were deposited by atmospheric plasma spraying (APS). Plasma-sprayed coatings with the typical composition La10(SiO4)6O3, exhibiting good densification and high oxide ionic conductivity, were obtained by properly adjusting the spraying parameters, particularly the gun current. The highest obtained ionic conductivity value of 3.3 mS/cm at 1173 K in air is comparable to other ATLS conductors. This work demonstrated empirically that the APS technique is feasible for synthesizing dense La10(SiO4)6O3 electrolyte coatings using gun currents within an unusually broad range. Gao, Wei; Liao, Han-Lin; Coddet, Christian 2013-10-01 298 NASA Astrophysics Data System (ADS) Conventional thermal spray processes such as atmospheric plasma spraying (APS) have to use easily flowable powders with sizes up to 100 μm. This leads to certain limitations in the achievable microstructural features. Suspension plasma spraying (SPS) is a promising new processing method which employs suspensions of sub-micrometer particles as feedstock. Much finer grain and pore sizes, as well as dense and also thin ceramic coatings, can therefore be achieved. Highly porous coatings with fine pore sizes are needed as electrodes in solid oxide fuel cells. Cathodes made of LaSrMn perovskites have been produced by the SPS process. Their microstructural and electrochemical properties will be presented.
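A quick way to judge whether an electrolyte conductivity such as the 3.3 mS/cm reported for the plasma-sprayed La10(SiO4)6O3 coatings above is useful in a cell is the area-specific resistance, ASR = t/σ. The coating thickness below is an assumed value for illustration, not one from the study.

```python
# Area-specific resistance of an electrolyte coating: ASR = thickness / sigma.
sigma = 3.3e-3          # S/cm at 1173 K (value quoted in the abstract above)
thickness_um = 50.0     # assumed coating thickness, um
t_cm = thickness_um * 1e-4
asr = t_cm / sigma      # ohm * cm^2
print(f"ASR = {asr:.2f} ohm*cm^2")
```

Thinner coatings or higher conductivity reduce the ASR proportionally, which is why dense, thin electrolyte layers are the goal of such spray optimization.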
Another interesting application is thermal barrier coatings (TBCs). SPS allows the manufacture of highly segmented TBCs with still relatively high porosity levels. In addition to these specific applications, the manufacture of new microstructures such as nano-multilayers and columnar structures is presented. Kassner, Holger; Siegert, Roberto; Hathiramani, Dag; Vassen, Robert; Stoever, Detlev 2008-03-01 299 NASA Astrophysics Data System (ADS) Synthetic hydroxyapatite (HA, Ca10(PO4)6(OH)2) is a very useful biomaterial for numerous applications in medicine, for example as fine powder for suspension plasma spraying. The powder was synthesized using aqueous solutions of ammonium dihydrogen phosphate (NH4H2PO4) and calcium nitrate (Ca(NO3)2·4H2O) in carefully controlled experiments. The synthesized fine powder was characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The powder was formulated into water- and alcohol-based suspensions and used to carry out initial tests of plasma spraying onto titanium substrates. The phase analysis of the sprayed coating was made by XRD. Jaworski, Roman; Pierlot, Christel; Pawlowski, Lech; Bigan, Muriel; Quivrin, Maxime 2008-12-01 300 NASA Astrophysics Data System (ADS) Plasma spraying at very low pressure (50-200 Pa) is significantly different from atmospheric plasma spraying (APS). By applying powder feedstock, it is possible to fragment the particles into very small clusters or even to evaporate the material. As a consequence, the deposition mechanisms and the resulting coating microstructures can be quite different compared to conventional APS liquid-splat deposition. Thin and dense ceramic coatings as well as columnar-structured strain-tolerant coatings with low thermal conductivity can be achieved, offering new possibilities for application in energy systems.
To exploit the potential of such gas-phase deposition in plasma spray-based processes, the deposition mechanisms and their dependence on process conditions must be better understood. Thus, the plasma conditions were investigated by optical emission spectroscopy. Coating experiments were performed, partly at extreme conditions. Based on the observed microstructures, a phenomenological model is developed to identify basic growth mechanisms. Mauer, G.; Hospach, A.; Zotov, N.; Vaßen, R. 2013-03-01 301 Microsoft Academic Search Failure in plasma-sprayed thermal barrier coating systems mostly takes place in the ceramic topcoat or at the interface between the topcoat and the bondcoat. The failure normally occurs by spallation of the topcoat during shutdown from high temperatures, where compressive thermal mismatch stresses are induced in the topcoat. In order to analyse the residual stresses, knowledge about the elastic Mats Eskner; Rolf Sandström 2004-01-01 302 National Technical Information Service (NTIS) The effect of sintering on the mechanical and physical properties of free-standing plasma-sprayed ZrO2-8 wt% Y2O3 thermal barrier coatings (TBCs) was determined by annealing them at 1316 °C in air. Mechanical and physical properties of the TBCs, including stre... S. R. Choi D. M. Zhu R. A. Miller 2004-01-01 303 Microsoft Academic Search Structural and mechanical properties are investigated for thick boron carbide (B4C) coatings formed on stainless steel substrates by electromagnetically accelerated plasma spraying. The hardness, porosity and surface roughness of the coatings depend on both the raw-powder size and the substrate distance. Within the coating conditions using two raw powders of different sizes, 30±10 μm and J. Kitamura; S. Usuba; Y. Kakudate; H. Yokoi; K. Yamamoto; A. Tanaka; S.
Fujiwara 2003-01-01 304 Microsoft Academic Search Flame-spheroidized feedstock, with excellent known heat transfer and consistent melting capabilities, was used to produce hydroxyapatite (HA) coatings via plasma spraying. The characteristics and inherent mechanical properties of the coatings have been investigated and were found to have a direct and significant relationship with the feedstock characteristics, processing parameters and microstructural deformities. Processing parameters such as particle sizes (SHA: S. W. K Kweh; K. A Khor; P Cheang 2000-01-01 305 Microsoft Academic Search Home-synthesized hydroxyapatite (HA) powder was formulated with water and alcohol to obtain a suspension used to plasma spray coatings onto titanium substrates. The deposition process was optimized and the resulting coatings were soaked in simulated body fluid (SBF) for periods of 3, 7, 14, 28, and 60 days at a controlled temperature of 37 °C. The microstructural research enabled to find in Leszek Łatka; Lech Pawlowski; Didier Chicot; Christel Pierlot; Fabrice Petit 2010-01-01 306 Microsoft Academic Search It is shown that the experimental results obtained by Kumar et al. on plasma-sprayed Sm-Co alloys, which seem to refute the existence of a eutectoid decomposition of SmCo5, can actually be taken as further experimental evidence in favor of the presence of the decomposition reaction. K. H. J. Buschow; F. J. A. den Broeder 1980-01-01 308 National Technical Information Service (NTIS) The plasma-sprayed graded, layered yttria-stabilized zirconia (ZrO2)/metal (CoCrAlY) seal system for gas turbine blade tip applications up to 1589 K (2400 °F) seal temperatures was studied. Abradability, erosion, and thermal fatigue characteristics of the g... L. T. Shiembob 1977-01-01 309 Microsoft Academic Search The electron number density has been measured in a plasma spray torch using Stark broadening of the Hβ and Ar I (430 nm) lines. A small amount of hydrogen (1% by volume in argon gas) was introduced to study the Hβ line profile. The axial variation of the electron number density has been determined up to a distance of 20 mm from N. K. Joshi; S. N. Sahasrabudhe; K. P. Sreekumar; N. Venkatramani 2003-01-01 310 The influence of specimen size on thermal shock resistance is investigated for relatively large plasma-sprayed alumina tubes of varying diameter, length, and wall thickness. The observations suggest that an increasing wall thickness has a significant effect on the critical temperature difference for the onset of fracture, ΔTc, compared to the relatively weak effect of tube diameter and length. A plot Ekkehard H. Lutz 1995-01-01 311 Uniaxial tensile tests were performed on plasma spray formed (PSF) Al-Si alloy reinforced with multiwalled carbon nanotubes (MWCNTs). The addition of CNTs leads to a 78% increase in the elastic modulus of the composite. There was a marginal increase in the tensile strength of the CNT-reinforced composite, with a 46% degradation in strain to failure. The computed critical pullout length of T. Laha; Y. Chen; D. Lahiri; A. Agarwal 2009-01-01 312 Yttria-stabilized zirconia (YSZ) suspensions were injected into an atmospheric plasma jet using two designs of a home-made two-fluid atomizing nozzle.
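The torch-diagnostics entry above infers electron number density from Stark broadening of the Hβ line. The standard Griem-type scaling is Δλ ∝ n_e^(2/3); the inversion is sketched below with a calibration constant that is an assumption for illustration only — real measurements use tabulated Stark-width parameters for Hβ.

```python
# Electron density from the Stark FWHM of the Hbeta (486.1 nm) line,
# using the n_e ~ (FWHM)^(3/2) inversion of the Griem scaling.
def electron_density(fwhm_nm, C=1.0e16):
    """Return n_e in cm^-3. C is an assumed calibration constant
    (order of magnitude only), not a tabulated value."""
    return C * fwhm_nm ** 1.5

n_e = electron_density(4.0)   # assumed FWHM of 4 nm
print(f"n_e = {n_e:.2e} cm^-3")
```

The 3/2 exponent is the only part of this sketch that carries over directly from the theory; the constant must be taken from published Stark-broadening tables for quantitative work.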
The sprays of drops were visualized, and the behavior of the suspension in the plasma jet was investigated by implementing the Particle Image Velocimetry (PIV) method. The effects of the suspension formulation (surface tension, liquid viscosity, and relative gas-to-liquid O. Marchand; L. Girardot; M. P. Planche; P. Bertrand; Y. Bailly; G. Bertrand 313 The plasma spray process for hydroxyapatite (Ca10(PO4)6(OH)2, HA), followed by laser treatment of the obtained coatings, was optimized by advanced statistical planning of experiments. A full factorial design of 2^4 experiments was used to find the effects of four principal parameters, i.e. electric power, plasma-forming gas composition, carrier gas flow rate and laser power density, on the microstructure of the hydroxyapatite (HA) coatings S. Dyshlovenko; C. Pierlot; L. Pawlowski; R. Tomaszek; P. Chagnon 2006-01-01 314 Computational modeling is used to systematically examine many of the sources of statistical variance in particle parameters during thermal plasma spraying. Using the computer program LAVA, a steady-state plasma jet typical of a commercial torch at normal operating conditions is first developed. Then, assuming a single particle composition (ZrO2) and injection location, real-world complexity (e.g., turbulent dispersion, particle size and R. L. Williamson; J. R. Fincke; C. H. Chang 2000-01-01 315 The suspension plasma spray process was used for deposition of a pseudo-eutectic composition of alumina-yttria-stabilized zirconia as a potential thermal barrier coating, using a Mettech Axial III torch. Process variables including feed and plasma parameters were altered to find their effects on the formation of phases in the composite coating. The in-flight particle velocity was found to be the crucial parameter on F. Tarasi; M. Medraj; A. Dolatabadi; J. Oberste-Berghaus; C.
Moreau 2009-01-01 317 A comprehensive model was developed to investigate suspension spraying for a radio frequency (RF) inductively coupled plasma torch. Firstly, the electromagnetic field is solved with the Maxwell equations and validated against analytical solutions. Secondly, the plasma field with different power inputs is simulated by solving the governing equations of the fluid flow coupled with the RF heating. Then, Lijuan Qian; Jianzhong Lin; Hongbin Xiong 2010-01-01 318 Ultra-fine hydroxyapatite (HA)/ZrO2 composite powders were synthesised by radio frequency (RF) induction suspension plasma spray. A wet suspension of HA/ZrO2 was employed as feedstock. The suspension was injected axially into the RF plasma to produce the nano-composite powders, which were subsequently accumulated in cyclone collectors. The particle size and morphology were resolved using a zeta-potential nanoparticle size analyser, Rajendra Kumar; P. Cheang; K. A. Khor 2003-01-01 319 Applications of phase-Doppler anemometry to the measurement of metal (nickel) particle size and velocity in the plasma spray process have been studied and analyzed with the aid of Mie scattering theory. The optimum optical settings used in two PDA systems were determined and tested experimentally. Measurements at cross-sectional planes 5 and 10 cm below the SG-100 (Miller) plasma gun were J. Ma; S. C. M. Yu; H. W. Ng; Y. C.
Lam 2004-01-01 320 The development of plasma-sprayed yttria-stabilized zirconia (YSZ) ceramic turbine blade tip seal components is discussed. The YSZ layers are quite thick (0.040 to 0.090 in.). The service potential of seal components with such thick ceramic layers is limited by cyclic thermal shock. The most usual failure mode is ceramic layer delamination at or very near the interface between the plasma R. C. Bill; J. Sovey; G. P. Allen 1981-01-01 321 Deposition of pure spinel-phase, photocatalytic zinc ferrite films on SS-304 substrates by solution precursor plasma spraying (SPPS) has been demonstrated for the first time. Deposition parameters such as precursor solution pH, concentration, film thickness, plasma power and gun-substrate distance were found to control the physico-chemical properties of the films with respect to their crystallinity, phase purity, and morphology. Alkaline precursor Rekha Dom; G. Sivakumar; Neha Y. Hebalkar; Shrikant V. Joshi; Pramod H. Borse 2012-01-01 322 Biaxial residual stress states of plasma-sprayed hydroxyapatite coatings (HACs) on titanium alloy substrates, as a function of plasma power, powder feed rate and coating thickness, were studied by the X-ray sin²ψ method. The Young's modulus of hydroxyapatite (HA), required for the stress analysis, was measured on the separated free coating by the three-point bending test method. It was found that the directions Y. C Yang; Edward Chang; B. H Hwang; S. Y Lee 2000-01-01 323 The reactive plasma spraying (RPS) of titanium powders in a nitrogen-containing plasma gas produces thick coatings characterised by microdispersed titanium nitride phases in a titanium matrix. In this paper, the wear resistance properties of Ti-TiN coatings deposited on carbon steel substrates by means of the RPS technique are studied. Wear tests were performed in a block-on-ring configuration under dry sliding conditions, F. Borgioli; E. Galvanetto; F. P. Galliano; T.
Bacci 2006-01-01 324 The wear resistance of plasma sprayed molybdenum blend coatings applicable to synchronizer rings or piston rings was investigated in this study. Four spray powders, one of which was pure molybdenum and the others blended powders of bronze and aluminum-silicon alloy powders mixed with molybdenum powders, were sprayed on a low-carbon steel substrate by atmospheric plasma spraying. Microstructural analysis of the coatings showed that the phases formed during spraying were relatively homogeneously distributed in the molybdenum matrix. The wear test results revealed that the wear rate of all the coatings increased with increasing wear load and that the blended coatings exhibited better wear resistance than the pure molybdenum coating, although the hardness was lower. In the pure molybdenum coatings, splats were readily fractured, or cracks were initiated between splats under high wear loads, thereby leading to the decrease in wear resistance. On the other hand, the molybdenum coating blended with bronze and aluminum-silicon alloy powders exhibited excellent wear resistance because hard phases such as CuAl2 and Cu9Al4 formed inside the coating. Ahn, Jeehoon; Hwang, Byoungchul; Lee, Sunghak 2005-06-01 325 Alloy 625 is a Ni-based superalloy which is often a good solution to surface engineering problems involving high temperature corrosion, wear, and thermal degradation. Coatings of alloy 625 can be efficiently deposited by thermal spray methods such as Air Plasma Spraying. As in all thermal spray processes, the final properties of the coatings are determined by the spraying parameters. In the present study, a D-optimal experimental design was used to characterize the effects of the APS process parameters on in-flight particle temperature and velocity, and on the oxide content and porosity in the coatings. These results were used to create an empirical model to predict the optimum deposition conditions. 
A second set of coatings was then deposited to test the model predictions. The optimum spraying conditions produced a coating with less than 4% oxide and less than 2.5% porosity. The process parameters which exhibited the most important direct effects on the oxide content in the coating were particle size, spray distance, and Ar flow rate. The parameters with the largest direct effects on porosity were spray distance, particle size, and current. The particle size, current, and Ar flow rate have an influence on particle velocity and temperature, but spray distance did not have a significant effect on either of those characteristics. Thus, knowledge of the in-flight particle characteristics alone was not sufficient to control the final microstructure. The oxidation index and the melting index incorporate all the parameters that were found to be significant in the statistical analyses and correlate well with the measured oxide content and porosity in the coatings. Azarmi, F.; Coyle, T. W.; Mostaghimi, J. 2008-03-01 326 In vacuum circuit breakers, the post-arc current caused by the remaining ions and electrons in the contact gap is an indication of the residual ionization and its decay. It coincides with the formation of a positive space-charge sheath in front of the new cathode, which grows toward the new anode. In a vacuum test chamber an arc (1.5-15 kA G. Düning; Manfred Lindmayer 1999-01-01 327 Yttrium oxide (Y2O3) coatings have been prepared by axial suspension plasma spraying with fine powders. The coatings are shown to have high hardness, low porosity, high erosion resistance against CF4-containing plasma, and retention of a smooth eroded surface.
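The D-optimal design study of alloy 625 above condenses its measurements into an empirical model that predicts oxide content and porosity from the spray parameters. A toy least-squares version of such a model is sketched below; the data are synthetic placeholders generated from an assumed linear relation, not the study's measurements, and the parameter ranges are assumptions.

```python
# Fit a linear empirical process model: oxide content as a function of
# spray distance, particle size and Ar flow rate. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 40
spray_dist = rng.uniform(80, 140, n)   # mm (assumed range)
part_size = rng.uniform(20, 60, n)     # um (assumed range)
ar_flow = rng.uniform(30, 60, n)       # slpm (assumed range)
# assumed "true" response plus measurement noise
oxide = (0.02 * spray_dist + 0.03 * part_size - 0.01 * ar_flow
         + rng.normal(0.0, 0.1, n))

# ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), spray_dist, part_size, ar_flow])
coef, *_ = np.linalg.lstsq(X, oxide, rcond=None)

# predict oxide content at a candidate operating point
x_new = np.array([1.0, 100.0, 40.0, 50.0])
pred = float(x_new @ coef)
print("coefficients:", coef)
print("predicted oxide %:", pred)
```

A D-optimal design differs from the random sampling above in how the runs are chosen (to minimize the variance of the fitted coefficients), but the fitting and prediction steps are the same.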
This suggests that axial suspension plasma spraying of Y2O3 is applicable to fabricating equipment for electronic devices, such as dry Junya Kitamura; Zhaolin Tang; Hiroaki Mizuno; Kazuto Sato; Alan Burgess 2011-01-01 328 The effects of collisions on the composition of the plasma passing through the first vacuum stage of an inductively coupled plasma mass spectrometer were monitored in three sets of experiments. Rates of collisional quenching of an excited state in the neutral calcium atom were estimated from changes in experimental fluorescence lifetimes. Intensities from collisionally-assisted fluorescence provided evidence of energy transfer Jeffrey H. Macedone; Paul B. Farnsworth 2006-01-01 329 ZrO2-CeO2-Y2O3 and ZrO2-Y2O3 thermal barrier coatings were prepared using the air plasma spray process. Phase transformation in the ceramic top coating, bond coat oxidation and thermal barrier properties were investigated to compare ZrO2-CeO2-Y2O3 with ZrO2-Y2O3 at 1300 °C under high-temperature thermal cycling. In the as-sprayed condition, both coatings showed a 7-11% porosity fraction and typical lamellar structures formed by continuous C. H. Lee; H. K. Kim; H. S. Choi; H. S. Ahn 2000-01-01 330 Three kinds of cast iron coatings were prepared by atmospheric plasma spraying. During spraying, the mild steel substrate temperature was controlled at an average of 50, 180, and 240 °C, respectively. Abrasive wear tests were conducted on the coatings under dry friction conditions. It is found that the abrasive wear resistance is enhanced as the substrate temperature increases. SEM observations show that the wear losses of the coatings during the wear tests mainly result from spalling of the splats. Furthermore, the improved wear resistance of the coatings is mainly due to the formation of oxides and the enhancement of the mechanical properties as the substrate temperature increases.
Xing, Ya-zhe; Wei, Qiu-lan; Jiang, Chao-ping; Hao, Jian-min 2012-08-01 331 PubMed The control of phase transformations in plasma-sprayed hydroxyapatite (HA) coatings is critical to the clinical performance of the material. This paper reports the use of high-temperature X-ray diffraction (HT-XRD) to study, in situ, the phase transformations occurring in plasma-sprayed HA coatings. The coatings were prepared using different spray power levels (net plasma power of 12 and 15 kW) and different starting powder size ranges (20-45 and 45-75 μm). The temperature range employed was room temperature (approximately 26 °C) to 900 °C at normal atmosphere and pressure. High-temperature differential scanning calorimetry (DSC) was also employed to investigate and determine the precise onset temperature of phase transformations during the recrystallization process. Results showed that the actual onset of thermal degradation of the coating into other metastable phases like TTCP, β-TCP and CaO occurred at 638 °C. The aforementioned phase transitions were independent of the selected spraying parameters. The degree of melting and thermal dissociation of HA determines the amount of calcium phosphate phases that are formed. A high power level of 15 kW produced a greater degree of melting, resulting in more CaO, TTCP and β-TCP being formed. PMID:11762329 Kweh, S W K; Khor, K A; Cheang, P 2002-01-01 332 Plasma spraying is known to be a promising process for the manufacture of Ti/SiC long-fiber composites. However, some improvements remain before this process can be applied industrially. These include oxygen contamination of the sprayed material through that of the titanium particles before and during spraying, and damage to fibers due to the high level of thermal stresses induced at E. Cochelin; F. Borit; G. Frot; M. Jeandin; L. Decker; D. Jeulin; B. Al Taweel; V. Michaud; P. Nol 1999-01-01 333 Summary form only given.
Due to the large volume fraction of internal interfaces, coatings structured at the nanoscale should exhibit better properties than conventional coatings structured at the microscale. However, when processing such feedstock by thermal plasmas, several questions arise: (i) how to feed the plasma jet with nanosized powders? (ii) how to preserve their nanostructure when melting them? (iii) J.-F. Coudert; V. Rat; H. Ageorges; A. Denoirjean; P. Fauchais; G. Montavon 2007-01-01

334 Low-pressure glow discharge plasmas are increasingly used as an effective method for the surface modification of polymers; they can also serve in the laboratory to simulate the low Earth orbit environment (LEO). Although vacuum ultraviolet radiation (VUV, λ < 200 nm) is an important component of the plasma environment, only a few studies have so far focused on its effects. The emission from low-pressure microwave A. C. Fozza; J. Roch; J. E. Klemberg-Sapieha; A. Kruse; A. Holländer; M. R. Wertheimer 1997-01-01

335 SciTech Connect A joint research and development effort has been initiated whose ultimate goal is the enhancement of the mean ion charge states in vacuum arc metal plasmas by a combination of a vacuum arc discharge and electron cyclotron resonance (ECR) heating. Metal plasma was generated by a special vacuum arc mini-gun and injected into a mirror magnetic trap. The plasma was pumped by high-frequency gyrotron-generated microwave radiation (frequency 37.5 GHz, maximum power 100 kW, pulse duration 1.5 ms). The use of powerful microwaves makes it possible to sustain the electron temperature needed for multiple ionization at high plasma density (more than 10^13 cm^-3). The multiple-ionization efficiency parameter Ne·τi, where Ne is the plasma density and τi is the ion lifetime, could in this case reach the rather high value of ≈10^9 cm^-3·s. In our situation τi = Ltrap/Vi, where Ltrap is the trap length and Vi is the plasma gun flow velocity.
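As a rough numerical illustration of the confinement parameter Ne·τi defined just above: all inputs below are assumed, order-of-magnitude values for a vacuum-arc mirror trap, not figures reported in the record.

```python
# Order-of-magnitude check of the multiple-ionization parameter Ne*tau_i,
# with tau_i = L_trap / V_i as defined in the abstract above.
# All inputs are illustrative assumptions, not measured values.

L_trap = 20.0   # trap length [cm] (assumed, upper end of typical mirror traps)
V_i = 2.0e6     # plasma gun flow velocity [cm/s] (assumed, ~20 km/s)
N_e = 1.0e14    # plasma density [cm^-3] (assumed, above the 10^13 quoted)

tau_i = L_trap / V_i        # ion lifetime [s]
confinement = N_e * tau_i   # confinement parameter [cm^-3 * s]

print(f"tau_i = {tau_i:.1e} s, Ne*tau_i = {confinement:.1e} cm^-3*s")
```

With these assumed numbers the product comes out at the 10^9 cm^-3·s scale quoted in the abstract; at the stated minimum density of 10^13 cm^-3 it would be an order of magnitude lower.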
The results have demonstrated substantial multiple ionization of metal ions (including metals with high melting temperatures). For a metal (lead, platinum) plasma, ECR heating shifted the average ion charge up to 5+. Further increase of the ion charge states will be attained by increasing the vacuum arc plasma density and optimizing the ECR heating conditions. Vodopyanov, A.V.; Golubev, S.V.; Mansfeld, D.A.; Razin, S.V. [Institute of Applied Physics (IAP RAS), Nizhny Novgorod (Russian Federation)]; Nikolaev, A.G.; Oks, E.M.; Savkin, K.P. [High Current Electronics Institute (HCEI RAS), Tomsk (Russian Federation)] 2005-03-15

336 Deposition of nanocrystalline TiO2 coatings at low temperature is becoming more attractive due to the possibility of continuous roll production of the coating for assembly lines of dye-sensitized solar cells (DSC) at low cost. In this study, a porous nano-TiO2 coating was deposited by vacuum cold spraying (VCS) at room temperature on a conducting glass substrate using commercial P25 nanocrystalline Sheng-Qiang Fan; Chang-Jiu Li; Guan-Jun Yang; Ling-Zi Zhang; Jin-Cheng Gao; Ying-Xin Xi 2007-01-01

337 The objective of this study was to determine processing-microstructure-properties relationships for small-particle plasma-sprayed (SPPS) ceramic coatings. Plasma-sprayed yttria partially-stabilized zirconia (YSZ) coatings, which are used to protect superalloys from heat and the environment in turbine engines, and plasma-sprayed alumina coatings, which are being investigated as a potential replacement for chrome in corrosion protection applications, were fabricated using SPPS technology, and their microstructure and pertinent properties were examined. The properties of plasma-sprayed YSZ and alumina coatings were investigated with designed experiments. The parameters varied include power, spray distance, total plasma gas flow, percent hydrogen in the plasma gas, injector angle, injector offset and carrier gas flow.
The variations in thermal diffusivity, thermal conductivity, elastic modulus, and hardness for the YSZ SPPS coatings were found to correlate with the variations in density, which were related to the processing variables. Surface roughness was found to be related to the amount of splashing and debris associated with the single splats. In four-point bending strain tolerance and fatigue tests, the SPPS YSZ coatings showed very little acoustic emission activity, except in the case of tensile fatigue of a coating without network cracks. Small-angle X-ray scattering experiments revealed that SPPS YSZ coatings have significantly less submicron intersplat porosity than conventional plasma-sprayed coatings, and that the pore and microcrack scattering area decreases with heat treatment due to the sintering of microcracks and small pores. The SPPS alumina coatings were optimized to produce a coating with excellent corrosion protection capabilities. The hardest SPPS alumina coatings did not provide the best corrosion protection, due to unique porosity defect structures associated with surface bumps in the coatings. The surface bumps were associated with conditions that produced splats with large amounts of splashing and debris. Significant improvements in properties such as surface roughness, thermal conductivity, hardness, strain tolerance, fatigue resistance, and corrosion protection were achieved for both the SPPS YSZ and SPPS alumina coatings compared to conventionally plasma-sprayed YSZ and alumina coatings. Mawdsley, Jennifer Renee

338 The deposition rate plays an important role in determining the thickness, stress state, and physical properties of plasma-sprayed coatings. In this article, the effect of the deposition rate on the stress evolution during deposition (the evolving stress) of yttria-stabilized zirconia coatings was systematically studied by varying the powder feed rate and the robot-scanning speed.
The evolving stress during deposition tends to increase with increasing deposition rate, and this tendency was less significant at longer spray distances. In some cases, the powder feed rate had a more significant influence on the evolving stress than the robot speed. This tendency can be associated with a deviation of the local deposition temperature, at the place where sprayed particles are deposited, from the average substrate temperature. At still higher deposition rates, the evolving stress was relieved by the introduction of macroscopic vertical cracks as well as horizontal branching cracks. Shinoda, Kentaro; Colmenares-Angulo, Jose; Valarezo, Alfredo; Sampath, Sanjay 2012-12-01

339 Titanium nitride is a bioceramic material successfully used for covering medical implants due to its high hardness and hence good wear resistance. Hydroxyapatite is a bioactive ceramic that contributes to the restoration of bone tissue, and together with titanium nitride it may yield a composite that is superior in terms of both mechanical properties and bone tissue interaction. The paper presents experimental results on obtaining composite layers of titanium nitride and hydroxyapatite by reactive plasma spraying in ambient atmosphere. X-ray diffraction analysis shows that for both powder mixtures used (10% HA + 90% Ti; 25% HA + 75% Ti), hydroxyapatite decomposition occurred; in variant 1 the decomposition is higher compared with the second variant. The microstructure of the deposited layers was investigated using scanning electron microscopy, the surfaces presenting a lamellar morphology without defects such as cracks or microcracks. Surface roughness varies as a function of spraying distance, with higher values at shorter spraying distances. Roşu, Radu Alexandru; Şerban, Viorel-Aurel; Bucur, Alexandra Ioana; Uţu, Dragoş 2012-02-01

340 SciTech Connect These proceedings compile papers about plasma.
Topics include: plasma arc spraying, vacuum melting, plasma melters for nuclear waste vitrification, thermal degradation of metal oxides in plasma, electrohydrodynamics, laser-induced fluorescence, measurements of temperature in plasma, and modeling and diagnostics in plasma processing. Apelian, D.; Szekely, J. 1987-01-01

341 SciTech Connect Metal ions were extracted from pulsed discharge plasmas operating in the transition region between vacuum spark (transient high voltage of kV) and vacuum arc (arc voltage ≈20 V). At a peak current of about 4 kA, and with a pulse duration of 8 μs, we observed mean ion charge states of about 6 for several cathode materials. In the case of platinum, the highest average charge state was 6.74, with ions of charge states as high as 10 present. For gold we found traces of charge state 11, with the highest average charge state being 7.25. At currents higher than 5 kA, non-metallic contamination started to dominate the ion beam, preventing further enhancement of the metal charge states. Yushkov, Georgy Yu.; Anders, A. 2008-06-19

342 A total of 904 weanling pigs were used to investigate the effects of 1) spray-dried porcine plasma (SDPP), 2) blends of SDPP and spray-dried blood meal (SDBM), and 3) added dietary methionine in a SDPP-based diet on starter pig performance. In Exp. 1, 534 weanling pigs (initially 6.4 kg and 21 L. J. Kats; J. L. Nelssen; M. D. Tokach; R. D. Goodband; J. A. Hansen; J. L. Laurin 2009-01-01

343 SciTech Connect We demonstrate, for the first time, the synthesis of nanostructured vanadium pentoxide (V2O5) films and coatings using the plasma spray technique. V2O5 has been used in several applications, such as catalysts and supercapacitors, and as an electrode material in lithium ion batteries. In the present studies, V2O5 films were synthesized using liquid precursors (vanadium oxychloride and ammonium metavanadate) and powder suspension.
In our approach, the precursors were atomized and injected radially into the plasma gun for deposition on the substrates. During the flight towards the substrate, the high temperature of the plasma plume pyrolyzes the precursor particles, resulting in the desired film coatings. These coatings were then characterized using X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM) and differential scanning calorimetry (DSC). Among the precursors, vanadium oxychloride gave the best results in terms of nanocrystalline and monophasic films. Spraying of the commercial powder suspension yielded a multi-phasic mixture in the films. Our approach enables the deposition of large-area coatings of high-quality nanocrystalline V2O5 films with controllable particle morphology. This has been optimized by means of control over precursor composition and plasma spray conditions. Initial electrochemical studies of V2O5 film electrodes show potential for energy storage applications. Nanda, Jagjit [ORNL] 2011-01-01

344 The plasma beam produced by a vacuum arc plasma source was injected into a cylindrical duct through an annular anode aperture. The plasma source consisted of a frustum cone-shaped Cu cathode, and either a 20-mm-thick annular Cu anode with aperture diameter D of 10, 17, 30, 40, or 50 mm, or a 35-mm-thick anode with D = 40 or 50 mm. Magnetic Vladimir N. Zhitomirsky; Raymond L. Boxman; Samuel Goldsmith 2005-01-01

345 Over the last few years, global economic growth has triggered a dramatic increase in the demand for resources, resulting in a steady rise in prices for energy and raw materials. In the gas turbine manufacturing sector, process optimization of cost-intensive production steps offers considerable potential for savings and forms the basis for securing future competitive advantages in the market. In this context, the atmospheric plasma spraying (APS) process for thermal barrier coatings (TBC) has been optimized.
A constraint on the optimization of the APS coating process is the use of the existing coating equipment. Furthermore, the current coating quality and characteristics must not change, so as to avoid new qualification and testing. Using experience in APS and empirically gained data, the process optimization plan included the variation of, e.g., the plasma gas composition and flow rate, the electrical power, the arrangement and angle of the powder injectors relative to the plasma jet, the grain size distribution of the spray powder, and the plasma torch movement procedures such as spray distance, offset and iteration. In particular, plasma properties (enthalpy, velocity and temperature), powder injection conditions (injection point, injection speed, grain size and distribution) and the coating lamination (coating pattern and spraying distance) are examined. The optimized process and the resulting coating were compared to the current situation using several diagnostic methods. The improved process significantly reduces costs while achieving comparable coating quality. Furthermore, a contribution was made towards a better understanding of the APS of ceramics and the definition of a better method for future process developments. Mihm, Sebastian; Duda, Thomas; Gruner, Heiko; Thomas, Georg; Dzur, Birger 2012-06-01

346 The insulating effect of thermal barrier coatings (TBCs) in gas turbine engines allows for increased operational efficiencies and longer service lifetimes. Consequently, improving TBCs can lead to enhanced gas turbine engine performance. This study was conducted to investigate whether yttria-stabilized zirconia (YSZ) coatings, the standard industrial choice for TBCs, produced from nano-sized powder could provide better thermal insulation than current commercial YSZ coatings generated using micron-sized powders. The coatings for this research were made via the recently developed suspension plasma spraying (SPS) process.
With SPS, powders are suspended in a solvent containing dispersing agents; the suspension is then injected directly into a plasma flow that evaporates the solvent and melts the powder while transporting it to the substrate. Although related to the industrial TBC production method of air plasma spraying (APS), SPS has two important differences: the ability to spray sub-micron diameter ceramic particles, and the ability to alloy the particles with chemicals dissolved in the solvent. These aspects of SPS were employed to generate a series of coatings from suspensions containing 100 nm diameter YSZ powder particles, some of which were alloyed with neodymium and ytterbium ions from the solvent. The SPS coatings contained columnar structures not observed in APS TBCs; a theory was therefore developed to explain the formation of these features. The thermal conductivity of the coatings was tested to evaluate the effects of these unique microstructures and of the alloying process. The results for samples in the as-sprayed and heat-treated conditions were compared to conventional YSZ TBCs. This comparison showed that, relative to APS YSZ coatings, the unalloyed SPS samples typically exhibited higher as-sprayed and lower heat-treated thermal conductivities. All thermal conductivity values for the alloyed samples were lower than those of conventional YSZ TBCs. The different thermal conduction behaviors were linked to the porosity and compositional properties of the coatings using immersion density, SEM, and synchrotron radiation characterization techniques. van Every, Kent J.

347 Microstructural and electrical characterizations of air plasma sprayed TiO2 coatings were carried out to investigate the details of deoxidation during the spray process and the changes following air annealing. The coatings were found to behave as an n-type semiconductor, indicating the presence of oxygen vacancies.
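The very large through-thickness vs in-plane resistivity anisotropy reported for these lamellar TiO2 coatings (entry 347) can be rationalized with a minimal series/parallel layered-resistor sketch. The volume fraction and resistivities below are purely hypothetical, chosen only to show how a thin boundary layer of very different resistivity produces an anisotropy of order 10^5:

```python
# Layered-coating resistivity sketch: splats of bulk resistivity rho_b
# separated by thin boundary layers of resistivity rho_gb.
# Through-thickness: layers in series -> resistivities add (fraction-weighted).
# In-plane: layers in parallel -> conductivities add (fraction-weighted).
# All numbers are illustrative assumptions, not values from the study.

f = 0.01        # volume fraction of splat-boundary layers (assumed)
rho_b = 1.0     # splat bulk resistivity [ohm*cm] (assumed)
rho_gb = 1.0e7  # oxidized, insulating boundary resistivity [ohm*cm] (assumed)

rho_tt = (1 - f) * rho_b + f * rho_gb          # series (through-thickness)
rho_ip = 1.0 / ((1 - f) / rho_b + f / rho_gb)  # parallel (in-plane)

print(f"anisotropy rho_tt/rho_ip = {rho_tt / rho_ip:.1e}")
```

The series path is dominated by the insulating boundaries while the parallel path simply bypasses them, so the ratio tracks f·(rho_gb/rho_b); with these assumed inputs it lands near 10^5.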
Direct-current resistivity measurements in plane (ρIP) and through thickness (ρTT) of the coatings as a function of annealing time and temperature showed remarkably large anisotropies (ρTT/ρIP) of up to 10^5. Impedance spectroscopy of the specimens, coupled with microstructural analysis, revealed that the origin of this anisotropy lies in the heterogeneous deoxidation and reoxidation behavior of the coatings. Due to rapid quenching, the high-temperature deoxidation state is preserved in the splat boundaries, making them more conductive than the bulk of the splat in the as-sprayed coating. Upon annealing in air, the splat boundaries are selectively oxidized, due to faster surface diffusion of oxygen, and become more insulating. This behavior, together with the layered morphology of plasma sprayed coatings, results in the anisotropy. Sharma, Atin; Gouldstone, Andrew; Sampath, Sanjay; Gambino, Richard J. 2006-12-01

348 DOEpatents A miniature (dime-size in cross-section) vapor vacuum arc plasma gun is described for use in an apparatus to produce thin films. Any conductive material can be layered as a film on virtually any substrate. Because the entire apparatus can easily be contained in a small vacuum chamber, multiple dissimilar layers can be applied without risk of additional contamination. The invention has special applications in semiconductor manufacturing. 8 figs. Brown, I.G.; MacGill, R.A.; Galvin, J.E.; Ogletree, D.F.; Salmeron, M. 1998-11-24

349 DOEpatents A miniature (dime-size in cross-section) vapor vacuum arc plasma gun is described for use in an apparatus to produce thin films. Any conductive material can be layered as a film on virtually any substrate. Because the entire apparatus can easily be contained in a small vacuum chamber, multiple dissimilar layers can be applied without risk of additional contamination. The invention has special applications in semiconductor manufacturing. Brown, Ian G. (Berkeley, CA); MacGill, Robert A.
(Richmond, CA); Galvin, James E. (Emmeryville, CA); Ogletree, David F. (El Cerrito, CA); Salmeron, Miquel (El Cerrito, CA) 1998-01-01

350 The effects of collisions on the composition of the plasma passing through the first vacuum stage of an inductively coupled plasma mass spectrometer were monitored in three sets of experiments. Rates of collisional quenching of an excited state in the neutral calcium atom were estimated from changes in experimental fluorescence lifetimes. Intensities from collisionally-assisted fluorescence provided evidence of energy transfer between excited states. Changes in analyte number density along the axis of the supersonic expansion in the first vacuum stage provided evidence that ion-electron recombination occurs to a significant extent during the expansion. Together, the experiments create a picture of the first vacuum stage in which collisions play an important role in shaping the composition of the plasma that is ultimately delivered to the mass analyzer. Macedone, Jeffrey H.; Farnsworth, Paul B. 2006-09-01

351 SciTech Connect Plasma-spray technology is under investigation as a method for producing high thermal conductivity beryllium coatings for use in magnetic fusion applications. Recent investigations have focused on optimizing the plasma-spray process for depositing beryllium coatings on damaged beryllium surfaces. Of particular interest has been optimizing the processing parameters to maximize the through-thickness thermal conductivity of the beryllium coatings. Experimental results will be reported on the use of secondary H2 gas additions to improve the melting of the beryllium powder and transferred-arc cleaning to improve the bonding between the beryllium coatings and the underlying surface.
Information will also be presented on thermal fatigue tests which were performed on beryllium-coated ISX-B beryllium limiter tiles using 10 s cycle times with 60 s cooldowns and an International Thermonuclear Experimental Reactor (ITER)-relevant divertor heat flux slightly in excess of 5 MW/m^2. Castro, R.G.; Stanek, P.W.; Elliott, K.E. [and others] 1995-09-01

352 SciTech Connect Techniques have been developed for measuring the tensile properties of plasma-sprayed coatings which are used in thermal barrier applications. The measurements have included the average Young's modulus, bond strength and elongation at failure. The oxidation behavior of the bond coat plays an important role in the integrity and adhesion of plasma-sprayed thermal barrier coatings. This work studies the nature of high-temperature degradation of the mechanical properties of the coating. Furnace tests have been carried out on U-700 alloy with bond coats of NiCrAlY or NiCrAlZr and an overlay of ZrO2-8% Y2O3. Weight gain measurements on the coatings have been examined in relation to the adhesion strength and failure observations. The results from an initial study are reported in this work. 13 references. Berndt, C.C.; Miller, R.A. 1984-07-01

353 PubMed Bioactive ceramic coatings on titanium (Ti) alloys play an important role in orthopedic applications. In this study, akermanite (Ca2MgSi2O7) bioactive coatings are prepared through a plasma spraying technique. The bonding strength between the coatings and Ti-6Al-4V substrates is around 38.7-42.2 MPa, which is higher than that of plasma sprayed hydroxyapatite (HA) coatings reported previously. The prepared akermanite coatings reveal a distinct apatite-mineralization ability in simulated body fluid. Furthermore, akermanite coatings support the attachment and proliferation of rabbit bone marrow mesenchymal stem cells (BMSCs). The proliferation rate of BMSCs on akermanite coatings is markedly higher than that on HA coatings.
PMID:23159958 Yi, Deliang; Wu, Chengtie; Ma, Xubing; Ji, Heng; Zheng, Xuebin; Chang, Jiang 2012-11-16

354 The effects of environmental humidity on the flow characteristics of a multicomponent (composite) plasma spray powder have been investigated. Angular and spherical BaF2-CaF2 powders were fabricated by comminution and by atomization, respectively. The fluorides were blended with nichrome, chromia, and silver powders to produce a composite plasma spray feedstock. The tap density, apparent density, and angle of repose were measured at 50% relative humidity (RH). The flow of the powder was studied from 2 to 100% RH. The results suggest that feedstock flow is only slightly degraded with increasing humidity below 66% RH and is more strongly affected above 66% RH. There was no flow above 90% RH, except with the narrower particle size distributions of the angular fluorides, which allowed flow up to 95% RH. These results offer guidance that enhances the commercial potential of this material system. Stanford, Malcolm K.; Dellacorte, Christopher 2006-03-01

355 PubMed Factors involved in the plasma-spray coating procedure, such as the starting powder compound (fluorapatite, hydroxylapatite, magnesium-whitlockite, or tetracalcium phosphate), powder particle size distribution (1-45 or 1-125 μm), powder port gun (port 2 or 6), and a post-heat treatment of 1 h at 600 °C, were examined for their effects on the crystallinity and solubility/stability of the coating. From solubility tests, X-ray diffractometry, and scanning microscopy studies, the solubility and crystallinity were found to depend on the Ca/P ratio, particle distribution, and post-heat treatment. The post-heat treatment influenced the degree of both crystallinity and solubility. The plasma-spray powder port factor was not significant for the hydroxylapatite coatings. Incubation of the coatings in buffer introduced precipitation at the surfaces of all non-heat-treated coatings except fluorapatite.
No precipitation could be observed in any of the heat-treated coatings. PMID:7983094 Klein, C P; Wolke, J G; de Blieck-Hogervorst, J M; de Groot, K 1994-08-01

356 Structure and phase analysis of high-carbon cast iron prepared by plasma spraying was performed using X-ray diffraction and Mössbauer spectroscopy. The sprayed powder particles were trapped in liquid nitrogen to fix their composition during the flight from the plasma torch. The results show decarburization and oxidation of the powder. The decrease in carbon content is more pronounced at the surfaces of the particles. The fresh powder exhibits an anomalous magnetic transition from the ferromagnetic to the paramagnetic state during sample cooling from room temperature down to 28 K. This effect was explained as a result of a high concentration of defects and strains. Long-term ageing at room temperature caused a transition to a more stable phase composition and a diminishing of the anomalous magnetic transition. Schneeweiss, O.; Voleník, K. 2009-02-01

357 SciTech Connect An experimental study of the plasma spraying of alumina-titania powder is presented in this paper. This powder system is being used to fabricate heater tubes that emulate nuclear fuel tubes for use in thermal-hydraulic testing. Coating experiments were conducted using a Taguchi fractional-factorial design parametric study. Operating parameters were varied around the typical spray parameters in a systematic design of experiments in order to display the range of plasma processing conditions and their effect on the resultant coating. The coatings were characterized by hardness and electrical tests, image analysis, and optical metallography. Coating qualities are discussed with respect to dielectric strength, hardness, porosity, surface roughness, deposition efficiency, and microstructure. The attributes of the coatings are correlated with the changes in operating parameters. Steeper, T.J. [Du Pont de Nemours (E.I.) and Co., Aiken, SC (United States).
Savannah River Lab.]; Varacalle, D.J. Jr.; Wilson, G.C. [EG and G Idaho, Inc., Idaho Falls, ID (United States)]; Riggs, W.L. II [Tubal Cain Co., Loveland, OH (United States)]; Rotolico, A.J.; Nerz, J.E. [Metco/Perkin-Elmer, Westbury, NY (United States)] 1992-08-01

358 SciTech Connect An experimental study of the plasma spraying of alumina-titania powder is presented in this paper. This powder system is being used to fabricate heater tubes that emulate nuclear fuel tubes for use in thermal-hydraulic testing. Coating experiments were conducted using a Taguchi fractional-factorial design parametric study. Operating parameters were varied around the typical spray parameters in a systematic design of experiments in order to display the range of plasma processing conditions and their effect on the resultant coating. The coatings were characterized by hardness and electrical tests, image analysis, and optical metallography. Coating qualities are discussed with respect to dielectric strength, hardness, porosity, surface roughness, deposition efficiency, and microstructure. The attributes of the coatings are correlated with the changes in operating parameters. Steeper, T.J. (Du Pont de Nemours (E.I.) and Co., Aiken, SC (United States). Savannah River Lab.); Varacalle, D.J. Jr.; Wilson, G.C. (EG and G Idaho, Inc., Idaho Falls, ID (United States)); Riggs, W.L. II (Tubal Cain Co., Loveland, OH (United States)); Rotolico, A.J.; Nerz, J.E. (Metco/Perkin-Elmer, Westbury, NY (United States)) 1992-01-01

359 SciTech Connect A high-current, multiply charged metal ion source using electron heating of vacuum arc plasma by high-power gyrotron radiation has been developed. The plasma is confined in a simple mirror trap with a peak magnetic field in the plug of up to 2.5 T, a mirror ratio of 3-5, and a length variable from 15 to 20 cm.
Plasma formed by a cathodic vacuum arc is injected into the trap either (i) axially, using a compact vacuum arc plasma gun located on axis outside the mirror trap region, or (ii) radially, using four plasma guns surrounding the trap at midplane. Microwave heating of the mirror-confined vacuum arc plasma is accomplished by gyrotron microwave radiation of frequency 75 GHz, power up to 200 kW, and pulse duration up to 150 μs, leading to additional stripping of metal ions by electron impact. Pulsed beams of platinum ions with charge states up to 10+, a mean charge state over 6+, and a total (all charge states) beam current of a few hundred milliamperes have been formed. Vodopyanov, A. V.; Golubev, S. V.; Khizhnyak, V. I.; Mansfeld, D. A.; Nikolaev, A. G.; Oks, E. M.; Savkin, K. P.; Vizir, A. V.; Yushkov, G. Yu. [Institute of Applied Physics, Russian Academy of Science, Nizhniy Novgorod 603950 (Russian Federation); High Current Electronics Institute, Siberian Division, Russian Academy Science, Tomsk 634055 (Russian Federation)] 2008-02-15

360 PubMed A high-current, multiply charged metal ion source using electron heating of vacuum arc plasma by high-power gyrotron radiation has been developed. The plasma is confined in a simple mirror trap with a peak magnetic field in the plug of up to 2.5 T, a mirror ratio of 3-5, and a length variable from 15 to 20 cm. Plasma formed by a cathodic vacuum arc is injected into the trap either (i) axially, using a compact vacuum arc plasma gun located on axis outside the mirror trap region, or (ii) radially, using four plasma guns surrounding the trap at midplane. Microwave heating of the mirror-confined vacuum arc plasma is accomplished by gyrotron microwave radiation of frequency 75 GHz, power up to 200 kW, and pulse duration up to 150 μs, leading to additional stripping of metal ions by electron impact.
Pulsed beams of platinum ions with charge states up to 10+, a mean charge state over 6+, and a total (all charge states) beam current of a few hundred milliamperes have been formed. PMID:18315170 Vodopyanov, A V; Golubev, S V; Khizhnyak, V I; Mansfeld, D A; Nikolaev, A G; Oks, E M; Savkin, K P; Vizir, A V; Yushkov, G Yu 2008-02-01

361 SciTech Connect A method is described for calculating the two-dimensional trajectory of a vertically or horizontally unstable axisymmetric tokamak plasma in the presence of a resistive vacuum vessel. The vessel is not assumed to have toroidal symmetry. The plasma is represented by a current-filament loop that is free to move vertically and to change its major radius. Its position is evolved in time self-consistently with the vacuum vessel eddy currents. The plasma current, internal inductance, and poloidal beta can be specified functions of time, so that eddy currents resulting from a disruption can be modeled. The vacuum vessel is represented by a set of current filaments whose positions and orientations are chosen to model the dominant eddy current paths. Although the specific application is to TFTR, the present model is of general applicability. 7 refs., 4 figs., 2 tabs. DeLucia, J. 1985-12-01

362 SciTech Connect Magnetic fusion energy (MFE) research requires ultrahigh-vacuum (UHV) conditions, primarily to reduce plasma contamination by impurities. For radio-frequency (RF)-heated plasmas, a great benefit may accrue from a non-conducting vacuum vessel, which allows external RF antennas and avoids the complications and cost of internal antennas and high-voltage, high-current feedthroughs.
In this paper we describe these and other criteria, e.g., safety, availability, design flexibility, structural integrity, access, outgassing, transparency, and fabrication techniques, that led to the selection and use of 25.4-cm OD, 1.6-cm wall polycarbonate pipe as the main vacuum vessel for an MFE research device whose plasmas are expected to reach keV energies for durations exceeding 0.1 s. B. Berlinger, A. Brooks, H. Feder, J. Gumbas, T. Franckowiak and S.A. Cohen 2012-09-27

363 SciTech Connect A typical blade material is made of nickel superalloy and can bear temperatures up to 950 °C, but the operating temperature of a gas turbine, at nearly 1500 °C, is above the melting point of the superalloy. This can lead to hot corrosion, high-temperature oxidation, creep, and thermal fatigue in the blade material. Although the turbine has an internal cooling system, the cooling is not adequate to reduce the temperature of the blade substrate. Therefore, to protect the blade material as well as increase the efficiency of the turbine, thermal barrier coatings (TBCs) must be used. A TBC 250 μm thick can reduce the temperature by up to 200 °C. The air plasma spray process (APS) and the high-enthalpy plasma spray process (100HE) were used for coating the blades with the TBCs. Because thermal conductivity increases with increasing temperature, it is desirable that these processes yield very low thermal conductivities at high temperatures in order not to damage the blade. An experiment was carried out using a Flash line 5000 apparatus to compare the thermal conductivities of both processes. The apparatus could also be used to determine the thermal diffusivity and specific heat of the TBCs. A temperature range of 75 to 2800 K was used in the experiments. It was found that although 100HE has high deposition efficiency, its thermal conductivity increases with increasing temperature, while APS yielded low thermal conductivities. Uppu, N.; Mensah, P.F.; Ofori, D.
2006-07-01 364 Two kinds of alloys NiCrBSi and NiCrBSi+WC were plasma sprayed onto aluminium alloy. The coatings were remelted successively with a CO2 laser. A comparison of the wear resistance properties of both laser-treated and plasma-sprayed samples with those of aluminium alloy was conducted. A scanning electron microscope (SEM) was used to analyse wear phenomena of samples. Experimental results showed that the G. Y. Liang; T. T. Wong; J. M. K. MacAlpine; J. Y. Su 2000-01-01 365 Male Holstein calves (n = 120) purchased from local dairy farms were fed one of three calf milk replacers for 42 d. Experimental milk replacers were formulated to contain whey protein concentrate (WPC) as the pri- maryproteinsourceorWPCplus5%spray-driedbovine plasma (SDBP) or spray-dried porcine plasma (SDPP). The SDPP was heated to remove heat-insoluble materi- als and provide products with similar IgG content. J. D. Quigley III; T. M. Wolfe 2003-01-01 366 For achieving an excellent bioactivity and mechanical properties, silica and titanium-reinforced hydroxyapatite composite coatings were deposited onto 304 SUS substrate by using a gas-tunnel plasma spraying system. A commercial HA powder of average size 1045?m was blended with fused amorphous silica and titanium powders with HA:SiO2:Ti wt.% ratios of 75:15:10 respectively. The mixed powders have been plasma sprayed at various M. F. Morks; N. F. Fahim; A. Kobayashi 2008-01-01 367 We present an effective method for the batch fabrication of miniaturized single-walled carbon nanotube (SWCNT) film electrodes using oxygen plasma etching. We adopted the approach of spray-coating for good adhesion of the SWCNT film onto a pre-patterned Pt support and used O2 plasma patterning of the coated films to realize efficient biointerfaces between SWCNT surfaces and biomolecules. By these approaches, the SWCNT film can be easily integrated into miniaturized electrode systems. 
To demonstrate the effectiveness of plasma-etched SWCNT film electrodes as biointerfaces, Legionella antibody was selected as analysis model owing to its considerable importance to electrochemical biosensors and was detected using plasma-etched SWCNT film electrodes and a 3,3',5,5'-tetramethyl-benzidine dihydrochloride/horseradish peroxidase (TMB/HRP) catalytic system. The response currents increased with increasing concentration of Legionella antibody. This result indicates that antibodies were effectively immobilized on plasma-etched and activated SWCNT surfaces. Kim, Joon Hyub; Lee, Jun-Yong; Min, Nam Ki 2012-08-01 368 Plasma sprayed coatings are built up by the accumulation of splats formed by the impacting, spreading and solidifying of molten\\u000a droplets on the substrate. A three-dimensional computational model including heat transfer and solidification is established\\u000a to simulate the formation process of a single splat using the computational fluid dynamics (CFD) software, FLUENT. The fluid\\u000a flow and energy equations are discretized Chang-wen Cui; Qiang Li 2011-01-01 369 A layer of bioceramic HA was coated on laser gas-nitrided pure titanium and grit-blasted pure titanium substrates using plasma-spraying technique, respectively. X-ray diffraction analysis showed that the microstructures of the coating were mainly composed of HA, amorphous calcium phosphate (ACP) and some minute phases of tricalcium phosphate (TCP, ?-TCP and ?-TCP), tetracalcium phosphate (TTCP) and calcium oxide (CaO). The experimental Sen Yang; H. C. Man; Wen Xing; Xuebin Zheng 2009-01-01 370 SciTech Connect Residual stress in a ZrO2-Y2O3 ceramic coating resulting from the plasma spraying operation is calculated. The calculations were done using the finite element method. Both thermal and mechanical analysis were performed. The resulting residual stress field was compared to the measurements obtained by Hendricks and McDonald. 
Reasonable agreement between the predicted and measured moment occurred. However, the resulting stress field is not in pure bending. 14 references. Mullen, R.L.; Hendricks, R.C.; Mcdonald, G. 1985-08-01 371 Plasma spray technology has the advantage of being able to process low-grade-ore minerals to produce value-added products, and also to deposit ceramics, metals and a combination of these, generating homogenous coatings with the desired microstructure on a range of substrate. The present work deals with the development of a ceramic composite coating on metal substrates using fly ash (the thermal S. C Mishra; K. C Rout; P. V. A Padmanabhan; B Mills 2000-01-01 372 A measurement system consisting of two high- speed two- color pyrometers was used to monitor the flattening degree and cooling\\u000a rate of zirconia particles on a smooth steel substrate at 75 or 150 C during plasma spray deposition. This instrument provided\\u000a data on the deformation behavior and freezing of a particle when it impinged on the surface, in connection with M. Vardelle; A. Vardelle; A. C. Leger; P. Fauchais; D. Gobin 1995-01-01 373 The properties of plasma sprayed Y-Ba-Cu-O coatings deposited on metallic substrates are studied. Stainless steel, nickel steels and pure nickel are used as substrate. Y-Ba-Cu-O deposited on stainless steel and nickel steel reacts with the substrate. This interaction can be suppressed by using an yttria-stabilized zirconia (YsZ) diffusion barrier. However, after heat treatment the Y-Ba-Cu-O layers on YsZ show cracks H. Hemmes; D. Jger; M. Smithers; Veer van der J; D. Stover; H. Rogalla 1993-01-01 374 This article reports the characterisation and optimisation of glass-ceramic coatings plasma-sprayed on traditional ceramic substrates, dealing with microstructures, chemical resistance, and superficial mechanical properties. 
A CaOZrO2SiO2 (CZS) frit, capable of complete crystallization after proper thermal treatment, has been employed: due to its refractory nature, its firing temperature in a traditional process would be unbearable for common substrates. The frit was Giovanni Bolelli; Valeria Cannillo; Luca Lusvarghi; Tiziano Manfredini; Cristina Siligardi; Cecilia Bartuli; Alessio Loreto; Teodoro Valente 2005-01-01 375 Anisotropic thermal conductivities of the plasma-sprayed ceramic coating are explicitly expressed in terms of the microstructural\\u000a parameters. The dominant features of the porous space are identified as strongly oblate (cracklike) pores that tend to be\\u000a either parallel or normal to the substrate. The scatter in pore orientations is shown to have a pronounced effect on the effective\\u000a conductivities. The established Igor Sevostianov; Mark Kachanov 2000-01-01 376 A measurement system consisting of two high-speed two-colour pyrometers is described; the system is suitable for monitoring the flattening and cooling of particles on a substrate during plasma spray deposition. The first double-wavelength optical fibre pyrometer is focused 2 mm before the substrate and the other is focused on the substrate surface. The present instrument provides data on the temperature, M. Vardelle; A. Vardelle; P. Fauchais; C. Moreau 1994-01-01 377 Results are reported of the laser surface sealing of plasma-sprayed layers of 8 wt% yttria partially stabilized zirconia (YPSZ) using pulsed treatments with powers of 0.4 and 1 kW. The structural features of the processed material were examined for a range of laser processing parameters including preheating, processing temperature and power density. By controlling the processing parameters it was possible K. Mohammed Jasim; R. D. Rawlings; D. R. F. 
West 1992-01-01 378 The effect of porosity on the thermal diffusivity and elastic modulus has been studied on artificially aged, free-standing thermal barrier coatings (TBCs) produced by air plasma spray (APS). The activation energy of the sintering phenomenon was estimated from the variation in diffusivity with time and temperature. X-ray diffraction was used to evaluate the phase stability of 7wt.% yttria partially stabilized F. Cernuschi; P. G. Bison; S. Marinetti; P. Scardi 2008-01-01 379 SciTech Connect A turbine component (10), such as a turbine blade, is provided which is made of a metal alloy (22) and a base, planar-grained thermal barrier layer (28) applied by air plasma spraying on the alloy surface, where a heat resistant ceramic oxide overlay material (32') covers the bottom thermal barrier coating (28), and the overlay material is the reaction product of the precursor ceramic oxide overlay material (32) and the base thermal barrier coating material (28). Subramanian, Ramesh (Oviedo, FL) 2001-01-01 380 Yttria-stabilized ZrO2 powders with initial sizes of 522 ?m were chsosen as feedstock for hybrid thermal plasma deposition. At 100kW RF input power, the microstructures of the deposited coatings varied from mostly sprayed splats to physical-vapor-deposited nanostructures when the powder feeding rate was reduced from 4 to 1 g\\/min. At a powder feeding rate of 2 g\\/min, a peculiar layered H. Huang; K. Eguchi; T. Yoshida 2003-01-01 381 Four commercial WC-Co powders prepared from different manufacturing techniques and having variations in binder metal content (11-20% wt), and WC grain size (1-15 ..mu.. m). Using identical process parameters, these powders were plasma sprayed, and the resulting coatings were characterized for changes in chemistry, phase content, and microstructural parameters. 
Finally, the coatings were evaluated for resistance to abrasion, sliding wear, Rangaswamy 1987-01-01 382 Thermal spraying with liquid-based feedstocks demonstrated a potential to produce coatings with new and enhanced characteristics. A liquid delivery system prototype was developed and tested in this study. The feeder is based on the 5MPE platform and uses a pressure setup to optimally inject and atomize liquid feedstock into a plasma plume. A novel self-cleaning apparatus is incorporated into the Elliot M. Cotler; Dianying Chen; Ronald J. Molz 2011-01-01 383 We demonstrate for the first time, the synthesis of nanostructured vanadium pentoxide (V2O5) films and coatings using plasma spray technique. V2O5 has been used in several applications such as catalysts, super-capacitors and also as an electrode material in lithium ion batteries. In the present studies, V2O5 films were synthesized using liquid precursors (vanadium oxychloride and ammonium metavanadate) and powder suspension. Nanda; Jagjit 2011-01-01 384 Thermal spraying with liquid-based feedstocks demonstrated a potential to produce coatings with new and enhanced characteristics.\\u000a A liquid delivery system prototype was developed and tested in this study. The feeder is based on the 5MPE platform and uses\\u000a a pressure setup to optimally inject and atomize liquid feedstock into a plasma plume. A novel self-cleaning apparatus is\\u000a incorporated into the Elliot M. Cotler; Dianying Chen; Ronald J. Molz 2011-01-01 385 The theory of functionally graded material (FGM) was applied in the fabrication process of PEN (Positive-Electrolyte-Negative), the core component of solid oxide fuel cell (SOFC). To enhance its electrochemical performance, the functionally graded PEN of planar SOFC was prepared by atmospheric plasma spray (APS). The cross-sectional SEM micrograph and element energy spectrum of the resultant PEN were analyzed. 
Its interface Wei-sheng XIA; Yun-zhen YANG; Hai-ou ZHANG; Gui-lan WANG 2009-01-01 386 The paper aims at reviewing of the recent studies related to the development of suspension plasma sprayed TiO2 and Ca5(PO4)3OH (hydroxyapatite, HA) coatings as well as their multilayer composites obtained onto stainless steel, titanium and aluminum\\u000a substrates. The total thickness of the coatings was in the range 10 to 150?m. The suspensions on the base of distilled water,\\u000a ethanol and R. Jaworski; L. Pawlowski; C. Pierlot; F. Roudet; S. Kozerski; F. Petit 2010-01-01 387 To achieve solid oxide fuel cells (SOFC) at reduced costs, the atmospheric plasma spray (APS) process could be an attractive\\u000a technique. However, to make dense and thin layers as needed for electrolytes, a suspension is preferably implemented as a\\u000a feedstock material instead of a conventional powder. Suspensions of yttria-stabilized zirconia particles in methanol have\\u000a been prepared with various solid loadings R. Rampon; F.-L. Toma; G. Bertrand; C. Coddet 2006-01-01 388 Highly porous TiO2 coatings have been produced by suspension plasma spraying on ITO coated glass substrates. The deposition process could be optimized so that fine\\/nano grained highly porous coatings were obtained. Mean crystallite sizes well below 50nm could be achieved in the coatings for the anatase phase.Special emphasis was on the establishment of a high volume fraction of the desired Robert Vaen; Zeng Yi; Holger Kaner; Detlev Stver 2009-01-01 389 In this work, plasma sprayed nano-titania\\/silver coatings were deposited on titanium substrates to obtain an implant material having excellent antibacterial property. The surface characteristics of nano-titania\\/silver coatings were investigated by scanning electron microscopy, energy dispersive spectrometer, optical emission spectrometry and X-ray diffraction. The bioactivity of nano-titania\\/silver coatings was examined by simulated body fluid soaking test. 
The antibacterial activity against Escherichia Baoe Li; Xuanyong Liu; Fanhao Meng; Jiang Chang; Chuanxian Ding 2009-01-01 390 DOEpatents A turbine component (10), such as a turbine blade, is provided which is made of a metal alloy (22) and a base, planar-grained thermal barrier layer (28) applied by air plasma spraying on the alloy surface, where a heat resistant ceramic oxide overlay material (32') covers the bottom thermal barrier coating (28), and the overlay material is the reaction product of the precursor ceramic oxide overlay material (32) and the base thermal barrier coating material (28). Subramanian, Ramesh (Oviedo, FL) 2001-01-01 391 The microstructure of thermal barrier coatings (TBCs) of 7wt.% Y2O3 stabilized ZrO2 (7YSZ) deposited using the solution-precursor plasma spray (SPPS) method has: (i) controlled porosity, (ii) vertical cracks, and (iii) lack of large-scale splat boundaries. An unusual feature of such SPPS TBCs is that they are well-adherent in ultra-thick forms (~4mm thickness), where most other types of ultra-thick ceramic coatings 2008-01-01 392 The electron number density has been measured in a plasma spray torch using Stark broadening of H{beta} and Ar-I (430 nm) line. A small amount of hydrogen (1% by volume in argon gas) was introduced to study the H{beta} line profile. Axial variation of electron number density has been determined up to a distance of 20 mm from the nozzle N. K. Joshi; S. N. Sahasrabudhe; K. P. Sreekumar; N. Venkatramani 2003-01-01 393 Oxidation reactions during plasma spraying of metallic powders give rise to oxide crusts on powder particle surfaces. The\\u000a first oxidation stage occurs in flight of molten particles. It is usually followed by the second stage after hitting a substrate.\\u000a To investigate the oxidation products immediately after the first stage, abrupt stopping of in-flight oxidation is possible\\u000a by trapping and quenching O. Schneeweiss; J. Dubsk; K. Volenk; J. Had; J. Leitner; M. 
Seberni 2001-01-01 394 The arc plasma fabrication of ferrite phase shifters has been extended from C-band to S-band. Two S-band lithium ferrite compositions, Ampex 3-601 and Ampex 3-750, were sprayed around a dielectric for phase shifter applications in the 3 to 4 GHz range. The larger dielectric and greater ferrite wall thickness of the S-band phase shifter necessitated a moderation of the arc 1976-01-01 395 By addition of a Ni-Cr undercoating to a steel substrate, a ZrO2; coating with high thermal shock endurance was produced. This coating resists to an initial quenching temperature gradient of 1000K. The plasma spray coating material is 7wt%-CaO stabilized ZrO2 powder. The chemical composition and the thermal expansion coefficient of the base steel, and also the Ni-Cr undercoating were evaluated T. Kurushivna; K. Ishizaki 1993-01-01 396 The surface microtexture of splats deposited by atmospheric dc plasma spraying was studied by the electron-backscattered diffraction method. The examined splats were yttria-stabilized zirconia (YSZ) and nickel deposited onto a mirror-polished stainless steel substrate preheated to 500K. The YSZ splats exhibited a disk-shaped morphology and had a peculiar <111> fiber texture in their peripheral region; the fiber axes were perpendicular Kentaro Shinoda; Masahiko Demura; Hideyuki Murakami; Seiji Kuroda; Sanjay Sampath 2010-01-01 397 Multilayer coatings were prepared using small-particle plasma spray to investigate the effect of interfaces on thermal conductivity and phase stability. Monolithic and multilayer alumina and yttria partially-stabilized zirconia coatings, with 0, 3, 20, and 40 interfaces in 200380 m thick coatings were studied. Thermal conductivity was determined for the temperature range 25 C to 1200 C using the laser flash Y. Jennifer Su; Hsin Wang; Wally D. Porter; A. R. De Arellano Lopez; K. T. 
Faber 2001-01-01 398 To improve adhesiveness of hydroxapatite (HA) coatings to the titanium substrates, a HA\\/Ti composite was formed on substrates using a radio-frequency (RF) plasma spraying process. This process would reduce the residual stress caused by the large difference between the linear thermal expansion coefficient of the substrate and of HA. The HA\\/Ti composites were prepared by controlled feeding ratio of HA M. Inagaki; Y Yokogawa; T Kameyama 2001-01-01 399 This paper presents an in situ process to form intermetallic matrix composite coatings by reactive radio frequency (RF) plasma spraying with premixed elemental\\u000a powder. The typical splat morphology of impinged titanium droplets on a stainless steel substrate is a disk with an outer\\u000a peripheral fringe. If the supplied titanium powder size becomes finer or the nitrogen partial pressure in the Yoshiki Tsunekawa; Makoto Hiromura; Masahiro Okumiya 2000-01-01 400 SciTech Connect A study was carried out on metal-ceramic bonding produced by the technique of wire-arc-plasma spraying of Ni on Al{sub 2}O{sub 3} substrate. The Ni layer and the Ni/Al{sub 2}O{sub 3} interface were characterised using optical and electro-optic techniques. The plasma-deposited Ni layer shows a uniform lamellar microstructure throughout the cross-section. The metal-ceramic interface was found to be well bonded with no pores, flaws or cracks in the as-sprayed condition. The optical metallography and concentration profiles established with the help of an electron probe microanalyser confirmed the absence of any intermediate phase at the interface. An annealing treatment at 1273 K for 24 h on the plasma-coated samples did not result in formation of any intermetallic compound or spinel at the Ni/Al{sub 2}O{sub 3} interface. This indicates that the oxygen picked up by Ni during the spraying operation is less than the threshold value required to form the spinel NiAl{sub 2}O{sub 4}. Laik, A. 
[Materials Science Division, Bhabha Atomic Research Centre, Mumbai 400 085 (India); Chakravarthy, D.P. [Laser and Plasma Technology Division, Bhabha Atomic Research Centre, Mumbai 400 085 (India); Kale, G.B. [Materials Science Division, Bhabha Atomic Research Centre, Mumbai 400 085 (India)]. E-mail: gbkale@apsara.barc.ernet.in 2005-08-15 401 In this study, the effects of the impact parameters, namely, the diameter d0, velocity V0, and temperature T0, of an impacting droplet of yttria-stabilized zirconia (YSZ) on splat morphology have been investigated systematically under plasma spraying conditions. In particular, fully molten droplets of 30-90 ?m in d0 that impact on a preheated quartz glass substrate at V0 of 10-70 m/s have been examined via hybrid plasma spraying. The degree of flattening of final splat morphology, ?, was found to be predicted by the relationship ?=0.43Re1/3, where Re is the Reynolds number. The dimensionless spreading time of droplets, ts*=tsV0/d0, was distributed around 2.7, where ts is the spreading time of the droplet. The ideal maximum spread factor derived from the splat height was approximately proportional to Re1/4. The latter two findings suggest that the analytical model developed by Pasandideh-Fard et al. [Phys. Fluids 8, 650 (1996)] can be applied to the droplet impact in plasma spraying especially for the case of YSZ. In addition, the thermal contact resistance of disk shaped splats decreased with the increase of V0 within the range of 10-5-10-6 m2 K/W. Shinoda, Kentaro; Koseki, Toshihiko; Yoshida, Toyonobu 2006-10-01 402 In this work, the microhardness of plasma sprayed Al2O3 coatings was evaluated using the Vickers indentation technique, and the effects of measurement direction, location and applied loads were investigated. The measured data sets were then statistically analysed employing the Weibull distribution to evaluate their variability within the coatings. 
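Entry 401 above quotes the empirical flattening correlation ξ = 0.43·Re^(1/3) for molten YSZ droplets. A minimal sketch of how that relation is applied; the melt property values are illustrative assumptions, not figures from the paper:

```python
# Entry 401 (Shinoda et al.) quotes xi = 0.43 * Re**(1/3) for the
# flattening degree of molten YSZ droplets. The property values below
# are illustrative assumptions, not data from the paper.

def reynolds(density, velocity, diameter, viscosity):
    """Droplet Reynolds number Re = rho * V0 * d0 / mu."""
    return density * velocity * diameter / viscosity

def flattening_degree(re):
    """Empirical flattening degree xi = 0.43 * Re**(1/3)."""
    return 0.43 * re ** (1.0 / 3.0)

rho = 5900.0   # kg/m^3, assumed molten-YSZ density
mu = 0.03      # Pa*s, assumed melt viscosity
d0 = 60e-6     # m, droplet diameter (mid-range of the 30-90 um study)
v0 = 40.0      # m/s, impact velocity (mid-range of 10-70 m/s)

re = reynolds(rho, v0, d0, mu)
xi = flattening_degree(re)
print(f"Re = {re:.0f}, flattening degree xi = {xi:.2f}")
```

For these assumed inputs Re is a few hundred and ξ comes out a little above 3, i.e., the splat spreads to roughly three droplet diameters.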
It was found that the Vickers hardness (VHN) increases with decreasing applied indenter load, which can be explained in terms of Kick's law and a Meyer index k of 1.93, as well as the microstructural characteristics of plasma sprayed coatings and the elastic recovery taking place during indentation. In addition, VHN measured on the cross-section of the coatings was clearly higher than that on the top surface. The obtained Weibull modulus and coefficient of variation indicate that VHN was less variable when measured at a higher applied load and on the cross-section of the coating. A clear dependence of VHN on the indentation location in the through-thickness direction was also observed. These phenomena are related to the special microstructure and highly anisotropic behaviour of plasma sprayed coatings.
Yin, Zhijian; Tao, Shunyan; Zhou, Xiaming; Ding, Chuanxian (2007-11-01)

403. Water-atomized cast iron powder of Fe-2.17 at.% C-9.93 at.% Si-3.75 at.% Al was deposited onto an aluminum alloy substrate by atmospheric direct-current plasma spraying to improve its tribological properties. Pre-annealing of the cast iron powder allows the precipitation of considerable amounts of graphite in the powder; however, a significant reduction in graphitized carbon in the cast iron coatings is inevitable after plasma spraying in an air atmosphere, due to in-flight burning and dissolution into the molten iron droplets. Hexagonal boron nitride (h-BN) powders, which have excellent lubricating properties like graphite, were incorporated into the cast iron powder as a solid lubricant by a sintering process (1300 °C) to obtain protective coatings with a low friction coefficient. The performance of each coating was evaluated using a ring-on-disk wear tester under paraffin-based oil in an air atmosphere.
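Entry 402 attributes the load dependence of Vickers hardness to Kick's (Meyer's) law with a Meyer index k = 1.93. A short sketch of that indentation-size effect; the Meyer prefactor A is an assumed, illustrative value, not one reported in the paper:

```python
# Entry 402 explains the indentation-size effect via Kick's/Meyer's law
# with Meyer index k = 1.93. Since the Vickers number scales as P/d^2,
# k < 2 makes the measured hardness fall as the load rises.
# The Meyer prefactor A below is an assumed, illustrative value.

def diagonal_from_load(p_kgf, a=50.0, k=1.93):
    """Invert Meyer's law P = A * d**k for the indent diagonal d (mm)."""
    return (p_kgf / a) ** (1.0 / k)

def vickers_hardness(p_kgf, d_mm):
    """Standard Vickers relation, HV in kgf/mm^2."""
    return 1.8544 * p_kgf / d_mm ** 2

for p in (0.1, 0.3, 1.0):  # applied loads in kgf
    d = diagonal_from_load(p)
    print(f"P = {p:.1f} kgf: d = {d * 1000:.0f} um, HV = {vickers_hardness(p, d):.0f}")
```

With k = 1.93 the hardness varies as P^(1 - 2/k), so it drops by several percent per decade of load, matching the trend the entry reports.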
A conventional cast iron liner, which had a flaky graphite embedded in the pearlitic matrix, was also tested under similar conditions for comparison. Sections of worn surfaces and debris were characterized, and the wear behavior of plasma-sprayed coatings was discussed. Tsunekawa, Y.; Ozdemir, I.; Okumiya, M. 2006-06-01 404 Reactive plasma sprayed coatings were prepared on carbon steel substrates with Ti and B4C as starting materials. Two kinds of gases (Ar and N2) were used as feeding gases for powders, respectively. 10 wt.% Cr was added in the powders as binder to increase the bond strength of the coating. The phases, microstructure, micro-hardness and corrosion polarization behavior in 3.5 wt.% NaCl solution of the two coatings were studied. The results show that TiN-TiB2 coatings were prepared under both conditions. The two coatings have typically laminated structure. However, the coating prepared with Ar as feeding gas has higher porosity and some unmelted Cr particles. It also contains certain content of titanium oxides. The microhardness of coating prepared with Ar as feeding gas is lower due to its higher porosity, unmelted Cr particles and some amounts of TiO2. The corrosion resistance of TiN-TiB2 coating prepared with Ar as feeding gas in 3.5 wt.% NaCl solution is worse than that of the coating prepared with N2 as feeding gas. Yet the corrosion resistance of reactive plasma sprayed TiN-TiB2 coating is improved greatly compared with that of carbon steel. The thermodynamic analysis of reactive plasma spraying process is also discussed. Ma, Jing; Hu, Jianwen; Yan, Dongqing; Mao, Zhengping 2012-06-01 405 PubMed The influence of bond-coating on the mechanical properties of plasma-spray coatings of hydroxyatite on Ti was investigated. Plasma-spray powder was produced from human teeth enamel and dentine. 
Before processing the main apatite coating, a very thin layer of Al2O3/TiO2 was applied on super clean and roughened, by Al2O3 blasting, Ti surface as bond-coating. The experimental results showed that bond-coating caused significant increase of the mechanical properties of the coating layer: In the case of the enamel powder from 6.66 MPa of the simple coating to 9.71 MPa for the bond-coating and in the case of the dentine powder from 6.27 MPa to 7.84 MPa, respectively. Both tooth derived powders feature high thermal stability likely due to their relatively high content of fluorine. Therefore, F-rich apatites, such those investigated in this study, emerge themselves as superior candidate materials for calcium phosphate coatings of producing medical devices. The methods of apatite powder production and shaping optimization of powder particles are both key factors of a successful coating. The methods used in this study can be adopted as handy, inexpensive and reliable ways to produce high quality of powders for plasma spray purposes. PMID:17122932 Oktar, F N; Yetmez, M; Agathopoulos, S; Lopez Goerne, T M; Goller, G; Peker, I; Ipeker, I; Ferreira, J M F 2006-11-22 406 To investigate the original reasons of the remarkable difference of the photocatalytic activity of plasma sprayed TiO2 and TiO2Fe3O4 coatings, the photoelectrochemical characteristics of plasma sprayed TiO2 and TiO2Fe3O4 electrodes were examined. The photo-response of the sprayed TiO2 electrode was comparable to that of single crystal TiO2, but the breakdown voltage was approximately 0.5 V (vs. SCE). The short-circuit current Fuxing Ye; Akira Ohmori; Changjiu Li 2004-01-01 407 The Madison Plasma Dynamo Experiment (MPDX) facility will create large, un-magnetized, fast flowing, hot plasma for investigating magnetic field self-generation and flow driven MHD instabilities. The scale of the experiment is important to do this science, and so bigger is better. 
The core infrastructure of MPDX is the matching pair of 3 meter diameter hemispheres. For MPDX the cost and complexity of the vacuum vessel built by traditional means challenged the budget. The path to making this high-vacuum vessel led the research team and collaborators to push the limit cast Al. The challenges and solutions of making the MPDX vessel will be discussed and illustrated today. Clark, Mike; Collins, Cami; Katz, Noam; Weisberg, Dave; Wallace, John; Forest, Cary 2012-10-01 408 The redshifts of emissions from pulsars and magnetars consist of two components: gravitational and non-gravitational redshifts. The latter results from the electromagnetic and kinetic effects of relativistic plasmas, characterized by refractive indices and streaming velocities of the media, respectively. The vacuum polarization effect induced by strong magnetic fields can modify the refractive indices of the media, and thus leads to a modification to the redshifts. The Gordon effective metric is introduced to study the redshifts of emissions. The modification of the gravitational redshift, caused by the effects of relativistic plasmas and vacuum polarization, is obtained. Luo, Yuee; Bu, Zhigang; Chen, Wenbo; Li, Hehe; Ji, Peiyong 2013-11-01 409 This paper presents an in situ process to form intermetallic matrix composite coatings by reactive radio frequency (RF) plasma spraying with premixed elemental powder. The typical splat morphology of impinged titanium droplets on a stainless steel substrate is a disk with an outer peripheral fringe. If the supplied titanium powder size becomes finer or the nitrogen partial pressure in the plasma gas increases, splats containing prominent asperities with a smaller flattening ratio appear along with the plain disk type. An increase in nitrogen content is detected in all the splats sprayed with finer titanium powder and/or higher nitrogen partial pressure. 
The splats containing prominent asperities, which correspond to TiN, are twice as high in nitrogen content than the plain disk type. Aluminum splats are also classified into two categories: a disk type with an irregular outer periphery and a seminodular type. Oxygen exists on the splat surfaces, on which there are nitrogen concentrated areas corresponding to AlN. Consequently, the nitride formation proceeds on titanium and aluminum droplets during the flight as well as on the substrate. If the substrate temperature is higher than 873 K just before spraying with premixed titanium and aluminum powder, the formation of TiAl and Ti2AlN proceeds on the substrate because of negligible mutual collisions during the flight. Titanium aluminide matrix in situ composites sprayed with premixed titanium and aluminum powder contain more nitrides than those sprayed with TiAl compound powder, because of the higher nitrogen absorption in titanium and aluminum droplets that results in an exothermic reaction. Tsunekawa, Yoshiki; Hiromura, Makoto; Okumiya, Masahiro 2000-03-01 410 In recent years, thermal sprayed protective coatings have gained widespread acceptance for a variety of industrial applications. A vast majority of these applications involve the use of thermal sprayed coatings to combat wear. While plasma spraying is the most versatile variant of all the thermal spray processes, the detonation gun (D-gun) coatings have been a novelty until recently because of their proprietary nature. The present study is aimed at comparing the tribological behavior of coatings deposited using the two above techniques by focusing on some popular coating materials that are widely adopted for wear resistant applications, namely, WC-12% Co, A12O3, and Cr3C2-MCr. 
To enable a comprehensive comparison of the above indicated thermal spray techniques as well as coating materials, the deposited coatings were extensively characterized employing microstructural evaluation, microhardness measurements, and XRD analysis for phase constitution. The behavior of these coatings under different wear modes was also evaluated by determining their tribological performance when subjected to solid particle erosion tests, rubber wheel sand abrasion tests, and pin-on-disk sliding wear tests. The results from the above tests are discussed here. It is evident that the D-gun sprayed coatings consistently exhibit denser microstructures and higher hardness values than their plasma sprayed counterparts. The D-gun coatings are also found to unfailingly exhibit superior tribological performance superior to the corresponding plasma sprayed coatings in all wear tests. Among all the coating materials studied, D-gun sprayed WC-12%Co, in general, yields the best performance under different modes of wear, whereas plasma sprayed Al2O3 shows least wear resistance to every wear mode. Sundararajan, G.; Prasad, K. U. M.; Rao, D. S.; Joshi, S. V. 1998-06-01 411 SciTech Connect Metallic coatings can be fabricated using the intense plasma generated by the metal vapor vacuum arc. We have made and tested an embodiment of vacuum arc plasma source that operates in a pulsed mode, thereby acquiring precise control over the plasma flux and so also over the deposition rate, and that is in the form of a miniature plasma gun, thereby allowing deposition of metallic thin films to be carried out in confined spaces and also allowing a number of such guns to be clustered together. The plasma is created at the cathode spots on the metallic cathode surface, and is highly ionized and of directed energy a few tens of electron volts. Adhesion of the film to the substrate is thus good. 
Virtually all of the solid metals of the Periodic Table can be used, including highly refractory metals like tantalum and tungsten. Films, including multilayer thin films, can be fabricated with thicknesses from Angstroms to microns. We have carried out preliminary experiments using several different versions of miniature, pulsed, metal vapor vacuum arc plasma guns to fabricate metallic thin films and multilayers. Here we describe the plasma guns and their operation in this application, and present examples of some of the thin film structures we have fabricated, including yttrium and platinum films of thicknesses from a few hundred Angstroms up to 1 micron and an yttrium-cobalt multilayer structure of layer thickness about 100 Angstroms. 33 refs., 5 figs. Godechot, X.; Salmeron, M.B.; Ogletree, D.F.; Galvin, J.E.; MacGill, R.A.; Dickinson, M.R.; Yu, K.M.; Brown, I.G. 1990-04-01 412 The suspension plasma spray process was used for deposition of a pseudo-eutectic composition of alumina-yttria-stabilized zirconia as a potential thermal barrier coating using a Mettech Axial III torch. Process variables including feed and plasma parameters were altered to find their effects on the formation of phases in the composite coating. The in-flight particle velocity was found to be the crucial parameter for phase formation in the resulting coatings. Low particle velocities, below 650 m/s, result in the formation of stable phases, i.e., α-alumina and tetragonal zirconia. In contrast, high particle velocities, more than 750 m/s, favor the metastable γ-alumina and cubic zirconia phases as dominant structures in as-deposited coatings. Accordingly, the plasma auxiliary gas and plasma power, as influential parameters on the particle velocity, were found to be reliable tools in controlling the resulting coating structure and thus the consequent properties. The noncrystalline portion of the coatings was also studied.
It was revealed that upon heating, the amorphous phase prefers to crystallize into pre-existing crystalline phases in the as-deposited coating. Thus, the ultimate crystalline structure can be designed using the parameters that control the particle velocity during plasma spray coating. Tarasi, F.; Medraj, M.; Dolatabadi, A.; Oberste-Berghaus, J.; Moreau, C. 2010-06-01 413 Powders of Mo52Si38B10 were plasma sprayed under inert conditions onto stainless steel substrates to determine if high density free standing forms could be synthesized by this process. Thermal spray conditions were varied to minimize porosity and oxygen impurities while minimizing evaporative metal losses. The as-sprayed and sintered microstructures were characterized using scanning and transmission electron microscopy and quantitative x-ray diffraction (XRD). The as-sprayed microstructure consisted of elongated splats tens of microns in length and only one to three microns in thickness. The splats contained submicrometer grains of primarily MoB and Mo5Si3Bx (T1) and minor amounts of MoSi2 and a glassy grain boundary phase. The interior of the splats typically consisted of a fine eutectic of MoB and T1. Small pieces were cut out of the cross section of the sample and pressureless sintered for 2, 6, and 10 h at 1800 C in flowing Ar. After sintering for 2 h at 1800 C, the samples exhibited a coarser but equiaxed microstructure (1 to 5 µm grain size) containing 78 vol.% T1, 16 vol.% MoB, and 6 vol.% MoSi2 as determined by XRD. Approximately 8 at.% of the Si formed silica. The high-temperature anneal removed all vestiges of the layered structure observed in the as-sprayed samples. Kramer, M. J.; Okumus, S. C.; Besser, M. F.; Unal, O.; Akinc, M. 2000-03-01 414 Thermal spraying is widely employed to deposit hydroxyapatite (HA) and HA-based biocomposites on hip and dental implants.
For thick HA coatings (>150 µm), problems are generally associated with the build-up of residual stresses and lack of control of coating crystallinity. HA/polymer composite coatings are especially interesting for improving the mechanical properties of pure HA coatings. For instance, the polymer may help in releasing the residual stresses in the thick HA coatings. In addition, the selection of a bioresorbable polymer may enhance the coatings' biological behavior. However, there are major challenges associated with spraying ceramic and polymeric materials together because of their very different thermal properties. In this study, pure HA and HA/poly-ε-caprolactone (PCL) thick coatings were deposited without significant thermal degradation by low-energy plasma spraying (LEPS). PCL has never been processed by thermal spraying, and its processing is a major achievement of this study. The influence of selected process parameters on microstructure, composition, and mechanical properties of HA and HA/PCL coatings was studied using statistical design of experiments (DOE). The HA deposition rate was significantly increased by the addition of PCL. The average porosity of biocomposite coatings was slightly increased, while retaining or even improving in some cases their fracture toughness and microhardness. Surface roughness of biocomposites was enhanced compared with pure HA coatings. Cell culture experiments showed that murine osteoblast-like cells attach and proliferate well on HA/PCL biocomposite deposits. Garcia-Alonso, Diana; Parco, Maria; Stokes, Joseph; Looney, Lisa 2012-01-01 415 As a novel thermal spray process, the very low pressure plasma spray (VLPPS) process has been increasingly used in recent years to deposit thin, dense, and homogeneous ceramic coatings for special application needs.
In this study, in order to enhance the low-energy plasma jet under very low pressure ambience, a home-made transferred arc nozzle was made and mounted on a low-power Lin Zhu; Nannan Zhang; Baicheng Zhang; Fu Sun; Rodolphe Bolot; Marie-Pierre Planche; Hanlin Liao; Christian Coddet 416 SciTech Connect This is the second paper of a two-part series based on an integrated study carried out at the State University of New York at Stony Brook and Sandia National Laboratories. The goal of the study is the fundamental understanding of the plasma-particle interaction, droplet/substrate interaction, deposit formation dynamics and microstructure development, as well as the deposit properties. The outcome is science-based relationships, which can be used to link processing to performance. Molybdenum splats and coatings produced at three plasma conditions and three substrate temperatures were characterized. It was found that there is a strong mechanical/thermal interaction between droplet and substrate, which builds up the coating/substrate adhesion. Hardness, thermal conductivity, and modulus increase, while oxygen content and porosity decrease, with increasing particle velocity. Increasing deposition temperature resulted in dramatic improvement in coating thermal conductivity and hardness, as well as an increase in coating oxygen content. Indentation reveals improved fracture resistance for the coatings prepared at higher deposition temperature. Residual stress was significantly affected by deposition temperature, although not significantly by particle energy, within the investigated parameter range. Coatings prepared at high deposition temperature with high-energy particles suffered considerably less damage in wear tests. Possible mechanisms behind these changes are discussed within the context of relational maps which are under development.
Jiang, Xiangyang; Matejicek, Jiri; Kulkarni, Anand; Herman, Herbert; Sampath, Sanjay; Gilmore, Delwyn L.; Neiser Jr., Richard A. 2000-03-28 417 PubMed Metallic glass is one of the most attractive advanced materials, and many researchers have conducted various developmental research works. Metallic glass is expected to be used as a functional material because of its excellent physical and chemical functions such as high strength and high corrosion resistance. However, its application to small-size parts has been carried out only in some industrial fields. In order to widen the industrial application fields, a composite material is preferred for cost performance. In the coating processes of metallic glass with conventional deposition techniques, it is difficult to form thick coatings due to their low deposition rate. The thermal spraying method is one of the potential candidates for producing metallic glass composites. Metallic glass coatings can be applied to longer parts and therefore the application field can be widened. Gas tunnel plasma spraying is one of the most important technologies for high quality ceramic coating and synthesizing functional materials. As the gas tunnel type plasma jet has properties superior to those of other conventional plasma jets, this plasma has great possibilities for various applications in thermal processing. In this study, gas tunnel type plasma spraying was used to form metallic glass coatings on a stainless-steel substrate. The microstructure and surface morphology of the metallic glass coatings were examined using Fe-based metallic glass powder and Zr-based metallic glass powder as coating materials. For the mechanical properties, the Vickers hardness was measured on the cross sections of both coatings and the difference between the powders was compared.
PMID:22905546 Kobayashi, A; Kuroda, T; Kimura, H; Inoue, A 2012-06-01 418 SciTech Connect Thin film synthesis by filtered vacuum arc plasma deposition is a widely used technique with a number of important emerging technological applications. A characteristic feature of the method is that during the deposition process not only is the substrate coated by the plasma, but the plasma gun itself and the magnetic field coil and/or vacuum vessel section constituting the macroparticle filter are also coated to some extent. If the plasma gun cathode is then changed to a new element, the subsequent film deposition can be contaminated by sputtering of the previous coating species from various parts of the system. We have experimentally explored this effect and compared our results with theoretical estimates of sputtering from the SRIM (Stopping and Range of Ions in Matter) code. We find film contamination of order 10⁻⁴ to 10⁻³, and the memory of the prior history of the deposition hardware can be relatively long-lasting. Martins, D.R.; Salvadori, M.C.; Verdonck, P.; Brown, I.G. 2002-08-13 419 SciTech Connect Studies of the electromagnetic loads produced by a variety of plasma disruptions, and the resulting structural effects on the Compact Ignition Tokamak (CIT) vacuum vessel (VV), have been performed to help optimize the VV design. A series of stationary and moving plasmas, with disruption rates from 0.7 to 10.0 MA/ms, have been analyzed using the EMPRES code to compute eddy currents and electromagnetic pressures, and the NASTRAN code to evaluate the structural response of the vacuum vessel. Key factors contributing to the magnitude of EM forces and resulting stresses on the vessel have been found to include disruption rate, and direction and synchronization of plasma motion with the onset of plasma current decay.
As a result of these analyses, a number of design changes have been made, and design margins for the present 1.75 meter design have been improved over the original CIT configuration. 1 ref., 10 figs., 4 tabs. Salem, S.L.; Listvinsky, G.; Lee, M.Y.; Bailey, C. 1987-01-01 420 We have demonstrated that sintered LiF spatial filters may be used in a 10⁻⁶-torr vacuum environment as laser-initiated plasma shutters for retropulse isolation in the Antares high-energy laser fusion system. In our experiments, a 1.1-ns pulsed CO2 laser, at a 10-µm wavelength and an energy of up to 3.0 J, was used for plasma initiation; a chopped probe laser tuned T. W. Sheheen; S. J. Czuchlewski; J. Hyde; R. L. Ainsworth 1983-01-01 421 We plasma-sprayed nickel coatings on stainless steel and cobalt alloy coupons heated to temperatures ranging from room temperature to 650 C. Temperatures, velocities, and sizes of spray particles were recorded in flight and held constant during experiments. We measured coating adhesion strength and porosity, photographed coating microstructure, and determined the thickness and composition of surface oxide layers on heated substrates. Coating adhesion strength on stainless steel coupons increased from 10 to 74 MPa when substrate temperatures were raised from 25 to 650 C. Coating porosity was lower on high-temperature surfaces. Surface oxide layers grew thicker when substrates were heated, but oxidation alone could not account for the increase in coating adhesion strength. When a coupon was heated to 650 C and allowed to cool before plasma-spraying, its coating adhesion strength was much less than that of a coating deposited on a surface maintained at 650 C. Cobalt alloy coupons, which oxidize much less than stainless steel when heated, also showed improved coating adhesion when heated.
Heating the substrate removes surface moisture and other volatile contaminants, delays solidification of droplets so that they can better penetrate surface cavities, and promotes diffusion between the coating and substrate. All of these mechanisms enhance coating adhesion. Pershin, V.; Lufitha, M.; Chandra, S.; Mostaghimi, J. 2003-09-01 422 SciTech Connect The suspension plasma spray (SPS) process was used to produce coatings from yttria-stabilized zirconia (YSZ) powders with median diameters of 15 µm and 80 nm. The powder-ethanol suspensions made with 15-µm diameter YSZ particles formed coatings with microstructures typical of the air plasma spray (APS) process, while suspensions made with 80-nm diameter YSZ powder yielded a coarse columnar microstructure not observed in APS coatings. To explain the formation mechanisms of these different microstructures, a hypothesis is presented which relates the dependence of YSZ droplet flight paths on droplet diameter to variations in deposition behavior. The thermal conductivity (k_th) of columnar SPS coatings was measured as a function of temperature in the as-sprayed condition and after a 50 h, 1200 C heat treatment. Coatings produced from suspensions containing 80 nm YSZ particles at powder concentrations of 2, 8, and 11 wt.% exhibited significantly different k_th values. These differences are connected to microstructural variations between the SPS coatings produced by the three suspension formulations. Heat treatment increased the k_th of the coatings generated from suspensions containing 2 and 11 wt.% of 80 nm YSZ powder, but this k_th increase was less than has been observed in APS coatings. Van Every, K.; Krane, M. J. M.; Trice, R. W.; Wang, H.; Porter, W.; Besser, M.; Sordelet, D.; Ilavsky, J.; Almer, J. (Purdue Univ.); (ORNL); (Ames Lab.)
2011-06-01 424 SciTech Connect The partially stabilized zirconia powders used to plasma spray thermal barrier coatings typically exhibit broad particle-size distributions. There are conflicting reports in the literature about the extent of injection-induced particle-sizing effects in air plasma-sprayed materials.
If significant spatial separation of finer and coarser particles in the jet occurs, then one would expect it to play an important role in determining the microstructure and properties of deposits made from powders containing a wide range of particle sizes. This paper presents the results of a study in which a commercially available zirconia powder was fractionated into fine, medium, and coarse cuts and sprayed at the same torch conditions used for the ensemble powder. Diagnostic measurements of particle surface temperature, velocity, and number-density distributions in the plume for each size-cut and for the ensemble powder are reported. Deposits produced by traversing the torch back and forth to produce a raised bead were examined metallographically to study their shape and location with respect to the torch centerline and to look at their internal microstructure. The results show that, for the torch conditions used in this study, the fine, medium, and coarse size-cuts all followed the same mean trajectory. No measurable particle segregation effects were observed. Considerable differences in coating microstructure were observed. These differences can be explained by the different particle properties measured in the plume. Neiser, R.A. [Sandia National Labs., Albuquerque, NM (United States); Roemer, T.J. [Ktech Corp., Albuquerque, NM (United States) 1996-12-31 425 This work is devoted to obtaining coatings from M-Cr-Al-Y alloys by the vacuum-plasma method using particle energies of 10 to 10³ eV. In this energy range we can realize predominant precipitation (condensation) of the coating, ionic (dry) etching, or the formation of a diffuse layer on the surface depending on the particle type and the S. A. Muboyadzhan; E. N. Kablov; S. A. Budinovskii 1995-01-01 426 The time and space evolution of pulsed vacuum arc plasma parameters has been measured using a single cylindrical Langmuir probe in a free expansion cup.
Electron density ne, effective electron temperature Teff, and the electron energy distribution function (EEDF) are derived from the I-V curves using the Druyvesteyn method. Results show that during the discharge time, the electron density ne is between Lei Chen; Dazhi Jin; Xiaohua Tan; Jingyi Dai; Liang Cheng; Side Hu 427 Si field emitter arrays (FEAs) are promising cold cathodes for field emission displays (FEDs). The emission current from the Si FEAs, however, is known to decrease significantly after the vacuum-packaging process based on the frit sealing technique. In this work, we have investigated the mechanism of the current decrease and found that CHF3 plasma treatment of the tip surface was Masayoshi Nagao; Hisao Tanabe; Takashi Matsukawa; Seigo Kanemaru; Junji Itoh 2000-01-01 428 The results of an optical study of the plasma jet of the cathode spot of a freely burning vacuum arc with copper electrodes at a current of 60 A are given, as well as of the arc stabilized with a uniform axial magnetic field of induction up to 0.18 T. The axial and radial profiles of radiation intensity for different Alexey M. Chaly; Alexander A. Logatchev; Roman A. Taktarov; Konstantin K. Zabello; Sergey M. Shkol'nik 2009-01-01 429 The techniques of plasma spraying are suitable for deposition of metals, ceramics, or composites. Atmospheric plasma spraying of metals is accompanied by their oxidation. The oxidation of nickel during its spraying gives rise to NiO. During the flight of molten nickel particles in the plasma plume, the first stage of the oxidation reaction takes place. To determine the amount of NiO grown during this stage, oxidation can be stopped abruptly by trapping and quenching the particles in liquid nitrogen. If, on the other hand, the flying molten particles are allowed to hit a solid substrate, a plasma deposit or coating is built up. The period from the moment of particle impact to solidification corresponds to the second oxidation stage.
This stage is finished by cooling down the substrate-coating system. Plasma spraying of nickel was conducted using a water-stabilized plasma gun. To study the structure and optical properties of the oxidation products, it is necessary to remove the metallic phase from the samples. This was done by a technique of metal dissolution described previously. After the first oxidation stage, if the particles are trapped in liquid nitrogen, NiO is obtained by rapid solidification of the oxide melt grown on the surface of the flying particles as a result of a gas-molten Ni reaction. The colour of solid NiO formed in this way is green, which corresponds to a region of high reflectance between 1.9 and 2.7 eV. The green colour is typical of stoichiometric NiO and is due to octahedral Ni2+ ions. The second oxidation stage is characterized by a gas-solid Ni reaction. It results in black NiO, whose colour follows from strong absorption of light in the whole visible range. The oxygen content in this oxide slightly exceeds the stoichiometric value. The light absorption is due to free charge carriers, i.e., holes, whose presence is a consequence of the deviation of NiO from stoichiometry. Volenik, K.; Ctibor, P.; Dubsky, J.; Chraska, P.; Horak, J. 2004-03-01 430 In this paper, a comprehensive model was developed to investigate the suspension spray for a radio frequency (RF) plasma torch coupled with an effervescent atomizer. Firstly, the RF plasma is simulated by solving the thermo-fluid transport equations with the electromagnetic Maxwell equations. Secondly, primary atomization of the suspension is solved by a proposed one-dimensional breakup model and validated with the experimental data. Thirdly, the suspension droplets and discharged nanoparticles are modeled in a Lagrangian manner, to calculate each particle's trajectory, acceleration, heating, melting, and evaporation.
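The Lagrangian particle treatment in this record (tracking, acceleration, heating) can be illustrated with a minimal explicit-Euler step. The Stokes-drag law, the Nu = 2 stagnant-film heating limit, and all property and gas values below are illustrative assumptions, not the authors' model; melting, evaporation, and the additional forces the study considers are omitted:

```python
import math

def lagrangian_step(v_p, T_p, dt, v_g=800.0, T_g=8000.0,
                    d_p=1e-6, rho_p=5600.0, c_p=580.0,
                    mu_g=2e-4, k_g=1.5):
    """One explicit-Euler step of particle momentum and heat transfer.

    v_p, T_p : particle velocity [m/s] and temperature [K]
    v_g, T_g : local plasma gas velocity and temperature (assumed values)
    d_p, rho_p, c_p : particle diameter, density, specific heat (YSZ-like, assumed)
    mu_g, k_g : plasma viscosity and thermal conductivity (assumed values)
    """
    # Stokes drag relaxation time (valid at small particle Reynolds number)
    tau = rho_p * d_p**2 / (18.0 * mu_g)
    dv = (v_g - v_p) / tau * dt

    # Convective heating with Nu = 2 (stagnant-film limit): h = Nu * k_g / d_p
    h = 2.0 * k_g / d_p
    area = math.pi * d_p**2
    mass = rho_p * math.pi * d_p**3 / 6.0
    dT = h * area * (T_g - T_p) / (mass * c_p) * dt
    return v_p + dv, T_p + dT

# Integrate a single cold, stationary nanoparticle for 1 microsecond.
v, T = 0.0, 300.0
for _ in range(1000):
    v, T = lagrangian_step(v, T, dt=1e-9)
```

With these assumed values the particle relaxes toward the gas velocity and temperature on microsecond timescales, which is why in-flight dwell time controls particle state in such models.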
Saffman lift force, Brownian force, and non-continuum effects are considered for nanoparticle momentum transfer, as well as the effects of evaporation on heat transfer. This model predicts the nanoparticle trajectory, velocity, temperature, and size in the RF suspension plasma spray. Effects of the torch and atomizer operating conditions on the particle characteristics are investigated. Such operating conditions include gas-to-liquid flow ratio, atomizer orifice diameter, injection pressure, power input level, plasma gas flow rate, and powder material. The statistical distributions for the multiple particles are also discussed for different cases. Xiong, Hong-Bing; Qian, Li-Juan; Lin, Jian-Zhong 2012-03-01 431 Spectrally selective materials have attracted increasing interest because of concentrating solar power plants. Those materials are expected to exhibit specific optical properties at temperatures higher than 450 C. The plasma-spraying process is commonly used to manufacture high-temperature coatings. In this study, heterogeneous coatings made of aluminum and alumina were produced by spraying both a powder and a suspension of boehmite clusters. Both optical and electrical properties were measured because, according to the Hagen-Rubens law, the higher the resistivity, the lower the reflectivity. The reflectivity was assessed by spectrometry at 10 µm and the resistivity by the four-point technique. The results were combined with the diameter of the flattened lamellae and the volume fraction of alumina in the coatings. The highest reflectivity is achieved with a metallic coating exhibiting a high flattening degree, while the coatings containing a large amount of alumina exhibit the lowest reflectivity and the highest resistivity. Brousse-Pereira, E.; Wittmann-Teneze, K.; Bianchi, V.; Longuet, J. L.; Del Campo, L.
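The Hagen-Rubens law invoked in the entry above can be made concrete with a small sketch; the two resistivity values below are illustrative assumptions for a dense metallic coating and a more oxide-rich one, not measurements from the study:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity [F/m]
C = 2.998e8       # speed of light [m/s]

def hagen_rubens_reflectivity(rho, wavelength=10e-6):
    """Normal-incidence IR reflectivity of a good conductor.

    Hagen-Rubens: R = 1 - 2*sqrt(2*eps0*omega*rho), valid in the far IR
    where the photon frequency is well below the carrier scattering rate.
    rho is the DC resistivity [ohm*m]; the default wavelength of 10 um
    matches the spectrometry wavelength quoted in the entry.
    """
    omega = 2.0 * math.pi * C / wavelength
    return 1.0 - 2.0 * math.sqrt(2.0 * EPS0 * omega * rho)

# Assumed resistivities: bulk-aluminium-like metal vs. a more resistive
# alumina-rich composite coating (both values are illustrative only).
r_metal = hagen_rubens_reflectivity(2.8e-8)
r_oxide_rich = hagen_rubens_reflectivity(1e-6)
```

Evaluating the sketch shows the trend the entry states: the higher-resistivity coating has the lower 10-µm reflectivity.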
2012-12-01 432 Reactive plasma spraying (RPS) has been considered a promising technology for in-situ formation of aluminum nitride (AlN) thermally sprayed coatings. To fabricate thick AlN coatings in the RPS process, controlling and improving the in-flight nitriding reaction of Al particles is required. In this study, it was possible to control the nitriding reaction by using ammonium chloride (NH4Cl) powders. A thick and dense AlN coating (more than 300 µm thickness) was successfully fabricated with a small addition of NH4Cl powders. Thus, the addition of NH4Cl prevented Al aggregation by changing the reaction pathway to a mild, non-explosive mode (relatively low heating rates), and it acted as a catalyst, nitrogen source, and diluent agent. Shahien, Mohammed; Yamada, Motohiro; Yasui, Toshiaki; Fukumoto, Masahiro 2011-10-01 433 MoSi2 oxidation protective coatings on a molybdenum substrate were prepared by the air plasma spraying technique (APS). Microstructure, phase composition, porosity, microhardness, and bonding strength of the coatings were investigated. Oxidation behavior of the coating at high temperature was also examined. Results show that the coatings consist of MoSi2 and Mo5Si3; the surface morphology is characterized by flattened lamellar features, insufficiently flattened protuberances with some degree of surface roughness, a certain quantity of spherical particles, microcracks, and pores. Testing results reveal that the microhardness and bonding strength of the coatings increase, and the porosity decreases, with increasing power or decreasing Ar gas flow rate. Moreover, with decreasing porosity, the microhardness of the coatings increases. The bonding strength of the coatings also increases with increasing spray distance. The MoSi2-coated Mo substrate exhibited good oxidation resistance at 1200 C.
Wang, Yi; Wang, Dezhi; Yan, Jianhui; Sun, Aokui 2013-11-01 434 Zirconium (Zr) metal is of interest for chemical corrosion protection and nuclear reactor core applications. Inert chamber plasma spraying has been used to produce thin Zr coatings on stainless steel (SS) substrates. The coatings were deposited while using transferred arc (TA) cleaning/heating at five different current levels. In order to better understand thermal diffusion governed processes, the coating porosity, grain size, and interdiffusion with the substrate were measured as a function of TA current. Low porosity (3.5 to <0.5%), recrystallization with fine equiaxed grain size (3-8 µm diameter), and varying elemental diffusion distance (0-50 µm) from the coating-substrate interface were observed. In addition, the coatings were low in oxygen content compared to the wrought SS substrates. The Zr coatings sprayed under these conditions look promising for highly demanding applications. Hollis, K. J.; Hawley, M. E.; Dickerson, P. O. 2012-06-01 435 SciTech Connect As part of an investigation of the dynamics that occur in the plume of a typical thermal spray torch, an analytical and experimental study of the plasma spraying of alumina is being performed; preliminary results are reported here. Numerical models of the physical processes in the torch column and plume were used to determine the temperature and flow fields. Computer simulations of particle injection (15, 34, and 53 µm alumina particles) are also presented. The alumina experiments were conducted at a 35 kW power level using a 100 scfh argon and 15 scfh hydrogen gas mixture for two alumina powders. The quality of the coatings is discussed with respect to porosity, sample metallography, and microhardness. 6 refs., 5 figs., 1 tab. Varacalle, D.J. Jr. 1988-01-01 436 SciTech Connect A high velocity oxygen-fuel (HVOF) spraying system in open air has been established for producing coatings that are extremely clean and dense.
It is thought that HVOF sprayed MCrAlY (M is Fe, Ni, and/or Co) coatings can be applied to provide resistance against oxidation and corrosion to the hot parts of gas turbines. It is also well known that a thicker coating can be sprayed than with any other thermal spraying system, owing to improved residual stresses. However, the thermal and mechanical properties of HVOF coatings have not been clarified. In particular, the residual stress characteristics, which are the most important property from the viewpoint of production technique, have not been made clear. In this paper, the mechanical properties of HVOF sprayed MCrAlY coatings were measured for both as-sprayed and heat-treated coatings, in comparison with vacuum plasma sprayed MCrAlY coatings. It was confirmed that the mechanical properties of HVOF sprayed MCrAlY coatings could be improved by a diffusion heat treatment to match those of the vacuum plasma sprayed MCrAlY coatings. Also, the residual stress characteristics were analyzed using a deflection measurement technique and an X-ray technique. The residual stress of the HVOF coating was reduced by a shot-peening effect, comparable to that of a plasma spray system in open air. This phenomenon can be explained by the fact that the HVOF sprayed MCrAlY coating was built up by poorly melted particles. Itoh, Y.; Saitoh, M.; Tamura, M. 2000-01-01 437 PubMed Flame-spheroidized feedstock, with excellent known heat transfer and consistent melting capabilities, was used to produce hydroxyapatite (HA) coatings via plasma spraying. The characteristics and inherent mechanical properties of the coatings have been investigated and were found to have a direct and significant relationship with the feedstock characteristics, processing parameters, and microstructural deformities. Processing parameters such as particle sizes (SHA: 20-45, 45-75 and 75-125 µm) and spray distances (10, 12 and 14 cm) have been systematically varied in the present study.
It was found that the increase of particle sizes and spray distances weakened the mechanical properties (microhardness, modulus, fracture toughness, and bond strength) and structural stability of the coatings. The presence of inter- and intralamellar thermal microcracks, voids, and porosities with limited true contact between lamellae was also found to degrade the mechanical characteristics of the coatings, especially in coatings produced from large-sized HA particles. An effort was made to correlate the effects of microstructural defects with the resultant mechanical properties and structural integrity of the plasma-sprayed hydroxyapatite (HA) coatings. The effects of different heat treatment temperatures (600, 800 and 900 degrees C) on the mechanical properties of the coatings were also studied. It was found that a heat treatment temperature of 800 degrees C does enhance the microhardness and elastic modulus of the coatings significantly (P < 0.05), whereas a further increment in heat treatment temperature to 900 degrees C did not show any discernible improvements (P > 0.1). The elastic response behaviour and fracture toughness of both the as-sprayed and heat-treated HA coatings were investigated using Knoop and Vickers indentations at different loadings. Results have shown that the mechanical properties of the coatings improved significantly despite increasing crack density after heat treatment in air. Coatings produced from the spheroidized feedstock of 20-45 µm (SHA 20-45 µm) sprayed at a stand-off distance of 10 cm were found to possess the most favourable mechanical properties. PMID:10811304 Kweh, S W; Khor, K A; Cheang, P 2000-06-01 438 The major problems with plasma sprayed hydroxyapatite (HA) coatings for hard tissue replacement are severe HA decomposition and insufficient mechanical properties of the coatings. Loss of crystalline HA after the high-temperature spraying is due mainly to the loss of OH- in the form of water.
The current study used steam to treat HA droplets and coatings during both the in-flight and flattening stages of plasma spraying. The microstructure of the HA coatings and splats was characterized using scanning electron microscopy, Raman spectroscopy, Fourier transform IR spectroscopy, and x-ray diffraction. Results showed that a significant increase in crystallinity of the HA coating was achieved through the steam treatment (e.g., from 58 to 79%). In addition, the effect depended on the particle size of the HA feedstock, with a greater increase in crystallinity for coatings made from smaller powders. The Raman spectroscopy analyses on the individual splats and coatings indicate that the mechanism involves entrapping of water molecules by the individual HA droplets upon their impingement. It further suggests that the HA decomposition has already taken place before the impingement of the droplets on the pre-coating or substrate. The improvement in crystallinity and phases, for example, from tricalcium phosphate and amorphous calcium phosphate to HA, was achieved by reversing the HA decomposition through providing extra OH-. Furthermore, the steam treatment during spraying also accounts for a remarkable increase in adhesion strength, from 9.09 to 23.13 MPa. In vitro testing, by immersing the HA coatings in simulated body fluid, gives further evidence that this economical and simple steam treatment is promising for improving HA coating structure. Li, H.; Khor, K. A.; Cheang, P. 2006-12-01

439 The correlation of microstructure and wear resistance in ferrous coatings applicable to diesel engine cylinder bores was investigated in this study. Seven kinds of ferrous spray powders, two of which were stainless steel powders and the others blends of ferrous powders mixed with Al2O3-ZrO2 powders, were sprayed on a low-carbon steel substrate by atmospheric plasma spraying.
Microstructural analysis of the ferrous coatings showed that various Fe oxides such as FeO, Fe2O3, and ?-Fe2O3 were formed in the martensitic (or austenitic) matrix as a result of the reaction with oxygen in air. The blend coatings contained ?-Al2O3 and t-ZrO2 oxides, which formed from the Al2O3-ZrO2 powders that were rapidly solidified during plasma spraying. The wear test results revealed that the blend coatings showed better wear resistance than the ferrous coatings because they contained a number of hard Al2O3-ZrO2 oxides. However, delamination occurred when cracks initiated at matrix/oxide interfaces and propagated parallel to the worn surface in cases where the hardness difference between the matrix and oxide was large. The wear rate of the coating fabricated with STS316 powders was slightly higher than that of the other coatings, but the wear rate of the counterpart material was very low because of the smaller matrix/oxide hardness difference due to the presence of many Fe oxides. In order to reduce the wear of both the coating and its counterpart material, the matrix/oxide hardness difference should be minimized, and the hardness of the coating should be increased above a certain level by forming an appropriate amount of oxides. Hwang, Byoungchul; Ahn, Jeehoon; Lee, Sunghak 2002-09-01

440 SciTech Connect Plasma spraying is being studied for in situ repair of damaged Be and W plasma facing surfaces for ITER, the next-generation magnetic fusion energy device, and is also being considered for fabricating Be and W plasma-facing components for the first wall of ITER. Investigators at LANL's Beryllium Atomization and Thermal Spray Facility have concentrated on the structure-property relation between as-deposited microstructures of plasma sprayed Be coatings and the resulting thermal properties.
In this study, the effect of initial substrate temperature on the resulting thermal diffusivity of Be coatings and on the thermal diffusivity at the coating/Be substrate interface (interface thermal resistance) was investigated. Results show that initial Be substrate temperatures above 600 °C can improve the thermal diffusivity of the Be coatings and minimize any thermal resistance at the interface between the Be coating and Be substrate. Castro, R.G.; Bartlett, A.; Elliott, K.E.; Hollis, K.J. 1996-09-01

441 The effect of elastic Coulomb collisions on the one-dimensional expansion of a plasma slab is studied in the classical limit, using an electrostatic particle-in-cell code. Two regimes of interest are identified. For a collision rate of a few hundred times the inverse of the characteristic expansion time τe, the electron distribution function remains isotropic and Maxwellian with a homogeneous temperature throughout the expansion. In this case, the expansion can be approached by a three-dimensional version of the hybrid model developed by Mora [P. Mora, Phys. Rev. E 72, 056401 (2005)]. When the collision rate becomes somewhat greater than 10^4 τe^-1, the plasma is divided into two parts: an inner part which expands adiabatically as an ideal gas and an outer part which undergoes an isothermal expansion. Thaury, C.; Mora, P.; Adam, J. C.; Héron, A. 2009-09-01

442 A parametric study was conducted to determine the effect of suspension plasma spray (SPS) processing parameters, including plasma torch standoff, suspension injection velocity, injector location, powder loading in the suspension, and torch power, on the final microstructure of coatings fabricated from 80 nm diameter yttria-stabilized zirconia (YSZ) powders. Coatings made with different conditions were analyzed via stereology techniques for the amount Kent VanEvery; Matthew John M. Krane; Rodney W.
Trice

443 Plasma spraying using liquid precursors makes it possible to produce finely structured and thin coatings. This technology has been investigated for nearly ten years in many laboratories and applications are now emerging, using conventional plasma equipment except for the feedstock injection system. While superior quality is expected from the nano-structured coatings, the question remains as to the impacts of using A. Moign; A. Vardelle; N. J. Themelis; J. G. Legoux 2010-01-01

444 A rapid determination method for pentazocine in human plasma without complicated pretreatment has been constructed by liquid chromatography/mass spectrometry (LC/MS) with sonic spray ionization (SSI) using an Oasis HLB cartridge column. The reliability of our method was investigated for human plasma samples spiked with pentazocine and dextromethorphan as internal standard. The regression equation for pentazocine showed good linearity in the Tetsuya Arinobu; Hideki Hattori; Akira Ishii; Takeshi Kumazawa; Xiao-Pen Lee; Sadao Kojima; Osamu Suzuki; Hiroshi Seno 2003-01-01

445 Solution precursor plasma spraying (SPPS) is a novel technology with great potential for depositing finely structured ceramic coatings with nano- and sub-micrometric features. The solution is injected into the plasma jet either as a liquid stream or as gas-atomized droplets. The solution droplets or stream interact with the plasma jet and break up into fine droplets. The solvent vaporizes very quickly as the droplets travel downstream. Solid particles are finally formed, and the particles are heated and accelerated to the substrate to generate the coating. The deposition process and the properties of the coatings obtained are extremely sensitive to the process parameters, such as torch operating conditions, injection modes, injection parameters, and substrate temperatures.
This article numerically investigates the effect of injection modes, a liquid stream injection and a gas-blast injection, on the size distribution of injected droplets. The particle/droplet size, temperature, and position distributions on the substrate are predicted for the different injection modes. Shan, Y.; Coyle, T. W.; Mostaghimi, J. 2010-01-01

446 SciTech Connect Tungsten (W) coating of the candidate fusion material V-4Cr-4Ti (NIFS-HEAT-2) was demonstrated with a plasma spray process, with the aim of protecting the plasma-facing surface of a fusion blanket. Increasing the plasma input power and substrate temperature was effective in reducing the porosity of the coating, but resulted in hardening of the substrate and degradation of impact properties at 77 K. The hardening seemed to be due to contamination with gaseous impurities and deformation by thermal stress during the coating process. Since all the samples showed good ductility at room temperature, further heating seems to be acceptable for the vanadium substrate. The fracture stress of the W coating was estimated from bending tests to be at least 313 MPa, which well exceeds the design stress for the vanadium structure in a fusion blanket. Nagasaka, Takuya [National Institute for Fusion Science (Japan)]; Muroga, Takeo [National Institute for Fusion Science (Japan)]; Noda, Nobuaki [National Institute for Fusion Science (Japan)]; Kawamura, Masashi [Kawasaki Heavy Industries, LTD (Japan)]; Ise, Hideo [Kawasaki Heavy Industries, LTD (Japan)]; Kurishita, Hiroaki [Tohoku University (Japan)] 2005-05-15

447 SciTech Connect Transition radiation generated by an electron beam, produced by a laser wakefield accelerator operating in the self-modulated regime, crossing the plasma-vacuum boundary is considered. The angular distributions and spectra are calculated for both the incoherent and coherent radiation. The effects of the longitudinal and transverse momentum distributions on the differential energy spectra are examined.
Diffraction radiation from the finite transverse extent of the plasma is considered and shown to strongly modify the spectra and energy radiated at long wavelengths. This method of transition radiation generation is capable of producing high peak power THz radiation, of order 100 μJ/pulse at the plasma-vacuum interface, which is several orders of magnitude beyond current state-of-the-art THz sources. Schroeder, Carl B.; Esarey, Eric; van Tilborg, Jeroen; Leemans, Wim P. 2003-06-26

448 SciTech Connect A two-dimensional planar model is developed for the self-similar isothermal expansion into vacuum of non-quasi-neutral plasmas from solid targets heated by ultraintense laser pulses. The angular ion distribution and the dependence of the maximum ion velocity on laser parameters and target thickness are predicted. Treating the self-generated magnetic field of the plasma beams as a perturbation, the ion energy at the edge of the ion opening angle is only about 2% higher than that at the front center. Therefore, the self-generated magnetic field of the plasma beams is not large enough to account for the ring structures.
Huang Yongsheng [China Institute of Atomic Energy, Beijing 102413 (China); Department of Engineering Physics, Tsinghua University, Beijing 100084 (China)]; Duan Xiaojiao; Shi Yijin; Lan Xiaofei; Tan Zhixin; Wang Naiyan; Tang Xiuzhang [China Institute of Atomic Energy, Beijing 102413 (China)]; He Yexi [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China)] 2008-04-07

449 This paper presents systematic research on the thermal spraying of HA encompassing all stages of layer deposition: powder production and characterization (optimized production led to a spherical 39.90±10.61 μm powder with 0.0% content of tri-calcium phosphate [TCP] or tetra-calcium phosphate [TTCP] phases), the influence of plasma jet properties on the in-flight powder properties (major influence of the spray distance factor), the influence Jan Cizek; Khiam Aik Khor

450 The hot corrosion behavior of NiCoCrAlY+Ta coatings obtained by low-pressure plasma spraying has been investigated (type I hot corrosion with T = 850°C). These coatings have been deposited on two nickel-base superalloys and on a cast alloy of the same composition as the coating. Comparison of the cyclic oxidation behavior at 850°C between the sprayed coating and the cast alloy M. Frances; P. Steinmetz; J. Steinmetz; C. Duret; R. Mevrel 1985-01-01

451 PubMed Silicon-substituted hydroxyapatite (Si-HA) coatings have been plasma sprayed onto titanium substrates (Ti-6Al-4V) with the aim of improving the bioactivity of the constructs for bone tissue repair/regeneration. X-ray diffraction analysis of the coatings has shown that, prior to the thermal deposition, no secondary phases were formed due to the incorporation of 0.8 wt % Si into the HA crystal lattice. Partial decomposition of hydroxyapatite, which led to the formation of the more soluble phases of alpha- and beta-tricalcium phosphate and calcium oxide, and an increase in amorphization level occurred only following plasma spraying.
Human bone marrow-derived osteoblastic cells were used to assess the in vitro biocompatibility of the constructs. Cells attached and grew well on the Si-HA coatings, evidencing increased metabolic activity and alkaline phosphatase expression compared to the control, i.e., titanium substrates plasma sprayed with hydroxyapatite. Further, a trend towards increased differentiation was also verified by the upregulation of osteogenesis-related genes, as well as by the augmented deposition of globular mineral deposits within the established cell layers. Based on the present findings, plasma spraying of Si-HA coatings over titanium substrates demonstrates improved biological properties regarding cell proliferation and differentiation compared to HA coatings. This suggests that incorporation of Si into the HA lattice could enhance the biological behavior of the plasma-sprayed coating. PMID:20574971 Gomes, Pedro S; Botelho, Cláudia; Lopes, Maria A; Santos, José D; Fernandes, Maria H 2010-08-01

452 SciTech Connect In this study, the effects of the impact parameters, namely the diameter d0, velocity V0, and temperature T0 of an impacting droplet of yttria-stabilized zirconia (YSZ), on splat morphology have been investigated systematically under plasma spraying conditions. In particular, fully molten droplets of 30-90 μm in d0 impacting a preheated quartz glass substrate at V0 of 10-70 m/s have been examined via hybrid plasma spraying. The degree of flattening of the final splat morphology, ξ, was found to be predicted by the relationship ξ = 0.43 Re^(1/3), where Re is the Reynolds number. The dimensionless spreading time of the droplets, ts* = ts·V0/d0, was distributed around 2.7, where ts is the spreading time of the droplet. The ideal maximum spread factor derived from the splat height was approximately proportional to Re^(1/4).
The latter two findings suggest that the analytical model developed by Pasandideh-Fard et al. [Phys. Fluids 8, 650 (1996)] can be applied to droplet impact in plasma spraying, especially in the case of YSZ. In addition, the thermal contact resistance of disk-shaped splats decreased with increasing V0, within the range of 10^-5 to 10^-6 m^2 K/W. Shinoda, Kentaro; Koseki, Toshihiko; Yoshida, Toyonobu [Department of Materials Engineering, Graduate School of Engineering, University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)] 2006-10-01

453 The sintering and creep of plasma-sprayed ceramic thermal barrier coatings under high-temperature conditions are complex phenomena. Changes in the thermomechanical and thermophysical properties and in the stress response of these coating systems as a result of the sintering and creep processes are detrimental to coating thermal fatigue resistance and performance. In this paper, the sintering characteristics of ZrO2-8wt%Y2O3, ZrO2-25wt%CeO2-2.5wt%Y2O3, ZrO2-6wt%NiO-9wt%Y2O3, Dongming Zhu; Robert A. Miller 1998-01-01

454 Monotonic and cyclic deformation behavior of thermal barrier coatings under uni-axial compressive loading was examined. Specimens of plasma-sprayed ZrO2-8% Y2O3, Al2O3, CoNiCrAlY and NiCr were fabricated to test the coating materials independently of the substrates. The stress-strain response was measured using the laser speckle strain-displacement gauge (SSDG). The coatings showed nonlinear stress-strain responses with considerably lower elastic moduli compared with those Hiroyuki Waki; Keiji Ogura; Izuru Nishikawa; Akira Ohmori 2004-01-01

455 A round-robin test was implemented in which nine European research institutions and universities applied different thermal, ultrasonic, and magnetic methods for measuring the thickness of plasma-sprayed coatings.
The coatings, which had thicknesses ranging from 50 to 500 μm, were applied on substrates of AISI 316, a standard industrial structural material, and on Armco iron in order to have a material of known thermal properties. Destructive testing was performed after the other methods had been applied, resulting in detailed information on the coating thickness, rugosity, and uniformity. The results obtained with the applied methods on the two unknown samples for each substrate type agreed within 20% with the destructive testing data. Fabbri, L.; Oksanen, M. 1999-06-01

456 The high-temperature oxidation resistance of superalloys can be greatly increased by plasma-sprayed coatings, and this is a growing industry of considerable economic importance. The purpose of these coatings is to form long-lasting oxidation-protective scales. In the current investigation, Stellite-6 coatings were deposited on two Ni-base superalloys, Superni 601 and Superni 718, and one Fe-base superalloy, Superfer 800H, by a H. Singh; D. Puri; S. Prakash; V. V. Rama Rao 2006-01-01

457 NiCrAlY, Ni-20Cr, Ni3Al and Stellite-6 metallic coatings were deposited on a Fe-based superalloy (32Ni-21Cr-0.3Al-0.3Ti-1.5Mn-1.0Si-0.1C-Bal Fe). NiCrAlY was used as the bond coat in all cases. Hot corrosion studies were conducted on uncoated as well as plasma spray coated superalloy specimens after exposure to molten salt at 900 °C under cyclic conditions. The thermogravimetric technique was used to establish the kinetics of Harpreet Singh; D. Puri; S. Prakash 2005-01-01

458 Conventional and nanostructured zirconia coatings were deposited on In-738 Ni superalloy by the atmospheric plasma spray technique. The hot corrosion resistance of the coatings was measured at 1050 °C using an atmospheric electrical furnace and a fused mixture of vanadium pentoxide and sodium sulfate. According to the experimental results, the nanostructured coatings showed better hot corrosion resistance than the conventional ones.
The improved hot corrosion resistance could be explained by the denser, more closely packed structure of the nanocoating. Evaluation of the mechanical properties by the nanoindentation method showed that the hardness (H) and elastic modulus (E) of the YSZ coating increased substantially after hot corrosion.

459 Functionally graded hydroxyapatite (HA)/Ti-6Al-4V coatings were produced by a plasma spray process using specially developed HA-coated Ti-6Al-4V composite powders as feedstock. The microstructure, density, porosity, microhardness, and Young's modulus (E) were found to change progressively through the three-layered functionally graded coating, which was composed of the layers 50 wt.% HA/50 wt.% Ti-6Al-4V; 80 wt.% HA/20 wt.% Ti-6Al-4V, and HA. No distinct interface K. A Khor; Y. W Gu; C. H Quek; P Cheang 2003-01-01

460 Synthetic hydroxyapatite (HA, Ca10(PO4)6(OH)2) is a very useful biomaterial for numerous applications in medicine, for example as fine powder for suspension plasma spraying. The powder was synthesized using aqueous solutions of ammonium dihydrogen phosphate (NH4H2PO4) and calcium nitrate tetrahydrate (Ca(NO3)2·4H2O) in carefully controlled experiments. The synthesized fine powder was characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The Roman Jaworski; Christel Pierlot; Lech Pawlowski; Muriel Bigan; Maxime Quivrin 2008-01-01

461 Growing demands on thermal barrier coatings (TBCs) for gas turbines regarding their temperature and cyclic capabilities, corrosion resistance, and erosion performance have instigated the development of new materials and coating systems. Different pyrochlores, perovskites, doped yttria-stabilized zirconia, and hexaaluminates have been identified as promising candidates. However, processing these novel TBC materials by plasma spraying is often challenging.
During the deposition process, stoichiometric changes, the formation of undesired secondary phases or non-optimum amorphous contents, and detrimental microstructural effects can occur. This article describes these difficulties and the development of process-related solutions by employing diagnostic tools. Mauer, Georg; Jarligo, Maria Ophelia; Mack, Daniel Emil; Vaßen, Robert 2013-06-01

462 PubMed Highly oriented hydroxyapatite coatings (HACs) were obtained on titanium substrates through a radio-frequency thermal plasma spraying (TPS) method. XRD patterns showed that the HACs had crystallites with a [001] preferred orientation perpendicular to the coating's surface. XRD results also indicated that tetracalcium phosphate crystallites in the as-sprayed HAC were oriented in the (100) direction. XRD peaks corresponding to tetracalcium phosphate, tricalcium phosphate and calcium oxide were absent after heat and hydrothermal treatment. The orientation degree of the HAC was influenced little by such post-heat treatments. Considering the crystallographic relationship between the tetracalcium phosphate in the as-sprayed HAC and the HA crystallites formed in the heat-treated HAC, these XRD results indicate that the tetracalcium phosphate in the as-prepared coatings transformed topotaxially into HA during the post-heat treatment. TEM and SEM analyses of the highly oriented HAC were conducted. The characteristic lamellar structure of TPS deposits was observed in cross-sections of the HAC. A prismatic texture was also observed in magnified SEM images. TEM observation showed that 200-800-nm-wide prismatic crystallites were formed in the HA splats, and their longitudinal axis was oriented perpendicular to the coating's surface. SAD patterns showed that the longitudinal axis of the prismatic crystallites corresponded to the [001] zone axis of the HA crystal. PMID:17400290 Inagaki, M; Kameyama, T 2007-03-19 463
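Entry 452 above quotes an empirical flattening relation for molten YSZ droplets, ξ = 0.43 Re^(1/3). As a quick illustration of how such a correlation is used, the sketch below evaluates it for a droplet inside the reported size (30-90 μm) and velocity (10-70 m/s) ranges. The melt density and viscosity values are illustrative assumptions, not data taken from the abstract.

```python
# Sketch: evaluate the empirical flattening relation xi = 0.43 * Re**(1/3)
# for a molten YSZ droplet. The property values below are illustrative
# assumptions, not data from the abstract.

def reynolds(density, velocity, diameter, viscosity):
    """Droplet Reynolds number Re = rho * V0 * d0 / mu."""
    return density * velocity * diameter / viscosity

def flattening_degree(re):
    """Empirical degree of flattening xi = 0.43 * Re**(1/3)."""
    return 0.43 * re ** (1.0 / 3.0)

if __name__ == "__main__":
    rho = 5600.0    # kg/m^3, assumed density of molten YSZ
    mu = 0.02       # Pa*s, assumed melt viscosity
    d0 = 50e-6      # m, droplet diameter (within the 30-90 um range)
    v0 = 40.0       # m/s, impact velocity (within the 10-70 m/s range)

    re = reynolds(rho, v0, d0, mu)
    print(f"Re = {re:.0f}, flattening degree xi = {flattening_degree(re):.2f}")
```

With these assumed properties a 50 μm droplet at 40 m/s gives a Reynolds number of a few hundred and a flattening degree of roughly 3-4, the order of magnitude typical of disk-shaped splats.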
https://chemistry.stackexchange.com/questions/54305/will-molecules-containing-heavy-isotopes-tend-to-rise-up-in-a-liquid?noredirect=1
# Will molecules containing heavy isotopes tend to rise up in a liquid?

When you have a bag of chips or nuts or something else with big and small pieces and you shake it, the bigger pieces rise to the top and the smaller pieces settle to the bottom. This is known as the Brazil nut effect. I'm curious: does something similar happen with molecules in a liquid? For example, say you have a glass filled with water. Is the concentration of $\ce{HDO}$ and $\ce{D2O}$ larger at the top than at the bottom?

• I'm quite sure that random thermal Brownian motion at just about any temperature above absolute zero completely mixes the isotopes. The drive towards isotope segregation would be due to a decrease in the overall gravitational potential of the system, but gravity is an extremely weak force on the scale of molecules, and this tiny "signal" would be completely swamped by the "noise" of thermal fluctuations. The answer would end up being similar to this. – Nicolau Saker Neto Jun 28 '16 at 22:14
• The only system with isotopic segregation I know of is the phase separation of He-3 and He-4 at a fraction above $\mathrm{0\ K}$, which may meet the requirement in part, but this is a very special case. – Nicolau Saker Neto Jun 28 '16 at 22:16
• Molecules don't behave like different-sized lumps in a cereal packet. And the behaviour in a pack of nuts, for example, isn't about weight but size. – matt_black Jun 28 '16 at 22:25
• Why, of course they would, but the effect would be ridiculously weak. Think of centrifugal isotope separation. – Ivan Neretin Jun 29 '16 at 6:03
• As a side note, the Brazil nut effect is not well understood. – bon Jun 29 '16 at 7:53
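The intuition in the first comment can be made quantitative with a one-line Boltzmann estimate: compare the gravitational energy needed to lift the extra mass of HDO (about 1 u heavier than H2O) through the height of a glass with the thermal energy kT. The sketch below does this, ignoring buoyancy corrections (the partial molar volumes of the isotopologues are nearly identical, so this slightly overstates the effect).

```python
# Sketch: compare the gravitational energy cost of the extra ~1 u of mass
# in HDO over the height of a glass of water with the thermal energy kT.
# The Boltzmann factor exp(-dm*g*h / kT) then gives the equilibrium
# top-vs-bottom abundance ratio for the heavier isotopologue.
import math

AMU = 1.660539e-27   # kg, unified atomic mass unit
K_B = 1.380649e-23   # J/K, Boltzmann constant

def heavy_light_ratio(delta_mass_amu, height_m, temp_k=298.0, g=9.81):
    """Equilibrium (top/bottom) abundance ratio of the heavier species."""
    delta_e = delta_mass_amu * AMU * g * height_m   # extra potential energy
    return math.exp(-delta_e / (K_B * temp_k))

if __name__ == "__main__":
    r = heavy_light_ratio(1.0, 0.10)   # HDO vs H2O over a 10 cm column
    print(f"top/bottom abundance ratio for HDO: {r:.9f}")
```

The ratio differs from 1 by only a few parts in ten million for a 10 cm column, so the gravitational "signal" really is swamped by thermal motion, exactly as the comments suggest.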
https://blog.acolyer.org/
Large-scale evolution of image classifiers Real et al., 2017

I’m sure you noticed the bewildering array of network architectures in use when we looked at some of the top convolutional neural network papers of the last few years last week (Part 1, Part 2, Part 3). With sufficient training data, these networks can achieve amazing feats, but how do you find the best network architecture for your problem in the first place? Discovering neural network architectures… remains a laborious task. Even within the specific problem of image classification, the state of the art was attained through many years of focused investigation by hundreds of researchers. If you’re an AI researcher and you come up against a difficult problem where it’s hard to encode the best rules (or features) by hand, what do you do? Get the machine to learn them for you of course! If what you want to learn is a function, and you can optimise it through back propagation, then a network is a good solution (see e.g. ‘Network in Network‘, or ‘Learning to learn by gradient descent by gradient descent‘). If what you want to learn is a policy that plays out over some number of time steps then reinforcement learning would be a good bet. But what if you wanted to learn a good network architecture? For that the authors turned to a technique from the classical AI toolbox, evolutionary algorithms. As you may recall, evolutionary algorithms start out with an initial population, assess the suitability of population members for the task in hand using some kind of fitness function, and then generate a new population for the next iteration by mutations and combinations of the fittest members. So we know that we’re going to start out with some population of initial model architectures (say, 1000 of them), define a fitness function based on how well they perform when trained, and come up with a set of appropriate mutation operations over model architectures.
That’s the big picture, and there’s just one more trick from the deep learning toolbox that we need to bring to bear: brute force! If in doubt, overwhelm the problem with bigger models, more data, or in this case, more computation: We used slightly-modified known evolutionary algorithms and scaled up the computation to unprecedented levels, as far as we know. This, together with a set of novel and intuitive mutation operators, allowed us to reach competitive accuracies on the CIFAR-10 dataset. This dataset was chosen because it requires large networks to reach high accuracies, thus presenting a computational challenge. The initial population consists of very simple (and very poorly performing) linear models, and the end result is a fully trained neural network with no post-processing required. Here’s where the evolved models stand in the league table: That’s a pretty amazing result when you think about it. Really top-notch AI researchers are in very short supply, but computation is much more readily available on the open market. ‘Evolution’ evolved an architecture, with no human guidance, that beats some of our best models from the last few years. Let’s take a closer look at the details of the evolutionary algorithm, and then we’ll come back and dig deeper into the evaluation results. ### Evolving models We start with a population of 1000 very simple linear regression models, and then use tournament selection. During each evolutionary step, a worker process (250 of them running in parallel) chooses two individuals at random and compares their fitness. The worst of the pair is removed from the population, and the better model is chosen as a parent to help create the next generation. A mutation is applied to the parent to create a child. The worker then trains the child, evaluates it on the validation set, and puts it back into the population. Using this strategy to search large spaces of complex image models requires considerable computation. 
To achieve scale, we developed a massively-parallel, lock-free infrastructure. Many workers operate asynchronously on different computers. They do not communicate directly with each other. Instead, they use a shared file-system, where the population is stored. Training and validation take place on the CIFAR-10 dataset, consisting of 50,000 training examples and 10,000 test examples, all labeled with 1 of 10 common object classes. Each training run lasts 25,600 steps – brief enough that each individual can be trained in somewhere between a few seconds and a few hours, depending on the model size. After training, a single evaluation on the validation set provides the accuracy to use as the model’s fitness. We need architectures that are trained to completion within an evolutionary experiment… [but] 25,600 steps are not enough to fully train each individual. Training a large enough model to completion is prohibitively slow for evolution. To resolve this dilemma, we allow the children to inherit the parents’ weights whenever possible. The final piece of the puzzle, then, is the encoding of model architectures, and the mutation operations defined over them. A model architecture is encoded as a graph (its DNA). Vertices are tensors or activations (either batch normalisation with ReLUs, or simple linear units). Edges in the graph are identity connections (for skipping) or convolutions. When multiple edges are incident on a vertex, their spatial scales or numbers of channels may not coincide. However, the vertex must have a single size and number of channels for its activations. The inconsistent inputs must be resolved. Resolution is done by choosing one of the incoming edges as the primary one. We pick this primary edge to be the one that is not a skip connection. Activation functions are similarly reshaped (using some combination of interpolation, truncation, and padding), and the learning rate is also encoded in the DNA.
When creating a child, a worker picks a mutation at random from the following set:

• Alter learning rate
• Identity (effectively gives the individual further training time in the next generation)
• Reset weights
• Insert convolution (at a random location in the ‘convolutional backbone’). Convolutions are 3×3 with strides of 1 or 2.
• Remove convolution
• Alter stride (powers of 2 only)
• Alter number of channels (of a random convolution)
• Filter size (horizontal or vertical at random, on a random convolution, odd values only)
• Insert one-to-one (adds a one-to-one / identity connection)
• Add skip (identity between random layers)
• Remove skip (removes a random skip)

### Evaluation results

Here’s an example of an evolution experiment, with selected population members highlighted: Five experiment runs are done, and although not all models reach the same accuracy, they get pretty close. It took 9×10^19 FLOPs on average per experiment. The following chart shows how accuracy improves over time during the experiments: We observe that populations evolve until they plateau at some local optimum. The fitness (i.e. validation accuracy) value at this optimum varies between experiments (Above, inset). Since not all experiments reach the highest possible value, some populations are getting “trapped” at inferior local optima. This entrapment is affected by two important meta-parameters (i.e. parameters that are not optimized by the algorithm). These are the population size and the number of training steps per individual. The larger the population size, the more thoroughly the space of models can be explored, which helps to reach better optima. More training time means that a model needs to undergo fewer identity mutations to reach a given level of training (remember that the end result of the evolution process is a fully trained model, not just a model architecture). Two other approaches to escaping local optima are increasing the mutation rate, and resetting weights.
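The tournament-selection loop and mutation step described above can be sketched in a few lines. This is a toy reconstruction, not the authors' code: the DNA is reduced to a learning-rate gene plus a layer count, the mutation set is a small subset of the real one, and the fitness function is a stand-in for "train 25,600 steps and evaluate on the validation set".

```python
# Toy sketch of the tournament selection described above: sample two
# individuals, replace the worse one with a mutated copy of the better.
# DNA, mutations, and fitness are illustrative stand-ins; the real system
# evolves full network graphs and trains each child on CIFAR-10.
import random

MUTATIONS = ["alter_learning_rate", "identity", "insert_layer", "remove_layer"]

def mutate(dna, rng):
    child = dict(dna)
    op = rng.choice(MUTATIONS)
    if op == "alter_learning_rate":
        child["lr"] *= rng.choice([0.5, 2.0])
    elif op == "insert_layer":
        child["layers"] += 1
    elif op == "remove_layer":
        child["layers"] = max(1, child["layers"] - 1)
    return child   # "identity" returns an unchanged copy (more training time)

def fitness(dna):
    # Stand-in for validation accuracy: best at lr=0.01 and 8 layers.
    return -abs(dna["lr"] - 0.01) - abs(dna["layers"] - 8) * 0.001

def evolve(pop, steps, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        a, b = rng.sample(range(len(pop)), 2)      # pick two at random
        worse, better = (a, b) if fitness(pop[a]) < fitness(pop[b]) else (b, a)
        pop[worse] = mutate(pop[better], rng)      # child replaces the loser
    return max(pop, key=fitness)

if __name__ == "__main__":
    population = [{"lr": 0.16, "layers": 1} for _ in range(50)]
    best = evolve(population, steps=2000)
    print("best individual:", best)
```

Note that the better individual of each tournament survives unchanged, so the best fitness in the population never decreases; in the real system it is the inherited parent weights that make partially-trained children viable.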
When it looks like members of the population are trapped in poor local optima, the team tried applying 5 mutations instead of 1 for a few generations. During this period some population members escape the local optimum, and none get worse: To avoid getting trapped by poorer architectures that just happened to have received more training (e.g. through the identity mutation), the team also tried experiments in which the weights are simultaneously reset across all population members when a plateau is reached. The populations suffer a temporary degradation (as is to be expected), but ultimately reach a higher optimum.

The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. Ethically Aligned Design: A Vision For Prioritizing Wellbeing With Artificial Intelligence And Autonomous Systems, Version 1. IEEE, 2016. http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html.

Something a little different for today… the IEEE recently put out a first version of their “Ethically Aligned Design” report for public discussion. It runs to 136 pages (!) but touches on a number of very relevant issues. This document represents the collective input of over one hundred global thought leaders in the fields of Artificial Intelligence, law and ethics, philosophy, and policy from the realms of academia, science, and the government and corporate sectors. The report itself is divided into eight sections, each of which seems to be the result of the deliberations of a different sub-committee. The eight areas are:

1. General Principles
2. Embedding values into autonomous intelligent systems
3. Methodologies to guide ethical research and design
4. Safety and beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI)
5. Personal data and individual access control
6. Reframing autonomous weapons systems
7. Economics/humanitarian issues
8.
Law

I’m going to focus on the first five of these areas today, and of necessity in reducing 136 pages to one blog post, I’ll be skipping over a lot of details and just choosing the parts that stand out to me on this initial reading.

### General Principles

Future AI systems may have the capacity to impact the world on the scale of the agricultural or industrial revolutions. This section opens with a broad question, “How can we ensure that AI/AS do not infringe human rights?” (where AI/AS stands for Artificial Intelligence / Autonomous Systems throughout the report). The first component of the answer connects back to documents such as the Universal Declaration of Human Rights and makes a statement that I’m sure very few would disagree with, although it offers little help in the way of implementation: AI/AS should be designed and operated in a way that respects human rights, freedoms, human dignity, and cultural diversity. The other two components of the answer though raise immediately interesting technical considerations:

• AI/AS must be verifiably safe and secure throughout their operational lifetime.
• If an AI/AS causes harm it must always be possible to discover the root cause (traceability) for said harm.

The second of these in particular is very reminiscent of the GDPR ‘right to an explanation,’ and we looked at some of the challenges with provenance and explanation in previous editions of The Morning Paper. A key concern over autonomous systems is that their operation must be transparent to a wide range of stakeholders for different reasons (noting that the level of transparency will necessarily be different for each stakeholder). Stated simply, a transparent AI/AS is one in which it is possible to discover how and why the system made a particular decision, or in the case of a robot, acted the way it did.
The report calls for new standards describing “measurable, testable levels of transparency, so that systems can be objectively assessed and levels of compliance determined.”

### Embedding values

This is an interesting section. The overall argument / intention seems to be that we want to build systems that make decisions which align with the way the impacted communities would like decisions to be made. The actual wording raises a few questions though… Society does not have universal standards or guidelines to help embed human norms or moral values into autonomous intelligent systems (AIS) today. But as these systems grow to have increasing autonomy to make decisions and manipulate their environment, it is essential they be designed to adopt, learn, and follow the norms and values of the community they serve, and to communicate and explain their actions in as transparent and trustworthy a manner as possible, given the scenarios in which they function and the humans who use them. What if the norms and values of the community they serve aren’t desirable? For example, based on all the horrific stories that are increasingly being shared, the ‘norm’ of how women are treated in IT is not something we would ever want to propagate into an AIS. There are many examples in history of things that were once accepted norms which we now find very unacceptable. Could we not embed norms and values (e.g., non-discrimination) of a better, more noble version of ourselves and our communities? Presuming of course we can all agree on what ‘better’ looks like… Values to be embedded in AIS are not universal, but rather largely specific to user communities and tasks. This opens the door to ‘moral overload’, in which an AIS is subject to many possibly conflicting norms and values. What should we do in these situations?
The recommended best practice seems guaranteed to produce discrimination against minorities (but then again, so does democracy when viewed through the same lens; this stuff is tricky!): Our recommended best practice is to prioritize the values that reflect the shared set of values of the larger stakeholder groups. For example, a self-driving vehicle’s prioritization of one factor over another in its decision making will need to reflect the priority order of values of its target user population, even if this order is in conflict with that of an individual designer, manufacturer, or client. In the same section though, we also get: Moreover, while deciding which values and norms to prioritize, we call for special attention to the interests of vulnerable and under-represented populations, such that these user groups are not exploited or disadvantaged by (possibly unintended) unethical design. The book “Moral Machines: Teaching robots right from wrong” is recommended as further reading in this area. Understanding whether / ensuring that systems actually implement the intended norms requires transparency. Two levels of transparency are envisaged: firstly around the information conveyed to the user while an autonomous system interacts, and secondly enabling the system to be evaluated as a whole by a third-party. A system with the highest level of traceability would contain a black-box-like module, such as those used in the airline industry, that logs and helps diagnose all changes and behaviors of the system.
### Methodologies to guide ethical research and design

The report highlights two key issues relating to business practices involving AI:

• a lack of value-based ethical culture and practices, and
• a lack of values-aware leadership

Businesses are eager to develop and monetize AI/AS but there is little supportive structure in place for creating ethical systems and practices around its development or use… Engineers and design teams are neither socialized nor empowered to raise ethical concerns regarding their designs, or design specifications, within their organizations. Considering the widespread use of AI/AS and the unique ethical questions it raises, these need to be identified and addressed from their inception.

Companies should implement ‘ethically aligned design’ programs (from which the entire report derives its title). Professional codes of conduct can support this (there’s a great example from the British Computer Society in this section of the report). The lack of transparency about the AI/AS manufacturing process presents a challenge to ethical implementation and oversight. Regulators and policymakers have an important role to play here, the report argues. For example: …when a companion robot like Jibo promises to watch your children, there is no organization that can issue an independent seal of approval or limitation on these devices. We need a ratings and approval system ready to serve social/automation technologies that will come online as soon as possible. CloudPets anyone? What a disgrace. For further reading, “An FDA for Algorithms,” and “The Black Box Society” are recommended. There’s a well made point by Frank Pasquale, Professor of Law at the University of Maryland, about the importance (and understandability) of the training data vs the algorithm too: …even if machine learning processes are highly complex, we may still want to know what data was fed into the computational process. Presume as complex a credit scoring system as you want.
I still want to know the data sets fed into it, and I don’t want health data in that set…

### Safety and beneficence of AGI and ASI

This section stresses the importance of a ‘safety mindset’ at all stages. As AI systems become more capable, unanticipated or unintended behavior becomes increasingly dangerous, and retrofitting safety into these more generally capable and autonomous AI systems may be difficult. Small defects in AI architecture, training, or implementation, as well as mistaken assumptions, could have a very large impact when such systems are sufficiently capable. The paper “Concrete problems in AI safety” (on The Morning Paper backlog) describes a range of possible failure modes. Any AI system that is intended to ultimately have capabilities with the potential to do harm should be designed to avoid these issues pre-emptively. Retrofitting safety into future more generally capable AI systems may be difficult: As an example, consider the case of natural selection, which developed an intelligent “artifact” (brains) by simple hill-climbing search. Brains are quite difficult to understand, and “refactoring” a brain to be trustworthy when given large amounts of resources and unchecked power would be quite an engineering feat. Similarly, AI systems developed by pure brute force might be quite difficult to align.

### Personal data and individual access control

This is the section most closely aligned with the GDPR, and at its heart is the problem of the asymmetry of data: Our personal information fundamentally informs the systems driving modern society but our data is more of an asset to others than it is to us. The artificial intelligence and autonomous systems (AI/AS) driving the algorithmic economy have widespread access to our data, yet we remain isolated from gains we could obtain from the insights derived from our lives. The call is for tools allowing every individual citizen control over their own data and how it is shared.
There’s also this very interesting reminder about Western cultural norms here too: We realize the first version of The IEEE Global Initiative’s insights reflect largely Western views regarding personal data where prioritizing an individual may seem to overshadow the use of information as a communal resource. This issue is complex, as identity and personal information may pertain to single individuals, groups, or large societal data sets. What is personal data? Any data that can be reasonably linked to an individual based on their unique physical, digital, or virtual identity. That includes device identifiers, MAC addresses, IP addresses, and cookies. Guidance on determining what constitutes personal data can be found in the U.K. Information Commissioner’s Office paper, “Determining what is personal data.” As a tool for any organization regarding these issues, a good starting point is to apply the who, what, why and when test to the collection and storage of personal information:

• Who requires access and for what duration?
• What is the purpose of the access? Is it read, use and discard, or collect, use and store?
• Why is the data required? To fulfil compliance? Lower risk? Because it is monetized? In order to provide a better service/experience?
• When will it be collected, for how long will it be kept, and when will it be discarded, updated, re-authenticated…

The report also points out how difficult informed consent can be. For example, “Data that appears trivial to share can be used to make inferences that an individual would not wish to share…”

### Afterword

I’ve barely scratched the surface, but this post is getting too long already. One of the key takeaways is that this is a very complex area! I personally hold a fairly pessimistic view when it comes to hoping that the unrestrained forces of capitalism will lead to outcomes we desire. Therefore even though it may seem painful, some kind of stick (aka laws and regulations) does ultimately seem to be required.
Will our children (or our children’s children) one day look back in horror at the wild west of personal data exploitation, when everything that could be mined about a person was mined, exploited, packaged and sold with barely any restriction? Let’s finish on a positive note though. It’s popular to worry about AI and Autonomous Systems without also remembering that they can be a force for tremendous good. As well as introducing unintended bias and discrimination, they can also be used to eliminate it in a way we could never achieve with human decision makers. An example I’ve been talking about here draws inspiration from the Adversarial Neural Cryptography paper of all things. There we get a strong hint that the adversarial network structure introduced with GANs can also be applied in other ways. Consider a network that learns an encoding of information about a person (but explicitly excluding, say, information about race and gender). Train it in conjunction with two other networks: one that learns to make the desired business predictions based on the learned representation, and one (the adversarial net) that attempts to predict race and gender based on that same representation. When the adversarial net cannot do better than random chance, we have a pretty good idea that we’ve eliminated unintended bias from the system… To round out the week, I thought I’d take a selection of fun papers from the ‘More papers from 2016’ section of the top 100 awesome deep learning papers list. The texture networks paper we’ve covered before, so the link in the above list is to The Morning Paper write-up (but I felt like it belonged in this group nevertheless).

### Colorful image colorization

Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. How is this possible? Well, we’ve seen that networks can learn what various parts of the image represent.
If you see enough images you can learn that grass is (usually) green, the sky is (sometimes!) blue, and ladybirds are red. The network doesn’t have to recover the actual ground truth colour, just a plausible colouring. Therefore, our task becomes much more achievable: to model enough of the statistical dependencies between the semantics and the textures of grayscale images and their color versions in order to produce visually compelling results. Results like this: Training data for the colourisation task is plentiful – pretty much any colour photo will do. The tricky part is finding a good loss function – as we’ll see soon, many loss functions produce images that look desaturated, whereas we want vibrant realistic images. The network operates on image data in the CIE Lab colourspace. Grayscale images have only the lightness, L, channel, and the goal is to predict the a (green-red) and b (blue-yellow) colour channels. The overall network architecture should look familiar by now, indeed so familiar that supplementary details are pushed to an accompanying website. (That website page is well worth checking out by the way, it even includes a link to a demo site on Algorithmia where you can try the system out for yourself on your own images). Colour prediction is inherently multi-modal: objects can take on several plausible colourings. Apples for example may be red, green, or yellow, but are unlikely to be blue or orange. To model this, the prediction is a distribution of possible colours for each pixel. A typical objective function might use e.g. Euclidean loss between predicted and ground truth colours. However, this loss is not robust to the inherent ambiguity and multimodal nature of the colorization problem. If an object can take on a set of distinct ab values, the optimal solution to the Euclidean loss will be the mean of the set. In color prediction, this averaging effect favors grayish, desaturated results.
Additionally, if the set of plausible colorizations is non-convex, the solution will in fact be out of the set, giving implausible results. What can we do instead? The ab output space is divided into bins with grid size 10, and the top Q = 313 in-gamut (within the range of colours we want to use) bins are kept: The network learns a mapping to a probability distribution over these Q colours (a Q-dimensional vector). The ground truth colouring is also translated into a Q-dimensional vector, and the two are compared using a multinomial cross entropy loss. Notably this includes a weighting term to rebalance the loss based on colour-class rarity. The distribution of ab values in natural images is strongly biased towards low ab values, due to the appearance of backgrounds such as clouds, pavement, dirt, and walls. Figure 3(b) [below] shows the empirical distribution of pixels in ab space, gathered from 1.3M training images in ImageNet. Observe that the number of pixels in natural images at desaturated values are orders of magnitude higher than for saturated values. Without accounting for this, the loss function is dominated by desaturated ab values. The final predicted distribution then needs to be mapped to a point estimate in ab space. Taking the mode of the predicted distribution leads to vibrant but sometimes spatially inconsistent results (see RH column below). Taking the mean brings back another form of the desaturation problem (see LH column below). To try to get the best of both worlds, we interpolate by re-adjusting the temperature T of the softmax distribution, and taking the mean of the result. We draw inspiration from the simulated annealing technique, and thus refer to the operation as taking the annealed-mean of the distribution. Here are some more colourings from a network trained on ImageNet, which were rated by Amazon Mechanical Turk participants to see how lifelike they are.
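The annealed-mean trick is easy to write down: sharpen the predicted distribution with a temperature T, re-normalise, and take the expectation. T = 1 recovers the mean; T → 0 approaches the mode. Here is a minimal sketch of my own over a 1-D stand-in for the 313 ab bins (the temperature values are illustrative, not the paper’s tuned setting):

```python
import math

def annealed_mean(probs, bin_values, T):
    """Expectation of the distribution after temperature re-scaling."""
    logits = [math.log(p + 1e-12) / T for p in probs]
    m = max(logits)                       # subtract max for numerical stability
    weights = [math.exp(l - m) for l in logits]
    z = sum(weights)
    return sum(v * w / z for v, w in zip(bin_values, weights))

# Bimodal toy distribution over two ab bin centres.
probs = [0.6, 0.4]
bins = [-50.0, 50.0]
mean = annealed_mean(probs, bins, T=1.0)     # the desaturating mean: -10.0
vivid = annealed_mean(probs, bins, T=0.05)   # snaps towards the dominant mode
```

Intermediate temperatures trade the spatial consistency of the mean against the vibrancy of the mode, which is exactly the interpolation the paper is after.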
And now for my very favourite part of the paper: Since our model was trained using “fake” grayscale images generated by stripping ab channels from color photos, we also ran our method on real legacy black and white photographs, as shown in Figure 8 (additional results can be viewed on our project webpage). One can see that our model is still able to produce good colorizations, even though the low-level image statistics of the legacy photographs are quite different from those of the modern-day photos on which it was trained. Aren’t they fabulous! Especially the Migrant Mother colouring. The representations learned by the network also proved useful for object classification, detection, and segmentation tasks.

### Generative visual manipulation on the natural image manifold

So we’ve just seen that neural networks can help us with our colouring. But what about those of us who are more artistically challenged and have a few wee issues making realistic looking drawings (or alterations to existing drawings) in the first place? It turns out that generative adversarial neural networks can help. It’s a grand sounding paper title, but you can think of it as “Fiddling about with images while ensuring they still look natural.” I guess that wouldn’t look quite so good in the conference proceedings ;). Today, visual communication is sadly one-sided. We all perceive information in the visual form (through photographs, paintings, sculpture, etc), but only a chosen few are talented enough to effectively express themselves visually… One reason is the lack of “safety wheels” in image editing: any less-than-perfect edit immediately makes the image look completely unrealistic. To put it another way, the classic visual manipulation paradigm does not prevent the user from “falling off” the manifold of natural images. As we know, GANs can be trained to learn effective representations of natural looking images (“the manifold of natural images“).
So let’s do that, but then instead of using the trained GAN to generate images, use it as a constraint on the output of various image manipulation operations, to make sure the results lie on the learned manifold at all times. The result is an interactive tool that helps you make realistic looking alterations to existing images. It helps to see the tool in action – you can see a video here. The authors also demonstrate ‘generative transformation’ of one image to look more like another, and my favourite, creating a new image from scratch based on a user’s sketch. The intuition for using GANs to learn manifold approximations is that they have been shown to produce high-quality samples, and that Euclidean distance in the latent space often corresponds to a perceptually meaningful visual similarity. This means we can also perform interpolation between points in the latent space. Here’s what happens when the latent vector is updated based on user edits (top row, adding black colour and changing the shape): In the interactive tool, each update step takes about 50-100ms, working only on the mapped representation of the original image. When the user is done, the generated image captures roughly the desired change, but the quality is degraded as compared to the original image. To address this issue, we develop a dense correspondence algorithm to estimate both the geometric and color changes induced by the editing process. This motion and colour flow algorithm is used to estimate the colour and shape changes in the generated image sequence (as user editing progressed), and then transfer them back on top of the original photo to generate photo-realistic images. The user interface gives the user a colouring brush for changing the colour of regions, a sketching brush to outline shapes or add fine details, and a warping ‘brush’ for more explicit shape modifications.
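The latent-vector update described above can be caricatured as a tiny optimisation problem: move the latent code z so the generated output satisfies the user’s edit, while a penalty keeps z close to the original code. Everything below is my own toy illustration – a 1-D ‘generator’ and a finite-difference gradient stand in for the real GAN and backprop:

```python
def edit_latent(z0, generator, target, steps=300, lr=0.05, lam=0.1, eps=1e-5):
    """Minimise (G(z) - target)^2 + lam * (z - z0)^2 by gradient descent."""
    def loss(z):
        return (generator(z) - target) ** 2 + lam * (z - z0) ** 2
    z = z0
    for _ in range(steps):
        # Central-difference gradient stands in for backprop through G.
        grad = (loss(z + eps) - loss(z - eps)) / (2 * eps)
        z -= lr * grad
    return z

g = lambda z: 2 * z + 1                 # stand-in 'generator'
z = edit_latent(z0=0.0, generator=g, target=5.0)
# g(z) moves close to the edited value 5.0 without straying far from z0.
```

The `lam` term is what keeps the edited image recognisably the same image; drop it and the optimiser is free to wander to any point on the manifold that satisfies the constraint.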
Here are some results from user edits: Transformations between two images also take place in the GAN-learned representation space and are mapped back in the same way: It’s also possible to use the brush tools to create an image from scratch, and then add more scribbles to refine the result. How good is this! :

### WaveNet: a generative model for raw audio

Enough with the images already! What about generating sound? How about text-to-speech sound generation yielding state of the art performance? Hearing is believing, so check out these samples: (You can find more on the DeepMind blog at https://deepmind.com/blog/wavenet-generative-model-raw-audio/). We show that WaveNets can generate raw speech signals with subjective naturalness never before reported in the field of text-to-speech (TTS), as assessed by human raters. The architecture of WaveNet is inspired by PixelRNN (See “RNN models for image generation” from a couple of weeks ago). The foundation is very simple – take a waveform $\mathbf{x}$ with T values, $\{x_1, ..., x_T\}$, and let the probability of $x_t$ be conditioned on all of the values that precede it: $p(x_t | x_1, ..., x_{t-1})$. Now the joint probability of the overall waveform is modelled by: $p(\mathbf{x}) = \prod_{t=1}^{T} p(x_t | x_1, ..., x_{t-1})$ This can be modelled by a stack of convolutional layers. “By using causal convolutions, we make sure the model cannot violate the ordering in which we model the data…” (the prediction at timestep t cannot depend on any future timesteps). This can be implemented by shifting the output of a normal convolution by one or more timesteps. At training time, the conditional predictions for all timesteps can be made in parallel because all timesteps of ground truth x are known. When generating with the model, the predictions are sequential: after each sample is predicted, it is fed back into the network to predict the next sample.
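A causal convolution is simple to state: output t may depend only on inputs at times ≤ t. A bare-bones 1-D sketch of my own (real WaveNet stacks many such layers with gated activations and skip connections):

```python
def causal_conv1d(x, w, dilation=1):
    """y[t] = sum_k w[k] * x[t - k*dilation], zero-padded on the left,
    so y[t] never sees the future."""
    y = []
    for t in range(len(x)):
        acc = 0.0
        for k, wk in enumerate(w):
            idx = t - k * dilation
            if idx >= 0:
                acc += wk * x[idx]
        y.append(acc)
    return y

x = [1.0, 2.0, 3.0, 4.0]
y = causal_conv1d(x, w=[0.5, 0.5])
# Changing a future sample cannot affect earlier outputs:
y2 = causal_conv1d(x[:3] + [99.0], w=[0.5, 0.5])
```

Because `y2[:3]` equals `y[:3]` regardless of what the fourth sample becomes, the ordering constraint holds by construction.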
Causal convolutions need lots of layers to increase their receptive field. WaveNet uses dilated convolutions to increase receptive fields by orders of magnitude, without greatly increasing computational cost. A dilated convolution is a convolution where the filter is applied over an area larger than its length by skipping input values with a certain step. WaveNet uses dilation doubling in every layer up to a limit of 512, before repeating (1, 2, 4, …, 512, 1, 2, 4, …, 512, …). A straight softmax output layer would need 65,536 probabilities per timestep to model all possible values for raw audio stored as a sequence of 16-bit integer values. The data is quantized to 256 possible values using a non-linear quantization scheme which was found to produce a significantly better reconstruction than a simple linear scheme: $f(x_t) = sign(x_t) \frac{\ln(1 + \mu|x_t|)}{\ln(1 + \mu)}$ where $-1 < x_t < 1$ and $\mu = 255$. The network uses both residual and parameterised skip connections throughout to speed up convergence. By conditioning the model on additional inputs, WaveNet can be guided to produce audio with the required characteristics (e.g., a certain speaker’s voice). For TTS, information about the text is fed as an extra input.

> For the first experiment we looked at free-form speech generation (not conditioned on text). We used the English multi-speaker corpus from CSTR voice cloning toolkit (VCTK) (Yamagishi, 2012) and conditioned WaveNet only on the speaker. The conditioning was applied by feeding the speaker ID to the model in the form of a one-hot vector. The dataset consisted of 44 hours of data from 109 different speakers.

Since it wasn’t conditioned on text, the model generates made-up but human language-like words in a smooth way (see second audio clip at the top of this section).
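Stepping back to the quantisation step for a moment: the µ-law formula above is easy to reproduce directly (a straightforward sketch; the inverse mapping is my own addition for checking the round trip):

```python
import math

MU = 255

def mu_law(x):
    """f(x) = sign(x) * ln(1 + mu*|x|) / ln(1 + mu), mapping (-1, 1) -> (-1, 1)."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def quantize(x):
    """Companded value mapped onto one of 256 integer levels."""
    return min(255, int((mu_law(x) + 1.0) / 2.0 * 256))

def dequantize(level):
    """Approximate inverse: bin centre back through the inverse companding."""
    f = 2.0 * (level + 0.5) / 256 - 1.0
    return math.copysign(((1 + MU) ** abs(f) - 1) / MU, f)

# Small amplitudes keep fine resolution; large ones are coarsened.
err = abs(dequantize(quantize(0.5)) - 0.5)
```

The logarithmic companding is what lets 256 levels stand in for 65,536: quiet samples, where the ear is most sensitive, get most of the resolution.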
It can model speech from any of the speakers by conditioning it on the one-hot encoding – thus the model is powerful enough to capture the characteristics of all 109 speakers from the dataset in a single model. The second experiment trained WaveNet on the same single-speaker speech databases that Google’s North American and Mandarin Chinese TTS systems are built on. WaveNet was conditioned on linguistic features derived from the input texts. In subjective paired comparison tests, WaveNet beat the best baselines: WaveNet also achieved the highest ever score in a mean opinion score test where users had to rate naturalness on a scale of 1-5 (scoring over 4 on average). (See the first speech sample at the top of this section). The third set of experiments trained WaveNet on two music datasets (see the third speech sample at the top of this section). “…the samples were often harmonic and aesthetically pleasing, even when produced by unconditional models.” From the WaveNet blog post on the DeepMind site: WaveNets open up a lot of possibilities for TTS, music generation and audio modelling in general. The fact that directly generating timestep per timestep with deep neural networks works at all for 16kHz audio is really surprising, let alone that it outperforms state-of-the-art TTS systems. We are excited to see what we can do with them next.

### Google’s neural machine translation system: bridging the gap between human and machine translation

Google’s Neural Machine Translation (GNMT) system is an end-to-end learning system for automated translation. Previous NMT systems suffered in one or more of three key areas: training and inference speed, coping with rare words, and sometimes failing to translate all of the words in a source sentence. GNMT is now in production at Google, having handsomely beaten the Phrase-Based Machine Translation (PBMT) system used in production at Google beforehand.
Understanding how it all fits together will draw upon many of the papers we’ve looked at so far. At the core it’s a sequence-to-sequence learning network with an encoder network, a decoder network, and an attention network. The encoder transforms a source sentence into a list of vectors, one vector per input symbol. Given this list of vectors, the decoder produces one symbol at a time, until the special end-of-sentence symbol (EOS) is produced. The encoder and decoder are connected through an attention module which allows the decoder to focus on different regions of the source sentence during the course of decoding. The decoder is a combination of an RNN network and a softmax layer. Deeper models give better accuracy, but the team found that LSTM layers worked well up to 4 layers, barely with 6 layers, and very poorly beyond 8 layers. What to do? We learned the answer earlier this week: add residual connections. Since in translation words in the source sentence may appear anywhere in the output sentence, the encoder uses a bi-directional RNN. Only the bottom layer is bi-directional – one LSTM layer processes the sentence left-to-right, while its twin processes the sentence right-to-left. The encoder and decoder networks are placed on multiple GPUs, with each layer running on a different GPU. As well as using multiple GPUs, to get inference time down quantized inference involving reduced-precision arithmetic is also used. One of the main challenges in deploying our Neural Machine Translation model to our interactive production translation service is that it is computationally intensive at inference, making low latency translation difficult, and high volume deployment computationally expensive. Quantized inference using reduced precision arithmetic is one technique that can significantly reduce the cost of inference for these models, often providing efficiency improvements on the same computational devices.
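As a rough illustration of what reduced-precision inference means (my own sketch, not GNMT’s actual scheme, which is considerably more involved): weights are mapped to small integers plus a shared scale factor, and arithmetic then proceeds on the cheap integers.

```python
def quantize_weights(w, bits=8):
    """Symmetric linear quantisation: floats -> ints in [-127, 127]
    plus one shared scale factor per weight vector."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(v) for v in w) / qmax or 1.0
    q = [round(v / scale) for v in w]
    return q, scale

def dequantize_weights(q, scale):
    return [v * scale for v in q]

w = [0.31, -1.27, 0.05, 0.88]
q, s = quantize_weights(w)
w_hat = dequantize_weights(q, s)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))  # bounded by scale/2
```

The appeal in production is that the per-weight error is bounded by half the scale, while memory traffic and multiply cost drop substantially.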
The model is trained using full-precision floats; approximation is used only for production inference. Here are the decoding times for 6003 English-French sentences across CPU, GPU, and Google’s Tensor Processing Unit (TPU) respectively: Firstly, note that the TPU beats the CPU and GPU hands-down. The CPU beats the GPU because “our current decoder implementation is not fully utilizing the computation capacities that a GPU can theoretically offer during inference.”

#### Dealing with out of vocabulary words

Neural Machine Translation models often operate with fixed word vocabularies even though translation is fundamentally an open vocabulary problem (names, numbers, dates etc.)… Our most successful approach […] adopts the wordpiece model (WPM) implementation initially developed to solve a Japanese/Korean segmentation problem for the Google speech recognition system. This approach is completely data-driven and guaranteed to generate a deterministic segmentation for any possible sequence of characters. For example, “Jet makers feud over seat width with big orders at stake” turns into the word pieces: “_J et _makers _fe ud _over _seat _width _with _big _orders _at _stake.” The words ‘Jet’ and ‘feud’ are both broken into two word pieces. Given a training corpus and a number of desired tokens D, the optimization problem is to select D wordpieces such that the resulting corpus is minimal in the number of wordpieces when segmented according to the chosen wordpiece model.

#### Overall model performance

The following chart shows side-by-side scores for translations made by the previous production system (PBMT), the new GNMT system, and humans fluent in both languages. Side-by-side scores range from 0 to 6, with a score of 0 meaning “completely nonsense translation”, and a score of 6 meaning “perfect translation: the meaning of the translation is completely consistent with the source, and the grammar is correct”.
A translation is given a score of 4 if “the sentence retains most of the meaning of the source sentence, but may have some grammar mistakes”, and a translation is given a score of 2 if “the sentence preserves some of the meaning of the source sentence but misses significant parts”. These scores are generated by human raters who are fluent in both languages and hence often capture translation quality better than BLEU scores.

Today we’re pressing on with the top 100 awesome deep learning papers list, and the section on recurrent neural networks (RNNs). This contains only four papers (joy!), and even better we’ve covered two of them previously (Neural Turing Machines and Memory Networks, the links below are to the write-ups). That leaves us with only two papers to cover today; however, the first paper does run to 43 pages and it’s a lot of fun, so I’m glad to be able to devote a little more space to it. These papers are easier to understand with some background in RNNs and LSTMs. Christopher Olah has a wonderful post on “Understanding LSTM networks” which I highly recommend.

### Generating sequences with recurrent neural networks

This paper explores the use of RNNs, in particular LSTMs, for generating sequences. It looks at sequences over discrete domains (characters and words), generating synthetic Wikipedia entries, and sequences over real-valued domains, generating handwriting samples. I especially like the moment where Graves demonstrates that the trained networks can be used to ‘clean up’ your handwriting, showing what a slightly neater / easier to read version of your handwriting could look like. We’ll get to that shortly…

RNNs can be trained for sequence generation by processing real data sequences one step at a time and predicting what comes next. Assuming the predictions are probabilistic, novel sequences can be generated from a trained network by iteratively sampling from the network’s output distribution, then feeding in the sample as input at the next step.
In other words, by making the network treat its inventions as if they were real, much like a person dreaming. Using LSTMs effectively gives the network a longer memory, enabling it to look back further in history to formulate its predictions. The basic RNN architecture used for all the models in the paper looks like this: Note how each output vector $y_t$ is used to parameterise a predictive distribution $Pr(x_{t+1} | y_t)$ over the next possible inputs (the dashed lines in the above figure). Also note the use of ‘skip connections’ as we looked at in yesterday’s post.

The LSTM cells used in the network look like this: They are trained with the full gradient using backpropagation. To prevent the derivatives becoming too large, the derivatives of the loss with respect to the inputs to the LSTM layers are clipped to lie within a predefined range. Onto the experiments…

#### Text prediction

For text prediction we can either use sequences of words, or sequences of characters. With one-hot encodings, the number of different classes for words makes for very large input vectors (e.g. a vocabulary with tens of thousands of words or more). In contrast, the number of characters is much more limited. Also, … predicting one character at a time is more interesting from the perspective of sequence generation, because it allows the network to invent novel words and strings. In general, the experiments in this paper aim to predict at the finest granularity found in the data, so as to maximise the generative flexibility of the network.

The Penn Treebank dataset is a selection of Wall Street Journal articles. It’s relatively small at just over a million words in total, but widely used as a language modelling benchmark. Both word and character level networks were trained on this corpus using a single hidden layer with 1000 LSTM units. Both networks are capable of overfitting the training data, so regularisation is applied.
Two forms of regularisation were experimented with: weight noise applied at the start of each training sequence, and adaptive weight noise, where the variance of the noise is learned along with the weights. The word-level RNN performed better than the character-based one, but the gap closes with regularisation (perplexity of 117 in the best word-based configuration, vs 122 for the best character-based configuration). “Perplexity can be considered to be a measure of on average how many different equally most probable words can follow any given word. Lower perplexities represent better language models…” (source: http://www1.icsi.berkeley.edu/Speech/docs/HTKBook3.2/node188_mn.html)

Much more interesting is a network that Graves trains on the first 96M bytes of the Wikipedia corpus (as of March 3rd 2006, captured for the Hutter prize competition). This has seven hidden layers of 700 LSTM cells each. This is an extract of the real Wikipedia data: And here’s a sample generated by the network (for additional samples, see the full paper): The sample shows that the network has learned a lot of structure from the data, at a wide range of different scales. Most obviously, it has learned a large vocabulary of dictionary words, along with a subword model that enables it to invent feasible-looking words and names: for example “Lochroom River”, “Mughal Ralvaldens”, “submandration”, “swalloped”. It has also learned basic punctuation, with commas, full stops and paragraph breaks occurring at roughly the right rhythm in the text blocks.

It can correctly open and close quotation marks and parentheses – a clear indicator of the model’s memory, because these often span a distance that a short-range context model could not handle. Likewise, it can generate distinct large-scale regions such as XML headers, bullet-point lists, and article text. Of course, the actual generated articles don’t make any sense to a human reader; it is just their structure that is mimicked.
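The generation procedure used throughout this section – sample a symbol from the output distribution, then feed it back in as the next input – can be sketched as follows (the toy three-symbol “model” here is invented purely for illustration):

```python
import numpy as np

def generate(next_dist, seed, length, rng):
    """Iterative sampling: draw from the predictive distribution over the
    next symbol, then feed the sample back in as the next input.
    `next_dist` stands in for a trained RNN's softmax output."""
    seq = [seed]
    for _ in range(length):
        probs = next_dist(seq[-1])
        seq.append(int(rng.choice(len(probs), p=probs)))
    return seq

# Toy "model": symbol i is usually followed by (i + 1) mod 3.
def next_dist(x):
    p = np.full(3, 0.1)
    p[(x + 1) % 3] = 0.8
    return p

rng = np.random.default_rng(0)
sample = generate(next_dist, 0, 10, rng)
assert len(sample) == 11 and all(0 <= s < 3 for s in sample)
```

The network is thus treating its own inventions as real inputs, which is exactly what makes novel sequences possible.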
When we move onto handwriting though, the outputs do make a lot of sense to us…

#### Handwriting prediction

To test whether the prediction network could also be used to generate convincing real-valued sequences, we applied it to online handwriting data (online in this context means that the writing is recorded as a sequence of pen-tip locations, as opposed to offline handwriting, where only the page images are available). Online handwriting is an attractive choice for sequence generation due to its low dimensionality (two real numbers per data point) and ease of visualisation.

The dataset consists of handwritten lines on a smart whiteboard, with x,y co-ordinates and end-of-stroke markers (yes/no) captured at each time point. The main challenge was figuring out how to determine a predictive distribution for real-valued inputs. The solution is to use mixture density networks. Here the outputs of the network are used to parameterise a mixture distribution. Each output vector consists of the end-of-stroke probability e, along with a set of means, standard deviations, correlations, and mixture weights for the mixture components used to predict the x and y positions. See pages 20 and 21 for the detailed explanation.

Here are the mixture density outputs for predicted locations as the word under is written. The small blobs show accurate predictions while individual strokes are being written, and the large blobs show greater uncertainty at the end of strokes when the pen is lifted from the whiteboard. The best samples were generated by a network with three hidden layers of 400 LSTM cells each, and 20 mixture components to model the offsets. Here are some samples created by the network. The network has clearly learned to model strokes, letters and even short words (especially common ones such as ‘of’ and ‘the’). It also appears to have learned a basic character-level language model, since the words it invents (‘eald’, ‘bryoes’, ‘lenrest’) look somewhat plausible in English.
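To see how a mixture-density output like this is actually sampled from, here is a sketch with just two components – every parameter value below is invented for illustration, not taken from the trained network:

```python
import numpy as np

def sample_offset(pi, mu, sigma, rho, e, rng):
    """Sample one (dx, dy, end_of_stroke) step: pick a mixture component,
    draw from its correlated bivariate Gaussian, and draw a Bernoulli
    end-of-stroke flag with probability e."""
    k = int(rng.choice(len(pi), p=pi))               # pick a mixture component
    mx, my = mu[k]
    sx, sy = sigma[k]
    cov = [[sx * sx, rho[k] * sx * sy],
           [rho[k] * sx * sy, sy * sy]]              # covariance from sigma, rho
    dx, dy = rng.multivariate_normal([mx, my], cov)
    return dx, dy, rng.random() < e                  # Bernoulli end-of-stroke

rng = np.random.default_rng(0)
dx, dy, eos = sample_offset(
    pi=np.array([0.7, 0.3]), mu=[(1.0, 0.0), (0.0, 1.0)],
    sigma=[(0.1, 0.1), (0.2, 0.2)], rho=[0.0, 0.5], e=0.05, rng=rng)
```

Chaining such samples, and feeding each one back in as the next input, traces out a pen trajectory.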
Given that the average character occupies more than 25 timesteps, the invention of plausible-looking words again demonstrates the network’s ability to generate coherent long-range structures.

#### Handwriting generation

Those samples do of course look like handwriting, but as with our Wikipedia example, the actual words are nonsense. Can we learn to generate handwriting for a given text? To meet this challenge a soft window is convolved with the text string and fed as an extra input to the prediction network. The parameters of the window are output by the network at the same time as it makes the predictions, so that it dynamically determines an alignment between the text and the pen locations. Put simply, it learns to decide which character to write next. The network learns how far to slide the text window at each step, rather than learning an absolute position: “Using offsets was essential to getting the network to align the text with the pen trace.” And here are samples generated by the resulting network: Pretty good!

#### Biased and primed sampling to control generation

One problem with unbiased samples is that they tend to be difficult to read (partly because real handwriting is difficult to read, and partly because the network is an imperfect model). Intuitively, we would expect the network to give higher probability to good handwriting because it tends to be smoother and more predictable than bad handwriting. If this is true, we should aim to output more probable elements of Pr(x|c) if we want the samples to be easier to read. A principled search for high probability samples could lead to a difficult inference problem, as the probability of every output depends on all previous outputs. However a simple heuristic, where the sampler is biased towards more probable predictions at each step independently, generally gives good results.
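One generic way to realise such a per-step bias is to sharpen the predictive distribution before sampling. (The paper biases the mixture parameters themselves; this power-and-renormalise version is an illustrative stand-in, not Graves’ exact formulation.)

```python
import numpy as np

def bias_distribution(probs, b):
    """Sharpen a predictive distribution toward its mode: raise each
    probability to the power (1 + b) and renormalise. b = 0 leaves the
    distribution unchanged; large b approaches greedy decoding."""
    p = np.asarray(probs) ** (1.0 + b)
    return p / p.sum()

p = np.array([0.5, 0.3, 0.2])
assert np.allclose(bias_distribution(p, 0.0), p)   # b = 0: unbiased sampling
sharper = bias_distribution(p, 5.0)                # most mass now on the mode
```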
As we increase the bias towards higher probability predictions, the handwriting gets neater and neater… As a final flourish, we can prime the network with a real sequence in the handwriting of a particular writer. The network then continues in this style, generating handwriting mimicking the author’s style. Combine this with bias, and you also get neater versions of their handwriting!

### Conditional random fields as recurrent neural networks

Now we turn our attention to a new challenge problem that we haven’t looked at yet: semantic segmentation. This requires us to label the pixels in an image to indicate what kind of object they represent/are part of (land, building, sky, bicycle, chair, person, and so on…). By joining together regions with the same label, we segment the image based on the meaning of the pixels. Like this: (The CRF-RNN column in the above figure shows the results from the network architecture described in this paper.)

As we’ve seen, CNNs have been very successful in image classification and detection, but there are challenges applying them to pixel-labelling problems. Firstly, traditional CNNs don’t produce fine-grained enough outputs to label every pixel. But perhaps more significantly, even if we could overcome that hurdle, they don’t have any way of understanding that if pixel A is part of, say, a bicycle, then it’s likely that the adjacent pixel B is also part of a bicycle. Or in more fancy words: CNNs lack smoothness constraints that encourage label agreement between similar pixels, and spatial and appearance consistency of the labelling output. Lack of such smoothness constraints can result in poor object delineation and small spurious regions in the segmentation output.

Conditional Random Fields (a variant of Markov Random Fields) are very good at smoothing. They’re basically models that take into account surrounding context when making predictions.
So maybe we can combine Conditional Random Fields (CRF) and CNNs in some way to get the best of both worlds? The key idea of CRF inference for semantic labelling is to formulate the label assignment problem as a probabilistic inference problem that incorporates assumptions such as the label agreement between similar pixels. CRF inference is able to refine weak and coarse pixel-level label predictions to produce sharp boundaries and fine-grained segmentations. Therefore, intuitively, CRFs can be used to overcome the drawbacks in utilizing CNNs for pixel-level labelling tasks.

Sounds good in theory, but it’s quite tricky in practice. The authors proceed in two stages: firstly, showing that one iteration of the mean-field algorithm used in CRF inference can be modelled as a stack of common CNN layers; and secondly, showing that by repeating the CRF-CNN stack, with the outputs of each iteration fed back in as the inputs to the next, you end up with an RNN structure, dubbed CRF-RNN, that implements the full algorithm. Our approach comprises a fully convolutional network stage, which predicts pixel-level labels without considering structure, followed by a CRF-RNN stage, which performs CRF-based probabilistic graphical modelling for structured prediction. The complete system, therefore, unifies the strengths of both CNNs and CRFs and is trainable end-to-end using the back-propagation algorithm and the Stochastic Gradient Descent (SGD) procedure.

There’s a lot of detail in the paper, some of which passes straight over my head, for example the following sentence, which warranted me bringing out the ‘hot pink’ highlighter: In terms of permutohedral lattice operations, this can be accomplished by only reversing the order of the separable filters in the blur stage, while building the permutohedral lattice, splatting, and slicing in the same way as in the forward pass. (What is a permutohedron you may ask?
It’s actually not as scary as it sounds…) Fortunately, we’re just trying to grok the big picture in this write-up, and for that the key is to understand how CNNs can model one mean-field iteration, and then how we stack the resulting structures in RNN formation.

#### Mean-field iteration as a stack of CNN layers

Consider a vector X with one element per pixel, representing the label assigned to that pixel, drawn from some pre-defined set of labels. We construct a graph where the vertices are the elements in X, and edges between the elements hold pairwise ‘energy’ values. Minimising the overall energy of the configuration yields the most probable label assignments. Energy has two components: a unary component which depends only on the individual pixel and, roughly speaking, predicts labels for pixels without considering smoothness and consistency; and pairwise energies that provide an image data-dependent smoothing term that encourages assigning similar labels to pixels with similar properties. The energy calculations are based on feature vectors derived from image features. Mean-field iteration is used to find an approximate solution for the minimal energy configuration. The steps involved in a single iteration are:

• message passing,
• re-weighting,
• compatibility transformation,
• normalisation.

Message passing is made tractable by using approximation techniques (those permutohedral lattice thingies) and two Gaussian kernels: a spatial kernel and a bilateral kernel. Re-weighting can be implemented as a 1×1 convolution. Each kernel is given independent weights: The intuition is that the relative importance of the spatial kernel vs the bilateral kernel depends on the visual class. For example, bilateral kernels may have on the one hand a high importance in bicycle detection, because similarity of colours is determinant; on the other hand they may have low importance for TV detection, given that whatever is inside the TV screen may have different colours.
Compatibility transformation assigns penalties when different labels are assigned to pixels with similar properties. It is implemented with a convolutional filter with learned weights (equivalent to learning a label compatibility function). The addition (copying) and normalisation (softmax) operations are easy.

#### CRF as a stack of CRF-CNN layers

Multiple mean-field iterations can be implemented by repeating the above stack of layers in such a way that each iteration takes Q value estimates from the previous iteration and the unary values in their original form. This is equivalent to treating the iterative mean-field inference as a Recurrent Neural Network (RNN)… We name this RNN structure CRF-RNN. Recall that the overall network has a fully-convolutional network stage predicting pixel labels in isolation, followed by a CRF-CNN for structured prediction. In one forward pass the computation goes through the initial CNN stage, and then it takes T iterations for data to leave the loop created by the RNN. Once the data leaves this loop, a softmax loss layer directly follows and terminates the network. The resulting network achieves the state-of-the-art on the Pascal VOC 2010-2012 test datasets.

Today we’re looking at the final four papers from the ‘convolutional neural networks’ section of the ‘top 100 awesome deep learning papers‘ list.

### Deep residual learning for image recognition

Another paper, another set of state-of-the-art results, this time with 1st place on the ILSVRC 2015 classification task (beating GoogLeNet from the year before), as well as 1st place on the ImageNet detection, ImageNet localisation, COCO detection, and COCO segmentation competitions. For those of you old enough to remember it, there’s a bit of a Crocodile Dundee moment in this paper: “22 layers? That’s not a deep network, this is a deep network…” How deep? About 100 layers seems to work well, though the authors tested networks up to 1202 layers deep!!
Which begs the question: how on earth do you effectively train a network that is hundreds or even a thousand layers deep? Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? No, it’s not. A degradation problem occurs as network depth increases: accuracy saturates, and then degrades rapidly as further layers are added. So after a point, the more layers you add, the worse the error rates. For example:

An interesting thought experiment leads the team to make a breakthrough and defeat the degradation problem. Imagine a deep network where after a certain (relatively shallow) number of layers, each additional layer is simply an identity mapping from the previous one: “the existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart.” Suppose the desired mapping through a number of layers for certain features really is close to the identity mapping. The degradation problem suggests that the network finds it hard to learn such mappings across multiple non-linear layers.

Let’s say the output of one layer is $\mathbf{x}$. Now skip forward three layers in the network and imagine that we’d like the ability to easily learn an identity mapping (or close to it) for some elements of $\mathbf{x}$ – an easy way to do this is just to provide the elements of $\mathbf{x}$ as direct inputs… But we don’t want to just double the width of the receiving layer, and for some other elements of $\mathbf{x}$ perhaps we do want to learn some non-linear function. So here comes the clever part: suppose the ideal function we could learn is $H(\mathbf{x})$. If the intervening hidden layers can learn that, then they can also learn $F(\mathbf{x}) = H(\mathbf{x}) - \mathbf{x}$.
Maybe you can see where this is going: we can now simply combine the output of the previous layer with $\mathbf{x}$ using simple addition, and we get $F(\mathbf{x}) + \mathbf{x} = H(\mathbf{x})$! By constructing things in this way, we make it easy for the network to include both non-linear elements and also near-identity elements (by learning weights close to zero). Adding these layer-jumping + addition short-circuits to a deep network creates a residual network. Here’s a 34-layer example: Here are 18 and 34 layer networks trained on ImageNet without any residual layers. You can see the degradation effect with the 34 layer network showing higher error rates. Take the same networks and pop in the residual layers, and now the 34-layer network is handsomely beating the 18-layer one. Here are the results all the way up to 152 layer networks: Now that’s deep!

### Identity mappings in deep residual networks

This paper analyses the skip-layers introduced in the residual networks (ResNets) that we just looked at, to see whether the identity function is the best option for skipping. It turns out that it is, and that using an identity function for activation as well makes residual units even more effective. Recall that we used $F(\mathbf{x}) + \mathbf{x}$ as the input when skipping layers. More generally, this is $F(\mathbf{x}) + h(\mathbf{x})$ where h is the identity function. But h could also be another kind of function – for example constant scaling. The authors try a variety of different h functions, but identity works best. When we consider the activation function f as well (the original paper used ReLU) then we have $f(F(\mathbf{x}) + h(\mathbf{x}))$. Things work even better when f is also an identity mapping (instead of ReLU). To construct an identity mapping f, we view the activation functions (ReLU and Batch Normalization) as “pre-activation” of the weight layers, in contrast to conventional wisdom of “post-activation”.
This point of view leads to a new residual unit design, shown in Fig 1(b). Making both h and f identity mappings allows a signal to be directly propagated from one unit to any other units, in both forward and backward passes. Here’s the difference the new residual unit design makes when training a 1001 layer ResNet: Based on this unit, we present competitive results on CIFAR-10/100 with a 1001-layer ResNet, which is much easier to train and generalizes better than the original ResNet. We further report improved results on ImageNet using a 200-layer ResNet, for which the counterpart of [the original ResNet] starts to overfit. These results suggest that there is much room to exploit the dimension of network depth, a key to the success of modern deep learning.

### Inception-v4, Inception-ResNet and the impact of residual connections on learning

While all this ResNet excitement was going on, the Inception team kept refining their architecture, up to Inception v3. The obvious question became: what happens if we take the Inception architecture and we add residual connections to it? Does that further improve training time or accuracy? This paper compares four models: Inception v3, a new Inception v4 introduced in this paper, and variations of Inception v3 and v4 that also have residual connections. It’s also fun to see ever more complex network building blocks being used as modules in higher level architectures. Inception v4 is too much to show in one diagram, but here’s the overall schematic: And as a representative selection, here’s what you’ll find if you dig into the stem module: And the ’17 x 17 Inception-B’ module: Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network.
This raises the question of whether there are any benefits in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. Since Inception v4 and Inception-ResNet-v2 (Inception v4 with residual connections) give overall very similar results, this seems to suggest that – at least at this depth – residual networks are not necessary for training deep networks. However, the use of residual connections seems to improve the training speed greatly, which is alone a great argument for their use. Both the Inception v4 and Inception-ResNet-v2 models outperform previous Inception networks, by virtue of the increased model size.

### Rethinking the Inception architecture for computer vision

This paper comes a little out of order in our series, as it covers the Inception v3 architecture. The bulk of the paper though is a collection of advice for designing image processing deep convolutional networks. Inception v3 just happens to be the result of applying that advice. In this paper, we start with describing a few general principles and optimization ideas that proved to be useful for scaling up convolution networks in efficient ways.

• Avoid representational bottlenecks – representation size should gently decrease from the inputs to the outputs before reaching the final representation used for the task at hand. Big jumps (downward) in representation size cause extreme compression of the representation and bottleneck the model.
• Higher dimensional representations are easier to process locally in a network; more activations per tile allow for more disentangled features. The resulting networks train faster.
• Spatial aggregation of lower dimensional embeddings can be done without much or any loss in representational power. “For example, before performing a more spread out (e.g. 3×3) convolution, one can reduce the dimension of the input representation before the spatial aggregation without expecting serious adverse effects.”
• Balance network width and depth – optimal performance is achieved by balancing the number of filters per stage and the depth of the network. I.e., if you want to go deeper you should also consider going wider. Although these principles might make sense, it is not straightforward to use them to improve the quality of networks out of the box. The idea is to use them judiciously in ambiguous situations only.
• Factorize into smaller convolutions – a larger (e.g. 5×5) convolution is disproportionately more expensive than a smaller (e.g. 3×3) one, by a factor of 25/9 in this case. Replacing the 5×5 convolution with a two-layer network of 3×3 convolutions, reusing activations between adjacent tiles, achieves the same end but uses only (9+9)/25 of the computation.

The above results suggest that convolutions with filters larger than 3 × 3 might not be generally useful as they can always be reduced into a sequence of 3 × 3 convolutional layers. Still we can ask the question whether one should factorize them into smaller, for example 2×2, convolutions. However, it turns out that one can do even better than 2 × 2 by using asymmetric convolutions, e.g. n × 1. For example, using a 3 × 1 convolution followed by a 1 × 3 convolution is equivalent to sliding a two-layer network with the same receptive field as a 3×3 convolution. The two-layer solution is 33% cheaper.

Taking this idea even further (e.g. replacing 7×7 with a 1×7 followed by a 7×1) works well on medium grid sizes (between 12 and 20), so long as it is not used in early layers. The authors also revisit the question of the auxiliary classifiers used to aid training in the original Inception.
“Interestingly, we found that auxiliary classifiers did not result in improved convergence early in the training… near the end of training the network with the auxiliary branches starts to overtake the accuracy of the network without, and reaches a slightly higher plateau.” Removing the lower of the two auxiliary classifiers also had no effect. Together with the earlier observation in the previous paragraph, this means that the original hypothesis of [Inception] that these branches help evolving the low-level features is most likely misplaced. Instead, we argue that the auxiliary classifiers act as a regularizer. There are several other hints and tips in the paper that we don’t have space to cover. It’s well worth checking out if you’re building these kinds of models yourself.

Today it’s the second tranche of papers from the convolutional neural nets section of the ‘top 100 awesome deep learning papers‘ list:

### Return of the devil in the details: delving deep into convolutional nets

This is a very nice study. CNNs had been beating handcrafted features in image recognition tasks, but it was hard to pick apart what really accounted for the differences (and indeed, the differences between different CNN models too), since comparisons across all of them were not done on a shared common basis. For example, we’ve seen augmentation typically used with CNN training – how much of the difference between IFV (the Improved Fisher Vector hand-engineered feature representation) and CNNs is attributable to augmentation, and not the image representation used? So Chatfield et al. studied the IFV shallow representation, three different CNN-based deep representations, and deep representations with pre-training and then fine-tuning on the target dataset. For all of the studies, the same task (PASCAL VOC classification) was used.
The three different CNN representations are denoted CNN-F (Fast), based on Krizhevsky’s architecture; CNN-M (Medium), using a decreased stride and smaller receptive field in the first convolutional layer; and CNN-S (Slow), based on the ‘accurate’ network from OverFeat. Key findings:

• Augmentation improves performance by ~3% for both IFV and CNNs. (Or to put it another way, the use of augmentation accounts for 3% of the advantage attributed to deep methods.) Flipping on its own helps only marginally, but flipping combined with cropping works well.
• Both IFV and CNNs are affected by adding or subtracting colour information. Retraining CNNs after converting images to grayscale results in about a 3% performance drop.
• CNN-based methods still outperform shallow encodings, even accounting for augmentation improvements etc., by a large margin of approximately 10%.
• CNN-M and CNN-S both outperform CNN-F (Fast) by 2-3%. CNN-M is about 25% faster than CNN-S.
• Retraining the CNNs so that the final layer was of lower dimensionality resulted in a marginal performance boost.
• Fine-tuning makes a significant difference, improving results by about 2.7%.

In this paper we presented a rigorous empirical evaluation of CNN-based methods for image classification, along with a comparison with more traditional shallow feature encoding methods. We have demonstrated that the performance of shallow representations can be significantly improved by adopting data augmentation, typically used in deep learning. In spite of this improvement, deep architectures still outperform the shallow methods by a large margin. We have shown that the performance of deep representations on the ILSVRC dataset is a good indicator of their performance on other datasets, and that fine-tuning can further improve on already very strong results achieved using the combination of deep representations and a linear SVM.
### Spatial pyramid pooling in deep convolutional networks for visual recognition

The CNN architectures we’ve looked at so far have a series of convolutional layers (5 is popular) followed by fully connected layers and an N-way softmax output. One consequence of this is that they can only work with images of a fixed size (e.g. 224 × 224). Why? The sliding windows used in the convolutional layers can actually cope with any image size, but the fully-connected layers have a fixed-size input by construction. It is the point at which we transition from the convolutional layers to the fully-connected layers, therefore, that imposes the size restriction. As a result images are often cropped or warped to fit the size requirements of the network, which is far from ideal.

Spatial pyramid pooling (SPP) adds a new layer between the convolutional layers and the fully-connected layers. Its job is to map any size input down to a fixed size output. The idea of spatial pyramid pooling, also known as spatial pyramid matching or just ‘multi-level pooling’, pre-existed in computer vision, but had not been applied in the context of CNNs. SPP works by dividing the feature maps output by the last convolutional layer into a number of spatial bins with sizes proportional to the image size, so the number of bins is fixed regardless of the image size. Bins are captured at different levels of granularity – for example, one layer of 16 bins dividing the image into a 4×4 grid, another layer of 4 bins dividing the image into a 2×2 grid, and a final layer comprising the whole image. In each spatial bin, the responses of each filter are simply pooled using max pooling. Since the number of bins is known, we can just concatenate the SPP outputs to give a fixed length representation (see the figure above). This not only allows arbitrary aspect ratios, but also allows arbitrary scales… When the input image is at different scales, the network will extract features at different scales.
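The pooling step itself is simple enough to sketch – bin edges proportional to the input size, max-pool within each bin, concatenate across the pyramid levels (shapes and level sizes below are illustrative):

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(4, 2, 1)):
    """Max-pool an (H, W, C) feature map into n x n bins at each pyramid
    level, with bin edges proportional to H and W, and concatenate.
    Output length is C * sum(n*n for n in levels), whatever H and W are."""
    H, W, C = fmap.shape
    out = []
    for n in levels:
        hs = np.linspace(0, H, n + 1).astype(int)   # bin boundaries scale
        ws = np.linspace(0, W, n + 1).astype(int)   # with the input size
        for i in range(n):
            for j in range(n):
                out.append(fmap[hs[i]:hs[i + 1], ws[j]:ws[j + 1]].max(axis=(0, 1)))
    return np.concatenate(out)

# Two differently sized feature maps produce identically sized vectors:
rng = np.random.default_rng(0)
v1 = spatial_pyramid_pool(rng.normal(size=(13, 9, 8)))
v2 = spatial_pyramid_pool(rng.normal(size=(7, 20, 8)))
assert v1.shape == v2.shape == ((16 + 4 + 1) * 8,)
```

That fixed-length vector is what gets handed to the fully-connected layers, removing the fixed-input-size restriction.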
Interestingly, the coarsest pyramid level has a single bin that covers the entire image. This is in fact a "global pooling" operation, which is also investigated in several concurrent works.

An SPP layer added to four different network architectures, including AlexNet (Krizhevsky et al.) and OverFeat, improved the accuracy of all of them. "The gain of multi-level pooling is not simply due to more parameters; rather it is because the multi-level pooling is robust to the variance in object deformations and spatial layout."

The SPP technique can also be used for detection. The state-of-the-art (as of the time of publication) R-CNN method runs feature extraction on each of 2000 candidate windows extracted from an input image. This is expensive and slow. An SPP-net used for object detection extracts feature maps only once (possibly at multiple scales). Then just the spatial pyramid pooling piece is run once for each candidate window. This turns out to give comparable results, but with running times 38x-102x faster depending on the number of scales.

### Very deep convolutional networks for large-scale image recognition

We've now seen the ConvNet architecture and some variations exploring what happens with different window sizes and strides, and training and testing at multiple scales. In this paper Simonyan and Zisserman hold all those variables fixed, and explore what effect the depth of the network has on classification accuracy.

The basic setup is a fixed-size 224×224 RGB input image with the mean pixel value (computed over the training set) subtracted from each pixel. A stack of convolutional layers (with varying depth in each of the experiments) uses filters with a 3×3 receptive field, and in one configuration a layer is added with a 1×1 field (which can be seen as a linear transformation of the input channels, followed by a non-linearity). The stride is fixed at 1 pixel. Spatial pooling is carried out by five max-pooling layers interleaved with the convolutional layers.
This stack is then fed into three fully-connected layers and a final softmax layer. The hidden layers all use ReLU activation. Given this basic structure, the actual networks evaluated are shown in the table below (note that only one of them uses local response normalisation – LRN):

Here's what the authors find. Firstly, local response normalisation did not improve accuracy (A-LRN vs A), but adds to training time, so it is not employed in the deeper architectures. Secondly, classification error decreases with increased ConvNet depth (up to 19 layers in configuration E).

The error rate of our architecture saturates when the depth reaches 19 layers, but even deeper models might be beneficial for larger datasets. We also compared the net B with a shallow net with five 5×5 conv. layers, which was derived from B by replacing each pair of 3×3 conv. layers with a single 5×5 conv. layer (which has the same receptive field as explained in Sect. 2.3). The top-1 error of the shallow net was measured to be 7% higher than that of B (on a center crop), which confirms that a deep net with small filters outperforms a shallow net with larger filters.

The results above were achieved when training at a single scale. Even better results were achieved by adding scale jittering at training time (slightly rescaled versions of the original image).

### Going deeper with convolutions

So now things start to get really deep! This is the paper that introduced the 'Inception' network architecture, and a particular instantiation of it called 'GoogLeNet' which achieved a new state of the art in the 2014 ILSVRC (ImageNet) competition. GoogLeNet is 22 layers deep, and has a pretty daunting overall structure, which I thought I'd just include here in its full glory! Despite the intimidating-looking structure, GoogLeNet actually uses 12x fewer parameters than the winning Krizhevsky ConvNet of two years prior. At the same time, it is significantly more accurate.
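Simple parameter counting illuminates both the VGG comparison above (a pair of stacked 3×3 layers sees the same 5×5 receptive field as one 5×5 layer, with fewer weights) and how extra depth need not mean extra parameters. A quick sketch (the channel count is my own illustrative choice):

```python
def receptive_field(kernels):
    """Receptive field of a stack of stride-1 convolutions."""
    rf = 1
    for k in kernels:
        rf += k - 1  # each k x k layer widens the field by k-1
    return rf

assert receptive_field([3, 3]) == receptive_field([5]) == 5
assert receptive_field([3, 3, 3]) == 7

# Weight counts with C input and C output channels per layer:
C = 64
two_3x3 = 2 * (3 * 3 * C * C)   # 18 C^2 weights
one_5x5 = 5 * 5 * C * C         # 25 C^2 weights
assert two_3x3 < one_5x5
```

So the deeper 3×3 stack covers the same spatial extent with 18C² rather than 25C² weights, and inserts an extra non-linearity in between.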
Efficiency in terms of power and memory use was an explicit design goal of the Inception architecture:

It is noteworthy that the considerations leading to the design of the deep architecture presented in this paper included this factor [efficiency] rather than having a sheer fixation on accuracy numbers. For most of the experiments, the models were designed to keep a computational budget of 1.5 billion multiply-adds at inference time, so that they do not end up to be a purely academic curiosity, but could be put to real world use, even on large datasets, at a reasonable cost.

You can always make your network 'bigger' (both in terms of the number of layers – depth, and the number of units in each layer – width), and in principle this leads to higher-quality models. However, bigger networks have many more parameters, making them prone to overfitting; to avoid this you need much more training data. They also require much more computational resource to train. "For example, in a deep vision network, if two convolutional layers are chained, any uniform increase in the number of their filters results in a quadratic increase in computation."

One way to counteract this is to introduce sparsity. Arora et al., in "Provable bounds for learning some deep representations", showed that if the probability distribution of the dataset is representable by a large, very sparse deep neural network, then the optimal network topology can be constructed layer after layer by analyzing the correlation statistics of the preceding layer activations and clustering neurons with highly correlated outputs. Unfortunately, today's computing infrastructures are very inefficient when it comes to numerical calculation on non-uniform sparse data structures. Is there any hope for an architecture that makes use of filter-level sparsity, as suggested by the theory, but exploits current hardware by utilizing computations on dense matrices?
The Inception architecture started out as an exploration of this goal.

The main idea of the Inception architecture is to consider how an optimal local sparse structure of a convolutional vision network can be approximated and covered by readily available dense components.

Given the layer-by-layer construction approach, this means we just have to find the optimal layer structure, and then stack it. The basic structure of an Inception layer looks like this:

The 1×1 convolutions detect correlated units in local regions, and the larger (3×3 and 5×5) convolutions detect the (smaller number of) more spatially spread-out clusters. Since pooling has been shown to have a beneficial effect, a pooling path is added in each stage for good luck too.

The problem with the structure as shown above, though, is that it is prohibitively expensive! While this architecture might cover the optimal sparse structure, it would do so very inefficiently, leading to a computational blow-up within a few stages. The solution is to reduce the network dimensions using 1×1 convolutions on all the 3×3 and 5×5 pathways. "Beside being used as reductions, they also include the use of rectified linear activation making them dual purpose." This gives a final structure for an Inception layer that looks like this:

In general, an Inception network is a network consisting of modules of the above type stacked upon each other, with occasional max-pooling layers with stride 2 to halve the resolution of the grid. You can see these layers stacked on top of each other in the GoogLeNet model. That network is 22 layers deep (27 if the pooling layers are also counted). The overall number of building blocks used for the construction of the network is about 100! Propagating gradients all the way back through so many layers is a challenge.
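The cost problem, and the saving from the 1×1 reductions described above, can be checked with back-of-the-envelope arithmetic. The map size and channel counts below are my own illustrative choices, not numbers from the paper:

```python
def conv_madds(h, w, k, c_in, c_out):
    """Multiply-adds for a k x k, stride-1, same-padded convolution
    applied to an h x w feature map."""
    return h * w * k * k * c_in * c_out

H = W = 28
# naive 5x5 path with 256 channels in and 256 out
naive = conv_madds(H, W, 5, 256, 256)
# same path with a 1x1 reduction to 64 channels applied first
reduced = conv_madds(H, W, 1, 256, 64) + conv_madds(H, W, 5, 64, 256)
assert reduced < naive / 3   # roughly a 4x saving for these choices
```

The reduction shrinks the expensive 5×5 convolution's input from 256 to 64 channels, and the cheap 1×1 layer that does the shrinking costs comparatively little.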
We know that shallower networks still have strong discriminative performance, and this fact was exploited by adding auxiliary classifiers connected to intermediate layers (the yellow boxes in the overall network diagram at the start of this section)…

… These classifiers take the form of smaller convolutional networks put on top of the output of the Inception (4a) and (4d) modules. During training, their loss gets added to the total loss of the network with a discount weight (the losses of the auxiliary classifiers were weighted by 0.3).

At inference time, these auxiliary networks are discarded.

Having recovered somewhat from the last push on deep learning papers, it's time this week to tackle the next batch of papers from the 'top 100 awesome deep learning papers.' Recall that the plan is to cover multiple papers per day, in a little less depth than usual per paper, to give you a broad sweep of what's in them (see "An experiment with awesome deep learning papers"). You'll find the first batch of papers in the archives starting from February 27th. For the next three days, we'll be tackling the papers from the 'convolutional neural network models' section, starting with:

Ujjwal Karn's excellent blog post "An intuitive explanation of convolutional neural networks" provides some great background on how convolutional networks work if you need a refresher before diving into these papers.

### ImageNet classification with deep convolutional neural networks

This is a highly influential paper that kicked off a whole stream of work using deep convolutional neural networks for image processing. Two factors changed to make this possible: firstly, the availability of large enough datasets (specifically, the introduction of ImageNet with millions of images, whereas the previous largest datasets had 'only' tens of thousands); and secondly, the development of powerful enough GPUs to efficiently train large networks.

What makes CNNs such a good fit for working with image data?
Their capacity can be controlled by varying their depth and breadth, and they also make strong and mostly correct assumptions about the nature of images (namely, stationarity of statistics and locality of pixel dependencies). Thus, compared to standard feedforward neural networks with similarly-sized layers, CNNs have much fewer connections and parameters and so they are easier to train, while their theoretically-best performance is likely to be only slightly worse.

The network that Krizhevsky et al. constructed has eight learned layers – five of them convolutional and three fully-connected. The output of the last layer is a 1000-way softmax which produces a distribution over the 1000 class labels (the ImageNet challenge is to create a classifier that can determine which object is in the image). Because the network is too large to fit in the memory of one GPU, training is split across two GPUs, and the kernels of the 2nd, 4th, and 5th convolutional layers are connected only to those kernel maps in the previous layer which reside on the same GPU.

The authors single out four other aspects of the model architecture that they feel are particularly important:

1. The use of ReLU activation (instead of tanh). "Deep convolutional neural networks with ReLUs train several times faster than their equivalents with tanh units… Faster learning has a great influence on the performance of large models trained on large datasets."
2. Using multiple GPUs (two!), and splitting the kernels between them with cross-GPU communication only in certain layers. The scheme reduces the top-1 and top-5 error rates by 1.7% and 1.2% respectively compared to a net with half as many kernels in each layer trained on just one GPU.
3. Using local response normalisation, which "implements a form of lateral inhibition inspired by the type found in real neurons, creating competition for big activities amongst neuron outputs computed using different kernels".
4. Using overlapping pooling.
Let pooling layers be of size z × z, and spaced s pixels apart. Traditionally pooling was used with s = z, so that there was no overlap between pools. Krizhevsky et al. used s = 2 and z = 3 to give overlapping pooling. This reduced the top-1 and top-5 error rates by 0.4% and 0.3% respectively.

To reduce overfitting, dropout and data augmentation (translations, reflections, and principal component manipulation) are used during training. The end result:

On ILSVRC-2010, our network achieves top-1 and top-5 test set error rates of 37.5% and 17.0%. In ILSVRC-2012, the network achieved a top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.

That huge winning margin sparked the beginning of a revolution.

Our results show that a large, deep convolutional neural network is capable of achieving record-breaking results on a highly challenging dataset using purely supervised learning. It is notable that our network's performance degrades if a single convolutional layer is removed. For example, removing any of the middle layers results in a loss of about 2% for the top-1 performance of the network. So the depth really is important for achieving our results.

For a more in-depth look at this paper, see my earlier write-up on The Morning Paper.

### Maxout networks

Maxout networks are designed to work hand-in-glove with dropout. As you may recall, training with dropout is like training an exponential number of models all sharing the same parameters. Maxout networks are otherwise-standard multilayer perceptrons or deep CNNs that use a special activation function called the maxout unit. The output of a maxout unit is simply the maximum of its inputs. In a convolutional network, a maxout feature map can be constructed by taking the maximum across k affine feature maps (i.e., pooling across channels, in addition to spatial locations).
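The maxout unit itself is compact enough to sketch directly. A minimal numpy version (shapes and the ReLU-recovery construction below are my own illustration): with k = 2 and one of the two affine maps pinned to zero, a maxout unit reduces to a ReLU, which is one way to see that maxout generalises it.

```python
import numpy as np

def maxout(x, W, b):
    """Maxout activation: the elementwise max over k affine maps.

    x: (n, d) inputs; W: (d, m, k) weights; b: (m, k) biases.
    Returns (n, m) outputs.
    """
    z = np.einsum('nd,dmk->nmk', x, W) + b
    return z.max(axis=2)

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 3))

# k=2, with an identity map and a zero map: maxout computes max(x, 0) = ReLU
W = np.zeros((3, 3, 2))
W[:, :, 0] = np.eye(3)
b = np.zeros((3, 2))
assert np.allclose(maxout(x, W, b), np.maximum(x, 0))
```

With larger k, the unit can approximate any convex activation piecewise-linearly, which is the property the paper builds on.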
When training with dropout, we perform the elementwise multiplication with the dropout mask immediately prior to the multiplication by the weights in all cases – we do not drop inputs to the max operator.

Maxout units make a piecewise linear approximation to an arbitrary convex function, as illustrated below. In evaluation, the combination of maxout and dropout achieved state-of-the-art classification performance on MNIST, CIFAR-10, CIFAR-100, and SVHN (Street View House Numbers). Why does it work so well?

Dropout does exact model averaging in deeper architectures provided that they are locally linear among the space of inputs to each layer that are visited by applying different dropout masks… We argue that dropout training encourages maxout units to have large linear regions around inputs that appear in the training data… Networks of linear operations and maxout may learn to exploit dropout's approximate model averaging technique well.

In addition, rectifier units that saturate at zero are much more common with dropout training. A zero value stops the gradient from flowing through the unit, making it hard for the unit to change under training and become active again. Maxout does not suffer from this problem because gradient always flows through every maxout unit – even when a maxout unit is 0, this 0 is a function of the parameters and may be adjusted. Units that take on negative activations may be steered to become positive again later.

### Network in Network

A traditional convolutional layer applies linear filters to the receptive field, followed by nonlinear activation. The outputs are called feature maps. In this paper Lin et al. point out that such a process cannot learn representations that distinguish well between non-linear concepts.

The convolution filter in CNN is a generalized linear model (GLM) for the underlying data patch, and we argue that the level of abstraction is low with GLM.
By abstraction we mean that the feature is invariant to the variants of the same concept. Even maxout networks impose the constraint that instances of a latent concept lie within a convex set in the input space, to which they make a piecewise linear approximation. A key question, therefore, is whether or not input features do indeed require non-linear functions in order to best represent the concepts contained within them. The authors assert that they do:

…the data for the same concept often live on a nonlinear manifold, therefore the representations that capture these concepts are generally highly nonlinear functions of the input. In NIN, the GLM is replaced with a "micro network" structure which is a general nonlinear function approximator. In this work, we choose the multilayer perceptron as the instantiation of the micro network, which is a universal function approximator and a neural network trainable by back-propagation.

And that's the big idea right there: replace the linear convolutional layer with a mini multilayer perceptron network (called an mlpconv layer). We know that such networks are good at learning functions, so let's just allow the mlpconv network to learn what the best convolution function is. Since the mlpconv layers sit inside a larger network model, the overall approach is called "network in network."

The second change that Lin et al. make to the traditional architecture comes at the last layer:

Instead of adopting the traditional fully connected layers for classification in CNN, we directly output the spatial average of the feature maps from the last mlpconv layer as the confidence of categories via a global average pooling layer, and then the resulting vector is fed into the softmax layer. In traditional CNN, it is difficult to interpret how the category level information from the objective cost layer is passed back to the previous convolution layer due to the fully connected layers which act as a black box in between.
In contrast, global average pooling is more meaningful and interpretable as it enforces correspondence between feature maps and categories, which is made possible by a stronger local modeling using the micro network.

On CIFAR-10 (10 classes of natural images, with 50,000 training images in total and 10,000 testing images), the authors beat the then state-of-the-art by more than one percent. On CIFAR-100 they also beat the then-best performance (without data augmentation, which was also not used to evaluate the N-in-N approach) by more than one percent. On the Street View House Numbers dataset and on MNIST, the authors get good results, but not quite state-of-the-art.

### OverFeat: Integrated recognition, localization and detection using convolutional networks

OverFeat shows how the features learned by a CNN-based classifier can also be used for localization and detection. On the ILSVRC 2013 dataset OverFeat ranked 4th in classification, 1st in localization, and 1st in detection.

We know what the classification problem is (what object is in this image?), but what about the localization and detection problems? Localization is like classification, but the network must also produce a bounding box showing the boundary of the detected object. The detection problem involves images which may contain many small objects, and the network must detect each object and draw its bounding box:

The main point of this paper is to show that training a convolutional network to simultaneously classify, locate and detect objects in images can boost the classification accuracy and the detection and localization accuracy of all tasks.

The paper proposes a new integrated approach to object detection, recognition, and localization with a single ConvNet. We also introduce a novel method for localization and detection by accumulating predicted bounding boxes.
There's a lot of detail in the OverFeat paper that we don't have space to cover, so if this work is of interest, this is one paper where I definitely recommend going on to read the full thing.

Since objects of interest (especially in the later detection task) can vary significantly in size and position within the image, OverFeat applies a ConvNet at multiple locations within the image in a sliding-window fashion, and at multiple scales. The system is then trained to produce a prediction of the location and size of the bounding box containing an object relative to the window. Evidence for each object category is accumulated at each location and size.

Starting with classification, the model is based on Krizhevsky et al. (our first paper today) in the first five layers, but no contrast normalisation is used, pooling regions are non-overlapping, and a smaller stride is used (2 instead of 4). Six different scales of input are used, resulting in unpooled layer-5 maps of varying resolution. These are then pooled and presented to the classifier. The following diagram summarises the classifier approach built on top of the layer-5 feature maps:

At an intuitive level, the two halves of the network – i.e. the feature extraction layers (1-5) and the classifier layers (6-output) – are used in opposite ways. In the feature extraction portion, the filters are convolved across the entire image in one pass. From a computational perspective, this is far more efficient than sliding a fixed-size feature extractor over the image and then aggregating the results from different locations. However, these principles are reversed for the classifier portion of the network. Here, we want to hunt for a fixed-size representation in the layer-5 feature maps across different positions and scales. Thus the classifier has a fixed-size 5×5 input and is exhaustively applied to the layer-5 maps.
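Exhaustively applying a fixed-size classifier can be sketched as a pair of loops over window positions. This toy version (my own; OverFeat implements the classifier layers convolutionally, which is far more efficient than explicit loops) uses a single linear classifier over 5×5 windows:

```python
import numpy as np

def slide_classifier(fmap, w, b):
    """Apply a fixed-size linear classifier at every 5x5 window position.

    fmap: (C, H, W) feature map; w: (n_classes, C*5*5) weights;
    b: (n_classes,) biases.  Returns (n_classes, H-4, W-4) score maps.
    """
    C, H, W = fmap.shape
    out = np.empty((w.shape[0], H - 4, W - 4))
    for i in range(H - 4):
        for j in range(W - 4):
            patch = fmap[:, i:i + 5, j:j + 5].ravel()
            out[:, i, j] = w @ patch + b
    return out

rng = np.random.default_rng(1)
scores = slide_classifier(rng.standard_normal((16, 9, 12)),
                          rng.standard_normal((10, 16 * 25)),
                          np.zeros(10))
assert scores.shape == (10, 5, 8)
```

Each spatial position in the output holds the class scores for one placement of the 5×5 window, giving the dense map of predictions the paper accumulates over.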
The exhaustive pooling scheme (with single-pixel shifts (∆x, ∆y)) ensures that we can obtain fine alignment between the classifier and the representation of the object in the feature map.

#### Localization

For localization, the classifier layers are replaced by a regression network trained to predict bounding boxes at each spatial location and scale. The regression predictions are then combined, along with the classification results at each location. Training with multiple scales ensures that predictions match correctly across scales, and exponentially increases the confidence of the merged predictions. Bounding boxes are combined based on the distance between their centres and the intersection of their areas, and the final prediction is made by taking the merged bounding boxes with maximum class scores. Figure 6 below gives a visual overview of the process:

#### Detection

Detection training is similar to classification training, but with the added necessity of predicting a background class when no object is present.

Traditionally, negative examples are initially taken at random for training, then the most offending negative errors are added to the training set in bootstrapping passes… we perform negative training on the fly, by selecting a few interesting negative examples per image, such as random ones or most offending ones. This approach is more computationally expensive, but renders the procedure much simpler.
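The bounding-box merging used for localization can be illustrated with a toy greedy version. The paper's actual criterion combines centre distance and area intersection and refines boxes iteratively; the version below (my own sketch, with an arbitrary threshold) keeps only the centre-distance test and merges by averaging coordinates:

```python
import numpy as np

def centre(box):
    """Centre of an (x0, y0, x1, y1) box."""
    return np.array([(box[0] + box[2]) / 2, (box[1] + box[3]) / 2])

def merge_boxes(boxes, dist_thresh=20.0):
    """Greedily merge boxes whose centres are close, by averaging their
    coordinates.  A toy stand-in for OverFeat's merging procedure."""
    merged = []
    for box in boxes:
        for i, m in enumerate(merged):
            if np.linalg.norm(centre(box) - centre(m)) < dist_thresh:
                merged[i] = (m + np.asarray(box, dtype=float)) / 2
                break
        else:
            merged.append(np.asarray(box, dtype=float))
    return merged

# two nearby detections collapse into one; the distant one survives alone
boxes = [(10, 10, 50, 50), (12, 8, 54, 52), (200, 200, 240, 240)]
assert len(merge_boxes(boxes)) == 2
```

The intuition carries over: many overlapping per-window predictions of the same object are fused into a single, higher-confidence box.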
Summation on HP 42S

09-23-2018, 06:14 PM, Post #1: lrdheat (Senior Member, Posts: 735, Joined: Feb 2014)

I tried doing a simple summation of "x" from 1 to 100. I got it to work, and it correctly comes up with an answer of 5050. There must be a simpler way than the contorted way that I did it. Here's my program:

LBL "SUM"
1.1
STO "T"
0
ENTER
LBL "COUNT"
RCL "X"
ENTER
1
+
STO "X"
+
+
ISG "T"
GTO "COUNT"
.END.

I begin by storing 0 into "X". What is a better way to accomplish this?

09-23-2018, 06:28 PM, Post #2: Didier Lachieze (Senior Member, Posts: 1,495, Joined: Dec 2013)

Code:
01 LBL "SUM"
02 0
03 LBL 00
04 RCL+ ST Y
05 DSE ST Y
06 GTO 00
07 END

Start with the end value in X: 100 XEQ "SUM" will do the sum of x from 100 down to 1 and return 5050.

09-23-2018, 06:41 PM, Post #3: lrdheat

Thanks! Amazing how I could make an easy problem so contorted!

09-23-2018, 06:51 PM, Post #4: lrdheat

100 in Y.

09-23-2018, 07:40 PM, Post #5: ijabbott (Senior Member, Posts: 1,150, Joined: Jul 2015)

You could cheat:

Code:
LBL "SUM"
X^2
LASTX
+
2
/
END

Thanks to Gauss!

— Ian Abbott

09-23-2018, 08:29 PM, Post #6: Thomas Klemm (Senior Member, Posts: 1,711, Joined: Dec 2013)

A somewhat different approach using ∑+:

Code:
00 { 10-Byte Prgm }
01 CL∑
02 0
03▸LBL 00
04 ∑+
05 X≤Y?
06 GTO 00
07 SUM
08 END

Or then for $$n>0$$ simply:

Code:
00 { 7-Byte Prgm }
01 1
02 +
03 2
04 COMB
05 END

Cheers
Thomas

09-23-2018, 09:25 PM, Post #7: lrdheat

Neat stuff. For something a little more complex, such as the summation from 1 through 100 of x^2 - 3*x, I used the idea from post 2:

LBL "SUM"
0
LBL 00
RCL "T"
X^2
STO+ "X"
RCL "T"
3
*
STO- "X"
RCL "T"
DSE "T"
GTO 00
RCL "X"
.END.
Store 100 in "T" and 0 in "X". Produces 323,200 in ~25 seconds.

09-23-2018, 10:02 PM, Post #8: lrdheat

Summation of the same (x^2 - 3*x) using the summation function on the WP 34S takes ~3 seconds.

09-23-2018, 10:03 PM, Post #9: Didier Lachieze

(09-23-2018 06:51 PM) lrdheat Wrote: 100 in Y.

No, I'm using the stack registers X & Y, not variables "X" or "Y". So before calling the program you enter 100 (it's in the stack register X); at step 02 in the program, 0 is entered in the stack register X and 100 is pushed to stack register Y, which is then used by the two other instructions: RCL+ ST Y and DSE ST Y.

(09-23-2018 09:25 PM) lrdheat Wrote: Neat stuff. For something a little more complex such as summation 1 through 100 of x^2 - 3*x, I used the idea from post 2 [...] Store 100 in "T", 0 in "X", Produces 323,200 in ~25 seconds

Here is a shorter way to do it. 100 XEQ "SUM" returns 323200 in ~11 seconds.

Code:
01 LBL "SUM"
02 0
03 LBL 00
04 RCL ST Y
05 3
06 -
07 RCL* ST Z
08 +
09 DSE ST Y
10 GTO 00
11 END

At step 04 we have the current sum in X and the current value of x in Y, so we recall Y to calculate x-3. At step 07 we have x-3 in X, the current sum in Y, and the current value of x in Z, so we recall Z and multiply it into X to get x*(x-3), which is x^2 - 3*x. Then with + we get the updated sum in X, and x moves down from Z to Y.

For more complex functions it may not be possible to use only the stack, so variables may be needed to store x and the sum, but with the stack it's generally shorter and faster.

Note: in your program you can remove the RCL "T" before the DSE, it's useless.

09-23-2018, 10:37 PM, Post #10: lrdheat

Thanks!
I'm learning, thrilled that this mind can put something together that works, even if it is far from concise. Maybe conciseness will eventually follow!

09-24-2018, 12:56 AM, Post #11: Thomas Klemm

(09-23-2018 09:25 PM) lrdheat Wrote: For something a little more complex such as summation 1 through 100 of x^2 - 3*x,

Let us rewrite this expression as:

\begin{align*} x^2 - 3x &= x^2 - x - 2x \\ &= x(x-1) - 2x \\ &= 2\frac{x(x-1)}{2} - 2\frac{x}{1} \\ &= 2\binom{x}{2} - 2\binom{x}{1} \end{align*}

$$\sum_{x=1}^{n} 2\binom{x}{2} - 2\binom{x}{1} = 2\binom{n+1}{3} - 2\binom{n+1}{2} = 2\left ( \binom{n+1}{3}-\binom{n+1}{2} \right )$$

This program can be used to calculate the sum:

Code:
00 { 18-Byte Prgm }
01 1
02 +
03 RCL ST X
04 3
05 COMB
06 X<>Y
07 2
08 COMB
09 -
10 2
11 ×
12 END

Quote: Produces 323,200 in ~25 seconds

I haven't tested, but I assume it's a bit faster than that.

Cheers
Thomas

09-24-2018, 01:17 AM, Post #12: Thomas Klemm

(09-23-2018 09:25 PM) lrdheat Wrote: I used the idea from post 2

Code:
LBL "SUM"
0
LBL 00
RCL "T"
X^2
STO+ "X"
RCL "T"
3
*
STO- "X"
RCL "T"
DSE "T"
GTO 00
RCL "X"
.END.

Store 100 in "T", 0 in "X". Produces 323,200 in ~25 seconds.

(09-23-2018 10:03 PM) Didier Lachieze Wrote: Note: in your program you can remove the RCL "T" before the DSE, it's useless.

Another note: the 0 in the 2nd line is useless as well, unless you store it to initialise variable "X". You might also store the X register of the stack to "T":

Code:
LBL "SUM"
STO "T"
0
STO "X"
LBL 00
RCL "T"

This allows you to run: 100 XEQ "SUM". So you don't have to initialise variables "T" and "X".
Cheers
Thomas

09-24-2018, 11:56 AM, Post #13: Albert Chan (Senior Member, Posts: 1,914, Joined: Jul 2018)

(09-24-2018 12:56 AM) Thomas Klemm Wrote: The summation leads to:
$$\sum_{x=1}^{n} 2\binom{x}{2} - 2\binom{x}{1} = 2\binom{n+1}{3} - 2\binom{n+1}{2} = 2\left ( \binom{n+1}{3}-\binom{n+1}{2} \right )$$

This almost looks like integration! Any reference to show it is always true?

I normally just fit the data to search for the formula (assuming I don't cheat by googling). For sum(x^2 - 3*x), the formula should be a cubic with no constant term (sum = 0 when n = 0). Try:

n = 1, sum = (1^2 - 3) = -2
n = 2, sum = -2 + (2^2 - 3*2) = -4
n = 3, sum = -4 + (3^2 - 3*3) = -4

3 equations, 3 unknowns (the cubic coefficients), and we get sum = n^3/3 - n^2 - 4/3*n.

So, for n = 100, sum = n/3 * (n^2 - 3*n - 4) = 100/3 * 9696 = 323200.

09-24-2018, 01:52 PM, Post #14: Thomas Klemm

(09-24-2018 11:56 AM) Albert Chan Wrote: This almost looks like integration! Any reference to show it is always true?

Just have a look at the definition of Pascal's triangle:

$${n \choose k}={n-1 \choose k-1}+{n-1 \choose k}$$

Or then just check the diagonals. E.g.
for $$k=2$$ the partial sum of the first elements is just below on the next line:

1 = 1
1 + 3 = 4
1 + 3 + 6 = 10
1 + 3 + 6 + 10 = 20
1 + 3 + 6 + 10 + 15 = 35

Quote: I normally just fit the data to search for the formula (assume I don't cheat by googling) For sum(x^2 - 3*x), formula should be a cubic, with no constant term (sum=0 when n=0) Try: n = 1, sum = (1^2 - 3) = -2; n = 2, sum = -2 + (2^2 - 3*2) = -4; n = 3, sum = -4 + (3^2 - 3*3) = -4. 3 equations, 3 unknowns (cubic coefficients), we get sum = n^3/3 - n^2 - 4/3*n. So, for n = 100, sum = n/3 * (n^2 - 3*n - 4) = 100/3 * 9696 = 323200

You might be interested in Newton's Forward Difference Formula:

$$f(x+a)=\sum_{n=0}^\infty{a \choose n}\Delta^nf(x)$$

Quote: the formula looks suspiciously like a finite analog of a Taylor series expansion

So for the given example we get:

x:     0   1   2   3 …
f:     0  -2  -4  -4 …
∆f:   -2  -2   0 …
∆²f:   0   2 …
∆³f:   2

And so with $$x=0$$ we end up with:

$$f(a)=2{a \choose 3} - 2{a \choose 1}$$

This leads to an even shorter program:

Code:
00 { 11-Byte Prgm }
01 RCL ST X
02 3
03 COMB
04 X<>Y
05 -
06 2
07 ×
08 END

Conclusion: we don't really have to solve the linear system of equations.

Kind regards
Thomas

We don't even have to calculate the sums f, but can calculate only ∆f:

x:     0   1   2   3 …
f:     0
∆f:   -2  -2   0 …
∆²f:   0   2 …
∆³f:   2

09-24-2018, 01:55 PM, Post #15: pier4r (Senior Member, Posts: 2,111, Joined: Nov 2014)

For sum(x^2 - 3*x) one could actually do: sum(x^2) (closed formula) minus 3*sum(x) (closed formula). Namely here: https://brilliant.org/wiki/sum-of-n-n2-or-n3/

PS: brilliant has so much potential that they don't use. So many interesting problems buried by a poor layout sometimes.

09-24-2018, 04:20 PM
Post: #16 Albert Chan Senior Member Posts: 1,914 Joined: Jul 2018 RE: Summation on HP 42S (09-24-2018 01:52 PM)Thomas Klemm Wrote: Quote:I normally just fit the data to search for the formula (assume I don't cheat by googling) For sum(x^2 - 3*x), formula should be a cubic, with no constant term (sum=0 when n=0) Try: n = 1, sum = (1^2 - 3) = -2 n = 2, sum = -2 + (2^2 - 3*2) = -4 n = 3, sum = -4 + (3^2 - 3*3) = -4 3 equations, 3 unknowns (cubic coefficients), we get sum = n^3/3 - n^2 - 4/3*n So, for n = 100, sum = n/3 * (n^2 - 3*n - 4) = 100/3 * 9696 = 323200 You might be interested in Newton's Forward Difference Formula: I could never remember Forward Difference formula, without looking up. Instead, I use the Lagrange formula, which work for uneven intervals too. The formula look complicated, but it is very mechanical, easy to remember. Fitting a cubic sum = n * quadratic, sum / n = (-2/1) $$(n-2)(n-3)\over(1-2)(1-3)$$ + (-4/2) $$(n-1)(n-3)\over(2-1)(2-3)$$ + (-4/3) $$(n-1)(n-2)\over(3-1)(3-2)$$ = -(n-2)(n-3) + 2(n-1)(n-3) - (2/3)(n-1)(n-2) = (-n^2 + 5 n - 6) + (2 n^2 - 8 n + 6) + (-2/3 n^2 + 2 n - 4/3) = n^2/3 - n - 4/3 sum = n * (n^2/3 - n - 4/3) = n/3 * (n^2 - 3 n - 4) = n(n+1)(n-4) / 3 Edit: Forward Difference Formula is very neat, without even evaluate sums. 09-25-2018, 12:58 PM Post: #17 burkhard Senior Member Posts: 369 Joined: Nov 2017 RE: Summation on HP 42S (09-23-2018 07:40 PM)ijabbott Wrote:  You could cheat: <code snipped out> Thanks to Gauss! Excellent! I don't consider it a cheat at all. The problem said to add up the numbers from 1 to 100 (or whatever). Gauss's solution (which you nicely implemented) is compact, efficient, and much faster than the more obvious way (and least on paper or human brain... not sure for computers/calculators). I was going to chime in with the classic story of Gauss's youthful brilliance here, but I assume most of the people (who humble me greatly) on this forum already know it. If not, look it up. 
It's one of my dad's favorite stories to tell young people. :-) 09-25-2018, 01:59 PM Post: #18 Frido Bohn Member Posts: 57 Joined: Jan 2015 RE: Summation on HP 42S Challenge! Write a program that multiplies natural numbers from 1 to 1000 (factorial) and overcomes the range limit of the HP42S. (The decimal approximation is a fully acceptable result.) Cheers! Frido 09-25-2018, 04:15 PM Post: #19 John Keith Senior Member Posts: 795 Joined: Dec 2013 RE: Summation on HP 42S (09-25-2018 12:58 PM)burkhard Wrote:  Gauss's solution (which you nicely implemented) is compact, efficient, and much faster than the more obvious way (and least on paper or human brain... not sure for computers/calculators). This is true for computers if you want to calculate one such triangular number, which is the summation of the integers 1 through n. If you want a list of all triangular numbers from 1 through n (aka a cumulative sum), then the straightforward summation is faster because computing each new value requires only one addition. 09-25-2018, 04:47 PM Post: #20 Dieter Senior Member Posts: 2,397 Joined: Dec 2013 RE: Summation on HP 42S (09-25-2018 01:59 PM)Frido Bohn Wrote:  Write a program that multiplies natural numbers from 1 to 1000 (factorial) and overcomes the range limit of the HP42S. (The decimal approximation is a fully acceptable result.) Simple. Add the (base 10) logs of 1...1000. The integer part of the sum is the tens exponent, 10^(fractional part) is the mantissa. Code: 01 LBL "FCT" 02 0 03 LBL 01 04 RCL ST Y 05 LOG 06 + 07 DSE ST Y 08 GTO 01 09 IP 10 LASTX 11 FP 12 10^x 13 END On Free42 (with 34 digit accuracy) this yields 2567 4,02387260077 So the result is 4,02387260077 E+2567. Dieter « Next Oldest | Next Newest » User(s) browsing this thread: 1 Guest(s)
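The closed forms traded back and forth in this thread are easy to sanity-check on a desktop. The sketch below (Python rather than HP 42S keystrokes, written for this note rather than taken from any post) verifies Albert Chan's factored sum and Dieter's log-based estimate of 1000!:

```python
import math

# Check the thread's running example: sum of x^2 - 3x for x = 1..100.
n = 100
brute = sum(x**2 - 3*x for x in range(1, n + 1))
closed = n * (n + 1) * (n - 4) // 3   # Albert Chan's factored closed form
assert brute == closed == 323200

# Dieter's trick for 1000!: sum the base-10 logs, then split the result into
# a tens exponent (integer part) and a mantissa (10^fractional part).
log_sum = sum(math.log10(k) for k in range(1, 1001))
exponent = int(log_sum)
mantissa = 10 ** (log_sum - exponent)
print(f"1000! ~ {mantissa:.11f}E+{exponent}")
```

Both results agree with the thread: 323200 for the sum, and roughly 4.02387260077E+2567 for the factorial.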
https://socratic.org/questions/what-is-the-law-of-definite-proportions
# What is the Law of Definite Proportions?

The law of definite proportions, also known as the law of constant composition and first stated by Joseph Proust, holds that a chemical compound always contains its component elements in a fixed ratio by mass, regardless of the source of the compound. An example of this law is that water always has a composition by mass of approximately $\frac{1}{9}$ hydrogen and $\frac{8}{9}$ oxygen, i.e. a 1:8 hydrogen-to-oxygen mass ratio. This law allows us to propose empirical formulae, and by extension molecular formulae, if the molecular mass is known.
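As a quick numerical check on water's constant composition, the sketch below computes the mass fractions from standard atomic masses (values hard-coded here for illustration):

```python
# Mass fractions of hydrogen and oxygen in H2O from standard atomic masses.
M_H = 1.008    # g/mol, hydrogen
M_O = 15.999   # g/mol, oxygen
M_water = 2 * M_H + M_O

frac_H = 2 * M_H / M_water   # ~0.112, i.e. about 1/9 of the mass
frac_O = M_O / M_water       # ~0.888, i.e. about 8/9 of the mass

print(f"hydrogen: {frac_H:.3f}, oxygen: {frac_O:.3f}")
```

Whatever the sample's source, these fractions come out the same, which is exactly what the law asserts.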
http://physics.stackexchange.com/users/11543/r-m
# R. M. less info reputation 29 bio website location age member for 1 year, 6 months seen Feb 9 at 15:24 profile views 37 # 11 Questions 6 Chaos is predictable? 6 What the circled integral? 4 Gravitational inverse-square law 4 Entropy: two explanations for the same quantity? 3 Euler equation of fluid dynamics # 261 Reputation +5 Entropy: two explanations for the same quantity? +5 Gravitational inverse-square law +10 How Did Newton's Second Law Get Its Definition? +5 Chaos is predictable? 2 How Did Newton's Second Law Get Its Definition? # 29 Tags 2 newtonian-mechanics 0 tensors 2 history 0 continuum-mechanics 0 thermodynamics × 2 0 tensor-calculus 0 classical-mechanics × 2 0 mass 0 definition × 2 0 si-units # 8 Accounts TeX - LaTeX 922 rep 1212 Ask Different 343 rep 1516 Physics 261 rep 29 Mathematics 157 rep 4 Area 51 151 rep 2
https://www.xissufotoday.space/2019/11/2019-november-5-spiral-galaxies_18.html
# 2019 November 5 Spiral Galaxies Spinning Super-Fast Image Credit: Top row: NASA, ESA, Hubble, P. Ogle & J. DePasquale (STScI); Bottom row: SDSS, P. Ogle & J. DePasquale (STScI) Explanation: Why are these galaxies spinning so fast? If you estimated each spiral’s mass by how much light it emits, their fast rotations should break them apart. The leading hypothesis as to why these galaxies don’t break apart is dark matter – mass so dark we can’t see it. But these galaxies are even out-spinning this break-up limit – they are the fastest rotating disk galaxies known. It is therefore further hypothesized that their dark matter halos are so massive – and their spins so fast – that it is harder for them to form stars than regular spirals. If so, then these galaxies may be among the most massive spirals possible. Further study of surprising super-spirals like these will continue, likely including observations taken by NASA’s James Webb Space Telescope scheduled for launch in 2021. ∞ Source: apod.nasa.gov/apod/ap191105.html
http://ageconsearch.umn.edu/record/182763
## Technical Efficiency of Organic Farming in the Alpine Region – the Impact of Farm Structures and Policies

The paper investigates the impact of subsidies and of para-agriculture on the technical efficiency of organic farms in Switzerland, Austria and Southern Germany. The dataset consists of bookkeeping data with 1,704 observations from the years 2003 to 2005. Technical efficiency is modelled using a stochastic distance-frontier model combined with a metafrontier model. The results show almost no efficiency differences among the farms in the three countries. Para-agriculture has a strong impact on farm efficiency and output in Austria and Switzerland, whereas in Germany the effect is rather small. The study confirms that agricultural subsidies have a direct impact on farm efficiency.

Issue Date: 2014-08
Publication Type: Conference Paper/Presentation
PURL Identifier: http://purl.umn.edu/182763
Total Pages: 14
JEL Codes: Q12; Q18; D24; C54
https://int-brain-lab.github.io/iblenv/notebooks/dj_intro/dj_intro.html
# Datajoint Introductory Tutorial¶ In this tutorial we will use datajoint to replicate the analysis we conducted in the ONE tutorial This tutorial assumes that you have setup the unified ibl environment IBL python environment and set up your Datajoint credentials. First let’s import datajoint [1]: import datajoint as dj # for the purposes of tutorial limit the table print output to 5 dj.config['display.limit'] = 5 We can access datajoint tables by importing schemas from the IBL-pipeline. Let’s import the subject schema [2]: from ibl_pipeline import subject Connecting mayofaulkner@datajoint.internationalbrainlab.org:3306 Within this schema there is a datajoint table called Subject. This holds all the information about subjects registered on Alyx under IBL projects. Let’s access this table and look at the first couple of entries [3]: subjects = subject.Subject() subjects Out[3]: subject_uuid subject_nickname nickname sex sex subject_birth_date birth date ear_mark ear mark subject_line name subject_source name of source protocol_number protocol number subject_description subject_ts subject_strain 0026c82d-39e4-4c6b-acb3-303eb4b24f05 IBL_32 M 2018-04-23 None C57BL/6NCrl Charles River 1 None 2019-08-06 21:30:42 C57BL/6N 00778394-c956-408d-8a6c-ca3b05a611d5 KS019 F 2019-04-30 None C57BL/6J None 2 None 2019-08-13 17:07:33 C57BL/6J 00c60db3-74c3-4ee2-9df9-2c84acf84e92 ibl_witten_10 F 2018-11-13 notag C57BL/6J Jax 3 None 2019-09-25 01:33:25 C57BL/6J 0124f697-16ce-4f59-b87c-e53fcb3a27ac 6867 M 2018-06-25 None Thy1-GCaMP6s CCU - Margarida colonies 1 None 2019-09-25 01:33:25 C57BL/6J 019a22c1-b944-4494-9e38-0e45ae6697bf SWC_022 M 2019-06-18 NA (Front HP) C57BL/6J Charles River 4 ID: 990762 2019-09-25 01:33:25 C57BL/6J ... Total: 856 Next, we will find the entry in the table for the same subject that we looked at in the ONE tutorial, KS022. 
To do this we will restrict the Subject table by the subject nickname [4]: subjects & 'subject_nickname = "KS022"' Out[4]: subject_uuid subject_nickname nickname sex sex subject_birth_date birth date ear_mark ear mark subject_line name subject_source name of source protocol_number protocol number subject_description subject_ts subject_strain b57c1934-f9d1-4dc4-a474-e2cb4acdf918 KS022 M 2019-06-25 None C57BL/6J None 3 None 2019-09-20 04:28:14 C57BL/6J Total: 1 We now want to find information about the behavioural sessions. This information is stored in a table Session defined in the acquisition schema. Let’s import this schema, access the table and display the first few entries [5]: from ibl_pipeline import acquisition sessions = acquisition.Session() sessions Out[5]: subject_uuid session_start_time start time session_uuid session_number number session_end_time end time session_lab name of lab session_location name of the location session_type type session_narrative session_ts 00778394-c956-408d-8a6c-ca3b05a611d5 2019-08-07 10:49:41 0955e23e-90cf-4250-a092-82fabdf67a24 1 2019-08-07 11:23:14 cortexlab _iblrig_cortexlab_behavior_3 _iblrig_tasks_habituationChoiceWorld5.2.5 Experiment None 2019-08-13 17:26:00 00778394-c956-408d-8a6c-ca3b05a611d5 2019-08-08 15:32:40 996167fa-07bc-4afe-b7b9-4aee0ba75c1f 1 2019-08-08 16:19:23 cortexlab _iblrig_cortexlab_behavior_3 _iblrig_tasks_habituationChoiceWorld5.2.5 Experiment None 2019-08-13 17:26:01 00778394-c956-408d-8a6c-ca3b05a611d5 2019-08-09 10:37:49 13e38e6c-0cd6-41f9-a2ab-8d6021fea035 1 2019-08-09 11:40:12 cortexlab _iblrig_cortexlab_behavior_3 _iblrig_tasks_habituationChoiceWorld5.2.7 Experiment None 2019-08-13 17:26:00 00778394-c956-408d-8a6c-ca3b05a611d5 2019-08-10 11:24:59 fb9bdf18-76be-452b-ac4e-21d5de3a6f9f 1 2019-08-10 12:11:04 cortexlab _iblrig_cortexlab_behavior_3 _iblrig_tasks_trainingChoiceWorld5.2.7 Experiment None 2019-08-13 17:26:06 00778394-c956-408d-8a6c-ca3b05a611d5 2019-08-12 09:21:03 
d47e9a4c-18dc-4d4d-991c-d30059ec2cbd 1 2019-08-12 10:07:19 cortexlab _iblrig_cortexlab_behavior_3 _iblrig_tasks_trainingChoiceWorld5.2.7 Experiment None 2019-08-14 12:37:53 ... Total: 26634 If we look at the primary keys (columns with black headings) in the Subjects and Sessions table, we will notice that both contain subject_uuid as a primary key. This means that these two tables can be joined using *****. We want to find information about all the sessions that KS022 did in the training phase of the IBL training pipeline. When combining the tables we will therefore restrict the Subject table by the subject nickname (as we did before) and the Sessions table by the task protocol [6]: (subjects & 'subject_nickname = "KS022"') * (sessions & 'task_protocol LIKE "%training%"') Out[6]: subject_uuid session_start_time start time subject_nickname nickname sex sex subject_birth_date birth date ear_mark ear mark subject_line name subject_source name of source protocol_number protocol number subject_description subject_ts subject_strain session_uuid session_number number session_end_time end time session_lab name of lab session_location name of the location session_type type session_narrative session_ts b57c1934-f9d1-4dc4-a474-e2cb4acdf918 2019-09-23 13:43:40 KS022 M 2019-06-25 None C57BL/6J None 3 None 2019-09-20 04:28:14 C57BL/6J 242ed7aa-6e02-4d55-b003-874b71357074 1 2019-09-23 14:28:22 cortexlab _iblrig_cortexlab_behavior_3 _iblrig_tasks_trainingChoiceWorld5.3.0 Experiment None 2019-09-24 04:51:41 b57c1934-f9d1-4dc4-a474-e2cb4acdf918 2019-09-24 10:41:18 KS022 M 2019-06-25 None C57BL/6J None 3 None 2019-09-20 04:28:14 C57BL/6J be7f7832-006f-4f79-8079-6dea549c90c0 1 2019-09-24 11:27:11 cortexlab _iblrig_cortexlab_behavior_3 _iblrig_tasks_trainingChoiceWorld5.3.0 Experiment None 2019-09-25 04:51:58 b57c1934-f9d1-4dc4-a474-e2cb4acdf918 2019-09-25 09:37:26 KS022 M 2019-06-25 None C57BL/6J None 3 None 2019-09-20 04:28:14 C57BL/6J c4f55950-26ac-4cb6-be38-57d9e604f5fc 1 
2019-09-25 10:23:20 cortexlab _iblrig_cortexlab_behavior_3 _iblrig_tasks_trainingChoiceWorld5.3.0 Experiment None 2019-09-26 04:53:06 b57c1934-f9d1-4dc4-a474-e2cb4acdf918 2019-09-26 14:43:24 KS022 M 2019-06-25 None C57BL/6J None 3 None 2019-09-20 04:28:14 C57BL/6J 39a0993b-ea91-4d7d-aa8b-84365ae77a81 1 2019-09-26 15:31:48 cortexlab _iblrig_cortexlab_behavior_3 _iblrig_tasks_trainingChoiceWorld5.3.0 Experiment None 2019-09-27 04:53:22 b57c1934-f9d1-4dc4-a474-e2cb4acdf918 2019-09-27 10:21:16 KS022 M 2019-06-25 None C57BL/6J None 3 None 2019-09-20 04:28:14 C57BL/6J f1eb053b-3cc2-45df-bbd5-2f5f54f05d23 1 2019-09-27 11:06:09 cortexlab _iblrig_cortexlab_behavior_3 _iblrig_tasks_trainingChoiceWorld5.3.0 Experiment None 2019-09-28 04:53:37 ... Total: 19 There is a lot of information in this table and we are not interested in all of it for the purposes of our analysis. Let’s just use the proj method to restrict the data presented. We do not want any columns from the Subject table (apart from the primary keys) and only want session_uuid from the Sesssions table. So we can write, [7]: ((subjects & 'subject_nickname = "KS022"').proj() * Out[7]: subject_uuid session_start_time start time session_uuid b57c1934-f9d1-4dc4-a474-e2cb4acdf918 2019-09-23 13:43:40 242ed7aa-6e02-4d55-b003-874b71357074 b57c1934-f9d1-4dc4-a474-e2cb4acdf918 2019-09-24 10:41:18 be7f7832-006f-4f79-8079-6dea549c90c0 b57c1934-f9d1-4dc4-a474-e2cb4acdf918 2019-09-25 09:37:26 c4f55950-26ac-4cb6-be38-57d9e604f5fc b57c1934-f9d1-4dc4-a474-e2cb4acdf918 2019-09-26 14:43:24 39a0993b-ea91-4d7d-aa8b-84365ae77a81 b57c1934-f9d1-4dc4-a474-e2cb4acdf918 2019-09-27 10:21:16 f1eb053b-3cc2-45df-bbd5-2f5f54f05d23 ... Total: 19 Note In the above expression we first used proj and then joined the tables using ** * . 
We could have also joined the tables first and then used `proj`:

((subjects & 'subject_nickname = "KS022"') * (sessions & 'task_protocol LIKE "%training%"')).proj('session_uuid')

If we look back to the ONE tutorial you will notice that we have the same number of training sessions and that session_uuid corresponds to what we previously defined as the eID. Up until now we have been inspecting the content of the tables but do not actually have access to the content. This is because we have not actually read the data from the tables into memory yet. For this we would need to use the fetch command. Let's fetch the session uuid information into a pandas dataframe

[8]: eids = ((subjects & 'subject_nickname = "KS022"').proj() *

[9]: eids

Out[9]:
subject_uuid: b57c1934-f9d1-4dc4-a474-e2cb4acdf918

session_start_time    session_uuid
2019-09-23 13:43:40   242ed7aa-6e02-4d55-b003-874b71357074
2019-09-24 10:41:18   be7f7832-006f-4f79-8079-6dea549c90c0
2019-09-25 09:37:26   c4f55950-26ac-4cb6-be38-57d9e604f5fc
2019-09-26 14:43:24   39a0993b-ea91-4d7d-aa8b-84365ae77a81
2019-09-27 10:21:16   f1eb053b-3cc2-45df-bbd5-2f5f54f05d23
2019-10-01 11:01:44   a07e255f-cf31-47aa-9667-5c62b93443e0
2019-10-02 10:02:36   fc7b7ea7-55c4-48fd-bb7b-2422c741fe8e
2019-10-03 14:15:32   22f33af3-5e77-4517-94b9-2cfefea27a10
2019-10-04 10:52:19   7e508dce-e08c-4481-b634-1a9358673aa5
2019-10-07 10:59:27   548ca39d-8898-460c-a930-785083b4127d
2019-10-08 11:19:34   3f815a05-a114-4a16-96ef-b398bd5a0b88
2019-10-10 11:00:46   f6c9daee-3e1e-4a5d-b372-7be8fcda5512
2019-10-11 11:01:47   2f7e07d4-d713-46a1-8bce-6b497af405fc
2019-10-14 11:30:31   228b29b2-fb77-40c9-bd32-953fd5297896
2019-10-15 10:32:56   9ee8bd6b-1e5e-4440-bc2e-7c001abe7b60
2019-10-17 11:32:30   777ecfb9-616f-4ff9-a8d9-eb38f897a19f
2019-10-18 11:52:53   4e626e22-747e-4324-96f1-9827b7ca950d

Now that we have access to our list of session eIDs, we can get trial information associated with these sessions. The output from the trials dataset is stored in a table called PsychResults.
We can import this from the analyses.behaviour schema [10]: from ibl_pipeline.analyses import behavior trials = behavior.PsychResults() trials Connected to https://alyx.internationalbrainlab.org as mayo Out[10]: subject_uuid session_start_time start time performance percentage correct in this session performance_easy percentage correct of easy trials in this session signed_contrasts contrasts used in this session, negative when on the left n_trials_stim number of trials for each contrast n_trials_stim_right number of reporting "right" trials for each contrast prob_choose_right probability of choosing right, same size as contrasts threshold bias lapse_low lapse_high 00778394-c956-408d-8a6c-ca3b05a611d5 2019-08-10 11:24:59 0.367347 0.367347 =BLOB= =BLOB= =BLOB= =BLOB= 29.7063 -58.2102 0.823603 0.409092 00778394-c956-408d-8a6c-ca3b05a611d5 2019-08-12 09:21:03 0.4 0.4 =BLOB= =BLOB= =BLOB= =BLOB= 21.0607 -58.2391 0.609229 0.181818 00778394-c956-408d-8a6c-ca3b05a611d5 2019-08-13 10:28:45 0.408072 0.408072 =BLOB= =BLOB= =BLOB= =BLOB= 9.4945 -0.13251 0.684564 0.364865 00778394-c956-408d-8a6c-ca3b05a611d5 2019-08-14 09:37:17 0.4 0.4 =BLOB= =BLOB= =BLOB= =BLOB= 3.28746 -67.9662 0.772727 0.454545 00778394-c956-408d-8a6c-ca3b05a611d5 2019-08-14 11:35:16 0.463668 0.463668 =BLOB= =BLOB= =BLOB= =BLOB= 10.4665 99.3649 0.72314 0.0373897 ... Total: 23408 Let’s get the trial information for the first day KS022 trained. 
We will restrict the Sessions table by the day 1 eID and combine this with the trials table

[11]: eid_day1 = dict(session_uuid=eids['session_uuid'][0])
trials_day1 = ((sessions & eid_day1).proj() * trials).fetch(format='frame')
trials_day1

Out[11]:
subject_uuid: b57c1934-f9d1-4dc4-a474-e2cb4acdf918, session_start_time: 2019-09-23 13:43:40
performance: 0.515432
performance_easy: 0.515432
signed_contrasts: [-1.0, -0.5, 0.5, 1.0]
n_trials_stim: [81, 90, 73, 80]
n_trials_stim_right: [41, 51, 50, 38]
prob_choose_right: [0.5061728395061729, 0.5666666666666667, 0.684...
threshold: 7.23087
bias: 86.8559
lapse_low: 0.581967
lapse_high: 0.525546

Notice how a lot of the metrics that we manually computed from the trials dataset in the previous ONE tutorial have already been computed for us and are available in the trials table. This is one advantage of Datajoint: common computations such as performance can be computed when the data is ingested and stored in tables.

We can find out which visual stimulus contrasts were presented to KS022 on day 1 and how many of each contrast by looking at the signed_contrasts and n_trials_stim attributes

[12]: contrasts = trials_day1['signed_contrasts'].to_numpy()[0]
n_contrasts = trials_day1['n_trials_stim'].to_numpy()[0]
print(f"Visual stimulus contrasts on day 1 = {contrasts * 100}")
print(f"No. of each contrast on day 1 = {n_contrasts}")

Visual stimulus contrasts on day 1 = [-100. -50. 50. 100.]
No. of each contrast on day 1 = [81 90 73 80]

We can easily extract the performance by typing

[13]: print(f"Correct = { trials_day1['performance'].to_numpy()[0] * 100} %")

Correct = 51.5432 %

We can plot the performance at each contrast by looking at the prob_choose_right attribute.
The results stored in Datajoint are already expressed in terms of rightward choice, so we don’t need worry about converting any computations [14]: contrast_performance = trials_day1['prob_choose_right'].to_numpy()[0] import matplotlib.pyplot as plt plt.plot(contrasts * 100, contrast_performance * 100, 'o-', lw=3, ms=10) plt.ylim([0, 100]) plt.xticks([*(contrasts * 100)]) plt.xlabel('Signed contrast (%)') plt.ylabel('Rightward choice (%)') print(contrast_performance) [0.50617284 0.56666667 0.68493151 0.475 ] Let’s now repeat this for day 15 of training [15]: eid_day15 = dict(session_uuid=eids['session_uuid'][14]) trials_day15 = ((sessions & eid_day15).proj() * trials).fetch(format='frame') contrasts = trials_day15['signed_contrasts'].to_numpy()[0] n_contrasts = trials_day15['n_trials_stim'].to_numpy()[0] print(f"Visual stimulus contrasts on day 1 = {contrasts * 100}") print(f"No. of each contrast on day 1 = {n_contrasts}") Visual stimulus contrasts on day 1 = [-100. -50. -25. -12.5 12.5 25. 50. 100. ] No. of each contrast on day 1 = [ 94 65 107 16 21 75 78 64] [16]: print(f"Correct = { trials_day15['performance'].to_numpy()[0] * 100} %") Correct = 88.2692 % [17]: contrast_performance = trials_day15['prob_choose_right'].to_numpy()[0] plt.plot(contrasts * 100, contrast_performance * 100, 'o-', lw=3, ms=10) plt.ylim([0, 100]) plt.xticks([*(contrasts * 100)]) plt.xlabel('Signed contrast (%)') plt.ylabel('Rightward choice (%)') plt.xticks(rotation=90) print(contrast_performance) [0.09574468 0.12307692 0.22429907 0.1875 1. 0.93333333 0.91025641 0.921875 ] If we compare the results with the ONE tutorial, we will find that we have replicated those results using Datajoint, congratulations! You should now be comfortable with the basics of exploring the Datajoint IBL pipeline. More Datajoint tutorials can be found in the IBL-Pipeline github or on the Datajoint jupyter.
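To make the PsychResults columns less magical, here is a small standalone sketch (plain pandas on made-up data, not the actual IBL ingestion code) of how per-contrast quantities like prob_choose_right and n_trials_stim can be derived from raw trial records:

```python
import pandas as pd

# Toy trials table: signed contrast (negative = stimulus on the left) and
# the subject's choice (1 = chose right, 0 = chose left). Values invented.
trials = pd.DataFrame({
    "signed_contrast": [-1.0, -1.0, -0.5, -0.5, 0.5, 0.5, 1.0, 1.0],
    "choice_right":    [0,    1,    0,    0,    1,   1,   1,   0],
})

by_contrast = trials.groupby("signed_contrast")["choice_right"]
n_trials_stim = by_contrast.count()      # number of trials per contrast
prob_choose_right = by_contrast.mean()   # fraction of rightward choices

print(prob_choose_right.to_dict())
```

Precomputing aggregates like these at ingestion time is what lets this tutorial skip the manual per-session arithmetic from the ONE tutorial.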
https://codegolf.meta.stackexchange.com/questions/2140/sandbox-for-proposed-challenges/20572
What is the Sandbox? This "Sandbox" is a place where Code Golf users can get feedback on prospective challenges they wish to post to the main page. This is useful because writing a clear and fully specified challenge on your first try can be difficult, and there is a much better chance of your challenge being well received if you post it in the Sandbox first. To add an inline tag to a proposal use shortcut link syntax with a prefix: [tag:king-of-the-hill] Identify the tonic from a key signature Objective Given a key signature in major, output its tonic. Input An integer from -14 to +14, inclusive. Its absolute value is the numbers of flats/sharps. Negative number represents flats, and positive number represents sharps. Note that theoretical keys are also considered. Mapping Note the use of Unicode characters ♭(U+266D; music flat sign), ♯(U+266F; music sharp sign), 𝄪(U+1D12A; musical symbol double sharp), and 𝄫(U+1D12B; musical symbol double flat). -14 → C𝄫 -13 → G𝄫 -12 → D𝄫 -11 → A𝄫 -10 → E𝄫 -9 → B𝄫 -8 → F♭ -7 → C♭ -6 → G♭ -5 → D♭ -4 → A♭ -3 → E♭ -2 → B♭ -1 → F 0 → C 1 → G 2 → D 3 → A 4 → E 5 → B 6 → F♯ 7 → C♯ 8 → G♯ 9 → D♯ 10 → A♯ 11 → E♯ 12 → B♯ 13 → F𝄪 14 → C𝄪 Output must be a string. Whitespaces are permitted everywhere. Rule • Invalid inputs fall in don't care situation. • "Or a sequence of bytes representing a string in some existing encoding"? (I think this should be the default, but I don't remember seeing any meta post about it) – user202729 Aug 4 '20 at 6:06 Source Code Byte Frequency - Posted here Changes from the original idea: • Without the requirement of fixed representation of the result (percentage and trimming). • With constraint: source code must be at least 1 byte long • Changed from character to byte, plus removing the constraint of SBCS languages only. 
• This may qualify for the quine tag but I'm not so sure about that – golf69 Aug 4 '20 at 6:40 • Trimming the output may be difficult for some languages, maybe you could also allow fractions, or require that the output is only accurate to x decimal places? Something to consider when writing a challenge is if a rule actually contributes to the problem or is just an accessory of sorts (here I think the main problem is finding the proportions, and rounding is an accessory) – golf69 Aug 4 '20 at 6:47 • @golf69 I'm also not sure about quine... About the trimming, my intention on the trimming and percentage format was to add a little bit of "work" that the program should do and make the frequencies a bit more different/challenging. Do you think I should drop the trimming part from the challenge? – SomoKRoceS Aug 4 '20 at 9:05 • I do think so, yes (also it might be better received that way) – golf69 Aug 4 '20 at 17:21 • I do not think the average person who does not use this site will know what a SBCS is, so it is probably still worth explaining. Alternatively, I think it would be cleaner to just require that the input be a byte and the output reflects the frequency of that byte. That way you don't eliminate multibyte languages from using it to their benefit, and I don't think it allows any "cheating." – FryAmTheEggman Aug 4 '20 at 21:52 • Sounds okay to me. I agree that it is better to avoid elimination of multi-byte languages. – SomoKRoceS Aug 4 '20 at 22:03 • The thing I try to avoid is to get a lot of 0 bytes answers (for languages that print 0 as default). So I want to add a task that the program should do, like printing in percentage format. So the question is, before I reduced the trimming task, if this is enough to achieve that. – SomoKRoceS Aug 5 '20 at 9:06 • Posted here with some changes listed in this edited answer. – SomoKRoceS Aug 9 '20 at 16:50 Simulate simple Bloons Tower Defense! 
For those who are unaware of this legendary series of video games, here is a link. You are going to be given the type of a bloon wave and three integers: the size of the wave, plus the damage and pierce (the max amount of bloons you can damage in one attack) of each attack. Your task is to output how many attacks you need to destroy the bloon wave.

Bloon types
For simplicity, there will be no special properties like fortified, regrow, camo, etc. White bloons will also not be present as, without special properties, they are the same as black bloons.

    Name - health - what it pops into
    BAD - 20000 - 3x DDT and 2x ZOMG
    ZOMG - 4000 - 4x BFB
    BFB - 700 - 4x MOAB
    MOAB - 200 - 4x Ceramic
    DDT - 350 - 6x Ceramic
    Ceramic - 60 - 1x Rainbow
    Rainbow - 1 - 2x Zebra
    Zebra - 1 - 2x Black
    Black - 1 - 2x Pink
    Pink - 1 - 1x Yellow
    Yellow - 1 - 1x Green
    Green - 1 - 1x Blue
    Blue - 1 - 1x Red
    Red - 1 - Nothing!

I/O
Input: A string describing the type of bloon, and three integers: the amount of bloons in the wave, the attack damage and the attack pierce
Output: An integer describing how many attacks are needed to destroy the whole wave.

Examples
Note: If there is not enough pierce n to attack the whole wave, then only the first n bloons are attacked

Input: Rainbow 3 2 10
Starting: 3x Rainbow

    Attack
    1: 12x Black
    2: 20x Yellow 2x Black
    3: 10x Blue 10x Yellow 2x Black
    4: 10x Yellow 2x Black
    5: 10x Blue 2x Black
    6: 2x Black
    7: 4x Yellow
    8: 4x Blue
    9: Done!

Output: 9

This is the 4/0/x Sniper Monkey:
Input: BFB 1 30 1

    1: BFB(670)
    2: BFB(640)
    ...
    13: BFB(10)
    14: 4x MOAB(180)
    15: 1x MOAB(150) 3x MOAB(180)
    ...
    19: 1x MOAB(30) 3x MOAB(180)
    20: 4x Ceramic(60) 3x MOAB(180)
    21: 1x Ceramic(30) 3x Ceramic(60) 3x MOAB(180)
    22: 3x Ceramic(60) 3x MOAB(180)
    ...
    27: 1x Ceramic(30) 3x MOAB(180)
    28: 3x MOAB(180)
    ...
    69: 1x Ceramic(30)
    70: Done!

This is codegolf, so lowest byte-count wins.

• This is extremely complicated. I feel like this will be in unanswered for a while.
– Razetime Aug 10 '20 at 17:04
• In the second example, how is ceramic destroyed without giving out any lower class bloons? – Bubbler Aug 11 '20 at 0:31
• +1 because btd is awesome lol. However this is a very complicated challenge, even for people who know how the mechanics work. It might be better if you limit the problem to 1 pierce only – thesilican Aug 18 '20 at 23:34
• or you could even do a challenge that simply requires calculating the RBE for a bloon wave, that could still be an interesting challenge – thesilican Aug 18 '20 at 23:35
• actually RBE calculating is probably a bit too simple – thesilican Aug 19 '20 at 0:02

Solve the Halting Problem for Oneplis

Oneplis is a "very simple esolang" (I don't want to count this one toward my esolangs) made by me which only has three commands. As you can probably see from the name, it is a subset of 1+, along the lines of Befinge. The three commands are:
• 1, which pushes 1. (Obviously!)
• +, which pops the top two numbers and pushes their sum. (Obviously!)
• #, which pops a number n and jumps to the instruction after the nth (0-based) #.
Oneplis is almost certainly a (very limited) push-down automaton, since it's impossible to decrement a number and impossible to retrieve elements arbitrarily deep in the stack! Oh, and the only way to read a number is with #, which cannot handle arbitrarily large numbers! This is code-golf, so shortest code wins! Your output should be truthy for halting, and falsy for non-halting. You can use any set of three characters for the instructions. You don't need to care if the program jumps to a nonexistent # or tries to execute + when there are <2 numbers on the stack.

Test cases

    11+ -> True
    1##1# -> False
    1## -> True
    11+1+###11+# -> True
    11+##1#1 -> False

Sandbox
• Test cases?
• Shall I require the answers to deal with errors?
• For "nth #", is it 1- or 0-based? (I guess it's 0-based, but you need to be explicit on it anyway.) – Bubbler Aug 20 '20 at 9:39
• @Bubbler Uh, ok.
It's 0-based in 1+, but 0-based indexing does not make much sense in this challenge anyway, since it's impossible to push 0... Should I change it to 1-based? – null Aug 20 '20 at 9:42
• I don't think it's that nonsensical, as the only effect is that all instructions between the first and second #s are unreachable. – Bubbler Aug 20 '20 at 9:47
• @Bubbler Oh, okay then. So if no one objects I'll post this to main. – null Aug 20 '20 at 10:15
• if you don't plan to require answers to deal with errors then also mention that they don't need to worry about popping from an empty stack – Mukundan314 Aug 20 '20 at 11:20
• Or: errors terminate the program. – user253751 Aug 24 '20 at 13:29
• @user253751 Yes, that's also good. Although, I prefer it this way. – null Aug 24 '20 at 13:43

Noncommutative Quineoid Triple

This is the hard mode of Quineoid Triple.
Write three different programs such that all of the following properties hold:
• A(B) = C
• B(C) = A
• C(A) = B
• A(C) = -B
• B(A) = -C
• C(B) = -A
• A(A) = ε
• B(B) = ε
• C(C) = ε
Where:
• f(g) is the output obtained from feeding the program text of g into program f
• -x is the program text of x in reverse (reversed in terms of either raw bytes or Unicode codepoints)
• ε is the empty string / an empty output

Rules and Scoring
• This is code-golf, so the shortest total program length, in bytes, wins.
• Standard quine rules apply.
• Each program can be in any language. Any number of them may share languages or each may use a different language.
• Use any convenient IO format as long as each program uses a consistent convention.
• Functions are allowed, as this counts as "any convenient IO".
• The result of feeding anything other than the program text of one of the three programs is undefined.

Sandbox note: This is partially inspired by There's a fault in my vault!, which I thought had some interesting ideas in it.
This is my effort to frame those ideas in a clearer fashion.

Cops/Robbers: Create a weak block cipher

In cryptography, we often use block ciphers, which are a form of keyed encryption. More specifically, for a plain text string s and a secret key k, we design an encryption function E(s, k) and a decryption function D(ŝ, k) such that if we encrypt and then decrypt the text with the same key, we get back our original text. That is, we have D(E(s, k), k) = s for all possible strings s and k.

One security property a good block cipher has is that it is resistant against key-recovery attacks. This means that if we have the ability to run E(s, k) and D(ŝ, k) for various choices of s and ŝ and collect pairs of encrypted and decrypted text, we cannot tell what the key is. In this challenge, you will design a simple block cipher that is intentionally vulnerable to a key recovery attack, and challenge others to try and exploit it.

The Cops' Challenge
1. Design a block cipher. Design an encryption function E(s, k) and decryption function D(ŝ, k) that take strings (or your language's closest equivalent) of a fixed length of 16 bytes and a key of fixed length 16 bytes and output a string of length 16 bytes. Your E and D functions must have the property that D(E(s, k), k) = s for all 16-byte strings s and k.1 The functions must be deterministic (not use any randomness) and pure (not rely on any outside state). Your E and D must work within the integer/float precision of your language. Specifically, you may not treat floating point as if it's arbitrary precision, nor may you assume integers of arbitrary size if your language utilizes fixed-size integers.
2. Implement a secret key-recovery attack on your block cipher.
Write a program that makes calls to E and D for a secret, unknown key k and fully recovers the key by observing properties of the input/output pairs. The key must be recovered with probability 1 - you may not rely on probabilistic approaches.2 You must treat E and D as black boxes, from which you can only observe their input and output. This means you must not utilize runtime introspection, timing information, or other side effects of the implementation. You must only pass full 16-byte strings to E and D, and not any other type. This means you may not rely on special objects with overloaded operators or similar to glean information about how the input is processed by E and D. Your attack may be adaptive, in that it decides which strings to pass in based on outputs to previous strings. To enforce a practicality limit, your attack must work for a combined total of strictly less than 2^16 = 65536 calls to E and D for any key k.

If the block cipher you design has the property that for keys k1 and k2, E(s, k1) = E(s, k2) and D(s, k1) = D(s, k2) for all s, then we call these keys functionally identical, and your attack may recover any key functionally identical to the original.

That's it! You will reveal both the encryption and decryption functions E and D, and challenge the robbers to find your key recovery attack (or possibly a different one). Clearly, the challenge is to design your E and D to look secure, but with some catastrophic weakness that allows you to recover the key with very few calls. Another approach is to 'trapdoor' the function in some way only known to you. In the spirit of Kerckhoffs's principle, you are encouraged to post a short explanation of what your E and D do, especially if they are written in an esoteric language.
You may use cryptographic functions if you wish, but using them presents several practical problems. Hash functions are designed to be one-way, and you are unlikely to be able to design both an encryption and a decryption function that utilize them. Symmetric ciphers have both encryption and decryption, but are unlikely to allow the key recovery attack outlined here.

If no-one mounts a successful attack in 7 days, you may post your key recovery attack and mark your answer as safe, which prevents it from being cracked. Note your submission can still be cracked until you reveal your attack. Your answer is invalid if you do not follow the rules set above. Your answer can be declared invalid even after it is marked safe, if it turns out your revealed attack does not obey the rules. The shortest safe submission, calculated as the sum of the bytes of the two functions E and D, wins. Your functions must be named.

The Robbers' Challenge
1. Find a vulnerable answer. That is, an answer which hasn't been cracked yet and which isn't safe.
2. Crack it by designing a key recovery attack. Your attack must follow the rules outlined in the cops' section. To recap, this means:
• The total number of calls to E and D with the key k must be strictly less than 2^16
• You must only pass 16-byte strings to E and D, and the key k must initially be unknown
• The attack may be adaptive but must work to recover any 16-byte key k (or a functionally identical key)
• You must treat E and D as black boxes, and may not use runtime introspection, timing information, etc.
If you've found such an attack, post it on the robbers' thread, linking back to the answer. If possible, you should post a link to an online interpreter which allows others to run your attack for various keys k. You are encouraged to post how your answer works, and the maximum number of calls your approach makes to E and D.
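For illustration, a robber could enforce the combined call budget mechanically with a small wrapper around the two oracles. The Budget helper below is a hypothetical convenience, not part of the challenge rules:

```python
class Budget:
    """Shared call counter for the E/D oracles (illustrative helper)."""

    def __init__(self, limit=2**16 - 1):
        # Strictly fewer than 2**16 combined calls are allowed.
        self.remaining = limit

    def wrap(self, oracle):
        # Return a version of the oracle that draws from the shared budget.
        def wrapped(*args):
            if self.remaining <= 0:
                raise RuntimeError("combined E/D call budget exceeded")
            self.remaining -= 1
            return oracle(*args)
        return wrapped
```

A robber would then run their attack against `budget.wrap(E)` and `budget.wrap(D)` instead of the raw functions, guaranteeing the combined total stays within the limit.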
If your attack does not recover the key, but instead a functionally identical one, explain (briefly) why they are functionally identical. The user who cracked the largest number of answers wins the robbers' challenge. Ties are broken by the sum of bytes of cracked answers (more is better).

Example #1
Python 3, 133 bytes (cop)

    E=lambda s,k:''.join(chr((ord(c)+ord(d))%256) for c,d in zip(s,k))
    D=lambda s,k:''.join(chr((ord(c)-ord(d))%256) for c,d in zip(s,k))

Try it online! My program computes the sum of s_i and k_i for each i.

    leaked_key = E('\0'*16,k)
    print('key = %s' % leaked_key)

Try it online! My crack completes in 1 call and uses the fact that 0 + k = k.

Example #2
Python 3, 147 bytes (cop)

    def E(s,k):
     o=''
     V=[*range(256)]
     j=0
     for i in range(16):
      j+=V[i]+ord(k[0])
      j%=256
      V[i],V[j]=V[j],V[i]
      o+=chr(ord(s[i])^j)
     return o
    D=E

Try it online! My program uses a complicated thing.

    leaked_key = ''
    for c in range(256):
     if E('f'*16,chr(c))==E('f'*16,k):
      leaked_key = chr(c)+'x'*15
      break
    print('key = %s' % leaked_key)
    assert E('abcdabcdabcdabcd', leaked_key) == E('abcdabcdabcdabcd', k)
    assert D('abcdabcdabcdabcd', leaked_key) == D('abcdabcdabcdabcd', k)

Try it online! They only ever use the first byte of the key, so we can just bruteforce the first byte and pad with anything to get a functionally identical key. This involves a maximum of 256 calls to E with the secret key.

1. This means that if your language uses null-terminated strings, such as C, then you should be using memcpy-type operations instead of string operations. Since the input length is fixed at 16 bytes, this should be no issue.
2. This requirement forbids most kinds of birthday attack.

Questions to sandbox users:
• I know this is a lot to take in. Is it clear?
• Can anyone think of a trivial way to trapdoor E and D with e.g. a hashing function? I don't think it's possible, but I could be wrong.
• I love this idea!
I think it's written in a pretty clear way. I think you could trivially trapdoor E and D by doing something like if (s == hash("sixteen_byte_str")) return k, but disallowing cryptography functions should fix that – Redwolf Programs Sep 7 '20 at 14:06
• @RedwolfPrograms Glad you think it's clear! Out of curiosity, if you wrote that as your encryption function, how would you write the corresponding decryption function? – Sisyphus Sep 7 '20 at 22:58
• Something like if (ŝ == k) return hash("sixteen_byte_str"), you'd just need to ensure there's no way it could be confused with a value that legitimately encrypts to k (which would be easily doable by replacing it with whatever hash("sixteen_byte_str") would typically encrypt to). Using crypto functions to trivially win a CnR challenge is practically a loophole, and is likely to be downvoted anyway. (Btw, when I write x == hash("sixteen_byte_str"), I mean hash(x) == "sixteen_byte_str") – Redwolf Programs Sep 8 '20 at 1:51
• Actually, wait, I'm being stupid. I think there's no way to not have it return hash(x) == "sixteen_byte_str" in one of the two functions, so there doesn't appear to be a trivial way to trapdoor it. I'd still disallow crypto in case someone uses some sort of fancy asymmetric thing, but I can't figure out if there is one. – Redwolf Programs Sep 8 '20 at 12:08

Take 6!

A good card game is a wonderful thing. I got me a nice fresh set of Take 6! Too bad though, I have no-one to play with. And so I turn to you!

The Game
The game is played with a set of 104 cards, numbered 1 to 104 inclusively. Each card has a number of 'cows' attached.
Here's a quick Python function to calculate the number of cows:

    def cows(card):
        out = 1
        if (card % 5) == 0:
            out += 1
        if (card % 10) == 0:
            out += 1
        if (card % 11) == 0:
            out += 4
            if (card % 5) == 0:  # C-c-c-combo
                out += 1
        return out

Therefore, there are a total of
• 1 card with 7 cows (number 55)
• 8 cards with 5 cows (the other multiples of 11: 11, 22, 33, 44, 66, 77, 88, 99)
• 10 cards with 3 cows (multiples of ten: 10, 20, 30, 40, 50, 60, 70, 80, 90, 100)
• 9 cards with 2 cows (other multiples of five: 5, 15, 25, 35, 45, 65, 75, 85, 95)
• 76 cards with 1 cow (all other cards)

The game is played by up to 10 players. Each player is given 10 cards. 4 cards are placed on the table as the starts of 'rows'. Then 10 turns of play take place. Then, results are calculated.

A turn
Each player selects one of their remaining cards. At the same time, they reveal their selected cards. Going in the order of lowest card number, the player whose card it is must place it into a row according to these rules:
1. If there is a row whose top card has a lower number than the player's card, the card must be placed at the end of the row with the highest such top card. If their card is the sixth in a row, they take the first 5 cards and put them on their result pile, leaving theirs as the new start.
2. If no such row exists, they must pick one of the rows, take all the cards there to their result pile, and leave their card as the new start.

Examples:

    row tops: 10 20 30 40
    played: 25

must be placed on the row with a 20, creating the configuration 10 25 30 40, with a possible cow gain

    row tops: 10 20 30 40
    played: 9

pick any row, creating for example 10 20 9 40, but guaranteed to gain cows

Counting
The sum of cow values of the cards in a player's result pile is their score. The lower the score the better. Scores may be added up over several games, creating an overall score for a match.

Bots
Bots will be standalone programs.
Everything belonging to a bot will be placed in a single directory; the name of the directory will be used as the name of the bot. A launch script named launch (may be the entire bot) must be provided. If necessary, a compilation script named build may be provided. Both scripts shall be placed directly in the bot's directory and should use shebangs to specify how they are to be run. Bots shall not interfere with other bots, the controller, or the git repositories used. The bots will have the option of storing extra information in files in their own directory. It will be wiped when a fresh series is being run (such as after adding a new bot). An override input format may be provided. I intend to use StringTemplate for this; I'll write up some details when working on the controller. The default format will have all messages newline-terminated.

Once launched, the bot will first be given its cards, as a list of card numbers, where the numbers may or may not be ordered. The default format will be

    cards 0 1 2 3 4 5 6 7 8 9

No response is expected.

For each round, the bot will be prompted with the current state of the grid, that is, the number of cards in each row, the sum of cows in each row and the top number card in the row. The default format will be

    count 1 2 3 4
    cows 5 6 7 8
    top 11 20 22 35

The bot shall answer with the number of one of its remaining cards.

The list of all the cards used by all bots in the round will be given to each bot. Note that this includes the bot's own card. The order of bots in this message will be consistent within a game. The default format will be

    used 0 1 2 3 4 5 6 7 8 9

No response is expected.

If placement rule 2. has to be invoked, the bot will receive a message containing the board state at the time when it needs to pick a row. The default format will be

    pickrow
    count 1 2 3 4
    cows 5 6 7 8
    top 11 20 22 35

The bot shall respond with the number of the row it wishes to take. The rows will be 0-indexed for this.
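To make the message flow concrete, here is a minimal Python sketch of a primitive bot. The exact line framing is my assumption (each keyword beginning its own newline-terminated line, with pickrow on a line of its own before the repeated board state), and the strategy is just a placeholder:

```python
def run_bot(lines):
    # Process protocol messages one line at a time and return the responses
    # the bot would print. Assumed framing: "cards ..." once; each round
    # "count ...", "cows ...", "top ..." (answer with a card); a forced take
    # arrives as "pickrow" followed by the same three board-state lines.
    hand, picking, out = [], False, []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        key, args = parts[0], parts[1:]
        if key == "cards":
            hand = sorted(int(x) for x in args)
        elif key == "pickrow":
            picking = True
        elif key == "cows" and picking:
            costs = [int(x) for x in args]
            out.append(str(costs.index(min(costs))))  # take the cheapest row
        elif key == "top":
            if picking:
                picking = False                 # end of the pickrow board state
            else:
                out.append(str(hand.pop(0)))    # play lowest remaining card
        elif key in ("invalid", "score"):
            break
    return out
```

In a real bot, each element of the returned list would instead be printed (newline-terminated and flushed) at the corresponding point in the conversation.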
If the bot's move results in a gain of result cows, it will be informed of which cards and how many cows it has gained (note that the lower the number the better). The default format will be

    cardgain 1 2 3 4 5
    cowgain 6

No response is expected.

At the end, all bots will be shown their score as well as the scores of all others, in the order consistent with the used cards message. The default format will be

    score 30
    others 0 1 2 3 4 5 6 7 8

No response is expected.

If the bot makes an invalid move, it will be delivered a special message informing it of such. From that point the bot's current game is over. It gets 100 points of penalty. The default format will be

    invalid

A timely shutdown is expected. The bot may of course try to save information to its private file at any time, including at the end. After the final message, the bot shall terminate in a timely manner.

Scoring will be added up over many games; the number depends on how fast the games end up running, but at least 100 sounds reasonable to me. Bots will be placed in a separate github repository TODO for easy setup and resetting. Bots that need a compilation script but don't have one will be given one.

Controller
Work has started at https://github.com/MrRedstoner/Take6KOTH
The controller will be designed to run in Java 1.8+, using the Process API to launch bots.

Notes:
While the number of bots is too low, it will be padded to 10 by using multiples of primitive bots. The tournament style once 11+ submissions exist is, for now, playing all subsets of size 10. I intend to write up at least a few primitive bots, to get the games going. Something like using cards in the order they were given, or randomly. These will also demonstrate the custom input functionality. Maybe even one that uses external input, to let me play for fun! Limits for execution time, storage of data etc. are not given at this time. If bots start to behave excessively, limits may be added.

Sandbox notes: Any better idea for the tournament?
Should bots be given the names of their competitors as well? Currently leaning towards yes.
Planned tags:
• Even though most people can read python, you should still include a written description of how the cows are counted. As it is, your program counts twice for it being divisible by 5 in the case of 55, is that intentional? – FryAmTheEggman Sep 18 '20 at 18:13
• @FryAmTheEggman it is indeed intentional, it's a combo for a reason :D. The result also matches what wikipedia describes about the game. Should have some more to edit soon so I'll make the change then. – Mr Redstoner Sep 18 '20 at 18:16
• But when do you take 720?? /s – Jo King Sep 21 '20 at 9:39

Output all of printable ASCII using all of printable ASCII
Posted
• "Irreducible" isn't really an observable requirement; I'd recommend looking into using pristine-programming to make it an objective criterion. – hyper-neutrino Oct 12 '20 at 18:31
• What do you mean by "observable"? "irreducible" simply means you can't purely remove characters (not purely substrings) from the program and have it still work (not merely not error). That's pretty objective, is it not? – pxeger Oct 12 '20 at 18:39
• Actually, yes it seems you're right, I was probably thinking of some other common criterion that isn't valid. Otherwise the challenge looks good, doesn't seem to be a duplicate. I would say this isn't kolmogorov-complexity since it's not constant, but it is restricted-source, albeit not in the common usage. – hyper-neutrino Oct 12 '20 at 18:48
• Can my program contain additional non-ASCII bytes? – Adám Oct 12 '20 at 19:00
• @Adám yes, in the post it says "Your program, and its output, can contain any additional non-printable-ASCII bytes (bytes, not characters) if you like, such as newlines". "non-printable-ASCII" includes "non-ASCII" – pxeger Oct 12 '20 at 19:01
• Ah, I see. Maybe clarify that you mean both non-[printable-ASCII] and [non-printable]-ASCII.
– Adám Oct 12 '20 at 19:03
• Perhaps subtract 95 from each score so that scores look more reasonable – lyxal Oct 13 '20 at 10:51
• @Lyxal my reasoning for not doing that was because I suspect most answers will be quite a lot longer in order to make sure they're irreducible, it would complicate things, and IMO it doesn't really matter if they're that length – pxeger Oct 13 '20 at 10:55

Round a Matrix

Your input is a 2d array of nonnegative floats A. It can be supplied in whatever format is most acceptable for your language. It can have any dimensions. Let r and c be the 1d arrays of row and column sums of A respectively, rounded to the nearest integer, with the rule that 0.5 is rounded up to 1. Your task is to output a 2d array of nonnegative integers B such that |b_{ij} - a_{ij}| < 1 for all i and j, and also the row and column sums of B are equal to r and c respectively. In other words, B is obtained by rounding each element of A up or down, in such a way that the row and column sums are preserved. There may be many possible solutions. In this case, you only need to output one of them. If there is no solution, your program's behaviour can be undefined.

Example:

    A = 1.2 3.4 2.4
        3.9 4.0 2.1
        7.9 1.6 0.6

In this case, the row sums are [7.0, 10.0, 10.1] and the column sums are [13.0, 9.0, 5.1], so after rounding these, you get r = [7 10 10] and c = [13 9 5]. One acceptable solution is

    B = 1 3 3
        4 4 2
        8 2 0

This is code golf, so the shortest code wins.

Motivation
I am also interested in what clever algorithms people can come up with. I guess the most obvious is just to do a random search, but that can take a very long time, even if the array is only 10x10 or so.

Questions
• Is it clear? Please can you edit it if it's not in the right format?
• Has it appeared here before? (I don't think so, because I was searching Stack Overflow for a while in order to come up with a solution to this.)
• Is there always a solution under the conditions given here?
• Would it be better in some other format than code golf?
• Should the condition |b_{ij} - a_{ij}| < 1 be |b_{ij} - a_{ij}| <= 1?
• Since you want optimal, interesting solutions, rank by time complexity. You'll get fewer answers, but they will be more optimal than a direct brute force approach. – Razetime Oct 22 '20 at 6:53
• The suggestion of using complexity isn't often a good one - most challenges here that try to do that wind up closed or unanswered. It would be much simpler to go by execution time for some number of test cases that you pick. For the actual question, I think you should explicitly say that r and c are computed by summing and then rounding (assuming that is the correct order) as it isn't precisely clear from what you have right now. – FryAmTheEggman Oct 22 '20 at 20:34

The Fibonacci Rectangular Prism Sequence (posted)
• These are the square roots of A127546. It looks like there are ways to generate this sequence shorter than just generating Fibonacci numbers and adding their squares. So, this doesn't strike me as a duplicate but an interesting challenge in its own right. I'd recommend removing the square-root step from the challenge and just asking for the sum of the three squares, which is a whole number. This might also allow for more interesting recursive solutions. You should include test cases, perhaps something like the first 15 elements of the sequence and maybe one big one. – xnor Oct 27 '20 at 0:39

Example game
The winning numbers are as follows
• [2, 18, 1, 15, 7]
Four people bought tickets with the following prices and numbers
• $6, [9, 5, 6, 15, 22], one match, score of 4, weight of 24
• $2, [2, 25, 17, 7, 7], two matches, score of 16, weight of 32.
Notice how only the second 7 counts; order matters
• $67, [11, 16, 9, 20, 16], no matches, score of 1, weight of 67
• $1, [12, 19, 6, 25, 2], no matches because the 2 is in the wrong spot, score of 1, weight of 1
The total pool is $76, $68.4 after we take our cut, which is then paid out based on the weights. The sum of all weights is 124.
• First ticket gets 24/124, $13.24
• Second ticket gets 32/124, $17.65
• Third ticket gets 67/124, $36.96
• Fourth ticket gets 1/124, a whole 55 cents
• I think you should explain that the winning numbers are not necessarily distinct in rule 2, instead of leaving it until the examples. Likewise, you should state the 'order matters' rule earlier. – Dingus Jan 12 at 11:57
• @Dingus I added those clarifications to rules 1-3. Thoughts? – Daffy Jan 15 at 23:46
• Looks good to me. – Dingus Jan 16 at 5:02

Bot Factory KoTH (2.0)
Posted to main
• "in a way which produces side effects other than intended" Can you define that more precisely? Are you allowed to use global variables to share information between a main bot and worker ones? Also can worker bots collide with their owner? – Command Master Jan 18 at 12:25
• @CommandMaster Yes they can collide, and using global variables for any sort of communication is not allowed. I will probably add an "official" way of doing that at some point. – Redwolf Programs Jan 18 at 14:32
• I think it could be more fun if the size of the original bot doesn't matter, but I'm not sure – Command Master Jan 18 at 16:00
• @CommandMaster The problem with that is that you could make a bot that uses really long comments to manipulate the character pool so that the characters it needs for its worker bots appear more often – Redwolf Programs Jan 18 at 18:43
• It might be a good idea to provide a map of the factory in the API.
Also, the effect of program size on the initial score might even be too small: the difference between a program of length 100 and a program of length 5000 is less than 61 collected characters within 100000 turns. (a linear function might be better) – the default. Jan 22 at 15:54
• @thedefault. Maybe the initial score penalty will be double the length of the program, given that there's 100k turns to make up for it. What would the output of the map be like? A 2d array? – Redwolf Programs Jan 22 at 16:47
• @RedwolfPrograms yes (or perhaps a function .get(x,y) so that submissions can't modify the map?). I think double the length of the program would be too much. (what about half the length of the program? that'd also magically explain why creating a worker costs half its size in points) – the default. Jan 22 at 16:56
• @thedefault. The problem with a 2d array is that it's an infinite map, so if one bot is really far away from the rest you could end up with a really huge array a naive bot might be searching. I think a function like at(coords) would work well though, maybe returning an object listing all of the chars at that location, and any bots who are there. I do like the half length idea. – Redwolf Programs Jan 22 at 17:00

Is each bracket matched?

Given a string consisting only of the characters ()[]{}, determine if each type of bracket is matched--that is, every ( corresponds to one later ), every [ corresponds to one later ], and every { corresponds to one later } (and vice-versa). Pairs are allowed to overlap: ([)] is just as valid as ([]). Output one consistent value for one classification and anything else for the other, or follow your language's truthiness semantics (inverted if you want).

Test cases:
Matched:

    ()
    []
    {}
    ()[]{}
    ()()([])
    {[][}]
    {{{}}}
    ([{}]){([]})
    [(()())(((())))(]()(()))

Not matched:

    (
    ]
    {{}
    [)
    (()())((((()))))(()()(())(())))
    {}{}{)
    [())[])]
    )(

Meta
• Does this admit a variety of interesting solutions?
• Would it be better to add <> as a bracket type? Have fewer bracket types? Arbitrarily many?
• I'm writing this up entirely because I'm surprised it hasn't been asked yet, so although I have looked, this might still be a duplicate.
• Although I don't think it's necessarily unclear, I feel like the specification could be worded better.

Is this a valid Irish word?

In Irish, most consonants are divided into broad (velarized) and slender (palatalized) variants, and the orthography marks them with neighboring vowels, which are similarly divided. This gives rise to the caol le caol agus leathan le leathan (slender with slender and broad with broad) rule – a medial sequence of consonants must have the same class of vowel on either side: in leabhar, bh is surrounded by two broad vowels, so it is broad as well, and in cailín, l is surrounded by two slender vowels, so it is slender. a, o and u are broad and e and i are slender (similarly for the vowels with the fada: á ó ú é í); ae is also considered broad. Given a word, output whether it follows this rule.

Input
You may assume that the input has only the following characters with their uppercase variants:

    aábcdeéfghiílmnoóprstuú
    AÁBCDEÉFGHIÍLMNOÓPRSTUÚ

Tests
Valid:

    deartháireacha
    madra
    nuachtán
    gaolta
    ceannasaithe
    snámhann
    fómhair
    laethanta
    béar
    Bealtaine
    hAoine
    ball
    tree
    ggg

Invalid:

    codegolf
    delta
    alishanoi
    ABI
    anseo
    breithlá

(Note that anseo and breithlá are Irish words, but they happen not to follow this rule. You should still output a falsy answer for them for the sake of simplicity.)
• Jebus, I haven't heard "slender with slender and broad with broad" in a couple of decades, that gave me a flashback! You note that "anseo" ("here", for the benefit of the non-Gaeilgeoirí) doesn't follow the rule but you should probably specify the expected output for it - I'd suggest against special-casing it and having it be invalid.
– Shaggy Jun 2 '19 at 19:51 • This needs a much better definition of what is a broad consonant versus what is a slender consonant, unless I'm not understanding the challenge. – AdmBorkBork Jun 3 '19 at 12:56 • @AdmBorkBork as I understood it, broad and slender consonants are indistinguishable in writing, the point is to detect consonants that would have to be both at the same time. – FrownyFrog Jun 13 '19 at 4:49 • I'd suggest listing their uppercase variants – l4m2 Jun 25 '19 at 15:52 • 'in leabhar, bh is surrounded by two broad consonants, so it is broad as well, and in cailín, l is surrounded by two slender consonants'. Did you mean 'vowels' instead of 'consonants' here? – Dingus Feb 8 at 5:06 • Yes, that should have been 'vowels'. – bb94 Feb 8 at 19:14 • I still have no idea what the types of consonants and vowels are. – user202729 Feb 9 at 2:12 • This looks like a good challenge to me, to make it more understandable I would just explicitly list the sets of broad vowels, slender vowels, and consonants. (Rather than "here are all the characters: aábc..." have "here are all the broad vowels:aAáÁ...", "here are all the slender vowels:...", "here are all the consonants:...") – Leo Feb 10 at 5:42 • Also, you should clarify what should happen if the word does not contain a sequence of consonants surrounded by vowels (e.g. "ball", "thx", "tree") – Leo Feb 10 at 5:44 • "thx" is assumed to never be passed in as input. "ball" and "tree" would return true, because there's no contradiction between broad and slender vowels around consonants that can occur here. – bb94 Feb 12 at 6:09 Count strictly overlapping substrings code-golfstringcountingsubsequence Posted • Is it correct that 1 is never a valid result? – Adám Jan 28 at 17:04 • @Adám yes; do you think I should add that? – pxeger Jan 28 at 17:04 • Probably a good idea. – Adám Jan 28 at 17:05 • You need test cases with longer bs that can overlap themselves in multiple ways. 
– Adám Jan 28 at 17:27 Pad a jagged array to be square code-golfarray-manipulation Posted • Add a test case for pad value -1 or something like that. – user202729 Feb 12 at 14:55 • Can any dimension of the input array be 0? [please review other sandbox posts] – user202729 Feb 12 at 14:56 • Also add a test case with an array consisting of only fill values. – user202729 Feb 12 at 14:58 • @user202729 what do you mean by "[please review other sandbox posts]"? I don't see the need for a test case with pad value -1, because the type of the elements in the array is answer-defined. I'll clarify the other two though – pxeger Feb 12 at 16:22 • I suggest that you specify more clearly what input formats are allowed, that is, what counts as an "array". For example, is it acceptable to take lines of text with space as separator within each line? Or use two types of brackets, such as {[1, 5, 3], [4, 5], [1, 2, 2, 5]}? – Luis Mendo Feb 12 at 18:31 • @LuisMendo input should be taken in your language's natural representation of nested arrays. If it doesn't have a builtin array representation, then take it in some kind of text representation like I mentioned in the rules section – pxeger Feb 12 at 18:49 • The problem with saying "natural representation" is that a language may have more than one way of representing nested tuples. That said, I think what you have in the question is alright - perhaps to clarify what Luis is talking about you could add: "input can be any unambiguous representation of a jagged array"? I think what Luis may be getting at is that there could be a problem with e.g. a Python array contains meta-information (the length) while a C array wouldn't, but usually I think that is left out. – FryAmTheEggman Feb 12 at 19:26 • To explain my point better: MATLAB (or MATL) can use curly braces for arbitrary arrays, and square brackets for rectangular arrays. So either {{1, 5, 3}, {4, 5}, {1, 2, 2, 5}} or {[1, 5, 3], [4, 5], [1, 2, 2, 5]} could be used as input. 
The latter is probably better to reduce code length. Can we just choose the most convenient one? What is the limit where choosing a convenient format counts as "pre-processing" the input and is not allowed? All this is language-dependent, but some general specification would be needed – Luis Mendo Feb 12 at 19:33 • @pxeger Something that I tend to add to my sandbox review comments recently, hoping to reduce the problem of the sandbox posts not being reviewed enough. – user202729 Feb 13 at 2:37 • @LuisMendo You can probably choose any convenient one (I think that's the common consensus?) – user202729 Feb 13 at 2:38 • The "-1" thing is just to make people notice that the value to be padded is an input rather than hard coded by the code. – user202729 Feb 13 at 2:46 Snail word Very similar to other challenges Reconstruct an integer from its prime exponents Draw four colorful quarter circles The challenge is to reproduce this image in your favorite language: • Your image must be at least 400 by 400 pixels. • The fill colors don't need to be the same as in the image but they must be different from each other. • You must include the outlines but they can be any visible thickness you choose. • The quarters should be at the same orientation as in the image. • Your image must have four quarter circles aligned as in the image which each touch the edge of the circle at a point. • Your code must take input which specifies the location, in pixels, of the point where the quarter-circles meet; you can take this input in any reasonable format, but the units must be pixels (no relative units, such as a fraction of the width/height of the image). You can assume these inputs are always within the bounds of the outer circle. You can also assume that the inputs are such that all four quarter circles can be drawn within the circle. 
Here is some LaTeX code as an example:

\documentclass[tikz,margin=3mm]{standalone}
\usetikzlibrary{calc,through}
\usetikzlibrary{intersections}
\begin{document}
\begin{tikzpicture}[inner sep=0pt, outer sep=0pt]
\coordinate (point) at (-0.1,0.4);
\draw [name path=mycirc] (0,0) circle [radius=1];
\path [name path=di-1] (point) -- ++(-2,2);
\path [name path=di-2] (point) -- ++(-2,-2);
\path [name path=di-3] (point) -- ++(2,-2);
\path [name path=di-4] (point) -- ++(2,2);
\foreach \col [count=\i] in {yellow,red,blue,brown}{
  \fill [red, name intersections={of=mycirc and di-\i}]
    (intersection-1) circle [radius=0.05] node (inter-\i) {};
  \fill[\col,draw=black,rotate around={(\i+3)*90:(point)}]
    (point) let \p1 = ($(point) - (inter-\i)$) in
    arc [start angle=0, end angle=90, radius={0.707*veclen(\x1,\y1)}]
    -- +(270:{0.707*veclen(\x1,\y1)}) -- cycle;
}
\end{tikzpicture}
\end{document}

[I would love help on how to improve this challenge.] • Describing the exact ratios of the shapes would be helpful in drawing them – user Mar 5 at 15:34 • Those are quarter-circles, not semicircles – pxeger Mar 5 at 15:46 • Might be more interesting if the center of the shape (where the petals meet) was an input – Zaelin Goodman Mar 5 at 15:54 • @pxeger ahem.. thanks :) – Anush Mar 5 at 15:57 • @ZaelinGoodman Could you say exactly how that could be specified? – Anush Mar 5 at 15:59 • @Anush Since the image is 300 x 300 pixels, you could say something like you must take input which specifies the location, in pixels, of the point where the quarter-circles meet; you can take this input in any reasonable format, but the units must be pixels (no relative units, such as a fraction of the width/height of the image). You can assume these inputs are always within the bounds of the outer circle. – Zaelin Goodman Mar 5 at 16:03 • "Your image must be at least 400 by 400 pixels" what about vector graphics? If some one choice to output the image with vector graphics format. How to define its width and height?
– tsh Mar 9 at 5:51

Gray code... Gray code?

Your task is to print (in an easily readable and consistent format) the binary representations of the numbers 0-255 in some order such that only one bit is altered between two consecutive numbers. Each successive byte of the source code after the first can only change one bit from the previous byte.

Other Information

Example valid code (in utf-8): q1!#c. Here, q (01110001) and 1 (00110001) are different in only one bit, and so on

Example invalid codes (in utf-8): Q1!, "!"

Example valid outputs (separated by an empty line):

10101010 10101011 11101011 ... 01010101

[10, 11, 1011, 1111, 1110, ...]

10 0 1 11 111 ...

Example invalid outputs (separated by an empty line):

0000000100000011000000100000000000000100...

0 01 10 11 100 ...

0 1 ... 100000000 110000000 ... 11111111

00 01 11 10 0100 0101 ...

0 1 3 2 5 6 ...

Notes:
• A character can be stored as two bytes, but the bytes must differ by only one bit
• If your interpreter ignores a character (like Whitespace ignores almost all characters) it cannot be used

• Is printing 1 3 2 6 7 5 4 and onwards ok? – PkmnQ Feb 28 at 4:36 • What does "some form of gray code" mean exactly? / Clarify that the restriction part applies to the source code of the program. – user202729 Feb 28 at 6:49 • I'll change it to binary gray code to avoid confusion. – Hyperbole Feb 28 at 15:55 • You should describe what the gray code is to avoid ambiguities. – FryAmTheEggman Feb 28 at 21:49 • Do you think this challenge is too hard? – Hyperbole Mar 2 at 15:01 • Is it okay if the output is printed in decimal instead of binary? – user202729 Mar 3 at 13:20 • Perhaps add "addition of leading zeroes doesn't count as a change" and some examples to illustrate/example valid output (just for challenge accessibility, this is implied from the definition (and the Wikipedia page)) – user202729 Mar 3 at 13:21 • Would be hard for practical languages, but for everything-are-valid languages it should not be a problem.
Good challenge idea. – user202729 Mar 3 at 13:22 • Does the last statement means "Your program should not work by removing any single bytes", or "Any subsequence of your program should not work."? – tsh Mar 16 at 11:43 • @tsh neither. It means that you can't use a character if it is completely skipped over by the interpreter. For example, in the python code "if len('abc') < 4: print('Hello, World')" the "c" can still be used because it is not skipped over by the interpreter. – Hyperbole Mar 18 at 17:29
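Setting aside the source-layout restriction (which is what makes the challenge hard, and is language-specific), the required output order exists for any bit width: the standard reflected binary Gray code, whose i-th code word is i XOR (i >> 1). A plain illustrative sketch, not a golfed answer; the function name is mine:

```python
def gray_codes(n_bits=8):
    """Reflected binary Gray code: consecutive entries differ in one bit."""
    return [i ^ (i >> 1) for i in range(1 << n_bits)]

if __name__ == "__main__":
    # Print every number 0..255 exactly once, in Gray-code order,
    # as fixed-width binary so the single-bit changes are visible.
    for g in gray_codes():
        print(format(g, "08b"))
```

Printing fixed-width (zero-padded) binary also sidesteps the "addition of leading zeroes" ambiguity raised in the comments.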
https://forum.bebac.at/forum_entry.php?id=22385&order=time
## AUC from 0 to tau at steady state [NCA / SHAM] Dear Martin, nice to hear from you. Hope you are healthy. ❝ Should the concentration for time point zero (for calculation of AUC from 0 to tau) be somehow related to the pre-dose concentrations Yep. ❝ or set to zero Nope. ❝ where I think setting to zero would be in-line with the principle of superposition (e.g. have a look here). All the best Detlew
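For readers outside the field: Detlew's answer says that, at steady state, AUC(0–τ) should use the measured pre-dose concentration as the t = 0 value rather than zero. A minimal sketch with the linear trapezoidal rule; the sampling times and concentrations below are purely illustrative:

```python
def auc_trapezoidal(times, concs):
    """Linear trapezoidal AUC over the given sampling times."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for (t1, c1), (t2, c2) in zip(zip(times, concs),
                                             zip(times[1:], concs[1:])))

# Steady-state dosing interval tau = 12 h: the pre-dose concentration
# (here 10, not 0) is used at t = 0, consistent with superposition
# (at steady state C(0) equals C(tau)).
times = [0, 1, 2, 4, 8, 12]       # h
concs = [10, 40, 35, 25, 15, 10]  # ng/mL
auc_0_tau = auc_trapezoidal(times, concs)
```

Had the t = 0 value been set to zero instead, the first trapezoid (and hence the whole AUC) would be systematically underestimated.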
https://math.stackexchange.com/questions/813276/particular-nontrivial-group-has-a-nonidentity-automorphism
# Particular nontrivial group has a nonidentity automorphism [duplicate]

If $G$ is a nontrivial group that is not cyclic of order 2, then $G$ has a nonidentity automorphism.

This is an exercise from Hungerford's Algebra, in Chapter IV (Modules). Can you help me please?

## marked as duplicate by Derek Holt, May 29 '14 at 8:05

group-theory

• Hint If $G$ is non-Abelian, then $G$ has a non-identity inner automorphism. – Geoff Robinson May 29 '14 at 3:45

If $G$ is not abelian, take $a,b\in G$ such that $ab\ne ba$. Then $x \mapsto axa^{-1}$ is a nonidentity automorphism. If $G$ is abelian, then $x \mapsto x^{-1}$ is a nonidentity automorphism, unless $G$ is a product of $C_2$'s. In this case, write $G=C_2 \times C_2 \times H$. Then $(x,y,z)\mapsto (y,x,z)$ is a nonidentity automorphism.

If $G$ is not abelian, then there exists $g\in G$ that does not lie in the center. Define $\varphi:G\rightarrow G$, $\varphi(x)=gxg^{-1}$. This is a non-trivial automorphism. If $G$ is abelian and has an element of order $>2$, then $\varphi(x)=x^{-1}$ defines a non-trivial automorphism. Finally, if $G$ is abelian and every element has order $\leq 2$, then pick an automorphism that non-trivially permutes a generating set for $G$. This will create a non-trivial automorphism.
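The case split above can be sanity-checked by brute force on the smallest troublesome group, the Klein four-group $C_2 \times C_2$: inversion is the identity there, and the coordinate swap supplies the required nonidentity automorphism. A small sketch (representation and helper names are mine):

```python
from itertools import product

# Klein four-group C2 x C2: elements are bit pairs, operation is XOR.
G = list(product((0, 1), repeat=2))

def op(a, b):
    return (a[0] ^ b[0], a[1] ^ b[1])

def is_automorphism(phi):
    """phi is a dict G -> G; check it is a bijective homomorphism."""
    bijective = sorted(phi[g] for g in G) == sorted(G)
    homomorphic = all(phi[op(a, b)] == op(phi[a], phi[b])
                      for a in G for b in G)
    return bijective and homomorphic

swap = {g: (g[1], g[0]) for g in G}   # (x, y) -> (y, x)
inv = {g: g for g in G}               # x -> x^{-1} is trivial: every order <= 2
```

Here `swap` is a nonidentity automorphism while `inv` collapses to the identity, exactly as the answers predict for products of $C_2$'s.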
https://indico.cern.ch/event/433345/contributions/2358113/
# Quark Matter 2017 5-11 February 2017 Hyatt Regency Chicago America/Chicago timezone ## Machine learning methods in the analysis of low-mass dielectrons in ALICE Not scheduled 2h 30m Hyatt Regency Chicago #### Hyatt Regency Chicago 151 East Wacker Drive Chicago, Illinois, USA, 60601 Board: L20 Poster ### Speaker Sebastian Lehner (Austrian Academy of Sciences (AT)) ### Description Results from non-perturbative QCD indicate that chiral symmetry may be restored in the hot and dense matter produced in relativistic heavy-ion collisions. This restoration would affect the vector meson mass spectrum and could be examined with the ALICE detector at the LHC. One of the most promising probes to study these effects are dileptons ($\mu^{+}\mu^{-}$ and $e^{+}e^{-}$) from $\rho$ meson decays, since they reach the detector without significant final-state interactions. In order to precisely measure the low-mass dielectron spectrum, a high-purity sample of $e^{+}e^{-}$ pairs will be required. Whilst traditional cut-based methods can provide high-purity samples, they suffer from low efficiency. Multivariate particle identification could in future be used to alleviate this drawback. The main background in the analysis of dielectrons consists of combinatorial $e^{+}e^{-}$ pairs (S/B $\sim 10^{-3}$ for $0.3 < M_{ee} < 1$ GeV/$c^{2}$). This background contribution can be suppressed by rejecting $e^{+}$ and $e^{-}$ tracks that originate from photon-conversion processes. Numerous observables allow background to be discriminated from signal dielectrons, which motivates a multivariate approach to the classification of $e^{+}e^{-}$ pairs. The employed machine learning methods and their performance on Monte Carlo data will be presented, as well as their application in the analysis of LHC Run 2 data. Preferred Track Electromagnetic Probes ALICE ### Primary author Sebastian Lehner (Austrian Academy of Sciences (AT))
https://www.deepdyve.com/lp/ou_press/augmented-lagrangian-method-for-optimal-partial-transportation-VgteHYjKL6
Augmented Lagrangian Method for Optimal Partial Transportation

Abstract The use of the augmented Lagrangian algorithm for optimal transport problems goes back to Benamou & Brenier (2000, A computational fluid mechanics solution to the Monge–Kantorovich mass transfer problem. Numer. Math., 84, 375–393), in the case where the cost corresponds to the square of the Euclidean distance. It was recently extended in Benamou & Carlier (2015, Augmented Lagrangian methods for transport optimization, mean field games and degenerate elliptic equations. J. Optim. Theory Appl., 167, 1–26) to the optimal transport with the Euclidean distance and Mean-Field Games theory, and in Benamou et al. (2017, A numerical solution to Monge's problem with a Finsler distance cost. ESAIM: M2AN) to the optimal transportation with Finsler distances. Our aim here is to show how one can use this method to study the optimal partial transport problem with Finsler distance costs. To this aim, we introduce a suitable dual formulation of the optimal partial transport, which contains all the information on the active regions and the associated flow. Then, we use a finite element discretization with the FreeFem++ software to provide numerical simulations for the optimal partial transportation. A convergence study for the potential together with the flux and the active regions is given to validate the approach.

1. Introduction

The theory of optimal transportation deals with the problem of finding the optimal way to move materials from a given source to a desired target in such a way as to minimize the work. The problem was first proposed and studied by G. Monge in 1781, and then L. Kantorovich made fundamental contributions to the problem in the 1940s by relaxing the problem into a linear one.
Since the late 1980s, this subject has been investigated under various points of view, with many applications in image processing, geometry, probability theory, economics, evolution partial differential equations (PDEs) and other areas. For more information on the optimal mass transport problem, we refer the reader to the pedagogical books (Villani, 2003; Ambrosio et al., 2005; Villani, 2009; Santambrogio, 2015). The standard optimal transport problem requires that the total mass of the source is equal to the total mass of the target (balance condition of mass) and that all the materials of the source must be transported. Here, we are interested in the optimal partial transportation, that is, the case where the balance condition of mass is excluded and the aim is to transport effectively a prescribed amount of mass from the source to the target. In other words, the optimal partial transport problem aims to study the practical situation where only a part of the commodity (respectively, consumer demand) of a prescribed total mass $\mathbf{m}$ needs to be transported (respectively, fulfilled). This generalized problem brings out additional variables called active regions. The problem was first studied theoretically in Caffarelli & McCann (2010) (see also Figalli, 2010) in the case where the work is proportional to the square of the Euclidean distance. Recently, in Igbida & Nguyen (2017), we give a complete theoretical study of the problem in the case where the work is proportional to a Finsler distance $d_{F}$ (covering, in particular, the case of the Euclidean distance), where $d_{F}$ is given as follows (see Section 2):

$$ d_{F}(x,y) := \inf_{\xi\in \mathrm{Lip}([0,1];\overline{\Omega})} \left\{ \int_0^1 F(\xi(t),\dot{\xi}(t))\,\mathrm{d}t \;:\; \xi(0)=x,\ \xi(1)=y \right\}. $$

Concerning numerical approximations for the optimal partial transport, Barrett & Prigozhin (2009) studied the case of the Euclidean distance by using an approximation based on nonlinear approximated PDEs and Raviart–Thomas finite elements. Benamou et al. (2015) and Chizat et al.
(2016) introduced general numerical frameworks to approximate solutions to linear programs related to the optimal transport (including the optimal partial transport). Their idea is based on an entropic regularization of the initial linear programs. This is a static approach to optimal transport-type problems and needs to use (approximated) values of $d_{F}(x, y)$. In this article, we use a different approach (based mainly on Benamou & Brenier, 2000; Benamou & Carlier, 2015; Igbida & Nguyen, 2017) to compute the solution of the optimal partial transport problem. We first show how one can directly reformulate the unknown quantities (variables) of the optimal partial transport into an infinite-dimensional minimization problem of the form

$$ \min_{\phi\in V}\ \mathcal{F}(\phi)+\mathcal{G}(\Lambda\phi), $$

where $\mathcal{F}, \mathcal{G}$ are l.s.c., convex functionals and $\Lambda\in \mathcal{L}(V, Z)$ is a continuous linear operator between two Banach spaces. Thanks to peculiar properties of $\mathcal{F}$ and $\mathcal{G}$ in our situation, an augmented Lagrangian method is applied effectively in the same spirit as Benamou & Carlier (2015) and Benamou et al. (2017). We show that, for the computation, we just need to solve linear equations (with a symmetric positive definite coefficient matrix) or to update explicit formulations. It is worth noting that this method uses only elementary operations without evaluating $d_{F}$. This article is organized as follows: In the next section, we introduce the optimal partial transport problem and its equivalent formulations, with particular attention to the Kantorovich dual formulation. In Section 3, we give a finite-dimensional approximation of the problem, and show that primal-dual solutions of the discretized problems converge to those of the original continuous problems. The details of the ALG2 algorithm are given in Section 4. Some numerical examples are presented in Section 5.
We terminate the article by an appendix, where we give proofs of some facts we need in this article.

2. Partial transport and its equivalent formulations

Let $\Omega$ be a connected bounded Lipschitz domain and $F$ be a continuous Finsler metric on $\overline{\Omega}$, i.e., $F: \overline{\Omega}\times \mathbb{R}^N \longrightarrow [0, +\infty)$ is continuous and $F(x, .)$ is convex, positively homogeneous of degree $1$ in the sense

$$ F(x,tv)=tF(x,v)\quad \forall t>0,\ v\in\mathbb{R}^N. $$

We assume moreover that $F$ is nondegenerate in the sense that there exist positive constants $M_{1}, M_{2}$ such that

$$ M_{1}|v|\le F(x,v)\le M_{2}|v| \quad \forall x\in\overline{\Omega},\ v\in\mathbb{R}^N. $$

Let $\mu, \nu\in \mathcal M^+_{b}(\overline{\Omega})$ be two Radon measures on $\overline{\Omega}$ and $\mathbf{m}_{\max}:=\min\{\mu(\overline{\Omega}), \nu(\overline{\Omega})\}.$ Given a total mass $\mathbf{m}\in [0, \mathbf{m}_{\max}]$, the optimal partial transport problem (or partial Monge–Kantorovich problem, PMK for short) aims to transport effectively the total mass $\mathbf{m}$ from a supply subregion of the source $\mu$ into a subregion of the target $\nu$. The set of subregions of mass $\mathbf{m}$ is given by

$$ Sub_{\mathbf{m}}(\mu,\nu):=\big\{(\rho_{0},\rho_{1})\in\mathcal M^+_{b}(\overline{\Omega})\times\mathcal M^+_{b}(\overline{\Omega}):\ \rho_{0}\le\mu,\ \rho_{1}\le\nu,\ \rho_{0}(\overline{\Omega})=\rho_{1}(\overline{\Omega})=\mathbf{m}\big\}. $$

An element $(\rho_{0}, \rho_{1})\in Sub_{\mathbf{m}}(\mu, \nu)$ is called a couple of active regions. As for the optimal transport, one can work with different kinds of cost functions for the optimal partial transport, i.e., in the formulation (2.1) below, $d_{F}(x, y)$ can be replaced by a general measurable cost function $c(x, y)$. However, in this article, we focus on the case where the cost $c=d_{F}$. So let us state the problem directly for $d_{F}$.
The PMK problem (Barrett & Prigozhin, 2009; Caffarelli & McCann, 2010; Figalli, 2010; Igbida & Nguyen, 2017) aims to minimize

$$ \min\Big\{\mathcal K(\gamma):=\int_{\overline{\Omega}\times\overline{\Omega}} d_{F}(x,y)\,\mathrm{d}\gamma \;:\; \gamma\in\pi_{\mathbf{m}}(\mu,\nu)\Big\}, \tag{2.1} $$

where $d_{F}$ is the Finsler distance on $\overline{\Omega}$ associated with $F$, i.e.,

$$ d_{F}(x,y):=\inf\Big\{\int_0^1 F(\xi(t),\dot{\xi}(t))\,\mathrm{d}t:\ \xi(0)=x,\ \xi(1)=y,\ \xi\in \mathrm{Lip}([0,1];\overline{\Omega})\Big\}, $$

and $\pi_{\mathbf{m}}(\mu, \nu)$ is the set of transport plans of mass $\mathbf{m}$, i.e.,

$$ \pi_{\mathbf{m}}(\mu,\nu):=\big\{\gamma\in\mathcal M^+_{b}(\overline{\Omega}\times\overline{\Omega}):\ (\pi_x\#\gamma,\ \pi_y\#\gamma)\in Sub_{\mathbf{m}}(\mu,\nu)\big\}. $$

Here, $\pi_x \#\gamma$ and $\pi_y \# \gamma$ are the first and second marginals of $\gamma$. An optimal $\gamma^*$ is called an optimal plan and $(\pi_x \#\gamma^*, \pi_y \# \gamma^*)$ is called a couple of optimal active regions. Following Igbida & Nguyen (2017), to study the PMK problem we use its dual problem, which we call the dual partial Monge–Kantorovich problem (DPMK). To this aim, we consider $\mathrm{Lip}_{d_{F}}$, the set of $1$-Lipschitz continuous functions w.r.t. $d_{F}$, given by

$$ \mathrm{Lip}_{d_{F}}:=\big\{u:\overline{\Omega}\longrightarrow\mathbb R \;\big|\; u(y)-u(x)\le d_{F}(x,y)\ \forall x,y\in\overline{\Omega}\big\}. $$

Then, the connection between the PMK and DPMK problems is summarized in the following theorem.

Theorem 2.1 Let $\mu, \nu\in\mathcal M^+_b(\overline{\Omega})$ be Radon measures and $\mathbf{m}\in [0,\mathbf{m}_{\max}]$. The partial Monge–Kantorovich problem has a solution $\sigma^*\in \pi_{\mathbf{m}}(\mu, \nu)$ and

$$ \mathcal K(\sigma^*)=\max\Big\{\mathcal D(\lambda,u):=\int_{\overline{\Omega}} u\,\mathrm{d}(\nu-\mu)+\lambda(\mathbf{m}-\nu(\overline{\Omega})):\ \lambda\ge 0 \text{ and } u\in L^\lambda_{d_{F}}\Big\}, \tag{2.2} $$

where

$$ L^\lambda_{d_{F}}:=\big\{u\in \mathrm{Lip}_{d_{F}}:\ 0\le u(x)\le\lambda \text{ for any } x\in\overline{\Omega}\big\}. $$

Moreover, $\sigma\in \pi_{\mathbf{m}}(\mu, \nu)$ and $(\lambda, u)\in \mathbb{R}^+\times L^\lambda_{d_{F}}$ are solutions, respectively, if and only if

$$ u(x)=0 \ \text{ for } (\mu-\pi_x\#\sigma)\text{-a.e. } x\in\overline{\Omega},\qquad u(x)=\lambda \ \text{ for } (\nu-\pi_y\#\sigma)\text{-a.e. } x\in\overline{\Omega}, $$

$$ \text{and }\ u(y)-u(x)=d_{F}(x,y) \ \text{ for } \sigma\text{-a.e. } (x,y)\in\overline{\Omega}\times\overline{\Omega}. $$

Proof.
The proof follows in the same way as that of Theorem 2.4 in Igbida & Nguyen (2017), where the authors study the case $\Omega=\mathbb{R}^N.$ □

The DPMK problem (2.2) contains all the information concerning the optimal partial mass transportation. However, for the numerical approximation of the optimal partial transportation, and to use the augmented Lagrangian method, we need to rewrite the problem into the form

$$ \inf_{\phi\in V}\ \mathcal F(\phi)+\mathcal G(\Lambda\phi). $$

To do that, we consider the polar function $F^*$ of $F$, which is defined by

$$ F^*(x,p):=\sup\{\langle v,p\rangle:\ F(x,v)\le 1\}\quad \text{for } x\in\overline{\Omega},\ p\in\mathbb{R}^N. $$

Note that $F^*(x, .)$ is not the Legendre–Fenchel transform. It is easy to see that $F^*$ is also a continuous, nondegenerate Finsler metric on $\overline{\Omega}$ and

$$ \langle v,p\rangle\le F^*(x,p)\,F(x,v)\quad\forall x\in\overline{\Omega},\ v,p\in\mathbb{R}^N. $$

Remark 2.2 Using the polar function $F^*$, we can characterize the set $\mathrm{Lip}_{d_{F}}$ as (see the appendix if necessary)

$$ \mathrm{Lip}_{d_{F}}=\big\{u:\overline{\Omega}\longrightarrow\mathbb R \;\big|\; u \text{ is Lipschitz continuous and } F^*(x,\nabla u(x))\le 1 \text{ a.e. } x\in\Omega\big\}. $$

Thanks to this remark, the DPMK problem (2.2) can be written as

$$ \max\big\{\mathcal D(\lambda,u):\ 0\le u(x)\le\lambda,\ u \text{ is Lipschitz continuous},\ F^*(x,\nabla u(x))\le 1 \text{ a.e. } x\in\Omega\big\}. $$

Moreover, we have

Theorem 2.3 Under the assumptions of Theorem 2.1, setting $V:=\mathbb{R}\times C^1(\overline{\Omega})$ and $Z:=C(\overline{\Omega})^{N}\times C(\overline{\Omega})\times C(\overline{\Omega}),$ we have

$$ \mathcal K(\sigma^*)=-\inf\big\{\mathcal F(\lambda,u)+\mathcal G(\Lambda(\lambda,u)):\ (\lambda,u)\in V\big\}, \tag{2.3} $$

where $\Lambda\in \mathcal{L}(V, Z)$ is given by

$$ \Lambda(\lambda,u):=(\nabla u,\,-u,\,u-\lambda)\quad\forall(\lambda,u)\in V, $$

and $\mathcal{F}: V\longrightarrow (-\infty, +\infty]$, $\mathcal{G}: Z\longrightarrow (-\infty, +\infty]$ are the l.s.c. convex functions given by

$$ \mathcal F(\lambda,u):=-\int_{\overline{\Omega}}u\,\mathrm{d}(\nu-\mu)-\lambda(\mathbf{m}-\nu(\overline{\Omega}))\quad\forall(\lambda,u)\in V; $$

$$ \mathcal G(q,z,w):=\begin{cases}0 & \text{if } z(x)\le 0,\ w(x)\le 0,\ F^*(x,q(x))\le 1\ \forall x\in\overline{\Omega},\\ +\infty & \text{otherwise},\end{cases}\quad\text{for }(q,z,w)\in Z. $$

To prove this theorem, we need the following lemma.

Lemma 2.4 Let $\lambda\ge 0$ be fixed.
For any $u\in L^\lambda_{d_{F}}$, there exists a sequence of smooth functions $u_{\varepsilon}\in C^\infty_{c}(\mathbb{R}^N) \cap L^\lambda_{d_{F}}$ such that $u_{\varepsilon} \rightrightarrows u$ uniformly on $\overline{\Omega}$.

The result of the lemma is more or less known in some cases (see Igbida & Ta Thi, 2017 for the case where the function $u$ is null on the boundary). The proof in the general case is quite technical and will be given in the appendix.

Proof of Theorem 2.3. Thanks to Remark 2.2 and Lemma 2.4, we have

$$ -\inf_{(\lambda,u)\in V}\mathcal F(\lambda,u)+\mathcal G(\Lambda(\lambda,u)) = \sup\Big\{\int_{\overline{\Omega}}u\,\mathrm{d}(\nu-\mu)+\lambda(\mathbf{m}-\nu(\overline{\Omega})):\ \lambda\ge 0,\ u\in C^1(\overline{\Omega})\cap L^\lambda_{d_{F}}\Big\} = \max\big\{\mathcal D(\lambda,u):\ \lambda\ge 0 \text{ and } u\in L^\lambda_{d_{F}}\big\}. $$

Using the duality (2.2), the proof is completed. □

To end this section, we prove the following result, which will be useful for the proof of the convergence of our discretization.

Theorem 2.5 Under the assumptions of Theorem 2.1, we have

$$ -\inf_{(\lambda,u)\in V}\mathcal F(\lambda,u)+\mathcal G(\Lambda(\lambda,u)) = \min\Big\{\int_{\overline{\Omega}}F\Big(x,\frac{\Phi}{|\Phi|}(x)\Big)\,\mathrm{d}|\Phi| :\ (\Phi,\theta_{0},\theta_{1})\in\Psi_{\mathbf{m}}(\mu,\nu)\Big\}, \tag{2.4} $$

where

$$ \Psi_{\mathbf{m}}(\mu,\nu):=\Big\{(\Phi,\theta_{0},\theta_{1})\in Z^*=\mathcal M_b(\overline{\Omega})^N\times\mathcal M_b(\overline{\Omega})\times\mathcal M_b(\overline{\Omega}):\ \theta_{0}\ge 0,\ \theta_{1}\ge 0,\ \theta_{1}(\overline{\Omega})=\nu(\overline{\Omega})-\mathbf{m} $$

$$ \text{and } -\nabla\cdot\Phi=\nu-\theta_{1}-(\mu-\theta_{0}) \text{ with } \Phi\cdot n=0 \text{ on }\partial\Omega\Big\}. $$

Actually, the minimal flow-type formulation

$$ \min\Big\{\int_{\overline{\Omega}}F\Big(x,\frac{\Phi}{|\Phi|}(x)\Big)\,\mathrm{d}|\Phi| :\ (\Phi,\theta_{0},\theta_{1})\in\Psi_{\mathbf{m}}(\mu,\nu)\Big\} \tag{2.5} $$

introduces the Beckmann problem (see Beckmann, 1952) for the optimal partial transport with Finsler distance costs. Note that in the balanced case, i.e., $\mathbf{m}=\mu(\overline{\Omega})=\nu(\overline{\Omega})$, the formulation (2.5) becomes

$$ \min\Big\{\int_{\overline{\Omega}}F\Big(x,\frac{\Phi}{|\Phi|}(x)\Big)\,\mathrm{d}|\Phi| :\ \Phi\in\mathcal M_b(\overline{\Omega})^N,\ -\nabla\cdot\Phi=\nu-\mu \text{ with }\Phi\cdot n=0\text{ on }\partial\Omega\Big\}. \tag{2.6} $$

An optimal solution $\Phi$ of the problem (2.6) is called an optimal flow transporting $\mu$ onto $\nu$. As known from optimal transport theory, the optimal flow gives a way to visualize the transportation. To prove Theorem 2.5, we will use well-known duality arguments. For convenience, let us recall here the Fenchel–Rockafellar duality.
Let us consider the problem

$$\inf_{\phi\in V}\mathcal{F}(\phi)+\mathcal{G}(\Lambda\phi),$$  (2.7)

where $$\mathcal{F}:V\longrightarrow(-\infty,+\infty]$$ and $$\mathcal{G}:Z\longrightarrow(-\infty,+\infty]$$ are convex, l.s.c. and $$\Lambda\in\mathcal{L}(V,Z)$$, the space of continuous linear maps from $$V$$ to $$Z$$. Denoting by $$\mathcal{F}^*$$ and $$\mathcal{G}^*$$ the conjugate functions (given by the Legendre–Fenchel transformation) of $$\mathcal{F}$$ and $$\mathcal{G}$$, respectively, and by $$\Lambda^*$$ the adjoint operator of $$\Lambda$$, it is not difficult to see that

$$\sup_{\sigma\in Z^*}\big(-\mathcal{F}^*(-\Lambda^*\sigma)-\mathcal{G}^*(\sigma)\big)\le\inf_{\phi\in V}\mathcal{F}(\phi)+\mathcal{G}(\Lambda\phi),$$

where $$Z^*$$ is the topological dual space associated with $$Z$$. This is the so-called weak duality. For the strong duality, which corresponds to equality, we have the following well-known result.

Proposition 2.6 (cf. Ekeland & Temam, 1976) In addition, assume that there exists $$\phi_0$$ such that $$\mathcal{F}(\phi_0)<+\infty$$ and $$\mathcal{G}(\Lambda\phi_0)<+\infty$$, with $$\mathcal{G}$$ continuous at $$\Lambda\phi_0$$. Then the Fenchel–Rockafellar dual problem

$$\sup_{\sigma\in Z^*}\big(-\mathcal{F}^*(-\Lambda^*\sigma)-\mathcal{G}^*(\sigma)\big)$$  (2.8)

has at least one solution $$\sigma\in Z^*$$ and $$\inf$$ (2.7) = $$\max$$ (2.8). Moreover, in this case, $$\phi$$ is a solution to the primal problem (2.7) if and only if

$$-\Lambda^*\sigma\in\partial\mathcal{F}(\phi)\quad\text{and}\quad\sigma\in\partial\mathcal{G}(\Lambda\phi).$$  (2.9)

Proof of Theorem 2.5. We work with the uniform convergence for the spaces $$C(\overline{\Omega})^N$$, $$C(\overline{\Omega})$$ and the norm $$\|u\|_{C^1}:=\max\{\|u\|_\infty,\|\nabla u\|_\infty\}$$ for $$C^1(\overline{\Omega})$$. It is not difficult to see that the hypotheses of Proposition 2.6 are satisfied. Now, let us compute the Fenchel–Rockafellar dual problem of (2.3).
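As a concrete illustration of the duality framework (a one-dimensional toy instance of ours, independent of the computation that follows), take $$\mathcal{F}(\phi)=(\phi-c)^2/2$$, $$\mathcal{G}(q)=|q|$$ and $$\Lambda$$ the identity; then $$\mathcal{F}^*(y)=y^2/2+cy$$ and $$\mathcal{G}^*$$ is the indicator of $$[-1,1]$$. A grid search confirms weak duality and, in accordance with Proposition 2.6, the absence of a duality gap:

```python
import numpy as np

# Toy instance of (2.7)/(2.8) in 1D (our example, not from the paper):
#   F(phi) = (phi - c)^2 / 2,  G(q) = |q|,  Lambda = identity.
# Conjugates: F*(y) = y^2/2 + c*y;  G*(sigma) = 0 if |sigma| <= 1, +inf else.
c = 3.0
phi = np.linspace(-10, 10, 200001)
primal = ((phi - c) ** 2 / 2 + np.abs(phi)).min()

sigma = np.linspace(-1, 1, 20001)                 # effective domain of G*
dual = (-((-sigma) ** 2 / 2 + c * (-sigma)) - 0.0).max()

assert dual <= primal + 1e-9                      # weak duality
assert abs(primal - dual) < 1e-6                  # strong duality (both 2.5)
```

The primal minimizer is the soft-thresholding of $$c$$ and the dual maximizer sits on the boundary of the constraint, mirroring the optimality relations (2.9).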
Since $$\mathcal{F}$$ is linear, $$\mathcal{F}^*(-\Lambda^*(\Phi,\theta^0,\theta^1))$$ is finite (and then always equal to $$0$$) if and only if

$$-\Lambda^*(\Phi,\theta^0,\theta^1)=-(\mathbf{m}-\nu(\overline{\Omega}),\,\nu-\mu)\ \text{ in } V^*,$$

i.e.,

$$\langle\Phi,\nabla u\rangle-\langle\theta^0,u\rangle+\langle\theta^1,u-\lambda\rangle=\lambda(\mathbf{m}-\nu(\overline{\Omega}))+\langle\nu-\mu,u\rangle\quad\forall(\lambda,u)\in V.$$

This implies that

$$\int_{\overline{\Omega}}\nabla u\,{\rm d}\Phi=\int_{\overline{\Omega}} u\,{\rm d}(\nu-\theta^1)-\int_{\overline{\Omega}} u\,{\rm d}(\mu-\theta^0)\quad\text{for all } u\in C^1(\overline{\Omega})$$

and

$$-\lambda\int_{\overline{\Omega}}{\rm d}\theta^1=\lambda(\mathbf{m}-\nu(\overline{\Omega}))\quad\forall\lambda\in\mathbb{R}.$$

These mean that

$$-\nabla\cdot\Phi=\nu-\theta^1-(\mu-\theta^0)\ \text{ with } \Phi\cdot n=0 \text{ on } \partial\Omega$$

and

$$\theta^1(\overline{\Omega})=\nu(\overline{\Omega})-\mathbf{m}.$$

We also have

$$\mathcal{G}^*(\Phi,\theta^0,\theta^1)=\begin{cases}\int_{\overline{\Omega}} F\big(x,\frac{\Phi}{|\Phi|}(x)\big)\,{\rm d}|\Phi| & \text{if } \theta^0\ge 0,\ \theta^1\ge 0\\ +\infty & \text{otherwise}\end{cases}\quad\text{for any }(\Phi,\theta^0,\theta^1)\in Z^*.$$

Then the proof follows by Proposition 2.6. □

Remark 2.7 The optimality relations (2.9) read

$$\begin{cases}-\nabla\cdot\Phi=\nu-\theta^1-(\mu-\theta^0)\ \text{ and } \Phi\cdot n=0 \text{ on } \partial\Omega\\ \theta^1(\overline{\Omega})=\nu(\overline{\Omega})-\mathbf{m}\\ \langle\Phi,\nabla u\rangle\ge\langle\Phi,q\rangle\quad\forall q\in C(\overline{\Omega})^N,\ F^*(x,q(x))\le 1\ \forall x\in\overline{\Omega}\\ \lambda\in\mathbb{R}_+,\ u\in C^1(\overline{\Omega})\cap L^\lambda_{d_F}\\ u=0,\ \theta^0\text{-a.e. in } \overline{\Omega}\\ u=\lambda,\ \theta^1\text{-a.e. in } \overline{\Omega}.\end{cases}$$

In fact, the optimality condition $$-\Lambda^*\sigma\in\partial\mathcal{F}(\phi)$$ gives the first two equations and $$\sigma\in\partial\mathcal{G}(\Lambda\phi)$$ gives the last four. Moreover, if $$\Phi\in L^1(\Omega)^N$$, then the condition

$$\langle\Phi,\nabla u\rangle\ge\langle\Phi,q\rangle\quad\forall q\in C(\overline{\Omega})^N,\ F^*(x,q(x))\le 1\ \forall x\in\overline{\Omega}$$

can be replaced by

$$F(x,\Phi(x))=\langle\nabla u(x),\Phi(x)\rangle\ \text{ a.e. } x\in\Omega.$$  (2.10)

However, it is not clear in general that $$\Phi$$ belongs to $$L^1(\Omega)^N$$. In the case where $$\Omega$$ is convex and $$F(x,v):=|v|$$ is the Euclidean norm (or some other uniformly convex and smooth norm), $$L^p$$ regularity results are known under suitable assumptions on $$\mu$$ and $$\nu$$ (see, e.g., Feldman & McCann, 2002; De Pascale et al., 2004; De Pascale & Pratelli, 2004; Santambrogio, 2009). To our knowledge, the case of general Finsler metrics is still an open question. In the case where $$\Phi$$ is a vector-valued measure, the condition (2.10) should be adapted to the tangential gradient.
Rigorous formulations using the tangential gradient with respect to a measure, as well as rigorous proofs in the general case, can be found in the article by Igbida & Nguyen (2017) with $$\Omega=\mathbb{R}^N$$. It is expected that $$\theta^0\le\mu$$ and $$\theta^1\le\nu$$ for optimal solutions $$(\Phi,\theta^0,\theta^1)$$ of the minimal flow formulation (2.5). This is the case whenever $$\mathbf{m}\in[(\mu\wedge\nu)(\overline{\Omega}),\mathbf{m}_{\max}]$$, where $$\mu\wedge\nu$$ is the common mass measure of $$\mu$$ and $$\nu$$; i.e., if $$\mu,\nu\in L^1(\Omega)$$, then $$\mu\wedge\nu\in L^1(\Omega)$$ and

$$(\mu\wedge\nu)(x)=\min\{\mu(x),\nu(x)\}\quad\text{for a.e. } x\in\Omega.$$

In general, the measure $$\mu\wedge\nu$$ is defined by (see Ambrosio et al., 2000)

$$(\mu\wedge\nu)(A)=\inf\{\mu(A_1)+\nu(A_2) : \text{disjoint Borel sets } A_1, A_2 \text{ such that } A_1\cup A_2=A\}.$$

Proposition 2.8 Let $$\mathbf{m}\in[(\mu\wedge\nu)(\overline{\Omega}),\mathbf{m}_{\max}]$$ and $$(\Phi,\theta^0,\theta^1)\in Z^*$$ be an optimal solution of (2.5). Then $$\theta^0\le\mu$$ and $$\theta^1\le\nu$$. Moreover, $$(\mu-\theta^0,\nu-\theta^1)$$ is a couple of optimal active regions and $$\Phi$$ is an optimal flow transporting $$\mu-\theta^0$$ onto $$\nu-\theta^1$$.

Proof. The proof follows in the same way as Theorem 5.21 and Corollary 5.20 in Igbida & Nguyen (2017). □

Our next task is to compute an approximation of $$\Phi$$ (in fact, approximations of $$\Phi, u, \lambda, \theta^0, \theta^1$$). To do that, we will apply an augmented Lagrangian method to the DPMK problem (2.2).

3. Discretization and convergence

Coming back to the DPMK problem (2.2), our aim now is to derive, by using a finite element approximation, the discretized problem associated with (2.2). To begin with, let us consider regular triangulations $$\mathcal{T}_h$$ of $$\overline{\Omega}$$. For a fixed integer $$k\ge 1$$, $$P_k$$ is the set of polynomials of degree less than or equal to $$k$$.
Let $$E_h\subset H^1(\Omega)$$ be the space of continuous functions on $$\overline{\Omega}$$ belonging to $$P_k$$ on each triangle of $$\mathcal{T}_h$$. We denote by $$Y_h$$ the space of vector-valued functions whose restrictions belong to $$(P_{k-1})^N$$ on each triangle of $$\mathcal{T}_h$$. Let $$f=\nu-\mu$$ and $$f_h\in E_h$$ be such that $$\{f_h\}$$ converges weakly* to $$f$$ in $$\mathcal{M}_b(\overline{\Omega})$$. Considering the finite-dimensional spaces

$$V_h=\mathbb{R}\times E_h,\qquad Z_h=Y_h\times E_h\times E_h,$$

we set

$$\Lambda_h(\lambda,u):=(\nabla u,\,-u,\,u-\lambda)\in Z_h\quad\text{for }(\lambda,u)\in V_h,$$
$$\mathcal{F}_h(\lambda,u):=-\langle u,f_h\rangle-\lambda(\mathbf{m}-\nu(\overline{\Omega}))\quad\forall(\lambda,u)\in V_h$$

and

$$\mathcal{G}_h(q,z,w):=\begin{cases}0 & \text{if } z\le 0,\ w\le 0,\ F^*(x,q(x))\le 1 \text{ for a.e. } x\in\Omega\\ +\infty & \text{otherwise}\end{cases}\quad\text{for }(q,z,w)\in Z_h.$$

Then the finite-dimensional approximation of (2.2) reads

$$\inf_{(\lambda,u)\in V_h}\mathcal{F}_h(\lambda,u)+\mathcal{G}_h(\Lambda_h(\lambda,u)).$$  (3.1)

The following result shows that this is a suitable approximation of (2.2).

Theorem 3.1 Assume that $$\mathbf{m}<\nu(\overline{\Omega})$$. Let $$(\lambda_h,u_h)\in V_h$$ be an optimal solution to the approximated problem (3.1) and $$(\Phi_h,\theta^0_h,\theta^1_h)$$ be an optimal dual solution to (3.1). Then, up to a subsequence, $$(\lambda_h,u_h)$$ converges in $$\mathbb{R}\times C(\overline{\Omega})$$ to $$(\lambda,u)$$, an optimal solution of the DPMK problem (2.2), and $$(\Phi_h,\theta^0_h,\theta^1_h)$$ converges weakly* in $$\mathcal{M}_b(\overline{\Omega})^N\times\mathcal{M}_b(\overline{\Omega})\times\mathcal{M}_b(\overline{\Omega})$$ to $$(\Phi,\theta^0,\theta^1)$$, an optimal solution of (2.5).

Proof. Since $$\mathbf{m}<\nu(\overline{\Omega})$$, $$\{\lambda_h\}$$ is bounded in $$\mathbb{R}$$ and $$\{u_h\}$$ is bounded in $$(C(\overline{\Omega}),\|.\|_\infty)$$. From the nondegeneracy of $$F$$ and the definitions of $$\mathcal{F}_h,\mathcal{G}_h,\Lambda_h$$, we have that $$\{u_h\}$$ is equi-Lipschitz and

$$u_h(y)-u_h(x)\le d_F(x,y)\quad\forall x,y\in\overline{\Omega}.$$
Using the Ascoli–Arzelà theorem, up to a subsequence, $$u_h\rightrightarrows u$$ uniformly on $$\overline{\Omega}$$ and $$\lambda_h\to\lambda$$. Obviously, $$\lambda\ge 0$$ and $$u\in L^\lambda_{d_F}$$. Now, by the optimality of $$(\lambda_h,u_h)$$ and of $$(\Phi_h,\theta^0_h,\theta^1_h)$$, we have

$$-\Lambda_h^*(\Phi_h,\theta^0_h,\theta^1_h)=-(\mathbf{m}-\nu(\overline{\Omega}),f_h)\ \text{ in } V_h^*$$

and

$$\mathcal{F}_h(\lambda_h,u_h)+\mathcal{G}_h(\Lambda_h(\lambda_h,u_h))=-\mathcal{F}_h^*(-\Lambda_h^*(\Phi_h,\theta^0_h,\theta^1_h))-\mathcal{G}_h^*(\Phi_h,\theta^0_h,\theta^1_h).$$

More concretely,

$$\langle\Phi_h,\nabla v\rangle-\langle\theta^0_h,v\rangle+\langle\theta^1_h,v-s\rangle=s(\mathbf{m}-\nu(\overline{\Omega}))+\langle f_h,v\rangle\quad\forall(s,v)\in V_h,$$  (3.2)

$$\theta^0_h\ge 0,\quad\theta^1_h\ge 0,\quad\theta^1_h(\overline{\Omega})=\nu(\overline{\Omega})-\mathbf{m}$$  (3.3)

and

$$\langle u_h,f_h\rangle+\lambda_h(\mathbf{m}-\nu(\overline{\Omega}))=\sup\{\langle q,\Phi_h\rangle : q\in Y_h,\ F^*(x,q(x))\le 1 \text{ a.e. } x\in\Omega\}.$$  (3.4)

In (3.2), taking $$v=0$$ and $$s=1$$ (respectively, $$v=s=1$$), we see that $$\{\theta^1_h\}$$ (respectively, $$\{\theta^0_h\}$$) is bounded in $$\mathcal{M}_b(\overline{\Omega})$$. Moreover, using (3.4) and the boundedness of $$(\lambda_h,u_h)$$, we deduce that $$\{\Phi_h\}$$ is bounded in $$\mathcal{M}_b(\overline{\Omega})^N$$. So, up to a subsequence,

$$(\Phi_h,\theta^0_h,\theta^1_h)\rightharpoonup(\Phi,\theta^0,\theta^1)\ \text{ in } \mathcal{M}_b(\overline{\Omega})^N\times\mathcal{M}_b(\overline{\Omega})\times\mathcal{M}_b(\overline{\Omega})\text{-}w^*.$$

Using (3.2) and (3.3), it is clear that $$(\Phi,\theta^0,\theta^1)$$ satisfies

$$\langle\Phi,\nabla v\rangle-\langle\theta^0,v\rangle+\langle\theta^1,v-s\rangle=s(\mathbf{m}-\nu(\overline{\Omega}))+\langle f,v\rangle\quad\forall(s,v)\in V$$

and

$$\theta^0\ge 0,\quad\theta^1\ge 0,\quad\theta^1(\overline{\Omega})=\nu(\overline{\Omega})-\mathbf{m},$$

i.e., $$(\Phi,\theta^0,\theta^1)$$ is feasible for the minimal flow problem (2.5). Next, let us show the optimality of $$(\lambda,u)$$ and of $$(\Phi,\theta^0,\theta^1)$$, i.e.,

$$\int_{\overline{\Omega}} F\Big(x,\frac{\Phi}{|\Phi|}(x)\Big)\,{\rm d}|\Phi|=\langle u,\nu-\mu\rangle+\lambda(\mathbf{m}-\nu(\overline{\Omega})).$$  (3.5)

We fix $$q\in C(\overline{\Omega})^N$$ such that $$F^*(x,q(x))\le 1\ \forall x\in\overline{\Omega}$$ and we consider $$q_h\in Y_h$$ such that $$\|q_h-q\|_{L^\infty(\Omega)}\to 0$$ as $$h\to 0$$. We see that

$$F^*(x,q_h(x))=F^*(x,q(x))+F^*(x,q_h(x))-F^*(x,q(x))\le 1+O(h)\ \text{ a.e. } x\in\Omega.$$

By taking $$\frac{q_h}{1+O(h)}$$, we can assume that $$q_h\in Y_h$$, $$F^*(x,q_h(x))\le 1$$ a.e. $$x\in\Omega$$ and $$\|q_h-q\|_{L^\infty(\Omega)}\to 0$$ as $$h\to 0$$. Using (3.4), we have

$$\langle q,\Phi\rangle=\langle q_h,\Phi_h\rangle+\langle q,\Phi-\Phi_h\rangle+\langle q-q_h,\Phi_h\rangle\le\sup\{\langle q_h,\Phi_h\rangle : q_h\in Y_h,\ F^*(x,q_h(x))\le 1 \text{ a.e. } x\in\Omega\}+O(h)=\langle u_h,f_h\rangle+\lambda_h(\mathbf{m}-\nu(\overline{\Omega}))+O(h).$$

Letting $$h\to 0$$, we get

$$\langle q,\Phi\rangle\le\langle u,\nu-\mu\rangle+\lambda(\mathbf{m}-\nu(\overline{\Omega}))\quad\text{for any } q\in C(\overline{\Omega})^N,\ F^*(x,q(x))\le 1\ \forall x\in\overline{\Omega}.$$

Taking the supremum in $$q$$, we obtain

$$\int_{\overline{\Omega}} F\Big(x,\frac{\Phi}{|\Phi|}(x)\Big)\,{\rm d}|\Phi|\le\langle u,\nu-\mu\rangle+\lambda(\mathbf{m}-\nu(\overline{\Omega})).$$

At last, thanks to the duality equality (2.4), this implies (3.5) and the optimality of $$(\lambda,u)$$ and of $$(\Phi,\theta^0,\theta^1)$$. □

Remark 3.2 In the case $$\mathbf{m}=\mathbf{m}_{\max}$$ (called the unbalanced transport), the DPMK problem has a simpler formulation. So, for the purpose of implementation, we distinguish two cases: the partial transport and the unbalanced transport. In the unbalanced case, assuming that $$\mathbf{m}=\mathbf{m}_{\max}=\nu(\overline{\Omega})$$ (i.e., $$\mu(\overline{\Omega})\ge\nu(\overline{\Omega})$$), the DPMK problem (2.2) can be written as

$$\max_{u\in Lip_{d_F},\,u\ge 0}\int_{\overline{\Omega}} u\,{\rm d}(\nu-\mu).$$  (3.6)

By using $$V_h=E_h$$, $$Z_h=Y_h\times E_h$$, $$\Lambda_h u=(\nabla u,-u)$$ and

$$\mathcal{G}_h(q,z)=\begin{cases}0 & \text{if } z\le 0,\ F^*(x,q(x))\le 1 \text{ a.e. } x\in\Omega\\ +\infty & \text{otherwise,}\end{cases}$$

a finite-dimensional approximation is given by

$$\inf_{u\in V_h}-\langle u,f_h\rangle+\mathcal{G}_h(\Lambda_h u).$$  (3.7)

As in Theorem 3.1, we can prove the convergence of this finite-dimensional approximation to the original problem (3.6). More precisely, we have

Proposition 3.3 Assume that $$\mathbf{m}=\nu(\overline{\Omega})$$. Let $$u_h\in V_h$$ be an optimal solution to the approximated problem (3.7) and $$(\Phi_h,\theta^0_h)$$ be an optimal dual solution to (3.7). Then, up to a subsequence and translation by a constant, $$u_h$$ converges to $$u$$, an optimal solution of the DPMK problem (3.6), and $$(\Phi_h,\theta^0_h)$$ converges to $$(\Phi,\theta^0)$$, an optimal solution of (2.5) with $$\theta^1=0$$.

The proof of this proposition is similar to that of Theorem 3.1.

4.
Solving the discretized problems

Our task now is to solve the finite-dimensional problems (3.1) and (3.7). First, let us recall the augmented Lagrangian method we are dealing with.

4.1 ALG2 method

Assume that $$V$$ and $$Z$$ are two Hilbert spaces. Let us consider the problem

$$\inf_{\phi\in V}\mathcal{F}(\phi)+\mathcal{G}(\Lambda\phi),$$  (4.1)

where $$\mathcal{F}:V\longrightarrow(-\infty,+\infty]$$ and $$\mathcal{G}:Z\longrightarrow(-\infty,+\infty]$$ are convex, l.s.c. and $$\Lambda\in\mathcal{L}(V,Z)$$. We introduce a new variable $$q\in Z$$ into the primal problem (4.1) and rewrite it in the form

$$\inf_{(\phi,q)\in V\times Z:\,\Lambda\phi=q}\mathcal{F}(\phi)+\mathcal{G}(q).$$

The augmented Lagrangian is given by

$$L(\phi,q;\sigma):=\mathcal{F}(\phi)+\mathcal{G}(q)+\langle\sigma,\Lambda\phi-q\rangle+\frac{r}{2}|\Lambda\phi-q|^2,\quad r>0.$$

The so-called ALG2 algorithm is given as follows: for given $$q_0,\sigma_0\in Z$$, we construct the sequences $$\{\phi_i\},\{q_i\}$$ and $$\{\sigma_i\}$$, $$i=1,2,\ldots,$$ by

Step 1: Minimize $$\inf_\phi L(\phi,q_i;\sigma_i)$$, i.e.,

$$\phi_{i+1}\in\operatorname*{argmin}_{\phi\in V}\Big\{\mathcal{F}(\phi)+\langle\sigma_i,\Lambda\phi\rangle+\frac{r}{2}|\Lambda\phi-q_i|^2\Big\}.$$

Step 2: Minimize $$\inf_{q\in Z} L(\phi_{i+1},q;\sigma_i)$$, i.e.,

$$q_{i+1}\in\operatorname*{argmin}_{q\in Z}\Big\{\mathcal{G}(q)-\langle\sigma_i,q\rangle+\frac{r}{2}|\Lambda\phi_{i+1}-q|^2\Big\}.$$

Step 3: Update the multiplier $$\sigma$$:

$$\sigma_{i+1}=\sigma_i+r(\Lambda\phi_{i+1}-q_{i+1}).$$

For the theory of this method and its interpretation, we refer the reader to Gabay & Mercier (1976), Glowinski et al. (1981), Fortin & Glowinski (1983), Glowinski & Le Tallec (1989) and Eckstein & Bertsekas (1992). Here, we recall the convergence result of this method, which suffices for our discretized problems.

Theorem 4.1 (cf. Eckstein & Bertsekas, 1992, Theorem 8) Fix $$r>0$$ and assume that $$V=\mathbb{R}^n$$, $$Z=\mathbb{R}^m$$ and that $$\Lambda$$ has full column rank. If there exists a solution to the optimality relations (2.9), then $$\{\phi_i\}$$ converges to a solution of the primal problem (2.7) and $$\{\sigma_i\}$$ converges to a solution of the dual problem (2.8). Moreover, $$\{q_i\}$$ converges to $$\Lambda\phi^*$$, where $$\phi^*$$ is the limit of $$\{\phi_i\}$$.
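The three steps above can be sketched numerically. The following toy instance (ours, not the paper's discretization) takes $$\mathcal{F}(\phi)=\frac12\|\phi-c\|^2$$, $$\mathcal{G}$$ the indicator of the nonpositive orthant and $$\Lambda$$ the identity, for which the exact minimizer is $$\min(c,0)$$ componentwise:

```python
import numpy as np

# ALG2 / ADMM sketch on a toy instance of (4.1):
#   F(phi) = ||phi - c||^2 / 2,  G = indicator of {q <= 0},  Lambda = I.
# The exact minimizer is the projection of c onto {phi <= 0}.
c = np.array([2.0, -1.0])
r = 1.0
q = np.zeros(2)
sigma = np.zeros(2)
for _ in range(200):
    # Step 1: minimize F(phi) + <sigma, phi> + (r/2)|phi - q|^2 (quadratic)
    phi = (c - sigma + r * q) / (1.0 + r)
    # Step 2: minimize G(q) - <sigma, q> + (r/2)|phi - q|^2 (a projection,
    # exactly as in the pointwise updates (4.2)-(4.4) later on)
    q = np.minimum(phi + sigma / r, 0.0)
    # Step 3: multiplier update
    sigma = sigma + r * (phi - q)

assert np.allclose(phi, np.minimum(c, 0.0), atol=1e-6)
assert np.allclose(phi, q, atol=1e-6)   # constraint Lambda*phi = q holds
```

As in the discretized problems below, Step 1 is a small quadratic solve and Step 2 reduces to an explicit projection, which is what makes the splitting attractive.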
The proof of this result in the case of finite-dimensional spaces $$V$$ and $$Z$$ can be found in Eckstein & Bertsekas (1992). The result holds true in infinite-dimensional Hilbert spaces under additional assumptions; see Fortin & Glowinski (1983) and Glowinski & Le Tallec (1989) for more details in this direction. Next, we use the ALG2 method for the discretized problems. To simplify the notation, let us drop the subscript $$h$$ in $$(\lambda_h,u_h)$$ and $$(\Phi_h,\theta^0_h,\theta^1_h)$$. Thanks to Remark 3.2, we treat separately the case where $$\mathbf{m}=\nu(\overline{\Omega})$$ and the case where $$\mathbf{m}<\nu(\overline{\Omega})$$.

4.2 Partial transport ($$\mathbf{m}<\nu(\overline{\Omega})$$)

Given $$(q_i,z_i,w_i)$$, $$(\Phi_i,\theta^0_i,\theta^1_i)$$ at the iteration $$i$$, we compute

Step 1:

$$(\lambda_{i+1},u_{i+1})\in\operatorname*{argmin}_{(\lambda,u)\in V_h}\mathcal{F}_h(\lambda,u)+\langle(\Phi_i,\theta^0_i,\theta^1_i),\Lambda_h(\lambda,u)\rangle+\frac{r}{2}|\Lambda_h(\lambda,u)-(q_i,z_i,w_i)|^2$$
$$=\operatorname*{argmin}_{(\lambda,u)\in V_h}-\langle u,f_h\rangle-\lambda(\mathbf{m}-\nu(\overline{\Omega}))+\langle\Phi_i,\nabla u\rangle+\langle\theta^0_i,-u\rangle+\langle\theta^1_i,u-\lambda\rangle+\frac{r}{2}|\nabla u-q_i|^2+\frac{r}{2}|u+z_i|^2+\frac{r}{2}|u-\lambda-w_i|^2.$$

Step 2:

$$(q_{i+1},z_{i+1},w_{i+1})\in\operatorname*{argmin}_{(q,z,w)\in Z_h}\mathcal{G}_h(q,z,w)-\langle(\Phi_i,\theta^0_i,\theta^1_i),(q,z,w)\rangle+\frac{r}{2}|\Lambda_h(\lambda_{i+1},u_{i+1})-(q,z,w)|^2$$
$$=\operatorname*{argmin}_{(q,z,w)\in Z_h} I_{[F^*(\cdot,q(\cdot))\le 1]}(q)+I_{[z\le 0]}(z)+I_{[w\le 0]}(w)-\langle\Phi_i,q\rangle-\langle\theta^0_i,z\rangle-\langle\theta^1_i,w\rangle+\frac{r}{2}|\nabla u_{i+1}-q|^2+\frac{r}{2}|u_{i+1}+z|^2+\frac{r}{2}|u_{i+1}-\lambda_{i+1}-w|^2.$$

Step 3: Update the multiplier:

$$(\Phi_{i+1},\theta^0_{i+1},\theta^1_{i+1})=(\Phi_i,\theta^0_i,\theta^1_i)+r(\nabla u_{i+1}-q_{i+1},\,-u_{i+1}-z_{i+1},\,u_{i+1}-\lambda_{i+1}-w_{i+1}).$$

Before giving numerical results, let us comment on the above iteration. Overall, Step 1 is a quadratic program, Step 2 can be computed easily in many cases and Step 3 is an explicit update. We denote by $$\text{Proj}_C(.)$$ the projection onto a closed convex subset $$C$$. In Step 1, we split the computation of the couple $$(\lambda_{i+1},u_{i+1})$$ into two steps: we first minimize w.r.t. $$u$$ to compute $$u_{i+1}$$ and then use $$u_{i+1}$$ to compute $$\lambda_{i+1}$$.
More precisely, we proceed for Step 1 as follows:

(1) For $$u_{i+1}$$,

$$u_{i+1}\in\operatorname*{argmin}_{u\in E_h}-\langle u,f_h\rangle+\langle\Phi_i,\nabla u\rangle+\langle\theta^0_i,-u\rangle+\langle\theta^1_i,u\rangle+\frac{r}{2}|\nabla u-q_i|^2+\frac{r}{2}|u+z_i|^2+\frac{r}{2}|u-\lambda_i-w_i|^2.$$

This is equivalent to

$$r\langle\nabla u_{i+1},\nabla v\rangle+2r\langle u_{i+1},v\rangle=\langle f_h,v\rangle-\langle\Phi_i,\nabla v\rangle+\langle\theta^0_i,v\rangle-\langle\theta^1_i,v\rangle+r\langle q_i,\nabla v\rangle-r\langle z_i,v\rangle+r\langle\lambda_i+w_i,v\rangle\quad\forall v\in E_h.$$

Remark here that the equation is linear with a symmetric positive definite coefficient matrix.

(2) For $$\lambda_{i+1}$$, it is computed explicitly:

$$\lambda_{i+1}\in\operatorname*{argmin}_{s\in\mathbb{R}}-s(\mathbf{m}-\nu(\overline{\Omega}))+\langle\theta^1_i,u_{i+1}-s\rangle+\frac{r}{2}\langle u_{i+1}-s-w_i,u_{i+1}-s-w_i\rangle=-\frac{\nu(\overline{\Omega})-\mathbf{m}-\int_{\overline{\Omega}}\theta^1_i+r\int_\Omega(w_i-u_{i+1})}{r\int_\Omega 1}.$$

In Step 2, the variables $$q,z,w$$ are independent, so we solve for them separately:

(1) For $$z_{i+1}$$ and $$w_{i+1}$$, if we choose the $$P_2$$ finite element for $$z_{i+1}$$ and $$w_{i+1}$$, then at each vertex $$x_k$$,

$$z_{i+1}(x_k)=\text{Proj}_{\{s\in\mathbb{R}:\,s\le 0\}}\Big(-u_{i+1}(x_k)+\frac{\theta^0_i(x_k)}{r}\Big)=\min\Big(-u_{i+1}(x_k)+\frac{\theta^0_i(x_k)}{r},0\Big)$$  (4.2)

and

$$w_{i+1}(x_k)=\text{Proj}_{\{s\in\mathbb{R}:\,s\le 0\}}\Big(u_{i+1}(x_k)-\lambda_{i+1}+\frac{\theta^1_i(x_k)}{r}\Big)=\min\Big(u_{i+1}(x_k)-\lambda_{i+1}+\frac{\theta^1_i(x_k)}{r},0\Big).$$  (4.3)

(2) For $$q_{i+1}$$, if we choose the $$P_1$$ finite element for $$q_{i+1}$$, then at each vertex $$x_l$$,

$$q_{i+1}(x_l)=\text{Proj}_{B_{F^*(x_l,.)}}\Big(\nabla u_{i+1}(x_l)+\frac{\Phi_i(x_l)}{r}\Big),$$  (4.4)

where $$B_{F^*(x,.)}:=\{q\in\mathbb{R}^N : F^*(x,q)\le 1\}$$ is the unit ball for $$F^*(x,.)$$. It remains to explain how we compute the projection onto $$B_{F^*(x_l,.)}$$. This issue was recently discussed in Benamou et al. (2017) for Riemann-type Finsler distances and for crystalline norms. For the convenience of the reader, we recall here the case where the unit ball of $$F(x,.)$$ is a (not necessarily symmetric) convex polytope. For short, we ignore the dependence on $$x$$ in $$F$$ and $$F^*$$. Given $$d_1,\ldots,d_k\ne 0$$ such that, for any $$0\ne v\in\mathbb{R}^N$$, $$\max_{1\le i\le k}\{\langle v,d_i\rangle\}>0$$, we consider the nonsymmetric Finsler metric given by

$$F(v):=\max_{1\le i\le k}\{\langle v,d_i\rangle\}\quad\text{for any } v\in\mathbb{R}^N.$$  (4.5)

It is not difficult to see that the unit ball $$B^*$$ corresponding to $$F^*$$ is exactly the convex hull of $$\{d_i\}$$,

$$B^*=\text{conv}(d_i,\ i=1,\ldots,k).$$
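A small sketch (ours, not the authors' FreeFem++ implementation) illustrates this for the directions of Example 5.2, $$d_1=(1,1)$$, $$d_2=(-1,1)$$, $$d_3=(-1,-1)$$, $$d_4=(1,-1)$$, for which $$F(v)=|v_1|+|v_2|$$, $$F^*(p)=\max(|p_1|,|p_2|)$$ and $$B^*=\text{conv}(d_i)=[-1,1]^2$$. The projection onto $$B^*$$ is computed segment by segment; the helper names are our own:

```python
import numpy as np

# Crystalline metric (4.5) with the directions of Example 5.2:
# d1=(1,1), d2=(-1,1), d3=(-1,-1), d4=(1,-1), listed counterclockwise.
D = np.array([[1., 1.], [-1., 1.], [-1., -1.], [1., -1.]])

def F(v):
    return (D @ v).max()            # F(v) = max_i <v, d_i> = |v1| + |v2|

def F_star(p):
    # polar F*(p) = sup{<v, p> : F(v) <= 1}; here it is max(|p1|, |p2|)
    return np.abs(p).max()

def proj_segment(p, a, b):
    # Euclidean projection of p onto the segment [a, b]
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return a + t * ab

def proj_polytope(p, verts):
    # Projection onto B* = conv(verts): if p is in B* (tested via the
    # gauge F* of B*), return it; otherwise take the closest of the
    # projections onto the boundary segments [S_i, S_{i+1}].
    if F_star(p) <= 1.0:
        return p
    n = len(verts)
    cands = [proj_segment(p, verts[i], verts[(i + 1) % n]) for i in range(n)]
    return min(cands, key=lambda c: np.linalg.norm(p - c))

assert F(np.array([0.5, -2.0])) == 2.5                    # |v1| + |v2|
assert F_star(np.array([3.0, -1.0])) == 3.0               # max norm
assert np.allclose(proj_polytope(np.array([3.0, 0.5]), D), [1.0, 0.5])
assert np.allclose(proj_polytope(np.array([0.2, -0.3]), D), [0.2, -0.3])
```

Comparing all segment projections is the first strategy described in the text; the sector-based test of Benamou et al. (2017) avoids the comparison but needs the outward normals precomputed.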
Thus, we need to compute the projection onto the convex hull of finitely many points. In dimension 2, the projection onto $$B^*$$ can be performed as follows: compute the successive vertices $$S_1,\ldots,S_n$$. If $$q\notin B^*$$, then compute the projections of $$q$$ onto the segments $$[S_i,S_{i+1}]$$ and compare these projections to choose the right one. Another way is the one in Benamou et al. (2017): compute outward orthogonal vectors $$v_1,\ldots,v_n$$ (Fig. 1). If $$q$$ belongs to $$[S_i,S_{i+1}]+\mathbb{R}_+v_i$$, then the projection coincides with the one onto the line through $$S_i,S_{i+1}$$. If $$q$$ belongs to the sector $$S_i+\mathbb{R}_+v_{i-1}+\mathbb{R}_+v_i$$, the projection is $$S_i$$.

Fig. 1. Illustration of the projection.

4.3 Unbalanced transport ($$\mathbf{m}=\nu(\overline{\Omega})$$)

Thanks to Remark 3.2, we can simplify the algorithm in this particular case by ignoring the variable $$\lambda$$. With similar considerations for $$\Lambda_h u=(\nabla u,-u)$$, we get the following iteration:

Step 1:

$$u_{i+1}\in\operatorname*{argmin}_{u\in E_h}-\langle u,f_h\rangle+\langle\Phi_i,\nabla u\rangle+\langle\theta^0_i,-u\rangle+\frac{r}{2}|\nabla u-q_i|^2+\frac{r}{2}|u+z_i|^2.$$

Equivalently,

$$r\langle\nabla u_{i+1},\nabla v\rangle+r\langle u_{i+1},v\rangle=\langle f_h,v\rangle-\langle\Phi_i,\nabla v\rangle+\langle\theta^0_i,v\rangle+r\langle q_i,\nabla v\rangle-r\langle z_i,v\rangle\quad\forall v\in E_h.$$  (4.6)

Step 2:

(1) For $$z_{i+1}$$, choosing the $$P_2$$ finite element for $$z_{i+1}$$, at each vertex $$x_k$$,

$$z_{i+1}(x_k)=\text{Proj}_{\{s\in\mathbb{R}:\,s\le 0\}}\Big(-u_{i+1}(x_k)+\frac{\theta^0_i(x_k)}{r}\Big)=\min\Big(-u_{i+1}(x_k)+\frac{\theta^0_i(x_k)}{r},0\Big).$$  (4.7)

(2) For $$q_{i+1}$$, choosing the $$P_1$$ finite element, at each vertex $$x_l$$,

$$q_{i+1}(x_l)=\text{Proj}_{B_{F^*(x_l,.)}}\Big(\nabla u_{i+1}(x_l)+\frac{\Phi_i(x_l)}{r}\Big).$$

Step 3: $$(\Phi_{i+1},\theta^0_{i+1})=(\Phi_i,\theta^0_i)+r(\nabla u_{i+1}-q_{i+1},\,-u_{i+1}-z_{i+1}).$$

5. Numerical experiments

For the numerical implementation, we use the FreeFem++ software (Hecht, 2012), building on Benamou & Brenier (2000) and Benamou & Carlier (2015).
We use the $$P_2$$ finite element for $$u_i, z_i, w_i, \theta^0_i, \theta^1_i$$ and the $$P_1$$ finite element for $$\Phi_i, q_i$$.

5.1 Stopping criterion

In the computational version, the measures $$\mu$$ and $$\nu$$ are approximated by non-negative regular functions that we denote again by $$\mu$$ and $$\nu$$. We use the following stopping criteria.

For the partial transport:

(1) $$\text{MIN-MAX}:=\min\{\min_{\overline{\Omega}} u(x),\ \lambda-\max_{\overline{\Omega}} u(x),\ \min_{\overline{\Omega}}\theta^0(x),\ \min_{\overline{\Omega}}\theta^1(x)\}$$.

(2) $$\text{Max-Lip}:=\sup_{\overline{\Omega}} F^*(x,\nabla u(x))$$.

(3) $$\text{DIV}:=\|\nabla\cdot\Phi+\nu-\theta^1-\mu+\theta^0\|_{L^2}$$.

(4) $$\text{DUAL}:=\|F(x,\Phi(x))-\Phi(x)\cdot\nabla u\|_{L^2}$$.

(5) $$\text{MASS}:=\big|\int(\nu-\theta^1)\,{\rm d}x-\mathbf{m}\big|$$.

For the unbalanced transport, we change

(1) $$\text{MIN-MAX}:=\min\{\min_{\overline{\Omega}} u(x),\ \min_{\overline{\Omega}}\theta^0(x)\}$$.

(2) $$\text{DIV}:=\|\nabla\cdot\Phi+\nu-\mu+\theta^0\|_{L^2}$$.

We expect $$\text{MIN-MAX}\ge 0$$ and $$\text{Max-Lip}\le 1$$, and DIV, DUAL and MASS to be small.

5.2 Some examples

In all the examples below, we take $$\Omega=[0,1]\times[0,1]$$. We test the Riemannian case and the crystalline case. For the latter, we consider a Finsler metric of the form $$F(x,v)=\max_{1\le i\le k}\{\langle v,d_i\rangle\}$$ with given directions $$d_1,\ldots,d_k$$ such that, for any $$0\ne v\in\mathbb{R}^2$$,

$$\max_{1\le i\le k}\{\langle v,d_i\rangle\}>0.$$

5.2.1 For the unbalanced transport

Example 5.1 Take $$\mu=3\mathcal{L}^2_{\Omega}$$ and $$\nu=\delta_{(0.5,0.5)}$$, the Dirac mass at $$(0.5,0.5)$$. The Finsler metric is the Euclidean one. The optimal flow is given in Fig. 2.
The stopping criterion at each iteration is given in Fig. 3.

Fig. 2. Optimal flow for $$\mu=3$$, $$\nu=\delta_{(0.5,0.5)}$$, $$F(x,v)=|v|$$.

Fig. 3. Stopping criterion at each iteration.

Example 5.2 We take $$\mu$$ and $$\nu$$ as in the previous example and the Finsler metric given by $$F(x,v):=|v_1|+|v_2|$$ for $$v=(v_1,v_2)\in\mathbb{R}^2$$. This corresponds to the crystalline norm with $$d_1=(1,1)$$, $$d_2=(-1,1)$$, $$d_3=(-1,-1)$$ and $$d_4=(1,-1)$$. The optimal flow is given in Fig. 4 and the stopping criterion at each iteration is given in Fig. 5.

Fig. 4. Optimal flow for $$\mu=3$$, $$\nu=\delta_{(0.5,0.5)}$$, $$F(x,(v_1,v_2))=|v_1|+|v_2|$$.

Fig. 5. Stopping criterion at each iteration.

5.2.2 For the partial transport

Example 5.3 Take $$\mu=4\chi_{[(x-0.3)^2+(y-0.2)^2<0.03]}$$ and $$\nu=4\chi_{[(x-0.7)^2+(y-0.8)^2<0.03]}$$. The mass of the transport is $$\mathbf{m}:=\frac{\nu(\overline{\Omega})}{2}$$. We test different Finsler metrics. In each figure below, the left subfigure illustrates the unit ball of $$F$$ and the right subfigure gives the numerical result (see Figs 6–9). The stopping criteria are summarized in Table 1.
Table 1 Stopping criteria for 800 iterations

Case | DIV         | DUAL        | MASS        | MIN-MAX     | Max-Lip | Execution time (s)
1    | 2.48182e-05 | 9.5294e-06  | 0.000161361 | -0.0149942  | 1.00068 | 357
2    | 3.38395e-05 | 5.58717e-05 | 0.000195881 | -0.00120123 | 1.00248 | 867
3    | 7.44768e-05 | 5.5997e-05  | 6.66404e-06 | -0.00272389 | 1.00351 | 1269
4    | 6.33726e-05 | 3.20691e-05 | 0.000120909 | -0.0104915  | 1.02572 | 1123

Fig. 6. Case 1: $$F(x,v)=|v|$$.

Fig. 7. Case 2: the crystalline case with $$d_1=(1,1)$$, $$d_2=(-1,1)$$, $$d_3=(-1,-1)$$ and $$d_4=(1,-1)$$.

Fig. 8. Case 3: the crystalline case with $$d_1=(1,0)$$, $$d_2=(\frac{1}{5},\frac{1}{5})$$, $$d_3=(-\frac{1}{5},\frac{1}{5})$$, $$d_4=(-\frac{1}{5},-\frac{1}{5})$$ and $$d_5=(\frac{1}{5},-\frac{1}{5})$$, which makes the transport more expensive in the direction of the vector $$(1,0)$$.

Fig. 9.
Case 4: the crystalline case with $$d_1=(1,-1)$$, $$d_2=(1,-\frac{4}{5})$$, $$d_3=(-\frac{4}{5},1)$$, $$d_4=(-1,1)$$ and $$d_5=(-1,-1)$$, which makes the transport cheaper in the direction of the vector $$(1,1)$$.

Example 5.4 Let $$\mu=2\chi_{[(x-0.2)^2+(y-0.2)^2<0.03]}+2\chi_{[(x-0.6)^2+(y-0.1)^2<0.01]}$$ and $$\nu=2\chi_{[(x-0.6)^2+(y-0.8)^2<0.03]}$$. In this example, we take the Euclidean norm and we let $$\mathbf{m}$$ vary by taking the values $$\mathbf{m}_i=\frac{i}{6}\min\{\mu(\Omega),\nu(\Omega)\}$$, $$i=1,\ldots,6$$. The results are given in Fig. 10.

Fig. 10. Optimal flows.

Acknowledgements

The authors are grateful to J. D. Benamou and G. Carlier, who provide some codes of ALG2 at the link https://team.inria.fr/mokaplan/software/. Some parts of our codes are inspired by their work.

Appendix

Our aim here is to prove Lemma A.1, which gives a smooth approximation of 1-$$d_F$$ Lipschitz continuous functions for continuous nondegenerate Finsler metrics $$F$$. This result is more or less known in some particular cases. However, we could not find any rigorous proof for the general case in the literature.

Lemma A.1 Let $$\Omega$$ be a connected bounded Lipschitz domain and $$F$$ be a continuous nondegenerate Finsler metric on $$\overline{\Omega}$$. For any Lipschitz continuous function $$u$$ on $$\overline{\Omega}$$ satisfying

$$F^*(x,\nabla u(x))\le 1\ \text{ a.e. } x\in\Omega,$$  (A.1)

there exists a sequence of functions $$u_\varepsilon\in C^\infty_c(\mathbb{R}^N)$$ such that

$$F^*(x,\nabla u_\varepsilon(x))\le 1\quad\forall x\in\overline{\Omega}$$

and

$$u_\varepsilon\rightrightarrows u\ \text{ uniformly on } \overline{\Omega}.$$
Note that $$F$$ and $$F^*$$ are defined only on $$\overline{\Omega}$$ and that the gradient of $$u$$ is controlled by (A.1) only inside $$\Omega$$. If we use the standard convolution to define $$u_\varepsilon$$, the value of $$u_\varepsilon(x)$$ is affected by the values of $$u(y)$$ outside of $$\overline{\Omega}$$, which remain uncontrolled. To overcome this difficulty, if $$x$$ is near the boundary, we move it a little into the interior of $$\Omega$$ before taking the convolution. To this aim, we use a smooth partition of unity to deal with the approximation of $$u$$ near the boundary.

Proof. Set

$$\forall x\in\mathbb{R}^N,\quad\tilde{u}(x):=\begin{cases}u(x) & \text{if } x\in\overline{\Omega}\\ 0 & \text{otherwise.}\end{cases}$$

Step 1: Fix $$z\in\partial\Omega$$. Since $$\Omega$$ is a Lipschitz domain, there exist $$r_z>0$$ and a Lipschitz continuous function $$\gamma_z:\mathbb{R}^{N-1}\longrightarrow\mathbb{R}$$ such that (up to rotating and relabeling if necessary)

$$\Omega\cap B(z,r_z)=\{x\ |\ x_N>\gamma_z(x_1,\ldots,x_{N-1})\}\cap B(z,r_z).$$

Set $$U_z:=\Omega\cap B(z,\frac{r_z}{2})$$. For any $$x\in\mathbb{R}^N$$, take

$$x^\varepsilon_z:=x+\varepsilon\lambda_z e_N,$$  (A.2)

where we choose a sufficiently large fixed $$\lambda_z$$ and all small $$\varepsilon$$; say, fixed $$\lambda_z\ge\text{Lip}(\gamma_z)+1$$ and $$0<\varepsilon<\frac{r_z}{2(\lambda_z+1)}$$. By this choice and the Lipschitz property of $$\gamma_z$$, we see that

$$B(x^\varepsilon_z,\varepsilon)\subset\Omega\cap B(z,r_z)\quad\text{for all } x\in U_z.$$  (A.3)

Define

$$\tilde{u}_\varepsilon(x):=\int_{\mathbb{R}^N}\rho_\varepsilon(y)\tilde{u}(x^\varepsilon_z-y)\,{\rm d}y=\int_{B(x^\varepsilon_z,\varepsilon)}\rho_\varepsilon(x^\varepsilon_z-y)\tilde{u}(y)\,{\rm d}y\quad\text{for all } x\in\mathbb{R}^N,$$  (A.4)

where $$\rho_\varepsilon$$ is the standard mollifier on $$\mathbb{R}^N$$. Obviously, $$\tilde{u}_\varepsilon\in C^\infty_c(\mathbb{R}^N)$$. Using (A.3), (A.4) and the continuity of $$u$$ on $$\overline{\Omega}$$, we get

$$\tilde{u}_\varepsilon\rightrightarrows u\ \text{ on } \overline{U}_z.$$
Step 2: Now, using the compactness of $$\partial\Omega$$ and $$\partial\Omega\subset\bigcup_{z\in\partial\Omega} B(z,\frac{r_z}{2})$$, there exist $$z_1,\ldots,z_n\in\partial\Omega$$ such that

$$\partial\Omega\subset\bigcup_{i=1}^n B\Big(z_i,\frac{r_{z_i}}{2}\Big).$$

For short, we write $$r_i, U_i, x_i$$ instead of $$r_{z_i}, U_{z_i}, x_{z_i}$$. Take an open set $$U_0\Subset\Omega$$ such that

$$\overline{\Omega}\subset\bigcup_{i=1}^n B\Big(z_i,\frac{r_i}{2}\Big)\cup U_0.$$

Let $$\{\phi_i\}^n_{i=0}$$ be a smooth partition of unity on $$\overline{\Omega}$$, subordinate to $$\{U_0, B(z_1,\frac{r_1}{2}),\ldots,B(z_n,\frac{r_n}{2})\}$$, that is,

$$\begin{cases}\phi_i\in C^\infty_c(\mathbb{R}^N),\quad 0\le\phi_i\le 1\quad\forall i=0,\ldots,n\\ \text{supp}(\phi_i)\Subset B(z_i,\frac{r_i}{2})\quad\forall i=1,\ldots,n,\qquad\text{supp}(\phi_0)\Subset U_0\\ \sum_{i=0}^n\phi_i(x)=1\quad\text{for all } x\in\overline{\Omega}.\end{cases}$$

Because of Step 1, there exist $$\tilde{u}^1_\varepsilon,\ldots,\tilde{u}^n_\varepsilon\in C^\infty_c(\mathbb{R}^N)$$ such that

$$\tilde{u}^i_\varepsilon\rightrightarrows u\ \text{ on } \overline{U}_i,\quad i=1,\ldots,n.$$

For $$i=0$$, since $$U_0\Subset\Omega$$, we can take $$\tilde{u}^0_\varepsilon:=\rho_\varepsilon\star\tilde{u}\in C^\infty_c(\mathbb{R}^N)$$ and $$\tilde{u}^0_\varepsilon\rightrightarrows u$$ on $$\overline{U}_0$$. Set

$$u_\varepsilon:=\frac{1}{1+C\varepsilon+w(\varepsilon)}\sum_{i=0}^n\phi_i\tilde{u}^i_\varepsilon,$$

where $$C$$ is chosen later and

$$w(\varepsilon):=\sup\{|F^*(x,p)-F^*(y,p)| : x,y\in\overline{\Omega},\ |x-y|\le M\varepsilon,\ |p|\le\|\nabla u\|_{L^\infty}\}$$

with the constant $$M:=\max_{1\le i\le n}\{\lambda_{z_i}+1\}$$, where $$\lambda_{z_i}$$ is given in Step 1. We show that $$u_\varepsilon$$ satisfies all the desired properties. By construction, $$u_\varepsilon\in C^\infty_c(\mathbb{R}^N)$$ and

$$u_\varepsilon\rightrightarrows\sum_{i=0}^n\phi_i u=u\ \text{ on } \overline{\Omega}.$$

At last, we show that $$F^*(x,\nabla u_\varepsilon(x))\le 1\ \forall x\in\overline{\Omega}$$. Indeed, for any $$x\in\Omega$$, if $$x\in U_i$$, $$i=1,\ldots,n$$ (near the boundary of $$\Omega$$), we move $$x$$ a bit into the interior of $$\Omega$$ to $$x^\varepsilon_i:=x^\varepsilon_{z_i}$$ (see (A.2) and (A.3)); if $$x\in U_0$$, set $$x^\varepsilon_0=x$$.
We have

$$\nabla u_\varepsilon(x)=\frac{1}{1+C\varepsilon+w(\varepsilon)}\Big(\sum_{i=0}^n\nabla\phi_i(x)\tilde{u}^i_\varepsilon(x)+\sum_{i=0}^n\phi_i(x)\nabla\tilde{u}^i_\varepsilon(x)\Big)$$
$$=\frac{1}{1+C\varepsilon+w(\varepsilon)}\Big(\sum_{i=0}^n\nabla\phi_i(x)\int_{B(x^\varepsilon_i,\varepsilon)}\rho_\varepsilon(x^\varepsilon_i-y)u(y)\,{\rm d}y+\sum_{i=0}^n\phi_i(x)\int_{B(x^\varepsilon_i,\varepsilon)}\rho_\varepsilon(x^\varepsilon_i-y)\nabla u(y)\,{\rm d}y\Big).$$

The first sum on the right-hand side has a small norm. Indeed, using the fact that

$$\sum_{i=0}^n\nabla\phi_i(x)u(x)=0\quad\text{for all } x\in\Omega,$$

we have

$$\sum_{i=0}^n\nabla\phi_i(x)\int_{B(x^\varepsilon_i,\varepsilon)}\rho_\varepsilon(x^\varepsilon_i-y)u(y)\,{\rm d}y=\sum_{i=0}^n\nabla\phi_i(x)\Big(\int_{B(x^\varepsilon_i,\varepsilon)}\rho_\varepsilon(x^\varepsilon_i-y)u(y)\,{\rm d}y-u(x)\Big).$$  (A.5)

Moreover,

$$\Big|\int_{B(x^\varepsilon_i,\varepsilon)}\rho_\varepsilon(x^\varepsilon_i-y)u(y)\,{\rm d}y-u(x)\Big|\le\Big|\int_{B(x^\varepsilon_i,\varepsilon)}\rho_\varepsilon(x^\varepsilon_i-y)(u(y)-u(x^\varepsilon_i))\,{\rm d}y\Big|+|u(x^\varepsilon_i)-u(x)|\le C_1\varepsilon\quad\forall i=0,\ldots,n,$$

where the constant $$C_1$$ depends only on $$\text{Lip}(\gamma_{z_i})$$ and the Lipschitz constant of $$u$$ on $$\overline{\Omega}$$. Thus, combining this with (A.5),

$$\Big|\sum_{i=0}^n\nabla\phi_i(x)\int_{B(x^\varepsilon_i,\varepsilon)}\rho_\varepsilon(x^\varepsilon_i-y)u(y)\,{\rm d}y\Big|\le C_2\varepsilon\quad\forall x\in\Omega,$$

where $$C_2$$ depends only on $$C_1$$ and $$\|\nabla\phi_i\|_{L^\infty}$$. Using the nondegeneracy of $$F$$, we have

$$F^*\Big(x,\sum_{i=0}^n\nabla\phi_i(x)\int_{B(x^\varepsilon_i,\varepsilon)}\rho_\varepsilon(x^\varepsilon_i-y)u(y)\,{\rm d}y\Big)\le C_3\varepsilon\quad\text{for all } x\in\Omega.$$

Fix any $$x\in\Omega$$; if $$y\in B(x^\varepsilon_i,\varepsilon)$$, then $$|x-y|\le|x-x^\varepsilon_i|+|x^\varepsilon_i-y|\le M\varepsilon$$. So we obtain

$$F^*(x,\nabla u_\varepsilon(x))\le\frac{1}{1+C\varepsilon+w(\varepsilon)}\Big[F^*\Big(x,\sum_{i=0}^n\nabla\phi_i(x)\int_{B(x^\varepsilon_i,\varepsilon)}\rho_\varepsilon(x^\varepsilon_i-y)u(y)\,{\rm d}y\Big)+F^*\Big(x,\sum_{i=0}^n\phi_i(x)\int_{B(x^\varepsilon_i,\varepsilon)}\rho_\varepsilon(x^\varepsilon_i-y)\nabla u(y)\,{\rm d}y\Big)\Big]$$
$$\le\frac{1}{1+C\varepsilon+w(\varepsilon)}\Big(C_3\varepsilon+\sum_{i=0}^n\phi_i(x)\int_{B(x^\varepsilon_i,\varepsilon)}\rho_\varepsilon(x^\varepsilon_i-y)F^*(x,\nabla u(y))\,{\rm d}y\Big)$$
$$\le\frac{1}{1+C\varepsilon+w(\varepsilon)}\Big[C_3\varepsilon+\sum_{i=0}^n\phi_i(x)\int_{B(x^\varepsilon_i,\varepsilon)}\rho_\varepsilon(x^\varepsilon_i-y)F^*(y,\nabla u(y))\,{\rm d}y+\sum_{i=0}^n\phi_i(x)\int_{B(x^\varepsilon_i,\varepsilon)}\rho_\varepsilon(x^\varepsilon_i-y)\big(F^*(x,\nabla u(y))-F^*(y,\nabla u(y))\big)\,{\rm d}y\Big]$$
$$\le\frac{C_3\varepsilon+1+w(\varepsilon)}{1+C\varepsilon+w(\varepsilon)}\le 1\quad(\text{choosing a constant } C\ge C_3).$$

By the continuity of $$\nabla u_\varepsilon$$ and of $$F^*$$, we also have $$F^*(x,\nabla u_\varepsilon(x))\le 1\quad\forall x\in\overline{\Omega}$$. □

Proposition A.2 Let $$F$$ be a continuous nondegenerate Finsler metric on a connected bounded Lipschitz domain $$\Omega$$. We have

$$Lip_{d_F}=\{u:\overline{\Omega}\longrightarrow\mathbb{R}\ |\ u \text{ is Lipschitz continuous and } F^*(x,\nabla u(x))\le 1 \text{ a.e. } x\in\Omega\}=:\mathcal{B}_{F^*}.$$
(A.6) As a consequence, for any 1-$$d_{F}$$ Lipschitz continuous function $$u$$, there exists a sequence of 1-$$d_{F}$$ Lipschitz continuous functions $$u_{\varepsilon}\in C^{\infty}_{c}(\mathbb{R}^N)$$ and $$u_{\varepsilon} \rightrightarrows u$$ uniformly on $$\overline{{\it{\Omega}}}$$. Lemma A.3 We have $$Lip_{d_{F}}\subset \mathcal{B}_{F^*}.$$ Proof. Let $$u\in Lip_{d_{F}}$$. Then $$u$$ is Lipschitz and $$u$$ is differentiable a.e. in $${\it{\Omega}}$$. Let $$x\in {\it{\Omega}}$$ be any point where $$u$$ is differentiable. We have, for any $$v\in \mathbb{R}^N$$,   ⟨∇u(x),v⟩F(x,v) =limh→0u(x+hv)−u(x)F(x,hv) ≤lim suph→0dF(x,x+hv)F(x,hv) ≤lim suph→0∫01F(x+thv,hv)dtF(x,hv)=1. Hence, $$F^*(x, \nabla u(x))\le 1$$. So $$u\in \mathcal{B}_{F^*}$$. □ Lemma A.4 We have $$\mathcal{B}_{F^*}\subset Lip_{d_{F}}.$$ Proof. Fix any $$u\in \mathcal{B}_{F^*}$$. Case 1: If $$u$$ is smooth then $$F^*(x, \nabla u(x))\le 1 \quad\forall x\in \overline{{\it{\Omega}}}$$. For any $$x, y \in \overline{{\it{\Omega}}}$$ and any Lipschitz curve $$\xi$$ in $$\overline{{\it{\Omega}}}$$ joining $$x$$ and $$y$$, we have   u(y)−u(x) =∫01∇u(ξ(t))ξ˙(t)dt ≤∫01F∗(ξ(t),∇u(ξ(t)))F(ξ(t),ξ˙(t))dt ≤∫01F(ξ(t),ξ˙(t))dt. Hence $$u\in Lip_{d_{F}}$$. Case 2: For general Lipschitz continuous function $$u$$ satisfying $$F^*(x, \nabla u(x))\,{\le}\, 1 \, \text{ a.e. }\, x\in {\it{\Omega}}$$, thanks to Lemma A.1, there exist $$u_{\varepsilon} \in \mathcal{B}_{F^*}\bigcap C^{\infty}_{c}(\mathbb{R}^N)$$ such that $$u_{\varepsilon} \,{\rightrightarrows}\, u$$ on $$\overline{{\it{\Omega}}}$$. According to Case 1 above, $$u_{\varepsilon}\in Lip_{d_{F}}$$. Since $$u_{\varepsilon} \rightrightarrows u$$ on $$\overline{{\it{\Omega}}}$$, we obtain $$u\in Lip_{d_{F}}$$. □ Proof of Proposition A.2. The proof follows by Lemma A.3 and Lemma A.4. □ Proof of Lemma 2.4. Since $$0\le u\le \lambda$$, the sequence $$u_{\varepsilon}$$ in the proof of Lemma A.1 satisfies $$0\le u_{\varepsilon}\le \lambda$$. 
So $$u_{\varepsilon}\in C^{\infty}_{c}(\mathbb{R}^N) \cap L^\lambda_{d_{F}}$$ and $$u_{\varepsilon} \rightrightarrows u$$ on $$\overline{{\it{\Omega}}}$$. □

Remark A.5 The results still hold true if $${\it{\Omega}}$$ is connected, bounded and has the segment property.

References

Ambrosio, L., Fusco, N. & Pallara, D. (2000) Functions of Bounded Variation and Free Discontinuity Problems. Oxford Mathematical Monographs. Oxford: Oxford University Press.
Ambrosio, L., Gigli, N. & Savaré, G. (2005) Gradient Flows in Metric Spaces and in the Space of Probability Measures. Lectures in Mathematics. Basel: Birkhäuser.
Barrett, J. W. & Prigozhin, L. (2009) Partial L1 Monge–Kantorovich problem: variational formulation and numerical approximation. Interface. Free Bound., 11, 201–238.
Beckmann, M. (1952) A continuous model of transportation. Econometrica, 20, 643–660.
Benamou, J. D. & Brenier, Y. (2000) A computational fluid mechanics solution to the Monge–Kantorovich mass transfer problem. Numer. Math., 84, 375–393.
Benamou, J. D. & Carlier, G. (2015) Augmented Lagrangian methods for transport optimization, mean field games and degenerate elliptic equations. J. Optim. Theory Appl., 167, 1–26.
Benamou, J. D., Carlier, G., Cuturi, M., Nenna, L. & Peyré, G. (2015) Iterative Bregman projections for regularized transportation problems. SIAM J. Sci. Comput., 37, A1111–A1138.
Benamou, J. D., Carlier, G. & Hatchi, R. (2017) A numerical solution to Monge's problem with a Finsler distance cost. ESAIM: M2AN, DOI: http://dx.doi.org/10.1051/m2an/2016077.
Caffarelli, L. & McCann, R. J. (2010) Free boundaries in optimal transport and Monge–Ampère obstacle problems. Ann. Math., 171, 673–730.
Chizat, L., Peyré, G., Schmitzer, B. & Vialard, F. X.
(2016) Scaling algorithms for unbalanced transport problems. arXiv preprint arXiv:1607.05816.
De Pascale, L., Evans, L. C. & Pratelli, A. (2004) Integral estimates for transport densities. Bull. London Math. Soc., 36, 383–395.
De Pascale, L. & Pratelli, A. (2004) Sharp summability for Monge transport density via interpolation. ESAIM Control Optim. Calc. Var., 10, 549–552.
Eckstein, J. & Bertsekas, D. P. (1992) On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program., 55, 293–318.
Ekeland, I. & Temam, R. (1976) Convex Analysis and Variational Problems. Amsterdam–New York: North-Holland/American Elsevier.
Feldman, M. & McCann, R. J. (2002) Uniqueness and transport density in Monge's mass transportation problem. Calc. Var. Partial Differ. Equ., 15, 81–113.
Figalli, A. (2010) The optimal partial transport problem. Arch. Rational Mech. Anal., 195, 533–560.
Fortin, M. & Glowinski, R. (1983) Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems, vol. 15. Amsterdam: North-Holland Publishing.
Gabay, D. & Mercier, B. (1976) A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl., 2, 17–40.
Glowinski, R., Lions, J. L. & Trémolières, R. (1981) Numerical Analysis of Variational Inequalities. Amsterdam: North-Holland Publishing.
Glowinski, R. & Le Tallec, P. (1989) Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics, vol. 9. Philadelphia: SIAM.
Hecht, F. (2012) New development in freefem++. J. Numer. Math., 20, 251–266.
Igbida, N. & Nguyen, V. T.
(2017) Optimal partial mass transportation and obstacle Monge–Kantorovich equation (submitted for publication).
Igbida, N. & Ta Thi, N. N. (2017) Sub-gradient diffusion operator. J. Differ. Equ., 262, 3837–3863.
Santambrogio, F. (2009) Absolute continuity and summability of transport densities: simpler proofs and new estimates. Calc. Var. Partial Differ. Equ., 36, 343–354.
Santambrogio, F. (2015) Optimal Transport for Applied Mathematicians. Basel: Birkhäuser.
Villani, C. (2003) Topics in Optimal Transportation. Graduate Studies in Mathematics, vol. 58. Providence: American Mathematical Society.
Villani, C. (2009) Optimal Transport, Old and New. Grundlehren der Mathematischen Wissenschaften (Fundamental Principles of Mathematical Sciences), vol. 338. Berlin, Heidelberg: Springer.

© The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

Augmented Lagrangian Method for Optimal Partial Transportation. IMA Journal of Numerical Analysis, Volume 38 (1), January 2018. ISSN 0272-4979, eISSN 1464-3642. DOI: 10.1093/imanum/drw077.

Abstract

The use of the augmented Lagrangian algorithm for optimal transport problems goes back to Benamou & Brenier (2000, A computational fluid mechanics solution to the Monge–Kantorovich mass transfer problem. Numer. Math., 84, 375–393), in the case where the cost corresponds to the square of the Euclidean distance. It was recently extended in Benamou & Carlier (2015, Augmented Lagrangian methods for transport optimization, mean field games and degenerate elliptic equations. J. Optim.
Theory Appl., 167, 1–26), to the optimal transport with the Euclidean distance and Mean-Field Games theory, and in Benamou et al. (2017, A numerical solution to Monge's problem with a Finsler distance cost. ESAIM: M2AN), to the optimal transportation with Finsler distances. Our aim here is to show how one can use this method to study the optimal partial transport problem with Finsler distance costs. To this aim, we introduce a suitable dual formulation of the optimal partial transport, which contains all the information on the active regions and the associated flow. Then, we use a finite element discretization with the FreeFem++ software to provide numerical simulations for the optimal partial transportation. A convergence study for the potential, together with the flux and the active regions, is given to validate the approach.

1. Introduction

The theory of optimal transportation deals with the problem of finding the optimal way to move materials from a given source to a desired target so as to minimize the work. The problem was first proposed and studied by G. Monge in 1781, and L. Kantorovich then made fundamental contributions to it in the 1940s by relaxing the problem into a linear one. Since the late 1980s, this subject has been investigated from various points of view, with many applications in image processing, geometry, probability theory, economics, evolution partial differential equations (PDEs) and other areas. For more information on the optimal mass transport problem, we refer the reader to the pedagogical books (Villani, 2003; Ambrosio et al., 2005; Villani, 2009; Santambrogio, 2015). The standard optimal transport problem requires that the total mass of the source equal the total mass of the target (balance condition of mass) and that all the materials of the source be transported. Here, we are interested in the optimal partial transportation.
That is, the case where the balance condition of mass is dropped and the aim is to transport effectively a prescribed amount of mass from the source to the target. In other words, the optimal partial transport problem aims to study the practical situation where only a part of the commodity (respectively, consumer demand) of a prescribed total mass $$\mathbf{m}$$ needs to be transported (respectively, fulfilled). This generalized problem brings out additional variables called active regions. The problem was first studied theoretically in Caffarelli & McCann (2010) (see also Figalli, 2010) in the case where the work is proportional to the square of the Euclidean distance. Recently, in Igbida & Nguyen (2017), we gave a complete theoretical study of the problem in the case where the work is proportional to a Finsler distance $$d_{F}$$ (covering, by the way, the case of the Euclidean distance), where $$d_{F}$$ is given as follows (see Section 2):

$$d_F(x,y) := \inf_{\xi\in Lip([0,1];\overline{{\it{\Omega}}})} \left\{ \int_0^1 F(\xi(t), \dot\xi(t))\,\mathrm{d}t : \xi(0)=x,\ \xi(1)=y \right\}.$$

Concerning numerical approximations for the optimal partial transport, Barrett & Prigozhin (2009) studied the case of the Euclidean distance by using an approximation based on nonlinear approximated PDEs and Raviart–Thomas finite elements. Benamou et al. (2015) and Chizat et al. (2016) introduced general numerical frameworks to approximate solutions to linear programs related to optimal transport (including optimal partial transport). Their idea is based on an entropic regularization of the initial linear programs. This is a static approach to optimal transport-type problems and needs to use (approximated) values of $$d_{F}(x, y)$$. In this article, we use a different approach (based mainly on Benamou & Brenier, 2000; Benamou & Carlier, 2015; Igbida & Nguyen, 2017) to compute the solution of the optimal partial transport problem.
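For contrast with the approach developed below, the entropic-regularization idea of Benamou et al. (2015) can be sketched in a few lines of Sinkhorn iterations. This is a minimal, discrete, balanced-transport sketch and not the method of this article; the function name, the discretization and the parameter values (`eps`, `n_iter`) are assumptions made purely for illustration.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.5, n_iter=500):
    """Entropic optimal transport between discrete measures mu, nu with
    cost matrix C.  Returns an approximate transport plan whose marginals
    match mu and nu (Sinkhorn / iterative Bregman projections)."""
    K = np.exp(-C / eps)              # Gibbs kernel
    a = np.ones_like(mu)
    for _ in range(n_iter):           # alternate marginal scalings
        b = nu / (K.T @ a)
        a = mu / (K @ b)
    return a[:, None] * K * b[None, :]

# Tiny example: two 3-point measures on a line, cost = |x - y|.
x = np.array([0.0, 0.5, 1.0])
C = np.abs(x[:, None] - x[None, :])
mu = np.array([0.5, 0.3, 0.2])
nu = np.array([0.2, 0.3, 0.5])
gamma = sinkhorn(mu, nu, C)
```

Each pass alternately rescales the rows and columns of the Gibbs kernel until the marginals of the plan match $$\mu$$ and $$\nu$$. Note that this static approach needs the matrix of cost values up front, which is precisely the evaluation of $$d_F$$ that the augmented Lagrangian route avoids.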
We first show how one can directly reformulate the unknown quantities (variables) of the optimal partial transport into an infinite-dimensional minimization problem of the form

$$\min_{\phi\in V} \mathcal{F}(\phi) + \mathcal{G}({\it{\Lambda}} \phi),$$

where $$\mathcal{F}, \mathcal{G}$$ are l.s.c., convex functionals and $${\it{\Lambda}}\in \mathcal{L}(V, Z)$$ is a continuous linear operator between two Banach spaces. Thanks to peculiar properties of $$\mathcal{F}$$ and $$\mathcal{G}$$ in our situation, an augmented Lagrangian method is applied effectively in the same spirit as Benamou & Carlier (2015) and Benamou et al. (2017). We show that, for the computation, we just need to solve linear equations (with a symmetric positive definite coefficient matrix) or to update explicit formulations. It is worth noting that this method uses only elementary operations, without evaluating $$d_{F}$$. This article is organized as follows: In the next section, we introduce the optimal partial transport problem and its equivalent formulations, with particular attention to the Kantorovich dual formulation. In Section 3, we give a finite-dimensional approximation of the problem and show that primal-dual solutions of the discretized problems converge to those of the original continuous problems. The details of the ALG2 algorithm are given in Section 4. Some numerical examples are presented in Section 5. We end the article with an appendix, where we give proofs of some facts needed in this article.

2. Partial transport and its equivalent formulations

Let $${\it{\Omega}}$$ be a connected bounded Lipschitz domain and $$F$$ be a continuous Finsler metric on $$\overline{{\it{\Omega}}}$$, i.e., $$F: \overline{{\it{\Omega}}}\times \mathbb{R}^N \longrightarrow [0, +\infty)$$ is continuous and $$F(x, .)$$ is convex and positively homogeneous of degree $$1$$ in the sense that

$$F(x, tv) = t F(x,v) \quad \forall t>0,\ v\in \mathbb{R}^N.$$
We assume moreover that $$F$$ is nondegenerate in the sense that there exist positive constants $$M_{1}, M_{2}$$ such that

$$M_1 |v| \le F(x,v) \le M_2 |v| \quad \forall x\in \overline{{\it{\Omega}}},\ v\in \mathbb{R}^N.$$

Let $$\mu, \nu\in \mathcal M^+_{b}(\overline{{\it{\Omega}}})$$ be two Radon measures on $$\overline{{\it{\Omega}}}$$ and $$\mathbf{m}_{\max}:=\min\{\mu(\overline{{\it{\Omega}}}), \nu(\overline{{\it{\Omega}}})\}.$$ Given a total mass $$\mathbf{m}\in [0, \mathbf{m}_{\max}]$$, the optimal partial transport problem (or partial Monge–Kantorovich problem, PMK for short) aims to transport effectively the total mass $$\mathbf{m}$$ from a supply subregion of the source $$\mu$$ into a subregion of the target $$\nu$$. The set of subregions of mass $$\mathbf{m}$$ is given by

$$Sub_{\mathbf{m}}(\mu,\nu) := \big\{ (\rho_0, \rho_1)\in \mathcal M^+_b(\overline{{\it{\Omega}}})\times \mathcal M^+_b(\overline{{\it{\Omega}}}) : \rho_0\le \mu,\ \rho_1\le \nu,\ \rho_0(\overline{{\it{\Omega}}}) = \rho_1(\overline{{\it{\Omega}}}) = \mathbf{m} \big\}.$$

An element $$(\rho_{0}, \rho_{1})\in Sub_{\mathbf{m}}(\mu, \nu)$$ is called a couple of active regions. As for the optimal transport, one can work with different kinds of cost functions for the optimal partial transport, i.e., in the formulation (2.1) below, $$d_{F}(x, y)$$ can be replaced by a general measurable cost function $$c(x, y)$$. However, in this article, we focus on the case where the cost $$c=d_{F}$$. So let us state the problem directly for $$d_{F}$$. The PMK problem (Barrett & Prigozhin, 2009; Caffarelli & McCann, 2010; Figalli, 2010; Igbida & Nguyen, 2017) consists in solving

$$\min\left\{ \mathcal{K}(\gamma) := \int_{\overline{{\it{\Omega}}}\times \overline{{\it{\Omega}}}} d_F(x,y)\,\mathrm{d}\gamma : \gamma\in {\large{\mathbf{\pi}}}_{\mathbf{m}}(\mu,\nu) \right\}, \quad \text{(2.1)}$$

where $$d_{F}$$ is the Finsler distance on $$\overline{{\it{\Omega}}}$$ associated with $$F$$, i.e.,

$$d_F(x,y) := \inf\left\{ \int_0^1 F(\xi(t), \dot\xi(t))\,\mathrm{d}t : \xi(0)=x,\ \xi(1)=y,\ \xi\in Lip([0,1];\overline{{\it{\Omega}}}) \right\},$$

and $${\large{\mathbf{\pi}}}_{\mathbf{m}}(\mu, \nu)$$ is the set of transport plans of mass $$\mathbf{m}$$, i.e.,

$${\large{\mathbf{\pi}}}_{\mathbf{m}}(\mu,\nu) := \big\{ \gamma\in \mathcal M^+_b(\overline{{\it{\Omega}}}\times \overline{{\it{\Omega}}}) : (\pi_x \#\gamma, \pi_y \#\gamma)\in Sub_{\mathbf{m}}(\mu,\nu) \big\}.$$

Here, $$\pi_x \#\gamma$$ and $$\pi_y \# \gamma$$ are the first and second marginals of $$\gamma$$.
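Since $$d_F$$ is defined through a path integral, it can be sanity-checked numerically by evaluating that integral along a discretized candidate curve. A minimal sketch under strong simplifying assumptions (Euclidean metric, straight-line path in a convex domain; all names are hypothetical):

```python
import numpy as np

def path_length(F, x, y, n=200):
    """Approximate the integral of F(xi(t), xi'(t)) over [0,1] along the
    straight segment xi(t) = (1-t)x + ty, by a midpoint rule.  In a convex
    domain this upper-bounds d_F(x, y); for the Euclidean metric it equals
    the straight-line distance |x - y|."""
    t = (np.arange(n) + 0.5) / n           # midpoints of n subintervals
    xi = (1 - t)[:, None] * x + t[:, None] * y
    v = y - x                              # xi'(t) is constant on a segment
    return np.mean([F(p, v) for p in xi])

F_euclid = lambda x, v: np.linalg.norm(v)
d = path_length(F_euclid, np.array([0.0, 0.0]), np.array([3.0, 4.0]))
```

For a general Finsler metric one would additionally have to optimize over curves, which is exactly the evaluation of $$d_F$$ that the method of this article is designed to avoid.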
An optimal $$\gamma^*$$ is called an optimal plan and $$(\pi_x \#\gamma^*, \pi_y \# \gamma^*)$$ is called a couple of optimal active regions. Following Igbida & Nguyen (2017), to study the PMK problem we use its dual problem, which we call the dual partial Monge–Kantorovich problem (DPMK). To this aim, we consider $$Lip_{d_{F}}$$, the set of $$1$$-Lipschitz continuous functions w.r.t. $$d_{F}$$, given by

$$Lip_{d_F} := \big\{ u: \overline{{\it{\Omega}}} \longrightarrow \mathbb{R} \ \big|\ u(y)-u(x)\le d_F(x,y) \quad \forall x,y\in \overline{{\it{\Omega}}} \big\}.$$

Then, the connection between the PMK and DPMK problems is summarized in the following theorem.

Theorem 2.1 Let $$\mu, \nu\in\mathcal M^+_b(\overline{{\it{\Omega}}})$$ be Radon measures and $$\mathbf{m}\in [0,\mathbf{m}_{\max}]$$. The partial Monge–Kantorovich problem has a solution $$\sigma^*\in {\large{\mathbf{\pi}}}_{\mathbf{m}}(\mu, \nu)$$ and

$$\mathcal{K}(\sigma^*) = \max\left\{ \mathcal{D}(\lambda, u) := \int_{\overline{{\it{\Omega}}}} u\,\mathrm{d}(\nu-\mu) + \lambda(\mathbf{m}-\nu(\overline{{\it{\Omega}}})) : \lambda\ge 0 \text{ and } u\in L^\lambda_{d_F} \right\}, \quad \text{(2.2)}$$

where

$$L^\lambda_{d_F} := \big\{ u\in Lip_{d_F} : 0\le u(x)\le \lambda \text{ for any } x\in \overline{{\it{\Omega}}} \big\}.$$

Moreover, $$\sigma\in {\large{\mathbf{\pi}}}_{\mathbf{m}}(\mu, \nu)$$ and $$(\lambda, u)\in \mathbb{R}^+\times L^\lambda_{d_{F}}$$ are solutions, respectively, if and only if

$$u(x)=0 \ \text{ for } (\mu-\pi_x\#\sigma)\text{-a.e. } x\in \overline{{\it{\Omega}}}, \qquad u(x)=\lambda \ \text{ for } (\nu-\pi_y\#\sigma)\text{-a.e. } x\in \overline{{\it{\Omega}}}$$

and

$$u(y)-u(x) = d_F(x,y) \ \text{ for } \sigma\text{-a.e. } (x,y)\in \overline{{\it{\Omega}}}\times \overline{{\it{\Omega}}}.$$

Proof. The proof follows in the same way as Theorem 2.4 in Igbida & Nguyen (2017), where the authors study the case $${\it{\Omega}}=\mathbb{R}^N.$$ □

The DPMK problem (2.2) contains all the information concerning the optimal partial mass transportation. However, for the numerical approximation of the optimal partial transportation and to use the augmented Lagrangian method, we need to rewrite the problem in the form

$$\inf_{\phi\in V} \mathcal{F}(\phi) + \mathcal{G}({\it{\Lambda}} \phi).$$

To do that, we consider the polar function $$F^*$$ of $$F,$$ which is defined by

$$F^*(x,p) := \sup\big\{ \langle v, p\rangle : F(x,v)\le 1 \big\} \quad \text{for } x\in \overline{{\it{\Omega}}},\ p\in \mathbb{R}^N.$$

Note that $$F^*(x, .)$$ is not the Legendre–Fenchel transform. It is easy to see that $$F^*$$ is also a continuous, nondegenerate Finsler metric on $$\overline{{\it{\Omega}}}$$ and

$$\langle v, p\rangle \le F^*(x,p)\, F(x,v) \quad \forall x\in \overline{{\it{\Omega}}},\ v, p\in \mathbb{R}^N.$$
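Concretely, the definition of the polar can be approximated by brute-force maximization of $$\langle v, p\rangle$$ over sampled directions rescaled to the unit sphere of $$F(x,\cdot)$$. A rough sketch (the Monte Carlo sampling and all names are assumptions for illustration; for the Euclidean metric the polar is again the Euclidean norm):

```python
import numpy as np

def polar(F, x, p, n=20000):
    """Approximate F*(x,p) = sup{ <v,p> : F(x,v) <= 1 } by sampling random
    directions and rescaling each one to the boundary of the unit ball of
    F(x,.), where the supremum is attained by 1-homogeneity."""
    rng = np.random.default_rng(0)
    V = rng.normal(size=(n, p.size))
    V /= np.array([F(x, v) for v in V])[:, None]   # now F(x, v) = 1 per row
    return np.max(V @ p)

# Sanity check: for F(x,v) = |v|, the polar is F*(x,p) = |p|.
F_euclid = lambda x, v: np.linalg.norm(v)
p = np.array([3.0, 4.0])
val = polar(F_euclid, np.zeros(2), p)   # close to |p| = 5 from below
```

The sampled maximum always underestimates the true supremum slightly; for smooth convex unit balls the gap shrinks quadratically in the angular sampling resolution.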
Remark 2.2 Using the polar function $$F^*$$, we can characterize the set $$Lip_{d_{F}}$$ as (see the appendix if necessary)

$$Lip_{d_F} = \big\{ u: \overline{{\it{\Omega}}} \longrightarrow \mathbb{R} \ \big|\ u \text{ is Lipschitz continuous and } F^*(x, \nabla u(x))\le 1 \text{ a.e. } x\in {\it{\Omega}} \big\}.$$

Thanks to this remark, the DPMK problem (2.2) can be written as

$$\max\big\{ \mathcal{D}(\lambda, u) : 0\le u(x)\le \lambda,\ u \text{ is Lipschitz continuous},\ F^*(x, \nabla u(x))\le 1 \text{ a.e. } x\in {\it{\Omega}} \big\}.$$

Moreover, we have

Theorem 2.3 Under the assumptions of Theorem 2.1, setting $$V:=\mathbb{R}\times C^1(\overline{{\it{\Omega}}})$$ and $$Z:=C(\overline{{\it{\Omega}}})^{N}\times C(\overline{{\it{\Omega}}})\times C(\overline{{\it{\Omega}}}),$$ we have

$$\mathcal{K}(\sigma^*) = -\inf\big\{ \mathcal{F}(\lambda, u) + \mathcal{G}({\it{\Lambda}}(\lambda, u)) : (\lambda, u)\in V \big\}, \quad \text{(2.3)}$$

where $${\it{\Lambda}}\in \mathcal{L}(V, Z)$$ is given by

$${\it{\Lambda}}(\lambda, u) := (\nabla u, -u, u-\lambda) \quad \forall (\lambda, u)\in V,$$

and $$\mathcal{F}: V\longrightarrow (-\infty, +\infty]$$, $$\mathcal{G}: Z\longrightarrow (-\infty, +\infty]$$ are the l.s.c. convex functions given by

$$\mathcal{F}(\lambda, u) := -\int_{\overline{{\it{\Omega}}}} u\,\mathrm{d}(\nu-\mu) - \lambda(\mathbf{m}-\nu(\overline{{\it{\Omega}}})) \quad \forall (\lambda, u)\in V;$$

$$\mathcal{G}(q, z, w) := \begin{cases} 0 & \text{if } z(x)\le 0,\ w(x)\le 0,\ F^*(x, q(x))\le 1 \ \forall x\in \overline{{\it{\Omega}}}\\ +\infty & \text{otherwise} \end{cases} \quad \text{for } (q, z, w)\in Z.$$

To prove this theorem, we need the following lemma.

Lemma 2.4 Let $$\lambda\ge 0$$ be fixed. For any $$u\in L^\lambda_{d_{F}}$$, there exists a sequence of smooth functions $$u_{\varepsilon}\in C^\infty_{c}(\mathbb{R}^N) \bigcap L^\lambda_{d_{F}}$$ such that $$u_{\varepsilon} \rightrightarrows u$$ uniformly on $$\overline{{\it{\Omega}}}$$.

The result of the lemma is more or less known in some cases (see Igbida & Ta Thi, 2017 for the case where the function $$u$$ vanishes on the boundary). The proof in the general case is quite technical and will be given in the appendix.

Proof of Theorem 2.3. Thanks to Remark 2.2 and Lemma 2.4, we have

$$-\inf_{(\lambda, u)\in V} \mathcal{F}(\lambda, u) + \mathcal{G}({\it{\Lambda}}(\lambda, u)) = \sup\left\{ \int_{\overline{{\it{\Omega}}}} u\,\mathrm{d}(\nu-\mu) + \lambda(\mathbf{m}-\nu(\overline{{\it{\Omega}}})) : \lambda\ge 0,\ u\in C^1(\overline{{\it{\Omega}}})\cap L^\lambda_{d_F} \right\} = \max\big\{ \mathcal{D}(\lambda, u) : \lambda\ge 0 \text{ and } u\in L^\lambda_{d_F} \big\}.$$

Using the duality (2.2), the proof is completed. □

To end this section, we prove the following result, which will be useful for the proof of the convergence of our discretization.
Theorem 2.5 Under the assumptions of Theorem 2.1, we have

$$-\inf_{(\lambda, u)\in V} \mathcal{F}(\lambda, u) + \mathcal{G}({\it{\Lambda}}(\lambda, u)) = \min\left\{ \int_{\overline{{\it{\Omega}}}} F\Big(x, \frac{{\it{\Phi}}}{|{\it{\Phi}}|}(x)\Big)\,\mathrm{d}|{\it{\Phi}}| : ({\it{\Phi}}, \theta^0, \theta^1)\in \Psi_{\mathbf{m}}(\mu,\nu) \right\}, \quad \text{(2.4)}$$

where

$$\Psi_{\mathbf{m}}(\mu,\nu) := \Big\{ ({\it{\Phi}}, \theta^0, \theta^1)\in Z^* = \mathcal M_b(\overline{{\it{\Omega}}})^N\times \mathcal M_b(\overline{{\it{\Omega}}})\times \mathcal M_b(\overline{{\it{\Omega}}}) : \theta^0\ge 0,\ \theta^1\ge 0,\ \theta^1(\overline{{\it{\Omega}}}) = \nu(\overline{{\it{\Omega}}})-\mathbf{m} \ \text{ and } -\nabla\cdot {\it{\Phi}} = \nu-\theta^1-(\mu-\theta^0) \ \text{ with } {\it{\Phi}}\cdot n = 0 \text{ on } \partial {\it{\Omega}} \Big\}.$$

Actually, the minimal flow-type formulation

$$\min\left\{ \int_{\overline{{\it{\Omega}}}} F\Big(x, \frac{{\it{\Phi}}}{|{\it{\Phi}}|}(x)\Big)\,\mathrm{d}|{\it{\Phi}}| : ({\it{\Phi}}, \theta^0, \theta^1)\in \Psi_{\mathbf{m}}(\mu,\nu) \right\} \quad \text{(2.5)}$$

introduces the Beckmann problem (see Beckmann, 1952) for the optimal partial transport with Finsler distance costs. Note that in the balanced case, i.e., $$\mathbf{m}=\mu(\overline{{\it{\Omega}}})=\nu(\overline{{\it{\Omega}}})$$, the formulation (2.5) becomes

$$\min\left\{ \int_{\overline{{\it{\Omega}}}} F\Big(x, \frac{{\it{\Phi}}}{|{\it{\Phi}}|}(x)\Big)\,\mathrm{d}|{\it{\Phi}}| : {\it{\Phi}}\in \mathcal M_b(\overline{{\it{\Omega}}})^N,\ -\nabla\cdot {\it{\Phi}} = \nu-\mu \ \text{ with } {\it{\Phi}}\cdot n = 0 \text{ on } \partial {\it{\Omega}} \right\}. \quad \text{(2.6)}$$

An optimal solution $${\it{\Phi}}$$ of the problem (2.6) is called an optimal flow of transporting $$\mu$$ onto $$\nu$$. As known from optimal transport theory, the optimal flow gives a way to visualize the transportation. To prove Theorem 2.5, we will use well-known duality arguments. For convenience, let us recall here the Fenchel–Rockafellar duality. Consider the problem

$$\inf_{\phi\in V} \mathcal{F}(\phi) + \mathcal{G}({\it{\Lambda}} \phi), \quad \text{(2.7)}$$

where $$\mathcal{F}: V\longrightarrow (-\infty, +\infty]$$ and $$\mathcal{G}: Z\longrightarrow (-\infty, +\infty]$$ are convex and l.s.c. and $${\it{\Lambda}}\in \mathcal{L}(V, Z)$$, the space of continuous linear operators from $$V$$ to $$Z.$$ Denoting by $$\mathcal{F}^{*}$$ and $$\mathcal{G}^{*}$$ the conjugate functions (given by the Legendre–Fenchel transformation) of $$\mathcal{F}$$ and $$\mathcal{G}$$, respectively, and by $${\it{\Lambda}}^{*}$$ the adjoint operator of $${\it{\Lambda}},$$ it is not difficult to see that

$$\sup_{\sigma\in Z^*} \big( -\mathcal{F}^*(-{\it{\Lambda}}^* \sigma) - \mathcal{G}^*(\sigma) \big) \le \inf_{\phi\in V} \mathcal{F}(\phi) + \mathcal{G}({\it{\Lambda}} \phi),$$

where $$Z^*$$ is the topological dual space associated with $$Z$$. This is the so-called weak duality. For the strong duality, which corresponds to equality, we have the following well-known result.

Proposition 2.6 (cf.
Ekeland & Temam, 1976) Assume in addition that there exists $$\phi_{0}$$ such that $$\mathcal{F}(\phi_0)<+\infty$$ and $$\mathcal{G}({\it{\Lambda}} \phi_{0}) <+\infty,$$ with $$\mathcal{G}$$ continuous at $${\it{\Lambda}} \phi_0$$. Then the Fenchel–Rockafellar dual problem

$$\sup_{\sigma\in Z^*} \big( -\mathcal{F}^*(-{\it{\Lambda}}^* \sigma) - \mathcal{G}^*(\sigma) \big) \quad \text{(2.8)}$$

has at least one solution $$\sigma\in Z^*$$ and $$\inf$$ (2.7) = $$\max$$ (2.8). Moreover, in this case, $$\phi$$ is a solution to the primal problem (2.7) if and only if

$$-{\it{\Lambda}}^* \sigma \in \partial \mathcal{F}(\phi) \quad \text{and} \quad \sigma\in \partial \mathcal{G}({\it{\Lambda}} \phi). \quad \text{(2.9)}$$

Proof of Theorem 2.5. We work with the uniform convergence for the spaces $$C(\overline{{\it{\Omega}}})^N$$, $$C(\overline{{\it{\Omega}}})$$ and the norm $$\|u\|_{C^1}:=\max\{\|u\|_{\infty}, \|\nabla u\|_{\infty}\}$$ for $$C^1(\overline{{\it{\Omega}}})$$. It is not difficult to see that the hypotheses of Proposition 2.6 are satisfied. Now, let us compute the Fenchel–Rockafellar dual problem of (2.3). Since $$\mathcal{F}$$ is linear, $${\mathcal{F}}^*(-{\it{\Lambda}}^*({\it{\Phi}}, \theta^0, \theta^1))$$ is finite (and then always equal to $$0$$) if and only if

$$-{\it{\Lambda}}^*({\it{\Phi}}, \theta^0, \theta^1) = -(\mathbf{m}-\nu(\overline{{\it{\Omega}}}), \nu-\mu) \ \text{ in } V^*,$$

i.e.,

$$\langle {\it{\Phi}}, \nabla u\rangle - \langle \theta^0, u\rangle + \langle \theta^1, u-\lambda\rangle = \lambda(\mathbf{m}-\nu(\overline{{\it{\Omega}}})) + \langle \nu-\mu, u\rangle \quad \forall (\lambda, u)\in V.$$

This implies that

$$\int_{\overline{{\it{\Omega}}}} \nabla u\,\mathrm{d}{\it{\Phi}} = \int_{\overline{{\it{\Omega}}}} u\,\mathrm{d}(\nu-\theta^1) - \int_{\overline{{\it{\Omega}}}} u\,\mathrm{d}(\mu-\theta^0) \ \text{ for all } u\in C^1(\overline{{\it{\Omega}}})$$

and

$$-\lambda \int_{\overline{{\it{\Omega}}}} \mathrm{d}\theta^1 = \lambda(\mathbf{m}-\nu(\overline{{\it{\Omega}}})) \quad \forall \lambda\in \mathbb{R}.$$

These mean that

$$-\nabla\cdot {\it{\Phi}} = \nu-\theta^1-(\mu-\theta^0) \ \text{ with } {\it{\Phi}}\cdot n = 0 \text{ on } \partial {\it{\Omega}}$$

and

$$\theta^1(\overline{{\it{\Omega}}}) = \nu(\overline{{\it{\Omega}}})-\mathbf{m}.$$

We also have

$$\mathcal{G}^*({\it{\Phi}}, \theta^0, \theta^1) = \begin{cases} \int_{\overline{{\it{\Omega}}}} F\big(x, \frac{{\it{\Phi}}}{|{\it{\Phi}}|}(x)\big)\,\mathrm{d}|{\it{\Phi}}| & \text{if } \theta^0\ge 0,\ \theta^1\ge 0\\ +\infty & \text{otherwise} \end{cases} \quad \text{for any } ({\it{\Phi}}, \theta^0, \theta^1)\in Z^*.$$

Then the proof follows by Proposition 2.6. □

Remark 2.7 The optimality relations (2.9) read

$$\begin{cases} -\nabla\cdot {\it{\Phi}} = \nu-\theta^1-(\mu-\theta^0) \ \text{ and } {\it{\Phi}}\cdot n = 0 \text{ on } \partial {\it{\Omega}}\\ \theta^1(\overline{{\it{\Omega}}}) = \nu(\overline{{\it{\Omega}}})-\mathbf{m}\\ \langle {\it{\Phi}}, \nabla u\rangle \ge \langle {\it{\Phi}}, q\rangle \quad \forall q\in C(\overline{{\it{\Omega}}})^N,\ F^*(x, q(x))\le 1 \ \forall x\in \overline{{\it{\Omega}}}\\ \lambda\in \mathbb{R}^+,\ u\in C^1(\overline{{\it{\Omega}}})\bigcap L^\lambda_{d_F}\\ u=0,\ \theta^0\text{-a.e. in } \overline{{\it{\Omega}}}\\ u=\lambda,\ \theta^1\text{-a.e. in } \overline{{\it{\Omega}}}. \end{cases}$$

In fact, the optimality condition $$-{\it{\Lambda}}^* \sigma \in \partial \mathcal{F}(\phi)$$ gives the first two equations and $$\sigma\in \partial \mathcal{G}({\it{\Lambda}} \phi)$$ gives the last four.
Moreover, if $${\it{\Phi}}\in L^1({\it{\Omega}})^N$$, then the condition

$$\langle {\it{\Phi}}, \nabla u\rangle \ge \langle {\it{\Phi}}, q\rangle \quad \forall q\in C(\overline{{\it{\Omega}}})^N,\ F^*(x, q(x))\le 1 \ \forall x\in \overline{{\it{\Omega}}}$$

can be replaced by

$$F(x, {\it{\Phi}}(x)) = \langle \nabla u(x), {\it{\Phi}}(x)\rangle \ \text{ a.e. } x\in {\it{\Omega}}. \quad \text{(2.10)}$$

However, it is not clear in general that $${\it{\Phi}}$$ belongs to $$L^1({\it{\Omega}})^N$$. In the case where $${\it{\Omega}}$$ is convex and $$F(x, v):=|v|$$ is the Euclidean norm (or some other uniformly convex and smooth norm), $$L^p$$ regularity results are known under suitable assumptions on $$\mu$$ and $$\nu$$ (see, e.g., Feldman & McCann, 2002; De Pascale et al., 2004; De Pascale & Pratelli, 2004; Santambrogio, 2009). To our knowledge, the case of general Finsler metrics is still an open question. In the case where $${\it{\Phi}}$$ is a vector-valued measure, the condition (2.10) should be adapted to the tangential gradient. Rigorous formulations using the tangential gradient with respect to a measure, as well as rigorous proofs in the general case, can be found in the article by Igbida & Nguyen (2017) with $${\it{\Omega}}=\mathbb{R}^N$$. It is expected that $$\theta^0\le \mu$$ and $$\theta^1\le \nu$$ for optimal solutions $$({\it{\Phi}}, \theta^0, \theta^1)$$ of the minimal flow formulation (2.5). This is the case whenever $$\mathbf{m}\in [(\mu\wedge \nu)(\overline{{\it{\Omega}}}), {\mathbf{m}}_{\max}]$$, where $$\mu\wedge \nu$$ is the common mass measure of $$\mu$$ and $$\nu$$; i.e., if $$\mu, \nu\in L^1({\it{\Omega}})$$, then $$\mu \wedge \nu\in L^1({\it{\Omega}})$$ and

$$(\mu\wedge \nu)(x) = \min\{\mu(x), \nu(x)\} \ \text{ for a.e. } x\in {\it{\Omega}}.$$

In general, the measure $$\mu \wedge \nu$$ is defined by (see Ambrosio et al., 2000)

$$(\mu\wedge \nu)(A) = \inf\big\{ \mu(A_1)+\nu(A_2) : A_1, A_2 \text{ disjoint Borel sets such that } A_1\cup A_2 = A \big\}.$$

Proposition 2.8 Let $$\mathbf{m}\in [(\mu\wedge \nu)(\overline{{\it{\Omega}}}), {\mathbf{m}}_{\max}]$$ and $$({\it{\Phi}}, \theta^0, \theta^1)\in Z^*$$ be an optimal solution of (2.5). Then $$\theta^0\le \mu$$ and $$\theta^1\le \nu$$.
Moreover, $$(\mu-\theta^0, \nu -\theta^1)$$ is a couple of optimal active regions and $${\it{\Phi}}$$ is an optimal flow of transporting $$\mu-\theta^0$$ onto $$\nu -\theta^1$$.

Proof. The proof follows in the same way as Theorem 5.21 and Corollary 5.20 in Igbida & Nguyen (2017). □

Our next task is to compute an approximation of $${\it{\Phi}}$$ (in fact, approximations of $${\it{\Phi}}, u, \lambda, \theta^0, \theta^1$$). To do that, we will apply an augmented Lagrangian method to the DPMK problem (2.2).

3. Discretization and convergence

Coming back to the DPMK problem (2.2), our aim now is to derive, using a finite element approximation, the discretized problem associated with (2.2). To begin with, let us consider regular triangulations $$\mathcal{T}_{h}$$ of $$\overline{{\it{\Omega}}}$$. For a fixed integer $$k\ge 1$$, $$P_{k}$$ is the set of polynomials of degree at most $$k$$. Let $$E_{h}\subset H^{1}({\it{\Omega}})$$ be the space of continuous functions on $$\overline{{\it{\Omega}}}$$ whose restriction to each triangle of $$\mathcal{T}_{h}$$ belongs to $$P_{k}$$. We denote by $$Y_{h}$$ the space of vector-valued functions whose restrictions belong to $$(P_{k-1})^N$$ on each triangle of $$\mathcal{T}_{h}$$. Let $$f=\nu-\mu$$ and $$f_{h}\in E_{h}$$ be such that $$\{f_{h}\}$$ converges weakly* to $$f$$ in $$\mathcal{M}_{b}(\overline{{\it{\Omega}}})$$. Considering the finite-dimensional spaces

$$V_h = \mathbb{R}\times E_h, \qquad Z_h = Y_h\times E_h\times E_h,$$

we set

$${\it{\Lambda}}_h(\lambda, u) := (\nabla u, -u, u-\lambda)\in Z_h \ \text{ for } (\lambda, u)\in V_h, \qquad \mathcal{F}_h(\lambda, u) := -\langle u, f_h\rangle - \lambda(\mathbf{m}-\nu(\overline{{\it{\Omega}}})) \quad \forall (\lambda, u)\in V_h$$

and

$$\mathcal{G}_h(q, z, w) := \begin{cases} 0 & \text{if } z\le 0,\ w\le 0,\ F^*(x, q(x))\le 1 \text{ for a.e. } x\in {\it{\Omega}}\\ +\infty & \text{otherwise} \end{cases} \quad \text{for } (q, z, w)\in Z_h.$$

Then the finite-dimensional approximation of (2.2) reads

$$\inf_{(\lambda, u)\in V_h} \mathcal{F}_h(\lambda, u) + \mathcal{G}_h({\it{\Lambda}}_h(\lambda, u)). \quad \text{(3.1)}$$

The following result shows that this is a suitable approximation of (2.2).

Theorem 3.1 Assume that $$\mathbf{m}< \nu(\overline{{\it{\Omega}}})$$.
Let $$(\lambda_{h}, u_{h})\in V_{h}$$ be an optimal solution to the approximated problem (3.1) and $$({\it{\Phi}}_{h}, \theta^0_h, \theta^1_h)$$ be an optimal dual solution to (3.1). Then, up to a subsequence, $$(\lambda_{h}, u_h)$$ converges in $$\mathbb{R}\times C(\overline {\it{\Omega}})$$ to $$(\lambda, u)$$, an optimal solution of the DPMK problem (2.2), and $$({\it{\Phi}}_h, \theta^0_h, \theta^1_h)$$ converges weakly* in $$\mathcal M_b(\overline{\it{\Omega}})^N\times \mathcal M_b(\overline{\it{\Omega}})\times \mathcal M_b(\overline{\it{\Omega}})$$ to $$({\it{\Phi}}, \theta^0, \theta^1)$$, an optimal solution of (2.5).

Proof. Since $$\mathbf{m}<\nu(\overline{{\it{\Omega}}})$$, $$\{\lambda_h\}$$ is bounded in $$\mathbb{R}$$ and $$\{u_h\}$$ is bounded in $$(C(\overline{{\it{\Omega}}}), \|.\|_{\infty})$$. From the nondegeneracy of $$F$$ and the definitions of $$\mathcal{F}_{h}, \mathcal{G}_{h}, {\it{\Lambda}}_{h}$$, we have that $$\{u_h\}$$ is equi-Lipschitz and

$$u_h(y)-u_h(x) \le d_F(x,y) \quad \forall x, y\in \overline{{\it{\Omega}}}.$$

Using the Ascoli–Arzelà theorem, up to a subsequence, $$u_h \rightrightarrows u$$ uniformly on $$\overline{{\it{\Omega}}}$$ and $$\lambda_h \to \lambda$$. Obviously, $$\lambda\ge 0$$ and $$u\in L^\lambda_{d_{F}}$$. Now, by the optimality of $$(\lambda_h, u_h)$$ and of $$({\it{\Phi}}_h, \theta^0_h, \theta^1_h)$$, we have

$$-{\it{\Lambda}}_h^*({\it{\Phi}}_h, \theta^0_h, \theta^1_h) = -(\mathbf{m}-\nu(\overline{{\it{\Omega}}}), f_h) \ \text{ in } V_h^*$$

and

$$\mathcal{F}_h(\lambda_h, u_h) + \mathcal{G}_h({\it{\Lambda}}_h(\lambda_h, u_h)) = -\mathcal{F}_h^*(-{\it{\Lambda}}_h^*({\it{\Phi}}_h, \theta^0_h, \theta^1_h)) - \mathcal{G}_h^*({\it{\Phi}}_h, \theta^0_h, \theta^1_h).$$

More concretely,

$$\langle {\it{\Phi}}_h, \nabla v\rangle - \langle \theta^0_h, v\rangle + \langle \theta^1_h, v-s\rangle = s(\mathbf{m}-\nu(\overline{{\it{\Omega}}})) + \langle f_h, v\rangle \quad \forall (s, v)\in V_h, \quad \text{(3.2)}$$

$$\theta^0_h\ge 0, \quad \theta^1_h\ge 0, \quad \theta^1_h(\overline{{\it{\Omega}}}) = \nu(\overline{{\it{\Omega}}})-\mathbf{m} \quad \text{(3.3)}$$

and

$$\langle u_h, f_h\rangle + \lambda_h(\mathbf{m}-\nu(\overline{{\it{\Omega}}})) = \sup\big\{ \langle q, {\it{\Phi}}_h\rangle : q\in Y_h,\ F^*(x, q(x))\le 1 \text{ a.e. } x\in {\it{\Omega}} \big\}. \quad \text{(3.4)}$$

In (3.2), taking $$v=0$$ and $$s=1$$ (respectively, $$v=s=1$$), we see that $$\{\theta^1_h\}$$ (respectively, $$\{\theta^0_h\}$$) is bounded in $$\mathcal M_{b}(\overline{{\it{\Omega}}})$$.
Moreover, using (3.4) and the boundedness of $$(\lambda_h, u_h)$$, we deduce that $$\{{\it{\Phi}}_h\}$$ is bounded in $$\mathcal{M}_b(\overline{{\it{\Omega}}})^N.$$ So, up to a subsequence,

$$({\it{\Phi}}_h, \theta^0_h, \theta^1_h) \rightharpoonup ({\it{\Phi}}, \theta^0, \theta^1) \ \text{ in } \mathcal M_b(\overline{{\it{\Omega}}})^N\times \mathcal M_b(\overline{{\it{\Omega}}})\times \mathcal M_b(\overline{{\it{\Omega}}})\text{-}w^*.$$

Using (3.2) and (3.3), it is clear that $$({\it{\Phi}}, \theta^0, \theta^1)$$ satisfies

$$\langle {\it{\Phi}}, \nabla v\rangle - \langle \theta^0, v\rangle + \langle \theta^1, v-s\rangle = s(\mathbf{m}-\nu(\overline{{\it{\Omega}}})) + \langle f, v\rangle \quad \forall (s, v)\in V$$

and

$$\theta^0\ge 0, \quad \theta^1\ge 0, \quad \theta^1(\overline{{\it{\Omega}}}) = \nu(\overline{{\it{\Omega}}})-\mathbf{m},$$

i.e., $$({\it{\Phi}}, \theta^0, \theta^1)$$ is feasible for the minimal flow problem (2.5). Next, let us show the optimality of $$(\lambda, u)$$ and of $$({\it{\Phi}}, \theta^0, \theta^1)$$, i.e.,

$$\int_{\overline{{\it{\Omega}}}} F\Big(x, \frac{{\it{\Phi}}}{|{\it{\Phi}}|}(x)\Big)\,\mathrm{d}|{\it{\Phi}}| = \langle u, \nu-\mu\rangle + \lambda(\mathbf{m}-\nu(\overline{{\it{\Omega}}})). \quad \text{(3.5)}$$

We fix $$q\in C(\overline{{\it{\Omega}}})^N$$ such that $$F^*(x, q(x))\le 1 \quad\forall x\in \overline{{\it{\Omega}}}$$ and we consider $$q_h\in Y_{h}$$ such that $$\|q_{h}-q\|_{L^{\infty}({\it{\Omega}})} \to 0$$ as $$h\to 0$$. We see that

$$F^*(x, q_h(x)) = F^*(x, q(x)) + F^*(x, q_h(x)) - F^*(x, q(x)) \le 1 + O(h) \ \text{ a.e. } x\in {\it{\Omega}}.$$

Replacing $$q_h$$ by $$\frac{q_{h}}{1+ O(h)}$$, we can assume that $$q_{h}\in Y_h$$, $$F^*(x, q_h(x))\le 1 \text{ a.e. } x\in {\it{\Omega}}$$ and $$\|q_{h}-q\|_{L^{\infty}({\it{\Omega}})} \to 0$$ as $$h\to 0$$. Using (3.4), we have

$$\langle q, {\it{\Phi}}\rangle = \langle q_h, {\it{\Phi}}_h\rangle + \langle q, {\it{\Phi}}-{\it{\Phi}}_h\rangle + \langle q-q_h, {\it{\Phi}}_h\rangle \le \sup\big\{ \langle q_h, {\it{\Phi}}_h\rangle : q_h\in Y_h,\ F^*(x, q_h(x))\le 1 \text{ a.e. } x\in {\it{\Omega}} \big\} + O(h) = \langle u_h, f_h\rangle + \lambda_h(\mathbf{m}-\nu(\overline{{\it{\Omega}}})) + O(h).$$

Letting $$h\to 0$$, we get

$$\langle q, {\it{\Phi}}\rangle \le \langle u, \nu-\mu\rangle + \lambda(\mathbf{m}-\nu(\overline{{\it{\Omega}}})) \ \text{ for any } q\in C(\overline{{\it{\Omega}}})^N,\ F^*(x, q(x))\le 1 \ \forall x\in \overline{{\it{\Omega}}}.$$

Taking the supremum in $$q$$, we obtain

$$\int_{\overline{{\it{\Omega}}}} F\Big(x, \frac{{\it{\Phi}}}{|{\it{\Phi}}|}(x)\Big)\,\mathrm{d}|{\it{\Phi}}| \le \langle u, \nu-\mu\rangle + \lambda(\mathbf{m}-\nu(\overline{{\it{\Omega}}})).$$

Finally, thanks to the duality equality (2.4), this implies (3.5), i.e., the optimality of $$(\lambda, u)$$ and of $$({\it{\Phi}}, \theta^0, \theta^1)$$. □

Remark 3.2 In the case $$\mathbf{m}=\mathbf{m}_{\max}$$ (called unbalanced transport), the DPMK problem has a simpler formulation. So, for the purpose of implementation, we distinguish the two cases: partial transport and unbalanced transport.
In the unbalanced case, assume that $$\mathbf{m}=\mathbf{m}_{\max}=\nu(\overline{{\it{\Omega}}})$$ (i.e., $$\mu(\overline{{\it{\Omega}}})\ge \nu(\overline{{\it{\Omega}}})$$). Then the DPMK problem (2.2) can be written as

$$\max_{u\in Lip_{d_F},\ u\ge 0} \int_{\overline{{\it{\Omega}}}} u\,\mathrm{d}(\nu-\mu). \quad \text{(3.6)}$$

Using $$V_h=E_h$$, $$Z_h=Y_h \times E_h$$, $${\it{\Lambda}}_h u =(\nabla u, -u)$$ and

$$\mathcal{G}_h(q, z) = \begin{cases} 0 & \text{if } z\le 0,\ F^*(x, q(x))\le 1 \text{ a.e. } x\in {\it{\Omega}}\\ +\infty & \text{otherwise}, \end{cases}$$

a finite-dimensional approximation can be given by

$$\inf_{u\in V_h} -\langle u, f_h\rangle + \mathcal{G}_h({\it{\Lambda}}_h u). \quad \text{(3.7)}$$

As in Theorem 3.1, we can prove the convergence of this finite-dimensional approximation to the original problem (3.6). More precisely, we have

Proposition 3.3 Assume that $$\mathbf{m}=\nu(\overline{{\it{\Omega}}})$$. Let $$u_{h}\in V_{h}$$ be an optimal solution to the approximated problem (3.7) and $$({\it{\Phi}}_{h}, \theta^0_h)$$ be an optimal dual solution to (3.7). Then, up to a subsequence and translation by a constant, $$u_{h}$$ converges to $$u$$, an optimal solution of the DPMK problem (3.6), and $$({\it{\Phi}}_h, \theta^0_h)$$ converges to $$({\it{\Phi}}, \theta^0)$$, an optimal solution of (2.5) with $$\theta^1=0$$.

The proof of this proposition is similar to that of Theorem 3.1.

4. Solving the discretized problems

Our task now is to solve the finite-dimensional problems (3.1) and (3.7). First, let us recall the augmented Lagrangian method we are dealing with.

4.1 ALG2 method

Assume that $$V$$ and $$Z$$ are two Hilbert spaces. Consider the problem

$$\inf_{\phi\in V} \mathcal{F}(\phi) + \mathcal{G}({\it{\Lambda}} \phi), \quad \text{(4.1)}$$

where $$\mathcal{F}: V\longrightarrow (-\infty, +\infty]$$ and $$\mathcal{G}: Z\longrightarrow (-\infty, +\infty]$$ are convex and l.s.c. and $${\it{\Lambda}}\in \mathcal{L}(V, Z)$$. We introduce a new variable $$q\in Z$$ in the primal problem (4.1) and rewrite it in the form

$$\inf_{(\phi, q)\in V\times Z:\ {\it{\Lambda}}\phi = q} \mathcal{F}(\phi) + \mathcal{G}(q).$$

The augmented Lagrangian is given by

$$L(\phi, q; \sigma) := \mathcal{F}(\phi) + \mathcal{G}(q) + \langle \sigma, {\it{\Lambda}}\phi - q\rangle + \frac{r}{2}|{\it{\Lambda}}\phi - q|^2, \quad r>0.$$
The so-called ALG2 algorithm is as follows. For given $q_0, \sigma_0 \in Z$, we construct the sequences $\{\phi_i\}$, $\{q_i\}$ and $\{\sigma_i\}$, $i = 1, 2, \dots$, by

Step 1: Minimize $\inf_\phi L(\phi, q_i; \sigma_i)$, i.e.,
$$\phi_{i+1} \in \operatorname*{argmin}_{\phi \in V}\left\{\mathcal{F}(\phi) + \langle\sigma_i, \Lambda\phi\rangle + \frac{r}{2}|\Lambda\phi - q_i|^2\right\}.$$
Step 2: Minimize $\inf_{q \in Z} L(\phi_{i+1}, q; \sigma_i)$, i.e.,
$$q_{i+1} \in \operatorname*{argmin}_{q \in Z}\left\{\mathcal{G}(q) - \langle\sigma_i, q\rangle + \frac{r}{2}|\Lambda\phi_{i+1} - q|^2\right\}.$$
Step 3: Update the multiplier $\sigma$:
$$\sigma_{i+1} = \sigma_i + r(\Lambda\phi_{i+1} - q_{i+1}).$$
For the theory of this method and its interpretation, we refer the reader to Gabay & Mercier (1976), Glowinski et al. (1981), Fortin & Glowinski (1983), Glowinski & Le Tallec (1989) and Eckstein & Bertsekas (1992). Here, we recall a convergence result for this method which suffices for our discretized problems.

Theorem 4.1 (cf. Eckstein & Bertsekas, 1992, Theorem 8) Fix $r > 0$ and assume that $V = \mathbb{R}^n$, $Z = \mathbb{R}^m$ and that $\Lambda$ has full column rank. If there exists a solution to the optimality relations (2.9), then $\{\phi_i\}$ converges to a solution of the primal problem (2.7) and $\{\sigma_i\}$ converges to a solution of the dual problem (2.8). Moreover, $\{q_i\}$ converges to $\Lambda\phi^*$, where $\phi^*$ is the limit of $\{\phi_i\}$.

The proof of this result in the case of finite-dimensional spaces $V$ and $Z$ can be found in Eckstein & Bertsekas (1992). The result holds true in infinite-dimensional Hilbert spaces under additional assumptions; see Fortin & Glowinski (1983) and Glowinski & Le Tallec (1989) for more details in this direction. Next, we apply the ALG2 method to the discretized problems.
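The three-step iteration above can be sketched on a toy finite-dimensional instance. The following is a minimal illustration, not the paper's FreeFem++ implementation: we take $V = Z = \mathbb{R}^2$, $\Lambda$ the identity, $\mathcal{F}(\phi) = \frac{1}{2}|\phi - b|^2$ and $\mathcal{G}$ the indicator of $\{q \le 0\}$, so Step 2 reduces to a pointwise projection of the same form as (4.2) below; the data $b$, the penalty $r$ and the iteration count are arbitrary choices made for the example.

```python
# ALG2 (ADMM) sketch on a toy problem: minimize 0.5*||phi - b||^2
# subject to phi <= 0 componentwise, written as F(phi) + G(Lambda*phi)
# with Lambda = identity and G the indicator of the nonpositive orthant.

def alg2_toy(b, r=1.0, n_iter=200):
    n = len(b)
    q = [0.0] * n
    sigma = [0.0] * n
    phi = [0.0] * n
    for _ in range(n_iter):
        # Step 1: phi-update -- unconstrained quadratic minimization,
        # from the optimality condition (phi - b) + sigma + r*(phi - q) = 0
        phi = [(b[j] - sigma[j] + r * q[j]) / (1.0 + r) for j in range(n)]
        # Step 2: q-update -- pointwise projection onto {q <= 0}
        q = [min(phi[j] + sigma[j] / r, 0.0) for j in range(n)]
        # Step 3: multiplier update
        sigma = [sigma[j] + r * (phi[j] - q[j]) for j in range(n)]
    return phi, sigma

phi, sigma = alg2_toy([1.0, -2.0])
print(phi)   # close to [0.0, -2.0]
```

Here the exact solution is the projection of $b$ onto the nonpositive orthant, $\min(b, 0)$ componentwise, so the iterates can be checked against it directly; the multiplier converges to the KKT multiplier of the active constraint.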
To simplify notation, let us drop the subscript $h$ in $(\lambda_h, u_h)$ and $(\Phi_h, \theta^0_h, \theta^1_h)$. Thanks to Remark 3.2, we treat separately the case $\mathbf{m} = \nu(\overline{\Omega})$ and the case $\mathbf{m} < \nu(\overline{\Omega})$.

4.2 Partial transport ($\mathbf{m} < \nu(\overline{\Omega})$)

Given $(q_i, z_i, w_i)$ and $(\Phi_i, \theta^0_i, \theta^1_i)$ at iteration $i$, we compute

Step 1:
$$\begin{aligned} (\lambda_{i+1}, u_{i+1}) &\in \operatorname*{argmin}_{(\lambda, u) \in V_h} \mathcal{F}_h(\lambda, u) + \langle(\Phi_i, \theta^0_i, \theta^1_i), \Lambda_h(\lambda, u)\rangle + \frac{r}{2}|\Lambda_h(\lambda, u) - (q_i, z_i, w_i)|^2 \\ &= \operatorname*{argmin}_{(\lambda, u) \in V_h} -\langle u, f_h\rangle - \lambda(\mathbf{m} - \nu(\overline{\Omega})) + \langle\Phi_i, \nabla u\rangle + \langle\theta^0_i, -u\rangle + \langle\theta^1_i, u - \lambda\rangle \\ &\qquad + \frac{r}{2}|\nabla u - q_i|^2 + \frac{r}{2}|u + z_i|^2 + \frac{r}{2}|u - \lambda - w_i|^2. \end{aligned}$$
Step 2:
$$\begin{aligned} (q_{i+1}, z_{i+1}, w_{i+1}) &\in \operatorname*{argmin}_{(q, z, w) \in Z_h} \mathcal{G}_h(q, z, w) - \langle(\Phi_i, \theta^0_i, \theta^1_i), (q, z, w)\rangle + \frac{r}{2}|\Lambda_h(\lambda_{i+1}, u_{i+1}) - (q, z, w)|^2 \\ &= \operatorname*{argmin}_{(q, z, w) \in Z_h} \mathbb{I}_{[F^*(\cdot, q(\cdot)) \le 1]}(q) + \mathbb{I}_{[z \le 0]}(z) + \mathbb{I}_{[w \le 0]}(w) - \langle\Phi_i, q\rangle - \langle\theta^0_i, z\rangle - \langle\theta^1_i, w\rangle \\ &\qquad + \frac{r}{2}|\nabla u_{i+1} - q|^2 + \frac{r}{2}|u_{i+1} + z|^2 + \frac{r}{2}|u_{i+1} - \lambda_{i+1} - w|^2. \end{aligned}$$
Step 3: Update the multiplier:
$$(\Phi_{i+1}, \theta^0_{i+1}, \theta^1_{i+1}) = (\Phi_i, \theta^0_i, \theta^1_i) + r\left(\nabla u_{i+1} - q_{i+1},\ -u_{i+1} - z_{i+1},\ u_{i+1} - \lambda_{i+1} - w_{i+1}\right).$$

Before giving numerical results, let us comment briefly on the above iteration. Overall, Step 1 is a quadratic program, Step 2 can be computed easily in many cases, and Step 3 is an explicit update. We denote by $\mathrm{Proj}_C(\cdot)$ the projection onto a closed convex subset $C$. In Step 1, we split the computation of the couple $(\lambda_{i+1}, u_{i+1})$ into two steps: we first minimize w.r.t. $u$ to compute $u_{i+1}$, and then use $u_{i+1}$ to compute $\lambda_{i+1}$. More precisely, we proceed for Step 1 as follows:

(1) For $u_{i+1}$,
$$\begin{aligned} u_{i+1} \in \operatorname*{argmin}_{u \in E_h} &-\langle u, f_h\rangle + \langle\Phi_i, \nabla u\rangle + \langle\theta^0_i, -u\rangle + \langle\theta^1_i, u\rangle \\ &+ \frac{r}{2}|\nabla u - q_i|^2 + \frac{r}{2}|u + z_i|^2 + \frac{r}{2}|u - \lambda_i - w_i|^2. \end{aligned}$$
This is equivalent to
$$\begin{aligned} r\langle\nabla u_{i+1}, \nabla v\rangle + 2r\langle u_{i+1}, v\rangle &= \langle f_h, v\rangle - \langle\Phi_i, \nabla v\rangle + \langle\theta^0_i, v\rangle - \langle\theta^1_i, v\rangle \\ &\quad + r\langle q_i, \nabla v\rangle - r\langle z_i, v\rangle + r\langle\lambda_i + w_i, v\rangle \quad \forall v \in E_h. \end{aligned}$$
Note that this equation is linear with a symmetric positive definite coefficient matrix.
(2) For $\lambda_{i+1}$, it is computed explicitly:
$$\begin{aligned} \lambda_{i+1} &\in \operatorname*{argmin}_{s \in \mathbb{R}} -s(\mathbf{m} - \nu(\overline{\Omega})) + \langle\theta^1_i, u_{i+1} - s\rangle + \frac{r}{2}\langle u_{i+1} - s - w_i, u_{i+1} - s - w_i\rangle \\ &= -\frac{\nu(\overline{\Omega}) - \mathbf{m} - \int_{\overline{\Omega}}\theta^1_i + r\int_\Omega (w_i - u_{i+1})}{r\int_\Omega 1}. \end{aligned}$$
In Step 2, the variables $q, z, w$ are independent, so we solve for them separately:

(1) For $z_{i+1}$ and $w_{i+1}$: choosing the $P_2$ finite element for $z_{i+1}$ and $w_{i+1}$, at each vertex $x_k$,
$$z_{i+1}(x_k) = \mathrm{Proj}_{\{t \in \mathbb{R}:\ t \le 0\}}\left(-u_{i+1}(x_k) + \frac{\theta^0_i(x_k)}{r}\right) = \min\left(-u_{i+1}(x_k) + \frac{\theta^0_i(x_k)}{r},\ 0\right) \tag{4.2}$$
and
$$w_{i+1}(x_k) = \mathrm{Proj}_{\{t \in \mathbb{R}:\ t \le 0\}}\left(u_{i+1}(x_k) - \lambda_{i+1} + \frac{\theta^1_i(x_k)}{r}\right) = \min\left(u_{i+1}(x_k) - \lambda_{i+1} + \frac{\theta^1_i(x_k)}{r},\ 0\right). \tag{4.3}$$
(2) For $q_{i+1}$: choosing the $P_1$ finite element for $q_{i+1}$, at each vertex $x_l$,
$$q_{i+1}(x_l) = \mathrm{Proj}_{B_{F^*(x_l, \cdot)}}\left(\nabla u_{i+1}(x_l) + \frac{\Phi_i(x_l)}{r}\right), \tag{4.4}$$
where $B_{F^*(x, \cdot)} := \{q \in \mathbb{R}^N : F^*(x, q) \le 1\}$ is the unit ball for $F^*(x, \cdot)$.

It remains to explain how we compute the projection onto $B_{F^*(x_l, \cdot)}$. This issue was recently discussed in Benamou et al. (2017) for Riemann-type Finsler distances and for crystalline norms. For the convenience of the reader, we recall here the case where the unit ball of $F(x, \cdot)$ is a (not necessarily symmetric) convex polytope. For brevity, we omit the dependence on $x$ in $F$ and $F^*$. Given $d_1, \dots, d_k \ne 0$ such that $\max_{1\le i\le k}\{\langle v, d_i\rangle\} > 0$ for any $0 \ne v \in \mathbb{R}^N$, we consider the nonsymmetric Finsler metric given by
$$F(v) := \max_{1 \le i \le k}\{\langle v, d_i\rangle\} \quad \text{for any } v \in \mathbb{R}^N. \tag{4.5}$$
It is not difficult to see that the unit ball $B^*$ corresponding to $F^*$ is exactly the convex hull of $\{d_i\}$:
$$B^* = \mathrm{conv}(d_i,\ i = 1, \dots, k).$$
Thus, we need to compute the projection onto the convex hull of finitely many points. In dimension 2, the projection onto $B^*$ can be performed as follows: compute the successive vertices $S_1, \dots, S_n$. If $q \notin B^*$, compute the projections of $q$ onto the segments $[S_i, S_{i+1}]$ and compare these projections to choose the right one.
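The two-dimensional segment-by-segment procedure just described can be sketched in code. The following is an illustrative stand-alone implementation (the vertex list and the test point are made-up examples, not the authors' FreeFem++ code):

```python
# Projection onto the convex hull B* = conv(S_1, ..., S_n) in R^2:
# project q onto every edge [S_i, S_{i+1}] and keep the closest candidate;
# points already inside the hull are returned unchanged.

def proj_segment(q, a, b):
    """Euclidean projection of q onto the segment [a, b]."""
    ab = (b[0] - a[0], b[1] - a[1])
    aq = (q[0] - a[0], q[1] - a[1])
    denom = ab[0] ** 2 + ab[1] ** 2
    t = 0.0 if denom == 0.0 else max(0.0, min(1.0, (aq[0] * ab[0] + aq[1] * ab[1]) / denom))
    return (a[0] + t * ab[0], a[1] + t * ab[1])

def inside(q, verts):
    """True if q lies in the convex polygon with counterclockwise vertices."""
    n = len(verts)
    for i in range(n):
        a, b = verts[i], verts[(i + 1) % n]
        if (b[0] - a[0]) * (q[1] - a[1]) - (b[1] - a[1]) * (q[0] - a[0]) < 0.0:
            return False  # q is strictly to the right of the edge a -> b
    return True

def proj_polygon(q, verts):
    """Euclidean projection of q onto the convex polygon conv(verts)."""
    if inside(q, verts):
        return q
    best, best_d2 = None, float("inf")
    n = len(verts)
    for i in range(n):
        p = proj_segment(q, verts[i], verts[(i + 1) % n])
        d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
        if d2 < best_d2:
            best, best_d2 = p, d2
    return best

# Unit ball of F* for the crystalline norm F(v) = |v_1| + |v_2|, i.e. the
# convex hull of d_1 = (1,1), d_2 = (-1,1), d_3 = (-1,-1), d_4 = (1,-1).
square = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]
print(proj_polygon((3.0, 0.0), square))   # -> (1.0, 0.0)
```

The same routine applies to any of the crystalline balls used in the numerical experiments by changing the vertex list.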
Another way, as in Benamou et al. (2017), is to compute outward orthogonal vectors $v_1, \dots, v_n$ (Fig. 1). If $q$ belongs to $[S_i, S_{i+1}] + \mathbb{R}_+ v_i$, then the projection coincides with the projection onto the line through $S_i, S_{i+1}$. If $q$ belongs to the sector $S_i + \mathbb{R}_+ v_{i-1} + \mathbb{R}_+ v_i$, the projection is $S_i$.

Fig. 1. Illustration of the projection.

4.3 Unbalanced transport ($\mathbf{m} = \nu(\overline{\Omega})$)

Thanks to Remark 3.2, in this particular case we can simplify the algorithm by ignoring the variable $\lambda$. With similar considerations for $\Lambda_h u = (\nabla u, -u)$, we get the following iteration:

Step 1:
$$u_{i+1} \in \operatorname*{argmin}_{u \in E_h} -\langle u, f_h\rangle + \langle\Phi_i, \nabla u\rangle + \langle\theta^0_i, -u\rangle + \frac{r}{2}|\nabla u - q_i|^2 + \frac{r}{2}|u + z_i|^2.$$
Equivalently,
$$r\langle\nabla u_{i+1}, \nabla v\rangle + r\langle u_{i+1}, v\rangle = \langle f_h, v\rangle - \langle\Phi_i, \nabla v\rangle + \langle\theta^0_i, v\rangle + r\langle q_i, \nabla v\rangle - r\langle z_i, v\rangle \quad \forall v \in E_h. \tag{4.6}$$
Step 2:
(1) For $z_{i+1}$: choosing the $P_2$ finite element for $z_{i+1}$, at each vertex $x_k$,
$$z_{i+1}(x_k) = \mathrm{Proj}_{\{t \in \mathbb{R}:\ t \le 0\}}\left(-u_{i+1}(x_k) + \frac{\theta^0_i(x_k)}{r}\right) = \min\left(-u_{i+1}(x_k) + \frac{\theta^0_i(x_k)}{r},\ 0\right). \tag{4.7}$$
(2) For $q_{i+1}$: choosing the $P_1$ finite element, at each vertex $x_l$,
$$q_{i+1}(x_l) = \mathrm{Proj}_{B_{F^*(x_l, \cdot)}}\left(\nabla u_{i+1}(x_l) + \frac{\Phi_i(x_l)}{r}\right).$$
Step 3: $(\Phi_{i+1}, \theta^0_{i+1}) = (\Phi_i, \theta^0_i) + r(\nabla u_{i+1} - q_{i+1},\ -u_{i+1} - z_{i+1})$.

5. Numerical experiments

For the numerical implementation, we use the FreeFem++ software (Hecht, 2012), building on Benamou & Brenier (2000) and Benamou & Carlier (2015). We use the $P_2$ finite element for $u_i, z_i, w_i, \theta^0_i, \theta^1_i$ and the $P_1$ finite element for $\Phi_i, q_i$.

5.1 Stopping criterion

In the computational version, the measures $\mu$ and $\nu$ are approximated by non-negative regular functions that we again denote by $\mu$ and $\nu$.
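This regularization of the measures can be sketched numerically. Below is a minimal illustration (the grid resolution $n$ and the bandwidth $\varepsilon$ are arbitrary choices, not taken from the paper): a Dirac mass such as $\delta_{(0.5, 0.5)}$ is replaced by a Gaussian bump on a uniform grid, rescaled so that its discrete total mass is 1.

```python
# Replace the Dirac mass delta_{(0.5, 0.5)} by a normalized Gaussian bump on a
# uniform grid over [0, 1]^2 (midpoint rule); n and eps are arbitrary choices.
import math

n, eps = 200, 0.05
h = 1.0 / n
mass = 0.0
for i in range(n):
    for j in range(n):
        x, y = (i + 0.5) * h, (j + 0.5) * h
        mass += math.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / eps ** 2) * h * h
# Before normalization the total mass is close to pi * eps^2 (the integral of
# the bump over the plane); rescaling by 1/mass gives a smooth non-negative
# function with total mass 1, like the Dirac mass it replaces.
scale = 1.0 / mass
print(mass, math.pi * eps ** 2)
```

Any density, e.g. the characteristic functions of discs used below, can be smoothed and renormalized in the same way before feeding it to the discrete problem.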
We use the following stopping criteria.

For the partial transport:
(1) $\text{MIN-MAX} := \min\left\{\min_{\overline{\Omega}} u(x),\ \lambda - \max_{\overline{\Omega}} u(x),\ \min_{\overline{\Omega}} \theta^0(x),\ \min_{\overline{\Omega}} \theta^1(x)\right\}$.
(2) $\text{Max-Lip} := \sup_{\overline{\Omega}} F^*(x, \nabla u(x))$.
(3) $\text{DIV} := \|\nabla\cdot\Phi + \nu - \theta^1 - \mu + \theta^0\|_{L^2}$.
(4) $\text{DUAL} := \|F(x, \Phi(x)) - \Phi(x)\cdot\nabla u\|_{L^2}$.
(5) $\text{MASS} := \left|\int(\nu - \theta^1)\,\mathrm{d}x - \mathbf{m}\right|$.

For the unbalanced transport, we change:
(1) $\text{MIN-MAX} := \min\left\{\min_{\overline{\Omega}} u(x),\ \min_{\overline{\Omega}} \theta^0(x)\right\}$.
(2) $\text{DIV} := \|\nabla\cdot\Phi + \nu - \mu + \theta^0\|_{L^2}$.

We expect $\text{MIN-MAX} \ge 0$ and $\text{Max-Lip} \le 1$, while DIV, DUAL and MASS should be small.

5.2 Some examples

In all the examples below, we take $\Omega = [0, 1]\times[0, 1]$. We test the Riemannian case and the crystalline case. For the latter, we consider a Finsler metric of the form $F(x, v) = \max_{1\le i\le k}\{\langle v, d_i\rangle\}$ with given directions $d_1, \dots, d_k$ such that $\max_{1\le i\le k}\{\langle v, d_i\rangle\} > 0$ for any $0 \ne v \in \mathbb{R}^2$.

5.2.1 For the unbalanced transport

Example 5.1 Take $\mu = 3\mathcal{L}^2_\Omega$ and $\nu = \delta_{(0.5, 0.5)}$, the Dirac mass at $(0.5, 0.5)$. The Finsler metric is the Euclidean one. The optimal flow is given in Fig. 2 and the stopping criteria at each iteration are given in Fig. 3.

Fig. 2. Optimal flow for $\mu = 3$, $\nu = \delta_{(0.5, 0.5)}$, $F(x, v) = |v|$.

Fig. 3. Stopping criteria at each iteration.
Example 5.2 We take $\mu$ and $\nu$ as in the previous example and the Finsler metric given by $F(x, v) := |v_1| + |v_2|$ for $v = (v_1, v_2) \in \mathbb{R}^2$. This corresponds to the crystalline norm with $d_1 = (1, 1)$, $d_2 = (-1, 1)$, $d_3 = (-1, -1)$ and $d_4 = (1, -1)$. The optimal flow is given in Fig. 4 and the stopping criteria at each iteration are given in Fig. 5.

Fig. 4. Optimal flow for $\mu = 3$, $\nu = \delta_{(0.5, 0.5)}$, $F(x, (v_1, v_2)) = |v_1| + |v_2|$.

Fig. 5. Stopping criteria at each iteration.

5.2.2 For the partial transport

Example 5.3 Take $\mu = 4\chi_{[(x-0.3)^2 + (y-0.2)^2 < 0.03]}$ and $\nu = 4\chi_{[(x-0.7)^2 + (y-0.8)^2 < 0.03]}$. The mass of the transport is $\mathbf{m} := \frac{\nu(\overline{\Omega})}{2}$. We test different Finsler metrics. In each figure below, the left subfigure illustrates the unit ball of $F$ and the right subfigure gives the numerical result (see Figs 6–9). The stopping criteria are summarized in Table 1.
Table 1. Stopping criteria for 800 iterations

| Case | DIV         | DUAL        | MASS        | MIN-MAX      | Max-Lip | Execution time (s) |
|------|-------------|-------------|-------------|--------------|---------|--------------------|
| 1    | 2.48182e-05 | 9.5294e-06  | 0.000161361 | -0.0149942   | 1.00068 | 357                |
| 2    | 3.38395e-05 | 5.58717e-05 | 0.000195881 | -0.00120123  | 1.00248 | 867                |
| 3    | 7.44768e-05 | 5.5997e-05  | 6.66404e-06 | -0.00272389  | 1.00351 | 1269               |
| 4    | 6.33726e-05 | 3.20691e-05 | 0.000120909 | -0.0104915   | 1.02572 | 1123               |

Fig. 6. Case 1: $F(x, v) = |v|$.

Fig. 7. Case 2: the crystalline case with $d_1 = (1, 1)$, $d_2 = (-1, 1)$, $d_3 = (-1, -1)$ and $d_4 = (1, -1)$.

Fig. 8. Case 3: the crystalline case with $d_1 = (1, 0)$, $d_2 = (\frac{1}{5}, \frac{1}{5})$, $d_3 = (-\frac{1}{5}, \frac{1}{5})$, $d_4 = (-\frac{1}{5}, -\frac{1}{5})$ and $d_5 = (\frac{1}{5}, -\frac{1}{5})$, which makes the transport more expensive in the direction of the vector $(1, 0)$.

Fig. 9.
Case 4: the crystalline case with $d_1 = (1, -1)$, $d_2 = (1, -\frac{4}{5})$, $d_3 = (-\frac{4}{5}, 1)$, $d_4 = (-1, 1)$ and $d_5 = (-1, -1)$, which makes the transport cheaper in the direction of the vector $(1, 1)$.

Example 5.4 Let $\mu = 2\chi_{[(x-0.2)^2 + (y-0.2)^2 < 0.03]} + 2\chi_{[(x-0.6)^2 + (y-0.1)^2 < 0.01]}$ and $\nu = 2\chi_{[(x-0.6)^2 + (y-0.8)^2 < 0.03]}$. In this example, we take the Euclidean norm and let $\mathbf{m}$ vary over the values $\mathbf{m}_i = \frac{i}{6}\min\{\mu(\Omega), \nu(\Omega)\}$, $i = 1, \dots, 6$. The results are given in Fig. 10.

Fig. 10. Optimal flows.

Acknowledgements

The authors are grateful to J. D. Benamou and G. Carlier, who provide some ALG2 code at https://team.inria.fr/mokaplan/software/. Some parts of our code are inspired by their work.

Appendix

Our aim here is to prove Lemma A.1, which gives a smooth approximation of a 1-$d_F$ Lipschitz continuous function for continuous nondegenerate Finsler metrics $F$. This result is more or less known in some particular cases; however, we could not find a rigorous proof of the general case in the literature.

Lemma A.1 Let $\Omega$ be a connected bounded Lipschitz domain and $F$ a continuous nondegenerate Finsler metric on $\overline{\Omega}$. For any Lipschitz continuous function $u$ on $\overline{\Omega}$ satisfying
$$F^*(x, \nabla u(x)) \le 1 \quad \text{a.e. } x \in \Omega, \tag{A.1}$$
there exists a sequence of functions $u_\varepsilon \in C^\infty_c(\mathbb{R}^N)$ such that
$$F^*(x, \nabla u_\varepsilon(x)) \le 1 \quad \forall x \in \overline{\Omega}$$
and
$$u_\varepsilon \rightrightarrows u \quad \text{uniformly on } \overline{\Omega}.$$
Note that $F$ and $F^*$ are defined only on $\overline{\Omega}$ and that the gradient of $u$ is controlled by (A.1) only inside $\Omega$. If we use the standard convolution to define $u_\varepsilon$, the value of $u_\varepsilon(x)$ is affected by the values of $u(y)$ outside $\overline{\Omega}$, which remain uncontrolled. To overcome this difficulty, if $x$ is near the boundary, we move it slightly into the interior of $\Omega$ before taking the convolution. To this aim, we use a smooth partition of unity to handle the approximation of $u$ near the boundary.

Proof. Set
$$\tilde{u}(x) := \begin{cases} u(x) & \text{if } x \in \overline{\Omega}, \\ 0 & \text{otherwise,} \end{cases} \qquad \forall x \in \mathbb{R}^N.$$

Step 1: Fix $z \in \partial\Omega$. Since $\Omega$ is a Lipschitz domain, there exist $r_z > 0$ and a Lipschitz continuous function $\gamma_z : \mathbb{R}^{N-1} \to \mathbb{R}$ such that (up to rotating and relabeling the coordinates if necessary)
$$\Omega \cap B(z, r_z) = \{x \mid x_N > \gamma_z(x_1, \dots, x_{N-1})\} \cap B(z, r_z).$$
Set $U_z := \Omega \cap B(z, \frac{r_z}{2})$. For any $x \in \mathbb{R}^N$, take
$$x^\varepsilon_z := x + \varepsilon\lambda_z e_N, \tag{A.2}$$
where $\lambda_z$ is a sufficiently large fixed constant and $\varepsilon$ is small, say $\lambda_z \ge \mathrm{Lip}(\gamma_z) + 1$ and $0 < \varepsilon < \frac{r_z}{2(\lambda_z + 1)}$. By this choice and the Lipschitz property of $\gamma_z$, we see that
$$B(x^\varepsilon_z, \varepsilon) \subset \Omega \cap B(z, r_z) \quad \text{for all } x \in U_z. \tag{A.3}$$
Define
$$\tilde{u}_\varepsilon(x) := \int_{\mathbb{R}^N} \rho_\varepsilon(y)\,\tilde{u}(x^\varepsilon_z - y)\,\mathrm{d}y = \int_{B(x^\varepsilon_z, \varepsilon)} \rho_\varepsilon(x^\varepsilon_z - y)\,\tilde{u}(y)\,\mathrm{d}y \quad \text{for all } x \in \mathbb{R}^N, \tag{A.4}$$
where $\rho_\varepsilon$ is the standard mollifier on $\mathbb{R}^N$. Obviously, $\tilde{u}_\varepsilon \in C^\infty_c(\mathbb{R}^N)$. Using (A.3), (A.4) and the continuity of $u$ on $\overline{\Omega}$, we get
$$\tilde{u}_\varepsilon \rightrightarrows u \quad \text{on } \overline{U}_z.$$
Step 2: Now, using the compactness of $\partial\Omega$ and $\partial\Omega \subset \bigcup_{z \in \partial\Omega} B(z, \frac{r_z}{2})$, there exist $z_1, \dots, z_n \in \partial\Omega$ such that
$$\partial\Omega \subset \bigcup_{i=1}^n B\left(z_i, \frac{r_{z_i}}{2}\right).$$
For short, we write $r_i, U_i, x_i$ instead of $r_{z_i}, U_{z_i}, x_{z_i}$. Take an open set $U_0 \Subset \Omega$ such that
$$\overline{\Omega} \subset \bigcup_{i=1}^n B\left(z_i, \frac{r_i}{2}\right) \cup U_0.$$
Let $\{\phi_i\}_{i=0}^n$ be a smooth partition of unity on $\overline{\Omega}$, subordinate to $\{U_0, B(z_1, \frac{r_1}{2}), \dots, B(z_n, \frac{r_n}{2})\}$, that is,
$$\begin{cases} \phi_i \in C^\infty_c(\mathbb{R}^N),\quad 0 \le \phi_i \le 1, & \forall i = 0, \dots, n, \\ \mathrm{supp}(\phi_i) \Subset B(z_i, \frac{r_i}{2})\ \forall i = 1, \dots, n, \quad \mathrm{supp}(\phi_0) \Subset U_0, \\ \sum_{i=0}^n \phi_i(x) = 1 & \text{for all } x \in \overline{\Omega}. \end{cases}$$
By Step 1, there exist $\tilde{u}^1_\varepsilon, \dots, \tilde{u}^n_\varepsilon \in C^\infty_c(\mathbb{R}^N)$ such that
$$\tilde{u}^i_\varepsilon \rightrightarrows u \quad \text{on } \overline{U}_i,\quad i = 1, \dots, n.$$
For $i = 0$, since $U_0 \Subset \Omega$, we can take $\tilde{u}^0_\varepsilon := \rho_\varepsilon \star \tilde{u} \in C^\infty_c(\mathbb{R}^N)$, and $\tilde{u}^0_\varepsilon \rightrightarrows u$ on $\overline{U}_0$. Set
$$u_\varepsilon := \frac{1}{1 + C\varepsilon + w(\varepsilon)} \sum_{i=0}^n \phi_i \tilde{u}^i_\varepsilon,$$
where the constant $C$ is chosen later and
$$w(\varepsilon) := \sup\left\{|F^*(x, p) - F^*(y, p)| : x, y \in \overline{\Omega},\ |x - y| \le M\varepsilon,\ |p| \le \|\nabla u\|_{L^\infty}\right\}$$
with the constant $M := \max_{1\le i\le n}\{\lambda_{z_i} + 1\}$, where $\lambda_{z_i}$ is given in Step 1. We show that $u_\varepsilon$ satisfies all the desired properties. By construction, $u_\varepsilon \in C^\infty_c(\mathbb{R}^N)$ and
$$u_\varepsilon \rightrightarrows \sum_{i=0}^n \phi_i u = u \quad \text{on } \overline{\Omega}.$$
Finally, we show that $F^*(x, \nabla u_\varepsilon(x)) \le 1$ for all $x \in \overline{\Omega}$. Indeed, for any $x \in \Omega$: if $x \in U_i$, $i = 1, \dots, n$ (near the boundary of $\Omega$), we move $x$ a bit into the interior of $\Omega$, to $x^\varepsilon_i := x^\varepsilon_{z_i}$ (see (A.2) and (A.3)); if $x \in U_0$, we set $x^\varepsilon_0 = x$.
We have
$$\begin{aligned} \nabla u_\varepsilon(x) &= \frac{1}{1 + C\varepsilon + w(\varepsilon)}\left(\sum_{i=0}^n \nabla\phi_i(x)\,\tilde{u}^i_\varepsilon(x) + \sum_{i=0}^n \phi_i(x)\,\nabla\tilde{u}^i_\varepsilon(x)\right) \\ &= \frac{1}{1 + C\varepsilon + w(\varepsilon)}\left(\sum_{i=0}^n \nabla\phi_i(x) \int_{B(x^\varepsilon_i, \varepsilon)} \rho_\varepsilon(x^\varepsilon_i - y)\,u(y)\,\mathrm{d}y + \sum_{i=0}^n \phi_i(x) \int_{B(x^\varepsilon_i, \varepsilon)} \rho_\varepsilon(x^\varepsilon_i - y)\,\nabla u(y)\,\mathrm{d}y\right). \end{aligned}$$
The first sum on the right-hand side has a small norm. Indeed, using the fact that
$$\sum_{i=0}^n \nabla\phi_i(x)\,u(x) = 0 \quad \text{for all } x \in \Omega,$$
we have
$$\sum_{i=0}^n \nabla\phi_i(x) \int_{B(x^\varepsilon_i, \varepsilon)} \rho_\varepsilon(x^\varepsilon_i - y)\,u(y)\,\mathrm{d}y = \sum_{i=0}^n \nabla\phi_i(x)\left(\int_{B(x^\varepsilon_i, \varepsilon)} \rho_\varepsilon(x^\varepsilon_i - y)\,u(y)\,\mathrm{d}y - u(x)\right). \tag{A.5}$$
Moreover,
$$\left|\int_{B(x^\varepsilon_i, \varepsilon)} \rho_\varepsilon(x^\varepsilon_i - y)\,u(y)\,\mathrm{d}y - u(x)\right| \le \left|\int_{B(x^\varepsilon_i, \varepsilon)} \rho_\varepsilon(x^\varepsilon_i - y)\,(u(y) - u(x^\varepsilon_i))\,\mathrm{d}y\right| + |u(x^\varepsilon_i) - u(x)| \le C_1\varepsilon \quad \forall i = 0, \dots, n,$$
where the constant $C_1$ depends only on $\mathrm{Lip}(\gamma_{z_i})$ and the Lipschitz constant of $u$ on $\overline{\Omega}$. Thus, combining this with (A.5),
$$\left|\sum_{i=0}^n \nabla\phi_i(x) \int_{B(x^\varepsilon_i, \varepsilon)} \rho_\varepsilon(x^\varepsilon_i - y)\,u(y)\,\mathrm{d}y\right| \le C_2\varepsilon \quad \forall x \in \Omega,$$
where $C_2$ depends only on $C_1$ and $\|\nabla\phi_i\|_{L^\infty}$. Using the nondegeneracy of $F$, we have
$$F^*\left(x, \sum_{i=0}^n \nabla\phi_i(x) \int_{B(x^\varepsilon_i, \varepsilon)} \rho_\varepsilon(x^\varepsilon_i - y)\,u(y)\,\mathrm{d}y\right) \le C_3\varepsilon \quad \text{for all } x \in \Omega.$$
Fix any $x \in \Omega$; if $y \in B(x^\varepsilon_i, \varepsilon)$, then $|x - y| \le |x - x^\varepsilon_i| + |x^\varepsilon_i - y| \le M\varepsilon$. So we obtain
$$\begin{aligned} F^*(x, \nabla u_\varepsilon(x)) &\le \frac{1}{1 + C\varepsilon + w(\varepsilon)}\left[F^*\left(x, \sum_{i=0}^n \nabla\phi_i(x) \int_{B(x^\varepsilon_i, \varepsilon)} \rho_\varepsilon(x^\varepsilon_i - y)\,u(y)\,\mathrm{d}y\right) + F^*\left(x, \sum_{i=0}^n \phi_i(x) \int_{B(x^\varepsilon_i, \varepsilon)} \rho_\varepsilon(x^\varepsilon_i - y)\,\nabla u(y)\,\mathrm{d}y\right)\right] \\ &\le \frac{1}{1 + C\varepsilon + w(\varepsilon)}\left(C_3\varepsilon + \sum_{i=0}^n \phi_i(x) \int_{B(x^\varepsilon_i, \varepsilon)} \rho_\varepsilon(x^\varepsilon_i - y)\,F^*(x, \nabla u(y))\,\mathrm{d}y\right) \\ &\le \frac{1}{1 + C\varepsilon + w(\varepsilon)}\left[C_3\varepsilon + \sum_{i=0}^n \phi_i(x) \int_{B(x^\varepsilon_i, \varepsilon)} \rho_\varepsilon(x^\varepsilon_i - y)\,F^*(y, \nabla u(y))\,\mathrm{d}y + \sum_{i=0}^n \phi_i(x) \int_{B(x^\varepsilon_i, \varepsilon)} \rho_\varepsilon(x^\varepsilon_i - y)\,\big(F^*(x, \nabla u(y)) - F^*(y, \nabla u(y))\big)\,\mathrm{d}y\right] \\ &\le \frac{C_3\varepsilon + 1 + w(\varepsilon)}{1 + C\varepsilon + w(\varepsilon)} \le 1 \quad (\text{choosing a constant } C \ge C_3). \end{aligned}$$
By the continuity of $\nabla u_\varepsilon$ and of $F^*$, we also have $F^*(x, \nabla u_\varepsilon(x)) \le 1$ for all $x \in \overline{\Omega}$. □

Proposition A.2 Let $F$ be a continuous nondegenerate Finsler metric on a connected bounded Lipschitz domain $\Omega$. We have
$$\mathrm{Lip}_{d_F} = \left\{u : \overline{\Omega} \to \mathbb{R} \mid u \text{ is Lipschitz continuous and } F^*(x, \nabla u(x)) \le 1 \text{ a.e. } x \in \Omega\right\} =: \mathcal{B}_{F^*}. \tag{A.6}$$
As a consequence, for any 1-$d_F$ Lipschitz continuous function $u$, there exists a sequence of 1-$d_F$ Lipschitz continuous functions $u_\varepsilon \in C^\infty_c(\mathbb{R}^N)$ with $u_\varepsilon \rightrightarrows u$ uniformly on $\overline{\Omega}$.

Lemma A.3 We have $\mathrm{Lip}_{d_F} \subset \mathcal{B}_{F^*}$.

Proof. Let $u \in \mathrm{Lip}_{d_F}$. Then $u$ is Lipschitz and hence differentiable a.e. in $\Omega$. Let $x \in \Omega$ be any point where $u$ is differentiable. We have, for any $v \in \mathbb{R}^N$,
$$\frac{\langle\nabla u(x), v\rangle}{F(x, v)} = \lim_{h \to 0} \frac{u(x + hv) - u(x)}{F(x, hv)} \le \limsup_{h \to 0} \frac{d_F(x, x + hv)}{F(x, hv)} \le \limsup_{h \to 0} \frac{\int_0^1 F(x + thv, hv)\,\mathrm{d}t}{F(x, hv)} = 1.$$
Hence $F^*(x, \nabla u(x)) \le 1$, so $u \in \mathcal{B}_{F^*}$. □

Lemma A.4 We have $\mathcal{B}_{F^*} \subset \mathrm{Lip}_{d_F}$.

Proof. Fix any $u \in \mathcal{B}_{F^*}$.

Case 1: If $u$ is smooth, then $F^*(x, \nabla u(x)) \le 1$ for all $x \in \overline{\Omega}$. For any $x, y \in \overline{\Omega}$ and any Lipschitz curve $\xi$ in $\overline{\Omega}$ joining $x$ and $y$, we have
$$u(y) - u(x) = \int_0^1 \nabla u(\xi(t))\cdot\dot{\xi}(t)\,\mathrm{d}t \le \int_0^1 F^*(\xi(t), \nabla u(\xi(t)))\,F(\xi(t), \dot{\xi}(t))\,\mathrm{d}t \le \int_0^1 F(\xi(t), \dot{\xi}(t))\,\mathrm{d}t.$$
Hence $u \in \mathrm{Lip}_{d_F}$.

Case 2: For a general Lipschitz continuous function $u$ satisfying $F^*(x, \nabla u(x)) \le 1$ a.e. $x \in \Omega$, thanks to Lemma A.1, there exist $u_\varepsilon \in \mathcal{B}_{F^*} \cap C^\infty_c(\mathbb{R}^N)$ such that $u_\varepsilon \rightrightarrows u$ on $\overline{\Omega}$. By Case 1, $u_\varepsilon \in \mathrm{Lip}_{d_F}$. Since $u_\varepsilon \rightrightarrows u$ on $\overline{\Omega}$, we obtain $u \in \mathrm{Lip}_{d_F}$. □

Proof of Proposition A.2. The proof follows from Lemmas A.3 and A.4. □

Proof of Lemma 2.4. Since $0 \le u \le \lambda$, the sequence $u_\varepsilon$ in the proof of Lemma A.1 satisfies $0 \le u_\varepsilon \le \lambda$.
So $u_\varepsilon \in C^\infty_c(\mathbb{R}^N) \cap L^\lambda_{d_F}$ and $u_\varepsilon \rightrightarrows u$ on $\overline{\Omega}$. □

Remark A.5 The results still hold true if $\Omega$ is connected, bounded and has the segment property.

References

Ambrosio, L., Fusco, N. & Pallara, D. (2000) Functions of Bounded Variation and Free Discontinuity Problems. Oxford Mathematical Monographs. Oxford: Oxford University Press.
Ambrosio, L., Gigli, N. & Savaré, G. (2005) Gradient Flows in Metric Spaces and in the Space of Probability Measures. Lectures in Mathematics. Basel: Birkhäuser.
Barrett, J. W. & Prigozhin, L. (2009) Partial L1 Monge-Kantorovich problem: variational formulation and numerical approximation. Interfaces Free Bound., 11, 201–238.
Beckmann, M. (1952) A continuous model of transportation. Econometrica, 20, 643–660.
Benamou, J. D. & Brenier, Y. (2000) A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem. Numer. Math., 84, 375–393.
Benamou, J. D. & Carlier, G. (2015) Augmented Lagrangian methods for transport optimization, mean field games and degenerate elliptic equations. J. Optim. Theory Appl., 167, 1–26.
Benamou, J. D., Carlier, G., Cuturi, M., Nenna, L. & Peyré, G. (2015) Iterative Bregman projections for regularized transportation problems. SIAM J. Sci. Comput., 37, A1111–A1138.
Benamou, J. D., Carlier, G. & Hatchi, R. (2017) A numerical solution to Monge's problem with a Finsler distance cost. ESAIM: M2AN, DOI: 10.1051/m2an/2016077.
Caffarelli, L. & McCann, R. J. (2010) Free boundaries in optimal transport and Monge-Ampère obstacle problems. Ann. Math., 171, 673–730.
Chizat, L., Peyré, G., Schmitzer, B. & Vialard, F. X.
(2016) Scaling algorithms for unbalanced transport problems. arXiv preprint arXiv:1607.05816.
De Pascale, L., Evans, L. C. & Pratelli, A. (2004) Integral estimates for transport densities. Bull. London Math. Soc., 36, 383–395.
De Pascale, L. & Pratelli, A. (2004) Sharp summability for Monge transport density via interpolation. ESAIM Control Optim. Calc. Var., 10, 549–552.
Eckstein, J. & Bertsekas, D. P. (1992) On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program., 55, 293–318.
Ekeland, I. & Temam, R. (1976) Convex Analysis and Variational Problems. Amsterdam-New York: North-Holland/American Elsevier.
Feldman, M. & McCann, R. J. (2002) Uniqueness and transport density in Monge's mass transportation problem. Calc. Var. Partial Differ. Equ., 15, 81–113.
Figalli, A. (2010) The optimal partial transport problem. Arch. Rational Mech. Anal., 195, 533–560.
Fortin, M. & Glowinski, R. (1983) Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems, vol. 15. Amsterdam: North-Holland Publishing.
Gabay, D. & Mercier, B. (1976) A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl., 2, 17–40.
Glowinski, R., Lions, J. L. & Trémolières, R. (1981) Numerical Analysis of Variational Inequalities. Amsterdam: North-Holland Publishing.
Glowinski, R. & Le Tallec, P. (1989) Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics, vol. 9. Philadelphia: SIAM.
Hecht, F. (2012) New development in FreeFem++. J. Numer. Math., 20, 251–266.
Igbida, N. & Nguyen, V. T.
(2017) Optimal partial mass transportation and obstacle Monge-Kantorovich equation (submitted for publication).
Igbida, N. & Ta Thi, N. N. (2017) Sub-gradient diffusion operator. J. Differ. Equ., 262, 3837–3863.
Santambrogio, F. (2009) Absolute continuity and summability of transport densities: simpler proofs and new estimates. Calc. Var. Partial Differ. Equ., 36, 343–354.
Santambrogio, F. (2015) Optimal Transport for Applied Mathematicians. Basel: Birkhäuser.
Villani, C. (2003) Topics in Optimal Transportation. Graduate Studies in Mathematics, vol. 58. Providence: American Mathematical Society.
Villani, C. (2009) Optimal Transport, Old and New. Grundlehren der Mathematischen Wissenschaften (Fundamental Principles of Mathematical Sciences), vol. 338. Berlin, Heidelberg: Springer.

© The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. IMA Journal of Numerical Analysis, Oxford University Press. Published: Jan 1, 2018.
https://www.projectrhea.org/rhea/index.php/Homework_15_MA181Fall2008bell
## 8.8 #54

Does this improper integral converge for anyone? Also, if you are having trouble with the integral, take a look at the derivatives of inverse hyperbolic functions. --John Mason

It does not converge for me. I used direct comparison to test whether it converges or not. I started by comparing $\sqrt{x^4-1}$ and $\sqrt{x^4}$. It was easy from there. --Josh Visigothsandwich

Thanks. I'm not really sure that I needed to use some sort of comparison to show it didn't converge, as it did integrate nicely, but it's good to have a second opinion, be that mathematical or otherwise. --John Mason

It does not converge for me either, but Josh, be careful with your comparison. It's not really valid to use $1/\sqrt{x^4}$ as the function for comparison, because $x/\sqrt{x^4} < x/\sqrt{x^4-1}$, albeit infinitesimally less. I used $(g(x) = 1/x^{0.99}) > (f(x) = x/\sqrt{x^4-1})$, in which case $p < 1$ and it diverges. --Randy Eckman 15:44, 26 October 2008 (UTC)

That is true. But if you start out with that as your comparison:

$\sqrt{x^4} > \sqrt{x^4 - 1} > 0 \quad \text{for } x > 1$

$\frac{x}{\sqrt{x^4 - 1}} > \frac{x}{\sqrt{x^4}} = \frac{x}{x^2} = \frac{1}{x}$

$\int^\infty_4 \frac{dx}{x} = \infty$

Certainly much cleaner than working with decimal powers. --John Mason

Also, remember, $\int_1^{\infty}\frac{1}{x^p}dx$ diverges as long as $p\le 1$. So if $p = 1$, it still diverges. That's why it's okay to use the comparison I used. I did what John did, only I multiplied the inequality by $x$ after taking the inverse of both sides and switching the inequality sign. --His Awesomeness, Josh Hunsberger

## Pg. 329, #16

How accurate do we need to make our answers for the roots? After four iterations, I have the first point accurate to four digits. The text doesn't specify a number of correct digits, and out of curiosity I found the precise roots on Mathematica.
I don't know how the textbook could expect us to calculate exactly this:

{x -> 0.630115} {x -> 2.57327}

--Randy Eckman 17:43, 26 October 2008 (UTC)

I went to 10 digits, as that was all my calculator could show. And for the record, "Reckman" is a very cool name. --John Mason

And I went to five digits because that's all my calculator would show me (I think I can change that, but I wasn't sure how). --His Awesomeness, Josh Hunsberger

## Pg. 329, #5

I don't really understand how to do #5. It seems like there isn't an actual function. Are we supposed to use Maple? Can someone help get me started? --Klosekam 16:19, 27 October 2008 (UTC)

Since you need to know where $e^{-x} = 2x + 1$, you can just subtract one side from the other and solve for the roots (i.e., the values of $x$ where the function is zero). So you can use $f(x) = 2x + 1 - e^{-x}$.
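For what it's worth, Newton's method converges very quickly on this one. A quick sketch (the starting point and iteration count are arbitrary choices); note that $f(0) = 2\cdot 0 + 1 - e^0 = 0$, and since $f'(x) = 2 + e^{-x} > 0$ the function is strictly increasing, so $x = 0$ is the unique root:

```python
# Newton's method for f(x) = 2x + 1 - exp(-x).  f is strictly increasing
# (f'(x) = 2 + exp(-x) > 0) and f(0) = 0, so x = 0 is the unique root.
import math

def newton(f, fprime, x0, n_iter=20):
    x = x0
    for _ in range(n_iter):
        x -= f(x) / fprime(x)
    return x

f = lambda x: 2 * x + 1 - math.exp(-x)
fprime = lambda x: 2 + math.exp(-x)
root = newton(f, fprime, x0=1.0)
print(root)   # converges to 0
```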
http://math.stackexchange.com/questions/761228/why-a-holomorphic-function-satisfying-these-conditions-has-to-be-linear/766402
# Why a holomorphic function satisfying these conditions has to be linear?

Let $\Omega$ be a bounded open subset of $\mathbb{C}$ and $f:\Omega\rightarrow\Omega$ be holomorphic in $\Omega$. Prove that if there exists a point $z_0$ in $\Omega$ such that $$f(z_0)=z_0~~~~\text{and}~~~~f'(z_0)=1$$ then $f$ is linear.

If $\Omega$ is the unit disc and $f(z) = z+z^2$, then $f(0) = 0$ and $f^\prime(0) = 1$. However, $f$ is not linear. Are you sure there aren't any additional conditions? –  jwsiegel Apr 20 '14 at 2:47

There is. It must be from $\Omega$ to $\Omega$, I guess. –  Aloizio Macedo Apr 20 '14 at 3:20

Sorry for the mistake. It is from $\Omega\rightarrow\Omega$. I have made the corrections. –  Abishanka Saha Apr 20 '14 at 3:23

I'm guessing $\Omega$ should be connected too. Maybe use something along the lines of the Schwarz lemma? –  Seth Apr 20 '14 at 3:39

OP, do you know the Schwarz lemma? And do you know you can map a simply connected set $\Omega$ to the unit disc conformally (i.e. preserving $f'$ at each point)? –  Eric Auld Apr 20 '14 at 3:41

Consider the family $\mathscr{F} = \{ f^n : n \in \mathbb{Z}^+\}$, where $f^n$ denotes the $n$-fold iterate of $f$, $f\circ f \circ \dotsc \circ f$. Listen to what Montel has to say about that family. And assume, for the sake of contradiction, that $f$ were not linear.

Since $\Omega$ is bounded, $\mathscr{F}$ is a normal family. To simplify notation, let us assume that $z_0 = 0$. Then in a neighbourhood of $0$, we have the Taylor expansion $$f(z) = z + \sum_{k=2}^\infty a_k z^k.$$ If we already know that all $a_k$ for $2 \leqslant k < m$ are zero, iterating the expansion $f(z) = z + a_m z^m + O(z^{m+1})$ leads to $$f^n(z) = z + n\cdot a_m z^m + O(z^{m+1}),$$ which is proved by induction, \begin{align} f^{n+1}(z) &= f(f^n(z))\\ &= f^n(z) + a_m(f^n(z))^m + O\bigl((f^n(z))^{m+1}\bigr)\\ &= z + n\cdot a_m z^m + O(z^{m+1}) + a_m(z + O(z^m))^m + O(z^{m+1})\\ &= z + (n+1)a_m z^m + O(z^{m+1}).
\end{align} In other words, we have $$\left(\frac{d}{dz}\right)^m \left(f^n\right)\Bigr\rvert_{z = 0} = n\cdot f^{(m)}(0)$$ for $m \geqslant 2$ if we already know that $f^{(k)}(0) = 0$ for $2\leqslant k < m$. But the family of $m^{\text{th}}$ derivatives of a normal family is again normal, so $\left(\left(\frac{d}{dz}\right)^m \left(f^n\right)\Bigr\rvert_{z = 0}\right)_{n\in \mathbb{N}}$ must have a convergent subsequence. By the above, that is only possible if $f^{(m)}(0) = 0$. Thus all derivatives of order $> 1$ of $f$ vanish at $0$, and $f(z) = z$ follows.
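The induction step above can be checked numerically for a toy case (my own illustration, not part of the answer): iterating $f(z) = z + a z^2$ with truncated power series shows the $z^2$ coefficient of $f^n$ growing as $n \cdot a$, which is exactly the unboundedness that contradicts normality.

```python
def compose(f, g, deg):
    """Coefficients of f(g(z)) truncated at degree deg.
    f and g are coefficient lists [c0, c1, c2, ...]."""
    result = [0.0] * (deg + 1)
    power = [1.0] + [0.0] * deg  # g(z)^0 = 1
    for c in f:
        for k in range(deg + 1):
            result[k] += c * power[k]
        # multiply power by g, truncating at degree deg
        new = [0.0] * (deg + 1)
        for i, a in enumerate(power):
            if a == 0.0:
                continue
            for j, b in enumerate(g):
                if i + j <= deg:
                    new[i + j] += a * b
        power = new
    return result

a = 0.5
f = [0.0, 1.0, a]           # f(z) = z + a z^2
fn = f[:]                   # fn holds the coefficients of f^n
for n in range(2, 6):
    fn = compose(f, fn, 2)  # truncate at degree 2
    # coefficient of z^2 in f^n is n * a, as in the induction above
    assert abs(fn[2] - n * a) < 1e-12
```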
https://link.springer.com/chapter/10.1007/978-3-030-49418-6_7?error=cookies_not_supported&code=7539460d-aa76-443a-9da5-9ac1665c9c4c
Automated Planning for Supporting Knowledge-Intensive Processes

Conference paper. Part of the Lecture Notes in Business Information Processing book series (LNBIP, volume 387).

Abstract

Knowledge-intensive Processes (KiPs) are processes characterized by high levels of unpredictability and dynamism. Their process structure may not be known before their execution. One way to cope with this uncertainty is to defer decisions regarding the process structure until run time. In this paper, we consider the definition of the process structure as a planning problem. Our approach uses automated planning techniques to generate plans that define process models according to the current context. The generated plan model relies on a metamodel called METAKIP that represents the basic elements of KiPs. Our solution explores Markov Decision Processes (MDP) to generate plan models. This technique allows uncertainty representation by defining state transition probabilities, which gives us more flexibility than traditional approaches. We construct an MDP model and solve it with the help of the PRISM model-checker. The solution is evaluated by means of a proof of concept in the medical domain which reveals the feasibility of our approach.

Keywords: Knowledge-intensive process · Business process modeling · Case management · Automated planning · Markov Decision Process · Business process management systems

1 Introduction

In the last decades, the business process management (BPM) community has established approaches and tools to design, enact, control, and analyze business processes. Most process management systems follow predefined process models that capture different ways to coordinate their tasks to achieve their business goals. However, not all types of processes can be predefined at design time; some of them can only be specified at run time because of their high degree of uncertainty [18]. This is the case with Knowledge-intensive Processes (KiPs).
KiPs are business processes with critical decision-making tasks that involve domain-specific knowledge, information, and data [4]. KiPs can be found in domains like healthcare, emergency management, project coordination, and case management, among others. KiP structure depends on the current situation and new emergent events that are unpredictable and vary in every process instance [4]. Thus, a KiP’s structure is defined step by step as the process executes, by a series of decisions made by process participants considering the current specific situations and contexts [13]. In this sense, it is not possible to entirely define beforehand which activities will execute or their ordering and, indeed, it is necessary to refine them as soon as new information becomes available or whenever new goals are set. These kinds of processes heavily rely on highly qualified and trained professionals called knowledge workers. Knowledge workers use their own experience and expertise to make complex decisions to model the process and achieve business goals [3]. Despite their expertise, it is often the case that knowledge workers become overwhelmed with the number of cases, the differences between cases, rapidly changing contexts, and the need to integrate new information. They therefore require computer-aided support to help them manage these difficult and error-prone tasks. In this paper, we explore how to provide this support by considering the process modeling problem as an automated planning problem. Automated planning, a branch of artificial intelligence, investigates how to search through a space of possible actions and environment conditions to produce a sequence of actions to achieve some goal over time [10]. Our work investigates an automated way to generate process models for KiPs by mapping an artifact-centric case model into a planning model at run time. 
To encode the planning domain and planning problem, we use a case model defined according to the METAKIP metamodel [20] that encloses data and process logic into domain artifacts. It defines data-driven activities in the form of tactic templates. Each tactic aims to achieve a goal and the planning model is derived from it. In our approach, we use Markov decision processes (MDP) because they allow us to model dynamic systems under uncertainty [7], although our definition of the planning problem model enables using different planning algorithms and techniques. MDP finds optimal solutions to sequential and stochastic decision problems. As the system model evolves probabilistically, an action is taken based on the observed condition or state and a reward or cost is gained [7, 10]. Thus, an MDP model allows us to identify decision alternatives for structuring KiPs at run time. We use PRISM [11], a probabilistic model checker, to implement the solution for the MDP model. We present a proof of concept by applying our method in a medical treatment scenario, which is a typical example of a non-deterministic process. Medical treatments can be seen as sequential decisions in an uncertain environment. Medical decisions not only depend on the current state of the patient, but they are affected by the evolution of the states as well. The evolution of the patient state is unpredictable, since it depends on factors such as preexisting patient illnesses or patient-specific characteristics of the diseases. In addition, medical treatment decisions involve complex trade-offs between the risks and benefits of various treatment options. We show that it is possible to generate different optimal treatment plans according to the current patient state and a target goal state, assuming that we have enough data to accurately estimate the transition probabilities to the next patient state. 
The resulting process models could help knowledge workers make complex decisions and structure execution paths at run time with a higher probability of success, while optimizing constraints such as cost and time. The remainder of this paper is organized as follows: Sect. 2 presents a motivating medical scenario. Section 3 introduces the theoretical and methodological background. Section 4 describes the proposed method to encode a case model as a planning model. Section 5 reports on the application of the methodology in a scenario. Section 6 discusses the obtained findings and related work. Finally, Sect. 7 wraps up the paper with the concluding remarks.

2 Motivating Example

This section presents a motivating medical case scenario. Suppose we have the following medical scenario in the oncology department stored in the Electronic Medical Record (EMR). In order to receive the second cycle of R-ICE, it is necessary to stabilize Mary's health status as soon as possible. Thus, at this time the goal is to decrease her body temperature to $$36.5\,^{\circ}\mathrm{C} \le Temp \le 37.2\,^{\circ}\mathrm{C}$$ and reduce the level of nausea to zero ($$LN=0$$). For that, physicians need to choose from a vast set of treatment strategies to decide which procedures are best for Mary, in her specific current context. Assume that we have statistical data about two possible tactics for achieving the desired goal: fever (Fvr) and nausea (Nausea) management, shown in Table 1, adapted from [2]. Each of these tactics can be fulfilled through multiple activities that have different interactions and constraints with each other, as well as with the specifics of the patient being treated.
For example, (a) treating nausea with a particular drug may affect the fever, (b) administration of the drug may depend on the drugs that the patient is taking, (c) drug effectiveness may depend on the patient history with the drug, or (d) giving the drug may depend on whether the drug has already been administered and how much time has elapsed since the last dose. These issues make manual combination of even this simple case challenging, and it becomes much harder for more complex treatments and patient histories. Support is therefore needed that can take into account patient data, constraints, dependencies, and patient/doctor preferences to help advise the doctor on viable and effective courses of treatment.

Table 1. Tactic templates for fever (Fvr) and nausea (Nausea) management

Tactic: Fever Management (Fvr)
Definition: Management of a patient with hyperpyrexia caused by non-environmental factors
Goal: Thermoregulation ($$36.5\,^{\circ}\mathrm{C} \le Temp \le 37.2\,^{\circ}\mathrm{C}$$)
Metric: Temperature (Temp)
Preconditions: $$Temp > 37.2\,^{\circ}\mathrm{C}$$
Activities:
A1. Administer ORAL antipyretic medication, as appropriate
A2. Administer INTRAVENOUS antipyretic medication, as appropriate
A3. Administer medications to treat the cause of fever, as appropriate
A4. Encourage increased intake of oral fluids, as appropriate
A5. Administer oxygen, as appropriate

Tactic: Nausea Management (Nausea)
Definition: Prevention and alleviation of nausea
Goal: Stop Nausea (LoN = 0)
Metric: Level of Nausea (LoN)
Preconditions: LoN > 0
Activities:
B1. Ensure that effective antiemetic drugs are given to prevent nausea when possible (except for nausea related to pregnancy)
B2. Control environmental factors that may evoke nausea (e.g., aversive smells, sounds, and unpleasant visual stimulation)
B3. Give cold, clear, odorless and colorless liquids and food, as appropriate

3 Background

This section presents the underlying concepts in our proposal.
Section 3.1 provides an overview of the METAKIP metamodel; Sect. 3.2 introduces basic concepts of automated planning; Sect. 3.3 explains Markov decision process (MDP). Section 3.4 describes the PRISM tool and language. 3.1 METAKIP: A Metamodel for KiPs Definition Our previous work proposed an artifact-centric metamodel [20] for the definition of KiPs, aiming to support knowledge workers during the decision-making process. The metamodel supports data-centric process management, which is based on the availability and values of data rather than completion of activities. In data-centric processes, data values drive decisions and decisions dynamically drive the course of the process [18]. The metamodel is divided into four major packages: case, control-flow, knowledge, and decision, in such a way that there is an explicit integration of the data, domain, and organizational knowledge, rules, goals, and activities. The Case Package defines the base structure of the metamodel, a Case. A case model definition represents an integrated view of the context and environment data of a case, following the artifact-centric paradigm. This package is composed of a set of interconnected artifacts representing the logical structure of the business process. An artifact is a data object composed of a set of items, attributes, and data values, defined at run time. The Knowledge Package captures explicit organizational knowledge, which is encoded through tactic templates, goals, and metrics that are directly influenced by business rules. Tactics templates represent best practices and guidelines. Usually, they have semi-structured sequences of activities or unstructured loose alternative activities pursuing a goal. The Control-flow Package defines the behavior of a case. It is composed of a set of data-driven activities to handle different cases. Activity definitions are made in a declarative way and have pre- and post-conditions. 
The metamodel refines the granularity of an activity, which can be a step or a task. A task is logically divided into steps, which allows better management of data entry on the artifacts. A step definition is associated with at most a single attribute of an artifact, a resource, and a role type. This definition gives us a tight integration between data, steps, and resources. These packages are used to model alternative plans to answer emergent circumstances, reflecting environmental changes or unexpected outcomes during the execution of a KiP. The Decision Package represents the structure of a collaborative decision-making process performed by knowledge workers. We proposed a representation of how decisions can be made using the principles of strategic management, such as looking towards goals and objectives and embracing uncertainty by formulating strategies for the future and correcting them if necessary. The strategic plan is structured at run time by goals, objectives, metrics, and tactic templates.

3.2 Automated Planning

Planning is the explicit and rational deliberation of actions to be performed to achieve a goal [7]. The process of deliberation consists of choosing and organizing actions considering their expected outcomes in the best possible way. Usually, planning is required when an activity involves new or less familiar situations, complex tasks and objectives, or when the adaptation of actions is constrained by critical factors such as high risk. Automated planning studies the deliberation process computationally [7]. A conceptual model for planning can be represented by a state-transition system, which formally is a 4-tuple $$\varSigma = (S, A, E, \gamma )$$, where $$S=\{s_{1}, s_{2}, \dots
\}$$ is a finite or recursively enumerable set of states; $$A = \{a_{1}, a_{2},\dots\}$$ is a finite or recursively enumerable set of actions; $$E = \{e_{1}, e_{2},\dots\}$$ is a finite or recursively enumerable set of events; and $$\gamma : S \times A \times E \rightarrow 2^{S}$$ is a state-transition function. Actions are transitions controlled by a plan executor. Events are unforeseen transitions that correspond to the internal dynamics of the system and cannot be controlled by the plan executor. Both events and actions contribute to the evolution of the system. Given a state-transition system $$\varSigma$$, the purpose of planning is to deliberate which actions to apply in which states to achieve some goal from a given state. A plan is a structure that gives the appropriate actions.

3.3 Markov Decision Process (MDP)

A Markov decision process (MDP) is a discrete-time stochastic control process. It is a popular framework for making decisions under uncertainty, dealing with nondeterminism, probabilities, partial observability, and extended goals [7]. In an MDP, an agent chooses action a based on observing state s and receives a reward r for that action [10]. The state evolves probabilistically based on the current state and the action taken by the agent. Figure 1(a) presents a decision network [10], used to represent an MDP. The state-transition function $$T(s'|s,a)$$ represents the probability of transitioning from state s to $$s'$$ after executing action a. The reward function $$R(s,a)$$ represents the expected reward received when executing action a from state s. We assume that the reward function is a deterministic function of s and a. An MDP treats planning as an optimization problem in which an agent needs to plan a sequence of actions that maximizes the chances of reaching the goal. Action outcomes are modeled with a probability distribution function.
Goals are represented as utility functions that can express preferences over the entire execution path of a plan, rather than just desired final states; for example, finding the choice of treatment that optimizes the life expectancy of the patient, or one that optimizes cost and resources.

3.4 PRISM

PRISM [11] is a probabilistic model checker that allows the modeling and analysis of systems that exhibit probabilistic behavior. The PRISM tool provides support for modeling and construction of many types of probabilistic models: discrete-time Markov chains (DTMCs), continuous-time Markov chains (CTMCs), Markov decision processes (MDPs), and probabilistic timed automata (PTAs). The tool supports statistical model checking, confidence-level approximation, and acceptance sampling with its discrete-event simulator. For non-deterministic models it can generate an optimal adversary/strategy to reach a certain state. Models are described using the PRISM language, a simple, state-based language based on the reactive modules formalism [1]. Figure 1(b) presents an example of the syntax of a PRISM module and rewards. The fundamental components of the PRISM language are modules. A module has two parts: variables and commands. Variables describe the possible states that the module can be in at a given time. Commands describe the behavior of a module, that is, how the state changes over time. A command comprises a guard and one or more updates. The guard is a predicate over all the variables in the model. Each update describes a transition that the module can take if the guard is true. A transition is specified by giving the new values of the variables in the module. Each update is assigned a probability, which is attached to the corresponding transition. Commands can be labeled with actions, which are used for synchronization between modules. Costs and rewards are expressed as real values associated with certain states or transitions of the model.
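To make the syntax concrete, a minimal hand-written MDP module in the PRISM language might look as follows. This is an illustrative sketch in the spirit of the fever example of Sect. 2, not the model of Figure 1(b); the module name, action label, and numbers are invented:

```
mdp

module fever_treatment
  // integerized patient temperature in degrees Celsius
  temp : [36..40] init 38;

  // administer antipyretic: the guard requires fever,
  // followed by three probabilistic updates
  [giveMed] temp > 37 -> 0.6 : (temp' = 37)
                       + 0.3 : (temp' = temp - 1)
                       + 0.1 : (temp' = min(temp + 1, 40));
endmodule

// cost attached to each use of the action
rewards "cost"
  [giveMed] true : 0.08;
endrewards
```

A property such as `R{"cost"}min=? [ F temp<=37 ]` would then ask PRISM for the minimum expected cost of reaching the goal temperature.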
4 Dynamic Plan Generation for KiPs Execution

In our approach, plans are fragments of process models that are frequently created and modified during process execution. Plans may change as new information arrives and/or when a new goal is set. We advocate the creation of a planner to structure process models at run time based on a knowledge base. The planner synthesizes plans on the fly according to ongoing circumstances. The generated plans should be revised and re-planned as soon as new information becomes available. This involves both computer agents and knowledge workers in a constant interleaving of planning, execution (configuration and enactment), plan supervision, plan revision, and re-planning. An interactive software tool might assist human experts during planning. This tool should allow defining planning goals and verifying emerging events, states, availability of activities and resources, as well as preferences.

4.1 Model Formulation

The run-time generation of planning models according to a specific situation in a case instance requires the definition of the planning domain and then the planning problem itself.

Definition 1. Let the case model be represented according to the METAKIP metamodel. The planning domain is derived from the case model and can be described using a state-transition system defined as a 5-tuple $$\varSigma = (S, A, E, \gamma , C)$$ such that: S is the set of possible case states. A is the set of actions, represented by the activities inside tactics that an actor may perform. E is the set of events in the context or in the environment. $$\gamma : S \times A \times E \rightarrow 2^{S}$$ is the state-transition function, so the system evolves according to the actions and events that it receives. $$C: S \times A \rightarrow [0,\infty )$$ is the cost function, which may represent monetary cost, time, risk, or anything else that can be minimized or maximized.
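Definition 1 can be transcribed directly into types. The sketch below (invented names, Python rather than the paper's formalism) merely fixes the signatures of $$\gamma$$ and C:

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Hashable

# Hypothetical aliases; states, actions, and events are opaque labels here.
State = Hashable
Action = Hashable
Event = Hashable

@dataclass(frozen=True)
class PlanningDomain:
    states: FrozenSet[State]
    actions: FrozenSet[Action]
    events: FrozenSet[Event]
    # gamma: S x A x E -> 2^S (set of possible successor states)
    gamma: Callable[[State, Action, Event], FrozenSet[State]]
    # C: S x A -> [0, inf), e.g. monetary cost or time
    cost: Callable[[State, Action], float]

# Tiny illustrative instance
dom = PlanningDomain(
    states=frozenset({"s0", "s1"}),
    actions=frozenset({"a"}),
    events=frozenset({"e"}),
    gamma=lambda s, a, e: frozenset({"s1"}),
    cost=lambda s, a: 1.0,
)
```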
The state of a case is the set of values (available data) of the attributes contained in artifacts of the context and the environment. However, since the number of attributes of the artifacts is very large, it is necessary to limit the attributes to only the most relevant ones, which determine the current state of the case at a given time t.

Definition 2. A state $$s_t$$ is the set of values corresponding to a set of relevant attributes $$\{ v_{1}, v_{2}, \dots v_{r} \}$$, with $$r \ge 1$$, contained in the business artifacts at a given time t.

Actions in the METAKIP metamodel are represented by the activities within a tactic. Tactics represent best practices and guidelines used by the knowledge workers to make decisions. In METAKIP, they serve as tactic templates to be instantiated to deal with situations arising during the execution of a case instance. Tactics are composed of a finite set of activities pursuing a goal. A tactic can be structured or unstructured. A tactic is a 4-tuple $$T = (G, PC, M, A)$$, where: G is a set of variables representing the pursued goal state, PC is a finite set of preconditions representing the state required for applying the tactic, M is a set of metrics to track and assess the pursued goal state, and A is a finite set of activities. In METAKIP, an activity can be a single step or a set of steps (called a task). An activity has preconditions and post-conditions (effects). We map activities into executable actions. An executable action is an activity whose effects can modify the values of the attributes inside business artifacts. These effects can be deterministic or non-deterministic.

Definition 3. An action is a 4-tuple $$a = (Pr, Eff, Pb, c)$$ where: Pr is a finite set of preconditions. $$Eff$$ is a finite set of effects. Pb is a probability distribution on the effects, such that $$P(ef)$$ is the probability of effect $$ef \in Eff$$ and $$\sum_{ef \in Eff} P(ef) = 1$$.
c is a number representing the cost (monetary, time, etc.) of performing a.

As the state-transition function $$\gamma$$ is too large to be specified explicitly, it is necessary to represent it in a generative way. For that, we use planning operators from which it is possible to compute $$\gamma$$. Thus, $$\gamma$$ can be specified through a set of planning operators O. A planning operator is instantiated by an action.

Definition 4. A planning operator O is a pair (id, a) where a is an action and id is a unique identifier of action a.

At this point, we are able to define the planning problem to generate a plan as a process model.

Definition 5. The planning problem for generating a process model at a given time t is defined as a triple $$P = (OS_{t}, GS_{t}, RO_{t})$$, where: $$OS_{t}$$ is the observable situation of a case state at time t. $$GS_{t}$$ is the goal state at time t, a set of attributes with expected output values. $$RO_{t}$$ is the subset of O containing only the actions that are available and relevant for a specific situation during the execution of a case instance at a given time t.

Definition 6. The observable situation of a case instance C at a given time t is a set of attributes $$OS_{t} = \{ v_1, v_2, \dots , v_m \}$$, with $$m \ge 1$$, such that $$v_i \in S_t \cup I_t$$ for each $$1 \le i \le m$$, where the state of C is $$S_t$$ and the set of issues in the situation of C is $$I_t$$.

Definition 7. The goal state of an observable situation of a case instance C at a given time t is the set of attributes $$GS_t = \{ v_1, v_2, \dots , v_m \}$$, with $$m \ge 1$$, such that, for $$1 \le i \le m$$, $$v_i$$ is an attribute with an expected output value and $$v_i$$ belongs to an artifact of C. These attributes are selected by the knowledge workers. Metrics required to assess goals inside tactics can be added to the goal. $$GS_t$$ represents the expected reality of C.
$$GS_t$$ serves as an input for searching for an execution path for a specific situation. Different goal states can be defined over time.

Definition 8. Let $$P = (OS_{t}, GS_{t}, RO_{t})$$ be the planning problem. A plan $$\pi$$ is a solution for P: the state produced by applying $$\pi$$ to the state $$OS_{t}$$, in the order given, is the state $$GS_{t}$$. A plan is any sequence of actions $$\pi = (a_{1}, \dots , a_{k})$$, where $$k \ge 1$$. The plan $$\pi$$ represents the process model.

Our problem definition enables the use of different planning algorithms and the application of automatic planning tools to generate alternative plans. As we are interested in KiPs, which are highly unpredictable processes, we use Markov Decision Processes to formulate the model for the planner. MDPs allow us to represent uncertainty with a probability distribution. An MDP supports sequential decision making and reasons about future sequences of actions and observations, which provides high levels of flexibility in the process models. In the following, we show how to derive an MDP model expressed in the PRISM language from a METAKIP model automatically.

4.2 PRISM Model Composition

Algorithm 1 shows the procedure to automatically generate the MDP model for the PRISM tool. The input parameters are: $$OS_t$$, $$GS_t$$, the set of domain tactics, the given time t, PP, the minimum percentage of precondition satisfaction, and PG, the minimum percentage of goal satisfaction; both PP and PG are set according to the rules of the domain. As described in Sect. 3.4, a module is composed of variables and commands. The variables of the module are the set of attributes from the case artifacts that belong to $$OS_t \cup GS_t$$. Commands represent the relevant planning operators $$RO_t$$: the name of a command is the identifier of the action, the guards are the preconditions PC, and the effects $$Eff$$ are the updates with their associated probabilities.
Rewards are represented by the cost of actions c and are placed outside the PRISM module.

To find the set of relevant planning operators $$RO_t$$, we first select the tactics whose preconditions are satisfied by the current situation $$OS_t$$ and whose goal is related to the target state $$GS_{t}$$. This can be done by calculating the percentages of both the satisfied preconditions and the achievable goals. If these percentages are within an acceptable range according to the rules of the domain, the tactics are selected. Second, this first set of tactics is shown to the knowledge workers, who select the most relevant ones. The set of selected relevant tactics is denoted RT. From this set of tactics, we verify which activities inside the tactics are available at time t. The set of available actions at time t is denoted $$A_{t} = \{a_{1}, a_{2}, \dots , a_{n}\}$$. Finally, the relevant planning operators $$RO_t$$ are created by means of $$A_t$$.

4.3 Plan Generation

To generate plans in PRISM, it is necessary to define a property file containing properties that define goals as utility functions. PRISM evaluates properties over an MDP model, explores all possible resolutions of the non-determinism in the model, and returns the optimal state graph. The state graph describes a series of possible states that can occur while choosing actions aiming to achieve a goal state. It maximizes the probability of reaching the goal state while maximizing or minimizing the computed rewards and costs. In our context, a property represents the goal state $$GS_t$$ to be achieved while optimizing some criterion. PRISM then calculates how desirable an execution path is according to that criterion. Thus, plans can be customized according to knowledge workers' preferences (costs and rewards). To generate a plan, we need to evaluate a property.
The generated plan is a state graph that represents a process model to be executed at time t. The generated process model shows case states as nodes and state transitions as arcs labeled with actions whose outcomes follow a probability distribution. According to this state graph, the knowledge worker can choose which action to execute in a particular state. This helps knowledge workers make decisions during KiP execution.

5 Proof of Concept

This section formulates a patient-specific MDP model in PRISM for the medical scenario presented in Sect. 2. In the area of health care, medical decisions can be modeled with Markov Decision Processes [5, 17]. Although MDPs are better suited to certain types of problems involving complex decisions, such as liver transplants, HIV, diabetes, and others, almost every medical decision can be modeled as an MDP [5]. We generate the PRISM model by defining the observable situation $$OS_t$$, the goal state $$GS_t$$, and the set of relevant planning operators $$RO_t$$.

Table 2.
Activity modeling

Activity A1: Administer ORAL antipyretic medication, as appropriate
Pre-condition: (Temp > 37.2) and (LN = 0 or LN = 1) and (allergic = false) and (conflict with current medications = false) and (medication is available = true)
Effects:
E1: p = 0.6, responds to treatment (Temp = 37)
E2: p = 0.3, partially responds to treatment (Temp = Temp − 0.5)
E3: p = 0.1, does not respond to treatment (Temp = Temp + 0.5)
Task execution time: 5 min
Cost: 0.08

Activity B1: Ensure that effective antiemetic drugs are given to prevent nausea when possible
Pre-condition: (pregnancy = false) and (LN > 2) and (allergic = false) and (conflict with current medications = false)
Effects:
E1: p = 0.7, responds to treatment (LN = 0)
E2: p = 0.2, partially responds to treatment (LN = LN − 1)
E3: p = 0.1, does not respond to treatment (LN = LN + 1)
Task execution time: 5 min
Cost: 0.08

Taking into consideration the medical scenario, the observable situation is $$OS_{0}=\{ Temp_{0}= 38\,^{\circ}\mathrm{C}, LN_{0}=4\}$$ and the goal state is $$GS_0= \{36\,^{\circ}\mathrm{C} \le Temp \le 37.2\,^{\circ}\mathrm{C}, LN=0\}$$, where Temp is the temperature of the patient and LN is the level of nausea, both attributes of the Health Status artifact. We assume that the set of relevant tactics RT according to the current health status of the patient comprises the fever and nausea management tactics presented in Sect. 2. Table 2 shows the specification of one activity from each tactic: their preconditions, effects with their probabilities, and the time and cost of execution. We modeled the activity effects with probabilities related to the probability of the patient responding to the treatment.
For example, the possible effects of applying the activity Administer oral antipyretic medication are: (E1) the patient successfully responds to treatment, occurring with a probability of 0.6; (E2) the patient partially responds to treatment, occurring with a probability of 0.3, where their temperature decreases by 0.5 °C but may still fail to reach the goal level; and (E3) the patient does not respond at all to treatment or gets worse, occurring with a probability of 0.1. The other activities are modeled similarly according to the response of the patient. Assuming that all activities from both tactics are available, the set of executable actions is $$A_t=\{ A1,A2,A3,A4,A5,B1,B2,B3 \}$$. Then, it is possible to model the set of relevant planning operators $$RO_t$$. Having $$OS_t$$, $$GS_t$$ and $$RO_t$$, it is possible to generate the MDP model in the PRISM language. Once we created the MDP model, the following utility functions were evaluated: minimize time and cost while reaching the target state. The optimal plan to achieve the goal state $$GS_t$$ while minimizing the cost shows that the goal is reachable in eight iterations. The resulting model has 13 states, 35 transitions, and 13 choices. The time for the model construction was 0.056 s. Figure 2 presents only a fragment of the generated model, highlighting the most probable path from the initial state to the goal state. The first suggested action is B1 (labeled arc), with its possible outcome states and their probabilities. If the most probable next state is reached, the next action to perform is A1, which has a probability of 0.6 of reaching the goal state. Knowledge workers can use this generated plan to decide which activity they should perform next in a particular state. To make the plan readable to knowledge workers, they could be presented with only the most probable path, which could be updated according to the state actually reached after activity execution.
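As a rough illustration of what the planner computes, the sketch below solves a drastically simplified version of the fever MDP by value iteration in plain Python. The three-state temperature discretization and the goal set are assumptions made to keep the example tiny; only the outcome probabilities and the 0.08 action cost come from Table 2. This is not the PRISM model itself, which PRISM solves natively.

```python
# Toy stochastic-shortest-path MDP: minimize expected cost to reach the goal
# temperature using only activity A1, on a 3-state temperature discretization.

def value_iteration(states, actions, P, cost, goal, eps=1e-9):
    """Expected minimal cost-to-goal for each state (value iteration)."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s in goal:
                continue  # goal states cost nothing further
            best = min(cost[a] + sum(p * V[s2] for s2, p in P[(s, a)].items())
                       for a in actions if (s, a) in P)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

states = [38.0, 37.5, 37.0]   # coarse temperature levels (assumption)
goal = {37.0}                 # inside the target range of GS_t
actions = ["A1"]
cost = {"A1": 0.08}           # per-execution cost from Table 2
P = {  # outcome distributions of A1 per state (respond / partial / worse)
    (38.0, "A1"): {37.0: 0.6, 37.5: 0.3, 38.0: 0.1},
    (37.5, "A1"): {37.0: 0.9, 38.0: 0.1},  # partial response already reaches goal
}
V = value_iteration(states, actions, P, cost, goal)
print(V)
```

The fixed point can be checked by hand: V(38.0) satisfies V = 0.104 + 0.13·V, so V(38.0) = 0.104/0.87 ≈ 0.12, the expected total cost of repeatedly applying A1 until the goal is reached.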
Further studies are necessary to help guide knowledge workers in interpreting and following the model.

6 Discussion and Related Work

In the last decades, there has been a growing interest in highly dynamic process management, with different types of approaches that deal with the variability, flexibility, and customization of processes at design time and at run time. Most approaches start from the premise that there is a process model to which different changes have to be made, such as adding or deleting fragments according to a domain model, or generating an alternative sequence of activities due to some customization option. A few approaches use automated planning for synthesizing execution plans. Laurent et al. [12] explored a declarative modeling language called Alloy to create the planning model and generate the plans. This approach seems very promising for activity-centric processes, but not effective enough for data-centric processes, as data is not treated well enough to be the driver of the process, as required in KiPs. SmartPM [16] investigated the problem of coordinating heterogeneous components inside cyber-physical systems. It uses a PDDL (Planning Domain Definition Language) planner that evaluates the physical reality and the expected reality and synthesizes a recovery process. Similarly, Marrella and Lespérance proposed an approach [15] to dynamically generate process templates from a representation of the contextual domain described in PDDL, an initial state, and a goal condition. However, for the generation of the process templates, it is assumed that tasks are black boxes with only deterministic effects. On the other hand, Henneberger et al. [8] explored an ontology for generating process models. The generated process models are action state graphs (ASGs). Although this work uses a very interesting semantic approach, it did not consider important aspects such as resources and cost in the planning model.
There has been an increasing interest in introducing cognitive techniques for supporting the business process cycle. Ferreira and Ferreira [6] proposed a new life cycle for workflow management based on continuous learning and planning. It uses a planner to generate a process model as a sequence of actions that comply with activity rules and achieve the intended goal. Hull and Nezhad [9] proposed a new Plan-Act-Learn cycle for cognitively enabled processes that can be carried out by humans and machines, where plans and decisions define actions, and it is possible to learn from them. Recently, Marrella [14] showed how automated planning techniques can address different research challenges in the BPM area. This approach explored a set of steps for encoding a concrete problem as a PDDL planning problem with deterministic effects. In this paper we introduced the notion of the state of a case in terms of the data values in the artifacts of a case instance. From this state, we can plan different trajectories towards a goal state using automated planning techniques. Our solution generates action plans considering the non-deterministic effects of the actions and new emerging goals and information, which provides high levels of flexibility and adaptation. As we describe a generic planning model, it is possible to use different planning algorithms or to combine other planning models, such as the classical planning model or hierarchical task networks (HTN), according to the structuring level of the processes at different moments. Thereby, we could apply this methodology to other types of processes, from well-structured processes to loosely structured or unstructured processes. Our approach relies on MDPs, which require defining transition probabilities, which in some situations can be very difficult and expensive to obtain. Nowadays a huge amount of data is produced by many sensors, machines, software systems, etc., which might facilitate the acquisition of data to estimate these transition probabilities.
In the medical domain, the increasing use of electronic medical record systems should provide medical data from thousands of patients, which can be exploited to derive these probabilities. A limitation of MDPs concerns problem size: the state space can explode, making the model more difficult to solve. In this context, several techniques for finding approximate solutions to MDPs can be applied, in addition to taking advantage of the rapid increase in processing power in recent years. Flexible processes could be easily designed if we replan after each activity execution. In fact, our approach suggests a system with a constant interleaving of planning, execution, and monitoring. In this way, it will help knowledge workers during the decision-making process.

7 Conclusion

Process modeling is usually conducted by process designers in a manual way. They define the activities to be executed to accomplish business goals. This task is very difficult and prone to human errors. In some cases (e.g., for KiPs), it is impossible due to uncertainty, context-dependency, and specificity. In this paper, we devised an approach to continually generate run-time process models for a case instance using an artifact-centric case model, data-driven activities, and automated planning techniques, even for such loosely structured processes as KiPs. Our approach defined how to synthesize a planning model from an artifact-oriented case model defined according to the METAKIP metamodel. The formulation of the planning domain and the planning problem relies on the current state of a case instance, context and environment, target goals, and tactic templates, from which we can represent actions, states, and goals. As our focus is KiP management, we chose to use the MDP framework, which allows representing uncertainty, one of the essential characteristics of KiPs.
To automatically generate the action plan, we used the tool PRISM, which solves the MDP model and provides optimal solutions. Future work involves devising a user-friendly software application for knowledge workers to interact with the planner, and improving the presentation of plans in such a way that they are more understandable. Our goal is to develop a planner that combines different types of planning algorithms to satisfy different requirements in business processes, especially regarding the structuring level. This planner will be incorporated into a full infrastructure for managing knowledge-intensive processes based on the DW-SAArch reference architecture [19].

References

1. Alur, R., Henzinger, T.A.: Reactive modules. Form. Methods Syst. Des. 15(1), 7–48 (1999)
2. Butcher, H.K., Bulechek, G.M., Dochterman, J.M.M., Wagner, C.: Nursing Interventions Classification (NIC)-E-Book. Elsevier Health Sciences (2018)
3. Davenport, T.: Thinking for a Living. How to Get Better Performance and Results. Harvard Business School Press, Boston (2005)
4. Di Ciccio, C., Marrella, A., Russo, A.: Knowledge-intensive processes: characteristics, requirements and analysis of contemporary approaches. J. Data Semant. 4(1), 29–57 (2015)
5. Díez, F., Palacios, M., Arias, M.: MDPs in medicine: opportunities and challenges. In: Decision Making in Partially Observable, Uncertain Worlds: Exploring Insights from Multiple Communities (IJCAI Workshop), vol. 9, p. 14 (2011)
6. Ferreira, H.M., Ferreira, D.R.: An integrated life cycle for workflow management based on learning and planning. Int. J. Cooper. Inf. Syst. 15(04), 485–505 (2006)
7. Ghallab, M., Nau, D., Traverso, P.: Automated Planning: Theory and Practice. Elsevier (2004)
8. Henneberger, M., Heinrich, B., Lautenbacher, F., Bauer, B.: Semantic-based planning of process models. In: Multikonferenz Wirtschaftsinformatik (MKWI).
GITO-Verlag (2008)
9. Hull, R., Motahari Nezhad, H.R.: Rethinking BPM in a cognitive world: transforming how we learn and perform business processes. In: La Rosa, M., Loos, P., Pastor, O. (eds.) BPM 2016. LNCS, vol. 9850, pp. 3–19. Springer, Cham (2016)
10. Kochenderfer, M.J.: Decision Making Under Uncertainty: Theory and Application. MIT Press, Cambridge (2015)
11. Kwiatkowska, M., Norman, G., Parker, D.: PRISM 4.0: verification of probabilistic real-time systems. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806, pp. 585–591. Springer, Heidelberg (2011)
12. Laurent, Y., Bendraou, R., Baarir, S., Gervais, M.P.: Planning for declarative processes. In: Proceedings of the 29th Annual ACM Symposium on Applied Computing, pp. 1126–1133. ACM (2014)
13. Marjanovic, O.: Towards IS supported coordination in emergent business processes. Bus. Process Manag. J. 11(5), 476–487 (2005)
14. Marrella, A.: Automated planning for business process management. J. Data Semant. 8(2), 79–98 (2019)
15. Marrella, A., Lespérance, Y.: A planning approach to the automated synthesis of template-based process models. SOCA 11(4), 367–392 (2017)
16. Marrella, A., Mecella, M., Sardina, S.: SmartPM: an adaptive process management system through situation calculus, IndiGolog, and classical planning. In: Proceedings of the Fourteenth International Conference on Principles of Knowledge Representation and Reasoning (KR 2014), pp. 518–527 (2014)
17. Mattila, R., Siika, A., Roy, J., Wahlberg, B.: A Markov decision process model to guide treatment of abdominal aortic aneurysms. In: 2016 IEEE Conference on Control Applications (CCA), pp. 436–441. IEEE (2016)
18. Reichert, M., Weber, B.: Enabling Flexibility in Process-Aware Information Systems: Challenges, Methods, Technologies. Springer, Heidelberg (2012)
19.
Venero, S.K.: DW-SAArch: a reference architecture for dynamic self-adaptation in workflows. Master's Thesis, UNICAMP, Campinas, Brazil (2015)
20. Venero, S.K., Dos Reis, J.C., Montecchi, L., Rubira, C.M.F.: Towards a metamodel for supporting decisions in knowledge-intensive processes. In: Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, pp. 75–84. ACM (2019)
https://www.physicsforums.com/threads/the-time-evolution-operator-qm-algebraic-properties.607216/
# The time evolution operator (QM) Algebraic properties

1. May 19, 2012

### knowlewj01

1. The problem statement, all variables and given/known data

The Hamiltonian for a given interaction is

$H=-\frac{\hbar \omega}{2} \hat{\sigma_y}$

where $\sigma_y = \left( \begin{array}{cc} 0 & i \\ -i & 0 \end{array} \right)$ is the Pauli Y matrix.

2. Relevant equations

3. The attempt at a solution

From the time-dependent Schrödinger equation, we can take the time dependence and put it into the time evolution operator U(t):

$HU(t)\left|\Psi(r,0)\right>=i\hbar \frac{d}{dt}U(t)\left|\Psi(r,0)\right>$

becomes

$i\hbar\frac{d}{dt}U(t) = HU(t)$

so for a time-independent Hamiltonian H, this means:

$U(t) = e^{-\frac{i}{\hbar}H t}$

so we have then:

$U(t) = e^{\frac{i\omega t}{2}\hat{\sigma_y}}$

How do you treat this? Is there any particular identity that allows you to move the operator out of the exponent?

Last edited: May 19, 2012

2. May 19, 2012

### knowlewj01

edit: changed the matrix to the correct form

3. May 19, 2012

### dextercioby

Do you know how the exponential of a finite matrix is defined? If so, use the definition.
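One way to see where the definition leads: for any matrix M with M² = I (which holds for the Pauli matrices), splitting the exponential series into even and odd powers gives exp(iθM) = cos(θ)I + i sin(θ)M. A quick numerical check of this identity (not part of the original thread), using the standard sign convention for σ_y and SciPy's matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# sigma_y in the standard convention; it squares to the identity.
sigma_y = np.array([[0, -1j], [1j, 0]])
assert np.allclose(sigma_y @ sigma_y, np.eye(2))

theta = 0.7  # plays the role of omega*t/2
U_def = expm(1j * theta * sigma_y)  # exponential from the series definition
U_closed = np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * sigma_y
assert np.allclose(U_def, U_closed)
print(U_closed.real)  # a real 2x2 rotation matrix
```

The check confirms that the operator comes out of the exponent as a finite sum, so U(t) is just a rotation matrix in this basis.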
http://mathhelpforum.com/algebra/161534-e-powers-cancel-down-print.html
# e powers (cancel down)

• October 30th 2010, 09:04 AM
MattWT

Hello, how do you cancel this equation down further? Thanks

$((exp(x)exp(-x))/4) + ((exp(x)exp(-x))/4) + ((exp(-2x))/4) + ((exp(2x))/4)$

• October 30th 2010, 09:11 AM
Unknown008

$\dfrac{e^x e^{-x}}{4} + \dfrac{e^x e^{-x}}{4} + \dfrac{e^{-2x}}{4} + \dfrac{e^{2x}}{4}$

Is this what you mean? Remember that: $a^b a^c = a^{b+c}$

• October 30th 2010, 09:36 AM
Soroban

Hello, MattWT! Are you working with hyperbolic functions?

Quote: How do you cancel this equation down further?

$\dfrac{e^x\cdot e^{-x}}{4} + \dfrac{e^x\cdot e^{-x}}{4} + \dfrac{e^{-2x}}{4} + \dfrac{e^{2x}}{4}$

Since $e^x\cdot e^{-x} \:=\:e^0 \:=\:1$, we have:

$\displaystyle \frac{1}{4} + \frac{1}{4} + \frac{e^{-2x}}{4} + \frac{e^{2x}}{4} \;=\;\frac{e^{2x}}{4} + \frac{1}{2} + \frac{e^{-2x}}{4}$

$\displaystyle =\;\frac{e^{2x} + 2 + e^{-2x}}{4} \;=\;\frac{(e^x + e^{-x})^2}{4}$

$=\;\left(\dfrac{e^x + e^{-x}}{2}\right)^2 \;=\;\cosh^2\!x$

• October 30th 2010, 09:42 AM
MattWT

I must have messed my algebra up at the beginning then, as the top line should be equal to 1. The question was to find d/dx of tanh(x) by first writing out the hyperbolic functions.
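Soroban's simplification can also be sanity-checked numerically (a quick check using Python's standard library, not part of the original thread):

```python
import math

def expr(x):
    # the original sum: e^x e^-x / 4 + e^x e^-x / 4 + e^-2x / 4 + e^2x / 4
    return ((math.exp(x) * math.exp(-x)) / 4
            + (math.exp(x) * math.exp(-x)) / 4
            + math.exp(-2 * x) / 4
            + math.exp(2 * x) / 4)

# The sum should equal cosh^2(x) for every x.
for x in (-1.5, 0.0, 0.3, 2.0):
    assert math.isclose(expr(x), math.cosh(x) ** 2)
print("matches cosh^2(x)")
```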
https://jp.maplesoft.com/support/help/maplesim/view.aspx?path=worksheet/expressions/smartpopups
Smart Popups - Maple Help

Clickable Math: Smart Popups

Clickable Math techniques provide easy-to-use tools within Maple for step-by-step, interactive mathematical problem-solving. This document provides information and examples using smart popups for simple mathematical problem-solving.

Description

Smart Popups are interactive popup options that are automatically displayed in the Context Panel for certain types of expressions, such as trigonometric equations. They give options for manipulating your selection, such as plotting, expanding, factoring, or substituting trigonometric expressions.

Examples

Example 1: Solve a trigonometric identity using smart popups

Show $1-\mathrm{cos}\left(2\mathrm{\theta }\right)=2{\mathrm{sin}\left(\mathrm{\theta }\right)}^{2}$.

Input the expression for the left-hand side, $1-\mathrm{cos}\left(2\mathrm{\theta }\right)$. Tip: Find $\mathrm{\theta }$ in the Greek palette, or type theta and use symbol completion.

Smart popups, if any, are shown at the top of the Context Panel. Each smart popup shows a preview of the result. Select expand. The result is returned to the worksheet. From the smart popups for this output, select simplify. The result is returned to the worksheet. The identity has been verified.

Example 2: Apply a trig identity using smart popups

Apply the double angle identity for $\mathrm{sin}\left(2\mathrm{\theta }\right)$.

Input the expression $\mathrm{sin}\left(2\mathrm{\theta }\right)$. Smart popups are shown at the top of the Context Panel. Hover your pointer over Trig Identities to see a list of identities. Select $2\mathrm{sin}\left(\mathrm{\theta }\right)\mathrm{cos}\left(\mathrm{\theta }\right)$. The result is returned to the worksheet.

Example 3: Produce a 3-D plot for an expression using smart popups

Input the expression $\mathrm{cos}\left(\frac{1}{10}xy\right)$. Smart popups are shown at the top of the Context Panel.
Select 3D Plot from the available choices. The result is returned to the worksheet.
http://aas.org/archives/BAAS/v35n5/aas203/1168.htm
AAS 203rd Meeting, January 2004
Session 5 T Tauri Stars
Poster, Monday, January 5, 2004, 9:20am-6:30pm, Grand Hall

## [5.08] Dynamical Mass of the T Tau Sa-Sb Binary

G. H. Schaefer (SUNY Stony Brook), T. L. Beck (Gemini Observatory), L. Prato (UCLA), M. Simon (SUNY Stony Brook)

It is possible to derive a useful value for the dynamical mass of a binary system even if a complete orbit has not yet been observed. By considering the distribution of masses produced by orbital solutions that lie within a variation of 1 from the minimum in the reduced χ² surface, an estimate of the total mass can be derived. We apply this technique to the infrared measurements of the resolved T Tau Sa-Sb binary. This pair, currently separated by ~0.1'', is located ~0.7'' south of T Tau N. Curvature is already apparent in the orbital motion of T Tau Sa-Sb. Although the range of possible orbital parameters is still large, the total mass lies within the range 3.7 (+2.8/−1.6) Msun. Comparing this value to the masses of T Tau N and T Tau Sb estimated from their spectral types indicates that the IR companion, T Tau Sa, is the most massive component.

Bulletin of the American Astronomical Society, 35#5
© 2003. The American Astronomical Society.
http://forum.mackichan.com/node/838
Using \intertext

I am trying to use both manual and Scientific WorkPlace editing of the same document using the LaTeX compatibility mode (different coauthors). The main problem I have is that any \intertext{} inside of \begin{align}... does not work when opened in Scientific WorkPlace. When opening the file a message pops up saying "Discarding \intertext". Is there any way to fix this? Or is there an alternative that can be used in Scientific WorkPlace? Keep in mind that the purpose of \intertext is that it doesn't mess with the spacing in a particular row, unlike a direct \text{...} command, so the other rows will align fine.

Unfortunately SW does not recognize \intertext and throws it away if it exists in a document. The only workaround is to place the \intertext macro at the beginning of the next line of the display inside an encapsulated TeX field. That is, the encapsulated TeX field would appear in the line of the math display that follows the text in the typeset results. If you want to modify the .tex file directly before importing it, you can use the \TeXButton macro that SW interprets as an encapsulated TeX field. \TeXButton takes two parameters; the first is the name of the TeX field and the second is the contents of the TeX field. For example, the AMS documentation uses this example to demonstrate the use of \intertext:

\begin{align}
A_{1} & =N_{0}(\lambda;\Omega^{\prime})-\phi(\lambda;\Omega^{\prime}),\\
A_{2} & =\phi(\lambda;\Omega^{\prime})-\phi(\lambda;\Omega),\\
\intertext{and}
A_3&=\mathcal{N}(\lambda;\omega).
\end{align}

Change this to:

\begin{align}
A_{1} & =N_{0}(\lambda;\Omega^{\prime})-\phi(\lambda;\Omega^{\prime}),\\
A_{2} & =\phi(\lambda;\Omega^{\prime})-\phi(\lambda;\Omega),\\
\TeXButton{and}{\intertext{and}}%
A_3&=\mathcal{N}(\lambda;\omega).
\end{align}

Notice the fourth line. The % at the end of the line may not be strictly necessary.
If you are saving with the Portable LaTeX file type, modify and save the document before typesetting so the \TeXButton will be rewritten as needed when saving for Portable LaTeX.
http://mathhelpforum.com/trigonometry/210125-exact-value-diagonal-square-print.html
# Exact Value for a Diagonal of a Square

• Dec 19th 2012, 08:57 AM

Exact Value for a Diagonal of a Square

So, I understand the formula to achieve the diagonal of a square is a variation of Pythagoras' Theorem, which is the square root of 2 times the length squared. My exact side length is 2 + 2√3. So I input this length in the formula and I get an answer of 4√2; however, in the book the answer is 4√(2+√3), with the √3 being under the first radical. I cannot see how they are getting to this answer, any help would be fantastic.

• Dec 19th 2012, 11:07 AM
Plato

Re: Exact Value for a Diagonal of a Square

Quote: "So, I understand the formula to achieve the diagonal of a square is a variation of Pythagoras' Theorem, which is the square root of 2 times the length squared. My exact side length is 2 + 2√3. So I input this length in the formula and I get an answer of 4√2; however, in the book the answer is 4√(2+√3), with the √3 being under the first radical."

There are several mistakes in this post. If $d$ is the length of the diagonal of a square of side length $s$ then $d=s\sqrt{2}$. So if you have $s=2+2\sqrt{3}$ then $d=2\sqrt{2}+2\sqrt{6}$.

• Dec 19th 2012, 01:06 PM

Re: Exact Value for a Diagonal of a Square

I see that d=s√2 is a simplified version of the formula I wrote. Still, I'm looking at the answer in the book and it is what I wrote. I suppose it could be a typo but it was also written this way in the answer book at school. I've heard my teacher say that the book sometimes answers questions in a strange way... I guess I will have to wait until tomorrow and ask my teacher. Thanks again.

• Dec 19th 2012, 01:30 PM
emakarov

Re: Exact Value for a Diagonal of a Square

Indeed, $2\sqrt{2}+2\sqrt{6}=4\sqrt{2+\sqrt{3}}$, which you can verify by squaring both sides.
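emakarov's identity can also be verified numerically (a quick Python check, not part of the original thread):

```python
import math

lhs = 2 * math.sqrt(2) + 2 * math.sqrt(6)  # Plato's diagonal: 2*sqrt(2) + 2*sqrt(6)
rhs = 4 * math.sqrt(2 + math.sqrt(3))      # the book's form:  4*sqrt(2 + sqrt(3))
assert math.isclose(lhs, rhs)
print(round(lhs, 6), round(rhs, 6))  # both ≈ 7.727407
```

Both expressions are the same positive number, so squaring both sides, as suggested, proves the identity exactly.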
• Dec 19th 2012, 02:43 PM
bjhopper

Re: Exact Value for a Diagonal of a Square

Key concept: 6^(1/2) = 3^(1/2) * 2^(1/2)

• Dec 19th 2012, 05:04 PM
cathectio

Re: Exact Value for a Diagonal of a Square

Blinkin is correct, as far as s/he goes, but 2sqrt(6) = 2sqrt(2*3), so we have then 2sqrt(2) + 2sqrt(2*3) = 2sqrt(2) + 2sqrt(2)*sqrt(3) = 2(2sqrt(2)) + sqrt(3) = 4sqrt(2) + sqrt(3).

• Dec 19th 2012, 05:08 PM
cathectio

Re: Exact Value for a Diagonal of a Square

It is of interest that in Euclid's "Elements", Book I, Proposition I, if we allow the radius to equal 1 and duplicate the equilateral triangle vertically, we have the length ascertained of sqrt(3), which is the long high-to-low, opposite-corner-to-opposite-corner diagonal of a cube.

• Dec 19th 2012, 06:45 PM
http://qingkaikong.blogspot.co.il/2017/07/machine-learning-17-using-scikit-learn.html
## Saturday, July 29, 2017 ### Machine learning 17: Using scikit-learn Part 5 - Common practices The material is based on my workshop at Berkeley - Machine learning with scikit-learn. I convert it here so that there will be more explanation. Note that, the code is written using Python 3.6. It is better to read the slides I have first, which you can find it here. You can find the notebook on Qingkai's Github. This week, we will discuss some common practices that we skipped in the previous weeks. These common practices will help us to train a model that generalize well, that is perform well on the new data that we want to predict. from sklearn import datasets import numpy as np import matplotlib.pyplot as plt plt.style.use('seaborn-poster') %matplotlib inline ## Classification Example from sklearn.model_selection import train_test_split from sklearn import metrics from sklearn import preprocessing #get the dataset X, y = iris.data, iris.target # Split the dataset into a training and a testing set # Test set will be the 25% taken randomly X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=33) print(X_train.shape, y_train.shape) (112, 4) (112,) X_train[0] array([ 5. , 2.3, 3.3, 1. 
]) Let's standardize the input features # Standardize the features scaler = preprocessing.StandardScaler().fit(X_train) X_train = scaler.transform(X_train) X_test = scaler.transform(X_test) X_train[0] array([-0.91090798, -1.59761476, -0.15438202, -0.14641523]) #Using svm from sklearn.svm import SVC clf = SVC() clf.fit(X_train, y_train) clf.score(X_test, y_test) 0.94736842105263153 ## Pipeline We can use pipeline to chain all the operations into a simple pipeline: from sklearn.pipeline import Pipeline estimators = [] estimators.append(('standardize', preprocessing.StandardScaler())) estimators.append(('svm', SVC())) pipe = Pipeline(estimators) pipe.fit(X_train, y_train) pipe.score(X_test, y_test) 0.94736842105263153 When evaluating different settings (“hyperparameters”) for estimators, such as the C setting that must be manually set for an SVM, there is still a risk of overfitting on the test set because the parameters can be tweaked until the estimator performs optimally. This way, knowledge about the test set can “leak” into the model and evaluation metrics no longer report on generalization performance. To solve this problem, yet another part of the dataset can be held out as a so-called “validation set”: training proceeds on the training set, after which evaluation is done on the validation set, and when the experiment seems to be successful, final evaluation can be done on the test set. However, by partitioning the available data into three sets, we drastically reduce the number of samples which can be used for learning the model, and the results can depend on a particular random choice for the pair of (train, validation) sets. A solution to this problem is a procedure called cross-validation (CV for short). A test set should still be held out for final evaluation, but the validation set is no longer needed when doing CV. 
In the basic approach, called k-fold CV, the training set is split into k smaller sets (other approaches are described below, but generally follow the same principles). The following procedure is followed for each of the k “folds”: A model is trained using k-1 of the folds as training data; the resulting model is validated on the remaining part of the data (i.e., it is used as a test set to compute a performance measure such as accuracy). The performance measure reported by k-fold cross-validation is then the average of the values computed in the loop. This approach can be computationally expensive, but does not waste too much data (as is the case when fixing an arbitrary test set), which is a major advantage in problems such as inverse inference where the number of samples is very small. ## Computing cross-validated metrics The simplest way to use cross-validation is to call the cross_val_score helper function on the estimator and the dataset. from sklearn.model_selection import cross_val_score scores = cross_val_score(pipe, X, y, cv=5) scores array([ 0.96666667, 0.96666667, 0.96666667, 0.93333333, 1. ]) The mean score and the standard deviation of the score estimate are hence given by: print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std())) Accuracy: 0.97 (+/- 0.02) It is also possible to use other cross-validation strategies by passing a cross-validation iterator instead, for instance: from sklearn.model_selection import ShuffleSplit cv = ShuffleSplit(n_splits=3, test_size=0.3, random_state=0) cross_val_score(pipe, iris.data, iris.target, cv=cv) array([ 0.97777778, 0.93333333, 0.95555556]) ## Using cross-validation to choose parameters For example, if we want to test different values of C for the SVM, we can run the following code and decide the best parameter. We can have a look at all the parameters used in our pipeline with the get_params function. 
pipe.get_params() {'standardize': StandardScaler(copy=True, with_mean=True, with_std=True), 'standardize__copy': True, 'standardize__with_mean': True, 'standardize__with_std': True, 'steps': [('standardize', StandardScaler(copy=True, with_mean=True, with_std=True)), ('svm', SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, decision_function_shape=None, degree=3, gamma='auto', kernel='rbf', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False))], 'svm': SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, decision_function_shape=None, degree=3, gamma='auto', kernel='rbf', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False), 'svm__C': 1.0, 'svm__cache_size': 200, 'svm__class_weight': None, 'svm__coef0': 0.0, 'svm__decision_function_shape': None, 'svm__degree': 3, 'svm__gamma': 'auto', 'svm__kernel': 'rbf', 'svm__max_iter': -1, 'svm__probability': False, 'svm__random_state': None, 'svm__shrinking': True, 'svm__tol': 0.001, 'svm__verbose': False} C_s = np.linspace(0.001, 1000, 100) scores = list() scores_std = list() for C in C_s: pipe.set_params(svm__C = C) this_scores = cross_val_score(pipe, X, y, n_jobs=1, cv = 5) scores.append(np.mean(this_scores)) scores_std.append(np.std(this_scores)) # Do the plotting plt.figure(1, figsize=(10, 8)) plt.clf() plt.semilogx(C_s, scores) plt.semilogx(C_s, np.array(scores) + np.array(scores_std), 'b--') plt.semilogx(C_s, np.array(scores) - np.array(scores_std), 'b--') locs, labels = plt.yticks() plt.yticks(locs, list(map(lambda x: "%g" % x, locs))) plt.ylabel('CV score') plt.xlabel('Parameter C') plt.ylim(0.82, 1.04) plt.show() Alternatively, we can use the GridSearchCV to do the same thing: from sklearn.model_selection import GridSearchCV params = dict(svm__C=np.linspace(0.001, 1000, 100)) grid_search = GridSearchCV(estimator=pipe, param_grid=params,n_jobs=-1, cv=5) grid_search.fit(X,y) GridSearchCV(cv=5, error_score='raise', 
estimator=Pipeline(steps=[('standardize', StandardScaler(copy=True, with_mean=True, with_std=True)), ('svm', SVC(C=1000.0, cache_size=200, class_weight=None, coef0=0.0, decision_function_shape=None, degree=3, gamma='auto', kernel='rbf', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False))]), fit_params={}, iid=True, n_jobs=-1, param_grid={'svm__C': array([ 1.00000e-03, 1.01020e+01, ..., 9.89899e+02, 1.00000e+03])}, pre_dispatch='2*n_jobs', refit=True, return_train_score=True, scoring=None, verbose=0) grid_search.best_score_ 0.97333333333333338 grid_search.best_params_ {'svm__C': 10.102} You can see all the results in grid_search.cv_results_ ## Exercise Using the grid_search.cv_results_ from the GridSearchCV, plot the same figure as above, showing the parameter C vs. the CV score. # Do the plotting plt.figure(1, figsize=(10, 8)) plt.clf() C_s = grid_search.cv_results_['param_svm__C'].data scores = grid_search.cv_results_['mean_test_score'] scores_std = grid_search.cv_results_['std_test_score'] plt.semilogx(C_s, scores) plt.semilogx(C_s, np.array(scores) + np.array(scores_std), 'b--') plt.semilogx(C_s, np.array(scores) - np.array(scores_std), 'b--') locs, labels = plt.yticks() plt.yticks(locs, list(map(lambda x: "%g" % x, locs))) plt.ylabel('CV score') plt.xlabel('Parameter C') plt.ylim(0.82, 1.04) plt.show()
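Conceptually, GridSearchCV is just a loop that keeps the parameter value with the best cross-validated score. A minimal sketch of that idea (the score function here is a toy stand-in, not a real CV score):

```python
def grid_search(param_values, cv_score):
    """Return (best_param, best_score) over the candidate values."""
    best_param, best_score = None, float("-inf")
    for p in param_values:
        s = cv_score(p)  # in scikit-learn this would be a mean CV score
        if s > best_score:
            best_param, best_score = p, s
    return best_param, best_score

# Toy score that peaks at C = 10, purely for illustration:
best = grid_search([0.1, 1, 10, 100], lambda c: -abs(c - 10))
print(best)  # (10, 0)
```

GridSearchCV adds parallelism (n_jobs), refitting on the full data, and bookkeeping of all fold scores in cv_results_, but the selection logic is this loop.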
https://www.originlab.com/doc/LabTalk/guide/Operators
2.3.1.3 Operators Introduction LabTalk supports assignment, arithmetic, logical, relational, and conditional operators: Arithmetic Operators +     -     *     /     ^     &     | String Concatenation + Assignment Operators =     +=     -=     *=     /=     ^= Logical and Relational Operators >     >=     <     <=     ==     !=     &&     || Conditional Operator ? : These operations can be performed on scalars and in many cases they can also be performed on vectors (datasets). Origin also provides a variety of built-in numeric, trigonometric, and statistical functions which can act on datasets. When evaluating an expression, Origin observes the following precedence rules: 1. Exposed assignment operators (not within brackets) are evaluated. 2. Operations within brackets are evaluated before those outside brackets. 3. Multiplication and division are performed before addition and subtraction. 4. The (>, >=, <, <=) relational operators are evaluated, then the (== and !=) operators. 5. Of the logical operators, || takes precedence over &&. 6. Conditional expressions (?:) are evaluated. Arithmetic Operators Origin recognizes the following arithmetic operators: Operator Use + Addition - Subtraction * Multiplication / Division ^ Exponentiate (X^Y raises X to the Yth power) (see note below) & Bitwise And operator. Acts on the binary bits of a number. | Bitwise Or operator. Acts on the binary bits of a number. Note: For 0 raised to the power n (0^n), if n > 0, 0 is returned. If n < 0, a missing value is returned. If n = 0, then 1 is returned (if @ZZ = 1) or a missing value is returned (if @ZZ = 0). These operations can be performed on scalars and on vectors (datasets). For more information on scalar and vector calculations, see Performing Calculations below. The following example illustrates the use of the exponentiate operator: Enter the following script in the Command window: 1.3 ^ 4.7 = After pressing ENTER, 3.43189 is printed in the Command window. 
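As a quick sanity check, the same exponentiation can be evaluated in Python, where ** plays the role of LabTalk's ^:

```python
# Check LabTalk's "1.3 ^ 4.7 =" example with Python's ** operator:
result = 1.3 ** 4.7
print(result)  # approximately 3.43189, matching the Command window output
assert abs(result - 3.43189) < 1e-4
```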
The next example illustrates the use of the bitwise and operator. Enter the following script in the Command window: if (27&41 == 9) {type "Yes!"} After pressing ENTER, Yes! is displayed in the Command window. Note: 27&41 == 9 because 27 = 0000000000011011 41 = 0000000000101001 with bitwise & yields: 0000000000001001 (which is equal to 9) Note: Multiplication must be explicitly included in an expression. For example, 2*X must be used instead of 2X to indicate the multiplication of the variable X by the constant 2. Define a constant We can also define global constants in the CONST.CNF file under the User File Folder: //Euler's number const e = 2.718281828459045 • To convert a dataset to a logarithmic scale, use the following syntax: col(c) = log(col(c)); • To convert a dataset back to a linear scale, use the following syntax: col(c) = 10^(col(c)); String Concatenation Very often you need to concatenate two or more strings of either the string variable or string register type. All of the code segments in this section return the string "Hello World." The string concatenation operator is the plus-sign (+), and can be used to concatenate two strings: aa$="Hello"; bb$="World"; cc$=aa$+" "+bb$; cc$=; To concatenate two string registers, you can simply place them together: %J="Hello"; %k="World"; %L=%J %k; %L=; If you need to work with both a string variable and a string register, follow these examples utilizing %( ) substitution: aa$="Hello"; %K="World"; dd$=%(aa$) %K; dd$=; dd$=%K; dd$=aa$+" "+dd$; dd$=; %M=%(aa$) %K; %M=; Assignment Operators Origin recognizes the following assignment operators: Operator Use = Simple assignment. += Addition assignment. -= Subtraction assignment. *= Multiplication assignment. /= Division assignment. ^= Exponential assignment. These operations can be performed on scalars and on vectors (datasets). For more information on scalar and vector calculations, see Performing Calculations in this topic. The following example illustrates the use of the -= operator. 
In this example, 5 is subtracted from the value of A and the result is assigned to A: A -= 5; In the next example, each value in Book1_B is divided by the corresponding value in Book1_A, and the resulting values are assigned to Book1_B. Book1_B /= Book1_A; In addition to these assignment operators, LabTalk also supports the increment and decrement operators for scalar calculations (not vector). Operator Use ++ Add 1 to the variable contents and assign to the variable. -- Subtract 1 from the variable contents and assign to the variable. The following for loop expression illustrates a common use of the increment operator ++. The script prints the data stored in the second column of the current worksheet to the Command window: for (ii = 1; ii <= wks.maxrows; ii++) {type ($(col(2)[ii])); } Logical and Relational Operators Origin recognizes the following logical and relational operators: Operator Use > Greater than >= Greater than or equal to < Less than <= Less than or equal to == Equal to != Not equal to && And || Or An expression involving logical or relational operators evaluates to either true (non-zero) or false (zero). Logical operators are almost always found in the context of Conditional and Loop Structures. Numeric Comparison The most common comparison is between two numeric values. Generally, at least one is a variable. For instance: if aa<3 type "aa<3"; Or, both items being compared can be variables: if aa<=bb type "aa<=bb"; It is also possible, using parentheses, to make multiple comparisons in the same logical statement: if (aa<3 && aa<bb) type "aa is lower"; String Comparison You can use the == and != operators to compare two strings. String comparison (rather than numeric comparison) is indicated by open and close double quotations (" ") either before, or after, the operator. 
The following script determines if the %A string is empty: if (%A == ""){type "empty"}; The following examples illustrates the use of the == operator: x = 1; // variable x is set to 1 %a = x; // string a is set to "x" if (%a == 1); type "yes"; else type "no"; The result will be yes, because Origin looks for the value of %a (the value of x), which is 1. In the following script: x = 1; // variable x is set to 1 %a = x; // string a is set to "x" if ("%a" == 1) type "yes"; else type "no"; The result will be no, because Origin finds the quotation marks around %a, and therefore treats it as a string, which has a character x, rather than the value 1. Conditional Operator (?:) The ternary operator or conditional operator (?:) can be used in the form: Expression1 ? Expression2 : Expression3 This expression first evaluates Expression1. If Expression1 is true (non-zero), Expression2 is evaluated. The value of Expression2 becomes the value for the conditional expression. If Expression1 is false (zero), then Expression3 is evaluated and Expression3 becomes the value for the entire conditional expression. Note that Expressions1 and Expressions2 can themselves be conditional operators. The following example assigns the value which is greater (m or n), to variable: m = 2; n = 3; variable = (m>n?m:n); variable = LabTalk returns: variable = 3 In this example, the script replaces all column A values between 5.5 and 5.9 with 5.6: col(A) = col(A)>5.5&&col(A)<5.9?5.6:col(A); Note: A Threshold Replace function tReplace(dataset, value1, value2 [, condition]) is also available for reviewing values in a dataset and replacing them with other values based on a condition. In the tReplace(dataset, value1, value2 [, condition]) function, each value in the dataset is compared to value1 according to the condition. When the comparison is true, the value may be replaced with Value2 or -Value2 depending on the value of condition. 
When the comparison is false, the value is retained or replaced with a missing value depending on the value of condition. The treplace() function is much faster than the ternary operator. See tReplace(). Performing Calculations You can use LabTalk to perform both • scalar calculations (mathematical operations on a single variable), and • vector calculations (mathematical operations on entire datasets). Scalar Calculations You can use LabTalk to express a calculation and store the result in a numeric variable. For example, consider the following script: inputVal = 21; myResult = 4 * 32 * inputVal; The second line of this example performs a calculation and creates the variable, myResult. The value of the calculation is stored in myResult. When a variable is used as an operand, and will store a result, shorthand notation can be used. For example, the following script: B = B * 3; could also be written: B *= 3; In this example, multiplication is performed with the result assigned to the variable B. Similarly, you can use +=, -=, /=, and ^=. Using shorthand notation produces script that executes faster. Vector Calculations In addition to performing calculations and storing the result in a variable (scalar calculation), you can use LabTalk to perform calculations on entire datasets as well. Vector calculations can be performed in one of two ways: (1) strictly row-by-row, or (2) using linear interpolation. Row-by-Row Calculations Vector calculations are always performed row-by-row when you use the two general notations: datasetB = scalarOrConstant <operator> datasetA; datasetC = datasetA <operator> datasetB; This is the case even if the datasets have a different numbers of elements. Suppose there are three empty columns in your worksheet: A, B, and C. Run the following script: col(a) = {1, 2, 3}; col(b) = {4, 5}; col(c) = col(a) + col(b); The result in column C will be {5, 7, --}. 
That is, Origin outputs a missing value for rows in which one or both datasets do not contain a value. Vector calculations can also involve a scalar. In the above example, type: col(c) = 2 * col(a); Column A is multiplied by 2 and the results are put into the corresponding rows of column C. Instead, execute the following script (assuming newData does not previously exist): newData = 3 * Book1_A; A temporary dataset called newData is created and assigned the result of the vector operation. Calculations Using Interpolation Origin supports interpolation through range notation and X-Functions such as interp1 and interp1xy. Please refer to Interpolation for more details.
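As an aside, the row-by-row semantics described above (a missing value wherever one dataset runs out of rows) can be mimicked in Python, with None standing in for Origin's missing value "--":

```python
def rowwise_add(a, b):
    """Element-wise addition; rows absent from either input yield None."""
    out = []
    for i in range(max(len(a), len(b))):
        if i < len(a) and i < len(b):
            out.append(a[i] + b[i])
        else:
            out.append(None)  # Origin would display '--' here
    return out

print(rowwise_add([1, 2, 3], [4, 5]))  # [5, 7, None]
```

This reproduces the worksheet example: {1, 2, 3} + {4, 5} gives {5, 7, --}.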
https://www.rdocumentation.org/packages/MASS/versions/7.3-25/topics/rms.curv
# rms.curv ##### Relative Curvature Measures for Non-Linear Regression Calculates the root mean square parameter effects and intrinsic relative curvatures, $c^\theta$ and $c^\iota$, for a fitted nonlinear regression, as defined in Bates & Watts, section 7.3, p. 253ff. Keywords nonlinear ##### Usage rms.curv(obj) ##### Arguments obj Fitted model object of class "nls". The model must be fitted using the default algorithm. ##### Details The method of section 7.3.1 of Bates & Watts is implemented. The function deriv3 should be used to generate a model function with first derivative (gradient) matrix and second derivative (Hessian) array attributes. This function should then be used to fit the nonlinear regression model. A print method, print.rms.curv, prints the pc and ic components only, suitably annotated. If either pc or ic exceeds some threshold (0.3 has been suggested) the curvature is unacceptably high for the planar assumption. ##### Value • A list of class rms.curv with components pc and ic for parameter effects and intrinsic relative curvatures multiplied by sqrt(F), ct and ci for $c^\theta$ and $c^\iota$ (unmultiplied), and C the C-array as used in section 7.3.1 of Bates & Watts. ##### References Bates, D. M. and Watts, D. G. (1988) Nonlinear Regression Analysis and its Applications. Wiley, New York. ##### See Also deriv3 ##### Aliases • rms.curv • print.rms.curv ##### Examples # The treated sample from the Puromycin data mmcurve <- deriv3(~ Vm * conc/(K + conc), c("Vm", "K"), function(Vm, K, conc) NULL) Treated <- Puromycin[Puromycin$state == "treated", ] (Purfit1 <- nls(rate ~ mmcurve(Vm, K, conc), data = Treated, start = list(Vm=200, K=0.1))) rms.curv(Purfit1) ##Parameter effects: c^theta x sqrt(F) = 0.2121 ## Intrinsic: c^iota x sqrt(F) = 0.092 Documentation reproduced from package MASS, version 7.3-25, License: GPL-2 | GPL-3
http://math.stackexchange.com/questions/184548/how-to-prove-that-any-two-circles-and-any-two-disks-have-the-same-cardinality
# how to prove that any two circles and any two disks have the same cardinality 1. I am trying to prove that any two circles have the same cardinality. I declared two intervals $[0,2\pi R]$ and $[0,2\pi \widetilde{R}]$ and built a bijection between the two intervals. Is this a correct proof? $2$. The equation for a closed disk is $(x-a)^{2}+(y-b)^{2}\leqslant R^{2}$. Can I prove it in the same way that I proved the circle cardinality, but now using the area $\pi R^{2}$ instead of the circumference? Thanks - Notice that any continuous curve has the cardinality of $\mathbb{R}$, being an injective image of $[0, 1]$, and also every set containing an open set can be shown to contain a continuous curve and thus has the same cardinality. – Karolis Juodelė Aug 20 '12 at 11:15 $(1)$. Need to be careful, you want to prove a geometric fact. Note you want to use half-open intervals, like $[0,2\pi R)$. One cannot assess what you did without being given some detail. $(2)$. Again, you need an explicit geometric bijection. – André Nicolas Aug 20 '12 at 11:32 Hello Nicolas: why did you write the half-open interval $[0,2\pi R)$? If the point $2\pi R$ is a member, it is the last point of the circle. – Hernan Aug 20 '12 at 12:18 @Hernan: Circles don't have endpoints. – Hurkyl Oct 29 '12 at 8:13 For discs, the map $$f(r,\theta )=(\frac{r R_2}{R_1},\theta)$$ will give a bijection from the disc of radius $R_1$ to the disc of radius $R_2.$ Yeah, your proof is also correct, but I found this one easier to visualize. Here $(r,\theta )$ are the polar coordinates of a point in the plane – pritam Aug 20 '12 at 10:34
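For the record, both bijections mentioned in the thread can be written out explicitly (using half-open intervals for the arc-length parametrization, as suggested in the comments):

```latex
% Circles: parametrize each circle by arc length on a half-open interval
% and rescale; g is a bijection, so the circles have the same cardinality.
g : [0, 2\pi R) \to [0, 2\pi \widetilde{R}), \qquad
g(t) = \frac{\widetilde{R}}{R}\, t .

% Disks: rescale the radial polar coordinate, as in the posted answer.
f(r, \theta) = \left( \frac{r R_2}{R_1},\ \theta \right),
\qquad 0 \le r \le R_1 .
```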
https://www.physicsforums.com/threads/infinite-sum-converge-to-what-value.164785/
Homework Help: Infinite sum converge to what value? 1. Apr 9, 2007 pivoxa15 1. The problem statement, all variables and given/known data The infinite series (-1)^n(x/n) from n=1 converges. But what is its specific value? 2. Apr 9, 2007 Gib Z $$\ln(1+x) = \sum^{\infty}_{n=0} \frac{(-1)^n}{n+1} x^{n+1}$$ $$\sum_{n=1}^{\infty} (-1)^n\frac{x}{n} = x\sum_{n=1}^{\infty} \frac{(-1)^n}{n}=x\log_e 2$$ 3. Apr 9, 2007 ILEW You have put x=1 so it should just be ln(2)? 4. Apr 9, 2007 Gib Z Yup. Exactly. 5. Apr 9, 2007 pivoxa15 But ln(2)>0 and $$\sum_{n=1}^{\infty} (-1)^n\frac{x}{n}<0$$ since the first term is negative and has the largest magnitude so will dominate the series. The series should equal -ln(2), so you may have made an error with your series manipulation. Last edited: Apr 9, 2007 6. Apr 10, 2007 Gib Z Yea sorry about that >.< I made a mistake with the starts of the series, some were n=1 and others n=0, and I didn't handle them well. But you've got the idea
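The corrected value from the thread, -ln 2 at x = 1, is easy to check numerically; by the alternating series estimate, the partial sum of N terms lies within 1/(N+1) of the limit:

```python
import math

def partial_sum(n_terms):
    """Partial sum of sum_{n=1}^{N} (-1)^n / n."""
    return sum((-1) ** n / n for n in range(1, n_terms + 1))

s = partial_sum(100_000)
print(s, -math.log(2))  # both approximately -0.6931
assert abs(s - (-math.log(2))) < 1e-4  # alternating series bound: 1/(N+1)
```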
http://mathematica.stackexchange.com/questions?page=218&sort=newest
# All Questions 121 views ### Standard Deviation and StandardDeviationFilter I found this scant description of StandardDeviationFilter in the documentation, implying one could use it to generate a moving standard deviation: I've got a ... 286 views ### How can I access the internal function that plots a molecule from a formatted XYZ file? Mathematica has the ability to plot a molecule using the data contained in an XYZ file. This is a simple text file, of which this is an example. The molecule is plotted using the Import command. ... 124 views ### Arrow pointing upwards in a Graphics3D I am trying to write a code for an arrow that points upwards from the center of the circle in a Graphics3D. ... 72 views ### Sharing styles between notebooks (Mathematica v9) [duplicate] It appears from the documentation that we share styles via stylesheets. I am trying to set up a customised default stylesheet for my notebooks. I have tried simple ideas, and more complex ones, ... 119 views ### Hue function for negative arguments I want to use Hue function for different colours with the argument being a function, i.e., Hue[f[i,j]] Now, ... 105 views ### “And” to continue evaluation after “False” Below, is it possible to print No and have x be 2. That is, I'm looking for an And-like function that looks at all arguments ... 148 views ### Put local variables for Block in a variable [duplicate] Is is possible to assign {x = 2, y = 3, z = 4} to a variable var so that one can write ... 88 views ### The proper way to write the input for a certain series Mathematica tells the series below doesn't converge. I think it converges. What would the proper way to write things be as an input? ... 773 views ### How do I save my animation/manipulation in pdf? Is it possible to presserve my animation and manipulation plots in the pdf? 222 views ### Is there an equivalent of “shiftdim” of MATLAB? 
Recently I'm spending my time implementing some computer vision algorithms, which usually handle a large amount of data. The problem I'm facing now is that I have to reform my video data to pass it ... 154 views ### Strange spikes in my surface I'm getting strange spikes in my surface S with the code below. PlotPoints seems to help, but it doesn't solve the problem. Any ... 263 views ### Upon PDF export, edgeless rectangles do not tile perfectly, and corners may be cut. Is this a bug? Note: this problem is no longer present in version 10. Exporting the following as a PDF file ... 558 views ### distance between two curves I try to compute the distance between two curves. I use the EuclideanDistance to do that. Here my code: ... 188 views ### Plot a function over a specific domain I have a function that is defined on a specific domain for example the function $$f(x,y)=(x-0.5)*(y-0.5)$$ defined on $\Sigma$ which is the circle $(x-0.5)^2+(y-0.5)^2=0.5^2$ How to plot $f$ over ... 128 views ### Any way to fully expand an expression of two operators? I am looking for a way to expand a complicate expression like the follow (A o B + C o (DxE))*(D o (ExF) - ExA + A) Here o and x are the operator. I want to ... 245 views ### Coloring with Hue for a function on a lattice grid I wish to color a 2-dimensional lattice grid according to the value of a function at each lattice-node. More specifically, if I have 9 angles in a 3x3 array, ... 193 views ### Matrix multiplication that includes a tensor How would I best express the following in Mathematica: $\begin{pmatrix}2 & 4\end{pmatrix} \begin{pmatrix}r_1 & r_2\\r_3 & r_4\end{pmatrix} \begin{pmatrix}6 \\ 8\end{pmatrix}$, where $r_i$ ... 262 views ### Using a user defined NormFunction in FindFit or NDSolve I would like to use a different norm instead of the 2-norm in FindFit (Mathematica 9). For example, instead of using \sqrt{\sum (x_{\mathrm{model}} - ... 59 views ### How can a 3DBarChart be made to be fixed size and proportions? 
I have a 3DBarChart that adds bar graphs as would be required by the dataset, but as it gets wider it runs off the page and cannot be printed. Is there a way to fix the height & width of the ...

### How to extract the coefficient from an expression
Here is the expression. I want to extract the coefficient just as in the table shown below. I can do it by hand, but I expect code that can automatically identify how many different terms ...

### Conditional statements in initial conditions?
This is potentially a daft question, but I thought I'd ask it; I have some material free to diffuse in a boundary between rn and ro; I've been able to get it working nicely for Neumann-type boundary ...

### Replace "," in a list with "."
I have a list which consists of numbers which use comma (,) instead of dot (.) as their decimal point. I would like to replace the commas, but only those commas which are followed by more than five ...

### How can I consistently get a good logistic regression fit?
I'm executing the command NonlinearModelFit[data, c/(1 + a Exp[-b x]), {a, b, c}, x] with the data being ...

### How to judge if a point is in the interior of a closed curve or not? [duplicate]
For example: pts = {{0, 1}, {-(Sqrt[3]/2), -(1/2)}, {Sqrt[3]/2, -(1/2)}}; trig = JoinedCurve[Line[pts], CurveClosed -> True]; Then ...

### Reflection transform of function [duplicate]
I am trying to find the reflection function. Here is my function and its graph. ...

### Connecting Mathematica to a SOCKS5 tunnel proxy
I have Mathematica set up normally and connections work. However, when I want to use it to connect through my SSH tunnel program (Tunnelier), it throws the error PacletSiteUpdate::err: An error ...

### Non linear equation phase space
As a supplement to my question "solution of differential equation" I post a new question of how it is possible to make a Table that has as elements the solutions of a non linear differential equation, ...

### Set all instances of Exp[-x_] to zero?
To simplify a huge expression efficiently, which involves a variable in a bunch of exponential functions going to infinity, I have tried to substitute ...

### Pop Up for Setting directory path
I am trying to build certain modules in Mathematica 9. I want it to dynamically ask for the directory path (like a file explorer) when the code cell is executed. How can I do that? ...

### Error messages in importing data file
I want to copy the data in a public file (link in code below) and put each row of the data as a sub-list, i.e., in the format ...

### How to Import random elements of huge data files
I am calculating huge data files with an external program. I would then like to import the data into Mathematica for analysis. The files are 2 columns and up to many millions of rows. So for ...

### Triggering actions when a variable is set
Some built-in variables trigger actions when their values are changed: ...

### TableForm behavior change
In an older version NoteBook, I was able to specify some options for TableForm, which enabled the printing of an expression in a simple fashion: ...

### Runaway MathKernel!
I was running a simulation and everything went south (my fault, stupid coding error). So I quit the kernel. Everything was running very slowly, as if it were doing some kind of large calculation. ...

### How can we suppress the asymptotic notation in Series? [closed]
Series expands a function, and also gives an idea of the asymptotic bounds of the function: Series[$\frac{1-x^3}{1-x}$] returns: $1 + x + x^2 + O(x)^5$ I'd like ...

### large matrix eigenvalue problem
I need to solve a very large complex matrix (not sparse and not symmetric) eigenvalue problem, e.g., 1e4*1e4 or even 1e6*1e6. How large dimensions of the matrix can Mathematica support? And how about ...

### Problem with working precision
I have tried to resolve the problem of the following link, "How can I solve precision problem". I can state the problem described in that link briefly here: it does not matter how much precision there is after ...

### Animating the Lorenz Equations
I am trying to use the Animate command to vary a parameter of the Lorenz Equations in 3-D phase space and I'm not having much luck. The equations are: ...

### looking for a generalised Hough Transform function or at least a function to locate circles
I am looking for a generalised Hough Transform function or at least a function to locate circles (position of center and radius) in an image. There is the standard Hough line search function ...

### How can I solve precision problem [duplicate]
I want to set 2 decimal places, whether it's a real number or anything. For that purpose I wrote the following function. ...
https://cracku.in/ssc-chsl-19-march-2018-evening-shift-question-paper-solved
# SSC CHSL 19 March 2018 Evening Shift

Instructions: For the following questions, answer them individually.

Question 1: Amit is five years older than Vaibhav at present. After four years the ratio of their ages will be 5:4. What is Amit's age (in years) at present?

Question 2: If $$\frac{7x+9y}{3x-4y}=\frac{19}{8}$$, then the value of $$\frac{x}{y}$$ is ___________.

Question 3: If $$p+\frac{1}{p}=\sqrt{10}$$, then find the value of $$p^4+\frac{1}{p^4}$$.

Question 4: If $$z=6-2\sqrt3$$, then find the value of $$(\sqrt{z}-\frac{1}{\sqrt{z}})^2$$.

Question 5: What is the total number of circles passing through two fixed points?

Question 6: If $$\angle ABC$$ and $$\angle ACB$$ of triangle ABC are $$80^\circ$$ and $$60^\circ$$ respectively, and the incenter of the triangle is at point I, calculate $$\angle BIC$$.

Question 7: The value of a machine depreciates at the rate of 20% per annum. If its present value is Rs 96000, then what was the value (in Rs) of the machine 2 years ago?

Question 8: The incomes of S and T are in the ratio 3 : 4 and their expenditures are in the ratio 1 : 1. If S saves Rs 4000 and T saves Rs 22000, then what will be the income (in Rs) of S?

Question 9: X and Y started a business by investing Rs 171000 and Rs 243000 respectively. If X's share in the profit earned at the end of the year is Rs 3800, then what will be the total profit (in Rs) earned by them together?

Question 10: The average expenditure of Raman for 5 days is Rs 130. If his expenditure for the first 4 days is Rs 100, Rs 125, Rs 85 and Rs 160 respectively, then what is his expenditure (in Rs) on the $$5^{th}$$ day?
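As a quick plausibility check (my own worked sketch, not part of the question paper), the first two questions reduce to one-line linear equations:

```python
from fractions import Fraction

# Question 1: A = V + 5 and (A + 4)/(V + 4) = 5/4.
# Substituting gives 4(V + 9) = 5(V + 4), so V = 16 and A = 21.
vaibhav = 16
amit = vaibhav + 5
assert Fraction(amit + 4, vaibhav + 4) == Fraction(5, 4)

# Question 2: (7x + 9y)/(3x - 4y) = 19/8.  Setting y = 1 and
# cross-multiplying: 56x + 72 = 57x - 76, so x = 148, i.e. x/y = 148.
x = Fraction(72 + 76, 57 - 56)
assert (7 * x + 9) / (3 * x - 4) == Fraction(19, 8)

print(amit, x)  # -> 21 148
```

The remaining questions follow the same pattern of setting up and solving a single equation.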
https://www.coursehero.com/file/6267126/312HW52011/
312HW52011

EE 312 Digital Electronics, Take Home Exam Part 5. Due: April 6-7, 2011.

TTL Gates

Q.1. Consider the following TTL circuit.

Figure 1. Schematic for totem-pole TTL.

For the transistors: β_F = 40, β_R = 2, V_CE(SAT) = 0.2 V, V_BE(FA) = 0.7 V, V_BE(SAT) = 0.8 V, V_BC(RA) = 0.7 V.
For the diodes: V_D(ON) = 0.7 V.
For the supplies: V_CC = 5 V.
Resistors: R_BA = R_BB = 3k, R_C = 2.5k, R_CP = 120, R_D = 8k.

(a) Obtain the truth table and determine the function of this TTL gate assuming proper operation (you do not need to verify the states with any calculations). Use the table given below and clearly indicate the state of each semiconductor device (i.e., ON is not accepted as a state for a transistor).

| V_IN,A | V_IN,B | Q_AI | Q_BI | Q_SA/Q_SB | Q_P | D_L | Q_O | V_OUT |
|---|---|---|---|---|---|---|---|---|
| H | H | | | | | | | |
| H | L | | | | | | | |
| L | H | | | | | | | |
| L | L | | | | | | | |

(b) Obtain the voltage transfer characteristics of this gate, i.e., V_IN,A versus V_OUT, by determining all the breakpoints, assuming V_IN,B is connected to a low voltage.

For parts (c) and (d), assume that this gate is driving similar gates. The driven gates have their two inputs shorted to each other.

(c) Find the maximum fan-out of this gate when the output is high, assuming that the minimum tolerated V_OH is 2.8 V.

(d) Find the maximum fan-out of this gate when the output is low, assuming that V_IN,A = V_IN,B = 5 V.
https://mathoverflow.net/questions/116249/what-is-the-ideal-corresponding-to-the-pl%c3%bccker-embedding/116253
# What is the ideal corresponding to the Plücker embedding?

Let $S$ be a noetherian scheme, $\mathcal{E}$ a quasi-coherent sheaf on $S$ and let $d \in \mathbb{N}$. There is a Plücker embedding $\omega : \mathrm{Grass}_d(\mathcal{E}) \hookrightarrow \mathbb{P}(\wedge^d \mathcal{E})$. A very elegant functorial construction can be found in EGA I, 9.8. My question is: how can we describe the corresponding quasi-coherent ideal $I$ on $\mathbb{P}(\wedge^d \mathcal{E})$ globally?

More precisely, if $\mathcal{E}$ is coherent, then by results of EGA II there is an epimorphism $\oplus_i M_i \otimes_{\mathcal{O}_S} \mathcal{O}(n_i) \twoheadrightarrow I$ for some coherent $\mathcal{O}_S$-modules $M_i$ and integers $n_i$. I would like to know if one can write this down without using a presentation of $\mathcal{E}$.

The answer in the special case $\mathcal{E} = \mathcal{O}_S^{\oplus I}$ for some set $I$ is well-known (at least when $S$ is a field and $I$ is finite, but the general case works the same; does anybody know a reference where this is done?): the Plücker relations generate $I$. More precisely, let $\mathcal{O}_{\mathbb{P}}(1)$ be the universal invertible sheaf on $\mathbb{P}(\wedge^d \mathcal{E})$ together with its universal epimorphism $s : \wedge^d \mathcal{E} \otimes_{\mathcal{O}_S} \mathcal{O}_{\mathbb{P}} \twoheadrightarrow \mathcal{O}_{\mathbb{P}}(1)$. Then define $P : \wedge^{d-1}(\mathcal{E}) \otimes_{\mathcal{O}_S} \wedge^{d+1}(\mathcal{E}) \otimes_{\mathcal{O}_S} \mathcal{O}_{\mathbb{P}} \to \mathcal{O}_{\mathbb{P}}(2),$ $${\small f_1 \wedge \dotsc \wedge f_{d-1} \otimes e_0 \wedge \dotsc \wedge e_d \mapsto \sum_{k=0}^{d} (-1)^k s(f_1 \wedge \dotsc \wedge f_{d-1} \wedge e_k) \otimes s(e_0 \wedge \dotsc \wedge \widehat{e_k} \wedge \dotsc \wedge e_d).}$$ Then $I$ is the image of $\check{P} : \wedge^{d-1}(\mathcal{E}) \otimes_{\mathcal{O}_S} \wedge^{d+1}(\mathcal{E}) \otimes_{\mathcal{O}_S} \mathcal{O}_{\mathbb{P}}(-2) \to \mathcal{O}_{\mathbb{P}}$.
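For concreteness (my example, not from the question): in the smallest non-trivial classical case $d = 2$, $\mathcal{E} = \mathcal{O}_S^{\oplus 4}$, writing $p_{ij} = s(e_i \wedge e_j)$ for $1 \le i < j \le 4$, these relations reduce to the single Plücker quadric cutting out $\mathrm{Grass}_2(k^4)$ inside $\mathbb{P}^5$:

```latex
p_{12}\,p_{34} \;-\; p_{13}\,p_{24} \;+\; p_{14}\,p_{23} \;=\; 0 .
```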
For general $\mathcal{E}$, these Plücker relations are also satisfied, but I couldn't prove the converse, and meanwhile I'm convinced that we need more relations. If it helps, you may assume that $2$ is invertible on $S$.

*Comment:* Note that I am not looking for the equations in the classical special case $\mathrm{Grass}_d(R^n) \hookrightarrow \mathbb{P}(\wedge^d R^n)$. – Martin Brandenburg Dec 13 '12 at 9:48

## 2 Answers

References for this purpose: a series of papers initiated by C. S. Seshadri, Lakshmibai and Musili develops "Standard Monomial Theory" to deal with this. It gives equations for Schubert varieties, describes their singular loci, and proves many cohomology-vanishing theorems for line bundles on them. V. Lakshmibai & K. N. Raghavan have written a book published by Springer (2008), Encyclopaedia of Mathematical Sciences, 137.

*Comments:* You might start with Seshadri's "Standard Monomial Theory -- A Historical Account" (in volume 2 of his collected works) or Lakshmibai/Littelmann/Magyar's "Standard Monomial Theory and Applications" (available online). – Michael Joyce Dec 13 '12 at 12:39 | They only deal with the classical case, i.e. $S=\mathrm{Spec}(k)$ for a field $k$ and $\mathcal{E}=k^n$. – Martin Brandenburg Dec 13 '12 at 17:31

I am not sure if it fits what you are looking for, but there is a sort of a description of this ideal sheaf in Section 12.A of Hacon-Kovács, *Classification of Higher Dimensional Algebraic Varieties*.

*Comment:* Thank you, but this only contains the special case that $\mathcal{E}$ is locally free of finite rank. – Martin Brandenburg Dec 13 '12 at 17:31
https://www.nature.com/articles/s41598-018-38183-1
# Ultra-Short Pulse Generation in a Three Section Tapered Passively Mode-Locked Quantum-Dot Semiconductor Laser

## Abstract

We experimentally and theoretically investigate the pulsed emission dynamics of a three section tapered semiconductor quantum dot laser. The laser output is characterized in terms of peak power, pulse width, timing jitter and amplitude stability, and a range of outstanding pulse performance is found. A cascade of dynamic operating regimes is identified and comprehensively investigated. We propose a microscopically motivated traveling-wave model, which optimizes the computation time and naturally allows insights into the internal carrier dynamics. The model excellently reproduces the measured results and is further used to study the pulse-generation mechanism as well as the influence of the geometric design on the pulsed emission. We identify a pulse-shortening mechanism responsible for the device performance that is unique to the device geometry and configuration. The results may serve as future guidelines for the design of monolithic high-power passively mode-locked quantum dot semiconductor lasers.

## Introduction

Passively mode-locked semiconductor lasers are photonic light sources that produce sequences of short equidistant optical pulses at high repetition rates without the need for an external driving frequency1,2. They find a multitude of applications in optical data communication3,4, metrology5,6, medical imaging7 and optical clocking8. Monolithically integrated semiconductor-based designs have the advantages of straightforward growth and processing while keeping a small footprint, which makes them favorable for future photonic integration9. However, spontaneous emission noise and the absence of an external reference clock in such devices lead to relatively pronounced timing and amplitude jitter10,11,12,13, which are limiting factors for applications.
Techniques such as hybrid mode-locking14,15,16, optical injection17,18, and optical and opto-electronic self-feedback19,20,21,22,23,24,25,26,27,28,29 make it possible to improve the timing stability considerably, but they come at the cost of additional electronics and optics, which need to be properly calibrated and controlled. To avoid this, it is highly desirable to optimize the laser design such that an excellent pulse train stability can be achieved without additional control schemes. One optimization approach focuses on the device geometry and cavity design, where precise tuning of the saturable absorber (SA) length30,31 and of the reflectivity of the adjacent facet32,33 can lead to shorter pulses and an increased pulse train stability. Moreover, a tapered gain section can lead to additional pulse shortening and a strong increase in output power34,35,36,37.

Employing semiconductor quantum dots as an active medium comes with advantages such as high differential gain, ultra-fast recovery, broad gain spectra, small chirp and low temperature sensitivity, due to their atom-like discrete energy levels37,38,39,40,41. These properties can be employed to generate stable mode-locked pulse trains with sub-ps pulses at high repetition rates3,34,36,42. By positioning the absorber section at different cavity positions, the pulse peak power and the mode-locking performance can be improved43,44.

In this work, we experimentally and theoretically investigate the optical pulse performance and emission dynamics of a three section tapered semiconductor quantum dot laser with a saturable absorber section positioned at approximately one third of the cavity length. The laser output is characterized in terms of peak power, pulse width, and timing and amplitude stability. A semi-classical traveling-wave model excellently reproduces the measurements and is further used to study the spatio-temporal pulse evolution and the pulse-generation mechanism.
The paper is organized as follows: Section 2 introduces the device and describes the experimental characterization setup. The results are presented in Sec. 3, which is divided into subsections: the measured and simulated dynamics and performance figures are presented in Sec. 3.1; the pulse-generation and shaping mechanism in the fundamental mode-locking regime is analyzed in Sec. 3.2; the influence of the taper angle is investigated in Sec. 3.3; and the influence of the saturable absorber position is studied in Sec. 3.4. Finally, conclusions are drawn in Sec. 4. Additionally, Sec. 5.1 develops the numerical model and describes the simulation techniques.

## Device and Setup

The three-section laser consists of 10 layers of InAs quantum dots grown on a GaAs substrate using molecular beam epitaxy. The cavity length amounts to 3 mm, corresponding to a repetition rate of 13.24 GHz. The laser has been processed into a 0.7 mm long straight section, a 0.7 mm long absorber section and a 1.6 mm long tapered section with a full taper angle of 2°. We denote the left side of the saturable absorber as the SA starting position $z_{\mathrm{SA}}^{\mathrm{s}} = 0.7$ mm. A sketch of the device geometry is shown in Fig. 1(a). The straight section width is 14 μm. Confinement of the optical field in the lateral direction is achieved by gain-guiding. In our simulations, we assume an effective active region width of $w_0 = 4$ μm to approximate the effects of the gain-guided structure, while lateral dimensions are not taken into account. The tapered output facet on the right is anti-reflection coated (AR), resulting in a reflectivity $\kappa_R = 0.03$, while the facet on the left is high-reflection coated (HR), resulting in a reflectivity $\kappa_L = 0.95$. Figure 1(b) shows a light microscope picture of the laser. Figure 1(c) shows a schematic, its biasing and the developed pulsed-emission characterization setup.
Lasing emission is collimated and sent through an optical isolator to prevent unwanted back reflections, which would alter the dynamics of the laser. The analysis of the optical pulse width $\Delta t$ is performed with a nonlinear intensity auto-correlator, and the average optical power $P_{avg}$ is obtained with a power meter. After fiber-coupling the laser emission, the radio-frequency analysis (pulse repetition frequency $f_{rep}$ and repetition linewidth $\Delta\nu$) is performed in a direct-detection configuration using a fast photo-detector connected to an electrical spectrum analyzer. The pulse peak power is estimated as $P_{pk} = P_{avg} \, f_{psf} / (f_{rep} \, \Delta t)$, taking into account an appropriate pulse-shape factor $f_{psf}$36,45. In the experiment, the amplitude jitter is quantified by the relative standard deviation of the pulse peak power fluctuations and is calculated from the radio-frequency spectrum (electrical bandwidth: 50 MHz to half of the repetition rate)37,46. The temporal pulse train stability is quantified by the standard deviation of the pulse-to-pulse timing fluctuations and is estimated from the repetition linewidth $\Delta\nu$21,47. Mode-locking stability is defined as an amplitude jitter below 3% and a pulse-to-pulse timing jitter below 250 fs, which corresponds to 0.33% of the pulse repetition period.

## Results

### Dynamics and Performance

In this section, we characterize the measured device output in terms of the mode-locking state and pulse performance and compare the results to simulations (see Methods, Section 5.1, for details on the model). We study the laser emission at a reverse bias of U = −6 V, which yields the best performance figures, and scan the pump current. The results are presented in Fig. 2, where the left column shows the measurements and the right column the simulations.
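The peak-power estimate described in the setup section is a one-line calculation; in the sketch below the input numbers are illustrative values of the order reported for this device, not actual measurements:

```python
def peak_power(p_avg, f_psf, f_rep, dt):
    """Pulse peak power P_pk = P_avg * f_psf / (f_rep * dt)."""
    return p_avg * f_psf / (f_rep * dt)

# Illustrative inputs (my assumptions): 60 mW average power,
# sech^2-type pulse-shape factor ~0.88, 13.24 GHz repetition
# rate, 500 fs FWHM pulse width.
p_pk = peak_power(p_avg=0.060, f_psf=0.88, f_rep=13.24e9, dt=500e-15)
# p_pk comes out to roughly 8 W for these inputs.
```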
The top row shows color-coded radio-frequency (RF) spectra, where each spectrum is normalized to its maximum; the middle row shows the pulse peak power (red) and pulse width (blue); and the bottom row shows the amplitude (black) and timing (blue) jitter. Our simulation results are obtained by averaging over 400000 round-trips (≈30 μs) for each pump current. Scanning the pump current from 750 mA to 1100 mA, we use the RF spectrum in Fig. 2(a) and the auto-correlation (AC) signal (not shown) to determine the mode-locking state. As indicated on top of Fig. 2(a), Q-switched mode-locking (QSML) is observed from 750 mA to 780 mA, fundamental mode-locking (FML) from 780 mA to 890 mA, unstable fundamental mode-locking (uFML) from 890 mA to 990 mA and third order harmonic mode-locking (HML3) from 990 mA to 1100 mA. FML and uFML produce a pronounced RF peak at ν ≈ 13.24 GHz, corresponding to the cavity round-trip time, while HML3 produces its first RF peak at ν ≈ 40 GHz, which is outside the detection range. QSML and dynamical instabilities lead to increased low-frequency contributions, which can be seen outside the FML region. Peculiarly, the QSML pulses are spaced by a third of a round trip, leading to only a small peak at the fundamental frequency at ≈13.24 GHz. We explain the occurrence of this inter-pulse spacing by a colliding-pulse mechanism48: two of three evenly spaced pulses meet in the absorber section, which roughly divides the device at the one-third position, and thus saturate it more efficiently. Our simulated spectra are plotted in Fig. 2(b) and exhibit the same sequence of mode-locking states for an increasing pump current P, thus matching the measurements quite well. Within the HML3 region the simulations indicate an RF spectrum with fewer instabilities, indicating more stable operation. Moreover, in the QSML and uFML regions the low frequencies appear at slightly smaller values. We illustrate the different dynamics for an increasing pump current in Fig.
3 with simulated pseudo space-time plots, where time-series are sliced into pieces with the length of the cold-cavity round-trip time and stacked on top of each other to create a color-coded 2D map of the pulse evolution. Q-switched mode-locking (Fig. 3(a)) is composed of sets of broad pulses with inter-pulse spacings of about 25 ps; the QSML is thus running at the third harmonic frequency, which is also observed in the experiment. The slow envelope has a long period of 5 to 10 μs and leads to the low frequency of about 100 to 200 MHz that we find in Fig. 2(b). Pulse emission ceases in between Q-switched bursts. Fundamental mode-locking is represented by a narrow line in the space-time plot (Fig. 3(b)) that tilts to the right, as the pulse period is slightly longer than the cold-cavity round-trip time. Further increasing the pump current leads to a loss of stability of the FML pulse train: noise-induced perturbations create a competing pulse train that periodically ends up taking the gain from the previous pulse train (see Fig. 3(c)). This switch between pulse trains occurs at round-trip numbers ≈20 and ≈80, and it takes about 10 round-trips. The period of this pulse-train switching results in the slow frequency of about 150 MHz in the spectrum plotted in Fig. 2(b). Finally, third order harmonic mode-locking (Fig. 3(d)) resembles FML, except that three pulse trains are observed within one round-trip. Turning our focus back to the characterization of the pump current scan, we use nonlinear intensity auto-correlation and power-meter data to estimate the pulse width (blue) and pulse peak power (red) as plotted in Fig. 2(c). We find pulses as short as 500 fs with about 10 W peak power in the FML and uFML regions. Within that region, the pulse width increases with the pump current from 500 fs to 600 fs, which we attribute to a faster recovery of the gain due to ultra-fast refilling of the GS occupation from the ES49.
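The pseudo space-time construction described above, slicing a time trace into round-trip-long segments and stacking them row-wise, can be sketched in a few lines of NumPy (the sampled pulse train here is synthetic, not simulation output from the paper):

```python
import numpy as np

def pseudo_space_time(power, dt, t_roundtrip):
    """Slice a power time trace into segments of one cold-cavity
    round-trip time and stack them: row = round-trip index,
    column = position within the round trip."""
    n_per_trip = int(round(t_roundtrip / dt))
    n_trips = len(power) // n_per_trip
    return power[:n_trips * n_per_trip].reshape(n_trips, n_per_trip)

# Illustrative: a 13.24 GHz Gaussian pulse train sampled at 50 fs.
dt, t_rt = 50e-15, 1 / 13.24e9
t = np.arange(0, 200 * t_rt, dt)
trace = np.exp(-((t % t_rt) - 0.3 * t_rt) ** 2 / (2 * (500e-15) ** 2))
m = pseudo_space_time(trace, dt, t_rt)
# m can now be displayed as a color-coded 2D map (e.g. with imshow);
# a stable pulse train appears as a nearly vertical line.
```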
In the QSML region, peak power values are below 1 W and the pulse widths exceed 1.5 ps. In the HML3 region, peak power values drop to ≈2.5 W and the pulse width increases to ≈1.5 ps. Plotted in Fig. 2(d), simulation results excellently reproduce the measurements in the QSML, FML and uFML region. Simulated pulses in the HML3 region carry the same energy but differ from the measurement as they are much shorter (≈500 fs) and consequently higher in peak power (≈8 W). We explain this discrepancy in pulse peak power and width by the difference in pulse train stability, which is evident in the measured and simulated spectra plotted in Fig. 2(a,b). While not being affected by this dynamical instability, simulated pulses in the HML3 region benefit from a colliding pulse mechanism which has been shown to reduce pulse width48. To complement the performance figures, we analyze the pulse-train stability in terms of timing (blue) and amplitude jitter (black) in Fig. 2(e). Due to the bandwidth range of the spectrum analyzer, both quantities can only be evaluated in FML and uFML regime from 780 mA to 990 mA. Between 780 mA and 870 mA, we observe pulse-to-pulse timing jitter values below 100 fs and amplitude jitter values below 2%, thus a 90 mA range of excellent stability, which overlaps with the region of shortest pulses. The degradation of the pulse stability beyond 870 mA comes with the increase of the low-frequency noise and side modes in the spectrum, which we associate with the uFML region. Our simulated amplitude and timing jitter are shown in Fig. 2(f), where the same qualitative behavior is reproduced. Quantitatively however, the results differ as the timing and amplitude jitter are directly computed from the pulse distribution within the time series and not as in the experiment indirectly from the RF-spectrum. 
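The two time-domain jitter measures used for the simulations, the relative standard deviation of the pulse peak powers and the standard deviation of the pulse-to-pulse intervals, can be sketched as generic estimators on a synthetic pulse train (this is my illustration, not the authors' code):

```python
import numpy as np

def amplitude_jitter(peak_powers):
    """Relative standard deviation of pulse peak powers, in percent."""
    p = np.asarray(peak_powers, dtype=float)
    return 100.0 * p.std() / p.mean()

def timing_jitter(peak_times):
    """Standard deviation of successive pulse-to-pulse intervals."""
    return np.diff(np.asarray(peak_times, dtype=float)).std()

# Illustrative 13.24 GHz train with small Gaussian perturbations:
# ~10 fs timing noise and ~3% relative amplitude noise.
rng = np.random.default_rng(0)
t0 = 1 / 13.24e9
times = np.cumsum(t0 + rng.normal(0.0, 10e-15, 10000))
powers = 10.0 + rng.normal(0.0, 0.3, 10000)
aj = amplitude_jitter(powers)  # percent
tj = timing_jitter(times)      # seconds
```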
Hence, we obtain a timing jitter between 10 fs and 16 fs and an amplitude jitter between 3% and 6% in the FML region, followed by a large jump to a timing jitter above 1 ps and an amplitude jitter above 20% in the uFML region. Moreover, in the QSML region, we obtain very large timing and amplitude jitter values, as expected. In the HML3 region we find the amplitude jitter to be comparable to the FML region, while the timing jitter, at ≈450 fs, is almost two orders of magnitude larger than in the FML region.

### Ultra-Short Pulse Generation

In order to study the underlying mechanisms involved in the generation of short and stable pulses in the FML regime (810 mA), we use simulations to investigate the internal carrier dynamics of the laser and focus on the pulse evolution along one round-trip. Thus, we adopt the co-moving frame, in which the new time $t'(z)$ is constant along the propagation of a small perturbation within the cavity:

$$t'(z) = \begin{cases} t^-(z) = t + \dfrac{z}{v_g} & \text{for } E^-(z,t), \\ t^+(z) = t - \dfrac{z}{v_g} - \dfrac{l}{v_g} & \text{for } E^+(z,t), \end{cases} \qquad (1)$$

where $l/v_g$ corresponds to the propagation time from one side of the cavity to the other. In this relative time, the pulse propagation along one round trip occurs at the same time, which helps to illustrate the pulse-shaping mechanisms in the different sections of the laser. In Fig. 4(a), we present the full spatio-temporal pulse evolution along one round-trip in an unfolded cavity, i.e. we show the propagation of $E^-(z, t)$ from the right out-coupling facet to the high-reflectivity coated facet on the left side of the plot, and the propagation of $E^+(z, t)$ back to the out-coupling facet on the right side of the plot.
The pulse power is color-coded and normalized to the output power, while the horizontal axis indicates the position within the cavity (note that the axis runs from −3 mm to 0 and back to 3 mm, where the minus sign indicates the backwards-traveling pulse) and the vertical axis the relative time $t'(z)$. As a guide for the eye, the pulse maximum and the leading and trailing half-maximum are indicated by white lines. The FWHM corresponds to the vertical distance between the top and bottom line. The corresponding effective optical gain $g_{GS}(2\rho_{GS} - 1)/2 - \alpha_{int}$ is plotted in Fig. 4(b), where red colors indicate amplification and blue colors absorption. Additionally, the FWHM of the pulses (blue) and the peak power (red), averaged over 20000 round-trips, are shown in Fig. 4(c). Starting at the left side of the plot, i.e. the out-coupling facet of the laser, we first follow the backwards-moving pulse through the tapered gain section, where the pulse is amplified. As the width of the tapered gain medium decreases in this direction, the number of QDs per given section reduces and the gain saturates more easily. This produces an asymmetry in the amplification of the leading and trailing edge of the pulse, which can be seen between −2 mm and −1.4 mm in Fig. 4(b). Especially just before the pulse enters the absorber section, it carries enough energy that the pulse front alone fully bleaches the gain, while the trailing edge is reduced in power by waveguide losses. This firstly leads to a reduction of the pulse FWHM (white lines in Fig. 4(a,b) and blue line in Fig. 4(c)) and secondly to a slight shift of the pulse maximum to earlier times. Upon entering the absorber section, this mechanism reverses: although the pulse does not carry enough energy to completely bleach the absorber, the absorption at and especially past the pulse maximum is significantly weaker, which pushes the pulse maximum to later times and causes a slight rebroadening of the pulse FWHM.
The pulse, which then enters the short gain section at the left side of the device, still carries enough energy to easily bleach the quantum dots. The same mechanism as in the narrow part of the tapered gain section leads to a reduction of the pulse width and again shifts the pulse maximum to later times. The pulse peak power remains almost constant in the short gain section, since gain and waveguide losses nearly balance each other. Upon being reflected at the back facet of the laser, as seen in the middle of Fig. 4(a), the pulse travels back through the straight gain section, where the ultra-fast carrier relaxation from the QD excited state to the ground state39,40 has restored the gain everywhere except right next to the facet. There, waveguide losses still dominate, as indicated by the small vertical blue region in the middle of Fig. 4(b). Along its return through the device, the forwards-moving pulse is further shortened by the interplay of saturable gain and waveguide losses, while the peak power stays roughly constant (Fig. 4(c) from z = 0.0 to z = 0.7). Entering the absorber section, the forwards-moving pulse carries less energy than the backwards-moving pulse, and therefore the pulse front as well as the pulse maximum are reduced before the absorber saturates. This results in a strong increase of the pulse FWHM from ≈370 fs to ≈620 fs, which is accompanied by a significant shift of the pulse maximum to later times. Finally, the right-moving pulse enters the tapered gain section again, where the increase of QDs along the taper prevents the saturation of the gain and therefore ensures an optimal amplification of the pulse (increasing peak power) before reaching the out-coupling facet. Furthermore, the pulse FWHM reduces from ≈620 fs to ≈560 fs along the tapered gain section.
Our averaging procedure also gives us access to the evolution of the amplitude (black) and timing jitter (blue), which are normalized to their out-coupled values and plotted in Fig. 4(d). We observe that amplitude and timing jitter improve in the gain sections and deteriorate in the absorber sections. This behavior is related to the shift of the pulse maximum (see Fig. 4(a,b)), via the recovery process of the gain and absorption. If a pulse comes slightly too early with respect to the previous pulse in a gain section, the available gain is slightly smaller and the pulse undergoes a smaller shift to earlier times. If the pulse comes slightly too late, the shift of the pulse is stronger and therefore the gain sections naturally counteract perturbations of the pulse position by always pulling the pulses towards their equilibrium position. However in the absorber section, the pulse shifting mechanism works exactly in the other direction and thereby amplifies perturbations of the pulse position. A similar argument applies to the evolution of the amplitude jitter: A pulse carrying slightly more energy than the previous pulse will experience less amplification (reducing the perturbation) and less absorption (amplifying the perturbation) during the next round-trip and vice versa. In conclusion, we find that pulses broaden in the absorber section and shorten in the gain sections in our device, which is contrary to the common understanding of the pulse-shaping mechanism in semiconductor mode-locked lasers1,34,50,51. Specific to our device, the short gain section to the left does not contribute to the out-coupled power, but rather functions as a pulse-shortening section. Moreover, due to the intrinsic self-stabilization mechanism against perturbations in the pulse position and power, the short gain section also improves the pulse train stability and thereby largely contributes to the outstanding performance of this device. 
To affirm this conclusion, we numerically simulate a device where we exchange the short gain section with an entirely passive section, i.e. turn off the light-matter interaction in that section, but preserve the features of the resonator and increase the gain coefficient of the tapered gain section to maintain the lasing threshold. As a result, we obtain significantly broader pulses (≈900 fs vs. ≈530 fs) and much more pronounced timing jitter (≈30 fs vs. ≈14 fs) and amplitude jitter (≈7.0% vs. ≈4.5%). This confirms the stabilizing properties of the short gain section in the investigated device. ### Influence of the Taper Angle To investigate the influence of the taper angle, we simulate scans of the pump current for taper angles between Θ = 0° and Θ = 3°. The results are presented in Fig. 5, where color-coded maps of the emission dynamics, the peak power and the pulse width are shown. Focusing on the dynamics plotted in Fig. 5(a), we find that the sequence of operating states the laser goes through as we increase the pump current changes fundamentally with the taper angle. At taper angles close to Θ = 0.0°, no lasing at all is observed as the overall waveguide losses are too large. In a very small region around Θ = 0.1°, cw-lasing (black region in Fig. 5(a)) is found with a high threshold at around P ≈ 1.33A. For an increasing taper angle between 0.1° and 0.58°, the threshold quickly reduces to a minimum of P ≈ 0.48A, as the waveguide losses of the long tapered section decrease. In this region, the laser first emits unstable third order harmonic mode-locking (uHML3, light orange region in Fig. 5(a)) immediately above threshold. At pump currents of around 0.65 A, these pulse trains transition to a window of stable third order harmonic mode-locking (HML3, orange region in Fig. 5(a), cf. 3(d)), which disappears between P ≈ 0.8A and P ≈ 1.0A as the laser returns to uHML3 emission with small pockets of HML3 emission. 
At pump currents around P ≈ 1.5A, the laser transitions to cw-emission. In general, the non-cw emission in this small taper-angle region is characterized by three pulses circulating in the cavity, which is to be expected as the absorber is placed at roughly one-third of the cavity, leading to colliding-pulse mode-locking. For taper angles above Θ = 0.58°, the behavior of the laser changes drastically. Firstly, the lasing threshold shifts to higher pump currents (P ≈ 1A for Θ = 3.0°) for increasing taper angles, since the increased active region leads to a reduced pump-current density, which results in a decreased saturation of the quantum dots. Secondly, after crossing the threshold, instead of uHML3 or HML3, Q-switched mode-locking (QSML, purple region in Fig. 5(a), cf. 3(a)) is observed, followed by fundamental mode-locking (FML, blue region in Fig. 5(a), cf. 3(b)). The pump-current range of stable fundamental mode-locking increases with the taper angle from a few mA at Θ = 0.58° to about 230 mA at Θ = 3.00°. Further increasing the pump current results in a leading-edge instability, producing unstable fundamental mode-locking (uFML, cyan, cf. 3(c)). This region is then followed by HML3, which connects to the region of HML3 that is produced at Θ < 0.58°. We explain the occurrence of fundamental mode-locking in a device where a colliding-pulse mechanism should favor third order harmonic mode-locking by the asymmetry in the gain-saturation energies of the left straight gain section and the right tapered section. As the saturation energy is proportional to the number of quantum dots31 of a section, increasing the taper angle directly increases the saturation-energy asymmetry in the device. We therefore conclude that for the given parameters, a taper angle of at least Θ = 0.58° is required to observe fundamental mode-locking. Furthermore, the peak power and the pulse width are plotted in Fig. 5(b,c), respectively. 
Pulses are generally the shortest at the lower pump current stability boundary of the stable mode-locking regions. With no substantial dependence on the taper angle, we observe ≈500 fs pulses at the onset of FML emission and ≈470 fs pulses at the onset of HML3 emission. Similarly, but not plotted, the timing and amplitude jitter values for the FML and HML3 regions do not change with the taper angle and remain close to the values reported in Sec. 3.1, where a full taper angle of Θ = 2.0° was chosen. The highest achievable peak power, however, does critically depend on the taper angle, with the highest values observed at the upper pump current stability boundary of the FML region (see Fig. 5(b)). Within the FML region, the maximum peak power increases with the taper angle. While we observe ≈10 W (experiment and simulation) with a taper angle of 2°, we predict only ≈5 W at a taper angle of Θ = 0.6° and up to ≈15 W peak power at a taper angle of Θ = 3°. ### Influence of the Saturable Absorber Section Position Lastly, we study the influence of the saturable absorber (SA) position within the three-section cavity. Using our numerical model, we perform scans of the pump current for SA starting positions from $${z}_{{\rm{SA}}}^{{\rm{s}}}=0.0$$ mm to $${z}_{{\rm{SA}}}^{{\rm{s}}}=0.7$$ mm, while keeping the length of the absorber section constant at 0.7 mm. The former configuration with $${z}_{{\rm{SA}}}^{{\rm{s}}}=0.0$$ mm corresponds to the traditional two-section design with the saturable absorber at the highly reflecting end of the cavity, while the latter configuration with $${z}_{{\rm{SA}}}^{{\rm{s}}}=0.7$$ mm corresponds to the configuration we study in Sec. 3.1–3.3. The resulting emission dynamics for different absorber positions as a function of the pump current are shown in Fig. 6(a), where the observed mode-locking states are again depicted color-coded as in Fig. 5(a). While the previously described dynamics are seen in the top part of Fig. 
6(a), we find two new states for SA starting positions below $${z}_{{\rm{SA}}}^{{\rm{s}}}=0.5$$ mm, which are colored in green. Both regimes exhibit two pulses circulating in the cavity, but contrary to second order harmonic mode-locking the pulses have different inter-pulse spacings. We therefore refer to them as asymmetric two-pulse states (A2P, dark green regions in Fig. 6). Similar pulse emission was also found in a two-section quantum-well based mode-locked laser52 and in V-shaped external cavities53. They are found at pump currents above P ≈ 1.1A for SA starting positions between $${z}_{{\rm{SA}}}^{{\rm{s}}}=0.15$$ mm and $${z}_{{\rm{SA}}}^{{\rm{s}}}=0.5$$ mm. Below $${z}_{{\rm{SA}}}^{{\rm{s}}}=0.15$$ mm, unstable asymmetric two-pulse states (uA2P, light green regions in Fig. 6) are observed, which are similar to the uFML emission discussed in Fig. 3(c), but with two adjacent pulses in the cavity that switch their position together. While the lasing threshold remains constant under spatial shifts of the absorber section, Q-switched mode-locking as the first mode-locking state occurs only for SA starting positions from ≈0.55 mm to 0.7 mm. Below $${z}_{{\rm{SA}}}^{{\rm{s}}}\approx 0.55$$ mm, the laser emits fundamental mode-locking right after the threshold and thereby increases the pump current range for stable FML. The upper stability boundary of the FML regime increases for a decreasing SA starting position from ≈0.91 A at $${z}_{{\rm{SA}}}^{{\rm{s}}}=0.7$$ mm to almost 1.2 A at $${z}_{{\rm{SA}}}^{{\rm{s}}}=0.0$$ mm, which results in an increase of the FML range from 110 mA at $${z}_{{\rm{SA}}}^{{\rm{s}}}=0.7$$ mm to almost 450 mA at $${z}_{{\rm{SA}}}^{{\rm{s}}}=0.0$$ mm. Figure 6(b) exemplifies the asymmetric two-pulse state with a space-time diagram for the parameter set indicated with a black dot in Fig. 6(a). 
The pulse spacings in that case are ≈21.2 ps and ≈54.0 ps, meaning the pulses meet inside the absorber section, thus indicating a colliding-pulse effect. As the absorber section is shifted towards $${z}_{{\rm{SA}}}^{{\rm{s}}}=0.0$$ mm, the pulse spacings change to ≈14 ps and ≈61.2 ps. To examine the influence of the SA position on the pulse performance, we plot the peak power (red) and pulse width (blue) in Fig. 6(c) and the amplitude (black) and timing jitter (blue) in Fig. 6(d) along the black line of constant pump current in Fig. 6(a). Decreasing the SA starting position from $${z}_{{\rm{SA}}}^{{\rm{s}}}=0.7$$ mm to $${z}_{{\rm{SA}}}^{{\rm{s}}}\approx 0.1$$ mm, the peak power increases by ≈3.1 W from ≈9.6 W to ≈12.7 W, while the pulse width remains almost constant. Further decreasing the SA starting position to $${z}_{{\rm{SA}}}^{{\rm{s}}}=0.0$$ mm, however, leads to an increase of the pulse width from ≈530 fs to ≈600 fs, which causes a rollover of the peak power. This behavior is caused by the time scales involved: for a SA starting position closer to the cavity back facet, the absorber has less time to recover before a reflected pulse re-enters, and thus fewer photons are absorbed, leading to a greater peak power. Down to $${z}_{{\rm{SA}}}^{{\rm{s}}}\approx 0.1$$ mm, the pulse shortening in the straight gain sections occurs to the left and to the right of the absorber section. For $${z}_{{\rm{SA}}}^{{\rm{s}}}\, < \,0.1$$ mm, however, the right-moving pulse that leaves the absorber is too weak to fully saturate the gain section and therefore the pulse-shortening mechanism is reduced in its impact, leading to a broadening of the pulses at the out-coupling facet. Amplitude and timing jitter both improve when shifting the saturable absorber towards the back facet. 
The amplitude jitter reduces from ≈4.5% to ≈2.6% and the timing jitter from ≈14 fs to ≈5 fs, with a small kink in the curve at $${z}_{{\rm{SA}}}^{{\rm{s}}}\,\approx \,0.1$$ mm due to the rollover of the peak power. We attribute this improvement again to the reduced absorber recovery time, which reduces the deterioration of the jitter for the right-moving pulse within the absorber section. Based on these simulations, we predict an optimum performance for a saturable absorber starting position of $${z}_{{\rm{SA}}}^{{\rm{s}}}\,\approx \,0.1$$ mm. This configuration might reduce the amplitude and timing jitter by about 50%, increase the peak power by about 30% and avoid the pulse-width broadening that is seen for smaller SA starting positions. ## Conclusion We studied experimentally and by simulations the pulsed emission dynamics of a semiconductor quantum dot based three-section tapered passively mode-locked laser. We demonstrated stable optical pulse train generation across a 90 mA wide pump current range with pulse peak powers of up to 10 W, optical pulses as short as 500 fs and a pulse-to-pulse timing jitter below 100 fs. By combining a traveling-wave equation for the field propagation with the Maxwell-Bloch equations for the field-matter coupling and microscopically motivated rate equations for the electronic degrees of freedom, we derived a model which excellently reproduces the experimental results. Using the numerical simulations, we have performed an in-depth analysis of the spatio-temporal dynamics in the FML regime. We identified an uncommon pulse-shaping mechanism contrary to the established understanding: due to the interplay of high pulse powers, saturable gain and absorption, and waveguide losses, the pulses broaden in the absorber section and shorten in the gain sections. 
Hence, the short gain section placed between the highly reflective facet and the absorber functions as a pulse-shortening section and is therefore of great importance for the observed outstanding pulse performance. Performing further simulations, we showed that for the absorber position of the experimentally investigated device, fundamental mode-locking is only observed for a sufficient gain-saturation-energy asymmetry between the two gain sections, which is achieved by the tapered gain structure in our device. If the saturation-energy asymmetry is too small, no fundamental mode-locking is observed as the laser favors third order harmonic mode-locking via a colliding-pulse mechanism. Shifting the saturable absorber closer to the highly reflective facet also breaks the third order colliding-pulse mode-locking and increases the range of stable fundamental mode-locking. Furthermore, an optimal saturable absorber position was found, which maximizes the output power and minimizes the amplitude and timing jitter while retaining the ultrashort pulses. Based on these results, we predict that tuning the taper angle and saturable absorber position has the capability of improving the already outstanding performance of the presented device. Optimizing the other device parameters has the potential of pushing the achievable performance even further. Chasing this goal, our proposed numerical model is efficient enough to implement large parameter studies and thereby guide the design of monolithically integrated mode-locked semiconductor lasers. ## Methods ### Model and Numerical Simulations In order to gain an in-depth understanding of the observed characteristics and study the implications of the device design, we aim to first reproduce the measurements by numerical simulations. Hence, we require a model that is capable of describing the device-specific spatio-temporal evolution of the electric field within the quantum dot active medium. 
However, we also need our simulations to be numerically efficient enough to calculate sufficiently long time series for the evaluation of timing and amplitude jitter as well as to perform multi-parameter studies. The well-established delay-differential equation modeling approaches54,55,56 are computationally very efficient, but require limiting assumptions about the device geometry and spectral field evolution. We therefore follow the ideas of 35,49,57,58 and propose a model that couples a traveling-wave equation for the propagation of the electric field via effective Maxwell-Bloch equations to microscopically motivated rate equations that describe the electronic degrees of freedom in the active medium. To achieve the required computational efficiency, we describe the quantum dots by averaging over the inhomogeneously broadened quantum dots35,40,59 and neglect the effects of spatial hole burning57,58. In the following, we present the derivation of our model. The field propagation is described in the slowly-varying envelope and rotating-wave approximation, which yields first-order partial differential equations for the right (+) and left (−) moving traveling-wave envelope functions E±(z, t)60 $$(\pm {\partial }_{z}+\frac{1}{{v}_{g}}{\partial }_{t})\,{E}^{\pm }(z,t)=\frac{i\omega {\rm{\Gamma }}}{2{\varepsilon }_{b}{v}_{g}}{P}^{\pm }(z,t)={S}^{\pm }(z,t)$$ (2) where P± is the macroscopic polarization of the active medium, ω the optical center frequency, Γ the geometrical confinement factor, vg the group velocity and εb the background permittivity; the right-hand side is summarized by the source term S±. 
Integrating this equation along its characteristic curve and using the trapezoid approximation for the right-hand side, with waveguide losses αint included, yields the delay-algebraic field propagation scheme $${E}_{k}^{\pm }(t)=\frac{4-{\rm{\Delta }}z{\alpha }_{\mathrm{int}}}{4+{\rm{\Delta }}z{\alpha }_{\mathrm{int}}}\,{E}_{k\mp 1}^{\pm }(t-{\rm{\Delta }}t)+\frac{{\rm{\Delta }}z}{2+{\rm{\Delta }}z{\alpha }_{\mathrm{int}}}[{S}_{k\mp 1}^{\pm }(t-{\rm{\Delta }}t)+{S}_{k}^{\pm }(t)]$$ (3) for an equidistant discretization Δz and Δt = Δz/vg, where Δt corresponds to the propagation time between two adjacent sections. This approach allows for a much coarser spatial discretization while maintaining a high temporal accuracy61,62. The pulse propagation scheme is sketched in Fig. 7(a) for a discretization of N sections at the positions zk, k ∈ {1, ..., N}. The boundary conditions are given by the reflection of the electric field at the left and right facets of the cavity with the intensity reflectivity coefficients κL and κR. The active medium polarization is calculated semi-classically and is determined by the sum of all microscopic polarization amplitudes $${p}_{\alpha }^{\pm }$$, where α denotes a suitable set of quantum numbers. We assume that only the ground and first excited state quantum dot transitions contribute to the optical gain and that their transition frequencies are not subject to inhomogeneous broadening. 
As a result, the sum reduces to a simple multiplication with the quantum dot sheet density NQD $${P}^{\pm }=\frac{2}{{V}_{{\rm{act}}}}\,{\sum }_{\alpha }{\mu }_{\alpha }^{\ast }{p}_{\alpha }^{\pm }=4\frac{{N}^{{\rm{QD}}}}{{h}^{{\rm{QW}}}}({\nu }_{{\rm{GS}}}{\mu }_{{\rm{GS}}}^{\ast }{p}_{{\rm{GS}}}^{\pm }+{\nu }_{{\rm{ES}}}{\mu }_{{\rm{ES}}}^{\ast }{p}_{{\rm{ES}}}^{\pm })$$ (4) where Vact denotes the active medium volume, hQW the height of the surrounding quantum well reservoir, μGS,ES the respective dipole moments and νGS,ES the relative degeneracies of the GS and ES. The dynamics of the microscopic polarization amplitudes of the ground and excited state (m ∈ {GS, ES}) are given by the Maxwell-Bloch equations $$\frac{d}{dt}{p}_{{\rm{m}}}^{\pm }=-\,[i{\rm{\Delta }}{\omega }_{{\rm{m}}}+\frac{1}{{T}_{2}}]\,{p}_{{\rm{m}}}^{\pm }-\,i\frac{{\mu }_{{\rm{m}}}}{2\hslash }{E}^{\pm }(2{\rho }^{{\rm{m}}}-1)$$ (5) where Δωm is the detuning from the optical center frequency, T2 the effective polarization dephasing time, which is assumed to be equal for the ground and excited state, and ρm the excitonic quantum dot occupation probability. Note that the effective dephasing time T2 reflects the full gain bandwidth of the GS (ES) ensemble and not the homogeneous linewidth of an individual optical transition, as we have averaged over the inhomogeneous broadening. Under the assumption that ρm evolves slowly compared to $${p}_{{\rm{m}}}^{\pm }$$, Eq. 
(5) can be formally solved and written as35,57 $${p}_{{\rm{m}}}^{\pm }=-i\frac{{\mu }_{{\rm{m}}}{T}_{2}}{2\hslash }(2{\rho }^{{\rm{m}}}-1)\,{G}_{{\rm{m}}}^{\pm }$$ (6) with the new variable $${G}_{{\rm{m}}}^{\pm }$$, which behaves as a filtered electric field and whose dynamics are given by $$\frac{d}{dt}{G}_{{\rm{m}}}^{\pm }=\frac{1}{{T}_{2}}({E}^{\pm }-{G}_{{\rm{m}}}^{\pm })+i{\rm{\Delta }}{\omega }_{{\rm{m}}}{G}_{{\rm{m}}}^{\pm }+\sqrt{D{\rho }^{{\rm{m}}}}\xi (t)$$ (7) where the effect of stochastic spontaneous emission has been added via the last term with the δ-correlated complex Gaussian white noise ξ(t) and the noise strength D, which is tuned to match the experiment. Assuming the electric field frequency is centered at the GS transition, we adiabatically eliminate the dynamical equation for $${G}_{{\rm{ES}}}^{\pm }$$ and obtain $${G}_{{\rm{ES}}}^{\pm }={\mathrm{(1}+i{T}_{2}{\rm{\Delta }}{\omega }_{{\rm{ES}}})}^{-1}\,{E}^{\pm }$$, which has a vanishing real part compared to the imaginary part for the given parameters and therefore only contributes to the amplitude-phase coupling. With the help of the differential gain of both transitions, gm = 2ωΓT2NQDνm|μm|2/(ℏεbvghQW), the electric field source term can be expressed as $${S}^{\pm }=\frac{{g}_{{\rm{GS}}}}{2}\mathrm{(2}{\rho }^{{\rm{GS}}}-\mathrm{1)}{G}_{{\rm{GS}}}^{\pm }-i\frac{\delta {\omega }^{{\rm{ES}}}}{2}(2{\rho }^{{\rm{ES}}}-1){E}^{\pm }$$ (8) where δωES = gEST2ΔωES[1 + (T2ΔωES)2]−1 denotes the amplitude-phase coupling due to the ES population. Note that the effective dephasing time T2 not only determines the gain bandwidth, but also the gain coefficient gm. Since the optical gain derived from the full semiconductor Bloch equations depends upon the microscopic dephasing time of the individual transitions39,59, which is not explicitly included in our model, we treat the gain coefficients gm used in the simulations as fit parameters, which are adjusted to best match the experiment. 
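A minimal numerical sketch of one field update, combining the propagation scheme of Eq. (3) with the filtered field of Eq. (7) and the source term of Eq. (8), might look as follows. All parameter values are placeholders rather than the Table 1 values, the spontaneous-emission noise and the ES phase term are dropped, and the facet boundary conditions are only stubbed out.

```python
import numpy as np

# Placeholder parameters (NOT the Table 1 values)
dz = 50e-6                     # spatial discretization Δz
vg = 3e8 / 3.5                 # group velocity for an assumed group index of 3.5
dt = dz / vg                   # Δt = Δz / v_g as in Eq. (3)
alpha_int = 200.0              # waveguide losses [1/m]
g_gs = 4000.0                  # GS differential gain [1/m]
T2 = 100e-15                   # effective dephasing time

def step_right(E, G, S_prev, rho_gs):
    """Advance the right-moving field E+ by one time step Δt."""
    # Eq. (7) without noise/detuning: exact exponential relaxation of the
    # filtered field G towards E over Δt (stable even for Δt > T2)
    G = E + (G - E) * np.exp(-dt / T2)
    # Eq. (8), GS gain only (ES amplitude-phase term omitted)
    S = 0.5 * g_gs * (2.0 * rho_gs - 1.0) * G
    # Eq. (3): delay-algebraic propagation including waveguide losses
    a = (4.0 - dz * alpha_int) / (4.0 + dz * alpha_int)
    b = dz / (2.0 + dz * alpha_int)
    E_new = np.empty_like(E)
    E_new[0] = 0.0             # left facet boundary condition stubbed out
    E_new[1:] = a * E[:-1] + b * (S_prev[:-1] + S[1:])
    return E_new, G, S

N = 60                                        # sections along the device
E = np.zeros(N, dtype=complex); E[0] = 1.0    # field spike at the left edge
G = np.zeros(N, dtype=complex)
S = np.zeros(N, dtype=complex)
rho_gs = np.full(N, 0.7)                      # inverted medium, 2ρ − 1 > 0
E, G, S = step_right(E, G, S, rho_gs)
print(abs(E[1]))  # spike moved one section, slightly damped by losses
```

Iterating this step (together with an analogous left-moving update and the facet reflections) reproduces the leap-frog structure sketched in Fig. 7(a).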
The charge-carrier model includes the excitonic occupation numbers of the quantum dot ground and excited state and the charge-carrier density in the surrounding quantum well. Figure 7(b) shows a sketch of the different levels and their interaction via scattering processes. Their dynamics at each spatial coordinate are described by a set of coupled rate equations40 $$\frac{d}{dt}n=-\,\frac{n}{{\tau }^{n}}+J-4{N}^{{\rm{QD}}}{R}_{{\rm{cap}}}^{{\rm{ES}}}$$ (9) $$\frac{d}{dt}{\rho }^{{\rm{ES}}}=-\,\frac{{\rho }^{{\rm{ES}}}}{{\tau }^{{\rm{ES}}}}+{R}_{{\rm{cap}}}^{{\rm{ES}}}-\frac{1}{2}{R}_{{\rm{rel}}}$$ (10) $$\frac{d}{dt}{\rho }^{{\rm{GS}}}=-\,\frac{{\rho }^{{\rm{GS}}}}{{\tau }^{{\rm{GS}}}}+{R}_{{\rm{rel}}}-{\partial }_{t}{\rho }^{{\rm{GS}}}{|}_{{\rm{stim}}}$$ (11) with the pump current density J, the characteristic carrier lifetimes τGS,ES,n, the factor 4 for spin and ES degeneracy, the net carrier capture from the wetting layer $${R}_{{\rm{cap}}}^{{\rm{ES}}}$$ and the net intra-dot carrier relaxation Rrel38,63,64. The net intra-dot relaxation $${R}_{{\rm{rel}}}={\tilde{R}}_{{\rm{rel}}}[\mathrm{(1}-{\rho }^{{\rm{GS}}}){\rho }^{{\rm{ES}}}-{\rho }^{{\rm{GS}}}\mathrm{(1}-{\rho }^{{\rm{ES}}}){e}^{-\frac{{\rm{\Delta }}{\varepsilon }^{{\rm{ESGS}}}}{{k}_{B}T}}]$$ (12) includes Pauli-blocking terms and a Boltzmann factor with the energy difference ΔεESGS between the QD excited and ground state and the effective temperature T to account for detailed balance between the in- and out-scattering processes. Thereby, relaxation towards the quasi-equilibrium is ensured. 
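The structure of Eqs (9)–(12) can be sketched as plain rate-equation right-hand sides. The parameter values below are invented placeholders (not the Table 1 values), and the capture and stimulated-emission terms are simply passed in as numbers.

```python
import numpy as np

# Placeholder constants (NOT the Table 1 values)
kB_T = 0.025        # thermal energy k_B T [eV]
d_eps = 0.064       # ES-GS level spacing Δε^ESGS [eV]
N_QD = 3e15         # QD sheet density [1/m^2]

def R_rel(rho_gs, rho_es, R0=1e12):
    # Eq. (12): Pauli-blocked in-scattering minus Boltzmann-weighted
    # out-scattering, which enforces detailed balance
    return R0 * ((1 - rho_gs) * rho_es
                 - rho_gs * (1 - rho_es) * np.exp(-d_eps / kB_T))

def rhs(n, rho_es, rho_gs, J, R_cap_es=0.0, stim=0.0,
        tau_n=1e-9, tau_es=1e-9, tau_gs=1e-9):
    # Eqs (9)-(11); the factor 4 accounts for spin and ES degeneracy
    dn = -n / tau_n + J - 4 * N_QD * R_cap_es
    d_es = -rho_es / tau_es + R_cap_es - 0.5 * R_rel(rho_gs, rho_es)
    d_gs = -rho_gs / tau_gs + R_rel(rho_gs, rho_es) - stim
    return dn, d_es, d_gs

# Detailed-balance check: occupations satisfying
#   rho_gs/(1-rho_gs) = exp(d_eps/kB_T) * rho_es/(1-rho_es)
# make the net relaxation vanish.
rho_es = 0.4
x = np.exp(d_eps / kB_T) * rho_es / (1 - rho_es)
rho_gs = x / (1 + x)
print(abs(R_rel(rho_gs, rho_es)))   # numerically ≈ 0
```

The final check illustrates the point made above: the Boltzmann factor in Eq. (12) guarantees that the relaxation term drives the occupations towards their quasi-equilibrium values and then switches off.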
The net carrier-capture rate is given by $${R}_{{\rm{cap}}}^{{\rm{ES}}}={\tilde{R}}_{{\rm{cap}}}[\frac{1}{1+{e}^{{\rm{\Delta }}{\varepsilon }^{{\rm{QWES}}}/{k}_{B}T}/({e}^{n/({D}^{{\rm{2D}}}{k}_{B}T)}-1)}-{\rho }^{{\rm{ES}}}]$$ (13) where the first term inside the brackets describes the carrier-density dependent quasi-Fermi function with the quantum-well band edge to QD excited state energy difference ΔεQWES and the two-dimensional density of states D2D65. The coupling to the electric field is embedded in the stimulated emission term in Eq. (11), which takes the form $${\partial }_{t}{\rho }^{{\rm{GS}}}{|}_{{\rm{stim}}}={g}^{{\rm{GS}}}\eta \mathrm{(2}{\rho }^{{\rm{GS}}}-\mathrm{1)}{\rm{Re}}({G}_{{\rm{GS}}}^{+}{E}^{{+}^{\ast }}+{G}_{{\rm{GS}}}^{-}{E}^{{-}^{\ast }}).$$ (14) with the photon-to-field conversion factor η = εbvghQW/(4ΓNQD). So far, we have modeled the charge-carrier dynamics within the gain sections, where a forward bias is applied. In the absorber section, however, a reverse bias U is applied, whose effect is twofold. Firstly, the static transverse electric field reduces the barrier height, which leads to enhanced thermionic carrier escape rates66,67. Following68, this is implemented by an effective ES lifetime $${\tau }_{{\rm{abs}}}^{{\rm{ES}}}(U)$$, which depends exponentially on the absorber bias U. Secondly, the optical transitions are slightly red-shifted due to the quantum-confined Stark effect66,69, which is implemented via the detuning ΔωGS in the polarization equation. We assume that the red shift scales linearly with the applied reverse bias. The constituting equations are then given by $${\tau }_{{\rm{abs}}}^{{\rm{ES}}}(U)={\tau }_{{\rm{abs}}\mathrm{,0}}^{{\rm{ES}}}\,\exp \,(U/{U}_{0}^{{\tau }_{{\rm{abs}}}^{{\rm{ES}}}})$$ (15) $${\rm{\Delta }}{\omega }_{{\rm{GS}}}^{{\rm{abs}}}(U)={\rm{\Delta }}{\omega }_{{\rm{GS}}\mathrm{,0}}^{{\rm{abs}}}U$$ (16) where the parameters are chosen similarly to66,68,69 and are given in Table 1. 
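The bias dependence of Eqs (15) and (16) reduces to two one-line functions. The numbers below are invented placeholders (Table 1 holds the actual values), and the sign convention assumed here is a negative reverse bias U with a positive reference voltage U0.

```python
import numpy as np

def tau_es_abs(U, tau0=5e-12, U0=2.0):
    # Eq. (15): a stronger reverse bias (more negative U) enhances
    # thermionic escape and thus shortens the effective ES lifetime
    return tau0 * np.exp(U / U0)

def d_omega_gs_abs(U, slope=1e12):
    # Eq. (16): quantum-confined Stark shift, linear in the applied bias
    return slope * U

for U in (-1.0, -3.0, -5.0):
    print(f"U = {U:+.1f} V: tau_ES = {tau_es_abs(U) * 1e12:.2f} ps, "
          f"GS detuning = {d_omega_gs_abs(U):.1e} rad/s")
```

With these placeholder values, increasing the reverse bias magnitude shortens the absorber recovery time, which is exactly the handle used in Sec. 3.4 to tune the pulse-train stability.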
Additionally, we set the differential gain coefficient in the absorber section to $${g}_{{\rm{GS}}}^{{\rm{abs}}}\, > \,{g}_{{\rm{GS}}}^{{\rm{gain}}}$$, since the decreased carrier density in the surrounding quantum well leads to reduced Coulomb scattering and thereby to an increased microscopic dephasing time of the optical transitions70. Lastly, the effect of the tapered gain structure has to be included in the model. On that account, we assume that the transverse electric field follows the profile of the active region by adiabatically expanding and reducing its spatial extent31,37,71. In our model, this is described by rescaling the stimulated emission term in Eq. (11) with the relative change of the ridge width w(z) $${\partial }_{t}{\rho }^{{\rm{GS}}}(z{)|}_{{\rm{stim}}}^{{\rm{taper}}}=\frac{{w}_{0}}{w(z)}{\partial }_{t}{\rho }^{{\rm{GS}}}(z{)|}_{{\rm{stim}}}$$ (17) where w0 is the width of the active region without taper. As the lateral electric field profile is distributed over a larger area in the tapered region, the quantum dots see only a reduced field strength, depending on the local waveguide width w(z). This results in a reduced stimulated recombination rate and thus a higher saturation energy of the active medium. Additionally, we assume that increasing the ridge width leads to a better overlap of the field with the active medium, thus improving the transverse confinement factor Γ(z), while simultaneously reducing the waveguide losses αint(z)35. These effects are modeled phenomenologically by the fit functions $${\alpha }_{{\rm{int}}}(z)={\alpha }_{{\rm{int}}}^{0}-{\alpha }_{{\rm{int}}}^{{\rm{T}}}\tanh (\frac{{w}_{0}-w(z)}{{w}_{0}}-1)$$ (18) $${{\rm{\Gamma }}}_{{\rm{rel}}}(z)=1+{{\rm{\Gamma }}}_{{\rm{T}}}\tanh (\frac{{w}_{0}-w(z)}{{w}_{0}}-1)$$ (19) where the fit parameter ΓT and the fit function Eq. (19) are chosen to mimic results from beam-propagation calculations35,71,72 and $${\alpha }_{{\rm{int}}}^{{\rm{T}}}$$ and the fit function Eq. 
(18) are chosen to match waveguide characterization measurements. Finally, the pump current P is related to the pump current density J via $$P=J{A}_{{\rm{G}}}{a}_{{\rm{L}}}e{\gamma }^{-1}$$ (20) where AG is the area of the active region, aL the number of QD layers, e the electron charge and γ the injection efficiency, which is fitted to the experimental data. The out-coupled power is calculated according to $${P}_{{\rm{out}}}=2\hslash \omega {a}_{{\rm{L}}}{w}_{0}{N}^{{\rm{QD}}}\eta {|{E}_{{\rm{out}}}|}^{2}$$ (21) where the out-coupled electric field at the right facet of the tapered section is given by |Eout|2 = (1 − κR)|E(z = l)|2. After an initial transient time (typically only between 20 and 50 round-trips due to the low right mirror reflectivity), the relevant figures of merit can be calculated either directly from time series or from numerically computed auto-correlation functions. The amplitude jitter is quantified by the standard deviation of the pulse-peak powers, normalized to the mean. The timing jitter is measured by the standard deviation of the inter-pulse intervals, which corresponds to the pulse-to-pulse or period jitter. Contrary to the long-term timing jitter, this characterization includes the short-term correlations introduced by the recovery processes of the gain and absorber sections20. Hence, changes of the laser design that affect the dynamics of the gain and absorber sections are also made visible in the pulse-to-pulse timing jitter. In summary, we have derived a system of coupled differential-delay-algebraic equations (DDAEs), Eqs (3) and (8)–(11), which describe the spatio-temporal evolution of the electric field and the charge-carrier populations in the laser. Direct integration of these equations produces time series of all dynamical variables to be used for analysis and characterization. 
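The two jitter measures described above are straightforward to compute from a simulated pulse train. The following is our own minimal implementation (crude local-maximum peak picking, synthetic test data), not the evaluation code used for the figures.

```python
import numpy as np

def pulse_train_jitter(t, power, threshold=0.5):
    """Amplitude jitter (std of the pulse-peak powers normalized to the
    mean) and pulse-to-pulse timing jitter (std of the inter-pulse
    intervals)."""
    p = np.asarray(power)
    # crude peak picking: local maxima above a fraction of the global peak
    idx = np.where((p[1:-1] > p[:-2]) & (p[1:-1] >= p[2:]) &
                   (p[1:-1] > threshold * p.max()))[0] + 1
    peaks = p[idx]
    intervals = np.diff(np.asarray(t)[idx])
    return peaks.std() / peaks.mean(), intervals.std()

# Synthetic pulse train: 25 ps period with 50 fs Gaussian timing noise
rng = np.random.default_rng(1)
t = np.arange(0.0, 2000e-12, 0.01e-12)          # 10 fs sampling
centers = np.arange(25e-12, 1900e-12, 25e-12)   # 75 nominal pulse positions
centers = centers + rng.normal(0.0, 50e-15, centers.size)
power = sum(np.exp(-(((t - c) / 0.5e-12) ** 2)) for c in centers)
aj, tj = pulse_train_jitter(t, power)
print(f"amplitude jitter {aj:.2%}, pulse-to-pulse timing jitter {tj*1e15:.0f} fs")
```

For uncorrelated Gaussian perturbations of the pulse positions with standard deviation σ, the inter-pulse intervals have standard deviation √2·σ, so the printed timing jitter should come out near 70 fs for this synthetic train.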
The integration is implemented via a fourth-order Runge-Kutta approach that converges for our system with a time step of 30 fs and a spatial discretization of 50 μm, i.e. 60 sections along the device. On a state-of-the-art desktop CPU (Intel Core i7-4770), a 75 μs time series (about 1000 round-trips) takes roughly two minutes to compute and already produces reasonable pulse-train statistics. A list of all simulation parameters and their values is presented in Table 1. Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## References 1. Haus, H. A. Mode-locking of lasers. IEEE J. Sel. Top. Quantum Electron. 6, 1173–1185, https://doi.org/10.1109/2944.902165 (2000). 2. Avrutin, E. A., Marsh, J. H. & Portnoi, E. L. Monolithic and multi-gigahertz mode-locked semiconductor lasers: Constructions, experiments, models and applications. IEE Proc. Optoelectron. 147, 251 (2000). 3. Rafailov, E. U., Cataluna, M. A. & Sibbett, W. Mode-locked quantum-dot lasers. Nat. Photonics 1, 395–401 (2007). 4. Bimberg, D. et al. High speed nanophotonic devices based on quantum dots. Phys. Status Solidi A 203, 3523–3532, https://doi.org/10.1002/pssa.200622488 (2006). 5. Udem, T., Holzwarth, R. & Hänsch, T. W. Optical frequency metrology. Nature 416, 233–237 (2002). 6. Keller, U. Recent developments in compact ultrafast lasers. Nature 424, 831–838 (2003). 7. Loesel, F. H., Niemz, M. H., Bille, J. F. & Juhasz, T. Laser-induced optical breakdown on hard and soft tissues and its dependence on the pulse duration: experiment and model. IEEE J. Quantum Electron. 32, 1717–1722 (1996). 8. Delfyett, P. J., Hartman, D. H. & Ahmad, S. Z. Optical clock distribution using a mode-locked semiconductor laser diode system. IEEE J. Lightwave Technol. 9, 1646–1649 (1991). 9. Guzmán, R., Gordon, C., Orbe, L. & Carpintero, G. 
1 GHz InP on-chip monolithic extended cavity colliding-pulse mode-locked laser. Opt. Lett. 42, 2318–2321, https://doi.org/10.1364/ol.42.002318 (2017). 10. von der Linde, D. Characterization of the noise in continuously operating mode-locked lasers. Appl. Phys. B 39, 201 (1986). 11. Solgaard, O. & Lau, K. Y. Optical feedback stabilization of the intensity oscillations in ultrahigh-frequency passively modelocked monolithic quantum-well lasers. IEEE Photon. Technol. Lett. 5, 1264 (1993). 12. Kim, J. & Song, Y. Ultralow-noise mode-locked fiber lasers and frequency combs: principles, status, and applications. Adv. Opt. Photon. 8, 465–540, https://doi.org/10.1364/aop.8.000465 (2016). 13. Paschotta, R., Schlatter, A., Zeller, S. C., Telle, H. R. & Keller, U. Optical phase noise and carrier-envelope offset noise of mode-locked lasers. Appl. Phys. B 82, 265–273, https://doi.org/10.1007/s00340-005-2041-9 (2006). 14. Ahmad, F. R. & Rana, F. Fundamental and subharmonic hybrid mode-locking of a high-power (220 mW) monolithic semiconductor laser. IEEE Photon. Technol. Lett. 20, 1308–1310, https://doi.org/10.1109/lpt.2008.926911 (2008). 15. Heck, M. J. R. et al. Analysis of hybrid mode-locking of two-section quantum dot lasers operating at 1.5 μm. Opt. Express 17, 18063–18075, https://doi.org/10.1364/oe.17.018063 (2009). 16. Fiol, G. et al. Hybrid mode-locking in a 40 GHz monolithic quantum dot laser. Appl. Phys. Lett. 96, 011104, https://doi.org/10.1063/1.3279136 (2010). 17. Habruseva, T., Rebrova, N., Hegarty, S. P. & Huyet, G. Quantum-dot mode-locked lasers with dual mode optical injection. Proc. SPIE 7720, 1–8, https://doi.org/10.1117/12.854338 (2010). 18. Rebrova, N., Huyet, G., Rachinskii, D. & Vladimirov, A. G. Optically injected mode-locked laser. Phys. Rev. E 83, 066202, https://doi.org/10.1103/physreve.83.066202 (2011). 19. Breuer, S. et al. 
Investigations of repetition rate stability of a mode-locked quantum dot semiconductor laser in an auxiliary optical fiber cavity. IEEE J. Quantum Electron. 46, 150, https://doi.org/10.1109/jqe.2009.2033255 (2010). 20. Otto, C., Jaurigue, L. C., Schöll, E. & Lüdge, K. Optimization of timing jitter reduction by optical feedback for a passively mode-locked laser. IEEE Photonics J. 6, 1501814, https://doi.org/10.1109/jphot.2014.2352934 (2014). 21. Drzewietzki, L., Breuer, S. & Elsäßer, W. Timing jitter reduction of passively mode-locked semiconductor lasers by self- and external-injection: Numerical description and experiments. Opt. Express 21, 16142–16161, https://doi.org/10.1364/oe.21.016142 (2013). 22. Drzewietzki, L., Breuer, S. & Elsäßer, W. Timing phase noise reduction of modelocked quantum-dot lasers by time-delayed optoelectronic feedback. Electron. Lett. 49, 557–559, https://doi.org/10.1049/el.2013.0763 (2013). 23. Otto, C., Lüdge, K., Vladimirov, A. G., Wolfrum, M. & Schöll, E. Delay induced dynamics and jitter reduction of passively mode-locked semiconductor laser subject to optical feedback. New J. Phys. 14, 113033 (2012). 24. Nikiforov, O., Jaurigue, L. C., Drzewietzki, L., Lüdge, K. & Breuer, S. Experimental demonstration of change of dynamical properties of a passively mode-locked semiconductor laser subject to dual optical feedback by dual full delay-range tuning. Opt. Express 24, 14301–14310 (2016). 25. Avrutin, E. A. & Russell, B. M. Dynamics and spectra of monolithic mode-locked laser diodes under external optical feedback. IEEE J. Quantum Electron. 45, 1456–1464, https://doi.org/10.1109/jqe.2009.2028242 (2009). 26. Helkey, R. J. et al. Repetition frequency stabilisation of passively mode-locked semiconductor lasers. Electron. Lett. 28, 1920–1922, https://doi.org/10.1049/el:19921229 (1992). 27. Jiang, L. A., Abedin, K. S., Grein, M. E. & Ippen, E. P. 
Timing jitter reduction in modelocked semiconductor lasers with photon seeding. Appl. Phys. Lett. 80, 1707–1709, https://doi.org/10.1063/1.1459112 (2002). 28. 28. Lin, C., Grillot, F., Li, Y., Raghunathan, R. & Lester, L. F. Microwave characterization and stabilization of timing jitter in a quantum-dot passively mode-locked laser via external optical feedback. IEEE J. Sel. Top. Quantum Electron. 17, 1311–1317, https://doi.org/10.1109/jstqe.2011.2118745 (2011). 29. 29. Haji, M. et al. High frequency optoelectronic oscillators based on the optical feedback of semiconductor mode-locked laser diodes. Opt. Express 20, 3268–3274, https://doi.org/10.1364/oe.20.003268 (2012). 30. 30. Javaloyes, J. & Balle, S. Mode-locking in semiconductor fabry-pérot lasers. IEEE J. Quantum Electron. 46, 1023–1030 (2010). 31. 31. Rossetti, M., Tianhong, X., Bardella, P. & Montrosset, I. Impact of gain saturation on passive mode locking regimes in quantum dot lasers with straight and tapered waveguides. IEEE J. Quantum Electron. 47, 1404 (2011). 32. 32. Javaloyes, J. & Balle, S. Anticolliding design for monolithic passively mode-locked semiconductor lasers. Opt. Lett. 36, 4407–4409 (2011). 33. 33. Simos, H. et al. Numerical analysis of passively mode-locked quantum-dot lasers with absorber section at the low-reflectivity output facet. IEEE J. Quantum Electron. 49, 3–10 (2013). 34. 34. Thompson, M. G., Rae, A. R., Xia, M., Penty, R. V. & White, I. H. In GaAs quantum-dot mode-locked laser diodes. IEEE J. Quantum Electron. 15, 661–672 (2009). 35. 35. Rossetti, M., Bardella, P. & Montrosset, I. Time-domain travelling-wave model for quantum dot passively mode-locked lasers. IEEE J. Quantum Electron. 47, 139 (2011). 36. 36. Weber, C. et al. Picosecond pulse amplification up to a peak power of 42W by a quantum-dot tapered optical amplifier and a mode-locked laser emitting at 1.26 μm. Opt. Lett. 40, 395–398, https://doi.org/10.1364/ol.40.000395 (2015). 37. 37. 
Bardella, P., Drzewietzki, L., Krakowski, M., Krestnikov, I. & Breuer, S. Mode locking in a tapered two-section quantum dot laser: design and experiment. Opt. Lett. 43, 2827–2830, https://doi.org/10.1364/ol.43.002827 (2018). 38. 38. Lüdge, K. Modeling quantum dot based laser devices. In Lüdge, K. (ed.) Nonlinear Laser Dynamics - From Quantum Dots to Cryptography, chap. 1, 3–34 (WILEY-VCH Weinheim, Weinheim, 2012). 39. 39. Lingnau, B. et al. Ultrafast gain recovery and large nonlinear optical response in submonolayer quantum dots. Phys. Rev. B 94, 014305 (2016). 40. 40. Lingnau, B. et al. Dynamic phase response and amplitude-phase coupling of self-assembled semiconductor quantum dots. Appl. Phys. Lett. 110, 241102 (2017). 41. 41. Chow, W. W. & Jahnke, F. On the physics of semiconductor quantum dots for applications in lasers and quantum optics. Prog. Quantum Electron. 37, 109–184, https://doi.org/10.1016/j.pquantelec.2013.04.001 (2013). 42. 42. Kuntz, M., Fiol, G., Lämmlin, M., Meuer, C. & Bimberg, D. High-speed mode-locked quantum-dot lasers and optical amplifiers. Proc. IEEE 95, 1767–1778, https://doi.org/10.1109/jproc.2007.900949 (2007). 43. 43. Xin, Y. C. et al. Reconfigurable quantum dot monolithic multi-section passive mode-locked lasers. Opt. Express 15, 7623–7633, https://doi.org/10.1364/oe.15.007623 (2007). 44. 44. Li, Y. et al. Harmonic mode-locking using the double interval technique in quantum dot lasers. Opt. Express 18, 14637–14643, https://doi.org/10.1364/oe.18.014637 (2010). 45. 45. Weber, C., Klehr, A., Knigge, A. & Breuer, S. Picosecond pulse generation and pulse train stability of a monolithic passively mode-locked semiconductor quantum-well laser at 1070 nm. IEEE J. Quantum Electron. 54, 1–9, https://doi.org/10.1109/jqe.2018.2832288 (2018). 46. 46. Weber, C., Drzewietzki, L. & Breuer, S. Amplitude jitter and timing jitter characterization of a monolithic high-power passively mode-locked tapered quantum dot laser. Proc. 
SPIE 9892, 9892–989, https://doi.org/10.1117/12.2229739 (2016). 47. 47. Kefelian, F., O’Donoghue, S., Todaro, M. T., McInerney, J. G. & Huyet, G. RF linewidth in monolithic passively mode-locked semiconductor laser. IEEE Photon. Technol. Lett. 20, 1405, https://doi.org/10.1109/lpt.2008.926834 (2008). 48. 48. Fork, R., Shank, C., Yen, R. & Hirlimann, C. Femtosecond optical pulses. IEEE J. Quantum Electron. 19, 500–506, https://doi.org/10.1109/JQE.1983.1071898 (1983). 49. 49. Radziunas, M. et al. Pulse broadening in quantum-dot mode-locked semiconductor lasers: Simulation, analysis, and experiments. IEEE J. Quantum Electron. 47, 935–943 (2011). 50. 50. Haus, H. A. Theory of mode locking with a fast saturable absorber. J. Appl. Phys 46, 3049–3058, https://doi.org/10.1063/1.321997 (1975). 51. 51. Derickson, D. J. et al. Short pulse generation using multisegment mode-locked semconductor lasers. IEEE J. Quantum Electron. 28, 2186–2202 (1992). 52. 52. Vladimirov, A. G., Pimenov, A. S. & Rachinskii, D. Numerical study of dynamical regimes in a monolithic passively mode-locked semiconductor laser. IEEE J. Quantum Electron. 45, 462–46, https://doi.org/10.1109/jqe.2009.2013363 (2009). 53. 53. Waldburger, D. et al. Multipulse instabilities of a femtosecond SESAM-modelocked VECSEL. Opt. Express 26, 21872–21886, https://doi.org/10.1364/oe.26.021872 (2018). 54. 54. Vladimirov, A. G. & Turaev, D. V. Model for passive mode locking in semiconductor lasers. Phys. Rev. A 72, 033808 (2005). 55. 55. Viktorov, E. A., Mandel, P., Vladimirov, A. G. & Bandelow, U. Model for mode locking of quantum dot lasers. Appl. Phys. Lett. 88, 201102 (2006). 56. 56. Rossetti, M., Bardella, P. & Montrosset, I. Modeling passive mode-locking in quantum dot lasers: A comparison between a finite- difference traveling-wave model and a delayed differential equation approach. IEEE J. Quantum Electron. 47, 569 (2011). 57. 57. Dong, M., Mangan, N. M., Kutz, J. N., Cundiff, S. T. & Winful, H. G. 
Traveling wave model for frequency comb generation in single-section quantum well diode lasers. IEEE J. Quantum Electron. 53, 1–11 (2017). 58. 58. Bardella, P., Columbo, L. & Gioannini, M. Self-generation of optical frequency comb in single section quantum dot Fabry-Perot lasers: a theoretical study. Opt. Express 25, 26234–26252 (2017). 59. 59. Kolarczik, M. et al. Quantum coherence induces pulse shape modification in a semiconductor optical amplifier at room temperature. Nat. Commun. 4, 2953, https://doi.org/10.1038/ncomms3953 (2013). 60. 60. van Tartwijk, G. H. M. & Agrawal, G. P. Laser instabilities: a modern perspective. Prog. Quantum Electron. 22, 43–122, https://doi.org/10.1016/s0079-6727 (1998). 61. 61. Javaloyes, J. & Balle, S. Multimode dynamics in bidirectional laser cavities by folding space into time delay. Opt. Express 20, 8496–8502 (2012). 62. 62. Lin, H. et al. Photonic microwave generation in multimode vcsels subject to orthogonal optical injection. J. Opt. Soc. Am. B 34, 2381–2389, https://doi.org/10.1364/josab.34.002381 (2017). 63. 63. Nielsen, T. R., Gartner, P. & Jahnke, F. Many-body theory of carrier capture and relaxation in semiconductor quantum-dot lasers. Phys. Rev. B 69, 235314 (2004). 64. 64. Majer, N., Lüdge, K. & Schöll, E. Cascading enables ultrafast gain recovery dynamics of quantum dot semiconductor optical amplifiers. Phys. Rev. B 82, 235301 (2010). 65. 65. Lüdge, K. & Schöll, E. Quantum-dot lasers–desynchronized nonlinear dynamics of electrons and holes. IEEE J. Quantum Electron. 45, 1396–1403 (2009). 66. 66. Malins, D. B. et al. Ultrafast electroabsorption dynamics in an InAs quantum dot saturable absorber at 1.3 μm. Appl. Phys. Lett. 89, 171111, https://doi.org/10.1063/1.2369818 (2006). 67. 67. Breuer, S. et al. Dual-state absorber-photocurrent characteristics and bistability of two-section quantum-dot lasers. IEEE J. Sel. Top. Quantum Electron. 19, 1–9 (2013). 68. 68. Viktorov, E. A. et al. 
Recovery time scales in a reversed-biased quantum dot absorber. Appl. Phys. Lett. 94, 263502 (2009). 69. 69. Wegert, M., Schwochert, D., Schöll, E. & Lüdge, K. Integrated quantum-dot laser devices: Modulation stability with electro-optic modulator. Opt. Quantum Electron. 46, 1337–1344, https://doi.org/10.1007/s11082-014-9878-2 (2014). 70. 70. Lorke, M., Nielsen, T. R., Seebeck, J., Gartner, P. & Jahnke, F. Influence of carrier-carrier and carrier-phonon correlations on optical absorption and gain in quantum-dot systems. Phys. Rev. B 73, 085324, https://doi.org/10.1103/physrevb.73.085324 (2006). 71. 71. Xu, T., Bardella, P., Rossetti, M. & Montrosset, I. Beam propagation method simulation and analysis of quantum dot flared semiconductor optical amplifiers in continuous wave high-saturation regime. IET Optoelectron. 6, 110–116 (2012). 72. 72. Rossetti, M., Xu, T., Bardella, P. & Montrosset, I. Modelling of passive mode-locking in InAs quantum-dot lasers with tapered gain section. Phys. Status Solidi C 9, 286–289, https://doi.org/10.1002/pssc.201100243 (2011). ## Acknowledgements This work was supported by DFG within SFB 787 and EU FP7 project Grant no. 224338. BL acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)–404943123. The authors thank M. Krakowski and his team from III-V Lab, France, and I. Krestnikov and his team from Innolume GmbH, Germany. ## Author information L.D., C.W. and S.B. designed the experimental setup and performed the device characterization. S.M. developed the model with inputs from B.L. and K.L. and carried out the numerical simulations. The results were discussed and interpreted by all authors. S.M. prepared the figures and wrote the manuscript with contributions from S.B., K.L. and B.L. ### Competing Interests The authors declare no competing interests. Correspondence to Stefan Meinecke. ## Rights and permissions Reprints and Permissions
http://blog.stackoverflow.com/2008/07/dates-relative-or-absolute/
# Dates: Relative or Absolute?

Another item we're looking at as we get closer to the Stack Overflow private beta is the issue of how to display dates on the questions and answers. We started by displaying the absolute dates as you'll see them on Joel's existing forum — although we do add the time as well:

Monday, June 27, 2005 at 6:35 pm

This works fine, assuming you're in the same time zone as the server. (Actually, now that I think about it, maybe that's why Joel opted to drop the time part; the odds of your time zone being in a completely different day from the server's time zone are fairly slim.) Otherwise, you have to record the user's time zone and translate all the server times to their local time.

We noticed that some sites, like getsatisfaction, opt to display all times in relative units. So the above would be rendered as:

Three years ago

Granted, it lacks precision, but did you really need to know the message was originally left on June 27th? And isn't it simpler not to have to do the "how old is this" math in your head? The other big advantage is that relative times work for every timezone, so you don't have to tell us your timezone in your user profile, and we don't have to be scrupulously careful to convert every date we touch.

However, note that the precision of the date increases automatically as the messages get closer to "now":

Three years ago
Two months ago
17 days ago
6 minutes ago

We're leaning heavily towards displaying all question and answer times in relative units now. What are your thoughts?

Filed under design

Relative time is better. It's easier to read and the exact DateTime is not useful for anything.

Relative time! I hate it when I go to sites and they show the time of the post but it's not in GMT (my timezone). It's more of a hindrance because I then have to find out what PST is in relation to GMT.

Peter H Jul 21 2008

I like relative time for shorter TimeSpans, like less than 24h or so.
Otherwise I think the date information is good, so that I can see if a post was posted on a weekday or such, as well as what time it was (geeks post at night or at job time? :) However you decide to put it, why not go for ISO standards? At least, please don't use am/pm (I know Prof. Tufte agrees with me!)

paketep Jul 21 2008

How about both? Relative just below the question (or after, in parentheses) and absolute, in a smaller font, at the bottom of whatever box you are putting it in. I don't know, somewhere. Best of both worlds. PS.- That recaptcha thing is real slow :(

I apologize for the CAPTCHA, but the comment spam problem was getting really bad, 50+ per day. And Akismet would regularly mark user comments as "spam".

Relative is better, but if you do use absolute, please use JavaScript to convert it to the viewer's local time zone. Something like Mike West's PerfectTime – fails gracefully as well.

Mike Tomasello Jul 21 2008

Relative with absolute in a tool-tip (title="" attribute?).

I'd opt for relative, too. But perhaps we are missing something; *is* there any advantage in displaying absolute dates/times?

Cyrik Jul 21 2008

i agree with mike. use both and put one in the tooltip

Alejo Jul 21 2008

> Relative with absolute in a tool-tip (title="" attribute?).

I second! Absolute for < 1 week, relative for older.

Bernard Jul 21 2008

I think that sounds good, so long as there is a way to get the absolute date. If you include non-relative time at all, and do not allow users to enter their timezone/GMT offset, you will alienate a lot of people. It's not "a slim chance" that dates are off either. For example, until 4pm (i.e., most of the working day) Melbourne is a day ahead of Denver.

I'd tend towards relative as well, unless you can show the time in my local time zone (and explain that it's in GMT+1 for example). Nothing worse than hitting a site and there's no info as to what timezone is being used. In fact… why not do both?
May offer as options:

Show times in my timezone [x]
Show times as relative too [x]

Means I have to think even less :)

Both; relative in the text, absolute in a tooltip. KUTGW

I am in the UK and when I post on a forum I always have to think is this GMT or not? The first time I posted a response on this blog I thought the time I posted was not the time shown until I realized (it is now 12:10pm in GMT). So I like relative time.

I prefer absolute. I like to see exact dates. I feel a little out of control if someone is "editing" times for me… And I'm so used to absolute times that the "how old is this" math is automatic for me. But I see that most of the people here prefer relative. I can probably get used to relative too, but maybe the tool-tip option would be a logical compromise.

Relative time is better. You can also add a "title" element to display the real date/time on hover. (Ahh, Brent already said that :-)

Though I can't come up with a concrete example right now, I know I've been frustrated in the past by the lack of precision in relative time. Usually it's if I need to know for some reason which of two pieces of old content was posted first. Probably not a compelling enough reason to go with the ugly absolute times, but I definitely like the tool-tip option because at least you have access to all the precision.

Relative in the text, absolute in a tooltip is the way forward

+1 for the relative time + absolute as tooltip

> In fact… why not do both? May offer as options… Means I have to think even less :)

No it doesn't. It means you have to make a choice. You have to think more. Don't make me set an option for something as trivial as how the date of a post is displayed to me. Figure out something good, and stick with it. I like the relative-with-absolute-in-title-attribute.

I think relative is nice, as the reason I look at the date on posts is to see how old they are. Thus relative makes me think less. Flickr does relative dates, and I like it there.
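Several commenters endorse the relative display Jeff describes, where precision coarsens automatically as a post ages ("6 minutes ago", "17 days ago", "Three years ago"). A rough sketch of that bucketing in JavaScript; the thresholds and wording are my own assumptions, not Stack Overflow's actual rules:

```javascript
// Format a timestamp as a relative age string whose unit coarsens with age.
function relativeAge(thenMs, nowMs) {
  const s = Math.floor((nowMs - thenMs) / 1000);
  // Pluralize the unit unless the count is exactly 1.
  const fmt = (n, unit) => `${n} ${unit}${n === 1 ? "" : "s"} ago`;
  if (s < 60) return fmt(s, "second");
  if (s < 3600) return fmt(Math.floor(s / 60), "minute");
  if (s < 86400) return fmt(Math.floor(s / 3600), "hour");
  const days = Math.floor(s / 86400);
  if (days < 31) return fmt(days, "day");
  if (days < 365) return fmt(Math.floor(days / 30), "month");
  return fmt(Math.floor(days / 365), "year");
}
```

Because only a difference between two instants is involved, the output is identical in every timezone, which is exactly the "no timezone bookkeeping" advantage the post calls out.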
Another for relative with absolute in the tooltip! :D

Recognising that I'm in the minority, but I dislike relative time. It causes problems when caching or quoting information elsewhere. Most of us are clever enough to mentally calculate if something is old or new based on absolute time.

I vote for relative time. Since you are obviously trying to make this site as "low friction" to use as possible, making people have to set up timezones, etc. would counter that, as they would actually have to have an account… and most people forget to do it anyway.

Absolute is better but whatever you do, make it consistent. Don't use the 'yesterday' AND '21st July' like vBulletin does.

I've got to say both; I like the display relative with the precise date/time as a title. Best of both worlds

Relative time in the first few days (up to 3 days), and then switch to absolute date (date only, no time).

Simon Jul 21 2008

I'll put a vote in for "relative"… seems to me to work better for everyone. (tooltipped absolute could be useful, but might be worth highlighting which timezone the system thinks we're in!)

We had the same decision to make when building http://99designs.com/ and I wrote briefly about it here: http://www.sitepoint.com/blogs/2007/06/04/simple-date-and-time-localization-with-javascript/

Here's the gist of what we decided:

* in markup, output dates in a GMT format readable by machines and humans.
* standardize on a "microformat" like <span class="datetime">…</span>.
* allow additional classes to determine output format, e.g. class="datetime relative" or class="datetime absolute".
* use JavaScript to parse the timestamps into the browser's local time, and output them in the relevant format.
* When displaying relative timestamps (e.g. 1 hour ago), put the absolute timestamp in the title attribute for mouseover inspection.

We've found this has worked really well for us – has anybody else given it a shot?

Saniul Ahmed Jul 21 2008

Use both.
Show relative date, as it is easier to comprehend, but if the user wants more precision – show the absolute date in a tooltip when the user hovers the cursor over the date. Timezones – just allow the user to choose a specific one in the Settings/Options menu.

Relative is nice for a quick glance, but just occasionally absolute is genuinely useful. Sometimes it is nice to know if there is a 'news' context for the comment. For example, the context of a post written on 9/10/01 is clearly very different from a post written on 9/12/01. You wouldn't be able to tell the difference if it just says "seven years ago". Admittedly that's an extreme example. But it can happen with other things too, e.g. company takeovers, software releases, etc. As others have suggested, there's no reason to make the absolute date particularly prominent.

Chris Carruthers Jul 21 2008

What about displaying the relative date/time, but having the actual date/time as a mouse-over tooltip, in the user's local time zone? Obviously requires a bit more data, and probably time zone conversion on the client, but it's all I could want as a user.

Ryan Fox Jul 21 2008

I'm going to vote for absolute. First of all, you're going to get cached by Google, and other services. If, for whatever reason, the cache is old, then the relative dates are all completely useless. Second, if I wanted to use an article as a cited source for something else, having the exact date that it was published would help the credibility of the citation. Also, if Stack Overflow ever decided to shut down, the exact date would allow my reader to try to find a cached version. Third, there's no real reason to throw away this data. You don't want to start with relative dates and then realize that you actually wanted absolute. I would just store everything in UTC time, and then offer to keep track of the users' timezones and adjust accordingly.
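The "store everything in UTC, then adjust per user" suggestion can be sketched with the standard Intl API. Assume timestamps are persisted as ISO-8601 UTC strings and that the timezone name comes from a hypothetical per-user profile setting; this also assumes a runtime with full timezone data (a modern browser or Node):

```javascript
// Convert a stored UTC timestamp into a human-readable string in the
// user's preferred zone. Only the display changes; the stored value stays UTC.
function displayLocal(utcIso, timeZone) {
  return new Intl.DateTimeFormat("en-US", {
    timeZone, // e.g. "America/Denver", taken from a (hypothetical) user profile
    year: "numeric", month: "short", day: "numeric",
    hour: "numeric", minute: "2-digit", hour12: true,
  }).format(new Date(utcIso));
}

// e.g. displayLocal("2005-06-27T18:35:00Z", "America/Denver")
```

Keeping the database in UTC and applying the zone only at render time also sidesteps the DST headaches one commenter below describes with locally-stored timestamps.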
Chris Carruthers Jul 21 2008

Sorry for posting above what had already been said – I should have reloaded before posting! Anyway, consider it a seconding!

Triggered by Pat Galea's comment – please PLEASE don't ever show dates in dd/mm/yy or mm/dd/yy format unless I've explicitly provided my preference, which I shouldn't have to bother to do – so don't ever show dates in that format! It's bloody frustrating for any non-US person, and is just plain ambiguous a lot of the time. Absolute dates should also probably be shown with their time zone in GMT +/- notation, again to avoid ambiguity.

YES! Please, by all means use relative times. If you do go with absolute, it really only is useful to the user if you convert everything to the user's time zone, which is absolutely unnecessary.

Dennis Kehrig Jul 21 2008

I'd put the absolute time into the HTML code and then switch this to a relative display with JavaScript, updating it at least every minute (otherwise "1 minute ago" kind of loses its meaning rather quickly, which is also the case if it's in the Google Cache) and keeping the absolute time in the title tag (as proposed by others) for reference. You could also add a click handler to switch between relative and absolute display (like switching between passed and remaining time in Winamp), should someone want to copy the absolute time (which is not easily possible for tooltips).

An alternative to the title attribute approach would be to switch the display on mouseover. If the default display is the relative time, this would make copy & paste of the absolute time even easier than with the click handler (but of course you have to know this feature to even try). If the default display is the absolute time, calculating the relative time would only occur on mouseover, thus removing the need to update frequently.

In any case, the absolute time should contain the seconds. Imagine somebody thanks for "the quick answer".
When I read something like this, I always want to know what the author considered quick, so I need the exact time down to the seconds. While you're at it, you could adjust all time stamps to local time, which is much easier with JavaScript on the client than on the server.

@Chris "Triggered by Pat Galea's comment – please PLEASE don't ever show dates in dd/mm/yy or mm/dd/yy format unless I've explicitly provided my preference, which I shouldn't have to bother to do – so don't ever show dates in that format! It's bloody frustrating for any non-US person, and is just plain ambiguous a lot of the time."

Funnily enough I'm a Brit myself, so I was just writing the date in mm/dd/yy for the benefit of our American friends. :-) Nevertheless, you raise a good point. I use the ISO standard YYYY-MM-DD wherever I can because it's generally unambiguous. But something like "12 Jun 2008" would be fine too.

mm/dd/yy dates on sites are confusing, not because they're different to my home usage, but because if it's an ambiguous date I'm often not sure whether the site has taken my regional preference into account or not. Personally, I'd prefer relative time and don't care whether the absolute time is included or not.

Peter Meyer Jul 21 2008

Relative.

Absolute time, please. I often look for the precise year the comments were made – and it's annoying to have to compute it myself (is it 2005? is it 2006 now?). Also, a static date may be better for search engines (or caches) – I imagine it's weird to have date information change depending on how many years later you revisit the question page.

Now, from a technical side. Absolute dates are constant so you can easily cache them, as you don't need to regenerate a page every time a user accesses it. That's not the case with relative dates – you have to regenerate them every time the relative time changes. And what's more important, you have to regenerate in the periods (short times) which need caches the most.
The most recent data gets hammered and the old articles get only occasional entries from search providers. And from the usability side – if you need the date, then you probably need it to compare with some event. Did I work for my employer when the news hit the website? Did the patch emerge before or after the major release? Etc. You will never be sure with relative dates. But on the other hand, they seem to be more "eye" friendly.

I am in the UK and think that all time is my local time =>. Maybe there is an interesting happy point for the difference in time and the reading of the post. User Time/Server Time/Relative Time. User Time and Server Time will help in getting across the delay factor in replies and answers. I the customer want to know what timezone the best answer was in and if I have questions on the answer how long before a likely reply. Relative time is good for the feedback/badge system as this relates to how long the post/question/reply has been in existence. So if a question gets answered by a gold squidge with a TTL of question of necromancer then this is old news and I the questioner need to think about things – likely I solved it another way and need to look at the answer in relation to my solution set. .jpg "i used to be an image, but i got refactored as XAML"

I'll vote for relative date if the post is recent (however you define recent – less than 1 month?). For older posts show the absolute date without the time. And don't let people convince you to show the absolute date in an ugly, numbers-only ISO format! Use the words, like the "Monday, July 21, 2008" format. You might also want to make the relative date no more precise than the hour (for posts less than an hour old, just put "in the last hour"). Showing the age in minutes seems kind of pointless because it gets old so fast.

PS- Can you just use common English words in the captchas?
Also, a tactic I have found to be 100% effective in combatting spam on my own website is to create a comment submission form at the top of the page, which is hidden by CSS. The spambots will try to submit the data using that form, but human users will never see it. (My site gets pretty low traffic; maybe there are more sophisticated spambots out there that will avoid this tactic.)

Graham Stewart Jul 21 2008

Relative is good for me. Please also make the date (optionally) have an influence on the search results. Either as a factor of "relevance" or as an explicit search parameter. Typically when I search for something I am more interested in recent developments than a post from three years ago that I've probably already read. MSDN seems to suffer a lot from this ("well here's how you would have done it in .NET 1.0, though that API is now obsolete" – "Gee thanks").

@Jeff: I think you have a wrong perception of timezones. Ideally, timezone is presentation-data. Like you use a dot (10.5) as a decimal separator, I use a comma (10,5). It's the same with timezones. Say, you post a message at 6:15am. I would like to see 2:15pm (or even better: 14:15), because at 2:15pm in Amsterdam, in San Francisco it's 6:15am. So you probably need to include timezone information in the profile, whether you display time in relative/google-speak, or in absolute terms.

I think relative works best. I also agree with kip in that older posts should show the date, seeing as how "1 year ago" on youtube is kind of ambiguous. Who knows, maybe absolute dates could be helpful years down the road?

Relative is good. My convention is this: once it gets past a certain point, make a tooltip with the exact date; since the time would be irrelevant at the 1 year mark, it can easily be omitted. For me this is the best of both worlds in my apps: nice, easy to read relative links, but precision for research with a simple hover.

dextar78 Jul 21 2008

As some have said, I think overall relative looks better.
I do see one problem using only relative: when viewing a thread that's, say, a month or a year old. The thread may have gone on for a day or so and contain quite a few comments/posts. The relative time display will always be the same and lose its context (i.e. all comments will say '1 month' or '1 year' old). So maybe in certain contexts, the time (or some other differentiator?) may be beneficial?

Definitely relative time. The precision of the date becomes less relevant as the distance from the date increases. It's probably several orders of magnitude less cognitive work to scan "1 year ago" than to parse a date, retrieve the current date from the brain and do some mental math on that data.

Martin Wallace Jul 21 2008

I like relative when just browsing – as it gives a far better indication of age. However, if I am trying to work out the context of a post with reference to releases of patches/software versions then absolute is a must. '1 year ago' is not particularly helpful if I am trying to work out if a particular post is referencing release 1.1 or 1.2 of the software it is talking about. Not too sure I have explained that too well, hopefully you get the drift.

Mike Firesheets Jul 21 2008

I'm putting in my vote for relative time as well. I once worked on a system that was required to show the full millisecond-resolution timestamp in the local time of the client, and it turned out to be a Javascript nightmare.

@Martin Wallace – I totally get your drift, but I don't think that absolute time will solve all of the cases where a release/patch level is in question. There will be times where a poster hasn't upgraded a plugin weeks after a major bug release, etc. I think if the post doesn't give enough detail, most of the time other people are going to have to tease it out of the OP anyway.

I like the idea of relative time. However, it would be great if you could also see the exact time, somehow.
Perhaps you could use a little javascript to allow the user to click on the time label "5 months ago" and it will add the exact date next to it. I only say this because it is VERY useful to have an exact date. If someone ever wants to reference your post on StackOverflow, then they simply MUST provide the exact date/time of that post. This is due to the focus on editing answers to fit the times. Or, it could be useful to have a user setting that defaults to relative time.

Eli Courtwright Jul 21 2008

Relative time is best, because the whole point of displaying a time is to tell the user how long ago something was posted. So just tell them how long ago it was (in relative time) and don't worry about timezones and the like.

For things 3-6 days old, I really like relative dates like "last Tuesday." Just sayin.

I think relative is just so much simpler in this case than trying to resolve all the timezone issues. And like you said, who cares the exact date a post was left? When I look at a post I want to know how current it is. So relative works best in this case.

I agree with most of the other commenters… use relative time but include the absolute time in a tooltip or something in case someone wants to know. I also think that just using GMT is sufficient (you'll have to convert dates and times from the server's timezone, but you won't have to do it differently for each user). I think just about all of us know what our timezone's offset is from GMT.

Weeble Jul 21 2008

Whatever you do, if you are displaying absolute times, always display them with a time zone. I hate it when sites tell me times and I don't know what time zone they're using. Even worse is when they say "local time" (Livejournal, I'm looking at you), and I'm left wondering: local time for the poster? for the server? for me? for where the server thinks I am?
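The pattern most of the comments converge on (visible relative text, exact GMT time in the title attribute so a hover reveals it) can be sketched like this. The class name, markup, and day-level granularity are illustrative choices on my part, not the site's actual implementation:

```javascript
// Build a span whose visible text is relative ("17 days ago") while the
// exact GMT timestamp rides along in the title attribute as a tooltip.
function timestampSpan(utcIso, nowMs) {
  const days = Math.floor((nowMs - Date.parse(utcIso)) / 86400000);
  const label = days < 1 ? "today" : days === 1 ? "yesterday" : days + " days ago";
  return '<span class="datetime" title="' + utcIso + ' GMT">' + label + "</span>";
}
```

For example, `timestampSpan("2008-07-04T00:00:00Z", Date.parse("2008-07-21T00:00:00Z"))` produces a span whose visible text is "17 days ago" and whose tooltip carries the full GMT timestamp, so both camps in the thread get what they want.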
On second thought (as I read through more of the other comments), relative dates are sufficient for the first few weeks, and then beyond that, have the relative time displayed but make the exact date available as a tooltip (or however you want to display it). Exact time of day is not relevant after a few weeks have gone by, but certainly the exact date could be helpful in some cases. My vote: relative dates with the exact date displayed in some fashion once the post has reached some predetermined age.

I agree with Mike Tomasello. Display the relative time in a span and put the GMT absolute time in the title attribute.

Absolute time is better. Everyone can do date-math, especially everyone who is reading content on a software development website. But knowing the exact date something happened is very useful: I can refer to that date (i.e. "your comment posted July 20th") and I can, later on, correlate that date with other events. Anyway, for a site like this one, which will likely use Javascript a lot, it'd be easy enough to replace the date content with a user-specified format on the client side. Heck, even if you don't provide that feature you can just give the dates valid HTML class names and then Greasemonkey can do the rest. Just remember to store all dates in UTC in the database. I'm working with a database where the dates are stored in the local timezone, which is madness and causes problems when DST begins/ends, but that design decision was made years ago and it's too late to fix it. Sigh.

Show both relative and absolute. Manipulate the absolute UT date with client-side javascript to convert to the visitor's time zone and format.

Relative with absolute tool tip.

@Jeff B: > "…please use JavaScript to convert it to the viewer's local time zone…" WHY?? I think absolute time is a necessity.
As watt said, "static date may be better for search engines (or caches)." I occasionally include a year or month in my searches so I know I'll get results from that particular year or month. So I might add "July 2008" to a search, but I'm not going to add "17 days ago". Using absolute dates lets me add "July 2008" to a search – a constraint that's not too specific and would include everything from the month of July. It's more likely that I'd remember something was written in July 2008 than that I'd remember it was written 17 days ago.

I second relative with absolute in tooltips too. I doubt I'll need to know the exact time of posting for most of the articles. Also, I fall into the minority of people that's usually a day ahead, but I've been able to convert PST/EST/GMT to my local time mentally from playing MMOs.

I didn't get a chance to read all the comments, so this might have been said before. I would go with relative, but put the absolute date in the Title attribute. This would be a non-invasive way of showing the precise date if it is needed for some bizarre reason. I guess I should have read the last comment at least :S

paulo Jul 21 2008

It would be cool if you could put it in light years! That would be universal enough… : )

paulo Jul 21 2008

yes i know… a light year is a unit of distance not time :S

Howard Jul 21 2008

I tend to open lots of pages in the morning and get to reading them throughout the day. When I get to a page, it saying "about an hour ago" doesn't help me at all, unless you have javascript to update the relative date. If you do, I won't be able to tell if you update it for me or not and will have to refresh the page anyway. Absolute times don't need to change after a page has been loaded. Also I've never seen a search with a date range feature that let me say "about a year ago". I agree I'm more likely to remember to search for something in July. I would be fine with either, but is it possible to have it as an option?
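Several commenters above suggest showing the relative age as the visible text while tucking the absolute GMT timestamp into the title attribute, so it surfaces as a browser tooltip on hover. A minimal sketch of that idea (the function name and markup are illustrative, not from the post):

```javascript
// Render a post's age as relative text with the absolute UTC time in the
// title attribute. Timestamps are passed in as UTC milliseconds so the
// function stays pure and easy to test.
function ageWithTooltip(postedUtcMs, nowUtcMs) {
  var mins = Math.round((nowUtcMs - postedUtcMs) / 60000);
  var rel = mins < 60 ? mins + " minutes ago"
                      : Math.round(mins / 60) + " hours ago";
  var abs = new Date(postedUtcMs).toUTCString(); // unambiguous: always GMT
  return '<span title="' + abs + '">' + rel + "</span>";
}
```

In a real page the server (or a DOM call like `setAttribute`) would set the title; returning a markup string here just keeps the sketch self-contained.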
Matt Miller Jul 21 2008

Relative is better. Absolutely ;)

I love relative dates, and since you guys are using JQuery I recommend you check out John Resig's prettydate at the following url for doing this on the client. (That way the absolute date is in the markup and it degrades really nicely.) http://ejohn.org/blog/javascript-pretty-date/

Without sounding like I'm riding the fence, I think it is going to depend more on whether you're looking to keep these threads over the long haul or not. For posts made within the current week, relative time is great because you get a better feel for whether the post is active or not (multiple posts with "posted a few seconds ago" always looks kind of nice). But for long-term archiving and viewing, the absolute date is much nicer. So which way are you looking to focus things?

Another vote for the relative + absolute in title. Works well for me!

I prefer absolute dates. Precise information is just more useful. One thing some sites neglect is making clear what format the date is displayed in and the timezone offset. Because practices vary, the format must be unambiguous. Otherwise, the user doesn't know if the site has formatted the date for their locale or whether the developer was just ignorant of the need to do so.

Relative for anything under some limit (~30 days?); absolute after that. The thought being, if it's more than X time old I'll care more about when it was relative to other events (the release of version N of the Baz library) than exactly how old it is. OTOH why not throw in both?

Miguel Crispin Jul 21 2008

Default: Relative with absolute in a tool tip. And let the user decide in his/her profile which format they prefer to see in the tool tip.

I'm surprised that you're going to this level of design detail for the private beta. Come on, we're all itching to have a look. Isn't this a detail you can work out later?

Mark Struzinski Jul 21 2008

I say relative time.
The exact Date/Time is not useful, as long as you have an approximate reference to how stale the question might be. I say this because if a question was a year old, there may be an easier way (now) to answer the question because of updates to the language, etc.

+1 Relative

Justin Standard Jul 21 2008

I think relative times are better for the readability reasons you cited in the post. Maybe you can include a way to get the original absolute time as well (though don't display it by default) like a javascript rollover or something. That way if someone REALLY wants to see the actual absolute time (relative to the timezone of the server) then they can.

Jesse Dearing Jul 21 2008

Whenever I look for dates in information I just care about knowing how "stale" the information is anyway, so my vote is for relative as well.

If you make sure that print mode uses absolute, then relative is fine. I don't want to have to make sure that the printout contains a timestamp and then do math. Also, in some cases "three years ago" may not be accurate enough when trying to figure out which version of SharePoint or iPhone the post is talking about.

Relative is good most of the time. If there was an easy way to get the absolute (mouseover?), relative is fine. And I don't want to have to go to the individual pages from the search results to find out.

When you print the timestamp for a comment, right now you do it like this: [a href="#comment-3370" title=""]timestamp[/a] All you have to do to fix it is print it like this: [a href="#comment-3370" title="" name="comment-3370"]timestamp[/a]

Hmm, while I can see to some degree why you'd want relative datetime (it looks prettier, I suppose), I don't really see much purpose in it. I think I'd vote for absolute, in UTC time. I don't think it's orders of magnitude more difficult to parse "3 years ago" into being 3 years ago, than parsing "2005" into being 3 years ago.
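The pretty-date idea mentioned a few comments up (graduated, variable-precision relative phrases) can be sketched roughly like this. This is in the spirit of John Resig's plugin rather than his actual code, and it falls back to an absolute date after about a month, as several commenters suggest:

```javascript
// Graduated relative phrases for recent posts; absolute date for old ones.
// Both arguments are UTC milliseconds, so the function is pure.
function prettyDate(postedUtcMs, nowUtcMs) {
  var sec = (nowUtcMs - postedUtcMs) / 1000;
  var day = Math.floor(sec / 86400);
  if (sec < 60) return "just now";
  if (sec < 3600) return Math.floor(sec / 60) + " minutes ago";
  if (sec < 86400) return Math.floor(sec / 3600) + " hours ago";
  if (day === 1) return "yesterday";
  if (day < 7) return day + " days ago";
  if (day < 31) return Math.floor(day / 7) + " weeks ago";
  // Older than a month: precision of the phrase has degraded too far,
  // so give the unambiguous absolute (GMT) date instead.
  return new Date(postedUtcMs).toUTCString();
}
```

The bucket boundaries (a minute, an hour, a day, a week, a month) are arbitrary choices for illustration, not anything the post prescribes.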
I would have to say BOTH: 6 minutes ago (07/21/08 10:30a). The date in parentheses could be dimmed (as in gray or silver on a white background). My 2 cents.

John Millikin Jul 21 2008

I prefer relative time for intervals less than a day ("five hours ago"), or perhaps less than a week ("6:03 last Thursday"), but as the delta increases so does the inaccuracy. As for the UTC/local debate, most sites have an option for the user to enter their preferred timezone on registration. This isn't perfect — for example, if a user travels across zones then the displayed time will no longer match local time — but such issues I consider unimportant. Assuming the server stores timestamps in UTC, it's always possible to display something reasonable to a human.

Roland Tepp Jul 21 2008

Use relative dates on the front page and in listings or search results of questions, with the absolute date in a tooltip. The information "3 days ago" is much more informative in this case than anything else. However, when displaying individual question/article posts, use absolute dates. Also – when an article passes a certain point in time, I'd prefer to see the full date instead of "2 years ago". At about 1 year old, the exact dates start to gain meaning again. With regard to user timezones in absolute time, I'd recommend a trick – in the raw HTML just spit out the full date in the timezone it was stored in (your server timezone, presumably) and replace it with the user-timezone value using javascript at the client side.

I'd say relative times are better. I don't really care exactly when something was posted as long as I can get a general idea of when it was posted.

I second Dennis Kehrig's proposal of having the absolute times in the HTML and converting them to relative using JavaScript on the client, as well as click-to-switch. I think this is the best of both worlds, and makes the relative times make sense ("6 minutes ago… but when was 0 minutes ago?").
Jeff, I love your attention to detail, man; really looking forward to the public beta (there is a public beta, right?). Anyway, about your question, I like the way Gmail is handling the dates on mail. It's like this: July 19 (two days ago). They provide the date and relative time and they drop the year because it's the same year. Another example: July 6 (three weeks ago). They drop the number of days from the relative time since it doesn't really matter. The purpose of the relative time is to provide a sense of how old this post is, like when someone is saying "I think they fixed it in the last service pack", so you run and get the latest service pack and install it and nothing happens, because this post was from Wednesday, March 22, 2001, 12:54:24 PM (EST). Of course, it's easier to try installing the service pack than to read this date. People don't need the exact nanosecond the post was posted. They need a round figure of how old it is. Because in two seconds they're gonna forget that information anyway. So I think you can do it like Gmail; that would be a popular choice. It's a little more coding to do because you have to figure out what unit to drop in each situation, but it's totally worth it. Also, you can always leave it to the user to choose which way to display the post time (for those users who are logged on).

Relative for the first month and then absolute.

Martin Wallace Jul 21 2008

I'm inclined to relative, but for older posts does it really matter whether they are presented with time zone correction? If I am viewing something that is a few months old I'm probably not interested in the time at all – just the date. So I would say: relative for anything under a week; absolute date, without time, for anything older.

I would definitely prefer absolute; my forum lurking habits tend to use dates to determine new content. In this way I don't need to remember titles of posts I found useless, I just look at the date and it tells me if I read it or not.
Whereas with relative dates, it's harder to tell if I've read it, considering the non-static…ness of the time.

I prefer absolute.

I would default to one, but have the other available as a setting in the user's profile.

I prefer relative for recent and then absolute for older. Although a hybrid where they are both displayed would work for me too.

Paul Henry Jul 21 2008

I can think of quite a few reasons why absolute dates would be preferable to relative dates. To pick one rather vivid example, if I come across a post that seems rather… dubious, "Posted April 1, 2005" is illuminating in a way "Posted three years ago" would not be, I'm sure you'll agree. On a more mundane level, if I encounter a post describing the features of AmazingProduct 1.0, it would be nice to know if it was posted after the product was released (in which case the poster might know what s/he's talking about) or before (in which case s/he's probably just cribbing from an article or a press release).

- Mário Marinato, from Brazil

I'm going to cast another vote for using relative time with the absolute available via a tool tip. If you don't go with this method, at least make the option to see the absolute time available.

Definitely relative. "the odds of your time zone being in a completely different day from the server's time zone is fairly slim." Well, for me they're pretty high, since I'm in New Zealand. And let me tell you, absolute dates that don't take into account the user's timezone are really, really annoying.

David H Aust Jul 21 2008

Go the relative dates. We're all supposed to be smart people, but any reduction to the cognitive load is a bonus. But it will be interesting because, from what I understand, the posts will be in order of relevance/votes, not chronological. Will it work well to have '3 months ago' listed above '12 minutes ago' listed above '3 days ago'?
And please don't make the same mistake as the asp.net forums, where the user's 'joined' date is almost more prominent than the date of the post.

Repeating the theme and going to say relative.

I say both, but perhaps with the absolute date (GMT) in smaller print. Everyone should know how to translate GMT to local time.

+1 for relative

+1 for relative time reporting. When I hit a blog post, or any article online really, the first thing I want to see is how old the information is. Dates are nice, relative is better.

Absolute, for the many reasons already posted. Relative looks good for a short while and becomes more useless as time goes by. The idea of 'relative with a hover-over showing absolute' kind of horrifies me. I imagine someone quoting 'the entry on April 12 at 3 PM' and me having to hover over 20 or 30 relative-time thingies to find that post … and I become a sad panda. Or worse, someone quoting 'the entry that's 7 hours old' … that way lies madness. Both, or settable by user pref, would be better I think.

It does appear there are a lot of people who like the relative time idea, though I cannot fathom it anymore. I did once think it was a good idea. Then I worked in a ticket system which used the 'relative with hover-over absolute' plan, and within a few weeks I realized the problems with that. For me anyway, it was a sounds-great, works-not-so-great kind of thing.

What you have is right on. The variable-precision relative dates are the way to go.

+1 for Relative. Dates in forum posts etc are generally useless.

brian Jul 21 2008

Relative only. Absolute dates are noise. You will be saving a lot of brain power on the part of your users.

Pioneer Jul 21 2008

tl;dr. I think phpBB gives users the option of what format to get an absolute date in — timezone, field order and everything.
When in doubt, make it configurable; or as a boss of mine used to answer when I'd give him two mutually exclusive implementation options: "both." It's just a matter of finding a way of doing both. Now, if you're asking what the default ought to be for non-logged-in users, that's a different kettle of fish. An absolute date format that's parsed and converted to local time using browser-side JavaScript manipulating the DOM seems pretty nifty to me: you get to provide something Google can read, plus you get to use information the browser has, such as the local timezone. I reiterate that as a logged-in user, I care and want to be able to change my setting. As a not-logged-in user, go with whatever you feel is best; if people don't like it then they ought to be able to log in/manage their session and get the power to control that.

I think a timezone option is important, to allow the users to see the time that means something to them.

Maybe it's just me, but I'd consider going for:

* Question in Absolute Date

"Q: How do I kill the Wumpus? (August 2007)
A: You can find it by its smell (2 months later)
A: Not sure, but if you do kill it it'll scream (10 minutes later)"

Actually, looking at that written down it's confusing, but I'm sure it can be improved. Just an idea anyway. The "relativeness" is more important to the question than the current timeframe, isn't it?

I think relative dates are a must. It's so annoying here in Sydney to see dates that are 15 hours behind – who knows when that is. Also, wherever absolute dates are displayed, use the format 12-jul-2008, not 7/12/2008. In Australia, and perhaps Europe & Japan, 7/12/2008 reads as 7 December.

I second Peter's comment re dates and times. I'm in Perth, Western Australia, and I often find the mm/dd/yyyy format problematic. Keep overflowing the stack!

Another one for relative with absolute in title text, and please go for unambiguous dates.
dextar78 says: "The relative time display will always be the same and lose its context (ie: all comments will say '1 month' or '1 year' old)." One way of handling this is to have all of the comments display a relative time to the original post, such as:

3 Days ago: [question]

I'm well aware that this is not entirely perfect either (3 hours later than the previous answer or the original question?), but it's a suggestion I'm sure someone can improve upon. After a bit more thought, I think it could probably be made clear by the design that the "[time] later" is in relation to the original post. I don't see any real benefit from having it relative to the previous comment. First post with an absolute time, and subsequent replies with a relative time to the first post? Or, even better – why not let the user decide? Surely it can't be too hard to make this a configurable user option?

Serhat Jul 22 2008

Relative is best. Better than the best: use absolute for posts older than a week.

I would have to agree that both relative and absolute dates should be available on the page. Put the absolute date/time into a footer on each post, and the relative date/time more prominently above the post. Perhaps place the absolute date in GMT into the HTML of the page for caching and future searching purposes. You could then use JavaScript to convert that to the browser's local time for display, and use the same GMT date to update the relative date based on the date/time in the user's browser… In other words, if you place the absolute date in GMT into the page code, you can use some fairly simple JavaScript to change it to the user's local time and a relative date.

+1 more for displaying relative time and making absolute time available via tooltip. (Both Twitter and GetSatisfaction do this too, fwiw.)

Stephen Jul 22 2008

Rough relative time under the name, then maybe when you hover over the relative time, the exact time (including GMT offset) appears as a tip?
This way you give the user both details, and also at least tell them which timezone those times refer to.

I would have to say that it should be user-selectable. It's not that hard to write code at the presentation layer that displays the date based on user preferences. It's a lot like the pidgin debacle. Should the message typing area be small, large, resize itself, or let the user resize it? Just let the user configure it so that it's comfortable for them. Otherwise half the users will end up being unhappy with the results.

Relative in the text & absolute in a tooltip.

Telcontar Jul 22 2008

Relative if the post date is less than a week from now and all the visible posts are ordered by date; absolute otherwise.

Relative is better, but put a tooltip with the exact date for convenience.

Niloc Jul 22 2008

What about having a preference so the users can choose either one? However, if you have to pick one, I would say relative would be my choice.

charles Jul 22 2008

I hate the relative time stuff. Sure, you might have to do the really "hard" math of figuring out how long ago "June 6th, 2001" was (is this really hard?). But the more annoying math is "Posted 986 days ago." Usually when I'm looking at those times I want to know the year and whether it was fall, summer, November. That way I can quickly figure out what products were out at that time, technologies, so on and so forth.

Why not use UTC?

It would be fairly trivial to have a setting that allows users to have either relative or specific dates. I'd be fine with relative by default if I could change it.

Swinders Jul 22 2008

Relative dates sound good, but I've always seen absolute and not had problems with them. One thing I do find confusing is also having a 'member since' date, which gives you two dates to think about.

Store all dates as myDateTime.ToUniversalTime(), which is UTC. Forget the headache of using CultureInfo to customize dates for each website visitor based on geography.
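The suggestion above (embed the absolute GMT date in the page markup for caches and search engines, then convert it client-side) can be sketched as follows. One caveat worth hedging: native Date parsing of ISO-8601 strings was only standardized later, in ES5, so in 2008-era browsers the string would need manual splitting; the class name here is made up for illustration.

```javascript
// The server emits something like:
//   <span class="utc-date">2008-07-21T12:00:00Z</span>
// The trailing "Z" marks the string as UTC, so the browser's Date
// object shifts it into the visitor's local zone automatically.
function utcToLocalString(isoUtc) {
  var d = new Date(isoUtc);
  return d.toLocaleString(); // rendered in the visitor's local time zone
}
```

On page load, a small script would walk the spans with that class and replace their text with `utcToLocalString(span.textContent)`; search engines and caches still see the absolute GMT date in the raw markup.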
Relative with increasing increments, similar to Outlook. For example: 5 minutes ago, 4 hours ago, 3 days ago, 2 months ago, 1 year ago.

isaac Jul 22 2008

Include the timezone if using server absolute time. I live in New Zealand, so I'm almost always a day ahead of any website I visit. Also specify the timezone as -/+ UTC, as zones like 'EST' can also be Australian Eastern time.

Not to sound rude, but who cares? It's not like it's that hard to do either one, and if you think people will really care one way or the other, make it an option. I am looking forward to stack overflow, but seriously, spending the time to look at a minor issue like this in depth and write up a blog post seems like a waste of effort that could have gone into something more important.

Frances Jul 22 2008

Either use relative time, or allow your users to set their time zone and display the DateTime according to that.

Relative time, absolutely! I've wasted too many precious brain cycles trying to calculate dates, and most of the time the result is incorrect anyway. That being said, you can't exactly remove absolute date information. I'd like to see something along the lines of: 2008/07/23 09:30 [GMT] (three minutes ago)

Why would having a timezone preference alienate users? Given it's not asked at signup time, I could do with a customizable timezone (or date format, if the default is not ISO): 2008/07/23 12:33 [GMT+2] (less than a minute ago)

Personally I prefer absolute dates. Although it's "easier" for humans to just read a relative date, I don't see the major advantage of displaying relative dates, especially when you want to compare two comments.
For example:

comment 1:
Relative: Posted last year
Absolute: 1st Jan 2007

comment 2:
Relative: Posted last year
Absolute: 2nd Feb 2007

If I need to make a reference to something, like an event that happened last year (let's say, 15th Jan 2007), I don't know if comments 1 and 2 were posted before or after the event, because all I know is that they were posted last year; I don't know the exact date, so I can't figure out if the comment was posted before or after 15th Jan 2007. Sounds a bit clunky, but after some time using relative dates, I started having problems with what I just mentioned.

Also, to display local time for a particular timezone, it's simple, really. You don't even need to store a timezone for each user in the database. Here's how I worked it out on my site. Create a cookie on the user's computer, storing the offset in minutes between the user's timezone and GMT 00:00. (I did this with javascript.) Then, using a server-side language (PHP in my case), I found the offset between the server and GMT 00:00 (using date('Z');) and then I worked out the difference between the two. Then when I want to display a date (absolute dates in this case), I just subtract the total offset (i.e. server minus client) from the timestamp that I just retrieved from the database.

Also, you might ask, what if javascript is turned off? Well basically, time is displayed in GMT 00:00 then. I personally prefer it that way (i.e. in the case when javascript is disabled) to displaying time in the server's timezone, because if the server is in, let's say, the west coast of the USA, users from Europe/Asia will be "disadvantaged" because there's a major shift in the timezone, but if it's displayed in GMT 00:00 it's "even" for every user. And also, users will be able to work out the difference themselves if it's displayed in GMT 00:00, because if time is displayed in the server's timezone, it's probable I won't be able to figure out the GMT offset of the server's timezone.
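The cookie trick described in the comment above can be sketched like this. The cookie name is made up for illustration; `getTimezoneOffset()` is the real browser API, returning the visitor's offset in minutes behind UTC (e.g. 300 for UTC-5). The server-side half is shown in JavaScript too, purely to keep the sketch in one language (the commenter used PHP):

```javascript
// Client side: record the visitor's offset from UTC, in minutes.
// The document object is passed in so the sketch stays testable.
function recordOffsetCookie(doc) {
  var offsetMin = new Date().getTimezoneOffset();
  doc.cookie = "tz_offset=" + offsetMin + "; path=/";
  return offsetMin;
}

// "Server side": shift a stored UTC timestamp into the visitor's
// wall-clock time using the offset read back from the cookie.
function shiftToVisitorTime(utcMs, offsetMin) {
  return new Date(utcMs - offsetMin * 60000);
}
```

As the commenter notes, when the cookie is missing (javascript off), the sensible fallback is to display plain GMT rather than server-local time.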
Drthomas Jul 23 2008

How about publishing both, perhaps in microformatted tags, almost as suggested by Paul Annesley? Then, using cookies or JS, hide one or the other, or neither, based on the stored preferences of the user? If it has to be strictly one or the other, then I think I'd prefer the absolute datetime, formatted in the YYYY-MM-DD, HH:MM:SS style. (I know, but I couldn't think of another abbreviation to describe 'minutes'… =( )

Drthomas Jul 23 2008

Darn HTML escaping… it should've read: <span class="datetime absolute"></span> / <span class="datetime relative"></span>

+1 for Relative time

I'd use relative times as long as you're within a year of the post, and dates (no times) if more than a year has passed. Then timezones are negligible, but you don't get that incomplete feeling you get with '3 years ago', since that leaves a huge window.

john m Jul 23 2008

- both, relative on left, absolute on right
- use UTC times (no daylight savings) – everyone knows their zone relative to UTC, but not everyone knows where they are compared to PDT, PST etc
- use day as the finest relative increment, i.e. today, not 7 hrs ago

Where's the podcast????

Of course, both are must-haves, maybe absolute in somewhat hidden form. To the question of timezones and date/time formatting – why can't you just figure out in client-side code what format and time zone are used by my OS? Don't make me think!

Akdom Jul 23 2008

I love the idea of using relative times. One thing that might be a nice preference, though, would be the ability to choose how precise the displayed times are. By this I mean: give the user the ability to set how finely grained this relative time is. I can see a case (though it may be a bit too much toward an edge) of someone wanting to know exactly what [Month|Week|Day|Hour|Minute] someone posted something (in bucket values of course, e.g. 1 year, 2 months, 3 days) … just a thought.
I really enjoy the relative dates because they make more sense to me as a reader. But it'd be nice to be able to interact with the relative date to get the specific one. Maybe a hover-over, or a click, or something that keeps the specific date hidden until someone really wants it.

I never really thought about the topic of 'relative vs absolute' until I read this; then I realized that on sites like RefactorMyCode.com that use this, it's so much easier to find up-to-date content.

Ashwin Nanjappa Jul 25 2008

A big YES for relative time. But there *has* to be some way to know the absolute time if needed.

I think the best way to do it is use relative until 24 hrs have passed, then switch to absolute. The timezone confusion only affects people for the first 24 hrs, really.

I prefer relative time.

Go for relative. I hate when I don't know at first glance "how old" a post is.

Sander Jul 31 2008

Ach! Never! Last year I developed a portal which used relative dates. Sure, they seem cool and flashy at first, but after a short while everyone started hating them! Did he post it in the evening? Did these two people post their messages at the same time? How long did it take to reply? On the other hand, what is completely uninteresting about dates? How long ago it was.

Michel Billard Jul 31 2008

Relative is more informative, but it could be useful to have some of the absolute data. For example, "3 days ago on Monday", "5 weeks ago in June", "2 weeks ago on the 17th", … This is just an idea; the complete absolute data in the tooltip is a very good idea too.

I would have voted "relative" like most others, but reading Sander's comment made me reconsider, and I'm in (almost) full agreement with it. It's still relative rather than absolute time, but the real question is "relative to what?". I like the idea of knowing whether posts were part of a heated debate or just accrued over a longer period of time.
The "3 years ago" form of relative time means you'd gradually lose information about the proximity of posts. The idea of "did he post in the evening" is less interesting to me. It might indicate how well the contributor was thinking at the time, but in a global community it'll always be evening somewhere.

I know the discussion's already been and gone, but here's a strong piping-in for absolute dates for anything more than two weeks ago. If this site wants to be a repository for programming problems, then it's not about the individual wittering of you or me – it's about those problems. If someone comes up with a useful answer to a problem, the usefulness isn't going to recede proportionately away into the past. It's not necessarily going to become out of date. It's good to know it was posted at some time, and it would be good to know when it was posted, but telling a bunch of technologists how long ago something was posted is just guaranteeing that it will slip backwards into oblivion. One thing the web has absolutely not worked out yet is how to let its information self-archive gracefully. For datetimes more recent than two weeks, I think I would favour the same absolute time format, but appended with, say, "(3 days ago)", as some have suggested.

Silverhalide Aug 5 2008

Following up on Douglas' comment: if you're going to allow any sort of threaded discussions, or similar setups where the order in which the comments are shown isn't necessarily the order of submission, you need the absolute times (or at least high-precision relative times – 2 years, 4 months, 8 days, 14 hours and 12 minutes ago). The timing between posts doesn't necessarily become less useful over time. It may be interesting to see that someone's question went unanswered for two weeks until they posted the solution themselves, compared to being unanswered for 5 minutes. However, those would both show up as "3 years".
Since I'm not a web developer, I have no idea about the additional runtime load that showing absolute times in the user's timezone would impose (especially compared to the workload of converting the stored times to relative time phrases), but I cannot imagine it to be high.

Andrew Oct 17 2008

Please just don't. Hiding the detail of the full timestamp is obnoxious, for the various reasons already mentioned.

Relative is nice for a quick glance, but just occasionally absolute is genuinely useful.

Well, even my vote goes in for relative, as it's really easier compared to the absolute. Absolute causes PST and GMT confusion: it's more of a hindrance because I then have to find out what PST is in relation to GMT.

I think absolute time is better… GMT is really confusing… some other method is needed to calculate it.

I agree, I think absolute is best; it's really a matter of opinion.

Relative time is definitely better than absolute time.

I wish there was a way to get rid of both, meaning that when I make a post I don't want the day or time to show up. That way the visitor doesn't think that my information is dated. Oh well… wishful thinking.
How about testing not displaying the time on your questions and answers at all? Would that be better?

Relative or absolute – it’s a matter of opinion. When I look at the dates I am just interested in approximately how old the posts are: recent, or 3 years ago? Exactness, as in time zones, usually doesn’t matter too much.

This is among my favorite sites. I’ve really enjoyed reading your posts, and thanks for keeping it interesting.

I agree with Justin, and with most of the other commenters: use relative time, but include the absolute time in a tooltip or something, in case someone wants to know.

I really enjoy reading your posts – keep up the good stuff!

I prefer absolute dates because they’re clearer. With relative dates, you have to think or count to get to a specific date. Thanks for this information.
http://mathhelpforum.com/trigonometry/280187-how-do-i-solve-cos-x-20degrees-1-2-a.html
# Thread: How do I solve cos(x − 20°) = 1/2?

2. ## Re: How do I solve cos(x − 20°) = 1/2?

Originally Posted by Seirinaba

$\cos\left(x-\frac{\pi}{9}\right)=\frac{1}{2}$

which implies that $\left(x-\frac{\pi}{9}\right)=\frac{\pi}{6}$. Therefore $x=\frac{\pi}{6}+\frac{\pi}{9}=\frac{5\pi}{18}$.

3. ## Re: How do I solve cos(x − 20°) = 1/2?

How did you turn 1/2 into π/6?

4. ## Re: How do I solve cos(x − 20°) = 1/2?

Originally Posted by Seirinaba

How did you turn 1/2 into π/6?

As a matter of fact, $\frac{1}{2}$ does not turn into $\frac{\pi}{6}$. Because I do not know what a degree is, I chose to do the mathematics using numbers. I just know, and you must learn, that if $\cos(\theta)=\frac{1}{2}$ then $\theta=\pm\frac{\pi}{6}$. That is just a fact of the measure of angles.

5. ## Re: How do I solve cos(x − 20°) = 1/2?

$\cos \frac{\pi }{3}=\frac{1}{2}$

$x=\frac{\pi }{3}+\frac{\pi }{9}=\frac{4 \pi }{9}$
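A quick numerical check of the thread's two candidate answers (a side note, not part of the thread; everything in radians, with 20° = π/9). Note that cos(π/6) = √3/2, not 1/2, so post #2's value does not satisfy the equation, while post #5's does:

```python
import math

# Left-hand side of the equation cos(x - pi/9) = 1/2.
lhs = lambda x: math.cos(x - math.pi / 9)

x_post2 = 5 * math.pi / 18   # post #2: built from pi/6, but cos(pi/6) = sqrt(3)/2
x_post5 = 4 * math.pi / 9    # post #5: built from pi/3, and cos(pi/3) = 1/2

print(abs(lhs(x_post2) - 0.5))   # far from 0 (about 0.366)
print(abs(lhs(x_post5) - 0.5))   # ~0 up to floating-point error
```

(There are of course further solutions, since $\cos\theta = \tfrac{1}{2}$ holds for $\theta = \pm\frac{\pi}{3} + 2k\pi$.)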
http://www.birs.ca/events/2012/5-day-workshops/12w5055
Graph Searching (12w5055)

Arriving in Banff, Alberta Sunday, October 7 and departing Friday October 12, 2012

Organizers: (University of Bergen) (Dalhousie University) (Ryerson University) (National and Kapodistrian University of Athens)

Objectives

There are many variants of graph searching studied in the literature, which are either application driven, i.e. motivated by problems in practice, or are inspired by foundational issues in Computer Science, Discrete Mathematics, and Artificial Intelligence, including:

- Information Seeking
- Robot motion planning
- Graph Theory
- Database Theory and Robber and Marshals Games
- Logic
- Distributed Computing
- Models of computation
- Network security

In the past three years, problems have emerged from real applications related to the structure of modern (or projected) networks, which are expected to be large-scale and dynamic, and in which agents' behaviour can be probabilistic, decentralized, and even selfish or antagonistic. This is one of the reasons why the field of Graph Searching is nowadays rapidly expanding. Several new models, problems, and approaches have appeared, relating it to diverse fields such as random walks, game theory, logic, probabilistic analysis, complex networks, motion planning, and distributed computing. Surprising and not yet widely circulated results have been found in the last two years that have consequences for the whole field. A major focus of the Workshop will be on graph searching problems in large-scale networks, including tutorials on some of the successful methods. The Workshop will bring together researchers in Graph Searching and those versed in the analysis of large-scale networks. For historical reasons, Canada is a stronghold of both fields, with several groups and individuals spread around the country. This makes BIRS a natural venue for the Workshop.
Little investigation has occurred into graph searching models on large-scale networks, because the focus of graph searching researchers has been on deterministic problems and their associated complexity and algorithmic issues. Recent advances in techniques for understanding `large' graphs bode well for making progress in investigating graph searching on these networks. In 2009, Luczak and Pralat [LP], using probabilistic methods, showed that the cop number behaves in a regular but unexpected fashion: consider the random graph $G(n,p)$ with expected degree $pn=n^x$; as $x$ decreases, the expected number of cops rises and falls regularly, with local extrema at $x=1/2, 1/3, 1/4, \dots$. This result suggests that large networks have enough edges and local structure to smooth out the irregularities inherent in small graphs. In 2008, the cleaning of large-scale networks (the Brush problem; Alon, Messinger, Nowakowski, Pralat, Wormald) [MNP08, APW08, P09] was also investigated, using Wormald's differential equations method for random regular graphs. In 1985, Meyniel conjectured that if $G$ is a connected graph of order $n$, then the number of cops needed to capture a robber is of order at most $n^{1/2}$. This would be best possible because of a construction of a bipartite graph based on the finite projective plane. In 2009, Bollobas, Kun and Leader [BKL], using probabilistic methods, essentially proved Meyniel's bound for random graphs (up to a logarithmic factor). Recently, Pralat and Wormald showed that the logarithmic factor can be eliminated (from both random binomial graphs and random $d$-regular graphs), and so Meyniel's conjecture is verified for these models. For deterministic graphs, we are still far from proving the conjecture. Until recently, the best known upper bound was given by Frankl in 1987 [F87], who showed that the cop number is always of order at most $n \log\log n / \log n = o(n)$.
After twenty years of attacking this problem, we have three recent independent proofs (Lu, Peng [LuP]; Scott, Sudakov [SS]; and Frieze, Krivelevich, Loh [FKL]) of the same result, namely, that the cop number is at most $n\,2^{-(1+o(1))\sqrt{\log n}}$ (which is still $n^{1-o(1)}$). We hope that the Workshop will bring us a bit closer to the solution. The original problems all had simple optimality requirements: find the minimum number of searchers to do the job. Another major focus will be on more real-world requirements, e.g. minimizing the number of time steps or, if the searchers have costs associated with their utilization, minimizing the cost. Recently, Scott, Stege, and Zeh considered a firefighting model (fancifully called Politician's Firefighting) in which the number of resources available at any one time is proportional to the present size of the fire. In many models, the searchers can be regarded as robots having limited processing power. If the robots have only local knowledge of the network, Messinger and Nowakowski [MN09] showed that self-stabilizing behaviour (a minimization of the number of steps) can occur, even though it can take time exponential in the number of edges to achieve (Vetta and Li) [VL10]. Some time will be spent investigating the relationship between the knowledge and decisions a robot is able to make and the effect on the minimization problem.

[LP] T. Luczak and P. Pralat, Chasing robbers on random graphs: zigzag theorem, to appear in Random Structures and Algorithms.
[MNP08] M.E. Messinger, R. Nowakowski, and P. Pralat, Cleaning a network with brushes, Theoretical Computer Science 399 (2008), 191-205.
[APW08] N. Alon, P. Pralat, and N. Wormald, Cleaning regular graphs with brushes, SIAM Journal on Discrete Mathematics 23 (2008), 233-250.
[P09] P. Pralat, Cleaning random graphs with brushes, Australasian Journal of Combinatorics 43 (2009), 237-251.
[BKL] B. Bollobas, G. Kun, and I. Leader, Cops and robbers in a random graph, preprint.
[F87] P. Frankl, Cops and robbers in graphs with large girth and Cayley graphs, Discrete Applied Mathematics 17 (1987), 301-305.
[LuP] L. Lu and X. Peng, On Meyniel's conjecture of the cop number, preprint.
[SS] A. Scott and B. Sudakov, A new bound for the cops and robbers problem, preprint.
[FKL] A. Frieze, M. Krivelevich, and P. Loh, Variations on cops and robbers, preprint.
[MN09] M.E. Messinger and R.J. Nowakowski, The robot cleans up, Journal of Combinatorial Optimization 18 (4) (2009), 350-361.
[VL10] A. Vetta and Z. Li, Bounds on the cleaning times of robot vacuums, Operations Research Letters 38 (1) (2010), 69-71.
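As a concrete illustration of the cops-and-robbers model discussed above: a finite graph has cop number 1 exactly when it is dismantlable, i.e. it can be reduced to a single vertex by repeatedly deleting a "corner" vertex whose closed neighbourhood is contained in that of another vertex (the Nowakowski–Winkler / Quilliot characterization). The sketch below, which assumes a plain adjacency-dict representation and is not part of the workshop material, checks this directly:

```python
# Decide whether a graph is cop-win (cop number = 1) via dismantlability.
def is_cop_win(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    # Closed neighbourhoods N[u] = {u} | N(u); fresh sets, adj is untouched.
    closed = {u: nbrs | {u} for u, nbrs in adj.items()}
    while len(closed) > 1:
        # Find a corner: a vertex u dominated by some other vertex v.
        corner = next((u for u in closed
                       if any(closed[u] <= closed[v]
                              for v in closed if v != u)), None)
        if corner is None:        # no dominated vertex: not dismantlable
            return False
        del closed[corner]        # delete the corner ...
        for nbrs in closed.values():
            nbrs.discard(corner)  # ... and update remaining neighbourhoods
    return True

# A path P4 is a tree, hence cop-win; the 4-cycle is the smallest
# graph that is not cop-win.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_cop_win(path))    # True
print(is_cop_win(cycle4))  # False
```

Deciding whether k cops suffice for k > 1 is far more expensive (the general problem is hard), which is one reason the probabilistic bounds discussed above are valuable.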
https://www.researcher-app.com/paper/142852
# Search for low-mass Dark Matter with the CRESST Experiment. M. Wüstrich, I. Usherov, C. Türkoğlu, A. Erb, A. Gütlein, M. Stahlberg, F. Pröbst, C. Bucci, A. Langenkämper, L. Stodolsky, G. Angloher, S. Wawoczny, E. Mondragon, J.-C. Lanfranchi, L. Canonica, F. v. Feilitzsch, C. Strandhagen, H.H. Trinh Thi, K. Schäffner, R. Puig, F. Reindl, A. Münster, D. Hauff, J. Rothe, W. Potzel, F. Petricca, A. Tanzke, S. Schönert, R. Strauss, P. Gorla, M. Willers, W. Seidel, H. Kraus, X. Defay, J. Loebell, C. Pagliarone, J. Jochum, N. Ferreiro Iachellini, H. Kluck, J. Schieck, A. Bento, A. Ulrich, P. Bauer, M. Mancuso, M. Kiefer CRESST is a multi-stage experiment directly searching for dark matter (DM) using cryogenic $\mathrm{CaWO_4}$ crystals. Previous stages established leading limits for the spin-independent DM-nucleon cross section down to DM-particle masses $m_\mathrm{DM}$ below $1\,\mathrm{GeV/c^2}$. Furthermore, CRESST performed a dedicated search for dark photons (DP) which excludes new parameter space between DP masses $m_\mathrm{DP}$ of $300\,\mathrm{eV/c^2}$ to $700\,\mathrm{eV/c^2}$. In this contribution we will discuss the latest results based on the previous CRESST-II phase 2 and we will report on the status of the current CRESST-III phase 1: in this stage we have been operating 10 upgraded detectors with $24\,\mathrm{g}$ target mass each and enhanced detector performance since summer 2016. The improved detector design in terms of background suppression and reduction of the detection threshold will be discussed with respect to the previous stage. We will conclude with an outlook on the potential of the next stage, CRESST-III phase 2. Publisher URL: http://arxiv.org/abs/1711.01285 DOI: arXiv:1711.01285v1