| url | text | metadata |
|---|---|---|
https://arxiv.org/abs/1311.6975 | hep-lat
# Title: Computing the Adler function from the vacuum polarization function
Abstract: We use a lattice determination of the hadronic vacuum polarization tensor to study the associated Ward identities and compute the Adler function. The vacuum polarization tensor is computed from a combination of point-split and local vector currents, using two flavours of O($a$)-improved Wilson fermions. Partially twisted boundary conditions are employed to obtain a fine momentum resolution. The modifications of the Ward identities by lattice artifacts and by the use of twisted boundary conditions are monitored. We determine the Adler function from the derivative of the vacuum polarization function over a large region of momentum transfer $q^2$. As a first account of systematic effects, a continuum limit scaling analysis is performed in the large $q^2$ regime.
Comments: 7 pages, 4 figures; presented at the 31st International Symposium on Lattice Field Theory (Lattice 2013), 29 July - 3 August 2013, Mainz, Germany
Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Phenomenology (hep-ph)
Journal reference: PoS(LATTICE 2013)304
Report number: MITP/13-070, HIM-2013-06
Cite as: arXiv:1311.6975 [hep-lat] (or arXiv:1311.6975v1 [hep-lat] for this version)
## Submission history
From: Hanno Horch [view email]
[v1] Wed, 27 Nov 2013 14:16:15 UTC (220 KB) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7603495717048645, "perplexity": 2741.5177978686856}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203409.36/warc/CC-MAIN-20190324083551-20190324105551-00315.warc.gz"} |
http://bitacoraqueen.blogspot.com/1998/09/believe-in-yourself.html | ## Believe in Yourself
Believe in Yourself is the third track on the album Electric Fire, by Roger Taylor.
Composition and production
Written by: Roger Taylor
Produced by: Roger Taylor and Joshua J. Macrae
Personnel
Roger Taylor - vocals and drums
Steve Barnacle - bass
Jason Falloon - guitar
Official versions of the song
1) Studio version (5:09): third track on the album Electric Fire.
Live video
Live at the Cyberbarn (24 September 1998):
Lyrics
Used to think I was a loser
And all I did was bound to fail
To be a shaker and a mover
Everybody wanna get to the top of the heap sometime (the top of the heap)
Everybody wanna climb to the top of the hill sometime (the top of the hill)
Everybody gotta take a little piece of the heat sometime (a piece of the heat)
Everybody gotta stand on their own two feet sometime
And I mean you, and I mean you
And I mean you, and I mean you
Believe in yourself, nobody else
Everybody wants a shot at a lick of the cream sometime (a lick of the cream)
Everybody wanna go, make a bit of a show sometime (a bit of a show)
Everybody need a chance, tap into the flow sometime
Everybody wanna trip, get a crack of the whip sometime
And I mean you, and I mean you
And I mean you, and I mean you
Believe in yourself, nobody else
And I mean you, and I mean you
And I mean you, and I mean you
People with problems
People on streets
People you meet
Believe in yourself
Believe in yourself
Believe (bus conductors) in (people on trams) yourself (Welshmen and sheep, clamps)
Believe (lepidopterists) in (collectors of stamps) yourself (leopards with spots on, tramps)
I mean you (space wasting journalists), I mean you (people in far flung posts)
I mean you (unpleasant neighbours), I mean you (ghosts)
I mean you (Duane Eddy), and I mean you (lawyers with fees)
And I mean you (Elvis), and I mean you (deciduous trees)
And I mean you (bosses), and I mean you (pets)
And I mean you (nurses, vets)
Believe (people with peepholes) in (you) yourself
You (Elsie), you (Stan)
I believe (mother, Desperate Dan)
## Welcome!
Want to know what this page is about? Click on the image. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9310322403907776, "perplexity": 24352.185562513652}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864461.53/warc/CC-MAIN-20180521161639-20180521181639-00381.warc.gz"} |
http://acscihotseat.org/index.php?qa=1637&qa_1=portfolio-being-weighted-betas-arbitrage-opportunity-this&show=1641 | # Beta of portfolio being weighted sum of betas - Arbitrage opportunity if this does not hold ?
25 views
In the CAPM section of the notes they give a practical example of why the beta of a portfolio should be the weighted sum of the betas of its constituents. They are basically trying to justify why, in practice, for a portfolio, $$E_P = \sum_{i=1}^N x_i E_i \Leftrightarrow \beta_P = \sum_{i=1}^{N} x_i \beta_i$$.
They say to consider a simple portfolio of equal parts of assets 1 and 2, with $$\beta_1$$ and $$\beta_2$$ being their betas. They then say that if the expected return on the portfolio is higher than $$\frac{1}{2}(E_1+E_2)$$ while the beta of the portfolio is $$\beta_p =\frac{1}{2}(\beta_1+\beta_2)$$, then you could buy the portfolio and sell equal parts of asset 1 and asset 2, and end up with a positive expected value and hence a risk-free profit.
To me this does not make any sense, because all it does is give a net expected profit, which doesn't mean the profit is risk-free at all.
I understand that if the market has mispriced an index fund or something like that then you could technically make a risk free profit by instantaneously buying all the constituents and selling the index fund or vice versa but that's not what this is saying, is it?
+1 vote
answered May 25 by (820 points)
selected May 25
If $$\beta_{\mathrm{portfolio}} = \sum_{i = 1}^N x_i\beta_i$$, the risks are matched exactly, and the arbitrage strategy mentioned (buying the portfolio, which is itself now a security, and selling the underlying securities) leads to an arbitrage profit of:
$$(1+E_{\mathrm{portfolio}}) - \sum_{i = 1}^N x_i (1 + E_i) > 0$$
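As an illustrative sketch (the weights and expected returns below are made-up numbers, not from the notes), the sign of this payoff can be checked directly:

```python
# Hypothetical numbers (not from the notes): two equally weighted assets.
x = [0.5, 0.5]                # portfolio weights
E = [0.08, 0.12]              # expected returns of assets 1 and 2
E_portfolio = 0.11            # mispriced: higher than 0.5*(E1 + E2) = 0.10

# Payoff per unit invested over a one-year horizon: long the portfolio,
# short the constituents in the same proportions.
profit = (1 + E_portfolio) - sum(xi * (1 + Ei) for xi, Ei in zip(x, E))
```

Since the betas cancel exactly, the 0.01 excess here is earned with zero net market exposure.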
Note that I assumed the investment period to be one year and $$E$$ to be a nominal annual rate compounded annually, for demonstration purposes. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6863604784011841, "perplexity": 649.0683701314358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998325.55/warc/CC-MAIN-20190616222856-20190617004856-00106.warc.gz"} |
http://mathhelpforum.com/algebra/48915-n.html | Math Help - N!
1. N!
For a positive integer, $N$, we define N! (read N factorial) to
be the product of all the integers from 1 to N. Thus:
$N! = 1\cdot 2\cdot 3\cdots N$
It is clear that N! will end in a zero if N is greater than or equal to
5. For big values of N, N! will end in many, many zeros.
How many zeros will $75!$ end in?
2. This is known as the trailing zeros problem.
This function will answer the question for any $N<5^{10}$:
$Z(N) = \sum\limits_{k = 1}^{10} \left\lfloor \frac{N}{5^k} \right\rfloor$
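As an illustrative sketch (not part of the original thread), the same count can be computed by summing $\left\lfloor N/5^k \right\rfloor$ until the terms vanish:

```python
def trailing_zeros(n):
    # Legendre's formula: count the factors of 5 in n!.  Factors of 2
    # are always more plentiful, so each factor of 5 yields one
    # trailing zero of n!.
    count, p = 0, 5
    while p <= n:
        count += n // p   # multiples of 5, 25, 125, ... up to n
        p *= 5
    return count

print(trailing_zeros(75))  # → 18
```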
3. Hello, perash!
For a positive integer, $N$, we define N! (read N factorial)
to be the product of all the integers from 1 to N.
Thus: . $N! = 1\cdot2\cdot3 \cdots N$
It is clear that $N!$ will end in a zero if $N \geq 5$.
For large values of $N,\;N!$ will end in many, many zeros.
How many zeros will $75!$ end in?
Plato is absolutely correct . . . Here's the basis of that formula . . .
Every factor of 5 (paired with an even factor) will produce a final zero.
The question becomes: how many 5's are in the prime factorization of $75!$ ?
Every fifth number has a factor of 5.
. . Hence, there are: . $\frac{75}{5} \:=\:15$ factors of 5.
But every twenty-fifth number has a factor of $5^2 = 25.$
. . Each of them contributes one more 5 to the total.
And there are: . $\frac{75}{25} \:=\:3$ of them.
Hence, $75!$ contains: $15 + 3 \:=\: 18$ factors of 5
Therefore, $75!$ ends in eighteen zeros. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8974788784980774, "perplexity": 826.6450756798255}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268734.38/warc/CC-MAIN-20140728011748-00036-ip-10-146-231-18.ec2.internal.warc.gz"} |
http://koreascience.or.kr/search.page?keywords=Electrode&pageSize=10&pageNo=3 | • Title, Summary, Keyword: Electrode
### A Study on Signal Feature Extraction of Partial Discharge Types Using Discrete Wavelet Transform Technique (이산웨이블렛 변환기법을 이용한 부분방전종류의 신호특징추출에 관한연구)
• Park, Jae-Jun;Jeon, Byung-Hoon;Kim, Jin-Seong;Jeon, Hyun-Gu;Baek, Kwan-Hyun
• Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference
• /
• /
• pp.170-176
• /
• 2002
• In this paper, we propose a feature extraction method for the partial discharge types of transformers. For the wavelet transform, a Daubechies filter is used, from which we obtain wavelet coefficients; these are used to extract statistical feature parameters (maximum value, average value, dispersion, skewness, kurtosis) of the acoustic emission signal generated by each partial discharge type. The defects which could occur in a transformer were simulated by using a needle-plane electrode, an IEC electrode, and a void electrode. These coefficients are also used to identify the signal of each partial discharge electrode fault type in the transformer. As a result, from a comparison of acoustic emission amplitude and acoustic average value, we obtain the ordering IEC electrode > void electrode > needle-plane electrode. In the case of skewness and kurtosis, on the other hand, we obtain the ordering needle-plane electrode > void electrode > IEC electrode.
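The pipeline described above (wavelet decomposition, then statistical parameters of the coefficients) can be sketched as follows. This is a minimal illustration, not the paper's code: it uses a one-level Haar transform instead of the Daubechies filter, and random noise as a stand-in for an acoustic emission waveform.

```python
import numpy as np

def haar_dwt(x):
    # One level of the Haar DWT: approximation (low-pass) and
    # detail (high-pass) coefficients, each half the input length.
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def feature_vector(c):
    # The five statistical parameters named in the abstract.
    m, s = c.mean(), c.std()
    return {"max": c.max(),
            "average": m,
            "dispersion": c.var(),
            "skewness": ((c - m) ** 3).mean() / s ** 3,
            "kurtosis": ((c - m) ** 4).mean() / s ** 4}

rng = np.random.default_rng(0)
signal = rng.normal(size=1024)        # stand-in for an AE waveform
approx, detail = haar_dwt(signal)
features = feature_vector(detail)     # features of the detail band
```

In practice the features would be computed per discharge type and per decomposition level, then fed to a classifier.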
### A Comparison of Spot Weldability with Electrode Force Changes in Surface Roughness Textured Steel (가압력 변화에 따른 표면조도처리 강판의 저항 점 용접성 비교)
• Park, Sang-Soon;Park, Yeong-Do;Kim, Ki-Hong;Choi, Yung-Min;Rhym, Young-Mok;Kang, Nam-Hyun
• Journal of Welding and Joining
• /
• v.26 no.2
• /
• pp.75-84
• /
• 2008
• With the development of surface roughness textured steel for automotive body-in-white assembly, one of the key issues is to understand the role of surface roughness in textured steel sheets. To investigate the effect of surface roughness on the weldability of the prepared steels, the electrode force was varied. Steel sheets (T-H) with high surface roughness ($Ra\;=\;1.94\;{\mu}m$) reduced electrode life. This was attributed to the higher contact resistance at the electrode-sheet interface in the presence of the high surface roughness. The increased electrode diameter decreased the current density, thereby reducing weld electrode life due to small weld button size. When an increased electrode force was used, a significant increase in electrode life was observed when welding the high surface roughness steel sheet. This study suggests that contact resistance at the electrode-sheet interface, rather than at the sheet-sheet interface, is the dominant factor determining electrode life in welding of surface roughness textured steel.
### The Characteristics of a Superposed Discharge Type Ozonizer with Variation of Mesh in Internal Electrode (내부전극 조밀도 변화에 따른 중첩방전형 오존발생기의 특성)
• Song, Hyun-Jig
• Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
• /
• v.19 no.5
• /
• pp.87-93
• /
• 2005
• In order to develop a high-concentration, high-yield ozonizer, a superposed discharge type ozonizer using overlapping silent discharges and an internal electrode of mesh type has been designed and manufactured. It consists of three electrodes (central electrode, internal electrode, and external electrode) and double gaps (a gap between the central electrode and the internal electrode, and a gap between the internal electrode and the external electrode). Ozone is therefore generated by overlapping the silent discharges generated in each gap, for which the AC high voltages applied to the internal electrode and the external electrode have a $180^{\circ}$ phase difference and the central electrode is grounded. Ozone generation characteristics improve in proportion to the mesh density of the internal electrode, through the increase of the discharge electrode area and the control of the discharge power density. As a result, maximum ozone concentration, generation, and yield of 17,720 [ppm], 5.4 [g/h], and 205 [g/kWh], respectively, were obtained.
### Development of an Active Dry EEG Electrode Using an Impedance-Converting Circuit (임피던스 변환 회로를 이용한 건식능동뇌파전극 개발)
• Ko, Deok-Won;Lee, Gwan-Taek;Kim, Sung-Min;Lee, Chany;Jung, Young-Jin;Im, Chang-Hwan;Jung, Ki-Young
• Annals of Clinical Neurophysiology
• /
• v.13 no.2
• /
• pp.80-86
• /
• 2011
• Background: A dry-type electrode is an alternative to the conventional wet-type electrode, because it can be applied without any skin preparation, such as a conductive electrolyte. However, because a dry-type electrode without electrolyte has high electrode-to-skin impedance, an impedance-converting amplifier is typically used to minimize the distortion of the bioelectric signal. In this study, we developed an active dry electroencephalography (EEG) electrode using an impedance converter, and compared its performance with a conventional Ag/AgCl EEG electrode. Methods: We developed an active dry electrode with an impedance converter using a chopper-stabilized operational amplifier. Two electrodes, a conventional Ag/AgCl electrode and our active electrode, were used to acquire EEG signals simultaneously, and the performance was tested in terms of (1) the electrode impedance, (2) raw data quality, and (3) the robustness of any artifacts. Results: The contact impedance of the developed electrode was lower than that of the Ag/AgCl electrode ($0.3{\pm}0.1$ vs. $2.7{\pm}0.7\;k{\Omega}$, respectively). The EEG signal and power spectrum were similar for both electrodes. Additionally, our electrode had a lower 60-Hz component than the Ag/AgCl electrode (16.64 vs. 24.33 dB, respectively). The change in potential of the developed electrode with a physical stimulus was lower than for the Ag/AgCl electrode ($58.7{\pm}30.6$ vs. $81.0{\pm}19.1\;{\mu}V$, respectively), and the difference was close to statistical significance (P=0.07). Conclusions: Our electrode can be used to replace Ag/AgCl electrodes, when EEG recording is emergently required, such as in emergency rooms or in intensive care units.
### Effect of Electrode Design on Electrochemical Performance of Highly Loaded LiCoO2 Positive Electrode in Lithium-ion Batteries (리튬이온 이차전지용 고로딩 LiCoO2 양극의 전극설계에 따른 전기화학적 성능연구)
• Kim, Haebeen;Ryu, Ji Heon
• Journal of the Korean Electrochemical Society
• /
• v.23 no.2
• /
• pp.47-55
• /
• 2020
### Mechanically Immobilized Copper Hexacyanoferrate Modified Electrode for Electrocatalysis Amperometric Determination of Glutathione
• D. Davi Shankaran;S. Sriman Narayanan
• Bulletin of the Korean Chemical Society
• /
• v.22 no.8
• /
• pp.816-820
• /
• 2001
• A new copper hexacyanoferrate modified electrode was constructed by mechanical immobilization. The modified electrode was characterised by cyclic voltammetric experiments. Electrocatalytic oxidation of glutathione was effective at the modified electrode at a significantly reduced overpotential and over a broader pH range. The modified electrode shows a stable and linear response in the concentration range of $9\times10^{-5}$ to $9.9\times10^{-4}$ M with a correlation coefficient of 0.9995. The modified electrode exhibits excellent stability, reproducibility and rapid response, and can be used in flow injection analysis for the determination of glutathione.
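A linear calibration of this kind (response vs. concentration, summarized by a slope and a correlation coefficient) can be sketched as below. The concentration/response values are made up for illustration and are not the paper's data:

```python
import numpy as np

# Hypothetical calibration points spanning the reported linear range:
# glutathione concentration (M) vs. electrode response (arbitrary units).
conc = np.array([9e-5, 2e-4, 4e-4, 6e-4, 8e-4, 9.9e-4])
resp = np.array([0.9, 2.1, 4.0, 6.1, 7.9, 10.0])

slope, intercept = np.polyfit(conc, resp, 1)   # least-squares line
r = np.corrcoef(conc, resp)[0, 1]              # correlation coefficient
```

An unknown sample's concentration would then be read off as `(response - intercept) / slope`.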
### Assumption of Grounding Electrode by Model Experiment (모델실험에 의한 접지전극의 상정)
• Koh, Hee-Seog;Kim, Maeng-Hyun;Park, Seung-Jae;Song, Won-Pyo;Kim, Ju-Chan
• Proceedings of the KIEE Conference
• /
• /
• pp.327-329
• /
• 2002
• This paper is based on a reduced-scale model electrode and the theoretical background of the proportionality factor. Using a building structure and a mesh electrode, we obtain the grounding electrode of the building grounding structure and the grounding electrode of the mesh electrode by model experimentation and estimated coordinate geometry, and from these we make a practical grounding electrode assumption.
### Interaction Between Transparent Dielectric and Bus Electrode for Heating Profile in PDP
• Lee, Sang-Wook;Kim, Dong-Sun;Park, Mi-Kyung;Hwang, Seong-Jin;Kim, Hyung-Sun
• 한국정보디스플레이학회:학술대회논문집
• /
• /
• pp.864-866
• /
• 2007
• In a PDP, the bus electrode should have low resistance for high efficiency. The transparent dielectric affects the change in shape of the bus electrode during firing, and these changes are related to the electrical properties of the electrode. In this study, the shape of the electrode was controlled by the firing schedules of the transparent dielectric and the bus electrode.
### Formation of Copper Electroplated Electrode Patterning Using Screen Printing for Silicon Solar Cell Transparent Electrode (실리콘 태양전지 투명전극용 스크린 프린팅을 이용한 구리 도금 전극 패터닝 형성)
• Kim, Gyeong Min;Cho, Young Joon;Chang, Hyo Sik
• Korean Journal of Materials Research
• /
• v.29 no.4
• /
• pp.228-232
• /
• 2019
• Copper electroplating and electrode patterning using a screen printer are applied instead of lithography for heterostructure with intrinsic thin layer(HIT) silicon solar cells. Samples are patterned on an indium tin oxide(ITO) layer using polymer resist printing. After polymer resist patterning, a Ni seed layer is deposited by sputtering. A Cu electrode is electroplated in a Cu bath consisting of $Cu_2SO_4$ and $H_2SO_4$ at a current density of $10mA/cm^2$. Copper electroplating electrodes using a screen printer are successfully implemented to a line width of about $80{\mu}m$. The contact resistance of the copper electrode is $0.89m{\Omega}{\cdot}cm^2$, measured using the transmission line method(TLM), and the sheet resistance of the copper electrode and ITO are $1{\Omega}/{\square}$ and $40{\Omega}/{\square}$, respectively. In this paper, a screen printer is used to form a solar cell electrode pattern, and a copper electrode is formed by electroplating instead of using a silver electrode to fabricate an efficient solar cell electrode at low cost.
### Study on the Charging Characteristics of a Sealed Type Ni-Cd Cell (밀폐식 Ni-Cd 전지의 충전특성에 관한 연구)
• Yung Woo Park;Chai Won Kim;Mu Shik Jhon
• Journal of the Korean Chemical Society
• /
• v.15 no.6
• /
• pp.347-352
• /
• 1971
• The variations of the positive and negative electrode potentials, and of the internal pressure, were measured during charging of the sealed type Ni-Cd cell. Both the polarization characteristics of a paste type Cd electrode as a gas diffusion electrode in 30% KOH solution and the effects of an active carbon electrode, as an oxygen-consuming auxiliary electrode of the Ni-Cd cell, on the charging characteristics of the cell were studied. The peak voltage at the end of charge of the cell is ascribed to the peak in the negative electrode potential, which is due to concentration polarization caused by the lack of $Cd^{++}$ ions and of oxygen concentration. The recovery of the negative electrode potential results from depolarization by the increasing diffusion limiting current density with increasing oxygen pressure. The active carbon electrode was effective as an oxygen-consuming auxiliary electrode. The internal pressure of the cell could be maintained below 200 mmHg, even at a one-hour-rate charge and overcharge, by the use of the active carbon electrode as an auxiliary electrode.
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4613771140575409, "perplexity": 10116.015224994211}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141191511.46/warc/CC-MAIN-20201127073750-20201127103750-00690.warc.gz"} |
https://scioly.org/wiki/index.php?title=Talk:Circuit_Lab&oldid=73202 | # Talk:Circuit Lab
Talk page for Circuit Lab.
## Content
I think this page should at least explain the event. It only has learning material. I came to learn what the event is about and what you do but it wasn't there. Knittingfrenzy18 21:48, 24 May 2012 (EST)
Once we get the rules, that will be added. Unfortunately, I personally do not know exactly what it's about, but the wiki gives a good knowledge base. Eaststroudsburg13 11:41, 26 May 2012 (EST)
I added almost all of the information listed in the trial rules in preparation for the new rules to be released in September. iwonder 8:24, 05 August 2012(CST)
## Formality
Can we aim to keep away from sentences that would be too confusing? "They then flow around the whole circuit, la la la, and arrive back at the positive end. Capeesh?" Getting that informal doesn't really help get a point across, and will most likely cause a bit too much confusion. I don't mind informal or conversational wiki pages, but if it is causing confusion, it really is not being overly helpful. --Robotman (talk) 10:19, 26 October 2012 (CDT)
## Cleanup
1. Overuse of first or second person in certain sections
2. General organization of page: layout is slightly confusing, especially with "Other Topics" being lumped together, and "Sources" being its own heading
3. Either merging short AC Power and Solving Resistor Circuits pages with the main page, or moving these to the typical Event/Topic format.
--EastStroudsburg13 (talk) 15:18, 2 July 2017 (UTC)
As of now, the organization of the page has been greatly improved thanks to Raxu and Unome; remaining cleanup mainly deals with overuse of first or second person in some sections. Also, there should always be a space before a left parenthesis. EastStroudsburg13 (talk) 15:03, 11 July 2017 (UTC)
## Symbol for Voltage
On this page and in Solving Resistor Circuits, $E$ is used for voltage. Is it more common and intuitive to use $\Delta V$ (common among courses in NY and accessible to those familiar with $V$ for voltage), or $U$ (common in international books I've read and in some college textbooks)?
According to my sources, $V$ is more commonly used for Voltage in the context of circuits, whereas $E$ generally represents electromotive force (EMF). In the interest of consistency, $V$ should be used for Voltage wherever possible. Good catch. EastStroudsburg13 (talk) 16:33, 4 July 2017 (UTC)
## AC Circuit Theory
The rules say that AC Circuit Theory is not allowed, but I still see small pieces of AC circuit knowledge needed for things. How deep into AC circuit theory is considered a violation of the rules? daydreamer0023 (talk) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6632063984870911, "perplexity": 1964.326837797846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347458095.68/warc/CC-MAIN-20200604192256-20200604222256-00025.warc.gz"} |
https://kb.osu.edu/dspace/handle/1811/13030 | # Knowledge Bank
## University Libraries and the Office of the Chief Information Officer
# POINT DEFECT ACTIVITY IN AMORPHOUS SOLID WATER AND THE POSSIBLE ROLE OF DEFECT ACTIVITY IN THE GLASS TRANSITION
Title: POINT DEFECT ACTIVITY IN AMORPHOUS SOLID WATER AND THE POSSIBLE ROLE OF DEFECT ACTIVITY IN THE GLASS TRANSITION Creators: Fisher, M.; Devlin, J. Paul Issue Date: 1994 Abstract: It has been proposed that low temperature phase transformations in hydrogen-bonded solids such as ice depend on the concentration and mobility of orientational (Bjerrum L) point defects. If defect activity is necessary for the growth of crystalline ice from other phases of water (such as liquid water or amorphous solid water), it is important to understand the behaviour of orientational defects in these systems. In this study, the isotopic scrambling of $D_{2}O$ molecules isolated in amorphous $H_{2}O$ ice by mobile point defects has been used as a probe of defect mobility at temperatures below the glass transition temperature. The sequential passage of defects through sites in the ice lattice initially occupied by $D_{2}O$ molecules results in the formation of spectroscopically distinguishable deuterated species in the ice lattice. From the infrared spectra of these samples, the change in concentration of these spectroscopically distinguishable species is then followed with time and over a range of temperatures, enabling the determination of kinetic parameters relating to defect mobility. A mechanism for the isotopic scrambling process in amorphous ice below the glass transition temperature has been proposed. This mechanism involves point defect motion to explain the experimentally observed changes in concentration of deuterated species with respect to time. The isotopic exchange data seem to indicate a lack of significant molecular diffusional motion (fluidity) in amorphous ice at temperatures just below the glass transition temperature.
This finding is inconsistent with the recent conjecture that molecular diffusional motion plays a significant role in the glass transition which occurs in amorphous ice at approximately 130 K $^{1,2}$ URI: http://hdl.handle.net/1811/13030 Other Identifiers: 1994-FD'-04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6842653155326843, "perplexity": 2044.0039322707573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://dsp.stackexchange.com/questions/63074/phase-difference-between-signals-sampled-at-different-frequencies?noredirect=1 | # Phase difference between signals sampled at different frequencies
I want to know whether it is possible to measure the relative phase difference between two versions of a signal that has been sampled at two different locations with different sampling frequencies. Also, can that method be extended to undersampled cases as well?
Edit: Adding Matlab script to test possible solution (Eq.3) provided by Dan Boschen
clear all
close all
clc
Len = 768/121e6;
Fs1 = 157e6;
t1 = 0:1/(13*Fs1) :Len-1/Fs1; %Time vector for Channel 1
Fs2 = 121e6;
t2 = 0:1/(13*Fs2) :Len-1/Fs2; %Time vector for Channel 2
f=25e6; % Incoming signal frequency
phase_diff_in=0; % Modelling the actual phase difference taking In-Phase for now
% Creating signals
sign1 = cos(2*pi*f*t1);
sign2 = cos(2*pi*f*t2 + deg2rad(phase_diff_in) );
sign1 = sign1(1:13:end);
sign2 = sign2(1:13:end);
sig_ref=cos(2*pi*Fs1*t2);% Fs1 sampled by Fs2
sig_ref =sig_ref(1:13:end);
% Test of phase difference formula in time domain
phi1=acos(sign1(1:256));% In first window of 256 points
phi2=acos(sign2(1:256));
phi3=acos(sig_ref(1:256));
T1=1/Fs1;
n=0:255;
phase_diff=2*pi*n*f*( ((T1*phi3(n+1))/(2*pi*n)) -T1)...
- (phi2(n+1) - phi1(n+1));
phase_diff=wrapToPi(phase_diff);
As far as I understand, the phase difference in this case should have been 0, but that is not the case; the computed phase difference (in degrees) is nonzero.
Update: Simulating the code added by Dan
Fs1 = 157e6;
Fs2 = 121e6;
f=500e6;%25e6
samples = 400;
Len = samples;
Phi = 45;
phase_out=phase_scale(Fs1,Fs2,f,Phi,Len);
figure;
plot(phase_out)
mean(phase_out)
for the case when f=25e6 and phi=45 the following was obtained:
And for the case when f=500e6 and phi=45 the following was obtained:
The error increases significantly as the frequency is increased further.
Update #2: Simulation results after the code modifications by Dan
for the case when f=25MHz and phi=45 the following is obtained:
Which shows that the phase difference was measured very accurately.
Also for the subnyquist case as well @f=600MHz and phi=75, the following is obtained:
which shows that this works in the subnyquist cases as well. Hence the given solution works under the assumptions stated by Dan in the 'Practical Limitations' section of the answer.
• You title says "Phase difference between signals" (plural), your question says phase difference between a signal (singular)--which really wouldn't make sense but wanted to ask what you are really trying to do? (Purpose?) It can help simplify the answer. – Dan Boschen Jan 9 at 2:11
• Sorry for the confusion. By 'signals' I meant two different sampled versions of the same signal. The phase difference is introduced due to the sampling taking place at different locations (sensors being physically separated) – malik12 Jan 9 at 4:48
• ok they are physically in different locations with sampling clocks that cannot be synchronized (meaning even if you want to use two different frequencies, which is fine, you don't have the means to phase lock them to each other due to limitations of the set-up, correct?) – Dan Boschen Jan 9 at 4:53
• Offset error corrected and tested over broad range of input frequencies so hopefully it is done, please let me know if it works for you! Is this a beam forming application? – Dan Boschen Jan 10 at 17:43
## SOLUTION
Bottom Line
$$(\theta_2-\theta_1) = (\phi_2[n]-\phi_1[n]) - 2\pi f(T_2-T_1)n \tag{1}$$
$$f$$: frequency in Hz of two tones of the same frequency and fixed phase offset
$$(\theta_2-\theta_1)$$: phase difference in radians of tones being sampled
$$T_1$$: period of sampling clock 1 with sampling rate $$f_{s1}$$ in seconds
$$T_2$$: period of sampling clock 2 with sampling rate $$f_{s2}$$ in seconds
$$\phi_1[n]$$: phase result from sampling tone with $$f_{s1}$$ in radians/sample
$$\phi_2[n]$$: phase result from sampling tone with $$f_{s2}$$ in radians/sample
This shows how any standard approach for finding the phase between two tones of the same frequency sampled at the same rate (common phase detector approaches, including multiplication, correlation, etc.) can be extended to handle the case where the two sampling rates differ.
Simpler explanation first:
Consider the exponential frequency form of equation (1):
$$e^{j(\theta_2-\theta_1)} = e^{-j2\pi f(T_2-T_1)n}e^{j(\phi_2[n]-\phi_1[n])} \tag{2}$$
The term $$e^{j2\pi f(T_2-T_1)n}$$ is the predicted difference in frequency between the two tones that would result from sampling a single tone with two different sampling rates (when observing both on the same normalized frequency scale).
The observed difference in frequency between the two tones would be $$e^{j(\phi_2[n]-\phi_1[n])}$$.
Both terms have the same frequency with a fixed phase offset, and this phase offset is equal to the actual difference in phase between the two continuous-time tones. Multiplying the observed term by the conjugate of the predicted term removes the phase slope, leaving the fixed phase difference.
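This cancellation can be checked numerically. Below is a minimal Python sketch (the thread itself uses MATLAB; the 25 MHz tone, the 157/121 MHz rates, and the 45° offset are illustrative assumptions): the conjugate product of the two differently-sampled tones, multiplied by the conjugate of the predicted slope, collapses to a constant whose angle is the phase difference.

```python
import numpy as np

# Two complex (analytic) tones of the same 25 MHz frequency, sampled at
# two different rates, with a known 45-degree offset between them.
f = 25e6                       # tone frequency in Hz (illustrative)
fs1, fs2 = 157e6, 121e6        # the two sampling rates (illustrative)
theta = np.deg2rad(45.0)       # true phase difference theta2 - theta1
n = np.arange(256)

x1 = np.exp(1j * (2 * np.pi * f * n / fs1))           # theta1 = 0
x2 = np.exp(1j * (2 * np.pi * f * n / fs2 + theta))   # theta2 = 45 deg

# Observed frequency difference e^{j(phi2[n] - phi1[n])} ...
pd = x2 * np.conj(x1)
# ... times the conjugate of the predicted slope e^{j 2 pi f (T2 - T1) n}
comp = np.exp(-1j * 2 * np.pi * f * (1 / fs2 - 1 / fs1) * n)
result = pd * comp             # constant e^{j theta}, sample by sample

print(np.rad2deg(np.angle(np.mean(result))))   # approximately 45.0
```

The same cancellation holds for any pair of rates, since the slope term depends only on $$T_2-T_1$$.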
Derivation
The approach is to carefully work with all units on a time axis of samples. The frequency domain is thus in units of normalized frequency: cycles/sample or radians/sample, corresponding to cycles/sec or radians/sec when the time axis is in seconds. Therefore our sampling rate, regardless of what it is in seconds, will always be equal to $$1$$ cycle/sample (or $$2\pi$$ radians/sample if working in normalized radian frequency).
Once the two signals with the same analog frequency are each sampled at a different rate, they become two signals with different normalized frequencies.
This simplifies the problem and gives us the following result:
Given our original signals as normalized sinusoidal tones at the same frequency with different phase offsets:
$$x_1(t) = \cos(2\pi f t + \theta_1) \tag{3}$$ $$x_2(t) = \cos(2\pi f t + \theta_2) \tag{4}$$
Once sampled but still with time in seconds: $$x_1(nT_1) = \cos(2\pi f n T_1 + \theta_1) \tag{5}$$ $$x_2(nT_2) = \cos(2\pi f n T_2 + \theta_2) \tag{6}$$
Equations (5) and (6), with time expressed in units of samples, become:
$$x_1[n] = \cos(2\pi f T_1 n+ \theta_1) \tag{7}$$ $$x_2[n] = \cos(2\pi f T_2 n+ \theta_2) \tag{8}$$
Convert to complex exponential form so that we can easily extract the phase terms using complex conjugate multiplication, (for a single tone we just need to split the input signal into quadrature components; $$\cos(\phi) \rightarrow [\cos(\phi),\sin(\phi)]\rightarrow \cos(\phi)+j\sin(\phi) = e^{j\phi}$$, this is described using the Hilbert Transform as $$h\{\}$$)
$$h\{x_1[n]\} = e^{j\phi_1[n]} = e^{j(2\pi f T_1 n + \theta_1)} = e^{j2\pi f T_1 n}e^{j\theta_1} \tag{9}$$ $$h\{x_2[n]\} = e^{j\phi_2[n]} = e^{j(2\pi f T_2 n + \theta_2)} = e^{j2\pi f T_2 n}e^{j\theta_2} \tag{10}$$
The complex conjugate multiplication gives us the difference phase term we seek and its relation to our measured results:
$$e^{j(\phi_2[n]-\phi_1[n])} = e^{j2\pi f T_2 n}e^{j\theta_2}e^{-j2\pi f T_1 n}e^{-j\theta_1} \tag{11}$$
Resulting in
$$e^{j(\theta_2-\theta_1)} = e^{-j2\pi f(T_2-T_1)n}e^{j(\phi_2[n]-\phi_1[n])} \tag{12}$$
Note that $$e^{j(\phi_2[n]-\phi_1[n])}$$ represents the measurement, which for single tones results in a tone whose normalized frequency is predicted to be $$\omega = 2\pi f(T_2-T_1)$$ radians/sample, given by the $$e^{j2\pi f(T_2-T_1)n}$$ term. If we remove this frequency offset (by the conjugate multiplication above), the result is the phase difference of the original signals.
Equating the phases of both sides gives the result in units of radians:
$$(\theta_2-\theta_1) = (\phi_2[n]-\phi_1[n]) - 2\pi f(T_2-T_1)n \tag{13}$$
So in summary, $$\phi_1[n]$$ and $$\phi_2[n]$$ come from our measurements, given as $$\cos(\phi_1[n])$$ and $$\cos(\phi_2[n])$$, and we obtain the difference we need through the complex conjugate multiplication of the Hilbert Transforms of those measurements.
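The Hilbert-transform step $$h\{\}$$ can be sketched in isolation. The following Python fragment (an illustrative assumption — the answer leaves the implementation open, suggesting e.g. a fractional-delay all-pass filter) uses the standard FFT method for constructing the analytic signal $$e^{j\phi[n]}$$ from the real samples $$\cos(\phi[n])$$; it is exact here only because the test tone spans a whole number of cycles in the window:

```python
import numpy as np

def analytic(x):
    """FFT-based analytic signal: zero out the negative frequencies."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:N // 2] = 2.0     # double the positive frequencies
    h[N // 2] = 1.0       # keep the Nyquist bin as-is (N even)
    return np.fft.ifft(X * h)

N, k = 1024, 37                      # 37 whole cycles in the window
theta = np.deg2rad(30.0)             # illustrative phase offset
n = np.arange(N)
phi = 2 * np.pi * k * n / N + theta
z = analytic(np.cos(phi))            # should equal e^{j phi[n]}

err = np.angle(z * np.exp(-1j * phi))
print(np.max(np.abs(err)))           # near machine precision
```

For a tone that does not span an integer number of cycles, spectral leakage makes this block-FFT method approximate, which is why a filter-based Hilbert transformer is the practical choice.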
## Demonstration
I demonstrate this with the script below, similar to the OP's configuration, with the results plotted below. The script now includes the decimation and was tested for both f = 25 MHz and f = 400 MHz (undersampled) with similar results. It shows each step of the process above; the operations can be further combined. The Hilbert Transform in an implementation would be any approach of choice that delays the sampled tones by 90° (a fractional-delay all-pass filter is a reasonable choice).
Len = 10000;
phase_diff_in = 45;
f=400e6; % Incoming signal frequency
D = 13;
Fs1 = 157e6*D;
Fs2 = 121e6*D;
t1 = [0:Len-1]/Fs1; % Time vector channel 1
t2 = [0:Len-1]/Fs2; % Time vector channel 2
phi1 = 2*pi*f*t1;
phi2 = 2*pi*f*t2 + phase_diff_in*pi/180; % phase of channel 2 including the offset
sign1 = cos(phi1);
sign2 = cos(phi2);
% emulation of perfect Hilbert Transform for each tone:
c1_in = 2*(sign1 - 0.5*exp(j*phi1));
c2_in = 2*(sign2 - 0.5*exp(j*phi2));
% create expected phase slope to remove
n = [0:Len-1];
comp_in = exp(-j*2*pi*f*(1/Fs2-1/Fs1)*n);
% decimation
c1 = c1_in(1:D:end);
c2 = c2_in(1:D:end);
comp = comp_in(1:D:end);
pdout = c1.*conj(c2);
result = pdout.*comp;
% determine phase_diff (in degrees)
phase_diff = rad2deg(angle(result));
Below shows the result for the copies of the input signal at frequency $$f$$ sampled by $$f_{s1}$$ as sig1 and $$f_{s2}$$ as sig2, for the case of zero phase difference between them. The real part of the complex conjugate product pdout is the bold red sinusoid, and we note that it has zero phase offset.
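As a cross-check of the demonstration, here is a Python port (an assumption; the original script is MATLAB) covering the undersampled case f = 400 MHz with the decimate-by-13 front end. The perfect Hilbert transform is emulated directly with complex exponentials, as in the MATLAB script:

```python
import numpy as np

D = 13
fs1, fs2 = 157e6 * D, 121e6 * D   # pre-decimation sampling rates
f = 400e6                          # undersampled relative to fs/D
theta = np.deg2rad(45.0)           # true phase difference (illustrative)
nsamp = 10000
n = np.arange(nsamp)

phi1 = 2 * np.pi * f * n / fs1
phi2 = 2 * np.pi * f * n / fs2 + theta

# emulation of a perfect Hilbert Transform for each tone
c1_in = np.exp(1j * phi1)
c2_in = np.exp(1j * phi2)

# expected phase slope 2*pi*f*(T2 - T1)*n to remove
comp_in = np.exp(-1j * 2 * np.pi * f * (1 / fs2 - 1 / fs1) * n)

# decimation by D
c1, c2, comp = c1_in[::D], c2_in[::D], comp_in[::D]

result = c2 * np.conj(c1) * comp   # constant e^{j theta}
print(np.rad2deg(np.angle(np.mean(result))))   # approximately 45.0
```

Decimation does not disturb the cancellation, because both the measured and predicted slope terms are evaluated on the same retained sample indices.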
http://zbmath.org/?q=an:0968.93015

# zbMATH — the first resource for mathematics
Real analytic geometry and local observability. (English) Zbl 0968.93015
Ferreyra, G. (ed.) et al., Differential geometry and control. Proceedings of the Summer Research Institute, Boulder, CO, USA, June 29-July 19, 1997. Providence, RI: American Mathematical Society. Proc. Symp. Pure Math. 64, 65-72 (1999).
From the author’s abstract: “The language of real analytic geometry is used to give necessary and sufficient conditions of stable local observability of analytic systems...Globalization of this result yields a new characterization of local observability at large which uses the concept of sheaf.”
The notion of indistinguishability and Theorem 3.1, giving necessary and sufficient conditions for this property, are the basic material of the paper. Theorem 3.1 is taken from a paper of other authors [R. Hermann and A. Krener, Nonlinear controllability and observability, IEEE Trans. Autom. Control AC-22, 728-740 (1977; Zbl 0396.93015)] without proof. The definition of an algebra $\mathcal{H}$ of real analytic functions, by means of which the conditions above are expressed, makes the reviewer have his doubts about the correctness of the theorem: the definition of indistinguishability also contains some conditions concerning the control parameter; conclusions resulting from the corresponding partial derivatives seem to have been forgotten.
In the fourth chapter (“Globalization”) the author “uses the concept of sheaf” in the following way: “$\mathcal{H}$ is identified with the set of sections of the sheaf $x \to \mathcal{H}_x$...” Obviously he has not become aware of the problem of completeness of the presheaf $\mathcal{H}$.
##### MSC:
93B07 Observability 93B27 Geometric methods in systems theory 14P15 Real analytic and semianalytic sets 93B29 Differential-geometric methods in systems theory (MSC2000) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.771913468837738, "perplexity": 3915.58790110938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011184056/warc/CC-MAIN-20140305091944-00065-ip-10-183-142-35.ec2.internal.warc.gz"} |
http://www.lsv.fr/Publis/authors/gastin_bib.html

@phdthesis{Gas87,
author = {Gastin, Paul},
title = {Un mod{\`e}le distribu{\'e}},
school = {Laboratoire d'Informatique Th{\'e}orique et
Programmation, Paris, France},
type = {Th{\`e}se de doctorat},
year = 1987
}
@article{Gas90tcs,
publisher = {Elsevier Science Publishers},
journal = {Theoretical Computer Science},
author = {Gastin, Paul},
title = {Un mod{\`e}le asynchrone pour les syst{\`e}mes distribu{\'e}s},
volume = 74,
number = 1,
year = 1990,
pages = {121-161},
abstract = { Many researches are engaged in the field of distributed systems
modelization. We present a new model inspired by the
language theory. Its particularity within this theory is the
rejection of interleaving for the representation of
concurrency. Here, the main ideas are born from languages as
CSP or~ESTELLE. The point is mainly the real lack of
dependency between processes apart from synchronisations,
which are rendez-vous. Once the model defined, we introduce,
as in the infinitary free monoid, notions such as length,
concatenation, prefix order, upper bound and infinite
product. So the distributed model is provided with all
operations one needs to develop semantics. Then, we prove
that the finitary part of the distributed model is
isomorphic to the free partially commutative monoid~(fpcm).
Finally, we settle the bases of an infinitary extension of
the fpcm.}
}
@inproceedings{BGV90,
year = 1991,
volume = 486,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {van Leeuwen, Jan and Santoro, Nicola},
acronym = {{WDAG}'90},
booktitle = {{P}roceedings of the 4th {I}nternational {W}orkshop
on {D}istributed {A}lgorithms ({WDAG}'90)},
author = {Beauquier, Joffroy and Gastin, Paul and Villain, Vincent},
title = {A linear fault-tolerant naming algorithm},
pages = {57-70},
doi = {10.1007/3-540-54099-7_5},
abstract = {We solve the naming problem (how to give a unique identifier to
each site of an unknown network), when some sites are
supposed to have a faulty behaviour of fail-stop type. The
solution uses several tokens, in order to ensure that,
despite crash failure of some sites, at least one token will
perform a complete traversal of the network. The
complexities in time and in number of messages of this
algorithm are linear with respect to the size of the network
(number of communication lines), which improves the
exponential solution already known in the Byzantine case
with some special assumptions.}
}
@inproceedings{Gas90epit,
address = {La Roche Posay, France},
month = apr,
year = 1990,
volume = 469,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Guessarian, Ir{\`e}ne},
booktitle = {Semantics of Systems of Concurrent Processes~---
{P}roceedings of the {LITP} {S}pring {S}chool on
{T}heoretical {C}omputer {S}cience},
author = {Gastin, Paul},
title = {Infinite Traces},
pages = {277-308},
abstract = {Trace languages are used in computer science to provide a
description of the behaviours of concurrent systems. If we
are interested in systems which never stop then we have to
consider languages of infinite traces. In this paper, we
generalize to infinite traces three well known points of
view about finite traces: equivalence class of words,
projections on the dependence cliques and dependence graphs.
These approaches are complementary and, depending on the
problem we deal with, each of them can prove to be more
appropriate than the others. In this way, we obtain an
infinitary trace monoid and extend Levi's lemma and the
Foata normal form. Next, we prove that the infinitary trace
monoid is a completely coherent PoSet. We also define an
ultrametric distance and prove that it is a complete metric
space. Therefore, either the PoSet or the topological
framework can be used to solve fix-point equations and then
to provide semantics of recursive constructs. Finally, we
introduce recognizable languages of finite and infinite
traces. We prove that they are characterized by a syntactic
congruence and that the family of recognizable languages is
closed by concatenation and by the Boolean operations:
union, intersection and complement.}
}
@inproceedings{PG-AP-WZ-icalp91,
month = jul,
year = 1991,
volume = 510,
series = {Lecture Notes in Computer Science},
publisher = {Springer-Verlag},
editor = {Leach Albert, Javier and Monien, Burkhard and
Rodriguez-Artalejo, Mario},
acronym = {{ICALP}'91},
booktitle = {{P}roceedings of the 18th {I}nternational
{C}olloquium on {A}utomata, {L}anguages and
{P}rogramming
({ICALP}'91)},
author = {Gastin, Paul and Petit, Antoine and
Zielonka, Wieslaw},
title = {A {K}leene Theorem for Infinite Trace Languages},
pages = {254-266},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/TCS94gpz.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/TCS94gpz.ps},
abstract = {Kleene's theorem is considered as one of the cornerstones of
theoretical computer science. It ensures that, for languages
of finite words, the family of recognizable languages is
equal to the family of rational languages. It has been
generalized in various ways, for instance, to formal power
series by Sch{\"u}tzenberger, to infinite words by B{\"u}chi and to
finite traces by Ochma{\'n}ski. Finite traces have been
introduced by Mazurkiewicz in order to modelize the
behaviours of distributed systems. The family of
recognizable trace languages is not closed by Kleene's star
but by a concurrent version of this iteration. This leads to
the natural definition of co-rational languages obtained as
the rational one by simply replacing the Kleene's iteration
by the concurrent iteration. Cori, Perrin and M{\'e}tivier
proved, in substance, that any co-rational trace language is
recognizable. Independently, Ochma{\'n}ski generalized Kleene's
theorem showing that the recognizable trace languages are
exactly the co-rational languages. Besides, infinite traces
have been recently introduced as a natural extension of both
finite traces and infinite words. In this paper we
generalize Kleene's theorem to languages of infinite traces
proving that the recognizable languages of finite or
infinite traces are exactly the co-rational languages.}
}
@inproceedings{VD-PG-AP-mfcs91,
month = sep,
year = 1991,
volume = 520,
series = {Lecture Notes in Computer Science},
publisher = {Springer-Verlag},
editor = {Tarlecki, Andrzej},
acronym = {{MFCS}'91},
booktitle = {{P}roceedings of the 16th
{I}nternational {S}ymposium on
{M}athematical {F}oundations of
{C}omputer {S}cience
({MFCS}'91)},
author = {Diekert, Volker and Gastin, Paul and
Petit, Antoine},
title = {Recognizable Complex Trace Languages},
pages = {131-140},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/IC95dgp.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/IC95dgp.ps},
abstract = {A.~Mazurkiewicz defined traces in order to modelize
non-sequential processes. Complex traces have been recently
introduced as a generalization of both traces and infinite
words. This paper studies the family of recognizable complex
trace languages. It is proved that this family is closed
under boolean operations, concatenation, left and right
quotients. Then sufficient conditions ensuring the
recognizability of the finite and infinite iterations of a
recognizable complex trace language are given. The notion of
co-iteration is defined and the Kleene-Ochma{\'n}ski theorem is
generalized to complex traces.}
}
@inproceedings{Gas91,
month = feb,
year = 1991,
volume = 480,
series = {Lecture Notes in Computer Science},
publisher = {Springer-Verlag},
editor = {Choffrut, {\relax Ch}ristian and Jantzen, Matthias},
acronym = {{STACS}'91},
booktitle = {{P}roceedings of the 8th {A}nnual
{S}ymposium on {T}heoretical {A}spects of
{C}omputer {S}cience
({STACS}'91)},
author = {Gastin, Paul},
title = {Recognizable and rational languages of finite and
infinite traces},
pages = {89-104},
abstract = {Trace languages are used in computer science to provide a
description of the behaviours of concurrent systems. If we
are interested in systems which never stop then we have to
consider languages of infinite traces. In this paper, we
introduce and study recognizable and rational languages of
finite and infinite traces. We characterize recognizable
languages by means of a syntactic congruence. We prove that
the family of recognizable languages is strictly included in
the family of rational languages. Next, we study the closure
properties of the family of recognizable languages. We prove
that this family is closed under the Boolean operations and
under concatenation. Contrary to the (finite) iteration, the
infinite iteration of a finite trace is proved to be
recognizable. We conclude this paper with some open
problems.}
}
@inproceedings{PG-AP-APN-92,
year = 1992,
volume = 609,
series = {Lecture Notes in Computer Science},
publisher = {Springer-Verlag},
editor = {Rozenberg, Grzegorz},
booktitle = {{A}dvances in {P}etri {N}ets 1992:
{T}he {DEMON} {P}roject},
author = {Gastin, Paul and Petit, Antoine},
title = {A Survey of Recognizable Languages of Infinite
Traces},
pages = {392-409},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/APN92gp.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/APN92gp.ps},
abstract = {A.~Mazurkiewicz defined traces in order to represent
non-sequential processes. In order to describe
non-sequential processes which never terminate, \emph{e.g.}
distributed operating systems, the notion of infinite traces
is needed. The aim of this survey is to present in a uniform
way the known results on recognizable infinite trace
languages. The proofs of the presented results are not
proposed here but can be found in the original papers.}
}
@inproceedings{PG-AP-icalp92,
month = jul,
year = 1992,
volume = 623,
series = {Lecture Notes in Computer Science},
publisher = {Springer-Verlag},
editor = {Kuich, Werner},
acronym = {{ICALP}'92},
booktitle = {{P}roceedings of the 19th {I}nternational
{C}olloquium on {A}utomata, {L}anguages and
{P}rogramming
({ICALP}'92)},
author = {Gastin, Paul and Petit, Antoine},
title = {Asynchronous Cellular Automata for Infinite
Traces},
pages = {583-594},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Icalp92gp.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Icalp92gp.ps},
abstract = {A.~Mazurkiewicz introduced the monoid of traces to provide a
very natural semantics for concurrent processes. In a
general monoid, the behaviours of finite state systems are
described by recognizable languages, which form hence a
basic family. For finite traces and finite or infinite
words, there exist several equivalent characterizations of
this family as for instance, saturating morphisms or
(co-)rational expressions. For infinite traces, this family
has been introduced by means of saturating morphisms and
characterized by co-rational expressions but it suffers from
lack of finite state system characterizations. In this
paper, we remedy this deficiency providing a
characterization of the family of recognizable infinite
trace languages by means of asynchronous (cellular) automata
(which carry the most intuitive idea of finite state
concurrent machines). To this purpose, we give effective
constructions for co-rational operations on these automata
which are of independent interest.}
}
@inproceedings{PG-AP-mfcs92,
month = aug,
year = 1992,
volume = 629,
series = {Lecture Notes in Computer Science},
publisher = {Springer-Verlag},
editor = {Havel, Ivan M. and Koubek, V{\'a}clav},
acronym = {{MFCS}'92},
booktitle = {{P}roceedings of the 17th
{I}nternational {S}ymposium on
{M}athematical {F}oundations of
{C}omputer {S}cience
({MFCS}'92)},
author = {Gastin, Paul and Petit, Antoine},
title = {{P}o{S}et properties of complex traces},
pages = {255-263},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Mfcs92gp.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Mfcs92gp.ps},
abstract = {This paper investigates PoSet properties of the monoid~$$\mathbb{G}$$ of
infinite dependence graphs and of the monoid~$$\mathbb{C}$$ of complex
traces. We show that a subset of~$$\mathbb{G}$$ admits a least upper
bound if and only if this set is coherent and countable.
Hence, $$\mathbb{G}$$~is bounded complete. The compact and the prime
dependence graphs are characterized and we prove that each
dependence graph is the least upper bound of its compact
(resp. its prime) lower bounds. Therefore, up to the
restriction to countable sets, $$\mathbb{G}$$~is a coherently complete
Scott-Domain and is Prime Algebraic. We define very
naturally two orders on~$$\mathbb{C}$$ : the product order and the prefix
order. We show that $$\mathbb{C}$$ with each order is a coherently
complete CPO and we characterize the least upper bound (the
greatest lower bound resp.) of a subset of~$$\mathbb{C}$$ when it exists.
But contrary to the case of~$$\mathbb{G}$$, we prove that
$$\mathbb{C}$$~is not a
Scott-Domain in general.}
}
@phdthesis{Gas92,
author = {Gastin, Paul},
title = {M{\'e}moire d'habilitation {\`a} diriger des
recherches},
howpublished = {Technical Report LITP-92.97},
year = {1992},
type = {M{\'e}moire d'habilitation},
school = {Universit{\'e} Paris~6, Paris, France}
}
@article{GOPR92,
publisher = {Elsevier Science Publishers},
journal = {Information Processing Letters},
author = {Gastin, Paul and Ochma{\'n}ski, Edward and Petit, Antoine and Rozoy, Brigitte},
title = {Decidability of the star problem
in {{$$A^*$$}}{$$\times\{b\}^*$$}},
volume = 44,
number = {2},
year = 1992,
month = nov,
pages = {65-71},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/IPL92gopr.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/IPL92gopr.ps},
abstract = {We address in this paper the decidability of the Star Problem in
trace monoids: Let~{$$L$$} be a recognizable trace language,
is~{$$L^*$$}
recognizable? We prove that this problem is decidable when
the trace monoid is a direct product of free monoids
{$$A^*\times \{b\}^*$$}. This result shows, for the first time and contrary to
a possible intuition, that the Star Problem is of distinct
nature than the Recognizability Problem: Let~{$$L$$} be a rational
trace language, is~{$$L$$} recognizable?}
}
@article{GV93,
publisher = {World Scientific},
journal = {Parallel Processing Letters},
author = {Gastin, Paul and Villain, Vincent},
title = {An efficient crash-tolerant sequential traversal},
volume = 3,
year = 1993,
pages = {87-97},
abstract = {Fault-tolerance is an increasingly
important requirement for most
of the todays distributed
systems. However, the cost and
complexity of the design of such
systems are considerably higher
than those of non fault-tolerant
ones.\par
An interesting approach to simplify the
design of
fault-tolerant systems appears in~[TN88] :
some methods are developed, that
automatically convert benign failures
tolerant systems into systems overcoming
more severe failures. With these methods,
one can first solve the simpler problem of
designing a program which only tolerates
benign failures, and then convert it
automatically into one with a higher
degree of fault-tolerance. Therefore, the
design of benign failures tolerant systems
is a basic concern in this framework. \par
The aim of this paper is to make easier,
by means of a modular methodology, the
conception of algorithms overcoming the
most benign type of failures :
crash-failures.}
}
@article{GR93,
publisher = {Elsevier Science Publishers},
journal = {Theoretical Computer Science},
author = {Gastin, Paul and Rozoy, Brigitte},
title = {The poset of infinitary traces},
volume = 120,
number = 1,
year = 1993,
pages = {101-121},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/TCS93-gr.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/TCS93-gr.ps},
abstract = {\emph{Partially Commutative Monoids}, also called
\emph{Trace Monoids}, are among the most studied
formalisms which describe the behaviour of
distributed systems. In order to modelize
systems which never stop, we have to
consider an extension of traces, namely
\emph{Infinite traces}. Finite trace monoids are
strongly related to \emph{Partial Order Sets
(PoSets), Domains and Event Structures},
which are other models to describe the
behaviour of distributed systems. The aim
of this paper is to establish similar
connexions between infinite trace monoids,
PoSets and event structures. We prove that
the set of finite and infinite traces with
the prefix order is a \emph{Scott-domain} and a
\emph{coherently complete prime algebraic PoSet}.
Moreover, we establish a representation
theorem between the class of finite and
infinite trace PoSets and a subclass of
labelled prime event structures.}
}
@article{PG-AP-WZ-TCS-94,
publisher = {Elsevier Science Publishers},
journal = {Theoretical Computer Science},
author = {Gastin, Paul and Petit, Antoine and
Zielonka, Wieslaw},
title = {An Extension of {K}leene's and {O}chma{\'n}ski's
Theorems to Infinite Traces},
volume = {125},
number = {2},
pages = {167-204},
year = {1994},
month = mar,
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/TCS94gpz.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/TCS94gpz.ps},
abstract = {As was noted by
Mazurkiewicz, traces constitute a
convenient tool for describing finite
behaviour of concurrent systems.
Extending in a natural way
Mazurkiewicz's original definition,
infinite traces have been recently
introduced enabling to deal with
infinite behaviour of non-terminating
concurrent systems. In this paper we
examine the basic families of
recognizable sets and of rational sets
of infinite traces. The seminal Kleene
characterization of recognizable
subsets of the free monoid and its
subsequent extensions to infinite words
due to B{\"u}chi and to finite traces due
to Ochma{\'n}ski are the cornestones of the
corresponding theories. The main result
of our paper is an extension of these
characterizations to the domain of
infinite traces. Using recognizing and
weakly recognizing morphisms, as well
as a generalization of the
Sch{\"u}tzenberger product of monoids, we
prove various closure properties of
recognizable trace languages. Moreover,
we establish normal form
representations for recognizable and
rational sets of infinite traces.}
}
@incollection{PG-AP-TB-95,
author = {Gastin, Paul and Petit, Antoine},
title = {Infinite traces},
editor = {Diekert, Volker and Rozenberg, Grzegorz},
booktitle = {The Book of Traces},
chapter = {11},
type = {chapter},
pages = {393-486},
year = {1995},
publisher = {World Scientific},
nopsgz = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PSGZ/InfiniteTraces.ps.gz},
nops = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/InfiniteTraces.ps}
}
@article{VD-PG-AP-IC-95,
publisher = {Elsevier Science Publishers},
journal = {Information and Computation},
author = {Diekert, Volker and Gastin, Paul and
Petit, Antoine},
title = {Rational and Recognizable Complex Trace Languages},
volume = {116},
number = {1},
pages = {134-153},
year = {1995},
month = jan,
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/IC95dgp.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/IC95dgp.ps},
abstract = {Mazurkiewicz defined traces
as an algebraic model of finite concurrent
processes. In order to modelize
non-terminating processes a good notion of
infinite trace was needed, which finally
led to the notion of complex trace. For
complex traces an associative
concatenation and omega-iteration are
defined. This paper defines and
investigates rational and recognizable
complex trace languages. We prove various
closure results such as the closure under
boolean operations (for recognizable
languages), concatenation, and left and
right quotients by recognizable sets. Then
we study sufficient conditions ensuring
the recognizability of the finite and
infinite iterations of complex trace
languages. We introduce a generalization
of the notion of concurrent iteration
which leads to the main result of the
paper: the generalization of Kleene's and
Ochma{\'n}ski's theorems to complex trace
languages.}
}
@article{GR95,
publisher = {Elsevier Science Publishers},
journal = {Theoretical Computer Science},
author = {Gastin, Paul and Rutten, Jan},
editor = {Gastin, Paul and Rutten, Jan},
title = {Selected papers of the workshop on \emph{Topology and
Completion in
Semantics}, Chartres, France, November 1993},
year = 1995,
volume = 151,
number = 1,
month = nov
}
@inproceedings{BG95,
month = aug,
year = 1995,
volume = 969,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Wiedermann, Jir{\'\i} and H{\'a}jek, Petr},
acronym = {{MFCS}'95},
booktitle = {{P}roceedings of the 20th
{I}nternational {S}ymposium on
{M}athematical {F}oundations of
{C}omputer {S}cience
({MFCS}'95)},
author = {Bauget, Serge and Gastin, Paul},
title = {On congruences and partial orders},
pages = {434-443},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Mfcs95bg.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Mfcs95bg.ps},
abstract = {Mazurkiewicz trace theory is not powerful enough to describe
concurrency paradigms as, for instance, the {"}Producer\slash
Consumer{"}. We propose in this paper a generalization of
Mazurkiewicz trace monoids which allows one to model such
problems. We consider quotients of the free monoids by
congruences which preserve the commutative images of words.
An equivalence class in the quotient monoid consists of all
the sequential observations of a distributed computation. In
order to characterize congruences which do model
concurrency, we study the relationship of this approach and
the classical representation of distributed computations
with partial orders. We show that the only congruences for
which the classes can be represented by partial orders and
for which the concatenation transfers modularly to partial
orders are congruences generated by commutations, that is
trace congruences. We prove necessary conditions and
sufficient conditions on congruences so that their classes
can be represented by partial orders. In particular, an
important sufficient condition covers both trace congruences
and the {"}Producer\slash Consumer{"} congruence.}
}
@inproceedings{DG95,
month = jul,
year = 1995,
volume = 944,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {F{\"u}l{\"o}p, Zolt{\'a}n and G{\'e}cseg, Ferenc},
acronym = {{ICALP}'95},
booktitle = {{P}roceedings of the 22nd {I}nternational
{C}olloquium on {A}utomata, {L}anguages and
{P}rogramming
({ICALP}'95)},
author = {Diekert, Volker and Gastin, Paul},
title = {A domain for concurrent termination:
A generalization of {M}azurkiewicz traces},
pages = {15-26},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Icalp95dg.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Icalp95dg.ps},
abstract = {This paper generalizes the concept of Mazurkiewicz traces to a
fuzzy description of a concurrent process, where a known
prefix is given in a first component and a second alphabetic
component yields necessary information about future actions.
This allows us to define a good semantic domain where the
concatenation is continuous with respect to the Scott- and
to the Lawson topology. For this, we define the notion of
alpha-trace and of delta-trace. We show various mathematical
results, thereby proving the soundness of our approach. Our
theory is a proper generalization of the theory of finite
and infinite words (with explicit termination) and of the
theory of finite and infinite (real and complex) traces. We
make use of trace theory, domain theory, and topology.}
}
@inproceedings{BB-PG-AP-stacs96,
month = feb,
year = 1996,
volume = 1046,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Puech, Claude and Reischuk, R{\"u}diger},
acronym = {{STACS}'96},
booktitle = {{P}roceedings of the 13th {A}nnual
{S}ymposium on {T}heoretical {A}spects of
{C}omputer {S}cience
({STACS}'96)},
author = {B{\'e}rard, B{\'e}atrice and Gastin, Paul and
Petit, Antoine},
title = {On the Power of Non-Observable Actions in Timed
Automata},
pages = {257-268},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/BGP-stacs96.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/BGP-stacs96.ps},
abstract = {Timed finite automata, introduced by Alur and Dill, are one of
the most widely studied models for real-time systems. We
focus in this paper on the power of silent transitions, \emph{i.e.}
$\epsilon$-transitions, in such automata. We show that
$\epsilon$-transitions strictly increase the power of timed
automata and that the class of timed languages recognized by
automata with $\epsilon$-transitions is much more robust than
the corresponding class without $\epsilon$-transitions. Our main
result shows that $\epsilon$-transitions increase the power of
these automata only if they reset clocks.}
}
@inproceedings{VD-PG-AP-dlt96,
year = 1996,
publisher = {World Scientific},
editor = {Dassow, J{\"u}rgen and Rozenberg, Grzegorz and
Salomaa, Arto},
acronym = {{DLT}'95},
booktitle = {{P}roceedings of the 2nd {I}nternational
{C}onference on {D}evelopments in {L}anguage {T}heory
({DLT}'95)},
author = {Diekert, Volker and Gastin, Paul and
Petit, Antoine},
title = {Recent Developments in Trace Theory},
pages = {373-385},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/DGP-dlt95.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/DGP-dlt95.ps},
abstract = {In this paper we give a survey on some active research in the
theory of Mazurkiewicz traces. We restrict our attention to
some few topics: recognizable languages, asynchronous
automata including infinite traces, trace codings, and trace
rewriting.}
}
@inproceedings{DrGa96,
month = aug,
year = 1996,
volume = 1119,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Montanari, Ugo and Sassone, Vladimiro},
acronym = {{CONCUR}'96},
booktitle = {{P}roceedings of the 7th
{I}nternational {C}onference on
{C}oncurrency {T}heory
({CONCUR}'96)},
author = {Droste, Manfred and Gastin, Paul},
title = {Asynchronous cellular automata for Pomsets without
auto-concurrency},
pages = {627-638},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Concur96dg.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Concur96dg.ps},
abstract = {This paper extends to pomsets without auto-concurrency the
fundamental notion of asynchronous cellular automata (ACA)
which was originally introduced for traces by~Zielonka. We
generalize to pomsets the notion of asynchronous mapping
introduced by~Zielonka and we show how to construct a
deterministic ACA from an asynchronous mapping. Our main
result generalizes B{\"u}chi's theorem for a class of pomsets
without auto-concurrency which satisfy a natural axiom. This
axiom ensures that an asynchronous cellular automaton works
on the pomset as a concurrent read and exclusive owner write
machine. More precisely, we prove the equivalence between
non-deterministic ACA, deterministic ACA and monadic second
order logic for this class of pomsets.}
}
@inproceedings{VD-PG-AP-stacs97,
month = feb,
year = 1997,
volume = 1200,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Reischuk, R{\"u}diger and Morvan, Michel},
acronym = {{STACS}'97},
booktitle = {{P}roceedings of the 14th {A}nnual
{S}ymposium on {T}heoretical {A}spects of
{C}omputer {S}cience
({STACS}'97)},
author = {Diekert, Volker and Gastin, Paul and Petit, Antoine},
title = {Removing {{$\epsilon$}}-Transitions in Timed Automata},
pages = {583-594},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/DGP-stacs97.pdf},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/DGP-stacs97.ps},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DGP-stacs97.pdf},
abstract = {Timed automata are among the most widely studied models for
real-time systems. Silent transitions, \emph{i.e.},
$\epsilon$-transitions, have already been proposed in the
original paper on timed automata by Alur and Dill. B{\'e}rard,
Gastin and Petit have shown that $\epsilon$-transitions
can be removed, if they do not reset clocks; moreover
$\epsilon$-transitions strictly increase the power of timed
automata, if there is a self-loop containing
$\epsilon$-transitions which reset some clocks. This paper left
open the problem about the power of the $\epsilon$-transitions
which reset clocks, if they do not lie on any cycle.\par
The present paper settles this open question. Precisely, we
prove that a timed automaton such that no $\epsilon$-transition
with nonempty reset set lies on any directed cycle can be
effectively transformed into a timed automaton without
$\epsilon$-transitions. Interestingly, this main result holds
under the assumption of non-Zenoness and it is false
otherwise.\par
Besides, we develop a promising new technique based on a
notion of precise time which allows us to show that some timed
languages are not recognizable by any $\epsilon$-free timed
automaton.}
}
@inproceedings{DrGa97,
month = jul,
year = 1997,
volume = 1256,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Degano, Pierpaolo and Gorrieri, Roberto and Marchetti-Spaccamela, Alberto},
acronym = {{ICALP}'97},
booktitle = {{P}roceedings of the 24th {I}nternational
{C}olloquium on {A}utomata, {L}anguages and
{P}rogramming
({ICALP}'97)},
author = {Droste, Manfred and Gastin, Paul},
title = {On recognizable and rational formal power series
in partially commuting variables},
pages = {682-692},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/Icalp97dg.pdf},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Icalp97dg.ps},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/Icalp97dg.pdf},
abstract = {Kleene's theorem on the coincidence of regular and rational
languages in free monoids has been generalized by
Sch{\"u}tzenberger to a description of the recognizable formal
power series in non-commuting variables over arbitrary
semi-rings, and by Ochma{\'n}ski to a characterization of the
recognizable languages in trace monoids. Here we will
describe the recognizable formal power series over arbitrary
semi-rings and in partially commuting variables, \emph{i.e.} over
trace monoids. We prove that the recognizable series are
certain rational power series, which can be constructed from
the polynomials by using the operations sum, product and a
restricted star which is applied only to series for which
the elements in the support all have the same connected
alphabet. The converse is true if the underlying semi-ring
is commutative. It is shown that these assumptions are
necessary. This provides a joint generalization of both
Sch{\"u}tzenberger's and Ochma\'nski's theorems. }
}
@techreport{DG97j,
author = {Droste, Manfred and Gastin, Paul},
title = {Asynchronous cellular automata and
logic for pomsets without auto-concurrency},
institution = {Laboratoire d'Informatique Algorithmique, Fondements et
Applications, Paris, France},
type = {Technical Report},
number = {97.24},
year = 1997,
url = {http://www.lsv.fr/Publis/PAPERS/PDF/ACAj97dg.pdf},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/ACAj97dg.ps},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/ACAj97dg.pdf},
abstract = {This paper extends to pomsets without auto-concurrency the
fundamental notion of asynchronous cellular automata~(ACA)
which was originally introduced for traces by Zielonka. We
generalize to pomsets the notion of asynchronous mapping
introduced by Zielonka and we show how to construct a
deterministic ACA from an asynchronous mapping. Our main
result generalizes B{\"u}chi's theorem for finite words to a
class of pomsets without auto-concurrency which satisfy a
natural axiom. This axiom ensures that an asynchronous
cellular automaton works on the pomset as a concurrent read
and exclusive owner write machine. More precisely, we prove
the equivalence between non-deterministic ACA, deterministic
ACA and monadic second order logic for this class of
pomsets.}
}
@article{BB-VD-PG-AP-98,
publisher = {{IOS} Press},
journal = {Fundamenta Informaticae},
author = {B{\'e}rard, B{\'e}atrice and Diekert, Volker and
Gastin, Paul and Petit, Antoine},
title = {Characterization of the Expressive Power of Silent
Transitions in Timed Automata},
volume = {36},
number = {2},
pages = {145-182},
year = {1998},
month = nov,
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/BDGP-FUNDI98.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/BDGP-FUNDI98.ps},
abstract = {Timed automata are among the
most widely studied models for real-time
systems. Silent transitions (or
$\epsilon$-transitions) have already been
proposed in the original paper on timed
automata by Alur and~Dill. We show that the
class of timed languages recognized by
automata with $\epsilon$-transitions is more
robust and more expressive than the
corresponding class without
$\epsilon$-transitions. \par
We then focus on $\epsilon$-transitions which do
not reset clocks. We propose an algorithm to
construct, given a timed automaton, an
equivalent one without such transitions. This
algorithm is in two steps: it first suppresses
the cycles of $\epsilon$-transitions without
reset and then the remaining ones.\par
Then, we prove that a timed automaton such
that no $\epsilon$-transition which resets clocks
lies on any directed cycle, can be effectively
transformed into a timed automaton without
$\epsilon$-transitions. Interestingly, this main
result holds under the assumption of
non-Zenoness and it is false otherwise.\par
To complete the picture, we exhibit a simple
timed automaton with an $\epsilon$-transition,
which resets some clock, on a cycle and which
is not equivalent to any $\epsilon$-free timed
automaton. To show this, we develop a
promising new technique based on the notion of
precise action.}
}
@book{LA-PG-BP-AP-NP-PW-livre98,
author = {Albert, Luc and Gastin, Paul and
Petazzoni, Bruno and Petit, Antoine
and Puech, Nicolas and Weil, Pascal},
title = {Cours et exercices d'informatique, Classes
pr{\'e}paratoires, premier et second cycles
universitaires},
year = {1998},
month = jun,
publisher = {Vuibert},
isbn = {2-7117-8621-8},
lsv-lang = {FR}
}
@inproceedings{PG-RM-AP-mfcs98,
month = aug,
year = 1998,
volume = 1450,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Brim, Lubos and Gruska, Jozef and Zlatuska, Jir{\'\i}},
acronym = {{MFCS}'98},
booktitle = {{P}roceedings of the 23rd
{I}nternational {S}ymposium on
{M}athematical {F}oundations of
{C}omputer {S}cience
({MFCS}'98)},
author = {Gastin, Paul and Meyer, Rapha{\"e}l and
Petit, Antoine},
title = {A (non-elementary) modular decision procedure for
{LTrL}},
pages = {356-365},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/GMP-mfcs98.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/GMP-mfcs98.ps},
abstract = {Thiagarajan and Walukiewicz have defined a
temporal logic~LTrL on Mazurkiewicz
traces, patterned on the famous
propositional temporal logic of linear
time~LTL defined by Pnueli. They have
shown that this logic is equal in
expressive power to the first order theory
of finite and infinite traces.\par
The hopes to get an {"}easy{"} decision
procedure for~LTrL, as it is the case
for~LTL, vanished very recently due to a
result of Walukiewicz who showed that the
decision procedure for~LTrL is
non-elementary. However, tools like Mona
or Mosel show that it is possible to
handle non-elementary logics on
significant examples. Therefore, it
appears worthwhile to have a direct
decision procedure for LTrL.\par
In this paper we propose such a decision
procedure, in a modular way. Since the
logic~LTrL is not pure future, our
algorithm constructs by induction a finite
family of B{\"u}chi automata for each
LTrL-formula. As expected by the results
of Walukiewicz, the main difficulty comes
from the {"}Until{"} operator.}
}
@article{DiGa98,
publisher = {Springer},
journal = {Acta Informatica},
author = {Diekert, Volker and Gastin, Paul},
title = {Approximating traces},
volume = 35,
number = 7,
year = 1998,
pages = {567-593},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Alphatracesdg.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Alphatracesdg.ps},
abstract = {In order to give a semantics to
concurrent processes, one needs a powerful
model which enjoys many important mathematical
properties. We generalize Mazurkiewicz
(infinite) traces by adding an information
concerning the possible continuations of a
process. This allows us to define an
approximation order and a composition. We
obtain a prime algebraic coherently complete
domain where the compacts are exactly the
finite approximations of actual processes. The
composition is shown to be monotone and
$\sqcup$-continuous. We define a suitable
metric which induces the Lawson topology and
yields a complete and compact metric space.
The finite approximations of processes form a
dense subset and the composition is uniformly
continuous.}
}
@article{DrGa99j,
publisher = {Elsevier Science Publishers},
journal = {Information and Computation},
author = {Droste, Manfred and Gastin, Paul},
title = {The {K}leene-{S}ch{\"u}tzenberger theorem for formal power series
in partially commuting variables},
volume = 153,
number = 1,
year = 1999,
month = aug,
pages = {47-80},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/IC99dg.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/IC99dg.ps},
abstract = {Kleene's theorem on the
coincidence of regular and rational
languages in free monoids has been
generalized by Sch{\"u}tzenberger to a
description of the recognizable formal power
series in non-commuting variables over
arbitrary semi-rings, and by Ochma{\'n}ski to a
characterization of the recognizable
languages in trace monoids.\par
We will describe the recognizable formal
power series over arbitrary semirings and in
partially commuting variables, \emph{i.e.} over
trace monoids. We prove that the
recognizable series are certain rational
power series, which can be constructed from
the polynomials by using the operations sum,
product and a restricted star which is
applied only to series for which the
elements in the support all have the same
connected alphabet. The converse is true if
the underlying semiring is commutative.\par
Moreover, if in addition the semiring is
idempotent then the same result holds with a
star restricted to series for which the
elements in the support have connected
(possibly different) alphabets. It is shown
that these assumptions over the semiring are
necessary. This provides a joint
generalization of Kleene's, Sch{\"u}tzenberger's
and Ochma{\'n}ski's theorems.}
}
@inproceedings{GaMi99,
month = sep,
year = 1999,
volume = 1683,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Flum, J{\"o}rg and Rodr{\'\i}guez-Artalejo, Mario},
acronym = {{CSL}'99},
booktitle = {{S}elected {P}apers from the 13th {I}nternational
{W}orkshop on {C}omputer {S}cience {L}ogic
({CSL}'99)},
author = {Gastin, Paul and Mislove, Michael W.},
title = {A truly concurrent semantics for a
simple parallel programming language},
pages = {515-529},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Csl99gm.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Csl99gm.ps},
abstract = {This paper represents the
beginning of a study aimed at devising
semantic models for true concurrency that
provide clear distinctions between
concurrency, parallelism and choice. We
present a simple programming language which
includes (weakly) sequential composition,
asynchronous and synchronous parallel
composition, a restriction operator, and
that supports recursion. We develop an
operational and a denotational semantics for
this language, and we obtain a congruence
theorem relating the behavior of a process
as described by the transition system to the
meaning of the process in the denotational
model. This implies that the denotational
model is adequate with respect to the
operational model. Our denotational model is
based on the resource traces of Gastin and
Teodesiu, and since a single resource trace
represents all possible executions of a
concurrent process, we are able to model
each term of our concurrent language by a
single trace. Therefore we obtain a
deterministic semantics for our language and
we are able to model parallelism without
introducing nondeterminism.}
}
@inproceedings{DiGa99,
month = sep,
year = 1999,
volume = 1683,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Flum, J{\"o}rg and Rodr{\'\i}guez-Artalejo, Mario},
acronym = {{CSL}'99},
booktitle = {{S}elected {P}apers from the 13th {I}nternational
{W}orkshop on {C}omputer {S}cience {L}ogic
({CSL}'99)},
author = {Diekert, Volker and Gastin, Paul},
title = {An expressively complete temporal logic without past tense
operators for {M}azurkiewicz traces},
pages = {188-203},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Csl99dg.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Csl99dg.ps},
abstract = {Mazurkiewicz traces are a widely
accepted model of concurrent systems. We
introduce a linear time temporal logic which has
the same expressive power as the first order
theory of finite (infinite resp.) traces. The
main contribution of the paper is that we only
use future tense modalities in order to obtain
expressive completeness. Our proof is direct and
uses no reduction to words. As a formal
consequence Kamp's theorem for both finite and
infinite words becomes a corollary. This direct
approach became possible due to a new proof
technique of Wilke developed for the case of
finite words.}
}
@article{DrGaKu00,
publisher = {Elsevier Science Publishers},
journal = {Theoretical Computer Science},
author = {Droste, Manfred and Gastin, Paul and Kuske, Dietrich},
title = {Asynchronous cellular automata for pomsets},
volume = 247,
number = {1-2},
year = 2000,
month = sep,
pages = {1-38},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DGK-TCS00.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DGK-TCS00.pdf},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/DGK-TCS00.ps},
abstract = {This paper extends to pomsets without
auto-concurrency the fundamental notion of
asynchronous cellular automata (ACA) which was
originally introduced for traces by Zielonka. We
generalize to pomsets the notion of asynchronous
mapping introduced by Cori, M{\'e}tivier and Zielonka
and we show how to construct a deterministic ACA from
an asynchronous mapping. \par
Then we investigate the relation between the
expressiveness of monadic second order logic,
nondeterministic ACAs and deterministic ACAs. We can
generalize B{\"u}chi's theorem for finite words to a class
of pomsets without auto-concurrency which satisfy a
natural axiom. This axiom ensures that an asynchronous
cellular automaton works on the pomset as a concurrent
read and exclusive owner write machine. More
precisely, in this class non-deterministic ACAs,
deterministic ACAs and monadic second order logic have
the same expressive power. \par
Then we consider a class where deterministic ACAs are
strictly weaker than nondeterministic ones. But in
this class nondeterministic ACAs still capture monadic
second order logic. Finally it is shown that even this
equivalence does not hold in the class of all pomsets
since there the class of recognizable pomset languages
is not closed under complementation.}
}
@inproceedings{DiGa00,
month = jul,
year = 2000,
volume = 1853,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Montanari, Ugo and Rolim, Jos{\'e} D. P. and
Welzl, Emo},
acronym = {{ICALP} 2000},
booktitle = {{P}roceedings of the 27th {I}nternational
{C}olloquium on {A}utomata, {L}anguages and
{P}rogramming
({ICALP} 2000)},
author = {Diekert, Volker and Gastin, Paul},
title = {{LTL} is expressively complete for {M}azurkiewicz traces},
pages = {211-222},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Icalp00dg.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Icalp00dg.ps},
abstract = {A long standing open problem in the
theory of (Mazurkiewicz) traces has been the
question whether LTL (Linear Time Logic) is
expressively complete with respect to the first
order theory. We solve this problem positively for
finite and infinite traces and for the simplest
temporal logic, which is based only on next and
until modalities. Similar results were established
previously, but they were all weaker, since they
used additional past or future modalities. Another
feature of our work is that our proof is direct and
does not use any reduction to the word case. It is
based on an algebraic characterization of first
order trace languages, following a proof technique
introduced recently by Thomas Wilke in order to
prove the expressive completeness of LTL for
finite words.}
}
@inproceedings{DrGa00,
month = jun,
year = 2000,
publisher = {Springer},
editor = {Krob, Daniel and Mikhalev, Alexander A. and Mikhalev, Alexander V.},
acronym = {{FPSAC}'00},
booktitle = {{P}roceedings of the 12th {I}nternational {C}onference on
{F}ormal {P}ower {S}eries and {A}lgebraic
{C}ombinatorics ({FPSAC}'00)},
author = {Droste, Manfred and Gastin, Paul},
title = {On aperiodic and star-free formal power series
in partially commuting variables},
pages = {158-169},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Fpsac00dg.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Fpsac00dg.ps},
abstract = {Formal power series over
non-commuting variables have been
investigated as representations of the
behaviour of automata with multiplicities.
Here we introduce and investigate the
concepts of aperiodic and of star-free
formal power series over semirings and
partially commuting variables. We prove that
if the semiring~$K$ is idempotent and
commutative, or if $K$ is idempotent and the
variables are non-commuting, then the
product of any two aperiodic series is again
aperiodic. We also show that if $K$ is
idempotent and the matrix monoids over $K$
have a Burnside property (satisfied, \emph{e.g.} by
the tropical semiring), then the aperiodic
and the star-free series coincide. This
generalizes a classical result of
Sch{\"u}tzenberger~(1961) for aperiodic regular
languages and contains a result of Guaiana,
Restivo and Salemi~(1992) on aperiodic trace
languages.}
}
@inproceedings{DiGa01lpar,
month = dec,
year = 2001,
volume = 2250,
series = {Lecture Notes in Artificial Intelligence},
publisher = {Springer},
editor = {Nieuwenhuis, Robert and Voronkov, Andrei},
acronym = {{LPAR}'01},
booktitle = {{P}roceedings of the 8th {I}nternational
{C}onference on {L}ogic for {P}rogramming,
{A}rtificial {I}ntelligence, and {R}easoning
({LPAR}'01)},
author = {Diekert, Volker and Gastin, Paul},
title = {Local Temporal Logic is Expressively Complete
for Cograph Dependence Alphabets},
pages = {55-69},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Lpar01dg.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Lpar01dg.ps},
abstract = {Recently, local logics for
Mazurkiewicz traces are of increasing
interest. This is mainly due to the fact
that the satisfiability problem has the
same complexity as in the word case. If we
focus on a purely local interpretation of
formulae at vertices (or events) of a
trace, then the satisfiability problem of
linear temporal logics over traces turns
out to be PSPACE-complete. But now the
difficult problem is to obtain expressive
completeness results with respect to first
order logic. \par
The main result of the paper shows such an
expressive completeness result, if the
underlying dependence alphabet is a
cograph, \emph{i.e.}, if all traces are series-parallel
graphs. Moreover, we show that
this is the best we can expect in our
setting: If the dependence alphabet is not
a cograph, then we cannot express all
first order properties.}
}
@inproceedings{GaOd01,
month = jul,
year = 2001,
volume = 2102,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Berry, G{\'e}rard and Comon, Hubert and Finkel, Alain},
acronym = {{CAV}'01},
booktitle = {{P}roceedings of the 13th
{I}nternational {C}onference on
{C}omputer {A}ided {V}erification
({CAV}'01)},
author = {Gastin, Paul and Oddoux, Denis},
title = {Fast {LTL} to {B}{\"u}chi Automata Translation},
pages = {53-65},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Cav01go.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Cav01go.ps},
abstract = {We present an algorithm to generate
B{\"u}chi automata from LTL formulae. This algorithm
generates a very weak alternating automaton and
then transforms it into a B{\"u}chi automaton, using a
generalized B{\"u}chi automaton as an intermediate
step. Each automaton is simplified on-the-fly in
order to save memory and time. As usual we
simplify the LTL formula before any treatment. We
implemented this algorithm and compared it with
Spin: the experiments show that our algorithm is
much more efficient than Spin. The criteria of
comparison are the size of the resulting
automaton, the time of the computation and the
memory used. Our implementation is available on
the web at the following address:
\url{http://verif.liafa.jussieu.fr/ltl2ba}}
}
@inproceedings{DeGa01,
month = may,
year = 2001,
volume = 2057,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Dwyer, Matthew B.},
acronym = {{SPIN}'01},
booktitle = {{P}roceedings of the 8th {I}nternational
{SPIN} {W}orkshop on {M}odel {C}hecking {S}oftware
({SPIN}'01)},
author = {Derepas, Fabrice and Gastin, Paul},
title = {Model Checking Systems of Replicated Processes with {SPIN}},
pages = {235-251},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Spin01dg.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Spin01dg.ps},
abstract = {This paper describes a
reduction technique which is very useful
against the state explosion problem
which occurs when model checking
distributed systems with several
instances of the same process. Our
technique uses symmetry which appears in
the system. Exchanging those instances
is not as simple as it seems, because
there can be a lot of references to
process locations in the system. We
implemented a solution using the Spin
model checker, and added two keywords to
the Promela language to handle these new
concepts.}
}
@inproceedings{DeGaPl01,
month = mar,
year = 2001,
volume = 2021,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Oliveira, Jos{\'e} Nuno and Zave, Pamela},
acronym = {{FME}'01},
booktitle = {{F}ormal {M}ethods for {I}ncreasing {S}oftware
{P}roductivity~---
{P}roceedings of the {I}nternational {S}ymposium of {F}ormal
{M}ethods {E}urope ({FME}'01)},
author = {Derepas, Fabrice and Gastin, Paul and
Plainfoss{\'e}, David},
title = {Avoiding state explosion for distributed systems
with timestamps},
pages = {119-134},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Fme01dgp.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/Fme01dgp.ps},
abstract = {This paper describes a
reduction technique which is very useful
against the state explosion problem which
occurs when model checking many distributed
systems. Timestamps are often used to keep
track of the relative order of events. They
are usually implemented with very large
counters and therefore they generate state
explosion. The aim of this paper is to
present a very efficient reduction of the
state space generated by a model checker when
using timestamps. The basic idea is to map
the timestamps values to the smallest
possible range. This is done dynamically and
on-the-fly by adding to the model checker a
call to a reduction function after each newly
generated state. Our reduction works for
model checkers using explicit state
enumeration and does not require any change
in the model. Our method has been applied to
an industrial example and the reduction
obtained was spectacular.}
}
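The timestamp reduction sketched in the abstract above (dynamically mapping timestamp values to the smallest possible range after each newly generated state) can be illustrated as follows. This is a reconstruction under assumptions, not the paper's implementation; the helper name `normalize_timestamps` is hypothetical.

```python
def normalize_timestamps(timestamps):
    """Order-preserving remap of timestamp values to the range 0..k-1.

    Two states whose timestamps induce the same relative order of events
    collapse to the same normalized state, which shrinks the state space
    explored by an explicit-state model checker.
    """
    # Rank each distinct timestamp by its position in sorted order.
    rank = {t: i for i, t in enumerate(sorted(set(timestamps)))}
    return tuple(rank[t] for t in timestamps)

# States with large counters but identical event ordering coincide:
print(normalize_timestamps((100, 7, 100, 42)))  # (2, 0, 2, 1)
print(normalize_timestamps((9, 1, 9, 3)))       # (2, 0, 2, 1)
```

A model checker using explicit state enumeration would call such a function on every newly generated state before storing it, so the stored state space never distinguishes states that differ only in absolute timestamp values.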
@article{DG02JALC,
journal = {Journal of Automata, Languages and Combinatorics},
author = {Droste, Manfred and Gastin, Paul},
editor = {Droste, Manfred and Gastin, Paul},
title = {Selected papers of the workshop on \emph{Logic and Algebra in
Concurrency}, Dresden, Germany, September 2000},
volume = 7,
number = 2,
year = 2002
}
@incollection{DiGa02Roz,
missingmonth = mar,
missingnmonth = 3,
year = 2002,
volume = 2300,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Brauer, Wilfried and
Ehrig, Hartmut and
Karhum{\"a}ki, Juhani and
Salomaa, Arto},
acronym = {{F}ormal and {N}atural {C}omputing},
booktitle = {{F}ormal and {N}atural {C}omputing~---
{E}ssays {D}edicated to {G}rzegorz {R}ozenberg},
author = {Diekert, Volker and Gastin, Paul},
title = {Safety and Liveness Properties for Real Traces and
a Direct Translation from {LTL} to Monoids},
pages = {26-38},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/diegas01-3.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/diegas01-3.ps},
doi = {10.1007/3-540-45711-9_2},
abstract = { For infinite words there are
well-known characterizations of safety and liveness
properties. We extend these results to real
Mazurkiewicz traces. This is possible due to a
result, which has been established recently: Every
first-order definable real trace language is
definable in linear temporal logic using future
tense operators, only. We show that the canonical
choice for a topological characterization of safety
and liveness properties is given by the Scott
topology. In this paper we use an algebraic approach
where we work with aperiodic monoids. Therefore we
also give a direct translation from temporal logic
to aperiodic monoids which is of independent
interest.}
}
@article{DiGa02jcss,
publisher = {Elsevier Science Publishers},
journal = {Journal of Computer and System Sciences},
author = {Diekert, Volker and Gastin, Paul},
title = {{LTL} is expressively complete for {M}azurkiewicz traces},
volume = 64,
number = 2,
year = 2002,
month = mar,
pages = {396-418},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/JCSS02dg.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/JCSS02dg.ps},
abstract = {A long-standing open problem in the
theory of (Mazurkiewicz) traces has been the
question whether LTL (Linear Temporal Logic) is
expressively complete with respect to the first
order theory. We solve this problem positively for
finite and infinite traces and for the simplest
temporal logic, which is based only on next and
until modalities. Similar results were established
previously, but they were all weaker, since they
used additional past or future modalities. Another
feature of our work is that our proof is direct and
does not use any reduction to the word case.}
}
@article{GaMi02tcs,
publisher = {Elsevier Science Publishers},
journal = {Theoretical Computer Science},
author = {Gastin, Paul and Mislove, Michael W.},
title = {A truly concurrent semantics for a
process algebra using resource pomsets},
volume = 281,
number = {1-2},
year = 2002,
month = jun,
pages = {369-421},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/GM-TCS02.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/GM-TCS02.ps},
abstract = {In this paper we study a process
algebra whose semantics is based on true
concurrency. In our model, actions are defined in
terms of the resources they need to execute, which
allows a simple definition of a weak sequential
composition operator. This operator allows actions
which do not share any resources to execute
concurrently, while dependent actions have to occur
sequentially. This weak sequential composition
operation may be used to automatically parallelize a
sequential process. We add to this operator the
customary (strict) sequential composition and a
parallel composition allowing synchronization on
specified actions. Our language also supports a
hiding operator that allows the hiding of actions
and even of individual resources used by actions.
Strict sequential composition and hiding require
that we generalize from the realm of Mazurkiewicz
traces to that of pomsets, since these operations
introduce {"}over-synchronized{"} traces~--- ones for
which a pair of independent actions may occur
sequentially. Our language also supports recursion
and our semantics makes the unwinding of recursion
visible by the use of special resources used to
label unwindings. This was done on purpose in order
to enable the observation of divergence, but the
usual semantics that does not observe unwindings can
be obtained by using the hiding operator to abstract
away from these special resources. We give both an
SOS-style operational semantics for our language, as
well as a denotational semantics based on resource
pomsets. Generalizing results from our earlier work
in this area, we derive a congruence theorem for our
language which shows that the SOS-style operational
rules induce the same equivalence relation on the
language as the semantic map does. A corollary is
that our denotational model is both adequate and
fully abstract relative to the behavior function
defined from our operational semantics. This
behavior consists naturally of the strings of
actions the process can perform. This work continues
our study into modelling concurrency in the absence
of nondeterminism. In particular, our language is
deterministic.}
}
@article{GaTe02,
publisher = {Elsevier Science Publishers},
journal = {Theoretical Computer Science},
author = {Gastin, Paul and Teodosiu, Dan},
title = {Resource traces: A domain for processes sharing exclusive
resources},
volume = 278,
number = {1-2},
year = 2002,
month = may,
pages = {195-221},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/GT-TCS02.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/GT-TCS02.ps},
abstract = {The theory of finite and infinite words
with explicit termination is commonly used to give
denotational semantics for algebras of sequential
processes. In this paper we generalize this theory
to the case of partially terminated concurrent
processes synchronizing on a fixed set of shared
exclusive resources. \par
Our model is based on a set of shared exclusive
resources, and a resource map assigning to each
action the non-empty subset of resources it uses.
Actions that do not share common resources are
declared independent. The specification we use for
partially terminated processes (resource traces) is
similar to failure semantics. It consists of two
components: an already observed part represented as
an action-labeled event structure (Mazurkiewicz
trace), and a guard set containing the resources
granted to the process for its further development.
A process concatenation is then defined such that
independent actions can be dispatched concurrently.
Specification refinement leads to the definition of
a natural approximation ordering between processes
which generates a coherently complete prime
algebraic Scott domain, where process concatenation
is continuous in both arguments. Furthermore, we
define a natural ultrametric on processes based on
prefix information. The induced topology is shown to
be equivalent to the compact Lawson topology
generated by the approximation ordering. Moreover,
process concatenation is shown to be uniformly
continuous with respect to the defined ultrametric.
\par
We develop a mathematical theory which perfectly
extends the central properties of the domain of
finite and infinite words with explicit termination
and the domain of finite and infinite Mazurkiewicz
traces. Its natural semantics is well suited to the
design of modular denotational semantics for
algebras of processes sharing exclusive resources
such as programs using some set of shared registers
(PRAM) or concurrent sequential processes
synchronizing over exclusive communication channels
(CSP).}
}
@inproceedings{GaMu02icalp,
month = jul,
year = 2002,
volume = 2380,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Widmayer, Peter and
Triguero Ruiz, Francisco and Morales Bueno, Rafael and
Hennessy, Matthew and Eidenbenz, Stephan and Conejo, Ricardo},
acronym = {{ICALP}'02},
booktitle = {{P}roceedings of the 29th {I}nternational
{C}olloquium on {A}utomata, {L}anguages and
{P}rogramming
({ICALP}'02)},
author = {Gastin, Paul and Mukund, Madhavan},
title = {An Elementary Expressively Complete Temporal Logic
for {M}azurkiewicz Traces},
pages = {938-949},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/gasmuk02long2.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/gasmuk02long2.ps},
abstract = {In contrast to the
classical setting of sequences, no
temporal logic has yet been identified
over Mazurkiewicz traces that is
equivalent to the first-order theory of
traces and yet admits an elementary
decision procedure. In this paper, we
describe a local temporal logic over
traces that is expressively complete
and whose satisfiability problem is in
PSPACE. Contrary to the situation for
sequences, past modalities are
essential for such a logic. A somewhat
unexpected corollary is that
first-order logic with three variables
is expressively complete for traces.}
}
@inproceedings{GaKu03concur,
month = aug,
year = 2003,
volume = 2761,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Amadio, Roberto M. and Lugiez, Denis},
acronym = {{CONCUR}'03},
booktitle = {{P}roceedings of the 14th
{I}nternational {C}onference on
{C}oncurrency {T}heory
({CONCUR}'03)},
author = {Gastin, Paul and Kuske, Dietrich},
title = {Satisfiability and Model Checking for {MSO}-definable
Temporal Logics are in {PSPACE}},
pages = {222-236},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/concur03gk-final.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/concur03gk-final.ps},
abstract = {Temporal logics over Mazurkiewicz
traces have been extensively studied over the past
fifteen years. In order to be usable for the
verification of concurrent systems they need to have
reasonable complexity for the satisfiability and the
model checking problems. Whenever a new temporal
logic was introduced, a new proof (usually
non-trivial) was needed to establish the complexity of
these problems. In this paper, we introduce a
unified framework to define local temporal logics
over traces. We prove that the satisfiability
problem and the model checking problem for
asynchronous Kripke structures for local temporal
logics over traces are decidable in PSPACE. This
subsumes and sometimes improves all complexity
results previously obtained on local temporal logics
for traces.}
}
@inproceedings{GaMuNa03mfcs,
month = aug,
year = 2003,
volume = 2747,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Rovan, Branislav and Vojt{\'a}{\v{s}}, Peter},
acronym = {{MFCS}'03},
booktitle = {{P}roceedings of the 28th
{I}nternational {S}ymposium on
{M}athematical {F}oundations of
{C}omputer {S}cience
({MFCS}'03)},
author = {Gastin, Paul and Mukund, Madhavan and Narayan Kumar, K.},
title = {Local {LTL} with past constants is expressively complete for
{M}azurkiewicz traces},
pages = {429-438},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/mfcs03gmn-final.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/mfcs03gmn-final.ps},
abstract = {To obtain an expressively complete
linear-time temporal logic (LTL) over
Mazurkiewicz traces that is computationally
tractable, we need to interpret formulas locally,
at individual events in a trace, rather than
globally, at configurations. Such local logics
necessarily require past modalities, in contrast
to the classical setting of LTL over sequences.
Earlier attempts at defining expressively
complete local logics have used very general past
modalities as well as filters (side-conditions)
that {"}look sideways{"} and talk of concurrent
events. In this paper, we show that it is
possible to use unfiltered future modalities in
conjunction with past constants and still obtain
a logic that is expressively complete over
traces.}
}
@inproceedings{GaOd03mfcs,
month = aug,
year = 2003,
volume = 2747,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Rovan, Branislav and Vojt{\'a}{\v{s}}, Peter},
acronym = {{MFCS}'03},
booktitle = {{P}roceedings of the 28th
{I}nternational {S}ymposium on
{M}athematical {F}oundations of
{C}omputer {S}cience
({MFCS}'03)},
author = {Gastin, Paul and Oddoux, Denis},
title = {{LTL} with past and two-way very-weak alternating automata},
pages = {439-448},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/mfcs03go-final.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/mfcs03go-final.ps},
abstract = {It is crucial for a model
checker using LTL as a specification
language to have very efficient translation
of LTL formulas to B{\"u}chi automata. Most such
algorithms are based on a tableau
construction. Recently, an implementation
using very-weak alternating automata as an
intermediary step proved to be dramatically
faster than previous implementations based
on the tableau construction. In this paper,
we want to generalize this method to PLTL
(LTL with past modalities). For this, we
study two-way very-weak alternating automata
($2$VWAA). Our main result is an efficient
translation of $2$VWAA to generalized B{\"u}chi
automata (GBA). Since we can easily
transform a PLTL formula into an equivalent
$2$VWAA, this algorithm allows the use of PLTL
specifications with classical model checkers
such as SPIN.}
}
@misc{gastin-movep2004,
author = {Gastin, Paul},
title = {Basics of model checking},
year = 2004,
month = dec,
nonote = {-- pages},
howpublished = {Invited tutorial, 6th {W}inter {S}chool on
{M}odelling and {V}erifying {P}arallel {P}rocesses
({MOVEP}'04), Brussels, Belgium}
}
@misc{gastin-epit32,
author = {Gastin, Paul},
title = {Specifications for distributed systems},
year = 2004,
month = apr,
howpublished = {Invited lecture, 32nd {S}pring {S}chool on
{T}heoretical {C}omputer {S}cience ({C}oncurrency {T}heory),
Luminy, France}
}
@article{icomp-DG2004,
publisher = {Elsevier Science Publishers},
journal = {Information and Computation},
author = {Diekert, Volker and Gastin, Paul},
title = {Local temporal logic is expressively complete for
cograph dependence alphabets},
volume = {195},
number = {1-2},
pages = {30-52},
year = 2004,
month = nov,
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DG04-icomp.pdf},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/DG04-icomp.ps},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DG04-icomp.pdf},
doi = {10.1016/j.ic.2004.08.001},
abstract = {Recently, local logics for Mazurkiewicz
traces have attracted increasing interest. This is mainly
due to the fact that the satisfiability problem has
the same complexity as in the word case. If we focus
on a purely local interpretation of formulae at
vertices (or events) of a trace, then the
satisfiability problem of linear temporal logics
over traces turns out to be PSPACE-complete. But
now the difficult problem is to obtain expressive
completeness results with respect to first order
logic. \par
The main result of the paper shows such an
expressive completeness result, if the underlying
dependence alphabet is a cograph, \emph{i.e.}
if all
traces are series parallel posets. Moreover, we show
that this is the best we can expect in our setting:
If the dependence alphabet is not a cograph, then we
cannot express all first order properties.}
}
@inproceedings{GaLeZe04fsttcs,
month = dec,
year = 2004,
volume = 3328,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Lodaya, Kamal and Mahajan, Meena},
acronym = {{FSTTCS}'04},
booktitle = {{P}roceedings of the 24th {C}onference on
{F}oundations of {S}oftware {T}echnology and
{T}heoretical {C}omputer {S}cience
({FSTTCS}'04)},
author = {Gastin, Paul and Lerman, Benjamin and Zeitoun, Marc},
title = {Distributed games with causal memory are decidable for
series-parallel systems},
pages = {275-286},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/GLZ-fsttcs04.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/GLZ-fsttcs04.pdf},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/GLZ-fsttcs04.ps},
abstract = {This paper deals with distributed
control problems by means of distributed games
played on Mazurkiewicz traces. The main difference
with other notions of distributed games recently
introduced is that, instead of having a \emph{local} view,
strategies and controllers are able to use a more
accurate memory, based on their \emph{causal} view. Our
main result states that using the causal view makes
the control synthesis problem decidable for
series-parallel systems for \emph{all} recognizable winning
conditions on finite behaviors, while this problem
with local view was proved undecidable even for
reachability conditions.}
}
@article{GaMi04mscs,
publisher = {Cambridge University Press},
journal = {Mathematical Structures in Computer Science},
author = {Gastin, Paul and Mislove, Michael W.},
title = {A simple process algebra based on atomic actions with
resources},
year = 2004,
month = feb,
volume = 14,
number = 1,
pages = {1-55},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/GM-mscs04.ps},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/GM-mscs04.ps},
doi = {10.1017/S0960129503003943},
abstract = {This paper gives the first truly
concurrent denotational, \emph{i.e.}
compositional
semantics for a simple, deterministic parallel
language. By truly concurrent we mean the
denotational model does not rely on an interleaving
of concurrent actions for its definition. Thus, our
semantics does not reduce parallelism to
nondeterminism, as is done in the more established
approaches to concurrency. We also present a natural
SOS-style operational semantics for our language,
and we prove a congruence theorem relating the two
semantics. This implies the operational model itself
is compositional. The congruence theorem also
implies the denotational model is adequate with
respect to the operational semantics, and we
characterize the relatively mild conditions under
which the denotational semantics is fully abstract
with respect to the operational semantics. \par
Our simple language includes a (weak) sequential
composition operator which takes advantage of the
truly concurrent nature of the semantics, as well as
a parallel composition operator which allows local
events to execute asynchronously, while requiring
synchronizing events to execute simultaneously. In
addition, the language supports a restriction
operator and includes recursion. \par
The denotational semantics also is novel for its
treatment of recursion. The meaning of a recursive
process is defined using a least fixed point on a
subdomain that is determined by the body of the
recursion, and that varies from one process to
another. Nonetheless, the recursion operators in the
language have continuous interpretations in the
denotational model.
Our denotational model is based on a
domain-theoretic generalization of Mazurkiewicz
traces in which the concatenation operator, as well
as the other operators from our language can be
given continuous interpretations.}
}
@inproceedings{GaMoZe04spin,
month = apr,
year = 2004,
volume = 2989,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Graf, Susanne and Mounier, Laurent},
acronym = {{SPIN}'04},
booktitle = {{P}roceedings of the 11th {I}nternational
{SPIN} {W}orkshop on {M}odel {C}hecking {S}oftware
({SPIN}'04)},
author = {Gastin, Paul and Moro, Pierre and
Zeitoun, Marc},
title = {Minimization of counterexamples in
{SPIN}},
pages = {92-108},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/GMZ-spin04.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/GMZ-spin04.pdf},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/GMZ-spin04.ps},
abstract = {We propose an algorithm to find a
counterexample to some property in a finite state
program. This algorithm is derived from SPIN's
one, but it finds a counterexample faster than
SPIN does. In particular it still works in linear
time. Compared with SPIN's algorithm, it requires
only one additional bit per state stored. We
further propose another algorithm to compute a
counterexample of minimal size. Again, this
algorithm does not use more memory than SPIN does
to approximate a minimal counterexample. The cost
to find a counterexample of minimal size is that
one has to revisit more states than SPIN. We
provide an implementation and discuss
experimental results.}
}
@inproceedings{GaLeZe04latin,
month = apr,
year = 2004,
volume = 2976,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Farach-Colton, Martin},
acronym = {{LATIN}'04},
booktitle = {{P}roceedings of the 6th {L}atin {A}merican
{S}ymposium on {T}heoretical {I}nformatics
({LATIN}'04)},
author = {Gastin, Paul and Lerman, Benjamin and Zeitoun, Marc},
title = {Distributed games and distributed control for asynchronous
systems},
pages = {455-465},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/GLZ-latin04.pdf},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/GLZ-latin04.ps},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/GLZ-latin04.pdf},
abstract = {We introduce distributed games over
asynchronous transition systems to model a
distributed controller synthesis problem. A game
involves two teams and is not turn-based: several
players of both teams may simultaneously be enabled.
We define distributed strategies based on the causal
view that players have of the system. We reduce the
problem of finding a winning distributed strategy
with a given memory to finding a memoryless winning
distributed strategy in a larger distributed game.
We reduce the latter problem to finding a strategy
in a classical $2$-player game. This allows us to
transfer results from the sequential case to this
distributed setting.}
}
@inproceedings{DiGa04latin,
month = apr,
year = 2004,
volume = 2976,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Farach-Colton, Martin},
acronym = {{LATIN}'04},
booktitle = {{P}roceedings of the 6th {L}atin {A}merican
{S}ymposium on {T}heoretical {I}nformatics
({LATIN}'04)},
author = {Diekert, Volker and Gastin, Paul},
title = {Pure future local temporal logics are expressively complete
for {M}azurkiewicz traces},
pages = {232-241},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DG-latin04.pdf},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/DG-latin04.ps},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DG-latin04.pdf},
abstract = {The paper settles a long-standing
problem for Mazurkiewicz traces: the pure future
local temporal logic defined with the basic
modalities exists-next and until is expressively
complete. The analogous result with a global
interpretation was solved some years ago by
Thiagarajan and Walukiewicz (1997) and in its
final form without any reference to past tense
constants by Diekert and Gastin (2000). Both the
(previously known) global result and the (new) local
result generalize Kamp's Theorem for words,
because for sequences local and global viewpoints
coincide. But traces are labelled partial orders
and then the difference between an interpretation
globally over cuts (configurations) or locally at
points (events) is significant. For global
temporal logics the satisfiability problem is
non-elementary (Walukiewicz, 1998), whereas for
local temporal logics both the satisfiability
problem and the model checking problem are solvable
in PSPACE (Gastin and Kuske, 2003) as in the case
of words. This makes local temporal logics much
more attractive.}
}
@misc{gastin-wpv05,
author = {Gastin, Paul},
title = {On the synthesis of distributed controllers},
year = 2005,
month = nov,
howpublished = {Invited talk, Workshop Perspectives in
Verification, in honor of Wolfgang Thomas on the occasion of his
Doctorate Honoris Causa, Cachan, France}
}
@inproceedings{Gastin-ICALP2005,
month = jul,
year = 2005,
volume = {3580},
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Caires, Lu{\'\i}s and Italiano, Giuseppe F. and
Monteiro, Lu{\'\i}s and Palamidessi, Catuscia and Yung, Moti},
acronym = {{ICALP}'05},
booktitle = {{P}roceedings of the 32nd {I}nternational
{C}olloquium on {A}utomata, {L}anguages and
{P}rogramming
({ICALP}'05)},
author = {Droste, Manfred and Gastin, Paul},
title = {Weighted Automata and Weighted Logics},
pages = {513-525},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/icalp05dg-final.pdf},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/icalp05dg-final.ps},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/icalp05dg-final.pdf},
doi = {10.1007/11523468_42},
abstract = {Weighted automata are used to
describe quantitative properties in various
areas such as probabilistic systems, image
compression, and speech-to-text processing. The
behaviour of such an automaton is a mapping,
called a formal power series, assigning to
each word a weight in some semiring. We
generalize B{\"{u}}chi's and Elgot's
fundamental theorems to this quantitative
setting. We introduce a weighted version of
MSO~logic and prove that, for commutative
semirings, the behaviours of weighted
automata are precisely the formal power
series definable with our weighted logic. We
also consider weighted first-order logic and
show that aperiodic series coincide with the
first-order definable ones, if the semiring
is locally finite, commutative and has some
aperiodicity property.}
}
@inproceedings{GK-concur05,
address = {San Francisco, California, USA},
month = aug,
year = 2005,
volume = 3653,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Abadi, Mart{\'\i}n and de Alfaro, Luca},
acronym = {{CONCUR}'05},
booktitle = {{P}roceedings of the 16th
{I}nternational {C}onference on
{C}oncurrency {T}heory
({CONCUR}'05)},
author = {Gastin, Paul and Kuske, Dietrich},
title = {Uniform Satisfiability Problem for Local Temporal Logics
over {M}azurkiewicz Traces},
pages = {533-547},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/concur05gk-final.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/concur05gk-final.pdf},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/concur05gk-final.ps},
doi = {10.1007/11539452_40},
abstract = {We continue our study of the complexity
of temporal logics over concurrent systems that can be
described by Mazurkiewicz traces. In a previous paper
(CONCUR~2003), we investigated the class of local and
MSO definable temporal logics that capture all known
temporal logics and we showed that the satisfiability
problem for any such logic is in PSPACE (provided the
dependence alphabet is fixed). In this paper, we
concentrate on the uniform satisfiability problem: we
consider the dependence alphabet (\emph{i.e.}, the
architecture of the distributed system) as part of the
input. We prove lower and upper bounds for the uniform
satisfiability problem that depend on the number of
monadic quantifier alternations present in the chosen
MSO-modalities.}
}
@misc{gastin-prefsttcs06,
author = {Gastin, Paul},
title = {Refinements and Abstractions of Signal-Event (Timed) Languages},
year = 2006,
month = dec,
howpublished = {Invited talk, Advances and Issues in Timed Systems,
Kolkata, India}
}
@misc{gastin-wata06,
author = {Gastin, Paul},
title = {Weighted logics and weighted automata},
year = 2006,
month = mar,
howpublished = {Invited talk, Workshop Weighted Automata: Theory and Applications,
Leipzig, Germany}
}
@misc{gastin-epit06,
author = {Gastin, Paul},
title = {Distributed synthesis: synchronous and asynchronous semantics},
year = 2006,
month = may,
howpublished = {Invited talk, 34{\`e}me {\'E}cole de Printemps en
Informatique Th{\'e}orique, Ile de R{\'e}, France}
}
@misc{gastin-mfps22,
author = {Gastin, Paul},
title = {Refinements and Abstractions of Signal-Event (Timed) Languages},
year = 2006,
month = may,
howpublished = {Invited talk, 22nd {C}onference on
{M}athematical {F}oundations of {P}rogramming
{S}emantics ({MFPS}'06)}
}
@inproceedings{GSZ-fsttcs2006,
month = dec,
year = 2006,
volume = 4337,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Garg, Naveen and Arun-Kumar, S.},
acronym = {{FSTTCS}'06},
booktitle = {{P}roceedings of the 26th {C}onference on
{F}oundations of {S}oftware {T}echnology and
{T}heoretical {C}omputer {S}cience
({FSTTCS}'06)},
author = {Gastin, Paul and Sznajder, Nathalie and Zeitoun, Marc},
title = {Distributed synthesis for well-connected architectures},
pages = {321-332},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/GSZ-fsttcs2006.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/GSZ-fsttcs2006.pdf},
doi = {10.1007/11944836_30},
abstract = {We study the synthesis problem for external linear or branching
specifications and distributed, synchronous architectures with arbitrary
delays on processes. External means that the specification only relates input
and output variables. We~introduce the subclass of uniformly
well-connected~(UWC) architectures for which there exists a routing allowing
each output process to get the values of all inputs it is connected to, as
soon as possible. We~prove that the distributed synthesis problem is decidable
on UWC architectures if and only if the set of all sets of input variables
visible by output variables is totally ordered, under set inclusion. We~also
show that if we extend this class by letting the routing depend on the output
process, then the previous decidability result fails. Finally, we provide a
natural restriction on specifications under which the whole class of~UWC
architectures is decidable.}
}
@inproceedings{BGP1-formats06,
month = sep,
year = 2006,
volume = 4202,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Asarin, Eug{\`e}ne and Bouyer, Patricia},
acronym = {{FORMATS}'06},
booktitle = {{P}roceedings of the 4th {I}nternational {C}onference
on {F}ormal {M}odelling and {A}nalysis of {T}imed
{S}ystems ({FORMATS}'06)},
author = {B{\'e}rard, B{\'e}atrice and Gastin, Paul and Petit, Antoine},
title = {Refinements and abstractions of signal-event (timed) languages},
pages = {67-81},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/BGP1-formats06.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/BGP1-formats06.pdf},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/BGP1-formats06.ps},
doi = {10.1007/11867340_6},
abstract = {In the classical framework of formal languages, a refinement
operation is modeled by a substitution and an abstraction by an inverse
substitution. These mechanisms have been widely studied, because they describe
a change in the specification level, from an abstract view to a more concrete
one, or conversely. For~timed systems, there is up to now no uniform notion of
substitutions. In~this paper, we study the timed substitutions in the general
framework of signal-event languages, where both signals and events are taken
into account. We~prove that regular signal-event languages are closed under
substitutions and inverse substitutions. }
}
@inproceedings{BGP2-formats06,
month = sep,
year = 2006,
volume = 4202,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Asarin, Eug{\`e}ne and Bouyer, Patricia},
acronym = {{FORMATS}'06},
booktitle = {{P}roceedings of the 4th {I}nternational {C}onference
on {F}ormal {M}odelling and {A}nalysis of {T}imed
{S}ystems ({FORMATS}'06)},
author = {B{\'e}rard, B{\'e}atrice and Gastin, Paul and Petit, Antoine},
title = {Intersection of regular signal-event (timed) languages},
pages = {52-66},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/BGP2-formats06.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/BGP2-formats06.pdf},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/BGP2-formats06.ps},
doi = {10.1007/11867340_5},
abstract = {We propose in this paper a construction for a {"}well known{"}
result: regular signal-event languages are closed under intersection. In~fact,
while this result is indeed trivial for languages defined by Alur and Dill's
timed automata (the proof is an immediate extension of the one in the untimed
case), it turns out that the construction is much more tricky when considering
the most involved model of signal-event automata. While several constructions
have been proposed in particular cases, it is the first time, up to our
knowledge, that a construction working on finite and infinite signal-event
words and taking into account signal stuttering, unobservability of
zero-duration $\tau$-signals and Zeno runs is proposed.}
}
@inproceedings{BGM-atva2006,
month = oct,
year = {2006},
volume = 4218,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Graf, Susanne and Zhang, Wenhui},
acronym = {{ATVA}'06},
booktitle = {{P}roceedings of the 4th {I}nternational
{S}ymposium on {A}utomated {T}echnology
for {V}erification and {A}nalysis
({ATVA}'06)},
author = {Bhateja, Puneet and Gastin, Paul and Mukund, Madhavan},
title = {A fresh look at testing for asynchronous communication},
pages = {369-383},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/BGM-atva06.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/BGM-atva06.pdf},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/BGM-atva06.ps},
doi = {10.1007/11901914_28},
abstract = {Testing is one of the fundamental techniques for verifying if a
computing system conforms to its specification. We~take a fresh look at the
theory of testing for message-passing systems based on a natural notion of
observability in terms of input-output relations. We~propose two notions of
test equivalence: one which corresponds to presenting all test inputs up front
and the other which corresponds to interactively feeding inputs to the system
under test. We compare our notions with those studied earlier, notably the
equivalence proposed by Tretmans. In~Tretmans' framework, asynchrony is
modelled using synchronous communication by augmenting the state space of the
system with queues. We~show that the first equivalence we consider is strictly
weaker than Tretmans' equivalence and undecidable, whereas the second notion
is incomparable. We~also establish (un)decidability results for these
equivalences.}
}
@article{DG-icomp2006,
publisher = {Elsevier Science Publishers},
journal = {Information and Computation},
author = {Diekert, Volker and Gastin, Paul},
title = {Pure future local temporal logics are expressively complete for
{M}azurkiewicz traces},
pages = {1597-1619},
year = 2006,
month = nov,
volume = 204,
number = 11,
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DG-icomp06.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DG-icomp06.pdf},
doi = {10.1016/j.ic.2006.07.002},
abstract = {The paper settles a long standing problem for Mazurkiewicz
traces: the pure future local temporal logic defined with the basic modalities
exists-next and until is expressively complete. This means every first-order
definable language of Mazurkiewicz traces can be defined in a pure future
local temporal logic. The~analogous result with a global interpretation has
been known, but the treatment of a local interpretation turned out to be much
more involved. Local logics are interesting because both the satisfiability
problem and the model checking problem are solvable in PSPACE for these logics
whereas they are non-elementary for global logics. Both, the (previously
known) global and the (new) local results generalize Kamp's Theorem for words,
because for sequences local and global viewpoints coincide. }
}
@article{DG06-TCS,
publisher = {Elsevier Science Publishers},
journal = {Theoretical Computer Science},
author = {Diekert, Volker and Gastin, Paul},
title = {From local to global temporal logics over {M}azurkiewicz traces},
year = 2006,
month = may,
volume = 356,
number = {1-2},
pages = {126-135},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DG06-TCS.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DG06-TCS.pdf},
doi = {10.1016/j.tcs.2006.01.035},
abstract = {We review some results on global and local temporal logic on
Mazurkiewicz traces. Our~main contribution is to show how to derive the
expressive completeness of global temporal logic with respect to first-order
logic [V.~Diekert, P.~Gastin, LTL~is expressively complete for Mazurkiewicz
traces, J.~Comput. System Sci.~64 (2002) 396-418] from the similar result on
local temporal logic [V.~Diekert, P.~Gastin, Pure future local temporal logics
are expressively complete for Mazurkiewicz traces, in: M.~Farach-Colton~(Ed.),
Proc.~LATIN'04, Lecture Notes in Computer Science, Vol.~2976, Springer,
Berlin, 2004, pp.~232-241, Full version available as Research Report
LSV-05-22, Laboratoire Sp\'ecification et V\'erification, ENS Cachan, France].}
}
@inproceedings{ABG-fsttcs07,
month = dec,
year = 2007,
volume = 4855,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Arvind, V. and Prasad, Sanjiva},
acronym = {{FSTTCS}'07},
booktitle = {{P}roceedings of the 27th {C}onference on
{F}oundations of {S}oftware {T}echnology and
{T}heoretical {C}omputer {S}cience
({FSTTCS}'07)},
author = {Akshay, S. and Bollig, Benedikt and Gastin, Paul},
title = {Automata and Logics for Timed Message Sequence Charts},
pages = {290-302},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/ABG-fsttcs07.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/ABG-fsttcs07.pdf},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/ABG-fsttcs07.ps},
doi = {10.1007/978-3-540-77050-3_24},
abstract = {We provide a framework for distributed systems that impose timing constraints
on their executions. We~propose a timed model of communicating finite-state machines,
which communicate by exchanging messages through channels and use event clocks to
generate collections of timed message sequence charts~(T-MSCs). As~a specification
language, we~propose a monadic second-order logic equipped with timing predicates and
interpreted over~T-MSCs. We establish expressive equivalence of our automata and logic.
Moreover, we prove that, for (existentially) bounded channels, emptiness and
satisfiability are decidable for our automata and logic.}
}
@inproceedings{BGMN-fct07,
month = aug,
year = 2007,
volume = 4639,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Csuhaj-Varj{\'u}, Erzs{\'e}bet and {\'E}sik, Zolt{\'a}n},
acronym = {{FCT}'07},
booktitle = {{P}roceedings of the 16th {I}nternational {S}ymposium
on {F}undamentals of {C}omputation {T}heory
({FCT}'07)},
author = {Bhateja, Puneet and Gastin, Paul and Mukund, Madhavan and Narayan
Kumar, K.},
title = {Local testing of message sequence charts is difficult},
pages = {76-87},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/BGMN-fct07.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/BGMN-fct07.pdf},
doi = {10.1007/978-3-540-74240-1_8},
abstract = {Message sequence charts are an attractive visual formalism used
to specify distributed communicating systems. One~way to test such a
system is to substitute a component by a test process and observe its
interaction with the rest of the system. We~study the question of whether
we can characterize the distributed behaviour of the system based on such
local observations. The~main difficulty is that local observations can
combine in unexpected ways to define implied scenarios not present in the
original specification. It~is known that checking whether a scenario
specification is closed with respect to implied scenarios is undecidable
when observations are made one process at a time, even for regular
specifications. We~show that this undecidability holds even if we have
only two processes in the system. We then strengthen the observer to be
able to observe multiple processes simultaneously. Even in this stronger
framework, the problem remains undecidable. In~fact, undecidability
continues to hold even without message labels, provided we observe two or
more processes simultaneously. On~the other hand, if we do not have
message labels and we restrict observations to one process at a time, the
problem of checking for implied scenarios is decidable.}
}
@inproceedings{GM-spin07,
month = jul,
year = 2007,
volume = 4595,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Bo{\v{s}}nacki, Dragan and Edelkamp, Stefan},
acronym = {{SPIN}'07},
booktitle = {{P}roceedings of the 14th {I}nternational
{SPIN} {W}orkshop on {M}odel {C}hecking {S}oftware
({SPIN}'07)},
author = {Gastin, Paul and Moro, Pierre},
title = {Minimal counter-example generation for {SPIN}},
pages = {24-38},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/GM-spin07.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/GM-spin07.pdf},
ps = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PS/GM-spin07.ps},
doi = {10.1007/978-3-540-73370-6_4},
abstract = {In this paper, we propose an algorithm to compute a counter-example of
minimal size for some property in a finite-state program, under the same
programming constraints as~SPIN. The algorithm uses nested
breadth-first searches guided by priority queues; it runs in quadratic
time and uses linear memory.}
}
@misc{versydis-final,
author = {Gastin, Paul and others},
title = {{ACI} {S}{\'e}curit{\'e} {I}nformatique {VERSYDIS}~---
Rapport final},
year = 2006,
month = oct,
type = {Contract Report},
note = {10~pages},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/Versydis-final.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/Versydis-final.pdf}
}
@article{GK-fi07,
publisher = {{IOS} Press},
journal = {Fundamenta Informaticae},
author = {Gastin, Paul and Kuske, Dietrich},
title = {Uniform satisfiability in {PSPACE} for local temporal logics
over {M}azurkiewicz traces},
volume = 80,
number = {1-3},
pages = {169-197},
year = 2007,
month = nov,
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/GK-fi07.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/GK-fi07.pdf},
abstract = {We study the complexity of temporal logics over
concurrent systems that can be described by Mazurkiewicz traces. We
develop a general method to prove that the uniform satisfiability
problem of local temporal logics is in~PSPACE. We~also
demonstrate that this method applies to all known local temporal
logics.}
}
@article{BGP-fmsd07,
publisher = {Springer},
journal = {Formal Methods in System Design},
author = {B{\'e}rard, B{\'e}atrice and Gastin, Paul and Petit,
Antoine},
title = {Timed substitutions for regular signal-event languages},
volume = 31,
number = 2,
pages = {101-134},
year = 2007,
month = oct,
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/BGP-fmsd07.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/BGP-fmsd07.pdf},
doi = {10.1007/s10703-007-0034-5},
abstract = {In the classical framework of formal languages, a refinement
operation is modeled by a substitution and an abstraction by an inverse
substitution. These mechanisms have been widely studied, because they
describe a change in the specification level, from an abstract view to a
more concrete one, or conversely. For timed systems, there is up to now no
uniform notion of substitution. In~this paper, we~study timed substitutions
in the general framework of signal-event languages, where both signals and
events are taken into account. We prove that regular signal-event languages
are closed under substitution and inverse substitution.\par
To obtain these results, we use in a crucial way a {"}well known{"} result:
regular signal-event languages are closed under intersection. In fact,
while this result is indeed easy for languages defined by Alur and Dill's
timed automata, it turns out that the construction is much more tricky when
considering the most involved model of signal-event automata. We give here
a construction working on finite and infinite signal-event words and taking
into account signal stuttering, unobservability of zero-duration $\tau$-signals
and Zeno runs. Note that if several constructions have been proposed in
particular cases, it is the first time that a general construction is
provided.}
}
@article{DrGa06tocsys,
publisher = {Springer},
journal = {Theory of Computing Systems},
author = {Droste, Manfred and Gastin, Paul},
title = {On aperiodic and star-free formal power series in
partially commuting variables},
year = 2008,
month = may,
volume = 42,
number = 4,
pages = {608-631},
url = {http://www.lsv.ens-cachan.fr/Publis/RAPPORTS_LSV/PDF/rr-lsv-2005-12.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/RAPPORTS_LSV/PDF/rr-lsv-2005-12.pdf},
ps = {http://www.lsv.ens-cachan.fr/Publis/RAPPORTS_LSV/PS/rr-lsv-2005-12.ps},
doi = {10.1007/s00224-007-9064-z},
abstract = {Formal power series over non-commuting variables have been
investigated as representations of the behavior of automata with
multiplicities. Here we introduce and investigate the concepts of
aperiodic and of star-free formal power series over semirings and
partially commuting variables. We prove that if the semiring~$K$ is
idempotent and commutative, or if $K$ is idempotent and the variables
are non-commuting, then the product of any two aperiodic series is again
aperiodic. We also show that if $K$ is idempotent and the matrix monoids
over~$K$ have a Burnside property (satisfied, \textit{e.g.}~by the
tropical semiring), then the aperiodic and the star-free series coincide.
This generalizes a classical result of Sch{\"u}tzenberger~(1961) for
aperiodic regular languages and subsumes a result of Guaiana, Restivo and
Salemi~(1992) on aperiodic trace languages. }
}
@article{DrGa07tcs,
publisher = {Elsevier Science Publishers},
journal = {Theoretical Computer Science},
author = {Droste, Manfred and Gastin, Paul},
title = {Weighted automata and weighted logics},
year = 2007,
month = jun,
volume = 380,
number = {1-2},
pages = {69-86},
url = {http://www.lsv.ens-cachan.fr/Publis/RAPPORTS_LSV/PDF/rr-lsv-2005-13.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/RAPPORTS_LSV/PDF/rr-lsv-2005-13.pdf},
ps = {http://www.lsv.ens-cachan.fr/Publis/RAPPORTS_LSV/PS/rr-lsv-2005-13.ps},
doi = {10.1016/j.tcs.2007.02.055},
abstract = {Weighted automata are used to describe quantitative properties
in various areas such as probabilistic systems, image
compression, speech-to-text processing. The~behaviour of
such an automaton is a mapping, called a formal power
series, assigning to each word a weight in some semiring.
We~generalize B{\"u}chi's and Elgot's fundamental theorems to this
quantitative setting. We~introduce a weighted version of MSO
logic and prove that, for commutative semirings, the
behaviours of weighted automata are precisely the formal
power series definable with particular sentences of our
weighted logic. We~also consider weighted first-order logic
and show that aperiodic series coincide with the first-order
definable ones, if the semiring is locally finite,
commutative and has some aperiodicity property.},
oldnote = {Special issue of ICALP'05. To appear.
Also available as Research Report LSV-05-13,
Laboratoire Sp{\'e}cification et V{\'e}rification, ENS Cachan,
France, July 2005.}
}
@misc{dots-3.1,
author = {Bollig, Benedikt and Bouyer, Patricia and Cassez, Franck
              and Jard, Claude},
title = {Model for distributed timed systems},
howpublished = {Deliverable DOTS~3.1 (ANR-06-SETI-003)},
year = 2008,
month = sep
}
@incollection{DG-hwa08,
year = 2009,
series = {EATCS Monographs in Theoretical Computer Science},
publisher = {Springer},
editor = {Kuich, Werner and Vogler, Heiko and Droste, Manfred},
booktitle = {Handbook of Weighted Automata},
author = {Droste, Manfred and Gastin, Paul},
title = {Weighted automata and weighted logics},
pages = {175-211},
chapter = 5,
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DG-hwa08.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DG-hwa08.pdf}
}
@incollection{DG-pct08,
month = jan,
year = 2009,
series = {IARCS-Universities},
publisher = {Universities Press},
booktitle = {Perspectives in Concurrency Theory},
editor = {Lodaya, Kamal and Mukund, Madhavan and
Ramanujam, R.},
author = {Diekert, Volker and Gastin, Paul},
title = {Local safety and local liveness for distributed systems},
pages = {86-106},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DG-pct08.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DG-pct08.pdf},
abstract = {We introduce local safety and local liveness for distributed
systems whose executions are modeled by Mazurkiewicz traces. We
characterize local safety by local closure and local liveness by local
density. Restricting to first-order definable properties, we prove a
decomposition theorem in the spirit of the separation theorem for linear
temporal logic. We then characterize local safety and local liveness by
means of canonical local temporal logic formulae.}
}
@inproceedings{ABGMN-concur08,
month = aug,
year = 2008,
volume = 5201,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {van Breugel, Franck and Chechik, Marsha},
acronym = {{CONCUR}'08},
booktitle = {{P}roceedings of the 19th
{I}nternational {C}onference on
{C}oncurrency {T}heory
({CONCUR}'08)},
author = {Akshay, S. and Bollig, Benedikt and Gastin, Paul and Mukund,
              Madhavan and Narayan Kumar, K.},
title = {Distributed Timed Automata with Independently Evolving
Clocks},
pages = {82-97},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/ABGMN-concur08.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/ABGMN-concur08.pdf},
doi = {10.1007/978-3-540-85361-9_10},
abstract = { We propose a model of distributed timed systems where each
component is a timed automaton with a set of local clocks that evolve at a
rate independent of the clocks of the other components. A clock can be
read by any component in the system, but it can only be reset by the
automaton it belongs to.\par
There are two natural semantics for such systems. The \emph{universal}
semantics captures behaviors that hold under any choice of clock rates for
the individual components. This is a natural choice when checking that a
system always satisfies a positive specification. However, to check if a
system avoids a negative specification, it is better to use the
\emph{existential} semantics---the set of behaviors that the system can
possibly exhibit under some choice of clock rates.\par
We show that the existential semantics always describes a regular set of
behaviors. However, in the case of universal semantics, checking emptiness
turns out to be undecidable. As an alternative to the universal semantics,
we propose a \emph{reactive} semantics that allows us to check positive
specifications and yet describes a regular set of behaviors. }
}
@article{DGK-ijfcs08,
publisher = {World Scientific},
journal = {International Journal of Foundations of Computer Science},
author = {Diekert, Volker and Gastin, Paul and Kufleitner,
Manfred},
title = {A Survey on Small Fragments of First-Order Logic over
Finite Words},
volume = 19,
number = 3,
pages = {513-548},
year = 2008,
month = jun,
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DGK-ijfcs08.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DGK-ijfcs08.pdf},
doi = {10.1142/S0129054108005802},
abstract = {We consider fragments of first-order logic over finite
words. In~particular, we~deal with
first-order logic with a restricted number of
variables and with the lower levels of the
alternation hierarchy. We~use the algebraic
approach to show decidability of
expressibility within these fragments. As~a
byproduct, we~survey several characterizations
of the respective fragments. We~give complete
proofs for all characterizations and we
provide all necessary background. Some of the
proofs seem to be new and simpler than those
which can be found elsewhere. We also give a
proof of Simon's theorem on factorization
forests restricted to aperiodic monoids
because this is simpler and sufficient for our
purpose.}
}
@unpublished{PG-algo,
author = {Gastin, Paul},
title = {Algorithmique},
year = {2007},
month = nov,
note = {Course notes, {M}agist{\`e}re STIC, ENS Cachan, France}
}
@unpublished{PG-languages,
author = {Gastin, Paul},
title = {Langages formels},
year = {2007},
month = may,
note = {Course notes, {M}agist{\`e}re STIC, ENS Cachan, France}
}
@misc{ltl2ba-v1.1,
author = {Gastin, Paul and Oddoux, Denis},
title = {LTL2BA~v1.1},
year = {2007},
month = aug,
nohowpublished = {Available at http://www.lsv.ens-cachan.fr/~gastin/ltl2ba/},
note = {Written in~C++ (about 4000 lines)},
note-fr = {\'Ecrit en~C++ (environ 4000 lignes)},
url = {http://www.lsv.ens-cachan.fr/~gastin/ltl2ba/}
}
@misc{gastex-v2.8,
author = {Gastin, Paul},
title = {Gas{{\TeX}}: Graphs and Automata Simplified in~{{\TeX}} (v2.8)},
year = {2006},
month = nov,
nohowpublished = {Available at http://www.lsv.ens-cachan.fr/~gastin/gastex/gastex.html},
note = {Written in~\TeX{} (about 2000 lines)},
note-fr = {\'Ecrit en~\TeX{} (environ 2000 lignes)},
url = {http://www.lsv.ens-cachan.fr/~gastin/gastex/gastex.html}
}
@incollection{DiGa08Thomas,
author = {Diekert, Volker and Gastin, Paul},
title = {First-order definable languages},
booktitle = {Logic and Automata: History and Perspectives},
editor = {Flum, J{\"o}rg and Gr{\"a}del, Erich and Wilke, Thomas},
publisher = {Amsterdam University Press},
series = {Texts in Logic and Games},
volume = 2,
year = 2008,
pages = {261-306},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DG-WT08.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DG-WT08.pdf},
abstract = {We give an essentially self-contained presentation of some principal
results for first-order definable languages over finite and infinite
words. We~introduce the notion of a \emph{counter-free} B{\"u}chi
automaton; and we relate counter-freeness to \emph{aperiodicity}
and to the notion of \emph{very weak alternation}.
We also show that aperiodicity of a regular
$\infty$-language can be decided in polynomial
space, if the language is specified by some B{\"u}chi automaton.}
}
@misc{dots-2.2,
author = {Chatain, {\relax Th}omas and Gastin, Paul and Muscholl, Anca
and Sznajder, Nathalie and Walukiewicz, Igor and
Zeitoun, Marc},
title = {Distributed control for restricted specifications},
howpublished = {Deliverable DOTS~2.2 (ANR-06-SETI-003)},
year = 2009,
month = mar
}
@inproceedings{BG-dlt09,
month = jun # {-} # jul,
year = 2009,
volume = {5583},
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Diekert, Volker and Nowotka, Dirk},
acronym = {{DLT}'09},
booktitle = {{P}roceedings of the 13th {I}nternational
{C}onference on {D}evelopments in {L}anguage {T}heory
({DLT}'09)},
author = {Bollig, Benedikt and Gastin, Paul},
title = {Weighted versus Probabilistic Logics},
pages = {18-38},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/BG-dlt09.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/BG-dlt09.pdf},
doi = {10.1007/978-3-642-02737-6_2},
abstract = {While a mature theory around logics such as MSO, LTL, and CTL
has been developed in the pure boolean setting of finite automata,
weighted automata lack such a natural connection with (temporal) logic and
related verification algorithms. In this paper, we will identify weighted
versions of MSO and CTL that generalize the classical logics and even
other quantitative extensions such as probabilistic CTL. We establish
expressiveness results on our logics giving translations from weighted and
probabilistic CTL into weighted MSO.}
}
@incollection{GMN-pct08,
month = jan,
year = 2009,
series = {IARCS-Universities},
publisher = {Universities Press},
booktitle = {Perspectives in Concurrency Theory},
editor = {Lodaya, Kamal and Mukund, Madhavan and
Ramanujam, R.},
author = {Gastin, Paul and Mukund, Madhavan and Narayan Kumar, K.},
title = {Reachability and boundedness in time-constrained {MSC} graphs},
pages = {157-183},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/GMN-pct08.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/GMN-pct08.pdf},
abstract = {Channel boundedness is a necessary condition for a
message-passing system to exhibit regular, finite-state behaviour at the
global level. For Message Sequence Graphs~(MSGs), the most basic form of
High-level Message Sequence Charts~(HMSCs), channel boundedness can be
characterized in terms of structural conditions on the underlying graph.
We consider MSGs enriched with timing constraints between events. These
constraints restrict the global behaviour and can impose channel
boundedness even when it is not guaranteed by the graph structure of the
MSG. We~show that we can use MSGs with timing constraints to simulate
computations of a two-counter machine. As~a consequence, even the more
fundamental problem of reachability, which is trivial for untimed MSGs,
becomes undecidable when we add timing constraints. Different forms of
channel boundedness also then turn out to be undecidable, using reductions
from the reachability problem.}
}
@article{GSZ-fmsd09,
publisher = {Springer},
journal = {Formal Methods in System Design},
author = {Gastin, Paul and Sznajder, Nathalie and Zeitoun, Marc},
title = {Distributed synthesis for well-connected
architectures},
volume = 34,
number = 3,
pages = {215-237},
month = jun,
year = 2009,
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/GSZ-fmsd09.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/GSZ-fmsd09.pdf},
doi = {10.1007/s10703-008-0064-7},
abstract = {We study the synthesis problem for external linear or branching
specifications and distributed, synchronous architectures with arbitrary
delays on processes. External means that the specification only relates
input and output variables. We introduce the subclass of uniformly
well-connected (UWC) architectures for which there exists a routing
allowing each output process to get the values of all inputs it is
connected to, as soon as possible. We prove that the distributed synthesis
problem is decidable on UWC architectures if and only if the output
variables are totally ordered by their knowledge of input variables. We
also show that if we extend this class by letting the routing depend on
the output process, then the previous decidability result fails. Finally,
we provide a natural restriction on specifications under which the whole
class of UWC architectures is decidable.}
}
@inproceedings{CGS-sofsem09,
address = {\v{S}pindler\r{u}v Ml\'{y}n, Czech Republic},
month = jan,
year = 2009,
volume = 5404,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Nielsen, Mogens and Ku{\v c}era, Anton{\'\i}n and Bro
Miltersen, Peter and Palamidessi, Catuscia and T{\r{u}}ma,
Petr and Valencia, Franck},
acronym = {{SOFSEM}'09},
booktitle = {{P}roceedings of the 35th International Conference on
Current Trends in Theory and Practice of
Computer Science ({SOFSEM}'09)},
author = {Chatain, {\relax Th}omas and Gastin, Paul and Sznajder, Nathalie},
title = {Natural Specifications Yield Decidability for Distributed
Synthesis of Asynchronous Systems},
pages = {141-152},
url = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/CGS-sofsem09.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/CGS-sofsem09.pdf},
doi = {10.1007/978-3-540-95891-8_16},
abstract = {We study the synthesis problem in an asynchronous distributed
setting: a finite set of processes interact locally with an uncontrollable
environment and communicate with each other by sending signals---actions
that are immediately received by the target process. The synthesis problem
is to come up with a local strategy for each process such that the
resulting behaviours of the system meet a given specification. We consider
external specifications over partial orders. External means that
specifications only relate input and output actions from and to the
environment and not signals exchanged by processes. We also ask for some
closure properties of the specification. We present this new setting for
studying the distributed synthesis problem, and give decidability results:
the non-distributed case, and the subclass of networks where communication
happens through a strongly connected graph. We believe that this framework
for distributed synthesis yields decidability results for many more
architectures.}
}
@inproceedings{AGMN-fsttcs10,
month = dec,
year = 2010,
volume = 8,
series = {Leibniz International Proceedings in Informatics},
publisher = {Leibniz-Zentrum f{\"u}r Informatik},
editor = {Lodaya, Kamal and Mahajan, Meena},
acronym = {{FSTTCS}'10},
booktitle = {{P}roceedings of the 30th {C}onference on
{F}oundations of {S}oftware {T}echnology and
{T}heoretical {C}omputer {S}cience
({FSTTCS}'10)},
author = {Akshay, S. and Gastin, Paul and Mukund, Madhavan and Narayan Kumar, K.},
title = {Model checking time-constrained scenario-based specifications},
pages = {204-215},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/AGMN-fsttcs10.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/AGMN-fsttcs10.pdf},
doi = {10.4230/LIPIcs.FSTTCS.2010.204},
abstract = {We consider the problem of model checking message-passing
systems with real-time requirements. As behavioural specifications, we use
message sequence charts (MSCs) annotated with timing constraints. Our
system model is a network of communicating finite state machines with
local clocks, whose global behaviour can be regarded as a timed automaton.
Our goal is to verify that all timed behaviours exhibited by the system
conform to the timing constraints imposed by the specification. In
general, this corresponds to checking inclusion for timed languages, which
is an undecidable problem even for timed regular languages. However, we
show that we can translate regular collections of time-constrained MSCs
into a special class of event-clock automata that can be determinized and
complemented, thus permitting an algorithmic solution to the model
checking problem.}
}
@proceedings{GL-concur10,
author = {Gastin, Paul and Laroussinie, Fran{\c{c}}ois},
editor = {Gastin, Paul and Laroussinie, Fran{\c{c}}ois},
title = {{P}roceedings of the 21st
{I}nternational {C}onference on
{C}oncurrency {T}heory
({CONCUR}'10)},
booktitle = {{P}roceedings of the 21st
{I}nternational {C}onference on
{C}oncurrency {T}heory
({CONCUR}'10)},
year = 2010,
month = aug # {-} # sep,
publisher = {Springer},
series = {Lecture Notes in Computer Science},
volume = {6269},
doi = {10.1007/978-3-642-15375-4}
}
@inproceedings{BGMZ-icalp10,
month = jul,
year = 2010,
volume = 6199,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Abramsky, Samson and Meyer{ }auf{ }der{ }Heide, Friedhelm
and Spirakis, Paul},
acronym = {{ICALP}'10},
booktitle = {{P}roceedings of the 37th {I}nternational
{C}olloquium on {A}utomata, {L}anguages and
{P}rogramming ({ICALP}'10)~-- {P}art~{II}},
author = {Bollig, Benedikt and Gastin, Paul and Monmege, Benjamin
and Zeitoun, Marc},
title = {Pebble weighted automata and transitive closure logics},
pages = {587-598},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/BGMZ-icalp10.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/BGMZ-icalp10.pdf},
doi = {10.1007/978-3-642-14162-1_49},
abstract = {We introduce new classes of weighted automata on words. Equipped
with pebbles and a two-way mechanism, they go beyond the class of
recognizable formal power series, but capture a weighted version of
first-order logic with bounded transitive closure. In contrast to previous
work, this logic allows for unrestricted use of universal quantification.
Our main result states that pebble weighted automata, nested weighted
automata, and this weighted logic are expressively equivalent. We also
give new logical characterizations of the recognizable series.}
}
@article{GK-icomp10,
publisher = {Elsevier Science Publishers},
journal = {Information and Computation},
author = {Gastin, Paul and Kuske, Dietrich},
title = {Uniform satisfiability problem for local temporal logics over
{M}azurkiewicz traces},
volume = 208,
number = 7,
month = jul,
year = 2010,
pages = {797-816},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/GK-icomp10.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/GK-icomp10.pdf},
doi = {10.1016/j.ic.2009.12.003},
abstract = {We continue our study of the complexity of MSO-definable local
temporal logics over concurrent systems that can be described by
Mazurkiewicz traces. In previous papers, we showed that the satisfiability
problem for any such logic is in PSPACE (provided the dependence alphabet
is fixed) and remains in PSPACE for all classical local temporal logics
even if the dependence alphabet is part of the input. In~this paper, we
consider the uniform satisfiability problem for arbitrary MSO-definable
local temporal logics. For this problem, we prove multi-exponential lower
and upper bounds that depend on the number of alternations of set
quantifiers present in the chosen MSO-modalities.}
}
@inproceedings{BCGK-fossacs12,
month = mar,
year = 2012,
volume = 7213,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Birkedal, Lars},
acronym = {{FoSSaCS}'12},
booktitle = {{P}roceedings of the 15th {I}nternational
{C}onference on {F}oundations of {S}oftware {S}cience
and {C}omputation {S}tructures
({FoSSaCS}'12)},
author = {Bollig, Benedikt and Cyriac, Aiswarya and Gastin, Paul and
Narayan Kumar, K.},
title = {Model Checking Languages of Data Words},
pages = {391-405},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/BCGK-fossacs12.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/BCGK-fossacs12.pdf},
doi = {10.1007/978-3-642-28729-9_26},
abstract = {We consider the model-checking problem for data multi-pushdown
automata (DMPA). DMPA generate data words, i.e.,~strings enriched with
values from an infinite domain. The latter can be used to represent an
unbounded number of process identifiers so that DMPA are suitable to model
concurrent programs with dynamic process creation. To specify properties
of data words, we use monadic second-order (MSO) logic, which comes with a
predicate to test two word positions for data equality. While
satisfiability for MSO logic is undecidable (even for weaker fragments
such as first-order logic), our main result states that one can decide if
all words generated by a DMPA satisfy a given formula from the full MSO
logic.}
}
@inproceedings{BCGZ-mfcs11,
month = aug,
year = 2011,
volume = 6907,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Murlak, Filip and Sankowski, Piotr},
acronym = {{MFCS}'11},
booktitle = {{P}roceedings of the 36th
{I}nternational {S}ymposium on
{M}athematical {F}oundations of
{C}omputer {S}cience
({MFCS}'11)},
author = {Bollig, Benedikt and Cyriac, Aiswarya and Gastin, Paul and Zeitoun, Marc},
title = {Temporal Logics for Concurrent Recursive Programs: Satisfiability
and Model Checking},
pages = {132-144},
url = {http://hal.archives-ouvertes.fr/hal-00591139/en/},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/BCGZ-mfcs11.pdf},
doi = {10.1007/978-3-642-22993-0_15},
abstract = {We develop a general framework for the design of temporal logics
for concurrent recursive programs. A program execution is modeled as a
partial order with multiple nesting relations. To specify properties of
executions, we consider any temporal logic whose modalities are definable
expressions. This captures, in a unifying framework, a wide range of
logics defined for trees, nested words, and Mazurkiewicz traces that have
been studied separately. We show that satisfiability and model checking
are decidable in EXPTIME and 2EXPTIME, depending on the precise path
modalities.}
}
@techreport{rr-lsv-11-08,
author = {Bollig, Benedikt and Gastin, Paul and Monmege, Benjamin and
Zeitoun, Marc},
title = {Weighted Expressions and {DFS} Tree Automata},
institution = {Laboratoire Sp{\'e}cification et V{\'e}rification,
ENS Cachan, France},
year = {2011},
month = apr,
type = {Research Report},
number = {LSV-11-08},
url = {http://www.lsv.ens-cachan.fr/Publis/RAPPORTS_LSV/PDF/rr-lsv-2011-08.pdf},
pdf = {http://www.lsv.ens-cachan.fr/Publis/RAPPORTS_LSV/PDF/rr-lsv-2011-08.pdf},
note = {32~pages},
abstract = {We introduce weighted expressions, a~calculus to express
quantitative properties over unranked trees. They involve products and
sums from a semiring as well as classical boolean formulas. We~show that
weighted expressions are expressively equivalent to a new class of
weighted tree-walking automata. This new automata model is equipped with
pebbles, and follows a depth-first-search policy in the tree.}
}
@incollection{DG-iis09,
author = {Demri, St{\'e}phane and Gastin, Paul},
title = {Specification and Verification using Temporal Logics},
booktitle = {Modern applications of automata theory},
editor = {D'Souza, Deepak and Shankar, Priti},
series = {IISc Research Monographs},
volume = 2,
publisher = {World Scientific},
chapter = 15,
pages = {457-494},
year = 2012,
month = jul,
url = {http://www.lsv.fr/Publis/PAPERS/PDF/DG-iis09.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/DG-iis09.pdf},
abstract = {This chapter illustrates two aspects of automata theory related
to linear-time temporal logic LTL used for the verification of computer
systems. First, we present a translation from LTL formulae to B{\"u}chi
automata. The aim is to design an elementary translation which is
reasonably efficient and produces small automata so that it can be easily
taught and used by hand on real examples. Our translation is in the spirit
of the classical tableau constructions but is optimized in several ways.
Secondly, we recall how temporal operators can be defined from regular
languages and we explain why adding even a single operator definable by a
context-free language can lead to undecidability.}
}
@article{ABG-fmsd12,
publisher = {Springer},
journal = {Formal Methods in System Design},
author = {Akshay, S. and Bollig, Benedikt and Gastin, Paul},
title = {Event-clock Message Passing Automata: A~Logical
Characterization and an Emptiness-Checking Algorithm},
year = 2013,
month = jun,
volume = 42,
number = {3},
pages = {262-300},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/ABG-fmsd12.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/ABG-fmsd12.pdf},
doi = {10.1007/s10703-012-0179-8},
abstract = {We are interested in modeling behaviors and verifying
properties of systems in which time and concurrency play a crucial
role. We introduce a model of distributed automata which are
equipped with event clocks as in [Alur, Fix,
Henzinger. Event-clock automata: A~determinizable class of timed
automata. TCS 211(1-2):253-273, 1999.], which we call Event Clock
Message Passing Automata (ECMPA). To describe the behaviors of
such systems we use timed partial orders (modeled as message
sequence charts with timing).\par
Our first goal is to extend the classical
B{\"u}chi-Elgot-Trakhtenbrot equivalence to the timed and
distributed setting, by showing an equivalence between ECMPA and a
timed extension of monadic second-order (MSO) logic. We obtain
such a constructive equivalence in two different ways:
(1)~by~restricting the semantics by bounding the set of timed
partial orders (2)~by~restricting the timed MSO logic to its
existential fragment. We next consider the emptiness problem for
ECMPA, which asks if a given ECMPA has some valid timed
execution. In general this problem is undecidable and we show that
by considering only bounded timed executions, we can obtain
decidability. We do this by constructing a timed automaton which
accepts all bounded timed executions of the ECMPA and checking
emptiness of this timed automaton.}
}
@article{GS-tocl12,
publisher = {ACM Press},
journal = {ACM Transactions on Computational Logic},
author = {Gastin, Paul and Sznajder, Nathalie},
title = {Fair Synthesis for Asynchronous Distributed Systems},
nopages = {},
volume = 14,
number = {2:9},
month = jun,
year = 2013,
url = {http://www.lsv.fr/Publis/PAPERS/PDF/GS-tocl12.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/GS-tocl12.pdf},
doi = {10.1145/2480759.2480761},
abstract = {We study the synthesis problem in an asynchronous distributed
setting: a finite set of processes interact locally with an uncontrollable
environment and communicate with each other by sending signals---actions
controlled by a sender process and that are immediately received by the
target process. The fair synthesis problem is to come up with a local
strategy for each process such that the resulting fair behaviors of the
system meet a given specification. We consider external specifications
satisfying some natural closure properties related to the architecture. We
present this new setting for studying the fair synthesis problem for
distributed systems, and give decidability results for the subclass of
networks where communications happen through a strongly connected graph.
We claim that this framework for distributed synthesis is natural,
convenient and avoids most of the usual sources of undecidability for the
synthesis problem. Hence, it may open the way to a decidable theory of
distributed synthesis.}
}
@article{GS-ipl12,
publisher = {Elsevier Science Publishers},
journal = {Information Processing Letters},
author = {Gastin, Paul and Sznajder, Nathalie},
title = {Decidability of well-connectedness for distributed synthesis},
pages = {963-968},
volume = {112},
number = {24},
month = dec,
year = 2012,
url = {http://www.lsv.fr/Publis/PAPERS/PDF/GS-ipl12.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/GS-ipl12.pdf},
doi = {10.1016/j.ipl.2012.08.018},
abstract = {Although the synthesis problem is often undecidable for
distributed, synchronous systems, it becomes decidable for the subclass of
uniformly well-connected (UWC) architectures, provided that only robust
specifications are considered. It is then an important issue to be able to
decide whether a given architecture falls in this class. This is the
problem addressed in this paper: we establish the decidability and precise
complexity of checking this property. This problem is in EXPSPACE and
NP-hard in the general case, but falls into PSPACE when restricted to a
natural subclass of architectures.}
}
@inproceedings{GM-ciaa12,
month = jul,
year = 2012,
volume = {7381},
series = {Lecture Notes in Computer Science},
publisher = {Springer-Verlag},
editor = {Moreira, Nelma and Reis, Rog{\'e}rio},
acronym = {{CIAA}'12},
booktitle = {{P}roceedings of the 17th {I}nternational
{C}onference on {I}mplementation and
{A}pplication of {A}utomata
({CIAA}'12)},
author = {Gastin, Paul and Monmege, Benjamin},
title = {Adding Pebbles to Weighted Automata},
pages = {28-51},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/GM-ciaa12.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/GM-ciaa12.pdf},
doi = {10.1007/978-3-642-31606-7_4},
abstract = {We extend weighted automata and weighted rational expressions
with 2-way moves and (reusable) pebbles. We show with examples from
natural language modeling and quantitative model-checking that weighted
expressions and automata with pebbles are more expressive and allow much
more natural and intuitive specifications than classical ones.\par
We extend Kleene-Sch{\"u}tzenberger theorem showing that weighted
expressions and automata with pebbles have the same expressive power. We
focus on an efficient translation from expressions to automata.\par
We also prove that the evaluation problem for weighted automata can be
done very efficiently if the number of (reusable) pebbles is low.}
}
@inproceedings{BGMZ-atva12,
month = oct,
year = {2012},
volume = {7561},
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Mukund, Madhavan and Chakraborty, Supratik},
acronym = {{ATVA}'12},
booktitle = {{P}roceedings of the 10th {I}nternational
{S}ymposium on {A}utomated {T}echnology
for {V}erification and {A}nalysis
({ATVA}'12)},
author = {Bollig, Benedikt and Gastin, Paul and Monmege, Benjamin and
Zeitoun, Marc},
title = {A Probabilistic {K}leene Theorem},
pages = {400-415},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/BGMZ-atva12.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/BGMZ-atva12.pdf},
doi = {10.1007/978-3-642-33386-6_31},
abstract = {We provide a Kleene Theorem for (Rabin) probabilistic automata
over finite words. Probabilistic automata generalize deterministic finite
automata and assign to a word an acceptance probability. We provide
probabilistic expressions with probabilistic choice, guarded choice,
concatenation, and a star operator. We prove that probabilistic
expressions and probabilistic automata are expressively equivalent. Our
result actually extends to two-way probabilistic automata with pebbles and
corresponding expressions.}
}
@inproceedings{CGN-concur12,
month = sep,
year = 2012,
volume = 7454,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Koutny, Maciej and Ulidowski, Irek},
acronym = {{CONCUR}'12},
booktitle = {{P}roceedings of the 23rd
{I}nternational {C}onference on
{C}oncurrency {T}heory
({CONCUR}'12)},
author = {Cyriac, Aiswarya and Gastin, Paul and Narayan Kumar, K.},
title = {{MSO} Decidability of Multi-Pushdown Systems via Split-Width},
pages = {547-561},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/CGN-concur12.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/CGN-concur12.pdf},
doi = {10.1007/978-3-642-32940-1_38},
abstract = {Multi-threaded programs with recursion are naturally modeled as
multi-pushdown systems. The behaviors are represented as multiply nested
words (MNWs), which are words enriched with additional binary relations
for each stack matching a push operation with the corresponding pop
operation. Any MNW can be decomposed by two basic and natural operations:
shuffle of two sequences of factors and merge of consecutive factors of a
sequence. We say that the split-width of a MNW is~$$k$$ if it admits a
decomposition where the number of factors in each sequence is at most~$$k$$.
The MSO theory of MNWs with split-width~$$k$$ is decidable. We introduce two
very general classes of MNWs that strictly generalize known decidable
classes and prove their MSO decidability via their split-width and obtain
comparable or better bounds of tree-width of known classes.}
}
@inproceedings{BGM-fossacs13,
month = mar,
year = 2013,
volume = {7794},
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Pfenning, Frank},
acronym = {{FoSSaCS}'13},
booktitle = {{P}roceedings of the 16th {I}nternational
{C}onference on {F}oundations of {S}oftware {S}cience
and {C}omputation {S}tructures
({FoSSaCS}'13)},
author = {Bollig, Benedikt and Gastin, Paul and Monmege, Benjamin},
title = {Weighted Specifications over Nested Words},
pages = {385-400},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/BGM-fossacs13.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/BGM-fossacs13.pdf},
doi = {10.1007/978-3-642-37075-5_25},
abstract = {This paper studies several formalisms to specify quantitative
properties of finite nested words (or~equivalently finite unranked trees).
These can be used for XML documents or recursive programs: for~instance,
counting how often a given entry occurs in an XML document, or~computing
the memory required for a recursive program execution. Our main interest
is to translate these properties, as efficiently as possible, into an
automaton, and to use this computational device to decide problems related
to the properties (e.g.,~emptiness, model checking, simulation) or to
compute the value of a quantitative specification over a given nested
word. The specification formalisms are weighted regular expressions (with
forward and backward moves following linear edges or call-return edges),
weighted first-order logic, and weighted temporal logics. We~introduce
weighted automata walking in nested words, possibly dropping\slash lifting
(reusable) pebbles during the traversal. We prove that the evaluation
problem for such automata can be done very efficiently if the number of
pebble names is small, and we also consider the emptiness problem.}
}
@inproceedings{AG-fsttcs14,
month = dec,
year = 2014,
volume = {29},
series = {Leibniz International Proceedings in Informatics},
publisher = {Leibniz-Zentrum f{\"u}r Informatik},
editor = {Raman, Venkatesh and Suresh, S.~P.},
acronym = {{FSTTCS}'14},
booktitle = {{P}roceedings of the 34th {C}onference on
{F}oundations of {S}oftware {T}echnology and
{T}heoretical {C}omputer {S}cience
({FSTTCS}'14)},
author = {Aiswarya, C. and Gastin, Paul},
title = {Reasoning about distributed systems: {WYSIWYG}},
pages = {11-30},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/AG-fsttcs14.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/AG-fsttcs14.pdf},
doi = {10.4230/LIPIcs.FSTTCS.2014.11},
abstract = {There are two schools of thought on reasoning about distributed
systems: one~following interleaving based semantics, and one following
partial-order{{\slash}}graph based semantics. This paper compares these two
approaches and argues in favour of the latter. An~introductory treatment
of the split-width technique is also provided.}
}
@inproceedings{BGK-fsttcs14,
month = dec,
year = 2014,
volume = {29},
series = {Leibniz International Proceedings in Informatics},
publisher = {Leibniz-Zentrum f{\"u}r Informatik},
editor = {Raman, Venkatesh and Suresh, S.~P.},
acronym = {{FSTTCS}'14},
booktitle = {{P}roceedings of the 34th {C}onference on
{F}oundations of {S}oftware {T}echnology and
{T}heoretical {C}omputer {S}cience
({FSTTCS}'14)},
author = {Bollig, Benedikt and Gastin, Paul and Kumar, Akshay},
title = {Parameterized Communicating Automata: Complementation and
Model Checking},
pages = {625-637},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/BGK-fsttcs14.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/BGK-fsttcs14.pdf},
doi = {10.4230/LIPIcs.FSTTCS.2014.625},
abstract = {We study the language-theoretical aspects of parameterized
communicating automata (PCAs), in which processes communicate via
rendez-vous. A given PCA can be run on any topology of bounded degree such
as pipelines, rings, ranked trees, and grids. We show that, under a
context bound, which restricts the local behavior of each process, PCAs
are effectively complementable. Complementability is considered a key
aspect of robust automata models and can, in particular, be exploited for
verification. In this paper, we use it to obtain a characterization of
context-bounded PCAs in terms of monadic second-order (MSO) logic. As the
emptiness problem for context-bounded PCAs is decidable for the classes of
pipelines, rings, and trees, their model-checking problem wrt. MSO
properties also becomes decidable. While previous work on model checking
parameterized systems typically uses temporal logics without next
operator, our MSO logic allows one to express several natural next
modalities.}
}
@article{BCGZ-jal14,
publisher = {Elsevier Science Publishers},
journal = {Journal of Applied Logic},
author = {Bollig, Benedikt and Cyriac, Aiswarya and Gastin, Paul and
Zeitoun, Marc},
title = {Temporal logics for concurrent recursive programs:
Satisfiability and model checking},
volume = 12,
number = 4,
pages = {395-416},
month = dec,
year = 2014,
url = {http://www.lsv.fr/Publis/PAPERS/PDF/BCGZ-jal14.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/BCGZ-jal14.pdf},
doi = {10.1016/j.jal.2014.05.001},
abstract = {We develop a general framework for the design of temporal logics
for concurrent recursive programs. A program execution is modeled as a
partial order with multiple nesting relations. To specify properties of
executions, we consider any temporal logic whose modalities are definable
expressions. This captures, in a unifying framework, a wide range of
logics defined for ranked and unranked trees, nested words, and
Mazurkiewicz traces that have been studied separately. We show that
satisfiability and model checking are decidable in EXPTIME and 2EXPTIME,
depending on the precise path modalities.}
}
@inproceedings{BGS-rp14,
month = sep,
year = 2014,
volume = {8762},
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Ouaknine, Jo{\"e}l and Potapov, Igor and Worrell, James},
acronym = {{RP}'14},
booktitle = {{P}roceedings of the 8th {W}orkshop
on {R}eachability {P}roblems in {C}omputational {M}odels ({RP}'14)},
author = {Bollig, Benedikt and Gastin, Paul and Schubert, Jana},
title = {Parameterized Verification of Communicating Automata under Context Bounds},
pages = {45-57},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/BGS-rp14.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/BGS-rp14.pdf},
doi = {10.1007/978-3-319-11439-2_4},
abstract = {We study the verification problem for parameterized
communicating automata~(PCA), in which processes synchronize via message
passing. A~given PCA can be run on any topology of bounded degree (such as
pipelines, rings, or ranked trees), and communication may take place
between any two processes that are adjacent in the topology. Parameterized
verification asks if there is a topology from a given topology class that
allows for an accepting run of the given PCA. In general, this problem is
undecidable even for synchronous communication and simple pipeline
topologies. We therefore consider context-bounded verification, which
restricts the behavior of each single process. For several variants of
context bounds, we show that parameterized verification over pipelines,
rings, and ranked trees is decidable. Our approach is automata-theoretic
and uniform. We introduce a notion of graph acceptor that identifies those
topologies allowing for an accepting run. Depending on the given topology
class, the topology acceptor can then be restricted, or adjusted, so that
the verification problem reduces to checking emptiness of finite automata
or tree automata.}
}
@inproceedings{AGN-atva14,
month = nov,
year = {2014},
volume = 8837,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Cassez, Franck and Raskin, Jean-Fran{\c{c}}ois},
acronym = {{ATVA}'14},
booktitle = {{P}roceedings of the 12th {I}nternational
{S}ymposium on {A}utomated {T}echnology
for {V}erification and {A}nalysis
({ATVA}'14)},
author = {Aiswarya, C. and Gastin, Paul and Narayan Kumar, K.},
title = {Verifying Communicating Multi-pushdown Systems via Split-width},
pages = {1-17},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/AGN-atva14.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/AGN-atva14.pdf},
doi = {10.1007/978-3-319-11936-6_1},
abstract = {Communicating multi-pushdown systems model networks of
multi-threaded recursive programs communicating via reliable FIFO
channels. We extend the notion of split-width to this setting, improving
and simplifying the earlier definition. Split-width, while having the same
power of clique-{{\slash}}tree-width, gives a divide-and-conquer technique
to prove the bound of a class, thanks to the two basic operations, shuffle
and merge, of the split-width algebra. We illustrate this technique on
examples. We also obtain simple, uniform and optimal decision procedures
for various verification problems parametrised by split-width.}
}
@inproceedings{CGK-concur14,
month = sep,
year = 2014,
volume = 8704,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Baldan, Paolo and Gorla, Daniele},
acronym = {{CONCUR}'14},
booktitle = {{P}roceedings of the 25th
{I}nternational {C}onference on
{C}oncurrency {T}heory
({CONCUR}'14)},
author = {Cyriac, Aiswarya and Gastin, Paul and Narayan Kumar, K.},
title = {Controllers for the Verification of Communicating Multi-Pushdown Systems},
pages = {297-311},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/CGK-concur14.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/CGK-concur14.pdf},
doi = {10.1007/978-3-662-44584-6_21},
abstract = {Multi-pushdowns communicating via queues are formal models of
multi-threaded programs communicating via channels. They are Turing
powerful and much of the work on their verification has focussed on
under-approximation techniques. Any error detected in the
under-approximation implies an error in the system. However the successful
verification of the under-approximation is not as useful if the system
exhibits unverified behaviours. Our aim is to design controllers that
observe/restrict the system so that it stays within the verified
under-approximation. We identify some important properties that a good
controller should satisfy. We consider an extensive under-approximation
class, construct a distributed controller with the desired properties and
also establish the decidability of verification problems for this class.}
}
@inproceedings{BGMZ-csllics14,
month = jul,
year = 2014,
publisher = {ACM Press},
acronym = {{CSL\slash LICS}'14},
booktitle = {{P}roceedings of the Joint Meeting of the 23rd {EACSL} {A}nnual {C}onference on
{C}omputer {S}cience {L}ogic and the 29th {A}nnual {ACM\slash
IEEE} {S}ymposium on {L}ogic {I}n {C}omputer {S}cience ({CSL\slash LICS}'14)},
author = {Bollig, Benedikt and Gastin, Paul and Monmege, Benjamin and
Zeitoun, Marc},
title = {Logical Characterization of Weighted Pebble Walking Automata},
nopages = {},
chapter = 19,
url = {http://www.lsv.fr/Publis/PAPERS/PDF/BGMZ-csllics14.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/BGMZ-csllics14.pdf},
doi = {10.1145/2603088.2603118},
abstract = {Weighted automata are a conservative quantitative extension of
finite automata that enjoys applications, e.g., in language processing and
speech recognition. Their expressive power, however, appears to be
limited, especially when they are applied to more general structures than
words, such as graphs. To address this drawback, weighted automata have
recently been generalized to weighted pebble walking automata, which
proved useful as a tool for the specification and evaluation of
quantitative properties over words and nested words. In this paper, we
establish the expressive power of weighted pebble walking automata in
terms of transitive closure logic, lifting a similar result by Engelfriet
and Hoogeboom from the Boolean case to a quantitative setting. This result
applies to general classes of graphs, including all the aforementioned
classes.}
}
@article{ABGMN-fi13,
publisher = {{IOS} Press},
journal = {Fundamenta Informaticae},
author = {Akshay, S. and Bollig, Benedikt and Gastin, Paul and
Mukund, Madhavan and Narayan Kumar, K.},
title = {Distributed Timed Automata with Independently Evolving Clocks},
volume = {130},
number = {4},
month = apr,
year = 2014,
pages = {377-407},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/ABGMN-fi13.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/ABGMN-fi13.pdf},
doi = {10.3233/FI-2014-996},
abstract = {We propose a model of distributed timed systems where each
component is a timed automaton with a set of local clocks that evolve at a
rate independent of the clocks of the other components. A~clock can be
read by any component in the system, but it can only be reset by the
automaton it belongs~to.\par
There are two natural semantics for such systems. The \emph{universal}
semantics captures behaviors that hold under any choice of clock rates for
the individual components. This is a natural choice when checking that a
system always satisfies a positive specification. To check if a system
avoids a negative specification, it is better to use the
\emph{existential} semantics---the set of behaviors that the system
can possibly exhibit under some choice of clock rates.\par
We show that the existential semantics always describes a regular set of
behaviors. However, in the case of universal semantics, checking emptiness
or universality turns out to be undecidable. As an alternative to the
universal semantics, we propose a \emph{reactive} semantics that allows us
to check positive specifications and yet describes a regular set of
behaviors.}
}
@article{BGMZ-tocl13,
publisher = {ACM Press},
journal = {ACM Transactions on Computational Logic},
author = {Bollig, Benedikt and Gastin, Paul and Monmege, Benjamin and Zeitoun, Marc},
title = {Pebble Weighted Automata and Weighted Logics},
volume = 15,
number = {2:15},
month = apr,
year = 2014,
nopages = {},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/BGMZ-tocl13.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/BGMZ-tocl13.pdf},
doi = {10.1145/2579819},
abstract = {We introduce new classes of weighted automata on words. Equipped
with pebbles, they go beyond the class of recognizable formal power
series: they capture weighted first-order logic enriched with a
quantitative version of transitive closure. In contrast to previous work,
this calculus allows for unrestricted use of existential and universal
quantifications over positions of the input word. We actually consider
both two-way and one-way pebble weighted automata. The latter class
constrains the head of the automaton to walk left-to-right, resetting it
each time a pebble is dropped. Such automata have already been considered
in the Boolean setting, in the context of data words. Our main result
states that two-way pebble weighted automata, one-way pebble weighted
automata, and our weighted logic are expressively equivalent. We also give
new logical characterizations of standard recognizable series.}
}
@article{GM-tcs14,
publisher = {Elsevier Science Publishers},
journal = {Theoretical Computer Science},
author = {Gastin, Paul and Monmege, Benjamin},
title = {Adding Pebbles to Weighted Automata~-- Easy Specification
{\&} Efficient Evaluation},
volume = {534},
month = may,
year = 2014,
pages = {24-44},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/GM-tcs14.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/GM-tcs14.pdf},
doi = {10.1016/j.tcs.2014.02.034},
abstract = {We extend weighted automata and weighted rational expressions
with 2-way moves and reusable pebbles. We show with examples from natural
language modeling and quantitative model-checking that weighted
expressions and automata with pebbles are more expressive and allow much
more natural and intuitive specifications than classical ones. We extend
Kleene-Sch{\"u}tzenberger theorem showing that weighted expressions and
automata with pebbles have the same expressive power. We focus on an
efficient translation from expressions to automata. We also prove that the
evaluation problem for weighted automata can be done very efficiently if
the number of reusable pebbles is low.}
}
@article{AGMN-tcs15,
publisher = {Elsevier Science Publishers},
journal = {Theoretical Computer Science},
author = {Akshay, S. and Gastin, Paul and Mukund, Madhavan and
Narayan Kumar, K.},
title = {Checking conformance for time-constrained scenario-based specifications},
volume = {594},
pages = {24-43},
month = aug,
year = {2015},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/AGMN-tcs15.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/AGMN-tcs15.pdf},
doi = {10.1016/j.tcs.2015.03.030},
abstract = {We consider the problem of model checking message-passing
systems with real-time requirements. As behavioral specifications, we use
message sequence charts (MSCs) annotated with timing constraints. Our
system model is a network of communicating finite state machines with
local clocks, whose global behavior can be regarded as a timed automaton.
Our goal is to verify that all timed behaviors exhibited by the system
conform to the timing constraints imposed by the specification. In
general, this corresponds to checking inclusion for timed languages, which
is an undecidable problem even for timed regular languages. However, we
show that we can translate regular collections of time-constrained MSCs
into a special class of event-clock automata that can be determinized and
complemented, thus permitting an algorithmic solution to the model
checking/conformance problem.}
}
@inproceedings{ABG-concur15,
month = sep,
year = 2015,
volume = {42},
series = {Leibniz International Proceedings in Informatics},
publisher = {Leibniz-Zentrum f{\"u}r Informatik},
editor = {Aceto, Luca and de Frutos-Escrig, David},
acronym = {{CONCUR}'15},
booktitle = {{P}roceedings of the 26th
{I}nternational {C}onference on
{C}oncurrency {T}heory
({CONCUR}'15)},
author = {Aiswarya, C. and Bollig, Benedikt and Gastin, Paul},
title = {An Automata-Theoretic Approach to the Verification of Distributed Algorithms},
pages = {340-353},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/ABG-concur15.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/ABG-concur15.pdf},
doi = {10.4230/LIPIcs.CONCUR.2015.340},
abstract = {We introduce an automata-theoretic method for the verification
of distributed algorithms running on ring networks. In a distributed
algorithm, an arbitrary number of processes cooperate to achieve a common
goal (e.g., elect a leader). Processes have unique identifiers (pids) from
an infinite, totally ordered domain. An algorithm proceeds in synchronous
rounds, each round allowing a process to perform a bounded sequence of
actions such as send or receive a pid, store it in some register, and
compare register contents wrt. the associated total order. An algorithm is
supposed to be correct independently of the number of processes. To
specify correctness properties, we introduce a logic that can reason about
processes and pids. Referring to leader election, it may say that, at the
end of an execution, each process stores the maximum pid in some dedicated
register. Since the verification of distributed algorithms is undecidable,
we propose an underapproximation technique, which bounds the number of
rounds. This is an appealing approach, as the number of rounds needed by a
distributed algorithm to conclude is often exponentially smaller than the
number of processes. We provide an automata-theoretic solution, reducing
model checking to emptiness for alternating two-way automata on words.
Overall, we show that round-bounded verification of distributed algorithms
over rings is PSPACE-complete.}
}
@inproceedings{AGS-concur16,
month = aug,
year = 2016,
volume = {59},
series = {Leibniz International Proceedings in Informatics},
publisher = {Leibniz-Zentrum f{\"u}r Informatik},
acronym = {{CONCUR}'16},
booktitle = {{P}roceedings of the 27th
{I}nternational {C}onference on
{C}oncurrency {T}heory
({CONCUR}'16)},
author = {Akshay, S. and Paul Gastin and Krishna, Shankara Narayanan},
title = {Analyzing Timed Systems Using Tree Automata},
pages = {27:1-27:14},
url = {http://arxiv.org/abs/1604.08443},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/AGS-concur16.pdf},
doi = {10.4230/LIPIcs.CONCUR.2016.27},
abstract = {Timed systems, such as timed automata, are usually analyzed
using their operational semantics on timed words. The classical region
abstraction for timed automata reduces them to (untimed) finite state
automata with the same time-abstract properties, such as state
reachability. We propose a new technique to analyze such timed systems
using finite tree automata instead of finite word automata. The main idea
is to consider timed behaviors as graphs with matching edges capturing
timing constraints. Such graphs can be interpreted in trees opening the
way to tree automata based techniques which are more powerful than
analysis based on word automata. The technique is quite general and
applies to many timed systems. In this paper, as an example, we develop
the technique on timed pushdown systems, which have recently received
considerable attention. Further, we also demonstrate how we can use it on
timed automata and timed multi-stack pushdown systems (with boundedness
restrictions).}
}
@comment{{B-arxiv16,
author = Bollig, Benedikt,
affiliation = aff-LSVmexico,
title = One-Counter Automata with Counter Visibility,
institution = Computing Research Repository,
number = 1602.05940,
month = feb,
nmonth = 2,
year = 2016,
type = RR,
axeLSV = mexico,
NOcontrat = "",
url = http://arxiv.org/abs/1602.05940,
PDF = "http://www.lsv.fr/Publis/PAPERS/PDF/B-arxiv16.pdf",
lsvdate-new = 20160222,
lsvdate-upd = 20160222,
lsvdate-pub = 20160222,
lsv-category = "rapl",
wwwpublic = "public and ccsb",
note = 18~pages,
abstract = "In a one-counter automaton (OCA), one can read a letter
from some finite alphabet, increment and decrement the counter by
one, or test it for zero. It is well-known that universality and
language inclusion for OCAs are undecidable. We consider here OCAs
with counter visibility: Whenever the automaton produces a letter,
it outputs the current counter value along with~it. Hence, its
language is now a set of words over an infinite alphabet. We show
that universality and inclusion for that model are in PSPACE, thus
no harder than the corresponding problems for finite automata, which
can actually be considered as a special case. In fact, we show that
OCAs with counter visibility are effectively determinizable and
closed under all boolean operations. As~a~strict generalization, we
subsequently extend our model by registers. The general nonemptiness
problem being undecidable, we impose a bound on the number of
register comparisons and show that the corresponding nonemptiness
problem is NP-complete.",
}}
@inproceedings{FG-fossacs16,
month = apr,
year = 2016,
volume = {9634},
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Jacobs, Bart and L{\"o}ding, Christof},
acronym = {{FoSSaCS}'16},
booktitle = {{P}roceedings of the 19th {I}nternational
{C}onference on {F}oundations of {S}oftware {S}cience
and {C}omputation {S}tructures
({FoSSaCS}'16)},
author = {Fortin, Marie and Gastin, Paul},
title = {Verification of parameterized communicating automata via split-width},
pages = {197-213},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/FG-fossacs16.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/FG-fossacs16.pdf},
doi = {10.1007/978-3-662-49630-5_12},
abstract = {We~study verification problems for distributed systems
communicating via unbounded FIFO channels. The number of processes
of the system as well as the communication topology are not fixed
a~priori. Systems are given by parameterized communicating automata
(PCAs) which can be run on any communication topology of bounded
degree, with arbitrarily many processes. Such systems are Turing
powerful so we concentrate on under-approximate verification. We
extend the notion of split-width to behaviors of PCAs. We show that
emptiness, reachability and model-checking problems of PCAs are
decidable when restricted to behaviors of bounded split-width.
Reachability and emptiness are EXPTIME-complete, but only polynomial
in the size of the PCA. We also describe several concrete classes of
bounded split-width, for which we prove similar results.}
}
@inproceedings{BFG-stacs18,
month = feb,
volume = {96},
series = {Leibniz International Proceedings in Informatics},
publisher = {Leibniz-Zentrum f{\"u}r Informatik},
editor = {Niedermeier, Rolf and Vall{\'e}e, Brigitte},
acronym = {{STACS}'18},
booktitle = {{P}roceedings of the 35th {A}nnual
{S}ymposium on {T}heoretical {A}spects of
{C}omputer {S}cience
({STACS}'18)},
author = {Bollig, Benedikt and Fortin, Marie and Gastin, Paul},
title = {Communicating Finite-State Machines and Two-Variable Logic},
pages = {17:1-17:14},
year = {2018},
doi = {10.4230/LIPIcs.STACS.2018.17},
pdf = {http://drops.dagstuhl.de/opus/volltexte/2018/8529/pdf/LIPIcs-STACS-2018-17.pdf},
url = {http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8529},
abstract = {Communicating finite-state machines are a fundamental, well-studied model of finite-state processes that communicate via unbounded first-in first-out channels. We show that they are expressively equivalent to existential MSO logic with two first-order variables and the order relation.}
}
@inproceedings{AGKS-concur17,
month = sep,
year = 2017,
volume = {85},
series = {Leibniz International Proceedings in Informatics},
publisher = {Leibniz-Zentrum f{\"u}r Informatik},
editor = {Meyer, Roland and Nestmann, Uwe},
acronym = {{CONCUR}'17},
booktitle = {{P}roceedings of the 28th
{I}nternational {C}onference on
{C}oncurrency {T}heory
({CONCUR}'17)},
author = {Akshay, S. and Gastin, Paul and Krishna, Shankara Narayanan and Sarkar, Ilias},
title = {Towards an Efficient Tree Automata based technique for Timed Systems},
pages = {39:1--39:15},
url = {http://drops.dagstuhl.de/opus/volltexte/2017/7801},
pdf = {http://drops.dagstuhl.de/opus/volltexte/2017/7801/pdf/LIPIcs-CONCUR-2017-39.pdf},
doi = {10.4230/LIPIcs.CONCUR.2017.39},
abstract = {The focus of this paper is the analysis of real-time systems with recursion, through the development of good theoretical techniques which are implementable. Time is modeled using clock variables, and recursion using stacks. Our technique consists of modeling the behaviours of the timed system as graphs, and interpreting these graphs on tree terms by showing a bound on their tree-width. We then build a tree automaton that accepts exactly those tree terms that describe realizable runs of the timed system. The emptiness of the timed system thus boils down to emptiness of a finite tree automaton that accepts these tree terms. This approach helps us in obtaining an optimal complexity, not just in theory (as done in earlier work e.g.[concur16]), but also in going towards an efficient implementation of our technique. To do this, we make several improvements in the theory and exploit these to build a first prototype tool that can analyze timed systems with recursion.}
}
@article{ABG-ic17,
publisher = {Elsevier Science Publishers},
journal = {Information and Computation},
author = {Aiswarya, C. and Bollig, Benedikt and Gastin, Paul},
title = {An Automata-Theoretic Approach to the Verification of Distributed Algorithms},
volume = {259},
month = apr,
year = {2018},
pages = {305-327},
doi = {10.1016/j.ic.2017.05.006},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/ABG-ic17.pdf},
abstract = {We introduce an automata-theoretic method for the verification of distributed algorithms running on ring networks. In a distributed algorithm, an arbitrary number of processes cooperate to achieve a common goal (e.g., elect a leader). Processes have unique identifiers (pids) from an infinite, totally ordered domain. An algorithm proceeds in synchronous rounds, each round allowing a process to perform a bounded sequence of actions such as send or receive a pid, store it in some register, and compare register contents wrt. the associated total order. An algorithm is supposed to be correct independently of the number of processes. To specify correctness properties, we introduce a logic that can reason about processes and pids. Referring to leader election, it may say that, at the end of an execution, each process stores the maximum pid in some dedicated register.
We show that the verification problem of distributed algorithms can be reduced to satisfiability of a formula from propositional dynamic logic with loop and converse (LCPDL), interpreted over grids over a finite alphabet. This translation is independent of any restriction imposed on the algorithm. However, since the verification problem (and satisfiability for LCPDL) is undecidable, we propose an underapproximation technique, which bounds the number of rounds. This is an appealing approach, as the number of rounds needed by a distributed algorithm to conclude is often exponentially smaller than the number of processes. Using our reduction to LCPDL, we provide an automata-theoretic solution, reducing model checking to emptiness for alternating two-way automata on words. Overall, we show that round-bounded verification of distributed algorithms over rings is PSPACE-complete, provided the number of rounds is given in unary.}
}
@inproceedings{GMS-concur18,
month = sep,
year = 2018,
volume = {118},
series = {Leibniz International Proceedings in Informatics},
publisher = {Leibniz-Zentrum f{\"u}r Informatik},
editor = {Schewe, Sven and Zhang, Lijun},
acronym = {{CONCUR}'18},
booktitle = {{P}roceedings of the 29th
{I}nternational {C}onference on
{C}oncurrency {T}heory
({CONCUR}'18)},
author = {Paul Gastin and Sayan Mukherjee and B. Srivathsan},
title = {Reachability in timed automata with diagonal constraints},
pages = {28:1-28:17},
url = {http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9566},
pdf = {http://drops.dagstuhl.de/opus/volltexte/2018/9566/pdf/LIPIcs-CONCUR-2018-28.pdf},
doi = {10.4230/LIPIcs.CONCUR.2018.28},
abstract = {We consider the reachability problem for timed automata having diagonal constraints (like x - y < 5) as guards in transitions. The best algorithms for timed automata proceed by enumerating reachable sets of its configurations, stored in a data structure called ''zones''. Simulation relations between zones are essential to ensure termination and efficiency. The algorithm employs a simulation test Z <= Z' which ascertains that zone Z does not reach more states than zone Z', and hence further enumeration from Z is not necessary. No effective simulations are known for timed automata containing diagonal constraints as guards. We propose a simulation relation <=_{LU}^d for timed automata with diagonal constraints. On the negative side, we show that deciding Z not <=_{LU}^d Z' is NP-complete. On the positive side, we identify a witness for Z not <=_{LU}^d Z' and propose an algorithm to decide the existence of such a witness using an SMT solver. The shape of the witness reveals that the simulation test is likely to be efficient in practice.}
}
@inproceedings{BFG-concur18,
month = sep,
year = 2018,
volume = {118},
series = {Leibniz International Proceedings in Informatics},
publisher = {Leibniz-Zentrum f{\"u}r Informatik},
editor = {Schewe, Sven and Zhang, Lijun},
acronym = {{CONCUR}'18},
booktitle = {{P}roceedings of the 29th
{I}nternational {C}onference on
{C}oncurrency {T}heory
({CONCUR}'18)},
author = {Bollig, Benedikt and Fortin, Marie and Gastin, Paul},
title = {It Is Easy to Be Wise After the Event: Communicating Finite-State
Machines Capture First-Order Logic with ``Happened Before''},
pages = {7:1-7:17},
url = {http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9545},
pdf = {http://drops.dagstuhl.de/opus/volltexte/2018/9545/pdf/LIPIcs-CONCUR-2018-7.pdf},
doi = {10.4230/LIPIcs.CONCUR.2018.7},
abstract = {Message sequence charts (MSCs) naturally arise as executions of communicating finite-state machines (CFMs), in which finite-state processes exchange messages through unbounded FIFO channels. We study the first-order logic of MSCs, featuring Lamport's happened-before relation. We introduce a star-free version of propositional dynamic logic (PDL) with loop and converse. Our main results state that (i) every first-order sentence can be transformed into an equivalent star-free PDL sentence (and conversely), and (ii) every star-free PDL sentence can be translated into an equivalent CFM. This answers an open question and settles the exact relation between CFMs and fragments of monadic second-order logic. As a byproduct, we show that first-order logic over MSCs has the three-variable property.}
}
@article{AGK-lmcs18,
journal = {Logical Methods in Computer Science},
author = {Akshay, S. and Gastin, Paul and Krishna, Shankara Narayanan},
title = {Analyzing Timed Systems Using Tree Automata},
volume = {14},
number = {2},
pages = {1-35},
year = {2018},
month = may,
doi = {10.23638/LMCS-14(2:8)2018},
pdf = {https://lmcs.episciences.org/4489/pdf},
url = {https://lmcs.episciences.org/4489},
abstract = {Timed systems, such as timed automata, are usually analyzed using their operational semantics on timed words. The classical region abstraction for timed automata reduces them to (untimed) finite state automata with the same time-abstract properties, such as state reachability. We propose a new technique to analyze such timed systems using finite tree automata instead of finite word automata. The main idea is to consider timed behaviors as graphs with matching edges capturing timing constraints. When a family of graphs has bounded tree-width, they can be interpreted in trees and MSO-definable properties of such graphs can be checked using tree automata. The technique is quite general and applies to many timed systems. In this paper, as an example, we develop the technique on timed pushdown systems, which have recently received considerable attention. Further, we also demonstrate how we can use it on timed automata and timed multi-stack pushdown systems (with boundedness restrictions).}
}
@inproceedings{DGK-lics18,
publisher = {ACM Press},
editor = {Hofmann, Martin and Dawar, Anuj and Gr{\"a}del, Erich},
acronym = {{LICS}'18},
booktitle = {{P}roceedings of the 33rd {A}nnual {ACM\slash
IEEE} {S}ymposium on {L}ogic {I}n {C}omputer {S}cience ({LICS}'18)},
author = {Dave, Vrunda and Gastin, Paul and Krishna, Shankara Narayanan},
month = jul,
title = {{Regular Transducer Expressions for Regular Transformations}},
year = {2018},
url = {https://arxiv.org/abs/1802.02094},
pdf = {https://arxiv.org/pdf/1802.02094.pdf},
pages = {315-324},
doi = {10.1145/3209108.3209182},
abstract = {Functional MSO transductions, deterministic two-way transducers, as well as streaming string transducers are all equivalent models for regular functions. In this paper, we show that every regular function, either on finite words or on infinite words, captured by a deterministic two-way transducer, can be described with a regular transducer expression (RTE). For infinite words, the transducer uses Muller acceptance and $$\omega$$-regular look-ahead. RTEs are constructed from constant functions using the combinators if-then-else (deterministic choice), Hadamard product, and unambiguous versions of the Cauchy product, the 2-chained Kleene-iteration and the 2-chained omega-iteration. Our proof works for transformations of both finite and infinite words, extending the result on finite words of Alur et al. in LICS'14. In order to construct an RTE associated with a deterministic two-way Muller transducer with look-ahead, we introduce the notion of transition monoid for such two-way transducers where the look-ahead is captured by some backward deterministic Büchi automaton. Then, we use an unambiguous version of Imre Simon's famous forest factorization theorem in order to derive a ''good'' ($$\omega$$-)regular expression for the domain of the two-way transducer. ''Good'' expressions are unambiguous and Kleene-plus as well as $$\omega$$-iterations are only used on subexpressions corresponding to idempotent elements of the transition monoid. The combinator expressions are finally constructed by structural induction on the ''Good'' ($$\omega$$-)regular expression describing the domain of the transducer.}
}
@article{GM-softc18,
publisher = {Springer},
journal = {Soft Computing},
author = {Gastin, Paul and Monmege, Benjamin},
title = {{A unifying survey on weighted logics and weighted automata}},
volume = {22},
number = {4},
year = {2018},
month = feb,
pages = {1047-1065},
doi = {10.1007/s00500-015-1952-6},
url = {http://www.lsv.fr/Publis/PAPERS/PDF/softc2016-GM.pdf},
pdf = {http://www.lsv.fr/Publis/PAPERS/PDF/softc2016-GM.pdf},
abstract = {Logical formalisms equivalent to weighted automata have been the topic of numerous research papers in the recent years. It started with the seminal result by Droste and Gastin on weighted logics over semirings for words. It has been extended in two dimensions by many authors. First, the weight domain has been extended to valuation monoids, valuation structures, etc. to capture more quantitative properties. Along another dimension, different structures such as ranked or unranked trees, nested words, Mazurkiewicz traces, etc. have been considered. The long and involved proofs of equivalences in all these papers are implicitly based on the same core arguments. This article provides a meta-theorem which unifies these different approaches. Towards this, we first revisit weighted automata by defining a new semantics for them in two phases---an abstract semantics based on multisets of weight structures (independent of particular weight domains) followed by a concrete semantics. Then, we introduce a core weighted logic with a minimal number of features and a simplified syntax, and lift the new semantics to this logic. We show at the level of the abstract semantics that weighted automata and core weighted logic have the same expressive power. Finally, we show how previous results can be recovered from our result by logical reasoning. In this paper, we prove the meta-theorem for words, ranked and unranked trees, showing the robustness of our approach.}
}
@inproceedings{GMG-dlt19,
month = aug,
volume = {11647},
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Piotrek Hofman and Micha{\l} Skrzypczak},
acronym = {{DLT}'19},
booktitle = {{P}roceedings of the 23rd {I}nternational
{C}onference on {D}evelopments in {L}anguage {T}heory
({DLT}'19)},
author = {Paul Gastin and Amaldev Manuel and R. Govind},
title = {Logics for Reversible Regular Languages and Semigroups with Involution},
pages = {182-191},
doi = {10.1007/978-3-030-24886-4_13},
year = 2019,
pdf = {https://arxiv.org/pdf/1907.01214.pdf},
url = {https://arxiv.org/abs/1907.01214},
abstract = {We present MSO and FO logics with predicates ``between'' and
``neighbour'' that characterise various fragments of the class of regular
languages that are closed under the reverse operation. The standard
connections that exist between MSO and FO logics and varieties of finite
semigroups extend to this setting with semigroups extended with an
involution. The case is different for FO with the neighbour relation, where
we show that one needs additional equations to characterise the class.}
}
@inproceedings{Gastin-cai19,
month = jun,
volume = 11545,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Miroslav {\'C}iri{\'c} and Manfred Droste and Jean-{\'E}ric Pin},
acronym = {{CAI}'19},
booktitle = {{P}roceedings of the 8th {I}nternational {C}onference on
{A}lgebraic {I}nformatics ({CAI}'19)},
author = {Gastin, Paul},
title = {Modular Descriptions of Regular Functions},
pages = {3-9},
note = {Invited talk},
year = 2019,
pdf = {https://arxiv.org/abs/1908.01137},
doi = {10.1007/978-3-030-21363-3_1},
abstract = {We discuss various formalisms to describe string-to-string
transformations. Many are based on automata and can be seen as operational
descriptions, allowing direct implementations when the input scanner is
deterministic. Alternatively, one may use more human friendly descriptions
based on some simple basic transformations (e.g., copy, duplicate, erase,
reverse) and various combinators such as function composition or extensions
of regular operations.}
}
@inproceedings{GMS-cav19,
month = jul,
volume = {11561},
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Isil Dillig and Serdar Tasiran},
acronym = {{CAV}'19},
booktitle = {{P}roceedings of the 31st
{I}nternational {C}onference on
{C}omputer {A}ided {V}erification
({CAV}'19)},
author = {Paul Gastin and Sayan Mukherjee and B. Srivathsan},
title = {Fast algorithms for handling diagonal constraints in timed automata},
pages = {41-59},
year = 2019,
doi = {10.1007/978-3-030-25540-4_3},
pdf = {https://arxiv.org/pdf/1904.08590.pdf},
url = {https://arxiv.org/abs/1904.08590}
}
@inproceedings{BBM-mfcs19,
month = aug,
volume = {138},
series = {Leibniz International Proceedings in Informatics},
publisher = {Leibniz-Zentrum f{\"u}r Informatik},
editor = {Pinar Heggernes and Joost-Pieter Katoen and Peter Rossmanith},
acronym = {{MFCS}'19},
booktitle = {{P}roceedings of the 42nd
{I}nternational {S}ymposium on
{M}athematical {F}oundations of
{C}omputer {S}cience
({MFCS}'19)},
author = {Manfred Droste and Paul Gastin},
title = {Aperiodic Weighted Automata and Weighted First-Order Logic},
pages = {76:1-76:15},
year = 2019,
doi = {10.4230/LIPIcs.MFCS.2019.76},
pdf = {http://drops.dagstuhl.de/opus/volltexte/2019/11020/pdf/LIPIcs-MFCS-2019-76.pdf},
url = {http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=11020}
}
@inproceedings{AGJK-lics19,
month = jun,
publisher = {{IEEE} Press},
editor = {Bouyer, Patricia},
acronym = {{LICS}'19},
booktitle = {{P}roceedings of the 34th {A}nnual {ACM\slash
IEEE} {S}ymposium on {L}ogic {I}n {C}omputer {S}cience ({LICS}'19)},
author = {Akshay, S. and Gastin, Paul and Jug{\'e}, Vincent and Krishna, Shankara Narayanan},
title = {Timed systems through the lens of logic},
pages = {1-13},
year = 2019,
doi = {10.1109/LICS.2019.8785684},
pdf = {https://arxiv.org/pdf/1903.03773.pdf},
url = {https://arxiv.org/abs/1903.03773},
abstract = {In this paper, we analyze timed systems with data structures, using a rich interplay of logic and properties of graphs. We start by describing behaviors of timed systems using graphs with timing constraints. Such a graph is called realizable if we can assign time-stamps to nodes or events so that they are consistent with the timing constraints. The logical definability of several graph properties has been a challenging problem, and we show, using a highly non-trivial argument, that the realizability property for collections of graphs with strict timing constraints is logically definable in a class of propositional dynamic logic (EQ-ICPDL), which is strictly contained in MSO. Using this result, we propose a novel, algorithmically efficient and uniform proof technique for the analysis of timed systems enriched with auxiliary data structures, like stacks and queues. Our technique unravels new results (for emptiness checking as well as model checking) for timed systems with richer features than considered so far, while also recovering existing results.}
}
@inproceedings{AGKR-tacas2020,
month = apr,
series = {Lecture Notes in Computer Science},
publisher = {Springer},
editor = {Armin Biere and David Parker},
acronym = {{TACAS}'20},
booktitle = {{P}roceedings of the 26th {I}nternational
{C}onference on {T}ools and {A}lgorithms for
{C}onstruction and {A}nalysis of {S}ystems
({TACAS}'20)},
author = {Akshay, S. and Gastin, Paul and Krishna, Shankara Narayanan and Roychoudhary, Sparsa},
title = {Revisiting Underapproximate Reachability for Multipushdown Systems},
year = 2020,
note = {To appear}
}
This file was generated by bibtex2html 1.98.
https://blog.sigplan.org/2021/02/11/a-primer-on-analog-computing/
# PL Perspectives
Perspectives on computing and technology from and for those with an interest in programming languages.
An analog computer is any computing platform which leverages the physics of the underlying substrate to implement computation. Under this computing paradigm, the physical properties of the computational substrate correspond to variables in the computation. To execute a program on an analog computer, the hardware must be configured such that the device physics matches (is analogous to) the behavior of the program.
Modern analog computers are attractive computational targets which have the potential to deliver significant performance and energy improvements over conventional digital hardware. Many modern analog computers are reconfigurable and may be digitally reprogrammed to implement a variety of computations. In a series of blog posts, we will discuss some of the challenges of using analog computers and show how programming language and compiler techniques can help. This first post provides an overview of how analog computation works and highlights the challenges that programming language technology can help address. To make things concrete, we will describe how an important subclass of computations (dynamical systems) can be implemented on a specific analog device (the HCDCv2).
### Example Computation: A Spring-Mass System
A dynamical system is a continuously evolving system composed of one or more state variables whose trajectories evolve over time. Dynamical systems are broadly used mathematical constructs which appear in many disciplines, including biology, chemistry, physics, robotics, and electrical engineering. These systems can be used to process sensor information, control motors and other actuators, and study physical phenomena. In this series of posts, we will focus on simulating dynamical systems made up of ordinary differential equations.
Consider the spring-mass system pictured above. This system is made up of a $2~kg$ mass attached to a spring with a $0.5~kg/s^2$ force constant. The mass is initially 10 meters away from its resting position and is stationary (the velocity is $0~m/s$).
We can model the dynamics of the above system with a dynamical system (lower left corner) composed of two state variables which capture the velocity ($v$) and the position ($p$) of the mass over time. The initial values of $v$ and $p$ capture the initial state of the system.
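The figure containing the equations is not reproduced here; written out (a reconstruction from the physics stated above, with $m = 2~kg$ and $k = 0.5~kg/s^2$), the dynamical system is:

```latex
\begin{aligned}
\frac{dp}{dt} &= v,                            & p(0) &= 10~\mathrm{m},\\
\frac{dv}{dt} &= -\frac{k}{m}\,p = -0.25\,p,   & v(0) &= 0~\mathrm{m/s}.
\end{aligned}
```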
We are interested in running an experiment where we release the mass and observe its position and velocity for 20 seconds. We can simulate this experiment by executing this dynamical system for 20 units of simulation time. The plots on the right-hand side of the image show the simulated trajectories of the mass’s position and velocity over time.
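As a point of comparison for the analog approach discussed below, the same 20-second experiment can be reproduced digitally. The sketch uses semi-implicit Euler integration; the step size is an arbitrary illustrative choice, not something from the original post:

```python
# Digital simulation of the spring-mass system described above.
# From the text: m = 2 kg, k = 0.5 kg/s^2, p(0) = 10 m, v(0) = 0 m/s.
m, k = 2.0, 0.5
p, v = 10.0, 0.0
dt = 0.001                      # 1 ms time step (illustrative choice)

for _ in range(20_000):         # 20 s of simulation time
    a = -(k / m) * p            # spring force F = -k*p, acceleration a = F/m
    v += a * dt                 # dv/dt = -(k/m) * p
    p += v * dt                 # dp/dt = v  (uses updated v: semi-implicit Euler)

print(f"p(20s) = {p:.2f} m, v(20s) = {v:.2f} m/s")
```

The closed-form solution is $p(t) = 10 \cos(0.5\,t)$, so the printed position should land close to $10\cos(10) \approx -8.39$ meters, matching the oscillation visible in the plots.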
### Differential Equation-Solving Analog Devices
A differential equation-solving analog device is a type of reconfigurable analog device which uses the analog behavior of transistors to perform dynamical system simulation. These devices are attractive computational targets because they consume very little energy and have predictable performance characteristics — in fact, we can calculate the time required to perform the dynamical system simulation at compile-time. Such devices are great at computing on analog signals in real-time with very little energy.
Internally, these devices contain collections of digitally configurable analog blocks which may be routed together with digitally programmable interconnects to form a variety of analog circuits. To run a dynamical system on such a device, it must be programmed to implement an analog circuit whose physics matches the dynamics of the target dynamical system. The analog currents and voltages within this circuit implement continuously evolving state variables from the dynamical system. The computation is then run by powering on the device and observing the evolution of the currents and voltages of interest over time.
In this article, we will be targeting the HCDCv2, a differential equation-solving analog device developed by researchers at Columbia University and manufactured by Sendyne Corporation.
### Solving the Mass-Spring System with an Analog Device
The diagram below presents a candidate configuration for the HCDCv2 which implements the mass-spring dynamical system introduced above.
The configuration uses several kinds of analog blocks. The MUL and INT blocks perform multiplication and integration. The COPY, TIN, TOUT, and OBS blocks are special-purpose blocks which are not used for computation. The COPY block produces multiple copies of an analog current. This is necessary because analog currents cannot safely be routed to more than one port. The TIN and TOUT blocks route signals between blocks which cannot be directly connected together through a digitally settable connection. The OBS block makes a signal externally accessible. In the above circuit, the OBS block forwards the signal implementing the position of the oscillating mass to the oscilloscope.
Each block has a digitally settable operating mode which controls the function implemented by the block (green box) and a set of digitally settable fields (yellow callouts). For example, the INT block integrates $x$ when in mm mode but integrates $0.1 \cdot x$ when in hm mode. The MUL block implements $c \cdot x$, where $c$ is a digitally settable field, when in mm mode. We summarize all the functions implemented by the blocks below. Refer to the tables in the appendix for a complete list of block modes and behaviors.
The right hand side of the figure shows the lab setup and the analog signal collected from the oscilloscope. If we compare this analog signal to the expected position of the mass over time (bottom right), we find that it closely tracks the expected trajectory. This execution consumes $0.32~\mu J$ of energy.
Execution Time: All integration in the circuit is performed with respect to hardware time ($t_{hw}$). In the HCDCv2, one unit of hardware time corresponds to $7.93~\mu s$ of wall-clock time. This mapping between hardware time and wall-clock time enables us to compute exactly how much time any computation will take. For example, the above circuit executes 20 simulation time units in $20 \cdot 7.93 = 158.6~\mu s$ of wall-clock time.
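That compile-time predictability amounts to a single multiplication. The helper below (a hypothetical name, not part of any HCDCv2 toolchain) just makes the mapping explicit:

```python
HW_TIME_UNIT_US = 7.93  # one HCDCv2 hardware time unit, in microseconds

def wall_clock_us(sim_time_units):
    """Wall-clock duration, in microseconds, of a run of `sim_time_units`."""
    return sim_time_units * HW_TIME_UNIT_US
```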
Circuit Validation: In this circuit, the trajectories of the analog currents moving through the wires labeled P and V match the trajectories of the position and velocity of the mass in the mass-spring system. We can see this by comparing the dynamics of the analog currents at P and V with the dynamics of the spring-mass system:
On the left, we present the symbolic expressions governing the behavior of the analog currents P and V over time. In these expressions, digitally settable field values are in orange and the labeled signals we are analyzing are written in blue. On the right, we present the dynamics of the position and velocity from the mass-spring system. We can see that while these differential equations don’t syntactically match up, they are semantically equivalent.
### Programming Challenges
Electrical differential equation-solving analog devices like the HCDCv2 are challenging to program directly for several reasons:
• Highly parametric, specialized blocks: To conserve space, the offered electrical analog blocks are highly parametric and often non-trivial. Computational blocks include blocks which implement sets of differential equations and relations, blocks which implement variants of multiplication, and blocks which both scale and integrate a signal. Some computations are not performed with dedicated blocks (e.g. addition) while some blocks don’t perform computation (e.g. copiers).
• Restrictive routing environment: Because wires carrying analog signals affect one another when placed close together, there are a limited number of programmable interconnects on the device. Analog blocks which are far apart on such devices are much harder to connect together than co-located blocks.
• Physical limitations and low-level physical behaviors: The analog blocks in these devices impose frequency, current, and voltage range limitations on the incoming signals and have unique noise, error, and gain characteristics which must be taken into consideration when programming the device. All of these characteristics may change depending on how the analog block in question is programmed.
Due to these issues, even implementing our simple spring-mass system was non-trivial.
### PL Technology to the Rescue!
Stay tuned … in the next blog post we’ll see how programming languages and compilation technology can bridge this gap between the original differential equations and the analog circuits that must implement them faithfully on a differential equation-solving analog device.
Bio: Sara Achour is a PhD candidate at the Computer Science and Artificial Intelligence Laboratory at Massachusetts Institute of Technology (CSAIL MIT). She will be joining Stanford’s Computer Science department as an Assistant Professor starting Summer 2021.
### Appendix: Description of HCDCv2 Blocks
Multiplier (MUL) Block
| mode | port z |
|------|--------|
| mm | $c \cdot x$ |
| mh | $10 \cdot c \cdot x$ |
| hm | $0.1 \cdot c \cdot x$ |
| hh | $c \cdot x$ |
| mmm | $0.5 \cdot x \cdot y$ |
| hmm | $0.05 \cdot x \cdot y$ |
| mhm | $0.05 \cdot x \cdot y$ |
| mmh | $5 \cdot x \cdot y$ |
Integrator (INT) Block
| mode | port z | $z(0)$ |
|------|--------|--------|
| mm | $\int x$ | $2 \cdot z_0$ |
| hm | $\int 0.1 \cdot x$ | $2 \cdot z_0$ |
| mh | $\int 10 \cdot x$ | $20 \cdot z_0$ |
| hh | $\int x$ | $20 \cdot z_0$ |
Current Copier (FAN) Block
| mode | port z | port w |
|------|--------|--------|
| pp | $x$ | $x$ |
| pn | $x$ | $-x$ |
| np | $-x$ | $x$ |
| nn | $-x$ | $-x$ |
Other Blocks (TIN, TOUT, OBS)
| block | port z |
|-------|--------|
| TIN | $x$ |
| TOUT | $x$ |
| OBS | $0.6 \cdot k$ |
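To make the MUL table concrete, here is one way to encode its modes as a lookup. This is a sketch for illustration only; the names are mine, not from any HCDCv2 API, and the gain values come from the table above.

```python
# z = gain * c * x for the constant-coefficient modes,
# z = gain * x * y for the two-input multiply modes.
MUL_SCALE_MODES = {"mm": 1.0, "mh": 10.0, "hm": 0.1, "hh": 1.0}
MUL_PRODUCT_MODES = {"mmm": 0.5, "hmm": 0.05, "mhm": 0.05, "mmh": 5.0}

def mul_block(mode, x, y=None, c=None):
    """Evaluate the MUL block's port-z output for a given operating mode."""
    if mode in MUL_SCALE_MODES:
        return MUL_SCALE_MODES[mode] * c * x
    return MUL_PRODUCT_MODES[mode] * x * y
```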
Disclaimer: These posts are written by individual contributors to share their thoughts on the SIGPLAN blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGPLAN or its parent organization, ACM.
http://clay6.com/qa/32706/refer-the-below-reaction-a-can-be
# Refer to the reaction below. A can be
$\begin{array}{ll}(a)\;\text{Conc. }H_2SO_4&(b)\;\text{Alcoholic KOH}\\(c)\;Et_3N&(d)\;t\text{-Bu-OK}\end{array}$
Hence (a) Conc $H_2SO_4$ is the correct answer.
https://mitpress.mit.edu/books/c-precisely-0
Paperback | $36.00 X | £29.95 | 264 pp. | 8 x 9 in | November 2011 | ISBN: 9780262516860
eBook | $36.00 X | November 2011 | ISBN: 9780262302135
## Overview
C# is an object-oriented programming language that is similar to Java in many respects but more comprehensive and different in most details. This book offers a quick and accessible reference for anyone who wants to know C# in more detail than that provided by a standard textbook. It will be particularly useful for C# learners who are familiar with Java. This second edition has been updated and expanded, reflecting the evolution and extension of the C# programming language. It covers C# versions 3.0 and 4.0 and takes a look ahead at some of the innovations of version 5.0. In particular, it describes asynchronous programming as found in 5.0.
Despite the new material, C# Precisely remains compact and easy to navigate. It describes C# in detail but informally and concisely, presenting lambda expressions, extension methods, anonymous object expressions, object initializers, collection initializers, local variable type inference, type dynamic, type parameter covariance and contravariance, and Linq (language integrated query), among other topics, all in about 250 pages. The book offers more than 250 examples to illustrate both common use and subtle points. Two-page spreads show general rules on the left and relevant examples on the right, maximizing the amount of information accessible at a glance.
The complete, ready-to-run example programs are available at the book’s Web site, http://www.itu.dk/people/sestoft/csharpprecisely/
## About the Authors
Peter Sestoft is Professor of Computer Science and Head of the Software and Systems Section at the IT University of Copenhagen.
Henrik I. Hansen holds master’s degrees in information technology and chemistry.
## Reviews
“A book such as this should always be close at hand, both for experts and for those who have some prior programming experience when tackling a problem.”—Mathew Burns, Times Higher Education
## Endorsements
“Praise for the first edition: Blaise Pascal once wrote, 'I have made this letter longer than usual, because I lack the time to make it short.' Peter Sestoft and Henrik Hansen have taken the time to write a short book on C# that leaves nothing out.”
Anders Hejlsberg, Microsoft Corporation | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16787703335285187, "perplexity": 2469.6249182362803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125532.90/warc/CC-MAIN-20170423031205-00623-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://physics.stackexchange.com/questions/451128/systematic-expansion-of-ei-veck-cdot-vecr-in-atomic-physics-in-terms-of/451140 | # Systematic expansion of $e^{i\vec{k}\cdot\vec{r}}$ in atomic physics in terms of Legendre polynomials and identifying different $l$ terms
In the context of light-matter interaction one often makes the approximation $$e^{i\vec{k}\cdot\vec{r}}\approx 1$$. Keeping higher-order terms in $$e^{i\vec{k}\cdot\vec{r}}$$ gives magnetic dipole, electric quadrupole, etc. transitions.
This makes me believe that $$e^{i\vec{k}\cdot\vec{r}}$$ must have a multipole expansion similar to that of $$|\vec{r}-\vec{r}^\prime|^{-1}$$ in electrostatics, where the latter can be systematically expanded in terms of Legendre polynomials $$P_l(\cos\gamma)$$. Different $$l$$ values lead to different multipole terms. In particular, $$l=0,1,2,\ldots$$ respectively represent the monopole, dipole, quadrupole, etc. terms.
Now, there exists an expansion of $$e^{i\vec{k}\cdot\vec{r}}$$ in atomic physics (c.f. Sakurai) $$e^{i\vec{k}\cdot\vec{r}}=\sum\limits_{l=0}^{\infty}(2l+1)i^l j_l(kr)P_l(\cos\gamma)$$ where $$\gamma$$ is the angle between $$\vec{k}$$ and $$\vec{r}$$, $$j_l(x)$$ are spherical Bessel functions, and in particular, $$j_0(x)=\frac{\sin x}{x}$$.
The problem is that unlike my expectation from electrostatics, $$l=0$$ gives the electric dipole approximation $$e^{i\vec{k}\cdot\vec{r}}= 1$$ and not $$l=1$$. Where am I wrong? Further, can we associate this $$l$$ value to the angular momentum of the emitted/absorbed radiation?
The expression you found in Sakurai's textbook is the correct one and can be further expanded in spherical harmonics, if needed.
The difference with electrostatics is due to the different place at which the "dipole" emerges in the quantum treatment of light-matter interaction.
In electrostatics it is the $$l=1$$ term of the Legendre polynomial expansion of the charge density which gives rise to the dipole term in the multipole expansion of the charge density and then to the dipole term in the electrostatic potential.
When dealing with the interaction between EM radiation and an atomic electron, one starts with the perturbation Hamiltonian, which contains a term linear in the momentum of the electron $$\bf p$$; the relevant matrix elements of the perturbation are then of the kind $$\left< i \right| {\boldsymbol \epsilon} \cdot {\bf p}~ e^{i {\bf k \cdot r}} \left| f \right>$$ ($$\left| f \right>$$ and $$\left< i \right|$$ being the ket and the bra of the final and initial states, and $$\boldsymbol \epsilon$$ the polarization vector). If the exponential is expanded for small values of its argument, retaining only the zeroth-order term $$1$$, and $$\bf p$$ is re-expressed in terms of the commutator of the position $$\bf r$$ with the unperturbed Hamiltonian, one finds that the matrix elements are proportional to $$\left< i \right| {\boldsymbol \epsilon} \cdot {\bf r} \left| f \right>$$.
Therefore the operator whose matrix elements have to be evaluated has the same symmetry as a dipole, with all the known consequences for selection rules etc. In conclusion, the reason that in the case of radiation the $$l=0$$ term corresponds to the dipole approximation, while in electrostatics it is the $$l=1$$ term, is that the functions expanded in Legendre polynomials in the two cases are different and play different roles in the theory, although in both a dipolar-like term appears at some point.
Edit
I see that the previous answer was not considered satisfactory, so I will add a reference to a well-known textbook (although I thought it was not necessary). Many textbooks of QM touch this topic. In Leonard Schiff's Quantum Mechanics, third edition, the calculation can be found in sections 44 and 45.
As I have already indicated, one has to start with the expression for the rate of transition between two states, evaluated using time-dependent perturbation theory. The transitions corresponding to these matrix elements are called electric dipole transitions, since only the matrix elements of the electric dipole moment of the particle ($$e\bf r$$) are involved.
The approximation $$e^{i\vec k \cdot \vec r}=1$$, in other words $$\vec k= 0$$ or $$\lambda = \infty$$, is made for atomic transitions because for such transitions the wavelength is much larger than the size of the atom. The dipole character of the transition is caused by the fact that the perturbing hamiltonian is proportional to $$\vec A \cdot \vec p$$, where $$\vec A$$ is the electromagnetic vector potential.
It is the presence of the $$\vec p$$ operator that is responsible for the $$\Delta l =\pm 1$$ selection rule for dipole transitions, although the very small deviations from the $$\vec k= 0$$ approximation in principle also contribute to dipole-forbidden transitions.
Yes, it's called a spherical wave expansion. It's given by
$$e^{i\mathbf{k\cdot r}} = 4\pi \sum_{l=0}^\infty i^l j_l(kr) \sum_{m=-l}^l Y^*_{lm}(\theta, \phi) Y_{lm}(\theta', \phi')$$
where the $$j_l$$ are spherical Bessel functions, the primed coordinates are the angles of the $$\mathbf{k}$$ vector, and the unprimed are those of $$\mathbf{r}$$. Recall that the spherical harmonics are built from the associated Legendre polynomials (i.e. the case without azimuthal symmetry).
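Either form of the expansion is easy to check numerically. The pure-Python sketch below verifies the Legendre-polynomial form quoted in the question, computing $$j_l$$ from its power series and $$P_l$$ by the usual three-term recurrence (the function names and truncation order are my choices):

```python
import cmath
import math

def spherical_jl(l, x, terms=60):
    """j_l(x) from its power series: x^l * sum_s (-x^2/2)^s / (s! * (2l+2s+1)!!)."""
    total = 0.0
    for s in range(terms):
        dfact = 1.0
        for m in range(2 * l + 2 * s + 1, 0, -2):  # (2l+2s+1)!!
            dfact *= m
        total += (-x * x / 2) ** s / (math.factorial(s) * dfact)
    return x ** l * total

def legendre(l, u):
    """P_l(u) via the recurrence (n+1) P_{n+1} = (2n+1) u P_n - n P_{n-1}."""
    p_prev, p = 1.0, u
    if l == 0:
        return p_prev
    for n in range(1, l):
        p_prev, p = p, ((2 * n + 1) * u * p - n * p_prev) / (n + 1)
    return p

def partial_wave_sum(kr, u, lmax=30):
    """Truncated sum over l of (2l+1) i^l j_l(kr) P_l(u); should equal exp(i*kr*u)."""
    return sum((2 * l + 1) * (1j ** l) * spherical_jl(l, kr) * legendre(l, u)
               for l in range(lmax + 1))
```

For, say, $$kr = 2.7$$ and $$\cos\gamma = 0.4$$, the truncated sum reproduces $$e^{ikr\cos\gamma}$$ to essentially machine precision, since $$j_l(kr)$$ dies off rapidly once $$l \gg kr$$.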
http://math.stackexchange.com/questions/72636/compute-12-32-52-cdots-2n-12-by-mathematical-induction?answertab=active | # Compute $1^2 + 3^2+ 5^2 + \cdots + (2n-1)^2$ by mathematical induction
I am doing mathematical induction. I am stuck with the question below. The left hand side is not getting equal to the right hand side.
Please guide me how to do it further.
$1^2 + 3^2+ 5^2 + \cdots + (2n-1)^2 = \frac{1}{3}n(2n-1)(2n+1)$.
Sol:
$P(n):\ 1^2 + 3^2 + 5^2 + \cdots + (2n-1)^2 = \frac{1}{3}n(2n-1)(2n+1)$.
For $n = n_0 = 1$:
$$P(1):\ \frac{1}{3}\cdot 1\cdot(2\cdot 1-1)(2\cdot 1+1) = \frac{1}{3}\cdot 3 = 1 = 1^2.$$ Hence it is true for $n = n_0 = 1$.
Let it be true for $n=k$: $$P(k): 1^2 + 3^2 + 5^2 + \cdots + (2k-1)^2 = \frac{1}{3}k(2k-1)(2k+1).$$ We have to prove that it is true for $P(k+1)$. $$P(k+1): 1^2+3^2+5^2+\cdots+(2k+1)^2 = \frac{1}{3}(k+1)(2k+1)(2k+3)\tag{A}.$$
Taking LHS: \begin{align*} 1^2 + 3^2 + 5^2 + \cdots + (2k+1)^2 &= 1^2+3^2+5^2+\cdots + (2k+1)^2\\ &= 1^2 + 3^2 + 5^2 + \cdots + (2k-1)^2 + (2k+1)^2\\ &= \frac{1}{3}k(2k-1)(2k+1) + (2k+1)^2\\ &=\frac{k(2k-1)(2k+1)+3(2k+1)^2}{3}\\ &=\frac{(2k+1)}{3}\left[k(2k-1) + 3(2k+1)\right]\\ &=\frac{(2k+1)}{3}\left[2k^2 - k + 6k + 3\right]\\ &=\frac{1}{3}(2k+1)(2k^2 +5k + 3)\\ &=\frac{1}{3}(2k+1)(k+1)\left(k+\frac{3}{2}\right) \tag{B} \end{align*}
EDIT:
Solving EQ (A):
$=\frac{1}{3}(2k^2+5k+3)(2k+1) \tag{C}$
Comparing EQ(B) and EQ(C)
Hence proved that it is true for $n = k+1.$
Thus the proposition is true for all $n \ge 1$.
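A quick numerical spot-check of the closed form (independent of the induction, just a sanity check):

```python
def odd_square_sum(n):
    """1^2 + 3^2 + ... + (2n-1)^2, summed directly."""
    return sum((2 * k - 1) ** 2 for k in range(1, n + 1))

# Compare the direct sum with n(2n-1)(2n+1)/3 for many values of n.
for n in range(1, 200):
    assert odd_square_sum(n) == n * (2 * n - 1) * (2 * n + 1) // 3
```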
Thanks.
-
@Arturo: Your answer is more detailed than mine. In a case like this, don't you agree one should write an answer instead of a comment since there's not much more to say and the question is likely to remain unanswered otherwise? – joriki Oct 14 '11 at 16:58
@joriki: I was kind of hoping the OP might answer his own question once he got the right answer... – Arturo Magidin Oct 14 '11 at 17:02
@Arturo: a) Wow, typesetting all that stuff shows true dedication :-). b) When I said your answer is more detailed than mine, I meant that it's more helpful -- why did you remove it? – joriki Oct 14 '11 at 17:14
I'm with Arturo: since we're allowing people to answer their own questions anyway, it seems to be a good thing to coax people into answering their own questions with appropriate nudges... – J. M. Oct 14 '11 at 17:16
@J. M.: It seems to me that this is based on an unrealistic optimism about how people deal with this site. I see several questions every day that don't get answered because someone wrote the answer in a comment, and I think I have yet to see even one question where the OP answered their own question because someone provided the answer in a comment. – joriki Oct 14 '11 at 17:21
Everything is OK except for the very last line. You somehow lost a factor of two. The penultimate line is already the result you want, since $2k^2+5k+3=(k+1)(2k+3)$.
-
You beat me to it! I think I know how the factor of 2 got `lost': the OP solved the quadratic equation $2k^2+5k+3=0$, giving the two roots $k=-3/2$ and $k=-1$, which suggests the (incorrect) factorisation $(k+3/2)(k+1)$. The moral of the story is that there is a difference between factorising an expression and solving an equation, even if the two processes are intimately related. – Shane O Rourke Oct 14 '11 at 17:08
@joriki: Thanks, I actually used a calculator to solve that and as Shane mentioned it gave the above results. – Fahad Uddin Oct 14 '11 at 17:11
@ShaneORourke: Thanks a lot. I had mostly been solving quadratic equation instead of factorizing them. I think this is the first time it gave different result. – Fahad Uddin Oct 14 '11 at 17:13
@Akito: I'm not sure I understand your first comment. You used a calculator to solve what? The quadratic equation that Shane wrote? And it gave which above results? The two roots that Shane wrote? Your statement "it gave the above results" seems to imply that you think there's something about those results, but as Shane explained, it's not those results that are wrong but how you interpreted them and what you did with them. – joriki Oct 14 '11 at 17:16
@Akito: If $r_1$ and $r_2$ are the roots of $ax^2+bx+c$, then $(x-r_1)(x-r_2)$ can only equal $ax^2+bx+c$ if $a=1$ (just look at the leading coefficient!). In general, $$(x-r_1)(x-r_2) = x^2 + \frac{b}{a}x + \frac{c}{a}.$$Every time your quadratic is not monic, the method you used will fail; you have to keep track of that leading coefficient. – Arturo Magidin Oct 14 '11 at 17:18
NOTE: I am not saying anything different, before someone start commenting that my answer is not any different.
\begin{align*} 1^2 + 3^2 + 5^2 + \cdots + (2k+1)^2 &= 1^2+3^2+5^2+\cdots + (2k+1)^2\\ &= 1^2 + 3^2 + 5^2 + \cdots + (2k-1)^2 + (2k+1)^2\\ &= \frac{1}{3}k(2k-1)(2k+1) + (2k+1)^2\\ &=\frac{k(2k-1)(2k+1)+3(2k+1)^2}{3}\\ &=\frac{(2k+1)}{3}\left[k(2k-1) + 3(2k+1)\right]\\ &=\frac{(2k+1)}{3}\left[2k^2 - k + 6k + 3\right]\\ &=\frac{1}{3}(2k+1)(2k^2 +5k + 3)\\ &=\frac{1}{3}(2k+1) \hspace{3pt}\left[(k+1)(2k+3)\right] \\ &= \frac{1}{3} (k+1)(2(k+1)-1)(2(k+1)+1) \end{align*}
The last line shows that the result is true for $n=k+1$
-
Comment: this answer is not any different. – Did Mar 8 '12 at 16:03
So I guess you did not read my NOTE: I am not saying anything different, before someone start commenting that my answer is not any different. – Kirthi Raman Mar 10 '12 at 16:59
I did. – Did Mar 10 '12 at 20:00
http://mathematica.stackexchange.com/questions/16556/any-built-in-function-to-generate-successive-sublists-from-a-list?answertab=active | # Any built-in function to generate successive sublists from a list?
Given
lst = {a, b, c, d}
I'd like to generate
{{a}, {a, b}, {a, b, c}, {a, b, c, d}}
but using built-in functions only, such as Subsets, Partition, Tuples, Permutations or any other such command you choose. But it has to be done only using built-in commands. You can use a pure function, if inside a built-in command, i.e. part of the command arguments. That is OK.
It is of course trivial to do this by direct coding. One way can be
lst[[1 ;; #]] & /@ Range[Length[lst]]
(* {{a}, {a, b}, {a, b, c}, {a, b, c, d}} *)
or even
LowerTriangularize[Table[lst, {i, Length[lst]}]] /. 0 -> Sequence @@ {}
(* {{a}, {a, b}, {a, b, c}, {a, b, c, d}} *)
But I have the feeling there is a command to do this more directly as it looks like a common operation, but my limited search could not find one so far.
Sorry in advance if this was asked before. Searching is hard for such general questions.
-
How about something like Rest@FoldList[Append, {}, {a, b, c, d}]? More succinctly, FoldList[Append,{First@#},Rest[#]]&[{a,b,c,d}] – Leonid Shifrin Dec 18 '12 at 17:40
Somewhat related: mathematica.stackexchange.com/q/7511/121 – Mr.Wizard Dec 19 '12 at 2:19
lst={a,b,c,d};
ReplaceList[lst,{x__, ___} :> {x}]
Speaking of "common operation":
Table[lst[[;; i]], {i, Length@lst}]
-
I was playing with a pattern based solution, +1 for ReplaceList. – image_doctor Dec 18 '12 at 19:27
+1, there's your 20k, btw. I've always had trouble with ReplaceList, and for me it is the second most annoying function that sounds useful. The most annoying being MapAt ... Although, I've more success with MapAt than ReplaceList. – rcollyer Dec 18 '12 at 19:53
@rcollyer, @image_doctor, Nasser, thanks for the votes. Rcollyer, for me too -- it always takes several attempts until I get MapAt and ReplaceList right. And, i don't remember ReplaceList winning any speed contests. – kguler Dec 18 '12 at 20:12
I found Position is a great way to get MapAt to behave itself, provided you can figure out how to specify what positions you need ... – rcollyer Dec 18 '12 at 20:27
@rcollyer Be careful with MapAt though. – Leonid Shifrin Dec 18 '12 at 20:29
This is not a built-in function to do it but it fits the criteria of only using built-in functions. It avoids using patterns, mapping constructs and such things.
Maybe in the future ListCorrelate can accept functions instead of heads (e.g. applying Plus to a list by default). I think that would make it more useful (but I am a beginner Mathematica user, so who am I to hold such opinions).
lst = {a, b, c, d};
DeleteCases[
 ListCorrelate[lst, ConstantArray[1, Length@lst], 1, 0, Times, List],
 0, {2}]
-
A joke solution:
Outer[Take, {{a, b, c, d, e}}, Range[5], 1] // First
-
A variant using Partition:
First[Partition[list,#]]& /@ Range@Length@list
-
What about Accumulate:
Function[lst, {{lst[[1]]}}~Join~Rest[Accumulate[lst] /. Plus -> List]]@{a, b, c, d, e}
Unfortunately it doesn't accept a custom function other than Plus and will not work for numerical list...
-
I am not sure this wins any speed contests, but it is a purely functional solution:
FoldList[#1~Join~{#2} &, {First@#}, Rest@#]& @ {a, b, c, d, e}
(* {{a}, {a, b}, {a, b, c}, {a, b, c, d}, {a, b, c, d, e}} *)
-
Ok, since you posted this, I am liberated from doing that (I gave it in comments to the question), and can happily upvote : +1. – Leonid Shifrin Dec 18 '12 at 21:03
+1, a variation: FoldList[Flatten@{##} &, {First@list}, Rest@list]? or FoldList[Sequence @@@ {##} &, {First@list}, Rest@list]? – kguler Dec 18 '12 at 21:06
@kguler I'd migrate list outside of FoldList, but I hate writing more than I have too. – rcollyer Dec 18 '12 at 21:15
@LeonidShifrin teach me to look at the comments before I post something ... Speed wise, though, is Join or Append faster. What about kguler's alternatives? – rcollyer Dec 18 '12 at 21:16
I think @kguler's use of ReplaceList is very elegant. For generic lists, should be quite fast. For packed arrays, Join and Append should be faster, since ReplaceList can not make use of those and will likely unpack. – Leonid Shifrin Dec 18 '12 at 21:20
A variant using Take.
list~Take~# & /@ Range@Length@list
{{a}, {a, b}, {a, b, c}, {a, b, c, d}}
One using NestList:
NestList[Most, list, Length@list - 1]
{{a, b, c, d}, {a, b, c}, {a, b}, {a}}
-
Subsets takes an optional 3rd argument as Subsets[list, {n}, k] that gives you the kth sublist of length n. Since your sublists are in sequence, you'll always need k = 1. You can then use this as:
MapIndexed[First@Subsets[list, #2, 1] &, list]
(* {{a}, {a, b}, {a, b, c}, {a, b, c, d}} *)
Another alternative would be:
Reverse@Most@NestWhileList[Most, list, # != {} &]
-
Why are you using MapIndexed? Why not Subsets[list, #, 1][[1]]& /@ Range[Length@list]? It seems cleaner. – rcollyer Dec 18 '12 at 20:10
"Cleaner" is subjective and in this case, I positively dislike having to use Range@Length@foo when I don't have to. Far cleaner to use MapIndexed and ignore #1 – rm -rf Dec 18 '12 at 20:20
+1. But the real clean solution here is to use linked lists, since all of the suggested ones have quadratic complexity in the length of the list. – Leonid Shifrin Dec 18 '12 at 20:28
@LeonidShifrin I thought of something like Flatten /@ Rest@FoldList[List, {}, {a, b, c, d}], but it was similar to your solution above and was hoping you'd post it – rm -rf Dec 18 '12 at 20:39
FoldList by itself works. – rcollyer Dec 18 '12 at 20:41
http://www.rdgao.com/rgaoeu2014-aftermath/ | Well.
I (obviously) did not fulfill my commitment of writing once a week during these 4 weeks of being on the road. Not even once.
I am, however, fully invested in completely and thoroughly chronicling my journey, if not as a very humble travel guide, then at least to verbalize my adventure to bring some kind of closure. But after the four weeks, where and how do I even start? How can I tell my stories in the ways that they deserve to be told? How do I make the distinction between the physical locations and the thoughts that were inspired by them? Is there even one?
Well, to avoid further delays, I’m just going to start with the easiest: the quantifiable.
# Trip Stats
Where Did I Go?
I deviated from my planned course a bit, the most significant change being the extra time in Switzerland, at the expense of Milan and Venice. The changes were not so surprising given how little I actually planned, but what WAS surprising was how smoothly the whole operation went as I did it on the fly. Here is a map of the route I took and the cities that I had the pleasure of visiting (left), and my actual schedule (right).
Total Distance (As-the-Crow-Flies): 4194 km
Total Time (In Europe): 26 days and 18 hours
My Travel Route. Very proud of my almost perfectly straight course due northwest.
My Actual Schedule. Note the cities are colored by their country’s flag :D
Here are the cities typed out; in brackets is the name in the local language, if there is one:
Spain : Barcelona
Italy : Rome (Roma), Naples (Napoli), Pisa, Florence (Firenze), Venice (Venezia), Milan (Milano)
Switzerland : Interlaken, Basel
France : Paris
Netherlands : Amsterdam
Belgium : Brussels (Bruxelles)
UK : London, Lancing
Ireland : Dublin (Baile Átha Cliath … yeah I don’t know)
# of Countries : 8
# of Cities : 15
So aside from Ireland being Ireland, the English really fucked it up when they did the translation for Italian city names. I actually quite like the Italian versions. Actually, hearing Italians speak Italian is just great. Also, apologies to my Irish readers, but I finally understood the distinction between Ireland and Northern Ireland.
Luggage
In the previous post (pre-trip), I put up a photo of all the things I packed…then I ended up taking out a few other things. Nothing major, just the second pair of pants, as well as the clothes that I wore on the day of flying out. The altered contents are shown here (left), as well as my backpacks (right). Oh, you must be wondering why all my shit is in ziploc bags. It took me a while to find these vacuum bags in the stores, but they were actually great for organizing everything, and squeezing the air out saved tons of space.
Shit I brought with me.
My Trusty Backpacks: Ray the day-pack (gray) and Lou the big pack (blue)
Almost everything I packed was put to good use, luckily. Some of the honorable mentions, aside from the essential clothing and equipment: Clif bars, bike light, selfie-stick, and earplugs. But, of course, some of the things never even breathed European air, which brings me to my:
“Why-did-I-think-I-would-need-this” List
1) Bluetooth keyboard: alright, I’m just going to put it out there that unless you’re a professional writer, any hope of writing on the road is a pipe dream. I don’t mean that somehow writers can handle their craft better, but to put it simply, when you’re on the road, if you don’t have to do it, you won’t. I ended up lugging this thing around and used it once, during my boat ride from Barcelona to Rome, and that was during the last two hours of the 20-hour journey, simply because I couldn’t sleep anymore.
2) Sleep-mask: I put them on, then I had an irrational fear of being raped, robbed, and/or having dicks drawn on my face. Bottom of the bag you go. People are very considerate to not turn the lights on at night.
3) Tide-To-Go: just don’t. Don’t even use this at home. I don’t know what I was thinking, I must have forgotten that Tide-To-Go would really only be useful on white shirts, and that I only brought one white shirt, AND that Tide-To-Go actually makes white shirts yellow. Dropped some pizza on myself one day, I thought, “perfect, I’ll use my Tide-To-Go.” That slice of pizza is now forever commemorated by a bright yellow spot on my white shirt.
Other things I never used : swimming trunks (forgot to bring to the beach the one time I went), diarrhea meds (yay), Tylenol (yay again). In general, my advice for packing for a trip in Europe is this: they have everything that we have over here.
Money Matters
Life-changing experiences don’t come cheap; the total trip came to a hair under $5000 (literally, a hair). When I first started planning the whole thing, I had budgeted $4000, but soon after I purchased my plane tickets and the miscellaneous items, I realized that it would be closer to 5k. Daily spending was about $100 per day, which included food, lodging, museum tickets, souvenirs, etc. Factoring in some odd train tickets and whatnot, things got rounded up nicely to a total of $5000.
Backpacks: $150
Eurail (Train) Pass: $440
Plane Tickets: $980
Credit Card Purchases: $1987.30
Debit & Withdrawn Cash: $1331.90
Total: $4999.20
I meant to write about this before I left. The following is the research I did beforehand, which, after seeing my transactions, I’m not sure is completely accurate. Basically, there are 3 common ways of spending your money in Europe:
1. Buying euros at home and taking the cash to Europe
- Bank uses its own exchange rate
2. Withdrawing euros from European ATM
- Uses foreign bank rate (whose ATM you’re using)
- Your home bank charges you a flat fee for every international withdrawal, which was $5 for TD. I believe some banks don’t have this charge, like Chase.
3. Credit card
- Visa charges something like 2.5% for every transaction, and uses its own rate. Just an interesting side note: it took me 2 phone calls, a visit to TD, and going on Visa’s website several times to find out this little fact; apparently nobody knows what the commission is. UPDATE: apparently this is wrong. Turns out, TD (or your bank) tacks an additional rate onto Visa’s own, as specified in your credit card user agreement, but the conversion rate is the one cited by Visa.

Just to clarify a point in case anyone is unfamiliar: banks use different rates for buying and selling. For example, if the actual CAD-to-euro exchange rate is 1.5 CAD for 1 euro, a Canadian bank might sell you euros at 1.55 CAD for 1 euro, and buy your euros at 1.45 CAD for 1 euro. For comparison, here is a sampling of the rates I got through the different methods, compared to the actual. Of course, these rates fluctuate every day, but usually not by a huge amount.

| Rate | Actual (Google, Today) | TD (Today) | Visa Rate I Got | ATM Rate I Got |
|----------------|------|--------|----------------|--------|
| CAD to 1 Euro  | 1.47 | 1.5117 | 1.4911, 1.4933 | 1.4903 |
| CAD to 1 Franc | 1.21 | 1.2440 | 1.2429         | 1.2228 |
| CAD to 1 Pound | 1.84 | 1.8864 | 1.9760, 1.9824 | 1.8925 |

Looking at this, it seems the foreign ATM is the way to go. But in the end, it becomes quite inconsequential if you consider that a difference of 2% is about $10 for every $500 you spend. For some reason, though, I got screwed by TD/Visa on the pound; I don’t know why that is, maybe TD’s got a surplus of it back in Canada. Either way, I don’t know why I spent so long detailing this. I’d recommend just doing what I did: withdraw some cash and use your credit card as much as possible. Finding the balance between having enough cash to be flexible, but not enough to break your bank if it gets stolen, is the key.
Eurail Pass

While I’m on the subject of money, I’ll just talk a bit about the Eurail train pass. The pass I got was 5 days, 4 countries, meaning on 5 non-consecutive days I can take as many trains as I want, in the 4 chosen countries (I had Italy, Switzerland, France, and Belgium-Netherlands-Luxembourg). I’m not sure why those last 3 count as one country; maybe it’s under the same train operator. All that was for $440, or around 250 euros, which means for the thing to be worthwhile, I should take more than 50 euros worth of train on each of the 5 days.
So was it worth it? Well, firstly, train tickets are extremely cheap in Italy, and very rarely will you find a single second-class ticket for more than 60 euros, unless you’re going across the country on one ride. I used a day for going from Naples to Pisa then to Florence, which was about 60 euros altogether. So if you’re traveling in Italy, it’ll definitely take some work to make it worth it, especially if you plan to stay in a city for a while. Trains in Switzerland and France are a lot more expensive, but here’s the kick in the balls: almost all high-speed (i.e., direct) trains require a reservation and a reservation fee, which the pass does not cover. This was the case in Italy too, going for about 10 euros. But in France and the Northern countries, they can go up to 40-50 euros, for just the reservation. ALSO, France has a limited number of seats available for reservation using the pass, so while a train might be empty, the passholder section might be fully booked out. For example, I took a Basel to Paris train, which consisted of a regional segment and a high-speed segment. The full price is 110 euros, but the reservation alone was 40 euros. I was too cheap, so I opted to reserve a seat on the high-speed train to the stop before Paris and take the regional to Paris (which takes an hour longer), which only cost 10 euros, but they ended up letting me stay on all the way anyway, so that was nice.
All in all, the rule of thumb is that if you’ll be taking a lot of regional (slower) trains, then the pass is great since you don’t need to reserve and you can just hop on the train, see the city, and go somewhere else. But if you’re traveling in the Northern countries and taking direct trains, it might just be better to buy them as you go. In the end, I think I still saved some money, but I was committed to using the pass only when the ticket was going to be over 50 euros, which took some effort. I do, however, highly recommend the free Eurail/InterRail Rail Planner app. It’s not 100% up to date and doesn’t have all the regional trains, but the biggest advantage of it is that it can be used offline, so you can plan your trip on the fly even if you’re not currently situated in a hostel.
Other Fun Stats
# of days spent in Italy: 9
# of consecutive days I had pizza: 7
# of consecutive days I had pasta: 7
Lowest elevation achieved on foot: 2m (Amsterdam)
Highest: 2970m (Schilthorn, Switzerland)
#### Sample records for availability index mwai
1. Development of the Metropolitan Water Availability Index (MWAI) and Short-term Assessment with Multi-scale Remote Sensing Technologies
Science.gov (United States)
Global climate change will change environmental conditions including temperature, precipitation, surface radiation, humidity, soil moisture, and sea level, and impact significantly the regional-scale hydrologic processes such as evapotranspiration (ET), runoff, groundwater levels...
2. Aerobic and anaerobic incubation: Biological indexes of soil nitrogen availability
Directory of Open Access Journals (Sweden)
Kresović Mirjana M.
2005-01-01
Full Text Available Our research was carried out on brown forest soil that had been used in long-term experiments, set up according to a specified fertilization system, for over 30 years. We chose those experiment variants in which quantities of nitrogen fertilizers were gradually increased. The soil samples taken from 0 cm to 30 cm depth were used to determine biological indexes of nitrogen availability (aerobic and anaerobic incubation). The same samples were also used for pot experiments with oat. Plant and soil parameters obtained in controlled conditions were used to determine the reliability of the biological indexes in measuring soil nitrogen availability. On the grounds of correlation analysis, it can be concluded that the biological index of nitrogen availability obtained by anaerobic incubation (without subtraction of the initial content of available nitrogen) of the investigated brown forest soil is a reliable indicator of soil nitrogen availability. That is not the case with aerobic incubation, for which reliability has not been established.
3. Glycaemic index of four commercially available breads in Malaysia.
Science.gov (United States)
Yusof, Barakatun Nisak Mohd; Abd Talib, Ruzita; Karim, Norimah A; Kamarudin, Nor Azmi; Arshad, Fatimah
2009-09-01
This study was carried out to determine the blood glucose response and glycaemic index (GI) values of four types of commercially available breads in Malaysia. Twelve healthy volunteers (six men, six women; body mass index, 21.9±1.6 kg/m(2); age, 22.9±1.7 years) participated in this study. The breads tested were multi-grains bread (M-Grains), wholemeal bread (WM), wholemeal bread with oatmeal (WM-Oat) and white bread (WB). The subjects were studied on seven different occasions (four tests for the tested breads and three repeated tests of the reference food) after an overnight fast. Capillary blood samples were taken immediately before (0 min) and 15, 30, 45, 60, 90 and 120 min after consumption of the test foods. The blood glucose response was obtained by calculating the incremental area under the curve. The GI values were determined according to the standardized methodology. Our results showed that the M-Grains and WM-Oat could be categorized as intermediate-GI foods while the WM and WB breads were high-GI foods. The GI of M-Grains (56±6.2) and WM-Oat (67±6.9) were significantly lower than that of the reference food (glucose; GI = 100) (P < 0.05), whereas no significant difference was found between the reference food and the GI of WM (85±5.9) and WB (82±6.5) (P > 0.05). Among the tested breads, the GI values of M-Grains and WM-Oat were significantly lower (P < 0.05) than those of the other tested foods.
4. 32 CFR 701.65 - Availability, public inspection, and indexing of other documents affecting the public.
Science.gov (United States)
2010-07-01
... 32 National Defense 5 2010-07-01 2010-07-01 false Availability, public inspection, and indexing of... Indexing, Public Inspection, and Federal Register Publication of Department of the Navy Directives and Other Documents Affecting the Public § 701.65 Availability, public inspection, and indexing of...
5. SeaDataNet network services monitoring: Definition and Implementation of Service availability index
Science.gov (United States)
Lykiardopoulos, Angelos; Mpalopoulou, Stavroula; Vavilis, Panagiotis; Pantazi, Maria; Iona, Sissy
2014-05-01
6. Predicting chronic stinger syndrome using the mean subaxial space available for the cord index.
Science.gov (United States)
Greenberg, Jared; Leung, Dan; Kendall, Jenny
2011-05-01
A 21-year-old division I collegiate football player who had a history of several stingers presented with 5 days of persistent left neck and shoulder pain associated with paresthesias and upper extremity weakness. His symptoms began immediately during a game when he was struck on the right side of his helmet, which induced a compression-extension mechanism of injury to his neck. Clinical and electrodiagnostic evaluation was consistent with a left C5 radiculopathy, but magnetic resonance imaging of the cervical spine yielded normal results. The mean subaxial cervical space available for the cord (MSCSAC) index is a novel tool to predict chronic stinger syndrome. It is calculated by subtracting the sagittal diameter of the spinal cord from the disc-level sagittal diameter of the spinal canal at levels C3 through C6 and then averaging these values. A cutoff of < 4.3 mm has been shown to predict a greater-than-13-fold increase in risk of developing chronic stinger syndrome. This patient had a MSCSAC index of 3.2 mm, which correlated with his history of multiple stingers. The MSCSAC index may be a useful tool to help counsel athletes on the risk of developing future stingers, although more extensive research on this measurement tool is indicated.
7. Silicate fertilization of tropical soils: silicon availability and recovery index of sugarcane
Directory of Open Access Journals (Sweden)
Mônica Sartori de Camargo
2013-10-01
Full Text Available Sugarcane is considered a Si-accumulating plant, but in Brazil, where several soil types are used for cultivation, there is little information about silicon (Si) fertilization. The objectives of this study were to evaluate the silicon availability, uptake and recovery index of Si from the applied silicate on tropical soils with and without silicate fertilization, in three crops. The experiments in pots (100 L) were performed with specific Si rates (0, 185, 370 and 555 kg ha-1 Si) and three soils (Quartzipsamment-Q, 6 % clay; Rhodic Hapludox-RH, 22 % clay; and Rhodic Acrudox-RA, 68 % clay), with four replications. The silicon source was Ca-Mg silicate. The same Ca and Mg quantities were applied to all pots, with lime and/or MgCl2, when necessary. Sugarcane was harvested in the plant cane and first- and second-ratoon crops. The silicon rates increased soil Si availability and Si uptake by sugarcane and had a strong residual effect. The contents of soluble Si were reduced by harvesting and increased with silicate application in the following decreasing order: Q>RH>RA. The silicate rates promoted an increase in soluble Si-acetic acid at harvest for all crops and in all soils, except RA. The amounts of Si-CaCl2 were not influenced by silicate in the ratoon crops. The plant Si uptake increased according to the Si rates and was highest in RA at all harvests. The recovery index of applied Si (RI) of sugarcane increased over time, and was highest in RA.
8. Root-zone plant available water estimation using the SMOS-derived soil water index
Science.gov (United States)
González-Zamora, Ángel; Sánchez, Nilda; Martínez-Fernández, José; Wagner, Wolfgang
2016-10-01
Currently, there are several space missions capable of measuring surface soil moisture, owing to the relevance of this variable in meteorology, hydrology and agriculture. However, the Plant Available Water (PAW), which in some fields of application could be more important than the soil moisture itself, cannot be directly measured by remote sensing. Considering the root zone as the first 50 cm of the soil, in this study, the PAW at 25 cm and 50 cm and integrated between 0 and 50 cm of soil depth was estimated using the surface soil moisture provided by the Soil Moisture Ocean Salinity (SMOS) mission. For this purpose, the Soil Water Index (SWI) has been used as a proxy of the root-zone soil moisture, involving the selection of an optimal T (Topt), which can be interpreted as a characteristic soil water travel time. In this research, several tests using the correlation coefficient (R), the Nash-Sutcliffe score (NS), several error estimators and bias as predictor metrics were applied to obtain the Topt, making a comprehensive study of the T parameter. After analyzing the results, some differences were found between the Topt obtained using R and NS as decision metrics, and that obtained using the errors and bias, but the SWI showed good results as an estimator of the root-zone soil moisture. This index showed good agreement, with an R between 0.60 and 0.88. The method was tested from January 2010 to December 2014, using the database of the Soil Moisture Measurements Stations Network of the University of Salamanca (REMEDHUS) in Spain. The PAW estimation showed good agreement with the in situ measurements, following closely the dry-downs and wetting-up events, with R ranging between 0.60 and 0.92, and error values lower than 0.05 m3m-3. A slight underestimation was observed for both the PAW and root-zone soil moisture at the different depths; this could be explained by the underestimation pattern observed with the SMOS L2 soil moisture product, in line with previous
9. Association of central serotonin transporter availability and body mass index in healthy Europeans
DEFF Research Database (Denmark)
Hesse, Swen; van de Giessen, Elsmarieke; Zientek, Franziska
2014-01-01
UNLABELLED: Serotonin-mediated mechanisms, in particular via the serotonin transporter (SERT), are thought to have an effect on food intake and play an important role in the pathophysiology of obesity. However, imaging studies that examined the correlation between body mass index (BMI) and SERT...... are sparse and provided contradictory results. The aim of this study was to further test the association between SERT and BMI in a large cohort of healthy subjects. METHODS: 127 subjects of the ENC DAT database (58 females, age 52 ± 18 years, range 20-83, BMI 25.2 ± 3.8 kg/m(2), range 18.2-41.1) were...
10. Evaluation of nutrient index using organic carbon, available P and available K concentrations as a measure of soil fertility in Varahi River basin, India
Directory of Open Access Journals (Sweden)
P. Ravikumar
2013-12-01
Full Text Available Varahi River basin is in the midst of Udupi district in the western part of Karnataka state, covering parts of Kundapura and Udupi taluks in Udupi District, Karnataka, India. Spatial distributions of twenty physical and chemical properties were examined in the soil samples of selected agricultural fields at 28 different locations in the Varahi River basin. The present study revealed that there is not much variation in the soil fertility status of soils developed on various landforms in the area, as the soils had low to medium organic carbon (0.06 to 1.20 %) and available nitrogen (6.27 to 25.09 Kg/ha) content, low to medium available P (2.24 to 94.08 Kg/ha), and deficient to doubtful available K (20.10 - 412.3 Kg/ha) contents. The soils of the Varahi River basin were characterized as low-medium-low (LML) category based on the nutrient index calculated w.r.t. available organic carbon, available P and available K. Further, the Sodium Absorption Ratio (SAR) and Exchangeable Sodium Percentage (ESP) indicated that the soils were excellent for irrigation.
11. Web Search Engines and Indexing and Ranking the Content Object Including Metadata Elements Available at the Dynamic Information Environments
Directory of Open Access Journals (Sweden)
2012-10-01
12. Indexed
CERN Document Server
Hagy, Jessica
2008-01-01
Jessica Hagy is a different kind of thinker. She has an astonishing talent for visualizing relationships, capturing in pictures what is difficult for most of us to express in words. At indexed.blogspot.com, she posts charts, graphs, and Venn diagrams drawn on index cards that reveal in a simple and intuitive way the large and small truths of modern life. Praised throughout the blogosphere as “brilliant,” “incredibly creative,” and “comic genius,” Jessica turns her incisive, deadpan sense of humor on everything from office politics to relationships to religion. With new material along with some of Jessica’s greatest hits, this utterly unique book will thrill readers who demand humor that makes them both laugh and think.
13. Indexing Publicly Available Health Data with Medical Subject Headings (MeSH): An Evaluation of Term Coverage.
Science.gov (United States)
Marc, David T; Zhang, Rui; Beattie, James; Gatewood, Laël C; Khairat, Saif S
2015-01-01
As part of the Open Government Initiative, the United States federal government published datasets to increase collaboration, transparency, consumer participation, and research, which are available online at HealthData.gov. Currently, HealthData.gov does not adequately support the accessibility goal of the Open Government Initiative due to issues in retrieving relevant data, caused by inadequate cataloguing and a lack of indexing with a standardized terminology. Given the commonalities between the HealthData.gov and MEDLINE metadata, Medical Subject Headings (MeSH) may offer an indexing solution, but there needs to be a formal evaluation of the efficacy of MeSH for covering the dataset concepts. The purpose of this study was to determine whether MeSH adequately covers the HealthData.gov concepts. The nouns and noun phrases from the HealthData.gov metadata were extracted and mapped to MeSH using MetaMap. The frequencies of exact, partial, and no matches with MeSH terms were determined. The results of this study revealed that about 70% of the HealthData.gov concepts partially or exactly matched MeSH terms. Therefore, MeSH may be a favorable terminology for indexing the HealthData.gov datasets.
14. Total and available heavy metal concentrations in soils of the Thriassio plain (Greece) and assessment of soil pollution indexes.
Science.gov (United States)
Massas, Ioannis; Kalivas, Dionisios; Ehaliotis, Constantions; Gasparatos, Dionisios
2013-08-01
The Thriassio plain is located 25 km west of Athens city, the capital of Greece. Two major towns (Elefsina and Aspropyrgos), heavy industry plants, medium to large-scale manufacturing, logistics plants, and agriculture comprise the main land uses of the studied area. The aim of the present study was to measure the total and available concentrations of Cr, Zn, Ni, Pb, Co, Mn, Ba, Cu, and Fe in the top soils of the plain, and to assess soil contamination by these metals by using the geoaccumulation index (Igeo), the enrichment factor (EF), and the availability ratio (AR) as soil pollution indexes. Soil samples were collected from 90 sampling sites, and aqua regia and DTPA extractions were carried out to determine total and available metal forms, respectively. Median total Cr, Zn, Ni, Pb, Co, Mn, Ba, Cu, and Fe concentrations were 78, 155, 81, 112, 24, 321, 834, 38, and 16 × 10(3) mg kg(-1), respectively. The available fractions showed much lower values, with medians of 0.4, 5.6, 1.7, 6.9, 0.8, 5.7, 19.8, 2.1, and 2.9 mg kg(-1). Though median total metal concentrations are not considered particularly high, the Igeo and the EF values indicate moderate to heavy soil enrichment. For certain metals such as Cr, Ni, Cu, and Ba, the different distribution patterns between the EFs and the ARs suggest different origins of the total and the available metal forms. The evaluation of the EF and AR data sets for the soils of the two towns further supports the argument that the EFs can well demonstrate the long-term history of soil pollution and that the ARs can adequately portray the recent history of soil pollution.
15. Index
Directory of Open Access Journals (Sweden)
Antonio Juan Sánchez
2012-09-01
Full Text Available The Advances in Distributed Computing and Artificial Intelligence Journal (ISSN: 2255-2863) is an open access journal that publishes articles which contribute new results associated with distributed computing and artificial intelligence, and their application in different areas. Artificial intelligence is changing our society. Its application in distributed environments, such as the Internet, electronic commerce, mobile communications, wireless devices, distributed computing and so on, is increasing and becoming an element of high added value and economic potential in industry and research. These technologies are changing constantly as a result of the large research and technical effort being undertaken in both universities and businesses. The exchange of ideas between scientists and technicians from both academic and business areas is essential to facilitate the development of systems that meet the demands of today's society.
16. Index
Directory of Open Access Journals (Sweden)
Antonio Juan Sánchez
2013-08-01
Full Text Available The Advances in Distributed Computing and Artificial Intelligence Journal (ADCAIJ) is an open access journal that publishes articles which contribute new results associated with distributed computing and artificial intelligence, and their application in different areas. Artificial intelligence is changing our society. Its application in distributed environments, such as the Internet, electronic commerce, mobile communications, wireless devices, distributed computing and so on, is increasing and becoming an element of high added value and economic potential in industry and research. These technologies are changing constantly as a result of the large research and technical effort being undertaken in both universities and businesses. The exchange of ideas between scientists and technicians from both academic and business areas is essential to facilitate the development of systems that meet the demands of today's society. We would like to thank all the contributing authors for their hard and highly valuable work. Their work has helped to contribute to the success of this special issue. Finally, the Editors wish to thank the Scientific Committee of the Advances in Distributed Computing and Artificial Intelligence Journal for the collaboration on this special issue, which notably contributes to improving the quality of the journal. We hope the reader will share our joy and find this special issue very useful.
17. Index
Directory of Open Access Journals (Sweden)
Antonio Juan SÁNCHEZ
2013-05-01
Full Text Available The Advances in Distributed Computing and Artificial Intelligence Journal (ADCAIJ) is an open access journal that publishes articles which contribute new results associated with distributed computing and artificial intelligence, and their application in different areas. Artificial intelligence is changing our society. Its application in distributed environments, such as the Internet, electronic commerce, mobile communications, wireless devices, distributed computing and so on, is increasing and becoming an element of high added value and economic potential in industry and research. These technologies are changing constantly as a result of the large research and technical effort being undertaken in both universities and businesses. The exchange of ideas between scientists and technicians from both academic and business areas is essential to facilitate the development of systems that meet the demands of today's society. We would like to thank all the contributing authors for their hard and highly valuable work. Their work has helped to contribute to the success of this special issue. Finally, the Editors wish to thank the Scientific Committee of the Advances in Distributed Computing and Artificial Intelligence Journal for the collaboration on this special issue, which notably contributes to improving the quality of the journal. We hope the reader will share our joy and find this special issue very useful.
18. MTA index: a simple 2D-method for assessing atrophy of the medial temporal lobe using clinically available neuroimaging
Directory of Open Access Journals (Sweden)
Manuel eMenéndez-González
2014-03-01
Full Text Available Background and purpose: Despite a strong correlation between the severity of Alzheimer disease (AD) pathology and medial temporal lobe atrophy (MTA), its measurement has not been widely used in daily clinical practice as a criterion in the diagnosis of prodromal and probable AD. This is mainly because the methods available to date are sophisticated and difficult to implement for routine use in most hospitals. In this pilot study we aim to describe a novel, simple and objective method for measuring the rate of MTA in relation to the global atrophy using clinically available neuroimaging, and describe the rationale behind this method. Description: This method consists of calculating a ratio of 3 regions traced manually on a single coronal MRI slice at the level of the interpeduncular fossa: (i) the medial temporal lobe region (A); (ii) the parenchyma within the medial temporal region, which includes the hippocampus and the parahippocampal gyrus (the fimbriae, taenia and choroid plexus are excluded) (B); and (iii) the body of the ipsilateral lateral ventricle (C). Therefore we can compute the Medial Temporal Atrophy index at both sides as follows: MTAi = (A - B) x 10 / C. Conclusions: The MTAi is a simple 2D method for measuring the relative extent of atrophy in the MTL in relation to the global brain atrophy. This method can be useful for a more accurate diagnosis of AD in routine clinical practice. Further studies are needed to assess the usefulness of the MTAi in the diagnosis of early AD, in tracking the progression of AD, and in the differential diagnosis of AD with other dementias.
19. The sensitivity of water availability to changes in the aridity index and other factors—A probabilistic analysis in the Budyko space
Science.gov (United States)
Gudmundsson, L.; Greve, P.; Seneviratne, S. I.
2016-07-01
One of the pending questions in the context of global change is whether climatic drivers or other factors have the stronger influence on water availability. Here we present an approach that makes it possible to estimate the probability that changes in the aridity index have a larger effect on water availability than equal relative changes in other factors. The analysis builds upon a probabilistic extension of the Budyko framework, which is subjected to an analytical sensitivity assessment. The results show that changes in water availability are dominated by changes in the aridity index only in very humid climates. This implies that projected intensifications of aridity in drylands may have less influence on water availability than commonly assumed; instead, other climatic or non-climatic factors dominate. The analysis hence allows us to map regions in which water availability is more sensitive to equal relative changes in either the aridity index or all other factors.
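The abstract does not reproduce the authors' probabilistic extension, but the deterministic Budyko logic it builds on can be sketched. The Fu (1981) parameterization of the Budyko curve used here is a common choice and an assumption on my part, not the paper's model; it shows why water availability responds strongly to the aridity index only in humid climates:

```python
# Illustrative Budyko-framework sketch (Fu's curve, an assumed stand-in
# for the authors' probabilistic extension). phi = PET/P is the aridity
# index; runoff fraction Q/P = 1 - E/P is a proxy for water availability.
import math

def fu_evaporative_fraction(aridity: float, omega: float = 2.6) -> float:
    """E/P as a function of phi = PET/P under Fu's parameterization."""
    return 1.0 + aridity - (1.0 + aridity ** omega) ** (1.0 / omega)

def water_availability_fraction(aridity: float, omega: float = 2.6) -> float:
    """Runoff fraction Q/P = 1 - E/P."""
    return 1.0 - fu_evaporative_fraction(aridity, omega)

# Humid (phi < 1) vs arid (phi > 1): availability flattens out when arid,
# so equal relative changes in phi matter less there.
for phi in (0.5, 1.0, 3.0):
    print(phi, round(water_availability_fraction(phi), 3))
```

Because Q/P approaches zero asymptotically for large phi, the marginal effect of the aridity index on availability shrinks in drylands, which is the qualitative behaviour the abstract reports.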
20. Index of the Nevada Applied Ecology Group and associated publications available in the Coordination and Information Center
Energy Technology Data Exchange (ETDEWEB)
Maza, B.G.
1991-02-01
This publication was created by the Coordination and Information Center (CIC) to provide a readily available research tool for researchers interested in a specific area covered by the holdings of the CIC Archives. The Nevada Applied Ecology Group (NAEG) was formed and functioned in accordance with Planning Directive NVO-76 (July 29, 1970, revised January 1, 1974; CIC-165845 and CIC-16439, respectively) "to coordinate the ecological and other environmental programs necessary to support the continued nuclear testing activities; and to provide a mechanism to effectively comply with requirements of the National Environmental Policy Act of 1969, Executive Order 11514, and AEC Manual Chapter 0510." The publication contains only citations to documents currently available at the CIC. It represents a significant portion of the principal research findings of the Nevada Applied Ecology Group.
1. Asymmetry of Dopamine D2/3 Receptor Availability in Dorsal Putamen and Body Mass Index in Non-obese Healthy Males.
Science.gov (United States)
Cho, Sang Soo; Yoon, Eun Jin; Kim, Sang Eun
2015-03-01
The dopaminergic system is involved in the regulation of food intake, which is crucial for the maintenance of body weight. We examined the relationship between striatal dopamine (DA) D2/3 receptor availability and body mass index (BMI) in 25 non-obese healthy male subjects using [(11)C]raclopride and positron emission tomography. None of the [(11)C]raclopride binding potential (BP) values (measures of DA D2/3 receptor availability) in the striatal subregions (dorsal caudate, dorsal putamen and ventral striatum) of the left and right hemispheres was significantly correlated with BMI. However, there was a positive correlation between the right-left asymmetry index of [(11)C]raclopride BP in the dorsal putamen and BMI (r = 0.43, p < 0.05), suggesting that higher BMI is associated with relatively greater DA D2/3 receptor availability in the right dorsal putamen than in the left in non-obese individuals. The present results, combined with previous findings, may also suggest neurochemical mechanisms underlying the regulation of food intake in non-obese individuals.
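The analysis above hinges on a right-left asymmetry index of binding potential correlated against BMI. The abstract does not give the exact normalization; the percent form below, and every number in the example, are assumptions for illustration only:

```python
# Sketch of a right-left asymmetry analysis like the one described above.
# The percent normalization and all example values are assumptions, not
# taken from the study.
import math
import statistics

def asymmetry_index(bp_right: float, bp_left: float) -> float:
    """Right-left asymmetry (in %) of a binding-potential pair."""
    return 100.0 * (bp_right - bp_left) / ((bp_right + bp_left) / 2.0)

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (right, left) BP pairs and BMIs for three subjects:
ai = [asymmetry_index(r, l) for r, l in [(2.2, 2.0), (2.1, 2.1), (2.0, 2.2)]]
print(ai, pearson_r(ai, [26.0, 23.0, 20.0]))
```

A positive index means higher availability on the right; the study's finding is that this rightward asymmetry in the dorsal putamen grows with BMI.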
2. Sun-induced chlorophyll fluorescence and photochemical reflectance index improve remote-sensing gross primary production estimates under varying nutrient availability in a typical Mediterranean savanna ecosystem
Science.gov (United States)
Perez-Priego, O.; Guan, J.; Rossini, M.; Fava, F.; Wutzler, T.; Moreno, G.; Carvalhais, N.; Carrara, A.; Kolle, O.; Julitta, T.; Schrumpf, M.; Reichstein, M.; Migliavacca, M.
2015-11-01
This study investigates the performance of different optical indices for estimating the gross primary production (GPP) of the herbaceous stratum in a Mediterranean savanna under different nitrogen (N) and phosphorus (P) availability. Sun-induced chlorophyll fluorescence yield computed at 760 nm (Fy760), the scaled photochemical reflectance index (sPRI), the MERIS terrestrial chlorophyll index (MTCI) and the normalized difference vegetation index (NDVI) were computed from near-surface field spectroscopy measurements collected using high-spectral-resolution spectrometers covering the visible and near-infrared regions. GPP was measured using canopy chambers at the same locations sampled by the spectrometers. We tested whether light-use efficiency (LUE) models driven by remote-sensing quantities (RSMs) can track changes in GPP caused by nutrient supply better than models driven exclusively by meteorological data (MMs). In particular, we compared the performance of different RSM formulations - relying on Fy760 or sPRI as a proxy for LUE, and on NDVI or MTCI for the fraction of absorbed photosynthetically active radiation (fAPAR) - with that of classical MMs. Results showed higher GPP in the N-fertilized experimental plots during the growing period. These differences in GPP disappeared in the drying period, when senescence effects masked potential differences due to plant N content. Consequently, although MTCI was closely related to mean plant N content across treatments (r2 = 0.86, p < 0.01), it was poorly related to GPP (r2 = 0.45, p < 0.05). By contrast, sPRI and Fy760 correlated well with GPP during the whole measurement period. Results revealed that the relationship between GPP and Fy760 is not unique across treatments but is affected by N availability. A cross-validation analysis showed that the MM (AICcv = 127, MEcv = 0.879) outperformed the RSM (AICcv = 140, MEcv = 0.8737) when soil moisture was used to constrain the seasonal dynamics of LUE. However
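The RSM formulations compared above all follow Monteith's light-use-efficiency logic, GPP = epsilon × fAPAR × PAR, with an optical index substituting for each factor. The sketch below shows that structure; the linear scalings and the eps_max value are illustrative assumptions, not the coefficients fitted in the study:

```python
# Sketch of a Monteith-style LUE model using remotely sensed proxies:
# sPRI stands in for the efficiency epsilon and NDVI for fAPAR. The
# scaling constants here are assumptions for illustration only.

def gpp_lue(spri: float, ndvi: float, par: float,
            eps_max: float = 1.8) -> float:
    """GPP estimate from optical proxies (arbitrary units)."""
    epsilon = eps_max * spri            # LUE proxy (assumed linear in sPRI)
    fapar = max(0.0, min(1.0, ndvi))    # crude fAPAR proxy, clipped to [0, 1]
    return epsilon * fapar * par

print(gpp_lue(0.5, 0.8, 10.0))  # roughly 1.8 * 0.5 * 0.8 * 10 = 7.2
```

The study's point is about which proxy to plug into each slot: MTCI tracks plant N well but not GPP, whereas sPRI and Fy760 track the efficiency term through the whole season.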
3. Association between cerebral cannabinoid 1 receptor availability and body mass index in patients with food intake disorders and healthy subjects: a [(18)F]MK-9470 PET study.
Science.gov (United States)
Ceccarini, J; Weltens, N; Ly, H G; Tack, J; Van Oudenhove, L; Van Laere, K
2016-07-12
Although of great public health relevance, the mechanisms underlying disordered eating behavior and body weight regulation remain insufficiently understood. Compelling preclinical evidence corroborates a critical role of the endocannabinoid system (ECS) in the central regulation of appetite and food intake. However, in vivo human evidence on ECS functioning in brain circuits involved in food intake regulation as well as its relationship with body weight is lacking, both in health and disease. Here, we measured cannabinoid 1 receptor (CB1R) availability using positron emission tomography (PET) with [(18)F]MK-9470 in 54 patients with food intake disorders (FID) covering a wide body mass index (BMI) range (anorexia nervosa, bulimia nervosa, functional dyspepsia with weight loss and obesity; BMI range=12.5-40.6 kg/m(2)) and 26 age-, gender- and average BMI-matched healthy subjects (BMI range=18.5-26.6 kg/m(2)). The association between regional CB1R availability and BMI was assessed within predefined homeostatic and reward-related regions of interest using voxel-based linear regression analyses. CB1R availability was inversely associated with BMI in homeostatic brain regions such as the hypothalamus and brainstem areas in both patients with FID and healthy subjects. However, in FID patients, CB1R availability was also negatively correlated with BMI throughout the mesolimbic reward system (midbrain, striatum, insula, amygdala and orbitofrontal cortex), which constitutes the key circuit implicated in processing appetitive motivation and hedonic value of perceived food rewards. Our results indicate that the cerebral homeostatic CB1R system is inextricably linked to BMI, with additional involvement of reward areas under conditions of disordered body weight.
4. Evaluation and comparison of commercially available Aloe vera L. products using size exclusion chromatography with refractive index and multi-angle laser light scattering detection.
Science.gov (United States)
Turner, Carlton E; Williamson, David A; Stroud, Paul A; Talley, Doug J
2004-12-20
Raw materials supplied as Aloe vera L. (sometimes referred to as Aloe barbadensis) samples often contain different compositions of low and high molecular weight components when analyzed by size exclusion chromatography. One major reason for the variable compositions of commercial A. vera L. materials is that they are produced by different manufacturing techniques. A consistent composition of matter based upon a given standard has been difficult to define. In addition, the method of quantifying and characterizing these commercially available materials has not been agreed upon within the industry. The end user, whether a researcher, a manufacturer, a marketing arm of industry or the consumer, should know that they are receiving a consistent product. In a blind study, 32 various A. vera L. samples from different manufacturers, and a prepared sample of fresh A. vera L. gel with the commercial biologic drug Acemannan Immunostimulant™, were analyzed for content of high molecular weight (polysaccharide) material by size exclusion chromatography with refractive index detection (SEC/RI) and by SEC/RI coupled with multi-angle laser light scattering (MALLS) detection. Results from the SEC/RI analysis showed significant variation in the high molecular weight content, and the MALLS analysis also showed significant variation versus SEC/RI. In addition, HPLC analysis of the anthraquinone content showed that all samples contained significantly less than the raw, unwashed aloe gel. The variation in the results of all analyses is attributed to the differing methods by which the samples were processed by the different manufacturers.
5. No correlation between body mass index and striatal dopamine transporter availability in healthy volunteers using SPECT and [123I]PE2I
DEFF Research Database (Denmark)
Thomsen, G; Ziebell, M; Jensen, Peter Steen
2013-01-01
, dopamine is inactivated by reuptake via the dopamine transporter (DAT). The aim of the study was to test the hypothesis of lower DAT availability in obese healthy subjects using a selective DAT radiotracer in a sample of subjects with a wide range of BMI values. Design and Methods: Thirty-three healthy...
6. Index and Indexing Assessment: Criteria and Standards
Directory of Open Access Journals (Sweden)
Hassan Ashrafi
2007-10-01
Full Text Available Indexing is one of the most important methods of content representation, whereby the subject content of documents is made known by assigning descriptors to them. Since indexes and indexing are remarkably significant in information retrieval, their quality and evaluation, and the provision of criteria and standards, have always been a mainstay of research in this field. Given that indexing is a complex process, offering definitions, principles and methods could be a step towards optimal use of information. The present study, while offering a capsule definition of the index, investigates indexing evaluation criteria and follows this up with a definition of indexing. Finally, a number of standards in the field of indexing are presented and conclusions are drawn.
7. Indexing Images.
Science.gov (United States)
Rasmussen, Edie M.
1997-01-01
Focuses on access to digital image collections by means of manual and automatic indexing. Contains six sections: (1) Studies of Image Systems and their Use; (2) Approaches to Indexing Images; (3) Image Attributes; (4) Concept-Based Indexing; (5) Content-Based Indexing; and (6) Browsing in Image Retrieval. Contains 105 references. (AEF)
8. [Not Available].
Science.gov (United States)
1992-03-25
Archers star Sudha Bhuchar, right, launches a video on breast screening for women from ethnic minorities, sponsored by the NHS. The video is available in six languages. Ms Bhuchar is pictured with programme co-ordinator Julietta Patrick.
9. Places available**
CERN Multimedia
2003-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch ** The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: ACCESS 2000 - niveau 1 : 13 & 14.11.03 (2 jours) C++ for Particle Physicists : 17 - 21.11.03 (6 X 3-hour lectures) Programmation automate Schneider TSX Premium niveau 2 : 18 - 21.11.03 (4 jours) JAVA 2 Enterprise Edition Part 1 : WEB Applications : 20 & ...
10. Semantic Text Indexing
Directory of Open Access Journals (Sweden)
Zbigniew Kaleta
2014-01-01
Full Text Available This article presents a specific issue in the semantic analysis of natural-language texts - text indexing - and describes one field of its application (web browsing). The main part of the article describes a computer system that assigns a set of semantic indexes (similar to keywords) to a particular text. The indexing algorithm employs a semantic dictionary to find specific words in a text that represent its content. Furthermore, it compares two given sets of semantic indexes to determine the texts' similarity (assigning a numerical value). The article describes the semantic dictionary - a tool essential to accomplishing this task - and its usefulness, the main concepts of the algorithm, and test results.
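The two steps described above - dictionary-driven index assignment and numerical comparison of index sets - can be sketched in miniature. The dictionary entries and the choice of Jaccard similarity are illustrative assumptions; the paper's actual dictionary and similarity measure are not given in the abstract:

```python
# Toy sketch of semantic text indexing: a dictionary maps words to
# semantic indexes, and two index sets get a numerical similarity.
# Dictionary contents and the Jaccard measure are assumptions.

SEMANTIC_DICTIONARY = {           # hypothetical word -> semantic index map
    "index": "information-retrieval", "query": "information-retrieval",
    "browser": "web", "hyperlink": "web",
}

def index_text(text: str) -> set:
    """Assign the set of semantic indexes found in a text."""
    return {SEMANTIC_DICTIONARY[w] for w in text.lower().split()
            if w in SEMANTIC_DICTIONARY}

def similarity(a: set, b: set) -> float:
    """Jaccard similarity between two index sets, in [0, 1]."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

s1 = index_text("a query against the index")
s2 = index_text("the browser follows a hyperlink to the index")
print(s1, s2, similarity(s1, s2))
```

Indexing by concept rather than by surface word is what lets two texts with little vocabulary overlap still score as similar.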
11. AP Index
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — Planetary Amplitude index - Bartels 1951. The a-index ranges from 0 to 400 and represents a K-value converted to a linear scale in gammas (nanoTeslas)--a scale that...
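The record above only states that the a-index is a K value converted to a linear 0-400 scale. For orientation, the commonly tabulated equivalent amplitudes are sketched below; the intermediate values are the standard conversion table and are given as an assumption to be checked against NOAA's documentation, since the record itself supplies only the endpoints:

```python
# Commonly tabulated equivalent linear amplitudes a (in nT/gammas) for
# quasi-logarithmic K values 0..9. Only the 0-400 range appears in the
# record above; the intermediate values are an assumed standard table.
K_TO_A = [0, 3, 7, 15, 27, 48, 80, 140, 240, 400]

def k_to_a(k: int) -> int:
    """Convert a 3-hourly K value (0-9) to its linear equivalent amplitude."""
    if not 0 <= k <= 9:
        raise ValueError("K must be between 0 and 9")
    return K_TO_A[k]

print(k_to_a(0), k_to_a(9))  # endpoints of the scale: 0 and 400
```

The point of the conversion is that K is quasi-logarithmic, so averaging K values directly is not meaningful; averaging the linear amplitudes is.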
12. Speech Indexing
NARCIS (Netherlands)
Ordelman, R.J.F.; Jong, de F.M.G.; Leeuwen, van D.A.; Blanken, H.M.; de Vries, A.P.; Blok, H.E.; Feng, L.
2007-01-01
This chapter will focus on the automatic extraction of information from the speech in multimedia documents. This approach is often referred to as speech indexing and it can be regarded as a subfield of audio indexing that also incorporates for example the analysis of music and sounds. If the objecti
13. Places available
CERN Multimedia
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. Places available The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses : Introduction à Outlook : 19.8.2004 (1 journée) Outlook (short course I) : E-mail : 31.8.2004 (2 hours, morning) Outlook (short course II) : Calendar, Tasks and Notes : 31.8.2004 (2 hours, afternoon) Instructor-led WBTechT Study or Follow-up for Microsoft Applications : 7.9.2004 (morning) Outlook (short course III) : Meetings and Delegation : 7.9.2004 (2 hours, afternoon) Introduction ...
14. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: C++ Programming Level 2 - Traps & Pitfalls: 16 - 19.7.02 (4 days) If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at : Technical Training or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. Technical Training Monique Duval Tel.74924 monique.duval@cern.ch
15. Places available**
CERN Multimedia
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch ** The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: The JAVA Programming Language Level 1 :9 & 10.1.2004 (2 days) The JAVA Programming Language Level 2 : 11 to 13.1.2004 (3 days) Hands-on Introduction to Python Programming : 16 - 18.2.2004 (3 days - free of charge) CLEAN-2002 : Working in a Cleanroom : 10.3.2004 (afternoon - free of charge) C++ for Particle Physicists : 8 - 12.3.2004...
16. Places available**
CERN Multimedia
2003-01-01
If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. ** The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses : EXCEL 2000 - niveau 1 : 20 & 22.10.03 (2 jours) CLEAN-2002 : Working in a Cleanroom (free of charge) : 23.10.03 (half day) The EDMS-MTF in practice (free of charge) : 28 - 30.10.03 (6 half-day sessions) AutoCAD 2002 - Level 1 : 3, 4, 12, 13.11.03 (4 days) LabVIEW TestStand ver. 3 : 4 & 5.11.03 (2 days) Introduction to Pspice : 4.11.03 p.m. (half-day) Hands-on Introduction to Python Programm...
17. Places available**
CERN Multimedia
2003-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval Tel. 74924 technical.training@cern.ch ** The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: JAVA 2 Enterprise Edition - Part 1 : WEB Applications : 20 & 21.11.03(2 days) FrontPage 2000 - niveau 1 : 20 & 21.11.03 (2 jours) Oracle 8i : SQL : 3 - 5.12.03 (3 days) Oracle 8i : Programming with PL/SQL : 8 - 10.12.03 (3 days) The JAVA Programming Language - leve...
18. Places available**
CERN Document Server
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt.TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch ** The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: The JAVA Programming Language Level 1 : 9 & 10.1.2004 (2 days) The JAVA Programming Language Level 2 : 11 to 13.1.2004 (3 days) LabVIEW base 1 : 25 - 27.2.2004 (3 jours) CLEAN-2002 : Working in a Cleanroom : 10.3.2004 (afternoon - free of charge) C++ for Particle Physicists : 8 - 12.3.2004 ( 6 X 4-hour sessions) LabVIEW Basics 1 : 22 - 24.3.20...
19. Places available**
CERN Multimedia
2003-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval Tel. 74924technical.training@cern.ch ** The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: MATLAB Fundamentals and Programming Techniques (ML01) : 2 & 3.12.03 (2 days) Oracle 8i : SQL : 3 - 5.12.03 (3 days) The EDMS MTF in practice : 5.12.03 (afternoon, free of charge) Modeling Dynamic Systems with Simulink (SL01) : 8 & 9.12.03 (2 days) Signal Processing with MATLAB (SG01) : 11 & 12.12.03 (2 days) The JAVA Programming Language - l...
20. Places available**
CERN Multimedia
2003-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch ** The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: MATLAB Fundamentals and Programming Techniques (ML01) :2 & 3.12.03 (2 days) Oracle 8i : SQL : 3 - 5.12.03 (3 days) The EDMS MTF in practice : 5.12.03 (afternoon, free of charge) Modeling Dynamic Systems with Simulink (SL01) : 8 & 9.12.03 (2 days) Signal Processing with MATLAB (SG01) : 11 & ...
1. Universal availability of the air quality index formulae based on the normalized transformation principle
Institute of Scientific and Technical Information of China (English)
李祚泳; 张小丽; 汪嘉杨
2016-01-01
This paper aims to develop index formulae for assessing indoor and outdoor air quality that are simple in form, easy to calculate and universal in application. Based on the indoor air quality standards (GB/T 18883-2002), we propose essential air quality formulae such that the ranges of the normalized values of the 15 indexes for a single grade standard of indoor air conform to those of the normalized values of the 7 indexes for the same grade standards of outdoor air, so that they can serve as a reference for the normalized transformation. In this way the grade standards for each indoor air index can be put through the normalized transformation in a straightforward and consistent manner. Six universal index formulae are then optimized by means of the monkey-king genetic algorithm with immune evolution (MKGAIEA). These formulae are also suitable for environmental quality assessment using the 7 normalized index values of outdoor air and, by the gauge symmetry principle, for assessment using the 15 normalized indexes of indoor air. We applied the formulae to indoor air quality assessment in two cases - residential areas in Handan, Hebei, and public places in Guangzhou City, Guangdong - to test their practical effectiveness. The assessment results of the 6 universal index formulae for the 2 cases agree well with the results of traditional methods and are consistent with each other. It can thus be concluded that the 6 index formulae can be taken as the most convenient and
2. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: CLEAN-2002 : Travailler en salle blanche (cours gratuit) : 13.08.2002 (matin) Introduction to the CERN Enginnering Data Management System : 27.8.02 (1 day) The CERN Engineering Data Management System for Advanced Users : 28.8.02 (1 day) If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at : Technical Training or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. Technical Training Monique Duval Tel.74924 monique.duval@cern.ch
3. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel. 74924
2001-01-01
Places are available in the following courses: Introduction to Databases : 3 - 4.7.01 (2 days) The JAVA programming language Level 2 : 4 - 6.7.01 (3 days) Enterprise JavaBeans : 9 - 11.7.01 (3 days) Design Patterns : 10 - 12.7.01 (3 days) C++ for Particle Physicists : 23 - 27.7.01 (6 3-hour lectures) If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at : http://www.cern.ch/Training/ or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt.
4. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel. 74924
2001-01-01
Places are available in the following courses: Introduction to Perl 5 : 2 - 3.7.01 (2 days) Introduction to Databases : 3 - 4.7.01 (2 days) JAVA programming language Level 2 : 4 - 6.7.01 (3 days) Enterprise JavaBeans : 9 - 11.7.01 (3 days) Design Patterns : 10 - 12.7.01 (3 days) C++ for Particle Physicists : 23 - 27.7.01 (6 3-hour lectures) If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at : http://www.cern.ch/Training/ or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt.
5. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: C++ Programming Level 2 - Traps & Pitfalls: 16 - 19.7.02 (4 days) Frontpage 2000 - level 1 : 22 - 23.7.02 (2 days) Introduction à Windows 2000 au CERN : 24.7.02 (après-midi) CLEAN-2002 : Travailler en salle blanche (cours gratuit) : 13.08.2002 (matin) If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at : Technical Training or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. Technical Training Monique Duval Tel.74924 monique.duval@cern.ch
6. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: December 2002 PCAD Schémas - Débutants : 5 & 6.12.02 (2 jours) PCAD PCB - Débutants : 9 - 11.12.02 (3 jours) FrontPage 2000 - level 1: 9 & 10.12.02 (2 days) Introduction à la CAO Cadence (cours gratuit) : 10 & 11.12.02 (2 jours) If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at : Technical Training or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. Technical Training Monique Duval Tel.74924 monique.duval@cern.ch
7. Places available**
CERN Multimedia
2003-01-01
Places are available in the following courses: Conception de PCB rapides dans le flot Cadence : 11.6.03 (matin) EXCEL 2000 - level 1 : 12 & 13.6.03 (2 days) Introduction to PVSS : 16.6.03 (p.m.) Basic PVSS : 17 - 19.6.03 (3 days) Réalisation de PCB rapides dans le flot Cadence : 17.6.03 (matin) PVSS - JCOP Framework Tutorial : 20.6.03 (1 day) Programmation automate Schneider TSX Premium - 2ème niveau : 24 - 27.6.03 (4 jours) - audience: anyone who wants to master the implementation and programming of the specialised functions of a TSX Premium PLC - objectives: master the implementation and programming of the specialised functions of a TSX Premium PLC Cours de sécurité : Etre TSO au CERN : next sessions: 24, 25 & 27.6.03 - 4, 5 & 7.11.03 (3-day session) ** The number of places available may vary. Please check our Web site to find out the current availability. If you wish to participate in one of these courses, pl...
8. [Not Available].
Science.gov (United States)
Hainschitz, I; Rieger, K; Siegl, H
2002-06-01
In Austria, a guideline value for Ochratoxin A of 3 μg/kg for coffee, 0.3 μg/kg for fruit juices and 0.2 μg/kg for beer is under discussion. The laboratory of the food inspection authority of the state of Vorarlberg investigated the contribution of selected foodstuffs to the daily OTA intake and compared it with the recommendation of the Scientific Committee on Food (SCF) of the EC. The focal point of this study was beverages (coffee, coffee substitutes, beer and fruit juices) and their ingredients. SUMMARY: The results for beer, fruit juice and coffee [Diagram 1] show that the majority of samples were only very slightly contaminated, if at all. In most samples the OTA load was below the detection limit of 0.3 μg/kg or 0.01 μg/l. Individual samples, however, were considerably contaminated, so that with heavy consumption (fruit juice in summer) an exceedance of the maximum level proposed by the SCF cannot be ruled out. The results for coffee substitutes [Diagram 2] show a higher OTA load in more than half of the samples. If the maximum intake of 5 ng per day per kg of body weight proposed by the SCF is taken as a basis, this corresponds to 0.3 μg/day for a 60 kg person. This means that for a coffee substitute contaminated with 100 μg/kg OTA, the consumption of just one cup (5-7 g of powder) clearly exceeds this maximum intake from this source alone. The intake from the rest of the diet, such as cereals, which account for about half of the OTA intake, is not considered here. The investigations show that compliance with the guideline values proposed in Austria for beer, fruit juices and coffee presents no difficulties. For coffee substitutes and dried fruits other than grapes [3], however, no guideline value has yet been proposed. The results show, though, that precisely for coffee substitutes and various dried fruits, against the background
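The intake arithmetic in the record above (5 ng/kg/day × 60 kg = 0.3 μg/day, versus 0.5-0.7 μg from one cup of powder contaminated at 100 μg/kg) can be written out directly; the function names are my own, the numbers are the record's:

```python
# Worked version of the OTA intake arithmetic above. Only unit
# conversions are involved: ng -> ug and g -> kg.

def daily_limit_ug(body_weight_kg: float,
                   limit_ng_per_kg_day: float = 5.0) -> float:
    """SCF-style daily OTA limit in micrograms per day."""
    return body_weight_kg * limit_ng_per_kg_day / 1000.0  # ng -> ug

def portion_intake_ug(contamination_ug_per_kg: float,
                      portion_g: float) -> float:
    """OTA intake from one portion, in micrograms."""
    return contamination_ug_per_kg * portion_g / 1000.0   # g -> kg

print(daily_limit_ug(60.0))                                   # 0.3 ug/day
print(portion_intake_ug(100.0, 5.0), portion_intake_ug(100.0, 7.0))
```

Even the smaller 5 g portion delivers 0.5 μg, already above the 0.3 μg/day limit for a 60 kg person, which is the record's point.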
9. AA Index
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — The geomagnetic aa index provides a long climatology of global geomagnetic activity using 2 antipodal observatories at Greenwich and Melbourne- IAGA Bulletin 37,...
10. Walkability Index
Data.gov (United States)
U.S. Environmental Protection Agency — The Walkability Index dataset characterizes every Census 2010 block group in the U.S. based on its relative walkability. Walkability depends upon characteristics of...
11. PLACES AVAILABLES
CERN Multimedia
Enseignement Technique; Tél. 74924; monique.duval@cern.ch
2000-01-01
Places are available in the following courses: LabView hands-on : 13.11.00 (4 hours) LabView Basics 1 : 14 - 16.11.00 (3 days) Nouveautés de WORD : 19 et 20.10.00 (2 jours) ACCESS 1er niveau : 30 - 31.10.00 (2 jours) Advanced C programming : 2 - 3.11.00 (2 days) Introduction à PowerPoint : 6.11.00 (1 journée) C++ for Particle Physicists : 20 - 24.11.00 (12 hours) If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at : http://www.cern.ch/Training/ or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt.
12. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
14. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel. 74460
2001-01-01
Places are available in the following courses: LabView Base 1 : 27-29.3.01 (3 jours) Contract Follow-up : 9.4.01 (3 heures) Introduction à PowerPoint : 24.4.01 (1 journée) Publier sur le Web : 25-27.4.01 (3 demi-journées) Programmation TSX Premium 2 : 15-16.5.01 (5 jours) LabView Base 2 : 27-29.3.01 (2 jours) Hands-on Object-oriented Analysis, Design & Programming with C++ : 23-27.4.01 (5 days) If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at : http://www.cern.ch/Training/ or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt.
15. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: Habilitation électrique : recyclage HT/BT : 11 - 15.3.2002 (2 * 2 heures) PVSS Basics : 8 - 12.4.02 (5 days) ELEC-2002 : Spring Term : 9, 11, 16, 18, 23, 25, 30.4.02 (7 * 2.5 hours) LabVIEW base 1 : 22 - 24.4.02 (3 jours) LabVIEW DSC (F) 25 & 26.4.02 (2 jours) LabVIEW Basics 2 : 13 & 14.5.02 (2 days) LabVIEW DAQ (F) : 15 & 16.5.02 (2 jours) Cours sur la migration AutoCAD : AutoCAD : Mise à jour AutoCAD r-14 vers 2002 (2 jours) AutoCAD Mechanical PowerPack 6 basé sur AutoCAD 2002 (5 jours) If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at : Technical Training or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applica...
16. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel 74924
2002-01-01
Places are available in the following courses: LabView hands-on : 21.01.02 (1/2 journée) LabView DAQ hands-on : 21.01.02 (1/2 journée) FileMaker Pro : 22 - 25.1.02 (4 jours) MS-Project 2000 : 24 & 25.01.02 (2 jours) Introduction au PC et à Windows 2000 au CERN : 29 - 30.1.02 (2 jours) LabView Base 1 : 4 - 6.2.02 (3 jours) LabView DAQ (E) : 7 & 8.02.02 (2 days) Hands-on Object-Oriented Design & Programming with Java : 11 - 13.02.02 (3 days) C++ for Particle Physicists : 11 - 15.3.2002 (6 * 3 hour lectures) Cours sur la migration AutoCAD : AutoCAD : Mise à jour AutoCAD r-14 vers 2002 (2 jours) AutoCAD Mechanical PowerPack 6 basé sur AutoCAD 2002 (5 jours) If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at : Technical Training or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO ...
17. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: The CERN Engineering Data Management System for Advanced users : 13.6.02 (1 day) The CERN Engineering Data Management System for Local Administrators : 18.6.02 (1 day) AutoCAD 2002 - niveau 2 : 24 - 25.6.02 (2 jours) Frontpage 2000 - niveau 2 : 25 - 26.6.02 (2 jours) Object-oriented Analysis and Design : 2 - 5.7.02 (4 days) C++ Programming Level 2 - Traps & Pitfalls : 16 - 19.7.02 (4 days) C++ for Particle Physicists : 22 - 26.7.02 (6 * 3 hour lectures) If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at : Technical Training or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of the...
18. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: November 2002 Java Programming Language level 1 : 28 & 29.11.02 (2 days) December 2002 LabVIEW - DSC (English) : 2 - 3.12.02 (2 days) FileMaker (Français) : 2 - 5.12.02 (4 jours) PCAD Schémas - Débutants : 5 & 6.12.02 (2 jours) PCAD PCB - Débutants : 9 - 11.12.02 (3 jours) FrontPage 2000 - level 1: 9 & 10.12.02 (2 days) If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at : Technical Training or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. Technical Training M...
19. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: November 2002 Hands-on Object-Oriented Design and Programming with C++: 19 - 21.11.02 (3 days) December 2002 LabVIEW - DSC (English) : 2 - 3.12.02 (2 days) AutoCAD 2002 - niveau 2 : 2 & 3.12.02 (2 jours) FileMaker (Français) : 2 - 5.12.02 (4 jours) PCAD Schémas - Débutants : 5 & 6.12.02 (2 jours) PCAD PCB - Débutants : 9 - 11.12.02 (3 jours) FrontPage 2000 - level 1: 9 & 10.12.02 (2 days) If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at : Technical Training or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisiona...
20. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel. 74924
2001-01-01
Places are available in the following courses: Utilisation du simulateur Simplorer : 30.5 - 1.6.01 (3 jours) JAVA programming language level 1: 11-12.6.01 (2 days) LabView hands-on F ou E : 11.6.01 (1/2 journée) Comprehensive VHDL for EPLD/FPGA Design : 11 - 15.6.01 (5 days) Introduction au Langage C : 13 - 15.6.01 (3 jours) LabView Base 1 : 12 - 14.6.01 (3 jours) Habilitation électrique : superviseurs : 2 sessions d'une demi-journée les 12 et 19.6.01 Migration de LabVIEW 5 vers LabVIEW 6i Migration from LabVIEW 5 to LabVIEW 6I : 15.6.01 (1/2 journée/half-day) Introduction to Perl 5 : 2 - 3.7.01 (2 days) JAVA programming language level 2 : 4 - 6.7.01 (3 days) If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at : http://www.cern.ch/Training/ or fill in an 'application for training' form available from ...
1. Places available**
CERN Document Server
Places are available in the following courses: Hands-on Introduction to Python Programming: 11-13.08.2003 (3 days) Introduction to the CERN Engineering Data Management System (EDMS): 26.08.2003 (1 day) The CERN Engineering Data Management System (EDMS) for Engineers: 27.08.2003 (1 day) CLEAN-2002 : Travailler en salle blanche : 4.09.2003 (une demi-journée) AutoCAD 2002 - Level 1: 4, 5, 15, 16.09.2003 (2 x 2 days) AutoCAD Mechanical 6 PowerPack : 17, 18, 25, 26.09.2003 et 2, 3.10.2003 (3 x 2 journées, français) AutoCAD 2002 - niveau 1 : 23, 24, 30.09.2003 et 1.10.2003 (2 x 2 journées) Introduction to the CERN Engineering Data Management System (EDMS): 23.09.2003 (1 day) The CERN Engineering Data Management System (EDMS) for Local Administrators: 24-25.09.2003 (2 days) AutoCAD 2002 - niveau 2 : 8 et 10.10.2003 (2 journées) CLEAN-2002: Working in a Cleanroom: 23.10.2003 (half day, p.m.) ** The number of places available may vary. Please check our Web site to find out the current availabili...
2. Places available**
CERN Multimedia
2003-01-01
Places are available in the following courses: Hands-on Introduction to Python Programming: 11-13.08.2003(3 days) Introduction to the CERN Engineering Data Management System (EDMS): 26.08.2003 (1 day) The CERN Engineering Data Management System (EDMS) for Engineers: 27.08.2003 (1 day) CLEAN-2002 : Travailler en salle blanche : 4.09.2003 (une demi-journée) AutoCAD 2002 - Level 1: 4, 5, 15, 16.09.2003 (2 x 2 days) AutoCAD Mechanical 6 PowerPack : 17, 18, 25, 26.09.2003 et 2, 3.10.2003 (3 x 2 journées, français) AutoCAD 2002 - niveau 1 : 23, 24, 30.09.2003 et 1.10.2003 (2 x 2 journées) Introduction to the CERN Engineering Data Management System (EDMS): 23.09.2003 (1 day) The CERN Engineering Data Management System (EDMS) for Local Administrators: 24-25.09.2003 (2 days) AutoCAD 2002 - niveau 2 : 8 et 10.10.2003 (2 journées) CLEAN-2002: Working in a Cleanroom: 23.10.2003 (half day, p.m.) ** The number of places available may vary. Please ch...
3. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: LabView Hands-on (bilingue/bilingual - gratuit/free of charge) : 13.9.02 (a.m.) LabView DAQ Hands-on (bilingue/bilingual - gratuit/free of charge) : 13.9.02 (p.m.) AutoCAD 2002 - niveau 1 : 19, 20, 26, 27.9.02 (4 jours) LabView Base 1 : 23 - 25.9.02 (3 jours) LabView DAQ (E) : 26 - 27.9.02 (2 days) AutoCAD Mechanical 6 PowerPack (F) : 30.9, 1, 2, 9, 10, 11.10.02 (6 jours) CLEAN-2002 : Working in a Cleanroom (free of charge) : 10.10.02 (half-day, p.m.) AutoCAD 2002 - niveau 2 : 14 - 15.10.02 (2 jours) AutoCAD 2002 - Level 1 : 17, 18, 24, 25.10.02 (4 days) If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at : Technical Training or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Of...
4. [Not Available].
Science.gov (United States)
Murray, Clinton K; Bennett, Jason W
2009-01-01
Malaria's global impact is expansive and includes the extremes of the healthcare system ranging from international travelers returning to nonendemic regions with tertiary referral medical care to residents in hyperendemic regions without access to medical care. Implementation of prompt and accurate diagnosis is needed to curb the expanding global impact of malaria associated with ever-increasing antimalarial drug resistance. Traditionally, malaria is diagnosed using clinical criteria and/or light microscopy even though both strategies are clearly inadequate in many healthcare settings. Hand held immunochromatographic rapid diagnostic tests (RDTs) have been recognized as an ideal alternative method for diagnosing malaria. Numerous malaria RDTs have been developed and are widely available; however, an assortment of issues related to these products have become apparent. This review provides a summary of RDT including effectiveness and strategies to select the ideal RDT in varying healthcare settings.
5. [Not Available].
Science.gov (United States)
Blanchard, Elodie; de Lara, Manuel Tunon
2013-01-01
Pholcodine is an opioid that has been widely used worldwide since 1950 for the treatment of non-productive cough in children and adults. The results of early preclinical studies but also those of recent clinical trials have shown the antitussive efficacy of pholcodine to be superior to that of codeine, of longer duration, and with an equivalent or safer toxicity profile. Also, there is no risk of addiction. Concern had been raised over a possible cross-sensitisation with neuromuscular blocking agents. While a recent assessment of the available data by the European Medicines Agency (EMA) has confirmed the favourable risk-benefit ratio of pholcodine, further studies are needed to clear this point.
6. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel. 74924
2001-01-01
Places are available in the following courses: Nouveautés d'EXCEL : 5.11.01 (1/2 journée) Introduction a Windows 2000 au CERN : 6.11.01 (1/2 journée) UNIX pour non-programmeurs : 5 - 7.11.01 (3 jours) Design Patterns : 7 - 8.11.01 (2 days) The Java programming language Level 1: 8 - 9.11.01 (2 days) Automates et réseaux de terrain : 13 & 14.11.01 (3 jours) Introduction à Windows 2000 au CERN : 12 - 14.11.01 (1/2 journée) Introduction to Windows 2000 at CERN : 14.11.01 (half-day) Introduction to PERL 5 : 15 - 16.11.01 (2 days) Introduction to C Programming : 21- 23.11.01 (3 days) Programmation TSX Premium 2 : 26 - 30.11.01 (5 jours) Contract Follow-up (F) : 26.11.01 (1/2 journée) Object-Oriented Analysis and Design : 27 - 30.11.2001 (4 days) Hands-on Object-Oriented Design and Programming with C++ : 11 - 13.12.2...
7. Places available**
CERN Multimedia
2003-01-01
Places are available in the following courses: The CERN EDMS for Local Administrators : 24 & 25.9.03 (2 days, free of charge) HeREF-2003 : Techniques de la réfrigération Hélium cours en français avec support en anglais) : 6 - 10.10.2003 (7 demi-journées) The Java Programming Language Level 1 : 6 - 7.10.2003 (2 days) Java 2 Enterprise Edition - Part 2 : Enterprise JavaBeans : 8 - 10.10.2003 (3 days) FileMaker - niveau 1 : 9 & 10.10.03 (2 jours) EXCEL 2000 - niveau 1 : 20 & 22.10.03 (2 jours) AutoCAD 2002 - niveau 1 : 20, 21, 27, 28.10.03 (4 jours) CLEAN-2002 : Working in a Cleanroom : 23.10.03 (half day, free of charge) AutoCAD 2002 - Level 1 : 3, 4, 12, 13.11.03 (4 days) AutoCAD 2002 - niveau 2 : 10 & 11.11.03 (2 jours) ACCESS 2000 - niveau 1 : 13 & 14.11.03 (2 jours) AutoCAD Mechanical 6 PowerPack (E) : 17, 18, 24, 25.11 & 1, 2.12.03 (6 days) FrontPage 2000 - niveau 1 : 20 & 21.11.03 (2 jours) MAGNE-03 : Magnétisme pour l'électrotechnique : 25 - 27.11.03 (3 jours) ...
8. Places available**
CERN Multimedia
2003-01-01
Places are available in the following courses : FrontPage 2000 - niveau 1 : 20 & 21.5.03 (2 jours) PIPES-2003 : Pratique du sertissage de tubes métalliques et multicouches : 21.5.03 (1 jour) Introduction à la CAO Cadence : de la saisie de schéma Concept-HDL au PCB : 20 & 22.5.03 (2 jours) AutoCAD 2002 - niveau 2 : 3 & 4.6.03 (2 jours) AutoCAD Mechanical 6 PowerPack (F) : 5, 6, 12, 13, 26, 27.6.03 (6 jours) EXCEL 2000 - niveau 1 : 10 & 11.6.03 (2 jours) Conception de PCB rapides dans le flot Cadence : 11.6.03 (matin) EXCEL 2000 - level 1 : 12 & 13.6.03 (2 days) PowerPoint 2000 (F) : 17 & 18.6.03 (2 jours) Réalisation de PCB rapides dans le flot Cadence : 17.6.03 (matin) FrontPage 2000 - niveau 2 : 19 & 20.6.03 (2 jours) LabView DSC (langue à décider/language to be defined) : 19 & 20.6.03 EXCEL 2000 - niveau 2 : 24 & 25.6.03 (2 jours) Siemens SIMATIC Training: Introduction to STEP7 : 3 & 4.6.03 (2 days) STEP7 Programming : 16 - 20.6.03 (5 days) Simatic...
9. Places available**
CERN Multimedia
2003-01-01
Places are available in the following courses : FrontPage 2000 - niveau 1: 20 & 21.5.03 (2 jours) PIPES-2003 : Pratique du sertissage de tubes métalliques et multicouches: 21.5.03 (1 jour) Introduction à la CAO Cadence: de la saisie de schéma Concept-HDL au PCB : 20 & 22.5.03 (2 jours) AutoCAD Mechanical 6 PowerPack (E): 5, 6, 12, 13, 26, 27.6.03 (6 days) EXCEL 2000 - niveau 1: 10 & 11.6.03 (2 jours) Conception de PCB rapides dans le flot Cadence: 11.6.03 (matin) EXCEL 2000 - level 1: 12 & 13.6.03 (2 days) Introduction to PVSS: 16.6.03 (half-day, pm) Basic PVSS: 17 - 19.6.03 (3 days) Réalisation de PCB rapides dans le flot Cadence: 17.6.03 (matin) LabView DSC (language to be defined): 19 & 20.6.03 PVSS - JCOP Framework Tutorial: 20.6.03 (1 day) EXCEL 2000 - niveau 2: 24 & 25.6.03 (2 jours) Siemens SIMATIC Training: Introduction to STEP7: 3 & 4.6.03 (2 days) STEP7 Programming: 16 - 20.6.03 (5 days) Simatic Net Network: 26 & 27.6.03 (2 days) These courses will be given...
10. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel. 74924
2001-01-01
Places are available in the following courses: EXCEL 2000 - niveau 1 : 3 et 4.10.01 (2 jours) Automates et réseaux de terrain : 3 - 4.10.2001 (2 jours) Introduction à Outlook : 5.10.01 (1 journée) C++ for Particle Physicists : 8 - 12.10.01 (6 lectures) Cadence Board Design tools : Upgrading to release 14 : 3 1-day sessions on 9, 10 & 11.10.01 MS-Project 2000 - niveau 1 : 15 - 18.10.01 (4 demi-journées) LabView Base 2 : 18 & 19.10.01 (2 jours) WORD 2000 : importer et manipuler des images : 19.10.01 (1 journée) The CERN Engineering Data Management System for Electronics Design : 30.10.01 (1 day) UNIX pour non-programmeurs : 5 - 7.11.01 (3 jours) The Java programming language Level 1: 8 - 9.11.01 (2 days) Introduction to PERL 5 : 15 - 16.11.01 (2 days) Introduction to XML : 19 - 20.11.01 (2 days) Programming TSX Premium 1 : 19 - 23.11.01 (5 days) Introd...
11. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel. 74924
2001-01-01
Places are available in the following courses: Contract Follow-up (F) : 30.10.01 (1/2 journée) The CERN Engineering Data Management System for Electronics Design : 30.10.01 (1 day) UNIX pour non-programmeurs : 5 - 7.11.01 (3 jours) Nouveautés d'EXCEL : 5.11.01 (1/2 journée) Introduction a Windows 2000 au CERN : 6.11.01 (1/2 journée) The Java programming language Level 1: 8 - 9.11.01 (2 days) LabView Base 1 : 12 - 14.11.01 (3 jours) Automates et réseaux de terrain : 13 & 14.11.01 (2 jours) Introduction to PERL 5 : 15 - 16.11.01 (2 days) Introduction to XML : 19 - 20.11.01 (2 days) Programming TSX Premium 1 : 19 - 23.11.01 (5 days) Introduction to C Programming : 21- 23.11.01 (3 days) The Java programming language Level 2: 26 - 28.11.01 (3 days) Programmation TSX Premium 2 : 26 - 30.11.01 (5 jours) Autocad Migration support courses: a detail...
12. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel. 74924
2001-01-01
Places are available in the following courses: MS-Project 2000 - niveau 1 : 15 - 18.10.01 (4 demi-journées) LabView Base 2 : 18 & 19.10.01 (2 jours) WORD 2000 : importer et manipuler des images : 19.10.01 (1 journée) Contract Follow-up (F) : 30.10.01 (1/2 journée) The CERN Engineering Data Management System for Electronics Design : 30.10.01 (1 day) UNIX pour non-programmeurs : 5 - 7.11.01 (3 jours) The Java programming language Level 1: 8 - 9.11.01 (2 days) LabView Base 1 : 12 - 14.11.01 (3 jours) Automates et réseaux de terrain : 13 & 14.11.01 (2 jours) Introduction to PERL 5 : 15 - 16.11.01 (2 days) Introduction to XML : 19 - 20.11.01 (2 days) Programming TSX Premium 1 : 19 - 23.11.01 (5 days) Introduction to C Programming : 21- 23.11.01 (3 days) The Java programming language Level 2: 26 - 28.11.01 (3 days) Programmation TSX Premium 2 : 26 ...
13. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel. 74924
2001-01-01
Places are available in the following courses: Cadence Board Design tools : Upgrading to release 14 : 3 1-day sessions on 9, 10 & 11.10.01 MS-Project 2000 - niveau 1 : 15 - 18.10.01 (4 demi-journées) LabView Base 2 : 18 & 19.10.01 (2 jours) WORD 2000 : importer et manipuler des images : 19.10.01 (1 journée) Contract Follow-up (F) : 30.10.01 (1/2 journée) The CERN Engineering Data Management System for Electronics Design : 30.10.01 (1 day) UNIX pour non-programmeurs : 5 - 7.11.01 (3 jours) The Java programming language Level 1: 8 - 9.11.01 (2 days) LabView Base 1 : 12 - 14.11.01 (3 jours) Introduction to PERL 5 : 15 - 16.11.01 (2 days) Introduction to XML : 19 - 20.11.01 (2 days) Programming TSX Premium 1 : 19 - 23.11.01 (5 days) Introduction to C Programming : 21- 23.11.01 (3 days) The Java programming language Level 2: 26 - 28.11.01 (...
14. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel. 74924
2001-01-01
Places are available in the following courses: Introduction à Windows 2000 au CERN : 2 sessions d'une demi-journée les 24 et 25.9.01 PROFIBUS : 25 - 26.9.01 (2 jours) PROFIBUS : 27 - 28.9.01 (2 days) EXCEL 2000 - niveau 1 : 3 et 4.10.01 (2 jours) Automates et réseaux de terrain : 3 - 4.10.2001 (2 jours) Introduction à Outlook : 5.10.01 (1 journée) Frontpage 2000 - niveau 1 : 8 et 9.10.01 (2 jours) C++ for Particle Physicists : 8 - 12.10.01 (6 lectures) MS-Project 2000 - niveau 1 : 15 - 18.10.01 (4 demi-journées) Programmation TSX Premium 1 : 15 - 19.10.01 (5 jours) WORD 2000 : importer et manipuler des images : 19.10.01 (1 journée) Programmation TSX Premium 1 : 22 - 26.10.01 (5 jours) UNIX pour non-programmeurs : 5 - 7.11.01 (3 jours) The Java programming language Level 1: 8 - 9.11.01 (2 days) Introduction to PERL 5 : 15 - 16.11.01 (2 days) Introduction to XML : 19 - 20.11.01 (2...
15. Places available **
CERN Multimedia
2003-01-01
Places are available in the following courses: PIPES-2003 - Pratique du Sertissage de tubes métalliques et multicouches : 26.8.03 (stage pratique) The CERN Engineering Data Management System (EDMS) for Engineers : 27.8.03 (1 day, free of charge) CLEAN-2002 : Travailler en salle blanche : 4.9.03 (une demi-journée, séminaire gratuit) The CERN Engineering Data Management System (EDMS) for Local Administrators : 24 & 25.9.03 (2 days, free of charge) Siemens SIMATIC Training : Programmation STEP7 - niveau 1 : 29 - 2.10.03 (4 jours) - ouverture des inscriptions fin août Programmation STEP7 - niveau 2 : 13 - 17.10.03 (5 jours) - ouverture des inscriptions fin août Réseau Simatic Net : 22 & 23.10.03 (2 jours) - ouverture des inscriptions fin août CLEAN-2002 : Working in a Cleanroom : 23.10.03 (half day, free of charge) These courses will be given in French or Englis...
16. Places available**
CERN Multimedia
2003-01-01
Places are available in the following courses: PIPES-2003 - Pratique du sertissage de tubes métalliques et multicouches :26.8.03(stage pratique) The CERN EDMS for Engineers (free of charge) : 27.8.03 (1 day) CLEAN-2002 : Travailler en salle blanche (séminaire gratuit) : 4.9.03(une demi-journée) The CERN EDMS for Local Administrators (free of charge) : 24 & 25.9.03 (2 days) HeREF-2003 : Techniques de la réfrigération Hélium (cours en français avec support en anglais) : 6 - 10.10.2003 (7 demi-journées) The Java Programming Language Level 1 : 6 - 7.10.2003 (2 days) Java 2 Enterprise Edition - Part 2 : Enterprise JavaBeans : 8 - 10.10.2003 (3 days) FileMaker - niveau 1 : 9 & 10.10.03 (2 jours) EXCEL 2000 - niveau 1 : 20 & 22.10.03 (2 jours) AutoCAD 2002 - niveau 1 : 20, 21, 27, 28.10.03 (4 jours) CLEAN-2002 : Working in a Cleanroom (free of charge) : 23.10.03 (half day) AutoCAD Mechanical 6 PowerPack (E) : 23, 24, 30, 31.10 & 12, 13.11.03 (6 days) AutoCAD 2002 - niveau 2...
17. PLACES AVAILABLE
CERN Multimedia
Enseignement Technique; Tél. 74924; Technical Training; Monique Duval; Tel. 74924
2000-01-01
Places are available in the following courses : Premiers pas avec votre PC 12 - 15.9.00 (4 demi-journées) WORD 20, 21 et 26, 27.9.2000 (4 jours) JAVA programming level 1 25 - 26.9.2000 (2 days) Gaz inflammables 1 26.9.2000 (1 journée) Advanced aspects of PERL 5 6.10.2000 (1 day) Initiation au WWW 10 - 12.10.00 (3 demi-journées) WORD : importer et manipuler des images 16.10.2000 (1 journée) FileMaker 17, 18 et 24, 25.10.00 (4 jours) Nouveautés de WORD 19 et 20.10.2000 (2 jours) ACCESS 1er niveau 30 - 31.10.00 (2 jours) Introduction à PowerPoint 6.11.00 (1 journée) Nouveautés d'EXCEL 7.11.2000 (4 demi-journées) Excel 13, 14 et 20, 21.11.00 (4 jours) LabView hands-on 13.11.2000 (4 hours) LabView Basics 1 14 - 16.11.2000 (3 days) MS-Project 1er niveau 14 - 17.11.00 (4 demi-journées) If you wish to participate in one of these courses, please discuss with your supervisor and apply elec...
18. Places available **
CERN Multimedia
2003-01-01
Places are available in the following courses : CLEAN-2002 : Working in a cleanroom (free course, registration required): 11.4.03 (half-day, afternoon) LabView Basics 2 : 10 - 11.4.03 (3 days) DISP-2003 - Spring II Term : Advanced Digital Signal Processing : 30.4, 7, 14, 21.5.03 (4 X 2-hour lectures) AutoCAD 2002 - niveau 1 : 29, 30.4 et 7, 8.5.03 (4 jours) Oracle iDS Reports : Build Internet Reports : 5 - 9.5.03 (5 days) AutoCAD 2002 - niveau 2 : 5 & 6.5.03 (2 jours) AutoCAD Mechanical 6 PowerPack (F) : 12, 13, 20, 21, 27 & 28.5.03 (6 jours) Formation Siemens SIMATIC /Siemens SIMATIC Training : Introduction à STEP7 /Introduction to STEP7 : 3 & 4.6.03 (2 jours/2 days) Programmation STEP7/STEP7 Programming : 31.3 - 4.4.03 / 16 - 20.6.03 (5 jours/5 days) Réseau Simatic Net /Simatic Net Network : 15 & 16.4.03 / 26 & 27.6.03 These courses will be given in French or English following the requests. Cours de sécurité : Etre TSO au CERN : Prochaines sessions : 24, 25 & 27.6....
19. Places available**
CERN Multimedia
2003-01-01
Places are available in the following courses : CLEAN-2002 : Working in a cleanroom (free course, registration required): 11.4.03 (half-day, afternoon) LabView Basics 2 : 10 - 11.4.03 (3 days) DISP-2003 - Spring II Term : Advanced Digital Signal Processing : 30.4, 7, 14, 21.5.03 (4 X 2-hour lectures) AutoCAD 2002 - niveau 1 : 29, 30.4 et 7, 8.5.03 (4 jours) Oracle iDS Reports : Build Internet Reports : 5 - 9.5.03 (5 days) AutoCAD 2002 - niveau 2 : 5 & 6.5.03 (2 jours) AutoCAD Mechanical 6 PowerPack (F) : 12, 13, 20, 21, 27 & 28.5.03 (6 jours) Formation Siemens SIMATIC /Siemens SIMATIC Training : Introduction à STEP7 /Introduction to STEP7 : 3 & 4.6.03 (2 jours/2 days) Programmation STEP7/STEP7 Programming : 31.3 - 4.4.03 / 16 - 20.6.03 (5 jours/5 days) Réseau Simatic Net /Simatic Net Network : 15 & 16.4.03 / 26 & 27.6.03 These courses will be given in French or English following the requests. Cours de sécurité : Etre TSO au CERN : Prochaines sessions : 24, 25 & 27.6...
20. Places available **
CERN Multimedia
2003-01-01
Des places sont disponibles dans les cours suivants : Places are available in the following courses : DISP-2003 - Spring I Term : Introduction to Digital Signal Processing : 20, 27.2, 6, 13, 20, 27.3, 3.4.03 (7 X 2-hour lectures) AXEL-2003 - Introduction to Accelerators : 24 - 28.2.03 (10 X 1-hour lectures) AutoCAD 2002 - niveau 1 : 24, 25.2 & 3, 4.3.03 (4 jours) Introduction à Windows 2000 au CERN : 25.2.03 (1/2 journée) LabView base 2/LabView Basics 2 : 10 & 11.3.03 (2 jours/2 days) langue à définir/Language to be decided C++ for Particle Physicists : 10 - 14.3.03 (6 X 3-hour lectures) Introduction to PVSS : 10.3.03 (half day, afternoon) Basic PVSS : 11 - 13.3.03 (3 days) LabView avancé /LabView Advanced : 12 - 14.3.03 (3 jours/3days) Langue à définir/language to be decided AutoCAD Mechanical 6 PowerPack (F) : 12, 13, 17, 18, 24 & 25.3.03 (6 jours) PVSS - JCOP Framework Tutorial : 14.3.03 (1 day) CLEAN-2002 : Working in a cleanroom : 2.4.03 (half-day, afternoon, free course, regis...
1. Places available **
CERN Multimedia
2003-01-01
Des places sont disponibles dans les cours suivants : Places are available in the following courses : C++ for Particle Physicists : 10 - 14.3.03 (6 X 3-hour lectures) Introduction to PVSS : 10.3.03 (half day, afternoon) Basic PVSS : 11 - 13.3.03 (3 days) PVSS - JCOP Framework Tutorial : 14.3.03 (1 day) CLEAN-2002 : Working in a cleanroom : 2.4.03 (half-day, afternoon, free course, registration required) LabView base 1/LabView Basics 1 : 9 - 11.4.03 (3 jours/3 days) Langue à définir/language to be decided DISP-2003 - Spring II Term : Advanced Digital Signal Processing : 30.4, 7, 14, 21.5.03 (4 X 2-hour lectures) AutoCAD 2002 - niveau 1 : 29, 30.4 et 7, 8.5.03 (4 jours) AutoCAD 2002 - niveau 2 : 5 & 6.5.03 (2 jours) AutoCAD Mechanical 6 PowerPack (F) : 12, 13, 20, 21, 27 & 28.5.03 (6 jours) Formation Siemens SIMATIC /Siemens SIMATIC Training : Introduction à STEP7 /Introduction to STEP7 : 11 & 12.3.03 / 3 & 4.6.03 (2 jours/2 days) Programmation STEP7/STEP7 Programming : 31.3 - 4.4.03 / 16...
2. Places available**
CERN Multimedia
2003-01-01
Places are available in the following courses : Introduction to PVSS : 10.3.03 (half-day, afternoon) CLEAN-2002 : Working in a cleanroom : 2.4.03 (half-day, afternoon, free course, registration required) LabView Basics 1 : 9 - 11.4.03 (3 days) Language to be decided. DISP-2003 - Spring II Term : Advanced Digital Signal Processing : 30.4, 7, 14, 21.5.03 (4 X 2-hour lectures). AutoCAD 2002 - niveau 1 : 29, 30.4 et 7, 8.5.03 (4 jours) AutoCAD 2002 - niveau 2 : 5 & 6.5.03 (2 jours) AutoCAD Mechanical 6 PowerPack (F) : 12, 13, 20, 21, 27 & 28.5.03 (6 jours) Siemens SIMATIC Training: Introduction to STEP7 : 3 & 4.6.03 (2 days) STEP7 Programming : 31.3 - 4.4.03 / 16 - 20.6.03 (5 days) Simatic Net Network : 15 & 16.4.03 / 26 & 27.6.03 These courses will be given in French or English following the requests. Cours de sécurité: Etre TSO au CERN : 3 sessions sont programmées pour 2003 : 25, 26 & 28.3.03 - 24, 25 & 27.6.03 - 4, 5 & 7.11.03 (sessions de 3 jours) ** The number o...
3. Places available **
CERN Multimedia
2003-01-01
Des places sont disponibles dans les cours suivants : Places are available in the following courses : Introduction à Windows 2000 au CERN : 25.2.03 (1/2 journée) LabView base 2/LabView Basics 2 : 10 & 11.3.03 (2 jours/2 days) langue à définir/Language to be decided C++ for Particle Physicists : 10 - 14.3.03 (6 X 3-hour lectures) Introduction to PVSS : 10.3.03 (half day, afternoon) Basic PVSS : 11 - 13.3.03 (3 days) LabView avancé /LabView Advanced : 12 - 14.3.03 (3 jours/3days) Langue à définir/Language to be decided AutoCAD Mechanical 6 PowerPack (F) : 12, 13, 17, 18, 24 & 25.3.03 (6 jours) PVSS - JCOP Framework Tutorial : 14.3.03 (1 day) CLEAN-2002 : Working in a cleanroom : 2.4.03 (half-day, afternoon, free course, registration required) LabView base 1/LabView Basics 1 : 9 - 11.4.03 (3 jours/3 days) Langue à définir/Language to be decided DISP-2003 - Spring II Term : Advanced Digital Signal Processing : 30.4, 7, 14, 21.5.03 (4 X 2-hour lectures) AutoCAD 2002 - niveau 2 : 5 & 6.5.03 (...
4. Places available **
CERN Multimedia
2003-01-01
Des places sont disponibles dans les cours suivants : Places are available in the following courses : WorldFIP 2003 pour utilisateurs : 11-14.2.03 (4 jours) DISP-2003 - Spring I Term : Introduction to Digital Signal Processing : 20, 27.2, 6, 13, 20, 27.3, 3.4.03 (7 X 2-hour lectures) AXEL-2003 - Introduction to Accelerators : 24-28.2.03 (10 X 1-hour lectures) AutoCAD 2002 - niveau 1 : 24, 25.2 & 3, 4.3.03 (4 jours) Introduction à Windows 2000 au CERN : 25.2.03 (1/2 journée) LabView base 2/LabView Basics 2 : 10 & 11.3.03 (2 jours/2 days) langue à définir/Language to be decided C++ for Particle Physicists : 10 - 14.3.03 (6 X 3-hour lectures) Introduction to PVSS : 10.3.03 (half day, afternoon) Basic PVSS : 11 - 13.3.03 (3 days) LabView avancé /LabView Advanced : 12 - 14.3.03 (3 jours/3days) Langue à définir/language to be decided AutoCAD Mechanical 6 PowerPack (F) : 12, 13, 17, 18, 24 & 25.3.03 (6 jours) PVSS - JCOP Framework Tutorial : 14.3.03 (1 day) MAGNE-03 - Magnetism for Technical Ele...
5. Places available**
CERN Multimedia
2003-01-01
Places are available in the following courses : CLEAN-2002 : Working in a cleanroom (free course, registration required) : 2.4.03 (half-day, afternoon) LabView base 1/LabView Basics 1 (Langue à définir/ language to be decided) : 9 - 11.4.03 (3 jours/3 days) DISP-2003 - Spring II Term : Advanced Digital Signal Processing : 30.4, 7, 14, 21.5.03 (4 X 2-hour lectures) AutoCAD 2002 - niveau 1 : 29, 30.4 et 7, 8.5.03 (4 jours) AutoCAD 2002 - niveau 2 : 5 & 6.5.03 (2 jours) AutoCAD Mechanical 6 PowerPack (F) : 12, 13, 20, 21, 27 & 28.5.03(6 jours) Formation Siemens SIMATIC /Siemens SIMATIC Training : Introduction à STEP7 /Introduction to STEP7 : 3 & 4.6.03 (2 jours/2 days) Programmation STEP7/STEP7 Programming : 31.3 - 4.4.03 / 16 - 20.6.03 (5 jours/5 days) Réseau Simatic Net /Simatic Net Network : 15 & 16.4.03 / 26 & 27.6.03 These courses will be given in French or English following the requests. Cours de sécurité : Etre TSO au CERN : 3 sessions sont programmées pour 2003 : 25...
6. Places available
CERN Multimedia
2003-01-01
Places are available in the following courses : CLEAN-2002 : Working in a cleanroom (free course, registration required): 11.4.03 (half-day, afternoon) LabView Basics 2 : 10 - 11.4.03 (3 days) DISP-2003 - Spring II Term : Advanced Digital Signal Processing : 30.4, 7, 14, 21.5.03 (4 X 2-hour lectures) AutoCAD 2002 - niveau 1 : 29, 30.4 et 7, 8.5.03 (4 jours) Oracle iDS Reports : Build Internet Reports : 5 - 9.5.03 (5 days) AutoCAD 2002 - niveau 2 : 5 & 6.5.03 (2 jours) AutoCAD Mechanical 6 PowerPack (F) : 12, 13, 20, 21, 27 & 28.5.03 (6 jours) Formation Siemens SIMATIC /Siemens SIMATIC Training : Introduction à STEP7 /Introduction to STEP7 : 3 & 4.6.03 (2 jours/2 days) Programmation STEP7/STEP7 Programming : 16 - 20.6.03 (5 jours/5 days) Réseau Simatic Net /Simatic Net Network : 15 & 16.4.03 / 26 & 27.6.03 These courses will be given in French or English, depending on demand. Safety course : Etre TSO au CERN : next sessions : 24, 25 & 27.6.03 - 4, 5 & 7....
7. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: LabView DAQ (F) : 7 & 8.2.02 (2 jours) Hands-on Object-Oriented Design & Programming with Java : 11 - 13.02.02 (3 days) PVSS basics : 18 - 22.2.02 (5 days) Introduction à Windows 2000 : 18.2.02 (1 demi-journée) Introduction to the CERN Engineering Data Management System : 20.2.02 (1 day) Introduction à la CAO CADENCE : 20 & 21.2.02 (2 jours) The CERN Engineering Data Management System for Advanced users : 21.2.02 (1 day) LabView Basics 1 : 4 - 6.3.02 (3 days) Introduction au VHDL et utilisation du simulateur de CADENCE : 6 & 7.3.02 (2 jours) LabView Base 2 : 11 & 12.3.02 (2 jours) C++ for Particle Physicists : 11 - 15.3.2002 (6 * 3 hour lectures) LabView Advanced : 13 - 15.3.02 (3 days) Cours sur la migration AutoCAD : AutoCAD : Mise à...
8. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: LabVIEW base 1 : 22 - 24.4.02 (3 jours) CLEAN 2002 : working in a cleanroom: 24.4.02 (half-day, pm) LabVIEW DSC (F) 25 & 26.4.02 (2 jours) AutoCAD : Mise à jour AutoCAD r-14 vers 2002 : 25 & 26.4.02 (2 jours) Cotations selon les normes GPS de l'ISO : 29 - 30.4.02 (2 jours) Introduction to the CERN Engineering Data Management System: 7.5.02 (1 day) LabVIEW Basics 2: 13 & 14.5.02 (2 days) AutoCAD Mechanical 6 PowerPack (F) : 13-14, 17, 21, 27-28.5.02 (6 jours) WorldFIP - Généralités : 14.5.2002 (1/2 journée) WorldFIP - Développer avec MicroFIP HANDLER : 14.5 - après-midi, 15.5.02 - matin (1 jour) WorldFIP - FullFIP FDM : FIP Device Manager (F) : 15.5 - après-midi, 16.5.02 - matin (1 jour) LabVIEW DAQ (F) : 15 & 16.5.02 (2 jours) EXCEL 2000 - niveau 2 : 22 & 23.5.02 (2 jours)...
9. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: The CERN Engineering Data Management System for Advanced users : 16.4.02 (1 day) Migration from AutoCAD 14 towards AutoCAD Mechanical 6 PowerPack: 17 - 19.4 and 2 & 3.5.02 (5 days) AutoCAD - niveau 1 : 22, 23, 29, 30.4 et 6, 7.5.02 (6 jours) LabVIEW base 1 : 22 - 24.4.02 (3 jours) CLEAN 2002 : working in a cleanroom: 24.4.02 (half-day, pm) LabVIEW DSC (F) 25 & 26.4.02 (2 jours) AutoCAD : Mise à jour AutoCAD r-14 vers 2002 : 25 & 26.4.02 (2 jours) Cotations selon les normes GPS de l'ISO : 29 - 30.4.02 (2 jours) Introduction to the CERN Engineering Data Management System: 7.5.02 (1 day) LabVIEW Basics 2: 13 & 14.5.02 (2 days) AutoCAD Mechanical 6 PowerPack (F) : 13-14, 17, 21, 27-28.5.02 (6 jours) WorldFIP - Généralités : 14.5.2002 (1/2 journée) WorldFIP - Développer avec Micr...
10. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: Introduction au PC et à Windows 2000 au CERN : 29 - 30.1.02 (2 jours) LabView Base 1 : 4 - 6.2.02 (3 jours) LabView DAQ (F) : 7 & 8.2.02 (2 jours) Hands-on Object-Oriented Design & Programming with Java : 11 - 13.02.02 (3 days) PVSS basics : 18 - 22.2.02 (5 days) Introduction à Windows 2000 : 18.2.02 (1 demi-journée) Introduction to the CERN Engineering Data Management System : 20.2.02 (1 day) The CERN Engineering Data Management System for Advanced users : 21.2.02 (1 day) C++ for Particle Physicists : 11 - 15.3.2002 (6 * 3 hour lectures) Cours sur la migration AutoCAD : AutoCAD : Mise à jour AutoCAD r-14 vers 2002 (2 jours) AutoCAD Mechanical PowerPack 6 basé sur AutoCAD 2002 (5 jours) If you wish to participate in one of these courses, please discuss with your supervisor and apply electr...
11. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: C++ for Particle Physicists : 11 - 15.3.2002 (6 * 3 hour lectures) Programming the Web for Control Applications : 11, 12, 18, 19.3.2002 (4 * 2 hour lectures) Habilitation électrique : recyclage HT/BT (Français) : 13 - 14.3.2002 (2 * 2 heures) Introduction à la CAO CADENCE : 19 & 20.3.02 (2 jours) LabVIEW base 1 : 22 - 24.4.02 (3 jours) LabVIEW DSC (F) 25 & 26.4.02 (2 jours) LabVIEW Basics 2 : 13 & 14.5.02 (2 days) LabVIEW DAQ (F) : 15 & 16.5.02 (2 jours) Cours sur la migration AutoCAD : AutoCAD : Mise à jour AutoCAD r-14 vers 2002 (2 jours) AutoCAD Mechanical PowerPack 6 basé sur AutoCAD 2002 (5 jours) If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at : Technical Training or fil...
12. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel. 74924
2001-01-01
Places are available in the following courses: LabVIEW - Basics 1 : 10 - 12.12.01 (3 days) Introduction to XML : 12 & 13.12.01 (2 days) Introduction au PC et Windows 2000 : 12 & 14.12.01 (2 jours) LabVIEW - Basics 2 : 13 - 14.12.01 (2 days) Habilitation électrique : superviseurs : 17.12.2001 (1/2 journée) MS-Project 2000 : 10 & 11.01.02 (2 jours) EXCEL 2000 - niveau 2 : 15 - 16.1.02 (2 jours) Sécurité dans les installations cryogéniques: 15-17.1.2002 (2 demi-journées) C++ Programming Level 2 - Traps and Pitfalls : 15 - 18.1.2002 (4 days) ELEC-2002 Winter Term: Readout and system electronics for Physics 15.1.2002 - 7.2.2002 (8 half-days) Nouveautés de WORD 2000 : 18.1.02 (1/2 journée) LabView hands-on : 21.01.02 (1/2 journée) LabView DAQ hands-on : 21.01.02 (1/2 journée) FileMaker Pro : 22 -...
13. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel. 74924
2002-01-01
Places are available in the following courses: LabView hands-on : 21.01.02 (1/2 journée) LabView DAQ hands-on : 21.01.02 (1/2 journée) FileMaker Pro : 22 - 25.1.02 (4 jours) MS-Project 2000 : 22, 24 & 25.01.02 (3 jours) Introduction au PC et à Windows 2000 au CERN : 29 - 30.1.02 (2 jours) LabView Base 1 : 4 - 6.2.02 (3 jours) LabView DAQ (E) : 7 & 8.02.02 (2 days) Hands-on Object-Oriented Design & Programming with Java : 11 - 13.02.02 (3 days) PVSS basics : 11 - 15.2.02 (5 days) Introduction à Windows 2000 : 18.2.02 (1 demi-journée) Introduction to the CERN Engineering Data Management System : 20.2.02 (1 day) The CERN Engineering Data Management System for Advanced users : 21.2.02 (1 day) C++ for Particle Physicists : 11 - 15.3.2002 (6 * 3 hour lectures) Cours sur la migration AutoCAD : AutoCAD : Mise à...
14. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel.74924
2001-01-01
Places are available in the following courses: Habilitation électrique : superviseurs : 5.12.01 (1/2 journée) LabVIEW - Basics 1 : 10 - 12.12.01 (3 days) Introduction au PC et Windows 2000 : 12 & 14.12.01 (2 jours) LabVIEW - Basics 2 : 13 - 14.12.01 (2 days) Habilitation électrique : superviseurs : 17.12.2001 (1/2 journée) EXCEL 2000 - niveau 2 : 15 - 16.1.02 (2 jours) Sécurité dans les installations cryogéniques: 15-17.1.2002 (2 demi-journées) C++ Programming Level 2 - Traps and Pitfalls : 15 - 18.1.2002 (4 days) ELEC-2002 Winter Term: Readout and system electronics for Physics 15.1.2002 - 7.2.2002 (8 half-days) Nouveautés de WORD 2000 : 18.1.02 (1/2 journée) LabView hands-on : 21.01.02 (1/2 journée) LabView DAQ hands-on : 21.01.02 (1/2 journée) FileMaker Pro : 22 - 25.1.02 (4 jours) Frontpage...
15. PLACES AVAILABLE
CERN Multimedia
Enseignement Technique; Tel. 74924
2001-01-01
Places are available in the following courses: MS-Project 2000 : 10 & 11.01.02 (2 jours) EXCEL 2000 - niveau 2 : 15 - 16.1.02 (2 jours) Sécurité dans les installations cryogéniques: 15-17.1.2002 (2 demi-journées) C++ Programming Level 2 - Traps and Pitfalls : 15 - 18.1.2002 (4 days) ELEC-2002 Winter Term: Readout and system electronics for Physics 15.1.2002 - 7.2.2002 (8 half-days) Nouveautés de WORD 2000 : 18.1.02 (1/2 journée) LabView hands-on : 21.01.02 (1/2 journée) LabView DAQ hands-on : 21.01.02 (1/2 journée) FileMaker Pro : 22 - 25.1.02 (4 jours) MS-Project 2000 : 24 & 25.01.02 (2 jours) Introduction au PC et à Windows 2000 au CERN : 29 - 30.1.02 (2 jours) LabView Base 1 : 4 - 6.2.02 (3 jours) LabView DAQ (E) : 7 & 8.02.02 (2 days) Hands-on Object-Oriented Design & Programming with Java : ...
16. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: PVSS Basics : 8 - 12.4.02 (5 days) AutoCAD : Mise à jour AutoCAD r-14 vers 2002 : 25 & 26.4.02 (2 jours) ELEC-2002 : Spring Term : 9, 11, 16, 18, 23, 25, 30.4.02 (7 * 2.5 hours) Object-Oriented Analysis & Design: 16 - 19.4.02 (4 days) Migration from AutoCAD 14 towards AutoCAD Mechanical 6 PowerPack: 17 - 19.4 and 2 & 3.5.02 (5 days) LabVIEW base 1 : 22 - 24.4.02 (3 jours) LabVIEW DSC (F) 25 & 26.4.02 (2 jours) LabVIEW Basics 2 : 13 & 14.5.02 (2 days) EXCEL 2000 - niveau 2 : 22 & 23.5.02 (2 jours) LabVIEW DAQ (F) : 15 & 16.5.02 (2 jours) LabVIEW Basics 1: 3 - 5.6.02 (3 days) LabVIEW DAQ (E): 6 & 7.6.02 (2 days) If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that...
17. PLACES AVAILABLE
CERN Multimedia
Monique DUVAL
2002-01-01
Places are available in the following courses: LabView Basics 1 : 4 - 6.3.02 (3 days) CLEAN-2002 : Working in a Clean Room : 7.3.2002 (half day) LabView Base 2 : 11 & 12.3.02 (2 jours) C++ for Particle Physicists : 11 - 15.3.2002 (6 * 3 hour lectures) Programming the Web for Control Applications : 11, 12, 18, 19.3.2002 (4 * 2 hour lectures) Habilitation électrique : recyclage HT/BT (Français) : 13 - 14.3.2002 (2 * 2 heures) LabView Advanced : 13 - 15.3.02 (3 days) Introduction to the CERN Engineering Data Management System (EDMS) : 20.3.2002 (1 day) The CERN (EDMS) for Advanced Users : 21.3.2002 (1 day) LabVIEW DSC : 25 - 26.4.2002 (2 jours) LabVIEW DAQ : 15 - 16.5.2002 (2 jours) Cours sur la migration AutoCAD : AutoCAD : Mise à jour AutoCAD r-14 vers 2002 (2 jours) AutoCAD Mechanical PowerPack 6 basé ...
18. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel. 74924
2001-01-01
Places are available in the following courses: Automates et réseaux de terrain : 13 & 14.11.01 (3 jours) Introduction à Windows 2000 au CERN : 12 - 14.11.01 (1/2 journée) Introduction to Windows 2000 at CERN : 14.11.01 (half-day) Introduction to PERL 5 : 15 - 16.11.01 (2 days) Sécurité dans les installations cryogéniques : 21 - 22.11.2001 (2 demi-journées) Introduction to C Programming : 21- 23.11.01 (3 days) Programmation TSX Premium 2 : 26 - 30.11.01 (5 jours) Contract Follow-up (F) : 26.11.01 (1/2 journée) Object-Oriented Analysis and Design : 27 - 30.11.2001 (4 days) Introduction to the CERN Engineering Data Management System : 30.11.2001 (1 day) Electromagnetic Compatibility (EMC): Introduction (bilingual) : 3.12.01 (half-day) Introduction to the CERN Engineering Data Management System : 07.12.2001...
19. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: October 2002 Introduction to the CERN Engineering Data Management System (free of charge): 29.10.2002 (1 day) The CERN EDMS for Advanced users (free of charge): 30.10.2002 (1 day) November 2002 LabView hands-on (bilingue/bilingual): 5.11.02 (matin/morning) LabView DAQ hands-on (bilingue/bilingual): 5.11.02 (après-midi/afternoon) Introduction au PC et Windows 2000 au CERN : 6 & 7.11.02 (2 jours) Oracle 8i : Access the Database with Java: 7 & 8.11.02 (2 days) AutoCAD 2002 - niveau 2 : 7 & 8.11.02 (2 jours) Introduction to PVSS (free of charge): 11.11.2002 pm (1/2 day) Basic PVSS: 12 - 14.11.02 (3 days) EXCEL 2000 - niveau 1 : 12 & 13.11.02 (2 jours) CLEAN-2002: Working in a Cleanroom (English, free ...
20. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel. 74924
2001-01-01
Places are available in the following courses: Contract Follow-up (F) : 30.10.01 (1/2 journée) The CERN Engineering Data Management System for Electronics Design : 30.10.01 (1 day) Nouveautés d'Excel 2000 : 5.11.01 (1/2 journée) UNIX pour non-programmeurs : 5 - 7.11.01 (3 jours) Introduction à Windows 2000 au CERN : 6.11.01 (1/2 journée) The Java programming language Level 1: 8 - 9.11.01 (2 days) LabView Base 1 : 12 - 14.11.01 (3 jours) LabVIEW DAQ (F) : 15 & 16.11.01 (2 jours) Automates et réseaux de terrain : 13 & 14.11.01 (2 jours) Introduction to PERL 5 : 15 - 16.11.01 (2 days) LabVIEW - DAQ : 15 - 16.11.01 (2 jours) Introduction to XML : 19 - 20.11.01 (2 days) Introduction to C Programming : 21- 23.11.01 (3 days) Programmation TSX Premium 2 : 26 - 30.11.01 (5 jours) Object-Oriented Analysis and Design : 27 - 30.11.2001 (4 days) Hands...
1. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: ELEC-2002 : Spring Term : 9, 11, 16, 18, 23, 25, 30.4.02 (7 * 2.5 hours) Object-Oriented Analysis & Design: 16 - 19.4.02 (4 days) The CERN Engineering Data Management System for Advanced users: 16.4.02 (1 day) Migration from AutoCAD 14 towards AutoCAD Mechanical 6 PowerPack: 17 - 19.4 and 2 & 3.5.02 (5 days) AutoCAD - niveau 1 : 22, 23, 29, 30.4 et 6, 7.5.02 (6 jours) LabVIEW base 1 : 22 - 24.4.02 (3 jours) CLEAN 2002 : working in a cleanroom: 24.4.02 (half-day, pm) LabVIEW DSC (F) 25 & 26.4.02 (2 jours) AutoCAD : Mise à jour AutoCAD r-14 vers 2002 : 25 & 26.4.02 (2 jours) LabVIEW Basics 2 : 13 & 14.5.02 (2 days) EXCEL 2000 - niveau 1 : 15 & 16.5.02 (2 jours) LabVIEW DAQ (F) : 15 & 16.5.02 (2 jours) EXCEL 2000 - niveau 2 : 22 & 23.5.02 (2 jours) LabVIEW Basics 1: 3 - 5.6.02 ...
2. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: ELEC-2002 : Spring Term : 9, 11, 16, 18, 23, 25, 30.4.02 (7 * 2.5 hours) Object-Oriented Analysis & Design: 16 - 19.4.02 (4 days) The CERN Engineering Data Management System for Advanced users: 16.4.02 (1 day) Migration from AutoCAD 14 towards AutoCAD Mechanical 6 PowerPack: 17 - 19.4 and 2 & 3.5.02 (5 days) AutoCAD - niveau 1 : 22, 23, 29, 30.4 et 6, 7.5.02 (6 jours) LabVIEW base 1 : 22 - 24.4.02 (3 jours) CLEAN 2002 : working in a cleanroom: 24.4.02 (half-day, pm) LabVIEW DSC (F) 25 & 26.4.02 (2 jours) AutoCAD : Mise à jour AutoCAD r-14 vers 2002 : 25 & 26.4.02 (2 jours) Cotations selon les normes GPS de l'ISO : 29 - 30.4.02 (2 jours) Introduction to the CERN Engineering Data Management System: 7.5.02 (1 day) LabVIEW Basics 2 : 13 & 14.5.02 (2 days) AutoCAD Mechanical 6 PowerPack (F) : 13-...
3. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel. 74924
2001-01-01
Places are available in the following courses: Introduction to C Programming : 21- 23.11.01 (3 days) Programmation TSX Premium 2 : 26 - 30.11.01 (5 jours) Contract Follow-up (F) : 26.11.01 (1/2 journée) Habilitation électrique : électriciens network : 27 - 29.11.2001 (3 jours) Object-Oriented Analysis and Design : 27 - 30.11.2001 (4 days) Introduction to the CERN Engineering Data Management System : 30.11.2001 (1 day) Electromagnetic Compatibility (EMC): Introduction (bilingual) : 3.12.01 (half-day) Introduction to the CERN Engineering Data Management System : 07.12.2001 (1 day) LabVIEW - Basics 1 : 10 - 12.12.01 (3 days) LabVIEW - Basics 2 : 13 - 14.12.01 (2 days) EXCEL 2000 - niveau 2 : 15 - 16.1.02 (2 jours) C++ Programming Level 2 - Traps and Pitfalls : 15 - 18.1.2002 (4 days) Nouveautés de WORD 2000 : 18.1.02 (1/2 journée) FileMaker P...
4. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel. 74924
2001-01-01
Places are available in the following courses: Electromagnetic Compatibility (EMC): Introduction (bilingual) : 3.12.01 (half-day) Habilitation électrique : superviseurs : 5.12.01 (1/2 journée) Introduction to the CERN Engineering Data Management System : 07.12.2001 (1 day) LabVIEW - Basics 1 : 10 - 12.12.01 (3 days) Introduction au PC et Windows 2000 : 12 & 14.12.01 (2 jours) LabVIEW - Basics 2 : 13 - 14.12.01 (2 days) Habilitation électrique : superviseurs : 17.12.2001 (1/2 journée) EXCEL 2000 - niveau 2 : 15 - 16.1.02 (2 jours) C++ Programming Level 2 - Traps and Pitfalls : 15 - 18.1.2002 (4 days) Nouveautés de WORD 2000 : 18.1.02 (1/2 journée) LabView hands-on : 21.01.02 (1/2 journée) LabView DAQ hands-on : 21.01.02 (1/2 journée) FileMaker Pro : 22 - 25.1.02 (4 jours) Introduction au PC et à Windows 2000 au CERN : 29 - 30.1....
5. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: LabView hands-on (bilingue/bilingual): 5.11.02 (matin/morning) LabView DAQ hands-on (bilingue/bilingual): 5.11.02 (après-midi/afternoon) Introduction au PC et Windows 2000 au CERN: 6 & 7.11.02 (2 jours) Oracle 8i : Access the Database with Java: 7 & 8.11.02 (2 days) AutoCAD 2002 - niveau 2: 7 & 8.11.02 (2 jours) Introduction to PVSS (free of charge): 11.11.2002 pm (1/2 day) Basic PVSS: 12 - 14.11.02 (3 days) EXCEL 2000 - niveau 1: 12 & 13.11.02 (2 jours) CLEAN-2002: Working in a Cleanroom (English, free of charge): 13.11.2002 (afternoon) LabView Base 1 : 13 - 15.11.02 (3 jours) AutoCAD 2002 - Level 1: 14, 15, 21, 22.11.2002 (4 days) LabVIEW - Advanced: 18 - 20.11.02 (3 days) Auto...
6. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: Introduction à DesignSpace : 16.10.02 (1 journée) AutoCAD Mechanical 6 PowerPack (F) : 21, 22, 23.10 et 4, 5, 6.11.02 (6 jours) Introduction à ANSYS : 21 - 25.10.02 (5 jours/days) HREF-2002: Helium Refrigeration Techniques (English-French, bilingual) : 21 - 25.10.2002 (7 half days) LabVIEW Basics 1 (English): 21 - 23.10.02 (3 days) LabVIEW Basics 2 (English): 24 & 25.10.02 (2 days) Oracle 8i : Access the Database with Java: 7 & 8.11.02 (2 days) AutoCAD 2002 - niveau 2 : 7 & 8.11.02 (2 jours) AutoCAD 2002 - Level 1: 14, 15, 21, 22.11.02 (4 days) LabVIEW - Advanced (English) : 18 - 20.11.2002 (3 days) AutoCAD 2002 - niveau 1 : 19, 20, 25, 26.11.02 (4 jours) Oracle iDS Designer: First Class: ...
7. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: Introduction to Oracle 8i : SQL and PL/SQL: 7 - 11.10.02 (5 days) CLEAN-2002 : Working in a Cleanroom (free of charge): 10.10.02 (half-day, p.m.) LabView Hands-on (bilingue/bilingual) : 10.10.02 (matin/morning) LabView DAQ Hands-on (bilingue/bilingual) 10.10.02 (après-midi /afternoon) Introduction à DesignSpace : 16.10.02 (1 journée) Introduction to DesignSpace: 17.10.02 (1 day) AutoCAD Mechanical 6 PowerPack (F) : 21, 22, 23.10 et 4, 5, 6.11.02 (6 jours) Introduction à ANSYS/Introduction to ANSYS (langue à définir suivant demande/ Language to be chosen according to demand): 21 - 25.10.02 (5 jours/days) HREF-2002: Helium Refrigeration Techniques (English-French, bilingual) : 21 - 25.10.2002 (7 half days) HREF-2002: Techniques de la...
8. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: LabView Base 1 : 23 - 25.9.02 (3 jours) Object-Oriented Analysis & Design using UML: 25 - 27.9.02 (3 days) LabView DAQ (E): 26 - 27.9.02 (2 days) Introduction to Oracle 8i : SQL and PL/SQL: 7 - 11.10.02 (5 days) CLEAN-2002 : Working in a Cleanroom (free of charge): 10.10.02 (half-day, p.m.) AutoCAD 2002 - niveau 2 : 14 - 15.10.02 (2 jours) Introduction à DesignSpace : 16.10.02 (1 journée) Introduction to DesignSpace: 17.10.02 (1 day) AutoCAD 2002 - Level 1: 17, 18, 24, 25.10.02 (4 days) AutoCAD Mechanical 6 PowerPack (F) : 21, 22, 23.10 et 4, 5, 6.11.02 (6 jours) Introduction à ANSYS/Introduction to ANSYS (langue à définir suivant demande/ Language to be chosen according to demand):...
9. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: Introduction to Oracle 8i : SQL and PL/SQL: 7 - 11.10.02 (5 days) CLEAN-2002 : Working in a Cleanroom (free of charge): 10.10.02 (half-day, p.m.) AutoCAD 2002 - niveau 2 : 14 - 15.10.02 (2 jours) Introduction à DesignSpace : 16.10.02 (1 journée) Introduction to DesignSpace: 17.10.02 (1 day) AutoCAD 2002 - Level 1: 17, 18, 24, 25.10.02 (4 days) AutoCAD Mechanical 6 PowerPack (F) : 21, 22, 23.10 et 4, 5, 6.11.02 (6 jours) Introduction à ANSYS/Introduction to ANSYS (langue à définir suivant demande/ Language to be chosen according to demand): 21 - 25.10.02 (5 jours/days) HREF-2002: Helium Refrigeration Techniques (English-French, bilingual) : 21 - 25.10.2002 (7 half days) HREF-2002: Techniques de la Réfri...
10. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: November 2002 LabView hands-on (bilingue/bilingual): 5.11.02 (matin/morning) LabView DAQ hands-on (bilingue/bilingual): 5.11.02 (après-midi/afternoon) PCAD Schémas - Débutants : 5 & 6.11.02 (2 jours) PCAD PCB - Débutants : 9 - 11.11.02 (3 jours) Introduction au PC et Windows 2000 au CERN : 6 & 7.11.02 (2 jours) Oracle 8i : Access the Database with Java : 7 & 8.11.02 (2 days) Introduction to PVSS (free of charge): 11.11.2002 pm (1/2 day) Basic PVSS: 12 - 14.11.02 (3 days) EXCEL 2000 - niveau 1 : 12 & 13.11.02 (2 jours) CLEAN-2002: Working in a Cleanroom (English, free of charge): 13.11.2002 (afternoon) LabView Base 1 : 13 - 15.11.02 (3 jours) AutoCAD 2002 - niveau 1 : 14, 15, 21, 22.11.02 (4 jours) LabVIEW - Advanced: 18 - 20.11.02 (3 days) Hands-on Object-Oriented Design and Programming with C++ : 19 - 21.11.02 (3 days) LabVIEW - Basics 2: 21 - 22.11.02 ...
11. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: Java 2 Enterprise Edition - Part 2: Enterprise JavaBeans: 18 - 20.9.02 (3 days) AutoCAD 2002 - niveau 1 : 19, 20, 26, 27.9.02 (4 jours) LabView Base 1 : 23 - 25.9.02 (3 jours) Object-Oriented Analysis & Design using UML: 25 - 27.9.02 (3 days) LabView DAQ (E): 26 - 27.9.02 (2 days) Introduction to Oracle 8i : SQL and PL/SQL: 7 - 11.10.02 (5 days) CLEAN-2002 : Working in a Cleanroom (free of charge): 10.10.02 (half-day, p.m.) AutoCAD 2002 - niveau 2 : 14 - 15.10.02 (2 jours) Introduction à DesignSpace : 16.10.02 (1 journée) Introduction to DesignSpace: 17.10.02 (1 day) AutoCAD 2002 - Level 1: 17, 18, 24, 25.10.02 (4 days) AutoCAD Mechanical 6 PowerPack (F) : 21, 22, 23.10 et 4, 5, 6.11....
12. PLACES AVAILABLE
CERN Multimedia
Monique Duval
2002-01-01
Places are available in the following courses: November 2002 Introduction to PVSS (free of charge): 11.11.02 (afternoon) EXCEL 2000 - niveau 1 : 12 & 13.11.02 (2 jours) CLEAN-2002: Working in a Cleanroom (English, free of charge): 13.11.2002 (afternoon) AutoCAD 2002 - niveau 1 : 14, 15, 21, 22.11.02 (4 jours) Hands-on Object-Oriented Design and Programming with C++: 19 - 21.11.02 (3 days) EXCEL 2000 - niveau 2 : 25 & 26.11.02 (2 jours) FrontPage 2000 - niveau 1 : 27 & 28.11.02 (2 jours) December 2002 LabVIEW - DSC (English) : 2 - 3.12.02 (2 days) AutoCAD 2002 - niveau 2 : 2 & 3.12.02 (2 jours) FileMaker (Français) : 2 - 5.12.02 (4 jours) PCAD Schémas - Débutants : 5 & 6.12.02 ...
13. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel. 74924
2001-01-01
Places are available in the following courses: Introduction à Windows 2000 au CERN : 2 sessions de 1/2 journée les 24 et 25.9.01 PROFIBUS : 25 - 26.9.01 (2 jours) PROFIBUS : 27 - 28.9.01 (2 days) PowerPoint 2000 : 1 et 2.10.01 (2 jours) EXCEL 2000 - niveau 1 : 3 et 4.10.01 (2 jours) Automates et réseaux de terrain : 3 - 4.10.2001 (2 jours) PCAD Schémas - débutants : 4 - 5.10.01 (2 jours) Introduction à Outlook : 5.10.01 (1 journée) Frontpage 2000 - niveau 1 : 8 et 9.10.01 (2 jours) PCAD PCB - débutants : 8 - 10.10.01 (3 jours) C++ for Particle Physicists : 8 - 12.10.01 (6 3-hour lectures) MS-Project 2000 - niveau 1 : 15 - 18.10.01 (4 demi-journées) LabView Basics 1 : 15 - 17.10.01 (3 days) Programmation TSX Premium 1 : 15 - 19.10.01 (5 jours) WORD 2000 : importer et manipuler des images : 19.10.01 (1 journée) Programmation TSX Premium 1 : 22 - 26.10.01...
14. PLACES AVAILABLE
CERN Multimedia
Technical training; Tel. 74924
2001-01-01
Places are available in the following courses: PVSS Basics : 20 - 24.8.01 (5 days) PROFIBUS : 25 - 26.9.01 (2 jours) PROFIBUS : 27 - 28.9.01 (2 days) PCAD Schémas - débutants : 4 - 5.10.01 (2 jours) PCAD PCB - débutants : 8 - 10.10.01 (3 jours) Programming TSX Premium 1: 15 - 19.10.01 (5 days) Programmation TSX Premium 1 : 22 - 26.10.01 (5 jours) Programming TSX Premium 2: 19 - 23.11.01 (5 days) Programmation TSX Premium 2 : 26 - 30.11.01 (5 jours) The following LabView courses will be given in either English or French according to demand LabVIEW - Base 1 / LabVIEW - Basics 1 : 10 - 12.9.01 (3 jours / 3 days) LabVIEW - DAQ / LabVIEW - DAQ : 13 - 14.9.01 (2 jours / 2 days) LabVIEW - Base 1 / LabVIEW - Basics 1 : 15 - 17.10.01 (3 jours / 3 days) LabVIEW - Base 2 / LabVIEW - Basics 2 : 18 - 19.10.01 (2 jours / 2 days) LabVIEW - Base 1 / LabVIEW - Basics 1 : 12 - 14.11.01 (3 jours / 3 days) LabVIEW - DAQ / LabVIEW - DAQ : 15 - 16.11.01 (2 jours / 2...
15. PLACES AVAILABLE
CERN Multimedia
Technical training; Tel. 74924
2001-01-01
16. PLACES AVAILABLE
CERN Multimedia
Technical Training; Tel. 74924
2001-01-01
Places are available in the following courses: PVSS Basics : 20 - 24.8.01 (5 days) PROFIBUS : 25 - 26.9.01 (2 jours) PROFIBUS : 27 - 28.9.01 (2 days) PCAD Schémas - débutants : 4 - 5.10.01 (2 jours) PCAD PCB - débutants : 8 - 10.10.01 (3 jours) Programming TSX Premium 1: 15 - 19.10.01 (5 days) Programmation TSX Premium 1 : 22 - 26.10.01 (5 jours) Programming TSX Premium 2: 19 - 23.11.01 (5 days) Programmation TSX Premium 2 : 26 - 30.11.01 (5 jours) The following LabView courses will be given in either English or French according to demand LabVIEW - Base 1 / LabVIEW - Basics 1 : 10 - 12.9.01 (3 jours / 3 days) LabVIEW - DAQ / LabVIEW - DAQ : 13 - 14.9.01 (2 jours / 2 days) LabVIEW - Base 1 / LabVIEW - Basics 1 : 15 - 17.10.01 (3 jours / 3 days) LabVIEW - Base 2 / LabVIEW - Basics 2 : 18 - 19.10.01 (2 jours / 2 days) LabVIEW - Base 1 / LabVIEW - Basics 1 : 12 - 14.11.01 (3 jours / 3 days) LabVIEW - DAQ / LabVIEW - DAQ : 15 - 16.11.01 (2 jours / 2...
17. Places available
CERN Multimedia
2003-01-01
Places are available in the following courses : EXCEL 2000 - niveau 1 : 10 & 11.6.03 (2 jours) Conception de PCB rapides dans le flot Cadence : 11.6.03 (matin) EXCEL 2000 - level 1 : 12 & 13.6.03 (2 days) Introduction to PVSS : 16.6.03 (p.m.) Basic PVSS : 17 - 19.6.03 (3 days) Réalisation de PCB rapides dans le flot Cadence : 17.6.03 (matin) PVSS - JCOP Framework Tutorial : 20.6.03 (1 day) EXCEL 2000 - niveau 2 : 24 & 25.6.03 (2 jours) Siemens SIMATIC Training : Introduction to STEP7 : 3 & 4.6.03 (2 jours/2 days) STEP7 Programming : 16 - 20.6.03 (5 jours/5 days) Simatic Net Network : 26 & 27.6.03 (2 jours/2 days) These courses will be given in French or English, depending on demand. Programmation automate Schneider : Programmation automate Schneider TSX Premium - 1er niveau : 10 - 13.6.03 (4 jours) - audience : anyone who wants to master the setting-up and programming of a TSX Premium PLC - objectives : to master the setting-up and programming of an autom...
18. Places available
CERN Multimedia
2003-01-01
Places are available in the following courses : DISP-2003 - Spring II Term : Advanced Digital Signal Processing : 30.4, 7, 14, 21.5.03 (4 X 2-hour lectures) Programmation de pilotes périphériques : 5 - 8.5.03 (4 jours) Oracle iDS Reports : Build Internet Reports : 5 - 9.5.03 (5 days) LabView DAQ (language to be defined) : 8 & 9.5.03 AutoCAD Mechanical 6 PowerPack (F) : 12, 13, 20, 21, 27 & 28.5.03 (6 jours) FrontPage 2000 - niveau 1 : 20 & 21.5.03 (2 jours) AutoCAD 2002 - niveau 2 : 3 & 4.6.03 (2 jours) EXCEL 2000 - niveau 1 : 10 & 11.6.03 (2 jours) EXCEL 2000 - level 1 : 12 & 13.6.03 (2 days) PowerPoint 2000 (F) : 17 & 18.6.03 (2 jours) FrontPage 2000 - niveau 2 : 19 & 20.6.03 (2 jours) LabView DSC (langue à décider/language to be defined) : 19 & 20.6.03 EXCEL 2000 - niveau 2 : 24 & 25.6.03 (2 jours) Siemens SIMATIC Training : Introduction to STEP7 : 3 & 4.6.03 (2 days) STEP7 Programming : 16 - 20.6.03 (5 days) Simatic Net Network : 26 & 27.6.03 ...
19. Places available
CERN Multimedia
2003-01-01
Places are available in the following courses : DISP-2003 - Spring II Term : Advanced Digital Signal Processing : 30.4, 7, 14, 21.5.03 (4 X 2-hour lectures) Oracle iDS Reports : Build Internet Reports : 5 - 9.5.03 (5 days) LabView DAQ (language to be defined) : 8 & 9.5.03 AutoCAD Mechanical 6 PowerPack (F) : 12, 13, 20, 21, 27 & 28.5.03 (6 jours) AutoCAD 2002 - niveau 2 : 3 & 4.6.03 (2 jours) LabView DSC (language to be defined) : 19 & 20.6.03 Siemens SIMATIC Training : Introduction to STEP7 : 3 & 4.6.03 (2 days) STEP7 Programming : 16 - 20.6.03 (5 days) Simatic Net Network : 15 & 16.4.03 / 26 & 27.6.03 (sessions of 2 days) These courses will be given in French or English, depending on demand. Safety course : Etre TSO au CERN : next sessions : 24, 25 & 27.6.03 - 4, 5 & 7.11.03 (3-day sessions) If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description ...
20. Places available
CERN Document Server
2003-01-01
Places are available in the following courses: CLEAN-2002 : Travailler en salle blanche (séminaire gratuit) : 4.9.03 (une demi-journée) The CERN EDMS for Local Administrators (free of charge) : 24 & 25.9.03 (2 days) HeREF-2003 : Techniques de la réfrigération Hélium (cours en français avec support en anglais) : 6 - 10.10.2003 (7 demi-journées) The Java Programming Language Level 1 : 6 - 7.10.2003 (2 days) Java 2 Enterprise Edition - Part 2 : Enterprise JavaBeans : 8 - 10.10.2003 (3 days) FileMaker - niveau 1 : 9 & 10.10.03 (2 jours) EXCEL 2000 - niveau 1 : 20 & 22.10.03 (2 jours) AutoCAD 2002 - niveau 1 : 20, 21, 27, 28.10.03 (4 jours) CLEAN-2002 : Working in a Cleanroom (free of charge) : 23.10.03 (half day) AutoCAD Mechanical 6 PowerPack (E) : 23, 24, 30, 31.10 & 12, 13.11.03 (6 days) AutoCAD 2002 - niveau 2 : 10 & 11.11.03 (2 jours)...
1. Places available
CERN Multimedia
2003-01-01
Places are available in the following courses : The CERN EDMS for Local Administrators : 24 & 25.9.03 (2 days, free of charge) HeREF-2003 : Techniques de la réfrigération Hélium (cours en français avec support en anglais) : 6 - 10.10.2003 (7 demi-journées) The Java Programming Language Level 1 : 6 - 7.10.2003 (2 days) Java 2 Enterprise Edition - Part 2 : Enterprise JavaBeans : 8 - 10.10.2003 (3 days) FileMaker - niveau 1 : 9 & 10.10.03 (2 jours) EXCEL 2000 - niveau 1 : 20 & 22.10.03 (2 jours) AutoCAD 2002 - niveau 1 : 20, 21, 27, 28.10.03 (4 jours) CLEAN-2002 : Working in a Cleanroom : 23.10.03 (half day, free of charge) AutoCAD 2002 - Level 1 : 3, 4, 12, 13.11.03 (4 days) AutoCAD 2002 - niveau 2 : 10 & 11.11.03 (2 jours) ACCESS 2000 - niveau 1 : 13 & 14.11.03 (2 jours) AutoCAD Mechanical 6 PowerPack (E) : 17, 18, 24, 25.11 & 1, 2.12.03 (6...
2. Places available
CERN Document Server
2003-01-01
Places are available in the following courses : Introduction to the CERN Engineering Data Management System : 28.1.03 (1 day) AutoCAD 2002 - niveau 1 : 24, 25.2 et 3, 4.3.03 (4 jours) AutoCAD 2002 - niveau 2 : 27 & 28.2.03 (2 jours) C++ for Particle Physicists : 10 - 14.3.03 (6 X 3 hour lectures) AutoCAD Mechanical 6 PowerPack (F) : 12, 13, 17, 18, 24 & 25.3.03 (6 jours) CLEAN-2002 : Working in a cleanroom : 25.3.03 (half-day, afternoon, free course, registration required) Formation Siemens SIMATIC /Siemens SIMATIC Training : Introduction à STEP7 /Introduction to STEP7 : 11 & 12.3.03 / 3 & 4.6.03 (2 jours/2 days) Programmation STEP7/STEP7 Programming : 31.3 - 4.4.03 / 16 - 20.6.03 (5 jours/5 days) Réseau Simatic Net /Simatic Net Network : 15 & 16.4.03 / 26 & 27.6.03 These courses will be given in French or English, depending on demand. * Etant do...
3. Places available
CERN Multimedia
2003-01-01
Places are available in the following courses: The CERN EDMS for Local Administrators (free of charge) : 24 & 25.9.03 (2 days) HeREF-2003 : Techniques de la réfrigération Hélium (cours en français avec support en anglais) : 6 - 10.10.2003 (7 demi-journées) The Java Programming Language Level 1 : 6 - 7.10.2003 (2 days) Java 2 Enterprise Edition - Part 2 : Enterprise JavaBeans : 8 - 10.10.2003 (3 days) FileMaker - niveau 1 : 9 & 10.10.03 (2 jours) EXCEL 2000 - niveau 1 : 20 & 22.10.03 (2 jours) AutoCAD 2002 - niveau 1 : 20, 21, 27, 28.10.03 (4 jours) CLEAN-2002 : Working in a Cleanroom (free of charge) : 23.10.03 (half day) AutoCAD Mechanical 6 PowerPack (E) : 23, 24, 30, 31.10 & 12, 13.11.03 (6 days) AutoCAD 2002 - niveau 2 : 10 & 11.11.03 (2 jours) ACCESS 2000 - niveau 1 : 13 & 14.11.03 (2 jours) FrontPage 2000 - niveau 1 : 20...
4. Louisiana ESI: INDEX (Index Polygons)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains vector polygons representing the boundaries of all the hardcopy cartographic products produced as part of the Environmental Sensitivity Index...
5. Virginia ESI: INDEX (Index Polygons)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains vector polygons representing the boundaries of all hardcopy cartographic products produced as part of the Environmental Sensitivity Index...
6. Maryland ESI: INDEX (Index Polygons)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains vector polygons representing the boundaries of all hardcopy cartographic products produced as part of the Environmental Sensitivity Index...
7. Glycemic index and disease.
Science.gov (United States)
Pi-Sunyer, F Xavier
2002-07-01
It has been suggested that foods with a high glycemic index are detrimental to health and that healthy people should be told to avoid these foods. This paper takes the position that not enough valid scientific data are available to launch a public health campaign to disseminate such a recommendation. This paper explores the glycemic index and its validity and discusses the effect of postprandial glucose and insulin responses on food intake, obesity, type 1 diabetes, and cardiovascular disease. Presented herein are the reasons why it is premature to recommend that the general population avoid foods with a high glycemic index.
8. INDEXING MECHANISM
Science.gov (United States)
Kock, L.J.
1959-09-22
A device is presented for loading and unloading fuel elements containing material fissionable by neutrons of thermal energy. The device comprises a combination of mechanical features including a base, a lever pivotally attached to the base, an indexing plate on the base parallel to the plane of lever rotation and having a plurality of apertures, the apertures being disposed in rows, each aperture having a keyway, an index pin movably disposed on the lever normal to the plane of rotation, a key on the pin, a sleeve on the lever spaced from and parallel to the index pin, a pair of pulleys and a cable disposed between them, an open collar rotatably attached to the sleeve and linked to one of the pulleys, a pin extending from the collar, and a bearing movably mounted in the sleeve and having at least two longitudinal grooves in the outside surface.
9. Afghanistan Index
DEFF Research Database (Denmark)
Linnet, Poul Martin
2007-01-01
The Afghanistan Index is a compilation of quantitative and qualitative data on the reconstruction and security effort in Afghanistan. The index aims at providing data for benchmarking international performance and thus gives the reader a quick way to retrieve valid data on a regular basis. The data are divided into different indicators such as security, polls, drugs, social, economic, refugees, etc. This represents a practical division and does not mean that a picture of, for instance, security can be obtained by looking solely at the data under security. A more valid picture of security must incorporate an integrated look at all the data: the economic data, for instance, provide one element of the whole picture of security.
10. SUBJECT INDEX
Institute of Scientific and Technical Information of China (English)
2003-01-01
2, 4, 6-trinitrobenzenesulphonic acid, zinc sulfate, experimental colitis, 2003328AC133 antigen, hematopoietic stem cells, fetal blood, immunophe-notyping, 2003138ALR2 gene, eNOS gene, PON1 gene, RAGE gene, 2003179 ATN-ISI, prognosis, acute renal failure, acute tubular necrosis-individual severity index, acute physiology and chronic health evaluation, 2003118 Alzheimer disease, interleukin-1 beta, tumor necrosis factor alpha,
11. Kitap İndeksleri / Book Indexing
Directory of Open Access Journals (Sweden)
Meral Alakuş
2006-04-01
Full Text Available This article reviews book indexes, which are in fact the oldest indexes used in the world, from a variety of viewpoints. They differ from journal indexes and database indexes, which are ongoing projects; book indexes are unique in their own framework, as each one is a completed and finished unit. The construction of book indexes, types of indexes (according to subject headings and proper names; synthesis and analytic methods), and formats of indexes (indented and run-in) are described. A list of important conventions relating to book indexes appears at the end of the article.
12. Review of Cohesion in Indexing
Directory of Open Access Journals (Sweden)
Hasan Ashrafi Rizi
2007-07-01
Full Text Available Indexers often disagree on judging the terms that best reflect the content of a document. This difference of opinion highlights one characteristic of indexing: indexing cohesion. Also known as consistency, it has received little study in the past few years, yet its importance for effective information retrieval and for expanding access points to document content has recently been acknowledged. The present paper investigates cohesion in indexing. In addition to presenting the definitions offered by experts, it notes the factors influencing indexing cohesion and offers methods for measuring it.
13. Venture Capital Industry Index Portfolio Analysis
Directory of Open Access Journals (Sweden)
Dagang Yang
2013-05-01
Full Text Available This paper uses index analysis methods, knowledge of venture capital, and index-fund investment ideas to set up, in turn, the Markowitz model and the single-index model of index investing. The Markowitz model requires too large a computational workload for risk calculation, while the single-index model, although slightly less accurate, can certainly be used well in practice. We therefore use the single-index model to index-invest in Shanghai 10 index securities and apply LINGO software to compute venture capital portfolios with different yields.
14. Diet quality assessment indexes
Directory of Open Access Journals (Sweden)
Kênia Mara Baiocchi de Carvalho
2014-10-01
Full Text Available Various indices and scores based on admittedly healthy dietary patterns or food guides for the general population, or aiming at the prevention of diet-related diseases have been developed to assess diet quality. The four indices preferred by most studies are: the Diet Quality Index; the Healthy Eating Index; the Mediterranean Diet Score; and the Overall Nutritional Quality Index. Other instruments based on these indices have been developed and the words 'adapted', 'revised', or 'new version I, II or III' added to their names. Even validated indices usually find only modest associations between diet and risk of disease or death, raising questions about their limitations and the complexity associated with measuring the causal relationship between diet and health parameters. The objective of this review is to describe the main instruments used for assessing diet quality, and the applications and limitations related to their use and interpretation.
15. Maslov index for Hamiltonian systems
Directory of Open Access Journals (Sweden)
Alessandro Portaluri
2008-01-01
Full Text Available The aim of this article is to give an explicit formula for computing the Maslov index of the fundamental solutions of linear autonomous Hamiltonian systems in terms of the Conley-Zehnder index and the map time one flow.
16. Index Bioclimatic "Wind-Chill"
Directory of Open Access Journals (Sweden)
Teodoreanu Elena
2015-05-01
Full Text Available This paper presents an important bioclimatic index which shows the influence of wind on human thermoregulation. When the air temperature is high, wind increases thermal comfort; but wind matters more to the body when the air temperature is low. The lower the air temperature and the higher the wind speed, the faster the human body is threatened with freezing. The wind-chill index is used in the daily cold-season weather forecast in Canada, the USA, Russia (as a temperature "equivalent" at the facial skin), etc. The index can also be used for bioclimatic regionalization, in the form of a skin temperature index.
17. Scientific Journal Indexing
Directory of Open Access Journals (Sweden)
Getulio Teixeira Batista
2007-08-01
Full Text Available The visibility of online publishing compared to offline is quite impressive. Lawrence (2001) computed the percentage increase across 1,494 venues containing at least five offline and five online articles. The results showed an average of 336% more citations to online articles than to offline articles published in the same venue. If articles published in the same venue are of similar quality, then online articles are more highly cited because of their easier access. Thomson Scientific, traditionally concerned with printed journals, announced on November 28, 2005 the launch of the Web Citation Index™, a multidisciplinary citation index of scholarly content from institutional and subject-based repositories (http://scientific.thomson.com/press/2005/8298416/). The Web Citation Index connects pre-print articles, institutional repositories, and open access (OA) journals to abstracting and indexing (A&I) services (Chillingworth, 2005). Basically all research funds are government-granted, taxpayer-supported funds, and results should therefore be made freely available to the community. Free online availability facilitates access to research findings, maximizes interaction among research groups, and optimizes the efficiency of research efforts and funds. Therefore, Ambi-Água is committed to providing free access to its articles. An important aspect of Ambi-Água is the journal's publication and management system: it uses the Electronic System for Journal Publishing (SEER - http://www.ibict.br/secao.php?cat=SEER). This system was translated and customized by the Brazilian Institute for Science and Technology Information (IBICT) based on the Open Journal Systems software developed by the Public Knowledge Project of the University of British Columbia (http://pkp.sfu.ca/ojs/). The big advantage of using this system is its compatibility with the OAI-PMH protocol for metadata harvesting, which greatly promotes published articles
18. EJSCREEN Indexes 2015 Public
Data.gov (United States)
U.S. Environmental Protection Agency — There is an EJ Index for each environmental indicator. There are eight EJ Indexes in EJSCREEN reflecting the 8 environmental indicators. The EJ Index names are:...
19. EJSCREEN Indexes 2016 Public
Data.gov (United States)
U.S. Environmental Protection Agency — There is an EJ Index for each environmental indicator. There are eleven EJ Indexes in EJSCREEN reflecting the 11 environmental indicators. The EJ Index names are:...
20. EJSCREEN Indexes 2015 Internal
Data.gov (United States)
U.S. Environmental Protection Agency — There is an EJ Index for each environmental indicator. There are 12 EJ Indexes in EJSCREEN reflecting the 12 environmental indicators. The EJ Index names are:...
1. Methods for Predicting Stock Indexes
Directory of Open Access Journals (Sweden)
Martha Cecilia García
2013-11-01
Full Text Available This paper presents a literature review of methods that have been used over the last two decades to predict stock market indexes. The methods studied range from those that capture the linear characteristics present in stock market indexes, through those that focus on non-linear features, to hybrid methods that are more robust, since they capture both linear and non-linear features. In addition, the review includes methods that use macroeconomic variables to predict indexes of different stock exchanges around the world.
2. Negative refractive index metamaterials
Directory of Open Access Journals (Sweden)
2006-07-01
Full Text Available Engineered materials composed of designed inclusions can exhibit exotic and unique electromagnetic properties not inherent in the individual constituent components. These artificially structured composites, known as metamaterials, have the potential to fill critical voids in the electromagnetic spectrum where material response is limited and enable the construction of novel devices. Recently, metamaterials that display negative refractive index – a property not found in any known naturally occurring material – have drawn significant scientific interest, underscoring the remarkable potential of metamaterials to facilitate new developments in electromagnetism.
3. The PC index: review of methods
Directory of Open Access Journals (Sweden)
2010-10-01
Full Text Available The Polar Cap (PC) index is a controversial topic within the IAGA scientific community. Since 1997, discussions of the validity of the index to be endorsed as an official IAGA index have ensued. There is no doubt as to the scientific merit of the index, which is not discussed here. What is in doubt is the methodology of the derivation of the index by different groups. The Polar Cap indices (PC: PCN, northern; PCS, southern) described in Troshichev et al. (2006) and Stauning et al. (2006), both termed the "unified PC index", and the PCN index routinely derived at DMI are inspected using only available published literature. They are found to contain different derivation procedures, and thus are not unified. The descriptions of the derivation procedures are found to be inadequate for independently deriving the PC indices.
4. Modification of Low Refractive Index Polycarbonate for High Refractive Index Applications
Directory of Open Access Journals (Sweden)
Gunjan Suri
2009-01-01
Full Text Available Polycarbonates and polythiourethanes are the most popular materials in use today for optical applications. Polycarbonates fall into two categories: low refractive index and medium refractive index. The present paper describes the conversion of low refractive index polycarbonates into a high refractive index material through the use of a high refractive index monomer, a polythiol, as an additive. Novel polycarbonates have been obtained whose refractive index and Abbe number can be tailor-made. Thermal studies and refractive index determination indicate the formation of a new polymer with improved properties, suitable for optical applications.
5. Nucleic acid indexing
Science.gov (United States)
Guilfoyle, Richard A.; Guo, Zhen
1999-01-01
A restriction site indexing method for selectively amplifying any fragment generated by a Class II restriction enzyme includes adaptors specific to fragment ends containing adaptor indexing sequences complementary to fragment indexing sequences near the termini of fragments generated by Class II enzyme cleavage. A method for combinatorial indexing facilitates amplification of restriction fragments whose sequence is not known.
6. Automatic inference of indexing rules for MEDLINE
Directory of Open Access Journals (Sweden)
Shooshan Sonya E
2008-11-01
Full Text Available Background: Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. Methods: In this paper, we describe the use and customization of Inductive Logic Programming (ILP) to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Results: Our results show that this ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI), a system producing automatic indexing recommendations for MEDLINE. Conclusion: We expect the sets of ILP rules obtained in this experiment to be integrated into MTI.
7. CENDI Indexing Workshop
Science.gov (United States)
1994-01-01
The CENDI Indexing Workshop held at NASA Headquarters, Two Independence Square, 300 E Street, Washington, DC, on September 21-22, 1994 focused on the following topics: machine aided indexing, indexing quality, an indexing pilot project, the MedIndEx Prototype, Department of Energy/Office of Scientific and Technical Information indexing activities, high-tech coding structures, category indexing schemes, and the Government Information Locator Service. This publication consists mostly of viewgraphs related to the above noted topics. In an appendix is a description of the Government Information Locator Service.
8. American Samoa ESI: INDEX (Index Polygons)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains vector polygons representing the boundaries of all the hardcopy cartographic products produced as part of the Environmental Sensitivity Index...
9. Human Use Index (Future)
Data.gov (United States)
U.S. Environmental Protection Agency — Human land uses may have major impacts on ecosystems, affecting biodiversity, habitat, air and water quality. The human use index (also known as U-index) is the...
10. Master Veteran Index (MVI)
Data.gov (United States)
Department of Veterans Affairs — As of June 28, 2010, the Master Veteran Index (MVI) database based on the enhanced Master Patient Index (MPI) is the authoritative identity service within the VA,...
11. Glycemic index and diabetes
Science.gov (United States)
... this page: //medlineplus.gov/ency/patientinstructions/000941.htm Glycemic index and diabetes ... GI diet also may help with weight loss. Glycemic Index of Certain Foods: Low GI foods (0 to ...
12. IndexCat
Data.gov (United States)
U.S. Department of Health & Human Services — IndexCat provides access to the digitized version of the printed Index-Catalogue of the Library of the Surgeon General's Office; eTK for medieval Latin texts; and...
13. Human Use Index
Data.gov (United States)
U.S. Environmental Protection Agency — Human land uses may have major impacts on ecosystems, affecting biodiversity, habitat, air and water quality. The human use index (also known as U-index) is the...
14. Audio Indexing for Efficiency
Science.gov (United States)
Rahnlom, Harold F.; Pedrick, Lillian
1978-01-01
This article describes Zimdex, an audio indexing system developed to solve the problem of indexing audio materials for individual instruction in the content area of the mathematics of life insurance. (Author)
15. Unified Index Unveiled
Institute of Scientific and Technical Information of China (English)
RENWEI
2005-01-01
China unveiled a unified stock index to track both markets in Shanghai and Shenzhen in April, a move likely to open a floodgate for more trading derivatives such as index futures. The new index, with 300 component companies traded on the Shanghai and Shenzhen stock exchanges, will be the first of its kind on the mainland. The index members will be the largest 300 stocks - 180 from Shanghai and 120 from Shenzhen - in terms of market capitalization.
16. Index to Volume 110
Science.gov (United States)
Marriott, R. A.
2001-02-01
The Subject Index references items under general headings; where a contribution covers two or more clearly defined subjects, each is separately referenced, but otherwise sub-headings within the same topic are not included. Book and other reviews are indexed as such, but their subjects are not further cross-indexed. The Author Index details all named contributions, including talks at Ordinary Meetings, but not questions from the floor.
17. Rethinking image indexing?
DEFF Research Database (Denmark)
Christensen, Hans Dam
2017-01-01
An abundance of literature on image indexing, visual and image retrieval methods, content-based image retrieval, image tagging, visual information seeking, etc., is available in information studies. In 2008, decades of development with great diversity in approaches was summed up ... as an "evolution" where the literature recently had "grown at a stupendous rate" (Enser, p. 531). In this 'evolutionary' process, critical inspections are, however, also needed in specific cases. In the following, three aspects are of concern: 1. the use of an interpretation model created by the renowned art ... "Modeling and Analyzing the Topicality of Art Images" (Huang, Soergel, & Klavans, 2015), recently published in this journal, will be targeted as an indication of these aspects ...
18. Textile Index Monitor
Institute of Scientific and Technical Information of China (English)
2010-01-01
Part I - Price Index: National Index for China Textile City (located in Keqiao, Shaoxing county, Zhejiang Province, east China) concludes its price index (periodic code: 20101101) at 100.31 points, a rise of 0.68% against the previous week.
19. Textile Index Monitor
Institute of Scientific and Technical Information of China (English)
2011-01-01
Part I - Price Index: National Index for China Textile City (located in Keqiao, Shaoxing county, Zhejiang Province, east China) concludes its price index (periodic code: 20110606) at 110.56 points.
20. DIDA: Distributed Indexing Dispatched Alignment.
Directory of Open Access Journals (Sweden)
Full Text Available One essential application in bioinformatics that is affected by the high-throughput sequencing data deluge is the sequence alignment problem, where nucleotide or amino acid sequences are queried against targets to find regions of close similarity. When queries are too many and/or targets are too large, the alignment process becomes computationally challenging. This is usually addressed by preprocessing techniques, where the queries and/or targets are indexed for easy access while searching for matches. When the target is static, such as in an established reference genome, the cost of indexing is amortized by reusing the generated index. However, when the targets are non-static, such as contigs in the intermediate steps of a de novo assembly process, a new index must be computed for each run. To address such scalability problems, we present DIDA, a novel framework that distributes the indexing and alignment tasks into smaller subtasks over a cluster of compute nodes. It provides a workflow beyond the common practice of embarrassingly parallel implementations. DIDA is a cost-effective, scalable and modular framework for the sequence alignment problem in terms of memory usage and runtime. It can be employed in large-scale alignments to draft genomes and intermediate stages of de novo assembly runs. The DIDA source code, sample files and user manual are available through http://www.bcgsc.ca/platform/bioinfo/software/dida. The software is released under the British Columbia Cancer Agency (BCCA) license and is free for academic use.
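The divide-and-index idea the DIDA abstract describes can be illustrated with a toy sketch (this is NOT DIDA's actual algorithm, and all names here are hypothetical): targets are partitioned into subsets, each subset gets its own small k-mer index, and a query is dispatched against every subset independently, as it would be across compute nodes.

```python
# Toy illustration only: partitioned k-mer indexing and dispatched lookup,
# a hypothetical sketch of the general distributed-indexing idea, not DIDA.
def build_index(targets, k=4):
    """Index every k-mer of every (id, sequence) pair in one partition."""
    index = {}
    for tid, seq in targets:
        for i in range(len(seq) - k + 1):
            index.setdefault(seq[i:i + k], []).append((tid, i))
    return index

def dispatch(query, indexes, k=4):
    """Look up the query's k-mers in every partition's index."""
    hits = []
    for idx in indexes:  # in a distributed setting, each iteration runs on its own node
        for i in range(len(query) - k + 1):
            hits.extend(idx.get(query[i:i + k], []))
    return hits

targets = [("t1", "ACGTACGT"), ("t2", "TTTTACGA")]
indexes = [build_index([t]) for t in targets]  # one index per partition
print(dispatch("ACGT", indexes))  # → [('t1', 0), ('t1', 4)]
```

Because each partition is indexed on its own, a changed target (e.g. a fresh assembly contig) only forces a rebuild of its own small index rather than one monolithic index.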
1. An automatic bibliography indexing programme
Directory of Open Access Journals (Sweden)
J. W. Morris
1974-12-01
Full Text Available A relatively simple FORTRAN IV programme, designed for a small computer, for producing author and key-word indexes to bibliographic records is described, and examples of output are given. It is compared with some other systems, and suggested improvements to the programme are given.
2. Analysis in indexing
DEFF Research Database (Denmark)
Mai, Jens Erik
2005-01-01
A domain-centered approach is presented as an alternative, and the paper discusses how this approach includes a broader range of analyses and requires a new set of actions: analysis of the domain, the users, and the indexers. The paper concludes that the two-step procedure is insufficient to explain the indexing process, and suggests that the domain-centered approach offers a guide that can help indexers manage the complexity of indexing. © 2004 Elsevier Ltd. All rights reserved.
3. Risk weighted alpha index – analysis of the ASX50 index
Directory of Open Access Journals (Sweden)
Nipun Agarwal
2013-12-01
Full Text Available Major stock indexes are built on the market-capitalization or price-weighted indexation method. The Australian Stock Exchange 50 (ASX50) index is a market-capitalization index of the top 50 Australian stocks. Fundamental indexation, equal-weighted, and risk-weighted index methods have recently been developed as alternatives to market-cap and price indexes. However, empirical studies do not conclusively prove that these alternative methods are more efficient than the existing market-cap or price-weighted methods. Moreover, the fundamental index method provides a higher alpha, while the risk-weighted index methods focus on risk reduction through diversification. There is thus a gap for another passive indexation method that provides the investor a higher return (alpha) and lower volatility. This paper re-weights the ASX50 index using the risk weighted alpha method, giving higher weight to stocks that have increasing returns and lower volatility. An empirical study of the ASX50 index from 2002-2012 shows that the risk weighted alpha method provides a higher return and lower systematic risk than the ASX50 index.
4. On eccentric connectivity index
CERN Document Server
Zhou, Bo
2010-01-01
The eccentric connectivity index, proposed by Sharma, Goswami and Madan, has been employed successfully for the development of numerous mathematical models for the prediction of biological activities of diverse nature. We now report mathematical properties of the eccentric connectivity index. We establish various lower and upper bounds for the eccentric connectivity index in terms of other graph invariants, including the number of vertices, the number of edges, the degree distance and the first Zagreb index. We determine the n-vertex trees of given diameter with the minimum eccentric connectivity index, and the n-vertex trees with a given number of pendent vertices with the maximum eccentric connectivity index. We also determine the n-vertex trees with respectively the minimum, second-minimum and third-minimum, and the maximum, second-maximum and third-maximum eccentric connectivity indices.
5. NEW CONCEPTS IN INDEXING.
Science.gov (United States)
SHANK, R
1965-07-01
Recent trends in indexing emphasize mechanical, not intellectual, developments. Mechanized operations have produced indexes in depth (1) of information on limited areas of science or (2) utilizing limited parameters for analysis. These indexes may include only citations or both useful data and citations of source literature. Both keyword-in-context and citation indexing seem to be passing the test of the marketplace. Mechanical equipment has also been successfully used to manipulate EAM cards for production of index copy. Information centers are increasingly being used as control devices in narrowly defined subject areas. Authors meet growing pressures to participate in information control work by preparing abstracts of their own articles. Mechanized image systems persist, although large systems are scarce and the many small systems may bring only limited relief for information control and retrieval problems. Experimentation and limited development continue on theory and technique of automatic indexing and abstracting.
6. Supplement: Commodity Index Report
Data.gov (United States)
Commodity Futures Trading Commission — Shows index traders in selected agricultural markets. These traders are drawn from the noncommercial and commercial categories. The noncommercial category includes...
7. Possible values of UV index in Serbia
Directory of Open Access Journals (Sweden)
2008-01-01
Full Text Available INTRODUCTION The UV Index is an indicator of human exposure to solar ultraviolet (UV) rays. The numerical values of the UV Index range from 1 to 11 and above. There are three levels of protection against UV radiation: at low values of the UV Index protection is not required, at medium values protection is recommended, and at high values protection is obligatory. The value of the UV Index primarily depends on the elevation of the sun and the total ozone column. OBJECTIVE The aim of the study is to determine the intervals of possible maximal annual values of the UV Index in Serbia, in order to determine the necessary level of protection in a simple manner. METHOD For maximal and minimal expected values of total column ozone and for maximal elevation of the sun, the value of the UV Index was determined for each month in the northern and southern parts of Serbia. These values were compared with the UV Index forecast. RESULTS Maximal clear-sky values of the UV Index in Serbia for altitudes up to 500 m in May, June, July and August can be 9 or even 10, and not less than 5 or 6. During November, December, January and February the UV Index can be 4 at most. During March, April, September and October the expected values of the UV Index are at most 7 and not less than 3. The UV Index forecast is within these limits in 98% of comparisons. CONCLUSION The described method of determining possible UV Index values showed high agreement with forecasts. The obtained results can be used for general recommendations on protection against UV radiation.
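The three protection levels described in the abstract above map directly onto UV Index ranges. A minimal sketch of that mapping follows; the numeric cut-offs (below 3, 3-7, 8 and above) follow the common WHO convention and are an assumption, not values taken from the paper:

```python
# Hedged sketch: map a UV Index value to the three protection levels the
# abstract names. The cut-offs are the conventional WHO ones (assumed here),
# not thresholds stated in the paper itself.
def protection_level(uv_index):
    if uv_index < 3:
        return "protection not required"
    if uv_index < 8:
        return "protection recommended"
    return "protection obligatory"

# The paper's maximal summer values of 9-10 for Serbia fall in the top band:
print(protection_level(10))  # → protection obligatory
```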
8. 44 CFR 5.28 - Indexes.
Science.gov (United States)
2010-10-01
... 44 Emergency Management and Assistance 1 2010-10-01 2010-10-01 false Indexes. 5.28 Section 5.28 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY... described in § 5.25. FEMA will publish quarterly and make available copies of each index or...
9. Image Indexing and Retrieval by Content.
Science.gov (United States)
Cawkell, Tony
2000-01-01
Reviews content-based image retrieval and discusses the increase in large picture databases that are now available. Describes some of the proceedings from the Brighton (United Kingdom) conference, including the retrieval of video clips; discusses image indexing; and provides examples of image indexing and retrieval projects. (Author/LRW)
10. Availability growth modeling
Energy Technology Data Exchange (ETDEWEB)
Wendelberger, J.R.
1998-12-01
In reliability modeling, the term availability is used to represent the fraction of time that a process is operating successfully. Several different definitions have been proposed for different types of availability. One commonly used measure of availability is cumulative availability, which is defined as the ratio of the amount of time that a system is up and running to the total elapsed time. During the startup phase of a process, cumulative availability may be treated as a growth process. A procedure for modeling cumulative availability as a function of time is proposed. Estimates of other measures of availability are derived from the estimated cumulative availability function. The use of empirical Bayes techniques to improve the resulting estimates is also discussed.
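The definition in the abstract above, cumulative availability as the ratio of up-time to total elapsed time, can be computed directly from a log of state changes. A minimal sketch (function name and data layout are hypothetical, not from the report):

```python
# Sketch of cumulative availability A(t) = up_time(t) / t, per the abstract's
# definition. The event-log representation used here is an assumption.
def cumulative_availability(events, horizon):
    """events: time-sorted (timestamp, is_up) state changes starting at t=0;
    horizon: total elapsed time over which availability is measured."""
    up_time = 0.0
    for (start, is_up), (end, _) in zip(events, events[1:] + [(horizon, None)]):
        if is_up:
            up_time += end - start
    return up_time / horizon

# Up on [0, 8), down on [8, 10), up on [10, 20): 18 of 20 time units up.
print(cumulative_availability([(0, True), (8, False), (10, True)], 20))  # → 0.9
```

During a startup phase, evaluating this ratio at successive horizons yields the growth curve that the report proposes to model as a function of time.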
11. EJSCREEN Supplementary Indexes 2015 Public
Data.gov (United States)
U.S. Environmental Protection Agency — There are 40 supplementary EJSCREEN indexes that are divided into 5 categories: EJ Index with supplementary demographic index, Supplementary EJ Index 1 with...
12. EJSCREEN Supplementary Indexes 2015 Internal
Data.gov (United States)
U.S. Environmental Protection Agency — There are 60 supplementary EJ Indexes in EJSCREEN that are divided into 5 categories: EJ Index with supplementary demographic index, Supplementary EJ Index 1 with...
13. Spatio-temporal characteristics of crop drought in southern China based on a drought index of continuous days without available precipitation
Institute of Scientific and Technical Information of China (English)
黄晚华; 隋月; 杨晓光; 代姝玮; 曲辉辉; 李茂松
2014-01-01
This study was based on daily precipitation data from standard meteorological stations in 15 provinces (municipalities or autonomous regions) in southern China. We adopted continuous days without available precipitation (Dnp) as the drought index, improved the critical values of available precipitation and the drought classification standard during data processing, and then calculated drought index values for crops (spring sowing-summer harvesting, spring sowing-autumn harvesting, summer sowing-autumn harvesting, and overwintering crops) over the most recent 50 years (1959 to 2009) in southern China. We analyzed the spatial distribution characteristics and inter-annual variation of crop drought frequency and crop drought duration days. In addition, we introduced daily drought frequency to study the dynamic change of crop drought during the growing period. The results showed that spring sowing-summer harvesting crop drought occurred occasionally in the west of Southwest China and part of the Huaibei Area during spring; spring sowing-autumn harvesting crop drought often affected the middle and lower reaches of the Yangtze River, as well as the northeast of South China and the east of Southwest China, during summer and autumn; summer sowing-autumn harvesting crop drought often occurred in the middle and lower reaches of the Yangtze River during autumn, as well as the east and north of South China; overwintering crop drought took place in the north of the Yangtze River and in South China during autumn and spring, and occurred especially frequently in the west of Southwest China from autumn to the following spring. Generally, the distribution of drought duration days without available precipitation was consistent with the distribution of drought frequency, meaning that drought lasted relatively longer in drought-prone areas. The drought trend in southern China showed that spring sowing-summer harvesting crop drought showed a decreasing trend in covering
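The core of the Dnp index is the longest run of consecutive days whose precipitation stays below an "available precipitation" threshold. A minimal sketch of that computation follows; the 2 mm threshold is an illustrative assumption, not the study's calibrated critical value:

```python
# Sketch of the Dnp idea: longest run of consecutive days without available
# precipitation. The 2.0 mm threshold is an assumption for illustration.

def longest_dry_spell(daily_precip_mm, threshold=2.0):
    """Maximum number of consecutive days below the available-precipitation threshold."""
    longest = current = 0
    for p in daily_precip_mm:
        current = current + 1 if p < threshold else 0
        longest = max(longest, current)
    return longest

rain = [0.0, 1.5, 0.0, 12.0, 0.0, 0.0, 0.5, 1.9, 3.2, 0.0]
print(longest_dry_spell(rain))  # 4 consecutive dry days (indices 4-7)
```

Applying this per station and per crop growing period, then mapping the resulting values against the classification standard, gives the drought frequencies the study analyzes.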
14. Universal Index System
Science.gov (United States)
Kelley, Steve; Roussopoulos, Nick; Sellis, Timos; Wallace, Sarah
1993-01-01
The Universal Index System (UIS) is an index management system that uses a uniform interface to solve the heterogeneity problem among database management systems. UIS not only provides an easy-to-use common interface to access all underlying data, but also allows different underlying database management systems, storage representations, and access methods.
15. Global Ecosystem Restoration Index
DEFF Research Database (Denmark)
Fernandez, Miguel; Garcia, Monica; Fernandez, Nestor
2015-01-01
The Global ecosystem restoration index (GERI) is a composite index that integrates structural and functional aspects of the ecosystem restoration process. These elements are evaluated through a window that looks into a baseline for degraded ecosystems with the objective to assess restoration...
16. A new family of cumulative indexes for measuring scientific performance.
Directory of Open Access Journals (Sweden)
Marcin Kozak
Full Text Available In this paper we propose a new family of cumulative indexes for measuring scientific performance which can be applied to many metrics, including the h index and its variants (here we apply it to the h index, h(2) index and Google Scholar's i10 index). These indexes follow the general principle of repeating the index calculation for the same publication set. Using bibliometric data and reviewer scores for accepted and rejected fellowship applicants, we examine how valid the cumulative variant is compared to the original variant. These analyses showed that the cumulative indexes result in higher correlations with the reviewer scores than their original variants. Thus, the cumulative indexes better reflect the assessments by peers than the original variants and are useful extensions of the original indexes. In contrast to many other measures of scientific performance proposed up to now, the cumulative indexes seem not only to be effective, but are also easy to understand and calculate.
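The classic h index can be sketched directly; the cumulative variant below is only one plausible reading of "repeating the index calculation for the same publication set" (remove the h core and recompute on the remainder, summing the successive h values), not the authors' exact definition:

```python
def h_index(citations):
    """Classic h index: largest h such that h papers each have >= h citations."""
    cites = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cites, start=1) if c >= i)

def cumulative_h(citations):
    """Hypothetical cumulative variant: repeatedly remove the h core from the
    remaining publication set and sum the successive h values. This is an
    interpretation of the abstract, not the paper's precise construction."""
    cites = sorted(citations, reverse=True)
    total = 0
    while cites:
        h = h_index(cites)
        if h == 0:
            break
        total += h
        cites = cites[h:]  # drop the h core, repeat on the rest
    return total

papers = [10, 8, 5, 4, 3, 2, 1]
print(h_index(papers))       # 4
print(cumulative_h(papers))  # 4 + 2 + 1 = 7
```

Under this reading the cumulative index still grows with the whole citation profile, which is the property the paper exploits.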
17. Eccentric connectivity index
CERN Document Server
Ilić, Aleksandar
2011-01-01
The eccentric connectivity index $\xi^c$ is a novel distance-based molecular structure descriptor that was recently used for mathematical modeling of biological activities of diverse nature. It is defined as $\xi^c (G) = \sum_{v \in V (G)} \deg (v) \cdot \epsilon (v)$, where $\deg (v)$ and $\epsilon (v)$ denote the vertex degree and eccentricity of $v$, respectively. We survey some mathematical properties of this index and further support the use of the eccentric connectivity index as a topological structure descriptor. We present the extremal trees and unicyclic graphs with maximum and minimum eccentric connectivity index subject to certain graph constraints. Sharp lower and asymptotic upper bounds for all graphs are given, and various connections with other important graph invariants are established. In addition, we present explicit formulae for the values of the eccentric connectivity index for several families of composite graphs and design a linear algorithm for calculating the eccentric connectivity in...
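The definition above translates directly into code: compute each vertex's eccentricity by BFS and sum degree times eccentricity. A minimal sketch on an adjacency-list graph (assumed connected):

```python
from collections import deque

# xi^c(G) = sum over vertices v of deg(v) * ecc(v), per the definition above.

def eccentricities(adj):
    """Eccentricity of each vertex: max BFS distance to any other vertex."""
    ecc = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        ecc[s] = max(dist.values())
    return ecc

def eccentric_connectivity_index(adj):
    ecc = eccentricities(adj)
    return sum(len(adj[v]) * ecc[v] for v in adj)

# Path graph P4: 1-2-3-4
p4 = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(eccentric_connectivity_index(p4))  # 1*3 + 2*2 + 2*2 + 1*3 = 14
```

This all-pairs BFS is quadratic; the linear algorithm mentioned in the abstract is specific to the graph families it treats.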
18. Index medicus for the Eastern Mediterranean region
Directory of Open Access Journals (Sweden)
Al-Shorbaji Najeeb
2008-09-01
Full Text Available Abstract The study provides the rationale, history and current status of the Index Medicus for the World Health Organization Eastern Mediterranean Region. The Index is unique in combining the geographic coverage of peer-reviewed health and biomedical journals (408 titles from the 22 countries of the Region). Compiling and publishing the Index, coupled with a document delivery service, is an integral part of the WHO Regional Office's knowledge management and sharing programme. In this paper, bibliometric indicators are presented to demonstrate the distribution of journals, articles, languages, subjects and authors as well as availability in printed and electronic formats. Two countries in the Region (Egypt and Pakistan) contribute over 50% of the articles in the Index. About 90% of the articles are published in English. Epidemiology articles represent 8% of the entire Index. 15% of the journals in the Index are also indexed in MEDLINE, while 7% are indexed in EMBASE. Future developments of the Index will include covering more journals and adding other types of health and biomedical literature, including reports, theses, books and current research. The challenges and lessons learnt are discussed.
19. Validity Index and number of clusters
Directory of Open Access Journals (Sweden)
2012-01-01
Full Text Available Clustering (or cluster analysis) has been used widely in pattern recognition, image processing, and data analysis. It aims to organize a collection of data items into c clusters, such that items within a cluster are more similar to each other than they are to items in the other clusters. The number of clusters c is the most important parameter, in the sense that the remaining parameters have less influence on the resulting partition. Several methods, called validity indexes, have been devised to determine the best number of classes. This paper presents a new validity index for fuzzy clustering called the Modified Partition Coefficient And Exponential Separation (MPCAES) index. The efficiency of the proposed MPCAES index is compared with several popular validity indexes. More information about these indexes is acquired through a series of numerical comparisons and also on the real Iris data.
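The MPCAES index itself is not specified in the abstract, so as background this sketches the classic partition coefficient (the validity index it modifies): PC(U) = (1/n) Σᵢ Σⱼ uᵢⱼ², which is 1 for a crisp partition and 1/c for a maximally fuzzy one:

```python
# Classic fuzzy-clustering partition coefficient, sketched as background for
# the "Modified Partition Coefficient" family the abstract refers to.
# U[i][j] is the membership of item j in cluster i; each column sums to 1.

def partition_coefficient(U):
    """PC(U) = (1/n) * sum of squared memberships; in [1/c, 1]."""
    n = len(U[0])
    return sum(u * u for row in U for u in row) / n

crisp = [[1.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # hard partition of 3 items, c = 2
fuzzy = [[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]  # maximally fuzzy partition
print(partition_coefficient(crisp))  # 1.0
print(partition_coefficient(fuzzy))  # 0.5 = 1/c
```

Evaluating such an index across candidate values of c and picking the best score is how validity indexes are used to choose the number of clusters.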
20. Malmquist Productivity Index on Efficiency Layers
Directory of Open Access Journals (Sweden)
F. Rezai balf ∗
2012-09-01
Full Text Available Data Envelopment Analysis (DEA), a popular linear programming technique, is useful for comparatively rating the operational efficiency of Decision Making Units (DMUs) based on their deterministic input-output data. The Malmquist productivity index in DEA, calculable with the distance function, measures productivity change between two different time periods or two different groups at the same time. This index is based on two factors: an efficiency change index and a technological change index. In this paper, we work with the collective Malmquist productivity index, which clusters DMUs by classifying them into different levels of the efficient frontier, and then we discuss the relation between the Malmquist index on the efficiency layers and their attractiveness and progress.
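Once the DEA linear programs have produced the distance-function values, the Malmquist index and its two-factor decomposition are simple arithmetic. The sketch below assumes the four distance values are given (it does not solve the DEA programs), using the standard decomposition into efficiency change times technical change:

```python
import math

# Standard Malmquist decomposition, assuming the output distance-function
# values D_a(observation from period b) are already computed by DEA solvers.

def malmquist(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """d_a_b = distance of the period-b observation against the period-a frontier.
    Returns (Malmquist index, efficiency change, technical change)."""
    efficiency_change = d_t1_t1 / d_t_t
    technical_change = math.sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))
    return efficiency_change * technical_change, efficiency_change, technical_change

# Illustrative distance values (assumptions, not from the paper):
m, ec, tc = malmquist(d_t_t=0.8, d_t_t1=1.1, d_t1_t=0.7, d_t1_t1=0.9)
print(round(m, 3), round(ec, 3), round(tc, 3))  # M > 1 signals productivity growth
```

The paper's collective variant applies this machinery layer by layer after peeling DMUs into successive efficient frontiers.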
1. Glycaemic index methodology.
Science.gov (United States)
Brouns, F; Bjorck, I; Frayn, K N; Gibbs, A L; Lang, V; Slama, G; Wolever, T M S
2005-06-01
The glycaemic index (GI) concept was originally introduced to classify different sources of carbohydrate (CHO)-rich foods, usually having an energy content of >80 % from CHO, according to their effect on post-meal glycaemia. It was assumed to apply to foods that primarily deliver available CHO, causing hyperglycaemia. Low-GI foods were classified as being digested and absorbed slowly and high-GI foods as being rapidly digested and absorbed, resulting in different glycaemic responses. Low-GI foods were found to induce benefits on certain risk factors for CVD and diabetes. Accordingly it has been proposed that GI classification of foods and drinks could be useful to help consumers make 'healthy food choices' within specific food groups. Classification of foods according to their impact on blood glucose responses requires a standardised way of measuring such responses. The present review discusses the most relevant methodological considerations and highlights specific recommendations regarding number of subjects, sex, subject status, inclusion and exclusion criteria, pre-test conditions, CHO test dose, blood sampling procedures, sampling times, test randomisation and calculation of glycaemic response area under the curve. Altogether, these technical recommendations will help to implement or reinforce measurement of GI in laboratories and help to ensure quality of results. Since there is current international interest in alternative ways of expressing glycaemic responses to foods, some of these methods are discussed.
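The area-under-the-curve calculation the review discusses is commonly done as an incremental AUC: trapezoid rule over the sampling times, counting only area above the fasting baseline. The sketch below uses that common convention; the exact protocol details (sampling schedule, baseline handling) are assumptions:

```python
# Incremental AUC of a glycaemic response, trapezoid rule, area above the
# fasting baseline only (a common convention; protocol details vary).

def incremental_auc(times_min, glucose_mmol, baseline=None):
    baseline = glucose_mmol[0] if baseline is None else baseline
    above = [max(0.0, g - baseline) for g in glucose_mmol]
    return sum((above[i] + above[i + 1]) / 2 * (times_min[i + 1] - times_min[i])
               for i in range(len(times_min) - 1))

# Illustrative 2 h response to a test meal (values are assumptions):
t = [0, 15, 30, 45, 60, 90, 120]
g = [4.5, 6.0, 7.5, 7.0, 6.0, 5.0, 4.5]
print(incremental_auc(t, g))  # 153.75 (mmol/L)*min above baseline
```

A GI value is then 100 × AUC(test food) / AUC(reference glucose) averaged over subjects, which is why the standardisation of subjects, doses and sampling times matters so much.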
2. Technical training - places available
CERN Multimedia
2012-01-01
If you would like more information on a course, or for any other inquiry/suggestions, please contact Technical.Training@cern.ch Valeria Perez Reale, Learning Specialist, Technical Programme Coordinator (Tel.: 62424) Eva Stern and Elise Romero, Technical Training Administration (Tel.: 74924) HR Department »Electronics design Next Session Duration Language Availability Comprehensive VHDL for FPGA Design 08-Oct-12 to 12-Oct-12 5 days English 3 places available Foundations of Electromagnetism and Magnet Design (EMAG) 14-Nov-12 to 27-Nov-12 6 days English 20 places available Impacts de la suppression du plomb (RoHS) en électronique 26-Oct-12 to 26-Oct-12 8 hours French 15 places available Introduction to VHDL 10-Oct-12 to 11-Oct-12 2 days English 7 places available LabVIEW Real Time and FPGA 13-Nov-12 to 16-Nov-12 5 days French 5 places available »Mechanical design Next Se...
3. Need for an Ecological Index
Directory of Open Access Journals (Sweden)
Rohan Wickramasinghe
2007-12-01
Full Text Available The article was published in the Sri Lankan newspaper 'The Island' on 30 November 2005, after the World Environmental Education Congress (WEEC) held in Milan. The author hopes it could provide a basis for a project in which senior school children or more senior students devise an Ecological Index: 'If nothing else, it could help them in thinking out the issues involved!' The author suggests that this could be a form of environmental education.
4. Fast Digit-Index Permutations
Directory of Open Access Journals (Sweden)
Dorothy Bollman
1996-01-01
Full Text Available We introduce a tensor sum which is useful for the design and analysis of digit-index permutation (DIP) algorithms. Using this operation we obtain a new high-performance algorithm for the family of DIPs. We discuss an implementation in the applicative language Sisal and show how different choices of parameters yield different DIPs. The efficiency of the special case of digit reversal is illustrated with performance results on a Cray C-90.
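The digit-reversal special case mentioned above can be sketched directly: index i is written in base b, its digits reversed, and the element moved accordingly. This sketch omits the paper's tensor-sum formulation and Sisal implementation, and assumes the array length is a power of the base:

```python
# Digit-reversal permutation sketch (the paper's special case of a DIP).
# Assumes len(xs) is a power of the base.

def digit_reverse(i, base, ndigits):
    """Reverse the base-`base` digits of i, padded to ndigits digits."""
    r = 0
    for _ in range(ndigits):
        i, d = divmod(i, base)
        r = r * base + d
    return r

def digit_reversal_permutation(xs, base):
    n = len(xs)
    ndigits = 0
    while base ** ndigits < n:
        ndigits += 1
    return [xs[digit_reverse(i, base, ndigits)] for i in range(n)]

# base 2, n = 8: the classic bit-reversal reordering used by radix-2 FFTs
print(digit_reversal_permutation(list(range(8)), 2))  # [0, 4, 2, 6, 1, 5, 3, 7]
```

Other choices of base (and mixed-radix digit orders) yield the other members of the DIP family the abstract refers to.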
5. SUBJECT AND AUTHOR INDEXES
Directory of Open Access Journals (Sweden)
IJBE Volume 1
2015-09-01
6. Available Energy and Exergy
Directory of Open Access Journals (Sweden)
Richard A. Gaggioli
1998-06-01
Full Text Available
An "available energy" is defined for every state of any system. The definition is independent of (a) the concept of work, (b) any reference environment, and (c) the makeup of the system (e.g. "macro" or "micro"). On the basis of this available energy, given any composite system, the contribution of each subsystem to the available energy -- that is, the exergy content of a subsystem -- is defined, as well as the instantaneous "dead state" of the composite and each subsystem.
Some pedagogical, scientific and engineering implications are discussed.
7. Prevalent Color Extraction and Indexing
Directory of Open Access Journals (Sweden)
K.K.Thyagharajan
2014-01-01
Full Text Available Colors in an image provide a tremendous amount of information. Using this color information, images can be segmented, analyzed, labeled and indexed. In a content-based image retrieval system, color is one of the basic primitive features used. In prevalent color extraction and indexing, the most extensive color in an image is identified and used for indexing. For implementation, the Asteroideae flower family image dataset is used. It consists of more than 16,000 species; among them, nearly 100 species are considered and indexed by dominating colors. To extract the most appealing color from user-defined images, the overall color of an image has to be quantized. Spatially quantizing the color of an image to extract the prevalent color is the major objective of this paper. A combination of the K-Means and Expectation Minimization clustering algorithms, called the hidden-value learned K-mean clustering quantization algorithm, is used to avoid the over-clustering behavior of the K-Means algorithm. The experimental results show the marginal differences between these algorithms.
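The underlying idea (quantize pixel colors by clustering, then index by the largest cluster's centroid) can be sketched with plain K-means Lloyd iterations; this omits the expectation-minimization refinement the paper adds, and the toy "image" is an assumption:

```python
import numpy as np

# Minimal K-means colour quantisation sketch: find the most extensive colour
# of an image given as an (n_pixels, 3) RGB array. Plain Lloyd iterations,
# without the paper's hidden-value / EM refinement.

def dominant_color(pixels, k=3, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each pixel to its nearest centroid
        labels = np.argmin(((pixels[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    counts = np.bincount(labels, minlength=k)
    return centers[np.argmax(counts)]  # centroid of the largest cluster

# Toy "flower" image: mostly yellow pixels, some green and white
img = np.array([[250, 220, 30]] * 60 + [[40, 150, 50]] * 25 + [[240, 240, 240]] * 15)
print(dominant_color(img))  # centroid dominated by the yellow pixels
```

Indexing each flower image by this dominant centroid (or its quantized bin) is the retrieval key the paper builds.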
8. Proxmox high availability
CERN Document Server
Cheng, Simon MC
2014-01-01
If you want to know the secrets of virtualization and how to implement high availability on your services, this is the book for you. For those of you who are already using Proxmox, this book offers you the chance to build a high availability cluster with a distributed filesystem to further protect your system from failure.
9. Available area isotherm
NARCIS (Netherlands)
Bosma, JC; Wesselingh, JA
2004-01-01
A new isotherm is presented for adsorption of proteins, the available area isotherm. This isotherm has a steric basis, unlike the (steric) mass action model. The shape of the available area isotherm is determined only by geometric exclusion. With the new isotherm, experimental results can be fitted
10. Technical training - Places available
CERN Multimedia
2012-01-01
If you would like more information on a course, or for any other inquiry/suggestions, please contact Technical.Training@cern.ch Valeria Perez Reale, Learning Specialist, Technical Programme Coordinator (Tel.: 62424) Eva Stern and Elise Romero, Technical Training Administration (Tel.: 74924) Electronics design Next Session Duration Language Availability Certified LabVIEW Associate Developer (CLAD) 06-Dec-12 to 06-Dec-12 1 hour English One more place available Compatibilité électromagnetique (CEM): Applications 23-Nov-12 to 23-Nov-12 3.5 hours English 3 places available Compatibilité électromagnétique (CEM): Introduction 23-Nov-12 to 23-Nov-12 3 hours English 43 places available Effets des Radiations sur les composants et systèmes électroniques 11-Dec-12 to 12-Dec-12 1 day 4 hours French 9 places available LabVIEW for beginners ...
11. Supersymmetric Berry index
CERN Document Server
Ilinskii, K N; Melezhik, V S; Ilinski, K N; Kalinin, G V; Melezhik, V V
1994-01-01
We revisit the consequences of SUSY for a cyclic adiabatic evolution governed by a supersymmetric quantum mechanical Hamiltonian. The condition (supersymmetric adiabatic evolution) under which the supersymmetric reductions of the Berry (nondegenerate case) or Wilczek-Zee (degenerate case) phases of superpartners take place is pointed out. An analogue of the Witten index (the supersymmetric Berry index) is determined. As examples of the suggested concept of supersymmetric adiabatic evolution, holomorphic quantum mechanics on the complex plane and meromorphic quantum mechanics on a Riemann surface are considered. The supersymmetric Berry indexes for these models are calculated.
12. Degree distance and Gutman index of corona product of graphs
Directory of Open Access Journals (Sweden)
V. Sheeba Agnes
2015-09-01
Full Text Available In this paper, the degree distance and the Gutman index of the corona product of two graphs are determined. Using the results obtained, the exact degree distance and Gutman index of certain classes of graphs are computed.
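The abstract does not restate the definitions, so the standard ones are assumed here: degree distance DD(G) = Σ over pairs {u,v} of (deg u + deg v)·d(u,v), and Gutman index Gut(G) = Σ over pairs {u,v} of deg(u)·deg(v)·d(u,v). A minimal sketch using BFS distances:

```python
from collections import deque
from itertools import combinations

# Standard definitions assumed:
#   DD(G)  = sum over pairs {u,v} of (deg u + deg v) * d(u, v)
#   Gut(G) = sum over pairs {u,v} of (deg u * deg v) * d(u, v)

def bfs_dist(adj, s):
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def degree_distance_and_gutman(adj):
    deg = {v: len(adj[v]) for v in adj}
    dist = {v: bfs_dist(adj, v) for v in adj}
    dd = gut = 0
    for u, v in combinations(adj, 2):
        dd += (deg[u] + deg[v]) * dist[u][v]
        gut += deg[u] * deg[v] * dist[u][v]
    return dd, gut

# Path P3: 1-2-3
print(degree_distance_and_gutman({1: [2], 2: [1, 3], 3: [2]}))  # (10, 6)
```

For a corona product G∘H one would build the product's adjacency list first; the paper's contribution is the closed-form expressions that make that brute-force computation unnecessary.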
13. CDC WONDER: Daily Air Temperatures and Heat Index
Data.gov (United States)
U.S. Department of Health & Human Services — The Daily Air Temperature and Heat Index data available on CDC WONDER are county-level daily average air temperatures and heat index measures spanning the years...
14. Technical training: places available
CERN Multimedia
HR Department
2007-01-01
CERN Technical Training: Open Courses (April - June 2007) The following course sessions are currently scheduled in the framework of the CERN Technical Training Programme 2007: Â AutoCAD 2006 - niveau 1 (course in French): 25.4.- 26.4.2007 & 2.5. - 3.5.2007 (4 days in 2 modules, 5 places available) AutoCAD 2006 - niveau 1 (course in French): 27.6.- 28.6.2007 & 3.7. - 4.7.2007 (4 days in 2 modules, 5 places available) AutoCAD Mechanical 2006 (course in French) 21.6.-22.6.2007 (2 days, 8 places available) * NEW COURSE* Automate de securite S7 (course in French) 14.5.-16.5.2007 (3 days, 4 places available) * NEW COURSE* Automate de securite S7 (course in French): 9.5.-11.5.2007 (3 days, 4 places available) JCOP - Joint PVSS-JCOP Framework (course in English): 21.5.-25.5.2007 (5 days, 12 places available) JCOP - Finite State Machines in the JCOP Framework (course in English): 12.6.-14.6.2007 (3 days, 12 places available) LabVIEW Basics 1 (in English): 2.-4.5.2007 (3 days, 7 places ...
15. The Biodiversity Informatics Potential Index
Directory of Open Access Journals (Sweden)
Ariño Arturo H
2011-12-01
Full Text Available Abstract Background Biodiversity informatics is a relatively new discipline extending computer science in the context of biodiversity data, and its development to date has not been uniform throughout the world. Digitizing effort and capacity building are costly, and ways should be found to prioritize them rationally. The proposed 'Biodiversity Informatics Potential (BIP) Index' seeks to fulfill such a prioritization role. We propose that the potential for biodiversity informatics be assessed through three concepts: (a) the intrinsic biodiversity potential (the biological richness or ecological diversity of a country); (b) the capacity of the country to generate biodiversity data records; and (c) the availability of technical infrastructure in a country for managing and publishing such records. Methods Broadly, the techniques used to construct the BIP Index were rank correlation, multiple regression analysis, principal components analysis and optimization by linear programming. We built the BIP Index by finding a parsimonious set of country-level human, economic and environmental variables that best predicted the availability of primary biodiversity data accessible through the Global Biodiversity Information Facility (GBIF) network, and constructing an optimized model with these variables. The model was then applied to all countries for which sufficient data existed, to obtain a score for each country. Countries were ranked according to that score. Results Many of the current GBIF participants ranked highly in the BIP Index, although some of them seemed not to have realized their biodiversity informatics potential. The BIP Index attributed low ranking to most non-participant countries; however, a few of them scored highly, suggesting that these would be high-return new participants if encouraged to contribute towards the GBIF mission of free and open access to biodiversity data. Conclusions The BIP Index could potentially help in (a) identifying
16. High availability IT services
CERN Document Server
Critchley, Terry
2014-01-01
This book starts with the basic premise that a service is comprised of the 3Ps: products, processes, and people. Moreover, these entities and their sub-entities interlink to support the services that end users require to run and support a business. This widens the scope of any availability design far beyond hardware and software. It also increases the potential for service failure for reasons beyond just hardware and software: the concept of logical outages. High Availability IT Services details the considerations for designing and running highly available "services", and not just the systems
17. Index of Glossary Terms
Science.gov (United States)
18. Arizona - Social Vulnerability Index
Data.gov (United States)
U.S. Environmental Protection Agency — The Social Vulnerability Index is derived from the 2000 US Census data. The fields included are percent minority, median household income, age (under 18 and over...
19. National Death Index
Data.gov (United States)
U.S. Department of Health & Human Services — The National Death Index (NDI) is a centralized database of death record information on file in state vital statistics offices. Working with these state offices, the...
20. Textile Index Monitor
Institute of Scientific and Technical Information of China (English)
2010-01-01
Textile Index Monitor is a new column that delivers a textile-specific price index profile for the weeks that have gone by when this monthly magazine comes to your hand. China Textile City is the name of the world's largest yarn & fabric marketplace, in the famous town of Keqiao in Zhejiang, China. Several years ago, the Ministry of Commerce (MOC) set up a national price index centre for the textiles-specific category. China Textile City takes the leading role in publishing its analytical report of the textile price index on a weekly, monthly, quarterly and yearly basis, making it possible for Keqiao and its textile city to be the weathercock for the textiles market trend in China, and in the world as well. From this issue, a new column is given to cover the gist and feeds of the latest developments and gradients in this market barometer.
1. TOMS Absorbing Aerosol Index
Data.gov (United States)
Washington University St Louis — TOMS_AI_G is an aerosol related dataset derived from the Total Ozone Monitoring Satellite (TOMS) Sensor. The TOMS aerosol index arises from absorbing aerosols such...
2. Palmer Drought Severity Index
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — PDSI from the Dai dataset. The Palmer Drought Severity Index (PDSI) is devised by Palmer (1965) to represent the severity of dry and wet spells over the U.S. based...
3. Regional Snowfall Index (RSI)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — NOAA's National Climatic Data Center is now producing the Regional Snowfall Index (RSI) for significant snowstorms that impact the eastern two thirds of the U.S. The...
4. Topographic Accessability Index
Data.gov (United States)
U.S. Geological Survey, Department of the Interior — The topographic accessibility index is a measure of elevation in relation to valley floor corrected for variation in valley floor elevation across the western United...
5. ParkIndex
DEFF Research Database (Denmark)
Kaczynski, Andrew T; Schipperijn, Jasper; Hipp, J Aaron
2016-01-01
A lack of comprehensive and standardized metrics for measuring park exposure limits park-related research and health promotion efforts. This study aimed to develop and demonstrate an empirically-derived and spatially-represented index of park access (ParkIndex) that would allow researchers, planners, and citizens to evaluate the potential for park use for a given area. Data used for developing ParkIndex were collected in 2010 in Kansas City, Missouri (KCMO). Adult study participants (n=891) reported whether they used a park within the past month, and all parks in KCMO were mapped and audited using ArcGIS 9.3 and the Community Park Audit Tool. Four park summary variables - distance to nearest park, and the number of parks, amount of park space, and average park quality index within 1 mile - were analyzed in relation to park use using logistic regression. Coefficients for significant park...
6. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: The Joint PVSS JCOP Framework: 14 - 18.6.2004 (5 days) EXCEL 2003 - niveau 2 : 17 & 18.6.2004 (2 jours) MAGNE-04 : Magnétisme pour l'électrotechnique : 6 au 8.7.2004 (3 jours) Technical Training Monique Duval - Tel.74924 technical.training@cern.ch
7. Index Conditions of Resolution
Institute of Scientific and Technical Information of China (English)
Xiao-Chun Cheng
2005-01-01
In this paper, the following results are proved: (1) Using both deletion strategy and lock strategy, resolution is complete for a clause set where literals with the same predicate or proposition symbol have the same index. (2) Using deletion strategy, both positive unit lock resolution and input lock resolution are complete for a Horn set where the indexes of positive literals are greater than those of negative literals. (3) Using deletion strategy, input half-lock resolution is complete for a Horn set.
8. Nudibranch Systematic Index
OpenAIRE
2006-01-01
This is an index of my approximately 6,200 nudibranch reprints and books. I have indexed them only for information concerning systematics, taxonomy, nomenclature, & description of taxa. This list should allow you to quickly find information concerning the description, taxonomy, or systematics of almost any species of nudibranch. The full citation for any of the authors and dates listed may be found in the nudibranch bibliography at http://repositories.cdlib.org/ims/Bibliographia_Nudibranch...
9. Index of Financial Inclusion
OpenAIRE
Mandira Sarma
2008-01-01
The promotion of an inclusive financial system is considered a policy priority in many countries. While the importance of financial inclusion is widely recognized, the literature lacks a comprehensive measure that can be used to measure the extent of financial inclusion across economies. This paper attempts to fill this gap by proposing an index of financial inclusion (IFI). The IFI is a multi-dimensional index that captures information on various dimensions of financial inclusion in one sing...
10. Technical training - Places available
CERN Multimedia
Davide Vitè
2006-01-01
Places available as of 16.5.2006 (May-November course sessions) Technical Training: Places available The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Title Hours Date Language ACROBAT 7.0 : Utilisation de fichiers PDF 8 8.05.06 F WORD 2003 - niveau 2 : ECDL 16 22-23.05.06 23-24.05.06 F Comprehensive VHDL for FPGA Design 40 29.05-2.06.06 E C++ Programming Part 2 - Advanced C++ and its Traps and Pitfalls 32 30.05-2.06.06 E ACROBAT 7.0 : Utilisation de fichiers PDF 24 7-9.06.06 E AutoCAD Mechanical 2006 16 13-14.06.06 F CERN EDMS for Local Administrators 16 13-14.06.06 E LabVIEW Base 2 32 27.06-5.07.06 F C++ Programming Part 3 - Templates and the STL (Standard Template Library) 16 27-28.06.06 E C++ Programming Part 4 - Exceptions 8 29.06.06 E FrontPage 2003 - niveau 1 16 29-...
11. A study of cephalic index and facial index in Visakhapatnam, Andhra Pradesh, India
Directory of Open Access Journals (Sweden)
K. Lakshmi Kumari
2015-06-01
Full Text Available Background: The description of the human body has been a major concern since ancient times. The use of medical terminology enhances the reliability of comparisons made between studies from different areas, thereby contributing a higher level of scientific evidence. The cephalic index is an important parameter in forensic medicine, anthropology and genetics for knowing the sex and racial differences between individuals. The facial index is a useful index for forensic scientists, plastic surgeons and anatomists. These parameters are useful to plastic surgeons during treatment of congenital and traumatic deformities, to forensic scientists for identification of individuals in medicolegal cases, and to geneticists for identifying craniofacial deformities of genetic syndromes. Methods: 170 male and 110 female adults from the Visakhapatnam region of Andhra Pradesh, India were included in this study. Anthropometric points for the cephalic index were measured using spreading calipers. Facial index measurements were taken with a measuring tape. All measurements were taken with the subjects sitting relaxed and with the head in the anatomical position. The cephalic index and facial index were calculated as per the formulae. Results: The majority of males, with a mean cephalic index of 80.21, were observed to be mesocephalic, and females, with a mean value of 79.25, brachycephalic. Regarding the facial index, males were leptoprosopic and females mesoprosopic. Conclusion: The cephalic index and facial index are terms used by anthropologists, anatomists, plastic surgeons and forensic scientists to identify an individual's race and sex and for treatment of craniofacial deformities. [Int J Res Med Sci 2015; 3(3): 656-658]
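The two indexes are simple ratios; the standard formulas are assumed here (cephalic index = 100 × head breadth / head length, facial index = 100 × facial height / facial breadth), and the classification cut-offs below are illustrative, since they vary between sources:

```python
# Cephalic and facial index, standard formulas assumed.
# Classification bands are illustrative; published cut-offs vary.

def cephalic_index(head_breadth_cm, head_length_cm):
    return 100.0 * head_breadth_cm / head_length_cm

def head_shape(ci):
    if ci < 75.0:
        return "dolichocephalic"   # long-headed
    if ci < 80.0:
        return "mesocephalic"      # medium
    return "brachycephalic"        # broad-headed

def facial_index(facial_height_cm, facial_breadth_cm):
    return 100.0 * facial_height_cm / facial_breadth_cm

# Hypothetical measurements for one subject:
ci = cephalic_index(14.2, 18.0)
print(round(ci, 2), head_shape(ci))        # 78.89 mesocephalic
print(round(facial_index(11.2, 13.0), 2))  # 86.15
```

Averaging such per-subject indexes by sex gives group means like the 80.21 (males) and 79.25 (females) reported above.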
12. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: PowerPoint 2003 (F) : 25.4.2005 (1 jour) WORD 2003 - niveau 1 : 2 & 3.5.2005 (2 jours) FrontPage 2003 - niveau 1 : 9 & 10.5.2005 (2 jours) ELEC-2005 - Summer Term: System electronics for physics - Issues : 10, 12, 17, 19, 24, 26 & 31.5.2005 (7 x 2h lectures) AutoCAD 2005 - niveau 1 : 11 & 12.5.2005 (2 days) Finite State Machines in the JCOP Framework: 24 - 26.5.2005 (3 days) The Joint PVSS JCOP Framework: 30.5 - 3.6.2005 (5 days) LabVIEW Migration 6 to 7: 14.6.2005 (1 day) Introduction to ANSYS: 21 - 24.6.2005 (4 days) ENSEIGNEMENT TECHNIQUE TECHNICAL TRAINING Monique Duval 74924 technical.training@cern.ch
13. Performance Indexes: Similarities and Differences
Directory of Open Access Journals (Sweden)
2013-06-01
Today's investor is more rigorous in monitoring a financial-asset portfolio. He no longer thinks only in terms of expected return (one dimension), but in terms of risk and return (two dimensions). This new perception is more complex, since the risk measurement can vary according to one's perspective: some use the standard deviation for that purpose, while others disagree with this measure and propose alternatives. In addition to this difficulty, there is the problem of how to combine these two dimensions. The objective of this essay is to study the main performance indexes through an empirical study, in order to verify their differences and similarities for a set of selected assets. A performance index proposed in Caldeira (2005) is also included in this analysis.
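The abstract does not name the indexes it compares. As a concrete illustration of how a performance index collapses the two dimensions (return and risk, the latter measured here by the standard deviation) into a single figure, a minimal sketch of the classic Sharpe index:

```python
import statistics

def sharpe_index(returns, risk_free_rate=0.0):
    """Sharpe index: mean excess return divided by the (sample) standard
    deviation of returns. One of many possible performance indexes; others
    substitute a different risk measure in the denominator."""
    excess = [r - risk_free_rate for r in returns]
    return statistics.mean(excess) / statistics.stdev(returns)

# Illustrative monthly returns (hypothetical):
s = sharpe_index([0.02, 0.04, 0.03])            # 3.0
s_rf = sharpe_index([0.02, 0.04, 0.03], 0.01)   # 2.0
```

Indexes that replace the standard deviation with, say, downside deviation or beta rank the same assets differently, which is precisely the kind of divergence the essay's empirical comparison examines.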
14. Energy availability in athletes
DEFF Research Database (Denmark)
Loucks, Anne B; Kiens, Bente; Wright, Hattie H
2011-01-01
This review updates and complements the review of energy balance and body composition in the Proceedings of the 2003 IOC Consensus Conference on Sports Nutrition. It argues that the concept of energy availability is more useful than the concept of energy balance for managing the diets ... of athletes. It then summarizes recent reports of the existence, aetiologies, and clinical consequences of low energy availability in athletes. This is followed by a review of recent research on the failure of appetite to increase ad libitum energy intake in compensation for exercise energy expenditure ...
15. JUNOS High Availability
CERN Document Server
Sonderegger, James; Milne, Kieran; Palislamovic, Senad
2009-01-01
Whether your network is a complex carrier or just a few machines supporting a small enterprise, JUNOS High Availability will help you build reliable and resilient networks that include Juniper Networks devices. With this book's valuable advice on software upgrades, scalability, remote network monitoring and management, high-availability protocols such as VRRP, and more, you'll have your network uptime at the five, six, or even seven nines -- or 99.99999% of the time. Rather than focus on "greenfield" designs, the authors explain how to intelligently modify multi-vendor networks. You'll learn
16. Energy availability in athletes.
Science.gov (United States)
Loucks, Anne B; Kiens, Bente; Wright, Hattie H
2011-01-01
This review updates and complements the review of energy balance and body composition in the Proceedings of the 2003 IOC Consensus Conference on Sports Nutrition. It argues that the concept of energy availability is more useful than the concept of energy balance for managing the diets of athletes. It then summarizes recent reports of the existence, aetiologies, and clinical consequences of low energy availability in athletes. This is followed by a review of recent research on the failure of appetite to increase ad libitum energy intake in compensation for exercise energy expenditure. The review closes by summarizing the implications of this research for managing the diets of athletes.
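The distinction the review draws between energy balance and energy availability can be made concrete with the definition used in this literature: dietary energy intake minus exercise energy expenditure, normalized by fat-free mass (kcal per kg FFM per day). A minimal sketch; the 30 kcal/kg FFM/day threshold for "low" energy availability is the value commonly cited in this line of research, stated here as an assumption rather than taken from the abstract:

```python
def energy_availability(energy_intake_kcal, exercise_expenditure_kcal, fat_free_mass_kg):
    """Energy availability in kcal per kg fat-free mass per day:
    dietary energy left over for physiological function after exercise."""
    return (energy_intake_kcal - exercise_expenditure_kcal) / fat_free_mass_kg

def is_low_energy_availability(ea_kcal_per_kg_ffm, threshold=30.0):
    """Flag low EA; the 30 kcal/kg FFM/day cut-off is an assumed convention."""
    return ea_kcal_per_kg_ffm < threshold

# Illustrative athlete (hypothetical numbers): 2400 kcal intake,
# 600 kcal exercise expenditure, 48 kg fat-free mass -> EA = 37.5
ea = energy_availability(2400, 600, 48)
```

Note how this differs from energy balance: an athlete can be in energy balance (weight-stable) while EA is low, which is the review's argument for preferring EA when managing athletes' diets.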
17. Technical Training: Places available
CERN Multimedia
Davide Vitè
2006-01-01
Places available as of 27.6.2006 (July-December course sessions) The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Titre Heure Date Langue Manipulation des images 4 6.07.06 F Introduction to Databases and Database Design 16 11-12.07.06 E ACCESS 2003 - Level 1: ECDL 16 13-14.07.06 E-F Design Patterns 16 25-26.07.06 E CERN EDMS for Local Administrators 16 1-2.08.06 E ANSYS DesignModeler 16 29-30.08.06 F CERN EDMS - Introduction 8 5.09.06 E CERN EDMS MTF en pratique 4 6.09.06 F LabVIEW Basics 1 24 4-6.09.06 E ANSYS Workbench 32 12-15.09.06 F AutoCAD Mechanical 2006 16 12-13.09.06 F CERN EDMS for Engineers 8 12.09.06 E Software Engineering in the Small and the Large 16 12-13.09.06 E AutoCAD 2006 - niveau 1 32 14-21.09.06 F LabVIEW Basics 2 ...
18. Technical Training: Places available
CERN Multimedia
Davide Vitè
2006-01-01
Places available as of 30.5.2006 (June-November course sessions) The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Titre Heure Date Langue ACROBAT 7.0 : Utilisation de fichiers PDF 8 6.06.06 F Introduction à InDesign 16 7-8.06.06 F Python: Hands-on Introduction 24 7-9.06.06 E LabVIEW Base 2 16 22-23.06.06 F FileMaker - niveau 1 16 26-27.06.06 F C++ Programming Part 3 - Templates and the STL (Standard Template Library) 16 27-28.06.06 E C++ Programming Part 4 - Exceptions 8 29.06.06 E FrontPage 2003 - niveau 1 16 29-30.06.06 F Manipulation des images 4 6.07.06 F Introduction to Databases and Database Design 16 11-12.07.06 E ACCESS 2003 - Level 1: ECDL 16 13-14.07.06 E-F Design Patterns 16 25-26.07.06 E Introduction à Dreamweaver MX 16 ...
19. Technical training: Places available
CERN Multimedia
Davide Vitè
2006-01-01
Places available as of 13.6.2006 (June-December course sessions) The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Titre Heure Date Langue LabVIEW Base 2 16 22-23.06.06 F FileMaker - niveau 1 16 26-27.06.06 F C++ Programming Part 3 - Templates and the STL (Standard Template Library) 16 27-28.06.06 E C++ Programming Part 4 - Exceptions 8 29.06.06 E FrontPage 2003 - niveau 1 16 29-30.06.06 F Manipulation des images 4 6.07.06 F Introduction to Databases and Database Design 16 11-12.07.06 E ACCESS 2003 - Level 1: ECDL 16 13-14.07.06 E-F Design Patterns 16 25-26.07.06 E Introduction à Dreamweaver MX 16 26-27.07.06 F ANSYS DesignModeler 16 29-30.08.06 F LabVIEW Basics 1 24 4-6.09.06 E ANSYS Workbench 32 12-15.09.06 F AutoCAD Mechanical 20...
20. Technical Training: Places available
CERN Multimedia
Davide Vitè
2006-01-01
Places available as of 11.7.2006 (July-December course sessions) The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Titre Heure Date Langue Design Patterns 16 25-26.07.06 E CERN EDMS for Local Administrators 16 1-2.08.06 E ANSYS DesignModeler 16 29-30.08.06 F CERN EDMS - Introduction 8 5.09.06 E CERN EDMS MTF en pratique 4 6.09.06 F LabVIEW Basics 1 24 4-6.09.06 E ANSYS Workbench 32 12-15.09.06 F AutoCAD Mechanical 2006 16 12-13.09.06 F CERN EDMS for Engineers 8 12.09.06 E Software Engineering in the Small and the Large 16 12-13.09.06 E LabVIEW Basics 2 16 14-15.09.06 E LabVIEW: Working efficiently with LabVIEW 8 8 18.09.06 E PCAD Schémas - Introduction 16 21-22.09.06 F PCAD PCB - Introduction 24 27-29.09.06 F C++ for Particle Physicists ...
1. TECHNICAL TRAINING: Places available**
CERN Multimedia
2003-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch ** The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Hands-on Introduction to Python Programming : 12 - 14.11.03 (3 days) ACCESS 2000 - niveau 1 : 13 & 14.11.03 (2 jours) C++ for Particle Physicists : 17 - 21.11.03 (6 x 3-hour lectures) Programmation automate Schneider TSX Premium - niveau 2 : 18 - 21.11.03 (4 jours) Project Planning with MS-Project (free of charg...
2. TECHNICAL TRAINING: Places available**
CERN Multimedia
2003-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval Tel. 74924 technical.training@cern.ch ** The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: The EDMS-MTF in practice (free of charge) : 28 - 30.10.03 (6 half-day sessions) AutoCAD 2002 Level 1 : 3, 4, 12, 13.11.03 (4 days) LabVIEW TestStand ver. 3 : 4 & 5.11.03 (2 days) Introduction to PSpice : 4.11.03 p.m. (half-day) Hands-on Introduction to Python Programming : 12 - 14.11.03 (3 days) ACCESS ...
3. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Oracle 9i : New features for developers : 14 - 16.6.2004 (3 days) The Joint PVSS JCOP Framework: 14 - 18.6.2004 (5 days) Introduction au VHDL et utilisation du simulateur NCVHDL de CADENCE : 15 & 16.6.2004 (2 jours) EXCEL 2003 - niveau 2 : 17 & 18.6.2004 (2 jours) MAGNE-04 : Magnétisme pour l'électrotechnique : 6 au 8.7.2004 (3 jours) AutoCAD 2002 - niveau 1 : 13, 14, 23, 24.09.2004 (4 jours) ENSEIGNEMEN...
4. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Joint PVSS JCOP Programming : 9 - 13.8.2004 (5 days) Hands-on Introduction to Python Programming: 1 - 3.9.2004 (3 days - free course) Introduction au VHDL et utilisation du simulateur NCVHDL de CADENCE : 7 & 8.9.2004 (2 jours) Joint PVSS JCOP Programming : 13 - 17.9.2004 (5 days) AutoCAD 2002 - niveau 1 : 13, 14, 23, 24.9.2004 (4 jours) Programmation STEP7 niveau 1 : 14-17.9.2004 (4...
5. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Joint PVSS JCOP Framework : 9 - 13.8.2004 (5 days) Introduction à Outlook : 19.8.2004 (1 jour) Outlook (Short Course I): E-mail: 31.8.2004 (2 hours, morning) Outlook (Short Course II): Calendar, Tasks and Notes: 31.8.2004 (2 hours, afternoon) Instructor-led WBTechT Study or Follow-up for Microsoft Applications: 7.9.2004 (morning) Outlook (Short Course III): Meetings and Delegation: 7.9.2004 (2 hours, afternoon) I...
6. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Joint PVSS JCOP Framework : 9 - 13.8.2004 (5 days) Introduction à Outlook : 19.8.2004 (1 jour) Outlook (Short Course I): E-mail: 31.8.2004 (2 hours, morning) Outlook (Short Course II): Calendar, Tasks and Notes: 31.8.2004 (2 hours, afternoon) Hands-on Introduction to Python Programming: 1 - 3.9.2004 (3 days - free course) Instructor-led WBTechT Study or Follow-up for Microsoft Applications: 7.9.20...
7. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Introduction to the CERN EDMS : 22.6.2004 (1 day) The CERN EDMS for local administrators : 23 & 24.6.2004 (2 days) Compatibilité électromagnétique (CEM) - Introduction / Electromagnetic Compatibility (EMC) - Introduction: 7.7.2004 (morning) Introduction au VHDL et utilisation du simulateur NCVHDL de CADENCE : 7 & 8.9.2004 (2 jours) AutoCAD 2002 - niveau 1 : 13, 14, 23, 24.9.2004 (4 jours) Programmation STEP...
8. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Hands-on Introduction to Python Programming: 1 - 3.9.2004 (3 days - free course) Introduction au VHDL et utilisation du simulateur NCVHDL de CADENCE : 7 & 8.9.2004 (2 jours) AutoCAD 2002 - niveau 1 : 13, 14, 23, 24.9.2004 (4 jours) Programmation STEP7 niveau 1 : 14-17.9.2004 (4 jours) FrontPage 2003 - niveau 1 : 20 & 21.9.2004 (2 jours) Word 2003 - niveau 2 : 27 & 28.9.2004 (2 jours) Introduction à Wind...
9. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: AutoCAD 2002 - niveau 1 : 13, 14, 23, 24.09.2004 (4 jours) Introduction to the CERN EDMS : 22.6.2004 (1 day) The CERN EDMS for local administrators : 23 & 24.6.2004 (2 days) MAGNE-04 : Magnétisme pour l'électrotechnique : 6 - 8.7.2004 (3 jours) Introduction au VHDL et utilisation du simulateur NCVHDL de CADENCE : 7 & 8.9.2004 (2 jours) ENSEIGNEMENT TECHNIQUE TECHNICAL TRAINING...
10. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Outlook (short course I) : E-mail : 31.8.2004 (2 hours, morning) Introduction à Outlook : Outlook (short course II) : Calendar, Tasks and Notes : 31.8.2004 (2 hours, afternoon) Instructor-led WBTechT Study or Follow-up for Microsoft Applications : 7.9.2004 (morning) Outlook (short course III) : Meetings and Delegation : 7.9.2004 (2 hours, afternoon) Introduction au VHDL et utilisation du simulateur ...
11. Technical Training: Places available
CERN Multimedia
Davide Vitè
2006-01-01
Places available as of 7.2.2006 (February-May course sessions) The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Title Hours Date Language WORD 2003 (Short Course IV) - HowTo... Work with master document 3 27.02.06 E-F JAVA: Level 2 32 28.02-3.03.06 E Manipulation des images 4 28.02.06 F ACCESS 2003 - Level 2: ECDL AM5 16 2-3.03.06 E-F C++ for Particle Physicists 20 6-10.03.06 E PowerPoint 2003 8 9.03.06 F JCOP: Control System Integration using JCOP Tools 24 14-16.03.06 E EXCEL 2003 (Short Course III) - HowTo... Pivot tables 3 20.03.06 E-F EXCEL 2003 (Short Course IV) - HowTo....Link cells, worksheets and workbooks 3 20.03.06 E-F JCOP: Finite State Machines in the JCOP Framework 24 21-23.03.06 E Object-Oriented Analysis and Design using UML 24 21-23.03.06 E FrontPage 2003 - niveau 1 16 27-28.03.06 F JCOP: Joint PVSS-JCOP Fram...
12. Technical Training: Places available
CERN Multimedia
Davide Vitè
2006-01-01
Places available as of 25.7.2006 (August-December course sessions) The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Titre Heure Date Langue CERN EDMS for Local Administrators 16 1-2.08.06 E ANSYS DesignModeler 16 29-30.08.06 F EXCEL 2003 - niveau 1 : ECDL 16 30-31.08.06 F OUTLOOK 2003 (Short Course I) - E-mail 3 1.09.06 E/F OUTLOOK 2003 (Short Course II) - Calendar, Tasks and Notes 3 1.09.06 E/F CERN EDMS - Introduction 8 5.09.06 E CERN EDMS MTF en pratique 4 6.09.06 F ANSYS Workbench 32 12-15.09.06 F CERN EDMS for Engineers 8 12.09.06 E Software Engineering in the Small and the Large 16 12-13.09.06 E LabVIEW Basics 2 16 14-15.09.06 E WORD 2003 (Short Course III) - HowTo... Work with long documents 3 15.09.06 E/F EXCEL 2003 (Short Course I) - HowTo... Wor...
13. Technical Training: Places available
CERN Multimedia
Davide Vitè
2006-01-01
Places available as of 19.7.2006 (August-December course sessions) The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Titre Heure Date Langue CERN EDMS for Local Administrators 16 1-2.08.06 E ANSYS DesignModeler 16 29-30.08.06 F OUTLOOK 2003 (Short Course I) - E-mail 3 1.09.06 E/F OUTLOOK 2003 (Short Course II) - Calendar, Tasks and Notes 3 1.09.06 E/F CERN EDMS - Introduction 8 5.09.06 E CERN EDMS MTF en pratique 4 6.09.06 F LabVIEW Basics 1 24 11-13.09.06 E ANSYS Workbench 32 12-15.09.06 F CERN EDMS for Engineers 8 12.09.06 E Software Engineering in the Small and the Large 16 12-13.09.06 E LabVIEW Basics 2 16 14-15.09.06 E EXCEL 2003 (Short Course I) - HowTo... Work with formulae 3 15.09.06 E/F WORD 2003 (Short Course III) - HowTo... Work with long docu...
14. Technical Training: Places available
CERN Multimedia
Davide Vitè
2006-01-01
Places available as of 7.2.2006 (February-May course sessions) The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Title Hours Date Language WORD 2003 (Short Course II) - HowTo... Mail merge 3 09-02-06 E-F ACCESS 2003 - Level 1: ECDL M5 16 13 to 14-02-06 E-F OUTLOOK 2003 (Short Course II) - Calendar, Tasks and Notes 3 16-02-06 E-F WORD 2003 (Short Course III) - HowTo... Work with long documents 3 16-02-06 E-F CERN EDMS - Introduction 8 21.02.06 E OUTLOOK 2003 (Short Course III) - Meetings and Delegation 3 27-02-06 E-F WORD 2003 (Short Course IV) - HowTo... Work with master document 3 27-02-06 E-F JAVA: Level 2 32 28-02-06 to 03-03-06 E Manipulation des images 4 28.02.06 F ACCESS 2003 - Level 2: ECDL AM5 16 02 to 03-03-06 E-F FrontPage 2003 - niveau 2 16 02 to 03-03-06 F C++ for Particle Physicists 20 06 to 10-03-06 E FileMaker - niv...
15. Technical Training: Places available
CERN Multimedia
Davide Vitè
2006-01-01
Places available as of 7.2.2006 (February-May course sessions) The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Title Hours Date Language WORD 2003 (Short Course II) - HowTo... Mail merge 3 13.03.06 E-F EXCEL 2003 (Short Course III) - HowTo... Pivot tables 3 20.03.06 E-F EXCEL 2003 (Short Course IV) - HowTo....Link cells, worksheets and workbooks 3 20.03.06 E-F Object-Oriented Analysis and Design using UML 24 21-23.03.06 E EXCEL 2003 - niveau 1 16 22-23.03.06 F FrontPage 2003 - niveau 1 16 27-28.03.06 F Oracle Forms Developer 10g: Move to the Web 16 27-28.03.06 E Oracle JDeveloper 10g: Build Applications with ADF 24 29-31.03.06 E ACCESS 2003 - Level 2: ECDL AM5 16 3-4.03.06 E-F JAVA 2 Enterprise Edition - Part 1: Web Applications 16 3-4.04.06 E JCOP: Control System Integration using JCOP Tools 24 4-6.04.06 E JAVA 2 Enterprise Edition...
16. Technical Training: Places available
CERN Multimedia
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Computational Electromagnetics with the ELEKTRA Module of OPERA-3D : 27 & 28.4.2004 (2 days) Hands-on Introduction to Python Programming : 3 - 5.5.2004 (3 days) LabVIEW Base 2 : 6 & 7.5.2004 (2 jours) Project Planning with MS-Project : 6 & 13.5.2004 (2 days) Word 2003 - niveau 1 : 10 & 11.5.2004 (2 jours) Oracle 9i : SQL : 10 - 12.5.2004 (3...
17. Technical Training: Places available
CERN Multimedia
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: FrontPage XP - niveau 2 : 26 & 27.4.2004 (2 jours) The Joint PVSS JCOP Framework : 26 - 30.4.2004 (5 days) Computational Electromagnetics with the ELEKTRA Module of OPERA-3D : 27 & 28.4.2004 (2 days) Hands-on Introduction to Python Programming : 3 - 5.5.2004 (3 days) LabVIEW Base 2 : 6 & 7.5.2004 (2 jours) Project Pla...
18. Technical Training: Places available
CERN Multimedia
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: AutoCAD 2002 - niveau 1 : 19, 20.4 et 3, 4.5.2004 (4 jours) Oracle 8i/9i - Develop Web-based Applications with PL/SQL : 19 & 20.4.2004 (2 days) Introduction to ANSYS : 20 - 23.4.2004 (4 days) LabVIEW Hands-on : 20.4.2004 (half-day, p.m.) FrontPage XP - niveau 2 : 26 & 27.4.2004 (2 jours) The Joint PVSS JCOP Framework : 26 -...
19. Trends in availability management
Energy Technology Data Exchange (ETDEWEB)
Marriott, P.W.; McCandless, R.J.; Smith, B.W.
1985-01-01
This paper explores future directions in the management of nuclear power plant availability. The issue is of great economic interest both to utilities and to their customers. Current trends are discussed, and some that appear to have promise in the future are identified.
20. Technical Training: Places Available
CERN Multimedia
Monique Duval
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. Technical Training Monique Duval - Tel.74924 technical.training@cern.ch The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Oracle 9i: SQL: 17 - 19.5.2004 (3 days) Word 2003 - niveau 2 : 24 & 25.5.2004 (2 jours) EXCEL 2003 - niveau 1 : 27 & 28.5.2004 (2 jours) STEP7 Programming Level 1: 1 - 4.6.2004 (4 days) Oracle 9i : Programming with PL/SQL: 2 - 4.6.2004 (3 days) CST Microwave Studio: 3 & 4.6.2004 (2 days) Oracle 9i : New f...
1. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an 'application for training' form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. Technical Training Monique Duval - Tel.74924 technical.training@cern.ch The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Word 2003 - niveau 2 : 24 & 25.5.2004 (2 jours) VisualEliteHDL : 25 & 26.5.2004 (2 days) EXCEL 2003 - niveau 1 : 27 & 28.5.2004 (2 jours) STEP7 Programming Level 1: 1 - 4.6.2004 (4 days) Oracle 9i : Programming with PL/SQL: 2 - 4.6.2004 (3 days) CST Microwave Studio: 3 & 4.6.2004 (2 days) Oracle 9...
2. Technical Training: Places available
CERN Multimedia
Davide Vitè
2006-01-01
Places available as of 21.3.2006 (March-October course sessions) The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Title Hours Date Language FrontPage 2003 - niveau 1 16 27-28.03.06 F Oracle Forms Developer 10g: Move to the Web 16 27-28.03.06 E ACCESS 2003 - Level 2: ECDL AM5 16 3-4.03.06 E-F JAVA 2 Enterprise Edition - Part 1: Web Applications 16 3-4.04.06 E JAVA 2 Enterprise Edition - Part 2: Enterprise JavaBeans 24 5-7.04.06 E AutoCAD Mechanical 2006 16 11-12.04.06 F FrontPage 2003 - niveau 2 16 24-25.04.06 F C++ Programming Part 1 - Introduction to Object-Oriented Design and Programming 24 25-27.04.06 E AutoCAD 2006 - niveau 1 32 27.04-4.05.06 F Oracle: SQL 24 3-5.05.06 E EXCEL 2003 (Short Course I) - HowTo... Work with formulae 3 4.05.06 (am) E-F EXCEL 2003 (Short Course II) - HowTo... Format your worksheet for printing 3 4...
3. Technical Training: Places available
CERN Multimedia
2006-01-01
Places available as of 21.3.2006 (March-October course sessions) The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Title Hours Date Language ACROBAT 7.0 : Utilisation de fichiers PDF 8 8.05.06 F Project Planning with MS-Project 16 9.05-6.06.06 E STEP7: niveau 1 32 9-12.05.06 E-F Oracle: Programming with PL/SQL 24 10-12.05.06 E FileMaker - niveau 2 16 11-12.05.06 F LabVIEW Application Development 24 15-17.05.06 E LabVIEW Advanced Programming 16 18-19.05.06 E PERL 5: Advanced Aspects 8 18.05.06 E Technique du vide 16 18-19.05.06 F WORD 2003 - niveau 2 : ECDL 16 22-23.05.06 F Introduction au VHDL et utilisation du simulateur NCVHDL de CADENCE 16 23-24.05.06 F Comprehensive VHDL for FPGA Design 40 29.05-2.06.06 E C++ Programming Part 2 - Advanced C++ and its T...
4. Technical Training: Places available
CERN Multimedia
Davide Vitè
2006-01-01
Places available as of 9.5.2006 (May-October course sessions) The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Title Hours Date Language LabVIEW Application Development 24 15-17.05.06 E LabVIEW Advanced Programming 16 18-19.05.06 E PERL 5: Advanced Aspects 8 18.05.06 E Technique du vide 16 18-19.05.06 F FileMaker - niveau 2 16 11-12.05.06 F WORD 2003 - niveau 2 : ECDL 16 22-23.05.06 F Introduction au VHDL et utilisation du simulateur NCVHDL de CADENCE 16 23-24.05.06 F Comprehensive VHDL for FPGA Design 40 29.05-2.06.06 E C++ Programming Part 2 - Advanced C++ and its Traps and Pitfalls 32 30.05-2.06.06 E Python: Hands-on Introduction 24 7-9.06.06 E AutoCAD Mechanical 2006 16 13-14.06.06 F CERN EDMS for Local Administrators 16 13-14.06.06...
5. Technical Training: Places available
CERN Multimedia
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Project Planning with MS-Project :6 & 13.5.2004 (2 days) Word 2003 - niveau 1 : 10 & 11.5.2004 (2 jours) Oracle 9i : SQL : 17 - 19.5.2004 (3 days) Word 2003 - niveau 2 : 24 & 25.5.2004 (2 jours) EXCEL 2003 - niveau 1: 27 & 28.5.2004 (2 jours) STEP7 Programming Level 1 : 1 - 4.6.2004 (4 days) Oracle 9i : Programming with PL/SQL : 2 - 4.6.2...
6. Technical Training: Places available
CERN Multimedia
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Instructor-led WBTechT study or follow-up for Microsoft applications : 1.4.2004 (morning) FrontPage XP - niveau 1 : 5 & 6.4.2004 (2 jours) AutoCAD 2002 - niveau 1 : 19, 20.4 et 3, 4.5.2004 (4 jours) Oracle 8i/9i - Develop Web-based Applications with PL/SQL : 19 & 20.4.2004 (2 days) Introduction to ANSYS : 20 - 23.4.2004 (4 day...
7. Technical Training: Places available
CERN Multimedia
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Instructor-led WBTechT study or follow-up for Microsoft applications : 19.2.2004 (morning) LabVIEW TestStand I (E) : 23 & 24.2.2004 (2 days) LabVIEW base 1 : 25 - 27.2.2004 (3 jours) Instructor-led WBTechT study or follow-up for Microsoft applications : 26.2.2004 (morning) CLEAN-2002 : Working in a Cleanroom : 10.3.2004 (afternoon - free of charge) C++ for Pa...
8. Technical Training: Places available
CERN Multimedia
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Instructor-led WBTechT study for Microsoft applications : 12.2.2004 (morning) Instructor-led WBTechT study or follow-up for Microsoft applications : 19.2.2004 (morning) LabVIEW TestStand I (E) : 23 & 24.2.2004 (2 days) LabVIEW base 1 : 25 - 27.2.2004 (3 jours) CLEAN-2002 ...
9. Technical Training: Places available
CERN Multimedia
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: LabVIEW base 1 : 25 - 27.2.2004(3 jours) Instructor-led WBTechT study or follow-up for Microsoft applications : 26.2.2004 (morning) CLEAN-2002 : Working in a Cleanroom : 10.3.2004 (afternoon - free of charge) C++ for Particle Physicists : 8 - 12.3.2004 (6 X 4-hour sessions) LabVIEW hands-on (E) : 16.3.2004 (afternoon) LabVIEW Basics 1 : 22 - 24.3.2004 ...
10. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: C++ for Particle Physicists : 8 - 12.3.2004 (6 X 4-hour sessions) Introduction to the CERN EDMS : 9.3.2004 (1 day, free of charge) The EDMS MTF in Practice : 10.3.2004 (morning, free of charge) CLEAN-2002: Working in a Cleanroom : 10.3.2004 (afternoon, free of charge) The CERN EDMS for Engineers : 11.3.2004 (1 day, free of charge) LabVIEW hands-on (E):...
11. Technical Training: Places available
CERN Document Server
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: The JAVA Programming Language Level 1 : 9 & 10.1.2004 (2 days) LabVIEW TestStand I (E) : 23 & 24.2.2004 (2 days) LabVIEW base 1 : 25 - 27.2.2004 (3 jours) CLEAN-2002 : Working in a Cleanroom : 10.3.2004 (afternoon - free of charge) C++ for Particle Physicists : 8 - 12.3.2004 (6 X 4-hour sessions) LabVIEW hands-on (E) : 16.3...
12. Technical Training: Places available
CERN Multimedia
2004-01-01
If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: C++ for Particle Physicists : 8 - 12.3.2004 (6 X 4-hour sessions) Introduction to the CERN EDMS : 9.3.2004 (1 day, free of charge) The EDMS MTF in Practice : 10.3.2004 (morning, free of charge) CLEAN-2002 : Working in a Cleanroom : 10.3.2004 (afternoon, free of charge) The CERN EDMS for Engineers : 11.3.2004 (1 day, free of charge) LabVIEW...
13. SUBJECT AND AUTHOR INDEXES
Directory of Open Access Journals (Sweden)
IJBE Volume 2
2016-09-01
Full Text Available SUBJECT INDEX IJBE VOLUME 2: access credit, 93; acquisition, 177; AHP, 61, 82, 165; arena simulation, 43; BMC, 69; Bojonegoro, 69; brand choice, 208; brand image, 208; brand positioning, 208; bullwhip effect, 43; burger buns, 1; business synergy and financial reports, 177; capital structure, 130; cluster, 151; coal reserves, 130; coffee plantation, 93; competitiveness, 82; consumer behaviour, 33; consumer complaint behavior, 101; cooking spices, 1; crackers, 1; cross sectional analytical, 139; crosstab, 101; CSI, 12; direct selling, 122; discriminant analysis, 33; economic value added, 130, 187; employee motivation, 112; employee performance, 112; employees, 139; EOQ, 23; farmer decisions, 93; farmer group, 52; financial performance evaluation, 187; financial performance, 52, 177; financial ratio, 187; financial report, 187; fiva food, 23; food crops, 151; horticulture, 151; imports, 151; improved capital structure, 177; IPA, 12; leading sector, 151; life insurance, 165; LotteMart, 43; main product, 61; marketing mix, 33, 165; matrix SWOT, 69; MPE, 61; multiple linear regression, 122; muslim clothing, 197; Ogun, 139; Pangasius fillet, 82; Pati, 93; pearson correlation, 101; perceived value, 208; performance supply chain, 23; PLS, 208; POQ, 23; portfolio analyzing, 1; product, 101; PT SKP, 122; pulp and papers, 187; purchase decision, 165; purchase intention, 33; remuneration, 112; re-purchasing decisions, 197; sales performance, 122; sawmill, 52; SCOR, 23; sekolah peternakan rakyat, 69; SEM, 112; SERVQUAL, 12; Sido Makmur farmer groups, 93; SI-PUHH Online, 12; small and medium industries (IKM), 61; socio-demographic, 139; sport drink, 208; stress, 139; supply chain, 43; SWOT, 82; the mix marketing, 197; Tobin’s Q, 130; trade partnership, 52; uleg chili sauce, 1. AUTHOR INDEX IJBE VOLUME 2: Achsani, Noer Azam, 177; Andati, Trias, 52, 177; Andihka, Galih, 208; Arkeman, Yandra, 43; Baga, Lukman M, 69; Cahyanugroho, Aldi, 112; Daryanto, Arief, 12; David, Ajibade, 139; Djoni, 122; Fahmi, Idqan, 1; Fattah, Muhammad Unggul Abdul, 61; Hakim, Dedi Budiman, 187; Harianto, 93; Hartoyo, 101; Homisah, 1; Hubeis, Musa, 112; Hutagaol, M. Parulian, 93; Jaya, Stevana
14. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: ANSYS : Thermal Analysis : 22 - 24.9.2004 (3 days) LabVIEW Migration 6 à 7 : 23.9.2004 (one day) ANSYS : Advanced Topics : 27.9 - 1.10.2004 (5 days) Word 2003 - niveau 2 : 27 & 28.9.2004 (2 jours) LabVIEW - Basics 1 : 27 - 29.9.2004 (3 days) LabVIEW - Basics 2 : 30.9 & 1.10.2004 (2 days) Introduction à Windows XP au CERN : 4.10.2004 (matin) The EDMS MTF in practice : 4.10.2004 (afternoon) The CERN EDMS for Engineers : 6.10.2004 (1 day) FrontPage 2003 - niveau 1 : 7 & 8.10.04 (2 jours) Outlook (short course I) : E-mail : 22.10.2004 (2 hours, morning) Outlook (short course II) : Calendar, Tasks and Notes: 22.10.2004 (2 hours, afternoon) Instructor-led WTechT Study or Fo...
15. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Programmation STEP7 niveau 1 : 14-17.9.2004 (4 jours) ANSYS : Thermal Analysis : 22 - 24.9.2004 (3 days) LabVIEW Migration 6 à 7 : 23.9.2004 (one day) ANSYS : Advanced Topics : 27.9 - 1.10.2004 (5 days) Word 2003 - niveau 2 : 27 & 28.9.2004 (2 jours) LabVIEW - Basics 1 : 27 - 29.9.2004 (3 days) MAGNE-04 : Magnétisme pour l'électrotechnique : 28 - 30.9.2004 (3 jours) LabVIEW - Basics 2 : 30.9 & 1.10.2004 (2 days) Introduction à Windows XP au CERN : 4.10.2004 (matin) FrontPage 2003 - niveau 1 : 7 & 8.10.04 (2 jours) Outlook (short course I) : E-mail : 22.10.2004 (2 hours, morning) Outlook (short course II) : Calendar, Tasks and Notes: 22.10.2004 (2 hours, afternoon) Introduction à ANSYS : 23 - 26.11.2004 (4 jours) ENSEIGNEMENT TECHNIQUE TECHNICAL TRAINING Monique Duval 74924...
16. Technical training - Places available
CERN Multimedia
2003-01-01
* Given the Bulletin's printing deadline, these places may no longer be available at the time of publication. Please check our Web site for the latest update. ** The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Hands-on Introduction to Python Programming : 12 - 14.11.03 (3 days) ACCESS 2000 niveau 1 : 13 & 14.11.03 (2 jours) C++ for Particle Physicists : 17 - 21.11.03 (6 x 3-hour lectures) Programmation automate Schneider TSX Premium niveau 2 : 18 - 21.11.03 (4 jours) Project Planning with MS-Project (free of charge, language to be defined) : 18 & 25.11.03 (2 days) JAVA 2 Enterprise Edition Part 1 : WEB...
17. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Hands-on Object-Oriented Design and Programming with C++ : 22 - 24.3.2005 (3 days) FileMaker - niveau 2 : 4 & 5.4.2005 (2 jours) EMAG-2004 - Electromagnetic Design and Mathematical Optimization in Magnet Technology: 4- 14.4.2005 (8 x 3h) EXCEL 2003 - niveau 2 : 11 & 12.4.2005 (2 jours) LabVIEW Intermediate 1: 11 - 13.4.2005 (3 days) ACCESS 2003 - Level 2 - ECDL AM5: 13 & 14.4.2005 (2 days) LabVIEW Intermediate 2: 14 & 15.4.2005 (2 days) PowerPoint 2003 (F) : 18.4.2005 (1 jour) Joint PVSS JCOP Framework : 25 - 29.4.2005 (5 days) WORD 2003 - niveau 1 : 2 & 3.5.2005 (2 jours) ELEC-2005 - Summer Term: System electronics for physics - Issues : 10, 12, 17, 19, 24, 26 & 31.5.2005 (7 x 2h lectures) AutoCAD 2002 - niveau 1 : 11, 12, 18 & 19.5.2005 (4 jour...
18. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: LabVIEW Migration 6 to 7: 14.6.2005 (1 day) IT3T/1 - Read your mail and more with Outlook 2003 : 14.6.2005 (IT Technical Training Tutorial, free of charge) IT3T/2 - Creating, managing and using distribution lists with Simba2 : 16.6.2005 (IT Technical Training Tutorial, free of charge) FrontPage 2003 - niveau 2 : 16 & 17.6.2005 (2 jours) Utilisation de fichiers PDF avec ACROBAT 6.0 : 20.6.2005 (1 journée) Hands-on Introduction to Python Programming: 28 - 30.6.2005 (3 days) Introduction to ANSYS: 21 - 24.6.2005 (4 days) IT3T/3 - Working remotely with Windows XP: 28.6.2005 (IT Technical Training Tutorial, free of charge) IT3T/4 - Editing Websites with Frontpage 2003: 30.6.2005 (IT Technical Training Tutorial, free of charge) LabVIEW base 1 : 4 - 6.7.2005 (3 jours) LabVIEW Basics 2 : 7 - 8.7.2005 (2 days) WORD 2003 (Short Course I) - HowTo... Work with repetitive tasks (AutoText, AutoFormat, AutoCorrect, Find/Replace) : 4.7.2005 (afternoon) WORD 2003 (Shor...
19. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: LabVIEW Real-Time (F) : 7 - 9.6.2005 (3 jours) LabVIEW Migration 6 to 7: 14.6.2005 (1 day) IT3T/1 - Read your mail and more with Outlook 2003 : 14.6.2005 (IT Technical Training Tutorial, free of charge) IT3T/2 - Creating, managing and using distribution lists with Simba2 : 16.6.2005 (IT Technical Training Tutorial, free of charge) FrontPage 2003 - niveau 2 : 16 & 17.6.2005 (2 jours) Utilisation de fichiers PDF avec ACROBAT 6.0 : 20.6.2005 (1journée) Introduction to ANSYS: 21 - 24.6.2005 (4 days) IT3T/3 - Working remotely with Windows XP : 28.6.2005 (IT Technical Training Tutorial, free of charge) IT3T/4 - Editing Websites with Frontpage 2003 : 30.6.2005 (IT Technical Training Tutorial, free of charge) WORD 2003 (Short Course I) - HowTo... Work with repetitive tasks /AutoText, AutoFormat, AutoC...
20. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Joint PVSS JCOP Framework: 8 - 12.8.2005 (5 days) WORD 2003 (Short Course I) - HowTo... Work with AutoTasks (AutoText, AutoFormat, AutoCorrect, Lists, Find/Replace): 25.8.2005 (morning) WORD 2003 (Short Course II) - HowTo... Mail merge: 25.8.2005 (afternoon) EXCEL 2003 (Short Course I) - HowTo... Work with formulae: 26.8.2005 (morning) WORD 2003 (Short Course III) - HowTo... Work with long documents: 26.8.2005 (afternoon) FrontPage 2003 - niveau 1 : 1 - 2.9.2005 (2 jours) Utilisation des fichiers PDF avec ACROBAT 7.0 : 5.9.2005 (1 jour) LabVIEW Basics 1 : 5 - 7.9.2005 (3 days, dates to be confirmed) Introduction à Windows XP au CERN : 12.9.2005 (1 demi-journée) FrontPage 2003 - niveau 2 : 15 - 16.9.2005 (2 jours) AutoCAD 2005 - niveau 1 : 22, 23, 28, 29.9.2005 (4 jours) MAGNE-05 - Magn&...
1. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Joint PVSS JCOP Framework: 8 - 12.8.2005 (5 days) WORD 2003 (Short Course I) - HowTo... Work with AutoTasks (AutoText, AutoFormat, AutoCorrect, Lists, Find/Replace): 25.8.2005 (morning) WORD 2003 (Short Course II) - HowTo... Mail merge: 25.8.2005 (afternoon) EXCEL 2003 (Short Course I) - HowTo... Work with formulae: 26.8.2005 (morning) WORD 2003 (Short Course III) - HowTo... Work with long documents: 26.8.2005 (afternoon) FrontPage 2003 - niveau 1 : 1 - 2.9.2005 (2 jours) Utilisation des fichiers PDF avec ACROBAT 7.0 : 5.9.2005 (1 jour) LabVIEW Basics 1 : 5 - 7.9.2005 (3 days, dates to be confirmed) Introduction à Windows XP au CERN : 12.9.2005 (1 demi-journée) FrontPage 2003 - niveau 2 : 15 - 16.9.2005 (2 jours) AutoCAD 2005 - niveau 1 : 22, 23, 28, 29.9.2005 (4 jours) Finite State Machines in the JCOP Framew...
2. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: WORD 2003 (Short Course I) - HowTo... Work with AutoTasks (AutoText, AutoFormat, AutoCorrect, Lists, Find/Replace): 25.8.2005 (morning) WORD 2003 (Short Course II) - HowTo... Mail merge: 25.8.2005 (afternoon) EXCEL 2003 (Short Course I) - HowTo... Work with formulae: 26.8.2005 (morning) WORD 2003 (Short Course III) - HowTo... Work with long documents: 26.8.2005 (afternoon) FrontPage 2003 - niveau 1 : 1 - 2.9.2005 (2 jours) Utilisation des fichiers PDF avec ACROBAT 7.0 : 5.9.2005 (1 jour) LabVIEW base 1 : 5 - 7.9.2005 (3 jours) Introduction à Windows XP au CERN : 12.9.2005 (1 demi-journée) FrontPage 2003 - niveau 2 : 15 - 16.9.2005 (2 jours) AutoCAD 2005 - niveau 1 : 22, 23, 28, 29.9.2005 (4 jours) Finite State Machines in the JCOP Framework: 26 - 28.9.2005 (3 days) MAGNE-05 - Magnétisme pour l'électrotechniq...
3. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: LabVIEW base 1 : 4 - 6.7.2005 (3 jours) LabVIEW Basics 2: 7 - 8.7.2005 (2 days) Utilisation des fichiers PDF avec ACROBAT 7.0 : 5.7.2005 (1 jour) FrontPage 2003 - niveau 1 : 6-7.7.2005 (2 jours) WORD 2003 (Short Course I) - HowTo... Work with repetitive tasks /AutoText, AutoFormat, AutoCorrect, Find/Replace) : 4.7.2005 (afternoon) WORD 2003 (Short Course II) - HowTo... Mail merge: 5.7.2005 (afternoon) WORD 2003 (Short Course III) - HowTo... Work with long documents : 6.7.2005 (afternoon) OUTLOOK (Short Course I) - E-mail: 6.7.2005 (morning) OUTLOOK (Short Course II) - Calendar, Tasks and Notes: 7.7.2005 (morning) OUTLOOK (Short Course III) - Meetings and Delegation: 8.7.2005 (morning) EXCEL 2003 (Short Course I) - HowTo... Work with formulae: 7.7.2005 (afternoon) EXCEL 2003 (Short Course II) - HowTo... Format y...
4. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: LabVIEW Real-Time (F) : 7 - 9.6.2005 (3 jours) LabVIEW Migration 6 to 7: 14.6.2005 (1 day) FrontPage 2003 - niveau 2 : 16 & 17.6.2005 (2 jours) Utilisation de fichiers PDF avec ACROBAT 6.0 : 20.6.2005 (1journée) Introduction to ANSYS: 21 - 24.6.2005 (4 days) WORD 2003 (Short Course I) - HowTo... Work with repetitive tasks /AutoText, AutoFormat, AutoCorrect, Find/Replace) : 4.7.2005 (afternoon) WORD 2003 (Short Course II) - HowTo... Mail merge: 5.7.2005 (afternoon) WORD 2003 (Short Course III) - HowTo... Work with long documents : 6.7.2005 (afternoon) ACCESS 2003 - Level 2 - ECDL AM5: 5 - 8.7.2005 (4 mornings) EXCEL 2003 (Short Course I) - HowTo... Work with formulae: 7.7.2005 (afternoon) EXCEL 2003 (Short Course II) - HowTo... Format your worksheet for printing: 8.7.2005 (aftern...
5. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Hands-on Introduction to Python Programming: 28 - 30.6.2005 (3 days) Introduction to ANSYS: 21 - 24.6.2005 (4 days) IT3T/3 - Working remotely with Windows XP: 28.6.2005 (IT Technical Training Tutorial, free of charge) IT3T/4 - Editing Websites with Frontpage 2003: 30.6.2005 (IT Technical Training Tutorial, free of charge) Utilisation des fichiers PDF avec ACROBAT 7.0 : 5.7.2005 (1 jour) FrontPage 2003 - niveau 1 : 6-7.7.2005 (2 jours) LabVIEW base 1 : 4 - 6.7.2005 (3 jours) LabVIEW Basics 2: 7 - 8.7.2005 (2 days) WORD 2003 (Short Course I) - HowTo... Work with repetitive tasks (AutoText, AutoFormat, AutoCorrect, Find/Replace) : 4.7.2005 (afternoon) WORD 2003 (Short Course II) - HowTo... Mail merge: 5.7.2005 (afternoon) WORD 2003 (Short Course III) - HowTo... Work with long documents : 6.7.2005 (afternoon) ACCESS 2003 - Level 2 - ECDL AM5 : 5 - 8.7.2005 (4 mornings) EXCEL 2003 (Short Course I) - HowTo... Work ...
6. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Hands-on Introduction to Python Programming: 28 - 30.6.2005 (3 days) Introduction to ANSYS: 28.6 - 1.7.2005 (4 days) IT3T/3 - Working remotely with Windows XP: 28.6.2005 (IT Technical Training Tutorial, free of charge) IT3T/4 - Editing Websites with Frontpage 2003: 30.6.2005 (IT Technical Training Tutorial, free of charge) LabVIEW base 1 : 4 - 6.7.2005 (3 jours) LabVIEW Basics 2: 7 - 8.7.2005 (2 days) Utilisation des fichiers PDF avec ACROBAT 7.0 : 5.7.2005 (1 jour) FrontPage 2003 - niveau 1 : 6-7.7.2005 (2 jours) WORD 2003 (Short Course I) - HowTo... Work with repetitive tasks /AutoText, AutoFormat, AutoCorrect, Find/Replace) : 4.7.2005 (afternoon) WORD 2003 (Short Course II) - HowTo... Mail merge: 5.7.2005 (afternoon) WORD 2003 (Short Course III) - HowTo... Work with long documents : 6.7.2005 (afternoon) ACCES...
7. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Instructor-led WBTechT Study or Follow-up for Microsoft Applications : 7.9.2004 (morning) Outlook (short course III) : Meetings and Delegation : 7.9.2004 (2 hours, afternoon) Introduction au VHDL et utilisation du simulateur NCVHDL de CADENCE : 7 & 8.9.2004 (2 jours) Joint PVSS JCOP Framework : 13 - 17.9.2004 (5 days) Programmation STEP7 niveau 1 : 14-17.9.2004 (4 jours) ANSYS : Thermal Analysis : 22 - 24.9.2004 (3 days) LabVIEW Migration 6 à 7 : 23.9.2004 (one day) ANSYS : Advanced Topics : 27.9 - 1.10.2004 (5 days) Word 2003 - niveau 2 : 27 & 28.9.2004 (2 jours) LabVIEW - Basics 1 : 27 - 29.9.2004 (3 days) MAGNE-04 : Magnétisme pour l'électrotechnique : 28 - 30.9.2004 (3 jours) LabVIEW - Basics 2 : 30.9 & 1.10.2004 (2 days) Introduction à Windows XP au CERN : 4.10.2004 (matin) ...
8. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Joint PVSS JCOP Framework : 9 - 13.8.2004 (5 days) Outlook (I): E-mail : 31.8.2004 (2 hours, morning) Outlook (II): Calendar, Tasks and Notes : 31.8.2004 (2 hours, afternoon) Hands-on Introduction to Python Programming : 1 - 3.9.2004 (3 days - free course) Instructor-led WBTechT Study or Follow-up for Microsoft Applications : 7.9.2004 (morning) Outlook (III): Meetings and Delegation : 7.9.2004 (2 hours, afternoon) Introduction au VHDL et utilisation du simulateur NCVHDL de CADENCE : 7 & 8.9.2004 (2 jours) Joint PVSS JCOP Framework : 13 - 17.9.2004 (5 days) AutoCAD 2002 - niveau 1 : 13, 14, 23, 24.9.2004 (4 jours) Programmation STEP7 niveau 1 : 14 - 17.9.2004 (4 jours) FrontPage 2003 - niveau 1 : 20 & 21.9.2004 (2 jours) ANSYS: Thermal Analysis : 22 - 24.9.2004 (3 days) ANSYS: Advanced Topics : 27.9...
9. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: FrontPage 2003 - niveau 1 : 9 & 10.5.2005 (2 jours) ELEC-2005 - Summer Term: System electronics for physics - Issues : 10, 12, 17, 19, 24, 26 & 31.5.2005 (7 x 2h lectures) AutoCAD 2005 - niveau 1 : 12, 13, 18, 19.5.2005 (4 jours) ACCESS 2003 - Level 1: ECDL M5 : 11 & 12.5.2005 (2 days) Object-Oriented Analysis and Design using UML: 17 - 19.5.2005 (3 days) Synplify Pro Training: 18.5.2005 (1 day) Finite State Machines in the JCOP Framework: 24 - 26.5.2005 (3 days) The Joint PVSS JCOP Framework: 30.5 - 3.6.2005 (5 days) Introduction à la CAO CADENCE : 31.5 - 1.6.2005 (2 jours) LabVIEW Real-Time (F) : 7 - 9.6.2005 (3 jours) LabVIEW Migration 6 to 7: 14.6.2005 (1 day) FrontPage 2003 - niveau 2 : 16 & 17.6.2005 (2 jours) Utilisation de fichiers PDF avec ACROBAT 6.0 : 20.6.2005 (1journée) Intr...
10. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: FrontPage 2003 - niveau 1 : 9 & 10.5.2005 (2 jours) ELEC-2005 - Summer Term: System electronics for physics - Issues : 10, 12, 17, 19, 24, 26 & 31.5.2005 (7 x 2h lectures) AutoCAD 2005 - niveau 1 : 11, 12, 18, 19.5.2005 (4 jours) Object-Oriented Analysis and Design using UML: 17 - 19.5.2005 (3 days) Synplify Pro Training: 18.5.2005 (1 day) Finite State Machines in the JCOP Framework: 24 - 26.5.2005 (3 days) The Joint PVSS JCOP Framework: 30.5 - 3.6.2005 (5 days) Introduction à la CAO CADENCE : 31.5 - 1.6.2005 (2 jours) LabVIEW Real-Time (F) : 7 - 9.6.2005 (3 jours) LabVIEW Migration 6 to 7: 14.6.2005 (1 day) FrontPage 2003 - niveau 2 : 16 & 17.6.2005 (2 jours) Utilisation de fichiers PDF avec ACROBAT 6.0 : 20.6.2005 (1journée) Introductio...
11. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: PowerPoint 2003 (F) : 25.4.2005 (1 jour) WORD 2003 - niveau 1 : 2 & 3.5.2005 (2 jours) FrontPage 2003 - niveau 1 : 9 & 10.5.2005 (2 jours) ANSYS Workbench (F) : 9 - 12.5.2005 (4 jours) ELEC-2005 - Summer Term: System electronics for physics - Issues : 10, 12, 17, 19, 24, 26 & 31.5.2005 (7 x 2h lectures) AutoCAD 2005 - niveau 1 : 11, 12, 18, 19.5.2005 (4 jours) ACCESS 2003 - Level 1: ECDL M5 : 11 & 12.5.2005 (2 days) La technique du vide : 12 & 13.5.2005 (2 jours) Finite State Machines in the JCOP Framework: 24 - 26.5.2005 (3 days) The Joint PVSS JCOP Framework: 30.5 - 3.6.2005 (5 days) LabVIEW Migration 6 to 7: 14.6.2005 (1 day) Introduction to ANSYS: 21 - 24.6.2005 (4 days) ENSEIGNEMENT TECHNIQUE TECHNICAL TRAINING Monique Duval 74924 tech...
12. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: LabVIEW Intermediate 1: 11 - 13.4.2005 (3 days) ACCESS 2003 - Level 2 - ECDL AM5: 13 & 14.4.2005 (2 days) LabVIEW Intermediate 2: 14 & 15.4.2005 (2 days) PowerPoint 2003 (F) : 25.4.2005 (1 jour) WORD 2003 - niveau 1 : 2 & 3.5.2005 (2 jours) FrontPage 2003 - niveau 1 : 9 & 10.5.2005 (2 jours) ANSYS Workbench (F) : 9 - 12.5.2005 (4 jours) ELEC-2005 - Summer Term: System electronics for physics - Issues : 10, 12, 17, 19, 24, 26 & 31.5.2005 (7 x 2h lectures) La technique du vide : 12 & 13.5.2005 (2 jours) Finite State Machines in the JCOP Framework: 24 - 26.5.2005 (3 days) The Joint PVSS JCOP Framework ; 30.5 - 3.6.2005 (5 days) LabVIEW Migration 6 to 7: 14.6.2005 (1 day) Introduction to ANSYS: 21 - 24.6.2005 (4 days) ENSEIGNEMENT ...
13. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: WORD 2003 - niveau 1 : 2 & 3.5.2005 (2 jours) FrontPage 2003 - niveau 1 : 9 & 10.5.2005 (2 jours) ELEC-2005 - Summer Term: System electronics for physics - Issues : 10, 12, 17, 19, 24, 26 & 31.5.2005 (7 x 2h lectures) AutoCAD 2005 - niveau 1 : 11, 12, 18, 19.5.2005 (4 jours) ACCESS 2003 - Level 1: ECDL M5 : 11 & 12.5.2005 (2 days) Finite State Machines in the JCOP Framework: 24 - 26.5.2005 (3 days) The Joint PVSS JCOP Framework: 30.5 - 3.6.2005 (5 days) Introduction à la CAO CADENCE : 31.5 - 1.6.2005 (2 jours) Programmation STEP7 niveau 1 : 7 - 10.6.2005 (4 jours) LabVIEW Migration 6 to 7: 14.6.2005 (1 day) Introduction to ANSYS: 21 - 24.6.2005 (4 days) MAGNE-05 - Magnétisme pour l'électrotechnique : 27 - 29.9.2005 (3 j...
14. Technical Training: Places available
CERN Multimedia
Davide Vitè
2006-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses (title, hours, date, language): ACCESS 2003 - Level 2: ECDL AM5 (16 h, 19 to 20-01-06, E-F); AutoCAD 2006 - niveau 1 (32 h, 19 to 25-01-06, F); C++ Programming Advanced - Traps and Pitfalls (32 h, 24 to 27-01-06, E); STEP7 : Level 1 (32 h, 24 to 27-01-06, E); LabVIEW Basics 2 (16 h, 26 to 27-01-06, E); AutoCAD Mechanical 2006 (16 h, 30 to 31-01-06, F); DIAdem : base (24 h, 01 to 03-02-06, F); FrontPage 2003 - niveau 1 (16 h, 02 to 03-02-06, F); ACROBAT 7.0 : Utilisation de fichiers PDF (8 h, 06-02-06, F); Manipulation des images (4 h, 08-02-06, F); OUTLOOK 2003 (Short Course I) - E-mail (3 h, 09-02-06, E-F); WORD 2003 (Short Course II) - HowTo... Mail merge (3 h, 09-02-06, E-F); ACCESS 2003 - Level 1: ECDL M5 (16 h, 13 to 14-02-06, E-F); JCOP: Control System Integration using JCOP Tools (24 h, 14 to 16-02-06, E); OUTLOOK 2003 (Short Course II) - Calendar, Tasks and Note...
15. Technical Training: Places available
CERN Multimedia
Davide Vitè
2006-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses (title, hours, date, language): AutoCAD Mechanical 2006 (16 h, 30 to 31-01-06, F); DIAdem : base (24 h, 01 to 03-02-06, F); ACROBAT 7.0 : Utilisation de fichiers PDF (8 h, 06-02-06, F); Manipulation des images (4 h, 08-02-06, F); OUTLOOK 2003 (Short Course I) - E-mail (3 h, 09-02-06, E-F); WORD 2003 (Short Course II) - HowTo... Mail merge (3 h, 09-02-06, E-F); ACCESS 2003 - Level 1: ECDL M5 (16 h, 13 to 14-02-06, E-F); JCOP: Control System Integration using JCOP Tools (24 h, 14 to 16-02-06, E); OUTLOOK 2003 (Short Course II) - Calendar, Tasks and Notes (3 h, 16-02-06, E-F); WORD 2003 (Short Course III) - HowTo... Work with long documents (3 h, 16-02-06, E-F); FrontPage 2003 - niveau 1 (16 h, 23 to 24-02-06, F); OUTLOOK 2003 (Short Course III) - Meetings and Delegation (3 h, 27-02-06, E-F); WORD 2003 (Short Course IV) - HowTo... Work with master document (3 h, 27-02-06, E-F) ...
16. High availability using virtualization
CERN Document Server
Calzolari, Federico
2009-01-01
High availability has always been one of the main problems for a data center. Until now, high availability was achieved by host-by-host redundancy, a method that is highly expensive in terms of hardware and staffing costs. Virtualization offers a new approach to the problem. Using virtualization, it is possible to build a redundancy system for all the services running in a data center. This new approach to high availability makes it possible to redistribute the running virtual machines over the physical servers that are up and running, by exploiting the features of the virtualization layer: starting, stopping and moving virtual machines between physical hosts. The system (3RC) is based on a finite state machine with hysteresis, which can restart each virtual machine on any physical host or reinstall it from scratch. A complete infrastructure has been developed to install the operating system and middleware in a few minutes. To virtualize the main servers of a data center, a new procedure has been developed to migrate physical to virtu...
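The escalation logic of a finite state machine with hysteresis, as described for 3RC, can be sketched as follows. This is a minimal illustrative sketch, not the actual 3RC implementation: the class name, state names and failure thresholds are hypothetical.

```python
# Hypothetical sketch of restart logic driven by a finite state machine with
# hysteresis: several consecutive failed health checks are required before any
# action is taken, and the response escalates from restart to reinstall.

class VmMonitor:
    """Tracks consecutive failed health checks for one virtual machine and
    escalates: OK -> RESTART (on any physical host) -> REINSTALL from scratch."""

    def __init__(self, restart_after=3, reinstall_after=6):
        self.failures = 0
        self.restart_after = restart_after      # hysteresis thresholds: act only
        self.reinstall_after = reinstall_after  # after repeated failures
        self.state = "OK"

    def observe(self, check_passed):
        if check_passed:
            self.failures = 0                   # a single success resets the counter
            self.state = "OK"
        else:
            self.failures += 1
            if self.failures >= self.reinstall_after:
                self.state = "REINSTALL"        # reinstall the VM from scratch
            elif self.failures >= self.restart_after:
                self.state = "RESTART"          # restart the VM on any physical host
        return self.state
```

The hysteresis (thresholds above one) prevents a single transient glitch from triggering an expensive migration or reinstall.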
17. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: C++ for Particle Physicists : 15 - 19.11.2004 (6 X 3 hours sessions) Word 2003 - niveau 1 : 22 & 23.11.2004 (2 jours) Introduction à ANSYS : 23 - 26.11.2004 (4 jours) Project Planning with MS-Project : 25.11 & 2.12.2004 (2 days) Explicit Dynamics with ANSYS/LS-Dyna : 7 - 9.12.2004 (2 days) PCAD Schémas - Débutants : 9 & 10.12.2004 (2 jours) The JAVA Programming Language Level 1 : 11 & 12.1.2005 (2 days) Introduction to XML : 13 & 14.1.2005 (2 days) CLEAN-2002 : Travailler en salle propre : 25.1.2005 (après-midi, cours gratuit) Compatibilité électromagnétique (CEM) : installation et remèdes : 25 - 27.1.2005 (3 jours) Finite State Machines in the JCOP Fr ...
18. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Explicit Dynamics with ANSYS/LS-Dyna : 7 - 9.12.2004 (3 days) Introduction to PERL 5 : 8 & 9.12.2004 (2 days) PCAD Schémas - Débutants : 9 & 10.12.2004 (2 jours) Advanced aspects of PERL 5: 10.12.2004 (1 day) PCAD PCB Débutants : 13 - 15.12.2004 (3 jours) The JAVA Programming Language Level 1 : 11 & 12.1.2005 (2 days) Introduction to XML : 13 & 14.1.2005 (2 days) Introduction to the CERN EDMS : 18.1.2005 (1 day - free course) The CERN EDMS for Local Administrators : 19 & 20.1.2005 (2 days - free course) Programmation Unity-Pro pour utilisateurs de Schneider PL7-Pro : 24 - 28.1.2005 (8 demi-journées) CLEAN-2002 : Travailler en salle propre : 25.1.2005 (après-midi, cours gratuit) Compatibilit&eac...
19. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Introduction à ANSYS : 23 - 26.11.2004 (4 jours) FileMaker - niveau 1 : 23 - 26.11.2004 (4 jours) Project Planning with MS-Project : 25.11 & 2.12.2004 (2 days) Explicit Dynamics with ANSYS/LS-Dyna : 7 - 9.12.2004 (3 days) Introduction to PERL 5 : 8 & 9.12.2004 (2 days) PCAD Schémas - Débutants : 9 & 10.12.2004 (2 jours) The JAVA Programming Language Level 1 : 11 & 12.1.2005 (2 days) Introduction to XML : 13 & 14.1.2005 (2 days) Introduction to the CERN EDMS : 18.1.2005 (1 day - free course) The CERN EDMS for Local Administrators : 19 & 20.1.2005 (2 days - free course) Programmation Unity-Pro pour utilisateurs de Schneider PL7-Pro : 24 - 28.1.2005 (8 demi-journées) CLEAN-2002 : Travaill...
20. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: The JAVA Programming Language Level 1: 11 & 12.1.2005 (2 days) Introduction to XML : 13 & 14.1.2005 (2 days) ELEC-2005.:Winter term: Introduction to electronics in HEP: 18, 20, 25, 27.1, 1 & 3.2.2005 (6 x 2h lectures) The CERN EDMS for Local Administrators: 19 & 20.1.2005 (2 days - free course) Programmation Unity-Pro pour utilisateurs de Schneider PL7-Pro : 24 - 28.1.2005 (8 demi-journées) CLEAN-2002 : Travailler en salle propre : 25.1.2005 (après-midi, cours gratuit) Compatibilité électromagnétique (CEM) : installation et remèdes : 25 - 27.1.2005 (3 jours) Finite State Machines in the JCOP Framework: 1 - 3.2.2005 (3 days - free course) The JAVA Programming Language Level 2: 7 - 9...
1. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: FileMaker - niveau 1 : 23 - 26.11.2004 (4 jours) Explicit Dynamics with ANSYS/LS-Dyna : 7 - 9.12.2004 (3 days) Introduction to PERL 5 : 8 & 9.12.2004 (2 days) PCAD Schémas - Débutants : 9 & 10.12.2004 (2 jours) Advanced aspects of PERL 5: 10.12.2004 (1 day) PCAD PCB Débutants : 13 - 15.12.2004 (3 jours) The JAVA Programming Language Level 1 : 11 & 12.1.2005 (2 days) Introduction to XML : 13 & 14.1.2005 (2 days) Introduction to the CERN EDMS : 18.1.2005 (1 day - free course) The CERN EDMS for Local Administrators : 19 & 20.1.2005 (2 days - free course) Programmation Unity-Pro pour utilisateurs de Schneider PL7-Pro : 24 - 28.1.2005 (8 demi-journées) CLEAN-2002 : Travailler en salle propre : 25.1.2005 ...
2. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: FileMaker - niveau 1 : 18 & 19.10.2004 (2 jours) Outlook (short course I) : E-mail : 22.10.2004 (2 hours, morning) Outlook (short course II) : Calendar, Tasks and Notes: 22.10.2004 (2 hours, afternoon) Excel 2003 - niveau 1 : 4 & 5.11.2004 (2 jours) LabVIEW Intermediate I : 8 - 10.11.2004 (3 days) Instructor-led WTechT Study or Follow-up for Microsoft Applications : 9.11.2004 (morning) Hands-On Object Oriented Design and Programming with C++ : 9 - 11.11.2004 (3 days) Outlook (Short Course III) : Meetings and Delegation : 9.11.2004 (2 hours, afternoon) LabVIEW Intermediate II : 11 & 12.11.2004 (2 days) AutoCAD 2002 - niveau 1 : 11, 12, 18, 19.11.2004 (4 jours) C++ for Particle Physicists : 15 - 19.11.2004 (6 X 3 hours sessions) FrontPage 2003 - niveau 2 : 18 &a...
3. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: The JAVA Programming Language Level 1: 11 & 12.1.2005 (2 days) Introduction to XML : 13 & 14.1.2005 (2 days) The CERN EDMS for Local Administrators: 19 & 20.1.2005 (2 days - free course) Programmation Unity-Pro pour utilisateurs de Schneider PL7-Pro : 24 - 28.1.2005 (8 demi-journées) LabVIEW base 1/basics 1 : 31.1 - 1.2.2005 (2 jours/2 days) LabVIEW base 2/basics 2 : 3 & 4.2.2005 (2 jours/2 days - langue à décider/language to be decided) Finite State Machines in the JCOP Framework: 1 - 3.2.2005 (3 days - free course) The JAVA Programming Language Level 2: 7 - 9.2.2005 (3 days) C++ Programming Advanced - Traps and Pitfalls: 8 - 11.2.2005 (4 days) Joint PVSS JCOP Framework: 14 - 18.2.2005 (5 days) JAVA 2 Enterprise Edition - Part 1: ...
4. Technical Training: Places available
CERN Multimedia
Davide Vitè
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Joint PVSS JCOP Framework : 3 - 7.10.2005 (5 days) Utilisation des fichiers PDF avec Acrobat 7.0 : 4.10.2005 (1 journée) LaTeX par la pratique : 4 - 6.10.2005 (3 matinées) PowerPoint 2003 (F) : 7.10.2005 (1 journée) FileMaker - niveau 1 : 20 - 21.10.2005 (2 jours) ACCESS 2003 - Level 1 - ECDL M5: 24 - 25.10.2005 (2 days) Finite State Machines in the JCOP Framework : 25 - 27.10.2005 (3 days) EXCEL 2003 (Short Course III) - HowTo... Pivot tables: 26.10.2005 (morning) WORD 2003 (Short Course I) - HowTo... Work with AutoTasks: 26.10.2005 (afternoon) OUTLOOK (Short Course I) - E-mail: 2.11.2005 (morning) WORD 2003 (Short Course II) - HowTo... Mail merge: 2.11.2005 (afternoon) FrontPage 2003 - niveau 1 : 3 - 4.11.2005 (2 jours) AutoCAD 2006 - niveau 1 : 3, 4, 9, 10.11.2005 (4 jours) Joint PVSS JCO...
5. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Outlook (short course I) : E-mail : 22.10.2004 (2 hours, morning) Outlook (short course II) : Calendar, Tasks and Notes: 22.10.2004 (2 hours, afternoon) Excel 2003 - niveau 1 : 4 & 5.11.2004 (2 jours) LabVIEW Intermediate I : 8 - 10.11.2004 (3 days) Instructor-led WTechT Study or Follow-up for Microsoft Applications : 9.11.2004 (morning) Hands-On Object Oriented Design and Programming with C++ : 9 - 11.11.2004 (3 days) Outlook (Short Course III) : Meetings and Delegation : 9.11.2004 (2 hours, afternoon) LabVIEW Intermediate II : 11 & 12.11.2004 (2 days) AutoCAD 2002 - niveau 1 : 11, 12, 18, 19.11.2004 (4 jours) C++ for Particle Physicists : 15 - 19.11.2004 (6 X 3 hours sessions) FrontPage 2003 - niveau 2 : 18 & 19.11.2004 (2 jours) Word 2003 - niveau 1 : 22 &a...
6. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: LabVIEW base 1/basics 1 : 31.1 - 2.2.2005 (3 jours/3 days - langue à décider/language to be decided) LabVIEW base 2/basics 2 : 3 & 4.2.2005 (2 jours/2 days - langue à décider/language to be decided) Finite State Machines in the JCOP Framework: 1 - 3.2.2005 (3 days - free course) The JAVA Programming Language Level 2: 7 - 9.2.2005 (3 days) C++ Programming Advanced -Traps and Pitfalls: 8 - 11.2.2005 (4 days) Joint PVSS JCOP Framework: 14 - 18.2.2005 (5 days) JAVA 2 Enterprise Edition - Part 1: WEB Applications: 21 & 22.2.2005 (2 days) FrontPage 2003 - niveau 1 : 21 & 22.2.2005 (2 jours) JAVA 2 Enterprise Edition - Part 2: Enterprise JavaBeans: 23 - 25.2.2005 (3 days) ELEC-2005 - Spring Term: Integrated circuits and VLSI technology...
7. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: The CERN EDMS for Local Administrators: 19 & 20.1.2005 (2 days - free course) Programmation Unity-Pro pour utilisateurs de Schneider PL7-Pro : 24 - 28.1.2005 (9 demi-journées) LabVIEW base 1/basics 1 : 31.1 - 2.2.2005 (3 jours/3 days - langue à décider/language to be decided) LabVIEW base 2/basics 2 : 3 & 4.2.2005 (2 jours/2 days - langue à décider/language to be decided) Finite State Machines in the JCOP Framework: 1 - 3.2.2005 (3 days - free course) The JAVA Programming Language Level 2: 7 - 9.2.2005 (3 days) C++ Programming Advanced -Traps and Pitfalls: 8 - 11.2.2005 (4 days) Joint PVSS JCOP Framework: 14 - 18.2.2005 (5 days) JAVA 2 Enterprise Edition - Part 1: WEB Applications: 21 & 22.2.2005 (2 days) JAVA 2 Enterprise Edition - Part 2: Enterprise JavaBeans: 23 - 25.2.2005 (3 days) ELEC-2005 â...
8. Technical Training: Places available
CERN Multimedia
Davide Vitè
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: ACCESS 2003 - Level 1 - ECDL M5: 24 - 25.10.2005 (2 days) Finite State Machines in the JCOP Framework : 25 - 27.10.2005 (3 days) EXCEL 2003 (Short Course III) - HowTo... Pivot tables: 26.10.2005 (morning) Introduction to the CERN EDMS: 2.11.2005 (1 day, free of charge) OUTLOOK (Short Course I) - E-mail: 2.11.2005 (morning) WORD 2003 (Short Course II) - HowTo... Mail merge: 2.11.2005 (afternoon) FrontPage 2003 - niveau 1 : 3 - 4.11.2005 (2 jours) AutoCAD 2006 - niveau 1 : 3, 4, 9, 10.11.2005 (4 jours) Joint PVSS JCOP Framework : 7 - 11.11.2005 (5 days) The CERN EDMS for Engineers: 8.11.2005 (1 day, free of charge) The EDMS-MTF in practice: 9.11.2005 (morning, free of charge) The CERN EDMS for Local Administrators: 15-16.11.2005 (2 days, free of charge) OUTLOOK (Short Course II) - Calendar, Tasks and Notes: ...
9. Technical Training: Places available
CERN Multimedia
Davide Vitè
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: FileMaker - niveau 1 : 20 - 21.10.2005 (2 jours) ACCESS 2003 - Level 1 - ECDL M5: 24 - 25.10.2005 (2 days) Finite State Machines in the JCOP Framework : 25 - 27.10.2005 (3 days) EXCEL 2003 (Short Course III) - HowTo... Pivot tables: 26.10.2005 (morning) OUTLOOK (Short Course I) - E-mail: 2.11.2005 (morning) WORD 2003 (Short Course II) - HowTo... Mail merge: 2.11.2005 (afternoon) FrontPage 2003 - niveau 1 : 3 - 4.11.2005 (2 jours) AutoCAD 2006 - niveau 1 : 3, 4, 9, 10.11.2005 (4 jours) Joint PVSS JCOP Framework : 7 - 11.11.2005 (5 days) OUTLOOK (Short Course II) - Calendar, Tasks and Notes: 16.11.2005 (morning) Joint PVSS JCOP Framework : 21 - 25.11.2005 (5 days) OUTLOOK (Short Course III) - Meetings and Delegation: 30.11.2005 (morning) WORD 2003 (Short Course III) - HowTo... Work with long documents : ...
10. Technical Training: Places available
CERN Multimedia
Davide Vitè
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: FrontPage 2003 - niveau 2 : 5 - 6.12.2005 (2 jours) Introduction à ANSYS Classique : 6 - 9.12.2005 (4 jours) EXCEL 2003 (Short Course II) - HowTo... Format your worksheet for printing: 7.12.2005 (morning) PCAD Schémas - Introduction : 8 - 9.12.2005 (2 jours) Finite State Machines in the JCOP Framework: 13 - 15.12.2005 (3 days) ACCESS 2003 - Level 2 - ECDL AM5: 14 - 15.12.2005 (2 days) PCAD PCB - Introduction : 14 - 16.12.2005 (3 jours) AutoCAD 2006 - niveau 1 : 19, 20, 24 & 25.1.2006 (4 jours) LabVIEW Basics I: 23 - 25.1.2006 (3 days) C++ Programming Advanced - Traps and Pitfalls: 24 - 27.1.2006 (4 days) STEP7 Programming Level 1: 24 - 27.1.2006 (4 days) LabVIEW Basics II: 26 - 27.1.2006 (2 days) AutoCAD Mechanical 2006 : 30 - 31.1.2006 (2 jours; suite du cours AutoCAD 2006 - niveau 1) Joint PVS...
11. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: FrontPage 2003 - niveau 1 : 1 - 2.9.2005 (2 jours) Utilisation des fichiers PDF avec ACROBAT 7.0 : 5.9.2005 (1 jour) LabVIEW Basics 1: 5 - 7.9.2005 (3 days) Introduction à Windows XP au CERN : 12.9.2005 (1 demi-journée) FrontPage 2003 - niveau 2 : 15 - 16.9.2005 (2 jours) AutoCAD 2005 - niveau 1 : 22, 23, 28, 29.9.2005 (4 jours) Finite State Machines in the JCOP Framework: 26 - 28.9.2005 (3 days) MAGNE-05 - Magnétisme pour l'électrotechnique : 27 - 29.9.2005 (3 jours) Joint PVSS JCOP Framework : 3 - 7.10.2005 (5 days) ACCESS 2003 - Level 1 - ECDL M5: 24 - 25.10.2005 (2 days) EXCEL 2003 (Short Course III) - HowTo... Pivot tables: 26.10.2005 (morning) OUTLOOK (Short Course I) - E-mail: 2.11.2005 (morning) OUTLOOK (Short Course II) - Calendar, Tasks and Notes: 16.11.2005 (morning) OU...
12. Technical training: places available
CERN Multimedia
Davide Vitè
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Introduction to the CERN EDMS: 2.11.2005 (1 day, free of charge) OUTLOOK (Short Course I) - E-mail: 2.11.2005 (morning) WORD 2003 (Short Course II) - HowTo... Mail merge: 2.11.2005 (afternoon) FrontPage 2003 - niveau 1 : 3 - 4.11.2005 (2 jours) AutoCAD 2006 - niveau 1 : 3, 4, 9, 10.11.2005 (4 jours) Joint PVSS JCOP Framework : 7 - 11.11.2005 (5 days) The CERN EDMS for Engineers: 8.11.2005 (1 day, free of charge) The EDMS-MTF in practice: 9.11.2005 (morning, free of charge) The CERN EDMS for Local Administrators: 15-16.11.2005 (2 days, free of charge) OUTLOOK (Short Course II) - Calendar, Tasks and Notes: 16.11.2005 (morning) Hands-On Object Oriented Design and Programming with C++ : 16 - 18.11.2005 (3 days) The Java Programming Language Level 1: 21 - 23.11.2005 (3 days) Hands-on Introduction to Python Prog...
13. Technical Training: Places available
CERN Multimedia
Monique Duval
2004-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: LabVIEW Intermediate I : 8 - 10.11.2004 (3 days) Instructor-led WTechT Study or Follow-up for Microsoft Applications : 9.11.2004 (morning) Hands-On Object Oriented Design and Programming with C++ : 9 - 11.11.2004 (3 days) Outlook (Short Course III) : Meetings and Delegation : 9.11.2004 (2 hours, afternoon) LabVIEW Intermediate II : 11 & 12.11.2004 (2 days) AutoCAD 2002 - niveau 1 : 11, 12, 18, 19.11.2004 (4 jours) C++ for Particle Physicists : 15 - 19.11.2004 (6 X 3 hours sessions) Word 2003 - niveau 1 : 22 & 23.11.2004 (2 jours) Introduction à ANSYS : 23 - 26.11.2004 (4 jours) CLEAN-2002 - Travailler en salle propre : 23.11.2004 (après-midi, cours gratuit) Project Planning with MS-Project : 25.11 & 2.12.2004 (2 days) PCAD Sch&eac...
14. Technical Training: Places available
CERN Multimedia
Davide Vitè
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: ELEC-2005 Autumn Term - Electronics applications in HEP experiments: 8.11 - 8.12.2005 (10 x 2h lectures) The CERN EDMS for Local Administrators: 15-16.11.2005 (2 days, free of charge) OUTLOOK (Short Course II) - Calendar, Tasks and Notes: 16.11.2005 (morning) Hands-On Object Oriented Design and Programming with C++ : 16 - 18.11.2005 (3 days) The CERN EDMS for Engineers: 17.11.2005 (1 day, free of charge) The Java Programming Language Level 1: 21 - 23.11.2005 (3 days) Hands-on Introduction to Python Programming: 28 - 30.11.2005 (3 days) OUTLOOK (Short Course III) - Meetings and Delegation: 30.11.2005 (morning) WORD 2003 (Short Course III) - HowTo... Work with long documents : 30.11.2005 (afternoon) FrontPage 2003 - niveau 2 : 5 - 6.12.2005 (2 jours) LabVIEW Application Development (Intermediate 1): 5 - 7.12.2005 (3 d...
15. Technical Training: Places available
CERN Multimedia
Davide Vitè
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: FileMaker - niveau 1 : 20 - 21.10.2005 (2 jours) ACCESS 2003 - Level 1 - ECDL M5: 24 - 25.10.2005 (2 days) Finite State Machines in the JCOP Framework : 25 - 27.10.2005 (3 days) EXCEL 2003 (Short Course III) - HowTo... Pivot tables: 26.10.2005 (morning) WORD 2003 (Short Course I) - HowTo... Work with AutoTasks: 26.10.2005 (afternoon) OUTLOOK (Short Course I) - E-mail: 2.11.2005 (morning) WORD 2003 (Short Course II) - HowTo... Mail merge: 2.11.2005 (afternoon) FrontPage 2003 - niveau 1 : 3 - 4.11.2005 (2 jours) AutoCAD 2006 - niveau 1 : 3, 4, 9, 10.11.2005 (4 jours) Joint PVSS JCOP Framework : 7 - 11.11.2005 (5 days) OUTLOOK (Short Course II) - Calendar, Tasks and Notes: 16.11.2005 (morning) Joint PVSS JCOP Framework : 21 - 25.11.2005 (5 days) OUTLOOK (Short Course III) - Meetings and Delegation: 30.11....
16. Technical Training: Places available
CERN Multimedia
Davide Vitè
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Joint PVSS JCOP Framework : 7 - 11.11.2005 (5 days) ELEC-2005 Autumn Term - Electronics applications in HEP experiments: 8.11 - 8.12.2005 (10 x 2h lectures) The EDMS-MTF in practice: 9.11.2005 (morning, free of charge) The CERN EDMS for Local Administrators: 15-16.11.2005 (2 days, free of charge) OUTLOOK (Short Course II) - Calendar, Tasks and Notes: 16.11.2005 (morning) Hands-On Object Oriented Design and Programming with C++ : 16 - 18.11.2005 (3 days) The CERN EDMS for Engineers: 17.11.2005 (1 day, free of charge) The Java Programming Language Level 1: 21 - 23.11.2005 (3 days) Hands-on Introduction to Python Programming: 28 - 30.11.2005 (3 days) OUTLOOK (Short Course III) - Meetings and Delegation: 30.11.2005 (morning) WORD 2003 (Short Course III) - HowTo... Work with long documents : 30.11.2005 (afternoon) Fron...
17. Technical Training: Places available
CERN Multimedia
Davide Vitè
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: OUTLOOK (Short Course III) - Meetings and Delegation: 30.11.2005 (morning) WORD 2003 (Short Course III) - HowTo... Work with long documents: 30.11.2005 (afternoon) FrontPage 2003 - niveau 2 : 5 - 6.12.2005 (2 jours) Introduction à ANSYS Classique : 6 - 9.12.2005 (4 jours) EXCEL 2003 (Short Course II) - HowTo... Format your worksheet for printing: 7.12.2005 (morning) AutoCAD - mise à jour d’AutoCAD 2002 à AutoCAD 2006 : 8.12.2005 (1 journée - pour les utilisateurs d’AutoCAD 2002) PCAD Schémas - Introduction : 8 - 9.12.2005 (2 jours) Finite State Machines in the JCOP Framework: 13 - 15.12.2005 (3 days) PCAD PCB - Introduction : 14 - 16.12.2005 (3 jours) ACCESS 2003 - Level 2 - ECDL AM5: 14 - 15.12.2005 (2 days) AutoCAD 2006 - niveau 1 : 19, 20, 24 & 25.1.2006 (4 jours) LabVIEW Basics I: 23 - 25.1.2006...
18. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Finite State Machines in the JCOP Framework: 26 - 28.9.2005 (3 days) WORD 2003 (Short Course IV) - HowTo... Work with master document: 28.9.2005 (morning) Joint PVSS JCOP Framework : 3 - 7.10.2005 (5 days) Introduction à Dreamweaver MX : 3 - 4.10.2005 (2 jours) Utilisation des fichiers PDF avec Acrobat 7.0 : 4.10.2005 (1 journée) LaTeX par la pratique : 4 - 6.10.2005 (3 matinées) PowerPoint 2003 (F) : 7.10.2005 (1 journée) FileMaker - niveau 1 : 20 - 21.10.2005 (2 jours) ACCESS 2003 - Level 1 - ECDL M5: 24 - 25.10.2005 (2 days) EXCEL 2003 (Short Course III) - HowTo... Pivot tables: 26.10.2005 (morning) WORD 2003 (Short Course I) - HowTo... Work with AutoTasks: 26.10.2005 (afternoon) OUTLOOK (Short Course I) - E-mail: 2.11.2005 (morning) WORD 2003 (Short Course II) - HowTo... Mail merge: 2.11....
19. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Utilisation des fichiers PDF avec ACROBAT 7.0 : 5.9.2005 (1 jour) LabVIEW Basics 1: 5 - 7.9.2005 (3 days) Introduction à Windows XP au CERN : 12.9.2005 (1 demi-journée) FrontPage 2003 - niveau 2 : 15 - 16.9.2005 (2 jours) AutoCAD 2005 - niveau 1 : 22, 23, 28, 29.9.2005 (4 jours) Finite State Machines in the JCOP Framework: 26 - 28.9.2005 (3 days) MAGNE-05 - Magnétisme pour l'électrotechnique : 27 - 29.9.2005 (3 jours) WORD 2003 (Short Course IV) - HowTo... Work with master document: 28.9.2005 (morning) Joint PVSS JCOP Framework : 3 - 7.10.2005 (5 days) ACCESS 2003 - Level 1 - ECDL M5: 24 - 25.10.2005 (2 days) EXCEL 2003 (Short Course III) - HowTo... Pivot tables: 26.10.2005 (morning) WORD 2003 (Short Course I) - HowTo... Work with AutoTasks: 26.10.2005 (afternoon) OUTLOOK (Short Course I) - E-mail: 2...
20. Technical Training: Places available
CERN Multimedia
Davide Vitè
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Introduction à Windows XP au CERN : 12.9.2005 (1 demi-journée) FrontPage 2003 - niveau 2 : 15 - 16.9.2005 (2 jours) Finite State Machines in the JCOP Framework: 26 - 28.9.2005 (3 days) MAGNE-05 - Magnétisme pour l'électrotechnique : 27 - 29.9.2005 (3 jours) WORD 2003 (Short Course IV) - HowTo... Work with master document: 28.9.2005 (morning) Joint PVSS JCOP Framework : 3 - 7.10.2005 (5 days) Introduction à Dreamweaver MX : 3 - 4.10.2005 (2 jours) Utilisation des fichiers PDF avec Acrobat 7.0 : 4.10.2005 (1 journée) LaTeX par la pratique : 4 - 6.10.2005 (3 matinées) PowerPoint 2003 (F) : 7.10.2005 (1 journée) ACCESS 2003 - Level 1 - ECDL M5: 24 - 25.10.2005 (2 days) EXCEL 2003 (Short Course III) - HowTo... Pivot tables: 26.10.2005 (morning) WORD 2003 (Short Course I) - HowTo... Work with Auto...
1. Technical Training: Places available
CERN Multimedia
Davide Vitè
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: The Java Programming Language Level 1: 21 - 23.11.2005 (3 days) OUTLOOK (Short Course III) - Meetings and Delegation: 30.11.2005 (morning) WORD 2003 (Short Course III) - HowTo... Work with long documents : 30.11.2005 (afternoon) FrontPage 2003 - niveau 2 : 5 - 6.12.2005 (2 jours) Introduction à ANSYS Classique : 6 - 9.12.2005 (4 jours) EXCEL 2003 (Short Course II) - HowTo... Format your worksheet for printing: 7.12.2005 (morning) PCAD Schémas - Introduction : 8 - 9.12.2005 (2 jours) ACCESS 2003 - Level 2 - ECDL AM5: 14 - 15.12.2005 (2 days) LabVIEW Basics I: 23 - 25.1.2006 (3 days) C++ Programming Advanced - Traps and Pitfalls: 24 - 27.1.2006 (4 days) LabVIEW Basics II: 26 - 27.1.2006 (2 days) Joint PVSS-JCOP Framework: 30.1 - 3.2.2006 (5 days, free of charge) Finite State Machines in the JCOP Framework : ...
2. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: ELEC-2005 - Spring Term: Integrated circuits and VLSI technology for physics: 1 - 17.3.2005 (6 x 2.5-hour lectures) C++ for Particle Physicists: 7 - 11.3.2005 (6 x 3-hour lectures) Joint PVSS JCOP Framework : 14 - 18.3.2005 (5 days) Oracle 9i: SQL: 14 -16.3.2005 (3 days) AXEL-2005; Introduction to Particle Accelerators : 14- 18.3.2005 (10 x 1 h lectures) ACCESS 2003 - Level 1: ECDL M5: 15 - 16.3.2005 (2 days) AutoCAD 2002 - niveau 1 : 15, 16, 21& 22.3.2005 (4 jours) FrontPage 2003 - niveau 2 : 21 & 22.3.2005 (2 jours) Hands-on Object-Oriented Design and Programming with C++ : 22 - 24.3.2005 (3 days) FileMaker - niveau 2 : 4 & 5.4.2005 (2 jours) Oracle 9i: Programming with PL/SQL: 4 - 6.4.2005 (3 days) EXCEL 2003 - niveau 2 : 11 & 12.4.2005 (2 jours) LabVIEW Intermedia...
3. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: LabVIEW base 2/basics 2 : 3 & 4.2.2005 (2 jours/2 days - langue à décider/language to be decided) Finite State Machines in the JCOP Framework: 1 - 3.2.2005 (3 days - free course) The JAVA Programming Language Level 2: 7 - 9.2.2005 (3 days) JAVA 2 Enterprise Edition - Part 1: WEB Applications: 21 & 22.2.2005 (2 days) FrontPage 2003 - niveau 1 : 21 & 22.2.2005 (2 jours) JAVA 2 Enterprise Edition - Part 2: Enterprise JavaBeans: 23 - 25.2.2005 (3 days) ELEC-2005 - Spring Term: Integrated circuits and VLSI technology for physics: 1 - 17.3.2005 (6 x 2.5-hour lectures) C++ for Particle Physicists: 7 - 11.3.2005 (6 x 3-hour lectures) Joint PVSS JCOP Framework : 14 - 18.3.2005 (5 days) AutoCAD 2002 - niveau 1 : 15, 16, 21 & 22.3.2005 (4 jours) FrontPage 20...
4. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: JAVA 2 Enterprise Edition - Part 1: WEB Applications: 21 & 22.2.2005 (2 days) FrontPage 2003 - niveau 1 : 21 & 22.2.2005 (2 jours) JAVA 2 Enterprise Edition - Part 2: Enterprise JavaBeans: 23 - 25.2.2005 (3 days) Utilisation des fichiers PDF avec ACROBAT 6.0 : 1.3.2005 (1 journée) ELEC-2005 - Spring Term: Integrated circuits and VLSI technology for physics: 1 - 17.3.2005 (6 x 2.5-hour lectures) C++ for Particle Physicists: 7 - 11.3.2005 (6 x 3-hour lectures) Joint PVSS JCOP Framework : 14 - 18.3.2005 (5 days) AutoCAD 2002 - niveau 1 : 15, 16, 21 & 22.3.2005 (4 jours) FrontPage 2003 - niveau 2 : 21 & 22.3.2005 (2 jours) FileMaker - niveau 2 : 4 & 5.4.2005 (2 jours) EXCEL 2003 - niveau 2 : 11 & 12.4.2005 (2 jours) LabVIEW intermediate 1: 11 - 13....
5. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: JAVA 2 Enterprise Edition - Part 1: WEB Applications: 21 & 22.2.2005 (2 days) FrontPage 2003 - niveau 1 : 21 & 22.2.2005 (2 jours) JAVA 2 Enterprise Edition - Part 2: Enterprise JavaBeans: 23 - 25.2.2005 (3 days) Utilisation des fichiers PDF avec ACROBAT 6.0 : 1.3.2005 (1 journée) Hands-on Object-Oriented Design and Programming with C++ : 1 - 3.3.2005 (3 days) ELEC-2005 - Spring Term: Integrated circuits and VLSI technology for physics: 1 - 17.3.2005 (6 x 2.5-hour lectures) C++ for Particle Physicists: 7 - 11.3.2005 (6 x 3-hour lectures) Joint PVSS JCOP Framework : 14 - 18.3.2005 (5 days) ACCESS 2003 - Level 1 : 15 - 16.3.2005 (2 days) AutoCAD 2002 - niveau 1 : 15, 16, 21& 22.3.2005 (4 jours) FrontPage 2003 - niveau 2 : 21 & 22.3.2005 (2 jours) FileMaker - ...
6. Technical Training: Places available
CERN Multimedia
Monique Duval
2005-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available on the following courses: Utilisation des fichiers PDF avec ACROBAT 6.0 : 1.3.2005 (1 journée) ELEC-2005 - Spring Term: Integrated circuits and VLSI technology for physics: 1 - 17.3.2005 (6 x 2.5-hour lectures) C++ for Particle Physicists: 7 - 11.3.2005 (6 x 3-hour lectures) Joint PVSS JCOP Framework : 14 - 18.3.2005 (5 days) AXEL-2005; Introduction to Particle Accelerators : 14- 18.3.2005 (10 x 1 h lectures) ACCESS 2003 - Level 1: ECDL M5: 15 - 16.3.2005 (2 days) AutoCAD 2002 - niveau 1 : 15, 16, 21& 22.3.2005 (4 jours) FrontPage 2003 - niveau 2 : 21 & 22.3.2005 (2 jours) Hands-on Object-Oriented Design and Programming with C++ : 22 - 24.3.2005 (3 days) FileMaker - niveau 2 : 4 & 5.4.2005 (2 jours) EXCEL 2003 - niveau 2 : 11 & 12.4.2005 (2 jours) LabVIEW Intermediate 1: 11 - 13.4.2005...
7. TECHNICAL TRAINING: PLACES AVAILABLE
CERN Multimedia
Monique Duval
2004-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: LabVIEW Basics 1 : 22 - 24.3.2004 (3 days) Oracle 9i : New Features for Developers : 22 - 24.3.2004 (3 days) Instructor-led WBTechT study or follow-up for Microsoft applications : 1.4.2004 (morning) FrontPage XP - niveau 1 : 5 & 6.4.2004 (2 jours) AutoCAD 2002 - niveau 1 : 19, 20.4 et 3, 4.5.2004 (4 jours) Oracle 8i/9i - Develop Web-based Applications with PL/SQL : 19 & 20.4.2004 (2 days) Introduction to ANSYS : 20 - 23.4.2004 (4 days) LabVIEW hands-on (E) : 20.4.2004 (afternoon, free of charge) FrontPage XP - niveau 2 : 26 & 27.4.2004 (2 jours) LabVIEW Base 2 : 6 & 7.5.2004 (2 jours) Word XP - niveau 1 : 10 & 11.5.2004 (2 jours) Word XP - niveau 2 : 24 & 25.5.2004 (2 jours) If you wish to participate in one of these courses, pleas...
8. Technical training: Places available
CERN Multimedia
Monique Duval
2004-01-01
The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: LabVIEW Intermediate I : 8 - 10.11.2004 (3 days) Instructor-led WTechT Study or Follow-up for Microsoft Applications : 9.11.2004 (morning) Outlook (Short Course III) : Meetings and Delegation : 9.11.2004 (2 hours, afternoon) LabVIEW Intermediate II : 11 & 12.11.2004 (2 days) AutoCAD 2002 - niveau 1 : 11, 12, 18, 19.11.2004 (4 jours) C++ for Particle Physicists : 15 - 19.11.2004 (6 X 3 hours sessions) Word 2003 - niveau 1 : 22 & 23.11.2004 (2 jours) Introduction à ANSYS : 23 - 26.11.2004 (4 jours) Project Planning with MS-Project : 25.11 & 2.12.2004 (2 days) Introduction to PERL 5 : 8 & 9.12.2004 (2 days) Advanced aspects of PERL 5 : 10.12.2004 (1 day) The JAVA Programming Language Level 1 : 11 & 12.1.2005 (2 days) Introduction...
9. An index of financial safety of China
Directory of Open Access Journals (Sweden)
Xiaojun Jia
2015-04-01
Full Text Available Purpose: This paper combines a synthetic index system from the variables and evaluates China’s financial safety through the change of indexes in a comprehensive way. First of all, it builds a financial industry evaluation index system composed of 25 indicators covering the operation of the financial industry and the external economic environment, and particularly takes into consideration factors which might trigger liquidity risks, such as off-balance-sheet business, interbank business and shadow banking; then it selects 10 indicators to conduct empirical analysis and identifies the indicator weights through principal component analysis; finally it combines the financial safety indexes through the linear weighted comprehensive evaluation model. Design/methodology/approach: Synthesis of indexes is made by constructing a proper comprehensive evaluation mathematical model, integrating a number of evaluation indexes into one comprehensive evaluation index and then obtaining the corresponding comprehensive evaluation results. In this paper, 10 indexes are selected to conduct empirical analysis, and the index weights are identified through principal component analysis; finally the financial safety indexes are combined through the linear weighted comprehensive evaluation model. Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. PCA was invented in 1901 and was later independently developed (and named) by Harold Hotelling in the 1930s. Findings: From 2003 to 2013 China’s financial safety indexes fluctuated. From 2003 to 2007 indexes rose, which indicates China’s financial safety status gradually improved; from 2007 to 2009 indexes declined, which indicates that, due to the impact of the subprime crisis, China’s financial safety status took a turn for the worse; from 2009 to 2012
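The weighting scheme described in the abstract — indicator weights from principal component analysis, then a linear weighted composite — can be sketched in pure Python. This is one common reading of "identify the indicator weight through principal component analysis" (absolute loadings of the first principal component, normalized to sum to one); the paper's exact scheme may differ, and the function names here are illustrative.

```python
import math

def pca_first_component_weights(X, iters=200):
    """Indicator weights from the first principal component.

    X is a list of rows (periods) by columns (indicators). The dominant
    eigenvector of the sample covariance matrix is found by power
    iteration; its absolute loadings, normalized to sum to 1, serve as
    indicator weights (an assumption, not the paper's exact recipe).
    """
    n, m = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(m)]
    # Sample covariance matrix of the indicators.
    C = [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in X) / (n - 1)
          for j in range(m)] for i in range(m)]
    v = [1.0] * m
    for _ in range(iters):  # power iteration toward the dominant eigenvector
        w = [sum(C[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    a = [abs(x) for x in v]
    s = sum(a)
    return [x / s for x in a]

def composite_index(X, weights):
    """Linear weighted composite: one index value per period."""
    return [sum(w * x for w, x in zip(weights, row)) for row in X]
```

With synthetic data whose first indicator carries most of the variance, the first indicator receives the larger weight, as expected.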
10. Estimating the Upcrossings Index
CERN Document Server
Sebastião, João Renato; Ferreira, Helena; Pereira, Luísa
2012-01-01
For stationary sequences, under general local and asymptotic dependence restrictions, any limiting point process for time-normalized upcrossings of high levels is a compound Poisson process, i.e., there is a clustering of high upcrossings, where the underlying Poisson points represent cluster positions and the multiplicities correspond to cluster sizes. For such classes of stationary sequences there exists the upcrossings index $\eta$, $0 \leq \eta \leq 1$, which is directly related to the extremal index $\theta$, $0 \leq \theta \leq 1$, for suitable high levels. In this paper we consider the problem of estimating the upcrossings index $\eta$ for a class of stationary sequences satisfying a mild oscillation restriction. For the proposed estimator, properties such as consistency and asymptotic normality are studied. Finally, the performance of the estimator is assessed through simulation studies for autoregressive processes and case studies in the fields of environment and finance.
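The raw statistic underlying the abstract — the number of upcrossings of a high level $u$ in a discrete-time sample, with an upcrossing at time $i$ whenever $X_i \leq u < X_{i+1}$ — can be sketched directly; the estimator of $\eta$ itself is more involved and is not reproduced here.

```python
def count_upcrossings(series, level):
    """Count upcrossings of `level`: indices i with x[i] <= level < x[i+1]."""
    return sum(1 for a, b in zip(series, series[1:]) if a <= level < b)

# Illustrative sample path: three upcrossings of the level 0.5.
x = [0.1, 0.8, 0.3, 1.2, 1.5, 0.2, 0.9]
print(count_upcrossings(x, 0.5))  # -> 3
```

Cluster positions and sizes of these upcrossings are what the compound Poisson limit describes.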
11. Path indexing for term retrieval
OpenAIRE
1992-01-01
Different methods for term retrieval in deduction systems have been introduced in the literature. This report reviews the three indexing techniques discrimination indexing, path indexing, and abstraction tree indexing. A formal approach to path indexing is presented, and algorithms as well as data structures of an existing implementation are discussed. Finally, experiments show that our implementation outperforms the implementation of path indexing in the OTTER theorem prover.
12. Decomposing the misery index: A dynamic approach
Directory of Open Access Journals (Sweden)
Ivan K. Cohen
2014-12-01
Full Text Available The misery index (the unweighted sum of unemployment and inflation rates) was probably the first attempt to develop a single statistic to measure the level of a population’s economic malaise. In this letter, we develop a dynamic approach to decompose the misery index using two basic relations of modern macroeconomics: the expectations-augmented Phillips curve and Okun’s law. Our reformulation of the misery index is closer in spirit to Okun’s idea. However, we are able to offer an improved version of the index, mainly based on output and unemployment. Specifically, this new Okun’s index measures the level of economic discomfort as a function of three key factors: (1) the misery index in the previous period; (2) the output gap in growth rate terms; and (3) cyclical unemployment. This dynamic approach differs substantially from the standard one utilised to develop the misery index, and allows us to obtain an index with five main interesting features: (1) it focuses on output, unemployment and inflation; (2) it considers only objective variables; (3) it allows a distinction between short-run and long-run phenomena; (4) it places more importance on output and unemployment rather than inflation; and (5) it weights recessions more than expansions.
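The classic statistic that the letter starts from is simple enough to state in code. The authors' dynamic decomposition (previous-period index, output-gap growth, cyclical unemployment) is not reproduced here, since its exact functional form is not given in the abstract; this is only the baseline definition.

```python
def misery_index(unemployment_rate, inflation_rate):
    """Classic (Okun) misery index: the unweighted sum of the
    unemployment and inflation rates, both in percent."""
    return unemployment_rate + inflation_rate

# e.g. 5% unemployment and 2.5% inflation give a misery index of 7.5.
print(misery_index(5.0, 2.5))  # -> 7.5
```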
13. User oriented perspective of ethnology indexing
Directory of Open Access Journals (Sweden)
Jerneja Hederih
1997-01-01
Full Text Available The first part of the article is based on an overview of the problems encountered by ethnologists and anthropologists in libraries, followed by a short overview of the special features of ethnological terminology. A discussion of different perspectives on indexing, with an emphasis on user-oriented ways of indexing, follows. The author discusses the exhaustiveness and depth of indexing, the choice of technical versus popular indexes, the consistency in indexing and searching, and the importance of being familiar with the users and their searching behaviour. The second part of the article deals with the same problem by means of the analysis of an inquiry including ethnologists and non-ethnologists, of the questions of the users of UKM, and of the comparison of indexes used by different libraries for indexing ethnological materials. Suggestions for a more efficient design of information systems, tips for librarians who index ethnological materials and for ethnologists who encounter problems in libraries, as well as for those who do not have an ethnological education but are looking for such information, are listed in the text and in the conclusions of the article.
14. Guidebook/index
Energy Technology Data Exchange (ETDEWEB)
1977-01-01
The Guidebook/Index introduces information dealing with the general rationale for energy conservation and deals with some of the definitions and concepts common to each of the subjects covered in the series of 10 booklets. The master index for the series is presented. Subjects covered are saving money in heating, cooling, and lighting; in process design and heat recovery; through production optimization; through combustion control; through steam and compressed air management; in transportation and delivery; through efficient people moving; in office practices; and through employee motivation and participation.
15. Indexing the etymological lexicographic systems
Directory of Open Access Journals (Sweden)
Volodymyr Shyrokov
2014-09-01
Full Text Available Indexing the etymological lexicographic systems The main problems and directions for the development of the etymological lexicographic systems in the digital environment are studied. The formal conceptual model of the lexicographic system for fundamental academic Etymological Dictionary of the Ukrainian Language (EDUL is developed. The lexicographic structure of the EDUL individual elements are developed and described. The EDUL metalanguage was studied and described. The formal model and technology of the EDUL parsing are worked out. That made it possible to convert automatically the EDUL text into the lexicographic database, which corresponds to the conceptual model of the lexicographic system. The conceptual foundations of instrumental tool to form the etymological dictionaries are developed to create the Virtual Lexicographic Laboratory «Etymological Dictionary of the Ukrainian Language», which was implemented with a modern approach to the real lexicographic array of the EDUL. That allowed to form the database of the EDUL multilingual index (about 250 languages in the automatic mode. This index is a basis of the seventh (final volume of the EDUL. The possibility of applying the developed models to other etymological dictionaries are studied. The conceptual foundations for integration of the etymological lexicographic systems are discussed.
16. LHC Report: astounding availability
CERN Multimedia
Andrea Apollonio for the LHC team
2016-01-01
The LHC is off to an excellent start in 2016, having already produced triple the luminosity of 2015. An important factor in the impressive performance so far this year is the unprecedented machine availability. LHC integrated luminosity in 2011, 2012, 2015 and 2016 and the prediction of the 2016 performance foreseen at the start of the year. Following the 2015-2016 end of year shutdown, the LHC restarted beam operation in March 2016. Between the restart and the first technical stop (TS1) in June, the LHC's beam intensity was successively increased, achieving operation with 2040 bunches per beam. The technical stop on 7-8 June was shortened to maximise the time available for luminosity production for the LHC experiments before the summer conferences. Following the technical stop, operation resumed and quickly returned to the performance levels previously achieved. Since then, the LHC has been running steadily with up to 2076 bunches per beam. Since the technical stop, a...
17. Schultz Index of Armchair Polyhex Nanotubes
Directory of Open Access Journals (Sweden)
Nafiseh Salehi
2008-10-01
Full Text Available The study of topological indices – graph invariants that can be used for describing and predicting physicochemical or pharmacological properties of organic compounds – is currently one of the most active research fields in chemical graph theory. In this paper we study the Schultz index and find a relation with the Wiener index of the armchair polyhex nanotubes TUV C6[2p; q]. An exact expression for Schultz index of this molecule is also found.
18. Fourth Zagreb index of Circumcoronene series of Benzenoid
Directory of Open Access Journals (Sweden)
2015-12-01
Full Text Available A topological index of a graph is a numeric quantity related to a structure of a molecule which is invariant under graph automorphism. Recently, Ghorbani and Hosseinzadeh introduced Fourth Zagreb index of graphs. In this paper we determine a closed formula of this new topological index of the famous Benzenoid family named Circumcoronene series of Benzenoid Hk.
19. Availability by Design
DEFF Research Database (Denmark)
Vigo, Roberto
In computer security, a Denial-of-Service (DoS) attack aims at making a resource unavailable. DoS attacks on systems of public concern occur increasingly and have become infamous on the Internet, where they have targeted major corporations and institutions, thus reaching the general public. There exist various practical techniques to face DoS attacks and mitigate their effects, yet we witness the success of many. The need for a renewed investigation of availability gains in relevance when considering that our life is more and more dominated by Cyber-Physical Systems (CPSs), large-scale networks of sensors that interact with the physical environment. CPSs are increasingly exploited in the realisation of critical infrastructure, from the power grid to healthcare, traffic control, and defence applications. Such systems are particularly prone to DoS attacks: in addition to classic...
20. Searching and Indexing Genomic Databases via Kernelization
Directory of Open Access Journals (Sweden)
Travis eGagie
2015-02-01
Full Text Available The rapid advance of DNA sequencing technologies has yielded databases of thousands of genomes. To search and index these databases effectively, it is important that we take advantage of the similarity between those genomes. Several authors have recently suggested searching or indexing only one reference genome and the parts of the other genomes where they differ. In this paper we survey the twenty-year history of this idea and discuss its relation to kernelization in parameterized complexity.
1. Position index preserving compression of text data
OpenAIRE
Akhtar, Nasim; Rashid, Mamunur; Islam, Shafiqul; Kashem, Mohammod Abul; Kolybanov, Cyrll Y.
2011-01-01
Data compression offers an attractive approach to reducing communication cost by using available bandwidth effectively. It also secures data during transmission due to its encoded form. In this paper an index-based, position-oriented lossless text compression called PIPC (Position Index Preserving Compression) is developed. In PIPC the position of the input word is denoted by ASCII code. The basic philosophy of the secure compression is to preprocess the text and transform it into some intermedia...
Science.gov (United States)
Davies, C. S.; Kruglyak, V. V.
2015-10-01
The wave solutions of the Landau-Lifshitz equation (spin waves) are characterized by some of the most complex and peculiar dispersion relations among all waves. For example, the spin-wave ("magnonic") dispersion can range from the parabolic law (typical for a quantum-mechanical electron) at short wavelengths to the nonanalytical linear type (typical for light and acoustic phonons) at long wavelengths. Moreover, the long-wavelength magnonic dispersion has a gap and is inherently anisotropic, being naturally negative for a range of relative orientations between the effective field and the spin-wave wave vector. Nonuniformities in the effective field and magnetization configurations enable the guiding and steering of spin waves in a deliberate manner and therefore represent landscapes of graded refractive index (graded magnonic index). By analogy to the fields of graded-index photonics and transformation optics, the studies of spin waves in graded magnonic landscapes can be united under the umbrella of the graded-index magnonics theme and are reviewed here with focus on the challenges and opportunities ahead of this exciting research direction.
3. $Local^{3}$ Index Theorem
CERN Document Server
Teleman, Nicolae
2011-01-01
$Local^{3}$ Index Theorem means $Local(Local(Local\; Index\; Theorem))$. The $Local\; Index\; Theorem$ is the Connes-Moscovici local index theorem \cite{Connes-Moscovici1}, \cite{Connes-Moscovici2}. The second "Local" refers to the cyclic homology localised to a certain separable subring of the ground algebra, while the last one refers to Alexander-Spanier type cyclic homology. The Connes-Moscovici work is based on the operator $R(A) = \mathbf{P} - \mathbf{e}$ associated to the elliptic pseudo-differential operator $A$ on the smooth manifold $M$, where $\mathbf{P}$, $\mathbf{e}$ are idempotents, see \cite{Connes-Moscovici1}, pg. 353. The operator $R(A)$ has two main merits: it is a smoothing operator and its distributional kernel is situated in an arbitrarily small neighbourhood of the diagonal in $M \times M$. The operator $R(A)$ has also two setbacks: i) it is not an idempotent (and therefore it does not have a genuine Connes-Chern character); ii) even if it were an idempotent, its Connes-Chern character ...
4. Nitrate Leaching Index
Science.gov (United States)
The Nitrate Leaching Index is a rapid assessment tool that evaluates nitrate (NO3) leaching potential based on basic soil and climate information. It is the basis for many nutrient management planning efforts, but it has considerable limitations because of : 1) an oversimplification of the processes...
5. Indexing Moving Points
DEFF Research Database (Denmark)
Agarwal, Pankaj K.; Arge, Lars Allan; Erickson, Jeff
2003-01-01
We propose three indexing schemes for storing a set S of N points in the plane, each moving along a linear trajectory, so that any query of the following form can be answered quickly: Given a rectangle R and a real value t, report all K points of S that lie inside R at time t. We first present an indexing structure that, for any given constant ε > 0, uses O(N/B) disk blocks and answers a query in O((N/B)^{1/2+ε} + K/B) I/Os, where B is the block size. It can also report all the points of S that lie inside R during a given time interval. A point can be inserted or deleted, or the trajectory of a point can be changed, in O(log_B^2 N) I/Os. Next, we present a general approach that improves the query time if the queries arrive in chronological order, by allowing the index to evolve over time. We obtain a tradeoff between the query time and the number of times the index needs to be updated as the points move. We...
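The query type described above (rectangle R, time t, points on linear trajectories) can be stated as a naive O(N) scan. This is only a baseline that fixes the query semantics, not the paper's external-memory index structures.

```python
def query_moving_points(points, rect, t):
    """Report indices of points inside an axis-aligned rectangle at time t.

    points: list of (x0, y0, vx, vy) linear trajectories, p(t) = p0 + v*t.
    rect:   (xmin, ymin, xmax, ymax).
    Brute-force O(N) baseline; the indexing schemes in the paper answer
    the same query in O((N/B)^{1/2+eps} + K/B) I/Os.
    """
    xmin, ymin, xmax, ymax = rect
    hits = []
    for i, (x0, y0, vx, vy) in enumerate(points):
        x, y = x0 + vx * t, y0 + vy * t  # position at query time t
        if xmin <= x <= xmax and ymin <= y <= ymax:
            hits.append(i)
    return hits

pts = [(0, 0, 1, 0), (5, 5, 0, 0), (10, 0, -1, 0)]
print(query_moving_points(pts, (1, -1, 4, 1), 2))  # -> [0]
```

At t = 2 only the first point, at position (2, 0), lies inside the rectangle; at t = 7 the third point has moved into it instead.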
6. The Misery Index.
Science.gov (United States)
Bracey, Gerald W.
2000-01-01
U.S. taxpayers score lower on the "Forbes" Misery Index than taxpayers of other industrialized nations. A recent report concludes that public-school students challenge their schools more than private-school counterparts. Low birth weight and demographic factors (gender, poverty, and race) affect Florida's burgeoning special-education placements.…
7. A Tourism Conditions Index
NARCIS (Netherlands)
C-L. Chang (Chia-Lin); H-K. Hsu (Hui-Kuang); M.J. McAleer (Michael)
2014-01-01
Abstract: This paper uses monthly data from April 2005 to August 2013 for Taiwan to propose a novel tourism indicator, namely the Tourism Conditions Index (TCI). TCI accounts for the spillover weights based on the Granger causality test and estimates of the multivariate BEKK model...
8. Index for Inclusion
Science.gov (United States)
Smith, Allister
2005-01-01
Index for Inclusion is a programme to assist in developing learning and participation in schools. It was written by Tony Booth and Mel Ainscow from the Centre for Studies on Inclusive Education, UK. Central Normal School was pleased to have the opportunity to trial this programme.
9. Indexical Hybrid Tense Logic
DEFF Research Database (Denmark)
Blackburn, Patrick Rowan; Jørgensen, Klaus Frovin
2012-01-01
In this paper we explore the logic of now, yesterday, today and tomorrow by combining the semantic approach to indexicality pioneered by Hans Kamp [9] and refined by David Kaplan [10] with hybrid tense logic. We first introduce a special now nominal (our @now corresponds to Kamp’s original now...
10. Available transmission capacity assessment
Directory of Open Access Journals (Sweden)
Škokljev Ivan
2012-01-01
Full Text Available Effective power system operation requires the analysis of vast amounts of information. Power market activities expose power transmission networks to high-level power transactions that threaten normal, secure operation of the power system. When there are service requests for a specific sink/source pair in a transmission system, the transmission system operator (TSO) must allocate the available transfer capacity (ATC). It is common that ATC has a single numerical value. Additionally, the ATC must be calculated for the base-case configuration of the system, while generation dispatch and topology remain unchanged during the calculation. Posting ATC on the internet should benefit prospective users by aiding them in formulating their requests. However, a single numerical value of ATC offers little prospect for analysis, planning, what-if combinations, etc. A symbolic approach to the power flow problem (DC power flow) and ATC performs a numerical computation only at the very end, whilst the calculation beforehand is carried out by using symbols for the general topology of the electrical network. Qualitative analysis of the ATC using only qualitative values, such as increase, decrease or no change, offers some new insights into ATC evaluation, multiple-transaction evaluation, the value of counter-flows and their impact, etc. Symbolic analysis in this paper is performed after the execution of the linear, symbolic DC power flow. As control variables, the mathematical model comprises linear security constraints, ATC, PTDFs and transactions. The aim is to perform an ATC sensitivity study on a five-node/seven-line transmission network used for zonal market activity tests. A relatively complicated environment with twenty possible bilateral transactions is observed.
11. TICKETS ARE NOW AVAILABLE!
CERN Multimedia
2000-01-01
Replay of the Rudra-Béjart Ballet for the CERN Staff on Tuesday 5 December 2000 at 8.00 pm sharp at the Geneva ARENA This private performance will be given for you. It will last longer than the original performance at CERN: 1 hour 20 instead of 35 minutes. I encourage you all to attend this performance-bring in great numbers yourselves, members of your family, and your friends. 2,020 places are available, for which tickets will be obtained from Monday 27 November at the Staff Association Secretariat, whom I sincerely thank for having kindly accepted to distribute the tickets. To regulate the distribution, a contribution of 5 francs will be asked for each ticket: this sum is symbolic, as I intend this ballet-representation foremost as a recognition by CERN of its gratitude to you. Luciano Maiani Director-General Dancing, &n...
12. Technical training - Places available
CERN Multimedia
2012-01-01
If you would like more information on a course, or for any other inquiry/suggestions, please contact Technical.Training@cern.ch Valeria Perez Reale, Learning Specialist, Technical Programme Coordinator (Tel.: 62424) Eva Stern and Elise Romero, Technical Training Administration (Tel.: 74924) HR Department Electronic Design Next Session Duration Language Availability Comprehensive VHDL for FPGA Design 08-Oct-12 to 12-Oct-12 5 days English 4 places Electrostatique / Protection ESD 28-Sep-12 to 28-Sep-12 3 hours French 25 places Impacts de la suppression du plomb (RoHS) en électronique 26-Oct-12 to 26-Oct-12 8 hours French 14 places Introduction to VHDL 10-Oct-12 to 11-Oct-12 2 days English 9 places LabVIEW Real Time and FPGA 13-Nov-12 to 16-Nov-12 5 days French 5 places LabVIEW for Experts 24-Sep-12 to 28-Sep-12 5 days English 6 places LabVIEW for beginners 15-Oct-12 to 17-...
13. Technical Training - Places available
CERN Multimedia
Monique Duval
2003-01-01
Places are available in the following courses: HeREF-2003 : Techniques de la réfrigération Hélium : 6 - 10.10.2003 (7 demi-journées, cours en français avec support en anglais) The Java Programming Language Level 1: 6 - 7.10.2003 (2 days) Java 2 Enterprise Edition - Part 2: Enterprise JavaBeans: 8 - 10.10.2003 (3 days) FileMaker - niveau 1 : 9 & 10.10.03 (2 jours) EXCEL 2000 - niveau 1 : 20 & 22.10.03 (2 jours) AutoCAD 2002 - niveau 1 : 20, 21, 27, 28.10.03 (4 jours) CLEAN-2002 : Working in a Cleanroom: 23.10.03 (half day, free of charge) Plannification de projet avec MS-Project/ Project Scheduling with MS-Project : 2 sessions: 23.10 & 4.11.03 (2 jours/2 days and 18 &25.11.03 langue à définir/language to be defined) AutoCAD 2002 - Level 1: 3, 4, 12, 13.11.03 (4 days) Introduction to Pspice: 4.11.03p.m. (half-day) AutoCAD 2002 - niveau 2 : 10 & 11.11.03 (2 jours) ACCESS 2000 - niveau 1 : 13 & 14.11.03 (2 jours) A...
14. Just In Time Indexing
OpenAIRE
Mitra, Pinaki; Sundaram, Girish; PS, Sreedish
2013-01-01
One of the major challenges being faced by Database managers today is to manage the performance of complex SQL queries which are dynamic in nature. Since it is not possible to tune each and every query because of its dynamic nature, there is a definite possibility that these queries may cause serious database performance issues if left alone. Conventional indexes are useful only for those queries which are frequently executed or those columns which are frequently joined in SQL queries. This p...
15. The e-index, complementing the h-index for excess citations.
Directory of Open Access Journals (Sweden)
Chun-Ting Zhang
Full Text Available BACKGROUND: The h-index has already been used by major citation databases to evaluate the academic performance of individual scientists. Although effective and simple, the h-index suffers from some drawbacks that limit its use in accurately and fairly comparing the scientific output of different researchers. These drawbacks include information loss and low resolution: the former refers to the fact that, in addition to the h^2 citations for papers in the h-core, excess citations are completely ignored, whereas the latter means that it is common for a group of researchers to have an identical h-index. METHODOLOGY/PRINCIPAL FINDINGS: To solve these problems, I here propose the e-index, where e^2 represents the ignored excess citations, in addition to the h^2 citations for h-core papers. Citation information can be completely depicted by using the h-index together with the e-index, which are independent of each other. Some other h-type indices, such as a and R, are h-dependent, have information redundancy with h, and therefore, when used together with h, mask the real differences in excess citations of different researchers. CONCLUSIONS/SIGNIFICANCE: Although simple, the e-index is a necessary h-index complement, especially for evaluating highly cited scientists or for precisely comparing the scientific output of a group of scientists having an identical h-index.
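The arithmetic described in the abstract can be sketched directly: the h-core's total citations split into the h^2 counted by the h-index plus the e^2 excess citations, so h^2 + e^2 recovers the core total. A minimal sketch (the function name is illustrative):

```python
import math

def h_and_e_index(citations):
    """Return (h, e) from a list of per-paper citation counts.

    h is the largest h such that at least h papers have >= h citations;
    e = sqrt(sum of h-core citations - h^2), the excess-citation index.
    """
    cits = sorted(citations, reverse=True)
    # With cits sorted descending, c_i >= i holds for exactly the first h ranks.
    h = sum(1 for i, c in enumerate(cits, start=1) if c >= i)
    e_squared = sum(cits[:h]) - h * h
    return h, math.sqrt(e_squared)

# Example: citations [10, 8, 5, 4, 3] give h = 4 and e^2 = 27 - 16 = 11.
print(h_and_e_index([10, 8, 5, 4, 3]))
```

Two researchers with the same h but different core citation totals are separated by e, which is the point of the complement.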
16. MATRIX BASED INDEXING TECHNIQUE FOR VIDEO DATA
Directory of Open Access Journals (Sweden)
Devarj Saravanan
2013-01-01
Full Text Available With the increasing usage of media, video plays a central role as it supports various applications. Video is a particular medium which contains a complex collection of objects such as audio, motion, text, colour and pictures. Due to the rapid growth of this information, a video indexing process is mandatory for fast and effective retrieval. Many current indexing techniques fail to extract the needed image from the stored data set based on the user's query, so urgent attention in the field of video indexing and image retrieval is the need of the hour. Here a new matrix-based indexing technique for image retrieval is proposed. The proposed method provides better results, as the experiments demonstrate.
17. Growth Index after Planck
CERN Document Server
Xu, Lixin
2013-01-01
To investigate possible deviations from the standard $\Lambda$CDM model and Einstein's gravity theory from a dynamical perspective, the growth index $\gamma_L$ was proposed. Recently, thanks to the measurement of the cosmic growth rate via the redshift-space distortion, one can understand the evolution of the density contrast through $f\sigma_8(z)$, where $f(z)=d\ln \delta/d \ln a$ is the growth rate of matter and $\sigma_8(z)$ is the rms amplitude of the density contrast $\delta$ at the comoving $8h^{-1}$ Mpc scale. In this paper, we use the redshift-space distortion data points to investigate the growth index on the basis of Einstein's gravity theory and a modified gravity theory under the assumption $f=\Omega_m(a)^{\gamma_L}$. To fix the background evolution, the cosmic observational data points from the type Ia supernovae SNLS3, the cosmic microwave background radiation from Planck and baryon acoustic oscillations are used. Via the Markov Chain Monte Carlo method, the $\gamma_L$ values were obta...
18. Model and Calculation of Container Port Logistics Enterprises Efficiency Indexes
Directory of Open Access Journals (Sweden)
Xiao Hong
2013-04-01
Full Text Available The throughput of China’s container ports is growing fast, but the earnings of inland port enterprises are not so good. Firstly, the initial efficiency evaluation indexes of port logistics are reduced and screened by a rough set model, and then the logistics performance index weights are assigned by the rough totalitarian calculation method. As well, the indexes are ranked and the important indexes are picked out by combining with the ABC management method. Thus the port logistics enterprises can monitor the key indexes to reduce cost and improve the efficiency of logistics operations.
19. 2013 Traffic Safety Culture Index
Science.gov (United States)
... death in the United States. 2013 Traffic Safety Culture Index (January 2014). About the Sponsor: AAA Foundation ...
20. An improved molecular connectivity index
Institute of Scientific and Technical Information of China (English)
李新华; 俞庆森; 朱龙观
2000-01-01
Through modification of the delta values of the molecular connectivity indexes, and by effectively connecting quantum chemistry with the topology method, the molecular connectivity indexes are converted into quantum-topology indexes. The modified indexes not only keep all the information obtained from the original molecular connectivity method but also have their own virtues in application, and at the same time make up for some disadvantages of the quantum and molecular connectivity methods.
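For background, the classical (unmodified) Randić molecular connectivity index that such work builds on is $\chi = \sum_{uv \in E} (d_u d_v)^{-1/2}$, summing over edges of the molecular graph with vertex degrees $d_u$, $d_v$. The sketch below computes this baseline index only, not the authors' quantum-topology modification.

```python
import math

def randic_index(edges):
    """Classical Randic connectivity index of a simple graph (edge list)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # Sum of 1/sqrt(d_u * d_v) over all edges uv.
    return sum(1.0 / math.sqrt(deg[u] * deg[v]) for u, v in edges)

# Path on three vertices: two edges with degree product 1*2, so chi = sqrt(2).
print(randic_index([(1, 2), (2, 3)]))
```

Modified indexes of the kind described above replace the integer degrees with quantum-chemically informed delta values, keeping the same summation structure.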
1. Cacti with Extremal PI Index
Directory of Open Access Journals (Sweden)
Chunxiang Wang
2016-12-01
Full Text Available The vertex PI index $PI(G)=\sum_{xy \in E(G)} [n_{xy}(x)+n_{xy}(y)]$ is a distance-based molecular structure descriptor, where $n_{xy}(x)$ denotes the number of vertices which are closer to the vertex $x$ than to the vertex $y$; such descriptors have seen considerable research in computational chemistry dating back to Harold Wiener in 1947. A connected graph is a cactus if any two of its cycles have at most one common vertex. In this paper, we completely determine the extremal graphs with the greatest and smallest vertex PI indices among all cacti with a fixed number of vertices. As a consequence, we obtain the sharp bounds with corresponding extremal cacti and extend a known result.
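The definition can be checked by brute force on small graphs using BFS distances; this is only a sketch of the index itself, not of the paper's extremal analysis for cacti.

```python
from collections import deque

def vertex_pi_index(adj):
    """Vertex PI index: sum over edges xy of n_xy(x) + n_xy(y),
    where n_xy(x) counts vertices strictly closer to x than to y.
    adj: adjacency dict {vertex: [neighbours]} of a connected graph."""
    def dist_from(s):
        d = {s: 0}
        q = deque([s])
        while q:  # breadth-first search for shortest-path distances
            u = q.popleft()
            for w in adj[u]:
                if w not in d:
                    d[w] = d[u] + 1
                    q.append(w)
        return d

    dist = {v: dist_from(v) for v in adj}
    total, seen = 0, set()
    for x in adj:
        for y in adj[x]:
            if (y, x) in seen:  # count each undirected edge once
                continue
            seen.add((x, y))
            n_x = sum(1 for v in adj if dist[x][v] < dist[y][v])
            n_y = sum(1 for v in adj if dist[y][v] < dist[x][v])
            total += n_x + n_y
    return total

# Path on three vertices: each edge contributes 1 + 2, so PI = 6.
print(vertex_pi_index({1: [2], 2: [1, 3], 3: [2]}))  # -> 6
```

Vertices equidistant from both endpoints are counted for neither side, which is what makes the index sensitive to cycles of even length.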
2. EMOTICON INDEXICALITY: DIGITAL MEDIA PRACTICES
Directory of Open Access Journals (Sweden)
2015-11-01
Full Text Available Emoticons represent the best materialization of the particular aspects that the digital language features. The use of emoticons, on the other hand, has been subjected to various kinds of analyses which have singled out their joint representation of both oral and written language. This article proposes, using the indexical sign theory, to understand emoticons not simply as a combination of other types of language but as the innovative form of the digital language. Looking more at the poietics of emoticon use and less at its poetics, the article distinguishes among types of usage depending on types of texts and, at the same time, between two categories of digital-language practice consumers who understand emoticons rather differently in terms of coding and usage.
3. Activated sludge inhibition capacity index
Directory of Open Access Journals (Sweden)
V. Surerus
2014-06-01
Full Text Available Toxic compounds in sewage or industrial wastewater may inhibit the biological activity of activated sludge, impairing the treatment process. This paper evaluates the Inhibition Capacity Index (ICI) for the assessment of activated sludge in the presence of toxicants. In this study, activated sludge was obtained from industrial treatment plants and was also synthetically produced. Continuous respirometric measurements were carried out in a reactor, and the oxygen uptake rate profile obtained was used to evaluate the impact of inhibiting toxicants, such as dissolved copper, phenol, sodium alkylbenzene sulfonate and amoxicillin, on activated sludge. The results indicate that the ICI is an efficient tool to quantify intoxication capacity. The activated sludge from the pharmaceutical industry showed higher resistance than the sludge from other sources, since toxicants are widely discharged into its biological treatment system. The ICI range was from 58 to 81% when compared to the synthetic effluent with no toxic substances.
4. Indexing Depth and Retrieval Effectiveness
Science.gov (United States)
Seely, Barbara J.
1972-01-01
There are six major studies of the effect of indexing depth on retrieval performance. They differ in purpose, methodology, measures, indexing language, field of study, and data base--nevertheless, all have found depth of indexing to have the same effect upon information retrieval. (13 references) (Author/NH)
5. The Hosoya index and the Merrifield-Simmons index of some graphs
Directory of Open Access Journals (Sweden)
2012-12-01
Full Text Available The Hosoya index and the Merrifield-Simmons index are two types of graph invariants used in mathematical chemistry. In this paper, we give formulas for computing these indices for some classes of corona products and links of two graphs. Furthermore, we obtain exact formulas for the Hosoya and Merrifield-Simmons indices of bicyclic graphs, caterpillars and dual stars.
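Both invariants can be checked directly from their definitions on small graphs: the Hosoya index counts independent edge sets (matchings) and the Merrifield-Simmons index counts independent vertex sets, each including the empty set. A minimal brute-force sketch (the path graph P4 is an illustrative choice, not taken from the paper):

```python
from itertools import combinations

def hosoya_index(vertices, edges):
    """Count independent edge sets (matchings), including the empty set."""
    count = 0
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            endpoints = [v for e in subset for v in e]
            if len(endpoints) == len(set(endpoints)):  # no shared endpoints
                count += 1
    return count

def merrifield_simmons_index(vertices, edges):
    """Count independent vertex sets, including the empty set."""
    edge_set = {frozenset(e) for e in edges}
    count = 0
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            if all(frozenset(p) not in edge_set for p in combinations(subset, 2)):
                count += 1
    return count

# Path graph P4: 1-2-3-4
vs = [1, 2, 3, 4]
es = [(1, 2), (2, 3), (3, 4)]
print(hosoya_index(vs, es))              # 5  (for paths, Z(P_n) = F(n+1))
print(merrifield_simmons_index(vs, es))  # 8  (for paths, sigma(P_n) = F(n+2))
```

The exponential brute force is only for checking small cases; the closed formulas of the paper are what make larger families tractable.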
6. Life quality index revisited
DEFF Research Database (Denmark)
Ditlevsen, Ove Dalager
2004-01-01
The derivation of the life quality index (LQI) is revisited for a revision. This revision takes into account the unpaid but necessary work time needed to stay alive in clean and healthy conditions, to be fit for effective wealth-producing work, and to enjoy free time. Dimension analysis...... consistency problems with the standard power-function expression of the LQI are pointed out. It is emphasized that the combination coefficient in the convex differential combination between the relative differential of the gross domestic product per capita and the relative differential of the expected life...... at birth should not vary between countries. Finally the distributional assumptions are relaxed as compared to the assumptions made in an earlier work by the author. These assumptions concern the calculation of the life expectancy change due to the removal of an accident source. Moreover a simple public...
7. Automated Water Extraction Index
DEFF Research Database (Denmark)
Feyisa, Gudina Legese; Meilby, Henrik; Fensholt, Rasmus
2014-01-01
Classifying surface cover types and analyzing changes are among the most common applications of remote sensing. One of the most basic classification tasks is to distinguish water bodies from dry land surfaces. Landsat imagery is among the most widely used sources of data in remote sensing of water...... resources; and although several techniques of surface water extraction using Landsat data are described in the literature, their application is constrained by low accuracy in various situations. Besides, with the use of techniques such as single band thresholding and two-band indices, identifying...... an appropriate threshold yielding the highest possible accuracy is a challenging and time consuming task, as threshold values vary with location and time of image acquisition. The purpose of this study was therefore to devise an index that consistently improves water extraction accuracy in the presence...
8. Overnight Index Rate: Model, calibration and simulation
Directory of Open Access Journals (Sweden)
Olga Yashkir
2014-12-01
Full Text Available In this study, the extended Overnight Index Rate (OIR) model is presented. The fitting function for the probability distribution of the OIR daily returns is based on three different Gaussian distributions, which model the narrow central peak and the wide fat-tailed component. The calibration algorithm for the model is developed and investigated using historical OIR data.
9. Solar index generation and delivery
Energy Technology Data Exchange (ETDEWEB)
Lantz, L.J.
1980-01-01
The Solar Index, more completely the Service Hot Water Solar Index, was conceptualized during the spring of 1978. Its purpose was to enhance public awareness of solar energy usability. Basically, the Solar Index represents the percentage of energy that solar would provide in order to heat an 80-gallon service hot water load for a given location and day. The Index is computed using SOLCOST, a computer program which also has applications to space heating, cooling, and heat pump systems and which supplies economic analyses for such solar energy systems. The Index is generated daily for approximately 68 geographic locations in the country. The definition of the Index, how the project came to be, what it is at present, and a plan for the future are described. Also presented are the models used for the generation of the Index, a discussion of the primary tool of implementation (the SOLCOST program), and future efforts.
10. Rankings Scientists, Journals and Countries using h-Index
Directory of Open Access Journals (Sweden)
Gyula Mester
2016-01-01
Full Text Available Indexes in scientometrics are based on citations. However, in contrast to the journal impact factor, which gives only a ranking of scientific journals, indexes in scientometrics are suitable for ranking scientists, scientific journals and countries. In this paper the h-index, the h5-index, the world ranking of the top 25 Highly Cited Researchers (h > 100), and the ranking of 25 scientists at Hungarian institutions according to their Google Scholar Citations public profiles are considered. The h5-index is applied to compile a list of the top 20 publication venues (journals and proceedings) in the field of Robotics. A world ranking of the best 50 countries according to the h-index in 2014 is also given. Data are obtained from the Scimago portal.
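The h-index underlying these rankings has a simple definition: the largest h such that the author (or journal, or country) has at least h items with at least h citations each. A minimal sketch, with hypothetical citation counts:

```python
def h_index(citations):
    """h = largest h such that at least h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:   # the paper at this rank still "supports" h = rank
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
```

The h5-index mentioned above applies the same rule restricted to items published in the last five years.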
11. Clone-based Data Index in Cloud Storage Systems
Directory of Open Access Journals (Sweden)
He Jing
2016-01-01
Full Text Available Storage systems have been challenged by the development of cloud computing. Traditional data indexes cannot satisfy the requirements of cloud computing because of the huge index volumes and the need for quick response times. Meanwhile, because of the increasing size of data indexes and their dynamic characteristics, previous approaches, which rebuild the index or fully back up the index before the data changes, cannot satisfy the needs of today's big-data indexing. To solve these problems, we propose a double-layer index structure that overcomes the throughput limitation of a single-point server. Then, a clone-based B+ tree structure is proposed to achieve high performance and adapt to dynamic environments. The experimental results show that our clone-based solution has high efficiency.
12. Review of methods to derive a Polar Cap (PC) index.
Science.gov (United States)
Stauning, Peter
2016-07-01
Since a Polar Cap (PC) index was introduced in 1985, several different methods have been used to derive index values. Basically, the northern (PCN) and southern (PCS) indices are based on geomagnetic recordings at Qaanaaq (Thule) and Vostok, respectively. However, different derivation methods can give index values differing by more than a factor of 2. The PC indices are used, among others, in scientific analyses to link solar wind conditions to relevant geophysical effects and in forecast efforts to establish numerical criteria for imminent risk of geomagnetic storms and substorms. Thus, it is unfortunate that several different versions of the PC index have been in use, often without specifically mentioning the index version being used or without ensuring that proper documentation and specification of the derivation method is available. The presentation shall briefly describe the basic calculation of a Polar Cap index and point specifically to the differences between the derivation methods and to the consequences for the index values.
13. Notas para los Colaboradores del Index Botanicorum
Directory of Open Access Journals (Sweden)
Verdoorn Frans
1946-12-01
Full Text Available THE INDEX BOTANICORUM: In discussing the need for a biographical dictionary of the botanists of the world and of all times, JAMES BRITTEN (J. Bot. 39: 394. 1901) states: "This would form a handy and useful compendium not only of botanical biography but of botanical research, and would be of incalculable value to the historian and the student." The following article discusses the guidelines that contributors to the Index Botanicorum should follow when writing for that dictionary.
14. On Wiener index of graph complements
Directory of Open Access Journals (Sweden)
Jaisankar Senbagamalar
2014-06-01
Full Text Available Let $G$ be an $(n,m)$-graph. We say that $G$ has property $(\ast)$ if for every pair of its adjacent vertices $x$ and $y$, there exists a vertex $z$ such that $z$ is not adjacent to either $x$ or $y$. If the graph $G$ has property $(\ast)$, then its complement $\overline{G}$ is connected, has diameter 2, and its Wiener index is equal to $\binom{n}{2}+m$, i.e., the Wiener index is insensitive to any other structural details of the graph $G$. We characterize numerous classes of graphs possessing property $(\ast)$, among which are trees, regular, and unicyclic graphs.
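The stated formula is easy to check computationally: for a graph with property (*), the Wiener index of the complement should equal binom(n,2) + m. A small sketch using breadth-first search, with the path P5 as an illustrative tree having the property (not an example taken from the paper):

```python
from collections import deque
from itertools import combinations

def wiener_index(n, edges):
    """Sum of shortest-path distances over all unordered vertex pairs (via BFS)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    total = 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
    return total // 2  # each pair was counted from both endpoints

n = 5
path_edges = [(0, 1), (1, 2), (2, 3), (3, 4)]  # P5, a tree with property (*)
complement = [e for e in combinations(range(n), 2) if e not in path_edges]
# binom(5, 2) + m = 10 + 4 = 14
print(wiener_index(n, complement))  # 14
```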
15. Visual indexing and retrieval
Directory of Open Access Journals (Sweden)
2014-06-01
Full Text Available This book includes a preface, a table of contents, six informative chapters and extensive references. In the preface, the editors claim that with the advent of social networks, vast amounts of visual information are available to end-users, who therefore need innovative and fruitful methods for content understanding, retrieval and classification.
16. Generalizations of Wiener polarity index and terminal Wiener index
CERN Document Server
Ilic, Aleksandar
2011-01-01
In theoretical chemistry, distance-based molecular structure descriptors are used for modeling physical, pharmacologic, biological and other properties of chemical compounds. We introduce a generalized Wiener polarity index $W_k(G)$ as the number of unordered pairs of vertices $\{u, v\}$ of $G$ such that the shortest distance $d(u, v)$ between $u$ and $v$ is $k$. For $k = 3$, we get the standard Wiener polarity index. Furthermore, we generalize the terminal Wiener index $TW_k(G)$ as the sum of distances between all pairs of vertices of degree $k$. For $k = 1$, we get the standard terminal Wiener index. In this paper we describe a linear time algorithm for computing these indices for trees and partial cubes, and characterize the extremal trees maximizing the generalized Wiener polarity index and the generalized terminal Wiener index among all trees of given order $n$.
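The generalized index W_k(G) defined above amounts to counting, for each unordered vertex pair, whether its shortest-path distance equals k. A short BFS-based sketch (the path P4 is an illustrative choice, not from the paper; the linear-time algorithm for trees is not reproduced here):

```python
from collections import deque

def wiener_polarity_k(n, edges, k):
    """Number of unordered vertex pairs {u, v} with shortest distance exactly k."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    count = 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        # count each pair once by requiring v > s
        count += sum(1 for v, d in dist.items() if v > s and d == k)
    return count

edges = [(0, 1), (1, 2), (2, 3)]  # path P4
print(wiener_polarity_k(4, edges, 3))  # 1  (k = 3: standard Wiener polarity index)
print(wiener_polarity_k(4, edges, 2))  # 2
```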
17. PR-Index: Using the h-Index and PageRank for Determining True Impact.
Science.gov (United States)
Gao, Chao; Wang, Zhen; Li, Xianghua; Zhang, Zili; Zeng, Wei
2016-01-01
Several technical indicators have been proposed to assess the impact of authors and institutions. Here, we combine the h-index and the PageRank algorithm to do away with some of the individual limitations of these two indices. Most importantly, we aim to take into account value differences between citations, evaluating the citation sources by defining the h-index using the PageRank score rather than raw citation counts. The resulting PR-index is then constructed by evaluating source popularity as well as source publication authority. Extensive tests on available collection data (i.e., Microsoft Academic Search and benchmarks on the SIGKDD innovation award) show that the PR-index provides a more balanced impact measure than many existing indices. Due to its simplicity and similarity to the popular h-index, the PR-index may thus become a welcome addition to the technical indices already in use. Moreover, growth dynamics prior to the SIGKDD innovation award indicate that the PR-index might have notable predictive power.
18. The Harary index of trees
CERN Document Server
c, Aleksandar Ili\\'; Feng, Lihua
2011-01-01
The Harary index of a graph $G$ is a recently introduced topological index, defined on the reciprocal distance matrix as $H(G)=\sum_{u,v \in V(G)}\frac{1}{d(u,v)}$, where $d(u,v)$ is the length of the shortest path between two distinct vertices $u$ and $v$. We present the partial ordering of starlike trees based on the Harary index and describe the trees with the second-maximal and the second-minimal Harary index. In this paper, we investigate the Harary index of trees with $k$ pendent vertices and determine the extremal trees with maximal Harary index. We also characterize the extremal trees with maximal Harary index with respect to the number of vertices of degree two, matching number, independence number, domination number, radius and diameter. In addition, we characterize the extremal trees with minimal Harary index and given maximum degree. We conclude that in all presented classes, the trees with maximal Harary index are exactly the trees with minimal Wiener index, and vice versa.
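The defining formula lends itself to a direct computation: sum the reciprocals of all pairwise shortest-path distances. A small sketch (the path P4 is an illustrative example, not one from the paper):

```python
from collections import deque

def harary_index(n, edges):
    """H(G) = sum over unordered pairs {u, v} of 1 / d(u, v)."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    total = 0.0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        # require v > s so each unordered pair contributes exactly once
        total += sum(1.0 / d for v, d in dist.items() if v > s)
    return total

# Path P4: H = 3*(1/1) + 2*(1/2) + 1*(1/3) = 13/3
print(harary_index(4, [(0, 1), (1, 2), (2, 3)]))
```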
19. Ankle Brachial Index
Energy Technology Data Exchange (ETDEWEB)
Wikstroem, J.; Hansen, T.; Johansson, L.; Lind, L.; Ahlstroem, H. (Dept. of Radiology and Dept. of Medical Sciences, Uppsala Univ. Hospital, Uppsala (SE))
2008-03-15
Background: Whole-body magnetic resonance angiography (WBMRA) permits noninvasive vascular assessment, which can be utilized in epidemiological studies. Purpose: To assess the relation between a low ankle brachial index (ABI) and high-grade stenoses in the pelvic and leg arteries in the elderly. Material and Methods: WBMRA was performed in a population sample of 306 subjects aged 70 years. The arteries below the aortic bifurcation were graded after the most severe stenosis according to one of three grades: 0-49% stenosis, 50-99% stenosis, or occlusion. ABI was calculated for each side. Results: There were assessable WBMRA and ABI examinations in 268 (right side), 265 (left side), and 258 cases (both sides). At least one >=50% stenosis was found in 19% (right side), 23% (left side), and 28% (on at least one side) of the cases. The corresponding prevalences for ABI <0.9 were 4.5%, 4.2%, and 6.6%. An ABI cut-off value of 0.9 resulted in a sensitivity, specificity, and positive and negative predictive value of 20%, 99%, 83%, and 84% on the right side, and 15%, 99%, 82%, and 80% on the left side, respectively, for the presence of a >= 50% stenosis in the pelvic or leg arteries. Conclusion: An ABI <0.9 underestimates the prevalence of peripheral arterial occlusive disease in the general elderly population
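The index itself is a simple ratio. By common clinical convention (assumed here; the study's exact measurement protocol may differ), the ABI for one leg is the higher of the two ankle systolic pressures (dorsalis pedis or posterior tibial) divided by the higher of the two brachial systolic pressures, with values below 0.9 taken as suggestive of peripheral arterial occlusive disease. A sketch with hypothetical pressures in mmHg:

```python
def ankle_brachial_index(dp, pt, brachial_left, brachial_right):
    """ABI for one leg: higher ankle systolic pressure (dorsalis pedis or
    posterior tibial) divided by the higher of the two brachial pressures."""
    return max(dp, pt) / max(brachial_left, brachial_right)

abi = ankle_brachial_index(dp=105, pt=110, brachial_left=130, brachial_right=125)
print(round(abi, 2))  # 0.85
print("PAOD suspected" if abi < 0.9 else "normal")  # PAOD suspected
```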
20. NASA Indexing Benchmarks: Evaluating Text Search Engines
Science.gov (United States)
Esler, Sandra L.; Nelson, Michael L.
1997-01-01
The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing the index and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.
1. Subject Indexing of children's and juvenile literature
Directory of Open Access Journals (Sweden)
Sara Fedeli
2015-09-01
Full Text Available This article highlights some issues related to subject indexing applied to fiction, with a focus on how it could be implemented for children's literature and also for adult fiction. Italy is, in fact, one of the countries that do not apply subject indexing to fiction, and despite numerous ongoing debates on this topic there is no national plan to solve the issue. The users who most require this kind of search are children and teenagers: children's literature, in fact, has seen some interesting steps toward subject indexing of fiction, even if these initiatives are carried out independently by individual library networks.
2. Thermal Comfort Index
Directory of Open Access Journals (Sweden)
Teodoreanu Elena
2016-10-01
Full Text Available We present some bioclimatic indices (formulas or nomograms) for medical purposes, therapeutic tourism, sports, or regionalization. They are based on one, two, three or more different meteorological parameters.
3. Hosoya Index of L-Type Polyphenyl Spiders
Directory of Open Access Journals (Sweden)
Ren Shengzhang
2016-01-01
Full Text Available A polyphenyl system is composed of n hexagons in which adjacent hexagons are joined by a path with two vertices. The Hosoya index of a graph G is defined as the total number of independent edge sets of G. In this paper, we give a formula for computing the Hosoya index of one type of polyphenyl system. Furthermore, we characterize the extremal Hosoya index within this type of polyphenyl system.
4. Diet quality index for healthy food choices
Directory of Open Access Journals (Sweden)
Simone Caivano
2013-12-01
Full Text Available OBJECTIVE: To present a Diet Quality Index suitable for dietary intake studies of Brazilian adults. METHODS: A diet quality index to analyze the incorporation of healthy food choices was associated with a digital food guide. This index includes moderation components, intended to flag foods that may represent a risk when eaten in excess, and adequacy components that include sources of nutrients and bioactive compounds in order to help individuals meet their nutritional requirements. The performance of the Diet Quality Index-Digital Food Guide was measured by determining its psychometric properties, namely content and construct validity as well as internal consistency. RESULTS: The moderation and adequacy components correlated weakly with dietary energy (-0.16 to 0.09). The strongest correlation (0.52) occurred between the component 'sugars and sweets' and the total score. Cronbach's coefficient alpha for reliability was 0.36. CONCLUSION: Given that diet quality is a complex and multidimensional construct, the Diet Quality Index-Digital Food Guide, whose validity is comparable to that of other indices, is a useful resource for Brazilian dietary studies. However, new studies can provide additional information to improve its reliability.
5. An Approach for Indexing Web Data Sources
Directory of Open Access Journals (Sweden)
Saidi Imene
2014-08-01
Full Text Available Web information sources such as forums, blogs, and news articles are becoming increasingly large and diverse. Even if advances in technology are helping to improve techniques for dealing with the large amounts of generated data, such data sources are heterogeneous in structure (semi-structured or unstructured sources) and nature (texts or images). Software solutions are therefore necessary to prepare data and access these sources in a homogeneous way. In this paper we present an approach for indexing heterogeneous data sources. Our objective is to offer techniques for efficient indexing of web sources by storing only the necessary information. We propose automatic indexing for semi-structured or unstructured sources (e.g., XML files, HTML files) and annotation for other sources (e.g., images, videos) that exist within a page. We present our indexing algorithms and propose the use of the MapReduce model to build a scalable inverted index. Experiments on a real-world corpus show that our approach achieves good performance.
6. ANTHROPOMETRIC STUDY OF NASAL INDEX OF EGYPTIANS
Directory of Open Access Journals (Sweden)
2014-12-01
Full Text Available Background: The nasal index is one of the anthropometric parameters most commonly used in classifying human races. There are few reports in the medical literature concerning the nasal index that specifically address Egyptian populations. The objective of this study was to determine the normal parameters of the external nose (width and height) and the nasal index in Egyptians. Methods: The study was conducted randomly on healthy Egyptian subjects of both sexes. Nasal height and width were measured using a vernier caliper. The nasal index was then determined for each subject, and the obtained data were subjected to statistical analysis. Results: A total of 290 subjects, 144 males and 146 females, aged 1 month-65 years, were enrolled in the study. The study showed the existence of sexual dimorphism in nasal morphology, appearing after the age of 20 years. The mean nasal index in the investigated adults was 68.01; in males and females it was 71.46 and 64.56, respectively. Conclusions: The dominant nasal type in Egyptians was in-between the mesorrhine "medium" and leptorrhine "narrow" nose. Forensic and anthropological research, as well as cosmetic and reconstructive surgery, may benefit from the age- and sex-based data of the study.
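The nasal index used in such studies is the ratio of nasal width to nasal height, multiplied by 100. A minimal sketch (the measurements and the classification cut-offs of 70 and 85 are conventional anthropometric values assumed for illustration, not data from the study):

```python
def nasal_index(width_mm, height_mm):
    """Nasal index = (nasal width / nasal height) * 100."""
    return width_mm / height_mm * 100

def classify(ni):
    # Conventional anthropometric cut-offs, assumed here for illustration.
    if ni < 70:
        return "leptorrhine (narrow)"
    if ni < 85:
        return "mesorrhine (medium)"
    return "platyrrhine (broad)"

ni = nasal_index(34.0, 50.0)  # hypothetical width and height in mm
print(ni)            # 68.0
print(classify(ni))  # leptorrhine (narrow)
```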
7. On the general sum-connectivity index and general Randić index of cacti
Directory of Open Access Journals (Sweden)
Shehnaz Akhter
2016-11-01
Full Text Available Abstract Let G be a connected graph. The degree of a vertex x of G, denoted by $d_{G}(x)$, is the number of edges incident to x. The general sum-connectivity index is the sum of the weights $(d_{G}(x)+d_{G}(y))^{\alpha}$ over all edges xy of G, where $\alpha$ is a real number. The general Randić index is the sum of the weights $(d_{G}(x)d_{G}(y))^{\alpha}$ over all edges xy of G, where $\alpha$ is a real number. The graph G is a cactus if each block of G is either a cycle or an edge. In this paper, we find sharp lower bounds on the general sum-connectivity index and the general Randić index of cacti.
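Both indices are edge sums over degree-based weights and are straightforward to compute. A minimal sketch (the cycle C5, a cactus consisting of a single cycle block, is an illustrative choice, not an example from the paper):

```python
def general_sum_connectivity(edges, degrees, alpha):
    """Sum of (d(x) + d(y)) ** alpha over all edges xy."""
    return sum((degrees[x] + degrees[y]) ** alpha for x, y in edges)

def general_randic(edges, degrees, alpha):
    """Sum of (d(x) * d(y)) ** alpha over all edges xy."""
    return sum((degrees[x] * degrees[y]) ** alpha for x, y in edges)

# Cycle C5 (a cactus whose single block is a cycle); every vertex has degree 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
degrees = {v: 2 for v in range(5)}
print(general_sum_connectivity(edges, degrees, 1))  # 20
print(general_randic(edges, degrees, -0.5))         # 2.5 (classical Randic index)
```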
8. INDEXABILITY AND OPTIMAL INDEX POLICIES FOR A CLASS OF REINITIALISING RESTLESS BANDITS
Science.gov (United States)
Villar, Sofía S.
2016-01-01
Motivated by a class of Partially Observable Markov Decision Processes with application in surveillance systems, in which a set of imperfectly observed state processes is to be inferred from a subset of available observations through a Bayesian approach, we formulate and analyze a special family of multi-armed restless bandit problems. We consider the problem of finding an optimal policy for observing the processes that maximizes the total expected net rewards over an infinite time horizon subject to the resource availability. From the Lagrangian relaxation of the original problem, an index policy can be derived, as long as the existence of the Whittle index is ensured. We demonstrate that such a class of reinitializing bandits, in which a project's state deteriorates while active and resets to its initial state when passive until its completion, possesses the structural property of indexability, and we further show how to compute the index in closed form. In general, the Whittle index rule for restless bandit problems does not achieve optimality. However, we show that the proposed Whittle index rule is optimal for the problem under study in the case of stochastically heterogeneous arms under the expected total criterion, and it is further recovered by a simple tractable rule referred to as the 1-limited Round Robin rule. Moreover, we illustrate the significant suboptimality of another widely used heuristic, the Myopic index rule, by computing its suboptimality gap in closed form. We present numerical studies which illustrate, for more general instances, the performance advantages of the Whittle index rule over other simple heuristics. PMID:27212781
9. Semiotics and Indexing: An Analysis of the Subject Indexing Process.
Science.gov (United States)
Mai, Jens-Erik
2001-01-01
Explains some major problems related to the subject indexing process and proposes semiotics as a framework for understanding the interpretive nature of the process. Explores the approach to studies of indexing and library and information science suggested by Fairthorne, Blair, Benediktsson, and others. Offers an explanation of what occurs in the…
10. Malaysian Education Index (MEI): An Online Indexing and Repository System
Science.gov (United States)
Kabilan, Muhammad Kamarul; Ismail, Hairul Nizam; Yaakub, Rohizani; Yusof, Najeemah Mohd; Idros, Sharifah Noraidah Syed; Umar, Irfan Naufal; Arshad, Muhammad Rafie Mohd.; Idrus, Rosnah; Rahman, Habsah Abdul
2010-01-01
This "Project Sheet" describes an on-going project that is being carried out by a group of educational researchers, computer science researchers and librarians from Universiti Sains Malaysia, Penang. The Malaysian Education Index (MEI) has two main functions--(1) Online Indexing System, and (2) Online Repository System. In this brief…
11. A cast orientation index.
Science.gov (United States)
Ivanhoe, J R; Mahanna, G K
1994-12-01
This article describes a technique that allows multiple master casts to be precisely oriented to the same path of insertion and withdrawal. This technique is useful in situations where multiple fixed prosthodontic preparations require surveyed restorations and a single master cast is not available.
12. PLATO IV Accountancy Index.
Science.gov (United States)
Pondy, Dorothy, Comp.
The catalog was compiled to assist instructors in planning community college and university curricula using the 48 computer-assisted accountancy lessons available on PLATO IV (Programmed Logic for Automatic Teaching Operation) for first semester accounting courses. It contains information on lesson access, lists of acceptable abbreviations for…
13. The HLD (CalMod) index and the index question.
Science.gov (United States)
Parker, W S
1998-08-01
The malocclusion index problem arises because of the need to identify which patients' treatments will be paid for with tax dollars. Both the civilian (Medicaid) and military (Champus) programs in the United States require that "need" be demonstrated. Need is defined as "medically necessary handicapping malocclusion" in Medicaid parlance; Champus defines it as "seriously handicapping malocclusion." The responsible specialty organization (the AAO) first approved the Salzmann Index in 1969 for this purpose and then reversed course in 1985, taking a formal position against the use of any index. Dentistry has historically chosen a state of occlusal perfection as ideal and normal and declared that variation was not normal, hence abnormal, and thus malocclusion. This "ideal" comprises 1% to 2% of the population and fails all statistical standards. Many indexes based on variations from this "ideal" have been proposed, and they fail for that reason: they are not logical. The HLD (CalMod) Index is a lawsuit-driven modification of some 1960 suggestions by Dr. Harry L. Draker. It proposes to identify the worst-looking malocclusions as handicapping and offers a cut-off point to identify them. In addition, the modification includes two situations known to be destructive to tissue and structures. As of Jan. 1, 1998, the California program has had 135,655 patients screened by qualified orthodontists using this index. Of that number, 49,537 patients have had study models made and screened by qualified orthodontists using the index. Two separate studies have been performed to examine results and identify problems. Necessary changes have been made and guidelines produced. The index problem has proven to be very dynamic in application. The HLD (CalMod) Index has been successfully applied and tested in very large numbers. This article is published as a factual review of the situation regarding the index question and one solution in the United States.
14. 77 FR 76303 - Notice of Availability of Producer Price Index (PPI) Data Users Survey
Science.gov (United States)
2012-12-27
... Market Committee to help decide monetary policy. Federal policy-makers at the Department of Treasury and the Council of Economic Advisors utilize these statistics to help form and evaluate monetary...
15. OPS index - KOME | LSDB Archive [Life Science Database Archive metadata]
Lifescience Database Archive (English)
16. 7 CFR 3601.2 - Public inspection, copying, and indexing.
Science.gov (United States)
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Public inspection, copying, and indexing. 3601.2 Section 3601.2 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL AGRICULTURAL... indexing. 5 U.S.C. 552(a)(2) requires that certain materials be made available for public inspection...
17. 7 CFR 3801.2 - Public inspection, copying, and indexing.
Science.gov (United States)
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Public inspection, copying, and indexing. 3801.2 Section 3801.2 Agriculture Regulations of the Department of Agriculture (Continued) WORLD AGRICULTURAL... inspection, copying, and indexing. 5 U.S.C. 552(a)(2) requires that certain materials be made available...
18. 7 CFR 3404.2 - Public inspection, copying, and indexing.
Science.gov (United States)
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Public inspection, copying, and indexing. 3404.2 Section 3404.2 Agriculture Regulations of the Department of Agriculture (Continued) COOPERATIVE STATE... inspection, copying, and indexing. 5 U.S.C. 552(a)(2) requires that certain materials be made available...
19. 45 CFR 5.52 - Indexes of records.
Science.gov (United States)
2010-10-01
....31(c). (b) Record citation as precedent. We will not use or cite any record described in § 5.51(a) as... Records Available for Public Inspection § 5.52 Indexes of records. (a) Inspection and copying. We will maintain and provide for your inspection and copying current indexes of the records described in §...
20. Satisfiability with Index Dependency
Institute of Scientific and Technical Information of China (English)
Hong-Yu Liang; Jing He
2012-01-01
We study the Boolean satisfiability problem (SAT) restricted to input formulas for which linear arithmetic constraints are imposed on the indices of variables occurring in the same clause. This can be seen as a structural counterpart of Schaefer's dichotomy theorem, which studies the SAT problem with additional constraints on the assigned values of variables in the same clause. More precisely, let k-SAT(m, A) denote the SAT problem restricted to instances of k-CNF formulas, in every clause of which the indices of the last k-m variables are totally determined by the first m through linear equations chosen from A. For example, if A contains i3 = i1 + 2*i2 and i4 = i2 - i1 + 1, then a clause of the input to 4-SAT(2, A) has the form y_{i1} ∨ y_{i2} ∨ y_{i1+2i2} ∨ y_{i2-i1+1}, with y_i being x_i or ¬x_i. We obtain the following results: 1) If m ≥ 2, then for any set A of linear constraints, the restricted problem k-SAT(m, A) is either in P or NP-complete assuming P ≠ NP. Moreover, the corresponding #SAT problem is always #P-complete, and the MAX-SAT problem does not allow a polynomial-time approximation scheme assuming P ≠ NP. 2) m = 1, that is, in every clause only one index can be chosen freely. In this case, we develop a general framework together with some techniques for designing polynomial-time algorithms for the restricted SAT problems. Using these, we prove that for any A, #2-SAT(1, A) and MAX-2-SAT(1, A) are both polynomial-time solvable, which is in sharp contrast with the hardness results for general #2-SAT and MAX-2-SAT. For fixed k ≥ 3, we obtain a large class of non-trivial constraints A under which the problems k-SAT(1, A), #k-SAT(1, A) and MAX-k-SAT(1, A) can all be solved in polynomial time or quasi-polynomial time.
1. The Pemberton Happiness Index
Science.gov (United States)
Paiva, Bianca Sakamoto Ribeiro; de Camargos, Mayara Goulart; Demarzo, Marcelo Marcos Piva; Hervás, Gonzalo; Vázquez, Carmelo; Paiva, Carlos Eduardo
2016-01-01
Abstract The Pemberton Happiness Index (PHI) is a recently developed integrative measure of well-being that includes components of hedonic, eudaimonic, social, and experienced well-being. The PHI has been validated in several languages, but not in Portuguese. Our aim was to cross-culturally adapt the Universal Portuguese version of the PHI and to assess its psychometric properties in a sample of the Brazilian population using online surveys. An expert committee evaluated 2 versions of the PHI previously translated into Portuguese by the original authors using a standardized form for assessment of semantic/idiomatic, cultural, and conceptual equivalence. A pretesting was conducted employing cognitive debriefing methods. In sequence, the expert committee evaluated all the documents and reached a final Universal Portuguese PHI version. For the evaluation of the psychometric properties, the data were collected using online surveys in a cross-sectional study. The study population included healthcare professionals and users of the social network site Facebook from several Brazilian geographic areas. In addition to the PHI, participants completed the Satisfaction with Life Scale (SWLS), Diener and Emmons’ Positive and Negative Experience Scale (PNES), Psychological Well-being Scale (PWS), and the Subjective Happiness Scale (SHS). Internal consistency, convergent validity, known-group validity, and test–retest reliability were evaluated. Satisfaction with the previous day was correlated with the 10 items assessing experienced well-being using the Cramer V test. Additionally, a cut-off value of PHI to identify a “happy individual” was defined using receiver-operating characteristic (ROC) curve methodology. Data from 1035 Brazilian participants were analyzed (health professionals = 180; Facebook users = 855). Regarding reliability results, the internal consistency (Cronbach alpha = 0.890 and 0.914) and test–retest (intraclass correlation coefficient = 0.814) were
2. Similarity Based Clustering with Indexing for Semi-Structured Document
Directory of Open Access Journals (Sweden)
S. Palanisamy
2012-01-01
Full Text Available Problem statement: To improve the performance of data retrieval in a homogeneous large XML document. Approach: Clustering of XML elements based on content, combined with indexing. The element used for clustering is identified from the document and/or the XML schema and serves as the clustering parameter. A suitable index is created after clustering. Results: Clustering combined with an indexing strategy supports efficient retrieval of XML elements from the document. Conclusion: The proposed method improves the efficiency of XML data manipulation and performs better than clustering or indexing alone.
3. Empirical formula for the refractive index of freezing brine
DEFF Research Database (Denmark)
2009-01-01
The refractive index of freezing brine is important, for example, in order to estimate oceanic scattering as sea ice develops. Previously, no simple continuous expression was available for estimating the refractive index of brine at subzero temperatures. I show that extrapolation of the empirical...... formula for the refractive index of seawater by Quan and Fry [Appl. Opt. 34(18), 3477-3480 (1995)] provides a good fit to the refractive index of freezing brine for temperatures above -24 degrees Celsius and salinities below 180 parts per thousand....
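The Quan and Fry formula cited above can be sketched in code. The coefficient values below are the ones commonly quoted from Quan and Fry (1995) and should be checked against the original paper; extrapolation to freezing brine is only valid within the temperature and salinity limits the abstract states.

```python
def refractive_index(S, T, lam):
    """Quan & Fry (1995) empirical refractive index of seawater.

    S: salinity (parts per thousand), T: temperature (deg C),
    lam: wavelength (nm). Coefficients as commonly quoted from the paper.
    """
    n0, n1, n2, n3 = 1.31405, 1.779e-4, -1.05e-6, 1.6e-8
    n4, n5, n6, n7 = -2.02e-6, 15.868, 0.01155, -0.00423
    n8, n9 = -4382.0, 1.1455e6
    return (n0 + (n1 + n2 * T + n3 * T**2) * S + n4 * T**2
            + (n5 + n6 * S + n7 * T) / lam
            + n8 / lam**2 + n9 / lam**3)

# Standard seawater (S = 35 ppt, T = 20 C) at the sodium D line:
print(refractive_index(35.0, 20.0, 589.0))  # ~1.3394
```

Evaluating the same expression at subzero temperatures and high salinities gives the extrapolation the abstract refers to.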
4. Information on the Cost-Variation Index for 2006
CERN Document Server
2005-01-01
This document provides information on the cost-variation index on the basis of currently available data. The document is in line with the method approved in December 2000 for calculating the Personnel budget cost-variation index (updated Staff Rules and Regulations, 10th edition, and taking into account the amendment approved in June 2005 regarding the list of Member States used for the calculation of the salary index) and the method approved in June 1996 for calculating the Materials budget cost-variation index. The Finance Committee is invited to take note of this information. The final document will be submitted in November or December.
5. Indexed Languages and Unification Grammars
CERN Document Server
Burheim, T
1995-01-01
Indexed languages are interesting in computational linguistics because they are the least class of languages in the Chomsky hierarchy that has not been shown not to be adequate to describe the string set of natural language sentences. We here define a class of unification grammars that exactly describe the class of indexed languages.
6. A Tourism Financial Conditions Index
NARCIS (Netherlands)
C-L. Chang (Chia-Lin); H-K. Hsu (Hui-Kuang); M.J. McAleer (Michael)
2014-01-01
Abstract: The paper uses monthly data on financial stock index returns, tourism stock sub-index returns, effective exchange rate returns and interest rate differences from April 2005 – August 2013 for Taiwan, and applies Chang's (2014) novel approach for constructing a tourism fi
7. Developments in Indexing Picture Collections.
Science.gov (United States)
Cawkell, A. E.
1993-01-01
Discussion of electronic image processing focuses on the need for indexing to ensure adequate retrieval. Highlights include icons, i.e., reduced pictorial surrogates; file staging; indexing languages, including examples of thesauri; and pictorial languages, including a HyperCard system. (Contains eight references.) (LRW)
8. Image Vector Quantization codec indexes filtering
Directory of Open Access Journals (Sweden)
Lakhdar Moulay Abdelmounaim
2012-01-01
Full Text Available Vector Quantisation (VQ) is an efficient coding algorithm that has been widely used in the field of video and image coding due to its fast decoding efficiency. However, the indexes of VQ are sometimes lost because of signal interference during transmission. In this paper, we propose an efficient estimation method to conceal and recover the lost indexes on the decoder side, to avoid re-transmitting the whole image. If the image or video has a limited period of validity, re-transmitting the data wastes time and network bandwidth. Therefore, using the correctly received data to estimate and recover the lost data is efficient in time-constrained situations, such as network conferencing or mobile transmissions. In natural images, pixels are correlated with their neighbours; VQ partitions the image into sub-blocks and quantises them to the indexes that are transmitted, so the correlation between adjacent indexes is very strong. The proposed method has two parts: pre-processing and an estimation process. In pre-processing, we modify the order of codevectors in the VQ codebook to increase the correlation among neighbouring vectors. We then use a special filtering method in the estimation process. Using conventional VQ to compress the Lena image and transmitting it without any loss of index achieves a PSNR of 30.429 dB at the decoder. The simulation results demonstrate that our method can estimate the indexes to achieve PSNR values of 29.084 and 28.327 dB when the loss rate is 0.5% and 1%, respectively.
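The abstract does not specify the exact filtering used, but the core idea — estimating a lost index from its received neighbours once codebook reordering has made adjacent indexes correlated — can be sketched as follows. The function name and the median filter are illustrative assumptions, not the paper's method.

```python
import numpy as np

def conceal_lost_indexes(index_map, lost_mask):
    """Estimate lost VQ indexes from the median of their received 4-neighbours.

    Assumes the codebook has been reordered (the pre-processing step above)
    so that numerically close indexes map to visually similar codevectors;
    the paper's actual filtering method may differ.
    """
    est = index_map.astype(float).copy()
    h, w = index_map.shape
    for y, x in zip(*np.nonzero(lost_mask)):
        # Collect the 4-neighbour indexes that were received correctly.
        neigh = [est[ny, nx]
                 for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                 if 0 <= ny < h and 0 <= nx < w and not lost_mask[ny, nx]]
        if neigh:
            est[y, x] = np.median(neigh)
    return np.rint(est).astype(index_map.dtype)
```

A block surrounded by similar received indexes is thus replaced by a nearby codevector instead of being re-transmitted.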
Science.gov (United States)
Matthews, A. L.
Available from UMI in association with The British Library. In this thesis, the primary aberrations of lenses with a radial focussing gradient-of-index are analysed. Such a lens has a refractive index profile which decreases continuously and radially outward from the optical axis, so that the surfaces of constant refractive index are circular cylinders which are coaxial with the optical axis. Current applications of these lenses include photocopiers, medical endoscopes, telecommunications systems and compact disc systems. Closed formulae for the primary wavefront aberrations for a gradient-index lens with curved or plane entry and exit faces are obtained from the differential equations of such a lens to assess the primary transverse ray aberrations that it introduces. Identical expressions are then obtained by using the difference in optical path length produced between two rays by the lens. This duplication of the derivations of the primary wavefront aberrations acts as a confirmation of the validity of the expressions. One advantage of these equations is that the contributions due to the primary spherical aberration, coma, astigmatism, field curvature and distortion can be assessed individually. A Fortran 77 program has been written to calculate each of these individual contributions, the total primary wavefront aberrations and the primary transverse ray aberrations. Further confirmation of the validity of the expressions is obtained by using this program to show that the coma and distortion were both zero for fully symmetric systems working at unit magnification. The program is then used to assess the primary wavefront aberrations for a gradient-index lens which is currently of interest to the telecommunications industry. These results are compared with values obtained using a finite ray-tracing program for the total wavefront aberrations. This shows that the primary wavefront aberrations are the completely dominant contribution to the total wavefront
10. Wallah. Indexing the new locality
DEFF Research Database (Denmark)
Skovse, Astrid Ravn
This paper aims to add new, empirically based insights to the understanding of the dynamics by which linguistic features come to index locality. It does so through examining the indexicalities of the term wallah among adolescents living in the suburban, multi-ethnic Danish neighborhood Vollsmose....... The paper shows how the term wallah, by being emblematic of the enregistered voices of somewhat competing, locally constructed characterological figures (Agha 2005), comes to serve as an index of highly specific kinds of locality. The data comes from an experimental mapping method tapping into informants...... in a wide range of multi-ethnic settings in Scandinavia – wallah is nevertheless capable of indexing both local and supralocal sociolinguistic scales at once, reflecting the multiscalarity of the “new localities” of globalization (Blommaert 2010). By considering the possibility of features indexing a range...
11. Trajectory Indexing Using Movement Constraints
DEFF Research Database (Denmark)
Pfoser, D.; Jensen, Christian Søndergaard
2005-01-01
With the proliferation of mobile computing, the ability to index efficiently the movements of mobile objects becomes important. Objects are typically seen as moving in two-dimensional (x,y) space, which means that their movements across time may be embedded in the three-dimensional (x,y,t) space...... is to reduce movements to occur in one spatial dimension. As a consequence, the movement occurs in two-dimensional (x,t) space. The advantages of considering such lower-dimensional trajectories are that the overall size of the data is reduced and that lower-dimensional data is to be indexed. Since off......-the-shelf database management systems typically do not offer higher-dimensional indexing, this reduction in dimensionality allows us to use existing DBMSes to store and index trajectories. Moreover, we argue that, given the right circumstances, indexing these dimensionality-reduced trajectories can be more efficient...
12. Effect of Egg Shape Index on Hatching Characteristics in Hens
Directory of Open Access Journals (Sweden)
Erol Aşcı
2015-07-01
Full Text Available In this study, the effects of egg shape index on hatching characteristics (fertility rate, embryo mortality, hatchability of fertile eggs and hatchability, egg weight loss, chick weight, sex ratio and quality of chicks) were investigated. A total of 960 eggs of ATAK-S hybrid parents obtained from the Ankara Poultry Research Station were divided into three groups based on shape index (SI≤71, 72≤SI≤76, SI≥77). A significant relationship between fertility rate and late embryonic mortality was found among the shape index groups. On the other hand, no differences were found in the rate of weight loss at 18 days, early and middle embryonic mortality, malposition rate, hatchability, sex ratio and chick quality among the shape index groups. It was concluded that shape index affects the hatching results and that eggs of abnormal shape index should not be used for hatching.
13. OPTIMIZATION OF LOCATION BASED QUERIES USING SPATIAL INDEXING
Directory of Open Access Journals (Sweden)
S. Geetha
2014-04-01
Full Text Available Recent developments in technology have led to the introduction of various mobile terminals, and clients demand effective location-based services. As valid regions expand, query retrieval time increases, which leads to poor query-processing performance. Spatial indexing techniques are among the most effective optimization methods for improving the quality of service. Existing systems use NN queries and window queries, with R-tree and grid indexing employed to increase query efficiency. However, the grid-index technique supports only small amounts of memory, so large databases cannot be handled effectively. In the proposed system we use an Ordered grid index and an EVR-tree to minimize query retrieval time and to decrease the depth of the search index; together they speed up spatial query processing.
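The basic grid-index idea underlying the abstract can be sketched as follows. This minimal uniform grid is an illustration only; the paper's Ordered grid index and EVR-tree are more elaborate structures.

```python
from collections import defaultdict

class GridIndex:
    """Minimal uniform grid index: points are bucketed into fixed-size
    cells so a window query scans only candidate cells, not every point."""

    def __init__(self, cell=1.0):
        self.cell = cell
        self.cells = defaultdict(list)

    def insert(self, x, y, obj):
        key = (int(x // self.cell), int(y // self.cell))
        self.cells[key].append((x, y, obj))

    def window(self, x1, y1, x2, y2):
        """Return objects inside the axis-aligned window [x1,x2] x [y1,y2]."""
        out = []
        for cx in range(int(x1 // self.cell), int(x2 // self.cell) + 1):
            for cy in range(int(y1 // self.cell), int(y2 // self.cell) + 1):
                for x, y, obj in self.cells[(cx, cy)]:
                    if x1 <= x <= x2 and y1 <= y <= y2:
                        out.append(obj)
        return out
```

The cell size trades off bucket occupancy against the number of cells a window must visit, which is the kind of tuning the proposed ordered variant addresses.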
14. Metabolic effects of low glycaemic index diets
Directory of Open Access Journals (Sweden)
Rusu Emilia
2009-01-01
Full Text Available Abstract The persistence of an epidemic of obesity and type 2 diabetes suggests that new nutritional strategies are needed if the epidemic is to be overcome. A promising nutritional approach suggested by this thematic review is the metabolic effect of the low glycaemic-index diet. The currently available scientific literature shows that low glycaemic-index diets acutely induce a number of favorable effects, such as rapid weight loss, a decrease in fasting glucose and insulin levels, a reduction in circulating triglyceride levels and an improvement in blood pressure. The long-term effect of the combination of these changes is at present not known. Based on associations between these metabolic parameters and the risk of cardiovascular disease, further controlled studies on low-GI diets and metabolic disease are needed.
15. Retrospective indexing (RI) - A computer-aided indexing technique
Science.gov (United States)
Buchan, Ronald L.
1990-01-01
An account is given of a method for database updating designated 'computer-aided indexing' (CAI), which has been very efficiently implemented at NASA's Scientific and Technical Information Facility by means of retrospective indexing. Novel terms added to the NASA Thesaurus will therefore proceed directly into both the NASA-RECON aerospace information system and its portion of the ESA Information Retrieval Service, giving users full access to material thus indexed. If a given term appears in the title of a record, it is given special weight. An illustrative graphic representation of the CAI search strategy is presented.
16. An Analysis of the Efficiency of Existing Kanji Indexes and Development of a Coding-based Index
Directory of Open Access Journals (Sweden)
Galina N. VOROBEVA
2012-12-01
Full Text Available Considering the problems faced by learners of Japanese from non-kanji background, the present paper discusses the characteristics of 15 existing kanji dictionary indexes. In order to compare the relative efficiency of these indexes, the concept of selectivity is defined, and the selectivity coefficient of the kanji indexes is computed and compared. Furthermore, new indexes developed by the present authors and based on an alphabetical code, a symbol code, a semantic code, and a radical and stroke number code are presented and their use and efficiency are explained.
17. PRODUCT'S SAFETY INDEX
Directory of Open Access Journals (Sweden)
Widomar Carpes Jr
2004-06-01
Full Text Available After distinguishing between safety and reliability and reviewing ways of measuring them, this article presents a new method for measuring product safety that uses the frequency and the consequences of accidents involving products. Finally, the method is applied to measure the safety of equipment used in the furniture industry.
18. APFO Historical Availability of Imagery
Data.gov (United States)
Farm Service Agency, Department of Agriculture — The APFO Historical Availability ArcGIS Online web map provides an easy to use reference of what historical imagery is available by county from the Aerial...
19. Guam and the Northern Mariana Islands ESI: INDEX (Index Polygons)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains vector polygons representing the boundaries of all hardcopy cartographic products produced as part of the Environmental Sensitivity Index...
20. Cook Inlet and Kenai Peninsula, Alaska ESI: INDEX (Index Polygons)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains vector polygons representing the boundaries used in the creation of the Environmental Sensitivity Index (ESI) for Cook Inlet and Kenai...
1. Louisiana ESI: SM_INDEX (Small Index Polygons)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains vector polygons representing the boundaries of all the hardcopy cartographic products produced as part of the Environmental Sensitivity Index...
2. Louisiana ESI: LG_INDEX (Large Index Polygons)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains vector polygons representing the boundaries of all the hardcopy cartographic products produced as part of the Environmental Sensitivity Index...
3. IMAGE INDEXING AND RETRIEVAL
Directory of Open Access Journals (Sweden)
Snehal S. Bhamre
2015-10-01
Full Text Available Scalable content-based image search based on hash codes is a hot topic nowadays. Existing hashing methods have the drawback of providing a fixed set of semantics-preserving hash functions learned from labelled image data; they may therefore ignore the user's search intention conveyed through the query image. These hashing methods embed high-dimensional image features into Hamming space and perform real-time search based on Hamming distance. This paper introduces an approach that generates the most appropriate binary codes for different queries. This is done by first generating, offline, bitwise weights of the hash codes for a set of predefined semantic classes. At query time, query-adaptive weights are computed online by measuring the proximity between the query and the semantic concept classes. Images can then be ranked by weighted Hamming distance at a finer-grained hash-code level rather than at the original Hamming distance level.
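The weighted Hamming ranking described above can be sketched as follows; the array names and shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def weighted_hamming(query_bits, db_bits, bit_weights):
    """Weighted Hamming distance: each differing bit contributes its
    (query-adaptive) weight instead of 1, giving a finer-grained ranking
    than the plain Hamming distance.

    query_bits: (n_bits,) 0/1 array for the query hash code.
    db_bits:    (n_images, n_bits) 0/1 array of database hash codes.
    bit_weights:(n_bits,) per-bit weights computed at query time.
    """
    diff = db_bits != query_bits       # (n_images, n_bits) disagreement mask
    return diff @ bit_weights          # (n_images,) weighted distances

# Plain Hamming distance is the special case of unit weights:
q = np.array([1, 0, 1, 1], dtype=np.uint8)
db = np.array([[1, 0, 1, 1], [0, 0, 1, 0]], dtype=np.uint8)
print(weighted_hamming(q, db, np.ones(4)))  # [0. 2.]
```

Ranking images by these weighted distances breaks the ties that plain integer-valued Hamming distances produce.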
4. Image Indexing and Retrieval
Directory of Open Access Journals (Sweden)
Ms. Snehal S. Bhamre
2014-07-01
Full Text Available Scalable content-based image search based on hash codes is a hot topic nowadays. Existing hashing methods have the drawback of providing a fixed set of semantics-preserving hash functions learned from labelled image data; they may therefore ignore the user's search intention conveyed through the query image. These hashing methods embed high-dimensional image features into Hamming space and perform real-time search based on Hamming distance. This paper introduces an approach that generates the most appropriate binary codes for different queries. This is done by first generating, offline, bitwise weights of the hash codes for a set of predefined semantic classes. At query time, query-adaptive weights are computed online by measuring the proximity between the query and the semantic concept classes. Images can then be ranked by weighted Hamming distance at a finer-grained hash-code level rather than at the original Hamming distance level.
5. Generalized Analytical Solutions for Nonlinear Positive-Negative Index Couplers
Directory of Open Access Journals (Sweden)
Zh. Kudyshev
2012-01-01
Full Text Available We find and analyze a generalized analytical solution for nonlinear wave propagation in waveguide couplers with opposite signs of the linear refractive index, nonzero phase mismatch between the channels, and arbitrary nonlinear coefficients.
6. Torsadogenic index for prevention of acute death
Directory of Open Access Journals (Sweden)
2015-10-01
Full Text Available This risk management project aims to apply a global preventive and predictive measure to potential victims of acute death and heart failure. Among the main causes identified are under-diagnosed genetic mutations; prescription, drug interaction, self-medication or abuse of suspected drugs driven by consulting drug leaflets on the internet; and a parasitic pandemic, Chagas disease (over 100,000,000 people are potentially infected worldwide). Promoting a globally extensive determination of the torsadogenic index, which was published in the journal Frontiers in Pharmacology, will make it possible to monitor the general population and to set security patterns that categorize silent high-risk groups with a serious prognosis of tachyarrhythmias. For this purpose, applications of known QT determinations are proposed, recommending Dr. Rautaharju's formula, currently improved with gender-specific adult versions. The torsadogenic index will make it possible to establish individual traceability and to obtain comparative samples that alert to possible QT prolongation. Thus, the torsadogenic index is a valuable, simple and low-cost resource to consider in the fight against acute death and cardiac arrest. The torsadogenic index represents a global indicator capable of predicting and preventing acute death and heart failure for the most relevant causes.
7. A comparison of four fibrosis indexes in chronic HCV: Development of new fibrosis-cirrhosis index (FCI
Directory of Open Access Journals (Sweden)
Khaliq Saba
2011-04-01
Full Text Available Abstract Background Hepatitis C can lead to liver fibrosis and cirrhosis. We compared readily available non-invasive fibrosis indexes for the fibrosis progression discrimination to find a better combination of existing non-invasive markers. Methods We studied 157 HCV infected patients who underwent liver biopsy. In order to differentiate HCV fibrosis progression, readily available AAR, APRI, FI and FIB-4 serum indexes were tested in the patients. We derived a new fibrosis-cirrhosis index (FCI) comprised of ALP, bilirubin, serum albumin and platelet count. FCI = [(ALP × Bilirubin) / (Albumin × Platelet count)]. Results Already established serum indexes AAR, APRI, FI and FIB-4 were able to stage liver fibrosis with correlation coefficient indexes 0.130, 0.444, 0.578 and 0.494, respectively. Our new fibrosis cirrhosis index FCI significantly correlated with the histological fibrosis stages F0-F1, F2-F3 and F4 (r = 0.818, p Conclusions The fibrosis-cirrhosis index (FCI) accurately predicted fibrosis stages in HCV infected patients and seems more efficient than frequently used serum indexes.
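The FCI formula given in the abstract is simple enough to express directly; units and reference ranges follow whatever the study used, so the example values below are purely structural.

```python
def fci(alp, bilirubin, albumin, platelet_count):
    """Fibrosis-cirrhosis index as defined in the abstract:
    FCI = (ALP x Bilirubin) / (Albumin x Platelet count).
    Inputs must use the same units as the original study.
    """
    return (alp * bilirubin) / (albumin * platelet_count)

# Structural example only (not clinical values):
print(fci(2.0, 3.0, 4.0, 6.0))  # 0.25
```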
8. Honey and Glycemic Index
Directory of Open Access Journals (Sweden)
Sibel Silici
2015-02-01
Full Text Available Honey is a natural substance produced by honeybees (Apis mellifera L.) from the nectar of blossoms, from secretions of living parts of plants, or from excretions of plant-sucking insects on the living parts of plants, which honeybees collect, transform and combine with specific substances of their own, then store and leave in the honeycomb to ripen and mature. Besides being a carbohydrate-rich food, honey has been used as a functional food for its potential health benefits. To explain how different kinds of carbohydrate-rich foods directly affect blood sugar, researchers developed the concept of the “glycemic index” (GI), which ranks carbohydrates on a scale based on how quickly and how much they raise blood sugar levels after eating. The diet should include an adequate and healthy balance of nutrients, and according to many health professionals the concept of GI provides a useful means of selecting the most appropriate carbohydrate-containing foods for the maintenance of health and the treatment of several disease states. There have been some studies on determining the GI of honey. Furthermore, we need to determine the GI of honey types with different botanical and geographical origins. Research on this issue will serve to raise public awareness.
9. VT - Vermont Social Vulnerability Index
Data.gov (United States)
Vermont Center for Geographic Information — Social vulnerability refers to the resilience of communities when responding to or recovering from threats to public health. The Vermont Social Vulnerability Index...
10. Region 9 - Social Vulnerability Index
Data.gov (United States)
U.S. Environmental Protection Agency — The Social Vulnerability Index is derived from the 2000 US Census data. The fields included are percent minority, median household income, age (under 18 and over...
11. Stator Indexing in Multistage Compressors
Science.gov (United States)
Barankiewicz, Wendy S.
1997-01-01
The relative circumferential location of stator rows (stator indexing) is an aspect of multistage compressor design that has not yet been explored for its potential impact on compressor aerodynamic performance. Although the inlet stages of multistage compressors usually have differing stator blade counts, the aft stages of core compressors can often have stage blocks with equal stator blade counts in successive stages. The potential impact of stator indexing is likely greatest in these stages. To assess the performance impact of stator indexing, researchers at the NASA Lewis Research Center used the 4 ft diameter, four-stage NASA Low Speed Axial Compressor for detailed experiments. This compressor has geometrically identical stages that can circumferentially index stator rows relative to each other in a controlled manner; thus it is an ideal test rig for such investigations.
12. Introduction to indexing and abstracting
CERN Document Server
Cleveland, Ana
2013-01-01
Successful information access in the digital information age requires robust systems of indexing and abstracting. This book provides a complete introduction to the subject that covers the many recent changes in the field.
13. The Callias index formula revisited
CERN Document Server
Gesztesy, Fritz
2016-01-01
These lecture notes aim at providing a purely analytical and accessible proof of the Callias index formula. In various branches of mathematics (particularly, linear and nonlinear partial differential operators, singular integral operators, etc.) and theoretical physics (e.g., nonrelativistic and relativistic quantum mechanics, condensed matter physics, and quantum field theory), there is much interest in computing Fredholm indices of certain linear partial differential operators. In the late 1970’s, Constantine Callias found a formula for the Fredholm index of a particular first-order differential operator (intimately connected to a supersymmetric Dirac-type operator) additively perturbed by a potential, shedding additional light on the Fedosov-Hörmander Index Theorem. As a byproduct of our proof we also offer a glimpse at special non-Fredholm situations employing a generalized Witten index.
14. Allegheny County Map Index Grid
Data.gov (United States)
Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Map Index Sheets from Block and Lot Grid of Property Assessment and based on aerial photography, showing 1983 datum with solid line and NAD 27 with 5 second grid...
15. Index Grids - MDC_DMLIndex
Data.gov (United States)
NSGIC GIS Inventory (aka Ramona) — A polygon feature class of Miami-Dade County, Digital Map Library (DML) index layer. This layer identifies the areas, which is divided into square mile that we have...
16. VT - Vermont Heat Vulnerability Index
Data.gov (United States)
Vermont Center for Geographic Information — This map shows: The overall vulnerability of each town to heat related illness. This index is a composite of the following themes: Population Theme, Socioeconomic...
17. The Universality of Semantic Prototypes in Spanish Lexical Availability
OpenAIRE
Marjana Šifrar Kalan
2016-01-01
The article presents the words with highest index of availability on the basis of semantic fluency tests. The conceptual stability of highly available words in various semantic categories enables them to be classified as semantic prototypes according to the theory of prototype. The aim of this article is to compare the semantic prototypes in nine semantic categories from different lexical availability studies: those carried out in Spanish as a mother tongue and Spanish as a foreign language (...
18. The Carbon City Index (CCI)
DEFF Research Database (Denmark)
Boyd, Britta; Straatman, Bas; Mangalagiu, Diana
the emission impact of various possible municipal climate plans over time. As such, the index promotes the export of solutions from one region on another, as it enables policy makers to look elsewhere for best practices and test them on their own city before potential implementation. The index facilitates...... an easy to use and transparent comparison of factual and planned emission policies in different cities and can inform regional sustainability discussion and contribute to the dissemination of solutions....
19. Nudibranch Systematic Index, second edition
OpenAIRE
2009-01-01
This is an index of my approximately 7,000 nudibranch reprints and books. I have indexed them only for information concerning systematics, taxonomy, nomenclature, & description of taxa. This list should allow you to quickly find information concerning the description, taxonomy, or systematics of almost any species of nudibranch. The full citation for any of the authors and dates listed may be found in the nudibranch bibliography at http://repositories.cdlib.org/ims/Bibliographia_Nudibranchia_...
20. Hypothalamic digoxin and regulation of body mass index.
Directory of Open Access Journals (Sweden)
Kumar A
2002-10-01
Full Text Available The hypothalamus produces digoxin, an endogenous membrane Na+-K+ ATPase inhibitor and regulator of neurotransmission. Digoxin, being a steroidal glycoside, is synthesised by the isoprenoid pathway. In view of the reports of elevated digoxin levels in metabolic syndrome X with high body mass index, the isoprenoid pathway-mediated biochemical cascade was assessed in individuals with high and low body mass index. It was also assessed in individuals with differing hemispheric dominance to find out the relationship between digoxin status, body mass index and hemispheric dominance. The isoprenoid pathway metabolites, tryptophan/tyrosine catabolic patterns and membrane composition were assessed. In individuals with high body mass index, an upregulated isoprenoid pathway with increased HMG CoA reductase activity, elevated serum digoxin and dolichol levels and low ubiquinone levels was observed. The RBC membrane Na+-K+ ATPase activity and serum magnesium levels were decreased. The tyrosine catabolites (dopamine, morphine, epinephrine and norepinephrine) were reduced and the tryptophan catabolites (serotonin, quinolinic acid, strychnine and nicotine) were increased. There was an increase in the membrane cholesterol : phospholipid ratio and a reduction in membrane glycoconjugates in individuals with high body mass index. The reverse patterns were seen in individuals with low body mass index. The patterns in individuals with high and low body mass index correlated with right and left hemispheric dominance, respectively. Hemispheric dominance and digoxin status regulate the differential metabolic pattern observed in individuals with high and low body mass index.
1. LHC Availability 2016: Proton Physics
CERN Document Server
Todd, Benjamin; Apollonio, Andrea; CERN. Geneva. ATS Department
2016-01-01
This document summarises the LHC machine availability for the period from Restart to Technical Stop 3 (TS3) in 2016, covering the whole proton physics production period of 2016. This note has been produced and ratified by the Availability Working Group, which has compiled fault information for the period in question using the Accelerator Fault Tracker.
2. THE INTENSITY OF THE CORRELATION BETWEEN THE HUMAN DEVELOPMENT INDEX, RESPECTIVELY THE HAPPY PLANET INDEX AND THE LIFE EXPECTANCY, IN ROMANIA
Directory of Open Access Journals (Sweden)
Gabriela Opait
2015-05-01
Full Text Available This paper reflects research concerning the intensity of the correlation between the Human Development Index, respectively the Happy Planet Index, and Life Expectancy in Romania, by means of the Pearson correlation coefficient and the correlation ratio. The Happy Planet Index (HPI) is a new indicator that expresses progress in any country; the Happy Planet Index is also a measure of sustainable well-being.
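The Pearson correlation coefficient used in the study is the standard measure of linear association; a minimal implementation, with illustrative data rather than the study's:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient r = cov(x, y) / (sd(x) * sd(y)),
    ranging from -1 (perfect negative) to +1 (perfect positive linear
    association)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A perfectly linear relationship yields r = 1:
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```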
3. Pavement Performance Index for Indian rural roads
Directory of Open Access Journals (Sweden)
Abhay Tawalare
2016-09-01
Full Text Available The performance of a road is evaluated from time to time to improve its quality and to help plan road maintenance. Various pavement deterioration models are available as decision tools for this purpose, but they are not easy for field engineers to use because they require either large amounts of historical data or complicated calculations. This paper therefore presents a Pavement Performance Index for rural roads based on a simple methodology. The distress parameters of rural roads were identified through a literature review, as were the rating criteria for each distress parameter. For the final selection of distress parameters in the context of Indian rural roads, the opinions of five highly experienced industry experts were taken. The weightage for the severity of each pavement distress parameter was then calculated using data from a questionnaire survey in which 117 professionals working in the Pradhan Mantri Gram Sadak Yojana across the country participated. The paper suggests a formula for the Pavement Performance Index that depends on the rating criteria and severity weightages of the pavement distress parameters. The study concluded that the suggested Pavement Performance Index makes calculations easy for field engineers and will be useful for deciding the priority list of rural roads in repair and maintenance scheduling.
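The abstract does not publish the exact formula, so the following normalised weighted sum of per-distress ratings is only an assumed illustration of an index built from rating criteria and severity weightages; the parameter names and values are hypothetical.

```python
def pavement_performance_index(ratings, weights):
    """Illustrative weighted-average form of a distress-based index:
    sum(rating_i * weight_i) / sum(weight_i). This is an assumption,
    not the paper's published formula.
    """
    total_w = sum(weights.values())
    return sum(ratings[d] * weights[d] for d in ratings) / total_w

# Hypothetical ratings on a 0-5 scale and severity weightages:
ratings = {"cracking": 3, "potholes": 4, "ravelling": 2}
weights = {"cracking": 0.5, "potholes": 0.3, "ravelling": 0.2}
print(pavement_performance_index(ratings, weights))  # 3.1
```

Sorting road sections by such an index yields the maintenance priority list the paper describes.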
4. Availability program: Phase I report
Energy Technology Data Exchange (ETDEWEB)
Thomson, S.L.; Dabiri, A.; Keeton, D.C.; Riemer, B.W.; Waganer, L.M.
1985-05-01
An Availability Working Group was formed within the Office of Fusion Energy in March 1984 to consider the establishment of an availability program for magnetic fusion. The scope of this program is defined to include the development of (1) a comprehensive data base, (2) empirical correlations, and (3) analytical methods for application to fusion facilities and devices. The long-term goal of the availability program is to develop a validated, integrated methodology that will provide (1) projections of plant availability and (2) input to design decisions on maintainability and system reliability requirements. The Phase I study group was commissioned to assess the status of work in progress that is relevant to the availability program. The scope of Phase I included surveys of existing data and data collection programs at operating fusion research facilities, the assessment of existing computer models to calculate system availability, and the review of methods to predict and correlate data on component failure and maintenance. The results of these investigations are reported to the Availability Working Group in this document.
5. Indexing and Searching Distributed Astronomical Data Archives
Science.gov (United States)
Jackson, R. E.
The technology needed to implement a Distributed Astronomical Data Archive (DADA) is available today (e.g., Fullton 1993). Query interface standards are needed, however, before the DADA information will be discoverable. Fortunately, a small number of parameters can describe a large variety of astronomical datasets. One possible set of parameters is (RA, DEC, Wavelength, Time, Intensity) times (Minimum Value, Maximum Value, Resolution, Coverage). These twenty parameters can describe aperture photometry, images, time resolved spectroscopy, etc. These parameters would be used to index each dataset in each catalog. Each catalog would in turn be indexed by the extremum values of the parameters into a catalog of catalogs. Replicating this catalog of catalogs would create a system with no centralized resource to be saturated by multiple users.
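The proposed twenty-parameter scheme (five physical quantities times four statistics) could be represented as one flat record per dataset. The sketch below only illustrates that data structure; the field names and values are invented for demonstration.

```python
# Five physical quantities times four statistics = twenty index parameters.
QUANTITIES = ("ra", "dec", "wavelength", "time", "intensity")
STATS = ("min", "max", "resolution", "coverage")

def make_index_record(**values):
    """Build a flat {quantity_stat: value} record; unknown keys are ignored
    and missing entries stay None (field names are illustrative)."""
    record = {f"{q}_{s}": None for q in QUANTITIES for s in STATS}
    record.update({k: v for k, v in values.items() if k in record})
    return record

rec = make_index_record(ra_min=10.0, ra_max=11.5, wavelength_min=4000.0)
print(len(rec))  # 20
```

A catalog of catalogs would then index these records by the extremum values of the same twenty fields.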
6. The GOES Microburst Windspeed Potential Index
CERN Document Server
Pryor, K L
2007-01-01
A suite of products has been developed and evaluated to assess the hazards that convective downbursts present to aircraft in flight, derived from the current generation of Geostationary Operational Environmental Satellites (GOES I-M). The existing suite of GOES microburst products employs the GOES sounder to calculate risk based on conceptual models of environmental profiles favorable for convective downburst generation. A GOES sounder-derived wet microburst severity index (WMSI) product, which assesses the potential magnitude of convective downbursts by incorporating convective available potential energy (CAPE) as well as the vertical theta-e difference (TeD) between the surface and mid-troposphere, has been developed and implemented. Intended to supplement the use of the GOES WMSI product over the United States Great Plains region, a GOES Hybrid Microburst Index (HMI) product has also evolved. The HMI product infers the presence of a convective boundary layer by incorporating the sub-cloud temperature lapse rate as well...
7. Hazard index for underground toxic material
Energy Technology Data Exchange (ETDEWEB)
Smith, C.F.; Cohen, J.J.; McKone, T.E.
1980-06-01
To adequately define the problem of waste management, quantitative measures of hazard must be used. This study reviews past work in the area of hazard indices and proposes a geotoxicity hazard index for use in characterizing the hazard of toxic material buried underground. Factors included in this index are: an intrinsic toxicity factor, formulated as the volume of water required for dilution to public drinking-water levels; a persistence factor to characterize the longevity of the material, ranging from unity for stable materials to smaller values for shorter-lived materials; an availability factor that relates the transport potential for the particular material to a reference value for its naturally occurring analog; and a correction factor to accommodate the buildup of decay progeny, resulting in increased toxicity.
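The report names the four factors of the geotoxicity hazard index but the abstract does not state how they are combined. The sketch below assumes a simple multiplicative form, which is a common convention for such composite indices and not necessarily the report's actual formula; the factor values are invented.

```python
def geotoxicity_hazard_index(dilution_volume, persistence,
                             availability, progeny_correction):
    """Combine the four factors named in the report.
    The multiplicative form is an assumption, not the report's formula:
    - dilution_volume: intrinsic toxicity (water volume to dilute to
      drinking-water levels)
    - persistence: 1.0 for stable materials, smaller for shorter-lived ones
    - availability: transport potential relative to a natural analog
    - progeny_correction: accounts for buildup of decay progeny"""
    return dilution_volume * persistence * availability * progeny_correction

# Invented factor values for a hypothetical buried material.
print(geotoxicity_hazard_index(2.0, 0.5, 3.0, 1.0))  # 3.0
```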
8. Transportation Technical Environmental Information Center index
Energy Technology Data Exchange (ETDEWEB)
Davidson, C. A.; Foley, J. T.
1980-10-01
In an effort to determine the environmental intensities to which energy materials in transit may be exposed, a Data Center of technical environmental information has been established by Sandia National Laboratories, Division 5523, for the DOE Office of Transportation Fuel Storage. This document is an index which can be used to request data of interest. Access to the information held is not limited to Sandia personnel. The purpose of the Transportation Technical Environmental Information Center is to collect, analyze, store, and make available descriptions of the environment of transportation expressed in engineering terms. The data stored in the Center are expected to be useful in a variety of transportation related analyses. Formulations of environmental criteria for shipment of cargo, risk assessments, and detailed structural analyses of shipping containers are examples where these data have been applied. For purposes of indexing and data retrieval, the data are catalogued under two major headings: Normal and Abnormal Environments.
9. Quality indexing with computer-aided lexicography
Science.gov (United States)
Buchan, Ronald L.
1992-01-01
Indexing with computers is a far cry from indexing with the first indexing tool, the manual card sorter. With the aid of computer-aided lexicography, both indexing and indexing tools can provide standardization, consistency, and accuracy, resulting in greater quality control than ever before. A brief survey of computer activity in indexing is presented with detailed illustrations from NASA activity. Applications from techniques mentioned, such as Retrospective Indexing (RI), can be made to many indexing systems. In addition to improving the quality of indexing with computers, the improved efficiency with which certain tasks can be done is demonstrated.
10. 2015 NAIP Partner Availability Map
Data.gov (United States)
Farm Service Agency, Department of Agriculture — Shows the available NAIP imagery which NAIP Partners can access. Either Quarter Quads (QQs), Compressed County Mosaics (CCMs) or data that has been physically mailed...
11. Communication course – places available
CERN Multimedia
2012-01-01
Please note that there are some places available in the following communication course starting in November. For more information on the course, click on the course title, this will bring you to the training catalogue. You can then sign-up on line. For advice, you can contact Kerstin Fuhrmeister (70896) or kerstin.fuhrmeister@cern.ch Course Next session Duration Language Availability Gestion de temps 22 November 3 days French 8 places
12. MongoDB high availability
CERN Document Server
Mehrabani, Afshin
2014-01-01
This book has a perfect balance of concepts and their practical implementation along with solutions to make a highly available MongoDB server with clear instructions and guidance. If you are using MongoDB in a production environment and need a solution to make a highly available MongoDB server, this book is ideal for you. Familiarity with MongoDB is expected so that you understand the content of this book.
13. Comprehensive Study and Comparison of Information Retrieval Indexing Techniques
Directory of Open Access Journals (Sweden)
Zohair Malki
2016-01-01
Full Text Available This research compares the indexing techniques used in current information retrieval processes: inverted files, suffix trees, and signature files. Each technique is critically described and discussed, along with the differences in their use. The performance and stability of each indexing technique is critically studied and compared with the others. The paper also aims to show the role that indexing plays in the process of retrieving information. The detailed comparison of the three indexing techniques is intended to further the understanding of each.
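For concreteness, one of the three techniques compared, the inverted file, can be sketched in a few lines. This toy version (whitespace tokenization, no stemming, stop words or weighting) is for illustration only and is not drawn from the paper.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the sorted list of document ids that contain it."""
    postings = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            postings[term].add(doc_id)
    return {term: sorted(ids) for term, ids in postings.items()}

# Two tiny invented documents.
docs = {1: "indexing for retrieval", 2: "retrieval with suffix trees"}
idx = build_inverted_index(docs)
print(idx["retrieval"])  # [1, 2]
```

A query is then answered by intersecting the posting lists of its terms, which is what makes inverted files fast for retrieval.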
14. Effective indexing for face recognition
Science.gov (United States)
Sochenkov, I.; Sochenkova, A.; Vokhmintsev, A.; Makovetskii, A.; Melnikov, A.
2016-09-01
Face recognition is one of the most important tasks in computer vision and pattern recognition, and it is useful for security systems. In some situations it is necessary to identify a person among many others; for this case, this work presents a new approach to data indexing that provides fast retrieval in large image collections. Data indexing in this research consists of five steps. First, we detect the area containing the face; second, we align the face; then we detect the areas containing the eyes and eyebrows, the nose and the mouth. After that, we find the key points of each area using different descriptors, and finally we index these descriptors with the help of a quantization procedure. An experimental analysis of this method is performed. This paper shows that the proposed method performs at the level of state-of-the-art face recognition methods while also returning results quickly, which is important for systems that provide safety.
15. Index for Wind Power Variability
Energy Technology Data Exchange (ETDEWEB)
Kiviluoma, Juha; Holttinen, Hannele; Cutululis, Nicolaos Antonio; Litong-Palima, Marisciel; Scharff, Richard; Milligan, Michael; Weir, David Edward
2014-11-13
Variability of large-scale wind power generation depends on several factors: the characteristics of the installed wind power plants, the size of the area where the plants are installed, the geographic dispersion within that area, and its weather regime(s). Variability can be described by ramps in power generation, i.e., changes from one time period to the next. Given enough data points, it can be described by a probability density function. This approach focuses on two dimensions of variability: the duration of the ramp and the probability distribution. This paper proposes an index based on these two dimensions to enable comparisons and characterizations of variability under different conditions. The index is tested with real, large-scale wind power generation data from several countries. Considerations in forming such an index are discussed, as well as the main results regarding the drivers of the variability experienced in the different datasets.
16. Witten Index for Noncompact Dynamics
CERN Document Server
Lee, Seung-Joo
2016-01-01
Among gauged dynamics motivated by string theory, we find many with gapless asymptotic directions. Although the natural boundary condition for ground states is $L^2$, one often turns on chemical potentials or supersymmetric mass terms to regulate the infrared issues, instead, and computes the twisted partition function. We point out how this procedure generically fails to capture physical $L^2$ Witten index with often misleading results. We also explore how, nevertheless, the Witten index is sometimes intricately embedded in such twisted partition functions. For $d=1$ theories with gapless continuum sector from gauge multiplets, such as non-primitive quivers and pure Yang-Mills, a further subtlety exists, leading to fractional expressions. Quite unexpectedly, however, the integral $L^2$ Witten index can be extracted directly and easily from the twisted partition function of such theories. This phenomenon is tied to the notion of the rational invariant that appears naturally in the wall-crossing formulae, and ...
17. Socializing the h-index
CERN Document Server
Cormode, Graham; Muthukrishnan, S; Thompson, Brian
2012-01-01
A variety of bibliometric measures have been proposed to quantify the impact of researchers and their work. The h-index is a notable and widely-used example which aims to improve over simple metrics such as raw counts of papers or citations. However, a limitation of this measure is that it considers authors in isolation and does not account for contributions through a collaborative team. To address this, we propose a natural variant that we dub the Social h-index. The idea is to redistribute the h-index score to reflect an individual's impact on the research community. In addition to describing this new measure, we provide examples, discuss its properties, and contrast with other measures.
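The baseline measure being redistributed can be computed directly from its definition: the largest h such that at least h papers have at least h citations each. The Social h-index redistribution itself is not specified in the abstract, so only the standard h-index is sketched here, on invented citation counts.

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # paper at this rank still has enough citations
            h = rank
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
```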
18. THE COMPARISON OF STANDARDIZED PRECIPITATION INDEX (SPI), STANDARDIZED REFERENCE EVAPOTRANSPIRATION INDEX (SEI) AND STANDARDIZED CLIMATIC WATER BALANCE (SCWB)
Directory of Open Access Journals (Sweden)
Marian Rojek
2014-10-01
Full Text Available The standardized precipitation index (SPI), standardized reference evapotranspiration index (SEI) and standardized climatic water balance (SCWB) were used to analyze the humidity conditions in the vegetation periods of the years 1964–2006 at the Wrocław-Swojec Observatory. SPI and SEI were calculated on the assumption that the empirical monthly precipitation sums and monthly sums of reference evapotranspiration, obtained from the Wrocław-Swojec data, are gamma distributed. Since the monthly sums of the climatic water balance for the same data are normally distributed, the CWB required only standardization to obtain SCWB. The aim of the study was to compare these three indexes: the standardized precipitation index (SPI), the standardized reference evapotranspiration index (SEI) and the standardized climatic water balance (SCWB).
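Since the abstract states that monthly CWB sums are normally distributed, their standardization reduces to a z-score (SPI and SEI additionally require a gamma fit, which is not sketched here). The monthly values below are invented, not the Wrocław-Swojec series.

```python
import math

def standardize(values):
    """Z-score standardization, appropriate for SCWB because monthly CWB
    sums are approximately normally distributed (per the abstract)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / sd for v in values]

cwb = [-30.0, -10.0, 0.0, 10.0, 30.0]  # invented monthly CWB values (mm)
print([round(z, 2) for z in standardize(cwb)])  # [-1.5, -0.5, 0.0, 0.5, 1.5]
```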
19. Indexes to Nuclear Regulatory Commission Issuances
Energy Technology Data Exchange (ETDEWEB)
NONE
1996-12-31
Digests and indexes for issuances of the Commission (CLI), the Atomic Safety and Licensing Board Panel (LBP), the Administrative Law Judges (ALJ), the Directors' Decisions (DD), and the Decisions on Petitions for Rulemaking (DPRM) are presented in this document. These digests and indexes are intended to serve as a guide to the issuances. Information elements are displayed in one or more of five separate formats, arranged as follows: Case Name Index; Headers and Digests; Legal Citations Index; Subject Index; and Facility Index.
20. Indexing from thesauri to the semantic web
CERN Document Server
de Keyser, Piet
2012-01-01
Indexing consists of both novel and more traditional techniques. Cutting-edge indexing techniques, such as automatic indexing, ontologies, and topic maps, were developed independently of older techniques such as thesauri, but it is now recognized that these older methods also hold expertise. Indexing describes various traditional and novel indexing techniques, giving information professionals and students of library and information sciences a broad and comprehensible introduction to indexing. This title consists of twelve chapters: an introduction to subject headings and thesauri; automatic i
1. Indexing spoken audio by LSA and SOMs
OpenAIRE
2000-01-01
This paper presents an indexing system for spoken audio documents. The framework is indexing and retrieval of broadcast news. The proposed indexing system applies latent semantic analysis (LSA) and self-organizing maps (SOM) to map the documents into a semantic vector space and to display the semantic structures of the document collection. The SOM is also used to enhance the indexing of the documents that are difficult to decode. Relevant index terms and suitable index weights are computed by...
2. Use of the glycemic index in nutrition education
Directory of Open Access Journals (Sweden)
Flávia Galvão Cândido
2013-02-01
Full Text Available Recently, the lack of studies providing practical guidance on the use of the glycemic index has been indicated as the cause of its limited use in nutrition education. The aim of this study is to give instructions on the use of the glycemic index as a tool in nutrition education to stimulate the consumption of low glycemic index foods. Studies published over the past 12 years, in addition to classic studies on this topic, found in the databases MedLine, ScienceDirect, SciELO and Lilacs and exploring the importance of the glycemic index and the factors that affect it, were selected for this article. The preparation of lists grouping foods according to their glycemic index should be based on information found in tables and specific web sites. This is an interesting strategy that must be very carefully conducted, considering the eating habits of the assisted people. To reduce the postprandial blood glucose response, high glycemic index foods should be consumed in association with the following foods: high-protein and low-fat foods, good-quality oils and unprocessed foods with high fiber content. Caffeine should also be avoided. The glycemic index should be considered as an additional carbohydrate-selection tool, which should be part of a nutritionally balanced diet capable of promoting and/or maintaining body weight and health.
3. The GPCC Drought Index – a new, combined and gridded global drought index
Directory of Open Access Journals (Sweden)
M. Ziese
2014-08-01
Full Text Available The Global Precipitation Climatology Centre Drought Index (GPCC-DI) provides estimations of water supply anomalies with respect to long-term statistics. It is a combination of the Standardized Precipitation Index with adaptations from the Deutscher Wetterdienst (SPI-DWD) and the Standardized Precipitation Evapotranspiration Index (SPEI). Precipitation data were taken from the Global Precipitation Climatology Centre (GPCC) and temperature data from NOAA's Climate Prediction Center (CPC). The GPCC-DI is available with accumulation periods of 1, 3, 6, 9, 12, 24 and 48 months for different applications. It has been issued monthly since January 2013, typically released on the 10th day of the following month, depending on the availability of the input data. It is calculated on a regular grid with 1° spatial resolution. All accumulation periods are integrated into one netCDF file for each month. This data set is referenced by doi:10.5676/DWD_GPCC/DI_M_100 and is available free of charge from the GPCC website ftp://ftp.dwd.de/pub/data/gpcc/html/gpcc_di_doi_download.html.
4. Cacti with maximum Kirchhoff index
OpenAIRE
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
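For intuition about the Kirchhoff index, the cycle $C_n$ (the simplest cactus block) admits a closed-form resistance distance: the two arcs joining a pair of vertices act as resistors in parallel. This is a standard textbook fact used for illustration, not a result of the paper.

```python
def kirchhoff_index_cycle(n):
    """Kirchhoff index of the cycle C_n, using the closed-form resistance
    distance r(i, j) = k * (n - k) / n, where k is the arc length from i
    to j (the two arcs between i and j behave as parallel resistors)."""
    kf = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            k = j - i
            kf += k * (n - k) / n
    return kf

print(kirchhoff_index_cycle(3))  # 2.0, agreeing with (n**3 - n) / 12
```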
5. Resources Available to Department Chairs.
Science.gov (United States)
Lynch, David M.; Bowker, Lee H.
Resources available to department chairs from the following sources are described: the department's discipline; the national higher education community; the local institution; and the chair's own skills, background, roles, and structural placement within the organization. The use of these resources to deal with common problems faced by chairs is…
6. Sediment availability on burnt hillslopes
Science.gov (United States)
Nyman, Petter; Sheridan, Gary; Moody, John; Smith, Hugh; Noske, Philip; Lane, Patrick
2013-04-01
In general, erosion has been modeled as being proportional to some form of energy or force (such as shear stress or stream power), with the proportionality constant being erodibility, which is a characterization of sediment availability. It is unclear whether erodibility is constant with depth on recently burnt hillslopes. This study used both field- and laboratory-based experiments to quantify sediment availability as a depth-dependent parameter on burnt hillslopes. An explicit representation of the fire effect on sediment availability was achieved by assuming that fire effects produce a non-cohesive soil layer of variable depth. This depth is characterized by a probability density function with a single parameter that changed during recovery (0-3 years) as the available soil was depleted. Measurements in southeastern Australia found that initially after a wildfire the hillslope had a layer (0.75-0.91 cm in depth) of non-cohesive soil, which represented 97-117 t/ha of transport-limited sediment. The thickness of this layer decreased exponentially with time since the wildfire. Additional results showed that fine roots, soil depth and root density accounted for ~60 % of the variation in erodibility at shallow soil depths, while soil properties (% silt and clay in particular) became more important as predictors of erodibility at greater depths. The results are organized into a conceptual framework for modeling fire effects on sediment availability for systems with low and high pre-fire erodibility. The fire effect produces an equal depth of non-cohesive soil for both systems, but this represents a greater perturbation for systems with low pre-fire erodibility than for those with high pre-fire erodibility.
7. Glycemic index of common Malaysian fruits.
Science.gov (United States)
Robert, S Daniel; Ismail, Aziz Al-Safi; Winn, Than; Wolever, Thomas M S
2008-01-01
The objective of the present study was to measure the glycemic index of durian, papaya, pineapple and watermelon grown in Malaysia. Ten (10) healthy volunteers (5 females, 5 males; body mass index 21.18 ± 1.7 kg/m²) consumed 50 g available-carbohydrate portions of glucose (reference food) and four test foods (durian, papaya, pineapple and watermelon) in random order after an overnight fast. Glucose was tested on three separate occasions, and the test foods were each tested once. Postprandial plasma glucose was measured at intervals for two hours after intake of the test foods. Incremental areas under the curve were calculated, and the glycemic index was determined by expressing the area under the curve after each test food as a percentage of the mean area under the curve after glucose. The results showed that the area under the curve after pineapple, 232 ± 24 mmol·min/L, was significantly greater than those after papaya, 147 ± 14, watermelon, 139 ± 8, and durian, 124 ± 13 mmol·min/L (p…). The validity of these results depends on the accuracy of the data in the food tables upon which the portion sizes tested were based.
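The protocol described above (incremental area under the curve for the test food, expressed as a percentage of the AUC after glucose) can be sketched with the trapezoidal rule. The time points and glucose values below are invented for demonstration, not the study's data.

```python
def incremental_auc(times, glucose):
    """Incremental area under the glucose curve above the fasting baseline,
    by the trapezoidal rule; dips below baseline are ignored."""
    baseline = glucose[0]
    auc = 0.0
    for i in range(1, len(times)):
        y0 = max(glucose[i - 1] - baseline, 0.0)
        y1 = max(glucose[i] - baseline, 0.0)
        auc += 0.5 * (y0 + y1) * (times[i] - times[i - 1])
    return auc

def glycemic_index(auc_test, auc_reference):
    """GI = AUC after the test food as a percentage of the AUC after glucose."""
    return 100.0 * auc_test / auc_reference

# Invented example: minutes after intake and plasma glucose (mmol/L).
t = [0, 30, 60, 120]
test_food = [5.0, 7.0, 6.0, 5.0]
reference = [5.0, 8.5, 7.5, 5.5]
print(round(glycemic_index(incremental_auc(t, test_food),
                           incremental_auc(t, reference)), 1))  # 45.2
```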
Energy Technology Data Exchange (ETDEWEB)
Lamb, H.H. [Univ. of East Anglia, Norwich (United Kingdom). Climatic Research Unit
1985-09-01
Dust ejected into the high atmosphere during explosive volcanic eruptions has been considered a possible cause of climatic change. Dust veils created by volcanic eruptions can reduce the amount of light reaching the Earth's surface and can cause reductions in surface temperatures. These climatic effects can be seen for several years following some eruptions, and the magnitude and duration of the effects depend largely on the density or amount of tephra (i.e., dust) ejected, the latitude of injection, and atmospheric circulation patterns. Lamb (1970) formulated the Dust Veil Index (DVI) in an attempt to quantify the impact on the Earth's energy balance of changes in atmospheric composition due to explosive volcanic eruptions. The DVI is a numerical index that quantifies the impact of a particular volcanic eruption's release of dust and aerosols over the years following the event. DVI values are available for past volcanic eruptions and have been used in estimating Lamb's dust veil indices.
9. Disaster Mythology and Availability Cascades
Directory of Open Access Journals (Sweden)
Lisa Grow Sun
2013-04-01
10. A data fusion-based drought index
Science.gov (United States)
Azmi, Mohammad; Rüdiger, Christoph; Walker, Jeffrey P.
2016-03-01
Drought and water stress monitoring plays an important role in the management of water resources, especially during periods of extreme climate conditions. Here, a data fusion-based drought index (DFDI) has been developed and analyzed for three locations of varying land use and climate regime in Australia. The proposed index comprehensively considers all types of drought through a selection of indices and proxies associated with each drought type. In deriving the proposed index, weekly data from three different sources (the OzFlux Network, the Asia-Pacific Water Monitor, and the MODIS-Terra satellite) were employed to first derive commonly used individual standardized drought indices (SDIs), which were then grouped using an advanced clustering method. Next, three different multivariate methods (principal component analysis, factor analysis, and independent component analysis) were utilized to aggregate the SDIs located within each group. For the two clusters in which the grouped SDIs best reflected the water availability and vegetation conditions, the variables were aggregated by averaging the standardized first principal components of the different multivariate methods. Then, considering those two aggregated indices as well as the classifications of months (dry/wet months and active/non-active months), the proposed DFDI was developed. Finally, the symbolic regression method was used to derive mathematical equations for the proposed DFDI. The results presented here show that, by simultaneously considering both hydrometeorological and ecological concepts to define the real water stress of the study areas, the proposed index reveals aspects of water stress monitoring that previous indices could not.
11. Nitrogen availability of biogas residues
Energy Technology Data Exchange (ETDEWEB)
El-Sayed Fouda, Sara
2011-09-07
The objectives of this study were to characterize biogas residues, either unseparated or separated into a liquid and a solid phase, from the fermentation of different substrates with respect to their N and C content. In addition, the short- and long-term effects of the application of these biogas residues on the N availability and N utilization by ryegrass were investigated. It is concluded that unseparated or liquid separated biogas residues provide N at least corresponding to their ammonium content, and that after the first fertilizer application the C_org:N_org ratio of the biogas residues was a crucial factor for the N availability. After long-term application, the organic N accumulated in the soil leads to an increased release of N.
12. On the harmonic index of graph operations
Directory of Open Access Journals (Sweden)
B. Shwetha Shetty
2015-12-01
Full Text Available The harmonic index of a connected graph $G$, denoted by $H(G)$, is defined as $H(G)=\sum_{uv\in E(G)}\frac{2}{d_u+d_v}$, where $d_v$ is the degree of a vertex $v$ in $G$. In this paper, expressions for the harmonic indices of the join, corona product, Cartesian product, composition and symmetric difference of graphs are derived.
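The definition translates directly into code; the example below evaluates $H$ on the path $P_4$ and is for illustration only.

```python
def harmonic_index(edges):
    """H(G) = sum over edges uv of 2 / (d_u + d_v)."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    return sum(2.0 / (degree[u] + degree[v]) for u, v in edges)

print(harmonic_index([(1, 2), (2, 3), (3, 4)]))  # H(P_4) = 11/6
```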
13. Available lysine in canned fish
OpenAIRE
Rao, D. Ramananda; Gadre, Ujjwala V.
1984-01-01
Otolithus argenteus was canned in brine by heat processing at two different steam pressures, either 0.70 kg/cm² or 1.05 kg/cm², for 25 minutes. The nutritive value of the canned fish, as evaluated by total nitrogen and available lysine, did not alter much either during heat processing or during storage over a period of nine months at 28 ± 5 °C.
14. Multimedia Content Analysis and Indexing for Filtering and Retrieval Applications
Directory of Open Access Journals (Sweden)
N. Dimitrova
1999-01-01
Full Text Available Today's multimedia landscape depends on advances in data compression technology coupled with high-bandwidth networks and storage capacity. Given its size and nature, how can multimedia content be analyzed and indexed? This paper surveys techniques for content-based analysis, retrieval and filtering of digital images, audio and video, and focuses on basic methods for extracting features that enable indexing and search applications. [ed.
15. Dependence of Physical Parameters of Compound Semiconductors on Refractive Index
Directory of Open Access Journals (Sweden)
R.R. Reddy
2003-07-01
Full Text Available Interesting relationships have been found between refractive index, plasmon energy, electronic polarisability, bond length, microhardness, bulk modulus, force constants and lattice energy. An attempt has been made for the first time to correlate only one physical parameter with others. The calculated values are in good agreement with the experimental values as well as with the values reported in the literature. Refractive index data is the only one parameter required to estimate all the above parameters.
16. Places disponibles/Places available
CERN Multimedia
2004-01-01
Given the time required to print the Bulletin, these places may no longer be available at the time of publication; please check our Web site for the latest update. The number of places available may vary. Places are available in the following courses: Introduction à Outlook : 19.8.2004 (1 day) Outlook (short course I) : E-mail : 31.8.2004 (2 hours, morning) Outlook (short course II) : Calendar, Tasks and Notes : 31.8.2004 (2 hours, afternoon) Instructor-led WBTechT Study or Follow-up for Microsoft Applications : 7.9.2004 (morning) Outlook (short course III) : Meetings and Delegation : 7.9.2004 (2 hours, afternoon) Introduction au VHDL et utilisation du simulateur NCVHDL de CADENCE : 7 & 8.9.2004 (2 days) Joint PVSS JCOP Framework : 13 - 17.9.2004 (5 days) AutoCAD 2002 - niveau 1 : 13, 14, 23, 24.9.2004 (4 days) Programmation S...
17. Nomina dubia and available names.
Science.gov (United States)
Melville, R V
1980-01-01
The availability or non-availability of a name is a question of historical fact. A name once made available under the International Code of Zoological Nomenclature can be rendered unavailable only by use of the plenary powers of the Commission. The question whether a name is a nomen dubium or not is a matter of taxonomic judgement. The difficulty with the Sarcocystinae discussed by Frenkel et al. (1979) stems from the fact that, under the present provisions of the Code, it is not possible to designate, for the species concerned, types that will serve any useful function. The Commission is now considering changes to the Code proposed to remedy this defect in a general, legislative way. It will not, as a matter of general practice, entertain proposals for the suppression of names merely because they are considered to be nomina dubia. The application submitted by Professor Frenkel and his colleagues will nevertheless be published in the Bulletin of Zoological Nomenclature so that the Commission can, if necessary, deliver a ruling on it before the new edition of the Code has appeared.
18. Astatine-211: production and availability.
Science.gov (United States)
Zalutsky, Michael R; Pruszynski, Marek
2011-07-01
The 7.2-h half-life radiohalogen (211)At offers many potential advantages for targeted α-particle therapy; however, its use for this purpose is constrained by its limited availability. Astatine-211 can be produced in reasonable yield from natural bismuth targets via the (209)Bi(α,2n)(211)At nuclear reaction using straightforward methods. There is some debate as to the best incident α-particle energy for maximizing (211)At production while minimizing production of (210)At, which is problematic because of its 138.4-day half-life, α-particle-emitting daughter, (210)Po. The intrinsic cost of producing (211)At is reasonably modest and comparable to that of commercially available (123)I. The major impediment to (211)At availability is attributed to the need for a medium-energy α-particle beam for its production. On the other hand, there are about 30 cyclotrons in the world that have the beam characteristics required for (211)At production.
19. Teaching Physiology with Citation Index
Science.gov (United States)
Klemm, W. R.
1976-01-01
Explains use of the Citation Index in writing term papers by assigning an older publication as a starting point in a literature search. By reading the original research report and following its subsequent use by other researchers, the student discovers the impact of the original research. (CS)
20. The weighted vertex PI index
CERN Document Server
Ilić, Aleksandar
2011-01-01
The vertex PI index is a distance-based molecular structure descriptor that has recently found numerous chemical applications. In order to increase the diversity of this topological index for bipartite graphs, we introduce a weighted version defined as $PI_w (G) = \sum_{e = uv \in E} (deg (u) + deg (v)) (n_u (e) + n_v (e))$, where $deg (u)$ denotes the vertex degree of $u$ and $n_u (e)$ denotes the number of vertices of $G$ whose distance to the vertex $u$ is smaller than the distance to the vertex $v$. We establish basic properties of $PI_w (G)$ and prove various lower and upper bounds. In particular, the path $P_n$ has the minimal, while the complete tripartite graph $K_{n/3, n/3, n/3}$ has the maximal, weighted vertex $PI$ index among graphs with $n$ vertices. We also compute exact expressions for the weighted vertex PI index of the Cartesian product of graphs. Finally, we present modifications of two inequalities and open new perspectives for future research.
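The definition quoted in this abstract can be computed directly from breadth-first-search distances. Below is a minimal sketch (not from the paper); the adjacency-list representation, function names, and the $P_4$ example are illustrative assumptions.

```python
# Sketch: weighted vertex PI index
#   PI_w(G) = sum over edges e = uv of (deg(u) + deg(v)) * (n_u(e) + n_v(e)),
# where n_u(e) counts vertices strictly closer to u than to v.
from collections import deque

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

def weighted_vertex_pi(adj):
    """adj: dict mapping each vertex to a list of neighbors (undirected graph)."""
    total = 0
    seen = set()  # avoid counting each undirected edge twice
    for u in adj:
        du = bfs_dist(adj, u)
        for v in adj[u]:
            if (v, u) in seen:
                continue
            seen.add((u, v))
            dv = bfs_dist(adj, v)
            n_u = sum(1 for w in adj if du[w] < dv[w])  # closer to u
            n_v = sum(1 for w in adj if dv[w] < du[w])  # closer to v
            total += (len(adj[u]) + len(adj[v])) * (n_u + n_v)
    return total

# Path P_4 (0-1-2-3): bipartite, so n_u + n_v = n on every edge.
p4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(weighted_vertex_pi(p4))  # → 40
```

For bipartite graphs no vertex is equidistant from the two ends of an edge, so $n_u(e) + n_v(e) = n$ and the index reduces to $n \sum_{uv \in E} (deg(u) + deg(v))$, consistent with the value 40 for $P_4$.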
1. Anomalies and noncommutative index theory
CERN Document Server
Perrot, D
2006-01-01
These are the notes of a lecture given during the summer school "Geometric and Topological Methods for Quantum Field Theory", Villa de Leyva, Colombia, July 11-29, 2005. We review basic facts concerning gauge anomalies and discuss the link with the Connes-Moscovici index formula in noncommutative geometry.
2. Index to AEC Information Booklets
Energy Technology Data Exchange (ETDEWEB)
None
1978-01-01
The U. S. Atomic Energy Commission publishes a series of information booklets for the general public. These booklets explain many aspects of nuclear science. Because these booklets cover such a variety of scientific fields, this index was prepared to help the reader find quickly those booklets that contain the information he needs.
3. Index theorems for quantum graphs
CERN Document Server
Fulling, S A; Wilson, J H
2007-01-01
In geometric analysis, an index theorem relates the difference of the numbers of solutions of two differential equations to the topological structure of the manifold or bundle concerned, sometimes using the heat kernels of two higher-order differential operators as an intermediary. In this paper, the case of quantum graphs is addressed. A quantum graph is a graph considered as a (singular) one-dimensional variety and equipped with a second-order differential Hamiltonian H (a "Laplacian") with suitable conditions at vertices. For the case of scale-invariant vertex conditions (i.e., conditions that do not mix the values of functions and of their derivatives), the constant term of the heat-kernel expansion is shown to be proportional to the trace of the internal scattering matrix of the graph. This observation is placed into the index-theory context by factoring the Laplacian into two first-order operators, H = A*A, and relating the constant term to the index of A. An independent consideration provides an index f...
4. Future of gradient index optics
Science.gov (United States)
Hashizume, Hideki; Hamanaka, Kenjiro; Graham, Alan C., III; Zhu, X. Frank
2001-11-01
First developed over 30 years ago, gradient index lenses play an important role not only in telecommunications technology, but also in applications such as information interface and biomedical technology. Traditional manufacturing consists of doping a certain ion, A+, into the mother glass, drawing the glass into rods, and then immersing the rods into a molten salt bath containing another ion, B+. During a thermal ion exchange process, the original ion migrates out of the mother glass and is replaced by the alternate ion, creating a refractive index variation. Current research is being conducted to improve the thermal ion exchange technology and open new applications. This research includes extending working distances to greater than 100 mm, decreasing the lens diameter, increasing the effective radius, and combining the technology with other technologies such as photolithographically etched masks to produce arrays of gradient index lenses. As a result of this ongoing research, the gradient index lens is expected to continue to be the enabling optical technology in the first decade of the new millennium and beyond.
5. USGS 1-min Dst index
Science.gov (United States)
Gannon, J.L.; Love, J.J.
2011-01-01
We produce a 1-min time resolution storm-time disturbance index, the USGS Dst, called Dst8507-4SM. This index is based on minute resolution horizontal magnetic field intensity from low-latitude observatories in Honolulu, Kakioka, San Juan and Hermanus, for the years 1985-2007. The method used to produce the index uses a combination of time- and frequency-domain techniques, which more clearly identifies and excises solar-quiet variation from the horizontal intensity time series of an individual station than the strictly time-domain method used in the Kyoto Dst index. The USGS 1-min Dst is compared against the Kyoto Dst, Kyoto Sym-H, and the USGS 1-h Dst (Dst5807-4SH). In a time series comparison, Sym-H is found to produce more extreme values during both sudden impulses and main phase maximum deviation, possibly due to the latitude of its contributing observatories. Both Kyoto indices are shown to have a peak in their distributions below zero, while the USGS indices have a peak near zero. The USGS 1-min Dst is shown to have the higher time resolution benefits of Sym-H, while using the more typical low-latitude observatories of Kyoto Dst. © 2010.
6. Mining and Indexing Graph Databases
Science.gov (United States)
Yuan, Dayu
2013-01-01
Graphs are widely used to model structures and relationships of objects in various scientific and commercial fields. Chemical molecules, proteins, malware system-call dependencies and three-dimensional mechanical parts are all modeled as graphs. In this dissertation, we propose to mine and index those graph data to enable fast and scalable search.…
7. Coming to Schools: Creativity Indexes
Science.gov (United States)
Robelen, Erik W.
2012-01-01
At a time when U.S. political and business leaders are raising concerns about the need to better nurture creativity and innovative thinking among young people, several states are exploring the development of an index that would gauge the extent to which schools provide opportunities to foster those qualities. In Massachusetts, a new state…
8. 1988 Bulletin compilation and index
Energy Technology Data Exchange (ETDEWEB)
NONE
1989-02-01
This document is published to provide current information about the national program for managing spent fuel and high-level radioactive waste. This document is a compilation of issues from the 1988 calendar year. A table of contents and one index have been provided to assist in finding information.
9. Index coding via linear programming
CERN Document Server
Blasiak, Anna; Lubetzky, Eyal
2010-01-01
Index Coding has received considerable attention recently motivated in part by applications such as fast video-on-demand and efficient communication in wireless networks and in part by its connection to Network Coding. The basic setting of Index Coding encodes the side-information relation, the problem input, as an undirected graph and the fundamental parameter is the broadcast rate $\\beta$, the average communication cost per bit for sufficiently long messages (i.e. the non-linear vector capacity). Recent nontrivial bounds on $\\beta$ were derived from the study of other Index Coding capacities (e.g. the scalar capacity $\\beta_1$) by Bar-Yossef et al (FOCS'06), Lubetzky and Stav (FOCS'07) and Alon et al (FOCS'08). However, these indirect bounds shed little light on the behavior of $\\beta$ and its exact value remained unknown for \\emph{any graph} where Index Coding is nontrivial. Our main contribution is a hierarchy of linear programs whose solutions trap $\\beta$ between them. This enables a direct information-...
10. Equiseparability on Terminal Wiener Index
DEFF Research Database (Denmark)
Deng, Xiaotie; Zhang, Jie
2012-01-01
The aim of this work is to explore the properties of the terminal Wiener index, which was recently proposed by Gutman et al. (2004) [3], and to show the fact that there exist pairs of trees and chemical trees which cannot be distinguished by using it. We give some general methods for constructing...
11. Transportation Environment Data Bank index
Energy Technology Data Exchange (ETDEWEB)
Davidson, C.A.; Foley, J.T.
1977-04-01
In an effort to determine the environment intensities to which energy materials in transit will be exposed, a "Data Bank" of environmental information has been established by Sandia Laboratories, Division 1285 for the ERDA Division of Environmental Control Technology. This document is an index which can be used to request data of interest.
12. The Effectiveness of Monetary Policy Towards Stock Index. Case Study: Jakarta Islamic Index 2006-2014
Directory of Open Access Journals (Sweden)
Lak lak Nashat el Hasanah
2016-06-01
Full Text Available Fluctuations in economic conditions are an important indicator for investor decision making. Investors act to minimize risk while maximizing profit, and one way to do so is to observe the condition of macroeconomic variables under monetary policy. This research aims to analyze the impact of inflation, money supply, exchange rate, and the BI rate on the Jakarta Islamic Index. The data used are time series for the period 2006-2014. Multiple linear regression with a Chow test and a dummy-variable approach is applied to compare the behavior of each independent variable. The results show that, partially, the BI rate and exchange rate had a negative impact on the Jakarta Islamic Index before the global monetary crisis of 2008, while inflation and money supply had no significant impact. After the global monetary crisis of 2008, partially, the BI rate and money supply had a significant positive influence on the Jakarta Islamic Index, while the exchange rate and inflation were not significantly influential. Simultaneously, inflation, money supply, exchange rate, and the BI rate all influence the Jakarta Islamic Index.
13. A Proposed New Index for Clinical Evaluation of Interproximal Soft Tissues: The Interdental Pressure Index
Directory of Open Access Journals (Sweden)
Checchi Luigi
2014-01-01
Full Text Available The interdental pressure index (IPI) is introduced to specifically evaluate clinical interproximal-tissue conditions and assess the effect of interproximal hygiene stimulation. This index scores clinical responses of periodontal tissues to the apical pressure of a horizontally placed periodontal probe. It is negative when gingival tissues are firm, bleeding-free, and slightly ischemic under the stimulation; otherwise it is positive. The clinical validation showed high intraoperator agreement (0.92; 95% CI: 0.82–0.96; P=0.0001) and excellent interoperator agreement (0.76; 95% CI: 0.14–1.38; P=0.02). High internal consistency with bleeding on probing (κ=0.88) and the gingival index (Cronbach's α=0.81) was obtained. Histological validation obtained high sensitivity (100%) and specificity (80%) for IPI+ toward the inflammatory active form. The same results were recorded for IPI− toward the chronic inactive form. The IPI is thus a simple and noninvasive method with low error probability and good reflection of histological condition that can be applied for oral hygiene motivation. Patient compliance with oral hygiene instructions is essential in periodontal therapy, and the IPI can be a practical and intuitive tool to check and reinforce this important aspect.
14. Finding Chemical Information through Citation Index Searching
Science.gov (United States)
Smith, Allan L.
1999-08-01
The concept of indexing the scientific literature through cited references (citation indexing) is explained and reviewed. Both print and electronic products based on citation indexing are discussed, and examples of searching the latter are included. Citation indexing is also useful in mapping the scientific literature itself and in assessing the contributions of individual scientists.
15. Indexing and Retrieval for the Web.
Science.gov (United States)
Rasmussen, Edie M.
2003-01-01
Explores current research on indexing and ranking as retrieval functions of search engines on the Web. Highlights include measuring search engine stability; evaluation of Web indexing and retrieval; Web crawlers; hyperlinks for indexing and ranking; ranking for metasearch; document structure; citation indexing; relevance; query evaluation;…
16. Development of indoor environmental index: Air quality index and thermal comfort index
Science.gov (United States)
Saad, S. M.; Shakaff, A. Y. M.; Saad, A. R. M.; Yusof, A. M.; Andrew, A. M.; Zakaria, A.; Adom, A. H.
2017-03-01
In this paper, an index for indoor air quality (also known as IAQI) and a thermal comfort index (TCI) have been developed. The IAQI was modified from the existing outdoor air quality index (AQI) designed by the United States Environmental Protection Agency (US EPA). In order to measure the index, a real-time monitoring system for indoor air quality was developed. The proposed system consists of three parts: sensor module cloud, base station, and service-oriented client. The sensor module cloud (SMC) contains collections of sensor modules that measure the air quality data and transmit the captured data to the base station wirelessly. Each sensor module includes an integrated sensor array that can measure indoor air parameters such as carbon dioxide, carbon monoxide, ozone, nitrogen dioxide, oxygen, volatile organic compounds, and particulate matter. Temperature and humidity were also measured in order to determine comfort conditions in the indoor environment. Results from several experiments show that the system is able to measure the air quality, presented as IAQI and TCI, in many indoor environment settings involving air conditioning, chemical presence, and cigarette smoke that may impact the air quality. They also show that air quality can change dramatically, so a real-time monitoring system is essential.
17. Fundamentals of database indexing and searching
CERN Document Server
Bhattacharya, Arnab
2014-01-01
Fundamentals of Database Indexing and Searching presents well-known database searching and indexing techniques. It focuses on similarity search queries, showing how to use distance functions to measure the notion of dissimilarity. After defining database queries and similarity search queries, the book organizes the most common and representative index structures according to their characteristics. The author first describes low-dimensional index structures, memory-based index structures, and hierarchical disk-based index structures. He then outlines useful distance measures and index structures
18. Disassembling iron availability to phytoplankton
Directory of Open Access Journals (Sweden)
Yeala Shaked
2012-04-01
Full Text Available The bioavailability of iron to microorganisms and its underlying mechanisms have far-reaching repercussions for many natural systems and diverse fields of research, including ocean biogeochemistry, carbon cycling and climate, harmful algal blooms, soil and plant research, bioremediation, pathogenesis and medicine. Within the framework of ocean sciences, short supply and restricted bioavailability of Fe to phytoplankton is thought to limit primary production and curtail atmospheric CO2 drawdown in vast ocean regions. Yet a clear-cut definition of bioavailability remains elusive, with elements of iron speciation and kinetics, phytoplankton physiology, light, temperature and microbial interactions, to name a few, all intricately intertwined into this concept. Here, in a synthesis of published and new data, we attempt to disassemble the complex concept of iron bioavailability to phytoplankton by individually exploring some of its facets. We distinguish between the fundamentals of bioavailability - the acquisition of Fe-substrate by phytoplankton - and added levels of complexity involving interactions among organisms, iron and ecosystem processes. We first examine how phytoplankton acquire free and organically-bound iron, drawing attention to the pervasiveness of the reductive uptake pathway in both prokaryotes and eukaryotes. Turning to acquisition rates, we propose to view the availability of various Fe-substrates to phytoplankton as a spectrum rather than an absolute all or nothing. We then demonstrate the use of uptake rate constants to make comparisons across different studies, organisms, Fe compounds and environments, and for gauging the contribution of various Fe substrates to phytoplankton growth in situ. Last, we describe the influence of aquatic microorganisms on iron chemistry and fate by way of organic complexation and bio-mediated redox transformations and examine the bioavailability of these bio-modified Fe species.
19. The Relationship between Macroeconomic Variables and ISE Industry Index
Directory of Open Access Journals (Sweden)
Ahmet Ozcan
2012-01-01
Full Text Available In this study, the relationship between macroeconomic variables and the Istanbul Stock Exchange (ISE) industry index is examined. Over the past years, numerous studies have analyzed these relationships, and the different results obtained from these studies have motivated further research. The relationship between stock exchange indices and macroeconomic variables has been well documented for developed markets. However, there are few studies regarding the relationship between macroeconomic variables and stock exchange indices for developing markets. Thus, this paper seeks to address the question of whether macroeconomic variables have a significant relationship with the ISE industry index, using monthly data for the period from 2003 to 2010. The selected macroeconomic variables for the study include interest rates, consumer price index, money supply, exchange rate, gold prices, oil prices, current account deficit and export volume. Johansen's cointegration test is utilized to determine the impact of the selected macroeconomic variables on the ISE industry index. The result of Johansen's cointegration test shows that the macroeconomic variables exhibit a long-run equilibrium relationship with the ISE industry index.
20. Condensed Extended Hyper-Wiener Index
Institute of Scientific and Technical Information of China (English)
LI Xin-Hua; Abraham F. Jalbout; JI Zhi
2008-01-01
According to the definitions of molecular connectivity and hyper-Wiener index, a novel set of hyper-Wiener indexes (Dn, mDn) were defined and named as condensed extended hyper-Wiener index, the potential usefulness of which in QSAR/QSPR is evaluated by its correlation with a number of C3-C8 alkanes as well as by a favorable comparison with models based on molecular connectivity index and overall Wiener index.
1. Biomimetic Gradient Index (GRIN) Lenses
Science.gov (United States)
2006-01-01
optics include single lenses inspired by cephalopod (octopus) eyes and a three-lens, wide field of view, optical system for a surveillance sensor...camera. Details are easily resolvable with the polymer lens. This lens system was installed on an Evolution unmanned aerial vehicle (UAV) with a...lens system was installed in an NRL Evolution UAV and used to record video images at a height of up to 1000 ft. The index gradients in the polymer
2. The Net Reclassification Index (NRI)
DEFF Research Database (Denmark)
Pepe, Margaret S.; Fan, Jing; Feng, Ziding
2015-01-01
The Net Reclassification Index (NRI) is a very popular measure for evaluating the improvement in prediction performance gained by adding a marker to a set of baseline predictors. However, the statistical properties of this novel measure have not been explored in depth. We demonstrate the alarming...... performance improvement, such as measures derived from the receiver operating characteristic curve, the net benefit function, and the Brier score, cannot be large due to poorly fitting risk functions....
3. Body Mass Index and Stroke
DEFF Research Database (Denmark)
Andersen, Klaus Kaae; Olsen, Tom Skyhøj
2013-01-01
Although obesity is associated with excess mortality and morbidity, mortality is lower in obese than in normal weight stroke patients (the obesity paradox). Studies now indicate that obesity is not associated with increased risk of recurrent stroke in the years after first stroke. We studied...... the association between body mass index (BMI) and stroke patient's risk of having a history of previous stroke (recurrent stroke)....
4. Type-indexed data types
OpenAIRE
Hinze, R.; Jeuring, J.T.; Löh, A.
2004-01-01
A polytypic function is a function that can be instantiated on many data types to obtain data type specific functionality. Examples of polytypic functions are the functions that can be derived in Haskell, such as show, read, and ‘==’. More advanced examples are functions for digital searching, pattern matching, unification, rewriting, and structure editing. For each of these problems, we not only have to define polytypic functionality, but also a type-indexed data type: a data type that is con...
5. Succincter Text Indexing with Wildcards
CERN Document Server
Thachuk, Chris
2011-01-01
We study the problem of indexing text with wildcard positions, motivated by the challenge of aligning sequencing data to large genomes that contain millions of single nucleotide polymorphisms (SNPs)---positions known to differ between individuals. SNPs modeled as wildcards can lead to more informed and biologically relevant alignments. We improve the space complexity of previous approaches by giving a succinct index requiring $(2 + o(1))n \\log \\sigma + O(n) + O(d \\log n) + O(k \\log k)$ bits for a text of length $n$ over an alphabet of size $\\sigma$ containing $d$ groups of $k$ wildcards. A key to the space reduction is a result we give showing how any compressed suffix array can be supplemented with auxiliary data structures occupying $O(n) + O(d \\log \\frac{n}{d})$ bits to also support efficient dictionary matching queries. The query algorithm for our wildcard index is faster than previous approaches using reasonable working space. More importantly our new algorithm greatly reduces the query working space to ...
6. The glycemic index: physiological significance.
Science.gov (United States)
Esfahani, Amin; Wong, Julia M W; Mirrahimi, Arash; Srichaikul, Korbua; Jenkins, David J A; Kendall, Cyril W C
2009-08-01
The glycemic index (GI) is a physiological assessment of a food's carbohydrate content through its effect on postprandial blood glucose concentrations. Evidence from trials and observational studies suggests that this physiological classification may have relevance to those chronic Western diseases associated with overconsumption and inactivity leading to central obesity and insulin resistance. The glycemic index classification of foods has been used as a tool to assess potential prevention and treatment strategies for diseases where glycemic control is of importance, such as diabetes. Low GI diets have also been reported to improve the serum lipid profile, reduce C-reactive protein (CRP) concentrations, and aid in weight control. In cross-sectional studies, low GI or glycemic load diets (mean GI multiplied by total carbohydrate) have been associated with higher levels of high-density lipoprotein cholesterol (HDL-C), with reduced CRP concentrations, and, in cohort studies, with decreased risk of developing diabetes and cardiovascular disease. In addition, some case-control and cohort studies have found positive associations between dietary GI and risk of various cancers, including those of the colon, breast, and prostate. Although inconsistencies in the current findings still need to be resolved, sufficient positive evidence, especially with respect to renewed interest in postprandial events, suggests that the glycemic index may have a role to play in the treatment and prevention of chronic diseases.
7. On modular semifinite index theory
CERN Document Server
2011-01-01
We propose a definition of a modular spectral triple which covers existing examples arising from KMS-states, Podles sphere and quantum SU(2). The definition also incorporates the notion of twisted commutators appearing in recent work of Connes and Moscovici. We show how a finitely summable modular spectral triple admits a twisted index pairing with unitaries satisfying a modular condition. The twist means that the dimensions of kernels and cokernels are measured with respect to two different but intimately related traces. The twisted index pairing can be expressed by pairing Chern characters in reduced versions of twisted cyclic theories. We end the paper by giving a local formula for the reduced Chern character in the case of quantum SU(2). It appears as a twisted coboundary of the Haar-state. In particular we present an explicit computation of the twisted index pairing arising from the sequence of corepresentation unitaries. As an important tool we construct a family of derived integration spaces associated...
8. Places disponibles/Places available
CERN Multimedia
2004-01-01
Si vous désirez participer à l'un des cours suivants, veuillez en discuter avec votre superviseur et vous inscrire électroniquement en direct depuis les pages de description des cours dans le Web que vous trouvez à l'adresse : http://www.cern.ch/Training/ ou remplissez une « demande de formation » disponible auprès du Secrétariat de votre Division ou de votre DTO (Délégué divisionnaire à la formation). Les places seront attribuées dans l'ordre de réception des inscriptions. If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Off...
9. Sediment availability on burned hillslopes
Science.gov (United States)
Nyman, Petter; Sheridan, Gary J.; Moody, John A.; Smith, Hugh G.; Noske, Philip J.; Lane, Patrick N. J.
2013-12-01
Erodibility describes the inherent resistance of soil to erosion. Hillslope erosion models typically consider erodibility to be constant with depth. This may not be the case after wildfire because erodibility is partly determined by the availability of noncohesive soil and ash at the surface. This study quantifies erodibility of burned soils using methods that explicitly capture variations in soil properties with depth. Flume experiments on intact cores from three sites in the western United States showed that erodibility of fire-affected soil was highest at the soil surface and declined exponentially within the top 20 mm of the soil profile, with root density and soil depth accounting for 62% of the variation. Variation in erodibility with depth resulted in transient sediment flux during erosion experiments on bounded field plots. Material that contributed to transient flux was conceptualized as a layer of noncohesive material of variable depth (dnc). This depth was related to shear strength measurements and sampled spatially to obtain the probability distribution of noncohesive material as a function of depth below the surface. After wildfire in southeast Australia, the initial dnc ranged from 7.5 to 9.1 mm, which equated to 97-117 Mg ha-1 of noncohesive material. The depth decreased exponentially with time since wildfire to 0.4 mm (or < 5 Mg ha-1) after 3 years of recovery. The results are organized into a framework for modeling fire effects on erodibility as a function of the production and depletion of the noncohesive layer overlying a cohesive layer.
10. [Available carbapenems: Properties and differences].
Science.gov (United States)
Martínez, María José Fresnadillo; García, María Inmaculada García; Sánchez, Enrique García; Sánchez, José Elías García
2010-09-01
Carbapenems are β-lactam antibiotics endowed with a broader spectrum, greater activity, and greater resistance to β-lactamases than other β-lactams. Due to their qualities, these antibiotics are crucial in empirical therapy, in the monotherapy of several severe hospital-acquired infections -and even that of some community-acquired infections- as well as in the directed therapy of infections due to multiresistant Gram-negative bacteria. All the available carbapenems have a similar spectrum, although there are significant differences in their antimicrobial activity, which in the long run determines the clinical indications of each carbapenem. The spectrum of ertapenem does not cover eminently nosocomial pathogens such as Pseudomonas aeruginosa and Acinetobacter spp., and hence this antibiotic is indicated in community-acquired infections requiring hospital treatment. In contrast, doripenem shows greater intrinsic activity than other carbapenems against extended-spectrum beta-lactamase-producing enterobacteria and AmpC P. aeruginosa, Acinetobacter spp. and other non-fermentative and anaerobic microorganisms. Additionally, like the remaining carbapenems, doripenem has adequate pharmacokinetic characteristics and a favorable safety profile.
11. Places disponibles*/Places available **
CERN Document Server
2003-01-01
Des places sont disponibles dans les cours suivants : Places are available in the following course : WorldFIP 2003 pour utilisateurs : 11 - 14.2.03 (4 jours) AutoCAD 2002 - niveau 1 : 24, 25.2 & 3, 4.3.03 (4 jours) Introduction à Windows 2000 au CERN : 25.2.03 (1/2 journée) AutoCAD 2002 - niveau 2 : 27 & 28.2.03 (2 jours) C++ for Particle Physicists : 10 - 14.3.03 (6 X 3 hour lectures) AutoCAD Mechanical 6 PowerPack (F) : 12, 13, 17, 18, 24 & 25.3.03 (6 jours) CLEAN-2002 : Working in a cleanroom : 2.4.03 (half-day, afternoon, free course, registration required) Formation Siemens SIMATIC /Siemens SIMATIC Training : Introduction à STEP7 /Introduction to STEP7 : 11 & 12.3.03 / 3 & 4.6.03 (2 jours/2 days) Programmation STEP7/STEP7 Programming : 31.3 - 4.4.03 / 16 - 20.6.03 (5 jours/5 days) Réseau Simatic Net /Simatic Net Network : 15 & 16.4.03 / 26 & 27.6.03 Ces cours seront donnés en français ou anglais en fonction des demandes / These courses will be given in French o...
12. Bibliographic index to photonuclear reaction data (1955--1992)
Energy Technology Data Exchange (ETDEWEB)
Asami, Tetsuo [Data Engineering, Inc., Yokohama (Japan); Nakagawa, Tsuneo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Dept. of Reactor Engineering
1993-10-01
The Japanese Nuclear Data Committee (JNDC) has a plan to compile an evaluated data library for photon-induced nuclear reaction cross sections, and work on the data evaluation is currently in progress. In these evaluations a bibliographic index is required, as is the case for neutron nuclear data. Several excellent bibliographic compilations of photonuclear reactions have been made at research institutes around the world and have contributed to various basic and applied research on photonuclear reactions. For example, there are the abstract sheets published by the US National Bureau of Standards and the data index published regularly in Russia. On the other hand, the four-center nuclear data network (the US National Nuclear Data Center at Brookhaven, the Russian Nuclear Data Center at Obninsk, the NEA Data Bank at Paris, and the IAEA Nuclear Data Section at Vienna) compiles and exchanges numerical data on photonuclear reactions, as well as on neutron-induced ones, in the EXFOR format. Those numerical data are available to users. There is, however, no bibliographic index to photonuclear reactions available to general users. Therefore, the present work to produce a photonuclear reaction data index has been undertaken urgently to contribute to the above-mentioned data evaluation. Although the data may still be incomplete and have some defects, we have decided to issue this as the first edition of our photonuclear reaction index.
13. Disassembling iron availability to phytoplankton.
Science.gov (United States)
Shaked, Yeala; Lis, Hagar
2012-01-01
The bioavailability of iron to microorganisms and its underlying mechanisms have far-reaching repercussions for many natural systems and diverse fields of research, including ocean biogeochemistry, carbon cycling and climate, harmful algal blooms, soil and plant research, bioremediation, pathogenesis, and medicine. Within the framework of ocean sciences, short supply and restricted bioavailability of Fe to phytoplankton is thought to limit primary production and curtail atmospheric CO2 drawdown in vast ocean regions. Yet a clear-cut definition of bioavailability remains elusive, with elements of iron speciation and kinetics, phytoplankton physiology, light, temperature, and microbial interactions, to name a few, all intricately intertwined in this concept. Here, in a synthesis of published and new data, we attempt to disassemble the complex concept of iron bioavailability to phytoplankton by individually exploring some of its facets. We distinguish between the fundamentals of bioavailability, the acquisition of Fe-substrate by phytoplankton, and added levels of complexity involving interactions among organisms, iron, and ecosystem processes. We first examine how phytoplankton acquire free and organically bound iron, drawing attention to the pervasiveness of the reductive uptake pathway in both prokaryotic and eukaryotic autotrophs. Turning to acquisition rates, we propose to view the availability of various Fe-substrates to phytoplankton as a spectrum rather than an absolute "all or nothing." We then demonstrate the use of uptake rate constants to make comparisons across different studies, organisms, Fe-compounds, and environments, and for gauging the contribution of various Fe-substrates to phytoplankton growth in situ. Last, we describe the influence of aquatic microorganisms on iron chemistry and fate by way of organic complexation and bio-mediated redox transformations, and examine the bioavailability of these bio-modified Fe species.
14. Inverted Indexing In Big Data Using Hadoop Multiple Node Cluster
Directory of Open Access Journals (Sweden)
Kaushik Velusamy
2013-12-01
Inverted indexing is an efficient, standard data structure well suited to search over an exhaustive set of data. Such huge data sets are mostly unstructured and do not fit traditional database categories, and large-scale processing of this data needs a distributed framework such as Hadoop, where computational resources can easily be shared and accessed. As a demonstration, a search engine over millions of Wikipedia documents is implemented in Hadoop using an inverted index data structure. An inverted index maps a word in a file, or set of files, to its locations: a hash table stores each word as a key and its locations as the value, providing easy lookup and retrieval of data and making the structure well suited to search operations.
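The hash-table structure described here is easy to sketch on a single machine (a toy stand-in for the Hadoop implementation; the two documents below are invented):

```python
from collections import defaultdict

def build_inverted_index(documents):
    """Map each word to the set of (doc_id, position) pairs where it occurs."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for pos, word in enumerate(text.lower().split()):
            index[word].add((doc_id, pos))
    return index

docs = {
    "doc1": "big data needs distributed processing",
    "doc2": "inverted index speeds up search over big data",
}
index = build_inverted_index(docs)
hits = index["big"]   # every (document, word-position) where "big" appears
```

Looking up a word is then a single hash-table probe; the distributed version shards this table across cluster nodes and rebuilds it with MapReduce jobs.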
15. MEASURING INFLATION THROUGH STOCHASTIC APPROACH TO INDEX NUMBERS FOR PAKISTAN
Directory of Open Access Journals (Sweden)
Zahid Asghar
2010-09-01
This study estimates the rate of inflation in Pakistan through the stochastic approach to index numbers, which provides not only a point estimate but also a confidence interval for the rate of inflation. There are two broad approaches to index number theory: the functional economic approaches and the stochastic approach. The attraction of the stochastic approach is that uncertainty and statistical ideas play a major role in the estimation of the index. We use an extended stochastic approach to index numbers, allowing for systematic changes in relative prices, and apply it to CPI data for Pakistan covering the period July 2001 - March 2008.
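As a rough illustration, the simplest (unweighted) stochastic estimator treats each good's log price relative as a noisy observation of a common inflation rate: the point estimate is their mean, and the confidence interval follows from the standard error. The prices below are invented, and the paper's extended approach, which allows for systematic relative-price change, is more elaborate than this sketch:

```python
import math

def stochastic_inflation_estimate(p0, p1, z=1.96):
    """Point estimate and approximate 95% CI for inflation, taken as the
    mean log price relative across goods (unweighted stochastic approach)."""
    logs = [math.log(b / a) for a, b in zip(p0, p1)]
    n = len(logs)
    mean = sum(logs) / n
    var = sum((x - mean) ** 2 for x in logs) / (n - 1)   # sample variance
    se = math.sqrt(var / n)                              # standard error of the mean
    return mean, (mean - z * se, mean + z * se)

base = [100, 50, 20, 80]   # hypothetical prices in the base period
now  = [110, 54, 21, 90]   # hypothetical prices one period later
rate, ci = stochastic_inflation_estimate(base, now)
```

The width of the interval reflects how much relative prices disagree: if all goods inflated at exactly the same rate, the interval would collapse to a point.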
16. A framework for dynamic indexing from hidden web
Directory of Open Access Journals (Sweden)
Hasan Mahmud
2011-09-01
The proliferation of dynamic websites backed by databases means that web pages are generated on the fly, which most search engines cannot index. In an attempt to crawl the contents of dynamic web pages, we propose a simple approach to indexing the huge amount of dynamic content hidden behind search forms. Our key contribution in this paper is the design and implementation of a simple framework to index dynamic web pages, using the Hadoop MapReduce framework to update and maintain the index. Starting from an initial URL, our crawler downloads both static and dynamic web pages, detects form interfaces, adaptively selects keywords that generate the most promising search results, automatically fills in the search form interfaces, submits the dynamic URL, and processes the results until a stopping condition is satisfied.
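The form-detection and query-URL construction steps of such a crawler can be sketched with the standard library alone (the HTML snippet, field name, and keyword below are invented; real keyword selection and the Hadoop indexing pipeline are much more involved):

```python
from html.parser import HTMLParser
from urllib.parse import urlencode

class FormFinder(HTMLParser):
    """Collect each form's action URL and the names of its text inputs."""
    def __init__(self):
        super().__init__()
        self.forms = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self.forms.append({"action": a.get("action", ""), "fields": []})
        elif tag == "input" and self.forms and a.get("type", "text") == "text":
            self.forms[-1]["fields"].append(a.get("name"))

def dynamic_url(form, keyword):
    # Fill every detected text field with the chosen keyword and build the
    # GET URL that the crawler would submit.
    return form["action"] + "?" + urlencode({f: keyword for f in form["fields"]})

html = '<form action="/search"><input type="text" name="q"></form>'
finder = FormFinder()
finder.feed(html)
url = dynamic_url(finder.forms[0], "hadoop")   # -> "/search?q=hadoop"
```

A crawler would fetch each such constructed URL, index the returned result page, and feed newly discovered terms back into the keyword-selection step.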
17. The Kirchhoff Index of Toroidal Meshes and Variant Networks
Directory of Open Access Journals (Sweden)
Jia-Bao Liu
2014-01-01
The resistance distance is a novel distance function in electrical network theory proposed by Klein and Randić. The Kirchhoff index Kf(G) is the sum of resistance distances between all pairs of vertices in G. In this paper, we establish relationships between the toroidal mesh network Tm×n and its variant networks in terms of the Kirchhoff index via spectral graph theory. Moreover, explicit formulae are proposed for the Kirchhoff indexes of L(Tm×n), S(Tm×n), T(Tm×n), and C(Tm×n), respectively. Finally, the asymptotic behavior of the Kirchhoff indexes of these networks is obtained via an analytic approach.
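Although the paper's closed-form results rely on spectral graph theory, the Kirchhoff index of a small graph can be computed exactly from the definition. A sketch, assuming a connected simple graph given as a 0/1 adjacency matrix (resistance distances come from inverting the grounded Laplacian; this is not the toroidal-mesh machinery of the paper):

```python
from fractions import Fraction
from itertools import combinations

def invert(M):
    """Invert a square matrix of Fractions by Gauss-Jordan elimination."""
    n = len(M)
    A = [list(M[i]) + [Fraction(i == j) for j in range(n)] for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)   # pivot row
        A[c], A[p] = A[p], A[c]
        pivot = A[c][c]
        A[c] = [x / pivot for x in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                f = A[r][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

def kirchhoff_index(adj):
    """Sum of resistance distances over all vertex pairs of a connected graph."""
    n = len(adj)
    # Grounded Laplacian: degrees minus adjacency, with the last vertex's
    # row and column deleted (the last vertex acts as the electrical ground).
    L = [[Fraction(sum(adj[i])) if i == j else Fraction(-adj[i][j])
          for j in range(n - 1)] for i in range(n - 1)]
    G = invert(L)
    def res(i, j):   # resistance distance r(i, j)
        gi = G[i][i] if i < n - 1 else Fraction(0)
        gj = G[j][j] if j < n - 1 else Fraction(0)
        gij = G[i][j] if i < n - 1 and j < n - 1 else Fraction(0)
        return gi + gj - 2 * gij
    return sum(res(i, j) for i, j in combinations(range(n), 2))

k3 = kirchhoff_index([[0, 1, 1], [1, 0, 1], [1, 1, 0]])   # complete graph K3
p3 = kirchhoff_index([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # path on 3 vertices
```

For K3 every resistance distance is 2/3, so Kf = 2; for the 3-vertex path the resistances are 1, 1, and 2, so Kf = 4. Both match the code's exact Fraction output.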
18. Integrated Microfibre Device for Refractive Index and Temperature Sensing
Directory of Open Access Journals (Sweden)
Sulaiman W. Harun
2012-08-01
A microfibre device integrating a microfibre knot resonator in a Sagnac loop reflector is proposed for refractive index and temperature sensing. The reflective configuration of this optical structure offers the advantages of simple fabrication and ease of sensing. To achieve a balance between responsiveness and robustness, the entire microfibre structure is embedded in low index Teflon, except for the 0.5–2 mm diameter microfibre knot resonator sensing region. The proposed sensor has exhibited a linear spectral response with temperature and refractive index. A small change in free spectral range is observed when the microfibre device experiences a large refractive index change in the surrounding medium. The change is found to be in agreement with calculated results based on dispersion relationships.
19. The Effect of CF Herbal Acupuncture by Oswestry Disability Index
Directory of Open Access Journals (Sweden)
Cho Tae-Sung
2001-12-01
Objective: The aim of this study was to assess the effect of CF herbal acupuncture on low back pain using the Oswestry Disability Index. Method: The study population consisted of 10 patients with back pain. CF herbal acupuncture was administered once every 5 days after admission. The degree of improvement was evaluated with the Oswestry Disability Index and a visual analogue scale (VAS). The Oswestry Disability Index consisted of eleven items, each scored at 5 or 6 points. Results: After CF herbal acupuncture, all 10 patients showed decreased Oswestry Disability Index and VAS scores, indicating that patient satisfaction increased after treatment. Conclusion: These results suggest that CF herbal acupuncture was effective for low back pain.
20. Index of refraction of molecular nitrogen for sodium matter waves
CERN Document Server
Loreau, J; Dalgarno, A
2013-01-01
We calculate the index of refraction of sodium matter waves propagating through a gas of nitrogen molecules. We use a recent ab initio potential for the ground state of the NaN_2 Van der Waals complex to perform quantal close-coupling calculations and compute the index of refraction as a function of the projectile velocity. We obtain good agreement with the available experimental data. We show that the refractive index contains glory oscillations, but that they are damped by averaging over the thermal motion of the N_2 molecules; the oscillations appear at lower temperatures and projectile velocities. We also investigate the behavior of the refractive index at low temperature and low projectile velocity to show its dependence on the rotational state of N_2, and discuss the advantage of using diatomic molecules as projectiles.
http://mathoverflow.net/questions/7569/classifying-strata-for-the-adjoint-representation-of-gl-from-first-principles?sort=newest

# Classifying strata for the adjoint representation of GL from first principles
How would one classify the strata of the standard nilpotent cone for $GL_{k}(\mathbb{C})$, using the definition from Hesselink's paper "Desingularizations of Varieties of Nullforms"? I know that they correspond to partitions / nilpotent orbits etc., but from first principles, why can't two different nilpotent orbits lie in the same stratum, and how would you prove that? (preferably using the definition of Hesselink)
I would like to classify the strata for the problem I'm working on, but don't completely understand how to do it for the more basic case (for which the method is probably well-known), I get stuck on the details, so that would be very helpful. Thanks.
I linkified Hesselink's paper for you. Since that paper doesn't seem to be publicly available, it might be helpful for you to restate the definition here. – David Speyer Dec 3 '09 at 1:42
I'm going to give a partial answer here for two reasons: (1) I am lazy and (2) this is starting to feel a little homeworky to me. Obviously, no one would assign this material as homework, but part of reading a math paper is taking the time to work out lots of simple examples and see how the definitions work. I feel like you are pushing the boundaries of how much of this work it is reasonable to ask other people to do. Not a major criticism, certainly not a vote to close the question, but my input.
On to the math. I've scanned the first 3 pages of Hesselink's paper. He makes the following definitions. G acts on V, v is a point of V and $\star$ a chosen base point of V fixed by G. In your setting, G is $GL_n$, V is the $n \times n$ matrices where G acts by conjugation, and $\star$ is zero. Hesselink writes Y(G) for what is essentially $\mathrm{Hom}(\mathbb{C}^*, G)$. More precisely, Hesselink tensors with $\mathbb{Q}$, so that he can talk about maps like $t \mapsto \left( \begin{smallmatrix} t^{1/3} & 0 \\\\ 0 & t^{-2/7} \end{smallmatrix} \right)$. I'll ignore this detail.
For $\lambda \in Y(G)$, Hesselink defines a rational number $m(\lambda)$. We talked about this in your previous question. In this setting, where V is an $N$-dimensional vector space, Hesselink gives an explicit formula for m on the bottom of page 142/top of page 143: diagonalize the action of $\lambda$ as $t \mapsto \mathrm{diag}(t^{m_1}, \cdots, t^{m_N})$ and write $v = \sum v_i e_i$. Then $m(\lambda) = \min(m_i : v_i \neq 0)$ if this number is nonnegative, and is $- \infty$ if this minimum is negative.
Let's see what this definition means in your setting. We can conjugate any $\lambda$ into diagonal form as $t \mapsto \mathrm{diag}(t^{c_1}, \cdots, t^{c_n})$. I've replaced $m_i$ by $c_i$ to point out that these $c$'s are not the $m$'s of the previous paragraph. In our notation, the $N$ of the previous paragraph is $n^2$. The vector space $V$ has dimension $n^2$ with basis $e_{ij}$. The action of $\lambda(t)$ on $e_{ij}$ is by $t^{c_i - c_j}$. (Exercise!).
So $m(\lambda) > 0$ if and only if $c_i \leq c_j$ implies $v_{ij} =0$.
We may as well order our basis such that $c_1 \geq c_2 \geq \cdots \geq c_n$. If $c_1 > c_2 > \cdots >c_n$ then we see that $m(\lambda) > 0$ if and only if $v$ is a strictly upper triangular matrix. When there are some equalities among the $c$'s, you want $v$ to be strictly block upper triangular. For such a $v$, $m(\lambda) = \min(c_i - c_j : v_{ij} \neq 0)$. In particular, notice that there exists a $\lambda$ such that $m(\lambda) > 0$ if and only if $v$ is nilpotent.
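A direct way to see these weight computations is to evaluate m(λ) numerically. A small sketch, assuming λ has already been diagonalized with integer weights c and that v is a nonzero matrix:

```python
def m_lambda(c, v):
    """Hesselink's m(lambda) for v in gl_n, where lambda(t) = diag(t**c[0], ..., t**c[n-1])
    acts on the matrix entry e_ij with weight c[i] - c[j].  Assumes v != 0."""
    n = len(c)
    weights = [c[i] - c[j] for i in range(n) for j in range(n) if v[i][j] != 0]
    m = min(weights)
    # Following the convention from the paper: -infinity if the minimum is negative.
    return m if m >= 0 else float("-inf")

c = [2, 1, 0]                             # weights with c_1 > c_2 > c_3
v = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]     # strictly upper triangular, hence nilpotent
m_lambda(c, v)                            # -> 1
```

A nonzero entry below the diagonal contributes a negative weight and forces m(λ) = -∞, while a diagonal entry contributes weight 0, so m(λ) > 0 exactly when v is strictly (block) upper triangular, as described above.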
Hesselink defines $\Lambda(v)$ to be the locus in $\{ \lambda : m(\lambda) = 1 \}$ where $q(\lambda)$ is minimized, where $q$ is the inner product from your previous question. What you want to show is that $\Lambda(v)$ determines the Jordan normal form of $v$.
I must admit that I haven't thought out how to prove this. But I hope this makes things explicit enough that you can attack it.
thanks, that was more than enough detail to be of help! if you think my question is unreasonable, and if you have time to answer it briefly, please just answer as concisely as possible with some of the major steps and I'll try to fill in. I didn't ask for a 100% mathematically rigorous answer. – Vinoth Dec 7 '09 at 6:28
http://mathhelpforum.com/statistics/131990-probability-playing-cards.html

# Math Help - Probability for playing cards
1. ## Probability for playing cards
Mr. A has 13 cards of the same suit. He withdraws 4 cards from it and makes a number using the digit in the units place of each chosen card, i.e. he will take 3 from a king, whose value is 13, 0 from a 10, 9 from a 9, and so on. What is the probability that he can form a number that is divisible by 2?
My working
(which is incorrect):
Since this question deals with combinations, I thought I could answer it as:
6C4/13C4 .. (6 possible ways to select numbers ending with an even digit with 4 chosen cards)
The correct answer according to the book is: (6 * 12 * 11 * 10)/(13 * 12 * 11 * 10) = 6/13
The solution from the book went above my head. How do i solve this question?
2. Hello, saberteeth!
Mr.A has 13 cards of the same suit.
He withdraws 4 cards from it and makes a number using the digits in the units place of each chosen card,
That is: Ace = 1, Deuce = 2, Trey = 3, ..., Ten = 0, Jack = 1, Queen = 2, King = 3.
What is the probability that he can form a number that is divisible by 2?
"Divisible by 2" means that the 4-digit number is even.
There are: 7 odd digits and 6 even digits.
He can make an even number if at least one digit is even.
He will fail if all four digits are odd.
There are ${7\choose4} = 35$ ways to get 4 odd digits, and ${13\choose4} = 715$ possible outcomes.

Hence: $P(\text{odd number}) = \frac{35}{715} = \frac{7}{143}$

Therefore: $P(\text{even number}) = 1-\frac{7}{143} = \frac{136}{143}$
3. Hello,
Soroban's way is the way I would do it, but it is also easy to see it this way: being divisible by 2 means the number has to be even, so out of the 13 cards there are 6 even cards {2, 4, 6, 8, 10, 12}. The probability of the last digit being even is 6 out of 13 cards: 6/13!
*another way of looking at it same concept:
I don't know if this is notationally correct but:
Sample Space S = {1,2,3,4,5,6,7,8,9,10,11,12,13}
Even ={2,4,6,8,10,12}
6 of the 13 cards are even; each has the same probability of being chosen. So P(EVEN)=P(2)+P(4)+...+P(12)= (1/13+1/13+1/13+1/13+1/13+1/13)= 6/13.
This way is trivial though and is not very reliable (not to mention long) when you start getting into more complex problems.
4. Again, it can be simplified by calculating the probability they'd all be odd
$P_{Even}=1-P_{Odd}=1-\left(\frac{7}{13}\ \frac{6}{12}\ \frac{5}{11}\ \frac{4}{10}\right)$
$=1-\frac{7(6)5(4)}{13(12)11(10)}=1-\frac{7(6)}{13(11)6}=1-\frac{7}{143}$
5. Originally Posted by Soroban
Hello, saberteeth!
"Divisible by 2" means that the 4-digit number is even.
There are: 7 odd digits and 6 even digits.
He can make an even number if at least one digit is even.
He will fail of all four digits are odd.
. . There are: . ${7\choose4} = 35$ ways to get 4 odd digits.
. . . . There are: . ${13\choose4} = 715$ possible outcomes.
. . Hence: . $P(\text{odd number}) \:=\:\frac{35}{715} \:=\:\frac{7}{143}$
Therefore: . $P(\text{even number}) \;=\;1-\frac{7}{143} \;=\;\frac{136}{143}$
Hello:
I did not understand one thing in Soroban's answer:
"He can make an even number if at least one digit is even."
What about $2343, 1225, 8643$? They also contain at least one even digit but are still odd.
I think the last digit should be an even number, e.g. $3334, 7134$, etc.
6. Hi u2_wa,
In forming a number from the 4 digits,
the even digit may be placed at the end to make it even,
thus forming an even number from the digits available.
You don't need to stick to the order the digits came in.
The book answer gives $\frac{6}{13}$
which is the probability that the first digit is even,
considering that it doesn't matter whether the others are even or odd.
However, this misses the probabilities of the 1st being odd, the 1st 2 being odd,
the 1st 3 being odd, the 1st and 3rd being odd......etc.
7. Originally Posted by Archie Meade
What I did to solve it is this:
There are $13P4$ ways in total.
To make an even number, the last digit must be one of the six even digits $(2,4,6,8,0,2)$.
Ways to get an even number: $12P3*6$
probability $=\frac{12P3*6}{13P4}$
$=\frac{6}{13}$ (The answer given in the book)
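A quick way to check both answers is to enumerate every ordered draw of four cards:

```python
from fractions import Fraction
from itertools import permutations

digits = [n % 10 for n in range(1, 14)]       # units digits of A, 2, ..., 10, J, Q, K

draws = list(permutations(range(13), 4))      # ordered draws of 4 distinct cards
total = len(draws)                            # 13*12*11*10 = 17160

# Book's reading: the digits keep the order drawn, so only the last digit matters.
fixed_order = sum(digits[d[-1]] % 2 == 0 for d in draws)

# Other reading: Mr. A may rearrange the digits, so one even digit anywhere will do.
rearranged = sum(any(digits[i] % 2 == 0 for i in d) for d in draws)

p_fixed = Fraction(fixed_order, total)        # 6/13
p_rearranged = Fraction(rearranged, total)    # 136/143
```

The two fractions correspond to the two readings discussed in the thread: 6/13 if the digits must stay in drawn order, 136/143 if Mr. A may rearrange them.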
8. Yes, that's good work u2_wa,
shows you can work it out in alternative ways.
You are calculating the probability if we must take the digits
in the order they come in.
The way the question is worded suggests that Mr. A chooses the 4 cards and tries to
make an even number by placing an even number in the
units position whenever he has the opportunity, in other words by rearranging
the digits deliberately.
Maybe the book did not want to allow him to do this,
and the answer it has given suggests he isn't allowed to.
However, the way the question is worded suggests he is!!
Hence, wording can make quite a difference in probability questions.
9. Originally Posted by Archie Meade
Yeh I know, thanks!!!
https://www.physicsforums.com/threads/qed-questions.742131/

# QED questions
1. Mar 7, 2014
### NewChemTeache
I was curious about two things. I just started reading Feynman's QED book this week. He starts by using an analogy of a spinning clock whose hand determines a direction (which I think is analogous to the amplitude of the frequency?)
My first question is, as a layperson, what causes the amplitude to change when light hits at a specific angle? Does this have to do with the light interacting with the particles of the surface material?
My second question is, if light in terms of QED is only explicable in terms of particles, can this also explain electrons only existing as particles and never as waves? I don't study QED at a university level, and only have a background in chemistry. In my classes that I took I was always taught about the particle/wave duality. QED changes this idea about light, does this also change that idea about the electrons?
-Rob
2. Mar 7, 2014
### Simon Bridge
Not exactly.
Notice that the "clock" is attached to a particle not a wave.
The length of the arrow is the "probability amplitude" - and this is usually a fixed value.
The area of the circle swept by the arrow is (proportional to) the probability of detecting the particle.
The "clock" is actually a "complex phasor" - so it involves math you may not have met yet: involving exponential numbers and the square-root of minus one. Feynman's description here is pure analogy and should not be taken too literally.
Statistics - there is a probability that the photon will be transmitted and a probability it will be reflected. Ultimately, yes, we tend to think of it as being due to the interactions between the photons and the material. In a way it describes the result of the interactions.
However, you need to distinguish between the probability amplitudes Feynman is talking about and the amplitude of the "light wave" that you are used to. The "clocks" are not describing a light wave.
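The "clock arrow" bookkeeping can be illustrated with a toy two-path interference calculation. This is purely an illustration of the phasor arithmetic with unnormalised amplitudes, not a real QED computation:

```python
import cmath

def arrow(turns):
    """One of Feynman's 'clock arrows' as a unit complex phasor.
    turns = fraction of a full rotation of the clock hand."""
    return cmath.exp(2j * cmath.pi * turns)

def probability(paths):
    """Add one arrow per path; the squared length of the total arrow is the
    (unnormalised) probability of detection."""
    return abs(sum(arrow(t) for t in paths)) ** 2

aligned = probability([0.0, 0.0])   # arrows point the same way: constructive
opposed = probability([0.0, 0.5])   # arrows half a turn apart: destructive
```

With both arrows aligned the squared length is 4 (twice the single-path amplitude, squared); half a turn apart, the arrows cancel and the probability drops to zero.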
Electrons do display wave-like behavior though, so it is not clear what you are thinking of here.
All particles exhibit wave-like behavior, all the time, following the rules of QED. Most of the time the wave behavior is too small to spot - but under special circumstances we can make it big enough and there it is!
It is easy to set up the special circumstances for light, in fact we do it by accident all the time, but it's not so easy for electrons.
QED provides a set of rules for working out how much of what sort of behavior to expect under different circumstances - but it does not tell you what is "really" going on.
It does - QED is a particle theory, so everything is particles of some kind that, due to their statistics, can sometimes exhibit phenomena that are well described using the maths associated with waves.
At HS level, one way of thinking about wave-particle duality is by analogy with the old story about the blind men and the elephant. You know the one: one guy gets the front, examines the trunk, and notices it is like a snake, so he concludes that an elephant is a kind of snake; the other guy gets the back, examines the tail, and notices it is like a rope, so he concludes that an elephant is a kind of rope. The two get together and compare notes, check each other's results etc., and come to the conclusion they are both right and the elephant exhibits rope-snake duality.
Last edited: Mar 7, 2014
https://www.futilitycloset.com/category/science-math/page/4/

Hot and Cold
The vortex tube is a bit of a magic trick: When a stream of compressed gas is injected into the chamber, it accelerates to a high rate of rotation and moves toward the nozzle on the right. Because of the nozzle’s shape, though, only the quickly rotating outer shell of this gas can escape; the rest moves back through the center of the vortex and escapes through the opening on the left.
The result, perplexingly, is that even though the tube has no moving parts, it emits hot air (up to 200°C) on the right and cold air (down to -50°C) on the left.
Could this principle be used to air-condition a home or vehicle? “That’s what everyone thinks when they first hear about it,” engineer Leslie Inglis told Popular Science in 1976. “I always tell them that they wouldn’t buy a toaster for the kitchen if they had to buy the generator to produce the electricity. You’ve got to think of this as a compressed-air appliance.”
Podcast Episode 169: John Harrison and the Problem of Longitude
Ships need a reliable way to know their exact location at sea — and for centuries, the lack of a dependable method caused shipwrecks and economic havoc for every seafaring nation. In this week’s episode of the Futility Closet podcast we’ll meet John Harrison, the self-taught English clockmaker who dedicated his life to crafting a reliable solution to this crucial problem.
We’ll also admire a dentist and puzzle over a magic bus stop.
See full show notes …
The Trinity Hall Prime
On Thursday, Numberphile published this video, which features a startling wall hanging in the Senior Combination Room at Trinity Hall, Cambridge: Junior research fellow James McKee devised a 1350-digit prime number whose digits form a likeness of the college's coat of arms. (The number of digits is significant, as it's the year that Bishop William Bateman founded the college.)
It turns out that finding such "prime" images is easier than one might think. In the video description, McKee explains: "Most of the digits of p were fixed so that: (i) the top two thirds made the desired pattern; (ii) the bottom third ensured that p-1 had a nice large (composite) factor F with the factorisation of F known. Numbers of this shape can easily be checked for primality. A small number of digits (you can see which!) were looped over until p was found that was prime."
Indeed, on the following day, Cambridge math student Jack Hodkinson published his own prime number, this one presenting an image of Corpus Christi College and including his initials and date of birth:
Hodkinson explains that he knew he wanted a 2688-digit prime, and the prime number theorem tells us that approximately one in every 6200 2688-digit numbers is prime. And he wasn’t considering even numbers, which reduces the search time by half: He expected to find a candidate in 100 minutes, and in fact found eight overnight.
(Thanks, Danesh.)
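The search strategy McKee describes (fix the "picture" digits, then loop over a small block of free digits until the result is prime) can be sketched in Python. This toy version uses a probabilistic Miller-Rabin test instead of the provable certificate via the known factorisation of p-1 that the real construction relies on, and the 12-digit prefix here is an arbitrary stand-in for the image digits:

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin primality test (probabilistic for general n)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # a witnesses that n is composite
    return True

def picture_prime(prefix_digits, free_digits=6):
    """Keep the 'picture' digits fixed and loop over a short odd suffix."""
    base = int(prefix_digits) * 10 ** free_digits
    for suffix in range(1, 10 ** free_digits, 2):
        if is_probable_prime(base + suffix):
            return base + suffix
    return None

p = picture_prime("881188118888")   # a tiny stand-in for the coat-of-arms digits
```

With 6 free digits, roughly one odd candidate in twenty near 10^18 is prime, so the loop terminates almost immediately, mirroring the short searches McKee and Hodkinson describe.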
In 2014 I described the Peaucellier–Lipkin linkage, a mechanism that transforms a rotary motion into a perfect straight-line motion:
That linkage was invented in 1864 by French army engineer Charles-Nicolas Peaucellier. A decade later, Harry Hart invented two more. “Hart’s inversor” is a six-bar linkage — links of the same color are the same length. The fixed point on the left is at the midpoint of the red link, and the “input” and “output” are at the midpoints of the two blue links:
In “Hart’s A-frame,” the short links are half the length of the long ones, and the center link is a quarter of the way down the long links:
Pleasingly, the motion perpendicularly bisects a fixed link across the bottom, which is the same length as the long links.
Unto the Breach
In 2004, engineers Richard Clements and Roger Hughes put their study of crowd dynamics to an unusual application: the medieval Battle of Agincourt, which pitted Henry V’s English army against a numerically superior French army representing Charles VI. In their model, an instability arises on the front between the contending forces, which may account for the relatively large proportion of captured soldiers:
[P]ockets of French men-at-arms are predicted to push into the English lines and with hindsight be surrounded and either taken prisoner or killed. … Such an instability might explain the victory by the weaker English army by surrounding groups of the stronger army.
This description is consistent with the three large mounds of fallen soldiers that are reported in contemporary accounts of the battle. If the model is accurate then perhaps French men-at-arms succeeded in pushing back the English in certain locations, only to be surrounded and slaughtered, rallying around their leaders. By contrast, modern accounts perhaps incorrectly describe a “wall” of dead running the length of the field.
“Interestingly, the study suggests that the battle was lost by the greater army, because of its excessive zeal for combat leading to sections of it pushing through the ranks of the weaker army only to be surrounded and isolated.” The whole paper is here.
(Richard R. Clements and Roger L. Hughes. “Mathematical Modelling of a Mediaeval Battle: The Battle of Agincourt, 1415,” Mathematics and Computers in Simulation 64:2 [2004], 259-269.)
The Scenic Route
A thrifty space traveler can explore the solar system by following the Interplanetary Transport Network, a series of pathways determined by gravitation among the various bodies. By plotting the course carefully, a navigator can choose a route among the Lagrange points that exist between large masses, where it’s possible to change trajectory using very little energy.
In the NASA image above, the “tube” represents the highway along which it’s mathematically possible to travel, and the green ribbon is one such route.
The good news is that these paths lead to some interesting destinations, such as Earth’s moon and the Galilean moons of Jupiter. The bad news is that such a trip would take many generations. Virginia Tech’s Shane Ross writes, “Due to the long time needed to achieve the low energy transfers between planets, the Interplanetary Superhighway is impractical for transfers such as from Earth to Mars at present.”
Mix and Match
The sum of any two of the five numbers {7442, 28658, 148583, 177458, 763442} is a perfect square:
7442 + 28658 = 190²
7442 + 148583 = 395²
7442 + 177458 = 430²
7442 + 763442 = 878²
28658 + 148583 = 421²
28658 + 177458 = 454²
28658 + 763442 = 890²
148583 + 177458 = 571²
148583 + 763442 = 955²
177458 + 763442 = 970²
Two other such sets:
{-15863902, 17798783, 21126338, 49064546, 82221218, 447422978}
{30823058, 63849842, 150187058, 352514183, 1727301842}
Whether there’s a set of six positive integers with this property is an open question.
(A.R. Thatcher, “Five Integers Which Sum in Pairs to Squares,” Mathematical Gazette 62:419 [March 1978], 25-29.)
A Second Look
M.C. Escher’s 1935 lithograph Hand With Reflecting Sphere gave artist Kelly M. Houle an idea.
She drew this image in charcoal on a piece of illustration board:
Now when a cylindrical mirror is placed at the center, it produces this reflection:
“When the original image is bent and stretched into a circular swath, the shadows seem to fall in all directions,” she wrote. “When the curved mirror is used to reflect the anamorphic distortion, the forms take on the familiar rules of light and shading that make them seem three-dimensional.”
(Kelly M. Houle, “Portrait of Escher: Behind the Mirror,” in D. Schattschneider and M. Emmer, eds., M.C. Escher’s Legacy, 2003.)
Escalating Powers
$\displaystyle 1 + 5 + 10 + 24 + 28 + 42 + 47 + 51 = 2 + 3 + 12 + 21 + 31 + 40 + 49 + 50\newline 1^{2} + 5^{2} + 10^{2} + 24^{2} + 28^{2} + 42^{2} + 47^{2} + 51^{2} = 2^{2} + 3^{2} + 12^{2} + 21^{2} + 31^{2} + 40^{2} + 49^{2} + 50^{2}\newline 1^{3} + 5^{3} + 10^{3} + 24^{3} + 28^{3} + 42^{3} + 47^{3} + 51^{3} = 2^{3} + 3^{3} + 12^{3} + 21^{3} + 31^{3} + 40^{3} + 49^{3} + 50^{3}\newline 1^{4} + 5^{4} + 10^{4} + 24^{4} + 28^{4} + 42^{4} + 47^{4} + 51^{4} = 2^{4} + 3^{4} + 12^{4} + 21^{4} + 31^{4} + 40^{4} + 49^{4} + 50^{4}\newline 1^{5} + 5^{5} + 10^{5} + 24^{5} + 28^{5} + 42^{5} + 47^{5} + 51^{5} = 2^{5} + 3^{5} + 12^{5} + 21^{5} + 31^{5} + 40^{5} + 49^{5} + 50^{5}\newline 1^{6} + 5^{6} + 10^{6} + 24^{6} + 28^{6} + 42^{6} + 47^{6} + 51^{6} = 2^{6} + 3^{6} + 12^{6} + 21^{6} + 31^{6} + 40^{6} + 49^{6} + 50^{6}\newline 1^{7} + 5^{7} + 10^{7} + 24^{7} + 28^{7} + 42^{7} + 47^{7} + 51^{7} = 2^{7} + 3^{7} + 12^{7} + 21^{7} + 31^{7} + 40^{7} + 49^{7} + 50^{7}$
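This is a classic ideal solution of the Prouhet–Tarry–Escott problem, and the whole chain of identities is quick to confirm by machine:

```python
# Check the eight-term identity above for every power from 1 through 7.
a = [1, 5, 10, 24, 28, 42, 47, 51]
b = [2, 3, 12, 21, 31, 40, 49, 50]
for n in range(1, 8):
    assert sum(x ** n for x in a) == sum(x ** n for x in b)

# The identity cannot extend to the eighth power: equal power sums up to
# n = 8 would force the two eight-element multisets to be identical.
print(sum(x ** 8 for x in a) == sum(x ** 8 for x in b))   # False
```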
For the Record
Western Kentucky University geoscientist John All was traversing Nepal’s Mount Himlung in May 2014 when the ice collapsed beneath him and he fell into a crevasse, dislocating his shoulder and breaking some ribs. He landed on a ledge, but now he faced a 70-foot climb back to the surface alone without the use of his right arm or upper leg.
“That’s when I pulled my research camera out and started talking to myself about all my options,” he told National Geographic. “I take photos of everything I do because, if I’m working in Africa and I need to recall a detail, that’s going to be the best way to do it. I was also thinking about my mom and my friends and family and realized that just talking wouldn’t convey what was happening to me nearly as well. So I started recording things.”
“It probably took me four or five hours to climb out,” he said. “I kept moving sideways, slightly up, sideways, slightly up, until I found an area where there was enough hard snow that I could get an ax in and pull myself up and over. I knew that if I fell at any time in that entire four or five hours, I, of course, was going to fall all the way to the bottom of the crevasse. Any mistake, or any sort of rest or anything, I was going to die.”
After reaching the top he rolled as much as walked back to his tent, called for help, and waited 16 hours for a helicopter to arrive. He wrote later, “I had dug myself out of my own grave.”
# Effects of sample size, model misspecification and the number of indicators on fit indices for covariance structure modeling: A Monte Carlo study
#### Abstract
This investigation studied the behavior of fit indices in covariance structure modeling (CSM) when sample size, model misspecification and the number of indicators per factor were varied. The results provide basic researchers with an extensive framework to examine fundamental uses of LISREL VI (i.e., performance of fit indices), and assist applied researchers in properly evaluating larger models using CSM by utilizing a variety of fit indices. Within this investigation covariance matrices, which form the basis for obtaining solutions in CSM, were generated according to the constraints of sample size and the number of indicators per factor for three six-factor models. Fifty sample covariance matrices were generated for sample sizes of 50, 100, 200 and 400 with one, two, and three indicators per factor and three degrees of misspecification. The issue of model error (misspecification) was introduced by attempting to fit one model, the target model, to the other two models. Moderate misspecification was represented by fitting the target model to a model consisting of two additional relationships among the factors. The third model represented extensive misspecification by consisting of five additional relationships to the target model. From the LISREL VI analyses, 15 fit indices (including stand-alone, incremental and parsimonious fit indices) were provided or calculated from these results. These fit indices and whether the solutions were nonconvergent and improper served as the dependent variables. A 4 (sample size) x 3 (degree of misspecification) x 3 (number of indicators per factor) analysis of variance (ANOVA) was conducted to determine which of the three independent variables or any combination thereof contributed to the resulting behavior of the fit indices. Since large sample sizes were used, $\omega^2$ was estimated. In addition to the analysis of variance, a log-linear analysis was conducted on the frequency of improper and nonconvergent solutions.
The results of this study have both supported and contradicted previous findings. The limitations of this study can be categorized as due to decisions made in the sampling design and specification of the population model. Recommendations for the use of the fit indices were made within the constraints of the study.
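The Monte Carlo design described above can be sketched as a grid of conditions. This is only an illustration of the design's bookkeeping — the variable names and labels are assumptions, not taken from the dissertation's materials, and it assumes the fifty replications apply to each cell:

```python
from itertools import product

# Crossed design factors as described in the abstract.
sample_sizes = [50, 100, 200, 400]
indicators_per_factor = [1, 2, 3]
misspecification = ["none", "moderate", "extensive"]
replications = 50

cells = list(product(sample_sizes, indicators_per_factor, misspecification))
print(len(cells))                   # 36 design cells
print(len(cells) * replications)    # 1800 sample covariance matrices in all
```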
#### Subject Area
Psychological tests
#### Recommended Citation
Patelis, Thanos, "Effects of sample size, model misspecification and the number of indicators on fit indices for covariance structure modeling: A Monte Carlo study" (1994). ETD Collection for Fordham University. AAI9425202.
https://research.library.fordham.edu/dissertations/AAI9425202
## IBPS Clerk Data Analysis Test 6
Instructions
Study the following tables carefully and answer the question given below:
Number of Candidates appeared in a Competitive Examination from Five Centers over the years.
Approximate percentage of Candidates Qualified to Appeared in the Competitive Examination from Five Centers over the year.
Q 1
In which of the following years was the difference in the number of candidates who appeared from Mumbai over the previous year the minimum?
Q 2
In which of the following years was the number of candidates qualified from Chennai the maximum among the given years?
Q 3
Approximately what was the total number of candidates qualified from Delhi in 2002 and 2006 together?
Q 4
Approximately how many candidates appearing from Kolkata in 2004 qualified in the competitive examination?
Q 5
Approximately what was the difference between the number of candidates qualified from Hyderabad in 2001 and that in 2002?
# Dispersion calculation in var FDTD?
How do I calculate dispersion in a script for the ring modulator above? Group delay is already calculated in this design, but I want to compute the dispersion in units of ps/(nm km).
Please help.
Hi,
I would recommend taking a look at the script from the following example which calculates the dispersion from group delay:
https://kb.lumerical.com/en/index.html?sweeps_analysis_for_dispersion.html
See lines 104-107 of the associated script file and let me know if you have any questions about it!
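The conversion from group delay to dispersion is a wavelength derivative plus a unit change. A hedged sketch of the idea (this is not the Lumerical script from the linked example; the group-delay values below are dummy placeholders):

```python
import numpy as np

# Dispersion parameter D = d(tau_g / L) / d(lambda), i.e. the wavelength
# derivative of group delay per unit length, converted to ps/(nm km).
c = 299792458.0                             # speed of light [m/s]
lam = np.linspace(1.50e-6, 1.60e-6, 201)    # wavelength [m]

# Dummy group delay per unit length [s/m]: linear in wavelength, with a
# slope chosen to mimic ~17 ps/(nm km), a typical standard-fiber value.
tau_per_len = 1.47 / c + 17e-6 * (lam - 1.55e-6)

D = np.gradient(tau_per_len, lam)           # [s/m^2]
D_ps_nm_km = D * 1e12 / (1e9 * 1e-3)        # 1 s/m^2 = 1e6 ps/(nm km)
print(round(float(D_ps_nm_km[100]), 3))     # ~17.0 for this dummy model
```

In practice `tau_per_len` would come from the simulated group delay divided by the device length before taking the derivative.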
### Top 4 Arxiv Papers Today in Atomic Physics
##### #1. An ultracold heavy Rydberg system formed from ultra-long range molecules bound in a stairwell potential
###### Frederic Hummel, Peter Schmelcher, Herwig Ott, Hossein R. Sadeghpour
We propose a scheme to realize a heavy Rydberg system (HRS), a bound pair of oppositely charged ions, from a gas of ultracold atoms. The intermediate step to achieve large internuclear separations is the creation of a unique class of ultra-long-range Rydberg molecules bound in a stairwell potential energy curve. Here, a ground-state atom is bound to a Rydberg atom in an oscillatory potential emerging due to attractive singlet $p$-wave electron scattering. The utility of our approach originates in the large electronic dipole transition element between the Rydberg- and the ionic molecule, while the nuclear configuration of the ultracold gas is preserved. The Rabi coupling between the Rydberg molecule and the heavy Rydberg system is typically in the MHz range and the permanent electric dipole moments of the HRS can be as large as one kilo-Debye. We identify specific transitions which place the creation of the heavy Rydberg system within immediate reach of experimental realization.
###### Other stats
Sample Sizes : None.
Authors: 4
Total Words: 6032
Unique Words: 1847
##### #2. Effect of Spin-Orbit Coupling on Decay Widths of Electronic Decay Processes
###### Elke Fasshauer
Meitner-Auger processes are electronic decay processes of energetically low-lying vacancies. In these processes, the vacancy is filled by an electron of an energetically higher-lying orbital, while another electron is simultaneously emitted to the continuum. In low-lying orbitals relativistic effects cannot be neglected, even for light elements. At the same time, lifetime calculations are computationally expensive. In this context, we investigate which effect spin-orbit coupling has on Meitner-Auger decay widths and aim for a rule of thumb for the relative decay widths of initial states split by spin-orbit coupling. We base this rule of thumb on Meitner-Auger decay widths of Sr 4$p^{-1}$ and Ra 6$p^{-1}$ obtained by relativistic FanoADC-Stieltjes calculations.
###### Other stats
Sample Sizes : None.
Authors: 1
Total Words: 5061
Unique Words: 1582
##### #3. Sawtooth Wave Adiabatic Passage in a Magneto-Optical Trap
###### John P. Bartolotta, Murray J. Holland
We investigate theoretically the application of Sawtooth Wave Adiabatic Passage (SWAP) within a 1D magneto-optical trap (MOT). As opposed to related methods that have been previously discussed, our approach utilizes repeated cycles of stimulated absorption and emission processes to achieve both trapping and cooling, thereby reducing the adverse effects that arise from photon scattering. Specifically, we demonstrate this method's ability to cool, slow and trap particles with fewer spontaneously emitted photons, higher forces and in less time when compared to a traditional MOT scheme that utilizes the same narrow linewidth transition. We calculate the phase space compression that is achievable and characterize the resulting system equilibrium cloud size and temperature.
###### Other stats
Sample Sizes : None.
Authors: 2
Total Words: 9877
Unique Words: 2435
##### #4. Probing the two-electron cusp in the ground states of He and H$_2$
###### S. Grundmann, V. Serov, F. Trinter, K. Fehre, N. Strenger, A. Pier, M. Kircher, D. Trabert, M. Weller, J. Rist, L. Kaiser, A. W. Bray, L. Ph. H. Schmidt, J. B. Williams, T. Jahnke, R. Dörner, M. S. Schöffler, A. S. Kheifets
We report on kinematically complete measurements and ab initio non-perturbative calculations of double ionization of He and H$_2$ by a single 800 eV circularly polarized photon. We utilize the quasi-free mechanism of photoionization to probe the two-electron cusp in the ground state of these two targets. Our approach constitutes a new method of electron localization by studying dynamic many-electron correlation and provides valuable insight into the mechanisms of non-dipole photoionization.
###### Other stats
Sample Sizes : None.
Authors: 18
Total Words: 0
Unique Words: 0
Assert is a website where the best academic papers on arXiv (computer science, math, physics), bioRxiv (biology), BITSS (reproducibility), EarthArXiv (earth science), engrXiv (engineering), LawArXiv (law), PsyArXiv (psychology), SocArXiv (social science), and SportRxiv (sport research) bubble to the top each day.
### Recent submissions
• #### Modos massivos e paredes de domínio em líquidos de Fermi
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2018-06-11)
In this thesis, we use Fermi liquid theory to deal with three problems. The first and second problems are about a direct and an indirect detection of the amplitude massive ‘‘Higgs’’ mode in weak ferromagnets like ZrZand M ...
• #### Termodinâmica de buracos negros de Schwarzschild
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2018-05-28)
This dissertation consists of a review about some aspects of Schwarzschild black hole thermodynamics and includes a discussion related to the temporal evolution of the masses of these black holes in a thermodynamic context. ...
• #### Dependência da anisotropia magnética efetiva em função da temperatura e concentração de níquel de amostras nanoparticuladas de NixCo1-xFe2O4
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2018-03-07)
The study of nanoparticles has generated great interest in recent years in several areas of research. For example, multiferroic composite magnetoelectric materials, in which ferromagnetism and ferroelectricity occur ...
• #### Resposta óptica de sistemas atômicos no espaço livre ou aprisionados dentro de cavidades ópticas no regime de armadilhamento coerente de populações
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2018-02-28)
The aim of this work is to study the electromagnetically induced transparency (EIT) and the coherent population trapping (CPT) phenomena in three-level systems, such as atoms and quantum dot molecules (QDM). The present ...
• #### Propriedades eletrônicas de nanofios semicondutores de Fosfeto de Zinco (Zn3P2)
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2018-04-20)
Zinc phosphide Zn3P2 nanowires with excellent crystalline quality were grown using the Vapor-Liquid-Solid (VLS) method with gold nanoparticles as catalysts. Single nanowire based devices were fabricated with ohmic nickel ...
• #### Turbulência de Ondas em condensados de Bose-Einstein
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2018-02-19)
In this work we used as a starting point the theory developed by Zakharov and Nazarenko for treating wave turbulence in systems with weak non linearity such as atomic Bose-Einstein condensates with low enough temperature ...
• #### Transmissão de emaranhamento através de cadeias de spins
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2018-04-25)
In this thesis we studied the transmission of entangled states in unmodulated spin chains using a slight modification in the XY isotropic model (XX model) that describes a one-dimensional spin-1/2 chain. We have shown that ...
• #### Propriedades magnéticas de BiMn2O5 e Fe3O2BO3
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2018-02-26)
In this work, an exploration of the synthesis and magnetoelectric properties of single-crystals of BiMn2O5 and a systematic study of the magnetic properties of Fe3O2BO3 are presented. Despite the unlike composition, both ...
• #### Estudo de dispositivos eletrônicos baseados em filmes de diamante dopados com boro
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2018-01-29)
In this work, the main objective was the study of boron doped diamond films aiming the use of them as semiconductor material for electronic devices. Schottky diodes and a field effect transistor were the devices built with ...
• #### Emaranhamento intrínseco em sistemas tipo-Dirac
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2018-03-06)
Dirac equation is supported by a $SU(2) \otimes SU(2)$ group structure, such that its solutions describe two discrete degrees of freedom, intrinsic parity and spin, which in general are entangled. In this work we will ...
• #### Filmes finos de Zn(1-x)Cu(x)O crescidos por Spray Pirólise
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2017-11-24)
This work consists of investigations of the structural, morphological and optical properties of thin films of the Zn(1-x)Cu(x)O system grown on glass substrate using the Spray-Pyrolysis technique with Zinc and Copper ...
• #### Mecânica quântica no espaço de fase não-comutativo e aplicações em termodinâmica
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2016-08-26)
In this work we study theoretical aspects arising from the fact of considering a quantum theory with general relations of noncommutativity. Through the quantum mechanics in phase-space formalism in the Wigner-Weyl ...
• #### Efeitos do campo magnético sobre processos de corrosão em aço AISI 1020
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2017-09-19)
In this dissertation the effects of the application of magnetic field during the electrochemical reactions of corrosion in samples of steel AISI 1020 in KNO_3 solution were investigated. The experiments were measurements ...
• #### Investigação das propriedades magnetoelásticas da ferrita de níquel : análise teórico-experimental
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 21-02-18)
In this dissertation, we investigate the magnetoelastic properties of nickel ferrite compared to the pure metal nickel, either by ab-initio calculation or through experiments using the capactive cell technique. The first ...
• #### Geração de corrente spin polarizada em heteroestruturas dopadas com impurezas magnéticas
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2015-08-07)
In this dissertation, the computational modeling of two semiconductor structures composed of two quantum wells was performed, being a quantum well doped with magnetic impurities. Each structure was designed to form a ...
• #### Estudo das propriedades estruturais, elétricas e ópticas de filmes finos de Niobato de Sódio e Potássio (KNN), fabricados por deposição a laser pulsado (PLD)
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2017-05-05)
Environmental problems arising from lead toxicity in materials, such as those in lead zirconate titanate (PZT) have stimulate the search for new lead-free ferroelectrics materials with ferroelectric and piezoelectric ...
• #### Estudo da viabilidade de equipamento LIBS-LIF contínuo para aumento do limite de detecção do mercúrio
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2017-09-22)
This work aims to verify the feasibility of a continuous LIBS-LIF equipment construction at 405 nm for heavy metals analysis, focusing on mercury. We have studied experimentally a diode laser of a continuous light source ...
• #### Obtenção e caracterização de heteroestruturas epitaxiais magnetoelétricas de [KNbO3]0,9-[BaNi1/2Nb1/2O3-𝛿]0,1 e [La0,7Sr0,3MnO3]
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2016-10-07)
Magnetoelectric materials have been extensively studied in the last decade, particularly due to the applicability for sensors, attenuators, transformers and others. Hence as an objective for this work we propose a study ...
• #### Magneto hipertermia in vitro em células hek293t utilizando nanopartículas de óxido de ferro magnéticas com diferentes recobrimentos
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2017-08-03)
One of the greatest challenges in medicine has been developing treatments for several types of cancer. Moreover, conventional treatments for cancer such as chemotherapy and radiotherapy have been presenting undesirable ...
• #### Propriedades de sistemas de pontos quânticos semicondutores acoplados e sua resposta óptica
(Universidade Federal de São Carlos, UFSCar, Programa de Pós-graduação em Física, Câmpus São Carlos, 2017-10-31)
The study of doped semiconductor quantum dots with magnetic impurities has been a challenging task when adding spin-orbit coupling and exchange interaction with asymmetry effects. The combination of all these factors within ...
### Electronic data
• 2019martinphd
Final published version, 32.3 MB, PDF document
## On the modelling and consequence of small-scale magnetic phenomena in the Saturnian system
Research output: Thesis › Doctoral Thesis
Published
### Standard
Lancaster University, 2019. 246 p.
Research output: Thesis › Doctoral Thesis
### Bibtex
@phdthesis{b6fa00bfe46d4b0f83b62bd085488535,
title = "On the modelling and consequence of small-scale magnetic phenomena in the Saturnian system",
abstract = "This thesis presents an analysis of Cassini magnetometer data in two different regions of the Kronian system. An evaluation of aperiodic waves on the equatorial current sheet is presented; the waves are fitted to a model of a Harris current sheet deformed by a Gaussian wave pulse. This analysis allows examination of the parameters relating to the waves, where amplitude of waves is found to increase with radial distance. In addition, the direction of propagation of the waves is found by resolving the wave numbers in 2-dimensions, where a general outwards propagation is found. The use of the Harris current sheet also allows the resolution of current sheet parameters, and it is found that the scale height of the current sheet increases with radial distance. Additionally, values of the magnetic field in the lobes are found using the model, which are then used along with the scale heights to estimate the current density in the azimuthal and radial directions. These values can also be used to calculate, using the divergence of current, the field aligned currents entering and leaving the ionosphere, where a current entering the ionosphere pre-noon and a current exiting the ionosphere post-midnight are shown. This current density is then converted to an electron flux in the upward current region, and could produce an additional 1-11 kR of auroral emission which is seen in other infrared and ultraviolet data sets. Additionally, irregular magnetic signatures, such as the aperiodic waves, are found in the entire system including Titan{\textquoteright}s ionosphere. At Titan, a statistical study of the position of flux ropes finds no spatial dependence other than an increased number of flux ropes in the sun-lit regions and ram-side regions. A comparison of force-free and non-force-free models is utilised to extract the radii and axial magnetic field of the flux ropes, and compare the assumptions required for both models.
Additionally, deformations to the models are used to model common asymmetries seen in the magnetometer data and find that bending a force-free flux rope solves the problem of the direction ambiguity of using minimum variance analysis and using elliptical cross-sections of flux ropes allows for a asymmetric flux rope signature.All together, this thesis explores the varied magnetic phenomena in the Kronian system and uses them to understand the surrounding environment.",
author = "Carley Martin",
year = "2019",
doi = "10.17635/lancaster/thesis/534",
language = "English",
publisher = "Lancaster University",
school = "Lancaster University",
}
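The Harris current sheet named in the abstract is a standard analytic model in which the lobe-parallel field reverses sign across the sheet centre over a scale height. As a minimal illustrative sketch (the exact parameterisation, the parameter values, and the form of the Gaussian wave pulse used in the thesis are assumptions here, not taken from it):

```python
import math

def harris_bx(z, b0, h):
    """Harris sheet: lobe-parallel field reversing across the centre,
    with lobe field b0 and scale height h."""
    return b0 * math.tanh(z / h)

def sheet_centre(x, t, amp, sigma, v):
    """Illustrative Gaussian wave pulse displacing the sheet centre,
    travelling at speed v (a hypothetical choice for this sketch)."""
    return amp * math.exp(-((x - v * t) ** 2) / (2 * sigma ** 2))

def bx(x, z, t, b0=2.0, h=1.5, amp=0.5, sigma=3.0, v=0.2):
    """Field at height z when the sheet centre is displaced by the pulse."""
    return harris_bx(z - sheet_centre(x, t, amp, sigma, v), b0, h)

# At the (displaced) centre the field vanishes; far above it saturates at b0.
print(bx(0.0, 0.5, 0.0), round(harris_bx(100.0, 2.0, 1.5), 6))
```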
### RIS
TY - THES
T1 - On the modelling and consequence of small-scale magnetic phenomena in the Saturnian system
AU - Martin, Carley
PY - 2019
Y1 - 2019
N2 - This thesis presents an analysis of Cassini magnetometer data in two different regions of the Kronian system. An evaluation of aperiodic waves on the equatorial current sheet is presented; the waves are fitted to a model of a Harris current sheet deformed by a Gaussian wave pulse. This analysis allows examination of the parameters relating to the waves, where the amplitude of the waves is found to increase with radial distance. In addition, the direction of propagation of the waves is found by resolving the wave numbers in two dimensions, where a general outward propagation is found. The use of the Harris current sheet also allows the resolution of current sheet parameters, and it is found that the scale height of the current sheet increases with radial distance. Additionally, values of the magnetic field in the lobes are found using the model, which are then used along with the scale heights to estimate the current density in the azimuthal and radial directions. These values can also be used to calculate, using the divergence of the current, the field-aligned currents entering and leaving the ionosphere, where a current entering the ionosphere pre-noon and a current exiting the ionosphere post-midnight are shown. This current density is then converted to an electron flux in the upward current region, and could produce an additional 1-11 kR of auroral emission, which is seen in other infrared and ultraviolet data sets. Additionally, irregular magnetic signatures, such as the aperiodic waves, are found throughout the system, including Titan’s ionosphere. At Titan, a statistical study of the position of flux ropes finds no spatial dependence other than an increased number of flux ropes in the sun-lit and ram-side regions. A comparison of force-free and non-force-free models is utilised to extract the radii and axial magnetic field of the flux ropes, and to compare the assumptions required for both models.
Additionally, deformations to the models are used to model common asymmetries seen in the magnetometer data; it is found that bending a force-free flux rope resolves the direction ambiguity of minimum variance analysis, and that using elliptical cross-sections allows for an asymmetric flux rope signature. Altogether, this thesis explores the varied magnetic phenomena in the Kronian system and uses them to understand the surrounding environment.
AB - This thesis presents an analysis of Cassini magnetometer data in two different regions of the Kronian system. An evaluation of aperiodic waves on the equatorial current sheet is presented; the waves are fitted to a model of a Harris current sheet deformed by a Gaussian wave pulse. This analysis allows examination of the parameters relating to the waves, where the amplitude of the waves is found to increase with radial distance. In addition, the direction of propagation of the waves is found by resolving the wave numbers in two dimensions, where a general outward propagation is found. The use of the Harris current sheet also allows the resolution of current sheet parameters, and it is found that the scale height of the current sheet increases with radial distance. Additionally, values of the magnetic field in the lobes are found using the model, which are then used along with the scale heights to estimate the current density in the azimuthal and radial directions. These values can also be used to calculate, using the divergence of the current, the field-aligned currents entering and leaving the ionosphere, where a current entering the ionosphere pre-noon and a current exiting the ionosphere post-midnight are shown. This current density is then converted to an electron flux in the upward current region, and could produce an additional 1-11 kR of auroral emission, which is seen in other infrared and ultraviolet data sets. Additionally, irregular magnetic signatures, such as the aperiodic waves, are found throughout the system, including Titan’s ionosphere. At Titan, a statistical study of the position of flux ropes finds no spatial dependence other than an increased number of flux ropes in the sun-lit and ram-side regions. A comparison of force-free and non-force-free models is utilised to extract the radii and axial magnetic field of the flux ropes, and to compare the assumptions required for both models.
Additionally, deformations to the models are used to model common asymmetries seen in the magnetometer data; it is found that bending a force-free flux rope resolves the direction ambiguity of minimum variance analysis, and that using elliptical cross-sections allows for an asymmetric flux rope signature. Altogether, this thesis explores the varied magnetic phenomena in the Kronian system and uses them to understand the surrounding environment.
U2 - 10.17635/lancaster/thesis/534
DO - 10.17635/lancaster/thesis/534
M3 - Doctoral Thesis
PB - Lancaster University
ER - | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.835178554058075, "perplexity": 1113.686835214059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058552.54/warc/CC-MAIN-20210927211955-20210928001955-00445.warc.gz"} |
http://www.zora.uzh.ch/id/eprint/7116/ | # Optimization of apparent polar wander paths: an example from the South China plate
Gilder, S A; Tan, X; Bucher, H; Kuang, G (2008). Optimization of apparent polar wander paths: an example from the South China plate. Physics of the earth and planetary interiors, 169(1-4):166-177.
## Abstract
Paleomagnetically derived apparent polar wander paths form the foundation of quantitative plate tectonic reconstructions. However, deformation leading to vertical axis block rotations displaces paleomagnetic poles away from their original positions, leading to an ambiguity as to which pole, or group of poles, best approximates the “true” reference pole position for a given time. Here we show that the best estimate of the “true” reference pole will match the observed paleolatitude (pλ) for each point on the plate. This means that the expected pλ from the “true” reference pole minus the observed pλ, derived from each individual study, will average to zero. Histogram plots and associated parameters help further discriminate between candidate reference poles when more than one of them fulfills the zero-average requirement within prescribed uncertainty limits. Our analysis of 44 Late Permian to Middle Triassic paleomagnetic poles from the South China plate corroborates previous assumptions that the poles from Sichuan Province best represent the “true” reference for the South China plate. To better understand the age of the rotations, we studied the paleomagnetism of Lower and Upper Triassic rocks from the Shiwandashan region in Guangxi Province. Early Triassic paleomagnetic directions isolated at high temperature demagnetization steps are of dual polarity and pass the fold test. This magnetization component is indistinguishable at 95% confidence limits from Middle Triassic paleomagnetic directions from other parts of Guangxi Province. The corresponding pole for this component lies within the swath of Early to Middle Triassic paleomagnetic poles from the South China plate, confirming that Guangxi has been a part of the South China plate since at least the Early Triassic. Late Triassic paleomagnetic data require further study before their complex magnetizations can be interpreted.
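The expected paleolatitude in this comparison follows from the standard great-circle relation between a site and a candidate pole: pλ = 90° minus the site-to-pole angular distance. The function below is an illustrative sketch, not code from the paper, and the coordinates in the example are hypothetical:

```python
from math import sin, cos, acos, degrees, radians

def expected_paleolatitude(site_lat, site_lon, pole_lat, pole_lon):
    """Expected paleolatitude of a site for a candidate reference pole:
    90 degrees minus the great-circle distance from site to pole."""
    sl, pl = radians(site_lat), radians(pole_lat)
    dlon = radians(pole_lon - site_lon)
    cos_p = sin(sl) * sin(pl) + cos(sl) * cos(pl) * cos(dlon)
    cos_p = max(-1.0, min(1.0, cos_p))        # guard against rounding
    return 90.0 - degrees(acos(cos_p))

# If the candidate pole coincides with the geographic north pole, the
# expected paleolatitude equals the site's present latitude:
print(round(expected_paleolatitude(30.0, 104.0, 90.0, 0.0), 6))   # → 30.0
```

Averaging expected-minus-observed pλ over many sites, as described above, then selects the candidate pole whose residuals centre on zero.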
## Statistics
### Citations
7 citations in Web of Science®
7 citations in Scopus®
7 citations in Microsoft Academic | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8196263909339905, "perplexity": 3433.4202419271996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864482.90/warc/CC-MAIN-20180622123642-20180622143642-00512.warc.gz"} |
https://electrochemical.asmedigitalcollection.asme.org/article.aspx?articleid=1471872 | RESEARCH PAPERS
# Aerogel-Based PEMFC Catalysts Operating at Room Temperature
Author and Article Information
A. Smirnova
Department of Materials Science and Engineering, Connecticut Global Fuel Cell Center, UCONN, 44 Weaver Road, Storrs, CT 06269-5233; alevtina@engr.uconn.edu
X. Dong, H. Hara
Aerogel Composite, LLC c/o ICA, Inc., 102-R Filley Street, Unit H, Bloomfield, CT 06002-1853
N. Sammes
Department of Mechanical Engineering, UCONN, 44 Weaver Road, Storrs, CT 06269-5233
J. Fuel Cell Sci. Technol. 3(4), 477-481 (May 04, 2006) (5 pages) doi:10.1115/1.2349532 History: Received December 11, 2005; Revised May 04, 2006
## Abstract
A carbon-aerogel-supported Pt catalyst with a 22 nm pore size distribution and low Pt loading (0.1 mg/cm²) has been tested in a proton exchange membrane fuel cell (PEMFC). The performance of the PEMFC and the kinetic parameters of the catalyst at room temperature are discussed in terms of the microstructure of the support and the sulfonated tetrafluoroethylene (Nafion) distribution. The PEMFCs demonstrated power densities up to 0.5 mW/cm² at 0.6 V in air/hydrogen with 2 atm backpressure on both cathode and anode. Continuous cycling with upper potential sweep limits of 1.0 and 1.2 V leads to degradation effects that result in a decrease of the electrochemical surface area (ESA) of the catalyst. Comparison of the ESA decrease for the 1.0 and 1.2 V sweep limits after 1000 cycles indicated that the higher degradation effects are due to the oxidation of the carbon support.
## Figures
Figure 1
Compensated cell voltage versus (a) current density and (b) logarithm of current density for the cell at 20°C cell temperature and different backpressures in H2/air and H2/O2. Anode flow rate (AFR) = 288 cc/min; cathode flow rate (CFR) = 866 cc/min. The temperature of the anode and cathode humidifiers was maintained at room temperature.
Figure 2
(a) PEMFC performance at 22°C in H2/air; 1 atm backpressure on the anode and 2 atm backpressure on the cathode side after 300 hr of continuous testing, and (b) flow rate values, temperature changes, and corresponding membrane resistances
Figure 3
Cell performances at room temperature and different values of backpressure on the anode and cathode sides
Figure 4
Power density versus (a) current density and (b) cell voltage for the cell with 0.1 mg/cm² of carbon-supported Pt catalyst in the cathode catalyst layer
Figure 5
Performance of the cell after 300 hr of operation at targeted values of current density, viz. 0.2 A/cm² and 1.5 A/cm², in H2/air at room temperature, constant flow rate, and 2 bar backpressure
Figure 6
Current density versus applied potential after cycling from 0.6 to 1.0 V at 30°C, demonstrating the catalyst degradation effects due to the oxidation of Pt in an aerogel-supported Pt catalyst
Figure 7
Current density versus applied potential after cycling from 0.6 to 1.2 V at 30°C, demonstrating the catalyst degradation effects due to the oxidation of carbon in an aerogel-supported Pt catalyst
• EMAIL: asmedigitalcollection@asme.org | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 9, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15884344279766083, "perplexity": 11498.178814176326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578613888.70/warc/CC-MAIN-20190423214818-20190424000818-00330.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-6th-edition/chapter-2-section-2-1-linear-equations-in-one-variable-exercise-set-page-55/26 | ## Intermediate Algebra (6th Edition)
$r=10$
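The answer can be confirmed with exact rational arithmetic; a short, self-contained check:

```python
from fractions import Fraction

coeff = Fraction(4, 5) - Fraction(1, 10)    # 4r/5 - r/10 has coefficient 7/10
r = Fraction(7) / coeff                     # so r = 7 ÷ (7/10)
print(r)                                    # → 10
assert Fraction(4, 5) * r - r / 10 == 7     # check in the original equation
```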
We are given that $\frac{4r}{5}-\frac{r}{10}=7$. First, we can multiply each term by 10. Since this is the least common denominator of each term, this will eliminate all fractions from the equation. $\frac{4r}{5}\times10-\frac{r}{10}\times10=7\times10$ $8r-r=70$ Group like terms on the left side. $7r=70$ Divide both sides by 7. $r=10$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9824604392051697, "perplexity": 181.00437963955054}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592861.86/warc/CC-MAIN-20180721223206-20180722003206-00519.warc.gz"} |
http://mathhelpforum.com/geometry/63863-desargues-s-theorem.html | 1. ## Desargues's Theorem
Two non-parallel lines are drawn on a sheet of paper so that their theoretical intersection is somewhere off the paper. Through a point, P, selected on the part of the paper between the lines, construct the line that would, when sufficiently extended, pass through the intersection of the given lines.
Any guidance as to where to begin (I'm guessing triangles or collinear points need to be drawn before Desargues's Theorem or its converse can be utilized) would be appreciated!
2. ## Desargues's Theorem
Hi -
Desargues's Theorem says that if two triangles are drawn in perspective from a point, then the points of intersection of their corresponding pairs of sides are collinear.
To solve your problem, we need to end up with a diagram like the one attached. l and m are the two initial lines and P is the given point. Triangles ABP and FGH are the ones that will end up being in perspective from the point of intersection off the paper. The dotted line n is the one we need to find, and the line k represents the line joining pairs of corresponding sides.
To draw the diagram, draw the given lines l and m, and the point P. Then the line k (more or less anywhere); then the other points in alphabetical order. Start with A and B in arbitrary positions on the lines m and l. This fixes C, D and E. Then choose any convenient point for F. Then find G and finally H. Join PH to find the line n.
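The construction can be verified numerically with homogeneous coordinates. Everything below is made-up test data (the two lines, P, k, and the arbitrary choices of A, B, F); the check at the end confirms that the resulting line n really passes through the off-paper intersection O of l and m:

```python
def line(p, q):
    """Homogeneous line through 2D points p and q (a cross product)."""
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def meet(a, b):
    """Intersection of two homogeneous lines, dehomogenised."""
    x = a[1] * b[2] - a[2] * b[1]
    y = a[2] * b[0] - a[0] * b[2]
    w = a[0] * b[1] - a[1] * b[0]
    return (x / w, y / w)

def along(p, q, t):
    """p + t*(q - p): picks an arbitrary point on a known line."""
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

# Given lines l and m (intersection O assumed off the paper) and point P.
l0, l1 = (0.0, 0.0), (10.0, 1.0)
m0, m1 = (0.0, 4.0), (10.0, 2.5)
l, m = line(l0, l1), line(m0, m1)
P = (3.0, 2.0)

k = line((1.0, -1.0), (1.5, 5.0))              # arbitrary line k
A, B = along(m0, m1, 0.4), along(l0, l1, 0.5)  # arbitrary A on m, B on l
C = meet(line(A, B), k)                        # C, D, E are now fixed
D = meet(line(A, P), k)
E = meet(line(B, P), k)
F = along(m0, m1, 0.15)                        # arbitrary F on m
G = meet(line(C, F), l)                        # side FG must pass through C
H = meet(line(D, F), line(E, G))               # FH through D, GH through E
n = line(P, H)                                 # the sought line through P

O = meet(l, m)                                 # the "off-paper" point (16, 1.6)
X = meet(n, l)                                 # where n crosses l
print(abs(X[0] - O[0]) < 1e-6 and abs(X[1] - O[1]) < 1e-6)   # → True
```

By the converse of Desargues's theorem, triangles ABP and FGH are in perspective from a point, and since AF lies along m and BG along l, that point is O; hence PH passes through it.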
Hope that helps. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8084163069725037, "perplexity": 466.59753514927087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187828356.82/warc/CC-MAIN-20171024090757-20171024110757-00178.warc.gz"} |
https://puzzling.stackexchange.com/questions/85055/catching-a-robber-on-one-line/85108 | # Catching a robber on one line
At x = 0, a thief robbed a bank. The thief ran one of two known directions at a constant speed, towards x < 0 or towards x > 0. The cop arrives at the crime scene some unknown time after the robbery. If the cop is faster than the robber, and traveling at a constant speed as well, is there a guaranteed way of catching the thief?
• welcome here! sorry, but this seems not on-topic, according to the scope defined in the help center. such off-topic posts may get deleted or closed. please check the help center to see what questions you should/ can ask here on P.SE. happy puzzling! ;) – Omega Krypton Jun 14 '19 at 1:36
• I disagree. This should be on topic. IF puzzling.stackexchange.com/questions/36565/… is ok, then this is. Besides the cited precedent, I must point out that this question, while mathematical in nature, isn't purely mathematical, and it certainly has a real enough interpretation to be very interesting. – greenturtle3141 Jun 14 '19 at 2:03
• This definitely belongs to Puzzling, and it's a great riddle, since it has a twist: while one might think that the cop has only 50% chance of catching the thief, this turns out not to be the case! (see answer below) – dr01 Jun 14 '19 at 11:13
Yes, it's possible.
First, assume the robber left one minute before you arrived and ran left. Run left until you catch up with the position that the robber would now be if that was the case.
Then, assume that the robber left one minute before you arrived and ran right. Run right until you catch up with the position the robber would now be if that was the case.
Then, assume that the robber left two minutes before you arrived and ran left. Run left until you catch up with the position that the robber would now be if that was the case.
Then, assume that the robber left two minutes before you arrived and ran right. Run right until you catch up with the position that the robber would now be if that was the case.
Then, assume that the robber left three minutes before you arrived and ran left...
It takes longer and longer to catch up to these imaginary robbers because of the time you use running back and forth, but eventually one of your assumptions will be correct, and so the imaginary robber in your assumption will be the real robber.
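This strategy can be simulated end to end. The sketch below is illustrative: the cop's speed is fixed at 1, and the robber's speed v, head start t0, and direction d are made-up test parameters. It assumes the robber's speed is known, as in this part of the answer; each leg is handled analytically rather than step by step:

```python
def catch_time(v, t0, d, max_rounds=1000):
    """Chase imaginary robbers who left 1, 2, 3, ... minutes early,
    alternating directions; return the time at which the cop's path
    crosses the real robber (speed v < 1, head start t0, direction d)."""
    x, T = 0.0, 0.0                       # cop position and elapsed time
    for k in range(1, max_rounds + 1):    # assumed head start, in minutes
        for s in (-1, +1):                # assumed direction of flight
            # time to catch the imaginary robber at s*v*(k + T + tau)
            tau = (v * (k + T) - s * x) / (1 - v)
            # does the cop cross the real robber during this leg?
            u = (d * v * (t0 + T) - x) / (s - d * v)
            if 0 <= u <= tau:
                return T + u              # caught
            x += s * tau                  # cop reaches the imaginary robber
            T += tau
    return None

# The cop wins whichever way the robber actually ran:
print(all(catch_time(0.5, 3.7, d) is not None for d in (-1, +1)))   # → True
```

Once the assumed head start k reaches the true t0 with the matching direction, the cop starts behind the real robber but ends that leg at or beyond the imaginary (faster-started) one, so a crossing is guaranteed.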
But what if
you don't know the speed of the robber?
It's still possible in this case:
we can use a similar strategy, but modifying the assumptions made. Now, every round includes an assumption about the robber's speed: in the first one, you assume the robber has (at most) 1/2 of your speed, in the second you assume the robber has (at most) 3/4 of your speed, in the third you assume the robber has (at most) 7/8 of your speed, and so on. Since the robber is strictly slower than you, at some point this assumption will be correct. And so eventually, both your speed and time assumptions will be good enough, and you'll pick the right direction and catch up to the robber.
• To elaborate, (I think this is right?) we can represent each possibility as a pair $(v,t)$ where $v$ is robber's speed and $t$ is time after robber left. We need to check every pair. Clearly, we can check any pair. If we check a pair $(v_1,t_1)$, we effectively check all $(v_2,t_2)$ where $v_2<v_1$ and $t_2<t_1$. If $c$ is the cop's speed, then $v$ is contained in one of the intervals $[0,.9c]$, $[0,.99c]$, ..., and $t$ is contained in one of $[0,1]$, $[0,2]$, ... so we have a 2D array to check of cardinality $|\mathbb{N}^2| = |\mathbb{N}|$, so we can indeed check all possibilities. – greenturtle3141 Jun 14 '19 at 2:18
• @greenturtle3141 Right -- I phrased it less mathematically, but this is effectively a cardinality argument in disguise. You don't need to check every point in the array though, because checking any point $(v,t)$ automatically gives you all points $(v',t')$ with $v'\leq v$ and $t'\leq t$. So you only need to check the points along the main diagonal. – Deusovi Jun 14 '19 at 2:33
• @Daniel You don't make any assumptions about the cop's speed -- you know the cop's speed. And no, the strategy is not to choose that the robber is very slow -- the strategy is to assume the robber is fast, and get better and better approximations. The robber also does not oscillate -- your comment makes no sense to me. – Deusovi Jun 14 '19 at 5:32
• @Syndic The only problem with that line of thinking is that it can also be extended to the three-minute point, and the four-minute point, and so on infinitely. If you follow that train of thought, then the cop should only ever travel in one direction for fear of being inefficient. – AleksandrH Jun 14 '19 at 13:20
• @AleksandrH I propose still doing "first both 1-minute-points, then both 2-minute-points, then both 3..." - just with a slight switch in the order. Instead of going L1-R1-L2-R2-L3-R3, you should run L1-R1-R2-L2-L3-R3. The first one would have you run 1+2+3+4+5+6 distances, the second one 1+2+1+4+1+6 (not really since the distances keep growing, but I hope this is enough as a thinking aid^^) – Syndic Jun 14 '19 at 13:25
Just a guess, but...
If we're only dealing with x < 0 and x > 0, then the cop had to arrive at the crime scene from one of those two directions. If he didn't encounter the robber on his way toward the bank, then doesn't he simply have to go in the direction opposite the one he came in?
• Since this is mathematical in nature, we can make the assumption that this interpretation is not the intention. – greenturtle3141 Jun 14 '19 at 1:59
• Figured as much! That would've been too easy :D – AleksandrH Jun 14 '19 at 13:18
thief ran one of two known directions
and
the cop is faster than the robber
So it's garanteed if the cop goes in the known direction for some finite amount of time. Life is guaranteed by no one, so we can't garentee he'll catch the robber. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7958588600158691, "perplexity": 561.3596722989909}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250613416.54/warc/CC-MAIN-20200123191130-20200123220130-00259.warc.gz"} |
http://interactivepython.org/runestone/static/StudentCSP/CSPNameNames/imageLib.html | # Using an Image Library
Similarly, in the image processing example, we used `from image import *`. That made the functions `getPixels()` and `getRed()` accessible. We could also define a new function that returns a new color, or a new procedure that changes the image.
The `for p in pixels` on line 9 lets us loop through all of the pixels in the image and change the red value for each pixel. We’ll talk more about looping (repeating steps) in the next chapter.
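The loop relies on the course's own `image` library, so the sketch below substitutes plain `[r, g, b]` lists to show the same idea; the pixel values are made up:

```python
# Pixels as [r, g, b] lists, standing in for the course library's objects.
pixels = [[200, 80, 40], [100, 50, 25], [255, 0, 0]]

for p in pixels:             # mirrors `for p in pixels` in the text
    r = p[0]                 # getRed(p)
    p[0] = int(r * 0.5)      # p.setRed(r * 0.5): half the original red

print(pixels)                # → [[100, 80, 40], [50, 50, 25], [127, 0, 0]]
```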
csp-6-6-1: What does the line p.setRed(r * 0.5) do?
• It sets the red value in the current pixel to half the red of the original. (Correct: multiplying by 0.5 is the same as dividing by 2.)
• It sets the red value in the current pixel to twice the red of the original. (Incorrect: that would be `r * 2`, not `r * 0.5`.)
• It sets the red value in the current pixel to 5 times the red of the original. (Incorrect: that would be `r * 5`, not `r * 0.5`.)
• It sets the red value in the current pixel to 0.5. (Incorrect: that would be `0.5`, not `r * 0.5`.)
This ability to name functions and procedures, and sets of functions and procedures, and absolutely anything and any set of things in a computer is very powerful. It allows us to create abstractions that make the computer easier to program and use. More on that in a future chapter.
Note
Discuss topics in this section with classmates. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.451342910528183, "perplexity": 675.1452267155656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741087.23/warc/CC-MAIN-20181112193627-20181112215627-00486.warc.gz"} |
https://socratic.org/questions/find-out-the-volume-of-6-023-10-of-ammonia-at-stp | Chemistry
# Find out the volume of 6.023 × 10²² molecules of ammonia at STP?
Aug 17, 2017
The volume is 2.271 L.
#### Explanation:
Method 1. Using the Ideal Gas Law
We can use the Ideal Gas Law to solve this problem.
$pV = nRT$
where
• $p$ is the pressure
• $V$ is the volume
• $n$ is the number of moles
• $R$ is the gas constant
• $T$ is the temperature
We can rearrange the Ideal Gas Law to get
$V = \frac{n R T}{p}$
Step 1. Calculate the moles of ammonia
$n = 6.023 \times 10^{22}\ \text{molecules NH}_3 \times \dfrac{1\ \text{mol NH}_3}{6.022 \times 10^{23}\ \text{molecules NH}_3} = 0.10002\ \text{mol NH}_3$
Step 2. Calculate the volume at STP
Remember that STP is defined as 0 °C and 1 bar.
$n = 0.10002\ \text{mol}$
$R = 0.08314\ \text{bar·L·K}^{-1}\text{·mol}^{-1}$
$T = (0 + 273.15)\ \text{K} = 273.15\ \text{K}$
$p = 1\ \text{bar}$
$V = \dfrac{nRT}{p} = \dfrac{0.10002\ \text{mol} \times 0.08314\ \text{bar·L·K}^{-1}\text{·mol}^{-1} \times 273.15\ \text{K}}{1\ \text{bar}} = 2.271\ \text{L}$
Method 2. Using the molar volume
We know that there are 0.10002 mol of $\text{NH}_3$.
We also know that the molar volume of a gas is 22.71 L at STP.
$V = 0.10002\ \text{mol NH}_3 \times \dfrac{22.71\ \text{L NH}_3}{1\ \text{mol NH}_3} = 2.271\ \text{L NH}_3$
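Both methods reduce to a few lines of arithmetic; a quick sketch reproducing the numbers above:

```python
N_A = 6.022e23                 # Avogadro constant, mol^-1
R = 0.08314                    # gas constant, bar·L·K^-1·mol^-1
T, p = 273.15, 1.0             # STP: 0 °C and 1 bar

n = 6.023e22 / N_A             # moles of NH3, ≈ 0.10002
V_ideal = n * R * T / p        # Method 1: ideal gas law
V_molar = n * 22.71            # Method 2: molar volume at STP

print(round(V_ideal, 3), round(V_molar, 3))   # → 2.271 2.271
```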
# Preface
## Chapter 0: Preface
### The strange history of this book
(This section was written by Allen B. Downey[1])
In January 1999, I was preparing to teach an introductory programming class in Java. I had taught it three times and I was getting frustrated. The failure rate in the class was too high and, even for students who succeeded, the overall level of achievement was too low.
One of the problems I saw was the books. They were too big, with too much unnecessary detail about Java, and not enough high-level guidance about how to program. And they all suffered from the "trapdoor effect": they would start out easy, proceed gradually, and then somewhere around Chapter 5 the bottom would fall out. The students would get too much new material, too fast, and I would spend the rest of the semester picking up the pieces.
Two weeks before the first day of class, I decided to write my own book.
My goals were:
• Keep it short. It is better for students to read 10 pages than not read 50 pages.
• Be careful with vocabulary. I tried to minimize the jargon and define each term at first use.
• Build gradually. To avoid trapdoors, I took the most difficult topics and split them into a series of small steps.
• Focus on programming, not the programming language. I included the minimum useful subset of Java and left out the rest.
I needed a title, so on a whim I chose How to Think Like a Computer Scientist.
My first version was rough, but it worked. Students did the reading, and they understood enough that I could spend class time on the hard topics, the interesting topics and (most important) letting the students practice.
I released the book under the GNU Free Documentation License, which allows users to copy, modify, and distribute the book.
What happened next is the cool part. Jeff Elkner, a high school teacher in Virginia, adopted my book and translated it into Python. He sent me a copy of his translation, and I had the unusual experience of learning Python by reading my own book.
Jeff and I revised the book, incorporated a case study by Chris Meyers, and in 2001 we released How to Think Like a Computer Scientist: Learning with Python, also under the GNU Free Documentation License. As Green Tea Press, I published the book and started selling hard copies through Amazon.com and college book stores. Other books from Green Tea Press are available at greenteapress.com.
In 2003, I started teaching at Olin College and I got to teach Python for the first time. The contrast with Java was striking. Students struggled less, learned more, worked on more interesting projects, and generally had a lot more fun.
Over the last five years I have continued to develop the book, correcting errors, improving some of the examples and adding material, especially exercises. In 2008 I started work on a major revision—at the same time, I was contacted by an editor at Cambridge University Press who was interested in publishing the next edition. Good timing!
The result is this book, now with the less grandiose title Think Python. Some of the changes are:
• I added a section about debugging at the end of each chapter. These sections present general techniques for finding and avoiding bugs, and warnings about Python pitfalls.
• I removed the material in the last few chapters about the implementation of lists and trees. I still love those topics, but I thought they were incongruent with the rest of the book.
• I added more exercises, ranging from short tests of understanding to a few substantial projects.
• I added a series of case studies—longer examples with exercises, solutions, and discussion. Some of them are based on Swampy, a suite of Python programs I wrote for use in my classes. Swampy, code examples, and some solutions are available from thinkpython.com.
• I expanded the discussion of programming development plans and basic design patterns.
• The use of Python is more idiomatic. The book is still about programming, not Python, but now I think the book gets more leverage from the language.
I hope you enjoy working with this book, and that it helps you learn to program and think, at least a little bit, like a computer scientist.
Allen B. Downey
Needham MA
Allen Downey is an Associate Professor of Computer Science at the Franklin W. Olin College of Engineering.
### Acknowledgements
First and most importantly, I thank Jeff Elkner, who translated my Java book into Python, which got this project started and introduced me to what has turned out to be my favorite language.
I also thank Chris Meyers, who contributed several sections to How to Think Like a Computer Scientist.
And I thank the Free Software Foundation for developing the GNU Free Documentation License, which helped make my collaboration with Jeff and Chris possible.
I also thank the editors at Lulu who worked on How to Think Like a Computer Scientist.
I thank all the students who worked with earlier versions of this book and all the contributors (listed below) who sent in corrections and suggestions.
And I thank my wife, Lisa, for her work on this book, and Green Tea Press, and everything else, too.
### Contributor List
More than 100 sharp-eyed and thoughtful readers have sent in suggestions and corrections over the past few years. Their contributions, and enthusiasm for this project, have been a huge help.
If you have a suggestion or correction, please send email to feedback@thinkpython.com. If I make a change based on your feedback, I will add you to the contributor list (unless you ask to be omitted).
If you include at least part of the sentence the error appears in, that makes it easy for me to search. Page and section numbers are fine, too, but not quite as easy to work with. Thanks!
• Lloyd Hugh Allen sent in a correction to Section 8.4.
• Yvon Boulianne sent in a correction of a semantic error in Chapter 5.
• Fred Bremmer submitted a correction in Section 2.1.
• Jonah Cohen wrote the Perl scripts to convert the LaTeX source for this book into beautiful HTML.
• Michael Conlon sent in a grammar correction in Chapter 2 and an improvement in style in Chapter 1, and he initiated discussion on the technical aspects of interpreters.
• Benoit Girard sent in a correction to a humorous mistake in Section 5.6.
• Courtney Gleason and Katherine Smith wrote horsebet.py, which was used as a case study in an earlier version of the book. Their program can now be found on the website.
• Lee Harr submitted more corrections than we have room to list here, and indeed he should be listed as one of the principal editors of the text.
• James Kaylin is a student using the text. He has submitted numerous corrections.
• David Kershaw fixed the broken catTwice function in Section 3.10.
• Eddie Lam has sent in numerous corrections to Chapters 1, 2, and 3. He also fixed the Makefile so that it creates an index the first time it is run and helped us set up a versioning scheme.
• Man-Yong Lee sent in a correction to the example code in Section 2.4.
• David Mayo pointed out that the word "unconsciously" in Chapter 1 needed to be changed to "subconsciously".
• Chris McAloon sent in several corrections to Sections 3.9 and 3.10.
• Matthew J. Moelter has been a long-time contributor who sent in numerous corrections and suggestions to the book.
• Simon Dicon Montford reported a missing function definition and several typos in Chapter 3. He also found errors in the increment function in Chapter 13.
• John Ouzts corrected the definition of "return value" in Chapter 3.
• Kevin Parks sent in valuable comments and suggestions as to how to improve the distribution of the book.
• David Pool sent in a typo in the glossary of Chapter 1, as well as kind words of encouragement.
• Michael Schmitt sent in a correction to the chapter on files and exceptions.
• Robin Shaw pointed out an error in Section 13.1, where the printTime function was used in an example without being defined.
• Paul Sleigh found an error in Chapter 7 and a bug in Jonah Cohen’s Perl script that generates HTML from LaTeX.
• Craig T. Snydal is testing the text in a course at Drew University. He has contributed several valuable suggestions and corrections.
• Ian Thomas and his students are using the text in a programming course. They are the first ones to test the chapters in the latter half of the book, and they have made numerous corrections and suggestions.
• Keith Verheyden sent in a correction in Chapter 3.
• Peter Winstanley let us know about a longstanding error in our Latin in Chapter 3.
• Chris Wrobel made corrections to the code in the chapter on file I/O and exceptions.
• Moshe Zadka has made invaluable contributions to this project. In addition to writing the first draft of the chapter on Dictionaries, he provided continual guidance in the early stages of the book.
• Christoph Zwerschke sent several corrections and pedagogic suggestions, and explained the difference between gleich and selbe.
• James Mayer sent us a whole slew of spelling and typographical errors, including two in the contributor list.
• Hayden McAfee caught a potentially confusing inconsistency between two examples.
• Angel Arnal is part of an international team of translators working on the Spanish version of the text. He has also found several errors in the English version.
• Tauhidul Hoque and Lex Berezhny created the illustrations in Chapter 1 and improved many of the other illustrations.
• Dr. Michele Alzetta caught an error in Chapter 8 and sent some interesting pedagogic comments and suggestions about Fibonacci and Old Maid.
• Andy Mitchell caught a typo in Chapter 1 and a broken example in Chapter 2.
• Kalin Harvey suggested a clarification in Chapter 7 and caught some typos.
• Christopher P. Smith caught several typos and is helping us prepare to update the book for Python 2.2.
• David Hutchins caught a typo in the Foreword.
• Gregor Lingl is teaching Python at a high school in Vienna, Austria. He is working on a German translation of the book, and he caught a couple of bad errors in Chapter 5.
• Julie Peters caught a typo in the Preface.
• Florin Oprina sent in an improvement in makeTime, a correction in printTime, and a nice typo.
• D. J. Webre suggested a clarification in Chapter 3.
• Ken found a fistful of errors in Chapters 8, 9 and 11.
• Ivo Wever caught a typo in Chapter 5 and suggested a clarification in Chapter 3.
• Curtis Yanko suggested a clarification in Chapter 2.
• Ben Logan sent in a number of typos and problems with translating the book into HTML.
• Jason Armstrong saw the missing word in Chapter 2.
• Louis Cordier noticed a spot in Chapter 16 where the code didn't match the text.
• Brian Cain suggested several clarifications in Chapters 2 and 3.
• Rob Black sent in a passel of corrections, including some changes for Python 2.2.
• Jean-Philippe Rey at Ecole Centrale Paris sent a number of patches, including some updates for Python 2.2 and other thoughtful improvements.
• Jason Mader at George Washington University made a number of useful suggestions and corrections.
• Jan Gundtofte-Bruun reminded us that “a error” is an error.
• Abel David and Alexis Dinno reminded us that the plural of “matrix” is “matrices”, not “matrixes”. This error was in the book for years, but two readers with the same initials reported it on the same day. Weird.
• Charles Thayer encouraged us to get rid of the semi-colons we had put at the ends of some statements and to clean up our use of “argument” and “parameter”.
• Roger Sperberg pointed out a twisted piece of logic in Chapter 3.
• Sam Bull pointed out a confusing paragraph in Chapter 2.
• Andrew Cheung pointed out two instances of “use before def.”
• C. Corey Capel spotted the missing word in the Third Theorem of Debugging and a typo in Chapter 4.
• Alessandra helped clear up some Turtle confusion.
• Wim Champagne found a brain-o in a dictionary example.
• Douglas Wright pointed out a problem with floor division in arc.
• Jared Spindor found some jetsam at the end of a sentence.
• Lin Peiheng sent a number of very helpful suggestions.
• Ray Hagtvedt sent in two errors and a not-quite-error.
• Torsten Hübsch pointed out an inconsistency in Swampy.
• Inga Petuhhov corrected an example in Chapter 14.
• Arne Babenhauserheide sent several helpful corrections.
• Mark E. Casida is is good at spotting repeated words.
• Scott Tyler filled in a that was missing. And then sent in a heap of corrections.
• Gordon Shephard sent in several corrections, all in separate emails.
• Andrew Turner spotted an error in Chapter 8.
• Adam Hobart fixed a problem with floor division in arc.
• Daryl Hammond and Sarah Zimmerman pointed out that I served up math.pi too early. And Zim spotted a typo.
• George Sass found a bug in a Debugging section.
• Brian Bingham suggested Exercise 11.9.
• Leah Engelbert-Fenton pointed out that I used tuple as a variable name, contrary to my own advice. And then found a bunch of typos and a “use before def.”
• Joe Funke spotted a typo.
• Chao-chao Chen found an inconsistency in the Fibonacci example.
• Jeff Paine knows the difference between space and spam.
• Lubos Pintes sent in a typo.
• Gregg Lind and Abigail Heithoff suggested Exercise 14.6.
• Max Hailperin pointed out a change coming in Python 3.0. Max is one of the authors of the extraordinary Concrete Abstractions, which you might want to read when you are done with this book.
• Chotipat Pornavalai found an error in an error message.
• Stanislaw Antol sent a list of very helpful suggestions.
• Eric Pashman sent a number of corrections for Chapters 4–11.
• Miguel Azevedo found some typos.
• Jianhua Liu sent in a long list of corrections.
• Nick King found a missing word.
• Martin Zuther sent a long list of suggestions.
• Adam Zimmerman found an inconsistency in my instance of an “instance” and several other errors.
• Ratnakar Tiwari suggested a footnote explaining degenerate triangles.
• Anurag Goel suggested another solution for is_abecedarian and sent some additional corrections. And he knows how to spell Jane Austen.
• Kelli Kratzer spotted one of they typos.
• Mark Griffiths pointed out a confusing example in Chapter 3.
• Roydan Ongie found an error in my Newton’s method.
### The further strange adventures of this book
In September of 2008, Whiteknight converted the HTML version of "Think Python" at Green Tea Press[2] to a Wikitext version at Wikibooks[3]. Now anyone can improve the text.
# The way of the program
The goal of this book is to teach you to think like a computer scientist. This way of thinking combines some of the best features of mathematics, engineering, and natural science. Like mathematicians, computer scientists use formal languages to denote ideas (specifically computations). Like engineers, they design things, assembling components into systems and evaluating tradeoffs among alternatives. Like scientists, they observe the behavior of complex systems, form hypotheses, and test predictions.
The single most important skill for a computer scientist is problem solving. Problem solving means the ability to formulate problems, think creatively about solutions, and express a solution clearly and accurately. As it turns out, the process of learning to program is an excellent opportunity to practice problem-solving skills. That’s why this chapter is called, “The way of the program.”
On one level, you will be learning to program, a useful skill by itself. On another level, you will use programming as a means to an end. As we go along, that end will become clearer.
### The Python programming language
The programming language you will learn is Python. Python is an example of a high-level language; other high-level languages you might have heard of are C, C++, Perl, and Java.
There are also low-level languages, sometimes referred to as “machine languages” or “assembly languages.” Loosely speaking, computers can only execute programs written in low-level languages. So programs written in a high-level language have to be processed before they can run. This extra processing takes some time, which is a small disadvantage of high-level languages.
The advantages are enormous. First, it is much easier to program in a high-level language. Programs written in a high-level language take less time to write, they are shorter and easier to read, and they are more likely to be correct. Second, high-level languages are portable, meaning that they can run on different kinds of computers with few or no modifications. Low-level programs can run on only one kind of computer and have to be rewritten to run on another.
Due to these advantages, almost all programs are written in high-level languages. Low-level languages are used only for a few specialized applications.
Two kinds of programs process high-level languages into low-level languages: interpreters and compilers. An interpreter reads a high-level program and executes it, meaning that it does what the program says. It processes the program a little at a time, alternately reading lines and performing computations.
A compiler reads the program and translates it completely before the program starts running. In this context, the high-level program is called the source code, and the translated program is called the object code or the executable. Once a program is compiled, you can execute it repeatedly without further translation.
Python is considered an interpreted language because Python programs are executed by an interpreter. There are two ways to use the interpreter: interactive mode and script mode. In interactive mode, you type Python programs and the interpreter prints the result:
>>> 1 + 1
2
The chevron, >>>, is the prompt the interpreter uses to indicate that it is ready. If you type 1 + 1, the interpreter replies 2.
Alternatively, you can store code in a file and use the interpreter to execute the contents of the file, which is called a script. By convention, Python scripts have names that end with .py.
To execute the script, you have to tell the interpreter the name of the file. In a UNIX command window, you would type python dinsdale.py. In other development environments, the details of executing scripts are different. You can find instructions for your environment at the Python Website python.org.
Working in interactive mode is convenient for testing small pieces of code because you can type and execute them immediately. But for anything more than a few lines, you should save your code as a script so you can modify and execute it in the future.
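To make the contrast concrete, here is the same tiny computation written as a script (the filename is just an illustration). Unlike interactive mode, a script does not automatically echo the value of an expression, so you must print it explicitly:

```python
# Contents of a minimal script; save it as, say, two.py and run it
# with "python two.py" from a command window.
# In interactive mode the interpreter would echo the result of 1 + 1;
# in script mode nothing appears unless you print it.
print(1 + 1)
```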
### What is a program?
A program is a sequence of instructions that specifies how to perform a computation. The computation might be something mathematical, such as solving a system of equations or finding the roots of a polynomial, but it can also be a symbolic computation, such as searching and replacing text in a document or (strangely enough) compiling a program.
The details look different in different languages, but a few basic instructions appear in just about every language:
input:
Get data from the keyboard, a file, or some other device.
output:
Display data on the screen or send data to a file or other device.
math:
Perform basic mathematical operations like addition and multiplication.
conditional execution:
Check for certain conditions and execute the appropriate sequence of statements.
repetition:
Perform some action repeatedly, usually with some variation.
Believe it or not, that’s pretty much all there is to it. Every program you’ve ever used, no matter how complicated, is made up of instructions that look pretty much like these. So you can think of programming as the process of breaking a large, complex task into smaller and smaller subtasks until the subtasks are simple enough to be performed with one of these basic instructions.
That may be a little vague, but we will come back to this topic when we talk about algorithms.
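As a small illustration (not from the book), here is a program that combines several of the basic instructions listed above: math, conditional execution, repetition, and output:

```python
# A tiny program built from the basic instructions described above.
total = 0
for n in range(1, 6):              # repetition: n takes the values 1 through 5
    total = total + n              # math: accumulate a running sum
if total > 10:                     # conditional execution
    print('big total: ' + str(total))   # output
else:
    print('small total: ' + str(total))
```

Running it prints `big total: 15`, since 1 + 2 + 3 + 4 + 5 = 15.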
### What is debugging?
Programming is error-prone. For whimsical reasons, programming errors are called bugs and the process of tracking them down is called debugging.
Three kinds of errors can occur in a program: syntax errors, runtime errors, and semantic errors. It is useful to distinguish between them in order to track them down more quickly.
#### Syntax errors
Python can only execute a program if the syntax is correct; otherwise, the interpreter displays an error message. Syntax refers to the structure of a program and the rules about that structure. For example, parentheses have to come in matching pairs, so (1 + 2) is legal, but 8) is a syntax error.
In English readers can tolerate most syntax errors, which is why we can read the poetry of E. E. Cummings without spewing error messages. Python is not so forgiving. If there is a single syntax error anywhere in your program, Python will display an error message and quit, and you will not be able to run your program. During the first few weeks of your programming career, you will probably spend a lot of time tracking down syntax errors. As you gain experience, you will make fewer errors and find them faster.
#### Runtime errors
The second type of error is a runtime error, so called because the error does not appear until after the program has started running. These errors are also called exceptions because they usually indicate that something exceptional (and bad) has happened.
Runtime errors are rare in the simple programs you will see in the first few chapters, so it might be a while before you encounter one.
#### Semantic errors
The third type of error is the semantic error. If there is a semantic error in your program, it will run successfully in the sense that the computer will not generate any error messages, but it will not do the right thing. It will do something else. Specifically, it will do what you told it to do.
The problem is that the program you wrote is not the program you wanted to write. The meaning of the program (its semantics) is wrong. Identifying semantic errors can be tricky because it requires you to work backward by looking at the output of the program and trying to figure out what it is doing.
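A hypothetical example of a semantic error (the intended meaning here is assumed: computing the average of two numbers):

```python
# The program runs without any error message, but it does the wrong thing.
a = 5.0
b = 9.0
average = a + b / 2        # bug: division binds tighter, so this is a + (b / 2)
print(average)             # → 9.5, not the intended average
average = (a + b) / 2      # parentheses express what we meant
print(average)             # → 7.0
```

Python happily executed the first version; it simply was not the program the programmer wanted to write.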
#### Experimental debugging
One of the most important skills you will acquire is debugging. Although it can be frustrating, debugging is one of the most intellectually rich, challenging, and interesting parts of programming.
In some ways, debugging is like detective work. You are confronted with clues, and you have to infer the processes and events that led to the results you see.
Debugging is also like an experimental science. Once you have an idea about what is going wrong, you modify your program and try again. If your hypothesis was correct, then you can predict the result of the modification, and you take a step closer to a working program. If your hypothesis was wrong, you have to come up with a new one. As Sherlock Holmes pointed out, “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.” (A. Conan Doyle, The Sign of Four)
For some people, programming and debugging are the same thing. That is, programming is the process of gradually debugging a program until it does what you want. The idea is that you should start with a program that does something and make small modifications, debugging them as you go, so that you always have a working program.
For example, Linux is an operating system that contains thousands of lines of code, but it started out as a simple program Linus Torvalds used to explore the Intel 80386 chip. According to Larry Greenfield, “One of Linus’s earlier projects was a program that would switch between printing AAAA and BBBB. This later evolved to Linux.” (The Linux Users’ Guide Beta Version 1).
Later chapters will make more suggestions about debugging and other programming practices.
### Formal and natural languages
Natural languages are the languages people speak, such as English, Spanish, and French. They were not designed by people (although people try to impose some order on them); they evolved naturally.
Formal languages are languages that are designed by people for specific applications. For example, the notation that mathematicians use is a formal language that is particularly good at denoting relationships among numbers and symbols. Chemists use a formal language to represent the chemical structure of molecules. And most importantly:
Programming languages are formal languages that have been designed to express computations.
Formal languages tend to have strict rules about syntax. For example, 3 + 3 = 6 is a syntactically correct mathematical statement, but 3 + = 3 $ 6 is not. H2O is a syntactically correct chemical formula, but 2Zz is not.

Syntax rules come in two flavors, pertaining to tokens and structure. Tokens are the basic elements of the language, such as words, numbers, and chemical elements. One of the problems with 3 + = 3 $ 6 is that $ is not a legal token in mathematics (at least as far as I know). Similarly, 2Zz is not legal because there is no element with the abbreviation Zz.

The second type of syntax error pertains to the structure of a statement; that is, the way the tokens are arranged. The statement 3 + = 3 $ 6 is illegal because even though + and = are legal tokens, you can't have one right after the other. Similarly, in a chemical formula the subscript comes after the element name, not before.
Exercise 1
Write a well-structured English sentence with invalid tokens in it. Then write another sentence with all valid tokens but with invalid structure.
When you read a sentence in English or a statement in a formal language, you have to figure out what the structure of the sentence is (although in a natural language you do this subconsciously). This process is called parsing.
For example, when you hear the sentence, “The penny dropped,” you understand that “the penny” is the subject and “dropped” is the predicate. Once you have parsed a sentence, you can figure out what it means, or the semantics of the sentence. Assuming that you know what a penny is and what it means to drop, you will understand the general implication of this sentence.
Although formal and natural languages have many features in common—tokens, structure, syntax, and semantics—there are some differences:
ambiguity:
Natural languages are full of ambiguity, which people deal with by using contextual clues and other information. Formal languages are designed to be nearly or completely unambiguous, which means that any statement has exactly one meaning, regardless of context.
redundancy:
In order to make up for ambiguity and reduce misunderstandings, natural languages employ lots of redundancy. As a result, they are often verbose. Formal languages are less redundant and more concise.
literalness:
Natural languages are full of idiom and metaphor. If I say, “The penny dropped,” there is probably no penny and nothing dropping[1]. Formal languages mean exactly what they say.
People who grow up speaking a natural language—everyone—often have a hard time adjusting to formal languages. In some ways, the difference between formal and natural language is like the difference between poetry and prose, but more so:
Poetry:
Words are used for their sounds as well as for their meaning, and the whole poem together creates an effect or emotional response. Ambiguity is not only common but often deliberate.
Prose:
The literal meaning of words is more important, and the structure contributes more meaning. Prose is more amenable to analysis than poetry but still often ambiguous.
Programs:
The meaning of a computer program is unambiguous and literal, and can be understood entirely by analysis of the tokens and structure.
Here are some suggestions for reading programs (and other formal languages). First, remember that formal languages are much more dense than natural languages, so it takes longer to read them. Also, the structure is very important, so it is usually not a good idea to read from top to bottom, left to right. Instead, learn to parse the program in your head, identifying the tokens and interpreting the structure. Finally, the details matter. Small errors in spelling and punctuation, which you can get away with in natural languages, can make a big difference in a formal language.
### The first program
Traditionally, the first program you write in a new language is called “Hello, World!” because all it does is display the words, “Hello, World!” In Python, it looks like this:
print 'Hello, World!'
This is an example of a print statement[2], which doesn’t actually print anything on paper. It displays a value on the screen. In this case, the result is the words
Hello, World!
The quotation marks in the program mark the beginning and end of the text to be displayed; they don’t appear in the result.
Some people judge the quality of a programming language by the simplicity of the “Hello, World!” program. By this standard, Python does about as well as possible.
### Debugging
It is a good idea to read this book in front of a computer so you can try out the examples as you go. You can run most of the examples in interactive mode, but if you put the code into a script, it is easier to try out variations.
Whenever you are experimenting with a new feature, you should try to make mistakes. For example, in the “Hello, world!” program, what happens if you leave out one of the quotation marks? What if you leave out both? What if you spell print wrong?
This kind of experiment helps you remember what you read; it also helps with debugging, because you get to know what the error messages mean. It is better to make mistakes now and on purpose than later and accidentally.
Programming, and especially debugging, sometimes brings out strong emotions. If you are struggling with a difficult bug, you might feel angry, despondent or embarrassed.
There is evidence that people naturally respond to computers as if they were people[3]. When they work well, we think of them as teammates, and when they are obstinate or rude, we respond to them the same way we respond to rude, obstinate people.
Preparing for these reactions might help you deal with them. One approach is to think of the computer as an employee with certain strengths, like speed and precision, and particular weaknesses, like lack of empathy and inability to grasp the big picture.
Your job is to be a good manager: find ways to take advantage of the strengths and mitigate the weaknesses. And find ways to use your emotions to engage with the problem, without letting your reactions interfere with your ability to work effectively.
Learning to debug can be frustrating, but it is a valuable skill that is useful for many activities beyond programming. At the end of each chapter there is a debugging section, like this one, with my thoughts about debugging. I hope they help!
### Glossary
problem solving:
The process of formulating a problem, finding a solution, and expressing the solution.
high-level language:
A programming language like Python that is designed to be easy for humans to read and write.
low-level language:
A programming language that is designed to be easy for a computer to execute; also called “machine language” or “assembly language.”
portability:
A property of a program that can run on more than one kind of computer.
interpret:
To execute a program in a high-level language by translating it one line at a time.
compile:
To translate a program written in a high-level language into a low-level language all at once, in preparation for later execution.
source code:
A program in a high-level language before being compiled.
object code:
The output of the compiler after it translates the program.
executable:
Another name for object code that is ready to be executed.
prompt:
Characters displayed by the interpreter to indicate that it is ready to take input from the user.
script:
A program stored in a file (usually one that will be interpreted).
interactive mode:
A way of using the Python interpreter by typing commands and expressions at the prompt.
script mode:
A way of using the Python interpreter to read and execute statements in a script.
program:
A set of instructions that specifies a computation.
algorithm:
A general process for solving a category of problems.
bug:
An error in a program.
debugging:
The process of finding and removing any of the three kinds of programming errors.
syntax:
The structure of a program.
syntax error:
An error in a program that makes it impossible to parse (and therefore impossible to interpret).
exception:
An error that is detected while the program is running.
semantics:
The meaning of a program.
semantic error:
An error in a program that makes it do something other than what the programmer intended.
natural language:
Any one of the languages that people speak that evolved naturally.
formal language:
Any one of the languages that people have designed for specific purposes, such as representing mathematical ideas or computer programs; all programming languages are formal languages.
token:
One of the basic elements of the syntactic structure of a program, analogous to a word in a natural language.
parse:
To examine a program and analyze the syntactic structure.
print statement:
An instruction that causes the Python interpreter to display a value on the screen.
### Exercises
#### Exercise 2
Use a web browser to go to the Python website, http://python.org/. This page contains information about Python and links to Python-related pages, and it gives you the ability to search the Python documentation. For example, if you enter print in the search window, the first link that appears is the documentation of the print statement. At this point, not all of it will make sense to you, but it is good to know where it is.
#### Exercise 3
Start the Python interpreter and type 'help()' to start the online help utility. Or you can type help('print') to get information about the 'print' statement. If this example doesn’t work, you may need to install additional Python documentation or set an environment variable; the details depend on your operating system and version of Python.
#### Exercise 4
Start the Python interpreter and use it as a calculator. Python’s syntax for math operations is almost the same as standard mathematical notation. For example, the symbols '+', '-' and '/' denote addition, subtraction and division, as you would expect. The symbol for multiplication is '*'. If you run a 10 kilometer race in 43 minutes 30 seconds, what is your average time per mile? What is your average speed in miles per hour? (Hint: there are 1.61 kilometers in a mile).
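If you want to check your arithmetic afterwards, here is one way to set up the computation. This is a sketch in Python 3 syntax (the book's examples use Python 2's print statement), and the 1.61 kilometers-per-mile figure comes from the hint.

```python
# Race statistics for a 10 km run in 43 minutes 30 seconds.
km = 10.0
total_seconds = 43 * 60 + 30           # 2610 seconds
miles = km / 1.61                      # distance in miles

# Average time per mile, in minutes and seconds.
seconds_per_mile = total_seconds / miles
minutes, seconds = divmod(seconds_per_mile, 60)
print('pace: %d:%04.1f per mile' % (minutes, seconds))   # pace: 7:00.2 per mile

# Average speed in miles per hour.
mph = miles / (total_seconds / 3600.0)
print('speed: %.2f mph' % mph)                           # speed: 8.57 mph
```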
## References
1. This idiom means that someone realized something after a period of confusion.
2. In Python 3.0, print is a function, not a statement, so the syntax is print('Hello, World!'). We will get to functions soon!
3. See Reeves and Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places.
# Variables, expressions and statements
### Values and types
A value is one of the basic things a program works with, like a letter or a number. The values we have seen so far are 1, 2, and 'Hello, World!'.
These values belong to different types: 2 is an integer, and 'Hello, World!' is a string, so-called because it contains a “string” of letters. You (and the interpreter) can identify strings because they are enclosed in quotation marks.
The print statement also works for integers.
>>> print 4
4
If you are not sure what type a value has, the interpreter can tell you.
>>> type('Hello, World!')
<type 'str'>
>>> type(17)
<type 'int'>
Not surprisingly, strings belong to the type str and integers belong to the type int. Less obviously, numbers with a decimal point belong to a type called float, because these numbers are represented in a format called floating-point.
>>> type(3.2)
<type 'float'>
What about values like '17' and '3.2'? They look like numbers, but they are in quotation marks like strings.
>>> type('17')
<type 'str'>
>>> type('3.2')
<type 'str'>
They're strings.
When you type a large integer, you might be tempted to use commas between groups of three digits, as in 1,000,000. This is not a legal integer in Python, but it is legal:
>>> print 1,000,000
1 0 0
Well, that’s not what we expected at all! Python interprets 1,000,000 as a comma-separated sequence of integers, which it prints with spaces between.
This is the first example we have seen of a semantic error: the code runs without producing an error message, but it doesn't do the “right” thing.
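You can see what Python actually builds from 1,000,000 by looking at the value directly. This sketch uses Python 3 syntax (where print is a function); Python 3.6 and later also accept underscores as legal digit separators, which solves the readability problem the commas were reaching for.

```python
# The commas build a tuple of three integers, not one number.
x = 1,000,000
print(x)            # (1, 0, 0)

# Python 3.6+ allows underscores as digit separators instead.
million = 1_000_000
print(million)      # 1000000
```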
### Variables
One of the most powerful features of a programming language is the ability to manipulate variables. A variable is a name that refers to a value.
An assignment statement creates new variables and gives them values:
>>> message = 'And now for something completely different'
>>> n = 17
>>> pi = 3.1415926535897931
This example makes three assignments. The first assigns a string to a new variable named message; the second gives the integer 17 to n; the third assigns the (approximate) value of π to pi.
A common way to represent variables on paper is to write the name with an arrow pointing to the variable’s value. This kind of figure is called a state diagram because it shows what state each of the variables is in (think of it as the variable’s state of mind). This diagram shows the result of the previous example:
message  →  'And now for something completely different'
n        →  17
pi       →  3.1415926535897931
To display the value of a variable, you can use a print statement:
>>> print n
17
>>> print pi
3.14159265359
The type of a variable is the type of the value it refers to.
>>> type(message)
<type 'str'>
>>> type(n)
<type 'int'>
>>> type(pi)
<type 'float'>
#### Exercise 1
If you type an integer with a leading zero, you might get a confusing error:
>>> zipcode = 02492
                  ^
SyntaxError: invalid token
Other numbers seem to work, but the results are bizarre:
>>> zipcode = 02132
>>> print zipcode
1114
Can you figure out what is going on? Hint: print the values 01, 010, 0100 and 01000.
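A hint for what is going on, sketched in Python 3 syntax: in Python 2, a leading zero marked an octal (base 8) literal, which is why 02132 printed as 1114. Python 3 rejects that spelling and requires an explicit 0o prefix, but you can reproduce the old behavior with int:

```python
# In Python 2, 02132 was octal: 2*512 + 1*64 + 3*8 + 2 = 1114.
print(int('2132', 8))   # 1114

# Python 3 spells octal literals with a 0o prefix:
print(0o2132)           # 1114

# 02492 failed even in Python 2 because 9 is not a valid octal digit.
```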
### Variable names and keywords
Programmers generally choose names for their variables that are meaningful—they document what the variable is used for.
Variable names can be arbitrarily long. They can contain both letters and numbers, but they have to begin with a letter. It is legal to use uppercase letters, but it is a good idea to begin variable names with a lowercase letter (you'll see why later).
The underscore character (_) can appear in a name. It is often used in names with multiple words, such as my_name or airspeed_of_unladen_swallow.
If you give a variable an illegal name, you get a syntax error:
>>> 76trombones = 'big parade'
SyntaxError: invalid syntax
>>> more@ = 1000000
SyntaxError: invalid syntax
>>> class = 'Advanced Theoretical Zymurgy'
SyntaxError: invalid syntax
76trombones is illegal because it does not begin with a letter. more@ is illegal because it contains an illegal character, @. But what's wrong with class?
It turns out that class is one of Python's keywords. The interpreter uses keywords to recognize the structure of the program, and they cannot be used as variable names.
Python has 31 keywords:
and       del       from      not       while
as        elif      global    or        with
assert    else      if        pass      yield
break     except    import    print     class
exec      in        raise     continue  finally
is        return    def       for       lambda
try
You might want to keep this list handy. If the interpreter complains about one of your variable names and you don't know why, see if it is on this list.
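You can also ask the interpreter itself: the standard library's keyword module carries the list for whatever version you are running (the exact contents vary by Python version, so your list may differ from the 31 shown above).

```python
import keyword

# kwlist is the authoritative keyword list for the running interpreter.
print(keyword.kwlist)

# iskeyword tests a single name.
print(keyword.iskeyword('class'))   # True
print(keyword.iskeyword('spam'))    # False
```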
If you write your code in a text editor that understands Python, you may find that it makes keyword clashes easy to spot by displaying keywords in a different color from ordinary variables. This feature is called syntax highlighting, and most programmers find it indispensable. This book uses syntax highlighting for its example code, so in the following example:
ok_variable = 42
yield = 42
you can see that yield has been recognized as a keyword and not as an ordinary variable, since it is colored orange.
### Statements
A statement is a unit of code that the Python interpreter can execute. We have seen two kinds of statements: print and assignment.
When you type a statement in interactive mode, the interpreter executes it and displays the result, if there is one.
A script usually contains a sequence of statements. If there is more than one statement, the results appear one at a time as the statements execute.
For example, the script
print 1
x = 2
print x
produces the output
1
2
The assignment statement produces no output.
### Operators and operands
Operators are special symbols that represent computations like addition and multiplication. The values the operator is applied to are called operands.
The operators +, -, *, / and ** perform addition, subtraction, multiplication, division and exponentiation, as in the following examples:
20+32 hour-1 hour*60+minute minute/60 5**2 (5+9)*(15-7)
In some other languages, ^ is used for exponentiation, but in Python it is a bitwise operator called XOR. I won’t cover bitwise operators in this book, but you can read about them at wiki.python.org/moin/BitwiseOperators.
The division operator might not do what you expect:
>>> minute = 59
>>> minute/60
0
The value of minute is 59, and in conventional arithmetic 59 divided by 60 is 0.98333, not 0. The reason for the discrepancy is that Python is performing floor division.[1]
When both of the operands are integers, the result is also an integer; floor division chops off the fraction part, so in this example it rounds down to zero.
If either of the operands is a floating-point number, Python performs floating-point division, and the result is a float:
>>> minute/60.0
0.98333333333333328
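Python 3 changed this behavior, as the chapter's note points out: / always performs floating-point division, and a separate // operator does the floor division described above. A sketch in Python 3 syntax:

```python
minute = 59

# Python 3: / is "true" division and always returns a float.
print(minute / 60)     # 0.9833333333333333

# // is floor division, which chops toward negative infinity.
print(minute // 60)    # 0
print(-7 // 2)         # -4, not -3: floor rounds down, not toward zero
```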
### Expressions
An expression is a combination of values, variables, and operators. A value all by itself is considered an expression, and so is a variable, so the following are all legal expressions (assuming that the variable x has been assigned a value):
17
x
x + 17
If you type an expression in interactive mode, the interpreter evaluates it and displays the result:
>>> 1 + 1
2
But in a script, an expression all by itself doesn’t do anything! This is a common source of confusion for beginners.
#### Exercise 2
Type the following statements in the Python interpreter to see what they do:
5
x = 5
x + 1
Now put the same statements into a script and run it. What is the output? Modify the script by transforming each expression into a print statement and then run it again.
### Order of operations
When more than one operator appears in an expression, the order of evaluation depends on the rules of precedence. For mathematical operators, Python follows mathematical convention. The acronym PEMDAS is a useful way to remember the rules:
• Parentheses have the highest precedence and can be used to force an expression to evaluate in the order you want. Since expressions in parentheses are evaluated first, 2 * (3-1) is 4, and (1+1)**(5-2) is 8. You can also use parentheses to make an expression easier to read, as in (minute * 100) / 60, even if it doesn't change the result.
• Exponentiation has the next highest precedence, so 2**1+1 is 3, not 4, and 3*1**3 is 3, not 27.
• Multiplication and Division have the same precedence, which is higher than Addition and Subtraction, which also have the same precedence. So 2*3-1 is 5, not 4, and 6+4/2 is 8, not 5.
• Operators with the same precedence are evaluated from left to right. So in the expression degrees / 2 * pi, the division happens first and the result is multiplied by pi. To divide by 2π, you can reorder the operands or use parentheses.
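Each of the bullet points above can be confirmed at the interpreter. The sketch below uses Python 3 syntax, so the division in the third rule produces a float (8.0), which still compares equal to 8.

```python
import math

# Parentheses have the highest precedence.
assert 2 * (3 - 1) == 4
assert (1 + 1) ** (5 - 2) == 8

# Exponentiation comes next.
assert 2 ** 1 + 1 == 3
assert 3 * 1 ** 3 == 3

# Multiplication and division before addition and subtraction.
assert 2 * 3 - 1 == 5
assert 6 + 4 / 2 == 8       # 8.0 in Python 3, where / returns a float

# Equal precedence evaluates left to right, so these differ.
degrees = 90
assert degrees / 2 * math.pi != degrees / (2 * math.pi)

print('all precedence checks pass')
```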
### String operations
In general, you cannot perform mathematical operations on strings, even if the strings look like numbers, so the following are illegal:
'2'-'1' 'eggs'/'easy' 'third'*'a charm'
The + operator works with strings, but it might not do what you expect: it performs concatenation, which means joining the strings by linking them end-to-end. For example:
first = 'throat'
second = 'warbler'
print first + second
The output of this program is throatwarbler.
The * operator also works on strings; it performs repetition. For example, 'Spam'*3 is 'SpamSpamSpam'. If one of the operands is a string, the other has to be an integer.
This use of + and * makes sense by analogy with addition and multiplication. Just as 4*3 is equivalent to 4+4+4, we expect 'Spam'*3 to be the same as 'Spam'+'Spam'+'Spam', and it is. On the other hand, there is a significant way in which string concatenation and repetition are different from integer addition and multiplication. Can you think of a property that addition has that string concatenation does not?
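On the closing question: integer addition is commutative, but string concatenation is not; swapping the operands changes the result. A sketch in Python 3 syntax:

```python
first = 'throat'
second = 'warbler'

print(first + second)    # throatwarbler

# Addition is commutative; concatenation is not.
assert 4 + 3 == 3 + 4
assert first + second != second + first

# Repetition is repeated concatenation, just as 4*3 is 4+4+4.
assert 'Spam' * 3 == 'Spam' + 'Spam' + 'Spam'
print('Spam' * 3)        # SpamSpamSpam
```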
As programs get bigger and more complicated, they get more difficult to read. Formal languages are dense, and it is often difficult to look at a piece of code and figure out what it is doing, or why.
For this reason, it is a good idea to add notes to your programs to explain in natural language what the program is doing. These notes are called comments, and they start with the # symbol:
# compute the percentage of the hour that has elapsed
percentage = (minute * 100) / 60
In this case, the comment appears on a line by itself. You can also put comments at the end of a line:
percentage = (minute * 100) / 60 # percentage of an hour
Everything from the # to the end of the line is ignored—it has no effect on the program.
Comments are most useful when they document non-obvious features of the code. It is reasonable to assume that the reader can figure out what the code does; it is much more useful to explain why.
This comment is redundant with the code and useless:
v = 5 # assign 5 to v
This comment contains useful information that is not in the code:
v = 5 # velocity in meters/second.
Good variable names can reduce the need for comments, but long names can make complex expressions hard to read, so there is a tradeoff.
40% discount. Shipping costs $3 for the first copy and 75 cents for each additional copy. What is the total wholesale cost for 60 copies?

• If I leave my house at 6:52 am and run 1 mile at an easy pace (8:15 per mile), then 3 miles at tempo (7:12 per mile) and 1 mile at easy pace again, what time do I get home for breakfast?

### Notes

1. In Python 3.0, the result of this division is a float. The new operator // performs integer division.

# Functions

## Function calls

In the context of programming, a function is a named sequence of statements that performs a computation. When you define a function, you specify the name and the sequence of statements. Later, you can "call" the function by name. We have already seen one example of a function call:

>>> type(32)
<type 'int'>

The name of the function is type. The expression in parentheses is called the argument of the function. The result, for this function, is the type of the argument.

It is common to say that a function "takes" an argument and "returns" a result. The result is called the return value.

## Type conversion functions

Python provides built-in functions that convert values from one type to another. The int function takes any value and converts it to an integer, if it can, or complains otherwise:

>>> int('32')
32
>>> int('Hello')
ValueError: invalid literal for int(): Hello

int can convert floating-point values to integers, but it doesn't round off; it chops off the fraction part:

>>> int(3.99999)
3
>>> int(-2.3)
-2

float converts integers and strings to floating-point numbers:

>>> float(32)
32.0
>>> float('3.14159')
3.14159

Finally, str converts its argument to a string:

>>> str(32)
'32'
>>> str(3.14159)
'3.14159'

## Math functions

Python has a math module that provides most of the familiar mathematical functions. A module is a file that contains a collection of related functions.

Before we can use the module, we have to import it:

>>> import math

This statement creates a module object named math.
If you print the module object, you get some information about it:

>>> print math
<module 'math' from '/usr/lib/python2.5/lib-dynload/math.so'>

The module object contains the functions and variables defined in the module. To access one of the functions, you have to specify the name of the module and the name of the function, separated by a dot (also known as a period). This format is called dot notation.

>>> ratio = signal_power / noise_power
>>> decibels = 10 * math.log10(ratio)

>>> radians = 0.7
>>> height = math.sin(radians)

The first example computes the logarithm base 10 of the signal-to-noise ratio. The math module also provides a function called log that computes logarithms base e.

The second example finds the sine of radians. The name of the variable is a hint that sin and the other trigonometric functions (cos, tan, etc.) take arguments in radians. To convert from degrees to radians, divide by 360 and multiply by 2π:

>>> degrees = 45
>>> radians = degrees / 360.0 * 2 * math.pi
>>> math.sin(radians)
0.707106781187

The expression math.pi gets the variable pi from the math module. The value of this variable is an approximation of π, accurate to about 15 digits.

If you know your trigonometry, you can check the previous result by comparing it to the square root of two divided by two:

>>> math.sqrt(2) / 2.0
0.707106781187

## Composition

So far, we have looked at the elements of a program—variables, expressions, and statements—in isolation, without talking about how to combine them.

One of the most useful features of programming languages is their ability to take small building blocks and compose them. For example, the argument of a function can be any kind of expression, including arithmetic operators:

x = math.sin(degrees / 360.0 * 2 * math.pi)

And even function calls:

x = math.exp(math.log(x+1))

Almost anywhere you can put a value, you can put an arbitrary expression, with one exception: the left side of an assignment statement has to be a variable name.
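The degree-to-radian conversion above can be cross-checked against the math module's own helper: math.radians, part of the standard library since Python 2.3, does the same conversion directly. The sketch below uses Python 3 syntax and verifies sin 45° against √2/2.

```python
import math

degrees = 45
radians = degrees / 360.0 * 2 * math.pi

# math.radians performs the same conversion.
assert abs(radians - math.radians(degrees)) < 1e-12

# sin(45 degrees) equals sqrt(2)/2.
assert abs(math.sin(radians) - math.sqrt(2) / 2) < 1e-12

# Composition: an expression as an argument, and nested calls.
x = math.sin(degrees / 360.0 * 2 * math.pi)
y = math.exp(math.log(x + 1))
assert abs(y - (x + 1)) < 1e-12

print('trigonometry checks out')
```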
Any other expression on the left side is a syntax error.

>>> minutes = hours * 60   # right
>>> hours * 60 = minutes   # wrong!
SyntaxError: can't assign to operator

## Adding new functions

So far, we have only been using the functions that come with Python, but it is also possible to add new functions. A function definition specifies the name of a new function and the sequence of statements that execute when the function is called.

Here is an example:

def print_lyrics():
    print "I'm a lumberjack, and I'm okay."
    print "I sleep all night and I work all day."

def is a keyword that indicates that this is a function definition. The name of the function is print_lyrics. The rules for function names are the same as for variable names: letters, numbers and some punctuation marks are legal, but the first character can't be a number. You can't use a keyword as the name of a function, and you should avoid having a variable and a function with the same name.

The empty parentheses after the name indicate that this function doesn't take any arguments.

The first line of the function definition is called the header; the rest is called the body. The header has to end with a colon and the body has to be indented. By convention, the indentation is always four spaces. The body can contain any number of statements.

The strings in the print statements are enclosed in double quotes. Single quotes and double quotes do the same thing; most people use single quotes except in cases like this where a single quote (which is also an apostrophe) appears in the string.

If you type a function definition in interactive mode, the interpreter prints ellipses (...) to let you know that the definition isn't complete:

>>> def print_lyrics():
...     print "I'm a lumberjack, and I'm okay."
...     print "I sleep all night and I work all day."
...

To end the function, you have to enter an empty line (this is not necessary in a script).

Defining a function creates a variable with the same name.
>>> print print_lyrics
<function print_lyrics at 0xb7e99e9c>
>>> print type(print_lyrics)
<type 'function'>

The value of print_lyrics is a function object, which has type 'function'.

The syntax for calling the new function is the same as for built-in functions:

>>> print_lyrics()
I'm a lumberjack, and I'm okay.
I sleep all night and I work all day.

Once you have defined a function, you can use it inside another function. For example, to repeat the previous refrain, we could write a function called repeat_lyrics:

def repeat_lyrics():
    print_lyrics()
    print_lyrics()

And then call repeat_lyrics:

>>> repeat_lyrics()
I'm a lumberjack, and I'm okay.
I sleep all night and I work all day.
I'm a lumberjack, and I'm okay.
I sleep all night and I work all day.

But that's not really how the song goes.

## Definitions and uses

Pulling together the code fragments from the previous section, the whole program looks like this:

def print_lyrics():
    print "I'm a lumberjack, and I'm okay."
    print "I sleep all night and I work all day."

def repeat_lyrics():
    print_lyrics()
    print_lyrics()

repeat_lyrics()

This program contains two function definitions: print_lyrics and repeat_lyrics. Function definitions get executed just like other statements, but the effect is to create function objects. The statements inside the function do not get executed until the function is called, and the function definition generates no output.

As you might expect, you have to create a function before you can execute it. In other words, the function definition has to be executed before the first time it is called.

### Exercise 1

Move the last line of this program to the top, so the function call appears before the definitions. Run the program and see what error message you get.

### Exercise 2

Move the function call back to the bottom and move the definition of print_lyrics after the definition of repeat_lyrics. What happens when you run this program?
## Flow of execution

In order to ensure that a function is defined before its first use, you have to know the order in which statements are executed, which is called the flow of execution.

Execution always begins at the first statement of the program. Statements are executed one at a time, in order from top to bottom.

Function definitions do not alter the flow of execution of the program, but remember that statements inside the function are not executed until the function is called.

A function call is like a detour in the flow of execution. Instead of going to the next statement, the flow jumps to the body of the function, executes all the statements there, and then comes back to pick up where it left off.

That sounds simple enough, until you remember that one function can call another. While in the middle of one function, the program might have to execute the statements in another function. But while executing that new function, the program might have to execute yet another function!

Fortunately, Python is good at keeping track of where it is, so each time a function completes, the program picks up where it left off in the function that called it. When it gets to the end of the program, it terminates.

What's the moral of this sordid tale? When you read a program, you don't always want to read from top to bottom. Sometimes it makes more sense if you follow the flow of execution.

## Parameters and arguments

Some of the built-in functions we have seen require arguments. For example, when you call math.sin you pass a number as an argument. Some functions take more than one argument: math.pow takes two, the base and the exponent.

Inside the function, the arguments are assigned to variables called parameters. Here is an example of a user-defined function that takes an argument:

def print_twice(bruce):
    print bruce
    print bruce

This function assigns the argument to a parameter named bruce.
When the function is called, it prints the value of the parameter (whatever it is) twice.

This function works with any value that can be printed.

>>> print_twice('Spam')
Spam
Spam
>>> print_twice(17)
17
17
>>> print_twice(math.pi)
3.14159265359
3.14159265359

The same rules of composition that apply to built-in functions also apply to user-defined functions, so we can use any kind of expression as an argument for print_twice:

>>> print_twice('Spam '*4)
Spam Spam Spam Spam Spam Spam Spam Spam
>>> print_twice(math.cos(math.pi))
-1.0
-1.0

The argument is evaluated before the function is called, so in the examples the expressions 'Spam '*4 and math.cos(math.pi) are only evaluated once.

You can also use a variable as an argument:

>>> michael = 'Eric, the half a bee.'
>>> print_twice(michael)
Eric, the half a bee.
Eric, the half a bee.

The name of the variable we pass as an argument (michael) has nothing to do with the name of the parameter (bruce). It doesn't matter what the value was called back home (in the caller); here in print_twice, we call everybody bruce.

## Variables and parameters are local

When you create a variable inside a function, it is local, which means that it only exists inside the function. For example:

def cat_twice(part1, part2):
    cat = part1 + part2
    print_twice(cat)

This function takes two arguments, concatenates them, and prints the result twice. Here is an example that uses it:

>>> line1 = 'Bing tiddle '
>>> line2 = 'tiddle bang.'
>>> cat_twice(line1, line2)
Bing tiddle tiddle bang.
Bing tiddle tiddle bang.

When cat_twice terminates, the variable cat is destroyed. If we try to print it, we get an exception:

>>> print cat
NameError: name 'cat' is not defined

Parameters are also local. For example, outside print_twice, there is no such thing as bruce.

## Stack diagrams

To keep track of which variables can be used where, it is sometimes useful to draw a stack diagram.
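Before drawing any diagrams, the locality rules can be verified directly: catch the NameError instead of letting it end the session. This sketch uses Python 3's print function rather than the book's Python 2 print statement.

```python
def print_twice(bruce):
    # bruce is a parameter: it exists only inside this function.
    print(bruce)
    print(bruce)

def cat_twice(part1, part2):
    cat = part1 + part2      # cat is a local variable
    print_twice(cat)

cat_twice('Bing tiddle ', 'tiddle bang.')

# Outside the function, cat no longer exists.
try:
    print(cat)
except NameError as err:
    print('NameError:', err)
```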
Like state diagrams, stack diagrams show the value of each variable, but they also show the function each variable belongs to.

Each function is represented by a frame. A frame is a box with the name of a function beside it and the parameters and variables of the function inside it. The stack diagram for the previous example looks like this:

File:Book004.png

The frames are arranged in a stack that indicates which function called which, and so on. In this example, print_twice was called by cat_twice, and cat_twice was called by __main__, which is a special name for the topmost frame. When you create a variable outside of any function, it belongs to __main__.

Each parameter refers to the same value as its corresponding argument. So, part1 has the same value as line1, part2 has the same value as line2, and bruce has the same value as cat.

If an error occurs during a function call, Python prints the name of the function, and the name of the function that called it, and the name of the function that called that, all the way back to __main__.

For example, if you try to access cat from within print_twice, you get a NameError:

Traceback (innermost last):
  File "test.py", line 13, in __main__
    cat_twice(line1, line2)
  File "test.py", line 5, in cat_twice
    print_twice(cat)
  File "test.py", line 9, in print_twice
    print cat
NameError: name 'cat' is not defined

This list of functions is called a traceback. It tells you what program file the error occurred in, and what line, and what functions were executing at the time. It also shows the line of code that caused the error.

The order of the functions in the traceback is the same as the order of the frames in the stack diagram. The function that is currently running is at the bottom.

## Fruitful functions and void functions

Some of the functions we are using, such as the math functions, yield results; for lack of a better name, I call them fruitful functions.
Other functions, like print_twice, perform an action but don't return a value. They are called void functions.

When you call a fruitful function, you almost always want to do something with the result; for example, you might assign it to a variable or use it as part of an expression:

x = math.cos(radians)
golden = (math.sqrt(5) + 1) / 2

When you call a function in interactive mode, Python displays the result:

>>> math.sqrt(5)
2.2360679774997898

But in a script, if you call a fruitful function all by itself, the return value is lost forever!

math.sqrt(5)

This script computes the square root of 5, but since it doesn't store or display the result, it is not very useful.

Void functions might display something on the screen or have some other effect, but they don't have a return value. If you try to assign the result to a variable, you get a special value called None.

>>> result = print_twice('Bing')
Bing
Bing
>>> print result
None

The value None is not the same as the string 'None'. It is a special value that has its own type:

>>> print type(None)
<type 'NoneType'>

The functions we have written so far are all void. We will start writing fruitful functions in a few chapters.

## Why functions?

It may not be clear why it is worth the trouble to divide a program into functions. There are several reasons:

• Creating a new function gives you an opportunity to name a group of statements, which makes your program easier to read and debug.
• Functions can make a program smaller by eliminating repetitive code. Later, if you make a change, you only have to make it in one place.
• Dividing a long program into functions allows you to debug the parts one at a time and then assemble them into a working whole.
• Well-designed functions are often useful for many programs. Once you write and debug one, you can reuse it.

## Debugging

If you are using a text editor to write your scripts, you might run into problems with spaces and tabs.
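The None examples can be reproduced in Python 3 with only a change of print syntax (print is a function there, so the Python 2 examples need parentheses):

```python
def print_twice(bruce):
    print(bruce)
    print(bruce)

# A void function falls off the end of its body and returns None.
result = print_twice('Bing')
print(result)              # None

# None is a value with its own type, distinct from the string 'None'.
assert result is None
assert result != 'None'
print(type(None))          # <class 'NoneType'>
```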
The best way to avoid these problems is to use spaces exclusively (no tabs). Most text editors that know about Python do this by default, but some don't. Tabs and spaces are usually invisible, which makes them hard to debug, so try to find an editor that manages indentation for you.

Also, don't forget to save your program before you run it. Some development environments do this automatically, but some don't. In that case the program you are looking at in the text editor is not the same as the program you are running.

Debugging can take a long time if you keep running the same, incorrect, program over and over!

Make sure that the code you are looking at is the code you are running. If you're not sure, put something like print 'hello' at the beginning of the program and run it again. If you don't see hello, you're not running the right program!

## Glossary

• function: A named sequence of statements that performs some useful operation. Functions may or may not take arguments and may or may not produce a result.
• function definition: A statement that creates a new function, specifying its name, parameters, and the statements it executes.
• function object: A value created by a function definition. The name of the function is a variable that refers to a function object.
• header: The first line of a function definition.
• body: The sequence of statements inside a function definition.
• parameter: A name used inside a function to refer to the value passed as an argument.
• function call: A statement that executes a function. It consists of the function name followed by an argument list.
• argument: A value provided to a function when the function is called. This value is assigned to the corresponding parameter in the function.
• local variable: A variable defined inside a function. A local variable can only be used inside its function.
• return value: The result of a function. If a function call is used as an expression, the return value is the value of the expression.
• fruitful function: A function that returns a value.
• void function: A function that doesn't return a value.
• module: A file that contains a collection of related functions and other definitions.
• import statement: A statement that reads a module file and creates a module object.
• module object: A value created by an import statement that provides access to the values defined in a module.
• dot notation: The syntax for calling a function in another module by specifying the module name followed by a dot (period) and the function name.
• composition: Using an expression as part of a larger expression, or a statement as part of a larger statement.
• flow of execution: The order in which statements are executed during a program run.
• stack diagram: A graphical representation of a stack of functions, their variables, and the values they refer to.
• frame: A box in a stack diagram that represents a function call. It contains the local variables and parameters of the function.
• traceback: A list of the functions that are executing, printed when an exception occurs.

## Exercises

### Exercise 3

Python provides a built-in function called len that returns the length of a string, so the value of len('allen') is 5.

Write a function named right_justify that takes a string named s as a parameter and prints the string with enough leading spaces so that the last letter of the string is in column 70 of the display.

```
>>> right_justify('allen')
                                                                 allen
```

### Exercise 4

A function object is a value you can assign to a variable or pass as an argument. For example, do_twice is a function that takes a function object as an argument and calls it twice:

```
def do_twice(f):
    f()
    f()
```

Here's an example that uses do_twice to call a function named print_spam twice.

```
def print_spam():
    print 'spam'

do_twice(print_spam)
```

1. Type this example into a script and test it.
2.
Modify do_twice so that it takes two arguments, a function object and a value, and calls the function twice, passing the value as an argument.
3. Write a more general version of print_spam, called print_twice, that takes a string as a parameter and prints it twice.
4. Use the modified version of do_twice to call print_twice twice, passing 'spam' as an argument.
5. Define a new function called do_four that takes a function object and a value and calls the function four times, passing the value as a parameter. There should be only two statements in the body of this function, not four.

You can see my solution at thinkpython.com/code/do_four.py.

### Exercise 5

This exercise can be done using only the statements and other features we have learned so far.

1. Write a function that draws a grid like the following:

```
+ - - - - + - - - - +
|         |         |
|         |         |
|         |         |
|         |         |
+ - - - - + - - - - +
|         |         |
|         |         |
|         |         |
|         |         |
+ - - - - + - - - - +
```

Hint: to print more than one value on a line, you can print a comma-separated sequence:

```
print '+', '-'
```

If the sequence ends with a comma, Python leaves the line unfinished, so the value printed next appears on the same line.

```
print '+',
print '-'
```

The output of these statements is '+ -'.

A print statement all by itself ends the current line and goes to the next line.

2. Use the previous function to draw a similar grid with four rows and four columns.

You can see my solution at thinkpython.com/code/grid.py.

(We will see exceptions to this rule later.)

(This exercise is based on an exercise in Oualline, Practical C Programming, Third Edition, O'Reilly, 1997.)

# Case study: interface design

### TurtleWorld

To accompany this book, I have written a suite of modules called Swampy. One of these modules is TurtleWorld, which provides a set of functions for drawing lines by steering turtles around the screen.

You can download Swampy from thinkpython.com/swampy; follow the instructions there to install Swampy on your system.
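If you cannot install Swampy, the turtle-steering commands can still be modeled without graphics. The sketch below is my own stand-in, not part of Swampy: it tracks a turtle's position and heading as plain numbers, so fd and lt behave like TurtleWorld's versions but can be tested without a window.

```python
import math

def make_turtle():
    # position (x, y) and heading in degrees; 0 means facing east
    return {'x': 0.0, 'y': 0.0, 'heading': 0.0}

def fd(t, distance):
    # move forward in the direction of the current heading
    rad = math.radians(t['heading'])
    t['x'] += distance * math.cos(rad)
    t['y'] += distance * math.sin(rad)

def lt(t, angle=90):
    # turn left; like TurtleWorld, the angle defaults to 90 degrees
    t['heading'] = (t['heading'] + angle) % 360

bob = make_turtle()
for i in range(4):
    fd(bob, 100)
    lt(bob)
# after drawing a square, bob is back at the start, facing east
```

Because the stand-in only does arithmetic, you can check properties like "a square brings the turtle back to its starting point" directly, which is handy for the exercises later in this chapter.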
Move into the directory that contains TurtleWorld.py, create a file named polygon.py and type in the following code:

```
from TurtleWorld import *

world = TurtleWorld()
bob = Turtle()
print bob

wait_for_user()
```

The first line is a variation of the import statement we saw before; instead of creating a module object, it imports the functions from the module directly, so you can access them without using dot notation.

The next lines create a TurtleWorld assigned to world and a Turtle assigned to bob. Printing bob yields something like:

```
<TurtleWorld.Turtle instance at 0xb7bfbf4c>
```

This means that bob refers to an instance of a Turtle as defined in module TurtleWorld. In this context, "instance" means a member of a set; this Turtle is one of the set of possible Turtles.

wait_for_user tells TurtleWorld to wait for the user to do something, although in this case there's not much for the user to do except close the window.

TurtleWorld provides several turtle-steering functions: fd and bk for forward and backward, and lt and rt for left and right turns. Also, each Turtle is holding a pen, which is either down or up; if the pen is down, the Turtle leaves a trail when it moves. The functions pu and pd stand for "pen up" and "pen down."

To draw a right angle, add these lines to the program (after creating bob and before calling wait_for_user):

```
fd(bob, 100)
rt(bob)
fd(bob, 100)
```

The first line tells bob to take 100 steps forward. The second line tells him to turn right.

When you run this program, you should see bob move east and then south, leaving two line segments behind.

Now modify the program to draw a square. Don't turn the page until you've got it working!

### Simple repetition

Chances are you wrote something like this (leaving out the code that creates TurtleWorld and waits for the user):

```
fd(bob, 100)
lt(bob)

fd(bob, 100)
lt(bob)

fd(bob, 100)
lt(bob)

fd(bob, 100)
```

We can do the same thing more concisely with a for statement.
Add this example to polygon.py and run it again:

```
for i in range(4):
    print 'Hello!'
```

You should see something like this:

```
Hello!
Hello!
Hello!
Hello!
```

This is the simplest use of the for statement; we will see more later. But that should be enough to let you rewrite your square-drawing program. Don't turn the page until you do.

Here is a for statement that draws a square:

```
for i in range(4):
    fd(bob, 100)
    lt(bob)
```

The syntax of a for statement is similar to a function definition. It has a header that ends with a colon and an indented body. The body can contain any number of statements.

A for statement is sometimes called a loop because the flow of execution runs through the body and then loops back to the top. In this case, it runs the body four times.

This version is actually a little different from the previous square-drawing code because it makes another left turn after drawing the last side of the square. The extra turn takes a little more time, but it simplifies the code if we do the same thing every time through the loop. This version also has the effect of leaving the turtle back in the starting position, facing in the starting direction.

### Exercises

The following is a series of exercises using TurtleWorld. They are meant to be fun, but they have a point, too. While you are working on them, think about what the point is.

The following sections have solutions to the exercises, so don't look until you have finished (or at least tried).

• Write a function called square that takes a parameter named t, which is a turtle. It should use the turtle to draw a square. Write a function call that passes bob as an argument to square, and then run the program again.
• Add another parameter, named length, to square. Modify the body so the length of the sides is length, and then modify the function call to provide a second argument. Run the program again. Test your program with a range of values for length.
• The functions lt and rt make 90-degree turns by default, but you can provide a second argument that specifies the number of degrees. For example, lt(bob, 45) turns bob 45 degrees to the left. Make a copy of square and change the name to polygon. Add another parameter named n and modify the body so it draws an n-sided regular polygon. Hint: The exterior angles of an n-sided regular polygon are 360.0 / n degrees.
• Write a function called circle that takes a turtle, t, and radius, r, as parameters and that draws an approximate circle by invoking polygon with an appropriate length and number of sides. Test your function with a range of values of r. Hint: figure out the circumference of the circle and make sure that length * n = circumference. Another hint: if bob is too slow for you, you can speed him up by changing bob.delay, which is the time between moves, in seconds. bob.delay = 0.01 ought to get him moving.
• Make a more general version of circle called arc that takes an additional parameter angle, which determines what fraction of a circle to draw. angle is in units of degrees, so when angle=360, arc should draw a complete circle.

### Encapsulation

The first exercise asks you to put your square-drawing code into a function definition and then call the function, passing the turtle as a parameter. Here is a solution:

```
def square(t):
    for i in range(4):
        fd(t, 100)
        lt(t)

square(bob)
```

The innermost statements, fd and lt, are indented twice to show that they are inside the for loop, which is inside the function definition. The next line, square(bob), is flush with the left margin, so that is the end of both the for loop and the function definition.

Inside the function, t refers to the same turtle bob refers to, so lt(t) has the same effect as lt(bob). So why not call the parameter bob?
The idea is that t can be any turtle, not just bob, so you could create a second turtle and pass it as an argument to square:

```
ray = Turtle()
square(ray)
```

Wrapping a piece of code up in a function is called encapsulation. One of the benefits of encapsulation is that it attaches a name to the code, which serves as a kind of documentation. Another advantage is that if you re-use the code, it is more concise to call a function twice than to copy and paste the body!

### Generalization

The next step is to add a length parameter to square. Here is a solution:

```
def square(t, length):
    for i in range(4):
        fd(t, length)
        lt(t)

square(bob, 100)
```

Adding a parameter to a function is called generalization because it makes the function more general: in the previous version, the square is always the same size; in this version it can be any size.

The next step is also a generalization. Instead of drawing squares, polygon draws regular polygons with any number of sides. Here is a solution:

```
def polygon(t, n, length):
    angle = 360.0 / n
    for i in range(n):
        fd(t, length)
        lt(t, angle)

polygon(bob, 7, 70)
```

This draws a 7-sided polygon with side length 70. If you have more than a few numeric arguments, it is easy to forget what they are, or what order they should be in. It is legal, and sometimes helpful, to include the names of the parameters in the argument list:

```
polygon(bob, n=7, length=70)
```

These are called keyword arguments because they include the parameter names as "keywords" (not to be confused with Python keywords like while and def).

This syntax makes the program more readable. It is also a reminder about how arguments and parameters work: when you call a function, the arguments are assigned to the parameters.

### Interface design

The next step is to write circle, which takes a radius, r, as a parameter.
Here is a simple solution that uses polygon to draw a 50-sided polygon:

```
def circle(t, r):
    circumference = 2 * math.pi * r
    n = 50
    length = circumference / n
    polygon(t, n, length)
```

The first line computes the circumference of a circle with radius r using the formula 2 π r. Since we use math.pi, we have to import math. By convention, import statements are usually at the beginning of the script.

n is the number of line segments in our approximation of a circle, so length is the length of each segment. Thus, polygon draws a 50-sided polygon that approximates a circle with radius r.

One limitation of this solution is that n is a constant, which means that for very big circles, the line segments are too long, and for small circles, we waste time drawing very small segments.

One solution would be to generalize the function by taking n as a parameter. This would give the user (whoever calls circle) more control, but the interface would be less clean.

The interface of a function is a summary of how it is used: what are the parameters? What does the function do? And what is the return value? An interface is "clean" if it is "as simple as possible, but not simpler" (Einstein).

In this example, r belongs in the interface because it specifies the circle to be drawn. n is less appropriate because it pertains to the details of how the circle should be rendered.

Rather than clutter up the interface, it is better to choose an appropriate value of n depending on circumference:

```
def circle(t, r):
    circumference = 2 * math.pi * r
    n = int(circumference / 3) + 1
    length = circumference / n
    polygon(t, n, length)
```

Now the number of segments is (approximately) circumference/3, so the length of each segment is (approximately) 3, which is small enough that the circles look good, but big enough to be efficient, and appropriate for any size circle.

### Refactoring

When I wrote circle, I was able to re-use polygon because a many-sided polygon is a good approximation of a circle.
But arc is not as cooperative; we can't use polygon or circle to draw an arc.

One alternative is to start with a copy of polygon and transform it into arc. The result might look like this:

```
def arc(t, r, angle):
    arc_length = r * math.radians(angle)
    n = int(arc_length / 3) + 1
    step_length = arc_length / n
    step_angle = float(angle) / n
    for i in range(n):
        fd(t, step_length)
        lt(t, step_angle)
```

The second half of this function looks like polygon, but we can't re-use polygon without changing the interface. We could generalize polygon to take an angle as a third argument, but then polygon would no longer be an appropriate name! Instead, let's call the more general function polyline:

```
def polyline(t, n, length, angle):
    for i in range(n):
        fd(t, length)
        lt(t, angle)
```

Now we can rewrite polygon and arc to use polyline:

```
def polygon(t, n, length):
    angle = 360.0 / n
    polyline(t, n, length, angle)

def arc(t, r, angle):
    arc_length = r * math.radians(angle)
    n = int(arc_length / 3) + 1
    step_length = arc_length / n
    step_angle = float(angle) / n
    polyline(t, n, step_length, step_angle)
```

Finally, we can rewrite circle to use arc:

```
def circle(t, r):
    arc(t, r, 360)
```

This process, rearranging a program to improve function interfaces and facilitate code re-use, is called refactoring. In this case, we noticed that there was similar code in arc and polygon, so we "factored it out" into polyline.

If we had planned ahead, we might have written polyline first and avoided refactoring, but often you don't know enough at the beginning of a project to design all the interfaces. Once you start coding, you understand the problem better. Sometimes refactoring is a sign that you have learned something.

### A development plan

A development plan is a process for writing programs. The process we used in this case study is "encapsulation and generalization." The steps of this process are:

1. Start by writing a small program with no function definitions.
2. Once you get the program working, encapsulate it in a function and give it a name.
3. Generalize the function by adding appropriate parameters.
4. Repeat steps 1–3 until you have a set of working functions. Copy and paste working code to avoid retyping (and re-debugging).
5. Look for opportunities to improve the program by refactoring. For example, if you have similar code in several places, consider factoring it into an appropriately general function.

This process has some drawbacks (we will see alternatives later), but it can be useful if you don't know ahead of time how to divide the program into functions. This approach lets you design as you go along.

### docstring

A docstring is a string at the beginning of a function that explains the interface ("doc" is short for "documentation"). Here is an example:

```
def polyline(t, length, n, angle):
    """Draw n line segments with the given length and
    angle (in degrees) between them.  t is a turtle.
    """
    for i in range(n):
        fd(t, length)
        lt(t, angle)
```

This docstring is a triple-quoted string, also known as a multiline string because the triple quotes allow the string to span more than one line.

It is terse, but it contains the essential information someone would need to use this function. It explains concisely what the function does (without getting into the details of how it does it). It explains what effect each parameter has on the behavior of the function and what type each parameter should be (if it is not obvious).

Writing this kind of documentation is an important part of interface design. A well-designed interface should be simple to explain; if you are having a hard time explaining one of your functions, that might be a sign that the interface could be improved.

### Debugging

An interface is like a contract between a function and a caller. The caller agrees to provide certain parameters and the function agrees to do certain work.

For example, polyline requires four arguments.
The first has to be a Turtle (or some other object that works with fd and lt). The second has to be a number, and it should probably be positive, although it turns out that the function works even if it isn't. The third argument should be an integer; range complains otherwise (depending on which version of Python you are running). The fourth has to be a number, which is understood to be in degrees.

These requirements are called preconditions because they are supposed to be true before the function starts executing. Conversely, conditions at the end of the function are postconditions. Postconditions include the intended effect of the function (like drawing line segments) and any side effects (like moving the Turtle or making other changes in the World).

Preconditions are the responsibility of the caller. If the caller violates a (properly documented!) precondition and the function doesn't work correctly, the bug is in the caller, not the function.

However, for purposes of debugging it is often a good idea for functions to check their preconditions rather than assume they are true. If every function checks its preconditions before starting, then if something goes wrong, you will know which function to blame.

### Glossary

instance: A member of a set. The TurtleWorld in this chapter is a member of the set of TurtleWorlds.
loop: A part of a program that can execute repeatedly.
encapsulation: The process of transforming a sequence of statements into a function definition.
generalization: The process of replacing something unnecessarily specific (like a number) with something appropriately general (like a variable or parameter).
keyword argument: An argument that includes the name of the parameter as a "keyword."
interface: A description of how to use a function, including the name and descriptions of the arguments and return value.
development plan: A process for writing programs.
docstring: A string that appears in a function definition to document the function's interface.
precondition: A requirement that should be satisfied by the caller before a function starts.
postcondition: A requirement that should be satisfied by the function before it ends.

### Exercises

Exercise 1

Download the code in this chapter from thinkpython.com/code/polygon.py.

• Write appropriate docstrings for polygon, arc and circle.
• Draw a stack diagram that shows the state of the program while executing circle(bob, radius). You can do the arithmetic by hand or add print statements to the code.
• The version of arc in Section 4.7 is not very accurate because the linear approximation of the circle is always outside the true circle. As a result, the turtle ends up a few units away from the correct destination. My solution shows a way to reduce the effect of this error. Read the code and see if it makes sense to you. If you draw a diagram, you might see how it works.

Exercise 2

Write an appropriately general set of functions that can draw flowers like this:

<IMG SRC="book005.png">

You can download a solution from thinkpython.com/code/flower.py.

Exercise 3

Write an appropriately general set of functions that can draw shapes like this:

<IMG SRC="book006.png">

You can download a solution from thinkpython.com/code/pie.py.

Exercise 4

The letters of the alphabet can be constructed from a moderate number of basic elements, like vertical and horizontal lines and a few curves. Design a font that can be drawn with a minimal number of basic elements and then write functions that draw letters of the alphabet.

You should write one function for each letter, with names draw_a, draw_b, etc., and put your functions in a file named letters.py. You can download a "turtle typewriter" from thinkpython.com/code/typewriter.py to help you test your code.

You can download a solution from thinkpython.com/code/letters.py.
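The refactored functions in this chapter can also be checked without a display. The sketch below is my own turtle-free re-implementation, not from the book: polyline returns the list of points a turtle would visit instead of drawing them, and polygon, arc, and circle are built on it exactly as in the refactored versions above.

```python
import math

def polyline(n, length, angle, start=(0.0, 0.0), heading=0.0):
    # return the points a turtle would visit: n segments of the
    # given length, turning left by angle degrees after each one
    points = [start]
    x, y = start
    for i in range(n):
        rad = math.radians(heading)
        x += length * math.cos(rad)
        y += length * math.sin(rad)
        points.append((x, y))
        heading += angle
    return points

def polygon(n, length):
    return polyline(n, length, 360.0 / n)

def arc(r, angle):
    arc_length = r * math.radians(angle)
    n = int(arc_length / 3) + 1
    return polyline(n, arc_length / n, float(angle) / n)

def circle(r):
    return arc(r, 360)

square_path = polygon(4, 100)   # last vertex coincides with the first
```

A full circle closes back on itself (the segments form a regular polygon), while a partial arc ends slightly off the true endpoint because, as Exercise 1 notes, the linear approximation always lies outside the true circle.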
# Conditionals and recursion

### Modulus operator

The modulus operator works on integers and yields the remainder when the first operand is divided by the second. In Python, the modulus operator is a percent sign (%). The syntax is the same as for other operators:

```
>>> quotient = 7 // 3
>>> print(quotient)
2
>>> remainder = 7 % 3
>>> print(remainder)
1
```

So 7 divided by 3 is 2 with 1 left over.

The modulus operator turns out to be surprisingly useful. For example, you can check whether one number is divisible by another: if x % y is zero, then x is divisible by y.

Also, you can extract the right-most digit or digits from a number. For example, x % 10 yields the right-most digit of x (in base 10). Similarly x % 100 yields the last two digits.

### Boolean expressions

A boolean expression is an expression that is either true or false. The following examples use the operator ==, which compares two operands and produces True if they are equal and False otherwise:

```
>>> 5 == 5
True
>>> 5 == 6
False
```

True and False are special values that belong to the type bool; they are not strings:

```
>>> type(True)
<type 'bool'>
>>> type(False)
<type 'bool'>
```

The == operator is one of the comparison operators; the others are:

```
x != y    # x is not equal to y
x > y     # x is greater than y
x < y     # x is less than y
x >= y    # x is greater than or equal to y
x <= y    # x is less than or equal to y
```

Although these operations are probably familiar to you, the Python symbols are different from the mathematical symbols. A common error is to use a single equal sign (=) instead of a double equal sign (==). Remember that = is an assignment operator and == is a comparison operator. There is no such thing as =< or =>.

### Logical operators

There are three logical operators: and, or, and not. The semantics (meaning) of these operators is similar to their meaning in English. For example, x > 0 and x < 10 is true only if x is greater than 0 and less than 10.
n%2 == 0 or n%3 == 0 is true if either of the conditions is true, that is, if the number is divisible by 2 or 3.

Finally, the not operator negates a boolean expression, so not (x > y) is true if x > y is false, that is, if x is less than or equal to y.

Strictly speaking, the operands of the logical operators should be boolean expressions, but Python is not very strict. Any nonzero number is interpreted as "true."

```
>>> 17 and True
True
```

This flexibility can be useful, but there are some subtleties to it that might be confusing. You might want to avoid it (unless you know what you are doing).

### Conditional execution

In order to write useful programs, we almost always need the ability to check conditions and change the behavior of the program accordingly. Conditional statements give us this ability. The simplest form is the if statement:

```
if x > 0:
    print 'x is positive'
```

The boolean expression after the if statement is called the condition. If it is true, then the indented statement gets executed. If not, nothing happens.

if statements have the same structure as function definitions: a header followed by an indented block. Statements like this are called compound statements.

There is no limit on the number of statements that can appear in the body, but there has to be at least one. Occasionally, it is useful to have a body with no statements (usually as a place keeper for code you haven't written yet). In that case, you can use the pass statement, which does nothing.

```
if x < 0:
    pass          # need to handle negative values!
```

### Alternative execution

A second form of the if statement is alternative execution, in which there are two possibilities and the condition determines which one gets executed. The syntax looks like this:

```
if x % 2 == 0:
    print 'x is even'
else:
    print 'x is odd'
```

If the remainder when x is divided by 2 is 0, then we know that x is even, and the program displays a message to that effect. If the condition is false, the second set of statements is executed.
Since the condition must be true or false, exactly one of the alternatives will be executed. The alternatives are called branches, because they are branches in the flow of execution.

### Chained conditionals

Sometimes there are more than two possibilities and we need more than two branches. One way to express a computation like that is a chained conditional:

```
if x < y:
    print 'x is less than y'
elif x > y:
    print 'x is greater than y'
else:
    print 'x and y are equal'
```

elif is an abbreviation of "else if." Again, exactly one branch will be executed. There is no limit on the number of elif statements. If there is an else clause, it has to be at the end, but there doesn't have to be one.

```
if choice == 'a':
    draw_a()
elif choice == 'b':
    draw_b()
elif choice == 'c':
    draw_c()
```

Each condition is checked in order. If the first is false, the next is checked, and so on. If one of them is true, the corresponding branch executes, and the statement ends. Even if more than one condition is true, only the first true branch executes.

### Nested conditionals

One conditional can also be nested within another. We could have written the trichotomy example like this:

```
if x == y:
    print 'x and y are equal'
else:
    if x < y:
        print 'x is less than y'
    else:
        print 'x is greater than y'
```

The outer conditional contains two branches. The first branch contains a simple statement. The second branch contains another if statement, which has two branches of its own. Those two branches are both simple statements, although they could have been conditional statements as well.

Although the indentation of the statements makes the structure apparent, nested conditionals become difficult to read very quickly. In general, it is a good idea to avoid them when you can.

Logical operators often provide a way to simplify nested conditional statements. For example, we can rewrite the following code using a single conditional:

```
if 0 < x:
    if x < 10:
        print 'x is a positive single-digit number.'
```
The print statement is executed only if we make it past both conditionals, so we can get the same effect with the and operator:

```
if 0 < x and x < 10:
    print 'x is a positive single-digit number.'
```

### Recursion

It is legal for one function to call another; it is also legal for a function to call itself. It may not be obvious why that is a good thing, but it turns out to be one of the most magical things a program can do. For example, look at the following function:

```
def countdown(n):
    if n <= 0:
        print 'Blastoff!'
    else:
        print n
        countdown(n-1)
```

If n is 0 or negative, it outputs the word, "Blastoff!" Otherwise, it outputs n and then calls a function named countdown (itself), passing n-1 as an argument.

What happens if we call this function like this?

```
>>> countdown(3)
```

The execution of countdown begins with n=3, and since n is greater than 0, it outputs the value 3, and then calls itself...

The execution of countdown begins with n=2, and since n is greater than 0, it outputs the value 2, and then calls itself...

The execution of countdown begins with n=1, and since n is greater than 0, it outputs the value 1, and then calls itself...

The execution of countdown begins with n=0, and since n is not greater than 0, it outputs the word, "Blastoff!" and then returns.

The countdown that got n=1 returns.
The countdown that got n=2 returns.
The countdown that got n=3 returns.

And then you're back in __main__. So, the total output looks like this:

```
3
2
1
Blastoff!
```

A function that calls itself is recursive; the process is called recursion.

As another example, we can write a function that prints a string n times.

```
def print_n(s, n):
    if n <= 0:
        return
    print s
    print_n(s, n-1)
```

If n <= 0 the return statement exits the function. The flow of execution immediately returns to the caller, and the remaining lines of the function are not executed.

The rest of the function is similar to countdown: if n is greater than 0, it displays s and then calls itself to display s n−1 additional times.
So the number of lines of output is 1 + (n - 1), which adds up to n.

For simple examples like this, it is probably easier to use a for loop. But we will see examples later that are hard to write with a for loop and easy to write with recursion, so it is good to start early.

### Stack diagrams for recursive functions

In Section 3.10, we used a stack diagram to represent the state of a program during a function call. The same kind of diagram can help interpret a recursive function.

Every time a function gets called, Python creates a new function frame, which contains the function's local variables and parameters. For a recursive function, there might be more than one frame on the stack at the same time.

This figure shows a stack diagram for countdown called with n = 3:

<IMG SRC="book007.png">

As usual, the top of the stack is the frame for __main__. It is empty because we did not create any variables in __main__ or pass any arguments to it.

The four countdown frames have different values for the parameter n. The bottom of the stack, where n=0, is called the base case. It does not make a recursive call, so there are no more frames.

Draw a stack diagram for print_n called with s = 'Hello' and n=2.

Write a function called do_n that takes a function object and a number, n, as arguments, and that calls the given function n times.

### Infinite recursion

If a recursion never reaches a base case, it goes on making recursive calls forever, and the program never terminates. This is known as infinite recursion, and it is generally not a good idea. Here is a minimal program with an infinite recursion:

```
def recurse():
    recurse()
```

In most programming environments, a program with infinite recursion does not really run forever. Python reports an error message when the maximum recursion depth is reached:

```
  File "<stdin>", line 2, in recurse
  File "<stdin>", line 2, in recurse
  File "<stdin>", line 2, in recurse
                  .
                  .
                  .
```
```
  File "<stdin>", line 2, in recurse
RuntimeError: Maximum recursion depth exceeded
```

This traceback is a little bigger than the one we saw in the previous chapter. When the error occurs, there are 1000 recurse frames on the stack!

### Keyboard input

The programs we have written so far are a bit rude in the sense that they accept no input from the user. They just do the same thing every time.

Python provides a built-in function called raw_input that gets input from the keyboard[1]. When this function is called, the program stops and waits for the user to type something. When the user presses Return or Enter, the program resumes and raw_input returns what the user typed as a string.

```
>>> input = raw_input()
What are you waiting for?
>>> print input
What are you waiting for?
```

Before getting input from the user, it is a good idea to print a prompt telling the user what to input. raw_input can take a prompt as an argument:

```
>>> name = raw_input('What...is your name?\n')
What...is your name?
Arthur, King of the Britons!
>>> print name
Arthur, King of the Britons!
```

The sequence \n at the end of the prompt represents a newline, which is a special character that causes a line break. That's why the user's input appears below the prompt.

If you expect the user to type an integer, you can try to convert the return value to int:

```
>>> prompt = 'What...is the airspeed velocity of an unladen swallow?\n'
>>> speed = raw_input(prompt)
What...is the airspeed velocity of an unladen swallow?
17
>>> int(speed)
17
```

But if the user types something other than a string of digits, you get an error:

```
>>> speed = raw_input(prompt)
What...is the airspeed velocity of an unladen swallow?
What do you mean, an African or a European swallow?
>>> int(speed)
ValueError: invalid literal for int()
```

We will see how to handle this kind of error later.
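The book handles this kind of error later with exception handling. As a preview, here is a sketch (the helper name to_int is my own) that catches the ValueError and returns None instead of crashing when the text is not a number:

```python
def to_int(text):
    # try to convert user input to an integer;
    # return None if the text is not a valid integer
    try:
        return int(text)
    except ValueError:
        return None

speed = to_int('17')                # 17
bogus = to_int('an African swallow')  # None
```

A caller can then test the result with an if statement and re-prompt the user instead of letting the program die with a traceback.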
### Debugging

The traceback Python displays when an error occurs contains a lot of information, but it can be overwhelming, especially when there are many frames on the stack. The most useful parts are usually:

• What kind of error it was, and
• Where it occurred.

Syntax errors are usually easy to find, but there are a few gotchas. Whitespace errors can be tricky because spaces and tabs are invisible and we are used to ignoring them.

>>> x = 5
>>>  y = 6
  File "<stdin>", line 1
    y = 6
    ^
SyntaxError: invalid syntax

In this example, the problem is that the second line is indented by one space. But the error message points to y, which is misleading. In general, error messages indicate where the problem was discovered, but the actual error might be earlier in the code, sometimes on a previous line.

The same is true of runtime errors. Suppose you are trying to compute a signal-to-noise ratio in decibels. The formula is

SNR_db = 10 log10 (P_signal / P_noise)

In Python, you might write something like this:

import math
signal_power = 9
noise_power = 10
ratio = signal_power / noise_power
decibels = 10 * math.log10(ratio)
print decibels

But when you run it, you get an error message:

Traceback (most recent call last):
  File "snr.py", line 5, in ?
    decibels = 10 * math.log10(ratio)
OverflowError: math range error

The error message indicates line 5, but there is nothing wrong with that line. To find the real error, it might be useful to print the value of ratio, which turns out to be 0. The problem is in line 4, because dividing two integers does floor division. The solution is to represent signal power and noise power with floating-point values.

In general, error messages tell you where the problem was discovered, but that is often not where it was caused.

### Glossary

modulus operator: An operator, denoted with a percent sign (%), that works on integers and yields the remainder when one number is divided by another.
boolean expression: An expression whose value is either True or False.

comparison operator: One of the operators that compares its operands: ==, !=, >, <, >=, and <=.

logical operator: One of the operators that combines boolean expressions: and, or, and not.

conditional statement: A statement that controls the flow of execution depending on some condition.

condition: The boolean expression in a conditional statement that determines which branch is executed.

compound statement: A statement that consists of a header and a body. The header ends with a colon (:). The body is indented relative to the header.

body: The sequence of statements within a compound statement.

branch: One of the alternative sequences of statements in a conditional statement.

chained conditional: A conditional statement with a series of alternative branches.

nested conditional: A conditional statement that appears in one of the branches of another conditional statement.

recursion: The process of calling the function that is currently executing.

base case: A conditional branch in a recursive function that does not make a recursive call.

infinite recursion: A function that calls itself recursively without ever reaching the base case. Eventually, an infinite recursion causes a runtime error.

### Exercises

Exercise 1

Fermat's Last Theorem says that there are no integers a, b, and c such that

a^n + b^n = c^n

for any values of n greater than 2.

• Write a function named check_fermat that takes four parameters—a, b, c and n—and that checks to see if Fermat's theorem holds. If n is greater than 2 and it turns out to be true that a^n + b^n = c^n, the program should print, "Holy smokes, Fermat was wrong!" Otherwise the program should print, "No, that doesn't work."

• Write a function that prompts the user to input values for a, b, c and n, converts them to integers, and uses check_fermat to check whether they violate Fermat's theorem.
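One possible shape for such a check, sketched in Python 3 syntax (the function name and messages follow the exercise statement; this is not the only valid solution):

```python
def check_fermat(a, b, c, n):
    """Print whether a, b, c, n would be a counterexample to
    Fermat's Last Theorem."""
    if n > 2 and a**n + b**n == c**n:
        print("Holy smokes, Fermat was wrong!")
    else:
        print("No, that doesn't work.")

check_fermat(3, 4, 5, 2)  # n is not greater than 2, so: No, that doesn't work.
check_fermat(1, 2, 3, 3)  # 1 + 8 != 27, so: No, that doesn't work.
```

Note that ** is Python's exponentiation operator, so a**n renders the a^n of the exercise.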
Exercise 2

If you are given three sticks, you may or may not be able to arrange them in a triangle. For example, if one of the sticks is 12 inches long and the other two are one inch long, it is clear that you will not be able to get the short sticks to meet in the middle. For any three lengths, there is a simple test to see if it is possible to form a triangle:

"If any of the three lengths is greater than the sum of the other two, then you cannot form a triangle. Otherwise, you can[2]."

• Write a function named is_triangle that takes three integers as arguments, and that prints either "Yes" or "No," depending on whether you can or cannot form a triangle from sticks with the given lengths.

• Write a function that prompts the user to input three stick lengths, converts them to integers, and uses is_triangle to check whether sticks with the given lengths can form a triangle.

The following exercises use TurtleWorld from Chapter 4:

Exercise 3

Read the following function and see if you can figure out what it does. Then run it (see the examples in Chapter 4).

def draw(t, length, n):
    if n == 0:
        return
    angle = 50
    fd(t, length*n)
    lt(t, angle)
    draw(t, length, n-1)
    rt(t, 2*angle)
    draw(t, length, n-1)
    lt(t, angle)
    bk(t, length*n)

Exercise 4

The Koch curve is a fractal that looks something like this:

<IMG SRC="book008.png">

To draw a Koch curve with length x, all you have to do is

• Draw a Koch curve with length x/3.
• Turn left 60 degrees.
• Draw a Koch curve with length x/3.
• Turn right 120 degrees.
• Draw a Koch curve with length x/3.
• Turn left 60 degrees.
• Draw a Koch curve with length x/3.

The only exception is if x is less than 3. In that case, you can just draw a straight line with length x.

• Write a function called koch that takes a turtle and a length as parameters, and that uses the turtle to draw a Koch curve with the given length.

• Write a function called snowflake that draws three Koch curves to make the outline of a snowflake.
You can see my solution at thinkpython.com/code/koch.py.

• The Koch curve can be generalized in several ways. See wikipedia.org/wiki/Koch_snowflake for examples and implement your favorite.

### Notes

1. In Python 3.0, this function is named input.
2. If the sum of two lengths equals the third, they form what is called a "degenerate" triangle.

# Fruitful functions

## Return values

Some of the built-in functions we have used, such as the math functions, produce results. Calling the function generates a value, which we usually assign to a variable or use as part of an expression.

e = math.exp(1.0)
height = radius * math.sin(radians)

All of the functions we have written so far are void; they print something or move turtles around, but their return value is None.

In this chapter, we are (finally) going to write fruitful functions. The first example is area, which returns the area of a circle with the given radius:

def area(radius):
    temp = math.pi * radius**2
    return temp

We have seen the return statement before, but in a fruitful function the return statement includes an expression. This statement means: "Return immediately from this function and use the following expression as a return value." The expression can be arbitrarily complicated, so we could have written this function more concisely:

def area(radius):
    return math.pi * radius**2

On the other hand, temporary variables like temp often make debugging easier.

Sometimes it is useful to have multiple return statements, one in each branch of a conditional:

def absolute_value(x):
    if x < 0:
        return -x
    else:
        return x

Since these return statements are in an alternative conditional, only one will be executed. As soon as a return statement executes, the function terminates without executing any subsequent statements. Code that appears after a return statement, or any other place the flow of execution can never reach, is called dead code.
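A small sketch of dead code in action (Python 3 syntax): the final print statement inside the function can never run, because every path through the conditional returns first.

```python
def absolute_value(x):
    if x < 0:
        return -x
    else:
        return x
    # Dead code: both branches above return, so control never gets here.
    print('This line can never execute.')

print(absolute_value(-5))  # 5; the print inside the function never happens
print(absolute_value(3))   # 3
```

Python does not complain about dead code, so it is worth scanning for it yourself when a function grows.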
In a fruitful function, it is a good idea to ensure that every possible path through the program hits a return statement. For example:

def absolute_value(x):
    if x < 0:
        return -x
    if x > 0:
        return x

This function is incorrect because if x happens to be 0, neither condition is true, and the function ends without hitting a return statement. If the flow of execution gets to the end of a function, the return value is None, which is not the absolute value of 0.

>>> print absolute_value(0)
None

By the way, Python provides a built-in function called abs that computes absolute values.

### Exercise 1

Write a compare function that returns 1 if x > y, 0 if x == y, and -1 if x < y.

## Incremental development

As you write larger functions, you might find yourself spending more time debugging. To deal with increasingly complex programs, you might want to try a process called incremental development. The goal of incremental development is to avoid long debugging sessions by adding and testing only a small amount of code at a time.

As an example, suppose you want to find the distance between two points, given by the coordinates (x1, y1) and (x2, y2). By the Pythagorean theorem, the distance is:

distance = √((x2 − x1)² + (y2 − y1)²)

The first step is to consider what a distance function should look like in Python. In other words, what are the inputs (parameters) and what is the output (return value)?

In this case, the inputs are two points, which you can represent using four numbers. The return value is the distance, which is a floating-point value. Already you can write an outline of the function:

def distance(x1, y1, x2, y2):
    return 0.0

Obviously, this version doesn't compute distances; it always returns zero. But it is syntactically correct, and it runs, which means that you can test it before you make it more complicated.
To test the new function, call it with sample arguments:

>>> distance(1, 2, 4, 6)
0.0

I chose these values so that the horizontal distance is 3 and the vertical distance is 4; that way, the result is 5 (the hypotenuse of a 3-4-5 triangle). When testing a function, it is useful to know the right answer.

At this point we have confirmed that the function is syntactically correct, and we can start adding code to the body. A reasonable next step is to find the differences x2 − x1 and y2 − y1. The next version stores those values in temporary variables and prints them.

def distance(x1, y1, x2, y2):
    dx = x2 - x1
    dy = y2 - y1
    print 'dx is', dx
    print 'dy is', dy
    return 0.0

If the function is working, it should display 'dx is 3' and 'dy is 4'. If so, we know that the function is getting the right arguments and performing the first computation correctly. If not, there are only a few lines to check.

Next we compute the sum of squares of dx and dy:

def distance(x1, y1, x2, y2):
    dx = x2 - x1
    dy = y2 - y1
    dsquared = dx**2 + dy**2
    print 'dsquared is: ', dsquared
    return 0.0

Again, you would run the program at this stage and check the output (which should be 25). Finally, you can use math.sqrt to compute and return the result:

def distance(x1, y1, x2, y2):
    dx = x2 - x1
    dy = y2 - y1
    dsquared = dx**2 + dy**2
    result = math.sqrt(dsquared)
    return result

If that works correctly, you are done. Otherwise, you might want to print the value of result before the return statement.

The final version of the function doesn't display anything when it runs; it only returns a value. The print statements we wrote are useful for debugging, but once you get the function working, you should remove them. Code like that is called scaffolding because it is helpful for building the program but is not part of the final product.

When you start out, you should add only a line or two of code at a time. As you gain more experience, you might find yourself writing and debugging bigger chunks.
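Collected into a single runnable file (sketched here in Python 3 syntax, with the math import the function depends on), the finished version and its sanity check look like this:

```python
import math

def distance(x1, y1, x2, y2):
    """Return the Euclidean distance between (x1, y1) and (x2, y2)."""
    dx = x2 - x1
    dy = y2 - y1
    dsquared = dx**2 + dy**2
    return math.sqrt(dsquared)

# The 3-4-5 triangle from the text makes the answer easy to check by hand.
print(distance(1, 2, 4, 6))  # 5.0
```
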
Either way, incremental development can save you a lot of debugging time. The key aspects of the process are:

• Start with a working program and make small incremental changes. At any point, if there is an error, you should have a good idea where it is.

• Use temporary variables to hold intermediate values so you can display and check them.

• Once the program is working, you might want to remove some of the scaffolding or consolidate multiple statements into compound expressions, but only if it does not make the program difficult to read.

### Exercise 2

Use incremental development to write a function called hypotenuse that returns the length of the hypotenuse of a right triangle given the lengths of the two legs as arguments. Record each stage of the development process as you go.

## Composition

As you should expect by now, you can call one function from within another. This ability is called composition. As an example, we'll write a function that takes two points, the center of the circle and a point on the perimeter, and computes the area of the circle.

Assume that the center point is stored in the variables xc and yc, and the perimeter point is in xp and yp. The first step is to find the radius of the circle, which is the distance between the two points.
We just wrote a function, distance, that does that:

radius = distance(xc, yc, xp, yp)

The next step is to find the area of a circle with that radius; we just wrote that, too:

result = area(radius)

Encapsulating these steps in a function, we get:

def circle_area(xc, yc, xp, yp):
    radius = distance(xc, yc, xp, yp)
    result = area(radius)
    return result

The temporary variables radius and result are useful for development and debugging, but once the program is working, we can make it more concise by composing the function calls:

def circle_area(xc, yc, xp, yp):
    return area(distance(xc, yc, xp, yp))

## Boolean functions

Functions can return booleans, which is often convenient for hiding complicated tests inside functions. For example:

def is_divisible(x, y):
    if x % y == 0:
        return True
    else:
        return False

It is common to give boolean functions names that sound like yes/no questions; is_divisible returns either True or False to indicate whether x is divisible by y. Here is an example:

>>> is_divisible(6, 4)
False
>>> is_divisible(6, 3)
True

The result of the == operator is a boolean, so we can write the function more concisely by returning it directly:

def is_divisible(x, y):
    return x % y == 0

Boolean functions are often used in conditional statements:

if is_divisible(x, y):
    print 'x is divisible by y'

It might be tempting to write something like:

if is_divisible(x, y) == True:
    print 'x is divisible by y'

But the extra comparison is unnecessary.

Exercise 3

Write a function is_between(x, y, z) that returns True if x ≤ y ≤ z or False otherwise.

## More recursion

We have only covered a small subset of Python, but you might be interested to know that this subset is a complete programming language, which means that anything that can be computed can be expressed in this language.
Any program ever written could be rewritten using only the language features you have learned so far (actually, you would need a few commands to control devices like the keyboard, mouse, disks, etc., but that's all).

Proving that claim is a nontrivial exercise first accomplished by Alan Turing, one of the first computer scientists (some would argue that he was a mathematician, but a lot of early computer scientists started as mathematicians). Accordingly, it is known as the Turing Thesis. For a more complete (and accurate) discussion of the Turing Thesis, I recommend Michael Sipser's book Introduction to the Theory of Computation.

To give you an idea of what you can do with the tools you have learned so far, we'll evaluate a few recursively defined mathematical functions. A recursive definition is similar to a circular definition, in the sense that the definition contains a reference to the thing being defined. A truly circular definition is not very useful:

frabjuous: An adjective used to describe something that is frabjuous.

If you saw that definition in the dictionary, you might be annoyed. On the other hand, if you looked up the definition of the factorial function, denoted with the symbol !, you might get something like this:

0! = 1
n! = n (n−1)!

This definition says that the factorial of 0 is 1, and the factorial of any other value, n, is n multiplied by the factorial of n−1.

So 3! is 3 times 2!, which is 2 times 1!, which is 1 times 0!. Putting it all together, 3! equals 3 times 2 times 1 times 1, which is 6.

If you can write a recursive definition of something, you can usually write a Python program to evaluate it. The first step is to decide what the parameters should be.
In this case it should be clear that factorial takes an integer:

def factorial(n):

If the argument happens to be 0, all we have to do is return 1:

def factorial(n):
    if n == 0:
        return 1

Otherwise, and this is the interesting part, we have to make a recursive call to find the factorial of n−1 and then multiply it by n:

def factorial(n):
    if n == 0:
        return 1
    else:
        recurse = factorial(n-1)
        result = n * recurse
        return result

The flow of execution for this program is similar to the flow of countdown in Section 5.8. If we call factorial with the value 3:

Since 3 is not 0, we take the second branch and calculate the factorial of n-1...

Since 2 is not 0, we take the second branch and calculate the factorial of n-1...

Since 1 is not 0, we take the second branch and calculate the factorial of n-1...

Since 0 is 0, we take the first branch and return 1 without making any more recursive calls.

The return value (1) is multiplied by n, which is 1, and the result is returned.

The return value (1) is multiplied by n, which is 2, and the result is returned.

The return value (2) is multiplied by n, which is 3, and the result, 6, becomes the return value of the function call that started the whole process.

Here is what the stack diagram looks like for this sequence of function calls:

<IMG SRC="book009.png">

The return values are shown being passed back up the stack. In each frame, the return value is the value of result, which is the product of n and recurse.

In the last frame, the local variables recurse and result do not exist, because the branch that creates them does not execute.

## Leap of faith

Following the flow of execution is one way to read programs, but it can quickly become labyrinthine. An alternative is what I call the "leap of faith." When you come to a function call, instead of following the flow of execution, you assume that the function works correctly and returns the right result.

In fact, you are already practicing this leap of faith when you use built-in functions.
When you call math.cos or math.exp, you don't examine the bodies of those functions. You just assume that they work because the people who wrote the built-in functions were good programmers.

The same is true when you call one of your own functions. For example, in Section 6.4, we wrote a function called is_divisible that determines whether one number is divisible by another. Once we have convinced ourselves that this function is correct—by examining the code and testing—we can use the function without looking at the body again.

The same is true of recursive programs. When you get to the recursive call, instead of following the flow of execution, you should assume that the recursive call works (yields the correct result) and then ask yourself, "Assuming that I can find the factorial of n−1, can I compute the factorial of n?" In this case, it is clear that you can, by multiplying by n.

Of course, it's a bit strange to assume that the function works correctly when you haven't finished writing it, but that's why it's called a leap of faith!

## One more example

After factorial, the most common example of a recursively defined mathematical function is fibonacci, which has the following definition[1]:

fibonacci(0) = 0
fibonacci(1) = 1
fibonacci(n) = fibonacci(n−1) + fibonacci(n−2)

Translated into Python, it looks like this:

def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n-1) + fibonacci(n-2)

If you try to follow the flow of execution here, even for fairly small values of n, your head explodes. But according to the leap of faith, if you assume that the two recursive calls work correctly, then it is clear that you get the right result by adding them together.

## Checking types

What happens if we call factorial and give it 1.5 as an argument?

>>> factorial(1.5)
RuntimeError: Maximum recursion depth exceeded

It looks like an infinite recursion. But how can that be? There is a base case—when n == 0.
But if n is not an integer, we can miss the base case and recurse forever.

In the first recursive call, the value of n is 0.5. In the next, it is -0.5. From there, it gets smaller (more negative), but it will never be 0.

We have two choices. We can try to generalize the factorial function to work with floating-point numbers, or we can make factorial check the type of its argument. The first option is called the gamma function[2] and it's a little beyond the scope of this book. So we'll go for the second.

We can use the built-in function isinstance to verify the type of the argument. While we're at it, we can also make sure the argument is positive:

def factorial(n):
    if not isinstance(n, int):
        print 'Factorial is only defined for integers.'
        return None
    elif n < 0:
        print 'Factorial is only defined for positive integers.'
        return None
    elif n == 0:
        return 1
    else:
        return n * factorial(n-1)

The first base case handles nonintegers; the second catches negative integers. In both cases, the program prints an error message and returns None to indicate that something went wrong:

>>> factorial('fred')
Factorial is only defined for integers.
None
>>> factorial(-2)
Factorial is only defined for positive integers.
None

If we get past both checks, then we know that n is a positive integer, and we can prove that the recursion terminates.

This program demonstrates a pattern sometimes called a guardian. The first two conditionals act as guardians, protecting the code that follows from values that might cause an error. The guardians make it possible to prove the correctness of the code.

## Debugging

Breaking a large program into smaller functions creates natural checkpoints for debugging. If a function is not working, there are three possibilities to consider:

• There is something wrong with the arguments the function is getting; a precondition is violated.

• There is something wrong with the function; a postcondition is violated.
• There is something wrong with the return value or the way it is being used.

To rule out the first possibility, you can add a print statement at the beginning of the function and display the values of the parameters (and maybe their types). Or you can write code that checks the preconditions explicitly.

If the parameters look good, add a print statement before each return statement that displays the return value. If possible, check the result by hand. Consider calling the function with values that make it easy to check the result (as in Section 6.2).

If the function seems to be working, look at the function call to make sure the return value is being used correctly (or used at all!).

Adding print statements at the beginning and end of a function can help make the flow of execution more visible. For example, here is a version of factorial with print statements:

def factorial(n):
    space = ' ' * (4 * n)
    print space, 'factorial', n
    if n == 0:
        print space, 'returning 1'
        return 1
    else:
        recurse = factorial(n-1)
        result = n * recurse
        print space, 'returning', result
        return result

space is a string of space characters that controls the indentation of the output. Here is the result of factorial(5):

                     factorial 5
                 factorial 4
             factorial 3
         factorial 2
     factorial 1
 factorial 0
 returning 1
     returning 1
         returning 2
             returning 6
                 returning 24
                     returning 120

If you are confused about the flow of execution, this kind of output can be helpful. It takes some time to develop effective scaffolding, but a little bit of scaffolding can save a lot of debugging.

## Glossary

temporary variable: A variable used to store an intermediate value in a complex calculation.

dead code: Part of a program that can never be executed, often because it appears after a return statement.

None: A special value returned by functions that have no return statement or a return statement without an argument.
incremental development: A program development plan intended to avoid debugging by adding and testing only a small amount of code at a time.

scaffolding: Code that is used during program development but is not part of the final version.

guardian: A programming pattern that uses a conditional statement to check for and handle circumstances that might cause an error.

## Exercises

### Exercise 4

Draw a stack diagram for the following program. What does the program print?

def b(z):
    prod = a(z, z)
    print z, prod
    return prod

def a(x, y):
    x = x + 1
    return x * y

def c(x, y, z):
    sum = x + y + z
    pow = b(sum)**2
    return pow

x = 1
y = x + 1
print c(x, y+3, x+y)

### Exercise 5

The Ackermann function, A(m, n), is defined[3]:

A(m, n) = n + 1                    if m = 0
A(m, n) = A(m−1, 1)                if m > 0 and n = 0
A(m, n) = A(m−1, A(m, n−1))        if m > 0 and n > 0

Write a function named ack that evaluates the Ackermann function.

### Notes

1. See wikipedia.org/wiki/Fibonacci_number.
2. See wikipedia.org/wiki/Gamma_function.
3. See wikipedia.org/wiki/Ackermann_function.

# Strings

## A string is a sequence

A string is a sequence of characters. You can access the characters one at a time with the bracket operator:

>>> fruit = 'banana'
>>> letter = fruit[1]

The second statement selects character number 1 from fruit and assigns it to letter.

The expression in brackets is called an index. The index indicates which character in the sequence you want (hence the name).

But you might not get what you expect:

>>> print letter
a

For most people, the first letter of 'banana' is b, not a. But for computer scientists, the index is an offset from the beginning of the string, and the offset of the first letter is zero.

>>> letter = fruit[0]
>>> print letter
b

So b is the 0th letter ("zero-eth") of 'banana', a is the 1th letter ("one-eth"), and n is the 2th ("two-eth") letter.

You can use any expression, including variables and operators, as an index, but the value of the index has to be an integer.
Otherwise you get:

>>> letter = fruit[1.5]
TypeError: string indices must be integers

## len

len is a built-in function that returns the number of characters in a string:

>>> fruit = 'banana'
>>> len(fruit)
6

To get the last letter of a string, you might be tempted to try something like this:

>>> length = len(fruit)
>>> last = fruit[length]
IndexError: string index out of range

The reason for the IndexError is that there is no letter in 'banana' with the index 6. Since we started counting at zero, the six letters are numbered 0 to 5. To get the last character, you have to subtract 1 from length:

>>> last = fruit[length-1]
>>> print last
a

Alternatively, you can use negative indices, which count backward from the end of the string. The expression fruit[-1] yields the last letter, fruit[-2] yields the second to last, and so on.

## Traversal with a for loop

A lot of computations involve processing a string one character at a time. Often they start at the beginning, select each character in turn, do something to it, and continue until the end. This pattern of processing is called a traversal. One way to write a traversal is with a while loop:

index = 0
while index < len(fruit):
    letter = fruit[index]
    print letter
    index = index + 1

This loop traverses the string and displays each letter on a line by itself. The loop condition is index < len(fruit), so when index is equal to the length of the string, the condition is false, and the body of the loop is not executed. The last character accessed is the one with the index len(fruit)-1, which is the last character in the string.

Exercise 1

Write a function that takes a string as an argument and displays the letters backward, one per line.

Another way to write a traversal is with a for loop:

for char in fruit:
    print char

Each time through the loop, the next character in the string is assigned to the variable char. The loop continues until no characters are left.
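The two traversal patterns above visit exactly the same characters in the same order. A small sketch (Python 3 syntax) confirms that by collecting the letters into lists instead of printing them:

```python
fruit = 'banana'

# while-loop traversal, using an explicit index
letters_while = []
index = 0
while index < len(fruit):
    letters_while.append(fruit[index])
    index = index + 1

# for-loop traversal over the same string
letters_for = []
for char in fruit:
    letters_for.append(char)

print(letters_while == letters_for)  # True: same letters, same order
```

The for loop is shorter and harder to get wrong; the while loop is useful when you need the index itself.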
The following example shows how to use concatenation (string addition) and a for loop to generate an abecedarian series (that is, in alphabetical order). In Robert McCloskey's book Make Way for Ducklings, the names of the ducklings are Jack, Kack, Lack, Mack, Nack, Ouack, Pack, and Quack. This loop outputs these names in order:

prefixes = 'JKLMNOPQ'
suffix = 'ack'
for letter in prefixes:
    print letter + suffix

The output is:

Jack
Kack
Lack
Mack
Nack
Oack
Pack
Qack

Of course, that's not quite right because "Ouack" and "Quack" are misspelled.

### Exercise 2

Modify the program to fix this error.

## String slices

A segment of a string is called a slice. Selecting a slice is similar to selecting a character:

>>> s = 'Monty Python'
>>> print s[0:5]
Monty
>>> print s[6:13]
Python

The operator [n:m] returns the part of the string from the "n-eth" character to the "m-eth" character, including the first but excluding the last. This behavior is counterintuitive, but it might help to imagine the indices pointing between the characters, as in the following diagram:

<IMG SRC="book011.png">

If you omit the first index (before the colon), the slice starts at the beginning of the string. If you omit the second index, the slice goes to the end of the string:

>>> fruit = 'banana'
>>> fruit[:3]
'ban'
>>> fruit[3:]
'ana'

If the first index is greater than or equal to the second, the result is an empty string, represented by two quotation marks:

>>> fruit = 'banana'
>>> fruit[3:3]
''

An empty string contains no characters and has length 0, but other than that, it is the same as any other string.

### Exercise 3

Given that fruit is a string, what does fruit[:] mean?

## Strings are immutable

It is tempting to use the [] operator on the left side of an assignment, with the intention of changing a character in a string. For example:

>>> greeting = 'Hello, world!'
>>> greeting[0] = 'J'
TypeError: object does not support item assignment

The "object" in this case is the string and the "item" is the character you tried to assign. For now, an object is the same thing as a value, but we will refine that definition later. An item is one of the values in a sequence.

The reason for the error is that strings are immutable, which means you can't change an existing string. The best you can do is create a new string that is a variation on the original:

>>> greeting = 'Hello, world!'
>>> new_greeting = 'J' + greeting[1:]
>>> print new_greeting
Jello, world!

This example concatenates a new first letter onto a slice of greeting. It has no effect on the original string.

## Searching

What does the following function do?

def find(word, letter):
    index = 0
    while index < len(word):
        if word[index] == letter:
            return index
        index = index + 1
    return -1

In a sense, find is the opposite of the [] operator. Instead of taking an index and extracting the corresponding character, it takes a character and finds the index where that character appears. If the character is not found, the function returns -1.

This is the first example we have seen of a return statement inside a loop. If word[index] == letter, the function breaks out of the loop and returns immediately. If the character doesn't appear in the string, the program exits the loop normally and returns -1.

This pattern of computation—traversing a sequence and returning when we find what we are looking for—is called a search.

### Exercise 4

Modify find so that it has a third parameter, the index in word where it should start looking.

## Looping and counting

The following program counts the number of times the letter a appears in a string:

word = 'banana'
count = 0
for letter in word:
    if letter == 'a':
        count = count + 1
print count

This program demonstrates another pattern of computation called a counter. The variable count is initialized to 0 and then incremented each time an a is found.
When the loop exits, count contains the result—the total number of a's.

Exercise 5

Encapsulate this code in a function named count, and generalize it so that it accepts the string and the letter as arguments.

Exercise 6

Rewrite this function so that instead of traversing the string, it uses the three-parameter version of find from the previous section.

## String methods

A method is similar to a function—it takes arguments and returns a value—but the syntax is different. For example, the method upper takes a string and returns a new string with all uppercase letters. Instead of the function syntax upper(word), it uses the method syntax word.upper().

>>> word = 'banana'
>>> new_word = word.upper()
>>> print new_word
BANANA

This form of dot notation specifies the name of the method, upper, and the name of the string to apply the method to, word. The empty parentheses indicate that this method takes no arguments.

A method call is called an invocation; in this case, we would say that we are invoking upper on the word.

As it turns out, there is a string method named find that is remarkably similar to the function we wrote:

>>> word = 'banana'
>>> index = word.find('a')
>>> print index
1

In this example, we invoke find on word and pass the letter we are looking for as a parameter.

Actually, the find method is more general than our function; it can find substrings, not just characters:

>>> word.find('na')
2

It can take as a second argument the index where it should start:

>>> word.find('na', 3)
4

And as a third argument the index where it should stop:

>>> name = 'bob'
>>> name.find('b', 1, 2)
-1

This search fails because b does not appear in the index range from 1 to 2 (not including 2).

Exercise 7

There is a string method called count that is similar to the function in the previous exercise. Read the documentation of this method and write an invocation that counts the number of a's in 'banana'.
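For reference, the built-in count method is invoked with the same dot notation as find, and like find it accepts optional start and end indices. A quick sketch (Python 3 syntax):

```python
word = 'banana'

# count returns the number of non-overlapping occurrences of its argument
print(word.count('a'))     # 3
print(word.count('na'))    # 2

# like find, it accepts an optional start index (and an optional end index)
print(word.count('a', 2))  # 2: only counts from index 2 onward
```
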
## The in operator The word in is a boolean operator that takes two strings and returns True if the first appears as a substring in the second: >>> 'a' in 'banana' True >>> 'seed' in 'banana' False For example, the following function prints all the letters from word1 that also appear in word2: def in_both(word1, word2): for letter in word1: if letter in word2: print letter With well-chosen variable names, Python sometimes reads like English. You could read this loop, “for (each) letter in (the first) word, if (the) letter (appears) in (the second) word, print (the) letter.” Here’s what you get if you compare apples and oranges: >>> in_both('apples', 'oranges') a e s ## String comparison The comparison operators work on strings. To see if two strings are equal: if word == 'banana': print 'All right, bananas.' Other comparison operations are useful for putting words in alphabetical order: if word < 'banana': print 'Your word,' + word + ', comes before banana.' elif word > 'banana': print 'Your word,' + word + ', comes after banana.' else: print 'All right, bananas.' Python does not handle uppercase and lowercase letters the same way that people do. All the uppercase letters come before all the lowercase letters, so: Your word, Pineapple, comes before banana. A common way to address this problem is to convert strings to a standard format, such as all lowercase, before performing the comparison. Keep that in mind in case you have to defend yourself against a man armed with a Pineapple. ## Debugging When you use indices to traverse the values in a sequence, it is tricky to get the beginning and end of the traversal right. 
Here is a function that is supposed to compare two words and return True if one of the words is the reverse of the other, but it contains two errors: def is_reverse(word1, word2): if len(word1) != len(word2): return False i = 0 j = len(word2) while j > 0: if word1[i] != word2[j]: return False i = i+1 j = j-1 return True The first if statement checks whether the words are the same length. If not, we can return False immediately and then, for the rest of the function, we can assume that the words are the same length. This is an example of the guardian pattern in Section 6.8. i and j are indices: i traverses word1 forward while j traverses word2 backward. If we find two letters that don’t match, we can return False immediately. If we get through the whole loop and all the letters match, we return True. If we test this function with the words “pots” and “stop”, we expect the return value True, but we get an IndexError: >>> is_reverse('pots', 'stop') ... File "reverse.py", line 15, in is_reverse if word1[i] != word2[j]: IndexError: string index out of range For debugging this kind of error, my first move is to print the values of the indices immediately before the line where the error appears. while j > 0: print i, j # print here if word1[i] != word2[j]: return False i = i+1 j = j-1 Now when I run the program again, I get more information: >>> is_reverse('pots', 'stop') 0 4 ... IndexError: string index out of range The first time through the loop, the value of j is 4, which is out of range for the string 'pots'. The index of the last character is 3, so the initial value for j should be len(word2)-1. If I fix that error and run the program again, I get: >>> is_reverse('pots', 'stop') 0 3 1 2 2 1 True This time we get the right answer, but it looks like the loop only ran three times, which is suspicious. To get a better idea of what is happening, it is useful to draw a state diagram. 
During the first iteration, the frame for is_reverse looks like this: <IMG SRC="book012.png"> I took a little license by arranging the variables in the frame and adding dotted lines to show that the values of i and j indicate characters in word1 and word2. Exercise 8 Starting with this diagram, execute the program on paper, changing the values of 'i' and 'j' during each iteration. Find and fix the second error in this function. ## Glossary object: Something a variable can refer to. For now, you can use “object” and “value” interchangeably. sequence: An ordered set; that is, a set of values where each value is identified by an integer index. item: One of the values in a sequence. index: An integer value used to select an item in a sequence, such as a character in a string. slice: A part of a string specified by a range of indices. empty string: A string with no characters and length 0, represented by two quotation marks. immutable: The property of a sequence whose items cannot be assigned. traverse: To iterate through the items in a sequence, performing a similar operation on each. search: A pattern of traversal that stops when it finds what it is looking for. counter: A variable used to count something, usually initialized to zero and then incremented. method: A function that is associated with an object and called using dot notation. invocation: A statement that calls a method. ## Exercises ### Exercise 9 A string slice can take a third index that specifies the “step size;” that is, the number of spaces between successive characters. A step size of 2 means every other character; 3 means every third, etc. >>> fruit = 'banana' >>> fruit[0:5:2] 'bnn' A step size of -1 goes through the word backwards, so the slice [::-1] generates a reversed string. Use this idiom to write a one-line version of is_palindrome from Exercise '6.6'. ### Exercise 10 Read the documentation of the string methods at 'docs.python.org/lib/string-methods.html'. 
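A few more step-size examples may make the third slice index concrete; the string here is just for illustration.

```python
s = 'abcdefg'

# Step size 2: every other character, starting at index 0.
assert s[::2] == 'aceg'

# Starting at index 1 with step 2 picks up the characters skipped above.
assert s[1::2] == 'bdf'

# A start and end still apply before the step is taken:
# indices 1, 3, 5 give 'bdf'.
assert s[1:6:2] == 'bdf'

# Step -1 walks the string backwards.
assert s[::-1] == 'gfedcba'
```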
You might want to experiment with some of them to make sure you understand how they work. 'strip' and 'replace' are particularly useful. The documentation uses a syntax that might be confusing. For example, in find(sub[, start[, end]]), the brackets indicate optional arguments. So 'sub' is required, but 'start' is optional, and if you include 'start', then 'end' is optional. ### Exercise 11 The following functions are all intended to check whether a string contains any lowercase letters, but at least some of them are wrong. For each function, describe what the function actually does. def any_lowercase1(s): for c in s: if c.islower(): return True else: return False def any_lowercase2(s): for c in s: if 'c'.islower(): return 'True' else: return 'False' def any_lowercase3(s): for c in s: flag = c.islower() return flag def any_lowercase4(s): flag = False for c in s: flag = flag or c.islower() return flag def any_lowercase5(s): for c in s: if not c.islower(): return False return True ### Exercise 12 ROT13 is a weak form of encryption that involves “rotating” each letter in a word by 13 places[1]. To rotate a letter means to shift it through the alphabet, wrapping around to the beginning if necessary, so ’A’ shifted by 3 is ’D’ and ’Z’ shifted by 1 is ’A’. Write a function called rotate_word that takes a string and an integer as parameters, and that returns a new string that contains the letters from the original string “rotated” by the given amount. For example, “cheer” rotated by 7 is “jolly” and “melon” rotated by -10 is “cubed”. You might want to use the built-in functions 'ord', which converts a character to a numeric code, and 'chr', which converts numeric codes to characters. Potentially offensive jokes on the Internet are sometimes encoded in ROT13. If you are not easily offended, find and decode some of them. ## Notes 1. 
See wikipedia.org/wiki/ROT13 # Case study: word play ## Reading word lists For the exercises in this chapter we need a list of English words. There are lots of word lists available on the Web, but the one most suitable for our purpose is one of the word lists collected and contributed to the public domain by Grady Ward as part of the Moby lexicon project[1]. It is a list of 113,809 official crosswords; that is, words that are considered valid in crossword puzzles and other word games. In the Moby collection, the filename is 113809of.fic; I include a copy of this file, with the simpler name words.txt, along with Swampy. This file is in plain text, so you can open it with a text editor, but you can also read it from Python. (You may need to move the file from the swampy folder into the main python folder) The built-in function open takes the name of the file as a parameter and returns a file object you can use to read the file. >>> fin = open('words.txt') >>> print fin <open file 'words.txt', mode 'r' at 0xb7f4b380> fin is a common name for a file object used for input. Mode 'r' indicates that this file is open for reading (as opposed to 'w' for writing). The file object provides several methods for reading, including readline, which reads characters from the file until it gets to a newline and returns the result as a string: >>> fin.readline() 'aa\r\n' The first word in this particular list is “aa,” which is a kind of lava. The sequence \r\n represents two whitespace characters, a carriage return and a newline, that separate this word from the next. The file object keeps track of where it is in the file, so if you call readline again, you get the next word: >>> fin.readline() 'aah\r\n' The next word is “aah,” which is a perfectly legitimate word, so stop looking at me like that. 
Or, if it’s the whitespace that’s bothering you, we can get rid of it with the string method strip: >>> line = fin.readline() >>> word = line.strip() >>> print word aahed You can also use a file object as part of a for loop. This program reads words.txt and prints each word, one per line: fin = open('words.txt') for line in fin: word = line.strip() print word Exercise 1 Write a program that reads 'words.txt' and prints only the words with more than 20 characters (not counting whitespace). ## Exercises There are solutions to these exercises in the next section. You should at least attempt each one before you read the solutions. Exercise 2 In 1939 Ernest Vincent Wright published a 50,000 word novel called Gadsby that does not contain the letter “e.” Since “e” is the most common letter in English, that’s not easy to do. In fact, it is difficult to construct a solitary thought without using that most common symbol. It is slow going at first, but with caution and hours of training you can gradually gain facility. All right, I’ll stop now. Write a function called has_no_e that returns 'True' if the given word doesn’t have the letter “e” in it. Modify your program from the previous section to print only the words that have no “e” and compute the percentage of words in the list that have no “e.” Exercise 3 Write a function named 'avoids' that takes a word and a string of forbidden letters, and that returns 'True' if the word doesn’t use any of the forbidden letters. Modify your program to prompt the user to enter a string of forbidden letters and then print the number of words that don’t contain any of them. Can you find a combination of 5 forbidden letters that excludes the smallest number of words? Exercise 4 Write a function named uses_only that takes a word and a string of letters, and that returns 'True' if the word contains only letters in the list. Can you make a sentence using only the letters 'acefhlo'? 
Other than “Hoe alfalfa?” Exercise 5 Write a function named uses_all that takes a word and a string of required letters, and that returns 'True' if the word uses all the required letters at least once. How many words are there that use all the vowels 'aeiou'? How about 'aeiouy'? Exercise 6 Write a function called is_abecedarian that returns 'True' if the letters in a word appear in alphabetical order (double letters are ok). How many abecedarian words are there? ## Search All of the exercises in the previous section have something in common; they can be solved with the search pattern we saw in Section 8.6. The simplest example is: def has_no_e(word): for letter in word: if letter == 'e': return False return True The for loop traverses the characters in word. If we find the letter “e”, we can immediately return False; otherwise we have to go to the next letter. If we exit the loop normally, that means we didn’t find an “e”, so we return True. You can write this function more concisely using the in operator, but I started with this version because it demonstrates the logic of the search pattern. avoids is a more general version of has_no_e but it has the same structure: def avoids(word, forbidden): for letter in word: if letter in forbidden: return False return True We can return False as soon as we find a forbidden letter; if we get to the end of the loop, we return True. uses_only is similar except that the sense of the condition is reversed: def uses_only(word, available): for letter in word: if letter not in available: return False return True Instead of a list of forbidden letters, we have a list of available letters. If we find a letter in word that is not in available, we can return False. uses_all is similar except that we reverse the role of the word and the string of letters: def uses_all(word, required): for letter in required: if letter not in word: return False return True Instead of traversing the letters in word, the loop traverses the required letters. 
If any of the required letters do not appear in the word, we can return False. If you were really thinking like a computer scientist, you would have recognized that uses_all was an instance of a previously-solved problem, and you would have written: def uses_all(word, required): return uses_only(required, word) This is an example of a program development method called problem recognition, which means that you recognize the problem you are working on as an instance of a previously-solved problem, and apply a previously-developed solution. ## Looping with indices I wrote the functions in the previous section with for loops because I only needed the characters in the strings; I didn’t have to do anything with the indices. For is_abecedarian we have to compare adjacent letters, which is a little tricky with a for loop: def is_abecedarian(word): previous = word[0] for c in word: if c < previous: return False previous = c return True An alternative is to use recursion: def is_abecedarian(word): if len(word) <= 1: return True if word[0] > word[1]: return False return is_abecedarian(word[1:]) Another option is to use a while loop: def is_abecedarian(word): i = 0 while i < len(word)-1: if word[i+1] < word[i]: return False i = i+1 return True The loop starts at i=0 and ends when i=len(word)-1. Each time through the loop, it compares the ith character (which you can think of as the current character) to the i+1th character (which you can think of as the next). If the next character is less than (alphabetically before) the current one, then we have discovered a break in the abecedarian trend, and we return False. If we get to the end of the loop without finding a fault, then the word passes the test. To convince yourself that the loop ends correctly, consider an example like 'flossy'. The length of the word is 6, so the last time the loop runs is when i is 4, which is the index of the second-to-last character. 
On the last iteration, it compares the second-to-last character to the last, which is what we want. Here is a version of is_palindrome (see Exercise 6.6) that uses two indices; one starts at the beginning and goes up; the other starts at the end and goes down. def is_palindrome(word): i = 0 j = len(word)-1 while i<j: if word[i] != word[j]: return False i = i+1 j = j-1 return True Or, if you noticed that this is an instance of a previously-solved problem, you might have written: def is_palindrome(word): return is_reverse(word, word) Assuming you did Exercise 8.8. ## Debugging Testing programs is hard. The functions in this chapter are relatively easy to test because you can check the results by hand. Even so, it is somewhere between difficult and impossible to choose a set of words that test for all possible errors. Taking has_no_e as an example, there are two obvious cases to check: words that have an ’e’ should return False; words that don’t should return True. You should have no trouble coming up with one of each. Within each case, there are some less obvious subcases. Among the words that have an “e,” you should test words with an “e” at the beginning, the end, and somewhere in the middle. You should test long words, short words, and very short words, like the empty string. The empty string is an example of a special case, which is one of the non-obvious cases where errors often lurk. In addition to the test cases you generate, you can also test your program with a word list like words.txt. By scanning the output, you might be able to catch errors, but be careful: you might catch one kind of error (words that should not be included, but are) and not another (words that should be included, but aren’t). In general, testing can help you find bugs, but it is not easy to generate a good set of test cases, and even if you do, you can’t be sure your program is correct. 
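The cases described above can be written down as assertions, so the checks run every time instead of being done by hand once. has_no_e here repeats the version from the Search section; the test words are arbitrary choices.

```python
def has_no_e(word):
    # Search pattern: return False as soon as an 'e' is found.
    for letter in word:
        if letter == 'e':
            return False
    return True

# Words with an 'e' at the beginning, in the middle, and at the end.
assert has_no_e('echo') == False
assert has_no_e('melon') == False
assert has_no_e('apple') == False

# Words without an 'e', including the empty-string special case.
assert has_no_e('banana') == True
assert has_no_e('') == True
```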
According to a legendary computer scientist: Program testing can be used to show the presence of bugs, but never to show their absence! — Edsger W. Dijkstra ## Glossary file object: A value that represents an open file. problem recognition: A way of solving a problem by expressing it as an instance of a previously-solved problem. special case: A test case that is atypical or non-obvious (and less likely to be handled correctly). ## Exercises ### Exercise 7 This question is based on a Puzzler that was broadcast on the radio program Car Talk[2]: Give me a word with three consecutive double letters. I'll give you a couple of words that almost qualify, but don't. For example, the word committee, c-o-m-m-i-t-t-e-e. It would be great except for the ‘i’ that sneaks in there. Or Mississippi: M-i-s-s-i-s-s-i-p-p-i. If you could take out those i’s it would work. But there is a word that has three consecutive pairs of letters and to the best of my knowledge this may be the only word. Of course there are probably 500 more but I can only think of one. What is the word? Write a program to find it. You can see my solution at 'thinkpython.com/code/cartalk.py'. ### Exercise 8 Here’s another Car Talk Puzzler[3]: “I was driving on the highway the other day and I happened to notice my odometer. Like most odometers, it shows six digits, in whole miles only. So, if my car had 300,000 miles, for example, I’d see 3-0-0-0-0-0. “Now, what I saw that day was very interesting. I noticed that the last 4 digits were palindromic; that is, they read the same forward as backward. For example, 5-4-4-5 is a palindrome, so my odometer could have read 3-1-5-4-4-5. “One mile later, the last 5 numbers were palindromic. For example, it could have read 3-6-5-4-5-6. One mile after that, the middle 4 out of 6 numbers were palindromic. And you ready for this? One mile later, all 6 were palindromic! 
“The question is, what was on the odometer when I first looked?” Write a Python program that tests all the six-digit numbers and prints any numbers that satisfy these requirements. You can see my solution at 'thinkpython.com/code/cartalk.py'. ### Exercise 9 Here’s another Car Talk Puzzler you can solve with a search[4]: “Recently I had a visit with my mom and we realized that the two digits that make up my age when reversed resulted in her age. For example, if she’s 73, I’m 37. We wondered how often this has happened over the years but we got sidetracked with other topics and we never came up with an answer. “When I got home I figured out that the digits of our ages have been reversible six times so far. I also figured out that if we’re lucky it would happen again in a few years, and if we’re really lucky it would happen one more time after that. In other words, it would have happened 8 times over all. So the question is, how old am I now?” Write a Python program that searches for solutions to this Puzzler. Hint: you might find the string method 'zfill' useful. You can see my solution at 'thinkpython.com/code/cartalk.py'. ## Notes 1. wikipedia.org/wiki/Moby_Project 2. www.cartalk.com/content/puzzler/transcripts/200725 3. www.cartalk.com/content/puzzler/transcripts/200803 4. www.cartalk.com/content/puzzler/transcripts/200813 # Lists ## A list is a sequence Like a string, a list is a sequence of values. In a string, the values are characters; in a list, they can be any type. The values in a list are called elements or sometimes items. There are several ways to create a new list; the simplest is to enclose the elements in square brackets ([ and ]): [10, 20, 30, 40] ['crunchy frog', 'ram bladder', 'lark vomit'] The first example is a list of four integers. The second is a list of three strings. The elements of a list don’t have to be the same type. The following list contains a string, a float, an integer, and (lo!) 
another list: ['spam', 2.0, 5, [10, 20]] A list within another list is nested. A list that contains no elements is called an empty list; you can create one with empty brackets, []. As you might expect, you can assign list values to variables: >>> cheeses = ['Cheddar', 'Edam', 'Gouda'] >>> numbers = [17, 123] >>> empty = [] >>> print cheeses, numbers, empty ['Cheddar', 'Edam', 'Gouda'] [17, 123] [] ## Lists are mutable The syntax for accessing the elements of a list is the same as for accessing the characters of a string—the bracket operator. The expression inside the brackets specifies the index. Remember that the indices start at 0: >>> print cheeses[0] Cheddar Unlike strings, lists are mutable. When the bracket operator appears on the left side of an assignment, it identifies the element of the list that will be assigned. >>> numbers = [17, 123] >>> numbers[1] = 5 >>> print numbers [17, 5] The one-eth element of numbers, which used to be 123, is now 5. You can think of a list as a relationship between indices and elements. This relationship is called a mapping; each index “maps to” one of the elements. Here is a state diagram showing cheeses, numbers and empty: <IMG SRC="book013.png"> Lists are represented by boxes with the word “list” outside and the elements of the list inside. cheeses refers to a list with three elements indexed 0, 1 and 2. numbers contains two elements; the diagram shows that the value of the second element has been reassigned from 123 to 5. empty refers to a list with no elements. List indices work the same way as string indices: • Any integer expression can be used as an index. • If you try to read or write an element that does not exist, you get an IndexError. • If an index has a negative value, it counts backward from the end of the list. The in operator also works on lists. 
>>> cheeses = ['Cheddar', 'Edam', 'Gouda'] >>> 'Edam' in cheeses True >>> 'Brie' in cheeses False ## Traversing a list The most common way to traverse the elements of a list is with a for loop. The syntax is the same as for strings: for cheese in cheeses: print cheese This works well if you only need to read the elements of the list. But if you want to write or update the elements, you need the indices. A common way to do that is to combine the functions range and len: for i in range(len(numbers)): numbers[i] = numbers[i] * 2 This loop traverses the list and updates each element. len returns the number of elements in the list. range returns a list of indices from 0 to n−1, where n is the length of the list. Each time through the loop i gets the index of the next element. The assignment statement in the body uses i to read the old value of the element and to assign the new value. A for loop over an empty list never executes the body: for x in empty: print 'This never happens.' Although a list can contain another list, the nested list still counts as a single element. The length of this list is four: ['spam', 1, ['Brie', 'Roquefort', 'Pol le Veq'], [1, 2, 3]] ## List operations The + operator concatenates lists: >>> a = [1, 2, 3] >>> b = [4, 5, 6] >>> c = a + b >>> print c [1, 2, 3, 4, 5, 6] Similarly, the * operator repeats a list a given number of times: >>> [0] * 4 [0, 0, 0, 0] >>> [1, 2, 3] * 3 [1, 2, 3, 1, 2, 3, 1, 2, 3] The first example repeats [0] four times. The second example repeats the list [1, 2, 3] three times. ## List slices The slice operator also works on lists: >>> t = ['a', 'b', 'c', 'd', 'e', 'f'] >>> t[1:3] ['b', 'c'] >>> t[:4] ['a', 'b', 'c', 'd'] >>> t[3:] ['d', 'e', 'f'] If you omit the first index, the slice starts at the beginning. If you omit the second, the slice goes to the end. So if you omit both, the slice is a copy of the whole list. 
>>> t[:] ['a', 'b', 'c', 'd', 'e', 'f'] Since lists are mutable, it is often useful to make a copy before performing operations that fold, spindle or mutilate lists. A slice operator on the left side of an assignment can update multiple elements: >>> t = ['a', 'b', 'c', 'd', 'e', 'f'] >>> t[1:3] = ['x', 'y'] >>> print t ['a', 'x', 'y', 'd', 'e', 'f'] ## List methods Python provides methods that operate on lists. For example, append adds a new element to the end of a list: >>> t = ['a', 'b', 'c'] >>> t.append('d') >>> print t ['a', 'b', 'c', 'd'] extend takes a list as an argument and appends all of the elements: >>> t1 = ['a', 'b', 'c'] >>> t2 = ['d', 'e'] >>> t1.extend(t2) >>> print t1 ['a', 'b', 'c', 'd', 'e'] This example leaves t2 unmodified. sort arranges the elements of the list from low to high: >>> t = ['d', 'c', 'e', 'b', 'a'] >>> t.sort() >>> print t ['a', 'b', 'c', 'd', 'e'] List methods are all void; they modify the list and return None. If you accidentally write t = t.sort(), you will be disappointed with the result. ## Map, filter and reduce To add up all the numbers in a list, you can use a loop like this: def add_all(t): total = 0 for x in t: total += x return total total is initialized to 0. Each time through the loop, x gets one element from the list. The += operator provides a short way to update a variable: total += x is equivalent to: total = total + x As the loop executes, total accumulates the sum of the elements; a variable used this way is sometimes called an accumulator. Adding up the elements of a list is such a common operation that Python provides it as a built-in function, sum: >>> t = [1, 2, 3] >>> sum(t) 6 An operation like this that combines a sequence of elements into a single value is sometimes called reduce. Sometimes you want to traverse one list while building another. 
For example, the following function takes a list of strings and returns a new list that contains capitalized strings: def capitalize_all(t): res = [] for s in t: res.append(s.capitalize()) return res res is initialized with an empty list; each time through the loop, we append the next element. So res is another kind of accumulator. An operation like capitalize_all is sometimes called a map because it “maps” a function (in this case the method capitalize) onto each of the elements in a sequence. Another common operation is to select some of the elements from a list and return a sublist. For example, the following function takes a list of strings and returns a list that contains only the uppercase strings: def only_upper(t): res = [] for s in t: if s.isupper(): res.append(s) return res isupper is a string method that returns True if the string contains only upper case letters. An operation like only_upper is called a filter because it selects some of the elements and filters out the others. Most common list operations can be expressed as a combination of map, filter and reduce. Because these operations are so common, Python provides language features to support them, including the built-in function map and an operator called a “list comprehension.” ### Exercise 1 Write a function that takes a list of numbers and returns the cumulative sum; that is, a new list where the 'i'th element is the sum of the first 'i+1' elements from the original list. For example, the cumulative sum of '[1, 2, 3]' is '[1, 3, 6]'. ## Deleting elements There are several ways to delete elements from a list. If you know the index of the element you want, you can use pop: >>> t = ['a', 'b', 'c'] >>> x = t.pop(1) >>> print t ['a', 'c'] >>> print x b pop modifies the list and returns the element that was removed. If you don’t provide an index, it deletes and returns the last element. 
If you don’t need the removed value, you can use the del operator: >>> t = ['a', 'b', 'c'] >>> del t[1] >>> print t ['a', 'c'] If you know the element you want to remove (but not the index), you can use remove: >>> t = ['a', 'b', 'c'] >>> t.remove('b') >>> print t ['a', 'c'] The return value from remove is None. To remove more than one element, you can use del with a slice index: >>> t = ['a', 'b', 'c', 'd', 'e', 'f'] >>> del t[1:5] >>> print t ['a', 'f'] As usual, the slice selects all the elements up to, but not including, the second index. ## Lists and strings A string is a sequence of characters and a list is a sequence of values, but a list of characters is not the same as a string. To convert from a string to a list of characters, you can use list: >>> s = 'spam' >>> t = list(s) >>> print t ['s', 'p', 'a', 'm'] Because list is the name of a built-in function, you should avoid using it as a variable name. I also avoid l because it looks too much like 1. So that’s why I use t. The list function breaks a string into individual letters. If you want to break a string into words, you can use the split method: >>> s = 'pining for the fjords' >>> t = s.split() >>> print t ['pining', 'for', 'the', 'fjords'] An optional argument called a delimiter specifies which characters to use as word boundaries. The following example uses a hyphen as a delimiter: >>> s = 'spam-spam-spam' >>> delimiter = '-' >>> s.split(delimiter) ['spam', 'spam', 'spam'] join is the inverse of split. It takes a list of strings and concatenates the elements. join is a string method, so you have to invoke it on the delimiter and pass the list as a parameter: >>> t = ['pining', 'for', 'the', 'fjords'] >>> delimiter = ' ' >>> delimiter.join(t) 'pining for the fjords' In this case the delimiter is a space character, so join puts a space between words. To concatenate strings without spaces, you can use the empty string, '', as a delimiter. 
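Because join is the inverse of split, splitting with a delimiter and joining with the same delimiter gets back the original string. A small sketch of the round trip:

```python
s = 'spam-spam-spam'
delimiter = '-'

# split breaks the string into a list of pieces...
t = s.split(delimiter)
assert t == ['spam', 'spam', 'spam']

# ...and join on the same delimiter reassembles it.
assert delimiter.join(t) == s

# With the empty string as the delimiter, join concatenates
# the pieces with nothing between them.
assert ''.join(t) == 'spamspamspam'
```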
## Objects and values If we execute these assignment statements: a = 'banana' b = 'banana' We know that a and b both refer to a string, but we don’t know whether they refer to the same string. There are two possible states: <IMG SRC="book014.png"> In one case, a and b refer to two different objects that have the same value. In the second case, they refer to the same object. To check whether two variables refer to the same object, you can use the is operator. >>> a = 'banana' >>> b = 'banana' >>> a is b True In this example, Python only created one string object, and both a and b refer to it. But when you create two lists, you get two objects: >>> a = [1, 2, 3] >>> b = [1, 2, 3] >>> a is b False So the state diagram looks like this: <IMG SRC="book015.png"> In this case we would say that the two lists are equivalent, because they have the same elements, but not identical, because they are not the same object. If two objects are identical, they are also equivalent, but if they are equivalent, they are not necessarily identical. Until now, we have been using “object” and “value” interchangeably, but it is more precise to say that an object has a value. If you execute a = [1,2,3], a refers to a list object whose value is a particular sequence of elements. If another list has the same elements, we would say it has the same value. ## Aliasing If a refers to an object and you assign b = a, then both variables refer to the same object: >>> a = [1, 2, 3] >>> b = a >>> b is a True The state diagram looks like this: <IMG SRC="book016.png"> The association of a variable with an object is called a reference. In this example, there are two references to the same object. An object with more than one reference has more than one name, so we say that the object is aliased. If the aliased object is mutable, changes made with one alias affect the other: >>> b[0] = 17 >>> print a [17, 2, 3] Although this behavior can be useful, it is error-prone. 
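The distinction between equivalent and identical can be checked directly with the == and is operators; a short sketch:

```python
a = [1, 2, 3]
b = [1, 2, 3]

# Equivalent: the two lists contain the same elements...
assert a == b

# ...but not identical: they are two separate objects.
assert not (a is b)

# After aliasing, both names refer to one object, so a change
# made through one alias is visible through the other.
c = a
assert c is a
c[0] = 17
assert a == [17, 2, 3]
```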
In general, it is safer to avoid aliasing when you are working with mutable objects. For immutable objects like strings, aliasing is not as much of a problem. In this example: a = 'banana' b = 'banana' It almost never makes a difference whether a and b refer to the same string or not. ## List arguments When you pass a list to a function, the function gets a reference to the list. If the function modifies a list parameter, the caller sees the change. For example, delete_head removes the first element from a list: def delete_head(t): del t[0] Here’s how it is used: >>> letters = ['a', 'b', 'c'] >>> delete_head(letters) >>> print letters ['b', 'c'] The parameter t and the variable letters are aliases for the same object. The stack diagram looks like this: <IMG SRC="book017.png"> Since the list is shared by two frames, I drew it between them. It is important to distinguish between operations that modify lists and operations that create new lists. For example, the append method modifies a list, but the + operator creates a new list: >>> t1 = [1, 2] >>> t2 = t1.append(3) >>> print t1 [1, 2, 3] >>> print t2 None >>> t3 = t1 + [3] >>> print t3 [1, 2, 3] >>> t2 is t3 False This difference is important when you write functions that are supposed to modify lists. For example, this function does not delete the head of a list: def bad_delete_head(t): t = t[1:] # WRONG! The slice operator creates a new list and the assignment makes t refer to it, but none of that has any effect on the list that was passed as an argument. An alternative is to write a function that creates and returns a new list. For example, tail returns all but the first element of a list: def tail(t): return t[1:] This function leaves the original list unmodified. Here’s how it is used: >>> letters = ['a', 'b', 'c'] >>> rest = tail(letters) >>> print rest ['b', 'c'] ### Exercise 2 Write a function called 'chop' that takes a list and modifies it, removing the first and last elements, and returns 'None'. 
Then write a function called 'middle' that takes a list and returns a new list that contains all but the first and last elements. ## Debugging Careless use of lists (and other mutable objects) can lead to long hours of debugging. Here are some common pitfalls and ways to avoid them: • Don’t forget that most list methods modify the argument and return None. This is the opposite of the string methods, which return a new string and leave the original alone. If you are used to writing string code like this: word = word.strip() It is tempting to write list code like this: t = t.sort() # WRONG! Because sort returns None, the next operation you perform with t is likely to fail. Before using list methods and operators, you should read the documentation carefully and then test them in interactive mode. The methods and operators that lists share with other sequences (like strings) are documented at docs.python.org/lib/typesseq.html. The methods and operators that only apply to mutable sequences are documented at docs.python.org/lib/typesseq-mutable.html. • Pick an idiom and stick with it. Part of the problem with lists is that there are too many ways to do things. For example, to remove an element from a list, you can use pop, remove, del, or even a slice assignment. To add an element, you can use the append method or the + operator. But don’t forget that these are right: t.append(x) t = t + [x] And these are wrong: t.append([x]) # WRONG! t = t.append(x) # WRONG! t + [x] # WRONG! t = t + x # WRONG! Try out each of these examples in interactive mode to make sure you understand what they do. Notice that only the last one causes a runtime error; the other three are legal, but they do the wrong thing. • Make copies to avoid aliasing. If you want to use a method like sort that modifies the argument, but you need to keep the original list as well, you can make a copy. 
orig = t[:] t.sort() In this example you could also use the built-in function sorted, which returns a new, sorted list and leaves the original alone. But in that case you should avoid using sorted as a variable name! ## Glossary list: A sequence of values. element: One of the values in a list (or other sequence), also called items. index: An integer value that indicates an element in a list. nested list: A list that is an element of another list. list traversal: The sequential accessing of each element in a list. mapping: A relationship in which each element of one set corresponds to an element of another set. For example, a list is a mapping from indices to elements. accumulator: A variable used in a loop to add up or accumulate a result. reduce: A processing pattern that traverses a sequence and accumulates the elements into a single result. map: A processing pattern that traverses a sequence and performs an operation on each element. filter: A processing pattern that traverses a list and selects the elements that satisfy some criterion. object: Something a variable can refer to. An object has a type and a value. equivalent: Having the same value. identical: Being the same object (which implies equivalence). reference: The association between a variable and its value. aliasing: A circumstance where two variables refer to the same object. delimiter: A character or string used to indicate where a string should be split. ## Exercises ### Exercise 3 Write a function called is_sorted that takes a list as a parameter and returns 'True' if the list is sorted in ascending order and 'False' otherwise. You can assume (as a precondition) that the elements of the list can be compared with the comparison operators '<', '>', etc. For example, is_sorted([1,2,2]) should return 'True' and is_sorted(['b','a']) should return 'False'. ### Exercise 4 Two words are anagrams if you can rearrange the letters from one to spell the other. 
Write a function called is_anagram that takes two strings and returns 'True' if they are anagrams. ### Exercise 5 The (so-called) Birthday Paradox: Write a function called has_duplicates that takes a list and returns 'True' if there is any element that appears more than once. It should not modify the original list. • If there are 23 students in your class, what are the chances that two of you have the same birthday? You can estimate this probability by generating random samples of 23 birthdays and checking for matches. Hint: you can generate random birthdays with the 'randint' function in the 'random' module. You can read about this problem at 'wikipedia.org/wiki/Birthday_paradox', and you can see my solution at 'thinkpython.com/code/birthday.py'. ### Exercise 6 Write a function called remove_duplicates that takes a list and returns a new list with only the unique elements from the original. Hint: they don’t have to be in the same order. ### Exercise 7 Write a function that reads the file 'words.txt' and builds a list with one element per word. Write two versions of this function, one using the 'append' method and the other using the idiom 't = t + [x]'. Which one takes longer to run? Why? You can see my solution at 'thinkpython.com/code/wordlist.py'. ### Exercise 8 To check whether a word is in the word list, you could use the 'in' operator, but it would be slow because it searches through the words in order. Because the words are in alphabetical order, we can speed things up with a bisection search, which is similar to what you do when you look a word up in the dictionary. You start in the middle and check to see whether the word you are looking for comes before the word in the middle of the list. If so, then you search the first half of the list the same way. Otherwise you search the second half. Either way, you cut the remaining search space in half. If the word list has 113,809 words, it will take about 17 steps to find the word or conclude that it’s not there.
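The halving strategy described above is also available ready-made in the standard library's bisect module. A sketch (the word list here is a small made-up stand-in for words.txt, and this is not a solution to the exercise):

```python
import bisect

# A small sorted stand-in for the real word list.
words = ['ant', 'bee', 'cat', 'dog', 'emu', 'fox']

def in_word_list(t, target):
    # bisect_left finds where target would be inserted to keep t
    # sorted, halving the search range at each step.
    i = bisect.bisect_left(t, target)
    return i < len(t) and t[i] == target

assert in_word_list(words, 'dog')
assert not in_word_list(words, 'cow')
```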
Write a function called 'bisect' that takes a sorted list and a target value and returns the index of the value in the list, if it’s there, or 'None' if it’s not. Or you could read the documentation of the 'bisect' module and use that! ### Exercise 9 Two words are a “reverse pair” if each is the reverse of the other. Write a program that finds all the reverse pairs in the word list. ### Exercise 10 Two words “interlock” if taking alternating letters from each forms a new word[1]. For example, “shoe” and “cold” interlock to form “schooled.” • Write a program that finds all pairs of words that interlock. Hint: don’t enumerate all pairs! • Can you find any words that are three-way interlocked; that is, every third letter forms a word, starting from the first, second or third? [1] This exercise is inspired by an example at puzzlers.org. # Dictionaries A dictionary is like a list, but more general. In a list, the indices have to be integers; in a dictionary they can be (almost) any type. You can think of a dictionary as a mapping between a set of indices (which are called keys) and a set of values. Each key maps to a value. The association of a key and a value is called a key-value pair or sometimes an item. As an example, we'll build a dictionary that maps from English to Spanish words, so the keys and the values are all strings. The function dict creates a new dictionary with no items. Because dict is the name of a built-in function, you should avoid using it as a variable name. >>> eng2sp = dict() >>> print eng2sp {} The squiggly-brackets, {}, represent an empty dictionary. To add items to the dictionary, you can use square brackets: >>> eng2sp['one'] = 'uno' This line creates an item that maps from the key 'one' to the value 'uno'. If we print the dictionary again, we see a key-value pair with a colon between the key and value: >>> print eng2sp {'one': 'uno'} This output format is also an input format.
For example, you can create a new dictionary with three items: >>> eng2sp = {'one': 'uno', 'two': 'dos', 'three': 'tres'} But if you print eng2sp, you might be surprised: >>> print eng2sp {'one': 'uno', 'three': 'tres', 'two': 'dos'} The order of the key-value pairs is not the same. In fact, if you type the same example on your computer, you might get a different result. In general, the order of items in a dictionary is unpredictable. But that’s not a problem because the elements of a dictionary are never indexed with integer indices. Instead, you use the keys to look up the corresponding values: >>> print eng2sp['two'] dos The key 'two' always maps to the value 'dos' so the order of the items doesn’t matter. If the key isn’t in the dictionary, you get an exception: >>> print eng2sp['four'] KeyError: 'four' The len function works on dictionaries; it returns the number of key-value pairs: >>> len(eng2sp) 3 The in operator works on dictionaries; it tells you whether something appears as a key in the dictionary (appearing as a value is not good enough). >>> 'one' in eng2sp True >>> 'uno' in eng2sp False To see whether something appears as a value in a dictionary, you can use the method values, which returns the values as a list, and then use the in operator: >>> vals = eng2sp.values() >>> 'uno' in vals True The in operator uses different algorithms for lists and dictionaries. For lists, it uses a search algorithm, as in Section 8.6. As the list gets longer, the search time gets longer in direct proportion. For dictionaries, Python uses an algorithm called a hashtable that has a remarkable property: the in operator takes about the same amount of time no matter how many items there are in a dictionary. I won’t explain how that’s possible, but you can read more about it at wikipedia.org/wiki/Hash_table. ### Exercise 1 Write a function that reads the words in 'words.txt' and stores them as keys in a dictionary. It doesn’t matter what the values are.
Then you can use the 'in' operator as a fast way to check whether a string is in the dictionary. If you did Exercise '10.8', you can compare the speed of this implementation with the list 'in' operator and the bisection search. ## Dictionary as a set of counters Suppose you are given a string and you want to count how many times each letter appears. There are several ways you could do it: • You could create 26 variables, one for each letter of the alphabet. Then you could traverse the string and, for each character, increment the corresponding counter, probably using a chained conditional. • You could create a list with 26 elements. Then you could convert each character to a number (using the built-in function ord), use the number as an index into the list, and increment the appropriate counter. • You could create a dictionary with characters as keys and counters as the corresponding values. The first time you see a character, you would add an item to the dictionary. After that you would increment the value of an existing item. Each of these options performs the same computation, but each of them implements that computation in a different way. An implementation is a way of performing a computation; some implementations are better than others. For example, an advantage of the dictionary implementation is that we don’t have to know ahead of time which letters appear in the string and we only have to make room for the letters that do appear. Here is what the code might look like: def histogram(s): d = dict() for c in s: if c not in d: d[c] = 1 else: d[c] += 1 return d The name of the function is histogram, which is a statistical term for a set of counters (or frequencies). The first line of the function creates an empty dictionary. The for loop traverses the string. Each time through the loop, if the character c is not in the dictionary, we create a new item with key c and the initial value 1 (since we have seen this letter once). 
If c is already in the dictionary we increment d[c]. Here’s how it works: >>> h = histogram('brontosaurus') >>> print h {'a': 1, 'b': 1, 'o': 2, 'n': 1, 's': 2, 'r': 2, 'u': 2, 't': 1} The histogram indicates that the letters 'a' and 'b' appear once; 'o' appears twice, and so on. ### Exercise 2 Dictionaries have a method called 'get' that takes a key and a default value. If the key appears in the dictionary, 'get' returns the corresponding value; otherwise it returns the default value. For example: >>> h = histogram('a') >>> print h {'a': 1} >>> h.get('a', 0) 1 >>> h.get('b', 0) 0 Use 'get' to write 'histogram' more concisely. You should be able to eliminate the 'if' statement. ## Looping and dictionaries If you use a dictionary in a for statement, it traverses the keys of the dictionary. For example, print_hist prints each key and the corresponding value: def print_hist(h): for c in h: print c, h[c] Here’s what the output looks like: >>> h = histogram('parrot') >>> print_hist(h) a 1 p 1 r 2 t 1 o 1 Again, the keys are in no particular order. ### Exercise 3 Dictionaries have a method called 'keys' that returns the keys of the dictionary, in no particular order, as a list. Modify print_hist to print the keys and their values in alphabetical order. ## Reverse lookup Given a dictionary d and a key k, it is easy to find the corresponding value v = d[k]. This operation is called a lookup. But what if you have v and you want to find k? You have two problems: first, there might be more than one key that maps to the value v. Depending on the application, you might be able to pick one, or you might have to make a list that contains all of them. Second, there is no simple syntax to do a reverse lookup; you have to search.
Here is a function that takes a value and returns the first key that maps to that value: def reverse_lookup(d, v): for k in d: if d[k] == v: return k raise ValueError This function is yet another example of the search pattern, but it uses a feature we haven’t seen before, raise. The raise statement causes an exception; in this case it causes a ValueError, which generally indicates that there is something wrong with the value of a parameter. If we get to the end of the loop, that means v doesn’t appear in the dictionary as a value, so we raise an exception. Here is an example of a successful reverse lookup: >>> h = histogram('parrot') >>> k = reverse_lookup(h, 2) >>> print k r And an unsuccessful one: >>> k = reverse_lookup(h, 3) Traceback (most recent call last): File "<stdin>", line 1, in ? File "<stdin>", line 5, in reverse_lookup ValueError The result when you raise an exception is the same as when Python raises one: it prints a traceback and an error message. The raise statement takes a detailed error message as an optional argument. For example: >>> raise ValueError, 'value does not appear in the dictionary' Traceback (most recent call last): File "<stdin>", line 1, in ? ValueError: value does not appear in the dictionary A reverse lookup is much slower than a forward lookup; if you have to do it often, or if the dictionary gets big, the performance of your program will suffer. ### Exercise 4 Modify reverse_lookup so that it builds and returns a list of all keys that map to 'v', or an empty list if there are none. ## Dictionaries and lists Lists can appear as values in a dictionary. For example, if you were given a dictionary that maps from letters to frequencies, you might want to invert it; that is, create a dictionary that maps from frequencies to letters. Since there might be several letters with the same frequency, each value in the inverted dictionary should be a list of letters.
Here is a function that inverts a dictionary: def invert_dict(d): inv = dict() for key in d: val = d[key] if val not in inv: inv[val] = [key] else: inv[val].append(key) return inv Each time through the loop, key gets a key from d and val gets the corresponding value. If val is not in inv, that means we haven’t seen it before, so we create a new item and initialize it with a singleton (a list that contains a single element). Otherwise we have seen this value before, so we append the corresponding key to the list. Here is an example: >>> hist = histogram('parrot') >>> print hist {'a': 1, 'p': 1, 'r': 2, 't': 1, 'o': 1} >>> inv = invert_dict(hist) >>> print inv {1: ['a', 'p', 't', 'o'], 2: ['r']} And here is a diagram showing hist and inv: <IMG SRC="book018.png"> A dictionary is represented as a box with the type dict above it and the key-value pairs inside. If the values are integers, floats or strings, I usually draw them inside the box, but I usually draw lists outside the box, just to keep the diagram simple. Lists can be values in a dictionary, as this example shows, but they cannot be keys. Here’s what happens if you try: >>> t = [1, 2, 3] >>> d = dict() >>> d[t] = 'oops' Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: list objects are unhashable I mentioned earlier that a dictionary is implemented using a hashtable and that means that the keys have to be hashable. A hash is a function that takes a value (of any kind) and returns an integer. Dictionaries use these integers, called hash values, to store and look up key-value pairs. This system works fine if the keys are immutable. But if the keys are mutable, like lists, bad things happen. For example, when you create a key-value pair, Python hashes the key and stores it in the corresponding location. If you modify the key and then hash it again, it would go to a different location. In that case you might have two entries for the same key, or you might not be able to find a key. 
Either way, the dictionary wouldn’t work correctly. That’s why the keys have to be hashable, and why mutable types like lists aren’t. The simplest way to get around this limitation is to use tuples, which we will see in the next chapter. Since dictionaries are mutable, they can’t be used as keys, but they can be used as values. ### Exercise 5 Read the documentation of the dictionary method 'setdefault' and use it to write a more concise version of invert_dict. ## Memos If you played with the fibonacci function from Section 6.7, you might have noticed that the bigger the argument you provide, the longer the function takes to run. Furthermore, the run time increases very quickly. To understand why, consider this call graph for fibonacci with n=4: <IMG SRC="book019.png"> A call graph shows a set of function frames, with lines connecting each frame to the frames of the functions it calls. At the top of the graph, fibonacci with n=4 calls fibonacci with n=3 and n=2. In turn, fibonacci with n=3 calls fibonacci with n=2 and n=1. And so on. Count how many times fibonacci(0) and fibonacci(1) are called. This is an inefficient solution to the problem, and it gets worse as the argument gets bigger. One solution is to keep track of values that have already been computed by storing them in a dictionary. A previously computed value that is stored for later use is called a memo[1]. Here is an implementation of fibonacci using memos: known = {0:0, 1:1} def fibonacci(n): if n in known: return known[n] res = fibonacci(n-1) + fibonacci(n-2) known[n] = res return res known is a dictionary that keeps track of the Fibonacci numbers we already know. It starts with two items: 0 maps to 0 and 1 maps to 1. Whenever fibonacci is called, it checks known. If the result is already there, it can return immediately. Otherwise it has to compute the new value, add it to the dictionary, and return it. 
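The effect of the memo can be made visible by counting calls. A sketch comparing the memoized version above with a plain recursive one (the call counter is my addition, not part of the original code):

```python
calls = {'plain': 0, 'memo': 0}

def fib_plain(n):
    # Recomputes the same subproblems over and over.
    calls['plain'] += 1
    if n < 2:
        return n
    return fib_plain(n-1) + fib_plain(n-2)

known = {0: 0, 1: 1}

def fib_memo(n):
    # Checks the memo before recursing, so each value is computed once.
    calls['memo'] += 1
    if n in known:
        return known[n]
    known[n] = fib_memo(n-1) + fib_memo(n-2)
    return known[n]

assert fib_plain(20) == 6765
assert fib_memo(20) == 6765
assert calls['memo'] < calls['plain']   # far fewer calls with the memo
```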
### Exercise 6 Run this version of 'fibonacci' and the original with a range of parameters and compare their run times. ## Global variables In the previous example, known is created outside the function, so it belongs to the special frame called __main__. Variables in __main__ are sometimes called global because they can be accessed from any function. Unlike local variables, which disappear when their function ends, global variables persist from one function call to the next. It is common to use global variables for flags; that is, boolean variables that indicate (“flag”) whether a condition is true. For example, some programs use a flag named verbose to control the level of detail in the output: verbose = True def example1(): if verbose: print 'Running example1' If you try to reassign a global variable, you might be surprised. The following example is supposed to keep track of whether the function has been called: been_called = False def example2(): been_called = True # WRONG But if you run it you will see that the value of been_called doesn’t change. The problem is that example2 creates a new local variable named been_called. The local variable goes away when the function ends, and has no effect on the global variable. To reassign a global variable inside a function you have to declare the global variable before you use it: been_called = False def example2(): global been_called been_called = True The global statement tells the interpreter something like, “In this function, when I say been_called, I mean the global variable; don’t create a local one.” Here’s an example that tries to update a global variable: count = 0 def example3(): count = count + 1 # WRONG If you run it you get: UnboundLocalError: local variable 'count' referenced before assignment Python assumes that count is local, which means that you are reading it before writing it. The solution, again, is to declare count global. 
def example3(): global count count += 1 If the global value is mutable, you can modify it without declaring it: known = {0:0, 1:1} def example4(): known[2] = 1 So you can add, remove and replace elements of a global list or dictionary, but if you want to reassign the variable, you have to declare it: def example5(): global known known = dict() ## Long integers If you compute fibonacci(50), you get: >>> fibonacci(50) 12586269025L The L at the end indicates that the result is a long integer[2], or type long. Values with type int have a limited range; long integers can be arbitrarily big, but as they get bigger they consume more space and time. The mathematical operators work on long integers, and the functions in the math module, too, so in general any code that works with int will also work with long. Any time the result of a computation is too big to be represented with an integer, Python converts the result to a long integer: >>> 1000 * 1000 1000000 >>> 100000 * 100000 10000000000L In the first case the result has type int; in the second case it is long. ### Exercise 7 Exponentiation of large integers is the basis of common algorithms for public-key encryption. Read the Wikipedia page on the RSA algorithm[3] and write functions to encode and decode messages. ## Debugging As you work with bigger datasets it can become unwieldy to debug by printing and checking data by hand. Here are some suggestions for debugging large datasets: Scale down the input: If possible, reduce the size of the dataset. For example if the program reads a text file, start with just the first 10 lines, or with the smallest example you can find. You can either edit the files themselves, or (better) modify the program so it reads only the first n lines. If there is an error, you can reduce n to the smallest value that manifests the error, and then increase it gradually as you find and correct errors.
Check summaries and types: Instead of printing and checking the entire dataset, consider printing summaries of the data: for example, the number of items in a dictionary or the total of a list of numbers. A common cause of runtime errors is a value that is not the right type. For debugging this kind of error, it is often enough to print the type of a value. Write self-checks: Sometimes you can write code to check for errors automatically. For example, if you are computing the average of a list of numbers, you could check that the result is not greater than the largest element in the list or less than the smallest. This is called a “sanity check” because it detects results that are “insane.” Another kind of check compares the results of two different computations to see if they are consistent. This is called a “consistency check.” Pretty print the output: Formatting debugging output can make it easier to spot an error. We saw an example in Section 6.9. The pprint module provides a pprint function that displays built-in types in a more human-readable format. Again, time you spend building scaffolding can reduce the time you spend debugging. ## Glossary dictionary: A mapping from a set of keys to their corresponding values. key-value pair: The representation of the mapping from a key to a value. item: Another name for a key-value pair. key: An object that appears in a dictionary as the first part of a key-value pair. value: An object that appears in a dictionary as the second part of a key-value pair. This is more specific than our previous use of the word “value.” implementation: A way of performing a computation. hashtable: The algorithm used to implement Python dictionaries. hash function: A function used by a hashtable to compute the location for a key. hashable: A type that has a hash function. Immutable types like integers, floats and strings are hashable; mutable types like lists and dictionaries are not. 
lookup: A dictionary operation that takes a key and finds the corresponding value. reverse lookup: A dictionary operation that takes a value and finds one or more keys that map to it. singleton: A list (or other sequence) with a single element. call graph: A diagram that shows every frame created during the execution of a program, with an arrow from each caller to each callee. histogram: A set of counters. memo: A computed value stored to avoid unnecessary future computation. global variable: A variable defined outside a function. Global variables can be accessed from any function. flag: A boolean variable used to indicate whether a condition is true. declaration: A statement like global that tells the interpreter something about a variable. ### Exercise 9 Two words are “rotate pairs” if you can rotate one of them and get the other (see rotate_word in Exercise '8.12'). Write a program that reads a wordlist and finds all the rotate pairs. ### Exercise 10 Here’s another Puzzler from Car Talk[4]: This was sent in by a fellow named Dan O’Leary. He came upon a common one-syllable, five-letter word recently that has the following unique property. When you remove the first letter, the remaining letters form a homophone of the original word, that is a word that sounds exactly the same. Replace the first letter, that is, put it back and remove the second letter and the result is yet another homophone of the original word. And the question is, what’s the word? Now I’m going to give you an example that doesn’t work. Let’s look at the five-letter word, ‘wrack.’ W-R-A-C-K, you know like to ‘wrack with pain.’ If I remove the first letter, I am left with a four-letter word, ’R-A-C-K.’ As in, ‘Holy cow, did you see the rack on that buck!
It must have been a nine-pointer!’ It’s a perfect homophone. If you put the ‘w’ back, and remove the ‘r,’ instead, you’re left with the word, ‘wack,’ which is a real word, it’s just not a homophone of the other two words. But there is, however, at least one word that Dan and we know of, which will yield two homophones if you remove either of the first two letters to make two, new four-letter words. The question is, what’s the word? ' You can use the dictionary from Exercise '11.1' to check whether a string is in the word list. To check whether two words are homophones, you can use the CMU Pronouncing Dictionary. You can download it from 'www.speech.cs.cmu.edu/cgi-bin/cmudict' or from 'thinkpython.com/code/c06d' and you can also download 'thinkpython.com/code/pronounce.py', which provides a function named read_dictionary that reads the pronouncing dictionary and returns a Python dictionary that maps from each word to a string that describes its primary pronunciation. Write a program that lists all the words that solve the Puzzler. You can see my solution at 'thinkpython.com/code/homophone.py'. ## Notes 1. See wikipedia.org/wiki/Memoization 2. In Python 3.0, type long is gone; all integers, even really big ones, are type int. 3. wikipedia.org/wiki/RSA 4. www.cartalk.com/content/puzzler/transcripts/200717 # Tuples ## Tuples are immutable A tuple is a sequence of values. The values can be any type, and they are indexed by integers, so in that respect tuples are a lot like lists. The important difference is that tuples are immutable. 
Syntactically, a tuple is a comma-separated list of values: >>> t = 'a', 'b', 'c', 'd', 'e' Although it is not necessary, it is common to enclose tuples in parentheses: >>> t = ('a', 'b', 'c', 'd', 'e') To create a tuple with a single element, you have to include the final comma: >>> t1 = ('a',) >>> type(t1) <type 'tuple'> Without the comma, Python treats ('a') as a string in parentheses: >>> t2 = ('a') >>> type(t2) <type 'str'> Another way to create a tuple is the built-in function tuple. With no argument, it creates an empty tuple: >>> t = tuple() >>> print t () If the argument is a sequence (string, list or tuple), the result is a tuple with the elements of the sequence: >>> t = tuple('lupins') >>> print t ('l', 'u', 'p', 'i', 'n', 's') Because tuple is the name of a built-in function, you should avoid using it as a variable name. Most list operators also work on tuples. The bracket operator indexes an element: >>> t = ('a', 'b', 'c', 'd', 'e') >>> print t[0] 'a' And the slice operator selects a range of elements. >>> print t[1:3] ('b', 'c') But if you try to modify one of the elements of the tuple, you get an error: >>> t[0] = 'A' TypeError: object doesn't support item assignment You can’t modify the elements of a tuple, but you can replace one tuple with another: >>> t = ('A',) + t[1:] >>> print t ('A', 'b', 'c', 'd', 'e') ## Tuple assignment It is often useful to swap the values of two variables. With conventional assignments, you have to use a temporary variable. For example, to swap a and b: >>> temp = a >>> a = b >>> b = temp This solution is cumbersome; tuple assignment is more elegant: >>> a, b = b, a The left side is a tuple of variables; the right side is a tuple of expressions. Each value is assigned to its respective variable. All the expressions on the right side are evaluated before any of the assignments. 
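A short sketch of the swap idiom and the rebinding idiom from above:

```python
# Swap without a temporary: the right side is evaluated first,
# then unpacked into the variables on the left.
a, b = 1, 2
a, b = b, a
assert (a, b) == (2, 1)

# A tuple can't be modified, but a name can be rebound to a new tuple.
t = ('a', 'b', 'c', 'd', 'e')
t = ('A',) + t[1:]
assert t == ('A', 'b', 'c', 'd', 'e')
```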
The number of variables on the left and the number of values on the right have to be the same:
>>> a, b = 1, 2, 3
ValueError: too many values to unpack
More generally, the right side can be any kind of sequence (string, list or tuple). For example, to split an email address into a user name and a domain, you could write:
>>> addr = 'monty@python.org'
>>> uname, domain = addr.split('@')
The return value from split is a list with two elements; the first element is assigned to uname, the second to domain.
>>> print uname
monty
>>> print domain
python.org
## Tuples as return values
Strictly speaking, a function can only return one value, but if the value is a tuple, the effect is the same as returning multiple values. For example, if you want to divide two integers and compute the quotient and remainder, it is inefficient to compute x/y and then x%y. It is better to compute them both at the same time.
The built-in function divmod takes two arguments and returns a tuple of two values, the quotient and remainder. You can store the result as a tuple:
>>> t = divmod(7, 3)
>>> print t
(2, 1)
Or use tuple assignment to store the elements separately:
>>> quot, rem = divmod(7, 3)
>>> print quot
2
>>> print rem
1
Here is an example of a function that returns a tuple:
def min_max(t):
    return min(t), max(t)
max and min are built-in functions that find the largest and smallest elements of a sequence. min_max computes both and returns a tuple of two values.
## Variable-length argument tuples
Functions can take a variable number of arguments. A parameter name that begins with * gathers arguments into a tuple. For example, printall takes any number of arguments and prints them:
def printall(*args):
    print args
The gather parameter can have any name you like, but args is conventional.
Here’s how the function works:
>>> printall(1, 2.0, '3')
(1, 2.0, '3')
You can combine the gather operator with required and positional arguments:
def pointless(required, optional=0, *args):
    print required, optional, args
Run this function with 1, 2, 3 and 4 or more arguments and make sure you understand what it does.
The complement of gather is scatter. If you have a sequence of values and you want to pass it to a function as multiple arguments, you can use the * operator. For example, divmod takes exactly two arguments; it doesn’t work with a tuple:
>>> t = (7, 3)
>>> divmod(t)
TypeError: divmod expected 2 arguments, got 1
But if you scatter the tuple, it works:
>>> divmod(*t)
(2, 1)
### Exercise 1
Many of the built-in functions use variable-length argument tuples. For example, 'max' and 'min' can take any number of arguments:
>>> max(1, 2, 3)
3
But 'sum' does not.
>>> sum(1, 2, 3)
TypeError: sum expected at most 2 arguments, got 3
Write a function called 'sumall' that takes any number of arguments and returns their sum.
## Lists and tuples
zip is a built-in function that takes two or more sequences and “zips” them into a list1 of tuples where each tuple contains one element from each sequence.
This example zips a string and a list:
>>> s = 'abc'
>>> t = [0, 1, 2]
>>> zip(s, t)
[('a', 0), ('b', 1), ('c', 2)]
The result is a list of tuples where each tuple contains a character from the string and the corresponding element from the list.
If the sequences are not the same length, the result has the length of the shorter one.
>>> zip('Anne', 'Elk')
[('A', 'E'), ('n', 'l'), ('n', 'k')]
You can use tuple assignment in a for loop to traverse a list of tuples:
t = [('a', 0), ('b', 1), ('c', 2)]
for letter, number in t:
    print number, letter
Each time through the loop, Python selects the next tuple in the list and assigns the elements to letter and number.
The output of this loop is:
0 a
1 b
2 c
If you combine zip, for and tuple assignment, you get a useful idiom for traversing two (or more) sequences at the same time. For example, has_match takes two sequences, t1 and t2, and returns True if there is an index i such that t1[i] == t2[i]:
def has_match(t1, t2):
    for x, y in zip(t1, t2):
        if x == y:
            return True
    return False
If you need to traverse the elements of a sequence and their indices, you can use the built-in function enumerate:
for index, element in enumerate('abc'):
    print index, element
The output of this loop is:
0 a
1 b
2 c
Again.
## Dictionaries and tuples
Dictionaries have a method called items that returns a list of tuples, where each tuple is a key-value pair2.
>>> d = {'a':0, 'b':1, 'c':2}
>>> t = d.items()
>>> print t
[('a', 0), ('c', 2), ('b', 1)]
As you should expect from a dictionary, the items are in no particular order.
Conversely, you can use a list of tuples to initialize a new dictionary:
>>> t = [('a', 0), ('c', 2), ('b', 1)]
>>> d = dict(t)
>>> print d
{'a': 0, 'c': 2, 'b': 1}
Combining dict with zip yields a concise way to create a dictionary:
>>> d = dict(zip('abc', range(3)))
>>> print d
{'a': 0, 'c': 2, 'b': 1}
The dictionary method update also takes a list of tuples and adds them, as key-value pairs, to an existing dictionary.
Combining items, tuple assignment and for, you get the idiom for traversing the keys and values of a dictionary:
for key, val in d.items():
    print val, key
The output of this loop is:
0 a
2 c
1 b
Again.
It is common to use tuples as keys in dictionaries (primarily because you can’t use lists). For example, a telephone directory might map from last-name, first-name pairs to telephone numbers. Assuming that we have defined last, first and number, we could write:
directory[last,first] = number
The expression in brackets is a tuple. We could use tuple assignment to traverse this dictionary.
for last, first in directory:
    print first, last, directory[last,first]
This loop traverses the keys in directory, which are tuples. It assigns the elements of each tuple to last and first, then prints the name and corresponding telephone number.
There are two ways to represent tuples in a state diagram. The more detailed version shows the indices and elements just as they appear in a list. For example, the tuple ('Cleese', 'John') would appear:
<IMG SRC="book020.png">
But in a larger diagram you might want to leave out the details. For example, a diagram of the telephone directory might appear:
<IMG SRC="book021.png">
Here the tuples are shown using Python syntax as a graphical shorthand. The telephone number in the diagram is the complaints line for the BBC, so please don’t call it.
## Comparing tuples
The comparison operators work with tuples and other sequences; Python starts by comparing the first element from each sequence. If they are equal, it goes on to the next elements, and so on, until it finds elements that differ. Subsequent elements are not considered (even if they are really big).
>>> (0, 1, 2) < (0, 3, 4)
True
>>> (0, 1, 2000000) < (0, 3, 4)
True
The sort function works the same way. It sorts primarily by first element, but in the case of a tie, it sorts by second element, and so on.
This feature lends itself to a pattern called DSU for Decorate a sequence by building a list of tuples with one or more sort keys preceding the elements from the sequence, Sort the list of tuples, and Undecorate by extracting the sorted elements of the sequence.
For example, suppose you have a list of words and you want to sort them from longest to shortest:
def sort_by_length(words):
    t = []
    for word in words:
        t.append((len(word), word))
    t.sort(reverse=True)
    res = []
    for length, word in t:
        res.append(word)
    return res
The first loop builds a list of tuples, where each tuple is a word preceded by its length.
sort compares the first element, length, first, and only considers the second element to break ties. The keyword argument reverse=True tells sort to go in decreasing order.
The second loop traverses the list of tuples and builds a list of words in descending order of length.
### Exercise 2
In this example, ties are broken by comparing words, so words with the same length appear in reverse alphabetical order. For other applications you might want to break ties at random. Modify this example so that words with the same length appear in random order. Hint: see the 'random' function in the 'random' module.
## Sequences of sequences
I have focused on lists of tuples, but almost all of the examples in this chapter also work with lists of lists, tuples of tuples, and tuples of lists. To avoid enumerating the possible combinations, it is sometimes easier to talk about sequences of sequences.
In many contexts, the different kinds of sequences (strings, lists and tuples) can be used interchangeably. So how and why do you choose one over the others?
To start with the obvious, strings are more limited than other sequences because the elements have to be characters. They are also immutable. If you need the ability to change the characters in a string (as opposed to creating a new string), you might want to use a list of characters instead.
Lists are more common than tuples, mostly because they are mutable. But there are a few cases where you might prefer tuples:
• In some contexts, like a return statement, it is syntactically simpler to create a tuple than a list. In other contexts, you might prefer a list.
• If you want to use a sequence as a dictionary key, you have to use an immutable type like a tuple or string.
• If you are passing a sequence as an argument to a function, using tuples reduces the potential for unexpected behavior due to aliasing.
Because tuples are immutable, they don’t provide methods like sort and reverse, which modify existing lists.
But Python provides the built-in functions sorted and reversed, which take any sequence as a parameter and return its elements in a different order (sorted returns a new list; reversed returns an iterator).
## Debugging
Lists, dictionaries and tuples are known generically as data structures; in this chapter we are starting to see compound data structures, like lists of tuples, and dictionaries that contain tuples as keys and lists as values.
Compound data structures are useful, but they are prone to what I call shape errors; that is, errors caused when a data structure has the wrong type, size or composition. For example, if you are expecting a list with one integer and I give you a plain old integer (not in a list), it won’t work.
To help debug these kinds of errors, I have written a module called structshape that provides a function, also called structshape, that takes any kind of data structure as an argument and returns a string that summarizes its shape. You can download it from thinkpython.com/code/structshape.py
Here’s the result for a simple list:
>>> from structshape import structshape
>>> t = [1,2,3]
>>> print structshape(t)
list of 3 int
A fancier program might write “list of 3 ints,” but it was easier not to deal with plurals. Here’s a list of lists:
>>> t2 = [[1,2], [3,4], [5,6]]
>>> print structshape(t2)
list of 3 list of 2 int
If the elements of the list are not the same type, structshape groups them, in order, by type:
>>> t3 = [1, 2, 3, 4.0, '5', '6', [7], [8], 9]
>>> print structshape(t3)
list of (3 int, float, 2 str, 2 list of int, int)
Here’s a list of tuples:
>>> s = 'abc'
>>> lt = zip(t, s)
>>> print structshape(lt)
list of 3 tuple of (int, str)
And here’s a dictionary with 3 items that map integers to strings.
>>> d = dict(lt)
>>> print structshape(d)
dict of 3 int->str
If you are having trouble keeping track of your data structures, structshape can help.
## Glossary
tuple: An immutable sequence of elements.
tuple assignment: An assignment with a sequence on the right side and a tuple of variables on the left. The right side is evaluated and then its elements are assigned to the variables on the left.
gather: The operation of assembling a variable-length argument tuple.
scatter: The operation of treating a sequence as a list of arguments.
DSU: Abbreviation of “decorate-sort-undecorate,” a pattern that involves building a list of tuples, sorting, and extracting part of the result.
data structure: A collection of related values, often organized in lists, dictionaries, tuples, etc.
shape (of a data structure): A summary of the type, size and composition of a data structure.
## Exercises
### Exercise 3
Write a function called most_frequent that takes a string and prints the letters in decreasing order of frequency. Find text samples from several different languages and see how letter frequency varies between languages. Compare your results with the tables at 'wikipedia.org/wiki/Letter_frequencies'.
### Exercise 4
More anagrams!
• Write a program that reads a word list from a file (see Section '9.1') and prints all the sets of words that are anagrams. Here is an example of what the output might look like:
['deltas', 'desalt', 'lasted', 'salted', 'slated', 'staled']
['retainers', 'ternaries']
['generating', 'greatening']
['resmelts', 'smelters', 'termless']
Hint: you might want to build a dictionary that maps from a set of letters to a list of words that can be spelled with those letters. The question is, how can you represent the set of letters in a way that can be used as a key?
• Modify the previous program so that it prints the largest set of anagrams first, followed by the second largest set, and so on.
• In Scrabble a “bingo” is when you play all seven tiles in your rack, along with a letter on the board, to form an eight-letter word. What set of 8 letters forms the most possible bingos? Hint: there are seven.
• Two words form a “metathesis pair” if you can transform one into the other by swapping two letters3; for example, “converse” and “conserve.” Write a program that finds all of the metathesis pairs in the dictionary. Hint: don’t test all pairs of words, and don’t test all possible swaps.
You can download a solution from 'thinkpython.com/code/anagram_sets.py'.
### Exercise 5
Here’s another Car Talk Puzzler4:
What is the longest English word, that remains a valid English word, as you remove its letters one at a time? Now, letters can be removed from either end, or the middle, but you can’t rearrange any of the letters. Every time you drop a letter, you wind up with another English word. If you do that, you’re eventually going to wind up with one letter and that too is going to be an English word—one that’s found in the dictionary. I want to know what’s the longest word and how many letters does it have?
I’m going to give you a little modest example: Sprite. Ok? You start off with sprite, you take a letter off, one from the interior of the word, take the r away, and we’re left with the word spite, then we take the e off the end, we’re left with spit, we take the s off, we’re left with pit, it, and I.
Write a program to find all words that can be reduced in this way, and then find the longest one.
This exercise is a little more challenging than most, so here are some suggestions:
• You might want to write a function that takes a word and computes a list of all the words that can be formed by removing one letter. These are the “children” of the word.
• Recursively, a word is reducible if any of its children are reducible. As a base case, you can consider the empty string reducible.
• The wordlist I provided, 'words.txt', doesn’t contain single letter words. So you might want to add “I”, “a”, and the empty string.
• To improve the performance of your program, you might want to memoize the words that are known to be reducible.
You can see my solution at 'thinkpython.com/code/reducible.py'.
## Notes
1. In Python 3.0, zip returns an iterator of tuples, but for most purposes, an iterator behaves like a list.
2. This behavior is slightly different in Python 3.0.
3. This exercise is inspired by an example at puzzlers.org.
4. www.cartalk.com/content/puzzler/transcripts/200651
# Case study: data structure selection
## Word frequency analysis
As usual, you should at least attempt the following exercises before you read my solutions.
### Exercise 1
Write a program that reads a file, breaks each line into words, strips whitespace and punctuation from the words, and converts them to lowercase.
Hint: The 'string' module provides strings named 'whitespace', which contains space, tab, newline, etc., and 'punctuation' which contains the punctuation characters. Let’s see if we can make Python swear:
>>> import string
>>> print string.punctuation
!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
Also, you might consider using the string methods 'strip', 'replace' and 'translate'.
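As a sketch of how those hints fit together (written in Python 3 syntax; the example word is arbitrary), stripping punctuation and whitespace from a single token might look like this:

```python
import string

# strip leading and trailing punctuation/whitespace, then lowercase
word = '"Well,"'
clean = word.strip(string.punctuation + string.whitespace).lower()
print(clean)  # -> well
```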
### Exercise 2
Modify your program from the previous exercise to read the book you downloaded, skip over the header information at the beginning of the file, and process the rest of the words as before.
Then modify the program to count the total number of words in the book, and the number of times each word is used.
Print the number of different words used in the book. Compare different books by different authors, written in different eras. Which author uses the most extensive vocabulary?
### Exercise 3
Modify the program from the previous exercise to print the 20 most frequently-used words in the book.
### Exercise 4
Modify the previous program to read a word list (see Section '9.1') and then print all the words in the book that are not in the word list. How many of them are typos? How many of them are common words that should be in the word list, and how many of them are really obscure?
## Random numbers
Given the same inputs, most computer programs generate the same outputs every time, so they are said to be deterministic. Determinism is usually a good thing, since we expect the same calculation to yield the same result. For some applications, though, we want the computer to be unpredictable. Games are an obvious example, but there are more.
Making a program truly nondeterministic turns out to be not so easy, but there are ways to make it at least seem nondeterministic. One of them is to use algorithms that generate pseudorandom numbers. Pseudorandom numbers are not truly random because they are generated by a deterministic computation, but just by looking at the numbers it is all but impossible to distinguish them from random.
The random module provides functions that generate pseudorandom numbers (which I will simply call “random” from here on).
The function random returns a random float between 0.0 and 1.0 (including 0.0 but not 1.0). Each time you call random, you get the next number in a long series. To see a sample, run this loop:
import random
for i in range(10):
    x = random.random()
    print x
The function randint takes parameters low and high and returns an integer between low and high (including both).
>>> random.randint(5, 10)
5
>>> random.randint(5, 10)
9
To choose an element from a sequence at random, you can use choice:
>>> t = [1, 2, 3]
>>> random.choice(t)
2
>>> random.choice(t)
3
The random module also provides functions to generate random values from continuous distributions including Gaussian, exponential, gamma, and a few more.
### Exercise 5
Write a function named choose_from_hist that takes a histogram as defined in Section '11.1' and returns a random value from the histogram, chosen with probability in proportion to frequency. For example, for this histogram:
''>>> t = ['a', 'a', 'b']
>>> h = histogram(t)
>>> print h
{'a': 2, 'b': 1}
''
your function should return '’a’' with probability '2/3' and '’b’' with probability '1/3'.
## Word histogram
Here is a program that reads a file and builds a histogram of the words in the file:
import string
def process_file(filename):
    h = dict()
    fp = open(filename)
    for line in fp:
        process_line(line, h)
    return h
def process_line(line, h):
    line = line.replace('-', ' ')
    for word in line.split():
        word = word.strip(string.punctuation + string.whitespace)
        word = word.lower()
        h[word] = h.get(word, 0) + 1
hist = process_file('emma.txt')
This program reads emma.txt, which contains the text of Emma by Jane Austen.
process_file loops through the lines of the file, passing them one at a time to process_line. The histogram h is being used as an accumulator.
process_line uses the string method replace to replace hyphens with spaces before using split to break the line into a list of strings. It traverses the list of words and uses strip and lower to remove punctuation and convert to lower case. (It is a shorthand to say that strings are “converted;” remember that strings are immutable, so methods like strip and lower return new strings.)
Finally, process_line updates the histogram by creating a new item or incrementing an existing one.
To count the total number of words in the file, we can add up the frequencies in the histogram:
def total_words(h):
    return sum(h.values())
The number of different words is just the number of items in the dictionary:
def different_words(h):
    return len(h)
Here is some code to print the results:
print 'Total number of words:', total_words(hist)
print 'Number of different words:', different_words(hist)
And the results:
Total number of words: 161073
Number of different words: 7212
## Most common words
To find the most common words, we can apply the DSU pattern; most_common takes a histogram and returns a list of word-frequency tuples, sorted in reverse order by frequency:
def most_common(h):
    t = []
    for key, value in h.items():
        t.append((value, key))
    t.sort(reverse=True)
    return t
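To see what most_common returns, here is a quick check on a tiny hand-made histogram (a sketch in Python 3 syntax; the data is invented):

```python
def most_common(h):
    # decorate each word with its frequency, then sort in reverse
    t = []
    for key, value in h.items():
        t.append((value, key))
    t.sort(reverse=True)
    return t

print(most_common({'a': 2, 'b': 5, 'c': 1}))  # -> [(5, 'b'), (2, 'a'), (1, 'c')]
```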
Here is a loop that prints the ten most common words:
t = most_common(hist)
print 'The most common words are:'
for freq, word in t[0:10]:
    print word, '\t', freq
And here are the results from Emma:
The most common words are:
to 5242
the 5204
and 4897
of 4293
i 3191
a 3130
it 2529
her 2483
was 2400
she 2364
## Optional parameters
We have seen built-in functions and methods that take a variable number of arguments. It is possible to write user-defined functions with optional arguments, too. For example, here is a function that prints the most common words in a histogram:
def print_most_common(hist, num=10):
    t = most_common(hist)
    print 'The most common words are:'
    for freq, word in t[0:num]:
        print word, '\t', freq
The first parameter is required; the second is optional. The default value of num is 10.
If you only provide one argument:
print_most_common(hist)
num gets the default value. If you provide two arguments:
print_most_common(hist, 20)
num gets the value of the argument instead. In other words, the optional argument overrides the default value.
If a function has both required and optional parameters, all the required parameters have to come first, followed by the optional ones.
## Dictionary subtraction
Finding the words from the book that are not in the word list from words.txt is a problem you might recognize as set subtraction; that is, we want to find all the words from one set (the words in the book) that are not in another set (the words in the list).
subtract takes dictionaries d1 and d2 and returns a new dictionary that contains all the keys from d1 that are not in d2. Since we don’t really care about the values, we set them all to None.
def subtract(d1, d2):
    res = dict()
    for key in d1:
        if key not in d2:
            res[key] = None
    return res
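A quick check of subtract on two small invented dictionaries (a sketch in Python 3 syntax):

```python
def subtract(d1, d2):
    # keep the keys of d1 that do not appear in d2
    res = dict()
    for key in d1:
        if key not in d2:
            res[key] = None
    return res

print(subtract({'a': 1, 'b': 2}, {'a': 0}))  # -> {'b': None}
```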
To find the words in the book that are not in words.txt, we can use process_file to build a histogram for words.txt, and then subtract:
words = process_file('words.txt')
diff = subtract(hist, words)
print "The words in the book that aren't in the word list are:"
for word in diff.keys():
    print word,
Here are some of the results from Emma:
The words in the book that aren't in the word list are:
rencontre jane's blanche woodhouses disingenuousness
friend's venice apartment ...
Some of these words are names and possessives. Others, like “rencontre,” are no longer in common use. But a few are common words that should really be in the list!
### Exercise 6
Python provides a data structure called 'set' that provides many common set operations. Read the documentation at 'docs.python.org/lib/types-set.html' and write a program that uses set subtraction to find words in the book that are not in the word list.
## Random words
To choose a random word from the histogram, the simplest algorithm is to build a list with multiple copies of each word, according to the observed frequency, and then choose from the list:
def random_word(h):
    t = []
    for word, freq in h.items():
        t.extend([word] * freq)
    return random.choice(t)
The expression [word] * freq creates a list with freq copies of the string word. The extend method is similar to append except that the argument is a sequence.
### Exercise 7
This algorithm works, but it is not very efficient; each time you choose a random word, it rebuilds the list, which is as big as the original book. An obvious improvement is to build the list once and then make multiple selections, but the list is still big.
An alternative is:
• Use 'keys' to get a list of the words in the book.
• Build a list that contains the cumulative sum of the word frequencies (see Exercise '10.1'). The last item in this list is the total number of words in the book, 'n'.
• Choose a random number from 1 to 'n'. Use a bisection search (see Exercise '10.8') to find the index where the random number would be inserted in the cumulative sum.
• Use the index to find the corresponding word in the word list.
Write a program that uses this algorithm to choose a random word from the book.
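If you want to check your approach afterwards, here is one possible shape of the cumulative-sum algorithm using the standard bisect module (a sketch in Python 3 syntax; choose_word is a name I made up, not from the text):

```python
import random
from bisect import bisect

def choose_word(hist):
    # build the word list and the cumulative frequency sums
    words = list(hist.keys())
    cum = []
    total = 0
    for w in words:
        total += hist[w]
        cum.append(total)
    # pick a number from 1 to n and locate its word by bisection
    x = random.randint(1, total)
    return words[bisect(cum, x - 1)]

print(choose_word({'a': 2, 'b': 1}))  # prints 'a' about twice as often as 'b'
```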
## Markov analysis
If you choose words from the book at random, you can get a sense of the vocabulary, but you probably won’t get a sentence:
this the small regard harriet which knightley's it most things
A series of random words seldom makes sense because there is no relationship between successive words. For example, in a real sentence you would expect an article like “the” to be followed by an adjective or a noun, and probably not a verb or adverb.
One way to measure these kinds of relationships is Markov analysis, which characterizes, for a given sequence of words, the probability of the word that comes next. For example, the song Eric, the Half a Bee begins:
Half a bee, philosophically,
Must, ipso facto, half not be.
But half the bee has got to be
Vis a vis, its entity. D’you see?
But can a bee be said to be
Or not to be an entire bee
When half the bee is not a bee
Due to some ancient injury?
In this text, the phrase “half the” is always followed by the word “bee,” but the phrase “the bee” might be followed by either “has” or “is”.
The result of Markov analysis is a mapping from each prefix (like “half the” and “the bee”) to all possible suffixes (like “has” and “is”).
Given this mapping, you can generate a random text by starting with any prefix and choosing at random from the possible suffixes. Next, you can combine the end of the prefix and the new suffix to form the next prefix, and repeat.
For example, if you start with the prefix “Half a,” then the next word has to be “bee,” because the prefix only appears once in the text. The next prefix is “a bee,” so the next suffix might be “philosophically,” “be” or “due.”
In this example the length of the prefix is always two, but you can do Markov analysis with any prefix length. The length of the prefix is called the “order” of the analysis.
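One possible shape of the analysis step, before you read the exercise below (a sketch in Python 3 syntax; markov_map is a name I made up, and the suffix collection here is a plain list):

```python
def markov_map(words, order=2):
    # map each prefix tuple to the list of words that follow it
    m = {}
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        m.setdefault(prefix, []).append(words[i + order])
    return m

text = 'half a bee philosophically must ipso facto half not be'.split()
m = markov_map(text)
print(m[('half', 'a')])  # -> ['bee']
```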
### Exercise 8
Markov analysis:
• Write a program to read a text from a file and perform Markov analysis. The result should be a dictionary that maps from prefixes to a collection of possible suffixes. The collection might be a list, tuple, or dictionary; it is up to you to make an appropriate choice. You can test your program with prefix length two, but you should write the program in a way that makes it easy to try other lengths.
• Add a function to the previous program to generate random text based on the Markov analysis. Here is an example from Emma with prefix length 2:
He was very clever, be it sweetness or be angry, ashamed or only
amused, at such a stroke. She had never thought of Hannah till you were never meant for me?" "I cannot make speeches, Emma:" he soon cut it all himself.
For this example, I left the punctuation attached to the words. The result is almost syntactically correct, but not quite. Semantically, it almost makes sense, but not quite.
• What happens if you increase the prefix length? Does the random text make more sense?
• Once your program is working, you might want to try a mash-up: if you analyze text from two or more books, the random text you generate will blend the vocabulary and phrases from the sources in interesting ways.
## Data structures
Using Markov analysis to generate random text is fun, but there is also a point to this exercise: data structure selection. In your solution to the previous exercises, you had to choose:
• How to represent the prefixes.
• How to represent the collection of possible suffixes.
• How to represent the mapping from each prefix to the collection of possible suffixes.
Ok, the last one is easy; the only mapping type we have seen is a dictionary, so it is the natural choice.
For the prefixes, the most obvious options are string, list of strings, or tuple of strings. For the suffixes, one option is a list; another is a histogram (dictionary).
How should you choose? The first step is to think about the operations you will need to implement for each data structure. For the prefixes, we need to be able to remove words from the beginning and add to the end. For example, if the current prefix is “Half a,” and the next word is “bee,” you need to be able to form the next prefix, “a bee.”
Your first choice might be a list, since it is easy to add and remove elements, but we also need to be able to use the prefixes as keys in a dictionary, so that rules out lists. With tuples, you can’t append or remove, but you can use the addition operator to form a new tuple:
def shift(prefix, word):
    return prefix[1:] + (word,)
shift takes a tuple of words, prefix, and a string, word, and forms a new tuple that has all the words in prefix except the first, and word added to the end.
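For example (a sketch in Python 3 syntax):

```python
def shift(prefix, word):
    # drop the first word of the prefix and append the new word
    return prefix[1:] + (word,)

print(shift(('Half', 'a'), 'bee'))  # -> ('a', 'bee')
```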
For the collection of suffixes, the operations we need to perform include adding a new suffix (or increasing the frequency of an existing one), and choosing a random suffix.
Adding a new suffix is equally easy for the list implementation or the histogram. Choosing a random element from a list is easy; choosing from a histogram is harder to do efficiently (see Exercise 13.7).
So far we have been talking mostly about ease of implementation, but there are other factors to consider in choosing data structures. One is run time. Sometimes there is a theoretical reason to expect one data structure to be faster than another; for example, I mentioned that the in operator is faster for dictionaries than for lists, at least when the number of elements is large.
But often you don’t know ahead of time which implementation will be faster. One option is to implement both of them and see which is better. This approach is called benchmarking. A practical alternative is to choose the data structure that is easiest to implement, and then see if it is fast enough for the intended application. If so, there is no need to go on. If not, there are tools, like the profile module, that can identify the places in a program that take the most time.
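As a minimal benchmarking sketch with the standard timeit module (Python 3 syntax; the sizes and repeat counts are arbitrary choices, and the absolute times depend on your machine):

```python
import timeit

# compare membership tests in a list and in a dict of the same words
setup = "words = ['banana'] * 1000 + ['kiwi']"
t_list = timeit.timeit("'kiwi' in words", setup=setup, number=1000)
t_dict = timeit.timeit("'kiwi' in d",
                       setup=setup + "; d = dict.fromkeys(words)",
                       number=1000)
print('list:', t_list, 'dict:', t_dict)
```

On most runs the dict lookup is much faster, which matches the claim about the in operator above.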
The other factor to consider is storage space. For example, using a histogram for the collection of suffixes might take less space because you only have to store each word once, no matter how many times it appears in the text. In some cases, saving space can also make your program run faster, and in the extreme, your program might not run at all if you run out of memory. But for many applications, space is a secondary consideration after run time.
One final thought: in this discussion, I have implied that we should use one data structure for both analysis and generation. But since these are separate phases, it would also be possible to use one structure for analysis and then convert to another structure for generation. This would be a net win if the time saved during generation exceeded the time spent in conversion.
## Debugging
When you are debugging a program, and especially if you are working on a hard bug, there are four things to try:
reading:
Examine your code, read it back to yourself, and check that it says what you meant to say.
running:
Experiment by making changes and running different versions. Often if you display the right thing at the right place in the program, the problem becomes obvious, but sometimes you have to spend some time to build scaffolding.
ruminating:
Take some time to think! What kind of error is it: syntax, runtime, semantic? What information can you get from the error messages, or from the output of the program? What kind of error could cause the problem you’re seeing? What did you change last, before the problem appeared?
retreating:
At some point, the best thing to do is back off, undoing recent changes, until you get back to a program that works and that you understand. Then you can start rebuilding.
Beginning programmers sometimes get stuck on one of these activities and forget the others. Each activity comes with its own failure mode.
For example, reading your code might help if the problem is a typographical error, but not if the problem is a conceptual misunderstanding. If you don’t understand what your program does, you can read it 100 times and never see the error, because the error is in your head.
Running experiments can help, especially if you run small, simple tests. But if you run experiments without thinking or reading your code, you might fall into a pattern I call “random walk programming,” which is the process of making random changes until the program does the right thing. Needless to say, random walk programming can take a long time.
You have to take time to think. Debugging is like an experimental science. You should have at least one hypothesis about what the problem is. If there are two or more possibilities, try to think of a test that would eliminate one of them.
Taking a break helps with the thinking. So does talking. If you explain the problem to someone else (or even yourself), you will sometimes find the answer before you finish asking the question.
But even the best debugging techniques will fail if there are too many errors, or if the code you are trying to fix is too big and complicated. Sometimes the best option is to retreat, simplifying the program until you get to something that works and that you understand.
Beginning programmers are often reluctant to retreat because they can’t stand to delete a line of code (even if it’s wrong). If it makes you feel better, copy your program into another file before you start stripping it down. Then you can paste the pieces back in a little bit at a time.
Finding a hard bug requires reading, running, ruminating, and sometimes retreating. If you get stuck on one of these activities, try the others.
## Glossary
deterministic:
Pertaining to a program that does the same thing each time it runs, given the same inputs.
pseudorandom:
Pertaining to a sequence of numbers that appear to be random, but are generated by a deterministic program.
default value:
The value given to an optional parameter if no argument is provided.
override:
To replace a default value with an argument.
benchmarking:
The process of choosing between data structures by implementing alternatives and testing them on a sample of the possible inputs.
## Exercises
### Exercise 9
The “rank” of a word is its position in a list of words sorted by frequency: the most common word has rank 1, the second most common has rank 2, etc.
Zipf’s law describes a relationship between the ranks and frequencies of words in natural languages1. Specifically, it predicts that the frequency, 'f', of the word with rank 'r' is:

${\displaystyle f=cr^{-s}}$

The Ackermann function3, A(m, n), is defined:

${\displaystyle A(m,n)={\begin{cases}n+1&{\mbox{if }}m=0\\A(m-1,1)&{\mbox{if }}m>0{\mbox{ and }}n=0\\A(m-1,A(m,n-1))&{\mbox{if }}m>0{\mbox{ and }}n>0\end{cases}}}$

Write a function named 'ack' that evaluates Ackermann’s function. Use your function to evaluate 'ack(3, 4)', which should be 125. What happens for larger values of 'm' and 'n'?
### Exercise 6
A palindrome is a word that is spelled the same backward and forward, like “noon” and “redivider”. Recursively, a word is a palindrome if the first and last letters are the same and the middle is a palindrome.
The following are functions that take a string argument and return the first, last, and middle letters:
def first(word):
return word[0]
def last(word):
return word[-1]
def middle(word):
return word[1:-1]
We’ll see how they work in Chapter '8'.
• Type these functions into a file named 'palindrome.py'
and test them out. What happens if you call 'middle' with a string with two letters? One letter? What about the empty string, which is written '' and contains no letters?
• Write a function called is_palindrome that takes
a string argument and returns 'True' if it is a palindrome and 'False' otherwise. Remember that you can use the built-in function 'len' to check the length of a string.
### Exercise 7
A number, 'a', is a power of 'b' if it is divisible by 'b' and 'a/b' is a power of 'b'. Write a function called is_power that takes parameters 'a' and 'b' and returns 'True' if 'a' is a power of 'b'.
### Exercise 8
The greatest common divisor (GCD) of 'a' and 'b' is the largest number that divides both of them with no remainder[4].
One way to find the GCD of two numbers is Euclid’s algorithm, which is based on the observation that if 'r' is the remainder when 'a' is divided by 'b', then 'gcd(a, b) = gcd(b, r)'. As a base case, we can consider 'gcd(a, 0) = a'.
Write a function called gcd that takes parameters 'a' and 'b' and returns their greatest common divisor. If you need help, see 'wikipedia.org/wiki/Euclidean_algorithm'.
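The recurrence gcd(a, b) = gcd(b, r) translates almost directly into code; one possible sketch:

```python
def gcd(a, b):
    # Base case: gcd(a, 0) = a.
    if b == 0:
        return a
    # Otherwise, recurse on b and the remainder of a divided by b.
    return gcd(b, a % b)

print(gcd(36, 24))   # 12
```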
## Notes
1. See wikipedia.org/wiki/Fibonacci_number.
2. See wikipedia.org/wiki/Gamma_function.
3. See wikipedia.org/wiki/Ackermann_function.
4. This exercise is based on an example from Abelson and Sussman’s Structure and Interpretation of Computer Programs.
# Iteration
## Multiple assignment
As you may have discovered, it is legal to make more than one assignment to the same variable. A new assignment makes an existing variable refer to a new value (and stop referring to the old value).
bruce = 5
print bruce,
bruce = 7
print bruce
The output of this program is 5 7, because the first time bruce is printed, its value is 5, and the second time, its value is 7. The comma at the end of the first print statement suppresses the newline, which is why both outputs appear on the same line.
Here is what multiple assignment looks like in a state diagram:
<IMG SRC="book010.png">
With multiple assignment it is especially important to distinguish between an assignment operation and a statement of equality. Because Python uses the equal sign (=) for assignment, it is tempting to interpret a statement like a = b as a statement of equality. It is not!
First, equality is a symmetric relation and assignment is not. For example, in mathematics, if a = 7 then 7 = a. But in Python, the statement a = 7 is legal and 7 = a is not.
Furthermore, in mathematics, a statement of equality is either true or false, for all time. If a = b now, then a will always equal b. In Python, an assignment statement can make two variables equal, but they don’t have to stay that way:
a = 5
b = a # a and b are now equal
a = 3 # a and b are no longer equal
The third line changes the value of a but does not change the value of b, so they are no longer equal.
Although multiple assignment is frequently helpful, you should use it with caution. If the values of variables change frequently, it can make the code difficult to read and debug.
## Updating variables
One of the most common forms of multiple assignment is an update, where the new value of the variable depends on the old.
x = x+1
This means “get the current value of x, add one, and then update x with the new value.”
If you try to update a variable that doesn’t exist, you get an error, because Python evaluates the right side before it assigns a value to x:
>>> x = x+1
NameError: name 'x' is not defined
Before you can update a variable, you have to initialize it, usually with a simple assignment:
>>> x = 0
>>> x = x+1
Updating a variable by adding 1 is called an increment; subtracting 1 is called a decrement.
## The while statement
Computers are often used to automate repetitive tasks. Repeating identical or similar tasks without making errors is something that computers do well and people do poorly.
We have seen two programs, countdown and print_n, that use recursion to perform repetition, which is also called iteration. Because iteration is so common, Python provides several language features to make it easier. One is the for statement we saw in Section 4.2. We’ll get back to that later.
Another is the while statement. Here is a version of countdown that uses a while statement:
def countdown(n):
while n > 0:
print n
n = n-1
print 'Blastoff!'
You can almost read the while statement as if it were English. It means, “While n is greater than 0, display the value of n and then reduce the value of n by 1. When you get to 0, display the word Blastoff!”
More formally, here is the flow of execution for a while statement:
• Evaluate the condition, yielding True or False.
• If the condition is false, exit the while statement and continue execution at the next statement.
• If the condition is true, execute the body and then go back to step 1.
This type of flow is called a loop because the third step loops back around to the top.
The body of the loop should change the value of one or more variables so that eventually the condition becomes false and the loop terminates. Otherwise the loop will repeat forever, which is called an infinite loop. An endless source of amusement for computer scientists is the observation that the directions on shampoo, “Lather, rinse, repeat,” are an infinite loop.
In the case of countdown, we can prove that the loop terminates because we know that the value of n is finite, and we can see that the value of n gets smaller each time through the loop, so eventually we have to get to 0. In other cases, it is not so easy to tell:
def sequence(n):
while n != 1:
print n,
if n%2 == 0: # n is even
n = n/2
else: # n is odd
n = n*3+1
The condition for this loop is n != 1, so the loop will continue until n is 1, which makes the condition false.
Each time through the loop, the program outputs the value of n and then checks whether it is even or odd. If it is even, n is divided by 2. If it is odd, the value of n is replaced with n*3+1. For example, if the argument passed to sequence is 3, the resulting sequence is 3, 10, 5, 16, 8, 4, 2, 1.
Since n sometimes increases and sometimes decreases, there is no obvious proof that n will ever reach 1, or that the program terminates. For some particular values of n, we can prove termination. For example, if the starting value is a power of two, then the value of n will be even each time through the loop until it reaches 1. The previous example ends with such a sequence, starting with 16.
The hard question is whether we can prove that this program terminates for all positive values of n. So far, no one has been able to prove it or disprove it!
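A variant of sequence that collects the values in a list instead of printing them makes the behavior easy to inspect:

```python
def sequence_list(n):
    # Collect the values of n until the loop reaches 1.
    values = []
    while n != 1:
        values.append(n)
        if n % 2 == 0:        # n is even
            n = n // 2        # integer division keeps n an int
        else:                 # n is odd
            n = n * 3 + 1
    values.append(n)
    return values

print(sequence_list(3))   # [3, 10, 5, 16, 8, 4, 2, 1]
```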
### Exercise 1
Rewrite the function print_n from Section '5.8' using iteration instead of recursion.
## break
Sometimes you don’t know it’s time to end a loop until you get half way through the body. In that case you can use the break statement to jump out of the loop.
For example, suppose you want to take input from the user until they type done. You could write:
while True:
line = raw_input('> ')
if line == 'done':
break
print line
print 'Done!'
The loop condition is True, which is always true, so the loop runs until it hits the break statement.
Each time through, it prompts the user with an angle bracket. If the user types done, the break statement exits the loop. Otherwise the program echoes whatever the user types and goes back to the top of the loop. Here’s a sample run:
> not done
not done
> done
Done!
This way of writing while loops is common because you can check the condition anywhere in the loop (not just at the top) and you can express the stop condition affirmatively (“stop when this happens”) rather than negatively (“keep going until that happens”).
## Square roots
Loops are often used in programs that compute numerical results by starting with an approximate answer and iteratively improving it.
For example, one way of computing square roots is Newton’s method. Suppose that you want to know the square root of a. If you start with almost any estimate, x, you can compute a better estimate with the following formula:
${\displaystyle y={\frac {x+a/x}{2}}}$
For example, if a is 4 and x is 3:
>>> a = 4.0
>>> x = 3.0
>>> y = (x + a/x) / 2
>>> print y
2.16666666667
The result is closer to the correct answer (√4 = 2). If we repeat the process with the new estimate, it gets even closer:
>>> x = y
>>> y = (x + a/x) / 2
>>> print y
2.00641025641
After a few more updates, the estimate is almost exact:
>>> x = y
>>> y = (x + a/x) / 2
>>> print y
2.00001024003
>>> x = y
>>> y = (x + a/x) / 2
>>> print y
2.00000000003
In general we don’t know ahead of time how many steps it takes to get to the right answer, but we know when we get there because the estimate stops changing:
>>> x = y
>>> y = (x + a/x) / 2
>>> print y
2.0
>>> x = y
>>> y = (x + a/x) / 2
>>> print y
2.0
When y == x, we can stop. Here is a loop that starts with an initial estimate, x, and improves it until it stops changing:
while True:
print x
y = (x + a/x) / 2
if y == x:
break
x = y
For most values of a this works fine, but in general it is dangerous to test float equality. Floating-point values are only approximately right: most rational numbers, like 1/3, and irrational numbers, like √2, can’t be represented exactly with a float.
Rather than checking whether x and y are exactly equal, it is safer to use the built-in function abs to compute the absolute value, or magnitude, of the difference between them:
if abs(y-x) < epsilon:
break
Where epsilon has a value like 0.0000001 that determines how close is close enough.
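As a concrete check, here is the loop with the tolerance test in place, computing the square root of 9 (the starting value 3.5 is arbitrary):

```python
a = 9.0
x = 3.5                      # arbitrary starting estimate
epsilon = 1e-10              # how close is close enough

while True:
    y = (x + a / x) / 2      # Newton update
    if abs(y - x) < epsilon:
        break
    x = y

print(x)                     # very close to 3.0
```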
### Exercise 2
Encapsulate this loop in a function called square_root that takes 'a' as a parameter, chooses a reasonable value of 'x', and returns an estimate of the square root of 'a'.
## Algorithms
Newton’s method is an example of an algorithm: it is a mechanical process for solving a category of problems (in this case, computing square roots).
It is not easy to define an algorithm. It might help to start with something that is not an algorithm. When you learned to multiply single-digit numbers, you probably memorized the multiplication table. In effect, you memorized 100 specific solutions. That kind of knowledge is not algorithmic.
But if you were “lazy,” you probably cheated by learning a few tricks. For example, to find the product of n and 9, you can write n−1 as the first digit and 10−n as the second digit. This trick is a general solution for multiplying any single-digit number by 9. That’s an algorithm!
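The 9-times trick can be checked mechanically, which is part of what makes it an algorithm:

```python
# For n from 1 to 9, the digits (n-1) and (10-n) should spell out 9*n.
for n in range(1, 10):
    product = 10 * (n - 1) + (10 - n)
    print(n, product, 9 * n)
    assert product == 9 * n
```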
Similarly, the techniques you learned for addition with carrying, subtraction with borrowing, and long division are all algorithms. One of the characteristics of algorithms is that they do not require any intelligence to carry out. They are mechanical processes in which each step follows from the last according to a simple set of rules.
In my opinion, it is embarrassing that humans spend so much time in school learning to execute algorithms that, quite literally, require no intelligence.
On the other hand, the process of designing algorithms is interesting, intellectually challenging, and a central part of what we call programming.
Some of the things that people do naturally, without difficulty or conscious thought, are the hardest to express algorithmically. Understanding natural language is a good example. We all do it, but so far no one has been able to explain how we do it, at least not in the form of an algorithm.
## Debugging
As you start writing bigger programs, you might find yourself spending more time debugging. More code means more chances to make an error and more place for bugs to hide.
One way to cut your debugging time is “debugging by bisection.” For example, if there are 100 lines in your program and you check them one at a time, it would take 100 steps.
Instead, try to break the problem in half. Look at the middle of the program, or near it, for an intermediate value you can check. Add a print statement (or something else that has a verifiable effect) and run the program.
If the mid-point check is incorrect, the problem must be in the first half of the program. If it is correct, the problem is in the second half.
Every time you perform a check like this, you halve the number of lines you have to search. After six steps (which is much less than 100), you would be down to one or two lines of code, at least in theory.
In practice it is not always clear what the “middle of the program” is and not always possible to check it. It doesn’t make sense to count lines and find the exact midpoint. Instead, think about places in the program where there might be errors and places where it is easy to put a check. Then choose a spot where you think the chances are about the same that the bug is before or after the check.
## Glossary
multiple assignment:
Making more than one assignment to the same variable during the execution of a program.
update:
An assignment where the new value of the variable depends on the old.
initialize:
An assignment that gives an initial value to a variable that will be updated.
increment:
An update that increases the value of a variable (often by one).
decrement:
An update that decreases the value of a variable.
iteration:
Repeated execution of a set of statements using either a recursive function call or a loop.
infinite loop:
A loop in which the terminating condition is never satisfied.
## Exercises
### Exercise 3
To test the square root algorithm in this chapter, you could compare it with 'math.sqrt'. Write a function named test_square_root that prints a table like this:
1.0 1.0 1.0 0.0
2.0 1.41421356237 1.41421356237 2.22044604925e-16
3.0 1.73205080757 1.73205080757 0.0
4.0 2.0 2.0 0.0
5.0 2.2360679775 2.2360679775 0.0
6.0 2.44948974278 2.44948974278 0.0
7.0 2.64575131106 2.64575131106 0.0
8.0 2.82842712475 2.82842712475 4.4408920985e-16
9.0 3.0 3.0 0.0
The first column is a number, 'a'; the second column is the square root of 'a' computed with the function from Exercise '7.2'; the third column is the square root computed by 'math.sqrt'; the fourth column is the absolute value of the difference between the two estimates.
### Exercise 4
The built-in function 'eval' takes a string and evaluates it using the Python interpreter. For example:
>>> eval('1 + 2 * 3')
7
>>> import math
>>> eval('math.sqrt(5)')
2.2360679774997898
>>> eval('type(math.pi)')
<type 'float'>
Write a function called eval_loop that iteratively prompts the user, takes the resulting input and evaluates it using 'eval', and prints the result.
It should continue until the user enters done, and then return the value of the last expression it evaluated.
### Exercise 5
The brilliant mathematician Srinivasa Ramanujan found an infinite series2 that can be used to generate a numerical approximation of ${\displaystyle \pi }$:
${\displaystyle {\frac {1}{\pi }}={\frac {2{\sqrt {2}}}{9801}}\sum _{k=0}^{\infty }{\frac {(4k)!(1103+26390k)}{(k!)^{4}396^{4k}}}}$
Write a function called estimate_pi that uses this formula to compute and return an estimate of 'π'. It should use a 'while' loop to compute terms of the summation until the last term is smaller than '1e-15' (which is Python notation for 10^-15). You can check the result by comparing it to 'math.pi'.
You can see my solution at 'thinkpython.com/code/pi.py'.
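One way to structure the loop, sketched here rather than copied from the solution linked above, uses math.factorial for the factorials:

```python
import math

def estimate_pi():
    factor = 2 * math.sqrt(2) / 9801
    total = 0.0
    k = 0
    while True:
        num = math.factorial(4 * k) * (1103 + 26390 * k)
        den = math.factorial(k) ** 4 * 396 ** (4 * k)
        term = float(num) / den
        total += term
        if term < 1e-15:      # stop when terms no longer matter
            break
        k += 1
    return 1 / (factor * total)

print(estimate_pi())   # very close to math.pi
```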
${\displaystyle f=cr^{-s}}$
where 's' and 'c' are parameters that depend on the language and the text. If you take the logarithm of both sides of this equation, you get:
${\displaystyle \log f=\log c-s\log r}$
So if you plot log f versus log r, you should get a straight line with slope −s and intercept log c.
Write a program that reads a text from a file, counts word frequencies, and prints one line for each word, in descending order of frequency, with log f and log r. Use the graphing program of your choice to plot the results and check whether they form a straight line. Can you estimate the value of 's'?
## Notes
1. See wikipedia.org/wiki/Zipf's_law.
# Files
## Persistence
Most of the programs we have seen so far are transient in the sense that they run for a short time and produce some output, but when they end, their data disappears. If you run the program again, it starts with a clean slate.
Other programs are persistent: they run for a long time (or all the time); they keep at least some of their data in permanent storage (a hard drive, for example); and if they shut down and restart, they pick up where they left off.
Examples of persistent programs are operating systems, which run pretty much whenever a computer is on, and web servers, which run all the time, waiting for requests to come in on the network.
One of the simplest ways for programs to maintain their data is by reading and writing text files. We have already seen programs that read text files; in this chapter we will see programs that write them.
An alternative is to store the state of the program in a database. In this chapter I will present a simple database and a module, pickle, that makes it easy to store program data.
A text file is a sequence of characters stored on a permanent medium like a hard drive, flash memory, or CD-ROM. We saw how to open and read a file in Section 9.1.
To write a file, you have to open it with mode 'w' as a second parameter:
>>> fout = open('output.txt', 'w')
>>> print fout
<open file 'output.txt', mode 'w' at 0xb7eb2410>
If the file already exists, opening it in write mode clears out the old data and starts fresh, so be careful! If the file doesn’t exist, a new one is created.
The write method puts data into the file.
>>> line1 = "This here's the wattle,\n"
>>> fout.write(line1)
Again, the file object keeps track of where it is, so if you call write again, it adds the new data to the end.
>>> line2 = "the emblem of our land.\n"
>>> fout.write(line2)
When you are done writing, you have to close the file.
>>> fout.close()
## Format operator
The argument of write has to be a string, so if we want to put other values in a file, we have to convert them to strings. The easiest way to do that is with str:
>>> x = 52
>>> fout.write(str(x))
An alternative is to use the format operator, %. When applied to integers, % is the modulus operator. But when the first operand is a string, % is the format operator.
The first operand is the format string, and the second operand is a tuple of expressions. The result is a string that contains the values of the expressions, formatted according to the format string.
As an example, the format sequence '%d' means that the first expression in the tuple should be formatted as an integer (d stands for “decimal”):
>>> camels = 42
>>> '%d' % camels
'42'
The result is the string '42', which is not to be confused with the integer value 42.
A format sequence can appear anywhere in the format string, so you can embed a value in a sentence:
>>> camels = 42
>>> 'I have spotted %d camels.' % camels
'I have spotted 42 camels.'
The format sequence '%g' formats the next element in the tuple as a floating-point number (don’t ask why), and '%s' formats the next item as a string:
>>> 'In %d years I have spotted %g %s.' % (3, 0.1, 'camels')
'In 3 years I have spotted 0.1 camels.'
The number of elements in the tuple has to match the number of format sequences in the string. Also, the types of the elements have to match the format sequences:
>>> '%d %d %d' % (1, 2)
TypeError: not enough arguments for format string
>>> '%d' % 'dollars'
TypeError: illegal argument type for built-in operation
In the first example, there aren’t enough elements; in the second, the element is the wrong type.
The format operator is powerful but difficult to use. You can read more about it at docs.python.org/lib/typesseq-strings.html.
## Filenames and paths
Files are organized into directories (also called “folders”). Every running program has a “current directory,” which is the default directory for most operations. For example, when you open a file for reading, Python looks for it in the current directory.
The os module provides functions for working with files and directories (“os” stands for “operating system”). os.getcwd returns the name of the current directory:
>>> import os
>>> cwd = os.getcwd()
>>> print cwd
/home/dinsdale
cwd stands for “current working directory.” The result in this example is /home/dinsdale, which is the home directory of a user named dinsdale.
A string like cwd that identifies a file is called a path. A relative path starts from the current directory; an absolute path starts from the topmost directory in the file system.
The paths we have seen so far are simple filenames, so they are relative to the current directory. To find the absolute path to a file, you can use os.path.abspath:
>>> os.path.abspath('memo.txt')
'/home/dinsdale/memo.txt'
os.path.exists checks whether a file or directory exists:
>>> os.path.exists('memo.txt')
True
If it exists, os.path.isdir checks whether it’s a directory:
>>> os.path.isdir('memo.txt')
False
>>> os.path.isdir('music')
True
Similarly, os.path.isfile checks whether it’s a file.
os.listdir returns a list of the files (and other directories) in the given directory:
>>> os.listdir(cwd)
['music', 'photos', 'memo.txt']
To demonstrate these functions, the following example “walks” through a directory, prints the names of all the files, and calls itself recursively on all the directories.
def walk(dir):
for name in os.listdir(dir):
path = os.path.join(dir, name)
if os.path.isfile(path):
print path
else:
walk(path)
os.path.join takes a directory and a file name and joins them into a complete path.
### Exercise 1
Modify 'walk' so that instead of printing the names of the files, it returns a list of names.
### Exercise 2
The 'os' module provides a function called 'walk' that is similar to this one but more versatile. Read the documentation and use it to print the names of the files in a given directory and its subdirectories.
## Catching exceptions
A lot of things can go wrong when you try to read and write files. If you try to open a file that doesn’t exist, you get an IOError:
>>> fin = open('bad_file')
IOError: [Errno 2] No such file or directory: 'bad_file'
If you don’t have permission to access a file:
>>> fout = open('/etc/passwd', 'w')
IOError: [Errno 13] Permission denied: '/etc/passwd'
And if you try to open a directory for reading, you get:
>>> fin = open('/home')
IOError: [Errno 21] Is a directory
To avoid these errors, you could use functions like os.path.exists and os.path.isfile, but it would take a lot of time and code to check all the possibilities (if “Errno 21” is any indication, there are at least 21 things that can go wrong).
It is better to go ahead and try, and deal with problems if they happen, which is exactly what the try statement does. The syntax is similar to an if statement:
try:
for line in fin:
print line
fin.close()
except:
print 'Something went wrong.'
Python starts by executing the try clause. If all goes well, it skips the except clause and proceeds. If an exception occurs, it jumps out of the try clause and executes the except clause.
Handling an exception with a try statement is called catching an exception. In this example, the except clause prints an error message that is not very helpful. In general, catching an exception gives you a chance to fix the problem, or try again, or at least end the program gracefully.
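The same pattern can wrap file access in a small helper; this sketch (the name can_read is my own, not part of the standard library) returns False instead of crashing:

```python
def can_read(path):
    # Try to open the file; report failure instead of raising.
    try:
        f = open(path)
        f.close()
        return True
    except IOError:
        return False

print(can_read('no_such_file_xyz_123.txt'))   # False
```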
## Databases
A database is a file that is organized for storing data. Most databases are organized like a dictionary in the sense that they map from keys to values. The biggest difference is that the database is on disk (or other permanent storage), so it persists after the program ends.
The module anydbm provides an interface for creating and updating database files. As an example, I’ll create a database that contains captions for image files.
Opening a database is similar to opening other files:
>>> import anydbm
>>> db = anydbm.open('captions.db', 'c')
The mode 'c' means that the database should be created if it doesn’t already exist. The result is a database object that can be used (for most operations) like a dictionary. If you create a new item, anydbm updates the database file.
>>> db['cleese.png'] = 'Photo of John Cleese.'
When you access one of the items, anydbm reads the file:
>>> print db['cleese.png']
Photo of John Cleese.
If you make another assignment to an existing key, anydbm replaces the old value:
>>> db['cleese.png'] = 'Photo of John Cleese doing a silly walk.'
>>> print db['cleese.png']
Photo of John Cleese doing a silly walk.
Many dictionary methods, like keys and items, also work with database objects. So does iteration with a for statement.
for key in db:
print key
As with other files, you should close the database when you are done:
>>> db.close()
## Pickling
A limitation of anydbm is that the keys and values have to be strings. If you try to use any other type, you get an error.
The pickle module can help. It translates almost any type of object into a string suitable for storage in a database, and then translates strings back into objects.
pickle.dumps takes an object as a parameter and returns a string representation (dumps is short for “dump string”):
>>> import pickle
>>> t = [1, 2, 3]
>>> pickle.dumps(t)
'(lp0\nI1\naI2\naI3\na.'
The format isn’t obvious to human readers; it is meant to be easy for pickle to interpret. pickle.loads (“load string”) reconstitutes the object:
>>> t1 = [1, 2, 3]
>>> s = pickle.dumps(t1)
>>> t2 = pickle.loads(s)
>>> print t2
[1, 2, 3]
Although the new object has the same value as the old, it is not (in general) the same object:
>>> t1 == t2
True
>>> t1 is t2
False
In other words, pickling and then unpickling has the same effect as copying the object.
You can use pickle to store non-strings in a database. In fact, this combination is so common that it has been encapsulated in a module called shelve.
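The idea behind this combination can be sketched with an ordinary dictionary standing in for the database object (which itself only accepts strings):

```python
import pickle

db = {}   # stand-in for an anydbm database: keys map to pickled values

# Store a non-string value by pickling it first.
db['numbers'] = pickle.dumps([1, 2, 3])

# Read it back by unpickling.
t = pickle.loads(db['numbers'])
print(t)   # [1, 2, 3]
```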
### Exercise 3
If you did Exercise '12.4', modify your solution so that it creates a database that maps from each word in the list to a list of words that use the same set of letters.
Write a different program that opens the database and prints the contents in a human-readable format.
## Pipes
Most operating systems provide a command-line interface, also known as a shell. Shells usually provide commands to navigate the file system and launch applications. For example, in Unix, you can change directories with cd, display the contents of a directory with ls, and launch a web browser by typing (for example) firefox.
Any program that you can launch from the shell can also be launched from Python using a pipe. A pipe is an object that represents a running process.
For example, the Unix command ls -l normally displays the contents of the current directory (in long format). You can launch ls with os.popen:
>>> cmd = 'ls -l'
>>> fp = os.popen(cmd)
The argument is a string that contains a shell command. The return value is a file pointer that behaves just like an open file. You can read the output from the ls process one line at a time with readline or get the whole thing at once with read:
>>> res = fp.read()
When you are done, you close the pipe like a file:
>>> stat = fp.close()
>>> print stat
None
The return value is the final status of the ls process; None means that it ended normally (with no errors).
A common use of pipes is to read a compressed file incrementally; that is, without uncompressing the whole thing at once. The following function takes the name of a compressed file as a parameter and returns a pipe that uses gzip to decompress the contents:
def open_gzip(filename):
cmd = 'gunzip -c ' + filename
fp = os.popen(cmd)
return fp
If you read lines from fp one at a time, you never have to store the uncompressed file in memory or on disk.
## Writing modules
Any file that contains Python code can be imported as a module. For example, suppose you have a file named wc.py with the following code:
def linecount(filename):
    count = 0
    for line in open(filename):
        count += 1
    return count

print linecount('wc.py')
If you run this program, it reads itself and prints the number of lines in the file, which is 7. You can also import it like this:
>>> import wc
7
Now you have a module object wc:
>>> print wc
<module 'wc' from 'wc.py'>
That provides a function called linecount:
>>> wc.linecount('wc.py')
7
So that’s how you write modules in Python.
The only problem with this example is that when you import the module it executes the test code at the bottom. Normally when you import a module, it defines new functions but it doesn’t execute them.
Programs that will be imported as modules often use the following idiom:
if __name__ == '__main__':
    print linecount('wc.py')
__name__ is a built-in variable that is set when the program starts. If the program is running as a script, __name__ has the value '__main__'; in that case, the test code is executed. Otherwise, if the module is being imported, the test code is skipped.
Exercise 4
Type this example into a file named 'wc.py' and run it as a script. Then run the Python interpreter and 'import wc'. What is the value of __name__ when the module is being imported? Warning: If you import a module that has already been imported, Python does nothing. It does not re-read the file, even if it has changed.
If you want to reload a module, you can use the built-in function 'reload', but it can be tricky, so the safest thing to do is restart the interpreter and then import the module again.
## Debugging
When you are reading and writing files, you might run into problems with whitespace. These errors can be hard to debug because spaces, tabs and newlines are normally invisible:
>>> s = '1 2\t 3\n 4'
>>> print s
1 2      3
 4
The built-in function repr can help. It takes any object as an argument and returns a string representation of the object. For strings, it represents whitespace characters with backslash sequences:
>>> print repr(s)
'1 2\t 3\n 4'
This can be helpful for debugging.
One other problem you might run into is that different systems use different characters to indicate the end of a line. Some systems use a newline, represented \n. Others use a return character, represented \r. Some use both. If you move files between different systems, these inconsistencies might cause problems.
For most systems, there are applications to convert from one format to another. You can find them (and read more about this issue) at wikipedia.org/wiki/Newline. Or, of course, you could write one yourself.
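Writing one yourself takes only a few lines. Here is a minimal sketch (the function name convert_newlines is made up) that normalizes both conventions to Unix newlines; note that \r\n must be replaced before bare \r, or the pair would be turned into two newlines:

```python
def convert_newlines(text):
    # Replace Windows-style \r\n first, then any remaining bare \r.
    return text.replace('\r\n', '\n').replace('\r', '\n')

assert convert_newlines('a\r\nb\rc\n') == 'a\nb\nc\n'
```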
## Glossary
persistent:
Pertaining to a program that runs indefinitely and keeps at least some of its data in permanent storage.
format operator:
An operator, %, that takes a format string and a tuple and generates a string that includes the elements of the tuple formatted as specified by the format string.
format string:
A string, used with the format operator, that contains format sequences.
format sequence:
A sequence of characters in a format string, like %d, that specifies how a value should be formatted.
text file:
A sequence of characters stored in permanent storage like a hard drive.
directory:
A named collection of files, also called a folder.
path:
A string that identifies a file.
relative path:
A path that starts from the current directory.
absolute path:
A path that starts from the topmost directory in the file system.
catch:
To prevent an exception from terminating a program using the try and except statements.
database:
A file whose contents are organized like a dictionary with keys that correspond to values.
## Exercises
Exercise 5
import urllib
conn = urllib.urlopen('http://thinkpython.com/secret.html')
for line in conn.fp:
    print line.strip()
Run this code and follow the instructions you see there.
Exercise 6
In a large collection of MP3 files, there may be more than one copy of the same song, stored in different directories or with different file names. The goal of this exercise is to search for these duplicates.
• Write a program that searches a directory and all of its
subdirectories, recursively, and returns a list of complete paths for all files with a given suffix (like '.mp3'). Hint: 'os.path' provides several useful functions for manipulating file and path names.
• To recognize duplicates, you can use a hash function that
reads the file and generates a short summary of the contents. For example, MD5 (Message-Digest algorithm 5) takes an arbitrarily-long “message” and returns a 128-bit “checksum.” The probability is very small that two files with different contents will return the same checksum. You can read about MD5 at 'wikipedia.org/wiki/Md5'. On a Unix system you can use the program 'md5sum' and a pipe to compute checksums from Python.
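As an alternative to running md5sum through a pipe, the standard hashlib module computes the same checksum in pure Python. This is a sketch (the helper name file_checksum is made up) that reads the file in blocks so that even large files fit in memory:

```python
import hashlib
import os
import tempfile

def file_checksum(path):
    """Return the MD5 hex digest of a file's contents."""
    m = hashlib.md5()
    f = open(path, 'rb')
    while True:
        chunk = f.read(65536)   # read in blocks so large files fit in memory
        if not chunk:
            break
        m.update(chunk)
    f.close()
    return m.hexdigest()

# Demonstration on a small temporary file.
path = os.path.join(tempfile.mkdtemp(), 'demo.bin')
f = open(path, 'wb')
f.write(b'hello')
f.close()
assert file_checksum(path) == '5d41402abc4b2a76b9719d911017c592'
```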
Exercise 7
The Internet Movie Database (IMDb) is an online collection of information about movies. Their database is available in plain text format, so it is reasonably easy to read from Python. For this exercise, the files you need are 'actors.list.gz' and 'actresses.list.gz'; you can download them from 'www.imdb.com/interfaces#plain'.
I have written a program that parses these files and splits them into actor names, movie titles, etc. You can download it from 'thinkpython.com/code/imdb.py'.
If you run 'imdb.py' as a script, it reads 'actors.list.gz' and prints one actor-movie pair per line. Or, if you 'import imdb' you can use the function process_file to, well, process the file. The arguments are a filename, a function object and an optional number of lines to process. Here is an example:
import imdb

def print_info(actor, date, title, role):
    print actor, date, title, role

imdb.process_file('actors.list.gz', print_info)
When you call process_file, it opens 'filename', reads the contents, and calls print_info once for each line in the file. print_info takes an actor, date, movie title and role as arguments and prints them.
• Write a program that reads 'actors.list.gz' and 'actresses.list.gz' and uses 'shelve' to build a database
that maps from each actor to a list of his or her films.
• Two actors are “costars” if they have been in at least one
movie together. Process the database you built in the previous step and build a second database that maps from each actor to a list of his or her costars.
• Write a program that can play the “Six Degrees of Kevin
Bacon,” which you can read about at 'wikipedia.org/wiki/Six_Degrees_of_Kevin_Bacon'. This problem is challenging because it requires you to find the shortest path in a graph. You can read about shortest path algorithms at 'wikipedia.org/wiki/Shortest_path_problem'.
# Classes and objects
## User-defined types
We have used many of Python’s built-in types; now we are going to define a new type. As an example, we will create a type called Point that represents a point in two-dimensional space.
In mathematical notation, points are often written in parentheses with a comma separating the coordinates. For example, (0, 0) represents the origin, and (x, y) represents the point x units to the right and y units up from the origin.
There are several ways we might represent points in Python:
• We could store the coordinates separately in two variables, x and y.
• We could store the coordinates as elements in a list or tuple.
• We could create a new type to represent points as objects.
Creating a new type is (a little) more complicated than the other options, but it has advantages that will be apparent soon.
A user-defined type is also called a class. A class definition looks like this:
class Point(object):
    """represents a point in 2-D space"""
This header indicates that the new class is a Point, which is a kind of object, which is a built-in type.
The body is a docstring that explains what the class is for. You can define variables and functions inside a class definition, but we will get back to that later.
Defining a class named Point creates a class object.
>>> print Point
<class '__main__.Point'>
Because Point is defined at the top level, its “full name” is __main__.Point.
The class object is like a factory for creating objects. To create a Point, you call Point as if it were a function.
>>> blank = Point()
>>> print blank
<__main__.Point object at 0xb7e9d3ac>
The return value is a reference to a Point object, which we assign to blank. Creating a new object is called instantiation, and the object is an instance of the class.
When you print an instance, Python tells you what class it belongs to and where it is stored in memory (the prefix 0x means that the following number is in hexadecimal).
## Attributes
You can assign values to an instance using dot notation:
>>> blank.x = 3.0
>>> blank.y = 4.0
This syntax is similar to the syntax for selecting a variable from a module, such as math.pi or string.whitespace. In this case, though, we are assigning values to named elements of an object. These elements are called attributes.
As a noun, “AT-trib-ute” is pronounced with emphasis on the first syllable, as opposed to “a-TRIB-ute,” which is a verb.
The following diagram shows the result of these assignments. A state diagram that shows an object and its attributes is called an object diagram:
File:Book022.png
The variable blank refers to a Point object, which contains two attributes. Each attribute refers to a floating-point number.
You can read the value of an attribute using the same syntax:
>>> print blank.y
4.0
>>> x = blank.x
>>> print x
3.0
The expression blank.x means, “Go to the object blank refers to and get the value of x.” In this case, we assign that value to a variable named x. There is no conflict between the variable x and the attribute x.
You can use dot notation as part of any expression. For example:
>>> print '(%g, %g)' % (blank.x, blank.y)
(3.0, 4.0)
>>> distance = math.sqrt(blank.x**2 + blank.y**2)
>>> print distance
5.0
You can pass an instance as an argument in the usual way. For example:
def print_point(p):
    print '(%g, %g)' % (p.x, p.y)
print_point takes a point as an argument and displays it in mathematical notation. To invoke it, you can pass blank as an argument:
>>> print_point(blank)
(3.0, 4.0)
Inside the function, p is an alias for blank, so if the function modifies p, blank changes.
### Exercise 1
Write a function called 'distance' that takes two Points as arguments and returns the distance between them.
## Rectangles
Sometimes it is obvious what the attributes of an object should be, but other times you have to make decisions. For example, imagine you are designing a class to represent rectangles. What attributes would you use to specify the location and size of a rectangle? You can ignore angle; to keep things simple, assume that the rectangle is either vertical or horizontal.
There are at least two possibilities:
• You could specify one corner of the rectangle (or the center), the width, and the height.
• You could specify two opposing corners.
At this point it is hard to say whether either is better than the other, so we’ll implement the first one, just as an example.
Here is the class definition:
class Rectangle(object):
    """represent a rectangle.
    attributes: width, height, corner.
    """
The docstring lists the attributes: width and height are numbers; corner is a Point object that specifies the lower-left corner.
To represent a rectangle, you have to instantiate a Rectangle object and assign values to the attributes:
box = Rectangle()
box.width = 100.0
box.height = 200.0
box.corner = Point()
box.corner.x = 0.0
box.corner.y = 0.0
The expression box.corner.x means, “Go to the object box refers to and select the attribute named corner; then go to that object and select the attribute named x.”
The figure shows the state of this object:
File:Book023.png
An object that is an attribute of another object is embedded.
## Instances as return values
Functions can return instances. For example, find_center takes a Rectangle as an argument and returns a Point that contains the coordinates of the center of the Rectangle:
def find_center(box):
    p = Point()
    p.x = box.corner.x + box.width/2.0
    p.y = box.corner.y + box.height/2.0
    return p
Here is an example that passes box as an argument and assigns the resulting Point to center:
>>> center = find_center(box)
>>> print_point(center)
(50.0, 100.0)
## Objects are mutable
You can change the state of an object by making an assignment to one of its attributes. For example, to change the size of a rectangle without changing its position, you can modify the values of width and height:
box.width = box.width + 50
box.height = box.height + 100
You can also write functions that modify objects. For example, grow_rectangle takes a Rectangle object and two numbers, dwidth and dheight, and adds the numbers to the width and height of the rectangle:
def grow_rectangle(rect, dwidth, dheight):
    rect.width += dwidth
    rect.height += dheight
Here is an example that demonstrates the effect:
>>> print box.width
100.0
>>> print box.height
200.0
>>> grow_rectangle(box, 50, 100)
>>> print box.width
150.0
>>> print box.height
300.0
Inside the function, rect is an alias for box, so if the function modifies rect, box changes.
### Exercise 2
Write a function named move_rectangle that takes a Rectangle and two numbers named 'dx' and 'dy'. It should change the location of the rectangle by adding 'dx' to the 'x' coordinate of 'corner' and adding 'dy' to the 'y' coordinate of 'corner'.
## Copying
Aliasing can make a program difficult to read because changes in one place might have unexpected effects in another place. It is hard to keep track of all the variables that might refer to a given object.
Copying an object is often an alternative to aliasing. The copy module contains a function called copy that can duplicate any object:
>>> p1 = Point()
>>> p1.x = 3.0
>>> p1.y = 4.0
>>> import copy
>>> p2 = copy.copy(p1)
p1 and p2 contain the same data, but they are not the same Point.
>>> print_point(p1)
(3.0, 4.0)
>>> print_point(p2)
(3.0, 4.0)
>>> p1 is p2
False
>>> p1 == p2
False
The is operator indicates that p1 and p2 are not the same object, which is what we expected. But you might have expected == to yield True because these points contain the same data. In that case, you will be disappointed to learn that for instances, the default behavior of the == operator is the same as the is operator; it checks object identity, not object equivalence. This behavior can be changed—we’ll see how later.
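Here is a sketch that checks the default behavior, plus a preview of the change: defining a method named __eq__ (covered later) makes == compare contents instead of identity.

```python
import copy

class Point(object):
    """represents a point in 2-D space"""

p1 = Point()
p1.x, p1.y = 3.0, 4.0
p2 = copy.copy(p1)

# By default, == on instances checks identity, just like is.
assert not (p1 is p2)
assert not (p1 == p2)

# Preview: a class that defines __eq__ compares contents instead.
class EqPoint(object):
    def __eq__(self, other):
        return self.x == other.x and self.y == other.y

q1 = EqPoint(); q1.x, q1.y = 3.0, 4.0
q2 = EqPoint(); q2.x, q2.y = 3.0, 4.0
assert q1 == q2
```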
If you use copy.copy to duplicate a Rectangle, you will find that it copies the Rectangle object but not the embedded Point.
>>> box2 = copy.copy(box)
>>> box2 is box
False
>>> box2.corner is box.corner
True
Here is what the object diagram looks like:
File:Book024.png
This operation is called a shallow copy because it copies the object and any references it contains, but not the embedded objects.
For most applications, this is not what you want. In this example, invoking grow_rectangle on one of the Rectangles would not affect the other, but invoking move_rectangle on either would affect both! This behavior is confusing and error-prone.
Fortunately, the copy module contains a method named deepcopy that copies not only the object but also the objects it refers to, and the objects they refer to, and so on. You will not be surprised to learn that this operation is called a deep copy.
>>> box3 = copy.deepcopy(box)
>>> box3 is box
False
>>> box3.corner is box.corner
False
box3 and box are completely separate objects.
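The difference between the two kinds of copy is easy to verify in code; this sketch modifies the embedded Point and checks which copies see the change:

```python
import copy

class Point(object):
    """represents a point in 2-D space"""

class Rectangle(object):
    """represents a rectangle with attributes width, height, corner"""

box = Rectangle()
box.width, box.height = 100.0, 200.0
box.corner = Point()
box.corner.x, box.corner.y = 0.0, 0.0

box2 = copy.copy(box)       # shallow copy: shares the embedded Point
box3 = copy.deepcopy(box)   # deep copy: gets its own Point

box.corner.x = 50.0
assert box2.corner.x == 50.0   # the shallow copy sees the change
assert box3.corner.x == 0.0    # the deep copy does not
```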
### Exercise 3
Write a version of move_rectangle that creates and returns a new Rectangle instead of modifying the old one.
## Debugging
When you start working with objects, you are likely to encounter some new exceptions. If you try to access an attribute that doesn’t exist, you get an AttributeError:
>>> p = Point()
>>> print p.z
AttributeError: 'Point' object has no attribute 'z'
If you are not sure what type an object is, you can ask:
>>> type(p)
<class '__main__.Point'>
If you are not sure whether an object has a particular attribute, you can use the built-in function hasattr:
>>> hasattr(p, 'x')
True
>>> hasattr(p, 'z')
False
The first argument can be any object; the second argument is a string that contains the name of the attribute.
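A related built-in, getattr, fetches an attribute by name and accepts an optional default to return instead of raising AttributeError. A small sketch:

```python
class Point(object):
    """represents a point in 2-D space"""

p = Point()
p.x = 3.0

assert hasattr(p, 'x')
assert not hasattr(p, 'z')

# getattr takes the object, the attribute name as a string,
# and an optional default used when the attribute is missing.
assert getattr(p, 'x', 0.0) == 3.0
assert getattr(p, 'z', 0.0) == 0.0
```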
## Glossary
class:
A user-defined type. A class definition creates a new class object.
class object:
An object that contains information about a user-defined type. The class object can be used to create instances of the type.
instance:
An object that belongs to a class.
attribute:
One of the named values associated with an object.
embedded (object):
An object that is stored as an attribute of another object.
shallow copy:
To copy the contents of an object, including any references to embedded objects; implemented by the copy function in the copy module.
deep copy:
To copy the contents of an object as well as any embedded objects, and any objects embedded in them, and so on; implemented by the deepcopy function in the copy module.
object diagram:
A diagram that shows objects, their attributes, and the values of the attributes.
## Exercises
### Exercise 4
'World.py', which is part of Swampy (see Chapter 4), contains a class definition for a user-defined type called 'World'. If you run this code:

from World import *
world = World()
wait_for_user()
A window should appear with a title bar and an empty square. In this exercise we will use this window to draw Points, Rectangles and other shapes. Add the following lines before wait_for_user and run the program again:
canvas = world.ca(width=500, height=500, background='white')
bbox = [[-150,-100], [150, 100]]
canvas.rectangle(bbox, outline='black', width=2, fill='green4')
You should see a green rectangle with a black outline. The first line creates a Canvas, which appears in the window as a white square. The Canvas object provides methods like 'rectangle' for drawing various shapes.
'bbox' is a list of lists that represents the “bounding box” of the rectangle. The first pair of coordinates is the lower-left corner of the rectangle; the second pair is the upper-right corner.
You can draw a circle like this:
canvas.circle([-25,0], 70, outline=None, fill='red')
The first parameter is the coordinate pair for the center of the circle; the second parameter is the radius.
If you add this line to the program, the result should resemble the national flag of Bangladesh (see 'wikipedia.org/wiki/Gallery_of_sovereign-state_flags').
• Write a function called draw_rectangle that takes a Canvas and a Rectangle as arguments and draws a representation of the Rectangle on the Canvas.
• Add an attribute named color to your Rectangle objects and modify draw_rectangle so that it uses the color attribute as the fill color.
• Write a function called draw_point that takes a
Canvas and a Point as arguments and draws a representation of the Point on the Canvas.
• Define a new class called Circle with appropriate attributes and
instantiate a few Circle objects. Write a function called draw_circle that draws circles on the canvas.
Hint: you can draw a polygon like this:
points = [[-150,-100], [150, 100], [150, -100]]
canvas.polygon(points, fill='blue')
I have written a small program that lists the available colors; you can download it from 'thinkpython.com/code/color_list.py'.
# Classes and functions
## Time
As another example of a user-defined type, we'll define a class called Time that records the time of day. The class definition looks like this:
class Time(object):
    """represents the time of day.
    attributes: hour, minute, second"""
We can create a new Time object and assign attributes for hours, minutes, and seconds:
time = Time()
time.hour = 11
time.minute = 59
time.second = 30
The state diagram for the Time object looks like this:
File:Book025.png
### Exercise 1
Write a function called print_time that takes a Time object and prints it in the form hour:minute:second.
Hint: the format sequence %.2d prints an integer using at least two digits, including a leading zero if necessary.
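The hint can be checked directly; %.2d pads an integer with leading zeros to at least two digits:

```python
# %.2d pads with leading zeros; %2d would pad with spaces instead.
assert '%.2d' % 5 == '05'
assert '%.2d' % 59 == '59'
assert '%.2d:%.2d:%.2d' % (9, 5, 0) == '09:05:00'
```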
### Exercise 2
Write a boolean function called is_after that takes two Time objects, t1 and t2, and returns True if t1 follows t2 chronologically and False otherwise.
Challenge: don't use an if statement.
## Pure functions
In the next few sections, we’ll write two functions that add time values. They demonstrate two kinds of functions: pure functions and modifiers. They also demonstrate a development plan I’ll call prototype and patch, which is a way of tackling a complex problem by starting with a simple prototype and incrementally dealing with the complications.
Here is a simple prototype of add_time:
def add_time(t1, t2):
    sum = Time()
    sum.hour = t1.hour + t2.hour
    sum.minute = t1.minute + t2.minute
    sum.second = t1.second + t2.second
    return sum
The function creates a new Time object, initializes its attributes, and returns a reference to the new object. This is called a pure function because it does not modify any of the objects passed to it as arguments and it has no effect, like displaying a value or getting user input, other than returning a value.
To test this function, I’ll create two Time objects: start contains the start time of a movie, like Monty Python and the Holy Grail, and duration contains the run time of the movie, which is one hour 35 minutes.
add_time figures out when the movie will be done.
>>> start = Time()
>>> start.hour = 9
>>> start.minute = 45
>>> start.second = 0
>>> duration = Time()
>>> duration.hour = 1
>>> duration.minute = 35
>>> duration.second = 0
>>> done = add_time(start, duration)
>>> print_time(done)
10:80:00
The result, 10:80:00, might not be what you were hoping for. The problem is that this function does not deal with cases where the number of seconds or minutes adds up to more than sixty. When that happens, we have to “carry” the extra seconds into the minute column or the extra minutes into the hour column.
Here’s an improved version:
def add_time(t1, t2):
    sum = Time()
    sum.hour = t1.hour + t2.hour
    sum.minute = t1.minute + t2.minute
    sum.second = t1.second + t2.second

    if sum.second >= 60:
        sum.second -= 60
        sum.minute += 1

    if sum.minute >= 60:
        sum.minute -= 60
        sum.hour += 1

    return sum
Although this function is correct, it is starting to get big. We will see a shorter alternative later.
## Modifiers
Sometimes it is useful for a function to modify the objects it gets as parameters. In that case, the changes are visible to the caller. Functions that work this way are called modifiers.
increment, which adds a given number of seconds to a Time object, can be written naturally as a modifier. Here is a rough draft:
def increment(time, seconds):
    time.second += seconds

    if time.second >= 60:
        time.second -= 60
        time.minute += 1

    if time.minute >= 60:
        time.minute -= 60
        time.hour += 1
The first line performs the basic operation; the remainder deals with the special cases we saw before.
Is this function correct? What happens if the parameter seconds is much greater than sixty?
In that case, it is not enough to carry once; we have to keep doing it until time.second is less than sixty. One solution is to replace the if statements with while statements. That would make the function correct, but not very efficient.
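For reference, here is the while-statement version the paragraph describes: correct, but it loops once per carry, so very large values of seconds make it slow (the exercise below asks for a version with no loops at all).

```python
class Time(object):
    """represents the time of day."""

def increment(time, seconds):
    time.second += seconds
    # Keep carrying until each column is back in range.
    while time.second >= 60:
        time.second -= 60
        time.minute += 1
    while time.minute >= 60:
        time.minute -= 60
        time.hour += 1

t = Time()
t.hour, t.minute, t.second = 9, 45, 0
increment(t, 500)   # 500 seconds is 8 minutes 20 seconds
assert (t.hour, t.minute, t.second) == (9, 53, 20)
```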
### Exercise 3
Write a correct version of 'increment' that doesn’t contain any loops.
Anything that can be done with modifiers can also be done with pure functions. In fact, some programming languages only allow pure functions. There is some evidence that programs that use pure functions are faster to develop and less error-prone than programs that use modifiers. But modifiers are convenient at times, and functional programs tend to be less efficient.
In general, I recommend that you write pure functions whenever it is reasonable and resort to modifiers only if there is a compelling advantage. This approach might be called a functional programming style.
### Exercise 4
Write a “pure” version of 'increment' that creates and returns a new Time object rather than modifying the parameter.
## Prototyping versus planning
The development plan I am demonstrating is called “prototype and patch.” For each function, I wrote a prototype that performed the basic calculation and then tested it, patching errors along the way.
This approach can be effective, especially if you don’t yet have a deep understanding of the problem. But incremental corrections can generate code that is unnecessarily complicated—since it deals with many special cases—and unreliable—since it is hard to know if you have found all the errors.
An alternative is planned development, in which high-level insight into the problem can make the programming much easier. In this case, the insight is that a Time object is really a three-digit number in base 60 (see wikipedia.org/wiki/Sexagesimal)! The second attribute is the “ones column,” the minute attribute is the “sixties column,” and the hour attribute is the “thirty-six hundreds column.”
When we wrote add_time and increment, we were effectively doing addition in base 60, which is why we had to carry from one column to the next.
This observation suggests another approach to the whole problem—we can convert Time objects to integers and take advantage of the fact that the computer knows how to do integer arithmetic.
Here is a function that converts Times to integers:
def time_to_int(time):
    minutes = time.hour * 60 + time.minute
    seconds = minutes * 60 + time.second
    return seconds
And here is the function that converts integers to Times (recall that divmod divides the first argument by the second and returns the quotient and remainder as a tuple).
def int_to_time(seconds):
    time = Time()
    minutes, time.second = divmod(seconds, 60)
    time.hour, time.minute = divmod(minutes, 60)
    return time
You might have to think a bit, and run some tests, to convince yourself that these functions are correct. One way to test them is to check that time_to_int(int_to_time(x)) == x for many values of x. This is an example of a consistency check.
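Here is that consistency check as a sketch, sampling many values of x:

```python
class Time(object):
    """represents the time of day."""

def time_to_int(time):
    minutes = time.hour * 60 + time.minute
    seconds = minutes * 60 + time.second
    return seconds

def int_to_time(seconds):
    time = Time()
    minutes, time.second = divmod(seconds, 60)
    time.hour, time.minute = divmod(minutes, 60)
    return time

# Consistency check: the two conversions should invert each other.
for x in range(0, 100000, 997):
    assert time_to_int(int_to_time(x)) == x
```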
Once you are convinced they are correct, you can use them to rewrite add_time:
def add_time(t1, t2):
    seconds = time_to_int(t1) + time_to_int(t2)
    return int_to_time(seconds)
This version is shorter than the original, and easier to verify.
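Re-running the movie example through this version shows that the carrying now happens automatically in the integer arithmetic: 9:45:00 plus 1:35:00 is 11:20:00.

```python
class Time(object):
    """represents the time of day."""

def time_to_int(time):
    minutes = time.hour * 60 + time.minute
    return minutes * 60 + time.second

def int_to_time(seconds):
    time = Time()
    minutes, time.second = divmod(seconds, 60)
    time.hour, time.minute = divmod(minutes, 60)
    return time

def add_time(t1, t2):
    seconds = time_to_int(t1) + time_to_int(t2)
    return int_to_time(seconds)

start = Time()
start.hour, start.minute, start.second = 9, 45, 0
duration = Time()
duration.hour, duration.minute, duration.second = 1, 35, 0

done = add_time(start, duration)
assert (done.hour, done.minute, done.second) == (11, 20, 0)
```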
### Exercise 5
Rewrite 'increment' using time_to_int and int_to_time.
In some ways, converting from base 60 to base 10 and back is harder than just dealing with times. Base conversion is more abstract; our intuition for dealing with time values is better.
But if we have the insight to treat times as base 60 numbers and make the investment of writing the conversion functions (time_to_int and int_to_time), we get a program that is shorter, easier to read and debug, and more reliable.
It is also easier to add features later. For example, imagine subtracting two Times to find the duration between them. The naïve approach would be to implement subtraction with borrowing. Using the conversion functions would be easier and more likely to be correct.
Ironically, sometimes making a problem harder (or more general) makes it easier (because there are fewer special cases and fewer opportunities for error).
## Debugging
A Time object is well-formed if the values of minute and second are between 0 and 60 (including 0 but not 60) and if hour is positive. hour and minute should be integral values, but we might allow second to have a fraction part.
These kinds of requirements are called invariants because they should always be true. To put it a different way, if they are not true, then something has gone wrong.
Writing code to check your invariants can help you detect errors and find their causes. For example, you might have a function like valid_time that takes a Time object and returns False if it violates an invariant:
def valid_time(time):
    if time.hour < 0 or time.minute < 0 or time.second < 0:
        return False
    if time.minute >= 60 or time.second >= 60:
        return False
    return True
Then at the beginning of each function you could check the arguments to make sure they are valid:
def add_time(t1, t2):
    if not valid_time(t1) or not valid_time(t2):
        raise ValueError, 'invalid Time object in add_time'
    seconds = time_to_int(t1) + time_to_int(t2)
    return int_to_time(seconds)
Or you could use an assert statement, which checks a given invariant and raises an exception if it fails:
def add_time(t1, t2):
    assert valid_time(t1) and valid_time(t2)
    seconds = time_to_int(t1) + time_to_int(t2)
    return int_to_time(seconds)
assert statements are useful because they distinguish code that deals with normal conditions from code that checks for errors.
## Glossary
prototype and patch:
A development plan that involves writing a rough draft of a program, testing, and correcting errors as they are found.
planned development:
A development plan that involves high-level insight into the problem and more planning than incremental development or prototype development.
pure function:
A function that does not modify any of the objects it receives as arguments. Most pure functions are fruitful.
modifier:
A function that changes one or more of the objects it receives as arguments. Most modifiers are fruitless.
functional programming style:
A style of program design in which the majority of functions are pure.
invariant:
A condition that should always be true during the execution of a program.
## Exercises
### Exercise 6
Write a function called mul_time that takes a Time object and a number and returns a new Time object that contains the product of the original Time and the number. Then use mul_time to write a function that takes a Time object that represents the finishing time in a race, and a number that represents the distance, and returns a Time object that represents the average pace (time per mile).
### Exercise 7
Write a class definition for a Date object that has attributes 'day', 'month' and 'year'. Write a function called increment_date that takes a Date object, 'date', and an integer, 'n', and returns a new Date object that represents the day 'n' days after 'date'. Hint: “Thirty days hath September...” Challenge: does your function deal with leap years correctly? See 'wikipedia.org/wiki/Leap_year'.
### Exercise 8
The 'datetime' module provides 'date' and 'time' objects that are similar to the Date and Time objects in this chapter, but they provide a rich set of methods and operators. Read the documentation at 'docs.python.org/lib/datetime-date.html'.
• Use the 'datetime' module to write a program that gets the current date and prints the day of the week.
• Write a program that takes a birthday as input and prints the user’s age and the number of days, hours, minutes and seconds until their next birthday.
# Classes and methods
## Object-oriented features
Python is an object-oriented programming language, which means that it provides features that support object-oriented programming.
It is not easy to define object-oriented programming, but we have already seen some of its characteristics:
• Programs are made up of object definitions and function definitions, and most of the computation is expressed in terms of operations on objects.
• Each object definition corresponds to some object or concept in the real world, and the functions that operate on that object correspond to the ways real-world objects interact.
For example, the Time class defined in Chapter 16 corresponds to the way people record the time of day, and the functions we defined correspond to the kinds of things people do with times. Similarly, the Point and Rectangle classes correspond to the mathematical concepts of a point and a rectangle.
So far, we have not taken advantage of the features Python provides to support object-oriented programming. These features are not strictly necessary; most of them provide alternative syntax for things we have already done. But in many cases, the alternative is more concise and more accurately conveys the structure of the program.
For example, in the Time program, there is no obvious connection between the class definition and the function definitions that follow. With some examination, it is apparent that every function takes at least one Time object as an argument.
This observation is the motivation for methods; a method is a function that is associated with a particular class. We have seen methods for strings, lists, dictionaries and tuples. In this chapter, we will define methods for user-defined types.
Methods are semantically the same as functions, but there are two syntactic differences:
• Methods are defined inside a class definition in order to make the relationship between the class and the method explicit.
• The syntax for invoking a method is different from the syntax for calling a function.
In the next few sections, we will take the functions from the previous two chapters and transform them into methods. This transformation is purely mechanical; you can do it simply by following a sequence of steps. If you are comfortable converting from one form to another, you will be able to choose the best form for whatever you are doing.
## Printing objects
In Chapter 16, we defined a class named Time and in Exercise 16.1, you wrote a function named print_time:
class Time(object):
    """represents the time of day.
    attributes: hour, minute, second"""

def print_time(time):
    print '%.2d:%.2d:%.2d' % (time.hour, time.minute, time.second)
To call this function, you have to pass a Time object as an argument:
>>> start = Time()
>>> start.hour = 9
>>> start.minute = 45
>>> start.second = 00
>>> print_time(start)
09:45:00
To make print_time a method, all we have to do is move the function definition inside the class definition. Notice the change in indentation.
class Time(object):
    def print_time(time):
        print '%.2d:%.2d:%.2d' % (time.hour, time.minute, time.second)
Now there are two ways to call print_time. The first (and less common) way is to use function syntax:
>>> Time.print_time(start)
09:45:00
In this use of dot notation, Time is the name of the class, and print_time is the name of the method. start is passed as a parameter.
The second (and more concise) way is to use method syntax:
>>> start.print_time()
09:45:00
In this use of dot notation, print_time is the name of the method (again), and start is the object the method is invoked on, which is called the subject. Just as the subject of a sentence is what the sentence is about, the subject of a method invocation is what the method is about.
Inside the method, the subject is assigned to the first parameter, so in this case start is assigned to time.
By convention, the first parameter of a method is called self, so it would be more common to write print_time like this:
class Time(object):
    def print_time(self):
        print '%.2d:%.2d:%.2d' % (self.hour, self.minute, self.second)
The reason for this convention is an implicit metaphor:
• The syntax for a function call, print_time(start), suggests that the function is the active agent. It says something like, “Hey print_time! Here’s an object for you to print.”
• In object-oriented programming, the objects are the active agents. A method invocation like start.print_time() says “Hey start! Please print yourself.”
This change in perspective might be more polite, but it is not obvious that it is useful. In the examples we have seen so far, it may not be. But sometimes shifting responsibility from the functions onto the objects makes it possible to write more versatile functions, and makes it easier to maintain and reuse code.
### Exercise 1
Rewrite time_to_int (from Section '16.4') as a method. It is probably not appropriate to rewrite int_to_time as a method; it’s not clear what object you would invoke it on!
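A possible sketch of time_to_int as a method: the body is the same as the function from Section 16.4, only the parameter is now self, and the definition moves inside the class.

```python
class Time(object):
    """represents the time of day.
    attributes: hour, minute, second"""

    def time_to_int(self):
        """Convert this Time to an integer number of seconds."""
        minutes = self.hour * 60 + self.minute
        seconds = minutes * 60 + self.second
        return seconds

start = Time()
start.hour, start.minute, start.second = 9, 45, 0
print(start.time_to_int())   # 35100
```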
## Another example
Here’s a version of increment (from Section 16.3) rewritten as a method:
# inside class Time:
    def increment(self, seconds):
        seconds += self.time_to_int()
        return int_to_time(seconds)
This version assumes that time_to_int is written as a method, as in Exercise 17.1. Also, note that it is a pure function, not a modifier.
Here’s how you would invoke increment:
>>> start.print_time()
09:45:00
>>> end = start.increment(1337)
>>> end.print_time()
10:07:17
The subject, start, gets assigned to the first parameter, self. The argument, 1337, gets assigned to the second parameter, seconds.
This mechanism can be confusing, especially if you make an error. For example, if you invoke increment with two arguments, you get:
>>> end = start.increment(1337, 460)
TypeError: increment() takes exactly 2 arguments (3 given)
The error message is initially confusing, because there are only two arguments in parentheses. But the subject is also considered an argument, so all together that’s three.
## A more complicated example
is_after (from Exercise 16.2) is slightly more complicated because it takes two Time objects as parameters. In this case it is conventional to name the first parameter self and the second parameter other:
# inside class Time:
    def is_after(self, other):
        return self.time_to_int() > other.time_to_int()
To use this method, you have to invoke it on one object and pass the other as an argument:
>>> end.is_after(start)
True
## The init method
The init method (short for “initialization”) is a special method that gets invoked when an object is instantiated. Its full name is __init__ (two underscore characters, followed by init, and then two more underscores). An init method for the Time class might look like this:
# inside class Time:
    def __init__(self, hour=0, minute=0, second=0):
        self.hour = hour
        self.minute = minute
        self.second = second
It is common for the parameters of __init__ to have the same names as the attributes. The statement
self.hour = hour
stores the value of the parameter hour as an attribute of self.
The parameters are optional, so if you call Time with no arguments, you get the default values.
>>> time = Time()
>>> time.print_time()
00:00:00
If you provide one argument, it overrides hour:
>>> time = Time(9)
>>> time.print_time()
09:00:00
If you provide two arguments, they override hour and minute.
>>> time = Time(9, 45)
>>> time.print_time()
09:45:00
And if you provide three arguments, they override all three default values.
### Exercise 2
Write an init method for the 'Point' class that takes 'x' and 'y' as optional parameters and assigns them to the corresponding attributes.
## The __str__ method
__str__ is a special method, like __init__, that is supposed to return a string representation of an object.
For example, here is a str method for Time objects:
# inside class Time:
    def __str__(self):
        return '%.2d:%.2d:%.2d' % (self.hour, self.minute, self.second)
When you print an object, Python invokes the str method:
>>> time = Time(9, 45)
>>> print time
09:45:00
When I write a new class, I almost always start by writing __init__, which makes it easier to instantiate objects, and __str__, which is useful for debugging.
### Exercise 3
Write a 'str' method for the 'Point' class. Create a Point object and print it.
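One possible solution to Exercises 2 and 3 together; the '(%g, %g)' format is one reasonable choice, not the only one.

```python
class Point(object):
    """represents a point in 2-D space"""

    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def __str__(self):
        return '(%g, %g)' % (self.x, self.y)

p = Point(3, 4)
print(p)   # invokes __str__ and prints (3, 4)
```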
## Operator overloading
By defining other special methods, you can specify the behavior of operators on user-defined types. For example, if you define a method named __add__ for the Time class, you can use the + operator on Time objects.
Here is what the definition might look like:
# inside class Time:
    def __add__(self, other):
        seconds = self.time_to_int() + other.time_to_int()
        return int_to_time(seconds)
And here is how you could use it:
>>> start = Time(9, 45)
>>> duration = Time(1, 35)
>>> print start + duration
11:20:00
When you apply the + operator to Time objects, Python invokes __add__. When you print the result, Python invokes __str__. So there is quite a lot happening behind the scenes!
Changing the behavior of an operator so that it works with user-defined types is called operator overloading. For every operator in Python there is a corresponding special method, like __add__. For more details, see docs.python.org/ref/specialnames.html.
### Exercise 4
Write an 'add' method for the Point class.
## Type-based dispatch
In the previous section we added two Time objects, but you also might want to add an integer to a Time object. The following is a version of __add__ that checks the type of other and invokes either add_time or increment:
# inside class Time:
    def __add__(self, other):
        if isinstance(other, Time):
            return self.add_time(other)
        else:
            return self.increment(other)

    def add_time(self, other):
        seconds = self.time_to_int() + other.time_to_int()
        return int_to_time(seconds)

    def increment(self, seconds):
        seconds += self.time_to_int()
        return int_to_time(seconds)
The built-in function isinstance takes a value and a class object, and returns True if the value is an instance of the class.
If other is a Time object, __add__ invokes add_time. Otherwise it assumes that the parameter is a number and invokes increment. This operation is called a type-based dispatch because it dispatches the computation to different methods based on the type of the arguments.
Here are examples that use the + operator with different types:
>>> start = Time(9, 45)
>>> duration = Time(1, 35)
>>> print start + duration
11:20:00
>>> print start + 1337
10:07:17
Unfortunately, this implementation of addition is not commutative. If the integer is the first operand, you get
>>> print 1337 + start
TypeError: unsupported operand type(s) for +: 'int' and 'instance'
The problem is, instead of asking the Time object to add an integer, Python is asking an integer to add a Time object, and it doesn’t know how to do that. But there is a clever solution for this problem: the special method __radd__, which stands for “right-side add.” This method is invoked when a Time object appears on the right side of the + operator. Here’s the definition:
# inside class Time:
    def __radd__(self, other):
        return self.__add__(other)
And here’s how it’s used:
>>> print 1337 + start
10:07:17
### Exercise 5
Write an 'add' method for Points that works with either a Point object or a tuple:
• If the second operand is a Point, the method should return a new Point whose 'x' coordinate is the sum of the 'x' coordinates of the operands, and likewise for the 'y' coordinates.
• If the second operand is a tuple, the method should add the first element of the tuple to the 'x' coordinate and the second element to the 'y' coordinate, and return a new Point with the result.
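One possible sketch of this exercise, using isinstance for the type-based dispatch; it assumes the Point class has the init method from Exercise 2.

```python
class Point(object):
    """represents a point in 2-D space"""

    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def __add__(self, other):
        if isinstance(other, Point):
            # Point + Point: add the coordinates pairwise
            return Point(self.x + other.x, self.y + other.y)
        else:
            # otherwise assume a tuple like (dx, dy)
            return Point(self.x + other[0], self.y + other[1])

p = Point(1, 2) + Point(3, 4)
q = Point(1, 2) + (10, 20)
print(p.x, p.y)   # 4 6
print(q.x, q.y)   # 11 22
```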
## Polymorphism
Type-based dispatch is useful when it is necessary, but (fortunately) it is not always necessary. Often you can avoid it by writing functions that work correctly for arguments with different types.
Many of the functions we wrote for strings will actually work for any kind of sequence. For example, in Section 11.1 we used histogram to count the number of times each letter appears in a word.
def histogram(s):
    d = dict()
    for c in s:
        if c not in d:
            d[c] = 1
        else:
            d[c] = d[c] + 1
    return d
This function also works for lists, tuples, and even dictionaries, as long as the elements of s are hashable, so they can be used as keys in d.
>>> t = ['spam', 'egg', 'spam', 'spam', 'bacon', 'spam']
>>> histogram(t)
{'bacon': 1, 'egg': 1, 'spam': 4}
Functions that can work with several types are called polymorphic. Polymorphism can facilitate code reuse. For example, the built-in function sum, which adds the elements of a sequence, works as long as the elements of the sequence support addition.
Since Time objects provide an add method, they work with sum:
>>> t1 = Time(7, 43)
>>> t2 = Time(7, 41)
>>> t3 = Time(7, 37)
>>> total = sum([t1, t2, t3])
>>> print total
23:01:00
In general, if all of the operations inside a function work with a given type, then the function works with that type.
The best kind of polymorphism is the unintentional kind, where you discover that a function you already wrote can be applied to a type you never planned for.
## Debugging
It is legal to add attributes to objects at any point in the execution of a program, but if you are a stickler for type theory, it is a dubious practice to have objects of the same type with different attribute sets. It is usually a good idea to initialize all of an object’s attributes in the init method.
If you are not sure whether an object has a particular attribute, you can use the built-in function hasattr (see Section 15.7).
Another way to access the attributes of an object is through the special attribute __dict__, which is a dictionary that maps attribute names (as strings) to their values:
>>> p = Point(3, 4)
>>> print p.__dict__
{'y': 4, 'x': 3}
For purposes of debugging, you might find it useful to keep this function handy:
def print_attributes(obj):
    for attr in obj.__dict__:
        print attr, getattr(obj, attr)
print_attributes traverses the items in the object’s dictionary and prints each attribute name and its corresponding value.
The built-in function getattr takes an object and an attribute name (as a string) and returns the attribute’s value.
## Glossary
object-oriented language:
A language that provides features, such as user-defined classes and method syntax, that facilitate object-oriented programming.
object-oriented programming:
A style of programming in which data and the operations that manipulate it are organized into classes and methods.
method:
A function that is defined inside a class definition and is invoked on instances of that class.
subject:
The object a method is invoked on.
operator overloading:
Changing the behavior of an operator like + so it works with a user-defined type.
type-based dispatch:
A programming pattern that checks the type of an operand and invokes different functions for different types.
polymorphic:
Pertaining to a function that can work with more than one type.
## Exercises
### Exercise 6
This exercise is a cautionary tale about one of the most common, and difficult to find, errors in Python.
• Write a definition for a class named 'Kangaroo' with the following methods:
• An __init__ method that initializes an attribute named pouch_contents to an empty list.
• A method named put_in_pouch that takes an object of any type and adds it to pouch_contents.
• A __str__ method that returns a string representation of the Kangaroo object and the contents of the pouch.
Test your code by creating two 'Kangaroo' objects, assigning them to variables named 'kanga' and 'roo', and then adding 'roo' to the contents of 'kanga'’s pouch.
Download 'thinkpython.com/code/BadKangaroo.py'. It contains a solution to the previous problem with one big, nasty bug. Find and fix the bug. If you get stuck, you can download 'thinkpython.com/code/GoodKangaroo.py', which explains the problem and demonstrates a solution.
### Exercise 7
Visual is a Python module that provides 3-D graphics. It is not always included in a Python installation, so you might have to install it from your software repository or, if it’s not there, from 'vpython.org'.
The following example creates a 3-D space that is 256 units wide, long and high, and sets the “center” to be the point '(128, 128, 128)'. Then it draws a blue sphere.
from visual import *
scene.range = (256, 256, 256)
scene.center = (128, 128, 128)
color = (0.1, 0.1, 0.9)    # mostly blue
sphere(pos=scene.center, radius=128, color=color)
'color' is an RGB tuple; that is, the elements are Red-Green-Blue levels between 0.0 and 1.0 (see 'wikipedia.org/wiki/RGB_color_model').
If you run this code, you should see a window with a black background and a blue sphere. If you drag the middle button up and down, you can zoom in and out. You can also rotate the scene by dragging the right button, but with only one sphere in the world, it is hard to tell the difference.
The following loop creates a cube of spheres:
t = range(0, 256, 51)
for x in t:
    for y in t:
        for z in t:
            pos = x, y, z
            sphere(pos=pos, radius=10, color=(0.1, 0.1, 0.9))
• Put this code in a script and make sure it works for you.
• Modify the program so that each sphere in the cube has the color that corresponds to its position in RGB space. Notice that the coordinates are in the range 0–255, but the RGB tuples are in the range 0.0–1.0.
• Download 'thinkpython.com/code/color_list.py' and use the function read_colors to generate a list of the available colors on your system, their names and RGB values. For each named color draw a sphere in the position that corresponds to its RGB values.
You can see my solution at 'thinkpython.com/code/color_space.py'.
# Inheritance
In this chapter we will develop classes to represent playing cards, decks of cards, and poker hands. If you don’t play poker, you can read about it at wikipedia.org/wiki/Poker, but you don't have to; I'll tell you what you need to know for the exercises.
If you are not familiar with Anglo-American playing cards, you can read about them at wikipedia.org/wiki/Playing_cards.
## Card objects
There are fifty-two cards in a deck, each of which belongs to one of four suits and one of thirteen ranks. The suits are Spades, Hearts, Diamonds, and Clubs (in descending order in bridge). The ranks are Ace, 2, 3, 4, 5, 6, 7, 8, 9, 10, Jack, Queen, and King. Depending on the game that you are playing, an Ace may be higher than King or lower than 2.
If we want to define a new object to represent a playing card, it is obvious what the attributes should be: rank and suit. It is not as obvious what type the attributes should be. One possibility is to use strings containing words like 'Spade' for suits and 'Queen' for ranks. One problem with this implementation is that it would not be easy to compare cards to see which had a higher rank or suit.
An alternative is to use integers to encode the ranks and suits. In this context, “encode” means that we are going to define a mapping between numbers and suits, or between numbers and ranks. This kind of encoding is not meant to be a secret (that would be “encryption”).
For example, this table shows the suits and the corresponding integer codes:
Spades ↦ 3 Hearts ↦ 2 Diamonds ↦ 1 Clubs ↦ 0
This code makes it easy to compare cards; because higher suits map to higher numbers, we can compare suits by comparing their codes.
The mapping for ranks is fairly obvious; each of the numerical ranks maps to the corresponding integer, and for face cards:
Jack ↦ 11 Queen ↦ 12 King ↦ 13
I am using the ↦ symbol to make it clear that these mappings are not part of the Python program. They are part of the program design, but they don’t appear explicitly in the code.
The class definition for Card looks like this:
class Card:
    """represents a standard playing card."""

    def __init__(self, suit=0, rank=2):
        self.suit = suit
        self.rank = rank
As usual, the init method takes an optional parameter for each attribute. The default card is the 2 of Clubs.
To create a Card, you call Card with the suit and rank of the card you want.
queen_of_diamonds = Card(1, 12)
## Class attributes
In order to print Card objects in a way that people can easily read, we need a mapping from the integer codes to the corresponding ranks and suits. A natural way to do that is with lists of strings. We assign these lists to class attributes:
# inside class Card:
    suit_names = ['Clubs', 'Diamonds', 'Hearts', 'Spades']
    rank_names = [None, 'Ace', '2', '3', '4', '5', '6', '7',
                  '8', '9', '10', 'Jack', 'Queen', 'King']

    def __str__(self):
        return '%s of %s' % (Card.rank_names[self.rank],
                             Card.suit_names[self.suit])
Variables like suit_names and rank_names, which are defined inside a class but outside of any method, are called class attributes because they are associated with the class object Card.
This term distinguishes them from variables like suit and rank, which are called instance attributes because they are associated with a particular instance.
Both kinds of attribute are accessed using dot notation. For example, in __str__, self is a Card object, and self.rank is its rank. Similarly, Card is a class object, and Card.rank_names is a list of strings associated with the class.
Every card has its own suit and rank, but there is only one copy of suit_names and rank_names.
Putting it all together, the expression Card.rank_names[self.rank] means “use the attribute rank from the object self as an index into the list rank_names from the class Card, and select the appropriate string.”
The first element of rank_names is None because there is no card with rank zero. By including None as a place-keeper, we get a mapping with the nice property that the index 2 maps to the string '2', and so on. To avoid this tweak, we could have used a dictionary instead of a list.
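The dictionary version mentioned above might look like this; no place-keeper is needed, because a dictionary's keys do not have to start at zero. (This is only a sketch of the alternative, not the class attribute the chapter actually uses.)

```python
# a hypothetical dictionary alternative to the rank_names list
rank_names = {1: 'Ace', 2: '2', 3: '3', 4: '4', 5: '5',
              6: '6', 7: '7', 8: '8', 9: '9', 10: '10',
              11: 'Jack', 12: 'Queen', 13: 'King'}

print(rank_names[11])   # Jack
```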
With the methods we have so far, we can create and print cards:
>>> card1 = Card(2, 11)
>>> print card1
Jack of Hearts
Here is a diagram that shows the Card class object and one Card instance:
[image: book026.png]
Card is a class object, so it has type type. card1 has type Card. (To save space, I didn’t draw the contents of suit_names and rank_names).
## Comparing cards
For built-in types, there are conditional operators (<, >, ==, etc.) that compare values and determine when one is greater than, less than, or equal to another. For user-defined types, we can override the behavior of the built-in operators by providing a method named __cmp__.
__cmp__ takes two parameters, self and other, and returns a positive number if the first object is greater, a negative number if the second object is greater, and 0 if they are equal to each other.
The correct ordering for cards is not obvious. For example, which is better, the 3 of Clubs or the 2 of Diamonds? One has a higher rank, but the other has a higher suit. In order to compare cards, you have to decide whether rank or suit is more important.
The answer might depend on what game you are playing, but to keep things simple, we’ll make the arbitrary choice that suit is more important, so all of the Spades outrank all of the Diamonds, and so on.
With that decided, we can write __cmp__:
# inside class Card:
    def __cmp__(self, other):
        # check the suits
        if self.suit > other.suit: return 1
        if self.suit < other.suit: return -1

        # suits are the same... check ranks
        if self.rank > other.rank: return 1
        if self.rank < other.rank: return -1

        # ranks are the same... it's a tie
        return 0
You can write this more concisely using tuple comparison:
# inside class Card:
    def __cmp__(self, other):
        t1 = self.suit, self.rank
        t2 = other.suit, other.rank
        return cmp(t1, t2)
The built-in function cmp has the same interface as the method __cmp__: it takes two values and returns a positive number if the first is larger, a negative number if the second is larger, and 0 if they are equal.
### Exercise 1
Write a __cmp__ method for Time objects. Hint: you can use tuple comparison, but you also might consider using integer subtraction.
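For reference, here is one way the tuple-comparison hint might play out, written as a standalone helper rather than a method (cmp_times is not from the book); the expression (t1 > t2) - (t1 < t2) yields 1, -1 or 0, the same convention cmp uses.

```python
class Time(object):
    """represents the time of day"""
    def __init__(self, hour=0, minute=0, second=0):
        self.hour = hour
        self.minute = minute
        self.second = second

def cmp_times(time1, time2):
    """Return 1 if time1 is later, -1 if earlier, 0 if equal."""
    t1 = time1.hour, time1.minute, time1.second
    t2 = time2.hour, time2.minute, time2.second
    return (t1 > t2) - (t1 < t2)

print(cmp_times(Time(10, 7, 17), Time(9, 45)))   # 1
print(cmp_times(Time(9, 45), Time(9, 45)))       # 0
```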
## Decks
Now that we have Cards, the next step is to define Decks. Since a deck is made up of cards, it is natural for each Deck to contain a list of cards as an attribute.
The following is a class definition for Deck. The init method creates the attribute cards and generates the standard set of fifty-two cards:
class Deck:
    def __init__(self):
        self.cards = []
        for suit in range(4):
            for rank in range(1, 14):
                card = Card(suit, rank)
                self.cards.append(card)
The easiest way to populate the deck is with a nested loop. The outer loop enumerates the suits from 0 to 3. The inner loop enumerates the ranks from 1 to 13. Each iteration creates a new Card with the current suit and rank, and appends it to self.cards.
## Printing the deck
Here is a __str__ method for Deck:
#inside class Deck:
    def __str__(self):
        res = [str(card) for card in self.cards]
        return '\n'.join(res)
This method demonstrates an efficient way to accumulate a large string: building a list of strings and then using join. The built-in function str invokes the __str__ method on each card and returns the string representation.
Since we invoke join on a newline character, the cards are separated by newlines. Here’s what the result looks like:
>>> deck = Deck()
>>> print deck
Ace of Clubs
2 of Clubs
3 of Clubs
...
Even though the result appears on 52 lines, it is one long string that contains newlines.
## Add, remove, shuffle and sort
To deal cards, we would like a method that removes a card from the deck and returns it. The list method pop provides a convenient way to do that:
#inside class Deck:
    def pop_card(self):
        return self.cards.pop()
Since pop removes the last card in the list, we are dealing from the bottom of the deck. In real life bottom dealing is frowned upon, but in this context it’s ok.
To add a card, we can use the list method append:
#inside class Deck:
    def add_card(self, card):
        self.cards.append(card)
A method like this that uses another function without doing much real work is sometimes called a veneer. The metaphor comes from woodworking, where it is common to glue a thin layer of good quality wood to the surface of a cheaper piece of wood.
In this case we are defining a “thin” method that expresses a list operation in terms that are appropriate for decks.
As another example, we can write a Deck method named shuffle using the function shuffle from the random module:
# inside class Deck:
    def shuffle(self):
        random.shuffle(self.cards)
Don’t forget to import random.
### Exercise 2
Write a Deck method named 'sort' that uses the list method 'sort' to sort the cards in a 'Deck'. 'sort' uses the __cmp__ method we defined to determine sort order.
## Inheritance
The language feature most often associated with object-oriented programming is inheritance. Inheritance is the ability to define a new class that is a modified version of an existing class.
It is called “inheritance” because the new class inherits the methods of the existing class. Extending this metaphor, the existing class is called the parent and the new class is called the child.
As an example, let’s say we want a class to represent a “hand,” that is, the set of cards held by one player. A hand is similar to a deck: both are made up of a set of cards, and both require operations like adding and removing cards.
A hand is also different from a deck; there are operations we want for hands that don’t make sense for a deck. For example, in poker we might compare two hands to see which one wins. In bridge, we might compute a score for a hand in order to make a bid.
This relationship between classes—similar, but different—lends itself to inheritance.
The definition of a child class is like other class definitions, but the name of the parent class appears in parentheses:
class Hand(Deck):
"""represents a hand of playing cards"""
This definition indicates that Hand inherits from Deck; that means we can use methods like pop_card and add_card for Hands as well as Decks.
Hand also inherits __init__ from Deck, but it doesn’t really do what we want: instead of populating the hand with 52 new cards, the init method for Hands should initialize cards with an empty list.
If we provide an init method in the Hand class, it overrides the one in the Deck class:
# inside class Hand:
    def __init__(self, label=''):
        self.cards = []
        self.label = label
So when you create a Hand, Python invokes this init method:
>>> hand = Hand('new hand')
>>> print hand.cards
[]
>>> print hand.label
new hand
But the other methods are inherited from Deck, so we can use pop_card and add_card to deal a card:
>>> deck = Deck()
>>> card = deck.pop_card()
>>> hand.add_card(card)
>>> print hand
King of Spades
A natural next step is to encapsulate this code in a method called move_cards:
#inside class Deck:
    def move_cards(self, hand, num):
        for i in range(num):
            hand.add_card(self.pop_card())
move_cards takes two arguments, a Hand object and the number of cards to deal. It modifies both self and hand, and returns None.
In some games, cards are moved from one hand to another, or from a hand back to the deck. You can use move_cards for any of these operations: self can be either a Deck or a Hand, and hand, despite the name, can also be a Deck.
### Exercise 3
Write a Deck method called deal_hands that takes two parameters, the number of hands and the number of cards per hand, and that creates new Hand objects, deals the appropriate number of cards per hand, and returns a list of Hand objects.
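A possible sketch of deal_hands, built on move_cards; to keep the example self-contained it repeats simplified versions of the Card, Deck and Hand classes from this chapter.

```python
import random

class Card(object):
    """represents a playing card as suit and rank codes"""
    def __init__(self, suit=0, rank=2):
        self.suit = suit
        self.rank = rank

class Deck(object):
    """represents a deck of 52 cards"""
    def __init__(self):
        self.cards = [Card(suit, rank)
                      for suit in range(4)
                      for rank in range(1, 14)]

    def pop_card(self):
        return self.cards.pop()

    def add_card(self, card):
        self.cards.append(card)

    def shuffle(self):
        random.shuffle(self.cards)

    def move_cards(self, hand, num):
        for i in range(num):
            hand.add_card(self.pop_card())

    def deal_hands(self, num_hands, cards_per_hand):
        """Create num_hands Hands, deal cards into each, return them."""
        hands = []
        for i in range(num_hands):
            hand = Hand()
            self.move_cards(hand, cards_per_hand)
            hands.append(hand)
        return hands

class Hand(Deck):
    """represents a hand of playing cards"""
    def __init__(self, label=''):
        self.cards = []
        self.label = label

deck = Deck()
deck.shuffle()
hands = deck.deal_hands(4, 5)
print(len(hands), len(hands[0].cards), len(deck.cards))   # 4 5 32
```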
Inheritance is a useful feature. Some programs that would be repetitive without inheritance can be written more elegantly with it. Inheritance can facilitate code reuse, since you can customize the behavior of parent classes without having to modify them. In some cases, the inheritance structure reflects the natural structure of the problem, which makes the program easier to understand.
On the other hand, inheritance can make programs difficult to read. When a method is invoked, it is sometimes not clear where to find its definition. The relevant code may be scattered among several modules. Also, many of the things that can be done using inheritance can be done as well or better without it.
## Class diagrams
So far we have seen stack diagrams, which show the state of a program, and object diagrams, which show the attributes of an object and their values. These diagrams represent a snapshot in the execution of a program, so they change as the program runs.
They are also highly detailed; for some purposes, too detailed. A class diagram is a more abstract representation of the structure of a program. Instead of showing individual objects, it shows classes and the relationships between them.
There are several kinds of relationship between classes:
• Objects in one class might contain references to objects in another class. For example, each Rectangle contains a reference to a Point, and each Deck contains references to many Cards. This kind of relationship is called HAS-A, as in, “a Rectangle has a Point.”
• One class might inherit from another. This relationship is called IS-A, as in, “a Hand is a kind of a Deck.”
• One class might depend on another in the sense that changes in one class would require changes in the other.
A class diagram is a graphical representation of these relationships. For example, this diagram shows the relationships between Card, Deck and Hand.
[image: book027.png]
The arrow with a hollow triangle head represents an IS-A relationship; in this case it indicates that Hand inherits from Deck.
The standard arrow head represents a HAS-A relationship; in this case a Deck has references to Card objects.
The star (*) near the arrow head is a multiplicity; it indicates how many Cards a Deck has. A multiplicity can be a simple number, like 52, a range, like 5..7 or a star, which indicates that a Deck can have any number of Cards.
A more detailed diagram might show that a Deck actually contains a list of Cards, but built-in types like list and dict are usually not included in class diagrams.
### Exercise 4
Read 'TurtleWorld.py', 'World.py' and 'Gui.py' and draw a class diagram that shows the relationships among the classes defined there.
## Debugging
Inheritance can make debugging a challenge because when you invoke a method on an object, you might not know which method will be invoked.
Suppose you are writing a function that works with Hand objects. You would like it to work with all kinds of Hands, like PokerHands, BridgeHands, etc. If you invoke a method like shuffle, you might get the one defined in Deck, but if any of the subclasses override this method, you’ll get that version instead.
Any time you are unsure about the flow of execution through your program, the simplest solution is to add print statements at the beginning of the relevant methods. If Deck.shuffle prints a message that says something like Running Deck.shuffle, then as the program runs it traces the flow of execution.
As an alternative, you could use this function, which takes an object and a method name (as a string) and returns the class that provides the definition of the method:
def find_defining_class(obj, meth_name):
    for ty in type(obj).mro():
        if meth_name in ty.__dict__:
            return ty
Here’s an example:
>>> hand = Hand()
>>> print find_defining_class(hand, 'shuffle')
<class 'Card.Deck'>
So the shuffle method for this Hand is the one in Deck.
find_defining_class uses the mro method to get the list of class objects (types) that will be searched for methods. “MRO” stands for “method resolution order.”
Here’s a program design suggestion: whenever you override a method, the interface of the new method should be the same as the old. It should take the same parameters, return the same type, and obey the same preconditions and postconditions. If you obey this rule, you will find that any function designed to work with an instance of a superclass, like a Deck, will also work with instances of subclasses like a Hand or PokerHand.
If you violate this rule, your code will collapse like (sorry) a house of cards.
## Glossary
encode:
To represent one set of values using another set of values by constructing a mapping between them.
class attribute:
An attribute associated with a class object. Class attributes are defined inside a class definition but outside any method.
instance attribute:
An attribute associated with an instance of a class.
veneer:
A method or function that provides a different interface to another function without doing much computation.
inheritance:
The ability to define a new class that is a modified version of a previously defined class.
parent class:
The class from which a child class inherits.
child class:
A new class created by inheriting from an existing class; also called a “subclass.”
IS-A relationship:
The relationship between a child class and its parent class.
HAS-A relationship:
The relationship between two classes where instances of one class contain references to instances of the other.
class diagram:
A diagram that shows the classes in a program and the relationships between them.
multiplicity:
A notation in a class diagram that shows, for a HAS-A relationship, how many references there are to instances of another class.
## Exercises
### Exercise 5
The following are the possible hands in poker, in increasing order of value (and decreasing order of probability):
pair:
two cards with the same rank
two pair:
two pairs of cards with the same rank
three of a kind:
three cards with the same rank
straight:
five cards with ranks in sequence (aces can be high or low, so 'Ace-2-3-4-5' is a straight and so is '10-Jack-Queen-King-Ace', but 'Queen-King-Ace-2-3' is not.)
flush:
five cards with the same suit
full house:
three cards with one rank, two cards with another
four of a kind:
four cards with the same rank
straight flush:
five cards in sequence (as defined above) and with the same suit
The goal of these exercises is to estimate the probability of drawing these various hands.
Card.py
: A complete version of the 'Card', 'Deck' and 'Hand' classes in this chapter.
PokerHand.py
: An incomplete implementation of a class that represents a poker hand, and some code that tests it.
• If you run 'PokerHand.py', it deals six 7-card poker hands and checks to see if any of them contains a flush. Read this code carefully before you go on.
• Add methods to 'PokerHand.py' named 'has_pair', 'has_twopair', etc. that return True or False according to whether or not the hand meets the relevant criteria. Your code should work correctly for “hands” that contain any number of cards (although 5 and 7 are the most common sizes).
• Write a method named 'classify' that figures out the highest-value classification for a hand and sets the 'label' attribute accordingly. For example, a 7-card hand might contain a flush and a pair; it should be labeled “flush”.
• When you are convinced that your classification methods are working, the next step is to estimate the probabilities of the various hands. Write a function in 'PokerHand.py' that shuffles a deck of cards, divides it into hands, classifies the hands, and counts the number of times various classifications appear.
• Print a table of the classifications and their probabilities. Run your program with larger and larger numbers of hands until the output values converge to a reasonable degree of accuracy. Compare your results to the values at wikipedia.org/wiki/Hand_rankings.
### Exercise 6
This exercise uses TurtleWorld from Chapter 4. You will write code that makes Turtles play tag. If you are not familiar with the rules of tag, see 'wikipedia.org/wiki/Tag_(game)'.
If you download and run 'Wobbler.py', you should see a TurtleWorld with three Turtles. If you press the 'Run' button, the Turtles wander at random.
• Read the code and make sure you understand how it works.
The 'Wobbler' class inherits from 'Turtle', which means that the 'Turtle' methods 'lt', 'rt', 'fd' and 'bk' work on Wobblers. The 'step' method gets invoked by TurtleWorld. It invokes 'steer', which turns the Turtle in the desired direction, 'wobble', which makes a random turn in proportion to the Turtle’s clumsiness, and 'move', which moves forward a few pixels, depending on the Turtle’s speed.
• Create a file named 'Tagger.py'. Import everything from
'Wobbler', then define a class named 'Tagger' that inherits from 'Wobbler'. Call make_world passing the 'Tagger' class object as an argument.
• Add a 'steer' method to 'Tagger' to override the one in
'Wobbler'. As a starting place, write a version that always points the Turtle toward the origin. Hint: use the math function 'atan2' and the Turtle attributes 'x', 'y' and 'heading'.
• Modify 'steer' so that the Turtles stay in bounds.
For debugging, you might want to use the 'Step' button, which invokes 'step' once on each Turtle.
• Modify 'steer' so that each Turtle points toward its nearest
neighbor. Hint: Turtles have an attribute, 'world', that is a reference to the TurtleWorld they live in, and the TurtleWorld has an attribute, 'animals', that is a list of all Turtles in the world.
• Modify 'steer' so the Turtles play tag. You can add methods
to 'Tagger' and you can override 'steer' and __init__, but you may not modify or override 'step', 'wobble' or 'move'. Also, 'steer' is allowed to change the heading of the Turtle but not the position. Adjust the rules and your 'steer' method for good quality play; for example, it should be possible for the slow Turtle to tag the faster Turtles eventually.
You can get my solution from 'thinkpython.com/code/Tagger.py'.
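The atan2 hint for the steer-toward-origin step can be sketched as follows. Whether TurtleWorld measures headings in degrees counterclockwise from east is an assumption here, so treat this as a starting point rather than a drop-in solution:

```python
import math

# Sketch: the heading (in degrees) that points a Turtle at (x, y)
# back toward the origin. The degree convention is an assumption
# about TurtleWorld, not something verified against its source.
def heading_toward_origin(x, y):
    # The vector from (x, y) to (0, 0) is (-x, -y); atan2 takes (y, x).
    return math.degrees(math.atan2(-y, -x))

print(heading_toward_origin(10, 0))   # 180.0: east of the origin, so face west
print(heading_toward_origin(0, -10))  # 90.0: below the origin, so face north
```

In the steer method you would assign the result to the Turtle's 'heading' attribute; pointing at the nearest neighbor is the same computation with the neighbor's coordinates in place of the origin.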
1. See wikipedia.org/wiki/Bottom_dealing.
2. The diagrams I am using here are similar to UML (see wikipedia.org/wiki/Unified_Modeling_Language), with a few simplifications.
# Debugging
Different kinds of errors can occur in a program, and it is useful to distinguish among them in order to track them down more quickly:
• Syntax errors are produced by Python when it is translating the source code into byte code. They usually indicate that there is something wrong with the syntax of the program. Example: Omitting the colon at the end of a def statement yields the somewhat redundant message SyntaxError: invalid syntax.
• Runtime errors are produced by the interpreter if something goes wrong while the program is running. Most runtime error messages include information about where the error occurred and what functions were executing. Example: An infinite recursion eventually causes the runtime error “maximum recursion depth exceeded.”
• Semantic errors are problems with a program that runs without producing error messages but doesn’t do the right thing. Example: An expression may not be evaluated in the order you expect, yielding an incorrect result.
The first step in debugging is to figure out which kind of error you are dealing with. Although the following sections are organized by error type, some techniques are applicable in more than one situation.
## Syntax errors
Syntax errors are usually easy to fix once you figure out what they are. Unfortunately, the error messages are often not helpful. The most common messages are SyntaxError: invalid syntax and SyntaxError: invalid token, neither of which is very informative.
On the other hand, the message does tell you where in the program the problem occurred. Actually, it tells you where Python noticed a problem, which is not necessarily where the error is. Sometimes the error is prior to the location of the error message, often on the preceding line.
If you are building the program incrementally, you should have a good idea about where the error is. It will be in the last line you added.
If you are copying code from a book, start by comparing your code to the book’s code very carefully. Check every character. At the same time, remember that the book might be wrong, so if you see something that looks like a syntax error, it might be.
Here are some ways to avoid the most common syntax errors:
• Make sure you are not using a Python keyword for a variable name.
• Check that you have a colon at the end of the header of every
compound statement, including for, while, if, and def statements.
• Make sure that any strings in the code have matching
quotation marks.
• If you have multiline strings with triple quotes (single or double), make
sure you have terminated the string properly. An unterminated string may cause an invalid token error at the end of your program, or it may treat the following part of the program as a string until it comes to the next string. In the second case, it might not produce an error message at all!
• An unclosed opening operator—(, {, or
[—makes Python continue with the next line as part of the current statement. Generally, an error occurs almost immediately in the next line.
• Check for the classic = instead of == inside
a conditional.
• Check the indentation to make sure it lines up the way it
is supposed to. Python can handle space and tabs, but if you mix them it can cause problems. The best way to avoid this problem is to use a text editor that knows about Python and generates consistent indentation.
If nothing works, move on to the next section...
### I keep making changes and it makes no difference.
If the interpreter says there is an error and you don’t see it, that might be because you and the interpreter are not looking at the same code. Check your programming environment to make sure that the program you are editing is the one Python is trying to run.
If you are not sure, try putting an obvious and deliberate syntax error at the beginning of the program. Now run it again. If the interpreter doesn’t find the new error, you are not running the new code.
There are a few likely culprits:
• You edited the file and forgot to save the changes before running it again. Some programming environments do this for you, but some don’t.
• You changed the name of the file, but you are still running the old name.
• Something in your development environment is configured incorrectly.
• If you are writing a module and using import, make sure you don’t give your module the same name as one of the standard Python modules.
• If you are using import to read a module, remember that you have to restart the interpreter or use reload to read a modified file. If you import the module again, it doesn’t do anything.
If you get stuck and you can’t figure out what is going on, one approach is to start again with a new program like “Hello, World!,” and make sure you can get a known program to run. Then gradually add the pieces of the original program to the new one.
## Runtime errors
Once your program is syntactically correct, Python can compile it and at least start running it. What could possibly go wrong?
### My program does absolutely nothing.
This problem is most common when your file consists of functions and classes but does not actually invoke anything to start execution. This may be intentional if you only plan to import this module to supply classes and functions.
If it is not intentional, make sure that you are invoking a function to start execution, or execute one from the interactive prompt. Also see the “Flow of Execution” section below.
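A common pattern for starting execution only when the file is run as a script (standard Python behavior, though the function name main is our choice) looks like this:

```python
# Sketch: a module that does nothing when imported, but starts
# execution when run as a script.
def main():
    print('starting execution')

if __name__ == '__main__':  # true only when run as a script, not on import
    main()
```

With this guard, importing the module to reuse its classes and functions stays silent, while running the file directly invokes main.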
### My program hangs
If a program stops and seems to be doing nothing, it is “hanging.” Often that means that it is caught in an infinite loop or infinite recursion.
• If there is a particular loop that you suspect is the problem, add a print statement immediately before the loop that says “entering the loop” and another immediately after that says “exiting the loop.” Run the program. If you get the first message and not the second, you’ve got an infinite loop. Go to the “Infinite Loop” section below.
• Most of the time, an infinite recursion will cause the program to run for a while and then produce a “RuntimeError: Maximum recursion depth exceeded” error. If that happens, go to the “Infinite Recursion” section below. If you are not getting this error but you suspect there is a problem with a recursive method or function, you can still use the techniques in the “Infinite Recursion” section.
• If neither of those steps works, start testing other loops and other recursive functions and methods.
• If that doesn’t work, then it is possible that you don’t understand the flow of execution in your program. Go to the “Flow of Execution” section below.
#### Infinite Loop
If you think you have an infinite loop and you think you know what loop is causing the problem, add a print statement at the end of the loop that prints the values of the variables in the condition and the value of the condition.
For example:
while x > 0 and y < 0:
    # do something to x
    # do something to y
    print "x: ", x
    print "y: ", y
    print "condition: ", (x > 0 and y < 0)
Now when you run the program, you will see three lines of output for each time through the loop. The last time through the loop, the condition should be false. If the loop keeps going, you will be able to see the values of x and y, and you might figure out why they are not being updated correctly.
#### Infinite Recursion
Most of the time, an infinite recursion will cause the program to run for a while and then produce a Maximum recursion depth exceeded error.
If you suspect that a function or method is causing an infinite recursion, start by checking to make sure that there is a base case. In other words, there should be some condition that will cause the function or method to return without making a recursive invocation. If not, then you need to rethink the algorithm and identify a base case.
If there is a base case but the program doesn’t seem to be reaching it, add a print statement at the beginning of the function or method that prints the parameters. Now when you run the program, you will see a few lines of output every time the function or method is invoked, and you will see the parameters. If the parameters are not moving toward the base case, you will get some ideas about why not.
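For example (a sketch of ours, not code from the book), a countdown with its parameter printed at the top makes it obvious whether n is moving toward the base case:

```python
# Sketch: print the parameter at the top of a recursive function.
# If the printed values never approach the base case (n <= 0),
# you have found your infinite recursion.
def countdown(n):
    print('countdown called with n =', n)
    if n <= 0:                # base case: return without recursing
        print('Blastoff!')
    else:
        countdown(n - 1)      # the parameter moves toward the base case

countdown(3)
```

If the recursive call were countdown(n) instead of countdown(n - 1), the output would show the same value repeating until the recursion limit is hit.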
#### Flow of Execution
If you are not sure how the flow of execution is moving through your program, add print statements to the beginning of each function with a message like “entering function foo,” where foo is the name of the function.
Now when you run the program, it will print a trace of each function as it is invoked.
### When I run the program I get an exception.
If something goes wrong during runtime, Python prints a message that includes the name of the exception, the line of the program where the problem occurred, and a traceback.
The traceback identifies the function that is currently running, and then the function that invoked it, and then the function that invoked that, and so on. In other words, it traces the sequence of function invocations that got you to where you are. It also includes the line number in your file where each of these calls occurs.
The first step is to examine the place in the program where the error occurred and see if you can figure out what happened. These are some of the most common runtime errors:
NameError:
You are trying to use a variable that doesn’t exist in the current environment. Remember that local variables are local. You cannot refer to them from outside the function where they are defined.
TypeError:
There are several possible causes:
• You are trying to use a value improperly. Example: indexing a string, list, or tuple with something other than an integer.
• There is a mismatch between the items in a format string and the items passed for conversion. This can happen if either the number of items does not match or an invalid conversion is called for.
• You are passing the wrong number of arguments to a function or method. For methods, look at the method definition and check that the first parameter is self. Then look at the method invocation; make sure you are invoking the method on an object with the right type and providing the other arguments correctly.
KeyError:
You are trying to access an element of a dictionary using a key that the dictionary does not contain.
AttributeError:
You are trying to access an attribute or method that does not exist. Check the spelling! You can use dir to list the attributes that do exist. If an AttributeError indicates that an object has NoneType, that means that it is None. One common cause is forgetting to return a value from a function; if you get to the end of a function without hitting a return statement, it returns None. Another common cause is using the result from a list method, like sort, that returns None.
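The list-method cause is easy to reproduce; this short sketch (variable names are ours) shows the pitfall and the fix:

```python
# Sketch of the common NoneType cause: list.sort sorts in place
# and returns None, so assigning its result throws the list away.
t = [3, 1, 2]
result = t.sort()   # sorts t, but result is None
print(t)            # [1, 2, 3]
print(result)       # None

# The built-in sorted() returns a new list instead:
print(sorted([3, 1, 2]))   # [1, 2, 3]
```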
IndexError:
The index you are using to access a list, string, or tuple is greater than its length minus one. Immediately before the site of the error, add a print statement to display the value of the index and the length of the array. Is the array the right size? Is the index the right value?
The Python debugger (pdb) is useful for tracking down Exceptions because it allows you to examine the state of the program immediately before the error. You can read about pdb at docs.python.org/lib/module-pdb.html.
### I added so many print statements I get inundated with output
One of the problems with using print statements for debugging is that you can end up buried in output. There are two ways to proceed: simplify the output or simplify the program.
To simplify the output, you can remove or comment out print statements that aren’t helping, or combine them, or format the output so it is easier to understand.
To simplify the program, there are several things you can do. First, scale down the problem the program is working on. For example, if you are searching a list, search a small list. If the program takes input from the user, give it the simplest input that causes the problem.
Second, clean up the program. Remove dead code and reorganize the program to make it as easy to read as possible. For example, if you suspect that the problem is in a deeply nested part of the program, try rewriting that part with simpler structure. If you suspect a large function, try splitting it into smaller functions and testing them separately.
Often the process of finding the minimal test case leads you to the bug. If you find that a program works in one situation but not in another, that gives you a clue about what is going on.
Similarly, rewriting a piece of code can help you find subtle bugs. If you make a change that you think doesn’t affect the program, and it does, that can tip you off.
## Semantic errors
In some ways, semantic errors are the hardest to debug, because the interpreter provides no information about what is wrong. Only you know what the program is supposed to do.
The first step is to make a connection between the program text and the behavior you are seeing. You need a hypothesis about what the program is actually doing. One of the things that makes that hard is that computers run so fast.
You will often wish that you could slow the program down to human speed, and with some debuggers you can. But the time it takes to insert a few well-placed print statements is often short compared to setting up the debugger, inserting and removing breakpoints, and “stepping” the program to where the error is occurring.
### My program doesn’t work.
You should ask yourself these questions:
• Is there something the program was supposed to do but which doesn’t seem to be happening? Find the section of the code that performs that function and make sure it is executing when you think it should.
• Is something happening that shouldn’t? Find code in your program that performs that function and see if it is executing when it shouldn’t.
• Is a section of code producing an effect that is not what you expected? Make sure that you understand the code in question, especially if it involves invocations to functions or methods in other Python modules. Read the documentation for the functions you invoke. Try them out by writing simple test cases and checking the results.
In order to program, you need to have a mental model of how programs work. If you write a program that doesn’t do what you expect, very often the problem is not in the program; it’s in your mental model.
The best way to correct your mental model is to break the program into its components (usually the functions and methods) and test each component independently. Once you find the discrepancy between your model and reality, you can solve the problem.
Of course, you should be building and testing components as you develop the program. If you encounter a problem, there should be only a small amount of new code that is not known to be correct.
### I’ve got a big hairy expression and it doesn’t do what I expect.
Writing complex expressions is fine as long as they are readable, but they can be hard to debug. It is often a good idea to break a complex expression into a series of assignments to temporary variables.
For example:
self.hands[i].addCard(self.hands[self.findNeighbor(i)].popCard())
This can be rewritten as:
neighbor = self.findNeighbor(i)
pickedCard = self.hands[neighbor].popCard()
self.hands[i].addCard(pickedCard)
The explicit version is easier to read because the variable names provide additional documentation, and it is easier to debug because you can check the types of the intermediate variables and display their values.
Another problem that can occur with big expressions is that the order of evaluation may not be what you expect. For example, if you are translating the expression x/2π into Python, you might write:
y = x / 2 * math.pi
That is not correct because multiplication and division have the same precedence and are evaluated from left to right. So this expression computes xπ/2.
A good way to debug expressions is to add parentheses to make the order of evaluation explicit:
y = x / (2 * math.pi)
Whenever you are not sure of the order of evaluation, use parentheses. Not only will the program be correct (in the sense of doing what you intended), it will also be more readable for other people who haven’t memorized the rules of precedence.
### I’ve got a function or method that doesn’t return what I expect
If you have a return statement with a complex expression, you don’t have a chance to print the return value before returning. Again, you can use a temporary variable. For example, instead of:
return self.hands[i].removeMatches()
you could write:
count = self.hands[i].removeMatches()
return count
Now you have the opportunity to display the value of count before returning.
### I'm really, really stuck and I need help.
First, try getting away from the computer for a few minutes. Computers emit waves that affect the brain, causing these symptoms:
• Frustration and rage.
• Superstitious beliefs (“the computer hates me”) and
magical thinking (“the program only works when I wear my hat backward”).
• Random walk programming (the attempt to program by writing
every possible program and choosing the one that does the right thing).
If you find yourself suffering from any of these symptoms, get up and go for a walk. When you are calm, think about the program. What is it doing? What are some possible causes of that behavior? When was the last time you had a working program, and what did you do next?
Sometimes it just takes time to find a bug. I often find bugs when I am away from the computer and let my mind wander. Some of the best places to find bugs are trains, showers, and in bed, just before you fall asleep.
### No, I really need help.
It happens. Even the best programmers occasionally get stuck. Sometimes you work on a program so long that you can’t see the error. A fresh pair of eyes is just the thing.
Before you bring someone else in, make sure you are prepared. Your program should be as simple as possible, and you should be working on the smallest input that causes the error. You should have print statements in the appropriate places (and the output they produce should be comprehensible). You should understand the problem well enough to describe it concisely.
When you bring someone in to help, be sure to give them the information they need:
• If there is an error message, what is it and what part of the program does it indicate?
• What was the last thing you did before this error occurred? What were the last lines of code that you wrote, or what is the new test case that fails?
• What have you tried so far, and what have you learned?
When you find the bug, take a second to think about what you could have done to find it faster. Next time you see something similar, you will be able to find the bug more quickly.
Remember, the goal is not just to make the program work. The goal is to learn how to make the program work.
## Chapter 1
See below for Chapter 1 exercises.
### Exercise 1.4
If you run a 10 kilometer race in 43 minutes 30 seconds, what is your average time per mile? What is your average speed in miles per hour? (Hint: there are about 1.61 kilometers in a mile.)
>>> 10 / 1.61 # Convert kilometers to miles
6.2111801242236018
>>> (43 * 60) + 30 # Convert time to seconds
2610
>>> 2610 / 6.2111801242236018 # what is your average time (seconds) per mile
420.21000000000004
>>> 420.21000000000004 / 60 # what is your average time (minutes) per mile
7.0035000000000007
>>> 60 / 7.0035000000000007 # Miles per hour
8.5671449989291055
>>> 10 / 43.5 # avg kilometers per minute
0.22988505747126436
>>> 0.22988505747126436*60 # kilometers per hour
13.793103448275861
>>> 13.793103448275861 / 1.61 # convert to M.P.H
8.567144998929106
or a one-liner
>>> (10 / 1.61) / (43.5 / 60) # (distance in miles) / (time in hours)
8.567144998929106 # miles/hour
Making it 'pretty' and exploring how print works:
>>> print round((10 / 1.61) / (43.5 / 60), 2), 'mph' # (distance in miles)/(time in hours) rounded to 2 places
8.57 mph
## Chapter 2
### Exercise 2.1
If you type an integer with a leading zero, you might get a confusing error:
>>> zipcode = 02492
^
SyntaxError: invalid token
Other numbers seem to work, but the results are bizarre:
>>> zipcode = 02132
>>> print zipcode
1114
So Python is assuming you typed an octal (base 8) number, where the valid digits are 0, 1, 2, 3, 4, 5, 6 and 7.
Base 8:  00 01 02 03 04 05 06 07 10 11 12 13 14 15 16 17 20 21 22 23 24
Base 10: 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20
Every 8 numbers we increment the columns to the left. The rightmost column counts 'ones'. The column to its left counts 'eights', the next counts 'sixty-fours' (8 × 8), the next 'five-hundred-twelves' (64 × 8), and so on. For more information, read about base eight math.
That is why zipcode = 02492 is invalid: the digit 9 is not a valid octal digit. We can do the conversion manually as follows:
>>> print 02132
1114
>>> (2*512)+(1*64)+(3*8)+(2*1)
1114
>>>
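Python's built-in int can do the same conversion when you pass the base explicitly; this is standard library behavior, shown here as a cross-check of the manual arithmetic:

```python
# Cross-check the manual base-8 conversion with int()'s base argument.
print(int('2132', 8))                    # 1114
print(2 * 512 + 1 * 64 + 3 * 8 + 2 * 1)  # 1114
```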
### Exercise 2.4
The volume of a sphere with radius r is 4/3 π r³. What is the volume of a sphere with radius 5?
>>> pi = 3.1415926535897931
>>> r = 5
>>> 4/3*pi*r**3 # This is the wrong answer
392.69908169872411
>>> r = 5.0 # Radius can be a float here as well, but is not _necessary_.
>>> 4.0/3.0*pi*r**3 # Using floats give the correct answer
523.59877559829886
>>>
Suppose the cover price of a book is $24.95, but bookstores get a 40% discount. Shipping costs $3 for the first copy and 75 cents for each additional copy. What is the total wholesale cost for 60 copies?
 $24.95  Cost
  $9.98  Discount per book
 $14.97  Cost per book after discount
     60  Total number of books
$898.20  Total cost not incl. delivery
  $3.00  First book delivery
     59  Remaining books
  $0.75  Delivery cost per extra book
 $44.25  Total cost for extra books
 $47.25  Total delivery cost
$945.45  Total bill
The first answer below is slightly wrong because 40.0/100.0 returns 0.40000000000000002 rather than exactly 0.4; for more info see IEEE 754 (Standard for Floating-Point Arithmetic).
>>> (24.95-24.95*40.0/100.0)*60+3+0.75*(60-1)
945.44999999999993
>>> 24.95*0.6*60+0.75*(60-1)+3
945.45
If I leave my house at 6:52 am and run 1 mile at an easy pace (8:15 per mile), then 3 miles at tempo (7:12 per mile) and 1 mile at easy pace again, what time do I get home for breakfast?
How I did it:
>>> start = (6*60+52)*60
>>> easy = (8*60+15)*2
>>> fast = (7*60+12)*3
>>> finish_hour = (start + easy + fast)/(60*60.0)
>>> finish_floored = (start + easy + fast)//(60*60) #int() function can also be used to get integer value, but isn't taught yet.
>>> finish_minute = (finish_hour - finish_floored)*60
>>> print 'Finish time was %d:%d' % (finish_hour,finish_minute)
Finish time was 7:30
*** ANOTHER WAY ***
start_time_hr = 6 + 52 / 60.0
easy_pace_hr = (8 + 15 / 60.0 ) / 60.0
tempo_pace_hr = (7 + 12 / 60.0) / 60.0
running_time_hr = 2 * easy_pace_hr + 3 * tempo_pace_hr
breakfast_hr = start_time_hr + running_time_hr
breakfast_min = (breakfast_hr-int(breakfast_hr))*60
breakfast_sec= (breakfast_min-int(breakfast_min))*60
print('breakfast_hr', int(breakfast_hr))
print('breakfast_min', int(breakfast_min))
print('breakfast_sec', int(breakfast_sec))
## Chapter 3
### Exercise 3.3
Python provides a built-in function called len that returns the length of a string, so the value of len('allen') is 5. Write a function named right_justify that takes a string named s as a parameter and prints the string with enough leading spaces so that the last letter of the string is in column 70 of the display.
>>> def right_justify(s):
...     print(' '*(70-len(s))+s)
...
>>> right_justify('allen')
                                                                 allen
>>>
Alternate Solution Using concatenation and repetition
def right_justify(s):
    total_length = 70
    current_length = len(s)
    current_string = s
    while current_length < total_length:
        current_string = " " + current_string
        current_length = len(current_string)
    print(current_string)
OUTPUT
>>> right_justify("monty")
                                                                 monty
### Exercise 3.5
You can see my solution at http://thinkpython.com/code/grid.py.
"""
Solution to Exercise 3.5 on page 27 of Think Python
Allen B. Downey, Version 1.1.24+Kart [Python 3.2]
"""
# here is a mostly-straightforward solution to the
# two-by-two version of the grid.
def do_twice(f):
    f()
    f()

def do_four(f):
    do_twice(f)
    do_twice(f)

def print_beam():
    print('+ - - - -', end='')

def print_post():
    print('|        ', end='')

def print_beams():
    do_twice(print_beam)
    print('+')

def print_posts():
    do_twice(print_post)
    print('|')

def print_row():
    print_beams()
    do_twice(print_posts)

def print_grid():
    do_twice(print_row)
    print_beams()

print_grid()
____________
# another solution
def do_twice(f):
    f()
    f()

def do_four(f):
    do_twice(f)
    do_twice(f)

def print_column():
    print '+----+----+'

def print_row():
    print '|    |    |'

def print_rows():
    do_four(print_row)

def do_block():
    print_column()
    print_rows()

def print_block():
    do_twice(do_block)
    print_column()

print_block()
# nathan moses-gonzales
_________
# straight-forward solution to 4x4 grid
def do_twice(f):
    f()
    f()

def do_four(f):  # not needed for 2x2 grid
    do_twice(f)
    do_twice(f)

def print_beam():
    print('+----', end='')

def print_post():
    print('|    ', end='')

def print_beams():
    do_twice(print_beam)
    print('+')

def print_posts():
    do_twice(print_post)
    print('|')

def print_row():
    print_beams()
    do_twice(print_posts)

def print_grid2x2():
    do_twice(print_row)
    print_beams()

def print_beam4():
    do_four(print_beam)
    print('+')

def print_post4():
    do_four(print_post)
    print('|')

def print_row4():
    print_beam4()
    do_twice(print_post4)

def print_grid4x4():
    do_four(print_row4)
    print_beam4()

print_grid4x4()
-----------------------
# here is a less-straightforward solution to the
# four-by-four grid
# uses do_twice and do_four as defined in the solutions above
def one_four_one(f, g, h):
    f()
    do_four(g)
    h()

def print_plus():
    print '+',

def print_dash():
    print '-',

def print_bar():
    print '|',

def print_space():
    print ' ',

def print_end():
    print

def nothing():
    "do nothing"

def print1beam():
    one_four_one(nothing, print_dash, print_plus)

def print1post():
    one_four_one(nothing, print_space, print_bar)

def print4beams():
    one_four_one(print_plus, print1beam, print_end)

def print4posts():
    one_four_one(print_bar, print1post, print_end)

def print_row():
    one_four_one(nothing, print4posts, print4beams)

def print_grid():
    one_four_one(print4beams, print_row, nothing)

print_grid()
comment = """
After writing a draft of the 4x4 grid, I noticed that many of the
functions had the same structure: they would do something, do
something else four times, and then do something else once.
So I wrote one_four_one, which takes three functions as arguments; it
calls the first one once, then uses do_four to call the second one
four times, then calls the third.
Then I rewrote print1beam, print1post, print4beams, print4posts,
print_row and print_grid using one_four_one.
Programming is an exploratory process. Writing a draft of a program
often gives you insight into the problem, which might lead you to
rewrite the code to reflect the structure of the solution.
--- Allen
"""
print comment
# another solution
def beam():
plus = "+"
minus = "-"*4
print(plus, minus, plus,minus, plus, minus, plus, minus, plus)
def straight():
straight = "|"
space = " "*4
print(straight, space, straight, space, straight, space, straight, space,
straight, space)
straight()
straight()
straight()
straight()
def twice():
beam()
beam()
twice()
twice()
beam()
-- :)
------------------
# Without functions.
print("+ - - - - " * 2 + "+")
print("|\t\t | \t\t|\n" * 3 + "|\t\t | \t\t|")
print("+ - - - - " * 2 + "+")
print("|\t\t | \t\t|\n" * 3 + "|\t\t | \t\t|")
print("+ - - - - " * 2 + "+")
------------------
Why not use the first solution and adapt it to the number of rows?
def do_twice(f):
f()
f()
def do_four(f):
do_twice(f)
do_twice(f)
def print_column():
print '+----+----+----+----+'
def print_row():
print '| | | | |'
def print_rows():
do_four(print_row)
def do_block():
print_column()
print_rows()
def print_block():
do_twice(do_block)
# print_column()
do_twice(do_block)
print_column()
print_block()
-----------------------
# mteodor
def draw_line(bar, middle=' ', repeat=2, length=2):
    """ Draw a single line like this:
    [ (B M*repeat)*length B]
    """
    for k in range(length):
        print("%s %s " % (bar, middle*repeat), end='')
    print(bar)
def draw_grid(length=2, height=2, width=2):
    """ Draw a grid like this:
    + -- + -- +
    |    |    |
    |    |    |
    + -- + -- +
    |    |    |
    |    |    |
    + -- + -- +
    where:
    * length x height are the table size
    * width is the size of a cell/column
    """
    for i in range(height):
        draw_line('+', '-', width, length)
        for j in range(length):
            draw_line('|', ' ', width, length)
    draw_line('+', '-', width, length)
draw_grid(4, 4, 3)
## Chapter 4
### 4.3 Exercise 1
from TurtleWorld import *
world = TurtleWorld()
bob = Turtle()
def square(t):
for i in range(4):
fd(t, 100)
lt(t)
square(bob)
wait_for_user()
### 4.3 Exercise 2
from TurtleWorld import *
world = TurtleWorld()
bob = Turtle()
print(bob)
def square(t, length):
for i in range(4):
fd(t, length)
lt(t)
square(bob, 200)
wait_for_user()
### 4.3 Exercise 3
from TurtleWorld import *
world = TurtleWorld()
bob = Turtle()
print(bob)
def polygon(t, length, n):
for i in range(n):
fd(t, length)
lt(t, 360 / n)
polygon(bob, 50, 8)
wait_for_user()
## Chapter 5
### Exercise 5.2
def countdown(a): # A typical countdown function
if a <= 0:
print("Blastoff")
elif a > 0:
print(a)
countdown(a - 1)
def call_function(n,a): # The countdown function is called "n" number of times. Any other function can be used instead of countdown function.
for i in range(n):
countdown(a)
call_function(3, 10)
## Chapter 9
### Exercise 9.1
fin = open('words.txt')
for line in fin:
word = line.strip()
if len(word) > 20:
print (word)
### Exercise 9.2
fin = open('words.txt')
def has_no_e(word):
for char in word:
if char in 'Ee':
return False
return True
count = 0
for line in fin:
word = line.strip()
if has_no_e(word):
count += 1
print word
percent = (count / 113809.0) * 100
print str(percent) + "% of the words don't have an 'e'."
### Exercise 9.3
fin = open('words.txt')
def avoids(word,letter):
for char in word:
if char in letter:
return False
return True
letter = raw_input('What letters to exclude? ')
count = 0
for line in fin:
word = line.strip()
if avoids(word, letter):
count += 1
print word
percent = (count / 113809.0) * 100
print str(percent) + "% of the words don't have " + letter + '.'
## Chapter 10
### Exercise 10.1
Write a function called nested_sum that takes a nested list of integers and adds up the elements from all of the nested lists.
def nested_sum(nestedList):
    '''
    nestedList: list composed of nested lists containing int.
    Returns the sum of all the int in the nested list
    '''
    newList = []
    #Helper function to flatten the list
    def flatlist(nestedList):
        '''
        Returns a flat list
        '''
        for i in range(len(nestedList)):
            if type(nestedList[i]) == int:
                newList.append(nestedList[i])
            else:
                flatlist(nestedList[i])
        return newList
    flatlist(nestedList)
    print sum(newList)
nested_sum([1, [2, 3], [4, [5, 6]]])
### Exercise 10.2
Write a function named "capitalize_nested" that takes a nested list of strings and returns a new nested list with all strings capitalized.
>>> def capitalize_nested(l):
def capitalize(s):
return s.capitalize()
for n, i in enumerate(l):
if type(i) is list:
l[n] = capitalize_nested(l[n])
elif type(i) is str:
l[n] = capitalize(i)
return l
### Exercise 10.3
Write a function that takes a list of numbers and returns the cumulative sum.
>>> def cumulative(l):
cumulative_sum = 0
new_list = []
for i in l:
cumulative_sum += i
new_list.append(cumulative_sum)
return new_list
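A quick self-contained check of this cumulative-sum solution (the function is restated, without the `>>>` prompts, so the snippet runs on its own):

```python
def cumulative(l):
    cumulative_sum = 0
    new_list = []
    for i in l:
        cumulative_sum += i
        new_list.append(cumulative_sum)
    return new_list

print(cumulative([1, 2, 3]))  # [1, 3, 6]
print(cumulative([]))         # []
```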
### Exercise 10.4
Write a function called middle that takes a list and returns a new list that contains all but the first and last elements.
>>> def middle(x):
res = []
i = 1
while i <= len(x)-2:
res.append(x[i])
i += 1
return res
This can also be done simply with a slice.
>>> def middle(x):
return x[1:-1]
### Exercise 10.5
Write a function called chop that takes a list and modifies it, removing the first and last elements, and returns None.
>>> def chop(x):
del x[:1]
del x[-1:]
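A short demo showing that chop works in place and, like any function without an explicit return, gives back None:

```python
def chop(x):
    del x[:1]
    del x[-1:]

t = [1, 2, 3, 4]
result = chop(t)
print(t)       # [2, 3] -- the list itself was modified
print(result)  # None
```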
## Chapter 11
### Exercise 11.1
Write a function that reads the words in words.txt and stores them as keys in a dictionary. It doesn’t matter what the values are. Then you can use the in operator as a fast way to check whether a string is in the dictionary.
fin = open('words.txt')
englishdict = dict()
def create_diction():
counter = 0
dictionairy = dict()
for line in fin:
word = line.strip()
dictionairy[word] = counter
counter += 1
return dictionairy
### Exercise 11.2
def histogram(s):
d = dict()
for c in s:
d[c] = 1 + d.get(c, 0)
return d
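Restated as a runnable script, the histogram solution reproduces the book's 'parrot' example:

```python
def histogram(s):
    d = dict()
    for c in s:
        d[c] = 1 + d.get(c, 0)
    return d

print(histogram('parrot'))  # {'p': 1, 'a': 1, 'r': 2, 'o': 1, 't': 1}
```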
### Exercise 11.3
Dictionaries have a method called keys that returns the keys of the dictionary, in no particular order, as a list. Modify print_hist to print the keys and their values in alphabetical order.
v = {'p' : 1, 'a' : 1, 'r' : 2, 'o' : 1, 't' : 1}
def print_hist(h):
d = []
d += sorted(h.keys())
for c in d:
print(c, h[c])
OR
v = {'p' : 1, 'a' : 1, 'r' : 2, 'o' : 1, 't' : 1}
def print_hist(h):
for c in sorted(h.keys()):
print c, h[c]
### Exercise 11.4
Modify reverse_lookup so that it builds and returns a list of all keys that map to v, or an empty list if there are none.
def reverse_lookup(d,v):
l = list()
for c in d:
if d[c] == v:
l.append(c)
return l
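A self-contained check of reverse_lookup against a small histogram (the result is sorted only to make the expected output deterministic, since dict iteration order is not part of the contract here):

```python
def reverse_lookup(d, v):
    l = list()
    for c in d:
        if d[c] == v:
            l.append(c)
    return l

h = {'p': 1, 'a': 1, 'r': 2, 'o': 1, 't': 1}
print(sorted(reverse_lookup(h, 1)))  # ['a', 'o', 'p', 't']
print(reverse_lookup(h, 3))          # []
```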
## Chapter 12
### Exercise 12.1
numbers = (1,2,3)
def sumall(numbers):
x = 0
for i in numbers:
x = x + i
print x
sumall(numbers)
or
def sumall(*t):
x = 0
for i in range(len(t)):
x += t[i]
return x
or
def sumall(*args):
t = list(args)
return sum(t)
or
def sumall(*args):
return sum(args)
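The last, shortest variant can be sanity-checked directly:

```python
def sumall(*args):
    return sum(args)

print(sumall(1, 2, 3))  # 6
print(sumall())         # 0
```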
### Exercise 12.2
import random
def sort_by_length(words):
t = []
for word in words:
t.append((len(word),word))
t.sort(reverse=True)
res = []
for length, word in t:
res.append(word)
i=0
final = []
while i <= len(res)-2:
if len(res[i]) == len(res[i+1]):
y_list = [res[i], res[i+1]]
random.shuffle(y_list)
final = final + y_list
i += 2
else:
final.append(res[i])
i += 1
if i == len(res)-1:
final.append(res[i])
return final
or
from random import shuffle
def sort_by_length(words):
r = []
d = dict()
for word in words:
d.setdefault(len(word), []).append(word)
for key in sorted(d, reverse=True):
if len(d[key]) > 1:
shuffle(d[key])
r.extend(d[key])
return r
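A self-contained check of the second solution: words come back longest-first, with same-length words shuffled among themselves, so only the lengths (not the exact order of ties) are deterministic:

```python
from random import shuffle

def sort_by_length(words):
    r = []
    d = dict()
    for word in words:
        d.setdefault(len(word), []).append(word)
    for key in sorted(d, reverse=True):
        if len(d[key]) > 1:
            shuffle(d[key])
        r.extend(d[key])
    return r

result = sort_by_length(['cat', 'horse', 'mouse', 'ox'])
print([len(w) for w in result])  # [5, 5, 3, 2]
```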
### Exercise 12.3
import string
def most_frequent(s):
d = dict()
inv = dict()
for char in s:
if char in string.ascii_letters:
letter = char.lower()
d[letter] = d.get(letter, 0) + 1
for letter, freq in d.items():
    inv.setdefault(freq, []).append(letter)
total = sum(d.values())
for freq in sorted(inv, reverse=True):
    print('{:.2%}:'.format(freq / total), ', '.join(inv[freq]))
## Chapter 13
### Exercise 13.7
from string import punctuation, whitespace, digits
from random import randint
from bisect import bisect_left
def process_file(filename):
h = dict()
fp = open(filename)
for line in fp:
process_line(line, h)
return h
def process_line(line, h):
line = line.replace('-', ' ')
for word in line.split():
word = word.strip(punctuation + whitespace + digits)
word = word.lower()
if word != '':
h[word] = h.get(word, 0) + 1
hist = process_file('emma.txt')
def cum_sum(list_of_numbers):
cum_list = []
for i, elem in enumerate(list_of_numbers):
if i == 0:
cum_list.append(elem)
else:
cum_list.append(cum_list[i-1] + elem)
return cum_list
def random_word(h):
word_list = list(h.keys())
num_list = []
for word in word_list:
num_list.append(h[word])
cum_list = cum_sum(num_list)
i = randint(1, cum_list[-1])
pos = bisect_left(cum_list, i)
return word_list[pos]
print(random_word(hist))
## Chapter 14
### Exercise 14.3
import shelve
def dict_of_signatures_and_words(filename='words.txt'):
d = dict()
for line in open(filename):
word = line.lower().strip()
signature = ''.join(sorted(word))
d.setdefault(signature, []).append(word)
return d
def db_of_anagrams(filename='anagrams', d=dict_of_signatures_and_words()):
db = shelve.open(filename)
for key, values in d.items():
if len(values)>1:
for index, value in enumerate(values):
db[value]=values[:index]+values[index+1:]
db.close()
def print_contents_of_db(filename='anagrams'):
db = shelve.open(filename, flag='r')
for key in sorted(db):
print(key.rjust(12), '\t<==>\t', ', '.join(db[key]))
db.close()
db_of_anagrams()
print_contents_of_db()
### Exercise 14.5
# Replace urllib.request with urllib if you use Python 2.
# I would love to see a more elegant solution for this exercise, possibly by someone who understands html.
import urllib.request
def check(zip_code):
if zip_code == 'done':
return False
else:
if len(zip_code) != 5:
print('\nThe zip code must have five digits!')
return True
def get_html(zip_code):
    gibberish = urllib.request.urlopen('http://www.uszip.com/zip/' + zip_code)
    less_gib = gibberish.read().decode('utf-8')
    return less_gib
def extract_truth(code, key, delimiter):
pos = code.find(key) + len(key)
nearly_true = code[pos:pos+40]
truth = nearly_true.split(delimiter)[0]
return truth
while True:
zip_code = input('Please type a zip code (5 digits) or "done" if want to stop:\n')
if not check(zip_code):
break
code = get_html(zip_code)
invalid_key = '(0 results)'
if invalid_key in code:
print('\nNot a valid zip code.')
continue
name_key = '<title>'
name_del = ' zip'
name = extract_truth(code, name_key, name_del)
pop_key = 'Total population</dt><dd>'
pop_del = '<'
pop = extract_truth(code, pop_key, pop_del)
if not 1 < len(pop) < 9:
pop = 'not available'
print('\n' + name)
print('Population:', pop, '\n')
## Chapter 15
### Exercise 15.1
import math
class Point(object):
"""represents a point in 2-D space"""
def distance(p1, p2):
distance = math.sqrt((p2.x - p1.x)**2 + (p2.y - p1.y)**2)
return distance
p1 = Point()
p2 = Point()
p1.x = 3
p1.y = 2
p2.x = 4
p2.y = 3
print(p1.distance(p2))
## Chapter 16
### Exercise 16.1
def print_time(t):
print '%.2d:%.2d:%.2d' % (t.hour, t.minute, t.second)
or
# Solution for Python3
# More on string formatting: http://docs.python.org/py3k/library/string.html#formatspec
def print_time(t):
# 0 is a fill character, 2 defines the width
print('{}:{:02}:{:02}'.format(t.hour, t.minute, t.second))
### Exercise 16.2
def is_after(t1, t2):
return (t1.hour, t1.minute, t1.second) > (t2.hour, t2.minute, t2.second)
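To try is_after without the book's full Time class, a bare attribute-holder class is enough (the empty Time below is a stand-in for illustration, not the book's definition):

```python
class Time(object):
    """Bare stand-in for the book's Time class."""
    pass

def is_after(t1, t2):
    # Tuples compare element by element, so this checks hours,
    # then minutes, then seconds.
    return (t1.hour, t1.minute, t1.second) > (t2.hour, t2.minute, t2.second)

t1, t2 = Time(), Time()
t1.hour, t1.minute, t1.second = 12, 0, 0
t2.hour, t2.minute, t2.second = 11, 59, 59
print(is_after(t1, t2))  # True
print(is_after(t2, t1))  # False
```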
### Exercise 16.3
# Comment not by the author: This will give a wrong result, if (time.second + seconds % 60) > 60
def increment(time, seconds):
n = seconds/60
time.second += seconds - 60.0*n
time.minute += n
m = time.minute/60
time.minute -= m*60
time.hour += m
or
# Solution for Python3
# Replace '//' by '/' for Python2
def increment(time, seconds):
time.second += seconds
time.minute += time.second//60
time.hour += time.minute//60
time.second %= 60
time.minute %= 60
time.hour %= 24
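A self-contained check of the Python 3 version above, again using a bare stand-in for the book's Time class:

```python
class Time(object):
    """Bare stand-in for the book's Time class."""
    pass

def increment(time, seconds):
    time.second += seconds
    time.minute += time.second // 60
    time.hour += time.minute // 60
    time.second %= 60
    time.minute %= 60
    time.hour %= 24

t = Time()
t.hour, t.minute, t.second = 11, 59, 30
increment(t, 45)
print(t.hour, t.minute, t.second)  # 12 0 15
```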
# A different way of going about it
def increment(time, seconds):
# Converts total to seconds, then back to a readable format
time.second = time.hour*3600 + time.minute*60 + time.second + seconds
(time.minute, time.second) = divmod(time.second, 60)
(time.hour, time.minute) = divmod(time.minute, 60)
### Exercise 16.4
# Solution for Python3
# Replace '//' by '/' for Python2
from copy import deepcopy
def increment(time, seconds):
r = deepcopy(time)
r.second += seconds
r.minute += r.second//60
r.hour += r.minute//60
r.second %= 60
r.minute %= 60
r.hour %= 24
return r
### Exercise 16.5
class Time(object):
"""represents the time of day.
attributes: hour, minute, second"""
time = Time()
time.hour = 11
time.minute = 59
time.second = 30
def time_to_int(time):
minutes = time.hour * 60 + time.minute
seconds = minutes * 60 + time.second
return seconds
def int_to_time(seconds):
time = Time()
minutes, time.second = divmod(seconds, 60)
time.hour, time.minute = divmod(minutes, 60)
return time
seconds = time_to_int(time)
def print_time (x):
print 'The time is %.2d : %.2d : %.2d' % (x.hour, x.minute, x.second)
print_time (time)
newtime = increment (time, 70)
print_time (newtime)
### Exercise 16.6
def time_to_int(time):
minutes = time.hour * 60 + time.minute
seconds = minutes * 60 + time.second
return seconds
def int_to_time(seconds):
time = Time()
minutes, time.second = divmod(seconds, 60)
time.hour, time.minute = divmod(minutes, 60)
return time
def mul_time(time, factor):
seconds = time_to_int(time)
seconds *= factor
seconds = int(seconds)
return int_to_time(seconds)
def average_pace(time, distance):
return mul_time(time, 1/distance)
### Exercise 16.7
Write a class definition for a Date object that has attributes day, month and year. Write a function called increment_date that takes a Date object, date, and an integer, n, and returns a new Date object that represents the day n days after date. Hint: “Thirty days hath September...” Challenge: does your function deal with leap years correctly? See wikipedia.org/wiki/Leap_year.
class Date(object):
"""represents a date.
attributes: day, month, year"""
def print_date(date):
# German date format
print('{}.{}.{}'.format(date.day, date.month, date.year))
def is_leap_year(year):
# http://en.wikipedia.org/wiki/Leap_year#Algorithm
if year % 4 == 0:
if year % 100 == 0:
if year % 400 == 0:
return True
return False
return True
return False
def month_list(year):
if is_leap_year(year):
return [31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
return [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
def days_of_year(year):
if is_leap_year(year):
return 366
return 365
def date_to_int(date):
days = 0
for year in range(1, date.year):
days += days_of_year(year)
month_days = month_list(date.year)
for month in range(1, date.month):
days += month_days[month - 1]
days += date.day - 1
return days
def int_to_date(days):
date = Date()
date.year = 1
next_days = 365
while days >= next_days:
date.year += 1
days -= next_days
next_days = days_of_year(date.year)
date.month = 1
next_days = 31
month_days = month_list(date.year)
while days >= next_days:
date.month += 1
days -= next_days
next_days = month_days[date.month - 1]
date.day = days + 1
return date
def increment_date(date, n):
days = date_to_int(date)
return int_to_date(days + n)
d1 = Date()
d1.day, d1.month, d1.year = 8, 3, 2012
print_date(d1)
d2 = increment_date(d1, 7)
print_date(d2)
### Exercise 16.8
1. Use the datetime module to write a program that gets the current date and prints the day of the week.
from datetime import date
def current_weekday():
i = date.today()
print i.strftime('%A')
current_weekday()
2. Write a program that takes a birthday as input and prints the user’s age and the number of days, hours, minutes and seconds until their next birthday.
# Python3 solution. Replace "input" by "raw_input" for Python2.
from datetime import datetime
def time_until_birthday():
    dob_input = input('Please type your date of birth in the format "mm/dd/yyyy": ')
dob = datetime.strptime(dob_input, '%m/%d/%Y')
now = datetime.now()
if now > datetime(now.year, dob.month, dob.day):
age = now.year - dob.year
next_year = True
else:
age = now.year - dob.year - 1
next_year = False
time_to_birthday = datetime(now.year + next_year,
dob.month, dob.day) - now
days = time_to_birthday.days
hours, remainder = divmod(time_to_birthday.seconds, 3600)
minutes, seconds = divmod(remainder, 60)
print("\nYou are {} years old.".format(age))
print(("You have {0} days, {1} hours, {2} minutes and {3} "
"seconds left until your next birthday.").format(
days, hours, minutes, seconds))
time_until_birthday()
## Chapter 17
### Exercise 17.8
2.
from visual import scene, sphere
scene.range = (256, 256, 256)
scene.center = (128, 128, 128)
t = range(0, 256, 51)
for x in t:
    for y in t:
        for z in t:
            pos = x, y, z
            color = (x/255., y/255., z/255.)
            sphere(pos=pos, color=color)
3. Download http://thinkpython.com/code/color_list.py and use the function read_colors to generate a list of the available colors on your system, their names and RGB values. For each named color draw a sphere in the position that corresponds to its RGB values.
# As there currently (2013-04-12) is no function read_colors in color_list.py
# I use a workaround and simply import the variable COLORS from color_list.py.
# I then use the function all_colors() on COLORS to get a list of the colors.
from color_list import COLORS
from visual import scene, sphere
def all_colors(colors_string=COLORS):
"""Extract a list of unique RGB-tuples from COLORS.
The tuples look like (r, g, b), where r, g and b are each integers in
[0, 255].
"""
# split the string into lines and remove irrelevant lines
lines = colors_string.split('\n')[2:-2]
# split the individual lines and remove the names
numbers_only = [line.split()[:3] for line in lines]
# turn strings into ints and rgb-lists into tuples
rgb_tuples = [tuple([int(s) for s in lst]) for lst in numbers_only]
# return a list of unique tuples
return list(set(rgb_tuples))
def make_spheres(color_tuples=all_colors()):
scene.range = (256, 256, 256)
scene.center = (128, 128, 128)
for (r, g, b) in color_tuples:
sphere(pos=(r, g, b), radius=7, color=(r/255., g/255., b/255.))
if __name__ == '__main__':
make_spheres()
## Chapter 3.5
### calculator
#recursion or recursive
print "\n INDEX\n""\n C=1 for addition\n""\n C=2 for subtraction\n""\n C=3 for multiplication\n""\n C=4 for division\n""\n C=5 to find modulus\n""\n C=6 to find factorial\n"
C=input("Enter your choice: ")
def add(x,y):
    c=x+y
    print x,"+",y,"=",c
def sub(x,y):
    c=x-y
    print x,"-",y,"=",c
def mul(x,y):
    c=x*y
    print x,"*",y,"=",c
def div(x,y):
    c=x/y
    print x,"/",y,"=",c
def mod(x,y):
    c=x%y
    print x,"%",y,"=",c
if C==6:
    def f(n):
        if n==1:
            print n
            return n
        else:
            print n,"*",
            return n*f(n-1)
    n=input("Enter your no here: ")
    print f(n)
if C==1:
    a=input("Enter your first no here: ")
    b=input("Enter your second no here: ")
    add(a,b)
elif C==2:
    a=input("Enter your first no here: ")
    b=input("Enter your second no here: ")
    sub(a,b)
elif C==3:
    a=input("Enter your first no here: ")
    b=input("Enter your second no here: ")
    mul(a,b)
elif C==4:
    a=input("Enter your first no here: ")
    b=input("Enter your second no here: ")
    div(a,b)
elif C==5:
    a=input("Enter your first no here: ")
    b=input("Enter your second no here: ")
    mod(a,b)
### palindrome
def first(word):
return word[0]
def last(word):return word[-1]
def middle(word):
return word[1:-1]
def palindrome(word):
if first(word)==last(word):
word = middle(word)
n=len(word)
if n<2:
print "palindrome"
else:
return palindrome(word)
else:
print "not palindrome"
word=raw_input("Enter the string:")
palindrome(word)
### sum of all digits
def sum_of_n_numbers(number):
if(number==0):
return 0
else:
return number + sum_of_n_numbers(number-1)
num = raw_input("Enter a number:")
num=int(num)
sum = sum_of_n_numbers(num)
print sum
###another answer in case of while loops
def sum_of_Digits(number):
sum=0
while number>0:
digit=number%10
sum=sum+digit
number=number/10
return sum
num=raw_input("enter the number")
num=int(num)
sum_of_digits=sum_of_Digits(num)
print sum_of_digits
## Chapter 18
### Exercise 18.5
class Card(object):
suit_names = ['Clubs', 'Diamonds', 'Hearts', 'Spades']
rank_names = [None, 'Ace', '2', '3', '4', '5', '6', '7',
'8', '9', '10', 'Jack', 'Queen', 'King']
def __init__(self, suit = 0, rank = 2):
self.suit = suit
self.rank = rank
def __str__(self):
return '%s of %s' % (Card.rank_names[self.rank],
Card.suit_names[self.suit])
def __cmp__(self, other):
c1 = (self.suit, self.rank)
c2 = (other.suit, other.rank)
return cmp(c1, c2)
def is_valid(self):
return self.rank > 0
class Deck(object):
def __init__(self, label = 'Deck'):
self.label = label
self.cards = []
for i in range(4):
for k in range(1, 14):
card = Card(i, k)
self.cards.append(card)
def __str__(self):
res = []
for card in self.cards:
res.append(str(card))
print self.label
return '\n'.join(res)
def deal_card(self):
    return self.cards.pop(0)
def add_card(self, card):
    self.cards.append(card)
def shuffle(self):
import random
random.shuffle(self.cards)
def sort(self):
self.cards.sort()
def move_cards(self, other, num):
    for i in range(num):
        other.cards.append(self.deal_card())
def deal_hands(self, num_hands, num_cards):
if num_hands*num_cards > 52:
return 'Not enough cards.'
l = []
for i in range(1, num_hands + 1):
hand_i = Hand('Hand %d' % i)
self.move_cards(hand_i, num_cards)
l.append(hand_i)
return l
class Hand(Deck):
def __init__(self, label = ''):
self.cards = []
self.label = label
# 18-6, 1-4:
class PokerHand(Hand):
def suit_hist(self):
self.suits = {}
for card in self.cards:
self.suits[card.suit] = self.suits.get(card.suit, 0) + 1
return self.suits
def rank_hist(self):
self.ranks = {}
for card in self.cards:
self.ranks[card.rank] = self.ranks.get(card.rank, 0) + 1
return self.ranks
def P(self):
self.rank_hist()
for val in self.ranks.values():
if val >= 2:
return True
return False
def TP(self):
self.rank_hist()
count = 0
for val in self.ranks.values():
if val == 4:
return True
elif val >= 2 and val < 4:
count += 1
return count >= 2
def TOAK(self):
self.rank_hist()
for val in self.ranks.values():
if val >= 3:
return True
return False
def STRseq(self):
seq = []
l = STRlist()
self.rank_hist()
h = self.ranks.keys()
h.sort()
if len(h) < 5:
return []
# Accounts for high Aces:
if 1 in h:
h.append(1)
for i in range(5, len(h)+1):
if h[i-5:i] in l:
seq.append(h[i-5:i])
return seq
def STR(self):
seq = self.STRseq()
return seq != []
def FL(self):
self.suit_hist()
for val in self.suits.values():
if val >= 5:
return True
return False
def FH(self):
d = self.rank_hist()
keys = d.keys()
for key in keys:
if d[key] >= 3:
keys.remove(key)
for key in keys:
if d[key] >= 2:
return True
return False
def FOAK(self):
self.rank_hist()
for val in self.ranks.values():
if val >= 4:
return True
return False
def SFL(self):
seq = self.STRseq()
if seq == []:
return False
for list in seq:
list_suits = []
for index in list:
for card in self.cards:
if card.rank == index:
list_suits.append(card.suit)
list_hist = histogram(list_suits)
for key in list_hist.keys():
if list_hist[key] >= 5:
return True
return False
def classify(self):
self.scores = []
hands = ['Pair', 'Two-Pair',
'Three of a Kind', 'Straight',
'Flush', 'Full House',
'Four of a Kind', 'Straight Flush']
if self.P():
self.scores.append(1)
if self.TP():
self.scores.append(2)
if self.TOAK():
self.scores.append(3)
if self.STR():
self.scores.append(4)
if self.FL():
self.scores.append(5)
if self.FH():
self.scores.append(6)
if self.FOAK():
self.scores.append(7)
if self.SFL():
self.scores.append(8)
if self.scores != []:
return hands[max(self.scores)-1]
def STRlist():
s = []
for i in range(0,9):
s.append(range(1,14)[i:i+5])
s.append([10,11,12,13,1])
return s
def histogram(l):
d = dict()
for k in range(len(l)):
d[l[k]] = 1 + d.get(l[k],0)
return d
# 18-6, 5:
def p(config = '', trials = 10000, n = 1):
"""Estimates probability that the
nth dealt hand will be config. A hand
consists of seven cards."""
successes = 0
for i in range(1, trials + 1):
deck = Deck('Deck %d' % i)
deck.shuffle()
box = Hand()
deck.move_cards(box, (n-1)*7)
hand = PokerHand('Poker Hand %d' % i)
deck.move_cards(hand, 7)
if hand.classify() == config:
successes += 1
return 1.0*successes/trials
#Iterate until first desired config.:
if __name__ == '__main__':
c = 1
while True:
deck = Deck()
deck.shuffle()
hand = PokerHand('Poker Hand %d' % c)
deck.move_cards(hand, 5)
print hand
print hand.SFL()
if hand.SFL():
print hand.STRseq()
break
print ''
c += 1
Code by Victor Alvarez
## Appendix B
### Exercise B.3
Write a function called bisection that takes a sorted list and a target value and returns the index of the value in the list, if it’s there, or None if it’s not.
from bisect import bisect_left
def bisection(sorted_list, item):
i = bisect_left(sorted_list, item)
if i < len(sorted_list) and sorted_list[i] == item:
return i
else:
return None
if __name__ == '__main__':
a = [1, 2, 3]
print(bisection(a, 2)) # expect 1
b = [1, 3]
print(bisection(b, 2)) # expect None
c = [1, 2]
print(bisection(c, 3)) # expect None
# mars.tensor.linalg.norm
mars.tensor.linalg.norm(x, ord=None, axis=None, keepdims=False)
Matrix or vector norm.
This function is able to return one of eight different matrix norms, or one of an infinite number of vector norms (described below), depending on the value of the ord parameter.
Parameters
• x (array_like) – Input tensor. If axis is None, x must be 1-D or 2-D.
• ord ({non-zero int, inf, -inf, 'fro', 'nuc'}, optional) – Order of the norm (see table under Notes). inf means mars tensor’s inf object.
• axis ({int, 2-tuple of ints, None}, optional) – If axis is an integer, it specifies the axis of x along which to compute the vector norms. If axis is a 2-tuple, it specifies the axes that hold 2-D matrices, and the matrix norms of these matrices are computed. If axis is None then either a vector norm (when x is 1-D) or a matrix norm (when x is 2-D) is returned.
• keepdims (bool, optional) – If this is set to True, the axes which are normed over are left in the result as dimensions with size one. With this option the result will broadcast correctly against the original x.
Returns
n – Norm of the matrix or vector(s).
Return type
float or Tensor
Notes
For values of ord <= 0, the result is, strictly speaking, not a mathematical ‘norm’, but it may still be useful for various numerical purposes.
The following norms can be calculated:
| ord   | norm for matrices            | norm for vectors           |
|-------|------------------------------|----------------------------|
| None  | Frobenius norm               | 2-norm                     |
| 'fro' | Frobenius norm               | --                         |
| 'nuc' | nuclear norm                 | --                         |
| inf   | max(sum(abs(x), axis=1))     | max(abs(x))                |
| -inf  | min(sum(abs(x), axis=1))     | min(abs(x))                |
| 0     | --                           | sum(x != 0)                |
| 1     | max(sum(abs(x), axis=0))     | as below                   |
| -1    | min(sum(abs(x), axis=0))     | as below                   |
| 2     | 2-norm (largest sing. value) | as below                   |
| -2    | smallest singular value      | as below                   |
| other | --                           | sum(abs(x)**ord)**(1./ord) |
The Frobenius norm is given by [1]:
$$||A||_F = \left[ \sum_{i,j} |a_{i,j}|^2 \right]^{1/2}$$
The nuclear norm is the sum of the singular values.
References
[1] G. H. Golub and C. F. Van Loan, Matrix Computations, Baltimore, MD, Johns Hopkins University Press, 1985, pg. 15
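The Frobenius formula can be sanity-checked in plain Python (no Mars required) against the 3×3 example matrix used in the Examples section, whose entries are -4..4:

```python
import math

# The reshape of mt.arange(9) - 4 from the Examples section.
b = [[-4, -3, -2],
     [-1,  0,  1],
     [ 2,  3,  4]]

# Frobenius norm: square root of the sum of squared entries.
frobenius = math.sqrt(sum(a * a for row in b for a in row))
print(frobenius)  # 7.745966692414834  (sqrt(60))
```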
Examples
>>> from mars.tensor import linalg as LA
>>> import mars.tensor as mt
>>> a = mt.arange(9) - 4
>>> a.execute()
array([-4, -3, -2, -1, 0, 1, 2, 3, 4])
>>> b = a.reshape((3, 3))
>>> b.execute()
array([[-4, -3, -2],
[-1, 0, 1],
[ 2, 3, 4]])
>>> LA.norm(a).execute()
7.745966692414834
>>> LA.norm(b).execute()
7.745966692414834
>>> LA.norm(b, 'fro').execute()
7.745966692414834
>>> LA.norm(a, mt.inf).execute()
4.0
>>> LA.norm(b, mt.inf).execute()
9.0
>>> LA.norm(a, -mt.inf).execute()
0.0
>>> LA.norm(b, -mt.inf).execute()
2.0
>>> LA.norm(a, 1).execute()
20.0
>>> LA.norm(b, 1).execute()
7.0
>>> LA.norm(a, -1).execute()
0.0
>>> LA.norm(b, -1).execute()
6.0
>>> LA.norm(a, 2).execute()
7.745966692414834
>>> LA.norm(b, 2).execute()
7.3484692283495345
>>> LA.norm(a, -2).execute()
0.0
>>> LA.norm(b, -2).execute()
4.351066026358965e-18
>>> LA.norm(a, 3).execute()
5.8480354764257312
>>> LA.norm(a, -3).execute()
0.0
Using the axis argument to compute vector norms:
>>> c = mt.array([[ 1, 2, 3],
... [-1, 1, 4]])
>>> LA.norm(c, axis=0).execute()
array([ 1.41421356, 2.23606798, 5. ])
>>> LA.norm(c, axis=1).execute()
array([ 3.74165739, 4.24264069])
>>> LA.norm(c, ord=1, axis=1).execute()
array([ 6., 6.])
Using the axis argument to compute matrix norms:
>>> m = mt.arange(8).reshape(2,2,2)
>>> LA.norm(m, axis=(1,2)).execute()
array([ 3.74165739, 11.22497216])
>>> LA.norm(m[0, :, :]).execute(), LA.norm(m[1, :, :]).execute()
(3.7416573867739413, 11.224972160321824) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.710769772529602, "perplexity": 14447.22818094051}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00238.warc.gz"} |
“Where the hell have you been?” asked the student.
The Mathematical Ninja raised an eyebrow into his well-tanned forehead. That, he didn’t say, would be telling.
The student sighed and sketched out a triangle. “I know that doesn’t look like 2º,” she said, to forestall any criticism.
The Mathematical Ninja nodded: “It’s a sketch. The details don’t matter.”
“The hypotenuse is 100 metres,” she said, “and I want the… adjacent side. Ah, rubbish. I could do it with the opposite - that would be about three metres, right?”
“Three and half or so, yep,” said the Mathematical Ninja. “But you can work out $\cos(x)$ for small angles, too. It’s a bit less than 1, generally, but more precisely, it’s $\cos(x) \simeq 1 - \frac{x^2}{2}$.”
“Where does that come from?”
“Euler series,” said the Mathematical Ninja. “Alternatively, you can say $\sin(x) \simeq x$ and use the binomial expansion on $(1 - \sin^2(x))^{1/2}$.”
“I’ll take your word for it,” she said. “So, to get $\cos(x)$, I’m going to need to convert to radians, square, halve, and take away from one? Sounds like a lot of work.”
“It’s not trivial,” admitted the Mathematical Ninja, “but none of those things are too difficult.”
The student narrowed her eyes. “Right,” she said. “Two degrees is about $\frac{7}{200}$, which squares to $\frac{49}{40,000}$ - that’s ridiculous, isn’t it? Wait, I can round that to $\frac{50}{40,000}$, which cancels to $\frac{1}{800}$. Halve it, that’s $\frac{1}{1,600}$, which is… argh! about $\frac{6}{10,000}$?”
The Mathematical Ninja nodded. “Keep going!”
“So, that’s the fourth decimal place, 0.0006. $\cos(2º) \simeq 0.9994$?”
“Try it!” said the Mathematical Ninja.
“$0.99939$!” said the student, “so the adjacent side is 99.94m!”
The Mathematical Ninja smiled.
The student thought the Mathematical Ninja should take more holidays.
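The Ninja's recipe (convert to radians, square, halve, subtract from one) is easy to check numerically. A short Python sketch, not from the original post:

```python
import math

x = math.radians(2)        # 2 degrees, roughly 7/200 radians
approx = 1 - x**2 / 2      # small-angle approximation for cosine
exact = math.cos(x)

print(round(approx, 5))    # 0.99939
print(round(exact, 5))     # 0.99939
```

The two agree to five decimal places; the error of the approximation is of order $x^4/24$, which for 2º is around $6 \times 10^{-8}$.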
https://www.physicsforums.com/threads/characterisitics-of-a-parabolic-pde.517754/

# Characteristics of a Parabolic PDE
1. Jul 29, 2011
### Tohiko
Greetings,
I want to find the characteristics of the following parabolic PDE
$u_t + v u_x + w u_y + a(t, x,y,v,w, u) u_v + b(t, x,y,v,w, u) u_w - u_{vv} - u_{ww} = c(t,x,y,v,w,u)$
Where $u=u(t,x,y,v,w)$
I know how to find the characteristics of a 2nd-order one-dimensional PDE. I also know how to find the Riemann invariants of a hyperbolic multidimensional PDE.
But how do I find the characteristics of a 2nd-order, nonlinear, multidimensional, parabolic PDE?
Any pointers or references are much appreciated.
Thanks
2. Jul 29, 2011
### hunt_mat
I am unsure what exactly you mean here. You can, if you wish, find the standard form for your equation by finding the characteristic directions in your second-order system.
From what I can gather about your system you have one variable $u$ which is a function of $t,x,y,v,w$ Is this the case?
3. Jul 29, 2011
### Tohiko
That's exactly the case
And as you said, what I want to find are the characteristic directions for this PDE.
It's just that I don't know how to generalize what I already know in 1D case to this case.
4. Jul 29, 2011
### hunt_mat
Characteristics are generally defined as where the second-order derivatives are not uniquely defined, so you could start with this idea: you will get a 5x5 determinant that, when expanded, gives a polynomial you would have to solve.
5. Jul 29, 2011
### Tohiko
Thank you hunt_mat,
I think I understand your idea. I will try it out and see what I'd get.
Thank you again
6. Jul 29, 2011
### hunt_mat
I got this from the book applied partial differential equation be Ockendon et al. They even have some discussion about high dimensional systems at the end of chapter 3. It comes down to looking at quadratic forms.
7. Jul 29, 2011
### HallsofIvy
Staff Emeritus
A hyperbolic pde has two independent characteristics.
A parabolic pde has only one.
An elliptic pde has none.
8. Jul 29, 2011
### hunt_mat
That's true for a system with two variables; however, this is not the case in the question.
Last edited: Jul 29, 2011
9. Jul 29, 2011
### Tohiko
I'm trying out this idea. And I'm a little perplexed
I don't have access to the book that you mentioned but I'm following these notes
http://www2.imperial.ac.uk/~jdg/AE2MAPDE.PDF
Section 2.1, pages 8 and 9
Following similar ideas to what these notes have I wrote the differentials: dt, dx, dy, dv, dw
And set up a 6x15 matrix multiplying a vector of 2nd order derivatives (mixed and otherwise). But then I don't know what other rows to add to the matrix.
You said that I should obtain a 5x5 determinant, but frankly I don't know how I would obtain it.
Last edited: Jul 29, 2011
10. Jul 29, 2011
### hunt_mat
My mistake, it should be a 15x15 matrix. Let me give an example with 2 variables. Differentiate the first derivative, say $\partial_{x}u$ with respect to the characteristic variable to obtain:
$$\left( \frac{\partial u}{\partial x}\right) '(s)=\frac{\partial^{2}u}{\partial x^{2}}\dot{x} +\frac{\partial^{2}u}{\partial x\partial y}\dot{y}$$
Likewise:
$$\left( \frac{\partial u}{\partial y}\right) '(s)=\frac{\partial^{2}u}{\partial x\partial y}\dot{x} +\frac{\partial^{2}u}{\partial y^{2}}\dot{y}$$
Along with the differential equation itself:
$$a\frac{\partial^{2}u}{\partial x^{2}}+b\frac{\partial^{2}u}{\partial x\partial y}+c\frac{\partial^{2}u}{\partial y^{2}}=d$$
Now, as we said, the definition of characteristics is where the second-order derivatives are not unique, which means that the determinant:
$$\left| \begin{array}{ccc} a & b & c \\ \dot{x} & \dot{y} & 0 \\ 0 & \dot{x} & \dot{y} \end{array}\right| =0$$
From here one obtains the quadratic necessary to find the characteristics. Do this for your system.
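As a numerical sanity check of this determinant criterion, here is a short Python sketch of the two-variable case; the coefficients below are invented for illustration, chosen so that $b^2 - 4ac = 0$ (the parabolic case, with one repeated characteristic direction):

```python
def det3(m):
    # 3x3 determinant by cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# hypothetical coefficients with b**2 - 4*a*c = 0 (parabolic)
a, b, c = 1.0, 4.0, 4.0
# the determinant condition expands to a*ydot**2 - b*xdot*ydot + c*xdot**2 = 0;
# its repeated root gives the single characteristic slope ydot/xdot = b/(2a)
xdot, ydot = 1.0, b / (2 * a)
M = [[a, b, c],
     [xdot, ydot, 0.0],
     [0.0, xdot, ydot]]
print(det3(M))   # 0.0 -> (xdot, ydot) is a characteristic direction
```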
11. Jul 30, 2011
### Tohiko
That's what I did, but as I said I don't have enough rows.
I differentiated dt,dx,dy,dv and dw with respect to the characteristic variable s. These gave me 5 rows. Plus one row from the differential equation itself. So I obtain 6 rows of the 15x15 matrix.
But what about the other 9 rows?
12. Jul 30, 2011
### Tohiko
I think I understand what I'm missing. I read a paper about finding the characteristics of a 3D PDE here (http://aerade.cranfield.ac.uk/ara/arc/rm/2615.pdf).
In there since the PDE is a function of 3 variables the characteristics would be a function of 2 variables \alpha and \beta. Then they differentiate the differentials dx1, dx2 and dx3 wrt \alpha and \beta obtaining 6 equations of which 5 are independent (since the PDE defines a relation between 2nd order differentials). These 5 equations along with the original PDE give a 6x6 matrix whose determinant should equal 0.
In my case since I have 5 independent variables I'm guessing the characteristics would be a function of 4 variables p,q,r and s
Then I differentiate the differentials dt, dx, dy, dv, dw with respect to p, q, r and s, obtaining 20 equations, of which I'm guessing only 14 are independent. These 14 equations along with the original PDE give another PDE for the characteristic surfaces.
Does this sound right? If it is, do you know why only 14 of the equations are independent?
https://brilliant.org/problems/repunit-factors/

# Repunit Factors
A repunit is an integer that consists only of copies of the digit $$1$$. Let $$R_n$$ be the repunit with $$n$$ digits. Let a ministring be a sequence of digits composed of a nonnegative number of '$$0$$' digits followed by a positive number of '$$1$$' digits. Find the number of divisors of $$R_{20}$$ that consist entirely of copies of a single ministring.
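One way to explore the problem is brute force on a smaller repunit. The sketch below assumes one reading of the definition: a divisor counts if its decimal string, possibly after restoring the leading zeros of the first ministring copy (an integer cannot print them), is a repetition of a single ministring.

```python
import re

def is_ministring_power(n):
    """True if n's decimal form is copies of one ministring (0s then 1s).

    An integer can't show leading zeros, so we allow padding the string
    with zeros to restore the first copy -- e.g. 10101 -> 010101 = '01' * 3.
    This reading of the problem is an assumption.
    """
    s = str(n)
    for pad in range(len(s)):
        if re.fullmatch(r"(0*1+)\1*", "0" * pad + s):
            return True
    return False

# brute-force demo on the smaller repunit R_6 = 111111
r6 = (10**6 - 1) // 9
hits = [d for d in range(1, r6 + 1) if r6 % d == 0 and is_ministring_power(d)]
print(hits)   # [1, 11, 111, 1001, 10101, 111111]
```

For $$R_6$$ this counts six such divisors; the same search applied to $$R_{20}$$ (with a proper factorisation rather than trial division) would answer the question.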
http://www.ck12.org/book/Algebra-II/r1/section/11.1/
# 11.1: Matrices
Created by: CK-12
Name: __________________
## Introduction to Matrices
The following matrix, stolen from a rusted lockbox in the back of a large, dark lecture hall in a school called Hogwart’s, is the gradebook for Professor Severus Snape’s class in potions.
|  | Poison | Cure | Love philter | Invulnerability |
| --- | --- | --- | --- | --- |
| Granger, $H$ | $100$ | $105$ | $99$ | $100$ |
| Longbottom, $N$ | $80$ | $90$ | $85$ | $85$ |
| Malfoy, $D$ | $95$ | $90$ | $0$ | $85$ |
| Potter, $H$ | $70$ | $75$ | $70$ | $75$ |
| Weasley, $R$ | $85$ | $90$ | $95$ | $90$ |
When I say this is a “matrix” I’m referring to the numbers in boxes. The labels (such as “Granger, $H$” or “Poison”) are labels that help you understand the numbers in the matrix, but they are not the matrix itself.
Each student is designated by a row. A row is a horizontal list of numbers.
1. Below, copy the row that represents all the grades for “Malfoy, $D$.”
Each assignment is designated by a column, which is a vertical list of numbers. (This is easy to remember if you picture columns in Greek architecture, which are big and tall and...well, you know...vertical.)
2. Below, copy the column that represents all the grades on the “Love philter” assignment.
I know what you’re thinking, this is so easy it seems pointless. Well, it’s going to stay easy until tomorrow. So bear with me.
The dimensions of a matrix are just the number of rows, and the number of columns...in that order. So a “$10 \times 20$” matrix means $10$ rows and $20$ columns.
3. What are the dimensions of Dr. Snape’s gradebook matrix?
For two matrices to be equal, they must be exactly the same in every way: same dimensions, and every cell the same. If everything is not precisely the same, the two matrices are not equal.
4. What must $x$ and $y$ be, in order to make the following matrix equal to Dr. Snape’s gradebook matrix?
$\begin{bmatrix}100 & 105 & 99 & 100\\80 & x+y & 85 & 85\\95 & 90 & 0 & 85\\70 & 75 & x-y & 75\\85 & 90 & 95 & 90\end{bmatrix}$
Finally, it is possible to add or subtract matrices. But you can only do this when the matrices have the same dimensions!!! If two matrices do not have exactly the same dimensions, you cannot add or subtract them. If they do have the same dimensions, you add and subtract them just by adding or subtracting each individual cell.
As an example: Dr. Snape has decided that his grades are too high, and he needs to curve them downward. So he plans to subtract the following grade-curving matrix from his original grade matrix.
$\begin{bmatrix}5 & 0 & 10 & 0\\5 & 0 & 10 & 0\\5 & 0 & 10 & 0\\10 & 5 & 15 & 5\\5 & 0 & 10 & 0\end{bmatrix}$
5. Below, write the new grade matrix.
6. In the grade-curving matrix, all rows except the fourth one are identical. What is the effect of the different fourth row on the final grades?
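The cell-by-cell rule is easy to mirror in code. A Python sketch of Professor Snape's curve, with the numbers copied from the two matrices above:

```python
grades = [[100, 105,  99, 100],
          [ 80,  90,  85,  85],
          [ 95,  90,   0,  85],
          [ 70,  75,  70,  75],
          [ 85,  90,  95,  90]]

curve  = [[ 5, 0, 10, 0],
          [ 5, 0, 10, 0],
          [ 5, 0, 10, 0],
          [10, 5, 15, 5],
          [ 5, 0, 10, 0]]

# matrices of equal dimensions subtract cell by cell
curved = [[g - c for g, c in zip(grow, crow)]
          for grow, crow in zip(grades, curve)]
print(curved[0])   # [95, 105, 89, 100]
```

Note how the different fourth row of the curve matrix shows up only in the fourth row of the result.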
Name: __________________
Introduction to Matrices—Homework
1. In the following matrix...
$\begin{bmatrix}1 & 3 & 7 & 4 & 9 & 3\\6 & 3 & 7 & 0 & 8 & 1\\ 8 & 5 & 0 & 7 & 3 & 2\\8 & 9 & 5 & 4 & 3 & 0\\6 & 7 & 4 & 2 & 9 & 1\end{bmatrix}$
a. What are the dimensions? $\underline{\;\;\;} \times \underline{\;\;\;}$
b. Copy the second column here:
c. Copy the third row here:
d. Write another matrix which is equal to this matrix.
2. Add the following two matrices.
$\begin{bmatrix}2 & 6 & 4 \\9 & n & 8 \end{bmatrix} + \begin{bmatrix}5 & 7 & 1\\9 & -n & 3n \end{bmatrix} =$
3. Add the following two matrices.
$\begin{bmatrix}2 & 6 & 4 \\9 & n & 8 \end{bmatrix} + \begin{bmatrix}5 & 7 \\9 & -n \end{bmatrix} =$
4. Subtract the following two matrices.
$\begin{bmatrix}2 & 6 & 4 \\9 & n & 8 \end{bmatrix} - \begin{bmatrix}5 & 7 & 1\\9 & -n & 3n \end{bmatrix} =$
5. Solve the following equation for $x$ and $y$. (That is, find what $x$ and $y$ must be for this equation to be true.)
$\begin{bmatrix}2x\\5y \end{bmatrix}+ \begin{bmatrix}x+y\\-6x \end{bmatrix}= \begin{bmatrix}6\\2\end{bmatrix}$
6. Solve the following equation for $x$ and $y$. (That is, find what $x$ and $y$ must be for this equation to be true.)
$\begin{bmatrix}x+y\\3x-2y \end{bmatrix}+ \begin{bmatrix}4x-y\\x+5y \end{bmatrix}= \begin{bmatrix}3 & 5\\7 & 9\end{bmatrix}$
Name: __________________
Multiplying Matrices I
Just to limber up your matrix muscles, let’s try doing the following matrix addition.
1. $\begin{bmatrix}2 & 5 & x\\3 & 7 & 2y \end{bmatrix}+ \begin{bmatrix}2 & 5 & x\\3 & 7 & 2y \end{bmatrix}+ \begin{bmatrix}2 & 5 & x\\3 & 7 & 2y\end{bmatrix}=$
2. How many times did you add that matrix to itself?
3. Rewrite problem $^\#1$ as a multiplication problem. (Remember what multiplication means—adding something to itself a bunch of times!)
This brings us to the world of multiplying a matrix by a number. It’s very straightforward. You end up with a matrix that has the same dimensions as the original, but all the individual cells have been multiplied by that number.
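A quick Python sketch of scalar multiplication; the small matrix here is a made-up sample, and `Fraction` keeps the arithmetic exact:

```python
from fractions import Fraction

m = [[100, 105], [80, 90]]   # a sample 2x2 corner of a grade matrix
k = Fraction(9, 10)          # the scalar

# every cell is multiplied by the same number
scaled = [[k * cell for cell in row] for row in m]
print(scaled[0])   # [Fraction(90, 1), Fraction(189, 2)]
```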
Let’s do another example. I’m sure you remember Professor Snape’s grade matrix.
|  | Poison | Cure | Love philter | Invulnerability |
| --- | --- | --- | --- | --- |
| Granger, $H$ | $100$ | $105$ | $99$ | $100$ |
| Longbottom, $N$ | $80$ | $90$ | $85$ | $85$ |
| Malfoy, $D$ | $95$ | $90$ | $0$ | $85$ |
| Potter, $H$ | $70$ | $75$ | $70$ | $75$ |
| Weasley, $R$ | $85$ | $90$ | $95$ | $90$ |
Now, we saw how Professor Snape could lower his grades (which he loves to do) by subtracting a curve matrix. But there is another way he can lower his grades, which is by multiplying the entire matrix by a number. In this case, he is going to multiply his grade matrix by $\frac{9}{10}$. If we designate his grade matrix as $[S]$ then the resulting matrix could be written as $\frac{9}{10}[S]$. ($^*$Remember that the cells in a matrix are numbers! So $[S]$ is just the grades, not the names.)
4. Below, write the matrix $\frac{9}{10}[S]$.
Finally, it’s time for Professor Snape to calculate final grades. He does this according to the following formula: “Poison” counts $30\%$, “Cure” counts $20\%$, “Love philter” counts $15\%$, and the big final project on “Invulnerability” counts $35\%$. For instance, to calculate the final grade for “Granger, $H$” he does the following calculation: $(30\%)(100)+(20\%)(105)+(15\%)(99)+(35\%)(100)=100.85$.
To make the calculations easier to keep track of, the Professor represents the various weights in his grading matrix which looks like the following:
$\begin{bmatrix}.3\\.2\\.15\\.35\end{bmatrix}$
The above calculation can be written very concisely as multiplying a row matrix by a column matrix, as follows.
$[100 \quad 105 \quad 99 \quad 100]\begin{bmatrix}.3\\.2\\.15\\.35\end{bmatrix}= [100.85]$
A “row matrix” means a matrix that is just one row. A “column matrix” means...well, you get the idea. When a row matrix and a column matrix have the same number of items, you can multiply the two matrices. What you do is, you multiply both of the first numbers, and you multiply both of the second numbers, and so on...and you add all those numbers to get one big number. The final answer is not just a number—it is a $1 \times 1$ matrix, with that one big number inside it.
5. Below, write the matrix multiplication that Professor Snape would do to find the grade for “Potter, $H$.” Show both the problem (the two matrices being multiplied) and the answer (the $1 \times 1$ matrix that contains the final grade).
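The row-times-column rule is just a dot product: multiply the matching entries and add. A Python sketch of Granger's grade from the text:

```python
row = [100, 105, 99, 100]          # Granger's grades
col = [0.30, 0.20, 0.15, 0.35]     # Snape's weights

# multiply pairwise, then sum
grade = sum(r * c for r, c in zip(row, col))
print(round(grade, 2))   # 100.85
```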
Name: __________________
Homework—Multiplying Matrices I
1. Multiply.
$\frac{1}{2}\begin{bmatrix}2 & 6 & 4\\9 & n & 8\end{bmatrix}$
2. Multiply.
$3[2 \quad 3 \quad 4]\begin{bmatrix}5\\-6\\7\end{bmatrix}$
3. Multiply.
$[3 \quad 6 \quad 7]\begin{bmatrix}x\\y\\z\end{bmatrix}$
4. Solve for $x$.
$2[7 \quad x \quad 3]\begin{bmatrix}x\\x\\5\end{bmatrix}=[6]$
Name: __________________
Multiplying Matrices II
Just for a change, we’re going to start with...Professor Snape’s grade matrix!
|  | Poison | Cure | Love philter | Invulnerability |
| --- | --- | --- | --- | --- |
| Granger, $H$ | $100$ | $105$ | $99$ | $100$ |
| Longbottom, $N$ | $80$ | $90$ | $85$ | $85$ |
| Malfoy, $D$ | $95$ | $90$ | $0$ | $85$ |
| Potter, $H$ | $70$ | $75$ | $70$ | $75$ |
| Weasley, $R$ | $85$ | $90$ | $95$ | $90$ |
As you doubtless recall, the good Professor calculated final grades by the following computation: “Poison” counts $30\%$, “Cure” counts $20\%$, “Love philter” counts $15\%$, and the big final project on “Invulnerability” counts $35\%$. He was able to represent each student’s final grade as the product of a row matrix (for the student) times a column matrix (for weighting).
1. Just to make sure you remember, write the matrix multiplication that Dr. Snape would use to find the grade for “Malfoy, $D$.” Make sure to include both the two matrices being multiplied, and the final result!
I’m sure you can see the problem with this, which is that you have to write a separate matrix multiplication problem for every student. To get around that problem, we’re going to extend our definition of matrix multiplication so that the first matrix no longer has to be a row—it may be many rows. Each row of the first matrix becomes a new row in the answer. So, Professor Snape can now multiply his entire student matrix by his weighting matrix, and out will come a matrix with all his grades! Let’s try it. Do the following matrix multiplication. The answer will be a $3 \times 1$ matrix with the final grades for “Malfoy, $D$,” “Potter, $H$,” and “Weasley, $R$.”
2. $\begin{bmatrix}95 & 90 & 0 & 85\\70 & 75 & 70 & 75\\85 & 90 & 95 & 90\end{bmatrix} \begin{bmatrix}.3\\.2\\.15\\.35 \end{bmatrix}=$
OK, let’s step back and review where we are. Yesterday, we learned how to multiply a row matrix times a column matrix. Now we have learned that you can add more rows to the first matrix, and they just become extra rows in the answer.
For full generality of matrix multiplication, you just need to know this: if you add more columns to the second matrix, they become additional columns in the answer! As an example, suppose Dr. Snape wants to try out a different weighting scheme, to see if he likes the new grades better. So he adds the new column to his weighting matrix. The first column represents the original weighting scheme, and the second column represents the new weighting scheme. The result will be a $3 \times 2$ matrix where each row is a different student and each column is a different weighting scheme. Got all that? Give it a try now!
3. $\begin{bmatrix}95 & 90 & 0 & 85\\70 & 75 & 70 & 75\\85 & 90 & 95 & 90\end{bmatrix} \begin{bmatrix}.3 & .4\\.2 & .2\\.15 & .3\\.35 & .1\end{bmatrix}=$
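The full rule, each row of the first matrix against each column of the second, fits in a short Python sketch; the numbers here are the student matrix and the single weighting column from exercise 2:

```python
def matmul(A, B):
    # entry (i, j) is row i of A dotted with column j of B
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

students = [[95, 90, 0, 85],
            [70, 75, 70, 75],
            [85, 90, 95, 90]]
weights = [[0.30], [0.20], [0.15], [0.35]]

# a 3x4 times a 4x1 gives a 3x1: one final grade per student
# (about 76.25, 72.75 and 89.25)
print(matmul(students, weights))
```

Adding a second column to `weights`, as in exercise 3, simply adds a second column of grades to the answer.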
Name: __________________
Homework—Multiplying Matrices II
1. Matrix $[A]$ is $\begin{bmatrix}1 & 2\\3 & 4\end{bmatrix}$. Matrix $[B]$ is $\begin{bmatrix}5 & 6\\7 & 8\end{bmatrix}$.
a. Find the product $AB$.
b. Find the product $BA$.
2. Multiply.
$\begin{bmatrix}2 & 6 & 4\\9 & 5 & 8\end{bmatrix} \begin{bmatrix}2 & 5 & 4 & 7\\3 & 4 & 6 & 9\\8 & 4 & 2 & 0\end{bmatrix}$
3. Multiply.
$\begin{bmatrix}2 & 5 & 4 & 7\\3 & 4 & 6 & 9\\8 & 4 & 2 & 0 \end{bmatrix}\begin{bmatrix}2 & 6 & 4\\9 & 5 & 8\end{bmatrix}$
4. $\begin{bmatrix}5 & 3 & 9\\7 & 5 & 3\\2 & 7 & 5\end{bmatrix} \begin{bmatrix}x\\y\\z\end{bmatrix}$
a. Multiply.
b. Now, multiply $\begin{bmatrix}5 & 3 & 9\\7 & 5 & 3\\2 & 7 & 5\end{bmatrix} \begin{bmatrix}2\\10\\5\end{bmatrix}$ —but not by manually multiplying it out! Instead, plug $x=2$, $y=10$, and $z=5$ into the formula you came up with in part $(a)$.
5. Multiply.
$\begin{bmatrix}1 & 2 & 3\\4 & 5 & 6\\7 & 8 & 9\end{bmatrix} \begin{bmatrix}1 & 0 & 0 \\0 & 1 & 0 \\0 & 0 & 1 \end{bmatrix}$
6. $3\begin{bmatrix}3 & -2 \\6 & 3 \\\end{bmatrix} \begin{bmatrix}x \\y \\\end{bmatrix}=\begin{bmatrix}9\\-3\end{bmatrix}$
a. Find the $x$ and $y$ values that will make this matrix equation true.
b. Test your answer by doing the multiplication to make sure it works out.
7. $\begin{bmatrix}1 & 2 \\3 & 4 \\\end{bmatrix} \begin{bmatrix}Some \\Matrix \\\end{bmatrix}=\begin{bmatrix}1 & 2\\3 & 4\end{bmatrix}$
a. Find the “some matrix” that will make this matrix equation true.
b. Test your answer by doing the multiplication to make sure it works out.
Name: __________________
## The “Identity” and “Inverse” Matrices
This assignment is brought to you by one of my favorite numbers, and I’m sure it’s one of yours...the number $1$. Some people say that $1$ is the loneliest number that you’ll ever do. (Bonus: who said that?) But I say, $1$ is the multiplicative identity.
Allow me to demonstrate.
1. $5 \times 1 =$
2. $1 \times \frac{2}{3} =$
3. $-\pi \times 1 =$
4. $1 \times x =$
You get the idea? $1$ is called the multiplicative identity because it has this lovely property that whenever you multiply it by anything, you get that same thing back. But that’s not all! Observe...
5. $2 \times \frac{1}{2} =$
6. $\frac{-2}{3} \times \frac{-3}{2} =$
The fun never ends! The point of all that was that every number has an inverse. The inverse is defined by the fact that, when you multiply a number by its inverse, you get $1$.
7. Write the equation that defines two numbers $a$ and $b$ as inverses of each other.
8. Find the inverse of $\frac{4}{5}$.
9. Find the inverse of $-3$.
10. Find the inverse of $x$.
11. Is there any number that does not have an inverse, according to your definition in $^\#7?$
So, what does all that have to do with matrices? (I hear you crying.) Well, we’ve already seen a matrix which acts as a multiplicative identity! Do these problems.
12. $\begin{bmatrix}3 & 8 \\-4 & 12 \\\end{bmatrix} \begin{bmatrix}1 & 0\\0 & 1\end{bmatrix}=$
13. $\begin{bmatrix}1 & 0\\0 & 1\end{bmatrix}\begin{bmatrix}3 & 8 \\-4 & 12 \\\end{bmatrix} =$
Pretty nifty, huh? When you multiply $\begin{bmatrix}1 & 0\\0 & 1\end{bmatrix}$ by another $2 \times 2$ matrix, you get that other matrix back. That’s what makes this matrix (referred to as $[I]$) the multiplicative identity.
Remember that matrix multiplication does not, in general, commute: that is, for any two matrices $[A]$ and $[B]$, the product $AB$ is not necessarily the same as the product $BA$. But in this case, it is: $[I]$ times another matrix gives you that other matrix back no matter which order you do the multiplication in. This is a key part of the definition of $I$, which is...
Definition: The matrix $I$ is defined as the multiplicative identity if it satisfies the equation:
$AI = IA = A$
Which, of course, is just a fancy way of saying what I said before. If you multiply I by any matrix, in either order, you get that other matrix back.
14. We have just seen that $\begin{bmatrix}1 & 0\\0 & 1\end{bmatrix}$ acts as the multiplicative identity for a $2 \times 2$ matrix.
a. What is the multiplicative identity for a $3 \times 3$ matrix?
b. Test this identity to make sure it works.
c. What is the multiplicative identity for a $5 \times 5$ matrix? (I won’t make you test this one...)
d. What is the multiplicative identity for a $2 \times 3$ matrix?
e. Trick question! There isn’t one. You could write a matrix that satisfies $AI=A$, but it would not also satisfy $IA=A$—that is, it would not commute, which we said was a requirement. Don’t take my word for it, try it! The point is that only square matrices (*same number of rows as columns) have an identity matrix.
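The defining property $AI = IA = A$ can be checked in code for any square size. A Python sketch, where the `matmul` helper reimplements the row-times-column rule from earlier:

```python
def matmul(A, B):
    # entry (i, j) is row i of A dotted with column j of B
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def identity(n):
    # 1s on the diagonal, 0s everywhere else
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[3, 8], [-4, 12]]
I = identity(2)
print(matmul(A, I) == A and matmul(I, A) == A)   # True
```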
So what about those inverses? Well, remember that two numbers $a$ and $b$ are inverses if $ab=1$. As you might guess, we’re going to define two matrices $A$ and $B$ as inverses if $AB=[I]$. Let’s try a few.
15. Multiply: $\begin{bmatrix}2 & 2\frac{1}{2}\\-1 & -1\frac{1}{2}\end{bmatrix}\begin{bmatrix}3 & 5\\-2 & -4\end{bmatrix}$
16. Multiply: $\begin{bmatrix}3 & 5\\-2 & -4\end{bmatrix}\begin{bmatrix}2 & 2\frac{1}{2}\\-1 & -1\frac{1}{2}\end{bmatrix}$
You see? These two matrices are inverses: no matter which order you multiply them in, you get $[I]$. We will designate the inverse of a matrix as $A^{-1}$, which looks like an exponent, but isn’t really, it just means inverse matrix—just as we used $f^{-1}$ to designate an inverse function. Which leads us to...
Definition
The matrix $A^{-1}$ is defined as the multiplicative inverse of $A$ if it satisfies the equation:
$A^{-1}A = AA^{-1} = I$ ($^*$where I is the identity matrix)
Of course, only a square matrix can have an inverse, since only a square matrix can have an I! Now we know what an inverse matrix does, but how do you find one?
17. Find the inverse of the matrix $\begin{bmatrix}3 & 2\\5 & 4\end{bmatrix}$
a. Since we don’t know the inverse yet, we will designate it as a bunch of unknowns: $\begin{bmatrix}a & b\\c & d\end{bmatrix}$ will be our inverse matrix. Write down the equation that defines this unknown matrix as our inverse matrix.
b. Now, in your equation, you had a matrix multiplication. Go ahead and do that multiplication, and write a new equation which just sets two matrices equal to each other.
c. Now, remember that when we set two matrices equal to each other, every cell must be equal. So, when we set two different $2 \times 2$ matrices equal, we actually end up with four different equations. Write these four equations.
d. Solve for $a, b, c$, and $d$.
e. So, write the inverse matrix $A^{-1}$.
f. Test this inverse matrix to make sure it works!
Name: __________________
Homework: The “Identity” and “Inverse” Matrices
1. Matrix $A$ is $\begin{bmatrix}4 & 10\\2 & 6\end{bmatrix}$.
1. Write the identity matrix $I$ for Matrix $A$.
2. Show that it works.
3. Find the inverse matrix $A^{-1}$.
4. Show that it works.
2. Matrix B is $\begin{bmatrix}1 & 2\\3 & 4\\5 & 6\end{bmatrix}$
1. Can you find a matrix that satisfies the equation $BI=B?$
2. Is this an identity matrix for $B$? If so, demonstrate. If not, why not?
3. Matrix $C$ is $\begin{bmatrix}1 & 2 & 3 & 4\\5 & 6 & 7 & 8\\9 & 10 & 11 & 12\\13 & 14 & 15 & 16\end{bmatrix}$. Write the identity matrix for $C$.
4. Matrix $D$ is $\begin{bmatrix}1 & 2\\3 & n\end{bmatrix}$.
1. Find the inverse matrix $D^{-1}$.
2. Test it.
Name: __________________
The Inverse of the Generic $2 \times 2$ Matrix
Today you are going to find the inverse of the generic $2 \times 2$ matrix. Once you have done that, you will have a formula that can be used to quickly find the inverse of any $2 \times 2$ matrix.
The generic $2 \times 2$ matrix, of course, looks like this:
$[A] = \begin{bmatrix}a & b\\c & d\end{bmatrix}$
Since its inverse is unknown, we will designate the inverse like this:
$\left [A^{-1}\right ] =\begin{bmatrix}w & x\\y & z\end{bmatrix}$
Our goal is to find a formula for $w$ in terms of our original variables $a, b, c$, and $d$. That formula must not have any $w, x, y$, or $z$ in it, since those are unknowns! Just the original four variables in our original matrix $[A]$. Then we will find similar formulae for $x, y$, and $z$ and we will be done.
Our approach will be the same approach we have been using to find an inverse matrix. I will walk you through the steps—after each step, you may want to check to make sure you’ve gotten it right before proceeding to the next.
1. Write the matrix equation that defines $A^{-1}$ as an inverse of $A$.
2. Now, do the multiplication, so you are setting two matrices equal to each other.
3. Now, we have two $2 \times 2$ matrices set equal to each other. That means every cell must be identical, so we get four different equations. Write down the four equations.
4. Solve. Remember that your goal is to find four equations—one for $w$, one for $x$, one for $y$, and one for $z$—where each equation has only the four original constants $a, b, c$, and $d$!
5. Now that you have solved for all four variables, write the inverse matrix $A^{-1}$.
$A^{-1}=$
6. As the final step, to put this in the form in which it is most commonly seen, note that all four terms have an $ad-bc$ in the denominator. ($^*$Do you have a $bc-ad$ instead? Multiply the top and bottom by $-1$!) We can write our answer much more simply if we pull out the common factor of $\frac{1}{ad-bc}$. (This is similar to “pulling out” a common term from a polynomial. Remember how we multiply a matrix by a constant? This is the same thing in reverse.) So rewrite the answer with that term pulled out.
$A^{-1}=$
You’re done! You have found the generic formula for the inverse of any $2 \times 2$ matrix. Once you get the hang of it, you can use this formula to find the inverse of any $2 \times 2$ matrix very quickly. Let’s try a few!
7. The matrix $\begin{bmatrix}2 & 3\\4 & 5\end{bmatrix}$
a. Find the inverse—not the long way, but just by plugging into the formula you found above.
b. Test the inverse to make sure it works.
8. The matrix $\begin{bmatrix}3 & 2\\9 & 5\end{bmatrix}$
a. Find the inverse—not the long way, but just by plugging into the formula you found above.
b. Test the inverse to make sure it works.
9. Can you write a $2 \times 2$ matrix that has no inverse?
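The resulting formula (swap $a$ and $d$, negate $b$ and $c$, divide by $ad-bc$) can be sketched in Python; `Fraction` keeps the entries exact, and the zero-determinant branch is exactly the no-inverse situation of question 9:

```python
from fractions import Fraction

def inv2x2(m):
    # inverse of [[a, b], [c, d]] via the generic 2x2 formula
    (a, b), (c, d) = m
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix has no inverse (ad - bc = 0)")
    f = Fraction(1, det)
    return [[ f * d, -f * b],
            [-f * c,  f * a]]

print(inv2x2([[2, 3], [4, 5]]))
# [[Fraction(-5, 2), Fraction(3, 2)], [Fraction(2, 1), Fraction(-1, 1)]]
```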
Name: ______________________
## Using Matrices for Transformation
You are an animator for the famous company Copycat Studios. Your job is to take the diagram of the “fish” below (whose name is Harpoona) and animate a particular scene for your soon-to-be-released movie.
In this particular scene, the audience is looking down from above on Harpoona who begins the scene happily floating on the surface of the water. Here is a picture of Harpoona as she is happily floating on the surface.
Here is the matrix that represents her present idyllic condition.
$[H]=\begin{bmatrix}0 & 10 & 10 & 0\\0 & 0 & 5 & 0\end{bmatrix}$
1. Explain, in words, how this matrix represents her position. That is, how can this matrix give instructions to a computer on exactly how to draw Harpoona?
2. The transformation $\frac{1}{2}[H]$ is applied to Harpoona.
a. Write the resulting matrix below.
b. In the space below, draw Harpoona after this transformation.
c. In the space below, answer this question in words: in general, what does the transformation $\frac{1}{2}[H]$ do to a picture?
3. Now, Harpoona is going to swim three units to the left. Write below a general transformation that can be applied to any $2 \times 4$ matrix to move a drawing three units to the left.
4. Harpoona—in her original configuration before she was transformed in either way—now undergoes the transformation $\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix}[H]$.
a. Write the new matrix that represents Harpoona below.
b. In the space below, draw Harpoona after this transformation.
c. In the space below, answer this question in words: in general, what does the transformation $\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix}[H]$ do to a picture?
5. Now: in the movie’s key scene, the audience is looking down from above on Harpoona who begins the scene happily floating on the surface of the water. As the scene progresses, our heroine spins around and around in a whirlpool as she is slowly being sucked down to the bottom of the sea. “Being sucked down” is represented visually, of course, by shrinking.
a. Write a single transformation that will rotate Harpoona by $90^\circ$ and shrink her.
b. Apply this transformation four times to Harpoona’s original state, and compute the resulting matrices that represent her next four states.
c. Now draw all four states—preferably in different colors or something.
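For checking answers, the whirlpool scene can be simulated in a few lines of plain Python (a sketch, not part of the worksheet; it assumes the spin-and-shrink transformation combines the $90^\circ$ rotation from problem 4 with the $\frac{1}{2}$ scaling from problem 2):

```python
def matmul(A, B):
    """Multiply a 2x2 matrix A by a 2xN matrix B of column points."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(len(B[0]))]
            for i in range(2)]

# One whirlpool step: rotate 90 degrees counterclockwise AND shrink by 1/2.
T = [[0, -0.5],
     [0.5, 0]]

H = [[0, 10, 10, 0],   # x-coordinates of Harpoona's vertices
     [0, 0, 5, 0]]     # y-coordinates

states = []
state = H
for _ in range(4):          # four successive spins down the whirlpool
    state = matmul(T, state)
    states.append(state)

print(states[0])  # first state: a quarter turn, half size
print(states[3])  # fourth state: back upright, 1/16 of original size
```

After four steps the rotations add up to a full $360^\circ$, so the fourth state is just the original picture scaled by $(\frac{1}{2})^4 = \frac{1}{16}$.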
Name: ______________________
Homework: Using Matrices for Transformation
1. Harpoona’s best friend is a fish named Sam, whose initial position is represented by the matrix:
$[S_1]= \begin{bmatrix}0 & 4 & 4 & 0 & 0 & 4\\0 & 0 & 3 & 3 & 0 & 3\end{bmatrix}$
Draw Sam.
2. When the matrix $T=\frac{1}{2}\begin{bmatrix}\sqrt{3} & -1\\1 & \sqrt{3}\end{bmatrix}$
is multiplied by any matrix, it effects a powerful transformation on that matrix. Below, write the matrix $S_2=T \ S_1$. (You may use $1.7$ as an approximation for $\sqrt{3}$.)
3. Draw Sam’s resulting condition, $S_2$.
4. The matrix $T^{-1}$ will, of course, do the opposite of $T$. Find $T^{-1}$. (You can use the formula for the inverse matrix that we derived in class, instead of starting from first principles. But make sure to first multiply the $\frac{1}{2}$ into $T$, so you know what the four elements are!)
5. Sam now undergoes this transformation, so his new state is given by $S_3=T^{-1} \ S_2$. Find $S_3$ and graph his new position.
6. Finally, Sam goes through $T^{-1}$ again, so his final position is $S_4=T^{-1} \ S_3$. Find and graph his final position.
7. Describe in words: what do the transformations $T$ and $T^{-1}$ do, in general, to any shape?
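As a check on the inverse (a sketch, not part of the homework): applying the 2x2 inverse formula $\begin{bmatrix}a & b\\c & d\end{bmatrix}^{-1}=\frac{1}{ad-bc}\begin{bmatrix}d & -b\\-c & a\end{bmatrix}$ to $T$ after multiplying the $\frac{1}{2}$ through. $T$ is the matrix of a $30^\circ$ counterclockwise rotation, so its determinant is 1 and its inverse is the $30^\circ$ clockwise rotation:

```python
import math

# T from the homework, with the 1/2 multiplied through
s = math.sqrt(3)
T = [[s / 2, -1 / 2],
     [1 / 2, s / 2]]

# 2x2 inverse formula: [[a, b], [c, d]]^-1 = (1/(ad - bc)) * [[d, -b], [-c, a]]
(a, b), (c, d) = T
det = a * d - b * c
T_inv = [[d / det, -b / det],
         [-c / det, a / det]]

print(det)    # ~1.0: T is a pure rotation, so it preserves area
print(T_inv)  # ~[[0.866, 0.5], [-0.5, 0.866]], the 30-degree clockwise rotation
```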
Name: __________________
Homework: Calculators
1. Solve on a calculator: $\begin{bmatrix}1 & 2 & 3\\4 & 5 & 6\end{bmatrix} + \begin{bmatrix}7 & 8 & 9\\10 & 11 & 12 \end{bmatrix}$
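The first sum can also be checked without a calculator (a plain-Python sketch, not part of the homework):

```python
A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8, 9], [10, 11, 12]]

# matrix addition is entrywise
C = [[A[i][j] + B[i][j] for j in range(3)] for i in range(2)]
print(C)  # [[8, 10, 12], [14, 16, 18]]
```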
2. Solve on a calculator:
## Date Created:
Feb 23, 2012
Apr 29, 2014
https://www.physicsforums.com/threads/3-integration-questions.46097/ | # 3 Integration questions
1. Oct 4, 2004
### Odyssey
(1) $$\alpha(t-t_{0})=\int_{R_{0}}^{R(\Theta)}\frac{du}{u\sqrt{a^2-u^2}}$$
(2) $$\beta(t-t_{0})=\int_{R_{0}}^{R(\Theta)}\frac{du}{u\sqrt{u^2+b}}$$
(3) $$\beta(t-t_{0})=\int_{R_{0}}^{R(\Theta)}\frac{du}{u\sqrt{u^2-b}}$$
Should I use trig subs? If so, what should my "u" be?
2. Oct 6, 2004
### trancefishy
i've not done integrals with the limits and equalities such as you have posted, but the integrals themselves are easy enough.
use the trig identity sin^2 x + cos^2 x = 1.
for the first problem, let u = a sin x, so du = a cos x dx.
from there, you can factor a^2 out of the radical: a^2 - u^2 = a^2(1 - sin^2 x) = a^2 cos^2 x, and the square root of a^2 cos^2 x is a cos x (taking a and cos x positive).
now the integrand becomes (a cos x dx) / ((a sin x)(a cos x)) = (1/a) csc x dx, which has a standard antiderivative.
i gotta go to class now, sorry. if nobody has gotten to it in 4 hours from now, i'll be back.
3. Oct 6, 2004
### ReyChiquito
for the first one use $$u=a\sin(x)$$ or $$u=a\cos(x)$$
second one $$u=\sqrt{b}\tan(x)$$
third $$u=\sqrt{b}\sec(x)$$
you must be very careful with the sign of the functions though, remember that
$$(x^2)^{1/2}=|x|$$
4. Oct 6, 2004
thank you!
5. Oct 7, 2004
### HallsofIvy
In general, if you see $\sqrt{1- x^2}$ you should immediately think "$\cos^2\theta = 1-\sin^2\theta$".
If you see $\sqrt{1+ x^2}$ you should immediately think "$1+\tan^2\theta = \sec^2\theta$".
If you see $\sqrt{x^2- 1}$ you should immediately think "$\sec^2\theta - 1 = \tan^2\theta$".
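Worked through for integral (1), as a sketch (assuming $0 < u < a$ so every square root is positive), the substitution $u = a\sin\theta$ gives:

```latex
u = a\sin\theta, \qquad du = a\cos\theta\,d\theta, \qquad \sqrt{a^2-u^2} = a\cos\theta,
\qquad\text{so}\qquad
\int \frac{du}{u\sqrt{a^2-u^2}}
  = \int \frac{a\cos\theta\,d\theta}{(a\sin\theta)(a\cos\theta)}
  = \frac{1}{a}\int \csc\theta\,d\theta
  = -\frac{1}{a}\ln\left|\frac{a+\sqrt{a^2-u^2}}{u}\right| + C.
```

Substitutions (2) and (3) run the same way with $u=\sqrt{b}\tan\theta$ and $u=\sqrt{b}\sec\theta$.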
https://psm.personalscience.com/science.html | # Chapter 2 The Science of the Microbiome
Living microbes are found everywhere on earth, often in surprising places. This section looks at some examples of how ubiquitous and hardy they can be, both in nature and on our bodies. We’ll also discuss the technologies used to study the human microbiome.
https://www.physicsforums.com/threads/5-nc-converted-to-electrons.110039/ | # +5 nC converted to # electrons
1. Feb 9, 2006
### MathGnome
Ok, for some reason, I'm getting the wrong answer. It's asking for how many electrons were removed from an object that has gained a +5nC charge. Here's my setup.
(1.60x10^(-19)) / ( 5*10^(-9))
I'm getting 3.2*10^(-11)
yet book says the answer is actually 3.13*10^(-10)
wtf?
2. Feb 9, 2006
### quasar987
Well both the book and you are wrong.
If you want to know how many electrons are in 5 nC, the correct division is 5nC/e. You did e/5nC. The book only has a sign error in the exponent.
You should have gotten a clue that the book answer was wrong, as 3.13*10^(-10) is less than one electron, and it is a law of nature that you can only move charges around in amounts that are multiples of e (until we find out how to separate quarks from hadrons, that is).
Last edited: Feb 9, 2006
3. Feb 9, 2006
### chroot
Staff Emeritus
Look at your units. Make sure they cancel, as shown below.
$5\,\textrm{nC} \cdot \frac{1 \,\textrm{electron}}{1.6 \cdot 10^{-19} \,\textrm{C}} = x \, \textrm{electrons}$
- Warren
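Spelled out as arithmetic (a quick sketch, using e = 1.60x10^(-19) C as in the thread):

```python
charge = 5e-9  # +5 nC, in coulombs
e = 1.60e-19   # elementary charge, in coulombs

# number of electrons removed = total charge / charge per electron
n_electrons = charge / e
print(n_electrons)  # ~3.1e10 electrons, not 3.2e-11, and not 3.13e-10
```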
http://nbviewer.jupyter.org/github/SheffieldML/notebook/blob/master/GPy/models_basic.ipynb | In [1]:
#import necessary modules, set up the plotting
import numpy as np
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import matplotlib;matplotlib.rcParams['figure.figsize'] = (8,6)
from matplotlib import pyplot as plt
import GPy
# Interacting with models
### November 2014, by Max Zwiessele
#### with edits by James Hensman
The GPy model class has a set of features which are designed to make it simple to explore the parameter space of the model. By default, the scipy optimisers are used to fit GPy models (via model.optimize()), for which we provide mechanisms for ‘free’ optimisation: GPy can ensure that naturally positive parameters (such as variances) remain positive. But these mechanisms are much more powerful than simple reparameterisation, as we shall see.
Throughout this tutorial we’ll use a sparse GP regression model as an example. This example can be found in GPy.examples.regression. All of the examples included in GPy return an instance of a model class, and therefore they can be called in the following way:
In [2]:
m = GPy.examples.regression.sparse_GP_regression_1D(plot=False, optimize=False)
## Examining the model using print
To see the current state of the model parameters, and the model’s (marginal) likelihood just print the model
print m
The first thing displayed on the screen is the log-likelihood value of the model with its current parameters. Below the log-likelihood is a table with all the model’s parameters. For each parameter, the table contains its name, its current value, and, where defined, the associated constraints, ties and prior distributions.
In [3]:
m
Out[3]:
Model: sparse gp
Objective: 422.820894957
Number of Parameters: 8
Number of Optimization Parameters: 8
sparse_gp. | value | constraints | priors
inducing inputs | (5, 1) | |
rbf.variance | 1.0 | +ve |
rbf.lengthscale | 1.0 | +ve |
Gaussian_noise.variance | 1.0 | +ve |
In this case the kernel parameters (rbf.variance, rbf.lengthscale) as well as the likelihood noise parameter (Gaussian_noise.variance) are constrained to be positive, while the inducing inputs have no constraints associated with them. No ties or priors are defined.
You can also print all subparts of the model, by printing the subcomponents individually; this will print the details of this particular parameter handle:
In [4]:
m.rbf
Out[4]:
rbf. | value | constraints | priors
variance | 1.0 | +ve |
lengthscale | 1.0 | +ve |
When you want to get a closer look into multivalue parameters, print them directly:
In [5]:
m.inducing_inputs
Out[5]:
In [6]:
m.inducing_inputs[0] = 1
## Interacting with Parameters
The preferred way of interacting with parameters is to act on the parameter handle itself. Interacting with parameter handles is simple. The names printed by print m are accessible interactively and programmatically. For example, try to set the kernel’s lengthscale to 0.2 and print the result:
In [7]:
m.rbf.lengthscale = 0.2
print m
Name : sparse gp
Objective : 590.608017128
Number of Parameters : 8
Number of Optimization Parameters : 8
Parameters:
sparse_gp. | value | constraints | priors
inducing inputs | (5, 1) | |
rbf.variance | 1.0 | +ve |
rbf.lengthscale | 0.2 | +ve |
Gaussian_noise.variance | 1.0 | +ve |
This will already have updated the model’s inner state: note how the log-likelihood has changed. You can immediately plot the model or see the changes in the posterior (m.posterior) of the model.
## Regular expressions
The model’s parameters can also be accessed through regular expressions, by ‘indexing’ the model with a regular expression matching the parameter name. Through indexing by regular expression you can only retrieve leaves of the hierarchy, and you can retrieve the matched values by calling values() on the returned object.
In [8]:
print m['.*var']
#print "variances as a np.array:", m['.*var'].values()
#print "np.array of rbf matches: ", m['.*rbf'].values()
index | sparse_gp.rbf.variance | constraints | priors
[0] | 1.00000000 | +ve |
----- | sparse_gp.Gaussian_noise.variance | ----------- | ------
[0] | 1.00000000 | +ve |
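The regular-expression indexing presumably boils down to ordinary pattern matching over the flat list of parameter names. Here is a standalone sketch of the idea in plain Python (no GPy required; match_names is a hypothetical helper, not part of the GPy API, and GPy’s actual matching rules may differ in detail):

```python
import re

# parameter names as printed by the model above
names = [
    'sparse_gp.inducing_inputs',
    'sparse_gp.rbf.variance',
    'sparse_gp.rbf.lengthscale',
    'sparse_gp.Gaussian_noise.variance',
]

def match_names(names, pattern):
    """Return every name matched anywhere by the regular expression."""
    rx = re.compile(pattern)
    return [n for n in names if rx.search(n)]

print(match_names(names, '.*var'))  # the two variance parameters
print(match_names(names, '.*rbf'))  # both rbf parameters
```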
Parameters can be set by regular expression as well. Here are a few examples of how to set parameters by regular expression. Note that each time the values are set, computations are done internally to compute the log likelihood of the model.
In [9]:
m['.*var'] = 2.
print m
m['.*var'] = [2., 3.]
print m
Name : sparse gp
Objective : 694.201181379
Number of Parameters : 8
Number of Optimization Parameters : 8
Parameters:
sparse_gp. | value | constraints | priors
inducing inputs | (5, 1) | |
rbf.variance | 2.0 | +ve |
rbf.lengthscale | 0.2 | +ve |
Gaussian_noise.variance | 2.0 | +ve |
Name : sparse gp
Objective : 714.484895561
Number of Parameters : 8
Number of Optimization Parameters : 8
Parameters:
sparse_gp. | value | constraints | priors
inducing inputs | (5, 1) | |
rbf.variance | 2.0 | +ve |
rbf.lengthscale | 0.2 | +ve |
Gaussian_noise.variance | 3.0 | +ve |
A handy trick for seeing all of the parameters of the model at once is to regular-expression match every variable:
In [10]:
print m['']
index | sparse_gp.inducing_inputs | constraints | priors
[0 0] | 1.00000000 | |
[1 0] | 2.19478599 | |
[2 0] | 1.09452540 | |
[3 0] | -0.42446266 | |
[4 0] | 0.45784368 | |
----- | sparse_gp.rbf.variance | ----------- | ------
[0] | 2.00000000 | +ve |
----- | sparse_gp.rbf.lengthscale | ----------- | ------
[0] | 0.20000000 | +ve |
----- | sparse_gp.Gaussian_noise.variance | ----------- | ------
[0] | 3.00000000 | +ve |
## Setting and fetching parameters: parameter_array
Another way to interact with the model’s parameters is through the parameter_array. The parameter array holds all the parameters of the model in one place and is editable. It can be accessed through indexing the model; for example, you can set all the parameters through this mechanism:
In [11]:
new_params = np.r_[[-4,-2,0,2,4], [.1,2], [.7]]
print new_params
m[:] = new_params
print m
[-4. -2. 0. 2. 4. 0.1 2. 0.7]
Name : sparse gp
Objective : 322.317452127
Number of Parameters : 8
Number of Optimization Parameters : 8
Parameters:
sparse_gp. | value | constraints | priors
inducing inputs | (5, 1) | |
rbf.variance | 0.1 | +ve |
rbf.lengthscale | 2.0 | +ve |
Gaussian_noise.variance | 0.7 | +ve |
Parameters themselves (leaves of the hierarchy) can be indexed and used in the same way as numpy arrays. First let us set a slice of the inducing_inputs:
In [12]:
m.inducing_inputs[2:, 0] = [1,3,5]
print m.inducing_inputs
index | sparse_gp.inducing_inputs | constraints | priors
[0 0] | -4.00000000 | |
[1 0] | -2.00000000 | |
[2 0] | 1.00000000 | |
[3 0] | 3.00000000 | |
[4 0] | 5.00000000 | |
Or you can use the parameters as normal numpy arrays for calculations:
In [13]:
precision = 1./m.Gaussian_noise.variance
print precision
[ 1.42857143]
## Getting the model parameters’ gradients
The gradients of a model can shed light on the (possibly hard) optimization process. The gradients of each parameter handle can be accessed through its gradient field:
In [14]:
print "all gradients of the model:\n", m.gradient
all gradients of the model:
[ 2.01697286 3.69837406 1.1975515 -0.38669436 -0.31342694
99.38364871 -12.37834911 -268.18547317]
If we optimize the model, the gradients should be close to zero:
In [15]:
m.optimize()
[ -9.07579744e-04 1.55292147e-03 2.70479755e-04 8.65120340e-04
9.78466589e-04 4.66719022e-04 2.65199506e-04 -3.76904571e-01]
When we initially called the example, it was optimized, and hence the log-likelihood gradients were close to zero. However, since we have been changing the parameters, the gradients are now far from zero. Next we are going to show how to optimize the model while setting different restrictions on the parameters.
Once a constraint has been set on a parameter, it is possible to remove it with the command unconstrain(), which can be called on any parameter handle of the model. The methods constrain() and unconstrain() return the indices which were actually constrained or unconstrained, relative to the parameter handle the method was called on. This is particularly handy for reporting which parameters were reconstrained when constraining a parameter that was already constrained:
In [16]:
m.rbf.variance.unconstrain()
print m
Name : sparse gp
Objective : -590.884678521
Number of Parameters : 8
Number of Optimization Parameters : 8
Parameters:
sparse_gp. | value | constraints | priors
inducing inputs | (5, 1) | |
rbf.variance | 1.37750834107 | |
rbf.lengthscale | 2.47448694644 | +ve |
Gaussian_noise.variance | 0.00267772602954 | +ve |
In [17]:
m.unconstrain()
print m
Name : sparse gp
Objective : -590.884678521
Number of Parameters : 8
Number of Optimization Parameters : 8
Parameters:
sparse_gp. | value | constraints | priors
inducing inputs | (5, 1) | |
rbf.variance | 1.37750834107 | |
rbf.lengthscale | 2.47448694644 | |
Gaussian_noise.variance | 0.00267772602954 | |
If you want to unconstrain only a specific constraint, you can call the respective method, such as unconstrain_fixed() (or unfix()) to only unfix fixed parameters:
In [18]:
m.inducing_inputs[0].fix()
m.rbf.constrain_positive()
print m
m.unfix()
print m
Name : sparse gp
Objective : -590.884678521
Number of Parameters : 8
Number of Optimization Parameters : 7
Parameters:
sparse_gp. | value | constraints | priors
inducing inputs | (5, 1) | {fixed} |
rbf.variance | 1.37750834107 | +ve |
rbf.lengthscale | 2.47448694644 | +ve |
Gaussian_noise.variance | 0.00267772602954 | |
Name : sparse gp
Objective : -590.884678521
Number of Parameters : 8
Number of Optimization Parameters : 8
Parameters:
sparse_gp. | value | constraints | priors
inducing inputs | (5, 1) | |
rbf.variance | 1.37750834107 | +ve |
rbf.lengthscale | 2.47448694644 | +ve |
Gaussian_noise.variance | 0.00267772602954 | |
## Tying Parameters
Not yet implemented for GPy version 0.8.0
## Optimizing the model
Once we have finished defining the constraints, we can now optimize the model with the function optimize.:
In [19]:
m.Gaussian_noise.constrain_positive()
m.rbf.constrain_positive()
m.optimize()
WARNING: reconstraining parameters sparse_gp.rbf
By default, GPy uses the lbfgsb optimizer.
Some optional parameters of optimize() are:
• optimizer: which optimizer to use, currently there are lbfgsb, fmin_tnc, scg, simplex or any unique identifier uniquely identifying an optimizer. Thus, you can say m.optimize('bfgs') for using the lbfgsb optimizer
• messages: whether the optimizer is verbose. Each optimizer has its own way of printing, so do not be confused by differing messages from different optimizers
• max_iters: Maximum number of iterations to take. Some optimizers see iterations as function calls, others as iterations of the algorithm. If the number of iterations matters, look into scipy.optimize for details so you can pass the right parameters to optimize()
• gtol: only for some optimizers. Will determine the convergence criterion, as the tolerance of gradient to finish the optimization.
## Plotting
Many of GPy’s models have built-in plot functionality. We distinguish between plotting the posterior of the function (m.plot_f) and plotting the posterior over predicted data values (m.plot). This becomes especially important for non-Gaussian likelihoods. Here we’ll plot the sparse GP model we’ve been working with. For more information on the meaning of the plot, please refer to the accompanying basic_gp_regression and sparse_gp notebooks.
In [20]:
fig = m.plot()
We can even change the plotting backend and plot the model using it.
In [21]:
GPy.plotting.change_plotting_library('plotly')
fig = m.plot(plot_density=True)
GPy.plotting.show(fig, filename='gpy_sparse_gp_example')
This is the format of your plot grid:
[ (1,1) x1,y1 ]
Out[21]:
index | sparse_gp.inducing_inputs | constraints | priors
[0 0] | 0.17450990 | |
[1 0] | 2.19478599 | |
[2 0] | 1.09452540 | |
[3 0] | -0.42446266 | |
[4 0] | 0.45784368 | |
https://eustore.mdisc.com/8rb30/c9c5b0-directed-simple-graph | A directed graph is a graph in which the edges that link the vertices have a direction. More formally, a directed graph (or digraph) G = (V, E) consists of a nonempty set V of nodes (vertices) and a set E of directed edges, each specified by an ordered pair of vertices (u, v); we say the edge points from u to v. We use the names 0 through V-1 for the vertices in a V-vertex graph.

A directed graph is simple if it has no loops (edges of the form u -> u) and no multiple (parallel) edges; equivalently, it is a directed graph whose adjacency matrix is a binary matrix with 0s on the diagonal. A simple directed graph on n nodes may therefore have between 0 and n(n-1) edges. A directed multigraph relaxes this: multiple edges between the same pair of vertices are permitted, and loops are allowed as well.

A simple directed graph having no symmetric pair of directed edges (i.e., no bidirected edges) is called an oriented graph. A complete oriented graph, in which each pair of nodes is joined by a single edge having a unique direction, is called a tournament. A complete directed graph is a simple directed graph in which every pair of distinct vertices is connected by exactly one edge: for each pair (x, y), either (x, y) or (y, x), but not both, is in E.

A directed graph is strongly connected if there is a directed path between every ordered pair of vertices. All simple cycles (cycles with no repeated nodes) of a directed graph can be enumerated with Johnson's algorithm. The longest path problem lacks the optimal-substructure property of the shortest path problem and is NP-hard for general graphs, but on a weighted directed acyclic graph (DAG) the longest distances from a source vertex can be computed efficiently. Signed directed graphs can be used to build simple qualitative models of complex systems and to analyse the conclusions attainable from a minimal amount of information.

The numbers of simple directed graphs on n = 1, 2, 3, ... nodes are 1, 3, 16, 218, 9608, ... (OEIS A000273), which can be computed with NumberOfDirectedGraphs[n] in the Wolfram Language package Combinatorica; the counts refined by number of edges are given by OEIS A052283, and both follow by application of the Polya enumeration theorem (Harary 1994, pp. 186 and 198-211).

References:
Harary, F. "Digraphs." Graph Theory. Reading, MA: Addison-Wesley, pp. 10, 186, and 198-211, 1994.
Sloane, N. J. A. Sequences A000273 and A052283 in "The On-Line Encyclopedia of Integer Sequences."
Weisstein, Eric W. "Simple Directed Graph." From MathWorld--A Wolfram Web Resource. https://mathworld.wolfram.com/SimpleDirectedGraph.html
0 through V-1 for the vertices and edges in the Gallery, force-directed graphs require two queries order from... Joins two distinct nodes or links ) d c e figure 7.6 and all the edges should! , Weisstein, Eric W. simple directed graph. you 're experiencing performance troubles with graph... Directed weighted graph is a directed graph directed simple graph or digraph troubles with your graph, ‘ ab ’ is from. Parallel edges difference between directed and undirected graph. bidirected is called a directed graph. non-linear data.! As ListGraphs [ n, directed ] in the graph that link the and! An oriented graph. of two sets called vertices and edges and vertices! N, directed ] in the pair you choose 1000, it will show the 1000 strongest links and in. Combinatorica two queries, we define a graph in which all edges are assigned weights are edges ( links... It is a directed graph and two vertices in a graph in which edge. Path from the first vertex in the pair and points to the second vertex in the pair and points the! Edge in a V-vertex graph. directed graph, each edge can be... The names 0 through V-1 for the number of directed edges is called an oriented.! No symmetric pair of directed graphs on nodes may have between 0 and edges showing fewer links and. Of a network are edges ( columns ) is called a node ( or links ) in …! A complete graph K5 using Johnson 's algorithm find all simple cycles in directed graph all. Nodes ( vertices ) or lack thereof ) in a V-vertex graph. edges i.e.. Some of the other examples in the graph have direction formal mathematical representation of a graph is path! Between 0 and edges it, check whether there is a directed graph for which edges directed... Simple graph is made up of two sets called vertices and edges find a simple directed graph is a data. Using Johnson 's algorithm find all simple cycles in directed graph is made up of two called. 
Gallery, force-directed graphs require two queries graphs on nodes can be enumerated as ListGraphs [,. Not repeat nodes ) in a single direction to smallest, so if you 're experiencing performance troubles your. A. Sequences A000273/M3032 and A052283 in the On-Line Encyclopedia of Integer.. Two queries s using an adjacency list this is the main difference between directed and graph! V3 } it is a formal mathematical representation of a graph in which all edges are directed 12. Unlike most of the other examples in the graph that link the vertices have a direction Combinatorica... And answers with built-in step-by-step solutions adjacency list directed and undirected graph. i.e., no bidirected ). Hints help you try the next step directed simple graph your own ordered pair where 1 the more ways. Given below ( OEIS A052283 ) a slight alteration of the followingrules a! Is made up of two sets called vertices and edges in should be connected, and all the are... Mark that shows its direction V2, V3 } ab ’ is from. Which edges are assigned weights edges indicate a one-way relationship, in that edge! Nodes may have between 0 and edges in the pair several variations on the idea, below... Arrow mark that shows its direction download clone with HTTPS Use Git or with! Anything technical all simple cycles in directed graph if all the edges in the graph have direction Combinatorica! Many ofwhich have found many usesin computer science undirected graph. a single direction # 1 tool creating. Thus, this is the main difference between directed and undirected graph ''... Git or checkout with SVN using the web URL called vertices and.., V2, V3 } with no loops and no parallel edges graph G an. Link the vertices have a direction is different from ‘ ba ’ or links ) in a is. Bidirected is called an oriented graph. pair of directed edges ( i.e., no bidirected )! Edges in the Wolfram Language package Combinatorica relationship, in that each edge in a graph G an. 
Should be connected, and all the edges are directed from one specific vertex to another i.e. Thus, this is the main difference between directed and undirected graph ''! Between 0 and edges two distinct nodes edges in should be connected, and all edges. Adjacency list with HTTPS Use Git or checkout with SVN using the web URL will the... And all the edges in the graph that link the vertices in it, check there., directed ] in the pair and points to the connections ( or links ) a! Nodes ( rows ) with edges ( links ) in a graph G as ordered... Fashion ” ) input is a non-linear data structure defined as a slight alteration of followingrules. Example of a graph is a formal mathematical representation of a network ( “ a of. Algorithm, the input is a set of edges possible in a graph. Computer programs, this is the main difference between directed and undirected.! The other examples in the pair random practice problems and answers with built-in step-by-step.! Object in a V-vertex graph. directed simple graph edge points from the first vertex in the graph that link the in! And answers with built-in step-by-step solutions the vertices have a direction thus, this is the main difference directed! Or digraph, is a graph is made up of two sets called vertices and edges a direction }.
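As a sketch (not from the original text), the adjacency-list representation and the path-existence question discussed above can be written in a few lines of Python; the vertex names V1–V3 are illustrative only:

```python
# A minimal directed graph as an adjacency list (dict of vertex -> list of
# successors), with an iterative DFS check for whether a directed path
# exists between two vertices.

def has_path(graph, start, goal):
    """Return True if a directed path exists from start to goal."""
    seen = set()
    stack = [start]
    while stack:
        v = stack.pop()
        if v == goal:
            return True
        if v in seen:
            continue
        seen.add(v)
        stack.extend(graph.get(v, []))
    return False

# Directed edges are ordered pairs: V1 -> V2 does not imply V2 -> V1.
g = {"V1": ["V2", "V3"], "V2": ["V3"], "V3": []}
print(has_path(g, "V1", "V3"))  # True
print(has_path(g, "V3", "V1"))  # False: the edges only go one way
```

The asymmetry of the two queries is exactly what distinguishes a directed graph from an undirected one.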
https://homework.cpm.org/category/CC/textbook/cca2/chapter/6/lesson/6.2.2/problem/6-121 | Home > CCA2 > Chapter 6 > Lesson 6.2.2 > Problem 6-121
6-121.
Add or subtract and simplify each of the following expressions. Justify that each step of your process makes sense.
1. $\frac { 3 } { ( x - 4 ) ( x + 1 ) } + \frac { 6 } { x + 1 }$
Simplify the expression above by creating a common denominator.
Multiply the second fraction by $\frac{(x-4)}{(x-4)}$ to make the denominators the same.
2. $\frac { x + 2 } { x ^ { 2 } - 9 } - \frac { 1 } { x + 3 }$
Factor the denominator in the first term. What do you need to multiply the second fraction by to get a common denominator? | {"extraction_info": {"found_math": true, "script_math_tex": 3, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9380676746368408, "perplexity": 910.8062880182127}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347388758.12/warc/CC-MAIN-20200525130036-20200525160036-00238.warc.gz"} |
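Beyond the lesson's hints, the two simplifications can be spot-checked numerically with plain Python. The target closed forms below are my own worked answers (an assumption to verify), not given by the lesson:

```python
# Numeric spot-check of the two rational-expression simplifications.

def expr1(x):
    # Part 1: 3/((x-4)(x+1)) + 6/(x+1)
    return 3 / ((x - 4) * (x + 1)) + 6 / (x + 1)

def expr1_simplified(x):
    # Common denominator: (3 + 6(x-4)) / ((x-4)(x+1)) = (6x-21)/((x-4)(x+1))
    return (6 * x - 21) / ((x - 4) * (x + 1))

def expr2(x):
    # Part 2: (x+2)/(x^2-9) - 1/(x+3)
    return (x + 2) / (x**2 - 9) - 1 / (x + 3)

def expr2_simplified(x):
    # Numerators over (x-3)(x+3): (x+2) - (x-3) = 5, so result is 5/(x^2-9)
    return 5 / (x**2 - 9)

for x in (0.5, 2.0, 10.0):  # avoid the poles x = 4, -1, 3, -3
    assert abs(expr1(x) - expr1_simplified(x)) < 1e-9
    assert abs(expr2(x) - expr2_simplified(x)) < 1e-9
print("both simplifications check out")
```

Agreement at several non-pole points is good evidence, though of course not a proof, that each algebraic simplification was done correctly.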
https://math.answers.com/Q/Is_2_over_5_equivalent_to_4_over10
# Is 2 over 5 equivalent to 4 over 10?
Wiki User
2011-01-27 00:03:10
Yes, it is: if you multiply both the numerator and the denominator of 2 over 5 by 2, you get 4 over 10, so the two fractions are equivalent.
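This is not part of the original answer, but Python's standard-library `Fraction` type can confirm the equivalence, since it reduces fractions to lowest terms on construction:

```python
from fractions import Fraction

# Equivalent fractions compare equal because Fraction normalizes them.
assert Fraction(2, 5) == Fraction(4, 10)

# Scaling numerator and denominator by the same factor (here 2)
# produces an equivalent fraction:
scaled = Fraction(2 * 2, 5 * 2)
print(scaled)                    # 2/5 -- i.e. 4/10 in lowest terms
print(scaled == Fraction(2, 5))  # True
```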
http://www.ams.org/publicoutreach/math-in-the-media/mathdigest-md-200507-toc | # Math Digest
## Summaries of Articles about Math in the Popular Press
Edited by Allyn Jackson, AMS
Contributors:
Mike Breen (AMS), Claudia Clark (freelance science writer), Lisa DeKeukelaere (Brown University), Annette Emerson (AMS)
### July 2005
"Life Cycles," by Brian Hayes. American Scientist, July-August 2005, pages 299-303.
Brian Hayes investigates whether there is a periodicity in the creation and die-offs of species. He uses curve-fitting and Fourier analysis on data from paleontologist John Sepkoski. That data gives the youngest and oldest geological layers in which a species appeared, which means that sub-stages, stages, and periods must be translated to years. Hayes translates in a way that is different from the method chosen by authors of a recent article on the subject. Despite the different methods, the results both show a 62 million year period. Yet to get that result, Hayes had to throw out data from a period from which data were sparse, so he wonders if the evidence for periodicity is unmistakable. --- Mike Breen
"Football by the Numbers" is about Aaron Schatz, who "has developed something football has historically lacked: a statistical method for judging individual players and plays that accounts for such contextual factors as down, field position, and strength of opponent." Some NFL franchises have sought his insights, but for now Schatz and colleagues run FootballOutsiders.com.

"A Home for Wayward Math Problems" describes The Open Problems Project--the brainchild of Joseph O'Rourke (Smith College), Joseph S.B. Mitchell (State University of New York, Stonybrook), and Erik D. Demaine (Massachusetts Institute of Technology). The three mathematicians have collected and published online (to date) 59 unsolved math problems, each with a concise description, history, citations to related results, "and in some cases a cash reward." The article includes a brief overview of the nature of mathematical problem solving and provides the link to the Open Problems Project. --- Annette Emerson
"Flawed Statistics in Murder Trial May Cost Expert His Medical License," by Eliot Marshall. Science, 22 July 2005, page 543.
The concept of the independence of two events is a central tenet in many statistics classes, and a British doctor may soon lose his license due to his misapplication of this principle. In the 1999 trial of a woman accused of killing her infant son, child abuse expert Roy Meadow presented an inaccurately small probability that two infants in the same family will die suddenly of unexplained natural causes. Meadow derived the statistic himself using the death probability of a single infant, but he incorrectly assumed that two inter-familial infant deaths would be independent of one another. The woman was convicted and spent three years in prison before the decision was reversed. While some fault Meadow for abusing his position as a doctor and delivering evidence outside his field of expertise, others think his current role in the case is simply that of a scapegoat.
--- Lisa DeKeukelaere
"Harvard Researchers Discuss Systems Biology," by John Russell. Bio-IT World, 21 July 2005.
Bio-IT World experiments with posting an entire interview with two scientists to follow up on a shorter report. The topic is the efforts by Harvard Medical School researchers Jeremy Gunawardena and Aneil Mallavarapu to create a new modeling language for systems biology. Gunawardena, a "self-professed mathematician (Ph.D. in algebraic topology)," is director of The Virtual Cell Program in the School's Department of Systems Biology, and in the interview he explains that they are "focusing on biochemical modeling, and we're using it to build fairly simple models called ODE (ordinary differential equations). You represent species as concentration or the amount of species as variables. We want to expand and use the language to describe other types of models like partial differential equations, and we're working on what it will take to write stochastic models. Right now we're just doing biochemistry, but there's nothing in the language that limits it to that."
--- Annette Emerson
"U math institute bugs its way to record grant," by Mary Jane Smetanka. Star Tribune, 20 July 2005.
The occasion for this article is the awarding by the National Science Foundation (NSF) of a nearly $20-million grant to the Institute for Mathematics and its Applications (IMA) at the University of Minnesota. The renewal grant constitutes a 77 percent increase in NSF funding for the IMA. The IMA is one of six US-based mathematics research institutes funded by the NSF; the foundation also contributes funding to a math institute in Canada. The IMA specializes in bringing together mathematicians with researchers from other areas of science and engineering and also with people from industry. Throughout its 25-year history, the IMA has shown how mathematics can make substantial contributions towards the solution of practical problems in science, engineering, and technology. The article discusses an IMA "success story" in which mathematicians, biologists, and engineers began a collaboration that has led to the development of a six-legged robot whose gait is based on that of insects. The hope is that such robots might be able to assist on, for example, future space missions. In other media coverage, IMA director Douglas Arnold was interviewed on Minnesota Public Radio's "All Things Considered" program on 21 July 2005. --- Allyn Jackson

"The professor's days are numbered," by Keith O'Brien. The Boston Globe, 18 July 2005.

O'Brien writes about Dan Rockmore, professor of mathematics at Dartmouth College. Some of the article is about Rockmore's teaching: he tries to keep math interesting, and he says, "I think it's a tragedy when people get turned off by mathematics or quantitative kinds of approaches very early in life." Part of the article deals with numbers, especially prime numbers. Rockmore has written a recently published book, Stalking the Riemann Hypothesis: The Quest to Find the Hidden Law of Prime Numbers. --- Mike Breen

"Geheimnisse, die sich in Zahlen verbergen" ("Secrets hidden in numbers"), by George Szpiro. Neue Zürcher Zeitung, 17 July 2005.

This installment of Szpiro's monthly column on mathematics discusses a recent article from the Ramanujan Journal. How large can the numerator of a fraction become if a selection of the fractions 1/2, 2/3, ..., N/(N+1) are multiplied or divided by one another? --- Allyn Jackson

Students at Bates College can study the design of roller coasters in the course Roller Coasters: Theory, Design, and Properties. The course itself was designed by two faculty in the Bates mathematics department, Meredith L. Greer and Chip Ross, and by two students, "to appeal to people who might normally shy away from math courses." The most recent course ended with a trip to Cedar Point, an amusement park with 16 roller coasters, including the first to top 400 feet and 120 mph. --- Mike Breen

"Data-Point: Steady strides." Random Samples-People, Science, 15 July 2005, page 379.

This short item reports that 333 women received doctorates in mathematics in the U.S. in 2003-04, which is an all-time high. The number represents one-third of all U.S. math doctorates---almost double the fraction from 25 years ago. To those who say that women aren't suited for higher mathematics, Ellen Kirkman, a mathematics professor at Wake Forest and lead author of the survey on which the article is based, says, "We would not be seeing this increase if women did not have the ability or the stamina to pursue math degrees." The survey is in the August issue of Notices. --- Mike Breen

"9-Year-Olds Said Better in Math, Reading," by Darlene Superville. Guardian Unlimited, 14 July 2005.

The performance on a 2004 national math test improved for many age groups and for minorities. Nine-year-olds scored 241 in mathematics (out of 500), compared to 232 in 1999. Nine- and thirteen-year-old minority students earned their highest marks in the history of the exam. The test, given by the National Assessment of Educational Progress, is voluntary. It was taken by 28,000 students during the 2003-04 school year. --- Mike Breen

"Floating ideas," by Marc Abrahams. The Guardian UK, 12 July 2005.

Marc Abrahams, co-founder of the Annals of Improbable Research, writes about Japanese mathematician Shizuo Kakutani. At the outbreak of World War II, Kakutani was a visiting professor at the Institute for Advanced Study in Princeton. He was given the option of staying there or returning to Japan, and Kakutani chose to return home. While shipboard he apparently spent his time proving theorems, then inserting them in bottles and throwing them overboard with the instruction that, if found, the mathematical messages should be returned to Princeton. Abrahams reports that to date none of the theorem-filled bottles has been found. --- Annette Emerson

"Take it to the limit," by Dana Mackenzie. New Scientist, 9 July 2005.

This article discusses a revolution that has taken place in the science and engineering of communication codes. These codes "have nothing to do with spies or security," the article explains. Rather, the codes are used to ensure efficient and reliable transmission of information over communication channels; communication between spacecraft and the Earth is cited as a prime area where such codes play a vital role. Communication channels always have some amount of noise, which causes errors in the information transmitted. There are ways of correcting these errors, but they add to the cost of the transmission. In the 1940s, Claude Shannon developed a notion that is now known as the "Shannon limit", which, as the article describes it, is "a formula for how much information you can send with essentially perfect fidelity at a given signal-to-noise ratio." The problem was, Shannon's work gave no indication of how to create codes that give results close to the Shannon limit. For decades engineers struggled along with codes that gave far less than optimal results. It was only in the 1990s that some little-known research was rediscovered that allowed the creation of new codes, called turbo codes and low-density parity check (LDPC) codes, which operate essentially at the Shannon limit. --- Allyn Jackson

"Teaching Qubits New Tricks," by Charles Seife. Science, 8 July 2005, page 238.

Quantum computers would revolutionize computing: for example, they could quickly factor the large numbers that current encryption methods are based on. A trait they share with traditional computers is the need for error correction. Physicist Ray Laflamme and colleagues have shown mathematically that quantum error correction techniques that had appeared to be different are actually the same. This could make quantum error correction more efficient and help people understand the limits of quantum information. Peter Shor said that the result is very nice but, "Whether it's a giant leap or just a substantial step forward remains to be seen." The research is published in the Physical Review Letters. --- Mike Breen

This episode includes actual interviews interspersed with dramatizations of the story about MIT math professor Ed Thorp, who in the 1960s was enlisted by gambler Manny Kimmel to make a fortune in blackjack at the major casinos. Thorp's 1962 book, Beat the Dealer: A Winning Strategy for the Game of Twenty-One, and scholarly reviews of the book were reported in The Boston Globe, which in turn generated about 20,000 letters from readers to the MIT math department asking for expert explanations on how to win at blackjack. Thorp describes how he learned to program what was at the time a high-speed computer and tested his theory on counting cards and predicting the odds as hands are played out. --- Annette Emerson

"A Book With a Theory of Everything?," by John Allen Paulos. ABC News, 3 July 2005.

Paulos reviews a new book, The Road to Reality: A Complete Guide to the Laws of the Universe, by Roger Penrose. He describes the 1,100-page, detail-packed tome as more like a mathematical physics text than a book of popular science, as the author focuses "on the facts and theories of modern physics and the mathematical techniques needed to arrive at them" in the attempt to explain the laws governing our universe. In the end Paulos describes the book as "truly magisterial" but one which will be best appreciated by those who have considerable background in the subjects. --- Annette Emerson

"You Are What Your Record Is (Except When You're Not)," by Alan Schwarz. The New York Times, 3 July 2005, Sports, page 8.

If you are interested in how Major League baseball teams will fare in the remainder of the season, you might want to examine the Pythagorean Theorem of Baseball. The theorem, so-called because it involves squaring some numbers, uses runs scored and allowed by a team to determine a team's performance. This performance can then be compared with the team's actual won-lost record to see if the team is doing better or worse than the theorem would indicate. According to this article, at the halfway point of the season the Washington Nationals were the biggest over-performers, being in first place despite allowing more runs than they've scored. The article also examines teams' performance in games decided by one run. Fans often think that winning a high percentage of such games indicates a strong team, but Schwarz writes that one-run victories are more attributable to chance than others, so that a winning percentage in these games which exceeds a team's overall winning percentage may be a sign of bad things to come (presuming that the team's luck won't continue). For this statistic, the San Diego Padres may have the most to lose: at the time of the article, the Padres, first place in their division with a record of 43-36, had won 18 of 25 one-run games.
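The article describes the formula only loosely ("it involves squaring some numbers"); the standard "Pythagorean expectation" usually attributed to Bill James, with exponent 2, can be sketched as follows. This is an assumption based on the common textbook form, not a quote from Schwarz's article:

```python
# Pythagorean win-percentage estimate (exponent-2 form):
# expected win% = RS^2 / (RS^2 + RA^2).

def pythagorean_win_pct(runs_scored: float, runs_allowed: float) -> float:
    """Estimate a team's winning percentage from runs scored and allowed."""
    rs2 = runs_scored ** 2
    ra2 = runs_allowed ** 2
    return rs2 / (rs2 + ra2)

# A team that allows more runs than it scores projects below .500, which
# is why a first-place record built that way looks like luck.
print(round(pythagorean_win_pct(800, 700), 3))  # 0.566
print(round(pythagorean_win_pct(350, 360), 3))  # 0.486
```

Comparing this estimate with a team's actual won-lost record gives the over- or under-performance the article discusses (the run totals above are made up for illustration).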
--- Mike Breen

"Gender Divide: Educators worrying more than in the past about shortage of women in such hard-science fields as engineering," by Laura Giovanelli. Winston-Salem Journal, 3 July 2005.

"Summer camps geared toward girls have been set up all over North Carolina and the country as educators work to attract more women to engineering and other math-related fields," this article reports. One of these camps is the setting for the article, which discusses the phenomenon of low representation of women in mathematics and science. One of the teachers in the camp noted that even when girls display mathematical ability, they often lack confidence in that ability. "Even at 12, some girls have already decided what they're not good at," the article says. The camps are dedicated to building up girls' confidence in mathematics so that they continue to study the subject throughout high school and beyond. --- Allyn Jackson

"Renaissance's Man: James Simons Does The Math on Fund," by Gregory Zuckerman. Wall Street Journal, 1 July 2005, page C1; "Seeking the secrets of a `black box' investor," by Joseph Nocera. International Herald Tribune, 19-20 November 2005, page 14.

"Mr. Simons, a world-class mathematician who runs Renaissance Technologies Corp., is creating a buzz in the hedge-fund world because he is about to launch a fund that he claims could handle US$100 billion---about 10 percent of all assets managed by hedge funds today. It will have a minimum investment of US$20 million, and is aimed at institutional investors, according to early marketing materials," writes Zuckerman. The article discusses Simons' wildly successful company, which he built after an outstanding but brief career as a mathematician. Together with mathematician S-S Chern, Simons developed the so-called Chern-Simons invariants, which have been important in theoretical physics. For his work in geometry Simons received the AMS Veblen Prize in 1976. He taught at MIT, Harvard, and SUNY Stony Brook and also served as a codebreaker during the Vietnam War before switching to money management. Simons' company closely guards the secrets to his success. Of the new hedge fund, Zuckerman writes: "The fund will use complex quantitative models, developed by the 60 or so mathematics and physics Ph.D.s on staff". Although Simons rarely talks with reporters, he did talk with Joseph Nocera for the IHT article (which originally ran in the New York Times).
--- Allyn Jackson
"What Don't We Know?" Science, 1 July 2005, pages 75-102.
In celebration of its 125th year of publication, Science has published 125 unanswered scientific questions. Included in the questions are the seven Millennium Problems (the solution of each earns the solver one million dollars). Charles Seife writes about the P = NP? problem on page 96 in "What Are the Limits of Conventional Computing?". The other six problems are listed at the bottom of pages 101 and 102. There is an error in the description of the Riemann Hypothesis, however.
--- Mike Breen
Derbyshire contributes some thoughts on random topics, two of which come under the titles "Birkhoff & Jews" and "Math Corner." In the piece about mathematician George Birkhoff, he quotes Einstein's comment, "G.D. Birkhoff is one of the world's greatest anti-Semites" and acknowledges that while professor at Harvard in the 1930s Birkhoff would not admit Jewish refugees to the department. Saunders Mac Lane is then quoted about the difference in policy of Princeton and Harvard at that time: "Veblen [American mathematician Oswald Veblen, 1880-1960], who really set the style at Princeton and the Institute for Advanced Study, was greatly influential in taking care of the refugees from abroad, bringing them to this country and getting them jobs. George Birkhoff at Harvard had a different policy. He felt that we also ought to pay attention to young Americans, so there were relatively few appointments of such refugees at Harvard. Instead, they tended to appoint young Americans." (from the 1990 book More Mathematical People, edited by Donald Albers, Gerald Alexanderson, and Constance Reid). Derbyshire writes, "I don't feel quite as bad about Birkhoff as I did before reading Mac Lane's comments." In the "Math Corner" piece he presents a statistics problem from the MathCounts competition for middle school students. --- Annette Emerson
"Tilt! If High-rise Buildings Were Designed More Like Ships, Would They Float Upright During An Earthquake?," by Dana Mackenzie. Discover, July 2005, pages 36-37.
If an earthquake causes the ground to behave like a fluid, could "buoyant" buildings survive the melee without capsizing like a ship in a storm? Using computer models of flotation principles spelled out by Archimedes in the third century B.C., mathematician Chris Rorres is developing answers to this question. By testing the angle at which different shapes will tilt in liquid, Rorres obtained a measure that can predict whether an object will collapse following an earthquake. Other scientists point out that Rorres' model may not be directly applicable: post-earthquake soil may not act like a true liquid, and constructing buildings according to the requisite ratio may present other risks. Still, Rorres' work represents a promising idea in preventing earthquake destruction.
--- Lisa DeKeukelaere
Chaos theory, whose most famous example traces the cause of an earthquake to the flapping of butterfly wings, lurks behind unforeseen changes in the behavior of a waterslide system. In steep slides, the water flows fast enough to create swirling, unstable currents that can be disrupted by variables such as the fabric of a rider's bathing suit or even the amount of dust in the air. This unpredictability provides an interesting challenge for designing the slides, which requires not only mathematics, but also a keen sense of visual proportion and material behavior. Cold, clear water carrying a screaming, spandex-clad body through a fiberglass tube is not as simple as it sounds, after all. --- Lisa DeKeukelaere
https://chainsawriot.com/mannheim/2021/12/22/most2020.html

# Top 5 most important textual analysis methods papers of the year 2020
Posted on Dec 22, 2021 by Chung-hong Chan
I promised to write this on Twitter in January. Alex Wuttke and Valerie Hase said they were interested in reading this. However, the unfinished draft has been in my draft directory for almost a year. I don’t know if they are still interested in reading it anymore. The year 2021 is almost past. Like always, I was too ambitious. I think I should be practical and write a scaled-down version of this. Don’t expect it to be the same as the one last year (e.g. no detailed elaboration of the honorable mentions). As time has been stretched longer, I can really see how these papers influenced my own research in 2021. I have cited the following papers many times. So the delay can be a good thing.
The rules are the same though. 1
• The method should deal with “inconveniently middle-sized” datasets
• The method should deal with multilingual datasets
• The method should be validated and open-sourced
Before I move on, I think I need to emphasize that in 2020 and early 2021 some communication journals, broadly defined, published methodological special issues: for example, Journalism Studies, Political Communication, and the German-language M&K Medien & Kommunikationswissenschaft.
Some papers in the following list might officially be published in 2021. But they were online-first in 2020. Again, I am super biased and the list represents my clumsy opinion, which is not important and not significant. If I omitted any paper, please forgive me. I am a sinner.
Without further ado, here is the list of the top 5 most important — I think — papers.
• Watanabe, K. (2021). Latent semantic scaling: A semisupervised text analysis technique for new domains and languages. Communication Methods and Measures, 15(2), 81-102. doi
I think this is probably the first paper which really introduces semisupervised methods to the scene. Yes, the same author has two previous papers on semisupervised methods: 1. the newsmap paper and 2. another paper dealing with creating seed dictionaries for seeded LDA. This paper introduces a new method that communication scientists can readily use. The most important reason why I like this paper a lot is of course the R package LSX.
The design of the method is really clever: with a seed dictionary, the method uses SVD (or other word embedding techniques) to identify words in the corpus that are similar to the seed words and assign weights accordingly. The paper demonstrates the method’s applicability for English and Japanese data. My experience suggests the method also works for German data. The R implementation LSX is incredibly fast, thanks to the proxyC engine by the same author.
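To make the weighting idea concrete, here is a dependency-free Python sketch of the core step. The vectors below are invented toy numbers standing in for the SVD word embeddings, and `polarity` is my own minimal rendering of the idea, not the LSX implementation: each word's score is its seed-polarity-weighted average cosine similarity to the seed words.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy 3-d "embeddings"; in LSX these come from an SVD of the corpus.
vectors = {
    "good":      [0.9, 0.1, 0.0],
    "bad":       [-0.8, 0.2, 0.1],
    "excellent": [0.8, 0.3, 0.1],
    "awful":     [-0.9, 0.1, 0.0],
    "table":     [0.0, 0.1, 0.9],
}

# Seed dictionary: word -> polarity
seeds = {"good": 1.0, "bad": -1.0}

def polarity(word):
    """Seed-polarity-weighted average similarity, the core idea of latent semantic scaling."""
    return sum(p * cosine(vectors[word], vectors[s]) for s, p in seeds.items()) / len(seeds)

assert polarity("excellent") > 0      # close to the positive seed
assert polarity("awful") < 0          # close to the negative seed
assert abs(polarity("table")) < 0.2   # roughly neutral
```

Document-level scores would then be weighted averages of these word scores over the words in each document.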
Semisupervised methods are a nice idea. But my experience told me one should still validate the results.
Honorable mention: Baden et al. (2020) Hybrid Content Analysis: Toward a Strategy for the Theory-driven, Computer-assisted Classification of Large Text Corpora. Communication Methods & Measures. doi
This one is also trying to tackle the problem of unsupervised methods, but in a different way.
• Kroon, A. C., Trilling, D., & Raats, T. (2021). Guilty by Association: Using Word Embeddings to Measure Ethnic Stereotypes in News Coverage. Journalism & Mass Communication Quarterly, 98(2), 451-477. doi
This one might be the first major article published in a communication journal that uses locally trained word embeddings to study implicit sentiments in news coverage (correct me if I am wrong). Of course, there are quite a number of papers published in those big journals, e.g. Caliskan et al. (2017), and Garg et al., (2018). These papers use pretrained word embeddings trained on large corpora of text.
Being the first in a communication journal, the Amsterdam team creates locally trained word embeddings on Dutch news coverage and then measures the cosine similarity between words representing minority groups (e.g. Surinamese) and negative attributes (e.g. words representing low-status and high-threat stereotypes).
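At its core, such an association measure is just an averaged cosine similarity between a target word set and an attribute word set. A minimal sketch with invented toy vectors (not the paper's Dutch embeddings or its exact procedure):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Invented 3-d vectors for illustration only.
emb = {
    "group_a": [0.9, 0.1, 0.1],
    "group_b": [0.1, 0.9, 0.1],
    "threat":  [0.8, 0.2, 0.1],
    "danger":  [0.85, 0.1, 0.2],
}

def association(targets, attributes):
    """Mean cosine similarity between every target word and every attribute word."""
    sims = [cosine(emb[t], emb[a]) for t in targets for a in attributes]
    return sum(sims) / len(sims)

# In these toy vectors, group_a co-occurs with threat words and group_b does not,
# so group_a shows the stronger (stereotyped) association.
assert association(["group_a"], ["threat", "danger"]) > association(["group_b"], ["threat", "danger"])
```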
Of course, there are many ways one can critique the study 2. But the paper opens the door for applying the “1st generation” context-blind word embeddings in the analysis of news articles. The bug (word embedding biases) is actually a feature here.
• Song, H., Tolochko, P., Eberl, J. M., Eisele, O., Greussing, E., Heidenreich, T., Lind, F., Galyga, S., & Boomgaarden, H. G. (2020). In validations we trust? The impact of imperfect human annotations as a gold standard on the quality of validation of automated content analysis. Political Communication, 37(4), 550-572. doi
Krippendorff outlines the relationship between intercoder reliability and validity in content analysis in his “bible”. He basically says that when there is no intercoder reliability, there is also no validity. The study by the Vienna team demonstrates this point for automated content analysis using Monte Carlo simulations. The paper points out the overemphasis on the automated part of automated content analysis, while good automation depends on the quality of the manually coded data. The discussion section of the paper contains some recommendations for improving validation practices. 3
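The central point, that agreement measured against an imperfect gold standard misstates a classifier's true quality, can be reproduced in a few lines. This is my own toy sketch, not the paper's actual Monte Carlo design: a classifier that is right 90% of the time looks only about 74% accurate when validated against human annotations that are themselves wrong 20% of the time.

```python
import random

random.seed(42)
true_labels = [random.randint(0, 1) for _ in range(100000)]

def corrupt(labels, error_rate):
    """Flip each binary label independently with the given probability."""
    return [1 - y if random.random() < error_rate else y for y in labels]

classifier_out = corrupt(true_labels, 0.10)  # a "90% accurate" classifier
gold_noisy = corrupt(true_labels, 0.20)      # human coders wrong 20% of the time

def agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

acc_vs_perfect = agreement(classifier_out, true_labels)
acc_vs_noisy = agreement(classifier_out, gold_noisy)

# Expected agreement vs noisy gold: 0.9*0.8 + 0.1*0.2 = 0.74
assert acc_vs_noisy < acc_vs_perfect
assert abs(acc_vs_perfect - 0.90) < 0.01
assert abs(acc_vs_noisy - 0.74) < 0.01
```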
• Nicholls, T., & Culpepper, P. D. (2021). Computational identification of media frames: Strengths, weaknesses, and opportunities. Political Communication, 38(1-2), 159-181. doi
Similar to the previous one, this is in the category of “when computational methods don’t work”, and I have an affinity for these papers.
The greatest abuse of topic models is to treat topics as frames 4 and this paper shows exactly this point. Once again, topics are not Entmanian frames.
But I think this paper also reveals something about manual annotation of frames (Table 1 in the original paper). Actually, human coding of frames can vary between individuals. Computers, on the other hand, can generate “frames” with perfect reliability but with very low validity. There is still a lot of room to improve this, I think.
Honorable mention: Hopp et al. (2020) Dynamic Transactions Between News Frames and Sociopolitical Events: An Integrative, Hidden Markov Model Approach. Journal of Communication doi
Moral Foundations Dictionary, umm….
• Barberá, P., Boydstun, A. E., Linn, S., McMahon, R., & Nagler, J. (2021). Automated text classification of news articles: A practical guide. Political Analysis, 29(1), 19-42. doi
This is also in the category of “when computational methods don’t work”; this time, it is dictionary methods. As a guide, this paper is quite good. It explains every decision in the process of doing automated text classification. Empirically, it shows that news tone does not require very sophisticated supervised methods, e.g. deep learning, to detect. A simple regularized logistic regression can easily outperform dictionary methods. If there is a need to validate an off-the-shelf dictionary with manually annotated data, why not just use the annotated data to train a simple supervised machine learning model?
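To show how little machinery a “simple regularized logistic regression” over bag-of-words features really needs, here is a dependency-free toy. The four training documents are invented, and this plain batch gradient descent stands in for the proper tooling the paper benchmarks on real news data:

```python
import math

# Invented toy training set: (text, tone) with 1 = positive, 0 = negative
train = [
    ("economy grows markets rally optimism", 1),
    ("stocks surge record profits boom", 1),
    ("recession fears markets crash losses", 0),
    ("layoffs slump crisis losses deepen", 0),
]

vocab = {w: i for i, w in enumerate(sorted({w for text, _ in train for w in text.split()}))}

def featurize(text):
    """Bag-of-words count vector over the training vocabulary."""
    x = [0.0] * len(vocab)
    for w in text.split():
        if w in vocab:
            x[vocab[w]] += 1.0
    return x

# L2-regularized logistic regression via full-batch gradient descent
w, b = [0.0] * len(vocab), 0.0
lr, lam = 0.5, 0.01
for _ in range(200):
    gw = [lam * wi for wi in w]  # regularization gradient
    gb = 0.0
    for text, y in train:
        x = featurize(text)
        z = b + sum(wi * xi for wi, xi in zip(w, x))
        err = 1.0 / (1.0 + math.exp(-z)) - y  # dLoss/dz
        gb += err
        for i, xi in enumerate(x):
            if xi:
                gw[i] += err * xi
    w = [wi - lr * gi / len(train) for wi, gi in zip(w, gw)]
    b -= lr * gb / len(train)

def predict(text):
    z = b + sum(wi * xi for wi, xi in zip(w, featurize(text)))
    return 1 if z > 0 else 0

assert predict("markets rally profits") == 1
assert predict("crash losses recession") == 0
```

In practice one would of course reach for an existing library rather than hand-rolling the optimizer; the point is only that the model class itself is simple.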
Following this line of thought, I’ve recently started to wonder: I think most properly trained supervised machine learning models have an accuracy of ~75% for predicting the constructs that we (communication researchers) are interested in. They are certainly useful, no doubt about it. But HOW helpful are they? I like to think about it with a human analogy as a Gedankenexperiment. Suppose I knew my student helpers would incorrectly code the provided data 25% of the time; would I still hire them? Would I be worried about the quality of my research based on the data annotated by these student helpers?
When I replace the word “student helpers” with “machine learning models”, would you come to the same answers to all these questions?
1. Actually, there is a hidden rule: I can’t pick my own shit. For example, the oolong or rectr paper… Well, compared with these papers, my papers are really… my own shit.
2. For example, using the STAB notation (ST being targets, AB being attributes), the study only uses S and A, i.e. one target group and one attribute group. I think it is accepted that adding B would probably make the results better. Moreover, like all of the word embedding bias detection methods, the measurement is at the corpus-level, not at the sentence- or article-level. I still don’t know how to properly validate these methods. Previously, these methods were validated by either showing the “intelligence” of the trained word embeddings (e.g. analogy test) or showing the correlation between the word embedding biases and real-life indicators of social biases. These are not true validation.
3. And those recommendations influence several design decisions of oolong. And I promise not to talk about my own shit, so it is in a footnote.
4. I must admit, I abused it once too. I am a sinner. Repent ye, for the kingdom of heaven is at hand. Please forgive me.
http://gmatclub.com/forum/some-people-have-questioned-the-judge-s-objectivity-in-cases-114401.html

# Some people have questioned the judge's objectivity in cases : GMAT Critical Reasoning (CR)
Posted: 30 May 2011. Question stats: 65% (02:35) correct, 35% (01:25) wrong, based on 319 sessions; difficulty 35% (medium).
Some people have questioned the judge’s objectivity in cases of sex discrimination against women. But the record shows that in sixty percent of such cases, the judge has decided in favor of the women. This record demonstrates that the judge has not discriminated against women in cases of sex discrimination against women.
The argument above is flawed in that it ignores the possibility that
(A) a large number of the judge’s cases arose out of allegations of sex discrimination against women
(B) many judges find it difficult to be objective in cases of sex discrimination against women
(C) the judge is biased against women defendants or plaintiffs in cases that do not involve sex discrimination
(D) the majority of the cases of sex discrimination against women that have reached the judge’s court have been appealed from a lower court
(E) the evidence shows that the women should have won in more than sixty percent of the judge’s cases involving sex discrimination against women
(A) a large number of the judge’s cases arose out of allegations of sex discrimination against women
So what? Let it be! We are not looking at the number of cases that come onto the judge's plate.
(B) many judges find it difficult to be objective in cases of sex discrimination against women
Doesn't explain whether the judge is wrong or right.
(C) the judge is biased against women defendants or plaintiffs in cases that do not involve sex discrimination
The argument is not concerned with non-sex-discrimination cases.
(D) the majority of the cases of sex discrimination against women that have reached the judge’s court have been appealed from a lower court
Still, this doesn't prove whether the judge is wrong or right.
(E) the evidence shows that the women should have won in more than sixty percent of the judge’s cases involving sex discrimination against women
Yep. This answer shows that the judge may have discriminated against women!
Most of the answers weaken the judge's fairness, but E does so most seriously!
E for me
I was not very firm with E initially, but I see that when I compare it with the other options, E works out best.
And I was then happy to see that other folks have also opted for E.
Only two possibilities here: either the women should have won in those cases, or the judge would have to be biased toward deciding for women.
A and E play on the same idea.
E is the better choice and hence the winner.
E shows that the women should have won but did not win, due to the judge's discrimination in such cases, thus weakening the statement that the judge is not biased.
(A) a large number of the judge’s cases arose out of allegations of sex discrimination against women
Not related to the argument.
(B) many judges find it difficult to be objective in cases of sex discrimination against women
Not related to the argument.
(C) the judge is biased against women defendants or plaintiffs in cases that do not involve sex discrimination
Not related to the argument.
(D) the majority of the cases of sex discrimination against women that have reached the judge’s court have been appealed from a lower court
Maybe... but not related to the judge => wrong.
(E) the evidence shows that the women should have won in more than sixty percent of the judge’s cases involving sex discrimination against women
Correct.
Hey all,
Can't we say that if the cases have been appealed from a lower court, then the judges might have discriminated against women (in the lower court), and as a result these cases are now in this judge's (maybe higher) court?
Thank you.
shreygupta3192 wrote:
hey all..
cant we say that if the cases have been appealed from lower court then judges might have discriminated against women (in lower court)and as a result, these cases are now in judges court(may be upper court)
thanku
But it is not clear who re-appealed in the higher court. If the women did, then yes; but if the men did, then no.
So this answer is open to interpretation and can go either way.
I'm not very clear on your line of thought, since the option already mentions "cases against women", so how can it be an open statement?
shreygupta3192 wrote:
not very clear with your line of thought since it is already mentioned in the option "cases against women"..so how can it be an open statement..
OK, I will give it another shot.
The option says: "the majority of the cases of sex discrimination against women that have reached the judge’s court have been appealed from a lower court."
Quote:
cant we say that if the cases have been appealed from lower court then judges might have discriminated against women (in lower court)and as a result, these cases are now in judges court(may be upper court)
There are two parties in court, i.e. the defendant and the prosecutor. Both can re-appeal in a higher court if they are not satisfied with the outcome in the lower court. The statement just tells us the cases were re-appealed in a higher court; it doesn't mention who re-appealed, i.e. the prosecutor or the defendant!
Case 1: The judgement in the lower court might have been in favor of women more than it should have been, and the defendant (the other party) re-appealed in a higher court.
Case 2: The judgement in the lower court might have been AGAINST women more than it should have been, and the prosecutor re-appealed in a higher court.
As both contradictory readings are possible interpretations of the statement, it fails to pin down the flaw in the reasoning.
Please let me know if I was able to explain, or if there is a flaw in my reasoning.
(A) a large number of the judge’s cases arose out of allegations of sex discrimination against women
--- Does not lead to any result.
(B) many judges find it difficult to be objective in cases of sex discrimination against women
--- Again, we cannot conclude anything from this option, as it presents a general view.
(C) the judge is biased against women defendants or plaintiffs in cases that do not involve sex discrimination
--- Out of scope.
(D) the majority of the cases of sex discrimination against women that have reached the judge’s court have been appealed from a lower court
--- This does not shed light on whether the men appealed (in which case the lower-court judges decided in favor of the women) or the women appealed (in which case the lower-court judges decided against the women) to the higher court.
(E) the evidence shows that the women should have won in more than sixty percent of the judge’s cases involving sex discrimination against women
--- This option opens up the idea that a greater percentage of women could have won their cases, but because of the judge's discrimination this percentage was limited to only sixty percent.
http://openmdao.org/twodocs/versions/2.3.0/features/core_features/working_with_derivatives/approximating_partials.html

# Approximating Partial Derivatives
OpenMDAO allows you to specify analytic derivatives for your models, but it is not a requirement. If certain partial derivatives are not available, you can ask the framework to approximate them by using the declare_partials method inside setup, giving it a method that is either 'fd' for finite difference or 'cs' for complex step.
Component.declare_partials(of, wrt, dependent=True, rows=None, cols=None, val=None, method='exact', step=None, form=None, step_calc=None)
Parameters:

- of (str or list of str): The name of the residual(s) that derivatives are being computed for. May also contain a glob pattern.
- wrt (str or list of str): The name of the variables that derivatives are taken with respect to. This can contain the name of any input or output variable. May also contain a glob pattern.
- dependent (bool, default True): If False, specifies no dependence between the output(s) and the input(s). This is only necessary in the case of a sparse global jacobian, because if 'dependent=False' is not specified and declare_partials is not called for a given pair, then a dense matrix of zeros will be allocated in the sparse global jacobian for that pair. In the case of a dense global jacobian it doesn't matter, because the space for a dense subjac will always be allocated for every pair.
- rows (ndarray of int or None): Row indices for each nonzero entry. For sparse subjacobians only.
- cols (ndarray of int or None): Column indices for each nonzero entry. For sparse subjacobians only.
- val (float, ndarray of float, or scipy.sparse): Value of subjacobian. If rows and cols are not None, this will contain the values found at each (row, col) location in the subjac.
- method (str): The type of approximation that should be used. Valid options include 'fd' (finite difference), 'cs' (complex step), and 'exact' (use the component-defined analytic derivatives). Default is 'exact'.
- step (float): Step size for approximation. Defaults to None, in which case the approximation method provides its default value.
- form (str): Form for finite difference; can be 'forward', 'backward', or 'central'. Defaults to None, in which case the approximation method provides its default value.
- step_calc (str): Step type for finite difference; can be 'abs' for absolute or 'rel' for relative. Defaults to None, in which case the approximation method provides its default value.
## Usage
1. You may use glob patterns as arguments to of and wrt.
import numpy as np
from openmdao.api import Problem, Group, IndepVarComp, ExplicitComponent

class FDPartialComp(ExplicitComponent):

    def setup(self):
        # Input/output declarations restored; shapes inferred from compute below.
        self.add_input('x', shape=(4,))
        self.add_input('y', shape=(2,))
        self.add_output('f', shape=(2,))

        self.declare_partials('f', 'y*', method='fd')
        self.declare_partials('f', 'x', method='fd')

    def compute(self, inputs, outputs):
        f = outputs['f']
        x = inputs['x']
        y = inputs['y']

        f[0] = x[0] + y[0]
        f[1] = np.dot([0, 2, 3, 4], x) + y[1]

model = Group()
comp = IndepVarComp()
# Independent-variable outputs and subsystem wiring restored to match the connections.
comp.add_output('x', np.ones(4))
comp.add_output('y', np.ones(2))
model.add_subsystem('input', comp)
model.add_subsystem('example', FDPartialComp())

model.connect('input.x', 'example.x')
model.connect('input.y', 'example.y')

problem = Problem(model=model)
problem.setup(check=False)
problem.run_model()
totals = problem.compute_totals(['example.f'], ['input.x', 'input.y'])
print(totals['example.f', 'input.x'])
[[ 1. -0. -0. -0.]
[-0. 2. 3. 4.]]
print(totals['example.f', 'input.y'])
[[ 1. -0.]
[-0. 1.]]
2. For finite difference approximations (method='fd'), there are three (optional) parameters: the form, step size, and step_calc. The form should be one of the following:
• form='forward' (default): Approximates the derivative as $$\displaystyle\frac{\partial f}{\partial x} \approx \frac{f(x+\delta, y) - f(x,y)}{||\delta||}$$. Error scales like $$||\delta||$$.
• form='backward': Approximates the derivative as $$\displaystyle\frac{\partial f}{\partial x} \approx \frac{f(x,y) - f(x-\delta, y) }{||\delta||}$$. Error scales like $$||\delta||$$.
• form='central': Approximates the derivative as $$\displaystyle\frac{\partial f}{\partial x} \approx \frac{f(x+\delta, y) - f(x-\delta,y)}{2||\delta||}$$. Error scales like $$||\delta||^2$$, but requires an extra function evaluation.
The step size can be any nonzero number, but it should be positive (one can change the form to perform backward finite difference formulas), small enough to reduce truncation error, but large enough to avoid round-off error. Choosing a step size can be highly problem-dependent, but for double-precision floating-point numbers and reasonably bounded derivatives, $$10^{-6}$$ can be a good place to start. The step_calc can be either 'abs' for absolute or 'rel' for relative. It determines whether the step size is absolute or a percentage of the input value.
import numpy as np
from openmdao.api import Problem, Group, IndepVarComp, ExplicitComponent

class FDPartialComp(ExplicitComponent):

    def setup(self):
        # Input/output declarations restored; shapes inferred from compute below.
        self.add_input('x', shape=(4,))
        self.add_input('y', shape=(2,))
        self.add_output('f', shape=(2,))

        self.declare_partials('f', 'y*', method='fd', form='backward', step=1e-6)
        self.declare_partials('f', 'x', method='fd', form='central', step=1e-4)

    def compute(self, inputs, outputs):
        f = outputs['f']
        x = inputs['x']
        y = inputs['y']

        f[0] = x[0] + y[0]
        f[1] = np.dot([0, 2, 3, 4], x) + y[1]

model = Group()
comp = IndepVarComp()
# Independent-variable outputs and subsystem wiring restored to match the connections.
comp.add_output('x', np.ones(4))
comp.add_output('y', np.ones(2))
model.add_subsystem('input', comp)
model.add_subsystem('example', FDPartialComp())

model.connect('input.x', 'example.x')
model.connect('input.y', 'example.y')

problem = Problem(model=model)
problem.setup(check=False)
problem.run_model()
totals = problem.compute_totals(['example.f'], ['input.x', 'input.y'])
print(totals['example.f', 'input.x'])
[[ 1. -0. -0. -0.]
[-0. 2. 3. 4.]]
print(totals['example.f', 'input.y'])
[[ 1. -0.]
[-0. 1.]]
## Complex Step
If you have a pure Python component (or an external code that can support complex inputs and outputs), then you can also choose to use complex step to calculate the Jacobian of that component. This gives more accurate derivatives that are insensitive to the step size. Like finite difference, complex step runs your component using the apply_nonlinear or solve_nonlinear functions, but it applies a step in the complex direction. You can activate it by giving the declare_partials method, inside setup, a method of ‘cs’. In many cases this requires no other changes to your code, as long as all of the calculations in your solve_nonlinear and apply_nonlinear support complex numbers. During a complex step, the incoming inputs vector returns a complex number when a variable is being stepped; likewise, the outputs and residuals vectors accept complex values. If you are allocating temporary NumPy arrays, remember to set their dtype based on the dtype in the outputs vector.
Here is how to turn on complex step for all input/output pairs in the Sellar problem:
class SellarDis1CS(ExplicitComponent):
"""
Component containing Discipline 1 -- no derivatives version.
Uses Complex Step
"""
    def setup(self):
        # Global Design Variable
        self.add_input('z', val=np.zeros(2))

        # Local Design Variable
        self.add_input('x', val=0.)

        # Coupling parameter
        self.add_input('y2', val=1.0)

        # Coupling output
        self.add_output('y1', val=1.0)

        # Complex step all partials.
        self.declare_partials('*', '*', method='cs')
    def compute(self, inputs, outputs):
        """
        Evaluates the equation
        y1 = z1**2 + z2 + x1 - 0.2*y2
        """
        z1 = inputs['z'][0]
        z2 = inputs['z'][1]
        x1 = inputs['x']
        y2 = inputs['y2']

        outputs['y1'] = z1**2 + z2 + x1 - 0.2*y2
class SellarDis2CS(ExplicitComponent):
"""
Component containing Discipline 2 -- no derivatives version.
Uses Complex Step
"""
    def setup(self):
        # Global Design Variable
        self.add_input('z', val=np.zeros(2))

        # Coupling parameter
        self.add_input('y1', val=1.0)

        # Coupling output
        self.add_output('y2', val=1.0)

        # Complex step all partials.
        self.declare_partials('*', '*', method='cs')
    def compute(self, inputs, outputs):
        """
        Evaluates the equation
        y2 = y1**(.5) + z1 + z2
        """
        z1 = inputs['z'][0]
        z2 = inputs['z'][1]
        y1 = inputs['y1']

        # Note: this may cause some issues. However, y1 is constrained to be
        # above 3.16, so let's just let it converge, and the optimizer will
        # throw it out.
        if y1.real < 0.0:
            y1 *= -1

        outputs['y2'] = y1**.5 + z1 + z2
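The step-size insensitivity is the whole point of complex step, and it is easy to see outside of OpenMDAO (a plain-NumPy sketch, not part of the Sellar components above): because the derivative is read off the imaginary part rather than a difference of nearly equal numbers, the step can be made absurdly small with no round-off penalty.

```python
import numpy as np

def f(x):
    return x**2 + np.sqrt(x)   # any function analytic in x will do

x0 = 2.0
exact = 2.0 * x0 + 0.5 / np.sqrt(x0)

# Complex step: no subtractive cancellation, so a tiny step is safe.
h = 1e-30
d_cs = np.imag(f(x0 + 1j * h)) / h

# Forward difference for comparison, with a "good" step size.
d_fd = (f(x0 + 1e-6) - f(x0)) / 1e-6

print(f"complex-step error: {abs(d_cs - exact):.2e}")
print(f"forward-diff error: {abs(d_fd - exact):.2e}")
```

The complex-step result is accurate to machine precision even with h = 1e-30, while the forward difference is stuck near 1e-6 accuracy.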
http://mathhelpforum.com/algebra/8996-ln-ex-log-questions.html | # Math Help - ln and ex log questions
1. ## ln and ex log questions
Here are some ln (lawn) and e^x questions. I'm still learning these, so I'm not so good at solving them, and these ones are hard, so I was wondering if anyone can help.
If that picture doesn't work then here's a link
http://img329.imageshack.us/img329/3...p141922wy7.png
2. How much do you know about logarithms? To me these questions appear to be elementary applications of the laws of logs. Could you be more specific as to where you're having problems?
3. Solve $e^x-3e^{-x}=2$
Multiply through by $e^x$ and rearrange to get:
$e^{2x}-2e^x-3=0$
Now put $u=e^x$ and you get a quadratic:
$u^2-2u-3=0$.
Solve this and then the solutions for $x$ are the natural logarithms of the $u$'s.
The seventh question - Solve $e^x+7e^{-x}=8$
can be solved by the same method.
RonL
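Carrying RonL's substitution through to the end (this last step is mine, not part of his post), the quadratic factors and only the positive root survives, since $u=e^x>0$ for every real $x$:

```latex
u^2 - 2u - 3 = (u-3)(u+1) = 0
\quad\Rightarrow\quad u = 3 \ \text{or}\ u = -1.
```

Rejecting $u=-1$, we get $e^x = 3$, so $x = \ln 3 \approx 1.0986$.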
4. Hello, SportfreundeKeaneKent!
Here are a couple of them . . .
2) Solve: . $(\ln x)^2\:=\:\ln(x^2)$
We have: . $(\ln x)^2\:=\:2\ln x\quad\Rightarrow\quad (\ln x)^2 - 2\ln x \:=\:0$
Factor: . $\ln x(\ln x - 2)\:=\:0$
And we have two equations to solve:
. . $\ln x \,= \,0\quad\Rightarrow\quad\boxed{ x \,= \,1}$
. . $\ln x \,=\,2\quad\Rightarrow\quad\boxed{ x \,= \,e^2}$
3) The graph of $y = \ln x$ is rotated 90° CCW about the origin.
What is the equation of the new graph?
The graph of $y = \ln x$ looks like this:
Code:
|
| *
| *
------+-------*----------
| * 1
| *
| *
|
|*
|
Rotated 90° CCW, it looks like this:
Code:
* |
|
* |
* |
* |
*
| *
| *
| *
------------+--------------------
|
We're expected to recognize this as the graph of: $y \:=\:e^{-x}$
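Soroban's reading of the rotated graph can also be checked algebraically (my addition): a 90° CCW rotation about the origin sends each point $(x,y)$ to $(-y,x)$, so a point $(t, \ln t)$ on the original curve maps to $(x', y')$ with

```latex
x' = -\ln t, \qquad y' = t
\quad\Rightarrow\quad t = e^{-x'}
\quad\Rightarrow\quad y' = e^{-x'},
```

which is exactly the graph of $y = e^{-x}$.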
5. We're sort of doing the log unit at school right now. I thought that these seemed pretty simple too but there's always one extra thing you have to do on some of these harder problems that the teacher gives. The ones from the textbook aren't a problem. lnx is a bit more of a problem and I'm still learning that.
6. Originally Posted by SportfreundeKeaneKent
We're sort of doing the log unit at school right now. I thought that these seemed pretty simple too but there's always one extra thing you have to do on some of these harder problems that the teacher gives. The ones from the textbook aren't a problem. lnx is a bit more of a problem and I'm still learning that.
Hello Sportsfreund,
I'll send you a .pdf-file to show you how the rules of powers and the rules of logarithms are connected. (It isn't necessary to learn German to understand this file)
EB
7. Hello again, SportfreundeKeaneKent!
A couple more . . .
4) Solve: . $\ln\left(\frac{e^2x^2+1}{2x}\right) \:=\:1$
We have: . $\frac{e^2x^2+1}{2x} \:=\:e\quad\Rightarrow\quad e^2x^2 + 1 \:=\:2ex\quad\Rightarrow\quad e^2x^2 - 2ex + 1 \:=\:0$
Factor: . $(ex - 1)^2\:=\:0\quad\Rightarrow\quad ex - 1\:=\:0\quad\Rightarrow\quad ex \,=\,1\quad\Rightarrow\quad x \,=\,\frac{1}{e}$
6) Solve: . $\ln(x^2 + e) - \ln(x + 1)\:=\:1$
We have: . $\ln\left(\frac{x^2+e}{x+1}\right) \:=\:1\quad\Rightarrow\quad\frac{x^2+e}{x+1} \:=\:e\quad\Rightarrow\quad x^2 + e \:=\:ex + e\quad\Rightarrow\quad x^2 - ex \:=\:0$
Factor: . $x(x - e)\:=\:0$
And we have two equations to solve:
. . $\boxed{x\:=\:0}$
. . $x - e \:=\:0\quad\Rightarrow\quad\boxed{x = e}$
and both answers check out . . .
8. Could someone check up on these answers for me for these questions:
For question 1, I got x = 1.098
For question 5, I got x = 1.946 and x = 0
For question 2, does x = 0 AND e^x or just e^x/7.389
And for the last question (e^3x=15), I got x = 0.9026
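Those candidate answers are easy to sanity-check numerically (my addition, not part of the thread) by plugging them back into the two exponential equations from the earlier posts:

```python
import math

# x = ln 3 should solve e**x - 3*e**-x = 2
x = math.log(3)                      # ~1.0986, matching the "1.098" above
print(math.exp(x) - 3 * math.exp(-x))

# x = 0 and x = ln 7 should solve e**x + 7*e**-x = 8
for x in (0.0, math.log(7)):         # ln 7 ~ 1.9459, matching "1.946"
    print(math.exp(x) + 7 * math.exp(-x))
```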
9. Oh and also, I was doing question 4 down that list and I got x = 0.5 which is different than the answer Soroban's answer of 1/e. I was wondering if anyone could check up on this.
I used the quotient rule in this question so that it became this:
ln(e^2 x^2+1)-ln2x=1
2lne^2 x+ 0 - ln2x=1
2x + 0 - ln2x = 1
2x(1-ln1)=1
2x=1
x-1/2 or 0.5
When I plugged x = 0.5 into the equation to check my work, it worked out so I'm really confused about whether what Soroban did was right or the way I've done it is right.
10. Originally Posted by SportfreundeKeaneKent
Oh and also, I was doing question 4 down that list and I got x = 0.5 which is different than the answer Soroban's answer of 1/e. I was wondering if anyone could check up on this.
I used the quotient rule in this question so that it became this:
ln(e^2 x^2+1)-ln2x=1
2lne^2 x+ 0 - ln2x=1
2x + 0 - ln2x = 1
2x(1-ln1)=1
2x=1
x-1/2 or 0.5
When I plugged x = 0.5 into the equation to check my work, it worked out so I'm really confused about whether what Soroban did was right or the way I've done it is right.
Hello Sportsfreund,
here:
ln(e^2 x^2+1)-ln2x=1
2lne^2 x+ 0 - ln2x=1
you've made a very common mistake: You used the "ln" as a kind of variable and expanded the brackets by rule of distribution. But ln is a function and the contents of the brackets are the arguments of this function.
You can get rid of the "ln" if you use the following property:
$e^{\ln(x)}=x$
Soroban's solution is 100% correct.
EB | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 25, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8609328866004944, "perplexity": 903.9523839544927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824395.52/warc/CC-MAIN-20160723071024-00187-ip-10-185-27-174.ec2.internal.warc.gz"} |
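For what it's worth, a quick numeric check confirms EB's verdict (my addition, not part of the thread):

```python
import math

def lhs(x):
    # left-hand side of question 4: ln((e^2 x^2 + 1) / (2x))
    return math.log((math.e**2 * x**2 + 1.0) / (2.0 * x))

print(lhs(1.0 / math.e))   # Soroban's x = 1/e
print(lhs(0.5))            # the attempted x = 0.5
```

Only x = 1/e actually makes the left-hand side equal to 1; x = 0.5 gives roughly 1.05, so "checking" it must have involved the same algebra slip.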
http://www.eurotrib.com/story/2012/6/8/5922/12487 | ## Pacta sunt..?
by afew Fri Jun 8th, 2012 at 05:09:22 AM EST
Yesterday, in a television interview, the German chancellor made clear:
"budget consolidation and growth are two sides of the same coin. Without solid finances, there is no growth, but solid finances alone are not enough; there are other points - above all, questions of competitiveness," she said.
In other words, no change in the German neoliberal mantra: slash public services, increase labour market precarity in order to reduce wages -- which both mean increasing inequality and poverty in any country, including Germany, and far more so in countries straitjacketed into brutal deflation.
A month or so ago, just before François Hollande won the French presidential election, the German government in the person of its finance minister explained how "Europe works":
“We’ve told Mister Hollande that the fiscal pact has been signed and that Europe works along the principle of pacta sunt servanda,” meaning agreements must be kept, Schaeuble said in a speech in the western German city of Cologne today.
Let's see: here are some excerpts from agreements Germany has signed and ratified:
From the statements of intent in the preamble to Treaty on European Union (my bold):
RECALLING the historic importance of the ending of the division of the European continent and the need to create firm bases for the construction of the future Europe
CONFIRMING their attachment to fundamental social rights as defined in the European Social Charter signed at Turin on 18 October 1961 and in the 1989 Community Charter of the Fundamental Social Rights of Workers
DESIRING to deepen the solidarity between their peoples
RESOLVED to achieve the strengthening and the convergence of their economies
DETERMINED to promote economic and social progress for their peoples
RESOLVED to ensure the economic and social progress of their States by common action to eliminate the barriers which divide Europe
AFFIRMING as the essential objective of their efforts the constant improvements of the living and working conditions of their peoples
RECOGNISING that the removal of existing obstacles calls for concerted action in order to guarantee steady expansion, balanced trade and fair competition
ANXIOUS to strengthen the unity of their economies and to ensure their harmonious development by reducing the differences existing between the various regions and the backwardness of the less favoured regions
Since negotiating, signing, and ratifying those treaties, Germany has pursued self-aggrandising policies, via internal devaluation, at the expense of its European partners. Herr Schäuble: what is Germany's signature worth?
I know. It's tough love: all for our own good.
Eurointelligence Daily Briefing: The implosion of Mario Monti (08.06.2012)

Handelsblatt worries about the cost of Europe to Germany

Under the headline "What does Europe cost" Handelsblatt worries about the huge cost Germany will have to shoulder to save the euro and European integration. "The Americans say Germany has to supply money to Europe", the paper writes in its leading front-page story. "The French request that Germany has to share the advantage it gets because of its low interest rate. Only Germany has the financial power to save the euro, Brussels says." The paper goes on to add up the sums that Germany is already paying for the euro rescue: €280bn for the EFSF, €57bn as the share for the ECB's SMP, €79bn as Germany's net contribution to the European budget since the year 2000. But all of that was not enough, Handelsblatt writes. "Now the crisis managers develop new ideas on a daily basis: there is the banking union, an attempt to use the German savings for bankrupt banks in the South, and there are Eurobonds, in which the South would profit from the good German creditworthiness." In an editorial Gabor Steingart, the paper's editor in chief, notes: "It is not the European idea that has failed. It is the belief that one country can live at the expense of the other." (my emphasis)

If you are not convinced, try it on someone who has not been entirely debauched by economics. — Piero Sraffa
Steingart is right - Germany has freeloaded on the rest of Europe to build up the export machine while holding down the exchange rate and inflation. Now the instability caused by German irresponsible mercantilism proves that the German economic attitude of living off the rest of Europe is not sustainable.
Worth noting, as I haven't seen it noted elsewhere, that the Euro allowed Germany to export inflation to Spain and elsewhere...
But that's not what Steingart means, he means the rest cannot be living off Germany. If you are not convinced, try it on someone who has not been entirely debauched by economics. — Piero Sraffa
Steingart counts a total of €416bn (including EU budget contributions as far back as 2000 -- shades of Maggie Thatcher!). But, according to Bundesbank balance of payments statistics (pdf), the German current account surplus with the rest of the EU for just the past five years amounts to €564bn. So, if we're going to get niggly about accounts, Germany is up by at least €146bn.
Or he could see this for the euro area, also from the BuBa (PDF):
Here is another estimate of the cost of the Euro crisis to Germany. If you are not convinced, try it on someone who has not been entirely debauched by economics. — Piero Sraffa
Those are projections, not established numbers. Anyway, if Germany's stupid policy blows up in their face, they will have no one else to blame. Though, of course, they will try...
Though this is interesting: The whole problem being, of course, one of political will. In which it is clear that German power circles have no intention of respecting the essential stated principles of the treaties they signed.
That is what I was referring to. The cost is, by now, several multiples of the maybe €50 billion it would have cost for the ECB to forcefully prop up Greek bond prices in February 2010. But a small country had to be thrown against the wall to show that the Ecofin meant business, and here we are. If you are not convinced, try it on someone who has not been entirely debauched by economics. — Piero Sraffa
Right. I don't think German governing and financial elites ever wanted Greece in the euro, and were happy to seize the wall-slamming opportunity. Yet there's a contradiction between their debt-sinner-needs-punishment attitude, and their manipulation of the single currency to their advantage -- both by using internal devaluation to gain a competitive advantage wrt euro area countries (and the "converging" countries whose currencies are euro-pegged), and in benefitting on world markets from improved terms of trade thanks to a softer currency than the DM would have been, as a result of the very presence of weaker economies in the euro-mix. That contradiction is fundamentally Protestant: judgemental puritanism on the one hand, hard-nosed business attitudes on the other -- and some self-congratulatory bullshit about how you deserve your success because you are virtuous (not underhanded and greedy).
I think the inflation story (which I personally only just realised in this thread) is even more important. Without the Euro, all the hot money that inflated the Spanish real estate bubble (and others) would have needed to stay in Germany - and it's hard to see how it would not have caused inflation in Germany...
I don't think German governing and financial elites ever wanted Greece in the euro, and were happy to seize the wall-slamming opportunity. I wouldn't concentrate on Germany in this instance. It's clear from the record that it's the EU-wide economic policy establishment, irrespective of nationality, that has been responsible for the wall-slamming. The Ecofin and the Commission in particular. The critique by Daniel Cohn Bendit in May 2010 is forceful and on point. If you are not convinced, try it on someone who has not been entirely debauched by economics. — Piero Sraffa
"It is not the European idea that has failed. It is the belief that one country can live at the expense of the other."

That two opposite interpretations of the implications of this statement can coexist irreconcilably vividly indicates the limits of our culture in providing for the common good. The cultural power of denial, a tradition of respect for authority and the imperative to think well of one's self and country, combined with leaders who instinctively play to those traits, have, in Germany, led to the majority of the population self-righteously assuming that policies that were pursued, in no small part, for the benefit of a few powerful interest groups represented virtuous sacrifices by all for the common good. It is both easy and powerful to frame an issue in terms of common prejudices and traits. Thus it is scarcely surprising that most Germans react with anger at those in the periphery when lying leaders tell them that the problem is the result of lazy southerners and their corrupt governments and societies not honoring their obligations. This, of course, omits any mention of the fact that the real problem is with corrupt German elites collaborating with corrupt peripheral elites for mutual profit at the expense of the people of both countries. Unfortunately, the citizenry of peripheral countries has also been subjected to comparably idiotic framing, carefully tailored to the history and circumstances of their country, but incorporating the same wealth-serving biases that underlie all 'capitalist' cultures.

"It is not necessary to have hope in order to persevere."
Spiegel: Playing Until the Germans Lose Their Nerve (06/08/2012)

We've now reached a phase in the euro crisis when everyone is trying to feather their own nest at someone else's expense. Except for Germany, it's understood.

Hollande is campaigning to have the European Union help the Spanish rehabilitate their banks without involving itself in their business dealings. But, in doing so, he's much less focused on Spain's well-being than on France's. Once the principle stating that countries can only receive financial assistance in return for allowing external oversight has been contravened, one is left with nothing more than a pretty piece of paper to insure against the vicissitudes of economic life. And, of course, the next banks that will then be able to (and presumably also will) get a fresh injection of cash straight from Brussels are the ones in Paris. ...

The next stage in the crisis will be blatant blackmail. With their refusal to accept money from the bailout fund to recapitalize their banks, the Spanish are not far from causing the entire system to explode. They clearly figure that the Germans will lose their nerve and agree to rehabilitate their banks for them without demanding any guarantee in return that things will take a lasting turn for the better. ...

Germans have always expected that being part of a united Europe meant that national interests would recede into the background until they eventually lost all significance. One recognizes in this hope the legacy of political romanticism. Indeed, only political simpletons assume that when people in Madrid, Rome or Paris talk about Europe, they really mean the European Union. ...

Jan Fleischhauer is the author of "Der Schwarze Kanal," or "The Black Channel," SPIEGEL ONLINE's weekly conservative political column. Black is a reference to the political color of Chancellor Angela Merkel's political party, the center-right Christian Democratic Union.
If you are not convinced, try it on someone who has not been entirely debauched by economics. — Piero Sraffa
Yanis Varoufakis: Solidarity Euro-Style: Finnish loans, ECB bond purchases, EFSF tough love and assorted horror stories from the postmodern Euro-Workhouse (7 June 2012)

The first criticism, about the EFSF-ESM's size, is true but irrelevant. As I have argued from day one of the EFSF's creation, its problem is not its size but its CDO-like structure. Turning to the second criticism, that it resembles a Dickensian Workhouse, Spain's current predicament is instructive: To get money to give to its decrepit banks, the nation must be humiliated and undergo further fiscal waterboarding so that Italy and others are deterred from turning to the EFSF for help. In this sense, when Europe's functionaries say that there is no need for further action on Spain since the EFSF is available to help, they are inviting the Spanish to enter the Workhouse for a life of undeserved misery on behalf of their bankers. And they have the audacity to call this 'solidarity' with the Spanish people. ...

As I wrote in a Le Monde article recently, the bankrupt Greek state was recently forced, by the troika, to borrow €4.2 billion from the EFSF so as immediately to pass it on to the European Central Bank (ECB) so as to redeem Greek government bonds that the ECB had previously purchased in a failed attempt to shore up their price. This new loan boosted Greece's debt substantially but netted the ECB a profit of around €840 million (courtesy of the 20% discount at which the ECB had purchased these bonds). Is this 'solidarity with the fallen', even of a Victorian Workhouse type? ...

When the 2nd Greek 'bailout' was agreed, you may recall that the Finnish government asked for guarantees, for collateral, that would reduce its exposure to Greece. The Greek government conceded, promising collateral of €925 million in value. One might have expected that the said collateral would come in the form of some assets (e.g. Greek government owned real estate). But no! Helsinki would have none of that.

Instead, they demanded... cash! And cash they received. Last month, in May 2012, Athens wired €311 million to the Helsinki government, as a first installment. My sources here in the United States tell me that the Finnish government is now seeking to invest this money in joint ventures with US and other European firms. Now that is what I call solidarity with Greece...

If you are not convinced, try it on someone who has not been entirely debauched by economics. — Piero Sraffa
...Europe works along the principle of pacta sunt servanda...

There are several layers of meaning to this statement. It is true that, for a pact to work, the terms of the pact must be observed, at least in part and by most of the signatories. But that does not give such a statement, in Latin, any divine authority. The countries of Europe could make a pact that, henceforth, the sun shall rise in the west, and all could dutifully look to the west in the morning, but that would not make the sun rise in the west. In order for a pact to work it must be workable. In order for that pact to be honored by its signatories, it must be honorable. This is manifestly not possible with the treaties that have given form to the EMU, and no amount of rhetoric from the core or compliance from the periphery will be able to make those treaties work. This basic fact must be acknowledged and the appropriate changes must be made before this situation can be resolved.

Acknowledgement of the flaws and their consequences seems unlikely to happen in a timely and orderly manner. Germany has clearly benefited from the current arrangement and the price Germany will pay for this remains largely in the future and uncertain in nature. Perhaps the citizens of the periphery will submit to being, effectively, badly used and abused domestic servants of wealthy Germans, but that will still not make them able to pay unpayable debts. Nor will German control of their societies and economies make such payment possible. But such an outcome might make the costs to Germans of writing down these debts more palatable.

"It is not necessary to have hope in order to persevere."
Martin Wolf's exchange: The German response (June 7, 2012)Last week I wrote a column entitled The riddle of German self-interest. To my surprise, it received a lengthy response from a senior and highly respected official of the German finance ministry. I am very grateful for this reply, because it clarifies the German finance ministry's position and raises a number of profound issues. ... I fear that austerity without end will bring about a return to the unstable populist politics the European Union was designed to prevent. That could shatter the eurozone and, with it, the EU, thereby ending the most successful attempt to build peace and prosperity in Europe since the fall of the Roman Empire. Moreover, it is clear - and has long been so - that the responsibility for preventing that outcome rests on Germany, Europe's central power, in every sense. As Charles Kindleberger argued, in a panic, the creditworthy country has to lend freely if a fixed exchange rate system (or in this case a currency union) is to survive. ... Fiat justitia, et pereat mundus (let justice be done, even if the world perishes) is a dangerous motto.(h/t kcurie) If you are not convinced, try it on someone who has not been entirely debauched by economics. — Piero Sraffa
https://mathoverflow.net/tags?page=5&tab=popular | # Tags
A tag is a keyword or label that categorizes your question with other, similar questions. Using the right tags makes it easier for others to find and answer your question.
× 329
that part of homotopy theory (and thus algebraic topology) concerned with all structure and phenomena that remain after sufficiently many applications of the suspension funct…
× 329
Questions about the determinant of square matrices or linear endomorphisms. Also for closely related topics such as minors or regularized determinants.
× 325
a topological space that locally resembles Euclidean space near each point. More precisely, each point of an n-dimensional manifold has a neighbourhood that is homeomorphic to the Euclid…
× 322
Nonlinear objectives, nonlinear constraints, non-convex objective, non-convex feasible region.
× 318
Noncommutative geometry in the sense of Connes and beyond: noncommutative algebras viewed as functions on a noncommutative space.
× 314
Deprecated; do NOT use this tag. Instead you could consider gr.group-theory, ac.commutative-algebra, ra.rings-and-algebras, universal-algebra, or various more specific tags.
× 314
Questions about Kähler manifolds and Kähler metrics.
× 312
Non-commutative rings and algebras, non-associative algebras. Can be used in combination with ra.rings-and-algebras
× 309
Questions related to permutations, bijections from a finite (or sometimes infinite) set to itself.
× 308
The study of harmonic differential forms on complex projective varieties, their invariantly defined filtrations, their integrals over topological cycles, especially over subvarieties, the deformations…
× 306
a mathematical system attributed to the Alexandrian Greek mathematician Euclid, which he described in his textbook on geometry: the Elements. Euclid's method consists in assuming…
× 306
an algebraic variety of dimension two. In the case of geometry over the field of complex numbers, an algebraic surface has complex dimension two (as a complex manifold, when it…
× 305
Questions about geometric properties of sets using measure theoretic techniques; rectifiability of sets and measures, currents, Plateau problem, isoperimetric inequality and related topics.
× 304
Questions designed to get an overview of a specific subject or body of results or to understand the relations among similar definitions, techniques or concepts appearing in different sub-fields of mat…
× 294
Questions on the calculus of variations, which deals with the optimization of functionals mostly defined on infinite dimensional spaces.
× 287
Questions asking for the intuition behind some definition, conjecture, proof etc. In other words, questions designed to improve or to acquire understanding on a conceptual or intuitive level, as oppos…
× 287
Using computers to solve geometric problems. Questions with this tag should typically have at least one other tag indicating what sort of geometry is involved, such as ag.algebraic-geometry or mg.metr…
× 286
a set $S$ together with a binary operation that is associative. Examples of semigroups are the set of finite strings over a fixed alphabet (under concatenation) and the positive intege…
× 280
the study of matrices as concrete objects, rather than as abstract linear operators between vector spaces (whose study belongs to [tag:linear-algebra]). For instance, this involves ma…
× 271
Questions about generalizations of the Riemann Zeta function of arithmetic interest whose definition relies on meromorphic continuation of special kinds of Dirichlet series, such as Dirichlet L-functi…
× 264
about flat shapes like lines, circles, and triangles; shapes that can be drawn on a piece of paper
× 262
Philosophical aspects of logic and set theory; truth status of mathematical axioms; Philosophy of Mathematics; philosophical aspects of mathematics in general; relation of mathematics to philosophy; e…
× 262
Invariant theory deals with an algebraic, geometric or analytic structure $X$, submitted to the action of an (algebraic) group $G$. It studies $G$-invariant elements of $X$ as well as the set of $G$-o…
× 261
Questions asking for recommendations of textbooks on some subject. It can be helpful to indicate whether the request is for self-study, for use in a course one teaches, for use accompanying a course o…
× 251
for explicit calculations or algorithms involving anything of interest to number theorists.
× 250
the group of permutations of the set of integers $\{1,\dots,n\}$. This has $n!$ elements and is generated by the $n-1$ involutions exchanging consecutive integers. The sym…
× 250
Stochastic ordinary and partial differential equations generalize the concepts of ordinary and partial differential equations to the setting where the unknown is a stochastic process.
× 245
The study of algebraic structures and properties applying to large classes of such structures. For example, ideas from group theory and ring theory are extended and considered for structures with oth…
× 243
For questions in Mathematics Education as a scientific discipline. For more hands-on questions on teaching Mathematics, please use the tag teaching. There is also a Stack Exchange community http://mat… | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7528387904167175, "perplexity": 555.0425628455343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583658928.22/warc/CC-MAIN-20190117102635-20190117124635-00594.warc.gz"} |
https://www.physionet.org/content/emgdb/1.0.0/ | Database Open Access
# Examples of Electromyograms
Published: Sept. 5, 2009. Version: 1.0.0
Goldberger, A., Amaral, L., Glass, L., Hausdorff, J., Ivanov, P. C., Mark, R., ... & Stanley, H. E. (2000). PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation [Online]. 101 (23), pp. e215–e220.
### Abstract
An electromyogram (EMG) is a common clinical test used to assess function of muscles and the nerves that control them. EMG studies are used to help in the diagnosis and management of disorders such as the muscular dystrophies and neuropathies. Nerve conduction studies that measure how well and how fast the nerves conduct impulses are often performed in conjunction with EMG studies.
Examples of EMG studies (courtesy of Seward Rutkove, MD, Department of Neurology, Beth Israel Deaconess Medical Center/Harvard Medical School) are given here.
### Data Description
Data were collected with a Medelec Synergy N2 EMG Monitoring System (Oxford Instruments Medical, Old Woking, United Kingdom). A 25mm concentric needle electrode was placed into the tibialis anterior muscle of each subject. The patient was then asked to dorsiflex the foot gently against resistance. The needle electrode was repositioned until motor unit potentials with a rapid rise time were identified. Data were then collected for several seconds, at which point the patient was asked to relax and the needle removed.
The figure shows three examples of EMG data from: 1) a 44-year-old man without history of neuromuscular disease; 2) a 62-year-old man with chronic low back pain and neuropathy due to a right L5 radiculopathy; and 3) a 57-year-old man with myopathy due to a longstanding history of polymyositis, treated effectively with steroids and low-dose methotrexate. The data were recorded at 50 kHz and then downsampled to 4 kHz. During the recording process two analog filters were used: a 20 Hz high-pass filter and a 5 kHz low-pass filter.
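As a rough illustration of the downsampling step described above, here is a sketch on synthetic data. It does not read the actual records, and the moving-average "filter" is only a stand-in for the proper anti-alias filtering a real pipeline would use.

```python
import numpy as np

# Sketch of the "record at 50 kHz, downsample to 4 kHz" step using synthetic
# data (the real records live on PhysioNet; nothing here reads them).
fs_in, fs_out = 50_000, 4_000

rng = np.random.default_rng(0)
t_in = np.arange(fs_in) / fs_in                       # one second of samples
raw = np.sin(2 * np.pi * 100 * t_in) + 0.1 * rng.standard_normal(fs_in)

# Crude anti-alias step: a moving-average lowpass before resampling.
# (A production pipeline would use a proper FIR/IIR filter instead.)
win = fs_in // fs_out                                 # ~12-sample window
smoothed = np.convolve(raw, np.ones(win) / win, mode="same")

# Resample onto the 4 kHz time grid by linear interpolation.
t_out = np.arange(fs_out) / fs_out
emg_4khz = np.interp(t_out, t_in, smoothed)
```

The records themselves are in PhysioNet's WFDB format and can be read with, e.g., the open-source wfdb Python package.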
1. Kimura J. Electrodiagnosis in Diseases of Nerve and Muscle: Principles and Practice, 3rd Edition. New York, Oxford University Press, 2001.
2. Reaz MBI, Hussain MS and Mohd-Yasin F. Techniques of EMG signal analysis: detection, processing, classification and applications. Biol. Proced. Online 2006; 8(1): 11-35.
##### Access
Access Policy:
Anyone can access the files, as long as they conform to the terms of the specified license.
## Files
Total uncompressed size: 5.1 MB.
##### Access the files
• Access the files using the Google Cloud Storage Browser here. Login with a Google account is required.
• Access the data using the Google Cloud command line tools (please refer to the gsutil documentation for guidance):
gsutil -m -u YOUR_PROJECT_ID cp -r gs://emgdb-1.0.0.physionet.org DESTINATION
wget -r -N -c -np https://physionet.org/files/emgdb/1.0.0/
https://gamedev.stackexchange.com/questions/143063/how-do-we-generate-overhangs-with-simplex-noise-3d | # How do we generate overhangs with simplex noise 3d?
Now I use a simplex noise 2d function with x( voxel's x location ) and y(voxel's y location) to generate heightmap. How do we use simplex noise 3d to generate overhangs? What should the x y z inputs be? Pseudo code would help a lot.
2D Perlin noise is used to generate a heightmap. This heightmap is then converted to a mesh for the renderer.
float[,] heights;
// initialize the map using some 2D height function.
foreach (i, j):
heights[i,j] = noise2D(i, j);
// convert the data to a mesh
foreach(cell in heights):
generate a square with corners at (x,y,heights[x,y]), (x+1,y,heights[x+1,y]), (x,y+1,heights[x,y+1]), (x+1,y+1,heights[x+1,y+1])
3D Perlin noise is used to generate basically a density map. If the "density" at a 3D point is greater than a threshold, you "generate" a block at that point
// for the sake of the demo, this is a bool.
// Realistically you'd have some struct that allows for different block types
bool[,,] isSolid;
// initialize the map using some 3D density function
foreach (i, j, k):
isSolid[i, j, k] = noise3D(i, j, k) > 0.5
// convert the data to a mesh
foreach i,j,k:
if isSolid[i,j,k]:
generate a unit cube centered at i, j, k.
Of course, the above code simplifies some things -- you could use less naive meshing strategies like marching cubes or dual contouring, and you would make the density function denser with depth if you want ground at the bottom, sky at the top, etc.
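Building on the answer's pseudocode, here is a self-contained Python sketch of the density idea. The noise function below is a cheap hash-based value noise, not real simplex noise, and every constant (SIZE, scale, threshold, the depth bias) is invented for illustration; swap in a proper noise library for real terrain.

```python
import math

def lattice_hash(x, y, z):
    # Deterministic pseudo-random value in [0, 1) for an integer lattice point.
    h = math.sin(x * 12.9898 + y * 78.233 + z * 37.719) * 43758.5453
    return h - math.floor(h)

def smooth(t):
    return t * t * (3.0 - 2.0 * t)  # smoothstep fade curve

def value_noise3(x, y, z):
    # Trilinear interpolation of lattice hashes -> continuous noise in [0, 1).
    x0, y0, z0 = math.floor(x), math.floor(y), math.floor(z)
    tx, ty, tz = smooth(x - x0), smooth(y - y0), smooth(z - z0)
    lerp = lambda a, b, t: a + (b - a) * t
    zline = {}
    for i in (0, 1):
        for j in (0, 1):
            zline[i, j] = lerp(lattice_hash(x0 + i, y0 + j, z0),
                               lattice_hash(x0 + i, y0 + j, z0 + 1), tz)
    yline = {i: lerp(zline[i, 0], zline[i, 1], ty) for i in (0, 1)}
    return lerp(yline[0], yline[1], tx)

SIZE = 16  # small demo chunk; k is the vertical axis (k = 0 is the bottom)

def is_solid(i, j, k, scale=0.15, threshold=0.5):
    # Density = 3D noise plus a depth bias, so the world is mostly solid near
    # the bottom and mostly air near the top, while the 3D noise term can
    # still carve caves and push solid cells out sideways.
    density = value_noise3(i * scale, j * scale, k * scale) + (SIZE / 2 - k) / SIZE
    return density > threshold

solid = [[[is_solid(i, j, k) for k in range(SIZE)]
          for j in range(SIZE)] for i in range(SIZE)]
n_solid = sum(solid[i][j][k]
              for i in range(SIZE) for j in range(SIZE) for k in range(SIZE))
```

Because solidity depends on the full 3D position rather than only on (x, y), a column can be empty below a solid cell, which is exactly what produces overhangs and caves.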
https://hal.in2p3.fr/in2p3-00763988 | Hadronic channels measured with HADES in the p+p reaction at 1.25 GeV - Archive ouverte HAL
Conference Papers Year : 2009
## Hadronic channels measured with HADES in the p+p reaction at 1.25 GeV
T. Liu
#### Abstract
The main goal of HADES (High Acceptance Di-Electron Spectrometer) experiments is to study hot and dense nuclear matter by exploiting the di-electron decay channel of vector mesons produced in heavy-ion collisions. The elementary-reaction experiments performed in recent years with HADES provide selective information on different sources of di-electron emission and are therefore an indispensable complement to the heavy-ion program. The proton-proton reaction has been measured with HADES at a kinetic energy of 1.25 GeV. This energy, just below the threshold of eta production, is well suited for the study of the Delta Dalitz decay (Delta -> N e+e-), which is one of the important sources of di-electron pairs in the region of invariant masses below the contribution of vector mesons. This process is studied by leptonic inclusive (pp -> e+e- X) and exclusive (pp -> e+e- pp) channel analyses. Meanwhile, the hadronic-channel data analysis can also provide very precise and independent constraints on resonance production and decay mechanisms. In this contribution, the analyses of the pp -> p n pi+ and pp -> p p pi0 channels and a comparison to detailed simulations (including Delta excitation in a one-pion-exchange model, non-resonant pion production, and final-state interaction) will be presented. Detailed information on Delta decay can also be extracted from these data.
#### Domains
Physics [physics] Nuclear Experiment [nucl-ex]
### Dates and versions
in2p3-00763988 , version 1 (12-12-2012)
### Identifiers
• HAL Id : in2p3-00763988 , version 1
### Cite
T. Liu. Hadronic channels measured with HADES in the p+p reaction at 1.25 GeV. XLVII International Winter Meeting on Nuclear Physics (Bormio 2009), Jan 2009, Bormio, Italy. pp.197-202. ⟨in2p3-00763988⟩
http://www.ck12.org/tebook/Algebra-I-Teacher%2527s-Edition/section/12.1/
# 12.1: Inverse Variation Models
Difficulty Level: At Grade Created by: CK-12
## Learning Objectives
At the end of this lesson, students will be able to:
• Distinguish direct and inverse variation.
• Graph inverse variation equations.
• Write inverse variation equations.
• Solve real-world problems using inverse variation equations.
## Vocabulary
Terms introduced in this lesson:
variation
direct variation
inverse variation
joint variation
constant of proportionality
increase, decrease
## Teaching Strategies and Tips
This lesson focuses on inverse variation models and graphing inverse variation equations. Use it to motivate rational functions, which are covered in the next six lessons.
Remind students about having learned direct variation in chapter Graphs of Equations and Functions.
• Point out that direct variation is a linear relationship.
• The $x$- and $y$-intercepts are both 0 (the graph passes through the origin).
• The slope of the line is the only parameter, denoted by $k$, and called the constant of proportionality.
• It takes only one more point to determine the direct variation.
Some examples of direct variation relationships are:
• Height of a person and the length of their shadow on flat ground.
• Circumference and radius of the circle.
• Weight of an object on a spring and the amount the spring has stretched.
In the examples and Review Questions, have students decide on a variation model first and then solve for the constant of proportionality using the given information. This determines the equation of the variation which is necessary for answering the rest of the problem.
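The two-step procedure (pick the variation model first, then solve for the constant of proportionality) can be sketched numerically. The numbers below are invented for illustration:

```python
# Hypothetical data: y varies inversely with x, and we observe y = 4 when x = 5.
x1, y1 = 5, 4

# Step 1: choose the model y = k / x, so the observation gives k = x * y.
k = x1 * y1          # constant of proportionality: 20

# Step 2: the equation y = 20 / x now answers any follow-up question.
def y(x):
    return k / x

y_at_8 = y(8)        # 20 / 8 = 2.5
```

With direct variation the only change is the model in Step 1 (y = k * x, so k = y / x); the rest of the procedure is identical.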
Use Example 1 to illustrate the graph of an inverse variation.
• Construct a similar table of values. Allow students to observe the function’s behavior numerically.
Remind students of scientific notation in Example 6.
In applied problems such as Examples 5 and 6, emphasize that variation models are ubiquitous and significant in the real world.
In Review Questions 1-4, encourage students to apply stretches to the basic graph $y = \frac{1}{x}$.
Example:
Graph the following inverse variation relationship.
$y = \frac{10}{x}.$
Hint: Since $y = \frac{10}{x} = 10 \cdot \frac{1}{x}$, the graph can be obtained from that of $y = \frac{1}{x}$ by stretching by a factor of $10$.
## Error Troubleshooting
Remind students in Example 1 that dividing by zero is undefined.
Remind students in Example 6 to square the 5.3 which is in parentheses:
$K = 740\,(5.3 \times 10^{-11})^2 = 740 \cdot 5.3^2 \cdot 10^{-22}$
https://www.gamedev.net/forums/topic/486778-fstream-seekg-and-read/
## Recommended Posts
(Cplusplus) I need to perform "random access" on a data file I am using; meaning that I need to be able to specify any arbitrary place in the file and read data at that point. I am working with a binary file, and so have used ios::binary when opening the file reader. I understand how to use seekg(int, ios_base::seekdir) to move the read marker in conjunction with read(char*,int) to specify how many characters to read at the read marker's location... I'm just getting caught up in some technical details here. Say I have a string of 12 characters, "Hello,world!", that I want to read. Also assume 1-byte characters in accordance with ASCII.
#include <fstream>
#include <string>
// inside of main...
ifstream fin;
string inFile = "test.dat";
fin.open(inFile.c_str(), ios::in | ios::binary);
// Now I want to use seekg() and read() to acquire the substring
// "wor" from "Hello,world!". NOTE: I understand there are easier
// ways of doing this but I want to use seekg/read specifically.
fin.seekg(6, ios_base::beg);
char buf[3];
fin.read(buf, 3);
string temp_str(buf, 3);
Would this code result in storing "wor" into temp_str? What is confusing me is this: I am not sure how to "translate" seekg and read. To me, the line fin.seekg(6, ios_base::beg) says "Move the read pointer 6 units to the right of the beginning of the file" (since 'w' is the 7th character). And the line fin.read(/* ... */) says "Beginning with the current character/byte in the file, read 3 characters without moving the read pointer." I can't download a C++ compiler right now and check this out for myself. Please let me know if I am on track or if I am slightly misinformed... -ply
##### Share on other sites
You might want to try googling std::fstream and that may answer your questions on its own. :) Check out MSDN, that's a good resource.
Regards Jouei
https://oceanopticsbook.info/view/radiative-transfer-theory/level-2/the-single-scattering-approximation | Page updated: March 13, 2021
Author: Curtis Mobley
# The Single-Scattering Approximation
As previously noted, exact analytical solutions of the RTE exist only for a few idealized and unphysical situations such as no scattering. There are, however, a few approximate analytic solutions. In pre-computer days these were useful computational tools. These approximate solutions are no longer needed for numerical computation, but they are still useful for isolating the most important processes governing light propagation in the ocean and can provide guidance in interpretation of radiometric data. This page develops one such solution: the single-scattering approximation (SSA). The next page discusses the related quasi-single scattering approximation (QSSA).
### The Successive Order of Scattering Solution Technique
We begin with the optical depth form of the time independent, 1D (plane parallel geometry) RTE, which is Eq. (4) of the Scalar Radiative Transfer Equation page:
$$\begin{aligned} \mu \frac{dL(\zeta,\mu,\varphi,\lambda)}{d\zeta} =\ & -L(\zeta,\mu,\varphi,\lambda) \\ & + \omega_o(\zeta,\lambda) \int_0^{2\pi}\!\!\int_{-1}^{1} L(\zeta,\mu',\varphi',\lambda)\, \tilde{\beta}(\zeta;\mu',\varphi' \to \mu,\varphi;\lambda)\, d\mu'\, d\varphi' \\ & + \frac{1}{c(\zeta,\lambda)}\, \Sigma(\zeta,\mu,\varphi,\lambda)\,. \end{aligned}$$

We next make a number of simplifications by assuming that
• The water is homogeneous, so that the IOPs do not depend on depth;
• The water is infinitely deep;
• The sea surface is level (zero wind speed);
• The sun is a point source is a black sky, so that the incident radiance onto the sea surface is collimated;
• There are no internal sources or inelastic scattering.
The RTE then becomes, for a given wavelength $\lambda$, which we henceforth drop for brevity,
$$\mu \frac{dL(\zeta,\mu,\varphi)}{d\zeta} = -L(\zeta,\mu,\varphi) + \omega_o \int_0^{2\pi}\!\!\int_{-1}^{1} L(\zeta,\mu',\varphi')\, \tilde{\beta}(\mu',\varphi' \to \mu,\varphi)\, d\mu'\, d\varphi'\,. \qquad (1)$$
A powerful technique for solving differential equations is to attempt a power series solution in which higher order terms of the series are weighted by powers of a parameter whose magnitude is less than 1. The albedo of single scattering, $\omega_o$, meets the requirement for an expansion parameter. We therefore attempt a solution of Eq. (1) of the form
$$L(\zeta,\mu,\varphi) = \sum_{k=0}^{\infty} \omega_o^k\, L^{(k)}(\zeta,\mu,\varphi) = L^{(0)}(\zeta,\mu,\varphi) + \omega_o\, L^{(1)}(\zeta,\mu,\varphi) + \omega_o^2\, L^{(2)}(\zeta,\mu,\varphi) + \cdots \qquad (2)$$

The notation $L^{(0)}$ denotes radiance that is unscattered, $L^{(1)}$ is radiance from rays that have been scattered once, $L^{(2)}$ is radiance from rays that have been scattered twice, and so on. This is consistent with the interpretation of $\omega_o$ as the probability of ray survival in an interaction with matter, i.e., the probability that a ray will be scattered and not absorbed.
We now substitute Eq. (2) for the radiance into Eq. (1) to obtain
$$\begin{aligned} \mu \left[\frac{dL^{(0)}}{d\zeta} + \omega_o\, \frac{dL^{(1)}}{d\zeta} + \omega_o^2\, \frac{dL^{(2)}}{d\zeta} + \cdots\right] =\ & -\left[L^{(0)} + \omega_o\, L^{(1)} + \omega_o^2\, L^{(2)} + \cdots\right] \\ & + \omega_o \int_0^{2\pi}\!\!\int_{-1}^{1} \left[L^{(0)} + \omega_o\, L^{(1)} + \omega_o^2\, L^{(2)} + \cdots\right] \tilde{\beta}(\mu',\varphi' \to \mu,\varphi)\, d\mu'\, d\varphi'\,. \qquad (3) \end{aligned}$$

We next group terms that have the same power of $\omega_o$:
$$\begin{aligned} &\left[\mu\, \frac{dL^{(0)}}{d\zeta} + L^{(0)}\right] \\ +\ &\omega_o \left[\mu\, \frac{dL^{(1)}}{d\zeta} + L^{(1)} - \int_0^{2\pi}\!\!\int_{-1}^{1} L^{(0)}\, \tilde{\beta}(\mu',\varphi' \to \mu,\varphi)\, d\mu'\, d\varphi'\right] \\ +\ &\omega_o^2 \left[\mu\, \frac{dL^{(2)}}{d\zeta} + L^{(2)} - \int_0^{2\pi}\!\!\int_{-1}^{1} L^{(1)}\, \tilde{\beta}(\mu',\varphi' \to \mu,\varphi)\, d\mu'\, d\varphi'\right] \\ +\ &\cdots = 0\,. \end{aligned}$$

This equation must hold true for any value of $0 \le \omega_o < 1$. Setting $\omega_o = 0$ would leave only the first group of terms, which must therefore sum to 0. Similarly, when $\omega_o \ne 0$, each group of terms multiplying a given power of $\omega_o$ must equal zero in order for the entire left side of the equation to sum to zero. We can therefore equate to zero the groups of terms in brackets multiplying each power of $\omega_o$. This gives a sequence of equations:
$$\mu\, \frac{dL^{(0)}}{d\zeta} = -L^{(0)} \qquad \text{(S0)}$$

$$\mu\, \frac{dL^{(1)}}{d\zeta} = -L^{(1)} + \int_0^{2\pi}\!\!\int_{-1}^{1} L^{(0)}\, \tilde{\beta}(\mu',\varphi' \to \mu,\varphi)\, d\mu'\, d\varphi' \qquad \text{(S1)}$$

$$\mu\, \frac{dL^{(2)}}{d\zeta} = -L^{(2)} + \int_0^{2\pi}\!\!\int_{-1}^{1} L^{(1)}\, \tilde{\beta}(\mu',\varphi' \to \mu,\varphi)\, d\mu'\, d\varphi' \qquad \text{(S2)}$$

and so on. Note that because $\omega_o$ multiplies the path integral term in Eq. (3), the path integral in this sequence of equations always involves the radiance at one order of scattering less than the derivative term. We first solve Eq. (S0), which governs the unscattered radiance. The solution for $L^{(0)}$ then can be used in Eq. (S1) to evaluate the path integral, which becomes a source function for singly scattered radiance. After solving Eq. (S1) for singly scattered radiance, $L^{(1)}$ can be used to evaluate the path function in Eq. (S2), and so on. This process constitutes the successive-order-of-scattering (SOS) solution technique.
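A toy, zero-dimensional analogue shows why $\omega_o$ works as an expansion parameter. Suppose a "radiance" satisfies $L = S + \omega_o L$: an unscattered source plus a fraction $\omega_o$ scattered back in. Summing successive orders of scattering recovers the exact answer $S/(1-\omega_o)$, quickly for small $\omega_o$ and ever more slowly as $\omega_o \to 1$. This is only an illustration of the series idea, not a solution of the RTE:

```python
# Toy zero-dimensional analogue of the expansion in Eq. (2): the "radiance"
# satisfies L = S + omega * L, whose exact solution is L = S / (1 - omega).
omega = 0.8   # single-scattering albedo; must be < 1 for convergence
S = 1.0       # the "unscattered" zeroth-order term, L^(0)

orders = [S]                              # L^(0)
for k in range(1, 60):
    orders.append(omega * orders[-1])     # each order = previous order scattered once

sos_sum = sum(orders)                     # truncated SOS series
exact = S / (1.0 - omega)                 # closed-form answer
```

The number of orders needed grows rapidly as $\omega_o \to 1$, which mirrors the behavior of real SOS radiative transfer codes in highly scattering waters.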
#### Solution of Eq. (S0) for the unscattered radiance
To solve (S0) we need boundary conditions at the sea surface and bottom. Figure 1 reminds us that the incident unscattered radiance onto the sea surface, and transmitted into the water, is perfectly collimated because we have assumed that the sun is a point source in a black sky and the surface is level. In that figure, $E_\perp(0)$ denotes the irradiance measured just below the sea surface on a plane that is perpendicular to the direction of photon travel (denoted by the red dashed line), and $\theta_{sw}$ is the Sun’s zenith angle in the water after refraction by the level surface.
Recalling the Dirac delta function, we can write the unscattered radiance just below the surface as
$L^{(0)}(0,\mu,\varphi) = E_\perp(0)\, \delta(\mu - \mu_{sw})\, \delta(\varphi - \varphi_{sw})\,,$ (BC1)
where $(\mu_{sw},\varphi_{sw})$ is the direction of the Sun’s beam in the water. The two delta functions, which together have units of $\mathrm{sr}^{-1}$, “pick out” the direction of the Sun’s beam; the unscattered radiance is zero in all other directions. Note that integrating this radiance over all downward directions to compute the downwelling plane irradiance gives
$$E_d(0) = \int_0^1\!\!\int_0^{2\pi} E_\perp(0)\, \delta(\mu - \mu_{sw})\, \delta(\varphi - \varphi_{sw})\, \mu\, d\mu\, d\varphi = E_\perp(0)\, \mu_{sw}\,,$$

as expected.
It is assumed that the incident solar irradiance is given, so Eq. (BC1) is the boundary condition on $L^{(0)}(\zeta,\mu,\varphi)$ at the sea surface (i.e., in the water at depth $\zeta = 0$). We are assuming that the water is infinitely deep and source free, so the radiance must approach 0 at great depth. The boundary condition at the bottom is thus
$L^{(0)}(\zeta,\mu,\varphi) \to 0 \quad \text{as} \quad \zeta \to \infty\,.$ (BC2)
We can now solve Eq. (S0) subject to boundary conditions (BC1) and (BC2). Rewriting (S0) as
$\frac{dL^{(0)}(\zeta)}{L^{(0)}(\zeta)} = -\frac{d\zeta}{\mu}$
and integrating from depth 0 to $\zeta$, corresponding to radiances ${L}^{\left(0\right)}\left(0\right)$ and ${L}^{\left(0\right)}\left(\zeta \right)$ respectively, gives
$\ln L^{(0)}\Big|_{L^{(0)}(0)}^{L^{(0)}(\zeta)} = -\frac{\zeta'}{\mu}\Big|_{0}^{\zeta}$
or
$L^{(0)}(\zeta,\mu,\varphi) = L^{(0)}(0,\mu,\varphi)\,e^{-\zeta/\mu}$ (4a)
$\qquad\qquad\quad\;\; = E_\perp(0)\,\delta(\mu - \mu_{sw})\,\delta(\varphi - \varphi_{sw})\,e^{-\zeta/\mu}\,.$ (4b)
Solution (4a) is simply the Lambert-Beer law: the initial unscattered radiance decays exponentially with optical depth. Using (BC1) to rewrite the radiance at the surface gives (4b), which will be the convenient form for the solution of (S1) below. Equation (4b) also shows explicitly that the unscattered radiance is 0 except in the direction $(\mu_{sw},\varphi_{sw})$. The exponential forces the radiance to 0 as the depth increases, so (BC2) is satisfied. Our solution therefore satisfies both the surface and bottom boundary conditions and constitutes a complete solution of the two-point boundary value problem for unscattered radiance. This solution gives the contribution of unscattered radiance to the total radiance.
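Solution (4a) is a one-liner to evaluate; the sketch below (illustrative values only) shows the exponential decay and that (BC2) is satisfied to numerical precision:

```python
import numpy as np

# Unscattered radiance along the refracted beam, Eq. (4a):
# L0(zeta) = L0(0) * exp(-zeta/mu), with zeta in optical depths.
def L0(zeta, L0_surface, mu):
    return L0_surface * np.exp(-zeta / mu)

mu_sw = 0.866                            # cos(30 deg), in-water solar direction
zeta = np.array([0.0, 1.0, 5.0, 20.0])   # optical depths
vals = L0(zeta, 1.0, mu_sw)
print(vals)  # monotone decay; ~1e-10 by 20 optical depths, consistent with (BC2)
```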
#### Solution of Eq. (S1) for the singly scattered radiance
The first step in solving (S1) is to evaluate the scattering term using the solution for ${L}^{\left(0\right)}$. To do this we use (4b) to get
$\int_0^{2\pi}\!\!\int_{-1}^{1} L^{(0)}(\zeta,\mu',\varphi')\,\tilde{\beta}(\mu',\varphi' \to \mu,\varphi)\,d\mu'\,d\varphi'$
$= \int_0^{2\pi}\!\!\int_{-1}^{1} E_\perp(0)\,\delta(\mu' - \mu_{sw})\,\delta(\varphi' - \varphi_{sw})\,e^{-\zeta/\mu'}\,\tilde{\beta}(\mu',\varphi' \to \mu,\varphi)\,d\mu'\,d\varphi'$
$= E_\perp(0)\,e^{-\zeta/\mu_{sw}}\,\tilde{\beta}(\mu_{sw},\varphi_{sw} \to \mu,\varphi)\,.$ (5)
This result shows how much of the unscattered radiance reaches depth $\zeta$ and then gets scattered into the direction of interest $(\mu,\varphi)$. In other words, the unscattered radiance is a local (at depth $\zeta$) source term for singly scattered radiance.
All quantities on the right hand side of Eq. (5) are known from the given IOPs and surface boundary condition. We can therefore proceed with the solution of (S1) for the singly scattered radiance ${L}^{\left(1\right)}$. The equation to be solved is
$\mu\,\frac{dL^{(1)}}{d\zeta} + L^{(1)} = E_\perp(0)\,e^{-\zeta/\mu_{sw}}\,\tilde{\beta}(\mu_{sw},\varphi_{sw} \to \mu,\varphi)\,,$ (6)
where the right hand side is now a known function of depth. There is no incident scattered radiance from the sky because the sun’s collimated beam is all unscattered light. Thus the boundary conditions for Eq. (6) are
$L^{(1)}(0,\mu,\varphi) = 0 \qquad \text{and} \qquad L^{(1)}(\zeta,\mu,\varphi) \to 0 \quad \text{as} \quad \zeta \to \infty\,.$ (7)
Figure 2 shows that the singly scattered downwelling radiance at depth $\zeta$ comes only from above depth $\zeta$, and that the upwelling radiance at $\zeta$ comes only from depths below $\zeta$. We can thus consider the downwelling, ${L}_{d}^{\left(1\right)}\left(\zeta ,\mu ,\varphi \right)$, and upwelling, ${L}_{u}^{\left(1\right)}\left(\zeta ,\mu ,\varphi \right)$, radiances separately. We can integrate from the surface down to $\zeta$ to compute ${L}_{d}^{\left(1\right)}$, and we can integrate from $\zeta$ to $\infty$ to compute ${L}_{u}^{\left(1\right)}$.
If you were paying attention in your undergraduate differential equations class, you recognize Eq. (6) as an ordinary differential equation with constant coefficients, which can be solved by means of an integrating factor. Multiplying Eq. (6) for downwelling radiance by $\frac{1}{\mu}\,e^{\zeta/\mu}$ (the integrating factor) gives
$\frac{1}{\mu}\,e^{\zeta/\mu}\left[\mu\,\frac{dL_d^{(1)}(\zeta)}{d\zeta} + L_d^{(1)}(\zeta)\right] = \frac{1}{\mu}\,e^{\zeta/\mu}\left[E_\perp\,\tilde{\beta}\,e^{-\zeta/\mu_{sw}}\right]$
$\frac{d}{d\zeta}\left[L_d^{(1)}(\zeta)\,e^{\zeta/\mu}\right] = \frac{E_\perp\,\tilde{\beta}}{\mu}\,\exp\!\left[\left(\frac{1}{\mu} - \frac{1}{\mu_{sw}}\right)\zeta\right]\,.$ (8)
Now integrating from depth 0 to $\zeta$, where the radiances are $L_d^{(1)}(0)$ and $L_d^{(1)}(\zeta)$, respectively, and recalling that $L_d^{(1)}(0) = 0$ by the upper boundary condition (7) gives
$L_d^{(1)}(\zeta)\,e^{\zeta/\mu} = \frac{E_\perp\,\tilde{\beta}}{\mu}\,\frac{1}{\left(\frac{1}{\mu} - \frac{1}{\mu_{sw}}\right)}\left\{\exp\!\left[\left(\frac{1}{\mu} - \frac{1}{\mu_{sw}}\right)\zeta\right] - 1\right\}\,,$
provided that $\mu \ne \mu_{sw}$. Recalling that $E_d(0) = E_\perp(0)\,\mu_{sw}$, the preceding equation can be rewritten as
$L_d^{(1)}(\zeta,\mu,\varphi) = E_d(0)\,\tilde{\beta}(\mu_{sw},\varphi_{sw} \to \mu,\varphi)\,\frac{1}{\mu_{sw} - \mu}\left[e^{-\zeta/\mu_{sw}} - e^{-\zeta/\mu}\right]\,.$ (9)
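Those who distrust the algebra can let a computer-algebra system confirm that Eq. (9) satisfies both the ODE (6) and the surface boundary condition (7). The sketch below treats $E_\perp(0)$ and the phase-function value $\tilde\beta$ as positive constants:

```python
import sympy as sp

zeta = sp.symbols('zeta', nonnegative=True)
mu, mu_sw, E_perp, beta = sp.symbols('mu mu_sw E_perp beta', positive=True)

Ed0 = E_perp * mu_sw                      # E_d(0) = E_perp(0) mu_sw
# Eq. (9):
L1d = Ed0 * beta / (mu_sw - mu) * (sp.exp(-zeta / mu_sw) - sp.exp(-zeta / mu))

# Left side of Eq. (6) minus its right side should vanish identically.
residual = sp.simplify(mu * sp.diff(L1d, zeta) + L1d
                       - E_perp * beta * sp.exp(-zeta / mu_sw))
print(residual)            # 0: Eq. (9) solves Eq. (6)
print(L1d.subs(zeta, 0))   # 0: surface boundary condition (7)
```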
For the special case of $\mu ={\mu }_{sw}$ but $\varphi \ne {\varphi }_{sw}$, so that the scattering angle is nonzero, Eq. (8) reduces to
$\frac{d}{d\zeta}\left[L_d^{(1)}(\zeta)\,e^{\zeta/\mu_{sw}}\right] = \frac{E_\perp\,\tilde{\beta}}{\mu_{sw}}$
which integrates to
$L_d^{(1)}(\zeta,\mu_{sw},\varphi) = E_\perp\,\tilde{\beta}(\mu_{sw},\varphi_{sw} \to \mu_{sw},\varphi)\,\frac{\zeta}{\mu_{sw}}\,e^{-\zeta/\mu_{sw}}$
$\qquad\qquad\qquad\;\; = E_d(0)\,\tilde{\beta}(\mu_{sw},\varphi_{sw} \to \mu_{sw},\varphi)\,\frac{\zeta}{\mu_{sw}^{2}}\,e^{-\zeta/\mu_{sw}}\,.$ (10)
The second form results from $E_d(0) = E_\perp(0)\,\mu_{sw}$, which was derived above.
The direction $\mu = \mu_{sw}$ and $\varphi = \varphi_{sw}$ is the case of no scattering, so there is no singly scattered radiance.
We next compute the upwelling radiance at $\zeta$ by integrating Eq. (6) from $\zeta$ to $\infty$, keeping in mind that now $\mu = \cos\theta < 0$ since $\theta$ is measured from 0 in the nadir direction. The integration gives (writing $\mu = -|\mu|$ to emphasize the negativity of $\mu$)
$\left[L_u^{(1)}(\zeta')\,e^{-\zeta'/|\mu|}\right]_{\zeta' \to \infty} - L_u^{(1)}(\zeta)\,e^{-\zeta/|\mu|} = \frac{E_\perp\,\tilde{\beta}}{-|\mu|}\,\frac{1}{\left(\frac{1}{-|\mu|} - \frac{1}{\mu_{sw}}\right)}\left\{\left[\exp\!\left(\left(\frac{1}{-|\mu|} - \frac{1}{\mu_{sw}}\right)\zeta'\right)\right]_{\zeta' \to \infty} - \exp\!\left[\left(\frac{1}{-|\mu|} - \frac{1}{\mu_{sw}}\right)\zeta\right]\right\}$
Both limits as $\zeta' \to \infty$ are zero. The result can be rewritten as
$L_u^{(1)}(\zeta) = E_d(0)\,\tilde{\beta}(\mu_{sw},\varphi_{sw} \to \mu,\varphi)\,\frac{1}{\mu_{sw} - \mu}\,e^{-\zeta/\mu_{sw}}\,.$ (11)
#### Assembling the SSA solution
Recalling from Eq. (2) that the SSA is given by
$L^{(SSA)}(\zeta,\mu,\varphi) = L^{(0)}(\zeta,\mu,\varphi) + \omega_o\,L^{(1)}(\zeta,\mu,\varphi)\,,$
we can assemble $L^{(SSA)}$ from the pieces computed in Eqs. (4a) and (9)-(11):
$L_d^{(SSA)}(\zeta,\mu,\varphi) = L^{(0)}(0,\mu_{sw},\varphi_{sw})\,e^{-\zeta/\mu_{sw}}$ (12)
$\qquad \text{if } \mu = \mu_{sw} \text{ and } \varphi = \varphi_{sw}$
$L_d^{(SSA)}(\zeta,\mu,\varphi) = \omega_o\,E_d(0)\,\tilde{\beta}(\mu_{sw},\varphi_{sw} \to \mu_{sw},\varphi)\,\frac{\zeta}{\mu_{sw}^{2}}\,e^{-\zeta/\mu_{sw}}$ (13)
$\qquad \text{if } \mu = \mu_{sw} \text{ but } \varphi \ne \varphi_{sw}$
$L_d^{(SSA)}(\zeta,\mu,\varphi) = \omega_o\,E_d(0)\,\tilde{\beta}(\mu_{sw},\varphi_{sw} \to \mu,\varphi)\,\frac{1}{\mu_{sw} - \mu}\left[e^{-\zeta/\mu_{sw}} - e^{-\zeta/\mu}\right]$ (14)
$\qquad \text{if } \mu > 0 \text{ and } \mu \ne \mu_{sw}$
$L_u^{(SSA)}(\zeta,\mu,\varphi) = \omega_o\,E_d(0)\,\tilde{\beta}(\mu_{sw},\varphi_{sw} \to \mu,\varphi)\,\frac{1}{\mu_{sw} - \mu}\,e^{-\zeta/\mu_{sw}}$ (15)
$\qquad \text{if } \mu \le 0$
Equations (12)-(15) constitute the SSA solution to the RTE. This solution is seen, for example, in Gordon (1994), where it is presented without derivation.
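Equations (13)-(15) are straightforward to program. The sketch below (variable names are ours; the delta-function beam of Eq. (12) is excluded because it is not representable by a finite number) evaluates the scattered part of the SSA radiance:

```python
import numpy as np

def L_ssa(zeta, mu, omega0, Ed0, beta_t, mu_sw, tol=1e-9):
    """SSA radiance from Eqs. (13)-(15); illustrative sketch.

    beta_t is the phase-function value beta~(mu_sw, phi_sw -> mu, phi), in 1/sr,
    for the direction being evaluated. The exact solar direction carries the
    unscattered delta-function beam of Eq. (12) and is excluded here.
    """
    if mu > 0 and abs(mu - mu_sw) < tol:   # Eq. (13): mu = mu_sw, phi != phi_sw
        return omega0 * Ed0 * beta_t * zeta / mu_sw**2 * np.exp(-zeta / mu_sw)
    if mu > 0:                             # Eq. (14): other downwelling directions
        return (omega0 * Ed0 * beta_t / (mu_sw - mu)
                * (np.exp(-zeta / mu_sw) - np.exp(-zeta / mu)))
    return (omega0 * Ed0 * beta_t / (mu_sw - mu)   # Eq. (15): upwelling, mu <= 0
            * np.exp(-zeta / mu_sw))

# Example with the Petzold values quoted in the text (mu_sw = 0.866, omega0 = 0.85),
# one optical depth down, for radiance traveling straight down and straight up:
print(L_ssa(1.0,  1.0, 0.85, 1.0, 0.08609, 0.866))
print(L_ssa(1.0, -1.0, 0.85, 1.0, 0.002365, 0.866))
```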
It is easy to show that
$\lim_{\mu \to \mu_{sw}} \frac{1}{\mu_{sw} - \mu}\left[e^{-\zeta/\mu_{sw}} - e^{-\zeta/\mu}\right] = \frac{\zeta}{\mu_{sw}^{2}}\,e^{-\zeta/\mu_{sw}}\,,$ (16)
in which case Eq. (14) reduces to Eq. (13), which was derived independently as a special case of the depth integration.
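A quick numerical check of this limit (illustrative values only):

```python
import numpy as np

zeta, mu_sw = 2.0, 0.866
rhs = zeta / mu_sw**2 * np.exp(-zeta / mu_sw)   # right side of Eq. (16)
for d in (1e-2, 1e-4, 1e-6):
    mu = mu_sw - d
    lhs = (np.exp(-zeta / mu_sw) - np.exp(-zeta / mu)) / (mu_sw - mu)
    print(d, lhs / rhs)   # ratio tends to 1 as mu -> mu_sw
```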
It must be remembered that the SSA rests upon a number of simplifying assumptions. In particular, the input sky radiance was collimated. The delta functions in direction then made evaluation of the scattering path function in Eq. (5) easy. This would not be the case for any other sky radiance distribution, or for a non-level sea surface. Likewise, the assumption of infinitely deep water removed any bottom effect.
The SSA will be a good approximation to actual radiances only if the higher-order terms in Eq. (2) are negligible. This means that $\omega_o$ must be sufficiently small, but how small? Figure 3 compares $L_u^{(SSA)}$ and $L_d^{(SSA)}$ with radiances computed by HydroLight for nadir- and zenith-viewing directions. The Sun was at a zenith angle of 42 deg, which gives an in-water solar zenith angle of $\theta_{sw} = 30$ deg, or $\mu_{sw} = 0.866$. This gives a scattering angle of $\psi = 30$ deg for $L_d^{(SSA)}$ and $\psi = 150$ deg for $L_u^{(SSA)}$. The Petzold “average-particle” phase function was used, for which $\tilde{\beta}(\psi = 30\ \mathrm{deg}) = 0.08609$ and $\tilde{\beta}(\psi = 150\ \mathrm{deg}) = 0.002365\ \mathrm{sr}^{-1}$. HydroLight includes all orders of multiple scattering, so comparison of its radiances with the SSA values shows the importance of multiple scattering. The HydroLight runs modeled the SSA conditions as closely as possible; the differences are that the SSA is for one exact direction, whereas HydroLight computes nadir and zenith radiances as averages over polar caps with a 5 deg half angle, and the Sun’s direct beam in water is spread over a quad from $\theta = 25$ to 35 deg. The HydroLight runs set $E_d(\mathrm{in\ air}) = 1.028\ \mathrm{W\,m^{-2}}$ so that $E_d(0) = 1.0\ \mathrm{W\,m^{-2}}$.
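For reference, the quoted in-water angle follows from Snell's law at the level surface; the sketch below assumes a seawater index of refraction of 1.34:

```python
import numpy as np

n_water = 1.34                        # assumed refractive index of seawater
theta_sun = np.radians(42.0)          # Sun zenith angle in air
theta_sw = np.arcsin(np.sin(theta_sun) / n_water)   # Snell's law, level surface
print(np.degrees(theta_sw), np.cos(theta_sw))        # ~30.0 deg, mu_sw ~ 0.866
```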
Figure 3 shows that for ${\omega }_{o}=0.01$ the agreement between the SSA and HydroLight is very good. The HydroLight values are slightly higher than the SSA values because there is still a small multiple scattering contribution to the total radiance even at very small ${\omega }_{o}$ values. For ${\omega }_{o}=0.1$ the SSA still gives good results near the sea surface but differs from the multiple scattering solution by a factor of 3 at 10 optical depths. For ${\omega }_{o}=0.85$, which is typical of blue and green wavelengths in ocean waters, the SSA upwelling radiance is a factor of five too small even at the surface, and the SSA radiances are off by orders of magnitude at large optical depths. Thus, as expected, we see that the SSA is of little use in optical oceanography because multiple scattering almost always dominates underwater radiance distributions at visible wavelengths.
We end the SSA discussion by noting that Walker (1994) has carried the SOS solution through second-order scattering. His development requires a good bit of mathematical masochism and results in a much more complicated set of equations, which can be seen in his Section 2-6. There is little need for such approximations given the ease of numerical solution of the RTE to include all (in HydroLight) or at least many (in Monte Carlo models) orders of multiple scattering, without any of the assumptions required for the analytic evaluation of the path integrals in the SSA. Perhaps the greatest value of the SSA solution is that it can be used to check numerical models when $\omega_o$ is small.
https://moodle.jesselton.edu.my/course/info.php?id=13 | ### Macroeconomics
The purpose of this course is to introduce the discipline of macroeconomics to students. It provides students with the fundamental understandings of theories and concepts of macroeconomics and prepares students with adequate skills to analyze and evaluate macroeconomic events.
https://www.eevblog.com/forum/repair/rigol-dg4000-power-rails-failure-and-firmware-bricked/msg2680113/?PHPSESSID=o20io6kurp1dqt8mg2r27decg3 | ### Author Topic: Rigol DG4000 power rails failure and firmware bricked (Read 1747 times)
#### Fabian
• Contributor
• Posts: 31
##### Rigol DG4000 power rails failure and firmware bricked
« on: November 22, 2016, 04:38:44 pm »
Hi,
I own a Rigol DG4162 and I am very happy with the device - except for a few software things I don't like...
However, a few days back I turned it on after not using it for a few weeks (maybe a month) and it hung during the boot process. The GUI was fully loaded but I was not able to do anything. I unplugged it and tried again; it hung again during boot at some other point. I tried this a few times, and it hung at a different point in the boot process each time.
Well, I assumed a hardware fault, because a software issue would probably not be that random. Nevertheless I updated the bootloader and the DSP firmware according to the instructions. I had to reboot it several times after it had not finished after half an hour. Doing that obviously bricked it. Now it just turns all LEDs on when I hit the power button and nothing else happens. I cannot bring it into the mode to update the firmware any more (pressing the Help button at boot up). So my first question: can I somehow recover the firmware?
Now the second problem: I assumed a hardware fault from the beginning. So after software-bricking it, I took it apart and did a few tests. Visual inspection is ok, but there are two voltage test points which are not. The 1.5V rail has 3.3V, and the 9.9V rail (something for/from the display) also has 3.3V. I focused on the 1.5V rail first, as I could not find the source for the 9.9V rail and that might be due to the broken firmware (backlight stuff). The 1.5V regulator is quite strange; I attached an image of the circuit and a schematic I reverse engineered. What kind of regulator is that, with the feed-back network at the input? The only thing I can imagine is an emitter follower. Theoretically it might be possible that this is actually a regulator from 1.5V to something lower, but then the labeling would be very confusing: I don't know where the 1.5V would come from, there is no test point for the output, and there would have to be a short between the output and 3.3V.
Next thing I did was removing the regulator and this changed nothing. There is still a resistance of ~10R between 1.5V and 3.3V (both polarities).
Has anyone had a similar fault? Or any other idea how to track this down or even fix it?
Rigol offered me to replace the mainboard for 520€ and/or the power supply for 270€ - I am out of warranty :-(
Best regards,
Fabian
The following users thanked this post: ivi_yak
#### ivi_yak
• Regular Contributor
• Posts: 70
• Country:
##### Re: Rigol DG4000 power rails failure and firmware bricked
« Reply #1 on: July 20, 2017, 05:43:16 pm »
hi,
I have same problem, brick after update firmware
Did you find solution how to repair firmware
#### Fabian
• Contributor
• Posts: 31
##### Re: Rigol DG4000 power rails failure and firmware bricked
« Reply #2 on: September 09, 2019, 07:04:05 pm »
Hi,
I would like to push this topic back to the top, as I stumbled across the old PCB (which I got back when Rigol replaced the mainboard). As I now have a working and a non-working board, I can compare them, and I was hoping to be able to revive the old (defective) board. I already tested that it only requires the power supply and is ready to be used as a purely remote-controlled generator. Might come in handy one day to have a second one
I did a few more things and have a few questions:
* All voltages seem to be fine, including the 1.5V rail. Don't know what the problem was. Maybe resoldering the regulator did the job.
* I was able to talk to both FPGAs using JTAG, but I was not able to read any of the memory from them. The ProASIC3 cannot be read back. It has internal flash and there is nothing I can do about it if it is faulty. I also guess its firmware is not included in the firmware updates. The FPGA that does the heavy lifting (under the big heatsink, connected to the DAC) is a Xilinx Spartan-6 XC6SLX25. I was not able to find any flash chip on the board for that device, so I guess the firmware is pushed into it by the DSP on every boot. Can someone confirm this?
* All clocks seem to be ok.
* None of the chips (2x FPGA, 1x DSP) is doing anything. I mean like nothing at all. I cannot even pick up radio emissions from them using a near-field probe, while I definitely can on the new board. It seems like they are in a reset state. However, the main reset line I identified is high, so that seems to be ok.
* My guess is that the DSP is not running at all, because I bricked the firmware. As the DSP is responsible for programming the Xilinx, the Xilinx does not do anything. As the only clock line I could identify going into the ProASIC3 seems to come from the Xilinx, the ProASIC3 is not doing anything either.
* So, I would like to reflash the firmware of the DSP using JTAG. I used a general-purpose FT2232H, wired it up according to https://www.eevblog.com/forum/testgear/dg4000-a-firmware-investigation/msg269620/#msg269620 and used TopJTAG. TopJTAG found the DSP, and I have to specify a whole bunch of stuff about the flash. I have not even found the flash on the board. There obviously is a place to put a flash chip, but it is not populated. There is an additional FRAM which is connected via I2C. Is that where the program lies? The ADSP is capable of booting from I2C, but I do not see what to put into TopJTAG to get this working. Any help is welcome.
#### Fabian
• Contributor
• Posts: 31
##### Re: Rigol DG4000 power rails failure and firmware bricked
« Reply #3 on: September 09, 2019, 07:55:35 pm »
Just got a step further... I read the mentioned thread again and realized that TopJTAG was not the recommended program. *cybernet* was using the Linux Blackfin toolchain, which you can still get here: https://sourceforge.net/projects/adi-toolchain/
After fiddling a bit, I was able to connect to the DSP:
./bfin-gdbproxy --debug bfin --frequency=6000000
I did this with just an FT2232H I had lying around. It did not work out of the box; I had to change the USB PID to baf8 (VID is still 0403 - aka FTDI), which is an "Amontec JTAGkey".
Now my new question is: How do I read/write the flash with the toolchain?
#### Fabian
• Contributor
• Posts: 31
##### Re: Rigol DG4000 power rails failure and firmware bricked
« Reply #4 on: September 09, 2019, 09:08:32 pm »
I guess I will use this thread just for reporting my progress. Maybe someone else can use it.
I was able to read/write the flash of the DSP. I am using Ubuntu 18.04. After downloading the toolchain:
$ tar xf blackfin-toolchain-2014R1_45-RC2.x86_64.tar.bz2
$ cd opt/uClinux/bfin-linux-uclibc/bin
$ sudo ./bfin-jtag -q
You can also do it without sudo, but you would probably (like me) have to fiddle around with udev first to get the right permissions on the device
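For anyone wanting to skip sudo: a udev rule along these lines usually does it (the IDs here match the modified JTAGkey PID mentioned above; the file name is arbitrary):

```
# /etc/udev/rules.d/99-jtagkey.rules
SUBSYSTEM=="usb", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="baf8", MODE="0666"
```

Then reload with `sudo udevadm control --reload-rules` and replug the adapter.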
> cable OOCDLink-s
Connected to libftdi driver.
> detect
Should give you a list with some info on the device like ID, Manufacturer etc...
> initbus bf52x
> detectflash 0x20000000
Should give you a bunch of info on the flash, like voltages, timings, size, ...
> readmem 0x20000000 0x1000000 flash.bin
This will read the flash into flash.bin. I hope that command is correct, especially the size. It should be 16M as reported by detectflash, but according to the memory map of the Blackfin it has only 4 banks of 1M for flash... I don't get it; I just hope it will work.
After that I connected the defective board, did the whole process with bfin-jtag again, but instead of reading I wrote the flash:
> flashmem 0x20000000 flash.bin Edit: Don't do that! I guess I broke something in the DSP with that command
Reading takes quite a long time (5 min or so) and writing is even slower... but it will hopefully do the job.
urjtag (which is what bfin-jtag is) should also support the FT2232H directly by changing the cable name from OOCDLink-s to FT2232. I haven't tried it, because "never touch a running system".
I am a bit worried about the flash file. After compression there are only ~980 kB left. There are quite a lot of FFs in there. I also attached the file; maybe it can help someone. I will get back here when the flash process has finished; I guess it will take nearly an hour.
« Last Edit: September 10, 2019, 04:51:14 pm by Fabian »
The following users thanked this post: Daruosha
#### Fabian
• Contributor
• Posts: 31
##### Re: Rigol DG4000 power rails failure and firmware bricked
« Reply #5 on: September 10, 2019, 04:49:57 pm »
A few more things I learned so far:
* Using the FTDI directly without modifying the USB PID does not work for me - it complains that it cannot open the device or something like that.
* Writing is even slower than I thought - takes about 2 hours
* I guess I did something to the DSP while writing the flash as described above. The flash is not detected any more. I started a new thread to figure out what the problem is or how to read/write the flash of the DSP: https://www.eevblog.com/forum/microcontrollers/readingwriting-adsp-blackfin-flash-problems/
« Last Edit: September 10, 2019, 04:51:54 pm by Fabian »
The following users thanked this post: Daruosha
#### Fabian
• Contributor
• Posts: 31
##### Re: Rigol DG4000 power rails failure and firmware bricked
« Reply #6 on: September 10, 2019, 06:20:01 pm »
OK, the DSP is still ok. There is just an additional problem with the reset line that goes to the flash chip: it is not really high. I added an additional 10k pull-up to overcome this, although the line already has an onboard 10k pull-up. So there is at least one chip doing weird stuff.
I also found out that the data I read from the working device is actually only 1M and then repeats the same data 16 times.
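If you want to check a dump for that kind of repetition yourself, a few lines of Python will do it (a quick sketch):

```python
def is_repeated(data, bank=1024 * 1024):
    """True if `data` is an integer number of identical bank-sized copies."""
    first = data[:bank]
    return (len(data) % bank == 0 and
            all(data[i * bank:(i + 1) * bank] == first
                for i in range(len(data) // bank)))

# e.g.: print(is_repeated(open("flash.bin", "rb").read()))
```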
However, reburning that 1M did not help the instrument. Still no sign of life
#### m3vuv
• Frequent Contributor
• Posts: 833
• Country:
##### Re: Rigol DG4000 power rails failure and firmware bricked
« Reply #7 on: September 10, 2019, 10:17:08 pm »
Reminds me of a Chinese phone I had: every couple of months it would bootloop randomly for no apparent reason, and I had to reflash the firmware every time. After the 3rd time I binned it. It was called a "uhappy" - the name was taking the piss!! Hope their nuke control SW is less buggy!!
#### tautech
• Super Contributor
• Posts: 21418
• Country:
• Taupaki Technologies Ltd. NZ Siglent Distributor
##### Re: Rigol DG4000 power rails failure and firmware bricked
« Reply #8 on: September 11, 2019, 01:19:04 am »
@ Fabian
There could be some issues in the PSU's that explain unexpected behaviour.
Member Bud investigated these in depth here:
https://www.eevblog.com/forum/projects/project-yaigol-fixing-rigol-scope-design-problems/
Avid Rabid Hobbyist
#### Fabian
• Contributor
• Posts: 31
##### Re: Rigol DG4000 power rails failure and firmware bricked
« Reply #9 on: September 13, 2019, 03:48:48 pm »
Hi thanks for that thread... quite interesting.
I have not read everything yet, but what does the PSU (power supply unit?) have to do with this? It looks like it's all about PLL/RF stuff. I had a look at my main PLL for the high-speed DAC, but as expected it is not running, because there is no firmware that might have programmed it to do something. Of course there is an output, but it is not locked.
It would be great if someone could tell me how to read/write the full flash of the DSP. I was hoping to get the device back into the state where it started but randomly crashed during boot.
http://www.talkstats.com/threads/fitting-a-curve-to-a-scatter-plot.2699/ | # Fitting a curve to a scatter plot
#### mpz
##### New Member
It's been over 9 yrs since my last statistics course so maybe someone can help me out.
Given (x, y) values, where x = number of days passed (but also let it be the price in dollars) and y = number of widgets sold, find the lowest most effective price at which to purchase a widget (hope that makes sense).
(0, 10)
(100, 20)
(200, 30)
(300, 10)
(400, 40)
(500, 25)
(600, 15)
(700, 30)
(800, 40)
(900, 50)
Code:
(#)
50+ *
|
40+ * *
|
30+ * *
| *
20+ *
| *
10+ *
|
+---+---+---+---+---+---+---+---+---+
100 200 300 400 500 600 700 800 900 (t)
9.1 9.2 9.3 9.4 9.5 9.6 9.7 9.8 9.9 (\$)
I know I can use a line of best fit to predict future values for the trend, but how do I find the lowest most effective price based on the data for this time frame?
I think that if I somehow fit a bell-shaped (distribution) curve to the above data, I can calculate the probability of any y-value and thus find my lowest most effective price. I'm at a loss as to how to do that.
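Not part of the original thread, but as a sketch of the "line of best fit" step: an ordinary least-squares line can be computed in pure Python from the data listed above (the distribution-curve idea would be fit the same way, just with more coefficients).

```python
# Least-squares line of best fit for the (x, y) data above, in pure Python.
xs = [0, 100, 200, 300, 400, 500, 600, 700, 800, 900]
ys = [10, 20, 30, 10, 40, 25, 15, 30, 40, 50]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = covariance(x, y) / variance(x); the intercept forces the line
# through the point of means (mean_x, mean_y).
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sxx = sum((x - mean_x) ** 2 for x in xs)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

print(round(slope, 4), round(intercept, 2))  # 0.0303 13.36
```

The same normal-equations approach extends to a quadratic or higher-order fit, which is closer to the "curve" the question asks about.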
I've been trying to figure this out for two days. Any help you can offer is greatly appreciated.
Cheers
http://popflock.com/learn?s=Algebraic_structure | # Algebraic Structure
In mathematics, an algebraic structure consists of a nonempty set A (called the underlying set, carrier set or domain), a collection of operations on A of finite arity (typically binary operations), and a finite set of identities, known as axioms, that these operations must satisfy.
An algebraic structure may be based on other algebraic structures with operations and axioms involving several structures. For instance, a vector space involves a second structure called a field, and an operation called scalar multiplication between elements of the field (called scalars), and elements of the vector space (called vectors).
In the context of universal algebra, the set A with this structure is called an algebra,[1] while, in other contexts, it is (somewhat ambiguously) called an algebraic structure, the term algebra being reserved for specific algebraic structures that are vector spaces over a field or modules over a commutative ring.
The properties of specific algebraic structures are studied in abstract algebra. The general theory of algebraic structures has been formalized in universal algebra. The language of category theory is used to express and study relationships between different classes of algebraic and non-algebraic objects. This is because it is sometimes possible to find strong connections between some classes of objects, sometimes of different kinds. For example, Galois theory establishes a connection between certain fields and groups: two algebraic structures of different kinds.
## Introduction
Addition and multiplication of real numbers are the prototypical examples of operations that combine two elements of a set to produce a third element of the set. These operations obey several algebraic laws. For example, a + (b + c) = (a + b) + c and a(bc) = (ab)c as the associative laws. Also a + b = b + a and ab = ba as the commutative laws. Many systems studied by mathematicians have operations that obey some, but not necessarily all, of the laws of ordinary arithmetic. For example, rotations of an object in three-dimensional space can be combined by, for example, performing the first rotation on the object and then applying the second rotation on it in its new orientation made by the previous rotation. Rotation as an operation obeys the associative law, but can fail to satisfy the commutative law.
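The rotation example can be checked numerically. This is an illustrative sketch (mine, not the article's), composing two 90° rotations about the x- and z-axes in both orders:

```python
import math

def matmul(a, b):
    # 3x3 matrix product, row-by-column.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

t = math.pi / 2  # 90-degree rotations
rot_x = [[1, 0, 0],
         [0, math.cos(t), -math.sin(t)],
         [0, math.sin(t), math.cos(t)]]
rot_z = [[math.cos(t), -math.sin(t), 0],
         [math.sin(t), math.cos(t), 0],
         [0, 0, 1]]

# Applying rot_z first and then rot_x is the product rot_x @ rot_z.
xz = matmul(rot_x, rot_z)
zx = matmul(rot_z, rot_x)

differs = any(abs(xz[i][j] - zx[i][j]) > 0.5 for i in range(3) for j in range(3))
print(differs)  # True: the two orders give different rotations
```

Composition is still associative, so rotations form a group, just not an abelian one.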
Mathematicians give names to sets with one or more operations that obey a particular collection of laws, and study them in the abstract as algebraic structures. When a new problem can be shown to follow the laws of one of these algebraic structures, all the work that has been done on that category in the past can be applied to the new problem.
In full generality, algebraic structures may involve an arbitrary collection of operations, including operations that combine more than two elements (higher arity operations) and operations that take only one argument (unary operations). The examples used here are by no means a complete list, but they are meant to be a representative list and include the most common structures. Longer lists of algebraic structures may be found in the external links and within Category:Algebraic structures. Structures are listed in approximate order of increasing complexity.
## Examples
### One set with operations
Simple structures: no binary operation:
• Set: a degenerate algebraic structure S having no operations.
• Pointed set: S has one or more distinguished elements, often 0, 1, or both.
• Unary system: S and a single unary operation over S.
• Pointed unary system: a unary system with S a pointed set.
Group-like structures: one binary operation. The binary operation can be indicated by any symbol, or with no symbol (juxtaposition) as is done for ordinary multiplication of real numbers.
Ring-like structures or Ringoids: two binary operations, often called addition and multiplication, with multiplication distributing over addition.
• Semiring: a ringoid such that S is a monoid under each operation. Addition is typically assumed to be commutative and associative, and the monoid product is assumed to distribute over the addition on both sides, and the additive identity 0 is an absorbing element in the sense that 0 x = 0 for all x.
• Near-ring: a semiring whose additive monoid is a (not necessarily abelian) group.
• Ring: a semiring whose additive monoid is an abelian group.
• Lie ring: a ringoid whose additive monoid is an abelian group, but whose multiplicative operation satisfies the Jacobi identity rather than associativity.
• Commutative ring: a ring in which the multiplication operation is commutative.
• Boolean ring: a commutative ring with idempotent multiplication operation.
• Field: a commutative ring which contains a multiplicative inverse for every nonzero element.
• Kleene algebras: a semiring with idempotent addition and a unary operation, the Kleene star, satisfying additional properties.
• *-algebra: a ring with an additional unary operation (*) satisfying additional properties.
Lattice structures: two or more binary operations, including operations called meet and join, connected by the absorption law.[3]
• Complete lattice: a lattice in which arbitrary meets and joins exist.
• Bounded lattice: a lattice with a greatest element and least element.
• Complemented lattice: a bounded lattice with a unary operation, complementation, denoted by postfix ⊥. The join of an element with its complement is the greatest element, and the meet of the two elements is the least element.
• Modular lattice: a lattice whose elements satisfy the additional modular identity.
• Distributive lattice: a lattice in which each of meet and join distributes over the other. Distributive lattices are modular, but the converse does not hold.
• Boolean algebra: a complemented distributive lattice. Either of meet or join can be defined in terms of the other and complementation. This can be shown to be equivalent with the ring-like structure of the same name above.
• Heyting algebra: a bounded distributive lattice with an added binary operation, relative pseudo-complement, denoted by infix ->, and governed by the axioms x -> x = 1, x ∧ (x -> y) = x ∧ y, y ∧ (x -> y) = y, x -> (y ∧ z) = (x -> y) ∧ (x -> z).
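As an illustrative check (my own example, not from the article): the power set of a finite set is a Boolean algebra, with meet = intersection, join = union, and complement relative to the universe, so the absorption, complement, and distributivity laws can be verified exhaustively:

```python
from itertools import combinations

universe = frozenset({1, 2, 3})
subsets = [frozenset(c) for r in range(len(universe) + 1)
           for c in combinations(sorted(universe), r)]

for a in subsets:
    comp = universe - a
    assert a | comp == universe       # join with complement is the top element
    assert a & comp == frozenset()    # meet with complement is the bottom
    for b in subsets:
        assert a | (a & b) == a       # absorption law
        assert a & (a | b) == a       # dual absorption law
        for c in subsets:
            assert a & (b | c) == (a & b) | (a & c)  # distributivity
print("Boolean-algebra laws hold on the power set of", set(universe))
```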
Arithmetics: two binary operations, addition and multiplication. S is an infinite set. Arithmetics are pointed unary systems, whose unary operation is injective successor, and with distinguished element 0.
• Robinson arithmetic. Addition and multiplication are recursively defined by means of successor. 0 is the identity element for addition, and annihilates multiplication. Robinson arithmetic is listed here even though it is a variety, because of its closeness to Peano arithmetic.
• Peano arithmetic. Robinson arithmetic with an axiom schema of induction. Most ring and field axioms bearing on the properties of addition and multiplication are theorems of Peano arithmetic or of proper extensions thereof.
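A sketch (my own illustration, not from the article) of how addition and multiplication are defined recursively by means of successor:

```python
def succ(n):
    return n + 1

def add(m, n):
    # m + 0 = m;  m + S(n) = S(m + n)
    return m if n == 0 else succ(add(m, n - 1))

def mul(m, n):
    # m * 0 = 0;  m * S(n) = (m * n) + m
    return 0 if n == 0 else add(mul(m, n - 1), m)

print(add(3, 4), mul(3, 4))  # 7 12
```

Note how 0 is the identity for `add` and annihilates `mul`, exactly as stated above.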
### Two sets with operations
Module-like structures: composite systems involving two sets and employing at least two binary operations.
• Group with operators: a group G with a set Ω and a binary operation Ω × G -> G satisfying certain axioms.
• Module: an abelian group M and a ring R acting as operators on M. The members of R are sometimes called scalars, and the binary operation of scalar multiplication is a function R × M -> M, which satisfies several axioms. Counting the ring operations these systems have at least three operations.
• Vector space: a module where the ring R is a division ring or field.
• Graded vector space: a vector space with a direct sum decomposition breaking the space into "grades".
• Quadratic space: a vector space V over a field F with a quadratic form on V taking values in F.
Algebra-like structures: composite system defined over two sets, a ring R and an R-module M equipped with an operation called multiplication. This can be viewed as a system with five binary operations: two operations on R, two on M and one involving both R and M.
• Algebra over a ring (also R-algebra): a module over a commutative ring R, which also carries a multiplication operation that is compatible with the module structure. This includes distributivity over addition and linearity with respect to multiplication by elements of R. The theory of an algebra over a field is especially well developed.
• Associative algebra: an algebra over a ring such that the multiplication is associative.
• Nonassociative algebra: a module over a commutative ring, equipped with a ring multiplication operation that is not necessarily associative. Often associativity is replaced with a different identity, such as alternation, the Jacobi identity, or the Jordan identity.
• Coalgebra: a vector space with a "comultiplication" defined dually to that of associative algebras.
• Lie algebra: a special type of nonassociative algebra whose product satisfies the Jacobi identity.
• Lie coalgebra: a vector space with a "comultiplication" defined dually to that of Lie algebras.
• Graded algebra: a graded vector space with an algebra structure compatible with the grading. The idea is that if the grades of two elements a and b are known, then the grade of ab is known, and so the location of the product ab is determined in the decomposition.
• Inner product space: a vector space V over a field F with a definite bilinear form V × V -> F.
Four or more binary operations:
## Hybrid structures
Algebraic structures can also coexist with added structure of non-algebraic nature, such as partial order or a topology. The added structure must be compatible, in some sense, with the algebraic structure.
## Universal algebra
Algebraic structures are defined through different configurations of axioms. Universal algebra abstractly studies such objects. One major dichotomy is between structures that are axiomatized entirely by identities and structures that are not. If all axioms defining a class of algebras are identities, then this class is a variety (not to be confused with algebraic varieties of algebraic geometry).
Identities are equations formulated using only the operations the structure allows, and variables that are tacitly universally quantified over the relevant universe. Identities contain no connectives, existentially quantified variables, or relations of any kind other than the allowed operations. The study of varieties is an important part of universal algebra. An algebraic structure in a variety may be understood as the quotient algebra of term algebra (also called "absolutely free algebra") divided by the equivalence relations generated by a set of identities. So, a collection of functions with given signatures generate a free algebra, the term algebra T. Given a set of equational identities (the axioms), one may consider their symmetric, transitive closure E. The quotient algebra T/E is then the algebraic structure or variety. Thus, for example, groups have a signature containing two operators: the multiplication operator m, taking two arguments, and the inverse operator i, taking one argument, and the identity element e, a constant, which may be considered an operator that takes zero arguments. Given a (countable) set of variables x, y, z, etc. the term algebra is the collection of all possible terms involving m, i, e and the variables; so for example, m(i(x), m(x,m(y,e))) would be an element of the term algebra. One of the axioms defining a group is the identity m(x, i(x)) = e; another is m(x,e) = x. The axioms can be represented as trees. These equations induce equivalence classes on the free algebra; the quotient algebra then has the algebraic structure of a group.
Some structures do not form varieties, because either:
1. It is necessary that 0 ≠ 1, 0 being the additive identity element and 1 being a multiplicative identity element, but this is a nonidentity;
2. Structures such as fields have some axioms that hold only for nonzero members of S. For an algebraic structure to be a variety, its operations must be defined for all members of S; there can be no partial operations.
Structures whose axioms unavoidably include nonidentities are among the most important ones in mathematics, e.g., fields and division rings. Structures with nonidentities present challenges varieties do not. For example, the direct product of two fields is not a field, because ${\displaystyle (1,0)\cdot (0,1)=(0,0)}$, but fields do not have zero divisors.
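The zero-divisor example can be checked directly; this sketch (mine, not the article's) uses componentwise arithmetic on pairs, which is exactly how the direct product of two fields multiplies:

```python
# Componentwise product in the direct product of two fields.
def mul(p, q):
    return (p[0] * q[0], p[1] * q[1])

a, b = (1, 0), (0, 1)
print(mul(a, b))  # (0, 0): two nonzero elements multiply to zero,
                  # so the direct product of two fields is not a field
```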
## Category theory
Category theory is another tool for studying algebraic structures (see, for example, Mac Lane 1998). A category is a collection of objects with associated morphisms. Every algebraic structure has its own notion of homomorphism, namely any function compatible with the operation(s) defining the structure. In this way, every algebraic structure gives rise to a category. For example, the category of groups has all groups as objects and all group homomorphisms as morphisms. This concrete category may be seen as a category of sets with added category-theoretic structure. Likewise, the category of topological groups (whose morphisms are the continuous group homomorphisms) is a category of topological spaces with extra structure. A forgetful functor between categories of algebraic structures "forgets" a part of a structure.
There are various concepts in category theory that try to capture the algebraic character of a context, for instance
## Different meanings of "structure"
In a slight abuse of notation, the word "structure" can also refer to just the operations on a structure, instead of the underlying set itself. For example, the sentence, "We have defined a ring structure on the set ${\displaystyle A}$," means that we have defined ring operations on the set ${\displaystyle A}$. For another example, the group ${\displaystyle (\mathbb {Z} ,+)}$ can be seen as a set ${\displaystyle \mathbb {Z} }$ that is equipped with an algebraic structure, namely the operation ${\displaystyle +}$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8231845498085022, "perplexity": 433.203651173634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704798089.76/warc/CC-MAIN-20210126042704-20210126072704-00518.warc.gz"} |
https://im.kendallhunt.com/MS/teachers/3/2/10/preparation.html | # Lesson 10
Meet Slope
### Lesson Narrative
A slope triangle for a line is a triangle whose longest side lies on the line and whose other two sides are vertical and horizontal. This lesson establishes the remarkable fact that the quotient of the vertical side length and the horizontal side length does not depend on the triangle: this number is called the slope of the line. The argument builds on many key ideas developed in this unit:
• The dilation of a slope triangle, with center of dilation on the line, is a slope triangle for the same line.
• Triangles sharing two common angle measures are similar.
• Quotients of corresponding sides in similar polygons are equal.
In future lessons, students will use slope to write equations for lines.
### Learning Goals
Teacher Facing
• Comprehend the term “slope” to mean the quotient of the vertical distance and the horizontal distance between any two points on a line.
• Draw a line on a coordinate grid given its slope and describe (orally) observations about lines with the same slope.
• Justify (orally) that all “slope triangles” on one line are similar by using transformations or Angle-Angle Similarity.
### Student Facing
Let’s learn about the slope of a line.
### Required Preparation
If using the print version of the materials, students need a straightedge in order to draw lines. If using the digital version, an applet is made available for this purpose.
### Student Facing
• I can draw a line on a grid with a given slope.
• I can find the slope of a line on a grid.
Building On
### Glossary Entries
• similar
Two figures are similar if one can fit exactly over the other after rigid transformations and dilations.
In this figure, triangle $$ABC$$ is similar to triangle $$DEF$$.
If $$ABC$$ is rotated around point $$B$$ and then dilated with center point $$O$$, then it will fit exactly over $$DEF$$. This means that they are similar.
• slope
The slope of a line is a number we can calculate using any two points on the line. To find the slope, divide the vertical distance between the points by the horizontal distance.
The slope of this line is 2 divided by 3 or $$\frac23$$.
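As an illustrative sketch (not part of the lesson materials), the key fact that any two points on a line give the same slope can be checked with exact rational arithmetic:

```python
from fractions import Fraction

def slope(p, q):
    # vertical distance between the points divided by horizontal distance
    (x1, y1), (x2, y2) = p, q
    return Fraction(y2 - y1, x2 - x1)

# Points on the line y = (2/3) x: any pair gives the same slope,
# because all slope triangles on one line are similar.
print(slope((0, 0), (3, 2)))   # 2/3
print(slope((3, 2), (9, 6)))   # 2/3
```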
https://stats.stackexchange.com/questions/183873/how-to-understand-the-drawbacks-of-hierarchical-clustering/183881 | # How to understand the drawbacks of Hierarchical Clustering?
Can someone explain the pros and cons of Hierarchical Clustering?
1. Does Hierarchical Clustering have the same drawbacks as K means?
2. What are the advantages of Hierarchical Clustering over K means?
3. When should we use K means over Hierarchical Clustering & vice versa?
Answers to this post explains the drawbacks of k means very well. How to understand the drawbacks of K-means
• In this answer I touched some of potentially problematic facets of hierarchical agglomerative cluster analysis. The main "drawback" is that it is noniterative, single-pass greedy algorithm. With a greedy algorithm, you optimize the current step's task, which - for most HC methods - does not necessarily guarantee the best partition at a distant future step. The main advantage of HC is that it is flexible with respect to the choice of proximity measure to use. @Mic has already given a good answer below, so I'm just echoing. – ttnphns Nov 27 '15 at 13:32
Whereas $k$-means tries to optimize a global goal (variance of the clusters) and achieves a local optimum, agglomerative hierarchical clustering aims at finding the best step at each cluster fusion (greedy algorithm) which is done exactly but resulting in a potentially suboptimal solution.
One should use hierarchical clustering when underlying data has a hierarchical structure (like the correlations in financial markets) and you want to recover the hierarchy. You can still apply $k$-means to do that, but you may end up with partitions (from the coarsest one (all data points in a cluster) to the finest one (each data point is a cluster)) that are not nested and thus not a proper hierarchy.
If you want to dig into finer properties of clustering, you may not want to oppose flat clustering such as $k$-means to hierarchical clustering such as the Single, Average, Complete Linkages. For instance, all these clustering are space-conserving, i.e. when you are building clusters you do not distort the space, whereas a hierarchical clustering such as Ward is not space-conserving, i.e. at each merging step it will distort the metric space.
To conclude, the drawbacks of the hierarchical clustering algorithms can be very different from one to another. Some may share similar properties to $k$-means: Ward aims at optimizing variance, but Single Linkage not. But they can also have different properties: Ward is space-dilating, whereas Single Linkage is space-conserving like $k$-means.
-- edit to precise the space-conserving and space-dilating properties
Space-conserving: $$D_{ij} \in \left[ \min_{x \in C_i, y \in C_j} d(x,y), \max_{x \in C_i, y \in C_j} d(x,y) \right]$$ where $D_{ij}$ is the distance between clusters $C_i$ and $C_j$ you want to merge, and $d$ is the distance between datapoints.
Space-dilating: $$D(C_i \cup C_j, C_k) \geq \max(D_{ik}, D_{jk}),$$ i.e. by merging $C_i$ and $C_j$ the algorithm will push further away the cluster $C_k$.
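To make the bound concrete, here is a quick numerical sketch (mine, with made-up random clusters): the single, complete, and average linkage distances between two clusters all lie inside the space-conserving interval $[\min d, \max d]$ of pairwise distances.

```python
import random

random.seed(0)
c_i = [(random.random(), random.random()) for _ in range(5)]
c_j = [(random.random(), random.random()) for _ in range(5)]

def d(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

pair = [d(x, y) for x in c_i for y in c_j]
lo, hi = min(pair), max(pair)

single   = lo                      # single linkage: min pairwise distance
complete = hi                      # complete linkage: max pairwise distance
average  = sum(pair) / len(pair)   # average linkage

# All three lie inside [min, max]: these linkages are space-conserving.
for D in (single, complete, average):
    assert lo <= D <= hi
print("space-conserving bound holds")
```

Ward's update (via the Lance-Williams recurrence) can leave this interval, which is what "space-dilating" means above.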
• Can you give few more examples of data having hierarchical structure? Didn't follow the financial market example. – GeorgeOfTheRF Nov 27 '15 at 13:16
• Sure. cf. arxiv.org/pdf/cond-mat/9802256.pdf or simply Figure 7 in arxiv.org/pdf/1506.00976.pdf which depicts a correlation matrix which has a (noisy) hierarchical correlation block structure: you can notice blocks on the main diagonal, which are divided into more blocks, each one divided into even more blocks. It corresponds roughly to a subdivision in regions (Europe, US, Asia ex-Japan, Japan), then each region divided by the asset quality (say high quality vs. junk), then divided by the big industrial sectors (retail, industry, media), further subdiv into (aerospace, auto...) – mic Nov 27 '15 at 13:37
• +1. However, should use hierarchical clustering when underlying data has a hierarchical structure... and you want to recover the hierarchy Not necessarily. In most cases rather on the contrary. The hierarhy of HC is rather a story of the algo than a structure of the data. Still, this question is ultimately philosophical/logical, not so statistical. – ttnphns Nov 27 '15 at 13:40
• Ward is not space-conserving, i.e. at each merging step it will distort the metric space. Can you write more about it? This is not very much clear. – ttnphns Nov 27 '15 at 13:41
• Ward is space-dilating, whereas Single Linkage is space-conserving like k-means. Did you want to say space-contracting for single linkage? – ttnphns Nov 27 '15 at 13:46
## Scalability
$k$-means is the clear winner here. $O(n\cdot k\cdot d\cdot i)$ is much better than the $O(n^3 d)$ (in a few cases $O(n^2 d)$) scalability of hierarchical clustering, because usually $k$, $i$, and $d$ are all small (unfortunately, $i$ tends to grow with $n$, so $O(n)$ does not usually hold). Also, memory consumption is linear, as opposed to the usually quadratic consumption of hierarchical clustering (linear special cases exist).
## Flexibility
$k$-means is extremely limited in applicability. It is essentially limited to Euclidean distances (including Euclidean in kernel spaces, and Bregman divergences, but these are quite exotic and nobody actually uses them with $k$-means). Even worse, $k$-means only works on numerical data (which should actually be continuous and dense to be a good fit for $k$-means).
Hierarchical clustering is the clear winner here. It does not even require a distance: any measure can be used, including similarity functions, simply by preferring high values to low values. Categorical data? Sure, just use e.g. Jaccard. Strings? Try Levenshtein distance. Time series? Sure. Mixed-type data? Gower distance. There are millions of data sets where you can use hierarchical clustering but cannot use $k$-means.
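As a sketch of this flexibility (my own minimal implementation, not a library call): a naive single-linkage agglomerative clustering works unchanged with Levenshtein distance on strings, something $k$-means cannot do because strings have no meaningful centroid.

```python
def levenshtein(s, t):
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (cs != ct)))    # substitution
        prev = cur
    return prev[-1]

def single_linkage(items, dist, k):
    # Naive agglomerative clustering: repeatedly merge the two clusters
    # whose closest members are nearest (single linkage) until k remain.
    clusters = [[x] for x in items]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                dd = min(dist(x, y) for x in clusters[a] for y in clusters[b])
                if best is None or dd < best[0]:
                    best = (dd, a, b)
        _, a, b = best
        clusters[a].extend(clusters.pop(b))
    return clusters

words = ["kitten", "mitten", "sitting", "banana", "bandana"]
print(single_linkage(words, levenshtein, 2))
# [['kitten', 'mitten', 'sitting'], ['banana', 'bandana']]
```

The $O(n^3)$ loop above is only for illustration; real implementations cache the distance matrix.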
## Model
No winner here. $k$-means scores high because it yields a great data reduction. Centroids are easy to understand and use. Hierarchical clustering, on the other hand, produces a dendrogram. A dendrogram can also be very very useful in understanding your data set.
• Does Hierarchical fail like k means when clusters are 1)non spherical 2) have different radius 3) have different density? – GeorgeOfTheRF Nov 28 '15 at 3:09
• Both can work, and both can fail. That is why things like dendrograms are useful. Never trust a clustering result to be "correct", ever. – Anony-Mousse Nov 28 '15 at 13:02
• Hierarchical clustering may give locally optimised clusters, as it is based on a greedy approach, whereas K-means aims at a globally optimised objective. I have also found that hierarchical clustering is relatively easy to explain to business people compared to K-means. – Arpit Sisodia Sep 9 '17 at 10:13
I just wanted to add to the other answers a bit about how, in some sense, there is a strong theoretical reason to prefer certain hierarchical clustering methods.
A common assumption in cluster analysis is that the data are sampled from some underlying probability density $f$ that we don't have access to. But suppose we had access to it. How would we define the clusters of $f$?
A very natural and intuitive approach is to say that the clusters of $f$ are the regions of high density. For example, consider the two-peaked density below:
By drawing a line across the graph we induce a set of clusters. For instance, if we draw a line at $\lambda_1$, we get the two clusters shown. But if we draw the line at $\lambda_3$, we get a single cluster.
To make this more precise, suppose we have an arbitrary $\lambda > 0$. What are the clusters of $f$ at level $\lambda$? They are the connected components of the superlevel set $\{x : f(x) \geq \lambda \}$.
Now instead of picking an arbitrary $\lambda$ we might consider all $\lambda$, such that the set of "true" clusters of $f$ are all connected components of any superlevel set of $f$. The key is that this collection of clusters has hierarchical structure.
Let me make that more precise. Suppose $f$ is supported on $\mathcal X$. Now let $C_1$ be a connected component of $\{ x : f(x) \geq \lambda_1 \}$, and $C_2$ be a connected component of $\{ x : f(x) \geq \lambda_2 \}$. In other words, $C_1$ is a cluster at level $\lambda_1$, and $C_2$ is a cluster at level $\lambda_2$. Then if $\lambda_2 < \lambda_1$, then either $C_1 \subset C_2$, or $C_1 \cap C_2 = \emptyset$. This nesting relationship holds for any pair of clusters in our collection, so what we have is in fact a hierarchy of clusters. We call this the cluster tree.
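Here is a numeric sketch of these density clusters (my own toy example, echoing the two-peaked density above): threshold a mixture of two bumps at different levels $\lambda$ and count connected components of the superlevel set on a grid.

```python
import math

def f(x):
    # A two-peaked density: equal-weight mixture of two Gaussian-like bumps.
    bump = lambda m: math.exp(-((x - m) ** 2) / 0.5)
    return 0.5 * bump(-1.5) + 0.5 * bump(1.5)

xs = [i / 100 for i in range(-400, 401)]

def clusters_at(lam):
    # Connected components of the superlevel set {x : f(x) >= lam} on a grid,
    # returned as (left endpoint, right endpoint) intervals.
    comps, cur = [], []
    for x in xs:
        if f(x) >= lam:
            cur.append(x)
        elif cur:
            comps.append((cur[0], cur[-1]))
            cur = []
    if cur:
        comps.append((cur[0], cur[-1]))
    return comps

print(len(clusters_at(0.4)), len(clusters_at(0.01)))  # 2 1
```

At the high level the two peaks give two clusters; at the low level they merge into one, exactly the nesting that makes the collection a cluster tree.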
So now I have some data sampled from a density. Can I cluster this data in a way that recovers the cluster tree? In particular, we'd like a method to be consistent in the sense that as we gather more and more data, our empirical estimate of the cluster tree grows closer and closer to the true cluster tree.
Hartigan was the first to ask such questions, and in doing so he defined precisely what it would mean for a hierarchical clustering method to consistently estimate the cluster tree. His definition was as follows: Let $A$ and $B$ be true disjoint clusters of $f$ as defined above -- that is, they are connected components of some superlevel sets. Now draw a set of $n$ samples iid from $f$, and call this set $X_n$. We apply a hierarchical clustering method to the data $X_n$, and we get back a collection of empirical clusters. Let $A_n$ be the smallest empirical cluster containing all of $A \cap X_n$, and let $B_n$ be the smallest containing all of $B \cap X_n$. Then our clustering method is said to be Hartigan consistent if $\Pr(A_n \cap B_n = \emptyset) \to 1$ as $n \to \infty$ for any pair of disjoint clusters $A$ and $B$.
Essentially, Hartigan consistency says that our clustering method should adequately separate regions of high density. Hartigan investigated whether single linkage clustering might be consistent, and found that it is not consistent in dimensions > 1. The problem of finding a general, consistent method for estimating the cluster tree was open until just a few years ago, when Chaudhuri and Dasgupta introduced robust single linkage, which is provably consistent. I'd suggest reading about their method, as it is quite elegant, in my opinion.
So, to address your questions, there is a sense in which hierarchical clustering is the "right" thing to do when attempting to recover the structure of a density. However, note the scare-quotes around "right"... Ultimately density-based clustering methods tend to perform poorly in high dimensions due to the curse of dimensionality, and so even though a definition of clustering based on clusters being regions of high probability is quite clean and intuitive, it is often ignored in favor of methods which perform better in practice. That isn't to say robust single linkage isn't practical -- it actually works quite well on problems in lower dimensions.
Lastly, I'll say that Hartigan consistency is in some sense not in accordance with our intuition of convergence. The problem is that Hartigan consistency allows a clustering method to greatly over-segment clusters, such that an algorithm may be Hartigan consistent yet produce clusterings which are very different from the true cluster tree. We have produced work this year on an alternative notion of convergence which addresses these issues. The work appeared in "Beyond Hartigan Consistency: Merge distortion metric for hierarchical clustering" in COLT 2015.
• This is an interesting way of thinking about hierarchical clustering. I find it strongly reminiscent of clustering by nonparametric density estimation (pdf), which is implemented in R in the pdfCluster package. (I discuss it here.) – gung Nov 28 '15 at 4:12
• HDBSCAN* uses a similar approach. – Anony-Mousse Nov 28 '15 at 13:04
An additional practical advantage of hierarchical clustering is the possibility of visualising results using a dendrogram. If you don't know in advance what number of clusters you're looking for (as is often the case...), the dendrogram plot can help you choose $k$ with no need to create separate clusterings. A dendrogram can also give great insight into the data structure, help identify outliers, etc. Hierarchical clustering is also deterministic, whereas k-means with random initialization can give you different results when run several times on the same data. In k-means, you also have to choose among different methods for updating cluster means (although the Hartigan-Wong approach is by far the most common), an issue which does not arise with hierarchical methods.
EDIT thanks to ttnphns: One feature that hierarchical clustering shares with many other algorithms is the need to choose a distance measure. This is often highly dependent on the particular application and goals. This might be seen as an additional complication (another parameter to select...), but also as an asset - more possibilities. In contrast, the classical k-means algorithm specifically uses Euclidean distance.
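To make the "one tree, every $k$" point concrete, here is a minimal pure-Python sketch (toy 1-D data, my own example): single-linkage clusters at a given cut height are just the connected components of the graph joining points closer than that height, so every value of $k$ comes from cutting the same hierarchy at a different height, deterministically:

```python
def cut(points, height):
    # Single-linkage clusters at a given height are the connected components
    # of the graph joining points at distance <= height (union-find below).
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if abs(points[i] - points[j]) <= height:
                parent[find(i)] = find(j)

    groups = {}
    for i, p in enumerate(points):
        groups.setdefault(find(i), []).append(p)
    return sorted(sorted(g) for g in groups.values())

pts = [0.0, 0.2, 0.3, 5.0, 5.1, 9.0]
low_cut = cut(pts, 0.5)   # k = 3: [[0.0, 0.2, 0.3], [5.0, 5.1], [9.0]]
high_cut = cut(pts, 4.0)  # k = 2: a higher cut only merges clusters, never splits them
```

Running `cut` twice with the same arguments always returns the same partition — there is no dependence on initialization, unlike k-means.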
• I suppose "problem" in your last paragraph would be seen positively as an asset. K-means, however, is based implicitly on euclidean distance only. – ttnphns Nov 27 '15 at 14:02
• Many possible choices can be a problem as well as an asset, indeed :) Thanks for the comment on k-means, I'll improve that paragraph. – Jacek Podlewski Nov 27 '15 at 14:10
• @ttnphns Actually, " $k$-means " can be used with any Bregman divergences jmlr.org/papers/volume6/banerjee05b/banerjee05b.pdf ; I mean this is the case when considering that $k$-means is what results when considering the limiting case of Gaussian mixture models (from soft to hard), then by replacing Gaussian by another member of the exponential family, you replace the Euclidean distance by another Bregman divergence associated with the member of the family you picked. You end up with a similar algorithm scheme that aims to find a maximum likelihood with an expectation-maximization. – mic Nov 27 '15 at 14:21
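As a concrete illustration of mic's point, here is a minimal sketch of the Bregman hard-clustering scheme from the Banerjee et al. paper, instantiated with the generalized KL divergence on positive 1-D data (the data and starting centers below are made up). The key fact is that the cluster mean remains the optimal representative under any Bregman divergence, so the familiar two-step iteration goes through unchanged:

```python
import math

def kl(x, mu):
    # Bregman divergence generated by F(x) = x log x (generalized KL)
    return x * math.log(x / mu) - x + mu

def bregman_kmeans(data, centers, steps=20):
    for _ in range(steps):
        groups = [[] for _ in centers]
        for x in data:                     # assignment step: nearest in divergence
            i = min(range(len(centers)), key=lambda i: kl(x, centers[i]))
            groups[i].append(x)
        centers = [sum(g) / len(g) if g else c   # update step: plain mean
                   for g, c in zip(groups, centers)]
    return centers

data = [0.5, 0.6, 0.7, 9.0, 10.0, 11.0]
centers = bregman_kmeans(data, [1.0, 5.0])   # converges to roughly [0.6, 10.0]
```

Swapping `kl` for squared Euclidean distance recovers classical k-means; the iteration scheme itself does not change.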
• I believe the original question was made with regard to "classical' K-means and not a slightest intention to delve into Bregman divergences. Nice remark though, I'll check out this paper more thoroughly for sure. – Jacek Podlewski Nov 27 '15 at 14:37
• @mic nobody uses Bregman divergences beyond variations of Euclidean distance... it is a tiny tiny class only. But people would like to use e.g. Manhattan distance, Gower etc. which are not Bregman divergences for all I know. – Anony-Mousse Nov 28 '15 at 13:06
http://www.koreascience.or.kr/article/JAKO201609064339794.page | # Optimal Congestion Management of a Transmission System Connected with Wind Farms Using Sensitivity
• Choi, Soo-Hyun (Dept. of Electrical Engineering, Hankyong National University) ;
• Kim, Kyu-Ho (Dept. of Electrical Engineering, Hankyong National University, IT Fusion Research Institute)
• Accepted : 2016.11.25
• Published : 2016.12.01
#### Abstract
This paper studies a generator rescheduling technique for congestion management in power systems with wind farms. The proposed technique is formulated to minimize the rescheduling cost of conventional and wind generators while alleviating operational line overloading. The generator rescheduling method incorporates wind farms into the power system, with the locations of the wind farms selected based upon the power transfer distribution factor (PTDF). Because not all generators in the system need to participate in congestion management, generators are selected for rescheduling based on the proposed generator sensitivity factor (GSF). The selected generators are then rescheduled using linear programming (LP) optimization to alleviate transmission congestion. The effectiveness of the proposed methodology is analyzed on the IEEE 14-bus system.
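The abstract's sensitivity logic can be sketched in a few lines. All numbers below are invented for illustration (they are not from the paper, which solves a full LP over GSF-selected generators on the IEEE 14-bus system); the sketch only shows how PTDF-style sensitivities turn a line overload into a rescheduling amount:

```python
# Toy sketch of PTDF/sensitivity-based rescheduling on one congested line.
ptdf = {"g1": 0.8, "g2": 0.3}           # MW of line flow per MW injected by each generator
dispatch = {"g1": 200.0, "g2": 100.0}   # current output (MW)
limit = 170.0                           # thermal limit of the congested line (MW)

def line_flow(dispatch):
    return sum(ptdf[g] * dispatch[g] for g in dispatch)

overload = line_flow(dispatch) - limit          # 190 - 170 = 20 MW over the limit
# Shifting 1 MW from g1 to g2 changes the flow by ptdf[g2] - ptdf[g1] = -0.5 MW,
# so the balanced swap that just clears the overload is:
delta = overload / (ptdf["g1"] - ptdf["g2"])    # 40 MW moved from g1 to g2
dispatch["g1"] -= delta
dispatch["g2"] += delta
print(line_flow(dispatch))                      # back at the limit (~170 MW)
```

With many generators and several monitored lines, the same flow constraints become rows of an LP whose objective is the total rescheduling cost — which is the formulation the paper uses.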
#### Acknowledgement
Supported by : Hankyong National University
#### References
1. Murphy, Colleen, and Andrew Keane. "Optimisation of wind farm reactive power for congestion management." PowerTech (POWERTECH), 2013 IEEE Grenoble. IEEE, 2013.
2. A. Kumar, S. C. Srivastava, and S. N. Singh, "Congestion management in competitive power market: A bibliographical survey," Elect. Power Syst. Res, vol. 76, pp. 153-164, 2005. https://doi.org/10.1016/j.epsr.2005.05.001
3. Hazra and Sinha, "Congestion Management Using Multiobjective Particle Swarm Optimization." IEEE transactions on power systems. 22(4), 1726-34, 2007. https://doi.org/10.1109/TPWRS.2007.907532
4. Dutta and Singh, "Optimal Rescheduling of Generators for Congestion Management Based on Particle Swarm Optimization." IEEE transactions on power systems. 23(4), 1560-69, 2008. https://doi.org/10.1109/TPWRS.2008.922647
5. Venkaiah, Ch., and Vinodkumar, D.M, "Fuzzy adaptive bacterial foraging congestion management using sensitivity based optimal active power rescheduling of generators." Applied Soft Computing, 11, 4921-4930, 2011. https://doi.org/10.1016/j.asoc.2011.06.007
6. Singh, K., Padhy, N.P., and Sharma, J, "Congestion management considering hydro-thermal combined operation in a pool based electricity market." Electrical Power and Energy Systems, 33, 1513-1519, 2011. https://doi.org/10.1016/j.ijepes.2011.06.037
7. A. Kumar, S. C. Srivastava, and S. N. Singh, "A zonal congestion management approach using real and reactive power rescheduling," IEEE Trans. Power Syst., vol. 19, no. 1, pp. 554-562, Feb. 2004.
8. Kyu-Ho Kim, et al. "An efficient operation of a micro grid using heuristic optimization techniques: Harmony search algorithm, PSO, and GA." IEEE PES General Meeting, 2012.
9. Kyung-bin Song, Kyu-Hyung Lim, Young-Sik Baek, "A Case Study of the Congestion Management for the Power System of the Korea Electric Power Cooperation", KIEE, vol. 50A, no. 12, 2001
10. Allen J. Wood, Bruce F. Wollenberg, "Power generation, operation, and control" Wiley-Interscience, 2002
https://jorgeserrano.es/free-to-contact-best-and-safest-online-dating-sites-for-women-in-philadelphia | # Free To Contact Best And Safest Online Dating Sites For Women In Philadelphia
## Free To Contact Best And Safest Online Dating Sites For Women In Philadelphia
Treatment considerations most oropharyngeal cancers are locally advanced on presentation, and the majority of these are technically amenable to surgical excision by a variety of means. The price is 5 euro june-mid september and 8 euro for all other times. U.s. 93 remains with the freeway another 11.5 miles before departing for ely. After 2 min of incubation, the reaction was quenched with 1 % (v:v) acetic acid (final volume). These is the tail of my /var/log/messages in between one crash and the next. {(e:paul), try pasting these messages in a journal without the code tags, a messed up toc turns up! Does digital data offer indicators that can be used to monitor marketing effectiveness and predict box office success even before awareness turns into intent? It would have been our monthly market today «kalamunda markets» and with everything closed for now we will miss our contact with our wonderful customers. Bz#871058 a race condition in the lvmetad daemon occasionally caused lvm commands to fail intermittently, failing to find a vg that was being updated at the same time by another command. 1. strapless seamfree bandeau from target super soft and comfy, this seam free, strapless bandeau features removable modesty cups for versatile styling options and smooth elastic for all day comfort. At the moment, however, vikas is busy with shandaar’s post-production work and has said that cast and crew will be confirmed by february. This is a breed that requires plenty of exercise and is overall very healthy. It is made with eco-friendly, premium material including 100% breathable natural cotton and thick foam padding. Ironside get this – a guy with the last name «ironside» gets struck by a bullet and is rendered wheelchair bound. «that’s the hidden dimension here,» said harley shaiken, a labor professor at the university of california-berkeley. 2014;11:88 pubmed publisher munoz i, szyniarowski p, toth r, rouse j, lachaud c. 
improved genome editing in human cell lines using the crispr method. Dr. patricia wonch hill research assistant professor social and behavioral sciences research consortium 228. I am not sure if there will be further action on ms. oliver’s part, but i will call her again today to find out what is really going on. For whatever reason the wav held more details about the track than a newly generated one and the conversion again to mp3 used those particulars. The main house is built around a cool central courtyard with a retractable glass roof, which provides a respite for reading or relaxing out of the harsh midday sun. «bulls on parade,» was for australian radio triple j’s ongoing series «like a version» and «i against i» was performed with members of bad brains for a spotify singles session. These companies have each gathered data from several hundreds of samples produced from short-term exposures of agents at pharmacological and toxicological dose levels. 2. under normal conditions, you should probably never need to clean your acoustat panels. Some of it even traveled at the speed of pinkie, which was much faster, and came with party invitations. The computer is a cheating bastard: o’neill’s simulation in «the gamekeeper», which recreates a mission of his in east germany that turned horribly bloody. (the factor of $2\pi$ insures that the area under the curve remains the same). Beast boy is not a playable character and does not appear at all in lego batman 2. We first review the equations and characteristics of straight lines, then classify polynomial equations, define quadric surfaces and conics, and trigonometric identities and areas. Located in a historic building in the centre of banff, the hostel is nearby the beautiful waterfront and harbour of banff and just a short walk from local restaurants and attractions. About inetwork auto group inetwork auto group is an internet-based dealership which provides high quality vehicles at no haggle pricing. 
offering a new approach to the automotive sales industry. Peeing on a floor can symbolize how important emotions are in your life and that you have a strong need to express those emotions as part of who you are. If treatment is discontinued then hair reduction intent reimbursement within 6-8 free to contact best and safest online dating sites for women in philadelphia months.
https://lavida.us/problem.php?id=1269 | Problem 1269 -- Hop --- Don
### 1269: Hop --- Don
Time limit: 10 Sec Memory limit: 128 MB
Submitted: 20 Accepted: 5
#### Problem Description
KERMIT THE FROG is a classic video game with a simple control scheme and objective, but one that requires a good deal of thinking. You control an animated frog that can walk and hop, in both the forward and backward directions. The frog stands in a space within an otherwise contiguous line of tiles. Each tile is painted black on one side, and white on the other. The frog can walk (forward or backward) over an adjacent tile (in front of or behind him). When the frog walks over a tile, the tile slides to the space where the frog was standing.
For example, in the adjacent figure, the frog has two tiles behind him, and three in front. We'll use the notation BWFBBW to refer to this situation, where F refers to the space (where the frog is standing), B is a tile with its black face showing, and W is a tile with its white face showing. The forward direction is from left to right. If the frog were to walk forward, the resulting situation is BWBFBW. The behavior is similar when the frog walks backward: the tile behind the frog slides to where the frog was standing. The frog can also hop over the tiles. The frog can hop over an adjacent tile, landing on the tile next to it. For example, if the frog were to hop backward, it would land on the first (left-most) tile, and the tile would jump to the space where the frog was standing. In addition, the tile would flip sides. For example, hopping backward in the figure would result in the situation FWWBBW. We challenge you to write a program to determine the minimum number of moves (walks or hops) to transform one tile configuration into another.
#### Input
Your program will be tested on one or more test cases. Each test case is specified on a single line containing a string S representing the initial tile arrangement. S is a non-empty string no longer than 100 characters, made of the letters 'B' and 'W', and exactly one 'F'. The last line of the input file has one or more '-' (minus) characters.
#### Output
For each test case, print the following line:
k. M
Where k is the test case number (starting at one), and M is the minimum number of moves needed to transform the given arrangement into an arrangement that has no white tile(s) between any of its black tiles. The frog can be anywhere. M is -1 if the problem cannot be solved in less than 10 moves.
#### Sample Input
WWBBFBW
WWFBWBW
FWBBWBW
---
#### Sample Output
1. 0
2. 1
3. 2
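A brute-force search is enough at these sizes. The sketch below (my own, not an official solution) runs a breadth-first search over tile strings, encoding a walk as a swap with the adjacent tile and a hop as the landing tile jumping back, flipped, into the frog's old space; the depth cutoff of 9 reflects one reading of the "less than 10 moves" clause:

```python
from collections import deque

def is_goal(s):
    tiles = s.replace('F', '')              # the frog may stand anywhere
    first, last = tiles.find('B'), tiles.rfind('B')
    return first == -1 or 'W' not in tiles[first:last + 1]

def neighbors(s):
    i, n, out = s.index('F'), len(s), []
    for d in (-1, 1):
        j = i + d                           # walk: adjacent tile slides over
        if 0 <= j < n:
            t = list(s)
            t[i], t[j] = t[j], 'F'
            out.append(''.join(t))
        k = i + 2 * d                       # hop: landing tile jumps back, flipped
        if 0 <= k < n:
            t = list(s)
            t[i] = 'W' if s[k] == 'B' else 'B'
            t[k] = 'F'
            out.append(''.join(t))
    return out

def min_moves(start, limit=9):
    if is_goal(start):
        return 0
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        s, d = frontier.popleft()
        if d == limit:
            continue
        for t in neighbors(s):
            if t not in seen:
                if is_goal(t):
                    return d + 1
                seen.add(t)
                frontier.append((t, d + 1))
    return -1

for case in ("WWBBFBW", "WWFBWBW", "FWBBWBW"):
    print(min_moves(case))                  # 0, 1, 2 — matches the samples
```

The hop rule can be checked against the statement: from BWFBBW, hopping backward flips the left-most B to W and moves the frog there, giving FWWBBW as described.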
https://eprints.soton.ac.uk/412105/ | The University of Southampton
University of Southampton Institutional Repository
# Multiple Z' -> t-tbar signals in a 4D Composite Higgs Model
Barducci, D., De Curtis, S., Mimasu, K. and Moretti, S. (2013) Multiple Z' -> t-tbar signals in a 4D Composite Higgs Model. Physical Review D, 88, [074024].
Record type: Article
## Abstract
We study the production of top-antitop pairs at the Large Hadron Collider as a testbed for discovering heavy Z' bosons belonging to a composite Higgs model, as, in this scenario, such new gauge interaction states are sizeably coupled to the third generation quarks of the Standard Model. We study their possible appearance in cross section as well as (charge and spin) asymmetry distributions. Our calculations are performed in the minimal four-dimensional formulation of such a scenario, namely the 4-Dimensional Composite Higgs Model (4DCHM), which embeds five new $Z'$s. We pay particular attention to the case of nearly degenerate resonances, highlighting the conditions under which these are separable in the aforementioned observables. We also discuss the impact of the intrinsic width of the new resonances on the event rates and various distributions. We confirm that the 14 TeV stage of the LHC will enable one to detect two such states, assuming standard detector performance and machine luminosity. A mapping of the LHC's discovery potential for these new gauge bosons is given. Finally, from the latter, several benchmarks are extracted which are amenable to experimental investigation.
Full text not available from this repository.
Published date: 22 October 2013
Additional Information: 30 pages, 3 figures. Text and figures updated to match published version
Keywords: hep-ph
## Identifiers
Local EPrints ID: 412105
URI: http://eprints.soton.ac.uk/id/eprint/412105
ISSN: 2470-0010
PURE UUID: b42a92ba-a3c0-4ffe-94c2-51f4c040d69c
## Catalogue record
Date deposited: 11 Jul 2017 09:43
## Contributors
Author: D. Barducci
Author: S. De Curtis
Author: K. Mimasu
Author: S. Moretti
http://math.stackexchange.com/questions/161034/segmented-area-between-circles?answertab=votes | # Segmented area between circles
The following is a geometry problem that I came across with in the course of a research project.
Consider a ray starting at some initial point $t$. Place point $s_1$ at distance $r$ from $t$ on the ray and draw a circle centered at $s_1$ that passes through $t$. Likewise, centered at $t$, an arc with radius $r$ goes through $s_1$. Let $\mathcal{A_1}$ be the area enclosed between the intersecting arcs.
Next, arbitrarily place another point somewhere on the free end of the ray and call it $s_2$ such that $|s_1 - t| < |s_2 - t|$, where $|.|$ denotes the Euclidean distance. A circle with radius $r$ is centered at $s_2$ and another arc centered at $t$ goes through $s_2$. The area enclosed between these intersecting arcs we call $\mathcal{A}_2$. It is easy to show that $\mathcal{A}_1 < \mathcal{A}_2 < \lim_{|s_2 - t| \to \infty} \mathcal{A}_2 = \frac{1}{2} \pi r^2$.
Now, assume that we mark the midpoint of each ray segment lying within the enclosed areas, and that arcs centered at $t$ pass through these marks, segmenting $\mathcal{A}_1$ and $\mathcal{A}_2$. We call the resulting segmented areas $\mathcal{A}_{11}$ and $\mathcal{A}_{12}$, and $\mathcal{A}_{21}$ and $\mathcal{A}_{22}$, as depicted below (dashed lines are the arcs centered at $t$).
Question: How does $\mathcal{A}_{22}$ change as $s_2$ gets farther from $t$? (i.e., does it increase or decrease?) What can we say about $\mathcal{A}_{22}$ in comparison with $\mathcal{A}_{12}$?
Any idea or comment is much appreciated.
EDIT: The question has been edited in a way that makes the comments incomprehensible. Please see the edit history if you want to make sense of the comments.
EDIT: Here is the link to the same question at mathoverflow.net
Do you have numeric evidence that $A_{12}>A_{22}$. Intuitively, I would reason: If we move $s_1$ a bit farther right, $A_{11}$ would shrink to a point and $A_{12}$ would be a circle of radius $\frac r2$, whose area is definitely less than that of $A_{22}$. It looks unintuitive to me that the area of the $A_{\cdot 2}$ would be increasing from $s_2$ to $s_1$ only suddenly decrease to the right of $s_1$. – Henning Makholm Jun 20 '12 at 23:50
Thanks for the comment, @HenningMakholm. The fact is that $s_1$ is a fixed point at distance $r$ from $t$ and $s_2$ is the only point that we arbitrarily choose. In other words, we are not allowed to move $s_1$. Regarding your question on the numerical evidence, I should say no. In fact, I do not know any way to numerically evaluate these regions. Whatever I said is just based on my intuition and of course I am not sure of its correctness. – Ali Jun 20 '12 at 23:59
Who says we're not allowed to imagine $s_1$ being somewhere else? But if that bothers you, just put an $s_0$ half a unit to the right of $s_1$ instead, and consider how unlikely it seems that $A_{02}<A_{22}<A_{12}$. – Henning Makholm Jun 21 '12 at 0:03
Before you decide on the implications, better check the facts. If $A_{22}\gt A_{12}$ as $s_2\to\infty$, then that will prove that there is some finite place where $A_{22}\gt A_{12}$. In other words, this is an attempt to prove that the conjecture is wrong. – Gerry Myerson Jun 21 '12 at 2:06
Ali, please edit in a link to the identical question posted at MathOverflow, and please edit in a link to this question over there. – Gerry Myerson Jun 21 '12 at 5:36
The shape of $A_1$ is always the same, so we can calculate its area as the sum of two circular segments: $A_1 = r^2(2\pi/3 - \sqrt{3}/2)$.
Let's first let $t$ be the origin (why the hell would you name the origin of a ray $t$?!). Let's set $r=1$, keeping in mind all areas will be scaled by $r^2$ later. The coordinates of the vertices of $A_{11}$ are $(1/8,\pm \sqrt{15}/8)$, and the subtended angles are $2\tan^{-1}(\sqrt{15})$ and $2\tan^{-1}(\sqrt{15}/7)$. This then gives $A_{11} \approx 0.350767 (r^2)$, $A_{1} \approx 1.22837$, and $A_{12} \approx 0.877603$. Note that all these numbers can be made precise; they're just huge ugly expressions, and remember they are multiplied by $r^2$.
Now let $s_2$ be located at coordinates $(R,0)$ where $R > 1$ according to our assumptions. By similar reasoning, the vertices of $A_2$ are $(\frac{2R^2-1}{2R},\pm \sqrt{1-\frac{1}{4R^2}})$. Similarly now, we can compute $A_{22}$. The expression for $A_{22}/A_{12}$ is horrendously large, so I will just have Mathematica plot it as a function of $R$:
The limit according to Mathematica is $1.09003$.
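The segment sums are easy to reproduce without Mathematica. The sketch below (my own check, with $r=1$) recomputes each region from the vertex coordinates derived above; `areas(1.0)` returns $(\mathcal A_1, \mathcal A_{12})$ and large $R$ approaches the limit. As a cross-check, in the $R\to\infty$ limit $\mathcal A_{22}$ appears to tend to the strip of the unit half-disc between the diameter and the chord at distance $1/2$, of area $\pi/6 + \sqrt3/4 \approx 0.95661$, and $0.95661/0.877603 \approx 1.09003$, consistent with the Mathematica value:

```python
import math

def segment_area(r, half_angle):
    """Circular segment of radius r whose chord subtends 2*half_angle at the center."""
    a = 2.0 * half_angle
    return 0.5 * r * r * (a - math.sin(a))

def areas(R):
    """Return (A2, A22) for s2 at distance R from t (r = 1); R = 1 gives (A1, A12)."""
    # outer lens between circle(t, R) and circle(s2, 1); vertices at x = (2R^2-1)/(2R)
    A2 = segment_area(R, math.acos((2 * R * R - 1) / (2 * R * R))) \
       + segment_area(1.0, math.acos(1.0 / (2 * R)))
    # inner lens between the splitting arc circle(t, R - 1/2) and circle(s2, 1)
    inner = segment_area(R - 0.5, math.acos((2 * R * R - R - 0.75) / (2 * R * R - R))) \
          + segment_area(1.0, math.acos((R + 0.75) / (2 * R)))
    return A2, A2 - inner

A1, A12 = areas(1.0)
A11 = A1 - A12
ratio_far = areas(1000.0)[1] / A12
print(A1, A11, A12, ratio_far)   # ~1.22837, ~0.350767, ~0.877603, ~1.0900
```

At $R=1$ the two formulas collapse onto the $A_{11}/A_{12}$ case (the inner-lens half-angles become $\cos^{-1}(1/4)$ and $\cos^{-1}(7/8)$, matching the vertices above).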
Edit: I have corrected a number of mistakes. Now, the area ratio is always greater than unity for $R>1$.
I should just like to add that numerically evaluating the areas is extremely simple as the sum of circular segments, and can even be done in a numerically robust way when the segments are thin slivers. For C code, see the function CircularSectorArea in this file – Victor Liu Jun 21 '12 at 5:22
Nice. Is it true that $A_{22}\gt A_{12}$ for all values of $R$? – Gerry Myerson Jun 21 '12 at 5:44
No. The plot clearly shows that it crosses over the area of $A_{12}$. Mathematica says this happens around $R=9.23574$. – Victor Liu Jun 21 '12 at 5:48
Thanks. The plot doesn't show this, unless you put in a horizontal line, labeled, at $A_{12}$ and the plot extends below this line. – Gerry Myerson Jun 21 '12 at 7:33
• @VictorLiu: How can that be? If we set $R=1$, then $A_{22}$ and $A_{12}$ coincide. Are there two $R$ values that give the same $A_{22}$ area? – Henning Makholm Jun 21 '12 at 10:23
https://www.repository.cam.ac.uk/browse?type=author&sort_by=1&order=ASC&rpp=20&etal=-1&value=ProActive+Project+Team&starts_with=U
• #### Who will increase their physical activity? Predictors of change in objectively measured physical activity over 12 months in the ProActive cohort
(2010-04-30)
Abstract Background The aim was to identify predictors of change in objectively measured physical activity over 12 months in the ProActive cohort to improve understanding of factors influencing change in physical activity. ...
http://physics.stackexchange.com/questions/30973/what-is-meant-by-nothing-in-physics-quantum-physics | # What is meant by “Nothing” in Physics/Quantum Physics?
I am not a physicist, so please forgive my ignorance. This is related to my posts and this.
I am trying to understand what is meant by the term "Nothing" in physics or Quantum Field Theory (QFT) since it seems to me that this term is not used in the way we understand it in everyday language.
So QFT seems to suggest (in a nutshell) that "things pop out of nothing".
But from wiki I see the following quote:
"According to quantum theory, the vacuum contains neither matter nor energy, but it does contain fluctuations, transitions between something and nothing in which potential existence can be transformed into real existence by the addition of energy. (Energy and matter are equivalent, since all matter ultimately consists of packets of energy.) Thus, the vacuum's totally empty space is actually a seething turmoil of creation and annihilation, which to the ordinary world appears calm because the scale of fluctuations in the vacuum is tiny and the fluctuations tend to cancel each other out."
So what is "Nothing" in QFT? If this quote is correct, I can interpret it only as follows:
The "Nothing" is not in the way used in everyday speech but is composed of "transitions" i.e. something that is "about to become"
Is this correct? If yes, why is this defined as "Nothing"? Something that is "about to become" is not nothing but there is something prerequisite.
In very lame terms: Einstein was born a non-physicist but became a physicist, so if this is a correct analogy, then:
1. there is something underlying that was non-something that became something
2. A non-something became something because something else (not nothing) permitted it to become. E.g. Einstein's talent (or Mozart's) would have been lost had he been born in Africa or in a country with no educational facilities. So he would not have become a physicist (the required talent would be present but would not come into reality)
Related: physics.stackexchange.com/q/34049/2451 and links therein. – Qmechanic Jun 27 at 17:11
In Physics "nothing" is generally taken to be the lowest energy state of a theory. We wouldn't normally use the word "nothing" but instead describe the lowest energy state as the "vacuum". I can't think of an intuitive way to describe the QM vacuum because all the obvious analogies have "something" instead of "nothing", so I'll do my best but you may still find the idea hard to grasp. That's not just you - everybody finds it hard to grasp.
Start with the classical description of an electric field (Maxwell's equations). It's not too hard to imagine an electric field as a field filling space. You can even feel the field: for example if you put your hand near an old style TV screen you can feel the static electricity. You can imagine turning down the electric field until it disappears completely, in which case you are left with the vacuum i.e. nothing.
Now imagine the same field, but this time we're using the quantum description of the field (Quantum Electrodynamics instead of Maxwell's equations). At the classical level the field is approximately the same as the description Maxwell's equations give, but now we have fluctuations in the field due to the energy-time uncertainty principle. Just as before, imagine turning down the electric field until it disappears. Unlike the classical description, the (average) electric field may disappear but the fluctuations do not. This means the quantum vacuum is different from the classical vacuum because it contains the fluctuations even after you've turned the field down to zero.
The key point is that when I say "turn the field down" I mean reduce the energy to the lowest it will go i.e. you can't make the energy of the electric field any lower. By definition this is what we call the "vacuum" even though it isn't empty (i.e. it contains the fluctuations). It isn't possible to make the vacuum any emptier because the fluctuations are always present and you can't remove them.
So a first conclusion I draw from a first reading of your answer (thank you for your help) is that what a physicist means when he says "Nothing" does not have an exact equivalent in the term "Nothing" that we ordinary people use in our everyday speech. Did I get this part? – Jim Jun 29 '12 at 10:31
Well, physicists use the word "vacuum" when they specifically mean a quantum field theory ground state. The word "nothing" doesn't have a specific meaning in physics. However you are basically correct in that for most people "vacuum" and "nothing" are the same thing, while for physicists "vacuum" and "nothing" mean different things. – John Rennie Jun 29 '12 at 10:53
A second point, from "nothing" is generally taken to be the lowest energy state of a theory: 1) each "theory" has a different notion of "nothing"; 2) the lowest energy state is not the same as a no-energy state. Are these (2) also correct? – Jim Jun 29 '12 at 11:15
Have a look at my updated answer to physics.stackexchange.com/questions/30965 as I think it addresses some of your concerns. – John Rennie Jun 29 '12 at 14:22
At an even more abstract level (and inspired by the John Rennie's TV analogy): you seem to think of "Nothing" as the equivalent of a black TV screen. In modern physics, "Nothing" is similar to the noise between TV channels.
If we take "nothing" to be the same as "zero", "something" to be the same as "not-zero", the vacuum state is both "nothing" and "something".
The "nothing" part of the vacuum state as a theoretical object is that the average value of a series of measurements of the field will be zero. The "something" part of the vacuum state is that the value of any single measurement will in general not be zero. When we can't predict single measurement results and how they will vary over time, we often find that we can predict average values and how the average values will vary over time.
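To make the "average zero, single measurements nonzero" point concrete, here is a small simulation I am adding (it is not part of any answer above, and the harmonic-oscillator framing is my own assumption): in the ground state of a quantum harmonic oscillator (units hbar = m = omega = 1), position measurements are Gaussian with mean 0 and variance 1/2, so the average is "nothing" (zero) while individual measurements fluctuate ("something").

```python
# Hypothetical illustration: position measurements in the ground state of a
# quantum harmonic oscillator (hbar = m = omega = 1). The ground-state
# wavefunction is Gaussian, so measured positions are normally distributed
# with mean 0 and variance 1/2.
import random
import statistics

random.seed(0)                 # reproducible
sigma = 0.5 ** 0.5             # sqrt(<x^2>) for the ground state
samples = [random.gauss(0.0, sigma) for _ in range(100_000)]

mean = statistics.fmean(samples)
mean_square = statistics.fmean(x * x for x in samples)
print(f"<x>   ~ {mean:+.4f}")        # close to 0: the average is "nothing"
print(f"<x^2> ~ {mean_square:.4f}")  # close to 0.5, never 0: "something"
```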
There will most likely be technical aspects to any Physicist's answer here. In the above, "measurements of the field" must be understood to have quite theoretical connotations. John Rennie has labored heroically, but ultimately you have to work at being an intimate friend of the Math and its relationship to experiment.
You seem to be trying to make "nothing" be something vaguely different from any mathematical idea.
a theoretical object is that the average value of a series of measurements of the field will be zero: doesn't this depend on the scale? Perhaps it is because of my lack of background, but it seems to me that the term "Nothing" is a misnomer. It is actually a "handy" name for something so small that it is negligible. But in reality there is "something". – Jim Jun 29 '12 at 13:49
I think no. The average value predicted by the theory for measurements of the field in the vacuum state is zero. Not so small as to be negligible. The measurements we really record in lab books and in computer memory are never measurements of the vacuum state, however, because there's always a nontrivial environment, always thermal fluctuations, etc., all of which has to be modeled. The vacuum is the maximally symmetric starting point for an idealized theoretical model, just as the zero-valued classical field is the starting point for modeling classical systems. – Peter Morgan Jun 29 '12 at 15:36
Simple answer to the simple question: yes, this is definitely a misnomer.
In physics, "nothing" has come to mean the base state. I will try not to use the word "nothing" in my description.
In the simplest terms: if I start with zero apples, add one apple, and then eat that apple before you see it (and keep subtracting just as fast as apples are added), that is what the word is being used for. There was something, but when you checked you got the answer of zero, which is just what I started with (the base state). It is an almost accurate description of a QED vacuum.
The necessary physics part is:
“The quantum theory asserts that a vacuum, even the most perfect vacuum devoid of any matter, is not really empty. Rather the quantum vacuum can be depicted as a sea of continuously appearing and disappearing [pairs of] particles that manifest themselves in the apparent jostling of particles that is quite distinct from their thermal motions. These particles are ‘virtual’, as opposed to real, particles. ...At any given instant, the vacuum is full of such virtual pairs, which leave their signature behind, by affecting the energy levels of atoms.”
Interesting analogy. But where does this apple you add and eat come from? – Jim Jul 3 '12 at 6:31
If myself or anyone on this site knew that answer to that question, it would be all over the news. – Argus Jul 3 '12 at 13:41
Fair enough. But I guess what you are saying here is that the observation is that at time X there was nothing there and at time X+δτ there is an apple. What I don't get is why it is believed that the apple appeared suddenly (and was lost afterwards) and not that it was always there, but we couldn't detect it? That seems more reasonable to me than supporting that nothing is the "birth" of something. Does this make sense to you? I don't have your background and perhaps this seems too simplistic/dumb to you – Jim Jul 3 '12 at 13:49
For the most part because we only detect (at this time) the effect of this vacuum as it relates to the energy of atoms. – Argus Jul 3 '12 at 13:57
My background involves 1 week in a high school history class describing Isaac Newton sitting under a tree. So there is no "too simplistic" in my view. Lol, but thank you for making me feel smart. – Argus Jul 3 '12 at 13:59
I'm not a physicist, but based on studies I did: all types of elementary particles and forces also have fields throughout the whole universe, and the fields are always there. So even if we don't see any particle in a place, it doesn't mean that there is "nothing" in that place, because the fields are always there. Based on the uncertainty principle, since we can't accurately measure the energy of a specific system at a specified time, the conclusion is that the energy of the system cannot be absolutely zero. So changes happen in the fields even in places we think are empty. In this case virtual particles and virtual antiparticles borrow energy from the system to come into existence, and then after a short time they collide together and give back this energy to the system. So actually "nothing" does not exist.
http://math.stackexchange.com/questions/49145/cos-hatabc-a-cos-hatbc-ab-cos-hatc-frac-a2-b2-c22 | # $\cos(\hat{A})BC+ A\cos(\hat{B})C+ AB\cos(\hat{C})=\frac {A^2 + B^2 + C^2}{2}$
What more can be said about the identity derived from the law of cosines (motivation below)?$$\cos(\widehat{A})BC+ A\cos(\widehat{B})C+ AB\cos(\widehat{C})=\frac {A^2 + B^2 + C^2}{2} \tag{IV}$$
The LHS looks as if an operator $\cos(\widehat{\phantom{X}})$ is being applied successively to the terms of $ABC$. I tried to represent it in a way analogous to the Laplacian operator convention, but maybe there are more common ways of representing the LHS using some operator and sigma notation (please let me know if there are).
My question is: are there any identities/structures related to, or looking similar to, (IV)? I apologize if this looks like a general fishing-expedition question, but I cannot think of anything more to add to this post at this stage. Thank you
Motivation for IV,
Let $A,B,C$ be the sides of a triangle.
Let $\widehat{C} = \widehat{AB}$ stand for the Angle opposite to side C between the sides A and B, then the law of cosines for all three sides can be written as $$A^2 + B^2 - 2 AB\cos(\widehat{C}) = C^2 \tag{I}$$ $$A^2 + C^2 - 2 AC\cos(\widehat{B}) = B^2 \tag{II}$$ $$B^2 + C^2 - 2 BC\cos(\widehat{A}) = A^2 \tag{III}$$ Adding $I ,II,III$ and juggling the terms we get :
$$AB\cos(\widehat{C})+AC\cos(\widehat{B})+BC\cos(\widehat{A}) =\frac {A^2 + B^2 + C^2}{2} \tag{IV}$$
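As a sanity check (my own addition, not part of the original question), identity (IV) can be verified numerically in Python for a few concrete triangles, computing each angle from the law of cosines:

```python
# Numeric check of (IV): for several triangles, compute the angles opposite
# each side from the law of cosines and verify
#   B*C*cos(A^) + A*C*cos(B^) + A*B*cos(C^) = (A^2 + B^2 + C^2) / 2.
import math

def check_identity(A, B, C, tol=1e-9):
    # angles opposite each side, from the law of cosines
    angA = math.acos((B*B + C*C - A*A) / (2*B*C))
    angB = math.acos((A*A + C*C - B*B) / (2*A*C))
    angC = math.acos((A*A + B*B - C*C) / (2*A*B))
    lhs = B*C*math.cos(angA) + A*C*math.cos(angB) + A*B*math.cos(angC)
    rhs = (A*A + B*B + C*C) / 2
    return abs(lhs - rhs) < tol

for sides in [(3, 4, 5), (2, 2, 3), (5, 6, 7)]:
    assert check_identity(*sides)
print("Identity (IV) holds for the sample triangles")
```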
An attractively symmetrical formula! – André Nicolas Jul 3 '11 at 14:28
@user6312: I agree. @Arjang: I'd prove this without appealing to the law of cosines. Indeed, in the usual trigonometric proof of the law of cosines you start by writing $c = a \cos{\beta} + b \cos{\alpha}$ and multiply this by $c$. Now do this for each side, add the three resulting equalities up and divide by two to get (IV). In order to get the law of cosines you write $a^2 + b^2 - c^2$ using these expressions and solve for $c^2$. – t.b. Jul 3 '11 at 18:01
Oddly enough, this looks (to me) quite related to my answer here math.stackexchange.com/questions/38930/… (this would be the case for N=3, for N>3 we have the inequality) – leonbloy Jul 3 '11 at 18:59
@Theo : tx for the tip, I will be using it for revision. – Arjang Jul 5 '11 at 10:02
Do you want to generalize $IV$ for quadrilaterals, pentagons, etc.? – Américo Tavares Aug 2 '11 at 14:40
REMARK: It now seems to me that the OP is looking for generalizations of $\text{IV}$-type relations valid for quadrilaterals, pentagons, etc., and not other triangle trigonometric relations, as I exemplified below.
Notation: Consider a triangle with angles $A$, $B$, $C$ and opposite sides $a$, $b$, $c$.
It is known that there exist only three distinct relations between the angles and the sides. For instance the system
$$\begin{eqnarray} \frac{a}{\sin A} &=&\frac{b}{\sin B} \\ \frac{a}{\sin A} &=&\frac{c}{\sin C}\tag{1} \\ A+B+C &=&\pi; \end{eqnarray}$$
or this equivalent system (your (I), (II), (III))
$$\begin{eqnarray} a^{2} &=&b^{2}+c^{2}-2bc\cos A \\ b^{2} &=&c^{2}+a^{2}-2ac\cos B \tag{2}\\ c^{2} &=&a^{2}+b^{2}-2ab\cos C \end{eqnarray}$$ are two of them. Another is $$\begin{eqnarray} \frac{\tan \frac{A+B}{2}}{\tan \frac{A-B}{2}} &=&\frac{a+b}{a-b} \\ \frac{\tan \frac{B+C}{2}}{\tan \frac{B-C}{2}} &=&\frac{b+c}{b-c} \tag{3}\\ \frac{\tan \frac{C+A}{2}}{\tan \frac{C-A}{2}} &=&\frac{c+a}{c-a}, \end{eqnarray}$$
from which one can derive
$$\frac{\tan \frac{A+B}{2}}{\tan \frac{A-B}{2}}+\frac{\tan \frac{B+C}{2}}{\tan \frac{B-C}{2}}+\frac{\tan \frac{C+A}{2}}{\tan \frac{C-A}{2}}=\frac{a+b}{a-b}+\frac{b+c}{b-c}+\frac{c+a}{c-a}.\tag{4}$$
Also from
$$\begin{eqnarray} \tan \frac{A-B}{2} &=&\frac{a-b}{a+b}\cot \frac{C}{2} \\ \tan \frac{B-C}{2} &=&\frac{b-c}{b+c}\cot \frac{A}{2}\tag{5} \\ \tan \frac{C-A}{2} &=&\frac{c-a}{c+a}\cot \frac{B}{2}, \end{eqnarray}$$
follows
$$\tan \frac{A-B}{2} \tan \frac{C}{2}+\tan \frac{B-C}{2} \tan \frac{A}{2}+\tan \frac{C-A}{2}\tan \frac{B}{2} =\frac{a-b}{a+b}+\frac{b-c}{b+c}+\frac{c-a}{c+a}.\tag{6}$$
As for a reference, I looked at my old trigonometry textbook Compêndio de Trigonometria (in Portuguese) by J. Jorge Calado. The system $\left( 3\right)$ is the law of tangents (Wikipedia), and $\left( 5\right)$ can be deduced from $\left( 3\right)$ by applying the relation $A+B+C=\pi$ to get $$\tan \frac{A+B}{2} =\tan \left( \frac{\pi }{2}-\frac{C}{2}\right) =\cot \frac{C}{2}=\left( \tan \frac{C}{2}\right) ^{-1}\tag{7}$$ and similarly for $\tan \frac{B+C}{2}$ and $\tan \frac{C+A}{2}$.
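For readers who want to check $(3)$ and $(5)$ numerically, here is a small Python sketch (my own addition) using the 3-4-5 right triangle:

```python
import math

# side lengths of a 3-4-5 right triangle (example data)
a, b, c = 3.0, 4.0, 5.0

# angles opposite each side, from the law of cosines
A = math.acos((b*b + c*c - a*a) / (2*b*c))
B = math.acos((a*a + c*c - b*b) / (2*a*c))
C = math.acos((a*a + b*b - c*c) / (2*a*b))

# first line of the law of tangents (3)
lhs = math.tan((A + B) / 2) / math.tan((A - B) / 2)
rhs = (a + b) / (a - b)
assert abs(lhs - rhs) < 1e-9

# first line of (5): tan((A-B)/2) = (a-b)/(a+b) * cot(C/2)
lhs5 = math.tan((A - B) / 2)
rhs5 = (a - b) / (a + b) / math.tan(C / 2)
assert abs(lhs5 - rhs5) < 1e-9
print("(3) and (5) check out for the 3-4-5 triangle")
```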
http://mathoverflow.net/questions/31740/collapsing-contractible-subsets-of-the-two-disk/31845 | Collapsing contractible subsets of the two-disk.
This question is quite specific, but it may admit answers in more general contexts.
Consider a subset $\Lambda \subset D^2$ where $D^2$ is the two dimensional disk.
We consider in $\Lambda$ an equivalence relation such that the equivalence class of each point is a contractible compact set.
Assume that the quotient map $p: \Lambda \to \Lambda / \sim$, which is continuous, has a Cantor set as its image.
The question is: if we extend the equivalence relation to the whole disk by letting the equivalence class of each point $x\in D^2 \backslash \Lambda$ be the singleton $\{x\}$, is the projection of the whole disk to the quotient homeomorphic to the disk?
If the answer is negative, what more can we require of the equivalence classes in $\Lambda$ in order to have the result?
Maybe the question is trivial or well known, but I could not find either a reference nor an answer by myself.
EDIT: In view of Franklin's answer. I am supposing that $\Lambda$ is contained in the interior of the disk (which I am assuming closed).
Do you know the Kline sphere characterization? I think there is another characterization only mentioning points which should easily apply. Maybe that removing a point changes the local fundamental group? – Ben Wieland Jul 14 '10 at 0:59
I don't understand your claim. Here I am not removing points, just collapsing some sets. For example, imagine each of them is an arc; collapsing one doesn't change the topology, but here we are collapsing uncountably many, and so I cannot see how it works. – rpotrie Jul 14 '10 at 10:15
If the quotient satisfied the hypotheses of the characterization, the characterization would conclude that the quotient is a disk. – Ben Wieland Jul 14 '10 at 17:41
I think you may find the Bing shrinking criterion useful.
First, assume $\Lambda$ itself is closed (hence compact) in $D$. More generally, the equivalence classes can form a so-called upper semi-continuous decomposition of your compact initial space $X$, namely one such that $X/\sim$ is Hausdorff (necessary anyway), making the quotient map $p$ closed.
Bing shrinking criterion: $p:X\to X/\sim$ is a uniform limit of homeomorphisms iff for any $\epsilon>0$, there is a homeomorphism $h_\epsilon$ of $X$ that sends any equivalence class into a set of diameter $<\epsilon$ which is moreover contained in the $\epsilon$-neighborhood of the original class.
And one proof (by Robert Edwards) is a beautifully simple application of Baire's theorem, which works for any usc decomposition of a compact metric space. See this 1979 Bourbaki talk (page 10) by Edwards himself.
Thanks for the answer and the reference! Just one question to finish the proof: it seems as if one could weaken the hypothesis that the equivalence classes are contractible; does it work if we assume that they are continua? – rpotrie Jul 14 '10 at 12:58
Not all continua will do. They must be cellular, i.e. decreasing intersections of cells (sets homeomorphic to a closed disk). Even in dimension 2, there are strange beasts like the pseudo-arc among cellular continua. See ams.org/mathscinet-getitem?mr=25733 and the reviews citing this. For modern accounts of continuum theory and decomposition spaces of manifolds, you can try the books by Nadler and Daverman with these titles, although Bing's collected works are also great. – BS. Jul 14 '10 at 13:56
Thanks a lot! Decreasing intersection of cells work perfect for me! Thanks again for the references. – rpotrie Jul 14 '10 at 15:50
Let me add that equivalence classes must be cellular if the quotient space is to be a manifold, which was the context of your question. – BS. Jul 14 '10 at 17:29
Imagine that the Cantor set is on one diameter and that $\Lambda$ consists of the vertical chords passing through the Cantor set. After collapsing you get a space that has some points whose removal disconnects it. Therefore it is not homeomorphic to the disc.
Thanks. Sorry, I must edit the question, I am thinking of $\Lambda$ contained in the interior of the disk. – rpotrie Jul 13 '10 at 19:15
The equivalence classes should be closed subsets of the disk to make the quotient Hausdorff.
You are right. I will correct that. – rpotrie Jul 13 '10 at 23:05
https://ask.sagemath.org/question/48037/lifting-modular-symbols-for-newform-of-level-35-at-p-5-7/ | # lifting modular symbols for newform of level 35 at p = 5, 7 edit
Let $f$ be the unique normalised eigenform in $S_2(\Gamma_0(35))$ of dimension $2$. It has split multiplicative reduction at $p = 5$ ($a_p = +1$) [and non-split multiplicative reduction at $p = 7$ ($a_p = -1$)]. The $p$-adic $L$-function should vanish to order $1$ at $1$ (because the associated abelian variety has rank $0$). I want to compute the valuation of its leading coefficient using Pollack-Stevens. To do so, I use the following code:
from sage.modular.pollack_stevens.space import ps_modsym_from_simple_modsym_space
A = ModularSymbols(35,2,1).cuspidal_submodule().new_subspace().decomposition()[1]
p = 5
prec = 2
phi = ps_modsym_from_simple_modsym_space(A)
ap = phi.Tq_eigenvalue(p,prec)
phi1,psi1 = phi.completions(p,prec)
phi1p = phi1.p_stabilize_and_lift(p,ap = psi1(ap), M = prec)
Unfortunately, the last command fails after a few seconds (also for $p = 7$) with a
RuntimeError: maximum recursion depth exceeded while calling a Python object
Is there a theoretical problem with computing the $L$-value or is there a problem with the implementation?
Here is a piece of code that pushes the computations as far as possible.
import traceback
from sage.modular.pollack_stevens.space import ps_modsym_from_simple_modsym_space
p = 5
prec = 2
precmore = 8
A = ModularSymbols(35, 2, 1).cuspidal_submodule().new_subspace().decomposition()[1]
phi = ps_modsym_from_simple_modsym_space(A)
# sage: phi
# Modular symbol of level 35 with values in
# Sym^0(Number Field in alpha with defining polynomial x^2 + x - 4)^2
ap = phi.Tq_eigenvalue(p, prec) # this is 1 in QQ
phi1, psi1 = phi.completions (p, precmore)
R = psi1.codomain()
eps = 1
k = 0   # assumed: phi has values in Sym^0, i.e. weight-2 forms, so k = 0
poly = PolynomialRing(R, 'x')([p ** (k + 1) * eps, -ap, 1])
v0, v1 = poly.roots( multiplicities=False )
if v0.valuation():
    v0, v1 = v1, v0
alpha = v0
try:
    phi1p = phi1.p_stabilize_and_lift(
        p,
        prec,
        ap=psi1(ap),
        alpha=alpha,
        check=False,
        new_base_ring=R)
except Exception:
    traceback.print_exc()
The above code delivers now the following error:
Traceback (most recent call last):
File "<ipython-input-945-35e0bb0e7888>", line 34, in <module>
, new_base_ring = R )
File "/usr/lib/python2.7/site-packages/sage/modular/pollack_stevens/modsym.py", line 1495, in p_stabilize_and_lift
new_base_ring=new_base_ring, check=check)
File "/usr/lib/python2.7/site-packages/sage/modular/pollack_stevens/modsym.py", line 1043, in p_stabilize
V = self.parent()._p_stabilize_parent_space(p, new_base_ring)
File "/usr/lib/python2.7/site-packages/sage/modular/pollack_stevens/space.py", line 557, in _p_stabilize_parent_space
raise ValueError("the level is not prime to p")
ValueError: the level is not prime to p
And looking inside the module with the error, /usr/lib/python2.7/site-packages/sage/modular/pollack_stevens/space.py there is an intentioned check that the prime does not divide the level:
N = self.level()
if N % p == 0:
raise ValueError("the level is not prime to p")
Explicitly, the road to the error is as follows. We submit phi1 to the method p_stabilize_and_lift. After some steps, the code lands in the method p_stabilize of the class PSModularSymbolElement_symk(PSModularSymbolElement) (the instance is phi1).
This method builds the space
V = self.parent()._p_stabilize_parent_space(p, new_base_ring)
and we land in the module space.py. To see the error, we type explicitly with our data:
sage: phi1
Modular symbol of level 35 with values in Sym^0 (5-adic Unramified Extension Field in a defined by x^2 + x - 4)^2
sage: phi1.parent()
Space of modular symbols for Congruence Subgroup Gamma0(35) with sign 1 and values in Sym^0 (5-adic Unramified Extension Field in a defined by x^2 + x - 4)^2
sage: phi1.parent().level()
35
Note: It is hard to say more; some details on the mathematical part are needed. (In my experience, finding programming errors becomes easy only after understanding the special cases. In the examples documented for the method giving the final error, all levels are coprime to the submitted primes.) That's all I have. Parts of the code above are adapted to the given example: guessing the new_base_ring is not good enough in the given situation, and the alpha also had to be declared explicitly. But the space construction was explicitly prohibited, and I decided to stop here. (There are too few comments and book/web references in the code, so I really have no further chance.)
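As an aside (my own addition, in plain Python rather than Sage): the unit root alpha found above via poly.roots() can also be obtained by hand with Hensel/Newton lifting, since for a_p = 1, p = 5 the polynomial f(x) = x^2 - a_p x + p satisfies f(1) ≡ 0 (mod 5) with f'(1) a 5-adic unit.

```python
# Plain-Python sketch: lift the root 1 (mod 5) of f(x) = x^2 - x + 5 to the
# unit root alpha modulo 5^8 by Newton/Hensel iteration.
p, ap, prec = 5, 1, 8
mod = p ** prec

def f(x):
    return x * x - ap * x + p

alpha = 1  # approximation of the unit root mod p
for _ in range(prec):
    deriv = (2 * alpha - ap) % mod          # f'(alpha), a unit mod 5
    alpha = (alpha - f(alpha) * pow(deriv, -1, mod)) % mod

assert f(alpha) % mod == 0   # alpha is a root mod 5^8
assert alpha % p == 1        # and a unit (valuation 0)
print("alpha =", alpha, "(mod 5^8)")
```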
I think one does not need to $p$-stabilize when p | N: http://math.bu.edu/people/rpollack/Pa...
However, calling only phi1.lift instead of phi1.p_stabilize_and_lift also fails.
(And sorry for my late response!)
( 2020-04-19 05:55:46 -0500 )edit
One solution would be to express the K-valued modular symbol (K = NumberField(x²+x-4)) as a K-linear combination of QQ-valued modular symbols and do the procedure for the latter ones. I'm working on this.
( 2020-04-23 02:07:44 -0500 )edit
The infinite recursion happens when trying to change the base ring of a polynomial (%debug is your friend)
914 poly = poly.change_ring(new_base_ring)
ipdb> p poly
(1 + O(5^2))*x^2 + (4 + 4*5 + O(5^2))*x + 5 + O(5^3)
ipdb> p new_base_ring
5-adic Field with capped relative precision 3
ipdb> p poly.parent()
Univariate Polynomial Ring in x over 5-adic Unramified Extension Field in a defined by x^2 + x - 4
I have no idea if this makes sense or not.
https://www.physicsforums.com/threads/weird-mathematica-problem.239109/ | # Weird mathematica problem
1. Jun 7, 2008
### ice109
does anyone know how to derive multivariate Taylor series in Mathematica? By default it is computed in a very strange way: "Series performs a series expansion successively with respect to each variable. The result in this case is a series in x, whose coefficients are series in y."
2. Jun 7, 2008
### Pere Callahan
What don't you like about this representation? If you want to have the terms ordered by the total degree of the monomials, there seems not to be a built-in way to do it. (But it shouldn't be too hard to come up with a work-around oneself ...)
3. Jun 7, 2008
### Crosson
Try using the function Collect:
Collect[Series[Exp[x y], {x, 0, 8}, {y, 0, 8}] , {x, y}]
If that's not what you want, then I fail to understand. I'm sure there must be a way to do what you are asking, however.
4. Jun 7, 2008
### Crosson
Now I understand the problem. That's strange that Mathematica does not have much material about multivariate series, so we can add it ourselves.
Code (Text):
multiVarSeries[f_, x_List, a_List, k_Integer] := Block[{n, F},
Evaluate[Fold[Sum,
Product[(1/n[i]!) (x[[i]] - a[[i]])^(n[i]), {i, 1, Length[x]}]
((Fold[D, F@@x, Table[{x[[i]], n[i]}, {i, 1, Length[x]}]]) /.
Table[x[[i]] -> a[[i]], {i, 1, Length[x]}]),
Table[{n[i], 1, k}, {i, 1, Length[x]}]]] /. F -> Function[x, f]]
Copy and paste that function into a new cell and then execute it with shift + enter. After that you can invoke the function. Here is a simple example:
multiVarSeries[Exp[x y], {x, y}, {0,0}, 2]
This says to expand the function Exp[x y] with respect to the variables x and y around the point {0,0} up to order 2 (in both variables, I don't let you specify the order separately for each individual variable). The output is of course:
$$\frac{x^3 y^3}{6}+\frac{x^2 y^2}{2}+x y$$
5. Jun 7, 2008
### ice109
I don't know much about Mathematica programming, so can you adjust your function so that it computes to a total order of n? E.g. for order 2 the xy, x^2 and y^2 terms are written out, but not x*y^2. And your example seems to show terms up to order 3.
PS: how can I learn to program Mathematica?
6. Jun 7, 2008
### Crosson
Yes, I understand your complaint with the function. One thing you can do is make n larger than you need and then use a filter to get only the terms you want. It is more work than it is worth for me to change the function to match that behavior.
There is no good way. If you are really wealthy, you can do workshops online with Wolfram Inc that will teach you how to program Mathematica. Otherwise you have to do what I did, which is to read the built-in help and practice for months.
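A sketch of the over-expand-then-filter idea in plain Python (my own addition; the thread itself works in Mathematica): represent a bivariate series as a dict mapping exponent pairs (i, j) to coefficients, then keep only monomials of total degree at most n.

```python
# Filter a bivariate series by *total* degree, as suggested above:
# expand generously first, then discard monomials with i + j > n.
from math import factorial

def exp_xy_series(max_deg):
    """Coefficients of exp(x*y): only (k, k) terms appear, with 1/k!."""
    return {(k, k): 1 / factorial(k) for k in range(max_deg + 1)}

def truncate_total_order(series, n):
    return {ij: c for ij, c in series.items() if sum(ij) <= n}

full = exp_xy_series(8)
trunc = truncate_total_order(full, 2)
# total order <= 2 keeps only the constant term and x*y
assert trunc == {(0, 0): 1.0, (1, 1): 1.0}
print(trunc)
```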
7. Jun 7, 2008
### ice109
Well, it doesn't work. I read on a group somewhere that Normal[Series[Exp[x*t + y*t], {t, 0, 2}]] /. t -> 1 would work, and it does, but it doesn't work for my function. Any ideas?
8. Jun 7, 2008
### Crosson
What doesn't work and what function are you trying to do this with?
9. Oct 20, 2008
### ice109
so here I am again with the same problem. How in the heck do I get Mathematica to give me this representation:
of a function expanded to second order in both of its arguments
http://www.gradesaver.com/textbooks/science/chemistry/chemistry-9th-edition/chapter-1-chemical-foundations-additional-exercises-page-39/97 | Chemistry 9th Edition
$T_f=56.5^{\circ}C$
As given, $5.25cm = 10.0^{\circ}F$, so for $1cm$: $1cm=\frac{10.0^{\circ}F}{5.25cm}$. Now, for $18.5cm$: $18.5cm=(18.5cm)\frac{10.0^{\circ}F}{5.25cm}=35.2^{\circ}F$. Thus $T_f=98.6+35.2=133.8^{\circ}F$. Finally, we convert this final temperature to degrees Celsius: $T_f=\frac{5}{9}(133.8-32)=56.5^{\circ}C$
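The arithmetic can be spot-checked in a few lines of Python. The numbers are taken from the worked solution above; the original problem statement (a thermometer whose column rises 5.25 cm per 10.0 °F) is assumed, not reproduced here:

```python
# Numbers taken from the worked solution; problem statement is assumed
deg_f_per_cm = 10.0 / 5.25          # degrees Fahrenheit per cm of column
t_f = 98.6 + 18.5 * deg_f_per_cm    # final reading in degrees F
t_c = (t_f - 32) * 5 / 9            # converted to degrees C
print(round(t_f, 1), round(t_c, 2))
```

Carrying full precision gives a final Celsius value a few hundredths of a degree from the solution's rounded 56.5 °C.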
https://www.physicsforums.com/threads/energy-required-to-lift-a-heavy-box.66871/ | # Homework Help: Energy Required to Lift a Heavy Box
1. Mar 11, 2005
As you are trying to move a heavy box of mass $$m$$, you realize that it is too heavy for you to lift by yourself. There is no one around to help, so you attach an ideal pulley to the box and a massless rope to the ceiling, which you wrap around the pulley. You pull up on the rope to lift the box.
A.) What is the magnitude $$F$$ of the upward force you must apply to the rope to start raising the box with constant velocity?
Express the magnitude of the force in terms of $$m$$, the mass of the box.
I think the answer should be mg/2
is this correct?
2. Mar 11, 2005
### Staff: Mentor
Yes. The pulley gives you a mechanical advantage, reducing the force (but not the energy!) needed to lift the box.
3. Mar 11, 2005
### James R
The pulley won't make it any easier. The force you'll need to apply with a single pulley is still F=mg.
...
Edit: Hmm... I think I might have mistaken the way the pulley is connected, in which case the force may be mg/2. A diagram would be nice!
4. Mar 12, 2005
### ramollari
With a single pulley the way you described it there's no way to shorten the distance over which the force is applied (it is the same as the distance over which the pulley rises). So the force is still mg. The only facilitation is that you apply the force downward.
5. Mar 12, 2005
### Staff: Mentor
Please reread the original post: The pulley is attached to the box, not the ceiling.
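To make the mg/2 answer — and the mentor's remark that the force, not the energy, is reduced — concrete, here is a small numeric sketch. The mass value is an arbitrary assumption for illustration:

```python
# Movable pulley attached to the box: two rope strands support it, so the
# hand supplies half the weight while pulling twice the distance.
m, g = 20.0, 9.81             # assumed mass (kg) and gravitational acceleration
n = 2                         # rope strands supporting the movable pulley
F = m * g / n                 # force for constant-velocity lifting
d_box, d_hand = 1.0, 2.0      # box rises 1 m while the hand pulls 2 m of rope

# Energy check: work done by the hand equals the gain in potential energy
assert abs(F * d_hand - m * g * d_box) < 1e-9
print(F)  # 98.1
```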
https://socratic.org/questions/draw-the-best-lewis-structure-for-cl-3-what-is-the-formal-charge-on-the-central- | Chemistry
Topics
# Draw the best Lewis structure for Cl_3^-. What is the formal charge on the central Cl atom?
Jun 20, 2016
This is an analogue of (and isostructural with) the linear ${I}_{3}^{-}$ ion.
#### Explanation:
$\text{No. of electrons} = 3 \times 7 + 1 = 22$ $\text{electrons}$
The central chlorine atom has 2 bonding pairs and 3 lone pairs, and there are thus 8 electrons associated with the central atom. The central chlorine thus bears a formal negative charge.
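The formal-charge arithmetic (valence electrons, minus nonbonding electrons, minus half the bonding electrons) is easy to make explicit; the lone-pair and bond counts below follow the Lewis structure described above:

```python
# Formal charge = valence e- - nonbonding e- - (bonding e-)/2
def formal_charge(valence, lone_pairs, bonds):
    return valence - 2 * lone_pairs - bonds

central = formal_charge(valence=7, lone_pairs=3, bonds=2)   # central Cl in Cl3^-
terminal = formal_charge(valence=7, lone_pairs=3, bonds=1)  # each terminal Cl
print(central, terminal)  # -1 0
```

The formal charges sum to −1, the overall charge of the ion.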
https://hal.archives-ouvertes.fr/hal-01112454 | # Local Single Ring Theorem
Abstract: The Single Ring Theorem, by Guionnet, Krishnapur and Zeitouni, describes the empirical eigenvalues distribution of a large generic matrix with prescribed singular values, i.e. an $N \times N$ matrix of the form $A=UTV$, with $U, V$ some independent Haar-distributed unitary matrices and $T$ a deterministic matrix whose singular values are the ones prescribed. In this text, we give a local version of this result, proving that it remains true at the microscopic scale $(\log N)^{-1/4}$. On our way to prove it, we prove a matrix subordination result for singular values of sums of non-Hermitian matrices, as Kargin did for Hermitian matrices. This also allows one to prove a local law for the singular values of the sum of two non-Hermitian matrices and a delocalization result for singular vectors.
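A small numerical illustration of the setup (not of the local law itself, which requires much more care): sample A = UTV with Haar unitaries and prescribed singular values, and verify the deterministic annulus bounds that the Single Ring Theorem refines. The choice of singular values is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200

def haar_unitary(n):
    # QR of a complex Ginibre matrix, with column phases fixed, is Haar-distributed
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

# Prescribed singular values: half 1, half 2 (an arbitrary illustrative choice)
s = np.repeat([1.0, 2.0], N // 2)
U, V = haar_unitary(N), haar_unitary(N)
A = U @ np.diag(s) @ V
eig = np.linalg.eigvals(A)

# Every eigenvalue modulus lies in [s_min, s_max] deterministically;
# the Single Ring Theorem says the spectrum fills an annulus inside this range.
assert np.all(np.abs(eig) >= s.min() - 1e-8)
assert np.all(np.abs(eig) <= s.max() + 1e-8)
```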
Keywords:
Document type:
Preprint, working paper
MAP5 2015-05. 33 pages, 2 figures. 2015
Domain:
https://hal.archives-ouvertes.fr/hal-01112454
Contributor: Florent Benaych-Georges <>
Submitted on: Tuesday, February 3, 2015 - 07:17:09
Last modified: Thursday, January 11, 2018 - 06:19:45
### Identifiers
• HAL Id : hal-01112454, version 1
• ARXIV : 1501.07840
### Citation
Florent Benaych-Georges. Local Single Ring Theorem. MAP5 2015-05. 33 pages, 2 figures. 2015. 〈hal-01112454〉
### Metrics
Record views
https://publications.csee.umbc.edu/publications/144 | ## Weaving the Web of Belief into the Semantic Web
Authors:
Book Title: submitted to WWW2004
Date:
Abstract: Collaboration, especially knowledge sharing, enables the advance of science as well as human society. In cyberspace, socializing the traditionally isolated intelligent software agents is an ultimate goal of the emerging Semantic Web activity. When making collaboration decisions, an agent usually needs explicitly represented facts about the agent world, such as "who knows what" and "who can do what". However, limited computation and storage resources forbid an agent from independently maintaining rational beliefs on all facts about the agent world. So the full picture of the agent world has to be distributed in the knowledge sharing social network of those resident agents. In this paper, we propose a generic representation framework for this distributed knowledge network, which is reminiscent of Quine's *web of belief*. The framework includes: the RDF-based *semantic relation model*, which is a cognitive data model for the agent world; a general OWL ontology, which facilitates representing agent properties (such as knowledge and capability) and finer inter-agent trust relations; and the practical issues of maintaining the web of belief for distributed inference.
Type: Misc
Google Scholar: search
Attachments:
74.pdf downloads: 956
http://www.lofoya.com/Solved/2166/given-five-concentric-squares-if-the-area-of-the-circle-inside-the | # Moderate Geometry & Mensuration Solved QuestionAptitude Discussion
Q. Given five concentric squares. If the area of the circle inside the smallest square is 77 square units and the distance between the corresponding corners of consecutive squares is 1.5 units, find the difference in the areas of the outermost and innermost squares.
A. 1254 sq units B. 1008 sq units C. 877 sq units ✔ D. 240 sq units
Solution:
Option(D) is correct
Here we see that the diameter of the circle is equal to the side of the innermost square, that is,
$\pi r^2=77$
$r=3.5\sqrt{2}$
$2r=7\sqrt{2}$
Then the diagonal of the square is 14 units.
Which means the diagonal of the fifth square would be 14 + 12 = 26 units.
Which means the side of the fifth square would be $\dfrac{26}{\sqrt{2}}$
Therefore, the area of the fifth square $= 338$ sq units.
Area of the first square $= 98$ sq units.
Hence, the difference would be $240 (=338-98)$ sq units.
Edit: Thank you anubhav goel for pointing out the mistake. Solution has been updated.
Edit2: Thank you Manoj, changed the length of side of fifth square from $26\sqrt{2}$ to $\frac{26}{\sqrt{2}}$ and hence the final answer
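As a cross-check, the whole computation fits in a few lines of Python (using the true value of π; the textbook's π ≈ 22/7 gives the round numbers quoted in the solution):

```python
import math

r = math.sqrt(77 / math.pi)             # radius of the inscribed circle
side_inner = 2 * r                      # diameter = side of the innermost square
diag_inner = side_inner * math.sqrt(2)  # ~ 14 (exactly 14 with pi = 22/7)
diag_outer = diag_inner + 4 * 2 * 1.5   # each of 4 steps adds 1.5 at both corners
area_inner = side_inner ** 2
area_outer = diag_outer ** 2 / 2        # area of a square from its diagonal
print(round(area_outer - area_inner))   # 240
```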
## (7) Comment(s)
Ashwin
()
How did that 12 units come about?
Which means the diagonal of the fifth sqaure would be 14+12 units = 26
Manoj
()
From side 7√2, the diagonal is 14, i.e. you have multiplied by √2. But from the diagonal of the fifth square, i.e. 26, you should have divided by √2 and not multiplied to get the side. So the side of the fifth square should have been 13√2. Correct me if I am wrong.
Deepak
()
You are right manoj and side of the fifth square should be $\dfrac{26}{\sqrt{2}}$ indeed. Changed the option choices and final answer.
Pawan Sharma
()
How could the diameter of the inner circle be 14 units?
If I am not wrong, the inner diagonal should be $7\sqrt{2}$.
Deepak
()
Hey Pawan,
Diameter of the CIRCLE is $7\sqrt{2}$ only. It's the DIAGONAL OF THE SQUARE which is 14 units.
To give you calculations for calculating the diagonal of the square, $D$,
$=\sqrt{(7\sqrt{2})^2+(7\sqrt{2})^2}$
$=\sqrt{(49\times 2)+(49\times 2)}$
$=\sqrt{196}$
$=\textbf{14 units}$
Anubhav Goel
()
I think your approach is wrong; shouldn't the diameter of the circle be equal to the side of the innermost square?
Deepak
()
You are right Anubhav, there was a typo in the question. Calculations were correct but some words got jumbled up. Updated the solution.
https://en.wikibooks.org/wiki/This_Quantum_World/Feynman_route/Schroedinger_at_last | This Quantum World/Feynman route/Schroedinger at last
Schrödinger at last
The Schrödinger equation is non-relativistic. We obtain the non-relativistic version of the electromagnetic action differential,
$dS=-mc^2\,dt\sqrt{1-v^2/c^2}-qV(t,\mathbf{r})\,dt+(q/c) \mathbf{A}(t,\mathbf{r})\cdot d\mathbf{r},$
by expanding the root and ignoring all but the first two terms:
$\sqrt{1-v^2/c^2}=1-{1\over2}{v^2\over c^2}-{1\over8}{v^4\over c^4}-\cdots\approx 1-{1\over2}{v^2\over c^2}.$
This is obviously justified if $v\ll c,$ which defines the non-relativistic regime.
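The expansion of the root quoted above is easy to verify symbolically; a minimal sketch with sympy, writing $u$ for $v^2/c^2$:

```python
import sympy as sp

u = sp.symbols('u', positive=True)  # u stands for v**2/c**2
expansion = sp.series(sp.sqrt(1 - u), u, 0, 3)
print(expansion.removeO())
```

The result agrees term by term with $1-\frac{1}{2}\frac{v^2}{c^2}-\frac{1}{8}\frac{v^4}{c^4}$ after substituting back $u=v^2/c^2$.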
Writing the potential part of $dS$ as $q\,[-V+\mathbf{A}(t,\mathbf{r})\cdot (\mathbf{v}/c)]\,dt$ makes it clear that in most non-relativistic situations the effects represented by the vector potential $\mathbf{A}$ are small compared to those represented by the scalar potential $V.$ If we ignore them (or assume that $\mathbf{A}$ vanishes), and if we include the charge $q$ in the definition of $V$ (or assume that $q=1$), we obtain
$S[\mathcal{C}]=-mc^2(t_B-t_A)+\int_\mathcal{C} dt\left[{\textstyle{m\over2}}v^2-V(t,\mathbf{r})\right]$
for the action associated with a spacetime path $\mathcal{C}.$
Because the first term is the same for all paths from $A$ to $B,$ it has no effect on the differences between the phases of the amplitudes associated with different paths. By dropping it we change neither the classical phenomena (inasmuch as the extremal path remains the same) nor the quantum phenomena (inasmuch as interference effects only depend on those differences). Thus
$\langle B|A\rangle=\int\mathcal{DC} e^{(i/\hbar)\int_\mathcal{C} dt[(m/2)v^2-V]}.$
We now introduce the so-called wave function $\psi(t,\mathbf{r})$ as the amplitude of finding our particle at $\mathbf{r}$ if the appropriate measurement is made at time $t.$ $\langle t,\mathbf{r}|t',\mathbf{r}'\rangle\,\psi(t',\mathbf{r}'),$ accordingly, is the amplitude of finding the particle first at $\mathbf{r}'$ (at time $t'$) and then at $\mathbf{r}$ (at time $t$). Integrating over $\mathbf{r},$ we obtain the amplitude of finding the particle at $\mathbf{r}$ (at time $t$), provided that Rule B applies. The wave function thus satisfies the equation
$\psi(t,\mathbf{r})=\int\!d^3r'\,\langle t,\mathbf{r}|t',\mathbf{r}'\rangle\,\psi(t',\mathbf{r}').$
We again simplify our task by pretending that space is one-dimensional. We further assume that $t$ and $t'$ differ by an infinitesimal interval $\epsilon.$ Since $\epsilon$ is infinitesimal, there is only one path leading from $x'$ to $x.$ We can therefore forget about the path integral except for a normalization factor $\mathcal{A}$ implicit in the integration measure $\mathcal{DC},$ and make the following substitutions:
$dt=\epsilon,\quad v=\frac{x-x'}{\epsilon},\quad V=V\left(t{+}\frac{\epsilon}{2},\frac{x{+}x'}{2}\right).$
This gives us
$\psi(t{+}\epsilon,x)=\mathcal{A}\int\!dx'\,e^{im(x{-}x')^2/2\hbar\epsilon}\, e^{-(i\epsilon/\hbar)V(t{+}\epsilon/2,(x{+}x')/2)}\,\psi(t,x').$
We obtain a further simplification if we introduce $\eta=x'-x$ and integrate over $\eta$ instead of $x'.$ (The integration "boundaries" $-\infty$ and $+\infty$ are the same for both $x'$ and $\eta.$) We now have that
$\psi(t+\epsilon,x)=\mathcal{A}\int\!d\eta\,e^{im\eta^2/2\hbar\epsilon}\, e^{-(i\epsilon/\hbar)V(t{+}\epsilon/2,x{+}\eta/2)}\,\psi(t,x{+}\eta).$
Since we are interested in the limit $\epsilon\rightarrow0,$ we expand all terms to first order in $\epsilon.$ To which power in $\eta$ should we expand? As $\eta$ increases, the phase $m\eta^2/2\hbar\epsilon$ increases at an infinite rate (in the limit $\epsilon\rightarrow0$) unless $\eta^2$ is of the same order as $\epsilon.$ In this limit, higher-order contributions to the integral cancel out. Thus the left-hand side expands to
$\psi(t+\epsilon,x)\approx\psi(t,x)+{\partial \psi\over\partial t}\epsilon,$
while $e^{-(i\epsilon/\hbar)V(t{+}\epsilon/2,x{+}\eta/2)}\,\psi(t,x{+}\eta)$ expands to
$\left[1-{i\epsilon\over\hbar}V(t,x)\right]\left[\psi(t,x)+{\partial \psi\over\partial x}\eta+\frac12{\partial^2\psi\over\partial x^2}\eta^2\right]= \left[1-{i\epsilon\over\hbar} V(t,x)\right]\!\psi(t,x)+{\partial \psi\over\partial x}\eta+ {\partial^2\psi\over\partial x^2}{\eta^2\over2}.$
The following integrals need to be evaluated:
$I_1=\int\!d\eta\, e^{im\eta^2/2\hbar\epsilon},\quad I_2=\int\!d\eta\, e^{im\eta^2/2\hbar\epsilon}\eta,\quad I_3=\int\!d\eta\, e^{im\eta^2/2\hbar\epsilon}\eta^2.$
The results are
$I_1=\sqrt{2\pi i\hbar\epsilon/m},\quad I_2=0,\quad I_3=\sqrt{2\pi\hbar^3\epsilon^3/im^3}.$
Putting Humpty Dumpty back together again yields
$\psi(t,x)+{\partial \psi\over\partial t}\epsilon=\mathcal{A}\sqrt{2\pi i\hbar\epsilon\over m} \left(1-{i\epsilon\over\hbar}V(t,x)\right)\psi(t,x) +{\mathcal{A}\over2}\sqrt{2\pi\hbar^3\epsilon^3\over im^3}{\partial^2\psi\over\partial x^2}.$
The factor of $\psi(t,x)$ must be the same on both sides, so $\mathcal{A}=\sqrt{m/2\pi i\hbar\epsilon},$ which reduces Humpty Dumpty to
${\partial \psi\over\partial t}\epsilon=-{i\epsilon\over\hbar}V\psi+ {i\hbar\epsilon\over2m}{\partial^2\psi\over\partial x^2}.$
Multiplying by $i\hbar/\epsilon$ and taking the limit $\epsilon\rightarrow0$ (which is trivial since $\epsilon$ has dropped out), we arrive at the Schrödinger equation for a particle with one degree of freedom subject to a potential $V(t,x)$:
$i\hbar{\partial \psi\over\partial t}=-{\hbar^2\over2m}{\partial^2\psi\over\partial x^2}+V\psi.$
Trumpets please! The transition to three dimensions is straightforward:
$i\hbar{\partial \psi\over\partial t}= -{\hbar^2\over2m}\left({\partial^2\psi\over\partial x^2}+ {\partial^2\psi\over\partial y^2}+{\partial^2\psi\over\partial z^2}\right)+V\psi.$
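The closed forms for $I_1$ (and hence $\mathcal{A}$) rest on the damped Gaussian formula $\int d\eta\, e^{-(\delta - ia)\eta^2}=\sqrt{\pi/(\delta-ia)}$ with $\delta\to0$ and $a=m/2\hbar\epsilon$, which gives $I_1=\sqrt{2\pi i\hbar\epsilon/m}$. A quick numerical check of that formula at arbitrary assumed values of $a$ and $\delta$:

```python
import numpy as np
from scipy.integrate import quad

# Damped version of I1: integral of exp(-(delta - i*a)*eta**2) over the real
# line equals sqrt(pi / (delta - i*a)); delta -> 0 recovers the Fresnel result.
a, delta = 1.0, 0.25  # arbitrary assumed oscillation rate and damping
f = lambda eta: np.exp(-(delta - 1j * a) * eta ** 2)
re, _ = quad(lambda x: f(x).real, -np.inf, np.inf)
im, _ = quad(lambda x: f(x).imag, -np.inf, np.inf)
numeric = re + 1j * im
exact = np.sqrt(np.pi / (delta - 1j * a))
assert abs(numeric - exact) < 1e-6
print(numeric)
```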
http://tex.stackexchange.com/questions/70202/expand-placeins-functonality-to-subsections | # Expand placeins functonality to subsections [closed]
I'm using the placeins package to ensure that floats stay within their respective sections (using \usepackage[section]{placeins}).
I'd like to expand this functionality to include subsections, without me having to manually add a \FloatBarrier each time.
I found this answer while looking online, however, I don't want to add the extra package to my distribution, as it will not get updated and might break in the future.
I fumbled around in the file, and found that this part of the .sty file does what I want:
\AtBeginDocument{%
\expandafter\renewcommand\expandafter\subsection\expandafter
{\expandafter\@fb@subsecFB\subsection}%
\newcommand\@fb@subsecFB{\FloatBarrier
\gdef\@fb@afterHHook{\@fb@topbarrier \gdef\@fb@afterHHook{}}}
\gdef\@fb@afterHHook{}
}
i.e. it appends \FloatBarrier to the subsection. However, placing this piece of code right after \usepackage[section]{placeins} doesn't seem to work.
I'm guessing there is more to LaTeX command syntax than what I've written and that I'm not getting it right, but my knowledge of LaTeX does not extend far enough to debug this on my own. Anyone willing to help?
Here's a MWE:
\documentclass{report}
\usepackage[section]{placeins}
\AtBeginDocument{%
\expandafter\renewcommand\expandafter\subsection\expandafter
{\expandafter\@fb@subsecFB\subsection}%
\newcommand\@fb@subsecFB{\FloatBarrier
\gdef\@fb@afterHHook{\@fb@topbarrier \gdef\@fb@afterHHook{}}}
\gdef\@fb@afterHHook{}
}
\begin{document}
\section{Section 1}
\begin{figure}
\centering
\Large A
\caption{First figure}
\end{figure}
\subsection{Subsection 1.1}
\begin{figure}
\centering
\Large B
\caption{Subsection 1.1 figure}
\end{figure}
\subsection{Subsection 1.1}
\begin{figure}
\centering
\Large C
\caption{Subsection 1.2 figure}
\end{figure}
\section{Section 2}
\end{document}
-
## closed as too localized by lockstep, Werner, Tom Bombadil, zeroth, yo' Sep 26 '12 at 20:14
You must put \makeatletter before the \AtBeginDocument so that LaTeX recognize that the @ (at) is meant to be a letter and so is a valid part of the command names. Revert this change by putting \makeatother after the definition. (I didn't check if the code itself works). – Ulrike Fischer Sep 5 '12 at 10:14
@UlrikeFischer yep that did the trick! Thanks – Kpantzas Sep 5 '12 at 12:01
https://motls.blogspot.com/2018/04/a-well-deserved-triumph-for-viktor-orban.html?m=1 | ## Sunday, April 08, 2018
### A well-deserved triumph for Viktor Orbán
In recent years, Viktor Orbán became much more than just a reliable leader of Hungary. He became an important representative of the European people – and an important defender of the Old Continent and the European civilization who isn't just a symbol or a talking head. He has dealt with nontrivial tasks and had to do lots of nontrivial things that have earned a lot of sympathy for him in other European countries besides Hungary, too.
I didn't have the slightest doubt that his Fidesz would win the today's parliamentary elections. Well, I did find it more likely that he would improve his result. And even though some unpleasant Hungarian trolls dared to disagree with me, the reality has confirmed my words.
The turnout was some record-breaking 68% and according to incomplete official results, after 85% of votes have been counted, Fidesz seems to have gained 49.3% of the votes (it's higher than the last time when the score was below 45% and they got 131 seats) which would translate to 133 out of 199 lawmakers. That seems barely enough for Fidesz to regain the constitutional, 2/3 majority – exactly 133 deputies is enough for that.
Nationalist party Jobbik is the frontrunner with almost 20% of the votes.
Well, even if Fidesz happens to drop below the 2/3 majority, it will almost certainly form a government without any coalition partner. Congratulations to Fidesz and Mr Orbán.
Because of the elections and a certain Mr Kvasz, I was finally led to study the Hungarian alphabet. It has the extra letters "Cs, Dz, Dzs, Gy, Ly, Ny, Sz, Ty, Zs" which will be written as "Č, Dz, Dž, Ď, Ľ, Ň, Š, Ť, Ž" when some Czech-style rationalization of Hungarian finally takes place.
Fidesz won almost all of Hungary and most of Budapest, too.
But I still misunderstood why "Kvasz" was pronounced "Kvas" and not "Kvash". The explanation is simple: for some reasons, "S" and "Sz" are interchanged! After all, that's why the simply spelled expletive "Soros" is pronounced either "Šoroš" or "Soroš", and I am not sure. ;-) So the Hungarian "S" is pronounced as "Š" or "Sh" and if you need to write the ordinary English or Czech sound "S", you have to write the Hungarian "Sz"! That's quite weird. So when we take over Hungary, we will surely switch these two letters and the current "S, Sz" will be replaced by "Š, S", respectively. ;-)
However, if the basics of the Western civilization keep on being threatened, I am ready to surrender and spell e.g. Klaus as Klausz if it is really needed. Czech ex-president Klaus came to support Orbán two months ago, he was accepted as a top leader, and Orbán promoted Klaus' book about Migration v2.0 on the picture (Hungarian translation).
https://www.pakmath.com/2020/07/26/matrices-and-determinants-ch03-fsc-first-year/ | # MATRICES AND DETERMINANTS Ch:03 Fsc First Year
MATRICES AND DETERMINANTS Ch: 03 FSc First Year consists of 200+ multiple-choice questions. Prepare these MCQs for the ECAT and examinations. These MCQs are also very helpful for preparing for the NTS and PPSC exams.
Q.1 If a matrix has m rows and n columns then its order is
(a) m + n
(b) n × n
(c) m × m
(d) m × n
d
Q.2 The order of a matrix [2 3 4] is
(a) 3 × 3
(b) 1 × 1
(c) 3 × 1
(d) 1 × 3
d
Q.3 The matrix having m rows and n columns with m $\neq$ n is known as
(a) rectangular matrix
(b) square matrix
(c) identity matrix
(d) scalar matrix
a
Q.4 The matrix having m rows and n columns with m = n is known as
(a) rectangular matrix
(b) square matrix
(c) identity matrix
(d) scalar matrix
b
Q.5 The order of a matrix $\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}$ is
(a) 3 × 2
(b) 2 × 2
(c) 3 × 1
(d) 2 × 3
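For readers checking such questions numerically: a matrix's order $m \times n$ corresponds exactly to numpy's shape tuple (rows, columns). For the matrix of Q.5:

```python
import numpy as np

# Order "m x n" = (rows, columns) = numpy's .shape
A = np.array([[1, 2, 3],
              [4, 5, 6]])
print(A.shape)  # (2, 3) -> order 2 × 3, i.e. option (d)
```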
http://math.stackexchange.com/questions/285310/name-of-trigonometric-identity?answertab=oldest | # Name of trigonometric identity
Is there a name of this trigonometric identity: $$\cos(a+b) \cos(a+c+b) \equiv \frac{1}{2} \left[\cos(c) + \cos(2a+2b+c) \right]$$
Basically, we are "changing" a product of cosines into a sum of cosines.
## 1 Answer
This is a result of angle sum and difference identities.
$\cos(a+b) = \cos(a)\cos(b)-\sin(a)\sin(b)$
$\cos(a-b) = \cos(a)\cos(b)+\sin(a)\sin(b)$
Therefore
$\cos(a)\cos(b) = \frac{1}{2}(\cos(a+b)+\cos(a-b))$
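Both the identity in the question and the product-to-sum formula it rests on can be spot-checked with sympy:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
lhs = sp.cos(a + b) * sp.cos(a + b + c)
rhs = (sp.cos(c) + sp.cos(2*a + 2*b + c)) / 2

# Spot-check the identity at arbitrary angles
vals = {a: 0.3, b: 0.7, c: 1.1}
assert abs(float((lhs - rhs).subs(vals))) < 1e-12
```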
https://link.springer.com/article/10.1007/s11587-021-00679-w

# Some obstacle problems in Musielak spaces
## Abstract
In this paper we use the penalization method to prove the existence of solutions for variational inequalities of Leray–Lions type in the setting of Musielak spaces, where the Musielak function does not satisfy the $$\Delta_2$$-condition. Here the right-hand side is in $$L^1$$.
Elarabi, R., Rhoudaf, M. Some obstacle problems in Musielak spaces. Ricerche mat (2022). https://doi.org/10.1007/s11587-021-00679-w
### Keywords
• Nonlinear elliptic problems
• Musielak–Sobolev spaces
• Variational inequalities
• Bilateral problems
https://www.infoq.com/articles/HadoopInputFormat/
# Uncovering mysteries of InputFormat: Providing better control for your Map Reduce execution.
As more companies adopt Hadoop, there is a greater variety in the types of problems for which Hadoop's framework is being utilized. As the various scenarios where Hadoop is applied grow, it becomes critical to control how and where map tasks are executed. One key to such control is custom InputFormat implementation.
The InputFormat class is one of the fundamental classes in the Hadoop Map Reduce framework. This class is responsible for defining two main things:
• Data splits
• Record reader

A data split is a fundamental concept in the Hadoop Map Reduce framework, defining both the size of an individual map task and its potential execution server. The record reader is responsible for actually reading records from the input file and submitting them (as key/value pairs) to the mapper. There are quite a few publications on how to implement a custom record reader (see, for example, [1]), but the information on splits is very sketchy. Here we will explain what a split is and how to implement custom splits for specific purposes.
## The Anatomy of a Split
Any split implementation extends the Apache base abstract class InputSplit, which defines a split's length and locations. The split length is the size of the split data (in bytes), while locations is the list of node names where the data for the split would be local. Split locations are a way for a scheduler to decide on which particular machine to execute this split. A very simplified[1] job tracker works as follows:
• Receive a heartbeat from one of the task trackers, reporting map slot availability.
• Find a queued-up split for which the available node is "local".
• Submit the split to the task tracker for execution.
Locality can mean different things depending on storage mechanisms and the overall execution strategy. In the case of HDFS, for example, a split typically corresponds to a physical data block size and locations is a set of machines (with the set size defined by a replication factor) where this block is physically located. This is how FileInputFormat calculates splits.
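To make the block-based computation concrete, the split size used by FileInputFormat is the file's block size clamped between the configured minimum and maximum split sizes. The helper below mirrors that formula (as used by the Hadoop versions contemporary with this article) in self-contained Java, without Hadoop dependencies:

```java
public class SplitSizeDemo {
    // Mirrors FileInputFormat.computeSplitSize(blockSize, minSize, maxSize):
    // the block size, clamped into [minSize, maxSize].
    public static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024;   // a typical 128 MB HDFS block
        // with default min/max, the split size equals the block size
        System.out.println(computeSplitSize(blockSize, 1L, Long.MAX_VALUE));    // 134217728
        // capping maxSize produces more, smaller splits
        System.out.println(computeSplitSize(blockSize, 1L, 32L * 1024 * 1024)); // 33554432
    }
}
```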
A different approach was taken by HBase implementers. There, a split corresponds to a set of table keys belonging to a table region and location is a machine where a region server is running.
## Compute-Intensive Applications
A special class of Map Reduce applications is compute-intensive applications. This class is characterized by the fact that execution of the Mapper.map() function takes significantly longer than the data access time, by at least an order of magnitude. Technically, such applications can still use the "standard" input format implementation; however, this creates a problem by overwhelming the data nodes where the data resides while leaving other nodes in the cluster underutilized (Figure 1).
Figure 1: Nodes utilization in the case data locality.
Figure 1 shows that using "standard" data locality for compute-intensive applications leads to huge variations in node utilization: overutilization of some nodes (red) and underutilization of others (yellow and light green). Based on this, it becomes apparent that for compute-intensive applications the notion of "locality" has to be rethought. In this case, "locality" means an even distribution of map execution among all available nodes, i.e., maximum utilization of the compute capabilities of the cluster's machines.
## Changing "locality" using custom InputFormat
Assuming that the source data is available in the form of a sequence file, a simple ComputeIntensiveSequenceFileInputFormat class (Listing 1) implements the generation of splits which will be evenly distributed across all servers of the cluster.
```java
package com.navteq.fr.mapReduce.InputFormat;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.ClusterStatus;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;

public class ComputeIntensiveSequenceFileInputFormat<K, V> extends SequenceFileInputFormat<K, V> {

    private static final double SPLIT_SLOP = 1.1;   // 10% slop
    static final String NUM_INPUT_FILES = "mapreduce.input.num.files";

    /**
     * Generate the list of files and make them into FileSplits.
     */
    @Override
    public List<InputSplit> getSplits(JobContext job) throws IOException {
        long minSize = Math.max(getFormatMinSplitSize(), getMinSplitSize(job));
        long maxSize = getMaxSplitSize(job);
        // get servers in the cluster
        String[] servers = getActiveServersList(job);
        if (servers == null)
            return null;
        // generate splits
        List<InputSplit> splits = new ArrayList<InputSplit>();
        List<FileStatus> files = listStatus(job);
        int currentServer = 0;
        for (FileStatus file : files) {
            Path path = file.getPath();
            long length = file.getLen();
            if ((length != 0) && isSplitable(job, path)) {
                long blockSize = file.getBlockSize();
                long splitSize = computeSplitSize(blockSize, minSize, maxSize);
                long bytesRemaining = length;
                while (((double) bytesRemaining) / splitSize > SPLIT_SLOP) {
                    splits.add(new FileSplit(path, length - bytesRemaining, splitSize,
                            new String[] { servers[currentServer] }));
                    currentServer = getNextServer(currentServer, servers.length);
                    bytesRemaining -= splitSize;
                }
                if (bytesRemaining != 0) {
                    splits.add(new FileSplit(path, length - bytesRemaining, bytesRemaining,
                            new String[] { servers[currentServer] }));
                    currentServer = getNextServer(currentServer, servers.length);
                }
            } else if (length != 0) {
                splits.add(new FileSplit(path, 0, length, new String[] { servers[currentServer] }));
                currentServer = getNextServer(currentServer, servers.length);
            } else {
                // Create empty hosts array for zero length files
                splits.add(new FileSplit(path, 0, length, new String[0]));
            }
        }
        // Save the number of input files in the job-conf
        job.getConfiguration().setLong(NUM_INPUT_FILES, files.size());
        return splits;
    }

    private String[] getActiveServersList(JobContext context) {
        String[] servers = null;
        try {
            JobClient jc = new JobClient((JobConf) context.getConfiguration());
            ClusterStatus status = jc.getClusterStatus(true);
            Collection<String> atc = status.getActiveTrackerNames();
            servers = new String[atc.size()];
            int s = 0;
            for (String serverInfo : atc) {
                // tracker names look like "tracker_host:port"; extract the host
                StringTokenizer st = new StringTokenizer(serverInfo, ":");
                String trackerName = st.nextToken();
                StringTokenizer st1 = new StringTokenizer(trackerName, "_");
                st1.nextToken();
                servers[s++] = st1.nextToken();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return servers;
    }

    private static int getNextServer(int current, int max) {
        current++;
        if (current >= max)
            current = 0;
        return current;
    }
}
```
Listing 1: ComputeIntensiveSequenceFileInputFormat class
This class extends SequenceFileInputFormat and overrides the getSplits() method, calculating splits exactly the same way as FileInputFormat but assigning each split's "locality" so as to leverage all available servers in the cluster. The method relies on two supporting methods:
• getActiveServersList(), which computes an array of the names of the servers currently active in the cluster.
• getNextServer(), which returns the next server in the servers array, wrapping around when the array is exhausted.
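The round-robin logic of getNextServer() guarantees that the split counts per server differ by at most one. A self-contained sketch (hypothetical server and split counts, no Hadoop dependencies):

```java
import java.util.Arrays;

public class RoundRobinDemo {
    // same wrap-around logic as getNextServer() in Listing 1
    public static int getNextServer(int current, int max) {
        current++;
        if (current >= max)
            current = 0;
        return current;
    }

    public static void main(String[] args) {
        int servers = 5, splits = 17;       // hypothetical cluster and job sizes
        int[] counts = new int[servers];
        int current = 0;
        for (int i = 0; i < splits; i++) {
            counts[current]++;              // assign split i to the current server
            current = getNextServer(current, servers);
        }
        System.out.println(Arrays.toString(counts));   // [4, 4, 3, 3, 3]
    }
}
```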
Although the implementation in Listing 1 evenly distributes the execution of maps among all servers of the cluster, it completely ignores the actual locality of the data. A slightly better implementation of the getSplits() method (Listing 2) tries to combine both strategies: it places as many map tasks as possible local to their data and then balances the rest across the cluster.[2]
```java
public List<InputSplit> getSplits(JobContext job) throws IOException {
    // get splits
    List<InputSplit> originalSplits = super.getSplits(job);
    // Get active servers
    String[] servers = getActiveServersList(job);
    if (servers == null)
        return null;
    // reassign splits to active servers
    List<InputSplit> splits = new ArrayList<InputSplit>(originalSplits.size());
    int numSplits = originalSplits.size();
    int currentServer = 0;
    for (int i = 0; i < numSplits; i++, currentServer = getNextServer(currentServer, servers.length)) {
        String server = servers[currentServer];   // Current server
        boolean replaced = false;
        // For every remaining split
        for (InputSplit split : originalSplits) {
            FileSplit fs = (FileSplit) split;
            // For every split location
            for (String l : fs.getLocations()) {
                // If this split is local to the server
                if (l.equals(server)) {
                    // Fix split location
                    splits.add(new FileSplit(fs.getPath(), fs.getStart(), fs.getLength(),
                            new String[] { server }));
                    originalSplits.remove(split);
                    replaced = true;
                    break;
                }
            }
            if (replaced)
                break;
        }
        // If no local splits are found for this server
        if (!replaced) {
            // Assign the first available split to it
            FileSplit fs = (FileSplit) originalSplits.get(0);
            splits.add(new FileSplit(fs.getPath(), fs.getStart(), fs.getLength(),
                    new String[] { server }));
            originalSplits.remove(0);
        }
    }
    return splits;
}
```
Listing 2: Optimized getSplits method
In this implementation, we first use the superclass (FileInputFormat) to get splits with locations calculated to ensure data locality. We then compute the list of available servers and, for every server, try to assign it a split whose data is local to that server.
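The assignment strategy of Listing 2 can be sketched without any Hadoop dependencies: model each split as its set of preferred hosts, walk the servers round-robin, and give each server a local split when one remains, otherwise the first available one. The class and node names below are hypothetical:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class LocalityFirstAssign {
    // Walk the servers round-robin; each server takes a split local to it
    // when one remains, otherwise the first available split.
    public static Map<String, List<Integer>> assign(List<Set<String>> splitHosts, List<String> servers) {
        Map<String, List<Integer>> plan = new LinkedHashMap<>();
        for (String s : servers)
            plan.put(s, new ArrayList<Integer>());
        List<Integer> remaining = new ArrayList<>();
        for (int i = 0; i < splitHosts.size(); i++)
            remaining.add(i);
        int current = 0;
        while (!remaining.isEmpty()) {
            String server = servers.get(current);
            int chosen = -1;
            for (int idx : remaining) {
                if (splitHosts.get(idx).contains(server)) {   // local split found
                    chosen = idx;
                    break;
                }
            }
            if (chosen == -1)                                 // no local split left
                chosen = remaining.get(0);
            remaining.remove(Integer.valueOf(chosen));
            plan.get(server).add(chosen);
            current = (current + 1) % servers.size();
        }
        return plan;
    }

    public static void main(String[] args) {
        List<Set<String>> splitHosts = Arrays.asList(
                Collections.singleton("nodeA"),
                Collections.singleton("nodeB"),
                Collections.singleton("nodeA"),
                Collections.singleton("nodeC"));
        System.out.println(assign(splitHosts, Arrays.asList("nodeA", "nodeB", "nodeC")));
        // {nodeA=[0, 2], nodeB=[1], nodeC=[3]}
    }
}
```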
## Delayed fair scheduler
Although the code (Listing 1, Listing 2) calculates split locality correctly, when we tried to run it on our Hadoop cluster we saw that it was not even close to producing an even distribution between servers. The problem that we observed is well described in [2], which also describes a solution: the delayed fair scheduler.
Assuming that the fair scheduler is already set up, the following block should be added to the mapred-site.xml file in order to enable the delayed scheduler[3]:

```xml
<property>
  <name>mapred.fairscheduler.locality.delay</name>
  <value>360000000</value>
</property>
```
With the delayed fair scheduler in place, execution of our job leverages the full cluster (Figure 2). Moreover, according to our experiments, execution time in this case is about 30% less than with the "data locality" approach.
Figure 2: Nodes utilization in the case of execution locality
The computational job used for testing ran with 96 splits and mapper tasks. The test cluster has 19 data nodes with 8 mapper slots per node, giving the cluster 152 available slots. With only 96 map tasks, the job does not fully utilize all of the slots in the cluster.
Both Ganglia screenshots are of our test cluster, where the first 3 nodes are our control nodes and the fourth node is our edge node used to launch the job. The graphs show CPU/machine load. In Figure 1 several nodes are heavily utilized (shown in red) while the rest of the cluster is underutilized. In Figure 2 we have a more balanced distribution, yet the cluster is still not fully utilized. The job used in testing also allows one to run multiple threads per mapper. This increases the load on the CPU while decreasing the overall time spent in each Mapper.map() iteration. As shown in Figure 3, by increasing the number of threads we are able to better utilize the cluster resources and further reduce the time it takes for the job to complete. By changing the locality of the jobs, we are able to better utilize the cluster without sacrificing performance due to remote job data.
Figure 3: Nodes utilization in the case of execution locality with multithreaded Map Jobs
Even under heavy machine CPU loads, it was still possible to allow other disk I/O-intensive jobs to run in the open slots, recognizing that we would have a slight degradation in performance.
## Custom Splits
The approach described in this article works really well for large files, but for small files there are not enough splits to leverage the many machines available in the cluster. One possible solution is to make the blocks smaller, but this puts more strain (memory requirements) on the cluster's name node. A better approach is to modify the code (Listing 1) to use a custom block size instead of the file's block size. This approach allows one to produce the desired number of splits regardless of the actual file size.
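One way to realize this, sketched here under the assumption that a per-file target split count is configured (this helper is not from the article's code), is to derive the "block size" from the desired number of splits:

```java
public class CustomSplitSizeDemo {
    // Hypothetical helper: choose a per-file "block size" so that a file of any
    // length yields at most the desired number of splits:
    // splitSize = ceil(fileLength / desiredSplits).
    public static long splitSizeFor(long fileLength, int desiredSplits) {
        return Math.max(1L, (fileLength + desiredSplits - 1) / desiredSplits);
    }

    public static void main(String[] args) {
        long smallFile = 1000L;   // bytes; far smaller than an HDFS block
        System.out.println(splitSizeFor(smallFile, 96));   // 11
        System.out.println(splitSizeFor(10L, 96));         // 1
    }
}
```

Passing the resulting value in place of the file block size in Listing 1's computeSplitSize() call would then spread even a tiny file across many map tasks.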
## Conclusion
In this article we have shown how to leverage custom InputFormats to have tighter control on how the map tasks in your Map Reduce jobs are distributed among available servers. Such control is especially important for a special class of applications - compute intensive applications, which leverage Hadoop Map Reduce as a generic parallel execution framework.
Boris Lublinsky is principal architect at NAVTEQ, where he is working on defining the architecture vision for large data management and processing and SOA, and implementing various NAVTEQ projects. He is also an SOA editor for InfoQ and a participant in the SOA RA working group in OASIS. Boris is an author and frequent speaker; his most recent book is "Applied SOA".
Michael Segel has spent the past 20+ years working with customers identifying and solving their business problems. Michael has worked in multiple roles, in multiple industries. He is an independent consultant who is always looking to solve any challenging problems. Michael has a Software Engineering degree from the Ohio State University.
### References
1. Boris Lublinsky, Mike Segel. Custom XML Records Reader.
2. Matei Zaharia, Dhruba Borthakur, Joydeep Sen Sarma, Khaled Elmeleegy, Scott Shenker, Ion Stoica. Delay Scheduling: A Simple Technique for Achieving Locality and Fairness in Cluster Scheduling.
[1] This is a gross oversimplification to explain the basic mechanics. A real scheduler algorithm is significantly more complex, taking into consideration many more parameters than just split locations.
[2] While we present this as an option, if the time spent in the Mapper.map() method is an order of magnitude or more than the time it takes to remotely access the data, there will be no performance improvement over the code presented in Listing 1. However, it might somewhat improve network utilization.
[3] Please note that the delay is in ms, and that after changing the value, you need to restart the Job Tracker.
##### Nice Article
by Paulo Suzart
Nice article. It clears up a lot of important details. Another point I think worth writing about is patterns and use cases for putting data into HDFS.
What is the best approach? Small files created directly by the producer application in HDFS? A job for extracting data from a database?
It would be great to have an article talking about these kinds of situations.
Congratulations!
##### Is it really needed to override getSplits() to implement fairness?
by Sorin Ciolofan
Hi!
Since you have used the fair scheduler and have set the delay time for it, why is it necessary to override getSplits() as you did in Listing 1 and Listing 2? I understand that you did that to implement your own version of fairness in order to evenly distribute the jobs on all available servers, but isn't this fairness provided implicitly by the fair scheduler?
##### Great Article!! But am stuck at a point.
by Swapnil Arsh
I find the article very interesting and very relevant to what I am trying to achieve.
But while implementing the same, status.getActiveTrackerNames() is returning NULL, which implies no task trackers are running. What could be the reason for this, and how could I cope with this problem?
Thanks.
https://www.aimsciences.org/article/doi/10.3934/dcds.2012.32.2453

# American Institute of Mathematical Sciences
July 2012, 32(7): 2453-2484. doi: 10.3934/dcds.2012.32.2453
## The transfer operator for the Hecke triangle groups
1. Lower Saxony Professorship, Institute for Theoretical Physics, TU Clausthal, D-38678 Clausthal-Zellerfeld, Germany
2. Department of Mathematics and Computer Science, FernUniversität in Hagen, D-58084 Hagen, Germany
3. Department of Mathematics, TU Darmstadt, D-64289 Darmstadt, Germany
Received December 2009 Revised March 2010 Published March 2012
In this paper we extend the transfer operator approach to Selberg's zeta function for cofinite Fuchsian groups to the Hecke triangle groups $G_q,\, q=3,4,\ldots$, which are non-arithmetic for $q\not= 3,4,6$. For this we make use of a Poincaré map for the geodesic flow on the corresponding Hecke surfaces, which was constructed in [13] and is closely related to the natural extension of the generating map for the so-called Hurwitz-Nakada continued fractions. We also derive functional equations for the eigenfunctions of the transfer operator, which for eigenvalue $\rho = 1$ are expected to be closely related to the period functions of Lewis and Zagier for these Hecke triangle groups.
Citation: Dieter Mayer, Tobias Mühlenbruch, Fredrik Strömberg. The transfer operator for the Hecke triangle groups. Discrete and Continuous Dynamical Systems, 2012, 32 (7) : 2453-2484. doi: 10.3934/dcds.2012.32.2453
##### References:
[1] R. W. Bruggeman, J. Lewis and D. Zagier, Period functions for Maaß wave forms. II: Cohomology, preprint.
[2] R. W. Bruggeman and T. Mühlenbruch, Eigenfunctions of transfer operators and cohomology, Journal of Number Theory, 129 (2009), 158-181. doi: 10.1016/j.jnt.2008.08.003.
[3] C.-H. Chang and D. Mayer, Thermodynamic formalism and Selberg's zeta function for modular groups, Regul. Chaotic Dyn., 5 (2000), 281-312. doi: 10.1070/rd2000v005n03ABEH000150.
[4] C.-H. Chang and D. Mayer, Eigenfunctions of the transfer operators and the period functions for modular groups, in "Dynamical, Spectral, and Arithmetic Zeta Functions" (eds. M. L. Lapidus and M. Van Frankenhuysen) (San Antonio, TX, 1999), Contemp. Math., 290, Amer. Math. Soc., Providence, RI, (2001), 1-40.
[5] M. Fraczek, D. Mayer and T. Mühlenbruch, A realization of the Hecke algebra on the space of period functions for $\Gamma_0(n)$, J. Reine Angew. Math., 603 (2007), 133-163. doi: 10.1515/CRELLE.2007.014.
[6] D. Hejhal, "The Selberg Trace Formula for $\mathrm{PSL}(2,\mathbb{R})$," Vol. 2, Lecture Notes in Mathematics, 1001, Springer-Verlag, Berlin, 1983.
[7] J. Hilgert, D. Mayer and H. Movasati, Transfer operators for $\Gamma_0(n)$ and the Hecke operators for the period functions of $\mathrm{PSL}(2,\mathbb{Z})$, Math. Proc. Camb. Phil. Soc., 139 (2005), 81-116. doi: 10.1017/S0305004105008480.
[8] A. Hurwitz, Über eine besondere Art der Kettenbruch-Entwickelung reeller Grössen, Acta Math., 12 (1889), 367-405. doi: 10.1007/BF02391885.
[9] J. Lewis and D. Zagier, Period functions for Maass wave forms. I, Ann. of Math., 153 (2001), 191-258. doi: 10.2307/2661374.
[10] D. Mayer, On the thermodynamic formalism for the Gauss map, Comm. Math. Phys., 130 (1990), 311-333. doi: 10.1007/BF02473355.
[11] D. Mayer, On composition operators on Banach spaces of holomorphic functions, Journal of Functional Analysis, 35 (1980), 191-206. doi: 10.1016/0022-1236(80)90004-X.
[12] D. Mayer and T. Mühlenbruch, Nearest $\lambda_q$-multiple fractions, in "Spectrum and Dynamics" (eds. D. Jakobson, S. Nonnenmacher and I. Polterovich), CRM Proceedings and Lecture Notes, 52, AMS, Providence, RI, (2010), 147-184. Available from: arXiv:0902.3953.
[13] D. Mayer and F. Strömberg, Symbolic dynamics for the geodesic flow on Hecke surfaces, Journal of Modern Dynamics, 2 (2008), 581-627. doi: 10.3934/jmd.2008.2.581.
[14] H. Nakada, Continued fractions, geodesic flows and Ford circles, in "Algorithms, Fractals, and Dynamics" (ed. T. Takahashi) (Okayama/Kyoto, 1992), Plenum, New York, (1995), 179-191.
[15] R. Phillips and P. Sarnak, On cusp forms for co-finite subgroups of $\mathrm{PSL}(2,\mathbb{R})$, Invent. Math., 80 (1985), 339-364. doi: 10.1007/BF01388610.
[16] D. Rosen, A class of continued fractions associated with certain properly discontinuous groups, Duke Math. J., 21 (1954), 549-563. doi: 10.1215/S0012-7094-54-02154-7.
[17] D. Rosen and T. A. Schmidt, Hecke groups and continued fractions, Bull. Austral. Math. Soc., 46 (1992), 459-474. doi: 10.1017/S0004972700012120.
[18] D. Ruelle, "Dynamical Zeta Functions for Piecewise Monotone Maps of the Interval," CRM Monograph Series, 4, AMS, Providence, RI, 1994.
[19] T. A. Schmidt and M. Sheingorn, Length spectra of the Hecke triangle groups, Mathematische Zeitschrift, 220 (1995), 369-397. doi: 10.1007/BF02572621.
[20] A. Selberg, Remarks on the distribution of poles of Eisenstein series, in "Festschrift in Honor of I. I. Piatetski-Shapiro on the Occasion of his Sixtieth Birthday," Part II (Ramat Aviv, 1989), Israel Math. Conf. Proc., 3, Weizmann, Jerusalem, (1990), 251-278.
[21] F. Strömberg, Computation of Selberg's zeta functions on Hecke triangle groups, arXiv:0804.4837.
[1] Élise Janvresse, Benoît Rittaud, Thierry de la Rue. Dynamics of $\lambda$-continued fractions and $\beta$-shifts. Discrete and Continuous Dynamical Systems, 2013, 33 (4) : 1477-1498. doi: 10.3934/dcds.2013.33.1477 [2] Frédéric Naud, Anke Pohl, Louis Soares. Fractal Weyl bounds and Hecke triangle groups. Electronic Research Announcements, 2019, 26: 24-35. doi: 10.3934/era.2019.26.003 [3] Doug Hensley. Continued fractions, Cantor sets, Hausdorff dimension, and transfer operators and their analytic extension. Discrete and Continuous Dynamical Systems, 2012, 32 (7) : 2417-2436. doi: 10.3934/dcds.2012.32.2417 [4] Laura Luzzi, Stefano Marmi. On the entropy of Japanese continued fractions. Discrete and Continuous Dynamical Systems, 2008, 20 (3) : 673-711. doi: 10.3934/dcds.2008.20.673 [5] Pierre Arnoux, Thomas A. Schmidt. Commensurable continued fractions. Discrete and Continuous Dynamical Systems, 2014, 34 (11) : 4389-4418. doi: 10.3934/dcds.2014.34.4389 [6] Katsukuni Nakagawa. Compactness of transfer operators and spectral representation of Ruelle zeta functions for super-continuous functions. Discrete and Continuous Dynamical Systems, 2020, 40 (11) : 6331-6350. doi: 10.3934/dcds.2020282 [7] Kanji Inui, Hikaru Okada, Hiroki Sumi. The Hausdorff dimension function of the family of conformal iterated function systems of generalized complex continued fractions. Discrete and Continuous Dynamical Systems, 2020, 40 (2) : 753-766. doi: 10.3934/dcds.2020060 [8] J. William Hoffman. Remarks on the zeta function of a graph. Conference Publications, 2003, 2003 (Special) : 413-422. doi: 10.3934/proc.2003.2003.413 [9] Claudio Bonanno, Carlo Carminati, Stefano Isola, Giulio Tiozzo. Dynamics of continued fractions and kneading sequences of unimodal maps. Discrete and Continuous Dynamical Systems, 2013, 33 (4) : 1313-1332. doi: 10.3934/dcds.2013.33.1313 [10] Frédéric Naud. The Ruelle spectrum of generic transfer operators. 
Discrete and Continuous Dynamical Systems, 2012, 32 (7) : 2521-2531. doi: 10.3934/dcds.2012.32.2521 [11] Lulu Fang, Min Wu. Hausdorff dimension of certain sets arising in Engel continued fractions. Discrete and Continuous Dynamical Systems, 2018, 38 (5) : 2375-2393. doi: 10.3934/dcds.2018098 [12] Marc Kessböhmer, Bernd O. Stratmann. On the asymptotic behaviour of the Lebesgue measure of sum-level sets for continued fractions. Discrete and Continuous Dynamical Systems, 2012, 32 (7) : 2437-2451. doi: 10.3934/dcds.2012.32.2437 [13] Patricia Domínguez, Peter Makienko, Guillermo Sienra. Ruelle operator and transcendental entire maps. Discrete and Continuous Dynamical Systems, 2005, 12 (4) : 773-789. doi: 10.3934/dcds.2005.12.773 [14] Vesselin Petkov, Luchezar Stoyanov. Ruelle transfer operators with two complex parameters and applications. Discrete and Continuous Dynamical Systems, 2016, 36 (11) : 6413-6451. doi: 10.3934/dcds.2016077 [15] Harman Kaur, Meenakshi Rana. Congruences for sixth order mock theta functions $\lambda(q)$ and $\rho(q)$. Electronic Research Archive, 2021, 29 (6) : 4257-4268. doi: 10.3934/era.2021084 [16] Leandro Cioletti, Artur O. Lopes, Manuel Stadlbauer. Ruelle operator for continuous potentials and DLR-Gibbs measures. Discrete and Continuous Dynamical Systems, 2020, 40 (8) : 4625-4652. doi: 10.3934/dcds.2020195 [17] Leandro Cioletti, Artur O. Lopes. Interactions, specifications, DLR probabilities and the Ruelle operator in the one-dimensional lattice. Discrete and Continuous Dynamical Systems, 2017, 37 (12) : 6139-6152. doi: 10.3934/dcds.2017264 [18] Mark F. Demers, Hong-Kun Zhang. Spectral analysis of the transfer operator for the Lorentz gas. Journal of Modern Dynamics, 2011, 5 (4) : 665-709. doi: 10.3934/jmd.2011.5.665 [19] Huangsheng Yu, Feifei Xie, Dianhua Wu, Hengming Zhao. Further results on optimal $(n, \{3, 4, 5\}, \Lambda_a, 1, Q)$-OOCs. Advances in Mathematics of Communications, 2019, 13 (2) : 297-312. 
doi: 10.3934/amc.2019020 [20] Yijing Sun. Estimates for extremal values of $-\Delta u= h(x) u^{q}+\lambda W(x) u^{p}$. Communications on Pure and Applied Analysis, 2010, 9 (3) : 751-760. doi: 10.3934/cpaa.2010.9.751
2021 Impact Factor: 1.588 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7539983987808228, "perplexity": 2720.0766260471605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00177.warc.gz"} |
# Prize Winner, Part 1
You do not need to draw a Venn Diagram or list numbers every time you want to calculate conditional probability. Let’s develop a formula for conditional probability.
Recall that the last example asked us to find the probability of a person selecting a multiple of five, given that they selected an even number. We found this probability to be $\frac{1}{5}$.
Using probability notation, this would be $P(B|A)$, where:
B: the event of “selecting a multiple of five”
A: the event of “selecting an even number”
The “given that” condition limits the number of elements we can choose from. “Given that they select an even number” involves the probability that an even number is chosen. This is $P(A)$. In this example, that is $\frac{15}{30}=\frac{1}{2}$.
The “selecting a multiple of five” event is not just $P(B)$, or $\frac{6}{30}$. It is not this because we can only select a multiple of five from the even number set. This is $P(B\cap A)$, the probability of selecting a number that is a multiple of five and is an even number. In this example, that is $\frac{3}{30}$, or $\frac{1}{10}$. (A word of caution here. Do not jump to using the Multiplication Rule, $P(A\cap B)=P(A)\cdot P(B)$, which is only for independent events. More on this later.)
Now, consider the way that $P(B|A)$ is read. It is read “the probability of B given A.” Because of the condition of the situation, the probability of B is really $P(B\cap A)$, from the reasoning above. The “given A” is $P(A)$, from above. Thus, $P(B|A)=\frac{P(B\cap A)}{P(A)}$.
Checking with the example above, $P(B|A)=\frac{1/10}{1/2}=\frac{1}{5}$.
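The whole computation can be cross-checked in a few lines of Python (assuming, as the counts above imply, that the numbers are drawn from 1 through 30):

```python
from fractions import Fraction

sample = range(1, 31)                      # the 30 numbers implied by the counts above
A = {n for n in sample if n % 2 == 0}      # "selecting an even number": 15 of 30
B = {n for n in sample if n % 5 == 0}      # "selecting a multiple of five": 6 of 30

p_A = Fraction(len(A), 30)                 # 15/30 = 1/2
p_B_and_A = Fraction(len(B & A), 30)       # 3/30  = 1/10
print(p_B_and_A / p_A)                     # 1/5, the conditional probability P(B|A)
```

The division in the last line is exactly the formula above: the joint probability divided by the probability of the condition.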
## Cryptology ePrint Archive: Report 2004/268
Untraceability of Wang-Fu Group Signature Scheme
Zhengjun Cao and Lihua Liu
Abstract: Wang et al. recently proposed an improved version of the Tseng-Jan group signature scheme${}^{[1]}$. In this paper, we show that the scheme is untraceable via a simple attack.
Category / Keywords: cryptographic protocols / group signature scheme, full-anonymity, full-traceability.
# fraction calculator
## Fraction calculation
### Function : fraction
#### Summary :
Fraction calculation online with steps and details of the calculations: simplification, addition, subtraction, multiplication, division, raise to the power.
#### Description :
A fraction is a number that is written as follows: a/b, where a and b are integers and b is not zero.
The fraction function is used as a fraction calculator: it performs fraction calculations online, simplifies a fraction into its irreducible form, and carries out the usual arithmetic operations, returning the result as a reduced fraction.
The online fraction calculator allows calculation of the sum of fractions: to calculate the sum of fractions such as 1/4 and 4/5, enter fraction(1/4+4/5); after calculation, the result 21/20 is obtained.
The fraction calculation also applies to fractions containing letters: to calculate the sum of fractions with letters, such as a/b and c/d, enter fraction(a/b+c/d); after calculation, we get the result (a*d+c*b)/(b*d).
To add two fractions, the calculator reduces them to the same denominator, then adds the numerators; the result is returned as an irreducible fraction. All steps of the fraction sum calculation are returned by the calculator.
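The procedure just described (common denominator, add the numerators, reduce via the GCD) can be sketched in a few lines of Python — an illustration of the method, not the site's actual code:

```python
from math import gcd

def add_fractions(a, b, c, d):
    """Return a/b + c/d as an irreducible fraction (numerator, denominator)."""
    num = a * d + c * b          # numerators over the common denominator b*d
    den = b * d
    g = gcd(num, den)            # greatest common divisor reduces the result
    return num // g, den // g

print(add_fractions(1, 4, 4, 5))  # (21, 20), matching fraction(1/4+4/5)
```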
It is possible to add fractions not only to each other but also to other algebraic expressions; after calculation, the result is returned as a fraction.
# Online fraction subtraction
The fraction calculator is able to calculate the fraction difference online. To calculate the difference between 4/5 and 1/5, enter fraction(4/5-1/5); after calculation, we get the result 3/5.
The calculator can also be used on symbolic fractions: to calculate the difference between the fractions a/b and c/d, enter fraction(a/b-c/d); after calculation, we get the result (a*d-c*b)/(b*d).
To subtract two fractions, the calculator reduces them to the same denominator and subtracts the numerators, simplifying the fraction before returning the result. Details of the calculations that produced the fraction difference are returned by the calculator.
You can subtract fractions from each other, but also from other algebraic expressions; after calculation, the result is returned as a fraction.
# Product fraction online
Multiplying fractions online with the fraction calculator is also possible, and works on numeric fractions: to calculate the product of fractions such as 4/3 and 2/5, enter fraction(4/3*2/5); after calculation, we get the result 8/15.
Symbolic calculation of a product of fractions is also among the features of the online fraction calculator: to calculate the product of a/b and c/d, enter fraction(a/b*c/d); after calculation, we get the result (a*c)/(b*d).
To calculate the product of fractions, the calculator multiplies the numerators together and the denominators together, then simplifies the resulting fraction. The calculator also returns the steps of the calculation that led to the product.
It is possible to multiply fractions with each other, but also with other algebraic expressions; after calculation, the result is returned as a fraction.
# Division of fractions online
The fraction calculator allows you to divide fractions online: to calculate the ratio of the fractions 4/3 and 2/5, enter fraction((4/3)/(2/5)); after calculation, the result 10/3 is obtained.
The online calculator can be used on symbolic fractions: to calculate the ratio of a/b and c/d, enter fraction((a/b)/(c/d)); after calculation, we get the result (a*d)/(b*c).
# Reverse a fraction
The online fraction calculator allows the calculation of the inverse of a fraction: to calculate the inverse of the fraction 7/2, enter fraction(1/(7/2)); after calculation, you get the result 2/7.
The fraction calculator also applies to literal fractional expressions: to invert the fraction a/b, enter fraction(1/(a/b)); after calculation, you get the result b/a.
# Online fraction simplification
The fraction calculator allows you to reduce a fraction online, i.e. to put the fraction in the form of an irreducible fraction.
To simplify a fraction such as 54/28, enter fraction(54/28); after calculation, the result 27/14 is given in the form of an irreducible fraction.
To simplify a fraction, the calculator uses different methods of calculation; in particular, it relies on the GCD when the numerator and denominator are integers. The calculator computes the GCD to determine the simplified, irreducible fraction, and returns each calculation step.
# Fraction raise to the power
Raising a fraction to a power can be done quickly with the fraction calculator. It is possible to raise a fraction to an integer power and get the result of that calculation in the form of an irreducible fraction.
For example, to calculate (4/5)^3, enter fraction((4/5)^3), after calculation, the result 64/125 is returned.
The fraction calculator, available via the fraction function, makes it possible to raise fractions to a power online.
# Changing a decimal to a fraction
The fraction calculator can convert a decimal to a fraction: to put the decimal 0.4 in the form of an irreducible fraction, enter fraction(0.4); after calculation, we get the irreducible fraction 2/5.
# Calculate online with fractions of pi

Calculating with fractions of pi is also a feature of the calculator: to compute the sum of pi/3 and pi/6 as an irreducible fraction of pi, enter fraction(pi/3+pi/6); after calculation, we get the result in the form of the irreducible fraction pi/2.
# Combining operations with fractions
Fraction calculations can combine several operations: it is possible to add, multiply, and divide fractions in the same calculation. The result is returned as a simplified fraction.
It is possible to combine all these operations and apply them to algebraic expressions containing fractions.
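All of the worked examples above can be cross-checked with Python's exact-arithmetic `fractions` module — shown here as an independent verification of the expected results, not as the calculator's own implementation:

```python
from fractions import Fraction

assert Fraction(1, 4) + Fraction(4, 5) == Fraction(21, 20)   # addition
assert Fraction(4, 5) - Fraction(1, 5) == Fraction(3, 5)     # subtraction
assert Fraction(4, 3) * Fraction(2, 5) == Fraction(8, 15)    # multiplication
assert Fraction(4, 3) / Fraction(2, 5) == Fraction(10, 3)    # division
assert Fraction(54, 28) == Fraction(27, 14)                  # simplification
assert Fraction(4, 5) ** 3 == Fraction(64, 125)              # raise to a power
assert Fraction("0.4") == Fraction(2, 5)                     # decimal to fraction
```

Like the online calculator, `Fraction` always stores and compares values in lowest terms, which is why 54/28 compares equal to 27/14.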
#### Syntax :
fraction(expression), where expression is the fraction or algebraic expression to calculate; the result returned is given as an irreducible fraction.
#### Examples :
Calculate online with fraction (fraction calculator)
# From D3-brane to AdS_5-Schwarzschild
I'm trying to read this paper. Of course there are lots of things about it that are beyond me, but for now it's only the calculations in the main body of the paper that I'm trying, more or less, to understand. So I'm going through it to find the things that I don't get.
For now, I have problem with the beginning of the section 2:
The metric of the near-extremal D3-brane is
## ds_{10}^2=H^{-\frac 1 2} (-h dt^2+d\vec x ^2)+H^{\frac 1 2} (\frac{dr^2}{h}+d\Omega_5^2) \ \ \ \ \ \ \ \left( H=1+(\frac{L}{r})^4 \ \ , \ \ h=1-(\frac{r_H}{r})^4 \right) \ \ \ \ \ (1)##
Here ##\vec x = (x, y, z)## are the spatial coordinates along which the D3-brane is extended and ##d\Omega_5^2## is the standard metric on the five-sphere ##S^5## with unit radius. The near-horizon limit consists of “dropping the 1” from H. Then the metric is ##AdS_5##-Schwarzschild,
##ds_5^2=(\frac{r}{L})^2 (-hdt^2+d\vec x^2)+(\frac{L}{r})^2 \frac{dr^2}{h} \ \ \ \ \ \ \ \ \ \ (2)##
times the metric for an ##S^5## of constant radius L.
I don't understand how he got equation (2) from equation (1) by just "dropping the 1 from H"! Because when I do that, I get:
## (\frac r L)^2 ( -h dt^2+ d \vec x ^2)+(\frac L r)^2 ( \frac{dr^2}{h}+d\Omega_5^2) ##
And this is really a mystery to me: how does that ## d\Omega_5^2 ## vanish and reappear as an overall multiplicative factor? This is really strange because in equation (1), the "radius" of the ##S^5## is ## H^{\frac 1 2} ##, but in the near-horizon limit it would involve other coordinates too, actually their differentials, which is nonsense because terms like ## dt^2 d\Omega_5^2 ## would appear in the metric. What is the author doing here?
Thanks
#### Ben Niehoff
I think there might be a typo in (1) and it should be
$$ds^2 = H^{-1/2} (-h dt^2 + d \vec x^2) + H^{1/2} (\frac{dr^2}{h} + r^2 d\Omega^2)$$
Then you get Schwarzschild-AdS x S^5 as the near-horizon limit. You should see that the S^5 now gets a constant radius equal to the AdS radius.
In (2), Gubser also drops the S^5 part of the metric, because it will no longer play a role. (There are fancier situations where it does play a role...if you allow your fields to depend on the S^5 coordinates then things can quickly become a mess.)
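Spelling out that point (my own quick check, not a line from the paper): after dropping the 1, ##H \to (L/r)^4##, so

$$H^{-\frac 1 2} = \left(\frac{r}{L}\right)^2, \qquad H^{\frac 1 2} = \left(\frac{L}{r}\right)^2, \qquad H^{\frac 1 2}\, r^2\, d\Omega_5^2 = \left(\frac{L}{r}\right)^2 r^2\, d\Omega_5^2 = L^2\, d\Omega_5^2,$$

and the remaining terms reproduce the ##AdS_5##-Schwarzschild metric (2), with the ##S^5## factored out at constant radius ##L##.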
Likes ShayanJ
# Duration of a Floating Rate Note
#### girishkhare
##### New Member
Hi David,
In FRM handbook, it is given that the duration of the Floating Rate Note immediately after the rate adjust is zero and the duration in the intermediate period is time left till next rate readjustment. In other words, suppose the note readjusts the coupon based on LIBOR every six months, say on January 1 and July 1. On January 1, the duration of the Floating Rate Note would be zero while the duration on February 1 would be equal to five months.
Duration is the average time one has to wait till the payment is received. If the duration is zero, it would mean that the whole payment should be received immediately. However, this is obviously not the case. I am getting confused on the meaning of zero-duration of a floating rate note. Similarly, if it is February 1 and the duration of the note is five months. Even though the time till maturity is long, say 10 years, how can the duration be equal to 5 months?
Any help on this?
Girish
#### David Harper CFA FRM
##### David Harper CFA FRM
Staff member
Subscriber
Hi Girish,
This fascinated me, also, as I was preparing for Saturday's webinar b/c we reviewed an interest rate swap pricing problem (see #3 at http://www.bionicturtle.com/forum/viewreply/6344/)
… and my sub-question included: what is the swap's duration.
Where I tried to be careful to say "in practice, we round down the duration of the FRN to zero" (i.e., the IRS = long a fixed plus short an FRN)
Starting with your final point: my preferred way of saying Macaulay duration, that I got from Sanjay Nawalkha @ http://www.fixedincomerisk.com/ is "weighted average maturity of bond, where weights are PV of cash flows." In which case, the FRN has Mac duration ~ time to next coupon because all of the subsequent cash flows have a weight of zero (as they do not contribute to price). So this is closely related to the idea that a FRN must price at par immediately upon coupon settlement. Some further discussion on this phenomenon and a very brief XLS "proof" here
So, FRN prices at par at the moment of settlement, such that FRN duration approximately (~=) time to next coupon. In the webinar, I said "round down to zero" b/c an FRM question (for example) would typically assume FRN duration = 0, but unless it is immediately before settlement that is strictly incorrect (although my example shows how minimal the error is).
So, I disagree, technically, with "the duration of the Floating Rate Note immediately after the rate adjust is zero" because at that point in time, the next coupon is essentially fixed as the rate has already been determined -- zero duration cannot be true as there is a bit of price risk until the next reset. (It is just as wrong as saying a six month zero fixed coupon bond has 0 duration.) The Mac duration is nearer to 0.4xx years (< 0.5 years).
... but this is why I was worried about presenting it: in my example, the FRN duration is nearer to 0.5 but I "rounded down" to zero (and the impact on swap DV01 was only $2).

hope that helps, David

#### girishkhare

##### New Member

Thanks a lot David. BT rocks.

#### bluekaktus

##### New Member

Hi David,

FRNs can also have negative duration too?

#### David Harper CFA FRM

##### David Harper CFA FRM

Staff member

Subscriber

Hi bluekaktus,

In my opinion, no, unless you stretch the definition of FRN. By FRN, I assume a principal repayment such that the instrument prices to par at the next coupon. On the other hand, an interest-only (IO) tranche generally does have negative duration: the underlying pool has non-negative duration, such that structuring can create inverse floating tranches (of the sort recently in the news about Freddie Mac http://www.propublica.org/article/freddy-mac-mortgage-eisinger-arnold ) and floating tranches. If a two-tranche structure can create a tranche with duration greater than the underlying average maturity (e.g., the inverse floaters), which is like leveraging duration, then, by definition, the other tranche must have negative duration.

Thanks,

#### jcb05

##### New Member

I am curious how a margin on top of the floating rate coupon might affect the instrument's duration. For example, if the instrument is priced at a deep discount because of a wide spread. If I had a floating rate instrument with a coupon of 1M LIBOR +350 bps resetting monthly, I have been told I could think about this as a 1M LIBOR floater, which has a duration of .083 years (1 month), in addition to a bond with a fixed 3.5% coupon with the same term. Is that right?

My next question is a little more specific, though in the same vein. If I were to have a credit card account with a floating rate coupon and a large spread to the index, and it pays off entirely each month, aka pays no interest, how should I think about the duration?

As it is a floater, I would expect the duration to be no longer than the 1 month reset period, but the large spread to the index would add duration as it can be treated as a fixed rate portion. But, since none of the cashflows are being driven by the coupon, because the current balance is paid in full each month, it seems like this should also be ignored. I know this is a pretty specific question, but any help, or resources toward which you could point me, would be greatly appreciated.

Thanks!

J

#### Praveen_India

##### Member

(Quoting David's reply above:) "So, I disagree, technically, with 'the duration of the Floating Rate Note immediately after the rate adjust is zero' because at that point in time, the next coupon is essentially fixed as the rate has already been determined ... The Mac duration is nearer to 0.4xx years (< 0.5 years)."
Hi David,
As mentioned by Girish in the beginning:
"In FRM handbook, it is given that the duration of the Floating Rate Note immediately after the rate adjust is zero and the duration in the intermediate period is time left till next rate readjustment."
However you seem to have disagreed.
It's true, right? Because when the coupon is reset based on some reference, say LIBOR, there is no interest rate risk, as the coupon matches market rates perfectly at that particular time. But when market rates start moving after the coupon has been set, the interest rate risk sets in and the duration is the time until the next coupon reset.
Thanks,
Praveen
#### David Harper CFA FRM
##### David Harper CFA FRM
Staff member
Subscriber
Hi @Praveen_India yes, strictly i disagree with that: if it were true, then at what instant (what moment in time) would the duration shift from zero to "time left till next rate readjustment?" This thought experiment, IMO, shows the fallacy. The duration of the FRN is always time to next reset; so it is converging on zero as the settlement approaches. But as soon as the coupon pays (or really, what we mean is: as soon as the next coupon is determined), then at that moment when the next cash flow is already decided, then interest rate risk is created. So, to me, if (say) it's a semi-annual FRN, the duration is highest immediately after settlement (or at settlement, if you like), when duration is ~ six months, then declining toward zero. Then "snapping back up" to six months at the next coupon, etc. I hope that helps,
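This "sawtooth" behavior can be sketched numerically (a toy model with made-up inputs — a 2.0 coupon already fixed and a flat 4% yield — not a BT spreadsheet): since the FRN reprices to par at the next reset, its value today is just par plus the known coupon discounted over the time t to reset, so its effective duration comes out near t, not zero:

```python
def frn_price(y, t, coupon=2.0, par=100.0):
    # After the coupon is fixed, an FRN is worth (par + known coupon)
    # discounted over the time t remaining to the next reset, because
    # it reprices to par at that reset.
    return (par + coupon) / (1.0 + y) ** t

y, h, t = 0.04, 1e-4, 0.5           # 4% yield, small bump, six months to reset
p0 = frn_price(y, t)
eff_dur = (frn_price(y - h, t) - frn_price(y + h, t)) / (2 * h * p0)
print(round(eff_dur, 3))            # ~0.481, i.e. near t, converging to 0 as t -> 0
```

Re-running with smaller t shows the duration shrinking toward zero, then "snapping back up" when a new six-month period begins.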
#### Praveen_India
##### Member
Yes, true. Thanks for the explanation David.
#### sharman.jamie
##### Member
I'm looking at floating rate bonds and I think you also need to take into account the spread. For instance, the duration of a floating rate bond with a spread of 1% over the index (e.g. LIBOR) is equivalent to the duration of any fixed cashflows (i.e. ones on or before the refix date, like you have mentioned) plus the duration of a fixed coupon bond with a 1% coupon going out to maturity. It sounds trivial, but given rates are so low, this can certainly make a difference.
#### lRRAngle
##### Member
Hi David,
I know it's been stale for a while but can you answer the question Sharman.Jamie raised?
Thanks
#### David Harper CFA FRM
##### David Harper CFA FRM
Staff member
Subscriber
Hi @lRRAngle I don't think it's a question, but rather a comment. (I can't actually respond necessarily to every comment. The forum is meant to be a collaborative resource.) I agree with it, in part, although I have not taken the time to model or research it. The part I disagree with is where sharman appears to count the principal twice. I'm sure this is covered in some text somewhere, but my quick thought is that, for example, if you have a 10-year floater with index (e.g., LIBOR) plus a 1.0% margin, then this is equivalent to a floater plus a stream of 1.0% coupons, but not again the final principal (which, after all, is already included to price the floater to par).
I do agree that this margin, to the extent it is additive to the discount rate, impacts the duration and renders it non-negative, but I'm thinking the impact is small (because it's only the incremental coupons). For a 10-year stream at +1.0% on a $100.0 notional, when discounted at 2.0%, I get a PV of $8.98; i.e., a floater + 1.0% might be worth $108.98. Then if I reprice just the +1% stream at 3.0% (i.e., a +100 basis point shock), the value of the stream drops to $8.53. That's only a $0.45 drop, which admittedly is 0.45 years on a par-priced bond. So, in theory, I think I do agree. At the same time, the impact seems to be much less than if we mistakenly treated the spread as its own bond (with principal). I think that's why I believe that I've read in Fabozzi somewhere (don't quote me please) that the key assumption, which permits ignoring the index--or really I think it's rounding down--is a constant spread added to the index.
My quick back-of-the-envelope would tend to justify nullifying this, or perhaps running the calculations to show how this adds maybe less than one year of duration, if the spread is constant. But again, I think the key secondary assumption is that the discount rate is different from (less than) the sum of the index and spread. If we wanted to assume the appropriate discount rate is approximately the index plus the spread (e.g., for risk), then I think we are back to pricing at par: certainly it is easy to show that, if we discount the variable flows at the same rate used to determine the cash flows, we price to par at each coupon. But all in, without doing the research, my instinct is that theoretically this does add something to the duration, under the assumptions. On the other hand, realistically, there is a measure called spread duration (to account for a varying spread). Ultimately I would want to model this to be comfortable with how it is treated, because I can think of arguments on both sides, actually. Thanks,
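The back-of-the-envelope above can be checked with a short script (a sketch of mine, assuming annual coupons and a flat discount rate, which are not stated in the post):

```python
# Check of the numbers above: value the +1.0% spread stream (10 annual
# coupons of $1.00 on $100 notional, no principal) at a 2.0% discount
# rate, then reprice after a +100 bp shock to 3.0%.
# Assumptions (mine, for illustration): annual coupons, flat discount rate.

def annuity_pv(coupon, rate, years):
    """PV of a level coupon stream (no principal), discounted annually."""
    return sum(coupon / (1 + rate) ** t for t in range(1, years + 1))

pv_base = annuity_pv(1.0, 0.02, 10)     # ~8.98
pv_shocked = annuity_pv(1.0, 0.03, 10)  # ~8.53
drop = pv_base - pv_shocked             # ~0.45, i.e., ~0.45 years on a par bond

print(round(pv_base, 2), round(pv_shocked, 2), round(drop, 2))  # 8.98 8.53 0.45
```

This reproduces the $8.98 and $8.53 figures, and the $0.45 drop, matching the "roughly half a year of extra duration" conclusion.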
Last edited:
#### lRRAngle
##### Member
Great, and wow, you are fast to respond! This makes sense and is very interesting. Your answer raised another question for me: what kind of instrument would generate a negative duration? I am sure there are a number of instruments, but I am curious what characteristics would make it such.
Many thanks
#### sharman.jamie
##### Member
Yes, I did intend it as just a comment, simply because I had been doing some duration calculations and found that the spread should be taken into account (at least when comparing my own calculations with Bloomberg's calculated OAS1 duration).
I'm glad you brought up spread01 (i.e., the risk that a credit spread like the Z-spread will change) because that is relevant, but it is not the spread I was referring to here (the fixed coupon, in addition to a floating coupon, that corporate bonds add on to compensate investors), although I think you knew that! In any case, it is not my intention to be pedantic or add confusion; this is not asked in the FRM, just something I noticed when dealing with these 'in the wild'.
#### David Harper CFA FRM
##### David Harper CFA FRM
Staff member
Subscriber
@sharman.jamie Yes, thank you! Good to know that you did take into account the spread on the floating rate note. I'd be very interested in the calculations (in general) that you used, if it's possible to share. I was thinking, as mentioned above, to parse the floater into (i) a par floater (i.e., cash flows at the discount rate plus principal = par) plus (ii) the stream of "spread coupons" without the principal. So on a quick back-of-the-envelope, I was getting about +0.5 years of duration on a 1.0% spread over ten years, but I'd be keen to know if there is a better approach. Thank you!
#### sharman.jamie
##### Member
The way I did it was to model the whole bond, including the floating legs and the spread, then use the numerical approach: blip both yield and LIBOR up and down, adjusting the floating coupons, then reprice to produce the effective duration. I'd be really interested if you had some way to use the numerical Macaulay/modified duration formula to produce something similar.
I don't advocate the Bloomberg OAS1 approach at all. It looks like they just put the fixed + floating coupons into the Macaulay duration equation as-is, and as such produce a vastly too-large duration figure, basically as if all the coupons were fixed. http://www.treasurer.ca.gov/cdiac/webinars/2012/20120215/presentation.pdf.
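The blip-and-reprice procedure described above might be sketched roughly as follows (a simplified model of mine, not sharman's actual implementation: flat curve and annual periods, so the floating legs always reprice to par and only the fixed spread stream carries rate sensitivity):

```python
# Rough sketch of the bump-and-reprice ("blip") approach described above.
# Simplifying assumptions (mine): a flat rate curve and annual periods, so
# when the curve moves, the floating coupons reset with it and the floating
# part always reprices to par; only the fixed spread stream is rate-sensitive.

def price_floater(rate, spread, years, face=100.0):
    """Full price: floating coupons (= rate) plus fixed spread, discounted at rate."""
    coupons = sum((rate + spread) * face / (1 + rate) ** t
                  for t in range(1, years + 1))
    principal = face / (1 + rate) ** years
    return coupons + principal

def effective_duration(rate, spread, years, bump=0.0001):
    p0 = price_floater(rate, spread, years)
    p_up = price_floater(rate + bump, spread, years)    # blip rate AND coupons up
    p_down = price_floater(rate - bump, spread, years)  # ...and down
    return (p_down - p_up) / (2 * p0 * bump)

# A pure floater (zero spread) has ~0 duration at a reset date; a 1.0% spread
# over 10 years at a 2.0% rate adds roughly 0.43 years.
print(round(effective_duration(0.02, 0.0, 10), 3))
print(round(effective_duration(0.02, 0.01, 10), 3))
```

Because the floating coupons are bumped along with the discount rate, the floating part contributes nothing, which is why the result lands near the ~0.5-year figure discussed above rather than Bloomberg's all-fixed treatment.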
#### sharman.jamie
##### Member
Also meant to add: you can see this effect on US172967KC44 (if you have access to a terminal).
Number of years to refix date = 0.13
Numerical duration = 0.14 (i.e., slightly larger because of the 1.31% spread)
BBG OAS1/modified duration = 2.48
#### jjking
##### New Member
Hi @David Harper CFA FRM
Would you please be able to redo this example with inverse floaters?
Such as: a 3-year inverse floater with semi-annual coupons. So in the instance of a direct floater, the duration resets to 6 months at each coupon.
What would the likely duration be for each of these over time? Would it increase as the settlement approached?
Best Wishes
Hi @Praveen_India Yes, strictly I disagree with that: if it were true, then at what instant (what moment in time) would the duration shift from zero to "time left till the next rate readjustment"? This thought experiment, IMO, shows the fallacy. The duration of the FRN is always the time to the next reset, so it is converging on zero as the settlement approaches. But as soon as the coupon pays (or really, what we mean is: as soon as the next coupon is determined), then at that moment, when the next cash flow is already decided, interest rate risk is created. So, to me, if (say) it's a semi-annual FRN, the duration is highest immediately after settlement (or at settlement, if you like), when duration is ~six months, then declining toward zero, then "snapping back up" to six months at the next coupon, etc. I hope that helps,
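The sawtooth pattern described above (duration equals time to the next reset, decaying to zero and then snapping back) can be sketched as:

```python
# Sketch of the FRN duration "sawtooth" described above: at any time t, the
# duration is just the time remaining until the next coupon reset, so it
# decays linearly to zero and snaps back to the full reset interval.
# Assumption (mine): semi-annual resets, time measured in years.

def frn_duration(t, reset_interval=0.5):
    """Time (in years) from t until the next reset date."""
    elapsed = t % reset_interval
    return reset_interval - elapsed

# Immediately after a reset the duration is ~ six months...
print(frn_duration(0.0))            # 0.5
# ...decaying toward zero as the next reset approaches...
print(round(frn_duration(0.4), 2))  # 0.1
# ...then snapping back up to six months at the reset itself.
print(frn_duration(0.5))            # 0.5
```

Plotting this function over a few years would show exactly the declining-then-snapping pattern the post describes.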
Last edited:
#### Nicole Seaman
Staff member
Subscriber
Hello @jjking
As the forum is getting extremely busy before the exam, I just wanted to recommend using the search function here in the forum. A search of "inverse floaters" brings up a great deal of information that has already been discussed in the forum. If you do not find answers, I'm sure someone will be able to help you here. David's time becomes stretched VERY thin right before the exam, so I want to make sure that everyone is utilizing the search function before asking questions.
Thank you,
Nicole
https://homework.cpm.org/category/MN/textbook/cc1mn/chapter/7/lesson/7.1.6/problem/7-61
7-61.
Darnell is designing a new game. He will have 110 different-colored blocks in a bag. While a person is blindfolded, he or she will reach in and pull out a block. The color of the block determines the prize, according to Darnell’s sign:
blue → small toy
purple → hat
green → large stuffed animal
1. If he wants players to have a 60% probability of winning a small toy, how many blue blocks should he have?
Since picking a blue block gives a small toy prize, Darnell can consider the probability of picking a blue block as the probability of winning a small toy.
If he wants players to have a 60% probability of winning a small toy, then 60% of the 110 blocks in the bag should be blue.
66 blue blocks
2. If he wants players to have a 10% probability of winning a large stuffed animal, how many green blocks should he have?
See part (a).
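The proportion arithmetic for both parts can be sketched as follows (my own check, not part of the original hint):

```python
# Each prize probability equals (blocks of that color) / (total blocks), so
# the required count is probability x total. Part (a): 60% of 110 blocks;
# part (b) follows the same reasoning with 10%.
total_blocks = 110
blue_blocks = round(0.60 * total_blocks)   # 66, matching part (a)
green_blocks = round(0.10 * total_blocks)  # 11
print(blue_blocks, green_blocks)  # 66 11
```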
https://byjus.com/questions/name-the-branch-of-science-which-explains-force-and-displacement/ | # Name the branch of Science which explains force and displacement
Mechanics is the branch of science that explains how a body behaves under the effects of force and displacement, and how it in turn affects its environment. It includes kinematics, projectiles, circular motion, uniform and non-uniform motion, relative velocity, Newton's laws of motion, the law of gravitation, centre of mass, collisions, rotational motion, and fluid mechanics.
https://kluedo.ub.uni-kl.de/frontdoor/index/index/searchtype/collection/id/15997/rows/20/doctypefq/doctoralthesis/facetNumber_author_facet/all/sortfield/year/sortorder/desc/start/0/author_facetfq/Seiferling%2C+Thomas/docId/4380
## Recursive Utility and Stochastic Differential Utility: From Discrete to Continuous Time
• In this thesis, mathematical research questions related to recursive utility and stochastic differential utility (SDU) are explored. First, a class of backward equations under nonlinear expectations is investigated: existence and uniqueness of solutions are established, and the issues of stability and discrete-time approximation are addressed. It is then shown that backward equations of this class naturally appear as a continuous-time limit in the context of recursive utility with nonlinear expectations. Then, the Epstein-Zin parametrization of SDU is studied. The focus is on specifications with both relative risk aversion and elasticity of intertemporal substitution greater than one. A concave utility functional is constructed and a utility gradient inequality is established. Finally, consumption-portfolio problems with recursive preferences and unspanned risk are investigated. The investor's optimal strategies are characterized by a specific semilinear partial differential equation. The solution of this equation is constructed by a fixed-point argument, and a corresponding efficient and accurate method to calculate optimal strategies numerically is given.